repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count
---|---|---|---|---|---|---|---|---|---|---|---
explosion/spaCy | machine-learning | 13,652 | No compatible packages found for v3.8.2 of spaCy | It seems like the recently pushed `3.8.2` version has some issues downloading models.
```
python -m spacy download en_core_web_md
✘ No compatible package found for 'en-core-web-md' (spaCy v3.8.2)
```
Here's my system info.
```
C:\Users\victim\AppData\Local\Programs\Python\Python312\Lib\site-packages\spacy\util.py:910: UserWarning: [W095] Model 'en_core_web_md' (3.7.1) was trained with spaCy v3.7.2 and may not be 100% compatible with the current version (3.8.2). If you see errors or degraded performance, download a newer compatible model or retrain your custom model with the current spaCy version. For more details and available updates, run: python -m spacy validate
warnings.warn(warn_msg)
============================== Info about spaCy ==============================
spaCy version 3.8.2
Location C:\Users\victim\AppData\Local\Programs\Python\Python312\Lib\site-packages\spacy
Platform Windows-11-10.0.22631-SP0
Python version 3.12.6
Pipelines en_core_web_md (3.7.1)
```
Issue fix: Just make it en_core_web_md instead of en-core-web-md | closed | 2024-10-04T09:35:19Z | 2024-11-04T00:03:06Z | https://github.com/explosion/spaCy/issues/13652 | [] | HydraDragonAntivirus | 1 |
matplotlib/matplotlib | matplotlib | 28,907 | [Bug]: completely freezes | ### Bug summary
"When using PyCharm (regardless of the version) in debug mode or starting matplotlib.pyplot.plot in the Python console, the process completely freezes, and I can only force it to end."
### Code for reproduction
```Python
import matplotlib
matplotlib.use('tkagg')
import matplotlib.pyplot as plt
import numpy as np
plt.imshow(np.zeros((10, 10)))
plt.show()
```
### Actual outcome
The process freezes; this happens with any version of PyCharm.
### Expected outcome
nothing
### Additional information
_No response_
### Operating system
_No response_
### Matplotlib Version
3.9.*
### Matplotlib Backend
_No response_
### Python version
_No response_
### Jupyter version
_No response_
### Installation
pip | closed | 2024-09-29T14:50:21Z | 2024-10-30T01:16:24Z | https://github.com/matplotlib/matplotlib/issues/28907 | [
"status: needs clarification"
] | name-used | 17 |
dmlc/gluon-nlp | numpy | 1,020 | HybridBeamSearch broken | ## Description
HybridBeamSearch tests break after https://github.com/apache/incubator-mxnet/pull/16836.
Prior to that change, the tests didn't fail because MXNet never gave the operator a chance to verify that the types are correct.
### Error Message
```
tests/unittest/test_sequence_sampler.py <class 'test_sequence_sampler.test_beam_search.<locals>.RNNDecoder'>
RNNDecoder 2 1 3 0 1.0 1
terminate called after throwing an instance of 'dmlc::Error'
what(): [05:56:03] src/operator/numpy/linalg/./../../tensor/../elemwise_op_common.h:135: Check failed: assign(&dattr, vec.at(i)): Incompatible attr in node hybridbeamsearchsampler0_identity0 at 0-th output: expected int32, got float32
Stack trace:
[bt] (0) /home/ubuntu/.pyenv/versions/3.7.3/lib/python3.7/site-packages/mxnet/libmxnet.so(+0x30a67b) [0x7f154ef8567b]
[bt] (1) /home/ubuntu/.pyenv/versions/3.7.3/lib/python3.7/site-packages/mxnet/libmxnet.so(+0x31ad42) [0x7f154ef95d42]
[bt] (2) /home/ubuntu/.pyenv/versions/3.7.3/lib/python3.7/site-packages/mxnet/libmxnet.so(+0x33ee9c) [0x7f154efb9e9c]
[bt] (3) /home/ubuntu/.pyenv/versions/3.7.3/lib/python3.7/site-packages/mxnet/libmxnet.so(+0x35804f6) [0x7f15521fb4f6]
Fatal Python error: Aborted
Current thread 0x00007f156ffa0700 (most recent call first):
File "/home/ubuntu/.pyenv/versions/3.7.3/lib/python3.7/site-packages/mxnet/_ctypes/ndarray.py", line 170 in __call__
File "/home/ubuntu/.pyenv/versions/3.7.3/lib/python3.7/site-packages/mxnet/gluon/block.py", line 1020 in _call_cached_op
File "/home/ubuntu/.pyenv/versions/3.7.3/lib/python3.7/site-packages/mxnet/gluon/block.py", line 1148 in forward
File "/home/ubuntu/.pyenv/versions/3.7.3/lib/python3.7/site-packages/mxnet/gluon/block.py", line 693 in __call__
File "/home/ubuntu/projects/gluon-nlp/tests/unittest/test_sequence_sampler.py", line 280 in test_beam_search
File "/home/ubuntu/.pyenv/versions/3.7.3/lib/python3.7/site-packages/_pytest/python.py", line 176 in pytest_pyfunc_call
File "/home/ubuntu/.local/lib/python3.7/site-packages/pluggy/callers.py", line 187 in _multicall
File "/home/ubuntu/.local/lib/python3.7/site-packages/pluggy/manager.py", line 81 in <lambda>
File "/home/ubuntu/.local/lib/python3.7/site-packages/pluggy/manager.py", line 87 in _hookexec
File "/home/ubuntu/.local/lib/python3.7/site-packages/pluggy/hooks.py", line 289 in __call__
File "/home/ubuntu/.pyenv/versions/3.7.3/lib/python3.7/site-packages/_pytest/python.py", line 1445 in runtest
File "/home/ubuntu/.pyenv/versions/3.7.3/lib/python3.7/site-packages/_pytest/runner.py", line 126 in pytest_runtest_call
File "/home/ubuntu/.local/lib/python3.7/site-packages/pluggy/callers.py", line 187 in _multicall
File "/home/ubuntu/.local/lib/python3.7/site-packages/pluggy/manager.py", line 81 in <lambda>
File "/home/ubuntu/.local/lib/python3.7/site-packages/pluggy/manager.py", line 87 in _hookexec
File "/home/ubuntu/.local/lib/python3.7/site-packages/pluggy/hooks.py", line 289 in __call__
File "/home/ubuntu/.local/lib/python3.7/site-packages/flaky/flaky_pytest_plugin.py", line 306 in <lambda>
File "/home/ubuntu/.pyenv/versions/3.7.3/lib/python3.7/site-packages/_pytest/runner.py", line 229 in from_call
File "/home/ubuntu/.local/lib/python3.7/site-packages/flaky/flaky_pytest_plugin.py", line 307 in call_runtest_hook
File "/home/ubuntu/.local/lib/python3.7/site-packages/flaky/flaky_pytest_plugin.py", line 129 in call_and_report
File "/home/ubuntu/.pyenv/versions/3.7.3/lib/python3.7/site-packages/_pytest/runner.py", line 96 in runtestprotocol
File "/home/ubuntu/.pyenv/versions/3.7.3/lib/python3.7/site-packages/_pytest/runner.py", line 81 in pytest_runtest_protocol
File "/home/ubuntu/.local/lib/python3.7/site-packages/flaky/flaky_pytest_plugin.py", line 92 in pytest_runtest_protocol
File "/home/ubuntu/.local/lib/python3.7/site-packages/pluggy/callers.py", line 187 in _multicall
File "/home/ubuntu/.local/lib/python3.7/site-packages/pluggy/manager.py", line 81 in <lambda>
File "/home/ubuntu/.local/lib/python3.7/site-packages/pluggy/manager.py", line 87 in _hookexec
File "/home/ubuntu/.local/lib/python3.7/site-packages/pluggy/hooks.py", line 289 in __call__
File "/home/ubuntu/.pyenv/versions/3.7.3/lib/python3.7/site-packages/_pytest/main.py", line 264 in pytest_runtestloop
File "/home/ubuntu/.local/lib/python3.7/site-packages/pluggy/callers.py", line 187 in _multicall
File "/home/ubuntu/.local/lib/python3.7/site-packages/pluggy/manager.py", line 81 in <lambda>
File "/home/ubuntu/.local/lib/python3.7/site-packages/pluggy/manager.py", line 87 in _hookexec
File "/home/ubuntu/.local/lib/python3.7/site-packages/pluggy/hooks.py", line 289 in __call__
File "/home/ubuntu/.pyenv/versions/3.7.3/lib/python3.7/site-packages/_pytest/main.py", line 240 in _main
File "/home/ubuntu/.pyenv/versions/3.7.3/lib/python3.7/site-packages/_pytest/main.py", line 196 in wrap_session
File "/home/ubuntu/.pyenv/versions/3.7.3/lib/python3.7/site-packages/_pytest/main.py", line 233 in pytest_cmdline_main
File "/home/ubuntu/.local/lib/python3.7/site-packages/pluggy/callers.py", line 187 in _multicall
File "/home/ubuntu/.local/lib/python3.7/site-packages/pluggy/manager.py", line 81 in <lambda>
File "/home/ubuntu/.local/lib/python3.7/site-packages/pluggy/manager.py", line 87 in _hookexec
File "/home/ubuntu/.local/lib/python3.7/site-packages/pluggy/hooks.py", line 289 in __call__
File "/home/ubuntu/.pyenv/versions/3.7.3/lib/python3.7/site-packages/_pytest/config/__init__.py", line 92 in main
File "/home/ubuntu/.pyenv/versions/3.7.3/bin/pytest", line 8 in <module>
zsh: abort (core dumped) pytest --color=yes -s tests/unittest/test_sequence_sampler.py -k
```
## To Reproduce
### Steps to reproduce
1. `pip install --upgrade --pre 'mxnet==1.6.0b20191122'`
2. `pytest --color=yes -s tests/unittest/test_sequence_sampler.py -k 'test_beam_search[HybridBeamSearchSampler-True]'`
@junrushao1994 | open | 2019-11-25T05:58:34Z | 2019-11-25T08:50:48Z | https://github.com/dmlc/gluon-nlp/issues/1020 | [
"bug"
] | leezu | 1 |
Lightning-AI/pytorch-lightning | data-science | 19,972 | Increase MlflowLogger parameter value length limit | ### Description & Motivation
Currently, the MlflowLogger parameter value length is limited to 250 (#5893), which is too short for values such as class names or data augmentation descriptions.
Recent MLflow versions support up to 6000 (mlflow/mlflow#9709).
### Pitch
MLflow uses `mlflow.utils.validation.MAX_PARAM_VAL_LENGTH` to validate parameter value length.
`mlflow.utils.validation.MAX_PARAM_VAL_LENGTH` has existed since mlflow >= 1.0.
I believe we can use it to reliably truncate parameter values without checking the MLflow version.
```python
class MLFlowLogger(Logger):
@rank_zero_only
def log_hyperparams(self, params: Union[Dict[str, Any], Namespace]) -> None: # type: ignore[override]
...
from mlflow.utils.validation import MAX_PARAM_VAL_LENGTH
# Truncate parameter values to MAX_PARAM_VAL_LENGTH characters.
params_list = [Param(key=k, value=str(v)[:MAX_PARAM_VAL_LENGTH]) for k, v in params.items()]
...
```
### Alternatives
_No response_
### Additional context
_No response_
cc @borda | open | 2024-06-13T01:36:39Z | 2024-06-13T01:37:00Z | https://github.com/Lightning-AI/pytorch-lightning/issues/19972 | [
"feature",
"needs triage"
] | jhjo432 | 0 |
jazzband/django-oauth-toolkit | django | 777 | access token detail page in admin very slow | We upgraded django-oauth-toolkit recently and now can't open the access token detail page in the admin, because source_refresh_token is not in raw_id_fields and its select widget can be a very large list. | closed | 2020-01-19T07:48:17Z | 2020-02-07T05:24:59Z | https://github.com/jazzband/django-oauth-toolkit/issues/777 | [] | Yiling-J | 0
joke2k/django-environ | django | 314 | Add elasticsearch7 to search scheme | closed | 2021-09-01T07:30:07Z | 2021-09-06T08:31:11Z | https://github.com/joke2k/django-environ/issues/314 | [
"enhancement"
] | sergeyklay | 1 |
|
ets-labs/python-dependency-injector | asyncio | 105 | Rename AbstractCatalog to DeclarativeCatalog (with backward compatibility) | closed | 2015-11-09T22:04:08Z | 2015-11-10T08:42:55Z | https://github.com/ets-labs/python-dependency-injector/issues/105 | [
"feature",
"refactoring"
] | rmk135 | 0 |
|
django-import-export/django-import-export | django | 1,270 | ManyToMany fields are not checked for changes in skip_row | **Describe the bug**
If a ManyToMany field of the resource contains changes and `skip_unchanged = True`, those changes are not checked and the rows are therefore ignored.
**To Reproduce**
Steps to reproduce the behavior:
1. Set `skip_unchanged = True` in your ModelResource
2. ModelResource should contain a Django ManyToManyField, like `categories` in a `Book`
3. `categories` field is in the list of `fields` to import
4. Import
5. Change categories in some Book
6. Import again, no changes are detected
**Versions (please complete the following information):**
- Django Import Export: 2.5
- Python 3.6
- Django 2.2.20
**Expected behavior**
When importing, the changes to ManyToMany fields should be detected even if `skip_unchanged` is set to `True`
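A minimal sketch of the setup described above (model, app, and field names are assumptions for illustration; only `skip_unchanged` and the ManyToMany `categories` field follow the report):
```python
from import_export import resources

from myapp.models import Book  # assumed model with a ManyToManyField named "categories"


class BookResource(resources.ModelResource):
    class Meta:
        model = Book
        fields = ("id", "name", "categories")
        skip_unchanged = True  # rows whose only difference is in "categories" end up skipped
        report_skipped = True
```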
| closed | 2021-04-16T12:54:39Z | 2023-04-12T13:41:10Z | https://github.com/django-import-export/django-import-export/issues/1270 | [
"bug"
] | manelclos | 4 |
plotly/dash-core-components | dash | 229 | add localstorage/sessionstorage support | It would be better if dash core components had localStorage or sessionStorage support.
The code is simple but useful.
And from my point of view it is a better way to share state than a hidden div.
Here is some quick experiment code:
https://github.com/Benjamin-Shengming/local_storage | closed | 2018-07-07T06:11:11Z | 2018-10-03T15:06:47Z | https://github.com/plotly/dash-core-components/issues/229 | [] | Benjamin-Shengming | 1 |
django-oscar/django-oscar | django | 3,994 | Supported version 3.x | Hi all,
according to the main page there is currently no supported 3.x version.
Is this correct?
Is there a roadmap for future versions?
Also, DataCash does not seem to be alive any more.
https://docs.oscarcommerce.com/en/latest/index.html
Many thanks | closed | 2022-10-18T15:11:20Z | 2023-06-16T10:13:30Z | https://github.com/django-oscar/django-oscar/issues/3994 | [] | Chrescht | 1 |
aimhubio/aim | data-visualization | 3,233 | corrupted index db . Deleting the index db to avoid errors .. | ## ❓Question
Hi, I am following the instructions in a fresh python environment from the page [https://aimstack.readthedocs.io/en/latest/using/remote_tracking.html#server-side-setup](https://aimstack.readthedocs.io/en/latest/using/remote_tracking.html#server-side-setup). However, when I click on the UI link, a strange exception is caught and I see the following output:
```
ubuntu@ip-172-31-42-100:~/server-tracking$ aim up
Running Aim UI on repo `<Repo#-2658956876120156914 path=/home/ubuntu/server-tracking/.aim read_only=None>`
Open http://127.0.0.1:43800
Press Ctrl+C to exit
Corrupted index db. Deleting the index db to avoid errors. Please run `aim storage reindex command to restore optimal performance.`
```
After I run `aim storage reindex` and restart the UI, nothing changes; the same error appears when I open the UI in the browser.
I doubt this is expected behavior. Could it be related to some known bug?
### Env info
Python: 3.10.12
aim: 3.25.0 | open | 2024-10-03T03:27:47Z | 2024-11-07T16:20:01Z | https://github.com/aimhubio/aim/issues/3233 | [
"type / question"
] | merryHunter | 5 |
CorentinJ/Real-Time-Voice-Cloning | pytorch | 946 | Getting stuck on "Loading the encoder" bit (not responding) | Hey, I just installed everything. I'm having trouble where whenever I try to input anything it loads up successfully and then gets caught on "Loading the encoder" and then says it's not responding. I have an RTX 3080. I've tried redownloading and using different pretrained.pt files for the encoder thing it's trying to load. | open | 2021-12-10T10:32:13Z | 2022-01-10T12:11:13Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/946 | [] | HirabayashiCallie | 2 |
keras-team/keras | tensorflow | 20,648 | tf.keras.models.Sequential | Hi, I have a problem:
model = tf.keras.models.Sequential([
    mobile_net,
    # ANN layer
    tf.keras.layers.Dense(1, activation='sigmoid')  # [0, 1] or [1, 0]
])
In new versions of tensorflow and keras it has a problem:
ValueError: Only instances of keras.Layer can be added to a Sequential model. Received: <tensorflow_hub.keras_layer.KerasLayer object at 0x77d2995e4680> (of type <class 'tensorflow_hub.keras_layer.KerasLayer'>)
What should I do? Can you solve it?
This is my code; take a look if you want!
Thanks

| closed | 2024-12-15T20:25:59Z | 2024-12-18T06:55:16Z | https://github.com/keras-team/keras/issues/20648 | [
"type:support"
] | moeinnm-99 | 9 |
ckan/ckan | api | 7,712 | [Guidance] Resource access authorization | ## CKAN version: 2.10
We have a requirement to enforce users to log-in to preview and download resources whilst keeping the datasets public i.e. Datasets must be publicly searchable and browsable but when a user goes to preview/download resources (of public or private Datasets), they should be redirected to the login page.
This should apply to API as well. E.g. `resource_show` should return 401 if user doesn't send a valid API key.
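For reference, the IAuthFunctions route mentioned below looks roughly like this (a sketch only; it gates the `resource_show` action at the action layer, which is exactly the limitation raised, since the resource views rely on other actions such as `package_show`):
```python
import ckan.plugins as plugins


def resource_show_auth(context, data_dict=None):
    # Deny anonymous access at the action layer; logged-in users pass through.
    if not context.get("user"):
        return {"success": False, "msg": "You must be logged in to access resources"}
    return {"success": True}


class RequireLoginPlugin(plugins.SingletonPlugin):
    plugins.implements(plugins.IAuthFunctions)

    def get_auth_functions(self):
        return {"resource_show": resource_show_auth}
```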
Are you able to guide us to implement this using CKAN's plugins toolkit? We looked at `IAuthFunctions` but it seems to operate at `action` layer rather than `views` i.e. we can add a custom auth function to `resource_show` but resource view seems to be using other actions under the hood (e.g. `package_show`), which we don't want to put behind authorization. | closed | 2023-07-25T05:08:02Z | 2023-07-25T13:17:51Z | https://github.com/ckan/ckan/issues/7712 | [] | IndikaUdagedara | 1 |
streamlit/streamlit | python | 10,101 | Support Multi-Tenant Folder Structure for Built-In Pages Navigation | ### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [X] I added a descriptive title and summary to this issue.
### Summary
Currently, Streamlit automatically generates a sidebar navigation based on the scripts in a pages/ folder. I’d like to extend this to support tenant-based folders, so that each authenticated user (tenant) sees only the pages within their designated folder. This would enable a seamless multi-tenant dashboard structure within a single Streamlit app.
### Why?
1. Multi-Tenant Support: In many SaaS scenarios, each tenant (or client/project) has unique pages or dashboards.
2. Scalability: Storing tenant-specific pages in dedicated folders would keep the codebase organized, while reusing common components across tenants.
3. Security & Personalization: Automatically generating the correct sidebar for each tenant ensures that users only see content relevant to their role or tenant.
### How?
- Extended Pages Folder: Introduce an optional directory structure such as:

      tenants/
        tenant1/
          page1.py
          page2.py
        tenant2/
          page1.py
          page2.py
- Automatic Side Nav: Streamlit would detect these sub-folders at runtime (similar to how it currently handles pages/) and generate a tenant-specific navigation based on authentication or session state.
- Role/Tenant Check: When a user logs in, Streamlit could read a tenant or role from st.session_state and render pages from the corresponding tenant folder only.
- Optional Config: Perhaps a parameter or config setting in config.toml (e.g., multi_tenant = true) that, when enabled, triggers this multi-folder scanning and nav generation.
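A minimal sketch of how the tenant-scoped navigation described above could be approximated with the current `st.Page` / `st.navigation` API (it assumes a login step has already stored the tenant name in `st.session_state`; folder and file names are illustrative):
```python
# streamlit_app.py (entrypoint)
from pathlib import Path

import streamlit as st

# Assume an earlier authentication step set this, e.g. st.session_state["tenant"] = "tenant1".
tenant = st.session_state.get("tenant", "tenant1")

# Collect only the pages belonging to this tenant's folder.
tenant_dir = Path("tenants") / tenant
pages = [st.Page(str(script)) for script in sorted(tenant_dir.glob("*.py"))]

# Build the sidebar navigation from the tenant-specific pages and run the selected one.
st.navigation(pages).run()
```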
### Additional Context
_No response_ | open | 2025-01-03T14:37:11Z | 2025-01-03T18:25:21Z | https://github.com/streamlit/streamlit/issues/10101 | [
"type:enhancement",
"feature:multipage-apps"
] | marwanelmo | 2 |
httpie/cli | api | 838 | Allow specifying default hostnames (instead of localhost) | This seems to be related to #215 and #180, but is not a pure duplicate:
I would like to be able to create an alias or shell function for accessing a certain rest API requiring authentication. I started with
function girder {
https --session=~/.girder_session.json girder.host.local.lan/api/v1/$1 ${@:2}
}
but my main problem is that I cannot change the HTTP method (e.g. to PUT or PATCH) this way. I would like to be able to have a different way of specifying the default host, so that I do not have to add a commandline parsing around httpie. And I do have some hope, since httpie *already* supports a default hostname, only that it's hardcoded to be localhost.
I see two potential solutions:
1. #215 contains a small change introducing an environment variable for the default host, which would solve my issue.
2. Another idea I had (more in the line of #180) would be to make the default hostname an optional part of the sessions. Maybe that could even be made to work with named sessions, by looking into a "default" host directory: `~/.httpie/sessions/default/<name>.json`
With both solutions, the above shell function could be turned into a simple alias passing the desired `--session` parameter. | open | 2020-01-18T10:02:31Z | 2023-12-12T12:33:04Z | https://github.com/httpie/cli/issues/838 | [
"needs product design"
] | hmeine | 4 |
xlwings/xlwings | automation | 1,904 | Returning a dataframe as an excel table from a UDF | #### OS (e.g. Windows 10 or macOS Sierra)
#### Versions of xlwings, Excel and Python (e.g. 0.11.8, Office 365, Python 3.7)
#### Describe your issue (incl. Traceback!)
I researched this extensively and could not find a way. It is straightforward to return a dynamic array, but is it possible to return an Excel table? I can do it in a macro, but not via an Excel function.
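For reference, the dynamic-array route mentioned above looks roughly like this (a hedged sketch using xlwings' UDF decorators; it spills the DataFrame as a range, not as a native Excel ListObject/table):
```python
import pandas as pd
import xlwings as xw


@xw.func
@xw.ret(expand="table")  # spill the DataFrame as a dynamic array, not a real Excel table
def get_table():
    # Illustrative data only.
    return pd.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6]})
```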
```python
# Your traceback here
```
#### Include a minimal code sample to reproduce the issue (and attach a sample workbook if required!)
```python
# Your code here
``` | closed | 2022-04-28T08:58:21Z | 2022-05-21T17:46:45Z | https://github.com/xlwings/xlwings/issues/1904 | [] | vasilopoulos | 3 |
Miserlou/Zappa | flask | 1,380 | Django App Times Out on Lambda (Runs Successfully Locally) |
## Context
The app runs fine locally but times out in Lambda. It is setup in a private subnet and can access the database as I was able to successfully run migrations through manage command on Zappa. I am even able to create a superuser through Zappa cli.
## Expected Behavior
When trying to access the admin page `...amazonaws.com/staging/admin` it should load the admin page. If I try to access `...amazonaws.com/staging/` it successfully loads then debug page because of invalid url path.
## Actual Behavior
**When trying to access the admin page `...amazonaws.com/staging/admin` it throws `{"message": "Endpoint request timed out"}`**
## Possible Fix
Remove all local apps from INSTALLED_APPS, change the database connection on RDS to Postgres only, and remove the Django GIS functionality. Remove the GIS-related library paths in settings.py (GEOS and GDAL).
These attempted fixes didn't work.
Next things: TRIM EVERYTHING (REDIS, CELERY, ETC). If this fails then try with a new project to see if there is an issue with DB Connection or VPC or Security Groups. Any sample projects I can try to deploy in my VPC?
## Steps to Reproduce
Sorry unable to provide right now. Will try to start a new project and see if I face the same issue.
## Your Environment
* Zappa version used: `0.45.1`
* Operating System and Python version: `OSX 10.13.2` `Python 3.6.1`
* Link to your project (optional):
* Your `zappa_settings.py`:
```
staging:
profile_name: myprofile
aws_region: us-east-1
s3_bucket: myproject-zappa
project_name: myproject-django
runtime": python3.6
django_settings: config.settings.zappa
keep_warm_expression: rate(8 minutes)
delete_s3_zip: false
timeout_seconds: 30
vpc_config:
SubnetIds:
- subnet-xxx
- subnet-xxx
SecurityGroupIds:
- sg-xxx
tags:
Stage: Staging
aws_environment_variables:
GDAL_DATA: "/var/task/gdal_data/"
``` | closed | 2018-02-06T22:21:16Z | 2021-09-22T14:31:34Z | https://github.com/Miserlou/Zappa/issues/1380 | [
"aws"
] | hammadzz | 4 |
seleniumbase/SeleniumBase | pytest | 2,635 | UC Mode not working on Windows Server 2022 | Last week my code worked fine, but after updating, my code couldn't bypass the Cloudflare bot check anymore. For information, I use Windows Server 2022.
This is my code:
```
import json
import time
import urllib.parse

from seleniumbase import SB


def you_message(text: str, out_type: str = 'json', timeout: int = 20):
"""Function to send a message and get results from YouChat.com
Args:
text (str): text to send
out_type (str): type of result (json, string). Defaults to 'json'.
timeout (int): timeout in seconds to wait for a result. Defaults to 20.
Returns:
str: response of the message
"""
qoted_text = urllib.parse.quote_plus(text)
result = {}
data = ""
with SB(uc=True) as sb:
sb.open(
f"https://you.com/api/streamingSearch?q={qoted_text}&domain=youchat")
timeout_delta = time.time() + timeout
stream_available = False
while time.time() <= timeout_delta:
try:
sb.assert_text("event: youChatIntent", timeout=8.45)
if 'error' in result:
result.pop('error')
data = sb.get_text("body pre")
break
except Exception:
pass
# Try to easy solve captcha challenge
try:
if sb.assert_element('iframe'):
sb.switch_to_frame("iframe")
sb.find_element(".ctp-checkbox-label", timeout=1).click()
# sb.save_screenshot('sel2.png') # Debug
except Exception:
result['error'] = 'Selenium was detected! Try again later. Captcha not solved automaticly.'
finally:
# Force exit from iframe
sb.switch_to_default_content()
if time.time() > timeout_delta:
# sb.save_screenshot('sel-timeout.png') # Debug
result['error'] = 'Timeout while getting data from Selenium! Try again later.'
res_message = ""
for line in data.split("\n"):
if line.startswith("data: {"):
json_data = json.loads(line[5:])
if 'youChatToken' in json_data:
res_message += json_data['youChatToken']
result['generated_text'] = res_message
if out_type == 'json':
return json.dumps(result)
else:
str_res = result['error'] if (
'error' in result) else result['generated_text']
return str_res
``` | closed | 2024-03-24T17:45:54Z | 2024-03-24T18:14:44Z | https://github.com/seleniumbase/SeleniumBase/issues/2635 | [
"invalid usage",
"UC Mode / CDP Mode"
] | zing75blog | 1 |
cupy/cupy | numpy | 8,375 | cupyx.scipy.map_coordinates recompiled when coordinates array shape changes | ### Description
The kernel used in [cupyx.scipy.map_coordinates](https://github.com/cupy/cupy/blob/cd1c7367b6666597fda0b62960781479c6e26b41/cupyx/scipy/ndimage/_interpolation.py#L252) is recompile each time the shape of the argument `coordinates` changes.
The shape of the input `coordinates` is provided to the function [_interp_kernels._get_map_kernel](https://github.com/cupy/cupy/blob/cd1c7367b6666597fda0b62960781479c6e26b41/cupyx/scipy/ndimage/_interp_kernels.py#L492) through the argument `yshape`. This function is decorated with `@cupy._util.memoize`, and because `yshape` is part of the memoization key, changing it causes a cache miss. This argument is actually not used in the CUDA kernel because we use `omit_in_coord=True`. As a result, the CUDA kernel is recompiled each time the shape of `coordinates` changes.
We should not provide the shape as input to `_interp_kernels._get_map_kernel`, so that the same kernel can be reused when the shape of the `coordinates` argument changes but its dimensionality does not.
### To Reproduce
```py
# Write the code here
```
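(The report leaves the repro empty; a hedged sketch of the call pattern that would exercise the cache miss described above, with illustrative sizes, might look like this:)
```python
import cupy as cp
import cupyx.scipy.ndimage as ndi

image = cp.random.rand(64, 64)

# Same dimensionality, different coordinate-array shapes: ideally the cached kernel
# would be reused, but each new shape currently triggers a recompilation.
for n in (10, 20, 30):
    coords = cp.random.rand(2, n) * 63
    ndi.map_coordinates(image, coords, order=1)
```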
### Installation
Wheel (`pip install cupy-***`)
### Environment
_No response_
### Additional Information
_No response_ | closed | 2024-06-12T16:33:45Z | 2024-06-19T08:39:26Z | https://github.com/cupy/cupy/issues/8375 | [
"cat:enhancement",
"contribution welcome"
] | martinResearch | 0 |
vitalik/django-ninja | rest-api | 329 | patch release with pydantic 1.9? | Hi,
any chance we could do a patch release with the fix to allow pydantic 1.9? | closed | 2022-01-18T12:13:49Z | 2022-01-20T15:28:49Z | https://github.com/vitalik/django-ninja/issues/329 | [] | davidszotten | 2 |
facebookresearch/fairseq | pytorch | 4,714 | What are the utilities of kaldi_initializer.py and kaldi_decoder.py scripts present inside the fairseq/examples/speech_recognition/kaldi folder? | ## ❓ Questions and Help
What are the utilities of kaldi_initializer.py and kaldi_decoder.py scripts present inside the fairseq/examples/speech_recognition/kaldi folder?
| open | 2022-09-11T11:17:26Z | 2022-09-11T11:17:26Z | https://github.com/facebookresearch/fairseq/issues/4714 | [
"question",
"needs triage"
] | mukherjeesougata | 0 |
wger-project/wger | django | 1,723 | Multi Value Measurements | ## Use case
Some measurements, e.g. blood pressure, consist of multiple values. While this could be entered into wger via 2 custom measurements, this feels wrong. Especially as the date picker in the web/app only allows setting a date, not a time (not sure if I should open a second request on that), so the measurements can't be linked together by timestamp. E.g. https://codeberg.org/toz/MediLog medilog is a nice example for these multi value measurements tracked via an app.
## Proposal
Extend the custom measurements to allow to define multiple values per measurement.
Optional: Add some of the common ones as selectable defaults. e.g. https://github.com/wger-project/wger/issues/875 lists already most common ones. | open | 2024-07-10T08:51:04Z | 2024-07-10T08:51:04Z | https://github.com/wger-project/wger/issues/1723 | [] | derpeter | 0 |
lorien/grab | web-scraping | 395 | Wrong Thread method for Python 3.9.0+ | ```
/lib/python3.10/site-packages/grab/spider/base_service.py", line 64, in is_alive
return self.thread.isAlive()
AttributeError: 'Thread' object has no attribute 'isAlive'. Did you mean: 'is_alive'?
```
Python 3.10, but I saw same problem in 3.9 | closed | 2022-06-16T08:14:05Z | 2022-12-08T03:22:53Z | https://github.com/lorien/grab/issues/395 | [] | notzeldon | 1 |
mongkok/fastapi-debug-toolbar | graphql | 39 | IP restrictions via ALLOWED_IPS not working | I tried to get ALLOWED_IPS to work and found that the test
```python
remote_addr in settings.ALLOWED_IPS
```
in https://github.com/mongkok/fastapi-debug-toolbar/blob/main/debug_toolbar/middleware.py#L25
always returns `False` because `remote_addr` is of type `str` and `ALLOWED_IPS` is a list of type `IPv4Address`.
For example, printing the variables in question:
```python
print(f"{remote_addr} in {settings.ALLOWED_IPS}: {remote_addr in settings.ALLOWED_IPS}")
print(f"{type(remote_addr)} {type(settings.ALLOWED_IPS[0])}")
```
gives the results:
```python
127.0.0.1 in [IPv4Address('127.0.0.1'), IPv4Address('1.2.3.4')]: False
<class 'str'> <class 'ipaddress.IPv4Address'>
```
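A minimal sketch of the kind of fix this suggests (parsing the client address before the membership test; function and parameter names are illustrative, and this is not necessarily how the maintainers resolved it):
```python
from ipaddress import IPv4Address, IPv6Address, ip_address
from typing import Iterable, Union


def ip_allowed(remote_addr: str, allowed_ips: Iterable[Union[IPv4Address, IPv6Address]]) -> bool:
    # Compare like with like: convert the string address before checking membership.
    return ip_address(remote_addr) in allowed_ips
```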
| closed | 2024-01-23T12:27:28Z | 2024-02-13T08:40:22Z | https://github.com/mongkok/fastapi-debug-toolbar/issues/39 | [] | MartinSchmidt123 | 1 |
encode/apistar | api | 652 | CLI arguments are not validated correctly. | This is related to #650 (which is about `--path`).
CLI arguments aren't enforced, or given sensible defaults.
The one I hit is `--format`. If you don't provide this there is no error until you get to the final "Valid schema" message. At that point you get a totally mysterious `KeyError`:
```
(apistar) ~/Desktop $ apistar validate --path schema.yml
Traceback (most recent call last):
...
File ".../apistar/cli.py", line 163, in validate
}[format]
KeyError: None
```
Cracking open `cli.py`, `format` there is the missing CLI argument. Sending `--format=openapi` resolves the issue.
`format` should be required, so as to provide a helpful error message.
Maybe it should default to `openapi`... | closed | 2019-03-25T15:04:59Z | 2020-06-04T17:09:29Z | https://github.com/encode/apistar/issues/652 | [] | carltongibson | 0 |
ansible/awx | automation | 15,104 | awx workflow_job_templates launch --wait command fails with ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response')) | ### Please confirm the following
- [X] I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).
- [X] I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates.
- [X] I understand that AWX is open source software provided for free and that I might not receive a timely response.
- [X] I am **NOT** reporting a (potential) security vulnerability. (These should be emailed to `security@ansible.com` instead.)
### Bug Summary
awx workflow_job_templates launch --wait command fails with ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
It should wait for the workflow to complete in Ansible Tower, but instead the remote connection is closed.
### AWX version
24.2.0
### Select the relevant components
- [ ] UI
- [ ] UI (tech preview)
- [ ] API
- [ ] Docs
- [ ] Collection
- [X] CLI
- [ ] Other
### Installation method
N/A
### Modifications
no
### Ansible version
Ansible Automation Platform Controller 4.3.6
### Operating system
redhat linux 8
### Web browser
_No response_
### Steps to reproduce
awx workflow_job_templates launch --wait command fails with ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
### Expected results
It should wait for the workflow to complete in Ansible Tower, but instead the remote connection is closed.
### Actual results
awx workflow_job_templates launch --wait command fails with ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
### Additional information
_No response_ | open | 2024-04-11T21:44:37Z | 2024-04-26T15:14:11Z | https://github.com/ansible/awx/issues/15104 | [
"type:bug",
"needs_triage",
"community"
] | akshat87 | 4 |
napari/napari | numpy | 7,380 | Investigate hard threading failure on a qt_dims_slider test run | ### 🐛 Bug Report
threading failure when executing qt_dim test on 1 CI run
Lines that concern me in the run:
```sh
File "/home/runner/work/napari/napari/napari/_qt/widgets/qt_dims_slider.py", line 603 in work
File "/home/runner/work/napari/napari/napari/_qt/widgets/qt_dims_slider.py", line 553 in run
```
### 💡 Steps to Reproduce
View https://github.com/napari/napari/actions/runs/11871954145/job/33085271415?pr=7378#step:14:405
### 💡 Expected Behavior
No hard failure
### 🌎 Environment
CI
### 💡 Additional Context
```sh
napari/_qt/widgets/_tests/test_qt_dims.py ..........Fatal Python error: Aborted
Thread 0x00007efff2ffe640 (most recent call first):
File "/opt/hostedtoolcache/Python/3.11.10/x64/lib/python3.11/threading.py", line 331 in wait
File "/opt/hostedtoolcache/Python/3.11.10/x64/lib/python3.11/threading.py", line 629 in wait
File "/home/runner/work/napari/napari/napari/_qt/widgets/qt_dims_slider.py", line 603 in work
File "/home/runner/work/napari/napari/napari/_qt/widgets/qt_dims_slider.py", line 553 in run
File "/home/runner/work/napari/napari/napari/conftest.py", line 582 in run_with_trace
Thread 0x00007f0063013640 (most recent call first):
File "/opt/hostedtoolcache/Python/3.11.10/x64/lib/python3.11/threading.py", line 327 in wait
File "/opt/hostedtoolcache/Python/3.11.10/x64/lib/python3.11/multiprocessing/queues.py", line 231 in _feed
File "/opt/hostedtoolcache/Python/3.11.10/x64/lib/python3.11/threading.py", line 982 in run
File "/opt/hostedtoolcache/Python/3.11.10/x64/lib/python3.11/threading.py", line 1045 in _bootstrap_inner
File "/opt/hostedtoolcache/Python/3.11.10/x64/lib/python3.11/threading.py", line 1002 in _bootstrap
Current thread 0x00007f00e440eb80 (most recent call first):
File "/home/runner/work/napari/napari/.tox/py311-linux-pyqt5-cov/lib/python3.11/site-packages/pytestqt/plugin.py", line 222 in _process_events
File "/home/runner/work/napari/napari/.tox/py311-linux-pyqt5-cov/lib/python3.11/site-packages/pytestqt/plugin.py", line 206 in pytest_runtest_teardown
File "/home/runner/work/napari/napari/.tox/py311-linux-pyqt5-cov/lib/python3.11/site-packages/pluggy/_callers.py", line 98 in _multicall
File "/home/runner/work/napari/napari/.tox/py311-linux-pyqt5-cov/lib/python3.11/site-packages/pluggy/_manager.py", line 120 in _hookexec
File "/home/runner/work/napari/napari/.tox/py311-linux-pyqt5-cov/lib/python3.11/site-packages/pluggy/_hooks.py", line 513 in __call__
File "/home/runner/work/napari/napari/.tox/py311-linux-pyqt5-cov/lib/python3.11/site-packages/_pytest/runner.py", line 242 in <lambda>
File "/home/runner/work/napari/napari/.tox/py311-linux-pyqt5-cov/lib/python3.11/site-packages/_pytest/runner.py", line 341 in from_call
File "/home/runner/work/napari/napari/.tox/py311-linux-pyqt5-cov/lib/python3.11/site-packages/_pytest/runner.py", line 241 in call_and_report
File "/home/runner/work/napari/napari/.tox/py311-linux-pyqt5-cov/lib/python3.11/site-packages/_pytest/runner.py", line 137 in runtestprotocol
File "/home/runner/work/napari/napari/.tox/py311-linux-pyqt5-cov/lib/python3.11/site-packages/_pytest/runner.py", line 113 in pytest_runtest_protocol
File "/home/runner/work/napari/napari/.tox/py311-linux-pyqt5-cov/lib/python3.11/site-packages/pluggy/_callers.py", line 103 in _multicall
File "/home/runner/work/napari/napari/.tox/py311-linux-pyqt5-cov/lib/python3.11/site-packages/pluggy/_manager.py", line 120 in _hookexec
File "/home/runner/work/napari/napari/.tox/py311-linux-pyqt5-cov/lib/python3.11/site-packages/pluggy/_hooks.py", line 513 in __call__
File "/home/runner/work/napari/napari/.tox/py311-linux-pyqt5-cov/lib/python3.11/site-packages/_pytest/main.py", line 362 in pytest_runtestloop
File "/home/runner/work/napari/napari/.tox/py311-linux-pyqt5-cov/lib/python3.11/site-packages/pluggy/_callers.py", line 103 in _multicall
File "/home/runner/work/napari/napari/.tox/py311-linux-pyqt5-cov/lib/python3.11/site-packages/pluggy/_manager.py", line 120 in _hookexec
File "/home/runner/work/napari/napari/.tox/py311-linux-pyqt5-cov/lib/python3.11/site-packages/pluggy/_hooks.py", line 513 in __call__
File "/home/runner/work/napari/napari/.tox/py311-linux-pyqt5-cov/lib/python3.11/site-packages/_pytest/main.py", line 337 in _main
File "/home/runner/work/napari/napari/.tox/py311-linux-pyqt5-cov/lib/python3.11/site-packages/_pytest/main.py", line 283 in wrap_session
File "/home/runner/work/napari/napari/.tox/py311-linux-pyqt5-cov/lib/python3.11/site-packages/_pytest/main.py", line 330 in pytest_cmdline_main
File "/home/runner/work/napari/napari/.tox/py311-linux-pyqt5-cov/lib/python3.11/site-packages/pluggy/_callers.py", line 103 in _multicall
File "/home/runner/work/napari/napari/.tox/py311-linux-pyqt5-cov/lib/python3.11/site-packages/pluggy/_manager.py", line 120 in _hookexec
File "/home/runner/work/napari/napari/.tox/py311-linux-pyqt5-cov/lib/python3.11/site-packages/pluggy/_hooks.py", line 513 in __call__
File "/home/runner/work/napari/napari/.tox/py311-linux-pyqt5-cov/lib/python3.11/site-packages/_pytest/config/__init__.py", line 175 in main
File "/home/runner/work/napari/napari/.tox/py311-linux-pyqt5-cov/lib/python3.11/site-packages/_pytest/config/__init__.py", line 201 in console_main
File "/home/runner/work/napari/napari/.tox/py311-linux-pyqt5-cov/lib/python3.11/site-packages/pytest/__main__.py", line 9 in <module>
File "/home/runner/work/napari/napari/.tox/py311-linux-pyqt5-cov/lib/python3.11/site-packages/coverage/execfile.py", line 208 in run
File "/home/runner/work/napari/napari/.tox/py311-linux-pyqt5-cov/lib/python3.11/site-packages/coverage/cmdline.py", line 858 in do_run
File "/home/runner/work/napari/napari/.tox/py311-linux-pyqt5-cov/lib/python3.11/site-packages/coverage/cmdline.py", line 681 in command_line
File "/home/runner/work/napari/napari/.tox/py311-linux-pyqt5-cov/lib/python3.11/site-packages/coverage/cmdline.py", line 970 in main
File "/home/runner/work/napari/napari/.tox/py311-linux-pyqt5-cov/bin/coverage", line 8 in <module>
Extension modules: pydantic.typing, pydantic.errors, pydantic.version, pydantic.utils, pydantic.class_validators, pydantic.config, pydantic.color, pydantic.datetime_parse, pydantic.validators, pydantic.networks, pydantic.types, pydantic.json, pydantic.error_wrappers, pydantic.fields, pydantic.parse, pydantic.schema, pydantic.main, pydantic.dataclasses, pydantic.annotated_types, pydantic.decorator, pydantic.env_settings, pydantic.tools, pydantic, psygnal._dataclass_utils, psygnal._exceptions, psygnal._mypyc, psygnal._weak_callback, psygnal._queue, psygnal._signal, psygnal._group, psygnal._group_descriptor, psygnal._evented_decorator, yaml._yaml, numpy._core._multiarray_umath, numpy._core._multiarray_tests, numpy.linalg._umath_linalg, psutil._psutil_linux, psutil._psutil_posix, markupsafe._speedups, scipy._lib._ccallback_c, numpy.random._common, numpy.random.bit_generator, numpy.random._bounded_integers, numpy.random._mt19937, numpy.random.mtrand, numpy.random._philox, numpy.random._pcg64, numpy.random._sfc64, numpy.random._generator, charset_normalizer.md, scipy.sparse._sparsetools, _csparsetools, scipy.sparse._csparsetools, scipy.linalg._fblas, scipy.linalg._flapack, scipy.linalg.cython_lapack, scipy.linalg._cythonized_array_utils, scipy.linalg._solve_toeplitz, scipy.linalg._decomp_lu_cython, scipy.linalg._matfuncs_sqrtm_triu, scipy.linalg.cython_blas, scipy.linalg._matfuncs_expm, scipy.linalg._decomp_update, scipy.sparse.linalg._dsolve._superlu, scipy.sparse.linalg._eigen.arpack._arpack, scipy.sparse.linalg._propack._spropack, scipy.sparse.linalg._propack._dpropack, scipy.sparse.linalg._propack._cpropack, scipy.sparse.linalg._propack._zpropack, scipy.sparse.csgraph._tools, scipy.sparse.csgraph._shortest_path, scipy.sparse.csgraph._traversal, scipy.sparse.csgraph._min_spanning_tree, scipy.sparse.csgraph._flow, scipy.sparse.csgraph._matching, scipy.sparse.csgraph._reordering, scipy._lib._uarray._uarray, scipy.special._ufuncs_cxx, scipy.special._ufuncs, scipy.special._specfun, scipy.special._comb, scipy.special._ellip_harm_2, scipy.fftpack.convolve, numba.core.typeconv._typeconv, numba._helperlib, numba._dynfunc, numba._dispatcher, numba.core.runtime._nrt_python, numba.np.ufunc._internal, numba.experimental.jitclass._box, skimage._shared.geometry, lxml._elementpath, lxml.etree, scipy.spatial._ckdtree, scipy._lib.messagestream, scipy.spatial._qhull, scipy.spatial._voronoi, scipy.spatial._distance_wrap, scipy.spatial._hausdorff, scipy.spatial.transform._rotation, pandas._libs.tslibs.ccalendar, pandas._libs.tslibs.np_datetime, pandas._libs.tslibs.dtypes, pandas._libs.tslibs.base, pandas._libs.tslibs.nattype, pandas._libs.tslibs.timezones, pandas._libs.tslibs.fields, pandas._libs.tslibs.timedeltas, pandas._libs.tslibs.tzconversion, pandas._libs.tslibs.timestamps, pandas._libs.properties, pandas._libs.tslibs.offsets, pandas._libs.tslibs.strptime, pandas._libs.tslibs.parsing, pandas._libs.tslibs.conversion, pandas._libs.tslibs.period, pandas._libs.tslibs.vectorized, pandas._libs.ops_dispatch, pandas._libs.missing, pandas._libs.hashtable, pandas._libs.algos, pandas._libs.interval, pandas._libs.lib, pandas._libs.ops, pandas._libs.hashing, pandas._libs.arrays, pandas._libs.tslib, pandas._libs.sparse, pandas._libs.internals, pandas._libs.indexing, pandas._libs.index, pandas._libs.writers, pandas._libs.join, pandas._libs.window.aggregations, pandas._libs.window.indexers, pandas._libs.reshape, pandas._libs.groupby, pandas._libs.json, pandas._libs.parsers, pandas._libs.testing, PyQt5.QtCore, 
scipy.ndimage._nd_image, _ni_label, scipy.ndimage._ni_label, skimage.draw._draw, scipy.optimize._group_columns, scipy.optimize._trlib._trlib, scipy.optimize._lbfgsb, _moduleTNC, scipy.optimize._moduleTNC, scipy.optimize._cobyla, scipy.optimize._slsqp, scipy.optimize._minpack, scipy.optimize._lsq.givens_elimination, scipy.optimize._zeros, scipy.optimize._highs.cython.src._highs_wrapper, scipy.optimize._highs._highs_wrapper, scipy.optimize._highs.cython.src._highs_constants, scipy.optimize._highs._highs_constants, scipy.linalg._interpolative, scipy.optimize._bglu_dense, scipy.optimize._lsap, scipy.optimize._direct, scipy.integrate._odepack, scipy.integrate._quadpack, scipy.integrate._vode, scipy.integrate._dop, scipy.integrate._lsoda, scipy.interpolate._fitpack, scipy.interpolate._dfitpack, scipy.interpolate._bspl, scipy.interpolate._ppoly, scipy.interpolate.interpnd, scipy.interpolate._rbfinterp_pythran, scipy.interpolate._rgi_cython, scipy.special.cython_special, scipy.stats._stats, scipy.stats._biasedurn, scipy.stats._levy_stable.levyst, scipy.stats._stats_pythran, scipy.stats._ansari_swilk_statistics, scipy.stats._sobol, scipy.stats._qmc_cy, scipy.stats._mvn, scipy.stats._rcont.rcont, scipy.stats._unuran.unuran_wrapper, vispy.visuals.text._sdf_cpu, PyQt5.QtGui, PyQt5.QtWidgets, PyQt5.QtTest, requests.packages.charset_normalizer.md, requests.packages.chardet.md, PyQt5.QtSvg, PyQt5.QtOpenGL, kiwisolver._cext, numcodecs.compat_ext, numcodecs.blosc, numcodecs.zstd, numcodecs.lz4, numcodecs._shuffle, numcodecs.jenkins, numcodecs.vlen, numcodecs.fletcher32, PIL._imaging, pydantic._hypothesis_plugin, skimage.transform._warps_cy, skimage.morphology._misc_cy, skimage.measure._ccomp, _skeletonize_3d_cy, skimage.morphology._skeletonize_3d_cy, skimage.morphology._skeletonize_cy, skimage.measure._pnpoly, skimage.morphology._convex_hull, skimage.morphology._grayreconstruct, skimage.morphology._extrema_cy, skimage.morphology._flood_fill_cy, skimage.morphology._max_tree, PIL._imagingmath, PIL._webp (total: 219)
F [3765/4316]py311-linux-pyqt5-cov: exit -6 (574.94 seconds) /home/runner/work/napari/napari> coverage run --parallel-mode -m pytest --color=yes --basetemp=/home/runner/work/napari/napari/.tox/py311-linux-pyqt5-cov/tmp --ignore tools --maxfail=5 --json-report --pystack-threshold=60 --pystack-args=--native-all --json-report-file=/home/runner/work/napari/napari/report-py311-linux-pyqt5-cov.json --basetemp=.pytest_tmp --save-leaked-object-graph pid=4521
py311-linux-pyqt5-cov: FAIL code -6 (583.98=setup[9.04]+cmd[0.00,574.94] seconds)
evaluation failed :( (584.30 seconds)
Error: The process '/usr/bin/bash' failed with exit code 250
The process '/usr/bin/bash' failed with exit code 250
``` | open | 2024-11-16T18:11:55Z | 2024-11-16T18:16:45Z | https://github.com/napari/napari/issues/7380 | [
"bug",
"tests",
"ci"
] | willingc | 0 |
laurentS/slowapi | fastapi | 176 | Add as package in arch user repository | Would somebody be so kind and ad this to the AUR? | open | 2023-11-25T14:55:51Z | 2023-11-25T14:55:51Z | https://github.com/laurentS/slowapi/issues/176 | [] | RottenSchnitzel | 0 |
jofpin/trape | flask | 181 | ngrok error | When executing the command python2 trape.py --url https://goodfon.com --port 8080 the error "Process Process-1:
Traceback (most recent call last):
File "/usr/lib/python2.7/multiprocessing/process.py", line 267, in _bootstrap
self.run()
File "/usr/lib/python2.7/multiprocessing/process.py", line 114, in run
self._target(*self._args, **self._kwargs)
File "/root/Репозитории/trape/core/ngrok.py", line 81, in start_ngrok
result = subprocess.check_output([str_ngrok, "http", port, '-log', hash + '.nlog'])
File "/usr/lib/python2.7/subprocess.py", line 223, in check_output
raise CalledProcessError(retcode, cmd, output=output)
CalledProcessError: Command '['./ngrok', 'http', '8080', '-log', 'cd50ed8.nlog']' returned non-zero exit status 1
[x] ERROR:
---=[ We can't connect with nGrok "
is thrown. Help, I don't know what to do.
| open | 2019-09-21T19:03:12Z | 2022-10-09T15:27:05Z | https://github.com/jofpin/trape/issues/181 | [] | JoKerJAYS | 6 |
darrenburns/pytest-clarity | pytest | 19 | pprintpp might be better for complex/nested cases | https://github.com/wolever/pprintpp#why-is-it-prettier | closed | 2021-03-12T08:13:43Z | 2021-06-11T19:06:00Z | https://github.com/darrenburns/pytest-clarity/issues/19 | [] | dz0 | 0 |
stitchfix/hamilton | numpy | 6 | Requirements to open source hamilton | - [x] examples to run/use hamilton
- [x] onboarding documentation
- [x] contributor documentation
- [x] contributor guidelines
- [x] scrub documentation
- [x] scrub commit history
- [x] Determine license
- [x] Legal sign off -- yes changed BSD-3 Clause Clear License
- [ ] push to pypi
- [ ] make repo public
- [ ] notify interested persons
- [ ] publish blog post | closed | 2021-05-02T18:54:39Z | 2021-09-02T18:28:01Z | https://github.com/stitchfix/hamilton/issues/6 | [] | skrawcz | 1
dynaconf/dynaconf | fastapi | 720 | [bug] LOAD_DOTENV_FOR_DYNACONF environment variable does not work | **Describe the bug**
When setting `LOAD_DOTENV_FOR_DYNACONF` prior to loading settings, variables from the `.env` file are ignored unless I specify `load_dotenv=True` in the class constructor.
**To Reproduce**
Steps to reproduce the behavior:
1. Having the following folder structure
<details>
<summary> Project structure </summary>
```
.
├── .env
├── config
│ ├── settings-staging.yaml
│ └── settings.yaml
└── src
└── app
└── app.py
```
</details>
2. Having the following config files:
<details>
<summary> Config files </summary>
**config/settings.yaml**
```yaml
default:
foo:
bar:
```
and
**config/settings-staging.yaml**
```yaml
staging:
foo:
bar: baz
```
and
**.env**
```dotenv
MERGE_ENABLED_FOR_DYNACONF=true
ENV_FOR_DYNACONF=staging
ROOT_PATH_FOR_DYNACONF=config
ENVIRONMENTS_FOR_DYNACONF=true
```
</details>
3. Having the following app code:
<details>
<summary> Code </summary>
**src/app/app.py**
```python
import os
from pathlib import Path
from dynaconf import Dynaconf
env = {key: val for key, val in os.environ.items() if 'DYNACONF' in key}
assert env == {}, f'Dynaconf environment not empty {env}'
os.environ['LOAD_DOTENV_FOR_DYNACONF'] = 'true'
settings_files = [
Path.cwd() / 'config' / 'settings.yaml',
Path.cwd() / 'config' / 'settings-staging.yaml']
assert all(p.exists() for p in settings_files), 'Some of the given config files do not exist'
settings = Dynaconf(
settings_module=settings_files,
# load_dotenv=True
)
assert settings.FOO.get('bar') == 'baz', 'Missing key FOO=bar in settings'
```
</details>
4. Executing under the following environment
<details>
<summary> Execution </summary>
```bash
# conda 4.11.0
$ conda create -p venv python=3.7.2
$ venv/bin/python /path/src/app.py
```
</details>
**Expected behavior**
As described previously I expected that Dynaconf would load environment from `.env` file. More precisely, the exact same behaviour when setting `load_dotenv=True` in class constructor. Instead, Dynaconf raises an `AttributeError: 'Settings' object has no attribute 'FOO'`.
**Environment (please complete the following information):**
- System Version: macOS 12.2.1 (21D62)
- Kernel Version: Darwin 21.3.0
- Dynaconf Version 3.1.7
| closed | 2022-02-22T07:46:05Z | 2022-06-02T19:21:51Z | https://github.com/dynaconf/dynaconf/issues/720 | [
"bug"
] | tokr94 | 1 |
matterport/Mask_RCNN | tensorflow | 2,993 | No module 'keras.engine' | Trying to run the demo file to get started. Running into this error in the mrcnn/model.py file. Has anyone seen this before? I can't seem to find keras.engine to install.
21 import keras.backend as K
22 import keras.layers as KL
---> 23 import keras.engine as KE
24 import keras.models as KM
26 from mrcnn import utils
ModuleNotFoundError: No module named 'keras.engine' | closed | 2023-09-28T15:23:33Z | 2025-02-25T23:08:50Z | https://github.com/matterport/Mask_RCNN/issues/2993 | [] | lrpalmer27 | 14 |
jupyter-incubator/sparkmagic | jupyter | 809 | [BUG] Default Docker container got broken | **Describe the bug**
I believe there was a new push of the image by datamechanics (5 days ago?) and now the sparkmagic docker image does not work anymore. If you log in to the spark-1 container and try ../bin/pyspark, I get this error:
Caused by: java.lang.IllegalArgumentException: Unrecognized Hadoop major version number: 3.1.0
at org.apache.hadoop.hive.shims.ShimLoader.getMajorVersion(ShimLoader.java:174)
at org.apache.hadoop.hive.shims.ShimLoader.loadShims(ShimLoader.java:139)
at org.apache.hadoop.hive.shims.ShimLoader.getHadoopShims(ShimLoader.java:100)
at org.apache.hadoop.hive.conf.HiveConf$ConfVars.<clinit>(HiveConf.java:368)
**To Reproduce**
git clone https://github.com/jupyter-incubator/sparkmagic sparkmagic-dev
cd sparkmagic-dev
docker compose up
then create a new PySpark notebook; even a simple command does not work, e.g. `data = [(1, 'John', 'Doe')]`
```
The code failed because of a fatal error:
Error sending http request and maximum retry encountered..
Some things to try:
a) Make sure Spark has enough available resources for Jupyter to create a Spark context.
b) Contact your Jupyter administrator to make sure the Spark magics library is configured correctly.
c) Restart the kernel.
```
**Expected behavior**
PySpark kernel should work
**Versions:**
- SparkMagic 20.4 or master
- Livy (if you know it)
- Spark 2.4.7
**Additional context**
I believe there was a new push of the image by datamechanics (5 days ago ?)
| open | 2023-03-13T17:34:48Z | 2023-04-05T15:28:01Z | https://github.com/jupyter-incubator/sparkmagic/issues/809 | [] | ltregan | 4 |
ultralytics/yolov5 | machine-learning | 13,462 | UnicodeDecodeError: 'gbk' codec can't decode byte 0x80 in position 233: illegal multibyte sequence | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and found no similar bug report.
### YOLOv5 Component
_No response_
### Bug
D:\0_yyyzt\AIM\yolo\yolov5> python train.py --weights yolov5s.pt --epochs 300 --batch-size 16 --workers 8 --data D:\0_yyyzt\AIM\yolo\datasets\zhengtu\zhengtu.yaml
train: weights=yolov5s.pt, cfg=, data=D:\0_yyyzt\AIM\yolo\datasets\zhengtu\zhengtu.yaml, hyp=data\hyps\hyp.scratch-low.yaml, epochs=300, batch_size=16, imgsz=640, rect=False, resume=False, nosave=False, noval=False, noautoanchor=False, noplots=False, evolve=None, evolve_population=data\hyps, resume_evolve=None, bucket=, cache=None, image_weights=False, device=, multi_scale=False, single_cls=False, optimizer=SGD, sync_bn=False, workers=8, project=runs\train, name=exp, exist_ok=False, quad=False, cos_lr=False, label_smoothing=0.0, patience=100, freeze=[0], save_period=-1, seed=0, local_rank=-1, entity=None, upload_dataset=False, bbox_interval=-1, artifact_alias=latest, ndjson_console=False, ndjson_file=False
github: up to date with https://github.com/ultralytics/yolov5
YOLOv5 v7.0-389-ge62a31b6 Python-3.11.9 torch-2.5.1+cpu CPU
hyperparameters: lr0=0.01, lrf=0.01, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=0.05, cls=0.5, cls_pw=1.0, obj=1.0, obj_pw=1.0, iou_t=0.2, anchor_t=4.0, fl_gamma=0.0, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.5, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, mosaic=1.0, mixup=0.0, copy_paste=0.0
TensorBoard: Start with 'tensorboard --logdir runs\train', view at http://localhost:6006/
COMET WARNING: Comet credentials have not been set. Comet will default to offline logging. Please set your credentials to enable online logging.
COMET INFO: Using 'D:\\0_yyyzt\\AIM\\yolo\\yolov5\\.cometml-runs' path as offline directory. Pass 'offline_directory' parameter into constructor or set the 'COMET_OFFLINE_DIRECTORY' environment variable to manually choose where to store offline experiment archives.
Traceback (most recent call last):
File "D:\0_yyyzt\AIM\yolo\yolov5\train.py", line 986, in <module>
main(opt)
File "D:\0_yyyzt\AIM\yolo\yolov5\train.py", line 688, in main
train(opt.hyp, opt, device, callbacks)
File "D:\0_yyyzt\AIM\yolo\yolov5\train.py", line 180, in train
loggers = Loggers(
^^^^^^^^
File "D:\0_yyyzt\AIM\yolo\yolov5\utils\loggers\__init__.py", line 153, in __init__
self.comet_logger = CometLogger(self.opt, self.hyp)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\0_yyyzt\AIM\yolo\yolov5\utils\loggers\comet\__init__.py", line 102, in __init__
self.data_dict = self.check_dataset(self.opt.data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\0_yyyzt\AIM\yolo\yolov5\utils\loggers\comet\__init__.py", line 252, in check_dataset
data_config = yaml.safe_load(f)
^^^^^^^^^^^^^^^^^
File "C:\Users\1\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\yaml\__init__.py", line 125, in safe_load
return load(stream, SafeLoader)
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\1\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\yaml\__init__.py", line 79, in load
loader = Loader(stream)
^^^^^^^^^^^^^^
File "C:\Users\1\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\yaml\loader.py", line 34, in __init__
Reader.__init__(self, stream)
File "C:\Users\1\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\yaml\reader.py", line 85, in __init__
self.determine_encoding()
File "C:\Users\1\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\yaml\reader.py", line 124, in determine_encoding
self.update_raw()
File "C:\Users\1\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\yaml\reader.py", line 178, in update_raw
data = self.stream.read(size)
^^^^^^^^^^^^^^^^^^^^^^
UnicodeDecodeError: 'gbk' codec can't decode byte 0x80 in position 233: illegal multibyte sequence
COMET INFO: The process of logging environment details (conda environment, git patch) is underway. Please be patient as this may take some time.
COMET INFO: ---------------------------------------------------------------------------------------
COMET INFO: Comet.ml OfflineExperiment Summary
COMET INFO: ---------------------------------------------------------------------------------------
COMET INFO: Data:
COMET INFO: display_summary_level : 1
COMET INFO: name : exp
COMET INFO: url : [OfflineExperiment will get URL after upload]
COMET INFO: Others:
COMET INFO: Name : exp
COMET INFO: offline_experiment : True
COMET INFO: Uploads:
COMET INFO: environment details : 1
COMET INFO: git metadata : 1
COMET INFO: installed packages : 1
COMET INFO:
COMET INFO: Still saving offline stats to messages file before program termination (may take up to 120 seconds)
COMET INFO: Begin archiving the offline data.
COMET INFO: To upload this offline experiment, run:
comet upload D:\0_yyyzt\AIM\yolo\yolov5\.cometml-runs\e9b6f27f962c4a25bfeb02399ccf699f.zip
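The traceback shows `yaml.safe_load` reading the dataset YAML with Windows' default `gbk` codec. A minimal sketch of a likely workaround (an assumption, not part of the original report; the path below is the reporter's own): re-save `zhengtu.yaml` as UTF-8, or read it with an explicit encoding:
```python
import yaml

# Open the dataset YAML explicitly as UTF-8 instead of the Windows default "gbk" codec.
with open(r"D:\0_yyyzt\AIM\yolo\datasets\zhengtu\zhengtu.yaml", encoding="utf-8") as f:
    data_config = yaml.safe_load(f)

print(data_config)
```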
### Environment
OS: Windows; Python 3.11.9; CPU only (no NVIDIA GPU)
### Minimal Reproducible Example
D:\0_yyyzt\AIM\yolo\yolov5> python train.py --weights yolov5s.pt --epochs 300 --batch-size 16 --workers 8 --data D:\0_yyyzt\AIM\yolo\datasets\zhengtu\zhengtu.yaml
### Additional
### Are you willing to submit a PR?
- [X] Yes I'd like to help by submitting a PR! | open | 2024-12-16T17:13:08Z | 2024-12-17T06:52:36Z | https://github.com/ultralytics/yolov5/issues/13462 | [
"bug",
"devops"
] | zhangsiying2001 | 2 |
matplotlib/mplfinance | matplotlib | 126 | Bug Report: min/max return NaN if nan in data, can use np.nanmin/np.nanmax instead. | **[From @char101](https://github.com/matplotlib/mplfinance/issues/116#issuecomment-623977064)**
> I have found one problem due to the usage of `min` and `max` on `ndarray` (https://github.com/matplotlib/mplfinance/blob/master/src/mplfinance/plotting.py#L424). If the array contains `nan` then both `min` and `max` will return `nan` so it should be replaced with `np.nanmin` and `np.nanmax`.
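For illustration, a tiny sketch (not from the linked comment) of the difference:
```python
import numpy as np

arr = np.array([1.0, np.nan, 3.0])
print(arr.min(), arr.max())             # nan nan  (NaN propagates)
print(np.nanmin(arr), np.nanmax(arr))   # 1.0 3.0  (NaN is ignored)
```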
| closed | 2020-05-05T10:51:07Z | 2020-06-07T23:45:01Z | https://github.com/matplotlib/mplfinance/issues/126 | [
"bug",
"released"
] | DanielGoldfarb | 10 |
Gozargah/Marzban | api | 1,637 | Invalid Request User ID Error (proxy/vless/inbound) | Hello,
I am using the Marzban panel (version v0.8.4) together with xray (version 1.8.24). The inbound settings are VLESS TCP Header, and when connecting to the server I run into errors that appear to be caused by an invalid request (invalid request user id).
Errors observed in the logs of my master server (Marzban):
```
2025/02/09 09:55:56 [Info] [2327410349] proxy/vless/inbound: firstLen = 0
2025/02/09 09:55:56 [Info] [2327410349] app/proxyman/inbound: connection ends > proxy/vless/encoding: failed to read request version > EOF
```
Errors observed in the log of one of the node servers:
```
2025/02/09 10:09:36 [Info] [25915429] proxy/freedom: dialing to tcp:142.250.200.132:443
2025/02/09 10:09:36 [Info] [25915429] transport/internet/tcp: dialing TCP to tcp:142.250.200.132:443
2025/02/09 10:09:36 [Info] [25915429] proxy/freedom: connection opened to tcp:www.google.com:443, local endpoint 45.86.229.247:58274, remote endpoint 142.250.200.132:443
2025/02/09 10:09:36 [Info] [25915429] proxy: CopyRawConn readv
2025/02/09 10:09:36 [Info] [3560550531] proxy/vless/inbound: firstLen = 152
2025/02/09 10:09:36 from 127.0.0.1:55316 rejected proxy/vless/encoding: invalid request user id
2025/02/09 10:09:36 [Info] [3560550531] app/proxyman/inbound: connection ends > proxy/vless/inbound: invalid request from 127.0.0.1:55316 > proxy/vless/encoding: invalid request user id
2025/02/09 10:09:36 [Info] [2094259789] app/proxyman/inbound: connection ends > proxy/vless/inbound: connection ends > context canceled
```
The services keep disconnecting frequently. I would really appreciate it if you could let me know how to solve this, if you have any idea.
Thank you. | open | 2025-02-09T10:14:59Z | 2025-03-17T02:33:07Z | https://github.com/Gozargah/Marzban/issues/1637 | [] | neilgleichner | 3 |
erdewit/ib_insync | asyncio | 615 | Dividends Payment Schedule | I am looking to pull IB's dividend projections for a stock/ETF. I can see this information when I hover over the "Dividend Yield %" column for the ticker in my TWS watchlist/monitor, but don't know how to pull this information through ib_insync .
For example (today's date = 13-Jul-2023):

| closed | 2023-07-13T18:46:08Z | 2023-07-23T09:04:35Z | https://github.com/erdewit/ib_insync/issues/615 | [] | atamkapoor | 1 |
biolab/orange3 | data-visualization | 6,466 | Plots in Reports and exporting get wrong x/y axis | **What's wrong?**
The scatter-plot when saved to a report or exported to a file looks different. Axis and annotation are wrong. Scaling of axis is wrong. The whole thing gets unusable as you can see on the screenshot below.
<img width="2056" alt="grafik" src="https://github.com/biolab/orange3/assets/32239392/c26c1467-2e1e-4be1-bf96-afb631e19d5f">
**How can we reproduce the problem?**
It can be reproduced very easily. Take or draw some data. Do a scatter-plot of two variables and save it to the report.
**What's your environment?**
- Operating system: Mac OS Ventura 13.4
- Orange version: 3.35.0
- How you installed Orange: stand alone
| closed | 2023-06-07T08:14:12Z | 2023-09-15T11:54:37Z | https://github.com/biolab/orange3/issues/6466 | [
"bug report"
] | WolframRinke | 4 |
yihong0618/running_page | data-visualization | 163 | [TODO] fix all alert | closed | 2021-08-17T08:54:20Z | 2022-01-07T05:19:48Z | https://github.com/yihong0618/running_page/issues/163 | [] | yihong0618 | 0 |
|
deepspeedai/DeepSpeed | deep-learning | 7,012 | nv-ds-chat CI test failure | The Nightly CI for https://github.com/deepspeedai/DeepSpeed/actions/runs/13230975186 failed.
| closed | 2025-02-07T00:26:40Z | 2025-02-10T22:30:14Z | https://github.com/deepspeedai/DeepSpeed/issues/7012 | [
"ci-failure"
] | github-actions[bot] | 0 |
lmcgartland/graphene-file-upload | graphql | 29 | No UI example found | We are trying to use this in our project not able to create a ui not even properly test.we need an documentation/ examples including ui | open | 2019-03-01T20:39:22Z | 2022-12-14T00:57:49Z | https://github.com/lmcgartland/graphene-file-upload/issues/29 | [] | amal-chandran | 8 |
keras-team/keras | pytorch | 21,078 | How Do I Track a List of Objects for Export? | I define an `EnsembleModel` class that is constructed from a list of other Keras models.
```
class EnsembleModel(keras.Model):
def __init__(
self,
models: Iterable[keras.Model],
reduce_fn: Callable = keras.ops.mean,
**kwargs):
super(EnsembleModel, self).__init__(**kwargs)
self.models = models
# self.model0 = models[0]
# self.model1 = models[1]
self.reduce_fn = reduce_fn
@tf.function(input_signature=[input_signature])
def call(
self,
input: Dict[Text, Any]) -> Any:
all_outputs = [keras.ops.reshape(model(input), newshape=(-1,)) for model in self.models]
output = self.reduce_fn(all_outputs, axis=0)
return output
averaging_model = EnsembleModel(models=[model0, model1])
```
I then wish to export the ensemble model:
```
averaging_model.export("export/1/", input_signature=[input_signature])
```
But I get an error on the export:
```
AssertionError: Tried to export a function which references an 'untracked' resource. TensorFlow objects (e.g.
tf.Variable) captured by functions must be 'tracked' by assigning them to an attribute of a tracked object or
assigned to an attribute of the main object directly. See the information below:
Function name = b'__inference_signature_wrapper___call___10899653'
Captured Tensor = <ResourceHandle(name="10671455", device="/job:localhost/replica:0/task:0/device:CPU:0",
container="localhost", type="tensorflow::lookup::LookupInterface", dtype and shapes : "[ ]")>
Trackable referencing this tensor = <tensorflow.python.ops.lookup_ops.StaticHashTable object at
0x7fd62d126990>
Internal Tensor = Tensor("10899255:0", shape=(), dtype=resource)
```
If I explicitly assign the models to variables in the constructor:
```
self.model0 = models[0]
self.model1 = models[1]
```
It works fine (even if I don't reference those variables anywhere else). But I want an instance of the `EnsembleModel` class to support an arbitrary list of models. How can I ensure the models are "tracked" so that I don't get an error on export? | open | 2025-03-21T02:25:34Z | 2025-03-24T05:45:11Z | https://github.com/keras-team/keras/issues/21078 | [
"type:support"
] | rlcauvin | 1 |
BayesWitnesses/m2cgen | scikit-learn | 212 | Can support native xgboost.core.Booster model? | closed | 2020-05-09T02:22:44Z | 2020-05-20T01:44:38Z | https://github.com/BayesWitnesses/m2cgen/issues/212 | [] | yuanjie-ai | 2 |
|
Farama-Foundation/PettingZoo | api | 896 | TypeError: aec_to_parallel_wrapper.render() got an unexpected keyword argument 'mode' | ### Question
I have recently been learning the MADDPG algorithm. After finishing training in the MPE environment in PettingZoo, I encountered the following problem when evaluating the algorithm:
`Traceback (most recent call last):
File "D:\PyCharm 2021.2.2\code\maddpg-pettingzoo-pytorch-master\evaluate.py", line 41, in <module>
frame_list.append(Image.fromarray(env.render(mode='rgb_array')))
TypeError: aec_to_parallel_wrapper.render() got an unexpected keyword argument 'mode'`
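For context, a minimal sketch (an assumption based on the newer PettingZoo/Gymnasium-style render API; the exact module version below is illustrative) of how rendering is requested without the `mode` keyword:
```python
from pettingzoo.mpe import simple_spread_v3  # module/version name is illustrative

# render_mode is chosen at environment creation time...
env = simple_spread_v3.parallel_env(render_mode="rgb_array")
observations, infos = env.reset()

# ...so render() takes no arguments and returns an RGB array here.
frame = env.render()
print(frame.shape)
env.close()
```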
How can I solve this problem? | closed | 2023-03-08T08:37:14Z | 2023-03-15T06:42:55Z | https://github.com/Farama-Foundation/PettingZoo/issues/896 | [
"question"
] | wagh311 | 2 |
d2l-ai/d2l-en | deep-learning | 2,522 | Apple M2 processor GPU in pytorch | This is a feature request, please correct me if I am wrong, but it seems like we don't have any support for Apple's GPU. I noticed that there is a pull request that hasn't been merged addressing the same issue https://github.com/d2l-ai/d2l-en/pull/2453#issue-1616863143
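For reference, a minimal sketch (not part of the original request) of how PyTorch already exposes Apple-silicon GPUs through the `mps` backend:
```python
import torch

# Fall back to CPU when the Metal Performance Shaders backend is unavailable.
device = (
    torch.device("mps")
    if torch.backends.mps.is_available()
    else torch.device("cpu")
)
x = torch.randn(3, 3, device=device)
print(x.device)
```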
I would like to volunteer to implement it for Apple silicon. Please let me know if there is an effort going on to address this problem @AnirudhDagar | open | 2023-06-30T08:04:20Z | 2023-08-28T09:03:23Z | https://github.com/d2l-ai/d2l-en/issues/2522 | [] | cx-olquinjica | 1 |
katanaml/sparrow | computer-vision | 9 | extraction Invalid sparrow key error | I already installed the application through the docker method and the normal streamlit run method and I don't know what I'm doing wrong but when I try to execute the extraction for a given document it gives me this error:
{
"error":"Invalid Sparrow key."
}
Also I can't delete or create any labels or groups in the setup section.
Thanks for all your work. | closed | 2023-06-06T03:47:24Z | 2024-03-20T12:00:37Z | https://github.com/katanaml/sparrow/issues/9 | [] | dani-lu | 8 |
PaddlePaddle/PaddleHub | nlp | 2,212 | wav2lip output error | The code being run is:
import paddlehub as hub
module = hub.Module(name="wav2lip")
face_input_path = "1.MP4"
audio_input_path = "1.wav"
module.wav2lip_transfer(face=face_input_path, audio=audio_input_path, output_dir='./transfer_result/', use_gpu=False)
Package versions:
paddlepaddle version: 2.4.1
paddlehub: 2.3.1
Python: 3.7.4 (also tried 3.9.16 with no effect)
OS: Windows 10
Every time module.wav2lip_transfer reaches 0/8 after printing "Model loaded" it gets stuck, and after a while the Python process crashes; the error says a value exceeds the int32 range (too large or too small). Please help take a look. For a while it suddenly started working and I don't know why; after reinstalling the environment it stopped working again.

| open | 2023-02-26T12:02:37Z | 2024-02-26T05:00:07Z | https://github.com/PaddlePaddle/PaddleHub/issues/2212 | [] | zjjzff123 | 2 |
docarray/docarray | fastapi | 1,152 | Dynamic class creation | **Is your feature request related to a problem? Please describe.**
In some scenarios users might want to dynamically create a class.
For example, in the search apps, data might be given through different sources, it might have a folder structure, and we are then responsible for converting it to a suitable docarray format, where this feature will come handy.
**Describe the solution you'd like**
Having a function that takes pairs of field names and their corresponding types and returns a class
**Describe alternatives you've considered**
Tried with [Pydantic's `create_model`](https://docs.pydantic.dev/usage/models/#dynamic-model-creation), and it works ok, but this doesn't return a `BaseDocument` so would be nice to have it handled inside docarray.
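For illustration, a tiny sketch of that Pydantic-style dynamic creation (the field names here are made up; as noted, the result is a plain `BaseModel`, not a `BaseDocument`):
```python
from pydantic import create_model

# Each field is given as (type, default); "..." marks a required field.
DynamicDoc = create_model("DynamicDoc", title=(str, ...), score=(float, 0.0))
doc = DynamicDoc(title="hello")
print(doc)
```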
| closed | 2023-02-20T13:44:25Z | 2023-02-28T14:19:51Z | https://github.com/docarray/docarray/issues/1152 | [] | jupyterjazz | 0 |
slackapi/bolt-python | fastapi | 557 | How to know the user id of the receiving user? | My goal is to create an automated "voice mail" for users when they are not available.
Say there are three users - A, B, C. Arrow pointing outward means a user is sending a message to another user.
```
User A -> User B
User A -> User C
```
Assume that user B and user C have installed my Slack app but user A hasn't. Now, my question is how can I get the `WebClient` based on the receiving user?
I need to know the right `WebClient` such that I can respond to the sending user on their behalf. That is, when User A sends a message to user B then I want to respond on behalf of User B.
```bash
pip freeze | grep slack
python --version
sw_vers && uname -v # or `ver`
```
#### The `slack_bolt` version
(Paste the output of `pip freeze | grep slack`)
slack-bolt==1.11.0
slack-sdk==3.11.2
#### Python runtime version
(Paste the output of `python --version`)
3.8.0
#### OS info
(Paste the output of `sw_vers && uname -v` on macOS/Linux or `ver` on Windows OS)
ProductName: macOS
ProductVersion: 12.1
BuildVersion: 21C52
Darwin Kernel Version 21.2.0: Sun Nov 28 20:28:54 PST 2021; root:xnu-8019.61.5~1/RELEASE_X86_64
### Expected result:
Get the `WebClient` of the receiving user
### Actual result:
Get the `WebClient` of the installed user
## Requirements
Please read the [Contributing guidelines](https://github.com/slackapi/bolt-python/blob/main/.github/contributing.md) and [Code of Conduct](https://slackhq.github.io/code-of-conduct) before creating this issue or pull request. By submitting, you are agreeing to those rules.
| closed | 2021-12-28T12:31:19Z | 2022-02-18T05:17:06Z | https://github.com/slackapi/bolt-python/issues/557 | [
"question",
"auto-triage-stale"
] | SamarpanRai-TomTom | 7 |
CorentinJ/Real-Time-Voice-Cloning | python | 576 | able to do Speech-To-Speech Synthesis | Can it perform Speech to speech synthesis, instead of text to speech | closed | 2020-10-26T23:27:04Z | 2021-04-28T19:41:00Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/576 | [] | ruah1984 | 1 |
FactoryBoy/factory_boy | sqlalchemy | 143 | factory.DjangoModelFactory class Meta: not working | Documentation guides to define factories like this:
```
class UserFactory(factory.Factory):
class Meta:
model = models.User
```
However, I get `*** TypeError: 'Meta' is an invalid keyword argument for this function` when calling `UserFactory()`.
Instead I found that removing `class Meta:` definition and using:
```
class UserFactory(factory.Factory):
FACTORY_FOR = models.User
```
seems to work...
| closed | 2014-05-21T10:35:45Z | 2014-08-04T11:07:28Z | https://github.com/FactoryBoy/factory_boy/issues/143 | [] | JsseL | 8 |
jmcnamara/XlsxWriter | pandas | 652 | create check box | Can XlsxWriter create a checkbox in Excel? I can only find Buttons in the documentation. | closed | 2019-08-27T15:17:38Z | 2019-08-27T17:08:41Z | https://github.com/jmcnamara/XlsxWriter/issues/652 | [
"question"
] | shahmohamadi | 1 |
ivy-llc/ivy | numpy | 28,510 | Fix Frontend Failing Test: jax - math.paddle.conj | To do List: https://github.com/unifyai/ivy/issues/27496 | closed | 2024-03-08T11:05:58Z | 2024-03-14T21:30:32Z | https://github.com/ivy-llc/ivy/issues/28510 | [
"Sub Task"
] | ZJay07 | 0 |
google-research/bert | tensorflow | 1,182 | Appropriate training steps for fine-tuning on language model | I want to fine-tune BERT-base-uncased for the language model, according to my custom dataset. It consists of around 80M tweets. I'm a bit puzzled about how many training steps I should set so it is trained optimally (not under-/over-fit). The README says that it should practically be more than/around 10k steps, but what about large data collections as such the one I have? Does anybody have any estimation? | open | 2020-12-06T10:58:10Z | 2021-02-05T16:06:35Z | https://github.com/google-research/bert/issues/1182 | [] | sajastu | 1 |
dsdanielpark/Bard-API | api | 84 | KeyError: 'images' when using ChatBard | This is the code snippet I run:
```python
from bardapi import ChatBard
chat = ChatBard(token=token, language='en')
chat.start()
```
Sometimes the KeyError still pops up, even if I modify the code in chat.py. Could this be a network-related problem? Note that I'm using a virtual network. Thanks, guys.
<img width="927" alt="image" src="https://github.com/dsdanielpark/Bard-API/assets/82095274/23b165d2-906f-432f-9995-af9c8dc38ead">
| closed | 2023-06-29T14:03:15Z | 2023-06-30T06:43:35Z | https://github.com/dsdanielpark/Bard-API/issues/84 | [] | Xiansssss | 2 |
automl/auto-sklearn | scikit-learn | 1,543 | [Question] Are you also unable to import auto-sklearn? How can I import it successfully? | Hello,
I can't import auto-sklearn anymore. I use Google Colab, so I always need to install auto-sklearn when opening a notebook, which works fine. When importing auto-sklearn I get the following error:
IncorrectPackageVersionError: found 'dask' version 2.12.0 but requires dask version >=2021.12
When updating dask, it doesn't work either.
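For reference, a hedged guess at the usual fix (not confirmed in this thread): upgrade the pinned packages in the Colab cell with `!pip install "dask>=2021.12" "distributed>=2021.12"` and then restart the runtime before importing auto-sklearn again.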
Greetings | closed | 2022-07-20T12:27:26Z | 2022-07-23T13:59:21Z | https://github.com/automl/auto-sklearn/issues/1543 | [
"question"
] | kobabulbul | 4 |
littlecodersh/ItChat | api | 288 | Unable to get country/region information? | After calling get_friends, the contacts have no country or region related fields. But the web version of WeChat does display this information. | closed | 2017-03-18T12:45:05Z | 2017-03-22T07:49:56Z | https://github.com/littlecodersh/ItChat/issues/288 | [
"question"
] | finalion | 1 |
harry0703/MoneyPrinterTurbo | automation | 438 | What is the reason for failing to generate the script? | 2024-07-05 12:21:52.174 | INFO     | __main__:<module>:654 - start generating video
2024-07-05 12:21:52.175 | INFO | __main__:<module>:655 - {
"video_subject": "为什么要跑步",
"video_script": "",
"video_terms": "",
"video_aspect": "9:16",
"video_concat_mode": "random",
"video_clip_duration": 3,
"video_count": 1,
"video_source": "pexels",
"video_materials": null,
"video_language": "zh-CN",
"voice_name": "zh-CN-XiaoxiaoNeural-Female",
"voice_volume": 1.0,
"bgm_type": "random",
"bgm_file": "",
"bgm_volume": 0.2,
"subtitle_enabled": true,
"subtitle_position": "bottom",
"font_name": "MicrosoftYaHeiBold.ttc",
"text_fore_color": "#FFFFFF",
"text_background_color": "transparent",
"font_size": 60,
"stroke_color": "#000000",
"stroke_width": 1.5,
"n_threads": 2,
"paragraph_number": 1
}
2024-07-05 12:21:52.175 | INFO | app.services.task:start:30 - start task: bbfa3e14-f31e-48b3-b3eb-69bf9b7c512e
2024-07-05 12:21:52.178 | INFO | app.services.task:start:39 -
## generating video script
2024-07-05 12:21:52.178 | INFO | app.services.llm:generate_script:258 - subject: 为什么要跑步
2024-07-05 12:21:52.179 | INFO | app.services.llm:_generate_response:18 - llm provider: openai
2024-07-05 12:22:12.487 | ERROR | app.services.llm:generate_script:294 - failed to generate script: Request timed out.
2024-07-05 12:22:12.488 | WARNING | app.services.llm:generate_script:297 - failed to generate video script, trying again... 1
2024-07-05 12:22:12.489 | INFO | app.services.llm:_generate_response:18 - llm provider: openai
2024-07-05 12:22:30.172 | ERROR | app.services.llm:generate_script:294 - failed to generate script: Request timed out.
2024-07-05 12:22:30.172 | WARNING | app.services.llm:generate_script:297 - failed to generate video script, trying again... 2
2024-07-05 12:22:30.172 | INFO | app.services.llm:_generate_response:18 - llm provider: openai
2024-07-05 12:22:47.898 | ERROR | app.services.llm:generate_script:294 - failed to generate script: Request timed out.
2024-07-05 12:22:47.898 | WARNING | app.services.llm:generate_script:297 - failed to generate video script, trying again... 3
2024-07-05 12:22:47.898 | INFO | app.services.llm:_generate_response:18 - llm provider: openai
2024-07-05 12:23:02.654 | ERROR | app.services.llm:generate_script:294 - failed to generate script: Request timed out.
2024-07-05 12:23:02.655 | WARNING | app.services.llm:generate_script:297 - failed to generate video script, trying again... 4
2024-07-05 12:23:02.655 | INFO | app.services.llm:_generate_response:18 - llm provider: openai
2024-07-05 12:23:20.408 | ERROR | app.services.llm:generate_script:294 - failed to generate script: Request timed out.
2024-07-05 12:23:20.408 | WARNING | app.services.llm:generate_script:297 - failed to generate video script, trying again... 5
2024-07-05 12:23:20.411 | SUCCESS | app.services.llm:generate_script:299 - completed:
2024-07-05 12:23:20.412 | ERROR | app.services.task:start:49 - failed to generate video script.
2024-07-05 12:23:20.413 | ERROR | __main__:<module>:661 - 视频生成失败 | closed | 2024-07-05T04:26:54Z | 2024-07-05T07:01:52Z | https://github.com/harry0703/MoneyPrinterTurbo/issues/438 | [] | yzh000000 | 1 |
marshmallow-code/apispec | rest-api | 60 | Schema Refs | Today, to add a ref to a Schema from a Schema's nested field, we must add a `ref` parameter, as such:
```
cat_with_ref = fields.Nested(CategorySchema, ref='Category', description="A category")
```
To me, it looks like we're repeating `Category` (which is gettable by removing `Schema` from the `CategorySchema` name).
I think we should do it the other way around: automatically add a ref such as `#/definitions/Category` for any `fields.Nested(CategorySchema)`, unless `use_refs=false` is passed to the APISpec.
What do you think ?
| closed | 2016-03-16T09:25:55Z | 2016-03-25T02:06:08Z | https://github.com/marshmallow-code/apispec/issues/60 | [] | martinlatrille | 1 |
pytorch/pytorch | python | 149,497 | symbolic_trace failed on deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B | ### 🐛 Describe the bug
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from torch_mlir import fx
model_name = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
prompt = "What are the benefits of using AI in healthcare?"
input_ids = tokenizer.encode(prompt, return_tensors="pt")
model.eval()
traced_model = torch.fx.symbolic_trace(model)
m = fx.export_and_import(traced_model, (input_ids,), enable_ir_printing=True,
enable_graph_printing=True)
with open("qwen1.5b_s.mlir", "w") as f:
f.write(str(m))
```
```shell
Traceback (most recent call last):
File "/home/hmsjwzb/work/models/QWEN/./qwen5.py", line 55, in <module>
traced_model = torch.fx.symbolic_trace(model)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/fx/_symbolic_trace.py", line 1314, in symbolic_trace
graph = tracer.trace(root, concrete_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/fx/_symbolic_trace.py", line 788, in trace
fn, args = self.create_args_for_root(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/fx/_symbolic_trace.py", line 679, in create_args_for_root
root_fn = _patch_function(root_fn, len(args))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/fx/_symbolic_trace.py", line 184, in _patch_function
new_code = CodeType(*co_args) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^
ValueError: code: co_varnames is too small
```
### Versions
```shell
Collecting environment information...
PyTorch version: 2.7.0.dev20250310+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 19.1.7 (https://github.com/llvm/llvm-project.git cd708029e0b2869e80abe31ddb175f7c35361f90)
CMake version: version 3.31.6
Libc version: glibc-2.35
Python version: 3.11.11+local (heads/3.11-dirty:f0895aa9c1d, Dec 20 2024, 14:17:01) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i9-13900
CPU family: 6
Model: 183
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 1
Stepping: 1
CPU max MHz: 5600.0000
CPU min MHz: 800.0000
BogoMIPS: 3993.60
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 896 KiB (24 instances)
L1i cache: 1.3 MiB (24 instances)
L2 cache: 32 MiB (12 instances)
L3 cache: 36 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.3
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.16.1
[pip3] onnxscript==0.2.2
[pip3] optree==0.14.0
[pip3] torch==2.7.0.dev20250310+cpu
[pip3] torchvision==0.22.0.dev20250310+cpu
[pip3] triton==3.2.0
[conda] magma-cuda121 2.6.1
```
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv | open | 2025-03-19T10:06:25Z | 2025-03-20T19:22:01Z | https://github.com/pytorch/pytorch/issues/149497 | [
"module: fx",
"oncall: fx"
] | FlintWangacc | 1 |
jina-ai/clip-as-service | pytorch | 219 | Can request don't use bert-as-service client | I want to use you server structure into my project, I have a question , Can I request bert-as-service server don't use bert-as-service client, just use python request? because in your code, I see that you use zmq to request the bert-as-service server. | closed | 2019-01-25T06:44:00Z | 2019-01-29T01:04:38Z | https://github.com/jina-ai/clip-as-service/issues/219 | [] | xiongma | 2 |
Significant-Gravitas/AutoGPT | python | 9,603 | Switch from Selenium to mouse clicks on (x,y) via Vision API | Hello, when will you switch from Selenium and Puppeteer to mouse clicks on (x,y) via Vision API?
OpenAI - https://platform.openai.com/docs/guides/vision
Anthropic - https://docs.anthropic.com/en/docs/build-with-claude/vision
Qwen2.5-VL - https://github.com/QwenLM/Qwen2.5-VL
You can see the implementation here
https://github.com/anthropics/anthropic-quickstarts/tree/main/computer-use-demo
demo: https://youtu.be/vH2f7cjXjKI?t=24
Many sites now have a WAF; it blocks Selenium and Puppeteer and checks the TLS fingerprints of browser builds.
With the advent of ML in WAF products, we will increasingly encounter blocking.
Mouse emulation and Vision API are universal things that can be used everywhere (even with GUI apps of a specific OS).
| open | 2025-03-07T15:01:58Z | 2025-03-07T15:08:52Z | https://github.com/Significant-Gravitas/AutoGPT/issues/9603 | [] | eiko4 | 0 |
graphql-python/gql | graphql | 452 | Unable to print schema with gql and error is an SSL error | **Describe the bug**
gql-cli is not using SSL even when the endpoint URL clearly requests this.
**To Reproduce**
Steps to reproduce the behavior. Below, the API-Key header is redacted:
```
gql-cli --print-schema -d -H 'API-Key: NRAK-1TTZZQEJWB1QV1FJVD8ZRK4ZKHD' 'Accept: application/json' -- https://api.newrelic.com/graphql
```
I get a long backtrace, but the bottom line is as follows:
```
aiohttp.client_exceptions.ClientConnectorError: Cannot connect to host api.newrelic.com:443 ssl:False [Connection reset by peer]
```
**Expected behavior**
When given a URL beginning with https, `gql-cli` should enable SSL/TLS.
**System info (please complete the following information):**
- *OS:* MacOS
- *Python version:* Python 3.11.5
- *gql version:* 3.4.1
- *graphql-core version:* 3.2.3
| closed | 2023-12-01T01:14:56Z | 2023-12-01T13:50:42Z | https://github.com/graphql-python/gql/issues/452 | [
"type: invalid"
] | danizen | 1 |
noirbizarre/flask-restplus | flask | 403 | swagger docs: Available authorizations not populated correctly | Please let me know if I have an error in my code. But I tried to replicate the [example code](http://flask-restplus.readthedocs.io/en/stable/swagger.html#documenting-authorizations) as closely as possible.
I am trying to specify authorization for my resource
```
authorizations = {
"apikey": {
"type": "apiKey",
"in": "header",
"name": "X-API-KEY"
}
}
ns = Namespace("Entry", authorizations=authorizations)
@ns.route("/")
class ApiEntry(Resource):
@ns.doc(security="apikey")
@jwt_required
def delete(self, entry_id: int):
```
When this renders as a Swagger doc webpage, I see this:

Nothing to indicate that there is an available authorization except the exclamation mark. When I click on it, I see this popup. As you can see, it has no content.

But I believe I specified everything correctly, so why is the authorization not being picked up?
| open | 2018-03-05T22:17:00Z | 2018-03-06T00:08:26Z | https://github.com/noirbizarre/flask-restplus/issues/403 | [] | boompig | 3 |
PaddlePaddle/PaddleHub | nlp | 1,871 | hub install *** error | hub version: 2.2.0
system: windows 10
tools: pycharm 2021.2.2
Symptom: when I run `hub install ABC` in PyCharm's terminal, I get: FileNotFoundError: [Errno 2] No such file or directory: 'ABC\\module.py'
However, when I run `hub install ABC` in the system cmd, it works correctly.
Cause: after careful investigation, I found that this bug occurs if your PyCharm project contains a folder with the same name as the package you want to install. For example, I want to install ABC with `hub install ABC`, and my PyCharm project also contains an ABC directory. Then line 46 of paddlehub/commands/install.py, `if os.path.exists(_arg) and os.path.isdir(_arg):`, makes the wrong decision: it actually finds that the directory `Q:\workspace_pycharm\paddlehub_use\ABC` under my PyCharm project exists, which then causes `Q:\workspace_pycharm\paddlehub_use\ABC\module.py` not to be found.

**How to avoid this bug: do not keep a folder in your project with the same name as the module you want to install**

| open | 2022-05-17T07:58:12Z | 2024-02-26T05:02:09Z | https://github.com/PaddlePaddle/PaddleHub/issues/1871 | [] | AndACat | 1 |
pytest-dev/pytest-qt | pytest | 475 | How to automatically select documents from QFileDialog? | I want to test my Graphical User Interface with the qtbot from pytest-qt.
I am new to testing in general and I could use some guidance on how to start writing these tests.
I want the bot to click on the file icon, after which a QFileDialog opens (as in the picture below), and the bot needs to select a PDF.
I have already looked for documentation, but what I found was not really helpful; I didn't understand how to set the qtbot up.

```python
from PySide2.QtWidgets import QMainWindow, QPushButton, QApplication, QFileDialog
class MainWindow(QMainWindow):
def __init__(self):
super(MainWindow, self).__init__()
self.button = ''
btn = QPushButton('Open File', self)
btn.move(10, 10)
btn.clicked.connect(self.open_file)
self.resize(420, 450)
def open_file(self):
pdf_dialog_obj = QFileDialog.getOpenFileNames(self, "Open Pdf", "/Downloads", "Pdf Files (*.pdf)",)
pdf_path = pdf_dialog_obj[0]
print(pdf_path)
if __name__ == '__main__':
import sys
app = QApplication(sys.argv)
MW = MainWindow()
MW.show()
sys.exit(app.exec_())
```
Like:
https://stackoverflow.com/questions/58731798/gui-testing-in-pyside2-with-qtbot
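A minimal sketch of one common approach (an assumption, not taken from the linked answers): stub the dialog with pytest's `monkeypatch` so no real window opens, then drive the GUI with `qtbot`. It assumes the `MainWindow` from the snippet above keeps its button as an attribute (e.g. `self.button = btn` in `__init__`):
```python
from PySide2.QtCore import Qt
from PySide2.QtWidgets import QFileDialog


def test_open_file_selects_pdf(qtbot, monkeypatch):
    window = MainWindow()          # the class from the snippet above
    qtbot.addWidget(window)

    # Pretend the user picked one PDF; the second element is the selected filter.
    monkeypatch.setattr(
        QFileDialog,
        "getOpenFileNames",
        lambda *args, **kwargs: (["/tmp/example.pdf"], "Pdf Files (*.pdf)"),
    )

    # Clicking the button now calls open_file() without showing a dialog.
    qtbot.mouseClick(window.button, Qt.LeftButton)
```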
| closed | 2023-02-01T12:32:03Z | 2023-02-03T03:35:05Z | https://github.com/pytest-dev/pytest-qt/issues/475 | [
"question :question:"
] | ningwana | 4 |
python-visualization/folium | data-visualization | 1,598 | FastMarkerCluster with Rectangles | **Is your feature request related to a problem? Please describe.**
I am trying to make a map with lots of rectangles. For scalability I want to use `FastMarkerCluster`.
**Describe the solution you'd like**
An example how to achieve this with the custom `callback` argument of `FastMarkerCluster`.
**Describe alternatives you've considered**
I have followed https://stackoverflow.com/questions/55082227/use-customized-markers-in-fastmarkercluster-in-python-folium and https://gis.stackexchange.com/questions/197882/is-it-possible-to-cluster-polygons-in-leaflet to come up with the following code (that does not work):
```python
import folium
from folium.plugins import FastMarkerCluster
callback = """\
function (row) {
L.RectangleClusterable = L.Rectangle.extend({
_originalInitialize: L.Rectangle.prototype.initialize,
initialize: function (bounds, options) {
this._originalInitialize(bounds, options);
this._latlng = this.getBounds().getCenter(); // Define the polygon "center".
},
getLatLng: function () {
return this._latlng;
},
// dummy method.
setLatLng: function () {}
});
var marker;
var bounds = [[row[0],row[1]],[row[2],row[3]]];
marker = new L.RectangleClusterable(bounds, {color: row[4], fill: false});
return marker;
};
"""
dummy_datalist = [[15.239767846087643, -3.430904007476068, 15.274550329009562, -3.3952112382419943, 'red'], [12.555053058361464, 22.69460273995349, 12.589536739962934, 22.730168087109885, 'red'], [10.578401022114354, 18.96000929066635, 10.613334525077851, 18.99486324338091, 'red'], [-28.82025939819514, 20.279894065524264, -28.78580537471939, 20.31947355512883, 'red'], [15.243305252384948, 0.6368612962996214, 15.277369210446777, 0.6731502741999579, 'red']]
map = folium.Map(location = [0.0, 20.0], zoom_start = 3)
FastMarkerCluster(dummy_datalist, callback = callback).add_to(map)
map
```
It just displays nothing on top of the map. Help is very much appreciated!
Update: Fixed it, I had a mistake in the Javascript. :) | closed | 2022-05-25T13:15:39Z | 2022-05-25T17:31:40Z | https://github.com/python-visualization/folium/issues/1598 | [] | vitusbenson | 0 |
AirtestProject/Airtest | automation | 256 | After installing the GUI tool and connecting a device, the script editor window does not change when performing operations | (Please fill this in following the prompts below as much as possible; it helps us locate and solve the problem quickly. Thanks for your cooperation. Otherwise the issue will be closed directly.)
**(Important! Issue category)**
* AirtestIDE test/development environment usage issues -> https://github.com/AirtestProject/AirtestIDE/issues
* Widget recognition, tree structure, or poco library errors -> https://github.com/AirtestProject/Poco/issues
* Image recognition and device control issues -> follow the steps below
**Describe the bug**
After installing the GUI tool and connecting the device, the script editing window shows no change at all when operations are performed.
**Related screenshots**

**Steps to reproduce**
1. Open the IDE
2. Open the Calculator
3. Click window selection in the IDE and select the Calculator window
4. Click buttons on the Calculator
**Expected behavior**
The script editing window shows the script corresponding to the operations performed.
**Python version:** `python2.7`
**airtest version:** `1.2.0`
> The airtest version can be found with the `pip freeze` command
**Device:**
- OS: Windows 7 64-bit
| open | 2019-01-22T09:22:45Z | 2019-05-14T12:04:51Z | https://github.com/AirtestProject/Airtest/issues/256 | [] | hellapple | 9 |
ultralytics/ultralytics | python | 19,830 | For human detection task, do I need to train again COCO dataset with human label only ? | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Hi there, I want to improve YOLO11n for the human detection task. I have a question: if I take a YOLO11n model pretrained on the COCO dataset and train it on COCO again for human detection only, is it possible to improve on the current mAP score (39.5 on the Ultralytics site)?
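For illustration, a minimal sketch of fine-tuning the pretrained checkpoint on a person-only label set (the dataset YAML name is hypothetical, and this is not guaranteed to beat the published 39.5 mAP):
```python
from ultralytics import YOLO

# "coco_person.yaml" is a hypothetical dataset config that keeps only the person class.
model = YOLO("yolo11n.pt")                      # COCO-pretrained starting point
model.train(data="coco_person.yaml", epochs=100, imgsz=640)

metrics = model.val()
print(metrics.box.map50)                        # person-only mAP@0.5
```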
### Additional
_No response_ | open | 2025-03-23T10:03:52Z | 2025-03-23T12:20:57Z | https://github.com/ultralytics/ultralytics/issues/19830 | [
"question",
"detect"
] | ElectricGoal | 3 |
Anjok07/ultimatevocalremovergui | pytorch | 758 | One day i started having a problem.... I don't know what to do and asking for help. | Last Error Received:
Process: MDX-Net
If this error persists, please contact the developers with the error details.
Raw Error Details:
Fail: "[ONNXRuntimeError] : 1 : FAIL : Non-zero status code returned while running FusedConv node. Name:'Conv_0' Status Message: CUDNN error executing cudnnFindConvolutionForwardAlgorithmEx( s_.handle, s_.x_tensor, s_.x_data, s_.w_desc, s_.w_data, s_.conv_desc, s_.y_tensor, s_.y_data, 1, &algo_count, &perf, algo_search_workspace.get(), max_ws_size)"
Traceback Error: "
File "UVR.py", line 4716, in process_start
File "separate.py", line 287, in seperate
File "separate.py", line 366, in demix_base
File "separate.py", line 386, in run_model
File "separate.py", line 281, in <lambda>
File "onnxruntime\capi\onnxruntime_inference_collection.py", line 192, in run
"
Error Time Stamp [2023-08-23 02:06:54]
Full Application Settings:
vr_model: Choose Model
aggression_setting: 10
window_size: 512
batch_size: Default
crop_size: 256
is_tta: False
is_output_image: False
is_post_process: False
is_high_end_process: False
post_process_threshold: 0.2
vr_voc_inst_secondary_model: No Model Selected
vr_other_secondary_model: No Model Selected
vr_bass_secondary_model: No Model Selected
vr_drums_secondary_model: No Model Selected
vr_is_secondary_model_activate: False
vr_voc_inst_secondary_model_scale: 0.9
vr_other_secondary_model_scale: 0.7
vr_bass_secondary_model_scale: 0.5
vr_drums_secondary_model_scale: 0.5
demucs_model: Choose Model
segment: Default
overlap: 0.25
shifts: 2
chunks_demucs: Auto
margin_demucs: 44100
is_chunk_demucs: False
is_chunk_mdxnet: False
is_primary_stem_only_Demucs: False
is_secondary_stem_only_Demucs: False
is_split_mode: True
is_demucs_combine_stems: True
demucs_voc_inst_secondary_model: No Model Selected
demucs_other_secondary_model: No Model Selected
demucs_bass_secondary_model: No Model Selected
demucs_drums_secondary_model: No Model Selected
demucs_is_secondary_model_activate: False
demucs_voc_inst_secondary_model_scale: 0.9
demucs_other_secondary_model_scale: 0.7
demucs_bass_secondary_model_scale: 0.5
demucs_drums_secondary_model_scale: 0.5
demucs_pre_proc_model: No Model Selected
is_demucs_pre_proc_model_activate: False
is_demucs_pre_proc_model_inst_mix: False
mdx_net_model: UVR-MDX-NET Inst HQ 3
chunks: Auto
margin: 44100
compensate: Auto
is_denoise: False
is_invert_spec: False
is_mixer_mode: False
mdx_batch_size: 2
mdx_voc_inst_secondary_model: No Model Selected
mdx_other_secondary_model: No Model Selected
mdx_bass_secondary_model: No Model Selected
mdx_drums_secondary_model: No Model Selected
mdx_is_secondary_model_activate: False
mdx_voc_inst_secondary_model_scale: 0.9
mdx_other_secondary_model_scale: 0.7
mdx_bass_secondary_model_scale: 0.5
mdx_drums_secondary_model_scale: 0.5
is_save_all_outputs_ensemble: True
is_append_ensemble_name: False
chosen_audio_tool: Manual Ensemble
choose_algorithm: Min Spec
time_stretch_rate: 2.0
pitch_rate: 2.0
is_gpu_conversion: True
is_primary_stem_only: True
is_secondary_stem_only: False
is_testing_audio: False
is_add_model_name: False
is_accept_any_input: False
is_task_complete: False
is_normalization: True
is_create_model_folder: False
mp3_bit_set: 320k
save_format: WAV
wav_type_set: PCM_24
help_hints_var: True
model_sample_mode: False
model_sample_mode_duration: 30
demucs_stems: All Stems | open | 2023-08-22T19:07:49Z | 2023-08-22T19:07:49Z | https://github.com/Anjok07/ultimatevocalremovergui/issues/758 | [] | dimadt | 0 |
giotto-ai/giotto-tda | scikit-learn | 582 | Question/References about calculations of Pairwise distance metrics | Hello!
I have been using the pairwise distance functions to create distance matrices for manifold learning (UMAP). Up to this point, I have been calculating full distance matrices as input to the algorithm.
Now, however, I am in a situation where I am calculating the distances between tens of thousands of persistence diagrams and I can no longer calculate the full distance matrices. In order to use the heuristics built into the UMAP algorithm, I need to be able to compile the distance function with Numba. Unfortunately, I believe this means I need to write them from scratch.
I have attempted to write functions that emulate the behavior of `gtda.diagrams.PairwiseDistance`, which I have used previously, but I cannot seem to be able to match the output. Additionally, I have had trouble finding the code that actually performs the distance calculation. Would it be possible to get references for how the distances are calculated?
Thank you so much! | closed | 2021-06-17T17:36:00Z | 2021-06-18T07:04:15Z | https://github.com/giotto-ai/giotto-tda/issues/582 | [] | mir-cat | 0 |
tflearn/tflearn | data-science | 422 | feed_dict_builder does not work with traditional feed dict | In https://github.com/tflearn/tflearn/blob/master/tflearn/utils.py:feed_dict_builder, if the input feed dict contains a mapping of tf.tensor to data, this is not added to the resulting feed dict.
Line 294 and line 331, continue the iteration through the input feed_dict but never update the output feed dict.
As a result when trying to predict by inputting a tensor -> value feed dict, prediction fails.
| open | 2016-10-29T20:32:15Z | 2016-10-30T19:37:04Z | https://github.com/tflearn/tflearn/issues/422 | [] | blake-varden | 3 |
johnthagen/python-blueprint | pytest | 85 | Pass -Werror to pytest rather than python | From `pytest --help`:
```
pytest-warnings:
-W PYTHONWARNINGS, --pythonwarnings=PYTHONWARNINGS
set which warnings to report, see -W option of python itself.
```
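For illustration (a sketch, assuming standard pytest configuration; not from the issue itself): the flag can be passed on the command line as `pytest -W error`, or made permanent with `addopts = "-W error"` or `filterwarnings = ["error"]` under `[tool.pytest.ini_options]` in `pyproject.toml`.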
Passing `-Werror` to `pytest` directly is recommended from the `help`, and works better for `pytest-xdist`. | closed | 2022-03-08T14:58:27Z | 2022-05-15T23:18:10Z | https://github.com/johnthagen/python-blueprint/issues/85 | [] | johnthagen | 0 |
keras-team/keras | tensorflow | 20,420 | keras.src vs keras.api design question | This is more of a question for me to better understand the codebase.
Working on #20399, I realised that the distinction between `keras.src` and `keras.api` (which is exposed as `keras` in the end) makes it impossible to do certain things.
For instance, if you want to typehint an input as `keras.Model`, then you'd need to do a `import keras.Model` kinda thing. But that results in a circular import issue along these lines:
```py
Cell In[1], line 1
----> 1 from keras.wrappers import KerasClassifier
File ~/Projects/gh/me/keras/keras/__init__.py:4
1 import os
3 # DO NOT EDIT. Generated by api_gen.sh
----> 4 from keras.api import DTypePolicy
5 from keras.api import FloatDTypePolicy
6 from keras.api import Function
File ~/Projects/gh/me/keras/keras/api/__init__.py:7
1 """DO NOT EDIT.
2
3 This file was autogenerated. Do not edit it by hand,
4 since your modifications would be overwritten.
5 """
----> 7 from keras.api import _tf_keras
8 from keras.api import activations
9 from keras.api import applications
File ~/Projects/gh/me/keras/keras/api/_tf_keras/__init__.py:1
----> 1 from keras.api._tf_keras import keras
File ~/Projects/gh/me/keras/keras/api/_tf_keras/keras/__init__.py:28
26 from keras.api import utils
27 from keras.api import visualization
---> 28 from keras.api import wrappers
29 from keras.api._tf_keras.keras import backend
30 from keras.api._tf_keras.keras import layers
File ~/Projects/gh/me/keras/keras/api/wrappers/__init__.py:7
1 """DO NOT EDIT.
2
3 This file was autogenerated. Do not edit it by hand,
4 since your modifications would be overwritten.
5 """
----> 7 from keras.src.wrappers._sklearn import KerasClassifier
8 from keras.src.wrappers._sklearn import KerasRegressor
File ~/Projects/gh/me/keras/keras/src/wrappers/_sklearn.py:37
35 import keras
36 from keras.src import losses as losses_module
---> 37 from keras import Model
38 from keras.src.api_export import keras_export
39 from keras.src.wrappers._utils import accepts_kwargs
ImportError: cannot import name 'Model' from partially initialized module 'keras' (most likely due to a circular import) (/home/adrin/Projects/gh/me/keras/keras/__init__.py)
```
Checking the codebase, I realise typehints are not a thing we do here, so I'll remove them, but it still begs the question, what are the gains with the separation of the two folders, which adds quite a bit of complexity. In other projects, we tend to have a leading `_` on file names, and `__init__.py` exposes what needs to be _public_ on the user API level. | closed | 2024-10-28T09:17:36Z | 2025-03-13T03:10:09Z | https://github.com/keras-team/keras/issues/20420 | [
"type:support"
] | adrinjalali | 7 |
dask/dask | scikit-learn | 11,336 | An inconsistency between the documentation of `dask.array.percentile` and code implementation | **Describe the issue**:
As mentioned in the parameter `method ` in the documentation of [`dask.array.percentile`](https://docs.dask.org/en/stable/generated/dask.array.percentile.html?highlight=percentile):
> **method{‘linear’, ‘lower’, ‘higher’, ‘midpoint’, ‘nearest’}, optional**
The interpolation method to use when the desired percentile lies between two data points i < j. **Only valid for internal_method='dask'.**
However, the corresponding part of the source code is:
```python
if (
internal_method == "tdigest"
and method == "linear"
and (np.issubdtype(dtype, np.floating) or np.issubdtype(dtype, np.integer))
):
from dask.utils import import_required
import_required(
"crick", "crick is a required dependency for using the t-digest method."
)
name = "percentile_tdigest_chunk-" + token
dsk = {
(name, i): (_tdigest_chunk, key) for i, key in enumerate(a.__dask_keys__())
}
name2 = "percentile_tdigest-" + token
dsk2 = {(name2, 0): (_percentiles_from_tdigest, q, sorted(dsk))}
```
So although the documentation says `method` is only valid for `internal_method='dask'`, the code shows it is also consulted when `internal_method='tdigest'`.
Maybe you can check this and improve the documentation.
| open | 2024-08-21T07:07:55Z | 2024-08-21T11:50:32Z | https://github.com/dask/dask/issues/11336 | [
"array",
"documentation"
] | ParsifalXu | 2 |
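For reference, a small sketch of the two code paths being compared, assuming the signature quoted from the documentation above (the t-digest path additionally needs the `crick` package installed):

```python
import numpy as np
import dask.array as da

x = da.from_array(np.random.default_rng(0).normal(size=10_000), chunks=1_000)

# Documented behaviour: `method` applies to the internal_method="dask" path.
p_dask = da.percentile(x, [25, 50, 75], method="nearest", internal_method="dask")

# Per the source excerpt above, the t-digest path is only taken when
# method == "linear", so `method` is effectively consulted here as well.
p_tdigest = da.percentile(x, [25, 50, 75], method="linear", internal_method="tdigest")

print(p_dask.compute(), p_tdigest.compute())
```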
coqui-ai/TTS | pytorch | 3,563 | [Feature request] | I am trying to clone a voice from an MP3 file, but I am getting an error. Is this a bug, do I need to adjust some settings, or is it not available yet?
Thanks | closed | 2024-02-04T19:02:08Z | 2025-01-03T09:48:05Z | https://github.com/coqui-ai/TTS/issues/3563 | [
"wontfix",
"feature request"
] | m00nsp3ll | 1 |
xlwings/xlwings | automation | 1,951 | How to get/set different colours of the same range from an Excel file using xlwings in python? | It is possible to get/set the colour of a range using `xlwings` like this:
import xlwings as xw
# Define RGB codes
green = (226, 239, 218)
red = (252, 228, 214)
grey = (242, 242, 242)
# Connect to the Excel file
wb = xw.Book(EXCEL_FILENAME)
sht = wb.sheets[EXCEL_SHEETNAME]
# Set the color of the whole range to grey
sht.range("A1:C7").color = grey
print(sht.range("A1:C7").color) # prints (242, 242, 242)
# Set the color to some sub-ranges
sht['A1'].color = green
print(sht.range("A1").color) # prints (226, 239, 218)
sht['B2:B6'].color = green
sht['A4:A6'].color = red
print(sht.range("A4:A6").color) # prints (252, 228, 214)
sht['C1'].color = red
sht['C3:C7'].color = red
Getting/setting the colour of a range works well as long as there is only one colour in the range. But when there are several colours, it cannot report the different codes properly:
print(sht.range("A1:C7").color) # prints (0, 0, 0)
I am trying to find a way to retrieve, in a single call, a pandas DataFrame with the corresponding colours of a range, in the same way that it is possible to get/set all the values or even the formulas of a range.
# Retrieve the values of a range
print(sht.range("A1:C7").value)
# example: [[1.0, 'b1', 3.0], [2.0, 3.0, None], [3.0, 'b3', 'c3'], [6.0, 'b4', 'c4'], [5.0, 'b5', 'c5'], [6.0, 'b6', 'c6'], [7.0, 'b7', 'c7']]
# Retrieve the formula of a range
print(sht.range("A1:C7").formula)
# example: (('1', 'b1', '=A1+2'), ('2', '=A2+1', ''), ('3', 'b3', 'c3'), ('=A3+3', 'b4', 'c4'), ('5', 'b5', 'c5'), ('6', 'b6', 'c6'), ('7', 'b7', 'c7'))
# Retrieve the formula of a range
print(sht.range("A1:C7").color)
# From our previous example: (0, 0, 0)
Is it possible to handle several colours in one call, instead of having to split the range into contiguous sub-ranges of the same colour? It would be great to be able to get/set a list of tuples (containing the RGB codes) instead of a single one for the whole range.
Many thanks in advance!
| open | 2022-07-05T16:26:57Z | 2022-09-28T13:52:12Z | https://github.com/xlwings/xlwings/issues/1951 | [] | Rom2BE | 4 |
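Until something like a vectorised `.color` exists, one workaround is to walk the range cell by cell and collect the colours into a DataFrame. It is one COM call per cell, so it is slow for large ranges, but it does return per-cell colours; a sketch reusing the workbook from the question:

```python
import pandas as pd
import xlwings as xw

wb = xw.Book(EXCEL_FILENAME)              # same workbook as above
sht = wb.sheets[EXCEL_SHEETNAME]

rng = sht.range("A1:C7")
n_rows, n_cols = rng.shape                # (7, 3)
first_row, first_col = rng.row, rng.column

# One .color read per cell: fine for small ranges, slow for big ones.
colors = [
    [sht.range((first_row + r, first_col + c)).color for c in range(n_cols)]
    for r in range(n_rows)
]
df_colors = pd.DataFrame(colors)
print(df_colors)
```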
flasgger/flasgger | rest-api | 242 | Conflicting files due to installation of examples | There is a file conflict between flasgger and micawber, because both install files into the too generic path name examples.
For reference, please see [this Arch Linux bug](https://bugs.archlinux.org/task/60006) and this [bug with micawber](https://github.com/coleifer/micawber/issues/83).
As a solution, micawber and flasgger should either not install these examples at all, or if required into a unique directory (e.g. micawber-examples) or another system directory (e.g. on Linux: /usr/share/doc/python-micawber/examples, which is usually done by the packagers).
I will remove them for now to resolve the file conflict. | closed | 2018-09-15T06:30:08Z | 2018-09-18T00:56:36Z | https://github.com/flasgger/flasgger/issues/242 | [
"bug"
] | dvzrv | 2 |
open-mmlab/mmdetection | pytorch | 11,292 | Training xdecoder error | **Problem**
I want to train the xdecoder model, but there is an error.
```none
Traceback (most recent call last):
File "./tools/train.py", line 121, in <module>
main()
File "./tools/train.py", line 110, in main
runner = Runner.from_cfg(cfg)
File "/opt/conda/envs/mmdet/lib/python3.8/site-packages/mmengine/runner/runner.py", line 462, in from_cfg
runner = cls(
File "/opt/conda/envs/mmdet/lib/python3.8/site-packages/mmengine/runner/runner.py", line 429, in __init__
self.model = self.build_model(model)
File "/opt/conda/envs/mmdet/lib/python3.8/site-packages/mmengine/runner/runner.py", line 836, in build_model
model = MODELS.build(model)
File "/opt/conda/envs/mmdet/lib/python3.8/site-packages/mmengine/registry/registry.py", line 570, in build
return self.build_func(cfg, *args, **kwargs, registry=self)
File "/opt/conda/envs/mmdet/lib/python3.8/site-packages/mmengine/registry/build_functions.py", line 232, in build_model_from_cfg
return build_from_cfg(cfg, registry, default_args)
File "/opt/conda/envs/mmdet/lib/python3.8/site-packages/mmengine/registry/build_functions.py", line 121, in build_from_cfg
obj = obj_cls(**args) # type: ignore
File "/data/mmdet/mmdetection-3.2.0/projects/XDecoder/xdecoder/xdecoder.py", line 27, in __init__
self.sem_seg_head = MODELS.build(head_) # TODO: sem_seg_head -> head
File "/opt/conda/envs/mmdet/lib/python3.8/site-packages/mmengine/registry/registry.py", line 570, in build
return self.build_func(cfg, *args, **kwargs, registry=self)
File "/opt/conda/envs/mmdet/lib/python3.8/site-packages/mmengine/registry/build_functions.py", line 232, in build_model_from_cfg
return build_from_cfg(cfg, registry, default_args)
File "/opt/conda/envs/mmdet/lib/python3.8/site-packages/mmengine/registry/build_functions.py", line 121, in build_from_cfg
obj = obj_cls(**args) # type: ignore
File "/data/mmdet/mmdetection-3.2.0/projects/XDecoder/xdecoder/unified_head.py", line 34, in __init__
self.predictor = MODELS.build(transformer_decoder_)
File "/opt/conda/envs/mmdet/lib/python3.8/site-packages/mmengine/registry/registry.py", line 570, in build
return self.build_func(cfg, *args, **kwargs, registry=self)
File "/opt/conda/envs/mmdet/lib/python3.8/site-packages/mmengine/registry/build_functions.py", line 232, in build_model_from_cfg
return build_from_cfg(cfg, registry, default_args)
File "/opt/conda/envs/mmdet/lib/python3.8/site-packages/mmengine/registry/build_functions.py", line 121, in build_from_cfg
obj = obj_cls(**args) # type: ignore
File "/data/mmdet/mmdetection-3.2.0/projects/XDecoder/xdecoder/transformer_decoder.py", line 97, in __init__
self.lang_encoder = LanguageEncoder()
File "/data/mmdet/mmdetection-3.2.0/projects/XDecoder/xdecoder/language_model.py", line 30, in __init__
self.lang_encoder = Transformer(max_token_num,
File "/data/mmdet/mmdetection-3.2.0/projects/XDecoder/xdecoder/language_model.py", line 145, in __init__
torch.empty(self.context_length, width))
TypeError: empty(): argument 'size' must be tuple of SymInts, but found element of type int at pos 1
```
**Command**
The command I used is:
```none
bash ./tools/dist_train.sh ./projects/XDecoder/configs/xdecoder-tiny_zeroshot_open-vocab-panoptic_coco.py 8
```
**Environment**
```none
Python 3.8.18
torch 1.13.0
torchaudio 0.13.0
torchvision 0.14.0
mmcv 2.1.0
mmdet 3.2.0
mmengine 0.10.1
```
Thank you very much.
| closed | 2023-12-18T05:42:01Z | 2023-12-26T10:18:58Z | https://github.com/open-mmlab/mmdetection/issues/11292 | [] | JunDaLee | 1 |
JaidedAI/EasyOCR | pytorch | 875 | trainingDataset | Hi everyone,
I need the dataset for starting the training process using train.py | open | 2022-10-17T05:17:17Z | 2022-10-17T05:17:17Z | https://github.com/JaidedAI/EasyOCR/issues/875 | [] | dewMohamed | 0 |
plotly/dash | data-visualization | 2,912 | 'orthographic' mode does not save the graph aspect (uirevision - camera point of view) | While using 'orthographic' mode (`fig['layout']['scene']['camera']['projection']['type'] = 'orthographic'`), setting `uirevision` to 'foo' in order to keep the last camera settings does not work.
This means that while orthographic projection is set, user changes to the figure's camera are not retained.
```
dash 2.17.0
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-table 5.0.0
plotly 2.32
```
| open | 2024-07-04T12:28:52Z | 2024-08-13T19:54:14Z | https://github.com/plotly/dash/issues/2912 | [
"bug",
"P3"
] | AlmogHadad | 0 |
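For anyone trying to reproduce this, a minimal sketch of the figure setup being described (in a Dash app, the same figure with an unchanged `uirevision` would be returned from the callback on every update):

```python
import numpy as np
import plotly.graph_objects as go

fig = go.Figure(
    go.Scatter3d(x=np.arange(10), y=np.arange(10), z=np.arange(10), mode="markers")
)
# Keep user interactions (camera, zoom, ...) across callback updates.
fig.update_layout(uirevision="foo")
# Switch the 3D scene to orthographic projection; with this set, the user's
# camera changes are reportedly lost on the next update.
fig.layout.scene.camera.projection.type = "orthographic"
```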
prkumar/uplink | rest-api | 189 | Expose a request hook: i.e., @response_handler for requests | **Is your feature request related to a problem? Please describe.**
Creating this issue as a follow up of a question asked on [gitter](https://gitter.im/python-uplink/Lobby?at=5e2213bb3ea53f0f663f700a):
> is there any way to log all the executed requests?
Currently, this is not easily doable. Listening for requests is technically possible by using a `RequestTemplate`. You could define your own `RequestTemplate` subclass and implement the `before_request` method to log each request sent. For an example of how to define and register a RetryTemplate, you can take a look at the `retry` decorator.
**Describe the solution you'd like**
We could expose a `@request_handler` decorator that enables users to add hooks that execute before a request is sent.
**Additional context**
As part of this work, we should define and expose a simplified interface of the user's request.
| open | 2020-01-19T22:35:24Z | 2023-05-19T16:51:26Z | https://github.com/prkumar/uplink/issues/189 | [] | prkumar | 5 |
vitalik/django-ninja | rest-api | 385 | Cursor pagination | **Is your feature request related to a problem? Please describe.**
The current way to implement pagination is based on limit/offset. I prefer cursor-style pagination.
**Describe the solution you'd like**
A standard way to implement cursor-style pagination, similar to the DRF one:
https://www.django-rest-framework.org/api-guide/pagination/#cursorpagination
| open | 2022-03-08T10:04:00Z | 2024-06-07T15:52:10Z | https://github.com/vitalik/django-ninja/issues/385 | [] | asaff1 | 2 |
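Until this lands in the library, a rough sketch of a pk-based cursor pagination built on django-ninja's custom-pagination hook; the field names and the exact base-class signature are assumptions, so check them against the version you run:

```python
from typing import Any, List, Optional

from ninja import Schema
from ninja.pagination import PaginationBase


class CursorPagination(PaginationBase):
    page_size = 50

    class Input(Schema):
        cursor: Optional[int] = None        # pk of the last item already seen

    class Output(Schema):
        items: List[Any]
        next_cursor: Optional[int] = None

    def paginate_queryset(self, queryset, pagination: Input, **params):
        qs = queryset.order_by("pk")
        if pagination.cursor is not None:
            qs = qs.filter(pk__gt=pagination.cursor)
        items = list(qs[: self.page_size])
        next_cursor = items[-1].pk if len(items) == self.page_size else None
        return {"items": items, "next_cursor": next_cursor}
```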
plotly/dash-bio | dash | 290 | App QA 2: Molecule viewer | - [x] Some more cool proteins to visualize in sample datasets:
http://pdb101.rcsb.org/motm/231
http://pdb101.rcsb.org/motm/121
http://pdb101.rcsb.org/browse/nucleic-acids
The P53 - https://www.rcsb.org/structure/1uol
- [x] Which DNA and Protein are these

Would be cool to have a description when change in dropdown
- [ ] Check out PyMOL for app ideas and UI

- PYMOL shows selection at top and uses a table on the right for what you've selected. Black background is nice. | closed | 2019-03-31T04:55:13Z | 2019-04-29T19:11:09Z | https://github.com/plotly/dash-bio/issues/290 | [
"App QA"
] | jackparmer | 1 |
thp/urlwatch | automation | 701 | How to use environment variables in URLs? | I have a question about using environment variables inside a job.
```yaml
name: "my job"
url: https://mysite.org?api_key=${env variable}
filter:
- some filter...
```
Is this possible today? | open | 2022-04-11T16:38:17Z | 2022-04-11T22:01:09Z | https://github.com/thp/urlwatch/issues/701 | [] | throwaway29345 | 1 |
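If plain `url:` jobs don't expand variables, one workaround (assuming your urlwatch version supports shell `command` jobs) is to let the shell do the expansion; the variable name below is made up:

```yaml
name: "my job"
command: curl -fsS "https://mysite.org?api_key=${MYSITE_API_KEY}"
filter:
  - some filter...
```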
geex-arts/django-jet | django | 171 | Error when trying to add item with pk not of Integer type | I have a table whose pk is a string, and when I try to add an item to this table django-jet crashes with the error:
```
ValueError: invalid literal for int() with base 10: ''
```
All the other tables with numeric PKs work fine. django-jet version 1.0.4
Here is the complete stack trace:
```
Internal Server Error: /admin/inventory/item/add/
Traceback (most recent call last):
File "/Users/delio/.virtualenvs/inventory/lib/python3.5/site-packages/django/core/handlers/exception.py", line 39, in inner
response = get_response(request)
File "/Users/delio/.virtualenvs/inventory/lib/python3.5/site-packages/django/core/handlers/base.py", line 249, in _legacy_get_response
response = self._get_response(request)
File "/Users/delio/.virtualenvs/inventory/lib/python3.5/site-packages/django/core/handlers/base.py", line 187, in _get_response
response = self.process_exception_by_middleware(e, request)
File "/Users/delio/.virtualenvs/inventory/lib/python3.5/site-packages/django/core/handlers/base.py", line 185, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/usr/local/Cellar/python3/3.5.1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/contextlib.py", line 30, in inner
return func(*args, **kwds)
File "/Users/delio/.virtualenvs/inventory/lib/python3.5/site-packages/django/contrib/admin/options.py", line 544, in wrapper
return self.admin_site.admin_view(view)(*args, **kwargs)
File "/Users/delio/.virtualenvs/inventory/lib/python3.5/site-packages/django/utils/decorators.py", line 149, in _wrapped_view
response = view_func(request, *args, **kwargs)
File "/Users/delio/.virtualenvs/inventory/lib/python3.5/site-packages/django/views/decorators/cache.py", line 57, in _wrapped_view_func
response = view_func(request, *args, **kwargs)
File "/Users/delio/.virtualenvs/inventory/lib/python3.5/site-packages/django/contrib/admin/sites.py", line 211, in inner
return view(request, *args, **kwargs)
File "/Users/delio/.virtualenvs/inventory/lib/python3.5/site-packages/django/contrib/admin/options.py", line 1509, in add_view
return self.changeform_view(request, None, form_url, extra_context)
File "/Users/delio/.virtualenvs/inventory/lib/python3.5/site-packages/django/utils/decorators.py", line 67, in _wrapper
return bound_func(*args, **kwargs)
File "/Users/delio/.virtualenvs/inventory/lib/python3.5/site-packages/django/utils/decorators.py", line 149, in _wrapped_view
response = view_func(request, *args, **kwargs)
File "/Users/delio/.virtualenvs/inventory/lib/python3.5/site-packages/django/utils/decorators.py", line 63, in bound_func
return func.__get__(self, type(self))(*args2, **kwargs2)
File "/usr/local/Cellar/python3/3.5.1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/contextlib.py", line 30, in inner
return func(*args, **kwds)
File "/Users/delio/.virtualenvs/inventory/lib/python3.5/site-packages/django/contrib/admin/options.py", line 1464, in changeform_view
formsets, inline_instances = self._create_formsets(request, form.instance, change=False)
File "/Users/delio/.virtualenvs/inventory/lib/python3.5/site-packages/django/contrib/admin/options.py", line 1815, in _create_formsets
formsets.append(FormSet(**formset_params))
File "/Users/delio/.virtualenvs/inventory/lib/python3.5/site-packages/django/contrib/contenttypes/forms.py", line 30, in __init__
self.ct_fk_field.name: self.instance.pk,
File "/Users/delio/.virtualenvs/inventory/lib/python3.5/site-packages/django/db/models/query.py", line 796, in filter
return self._filter_or_exclude(False, *args, **kwargs)
File "/Users/delio/.virtualenvs/inventory/lib/python3.5/site-packages/django/db/models/query.py", line 814, in _filter_or_exclude
clone.query.add_q(Q(*args, **kwargs))
File "/Users/delio/.virtualenvs/inventory/lib/python3.5/site-packages/django/db/models/sql/query.py", line 1227, in add_q
clause, _ = self._add_q(q_object, self.used_aliases)
File "/Users/delio/.virtualenvs/inventory/lib/python3.5/site-packages/django/db/models/sql/query.py", line 1253, in _add_q
allow_joins=allow_joins, split_subq=split_subq,
File "/Users/delio/.virtualenvs/inventory/lib/python3.5/site-packages/django/db/models/sql/query.py", line 1187, in build_filter
condition = self.build_lookup(lookups, col, value)
File "/Users/delio/.virtualenvs/inventory/lib/python3.5/site-packages/django/db/models/sql/query.py", line 1083, in build_lookup
return final_lookup(lhs, rhs)
File "/Users/delio/.virtualenvs/inventory/lib/python3.5/site-packages/django/db/models/lookups.py", line 19, in __init__
self.rhs = self.get_prep_lookup()
File "/Users/delio/.virtualenvs/inventory/lib/python3.5/site-packages/django/db/models/lookups.py", line 59, in get_prep_lookup
return self.lhs.output_field.get_prep_value(self.rhs)
File "/Users/delio/.virtualenvs/inventory/lib/python3.5/site-packages/django/db/models/fields/__init__.py", line 1832, in get_prep_value
return int(value)
ValueError: invalid literal for int() with base 10: ''
[30/Jan/2017 14:02:20] "GET /admin/inventory/item/add/ HTTP/1.1" 500 186330
``` | closed | 2017-01-30T21:05:13Z | 2018-08-27T18:25:15Z | https://github.com/geex-arts/django-jet/issues/171 | [] | jangeador | 5 |
errbotio/errbot | automation | 1,388 | Errbot --init shouldn't crash if plugins/ exists | ### I am...
* [ ] Reporting a bug
* [X] Suggesting a new feature
* [ ] Requesting help with running my bot
* [ ] Requesting help writing plugins
* [ ] Here about something else
### I am running...
* Errbot version: 6.1.1
* OS version: Ubuntu 18.04 LTS
* Python version: 3.7.3
* Using a virtual environment: yes
### Issue description
`errbot --init` crashes if the plugin or data subdirectories already exist. This prevents the command from being idempotent (once you correct an error that caused a failure, you can't just rerun the script).
It can also be useful to initialise with plugins already present in the `plugins/` directory. | closed | 2019-09-30T17:43:42Z | 2020-01-19T01:21:57Z | https://github.com/errbotio/errbot/issues/1388 | [
"type: bug",
"#configuration"
] | torgeirl | 2 |
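The change being asked for is small; a sketch of an idempotent directory-creation step (paths are illustrative, not errbot's actual layout):

```python
import os


def init_bot_tree(base_dir: str) -> None:
    """Create the bot directory skeleton without failing on reruns."""
    for sub in ("plugins", "data"):
        # exist_ok=True makes a second `--init` run a no-op for these dirs.
        os.makedirs(os.path.join(base_dir, sub), exist_ok=True)
```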
junyanz/pytorch-CycleGAN-and-pix2pix | deep-learning | 1,066 | Having a problem at the trained-model loading stage | Hi!
Yesterday I trained my model with my own data for 300 epochs. However, when I want to reuse the model on the test set, it comes up with the following error:
`Traceback (most recent call last):
File "pix2pix/test.py", line 39, in <module>
model.setup(opt) # regular setup: load and print networks; create schedulers
File "/data/chengzihao/intern-programme/code/oneWeekPrice/pix2pix/models/base_model.py", line 88, in setup
self.load_networks(load_suffix)
File "/data/chengzihao/intern-programme/code/oneWeekPrice/pix2pix/models/base_model.py", line 198, in load_networks
net.load_state_dict(state_dict)
File "/home/xiangwei/anaconda2/envs/tensorflow3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 719, in load_state_dict
self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for UnetGenerator:
Missing key(s) in state_dict: "model.model.1.model.2.weight", "model.model.1.model.2.bias", "model.model.1.model.2.running_mean", "model.model.1.model.2.running_var", "model.model.1.model.3.model.2.weight", "model.model.1.model.3.model.2.bias", "model.model.1.model.3.model.2.running_mean", "model.model.1.model.3.model.2.running_var", "model.model.1.model.3.model.3.model.2.weight", "model.model.1.model.3.model.3.model.2.bias", "model.model.1.model.3.model.3.model.2.running_mean", "model.model.1.model.3.model.3.model.2.running_var", "model.model.1.model.3.model.3.model.3.model.2.weight", "model.model.1.model.3.model.3.model.3.model.2.bias", "model.model.1.model.3.model.3.model.3.model.2.running_mean", "model.model.1.model.3.model.3.model.3.model.2.running_var", "model.model.1.model.3.model.3.model.3.model.3.model.2.weight", "model.model.1.model.3.model.3.model.3.model.3.model.2.bias", "model.model.1.model.3.model.3.model.3.model.3.model.2.running_mean", "model.model.1.model.3.model.3.model.3.model.3.model.2.running_var", "model.model.1.model.3.model.3.model.3.model.3.model.3.model.2.weight", "model.model.1.model.3.model.3.model.3.model.3.model.3.model.2.bias", "model.model.1.model.3.model.3.model.3.model.3.model.3.model.2.running_mean", "model.model.1.model.3.model.3.model.3.model.3.model.3.model.2.running_var", "model.model.1.model.3.model.3.model.3.model.3.model.3.model.3.model.4.weight", "model.model.1.model.3.model.3.model.3.model.3.model.3.model.3.model.4.bias", "model.model.1.model.3.model.3.model.3.model.3.model.3.model.3.model.4.running_mean", "model.model.1.model.3.model.3.model.3.model.3.model.3.model.3.model.4.running_var", "model.model.1.model.3.model.3.model.3.model.3.model.3.model.6.weight", "model.model.1.model.3.model.3.model.3.model.3.model.3.model.6.bias", "model.model.1.model.3.model.3.model.3.model.3.model.3.model.6.running_mean", "model.model.1.model.3.model.3.model.3.model.3.model.3.model.6.running_var", "model.model.1.model.3.model.3.model.3.model.3.model.6.weight", "model.model.1.model.3.model.3.model.3.model.3.model.6.bias", "model.model.1.model.3.model.3.model.3.model.3.model.6.running_mean", "model.model.1.model.3.model.3.model.3.model.3.model.6.running_var", "model.model.1.model.3.model.3.model.3.model.6.weight", "model.model.1.model.3.model.3.model.3.model.6.bias", "model.model.1.model.3.model.3.model.3.model.6.running_mean", "model.model.1.model.3.model.3.model.3.model.6.running_var", "model.model.1.model.3.model.3.model.6.weight", "model.model.1.model.3.model.3.model.6.bias", "model.model.1.model.3.model.3.model.6.running_mean", "model.model.1.model.3.model.3.model.6.running_var", "model.model.1.model.3.model.6.weight", "model.model.1.model.3.model.6.bias", "model.model.1.model.3.model.6.running_mean", "model.model.1.model.3.model.6.running_var", "model.model.1.model.6.weight", "model.model.1.model.6.bias", "model.model.1.model.6.running_mean", "model.model.1.model.6.running_var".
Unexpected key(s) in state_dict: "model.model.0.bias", "model.model.1.model.1.bias", "model.model.1.model.3.model.1.bias", "model.model.1.model.3.model.3.model.1.bias", "model.model.1.model.3.model.3.model.3.model.1.bias", "model.model.1.model.3.model.3.model.3.model.3.model.1.bias", "model.model.1.model.3.model.3.model.3.model.3.model.3.model.1.bias", "model.model.1.model.3.model.3.model.3.model.3.model.3.model.3.model.1.bias", "model.model.1.model.3.model.3.model.3.model.3.model.3.model.3.model.3.bias", "model.model.1.model.3.model.3.model.3.model.3.model.3.model.5.bias", "model.model.1.model.3.model.3.model.3.model.3.model.5.bias", "model.model.1.model.3.model.3.model.3.model.5.bias", "model.model.1.model.3.model.3.model.5.bias", "model.model.1.model.3.model.5.bias", "model.model.1.model.5.bias". `
It seems like the state dict stored in the model file cannot be loaded. The hyperparameters I used in the training step are as follows:
> ----------------- Options ---------------
aspect_ratio: 1.0
batch_size: 1
beta1: 0.5
checkpoints_dir: pix2pix/checkpoints [default: ./checkpoints]
continue_train: True [default: False]
crop_size: 256
dataroot: pix2pix/datasets/0610-v2-fixPosition [default: None]
dataset_mode: aligned
direction: AtoB
display_env: main
display_freq: 300 [default: 400]
display_id: 1
display_ncols: 4
display_port: 8097
display_server: http://localhost
display_winsize: 256
epoch: 150 [default: latest]
epoch_count: 151 [default: 1]
gan_mode: lsgan [default: vanilla]
gpu_ids: 0,1 [default: 0]
init_gain: 0.02
init_type: normal
input_nc: 1 [default: 3]
isTrain: True [default: None]
lambda_L1: 20.0 [default: 100.0]
load_iter: 0 [default: 0]
load_size: 286
lr: 0.0002
lr_decay_iters: 50
lr_policy: linear
max_dataset_size: 3000 [default: inf]
model: pix2pix
n_epochs: 30 [default: 100]
n_epochs_decay: 270 [default: 100]
n_layers_D: 3
name: 0610v2 [default: experiment_name]
ndf: 32 [default: 64]
netD: basic
netG: unet_256
ngf: 64
no_dropout: False
no_flip: True [default: False]
no_html: False
norm: instance [default: batch]
num_threads: 3 [default: 4]
output_nc: 1 [default: 3]
phase: train
pool_size: 0
preprocess: none [default: resize_and_crop]
print_freq: 300 [default: 100]
save_by_iter: False
save_epoch_freq: 2 [default: 5]
save_latest_freq: 5000
serial_batches: False
suffix:
update_html_freq: 1000
verbose: False
----------------- End -------------------
Actually, I have reused the same code before, and this is the first time I have run into such a problem. Could someone give me some advice?
Thanks a lot! | closed | 2020-06-11T01:47:56Z | 2020-06-11T05:34:37Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1066 | [] | Orange199609 | 1 |
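One possibility consistent with the printed options: the checkpoint was trained with `--norm instance` (convolutions keep their bias, no running statistics), while `test.py` rebuilt the generator with the default `--norm batch`, which expects `running_mean`/`running_var` buffers; that matches the missing and unexpected keys above. A sketch of a test invocation that mirrors the training options (adjust paths to your setup):

```none
python pix2pix/test.py \
    --dataroot pix2pix/datasets/0610-v2-fixPosition \
    --name 0610v2 --model pix2pix --direction AtoB \
    --netG unet_256 --ngf 64 --norm instance \
    --input_nc 1 --output_nc 1 --preprocess none --epoch 150
```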
junyanz/pytorch-CycleGAN-and-pix2pix | computer-vision | 947 | Running this repository on Windows? | @andyli @heaversm @junyanz
I am training it on the Windows platform. Training works fine except for one small change:
I set num_threads=0.
When num_threads=0, training runs without error on Windows; otherwise it gives the following error:
> ForkingPickler(file, protocol).dump(obj)
> BrokenPipeError: [Errno 32] Broken pipe
Is there any way to solve it?
Will num_threads=0 reduce the accuracy or not? | open | 2020-03-05T03:09:01Z | 2020-03-08T21:14:38Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/947 | [] | K-M-Ibrahim-Khalilullah | 1 |
comfyanonymous/ComfyUI | pytorch | 7,301 | LoadImage returns a 64x64 mask even when there's no mask painted | ### Expected Behavior
LoadImage should return "None" for Mask, when no mask is painted
### Actual Behavior
LoadImage returns a 64x64 mask even when there's no mask painted
### Steps to Reproduce
LoadImage -> MaskToImage -> PreviewImage
### Debug Logs
```powershell
n/a
```
### Other
this affects "mask optional" nodes, as I cannot just connect things together, as I _always_ get a mask, even when none is painted. | open | 2025-03-18T15:20:50Z | 2025-03-19T17:43:16Z | https://github.com/comfyanonymous/ComfyUI/issues/7301 | [
"Potential Bug"
] | ghostsquad | 12 |
dynaconf/dynaconf | django | 824 | [RFC] Support multidoc yaml files | **Is your feature request related to a problem? Please describe.**
Sometimes it can be difficult or impossible to pass multiple files with config fragments. YAML supports multiple documents in one file, and `safe_load_all` from the PyYAML API loads them accordingly. It is a standard YAML feature; it would be nice to support it and make it usable in cases where passing one file (composed from several files) is easier.
**Describe the solution you'd like**
Support `safe_load_all` as the YAML loader.
**Describe alternatives you've considered**
Passing multiple files does the job; however, it is not always straightforward.
**Additional context**
I have prepared a patch
| closed | 2022-10-30T08:14:30Z | 2023-07-18T20:24:08Z | https://github.com/dynaconf/dynaconf/issues/824 | [
"Not a Bug",
"RFC"
] | mangan | 0 |
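For context, this is the standard PyYAML behaviour referred to above: `safe_load_all` yields one object per `---`-separated document.

```python
import yaml

MULTIDOC = """\
default:
  name: "base"
---
development:
  name: "dev override"
"""

for document in yaml.safe_load_all(MULTIDOC):
    print(document)
# {'default': {'name': 'base'}}
# {'development': {'name': 'dev override'}}
```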
FactoryBoy/factory_boy | django | 407 | Change default faker locale in factory_boy | How can I set the default locale in Python's factory_boy for all of my Factories?
The docs say that one should set it with `factory.Faker.override_default_locale`, but that does nothing to my fakers...
```python
import factory
from app.models import Example
from custom_fakers import CustomFakers
# I use custom fakers, this indeed are added
factory.Faker.add_provider(CustomFakers)
# But not default locales
factory.Faker.override_default_locale('es_ES')
class ExampleFactory(factory.django.DjangoModelFactory):
class Meta:
model = Example
name = factory.Faker('first_name')
>>> from example import ExampleFactory
>>> e1 = ExampleFactory()
>>> e1.name
>>> u'Chad'
``` | open | 2017-08-21T13:38:15Z | 2021-01-16T15:44:35Z | https://github.com/FactoryBoy/factory_boy/issues/407 | [
"Feature",
"Doc"
] | 3ynm | 10 |
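For what it's worth, `override_default_locale` is documented as a context manager (at least in recent versions), so it only affects Faker values evaluated while the block is active; a sketch based on the factory from the question:

```python
import factory
from app.models import Example


class ExampleFactory(factory.django.DjangoModelFactory):
    class Meta:
        model = Example

    name = factory.Faker("first_name")


# The override applies while instances are being built inside the block.
with factory.Faker.override_default_locale("es_ES"):
    e1 = ExampleFactory()
    print(e1.name)  # drawn from the es_ES providers
```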
dynaconf/dynaconf | fastapi | 559 | Add an example with Django and Celery | Add Celery to the Django example and documentation to django.md.
Related to: https://stackoverflow.com/questions/66016398/how-to-use-dynaconf-to-configure-celery/66769693#66769693 | open | 2021-03-23T19:06:02Z | 2022-06-29T13:57:06Z | https://github.com/dynaconf/dynaconf/issues/559 | [
"wontfix",
"Docs",
"django"
] | rochacbruno | 1 |
plotly/dash-table | plotly | 804 | Bootstrap.min.css breaks Dropdown | I was using this in the dash external stylesheets:
* https://maxcdn.bootstrapcdn.com/bootstrap/3.4.0/css/bootstrap.min.css
I ended up removing that stylesheet and functionality returned to the dropdown within the datatable.
I'm not sure if you can fix the problem (or find the problem within that stylesheet), but hopefully someone can resolve their issue faster after reading this potentially invalid bug report. | open | 2020-07-15T11:52:07Z | 2021-07-19T12:36:15Z | https://github.com/plotly/dash-table/issues/804 | [] | rbrasga | 3 |
ets-labs/python-dependency-injector | flask | 4 | Callable provider | Need to create `Callable` provider. Current `Function` provider is a _static_ provider of an function object.
Idea of `Callable` is to provide an callable function with some predefined dependencies.
| closed | 2015-01-27T21:42:22Z | 2015-01-27T22:51:12Z | https://github.com/ets-labs/python-dependency-injector/issues/4 | [
"enhancement"
] | rmk135 | 0 |
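A sketch of what such a provider could look like from the caller's side (names are illustrative): some dependencies are wired up front and the remaining arguments are supplied at call time.

```python
from dependency_injector import providers


def send_email(smtp_host: str, to: str, body: str) -> str:
    return f"sent via {smtp_host} to {to}: {body}"


# Pre-bind the dependency; leave the per-call arguments open.
email_sender = providers.Callable(send_email, smtp_host="smtp.example.com")

print(email_sender(to="user@example.com", body="hello"))
```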
lanpa/tensorboardX | numpy | 606 | GCS Connection Error | **Describe the bug**
Got a `ConnectionError` while training. Looking at the traceback, it seems that when `event_file_writer` does `flush()` and a connection error occurs, that thread hangs, and training hangs with it.
**Expected behavior**
Training to complete without any errors.
**Traceback**
```
Exception in thread Thread-65:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/urllib3/connectionpool.py", line 677, in urlopen
chunked=chunked,
File "/usr/local/lib/python3.7/dist-packages/urllib3/connectionpool.py", line 426, in _make_request
six.raise_from(e, None)
File "<string>", line 3, in raise_from
File "/usr/local/lib/python3.7/dist-packages/urllib3/connectionpool.py", line 421, in _make_request
httplib_response = conn.getresponse()
File "/usr/lib/python3.7/http/client.py", line 1344, in getresponse
response.begin()
File "/usr/lib/python3.7/http/client.py", line 306, in begin
version, status, reason = self._read_status()
File "/usr/lib/python3.7/http/client.py", line 275, in _read_status
raise RemoteDisconnected("Remote end closed connection without"
http.client.RemoteDisconnected: Remote end closed connection without response
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/requests/adapters.py", line 449, in send
timeout=timeout
File "/usr/local/lib/python3.7/dist-packages/urllib3/connectionpool.py", line 727, in urlopen
method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
File "/usr/local/lib/python3.7/dist-packages/urllib3/util/retry.py", line 403, in increment
raise six.reraise(type(error), error, _stacktrace)
File "/usr/local/lib/python3.7/dist-packages/urllib3/packages/six.py", line 734, in reraise
raise value.with_traceback(tb)
File "/usr/local/lib/python3.7/dist-packages/urllib3/connectionpool.py", line 677, in urlopen
chunked=chunked,
File "/usr/local/lib/python3.7/dist-packages/urllib3/connectionpool.py", line 426, in _make_request
six.raise_from(e, None)
File "<string>", line 3, in raise_from
File "/usr/local/lib/python3.7/dist-packages/urllib3/connectionpool.py", line 421, in _make_request
httplib_response = conn.getresponse()
File "/usr/lib/python3.7/http/client.py", line 1344, in getresponse
response.begin()
File "/usr/lib/python3.7/http/client.py", line 306, in begin
version, status, reason = self._read_status()
File "/usr/lib/python3.7/http/client.py", line 275, in _read_status
raise RemoteDisconnected("Remote end closed connection without"
urllib3.exceptions.ProtocolError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.7/threading.py", line 926, in _bootstrap_inner
self.run()
File "/usr/local/lib/python3.7/dist-packages/tensorboardX/event_file_writer.py", line 219, in run
self._record_writer.flush()
File "/usr/local/lib/python3.7/dist-packages/tensorboardX/event_file_writer.py", line 69, in flush
self._py_recordio_writer.flush()
File "/usr/local/lib/python3.7/dist-packages/tensorboardX/record_writer.py", line 187, in flush
self._writer.flush()
File "/usr/local/lib/python3.7/dist-packages/tensorboardX/record_writer.py", line 149, in flush
self.blob.upload_from_string(upload_buffer.getvalue())
File "/usr/local/lib/python3.7/dist-packages/google/cloud/storage/blob.py", line 1733, in upload_from_string
if_metageneration_not_match=if_metageneration_not_match,
File "/usr/local/lib/python3.7/dist-packages/google/cloud/storage/blob.py", line 1567, in upload_from_file
if_metageneration_not_match,
File "/usr/local/lib/python3.7/dist-packages/google/cloud/storage/blob.py", line 1420, in _do_upload
if_metageneration_not_match,
File "/usr/local/lib/python3.7/dist-packages/google/cloud/storage/blob.py", line 1098, in _do_multipart_upload
response = upload.transmit(transport, data, object_metadata, content_type)
File "/usr/local/lib/python3.7/dist-packages/google/resumable_media/requests/upload.py", line 106, in transmit
retry_strategy=self._retry_strategy,
File "/usr/local/lib/python3.7/dist-packages/google/resumable_media/requests/_helpers.py", line 136, in http_request
return _helpers.wait_and_retry(func, RequestsMixin._get_status_code, retry_strategy)
File "/usr/local/lib/python3.7/dist-packages/google/resumable_media/_helpers.py", line 150, in wait_and_retry
response = func()
File "/usr/local/lib/python3.7/dist-packages/google/auth/transport/requests.py", line 470, in request
**kwargs
File "/usr/local/lib/python3.7/dist-packages/requests/sessions.py", line 530, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python3.7/dist-packages/requests/sessions.py", line 643, in send
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/requests/adapters.py", line 498, in send
raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
```
Seems like there's already a PR that handles connection errors for S3: https://github.com/lanpa/tensorboardX/pull/555 | open | 2020-10-12T22:11:31Z | 2020-10-13T17:32:44Z | https://github.com/lanpa/tensorboardX/issues/606 | [] | Saurav-D | 0 |
feder-cr/Jobs_Applier_AI_Agent_AIHawk | automation | 717 | [FEATURE]: Make Cover Letter file name not use UNIX time | ### Feature summary
Cover Letter File Name with Job
### Feature description
Instead of having the the cover letter file name generated with the unix time, I propose making the job name and then just appending `(1)` if there are conflicts. Employers can potentially see this and it looks like a bot.
### Motivation
Hide that this is a bot
### Alternatives considered
I think there are a lot of possible solutions to this, but the current one should be changed.
### Additional context
_No response_ | closed | 2024-11-03T03:02:17Z | 2024-11-19T23:45:41Z | https://github.com/feder-cr/Jobs_Applier_AI_Agent_AIHawk/issues/717 | [
"enhancement"
] | FrancescoVassalli | 3 |
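A sketch of the proposed naming scheme, using only the standard library (the directory layout and file extension are made up):

```python
from pathlib import Path


def cover_letter_path(output_dir: str, job_title: str) -> Path:
    """Name the file after the job, appending (1), (2), ... on conflicts."""
    safe_title = "".join(c for c in job_title if c.isalnum() or c in " -_").strip()
    candidate = Path(output_dir) / f"Cover Letter - {safe_title}.pdf"
    n = 1
    while candidate.exists():
        candidate = Path(output_dir) / f"Cover Letter - {safe_title} ({n}).pdf"
        n += 1
    return candidate
```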
hankcs/HanLP | nlp | 1,255 | [Suggestion] Could the author assign tasks for multiple contributors and review the results? | First of all, thank you very much for providing such a great open-source project as HanLP. As far as I know, HanLP is still maintained by you alone.
Could you, or an expert team you assemble (for example an HanLP committee QQ group), act as gatekeepers: break HanLP's work down into easy-to-implement items, let enthusiastic community members claim and develop the tasks, and then collect and merge the submissions?
This way, the author could focus on the core parts and would not have to do so much manual labour.
Of course, this is only a suggestion and may not be feasible; actually implementing it would require working out many detailed rules. | closed | 2019-07-31T08:16:21Z | 2020-01-01T10:48:57Z | https://github.com/hankcs/HanLP/issues/1255 | [
"ignored"
] | xiyuan27 | 2 |