repo_name (string, 9-75 chars) | topic (string, 30 classes) | issue_number (int64, 1-203k) | title (string, 1-976 chars) | body (string, 0-254k chars) | state (string, 2 classes) | created_at (string, 20 chars) | updated_at (string, 20 chars) | url (string, 38-105 chars) | labels (sequence, 0-9 items) | user_login (string, 1-39 chars) | comments_count (int64, 0-452) |
---|---|---|---|---|---|---|---|---|---|---|---|
ymcui/Chinese-BERT-wwm | nlp | 55 | What kind of setup is needed to fine-tune RoBERTa-large? | closed | 2019-10-15T03:26:28Z | 2019-10-16T00:00:39Z | https://github.com/ymcui/Chinese-BERT-wwm/issues/55 | [] | xiongma | 2 |
|
Layout-Parser/layout-parser | computer-vision | 42 | Bad support for PIL images in `crop_image` APIs | `box.crop_image(image)` does not accept PIL images. The current method requires a manual conversion, `box.crop_image(np.array(image))`, beforehand. | open | 2021-05-05T20:52:03Z | 2021-05-05T20:52:03Z | https://github.com/Layout-Parser/layout-parser/issues/42 | [
"bug"
] | lolipopshock | 0 |
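A small workaround sketch for the manual conversion described in the issue above, assuming a PIL image loaded from a placeholder path and a hand-constructed layout-parser block (in practice the block would come from a layout model):

```python
import numpy as np
from PIL import Image
import layoutparser as lp

# Load a page with PIL (placeholder path).
pil_image = Image.open("page.png").convert("RGB")

# A hand-made block for illustration; normally this comes from a layout model.
block = lp.TextBlock(lp.Rectangle(50, 50, 300, 200))

# crop_image expects a NumPy array, so convert the PIL image first ...
crop = block.crop_image(np.array(pil_image))

# ... and convert back to PIL afterwards if the rest of the pipeline needs it.
crop_pil = Image.fromarray(crop)
print(crop_pil.size)
```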
MaartenGr/BERTopic | nlp | 1,347 | Using Vectorizer Model After Updating Topics (after reducing outliers) | Hello,
Do you have any advice on how to repeat the text preprocessing steps after updating a topic model (following the outlier reduction technique)? I was able to get clean representative words using topic_model.visualize_barchart before calling topic_model.update_topics. However, none of my vectorizer_model settings from the original model transfer to the updated one. Are there any steps I am missing? Basically, my visualization of top words consists mainly of stopwords after updating topics, so I assume the text needs to be cleaned again.
<img width="976" alt="Screenshot 2023-06-16 at 7 49 07 PM" src="https://github.com/MaartenGr/BERTopic/assets/127628938/5c178954-aa72-4ac7-920a-c2742c535c3c">
| closed | 2023-06-16T23:47:27Z | 2023-06-17T00:31:33Z | https://github.com/MaartenGr/BERTopic/issues/1347 | [] | volhakatebi | 1 |
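A minimal sketch, based on BERTopic's documented `update_topics` signature, of passing the same `vectorizer_model` again after outlier reduction so the updated representations keep the original preprocessing; the corpus and stop-word settings here are placeholders:

```python
from bertopic import BERTopic
from sklearn.feature_extraction.text import CountVectorizer

docs = [f"document {i} about topic modeling and clustering" for i in range(1000)]  # stand-in corpus

# The vectorizer settings used for the original model.
vectorizer_model = CountVectorizer(stop_words="english", ngram_range=(1, 2))

topic_model = BERTopic(vectorizer_model=vectorizer_model)
topics, probs = topic_model.fit_transform(docs)

# Reduce outliers, then rebuild the topic representations with the same vectorizer
# instead of letting update_topics fall back to a default CountVectorizer.
new_topics = topic_model.reduce_outliers(docs, topics)
topic_model.update_topics(docs, topics=new_topics, vectorizer_model=vectorizer_model)

topic_model.visualize_barchart()
```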
twopirllc/pandas-ta | pandas | 109 | Inside bar indicator request | Hi, can you please add the inside bar indicator from cma?
link: https://www.tradingview.com/script/IyIGN1WO-Inside-Bar/
I couldn't find it in the readme area, unless it is listed under a different name.
This indicator seems to have around 900 likes (just saying, in case that increases your motivation to add it XD).
Thanks
| closed | 2020-09-03T18:08:55Z | 2020-09-08T18:50:51Z | https://github.com/twopirllc/pandas-ta/issues/109 | [
"enhancement",
"good first issue"
] | SoftDevDanial | 8 |
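Not the linked TradingView script, just a rough pandas sketch of the usual inside-bar definition (the current bar's high and low sit inside the previous bar's range), in case it helps scope the request; the `high`/`low` column names are assumptions:

```python
import pandas as pd

def inside_bar(df: pd.DataFrame) -> pd.Series:
    """Return a boolean Series that is True where the bar is an inside bar."""
    prev_high = df["high"].shift(1)
    prev_low = df["low"].shift(1)
    return (df["high"] <= prev_high) & (df["low"] >= prev_low)

# Example usage with a tiny OHLC frame.
df = pd.DataFrame({
    "high": [10.0, 9.5, 9.8, 10.2],
    "low":  [ 9.0, 9.2, 9.3,  9.1],
})
df["inside_bar"] = inside_bar(df)
print(df)
```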
tatsu-lab/stanford_alpaca | deep-learning | 188 | An important question about pre-instructions ("below is an instruction...") | Every training example starts with a pre-instruction prompt:
"Below is an instruction that describes a task. Write a response that appropriately completes the request."
(or with the +input version of the above.)
I would like to understand why, and where this format comes from (the InstructGPT paper?).
Since there are already instructions in the dataset, what purpose does it serve to prepend this additional layer, which is exactly the same for every example? | closed | 2023-04-06T20:54:04Z | 2023-05-05T18:43:47Z | https://github.com/tatsu-lab/stanford_alpaca/issues/188 | [] | zygmuntz | 1 |
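For reference, a small sketch of how the quoted preamble is combined with each example's instruction (and optional input) into the final training prompt; the exact wording of the with-input preamble is an assumption based on the Alpaca prompt template and should be checked against the repo's train.py:

```python
PROMPT_NO_INPUT = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

PROMPT_WITH_INPUT = (
    "Below is an instruction that describes a task, paired with an input that "
    "provides further context. Write a response that appropriately completes "
    "the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n"
)

def build_prompt(example: dict) -> str:
    # Choose the template depending on whether the example has a non-empty input.
    if example.get("input"):
        return PROMPT_WITH_INPUT.format(**example)
    return PROMPT_NO_INPUT.format(**example)

print(build_prompt({"instruction": "Name three primary colors.", "input": ""}))
```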
jumpserver/jumpserver | django | 14,455 | [Bug] IE browser Remote code execution | ### Product Version
v4.3.1
### Product Edition
- [X] Community Edition
- [ ] Enterprise Edition
- [ ] Enterprise Trial Edition
### Installation Method
- [X] Online Installation (One-click command installation)
- [ ] Offline Package Installation
- [ ] All-in-One
- [ ] 1Panel
- [ ] Kubernetes
- [ ] Source Code
### Environment Information
Ubuntu 22.04
### 🐛 Bug Description
The internal firewall blocks the jumpserver website via the IPS (intrusion prevention system) and reports the following message: HTTP Microsoft Internet Explorer Code Execution (CVE-2018-8373). Microsoft Edge is used as the browser.
### Recurrence Steps
Open the web interface with Microsoft Edge while an internal firewall IPS is enabled.
### Expected Behavior
_No response_
### Additional Information
_No response_
### Attempted Solutions
Disable IPS on the firewall for http-s traffic to the jumphost and it works. | closed | 2024-11-14T15:45:54Z | 2024-11-16T11:14:10Z | https://github.com/jumpserver/jumpserver/issues/14455 | [
"🐛 Bug"
] | Kevko1337 | 5 |
Kanaries/pygwalker | pandas | 419 | There is some error with Computation service. Here is the Error message: Cannot read properties of undefined (reading '__wbindgen_add_to_stack_pointer') | **Describe the bug**
I integrated PyGWalker with Shiny for Python following https://github.com/ObservedObserver/pygwalker-shiny/tree/main.
But when I change `ui.HTML(pyg.walk(df, spec="./viz-config.json", return_html=True, debug=False))`
to `ui.HTML(pyg.walk(df, spec="./viz-config.json", use_kernel_calc=True, return_html=True, debug=False))`,
the bug happens.
There is also no data in the Data tab.
Without the option `use_kernel_calc=True`, the program runs normally.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Versions**
- pygwalker version:
- python version
- browser
**Additional context**
Add any other context about the problem here.
| open | 2024-02-02T11:40:33Z | 2024-10-18T04:12:03Z | https://github.com/Kanaries/pygwalker/issues/419 | [
"enhancement",
"good first issue",
"P2"
] | qingfengwuhen | 7 |
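A minimal sketch of the Shiny for Python integration pattern referenced in the issue above, using the variant the report says works (without `use_kernel_calc`); the CSV path and spec file are placeholders:

```python
import pandas as pd
import pygwalker as pyg
from shiny import App, ui

df = pd.read_csv("data.csv")  # placeholder dataset

app_ui = ui.page_fluid(
    # Embed the PyGWalker explorer as raw HTML inside the Shiny page.
    ui.HTML(pyg.walk(df, spec="./viz-config.json", return_html=True, debug=False))
)

def server(input, output, session):
    pass

app = App(app_ui, server)
```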
strawberry-graphql/strawberry | asyncio | 3,126 | local tests broken because of pydantic | <!-- Provide a general summary of the bug in the title above. -->
<!--- This template is entirely optional and can be removed, but is here to help both you and us. -->
<!--- Anything on lines wrapped in comments like these will not show up in the final text. -->
## Describe the Bug
Currently it isn't possible to run the tests locally by following the contributing guide, because pydantic errors crash the pytest session.
## System Information
- Operating system: archlinux
- Strawberry version (if applicable): 0.209.2
## Additional Context
```
______________________________________________________________________________________________ ERROR collecting test session _______________________________________________________________________________________________
.venv/lib/python3.11/site-packages/_pytest/config/__init__.py:641: in _importconftest
mod = import_path(conftestpath, mode=importmode, root=rootpath)
.venv/lib/python3.11/site-packages/_pytest/pathlib.py:567: in import_path
importlib.import_module(module_name)
/usr/lib/python3.11/importlib/__init__.py:126: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
<frozen importlib._bootstrap>:1204: in _gcd_import
???
<frozen importlib._bootstrap>:1176: in _find_and_load
???
<frozen importlib._bootstrap>:1147: in _find_and_load_unlocked
???
<frozen importlib._bootstrap>:690: in _load_unlocked
???
.venv/lib/python3.11/site-packages/ddtrace/internal/module.py:220: in _exec_module
self.loader.exec_module(module)
.venv/lib/python3.11/site-packages/_pytest/assertion/rewrite.py:178: in exec_module
exec(co, module.__dict__)
tests/http/conftest.py:47: in <module>
@pytest.fixture(params=_get_http_client_classes())
.venv/lib/python3.11/site-packages/_pytest/fixtures.py:1312: in fixture
params=tuple(params) if params is not None else None,
tests/http/conftest.py:30: in _get_http_client_classes
importlib.import_module(f".{module}", package="tests.http.clients"),
/usr/lib/python3.11/importlib/__init__.py:126: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
<frozen importlib._bootstrap>:1204: in _gcd_import
???
<frozen importlib._bootstrap>:1176: in _find_and_load
???
<frozen importlib._bootstrap>:1147: in _find_and_load_unlocked
???
<frozen importlib._bootstrap>:690: in _load_unlocked
???
.venv/lib/python3.11/site-packages/ddtrace/internal/module.py:220: in _exec_module
self.loader.exec_module(module)
<frozen importlib._bootstrap_external>:940: in exec_module
???
<frozen importlib._bootstrap>:241: in _call_with_frames_removed
???
tests/http/clients/starlite.py:9: in <module>
from starlite import Request, Starlite
<frozen importlib._bootstrap>:1176: in _find_and_load
???
<frozen importlib._bootstrap>:1147: in _find_and_load_unlocked
???
<frozen importlib._bootstrap>:690: in _load_unlocked
???
.venv/lib/python3.11/site-packages/ddtrace/internal/module.py:220: in _exec_module
self.loader.exec_module(module)
.venv/lib/python3.11/site-packages/starlite/__init__.py:1: in <module>
from starlite.app import Starlite
<frozen importlib._bootstrap>:1176: in _find_and_load
???
<frozen importlib._bootstrap>:1147: in _find_and_load_unlocked
???
<frozen importlib._bootstrap>:690: in _load_unlocked
???
.venv/lib/python3.11/site-packages/ddtrace/internal/module.py:220: in _exec_module
self.loader.exec_module(module)
.venv/lib/python3.11/site-packages/starlite/app.py:6: in <module>
from pydantic_openapi_schema import construct_open_api_with_schema_class
<frozen importlib._bootstrap>:1176: in _find_and_load
???
<frozen importlib._bootstrap>:1147: in _find_and_load_unlocked
???
<frozen importlib._bootstrap>:690: in _load_unlocked
???
.venv/lib/python3.11/site-packages/ddtrace/internal/module.py:220: in _exec_module
self.loader.exec_module(module)
.venv/lib/python3.11/site-packages/pydantic_openapi_schema/__init__.py:1: in <module>
from . import v3_1_0
<frozen importlib._bootstrap>:1176: in _find_and_load
???
<frozen importlib._bootstrap>:1147: in _find_and_load_unlocked
???
<frozen importlib._bootstrap>:690: in _load_unlocked
???
.venv/lib/python3.11/site-packages/ddtrace/internal/module.py:220: in _exec_module
self.loader.exec_module(module)
.venv/lib/python3.11/site-packages/pydantic_openapi_schema/v3_1_0/__init__.py:9: in <module>
from .components import Components
<frozen importlib._bootstrap>:1176: in _find_and_load
???
<frozen importlib._bootstrap>:1147: in _find_and_load_unlocked
???
<frozen importlib._bootstrap>:690: in _load_unlocked
???
.venv/lib/python3.11/site-packages/ddtrace/internal/module.py:220: in _exec_module
self.loader.exec_module(module)
.venv/lib/python3.11/site-packages/pydantic_openapi_schema/v3_1_0/components.py:7: in <module>
from .header import Header
<frozen importlib._bootstrap>:1176: in _find_and_load
???
<frozen importlib._bootstrap>:1147: in _find_and_load_unlocked
???
<frozen importlib._bootstrap>:690: in _load_unlocked
???
.venv/lib/python3.11/site-packages/ddtrace/internal/module.py:220: in _exec_module
self.loader.exec_module(module)
.venv/lib/python3.11/site-packages/pydantic_openapi_schema/v3_1_0/header.py:8: in <module>
class Header(Parameter):
.venv/lib/python3.11/site-packages/pydantic_openapi_schema/v3_1_0/header.py:19: in Header
name: Literal[""] = Field(default="", const=True)
.venv/lib/python3.11/site-packages/pydantic/fields.py:757: in Field
raise PydanticUserError('`const` is removed, use `Literal` instead', code='removed-kwargs')
E pydantic.errors.PydanticUserError: `const` is removed, use `Literal` instead
E
E For further information visit https://errors.pydantic.dev/2.3/u/removed-kwargs
``` | closed | 2023-10-01T04:27:21Z | 2025-03-20T15:56:24Z | https://github.com/strawberry-graphql/strawberry/issues/3126 | [
"bug"
] | devkral | 1 |
dpgaspar/Flask-AppBuilder | flask | 1,957 | Azure OAuth CSRF State Not Equal Error | If you'd like to report a bug in Flask-Appbuilder, fill out the template below. Provide
any extra information that may be useful
Responsible disclosure:
We want to keep Flask-AppBuilder safe for everyone. If you've discovered a security vulnerability
please report to danielvazgaspar@gmail.com.
### Environment
Flask-Appbuilder version:
pip freeze output: Flask-Appbuilder version==4.1.4
### Describe the expected results
We are currently running Airflow 2.4.3 on Kubernetes with the Airflow Community helm chart version 8.6.1 (located here: https://github.com/airflow-helm/charts).
We have enabled Azure OAuth authentication for our webserver. This should bring up our webserver with a "login with Azure" button, and we should be able to click it and log in just fine. This is the webserver_config that we are using:
```
from flask_appbuilder.security.manager import AUTH_OAUTH
from airflow.www.security import AirflowSecurityManager
import logging
from typing import Dict, Any, List, Union
import os
import sys
#Add this as a module to pythons path
sys.path.append('/opt/airflow')
log = logging.getLogger(__name__)
log.setLevel(os.getenv("AIRFLOW__LOGGING__FAB_LOGGING_LEVEL", "DEBUG"))
class AzureCustomSecurity(AirflowSecurityManager):
# In this example, the oauth provider == 'azure'.
# If you ever want to support other providers, see how it is done here:
# https://github.com/dpgaspar/Flask-AppBuilder/blob/master/flask_appbuilder/security/manager.py#L550
def get_oauth_user_info(self, provider, resp):
# Creates the user info payload from Azure.
# The user previously allowed your app to act on their behalf,
# so now we can query the user and teams endpoints for their data.
# Username and team membership are added to the payload and returned to FAB.
if provider == "azure":
log.debug("Azure response received : {0}".format(resp))
id_token = resp["id_token"]
log.debug(str(id_token))
me = self._azure_jwt_token_parse(id_token)
log.debug("Parse JWT token : {0}".format(me))
return {
"name": me.get("name", ""),
"email": me["upn"],
"first_name": me.get("given_name", ""),
"last_name": me.get("family_name", ""),
"id": me["oid"],
"username": me["oid"],
"role_keys": me.get("roles", []),
}
# Adding this in because if not the redirect url will start with http and we want https
os.environ["AIRFLOW__WEBSERVER__ENABLE_PROXY_FIX"] = "True"
WTF_CSRF_ENABLED = False
CSRF_ENABLED = False
AUTH_TYPE = AUTH_OAUTH
AUTH_ROLES_SYNC_AT_LOGIN = True # Checks roles on every login
# Make sure to replace this with the path to your security manager class
FAB_SECURITY_MANAGER_CLASS = "webserver_config.AzureCustomSecurity"
# a mapping from the values of `userinfo["role_keys"]` to a list of FAB roles
AUTH_ROLES_MAPPING = {
"airflow_dev_admin": ["Admin"],
"airflow_dev_op": ["Op"],
"airflow_dev_user": ["User"],
"airflow_dev_viewer": ["Viewer"]
}
# force users to re-auth after 30min of inactivity (to keep roles in sync)
PERMANENT_SESSION_LIFETIME = 1800
# If you wish, you can add multiple OAuth providers.
OAUTH_PROVIDERS = [
{
"name": "azure",
"icon": "fa-windows",
"token_key": "access_token",
"remote_app": {
"client_id": "CLIENT_ID",
"client_secret": 'AZURE_DEV_CLIENT_SECRET',
"api_base_url": "https://login.microsoftonline.com/TENANT_ID",
"request_token_url": None,
'request_token_params': {
'scope': 'openid email profile'
},
"access_token_url": "https://login.microsoftonline.com/TENANT_ID/oauth2/v2.0/token",
"access_token_params": {
'scope': 'openid email profile'
},
"authorize_url": "https://login.microsoftonline.com/TENANT_ID/oauth2/v2.0/authorize",
"authorize_params": {
'scope': 'openid email profile',
},
'jwks_uri':'https://login.microsoftonline.com/common/discovery/v2.0/keys',
},
},
]
```
### Describe the actual results
Instead, we are getting this error after we click the Azure button:
[2022-11-28 22:04:58,744] {views.py:659} ERROR - Error authorizing OAuth access token: mismatching_state: CSRF Warning! State not equal in request and response.
airflow-web [2022-11-28 22:04:58,744] {views.py:659} ERROR - Error authorizing OAuth access token: mismatching_state: CSRF Warning! State not equal in request and response.
### Steps to reproduce
Running Airflow 2.4.3 on Kubernetes with the Airflow Community helm chart version 8.6.1 and using the webserver_config file like above. When the webserver is running, you click on the "login to azure" button.
### Additional Comments
I already posted an issue like this in the Airflow repo, and they said this could more than likely be a Flask problem, which is why I am making this issue here. If any other information is needed, please let me know. | closed | 2022-12-08T14:18:30Z | 2023-11-17T04:08:28Z | https://github.com/dpgaspar/Flask-AppBuilder/issues/1957 | [
"question",
"pending"
] | ahipp13 | 11 |
Skyvern-AI/skyvern | api | 1,697 | Issue accessing assets when deploying Skyvern via Helm on Kubernetes | Hello,
I am trying to deploy Skyvern via Helm in my Kubernetes cluster.
The installation runs in a headless Chromium environment (Ubuntu server without a GUI).
I have other applications in this cluster, and they are accessible using a context like https://mydns/myapp.
I would like to access Skyvern at https://mydns/skyvern.
When I visit https://mydns/skyvern, I receive a 200 response on this URL, but a 404 error for https://mydns/assets/index-BrsxbjwQ.js.
It seems that the skyvern prefix is removed, so the application tries to access resources at the root (due to the [StripPrefix middleware in Traefik](https://doc.traefik.io/traefik/middlewares/http/stripprefix/)).
Do you have any idea how to resolve this issue?
Thanks!
| closed | 2025-02-01T15:29:52Z | 2025-02-19T09:39:49Z | https://github.com/Skyvern-AI/skyvern/issues/1697 | [] | BenLikeCode | 2 |
tensorly/tensorly | numpy | 431 | tensorly on large-scale tensor? | Hi team,
Thanks for this nice repo!
I'm wondering whether tensorly actually supports decomposition of large tensors. I'm trying to run parafac on an (N, N, 2) tensor, with N as large as 10k. It can run with rank 2, but with anything higher I don't have enough memory. Is it because tensorly does all the computation in dense format, making it hard to scale up? Any thoughts on how I can run parafac on large tensors? Thanks! | closed | 2022-07-19T01:25:19Z | 2022-07-22T19:06:52Z | https://github.com/tensorly/tensorly/issues/431 | [] | devnkong | 3 |
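A minimal sketch of the call being discussed, shrunk to a size that fits in memory; using `init='random'` instead of the default SVD initialization is an assumption about where the memory pressure may come from, not a maintainer-confirmed fix:

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

# A smaller stand-in for the (N, N, 2) tensor from the question.
N = 2000
tensor = tl.tensor(np.random.random_sample((N, N, 2)))

# init='random' skips the SVD of the unfoldings used by the default init='svd',
# which is often the first place memory blows up on large dense tensors.
weights, factors = parafac(tensor, rank=4, init="random", n_iter_max=50)

print([f.shape for f in factors])  # [(N, 4), (N, 4), (2, 4)]
```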
BeanieODM/beanie | asyncio | 65 | [fix] string to PydanticObjectId converting | Details:
```python
bar = await Product.get("608da169eb9e17281f0ab2ff") # not working,
bar = await Product.get(PydanticObjectId("608da169eb9e17281f0ab2ff")) # working.
``` | closed | 2021-06-17T13:35:24Z | 2021-06-17T14:38:29Z | https://github.com/BeanieODM/beanie/issues/65 | [] | roman-right | 1 |
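A small caller-side sketch of the conversion the issue asks Beanie to do automatically; the `Product` document and the Mongo URI are assumptions:

```python
import asyncio
from typing import Optional, Union

from beanie import Document, PydanticObjectId, init_beanie
from motor.motor_asyncio import AsyncIOMotorClient

class Product(Document):
    name: str

async def get_product(raw_id: Union[str, PydanticObjectId]) -> Optional[Product]:
    # Normalize a plain string into a PydanticObjectId before calling .get().
    if isinstance(raw_id, str):
        raw_id = PydanticObjectId(raw_id)
    return await Product.get(raw_id)

async def main() -> None:
    client = AsyncIOMotorClient("mongodb://localhost:27017")  # placeholder URI
    await init_beanie(database=client.shop, document_models=[Product])
    print(await get_product("608da169eb9e17281f0ab2ff"))

asyncio.run(main())
```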
Miserlou/Zappa | flask | 1,509 | Can't parse ".serverless/requirements/xlrd/biffh.py" unless encoding is latin | <!--- Provide a general summary of the issue in the Title above -->
## Context
When detect_flask is called during `zappa init`, it fails on an encoding issue because of the commented-out block at the head of xlrd/biffh.py.
<!--- Provide a more detailed introduction to the issue itself, and why you consider it to be a bug -->
<!--- Also, please make sure that you are running Zappa _from a virtual environment_ and are using Python 2.7/3.6 -->
## Expected Behavior
<!--- Tell us what should happen -->
It should not error out.
## Actual Behavior
<!--- Tell us what happens instead -->
It fails with an encoding exception at f.readlines()
## Possible Fix
<!--- Not obligatory, but suggest a fix or reason for the bug -->
Just add `encoding='latin'` to the `open` call.
## Steps to Reproduce
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug include code to reproduce, if relevant -->
1. have xlrd as a dependency
2. call zappa init
3.
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Zappa version used:
* Operating System and Python version: OSX, python 3
* The output of `pip freeze`:
* Link to your project (optional):
* Your `zappa_settings.py`:
| open | 2018-05-14T18:00:14Z | 2018-05-14T18:00:14Z | https://github.com/Miserlou/Zappa/issues/1509 | [] | joshmalina | 0 |
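A simplified sketch of the suggested fix, reading the file with a latin-1 fallback; this is a stand-alone helper, not Zappa's actual detect_flask code:

```python
from pathlib import Path

def read_lines(path: Path) -> list:
    """Read a source file, falling back to latin-1 for non-UTF-8 files such as xlrd/biffh.py."""
    try:
        with open(path, encoding="utf-8") as f:
            return f.readlines()
    except UnicodeDecodeError:
        # latin-1 maps every byte to a code point, so this read never raises.
        with open(path, encoding="latin-1") as f:
            return f.readlines()

lines = read_lines(Path(".serverless/requirements/xlrd/biffh.py"))
print(len(lines))
```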
InstaPy/InstaPy | automation | 5,866 | Does session.unfollow_users have a white_list option to skip certain users when unfollowing all? | ## InstaPy configuration
I see that unfollow_util.py has lines of code where, if a white_list is set, a user account is not unfollowed when it is in that list.
How do I use it? Sample code: `session.unfollow_users(white_list=['user', 'user', 'user'])`
If it is meant for that, thank you.
TypeError: unfollow_users() got an unexpected keyword argument 'white_list'
| closed | 2020-11-03T09:32:25Z | 2020-12-20T14:06:26Z | https://github.com/InstaPy/InstaPy/issues/5866 | [
"wontfix"
] | mik3bg | 1 |
automl/auto-sklearn | scikit-learn | 779 | How to disable Bayesian optimization? | Is there a way to disable the Bayesian optimization subroutine when fitting on a new dataset? I am curious about how different the performance would be without such fine-tuning. Thanks! | closed | 2020-02-09T16:02:48Z | 2020-04-02T11:26:08Z | https://github.com/automl/auto-sklearn/issues/779 | [] | chengrunyang | 4 |
youfou/wxpy | api | 275 | Login fails with an error; the web version can log in normally | - It worked fine before, with regular restarts
- After scanning the QR code, the wait is very long
- This happened once before; it went back to normal after clearing the cache on the phone and removing a batch of friends
## Log
```
Getting uuid of QR code.
Downloading QR code.
Please scan the QR code to log in.
Please press confirm on your phone.
Loading the contact, this may take a little while.
Traceback (most recent call last):
File "test.py", line 4, in <module>
bot = Bot()
File "/usr/local/lib/python3.5/site-packages/wxpy/api/bot.py", line 86, in __init__
loginCallback=login_callback, exitCallback=logout_callback
File "/usr/local/lib/python3.5/site-packages/itchat/components/register.py", line 35, in auto_login
loginCallback=loginCallback, exitCallback=exitCallback)
File "/usr/local/lib/python3.5/site-packages/itchat/components/login.py", line 67, in login
self.get_contact(True)
File "/usr/local/lib/python3.5/site-packages/itchat/components/contact.py", line 284, in get_contact
seq, batchMemberList = _get_contact(seq)
File "/usr/local/lib/python3.5/site-packages/itchat/components/contact.py", line 280, in _get_contact
j = json.loads(r.content.decode('utf-8', 'replace'))
File "/usr/local/Cellar/python3/3.5.1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/json/__init__.py", line 319, in loads
return _default_decoder.decode(s)
File "/usr/local/Cellar/python3/3.5.1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/json/decoder.py", line 339, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/local/Cellar/python3/3.5.1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/json/decoder.py", line 357, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
```
| open | 2018-03-16T08:40:44Z | 2018-08-13T15:27:42Z | https://github.com/youfou/wxpy/issues/275 | [] | Anynices | 14 |
anselal/antminer-monitor | dash | 111 | Add devices by IP address range | Hey, first off, thanks for antminer-monitor! It's uncomplicated software that does its job well!
Second, we were curious about adding devices by IP address range, for example:
192.168.69.101-200: antminer S9
192.168.70.1-250: antminer L3
Or, is there a way to add these devices in a range using the command line that I just don't know about yet?
We might be able to add this functionality and submit a PR, if you can point us in the right direction. Thanks! | closed | 2018-07-06T18:22:26Z | 2018-07-10T12:05:02Z | https://github.com/anselal/antminer-monitor/issues/111 | [
":dancing_men: duplicate"
] | faddat | 1 |
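A rough sketch of expanding a `start-end` range like the ones above into individual addresses before adding each device; `add_miner` is a hypothetical placeholder for the project's real add-device code path:

```python
import ipaddress

def expand_range(spec: str) -> list:
    """Expand '192.168.69.101-200' into a list of dotted-quad addresses."""
    base, last = spec.rsplit("-", 1)
    start = ipaddress.IPv4Address(base)
    prefix = base.rsplit(".", 1)[0]
    end = ipaddress.IPv4Address(f"{prefix}.{last}")
    return [str(ipaddress.IPv4Address(i)) for i in range(int(start), int(end) + 1)]

def add_miner(ip: str, model: str) -> None:
    print(f"adding {model} at {ip}")  # placeholder for the real add-device logic

for ip in expand_range("192.168.69.101-200"):
    add_miner(ip, "Antminer S9")
```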
marcomusy/vedo | numpy | 559 | Implement multiple shadows | - Implement multiple shadows on planes, using the projection algorithm from #191.


- But there is more to do, such as clipping points that are inside the plane.

| closed | 2021-12-15T08:14:27Z | 2021-12-16T13:55:24Z | https://github.com/marcomusy/vedo/issues/559 | [] | zhouzq-thu | 0 |
PaddlePaddle/models | nlp | 4,772 | Could you provide more dygraph (dynamic graph) pretrained BERT models? A script to convert static-graph models to dygraph would also work | https://github.com/PaddlePaddle/models/tree/release/1.8/dygraph/bert
Currently there is only:

which loads and works correctly, but there is no Chinese BERT.
Could you release the conversion script? | open | 2020-07-28T06:53:06Z | 2024-02-26T05:10:48Z | https://github.com/PaddlePaddle/models/issues/4772 | [] | waywaywayw | 4 |
kensho-technologies/graphql-compiler | graphql | 952 | Explain query planning | Make a concise print function for the `QueryPlanningAnalysis` class https://github.com/kensho-technologies/graphql-compiler/blob/v2.0.0.dev25/graphql_compiler/cost_estimation/analysis.py#L381
In most cases, the printout of the analysis passes is enough to explain why a particular query plan was chosen. | open | 2020-10-12T20:59:21Z | 2020-10-12T20:59:21Z | https://github.com/kensho-technologies/graphql-compiler/issues/952 | [
"enhancement",
"user friendliness",
"good first issue",
"maintainer quality-of-life"
] | bojanserafimov | 0 |
facebookresearch/fairseq | pytorch | 5,035 | Auto create commandline args from yamls with fairseq-hydra-train | ## 🚀 Feature Request
Although fairseq is a powerful framework, I ran into some problems when using fairseq-hydra-train.
If I want to define an argument in the config YAML file, I have to define it in some dataclass first.
This is cumbersome and slow compared with Hydra's ability to auto-create command-line args.
Furthermore, it makes it impossible to use Hydra's support for multi-level nested parameters.
How can I access the full power of Hydra within fairseq?
| open | 2023-03-22T02:00:48Z | 2023-03-22T02:00:48Z | https://github.com/facebookresearch/fairseq/issues/5035 | [
"enhancement",
"help wanted",
"needs triage"
] | gystar | 0 |
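For comparison, a plain-Hydra sketch of the behaviour being asked for: any key in `conf/config.yaml` (a placeholder config) becomes overridable from the command line without declaring a dataclass. This is vanilla Hydra, not fairseq-hydra-train:

```python
# app.py -- run as: python app.py model.hidden_dim=512 optimizer.lr=1e-4
import hydra
from omegaconf import DictConfig, OmegaConf

@hydra.main(config_path="conf", config_name="config")
def main(cfg: DictConfig) -> None:
    # Every (possibly nested) key from conf/config.yaml is available on cfg
    # and can be overridden from the command line without a dataclass.
    print(OmegaConf.to_yaml(cfg))

if __name__ == "__main__":
    main()
```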
apify/crawlee-python | automation | 839 | initial list of Requests don't all get handled? | The request queue is not being fully handled. When I have a list of Requests with different labels, it only handles the labels of the first request it sees. Am I doing something wrong, or is this a bug?
Here in the logs I queue up 4 things
```
DEBUG Added 4 requests to the queue, response: processed_requests=[ProcessedRequest(id='WCJHwnKoF1xWGYF', unique_key='https://URL1', was_already_present=False, was_already_handled=False), ProcessedRequest(id='ntWUsKSPofbfOU2', unique_key='https://URL2', was_already_present=False, was_already_handled=False), ProcessedRequest(id='xMI8mB7yETk8KJz', unique_key='https://URL3', was_already_present=False, was_already_handled=False), ProcessedRequest(id='2iT82Knl0Rr4qEi', unique_key='https://en.wikipedia.org/wiki/cgroups', was_already_present=False, was_already_handled=False)] unprocessed_requests=[]
[crawlee.memory_storage_client._memory_storage_client] DEBUG Storage was already purged on start.
[crawlee._autoscaling.autoscaled_pool] DEBUG Starting the pool
```
but then it only processes the first 2, both with label=JSON
```
│ requests_finished │ 2 │
│ requests_failed │ 0 │
│ retry_histogram │ [2] │
│ request_avg_failed_duration │ None │
│ request_avg_finished_duration │ 1.623642 │
│ requests_finished_per_minute │ 63 │
│ requests_failed_per_minute │ 0 │
│ request_total_duration │ 3.247283 │
│ requests_total │ 2 │
│ crawler_runtime │ 1.919938 │
```
Here's the list I sent to get queued. Now, the strange thing is that if I comment out the first two Requests, the other two work; the same happens when I put the HTML-labelled ones on top.
```python
[
Request.from_url(
url="https://sightmap.com/app/api/v1/8epml7q1v6d/sightmaps/80524",
label="JSON",
user_data={"building_id": 1},
),
Request.from_url(
url="https://sightmap.com/app/api/v1/60p7q39nw7n/sightmaps/397",
label="JSON",
user_data={"building_id": 2},
),
Request.from_url(
url="https://www.windsorcommunities.com/properties/windsor-on-the-lake/floorplans/",
label="HTML",
user_data={"building_id": 3},
),
Request.from_url(
url="https://en.wikipedia.org/wiki/Cgroups",
label="HTML",
user_data={"building_id": 3},
),
]
```
Here are my router.py handlers:
```python
@router.default_handler
async def default_handler(context: BeautifulSoupCrawlingContext) -> None:
"""Default request handler."""
building_id = context.request.user_data.model_extra.get("building_id")
logger.info("PASSING", url={context.request.url}, building_id=building_id)
@router.handler("HTML")
async def html_handler(context: BeautifulSoupCrawlingContext) -> None:
"""Default request handler."""
building_id = context.request.user_data.model_extra.get("building_id")
logger.info("Handling", url={context.request.url}, building_id=building_id)
http_response = context.http_response
content = http_response.read() if http_response else None
if content:
try:
content_str = content.decode("utf-8")
# BeautifulSoup will fixes invalid HTML
if content_str == str(BeautifulSoup(content_str, "html.parser")):
logger.error("Invalid HTML content.")
raise Exception("Invalid HTML content.")
else:
logger.debug("Valid HTML content.")
except Exception as e:
logger.error(
"An error occurred while parsing HTML content.",
error=str(e),
url=context.request.url,
)
raise e
else:
# Not sure if none content is already handled by crawlee doesn't hurt to have it here
logger.error("No content fetched.", url=context.request.url)
raise Exception("No content fetched.")
await save_scrape_response(context, content_str)
@router.handler("JSON")
async def json_handler(context: BeautifulSoupCrawlingContext) -> None:
"""Default request handler."""
building_id = context.request.user_data.model_extra.get("building_id")
logger.info("Handling", url={context.request.url}, building_id=building_id)
http_response = context.http_response
try:
json_content = json.load(http_response)
except json.JSONDecodeError:
json_content = None
logger.error("Invalid JSON content.", url=context.request.url)
# We should save invalid page for debugging?
# They get saved in the logs maybe future we pump them to a bad_responses bucket?
await save_scrape_response(context, json_content)
```
<details><summary>Logs</summary>
<p>
```
DEBUG Added 4 requests to the queue, response: processed_requests=[ProcessedRequest(id='WCJHwnKoF1xWGYF', unique_key='https://URL1', was_already_present=False, was_already_handled=False), ProcessedRequest(id='ntWUsKSPofbfOU2', unique_key='https://URL2', was_already_present=False, was_already_handled=False), ProcessedRequest(id='xMI8mB7yETk8KJz', unique_key='https://URL3', was_already_present=False, was_already_handled=False), ProcessedRequest(id='2iT82Knl0Rr4qEi', unique_key='https://en.wikipedia.org/wiki/cgroups', was_already_present=False, was_already_handled=False)] unprocessed_requests=[]
[crawlee.memory_storage_client._memory_storage_client] DEBUG Storage was already purged on start.
[crawlee._autoscaling.autoscaled_pool] DEBUG Starting the pool
[crawlee.beautifulsoup_crawler._beautifulsoup_crawler] INFO Current request statistics:
┌───────────────────────────────┬──────────┐
│ requests_finished │ 0 │
│ requests_failed │ 0 │
│ retry_histogram │ [0] │
│ request_avg_failed_duration │ None │
│ request_avg_finished_duration │ None │
│ requests_finished_per_minute │ 0 │
│ requests_failed_per_minute │ 0 │
│ request_total_duration │ 0.0 │
│ requests_total │ 0 │
│ crawler_runtime │ 0.013797 │
└───────────────────────────────┴──────────┘
[crawlee._autoscaling.autoscaled_pool] INFO current_concurrency = 0; desired_concurrency = 2; cpu = 0; mem = 0; event_loop = 0.0; client_info = 0.0
[crawlee._utils.system] DEBUG Calling get_cpu_info()...
[crawlee.events._event_manager] DEBUG LocalEventManager.on.listener_wrapper(): Awaiting listener task...
[crawlee.statistics._statistics] DEBUG Persisting state of the Statistics (event_data=is_migrating=False).
[crawlee.events._event_manager] DEBUG LocalEventManager.on.listener_wrapper(): Listener task completed.
[crawlee.events._event_manager] DEBUG LocalEventManager.on.listener_wrapper(): Removing listener task from the set...
[crawlee.storages._request_queue] DEBUG Queue head still returned requests that need to be processed (or that are locked by other clients)
[crawlee._autoscaling.autoscaled_pool] DEBUG Scheduling a new task
[crawlee.storages._request_queue] DEBUG There are still ids in the queue head that are pending processing ({"queue_head_ids_pending": 4})
[crawlee._autoscaling.autoscaled_pool] DEBUG Scheduling a new task
[crawlee.storages._request_queue] DEBUG There are still ids in the queue head that are pending processing ({"queue_head_ids_pending": 4})
[crawlee._autoscaling.autoscaled_pool] DEBUG Not scheduling new tasks - already running at desired concurrency
[httpx] DEBUG load_ssl_context verify=True cert=None trust_env=True http2=False
[httpx] DEBUG load_verify_locations cafile='/home/vscode/.cache/pypoetry/virtualenvs/bs-crawler-7DgAT4g4-py3.12/lib/python3.12/site-packages/certifi/cacert.pem'
[httpcore.connection] DEBUG connect_tcp.started host='sightmap.com' port=443 local_address=None timeout=5.0 socket_options=None
[httpcore.connection] DEBUG connect_tcp.complete return_value=<httpcore._backends.anyio.AnyIOStream object at 0xffff8df70980>
[httpcore.connection] DEBUG start_tls.started ssl_context=<ssl.SSLContext object at 0xffff8e1b9550> server_hostname='sightmap.com' timeout=5.0
[crawlee._utils.system] DEBUG Calling get_memory_info()...
[crawlee.events._event_manager] DEBUG LocalEventManager.on.listener_wrapper(): Awaiting listener task...
[crawlee.events._event_manager] DEBUG LocalEventManager.on.listener_wrapper(): Awaiting listener task...
[crawlee.events._event_manager] DEBUG LocalEventManager.on.listener_wrapper(): Listener task completed.
[crawlee.events._event_manager] DEBUG LocalEventManager.on.listener_wrapper(): Removing listener task from the set...
[crawlee.events._event_manager] DEBUG LocalEventManager.on.listener_wrapper(): Listener task completed.
[crawlee.events._event_manager] DEBUG LocalEventManager.on.listener_wrapper(): Removing listener task from the set...
[httpcore.connection] DEBUG start_tls.complete return_value=<httpcore._backends.anyio.AnyIOStream object at 0xffff8df73800>
[httpcore.http2] DEBUG send_connection_init.started request=<Request [b'GET']>
[httpcore.http2] DEBUG send_connection_init.complete
[httpcore.http2] DEBUG send_request_headers.started request=<Request [b'GET']> stream_id=1
[hpack.hpack] DEBUG Adding (b':method', b'GET') to the header table, sensitive:False, huffman:True
[hpack.hpack] DEBUG Encoding 2 with 7 bits
[hpack.hpack] DEBUG Adding (b':authority', b'sightmap.com') to the header table, sensitive:False, huffman:True
[hpack.hpack] DEBUG Encoding 1 with 6 bits
[hpack.hpack] DEBUG Encoding 9 with 7 bits
[hpack.hpack] DEBUG Adding (b':scheme', b'https') to the header table, sensitive:False, huffman:True
[hpack.hpack] DEBUG Encoding 7 with 7 bits
[hpack.hpack] DEBUG Adding (b':path', b'/app/api/v1/8epml7q1v6d/sightmaps/80524') to the header table, sensitive:False, huffman:True
[hpack.hpack] DEBUG Encoding 4 with 6 bits
[hpack.hpack] DEBUG Encoding 28 with 7 bits
[hpack.hpack] DEBUG Adding (b'accept-encoding', b'gzip, deflate, br') to the header table, sensitive:False, huffman:True
[hpack.hpack] DEBUG Encoding 16 with 6 bits
[hpack.hpack] DEBUG Encoding 13 with 7 bits
[hpack.hpack] DEBUG Adding (b'accept', b'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7') to the header table, sensitive:False, huffman:True
[hpack.hpack] DEBUG Encoding 19 with 6 bits
[hpack.hpack] DEBUG Encoding 101 with 7 bits
[hpack.hpack] DEBUG Adding (b'accept-language', b'en-US,en;q=0.9') to the header table, sensitive:False, huffman:True
[hpack.hpack] DEBUG Encoding 17 with 6 bits
[hpack.hpack] DEBUG Encoding 11 with 7 bits
[hpack.hpack] DEBUG Adding (b'user-agent', b'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.142 Safari/537.36') to the header table, sensitive:False, huffman:True
[hpack.hpack] DEBUG Encoding 58 with 6 bits
[hpack.hpack] DEBUG Encoding 92 with 7 bits
[hpack.hpack] DEBUG Encoded header block to b'\x82A\x89A\xa6\x9d4\x8e\xb5\xc8z\x7f\x87D\x9c`u\xd6\xc0\xeb3\x1d\xc2\xc3\xc5\xae\x9a\x1d\xec\x1e\xeeH\xc2\r4\xe9\xa4u\xa1\x87\x80\xd8\x9aP\x8d\x9b\xd9\xab\xfaRB\xcb@\xd2_\xa5#\xb3S\xe5I|\xa5\x89\xd3M\x1fC\xae\xba\x0cA\xa4\xc7\xa9\x8f3\xa6\x9a?\xdf\x9ah\xfa\x1du\xd0b\r&=Ly\xa6\x8f\xbe\xd0\x01w\xfe\x8dH\xe6+\x03\xeei~\x8dH\xe6+\x1e\x0b\x1d\x7fF\xa4s\x15\x81\xd7T\xdf_,|\xfd\xf6\x80\x0b\xbd\xf4:\xeb\xa0\xc4\x1aLz\x98A\xa6\xa8\xb2,_$\x9cuL_\xbe\xf0F\xcf\xdfh\x00\xbb\xbfQ\x8b-Kp\xdd\xf4Z\xbe\xfb@\x05\xdfz\xdc\xd0\x7ff\xa2\x81\xb0\xda\xe0S\xfa\xd02\x1a\xa4\x9d\x13\xfd\xa9\x92\xa4\x96\x854\x0c\x8aj\xdc\xa7\xe2\x81\x04Aj \xffjC]t\x17\x91c\xccd\xb0\xdb.\xae\xcb\x8a\x7fY\xb1\xef\xd1\x9f\xe9J\r\xd4\xaab):\x9f\xfbR\xf4\xf6\x1e\x92\xb0\xebk\x81v]t\x0b\x85\xa1)\xb8r\x8e\xc30\xdb.\xae\xcb\x9f'
[httpcore.http2] DEBUG send_request_headers.complete
[httpcore.http2] DEBUG send_request_body.started request=<Request [b'GET']> stream_id=1
[httpcore.http2] DEBUG send_request_body.complete
[httpcore.http2] DEBUG receive_response_headers.started request=<Request [b'GET']> stream_id=1
[httpcore.http2] DEBUG receive_remote_settings.started
[httpcore.http2] DEBUG receive_remote_settings.complete return_value=<RemoteSettingsChanged changed_settings:{ChangedSetting(setting=3, original_value=None, new_value=128), ChangedSetting(setting=4, original_value=65535, new_value=65536), ChangedSetting(setting=5, original_value=16384, new_value=16777215)}>
[httpcore.http2] DEBUG send_request_headers.started request=<Request [b'GET']> stream_id=3
[hpack.hpack] DEBUG Adding (b':method', b'GET') to the header table, sensitive:False, huffman:True
[hpack.hpack] DEBUG Encoding 2 with 7 bits
[hpack.hpack] DEBUG Adding (b':authority', b'sightmap.com') to the header table, sensitive:False, huffman:True
[hpack.hpack] DEBUG Encoding 67 with 7 bits
[hpack.hpack] DEBUG Adding (b':scheme', b'https') to the header table, sensitive:False, huffman:True
[hpack.hpack] DEBUG Encoding 7 with 7 bits
[hpack.hpack] DEBUG Adding (b':path', b'/app/api/v1/60p7q39nw7n/sightmaps/397') to the header table, sensitive:False, huffman:True
[hpack.hpack] DEBUG Encoding 4 with 6 bits
[hpack.hpack] DEBUG Encoding 27 with 7 bits
[hpack.hpack] DEBUG Adding (b'accept-encoding', b'gzip, deflate, br') to the header table, sensitive:False, huffman:True
[hpack.hpack] DEBUG Encoding 66 with 7 bits
[hpack.hpack] DEBUG Adding (b'accept', b'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7') to the header table, sensitive:False, huffman:True
[hpack.hpack] DEBUG Encoding 65 with 7 bits
[hpack.hpack] DEBUG Adding (b'accept-language', b'en-US,en;q=0.9') to the header table, sensitive:False, huffman:True
[hpack.hpack] DEBUG Encoding 64 with 7 bits
[hpack.hpack] DEBUG Adding (b'user-agent', b'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.142 Safari/537.36') to the header table, sensitive:False, huffman:True
[hpack.hpack] DEBUG Encoding 63 with 7 bits
[hpack.hpack] DEBUG Encoded header block to b'\x82\xc3\x87D\x9b`u\xd6\xc0\xeb3\x1d\xc2\xc3\x80\xad\xde\xcc\xbfW\x87ja\x06\x9at\xd2:\xd0\xc3/\xbb\xc2\xc1\xc0\xbf'
[httpcore.http2] DEBUG send_request_headers.complete
[httpcore.http2] DEBUG send_request_body.started request=<Request [b'GET']> stream_id=3
[httpcore.http2] DEBUG send_request_body.complete
[httpcore.http2] DEBUG receive_response_headers.started request=<Request [b'GET']> stream_id=3
[hpack.hpack] DEBUG Decoding b' \x88a\x96\xdc4\xfd( \xa9|\xa4P@\x13J\x05\xfb\x80\r\xc1>\xa6-\x1b\xff_\x8b\x1du\xd0b\r&=LtA\xea\x00\x85Al\xee[?\x84\xaacU\xe7\x00\x04vary\x8b\x84\x84-i[\x05D<\x86\xaao\x00\x89 \xc99V!\xeaM\x87\xa3\x8c\xa8\xeb\x10d\x9c\xbfJWa\xbb\x8d%\x00\x8b!\xeaIjJ\xc5\xa8\x87\x90\xd5M\x83\x9b\xd9\xab'
[hpack.hpack] DEBUG Decoded 0, consumed 1 bytes
[hpack.table] DEBUG Resizing header table to 0 from 4096
[hpack.hpack] DEBUG Decoded 8, consumed 1 bytes
[hpack.hpack] DEBUG Decoded (b':status', b'200'), consumed 1
[hpack.hpack] DEBUG Decoded 33, consumed 1 bytes
[hpack.hpack] DEBUG Decoded 22, consumed 1 bytes
[hpack.hpack] DEBUG Decoded (b'date', b'Sat, 21 Dec 2024 19:01:29 GMT'), total consumed 24 bytes, indexed True
[hpack.hpack] DEBUG Decoded 31, consumed 1 bytes
[hpack.hpack] DEBUG Decoded 11, consumed 1 bytes
[hpack.hpack] DEBUG Decoded (b'content-type', b'application/json'), total consumed 13 bytes, indexed True
[hpack.hpack] DEBUG Decoded 5, consumed 1 bytes
[hpack.hpack] DEBUG Decoded 4, consumed 1 bytes
[hpack.hpack] DEBUG Decoded (b'server', b'nginx'), total consumed 12 bytes, indexed False
[hpack.hpack] DEBUG Decoded 4, consumed 1 bytes
[hpack.hpack] DEBUG Decoded 11, consumed 1 bytes
[hpack.hpack] DEBUG Decoded (<memory at 0xffff8dfd0c40>, b'Accept-Encoding'), total consumed 18 bytes, indexed False
[hpack.hpack] DEBUG Decoded 9, consumed 1 bytes
[hpack.hpack] DEBUG Decoded 12, consumed 1 bytes
[hpack.hpack] DEBUG Decoded (b'cache-control', b'no-cache, private'), total consumed 24 bytes, indexed False
[hpack.hpack] DEBUG Decoded 11, consumed 1 bytes
[hpack.hpack] DEBUG Decoded 3, consumed 1 bytes
[hpack.hpack] DEBUG Decoded (b'content-encoding', b'gzip'), total consumed 17 bytes, indexed False
[httpcore.http2] DEBUG receive_response_headers.complete return_value=(200, [(b'date', b'Sat, 21 Dec 2024 19:01:29 GMT'), (b'content-type', b'application/json'), (b'server', b'nginx'), (b'vary', b'Accept-Encoding'), (b'cache-control', b'no-cache, private'), (b'content-encoding', b'gzip')])
[httpx] INFO HTTP Request: GET https://URL1 "HTTP/2 200 OK"
[httpcore.http2] DEBUG receive_response_body.started request=<Request [b'GET']> stream_id=1
[hpack.hpack] DEBUG Decoding b'\x88a\x96\xdc4\xfd( \xa9|\xa4P@\x13J\x05\xfb\x80\r\xc1>\xa6-\x1b\xff_\x8b\x1du\xd0b\r&=LtA\xea\x00\x85Al\xee[?\x84\xaacU\xe7\x00\x04vary\x8b\x84\x84-i[\x05D<\x86\xaao\x00\x89 \xc99V!\xeaM\x87\xa3\x8c\xa8\xeb\x10d\x9c\xbfJWa\xbb\x8d%\x00\x8b!\xeaIjJ\xc5\xa8\x87\x90\xd5M\x83\x9b\xd9\xab'
[hpack.hpack] DEBUG Decoded 8, consumed 1 bytes
[hpack.hpack] DEBUG Decoded (b':status', b'200'), consumed 1
[hpack.hpack] DEBUG Decoded 33, consumed 1 bytes
[hpack.hpack] DEBUG Decoded 22, consumed 1 bytes
[hpack.hpack] DEBUG Decoded (b'date', b'Sat, 21 Dec 2024 19:01:29 GMT'), total consumed 24 bytes, indexed True
[hpack.hpack] DEBUG Decoded 31, consumed 1 bytes
[hpack.hpack] DEBUG Decoded 11, consumed 1 bytes
[hpack.hpack] DEBUG Decoded (b'content-type', b'application/json'), total consumed 13 bytes, indexed True
[hpack.hpack] DEBUG Decoded 5, consumed 1 bytes
[hpack.hpack] DEBUG Decoded 4, consumed 1 bytes
[hpack.hpack] DEBUG Decoded (b'server', b'nginx'), total consumed 12 bytes, indexed False
[hpack.hpack] DEBUG Decoded 4, consumed 1 bytes
[hpack.hpack] DEBUG Decoded 11, consumed 1 bytes
[hpack.hpack] DEBUG Decoded (<memory at 0xffff8ddf4580>, b'Accept-Encoding'), total consumed 18 bytes, indexed False
[hpack.hpack] DEBUG Decoded 9, consumed 1 bytes
[hpack.hpack] DEBUG Decoded 12, consumed 1 bytes
[hpack.hpack] DEBUG Decoded (b'cache-control', b'no-cache, private'), total consumed 24 bytes, indexed False
[hpack.hpack] DEBUG Decoded 11, consumed 1 bytes
[hpack.hpack] DEBUG Decoded 3, consumed 1 bytes
[hpack.hpack] DEBUG Decoded (b'content-encoding', b'gzip'), total consumed 17 bytes, indexed False
[httpcore.http2] DEBUG receive_response_body.complete
[httpcore.http2] DEBUG response_closed.started stream_id=1
[httpcore.http2] DEBUG receive_response_headers.complete return_value=(200, [(b'date', b'Sat, 21 Dec 2024 19:01:29 GMT'), (b'content-type', b'application/json'), (b'server', b'nginx'), (b'vary', b'Accept-Encoding'), (b'cache-control', b'no-cache, private'), (b'content-encoding', b'gzip')])
[httpx] INFO HTTP Request: GET https://URL2 "HTTP/2 200 OK"
[httpcore.http2] DEBUG receive_response_body.started request=<Request [b'GET']> stream_id=3
[httpcore.http2] DEBUG response_closed.complete
[httpcore.http2] DEBUG receive_response_body.complete
[httpcore.http2] DEBUG response_closed.started stream_id=3
[httpcore.http2] DEBUG response_closed.complete
[crawlee.storages._request_queue] DEBUG There are still ids in the queue head that are pending processing ({"queue_head_ids_pending": 2})
[crawlee._autoscaling.autoscaled_pool] DEBUG Not scheduling new tasks - already running at desired concurrency
{"url": "{'https://URL2'}", "building_id": 2, "message": "Handling", "time": "2024-12-21T19:01:29.725513Z", "severity": "INFO", "logging.googleapis.com/sourceLocation": {"file": "/workspaces/AustinRent/scraper/scraper/routes.py", "line": "99", "function": "routes:json_handler"}}
[urllib3.util.retry] DEBUG Converted retries value: 3 -> Retry(total=3, connect=None, read=None, redirect=None, status=None)
[urllib3.connectionpool] DEBUG Starting new HTTPS connection (1): oauth2.googleapis.com:443
[urllib3.connectionpool] DEBUG https://oauth2.googleapis.com:443 "POST /token HTTP/11" 200 None
[urllib3.connectionpool] DEBUG Starting new HTTPS connection (1): storage.googleapis.com:443
[urllib3.connectionpool] DEBUG https://storage.googleapis.com:443 "POST /upload/storage/v1/b/scraper-responses/o?uploadType=multipart HTTP/11" 200 968
{"destination_blob_name": "0193ea98-99ff-8c6e-b37f-cfd9e7568bb1.json", "building_id": 2, "message": "String content uploaded", "time": "2024-12-21T19:01:30.056656Z", "severity": "INFO", "logging.googleapis.com/sourceLocation": {"file": "/workspaces/AustinRent/scraper/scraper/utils/bucket_utils.py", "line": "32", "function": "bucket_utils:upload_string_to_gcs"}}
2024-12-21 19:01:30,068 INFO sqlalchemy.engine.Engine BEGIN (implicit)
[sqlalchemy.engine.Engine] INFO BEGIN (implicit) ({"message": "BEGIN (implicit)", "asctime": "2024-12-21 19:01:30,068"})
2024-12-21 19:01:30,079 INFO sqlalchemy.engine.Engine INSERT INTO scrape_responses (file_id, requested_url, loaded_url, building_id, retry_count) VALUES ($1::UUID, $2::VARCHAR, $3::VARCHAR, $4::INTEGER, $5::INTEGER) RETURNING scrape_responses.scrape_page_id, scrape_responses.created_at
[sqlalchemy.engine.Engine] INFO INSERT INTO scrape_responses (file_id, requested_url, loaded_url, building_id, retry_count) VALUES ($1::UUID, $2::VARCHAR, $3::VARCHAR, $4::INTEGER, $5::INTEGER) RETURNING scrape_responses.scrape_page_id, scrape_responses.created_at ({"message": "INSERT INTO scrape_responses (file_id, requested_url, loaded_url, building_id, retry_count) VALUES ($1::UUID, $2::VARCHAR, $3::VARCHAR, $4::INTEGER, $5::INTEGER) RETURNING scrape_responses.scrape_page_id, scrape_responses.created_at", "asctime": "2024-12-21 19:01:30,079"})
2024-12-21 19:01:30,079 INFO sqlalchemy.engine.Engine [generated in 0.00081s] (UUID('0193ea98-99ff-8c6e-b37f-cfd9e7568bb1'), 'https://URL2', 'https://URL2', 2, 0)
[sqlalchemy.engine.Engine] INFO [generated in 0.00081s] (UUID('0193ea98-99ff-8c6e-b37f-cfd9e7568bb1'), 'https://URL2', 'https://URL2', 2, 0) ({"message": "[generated in 0.00081s] (UUID('0193ea98-99ff-8c6e-b37f-cfd9e7568bb1'), 'https://URL2', 'https://URL2', 2, 0)", "asctime": "2024-12-21 19:01:30,079"})
{"url": "{'https://URL1'}", "building_id": 1, "message": "Handling", "time": "2024-12-21T19:01:30.081275Z", "severity": "INFO", "logging.googleapis.com/sourceLocation": {"file": "/workspaces/AustinRent/scraper/scraper/routes.py", "line": "99", "function": "routes:json_handler"}}
[urllib3.connectionpool] DEBUG https://storage.googleapis.com:443 "POST /upload/storage/v1/b/scraper-responses/o?uploadType=multipart HTTP/11" 200 968
{"destination_blob_name": "0193ea98-9b69-8046-96af-dc9893ff15c6.json", "building_id": 1, "message": "String content uploaded", "time": "2024-12-21T19:01:30.332999Z", "severity": "INFO", "logging.googleapis.com/sourceLocation": {"file": "/workspaces/AustinRent/scraper/scraper/utils/bucket_utils.py", "line": "32", "function": "bucket_utils:upload_string_to_gcs"}}
[crawlee.storages._request_queue] DEBUG There are still ids in the queue head that are pending processing ({"queue_head_ids_pending": 2})
[crawlee._autoscaling.autoscaled_pool] DEBUG Not scheduling new tasks - system is overloaded
[crawlee._utils.system] DEBUG Calling get_cpu_info()...
2024-12-21 19:01:30,408 INFO sqlalchemy.engine.Engine COMMIT
[sqlalchemy.engine.Engine] INFO COMMIT ({"message": "COMMIT", "asctime": "2024-12-21 19:01:30,408"})
[crawlee._utils.system] DEBUG Calling get_memory_info()...
[crawlee.events._event_manager] DEBUG LocalEventManager.on.listener_wrapper(): Awaiting listener task...
[crawlee.events._event_manager] DEBUG LocalEventManager.on.listener_wrapper(): Awaiting listener task...
[crawlee.events._event_manager] DEBUG LocalEventManager.on.listener_wrapper(): Listener task completed.
[crawlee.events._event_manager] DEBUG LocalEventManager.on.listener_wrapper(): Removing listener task from the set...
[crawlee.events._event_manager] DEBUG LocalEventManager.on.listener_wrapper(): Listener task completed.
[crawlee.events._event_manager] DEBUG LocalEventManager.on.listener_wrapper(): Removing listener task from the set...
{"url": "{'https://URL2'}", "building_id": 2, "file_id": "UUID('0193ea98-99ff-8c6e-b37f-cfd9e7568bb1')", "message": "Scrape response saved to GCP.", "time": "2024-12-21T19:01:30.448841Z", "severity": "INFO", "logging.googleapis.com/sourceLocation": {"file": "/workspaces/AustinRent/scraper/scraper/routes.py", "line": "42", "function": "routes:save_scrape_response"}}
[crawlee._autoscaling.autoscaled_pool] DEBUG Worker task finished
[crawlee.storages._request_queue] DEBUG There are still ids in the queue head that are pending processing ({"queue_head_ids_pending": 2})
[crawlee.beautifulsoup_crawler._beautifulsoup_crawler] INFO The crawler has reached its limit of 1 requests per crawl. All ongoing requests have now completed. Total requests processed: 1. The crawler will now shut down.
[crawlee._autoscaling.autoscaled_pool] DEBUG `is_finished_function` reports that we are finished
[crawlee._autoscaling.autoscaled_pool] DEBUG Terminating - waiting for tasks to complete
2024-12-21 19:01:30,764 INFO sqlalchemy.engine.Engine BEGIN (implicit)
[sqlalchemy.engine.Engine] INFO BEGIN (implicit) ({"message": "BEGIN (implicit)", "asctime": "2024-12-21 19:01:30,764"})
2024-12-21 19:01:30,765 INFO sqlalchemy.engine.Engine INSERT INTO scrape_responses (file_id, requested_url, loaded_url, building_id, retry_count) VALUES ($1::UUID, $2::VARCHAR, $3::VARCHAR, $4::INTEGER, $5::INTEGER) RETURNING scrape_responses.scrape_page_id, scrape_responses.created_at
[sqlalchemy.engine.Engine] INFO INSERT INTO scrape_responses (file_id, requested_url, loaded_url, building_id, retry_count) VALUES ($1::UUID, $2::VARCHAR, $3::VARCHAR, $4::INTEGER, $5::INTEGER) RETURNING scrape_responses.scrape_page_id, scrape_responses.created_at ({"message": "INSERT INTO scrape_responses (file_id, requested_url, loaded_url, building_id, retry_count) VALUES ($1::UUID, $2::VARCHAR, $3::VARCHAR, $4::INTEGER, $5::INTEGER) RETURNING scrape_responses.scrape_page_id, scrape_responses.created_at", "asctime": "2024-12-21 19:01:30,765"})
2024-12-21 19:01:30,766 INFO sqlalchemy.engine.Engine [cached since 0.6874s ago] (UUID('0193ea98-9b69-8046-96af-dc9893ff15c6'), 'https://URL1', 'https://URL1', 1, 0)
[sqlalchemy.engine.Engine] INFO [cached since 0.6874s ago] (UUID('0193ea98-9b69-8046-96af-dc9893ff15c6'), 'https://URL1', 'https://URL1', 1, 0) ({"message": "[cached since 0.6874s ago] (UUID('0193ea98-9b69-8046-96af-dc9893ff15c6'), 'https://URL1', 'https://URL1', 1, 0)", "asctime": "2024-12-21 19:01:30,766"})
2024-12-21 19:01:30,909 INFO sqlalchemy.engine.Engine COMMIT
[sqlalchemy.engine.Engine] INFO COMMIT ({"message": "COMMIT", "asctime": "2024-12-21 19:01:30,909"})
{"url": "{'https://URL1'}", "building_id": 1, "file_id": "UUID('0193ea98-9b69-8046-96af-dc9893ff15c6')", "message": "Scrape response saved to GCP.", "time": "2024-12-21T19:01:30.946939Z", "severity": "INFO", "logging.googleapis.com/sourceLocation": {"file": "/workspaces/AustinRent/scraper/scraper/routes.py", "line": "42", "function": "routes:save_scrape_response"}}
[crawlee._autoscaling.autoscaled_pool] DEBUG Worker task finished
[crawlee._autoscaling.autoscaled_pool] DEBUG Worker tasks finished
[crawlee._autoscaling.autoscaled_pool] INFO Waiting for remaining tasks to finish
[crawlee._autoscaling.autoscaled_pool] DEBUG Pool cleanup finished
[crawlee.statistics._statistics] DEBUG Persisting state of the Statistics (event_data=is_migrating=False).
[crawlee.beautifulsoup_crawler._beautifulsoup_crawler] INFO Final request statistics:
┌───────────────────────────────┬──────────┐
│ requests_finished │ 2 │
│ requests_failed │ 0 │
│ retry_histogram │ [2] │
│ request_avg_failed_duration │ None │
│ request_avg_finished_duration │ 1.623642 │
│ requests_finished_per_minute │ 63 │
│ requests_failed_per_minute │ 0 │
│ request_total_duration │ 3.247283 │
│ requests_total │ 2 │
│ crawler_runtime │ 1.919938 │
└───────────────────────────────┴──────────┘
```
</p>
</details> | closed | 2024-12-21T19:16:43Z | 2024-12-21T19:42:25Z | https://github.com/apify/crawlee-python/issues/839 | [
"t-tooling"
] | CupOfGeo | 0 |
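One detail worth noting from the log above: it ends with "The crawler has reached its limit of 1 requests per crawl", which points at a configured request cap rather than the labels themselves. A minimal sketch of raising that cap when constructing the crawler; the `router` import and start-request list stand in for the code shown in the issue, and the exact keyword should be checked against the installed crawlee version:

```python
import asyncio

from crawlee import Request
from crawlee.beautifulsoup_crawler import BeautifulSoupCrawler

from routes import router  # the router module with the JSON/HTML handlers shown above

start_requests = [
    Request.from_url("https://en.wikipedia.org/wiki/Cgroups", label="HTML", user_data={"building_id": 3}),
]

async def main() -> None:
    crawler = BeautifulSoupCrawler(
        request_handler=router,
        # Allow every queued request to be processed instead of stopping after one.
        max_requests_per_crawl=100,
    )
    await crawler.run(start_requests)

if __name__ == "__main__":
    asyncio.run(main())
```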
iperov/DeepFaceLab | deep-learning | 931 | PC Stability | Hello, it's me again. I am a little bit concerned about my PC.
My PC configurations are:-
MODEL:- HP Pavailion 15
PROCESSOR:- Intel(R) Core i7-9750H @ 2.60 GHz
RAM:- 8 GB DDR4
GRAPHICS CARD:- 4 GB NVIDIA GEFORCE GTX 1650
OPERATING SYSTEM:- Windows 10 x64 bit.
I am using SAEHD for training, i.e. the GPU for training.
The problem is that whenever I run the trainer module [(5.XSeg) train.bat]:
1. My laptop's temperature rises somewhat after a while, say after an hour or so. Is this fine?
2. The trainer module is taking about 17 hours to mask 266 segmented images. Is this normal? My fan speed is rising sharply.
Please Help. | open | 2020-10-29T08:09:45Z | 2023-06-08T21:22:01Z | https://github.com/iperov/DeepFaceLab/issues/931 | [] | Aeranstorm | 2 |
ultralytics/ultralytics | machine-learning | 18,676 | Source of YOLOv10 pretrained weights | ### Search before asking
- [X] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/ultralytics/ultralytics/discussions) and found no similar questions.
### Question
I have a question regarding the [YOLOv10 pretrained weights](https://docs.ultralytics.com/models/yolov10/#performance): do you train your own YOLOv10 models, or do you use the pretrained weights provided in the YOLOv10 repository?
### Additional
_No response_ | closed | 2025-01-14T08:57:59Z | 2025-01-16T05:49:54Z | https://github.com/ultralytics/ultralytics/issues/18676 | [
"question"
] | piupiuisland | 4 |
CorentinJ/Real-Time-Voice-Cloning | python | 1,249 | I got this error running demo_cli.py. Assistance would be appreciated. | Traceback (most recent call last):
File "c:\Users\----\Downloads\Real-Time-Voice-Cloning-master\demo_cli.py", line 80, in <module>
encoder.embed_utterance(np.zeros(encoder.sampling_rate))
File "c:\Users\----\Downloads\Real-Time-Voice-Cloning-master\encoder\inference.py", line 144, in embed_utterance
frames = audio.wav_to_mel_spectrogram(wav)
File "c:\Users\----\Downloads\Real-Time-Voice-Cloning-master\encoder\audio.py", line 58, in wav_to_mel_spectrogram
frames = librosa.feature.melspectrogram(
TypeError: melspectrogram() takes 0 positional arguments but 2 positional arguments (and 2 keyword-only arguments) were given
Here is the code that the error occurred on:
```python
def wav_to_mel_spectrogram(wav):
    """
    Derives a mel spectrogram ready to be used by the encoder from a preprocessed audio waveform.
    Note: this not a log-mel spectrogram.
    """
    frames = librosa.feature.melspectrogram(
        wav,
        sampling_rate,
        n_fft=int(sampling_rate * mel_window_length / 1000),
        hop_length=int(sampling_rate * mel_window_step / 1000),
        n_mels=mel_n_channels
    )
    return frames.astype(np.float32).T
```
Update: I believe the issue is that I am on the wrong version of librosa; would anyone know the version used here? | open | 2023-09-12T21:08:40Z | 2023-12-24T19:47:00Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1249 | [] | Lijey427 | 2 |
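For context, librosa 0.10 made the audio and sample-rate arguments of `melspectrogram` keyword-only, which matches this error. A hedged sketch of the keyword form; the parameter values are stand-ins mirroring the encoder settings usually found in this repo, and the repo's pinned librosa version should still be checked:

```python
import librosa
import numpy as np

sampling_rate = 16000     # stand-in values for the encoder params used above
mel_window_length = 25    # in milliseconds
mel_window_step = 10      # in milliseconds
mel_n_channels = 40

wav = np.zeros(sampling_rate)  # stand-in waveform, like encoder.embed_utterance's input

frames = librosa.feature.melspectrogram(
    y=wav,                 # keyword-only in librosa >= 0.10
    sr=sampling_rate,      # keyword-only in librosa >= 0.10
    n_fft=int(sampling_rate * mel_window_length / 1000),
    hop_length=int(sampling_rate * mel_window_step / 1000),
    n_mels=mel_n_channels,
)
print(frames.astype(np.float32).T.shape)
```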
miguelgrinberg/Flask-SocketIO | flask | 1,875 | Issue when emitting in threads | Hi,
I am currently facing issues when emitting an event in a separate thread.
In short:
* Main app runs as usual
* When task is open, I start a thread in the background
* In the background thread, I use *flask_socketio.emit* to emit events
* In an Angular app I react to those events
In short,
* events from all around the app are detected
* events from the worker thread do not work
I have this issue when running the app via *socketio.run* or *eventlet.wsgi.server*.
When using *flask run* or *gunicorn* I have no issues.
Any clue as to why?
I can provide a minimal example if needed. | closed | 2022-09-09T12:14:41Z | 2022-09-09T13:05:26Z | https://github.com/miguelgrinberg/Flask-SocketIO/issues/1875 | [] | ItachiSan | 0 |
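A minimal sketch of the pattern usually recommended for this setup: start the worker with `socketio.start_background_task` and emit through the `SocketIO` instance rather than the request-bound `flask_socketio.emit`; whether it resolves the eventlet case also depends on monkey patching, which is an assumption here:

```python
import eventlet
eventlet.monkey_patch()  # must happen before other imports when using eventlet

from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app, async_mode="eventlet")

def worker(task_id: int) -> None:
    # Emit through the SocketIO instance; this works outside a request context.
    for step in range(5):
        socketio.emit("task_progress", {"task": task_id, "step": step})
        socketio.sleep(1)  # cooperative sleep that plays nicely with eventlet

@socketio.on("start_task")
def start_task(data):
    # Let Flask-SocketIO create the thread/greenlet in a way the async mode understands.
    socketio.start_background_task(worker, data["task_id"])

if __name__ == "__main__":
    socketio.run(app)
```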
jupyterhub/jupyterhub-deploy-docker | jupyter | 62 | docker: host does not create user container (invalid reference format) | _From @inkrement on January 24, 2018 7:43_
I basically, tried to run the [Jupyterhub/docker-demo](https://github.com/jupyterhub/jupyterhub-deploy-docker), but upgraded to the newest docker-versions.
The server itself and Github-OAuth work fine, but when I get redirected from Github (right after authentication) I get the following error (and an uncaught exception):
```
jupyterhub | [I 2018-01-24 07:20:32.789 JupyterHub log:124] 302 GET /user/inkrement/ -> /hub/user/inkrement/ (@::ffff:137.208.40.78) 1.14ms
jupyterhub | [I 2018-01-24 07:20:32.888 JupyterHub dockerspawner:373] Container 'jupyter-inkrement' is gone
jupyterhub | [E 2018-01-24 07:20:32.911 JupyterHub user:427] Unhandled error starting inkrement's server: 500 Server Error: Internal Server Error ("invalid reference format")
jupyterhub | [I 2018-01-24 07:20:32.918 JupyterHub dockerspawner:373] Container 'jupyter-inkrement' is gone
jupyterhub | [W 2018-01-24 07:20:32.918 JupyterHub dockerspawner:344] container not found
```
I checked all running and stopped docker containers, but there is no container named "jupyter-inkrement". It seems like it was not able to spawn the docker container, but I do not know what to do. Any suggestions?
The container-docker is linked to the host-docker via volume as in the demo and I am using a quite new docker version: 17.05.0-ce, build 89658be
_Copied from original issue: jupyterhub/jupyterhub#1630_ | closed | 2018-01-26T17:54:40Z | 2022-12-05T00:54:37Z | https://github.com/jupyterhub/jupyterhub-deploy-docker/issues/62 | [
"question"
] | willingc | 5 |
Textualize/rich | python | 3,081 | [BUG] emoji overlaps with text | **Describe the bug**
I want to wrap text with emojis on both sides.
Adding an emoji on the right side of / after a piece of text works fine, but adding an emoji before text causes the two to overlap, so I need to manually add spaces. Adding spaces is not ideal because I am adding multiple emojis at once, and I need to create separate variables depending on whether the emojis appear before or after a piece of text. It also looks a bit uneven visually.
Here is an example:

**Platform**
Ubuntu 22.04.2 LTS - terminal
<details>
<summary>Click to expand</summary>
```
python -m rich.diagnose
$ python -m rich.diagnose
╭───────────────────────── <class 'rich.console.Console'> ─────────────────────────╮
│ A high level console interface. │
│ │
│ ╭──────────────────────────────────────────────────────────────────────────────╮ │
│ │ <console width=211 ColorSystem.TRUECOLOR> │ │
│ ╰──────────────────────────────────────────────────────────────────────────────╯ │
│ │
│ color_system = 'truecolor' │
│ encoding = 'utf-8' │
│ file = <_io.TextIOWrapper name='<stdout>' mode='w' encoding='UTF-8'> │
│ height = 53 │
│ is_alt_screen = False │
│ is_dumb_terminal = False │
│ is_interactive = True │
│ is_jupyter = False │
│ is_terminal = True │
│ legacy_windows = False │
│ no_color = False │
│ options = ConsoleOptions( │
│ size=ConsoleDimensions(width=211, height=53), │
│ legacy_windows=False, │
│ min_width=1, │
│ max_width=211, │
│ is_terminal=True, │
│ encoding='utf-8', │
│ max_height=53, │
│ justify=None, │
│ overflow=None, │
│ no_wrap=False, │
│ highlight=None, │
│ markup=None, │
│ height=None │
│ ) │
│ quiet = False │
│ record = False │
│ safe_box = True │
│ size = ConsoleDimensions(width=211, height=53) │
│ soft_wrap = False │
│ stderr = False │
│ style = None │
│ tab_size = 8 │
│ width = 211 │
╰──────────────────────────────────────────────────────────────────────────────────╯
╭─── <class 'rich._windows.WindowsConsoleFeatures'> ────╮
│ Windows features available. │
│ │
│ ╭───────────────────────────────────────────────────╮ │
│ │ WindowsConsoleFeatures(vt=False, truecolor=False) │ │
│ ╰───────────────────────────────────────────────────╯ │
│ │
│ truecolor = False │
│ vt = False │
╰───────────────────────────────────────────────────────╯
╭────── Environment Variables ───────╮
│ { │
│ 'TERM': 'xterm-256color', │
│ 'COLORTERM': 'truecolor', │
│ 'CLICOLOR': None, │
│ 'NO_COLOR': None, │
│ 'TERM_PROGRAM': None, │
│ 'COLUMNS': None, │
│ 'LINES': None, │
│ 'JUPYTER_COLUMNS': None, │
│ 'JUPYTER_LINES': None, │
│ 'JPY_PARENT_PID': None, │
│ 'VSCODE_VERBOSE_LOGGING': None │
│ } │
╰────────────────────────────────────╯
platform="Linux"
```
```
pip freeze | grep rich
rich==13.5.2
rich-argparse==1.1.1
```
</details>
| closed | 2023-08-10T13:35:34Z | 2023-08-10T13:38:12Z | https://github.com/Textualize/rich/issues/3081 | [
"Needs triage"
] | kss149 | 3 |
hbldh/bleak | asyncio | 498 | Move cleanup to disconnect event handler in Windows backends | From code review comment: https://github.com/hbldh/bleak/pull/450#discussion_r597064014
| open | 2021-03-29T08:55:38Z | 2022-07-25T21:00:57Z | https://github.com/hbldh/bleak/issues/498 | [
"enhancement",
"Backend: WinRT"
] | hbldh | 0 |
indico/indico | flask | 6,045 | Convert google slide URLs to PDFs (as with powerpoint etc) | **Is your feature request related to a problem? Please describe.**
People increasingly add presentations as links to google slides. This is bad as these URLs can expire/be deleted in which case the content is lost. Of course, we can ask speakers to remember to also update a pdf version, but since indico can autoconvert other file formats, it would be great if it could do it for google slide URLs as well.
**Describe the solution you'd like**
Use the Google Drive API to convert to PDF:
```
GET https://www.googleapis.com/drive/v3/files/{fileId}/export
```
as described [here](https://developers.google.com/drive/api/reference/rest/v3/files/export).
where the `{fileId}` can be extracted from the presentation URL, e.g.
```
https://docs.google.com/presentation/d/1kbmidlabdSPHUAgS2sZOCGaMqvCmjSM6Kk2p9LSH3Oo/edit#slide=id.p
```
and the `mimeType` is obviously `application/pdf` e.g. using the URL above:
```
https://developers.google.com/drive/api/reference/rest/v3/files/export?apix_params=%7B%22fileId%22%3A%221kbmidlabdSPHUAgS2sZOCGaMqvCmjSM6Kk2p9LSH3Oo%22%2C%22mimeType%22%3A%22application%2Fpdf%22%7D
```
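A rough sketch of the call from Python, in case it helps. This is only my illustration of the endpoint above (the OAuth token handling is hand-waved), not a proposal for where it would live in indico:

```python
import re

import requests


def slides_url_to_pdf(url: str, access_token: str) -> bytes:
    """Download a Google Slides presentation as PDF via the Drive export endpoint."""
    match = re.search(r"/presentation/d/([\w-]+)", url)
    if not match:
        raise ValueError(f"Not a Google Slides URL: {url}")
    file_id = match.group(1)
    resp = requests.get(
        f"https://www.googleapis.com/drive/v3/files/{file_id}/export",
        params={"mimeType": "application/pdf"},
        headers={"Authorization": f"Bearer {access_token}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.content
```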
| closed | 2023-11-21T15:34:52Z | 2024-10-10T10:00:03Z | https://github.com/indico/indico/issues/6045 | [
"enhancement"
] | EdwardMoyse | 8 |
PrefectHQ/prefect | data-science | 16,826 | Error parsing boolean in DeploymentScheduleCreate | ### Bug summary
I get the following error when trying to have a parameterized value for the active field in schedule.
Error message
```
1 validation error for DeploymentScheduleCreate
active
Input should be a valid boolean, unable to interpret input
For further information visit https://errors.pydantic.dev/2.10/v/bool_parsing
```
get-schedule-isactive.sh
```
#!/bin/sh
echo "false"
```
prefect.yaml
```
definitions:
  work_pools:
    docker_work_pool: &docker_work_pool
      name: docker-pool
      work_queue_name: "{{ get-work-pool.stdout }}"
  schedules:
    every_hour: &every_hour
      cron: "0 0 * * *"
      timezone: "America/Chicago"
      active: "{{ get-schedule-isactive.stdout }}"
  actions:
    docker_build: &docker_build
      - prefect.deployments.steps.run_shell_script:
          id: get-commit-hash
          script: git rev-parse --short HEAD
          stream_output: false
      - prefect.deployments.steps.run_shell_script:
          id: get-work-pool
          script: sh utils/get-work-pool.sh
          stream_output: false
      - prefect.deployments.steps.run_shell_script:
          id: get-schedule-isactive
          script: sh utils/get-schedule-isactive.sh
          stream_output: false
      - prefect_docker.deployments.steps.build_docker_image:
          id: build-image
          image_name: "repo/image"
          tag: "{{ get-commit-hash.stdout }}"
          dockerfile: Dockerfile
```
If I update the schedule in prefect.yaml like below, it works fine.
```
schedules:
  every_hour: &every_hour
    cron: "0 0 * * *"
    timezone: "America/Chicago"
    active: "false"
```
Is this because of pydantic? Workarounds?
### Version info
```Text
Version: 3.1.4
API version: 0.8.4
Python version: 3.12.6
Git commit: 78ee41cb
Built: Wed, Nov 20, 2024 7:37 PM
OS/Arch: linux/x86_64
Profile: ephemeral
Server type: server
Pydantic version: 2.10.3
Integrations:
prefect-docker: 0.6.2
prefect-bitbucket: 0.3.1
```
### Additional context
_No response_ | closed | 2025-01-23T15:57:34Z | 2025-01-28T16:22:55Z | https://github.com/PrefectHQ/prefect/issues/16826 | [
"bug"
] | aurany | 2 |
FactoryBoy/factory_boy | django | 998 | FuzzyInteger with low bound set does not work as intended | #### Description
My intention is to have a randomized integer with a minimum value of 1 but instead the library returns a randomized integer between 0 and 1
#### To Reproduce
my_val = fuzzy.FuzzyInteger(low=1)
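For what it's worth, the call I eventually tried, based on my reading of the `(low, high)` signature (please correct me if that reading is wrong), passes both bounds explicitly:

```python
from factory import fuzzy

# My understanding (unverified): with a single argument, that value is treated
# as the upper bound and the lower bound defaults to 0, so the call above draws
# from [0, 1]. Passing both bounds gives the minimum of 1 I wanted.
my_val = fuzzy.FuzzyInteger(1, 100)  # 100 is an arbitrary upper bound for the example
```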
| closed | 2023-02-03T15:10:48Z | 2023-02-06T14:02:05Z | https://github.com/FactoryBoy/factory_boy/issues/998 | [] | roryodonnell | 1 |
KaiyangZhou/deep-person-reid | computer-vision | 258 | Do you have a model trained on all datasets? | Great project, thank you!
Do you have a model trained on all datasets?
| closed | 2019-11-15T21:29:33Z | 2019-12-05T19:12:29Z | https://github.com/KaiyangZhou/deep-person-reid/issues/258 | [] | anuar12 | 3 |
PrefectHQ/prefect | automation | 16,982 | Upgrade `websockets.legacy` usage | ### Describe the current behavior
Currently we rely on imported objects from `websockets.legacy` which is deprecated (as can be seen here, for example: https://github.com/PrefectHQ/prefect/actions/runs/13147657643/job/36689175326?pr=16972).
### Describe the proposed behavior
We need to plan to move to the newer asyncio implementation following the guidelines outlined [here](https://websockets.readthedocs.io/en/stable/howto/upgrade.html). We believe this should be straightforward as we don't rely on anything deep cut, but opening this issue to track so we don't get caught off guard with an upgrade.
### Example Use
_No response_
### Additional context
_No response_ | open | 2025-02-05T16:51:54Z | 2025-02-05T16:51:54Z | https://github.com/PrefectHQ/prefect/issues/16982 | [
"enhancement",
"upstream dependency"
] | cicdw | 0 |
tfranzel/drf-spectacular | rest-api | 1,303 | Exception caught for serializer field ReadOnlyField with a FileField source | **Describe the bug**
I have a simple file server:
```python
from django.db import models
from django.shortcuts import render
from rest_framework import generics, serializers
class FooFile(models.Model):
fname = models.FileField(max_length=1024, unique=True)
class FooFileSerializer(serializers.HyperlinkedModelSerializer):
fname = serializers.FileField(use_url=False, required=False)
fsize = serializers.ReadOnlyField(source="fname.size") # <-- herein lies the problem
class Meta:
model = FooFile
fields = ("fname", "fsize")
class FooFileList(generics.ListCreateAPIView):
http_method_names = ["get", "post"]
serializer_class = FooFileSerializer
queryset = FooFile.objects.all()
```
Generating the OpenAPI schema using drf-spectacular prints this warning:
```
Warning [FooFileList > FooFileSerializer]: could not resolve field on model <class 'foo.models.FooFile'> with path "fname.size". This is likely a custom field that does some unknown magic. Maybe consider annotating the field/property? Defaulting to "string". (Exception: 'NoneType' object has no attribute '_meta')
```
**To Reproduce**
See https://github.com/jennydaman/spectacular-fsize-bug
**Expected behavior**
No warnings. `components.schemas.FooFile.properties.fsize.type` should be `number`.
| closed | 2024-10-01T21:03:45Z | 2024-10-02T07:44:35Z | https://github.com/tfranzel/drf-spectacular/issues/1303 | [] | jennydaman | 3 |
tensorpack/tensorpack | tensorflow | 897 | UnboundLocalError: local variable 'raw_devices' referenced before assignment | https://github.com/tensorpack/tensorpack/blob/801e29218f299905298b9bf430d2b95b527b04d5/tensorpack/graph_builder/training.py#L350-L355
I have reviewed this file, and by comparison I think `raw_devices = ['/gpu:{}'.format(k) for k in self.towers]` in line 351 should probably be defined before the `if-else` statement just like line 153-158 in the same file:
https://github.com/tensorpack/tensorpack/blob/801e29218f299905298b9bf430d2b95b527b04d5/tensorpack/graph_builder/training.py#L153-L158 | closed | 2018-09-17T11:56:57Z | 2018-09-17T16:28:11Z | https://github.com/tensorpack/tensorpack/issues/897 | [
"bug"
] | thuzhf | 2 |
KevinMusgrave/pytorch-metric-learning | computer-vision | 349 | Allow customization of faiss index type | See #348 | closed | 2021-06-29T19:53:35Z | 2021-11-28T19:21:15Z | https://github.com/KevinMusgrave/pytorch-metric-learning/issues/349 | [
"enhancement",
"fixed in dev branch"
] | KevinMusgrave | 1 |
jacobgil/pytorch-grad-cam | computer-vision | 531 | Loading a pt model and using Grad-cam | I hope that you can support me. I am trying to load a scripted model and then use Grad-CAM on it. Sadly it tells me that the hooks cannot be set up for a scripted model.
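Roughly what I am attempting, simplified; the file name and layer reference are placeholders from my eager model and may not even be the right way to point at a layer on a scripted module:

```python
import torch
from pytorch_grad_cam import GradCAM

model = torch.jit.load("model_scripted.pt").eval()  # TorchScript model saved earlier

# This is where it fails for me: GradCAM needs to register forward/backward hooks
# on the target layer, which the scripted module does not seem to allow.
target_layers = [model.layer4]  # placeholder layer name, not necessarily valid here
cam = GradCAM(model=model, target_layers=target_layers)
```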
Do you know any way in which I could fix this?
Thank you in advance | open | 2024-09-25T12:39:30Z | 2024-09-25T12:39:30Z | https://github.com/jacobgil/pytorch-grad-cam/issues/531 | [] | aipatr | 0 |
numpy/numpy | numpy | 28,019 | TST,DOC: Bump `scipy_doctest` (or remove pin) and fix new failures | @ev-br ping FYI, since the new scipy-doctest release (less than an hour ago) the refcheck has universal failures.
I suspect, this is all just bad documentation that needs fixing, but not sure yet. In either case, until fixed both CircleCI and the "benchmark" tests which also still run the refcheck are expected to fail.
(Currently, also the linter just started failing...) | closed | 2024-12-17T11:59:12Z | 2024-12-19T08:19:01Z | https://github.com/numpy/numpy/issues/28019 | [] | seberg | 6 |
gradio-app/gradio | python | 10,143 | image.no_webcam_support | ### Describe the bug
input_image = gr.Image(type='pil', label='图像', sources=['webcam'], interactive=True, show_fullscreen_button=True)
This code is deployed on the 192.168.0.3 server. I access the project from the 192.168.0.5 machine at 192.168.0.3:5019, and when I click the webcam I get the error image.no_webcam_support. Why does this happen, and how should I change it?
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
```
input_image = gr.Image(type='pil', label='图像', sources=['webcam'], interactive=True, show_fullscreen_button=True)
### Screenshot
input_image = gr.Image(type='pil', label='图像', sources=['webcam'], interactive=True, show_fullscreen_button=True)
### Logs
```shell
input_image = gr.Image(type='pil', label='图像', sources=['webcam'], interactive=True, show_fullscreen_button=True)
```
### System Info
```shell
input_image = gr.Image(type='pil', label='图像', sources=['webcam'], interactive=True, show_fullscreen_button=True)
```
### Severity
Blocking usage of gradio | closed | 2024-12-06T04:04:52Z | 2024-12-06T14:41:23Z | https://github.com/gradio-app/gradio/issues/10143 | [
"bug"
] | geekplusaa | 1 |
Gozargah/Marzban | api | 1,236 | Custom config | Hi, today I set up Marzban to use custom configs and enabled fragment for the CDN configs. That went fine, but afterwards I noticed that the direct vless tcp no-header configs no longer work. After a lot of searching I realised that when the configs are converted to custom configs, the direct configs that had no header (that is, their type was none) automatically get their type changed to http, which is why they stop working. The workaround was to go into the core inbound settings and change the type from none to http so it matches the custom configs.
Please look into why this change happens automatically in the custom configs.
ممنون | closed | 2024-08-09T14:39:03Z | 2024-08-09T16:39:27Z | https://github.com/Gozargah/Marzban/issues/1236 | [
"Duplicate"
] | Nima786 | 1 |
frappe/frappe | rest-api | 31,625 | Scroll to field for read only fields is broken (develop) | ctrl+j > jump to field > type fieldname outside of current window.
Expected behaviour: Scrolls to that field slowly
Current behaviour: nothing. | open | 2025-03-10T13:16:01Z | 2025-03-17T15:46:42Z | https://github.com/frappe/frappe/issues/31625 | [
"bug",
"UX",
"v16"
] | ankush | 4 |
shibing624/text2vec | nlp | 58 | Related paper or benchmark | Hi everyone, is there a paper or related GitHub code for [examples/training_sup_text_matching_model.py] that you could share? For example, something like an analysis of document-deduplication comparison results. | closed | 2023-03-28T08:21:57Z | 2023-03-28T11:37:24Z | https://github.com/shibing624/text2vec/issues/58 | [
"question"
] | 1264561652 | 2 |
keras-rl/keras-rl | tensorflow | 299 | recurrent-dqn examples | This mostly goes to @kirkscheper, as he seems to be the most recently active on the recurrent code.
I try running the example recurrent_dqn_atari.py and I have several problems.
In the beginning I had this problem:
```
ValueError: perm dim 5 is out of range of input rank 5 for 'permute_1/transpose' (op: 'Transpose') with input shapes: [32,?,1,84,84], [5] and with computed input tensors: input[1] = <0 2 3 4 5>.>>
```
I solved it by changing this
```
model.add(Permute((2, 3, 4, 5), batch_input_shape=input_shape))
```
with that
```
model.add(Permute((1, 3, 4, 2), batch_input_shape=input_shape))
```
But that creates a dimension problem:
```
ValueError: Error when checking input: expected permute_2_input to have 5 dimensions, but got array with shape (1, 1, 84, 84)
```
Any suggestions on how to run the example?
I want to implement recurrent in NAFAgent.
I cannot find any examples with a recurrent NAFAgent, but I need at least one working recurrent example to develop the recurrent NAFAgent from.
Thank you! | closed | 2019-03-05T16:02:26Z | 2019-06-10T17:16:58Z | https://github.com/keras-rl/keras-rl/issues/299 | [
"wontfix"
] | chriskoups | 1 |
babysor/MockingBird | pytorch | 614 | hifigan的训练无法直接使用cpu,且修改代码后无法接着训练 | **Summary[问题简述(一句话)]**
A clear and concise description of what the issue is.
hifigan的训练无法直接使用cpu,且修改代码后无法接着训练
**Env & To Reproduce[复现与环境]**
描述你用的环境、代码版本、模型
最新环境、代码版本,模型:hifigan
**Screenshots[截图(如有)]**
If applicable, add screenshots to help
将MockingBird-main\vocoder\hifigan下trian.py中41行torch.cuda.manual_seed(h.seed)改为torch..manual_seed(h.seed);
42行 device = torch.device('cuda:{:d}'.format(rank))改为device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')。
之后可以以cpu训练,但每次运行相同代码无法接着上一次训练。 | open | 2022-06-12T01:52:36Z | 2022-07-02T04:57:25Z | https://github.com/babysor/MockingBird/issues/614 | [] | HSQhere | 2 |
InstaPy/InstaPy | automation | 6,650 | Issue when trying to run InstaPy | Workspace in use: "C:/Users/Jordan Gri/InstaPy"
OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
INFO [2022-10-27 16:03:58] [shotsbyjordangri] Session started!
oooooooooooooooooooooooooooooooooooooooooooooooooooooo
INFO [2022-10-27 16:04:02] [shotsbyjordangri] - Cookie file not found, creating cookie...
WARNING [2022-10-27 16:04:13] [shotsbyjordangri] Login A/B test detected! Trying another string...
WARNING [2022-10-27 16:04:18] [shotsbyjordangri] Could not pass the login A/B test. Trying last string...
ERROR [2022-10-27 16:04:23] [shotsbyjordangri] Login A/B test failed!
b"Message: Unable to locate element: //div[text()='Log In']\nStacktrace:\nWebDriverError@chrome://remote/content/shared/webdriver/Errors.jsm:186:5\nNoSuchElementError@chrome://remote/content/shared/webdriver/Errors.jsm:398:5\nelement.find/</<@chrome://remote/content/marionette/element.js:300:16\n"
Traceback (most recent call last):
File "C:\Users\Jordan Gri\Desktop\Python Projects\InstaPy-master\instapy\login_util.py", line 337, in login_user
login_elem = browser.find_element(
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Jordan Gri\AppData\Local\Programs\Python\Python311\Lib\site-packages\selenium\webdriver\remote\webdriver.py", line 856, in find_element
return self.execute(Command.FIND_ELEMENT, {
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Jordan Gri\AppData\Local\Programs\Python\Python311\Lib\site-packages\selenium\webdriver\remote\webdriver.py", line 429, in execute
self.error_handler.check_response(response)
File "C:\Users\Jordan Gri\AppData\Local\Programs\Python\Python311\Lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 243, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: //button[text()='Log In']
Stacktrace:
WebDriverError@chrome://remote/content/shared/webdriver/Errors.jsm:186:5
NoSuchElementError@chrome://remote/content/shared/webdriver/Errors.jsm:398:5
element.find/</<@chrome://remote/content/marionette/element.js:300:16
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Jordan Gri\Desktop\Python Projects\InstaPy-master\instapy\login_util.py", line 343, in login_user
login_elem = browser.find_element(
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Jordan Gri\AppData\Local\Programs\Python\Python311\Lib\site-packages\selenium\webdriver\remote\webdriver.py", line 856, in find_element
return self.execute(Command.FIND_ELEMENT, {
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Jordan Gri\AppData\Local\Programs\Python\Python311\Lib\site-packages\selenium\webdriver\remote\webdriver.py", line 429, in execute
self.error_handler.check_response(response)
File "C:\Users\Jordan Gri\AppData\Local\Programs\Python\Python311\Lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 243, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: //button[text()='Log In']
Stacktrace:
WebDriverError@chrome://remote/content/shared/webdriver/Errors.jsm:186:5
NoSuchElementError@chrome://remote/content/shared/webdriver/Errors.jsm:398:5
element.find/</<@chrome://remote/content/marionette/element.js:300:16
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Jordan Gri\Desktop\Python Projects\InstaPy-master\instapy\login_util.py", line 350, in login_user
login_elem = browser.find_element(
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Jordan Gri\AppData\Local\Programs\Python\Python311\Lib\site-packages\selenium\webdriver\remote\webdriver.py", line 856, in find_element
return self.execute(Command.FIND_ELEMENT, {
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Jordan Gri\AppData\Local\Programs\Python\Python311\Lib\site-packages\selenium\webdriver\remote\webdriver.py", line 429, in execute
self.error_handler.check_response(response)
File "C:\Users\Jordan Gri\AppData\Local\Programs\Python\Python311\Lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 243, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: //div[text()='Log In']
Stacktrace:
WebDriverError@chrome://remote/content/shared/webdriver/Errors.jsm:186:5
NoSuchElementError@chrome://remote/content/shared/webdriver/Errors.jsm:398:5
element.find/</<@chrome://remote/content/marionette/element.js:300:16
..............................................................................................................................
CRITICAL [2022-10-27 16:04:23] [shotsbyjordangri] Unable to login to Instagram! You will find more information in the logs above.
''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
ERROR [2022-10-27 16:04:23] [shotsbyjordangri] You have too few comments, please set at least 10 distinct comments to avoid looking suspicious.
INFO [2022-10-27 16:04:23] [shotsbyjordangri] Sessional Live Report:
|> No any statistics to show
[Session lasted 31.08 seconds]
OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
INFO [2022-10-27 16:04:23] [shotsbyjordangri] Session ended!
ooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
| open | 2022-10-27T19:05:28Z | 2022-11-14T20:53:17Z | https://github.com/InstaPy/InstaPy/issues/6650 | [] | eastcoastboys | 9 |
Buuntu/fastapi-react | sqlalchemy | 204 | Front end does not respond to mouse. | The only way to navigate the front end is with the tab key. Not a frontend guy but I'm not seeing any errors. A lot of npm warnings. It looks ok. I have noticed if I try to render only a button it is also unresponsive. | closed | 2023-07-24T01:57:06Z | 2023-07-25T22:43:34Z | https://github.com/Buuntu/fastapi-react/issues/204 | [] | ddcroft73 | 3 |
streamlit/streamlit | machine-learning | 10,529 | st.segmented_control ==> use_container_width parameter | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [x] I added a descriptive title and summary to this issue.
### Summary
Support the use_container_width parameter for st.segmented_control and st.pill
### Why?
Improve UI of apps by reducing white space and allowing size of options in segmented control to expand/contract dynamically to current screen size
### How?
test = st.segmented_control(
label="Filter Options",
options=["One", "Two", "Three"],
label_visibility="collapsed",
**use_container_width=True**
)
### Additional Context
_No response_ | open | 2025-02-26T20:17:32Z | 2025-02-26T22:25:20Z | https://github.com/streamlit/streamlit/issues/10529 | [
"type:enhancement",
"feature:st.segmented_control",
"feature:st.pills"
] | Anthony-Marcovecchio | 2 |
d2l-ai/d2l-en | machine-learning | 1,816 | DEEP LEARNING | closed | 2021-06-26T08:42:47Z | 2021-06-26T21:57:35Z | https://github.com/d2l-ai/d2l-en/issues/1816 | [] | ardey26 | 1 |
|
0b01001001/spectree | pydantic | 194 | [BUG] Not Finding Blueprint routes when using url_prefix | ## Not Finding Blueprint routes when using url_prefix
It seems that the lib doesn't load Blueprints routes if the Blueprint uses a prefix, I've opened a PR explaining the issue and the possible solution in depth here: https://github.com/0b01001001/spectree/pull/193#issue-1107430572
| closed | 2022-01-18T22:38:39Z | 2022-01-19T13:13:45Z | https://github.com/0b01001001/spectree/issues/194 | [] | guilhermelou | 0 |
ymcui/Chinese-BERT-wwm | tensorflow | 154 | Were the PyTorch models pre-trained with PyTorch code, or converted from TF after pre-training? | Hi author, I have a few questions and would appreciate your help, thanks.
1. Were the PyTorch models pre-trained with PyTorch code, or pre-trained with TF and then converted to PyTorch?
2. If they were pre-trained with TF and then converted, is there much difference between loading the model for training with TF versus PyTorch?
3. Was RoBERTa pre-trained with dynamic masking?
谢谢 | closed | 2020-10-25T12:48:49Z | 2020-11-03T05:44:31Z | https://github.com/ymcui/Chinese-BERT-wwm/issues/154 | [
"stale"
] | daniellibin | 3 |
fastapi/sqlmodel | sqlalchemy | 176 | How to accomplish Read/Write transactions with a one to many relationship | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
class User(SQLModel):
    __tablename__ = "users"
    id: Optional[str]
    cars: List[Car] = Relationship(sa_relationship=RelationshipProperty("Car", back_populates="user"))

class Car(SQLModel):
    ...
    user_id: str = Field(default=None, foreign_key="users.id")
    user: User = Relationship(sa_relationship=RelationshipProperty("User", back_populates="cars"))
    is_main_car: bool
```
### Description
I have two tables that have a many to one relationship, such as the one described above. Any given user can only have a single car that `is_main_car`. Additionally, the first car a user gets must be the main car.
I am trying to determine how the transactional semantics work with this relationship within a Session. If I read the `user`, and then use the `user.cars` field to determine if the user has 0 cars or already has a main car, can I rely on that condition still being true when I write my new main `Car` row to the `Cars` table (assuming it is all within a single Session)?
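For context, this is the kind of pattern I am considering. It is only a sketch of my idea using a row lock, with made-up session handling, not something I have confirmed is the right approach:

```python
from sqlmodel import Session, select


def add_car(engine, user_id: str) -> None:
    with Session(engine) as session:
        # Lock the user row for the duration of the transaction (SELECT ... FOR UPDATE),
        # so concurrent writers serialize on it before we inspect user.cars.
        user = session.exec(
            select(User).where(User.id == user_id).with_for_update()
        ).one()
        is_first = len(user.cars) == 0
        car = Car(user_id=user.id, is_main_car=is_first)
        session.add(car)
        session.commit()
```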
### Operating System
macOS
### Operating System Details
_No response_
### SQLModel Version
0.0.4
### Python Version
3.9.7
### Additional Context
_No response_ | open | 2021-12-03T22:09:13Z | 2021-12-03T22:10:06Z | https://github.com/fastapi/sqlmodel/issues/176 | [
"question"
] | br-follow | 0 |
django-import-export/django-import-export | django | 1,421 | Admin site: Invalid date/times during import generate confusing errors | **Describe the bug**
If a datetime is invalid during import, this information is reported via the admin site confirmation page. However it is not clear what the exact nature of the error is.
The error is reported, but it looks like the import was ok, because the date field is rendered:

**To Reproduce**
Steps to reproduce the behavior:
1. Edit the file `tests/core/exports/books1.csv`
2. Add a column called `added` with value: `2022/02/17 19:46:59` (this is a date which cannot be parsed by default)
3. Import the file via the Admin console
4. See error
**Versions (please complete the following information):**
- Django Import Export: 3.0.0 (beta)
- Python 3.9
- Django 4.0
**Expected behavior**
It would be best to see a clear indication of what the problem is. Note the original [exception](https://github.com/django-import-export/django-import-export/blob/033f803c5994ceba9da8b610819ee5b52a630bf7/import_export/widgets.py#L229) is:
> time data '2022/02/17 19:46:59' does not match format '%Y-%m-%d %H:%M:%S'
This information is lost.
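For reference, the workaround I am using locally so the value parses at all, which is exactly why the lost message matters: without it, it is hard to know that this is what is needed. The resource below is my own sketch against the test model, not project code:

```python
from import_export import fields, resources, widgets


class BookResource(resources.ModelResource):
    # Explicit format so '2022/02/17 19:46:59' can be parsed during import.
    added = fields.Field(
        attribute="added",
        column_name="added",
        widget=widgets.DateTimeWidget(format="%Y/%m/%d %H:%M:%S"),
    )

    class Meta:
        model = Book  # the Book model from the test app, shown here only for illustration
```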
| closed | 2022-04-13T21:10:27Z | 2023-10-24T18:02:14Z | https://github.com/django-import-export/django-import-export/issues/1421 | [
"bug"
] | matthewhegarty | 1 |
FactoryBoy/factory_boy | sqlalchemy | 199 | unable to get different dates with FuzzyDateTime | Hello,
In Django tests, i have
``` python
class EventFactory(factory.django.DjangoModelFactory):
    ...
    dtstart = FuzzyDateTime(datetime.datetime(2008, 1, 1, tzinfo=UTC), datetime.datetime(2009, 1, 1, tzinfo=UTC),
                            force_hour=10,
                            force_minute=30,
                            force_second=0, force_microsecond=0).evaluate(2, None, False)
```
and i use it:
```
self.event = EventFactory.create()
self.event2 = EventFactory.create()
self.event3 = EventFactory.create()
```
Displaying the resulting dtstart, i got:
dtstart: "2008-07-21 10:30:00+00:00"
dtstart: "2008-07-21 10:30:00+00:00"
dtstart: "2008-07-21 10:30:00+00:00"
The dates are the same, which is not what I expect.
What I don't understand is that when I try it in a Python shell, every time I call FuzzyDateTime(...) the result is different.
Am I missing something?
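While writing this up I started wondering whether the `.evaluate(...)` call is the culprit: it runs once at class-definition time, so the declaration may end up being a frozen value rather than a fuzzer. The variant I am about to test (my guess only):

```python
class EventFactory(factory.django.DjangoModelFactory):
    ...
    # Declare the fuzzer itself; factory_boy should then evaluate it per instance.
    dtstart = FuzzyDateTime(
        datetime.datetime(2008, 1, 1, tzinfo=UTC),
        datetime.datetime(2009, 1, 1, tzinfo=UTC),
        force_hour=10,
        force_minute=30,
        force_second=0,
        force_microsecond=0,
    )
```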
thanks in advance for help,
gerard
| closed | 2015-04-21T17:00:30Z | 2015-10-20T21:50:38Z | https://github.com/FactoryBoy/factory_boy/issues/199 | [
"Q&A",
"Doc"
] | jdh13 | 5 |
reloadware/reloadium | pandas | 95 | PyCharm with virtualenv and reloadium plugin does not work - No module named reloadium.corium | PyCharm Pro 2022.3.1 with virtualenv and Python 3.10 after icon click "Debug 'dl' with Reloadium" debug console output:
/home/user/py310env/bin/python -m reloadium pydev_proxy /home/user/pycharm/plugins/python/helpers/pydev/pydevd.py --multiprocess --save-signatures --qt-support=auto --client 127.0.0.1 --port 41999 --file /home/user/XXX/yyy/dl.py
It seems like your platform or Python version are not supported yet.
Windows, Linux, macOS and Python 64 bit >= 3.7 (>= 3.9 for M1) <= 3.10 are currently supported.
Please submit a github issue if you believe Reloadium should be working on your system at
https://github.com/reloadware/reloadium
To see the exception run reloadium with environmental variable RW_DEBUG=True
Traceback (most recent call last):
File "/usr/lib/python3.10/runpy.py", line 187, in _run_module_as_main
mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
File "/usr/lib/python3.10/runpy.py", line 146, in _get_module_details
return _get_module_details(pkg_main_name, error)
File "/usr/lib/python3.10/runpy.py", line 110, in _get_module_details
__import__(pkg_name)
File "/home/user/.reloadium/package/3.7/reloadium/__init__.py", line 4, in <module>
pre_import_check()
File "/home/user/.reloadium/package/3.7/reloadium/__utils__.py", line 21, in pre_import_check
import reloadium.corium
**ModuleNotFoundError: No module named 'reloadium.corium'**
Process finished with exit code 1
| closed | 2023-01-30T11:39:30Z | 2023-06-17T02:37:48Z | https://github.com/reloadware/reloadium/issues/95 | [] | saphireee | 6 |
gunthercox/ChatterBot | machine-learning | 1,711 | data error |

this is the error i got every time i run it.
i don't know what to do.please tell me what to do.
| closed | 2019-04-24T18:08:02Z | 2020-02-21T21:34:46Z | https://github.com/gunthercox/ChatterBot/issues/1711 | [] | sadmansad2003 | 10 |
PaddlePaddle/PaddleNLP | nlp | 9,399 | [Question]: 现在paddle应该怎么安装,我用离线安装,之前没问题现在不行了 | ### 请提出你的问题

之前可以安装,现在不行了 | closed | 2024-11-11T02:19:39Z | 2024-11-11T02:34:16Z | https://github.com/PaddlePaddle/PaddleNLP/issues/9399 | [
"question"
] | liuzhipengchd | 0 |
huggingface/transformers | pytorch | 36,123 | torch._subclasses.fake_tensor.DataDependentOutputException: aten._local_scalar_dense.default with `_prepare_4d_attention_mask_for_sdpa( |
> Hello @fxmarty
>
> When I try using torch.compile by using `_attn_implementation="sdpa"` in `BertConfig`, I get the error coming from `_prepare_4d_attention_mask_for_sdpa()` whichis because of the data dependent flow.
>
> Specifically,
>
>
>
> ```
>
> File "/home/amodab01/anaconda3/envs/ml_training/lib/python3.11/site-packages/transformers/models/bert/modeling_bert.py", line 1108, in forward
>
> extended_attention_mask = _prepare_4d_attention_mask_for_sdpa(
>
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>
> File "/home/amodab01/anaconda3/envs/ml_training/lib/python3.11/site-packages/transformers/modeling_attn_mask_utils.py", line 448, in _prepare_4d_attention_mask_for_sdpa
>
> if not is_tracing and torch.all(mask == 1):
>
> File "/home/amodab01/anaconda3/envs/ml_training/lib/python3.11/site-packages/torch/_subclasses/functional_tensor.py", line 411, in __torch_dispatch__
>
> outs_unwrapped = func._op_dk(
>
> ^^^^^^^^^^^^
>
> File "/home/amodab01/anaconda3/envs/ml_training/lib/python3.11/site-packages/torch/utils/_stats.py", line 20, in wrapper
>
> return fn(*args, **kwargs)
>
> ^^^^^^^^^^^^^^^^^^^
>
> File "/home/amodab01/anaconda3/envs/ml_training/lib/python3.11/site-packages/torch/_subclasses/fake_tensor.py", line 896, in __torch_dispatch__
>
> return self.dispatch(func, types, args, kwargs)
>
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>
> File "/home/amodab01/anaconda3/envs/ml_training/lib/python3.11/site-packages/torch/_subclasses/fake_tensor.py", line 1241, in dispatch
>
> return self._cached_dispatch_impl(func, types, args, kwargs)
>
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>
> File "/home/amodab01/anaconda3/envs/ml_training/lib/python3.11/site-packages/torch/_subclasses/fake_tensor.py", line 974, in _cached_dispatch_impl
>
> output = self._dispatch_impl(func, types, args, kwargs)
>
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>
> File "/home/amodab01/anaconda3/envs/ml_training/lib/python3.11/site-packages/torch/_subclasses/fake_tensor.py", line 1431, in _dispatch_impl
>
> op_impl_out = op_impl(self, func, *args, **kwargs)
>
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>
> File "/home/amodab01/anaconda3/envs/ml_training/lib/python3.11/site-packages/torch/_subclasses/fake_impls.py", line 150, in dispatch_to_op_implementations_dict
>
> return op_implementations_dict[func](fake_mode, func, *args, **kwargs)
>
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>
> File "/home/amodab01/anaconda3/envs/ml_training/lib/python3.11/site-packages/torch/_subclasses/fake_impls.py", line 284, in local_scalar_dense
>
> raise DataDependentOutputException(func)
>
> torch._subclasses.fake_tensor.DataDependentOutputException: aten._local_scalar_dense.default
>
> ```
>
> Is this related to https://github.com/pytorch/pytorch/pull/120400, and do you anticipate there's any solution to this? Ofcourse turning SDPA off works
>
>
_Originally posted by @amodab01 in [221aaec](https://github.com/huggingface/transformers/commit/221aaec6ecf7558e4956dadd662d7d3adb22e420#r152370315)_ | closed | 2025-02-10T19:31:51Z | 2025-03-21T08:04:39Z | https://github.com/huggingface/transformers/issues/36123 | [] | amodab01 | 1 |
joerick/pyinstrument | django | 341 | Customize profile filename | It would be nice to easily be able to customize the profile filename in the middleware https://github.com/joerick/pyinstrument/blob/4b37f8cdc531be41a7f7e57932f0b770244025d5/pyinstrument/middleware.py#L78 | open | 2024-10-04T14:34:32Z | 2024-10-04T16:34:13Z | https://github.com/joerick/pyinstrument/issues/341 | [] | maingoh | 1 |
pytest-dev/pytest-xdist | pytest | 958 | Suppress header output | Is there any way to suppress this output at the start of `pytest -n auto`? `-q` has no output.
```
[gw0] darwin Python 3.9.17 cwd: /some/path
[gw1] darwin Python 3.9.17 cwd: /some/path
[gw2] darwin Python 3.9.17 cwd: /some/path
[gw3] darwin Python 3.9.17 cwd: /some/path
[gw4] darwin Python 3.9.17 cwd: /some/path
[gw5] darwin Python 3.9.17 cwd: /some/path
[gw6] darwin Python 3.9.17 cwd: /some/path
[gw7] darwin Python 3.9.17 cwd: /some/path
[gw8] darwin Python 3.9.17 cwd: /some/path
[gw9] darwin Python 3.9.17 cwd: /some/path
[gw0] Python 3.9.17 (main, Jun 12 2023, 14:44:48) -- [Clang 14.0.3 (clang-1403.0.22.14.1)]
[gw1] Python 3.9.17 (main, Jun 12 2023, 14:44:48) -- [Clang 14.0.3 (clang-1403.0.22.14.1)]
[gw2] Python 3.9.17 (main, Jun 12 2023, 14:44:48) -- [Clang 14.0.3 (clang-1403.0.22.14.1)]
[gw3] Python 3.9.17 (main, Jun 12 2023, 14:44:48) -- [Clang 14.0.3 (clang-1403.0.22.14.1)]
[gw4] Python 3.9.17 (main, Jun 12 2023, 14:44:48) -- [Clang 14.0.3 (clang-1403.0.22.14.1)]
[gw6] Python 3.9.17 (main, Jun 12 2023, 14:44:48) -- [Clang 14.0.3 (clang-1403.0.22.14.1)]
[gw5] Python 3.9.17 (main, Jun 12 2023, 14:44:48) -- [Clang 14.0.3 (clang-1403.0.22.14.1)]
[gw7] Python 3.9.17 (main, Jun 12 2023, 14:44:48) -- [Clang 14.0.3 (clang-1403.0.22.14.1)]
[gw8] Python 3.9.17 (main, Jun 12 2023, 14:44:48) -- [Clang 14.0.3 (clang-1403.0.22.14.1)]
gw0 [2] / gw1 [2] / gw2 [2] / gw3 ok / gw4 ok / gw5 ok / gw6 ok / gw7 ok / gw8 ok / gw9 gw0 [2] / gw1 [2] / gw2 [2] / gw3 [2] / gw4 ok / gw5 ok / gw6 ok / gw7 ok / gw8 ok / gw9
```
This is a large log spew that is irrelevant to the user running the test.
I tried searching through the issues here and docs, but couldn't find a way that would suppress this output. | open | 2023-10-23T18:38:09Z | 2023-10-26T12:48:47Z | https://github.com/pytest-dev/pytest-xdist/issues/958 | [] | xixixao | 5 |
aiogram/aiogram | asyncio | 919 | Prevent handlers from cancelling | Perhaps I have gaps in understanding aiogram, but why can't I make handler cancellation protection?
```
import asyncio
import logging
import os
from aiogram import Bot, Dispatcher, types
#API_TOKEN = os.getenv("BOT_TOKEN")
API_TOKEN = "TOKEN"
# Configure logging
logging.basicConfig(level=logging.INFO)
# Initialize bot and dispatcher
bot = Bot(token=API_TOKEN)
dp = Dispatcher(bot)
def shielded(fn):
async def wrapped(*args, **kwargs):
await asyncio.shield(fn(*args, **kwargs))
return wrapped
@dp.message_handler()
@shielded
async def echo(message: types.Message, *args, **kwargs):
try:
await asyncio.sleep(7)
await message.answer(message.text)
except asyncio.CancelledError:
print("handler cancelled :(")
async def cancel_dp_with_delay(dp, sec):
await asyncio.sleep(sec)
dp.stop_polling()
async def main():
asyncio.create_task(cancel_dp_with_delay(dp, 5))
await dp.start_polling()
await dp.wait_closed()
await asyncio.sleep(3)
if __name__ == '__main__':
asyncio.run(main())
```
| closed | 2022-06-08T22:14:13Z | 2022-10-23T12:42:30Z | https://github.com/aiogram/aiogram/issues/919 | [
"question issue",
"2.x"
] | jemeraldo | 4 |
globaleaks/globaleaks-whistleblowing-software | sqlalchemy | 4,425 | Make it possible for an administrator so send activation links to users upon their creation | ### What version of GlobaLeaks are you using?
5.0.56
### What browser(s) are you seeing the problem on?
N/A
### What operating system(s) are you seeing the problem on?
N/A
### Describe the issue
After upgrading our test environment to 5.0.56, admins cannot send account activation mails. Only the Escrow key admin can do so:
Regular admin:

Escrow key admin:

### Proposed solution
_No response_ | open | 2025-03-04T10:06:31Z | 2025-03-10T08:40:53Z | https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/4425 | [
"C: Client",
"C: Backend",
"T: Feature"
] | schris-dk | 6 |
frappe/frappe | rest-api | 31,299 | DocType: Tables in Field Type "Text Editor" - deleting rows and columns broken | <!--
Welcome to the Frappe Framework issue tracker! Before creating an issue, please heed the following:
1. This tracker should only be used to report bugs and request features / enhancements to Frappe
- For questions and general support, use https://stackoverflow.com/questions/tagged/frappe
- For documentation issues, refer to https://frappeframework.com/docs/user/en or the developer cheetsheet https://github.com/frappe/frappe/wiki/Developer-Cheatsheet
2. Use the search function before creating a new issue. Duplicates will be closed and directed to
the original discussion.
3. When making a bug report, make sure you provide all required information. The easier it is for
maintainers to reproduce, the faster it'll be fixed.
4. If you think you know what the reason for the bug is, share it with us. Maybe put in a PR 😉
-->
## Description of the issue
Using a Text Editor field with content type "Rich Text": deleting rows and columns from tables does not work anymore. Rows can't be deleted, and on deleting columns always the first column gets deleted.
## Context information (for bug reports)
I found this bug while creating a Web Page and tried to insert a table, see below:

**Output of `bench version`**
```
erpnext 15.52.0
frappe 15.56.0
```
## Steps to reproduce the issue
1. Create a Web Page as shown above. Use "Rich Text" for Content Type.
2. Insert a table and fill in some data.
3. Try to remove a row.
4. Try to remove a column
5. Try to insert a row in the middle of the table
### Observed result
1. Try to remove a row: not possible
2. Try to remove a column: wrong column is deleted
3. Try to insert a row in the middle of the table: row is inserted, but data is "shifted" between cells below the new row.
### Expected result
1. Try to remove a row: current row is deleted
2. Try to remove a column: current column is deleted
3. Try to insert a row in the middle of the table: row is inserted , data is not "shifted"
### Stacktrace / full error message
```
Does not occur
```
## Additional information
OS version / distribution, `Frappe` install method, etc.
debian bullseye, manual install | open | 2025-02-18T07:06:32Z | 2025-03-13T12:19:59Z | https://github.com/frappe/frappe/issues/31299 | [
"bug"
] | zongo811 | 2 |
wkentaro/labelme | deep-learning | 1,292 | module 'labelme.utils' has no attribute 'label_colormap' | ### Provide environment information
I am trying to convert my .json files containing my labels (semantic segmentation, polygonal bounding box information) to VOC segmentation format for visualizing the segmentation masks over the input pixels of my image. I then get the error:
module 'labelme.utils' has no attribute 'label_colormap'
I am following the example here for VOC segmentation format dataset creation:
https://github.com/wkentaro/labelme/tree/main/examples/semantic_segmentation
(base) pravin@AdminisatorsMBP SEGMENTATION %
./labelme2voc.py Images_To_Segment Images_To_Segment_voc5 --labels labels.txt
Creating dataset: Images_To_Segment_voc5
class_names: ('_background_', 'vein')
Saved class_names: Images_To_Segment_voc5/class_names.txt
Traceback (most recent call last):
File "/Users/pravin/Documents/SEGMENTATION/./labelme2voc.py", line 95, in <module>
main()
File "/Users/pravin/Documents/SEGMENTATION/./labelme2voc.py", line 56, in main
colormap = labelme.utils.label_colormap(255)
AttributeError: module 'labelme.utils' has no attribute 'label_colormap'
(base) pravin@AdminisatorsMBP SEGMENTATION % pwd
/Users/pravin/Documents/SEGMENTATION
Please help.
I tried:
cd labelme
pip install -e .
But this did not fix the issue.
### What OS are you using?
Mac OS Ventura 13.2.1
### Describe the Bug
The same description and console output as in the environment section above (duplicated here by the issue form).
### Expected Behavior
_No response_
### To Reproduce
_No response_ | closed | 2023-06-12T10:24:01Z | 2023-07-15T10:06:03Z | https://github.com/wkentaro/labelme/issues/1292 | [] | pravinrav | 4 |
jacobgil/pytorch-grad-cam | computer-vision | 276 | Why does the red color of the area become lighter and lighter the more times of training? | Why does the red color of the area become lighter and lighter the more times of training? | closed | 2022-06-23T12:30:34Z | 2022-07-01T13:18:35Z | https://github.com/jacobgil/pytorch-grad-cam/issues/276 | [] | zhangzherui123 | 1 |
piskvorky/gensim | data-science | 3,008 | In softmax layer of word2vec, do we use cosine similarity or dot product? | <!--
**IMPORTANT**:
- Use the [Gensim mailing list](https://groups.google.com/forum/#!forum/gensim) to ask general or usage questions. Github issues are only for bug reports.
- Check [Recipes&FAQ](https://github.com/RaRe-Technologies/gensim/wiki/Recipes-&-FAQ) first for common answers.
Github bug reports that do not include relevant information and context will be closed without an answer. Thanks!
-->
#### Problem description
I have read the paper "Efficient Estimation of Word Representations in Vector Space". The article says that cosine similarity is used in the softmax layer of word2vec. But someone says that gensim uses the **dot product** in the softmax layer of word2vec, while **cosine similarity** is only used between word vectors after they have been trained. I have not read the source code, and I wanted to confirm whether the dot product is used in the softmax layer and cosine similarity only after training.
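For what it's worth, my reading of the original skip-gram formulation (which could be off) is that the softmax itself is over raw dot products, with cosine similarity only appearing later in similarity queries on the trained vectors:

```latex
% Skip-gram softmax as in Mikolov et al.: a plain dot product inside the exponent,
% with no normalisation of the vectors, i.e. not a cosine similarity.
p(w_O \mid w_I) = \frac{\exp\!\left({v'_{w_O}}^{\top} v_{w_I}\right)}
                       {\sum_{w=1}^{W} \exp\!\left({v'_{w}}^{\top} v_{w_I}\right)}
```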
| closed | 2020-11-27T09:42:37Z | 2020-12-14T19:26:15Z | https://github.com/piskvorky/gensim/issues/3008 | [] | tranquil-coder | 1 |
rthalley/dnspython | asyncio | 637 | dns.asyncresolver timeout | Hi,
While experimenting with `dns.asyncresolver` I encountered an error which occurs only on my Windows 10 machine and not on WSL or other Linux hosts.
Running the following code throws a timeout exception:
```
import asyncio
import dns.asyncresolver
import dns.asyncbackend
import dns.exception
from typing import List

async def asyncquery(target, type="A"):
    record_type = "A"
    resolver = dns.asyncresolver.Resolver()
    resolver.nameservers = ["1.1.1.1", "8.8.8.8"]
    resolver.timeout = 10.0
    resolver.lifetime = 10.0
    try:
        answers = await resolver.resolve(target, rdtype=record_type)
        records = [rdata for rdata in answers]
    except dns.resolver.NoAnswer:
        print(f'{target} query returned no answer')
        return None
    except dns.exception.Timeout:
        print(f'{target} query timed out')
        return None
    return records

if __name__ == "__main__":
    target = "google.com"
    res = asyncio.run(asyncquery(target, "A"))
    if res:
        print(f"Results")
        for r in res:
            print(r)
```
I do see a valid response in Wireshark, but it doesn't seem to be captured by Python.

The non-async resolver works just fine though 🤷♀️
Python version: 3.9.1
dnspython: 2.1.0
Any ideas what can cause this?
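One thing I plan to try next, purely a guess on my part about the default Proactor event loop on Windows (so please ignore if irrelevant):

```python
import sys

# Guess: force the selector-based event loop on Windows before running the query,
# in case the Proactor loop is what interferes with the UDP traffic here.
if sys.platform == "win32":
    asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy())

res = asyncio.run(asyncquery(target, "A"))
```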
Thanks for the help! | closed | 2021-02-18T19:29:15Z | 2021-11-02T00:49:22Z | https://github.com/rthalley/dnspython/issues/637 | [
"Bug",
"Fixed"
] | LizardBlizzard | 5 |
tflearn/tflearn | tensorflow | 392 | [utils.py] IndexError: indices are out-of-bounds | Hi,
I probably did something wrong but... I really don't find it.
During the fit call, I have this exception:
```
X (301, 2) Y (301, 2)
---------------------------------
Run id: JTHQIT
Log directory: /tmp/tflearn_logs/
---------------------------------
Training samples: 301
Validation samples: 0
--
Exception in thread Thread-3:
Traceback (most recent call last):
File "/home/shazz/projects/anaconda/envs/tensorflow/lib/python3.5/threading.py", line 914, in _bootstrap_inner
self.run()
File "/home/shazz/projects/anaconda/envs/tensorflow/lib/python3.5/threading.py", line 862, in run
self._target(*self._args, **self._kwargs)
File "/home/shazz/projects/anaconda/envs/tensorflow/lib/python3.5/site-packages/tflearn/data_flow.py", line 186, in fill_feed_dict_queue
data = self.retrieve_data(batch_ids)
File "/home/shazz/projects/anaconda/envs/tensorflow/lib/python3.5/site-packages/tflearn/data_flow.py", line 221, in retrieve_data
utils.slice_array(self.feed_dict[key], batch_ids)
File "/home/shazz/projects/anaconda/envs/tensorflow/lib/python3.5/site-packages/tflearn/utils.py", line 187, in slice_array
return X[start]
File "/home/shazz/projects/anaconda/envs/tensorflow/lib/python3.5/site-packages/pandas/core/frame.py", line 2051, in __getitem__
return self._getitem_array(key)
File "/home/shazz/projects/anaconda/envs/tensorflow/lib/python3.5/site-packages/pandas/core/frame.py", line 2096, in _getitem_array
return self.take(indexer, axis=1, convert=True)
File "/home/shazz/projects/anaconda/envs/tensorflow/lib/python3.5/site-packages/pandas/core/generic.py", line 1669, in take
convert=True, verify=True)
File "/home/shazz/projects/anaconda/envs/tensorflow/lib/python3.5/site-packages/pandas/core/internals.py", line 3932, in take
indexer = maybe_convert_indices(indexer, n)
File "/home/shazz/projects/anaconda/envs/tensorflow/lib/python3.5/site-packages/pandas/core/indexing.py", line 1872, in maybe_convert_indices
raise IndexError("indices are out-of-bounds")
IndexError: indices are out-of-bounds
```
I really started from the titanic example, I just took a different dataset (weight, height => sex), that I clean using pandas, that's the only difference.
Code:
```
import tflearn
import data_importer
X, Y = data_importer.load_data(0)
print("X", X.shape, "Y", Y.shape)
# Build neural network
net = tflearn.input_data(shape=[None, 2])
net = tflearn.fully_connected(net, 32)
net = tflearn.fully_connected(net, 32)
net = tflearn.fully_connected(net, 2, activation='softmax')
net = tflearn.regression(net)
# Define model
model = tflearn.DNN(net)
# Start training (apply gradient descent algorithm)
model.fit(X, Y, n_epoch=10, batch_size=1, show_metric=True)
```
Data importer code and data are available here:
https://github.com/shazz/tflearn-experiments/tree/master/cdc
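One thing I suspect but have not verified: the pandas error at the bottom of the trace makes me think tflearn wants plain numpy arrays rather than DataFrames, so I am going to try converting before the fit, e.g.:

```python
# Guess at a workaround: hand tflearn numpy arrays instead of pandas objects.
X = X.values if hasattr(X, "values") else X
Y = Y.values if hasattr(Y, "values") else Y
model.fit(X, Y, n_epoch=10, batch_size=1, show_metric=True)
```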
Any help welcome... is it a bug in my code (probably) or in tflearn?
Thanks !
| closed | 2016-10-12T22:57:45Z | 2016-10-13T00:56:31Z | https://github.com/tflearn/tflearn/issues/392 | [] | shazz | 2 |
explosion/spaCy | deep-learning | 13,072 | `spacy.cli.download` doesn't work for transformer model | <!-- NOTE: For questions or install related issues, please open a Discussion instead. -->
## How to reproduce the behaviour
1. Create a new virtual env (e.g. in `pyenv`).
2. `pip install spacy[transformers]==3.7.2`.
3. Start python REPL and:
```py
import spacy
spacy.cli.download('en_core_web_trf')
nlp = spacy.load('en_core_web_trf')
```
4. At this point you'll get the following error:
> ValueError: [E002] Can't find factory for 'curated_transformer' for language English (en). This usually happens when spaCy calls `nlp.create_pipe` with a custom component name that's not registered on the current language class. If you're using a Transformer, make sure to install 'spacy-transformers'. If you're using a custom component, make sure you've added the decorator `@Language.component` (for function components) or `@Language.factory` (for class components).
Apparently, `spacy.cli.download` installed `curated-transformers` as a dependency of `en-core-web-trf` but couldn't load it. Everything works fine if you reenter the REPL and try `spacy.load` again.
## Your Environment
<!-- Include details of your environment. You can also type `python -m spacy info --markdown` and copy-paste the result here.-->
- **spaCy version:** 3.7.2
- **Platform:** macOS-14.0-arm64-arm-64bit
- **Python version:** 3.9.16
- **Pipelines:** en_core_web_trf (3.7.2)
| closed | 2023-10-19T15:25:01Z | 2023-10-23T09:25:12Z | https://github.com/explosion/spaCy/issues/13072 | [
"install"
] | Vasniktel | 1 |
pytest-dev/pytest-cov | pytest | 649 | Functions called in threads marked as missing in coverage report | # Summary
I am testing a function which runs code in a thread. All lines in that function are marked as `missing` in the coverage report in Windows and Linux.
## Expected vs actual result
Expected behaviour is that functions called in threads are not marked as `missing` in the coverage report, actual result is that they are.
# Reproducer
Here is a minimal example:
```python
# root_dir/my_file.py
import _thread as thread
from time import sleep
def foo(arr: list):
    arr.append(1)

def bar():
    arr = []
    val = thread.start_new_thread(foo, (arr,))
    sleep(5)
    return arr
```
```python
from my_file import bar
def test_bar():
    arr = bar()
    assert 1 in arr
```
The test passes, but the contents of `foo` are marked as `missing` in the coverage report.
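For comparison, here is the variant I also tried to write down, on the assumption (which may be wrong) that `concurrency = ["thread"]` is aimed at `threading` rather than the low-level `_thread` API:

```python
# root_dir/my_file_threading.py -- hypothetical variant of the reproducer
import threading
from time import sleep


def foo(arr: list):
    arr.append(1)


def bar():
    arr = []
    t = threading.Thread(target=foo, args=(arr,))
    t.start()
    t.join(timeout=5)  # join instead of a blind sleep
    return arr
```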
## Versions
`python==3.11.4`
`pytest==8.3.2`
`pytest-cov==5.0.0`
## Config
My `pyproject.toml` looks like this:
```toml
[tool.coverage.run]
source = ["root_dir"]
branch = false
concurrency = ["thread"]
[tool.coverage.report]
sort = "cover"
fail_under = 30
show_missing = true
skip_covered = true
exclude_lines = [
"pragma: no cover",
"if __name__ == \"__main__\":",
"@abstractmethod",
"if TYPE_CHECKING:",
]
``` | closed | 2024-08-06T10:14:09Z | 2024-08-06T10:34:17Z | https://github.com/pytest-dev/pytest-cov/issues/649 | [] | LachlanMarnham | 4 |
custom-components/pyscript | jupyter | 254 | Unable to filter dates in Jupyter (hass pyscript kernel) | Hello. Thank you for a great integration!
As electricity prices have gone up, I am working on a script that controls my heating depending on the current electricity price.
I have faced the problem that I am unable to filter out today's prices from the list.
I have double-checked the test case below in a regular `Python 3 (ipykernel)` kernel, where it works.
Test case to reproduce filtering bug in `hass pyscript` kernel
```
import datetime
import zoneinfo
data = [{'start': datetime.datetime(2021, 10, 24, 0, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 24, 1, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.114}, {'start': datetime.datetime(2021, 10, 24, 1, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 24, 2, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.068}, {'start': datetime.datetime(2021, 10, 24, 2, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 24, 3, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.081}, {'start': datetime.datetime(2021, 10, 24, 3, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 24, 4, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.079}, {'start': datetime.datetime(2021, 10, 24, 4, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 24, 5, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.079}, {'start': datetime.datetime(2021, 10, 24, 5, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 24, 6, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.079}, {'start': datetime.datetime(2021, 10, 24, 6, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 24, 7, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.085}, {'start': datetime.datetime(2021, 10, 24, 7, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 24, 8, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.084}, {'start': datetime.datetime(2021, 10, 24, 8, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 24, 9, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.085}, {'start': datetime.datetime(2021, 10, 24, 9, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 24, 10, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.089}, {'start': datetime.datetime(2021, 10, 24, 10, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 24, 11, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.1}, {'start': datetime.datetime(2021, 10, 24, 11, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 24, 12, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.099}, {'start': datetime.datetime(2021, 10, 24, 12, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 24, 13, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.105}, {'start': datetime.datetime(2021, 10, 24, 13, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 24, 14, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.088}, {'start': datetime.datetime(2021, 10, 24, 14, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 24, 15, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.074}, {'start': datetime.datetime(2021, 10, 24, 15, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 24, 16, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.072}, {'start': datetime.datetime(2021, 10, 24, 16, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 24, 17, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 
0.092}, {'start': datetime.datetime(2021, 10, 24, 17, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 24, 18, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.111}, {'start': datetime.datetime(2021, 10, 24, 18, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 24, 19, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.125}, {'start': datetime.datetime(2021, 10, 24, 19, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 24, 20, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.179}, {'start': datetime.datetime(2021, 10, 24, 20, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 24, 21, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.163}, {'start': datetime.datetime(2021, 10, 24, 21, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 24, 22, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.116}, {'start': datetime.datetime(2021, 10, 24, 22, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 24, 23, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.09}, {'start': datetime.datetime(2021, 10, 24, 23, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 25, 0, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.084}, {'start': datetime.datetime(2021, 10, 25, 0, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 25, 1, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.072}, {'start': datetime.datetime(2021, 10, 25, 1, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 25, 2, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.086}, {'start': datetime.datetime(2021, 10, 25, 2, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 25, 3, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.088}, {'start': datetime.datetime(2021, 10, 25, 3, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 25, 4, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.049}, {'start': datetime.datetime(2021, 10, 25, 4, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 25, 5, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.051}, {'start': datetime.datetime(2021, 10, 25, 5, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 25, 6, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.088}, {'start': datetime.datetime(2021, 10, 25, 6, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 25, 7, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.096}, {'start': datetime.datetime(2021, 10, 25, 7, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 25, 8, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.117}, {'start': datetime.datetime(2021, 10, 25, 8, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 25, 9, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.156}, {'start': datetime.datetime(2021, 10, 25, 9, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 25, 10, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 
0.156}, {'start': datetime.datetime(2021, 10, 25, 10, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 25, 11, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.142}, {'start': datetime.datetime(2021, 10, 25, 11, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 25, 12, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.134}, {'start': datetime.datetime(2021, 10, 25, 12, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 25, 13, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.133}, {'start': datetime.datetime(2021, 10, 25, 13, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 25, 14, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.133}, {'start': datetime.datetime(2021, 10, 25, 14, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 25, 15, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.126}, {'start': datetime.datetime(2021, 10, 25, 15, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 25, 16, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.133}, {'start': datetime.datetime(2021, 10, 25, 16, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 25, 17, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.138}, {'start': datetime.datetime(2021, 10, 25, 17, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 25, 18, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.154}, {'start': datetime.datetime(2021, 10, 25, 18, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 25, 19, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.18}, {'start': datetime.datetime(2021, 10, 25, 19, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 25, 20, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.247}, {'start': datetime.datetime(2021, 10, 25, 20, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 25, 21, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.18}, {'start': datetime.datetime(2021, 10, 25, 21, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 25, 22, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.156}, {'start': datetime.datetime(2021, 10, 25, 22, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 25, 23, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.122}, {'start': datetime.datetime(2021, 10, 25, 23, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 26, 0, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.116}]
def is_today(x):
today = datetime.date.today()
return today == x["start"].date()
def filter_today(candidates):
return filter(is_today, candidates)
result = list(filter_today(data))
expected = 24
assert expected == len(result), f"Should be {expected}. Got {len(result)}"
result
```
I am getting 48. But in real python it produces 24 results. | closed | 2021-10-24T13:31:00Z | 2023-09-21T09:57:30Z | https://github.com/custom-components/pyscript/issues/254 | [] | yozik04 | 2 |
postmanlabs/httpbin | api | 569 | Option to not pretty print the response | I'm hitting `http://httpbin.org/headers` and am receiving this JSON back:
```json
{
"headers": {
"Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
"Accept-Encoding": "gzip, deflate",
"Accept-Language": "en-US,en;q=0.5",
"Host": "httpbin.org",
"Upgrade-Insecure-Requests": "1",
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:68.0) Gecko/20100101 Firefox/68.0"
}
}
```
I'd like an option to receive the response back without pretty printing:
```json
{"headers":{"Accept":"text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8","Accept-Encoding":"gzip, deflate","Accept-Language":"en-US,en;q=0.5","Host":"httpbin.org","Upgrade-Insecure-Requests":"1","User-Agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:68.0) Gecko/20100101 Firefox/68.0"}}
```
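For illustration, a minimal Flask sketch of what such an option could look like server-side. This is not httpbin's actual code; the parameter name simply mirrors the suggestion below, and everything else is an assumption:
```python
import json
from flask import Flask, Response, request

app = Flask(__name__)

@app.route("/headers")
def headers():
    payload = {"headers": dict(request.headers)}
    if request.args.get("pretty", "true").lower() == "false":
        body = json.dumps(payload, separators=(",", ":"))  # compact: no spaces, no newlines
    else:
        body = json.dumps(payload, indent=2)                # current pretty-printed behaviour
    return Response(body, mimetype="application/json")
```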
Maybe via a query parameter like `?pretty=false`? | open | 2019-07-19T12:18:37Z | 2019-07-19T12:18:56Z | https://github.com/postmanlabs/httpbin/issues/569 | [] | westy92 | 0 |
Netflix/metaflow | data-science | 2,199 | [Feature] Will Typescript with bun runtime be supported | It would be great if Typescript would be supported. And one can use libs like Effect-ts to orchestrate the Typescript code to be run parallel and concurrent. | open | 2025-01-07T14:04:25Z | 2025-01-07T14:04:25Z | https://github.com/Netflix/metaflow/issues/2199 | [] | suse-coder | 0 |
geex-arts/django-jet | django | 132 | Base breadcrumb is always НАЧАЛО | Looks to have been hard-coded into the JS rather than using localization.
| closed | 2016-10-18T20:20:05Z | 2016-10-30T17:38:09Z | https://github.com/geex-arts/django-jet/issues/132 | [] | jturmel | 2 |
jupyterhub/jupyterhub-deploy-docker | jupyter | 124 | '_xsrf' missing while logging in | <!-- Thank you for contributing. These HTML comments will not render in the issue, but you can delete them once you've read them if you prefer! -->
### Bug description
Hello, I cloned the repository, signed in as admin, and got this error: 403 : Forbidden
'_xsrf' argument missing from POST

The login and password actually don't matter: if I register another user and use their credentials, or if I use wrong credentials, I still get this error.
I launched the original JupyterHub image before and it works fine. A separate image with a single Jupyter notebook also works fine.
Maybe the issue is that my OS is Windows. Is it possible to build the image on Windows?
Appreciate any help
| closed | 2023-04-24T11:08:41Z | 2023-05-08T13:51:18Z | https://github.com/jupyterhub/jupyterhub-deploy-docker/issues/124 | [
"bug"
] | arseniymerkulov | 5 |
akfamily/akshare | data-science | 5719 | AKShare API issue report | ak.stock_zh_a_hist() error | lib/python3.12/site-packages/akshare/stock_feature/stock_hist_em.py", line 1049, in stock_zh_a_hist
"secid": f"{code_id_dict[symbol]}.{symbol}",
~~~~~~~~~~~~^^^^^^^^
KeyError: '300114'
During handling of the above exception, another exception occurred:
Several versions have been released since 1.16.6, up to the latest 1.16.9, but the problem above still persists; the error occurs every time the code 300114 is read.
ak.stock_zh_a_hist(
symbol=code,
period=ASD.DATA_PERIOD,
start_date=start_date.strftime('%Y%m%d'),
end_date=end_date.strftime('%Y%m%d'),
adjust=ASD.DATA_ADJUST
) | closed | 2025-02-21T11:10:07Z | 2025-02-21T20:50:02Z | https://github.com/akfamily/akshare/issues/5719 | [
"bug"
] | mapleaj | 2 |
mars-project/mars | scikit-learn | 3039 | Support get_chunk_meta in RayExecutionContext | Currently `RayExecutionContext.get_chunk_meta` is not supported, which makes any operand that relies on this API fail during tiling, e.g. when calling `DataFrame.groupby`:
```
df = md.DataFrame(mt.random.rand(300, 4, chunk_size=100), columns=list("abcd"))
df["a"], df["b"] = (df["a"] * 5).astype(int), (df["b"] * 2).astype(int)
df.groupby(["a", "b"]).apply(lambda pdf: pdf.sum()).execute()
```
I got the following error:
```
================================================================================== FAILURES ==================================================================================
________________________________________________________________________________ test_shuffle ________________________________________________________________________________
ray_start_regular_shared2 = RayContext(dashboard_url='127.0.0.1:8265', python_version='3.8.2', ray_version='1.12.0', ray_commit='f18fc31c756299095...127.0.0.1:55710', 'address': '127.0.0.1:55710', 'node_id': '38787319e06bc89f95d7600524069ed4dfba256068c917c261fe697f'})
create_cluster = (<mars.deploy.oscar.local.LocalClient object at 0x7fb22aaf38b0>, {})
@require_ray
@pytest.mark.asyncio
async def test_shuffle(ray_start_regular_shared2, create_cluster):
df = md.DataFrame(mt.random.rand(300, 4, chunk_size=100), columns=list("abcd"))
# `describe` contains multiple shuffle.
df.describe().execute()
arr = np.random.RandomState(0).rand(31, 27)
t1 = mt.tensor(arr, chunk_size=10).reshape(27, 31)
t1.op.extra_params["_reshape_with_shuffle"] = True
np.testing.assert_almost_equal(arr.reshape(27, 31), t1.to_numpy())
np.testing.assert_equal(mt.bincount(mt.arange(5, 10)).to_numpy(), np.bincount(np.arange(5, 10)))
# `RayExecutionContext.get_chunk_meta` not supported, skip dataframe.groupby
df["a"], df["b"] = (df["a"] * 5).astype(int), (df["b"] * 2).astype(int)
> df.groupby(["a", "b"]).apply(lambda pdf: pdf.sum()).execute()
mars/deploy/oscar/tests/test_ray_dag.py:147:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
mars/core/entity/tileables.py:462: in execute
result = self.data.execute(session=session, **kw)
mars/core/entity/executable.py:144: in execute
return execute(self, session=session, **kw)
mars/deploy/oscar/session.py:1855: in execute
return session.execute(
mars/deploy/oscar/session.py:1649: in execute
execution_info: ExecutionInfo = fut.result(
../../../../../opt/anaconda3/envs/mars-py3.8-dev/lib/python3.8/concurrent/futures/_base.py:439: in result
return self.__get_result()
../../../../../opt/anaconda3/envs/mars-py3.8-dev/lib/python3.8/concurrent/futures/_base.py:388: in __get_result
raise self._exception
mars/deploy/oscar/session.py:1835: in _execute
await execution_info
mars/deploy/oscar/session.py:105: in wait
return await self._aio_task
mars/deploy/oscar/session.py:953: in _run_in_background
raise task_result.error.with_traceback(task_result.traceback)
mars/services/task/supervisor/processor.py:364: in run
async for stage_args in self._iter_stage_chunk_graph():
mars/services/task/supervisor/processor.py:158: in _iter_stage_chunk_graph
chunk_graph = await self._get_next_chunk_graph(chunk_graph_iter)
mars/services/task/supervisor/processor.py:149: in _get_next_chunk_graph
chunk_graph = await fut
mars/lib/aio/_threads.py:36: in to_thread
return await loop.run_in_executor(None, func_call)
../../../../../opt/anaconda3/envs/mars-py3.8-dev/lib/python3.8/concurrent/futures/thread.py:57: in run
result = self.fn(*self.args, **self.kwargs)
mars/services/task/supervisor/processor.py:144: in next_chunk_graph
return next(chunk_graph_iter)
mars/services/task/supervisor/preprocessor.py:194: in tile
for chunk_graph in chunk_graph_builder.build():
mars/core/graph/builder/chunk.py:440: in build
yield from self._build()
mars/core/graph/builder/chunk.py:434: in _build
graph = next(tile_iterator)
mars/services/task/supervisor/preprocessor.py:74: in _iter_without_check
to_update_tileables = self._iter()
mars/core/graph/builder/chunk.py:317: in _iter
self._tile(
mars/core/graph/builder/chunk.py:211: in _tile
need_process = next(tile_handler)
mars/core/graph/builder/chunk.py:183: in _tile_handler
tiled_tileables = yield from handler.tile(tiled_tileables)
mars/core/entity/tileables.py:79: in tile
tiled_result = yield from tile_handler(op)
mars/dataframe/groupby/apply.py:151: in tile
return [auto_merge_chunks(get_context(), ret)]
mars/dataframe/utils.py:1333: in auto_merge_chunks
metas = ctx.get_chunks_meta(
mars/services/context.py:188: in get_chunks_meta
return self._call(self._get_chunks_meta(data_keys, fields=fields, error=error))
mars/services/context.py:84: in _call
return fut.result()
../../../../../opt/anaconda3/envs/mars-py3.8-dev/lib/python3.8/concurrent/futures/_base.py:439: in result
return self.__get_result()
../../../../../opt/anaconda3/envs/mars-py3.8-dev/lib/python3.8/concurrent/futures/_base.py:388: in __get_result
raise self._exception
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <mars.services.task.execution.ray.context.RayExecutionContext object at 0x7fb22b3485e0>
data_keys = ['9f92dcd8196d32f25e43e33ba1f56e02_0', '223590f1093c414359f466c42a698006_0', 'dc80798f45b8ed8bb358a7b39b6d8170_0'], fields = ['memory_size'], error = 'ignore'
async def _get_chunks_meta(
self, data_keys: List[str], fields: List[str] = None, error: str = "raise"
) -> List[Dict]:
# get chunks meta
get_metas = []
for data_key in data_keys:
meta = self._meta_api.get_chunk_meta.delay(
data_key, fields=["bands"], error=error
)
get_metas.append(meta)
metas = await self._meta_api.get_chunk_meta.batch(*get_metas)
api_to_keys_calls = defaultdict(lambda: (list(), list()))
for data_key, meta in zip(data_keys, metas):
> addr = meta["bands"][0][0]
E TypeError: 'NoneType' object is not subscriptable
mars/services/context.py:145: TypeError
```
We need to support get_chunk_meta for ray task backend.
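For illustration only, a defensive sketch around the call pattern from the traceback above. This is not the actual Mars fix (which is to implement `get_chunk_meta` in the Ray backend); it just skips chunks whose meta comes back as `None`, and the helper name is made up:
```python
async def _get_chunks_meta_tolerant(meta_api, data_keys, error="ignore"):
    """Same delay/batch call as in the traceback, but tolerate missing metas."""
    delays = [meta_api.get_chunk_meta.delay(key, fields=["bands"], error=error)
              for key in data_keys]
    metas = await meta_api.get_chunk_meta.batch(*delays)
    key_to_band = {}
    for key, meta in zip(data_keys, metas):
        if meta and meta.get("bands"):              # Ray backend currently yields None here
            key_to_band[key] = meta["bands"][0][0]  # band address of the chunk
    return key_to_band
```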
| open | 2022-05-17T03:57:35Z | 2022-05-17T05:58:26Z | https://github.com/mars-project/mars/issues/3039 | [] | chaokunyang | 0 |
plotly/dash | data-visualization | 3,026 | [Feature Request] The console displays the back-end interface for the actual request | dash Back-end request interface, the browser console does not print the request interface, such as login request, but the browser does not display the login request interface, only such interfaces as update-component, how to display the actual interface in the control? | closed | 2024-10-08T03:13:58Z | 2024-10-08T12:01:14Z | https://github.com/plotly/dash/issues/3026 | [] | xiaoxiaoimg | 1 |
xonsh/xonsh | data-science | 4,924 | Enabling `trace on` creates erroneous traceback | <!--- Provide a general summary of the issue in the Title above -->
<!--- If you have a question along the lines of "How do I do this Bash command in xonsh"
please first look over the Bash to Xonsh translation guide: https://xon.sh/bash_to_xsh.html
If you don't find an answer there, please do open an issue! -->
## xonfig
<details>
```
+------------------+-----------------+
| xonsh | 0.13.1 |
| Python | 3.10.5 |
| PLY | 3.11 |
| have readline | True |
| prompt toolkit | 3.0.30 |
| shell type | prompt_toolkit |
| history backend | json |
| pygments | 2.13.0 |
| on posix | True |
| on linux | True |
| distro | unknown |
| on wsl | False |
| on darwin | False |
| on windows | False |
| on cygwin | False |
| on msys2 | False |
| is superuser | False |
| default encoding | utf-8 |
| xonsh encoding | utf-8 |
| encoding errors | surrogateescape |
| xontrib | [] |
| RC file | [] |
+------------------+-----------------+
```
</details>
## Expected Behavior
<!--- Tell us what should happen -->
No exception and traceback in the output
## Current Behavior
<!--- Tell us what happens instead of the expected behavior -->
<!--- If part of your bug report is a traceback, please first enter debug mode before triggering the error
To enter debug mode, set the environment variable `XONSH_DEBUG=1` _before_ starting `xonsh`.
On Linux and OSX, an easy way to to do this is to run `env XONSH_DEBUG=1 xonsh` -->
When setting `trace on` an exception is raised and a traceback is produced
### Traceback (if applicable)
<details>
```
Exception ignored in: <function _removeHandlerRef at 0x7f6872497f40>
Traceback (most recent call last):
File "/home/gn/anaconda3/envs/xonsh/lib/python3.10/logging/__init__.py", line 836, in _removeHandlerRef
File "/home/gn/anaconda3/envs/xonsh/lib/python3.10/site-packages/xonsh/tracer.py", line 87, in trace
TypeError: 'NoneType' object is not callable
```
</details>
## Steps to Reproduce
<!--- Please try to write out a minimal reproducible snippet to trigger the bug, it will help us fix it! -->
Any MWE with using `trace on` e.g. run the following `xonsh test.xsh`
```sh
$cat test.xsh
#!/usr/bin/env xonsh
$XONSH_TRACE_SUBPROC = True
trace on
echo "OK"
```
## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
| open | 2022-08-21T15:44:38Z | 2022-10-08T10:51:05Z | https://github.com/xonsh/xonsh/issues/4924 | [
"conda",
"trace"
] | gnikit | 5 |
tflearn/tflearn | tensorflow | 802 | Multiple encoded outputs into a single classification network | 
I have multiple data sets, where data sets are distinguished by feature length of a feature vector.
Regardless of the input feature size, I would like the network to classify the input into one of two classes. Hence, I would like to train an encoder for each data set (n encoders for n data sets), but pass the encoded outputs from all encoders into a single classification network, since the encoded outputs have the same dimension. See the attached image for the network sketch.
I have written the code below so far, but
1. I am not really sure whether the current way of setting up the regression is valid, or whether I need to use merge with some sort of 'mean' mode to average the classification results from the different data sets.
2. I am also having trouble figuring out how to perform training in this situation.
Any help will be greatly appreciated.
```
import h5py
import numpy as np
import tflearn
STEPS_PER_EPOCH = 10
NUM_CLASSES = 2
train_Xs = []
train_Ys = []
batch_sizes = []
encodings = []
encode_hidden_1 = 500
classify_hidden_1 = 500
classify_hidden_2 = 100
for trainDataset in trainDatasets:
train_file = h5py.File(trainDataset,'r')
train_X = np.array(train_file['features'])
train_Y = np.array(train_file['label'])
train_Xs.append(train_X) # number of samples x number of features
train_Ys.append(train_Y) # number of samples x 2 (two classes)
nb_samples = train_X.shape[0]
nb_features = train_X.shape[1]
batch_size = int(nb_samples/STEPS_PER_EPOCH)
# batch size is determined by the number of samples in each dataset
encoder = tflearn.input_data(shape=[None, nb_features])
encoder = tflearn.fully_connected(encoder, encode_hidden_1)
encodings.append(encoder)
classifiers_1 = []
classifiers_2 = []
softmax_outputs = []
for encoding in encodings:
classifier1 = tflearn.fully_connected(encoding, classify_hidden_1, activation='relu')
classifiers_1.append(classifier1)
classifier2 = tflearn.fully_connected(classifier1, classify_hidden_2, activation='relu')
classifiers_2.append(classifier2)
softmax = tflearn.fully_connected(classifier2, 2, activation='softmax')
softmax_outputs.append(softmax)
network = tflearn.regression(softmax_outputs, optimizer='momentum', loss='categorical_crossentropy', learning_rate=0.1)
```
| open | 2017-06-19T22:58:52Z | 2017-06-19T22:58:52Z | https://github.com/tflearn/tflearn/issues/802 | [] | lykim200 | 0 |
python-gino/gino | sqlalchemy | 579 | Database autogenerate migrations | * GINO version: 0.8.3
* Python version: 3.7.5
* asyncpg version: 0.19.0
* aiocontextvars version: 0.2.2
* PostgreSQL version: 10.10
### Description
How can I make a model's `__tablename__` autogenerated?
I've tried using `gino.declarative.declared_attr` to automatically generate the `__tablename__` attribute for a model, but got this error:
`KeyError: '<function BaseModel.__tablename__ at 0x7f21defb7950>'`
### What I Did
I tried structuring the application with `starlette` here https://github.com/nsiregar/letsgo but was unable to autogenerate `__tablename__`
| closed | 2019-10-24T15:16:24Z | 2019-10-28T04:35:56Z | https://github.com/python-gino/gino/issues/579 | [
"question"
] | nsiregar | 5 |
babysor/MockingBird | pytorch | 508 | numpy version problem under Python 3.10.1 | This program requires a numpy version below 1.21, but Python 3.10 does not support numpy versions below 1.21. How can the program be run correctly under Python 3.10? | open | 2022-04-19T06:02:54Z | 2022-04-28T12:07:43Z | https://github.com/babysor/MockingBird/issues/508 | [] | wzwtt | 6 |
CTFd/CTFd | flask | 1,980 | Add a timer and more warnings/protections in imports | We should have some kind of timer to give users an idea of how long the import process is taking. If it goes beyond some period of time we should notify the user to contact a server administrator to run the import manually or something to that extent.
Or we should come up with some way to have CTFd pause entirely until the import succeeds, maybe with a special config like `import_in_progress`. | closed | 2021-08-20T19:53:27Z | 2022-04-08T20:52:05Z | https://github.com/CTFd/CTFd/issues/1980 | [
"blocker"
] | ColdHeat | 0 |
piskvorky/gensim | nlp | 3,340 | ldaseqmodel convergence | <!--
**IMPORTANT**:
- Use the [Gensim mailing list](https://groups.google.com/forum/#!forum/gensim) to ask general or usage questions. Github issues are only for bug reports.
- Check [Recipes&FAQ](https://github.com/RaRe-Technologies/gensim/wiki/Recipes-&-FAQ) first for common answers.
Github bug reports that do not include relevant information and context will be closed without an answer. Thanks!
-->
#### Problem description
https://github.com/RaRe-Technologies/gensim/blob/742fb188dc6de03a42411510bf5b45e26574b328/gensim/models/ldaseqmodel.py#L303
This line in `ldaseqmodel.py` seems to prevent early termination of the algorithm. Setting `convergence` to 1 whenever the convergence criterion is met forces the loop to exhaust `em_max_iter`, so it can never terminate earlier.
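A simplified sketch (not gensim's actual code) of the loop structure this report expects: break out as soon as the relative bound change drops below the threshold, instead of resetting `convergence` back to 1.0 and running all `em_max_iter` iterations. The threshold constant and function names here are made up:
```python
EM_CONVERGED = 1e-4  # illustrative threshold, plays the role of gensim's constant

def fit_em_sketch(em_max_iter, run_one_em_iter):
    bound, convergence = 0.0, EM_CONVERGED + 1.0
    for _ in range(em_max_iter):
        old_bound = bound
        bound = run_one_em_iter()
        convergence = abs((bound - old_bound) / (old_bound or 1.0))
        if convergence < EM_CONVERGED:
            break  # early termination instead of forcing convergence = 1.0
    return bound
```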
#### Versions
Please provide the output of:
```python
import platform; print(platform.platform())
import sys; print("Python", sys.version)
import struct; print("Bits", 8 * struct.calcsize("P"))
import numpy; print("NumPy", numpy.__version__)
import scipy; print("SciPy", scipy.__version__)
import gensim; print("gensim", gensim.__version__)
from gensim.models import word2vec;print("FAST_VERSION", word2vec.FAST_VERSION)
```
gensim version 4.1.2 | open | 2022-04-29T01:12:11Z | 2022-04-29T01:12:11Z | https://github.com/piskvorky/gensim/issues/3340 | [] | trkwyk | 0 |
ARM-DOE/pyart | data-visualization | 1,022 | BUG: Error when using CyLP | After finally managing to successfully install CyLP, using it in phase_proc_lp (pyart.correct.phase_proc_lp(radar, 2.0, self_const = 12000.0, low_z=0.0, high_z=53.0, min_phidp=0.01, min_ncp=0.3, min_rhv=0.8, LP_solver='cylp_mp', proc=15)) does not work. The error seems to be
"Error in `python': free(): invalid pointer: 0x00005597c77d6c98"
A long list of messages and memory map is being printed out:
[cylp_messages.txt](https://github.com/ARM-DOE/pyart/files/7589014/cylp_messages.txt) And then the script just hangs.
I installed CyLP following these instructions https://github.com/coin-or/CyLP
I also tried installing CyLP following the instructions provided in the Py-ART documentation https://arm-doe.github.io/pyart/setting_up_an_environment.html but without success. I got what looked like compilation issues even after installing additional conda compilers. So the original CyLP installation instructions worked, but for some reason the phase_proc_lp function is still not working.
| open | 2021-11-23T15:17:21Z | 2022-10-11T15:51:02Z | https://github.com/ARM-DOE/pyart/issues/1022 | [
"Bug"
] | tanelv | 36 |
aws/aws-sdk-pandas | pandas | 2,239 | store_parquet_metadata, path_ignore_suffix has conflicting types | *P.S. Please do not attach files as it's considered a security risk. Add code snippets directly in the message body as much as possible.*
https://github.com/aws/aws-sdk-pandas/blob/main/awswrangler/s3/_write_parquet.py#L808
The `path_ignore_suffix` argument to `store_parquet_metadata` has conflicting types:
the [doc string](https://github.com/aws/aws-sdk-pandas/blob/main/awswrangler/s3/_write_parquet.py#L864) shows `Union[str, List[str], None]`,
while the [typing](https://github.com/aws/aws-sdk-pandas/blob/main/awswrangler/s3/_write_parquet.py#L814) in the code shows `Optional[str] = None`.
awswrangler v2 used to have Union[str, List[str], None] as the type.
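For reference, the call shape that the v2 typing (and the current doc string) implies. The bucket, database, and table names below are hypothetical, and whether v3 still accepts a list at runtime is exactly the open question:
```python
import awswrangler as wr

res = wr.s3.store_parquet_metadata(
    path="s3://my-bucket/my-dataset/",        # hypothetical location
    database="my_db",                          # hypothetical Glue database
    table="my_table",
    dataset=True,
    path_ignore_suffix=["_SUCCESS", ".tmp"],   # several suffixes, as the doc string suggests
)
```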
If the code is right, and doc string is stale, then how can we use several suffixes to ignore? | closed | 2023-04-27T20:09:23Z | 2023-04-28T15:01:06Z | https://github.com/aws/aws-sdk-pandas/issues/2239 | [
"question"
] | aeural | 2 |
Kanaries/pygwalker | pandas | 604 | [BUG] pygwalker failed to load the visual in Streamlit | **Describe the bug**
After selecting a field on either the x or y axis, pygwalker showed the visual, but very quickly the selection was cleared, leaving a blank visual screen.
**To Reproduce**
Steps to reproduce the behavior:
1. Copy the demo example of pygwalker for Streamlit
2. Run the codes
3. Select any field to the axis
4. See error as described above
**Versions**
- pygwalker version: pygwalker==0.4.9.4
- python version 3.12.4
- browser latest Chrome
- streamlit==1.37.1
<img width="1280" alt="image" src="https://github.com/user-attachments/assets/b2816582-ecb7-4b2d-a9cd-73e3fa88e07d">
| closed | 2024-08-08T02:40:34Z | 2024-08-15T14:48:11Z | https://github.com/Kanaries/pygwalker/issues/604 | [
"bug"
] | dickhfchan | 6 |
plotly/dash-table | plotly | 449 | Filter Time 00:10:00 | Is it possible to filter a time value like this: 00:10:00?
**Example:**
`+ [{'if': {'column_id': 'Duração no Status', 'filter': 'Duração no Status >= 00:10:00'}, 'backgroundColor': 'white' ,'color': 'red', 'font-size': '1.1em'}]` | closed | 2019-05-30T13:37:59Z | 2019-06-03T11:51:59Z | https://github.com/plotly/dash-table/issues/449 | [] | jersonjunior | 0 |
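For the conditional style above, a hedged sketch using the newer `filter_query` syntax; whether `>=` compares `HH:MM:SS` strings the intended way is exactly the open question here:
```python
style_data_conditional = [{
    "if": {
        "column_id": "Duração no Status",
        "filter_query": '{Duração no Status} >= "00:10:00"',
    },
    "backgroundColor": "white",
    "color": "red",
    "font-size": "1.1em",
}]
```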
modelscope/data-juicer | data-visualization | 624 | After setting up the environment, running python tools/process_data.py --config demos/process_on_ray/configs/demo.yaml appears to hang with no log output | ### Before Asking
- [x] I have read the [README](https://github.com/alibaba/data-juicer/blob/main/README.md) carefully.
- [x] I have pulled the latest code of the main branch to run again and the problem still exists.
### Search before asking
- [x] I have searched the Data-Juicer [issues](https://github.com/alibaba/data-juicer/issues) and found no similar questions.
### Question
I installed data-juicer from source (python==3.10.6, Ray==2.40.0, grpcio==1.71.0); the current machine has 4×T4 GPUs, 24 cores, and 512 GB of memory.
ray start --head
ray status
python tools/process_data.py --config demos/process_on_ray/configs/demo.yaml
The command-line interface hangs after printing the following output:
2025-03-18 07:47:26 | INFO | data_juicer.core.ray_executor:56 - Initing Ray ...
2025-03-18 07:47:26,492 INFO worker.py:1636 -- Connecting to existing Ray cluster at address: 10.233.65.253:6379...
2025-03-18 07:47:26,504 INFO worker.py:1812 -- Connected to Ray cluster. View the dashboard at 127.0.0.1:8265
The log also contains only the following:
2025-03-18 07:47:26.256 | INFO | data_juicer.config.config:config_backup:742 - Back up the input config file [/workspace/data-juicer/demos/process_on_ray/configs/demo.yaml] into the work_dir [/workspace/data-juicer/outputs/demo]
2025-03-18 07:47:26.277 | INFO | data_juicer.config.config:display_config:764 - Configuration table:
2025-03-18 07:47:26.477 | INFO | data_juicer.core.ray_executor:__init__:56 - Initing Ray ...
I don't know where the problem is and am quite confused; any explanation would be appreciated.
### Additional
_No response_ | open | 2025-03-18T08:00:38Z | 2025-03-21T09:06:03Z | https://github.com/modelscope/data-juicer/issues/624 | [
"question"
] | butterbutterflies | 6 |
flavors/django-graphql-jwt | graphql | 178 | Doc site down | Hi
doc site is down, I'm getting this
```xml
<Error>
<Code>AuthenticationFailed</Code>
<Message>
Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature. RequestId:282b2f04-a01e-007f-63cd-f3c309000000 Time:2020-03-06T15:39:25.0193548Z
</Message>
</Error>
``` | closed | 2020-03-06T15:41:41Z | 2020-03-06T17:06:39Z | https://github.com/flavors/django-graphql-jwt/issues/178 | [] | ckristhoff | 1 |
tflearn/tflearn | tensorflow | 559 | python 3 incompatible when use map function | https://github.com/tflearn/tflearn/blob/master/tflearn/layers/core.py#L662
`x = map(lambda t: tf.reshape(t, [-1, 1]+utils.get_incoming_shape(t)[1:]), x)`
This causes a compatibility issue when using Python 3. It should be changed to
`x = list(map(lambda t: tf.reshape(t, [-1, 1]+utils.get_incoming_shape(t)[1:]), x))`
here is the exception I got:
...
File "/anaconda/lib/python3.5/site-packages/tflearn/layers/core.py", line 654, in time_distributed
return tf.concat(1, x)
File "/anaconda/lib/python3.5/site-packages/tensorflow/python/ops/array_ops.py", line 1077, in concat
return identity(values[0], name=scope)
File "/anaconda/lib/python3.5/site-packages/tensorflow/python/ops/gen_array_ops.py", line 1424, in identity
result = _op_def_lib.apply_op("Identity", input=input, name=name)
File "/anaconda/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py", line 493, in apply_op
raise err
File "/anaconda/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py", line 490, in apply_op
preferred_dtype=default_dtype)
File "/anaconda/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 669, in convert_to_tensor
ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
File "/anaconda/lib/python3.5/site-packages/tensorflow/python/framework/constant_op.py", line 176, in _constant_tensor_conversion_function
return constant(v, dtype=dtype, name=name)
File "/anaconda/lib/python3.5/site-packages/tensorflow/python/framework/constant_op.py", line 165, in constant
tensor_util.make_tensor_proto(value, dtype=dtype, shape=shape, verify_shape=verify_shape))
File "/anaconda/lib/python3.5/site-packages/tensorflow/python/framework/tensor_util.py", line 441, in make_tensor_proto
tensor_proto.string_val.extend([compat.as_bytes(x) for x in proto_values])
File "/anaconda/lib/python3.5/site-packages/tensorflow/python/framework/tensor_util.py", line 441, in <listcomp>
tensor_proto.string_val.extend([compat.as_bytes(x) for x in proto_values])
File "/anaconda/lib/python3.5/site-packages/tensorflow/python/util/compat.py", line 65, in as_bytes
(bytes_or_text,))
TypeError: Expected binary or unicode string, got <map object at 0x159ca0630> | closed | 2017-01-13T00:01:48Z | 2017-05-12T02:37:26Z | https://github.com/tflearn/tflearn/issues/559 | [] | HelloGithubTest | 5 |
Sanster/IOPaint | pytorch | 102 | How to change the directory of ". cache/huggingface/diffusers/models" to my favorite file location | closed | 2022-10-29T02:42:08Z | 2023-11-01T04:32:18Z | https://github.com/Sanster/IOPaint/issues/102 | [] | gaoxuxu110 | 10 |
schenkd/nginx-ui | flask | 34 | Mobile support? | I would love to use this but on mobile that add domain field is out of the screen and I have to go into landscape mode to view it and then it looks really bad... | open | 2020-08-23T04:36:23Z | 2020-08-23T04:38:26Z | https://github.com/schenkd/nginx-ui/issues/34 | [] | theoparis | 1 |
inducer/pudb | pytest | 545 | When sys.argv is changed, pudb3 cannot enter the REPL | **Describe the bug**
When sys.argv is changed, pudb3 cannot enter the REPL.
**To Reproduce**
test.py
```python
import sys
argv = sys.argv
sys.argv = []
print(1)
print(2)
print(3)
sys.argv = argv
print(4)
print(5)
print(6)
```
Run `pudb3 test.py`: while `sys.argv` is `[]`, pressing `!` cannot enter the REPL; once `sys.argv` is restored, pressing `!` can enter the REPL.
**Expected behavior**
Pressing `!` should enter the REPL.
**Additional context**
Can we back up `sys.argv` when the program starts, and temporarily restore it when `!` is pressed, to avoid this bug?
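A minimal sketch of that idea, assuming a snapshot of `sys.argv` is taken once at debugger start-up; this is not pudb's actual internals and the helper name is made up:
```python
import contextlib
import sys

_ORIGINAL_ARGV = list(sys.argv)  # snapshot taken once, when the debugger starts

@contextlib.contextmanager
def original_argv():
    """Temporarily put the original sys.argv back, e.g. while the REPL is open."""
    current = sys.argv
    sys.argv = list(_ORIGINAL_ARGV)
    try:
        yield
    finally:
        sys.argv = current  # hand the (possibly modified) argv back to the debuggee
```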
## Complete:
If some module in `sys.modules` (e.g. argparse) is changed in the same way, the same phenomenon also occurs. | closed | 2022-08-23T14:06:27Z | 2023-01-20T14:55:45Z | https://github.com/inducer/pudb/issues/545 | [
"Bug"
] | Freed-Wu | 4 |
vitalik/django-ninja | django | 406 | ImportError: cannot import name 'ModelSchema' from 'ninja' | I have a project setup using django-ninja==0.13.2, Django 4.0 and django-ninja-auth
It used to work but I haven't worked on it for a few months and now I come back to run it in the same venv and I'm getting this:
`ImportError: cannot import name 'ModelSchema' from 'ninja'`
Anyone know why this could be?
| closed | 2022-03-26T09:35:45Z | 2022-03-27T09:39:47Z | https://github.com/vitalik/django-ninja/issues/406 | [] | jontstaz | 2 |
serengil/deepface | machine-learning | 1,417 | [FEATURE]: euclidean_l2 and cosine distance have identical ROC curves, so you could drop one of them in benchmarks. | ### Description
Let u and v be unit vectors (i.e. you've already divided by the euclidean norm), and let n = len(u). Then the cosine distance is 1 - sum(u[i]*v[i] for i in range(n)). On the other hand, the square of the euclidean distance is sum((u[i] - v[i])**2 for i in range(n)) = sum(u[i]*u[i] + v[i]*v[i] - 2*u[i]*v[i] for i in range(n)) = sum(u[i]*u[i]) + sum(v[i]*v[i]) - 2*sum(u[i]*v[i]) = 2 - 2*sum(u[i]*v[i]), which is twice the cosine distance. So the metrics provide the same information.
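A quick numerical check of that identity (plain NumPy; the dimension and random seed are arbitrary):
```python
import numpy as np

rng = np.random.default_rng(0)
u, v = rng.normal(size=128), rng.normal(size=128)
u, v = u / np.linalg.norm(u), v / np.linalg.norm(v)   # make them unit vectors

cosine_dist = 1.0 - float(np.dot(u, v))
sq_euclidean = float(np.sum((u - v) ** 2))

# squared L2 distance between unit vectors equals twice the cosine distance
assert np.isclose(sq_euclidean, 2.0 * cosine_dist)
```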
### Additional Info
_No response_ | closed | 2025-01-03T23:59:48Z | 2025-01-06T16:16:00Z | https://github.com/serengil/deepface/issues/1417 | [
"enhancement",
"invalid"
] | jbrownkramer | 7 |
Lightning-AI/pytorch-lightning | data-science | 19,563 | EarlyStopping in the middle of an epoch | ### Description & Motivation
I'm fitting a normalizing flow to learn the mapping between two embedding spaces. The first embedding space is sampled using the mapper of a pretrained stylegan and the second embedding space is derived by a pretrained covnet. I want to learn a mapper from the second embedding space back to the first one. Since the stylegan can produce infinite data, I'm using an iterable dataset across one single epoch that encompasses the entire training run. So, I want `EarlyStopping` to trigger in the middle of the epoch. Validation data isn't available.
### Pitch
An option called `check_interval` should be added to `EarlyStopping`. If the value is a float, it is the fraction of an epoch between checks. If the value is an integer, it is the amount of training steps between checks. For the change to be non-breaking, its default should be `1.0`.
### Alternatives
Currently, I'm passing the EarlyStopping callback to the LightningModule and manually calling the check at the end of each training batch:
```py
def on_train_batch_end(self, outputs, batch, batch_idx):
self.early_stopping_callback._run_early_stopping_check(self.trainer)
```
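In the same spirit, a hedged sketch that subclasses `EarlyStopping` itself. The hook signature and the call to the private `_run_early_stopping_check` mirror the snippet above and are assumptions about current Lightning internals, not a supported API; it also assumes the monitored metric is logged during `training_step`:
```python
from lightning.pytorch.callbacks import EarlyStopping

class MidEpochEarlyStopping(EarlyStopping):
    def __init__(self, *args, check_interval: int = 1000, **kwargs):
        super().__init__(*args, **kwargs)
        self.check_interval = check_interval  # training batches between checks

    def on_train_batch_end(self, trainer, pl_module, outputs, batch, batch_idx):
        if (batch_idx + 1) % self.check_interval == 0:
            self._run_early_stopping_check(trainer)

    def on_validation_end(self, trainer, pl_module):
        pass  # no validation data in this use case
```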
### Additional context
_No response_
cc @borda @carmocca @awaelchli | open | 2024-03-03T06:43:20Z | 2024-03-03T18:12:25Z | https://github.com/Lightning-AI/pytorch-lightning/issues/19563 | [
"feature",
"callback: early stopping"
] | Richienb | 0 |
microsoft/nni | tensorflow | 5,686 | webui hyper-parameter | **Describe the issue**:
This is how I define my own model and search space. It searches correctly over all parameters; however, in the hyper-parameter chart of the web UI, the value of this parameter cannot be displayed.
I want to know what the problem is and how to solve it? Looking forward to your reply, thanks very much!
class MyModelSpace(ModelSpace):
    def __init__(self):
        super().__init__()
        input_size = 10
        feature_1 = nni.choice('feature1', [64, 128, 256])
        self.layer1 = MutableLinear(input_size, feature_1)
        self.dropout1 = MutableDropout(nni.choice('dropout1', [0.25, 0.5, 0.75]))  # choose dropout rate from 0.25, 0.5 and 0.75
        self.relu1 = LayerChoice([
            ReLUWrapper(),
            TanhWrapper(),
            SigmoidWrapper(),
        ], label='relu1')
        self.skip_1 = MyModule(self.add_mutable(nni.choice('skip_connect_1', [0, 1]))).chosen

model_space = MyModelSpace()
evaluator = FunctionalEvaluator(evaluate_model)
exp = NasExperiment(model_space, evaluator, search_strategy)
exp.config.max_trial_number = 10
exp.config.trial_concurrency = 2
exp.config.training_service.use_active_gpu = True
exp.config.trial_gpu_number = 1
**Environment**:
- NNI version: 3.0
- Training service (local|remote|pai|aml|etc): local
- Client OS:
- Server OS (for remote mode only):
- Python version: 3.8
- PyTorch/TensorFlow version: 1.9
- Is conda/virtualenv/venv used?: conda
- Is running in Docker?: no
**Configuration**:
- Experiment config (remember to remove secrets!):
- Search space:
**Log message**:
- nnimanager.log:[2023-09-28 10:28:09] INFO (main) Start NNI manager
[2023-09-28 10:28:09] INFO (RestServer) Starting REST server at port 8034, URL prefix: "/"
[2023-09-28 10:28:09] INFO (RestServer) REST server started.
[2023-09-28 10:28:09] INFO (NNIDataStore) Datastore initialization done
[2023-09-28 10:28:09] INFO (NNIManager) Starting experiment: spg98lnc
[2023-09-28 10:28:09] INFO (NNIManager) Setup training service...
[2023-09-28 10:28:09] INFO (NNIManager) Setup tuner...
[2023-09-28 10:28:09] INFO (NNIManager) Change NNIManager status from: INITIALIZED to: RUNNING
[2023-09-28 10:28:10] INFO (NNIManager) Add event listeners
[2023-09-28 10:28:10] INFO (LocalV3.local) Start
[2023-09-28 10:28:10] INFO (NNIManager) NNIManager received command from dispatcher: ID,
[2023-09-28 10:28:10] INFO (NNIManager) Updated search space [object Object]
[2023-09-28 10:28:10] INFO (NNIManager) NNIManager received command from dispatcher: TR, {"parameter_id": 0, "parameters": {"status": "frozen", "model_symbol": {"__nni_type__": "bytes:gAWVrhUAAAAAAACMF2Nsb3VkcGlja2xlLmNsb3VkcGlja2xllIwUX21ha2Vfc2tlbGV0b25fY2xhc3OUk5QojAhidWlsdGluc5SMBHR5cGWUk5SMDE15TW9kZWxTcGFjZZSMF25uaS5uYXMubm4ucHl0b3JjaC5iYXNllIwKTW9kZWxTcGFjZZSTlIWUfZSMCl9fbW9kdWxlX1+UjAhfX21haW5fX5RzjCA3YjYxMTEyZDI4YmU0MjA4YjcyN2ZkMjlmYTA5OGRiNZROdJRSlIwcY2xvdWRwaWNrbGUuY2xvdWRwaWNrbGVfZmFzdJSMD19jbGFzc19zZXRzdGF0ZZSTlGgQfZQoaAxoDYwIX19pbml0X1+UaACMDl9tYWtlX2Z1bmN0aW9ulJOUKGgAjA1fYnVpbHRpbl90eXBllJOUjAhDb2RlVHlwZZSFlFKUKEsBSwBLAEsDSwpLH0OKfABqAHwBfAKOAQEAdAF8AGQBgwJzOHwAagJkAGsJci50A3wAagKDAXwAXwRuCnQFZAKDAXwAXwR0AXwAZAGDAnJ2fABqBGoGc3Z8AGoEjx4BAIgAfABmAXwBngJ8Ao4BVwACADUAUQBSAKMAUwBRAFIAWABuEIgAfABmAXwBngJ8Ao4BUwBkAFMAlE6MDF9sYWJlbF9zY29wZZSMCF91bnVzZWRflIeUKIwYYXV0b19zYXZlX2luaXRfYXJndW1lbnRzlIwHaGFzYXR0cpSMDV9sYWJlbF9wcmVmaXiUjAtsYWJlbF9zY29wZZRoHowSc3RyaWN0X2xhYmVsX3Njb3BllIwJYWN0aXZhdGVklHSUjARzZWxmlIwEYXJnc5SMBmt3YXJnc5SHlIw7L3NzZC96enIvbXRsL0xHQk1fbXVsdGlfdGFzay9ubmkvbm5pL25hcy9ubi9weXRvcmNoL2Jhc2UucHmUjAhuZXdfaW5pdJRNvgFDEgACDAIKAQoBDgIKARICCAEkApSMEWluaXRfd2l0aF9jb250ZXh0lIWUKXSUUpR9lCiMC19fcGFja2FnZV9flIwSbm5pLm5hcy5ubi5weXRvcmNolIwIX19uYW1lX1+UaAeMCF9fZmlsZV9flIw7L3NzZC96enIvbXRsL0xHQk1fbXVsdGlfdGFzay9ubmkvbm5pL25hcy9ubi9weXRvcmNoL2Jhc2UucHmUdU5OaACMEF9tYWtlX2VtcHR5X2NlbGyUk5QpUpSFlHSUUpRoEYwSX2Z1bmN0aW9uX3NldHN0YXRllJOUaD59lH2UKGg2aC2MDF9fcXVhbG5hbWVfX5SMKm1vZGVsX3NwYWNlX2luaXRfd3JhcHBlci48bG9jYWxzPi5uZXdfaW5pdJSMD19fYW5ub3RhdGlvbnNfX5R9lChoKGgJjAZyZXR1cm6UTnWMDl9fa3dkZWZhdWx0c19flE6MDF9fZGVmYXVsdHNfX5ROaAxoB4wHX19kb2NfX5ROjAtfX2Nsb3N1cmVfX5RoAIwKX21ha2VfY2VsbJSTlGgXKGgcKEsBSwBLAEsESwpLH0NgdACDAH0DfANkAGsIckJ0AYMAfABfAnwAagKPHgEAiAB8AGYBfAGeAnwCjgFXAAIANQBRAFIAowBTAFEAUgBYAG4adAGgA6EAfABfAogAfABmAXwBngJ8Ao4BUwBkAFMAlE6FlCiMDWN1cnJlbnRfbW9kZWyUjA5mcm96ZW5fY29udGV4dJSMD19mcm96ZW5fY29udGV4dJSMC3RvcF9jb250ZXh0lHSUKGgoaCloKowEYXJjaJR0lGgsaC9NrgFDDgABBgEIBggBCAEkAwoBlIwQb3JpZ2luYWxfaW5pdF9mbpSFlCl0lFKUaDNOTmg6KVKUhZR0lFKUaEBoX32UfZQoaDZoL2hDjDNtb2RlbF9zcGFjZV9pbml0X3dyYXBwZXIuPGxvY2Fscz4uaW5pdF93aXRoX2NvbnRleHSUaEV9lGgoaAlzaEhOaElOaAxoB2hKTmhLaE1oFyhoHChLAUsASwBLB0sISwNCngIAAHQAgwCgAaEAAQBkAX0BdAKgA2QCZANkBGQFZwOhAn0CfACgBHwCoQEBAHQFfAF8AoMCfABfBnQHdAKgA2QGZAdkCGQJZwOhAoMBfABfCHQJdAqDAHQLgwB0DIMAZwNkCmQLjQJ8AF8NdA58AKAEdAKgA2QMZA1kDmcCoQKhAYMBag98AF8QfACgBHwCoQEBAHwAahBkDWsCcux0AqADZA9kA2QEZAVnA6ECfQN0BXwCfAODAnwAXxF0B3QCoANkEGQHZAhkCWcDoQKDAXwAXxJ0CXQKgwB0C4MAdAyDAGcDZBFkC40CfABfE24EfAJ9A3QOfACgBHQCoANkEmQNZA5nAqECoQGDAWoPfABfFHwAahRkDWsCkAFybHQCoANkE2QDZARkBWcDoQJ9BHQFfAN8BIMCfABfFXQHdAKgA2QUZAdkCGQJZwOhAoMBfABfFnQJdAqDAHQLgwB0DIMAZwNkFWQLjQJ8AF8XbgR8A30EdA58AKAEdAKgA2QWZA1kDmcCoQKhAYMBag98AF8YfABqGGQNawKQAXLsdAKgA2QXZANkBGQFZwOhAn0FdAV8BHwFgwJ8AF8ZdAd0AqADZBhkB2QIZAlnA6ECgwF8AF8adAl0CoMAdAuDAHQMgwBnA2QZZAuNAnwAXxtuBHwEfQV0DnwAoAR0AqADZBpkDWQOZwKhAqEBgwFqD3wAXxx8AGocZA1rApACcmx0AqADZBtkA2QEZAVnA6ECfQZ0BXwFfAaDAnwAXx10B3QCoANkHGQHZAhkCWcDoQKDAXwAXx50CXQKgwB0C4MAdAyDAGcDZB1kC40CfABfH24EfAV9BnQOfACgBHQCoANkHmQNZA5nAqECoQGDAWoPfABfIHQFfAZkDoMCfABfIWQAUwCUKE5LCowIZmVhdHVyZTGUS0BLgE0AAYwIZHJvcG91dDGURz/QAAAAAAAARz/gAAAAAAAARz/oAAAAAAAAjAVyZWx1MZSMBWxhYmVslIWUjA5za2lwX2Nvbm5lY3RfMZRLAEsBjAhmZWF0dXJlMpSMCGRyb3BvdXQylIwFcmVsdTKUjA5za2lwX2Nvbm5lY3RfMpSMCGZlYXR1cmUzlIwIZHJvcG91dDOUjAVyZWx1M5SMDnNraXBfY29ubmVjdF8zlIwIZmVhdHVyZTSUjAhkcm9wb3V0NJSMBXJlbHU0lIwOc2tpcF9jb25uZWN0XzSUjAhmZWF0dXJlNZSMCGRyb3BvdXQ1lIwFcmVsdTWUjA5za2lwX2Nvbm5lY3RfNZR0lCiMBXN1cGVylGgVjANubmmUjAZjaG9pY2WUjAthZGRfbXV0YWJ
sZZSMDU11dGFibGVMaW5lYXKUjAZsYXllcjGUjA5NdXRhYmxlRHJvcG91dJRoZowLTGF5ZXJDaG9pY2WUjAtSZUxVV3JhcHBlcpSMC1RhbmhXcmFwcGVylIwOU2lnbW9pZFdyYXBwZXKUaGeMCE15TW9kdWxllIwGY2hvc2VulIwGc2tpcF8xlIwGbGF5ZXIylGhsaG2MBnNraXBfMpSMBmxheWVyM5RocGhxjAZza2lwXzOUjAZsYXllcjSUaHRodYwGc2tpcF80lIwGbGF5ZXI1lGh4aHmMBnNraXBfNZSMBmZjX291dJR0lChoKIwKaW5wdXRfc2l6ZZSMCWZlYXR1cmVfMZSMCWZlYXR1cmVfMpSMCWZlYXR1cmVfM5SMCWZlYXR1cmVfNJSMCWZlYXR1cmVfNZR0lIwtL3NzZC96enIvbXRsL0xHQk1fbXVsdGlfdGFzay9hdXRvbWwvY3VzdG9tLnB5lGgVSzFDiAABCgEEARIBCgEMARgBAgEEAQQBBP0CBAL8CAYeAQoCCgESAQwBGAECAQQBBAEE/QIEAvwKBgQCHgMMARIBDAEYAQIBBAEEAQT9AgQC/AoGBAIeAwwBEgEMARgBAgEEAQQBBP0CBAL8CgYEAh4DDAESAQwBGAECAQQBBAEE/QIEAvwKBgQCHgKUjAlfX2NsYXNzX1+UhZQpdJRSlH2UKGg0Tmg2aA1oN2ibdU5OaDopUpSFlHSUUpRoQGilfZR9lChoNmgVaEOMFU15TW9kZWxTcGFjZS5fX2luaXRfX5RoRX2UaEhOaElOaAxoDWhKTmhLaE1oEIWUUpSFlIwXX2Nsb3VkcGlja2xlX3N1Ym1vZHVsZXOUXZSMC19fZ2xvYmFsc19flH2UKGh9aACMCXN1YmltcG9ydJSTlIwDbm5plIWUUpRogIwabm5pLm5hcy5ubi5weXRvcmNoLl9sYXllcnOUaICTlGiCaLZogpOUaIOMGW5uaS5uYXMubm4ucHl0b3JjaC5jaG9pY2WUaIOTlGiEaAIoaAVohIwXdG9yY2gubm4ubW9kdWxlcy5tb2R1bGWUjAZNb2R1bGWUk5SFlH2UaAxoDXOMIDM5YTliNjUxYjM1MzRhYjhhYjQ3YmJkMTU2MjZiNDNklE50lFKUaBNown2UKGgMaA2MB2ZvcndhcmSUaBcoaBwoSwJLAEsASwJLA0tDQwp0AKABfAGhAVMAlE6FlIwBRpSMBHJlbHWUhpRoKIwBeJSGlGibaMRLIEMCAAGUKSl0lFKUaKFOTk50lFKUaEBo0H2UfZQoaDZoxGhDjBNSZUxVV3JhcHBlci5mb3J3YXJklGhFfZRoSE5oSU5oDGgNaEpOaEtOaK1dlGivfZRox2iyjBN0b3JjaC5ubi5mdW5jdGlvbmFslIWUUpRzdYaUhlIwaEpOdX2UhpSGUjBohWgCKGgFaIVovYWUfZRoDGgNc4wgYzM0ZWRlNGYzNzRkNGU1Y2FkNDIxYzA2Zjc2MDMwZjmUTnSUUpRoE2jhfZQoaAxoDWjEaBcoaBwoSwJLAEsASwJLA0tDQwp0AKABfAGhAVMAlGjGjAV0b3JjaJSMBHRhbmiUhpRoy2ibaMRLJEMCAAGUKSl0lFKUaKFOTk50lFKUaEBo632UfZQoaDZoxGhDjBNUYW5oV3JhcHBlci5mb3J3YXJklGhFfZRoSE5oSU5oDGgNaEpOaEtOaK1dlGivfZRo5GiyaOSFlFKUc3WGlIZSMGhKTnV9lIaUhlIwaIZoAihoBWiGaL2FlH2UaAxoDXOMIDBhM2M1M2YzOGFlMDQ1YWY4MDZlZTk2ZmIxMmFkYjNilE50lFKUaBNo+32UKGgMaA1oxGgXKGgcKEsCSwBLAEsCSwNLQ0MKdACgAXwBoQFTAJRoxmjkjAdzaWdtb2lklIaUaMtom2jESyhDAgABlCkpdJRSlGihTk5OdJRSlGhAagQBAAB9lH2UKGg2aMRoQ4wWU2lnbW9pZFdyYXBwZXIuZm9yd2FyZJRoRX2UaEhOaElOaAxoDWhKTmhLTmitXZRor32UaORo83N1hpSGUjBoSk51fZSGlIZSMGiHaAIoaAVoh2gHjBJQYXJhbWV0cml6ZWRNb2R1bGWUk5SFlH2UaAxoDXOMIDdhODY1NjkwZjQ1NjQyNjNhNzI4MmFkZjdhYmNmOTU3lE50lFKUaBNqFAEAAH2UKGgMaA1oFWgXKGgcKEsBSwBLAEsFSwVLH0OedACDAH0DfANkAGsJcjR8AGoBfANmAXwBngJ8Ao4BXAJ9AX0CiAB8AGYBfAGeAnwCjgFTAHwAagJ8AXwCjgEBAHQDoAR8AXwCoAWhAKECRABdKH0EdAZ8BHQHgwJyanwAoAh8BKEBAQBxUHQJfAR8AGoKaguDAgEAcVB8AGoBZAF8AZ4CfAKOAVwCfQF9AogAfABmAXwBngJ8Ao4BUwCUTk6FlIaUKGhQjBVmcmVlemVfaW5pdF9hcmd1bWVudHOUaCGMCWl0ZXJ0b29sc5SMBWNoYWlulIwGdmFsdWVzlIwKaXNpbnN0YW5jZZSMB011dGFibGWUaH+MF193YXJuX2lmX25lc3RlZF9tdXRhYmxllGidaDZ0lChoKGgpaCpoVYwDYXJnlHSUaCxoLU1vAkMWAAEGAQgDFgEQAwwBFAEKAQwCEAQUAZRoWSl0lFKUaDNOTmg6KVKUhZR0lFKUaEBqKQEAAH2UfZQoaDZoLWhDjDJwYXJhbWV0cml6ZWRfbW9kdWxlX2luaXRfd3JhcHBlci48bG9jYWxzPi5uZXdfaW5pdJRoRX2UaChqDwEAAHNoSE5oSU5oDGgHaEpOaEtoTWgXKGgcKEsCSwBLAEsCSwJLA0MUdACDAKABoQABAHwBfABfAmQAUwCUaMZofGgVaIiHlGjLaJtoFUssQwQAAQoBlGieKXSUUpRooU5OaDopUpSFlHSUUpRoQGo2AQAAfZR9lChoNmgVaEOMEU15TW9kdWxlLl9faW5pdF9flGhFfZRoSE5oSU5oDGgNaEpOaEtoTWoUAQAAhZRSlIWUaK1dlGivfZR1hpSGUjCFlFKUhZRorV2UaK99lChoUIwUbm5pLm5hcy5zcGFjZS5mcm96ZW6UaFCTlGoaAQAAaLKMCWl0ZXJ0b29sc5SFlFKUah4BAACME25uaS5tdXRhYmxlLm11dGFibGWUah4BAACTlGofAQAAaAdqHwEAAJOUdXWGlIZSMGhKTowNX2luaXRfd3JhcHBlZJRqNgEAAHV9lIaUhlIwdXWGlIZSMIWUUpSFlGitXZRor32UKGhQakcBAABoUYwSbm5pLm11dGFibGUuZnJvemVulGhRk5R1dYaUhlIwhZRSlIWUaK1dlGivfZQoaCSMEW5uaS5tdXRhYmxlLnV0aWxzlGgkk5RoJWgHaCWTlHV1hpSGUjBoxGgXKGgcKEsCSwBLAEsDSwNLQ0PMfACgAHwBoQF9AXwAoAF8AaEBfQF8AKACfAGhAX0BfABqA2QBawJyRnwAoAR8AaEBfQF8AKAFfAGhAX0BfACgBn
wBoQF9AXwAagdkAWsCcm58AKAIfAGhAX0BfACgCXwBoQF9AXwAoAp8AaEBfQF8AGoLZAFrAnKWfACgDHwBoQF9AXwAoA18AaEBfQF8AKAOfAGhAX0BfABqD2QBawJyvnwAoBB8AaEBfQF8AKARfAGhAX0BfACgEnwBoQF9AXwAoBN8AaEBfQJ8AlMAlE5LAIaUKGiBaGdoZmiJaIpobWhsaItojGhxaHBojWiOaHVodGiPaJBoeWh4aJJ0lGgoaMqMBm91dHB1dJSHlGibaMRLfkMqAAEKAQoBCgEKAQoBCgEKAQoBCgEKAQoBCgEKAQoBCgEKAQoBCgEKAQoBlCkpdJRSlGihTk5OdJRSlGhAam0BAAB9lH2UKGg2aMRoQ4wUTXlNb2RlbFNwYWNlLmZvcndhcmSUaEV9lGhITmhJTmgMaA1oSk5oS05orV2UaK99lHWGlIZSMGhKTmpPAQAAaKVoI051fZSGlIZSMC4="}, "model_args": [], "model_kwargs": {}, "evaluator": {"__symbol__": "path:nni.nas.evaluator.functional.FunctionalEvaluator", "__kwargs__": {"function": {"__nni_type__": "bytes:gAWVHSAAAAAAAACMF2Nsb3VkcGlja2xlLmNsb3VkcGlja2xllIwOX21ha2VfZnVuY3Rpb26Uk5QoaACMDV9idWlsdGluX3R5cGWUk5SMCENvZGVUeXBllIWUUpQoSwFLAEsASwtLB0tDQ8pkAX0BdABq
.....
[2023-09-28 10:28:21] INFO (NNIManager) Trial job OaXfE status changed from RUNNING to SUCCEEDED
[2023-09-28 10:28:21] INFO (NNIManager) Trial job BXIkl status changed from RUNNING to SUCCEEDED
..............
[2023-09-28 10:28:59] INFO (NNIManager) Change NNIManager status from: NO_MORE_TRIAL to: DONE
[2023-09-28 10:28:59] INFO (NNIManager) Experiment done.
[2023-09-28 10:28:59] INFO (ShutdownManager) Initiate shutdown: REST request
[2023-09-28 10:28:59] INFO (RestServer) Stopping REST server.
[2023-09-28 10:28:59] INFO (NNIManager) Change NNIManager status from: DONE to: STOPPING
[2023-09-28 10:28:59] INFO (NNIManager) Stopping experiment, cleaning up ...
[2023-09-28 10:28:59] INFO (TaskScheduler) Release whole experiment spg98lnc
[2023-09-28 10:28:59] INFO (LocalV3.local) All trials stopped
[2023-09-28 10:28:59] INFO (RestServer) REST server stopped.
[2023-09-28 10:28:59] INFO (NNIManager) Change NNIManager status from: STOPPING to: STOPPED
[2023-09-28 10:28:59] INFO (NNIManager) Experiment stopped.
[2023-09-28 10:28:59] INFO (NNITensorboardManager) Forced stopping all tensorboard task.
[2023-09-28 10:28:59] INFO (NNITensorboardManager) All tensorboard task stopped.
[2023-09-28 10:28:59] INFO (NNITensorboardManager) Tensorboard manager stopped.
[2023-09-28 10:28:59] INFO (ShutdownManager) Shutdown complete.
- dispatcher.log:
- nnictl stdout and stderr:
<!--
Where can you find the log files:
LOG: https://github.com/microsoft/nni/blob/master/docs/en_US/Tutorial/HowToDebug.md#experiment-root-director
STDOUT/STDERR: https://nni.readthedocs.io/en/stable/reference/nnictl.html#nnictl-log-stdout
-->
**How to reproduce it?**: | open | 2023-09-28T02:54:18Z | 2023-09-28T02:54:18Z | https://github.com/microsoft/nni/issues/5686 | [] | igodrr | 0 |
waditu/tushare | pandas | 1413 | fund_nav() for off-exchange (OTC) funds is missing net asset value and related fields | The code is as follows:
```
print(pro.fund_nav(ts_code="000171.OF"))
```
The result is as follows:
```
ts_code ann_date end_date unit_nav accum_nav accum_div net_asset total_netasset adj_nav update_flag
0 000171.OF 20200815 20200814 1.959 1.959 None NaN NaN 1.959 0
1 000171.OF 20200814 20200813 1.953 1.953 None NaN NaN 1.953 0
2 000171.OF 20200813 20200812 1.953 1.953 None NaN NaN 1.953 0
3 000171.OF 20200812 20200811 1.961 1.961 None NaN NaN 1.961 0
4 000171.OF 20200811 20200810 1.964 1.964 None NaN NaN 1.964 0
... ... ... ... ... ... ... ... ... ... ...
1696 000171.OF 20130917 20130916 1.007 1.007 None NaN NaN 1.007 0
1697 000171.OF 20130914 20130913 1.005 1.005 None 6.211282e+08 6.211282e+08 1.005 0
1698 000171.OF 20130907 20130906 1.004 1.004 None 6.205057e+08 6.205057e+08 1.004 0
1699 000171.OF 20130831 20130830 0.997 0.997 None 6.163435e+08 6.163435e+08 0.997 0
1700 000171.OF 20130824 20130823 1.000 1.000 None 6.182839e+08 6.182839e+08 1.000 0
```
The latest rows are missing the net_asset and total_netasset values.
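A hedged workaround sketch for the gap described above: keep only the rows that actually carry the asset-size fields (they appear to be reported roughly weekly in this output) and take the latest one. The `pro_api` token setup is assumed to be configured already:
```python
import tushare as ts

pro = ts.pro_api()  # assumes the token has already been set
df = pro.fund_nav(ts_code="000171.OF")

with_assets = df.dropna(subset=["net_asset", "total_netasset"])
latest = with_assets.sort_values("end_date", ascending=False).head(1)
print(latest[["end_date", "unit_nav", "net_asset", "total_netasset"]])
```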
https://tushare.pro id: 386529 | open | 2020-08-17T08:09:45Z | 2020-08-17T09:49:06Z | https://github.com/waditu/tushare/issues/1413 | [] | yuruiz | 0 |
plotly/dash | plotly | 2,602 | uwsgi background callbacks progress hangs | dash 2.11.1
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-table 5.0.0
OS: debian 12.0
uwsgi 2.0.21
chrome 114.0.5735.198 (Official Build) (64-bit)
try to use uwsgi as app server:
`uwsgi --http-socket :8080 --master --workers 4 -w dtest2:wsgi_app`
and test background callbacks with progress
```python
#!/usr/bin/python3
# -*- coding: utf-8 -*-
import os
import time
from dash import Dash, html, dcc, Input, Output, callback, DiskcacheManager
import diskcache
import plotly.express as px
import plotly.io as pio
import pandas as pd
dcache = diskcache.Cache("./cache")
bcm = DiskcacheManager(dcache)
app = Dash(__name__, title='dash test2', background_callback_manager=bcm)
wsgi_app = app.server
app.layout = html.Div([
html.Div(
[
html.Div(
[
html.P(id="paragraph1", children=["Button not clicked"]),
html.Progress(id="progress_bar1", value="0"),
]
),
html.Button(id="button_run1", children="Run Job!"),
html.Button(id="button_cancel1", children="Cancel Running Job!"),
]
),
html.Div(
[
html.Div(
[
html.P(id="paragraph2", children=["Button not clicked"]),
html.Progress(id="progress_bar2", value="0"),
]
),
html.Button(id="button_run2", children="Run Job!"),
html.Button(id="button_cancel2", children="Cancel Running Job!"),
]
)
]
)
def long_task(set_progress, n_clicks):
total = 10
for i in range(total + 1):
set_progress((str(i), str(total)))
time.sleep(1)
pid = os.getpid()
return f"Clicked {n_clicks} times, pid {pid}"
@callback(
output=Output("paragraph1", "children"),
inputs=Input("button_run1", "n_clicks"),
running=[
(Output("button_run1", "disabled"), True, False),
(Output("button_cancel1", "disabled"), False, True),
(
Output("paragraph1", "style"),
{"visibility": "hidden"},
{"visibility": "visible"},
),
(
Output("progress_bar1", "style"),
{"visibility": "visible"},
{"visibility": "hidden"},
),
(
Output("progress_bar1", "value"),
'0',
'0',
),
],
cancel=Input("button_cancel1", "n_clicks"),
progress=[Output("progress_bar1", "value"), Output("progress_bar1", "max")],
background=True,
prevent_initial_call=True
)
def long_task_calback1(set_progress, n_clicks):
return long_task(set_progress, n_clicks)
@callback(
output=Output("paragraph2", "children"),
inputs=Input("button_run2", "n_clicks"),
running=[
(Output("button_run2", "disabled"), True, False),
(Output("button_cancel2", "disabled"), False, True),
(
Output("paragraph2", "style"),
{"visibility": "hidden"},
{"visibility": "visible"},
),
(
Output("progress_bar2", "style"),
{"visibility": "visible"},
{"visibility": "hidden"},
),
(
Output("progress_bar2", "value"),
'0',
'0',
),
],
cancel=Input("button_cancel2", "n_clicks"),
progress=[Output("progress_bar2", "value"), Output("progress_bar2", "max")],
background=True,
prevent_initial_call=True
)
def long_task_calback2(set_progress, n_clicks):
return long_task(set_progress, n_clicks)
if __name__ == '__main__':
app.run(debug=True)
```
It looks like the background callbacks run in the background as expected, but no progress updates are performed,
and `http://127.0.0.1:8080/_dash-update-component` hangs in the Pending state.
Why uwsgi?
It has a lot more options than gunicorn.
With
`gunicorn dtest2:wsgi_app -b :8081`
everything works as expected.
Not sure whether this is a bug or a feature.
| closed | 2023-07-14T02:30:13Z | 2024-07-25T13:20:39Z | https://github.com/plotly/dash/issues/2602 | [] | acrixl | 3 |