repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count
---|---|---|---|---|---|---|---|---|---|---|---
OFA-Sys/Chinese-CLIP | nlp | 297 | Question about ACC and R@5 | The ACC on both the validation and training sets reaches 80+, but the R@5 scores on the test and validation sets are only 18 and 9. What could be causing this, and how can it be fixed? | open | 2024-04-16T09:59:30Z | 2024-08-26T09:08:05Z | https://github.com/OFA-Sys/Chinese-CLIP/issues/297 | [] | EasonTuT | 14 |
widgetti/solara | flask | 1,022 | --auto-restart option of the 'solara run' command does not open browser | When `auto_restart` is set to `True`, the `uvicorn.Server` is not executed; instead, the `ChangeReload` instance is used.
https://github.com/widgetti/solara/blob/8947be4d7ddbc891017e89d6a463a94f8ac0c355/solara/__main__.py#L443
However, the `open_browser()` function does not check if the `ChangeReload` instance is currently running, causing it to remain trapped in the while loop indefinitely.
https://github.com/widgetti/solara/blob/8947be4d7ddbc891017e89d6a463a94f8ac0c355/solara/__main__.py#L382
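A minimal sketch of the failure mode described above (the loop shape and the `started` attribute are assumptions based on uvicorn's `Server`, not solara's exact code):
```python
import time
import webbrowser

def open_browser(server, url):
    # uvicorn.Server sets `started` once it is serving, so this loop exits.
    # A ChangeReload instance never sets it, so with --auto-restart the loop
    # spins forever and webbrowser.open() is never reached.
    while not getattr(server, "started", False):
        time.sleep(0.1)
    webbrowser.open(url)
```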
## Expected Behaviour
`solara run --auto-restart` should open the browser.
## Current Behaviour
`solara run --auto-restart` does not open the browser.
## Specifications
- Solara Version: 1.44.0
- Platform: macOS
- Affected Python Versions: 3.12.4
| open | 2025-03-19T17:22:28Z | 2025-03-19T17:22:28Z | https://github.com/widgetti/solara/issues/1022 | [] | SeoulSKY | 0 |
lepture/authlib | django | 59 | AGPL?! | I just realized that this library is AGPL-licensed - quite unexpected considering that its predecessors like flask-oauthlib and oauthlib are BSD-licensed, and e.g. flask-oauthlib [strongly](https://github.com/lepture/flask-oauthlib#notice) recommends that people switch to authlib instead.
While I completely understand that you want to make money with this library when people use it in commercial/closed-source software, the fact that it's AGPL and thus very viral seems problematic:
For example, many open source projects nowadays use a more liberal license like MIT or BSD.
And while IANAL, I'm pretty sure any such projects are currently excluded from using your library, since you cannot use GPL software in an MIT/BSD-licensed application...
...which is truly unfortunate, since AFAIK there is no other decent implementation of OAuth providers for Python out there - and many webapps do include OAuth provider functionality nowadays! And we all know what happens when people start implementing their own security code... usually it won't be as secure as it should be. | closed | 2018-05-25T12:35:20Z | 2019-04-06T05:14:49Z | https://github.com/lepture/authlib/issues/59 | [] | ThiefMaster | 45 |
DistrictDataLabs/yellowbrick | matplotlib | 826 | API Discussion for Figures and Axes | This issue is to discuss the open letter regarding Yellowbrick's API roadmap. To summarize, we currently attempt to only manage matplotlib `Axes` objects so that visualizers can be embedded into more complex plots and reports. However, many of our visualizers are getting increasingly complex, requiring subplots of their own. For a complete discussion of the API issue please see:
https://www.scikit-yb.org/en/develop/api/figures.html
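To make the distinction concrete, a rough sketch of the two designs (hypothetical class names, not Yellowbrick's actual API):
```python
import matplotlib.pyplot as plt

class AxesVisualizer:
    """Draws into a caller-supplied Axes, so it can be embedded in larger layouts."""
    def __init__(self, ax=None):
        self.ax = ax if ax is not None else plt.gca()

class FigureVisualizer:
    """Owns an entire Figure, so it is free to create whatever subplots it needs."""
    def __init__(self, nrows=2, ncols=2):
        self.fig, self.axes = plt.subplots(nrows, ncols)
```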
The questions at hand are:
1. Like Seaborn, should YB have two classes of visualizer, one that wraps an axes and one that wraps a figure?
2. Should we go all in on the AxesGrid toolkit and continue to restrict our use of the figure, will this method be supported in the long run? | closed | 2019-04-24T20:19:39Z | 2019-08-28T23:37:18Z | https://github.com/DistrictDataLabs/yellowbrick/issues/826 | [
"type: question",
"priority: high",
"level: expert"
] | bbengfort | 10 |
errbotio/errbot | automation | 916 | Slack: Cannot send a message to a presence.identifier | When you receive a presence, it doesn't have any channelid attached to it.
If you send a message straight back to it with a send(), the send fails because there is no channel to deliver to.
Fixed by #914 | closed | 2016-11-27T21:47:58Z | 2016-12-06T20:59:14Z | https://github.com/errbotio/errbot/issues/916 | [] | gbin | 1 |
fastapi-users/fastapi-users | fastapi | 746 | TypeError: 'Record' object is not a mapping | ### Discussed in https://github.com/fastapi-users/fastapi-users/discussions/745
<sup>Originally posted by **Briscoooe** September 27, 2021</sup>
I am following [this example](https://replit.com/@frankie567/fastapi-users-sqlalchemy) in my project and I can't seem to get it to work. The error I am getting is
```
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/uvicorn/protocols/http/httptools_impl.py", line 390, in run_asgi
result = await app(self.scope, self.receive, self.send)
File "/usr/local/lib/python3.7/site-packages/uvicorn/middleware/proxy_headers.py", line 45, in __call__
return await self.app(scope, receive, send)
File "/usr/local/lib/python3.7/site-packages/fastapi/applications.py", line 199, in __call__
await super().__call__(scope, receive, send)
File "/usr/local/lib/python3.7/site-packages/starlette/applications.py", line 112, in __call__
await self.middleware_stack(scope, receive, send)
File "/usr/local/lib/python3.7/site-packages/starlette/middleware/errors.py", line 181, in __call__
raise exc from None
File "/usr/local/lib/python3.7/site-packages/starlette/middleware/errors.py", line 159, in __call__
await self.app(scope, receive, _send)
File "/usr/local/lib/python3.7/site-packages/starlette/middleware/cors.py", line 78, in __call__
await self.app(scope, receive, send)
File "/usr/local/lib/python3.7/site-packages/starlette/exceptions.py", line 82, in __call__
raise exc from None
File "/usr/local/lib/python3.7/site-packages/starlette/exceptions.py", line 71, in __call__
await self.app(scope, receive, sender)
File "/usr/local/lib/python3.7/site-packages/starlette/routing.py", line 580, in __call__
await route.handle(scope, receive, send)
File "/usr/local/lib/python3.7/site-packages/starlette/routing.py", line 241, in handle
await self.app(scope, receive, send)
File "/usr/local/lib/python3.7/site-packages/starlette/routing.py", line 52, in app
response = await func(request)
File "/usr/local/lib/python3.7/site-packages/fastapi/routing.py", line 217, in app
dependant=dependant, values=values, is_coroutine=is_coroutine
File "/usr/local/lib/python3.7/site-packages/fastapi/routing.py", line 149, in run_endpoint_function
return await dependant.call(**values)
File "/usr/local/lib/python3.7/site-packages/fastapi_users/router/auth.py", line 28, in login
user = await user_manager.authenticate(credentials)
File "/usr/local/lib/python3.7/site-packages/fastapi_users/manager.py", line 518, in authenticate
user = await self.get_by_email(credentials.username)
File "/usr/local/lib/python3.7/site-packages/fastapi_users/manager.py", line 102, in get_by_email
user = await self.user_db.get_by_email(user_email)
File "/usr/local/lib/python3.7/site-packages/fastapi_users_db_sqlalchemy/__init__.py", line 133, in get_by_email
return await self._make_user(user) if user else None
File "/usr/local/lib/python3.7/site-packages/fastapi_users_db_sqlalchemy/__init__.py", line 200, in _make_user
user_dict = {**user}
TypeError: 'Record' object is not a mapping
```
The request I am trying is [login](https://fastapi-users.github.io/fastapi-users/usage/routes/#post-login). I know the request is working up until the last point because when I try to login with a bad password I get a 400 error response with a JSON body. When I try and login with correct credentials (or even when the email does exist) I get a 500 as the user table record can't be mapped to the response body. I have followed the example to the letter, most of the code I have is copied 1:1 from the documentation so I can't really see why it doesn't work.
Here are my tables and models. I have moved all the snippets to one block for brevity, in the project they all exist in separate files.
```
class User(Base, SQLAlchemyBaseUserTable):
__tablename__ = 'user_'
first_name = Column(String, nullable=False)
last_name = Column(String, nullable=False)
class User(models.BaseUser):
pass
class UserCreate(models.BaseUserCreate):
pass
class UserUpdate(models.BaseUserUpdate):
pass
class UserInDB(User, models.BaseUserDB):
pass
def get_user_db():
yield SQLAlchemyUserDatabase(UserInDB, database, User.__table__)
SECRET = "SECRET"
class UserManager(BaseUserManager[UserCreate, UserInDB]):
user_db_model = UserInDB
reset_password_token_secret = SECRET
verification_token_secret = SECRET
def get_user_manager(user_db: SQLAlchemyUserDatabase = Depends(deps.get_user_db)):
yield UserManager(user_db)
jwt_authentication = JWTAuthentication(
secret=SECRET, lifetime_seconds=3600, tokenUrl="auth/jwt/login"
)
fastapi_users = FastAPIUsers(
get_user_manager,
[jwt_authentication],
User,
UserCreate,
UserUpdate,
UserInDB,
)
current_active_user = fastapi_users.current_user(active=True)
api_router.include_router(users.fastapi_users.get_auth_router(users.jwt_authentication), prefix="/auth/jwt", tags=["auth"])
api_router.include_router(users.fastapi_users.get_register_router(), prefix="/auth", tags=["auth"])
api_router.include_router(users.fastapi_users.get_reset_password_router(), prefix="/auth", tags=["auth"])
api_router.include_router(users.fastapi_users.get_verify_router(), prefix="/auth", tags=["auth"])
api_router.include_router(users.fastapi_users.get_users_router(), prefix="/users", tags=["users"])
``` | closed | 2021-09-27T09:06:03Z | 2021-10-02T11:05:07Z | https://github.com/fastapi-users/fastapi-users/issues/746 | [] | Briscoooe | 1 |
rafsaf/minimal-fastapi-postgres-template | sqlalchemy | 47 | docker run failed | great template!
`docker run -d -p 8000:8000 ..` yields the error message below
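(For what it's worth, the pydantic errors at the very bottom of the traceback below say the `security` and `database` settings are missing entirely; assuming the template's `Settings` reads them from environment variables, the usual fix is to pass them to the container, e.g. `docker run --env-file .env -d -p 8000:8000 ...`.)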
```
INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
INFO: Started parent process [8]
Process SpawnProcess-1:
Process SpawnProcess-2:
Traceback (most recent call last):
Traceback (most recent call last):
File "/usr/local/lib/python3.12/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/usr/local/lib/python3.12/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/venv/lib/python3.12/site-packages/uvicorn/_subprocess.py", line 78, in subprocess_started
target(sockets=sockets)
File "/venv/lib/python3.12/site-packages/uvicorn/server.py", line 62, in run
return asyncio.run(self.serve(sockets=sockets))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/asyncio/runners.py", line 194, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "uvloop/loop.pyx", line 1517, in uvloop.loop.Loop.run_until_complete
File "/venv/lib/python3.12/site-packages/uvicorn/server.py", line 69, in serve
config.load()
File "/venv/lib/python3.12/site-packages/uvicorn/config.py", line 433, in load
self.loaded_app = import_from_string(self.app)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/uvicorn/importer.py", line 19, in import_from_string
module = importlib.import_module(module_str)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/importlib/__init__.py", line 90, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1387, in _gcd_import
File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 995, in exec_module
File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
File "/build/app/main.py", line 5, in <module>
from app.api.api_router import api_router, auth_router
File "/build/app/api/api_router.py", line 4, in <module>
from app.api.endpoints import auth, users, patient_status
File "/build/app/api/endpoints/auth.py", line 11, in <module>
from app.api import api_messages, deps
File "/build/app/api/deps.py", line 10, in <module>
from app.core import database_session
File "/build/app/core/database_session.py", line 31, in <module>
_ASYNC_ENGINE = new_async_engine(get_settings().sqlalchemy_database_uri)
^^^^^^^^^^^^^^
File "/build/app/core/config.py", line 70, in get_settings
return Settings() # type: ignore
^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/pydantic_settings/main.py", line 84, in __init__
super().__init__(
File "/venv/lib/python3.12/site-packages/pydantic/main.py", line 171, in __init__
self.__pydantic_validator__.validate_python(data, self_instance=self)
File "/usr/local/lib/python3.12/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/usr/local/lib/python3.12/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
pydantic_core._pydantic_core.ValidationError: 2 validation errors for Settings
security
Field required [type=missing, input_value={}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.6/v/missing
database
Field required [type=missing, input_value={}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.6/v/missing
File "/venv/lib/python3.12/site-packages/uvicorn/_subprocess.py", line 78, in subprocess_started
target(sockets=sockets)
File "/venv/lib/python3.12/site-packages/uvicorn/server.py", line 62, in run
return asyncio.run(self.serve(sockets=sockets))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/asyncio/runners.py", line 194, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "uvloop/loop.pyx", line 1517, in uvloop.loop.Loop.run_until_complete
File "/venv/lib/python3.12/site-packages/uvicorn/server.py", line 69, in serve
config.load()
File "/venv/lib/python3.12/site-packages/uvicorn/config.py", line 433, in load
self.loaded_app = import_from_string(self.app)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/uvicorn/importer.py", line 19, in import_from_string
module = importlib.import_module(module_str)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/importlib/__init__.py", line 90, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1387, in _gcd_import
File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 995, in exec_module
File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
File "/build/app/main.py", line 5, in <module>
from app.api.api_router import api_router, auth_router
File "/build/app/api/api_router.py", line 4, in <module>
from app.api.endpoints import auth, users, patient_status
File "/build/app/api/endpoints/auth.py", line 11, in <module>
from app.api import api_messages, deps
File "/build/app/api/deps.py", line 10, in <module>
from app.core import database_session
File "/build/app/core/database_session.py", line 31, in <module>
_ASYNC_ENGINE = new_async_engine(get_settings().sqlalchemy_database_uri)
^^^^^^^^^^^^^^
File "/build/app/core/config.py", line 70, in get_settings
return Settings() # type: ignore
^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/pydantic_settings/main.py", line 84, in __init__
super().__init__(
File "/venv/lib/python3.12/site-packages/pydantic/main.py", line 171, in __init__
self.__pydantic_validator__.validate_python(data, self_instance=self)
pydantic_core._pydantic_core.ValidationError: 2 validation errors for Settings
security
Field required [type=missing, input_value={}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.6/v/missing
database
Field required [type=missing, input_value={}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.6/v/missing
``` | closed | 2024-03-13T02:19:39Z | 2024-12-06T18:32:49Z | https://github.com/rafsaf/minimal-fastapi-postgres-template/issues/47 | [
"question"
] | zhibor | 1 |
Nekmo/amazon-dash | dash | 112 | Amazon dash sends 2 times | I've just configured a HA event with call_service.
For instance:
```
name: ColaCao
homeassistant: http://127.0.0.1:8123 # Address to the hass server
event: call_service # Event name to send
data: '{"domain": "switch", "service": "toggle", "service_data": {"entity_id": "switch.studio_speakers"}}'
```
The button makes two calls: one when I press it, which should be normal, and another when it stops blinking. I'm using it to toggle, so the first call works, but after approximately 15-20 seconds it turns off again. | closed | 2018-12-09T12:35:59Z | 2018-12-09T15:27:37Z | https://github.com/Nekmo/amazon-dash/issues/112 | [] | nicomda | 1 |
psf/requests | python | 6,310 | Requests loses payload when encountering HTTP code 301 |
When connecting to an endpoint with HTTP that has HTTP -> HTTPS redirect enabled, Requests loses the payload and sends no payload when reconnecting to the endpoint with HTTPS. It is considered good practice to redirect non-HTTPS traffic to HTTPS. The widely-accepted method is to set up a web server to send an HTTP 301 error ("Moved Permanently") back to the client with the equivalent HTTPS URL, and clients are expected to reconnect to the given HTTPS URL. When the requests library encounters this, it connects to the HTTPS URL, but does not re-send the payload.
Since the end user expects a 301 redirect to be transparent, Requests should re-send the payload to the new HTTPS endpoint URL.
Requests sends no payload, the expected result is not seen from the API, and [I come to GitHub searching for help and coldly get told I didn't RTFM and I get my issue closed immediately before more empathic users have a chance to help me look into it.](https://github.com/psf/requests/issues/6308#issuecomment-1352419014)
```
#!/usr/local/bin/python3
import requests
id = 1000
api_url_root = "http://[api url]"
api_access_token = "[access token]"
api_url = api_url_root + "/items/filmstrips/" + str(id) + "?access_token=" + api_access_token
payload={'problem': False}
response = requests.patch(api_url, json=payload, headers = {'Content-type': 'application/json'})
```
By [enabling logging as outlined in this StackOverflow post](https://stackoverflow.com/questions/10588644/how-can-i-see-the-entire-http-request-thats-being-sent-by-my-python-application), I can see the data Requests is sending to my API. In this data we can see that the payload is not re-sent when Requests re-connects to the endpoint with HTTPS:
```
DEBUG:urllib3.connectionpool:Starting new HTTP connection (1): [redacted]:80
send: b'PATCH /items/filmstrips/1000?access_token=[access token] HTTP/1.1\r\nHost: [api url]\r\nUser-Agent: python-requests/2.28.1\r\nAccept-Encoding: gzip, deflate\r\nAccept: */*\r\nConnection: keep-alive\r\nContent-type: application/json\r\nContent-Length: 18\r\n\r\n'
send: b'{"problem": false}'
reply: 'HTTP/1.1 301 Moved Permanently\r\n'
header: Date: Thu, 15 Dec 2022 15:42:05 GMT
header: Server: Apache/2.4.41 (Ubuntu)
header: Location: https://[api url]/items/filmstrips/1000?access_token=[redacted]
header: Content-Length: 396
header: Keep-Alive: timeout=5, max=100
header: Connection: Keep-Alive
header: Content-Type: text/html; charset=iso-8859-1
DEBUG:urllib3.connectionpool:http://[api url]:80 "PATCH /items/filmstrips/1000?access_token=[access token] HTTP/1.1" 301 396
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): db.uncommonephemera.org:443
send: b'PATCH /items/filmstrips/1000?access_token=[access token] HTTP/1.1\r\nHost: [api url]\r\nContent-Length: 0\r\nUser-Agent: python-requests/2.28.1\r\nAccept-Encoding: gzip, deflate\r\nAccept: */*\r\nConnection: keep-alive\r\n\r\n'
reply: 'HTTP/1.1 200 OK\r\n'
header: Date: Thu, 15 Dec 2022 15:42:05 GMT
header: Server: Apache/2.4.41 (Ubuntu)
header: Content-Security-Policy: script-src 'self' 'unsafe-eval';worker-src 'self' blob:;child-src 'self' blob:;img-src 'self' data: blob: https://cdn.directus.io;media-src 'self' https://cdn.directus.io;connect-src 'self' https://*;default-src 'self';base-uri 'self';font-src 'self' https: data:;form-action 'self';frame-ancestors 'self';object-src 'none';script-src-attr 'none';style-src 'self' https: 'unsafe-inline'
header: X-Powered-By: Directus
header: Vary: Origin,Cache-Control
header: Access-Control-Allow-Credentials: true
header: Access-Control-Expose-Headers: Content-Range
header: Cache-Control: no-cache
header: Content-Type: application/json; charset=utf-8
header: Content-Length: 1155
header: ETag: W/"483-sGHR5YbuBGXNuxYH9U0ytfDhitE"
header: Keep-Alive: timeout=5, max=100
header: Connection: Keep-Alive
DEBUG:urllib3.connectionpool:https://[api url]:443 "PATCH /items/filmstrips/1000?access_token=[access token] HTTP/1.1" 200 1155
```
If I change `http://` in the API URL to `https://`, there is no redirect, the payload is not lost, and the payload is sent to the API.
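For anyone defending against this in the meantime, refusing to follow redirects makes the dropped body impossible to miss (a sketch using only standard requests APIs):
```python
# Fail fast instead of letting the redirect silently discard the payload.
response = requests.patch(api_url, json=payload, allow_redirects=False)
if response.is_redirect:
    raise RuntimeError(
        f"Endpoint redirected to {response.headers['Location']}; "
        "use the https:// URL directly so the payload is preserved."
    )
```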
In my case, the use of `http://` over `https://` was inadvertent, and if I had not done this Requests would have worked as expected. However, this bug plus the reactionary response from Requests devs that I must be too stupid to use their library cost me almost a full day's worth of time. It is hoped that by publishing this issue, I can help others in my situation avoid dealing with people who do not want to help.
## System Information
$ python -m requests.help
```
{
"chardet": {
"version": null
},
"charset_normalizer": {
"version": "2.1.1"
},
"cryptography": {
"version": ""
},
"idna": {
"version": "3.4"
},
"implementation": {
"name": "CPython",
"version": "3.10.8"
},
"platform": {
"release": "21.6.0",
"system": "Darwin"
},
"pyOpenSSL": {
"openssl_version": "",
"version": null
},
"requests": {
"version": "2.28.1"
},
"system_ssl": {
"version": "1010113f"
},
"urllib3": {
"version": "1.26.12"
},
"using_charset_normalizer": true,
"using_pyopenssl": false
}
```
| closed | 2022-12-15T16:20:51Z | 2023-12-16T00:03:15Z | https://github.com/psf/requests/issues/6310 | [] | uncommonephemera | 2 |
matterport/Mask_RCNN | tensorflow | 2,432 | ModuleNotFoundError: No module named 'astunparse' | When I run demo.py, Spyder prompts: ModuleNotFoundError: No module named 'astunparse' | open | 2020-11-28T11:38:09Z | 2020-11-28T11:38:09Z | https://github.com/matterport/Mask_RCNN/issues/2432 | [] | s1297422520 | 0 |
noirbizarre/flask-restplus | flask | 319 | MIT license is incompatible with the code which was copied from Flask-RESTful | I recognize a substantial amount of code from Flask-RESTful in this project (as noted originally in #149). Your inclusion of some attribution to the original authors added in 7edec90 helps, but you are still not able to re-license BSD code as MIT. Here's a helpful chart to illustrate license compatibility:

> An arrow from box A to box B means that you can combine software with these licenses; the combined result effectively has the license of B, possibly with additions from A.
([source](https://www.dwheeler.com/essays/floss-license-slide.html))
tl;dr: You need to change the license of this project to be BSD 3-clause | closed | 2017-08-30T21:13:22Z | 2018-05-17T20:43:25Z | https://github.com/noirbizarre/flask-restplus/issues/319 | [] | joshfriend | 2 |
tartiflette/tartiflette | graphql | 87 | Subscription operations with multiple root fields don't raise any error | A GraphQL request containing a subscription operation with multiple root fields doesn't raise any error (cf. [GraphQL spec](https://facebook.github.io/graphql/June2018/#sec-Single-root-field)):
```graphql
interface Sentient {
name: String!
}
interface Pet {
name: String!
}
type Human implements Sentient {
name: String!
}
type Dog implements Pet {
name: String!
owner: Human
}
type Query {
dog: Dog
}
type Subscription {
newDog: Dog
newHuman: Human
}
```
The following requests should raise an error:
```graphql
subscription Sub {
newDog {
name
}
newHuman {
name
}
}
```
```graphql
fragment MultipleSubscriptionsFields on Subscription {
newDog {
name
}
newHuman {
name
}
}
subscription Sub {
...MultipleSubscriptionsFields
}
```
```graphql
subscription Sub {
newDog {
name
}
__typename
}
``` | closed | 2019-01-15T15:26:38Z | 2019-01-17T11:14:02Z | https://github.com/tartiflette/tartiflette/issues/87 | [
"bug"
] | Maximilien-R | 0 |
ultralytics/yolov5 | pytorch | 12,725 | train.py in YOLOv5 no information is displayed, program executes with no error messages, but weights are not saved | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and found no similar bug report.
### YOLOv5 Component
Training
### Bug
I am running the command:
!python train.py --img 256 --epochs 1 --batch-size 16 --data dataset.yml --weights yolov5n.pt
The command executes and finishes, but while it runs no information is displayed, and after it finishes no weights are saved under runs/train/exp. There is no error message displayed either. Is there perhaps something wrong with the way I've organized my data?

### Environment
- YOLO: YOLOv5
- Python: 3.11.5
- OS: Windows
### Minimal Reproducible Example
!python train.py --img 256 --epochs 1 --batch-size 16 --data dataset.yml --weights yolov5n.pt
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | closed | 2024-02-09T22:50:47Z | 2024-10-20T19:39:25Z | https://github.com/ultralytics/yolov5/issues/12725 | [
"bug",
"Stale"
] | artduurrr | 7 |
huggingface/pytorch-image-models | pytorch | 1,594 | [Feature] Make (non `rmlp`) MaxxViT support variable resolution | Dear rwightman, thanks for your work. I tried to feed a tensor of size (16, 3, 112, 112) to the MaxxViT small 224 model, but it failed. Do you have any solutions? | closed | 2022-12-17T02:27:55Z | 2023-08-21T23:01:45Z | https://github.com/huggingface/pytorch-image-models/issues/1594 | [
"enhancement",
"help wanted"
] | mywebinfo65536 | 5 |
ageitgey/face_recognition | python | 1,269 | error with getting multiple matches using KNN module | * face_recognition version: latest
* Python version: 3.9.1
* ubuntu 20.04
### Description
After a lot of digging trying to get multiple results for an unknown person, I am getting the same results for everyone.
My dataset covers 26 people, with about 170 images across all of them.
### What I Did
```python
# Find encodings for faces in the in_stream image
face_encodings = face_recognition.face_encodings(X_img, known_face_locations=X_face_locations, num_jitters=5)
distance, index = knn_clf.kneighbors(face_encodings, n_neighbors=4)
training_labels_indices = knn_clf._y
class_labels = knn_clf.classes_
user_id = class_labels[training_labels_indices[index]]
for us, i in zip(user_id[0], range(len(index[0]))):
    # print(user)
    # user = class_labels[training_labels_indices[i]]
    percentage = "{:.0%}".format(face_distance_to_conf(face_distance=distance[0][i],
                                                       face_match_threshold=distance_threshold))
    # print(user)
    # print(percentage)
    if not any(u.user_id == us for u in results):
        results.append(DetectionResultsModule(us, percentage))
```
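For anyone debugging the same thing: the public estimator API avoids relying on the private `_y` mapping entirely. A sketch reusing the variables above (`predict_proba` and `classes_` are standard scikit-learn attributes; `DetectionResultsModule` comes from the snippet):
```python
# Class-membership probabilities for the first detected face.
probabilities = knn_clf.predict_proba(face_encodings)[0]
top_matches = sorted(zip(knn_clf.classes_, probabilities), key=lambda t: -t[1])[:4]
for user, prob in top_matches:
    if not any(u.user_id == user for u in results):
        results.append(DetectionResultsModule(user, "{:.0%}".format(prob)))
```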
I would appreciate any help | open | 2021-01-21T23:53:53Z | 2021-02-05T05:38:06Z | https://github.com/ageitgey/face_recognition/issues/1269 | [] | ferasawadi | 2 |
tensorlayer/TensorLayer | tensorflow | 293 | Object Extraction after detection | Hi,
Could you please share a script to extract the coordinates of the detected classes? I want to extract the objects out of the picture. | closed | 2018-01-29T21:25:57Z | 2018-02-18T14:11:21Z | https://github.com/tensorlayer/TensorLayer/issues/293 | [] | alokgrover88 | 4 |
plotly/dash-table | dash | 829 | Dash DataTable with RadioButton as cell content | I’m trying to create a table containing players, games and a set of configurations that can be toggled on/off.
The ideal would be to use a Dash DataTable with RadioButtons (similar to what is already possible with Dropdowns). Is this possible? Any ideas on how to achieve this?

The example shown is created with a "standard" HTML table, but as the list can be very long I need it to be scrollable and flexible, similar to a DataTable. | open | 2020-09-18T22:11:20Z | 2020-09-18T22:11:20Z | https://github.com/plotly/dash-table/issues/829 | [] | TomRoger | 0 |
raphaelvallat/pingouin | pandas | 139 | Remove Glass Delta effect size | Pingouin's implementation of the [Glass Delta effect size](https://www.statisticshowto.com/glasss-delta/) is based on the assumption that the control group will always have the lowest standard deviation.
```python
# Find group with lowest variance
sd_control = np.min([x.std(ddof=1), y.std(ddof=1)])
d = (x.mean() - y.mean()) / sd_control
```
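For comparison, a manual Glass delta that pins the denominator to an explicitly designated control group is a one-liner (a sketch):
```python
import numpy as np

def glass_delta(treatment, control):
    # The control group's SD is used unconditionally, whatever its magnitude.
    return (np.mean(treatment) - np.mean(control)) / np.std(control, ddof=1)
```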
This is misleading. I therefore propose to remove the Glass Delta effect size. Users can either manually calculate the Glass Delta effect size or simply rely on the Cohen d or Hedges g. | closed | 2020-10-08T23:17:41Z | 2020-10-20T21:34:57Z | https://github.com/raphaelvallat/pingouin/issues/139 | [
"deprecation :skull:"
] | raphaelvallat | 1 |
plotly/dash | jupyter | 3,085 | [BUG] scrollTo Jumps the Scroll Bar | - `pip list`
```
dash 2.18.2
dash_ag_grid 31.2.0
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-table 5.0.0
```
- Browser, Version and OS
- OS: win11
- Browser chrome
  - Version 130.0.6723.119 (Official Build) (64-Bit)
**Describe the bug**
Adjusting scrollTo and then selecting a row makes the scroll bar jump back to the position set in my callback, even though the callback is only triggered once, at the beginning.
**Expected behavior**
Since the callback is not getting triggered when selecting a row, the scroll bar should stay where I manually scrolled.
**Code**
```
import dash_ag_grid as dag
from dash import Dash, html, Input, Output
import pandas as pd
app = Dash(__name__)
df = pd.read_csv(
"https://raw.githubusercontent.com/plotly/datasets/master/ag-grid/olympic-winners.csv"
)
columnDefs = [
{"headerName": "Row", "valueGetter": {"function": "params.node.rowIndex"}}
] + [{"field": c} for c in df.columns]
app.layout = html.Div(
[
dag.AgGrid(
id="grid-scroll-to-start",
columnDefs=columnDefs,
rowData=df.to_dict("records"),
dashGridOptions={'rowSelection': 'single', 'animateRows': False}
),
]
)
@app.callback(
Output('grid-scroll-to-start', 'scrollTo'),
Input('grid-scroll-to-start', 'rowData')
)
def handle_scroll_to(rowData):
#dummy logic
row = rowData[100]
return {'data': row}
if __name__ == "__main__":
app.run(debug=True)
```
| closed | 2024-11-19T18:32:13Z | 2024-11-20T12:23:17Z | https://github.com/plotly/dash/issues/3085 | [] | georgiossalon | 2 |
google-research/bert | nlp | 944 | NotFoundError: ./dev.tsv | When I'm running `run_classifier.py`, even though the `do_eval` is `false`, this class still looks for the `dev.tsv` file. This happens when I change the batch size from 32, which is the default, to 16 (since I don't have enough memory on my GPU, I'm just trying to reduce the batch size to see if I can train my model.) I'm wondering why this is happening? I don't want to create a dev set. | open | 2019-12-01T05:21:48Z | 2019-12-01T05:21:48Z | https://github.com/google-research/bert/issues/944 | [] | phosseini | 0 |
falconry/falcon | api | 2,075 | Ship cythonized `.c` files in sdist | As suggested by @koobs in https://github.com/falconry/falcon/issues/1281, it seems we closed that issue without a decision on this part.
This would let us drop `cython` from the PEP 517 build requirements. While not a very big/common issue, we are currently unable to fall back to pure Python if `cython` installation itself fails as part of the PEP 517 build process.
We don't want to have `.c` files in the source tree, but we want to produce them in the CI gate responsible for uploading `sdist`. | open | 2022-05-30T13:11:00Z | 2023-01-08T16:07:26Z | https://github.com/falconry/falcon/issues/2075 | [
"proposal",
"maintenance"
] | vytas7 | 3 |
chiphuyen/stanford-tensorflow-tutorials | tensorflow | 124 | how to edit stanford-tensorflow-tutorials/2017/examples/09_tfrecord_example.py to read multiple images? | Looping the process of writing images into the .tfrecords-file works fine, but how do I read multiple images from a .tfrecords-file?
Is there any simple solution? It would be great if this were added to the code. | closed | 2018-07-03T11:26:31Z | 2020-08-31T19:03:40Z | https://github.com/chiphuyen/stanford-tensorflow-tutorials/issues/124 | [] | LarsFichtel | 0 |
mars-project/mars | scikit-learn | 3,216 | [BUG] Ray task backend no progress | <!--
Thank you for your contribution!
Please review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue.
-->
**Describe the bug**
The Ray task mode doesn't update progress until the whole task has finished:

**To Reproduce**
To help us reproduce this bug, please provide the information below:
1. Your Python version: 3.7.9
2. The version of Mars you use: master
3. Versions of crucial packages, such as numpy, scipy and pandas
4. Full stack of the error.
5. Minimized code to reproduce the error.
```python
def test_groupby(n=10):
from datetime import datetime
start = datetime.now()
df = md.DataFrame(
mt.random.rand(n * 500, 4, chunk_size=500),
columns=list('abcd'))
# print(df.sum().execute())
result = df.groupby(['a']).apply(lambda pdf: pdf).execute()
duration = datetime.now() - start
return result, duration
mars.new_session(n_worker=10, n_cpu=10*2, backend="ray")
test_groupby(200)
```
| closed | 2022-08-09T11:18:21Z | 2022-11-10T07:24:47Z | https://github.com/mars-project/mars/issues/3216 | [] | chaokunyang | 2 |
seleniumbase/SeleniumBase | web-scraping | 3,586 | `sb.wait_for_text_not_visible()` wasn't mapping to the correct CDP Mode method | ### `sb.wait_for_text_not_visible()` wasn't mapping to the correct CDP Mode method
----
This would cause failures in CDP Mode when calling regular SB methods directly. | closed | 2025-03-05T17:31:03Z | 2025-03-05T18:25:39Z | https://github.com/seleniumbase/SeleniumBase/issues/3586 | [
"bug",
"UC Mode / CDP Mode"
] | mdmintz | 1 |
paperless-ngx/paperless-ngx | django | 8,211 | [BUG] Reprocess do not trigger the post-consumption script | ### Description
The Action "Preprocess" will just do OCR stuff, but don't reprocess the file like it is added newly in paperless.
My wish is, that it execute the post consumption script also, after doing the OCR tasks.
### Steps to reproduce
When I open a pdf file and select
"Action" -> "Preprocess"
### Webserver logs
```bash
[2024-11-06 15:24:31,870] [INFO] [paperless.parsing.tesseract] pdftotext exited 0
[2024-11-06 15:24:32,673] [DEBUG] [paperless.parsing.tesseract] Calling OCRmyPDF with args: {'input_file': PosixPath('/usr/src/paperless/media/documents/originals/o_Ärzte/2024/ pdfname.pdf'), 'output_file': PosixPath('/tmp/paperless/paperless-xsh6e13u/archive.pdf'), 'use_threads': True, 'jobs': 4, 'language': 'deu', 'output_type': 'pdfa', 'progress_bar': False, 'color_conversion_strategy': 'RGB', 'skip_text': True, 'clean': True, 'deskew': True, 'rotate_pages': True, 'rotate_pages_threshold': 12.0, 'sidecar': PosixPath('/tmp/paperless/paperless-xsh6e13u/sidecar.txt')}
[2024-11-06 15:24:38,183] [INFO] [ocrmypdf._pipeline] page is facing ⇧, confidence 17.99 - rotation appears correct
[2024-11-06 15:24:55,737] [INFO] [ocrmypdf._pipelines.ocr] Postprocessing...
[2024-11-06 15:24:57,308] [INFO] [ocrmypdf._pipeline] Image optimization ratio: 1.24 savings: 19.5%
[2024-11-06 15:24:57,309] [INFO] [ocrmypdf._pipeline] Total file size ratio: 1.70 savings: 41.2%
[2024-11-06 15:24:57,316] [INFO] [ocrmypdf._pipelines._common] Output file is a PDF/A-2B (as expected)
[2024-11-06 15:24:57,986] [DEBUG] [paperless.parsing.tesseract] Using text from sidecar file
[2024-11-06 15:24:57,989] [DEBUG] [paperless.parsing] Execute: convert -density 300 -scale 500x5000> -alpha remove -strip -auto-orient -define pdf:use-cropbox=true /tmp/paperless/paperless-xsh6e13u/archive.pdf[0] /tmp/paperless/paperless-xsh6e13u/convert.webp
[2024-11-06 15:25:01,386] [INFO] [paperless.parsing] convert exited 0
[2024-11-06 15:25:01,475] [INFO] [paperless.tasks] Updating index for document 329 (8f600c212036a77eab7aa580eda62484)
[2024-11-06 15:25:01,696] [DEBUG] [paperless.parsing.tesseract] Deleting directory /tmp/paperless/paperless-xsh6e13u
```
### Browser logs
_No response_
### Paperless-ngx version
2.13.1
### Host OS
Raspberry pi aarch64
### Installation method
Docker - official image
### System status
```json
{
"pngx_version": "2.13.1",
"server_os": "Linux-6.1.0-rpi6-rpi-v8-aarch64-with-glibc2.36",
"install_type": "docker",
"storage": {
"total": 246031364096,
"available": 174735343616
},
"database": {
"type": "postgresql",
"url": "paperless",
"status": "OK",
"error": null,
"migration_status": {
"latest_migration": "documents.1056_customfieldinstance_deleted_at_and_more",
"unapplied_migrations": []
}
},
"tasks": {
"redis_url": "redis://broker:6379",
"redis_status": "OK",
"redis_error": null,
"celery_status": "OK",
"index_status": "OK",
"index_last_modified": "2024-11-06T15:25:01.686282+01:00",
"index_error": null,
"classifier_status": "OK",
"classifier_last_trained": "2024-11-06T14:05:02.670081Z",
"classifier_error": null
}
}
```
### Browser
Safari
### Configuration changes
docker-compose.yml
...
environment:
PAPERLESS_POST_CONSUME_SCRIPT: /usr/src/paperless/scripts/post-consumption.sh
...
### Please confirm the following
- [X] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation.
- [X] This issue is not about the OCR or archive creation of a specific file(s). Otherwise, please see above regarding OCR tools.
- [X] I have already searched for relevant existing issues and discussions before opening this report.
- [X] I have updated the title field above with a concise description. | closed | 2024-11-06T14:35:46Z | 2024-12-07T03:16:39Z | https://github.com/paperless-ngx/paperless-ngx/issues/8211 | [
"not a bug"
] | slydlake | 2 |
holoviz/panel | matplotlib | 7,596 | Plotly fig with rangeslider of type 'date' freezes app if controls are shown | <!--
Thanks for contacting us! Please read and follow these instructions carefully, then you can delete this introductory text. Note that the issue tracker is NOT the place for usage questions and technical assistance; post those at [Discourse](https://discourse.holoviz.org) instead. Issues without the required information below may be closed immediately.
-->
#### ALL software version info
<details>
<summary>Software Version Info</summary>
Python 3.12.4
```plaintext
panel==1.5.5
plotly==5.24.1
bokeh==3.6.2
```
</details>
#### Description of expected behavior and the observed behavior
Dragging the range slider on a Plotly fig freezes the app, but only if the x-axis type is 'date' and the controls are being rendered.
#### Complete, minimal, self-contained example code that reproduces the issue
```python
import panel as pn
import plotly.graph_objects as go
fig = go.Figure()
fig.update_layout(xaxis=dict(rangeslider=dict(visible=True), type="date"))
pt = pn.pane.Plotly(fig)
col = pn.Column(pt, pt.controls())
col.servable()
```
#### Screenshots or screencasts of the bug in action

- [ ] I may be interested in making a pull request to address this
| open | 2025-01-06T21:10:55Z | 2025-01-20T21:46:08Z | https://github.com/holoviz/panel/issues/7596 | [] | CodyKlingler | 1 |
aiortc/aioquic | asyncio | 68 | Certificate generation documentation? | Hi,
I am prototyping QUIC for a mobile application, thanks for making aioquic available.
Question: I am getting a certificate error even after hardcoding --host with the example http3 code. I am assuming that's because there is a mismatch between localhost and the external server address I am communicating with (I'm not using localhost for development). Are there any documentation resources on how to generate the various certificates needed for that, if that is indeed the issue?
Here's the error:
```
2020-03-20 11:44:58,924 INFO quic [1dee7d339857a8fa] Connection close code 0x12A, reason hostname 'test.us-east-2.compute.amazonaws.com' doesn't match 'localhost'
```
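(For reference: the log above shows the certificate was issued for `localhost`, the classic sign of a self-signed development certificate. Regenerating it with the server's real hostname in the subject / subjectAltName, e.g. via OpenSSL's `req -x509 ... -subj "/CN=<your hostname>"`, or disabling certificate verification on the client during development, should clear it; whether the bundled example client exposes an insecure flag is worth checking in its `--help`.)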
Thanks in advance! | closed | 2020-03-20T12:05:17Z | 2020-03-22T09:37:47Z | https://github.com/aiortc/aioquic/issues/68 | [] | pablogranolabar | 1 |
neuml/txtai | nlp | 100 | Infer vector method using path | Currently, if a vector method is not provided with a new Embeddings object, it defaults to Word Vectors. Word Vectors are no longer installed in the base package (see #97), so this is no longer the best logic.
This change will infer the method using the path. New logic should be:
- If path is a file, set the method to use Word Vectors.
- Otherwise, the method will be set to use Transformers models.
This should work in almost all cases and the method should no longer need to be explicitly set.
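A minimal sketch of that rule (the method names "words" and "transformers" are assumptions about txtai's config values):
```python
import os

def infer_method(path):
    # Word vector models are single files on disk; transformer models are
    # directories or hub model names, i.e. anything that isn't a file.
    return "words" if path and os.path.isfile(path) else "transformers"
```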
| closed | 2021-08-15T17:33:37Z | 2021-08-15T17:52:58Z | https://github.com/neuml/txtai/issues/100 | [] | davidmezzetti | 0 |
aio-libs/aiopg | sqlalchemy | 186 | mogrify doesn't need to be a coroutine | Is there a good reason why [mogrify](https://github.com/aio-libs/aiopg/blob/fb5f28505debafd7b7f5961314dd6a9b28f3d616/aiopg/cursor.py#L149-L160) is a coroutine?
It's likely to be used in "bulk create" operations where it'll be called a lot of times, so making it a plain function would be a marginal performance improvement.
| closed | 2016-10-17T18:10:48Z | 2017-02-16T11:16:23Z | https://github.com/aio-libs/aiopg/issues/186 | [] | samuelcolvin | 3 |
pytest-dev/pytest-mock | pytest | 212 | When tests are duplicated they fail | I am not easily able to create something reproducible, but I think this is related to a bug: it seems like the `MockFixture` is somehow persisting mocked methods between tests.
pytest mock version: pytest-mock-3.1.1
pytest version: 5.4.3
Python version: Python 3.8.2
If I duplicate the following method and change the name I would expect it to pass but it actually fails as none of the asserted methods are called:
```
def test_get_sites_cold_old(mocker, event_for_get_sites, context, site_report_data):
    print(f"TYPE: {type(mocker)}")
    ddb_table = "strategy_table"
    expected_response = "SqlServer"
    mock_connection = {}
    os_env = {"DDB_DA_STRATEGY": ddb_table}
    mocker.patch.dict(os.environ, os_env)
    ddb_response = {"Item": {"Strategy": {"S": expected_response}}}
    mocker.patch.object(site_dao.ddb_client, "get_item")
    site_dao.ddb_client.get_item.return_value = ddb_response
    mocker.patch.object(site_dao.SiteListDaoMSSQLImpl, "get_connection")
    site_dao.SiteListDaoMSSQLImpl.get_connection.return_value = mock_connection
    mocker.patch.object(site_dao.SiteListDaoMSSQLImpl, "get_customer_sites")
    site_dao.SiteListDaoMSSQLImpl.get_customer_sites.return_value = site_report_data
    response = site_api.get_customer_sites(event_for_get_sites, context)
    assert len(response) == 1
    _assert_site(response[0])
    site_dao.ddb_client.get_item.assert_called_once_with(
        TableName=ddb_table, Key={"ServiceName": {"S": "Site"}}
    )
    site_dao.SiteListDaoMSSQLImpl.get_connection.assert_called_once_with()
    site_dao.SiteListDaoMSSQLImpl.get_customer_sites.assert_called_once_with(
        mock_connection, event_for_get_sites.get("customer_id")
    )
```
If I comment this one out and run, the original passes, and vice versa, which really makes no sense as the tests are separate, and as far as I read in the documentation the mocks are supposed to be automatically reset between tests?
I have noticed this issue across my test suite since I first detected it this morning, and in some instances unrelated tests fail because they seem to use pre mocked objects, and if I swap the order of those tests around they pass...
Also interesting is that even though none of the asserted method calls pass, the result is actually returned and the assertions on the mocked data DO pass. | closed | 2020-10-16T12:32:09Z | 2020-10-16T13:50:21Z | https://github.com/pytest-dev/pytest-mock/issues/212 | [] | paulalex | 4 |
matplotlib/matplotlib | data-visualization | 29,101 | [Doc]: ax.scatter `alpha` also supports an array-like of floats | ### Documentation Link
https://matplotlib.org/devdocs/api/_as_gen/matplotlib.axes.Axes.scatter.html
### Problem
Aside from float, ax.scatter `alpha` also supports an array-like of floats.
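A minimal example of the behavior (a sketch; per-point alpha for `scatter` works on recent Matplotlib versions):
```python
import numpy as np
import matplotlib.pyplot as plt

x = np.arange(10)
# One alpha value per point, fading from nearly transparent to opaque.
plt.scatter(x, x, alpha=np.linspace(0.1, 1.0, len(x)))
plt.show()
```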
### Suggested improvement
It would be useful if this was documented, potentially including an example. | closed | 2024-11-08T08:43:24Z | 2024-12-14T02:45:14Z | https://github.com/matplotlib/matplotlib/issues/29101 | [
"Documentation"
] | EwoutH | 4 |
marcomusy/vedo | numpy | 98 | tests fail: vtkOpenGLPolyDataMapper has no attribute 'SetArrayName' | Building and testing vtkplotter 2020.0.2 on Debian GNU/Linux (2020.0.2+dfsg1-1), the tests fail, e.g.
```
tests/common$ ./run_all.sh
Processing test_actors.py script..
Traceback (most recent call last):
  File "test_actors.py", line 11, in <module>
    cone.addCellScalars(carr, 'carr')
  File "/usr/lib/python3/dist-packages/vtkplotter/base.py", line 714, in addCellScalars
    self._mapper.SetArrayName(name)
AttributeError: 'vtkRenderingOpenGL2Python.vtkOpenGLPolyDataMapper' object has no attribute 'SetArrayName'
```
Similar errors with `vtkOpenGLPolyDataMapper object has no attribute 'SetArrayName'` show up when trying to run the tests in tests/dolfin.
vtkplotter has been built against python3-vtk7. | open | 2020-01-22T04:21:38Z | 2020-01-22T11:22:53Z | https://github.com/marcomusy/vedo/issues/98 | [
"bug"
] | drew-parsons | 1 |
recommenders-team/recommenders | machine-learning | 2,064 | [BUG] NameError in ImplicitCF | ### Description
```
2024-02-19T18:34:57.2553239Z @pytest.mark.gpu
2024-02-19T18:34:57.2568702Z def test_model_lightgcn(deeprec_resource_path, deeprec_config_path):
2024-02-19T18:34:57.2569997Z data_path = os.path.join(deeprec_resource_path, "dkn")
2024-02-19T18:34:57.2571489Z yaml_file = os.path.join(deeprec_config_path, "lightgcn.yaml")
2024-02-19T18:34:57.2572675Z user_file = os.path.join(data_path, r"user_embeddings.csv")
2024-02-19T18:34:57.2573837Z item_file = os.path.join(data_path, r"item_embeddings.csv")
2024-02-19T18:34:57.2574736Z
2024-02-19T18:34:57.2575330Z df = movielens.load_pandas_df(size="100k")
2024-02-19T18:34:57.2576298Z train, test = python_stratified_split(df, ratio=0.75)
2024-02-19T18:34:57.2577150Z
2024-02-19T18:34:57.2577757Z > data = ImplicitCF(train=train, test=test)
2024-02-19T18:34:57.2578784Z E NameError: name 'ImplicitCF' is not defined
2024-02-19T18:34:57.2579411Z
2024-02-19T18:34:57.2580002Z tests/smoke/recommenders/recommender/test_deeprec_model.py:251: NameError
```
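(A `NameError` this shallow usually just means an import went missing from the test module; presumably something like `from recommenders.models.deeprec.DataModel.ImplicitCF import ImplicitCF` was dropped or renamed - the exact module path here is an assumption to verify against the repo.)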
### How do we replicate the issue?
https://github.com/recommenders-team/recommenders/actions/runs/7963399372/job/21738878728
### Other Comments
FYI @SimonYansenZhao this is similar to the issue we got in #2022 | closed | 2024-02-19T19:30:03Z | 2024-04-05T14:07:32Z | https://github.com/recommenders-team/recommenders/issues/2064 | [
"bug"
] | miguelgfierro | 3 |
NullArray/AutoSploit | automation | 1,227 | Unhandled Exception (3d888a9f4) | Autosploit version: `2.2.3`
OS information: `Linux-3.18.71-perf-gbb71efc-armv8l-with-libc`
Running context: `/data/data/com.thecrackertechnology.andrax/ANDRAX/AutoSploit/autosploit.py`
Error message: `[Errno 2] No such file or directory: ''`
Error traceback:
```
Traceback (most recent call):
File "/data/data/com.thecrackertechnology.andrax/ANDRAX/AutoSploit/autosploit/main.py", line 123, in main
terminal.terminal_main_display(loaded_exploits)
File "/data/data/com.thecrackertechnology.andrax/ANDRAX/AutoSploit/lib/term/terminal.py", line 331, in terminal_main_display
self.custom_host_list(loaded_mods)
File "/data/data/com.thecrackertechnology.andrax/ANDRAX/AutoSploit/lib/term/terminal.py", line 277, in custom_host_list
self.exploit_gathered_hosts(mods, hosts=provided_host_file)
File "/data/data/com.thecrackertechnology.andrax/ANDRAX/AutoSploit/lib/term/terminal.py", line 217, in exploit_gathered_hosts
host_file = open(hosts).readlines()
IOError: [Errno 2] No such file or directory: ''
```
Metasploit launched: `False`
| closed | 2019-12-29T17:23:03Z | 2020-02-02T01:20:00Z | https://github.com/NullArray/AutoSploit/issues/1227 | [] | AutosploitReporter | 0 |
Johnserf-Seed/TikTokDownload | api | 16 | File "/TikTokDownload/TikTokMulti.py", line 106, in judge_link key = re.findall('&sec_uid=(.*?)&',str(r.url))[0] IndexError: list index out of range | 
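A defensive rewrite of the failing line (a sketch grounded only in the traceback above; the variable names come from that line):
```python
matches = re.findall('&sec_uid=(.*?)&', str(r.url))
if not matches:
    # The share link did not resolve to a URL containing sec_uid.
    raise ValueError(f"no sec_uid found in resolved URL: {r.url}")
key = matches[0]
```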
| closed | 2021-06-10T02:55:17Z | 2021-06-10T12:12:15Z | https://github.com/Johnserf-Seed/TikTokDownload/issues/16 | [] | kukupigs | 0 |
wkentaro/labelme | deep-learning | 517 | How to generate occluded segments in annotations.json | How to generate occluded segments in annotations.json
examples/instance_segmentation/data_dataset_coco/annotations.json
I tried using labelme as in the example below:
labelme data_annotated --labels labels.txt --nodata --labelflags '{.*: [occluded, truncated], person-\d+: [male]}'
and labeled person-1 and person-2 with the occluded flag checked.
However, in the generated annotations.json, multiple segments are not generated by labelme2coco.py.
In my annotations.json, person-1 and person-2 are separate segment fields. | closed | 2019-11-22T11:03:24Z | 2019-12-06T08:59:41Z | https://github.com/wkentaro/labelme/issues/517 | [] | sonozuka | 3 |
dunossauro/fastapi-do-zero | pydantic | 254 | Slides for the asynchronous lessons v2 | Update the lesson slides to v2
- [x] Recreate a Lesson 00 for async
- [x] Lesson 01
- [x] Python version 3.12
- [x] fastapi[standard]
- [x] Lesson 02
- [x] Lesson 03: Remove pydantic-email
- [x] Lesson 04
- [x] Add the event to the table of contents
- [x] Add the event code samples
- [x] Add a new exercise
- [x] Lesson 05: Add new topics about PUT
- [x] Lesson 06
- [x] Remove the multipart installation
- [x] New exercises
- [x] Lesson 07: Add topics on query parameters with pydantic
- [x] Lesson 08
- [x] Add a new exercise
- [x] Lesson 09
- [x] The GET endpoint now uses a pydantic model for the query
- [x] New exercises
- [x] Lesson 10
- [x] Lesson 11
- [x] Lesson 12
- [x] Lesson 13
- [x] Remove the delivery date
- [x] Comparison of the code - **general review** | closed | 2024-10-05T18:04:49Z | 2024-10-07T01:32:34Z | https://github.com/dunossauro/fastapi-do-zero/issues/254 | [] | dunossauro | 1 |
harry0703/MoneyPrinterTurbo | automation | 50 | Language | How do I change the language?
Can you please explain how to deploy to a Hugging Face Space?
Is there a way for the application to keep running even when I am offline? | closed | 2024-03-24T16:53:12Z | 2024-03-29T12:56:29Z | https://github.com/harry0703/MoneyPrinterTurbo/issues/50 | [
"suggestion"
] | enterprisium | 5 |
iperov/DeepFaceLive | machine-learning | 38 | Excuse me, would a Linux networking version be useful? I don't think Windows can be commercialized as well. I have adapted a headless Linux version. | Excuse me, would a Linux networking version be useful? I don't think Windows can be commercialized as well. I have already adapted a headless (no-GUI) Linux version. | closed | 2022-02-15T07:36:42Z | 2022-07-25T07:29:32Z | https://github.com/iperov/DeepFaceLive/issues/38 | [] | baixinping618 | 2 |
miguelgrinberg/Flask-Migrate | flask | 125 | Migration cannot read database file inside a folder | I have been trying to set up migration for my app, but when the database file is inside a folder I get the following traceback: http://pastebin.com/tv8PiNAD. When I move the file outside the folder it seems to work. Any help?
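A common culprit with exactly this symptom (an assumption, since the pastebin isn't quoted here): a relative SQLite URI resolves against the current working directory, so a database inside a subfolder is only found when the commands run from the right place. An absolute path avoids it, e.g.:
```python
import os

basedir = os.path.abspath(os.path.dirname(__file__))
# Hypothetical layout: the .db file lives in a "data" subfolder of the app.
SQLALCHEMY_DATABASE_URI = "sqlite:///" + os.path.join(basedir, "data", "app.db")
```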
| closed | 2016-08-11T19:54:32Z | 2019-01-13T22:21:35Z | https://github.com/miguelgrinberg/Flask-Migrate/issues/125 | [
"question",
"auto-closed"
] | draedarockstar | 8 |
OpenInterpreter/open-interpreter | python | 1,201 | Incorrect formatting & syntax error for vertex ai models | ### Describe the bug
When using vertex ai models, the interpreter does not format the code appropriately and leaves the trailing \``` which causes syntax errors.
### Reproduce
Run the terminal interpreter with any vertex AI model, for example
```
interpreter --model codechat-bison@001
```
Then prompt the model to write code to execute a task.
```
Use python to Open Chrome and go to YouTube.
```
```
Sure, I can do that. Here is a plan:
 1. Open Chrome
 2. Go to YouTube
Here is the code:
/```
# Open Chrome
computer.browser.open("https://www.youtube.com")
# Go to YouTube
computer.browser.find_element_by_id("search_form").click()
computer.browser.find_element_by_id("search_input").send_keys("cats")
computer.browser.find_element_by_id("search_button").click
Would you like to run this code? (y/n)
```
### Expected behavior
The agent's code should be formatted without any "```" characters.
# Open Chrome
computer.browser.open("https://www.youtube.com")
### Screenshots
<img width="1061" alt="Screenshot 2024-04-12 at 1 39 31 PM" src="https://github.com/OpenInterpreter/open-interpreter/assets/34043765/3fd42eec-d482-40c3-bff7-66296b94148b">
### Open Interpreter version
0.2.4
### Python version
3.10
### Operating System name and version
MacOs 14.4.1
### Additional context
I tried this with several other Vertex AI models; it seems to be a similar problem for the others. | open | 2024-04-12T17:45:03Z | 2024-04-12T17:45:03Z | https://github.com/OpenInterpreter/open-interpreter/issues/1201 | [] | deleomike | 0 |
tensorpack/tensorpack | tensorflow | 1,152 | MemoryError when train FasterRCNN example with Mapillary Dataset, 4 x NVIDIA Tesla T4 |
### 1. What you did:
TRAIN FasterRCNN network with Mapillary Dataset
(1) **If you're using examples, what's the command you run:**
python3 train.py --config MODE_MASK=True MODE_FPN=True 'DATA.BASEDIR=/home/federicolondon2019/tensorpack/COCO/DIR' 'BACKBONE.WEIGHTS=/home/federicolondon2019/tensorpack/models/ImageNet-R50-AlignPadding.npz'
(2) **If you're using examples, have you made any changes to the examples? Paste `git status; git diff` here:**
I changed dataset.py and config.py.
DATASET.PY: (from line 17 to 32)
Original:
```python
class COCODetection:
# handle the weird (but standard) split of train and val
_INSTANCE_TO_BASEDIR = {
'valminusminival2014': 'val2014',
'minival2014': 'val2014',
}
COCO_id_to_category_id = {1: 1, 2: 2, 3: 3, 4: 4, 5: 5, 6: 6, 7: 7, 8: 8, 9: 9, 10: 10, 11: 11, 13: 12, 14: 13, 15: 14, 16: 15, 17: 16, 18: 17, 19: 18, 20: 19, 21: 20, 22: 21, 23: 22, 24: 23, 25: 24, 27: 25, 28: 26, 31: 27, 32: 28, 33: 29, 34: 30, 35: 31, 36: 32, 37: 33, 38: 34, 39: 35, 40: 36, 41: 37, 42: 38, 43: 39, 44: 40, 46: 41, 47: 42, 48: 43, 49: 44, 50: 45, 51: 46, 52: 47, 53: 48, 54: 49, 55: 50, 56: 51, 57: 52, 58: 53, 59: 54, 60: 55, 61: 56, 62: 57, 63: 58, 64: 59, 65: 60, 67: 61, 70: 62, 72: 63, 73: 64, 74: 65, 75: 66, 76: 67, 77: 68, 78: 69, 79: 70, 80: 71, 81: 72, 82: 73, 84: 74, 85: 75, 86: 76, 87: 77, 88: 78, 89: 79, 90: 80} # noqa
"""
Mapping from the incontinuous COCO category id to an id in [1, #category]
For your own dataset, this should usually be an identity mapping.
"""
# 80 names for COCO
class_names = [
"person", "bicycle", "car", "motorcycle", "airplane", "bus", "train", "truck", "boat", "traffic light", "fire hydrant", "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep", "cow", "elephant", "bear", "zebra", "giraffe", "backpack", "umbrella", "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove", "skateboard", "surfboard", "tennis racket", "bottle", "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana", "apple", "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair", "couch", "potted plant", "bed", "dining table", "toilet", "tv", "laptop", "mouse", "remote", "keyboard", "cell phone", "microwave", "oven", "toaster", "sink", "refrigerator", "book", "clock", "vase", "scissors", "teddy bear", "hair drier", "toothbrush"] # noqa
```
Now:
```python
class COCODetection:
# handle the weird (but standard) split of train and val
_INSTANCE_TO_BASEDIR = {
'valminusminival2014': 'val2017',
'minival2014': 'val2017',
}
COCO_id_to_category_id = {1: 1, 2: 2, 3: 3, 4: 4, 5: 5, 6: 6, 7: 7, 8: 8, 9: 9, 10: 10, 11: 11, 12: 12, 13: 13, 14: 14, 15: 15, 16: 16, 17: 17, 18: 18, 19: 19, 20: 20, 21: 21, 22: 22, 23: 23, 24: 24, 25: 25, 26: 26, 27: 27, 28: 28, 29: 29, 30: 30, 31: 31, 32: 32, 33: 33, 34: 34, 35: 35, 36: 36, 37: 37, 38: 38} # noqa
"""
Mapping from the incontinuous COCO category id to an id in [1, #category]
For your own dataset, this should usually be an identity mapping.
"""
# 80 names for COCO
class_names = [
'BG', 'Bird', 'Ground_Animal', 'Crosswalk_Plain', 'Person', 'Bicyclist', 'Motorcyclist', 'Other_Rider', 'Lane_Marking_-_Crosswalk', 'Banner', 'Bench', 'Bike_Rack', 'Billboard', 'Catch_Basin', 'CCTV_Camera', 'Fire_Hydrant', 'Junction_Box', 'Mailbox', 'Manhole', 'Phone_Booth', 'Street_Light', 'Pole', 'Traffic_Sign_Frame', 'Utility_Pole', 'Traffic_Light', 'Traffic_Sign_(Back)', 'Traffic_Sign_(Front)', 'Trash_Can', 'Bicycle', 'Boat', 'Bus', 'Car', 'Caravan', 'Motorcycle', 'Other_Vehicle', 'Trailer', 'Truck', 'Wheeled_Slow'] #
```
CONFIG.PY (lines 82 to 96):
Original:
```
# dataset -----------------------
_C.DATA.BASEDIR = '/path/to/your/DATA/DIR'
# All TRAIN dataset will be concatenated for training.
_C.DATA.TRAIN = ('train2014', 'valminusminival2014') # i.e. trainval35k, AKA train2017
# Each VAL dataset will be evaluated separately (instead of concatenated)
_C.DATA.VAL = ('minival2014', ) # AKA val2017
# This two config will be populated later by the dataset loader:
_C.DATA.NUM_CATEGORY = 0 # without the background class (e.g., 80 for COCO)
_C.DATA.CLASS_NAMES = [] # NUM_CLASS (NUM_CATEGORY+1) strings, the first is "BG".
# whether the coordinates in the annotations are absolute pixel values, or a relative value in [0, 1]
_C.DATA.ABSOLUTE_COORD = True
_C.DATA.NUM_WORKERS = 5 # number of data loading workers
# basemodel ----------------------
_C.BACKBONE.WEIGHTS = '' # /path/to/weights.npz
```
Now:
```
# dataset -----------------------
_C.DATA.BASEDIR = '/home/federicolondon2019/tensorpack/COCO/DIR'
# All TRAIN dataset will be concatenated for training.
_C.DATA.TRAIN = ('train2017') # i.e. trainval35k, AKA train2017
# Each VAL dataset will be evaluated separately (instead of concatenated)
_C.DATA.VAL = ('val2017') # AKA val2017
# This two config will be populated later by the dataset loader:
_C.DATA.NUM_CATEGORY = 37 # without the background class (e.g., 80 for COCO)
_C.DATA.CLASS_NAMES = ['BG', 'Bird', 'Ground_Animal', 'Crosswalk_Plain', 'Person', 'Bicyclist', 'Motorcyclist', 'Other_Rider', 'Lane_Marking_-_Crosswalk', 'Banner', 'Bench', 'Bike_Rack', 'Billboard', 'Catch_Basin', 'CCTV_Camera', 'Fire_Hydrant', 'Junction_Box', 'Mailbox', 'Manhole', 'Phone_Booth', 'Street_Light', 'Pole', 'Traffic_Sign_Frame', 'Utility_Pole', 'Traffic_Light', 'Traffic_Sign_(Back)', 'Traffic_Sign_(Front)', 'Trash_Can', 'Bicycle', 'Boat', 'Bus', 'Car', 'Caravan', 'Motorcycle', 'Other_Vehicle', 'Trailer', 'Truck', 'Wheeled_Slow'] # NUM_CLASS (NUM_CATEGORY+1) strings, the first is "BG".
# whether the coordinates in the annotations are absolute pixel values, or a relative value in [0, 1]
_C.DATA.ABSOLUTE_COORD = True
_C.DATA.NUM_WORKERS = 5 # number of data loading workers
# basemodel ----------------------
_C.BACKBONE.WEIGHTS = '/home/federicolondon2019/tensorpack/models/ImageNet-R50-AlignPadding.npz' # /path/to/weights.npz
```
[Mapillary_FasterRCNN.docx](https://github.com/tensorpack/tensorpack/files/3085597/Mapillary_FasterRCNN.docx)
(3) **If not using examples, tell us what you did:**
It's always better to copy-paste what you did than to describe it.
Please try to provide enough information to let others __reproduce__ your issue.
Without reproducing the issue, we may not be able to investigate it.
### 2. What you observed:
(1) **Include the ENTIRE logs here:**
I get a MemoryError when running train.py on Google Cloud with 4 Tesla T4 GPUs.
It's always better to copy-paste what you observed instead of describing it.
It's always better to paste **as much as possible**, although sometimes a partial log is OK.
Tensorpack typically saves stdout to its training log.
If stderr is relevant, you can run a command with `CMD 2>&1 | tee logs.txt`
to save both stdout and stderr to one file.
(2) **Other observations, if any:**
For example, CPU/GPU utilization, output images, tensorboard curves, if relevant to your issue.
I only included 5 images for training and 2 images for validation.
### 3. What you expected, if not obvious.
I expected training to run and produce a log.
If you expect higher speed, please read
http://tensorpack.readthedocs.io/tutorial/performance-tuning.html
before posting.
If you expect certain accuracy, only in one of the two conditions can we help with it:
(1) You're unable to reproduce the accuracy documented in tensorpack examples.
(2) It appears to be a tensorpack bug.
Otherwise, how to train a model to certain accuracy is a machine learning question.
We do not answer machine learning questions and it is your responsibility to
figure out how to make your models more accurate.
### 4. Your environment:
+ Python version: 3.5.3
+ TF version: 1.13.1
+ Tensorpack version: v0.9.4
You can install Tensorpack master by `pip install -U git+https://github.com/ppwwyyxx/tensorpack.git`
and see if your issue is already solved.
+ If you're not using tensorpack under a normal command line shell (e.g.,
using an IDE or jupyter notebook), please retry under a normal command line shell.
+ Hardware information, e.g. number of GPUs used.
You may often want to provide extra information related to your issue, but
at the minimum please try to provide the above information __accurately__ to save effort in the investigation.
Attached is the entire output from the run.
Thanks
[Mapillary_FasterRCNN.pdf](https://github.com/tensorpack/tensorpack/files/3085611/Mapillary_FasterRCNN.pdf)
| closed | 2019-04-16T15:26:17Z | 2019-04-25T16:18:16Z | https://github.com/tensorpack/tensorpack/issues/1152 | [
"examples",
"installation/environment"
] | AlbertoMCS | 8 |
lukas-blecher/LaTeX-OCR | pytorch | 159 | Model doesn't know how to use 'aligned' but always uses 'array' | The 'aligned' environment is frequently used in LaTeX. For example:
```latex
\begin{aligned}
y & =( x+y)^{2}\\
& =( x+y)( x+y)\\
& =x^{2} +2xy+y^{2}
\end{aligned}
```
<img width="101" alt="_20220602144601" src="https://user-images.githubusercontent.com/50736266/171569607-84367dc5-eb64-4ede-a76c-b6cec3d03c28.png">
However, I have tried many times in many cases, and the model always uses 'array' instead of 'aligned'. It would be great if it could learn how to use 'aligned'.
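For comparison, the `array`-based form the model tends to emit for the same input looks roughly like this (a hand-written reconstruction, not actual model output):
```latex
\begin{array}{ r l }
y & =( x+y)^{2}\\
 & =( x+y)( x+y)\\
 & =x^{2} +2xy+y^{2}
\end{array}
```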
| open | 2022-06-02T06:50:29Z | 2022-06-08T17:23:01Z | https://github.com/lukas-blecher/LaTeX-OCR/issues/159 | [
"bug",
"dataset"
] | Jie-Qiao | 2 |
Lightning-AI/LitServe | api | 339 | Prometheus logger is not picklable + monitoring metrics set via self.log are not tracked | ## 🐛 Bug
I get the following warning when using Prometheus inside of `ls.Logger`:
```
WARNING:litserve.loggers:Logger PrometheusLogger is not picklable and might not work properly.
```
Then the metrics that I am "observing" are not being tracked under the endpoint `/metrics` using `self.log` on the ls.LitAPI. These are the metrics I get:
```
# HELP python_gc_objects_collected_total Objects collected during gc
# TYPE python_gc_objects_collected_total counter
python_gc_objects_collected_total{generation="0"} 1933.0
python_gc_objects_collected_total{generation="1"} 638.0
python_gc_objects_collected_total{generation="2"} 100.0
# HELP python_gc_objects_uncollectable_total Uncollectable objects found during GC
# TYPE python_gc_objects_uncollectable_total counter
python_gc_objects_uncollectable_total{generation="0"} 0.0
python_gc_objects_uncollectable_total{generation="1"} 0.0
python_gc_objects_uncollectable_total{generation="2"} 0.0
# HELP python_gc_collections_total Number of times this generation was collected
# TYPE python_gc_collections_total counter
python_gc_collections_total{generation="0"} 679.0
python_gc_collections_total{generation="1"} 61.0
python_gc_collections_total{generation="2"} 5.0
# HELP python_info Python platform information
# TYPE python_info gauge
python_info{implementation="CPython",major="3",minor="11",patchlevel="4",version="3.11.4"} 1.0
# HELP http_server_requests_duration_seconds_total HTTP request latency in seconds
# TYPE http_server_requests_duration_seconds_total histogram
http_server_requests_duration_seconds_total_bucket{endpoint="/predict",le="0.005",method="POST",status_code="200"} 6.0
http_server_requests_duration_seconds_total_bucket{endpoint="/predict",le="0.01",method="POST",status_code="200"} 7.0
http_server_requests_duration_seconds_total_bucket{endpoint="/predict",le="0.025",method="POST",status_code="200"} 7.0
http_server_requests_duration_seconds_total_bucket{endpoint="/predict",le="0.05",method="POST",status_code="200"} 7.0
http_server_requests_duration_seconds_total_bucket{endpoint="/predict",le="0.075",method="POST",status_code="200"} 7.0
http_server_requests_duration_seconds_total_bucket{endpoint="/predict",le="0.1",method="POST",status_code="200"} 7.0
http_server_requests_duration_seconds_total_bucket{endpoint="/predict",le="0.25",method="POST",status_code="200"} 7.0
http_server_requests_duration_seconds_total_bucket{endpoint="/predict",le="0.5",method="POST",status_code="200"} 7.0
http_server_requests_duration_seconds_total_bucket{endpoint="/predict",le="0.75",method="POST",status_code="200"} 7.0
http_server_requests_duration_seconds_total_bucket{endpoint="/predict",le="1.0",method="POST",status_code="200"} 7.0
http_server_requests_duration_seconds_total_bucket{endpoint="/predict",le="2.5",method="POST",status_code="200"} 7.0
http_server_requests_duration_seconds_total_bucket{endpoint="/predict",le="5.0",method="POST",status_code="200"} 7.0
http_server_requests_duration_seconds_total_bucket{endpoint="/predict",le="7.5",method="POST",status_code="200"} 7.0
http_server_requests_duration_seconds_total_bucket{endpoint="/predict",le="10.0",method="POST",status_code="200"} 7.0
http_server_requests_duration_seconds_total_bucket{endpoint="/predict",le="+Inf",method="POST",status_code="200"} 7.0
http_server_requests_duration_seconds_total_count{endpoint="/predict",method="POST",status_code="200"} 7.0
http_server_requests_duration_seconds_total_sum{endpoint="/predict",method="POST",status_code="200"} 0.01563624886330217
http_server_requests_duration_seconds_total_bucket{endpoint="/metrics",le="0.005",method="GET",status_code="200"} 2.0
http_server_requests_duration_seconds_total_bucket{endpoint="/metrics",le="0.01",method="GET",status_code="200"} 2.0
http_server_requests_duration_seconds_total_bucket{endpoint="/metrics",le="0.025",method="GET",status_code="200"} 2.0
http_server_requests_duration_seconds_total_bucket{endpoint="/metrics",le="0.05",method="GET",status_code="200"} 2.0
http_server_requests_duration_seconds_total_bucket{endpoint="/metrics",le="0.075",method="GET",status_code="200"} 2.0
http_server_requests_duration_seconds_total_bucket{endpoint="/metrics",le="0.1",method="GET",status_code="200"} 2.0
http_server_requests_duration_seconds_total_bucket{endpoint="/metrics",le="0.25",method="GET",status_code="200"} 2.0
http_server_requests_duration_seconds_total_bucket{endpoint="/metrics",le="0.5",method="GET",status_code="200"} 2.0
http_server_requests_duration_seconds_total_bucket{endpoint="/metrics",le="0.75",method="GET",status_code="200"} 2.0
http_server_requests_duration_seconds_total_bucket{endpoint="/metrics",le="1.0",method="GET",status_code="200"} 2.0
http_server_requests_duration_seconds_total_bucket{endpoint="/metrics",le="2.5",method="GET",status_code="200"} 2.0
http_server_requests_duration_seconds_total_bucket{endpoint="/metrics",le="5.0",method="GET",status_code="200"} 2.0
http_server_requests_duration_seconds_total_bucket{endpoint="/metrics",le="7.5",method="GET",status_code="200"} 2.0
http_server_requests_duration_seconds_total_bucket{endpoint="/metrics",le="10.0",method="GET",status_code="200"} 2.0
http_server_requests_duration_seconds_total_bucket{endpoint="/metrics",le="+Inf",method="GET",status_code="200"} 2.0
http_server_requests_duration_seconds_total_count{endpoint="/metrics",method="GET",status_code="200"} 2.0
http_server_requests_duration_seconds_total_sum{endpoint="/metrics",method="GET",status_code="200"} 0.006937500089406967
# HELP http_server_requests_duration_seconds_total_created HTTP request latency in seconds
# TYPE http_server_requests_duration_seconds_total_created gauge
http_server_requests_duration_seconds_total_created{endpoint="/predict",method="POST",status_code="200"} 1.729620868354843e+09
http_server_requests_duration_seconds_total_created{endpoint="/metrics",method="GET",status_code="200"} 1.72962087676791e+09
# HELP request_processing_seconds Time spent processing request
# TYPE request_processing_seconds histogram
```
This is my implementation for the Logger:
```python
import litserve as ls
from prometheus_client import Histogram
class PrometheusLogger(ls.Logger):
    def __init__(self):
        super().__init__()
        self.function_duration = Histogram(
            "request_processing_seconds",
            "Time spent processing request",
            ["function_name"],
        )

    def process(self, key, value):
        self.function_duration.labels(function_name=key).observe(value)
```
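One workaround I sketched (untested; it assumes the metric can safely be created lazily inside each worker process, so the logger instance itself stays picklable):
```python
import litserve as ls
from prometheus_client import Histogram

class LazyPrometheusLogger(ls.Logger):
    def __init__(self):
        super().__init__()
        self._histogram = None  # defer creation; locks inside Histogram don't pickle

    def process(self, key, value):
        if self._histogram is None:  # first call inside the worker process
            self._histogram = Histogram(
                "request_processing_seconds",
                "Time spent processing request",
                ["function_name"],
            )
        self._histogram.labels(function_name=key).observe(value)
```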
### To Reproduce
I use the following running configuration:
```python
import litserve as ls
from prometheus_client import make_asgi_app
class CLIPModel(ls.LitAPI):
    # ... bunch of all the other methods including predict, setup, etc ...
    def get_image_embeddings(
        self,
        images,
        normalize_embedding: bool = True,
        model_version: str = "latest"
    ):
        start_time = time.perf_counter()
        with torch.no_grad():
            images = self.processor(images=images, return_tensors="pt").to(self.latest_model.device)
            if model_version == "latest":
                embedding = self.latest_model.get_image_features(**images)
            else:
                embedding = self.prev_model.get_image_features(**images)
        if normalize_embedding:
            embedding = self.normalize_embedding(embedding)
        self.log("get_image_embedding", time.perf_counter() - start_time)  # <- here I am using the PrometheusLogger
        return embedding

if __name__ == "__main__":
    prometheus_logger = monitoring.PrometheusLogger()
    prometheus_logger.mount(
        path="/metrics",
        app=make_asgi_app()
    )
    server = ls.LitServer(
        CLIPModel(),
        workers_per_device=1,
        middlewares=[monitoring.HTTPLatencyMiddleware],
        loggers=prometheus_logger,
        stream=True,
    )
    server.run(
        port=api_config.PORT,
        num_api_servers=1,
    )
```
where my `monitoring.HTTPLatencyMiddleware` is defined like this:
```python
import os
import time
from fastapi import Request
from prometheus_client import Histogram
from starlette.middleware.base import BaseHTTPMiddleware
ENDPOINT_LABEL = "endpoint"
STATUSCODE_LABEL = "status_code"
METHOD_LABEL = "method"
HTTP_REQUEST_LATENCY = Histogram(
    "http_server_requests_duration_seconds_total",
    "HTTP request latency in seconds",
    [ENDPOINT_LABEL, STATUSCODE_LABEL, METHOD_LABEL],
    # using default buckets
)

class HTTPLatencyMiddleware(BaseHTTPMiddleware):
    async def dispatch(self, request: Request, call_next):
        method = request.method
        endpoint = os.path.normpath(request.url.path)
        status_code = 200
        start_time = time.perf_counter()
        try:
            # Process the request
            response = await call_next(request)
            status_code = response.status_code
        except Exception as e:
            raise e
        finally:
            # Record metrics
            duration = time.perf_counter() - start_time
            HTTP_REQUEST_LATENCY.labels(
                method=method, endpoint=endpoint, status_code=status_code
            ).observe(duration)
        return response
```
Am I running the `ls.LitServer` logger and mounts correctly, or is there something wrong? I am following the docstrings from `ls.Logger`.
I use `prometheus-client==0.21.0` and `litserve==0.2.3`.
| closed | 2024-10-22T18:33:47Z | 2024-10-27T00:04:07Z | https://github.com/Lightning-AI/LitServe/issues/339 | [
"bug",
"help wanted"
] | miguelalba96 | 5 |
harry0703/MoneyPrinterTurbo | automation | 593 | [Annual Flagship] Top-tier IEPL dedicated-line proxy service, unlock your digital universe! | ### Is there an existing similar feature request?
- [x] I have searched the existing feature requests
### Pain point
Still using a run-of-the-mill proxy service? [**V-city (Wancheng Network)**](https://user.vcsite02.com/#/sign-up?code=0BnXRJud), an annual flagship-grade premium service, has arrived! With top-tier full-IEPL dedicated-line technology, we redefine speed and stability and take you into an unprecedented digital universe!
🌟 **Genuinely top-tier dedicated lines, blazingly fast**: Vancouver headquarters, backed by full-IEPL dedicated lines! Hong Kong 🇭🇰, Japan 🇯🇵, Singapore 🇸🇬, the US 🇺🇸, with niche nodes to choose from as well! Speed beyond the sky, say goodbye to all lag!
🎬 **Unlocks everything, enjoy global entertainment**: ChatGPT connects in seconds! Netflix, Disney+ and YouTube Premium in HD without strain! UDP game acceleration for smooth play worldwide! 4K video opens in seconds for an instant visual feast!
🛡️ **Enterprise-grade stability, extremely reliable**: An enterprise-grade cross-border dedicated-line service whose stability speaks for itself! Peak hours? Sensitive periods? No problem at all! Backup lines included to give you full peace of mind!
💰 **Limited-time perks, incredible value**: Personally tested and recommended by users, with solid quality! Right now there are explosive promotions and the value is off the charts 💥, cheaper than 90% of other services 💸. Missing it would be a real loss!
V-city is not just a proxy service; it is your super key to a free digital world! Join now, unlock your unlimited possibilities, and become a trendsetter of the digital world!
### Register as a new user via the link below to enjoy a no-threshold 10%-off coupon for everyone:
### [V-city official website link](https://user.vcsite02.com/#/sign-up?code=0BnXRJud)
### Suggested solution
Below is an evening peak-hour speed test:

### Useful resources
_No response_
### Other information
_No response_ | closed | 2025-02-19T02:57:49Z | 2025-02-19T03:37:42Z | https://github.com/harry0703/MoneyPrinterTurbo/issues/593 | [
"enhancement"
] | LorettaGreen3 | 0 |
OFA-Sys/Chinese-CLIP | computer-vision | 214 | Zero-shot classification issue | Hello authors,
CN-CLIP is great work! While reproducing the voc-2007-classification zero-shot inference, I found that the final inference performance cannot be aligned with the reported results. Below is my run output; when you have time, please take a look at the problem. Thank you.
```
Params:
context_length: 52
datapath: ******/Chinese-CLIP/content/datasets/voc-2007-classification/test
dataset: voc-2007-classification
img_batch_size: 64
index:
label_file: ******/Chinese-CLIP/content/datasets/voc-2007-classification/label_cn.txt
num_workers: 4
precision: amp
resume: ******/Chinese-CLIP/content/pretrained_weights/clip_cn_vit-h-14.pt
save_dir: ******/Chinese-CLIP/eval_result//voc-2007-classification
text_model: RoBERTa-wwm-ext-large-chinese
vision_model: ViT-H-14
Loading vision model config from cn_clip/clip/model_configs/ViT-H-14.json
Loading text model config from cn_clip/clip/model_configs/RoBERTa-wwm-ext-large-chinese.json
Preparing zeroshot dataset.
224
Begin to load model checkpoint from ******/Chinese-CLIP/content/pretrained_weights/clip_cn_vit-h-14.pt.
=> loaded checkpoint ******/Chinese-CLIP/content/pretrained_weights/clip_cn_vit-h-14.pt (epoch 7 @ 40000 steps)
Building zero-shot classifier
Using classifier
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:15<00:00, 1.28it/s]
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████| 78/78 [04:04<00:00, 3.13s/it]
torch.Size([4952, 20])
Result:
zeroshot-top1: 0.09268982229402262
Finished.
```
The test data is voc-2007-classification, downloaded from https://github.com/OFA-Sys/Chinese-CLIP/blob/master/zeroshot_dataset.md
The performance reported in the paper is:

| open | 2023-09-28T08:47:02Z | 2023-12-11T13:16:18Z | https://github.com/OFA-Sys/Chinese-CLIP/issues/214 | [] | cdy-for-grad | 1 |
opengeos/streamlit-geospatial | streamlit | 13 | streamlit.gishub.org U.S. Real Estate Data and Market Trends error | Hi on the hosted site the demo for us real estate is showing the following error

| closed | 2021-12-07T01:36:28Z | 2021-12-21T13:27:01Z | https://github.com/opengeos/streamlit-geospatial/issues/13 | [] | Niko-La | 1 |
pyeve/eve | flask | 1,277 | First example of Eve use doesn't work | ### Expected Behavior
This should work, as it's [the very first example of how Eve works](https://docs.python-eve.org/en/stable/#eve-is-simple):
```python
from eve import Eve
app = Eve()
```
### Actual Behavior
```pytb
Python 3.6.7 (default, Oct 22 2018, 11:32:17)
[GCC 8.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from eve import Eve
>>> app = Eve()
Traceback (most recent call last):
File "/home/sybren/workspace/cloud/eve/eve/flaskapp.py", line 319, in validate_domain_struct
domain = self.config["DOMAIN"]
KeyError: 'DOMAIN'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/sybren/workspace/cloud/eve/eve/flaskapp.py", line 161, in __init__
self.validate_domain_struct()
File "/home/sybren/workspace/cloud/eve/eve/flaskapp.py", line 321, in validate_domain_struct
raise ConfigException("DOMAIN dictionary missing or wrong.")
eve.exceptions.ConfigException: DOMAIN dictionary missing or wrong.
```
### Environment
* Python version: 3.6.7
* Eve version: 0.9.1
| closed | 2019-05-29T12:11:51Z | 2019-06-07T13:41:18Z | https://github.com/pyeve/eve/issues/1277 | [
"documentation"
] | sybrenstuvel | 1 |
pydata/xarray | pandas | 9,404 | Linear interpolation gives negative output values with non-negative inputs | ### What happened?
I have some time-series data that contains non-negative values (with a few zeroes). When I call the interp() method with method='linear', and pass a time corresponding exactly to one of the zero values, the result is a (tiny) negative value. This causes problems later on when I pass the interpolated data to another routine that requires non-negative inputs.
### What did you expect to happen?
I expected the interpolated data to contain zeroes rather than negative values where the input data had zeroes.
### Minimal Complete Verifiable Example
```Python
import xarray as xr
import numpy as np
time_strings_input = np.array(['2018-07-01T13:02:16.892474880',
'2018-07-01T13:02:16.922475008',
'2018-07-01T13:02:16.952474880'])
values_input = np.array([0.028584518, 0., 0.013626526],dtype=np.float32)
input_times_npdt64 = np.array([np.datetime64(t) for t in time_strings_input])
interp_to_times_npdt64 = np.array(input_times_npdt64[1])
input_times_float64 = input_times_npdt64.astype(np.float64)
interp_to_time_float64 = interp_to_times_npdt64.astype(np.float64)
data_array = xr.DataArray(values_input,dims=['time'],coords={'time':('time',input_times_float64)})
result = data_array.interp({"time": interp_to_time_float64},method='linear')
print(result.values)
# Expected output: 0.0
# Actual output: -3.469446951953614e-18
```
### MVCE confirmation
- [X] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
- [X] Complete example — the example is self-contained, including all data and the text of any traceback.
- [X] Verifiable example — the example copy & pastes into an IPython prompt or [Binder notebook](https://mybinder.org/v2/gh/pydata/xarray/main?urlpath=lab/tree/doc/examples/blank_template.ipynb), returning the result.
- [X] New issue — a search of GitHub Issues suggests this is not a duplicate.
- [X] Recent environment — the issue occurs with the latest version of xarray and its dependencies.
### Relevant log output
```Python
None
```
### Anything else we need to know?
This is actually an issue with scipy.interpolate.interp1d. See: https://github.com/scipy/scipy/issues/21459
A comment on that issue suggests using make_interp_spline() with k=1, which does give the desired output. numpy.interp() also gives the desired output.
It seems unlikely that scipy will fix the issue in interp1d, due to it being considered legacy code. Is there any chance that xarray might support the make_interp_spline way of doing it?
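For reference, a sketch of the two suggested workarounds applied to the MVCE arrays above (values per the linked scipy discussion, not a proposal for xarray's API):
```python
from scipy.interpolate import make_interp_spline
import numpy as np

# linear B-spline interpolation passes exactly through the data points
spline = make_interp_spline(input_times_float64, values_input, k=1)
print(spline(interp_to_time_float64))  # expected: 0.0

# numpy's linear interpolation also returns the knot value exactly
print(np.interp(interp_to_time_float64, input_times_float64, values_input))  # expected: 0.0
```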
### Environment
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 3.9.18 (main, Sep 11 2023, 08:25:10)
[Clang 14.0.6 ]
python-bits: 64
OS: Darwin
OS-release: 22.6.0
machine: arm64
processor: arm
byteorder: little
LC_ALL: None
LANG: None
LOCALE: ('en_US', 'UTF-8')
libhdf5: 1.12.2
libnetcdf: 4.9.3-development
xarray: 2024.7.0
pandas: 2.1.0
numpy: 1.25.2
scipy: 1.13.1
netCDF4: 1.6.4
pydap: None
h5netcdf: None
h5py: 3.10.0
zarr: None
cftime: 1.6.2
nc_time_axis: None
iris: None
bottleneck: None
dask: None
distributed: None
matplotlib: 3.7.3
cartopy: None
seaborn: None
numbagg: None
fsspec: None
cupy: None
pint: None
sparse: None
flox: None
numpy_groupies: None
setuptools: 65.5.1
pip: 22.3.1
conda: None
pytest: None
mypy: None
IPython: 8.18.1
sphinx: 7.3.7
</details>
| closed | 2024-08-27T18:58:53Z | 2024-09-27T00:52:14Z | https://github.com/pydata/xarray/issues/9404 | [
"bug",
"contrib-help-wanted",
"enhancement",
"topic-interpolation"
] | jameswilburlewis | 5 |
CorentinJ/Real-Time-Voice-Cloning | tensorflow | 1,053 | Synthesizer alignment | Hi, when I followed the steps in the training guide to train the model, I found some problems with the alignment of the synthesizer. The dataset I used was LibriSpeech; I followed the instructions exactly for each step and also downloaded the alignment file. I don't know what's wrong; any help would be greatly appreciated. Looking forward to your reply:


| open | 2022-04-15T01:35:31Z | 2022-05-06T06:31:33Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1053 | [] | Alterbort | 1 |
andrew-hossack/dash-tools | plotly | 71 | [Feature Request] Change created Dockerfiles to use a non-root user. | **Is your feature request related to a problem? Please describe.**
Dockerfiles generated by `dashtools docker --init` default to the base image default user (in this case root). Best practice is to run container processes as a non-root user.
**Describe the solution you'd like**
Update the process that generates Dockerfiles to create and use a non-root user.
**Describe alternatives you've considered**
Alternative would be to continue running as root user, but this is not advised and is also easy to prevent.
I will submit a PR to fix this Issue shortly.
| closed | 2022-11-07T23:33:23Z | 2022-11-11T02:20:41Z | https://github.com/andrew-hossack/dash-tools/issues/71 | [] | jasonwashburn | 0 |
bmoscon/cryptofeed | asyncio | 353 | KeyError 'openInterest' with Binance Delivery | **Describe the bug**
Error message...
```bash
Task exception was never retrieved
future: <Task finished coro=<Binance._open_interest() done, defined at /home/pierre/miniconda3/lib/python3.7/site-packages/cryptofeed-1.6.2-py3.7.egg/cryptofeed/exchange/binance.py:234> exception=KeyError('openInterest')>
Traceback (most recent call last):
File "/home/pierre/miniconda3/lib/python3.7/site-packages/cryptofeed-1.6.2-py3.7.egg/cryptofeed/exchange/binance.py", line 251, in _open_interest
oi = data['openInterest']
KeyError: 'openInterest'
```
**To Reproduce**
... obtained with the following config file.
```yaml
exchanges:
BINANCE_DELIVERY:
retries: -1
channel_timeouts:
open_interest: 90
open_interest: ['BTC-USD_PERP']
```
**Expected behavior**
No KeyError, but possibly a message indicating that OPEN_INTEREST channel is not implemented for BINANCE_DELIVERY.
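As an illustration only (not a tested patch; it assumes the per-pair polling loop in `binance.py` and its module-level logger), a defensive guard could look like:
```python
# inside Binance._open_interest, replacing `oi = data['openInterest']`
oi = data.get('openInterest')
if oi is None:
    LOG.warning("%s: open interest not available for this instrument", self.id)
    continue  # assumes the surrounding per-pair loop
```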
**Operating System:**
Ubuntu 18.04
**Cryptofeed Version**
1.6.2 | closed | 2020-12-13T18:32:01Z | 2020-12-13T19:08:30Z | https://github.com/bmoscon/cryptofeed/issues/353 | [
"bug"
] | yohplala | 1 |
aiortc/aiortc | asyncio | 220 | Multiple cameras and broadcast | First of all, thank you for the amazing work! I could easily get video streaming from my webcams with very low latency.
Tell me if I've understood correctly: to stream multiple videos (cameras) to the same client, we cannot add multiple videos (called `track`s?) to a single connection, and we should instead open a connection for each video tag in our HTML. Looking at the webcam example, I implemented it by creating a route for every camera and opening a new pc each time it is called:
```python
@routes.post('/camera{cam_n}')
async def camera(request):
    cam_n = int(request.match_info['cam_n'])
    params = await request.json()
    offer = RTCSessionDescription(sdp=params["sdp"], type=params["type"])
    pc = RTCPeerConnection()
    pcs.add(pc)
I needed to tweak the javascript as well, because the ICE gathering was somehow colliding.
So far so good, how can I stream the same camera to multiple clients? I can open a new MediaPlayer for each, but they sure cannot share the same /dev/vide0. So I tried to share the same MediaPlayer for differents (or the same) clients, but it says the `track already has a sender`.
```python
if cam_n not in players:
path = cams[cam_n]['path']
options = {"framerate": "30", "video_size": "640x480"}
players[cam_n] = MediaPlayer(path, format="v4l2", options=options)
player = players[cam_n]
```
I am really stuck with that /: | closed | 2019-10-24T09:13:30Z | 2019-11-25T11:25:45Z | https://github.com/aiortc/aiortc/issues/220 | [] | AntoninRousset | 4 |
ResidentMario/geoplot | matplotlib | 11 | Host the documentation | Likely via my personal website for the moment. | closed | 2016-12-18T18:39:24Z | 2017-01-05T03:48:12Z | https://github.com/ResidentMario/geoplot/issues/11 | [] | ResidentMario | 0 |
albumentations-team/albumentations | deep-learning | 2,402 | [New feature] Add apply_to_images to FancyPCA | open | 2025-03-11T01:02:29Z | 2025-03-11T01:02:35Z | https://github.com/albumentations-team/albumentations/issues/2402 | [
"enhancement",
"good first issue"
] | ternaus | 0 |
|
graphdeco-inria/gaussian-splatting | computer-vision | 347 | questions: batch-render, multi-object splat and render, image augment. | as title
1. So far, the Gaussian splatting rendering is done one by one, right? If so, how would performance on batches look... :-)
2. Multi-object splatting and rendering, including partial pose and partial occlusion.
I am just a layman in a scientist's guise, so I may be wrong. :-( | closed | 2023-10-20T02:33:31Z | 2023-10-21T14:44:38Z | https://github.com/graphdeco-inria/gaussian-splatting/issues/347 | [] | yuedajiong | 1 |
pykaldi/pykaldi | numpy | 140 | Why can't I download the model? | I ran the command `./models.sh` inside the zamia directory.
But there is an error:
```
HTTP request sent, awaiting response... 404 Not Found
2019-06-22 21:06:24 ERROR 404: Not Found.
tar (child): kaldi-generic-en-tdnn_f-r20190227.tar.xz: Cannot open: No such file or directory
tar (child): Error is not recoverable: exiting now
tar: Child returned status 2
tar: Error is not recoverable: exiting now
mv: cannot stat 'kaldi-generic-en-tdnn_f-r20190227/README.md': No such file or directory
mv: cannot stat 'kaldi-generic-en-tdnn_f-r20190227/*': No such file or directory
mv: cannot stat 'model': No such file or directory
mv: cannot stat 'extractor': No such file or directory
mv: cannot stat 'ivectors_test_hires': No such file or directory
ln: failed to create symbolic link 'conf/.': No such file or directory
ln: failed to create symbolic link 'data/.': No such file or directory
```
Does this mean that this URL is dead?
@dogancan @r9y9 @vrmpx @davidavdav @caizexin
thank you!!! | closed | 2019-06-22T13:11:11Z | 2019-06-26T10:08:08Z | https://github.com/pykaldi/pykaldi/issues/140 | [] | CXiaoDing | 2 |
jumpserver/jumpserver | django | 14,441 | [Bug] Security role index mismatch | ### Product Version
4.0.1
### Product Edition
- [X] Community Edition
- [ ] Enterprise Edition
- [ ] Enterprise Trial Edition
### Installation Method
- [X] Online Installation (One-click command installation)
- [ ] Offline Package Installation
- [ ] All-in-One
- [ ] 1Panel
- [ ] Kubernetes
- [ ] Source Code
### Environment Information

I selected Administrator and Auditor, but it directly shows two Auditors.
### 🐛 Bug Description
Presumably your index mapping is mixed up.
### Recurrence Steps
The feature itself still works.
### Expected Behavior
_No response_
### Additional Information
_No response_
### Attempted Solutions
_No response_ | closed | 2024-11-13T03:32:39Z | 2024-11-13T06:05:26Z | https://github.com/jumpserver/jumpserver/issues/14441 | [
"🐛 Bug"
] | kingofeagles | 1 |
chatanywhere/GPT_API_free | api | 26 | Hosts are unstable and often unreachable | The two hosts for the forwarded API key both feel unstable, especially when using the sidebar: the connection often drops mid-use, or responses become very slow. Also, the forwarded API feels noticeably slower than calling the official endpoint directly. Could it be that the hosts receive too much traffic? Could you optimize and upgrade them? Thanks! | closed | 2023-05-30T03:09:15Z | 2023-05-30T05:06:23Z | https://github.com/chatanywhere/GPT_API_free/issues/26 | [] | hshenmeow | 6 |
noirbizarre/flask-restplus | flask | 330 | Download link in SwaggerUI corrupts the file (changes the encoding) | Hi,
There is an issue (basically Swagger messes up the encoding of the file) when adding binary data into a Response, which seems to originate in Swagger (https://github.com/swagger-api/swagger-ui/issues/2132).
It seems that the issue is solved in version 2.2.10? Do you have any plans to upgrade Swagger or patch this? Are there any workarounds?
Thank you | open | 2017-09-28T13:49:37Z | 2018-02-08T18:50:41Z | https://github.com/noirbizarre/flask-restplus/issues/330 | [] | naeioan | 3 |
mjhea0/flaskr-tdd | flask | 52 | Posts can be deleted even though user isn't logged in | Currently, posts can be deleted if you aren't logged in. | closed | 2019-09-27T13:27:11Z | 2019-11-05T15:42:40Z | https://github.com/mjhea0/flaskr-tdd/issues/52 | [] | jeremiasbaur | 6 |
tflearn/tflearn | tensorflow | 298 | bug of global_avg_pool | sorry my fault
| closed | 2016-08-23T08:20:03Z | 2016-08-23T08:34:08Z | https://github.com/tflearn/tflearn/issues/298 | [] | lfz | 0 |
slackapi/bolt-python | fastapi | 938 | Chat Post Message Unfurl Link / Media Ignored | I have an app that sends messages in DMs that often contain 3-5 links. The links are sometimes links to other slack messages and sometimes the slack messages have links or files in them, so the previews can get really cluttered. So I have been setting the `unfurl_link` and `unfurl_media` to False. But when I post messages, I still see the link previews. My goal is to _not_ see any link previews if I set `unfurl_media=False` and `unfurl_link=False`.
### Reproducible in:
#### The `slack_bolt` version
```
slack-bolt==1.16.2
slack-sdk==3.19.5
```
#### Python runtime version
`Python 3.9.16`
#### OS info
Runs on AWS Lambda 3.9 Python Runtime
also reproduced on macOS
#### Steps to reproduce:
```python
from slack_bolt import App
link1 = "https://some-workspace.slack.com/archives/C04NX3245JR/p1685537161971159?thread_ts=1685537161.971159&cid=C04NX3245JR"
link2 = "https://some-workspace.slack.com/archives/C050J9G9VFC/p1687924354104319?thread_ts=1687924354.104319&cid=C050J9G9VFC"
link3 = "https://some-workspace.slack.com/archives/C058H0ND0BH/p1684777881725439?thread_ts=1684777881.725439&cid=C058H0ND0BH"
msg_txt = f"<{link1}|Link 1>\n<{link2}|Link 2>\n<{link3}|Link 3>\n"
SLACK_BOT_TOKEN, SLACK_SIGNING_SECRET = get_slack_tokens()
app = App(token=SLACK_BOT_TOKEN, signing_secret=SLACK_SIGNING_SECRET)
slack_client = app.client
slack_client.chat_postMessage(
channel="U0XXXXX",
text=msg_txt,
unfurl_media=False,
unfurl_link=False,
)
```
### Expected result:
Expect no previews to be shown underneath the message.
### Actual result:
I see many previews underneath the message.
<img width="519" alt="github-issue-slack-bolt-link-unfurling" src="https://github.com/slackapi/bolt-python/assets/13176059/e0dca603-8ad0-467a-85cd-36c7ec837964">
## Requirements
Please read the [Contributing guidelines](https://github.com/slackapi/bolt-python/blob/main/.github/contributing.md) and [Code of Conduct](https://slackhq.github.io/code-of-conduct)
before creating this issue or pull request. By submitting, you are agreeing to those rules.
| closed | 2023-07-28T14:32:39Z | 2023-07-28T16:16:15Z | https://github.com/slackapi/bolt-python/issues/938 | [
"question"
] | weallwegot | 2 |
blacklanternsecurity/bbot | automation | 2,248 | Unable to send scan files to elasticsearch | ### Description
Unable to send BBOT scan files to Elasticsearch despite following the official BBOT integration guide. The scan files are being extracted in SIEM-friendly JSON format but are not being successfully ingested into Elasticsearch.
### Environment Setup
- BBOT installed and configured
- Elasticsearch + Kibana running
- Fleet Server configured
- Elastic Agent installed on the same machine as BBOT
### Steps Taken
1. Configured BBOT integration as per the official guide
2. Extracting scans in SIEM-friendly JSON format
3. Added --insecure flags and tried various SSL configurations
4. All components (Elastic Agent + BBOT) are on the same machine
### Current Behavior
Scan files are not being sent/ingested into Elasticsearch
### Expected Behavior
BBOT scan results should be successfully ingested into Elasticsearch and viewable in Kibana
### Questions
- Is there a way to verify if BBOT is attempting to send the files? elastic-agent logs doesn't show any information except system metrics.
- Are there specific permissions or configurations needed beyond the basic setup? I'm keeping the logs inside /root/.bbot/scans but elastic-agent is running as root as well so it does looks like permission problem.
### Additional Context
SSL verification has been disabled for testing purposes but the issue persists. Looking for guidance on troubleshooting steps or additional configuration requirements.
**BBOT Command**
Example: `bbot -t evilcorp.com -p subdomain-enum -rf passive -c modules.json.siem_friendly=true -om json --name bbot`
**OS, BBOT Installation Method + Version**
Example: `OS: Ubuntu Linux 22.04, Installation method: pipx, BBOT version: 2.3.2`
| open | 2025-02-04T23:34:36Z | 2025-02-06T00:01:57Z | https://github.com/blacklanternsecurity/bbot/issues/2248 | [
"bug"
] | antton | 0 |
man-group/arctic | pandas | 915 | Naming database other than "arctic" | #### Arctic Version
```
# 1.79.4
```
#### Arctic Store
```
# VersionStore
```
#### Platform and version
Windows 10 x64
#### Description of problem and/or code sample that reproduces the issue
The database cannot be given any name other than "arctic". I see the name is hardcoded in arctic.py line 68 as `DB_PREFIX = "arctic"`. It would be useful to allow a different name by passing it as an argument to the `Arctic()` class.
Sorry if this is already possible. I played around with the code and went through all of the docs, and it didn't seem to be possible at this point.
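For illustration, the kind of API I was hoping for (the `db_prefix` keyword is hypothetical; it does not exist today):
```python
from arctic import Arctic

# hypothetical keyword replacing the hardcoded DB_PREFIX = 'arctic'
store = Arctic('localhost', db_prefix='research')
lib = store['my_library']
```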
| closed | 2021-09-29T12:56:11Z | 2022-08-04T12:28:20Z | https://github.com/man-group/arctic/issues/915 | [] | bmicognito | 2 |
iperov/DeepFaceLab | machine-learning | 5,471 | Input dst images checked only after loading faces during 7) merge SAEHD.bat | Hello
I noticed that the extracted images from data_dst.mp4 are checked after the model and src/dst faces are loaded.
This means that in larger projects with 60,000 images, it takes quite some time to find out that your extracted data_dst images are missing.
This check should be done first, before loading the model and the src/dst faces, to avoid wasting this time.
Now you could say to just extract data_dst.mp4 beforehand, but in some cases you need to delete the extracted images partially or fully.
So it would be better to pre-check the extracted data_dst.mp4 images as the first step to avoid the time loss, or give the user the chance to extract them (mention that the files are not there, ask them to extract, and let them press y to continue). That would be good. | open | 2022-02-04T10:17:57Z | 2023-06-08T22:57:33Z | https://github.com/iperov/DeepFaceLab/issues/5471 | [] | thedeepface | 1 |
yt-dlp/yt-dlp | python | 12,387 | [iPrima.cz] Login Failed | ### Checklist
- [x] I'm reporting that yt-dlp is broken on a **supported** site
- [x] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [x] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [x] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [x] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766), [the FAQ](https://github.com/yt-dlp/yt-dlp/wiki/FAQ), and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=is%3Aissue%20-label%3Aspam%20%20) for similar issues **including closed ones**. DO NOT post duplicates
- [x] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
Czechia
### Provide a description that is worded well enough to be understood
It worked on friday (15/2/2025), now it throws "Login Failed".
I verified that I have access to the website via browser, correct login credentials and I made sure that the special characters are properly escaped, even tried changing password to exclude special characters to rule out wrong password.
Tried stable AND nightly version, log included is from the latest nightly, tried on multple versions.
### Provide verbose output that clearly demonstrates the problem
- [x] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [x] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-F', 'https://www.iprima.cz/serialy/hvezdna-brana/sezona-vi/4-v-zajeti-ledu', '--username', 'PRIVATE', '--password', 'PRIVATE', '-vU']
[debug] Encodings: locale cp1250, fs utf-8, pref cp1250, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version nightly@2025.02.11.232920 from yt-dlp/yt-dlp-nightly-builds [6ca23ffaa] (win_exe)
[debug] Python 3.10.11 (CPython AMD64 64bit) - Windows-10-10.0.19045-SP0 (OpenSSL 1.1.1t 7 Feb 2023)
[debug] exe versions: ffmpeg N-118488-g0e7c2a6287-20250217 (setts), ffprobe N-118488-g0e7c2a6287-20250217
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2025.01.31, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.40.1, urllib3-2.3.0, websockets-14.2
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1841 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
Latest version: nightly@2025.02.11.232920 from yt-dlp/yt-dlp-nightly-builds
yt-dlp is up to date (nightly@2025.02.11.232920 from yt-dlp/yt-dlp-nightly-builds)
[IPrima] Downloading login page
[IPrima] Logging in
ERROR: [IPrima] 4-v-zajeti-ledu: Login failed
File "yt_dlp\extractor\common.py", line 744, in extract
File "yt_dlp\extractor\common.py", line 650, in initialize
File "yt_dlp\extractor\iprima.py", line 114, in _perform_login
``` | open | 2025-02-17T18:40:17Z | 2025-03-19T11:44:17Z | https://github.com/yt-dlp/yt-dlp/issues/12387 | [
"account-needed",
"geo-blocked",
"site-bug",
"triage",
"can-share-account"
] | s1ren9133 | 3 |
pywinauto/pywinauto | automation | 1,281 | Actual click location changes on dual monitor setup with different monitor scales | ## Expected Behavior
Click location in the window being consistent regardless of monitor.
## Actual Behavior
Click location in the window changes based on the monitor the window is on if the scaling for both monitors is not the same.
## Steps to Reproduce the Problem
1. Have a dual monitor setup with 2 different settings for "Scale and layout" under Windows Settings for "Display"
_example: monitor 1 = 100%, monitor 2 = 150%_
2. set click coords somewhere near the middle of the app (0,0 works consistently still)
3. compare behavior on monitor 1 and monitor 2
## Short Example of Code to Demonstrate the Problem
```python
w = app.window(title='my_app')
w.click_input(coords=(300, 300))
```
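A workaround that reportedly helps with mixed-DPI setups (my assumption, untested on this exact configuration): make the script per-monitor DPI aware before connecting to the application:
```python
import ctypes

# 2 = PROCESS_PER_MONITOR_DPI_AWARE; requires Windows 8.1 or later
ctypes.windll.shcore.SetProcessDpiAwareness(2)
```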
## Specifications
- Pywinauto version: 0.6.8
- Python version and bitness: 32bit
- Platform and OS: Windows 10, dual monitor with different scaling options
<img width="498" alt="pywinauto scaling issue" src="https://user-images.githubusercontent.com/90365865/216903049-2dd12af0-01b1-49e0-aebc-df64dfdfedc9.png">
| open | 2023-02-06T06:55:14Z | 2023-07-15T05:53:56Z | https://github.com/pywinauto/pywinauto/issues/1281 | [
"bug",
"Priority-Low"
] | KarlRW | 1 |
DistrictDataLabs/yellowbrick | scikit-learn | 728 | Finish JointPlot for Machine Learning Use Cases | This issue is a follow on to #721 to wrap up the extension of `JointPlot` for machine learning-specific use cases. The tasks are as follows:
- [ ] Finish the JointPlot docstring
- [ ] In the case where two columns are specified, color the plot with the target variable as in Manifold
- [ ] Add in best fit line(s) as an option (this might extend the `kind` parameter) - note that lines may be drawn for each class in a discrete target.
- [ ] Update the JointPlot documentation to reflect the machine learning specific use case of this visualizer.
- [ ] implement the quick method
- [ ] make the aspect ratio square
Note that the square aspect ratio is being discussed here:
https://stackoverflow.com/questions/54545758/create-equal-aspect-square-plot-with-multiple-axes-when-data-limits-are-differ
For the documentation ensure we add images that show a few different versions of JointPlot:
- [ ] feature-to-target
- [ ] feature-to-feature
- [ ] use of the hexbin plot
- [ ] feature-to-feature with discrete classes colored
- [ ] feature-to-feature with heatmap for regression
- [ ] use of different correlation measures
Add the following tests:
- [ ] test hist="density" image similarity
- [ ] test unknown plot kind raises exception after being set correctly in init
- [ ] test hexbin plot with and without histogram
- [ ] test exception when `columns=['onecol']` is passed (line 246)
- [ ] test X and y being passed as python lists and tuples
- [ ] test quick method with and without histogram
See coverage report for details:
https://coveralls.io/builds/21488827/source?filename=yellowbrick/features/jointplot.py | open | 2019-02-06T16:15:00Z | 2020-10-26T16:07:53Z | https://github.com/DistrictDataLabs/yellowbrick/issues/728 | [
"type: feature",
"priority: medium"
] | bbengfort | 3 |
davidsandberg/facenet | tensorflow | 373 | It seems that a mismatch exists between the outputs of MTCNN and the inputs of Inception-ResNet-v1 | I am a beginner in face recognition. In my view, Inception-ResNet-v1 (https://arxiv.org/abs/1602.07261) accepts a 299x299 image as input, but in the "Validate on LFW" example, it seems that 160x160 aligned faces are cropped.
So how can Inception-ResNet-v1 accept 160x160 cropped images as input? Thank you! | closed | 2017-07-14T04:10:34Z | 2017-10-21T11:59:03Z | https://github.com/davidsandberg/facenet/issues/373 | [] | xs-han | 1 |
Yorko/mlcourse.ai | matplotlib | 593 | Dead link in README.md | The page is not available:
https://github.com/Yorko/mlcourse.ai/wiki/About-the-course-(in-Russian) | closed | 2019-05-18T15:47:47Z | 2019-05-18T17:58:59Z | https://github.com/Yorko/mlcourse.ai/issues/593 | [] | i-aztec | 1 |
keras-team/keras | deep-learning | 20,251 | Allow to pass **kwargs to optimizers.get | https://github.com/keras-team/keras/blob/f6c4ac55692c132cd16211f4877fac6dbeead749/keras/src/optimizers/__init__.py#L72-L97
When dynamically getting an optimizer by using tf.keras.optimizers.get(<OPT_NAME>), it would be extremely useful if one could also pass extra arguments to the function, so that the optimizer gets initialized properly. See below a test example of the behavior I would like to see:
```python
optimizer_name = 'adam'
opt_params = {'learning_rate': 3e-3, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': True}
import tensorflow as tf
opt = tf.keras.optimizers.get(optimizer_name, **opt_params)
assert(opt.learning_rate == opt_params['learning_rate']), "Opt learning rate not being correctly initialized"
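# A workaround sketch that may already work today (my assumption, untested):
# get() also accepts a dict identifier, which it resolves via deserialize():
#   opt = tf.keras.optimizers.get(
#       {'class_name': 'Adam', 'config': opt_params})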
``` | closed | 2024-09-11T20:21:18Z | 2024-09-11T22:31:30Z | https://github.com/keras-team/keras/issues/20251 | [
"type:feature",
"keras-team-review-pending"
] | manuelblancovalentin | 1 |
koxudaxi/fastapi-code-generator | fastapi | 350 | Allow generation of subdirs when using '--template-dir' | Hi.
I'd love to be able to generate subdirectories when using the `--template-dir` option. Currently this is not possible: in [fastapi_code_generator/__main__](https://github.com/koxudaxi/fastapi-code-generator/blob/master/fastapi_code_generator/__main__.py#L179), `for target in template_dir.rglob("*"):` is used instead of `for target in template_dir.rglob("*.jinja2"):`, and passing a directory path to the code generator throws an error.
Furthermore, it would be cool to automatically create subpaths if they do not exist (see the sketch after the example below).
## Example:
```
templates:
    generated:
        router.jinja2
    main.jinja2
```
should result in
`main.py`, `generated/router.py`
This would help me structure the generated code further.
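A rough sketch of the loop change I have in mind (untested; `output_dir` is my placeholder for wherever the generator writes files):
```python
for target in template_dir.rglob("*.jinja2"):
    relative = target.relative_to(template_dir)
    destination = (output_dir / relative).with_suffix(".py")
    destination.parent.mkdir(parents=True, exist_ok=True)  # create subdirs
    # ...render the template and write it to `destination`...
```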
I'd like to make a pull request for this, but would love some input first :) | open | 2023-05-17T11:17:55Z | 2023-05-17T11:23:59Z | https://github.com/koxudaxi/fastapi-code-generator/issues/350 | [] | ThisIsANiceName | 0 |
plotly/jupyter-dash | dash | 72 | Unable to communicate with the jupyter_dash notebook or JupyterLab extension required to infer Jupyter configuration. | hi
I have followed the instructions, but I have one question: how can I launch Jupyter locally and write my code after these installation commands?
| open | 2021-11-24T08:37:02Z | 2022-03-06T10:55:11Z | https://github.com/plotly/jupyter-dash/issues/72 | [] | armeh429 | 6 |
nvbn/thefuck | python | 724 | [Question] Run method in CorrectedCommand only runs side effect? | I was just (admittedly kinda briefly) looking through the code and there's something I don't understand - the default entrypoint is `fix_command` which runs the `run` method of the `CorrectedCommand` - however, the `run` method only appears to run the `side_effect` of the `CorrectedCommand`. When is the corrected command script itself actually run? | closed | 2017-11-03T04:42:19Z | 2018-02-13T00:10:41Z | https://github.com/nvbn/thefuck/issues/724 | [] | dieggsy | 0 |
sanic-org/sanic | asyncio | 3,027 | CookieJar.cookies assumes Set-Cookie header to be present | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Describe the bug
I'm updating some code after the removal of CookieJar.items(). Previously it was safe to call it when no cookie header was set; now it raises a KeyError. Maybe it's worthwhile to handle that and return an empty list instead.
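A sketch of the guard I have in mind (an assumption about the internals; I haven't checked how `cookies` actually reads the header):
```python
@property
def cookies(self):
    try:
        return self.headers.getall("Set-Cookie")
    except KeyError:
        return []  # no Set-Cookie header present yet
```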
### Code snippet
_No response_
### Expected Behavior
_No response_
### How do you run Sanic?
Sanic CLI
### Operating System
Linux
### Sanic Version
24.12.0
### Additional context
_No response_ | open | 2025-01-09T15:41:37Z | 2025-01-10T08:56:50Z | https://github.com/sanic-org/sanic/issues/3027 | [
"bug"
] | xrmx | 2 |
dhaitz/mplcyberpunk | matplotlib | 3 | Add colormaps | Absolutely loving this stylesheet!
One thing I've found that I wanted to do was use some of the cyberpunk colors as linear colormaps, and basically implement some functions that return `LinearSegmentedColormap` objects for others to use (example attached), in the same kind of way as how [palettable](https://jiffyclub.github.io/palettable/) gives access to a bunch of colormaps.

In this example I'm using cyan to violet (which actually makes a good sequential colormap), but some of the others like `pink` and `matrix_green` in the stylesheet look like they'd be good candidates too. To make the plot above, I took the hex codes and converted them into normalized RGB values, and used `matplotlib.colors.LinearSegmentedColormap.from_list` to create them.
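Roughly what I did, as a minimal sketch (the hex values are my approximation of the stylesheet's cyan and violet, so treat them as placeholders):
```python
import matplotlib.pyplot as plt
from matplotlib.colors import LinearSegmentedColormap

# approximate cyberpunk cyan -> violet gradient
cyan_violet = LinearSegmentedColormap.from_list(
    "cyberpunk_cyan_violet", ["#08F7FE", "#9D72FF"]
)
plt.imshow([[0, 1]], cmap=cyan_violet)  # quick visual check
```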
If you are open to contributions, I could fork/implement this and submit a PR :)
| closed | 2020-04-02T14:06:03Z | 2020-04-02T18:51:48Z | https://github.com/dhaitz/mplcyberpunk/issues/3 | [] | laserkelvin | 1 |
deepset-ai/haystack | nlp | 8,171 | Outdated documentation | Most of the examples provided in your documentation do not seem to be functioning correctly. Even on your website’s first page, under the “Quick Start” section (https://haystack.deepset.ai/overview/quick-start), there appears to be an error regarding the “PredefinedPipeline.” The line “from haystack import Pipeline, PredefinedPipeline” results in an error indicating that “PredefinedPipeline” cannot be found. Where can I find the correct and up-to-date documentation? | closed | 2024-08-08T04:13:33Z | 2024-09-07T22:52:48Z | https://github.com/deepset-ai/haystack/issues/8171 | [
"type:documentation",
"community-triage"
] | dariush-saberi | 4 |
marimo-team/marimo | data-science | 4,163 | New keyboard shortcuts and config file options | ### Description
I am hoping to be able to:
1. Give some keyboard commands to fold/unfold all markdown headers in a notebook.
2. Create a setting in the config file, to make all headers folded by default, on opening a notebook.
3. Make a setting in the config file, so that all newly created cells are viewed as markdown by default.
### Suggested solution
Make more keyboard shortcuts and extra config file options.
Thanks for the great notebooks!
### Alternative
_No response_
### Additional context
_No response_ | open | 2025-03-19T16:26:11Z | 2025-03-19T16:53:51Z | https://github.com/marimo-team/marimo/issues/4163 | [
"enhancement",
"help wanted",
"good first issue (typescript)"
] | axiomtutor | 0 |
xzkostyan/clickhouse-sqlalchemy | sqlalchemy | 226 | Add support for session id | **Describe the bug**
No support for sessions in the HTTP protocol.
Currently, a URL that contains session_id has the GET parameter cut out into kwargs, where it is never used.
This leads to problems with creating temp tables in ClickHouse and errors like
`Code: 113. DB::Exception: There is no session or session context has expired. (THERE_IS_NO_SESSION)`
**To Reproduce**
Use url to connect:
`clickhouse://user:password@host:8123/?session_id=qwer123`
**Expected behavior**
Expected the session id to be propagated to the URL for subsequent connections.
**Versions**
- Version of package with the problem.
0.2.3, 0.2.2

- Python version.
3.10, 3.11
**Solution**
I have simply added handling for the session_id parameter, but it would be nice to have it released.
Currently I use the following solution locally:
https://github.com/xzkostyan/clickhouse-sqlalchemy/pull/227 | closed | 2022-12-20T15:14:29Z | 2024-08-06T10:17:28Z | https://github.com/xzkostyan/clickhouse-sqlalchemy/issues/226 | [] | hhalina | 2 |
graphql-python/graphene-django | graphql | 1,297 | Page No base pagination djangoFilterConnectField | i am consfused a lot about the implementation of page no based pagination in grahene_django and relay .
the default implementation cursor based which works well for infinite scroll but i need to implement traditional page no base .
i have tried creating a custom pagination field on the connection as
```python
class CountableConnectionBase(relay.Connection):
    class Meta:
        abstract = True

    pagination = graphene.Field(PaginationType, n=graphene.Int())

    def resolve_pagination(self, info, *args, **kwargs):
        print(self.page_info.__dir__(), kwargs, args, info)
```
but the kwargs dict is empty and has no attributes like offset or first.
I think I am doing it wrong; there might be a better solution, but the documentation is very sparse and doesn't provide any clue about implementing it. | open | 2022-02-07T07:19:10Z | 2022-02-07T07:19:10Z | https://github.com/graphql-python/graphene-django/issues/1297 | [] | aabidsofi19 | 0 |
thp/urlwatch | automation | 740 | Shell reporter not working | Hi there, so I'm trying https://github.com/thp/urlwatch/commit/7c505657c7ccfe78060240ef5f09b21fd86cea13 but I can't get it to work;
it seems that whatever I do, nothing is reported. If I disable all other reporters and only keep the `shell` one, I get: `reporters WARNING: No reporters enabled.`
It seems as if it's not being triggered? In `urlwatch.yaml` I've configured:
```
report:
shell:
# command: ['/home/ubuntu/.local/bin/ntfyr', '--topic', 'hello', '--title', 'urlwatch']
command: ['tee', '-a', '/home/ubuntu/tmp/log.txt']
ignore_stdout: false
enabled: true
```
No errors are logged. Could it be some bug in the code?
Please advise :-)
-- urlwatch 2.25 | closed | 2022-12-19T22:58:51Z | 2023-01-05T06:57:56Z | https://github.com/thp/urlwatch/issues/740 | [] | notDavid | 5 |
unit8co/darts | data-science | 1,882 | [BUG] Error while importing forecasting model | **Describe the bug**
When I try to import models from Darts like this:
```python
from darts.models import NaiveSeasonal
from darts.models import KalmanForecaster
from darts.models import StatsForecastAutoCES
from darts.models import StatsForecastAutoETS
```
I get the following error:
```
TypeError: Descriptors cannot not be created directly.
If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
If you cannot immediately regenerate your protos, some other possible workarounds are:
1. Downgrade the protobuf package to 3.20.x or lower.
2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).
More information: https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates
```
But it seems to work if I downgrade protobuf like this:
```
pip install protobuf==3.19.4
```
**System (please complete the following information):**
- Python version: 3.9.5
- Darts version 0.24.0
- Databricks runtime 12.2 LTS for machine learning (https://docs.databricks.com/release-notes/runtime/12.2ml.html)
| closed | 2023-07-06T17:09:50Z | 2023-08-07T14:55:58Z | https://github.com/unit8co/darts/issues/1882 | [
"bug",
"devops"
] | Jonathan-87 | 2 |
google-research/bert | nlp | 412 | How to extract the word embedding parameters from the pretrained files? | Hi there, can anyone give some tips on extracting the word embedding parameters from the pretrained files? | open | 2019-02-01T17:27:15Z | 2019-02-19T07:17:07Z | https://github.com/google-research/bert/issues/412 | [] | dzhao123 | 3 |
plotly/dash-component-boilerplate | dash | 146 | upgrade to webpack 5 | I think it would be good to update all the packages used by webpack, as a lot of them still target a Node version that will soon exit LTS. | closed | 2022-06-16T15:58:33Z | 2023-05-31T13:30:40Z | https://github.com/plotly/dash-component-boilerplate/issues/146 | [] | Brandontam29 | 3
Farama-Foundation/Gymnasium | api | 855 | [Question] Why was `FuncEnv.state_info()` renamed to `FuncEnv.initial_info()`? | ### Question
In https://github.com/Farama-Foundation/Gymnasium/pull/818/ `state_info` was renamed to `initial_info` with no explanation; what was the reason?
I suggest that this change be reversed, since some environments provide state info at every step, not only at reset.
@RedTachyon
Thanks
| closed | 2023-12-22T11:00:28Z | 2023-12-25T20:55:08Z | https://github.com/Farama-Foundation/Gymnasium/issues/855 | [
"help wanted",
"question"
] | Kallinteris-Andreas | 1 |
QuivrHQ/quivr | api | 3,559 | AttributeError: type object 'LLMEndpoint' has no attribute '_cache' | Sentry Issue: [PYTHON-FASTAPI-1A4](https://quivr-brain.sentry.io/issues/6250899282/?referrer=Linear)
```
AttributeError: type object 'LLMEndpoint' has no attribute '_cache'
File "quivr_api/modules/rag_service/rag_service.py", line 293, in generate_answer_stream
brain_core = await self.build_brain_core(retrieval_config)
File "quivr_api/modules/rag_service/rag_service.py", line 251, in build_brain_core
llm = self.get_llm(retrieval_config)
File "quivr_api/modules/rag_service/rag_service.py", line 169, in get_llm
return LLMEndpoint.from_config(retrieval_config.llm_config)
File "quivr_core/llm/llm_endpoint.py", line 171, in from_config
cls._cache[cache_key] = instance
``` | closed | 2025-01-27T14:40:43Z | 2025-01-27T14:48:43Z | https://github.com/QuivrHQ/quivr/issues/3559 | [
"bug"
] | linear[bot] | 1 |
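A hedged sketch of the kind of fix the traceback above points at: `from_config` writes to `cls._cache` before any `_cache` attribute exists on the class, so declaring it as a class attribute avoids the `AttributeError`. This is a minimal stand-in, not quivr's real class:

```python
class LLMEndpoint:
    _cache: dict = {}  # the failing code apparently never defines this attribute

    def __init__(self, config):
        self.config = config

    @classmethod
    def from_config(cls, config):
        # Assumption: any hashable key derived from the config works here.
        cache_key = repr(config)
        if cache_key not in cls._cache:
            cls._cache[cache_key] = cls(config)
        return cls._cache[cache_key]
```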
roboflow/supervision | deep-learning | 807 | [Detections] - make `from_ultralytics` extract class names from result | ### Description
ultralytics result stores a dict that maps `class_id` to `class_name`; you can find it in `result.names`. Perform such a mapping by converting `class_id` values into class names and storing them in the output `Detections.data["class_names"]`. For reference, a similar mapping has already been done in the [`Detections.from_inference`](https://github.com/roboflow/supervision/blob/30db3c3f0640cc3b9c437ad900efaa2f4d829d1f/supervision/detection/core.py#L411) function; look there for more implementation details.
### Use case
Note that the class names associated with the detections are available through `detections["class_name"]`.
```python
import cv2
import supervision as sv
from ultralytics import YOLO
image = cv2.imread(<SOURCE_IMAGE_PATH>)
model = YOLO("yolov8s.pt")
result = model(image)[0]
detections = sv.Detections.from_ultralytics(result)
detections.class_id
# np.array([2, 0])
detections["class_name"]
# np.array(["car", "person"])
```
### Additional
- Note: Please share a Google Colab with minimal code to test the new feature. We know it's additional work, but it will speed up the review process. The reviewer must test each change. Setting up a local environment to do this is time-consuming. Please ensure that Google Colab can be accessed without any issues (make it public). Thank you! 🙏🏻 | closed | 2024-01-29T13:54:25Z | 2024-01-31T07:30:34Z | https://github.com/roboflow/supervision/issues/807 | [
"enhancement",
"good first issue",
"api:detection",
"Q1.2024"
] | SkalskiP | 0 |
PablocFonseca/streamlit-aggrid | streamlit | 277 | How to implement quickfilter after enable_quicksearch is deprecated? | Pretty much what the title says.
The official AG Grid documentation says to use the setGridOption API to enable the quick filter. How can we implement that in the st_aggrid library in Python? Or maybe override it? Any help is appreciated.
In the meantime, I am using agTextColumnFilter.
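One workaround beyond `agTextColumnFilter`, sketched under the assumption that `quickFilterText` is still accepted as a plain grid option and that `configure_grid_options` forwards arbitrary keys:

```python
import pandas as pd
import streamlit as st
from st_aggrid import AgGrid, GridOptionsBuilder

df = pd.DataFrame({"name": ["ada", "grace"], "field": ["math", "compilers"]})

search = st.text_input("Quick filter")

gb = GridOptionsBuilder.from_dataframe(df)
# quickFilterText is a plain AG Grid option; each Streamlit rerun rebuilds
# the grid with the new filter text, so setGridOption is not needed here.
gb.configure_grid_options(quickFilterText=search)

AgGrid(df, gridOptions=gb.build())
```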
Official Documentation: https://www.ag-grid.com/javascript-data-grid/filter-quick/ | open | 2024-06-06T07:09:35Z | 2024-07-19T09:42:24Z | https://github.com/PablocFonseca/streamlit-aggrid/issues/277 | [] | DuelistRaj | 2 |
scikit-tda/kepler-mapper | data-visualization | 166 | JOSS Review Suggested Edits | Review in openjournals/joss-reviews#1315 by @ixjlyons suggested a few minor fixes along with #164 and #165.
# JOSS Paper:
- [x] Typo in the paper, second paragraph: "We also an provide extensive..." -> "We also provide an extensive..."
- [x] I think there may be some missing DOIs in the references.
# KM implementation:
- [x] I'd consider hiding `SimplicialNerve` until it's implemented.
- [x] The last example in the `KeplerMapper.project()` docstring should probably be moved/copied to the `fit_transform()` docstring. | closed | 2019-04-06T04:03:47Z | 2019-11-26T01:33:52Z | https://github.com/scikit-tda/kepler-mapper/issues/166 | [
"good first issue"
] | sauln | 0 |
ultralytics/ultralytics | deep-learning | 18,862 | Image size | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Hello, I have 1280×720 data. I train YOLO11 and convert it to an ONNX model with an input size of 1280×720. Does this have any impact? Is the network trained by YOLO11 inherently 1280×1280?
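A hedged sketch of how I understand the sizes to interact (the exact rounding behavior is an assumption): training with a single `imgsz` letterboxes rectangular frames into a square, while export can take an explicit `(height, width)` pair.

```python
from ultralytics import YOLO

model = YOLO("yolo11n.pt")  # placeholder starting weights

# Training with a square imgsz letterboxes rectangular frames, so
# 1280x720 sources are padded to 1280x1280 internally.
model.train(data="data.yaml", imgsz=1280, epochs=100)  # data.yaml is a placeholder

# Export with (height, width) so the ONNX input matches deployment;
# note 720 is not divisible by the stride 32, so it may be rounded to 736.
model.export(format="onnx", imgsz=(720, 1280))
```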
### Additional
_No response_ | open | 2025-01-24T09:15:32Z | 2025-01-25T04:35:48Z | https://github.com/ultralytics/ultralytics/issues/18862 | [
"question",
"exports"
] | Wihui1 | 4 |
qubvel-org/segmentation_models.pytorch | computer-vision | 770 | Upgrade to PyTorch 2.0 | Hi, PyTorch 2.0 was released a couple of months ago; is there a roadmap to upgrade and migrate to it? Any thoughts about it in general? | closed | 2023-05-30T21:10:23Z | 2023-08-08T01:52:21Z | https://github.com/qubvel-org/segmentation_models.pytorch/issues/770 | [
"Stale"
] | merryHunter | 3 |
kaliiiiiiiiii/Selenium-Driverless | web-scraping | 240 | Bug: minimize_window function maximizes the window instead of minimizing it | https://github.com/kaliiiiiiiiii/Selenium-Driverless/blob/348b95dbb0025d1343fc3c6ade3540129269ae37/src/selenium_driverless/webdriver.py#L978
**Expected Behavior:**
The function should minimize the browser window.
**Actual Behavior:**
The function maximizes the browser window.
Found the workaround, set_window_state('minimized'), but the bug fix would be nice! | closed | 2024-06-09T02:31:35Z | 2024-06-09T06:32:46Z | https://github.com/kaliiiiiiiiii/Selenium-Driverless/issues/240 | [
"bug",
"duplicate"
] | supritobiswas | 1 |
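A sketch of the workaround mentioned in the issue above, assuming the async API shown in the project's README:

```python
import asyncio

from selenium_driverless import webdriver


async def main():
    options = webdriver.ChromeOptions()
    async with webdriver.Chrome(options=options) as driver:
        await driver.get("https://example.com")
        # minimize_window() currently maximizes (the bug reported above),
        # so set the window state directly instead.
        await driver.set_window_state("minimized")


asyncio.run(main())
```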
opengeos/leafmap | jupyter | 525 | Add palette customisation of cog layers | <!-- Please search existing issues to avoid creating duplicates. -->
### Description
Allow palette customisation of cog layers
Palette is currently stretched to min/max of data
Could use the vmin and vmax arguments to set the palette min/max
Below the rendered version was output from QGIS with the palette values stretched 0-1000
The grayscale version is a singleband raster with values between 143-403
Apologies if this already exists using another method. Thanks!
### Source code
```
import leafmap.foliumap as lf
m = lf.Map()
m.add_cog_layer(url='https://temp312432.s3.ap-southeast-2.amazonaws.com/grayscale.tif', palette='plasma', name='grayscale')
m.add_cog_layer(url='https://temp312432.s3.ap-southeast-2.amazonaws.com/rendered.tif', name='rendered')
m.add_colormap(palette='plasma', vmin=0, vmax=1000)
m
```
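For what it's worth, if `add_cog_layer` forwards extra keyword arguments to the TiTiler endpoint (an assumption on my part), the `rescale` query parameter may already give this control:

```python
import leafmap.foliumap as lf

m = lf.Map()
# rescale="min,max" is a TiTiler query parameter; passing it here assumes
# leafmap forwards unrecognized kwargs to the tile endpoint.
m.add_cog_layer(
    url="https://temp312432.s3.ap-southeast-2.amazonaws.com/grayscale.tif",
    palette="plasma",
    rescale="0,1000",
    name="grayscale stretched",
)
m.add_colormap(palette="plasma", vmin=0, vmax=1000)
m
```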
| closed | 2023-08-28T22:32:58Z | 2023-08-29T01:26:38Z | https://github.com/opengeos/leafmap/issues/525 | [
"Feature Request"
] | Chris-airseed | 3 |
vllm-project/vllm | pytorch | 14,404 | [Bug]: RuntimeError: CUDA error: an illegal memory access was encountered cos_sin = self.cos_sin_cache[torch.add(positions, offsets) | ### Your current environment
<details>
<summary>The output of `python collect_env.py`</summary>
INFO 03-07 04:33:32 [__init__.py:207] Automatically detected platform cuda.
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Feb 4 2025, 14:57:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-78-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.8.61
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 550.90.07
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8462Y+
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 8
Frequency boost: enabled
CPU max MHz: 2801.0000
CPU min MHz: 800.0000
BogoMIPS: 5600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 128 MiB (64 instances)
L3 cache: 120 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pyzmq==26.2.1
[pip3] torch==2.5.1
[pip3] torchaudio==2.5.1
[pip3] torchvision==0.20.1
[pip3] transformers==4.49.0
[pip3] triton==3.1.0
[conda] Could not collect
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.7.4.dev175+gbc6ccb987
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 GPU6 GPU7 NIC0 NIC1 NIC2 NIC3 NIC4 NIC5 NIC6 NIC7 NIC8 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X NV18 NV18 NV18 NV18 NV18 NV18 NV18 PIX NODE NODE NODE SYS SYS SYS SYS NODE 0-31,64-95 0 N/A
GPU1 NV18 X NV18 NV18 NV18 NV18 NV18 NV18 NODE PIX NODE NODE SYS SYS SYS SYS NODE 0-31,64-95 0 N/A
GPU2 NV18 NV18 X NV18 NV18 NV18 NV18 NV18 NODE NODE PIX NODE SYS SYS SYS SYS NODE 0-31,64-95 0 N/A
GPU3 NV18 NV18 NV18 X NV18 NV18 NV18 NV18 NODE NODE NODE PIX SYS SYS SYS SYS NODE 0-31,64-95 0 N/A
GPU4 NV18 NV18 NV18 NV18 X NV18 NV18 NV18 SYS SYS SYS SYS PIX NODE NODE NODE SYS 32-63,96-127 1 N/A
GPU5 NV18 NV18 NV18 NV18 NV18 X NV18 NV18 SYS SYS SYS SYS NODE PIX NODE NODE SYS 32-63,96-127 1 N/A
GPU6 NV18 NV18 NV18 NV18 NV18 NV18 X NV18 SYS SYS SYS SYS NODE NODE PIX NODE SYS 32-63,96-127 1 N/A
GPU7 NV18 NV18 NV18 NV18 NV18 NV18 NV18 X SYS SYS SYS SYS NODE NODE NODE PIX SYS 32-63,96-127 1 N/A
NIC0 PIX NODE NODE NODE SYS SYS SYS SYS X NODE NODE NODE SYS SYS SYS SYS NODE
NIC1 NODE PIX NODE NODE SYS SYS SYS SYS NODE X NODE NODE SYS SYS SYS SYS NODE
NIC2 NODE NODE PIX NODE SYS SYS SYS SYS NODE NODE X NODE SYS SYS SYS SYS NODE
NIC3 NODE NODE NODE PIX SYS SYS SYS SYS NODE NODE NODE X SYS SYS SYS SYS NODE
NIC4 SYS SYS SYS SYS PIX NODE NODE NODE SYS SYS SYS SYS X NODE NODE NODE SYS
NIC5 SYS SYS SYS SYS NODE PIX NODE NODE SYS SYS SYS SYS NODE X NODE NODE SYS
NIC6 SYS SYS SYS SYS NODE NODE PIX NODE SYS SYS SYS SYS NODE NODE X NODE SYS
NIC7 SYS SYS SYS SYS NODE NODE NODE PIX SYS SYS SYS SYS NODE NODE NODE X SYS
NIC8 NODE NODE NODE NODE SYS SYS SYS SYS NODE NODE NODE NODE SYS SYS SYS SYS X
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
NIC Legend:
NIC0: mlx5_0
NIC1: mlx5_1
NIC2: mlx5_2
NIC3: mlx5_5
NIC4: mlx5_6
NIC5: mlx5_7
NIC6: mlx5_8
NIC7: mlx5_9
NIC8: mlx5_bond_0
LD_LIBRARY_PATH=/usr/local/lib/python3.10/dist-packages/cv2/../../lib64:/usr/local/cuda-12.8/lib64:
NCCL_CUMEM_ENABLE=0
TORCHINDUCTOR_COMPILE_THREADS=1
CUDA_MODULE_LOADING=LAZY
</details>
### 🐛 Describe the bug
```bash
VLLM_USE_V1=1 python3 -m vllm.entrypoints.openai.api_server --host 0.0.0.0 --port 12345 --max-model-len 131072 --max-num-batched-tokens 2048 --trust-remote-code --tensor-parallel-size 8 --gpu-memory-utilization 0.98 --served-model-name deepseek-reasoner --model /data/DeepSeek-R1-AWQ/cognitivecomputations/DeepSeek-R1-awq/ --enable-chunked-prefill --quantization moe_wna16 --max-num-seqs 16

python3 benchmark_serving.py --backend openai --model deepseek-reasoner --tokenizer /data/DeepSeek-R1-AWQ/cognitivecomputations/DeepSeek-R1-awq/ --dataset-name random --num-prompts 10000 --request-rate 20 --port 12345 --random-input-len 1024 --random-output-len 100 --ignore-eos
```
```
(VllmWorker rank=5 pid=33503) ERROR 03-06 06:49:30 [multiproc_executor.py:375] WorkerProc hit an exception: %s
Traceback (most recent call last):
  File "/workspace/vllm/vllm/v1/executor/multiproc_executor.py", line 371, in worker_busy_loop
    output = func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/workspace/vllm/vllm/v1/worker/gpu_worker.py", line 226, in execute_model
    output = self.model_runner.execute_model(scheduler_output)
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/workspace/vllm/vllm/v1/worker/gpu_model_runner.py", line 931, in execute_model
    hidden_states = self.model(
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "/workspace/vllm/vllm/model_executor/models/deepseek_v2.py", line 669, in forward
    hidden_states = self.model(input_ids, positions, intermediate_tensors,
  File "/workspace/vllm/vllm/compilation/decorators.py", line 245, in __call__
    model_output = self.forward(*args, **kwargs)
  File "/workspace/vllm/vllm/model_executor/models/deepseek_v2.py", line 607, in forward
    def forward(
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/eval_frame.py", line 632, in _fn
    return fn(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/fx/graph_module.py", line 784, in call_wrapped
    return self._wrapped_call(self, *args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/fx/graph_module.py", line 361, in __call__
    raise e
  File "/usr/local/lib/python3.10/dist-packages/torch/fx/graph_module.py", line 348, in __call__
    return super(self.cls, obj).__call__(*args, **kwargs)  # type: ignore[misc]
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "<eval_with_key>.124", line 2312, in forward
    submod_37 = self.submod_37(getitem_90, s0, getitem_91, getitem_92, getitem_93); getitem_90 = getitem_91 = getitem_92 = submod_37 = None
  File "/usr/local/lib/python3.10/dist-packages/torch/fx/graph_module.py", line 784, in call_wrapped
    return self._wrapped_call(self, *args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/fx/graph_module.py", line 361, in __call__
    raise e
  File "/usr/local/lib/python3.10/dist-packages/torch/fx/graph_module.py", line 348, in __call__
    return super(self.cls, obj).__call__(*args, **kwargs)  # type: ignore[misc]
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "<eval_with_key>.38", line 5, in forward
    unified_attention_with_output = torch.ops.vllm.unified_attention_with_output(x_349, x_353, k_pe_18, output_149, 'model.layers.18.self_attn.attn'); x_349 = x_353 = k_pe_18 = output_149 = unified_attention_with_output = None
  File "/usr/local/lib/python3.10/dist-packages/torch/_ops.py", line 1116, in __call__
    return self._op(*args, **(kwargs or {}))
  File "/workspace/vllm/vllm/attention/layer.py", line 361, in unified_attention_with_output
    self.impl.forward(self,
  File "/workspace/vllm/vllm/v1/attention/backends/mla/common.py", line 1002, in forward
    prefill_q_pe[...], prefill_k_pe[...] = self.rotary_emb(
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "/workspace/vllm/vllm/model_executor/layers/rotary_embedding.py", line 778, in forward
    cos_sin = self.cos_sin_cache[torch.add(positions, offsets)
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
(VllmWorker rank=2 pid=33426 logged an identical traceback, differing only in the rank and pid.)
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | open | 2025-03-07T04:26:52Z | 2025-03-07T04:30:56Z | https://github.com/vllm-project/vllm/issues/14404 | [
"bug"
] | sunjianxide | 0 |
ymcui/Chinese-BERT-wwm | tensorflow | 77 | Do you have plans to release Chinese ALBERT model? | Thanks for the great work of Chinese roberta-large, do you have plans to release ALBERT models? Looking forward to it! | closed | 2019-11-19T08:18:32Z | 2019-11-21T01:40:09Z | https://github.com/ymcui/Chinese-BERT-wwm/issues/77 | [] | cmxcn | 3 |
InstaPy/InstaPy | automation | 6042 | Unfollow users w/o avatar and/or number of posts less than N | ## Expected Behavior
Unfollow users according to some filters
## Current Behavior
Unfollowing by list
## Possible Solution (optional)
Generate list of followers including only specific users
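A rough sketch of how that could look with the current API. Assumptions: `grab_following` returns usernames, `unfollow_users` accepts the `customList` tuple, and the avatar/post-count check is a hypothetical helper that InstaPy does not provide today:

```python
from instapy import InstaPy

session = InstaPy(username="user", password="pass")  # placeholders
session.login()

following = session.grab_following(username="user", amount="full",
                                   live_match=True, store_locally=False)


def needs_unfollow(username: str) -> bool:
    """Hypothetical filter: True for users without an avatar or with fewer
    than N posts. InstaPy has no built-in check for this, so it would
    require extra per-profile scraping."""
    raise NotImplementedError


targets = [u for u in following if needs_unfollow(u)]
session.unfollow_users(amount=len(targets),
                       customList=(True, targets, "all"),
                       style="FIFO", unfollow_after=None)
session.end()
```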
## InstaPy configuration
0.6.13
| closed | 2021-01-19T11:43:01Z | 2021-01-19T11:46:43Z | https://github.com/InstaPy/InstaPy/issues/6042 | [] | m0rtal | 1 |
horovod/horovod | machine-learning | 3888 | Does Horovod handle the problem of each process having a different number of batches? | ### Discussed in https://github.com/horovod/horovod/discussions/3887
<div type='discussions-op-text'>
<sup>Originally posted by **formath** April 14, 2023</sup>
```
is_chief = (hvd.rank() == 0)
# model
train_iterator = data_reader.iterator(
batch_size=100,
file_pattern='hdfs://....../part-*',
shard_num=device_num,
worker_index=hvd.rank()) # data parallel between device using tf.Dataset
loss = model(train_iterator.get_next())
opt = tf.train.AdamOptimizer(learning_rate=0.0001)
opt = hvd.DistributedOptimizer(opt)
train_op = opt.minimize(train_loss)
broadcast_global_variables_op = hvd.broadcast_global_variables(0)
# train process
sess_config = tf.ConfigProto(allow_soft_placement=True)
sess_config.gpu_options.allow_growth = True
sess_config.gpu_options.visible_device_list = str(hvd.local_rank())
with tf.Session(config=sess_config) as sess:
sess.run(tf.global_variables_initializer())
sess.run(tf.local_variables_initializer())
tf.tables_initializer().run()
sess.run(broadcast_global_variables_op)
# train epoch by epoch
epoch_num = 0
while epoch_num < 6:
epoch_num += 1
sess.run(train_iterator.initializer)
while True:
try:
sess.run(train_op)
except tf.errors.OutOfRangeError:
if is_chief:
saver.save(sess=sess,
save_path="hdfs://....../model.checkpoint." + str(epoch_num),
latest_filename='checkpoint.' + str(epoch_num))
break
```
I start Horovod with `8` GPU devices, where each device uses `1/8` of all the data. The job trains `6` epochs. The first `5` epochs run fine, but when the chief finishes its last epoch, an error occurs before `saver.save`.
The error message is:
```
tensorflow.python.framework.errors_impl.UnknownError: Horovod has been shut down. This was caused by an exception on one of the ranks or an attempt to allreduce, allgather or broadcast a tensor after one of the ranks finished execution. If the shutdown was caused by an exception, you should see the exception in the log before the first shutdown message.
[[node DistributedAdamOptimizer_Allreduce/cond_8/HorovodAllgather_gradients_concat_16_0 (defined at /usr/local/lib64/python3.6/site-packages/tensorflow_core/python/framework/ops.py:1748) ]]
```
Even though each device uses `1/8` of all the data, I can't guarantee that every device has the same number of batches, e.g. `1/8 * all_data_num / batch_size`. So when one device needs to allreduce, other devices may already have stopped.</div> | closed | 2023-04-15T14:53:22Z | 2023-04-21T14:20:48Z | https://github.com/horovod/horovod/issues/3888 | [] | formath | 0
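For the Horovod question above, one common workaround, sketched under the assumption of the same TF1 session style: have every rank agree on the minimum per-rank batch count before the epoch loop, so no rank issues a collective that the others will not join.

```python
import tensorflow as tf
import horovod.tensorflow as hvd

hvd.init()

# Assumption: each rank can count its local batches up front, e.g. by
# iterating its shard once or from known shard sizes.
local_batch_count = 123

all_counts = hvd.allgather(tf.constant([local_batch_count], dtype=tf.int32))

with tf.Session() as sess:
    steps = int(sess.run(all_counts).min())  # minimum over all ranks

# Every rank then runs exactly `steps` batches per epoch, so no rank
# reaches an allreduce after another rank has already finished:
# for _ in range(steps):
#     sess.run(train_op)
```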
pallets/flask | python | 5430 | @app.errorhandler() cannot be used in blueprint when debug=False | I want to define a custom error type in Flask and use `@app.errorhandler()` in the main program to capture it. I define a function `raise_error` to raise this exception. When `debug=True` at runtime, both the main route and the blueprint route work as expected. However, when `debug=False`, only the main route works; the error raised in the blueprint route is not captured and the request fails with a 500. I hope someone can help me take a look.
There are two files in total (`mains.py`, `aaa.py`):
```
#python mains.py
from flask import jsonify, Flask
from typing import Union
app = Flask(__name__)
class CustomError(Exception):
def __init__(self, message, status_code=500):
try:
self.message = message.__dict__
except Exception as e:
self.message = message
self.status_code = status_code
def raise_error(msg: Union[dict, str], status: Union[int, str] = 500):
raise CustomError(msg, status_code=status)
@app.route("/")
def home():
raise_error("this is error")
return {}
from aaa import router
app.register_blueprint(router)
@app.errorhandler(CustomError)
def handle_custom_exception(error: CustomError):
response = jsonify({"code": error.status_code, "data": error.message})
response.status = 20
return response
if __name__ == "__main__":
# app.run(host="0.0.0.0", port=7788, debug=True) #can in blueprint
app.run(host="0.0.0.0", port=7788, debug=False) # cannot in blueprint
#python aaa.py
from flask import Blueprint, jsonify
router = Blueprint("aaa", __name__, url_prefix="/aaa")
@router.get("/")
def aaa():
from mains import raise_error
print("is runing")
raise_error("this is eror in blueprint")
return jsonify({"data": "in blueprint"})
```
```
Flask==3.0.2
typing_extensions==4.10.0
python == 3.12.2
(my_flask) PS C:\Users\Administrator\Desktop\flask_project> & C:/ProgramData/miniconda3/envs/my_flask/python.exe c:/Users/Administrator/Desktop/flask_project/mains.py
* Serving Flask app 'mains'
* Debug mode: off
WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
* Running on all addresses (0.0.0.0)
* Running on http://127.0.0.1:7788
* Running on http://192.168.3.69:7788
Press CTRL+C to quit
127.0.0.1 - - [05/Mar/2024 11:10:07] "GET / HTTP/1.1" 20 -
127.0.0.1 - - [05/Mar/2024 11:10:07] "GET /favicon.ico HTTP/1.1" 404 -
127.0.0.1 - - [05/Mar/2024 11:10:12] "GET /aaa HTTP/1.1" 308 -
is runing
[2024-03-05 11:10:12,268] ERROR in app: Exception on /aaa/ [GET]
Traceback (most recent call last):
File "C:\ProgramData\miniconda3\envs\my_flask\Lib\site-packages\flask\app.py", line 1463, in wsgi_app
response = self.full_dispatch_request()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ProgramData\miniconda3\envs\my_flask\Lib\site-packages\flask\app.py", line 872, in full_dispatch_request
rv = self.handle_user_exception(e)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ProgramData\miniconda3\envs\my_flask\Lib\site-packages\flask\app.py", line 870, in full_dispatch_request
rv = self.dispatch_request()
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ProgramData\miniconda3\envs\my_flask\Lib\site-packages\flask\app.py", line 855, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args) # type: ignore[no-any-return]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Users\Administrator\Desktop\flask_project\aaa.py", line 11, in aaa
raise_error("this is eror in blueprint")
File "c:\Users\Administrator\Desktop\flask_project\mains.py", line 17, in raise_error
raise CustomError(msg, status_code=status)
mains.CustomError: this is eror in blueprint
127.0.0.1 - - [05/Mar/2024 11:10:12] "GET /aaa/ HTTP/1.1" 500 -
127.0.0.1 - - [05/Mar/2024 11:10:12] "GET /favicon.ico HTTP/1.1" 404 -
``` | closed | 2024-03-05T03:13:48Z | 2024-03-05T03:54:33Z | https://github.com/pallets/flask/issues/5430 | [] | mengshun2022 | 0 |
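A plausible explanation for the Flask issue above (an assumption based on the traceback, which raises `mains.CustomError` while the handler was registered under `__main__`): running `python mains.py` executes the file as module `__main__`, and `from mains import raise_error` inside the blueprint view imports the same file a second time as module `mains`, creating a second, distinct `CustomError` class that the registered handler never matches. I have not verified why the reloader in debug mode happens to mask this. A sketch of a fix is to move the exception into a module that both files import the same way:

```python
# errors.py (hypothetical shared module), so mains.py and aaa.py both
# reference one and the same CustomError class object.
from typing import Union


class CustomError(Exception):
    def __init__(self, message, status_code=500):
        self.message = getattr(message, "__dict__", message)
        self.status_code = status_code


def raise_error(msg: Union[dict, str], status: int = 500):
    raise CustomError(msg, status_code=status)
```

Both `mains.py` and `aaa.py` would then do `from errors import CustomError, raise_error` at module level, and `@app.errorhandler(CustomError)` matches regardless of how the app is started.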
Gerapy/Gerapy | django | 83 | Settings file read error when deploying a Scrapy spider to a host | A finished Scrapy project fails to deploy when pushed to a Gerapy host through the web UI. The backend error log is below;
I don't know what is causing it.
{ message: 'scrapyd_api.exceptions.ScrapydResponseError: Traceback (most recent call last):\\n File "/usr/lib/python2.7/runpy.py", line 174, in _run_module_as_main\\n "__main__", fname, loader, pkg_name)\\n File "/usr/lib/python2.7/runpy.py", line 72, in _run_code\\n exec code in run_globals\\n File "/usr/local/lib/python2.7/dist-packages/scrapyd/runner.py", line 40, in <module>\\n main()\\n File "/usr/local/lib/python2.7/dist-packages/scrapyd/runner.py", line 37, in main\\n execute()\\n File "/usr/local/lib/python2.7/dist-packages/scrapy/cmdline.py", line 110, in execute\\n settings = get_project_settings()\\n File "/usr/local/lib/python2.7/dist-packages/scrapy/utils/project.py", line 68, in get_project_settings\\n settings.setmodule(settings_module_path, priority=\'project\')\\n File "/usr/local/lib/python2.7/dist-packages/scrapy/settings/__init__.py", line 292, in setmodule\\n module = import_module(module)\\n File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module\\n __import__(name)\\nImportError: No module named CNKISpider.settings\\n' } } }
| closed | 2018-09-20T07:53:46Z | 2018-09-22T04:48:40Z | https://github.com/Gerapy/Gerapy/issues/83 | [] | FRANDAVID | 4 |