| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
pykaldi/pykaldi
|
numpy
| 128 |
Install error from source
|
Hello,
I have a problem installing pykaldi. During the install there are no errors, but at the step `python setup.py install` I get errors. I have tried many times and it is always the same. I want to ask if you know what can cause this problem and how I can fix it? Thank you
```
Line .123456789.123456789.123456789.123456789
1:\n
2:from "itf/decodable-itf.h":\n
3:  namespace `kaldi`:\n
4:    class DecodableInterface:\n
5:      """Decodable interface definition."""\n
6:\n
............
_ParseError: invalid statement in indented block (at char 86), (line:5,col:7)
............
```
|
closed
|
2019-05-24T06:13:20Z
|
2019-09-06T23:14:57Z
|
https://github.com/pykaldi/pykaldi/issues/128
|
[] |
Winderxia
| 1 |
joerick/pyinstrument
|
django
| 348 |
CUDA Out Of Memory issue
|
During some testing I kept getting CUDA OOM errors while running code with pyinstrument where multiple models were run one after another.
While making sure no references to the tensors were kept in the Python code, I still kept getting CUDA OOM errors when using `pyinstrument`. But once it was disabled, the errors disappeared and my VRAM was freed as expected after each reference was deleted.
Is there an option to ensure pyinstrument clears its references to ONNX and torch tensors, especially after calling `del tensor`?
I'd like to keep using `pyinstrument`, but it's not feasible at the moment.
- Emil
- Emil
|
open
|
2024-10-28T15:33:07Z
|
2024-11-18T13:34:50Z
|
https://github.com/joerick/pyinstrument/issues/348
|
[] |
emil-peters
| 3 |
pydantic/FastUI
|
fastapi
| 134 |
Unable to view components when the app is mounted in another app.
|
If I use one FastAPI app which contains all the FastUI routes and I mount this app in another FastAPI app, I can't access any components in the frontend.
What am I doing wrong?
I tried to use the **root_path** parameter of FastAPI, but with no success.
To reproduce:
```python
from datetime import date

from fastapi import FastAPI, HTTPException
from fastapi.responses import HTMLResponse
from fastui import FastUI, AnyComponent, prebuilt_html, components as c
from fastui.components.display import DisplayMode, DisplayLookup
from fastui.events import GoToEvent, BackEvent
from pydantic import BaseModel, Field
import uvicorn

app = FastAPI()

class User(BaseModel):
    id: int
    name: str
    dob: date = Field(title="Date of Birth")

# define some users
users = [
    User(id=1, name="John", dob=date(1990, 1, 1)),
    User(id=2, name="Jack", dob=date(1991, 1, 1)),
    User(id=3, name="Jill", dob=date(1992, 1, 1)),
    User(id=4, name="Jane", dob=date(1993, 1, 1)),
]

@app.get("/api/", response_model=FastUI, response_model_exclude_none=True)
def users_table() -> list[AnyComponent]:
    """
    Show a table of four users, `/api` is the endpoint the frontend will connect to
    when a user visits `/` to fetch components to render.
    """
    return [
        c.Page(  # Page provides a basic container for components
            components=[
                c.Heading(text="Users", level=2),  # renders `<h2>Users</h2>`
                # c.Table is a generic component parameterized with the model used for rows
                c.Table[User](
                    data=users,
                    # define two columns for the table
                    columns=[
                        # the first is the users, name rendered as a link to their profile
                        DisplayLookup(field="name", on_click=GoToEvent(url="/user/{id}/")),
                        # the second is the date of birth, rendered as a date
                        DisplayLookup(field="dob", mode=DisplayMode.date),
                    ],
                ),
            ]
        ),
    ]

@app.get("/api/user/{user_id}/", response_model=FastUI, response_model_exclude_none=True)
def user_profile(user_id: int) -> list[AnyComponent]:
    """
    User profile page, the frontend will fetch this when the user visits `/user/{id}/`.
    """
    try:
        user = next(u for u in users if u.id == user_id)
    except StopIteration:
        raise HTTPException(status_code=404, detail="User not found")
    return [
        c.Page(
            components=[
                c.Heading(text=user.name, level=2),
                c.Link(components=[c.Text(text="Back")], on_click=BackEvent()),
                c.Details(data=user),
            ]
        ),
    ]

@app.get("/{path:path}")
async def html_landing() -> HTMLResponse:
    """Simple HTML page which serves the React app, comes last as it matches all paths."""
    return HTMLResponse(prebuilt_html(title="FastUI Demo"))

if __name__ == "__main__":
    other_app = FastAPI()
    other_app.mount("/foo", app)
    uvicorn.run(other_app, port=8200)
```
|
closed
|
2023-12-28T15:03:52Z
|
2024-07-19T06:26:37Z
|
https://github.com/pydantic/FastUI/issues/134
|
[] |
pbrochar
| 7 |
ivy-llc/ivy
|
pytorch
| 27,968 |
Fix Ivy Failing Test: tensorflow - elementwise.maximum
|
closed
|
2024-01-20T16:18:41Z
|
2024-01-25T09:54:03Z
|
https://github.com/ivy-llc/ivy/issues/27968
|
[
"Sub Task"
] |
samthakur587
| 0 |
|
gunthercox/ChatterBot
|
machine-learning
| 2,169 |
rgrgrgrg
|
rgrgrgrg
|
closed
|
2021-06-08T17:38:25Z
|
2021-10-02T20:49:03Z
|
https://github.com/gunthercox/ChatterBot/issues/2169
|
[] |
mgkw
| 1 |
koxudaxi/datamodel-code-generator
|
pydantic
| 1,823 |
Clean up Pydantic v2 Migration warnings
|
There are quite a few warnings generated by Pydantic v2 generated models that could be cleaned up to make a lot less noise for downstream project users.
Some examples:
```
datamodel_code_generator/parser/jsonschema.py:1663: PydanticDeprecatedSince20: The `parse_obj` method is deprecated; use `model_validate` instead. Deprecated in Pydantic V2.0 to be removed in V3.0. See Pydantic V2 Migration Guide at https://errors.pydantic.dev/2.5/migration/
root_obj = JsonSchemaObject.parse_obj(raw)
datamodel_code_generator/parser/jsonschema.py:299: PydanticDeprecatedSince20: The `__fields_set__` attribute is deprecated, use `model_fields_set` instead. Deprecated in Pydantic V2.0 to be removed in V3.0. See Pydantic V2 Migration Guide at https://errors.pydantic.dev/2.5/migration/
return 'default' in self.__fields_set__ or 'default_factory' in self.extras
```
|
open
|
2024-01-29T17:50:08Z
|
2024-01-30T17:37:31Z
|
https://github.com/koxudaxi/datamodel-code-generator/issues/1823
|
[
"enhancement"
] |
rdeaton-freenome
| 1 |
minivision-ai/photo2cartoon
|
computer-vision
| 4 |
Could you please release the source code of the WeChat Mini Program client?
|
Many thanks!
|
closed
|
2020-04-21T15:27:33Z
|
2020-04-22T08:35:41Z
|
https://github.com/minivision-ai/photo2cartoon/issues/4
|
[] |
Z863058
| 2 |
stanfordnlp/stanza
|
nlp
| 1,070 |
NER for Polish
|
I'd like to add an NER model for Polish. For now, I wonder what else is needed.
**Datasets**
- Char-LM: [Wikipedia Subcorpus](http://clip.ipipan.waw.pl/PolishWikipediaCorpus)
- NER annotations: [NKJP Corpus](http://clip.ipipan.waw.pl/NationalCorpusOfPolish)
**Baseline models**
- [char-lm based](http://mozart.ipipan.waw.pl/~ksaputa/stanza/saved_models-base-wikipedia.tar.gz)
- [BERT-based](http://mozart.ipipan.waw.pl/~ksaputa/stanza/saved_models-herbert-large-without-charlm.tar.gz) ([herbert-large](https://huggingface.co/allegro/herbert-large-cased))
**Results**
For char-lm model:
```
2022-06-28 13:39:24 INFO: Running NER tagger in predict mode
2022-06-28 13:39:25 INFO: Loading data with batch size 32...
2022-06-28 13:39:26 DEBUG: 38 batches created.
2022-06-28 13:39:26 INFO: Start evaluation...
2022-06-28 13:39:37 INFO: Score by entity:
Prec. Rec. F1
85.55 87.69 86.61
2022-06-28 13:39:37 INFO: Score by token:
Prec. Rec. F1
68.59 68.98 68.78
2022-06-28 13:39:37 INFO: NER tagger score:
2022-06-28 13:39:37 INFO: pl_nkjp 86.61
```
I could definitely improve these models further and share an update in the coming weeks.
I'd like to ask whether there is anything more I need to prepare to include these in the next Stanza release.
In particular, I'm not sure about:
- BERT integration; for now I added only the training parameter in [my version](https://github.com/k-sap/stanza)
- to what extent sharing the converted NER data & conversion code is needed
|
closed
|
2022-07-04T09:29:43Z
|
2022-09-14T19:48:55Z
|
https://github.com/stanfordnlp/stanza/issues/1070
|
[
"enhancement"
] |
k-sap
| 6 |
keras-team/keras
|
machine-learning
| 20,083 |
Incompatibility of TensorFlow/Keras Model Weights between Versions 2.15.0 and V3
|
Hi,
I have a significant number of models trained using TensorFlow 2.15.0 and Keras 2.15.0, saved in HDF5 format. Upon attempting to reuse these models with Keras 3.3.3, I discovered that the models are not backward compatible due to differences in naming conventions and structure of the HDF5 files.
**Observation**
Upon exploring the HDF5 files with both versions, I observed major differences in naming conventions between Keras v2 and Keras v3. Here is a small overview.
For Keras 3.3.3:
```bash
model_weights/Module/Model/Module/CrossStagePartialBlock_1024_4/CrossStagePartialBlock_1024_4_0/conv_1x1/batch_normalization_139/beta
model_weights/Module/Model/Module/CrossStagePartialBlock_1024_4/CrossStagePartialBlock_1024_4_0/conv_1x1/batch_normalization_139/gamma
model_weights/Module/Model/Module/CrossStagePartialBlock_1024_4/CrossStagePartialBlock_1024_4_0/conv_1x1/batch_normalization_139/moving_mean
model_weights/Module/Model/Module/CrossStagePartialBlock_1024_4/CrossStagePartialBlock_1024_4_0/conv_1x1/batch_normalization_139/moving_variance
model_weights/Module/Model/Module/CrossStagePartialBlock_1024_4/CrossStagePartialBlock_1024_4_0/conv_1x1/conv2d_142/kernel
model_weights/Module/Model/Module/CrossStagePartialBlock_1024_4/CrossStagePartialBlock_1024_4_0/conv_3x3/batch_normalization_140/beta
model_weights/Module/Model/Module/CrossStagePartialBlock_1024_4/CrossStagePartialBlock_1024_4_0/conv_3x3/batch_normalization_140/gamma
model_weights/Module/Model/Module/CrossStagePartialBlock_1024_4/CrossStagePartialBlock_1024_4_0/conv_3x3/batch_normalization_140/moving_mean
model_weights/Module/Model/Module/CrossStagePartialBlock_1024_4/CrossStagePartialBlock_1024_4_0/conv_3x3/batch_normalization_140/moving_variance
model_weights/Module/Model/Module/CrossStagePartialBlock_1024_4/CrossStagePartialBlock_1024_4_0/conv_3x3/conv2d_143/kernel
```
For Keras 2.15.0:
```bash
Module/Module/CrossStagePartialBlock_1024_4/CrossStagePartialBlock_1024_4_0/conv_1x1/batch_normalization_57/beta:0
Module/Module/CrossStagePartialBlock_1024_4/CrossStagePartialBlock_1024_4_0/conv_1x1/batch_normalization_57/gamma:0
Module/Module/CrossStagePartialBlock_1024_4/CrossStagePartialBlock_1024_4_0/conv_1x1/batch_normalization_57/moving_mean:0
Module/Module/CrossStagePartialBlock_1024_4/CrossStagePartialBlock_1024_4_0/conv_1x1/batch_normalization_57/moving_variance:0
Module/Module/CrossStagePartialBlock_1024_4/CrossStagePartialBlock_1024_4_0/conv_1x1/conv2d_57/kernel:0
Module/Module/CrossStagePartialBlock_1024_4/CrossStagePartialBlock_1024_4_0/conv_3x3/batch_normalization_58/beta:0
Module/Module/CrossStagePartialBlock_1024_4/CrossStagePartialBlock_1024_4_0/conv_3x3/batch_normalization_58/gamma:0
Module/Module/CrossStagePartialBlock_1024_4/CrossStagePartialBlock_1024_4_0/conv_3x3/batch_normalization_58/moving_mean:0
Module/Module/CrossStagePartialBlock_1024_4/CrossStagePartialBlock_1024_4_0/conv_3x3/batch_normalization_58/moving_variance:0
Module/Module/CrossStagePartialBlock_1024_4/CrossStagePartialBlock_1024_4_0/conv_3x3/conv2d_58/kernel:0
```
Besides the differences that can be easily seen, and are easy to change with `h5py`:
- Prefix: in Keras v3, the prefix `model_weights` is added.
- Suffix: in Keras v2, the suffix `:0` is appended.
- The model name appears after the first dataset of the HDF5 file.
The indexing of the layers and parameters also seems different.
Could you please provide guidance on how to properly convert or re-index these weights from Keras v2.15.0 to Keras v3.3.3?
Is there any documentation or tool available to handle this backward compatibility issue?
Thanks in advance for your assistance.
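A minimal sketch of the first two renames (a hypothetical helper written from the naming differences listed above; it does not attempt the layer re-indexing, which is the harder part):

```python
def v2_to_v3_name(v2_name: str) -> str:
    """Map a Keras v2 HDF5 weight path toward the v3 layout:
    drop the trailing ':0' (the TF variable output index) and
    add the 'model_weights/' prefix that Keras v3 uses."""
    if v2_name.endswith(":0"):
        v2_name = v2_name[:-2]
    return "model_weights/" + v2_name
```

Applied with `h5py` over each dataset key, this handles the prefix and suffix; the per-layer counter offsets (e.g. `batch_normalization_57` vs `_139`) would still have to be remapped by position.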
|
closed
|
2024-08-02T09:26:58Z
|
2024-09-05T01:58:29Z
|
https://github.com/keras-team/keras/issues/20083
|
[
"type:bug/performance",
"stat:awaiting response from contributor",
"stale"
] |
estebvac
| 5 |
pydantic/logfire
|
pydantic
| 838 |
TypeError: MeterProvider.get_meter() got multiple values for argument 'version' during FastAPI instrumentation
|
### Description
Hi Logfire team,
I'm encountering an issue when instrumenting my FastAPI application with Logfire.
When I call logfire.instrument_fastapi, I get the following error:
```
TypeError: MeterProvider.get_meter() got multiple values for argument 'version'
```
The error appears to be triggered from within Logfire's metrics integration when it calls `provider.get_meter()`. I suspect that the `version` argument is inadvertently being passed more than once (perhaps both positionally and as a keyword).
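For context, this class of `TypeError` arises whenever a caller supplies the same argument both positionally and by keyword; a toy illustration (unrelated to the actual Logfire/OpenTelemetry signatures):

```python
def get_meter(name, version=None, schema_url=None):
    """Stand-in with a keyword argument, like the real get_meter."""
    return (name, version, schema_url)

# Passing 'version' positionally AND as a keyword triggers the error:
try:
    get_meter("app", "1.0", version="1.0")
except TypeError as exc:
    print(exc)  # ... got multiple values for argument 'version'
```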
Minimal Reproduction Code:
```python
import logfire
from fastapi import FastAPI
app = FastAPI()
# This call triggers the error:
logfire.instrument_fastapi(app=app, excluded_urls=("/metrics", "/health", "/docs", "/openapi.json", "/static/*"))
```
Questions:
Is this a known issue with the Logfire integration for FastAPI?
Could this be due to a compatibility problem with a specific version of OpenTelemetry?
Are there any workarounds or fixes in progress to resolve this error?
Thanks in advance for your help.
### Python, Logfire & OS Versions, related packages (not required)
```TOML
logfire="3.5.1"
platform="macOS-15.2-arm64-arm-64bit"
python="3.12.8 (main, Dec 3 2024, 18:42:41) [Clang 16.0.0 (clang-1600.0.26.4)]"
[related_packages]
requests="2.32.3"
pydantic="2.10.5"
fastapi="0.115.7"
protobuf="5.29.3"
rich="13.9.4"
executing="2.1.0"
opentelemetry-api="1.29.0"
opentelemetry-exporter-otlp-proto-common="1.29.0"
opentelemetry-exporter-otlp-proto-http="1.29.0"
opentelemetry-instrumentation="0.50b0"
opentelemetry-instrumentation-asgi="0.50b0"
opentelemetry-instrumentation-fastapi="0.50b0"
opentelemetry-instrumentation-httpx="0.50b0"
opentelemetry-proto="1.29.0"
opentelemetry-sdk="1.29.0"
opentelemetry-semantic-conventions="0.50b0"
opentelemetry-util-http="0.50b0"
```
|
closed
|
2025-02-05T09:46:06Z
|
2025-02-05T10:50:49Z
|
https://github.com/pydantic/logfire/issues/838
|
[] |
alon710
| 2 |
roboflow/supervision
|
computer-vision
| 1,152 |
ultralytics_stream_example does not work
|
### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar bug report.
### Bug
I ran the ultralytics_stream_example file for the time-in-zone example, but no tracking results were shown. Besides the deprecated decorator, there are 2 bugs:
1. When passing frame.image to ultralytics to get the detection result, it must be frame[0].image. (fixed)
2. When passing the detection to the custom sink's on_prediction method. (not fixed yet)
Please check them out.
### Environment
_No response_
### Minimal Reproducible Example
_No response_
### Additional
_No response_
### Are you willing to submit a PR?
- [x] Yes I'd like to help by submitting a PR!
|
closed
|
2024-04-29T08:29:57Z
|
2024-04-30T09:17:43Z
|
https://github.com/roboflow/supervision/issues/1152
|
[
"bug"
] |
tgbaoo
| 8 |
randyzwitch/streamlit-folium
|
streamlit
| 92 |
Dynamic width foliummap stuck at 50%
|
When setting width=None to leverage the new dynamic-width functionality, maps are displayed as if they were DualMap (with width 50%).
I'm not good with web dev, but I think the issue is related to the HTML element .float-child, which has a width of 50% even for single maps.
If my explanation isn't clear, please try out the script below; you'll see the issue is very obvious.
```python
import streamlit as st
from streamlit_folium import folium_static, st_folium
import folium
m = folium.Map(location=[39.949610, -75.150282], zoom_start=16)
st_folium(m, width=None)
```
|
closed
|
2022-11-16T10:48:38Z
|
2022-12-08T13:30:29Z
|
https://github.com/randyzwitch/streamlit-folium/issues/92
|
[] |
Berhinj
| 0 |
hpcaitech/ColossalAI
|
deep-learning
| 5,245 |
Implement speculative decoding
|
Development branch: https://github.com/hpcaitech/ColossalAI/tree/feat/speculative-decoding
In speculative decoding, or assisted decoding, both a drafter model (small model) and a main model (large model) are used. The drafter model generates a few tokens sequentially, and then the main model validates those candidate tokens in parallel and accepts the validated ones. The decoding process is sped up because the latency of speculating multiple tokens with the drafter model is lower than that of the main model.
We're going to support speculative decoding using the inference engine, with optimized kernels and cache management for the main model.
Additionally, GLIDE, a modified draft-model architecture that reuses key and value caches from the main model, is expected to be supported. It improves the acceptance rate and increases the speed-up ratio. Details can be found in the research paper GLIDE with a CAPE - A Low-Hassle Method to Accelerate Speculative Decoding on [arXiv](https://arxiv.org/pdf/2402.02082.pdf).
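The draft-then-verify loop described above can be sketched in a few lines of plain Python (toy stand-ins for the two models; nothing here reflects the actual ColossalAI engine API):

```python
def speculative_decode(draft_next, verify, prompt, k=4, max_len=12):
    """Toy draft-then-verify loop. `draft_next(seq)` is the cheap drafter
    proposing one token at a time; `verify(seq, draft)` stands in for the
    main model's parallel validation and returns (number of accepted draft
    tokens, the main model's replacement token to use after a rejection)."""
    out = list(prompt)
    while len(out) < max_len:
        # 1) drafter speculates k candidate tokens sequentially (low latency)
        draft = []
        for _ in range(k):
            draft.append(draft_next(out + draft))
        # 2) main model validates all k candidates in a single parallel pass
        n_accepted, correction = verify(out, draft)
        out.extend(draft[:n_accepted])
        if n_accepted < len(draft):
            # on a rejection, the main model supplies the replacement token
            out.append(correction)
    return out[:max_len]
```

The speed-up comes from step 2: one main-model pass can accept up to k tokens, instead of one main-model pass per token.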
|
closed
|
2024-01-09T08:34:54Z
|
2024-05-08T02:34:07Z
|
https://github.com/hpcaitech/ColossalAI/issues/5245
|
[
"enhancement"
] |
CjhHa1
| 0 |
chezou/tabula-py
|
pandas
| 59 |
'convert_into()' can convert a PDF file to JSON, but executing 'read_pdf()' as JSON gives a UTF-8 encoding error.
|
# Summary of your issue
'convert_into()' can convert a PDF file to JSON, but executing 'read_pdf()' as JSON gives a UTF-8 encoding error.
# Environment
Write and check your environment.
- [ ] `python --version`: 3.6.1.final.0, jupyter notebook 5.0.0
- [ ] `java -version`: ?
- [ ] OS and its version: win64, anaconda 4.3.22
- [ ] Your PDF URL: https://www.dropbox.com/s/rg11o0iitia4zua/QA-17H104161-2017-09-22-DO.pdf?dl=0
# What did you do when you faced the problem?
I don't understand why the convert_into function works fine with this pdf, but passing the same pdf into read_pdf() yields an encoding error. Shouldn't the default options for both functions be identical?
## Example code:
```python
from tabula import read_pdf
from tabula import convert_into
import pandas
file = 'T:/baysestuaries/Data/WDFT-Coastal/db_archive/QA/QA-17H104161-2017-09-22-DO.pdf'
convert_into(file,"test.json", output_format='json')
df = read_pdf(file, output_format='json')
```
## Output:
```
---------------------------------------------------------------------------
UnicodeDecodeError Traceback (most recent call last)
<ipython-input-208-fc7babef8e03> in <module>()
----> 1 df = read_pdf(file, output_format='json')
C:\Users\ETurner\AppData\Local\Continuum\Anaconda3\lib\site-packages\tabula\wrapper.py in read_pdf(input_path, output_format, encoding, java_options, pandas_options, multiple_tables, **kwargs)
90
91 else:
---> 92 return json.loads(output.decode(encoding))
93
94 else:
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xb0 in position 5134: invalid start byte
```
## What did you intend to be?
Ideally, the behavior of both functions should be identical. I am actually trying to read this PDF as a pandas DataFrame, but it is very messy. Just reading it as JSON works for me, so I can parse out the items I need. However, I don't want to have to convert files first and waste disk space.
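The traceback shows `read_pdf` decoding the subprocess output as UTF-8 by default, and byte `0xb0` is a valid Latin-1 byte (`°`) but an invalid UTF-8 start byte, as a small demo confirms. Passing `encoding='latin-1'` to `read_pdf` (the parameter visible in the traceback's signature) may therefore be a workaround, though that is an untested guess:

```python
raw = b"5\xb0C"  # 0xb0 is the degree sign in Latin-1 but an invalid UTF-8 start byte

try:
    raw.decode("utf-8")
except UnicodeDecodeError as exc:
    print(exc)  # 'utf-8' codec can't decode byte 0xb0 ...

print(raw.decode("latin-1"))
```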
|
closed
|
2017-10-10T16:07:24Z
|
2022-05-16T08:44:33Z
|
https://github.com/chezou/tabula-py/issues/59
|
[] |
evanleeturner
| 8 |
pytest-dev/pytest-xdist
|
pytest
| 211 |
looponfail tests broken on more recent pytest
|
#210 introduced an xfail for a looponfail test; we should take a look at whether we want to fix that or solve it via the port into pytest core.
|
closed
|
2017-08-09T12:37:21Z
|
2021-11-18T13:09:23Z
|
https://github.com/pytest-dev/pytest-xdist/issues/211
|
[] |
RonnyPfannschmidt
| 2 |
TencentARC/GFPGAN
|
pytorch
| 169 |
On Windows: AssertionError: An object named 'ResNetArcFace' was already registered in 'arch' registry!
|
```
runfile('../GFPGAN/inference_gfpgan.py', args='-i inputs/whole_imgs -o results -v 1.3 -s 2', wdir='../GFPGAN')
Traceback (most recent call last):
  File "..\GFPGAN\inference_gfpgan.py", line 9, in <module>
    from gfpgan import GFPGANer
  File "..\GFPGAN\gfpgan\__init__.py", line 2, in <module>
    from .archs import *
  File "..\GFPGAN\gfpgan\archs\__init__.py", line 10, in <module>
    _arch_modules = [importlib.import_module(f'gfpgan.archs.{file_name}') for file_name in arch_filenames]
  File "..\GFPGAN\gfpgan\archs\__init__.py", line 10, in <listcomp>
    _arch_modules = [importlib.import_module(f'gfpgan.archs.{file_name}') for file_name in arch_filenames]
  File "..\anaconda3\lib\importlib\__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "..\GFPGAN\gfpgan\archs\arcface_arch.py", line 172, in <module>
    class ResNetArcFace(nn.Module):
  File "..\anaconda3\lib\site-packages\basicsr\utils\registry.py", line 53, in deco
    self._do_register(name, func_or_class)
  File "..\anaconda3\lib\site-packages\basicsr\utils\registry.py", line 39, in _do_register
    assert (name not in self._obj_map), (f"An object named '{name}' was already registered "
AssertionError: An object named 'ResNetArcFace' was already registered in 'arch' registry!
```
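This assertion typically fires when the arch module ends up imported twice (e.g. under two different module paths), so the registration decorator runs twice against the same registry. A toy reproduction of the pattern (not basicsr's actual implementation):

```python
class Registry:
    """Minimal name -> class registry, mirroring the assert in the traceback."""
    def __init__(self, name):
        self._name = name
        self._obj_map = {}

    def register(self, cls):
        name = cls.__name__
        assert name not in self._obj_map, (
            f"An object named '{name}' was already registered in '{self._name}' registry!"
        )
        self._obj_map[name] = cls
        return cls

ARCH = Registry("arch")

@ARCH.register
class ResNetArcFace:
    pass

# A second import of the defining module would run the decorator again:
try:
    ARCH.register(ResNetArcFace)
except AssertionError as exc:
    print(exc)
```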
|
open
|
2022-03-02T10:16:38Z
|
2023-02-10T18:07:54Z
|
https://github.com/TencentARC/GFPGAN/issues/169
|
[] |
Mehrancd
| 2 |
vitalik/django-ninja
|
pydantic
| 389 |
Throwing a HttpError with `data`
|
I enjoy using `HttpError` as it is the simplest way to return an error response with a message anywhere in the project.
Yet, I've encountered some cases where I wish to return an error response with additional data (e.g. an error code or a dict of error fields and failure reasons).
Since a dict (`{ "detail": "{message}" }`) is returned by default, I thought it would be better to make use of it.
Here is my suggestion, and I would like to make a PR if you agree with this feature. :)
```python3
# ninja/errors.py
class HttpError(Exception):
    # Add a new argument: data
    def __init__(self, status_code: int, message: str, data: dict = None) -> None:
        self.status_code = status_code
        self.data = data  # <<<
        super().__init__(message)

def _default_http_error(
    request: HttpRequest, exc: HttpError, api: "NinjaAPI"
) -> HttpResponse:
    if exc.data is None:
        return api.create_response(request, {"detail": str(exc)}, status=exc.status_code)
    return api.create_response(request, exc.data, status=exc.status_code)  # <<<

# my_api.py
@api.get("/foo")
def some_operation(request):
    raise HttpError(400, "some error message", {"bar": ...})  # <<< Usage
```
|
closed
|
2022-03-14T14:03:40Z
|
2022-03-16T00:15:14Z
|
https://github.com/vitalik/django-ninja/issues/389
|
[] |
ach0o
| 4 |
databricks/koalas
|
pandas
| 1,974 |
HIVE JDBC Connection Using Pyspark-Koalas returns Column names as row values
|
I am using PySpark to connect to Hive and fetch some data. The issue is that every row it returns contains the column names as values. The column names themselves are correct; only the row values are wrong.
Here is my Code:
```python
hive_jar_path="C:Users/shakir/Downloads/ClouderaHiveJDBC-2.6.11.1014/ClouderaHiveJDBC-2.6.11.1014/ClouderaHiveJDBC42-2.6.11.1014/HiveJDBC42.jar"
print(hive_jar_path)
print("")
import os
os.environ["HADOOP_HOME"]="c:/users/shakir/downloads/spark/spark/spark"
import os
os.environ["SPARK_HOME"]="c:/users/shakir/downloads/spark/spark/spark"
import findspark
findspark.init()
from pyspark import SparkContext, SparkConf, SQLContext
from pyspark.sql import SparkSession
import uuid
spark = SparkSession \
.builder \
.appName("Python Spark SQL Hive integration example") \
.config("spark.sql.warehouse.dir", "hdfs://...../user/hive/warehouse/..../....")
spark.config("spark.driver.extraClassPath", hive_jar_path)
spark.config("spark.sql.hive.llap", "true")
spark.config("spark.sql.warehouse.dir", "hdfs://...../user/hive/warehouse/..../....")
spark=spark.enableHiveSupport().getOrCreate()
import databricks.koalas as ks
print("Reading Data from Hive . . .")
options={
"fetchsize":1000,
"inferSchema": True,
"fileFormat":"orc",
"inputFormat":"org.apache.hadoop.hive.ql.io.orc.OrcInputFormat",
"outputFormat":"org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat",
"driver":"org.apache.hive.jdbc.HiveDriver",
}
df = ks.read_sql("SELECT * FROM PERSONS LIMIT 3", connection_string,options=options)
print("Done")
print(df)
```
Here is the code Output:
```
+------+-----+---------+
| Name | Age | Address |
+------+-----+---------+
| Name | Age | Address |
+------+-----+---------+
| Name | Age | Address |
+------+-----+---------+
| Name | Age | Address |
+------+-----+---------+
```
|
closed
|
2020-12-17T09:57:01Z
|
2020-12-21T11:16:22Z
|
https://github.com/databricks/koalas/issues/1974
|
[
"not a koalas issue"
] |
shakirshakeelzargar
| 4 |
CorentinJ/Real-Time-Voice-Cloning
|
python
| 261 |
No GPU - can we use Google Colab?
|
Noob here without a GPU. How would someone use this with Google Colab?
|
closed
|
2020-01-10T04:36:44Z
|
2020-07-04T23:22:10Z
|
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/261
|
[] |
infiniti350
| 2 |
KevinMusgrave/pytorch-metric-learning
|
computer-vision
| 194 |
Update Docs For Computational Performance
|
See #192. For other people who encounter performance problems, perhaps a 'performance optimization' section can be added to the docs that describes first using a miner of [type 2](https://github.com/KevinMusgrave/pytorch-metric-learning/issues/192#issuecomment-689814355) and either chaining the results into another miner or into a loss function.
|
open
|
2020-09-09T20:57:29Z
|
2020-09-09T23:52:59Z
|
https://github.com/KevinMusgrave/pytorch-metric-learning/issues/194
|
[
"documentation"
] |
AlexSchuy
| 0 |
dmlc/gluon-cv
|
computer-vision
| 1,434 |
Faster-RCNN meets CUDA illegal memory access error when converting into symbol model
|
Hi,
I tried to convert the Faster-RCNN model from the gluoncv model zoo into symbol API format. When I do inference on CPU, everything works well. However, when inferring on GPU, it raises a CUDA error.
The full code is as follows:
```python
import time
import mxnet as mx
from gluoncv import model_zoo
from tqdm import tqdm
ctx = mx.gpu()
model = model_zoo.get_model('faster_rcnn_resnet50_v1b_voc', pretrained=True)
model.hybridize(static_alloc=True, static_shape=True)
x = mx.nd.random.normal(shape=[1, 3, 300, 300])
_ = model(x)
model.export('temp', 0)
sym, args, aux = mx.model.load_checkpoint('temp', 0)
for k, v in args.items():
    args[k] = v.as_in_context(ctx)
args['data'] = x.as_in_context(ctx)
executor = sym.bind(ctx, args=args, aux_states=aux, grad_req='null')
start = time.time()
for i in tqdm(range(100)):
    executor.forward(is_train=False)
mx.nd.waitall()
end = time.time()
print('Elapsed time:', end-start)
```
Here is the error message:
```
Traceback (most recent call last):
File "benchmark.py", line 26, in <module>
mx.nd.waitall()
File "/home/elichen/faster-rcnn-benchmark/env/lib64/python3.7/site-packages/mxnet/ndarray/ndarray.py", line 166, in waitall
check_call(_LIB.MXNDArrayWaitAll())
File "/home/elichen/faster-rcnn-benchmark/env/lib64/python3.7/site-packages/mxnet/base.py", line 253, in check_call
raise MXNetError(py_str(_LIB.MXGetLastError()))
mxnet.base.MXNetError: [02:19:17] src/storage/./pooled_storage_manager.h:97: CUDA: an illegal memory access was encountered
```
Did anyone meet this error?
|
closed
|
2020-09-02T05:19:17Z
|
2021-05-22T06:40:29Z
|
https://github.com/dmlc/gluon-cv/issues/1434
|
[
"Stale"
] |
JIElite
| 1 |
graphdeco-inria/gaussian-splatting
|
computer-vision
| 207 |
Ply file
|
May I ask whether the PLY file obtained in the output is different from a typical point cloud file? Can I convert it to a mesh for use?
|
closed
|
2023-09-18T03:49:02Z
|
2023-09-28T16:23:40Z
|
https://github.com/graphdeco-inria/gaussian-splatting/issues/207
|
[] |
HeptagramV
| 1 |
koxudaxi/fastapi-code-generator
|
fastapi
| 35 |
Change datamodel-code-generator version
|
`fastapi-code-generator` depends on [datamodel-code-generator](https://github.com/koxudaxi/datamodel-code-generator)
I'm working on [refactoring the internal interface](https://github.com/koxudaxi/datamodel-code-generator/issues/237) of `datamodel-code-generator`.
I will resolve the issues after the refactoring is done.
https://github.com/koxudaxi/fastapi-code-generator/issues/27
https://github.com/koxudaxi/fastapi-code-generator/issues/26
https://github.com/koxudaxi/fastapi-code-generator/issues/25
https://github.com/koxudaxi/fastapi-code-generator/issues/24
https://github.com/koxudaxi/fastapi-code-generator/issues/15
|
closed
|
2020-10-13T09:25:42Z
|
2020-11-04T17:16:35Z
|
https://github.com/koxudaxi/fastapi-code-generator/issues/35
|
[
"released"
] |
koxudaxi
| 0 |
vanna-ai/vanna
|
data-visualization
| 128 |
Function to duplicate a model
|
We might want a function to duplicate a model. It would look something like this:
```python
def duplicate_model(from_model: str, to_model: str, types: list = ['ddl', 'documentation', 'sql']):
    vn.set_model(from_model)

    # Get the training data from the source model
    training_data = vn.get_training_data()

    # Set the model to the destination model
    vn.set_model(to_model)

    for ddl in training_data.query('training_data_type == "ddl"').content:
        vn.train(ddl=ddl)
```
The code above needs to be modified to handle filters for ddl, documentation, and sql
Usage would look something like this:
```python
vn.duplicate_model(from_model='chinook', to_model='chinook-duplicate', types=['ddl'])
```
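The type filtering that the snippet defers could be sketched generically; here `rows` stands in for the training-data table, and the real code would pass each resulting kwarg dict to `vn.train` (an assumption about the API based on the `vn.train(ddl=ddl)` call above):

```python
def training_kwargs(rows, types=('ddl', 'documentation', 'sql')):
    """Turn training-data rows into the keyword arguments vn.train would
    receive, keeping only the requested training_data_type values."""
    calls = []
    for row in rows:
        t = row['training_data_type']
        if t in types:
            calls.append({t: row['content']})
    return calls
```

For example, `training_kwargs(rows, types=('ddl',))` keeps only the DDL rows, matching `duplicate_model(..., types=['ddl'])`.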
|
closed
|
2023-10-13T16:02:55Z
|
2024-01-17T05:57:38Z
|
https://github.com/vanna-ai/vanna/issues/128
|
[
"enhancement"
] |
zainhoda
| 0 |
gradio-app/gradio
|
machine-learning
| 10,277 |
Progress bar does not show up on the first run.
|
### Describe the bug
This is a minimal example explaining the problem I met: the progress bar works fine from the second time I click the "Show Reverse Result" button onward, but it does not show up the first time.

### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
import time

with gr.Blocks() as demo:
    words_list = gr.Dropdown(['banana', 'alpha', 'rocket'], interactive=True)
    show_btn = gr.Button('Show Reverse Result')
    results = gr.Textbox(value='', visible=False)

    def slowly_reverse(word, progress=gr.Progress()):
        progress(0, desc="Starting")
        time.sleep(1)
        progress(0.05)
        new_string = ""
        for letter in progress.tqdm(word, desc="Reversing"):
            time.sleep(0.25)
            new_string = letter + new_string
        results = gr.Textbox(value=new_string, visible=True)
        return results

    show_btn.click(fn=slowly_reverse, inputs=words_list, outputs=results)

demo.launch(server_name="0.0.0.0")
```
### Screenshot
_No response_
### Logs
_No response_
### System Info
```shell
Gradio Environment Information:
------------------------------
Operating System: Linux
gradio version: 5.8.0
gradio_client version: 1.5.1
```
### Severity
I can work around it
|
closed
|
2025-01-02T13:50:18Z
|
2025-01-02T15:03:14Z
|
https://github.com/gradio-app/gradio/issues/10277
|
[
"bug"
] |
RyougiShiki-214
| 1 |
pyg-team/pytorch_geometric
|
deep-learning
| 9,299 |
Index out of range in SchNet on a modification of QM9 dataset.
|
### 🐛 Describe the bug
Hi!
The idea of the code below is to run a custom version of SchNet on SMILES representations of molecules. Code:
```python
print("Importing packages...")
import torch
import torch.nn.functional as F
from torch_geometric.loader import DataLoader
from torch_geometric.datasets import QM9
from torch_geometric.nn import SchNet
from tqdm import tqdm
import pickle
import os

print("Defining functions...")

# Define a function to convert SMILES to PyG data objects
def smiles_to_pyg_graph(smiles):
    from rdkit import Chem
    from rdkit.Chem import AllChem
    from torch_geometric.data import Data
    try:
        mol = Chem.MolFromSmiles(smiles)
    except:
        return None
    if mol is None:
        return None
    # Add Hydrogens to the molecule
    mol = Chem.AddHs(mol)
    AllChem.EmbedMolecule(mol)
    # Convert the molecule to a graph
    node_features = []
    for atom in mol.GetAtoms():
        node_features.append(atom_feature(atom))
    # node_features = torch.tensor(node_features, dtype=torch.float)
    node_features = torch.tensor(node_features, dtype=torch.long)
    edge_indices = []
    edge_features = []
    for bond in mol.GetBonds():
        start, end = bond.GetBeginAtomIdx(), bond.GetEndAtomIdx()
        edge_indices.append((start, end))
        edge_indices.append((end, start))
        edge_features.append(bond_feature(bond))
        edge_features.append(bond_feature(bond))
    edge_indices = torch.tensor(edge_indices).t().to(torch.long)
    # edge_features = torch.tensor(edge_features, dtype=torch.float)
    edge_features = torch.tensor(edge_features, dtype=torch.long)
    return Data(x=node_features, edge_index=edge_indices, edge_attr=edge_features)

# Helper functions for node and edge features
def atom_feature(atom):
    return [atom.GetAtomicNum(), atom.GetFormalCharge()]

def bond_feature(bond):
    return [int(bond.GetBondTypeAsDouble())]

# Load dataset and convert SMILES to PyG data objects
print("Creating dataset...")
# if we have cached data, load it
if os.path.exists('data/qm9_pyg_data.pkl'):
    print("Loading data from cache...")
    with open('data/qm9_pyg_data.pkl', 'rb') as f:
        data_list = pickle.load(f)
else:
    print("Creating dataset from scratch...")
    dataset = QM9(root='data')
    data_list = []
    # for i in tqdm(range(len(dataset))):
    for i in tqdm(range(1000)):
        smiles = dataset[i]['smiles']
        data = smiles_to_pyg_graph(smiles)
        if data is not None:
            data_list.append(data)
    # Save data_list to a pickle file
    with open('data/qm9_pyg_data.pkl', 'wb') as f:
        pickle.dump(data_list, f)

print(f"Example data entry in the data_list: {data_list[0]}")

# Define a SchNet model
class MySchNet(torch.nn.Module):
    def __init__(self, num_features, hidden_channels, num_targets):
        super(MySchNet, self).__init__()
        self.schnet = SchNet(hidden_channels, num_features)
        self.lin = torch.nn.Linear(hidden_channels, num_targets)

    def forward(self, data):
        print(f'print from forward: data.x.shape: {data.x.shape}')
        print(f'print from forward: data.edge_index.shape: {data.edge_index.shape}')
        print(f'print from forward: data.edge_attr.shape: {data.edge_attr.shape}')
        out = self.schnet(data.x, data.edge_index, data.edge_attr)
out = self.lin(out)
return out
# Instantiate the model and define other training parameters
print("Defining model...")
model = MySchNet(num_features=2, hidden_channels=64, num_targets=1)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
criterion = torch.nn.MSELoss()
```
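Side note on the failure that follows (an editorial sketch, not PyG code): the stack trace shows `SchNet.forward(self, z, pos, batch)` — the first positional argument is treated as atomic numbers and fed straight into an `nn.Embedding` lookup. Passing `data.x` (whose second column is a formal charge, possibly negative) therefore hands the embedding table indices it cannot accept. A plain-Python stand-in for that lookup (`embed` and `table_size` are illustrative names, not the PyG implementation):

```python
# Stand-in for the nn.Embedding(z) lookup that raises in the traceback below.
def embed(indices, table_size=100):
    table = [[i] for i in range(table_size)]  # plays the role of Embedding(100, hidden)
    rows = []
    for i in indices:
        if not 0 <= i < table_size:           # nn.Embedding rejects such indices
            raise IndexError("index out of range in self")
        rows.append(table[i])
    return rows

print(embed([1, 6, 7, 8]) is not None)  # atomic numbers alone index fine: True
try:
    embed([7, -1])                      # e.g. a negative formal charge from data.x
except IndexError as e:
    print("IndexError:", e)             # the same failure mode as reported
```

Calling the model as `self.schnet(z, pos, batch)` — a 1-D tensor of atomic numbers plus 3-D coordinates (e.g. from RDKit's `mol.GetConformer().GetPositions()`) — matches the signature the traceback shows; the edge tensors are not what SchNet's first arguments mean.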
The corresponding output before the exception:
```bash
Training...
Batch size: 32
type(batch.x): <class 'torch.Tensor'>
batch.x.dtype: torch.int64
Batch edge_index shape: torch.Size([2, 834])
Batch edge_index dtype: torch.int64
Batch edge_attr shape: torch.Size([834, 1])
Batch edge_attr dtype: torch.int64
pirnt from forward: data.x.shape: torch.Size([419, 2])
pirnt from forward: data.edge_index.shape: torch.Size([2, 834])
pirnt from forward: data.edge_attr.shape: torch.Size([834, 1])
```
And an Exception message:
```bash
---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
Cell In[5], line 17
     15 print(f'Batch edge_attr dtype: {batch.edge_attr.dtype}')
     16 optimizer.zero_grad()
---> 17 output = model(batch)
     18 loss = criterion(output, batch.y.view(-1, 1)) # Assuming targets are stored in batch.y
     19 loss.backward()

File ~/.pyenv/versions/mambaforge/envs/pyg/lib/python3.12/site-packages/torch/nn/modules/module.py:1532, in Module._wrapped_call_impl(self, *args, **kwargs)
   1530     return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
   1531 else:
-> 1532     return self._call_impl(*args, **kwargs)

File ~/.pyenv/versions/mambaforge/envs/pyg/lib/python3.12/site-packages/torch/nn/modules/module.py:1541, in Module._call_impl(self, *args, **kwargs)
   1536 # If we don't have any hooks, we want to skip the rest of the logic in
   1537 # this function, and just call forward.
   1538 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
   1539         or _global_backward_pre_hooks or _global_backward_hooks
   1540         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1541     return forward_call(*args, **kwargs)
   1543 try:
   1544     result = None

Cell In[4], line 14
     12 print(f'pirnt from forward: data.edge_index.shape: {data.edge_index.shape}')
     13 print(f'pirnt from forward: data.edge_attr.shape: {data.edge_attr.shape}')
---> 14 out = self.schnet(data.x, data.edge_index, data.edge_attr)
     15 out = self.lin(out)
     16 return out

File ~/.pyenv/versions/mambaforge/envs/pyg/lib/python3.12/site-packages/torch/nn/modules/module.py:1532, in Module._wrapped_call_impl(self, *args, **kwargs)
   1530     return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
   1531 else:
-> 1532     return self._call_impl(*args, **kwargs)

File ~/.pyenv/versions/mambaforge/envs/pyg/lib/python3.12/site-packages/torch/nn/modules/module.py:1541, in Module._call_impl(self, *args, **kwargs)
   1536 # If we don't have any hooks, we want to skip the rest of the logic in
   1537 # this function, and just call forward.
   1538 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
   1539         or _global_backward_pre_hooks or _global_backward_hooks
   1540         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1541     return forward_call(*args, **kwargs)
   1543 try:
   1544     result = None

File ~/.pyenv/versions/mambaforge/envs/pyg/lib/python3.12/site-packages/torch_geometric/nn/models/schnet.py:284, in SchNet.forward(self, z, pos, batch)
    271 r"""Forward pass.
    272
    273 Args:
    (...)
    280     (default: :obj:`None`)
    281 """
    282 batch = torch.zeros_like(z) if batch is None else batch
--> 284 h = self.embedding(z)
    285 edge_index, edge_weight = self.interaction_graph(pos, batch)
    286 edge_attr = self.distance_expansion(edge_weight)

File ~/.pyenv/versions/mambaforge/envs/pyg/lib/python3.12/site-packages/torch/nn/modules/module.py:1532, in Module._wrapped_call_impl(self, *args, **kwargs)
   1530     return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
   1531 else:
-> 1532     return self._call_impl(*args, **kwargs)

File ~/.pyenv/versions/mambaforge/envs/pyg/lib/python3.12/site-packages/torch/nn/modules/module.py:1541, in Module._call_impl(self, *args, **kwargs)
   1536 # If we don't have any hooks, we want to skip the rest of the logic in
   1537 # this function, and just call forward.
   1538 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
   1539         or _global_backward_pre_hooks or _global_backward_hooks
   1540         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1541     return forward_call(*args, **kwargs)
   1543 try:
   1544     result = None

File ~/.pyenv/versions/mambaforge/envs/pyg/lib/python3.12/site-packages/torch/nn/modules/sparse.py:163, in Embedding.forward(self, input)
    162 def forward(self, input: Tensor) -> Tensor:
--> 163     return F.embedding(
    164         input, self.weight, self.padding_idx, self.max_norm,
    165         self.norm_type, self.scale_grad_by_freq, self.sparse)

File ~/.pyenv/versions/mambaforge/envs/pyg/lib/python3.12/site-packages/torch/nn/functional.py:2264, in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
   2258 # Note [embedding_renorm set_grad_enabled]
   2259 # XXX: equivalent to
   2260 # with torch.no_grad():
   2261 #   torch.embedding_renorm_
   2262 # remove once script supports set_grad_enabled
   2263 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 2264 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)

IndexError: index out of range in self
```
Thanks for reading! I appreciate any feedback regarding the issue.
Best regards,
Anton.
### Versions
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.12.3 | packaged by conda-forge | (main, Apr 15 2024, 18:38:13) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-1058-aws-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 10.1.243
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Tesla V100-SXM2-16GB
Nvidia driver version: 535.171.04
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 79
Model name: Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz
Stepping: 1
CPU MHz: 3000.000
CPU max MHz: 3000.0000
CPU min MHz: 1200.0000
BogoMIPS: 4600.02
Hypervisor vendor: Xen
Virtualization type: full
L1d cache: 128 KiB
L1i cache: 128 KiB
L2 cache: 1 MiB
L3 cache: 45 MiB
NUMA node0 CPU(s): 0-7
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single pti fsgsbase bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx xsaveopt
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] pytorch-lightning==2.2.3
[pip3] torch==2.3.0
[pip3] torch_cluster==1.6.3+pt22cu121
[pip3] torch-ema==0.3
[pip3] torch_geometric==2.5.3
[pip3] torch_scatter==2.1.2+pt22cu121
[pip3] torch_sparse==0.6.18+pt22cu121
[pip3] torch_spline_conv==1.2.2+pt22cu121
[pip3] torchaudio==2.3.0
[pip3] torchmetrics==1.0.1
[pip3] torchvision==0.18.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] pytorch-lightning 2.2.3 pypi_0 pypi
[conda] torch 2.3.0 pypi_0 pypi
[conda] torch-cluster 1.6.3+pt22cu121 pypi_0 pypi
[conda] torch-ema 0.3 pypi_0 pypi
[conda] torch-geometric 2.5.3 pypi_0 pypi
[conda] torch-scatter 2.1.2+pt22cu121 pypi_0 pypi
[conda] torch-sparse 0.6.18+pt22cu121 pypi_0 pypi
[conda] torch-spline-conv 1.2.2+pt22cu121 pypi_0 pypi
[conda] torchaudio 2.3.0 pypi_0 pypi
[conda] torchmetrics 1.0.1 pypi_0 pypi
[conda] torchvision 0.18.0 pypi_0 pypi
|
open
|
2024-05-06T15:47:01Z
|
2024-05-13T09:22:23Z
|
https://github.com/pyg-team/pytorch_geometric/issues/9299
|
[
"bug"
] |
CalmScout
| 1 |
nltk/nltk
|
nlp
| 2,525 |
install NLTK
|
Last login: Sat Apr 4 17:02:21 on ttys000
hanadys-mbp:~ hanadyahmed$ import nltk
-bash: import: command not found
hanadys-mbp:~ hanadyahmed$ python
Python 2.7.10 (default, Jul 14 2015, 19:46:27)
[GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.39)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import nltk
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named nltk
>>>
|
closed
|
2020-04-04T15:18:52Z
|
2020-04-12T22:35:35Z
|
https://github.com/nltk/nltk/issues/2525
|
[] |
Hanadyma
| 4 |
PaddlePaddle/models
|
computer-vision
| 5,739 |
Custom word segmentation error
|
from LAC import LAC, custom
lac = LAC()
lac.model.custom = custom.Customization()
lac.model.custom.add_word('中华人民共和国/国家')
lac.model.custom.add_word('国2/标准')
print(lac.run('中华人民共和国2008年奥运会'))
# returns [['中华人民共和', '国2', '008年', '奥运会'], ['国家', '标准', 'TIME', 'nz']]
|
open
|
2023-11-09T09:20:01Z
|
2024-02-26T05:07:39Z
|
https://github.com/PaddlePaddle/models/issues/5739
|
[] |
guoandzhong
| 0 |
numpy/numpy
|
numpy
| 27,781 |
BUG: `numpy.std` misbehaves with an identity array
|
### Describe the issue:
I created a `(7,)`-shaped array in which every element is the same number. Then I called `numpy.std`, and it returned `4` (far from the expected `0`).
### Reproduce the code example:
```python
import numpy as np
np.std(np.prod(np.full((7, 10), 45, dtype="float64"), axis=1), axis=0)
```
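A plausible reading of that `4` (an editorial note reasoning only from float64 spacing, not from NumPy internals): each product targets `45**10 ≈ 3.4e16`, which lies beyond `2**53`, so float64 can no longer represent every integer there — neighbouring values sit 4 apart. Once the mean of the seven nominally identical values rounds by even one step, the deviations (and hence the standard deviation) come out at about one spacing, i.e. 4:

```python
import math

x = 45.0 ** 10          # ~3.405e16, the value each of the 7 products targets
print(x > 2.0 ** 53)    # True: beyond the range where every integer is exact
print(math.ulp(x))      # 4.0: the gap between neighbouring float64 values here
```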
### Error message:
_No response_
### Python and NumPy Versions:
I tried this in two versions:
```shell
2.1.0
3.10.12 (main, Jul 29 2024, 16:56:48) [GCC 11.4.0]
```
```shell
1.26.3
3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
```
### Runtime Environment:
```shell
[{'numpy_version': '2.1.0',
'python': '3.10.12 (main, Jul 29 2024, 16:56:48) [GCC 11.4.0]',
'uname': uname_result(system='Linux', node='1091aa609208', release='6.8.0-48-generic', version='#48~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Mon Oct 7 11:24:13 UTC 2', machine='x86_64')},
{'simd_extensions': {'baseline': ['SSE', 'SSE2', 'SSE3'],
'found': ['SSSE3',
'SSE41',
'POPCNT',
'SSE42',
'AVX',
'F16C',
'FMA3',
'AVX2'],
'not_found': ['AVX512F',
'AVX512CD',
'AVX512_KNL',
'AVX512_KNM',
'AVX512_SKX',
'AVX512_CLX',
'AVX512_CNL',
'AVX512_ICL']}},
{'architecture': 'Haswell',
'filepath': '/usr/local/lib/python3.10/dist-packages/numpy.libs/libscipy_openblas64_-ff651d7f.so',
'internal_api': 'openblas',
'num_threads': 24,
'prefix': 'libscipy_openblas',
'threading_layer': 'pthreads',
'user_api': 'blas',
'version': '0.3.27'}]
```
```shell
[{'numpy_version': '1.26.3',
'python': '3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]',
'uname': uname_result(system='Linux', node='dc5b26f04603', release='6.8.0-48-generic', version='#48~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Mon Oct 7 11:24:13 UTC 2', machine='x86_64')},
{'simd_extensions': {'baseline': ['SSE', 'SSE2', 'SSE3'],
'found': ['SSSE3',
'SSE41',
'POPCNT',
'SSE42',
'AVX',
'F16C',
'FMA3',
'AVX2'],
'not_found': ['AVX512F',
'AVX512CD',
'AVX512_KNL',
'AVX512_KNM',
'AVX512_SKX',
'AVX512_CLX',
'AVX512_CNL',
'AVX512_ICL']}},
{'architecture': 'Prescott',
'filepath': '/usr/local/lib/python3.10/dist-packages/numpy.libs/libopenblas64_p-r0-0cf96a72.3.23.dev.so',
'internal_api': 'openblas',
'num_threads': 24,
'prefix': 'libopenblas',
'threading_layer': 'pthreads',
'user_api': 'blas',
'version': '0.3.23.dev'}]
```
### Context for the issue:
The error in the returned result is significant.
|
closed
|
2024-11-17T04:35:32Z
|
2024-11-17T04:36:09Z
|
https://github.com/numpy/numpy/issues/27781
|
[
"00 - Bug"
] |
zzctmac
| 0 |
huggingface/transformers
|
deep-learning
| 36,769 |
Add Audio inputs available in apply_chat_template
|
### Feature request
Hello, I would like to request support for audio processing in the apply_chat_template function.
### Motivation
With the rapid advancement of multimodal models, audio processing has become increasingly crucial alongside image and text inputs. Models like Qwen2-Audio, Phi-4-multimodal, and various models now support audio understanding, making this feature essential for modern AI applications.
Supporting audio inputs would enable:
```python
messages = [
{
"role": "system",
"content": [{"type": "text", "text": "You are a helpful assistant."}]
},
{
"role": "user",
"content": [
{"type": "image", "image": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg"},
{"type": "audio", "audio": "https://huggingface.co/microsoft/Phi-4-multimodal-instruct/resolve/main/examples/what_is_shown_in_this_image.wav"},
{"type": "text", "text": "Follow the instruction in the audio with this image."}
]
}
]
```
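To make the requested behaviour concrete, here is a plain-Python sketch of the traversal the template machinery would need (`collect_media` is an illustrative helper, not a transformers API):

```python
def collect_media(messages, media_type):
    """Gather every entry of one modality ("image", "audio", ...) from chat messages."""
    return [part[media_type]
            for msg in messages
            for part in msg["content"]
            if isinstance(part, dict) and part.get("type") == media_type]

messages = [
    {"role": "user", "content": [
        {"type": "image", "image": "bee.jpg"},
        {"type": "audio", "audio": "what_is_shown_in_this_image.wav"},
        {"type": "text", "text": "Follow the instruction in the audio with this image."},
    ]},
]
print(collect_media(messages, "audio"))  # ['what_is_shown_in_this_image.wav']
```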
This enhancement would significantly expand the capabilities of the library to handle the full spectrum of multimodal inputs that state-of-the-art models now support, keeping the transformers library at the forefront of multimodal AI development.
### Your contribution
I've tested this implementation with several multimodal models and it works well for processing audio inputs alongside images and text. I'd be happy to contribute this code to the repository if there's interest.
|
open
|
2025-03-17T17:05:46Z
|
2025-03-17T20:41:41Z
|
https://github.com/huggingface/transformers/issues/36769
|
[
"Feature request"
] |
junnei
| 1 |
PokemonGoF/PokemonGo-Bot
|
automation
| 6,242 |
Error with pokemon_hunter.py
|
### Actual Behavior
<!-- Tell us what is happening -->
It gives me this error: https://pastebin.com/Zxfq1XM6
### Your FULL config.json (remove your username, password, gmapkey and any other private info)
<!-- Provide your FULL config file, feel free to use services such as pastebin.com to reduce clutter -->
### Output when issue occurred
<!-- Provide a reasonable sample from your output log (not just the error message), feel free to use services such as pastebin.com to reduce clutter -->
https://pastebin.com/Zxfq1XM6
### Steps to Reproduce
<!-- Tell us the steps you have taken to reproduce the issue -->
### Other Information
OS: Ubuntu 16.04
<!-- Tell us what Operating system you're using -->
Branch: dev
<!-- dev or master -->
Git Commit:
<!-- run 'git log -n 1 --pretty=format:"%H"' -->
Python Version:
<!-- run 'python -V' and paste it here) -->
Any other relevant files/configs (eg: path files)
<!-- Anything else which may be of relevance -->
|
open
|
2017-10-26T07:46:05Z
|
2017-11-09T02:07:59Z
|
https://github.com/PokemonGoF/PokemonGo-Bot/issues/6242
|
[] |
tobias86aa
| 2 |
tqdm/tqdm
|
jupyter
| 1,359 |
Failing in notebook
|
- [x] I have marked all applicable categories:
+ [ ] exception-raising bug
+ [x] visual output bug
- [x] I have visited the [source website], and in particular
read the [known issues]
- [x] I have searched through the [issue tracker] for duplicates
- [x] I have mentioned version numbers, operating system and
environment, where applicable:
```
4.64.0 3.10.6 (main, Aug 11 2022, 13:36:31) [Clang 13.1.6 (clang-1316.0.21.2.5)] darwin
```
Upon switching to an Apple M1 chip, I have been unable to get the progress bars to work in the notebook.
It is possible I have not installed something correctly.
JupyterLab configuration:
```
JupyterLab v3.4.5
Other labextensions (built into JupyterLab)
app dir: /opt/homebrew/Cellar/python@3.10/3.10.6_1/Frameworks/Python.framework/Versions/3.10/share/jupyter/lab
@aquirdturtle/collapsible_headings v3.0.0 enabled OK
@jlab-enhanced/cell-toolbar v3.5.1 enabled OK
@jupyter-widgets/jupyterlab-manager v5.0.2 enabled OK
@timkpaine/jupyterlab_miami_nights v0.3.1 enabled OK
@yudai-nkt/jupyterlab_city-lights-theme v3.0.0 enabled OK
js v0.1.0 enabled OK
jupyter-matplotlib v0.11.2 enabled OK
jupyterlab-code-snippets v2.1.0 enabled OK
jupyterlab-jupytext v1.3.8 enabled OK
jupyterlab-theme-hale v0.1.3 enabled OK
```
While everything works fine in the shell, when I run something in the notebook I get this:
```python
from tqdm.notebook import tqdm
for i in tqdm(range(10)):
pass
```
```
root:
n: 0
total: 10
elapsed: 0.021565914154052734
ncols: null
nrows: 84
prefix: ""
ascii: false
unit: "it"
unit_scale: false
rate: null
bar_format: null
postfix: null
unit_divisor: 1000
initial: 0
colour: null
```
[source website]: https://github.com/tqdm/tqdm/
[known issues]: https://github.com/tqdm/tqdm/#faq-and-known-issues
[issue tracker]: https://github.com/tqdm/tqdm/issues?q=
|
closed
|
2022-08-25T04:33:17Z
|
2023-04-14T13:29:11Z
|
https://github.com/tqdm/tqdm/issues/1359
|
[
"duplicate 🗐",
"p2-bug-warning ⚠",
"submodule-notebook 📓"
] |
grburgess
| 7 |
junyanz/pytorch-CycleGAN-and-pix2pix
|
deep-learning
| 1,487 |
Problem with pix2pix on two different devices that shows 'nan' at the beginning
|
When I run the pix2pix training on 2 different devices, one finishes training smoothly but the other produces 'nan' in the loss function from the beginning, as the figure shows. The environments and dataset (facades) are essentially the same.

|
open
|
2022-09-26T04:56:27Z
|
2022-09-27T20:24:54Z
|
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1487
|
[] |
YujieXiang
| 1 |
huggingface/datasets
|
pytorch
| 6,841 |
Unable to load wiki_auto_asset_turk from GEM
|
### Describe the bug
I am unable to load the wiki_auto_asset_turk dataset. I get a fatal error while trying to access wiki_auto_asset_turk and load it with datasets.load_dataset. The error (TypeError: expected str, bytes or os.PathLike object, not NoneType) comes from filenames_for_dataset_split, in an os.path.join call.
>>> import datasets
>>> print (datasets.__version__)
>>> dataset = datasets.load_dataset("GEM/wiki_auto_asset_turk")
System output:
Generating train split: 100%|█| 483801/483801 [00:03<00:00, 127164.26 examples/s
Generating validation split: 100%|█| 20000/20000 [00:00<00:00, 116052.94 example
Generating test_asset split: 100%|██| 359/359 [00:00<00:00, 76155.93 examples/s]
Generating test_turk split: 100%|███| 359/359 [00:00<00:00, 87691.76 examples/s]
Traceback (most recent call last):
File "/Users/abhinav.sethy/Code/openai_evals/evals/evals/grammarly_tasks/gem_sari.py", line 3, in <module>
dataset = datasets.load_dataset("GEM/wiki_auto_asset_turk")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/load.py", line 2582, in load_dataset
builder_instance.download_and_prepare(
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/builder.py", line 1005, in download_and_prepare
self._download_and_prepare(
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/builder.py", line 1767, in _download_and_prepare
super()._download_and_prepare(
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/builder.py", line 1100, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/builder.py", line 1565, in _prepare_split
split_info = self.info.splits[split_generator.name]
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/splits.py", line 532, in __getitem__
instructions = make_file_instructions(
^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/arrow_reader.py", line 121, in make_file_instructions
info.name: filenames_for_dataset_split(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/naming.py", line 72, in filenames_for_dataset_split
prefix = os.path.join(path, prefix)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen posixpath>", line 76, in join
TypeError: expected str, bytes or os.PathLike object, not NoneType
### Steps to reproduce the bug
import datasets
print (datasets.__version__)
dataset = datasets.load_dataset("GEM/wiki_auto_asset_turk")
### Expected behavior
Should be able to load the dataset without any issues
### Environment info
datasets version 2.18.0 (was able to reproduce bug with older versions 2.16 and 2.14 also)
Python 3.12.0
|
closed
|
2024-04-26T00:08:47Z
|
2024-05-29T13:54:03Z
|
https://github.com/huggingface/datasets/issues/6841
|
[] |
abhinavsethy
| 8 |
healthchecks/healthchecks
|
django
| 858 |
Notify on failure rate
|
I have a case of a check that is a bit flaky these days, but all I care about is that it runs at least sometimes.
Would it be possible to implement something like a threshold of failures for notifications?
Instead of notifying immediately on the first failure, we would compute the percentage of failed jobs over a period of time and alert the user only when this threshold is crossed.
Individual checks would have different thresholds and time periods.
Users may also like to configure the threshold with percents, or absolute numbers (X failures over Y minutes for example).
To keep the current behavior, the default would be alerting if the percentage of failed jobs exceeds 0% (or 0 failures) over a minute.
If this is implemented, we could also improve it later by adding thresholds for integrations. For example, I may want to be alerted by email for every failed job but only by phone call if the failure rate exceeds some higher threshold.
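A rough sketch of the proposed rule (names are hypothetical, not Healthchecks code): alert only when the failure rate over a rolling window crosses a per-check threshold. `threshold=0.0` over a one-minute window reproduces today's alert-on-first-failure behaviour.

```python
from datetime import datetime, timedelta

def should_alert(results, now, window, threshold):
    # results: list of (timestamp, succeeded) tuples for past check runs
    recent = [ok for ts, ok in results if now - ts <= window]
    if not recent:
        return False
    failure_rate = recent.count(False) / len(recent)
    return failure_rate > threshold

now = datetime(2023, 7, 5, 12, 0)
runs = [(now - timedelta(minutes=m), ok)
        for m, ok in [(50, True), (40, False), (30, True), (20, False), (10, True)]]
```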
|
open
|
2023-07-05T21:56:32Z
|
2023-12-15T12:10:52Z
|
https://github.com/healthchecks/healthchecks/issues/858
|
[
"feature"
] |
Crocmagnon
| 7 |
tfranzel/drf-spectacular
|
rest-api
| 626 |
could not resolve authenticator JWTCookieAuthenticator
|
I'm seeing an error, could not resolve authenticator <class 'dj_rest_auth.jwt_auth.JWTCookieAuthentication'>. There was no OpenApiAuthenticationExtension registered for that class. Try creating one by subclassing it. Ignoring for now.
my default auth classes are below:
```python
"DEFAULT_AUTHENTICATION_CLASSES": (
    "rest_framework.authentication.SessionAuthentication",
    "rest_framework.authentication.TokenAuthentication",
    "dj_rest_auth.jwt_auth.JWTCookieAuthentication",
),
```
|
closed
|
2021-12-24T15:55:00Z
|
2021-12-25T10:50:55Z
|
https://github.com/tfranzel/drf-spectacular/issues/626
|
[] |
aabanaag
| 2 |
charlesq34/pointnet
|
tensorflow
| 33 |
Recognize bottle from ycb data as monitor!
|
hi..
I'm trying to use PointNet to recognize objects from the YCB dataset (http://www.ycbbenchmarks.com/) under the bottle category. The object has a very similar shape to the training data, but PointNet did not recognize it correctly! I think the issue may be related to how I prepare/down-sample the data!
I tried with 006_mustard_bottle.ply; it contains 847289 points, which I down-sampled randomly to 1024 points:

result in following:


The classification result was :
airplane = -10065.7
bathtub = -10174.7
bed = -5508.48
bench = -6663.82
bookshelf = -7400.75
bottle = -15664.8
bowl = -6521.66
car = -14696.8
chair = -11587.8
cone = -8178.83
cup = -13763.6
curtain = -4454.49
desk = -19952.4
door = -12188.9
dresser = -4633.64
flower_pot = 1036.69
glass_box = -7719.06
guitar = -8977.43
keyboard = -15161.3
lamp = -5138.02
laptop = -6836.96
mantel = -13274.9
monitor = -9731.3
night_stand = -11224.4
person = -16864.7
piano = -17242.8
plant = 1167.26
**radio = 1352.54**
range_hood = -13157.3
sink = -9762.18
sofa = -18294.6
stairs = -5749.73
stool = -10743.8
table = -6021.3
tent = -11153.3
toilet = -10596.6
tv_stand = -3238.71
vase = -7211.66
wardrobe = -13034.5
xbox = -6992.64
I tried a different approach, voxelizing the cloud to get a better result:


loss value 13.4148
airplane = -9.70358
bathtub = -19.6874
bed = -13.8628
bench = -15.0484
bookshelf = -14.7929
bottle = -10.4973
bowl = -14.3144
car = -7.3393
chair = -12.6577
cone = -8.53707
cup = -7.65949
curtain = -19.0757
desk = -16.9046
door = -20.5851
dresser = -8.47661
flower_pot = 1.46643
glass_box = -15.3492
guitar = -12.4239
keyboard = -20.1066
lamp = -8.02352
laptop = -17.523
mantel = -13.7787
**monitor = 2.63108**
night_stand = -6.75316
person = -11.2164
piano = -3.55436
plant = -2.09806
radio = -3.64729
range_hood = -14.5621
sink = -10.3373
sofa = -14.9684
stairs = -3.0044
stool = -14.2689
table = -19.0393
tent = -3.54769
toilet = -4.67468
tv_stand = -15.6987
vase = -5.87417
wardrobe = -15.3384
xbox = -11.0024
What is the best approach to prepare data to get more accurate results?
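For reference, a common cause of this kind of misclassification is a train/test distribution mismatch: PointNet's ModelNet40 training clouds are centered and scaled into the unit sphere, while raw YCB coordinates are in meters with an arbitrary origin. A stdlib sketch of matching preprocessing (an assumption about the training data, not code from this repo):

```python
import math
import random

def sample_and_normalize(points, n=1024, seed=0):
    # points: list of (x, y, z); returns n points centered at the centroid
    # and scaled so the farthest point lies on the unit sphere
    rng = random.Random(seed)
    pts = rng.sample(points, n) if len(points) >= n else [rng.choice(points) for _ in range(n)]
    cx = sum(p[0] for p in pts) / n
    cy = sum(p[1] for p in pts) / n
    cz = sum(p[2] for p in pts) / n
    centered = [(x - cx, y - cy, z - cz) for x, y, z in pts]
    radius = max(math.sqrt(x * x + y * y + z * z) for x, y, z in centered) or 1.0
    return [(x / radius, y / radius, z / radius) for x, y, z in centered]

cloud = [(random.Random(i).uniform(0, 0.2),
          random.Random(i + 1).uniform(0, 0.2),
          random.Random(i + 2).uniform(0, 0.2)) for i in range(5000)]
normalized = sample_and_normalize(cloud)
max_norm = max(math.sqrt(x * x + y * y + z * z) for x, y, z in normalized)
```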
|
closed
|
2017-08-09T12:24:05Z
|
2019-12-10T11:47:07Z
|
https://github.com/charlesq34/pointnet/issues/33
|
[] |
mzaiady
| 5 |
mwaskom/seaborn
|
data-science
| 3,479 |
Annotations on heatmaps not shown for all values
|
Example:
https://www.tutorialspoint.com/how-to-annotate-each-cell-of-a-heatmap-in-seaborn
```python
import seaborn as sns
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = [7.50, 3.50]
plt.rcParams["figure.autolayout"] = True
df = pd.DataFrame(np.random.random((5, 5)), columns=["a", "b", "c", "d", "e"])
sns.heatmap(df, annot=True, annot_kws={"size": 7})
plt.show()
```
My result on both my laptop and my desktop using VSCode and/or PyCharm Pro:

Similarly, the project I was working on had results like this:

````terminal
pip list
Package Version
--------------- ------------
contourpy 1.1.1
cycler 0.11.0
fonttools 4.42.1
kiwisolver 1.4.5
matplotlib 3.8.0
numpy 1.26.0
packaging 23.1
pandas 2.1.0
Pillow 10.0.1
pip 23.2.1
pyparsing 3.1.1
python-dateutil 2.8.2
pytz 2023.3.post1
seaborn 0.12.2
setuptools 65.5.0
six 1.16.0
tzdata 2023
````
|
closed
|
2023-09-18T01:05:34Z
|
2024-09-24T03:12:06Z
|
https://github.com/mwaskom/seaborn/issues/3479
|
[] |
ellie-okay
| 3 |
InstaPy/InstaPy
|
automation
| 6,050 |
commenting
|
I want to have each user's follower and following lists as output, and I also want my bot to comment on the post with the most likes (for every user in the list). Please help me if you can. Thanks.
|
open
|
2021-01-24T16:31:44Z
|
2021-07-21T02:19:12Z
|
https://github.com/InstaPy/InstaPy/issues/6050
|
[
"wontfix"
] |
melikaafs
| 1 |
tradingstrategy-ai/web3-ethereum-defi
|
pytest
| 11 |
autosummary docs to gitignore
|
Currently autosummary docs are generated, but they still get committed to the repo.
- Add to .gitignore.
- Remove from the existing git tree
- Check that readthedocs will still correctly build the docs
|
closed
|
2022-03-17T21:50:27Z
|
2022-03-18T14:54:49Z
|
https://github.com/tradingstrategy-ai/web3-ethereum-defi/issues/11
|
[
"priority: P2"
] |
miohtama
| 0 |
BeastByteAI/scikit-llm
|
scikit-learn
| 78 |
Safe openai version to work on?
|
Hi, I'm trying to use the few-shot classifier from the sample code. However, the openai package has restructured its API: https://community.openai.com/t/attributeerror-module-openai-has-no-attribute-embedding/484499.
Here is the error:
```
Could not obtain the completion after 3 retries: APIRemovedInV1 ::
You tried to access openai.ChatCompletion, but this is no longer supported in openai>=1.0.0 - see the README at https://github.com/openai/openai-python for the API.
You can run `openai migrate` to automatically upgrade your codebase to use the 1.0.0 interface.
...
A detailed migration guide is available here: https://github.com/openai/openai-python/discussions/742

None
Could not extract the label from the completion: 'NoneType' object is not subscriptable
```
So, is there a version of the openai package that is safe to run?
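Until the library catches up with the 1.x client, the usual workaround is pinning the dependency, e.g. `pip install "openai<1.0.0"` (which at the time resolved to 0.28.x). A small hypothetical guard to assert the compatible range at runtime:

```python
def openai_pre_v1(version: str) -> bool:
    # scikit-llm (as of this issue) calls openai.ChatCompletion, which only
    # exists in the pre-1.0 API, so the installed major version must be 0
    return int(version.split(".")[0]) < 1
```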
|
closed
|
2023-11-09T07:08:53Z
|
2023-11-09T07:23:45Z
|
https://github.com/BeastByteAI/scikit-llm/issues/78
|
[] |
Dededon
| 0 |
microsoft/nni
|
pytorch
| 4,945 |
Supported for torch.sum
|
**Describe the issue**:
I noticed that currently NNI does not support the `torch.sum` operation. But I did find the `torch.sum` operation in some network models, such as `resnest`.
I wrote my own support for `torch.sum` but it doesn't seem right.
```python
def sum_python(node, speedup):
c_node = node.key_node
inputs = list(c_node.inputs())
dim_list = translate_list(inputs[1], speedup)
keep_dim = inputs[2].toIValue()
new_sum = partial(torch.sum, dim=tuple(dim_list), keepdim=keep_dim)
return new_sum
```
The masks of some layers end up being omitted.

**Environment**:
- NNI version: the latest
- Training service (local|remote|pai|aml|etc):
- Client OS: centos 7
- Server OS (for remote mode only):
- Python version: 3.8.8
- PyTorch/TensorFlow version: 1.8.0
- Is conda/virtualenv/venv used?: yes
- Is running in Docker?: no
**How to reproduce it?**:
This is the simpe code and you can download `mmclassification` to reproduce it. Note that the pytorch version should be higher than 1.8.0 or equal.
```python
import torch
from argparse import ArgumentParser
from mmcls.apis import inference_model, init_model, show_result_pyplot
from nni.compression.pytorch import ModelSpeedup
from nni.compression.pytorch.utils.counter import count_flops_params
from nni.algorithms.compression.v2.pytorch.pruning.basic_pruner import SlimPruner, L1NormPruner, FPGMPruner
from nni.compression.pytorch.utils import not_safe_to_prune
device = 'cuda:0'
config = 'configs/resnest/resnest50_8xb16_cifar10.py'
checkpoint = None
img_file = 'demo/demo.JPEG'
# build the model from a config file and a checkpoint file
model = init_model(config, checkpoint, device=device)
model.forward = model.dummy_forward
pre_flops, pre_params, _ = count_flops_params(model, torch.randn([128, 3, 32, 32]).to(device))
im = torch.ones(1, 3, 128, 128).to(device)
out = model(im)
# with torch.no_grad():
# input_name = ['input']
# output_name = ['output']
# onnxname = 'resnest.onnx'
# torch.onnx.export(model, im, onnxname, input_names = input_name, output_names = output_name,
# opset_version=11, training=False, verbose=False, do_constant_folding=False)
# print(f'successful export onnx {onnxname}')
# exit()
# scores = model(return_loss=False, **data)
# scores = model(return_loss=False, **im)
# test a single image
# result = inference_model(model, img_file)
# Start to prune and speedupls
print('\n' + '=' * 50 + ' START TO PRUNE THE BEST ACCURACY PRETRAINED MODEL ' + '=' * 50)
not_safe = not_safe_to_prune(model, im)
print('\n' + '=' * 50 + 'not_safe' + '=' * 50, not_safe)
cfg_list = []
for name, module in model.named_modules():
print(name)
if name in not_safe:
continue
if isinstance(module, torch.nn.Conv2d):
cfg_list.append({'op_types':['Conv2d'], 'sparsity':0.2, 'op_names':[name]})
print('cfg_list')
for i in cfg_list:
print(i)
pruner = FPGMPruner(model, cfg_list)
_, masks = pruner.compress()
pruner.show_pruned_weights()
pruner._unwrap_model()
pruner.show_pruned_weights()
ModelSpeedup(model, dummy_input=im, masks_file=masks, confidence=32).speedup_model()
torch.jit.trace(model, im, strict=False)
print(model)
flops, params, results = count_flops_params(model, torch.randn([128, 3, 32, 32]).to(device))
print(f'Pretrained model FLOPs {pre_flops/1e6:.2f} M, #Params: {pre_params/1e6:.2f}M')
print(f'Finetuned model FLOPs {flops/1e6:.2f} M, #Params: {params/1e6:.2f}M')
model.forward = model.forward_
torch.save(model, 'chek/prune_model/resnest50_8xb16_cifar10_sparsity_0.2.pth')
```
The config file for `resnest50_8xb16_cifar10.py` is:
```python
_base_ = [
'../_base_/datasets/cifar10_bs16.py',
'../_base_/schedules/cifar10_bs128.py',
'../_base_/default_runtime.py'
]
# model settings
model = dict(
type='ImageClassifier',
backbone=dict(
type='ResNeSt',
depth=50,
num_stages=4,
out_indices=(3, ),
style='pytorch'),
neck=dict(type='GlobalAveragePooling'),
head=dict(
type='LinearClsHead',
num_classes=10,
in_channels=2048,
loss=dict(
type='LabelSmoothLoss',
label_smooth_val=0.1,
num_classes=10,
reduction='mean',
loss_weight=1.0),
topk=(1, 5),
cal_acc=False))
train_cfg = dict(mixup=dict(alpha=0.2, num_classes=10))
lr_config = dict(policy='step', step=[120, 170])
runner = dict(type='EpochBasedRunner', max_epochs=200)
```
|
open
|
2022-06-17T07:14:28Z
|
2022-11-17T03:33:52Z
|
https://github.com/microsoft/nni/issues/4945
|
[
"bug",
"model compression",
"ModelSpeedup"
] |
maxin-cn
| 29 |
allenai/allennlp
|
nlp
| 5,153 |
Implement a ROUGE metric that faithfully reproduces the official metric written in perl.
|
<!--
Please fill this template entirely and do not erase any of it.
We reserve the right to close without a response bug reports which are incomplete.
If you have a question rather than a bug, please ask on [Stack Overflow](https://stackoverflow.com/questions/tagged/allennlp) rather than posting an issue here.
-->
## Checklist
<!-- To check an item on the list replace [ ] with [x]. -->
- [x] I have verified that the issue exists against the `main` branch of AllenNLP.
- [x] I have read the relevant section in the [contribution guide](https://github.com/allenai/allennlp/blob/main/CONTRIBUTING.md#bug-fixes-and-new-features) on reporting bugs.
- [x] I have checked the [issues list](https://github.com/allenai/allennlp/issues) for similar or identical bug reports.
- [x] I have checked the [pull requests list](https://github.com/allenai/allennlp/pulls) for existing proposed fixes.
- [x] I have checked the [CHANGELOG](https://github.com/allenai/allennlp/blob/main/CHANGELOG.md) and the [commit log](https://github.com/allenai/allennlp/commits/main) to find out if the bug was already fixed in the main branch.
- [x] I have included in the "Description" section below a traceback from any exceptions related to this bug.
- [x] I have included in the "Related issues or possible duplicates" section below all related issues and possible duplicate issues (If there are none, check this box anyway).
- [x] I have included in the "Environment" section below the name of the operating system and Python version that I was using when I discovered this bug.
- [x] I have included in the "Environment" section below the output of `pip freeze`.
- [x] I have included in the "Steps to reproduce" section below a minimally reproducible example.
## Description
<!-- Please provide a clear and concise description of what the bug is here. -->
I was using `allennlp-models==2.3.0` and training with the script `allennlp-models/training_scripts/generation/bart_cnn_dm.jsonnet`. And I've got the following performance output:
```
{
"best_epoch": 0,
"peak_worker_0_memory_MB": 77679.125,
"peak_gpu_0_memory_MB": 19497.5625,
"training_duration": "1 day, 9:30:58.908029",
"training_start_epoch": 0,
"training_epochs": 2,
"epoch": 2,
"training_loss": 2.7070548462225283,
"training_worker_0_memory_MB": 77679.125,
"training_gpu_0_memory_MB": 19497.5625,
"validation_ROUGE-1_R": 0.4871779537805129,
"validation_ROUGE-2_R": 0.26309701739882685,
"validation_ROUGE-1_P": 0.3966578995429105,
"validation_ROUGE-2_P": 0.21290430088784706,
"validation_ROUGE-1_F1": 0.4283963905120849,
"validation_ROUGE-2_F1": 0.23045514136364303,
"validation_ROUGE-L": 0.3206116030616199,
"validation_BLEU": 0.18484394329002954,
"validation_loss": 0.0,
"best_validation_ROUGE-1_R": 0.47620558012437575,
"best_validation_ROUGE-2_R": 0.25229075181929206,
"best_validation_ROUGE-1_P": 0.38737318484205874,
"best_validation_ROUGE-2_P": 0.20447094269175353,
"best_validation_ROUGE-1_F1": 0.41917399613391276,
"best_validation_ROUGE-2_F1": 0.22154245158723443,
"best_validation_ROUGE-L": 0.31225680111602044,
"best_validation_BLEU": 0.17805890029860716,
"best_validation_loss": 0.0
}
```
However, according to the implementation from fairseq (and what's reported in the paper), the Rouge-1/2/L score should be 44.16/21.28/40.90, so that the Rouge-L score is 9.7 points below the reference value, while Rouge-1/2 scores have some improvements.
I am wondering if this is expected and why the Rouge-L score is significantly worse. Is this an issue with how BART models are implemented in `allennlp-models`, or with how the Rouge-L score is computed in `allennlp.training.metrics`?
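For context, ROUGE-L is an LCS-based F-measure; a minimal token-level sketch is below. The official Perl script additionally applies sentence splitting, stemming, and summary-level aggregation, which are plausible sources of the gap. The `beta=1.2` default here follows py-rouge and is an assumption, not the allennlp implementation.

```python
def lcs_len(a, b):
    # classic O(len(a) * len(b)) longest-common-subsequence length
    table = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            table[i + 1][j + 1] = table[i][j] + 1 if x == y else max(table[i][j + 1], table[i + 1][j])
    return table[-1][-1]

def rouge_l_f1(hyp_tokens, ref_tokens, beta=1.2):
    lcs = lcs_len(hyp_tokens, ref_tokens)
    if lcs == 0:
        return 0.0
    precision = lcs / len(hyp_tokens)
    recall = lcs / len(ref_tokens)
    return (1 + beta ** 2) * precision * recall / (recall + beta ** 2 * precision)
```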
## Related issues or possible duplicates
- None
## Environment
<!-- Provide the name of operating system below (e.g. OS X, Linux) -->
OS: Linux
<!-- Provide the Python version you were using (e.g. 3.7.1) -->
Python version: 3.8.8
<details>
<summary><b>Output of <code>pip freeze</code>:</b></summary>
<p>
<!-- Paste the output of `pip freeze` in between the next two lines below -->
(I've created an environment with only `allennlp==2.3.0` and no other pkgs installed)
```
allennlp==2.3.0
attrs==20.3.0
blis==0.7.4
boto3==1.17.53
botocore==1.20.53
catalogue==2.0.3
certifi==2020.12.5
chardet==4.0.0
click==7.1.2
configparser==5.0.2
conllu==4.4
cymem==2.0.5
docker-pycreds==0.4.0
filelock==3.0.12
ftfy==6.0.1
gitdb==4.0.7
GitPython==3.1.14
h5py==3.2.1
idna==2.10
iniconfig==1.1.1
Jinja2==2.11.3
jmespath==0.10.0
joblib==1.0.1
jsonnet==0.17.0
lmdb==1.2.0
MarkupSafe==1.1.1
more-itertools==8.7.0
murmurhash==1.0.5
nltk==3.6.1
numpy==1.20.2
overrides==3.1.0
packaging==20.9
pathtools==0.1.2
pathy==0.4.0
Pillow==8.2.0
pluggy==0.13.1
preshed==3.0.5
promise==2.3
protobuf==3.15.8
psutil==5.8.0
py==1.10.0
pydantic==1.7.3
pyparsing==2.4.7
py-rouge==1.1
pytest==6.2.3
python-dateutil==2.8.1
PyYAML==5.4.1
regex==2021.4.4
requests==2.25.1
s3transfer==0.3.7
sacremoses==0.0.45
scikit-learn==0.24.1
scipy==1.6.2
sentencepiece==0.1.95
sentry-sdk==1.0.0
shortuuid==1.0.1
six==1.15.0
smart-open==3.0.0
smmap==4.0.0
spacy==3.0.5
spacy-legacy==3.0.2
srsly==2.4.1
subprocess32==3.5.4
tensorboardX==2.2
thinc==8.0.2
threadpoolctl==2.1.0
tokenizers==0.10.2
toml==0.10.2
torch==1.8.1+cu111
torchvision==0.9.1
tqdm==4.60.0
transformers==4.5.1
typer==0.3.2
typing-extensions==3.7.4.3
urllib3==1.26.4
wandb==0.10.26
wasabi==0.8.2
wcwidth==0.2.5
word2number==1.1
```
</p>
</details>
## Steps to reproduce
1. Create an environment with `python==3.8` and `allennlp==2.3.0`
2. Clone the `allennlp-models` repo
3. Run `allennlp train training_scripts/generaion/bart_cnn_dm.jsonnet -s tmp-save-dir --include-package allennlp_models`
<details>
<summary><b>Example source:</b></summary>
<p>
<!-- Add a fully runnable example in between the next two lines below that will reproduce the bug -->
```
```
</p>
</details>
|
open
|
2021-04-24T21:44:58Z
|
2021-06-09T07:42:53Z
|
https://github.com/allenai/allennlp/issues/5153
|
[
"Contributions welcome",
"Feature request"
] |
niansong1996
| 26 |
DistrictDataLabs/yellowbrick
|
scikit-learn
| 623 |
Pass fitted or unfitted objects to Classification Visualizers - how to avoid fitting X_train twice?
|
For yellowbrick `visualize()` - from the __Classification Visualizers__ section:
- I am trying to avoid fitting a classification model twice to training data. Should we pass fitted or unfitted model/gridsearch objects to the `visualize` method?
- my question is about when to call `visualizer.fit()`
- before or after performing a `GridSearchCV`?
Here is a minimum example to recreate my question - I have a basic [`scikit-learn` pipeline that performs a simple grid search](https://stackoverflow.com/q/37021338/4057186)
```
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from yellowbrick.classifier import DiscriminationThreshold
import pandas as pd
from sklearn.grid_search import GridSearchCV
from sklearn.pipeline import Pipeline
data = (
pd.read_csv(
'https://raw.githubusercontent.com/LuisM78/Occupancy-detection-data/master/datatest2.txt'
)
)
# Specify the features of interest and the classes of the target
features = ["Temperature", "HumidityRatio", "Light", "CO2", "Humidity"]
classes = [0, 1]
# Extract the numpy arrays from the data frame
X = data[features].as_matrix()
y = data.Occupancy.as_matrix()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
# Instantiate the classification model and visualizer
mypipe = Pipeline(steps=[('forest', RandomForestClassifier())])
mydict = {'forest__n_estimators': [10,11]}
gs = GridSearchCV(mypipe, mydict, scoring='roc_auc', cv=3)
gs.fit(X_train, y_train)
print(gs.best_score_)
visualizer = DiscriminationThreshold(gs)
visualizer.fit(X_train, y_train)
visualizer.poof()
print(visualizer.best_score_)
```
Code output
```
0.997314002777
0.997314002777
```
In the [__Class Balance__ docs](http://www.scikit-yb.org/en/latest/api/classifier/class_balance.html#module-yellowbrick.classifier.class_balance), it says
> model: estimator
Scikit-Learn estimator object. Should be an instance of a classifier,
else __init__() will raise an exception.
though I couldn't find whether it requires fitted or unfitted estimators.
__Questions__
__Case A__
Do I pass the fitted grid search object to the [Classification Visualizers](http://www.scikit-yb.org/en/latest/api/classifier/index.html#classification-visualizers)? Or must I pass only the unfitted objects to the `visualizer`?
|
closed
|
2018-10-01T22:38:43Z
|
2018-11-19T16:20:06Z
|
https://github.com/DistrictDataLabs/yellowbrick/issues/623
|
[
"type: question"
] |
edesz
| 6 |
feature-engine/feature_engine
|
scikit-learn
| 776 |
Provide optional "transform" function to run for each feature selection fit
|
**Is your feature request related to a problem? Please describe.**
Sometimes I use transformations that are dependent on the feature set. For example, one typical transformation is scaling by the total (e.g., x/x.sum()).
**Describe the solution you'd like**
The original feature matrix is retained and upon each fit, the transformation is computed. Here's a wacky version just to show the concept:
```
def transform(X):
return X/X.sum(axis=1).reshape(-1,1)
X = np.random.RandomState(0).randint(low=0, high=1000, size=(10,5))
y = np.random.RandomState(0).choice([0,1], size=10)
for i in range(1, X.shape[1]+1):
X_query = X[:,:i]
if X_query.shape[1] > 1:
X_query = transform(X_query)
# fit(X_query, y)
```
**Describe alternatives you've considered**
I'm currently making a custom class and reimplementing the fit method to have this feature.
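A minimal sketch of that custom-class alternative (names are hypothetical, not feature_engine code): retain the raw feature matrix and apply the set-dependent transform inside `fit`, so each candidate feature subset is re-scaled before the estimator sees it.

```python
class TransformThenFit:
    # hypothetical wrapper: applies `transform` to each feature subset on fit
    def __init__(self, estimator, transform):
        self.estimator = estimator
        self.transform = transform

    def fit(self, X, y=None):
        self.estimator.fit(self.transform(X), y)
        return self

class RecordingEstimator:
    # stand-in estimator that just records what it was fitted on
    def fit(self, X, y=None):
        self.seen_ = X
        return self

rows = [[1.0, 3.0], [2.0, 2.0]]
scale_rows = lambda X: [[v / sum(row) for v in row] for row in X]
wrapped = TransformThenFit(RecordingEstimator(), scale_rows)
wrapped.fit(rows)
```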
**Additional context**
NA
|
closed
|
2024-06-24T20:58:39Z
|
2024-08-24T11:57:42Z
|
https://github.com/feature-engine/feature_engine/issues/776
|
[
"wontfix"
] |
jolespin
| 3 |
clovaai/donut
|
computer-vision
| 7 |
Release yaml files
|
Hi,
Thank you for sharing your interesting work. I was wondering if there is an expected date on when you will be releasing yaml files regarding anything other than CORD? I want to reproduce the experimental results in my environment.
|
closed
|
2022-08-01T00:28:02Z
|
2022-08-10T09:49:28Z
|
https://github.com/clovaai/donut/issues/7
|
[] |
rtanaka-lab
| 2 |
man-group/arctic
|
pandas
| 977 |
argument of type 'NoneType' is not iterable (when updating)
|
#### Arctic Version
```
1.80.5
```
#### Arctic Store
```
ChunkStore
```
#### Platform and version
Linux
#### Description of problem and/or code sample that reproduces the issue
I also encountered the same bug. When the original collection has a row of data but the metadata collection does not have that row, this bug occurs when updating:
https://github.com/man-group/arctic/issues/923#issue-1046762464
|
closed
|
2023-01-16T06:30:21Z
|
2023-02-16T09:52:01Z
|
https://github.com/man-group/arctic/issues/977
|
[] |
TwoBeng
| 1 |
explosion/spaCy
|
deep-learning
| 13,709 |
Unable to fine-tune previously trained transformer based spaCy NER.
|
## How to reproduce the behaviour
Use spacy to fine-tune a base model with a transformer from Hugging Face:
```
python -m spacy train config.cfg --output ./output --paths.train ./train.spacy --paths.dev ./dev.spacy
```
Collect new tagged entries under new sets and set your model location to output/model-last in a new config:
```
python -m spacy train fine_tune_config.cfg --output ./fine_tune_output --paths.train ./newtrain.spacy --paths.dev ./newdev.spacy
```
You will get an error about a missing config.json. Even replacing this file then leads to an error about a missing tokenizer.
## Your Environment
<!-- Include details of your environment. You can also type `python -m spacy info --markdown` and copy-paste the result here.-->
* Operating System: Windows 11
- **spaCy version:** 3.7.2
- **Platform:** Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.35
- **Python version:** 3.10.13
|
open
|
2024-12-06T04:46:11Z
|
2024-12-06T04:57:26Z
|
https://github.com/explosion/spaCy/issues/13709
|
[] |
jlustgarten
| 1 |
dask/dask
|
numpy
| 11,318 |
cannot access local variable 'divisions' where it is not associated with a value
|
getting this error when trying to use sort_values multiple times
**Anything else we need to know?**:
Dask Scheduler Compute: 1core, 1GB mem
Dask Workers: 3, 1core, 1GB mem each
using docker to set up a cluster
docker compose.yml
**Environment**:
-Dask version:2024.5.2
- Python version: 3.12
- Operating System: linux (docker)
- Install method (conda, pip, source): pip
|
closed
|
2024-08-14T22:15:11Z
|
2024-10-10T17:02:48Z
|
https://github.com/dask/dask/issues/11318
|
[
"needs triage"
] |
Cognitus-Stuti
| 1 |
datadvance/DjangoChannelsGraphqlWs
|
graphql
| 45 |
Graphene v3 support?
|
is there any plan to support v3 of graphene?
if it's already in the roadmap, i can help to test :)
|
closed
|
2020-08-02T08:34:59Z
|
2023-04-27T21:08:38Z
|
https://github.com/datadvance/DjangoChannelsGraphqlWs/issues/45
|
[] |
hyusetiawan
| 14 |
microsoft/nni
|
data-science
| 5,758 |
USER_CANCELED
|

When I submit the code to run on the server, without performing any operations, the status changes to "USER_CANCELED". Even the NNI code that used to run successfully before is now encountering this issue when I try to run it. Could anyone please advise on how to solve this problem?
|
closed
|
2024-03-19T11:25:33Z
|
2024-03-28T02:32:51Z
|
https://github.com/microsoft/nni/issues/5758
|
[] |
fantasy0905
| 0 |
ultrafunkamsterdam/undetected-chromedriver
|
automation
| 1,038 |
selenium.common.exceptions.InvalidArgumentException: Message: invalid argument: cannot parse capability: goog:chromeOptions from invalid argument: unrecognized chrome option: excludeSwitches
|
I am finding this error. Please help me.
|
open
|
2023-02-05T21:22:33Z
|
2023-06-01T16:42:25Z
|
https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/1038
|
[] |
mominurr
| 3 |
httpie/cli
|
api
| 1,425 |
Support httpie in Chrome DevTools
|
This is not an httpie tool request per se; instead, as an avid user of the httpie tool, I find it frustrating that in the network tab of the Chrome DevTools there's an option in the context menu of a request to copy it as a Curl or Fetch command line, but not as an Httpie command line.
It would be great if anyone from this community will work on a browser extension to support that!
|
open
|
2022-07-27T16:12:06Z
|
2022-07-27T17:06:06Z
|
https://github.com/httpie/cli/issues/1425
|
[
"enhancement"
] |
tomers
| 1 |
replicate/cog
|
tensorflow
| 1,784 |
Update cog.run nav to link to docs/deploy.md
|
@mattt recently made some updates to `docs/deploy.md` in https://github.com/replicate/cog/pull/1761/files
I went to take a look at those changes on the [cog.run](https://cog.run/) website, but noticed that file is not currently referenced from the site's nav/sidevar. See https://github.com/replicate/cog/blob/93ec993607d14c155d4c90bd5e5b3f622f983cf7/mkdocs.yml#L4-L16
I think I may have been responsible for omitting this when setting up the website, because the doc's purpose wasn't entirely clear to me. It's called "Deploy models with Cog" but it's not really about deployment, so much as running Cog locally and a few docker incantations to be aware of. Maybe we should rename it?
|
open
|
2024-07-02T15:48:47Z
|
2024-07-02T15:48:47Z
|
https://github.com/replicate/cog/issues/1784
|
[] |
zeke
| 0 |
onnx/onnx
|
pytorch
| 5,847 |
May I ask if PSD and SoudenMVDR in torchaudio support conversion to onnx?
|
May I ask if PSD and SoudenMVDR in torchaudio support conversion to onnx?
|
closed
|
2024-01-06T09:08:02Z
|
2025-01-29T06:43:46Z
|
https://github.com/onnx/onnx/issues/5847
|
[
"question",
"topic: converters",
"stale"
] |
wrz1999
| 2 |
sinaptik-ai/pandas-ai
|
pandas
| 645 |
Custom Prompt
|
### 🐛 Describe the bug
hey All,
I'm new to pandasai. I'd like help using my own custom prompt instead of the default prompt template.
@gventuri
thanks in advance
|
closed
|
2023-10-14T13:36:52Z
|
2024-06-01T00:20:13Z
|
https://github.com/sinaptik-ai/pandas-ai/issues/645
|
[] |
L0GESHWARAN
| 1 |
ydataai/ydata-profiling
|
jupyter
| 1,429 |
Common Values incorrectly reporting (missing)
|
### Current Behaviour
OS:Mac
Python:3.11
Interface: Jupyter Lab
pip: 22.3.1
[dataset](https://github.com/plotly/datasets/blob/master/2015_flights.parquet)
|DEPARTURE_DELAY|ARRIVAL_DELAY|DISTANCE|SCHEDULED_DEPARTURE|
|---------------|-------------|--------|-------------------|
| -11.0| -22.0| 1448|0.08333333333333333|
| -8.0| -9.0| 2330|0.16666666666666666|
| -2.0| 5.0| 2296| 0.3333333333333333|
| -5.0| -9.0| 2342| 0.3333333333333333|
| -1.0| -21.0| 1448| 0.4166666666666667|
It appears that when a column's value counts exceed 200 distinct values within the `Common values` section:
- the remainder beyond the top 200 is categorised as `(Missing)`
This contradicts the Missing and Missing (%) statistics reported for the variable.
<img width="1066" alt="image" src="https://github.com/ydataai/ydata-profiling/assets/39754073/e2854ae1-3fbe-41fa-8f4a-8e73e696dc1f">
### Expected Behaviour
The `(Missing)` entry within `Common values` should be removed, or the remainder should be attributed to "Other values".
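A sketch of the expected bookkeeping (illustrative, not ydata-profiling internals): only the top-N counts are listed, and the remainder should be reported as "Other values" — never folded into "(Missing)".

```python
from collections import Counter

def common_values(values, top_n=200):
    # counts over non-null values only; nulls are tracked separately
    counts = Counter(v for v in values if v is not None)
    top = counts.most_common(top_n)
    other = sum(counts.values()) - sum(c for _, c in top)
    missing = sum(1 for v in values if v is None)
    return top, other, missing

top, other, missing = common_values(["a", "a", "b", "c", None], top_n=2)
```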
### Data Description
https://github.com/plotly/datasets/blob/master/2015_flights.parquet
### Code that reproduces the bug
```Python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from ydata_profiling import ProfileReport
import json
spark = SparkSession.builder.appName("ydata").getOrCreate()
spark_df = spark.read.parquet("ydata-test/2015_flights.parquet")
n_notnull = spark_df.filter(F.col("SCHEDULED_DEPARTURE").isNotNull()).count()
profile = ProfileReport(spark_df, minimal=True)
value_counts_values = sum(json.loads(profile.to_json())["variables"]["SCHEDULED_DEPARTURE"]["value_counts_without_nan"].values())
missing_common_values = 1650418 # as per html report
assert missing_common_values == (n_notnull - value_counts_values)
```
### pandas-profiling version
4.5.1
### Dependencies
```Text
ydata-profiling==4.5.1
pyspark==3.4.1
pandas==2.0.3
numpy==1.23.5
```
### OS
macos
### Checklist
- [X] There is not yet another bug report for this issue in the [issue tracker](https://github.com/ydataai/pandas-profiling/issues)
- [X] The problem is reproducible from this bug report. [This guide](http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) can help to craft a minimal bug report.
- [X] The issue has not been resolved by the entries listed under [Common Issues](https://pandas-profiling.ydata.ai/docs/master/pages/support_contrib/common_issues.html).
|
open
|
2023-08-24T12:52:21Z
|
2024-01-28T05:33:33Z
|
https://github.com/ydataai/ydata-profiling/issues/1429
|
[
"bug 🐛",
"spark :zap:"
] |
danhosanee
| 1 |
plotly/dash
|
dash
| 3,212 |
Dash 3.0 getattr on app recursive error
|
Calling `getattr(app, "property_name")` generates an infinite recursion error.
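Illustrative only (not Dash's actual code) — the classic way a `__getattr__` recurses is by calling `getattr` on `self` for a missing name; the usual fix is to fall back to the instance dict (or raise `AttributeError`) instead:

```python
class Recursive:
    def __getattr__(self, name):
        # BUG: re-enters __getattr__ for the same missing name, forever
        return getattr(self, name)

class Fixed:
    def __getattr__(self, name):
        # only called for names not found normally, so look in __dict__
        try:
            return self.__dict__[name]
        except KeyError:
            raise AttributeError(name) from None

def probe(obj):
    try:
        obj.property_name
    except RecursionError:
        return "recursion"
    except AttributeError:
        return "attribute-error"
```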
|
closed
|
2025-03-12T18:33:46Z
|
2025-03-13T13:21:26Z
|
https://github.com/plotly/dash/issues/3212
|
[
"bug",
"P1",
"dash-3.0"
] |
T4rk1n
| 0 |
plotly/plotly.py
|
plotly
| 5,053 |
Flatten figures of subplot for easy plotting
|
I just dabbled in plotly for a project and noticed, via the documentation and Stack Overflow, that plotly does not have an easy way to flatten subplot axes for iteration. You literally have to explicitly define which row and column each trace should go to. It would be great if plotly could have a feature similar to matplotlib, where you can flatten a grid of axes into a list and just loop through it. Example below for clarity:
```
fig, axs = plt.subplots(nrows=4,ncols=4)
axs = axs.flatten()
for ax in axs:
ax.plot() # plot on each subplot from left to right for each row
```
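A stdlib sketch of the current workaround — generate the flattened (row, col) coordinates once and loop over them (the `add_trace` calls are shown as comments, assuming a `make_subplots` figure):

```python
from itertools import product

def flat_axes(nrows, ncols):
    # 1-indexed (row, col) pairs, left-to-right within each row,
    # matching plotly's make_subplots convention
    return list(product(range(1, nrows + 1), range(1, ncols + 1)))

for row, col in flat_axes(4, 4):
    pass  # fig.add_trace(go.Scatter(...), row=row, col=col)
```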
|
open
|
2025-02-23T05:01:56Z
|
2025-02-24T16:19:03Z
|
https://github.com/plotly/plotly.py/issues/5053
|
[
"feature",
"P3"
] |
Folarin14
| 0 |
Kanaries/pygwalker
|
plotly
| 137 |
Code export showing "Cancel" as "Cancal"
|
Just a really small one. When clicking on "export_code" on the visualization tab, the pop-out shows "Cancal" instead of "Cancel" on the bottom right (see picture in attachment).

|
closed
|
2023-06-21T13:38:17Z
|
2023-06-25T12:01:32Z
|
https://github.com/Kanaries/pygwalker/issues/137
|
[
"P2"
] |
abfallboerseMWE
| 3 |
mars-project/mars
|
pandas
| 2,860 |
[BUG]xgb train exception in py 3.9.7
|
<!--
Thank you for your contribution!
Please review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue.
-->
**Describe the bug**
An exception is raised when training a model with xgb.
My code is as follows:
```
(ray) [ray@ml-test ~]$ cat test_mars_xgb.py
import ray
ray.init(address="ray://172.16.210.22:10001")
import mars
import mars.tensor as mt
import mars.dataframe as md
session = mars.new_ray_session(worker_num=2, worker_mem=2 * 1024 ** 3)
from sklearn.datasets import load_boston
boston = load_boston()
data = md.DataFrame(boston.data, columns=boston.feature_names)
print("data.head().execute()")
print(data.head().execute())
print("data.describe().execute()")
print(data.describe().execute())
from mars.learn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(data, boston.target, train_size=0.7, random_state=0)
print("after split X_train: %s" % X_train)
from mars.learn.contrib import xgboost as xgb
train_dmatrix = xgb.MarsDMatrix(data=X_train, label=y_train)
test_dmatrix = xgb.MarsDMatrix(data=X_test, label=y_test)
print("train_dmatrix: %s" % train_dmatrix)
#params = {'objective': 'reg:squarederror','colsample_bytree': 0.3,'learning_rate': 0.1, 'max_depth': 5, 'alpha': 10, 'n_estimators': 10}
#booster = xgb.train(dtrain=train_dmatrix, params=params)
#xg_reg = xgb.XGBRegressor(objective='reg:squarederror', colsample_bytree=0.3, learning_rate=0.1, max_depth=5, alpha=10, n_estimators=10)
xg_reg = xgb.XGBRegressor()
print("xg_reg.fit %s" % xg_reg)
model = xg_reg.fit(X_train, y_train, session=session)
#xgb.predict(booster, X_test)
print("results.predict")
test_r = model.predict(X_test)
print("output:test_r:%s" % type(test_r))
print(test_r)
```
**To Reproduce**
To help us reproduce this bug, please provide the information below:
1. Your Python version:3.9.7
2. The version of Mars you use:0.9.0rc1
3. Versions of crucial packages, such as numpy, scipy and pandas
1. Ray:1.11.0
2. Numpy:1.22.3
3. Pandas:1.4.1
4. Scipy:1.8.0
4. Full stack of the error.
```
(ray) [ray@ml-test ~]$ python test_mars_xgb.py
2022-03-24 10:59:42,970 INFO ray.py:432 -- Start cluster with config {'services': ['cluster', 'session', 'storage', 'meta', 'lifecycle', 'scheduling', 'subtask', 'task', 'mutable'], 'cluster': {'backend': 'ray', 'node_timeout': 120, 'node_check_interval': 1, 'ray': {'supervisor': {'standalone': False, 'sub_pool_num': 0}}}, 'session': {'custom_log_dir': None}, 'storage': {'default_config': {'transfer_block_size': '5 * 1024 ** 2'}, 'plasma': {'store_memory': '20%'}, 'backends': ['ray']}, 'meta': {'store': 'dict'}, 'task': {'default_config': {'optimize_tileable_graph': True, 'optimize_chunk_graph': True, 'fuse_enabled': True, 'initial_same_color_num': None, 'as_broadcaster_successor_num': None}}, 'scheduling': {'autoscale': {'enabled': False, 'min_workers': 1, 'max_workers': 100, 'scheduler_backlog_timeout': 20, 'worker_idle_timeout': 40}, 'speculation': {'enabled': False, 'dry': False, 'interval': 5, 'threshold': '75%', 'min_task_runtime': 3, 'multiplier': 1.5, 'max_concurrent_run': 3}, 'subtask_cancel_timeout': 5, 'subtask_max_retries': 3, 'subtask_max_reschedules': 2}, 'metrics': {'backend': 'ray', 'port': 0}}
2022-03-24 10:59:42,970 INFO api.py:53 -- Finished initialize the metrics with backend ray
2022-03-24 10:59:42,970 INFO driver.py:34 -- Setup cluster with {'ray://ray-cluster-1648090782/0': {'CPU': 2}, 'ray://ray-cluster-1648090782/1': {'CPU': 2}}
2022-03-24 10:59:42,970 INFO driver.py:40 -- Creating placement group ray-cluster-1648090782 with bundles [{'CPU': 2}, {'CPU': 2}].
2022-03-24 10:59:43,852 INFO driver.py:55 -- Create placement group success.
2022-03-24 10:59:45,128 INFO backend.py:82 -- Submit create actor pool ClientActorHandle(44dff4e8c2ea47cdd02bb84609000000) took 1.2752630710601807 seconds.
2022-03-24 10:59:46,268 INFO backend.py:82 -- Submit create actor pool ClientActorHandle(9ee3d50e43948f0f784697b809000000) took 1.116509199142456 seconds.
2022-03-24 10:59:48,475 INFO backend.py:82 -- Submit create actor pool ClientActorHandle(01f40453e2be6ed5ff7204d409000000) took 2.1755218505859375 seconds.
2022-03-24 10:59:48,501 INFO backend.py:89 -- Start actor pool ClientActorHandle(44dff4e8c2ea47cdd02bb84609000000) took 3.352660894393921 seconds.
2022-03-24 10:59:48,501 INFO backend.py:89 -- Start actor pool ClientActorHandle(9ee3d50e43948f0f784697b809000000) took 2.2049944400787354 seconds.
2022-03-24 10:59:48,501 INFO ray.py:526 -- Create supervisor on node ray://ray-cluster-1648090782/0/0 succeeds.
2022-03-24 10:59:50,148 INFO ray.py:536 -- Start services on supervisor ray://ray-cluster-1648090782/0/0 succeeds.
2022-03-24 10:59:50,494 INFO backend.py:89 -- Start actor pool ClientActorHandle(01f40453e2be6ed5ff7204d409000000) took 1.9973196983337402 seconds.
2022-03-24 10:59:50,494 INFO ray.py:541 -- Create 2 workers succeeds.
2022-03-24 10:59:50,722 INFO ray.py:545 -- Start services on 2 workers succeeds.
(RaySubPool pid=15700, ip=172.16.210.21) 2022-03-24 10:59:50,720 ERROR serialization.py:311 -- __init__() missing 1 required positional argument: 'pid'
(RaySubPool pid=15700, ip=172.16.210.21) Traceback (most recent call last):
(RaySubPool pid=15700, ip=172.16.210.21) File "/home/ray/anaconda3/envs/ray/lib/python3.9/site-packages/ray/serialization.py", line 309, in deserialize_objects
(RaySubPool pid=15700, ip=172.16.210.21) obj = self._deserialize_object(data, metadata, object_ref)
(RaySubPool pid=15700, ip=172.16.210.21) File "/home/ray/anaconda3/envs/ray/lib/python3.9/site-packages/mars/oscar/backends/ray/communication.py", line 90, in _deserialize_object
(RaySubPool pid=15700, ip=172.16.210.21) value = _ray_deserialize_object(self, data, metadata, object_ref)
(RaySubPool pid=15700, ip=172.16.210.21) File "/home/ray/anaconda3/envs/ray/lib/python3.9/site-packages/ray/serialization.py", line 215, in _deserialize_object
(RaySubPool pid=15700, ip=172.16.210.21) return self._deserialize_msgpack_data(data, metadata_fields)
(RaySubPool pid=15700, ip=172.16.210.21) File "/home/ray/anaconda3/envs/ray/lib/python3.9/site-packages/ray/serialization.py", line 174, in _deserialize_msgpack_data
(RaySubPool pid=15700, ip=172.16.210.21) python_objects = self._deserialize_pickle5_data(pickle5_data)
(RaySubPool pid=15700, ip=172.16.210.21) File "/home/ray/anaconda3/envs/ray/lib/python3.9/site-packages/ray/serialization.py", line 164, in _deserialize_pickle5_data
(RaySubPool pid=15700, ip=172.16.210.21) obj = pickle.loads(in_band)
(RaySubPool pid=15700, ip=172.16.210.21) File "/home/ray/anaconda3/envs/ray/lib/python3.9/site-packages/mars/lib/tblib/pickling_support.py", line 29, in unpickle_exception
(RaySubPool pid=15700, ip=172.16.210.21) inst = func(*args)
(RaySubPool pid=15700, ip=172.16.210.21) TypeError: __init__() missing 1 required positional argument: 'pid'
2022-03-24 10:59:50,770 WARNING ray.py:556 -- Web service started at http://0.0.0.0:50749
(RaySubPool pid=3583) 2022-03-24 10:59:50,725 ERROR serialization.py:311 -- __init__() missing 1 required positional argument: 'pid'
(RaySubPool pid=3583) Traceback (most recent call last):
(RaySubPool pid=3583) File "/home/ray/anaconda3/envs/ray/lib/python3.9/site-packages/ray/serialization.py", line 309, in deserialize_objects
(RaySubPool pid=3583) obj = self._deserialize_object(data, metadata, object_ref)
(RaySubPool pid=3583) File "/home/ray/anaconda3/envs/ray/lib/python3.9/site-packages/mars/oscar/backends/ray/communication.py", line 90, in _deserialize_object
(RaySubPool pid=3583) value = _ray_deserialize_object(self, data, metadata, object_ref)
(RaySubPool pid=3583) File "/home/ray/anaconda3/envs/ray/lib/python3.9/site-packages/ray/serialization.py", line 215, in _deserialize_object
(RaySubPool pid=3583) return self._deserialize_msgpack_data(data, metadata_fields)
(RaySubPool pid=3583) File "/home/ray/anaconda3/envs/ray/lib/python3.9/site-packages/ray/serialization.py", line 174, in _deserialize_msgpack_data
(RaySubPool pid=3583) python_objects = self._deserialize_pickle5_data(pickle5_data)
(RaySubPool pid=3583) File "/home/ray/anaconda3/envs/ray/lib/python3.9/site-packages/ray/serialization.py", line 164, in _deserialize_pickle5_data
(RaySubPool pid=3583) obj = pickle.loads(in_band)
(RaySubPool pid=3583) File "/home/ray/anaconda3/envs/ray/lib/python3.9/site-packages/mars/lib/tblib/pickling_support.py", line 29, in unpickle_exception
(RaySubPool pid=3583) inst = func(*args)
(RaySubPool pid=3583) TypeError: __init__() missing 1 required positional argument: 'pid'
/home/ray/anaconda3/envs/ray/lib/python3.9/site-packages/sklearn/utils/deprecation.py:87: FutureWarning: Function load_boston is deprecated; `load_boston` is deprecated in 1.0 and will be removed in 1.2.
The Boston housing prices dataset has an ethical problem. You can refer to
the documentation of this function for further details.
The scikit-learn maintainers therefore strongly discourage the use of this
dataset unless the purpose of the code is to study and educate about
ethical issues in data science and machine learning.
In this special case, you can fetch the dataset from the original
source::
import pandas as pd
import numpy as np
data_url = "http://lib.stat.cmu.edu/datasets/boston"
raw_df = pd.read_csv(data_url, sep="\s+", skiprows=22, header=None)
data = np.hstack([raw_df.values[::2, :], raw_df.values[1::2, :2]])
target = raw_df.values[1::2, 2]
Alternative datasets include the California housing dataset (i.e.
:func:`~sklearn.datasets.fetch_california_housing`) and the Ames housing
dataset. You can load the datasets as follows::
from sklearn.datasets import fetch_california_housing
housing = fetch_california_housing()
for the California housing dataset and::
from sklearn.datasets import fetch_openml
housing = fetch_openml(name="house_prices", as_frame=True)
for the Ames housing dataset.
warnings.warn(msg, category=FutureWarning)
data.head().execute()
2022-03-24 10:59:51,023 INFO session.py:979 -- Time consuming to generate a tileable graph is 0.0007078647613525391s with address ray://ray-cluster-1648090782/0/0, session id zLE6ibnXqYxfFNUiCEndgZaF
CRIM ZN INDUS CHAS NOX RM AGE DIS RAD TAX PTRATIO B LSTAT
0 0.00632 18.0 2.31 0.0 0.538 6.575 65.2 4.0900 1.0 296.0 15.3 396.90 4.98
1 0.02731 0.0 7.07 0.0 0.469 6.421 78.9 4.9671 2.0 242.0 17.8 396.90 9.14
2 0.02729 0.0 7.07 0.0 0.469 7.185 61.1 4.9671 2.0 242.0 17.8 392.83 4.03
3 0.03237 0.0 2.18 0.0 0.458 6.998 45.8 6.0622 3.0 222.0 18.7 394.63 2.94
4 0.06905 0.0 2.18 0.0 0.458 7.147 54.2 6.0622 3.0 222.0 18.7 396.90 5.33
data.describe().execute()
2022-03-24 10:59:51,504 INFO session.py:979 -- Time consuming to generate a tileable graph is 0.0005688667297363281s with address ray://ray-cluster-1648090782/0/0, session id zLE6ibnXqYxfFNUiCEndgZaF
CRIM ZN INDUS CHAS NOX RM AGE DIS RAD TAX PTRATIO B LSTAT
count 506.000000 506.000000 506.000000 506.000000 506.000000 506.000000 506.000000 506.000000 506.000000 506.000000 506.000000 506.000000 506.000000
mean 3.613524 11.363636 11.136779 0.069170 0.554695 6.284634 68.574901 3.795043 9.549407 408.237154 18.455534 356.674032 12.653063
std 8.601545 23.322453 6.860353 0.253994 0.115878 0.702617 28.148861 2.105710 8.707259 168.537116 2.164946 91.294864 7.141062
min 0.006320 0.000000 0.460000 0.000000 0.385000 3.561000 2.900000 1.129600 1.000000 187.000000 12.600000 0.320000 1.730000
25% 0.082045 0.000000 5.190000 0.000000 0.449000 5.885500 45.025000 2.100175 4.000000 279.000000 17.400000 375.377500 6.950000
50% 0.256510 0.000000 9.690000 0.000000 0.538000 6.208500 77.500000 3.207450 5.000000 330.000000 19.050000 391.440000 11.360000
75% 3.677083 12.500000 18.100000 0.000000 0.624000 6.623500 94.075000 5.188425 24.000000 666.000000 20.200000 396.225000 16.955000
max 88.976200 100.000000 27.740000 1.000000 0.871000 8.780000 100.000000 12.126500 24.000000 711.000000 22.000000 396.900000 37.970000
2022-03-24 10:59:51,992 INFO session.py:979 -- Time consuming to generate a tileable graph is 0.0019736289978027344s with address ray://ray-cluster-1648090782/0/0, session id zLE6ibnXqYxfFNUiCEndgZaF
after split X_train: CRIM ZN INDUS CHAS NOX RM AGE DIS RAD TAX PTRATIO B LSTAT
191 0.06911 45.0 3.44 0.0 0.437 6.739 30.8 6.4798 5.0 398.0 15.2 389.71 4.69
380 88.97620 0.0 18.10 0.0 0.671 6.968 91.9 1.4165 24.0 666.0 20.2 396.90 17.21
337 0.03041 0.0 5.19 0.0 0.515 5.895 59.6 5.6150 5.0 224.0 20.2 394.81 10.56
266 0.78570 20.0 3.97 0.0 0.647 7.014 84.6 2.1329 5.0 264.0 13.0 384.07 14.79
221 0.40771 0.0 6.20 1.0 0.507 6.164 91.3 3.0480 8.0 307.0 17.4 395.24 21.46
.. ... ... ... ... ... ... ... ... ... ... ... ... ...
275 0.09604 40.0 6.41 0.0 0.447 6.854 42.8 4.2673 4.0 254.0 17.6 396.90 2.98
217 0.07013 0.0 13.89 0.0 0.550 6.642 85.1 3.4211 5.0 276.0 16.4 392.78 9.69
369 5.66998 0.0 18.10 1.0 0.631 6.683 96.8 1.3567 24.0 666.0 20.2 375.33 3.73
95 0.12204 0.0 2.89 0.0 0.445 6.625 57.8 3.4952 2.0 276.0 18.0 357.98 6.65
277 0.06127 40.0 6.41 1.0 0.447 6.826 27.6 4.8628 4.0 254.0 17.6 393.45 4.16
[354 rows x 13 columns]
train_dmatrix: DataFrame(op=ToDMatrix)
/home/ray/anaconda3/envs/ray/lib/python3.9/site-packages/xgboost/compat.py:36: FutureWarning: pandas.Int64Index is deprecated and will be removed from pandas in a future version. Use pandas.Index with the appropriate dtype instead.
from pandas import MultiIndex, Int64Index
xg_reg.fit XGBRegressor()
2022-03-24 10:59:53,085 INFO session.py:979 -- Time consuming to generate a tileable graph is 0.0010030269622802734s with address ray://ray-cluster-1648090782/0/0, session id zLE6ibnXqYxfFNUiCEndgZaF
(RaySubPool pid=15805, ip=172.16.210.21) Exception in thread Thread-42:
(RaySubPool pid=15805, ip=172.16.210.21) Traceback (most recent call last):
(RaySubPool pid=15805, ip=172.16.210.21) File "/home/ray/anaconda3/envs/ray/lib/python3.9/threading.py", line 973, in _bootstrap_inner
(RaySubPool pid=15805, ip=172.16.210.21) self.run()
(RaySubPool pid=15805, ip=172.16.210.21) File "/home/ray/anaconda3/envs/ray/lib/python3.9/threading.py", line 910, in run
(RaySubPool pid=15805, ip=172.16.210.21) self._target(*self._args, **self._kwargs)
(RaySubPool pid=15805, ip=172.16.210.21) File "/home/ray/anaconda3/envs/ray/lib/python3.9/site-packages/mars/learn/contrib/xgboost/tracker.py", line 355, in join
(RaySubPool pid=15805, ip=172.16.210.21) while self.thread.isAlive():
(RaySubPool pid=15805, ip=172.16.210.21) AttributeError: 'Thread' object has no attribute 'isAlive'
(RaySubPool pid=3583) [10:59:53] task NULL got new rank 0
2022-03-24 10:59:54,331 ERROR session.py:1822 -- Task exception was never retrieved
future: <Task finished name='Task-110' coro=<_wrap_awaitable() done, defined at /home/ray/anaconda3/envs/ray/lib/python3.9/asyncio/tasks.py:684> exception=TypeError("ufunc 'isnan' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''")>
Traceback (most recent call last):
File "/home/ray/anaconda3/envs/ray/lib/python3.9/asyncio/tasks.py", line 691, in _wrap_awaitable
return (yield from awaitable.__await__())
File "/home/ray/anaconda3/envs/ray/lib/python3.9/site-packages/mars/deploy/oscar/session.py", line 106, in wait
return await self._aio_task
File "/home/ray/anaconda3/envs/ray/lib/python3.9/site-packages/mars/deploy/oscar/session.py", line 950, in _run_in_background
fetch_tileables = await self._task_api.get_fetch_tileables(task_id)
File "/home/ray/anaconda3/envs/ray/lib/python3.9/site-packages/mars/services/task/api/oscar.py", line 100, in get_fetch_tileables
return await self._task_manager_ref.get_task_result_tileables(task_id)
File "/home/ray/anaconda3/envs/ray/lib/python3.9/site-packages/mars/oscar/backends/context.py", line 188, in send
result = await self._wait(future, actor_ref.address, message)
File "/home/ray/anaconda3/envs/ray/lib/python3.9/site-packages/mars/oscar/backends/context.py", line 83, in _wait
return await future
File "/home/ray/anaconda3/envs/ray/lib/python3.9/site-packages/mars/oscar/backends/context.py", line 74, in _wait
await asyncio.shield(future)
File "/home/ray/anaconda3/envs/ray/lib/python3.9/site-packages/mars/oscar/backends/core.py", line 50, in _listen
message: _MessageBase = await client.recv()
File "/home/ray/anaconda3/envs/ray/lib/python3.9/site-packages/mars/oscar/backends/communication/base.py", line 262, in recv
return await self.channel.recv()
File "/home/ray/anaconda3/envs/ray/lib/python3.9/site-packages/mars/oscar/backends/ray/communication.py", line 209, in recv
result = await object_ref
File "/home/ray/anaconda3/envs/ray/lib/python3.9/site-packages/ray/util/client/server/server.py", line 375, in send_get_response
serialized = dumps_from_server(result, client_id, self)
File "/home/ray/anaconda3/envs/ray/lib/python3.9/site-packages/ray/util/client/server/server_pickler.py", line 114, in dumps_from_server
sp.dump(obj)
File "/home/ray/anaconda3/envs/ray/lib/python3.9/site-packages/ray/cloudpickle/cloudpickle_fast.py", line 620, in dump
return Pickler.dump(self, obj)
File "/home/ray/anaconda3/envs/ray/lib/python3.9/site-packages/mars/oscar/backends/ray/communication.py", line 55, in __reduce__
return _argwrapper_unpickler, (serialize(self.message),)
File "/home/ray/anaconda3/envs/ray/lib/python3.9/site-packages/mars/serialization/core.py", line 361, in serialize
gen_to_serial = gen.send(last_serial)
File "/home/ray/anaconda3/envs/ray/lib/python3.9/site-packages/mars/core/base.py", line 140, in serialize
return (yield from super().serialize(obj, context))
File "/home/ray/anaconda3/envs/ray/lib/python3.9/site-packages/mars/serialization/serializables/core.py", line 108, in serialize
tag_to_values = self._get_tag_to_values(obj)
File "/home/ray/anaconda3/envs/ray/lib/python3.9/site-packages/mars/serialization/serializables/core.py", line 101, in _get_tag_to_values
value = field.on_serialize(value)
File "/home/ray/anaconda3/envs/ray/lib/python3.9/site-packages/mars/utils.py", line 157, in on_serialize_nsplits
new_nsplits.append(tuple(None if np.isnan(v) else v for v in dim_splits))
File "/home/ray/anaconda3/envs/ray/lib/python3.9/site-packages/mars/utils.py", line 157, in <genexpr>
new_nsplits.append(tuple(None if np.isnan(v) else v for v in dim_splits))
TypeError: ufunc 'isnan' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''
Traceback (most recent call last):
File "/home/ray/test_mars_xgb.py", line 42, in <module>
model = xg_reg.fit(X_train, y_train, session=session)
File "/home/ray/anaconda3/envs/ray/lib/python3.9/site-packages/mars/learn/contrib/xgboost/regressor.py", line 61, in fit
result = train(
File "/home/ray/anaconda3/envs/ray/lib/python3.9/site-packages/mars/learn/contrib/xgboost/train.py", line 249, in train
ret = t.execute(session=session, **run_kwargs).fetch(session=session)
File "/home/ray/anaconda3/envs/ray/lib/python3.9/site-packages/mars/core/entity/executable.py", line 98, in execute
return execute(self, session=session, **kw)
File "/home/ray/anaconda3/envs/ray/lib/python3.9/site-packages/mars/deploy/oscar/session.py", line 1851, in execute
return session.execute(
File "/home/ray/anaconda3/envs/ray/lib/python3.9/site-packages/mars/deploy/oscar/session.py", line 1647, in execute
execution_info: ExecutionInfo = fut.result(
File "/home/ray/anaconda3/envs/ray/lib/python3.9/concurrent/futures/_base.py", line 445, in result
return self.__get_result()
File "/home/ray/anaconda3/envs/ray/lib/python3.9/concurrent/futures/_base.py", line 390, in __get_result
raise self._exception
File "/home/ray/anaconda3/envs/ray/lib/python3.9/site-packages/mars/deploy/oscar/session.py", line 1831, in _execute
await execution_info
File "/home/ray/anaconda3/envs/ray/lib/python3.9/asyncio/tasks.py", line 691, in _wrap_awaitable
return (yield from awaitable.__await__())
File "/home/ray/anaconda3/envs/ray/lib/python3.9/site-packages/mars/deploy/oscar/session.py", line 106, in wait
return await self._aio_task
File "/home/ray/anaconda3/envs/ray/lib/python3.9/site-packages/mars/deploy/oscar/session.py", line 950, in _run_in_background
fetch_tileables = await self._task_api.get_fetch_tileables(task_id)
File "/home/ray/anaconda3/envs/ray/lib/python3.9/site-packages/mars/services/task/api/oscar.py", line 100, in get_fetch_tileables
return await self._task_manager_ref.get_task_result_tileables(task_id)
File "/home/ray/anaconda3/envs/ray/lib/python3.9/site-packages/mars/oscar/backends/context.py", line 188, in send
result = await self._wait(future, actor_ref.address, message)
File "/home/ray/anaconda3/envs/ray/lib/python3.9/site-packages/mars/oscar/backends/context.py", line 83, in _wait
return await future
File "/home/ray/anaconda3/envs/ray/lib/python3.9/site-packages/mars/oscar/backends/context.py", line 74, in _wait
await asyncio.shield(future)
File "/home/ray/anaconda3/envs/ray/lib/python3.9/site-packages/mars/oscar/backends/core.py", line 50, in _listen
message: _MessageBase = await client.recv()
File "/home/ray/anaconda3/envs/ray/lib/python3.9/site-packages/mars/oscar/backends/communication/base.py", line 262, in recv
return await self.channel.recv()
File "/home/ray/anaconda3/envs/ray/lib/python3.9/site-packages/mars/oscar/backends/ray/communication.py", line 209, in recv
result = await object_ref
File "/home/ray/anaconda3/envs/ray/lib/python3.9/site-packages/ray/util/client/server/server.py", line 375, in send_get_response
serialized = dumps_from_server(result, client_id, self)
File "/home/ray/anaconda3/envs/ray/lib/python3.9/site-packages/ray/util/client/server/server_pickler.py", line 114, in dumps_from_server
sp.dump(obj)
File "/home/ray/anaconda3/envs/ray/lib/python3.9/site-packages/ray/cloudpickle/cloudpickle_fast.py", line 620, in dump
return Pickler.dump(self, obj)
File "/home/ray/anaconda3/envs/ray/lib/python3.9/site-packages/mars/oscar/backends/ray/communication.py", line 55, in __reduce__
return _argwrapper_unpickler, (serialize(self.message),)
File "/home/ray/anaconda3/envs/ray/lib/python3.9/site-packages/mars/serialization/core.py", line 361, in serialize
gen_to_serial = gen.send(last_serial)
File "/home/ray/anaconda3/envs/ray/lib/python3.9/site-packages/mars/core/base.py", line 140, in serialize
return (yield from super().serialize(obj, context))
File "/home/ray/anaconda3/envs/ray/lib/python3.9/site-packages/mars/serialization/serializables/core.py", line 108, in serialize
tag_to_values = self._get_tag_to_values(obj)
File "/home/ray/anaconda3/envs/ray/lib/python3.9/site-packages/mars/serialization/serializables/core.py", line 101, in _get_tag_to_values
value = field.on_serialize(value)
File "/home/ray/anaconda3/envs/ray/lib/python3.9/site-packages/mars/utils.py", line 157, in on_serialize_nsplits
new_nsplits.append(tuple(None if np.isnan(v) else v for v in dim_splits))
File "/home/ray/anaconda3/envs/ray/lib/python3.9/site-packages/mars/utils.py", line 157, in <genexpr>
new_nsplits.append(tuple(None if np.isnan(v) else v for v in dim_splits))
TypeError: ufunc 'isnan' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''
(RaySubPool pid=3400) Main pool Actor(RayMainPool, 9ee3d50e43948f0f784697b809000000) has exited, exit current sub pool now.
(RaySubPool pid=3400) Traceback (most recent call last):
(RaySubPool pid=3400) File "/home/ray/anaconda3/envs/ray/lib/python3.9/site-packages/mars/oscar/backends/ray/pool.py", line 365, in check_main_pool_alive
(RaySubPool pid=3400) main_pool_start_timestamp = await main_pool.alive.remote()
(RaySubPool pid=3400) ray.exceptions.RayActorError: The actor died unexpectedly before finishing this task.
(RaySubPool pid=3400) class_name: RayMainPool
(RaySubPool pid=3400) actor_id: 9ee3d50e43948f0f784697b809000000
(RaySubPool pid=3400) pid: 3514
(RaySubPool pid=3400) name: ray://ray-cluster-1648090782/0/1
(RaySubPool pid=3400) namespace: b7b70429-e17c-486f-9172-0872403ed6ef
(RaySubPool pid=3400) ip: 172.16.210.22
(RaySubPool pid=3400) The actor is dead because because all references to the actor were removed.
A worker died or was killed while executing a task by an unexpected system error. To troubleshoot the problem, check the logs for the dead worker. RayTask ID: ffffffffffffffff6f1ccaae6135c700f75befbe09000000 Worker ID: 707d6a3f910fa005ec33fe7ae60ddef5cfc1b9eb67510f1bc0f19623 Node ID: 7c54d788f2585a26ce8ef92e01f7e774359a4f0636b4bcfcb84272f7 Worker IP address: 172.16.210.21 Worker port: 10043 Worker PID: 15700
Exception ignored in: <function _TileableSession.__init__.<locals>.cb at 0x7efbd9a75160>
Traceback (most recent call last):
File "/home/ray/anaconda3/envs/ray/lib/python3.9/site-packages/mars/core/entity/executable.py", line 52, in cb
File "/home/ray/anaconda3/envs/ray/lib/python3.9/concurrent/futures/thread.py", line 156, in submit
AttributeError: __enter__
Exception ignored in: <function _TileableSession.__init__.<locals>.cb at 0x7efbd9a75dc0>
Traceback (most recent call last):
File "/home/ray/anaconda3/envs/ray/lib/python3.9/site-packages/mars/core/entity/executable.py", line 52, in cb
File "/home/ray/anaconda3/envs/ray/lib/python3.9/concurrent/futures/thread.py", line 156, in submit
AttributeError: __enter__
```
5. Minimized code to reproduce the error.
**Expected behavior**
A clear and concise description of what you expected to happen.
**Additional context**
Add any other context about the problem here.
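A note on the first traceback above: it is reproducible independently of Mars. `Thread.isAlive()` was deprecated in Python 3.8 and removed in 3.9, so any code still calling it (here `mars/learn/contrib/xgboost/tracker.py`) fails on Python 3.9.7; the replacement is `Thread.is_alive()`:

```python
import threading

t = threading.Thread(target=lambda: None)
t.start()
t.join()

assert t.is_alive() is False      # the surviving Python 3 spelling
assert not hasattr(t, "isAlive")  # removed in 3.9 -> AttributeError in the tracker
```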
|
closed
|
2022-03-24T03:00:29Z
|
2022-03-24T07:48:45Z
|
https://github.com/mars-project/mars/issues/2860
|
[
"type: bug",
"mod: learn",
"prio: high"
] |
wuyeguo
| 1 |
piskvorky/gensim
|
nlp
| 3,216 |
Number of workers when working on multicore systems
|
Hello,
I am trying to run FastText on a huge corpus of newspaper text on a multicore server at my university. I have requested 48 cores for the job, and I am wondering whether I also have to specify workers=48 in the FastText parameters. I can't tell from the documentation whether it has to be set like this.
`bsub -W 12:00 -n 48 -N -B -R "rusage[mem=8GB]" python scriptname.py`
Thanks a lot.
Sandra
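For what it's worth: yes, `workers` must be set explicitly. gensim's `FastText` defaults to `workers=3` regardless of how many cores the scheduler grants, so requesting 48 cores from LSF alone changes nothing. A stdlib sketch for deriving the worker count from the actual CPU allocation (assuming a Linux host; `sched_getaffinity` is Linux-only):

```python
import os

# On Linux, sched_getaffinity reports the cores this process may actually
# use (batch schedulers pin the job to its allocation); fall back to
# cpu_count elsewhere.
if hasattr(os, "sched_getaffinity"):
    n_workers = len(os.sched_getaffinity(0))
else:
    n_workers = os.cpu_count() or 1

# model = FastText(corpus_iterable, workers=n_workers)  # default is only 3
print(n_workers)
```

Note that gensim's training parallelism tends to saturate well below 48 threads for this kind of workload, so it may be worth benchmarking smaller values too.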
|
closed
|
2021-08-18T08:23:03Z
|
2021-08-18T09:48:16Z
|
https://github.com/piskvorky/gensim/issues/3216
|
[] |
sandrastampibombelli
| 1 |
pykaldi/pykaldi
|
numpy
| 92 |
It gives an error when I use the Cmvn.apply() function
|
```python
# C is a kaldi matrix of size 124x83
from kaldi.util.io import Input  # assuming Input comes from kaldi.util.io
from kaldi.matrix import DoubleMatrix
from kaldi.transform.cmvn import Cmvn

ki = Input("cmvn.ark")
cmvn_stats = DoubleMatrix()
cmvn_data = cmvn_stats.read_(ki.stream(), True)
cmvn = Cmvn(83)
cmvn.apply(C)
```
The error is:
```
.../transform/cmvn.py in apply(self, feats, norm_vars, reverse)
     86             _cmvn.apply_cmvn_reverse(self.stats, norm_vars, feats)
     87         else:
---> 88             _cmvn.apply_cmvn(self.stats, norm_vars, feats)

RuntimeError: C++ exception:
```
|
closed
|
2019-03-18T10:20:50Z
|
2019-03-29T08:07:26Z
|
https://github.com/pykaldi/pykaldi/issues/92
|
[] |
liuchenbaidu
| 3 |
opengeos/leafmap
|
jupyter
| 231 |
Help with netcdf files
|
### Environment Information
- leafmap version: 0.9.1
- Python version: 3.9.10
- Operating System: macOS 10.6.5
### Description
I was trying to follow along with the [netCDF example](https://leafmap.org/notebooks/52_netcdf/), but with my own netCDF file.
When I do that, I get an error:
```python
AttributeError: 'Dataset' object has no attribute 'rio'
```
### What I Did
```python
import leafmap
filename = "test.nc4"
data = leafmap.read_netcdf(filename)
m = leafmap.Map(layers_control=True)
tif = 'wind_global.tif'
leafmap.netcdf_to_tif(filename, tif, variables=['U', 'V'], shift_lon=True)
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Input In [27], in <module>
----> 1 leafmap.netcdf_to_tif(filename, tif, variables=['U', 'V'], shift_lon=True)
File ~/GEOSpyD/4.10.3_py3.9/2022-02-14/lib/python3.9/site-packages/leafmap/common.py:5350, in netcdf_to_tif(filename, output, variables, shift_lon, lat, lon, return_vars, **kwargs)
5348 xds.rio.set_spatial_dims(x_dim=lon, y_dim=lat).rio.to_raster(output)
5349 else:
-> 5350 xds[variables].rio.set_spatial_dims(x_dim=lon, y_dim=lat).rio.to_raster(output)
5352 if return_vars:
5353 return output, allowed_vars
File ~/GEOSpyD/4.10.3_py3.9/2022-02-14/lib/python3.9/site-packages/xarray/core/common.py:239, in AttrAccessMixin.__getattr__(self, name)
237 with suppress(KeyError):
238 return source[name]
--> 239 raise AttributeError(
240 f"{type(self).__name__!r} object has no attribute {name!r}"
241 )
AttributeError: 'Dataset' object has no attribute 'rio'
```
Note that in my test file, I have `U` and `V`, not `u_wind` and `v_wind`.
I also tried the `m.add_netcdf()` version, but it died with essentially the same error.
I'm not that good at Python, I just saw this project and thought, I want to try it out! :)
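For reference, the `.rio` accessor is not part of xarray itself: it is registered as a side effect of importing `rioxarray`, so an `AttributeError: 'Dataset' object has no attribute 'rio'` usually means that import never happened or the package is missing from the environment. A sketch of the check, guarded in case rioxarray is not installed:

```python
try:
    import rioxarray  # noqa: F401 -- importing it registers the .rio accessor
    has_rio = True
except ImportError:
    # Without this package/import, xds.rio raises
    # AttributeError: 'Dataset' object has no attribute 'rio'
    has_rio = False

print("rioxarray available:", has_rio)
```

If `has_rio` is False here, `pip install rioxarray` in the same environment as leafmap would be the first thing to try.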
|
closed
|
2022-04-05T12:30:32Z
|
2022-04-05T16:31:04Z
|
https://github.com/opengeos/leafmap/issues/231
|
[
"bug"
] |
mathomp4
| 6 |
mljar/mercury
|
data-visualization
| 135 |
disable execution history in watch mode
|
Please disable execution history in watch mode. In watch mode, we expect many changes in the notebook itself, so there is no need to show execution history.
What is more, execution history resets widget values:

|
closed
|
2022-07-12T10:25:07Z
|
2022-07-12T10:36:43Z
|
https://github.com/mljar/mercury/issues/135
|
[
"bug"
] |
pplonski
| 0 |
saulpw/visidata
|
pandas
| 2,224 |
[split] Change in behaviour in visidata 3.0
|
regex split behavior has changed with the update to version 3.
Instead of splitting into columns (the behavior I expected from version 2),
I instead get a single new column with the format:
[N] first_value ; second_value
where N is a number (it appears to be the field count)
and the various fields are all contained in that first (and only) field.
It's clear that something is being done, as the regex delimiter has been replaced by a semicolon.
Is this due to a change in how columns are handled in version 3?
(It seems possible that the semicolon might be a default output.)
If this is more appropriately a bug, update the ticket and I'll provide more information.
(FWIW, I am using pipx to install, with some additional plugins and their support libraries, and I've tested without a visidatarc.)
|
closed
|
2024-01-03T16:03:58Z
|
2024-10-12T04:41:01Z
|
https://github.com/saulpw/visidata/issues/2224
|
[
"documentation"
] |
fourjay
| 12 |
deepspeedai/DeepSpeed
|
deep-learning
| 6,589 |
[BUG] MOE: Loading experts parameters error when using expert parallel.
|
**Describe the bug**
I have a model with 60 experts, and I am training it with expert parallelism on two GPUs. Theoretically, GPU0 should load the parameters of the first 30 experts, while GPU1 should load the parameters of the last 30. However, I found that both GPUs load the parameters of the first 30 experts. How should I modify this?
|
open
|
2024-09-29T09:32:54Z
|
2024-10-08T12:49:13Z
|
https://github.com/deepspeedai/DeepSpeed/issues/6589
|
[
"bug",
"training"
] |
kakaxi-liu
| 1 |
pytest-dev/pytest-mock
|
pytest
| 101 |
Confusing requirement on mock for Python 2
|
The README explicitly lists `mock` as a requirement for Python 2, whereas it is listed in `extras_require` in the setup script. Please clarify whether `mock` is an optional or a required dependency of `pytest-mock` on Python 2. If optional, please adjust the wording of the README; if required (which I suspect), please list `mock` under `install_requires` for Python 2.
|
closed
|
2018-02-15T20:05:21Z
|
2018-02-16T19:51:05Z
|
https://github.com/pytest-dev/pytest-mock/issues/101
|
[] |
ghisvail
| 1 |
graphdeco-inria/gaussian-splatting
|
computer-vision
| 959 |
How to get the cross section of reconstructions using gaussian splatting
|
open
|
2024-08-30T10:12:25Z
|
2024-08-30T10:12:25Z
|
https://github.com/graphdeco-inria/gaussian-splatting/issues/959
|
[] |
sdfabkapoe
| 0 |
|
ultralytics/ultralytics
|
deep-learning
| 19,691 |
Error occurred when training YOLOV11 on dataset open-images-v7
|
### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
train.py:
from ultralytics import YOLO
# Load a COCO-pretrained YOLO11n model
model = YOLO("yolo11n.pt")
# Train the model on the Open Images V7 dataset
results = model.train(data="open-images-v7.yaml", epochs=100, imgsz=640)
error output:
Ultralytics 8.3.85 🚀 Python-3.10.15 torch-2.5.0+cu124 CUDA:0 (NVIDIA GeForce RTX 4080, 16076MiB)
engine/trainer: task=detect, mode=train, model=yolo11n.pt, data=open-images-v7.yaml, epochs=100, time=None, patience=100, batch=16, imgsz=640, save=True, save_period=-1, cache=False, device=None, workers=8, project=None, name=train14, exist_ok=False, pretrained=True, optimizer=auto, verbose=True, seed=0, deterministic=True, single_cls=False, rect=False, cos_lr=False, close_mosaic=10, resume=False, amp=True, fraction=1.0, profile=False, freeze=None, multi_scale=False, overlap_mask=True, mask_ratio=4, dropout=0.0, val=True, split=val, save_json=False, save_hybrid=False, conf=None, iou=0.7, max_det=300, half=False, dnn=False, plots=True, source=None, vid_stride=1, stream_buffer=False, visualize=False, augment=False, agnostic_nms=False, classes=None, retina_masks=False, embed=None, show=False, save_frames=False, save_txt=False, save_conf=False, save_crop=False, show_labels=True, show_conf=True, show_boxes=True, line_width=None, format=torchscript, keras=False, optimize=False, int8=False, dynamic=False, simplify=True, opset=None, workspace=None, nms=False, lr0=0.01, lrf=0.01, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=7.5, cls=0.5, dfl=1.5, pose=12.0, kobj=1.0, nbs=64, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.5, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, bgr=0.0, mosaic=1.0, mixup=0.0, copy_paste=0.0, copy_paste_mode=flip, auto_augment=randaugment, erasing=0.4, crop_fraction=1.0, cfg=None, tracker=botsort.yaml, save_dir=/media/user/新加卷/zxc_ubuntu/code/ultralytics/runs/detect/train14
Dataset 'open-images-v7.yaml' images not found ⚠️, missing path '/media/user/新加卷/zxc_ubuntu/code/datasets/open-images-v7/images/val'
WARNING ⚠️ Open Images V7 dataset requires at least **561 GB of free space. Starting download...
Downloading split 'train' to '/media/user/新加卷/zxc_ubuntu/code/datasets/fiftyone/open-images-v7/open-images-v7/train' if necessary
Only found 744299 (<1743042) samples matching your requirements
Necessary images already downloaded
Existing download of split 'train' is sufficient
Subprocess ['/home/user/anaconda3/envs/yolo/lib/python3.10/site-packages/fiftyone/db/bin/mongod', '--dbpath', '/home/user/.fiftyone/var/lib/mongo', '--logpath', '/home/user/.fiftyone/var/lib/mongo/log/mongo.log', '--port', '0', '--nounixsocket'] exited with error 127:
/home/user/anaconda3/envs/yolo/lib/python3.10/site-packages/fiftyone/db/bin/mongod: error while loading shared libraries: libcrypto.so.3: cannot open shared object file: No such file or directory
Traceback (most recent call last):
File "/media/user/新加卷/zxc_ubuntu/code/ultralytics/ultralytics/engine/trainer.py", line 564, in get_dataset
data = check_det_dataset(self.args.data)
File "/media/user/新加卷/zxc_ubuntu/code/ultralytics/ultralytics/data/utils.py", line 385, in check_det_dataset
exec(s, {"yaml": data})
File "<string>", line 21, in <module>
File "/home/user/anaconda3/envs/yolo/lib/python3.10/site-packages/fiftyone/zoo/datasets/__init__.py", line 399, in load_zoo_dataset
if fo.dataset_exists(dataset_name):
File "/home/user/anaconda3/envs/yolo/lib/python3.10/site-packages/fiftyone/core/dataset.py", line 103, in dataset_exists
conn = foo.get_db_conn()
File "/home/user/anaconda3/envs/yolo/lib/python3.10/site-packages/fiftyone/core/odm/database.py", line 394, in get_db_conn
_connect()
File "/home/user/anaconda3/envs/yolo/lib/python3.10/site-packages/fiftyone/core/odm/database.py", line 233, in _connect
establish_db_conn(fo.config)
File "/home/user/anaconda3/envs/yolo/lib/python3.10/site-packages/fiftyone/core/odm/database.py", line 195, in establish_db_conn
port = _db_service.port
File "/home/user/anaconda3/envs/yolo/lib/python3.10/site-packages/fiftyone/core/service.py", line 277, in port
return self._wait_for_child_port()
File "/home/user/anaconda3/envs/yolo/lib/python3.10/site-packages/fiftyone/core/service.py", line 171, in _wait_for_child_port
return find_port()
File "/home/user/anaconda3/envs/yolo/lib/python3.10/site-packages/retrying.py", line 56, in wrapped_f
return Retrying(*dargs, **dkw).call(f, *args, **kw)
File "/home/user/anaconda3/envs/yolo/lib/python3.10/site-packages/retrying.py", line 266, in call
raise attempt.get()
File "/home/user/anaconda3/envs/yolo/lib/python3.10/site-packages/retrying.py", line 301, in get
six.reraise(self.value[0], self.value[1], self.value[2])
File "/home/user/anaconda3/envs/yolo/lib/python3.10/site-packages/six.py", line 719, in reraise
raise value
File "/home/user/anaconda3/envs/yolo/lib/python3.10/site-packages/retrying.py", line 251, in call
attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
File "/home/user/anaconda3/envs/yolo/lib/python3.10/site-packages/fiftyone/core/service.py", line 169, in find_port
raise ServiceListenTimeout(etau.get_class_name(self), port)
fiftyone.core.service.ServiceListenTimeout: fiftyone.core.service.DatabaseService failed to bind to port
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/media/user/新加卷/zxc_ubuntu/code/ultralytics/train.py", line 7, in <module>
results = model.train(data="open-images-v7.yaml", epochs=100, imgsz=640)
File "/media/user/新加卷/zxc_ubuntu/code/ultralytics/ultralytics/engine/model.py", line 804, in train
self.trainer = (trainer or self._smart_load("trainer"))(overrides=args, _callbacks=self.callbacks)
File "/media/user/新加卷/zxc_ubuntu/code/ultralytics/ultralytics/engine/trainer.py", line 134, in __init__
self.trainset, self.testset = self.get_dataset()
File "/media/user/新加卷/zxc_ubuntu/code/ultralytics/ultralytics/engine/trainer.py", line 568, in get_dataset
raise RuntimeError(emojis(f"Dataset '{clean_url(self.args.data)}' error ❌ {e}")) from e
RuntimeError: Dataset 'open-images-v7.yaml' error ❌ fiftyone.core.service.DatabaseService failed to bind to port
open-images-v7.yaml:
# Ultralytics 🚀 AGPL-3.0 License - https://ultralytics.com/license
# Open Images v7 dataset https://storage.googleapis.com/openimages/web/index.html by Google
# Documentation: https://docs.ultralytics.com/datasets/detect/open-images-v7/
# Example usage: yolo train data=open-images-v7.yaml
# parent
# ├── ultralytics
# └── datasets
# └── open-images-v7 ← downloads here (561 GB)
# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path: ../datasets/open-images-v7 # dataset root dir
train: images/train # train images (relative to 'path') 1743042 images
val: images/val # val images (relative to 'path') 41620 images
test: # test images (optional)
# Classes
names:
0: Accordion
1: Adhesive tape
2: Aircraft
3: Airplane
4: Alarm clock
5: Alpaca
6: Ambulance
7: Animal
8: Ant
9: Antelope
10: Apple
11: Armadillo
12: Artichoke
13: Auto part
14: Axe
15: Backpack
16: Bagel
17: Baked goods
18: Balance beam
19: Ball
20: Balloon
21: Banana
22: Band-aid
23: Banjo
24: Barge
25: Barrel
26: Baseball bat
27: Baseball glove
28: Bat (Animal)
29: Bathroom accessory
30: Bathroom cabinet
31: Bathtub
32: Beaker
33: Bear
34: Bed
35: Bee
36: Beehive
37: Beer
38: Beetle
39: Bell pepper
40: Belt
41: Bench
42: Bicycle
43: Bicycle helmet
44: Bicycle wheel
45: Bidet
46: Billboard
47: Billiard table
48: Binoculars
49: Bird
50: Blender
51: Blue jay
52: Boat
53: Bomb
54: Book
55: Bookcase
56: Boot
57: Bottle
58: Bottle opener
59: Bow and arrow
60: Bowl
61: Bowling equipment
62: Box
63: Boy
64: Brassiere
65: Bread
66: Briefcase
67: Broccoli
68: Bronze sculpture
69: Brown bear
70: Building
71: Bull
72: Burrito
73: Bus
74: Bust
75: Butterfly
76: Cabbage
77: Cabinetry
78: Cake
79: Cake stand
80: Calculator
81: Camel
82: Camera
83: Can opener
84: Canary
85: Candle
86: Candy
87: Cannon
88: Canoe
89: Cantaloupe
90: Car
91: Carnivore
92: Carrot
93: Cart
94: Cassette deck
95: Castle
96: Cat
97: Cat furniture
98: Caterpillar
99: Cattle
100: Ceiling fan
101: Cello
102: Centipede
103: Chainsaw
104: Chair
105: Cheese
106: Cheetah
107: Chest of drawers
108: Chicken
109: Chime
110: Chisel
111: Chopsticks
112: Christmas tree
113: Clock
114: Closet
115: Clothing
116: Coat
117: Cocktail
118: Cocktail shaker
119: Coconut
120: Coffee
121: Coffee cup
122: Coffee table
123: Coffeemaker
124: Coin
125: Common fig
126: Common sunflower
127: Computer keyboard
128: Computer monitor
129: Computer mouse
130: Container
131: Convenience store
132: Cookie
133: Cooking spray
134: Corded phone
135: Cosmetics
136: Couch
137: Countertop
138: Cowboy hat
139: Crab
140: Cream
141: Cricket ball
142: Crocodile
143: Croissant
144: Crown
145: Crutch
146: Cucumber
147: Cupboard
148: Curtain
149: Cutting board
150: Dagger
151: Dairy Product
152: Deer
153: Desk
154: Dessert
155: Diaper
156: Dice
157: Digital clock
158: Dinosaur
159: Dishwasher
160: Dog
161: Dog bed
162: Doll
163: Dolphin
164: Door
165: Door handle
166: Doughnut
167: Dragonfly
168: Drawer
169: Dress
170: Drill (Tool)
171: Drink
172: Drinking straw
173: Drum
174: Duck
175: Dumbbell
176: Eagle
177: Earrings
178: Egg (Food)
179: Elephant
180: Envelope
181: Eraser
182: Face powder
183: Facial tissue holder
184: Falcon
185: Fashion accessory
186: Fast food
187: Fax
188: Fedora
189: Filing cabinet
190: Fire hydrant
191: Fireplace
192: Fish
193: Flag
194: Flashlight
195: Flower
196: Flowerpot
197: Flute
198: Flying disc
199: Food
200: Food processor
201: Football
202: Football helmet
203: Footwear
204: Fork
205: Fountain
206: Fox
207: French fries
208: French horn
209: Frog
210: Fruit
211: Frying pan
212: Furniture
213: Garden Asparagus
214: Gas stove
215: Giraffe
216: Girl
217: Glasses
218: Glove
219: Goat
220: Goggles
221: Goldfish
222: Golf ball
223: Golf cart
224: Gondola
225: Goose
226: Grape
227: Grapefruit
228: Grinder
229: Guacamole
230: Guitar
231: Hair dryer
232: Hair spray
233: Hamburger
234: Hammer
235: Hamster
236: Hand dryer
237: Handbag
238: Handgun
239: Harbor seal
240: Harmonica
241: Harp
242: Harpsichord
243: Hat
244: Headphones
245: Heater
246: Hedgehog
247: Helicopter
248: Helmet
249: High heels
250: Hiking equipment
251: Hippopotamus
252: Home appliance
253: Honeycomb
254: Horizontal bar
255: Horse
256: Hot dog
257: House
258: Houseplant
259: Human arm
260: Human beard
261: Human body
262: Human ear
263: Human eye
264: Human face
265: Human foot
266: Human hair
267: Human hand
268: Human head
269: Human leg
270: Human mouth
271: Human nose
272: Humidifier
273: Ice cream
274: Indoor rower
275: Infant bed
276: Insect
277: Invertebrate
278: Ipod
279: Isopod
280: Jacket
281: Jacuzzi
282: Jaguar (Animal)
283: Jeans
284: Jellyfish
285: Jet ski
286: Jug
287: Juice
288: Kangaroo
289: Kettle
290: Kitchen & dining room table
291: Kitchen appliance
292: Kitchen knife
293: Kitchen utensil
294: Kitchenware
295: Kite
296: Knife
297: Koala
298: Ladder
299: Ladle
300: Ladybug
301: Lamp
302: Land vehicle
303: Lantern
304: Laptop
305: Lavender (Plant)
306: Lemon
307: Leopard
308: Light bulb
309: Light switch
310: Lighthouse
311: Lily
312: Limousine
313: Lion
314: Lipstick
315: Lizard
316: Lobster
317: Loveseat
318: Luggage and bags
319: Lynx
320: Magpie
321: Mammal
322: Man
323: Mango
324: Maple
325: Maracas
326: Marine invertebrates
327: Marine mammal
328: Measuring cup
329: Mechanical fan
330: Medical equipment
331: Microphone
332: Microwave oven
333: Milk
334: Miniskirt
335: Mirror
336: Missile
337: Mixer
338: Mixing bowl
339: Mobile phone
340: Monkey
341: Moths and butterflies
342: Motorcycle
343: Mouse
344: Muffin
345: Mug
346: Mule
347: Mushroom
348: Musical instrument
349: Musical keyboard
350: Nail (Construction)
351: Necklace
352: Nightstand
353: Oboe
354: Office building
355: Office supplies
356: Orange
357: Organ (Musical Instrument)
358: Ostrich
359: Otter
360: Oven
361: Owl
362: Oyster
363: Paddle
364: Palm tree
365: Pancake
366: Panda
367: Paper cutter
368: Paper towel
369: Parachute
370: Parking meter
371: Parrot
372: Pasta
373: Pastry
374: Peach
375: Pear
376: Pen
377: Pencil case
378: Pencil sharpener
379: Penguin
380: Perfume
381: Person
382: Personal care
383: Personal flotation device
384: Piano
385: Picnic basket
386: Picture frame
387: Pig
388: Pillow
389: Pineapple
390: Pitcher (Container)
391: Pizza
392: Pizza cutter
393: Plant
394: Plastic bag
395: Plate
396: Platter
397: Plumbing fixture
398: Polar bear
399: Pomegranate
400: Popcorn
401: Porch
402: Porcupine
403: Poster
404: Potato
405: Power plugs and sockets
406: Pressure cooker
407: Pretzel
408: Printer
409: Pumpkin
410: Punching bag
411: Rabbit
412: Raccoon
413: Racket
414: Radish
415: Ratchet (Device)
416: Raven
417: Rays and skates
418: Red panda
419: Refrigerator
420: Remote control
421: Reptile
422: Rhinoceros
423: Rifle
424: Ring binder
425: Rocket
426: Roller skates
427: Rose
428: Rugby ball
429: Ruler
430: Salad
431: Salt and pepper shakers
432: Sandal
433: Sandwich
434: Saucer
435: Saxophone
436: Scale
437: Scarf
438: Scissors
439: Scoreboard
440: Scorpion
441: Screwdriver
442: Sculpture
443: Sea lion
444: Sea turtle
445: Seafood
446: Seahorse
447: Seat belt
448: Segway
449: Serving tray
450: Sewing machine
451: Shark
452: Sheep
453: Shelf
454: Shellfish
455: Shirt
456: Shorts
457: Shotgun
458: Shower
459: Shrimp
460: Sink
461: Skateboard
462: Ski
463: Skirt
464: Skull
465: Skunk
466: Skyscraper
467: Slow cooker
468: Snack
469: Snail
470: Snake
471: Snowboard
472: Snowman
473: Snowmobile
474: Snowplow
475: Soap dispenser
476: Sock
477: Sofa bed
478: Sombrero
479: Sparrow
480: Spatula
481: Spice rack
482: Spider
483: Spoon
484: Sports equipment
485: Sports uniform
486: Squash (Plant)
487: Squid
488: Squirrel
489: Stairs
490: Stapler
491: Starfish
492: Stationary bicycle
493: Stethoscope
494: Stool
495: Stop sign
496: Strawberry
497: Street light
498: Stretcher
499: Studio couch
500: Submarine
501: Submarine sandwich
502: Suit
503: Suitcase
504: Sun hat
505: Sunglasses
506: Surfboard
507: Sushi
508: Swan
509: Swim cap
510: Swimming pool
511: Swimwear
512: Sword
513: Syringe
514: Table
515: Table tennis racket
516: Tablet computer
517: Tableware
518: Taco
519: Tank
520: Tap
521: Tart
522: Taxi
523: Tea
524: Teapot
525: Teddy bear
526: Telephone
527: Television
528: Tennis ball
529: Tennis racket
530: Tent
531: Tiara
532: Tick
533: Tie
534: Tiger
535: Tin can
536: Tire
537: Toaster
538: Toilet
539: Toilet paper
540: Tomato
541: Tool
542: Toothbrush
543: Torch
544: Tortoise
545: Towel
546: Tower
547: Toy
548: Traffic light
549: Traffic sign
550: Train
551: Training bench
552: Treadmill
553: Tree
554: Tree house
555: Tripod
556: Trombone
557: Trousers
558: Truck
559: Trumpet
560: Turkey
561: Turtle
562: Umbrella
563: Unicycle
564: Van
565: Vase
566: Vegetable
567: Vehicle
568: Vehicle registration plate
569: Violin
570: Volleyball (Ball)
571: Waffle
572: Waffle iron
573: Wall clock
574: Wardrobe
575: Washing machine
576: Waste container
577: Watch
578: Watercraft
579: Watermelon
580: Weapon
581: Whale
582: Wheel
583: Wheelchair
584: Whisk
585: Whiteboard
586: Willow
587: Window
588: Window blind
589: Wine
590: Wine glass
591: Wine rack
592: Winter melon
593: Wok
594: Woman
595: Wood-burning stove
596: Woodpecker
597: Worm
598: Wrench
599: Zebra
600: Zucchini
# Download script/URL (optional) ---------------------------------------------------------------------------------------
download: |
from ultralytics.utils import LOGGER, SETTINGS, Path, is_ubuntu, get_ubuntu_version
from ultralytics.utils.checks import check_requirements, check_version
check_requirements('fiftyone')
if is_ubuntu() and check_version(get_ubuntu_version(), '>=22.04'):
# Ubuntu>=22.04 patch https://github.com/voxel51/fiftyone/issues/2961#issuecomment-1666519347
check_requirements('fiftyone-db-ubuntu2204')
import fiftyone as fo
import fiftyone.zoo as foz
import warnings
name = 'open-images-v7'
fo.config.dataset_zoo_dir = Path(SETTINGS["datasets_dir"]) / "fiftyone" / name
fraction = 1.0 # fraction of full dataset to use
LOGGER.warning('WARNING ⚠️ Open Images V7 dataset requires at least **561 GB of free space. Starting download...')
for split in 'train', 'validation': # 1743042 train, 41620 val images
train = split == 'train'
# Load Open Images dataset
dataset = foz.load_zoo_dataset(name,
split=split,
label_types=['detections'],
classes=["Ambulance","Bicycle","Bus","Boy","Car","Motorcycle","Man","Person","Stop sign","Girl","Truck","Traffic light","Traffic sign","Cat", "Dog","Unicycle","Vehicle","Woman","Land vehicle","Snowplow","Van"],
max_samples=round((1743042 if train else 41620) * fraction))
# Define classes
if train:
classes = dataset.default_classes # all classes
# classes = dataset.distinct('ground_truth.detections.label') # only observed classes
# Export to YOLO format
with warnings.catch_warnings():
warnings.filterwarnings("ignore", category=UserWarning, module="fiftyone.utils.yolo")
dataset.export(export_dir=str(Path(SETTINGS['datasets_dir']) / name),
dataset_type=fo.types.YOLOv5Dataset,
label_field='ground_truth',
split='val' if split == 'validation' else split,
classes=classes,
overwrite=train)
### Additional
_No response_
|
closed
|
2025-03-14T06:35:34Z
|
2025-03-19T10:43:33Z
|
https://github.com/ultralytics/ultralytics/issues/19691
|
[
"question",
"dependencies",
"detect"
] |
1623021453
| 10 |
dask/dask
|
scikit-learn
| 11,101 |
TypeError: can only concatenate str (not "traceback") to str
|
<!-- Please include a self-contained copy-pastable example that generates the issue if possible.
Please be concise with code posted. See guidelines below on how to provide a good bug report:
- Craft Minimal Bug Reports http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports
- Minimal Complete Verifiable Examples https://stackoverflow.com/help/mcve
Bug reports that follow these guidelines are easier to diagnose, and so are often handled much more quickly.
-->
**Describe the issue**:
**Minimal Complete Verifiable Example**:
```python
# Put your MCVE code here
import pandas as pd  # missing in the original MCVE; needed for pd.DataFrame below
import dask
import dask.bag as db
import river
# Create a Dask bag from your data
df=pd.DataFrame([[0]*2],columns=['VendorID','fare_amount'])
data = db.from_sequence(df, npartitions=4)
# Define a function to process and train on each partition
def process_and_train(partition):
X_train,X_test,y_train,y_test=get_dask_train_test(partition)
model = river.linear_model.LinearRegression(optimizer=river.optim.SGD(0.01), l2=0.1)
# Stream learning from the DataFrame
for _,row in partition.iterrows():
y = row['fare_amount'] # Target
x = row.drop('fare_amount') # Features
model = model.learn_one(x, y)
print("done")
return model
# Use Dask to process and train in parallel
models = data.map(process_and_train).compute()
```
**Anything else we need to know?**:

**Environment**:
- Dask version:
- Python version:3.10
- Operating System:
- Install method (conda, pip, source):pip
|
open
|
2024-05-06T14:19:27Z
|
2024-05-06T14:19:41Z
|
https://github.com/dask/dask/issues/11101
|
[
"needs triage"
] |
sinsniwal
| 0 |
developmentseed/lonboard
|
jupyter
| 281 |
Better visualization defaults
|
See e.g. https://github.com/developmentseed/lonboard/issues/275. Not sure how much I want to try for "perfect defaults". Maybe `viz` should have "smart" defaults, but not direct layer constructors?
|
closed
|
2023-12-01T22:11:51Z
|
2024-02-26T22:41:02Z
|
https://github.com/developmentseed/lonboard/issues/281
|
[] |
kylebarron
| 0 |
axnsan12/drf-yasg
|
django
| 181 |
tox + setup.py failing with "The version specified ('0.0.0.dummy+0000016513faf897') is an invalid version"
|
tox and setup.py are failing for me currently on master (at 16b6ed7fd64d36f8a7ac5368a00a19da1e115c17) with this error:
```
$ /usr/bin/python setup.py
/usr/local/lib/python2.7/dist-packages/setuptools/dist.py:407: UserWarning: The version specified ('0.0.0.dummy+0000016513faf897') is an invalid version, this may not work as expected with newer versions of setuptools, pip, and PyPI. Please see PEP 440 for more details.
```
This is with Python 2.7.12 on Ubuntu 16.04 (but same error with Python 3.6 via pyenv)
These 2 lines https://github.com/axnsan12/drf-yasg/blob/master/setup.py#L33 seem to be the culprit - if I comment them out it works fine.
```
if any(any(dist in arg for dist in ['sdist', 'bdist']) for arg in sys.argv):
raise
```
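The quoted guard re-raises the version-detection failure only when a release command appears on the command line; its argv check can be isolated like this (a sketch of the same logic, not the project's code verbatim):

```python
# Isolated sketch of the argv check from the quoted setup.py lines.
def is_release_command(argv):
    """True when any sdist/bdist command appears on the command line."""
    return any(any(dist in arg for dist in ["sdist", "bdist"]) for arg in argv)

# A bare `python setup.py` or `develop` should not trigger the re-raise,
# which is why the reporter's failure on a plain invocation is surprising.
```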
|
closed
|
2018-08-07T10:47:43Z
|
2018-08-07T14:20:09Z
|
https://github.com/axnsan12/drf-yasg/issues/181
|
[] |
therefromhere
| 2 |
coqui-ai/TTS
|
python
| 3,064 |
[Bug] FileNotFoundError: [Errno 2] No such file or directory: ...config.json
|
### Describe the bug
`FileNotFoundError: [Errno 2] No such file or directory: '/home/user/.local/share/tts/tts_models--multilingual--multi-dataset--bark/config.json'`
### To Reproduce
1. `pip install TTS`
2. `tts --text "some text" --model_name "tts_models/multilingual/multi-dataset/bark" --out_path /some/path`
3. Ctrl-C
4. `tts --text "some text" --model_name "tts_models/multilingual/multi-dataset/bark" --out_path /some/path`
### Expected behavior
The program should check whether it was completed successfully the first time and all the necessary files were downloaded/created, and if not, repeat the process for the missing files.
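A minimal sketch of the completeness check being requested, with a hypothetical required-files list (the real model directory contains more files than `config.json`):

```python
from pathlib import Path

# Hypothetical completeness check: report which required model files are
# absent so an interrupted download can be resumed for just those files.
def missing_files(model_dir, required=("config.json",)):
    root = Path(model_dir)
    return [name for name in required if not (root / name).exists()]
```

If the returned list is non-empty, the downloader could re-fetch only the missing entries instead of assuming the cached directory is complete.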
### Logs
_No response_
### Environment
```shell
{
"CUDA": {
"GPU": [],
"available": false,
"version": "12.1"
},
"Packages": {
"PyTorch_debug": false,
"PyTorch_version": "2.1.0+cu121",
"TTS": "0.17.8",
"numpy": "1.24.3"
},
"System": {
"OS": "Linux",
"architecture": [
"64bit",
"ELF"
],
"processor": "",
"python": "3.11.5",
"version": "#1 SMP PREEMPT_DYNAMIC Sat Sep 23 12:13:56 UTC 2023"
}
}
```
### Additional context
_No response_
|
closed
|
2023-10-13T10:33:13Z
|
2025-01-15T16:53:41Z
|
https://github.com/coqui-ai/TTS/issues/3064
|
[
"bug"
] |
Kzer-Za
| 5 |
PedroBern/django-graphql-auth
|
graphql
| 18 |
Example with custom user model
|
Is there an example with custom user model?
|
closed
|
2020-03-23T02:23:36Z
|
2020-03-25T23:36:56Z
|
https://github.com/PedroBern/django-graphql-auth/issues/18
|
[
"documentation"
] |
maxwaiyaki
| 2 |
yt-dlp/yt-dlp
|
python
| 12,585 |
Delay during download phase: unexplained wait time
|
### Checklist
- [x] I'm asking a question and **not** reporting a bug or requesting a feature
- [x] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
- [x] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [x] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766), [the FAQ](https://github.com/yt-dlp/yt-dlp/wiki/FAQ), and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=is%3Aissue%20-label%3Aspam%20%20) for similar questions **including closed ones**. DO NOT post duplicates
### Please make sure the question is worded well enough to be understood
I’ve been experiencing an issue where, during the download process, there is a delay of several seconds to minutes at certain stages, and I can't seem to pinpoint why it happens. The waiting phase seems to be intermittent—sometimes it occurs, sometimes it doesn’t.
All requests go smoothly, but there is a noticeable gap of several seconds between these two requests:
```shell
[2025-03-13 10:15:01,872: WARNING/MainProcess] [info] xYP9GeYSSqo: Downloading 1 format(s): 774
[2025-03-13 10:15:43,074: WARNING/MainProcess] [info] Downloading video thumbnail 41 ...
```
I have tried enabling the debug mode, but it did not provide any useful output. These two requests don't show any errors between them, just a long wait.
You can see that there is a significant delay before the download resumes. Is there any way to avoid or skip this waiting period? Or do you have suggestions for troubleshooting this behavior?
Any help or advice would be greatly appreciated!
Thanks in advance!
### Provide verbose output that clearly demonstrates the problem
- [ ] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [x] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [x] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[2025-03-13 10:14:59,382: WARNING/MainProcess] [debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out missing (LoggingProxy) (No ANSI), error missing (LoggingProxy) (No ANSI), screen missing (LoggingProxy) (No ANSI)
[2025-03-13 10:14:59,382: WARNING/MainProcess] [debug] yt-dlp version stable@2025.01.26 from yt-dlp/yt-dlp [3b4531934] (pip) API
[2025-03-13 10:14:59,382: WARNING/MainProcess] [debug] params: {'extractor_args': {'youtube': {'skip': ['translated_subs', 'dash'], 'player_skip': ['webpage'], 'player_client': {'web_music'}, 'po_token': {'web_music.gvs+******************'}}}, 'verbose': True, 'cookiefile': 'cookies.txt', 'format': '774/251', 'postprocessors': [{'key': 'FFmpegMetadata', 'add_metadata': True}], 'paths': {'temp': './Temp'}, 'writethumbnail': True, 'embedthumbnail': True, 'outtmpl': {'default': './Music/5ab8b82f1ed62ca6febfc46370a99062/e14fdf807f92922e33eda6464867c0a8.%(ext)s'}, 'compat_opts': set(), 'http_headers': {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.131 Safari/537.36', 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8', 'Accept-Language': 'en-us,en;q=0.5', 'Sec-Fetch-Mode': 'navigate'}}
[2025-03-13 10:14:59,382: WARNING/MainProcess] [debug] Python 3.13.0 (CPython aarch64 64bit) - Linux-6.8.0-55-generic-aarch64-with-glibc2.36 (OpenSSL 3.0.15 3 Sep 2024, glibc 2.36)
[2025-03-13 10:14:59,384: WARNING/MainProcess] [debug] exe versions: ffmpeg 5.1.6-0 (setts), ffprobe 5.1.6-0
[2025-03-13 10:14:59,384: WARNING/MainProcess] [debug] Optional libraries: certifi-2025.01.31, requests-2.32.3, sqlite3-3.40.1, urllib3-2.3.0
[2025-03-13 10:14:59,384: WARNING/MainProcess] [debug] Proxy map: {}
[2025-03-13 10:14:59,385: WARNING/MainProcess] [debug] Request Handlers: urllib, requests
[2025-03-13 10:14:59,450: WARNING/MainProcess] [debug] Loaded 1839 extractors
[2025-03-13 10:14:59,452: WARNING/MainProcess] [debug] [youtube] Found YouTube account cookies
[2025-03-13 10:14:59,452: WARNING/MainProcess] [youtube] Extracting URL: https://music.youtube.com/watch?v=xYP9GeYSSqo
[2025-03-13 10:14:59,453: WARNING/MainProcess] [youtube] xYP9GeYSSqo: Downloading web music client config
[2025-03-13 10:15:00,117: WARNING/MainProcess] [youtube] xYP9GeYSSqo: Downloading player 74e4bb46
[2025-03-13 10:15:00,219: WARNING/MainProcess] [youtube] xYP9GeYSSqo: Downloading web music player API JSON
[2025-03-13 10:15:00,423: WARNING/MainProcess] [debug] [youtube] Extracting signature function js_74e4bb46_106
[2025-03-13 10:15:00,424: WARNING/MainProcess] [debug] Loading youtube-sigfuncs.js_74e4bb46_106 from cache
[2025-03-13 10:15:00,424: WARNING/MainProcess] [debug] Loading youtube-nsig.74e4bb46 from cache
[2025-03-13 10:15:00,780: WARNING/MainProcess] [debug] [youtube] Decrypted nsig d6gJWcCgkaNpZNOWa9X => DDs50Tw0yZGfBQ
[2025-03-13 10:15:00,782: WARNING/MainProcess] [debug] Loading youtube-nsig.74e4bb46 from cache
[2025-03-13 10:15:01,140: WARNING/MainProcess] [debug] [youtube] Decrypted nsig 3Td651KeN-qJNvmMkH0 => UdKeOw6TKhdp1A
[2025-03-13 10:15:01,142: WARNING/MainProcess] [debug] [youtube] Extracting signature function js_74e4bb46_102
[2025-03-13 10:15:01,142: WARNING/MainProcess] [debug] Loading youtube-sigfuncs.js_74e4bb46_102 from cache
[2025-03-13 10:15:01,158: WARNING/MainProcess] [youtube] xYP9GeYSSqo: Downloading initial data API JSON
[2025-03-13 10:15:01,858: WARNING/MainProcess] [debug] Sort order given by extractor: quality, res, fps, hdr:12, source, vcodec, channels, acodec, lang, proto
[2025-03-13 10:15:01,858: WARNING/MainProcess] [debug] Formats sorted by: hasvid, ie_pref, quality, res, fps, hdr:12(7), source, vcodec, channels, acodec, lang, proto, size, br, asr, vext, aext, hasaud, id
[2025-03-13 10:15:01,872: WARNING/MainProcess] [info] xYP9GeYSSqo: Downloading 1 format(s): 774
[2025-03-13 10:15:43,074: WARNING/MainProcess] [info] Downloading video thumbnail 41 ...
[2025-03-13 10:15:43,496: WARNING/MainProcess] [info] Writing video thumbnail 41 to: ./Temp/./Music/5ab8b82f1ed62ca6febfc46370a99062/e14fdf807f92922e33eda6464867c0a8.webp
[2025-03-13 10:15:43,541: WARNING/MainProcess] [debug] Invoking http downloader on "https://rr5---sn-cvh7knzr.googlevideo.com/videoplayback?expire=1741882500&ei=JLDSZ4LgD8iW4t4PqNG_0QQ&"
[2025-03-13 10:15:43,939: WARNING/MainProcess] [download] Destination: ./Temp/./Music/5ab8b82f1ed62ca6febfc46370a99062/e14fdf807f92922e33eda6464867c0a8.webm
[2025-03-13 10:15:43,939: WARNING/MainProcess]
[download] 0.0% of 4.77MiB at Unknown B/s ETA Unknown
[2025-03-13 10:15:43,939: WARNING/MainProcess]
[download] 0.1% of 4.77MiB at 2.77MiB/s ETA 00:01
[2025-03-13 10:15:43,940: WARNING/MainProcess]
[download] 0.1% of 4.77MiB at 4.56MiB/s ETA 00:01
[2025-03-13 10:15:43,940: WARNING/MainProcess]
[download] 0.3% of 4.77MiB at 7.52MiB/s ETA 00:00
[2025-03-13 10:15:43,941: WARNING/MainProcess]
[download] 0.6% of 4.77MiB at 12.09MiB/s ETA 00:00
[2025-03-13 10:15:43,943: WARNING/MainProcess]
[download] 1.3% of 4.77MiB at 15.13MiB/s ETA 00:00
[2025-03-13 10:15:43,944: WARNING/MainProcess]
[download] 2.6% of 4.77MiB at 22.37MiB/s ETA 00:00
[2025-03-13 10:15:43,951: WARNING/MainProcess]
[download] 5.2% of 4.77MiB at 20.39MiB/s ETA 00:00
[2025-03-13 10:15:43,968: WARNING/MainProcess]
[download] 10.5% of 4.77MiB at 17.08MiB/s ETA 00:00
[2025-03-13 10:15:44,002: WARNING/MainProcess]
[download] 20.9% of 4.77MiB at 15.67MiB/s ETA 00:00
[2025-03-13 10:15:44,312: WARNING/MainProcess]
[download] 41.9% of 4.77MiB at 5.35MiB/s ETA 00:00
[2025-03-13 10:15:44,490: WARNING/MainProcess]
[download] 83.8% of 4.77MiB at 7.25MiB/s ETA 00:00
[2025-03-13 10:15:44,495: WARNING/MainProcess]
[download] 100.0% of 4.77MiB at 8.58MiB/s ETA 00:00
[2025-03-13 10:15:44,497: WARNING/MainProcess]
[download] 100% of 4.77MiB in 00:00:00 at 5.00MiB/s
[2025-03-13 10:15:44,503: WARNING/MainProcess] [Metadata] Adding metadata to "./Temp/./Music/5ab8b82f1ed62ca6febfc46370a99062/e14fdf807f92922e33eda6464867c0a8.webm"
[2025-03-13 10:15:44,504: WARNING/MainProcess] [debug] ffmpeg command line: ffmpeg -y -loglevel repeat+info -i file:./Temp/./Music/5ab8b82f1ed62ca6febfc46370a99062/e14fdf807f92922e33eda6464867c0a8.webm -map 0 -dn -ignore_unknown -c copy -write_id3v1 1 -metadata 'title=Say Please' -metadata date=20221020 -metadata 'purl=https://www.youtube.com/watch?v=xYP9GeYSSqo' -metadata 'comment=https://www.youtube.com/watch?v=xYP9GeYSSqo' -metadata 'artist=Fredo Bang' -movflags +faststart file:./Temp/./Music/5ab8b82f1ed62ca6febfc46370a99062/e14fdf807f92922e33eda6464867c0a8.temp.webm
```
|
closed
|
2025-03-12T14:06:35Z
|
2025-03-14T03:00:04Z
|
https://github.com/yt-dlp/yt-dlp/issues/12585
|
[
"incomplete"
] |
Jekylor
| 0 |
alyssaq/face_morpher
|
numpy
| 34 |
What is the parameter "dest_shape" in warper.warp_image()?
|
closed
|
2018-02-08T03:11:57Z
|
2018-02-09T15:13:53Z
|
https://github.com/alyssaq/face_morpher/issues/34
|
[] |
HOTDEADGIRLS
| 2 |
|
PaddlePaddle/PaddleHub
|
nlp
| 2,026 |
Meet an error when installing paddlehub
|
Hi developer,
When I use the latest pip to install paddlehub as follows
`!pip install --upgrade paddlehub`
I meet the following problem,
```
Installing collected packages: visualdl, rarfile, easydict, paddlehub
ERROR: Exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/pip/_internal/cli/base_command.py", line 164, in exc_logging_wrapper
status = run_func(*args)
File "/usr/local/lib/python3.6/dist-packages/pip/_internal/cli/req_command.py", line 205, in wrapper
return func(self, options, args)
File "/usr/local/lib/python3.6/dist-packages/pip/_internal/commands/install.py", line 413, in run
pycompile=options.compile,
File "/usr/local/lib/python3.6/dist-packages/pip/_internal/req/__init__.py", line 81, in install_given_reqs
pycompile=pycompile,
File "/usr/local/lib/python3.6/dist-packages/pip/_internal/req/req_install.py", line 810, in install
requested=self.user_supplied,
File "/usr/local/lib/python3.6/dist-packages/pip/_internal/operations/install/wheel.py", line 737, in install_wheel
requested=requested,
File "/usr/local/lib/python3.6/dist-packages/pip/_internal/operations/install/wheel.py", line 589, in _install_wheel
file.save()
File "/usr/local/lib/python3.6/dist-packages/pip/_internal/operations/install/wheel.py", line 383, in save
if os.path.exists(self.dest_path):
File "/usr/lib/python3.6/genericpath.py", line 19, in exists
os.stat(path)
UnicodeEncodeError: 'ascii' codec can't encode character '\u53f3' in position 81: ordinal not in range(128)
```
the system is Ubuntu 18.04.5 LTS
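For what it's worth, the `'ascii' codec can't encode` failure during installation (note the `\u53f3` character in the file path) usually means the shell locale is unset, so Python falls back to ASCII for filesystem paths. Exporting a UTF-8 locale before installing is a common workaround (an untested guess for this exact setup):

```shell
# Force a UTF-8 locale so pip can handle non-ASCII characters in paths
export LC_ALL=C.UTF-8
export LANG=C.UTF-8
python3 -c 'import sys; print(sys.getfilesystemencoding())'  # should report utf-8
```

Then rerun `pip install --upgrade paddlehub` in the same shell.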
|
open
|
2022-09-16T02:10:35Z
|
2022-09-16T06:10:49Z
|
https://github.com/PaddlePaddle/PaddleHub/issues/2026
|
[] |
Kunlun-Zhu
| 3 |
getsentry/sentry
|
python
| 87,404 |
[User Feedback] Feedback list loads slowly and doesn't show spam/resolve state changes
|
Sentry Feedback: [JAVASCRIPT-2YMF](https://sentry.sentry.io/feedback/?referrer=github_integration&feedbackSlug=javascript%3A6426713284&project=11276)
The user feedback list sometimes takes a long time to load new items when scrolling to the bottom. It also does not update the list if I mark a feedback as spam or resolved, I have to reload the page for it to appear in the new list.
|
open
|
2025-03-19T16:49:32Z
|
2025-03-19T16:50:08Z
|
https://github.com/getsentry/sentry/issues/87404
|
[
"Component: Feedback",
"Product Area: User Feedback"
] |
sentry-io[bot]
| 1 |
modoboa/modoboa
|
django
| 2,602 |
opendkim not work
|
# Impacted versions
* OS Type: Ubuntu 20.04
* OS Version: Number or Name
* Database Type: PostgreSQL
* Database version: X.y
* Modoboa: 2.0.1
* installer used: Yes
* Webserver: Nginx
# Steps to reproduce
systemctl status opendkim
● opendkim.service - OpenDKIM DomainKeys Identified Mail (DKIM) Milter
Loaded: loaded (/lib/systemd/system/opendkim.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sun 2022-09-18 19:11:42 CEST; 9s ago
Docs: man:opendkim(8)
man:opendkim.conf(5)
man:opendkim-genkey(8)
man:opendkim-genzone(8)
man:opendkim-testadsp(8)
man:opendkim-testkey
http://www.opendkim.org/docs.html
Process: 2123 ExecStart=/usr/sbin/opendkim -x /etc/opendkim.conf (code=exited, status=78)
# Current behavior
Sep 18 19:11:41 mail opendkim[2120]: opendkim: /etc/opendkim.conf: dsn:pgsql://opendkim:AUgSEhrWj0FyTMCc@127.0.0.1:5432/modoboa/table=dkim?keycol=domain_name?datacol=id: dkimf_db_open(): Invalid argument
Sep 18 19:11:41 mail systemd[1]: opendkim.service: Control process exited, code=exited, status=78/CONFIG
Sep 18 19:11:41 mail systemd[1]: opendkim.service: Failed with result 'exit-code'.
Sep 18 19:11:41 mail systemd[1]: Failed to start OpenDKIM DomainKeys Identified Mail (DKIM) Milter.
Sep 18 19:11:42 mail systemd[1]: opendkim.service: Scheduled restart job, restart counter is at 3.
Sep 18 19:11:42 mail systemd[1]: Stopped OpenDKIM DomainKeys Identified Mail (DKIM) Milter.
Sep 18 19:11:42 mail systemd[1]: Starting OpenDKIM DomainKeys Identified Mail (DKIM) Milter...
<!--
Explain the behavior you're seeing that you think is a bug, and explain how you
think things should behave instead.
-->
# Expected behavior
# Video/Screenshot link (optional)
|
closed
|
2022-09-18T17:20:25Z
|
2022-09-22T08:54:35Z
|
https://github.com/modoboa/modoboa/issues/2602
|
[] |
tate11
| 6 |
AntonOsika/gpt-engineer
|
python
| 240 |
“Ask for feedback” step.
|
Create a step that asks "did it run/work/perfect?", and store the answers to the memory folder.
And let the benchmark.py script check that result, and convert it to a markdown table like benchmark/RESULTS.md , and append it with some metadata to that file.
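A minimal sketch of what such a step could look like (the function name, prompts, and `review.json` file name are all assumptions, not gpt-engineer's actual API):

```python
import json
from pathlib import Path

def ask_for_feedback(memory_dir: Path, ask=input) -> dict:
    """Ask 'did it run/work/perfect?' and persist the answers to the memory folder."""
    feedback = {
        q: ask(f"{q} (y/n): ").strip().lower() == "y"
        for q in ("Did it run?", "Did it work?", "Was it perfect?")
    }
    # Store to the memory folder so benchmark.py can pick it up later
    (memory_dir / "review.json").write_text(json.dumps(feedback))
    return feedback
```

Passing `ask` as a parameter keeps the step testable; the benchmark script would then read `review.json` from each project's memory folder to build the results table.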
|
closed
|
2023-06-20T06:20:28Z
|
2023-07-02T14:00:37Z
|
https://github.com/AntonOsika/gpt-engineer/issues/240
|
[
"help wanted",
"good first issue"
] |
AntonOsika
| 6 |
CorentinJ/Real-Time-Voice-Cloning
|
pytorch
| 1,068 |
choppy stretched out audio
|
My spectrogram looks kinda weird and the audio sounds like heavily synthesised choppy vocals, did I install anything [wrong?[
<img width="664" alt="Screenshot 2022-05-23 at 07 08 02" src="https://user-images.githubusercontent.com/71672036/169755394-e387d753-f4ce-46a3-8553-bafcec526580.png">
]
|
open
|
2022-05-23T06:17:29Z
|
2022-05-25T20:16:48Z
|
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1068
|
[] |
zemigm
| 1 |
sqlalchemy/alembic
|
sqlalchemy
| 439 |
Double "%" in SQL when exporting via --sql
|
**Migrated issue, originally created by babak ([@babakness](https://github.com/babakness))**
To summarize my issue, I have to fix the output of
`alembic upgrade [id]:head --sql > foo.sql`
with a substitution like `sed -E 's/[%]{2,2}/%/g'`,
because all "%" characters in the output come out doubled (escaped).
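For reference, a minimal sketch of that post-processing step (assuming the doubled `%` signs are never intentional in the generated SQL):

```shell
# Collapse the doubled percent signs back to single ones
echo "SELECT '100%%'" > foo.sql        # stand-in for the alembic --sql output
sed -E 's/%{2}/%/g' foo.sql            # prints: SELECT '100%'
```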
|
closed
|
2017-07-20T20:54:51Z
|
2017-09-18T05:37:03Z
|
https://github.com/sqlalchemy/alembic/issues/439
|
[
"bug"
] |
sqlalchemy-bot
| 4 |
litestar-org/litestar
|
api
| 3,780 |
Docs: testing + debug mode
|
### Summary
As a new litestar user, I've just started looking at documentation to test my application.
With the default example, unhandled errors simply turn up as a 500 Internal Server Error without much to go on.
Simply setting `app.debug = True` inside the test module is enough to have a proper traceback.
Would it be possible to add this line on the first examples? Maybe there is a better way?
-----
Looking at the source for `AsyncTestClient`, I see that there is a `raise_server_exceptions` option (in my case it is set to True, the default)
|
closed
|
2024-10-07T14:21:48Z
|
2025-03-20T15:54:57Z
|
https://github.com/litestar-org/litestar/issues/3780
|
[
"Documentation :books:"
] |
romuald
| 0 |
ultralytics/ultralytics
|
machine-learning
| 19,839 |
WARNING ⚠️ numpy>=1.23.0 is required, but numpy==2.2.4 is currently installed
|
### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### Ultralytics YOLO Component
_No response_
### Bug
```
Traceback (most recent call last):
File "/usr/local/bin/yolo", line 10, in <module>
sys.exit(entrypoint())
^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/ultralytics/cfg/__init__.py", line 899, in entrypoint
special[a.lower()]()
File "/usr/local/lib/python3.11/dist-packages/ultralytics/utils/checks.py", line 680, in collect_system_info
is_met = "✅ " if check_version(current, str(r.specifier), name=r.name, hard=True) else "❌ "
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/ultralytics/utils/checks.py", line 253, in check_version
raise ModuleNotFoundError(emojis(warning)) # assert version requirements met
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ModuleNotFoundError: WARNING ⚠️ numpy>=1.23.0 is required, but numpy==2.2.4 is currently installed
```
### Environment
```
root@c104b728f01e:/# yolo checks
Ultralytics 8.3.95 🚀 Python-3.11.10 torch-2.8.0.dev20250323+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32120MiB)
Setup complete ✅ (32 CPUs, 187.8 GB RAM, 12.0/128.0 GB disk)
OS Linux-6.8.0-55-generic-x86_64-with-glibc2.35
Environment Docker
Python 3.11.10
Install pip
Path /usr/local/lib/python3.11/dist-packages/ultralytics
RAM 187.82 GB
Disk 12.0/128.0 GB
CPU AMD Ryzen 9 7950X 16-Core Processor
CPU count 32
GPU NVIDIA GeForce RTX 5090, 32120MiB
GPU count 1
CUDA 12.8
Traceback (most recent call last):
File "/usr/local/bin/yolo", line 10, in <module>
sys.exit(entrypoint())
^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/ultralytics/cfg/__init__.py", line 899, in entrypoint
special[a.lower()]()
File "/usr/local/lib/python3.11/dist-packages/ultralytics/utils/checks.py", line 680, in collect_system_info
is_met = "✅ " if check_version(current, str(r.specifier), name=r.name, hard=True) else "❌ "
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/ultralytics/utils/checks.py", line 253, in check_version
raise ModuleNotFoundError(emojis(warning)) # assert version requirements met
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ModuleNotFoundError: WARNING ⚠️ numpy>=1.23.0 is required, but numpy==2.2.4 is currently installed
```
### Minimal Reproducible Example
`yolo checks` after installing latest numpy
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR!
|
closed
|
2025-03-24T10:40:06Z
|
2025-03-24T18:35:06Z
|
https://github.com/ultralytics/ultralytics/issues/19839
|
[
"bug",
"dependencies",
"fixed"
] |
glenn-jocher
| 2 |
matterport/Mask_RCNN
|
tensorflow
| 2,842 |
problems with transfer learning: using my own trained weights, or the Usiigaci trained weights
|
hey!
my project also involves cell detection, so I thought I'd try training my CNN using [Usiigaci pre-trained weights](https://github.com/oist/Usiigaci).
but when I try I get the following error:
`ValueError: Layer #362 (named "anchors") expects 1 weight(s), but the saved weights have 0 element(s).`
The training works fine with the pretrained COCO weights, for example.
this is the code i use to load the weights:
`model = MaskRCNN(mode='training', model_dir='./', config=config)`
`model.load_weights('Usiigaci_3.h5', by_name=True, exclude=["mrcnn_class_logits", "mrcnn_bbox_fc", "mrcnn_bbox", "mrcnn_mask"])`
I also get a similar problem when trying to load the weights that were generated by training my model over my own photos, to continue the training where I have stopped the last time.
the error received is:
`ValueError: Layer #362 (named "anchors"), weight <tf.Variable 'Variable:0' shape=(4, 261888, 4) dtype=float32> has shape (4, 261888, 4), but the saved weight has shape (2, 261888, 4).`
loading the weights:
`model.load_weights('new_weigths/40_epochs/mask_rcnn_cell_cfg_0040.h5', by_name=True, exclude=["mrcnn_class_logits", "mrcnn_bbox_fc", "mrcnn_bbox", "mrcnn_mask"])`
please let me know if you understand why is this happening.
thanks!!
|
open
|
2022-06-12T07:39:09Z
|
2022-06-12T07:40:55Z
|
https://github.com/matterport/Mask_RCNN/issues/2842
|
[] |
avnerst
| 0 |
pyjanitor-devs/pyjanitor
|
pandas
| 808 |
[ENH] Filter rows across Multiple Columns
|
# Brief Description
<!-- Please provide a brief description of what you'd like to propose. -->
I would like to propose a `filter_rows` function (not sure what the name should be), where rows in a dataframe can be filtered across columns or at a specific column
# Example API
```python
# example data
df = {"x":["a",'b'], 'y':[1,1], 'z':[-1,1]}
df = pd.DataFrame(df)
df
x y z
0 a 1 -1
1 b 1 1
# Find all rows where EVERY numeric variable is greater than zero
# this is one way to solve it
df.loc[df.select_dtypes('number').gt(0).all(1)]
x y z
1 b 1 1
# or we could abstract it:
def filter_rows(df, dtype, columns, condition, any_True=True):
    temp = df.copy()
    if dtype:
        temp = df.select_dtypes(dtype)
    if columns:
        booleans = temp.loc[:, columns].transform(condition)
    else:
        booleans = temp.transform(condition)
    if any_True:
        booleans = booleans.any(axis=1)
    else:
        booleans = booleans.all(axis=1)
    return df.loc[booleans]
df.filter_rows(dtype='number', columns=None, condition= lambda df: df.gt(0), any_True=False)
```
I was working on a blog post, based on [Suzan Baert's](https://suzan.rbind.io/2018/02/dplyr-tutorial-3/#filtering-across-multiple-columns) blog, and it seemed like a good addition to the library, to abstract the steps.
instead of adding another function, maybe we could modify `filter_on`?
Just an idea for the rest of the team to consider and see if it is worth adding to the library.
|
closed
|
2021-03-02T06:02:09Z
|
2021-12-11T10:28:31Z
|
https://github.com/pyjanitor-devs/pyjanitor/issues/808
|
[] |
samukweku
| 0 |
miguelgrinberg/flasky
|
flask
| 63 |
Something I can't understand on page 80
|
Hi Miguel,
Thank you for your book.
There are some sentences in the book I can't understand.
In the second paragraph of page 80: <b>"Importing these modules causes the routes and error handlers to be associated with the blueprint."</b>
Aren't the routes and error handlers associated with the blueprint by the decorators, like @main.route and @main.app_errorhandler?
Sorry for the poor English ;-)
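The decorators do the association, but a decorator body only runs when the module containing it is imported, which is what that sentence means. Here is a stdlib-only sketch of the same mechanism (a stand-in, not Flask's real implementation):

```python
class Blueprint:
    """Tiny stand-in for flask.Blueprint's route registry."""
    def __init__(self):
        self.routes = {}

    def route(self, rule):
        def decorator(func):
            self.routes[rule] = func  # association happens here, at import time
            return func
        return decorator

main = Blueprint()

@main.route("/")  # runs as soon as the module defining it is imported
def index():
    return "hello"
```

Until the module defining `index` is imported, `main.routes` stays empty; importing the module is what triggers the `@main.route(...)` call and records the handler on the blueprint.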
|
closed
|
2015-08-24T03:55:12Z
|
2015-08-26T06:42:13Z
|
https://github.com/miguelgrinberg/flasky/issues/63
|
[
"question"
] |
testerman77
| 2 |
jupyter/nbgrader
|
jupyter
| 1,360 |
sort_index()
|
### Ubuntu 18.04
### nbgrader version 0.6.1
### `jupyterhub --version` (if used with JupyterHub)
### 6.0.3
### Expected behavior
extract_grades.py does not crash
### Actual behavior
extract_grades.py crashes
`grades = pd.DataFrame(grades).set_index(['student', 'assignment']).sortlevel()`
needs to be changed to
`grades = pd.DataFrame(grades).set_index(['student', 'assignment']).sort_index()`
sortlevel() has been deprecated since pandas 0.20.0
### Steps to reproduce the behavior
`python extract_grades.py`
|
open
|
2020-08-28T16:31:37Z
|
2020-08-28T16:31:37Z
|
https://github.com/jupyter/nbgrader/issues/1360
|
[] |
drfeinberg
| 0 |
cvat-ai/cvat
|
tensorflow
| 8,546 |
Error uploading annotations with Yolov8 Detection format
|
### Actions before raising this issue
- [X] I searched the existing issues and did not find anything similar.
- [X] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Steps to Reproduce
_No response_
### Expected Behavior
I want to upload new annotations in YOLOv8 format to a job that already has some annotations, without deleting anything
### Possible Solution
_No response_
### Context
I labelled some images and downloaded the labels in YOLOv8 Detection format to train a model. I then trained a model to obtain the rest of the labels. When I try to upload those new labels, not only does nothing new appear in my dataset, but the previously created labels also disappear.
I organized the folder to upload the same way as it is downloaded from CVAT: a .zip file containing a folder named "labels" where all the labels are organized in .txt files, a data.yaml with all the different classes, and a train.txt containing the paths of the images.
The only file that changes is the "labels" folder, where all the labels produced by the trained YOLOv8 model now appear (data.yaml and train.txt are the ones downloaded previously from CVAT when exporting the job dataset with the labels).
### Environment
_No response_
|
closed
|
2024-10-16T06:16:51Z
|
2024-11-11T10:44:37Z
|
https://github.com/cvat-ai/cvat/issues/8546
|
[
"bug"
] |
uxuegu
| 1 |
QuivrHQ/quivr
|
api
| 3,187 |
[Backend] Update/Remove knowledge
|
* Moving knowledge -> Reassign knowledge id
* Rename file
* Remove knowledge, with cascade removal of folders (empty or non-empty)
|
closed
|
2024-09-11T08:51:59Z
|
2024-09-16T14:51:17Z
|
https://github.com/QuivrHQ/quivr/issues/3187
|
[
"area: backend"
] |
linear[bot]
| 1 |
mljar/mljar-supervised
|
scikit-learn
| 501 |
Cannot import name 'ABCIndexClass' from 'pandas.core.dtypes.generic'
|
Hello,
unfortunately I cannot properly use MLJar due to some import error after notebook creation.
It seems like some imports under the hood are broken (a pandas import throws an exception); the original problem is shown in the screenshot:

_cannot import name 'ABCIndexClass' from 'pandas.core.dtypes.generic' (C:\Users\sebas\AppData\Roaming\MLJAR-Studio\miniconda\lib\site-packages\pandas\core\dtypes\generic.py)_
I've just installed latest MLJar version for Windows (1.0.0), it is fresh and clean, problem occurs during the very first creation of notebook in the tool. It seems like it is a common bug in Pandas (meta class renaming between versions), I could find solution here:
https://stackoverflow.com/questions/68704002/importerror-cannot-import-name-abcindexclass-from-pandas-core-dtypes-generic
I can't say much more about my environment, as MLJAR creates its own miniconda Python distribution.
I hope there is a way to avoid the problem. I appreciate the whole project, fingers crossed for you guys!
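The linked Stack Overflow thread points at a pandas rename (`ABCIndexClass` became `ABCIndex` in pandas 1.3). Until the bundled packages catch up, pinning pandas below 1.3 inside MLJAR's miniconda environment is the usual workaround (an untested guess for this setup):

```shell
pip install "pandas<1.3"
```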
|
open
|
2021-12-08T12:23:02Z
|
2021-12-08T13:14:36Z
|
https://github.com/mljar/mljar-supervised/issues/501
|
[] |
ghost
| 2 |
oegedijk/explainerdashboard
|
plotly
| 150 |
ExplainerDashboard() error in Google Colab: FormGroup was deprecated
|
Since one of the most recent updates, ExplainerDashboard is not working anymore in Google Colaboratory. It warns of several deprecated packages and then fails with FormGroup.
Trying to run the basic example:
```
!pip install explainerdashboard
```
```
from sklearn.ensemble import RandomForestClassifier
from explainerdashboard import ClassifierExplainer, ExplainerDashboard
from explainerdashboard.datasets import titanic_survive, feature_descriptions
X_train, y_train, X_test, y_test = titanic_survive()
model = RandomForestClassifier(n_estimators=50, max_depth=10).fit(X_train, y_train)
explainer = ClassifierExplainer(model, X_test, y_test,
                                cats=['Sex', 'Deck', 'Embarked'],
                                descriptions=feature_descriptions,
                                labels=['Not survived', 'Survived'])
ExplainerDashboard(explainer).run()
```
returns the following error:
```
The dash_html_components package is deprecated. Please replace
`import dash_html_components as html` with `from dash import html`
The dash_core_components package is deprecated. Please replace
`import dash_core_components as dcc` with `from dash import dcc`
Detected RandomForestClassifier model: Changing class type to RandomForestClassifierExplainer...
Note: model_output=='probability', so assuming that raw shap output of RandomForestClassifier is in probability space...
Generating self.shap_explainer = shap.TreeExplainer(model)
Building ExplainerDashboard..
Detected google colab environment, setting mode='external'
Warning: calculating shap interaction values can be slow! Pass shap_interaction=False to remove interactions tab.
Generating layout...
Calculating shap values...
The dash_table package is deprecated. Please replace
`import dash_table` with `from dash import dash_table`
Also, if you're using any of the table format helpers (e.g. Group), replace
`from dash_table.Format import Group` with
`from dash.dash_table.Format import Group`
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-3-65538403dd79> in <module>()
12 labels=['Not survived', 'Survived'])
13
---> 14 ExplainerDashboard(explainer).run()
8 frames
/usr/local/lib/python3.7/dist-packages/dash_bootstrap_components/__init__.py in __getattr__(self, name)
51 # TODO: update URL before release
52 raise AttributeError(
---> 53 f"{name} was deprecated in dash-bootstrap-components version "
54 f"1.0.0. You are using {__version__}. For more details please "
55 "see the migration guide: "
AttributeError: FormGroup was deprecated in dash-bootstrap-components version 1.0.0. You are using 1.0.0. For more details please see the migration guide: https://dbc-v1.herokuapp.com/migration-guide/
```
Any hints?
Thanks in advance.
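As a possible stopgap while the dashboard catches up with the dash-bootstrap-components 1.0 API break (the deprecated `FormGroup` in the traceback), pinning the pre-1.0 version may restore the old API (an untested guess; the version bound is an assumption):

```shell
pip install "dash-bootstrap-components<1.0"
```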
|
closed
|
2021-10-24T12:14:43Z
|
2021-10-31T20:28:26Z
|
https://github.com/oegedijk/explainerdashboard/issues/150
|
[] |
yerbby
| 3 |
ets-labs/python-dependency-injector
|
flask
| 486 |
AsyncResource integration with fastapi-utils CBVs
|
Hello,
I'm trying to use an AsyncResource (to manage SQLAlchemy AsyncSessions) with FastAPI Class-Based Views (from [fastapi-utils](https://fastapi-utils.davidmontague.xyz/user-guide/class-based-views/)) so that I can inject common resources using a base class that my endpoints will inherit. Unfortunately, I can't seem to find a way to make it work.
Not even sure if what I'm wanting to do is possible, but maybe another set of eyes can help.
Here's a rough simplification of my code:
async_session_resource.py:
```python
from dependency_injector import resources
from sqlalchemy.ext.asyncio import AsyncSession
class AsyncSessionProvider(resources.AsyncResource):
    async def init(self, sessionmaker) -> AsyncSession:
        return sessionmaker()

    async def shutdown(self, session: AsyncSession) -> None:
        await session.close()
```
api_container.py:
```python
from async_session_provider import AsyncSessionProvider
from dependency_injector import containers, providers
from sqlalchemy.ext.asyncio import AsyncSession, create_async_engine
from sqlalchemy.orm import sessionmaker
class ApiContainer(containers.DeclarativeContainer):
    config = providers.Configuration()
    config.db_url.from_env("DB_URL")

    engine = providers.Singleton(create_async_engine, url=config.db_url)

    async_session_factory = providers.Singleton(
        sessionmaker,
        bind=engine,
        autoflush=True,
        expire_on_commit=False,
        class_=AsyncSession
    )

    session = providers.Resource(AsyncSessionProvider, async_session_factory)
```
endpoint_base.py:
```python
from api_container import ApiContainer
from dependency_injector.wiring import Closing, Provide, inject
from fastapi.param_functions import Depends
from sqlalchemy.ext.asyncio import AsyncSession

class Endpoint:
    @inject
    def __init__(
        self,
        session: AsyncSession = Depends(Closing[Provide[ApiContainer.session]])
    ):
        self.session = session
```
my_endpoint.py:
```python
from api_router import router # router = APIRouter()
from endpoint_base import Endpoint
from fastapi import Path
from fastapi_utils.cbv import cbv
from my_resource import MyResource
@cbv(router)
class MyEndpoint(Endpoint):
    @router.get("/my/{id}", response_model=MyResource)
    async def get(self, id: int = Path(...)):
        return await self.session.get(MyResource, id)
```
api_app.py:
```python
import sys
import endpoint_base
import my_endpoint
from api_container import ApiContainer
from api_router import router
from dependency_injector.wiring import Provide, inject
from fastapi import FastAPI
class MyAPI(FastAPI):
    def __init__(self):
        super().__init__(on_startup=[self.__load_container])

    def __load_container(self):
        modules = [
            sys.modules[__name__],
            endpoint_base,
            my_endpoint,
        ]
        self.container = ApiContainer()
        self.container.wire(modules=modules)

app = MyAPI()
```
Run:
```bash
uvicorn api_app:app
```
The exception I get when I hit the endpoint:
```
ERROR: Exception in ASGI application
Traceback (most recent call last):
File ".venv\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 369, in run_asgi
result = await app(self.scope, self.receive, self.send)
File ".venv\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 59, in __call__
return await self.app(scope, receive, send)
File ".venv\lib\site-packages\fastapi\applications.py", line 208, in __call__
await super().__call__(scope, receive, send)
File ".venv\lib\site-packages\starlette\applications.py", line 112, in __call__
await self.middleware_stack(scope, receive, send)
File ".venv\lib\site-packages\starlette\middleware\errors.py", line 181, in __call__
raise exc from None
File ".venv\lib\site-packages\starlette\middleware\errors.py", line 159, in __call__
await self.app(scope, receive, _send)
File ".venv\lib\site-packages\starlette\middleware\cors.py", line 78, in __call__
await self.app(scope, receive, send)
File ".venv\lib\site-packages\starlette\exceptions.py", line 82, in __call__
raise exc from None
File ".venv\lib\site-packages\starlette\exceptions.py", line 71, in __call__
await self.app(scope, receive, sender)
File ".venv\lib\site-packages\starlette\routing.py", line 580, in __call__
await route.handle(scope, receive, send)
File ".venv\lib\site-packages\starlette\routing.py", line 241, in handle
await self.app(scope, receive, send)
File ".venv\lib\site-packages\starlette\routing.py", line 52, in app
response = await func(request)
File ".venv\lib\site-packages\fastapi\routing.py", line 213, in app
dependency_overrides_provider=dependency_overrides_provider,
File ".venv\lib\site-packages\fastapi\dependencies\utils.py", line 552, in solve_dependencies
solved = await run_in_threadpool(call, **sub_values)
File ".venv\lib\site-packages\starlette\concurrency.py", line 40, in run_in_threadpool
return await loop.run_in_executor(None, func, *args)
File ".pyenv\pyenv-win\versions\3.7.9\lib\concurrent\futures\thread.py", line 57, in run
result = self.fn(*self.args, **self.kwargs)
File ".venv\lib\site-packages\fastapi_utils\cbv.py", line 82, in new_init
old_init(self, *args, **kwargs)
File ".venv\lib\site-packages\dependency_injector\wiring.py", line 595, in _patched
to_inject[injection] = provider()
File "src/dependency_injector/providers.pyx", line 207, in dependency_injector.providers.Provider.__call__
File "src/dependency_injector/providers.pyx", line 3616, in dependency_injector.providers.Resource._provide
File "src/dependency_injector/providers.pyx", line 3674, in dependency_injector.providers.Resource._create_init_future
File ".pyenv\pyenv-win\versions\3.7.9\lib\asyncio\tasks.py", line 607, in ensure_future
loop = events.get_event_loop()
File ".pyenv\pyenv-win\versions\3.7.9\lib\asyncio\events.py", line 644, in get_event_loop
% threading.current_thread().name)
RuntimeError: There is no current event loop in thread 'ThreadPoolExecutor-0_0'.
```
Notes:
- Synchronous `resources.Resource` resources inject properly in `Endpoint.__init__()`
- I could not find a way to call `session.close()` on an AsyncSession in a synchronous Resource that actually properly closes the session
- Using `async def __init__()` does not result in the above error, but of course I can't set object properties via an async method
- Injecting directly into the async `get()` methods works as expected of course
Is what I'm trying to do with CBVs even possible? Am I trying to be too clever?
Thanks!
|
closed
|
2021-08-12T22:24:38Z
|
2021-08-13T15:38:54Z
|
https://github.com/ets-labs/python-dependency-injector/issues/486
|
[
"question"
] |
Daveography
| 4 |
QuivrHQ/quivr
|
api
| 3,488 |
Documentation not up to date with new changes
|
<img src="https://uploads.linear.app/51e2032d-a488-42cf-9483-a30479d3e2d0/5978517c-49ef-4491-a38d-3189509a5af3/09d81e58-233c-4032-8396-568e6131fef6?signature=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJwYXRoIjoiLzUxZTIwMzJkLWE0ODgtNDJjZi05NDgzLWEzMDQ3OWQzZTJkMC81OTc4NTE3Yy00OWVmLTQ0OTEtYTM4ZC0zMTg5NTA5YTVhZjMvMDlkODFlNTgtMjMzYy00MDMyLTgzOTYtNTY4ZTYxMzFmZWY2IiwiaWF0IjoxNzMyMjEzMTA4LCJleHAiOjMzMzAyNzczMTA4fQ.SWRDX_Deinet_oEyX3nExhW-nNBziRv1bMhEGK9u-Tk " alt="image.png" width="1446" height="515" />
The documentation has not been changed on core.quivr.app to reflect the new name for max_input_token
|
closed
|
2024-11-18T22:13:57Z
|
2025-02-24T20:06:30Z
|
https://github.com/QuivrHQ/quivr/issues/3488
|
[
"Stale",
"area: docs"
] |
StanGirard
| 3 |
Anjok07/ultimatevocalremovergui
|
pytorch
| 1,243 |
Problem with Ultimate Vocal Remover
|
Last Error Received:
Process: MDX-Net
If this error persists, please contact the developers with the error details.
Raw Error Details:
RuntimeError: "MPS backend out of memory (MPS allocated: 447.37 MB, other allocations: 3.05 GB, max allowed: 3.40 GB). Tried to allocate 4.50 MB on private pool. Use PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 to disable upper limit for memory allocations (may cause system failure)."
Traceback Error: "
File "UVR.py", line 6584, in process_start
File "separate.py", line 470, in seperate
File "separate.py", line 565, in demix
File "separate.py", line 606, in run_model
File "torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "onnx2pytorch/convert/model.py", line 224, in forward
File "torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/nn/modules/linear.py", line 114, in forward
return F.linear(input, self.weight, self.bias)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
"
Error Time Stamp [2024-03-17 13:17:29]
Full Application Settings:
vr_model: Choose Model
aggression_setting: 5
window_size: 512
mdx_segment_size: 256
batch_size: Default
crop_size: 256
is_tta: False
is_output_image: False
is_post_process: False
is_high_end_process: False
post_process_threshold: 0.2
vr_voc_inst_secondary_model: No Model Selected
vr_other_secondary_model: No Model Selected
vr_bass_secondary_model: No Model Selected
vr_drums_secondary_model: No Model Selected
vr_is_secondary_model_activate: False
vr_voc_inst_secondary_model_scale: 0.9
vr_other_secondary_model_scale: 0.7
vr_bass_secondary_model_scale: 0.5
vr_drums_secondary_model_scale: 0.5
demucs_model: Choose Model
segment: Default
overlap: 0.25
overlap_mdx: Default
overlap_mdx23: 8
shifts: 2
chunks_demucs: Auto
margin_demucs: 44100
is_chunk_demucs: False
is_chunk_mdxnet: False
is_primary_stem_only_Demucs: False
is_secondary_stem_only_Demucs: False
is_split_mode: True
is_demucs_combine_stems: True
is_mdx23_combine_stems: True
demucs_voc_inst_secondary_model: No Model Selected
demucs_other_secondary_model: No Model Selected
demucs_bass_secondary_model: No Model Selected
demucs_drums_secondary_model: No Model Selected
demucs_is_secondary_model_activate: False
demucs_voc_inst_secondary_model_scale: 0.9
demucs_other_secondary_model_scale: 0.7
demucs_bass_secondary_model_scale: 0.5
demucs_drums_secondary_model_scale: 0.5
demucs_pre_proc_model: No Model Selected
is_demucs_pre_proc_model_activate: False
is_demucs_pre_proc_model_inst_mix: False
mdx_net_model: UVR-MDX-NET Inst HQ 3
chunks: Auto
margin: 44100
compensate: Auto
denoise_option: None
is_match_frequency_pitch: True
phase_option: Automatic
phase_shifts: None
is_save_align: False
is_match_silence: True
is_spec_match: False
is_mdx_c_seg_def: False
is_invert_spec: False
is_deverb_vocals: False
deverb_vocal_opt: Main Vocals Only
voc_split_save_opt: Lead Only
is_mixer_mode: False
mdx_batch_size: Default
mdx_voc_inst_secondary_model: No Model Selected
mdx_other_secondary_model: No Model Selected
mdx_bass_secondary_model: No Model Selected
mdx_drums_secondary_model: No Model Selected
mdx_is_secondary_model_activate: False
mdx_voc_inst_secondary_model_scale: 0.9
mdx_other_secondary_model_scale: 0.7
mdx_bass_secondary_model_scale: 0.5
mdx_drums_secondary_model_scale: 0.5
is_save_all_outputs_ensemble: True
is_append_ensemble_name: False
chosen_audio_tool: Manual Ensemble
choose_algorithm: Min Spec
time_stretch_rate: 2.0
pitch_rate: 2.0
is_time_correction: True
is_gpu_conversion: True
is_primary_stem_only: True
is_secondary_stem_only: False
is_testing_audio: False
is_auto_update_model_params: True
is_add_model_name: False
is_accept_any_input: False
is_task_complete: False
is_normalization: False
is_wav_ensemble: False
is_create_model_folder: False
mp3_bit_set: 320k
semitone_shift: 0
save_format: WAV
wav_type_set: PCM_16
cuda_set: Default
help_hints_var: True
set_vocal_splitter: No Model Selected
is_set_vocal_splitter: False
is_save_inst_set_vocal_splitter: False
model_sample_mode: False
model_sample_mode_duration: 30
demucs_stems: All Stems
mdx_stems: All Stems
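The raw error itself suggests the only user-side knob here: lifting PyTorch's MPS memory cap before launching UVR. The error text warns this may cause system instability if memory is exhausted, so treat it as a workaround, not a fix:

```shell
# Allow the MPS backend to allocate beyond its default high-watermark
export PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0
```

The variable must be set in the environment UVR is launched from (e.g. export it in the same terminal before starting the app).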
|
open
|
2024-03-17T12:20:58Z
|
2024-03-17T12:20:58Z
|
https://github.com/Anjok07/ultimatevocalremovergui/issues/1243
|
[] |
Mamboitalian0
| 0 |
milesmcc/shynet
|
django
| 185 |
Incorrect calculation of month value
|
```
shynet_main | [2022-01-01 00:04:18 +0000] [9] [INFO] Booting worker with pid: 9
shynet_main | ERROR Internal Server Error: /dashboard/
shynet_main | Traceback (most recent call last):
shynet_main | File "/usr/local/lib/python3.9/site-packages/django/core/handlers/exception.py", line 47, in inner
shynet_main | response = get_response(request)
shynet_main | File "/usr/local/lib/python3.9/site-packages/django/core/handlers/base.py", line 181, in _get_response
shynet_main | response = wrapped_callback(request, *callback_args, **callback_kwargs)
shynet_main | File "/usr/local/lib/python3.9/site-packages/django/views/generic/base.py", line 70, in view
shynet_main | return self.dispatch(request, *args, **kwargs)
shynet_main | File "/usr/local/lib/python3.9/site-packages/django/contrib/auth/mixins.py", line 71, in dispatch
shynet_main | return super().dispatch(request, *args, **kwargs)
shynet_main | File "/usr/local/lib/python3.9/site-packages/django/views/generic/base.py", line 98, in dispatch
shynet_main | return handler(request, *args, **kwargs)
shynet_main | File "/usr/local/lib/python3.9/site-packages/django/views/generic/list.py", line 157, in get
shynet_main | context = self.get_context_data()
shynet_main | File "/usr/src/shynet/dashboard/views.py", line 36, in get_context_data
shynet_main | data = super().get_context_data(**kwargs)
shynet_main | File "/usr/src/shynet/dashboard/mixins.py", line 64, in get_context_data
shynet_main | data["date_ranges"] = self.get_date_ranges()
shynet_main | File "/usr/src/shynet/dashboard/mixins.py", line 50, in get_date_ranges
shynet_main | "start": now.replace(day=1, month=now.month - 1),
shynet_main | ValueError: month must be in 1..12
```
```
REPOSITORY TAG IMAGE ID CREATED
milesmcc/shynet latest 8fc7868f03dd 3 months ago
```
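The traceback points at `now.replace(day=1, month=now.month - 1)`, which raises in January because `now.month - 1` becomes 0. A minimal sketch of a safe previous-month computation (pure stdlib; the helper name is illustrative, not shynet's actual patch):

```python
import datetime


def previous_month_start(now: datetime.datetime) -> datetime.datetime:
    """Return midnight on the first day of the month before `now`.

    Subtracting 1 from `now.month` fails in January (month would be 0),
    so instead step back one day from the first of the current month,
    which always lands in the previous month, then snap to day 1.
    """
    first_of_this_month = now.replace(
        day=1, hour=0, minute=0, second=0, microsecond=0
    )
    last_of_prev_month = first_of_this_month - datetime.timedelta(days=1)
    return last_of_prev_month.replace(day=1)


# The exact case from the traceback: January 1st no longer raises.
print(previous_month_start(datetime.datetime(2022, 1, 1)))  # 2021-12-01 00:00:00
```

The same trick works for any month, since going through a `timedelta` avoids invalid `replace()` field values entirely.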
|
closed
|
2022-01-01T00:08:43Z
|
2022-01-01T21:28:41Z
|
https://github.com/milesmcc/shynet/issues/185
|
[] |
wolfpld
| 6 |
Lightning-AI/pytorch-lightning
|
deep-learning
| 19,599 |
The color scheme of yaml code in the document makes it difficult to read
|
### 📚 Documentation
<img width="334" alt="截屏2024-03-08 18 09 26" src="https://github.com/Lightning-AI/pytorch-lightning/assets/17872844/07f59352-03eb-4c8d-a28c-00308de5be0a">
The black background and the dark blue text make the code really difficult to read.
cc @borda
|
open
|
2024-03-08T10:10:48Z
|
2024-03-08T15:20:22Z
|
https://github.com/Lightning-AI/pytorch-lightning/issues/19599
|
[
"bug",
"duplicate",
"docs"
] |
BakerBunker
| 2 |
thtrieu/darkflow
|
tensorflow
| 532 |
how to plot recall-precision curve and print mAP on the terminal along with the validation loss
|
Hey all,
Can anyone help me plot a recall-precision curve and print the mAP on the terminal along with the loss?
Also, has anyone run a validation pass after a particular number of steps? I want to see the train and validation loss in the same plot. Any help?
Thanks.
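As far as I know darkflow doesn't ship a built-in mAP or PR-curve report, so here is a hedged, framework-agnostic sketch in pure Python. It assumes the matching step has already been done (each detection flagged 1/0 for whether it matched a ground-truth box at IoU ≥ 0.5); the function names are mine:

```python
def precision_recall(scores, matched, n_ground_truth):
    """Accumulate TP/FP counts over detections sorted by descending confidence."""
    pairs = sorted(zip(scores, matched), key=lambda sm: -sm[0])
    tp = fp = 0
    recall, precision = [], []
    for _, is_match in pairs:
        tp += is_match
        fp += 1 - is_match
        recall.append(tp / n_ground_truth)
        precision.append(tp / (tp + fp))
    return recall, precision


def average_precision(recall, precision):
    """11-point interpolated AP (PASCAL VOC style) for a single class."""
    ap = 0.0
    for i in range(11):
        t = i / 10
        candidates = [p for r, p in zip(recall, precision) if r >= t]
        ap += max(candidates) if candidates else 0.0
    return ap / 11


# Toy example: 4 detections, 4 ground-truth boxes, third detection is a false positive.
r, p = precision_recall([0.9, 0.8, 0.7, 0.6], [1, 1, 0, 1], n_ground_truth=4)
print(round(average_precision(r, p), 4))  # 0.6818
```

The returned `recall`/`precision` lists can then be fed to `matplotlib.pyplot.plot` to draw the curve, and mAP is just the mean of `average_precision` across classes.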
|
open
|
2018-01-20T09:07:31Z
|
2018-03-08T01:04:51Z
|
https://github.com/thtrieu/darkflow/issues/532
|
[] |
onurbarut
| 0 |
huggingface/pytorch-image-models
|
pytorch
| 1,625 |
[BUG] broken source links in the documentation
|
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. Clicking the source link on the `list_models` documentation page (https://huggingface.co/docs/timm/reference/models) redirects to a broken page:

<https://github.com/rwightman/pytorch-image-models/blob/main/src/timm/models/_registry.py#L94>
**Expected behavior**
Instead, it should redirect to
<https://github.com/rwightman/pytorch-image-models/blob/main/timm/models/_registry.py#L94>
I believe due to this other links in the docs are also broken.
|
closed
|
2023-01-08T17:40:32Z
|
2023-01-13T22:47:22Z
|
https://github.com/huggingface/pytorch-image-models/issues/1625
|
[
"bug"
] |
deven367
| 5 |