repo_name (string, 9-75 chars) | topic (string, 30 classes) | issue_number (int64, 1-203k) | title (string, 1-976 chars) | body (string, 0-254k chars) | state (string, 2 classes) | created_at (string, 20 chars) | updated_at (string, 20 chars) | url (string, 38-105 chars) | labels (sequence, 0-9 items) | user_login (string, 1-39 chars) | comments_count (int64, 0-452)
---|---|---|---|---|---|---|---|---|---|---|---
desec-io/desec-stack | rest-api | 498 | frontend: broken logout behavior when login expires | After logging in to deSEC (https://desec.io/login), one gets access to the domain management. After some time, this login will expire and the user will be essentially logged out. However, the user is not redirected to a 'you have been logged out' page; instead, all of their entries appear empty, which looks like an empty account and can mislead the user.
Furthermore, while the "logout" button in the top right still shows, one cannot actively log out, but has to open a new deSEC window in the browser to 'forget' the login credentials or logged-in state.
In essence, it appears that after login expiry the user is still treated as logged in, but is blocked from any action (e.g. reading records or logging out). | open | 2021-01-04T14:25:49Z | 2021-01-04T14:26:59Z | https://github.com/desec-io/desec-stack/issues/498 | [
"bug",
"prio: medium",
"gui"
] | xopham | 0 |
pydantic/pydantic | pydantic | 11,535 | TypeAdapter for generic list raises PydanticSerializationError | ### Initial Checks
- [x] I confirm that I'm using Pydantic V2
### Description
I noticed that Pydantic raises a `PydanticSerializationError` when I use a `TypeAdapter` to dump a list of a generic type (bound to `BaseModel`).
This only happens if the `BaseModel` references another `BaseModel` that's defined afterwards (using `__future__.annotations`).
Invoking `model_rebuild()` on the outer `BaseModel` also fixes this issue.
I would expect that the generic `TypeAdapter` works exactly like the explicitly typed version and does not raise a `PydanticSerializationError`.
### Example Code
```Python
from __future__ import annotations
from typing import TypeVar
import pytest
from pydantic import BaseModel
from pydantic import TypeAdapter
from pydantic_core import PydanticSerializationError
def test_type_adapter_with_generic_typing():
    # we define the models starting with the outermost model (only possible with __future__.annotations)
    class Person(BaseModel):
        name: str
        hobbies: list[Hobby]

    class Hobby(BaseModel):
        name: str

    # if we define the models using the constructor or model_validate, everything works as expected
    # model = [Person(name="Joe Public", hobbies=[Hobby(name="swimming"), Hobby(name="cooking")])]

    # if we use TypeAdapter instead, we might get a PydanticSerializationError
    persons = [
        {
            "name": "Joe Public",
            "hobbies": [
                {"name": "swimming"},
                {"name": "cooking"},
            ],
        },
    ]
    model = TypeAdapter(list[Person]).validate_python(persons)

    # case 1: when using an explicit TypeAdapter instance, we can dump the model to JSON
    assert TypeAdapter(list[Person]).dump_json(model).decode("utf-8")

    # case 2: when using a generic TypeAdapter instance, we get a PydanticSerializationError
    T = TypeVar("T", bound=BaseModel)

    def _list_model_dump_json(data: list[T]) -> str:
        return TypeAdapter(list[T]).dump_json(data).decode("utf-8")

    with pytest.raises(PydanticSerializationError) as error:
        _list_model_dump_json(model)
    assert error.match(
        "Error serializing to JSON: PydanticSerializationError: Error calling function `<lambda>`: TypeError: 'MockValSer' object cannot be converted to 'SchemaSerializer'"
    )

    # case 3: when rebuilding the model before using the generic TypeAdapter, we can dump the model to JSON
    Person.model_rebuild()
    assert _list_model_dump_json(model)
```
### Python, Pydantic & OS Version
```Text
pydantic version: 2.10.6
pydantic-core version: 2.27.2
pydantic-core build: profile=release pgo=false
install path: .../.venv/lib/python3.12/site-packages/pydantic
python version: 3.12.9 (main, Mar 4 2025, 17:13:15) [Clang 16.0.0 (clang-1600.0.26.6)]
platform: macOS-15.3.1-arm64-arm-64bit
related packages: mypy-1.15.0 typing_extensions-4.12.2
commit: unknown
``` | closed | 2025-03-07T14:07:17Z | 2025-03-11T13:30:42Z | https://github.com/pydantic/pydantic/issues/11535 | [
"bug V2",
"pending"
] | PhilippPHoffmann | 2 |
amidaware/tacticalrmm | django | 969 | Tasks set in the past should run at first opportunity | Any task that was scheduled in the past should run at least once when the agent comes back online.
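A minimal sketch of the requested catch-up check (field names like `scheduled_at` and `last_run` are assumptions, not Tactical RMM's actual schema):
```python
from datetime import datetime, timezone

# On agent check-in: any task whose scheduled time has already passed
# and that never ran should fire once immediately.
def run_missed_tasks(tasks):
    now = datetime.now(timezone.utc)
    for task in tasks:
        if task.scheduled_at < now and task.last_run is None:
            task.execute()
```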
See
https://discord.com/channels/736478043522072608/744281869499105290/940679837604003941 | closed | 2022-02-11T15:06:40Z | 2022-04-17T02:28:32Z | https://github.com/amidaware/tacticalrmm/issues/969 | [
"bug"
] | silversword411 | 1 |
strawberry-graphql/strawberry | graphql | 3,608 | Support of Geometry Scalars | <!--- Provide a general summary of the changes you want in the title above. -->
The Strawberry Django integration provides geometry scalars such as `Polygon`, `Point`, etc (https://github.com/strawberry-graphql/strawberry-django/blob/main/strawberry_django/fields/types.py#L339). Are there any plans to implement these in strawberry?
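For reference, a scalar of this kind can already be sketched on top of plain strawberry; a minimal illustration follows (the `[x, y]` wire format is an assumption here, not necessarily what strawberry-django does):
```python
from typing import NewType

import strawberry

# A Point scalar serialized as an [x, y] pair -- illustrative only.
Point = strawberry.scalar(
    NewType("Point", tuple),
    serialize=lambda v: list(v),
    parse_value=lambda v: tuple(v),
    description="A geographic point, serialized as [x, y]",
)

@strawberry.type
class Query:
    @strawberry.field
    def location(self) -> Point:
        return Point((4.35, 50.85))
```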
## Feature Request Type
- [ ] Core functionality
- [x] Alteration (enhancement/optimization) of existing feature(s)
- [ ] New behavior
## Description
Since Postgres is a widely used database driver that is popular for its GIS extension, geometry types are present in many applications. Therefore, the feature of using GraphQL types in `strawberry` might be helpful.
However, such types are not part of the core GraphQL spec. | closed | 2024-08-30T09:07:13Z | 2025-03-20T15:56:50Z | https://github.com/strawberry-graphql/strawberry/issues/3608 | [] | tokr-bit | 2 |
hzwer/ECCV2022-RIFE | computer-vision | 312 | Not able to install requirements anymore | After cloning the repository and running `pip3 install -r requirements.txt`, I get the following errors:
```
ERROR: Ignored the following versions that require a different python version: 1.21.2 Requires-Python >=3.7,<3.11; 1.21.3 Requires-Python >=3.7,<3.11; 1.21.4 Requires-Python >=3.7,<3.11; 1.21.5 Requires-Python >=3.7,<3.11; 1.21.6 Requires-Python >=3.7,<3.11
ERROR: Could not find a version that satisfies the requirement torch==1.7.1 (from versions: 1.13.0, 1.13.1, 2.0.0, 2.0.1)
ERROR: No matching distribution found for torch==1.7.1
```
Trying to install the requirements with `conda install` instead of pip, in a new conda environment, I get this when trying to install pytorch, after running `conda install pytorch==1.7.1 torchvision==0.8.2 torchaudio==0.7.2 cudatoolkit=11.0 -c pytorch`:
```
Package libgcc-ng conflicts for:
python=3.11 -> bzip2[version='>=1.0.8,<2.0a0'] -> libgcc-ng[version='>=10.3.0|>=7.3.0|>=7.5.0|>=9.3.0|>=9.4.0']
python=3.11 -> libgcc-ng[version='>=11.2.0|>=12']The following specifications were found to be incompatible with your system:
- feature:/linux-64::__cuda==12.1=0
- feature:/linux-64::__glibc==2.35=0
- feature:|@/linux-64::__glibc==2.35=0
- cudatoolkit=11.0 -> __glibc[version='>=2.17,<3.0.a0']
- cudatoolkit=11.0 -> libgcc-ng[version='>=10.3.0'] -> __glibc[version='>=2.17']
- python=3.11 -> libgcc-ng[version='>=11.2.0'] -> __glibc[version='>=2.17']
- pytorch==1.7.1 -> cudatoolkit[version='>=11.0,<11.1'] -> __glibc[version='>=2.17|>=2.17,<3.0.a0']
- torchvision==0.8.2 -> __glibc[version='>=2.17']
- torchvision==0.8.2 -> cudatoolkit[version='>=11.0,<11.1'] -> __glibc[version='>=2.17,<3.0.a0']
- torchvision==0.8.2 -> pytorch=[build=cuda*] -> __cuda
Your installed version is: 2.35
```
System: Ubuntu 22.04 with anaconda installed | open | 2023-05-14T16:03:54Z | 2023-10-16T04:54:28Z | https://github.com/hzwer/ECCV2022-RIFE/issues/312 | [] | lgmventura | 3 |
serengil/deepface | deep-learning | 1,067 | how to prep images to compare | hi folks,
fantastic repo and docs !! hats off!
I'm trying to compare 2 images and also run analyze, but I'm not getting perfect results. Any pointers appreciated.
```
result = DeepFace.verify(img1_path = "kp1.jpg", img2_path = "kp2.jpg", model="Facenet512")
{'verified': False, 'distance': 0.9413217688795057, 'threshold': 0.68, 'model': 'VGG-Face', 'detector_backend': 'opencv', 'similarity_metric': 'cosine', 'facial_areas': {'img1': {'x': 142, 'y': 125, 'w': 363, 'h': 363, 'left_eye': (145, 290), 'right_eye': (247, 156)}, 'img2': {'x': 103, 'y': 141, 'w': 341, 'h': 341, 'left_eye': (110, 139), 'right_eye': (217, 127)}}, 'time': 0.47}
result = DeepFace.verify(img1_path = "kp1.jpg", img2_path = "kp2.jpg", model_name ="Facenet512")
{'verified': False, 'distance': 0.9564904391863303, 'threshold': 0.3, 'model': 'Facenet512', 'detector_backend': 'opencv', 'similarity_metric': 'cosine', 'facial_areas': {'img1': {'x': 142, 'y': 125, 'w': 363, 'h': 363, 'left_eye': (145, 290), 'right_eye': (247, 156)}, 'img2': {'x': 103, 'y': 141, 'w': 341, 'h': 341, 'left_eye': (110, 139), 'right_eye': (217, 127)}}, 'time': 3.96}
#####
DeepFace.analyze(img_path = "kp1.jpg",actions = ['age', 'gender', 'race', 'emotion'])
[{'age': 32, 'region': {'x': 142, 'y': 125, 'w': 363, 'h': 363, 'left_eye': (145, 290), 'right_eye': (247, 156)}, 'face_confidence': 0.93, 'gender': {'Woman': 86.65652275085449, 'Man': 13.34347277879715}, 'dominant_gender': 'Woman', 'race': {'asian': 14.739155769348145, 'indian': 12.908931076526642, 'black': 5.508599802851677, 'white': 18.902520835399628, 'middle eastern': 18.655064702033997, 'latino hispanic': 29.285725951194763}, 'dominant_race': 'latino hispanic', 'emotion': {'angry': 49.89502727359696, 'disgust': 7.552223901204839e-05, 'fear': 16.26838244607344, 'happy': 0.005536496940148812, 'sad': 32.946189221512626, 'surprise': 0.0318374387338721, 'neutral': 0.8529571393404686}, 'dominant_emotion': 'angry'}]
```


| closed | 2024-03-06T20:41:11Z | 2024-03-06T22:40:42Z | https://github.com/serengil/deepface/issues/1067 | [
"question"
] | listaction | 2 |
fastapi/fastapi | python | 13,056 | Can't use `Annotated` with `ForwardRef` | ### Issue Content
The following code doesn't generate the correct OpenAPI json:
```py
from __future__ import annotations
from dataclasses import dataclass
from typing import Annotated
from fastapi import Depends, FastAPI
app = FastAPI()
def get_potato() -> Potato:
    return Potato(color='red', size=10)

@app.get('/')
async def read_root(potato: Annotated[Potato, Depends(get_potato)]):
    return {'Hello': 'World'}

@dataclass
class Potato:
    color: str
    size: int
```
If we move the `Potato` up, or remove the `Annotated`, then it works as expected. | open | 2024-12-10T12:11:29Z | 2025-03-20T16:51:07Z | https://github.com/fastapi/fastapi/issues/13056 | [] | Kludex | 11 |
mitmproxy/mitmproxy | python | 6,241 | HAR Import: Handle HTTP Version | #### Problem Description
When importing a HAR flow that's using HTTP/2 (see for example brave.har), mitmproxy's client replay should also use HTTP/2. The first thing that needs to happen for this is that we set `http_version` on import. :) | closed | 2023-07-11T15:12:22Z | 2023-07-29T16:38:30Z | https://github.com/mitmproxy/mitmproxy/issues/6241 | [
"kind/bug",
"area/addons"
] | mhils | 0 |
dpgaspar/Flask-AppBuilder | rest-api | 1,671 | Upgrade jQuery to 3.5.0 or greater due to vulnerability |
Flask-Appbuilder version: 3.3.1
Currently, the bundled jQuery version is 3.4.1; it needs to be updated to 3.5 or higher due to a security vulnerability.
### Steps to reproduce
https://github.com/dpgaspar/Flask-AppBuilder/blob/master/flask_appbuilder/static/appbuilder/js/jquery-latest.js | closed | 2021-07-21T21:16:22Z | 2021-09-28T04:09:36Z | https://github.com/dpgaspar/Flask-AppBuilder/issues/1671 | [
"dependency-bump"
] | HardikVijayPatel | 4 |
pydata/xarray | numpy | 9,935 | Use DatasetGroupBy.quantile for DatasetGroupBy.median for multiple groups when using dask arrays | ### Is your feature request related to a problem?
I am grouping data in a Dataset and computing statistics. I wanted to take the median over (two) groups, but I got the following message:
```python
>>> ds.groupby(['x', 'y']).median()
# NotImplementedError: The da.nanmedian function only works along an axis or a subset of axes. The full algorithm is difficult to do in parallel
```
while `ds.groupby(['x']).median()` works without any problem.
I noticed that this issue is because the DataArrays are dask arrays: if they are numpy arrays, there is no problem. In addition, if `.median()` is replaced by `.quantile(0.5)`, there is no problem either. See below:
```python
import dask.array as da
import numpy as np
import xarray as xr
rng = da.random.default_rng(0)
ds = xr.Dataset(
    {'a': (('x', 'y'), rng.random((10, 10)))},
    coords={'x': np.arange(5).repeat(2), 'y': np.arange(5).repeat(2)}
)

# Raises:
# NotImplementedError: The da.nanmedian function only works along an axis or a subset of axes. The full algorithm is difficult to do in parallel
try:
    ds.groupby(['x', 'y']).median()
except NotImplementedError as e:
    print(e)
# No problems with the following:
ds.groupby(['x']).median()
ds.groupby(['x', 'y']).quantile(0.5)
ds.compute().groupby(['x', 'y']).median() # Implicit conversion to numpy array
```
### Describe the solution you'd like
A straightforward solution seems to be to use `DatasetGroupBy.quantile(0.5)` for `DatasetGroupBy.median()` if the median is to be computed over multiple groups.
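A rough sketch of that fallback, with placeholder helpers standing in for the real checks inside xarray's groupby machinery:
```python
import numpy as np

# Illustrative only: is_dask_backed / has_multiple_groupers are hypothetical
# helpers, not actual xarray functions.
def groupby_median(grouped, dim=None, **kwargs):
    if is_dask_backed(grouped) and has_multiple_groupers(grouped):
        # dask lacks a general multi-axis nanmedian, but quantile is chunk-friendly
        return grouped.quantile(0.5, dim=dim, **kwargs)
    return grouped.reduce(np.nanmedian, dim=dim, **kwargs)
```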
### Describe alternatives you've considered
_No response_
### Additional context
My `xr.show_versions()`:
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 3.10.5 | packaged by conda-forge | (main, Jun 14 2022, 07:06:46) [GCC 10.3.0]
python-bits: 64
OS: Linux
OS-release: 6.8.0-49-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: ('en_US', 'UTF-8')
libhdf5: 1.12.2
libnetcdf: 4.9.3-development
xarray: 2024.10.0
pandas: 2.2.3
numpy: 1.26.4
scipy: 1.14.1
netCDF4: 1.6.5
pydap: None
h5netcdf: 1.4.1
h5py: 3.12.1
zarr: 2.18.3
cftime: 1.6.4.post1
nc_time_axis: None
iris: None
bottleneck: 1.4.2
dask: 2024.11.2
distributed: None
matplotlib: 3.9.2
cartopy: 0.24.0
seaborn: 0.13.2
numbagg: None
fsspec: 2024.10.0
cupy: None
pint: None
sparse: None
flox: None
numpy_groupies: None
setuptools: 75.5.0
pip: 24.3.1
conda: None
pytest: None
mypy: None
IPython: 8.29.0
sphinx: 7.4.7
</details> | open | 2025-01-09T14:28:41Z | 2025-01-09T19:36:17Z | https://github.com/pydata/xarray/issues/9935 | [
"upstream issue"
] | adriaat | 0 |
elesiuta/picosnitch | plotly | 30 | Errors are being redirected to /dev/null | https://github.com/elesiuta/picosnitch/blob/34f9f1f2cad39b6c2725f1f98a2d99fe65b98f1c/picosnitch.py#L2330-L2334
Hello,
Please remove the redirections and pipes to `/dev/null` for errors. You may want to consider removing all `/dev/null` redirections.
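For illustration, the difference is simply whether stderr is discarded (the command below is a hypothetical invocation):
```python
import subprocess

cmd = ["python", "-m", "picosnitch", "dash"]  # hypothetical invocation
# Current behavior: errors vanish into /dev/null
subprocess.Popen(cmd, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
# Requested behavior: let stderr through so failures are visible
subprocess.Popen(cmd, stdout=subprocess.DEVNULL, stderr=None)
```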
I got picosnitch up and running today, but with some difficulty. `picosnitch dash` wasn't starting, and gave no error messages, nor logs.
I had to open up my Python IDE in order to trace the code. I removed these `/dev/null` statements, and eventually found that Arch Linux currently has out of date packages for python-dash, which breaks with newer versions of Flask.
I have submitted changes to the Arch User Repo to get these packages updated, however I would appreciate not hiding these error messages. In case packages get updated and break things in the future, users will be able to troubleshoot what is wrong with their system more easily.
Love your work!
Regards,
Aeonik | closed | 2023-09-27T19:48:44Z | 2023-10-23T20:54:18Z | https://github.com/elesiuta/picosnitch/issues/30 | [] | aeonik | 1 |
twopirllc/pandas-ta | pandas | 354 | stdev does not take ddof = 0 | **Which version are you running? The lastest version is on Github. Pip is for major releases.**
0.3.2b0
**Describe the bug**
Calling stdev with ddof=0 (and all other derived indicators, like bbands) actually returns a standard deviation with ddof = 1.
**To Reproduce**
```python
import pandas as pd
import pandas_ta as ta
data = {'close': ['1', '2', '2', '1']}
df = pd.DataFrame.from_dict(data)
df.ta.stdev(length=4, ddof=0)[3] # Should be 0.5
df.ta.stdev(length=4, ddof=0).equals(df.ta.stdev(length=4, ddof=1)) # Should be False
```
**Expected behavior**
Should be the same as:
```python
df.rolling(window=4).std(ddof=0)['close']
```
**Additional context**
The issue seems to be [this validation](https://github.com/twopirllc/pandas-ta/blob/1deb7559f626d2a5cf664b6c0af7a8016a53bfca/pandas_ta/statistics/stdev.py#L12) which fails with ddof = 0.
| closed | 2021-07-26T13:51:09Z | 2022-06-14T04:18:07Z | https://github.com/twopirllc/pandas-ta/issues/354 | [
"bug"
] | dnabb | 8 |
mars-project/mars | scikit-learn | 2,555 | [BUG] AttributeError: 'WebStorageAPI' object has no attribute 'get_infos' |
**Describe the bug**
Call to_ray_mldataset() in Mars client will raise the exception `AttributeError: 'WebStorageAPI' object has no attribute 'get_infos'`

**To Reproduce**
To help us reproducing this bug, please provide information below:
1. Your Python version 3.7.7
2. The version of Mars you use latest master
3. Versions of crucial packages, such as numpy, scipy and pandas
4. Full stack of the error.
5. Minimized code to reproduce the error.
**Expected behavior**
**Additional context**
| closed | 2021-10-28T07:53:06Z | 2021-10-28T14:56:51Z | https://github.com/mars-project/mars/issues/2555 | [
"mod: web",
"mod: storage"
] | fyrestone | 0 |
flavors/django-graphql-jwt | graphql | 55 | Added Per-cookie authentication | As mentioned in this article (https://dev.to/rdegges/please-stop-using-local-storage-1i04), storing sensitive data in localstorage is a bad idea.
I think it would be good to allow the JWT token to be stored in a cookie with the "httpOnly" and "secure" flags to counter XSS attacks. | closed | 2018-12-02T17:14:47Z | 2019-04-28T12:58:26Z | https://github.com/flavors/django-graphql-jwt/issues/55 | [
"enhancement"
] | skitoo | 8 |
plotly/dash-table | plotly | 575 | Set name of file downloaded after hitting Export Button [feature request] | Currently the file name is `Data.csv`, with no way to change it. Ideally, this could be changed dynamically depending on when the user clicks it (to allow for timestamps in the file name).
| open | 2019-09-05T20:44:14Z | 2019-09-05T21:03:49Z | https://github.com/plotly/dash-table/issues/575 | [
"dash-type-enhancement"
] | slazicoicr | 0 |
streamlit/streamlit | python | 10,098 | Optionally show nested spinners | ### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [X] I added a descriptive title and summary to this issue.
### Summary
Issue #9951 and PR #9956 changed the behaviour of spinners to only show the outermost spinner for cached functions. Could an option be added to override that behaviour and show the nested spinners?
### Why?
I liked being able to see the progress of what was happening under the hood, and it's also better UX (imo) for users who are stuck waiting for the top level function to complete.
### How?
either:
- add a `show_nested_spinners` boolean to the `@st.cache_XXX` decorators, default to false to preserve the current behaviour
- add an `override_parent_spinners` boolean to the `@st.cache_XXX` decorators, default to false to preserve the current behaviour
The first is probably preferable I think. I'm sure there's a more elegant name for those parameters but that's all I can think of right now.
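Hypothetical usage of the first option (`show_nested_spinners` does not exist in Streamlit today; this only illustrates the requested API shape):
```python
import streamlit as st

@st.cache_data(show_nested_spinners=True)  # proposed flag, not a real parameter
def load_everything():
    return inner_step()

@st.cache_data
def inner_step():
    # with the flag set on the outer function, this spinner would stay visible
    ...
```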
### Additional Context
_No response_ | open | 2025-01-03T02:37:05Z | 2025-01-08T15:31:04Z | https://github.com/streamlit/streamlit/issues/10098 | [
"type:enhancement",
"feature:cache",
"feature:st.spinner"
] | nataziel | 2 |
torrvision/crayon | data-visualization | 17 | Crayon might require some form of password protection | Maybe a simple key generated by the server at startup, or something you can add to the docker file startup. | open | 2017-02-06T21:29:10Z | 2017-07-06T16:36:05Z | https://github.com/torrvision/crayon/issues/17 | [
"enhancement"
] | edran | 2 |
sammchardy/python-binance | api | 1,027 | API and Binance.com showing different prices for when the stock was bought and sold | So I was running a bot and the bot bought shares at $55.72, and sold at $55.74. This mathematically should be a profit, but upon looking at my order history, it shows a buy at $55.79 and a sell at $55.72, and I came out at a loss.


TO REPRODUCE:
Have an API buy and sell shares, and then look at order history on binance.com
| open | 2021-09-14T00:08:24Z | 2021-09-16T13:43:54Z | https://github.com/sammchardy/python-binance/issues/1027 | [] | Chaneriel | 3 |
plotly/dash-table | plotly | 406 | DataTable error when combined with dcc.Tabs | I'm getting an error when creating a DataTable in an app that uses tabs. I found a similar issue that was popping up when it was still `dash_table_experiments ` and was able to get a solution to this problem from this [community post](https://community.plot.ly/t/display-tables-in-dash/4707/40). That said, I wasn't certain if this is still a bug or not as it looked like this was resolved when `dash_table` got released and I haven't been able to see any other issues come up for this.
You can reproduce with this minimal example:
```python
import dash
import dash_core_components as dcc
import dash_html_components as html
import dash_table
import pandas as pd
app = dash.Dash(__name__)
df = pd.read_csv('https://raw.githubusercontent.com/plotly/datasets/master/solar.csv')
app.layout = html.Div([
    dcc.Tabs(
        id='tabs',
        children=[
            dcc.Tab(
                label='Tab one',
                children=[
                    dash_table.DataTable(
                        id='table',
                        columns=[{"name": i, "id": i} for i in df.columns],
                        data=df.to_dict("rows"),
                    )
                ]
            ),
            dcc.Tab(
                label='Tab two',
                children=[
                    html.H1('This is tab two')
                ]
            )
        ])
])

if __name__ == '__main__':
    app.run_server(debug=True)
```
My fix was to create a dummy datatable on the second tab like so:
```python
dcc.Tab(
    label='Tab two',
    children=[
        html.H1('This is tab two'),
        html.Div(
            dash_table.DataTable(data=[{}], columns=[]),
            style={'display': 'none'})
    ]
)
```
## Here are the console error messages I'm getting
Error on FF v66:
```
NotFoundError: Node was not found react-dom@15.4.2.min.js:12
```
Error on Chrome v73:
```
react-dom@15.4.2.min.js?v=0.21.0&m=1552054944:12 Uncaught DOMException: Failed to execute 'removeChild' on 'Node': The node to be removed is not a child of this node.
```
## Environment
```
dash 0.40.0
dash-core-components 0.45.0
dash-html-components 0.15.0
dash-renderer 0.21.0
dash-table 3.6.0
``` | open | 2019-04-02T05:22:00Z | 2020-09-17T19:24:43Z | https://github.com/plotly/dash-table/issues/406 | [
"dash-type-bug",
"dash-stage-backlog",
"size: 2"
] | mbkupfer | 7 |
nltk/nltk | nlp | 2,688 | Does it support the calculation of BLEU-SBP method? | closed | 2021-04-04T13:24:25Z | 2021-04-07T06:55:11Z | https://github.com/nltk/nltk/issues/2688 | [
"invalid"
] | wmathor | 0 |
sktime/pytorch-forecasting | pandas | 1,668 | [MNT] remove mutable objects from defaults | We should ensure that no mutable objects are argument defaults, e.g., lists, dicts.
All mutable defaults should be replaced by appropriate defaults, e.g., strings if applicable, or `None`, which internally is then replaced by a newly initialized mutable default.
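A minimal sketch of the pattern (illustrative class, not a real estimator from this package):
```python
class Forecaster:
    # `hyperparams=None` instead of `hyperparams={}`: a dict default would be
    # created once at definition time and shared across all instances.
    def __init__(self, hyperparams=None):
        self.hyperparams = hyperparams if hyperparams is not None else {}
```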
Care needs to be taken in cases where a `self` write happens; e.g., dataclass-like structures should not overwrite the `self` attribute with a mutable default either, but should instead write the replacement default to `self`. | closed | 2024-09-10T17:03:34Z | 2025-01-20T17:51:11Z | https://github.com/sktime/pytorch-forecasting/issues/1668 | [
"good first issue",
"maintenance"
] | fkiraly | 5 |
kubeflow/katib | scikit-learn | 1,915 | GPU not consuming for Katib experiment - GKE Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory | /kind bug
**What steps did you take and what happened:**
I am trying to create a Kubeflow pipeline that tunes the hyperparameters of a text classification model in TensorFlow using Katib on GKE clusters. I created a cluster using the commands below:
```
CLUSTER_NAME="kubeflow-pipelines-standalone-v2"
ZONE="us-central1-a"
MACHINE_TYPE="n1-standard-2"
SCOPES="cloud-platform"
NODES_NUM=1
gcloud container clusters create $CLUSTER_NAME --zone $ZONE --machine-type $MACHINE_TYPE --scopes $SCOPES --num-nodes $NODES_NUM
gcloud config set compute/zone $ZONE
gcloud container clusters get-credentials $CLUSTER_NAME
export PIPELINE_VERSION=1.8.2
kubectl apply -k "github.com/kubeflow/pipelines/manifests/kustomize/cluster-scoped-resources?ref=$PIPELINE_VERSION"
kubectl wait --for condition=established --timeout=60s crd/applications.app.k8s.io
kubectl apply -k "github.com/kubeflow/pipelines/manifests/kustomize/env/dev?ref=$PIPELINE_VERSION"
# katib
kubectl apply -k "github.com/kubeflow/katib.git/manifests/v1beta1/installs/katib-standalone?ref=v0.13.0"
kubectl apply -k "github.com/kubeflow/training-operator/manifests/overlays/standalone?ref=v1.4.0"
kubectl apply -f ./test.yaml
# disabling caching
export NAMESPACE=kubeflow
kubectl get mutatingwebhookconfiguration cache-webhook-${NAMESPACE}
kubectl patch mutatingwebhookconfiguration cache-webhook-${NAMESPACE} --type='json' -p='[{"op":"replace", "path": "/webhooks/0/rules/0/operations/0", "value": "DELETE"}]'
kubectl describe configmap inverse-proxy-config -n kubeflow | grep googleusercontent.com
GPU_POOL_NAME="gpu-pool2"
CLUSTER_NAME="kubeflow-pipelines-standalone-v2"
CLUSTER_ZONE="us-central1-a"
GPU_TYPE="nvidia-tesla-k80"
GPU_COUNT=1
MACHINE_TYPE="n1-highmem-8"
NODES_NUM=1
# Node pool creation may take several minutes.
gcloud container node-pools create ${GPU_POOL_NAME} --accelerator type=${GPU_TYPE},count=${GPU_COUNT} --zone ${CLUSTER_ZONE} --cluster ${CLUSTER_NAME} --num-nodes=0 --machine-type=${MACHINE_TYPE} --scopes=cloud-platform --num-nodes $NODES_NUM
kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/container-engine-accelerators/master/nvidia-driver-installer/cos/daemonset-preloaded.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/device-plugins/nvidia-gpu/daemonset.yaml
```
I then created a kubeflow pipeline:
```
from kfp import compiler
import kfp
import kfp.dsl as dsl
from kfp import components
@dsl.pipeline(
    name="End to End Pipeline",
    description="An end to end mnist example including hyperparameter tuning, train and inference"
)
def pipeline_func(
    time_loc = "gs://faris_bucket_us_central/Pipeline_data/input_dataset/dbpedia_model/GKE_Katib/time_csv/",
    hyper_image_uri_train = "gcr.io/.............../hptunekatib:v7",
    hyper_image_uri = "gcr.io/.............../hptunekatibclient:v7",
    model_uri = "gs://faris_bucket_us_central/Pipeline_data/dbpedia_hyper_models/GKE_Katib/",
    experiment_name = "dbpedia-exp-1",
    experiment_namespace = "kubeflow",
    experiment_timeout_minutes = 60
):
    # first stage: ingest and preprocess -> returns uploaded GCS URI for the preprocessed dataset, setting memory to 32GB, CPU to 4 CPU
    hp_tune = dsl.ContainerOp(
        name='hp-tune-katib',
        image=hyper_image_uri,
        arguments=[
            '--experiment_name', experiment_name,
            '--experiment_namespace', experiment_namespace,
            '--experiment_timeout_minutes', experiment_timeout_minutes,
            '--delete_after_done', True,
            '--hyper_image_uri', hyper_image_uri_train,
            '--time_loc', time_loc,
            '--model_uri', model_uri
        ],
        file_outputs={'best-params': '/output.txt'}
    ).set_gpu_limit(1)

    # restricting the maximum usable memory and cpu for this stage
    hp_tune.set_memory_limit("49G")
    hp_tune.set_cpu_limit("7")

# Run the Kubeflow Pipeline in the user's namespace.
if __name__ == '__main__':
    # compiling the pipeline and generating a tar.gz file to upload to the Kubeflow Pipeline UI
    import kfp.compiler as compiler
    compiler.Compiler().compile(
        pipeline_func, 'pipeline_db.tar.gz'
    )
```
These are my two containers:
1. To launch the katib experiments based on the specified parameters and arguments passed to the dsl.ContainerOp()
2. The main training script for text classification. This container is passed as "image" to the trial spec for katib
gcr.io/.............../hptunekatibclient:v7
```
# importing required packages
import argparse
import datetime
from datetime import datetime as dt
from distutils.util import strtobool
import json
import os
import logging
import time
import pandas as pd
from google.cloud import storage
from pytz import timezone
from kubernetes.client import V1ObjectMeta
from kubeflow.katib import KatibClient
from kubeflow.katib import ApiClient
from kubeflow.katib import V1beta1Experiment
from kubeflow.katib import V1beta1ExperimentSpec
from kubeflow.katib import V1beta1AlgorithmSpec
from kubeflow.katib import V1beta1ObjectiveSpec
from kubeflow.katib import V1beta1ParameterSpec
from kubeflow.katib import V1beta1FeasibleSpace
from kubeflow.katib import V1beta1TrialTemplate
from kubeflow.katib import V1beta1TrialParameterSpec
from kubeflow.katib import V1beta1MetricsCollectorSpec
from kubeflow.katib import V1beta1CollectorSpec
from kubeflow.katib import V1beta1FileSystemPath
from kubeflow.katib import V1beta1SourceSpec
from kubeflow.katib import V1beta1FilterSpec

logger = logging.getLogger()
logging.basicConfig(level=logging.INFO)

FINISH_CONDITIONS = ["Succeeded", "Failed"]

# function to record the start time and end time to calculate execution time, pipeline start up and teardown time
def write_time(types, time_loc):
    formats = "%Y-%m-%d %I:%M:%S %p"
    now_utc = dt.now(timezone('UTC'))
    now_asia = now_utc.astimezone(timezone('Asia/Kolkata'))
    start_time = str(now_asia.strftime(formats))
    time_df = pd.DataFrame({"time": [start_time]})
    print("written")
    time_df.to_csv(time_loc + types + ".csv", index=False)

def get_args():
    parser = argparse.ArgumentParser(description='Katib Experiment launcher')
    parser.add_argument('--experiment_name', type=str,
                        help='Experiment name')
    parser.add_argument('--experiment_namespace', type=str, default='anonymous',
                        help='Experiment namespace')
    parser.add_argument('--experiment_timeout_minutes', type=int, default=60*24,
                        help='Time in minutes to wait for the Experiment to complete')
    parser.add_argument('--delete_after_done', type=strtobool, default=True,
                        help='Whether to delete the Experiment after it is finished')
    parser.add_argument('--hyper_image_uri', type=str, default="gcr.io/.............../hptunekatib:v2",
                        help='Hyper image uri')
    parser.add_argument('--time_loc', type=str, default="gs://faris_bucket_us_central/Pipeline_data/input_dataset/dbpedia_model/GKE_Katib/time_csv/",
                        help='Time loc')
    parser.add_argument('--model_uri', type=str, default="gs://faris_bucket_us_central/Pipeline_data/dbpedia_hyper_models/GKE_Katib/",
                        help='Model URI')
    return parser.parse_args()

def wait_experiment_finish(katib_client, experiment, timeout):
    polling_interval = datetime.timedelta(seconds=30)
    end_time = datetime.datetime.now() + datetime.timedelta(minutes=timeout)
    experiment_name = experiment.metadata.name
    experiment_namespace = experiment.metadata.namespace
    while True:
        current_status = None
        try:
            current_status = katib_client.get_experiment_status(name=experiment_name, namespace=experiment_namespace)
        except Exception as e:
            logger.info("Unable to get current status for the Experiment: {} in namespace: {}. Exception: {}".format(
                experiment_name, experiment_namespace, e))
        # If Experiment has reached complete condition, exit the loop.
        if current_status in FINISH_CONDITIONS:
            logger.info("Experiment: {} in namespace: {} has reached the end condition: {}".format(
                experiment_name, experiment_namespace, current_status))
            return
        # Print the current condition.
        logger.info("Current condition for Experiment: {} in namespace: {} is: {}".format(
            experiment_name, experiment_namespace, current_status))
        # If timeout has been reached, raise an exception.
        if datetime.datetime.now() > end_time:
            raise Exception("Timeout waiting for Experiment: {} in namespace: {} "
                            "to reach one of these conditions: {}".format(
                                experiment_name, experiment_namespace, FINISH_CONDITIONS))
        # Sleep for poll interval.
        time.sleep(polling_interval.seconds)

if __name__ == "__main__":
    args = get_args()
    write_time("hyper_parameter_tuning_start", args.time_loc)

    # Trial count specification.
    max_trial_count = 2
    max_failed_trial_count = 2
    parallel_trial_count = 1

    # Objective specification.
    objective = V1beta1ObjectiveSpec(
        type="minimize",
        # goal=100,
        objective_metric_name="accuracy"
        # additional_metric_names=["accuracy"]
    )

    # Metrics collector specification.
    # metrics_collector_specs = V1beta1MetricsCollectorSpec(
    #     collector=V1beta1CollectorSpec(kind="File"),
    #     source=V1beta1SourceSpec(
    #         file_system_path=V1beta1FileSystemPath(
    #             # format="TEXT",
    #             path="/opt/trainer/katib/metrics.log",
    #             kind="File"
    #         ),
    #         filter=V1beta1FilterSpec(
    #             # metrics_format=["{metricName: ([\\w|-]+), metricValue: ((-?\\d+)(\\.\\d+)?)}"]
    #             metrics_format=["([\w|-]+)\s*=\s*([+-]?\d*(\.\d+)?([Ee][+-]?\d+)?)"]
    #         )
    #     )
    # )

    # Algorithm specification.
    algorithm = V1beta1AlgorithmSpec(
        algorithm_name="random",
    )

    # Experiment search space.
    # In this example we tune learning rate and batch size.
    parameters = [
        V1beta1ParameterSpec(
            name="batch_size",
            parameter_type="discrete",
            feasible_space=V1beta1FeasibleSpace(
                list=["32", "42", "52", "62", "64"]
            ),
        ),
        V1beta1ParameterSpec(
            name="learning_rate",
            parameter_type="double",
            feasible_space=V1beta1FeasibleSpace(
                min="0.001",
                max="0.005"
            ),
        )
    ]

    # TODO (andreyvelich): Use community image for the mnist example.
    trial_spec = {
        "apiVersion": "kubeflow.org/v1",
        "kind": "TFJob",
        "spec": {
            "tfReplicaSpecs": {
                "PS": {
                    "replicas": 1,
                    "restartPolicy": "Never",
                    "template": {
                        "metadata": {
                            "annotations": {
                                "sidecar.istio.io/inject": "false",
                            }
                        },
                        "spec": {
                            "containers": [
                                {
                                    "name": "tensorflow",
                                    "image": args.hyper_image_uri,
                                    "command": [
                                        "python",
                                        "/opt/trainer/task.py",
                                        "--model_uri=" + args.model_uri,
                                        "--batch_size=${trialParameters.batchSize}",
                                        "--learning_rate=${trialParameters.learningRate}"
                                    ],
                                    "ports": [
                                        {
                                            "containerPort": 2222,
                                            "name": "tfjob-port"
                                        }
                                    ]
                                    # "resources": {
                                    #     "limits": {
                                    #         "cpu": "1"
                                    #     }
                                    # }
                                }
                            ]
                        }
                    }
                },
                "Worker": {
                    "replicas": 1,
                    "restartPolicy": "Never",
                    "template": {
                        "metadata": {
                            "annotations": {
                                "sidecar.istio.io/inject": "false",
                            }
                        },
                        "spec": {
                            "containers": [
                                {
                                    "name": "tensorflow",
                                    "image": args.hyper_image_uri,
                                    "command": [
                                        "python",
                                        "/opt/trainer/task.py",
                                        "--model_uri=" + args.model_uri,
                                        "--batch_size=${trialParameters.batchSize}",
                                        "--learning_rate=${trialParameters.learningRate}"
                                    ],
                                    "ports": [
                                        {
                                            "containerPort": 2222,
                                            "name": "tfjob-port"
                                        }
                                    ]
                                    # "resources": {
                                    #     "limits": {
                                    #         "nvidia.com/gpu": 1
                                    #     }
                                    # }
                                }
                            ]
                        }
                    }
                }
            }
        }
    }
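    # NOTE: both "resources" blocks above are commented out, so these trial
    # pods request no "nvidia.com/gpu" and can be scheduled on CPU-only nodes.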
    # Configure parameters for the Trial template.
    trial_template = V1beta1TrialTemplate(
        primary_container_name="tensorflow",
        trial_parameters=[
            V1beta1TrialParameterSpec(
                name="batchSize",
                description="batch size",
                reference="batch_size"
            ),
            V1beta1TrialParameterSpec(
                name="learningRate",
                description="Learning rate",
                reference="learning_rate"
            ),
        ],
        trial_spec=trial_spec
    )

    # Create an Experiment from the above parameters.
    experiment_spec = V1beta1ExperimentSpec(
        max_trial_count=max_trial_count,
        max_failed_trial_count=max_failed_trial_count,
        parallel_trial_count=parallel_trial_count,
        objective=objective,
        algorithm=algorithm,
        parameters=parameters,
        trial_template=trial_template
    )

    experiment_name = args.experiment_name
    experiment_namespace = args.experiment_namespace
    logger.info("Creating Experiment: {} in namespace: {}".format(experiment_name, experiment_namespace))

    # Create Experiment object.
    experiment = V1beta1Experiment(
        api_version="kubeflow.org/v1beta1",
        kind="Experiment",
        metadata=V1ObjectMeta(
            name=experiment_name,
            namespace=experiment_namespace
        ),
        spec=experiment_spec
    )
    logger.info("Experiment Spec : " + str(experiment_spec))
    logger.info("Experiment: " + str(experiment))

    # Create Katib client.
    katib_client = KatibClient()

    # Create Experiment in Kubernetes cluster.
    output = katib_client.create_experiment(experiment, namespace=experiment_namespace)

    # Wait until Experiment is created.
    end_time = datetime.datetime.now() + datetime.timedelta(minutes=60)
    while True:
        current_status = None
        # Try to get Experiment status.
        try:
            current_status = katib_client.get_experiment_status(name=experiment_name, namespace=experiment_namespace)
        except Exception:
            logger.info("Waiting until Experiment is created...")
        # If current status is set, exit the loop.
        if current_status is not None:
            break
        # If timeout has been reached, raise an exception.
        if datetime.datetime.now() > end_time:
            raise Exception("Timeout waiting for Experiment: {} in namespace: {} to be created".format(
                experiment_name, experiment_namespace))
        time.sleep(1)

    logger.info("Experiment is created")

    # Wait for Experiment finish.
    wait_experiment_finish(katib_client, experiment, args.experiment_timeout_minutes)

    # Check if Experiment is successful.
    if katib_client.is_experiment_succeeded(name=experiment_name, namespace=experiment_namespace):
        logger.info("Experiment: {} in namespace: {} is successful".format(
            experiment_name, experiment_namespace))
        optimal_hp = katib_client.get_optimal_hyperparameters(
            name=experiment_name, namespace=experiment_namespace)
        logger.info("Optimal hyperparameters:\n{}".format(optimal_hp))

        # # Create dir if it doesn't exist.
        # if not os.path.exists(os.path.dirname("output.txt")):
        #     os.makedirs(os.path.dirname("output.txt"))

        # Save HyperParameters to the file.
        with open("output.txt", 'w') as f:
            f.write(json.dumps(optimal_hp))
    else:
        logger.info("Experiment: {} in namespace: {} is failed".format(
            experiment_name, experiment_namespace))
        # Print Experiment if it is failed.
        experiment = katib_client.get_experiment(name=experiment_name, namespace=experiment_namespace)
        logger.info(experiment)

    # Delete Experiment if it is needed.
    if args.delete_after_done:
        katib_client.delete_experiment(name=experiment_name, namespace=experiment_namespace)
        logger.info("Experiment: {} in namespace: {} has been deleted".format(
            experiment_name, experiment_namespace))

    write_time("hyper_parameter_tuning_end", args.time_loc)
```
Dockerfile
```
FROM gcr.io/deeplearning-platform-release/tf-gpu.2-8
# installing packages
RUN pip install pandas
RUN pip install gcsfs
RUN pip install google-cloud-storage
RUN pip install pytz
RUN pip install kubernetes
RUN pip install kubeflow-katib
# moving code to preprocess
RUN mkdir /hp_tune
COPY task.py /hp_tune
# CREDENTIAL Authentication
COPY /prj-vertex-ai-2c390f7e8fec.json /hp_tune/prj-vertex-ai-2c390f7e8fec.json
ENV GOOGLE_APPLICATION_CREDENTIALS="/hp_tune/prj-vertex-ai-2c390f7e8fec.json"
# entry point
ENTRYPOINT ["python3", "/hp_tune/task.py"]
```
gcr.io/.............../hptunekatib:v7
```
# import os
# os.system("pip install tensorflow-gpu==2.8.0")

from sklearn.preprocessing import LabelEncoder
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_text as text
import os
from tensorflow.keras.layers import Conv1D, MaxPool1D, Embedding, concatenate
from tensorflow.keras.layers import Activation, Dropout, Flatten, Dense, Input
from tensorflow.keras.models import Model
from tensorflow import keras
from datetime import datetime
from pytz import timezone
from sklearn.model_selection import train_test_split
import pandas as pd
from google.cloud import storage
import argparse
import logging

logger = logging.getLogger()
logging.basicConfig(level=logging.INFO)

logger.info("Num GPUs Available: " + str(tf.config.list_physical_devices('GPU')))

import subprocess
process = subprocess.Popen(['sh', '-c', 'nvidia-smi'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = process.communicate()
logger.info("NVIDIA SMI " + str(out))

# formats a float in signed scientific notation (e.g. "+9.9e-1") so that the
# "accuracy = <value>" line printed below can be parsed by Katib's metrics collector
def format_strs(x):
    strs = ""
    if x > 0:
        sign_t = "+"
        strs += "+"
    else:
        sign_t = "-"
        strs += "-"
    strs = strs + "{:.1e}".format(x)
    if "+" in strs[1:]:
        sign = "+"
        strs = strs[1:].split("+")
    else:
        sign = "-"
        strs = strs[1:].split("-")
    last_d = strs[1][1:] if strs[1][0] == "0" else strs[1]
    strs_f = sign_t + strs[0] + sign + last_d
    return strs_f

def get_args():
    '''Parses args. Must include all hyperparameters you want to tune.'''
    parser = argparse.ArgumentParser()
    parser.add_argument(
        '--learning_rate',
        required=True,
        type=float,
        help='learning_rate')
    parser.add_argument(
        '--batch_size',
        required=True,
        type=int,
        help='batch_size')
    parser.add_argument(
        '--model_uri',
        required=True,
        type=str,
        help='Model Uri')
    args = parser.parse_args()
    return args

def download_blob(bucket_name, source_blob_name, destination_file_name):
    """Downloads a blob from the bucket."""
    # The ID of your GCS bucket
    # bucket_name = "your-bucket-name"
    # The ID of your GCS object
    # source_blob_name = "storage-object-name"
    # The path to which the file should be downloaded
    # destination_file_name = "local/path/to/file"
    storage_client = storage.Client()
    bucket = storage_client.bucket(bucket_name)
    # Construct a client side representation of a blob.
    # Note `Bucket.blob` differs from `Bucket.get_blob` as it doesn't retrieve
    # any content from Google Cloud Storage. As we don't need additional data,
    # using `Bucket.blob` is preferred here.
    blob = bucket.blob(source_blob_name)
    blob.download_to_filename(destination_file_name)

def create_dataset():
    download_blob("faris_bucket_us_central", "Pipeline_data/input_dataset/dbpedia_model/data/" + "train.csv", "train.csv")
    trainData = pd.read_csv('train.csv')
    trainData.columns = ['label', 'title', 'description']
    # trainData = trainData.sample(frac=0.002)
    X_train, X_test, y_train, y_test = train_test_split(trainData['description'], trainData['label'], stratify=trainData['label'], test_size=0.1, random_state=0)
    return X_train, X_test, y_train, y_test

def train_model(train_X, train_y, test_X, test_y, learning_rate, batch_size):
    logger.info("Training with lr = " + str(learning_rate) + " bs = " + str(batch_size))
    bert_preprocess = hub.KerasLayer("https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3")
    bert_encoder = hub.KerasLayer("https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-12_H-128_A-2/2", trainable=False)
    text_input = tf.keras.layers.Input(shape=(), dtype=tf.string, name='text')
    preprocessed_text = bert_preprocess(text_input)
    outputs = bert_encoder(preprocessed_text)
    # Neural network layers
    l = tf.keras.layers.Dropout(0.2, name="dropout")(outputs['pooled_output'])  # dropout_rate
    l = tf.keras.layers.Dense(14, activation='softmax', kernel_initializer=tf.keras.initializers.GlorotNormal(seed=24))(l)  # dense_units
    model = Model(inputs=[text_input], outputs=l)
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=learning_rate), loss='categorical_crossentropy', metrics=['accuracy'])
    history = model.fit(train_X, train_y, epochs=5, validation_data=(test_X, test_y), batch_size=batch_size)
    return model, history

def main():
    args = get_args()
    logger.info("Creating dataset")
    train_X, test_X, train_y, test_y = create_dataset()

    # one_hot_encoding the class label
    encoder = LabelEncoder()
    encoder.fit(train_y)
    y_train_encoded = encoder.transform(train_y)
    y_test_encoded = encoder.transform(test_y)
    y_train_ohe = tf.keras.utils.to_categorical(y_train_encoded)
    y_test_ohe = tf.keras.utils.to_categorical(y_test_encoded)

    logger.info("Training model")
    model = train_model(
        train_X,
        y_train_ohe,
        test_X,
        y_test_ohe,
        args.learning_rate,
        int(float(args.batch_size))
    )

    logger.info("Saving model")
    artifact_filename = 'saved_model'
    local_path = artifact_filename
    tf.saved_model.save(model[0], local_path)

    # Upload model artifact to Cloud Storage
    model_directory = args.model_uri + "-".join(os.environ["HOSTNAME"].split("-")[:-2]) + "/"
    local_path = "saved_model/assets/vocab.txt"
    storage_path = os.path.join(model_directory, "assets/vocab.txt")
    blob = storage.blob.Blob.from_string(storage_path, client=storage.Client())
    blob.upload_from_filename(local_path)
    local_path = "saved_model/variables/variables.data-00000-of-00001"
    storage_path = os.path.join(model_directory, "variables/variables.data-00000-of-00001")
    blob = storage.blob.Blob.from_string(storage_path, client=storage.Client())
    blob.upload_from_filename(local_path)
    local_path = "saved_model/variables/variables.index"
    storage_path = os.path.join(model_directory, "variables/variables.index")
    blob = storage.blob.Blob.from_string(storage_path, client=storage.Client())
    blob.upload_from_filename(local_path)
    local_path = "saved_model/saved_model.pb"
    storage_path = os.path.join(model_directory, "saved_model.pb")
    blob = storage.blob.Blob.from_string(storage_path, client=storage.Client())
    blob.upload_from_filename(local_path)
    logger.info("Model Saved at " + model_directory)

    logger.info("Keras Score: " + str(model[1].history["accuracy"][-1]))
    hp_metric = model[1].history["accuracy"][-1]
    print("accuracy =", format_strs(hp_metric))

if __name__ == "__main__":
    main()
```
Dockerfile
```
# FROM gcr.io/deeplearning-platform-release/tf-cpu.2-8
FROM gcr.io/deeplearning-platform-release/tf-gpu.2-8
RUN mkdir -p /opt/trainer
# RUN pip install scikit-learn
RUN pip install tensorflow_text==2.8.1
# RUN pip install tensorflow-gpu==2.8.0
# CREDENTIAL Authentication
COPY /prj-vertex-ai-2c390f7e8fec.json /prj-vertex-ai-2c390f7e8fec.json
ENV GOOGLE_APPLICATION_CREDENTIALS="/prj-vertex-ai-2c390f7e8fec.json"
COPY *.py /opt/trainer/
# # RUN chgrp -R 0 /opt/trainer && chmod -R g+rwX /opt/trainer
# RUN chmod -R 777 /home/trainer
ENTRYPOINT ["python", "/opt/trainer/task.py"]
# Sets up the entry point to invoke the trainer.
# ENTRYPOINT ["python", "-m", "trainer.task"]
```
The pipeline runs, but it does not use the GPU, and this piece of code
```
logger.info("Num GPUs Available: " + str(tf.config.list_physical_devices('GPU')))
import subprocess
process = subprocess.Popen(['sh', '-c', 'nvidia-smi'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = process.communicate()
logger.info("NVIDIA SMI " + str(out))
```
gives an empty list and an empty string. It is as if the GPU does not exist. I am attaching the logs of the container:
```
insertId | labels."compute.googleapis.com/resource_name" | labels."k8s-pod/group-name" | labels."k8s-pod/job-name" | labels."k8s-pod/replica-index" | labels."k8s-pod/replica-type" | labels."k8s-pod/training_kubeflow_org/job-name" | labels."k8s-pod/training_kubeflow_org/operator-name" | labels."k8s-pod/training_kubeflow_org/replica-index" | labels."k8s-pod/training_kubeflow_org/replica-type" | logName | receiveLocation | receiveTimestamp | receivedLocation | resource.labels.cluster_name | resource.labels.container_name | resource.labels.location | resource.labels.namespace_name | resource.labels.pod_name | resource.labels.project_id | resource.type | severity | textPayload | timestamp
-- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | --
saaah727bfds9ymw | gke-kubeflow-pipelines-s-default-pool-e4e6dda3-544k | kubeflow.org | dbpedia-exp-1-ntq7tfvj | 0 | ps | dbpedia-exp-1-ntq7tfvj | tfjob-controller | 0 | ps | projects/prj-vertex-ai/logs/stdout | 2022-07-11T06:07:35.222632672Z | kubeflow-pipelines-standalone-v2 | tensorflow | us-central1-a | kubeflow | dbpedia-exp-1-ntq7tfvj-ps-0 | prj-vertex-ai | k8s_container | INFO | accuracy = +9.9e-1 | 2022-07-11T06:07:30.812554270Z
cg5hf72zfi4a8ymi | gke-kubeflow-pipelines-s-default-pool-e4e6dda3-544k | kubeflow.org | dbpedia-exp-1-ntq7tfvj | 0 | ps | dbpedia-exp-1-ntq7tfvj | tfjob-controller | 0 | ps | projects/prj-vertex-ai/logs/stderr | 2022-07-11T06:07:35.218143792Z | kubeflow-pipelines-standalone-v2 | tensorflow | us-central1-a | kubeflow | dbpedia-exp-1-ntq7tfvj-ps-0 | prj-vertex-ai | k8s_container | ERROR | INFO:root:Num GPUs Available: [] | 2022-07-11T06:07:30.812527036Z
0n32rintpe0v865p | gke-kubeflow-pipelines-s-default-pool-e4e6dda3-544k | kubeflow.org | dbpedia-exp-1-ntq7tfvj | 0 | ps | dbpedia-exp-1-ntq7tfvj | tfjob-controller | 0 | ps | projects/prj-vertex-ai/logs/stderr | 2022-07-11T06:07:35.218143792Z | kubeflow-pipelines-standalone-v2 | tensorflow | us-central1-a | kubeflow | dbpedia-exp-1-ntq7tfvj-ps-0 | prj-vertex-ai | k8s_container | ERROR | 2022-07-11 06:07:30.811609: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (dbpedia-exp-1-ntq7tfvj-ps-0): /proc/driver/nvidia/version does not exist | 2022-07-11T06:07:30.812519914Z
et3b3w8ji0nlmfc3 | gke-kubeflow-pipelines-s-default-pool-e4e6dda3-544k | kubeflow.org | dbpedia-exp-1-ntq7tfvj | 0 | ps | dbpedia-exp-1-ntq7tfvj | tfjob-controller | 0 | ps | projects/prj-vertex-ai/logs/stderr | 2022-07-11T06:07:35.218143792Z | kubeflow-pipelines-standalone-v2 | tensorflow | us-central1-a | kubeflow | dbpedia-exp-1-ntq7tfvj-ps-0 | prj-vertex-ai | k8s_container | ERROR | 2022-07-11 06:07:30.811541: W tensorflow/stream_executor/cuda/cuda_driver.cc:269] failed call to cuInit: UNKNOWN ERROR (303) | 2022-07-11T06:07:30.812511863Z
u8jhqsnsjg3n114l | gke-kubeflow-pipelines-s-default-pool-e4e6dda3-544k | kubeflow.org | dbpedia-exp-1-ntq7tfvj | 0 | ps | dbpedia-exp-1-ntq7tfvj | tfjob-controller | 0 | ps | projects/prj-vertex-ai/logs/stderr | 2022-07-11T06:07:35.218143792Z | kubeflow-pipelines-standalone-v2 | tensorflow | us-central1-a | kubeflow | dbpedia-exp-1-ntq7tfvj-ps-0 | prj-vertex-ai | k8s_container | ERROR | 2022-07-11 06:07:30.811461: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory
```
**What did you expect to happen:**
I expected the pipeline stage to use the GPU and run the text classification on it, but it doesn't.
**Anything else you would like to add:**
**Environment:**
- Katib version (check the Katib controller image version): v0.13.0
- Kubernetes version: (`kubectl version`): 1.22.8-gke.202
- OS (`uname -a`): linux/ COS in containers
---
Impacted by this bug? Give it a 👍 We prioritize the issues with the most 👍
| closed | 2022-07-11T13:08:28Z | 2023-08-25T20:13:45Z | https://github.com/kubeflow/katib/issues/1915 | [
"kind/bug",
"lifecycle/stale"
] | farisfirenze | 10 |
assafelovic/gpt-researcher | automation | 351 | aiofiles to "requirements.txt" | Just a detail, I had to add aiofiles to "requirements.txt" to make the docker compose build.
For the rest, great stuff, working fine, thanks!!!
```diff
philippehenin@xps15-phe-lan:~/Gits/gpt-researcher$ git diff requirements.txt
diff --git a/requirements.txt b/requirements.txt
index ca39dba..c53cc7c 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -22,3 +22,4 @@ arxiv==2.0.0
 PyMuPDF==1.23.6
 requests==2.31.0
 jinja2==3.1.2
+aiofiles
```
| closed | 2024-02-03T18:02:32Z | 2024-02-03T18:05:39Z | https://github.com/assafelovic/gpt-researcher/issues/351 | [] | philippehenin | 1 |
google-deepmind/graph_nets | tensorflow | 88 | NameError: name 'get_graphs' is not defined | Well, I already installed and imported the libraries:
```
import graph_nets as gn
import sonnet as snt
```
and these warnings showed up:
```
WARNING:tensorflow:From C:\ProgramData\Anaconda3\lib\site-packages\sonnet\python\modules\util.py:63: The name tf.GraphKeys is deprecated. Please use tf.compat.v1.GraphKeys instead.
WARNING:tensorflow:
The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:
* https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
* https://github.com/tensorflow/addons
* https://github.com/tensorflow/io (for I/O related ops)
If you depend on functionality not listed there, please file an issue.
WARNING:tensorflow:From C:\ProgramData\Anaconda3\lib\site-packages\graph_nets\blocks.py:474: The name tf.unsorted_segment_sum is deprecated. Please use tf.math.unsorted_segment_sum instead.
```
after running the
```
# Provide your own functions to generate graph-structured data.
input_graphs = get_graphs()
# Create the graph network.
graph_net_module = gn.modules.GraphNetwork(
    edge_model_fn=lambda: snt.nets.MLP([32, 32]),
    node_model_fn=lambda: snt.nets.MLP([32, 32]),
    global_model_fn=lambda: snt.nets.MLP([32, 32]))
# Pass the input graphs to the graph network, and return the output graphs.
output_graphs = graph_net_module(input_graphs)
```
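Note that `get_graphs` in that snippet is a placeholder the caller has to supply; a minimal version (feature sizes chosen arbitrarily) might look like:
```python
import numpy as np
from graph_nets import utils_tf

def get_graphs():
    # one toy graph: 3 nodes, 2 directed edges, and a global feature vector
    data_dict = {
        "nodes": np.random.rand(3, 5).astype(np.float32),
        "edges": np.random.rand(2, 4).astype(np.float32),
        "senders": np.array([0, 1]),
        "receivers": np.array([1, 2]),
        "globals": np.random.rand(3).astype(np.float32),
    }
    return utils_tf.data_dicts_to_graphs_tuple([data_dict])
```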
this error showed up:
`NameError: name 'get_graphs' is not defined` | closed | 2019-09-19T12:01:43Z | 2019-09-27T13:43:22Z | https://github.com/google-deepmind/graph_nets/issues/88 | [] | MohammadHeydari | 1 |
nltk/nltk | nlp | 3,020 | [Implementation Error] nltk.metrics.windowdiff and nltk.metrics.pk | ## API that has an error
`nltk.metrics.windowdiff`
## Wrong Codes
Line 92
```
return wd / (len(seg1) - k + 1.0)
```
## Corrections
Line 92
Should be revised as
```
return wd / (len(seg1) - k)
```
## Detailed Descriptions
Hello, thanks for your contribution to this useful toolkit.
Recently, I found a mistake in the implementation of `nltk.metrics.windowdiff`.
According to the formula on the 8th page of the paper _'Pevzner, L., & Hearst, M. A. (2002). A critique and improvement of an evaluation metric for text segmentation. Computational Linguistics, 28(1), 19-36.'_, the denominator of WindowDiff is `N-k`, where `k` is the window size. However, in the implementation of `nltk.metrics.windowdiff`, `N-k+1` was used, which will slightly affect the result.
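A quick numeric illustration of the off-by-one:
```python
N, k = 10, 4
print(N - k)      # 6 <- denominator per Pevzner & Hearst (2002)
print(N - k + 1)  # 7 <- denominator currently used by nltk
```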
It would be best to correct this as soon as possible. | closed | 2022-07-13T10:20:59Z | 2023-01-28T17:18:03Z | https://github.com/nltk/nltk/issues/3020 | [] | Coldog2333 | 5 |
keras-team/keras | machine-learning | 20,415 | Error in custom train loop and torch backend with multi-GPU | I'm training a basic CIFAR10 classifier (two Dense layers) using multi-GPU with torch backend (see the code below). The code works fine when the net is written in torch. When written in Keras it returns the following error in line 95:
```
RuntimeError: Exception encountered when calling Dense.call().
Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0! (when checking argument for argument mat2 in method wrapper_CUDA_mm)
Arguments received by Dense.call():
• inputs=torch.Tensor(shape=torch.Size([192, 3, 32, 32]), dtype=float32)
• training=None
```
The code is below:
```python
import os
os.environ[ "KERAS_BACKEND" ] = "torch"
os.environ[ "PYTORCH_CUDA_ALLOC_CONF" ] = "expandable_segments:True"

import time
import datetime
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.transforms as transforms
from torchvision.datasets import CIFAR10
from torch.utils.data import DataLoader
# from model import pyramidnet
import keras

num_epochs = 100
batch_size = 768
num_workers = torch.cuda.device_count()
print( 'Running on {} GPUs'.format( num_workers ) )
lr = 0.01

def main():
    device = 'cuda' if torch.cuda.is_available() else 'cpu'

    print('==> Preparing data..')
    transforms_train = transforms.Compose( [
        transforms.RandomCrop( 32, padding = 4 ),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        transforms.Normalize( ( 0.4914, 0.4822, 0.4465 ), ( 0.2023, 0.1994, 0.2010 ) ) ] )
    dataset_train = CIFAR10( root = '../data', train = True, download = True,
                             transform = transforms_train )
    train_loader = DataLoader( dataset_train, batch_size = batch_size,
                               shuffle = True, num_workers = num_workers )

    print( '==> Making model..' )
    # net = pyramidnet()

    # # Define Pytorch net
    # class TwoLayerPerceptron( nn.Module ) :
    #     def __init__( self ):
    #         super( TwoLayerPerceptron, self ).__init__()
    #         self.fc1 = nn.Linear( 32 * 32 * 3, 512 )
    #         self.fc2 = nn.Linear( 512, 10 )
    #     def forward( self, x ):
    #         x = x.view( x.size( 0 ), -1 )
    #         x = self.fc1( x )
    #         x = nn.functional.relu( x )
    #         x = self.fc2( x )
    #         x = nn.functional.softmax( x )
    #         return x
    # # Instantiate the model
    # net = TwoLayerPerceptron()

    # Define Keras net
    net = keras.Sequential( [
        keras.layers.Input( shape = ( 3, 32, 32 ) ),
        keras.layers.Dense( 512, activation = 'relu' ),
        keras.layers.Dense( 10, activation = 'softmax' ) ] )

    net = nn.DataParallel( net )
    net = net.to( device )
    num_params = sum( p.numel() for p in net.parameters() if p.requires_grad )
    print( 'The number of parameters of model is', num_params )

    # criterion = nn.CrossEntropyLoss()
    # optimizer = optim.Adam( net.parameters(), lr = lr )
    criterion = keras.losses.SparseCategoricalCrossentropy()
    optimizer = keras.optimizers.Adam( learning_rate = lr )

    train( net, criterion, optimizer, train_loader, device )

def train( net, criterion, optimizer, train_loader, device ):
    net.train()
    train_start = time.time()
    for epoch in range( num_epochs ) :
        train_loss = 0
        correct = 0
        total = 0
        for batch_idx, ( inputs, targets ) in enumerate( train_loader ) :
            start = time.time()
            inputs = inputs.to( device )
            targets = targets.to( device )
            outputs = net( inputs )
            loss = criterion( outputs, targets )

            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

            train_loss += loss.item()
            _, predicted = outputs.max( 1 )
            total += targets.size( 0 )
            correct += predicted.eq( targets ).sum().item()

            acc = 100 * correct / total
            batch_time = time.time() - start
            if batch_idx % 20 == 0:
                print( 'Epoch: [{}/{}]\t| Batch: [{}/{}]\t| loss: {:.3f}\t| acc: {:.3f}\t| batch time: {:.3f}s '.format(
                    epoch, num_epochs, batch_idx, len( train_loader ), train_loss / ( batch_idx + 1 ), acc, batch_time ) )

    elapse_time = time.time() - train_start
    elapse_time = datetime.timedelta( seconds = elapse_time )
    print( "Training time {}".format( elapse_time ) )

if __name__ == '__main__':
    main()
```
| closed | 2024-10-26T02:14:27Z | 2024-12-19T02:06:15Z | https://github.com/keras-team/keras/issues/20415 | [
"stat:awaiting response from contributor",
"stale",
"type:Bug"
] | caiuspetronius | 4 |
littlecodersh/ItChat | api | 462 | How do I set the size of the QR code generated in the command line? | My itchat version is: 1.2.32. (Obtainable via `python -c "import itchat;print(itchat.__version__)"`)
Other content, or a more detailed description of the problem, can be added below:

```python
itchat.auto_login(hotReload=True, enableCmdQR=2)
itchat.send('Hello hahah ahahha', toUserName="****")
```

On a Linux server, a single window cannot fully display this QR code.
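From the docs, my understanding (an assumption on my part) is that `enableCmdQR` takes an int that scales the width of each QR block, so a smaller value should shrink the code, and negative values invert the colors for light-background terminals:

```python
import itchat

# assumption: 1 draws the narrowest blocks; use -1/-2 on white-background terminals
itchat.auto_login(hotReload=True, enableCmdQR=1)
```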
| closed | 2017-07-31T07:19:32Z | 2018-06-11T15:26:01Z | https://github.com/littlecodersh/ItChat/issues/462 | [
"question"
] | yyf1986 | 3 |
InstaPy/InstaPy | automation | 5951 | Whenever I click on the Windows Start button, it says "name undefined" |
## Expected Behavior
## Current Behavior
## Possible Solution (optional)
## InstaPy configuration
| closed | 2020-12-10T14:01:39Z | 2020-12-10T17:43:13Z | https://github.com/InstaPy/InstaPy/issues/5951 | [
"duplicate"
] | ghost | 3 |
deepspeedai/DeepSpeed | deep-learning | 7,155 | no slot '4' specified on local host - trying to use 4 gpus on a node with 8 gpus while another user is using the other 4 gpus |
I am running a DeepSpeed training job on SLURM, requesting 4 GPUs on a shared node. SLURM correctly assigns my job GPUs 4,5,6,7, but DeepSpeed remaps them to 0,1,2,3, causing conflicts with another user’s job.
### Error Message:
```
ValueError: No slot '4' specified on host 'localhost'
```
DeepSpeed misinterprets the physical GPU IDs as local device indices, leading to an out-of-range error. `nvidia-smi` confirms that my job is running on the wrong GPUs (0-3) instead of the assigned GPUs (4-7), causing resource conflicts.
### SLURM Script Configuration:
```bash
#SBATCH --gres=gpu:h100:4
#SBATCH --mem=300G
#SBATCH --cpus-per-task=48
```
Before launching DeepSpeed, I set:
```bash
export CUDA_VISIBLE_DEVICES="$SLURM_JOB_GPUS"
echo "CUDA_VISIBLE_DEVICES: $CUDA_VISIBLE_DEVICES"
```
However, DeepSpeed still ignores this setting and remaps GPUs.
### Troubleshooting Attempts:
1. **Remapping `SLURM_JOB_GPUS` to a Zero‐Indexed List**
```bash
export CUDA_VISIBLE_DEVICES=$(seq -s, 0 $((num_gpus - 1)))
```
❌ *Failure:* DeepSpeed still assigned GPUs 0,1,2,3, conflicting with another user's job.
2. **Setting `CUDA_VISIBLE_DEVICES` Directly to `SLURM_JOB_GPUS`**
```bash
export CUDA_VISIBLE_DEVICES="$SLURM_JOB_GPUS"
```
❌ *Failure:* DeepSpeed detected `CUDA_VISIBLE_DEVICES=4,5,6,7` but ignored it, using `--include=localhost:0,1,2,3` instead.
3. **Explicitly Passing a Zero‐Indexed `--include` Flag**
```bash
deepspeed --include=localhost:0,1,2,3
```
❌ *Failure:* DeepSpeed still reassigned `CUDA_VISIBLE_DEVICES=0,1,2,3`.
4. **Unsetting `SLURM_JOB_GPUS` and Removing `--include`**
```bash
unset SLURM_JOB_GPUS
deepspeed --master_port $MASTER_PORT train.py --deepspeed ./scripts/config.json
```
❌ *Failure:* DeepSpeed again reassigned `CUDA_VISIBLE_DEVICES=0,1,2,3`.
### Final Observations:
- **DeepSpeed Overrides `CUDA_VISIBLE_DEVICES`**
Even when explicitly set, DeepSpeed overrides `CUDA_VISIBLE_DEVICES` if any of the following flags are used:
`--include`, `--exclude`, `--num_gpus`, `--num_nodes`.
- **DeepSpeed Reassigns GPUs Internally**
DeepSpeed assumes GPUs should always be indexed 0,1,2,3, regardless of SLURM's physical GPU assignment, causing a mismatch.
- **`nvidia-smi` Confirms GPU Conflict**
My job (`psy_llava`) is running on the same GPUs as another job (`python`), despite SLURM allocating different GPUs.
- **No Known Fix Has Worked**
Every attempted fix (including removing `--include`) has failed to prevent DeepSpeed from overriding `CUDA_VISIBLE_DEVICES`.
This suggests a deeper issue with DeepSpeed's GPU initialization in SLURM-managed environments.
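For completeness, the one remaining idea I have (an untested sketch; it assumes `torchrun` honors `CUDA_VISIBLE_DEVICES` where the `deepspeed` launcher re-maps it):

```bash
export CUDA_VISIBLE_DEVICES="$SLURM_JOB_GPUS"
torchrun --standalone --nproc_per_node=4 train.py --deepspeed ./scripts/config.json
```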
### Seeking Help:
- How can I force DeepSpeed to respect SLURM’s GPU allocation and prevent it from overriding `CUDA_VISIBLE_DEVICES`?
- Are there any known DeepSpeed settings to prevent it from remapping GPUs?
- Is there a recommended way to ensure that DeepSpeed only runs on the GPUs explicitly assigned by SLURM?
| open | 2025-03-19T23:45:14Z | 2025-03-19T23:45:14Z | https://github.com/deepspeedai/DeepSpeed/issues/7155 | [] | GonyRosenman | 0 |
hankcs/HanLP | nlp | 1,263 | jpype._jclass.OutOfMemoryError: Java heap space | <!--
注意事项和版本号必填,否则不回复。若希望尽快得到回复,请按模板认真填写,谢谢合作。
-->
## 注意事项
请确认下列注意事项:
* 我已仔细阅读下列文档,都没有找到答案:
- [首页文档](https://github.com/hankcs/HanLP)
- [wiki](https://github.com/hankcs/HanLP/wiki)
- [常见问题](https://github.com/hankcs/HanLP/wiki/FAQ)
* 我已经通过[Google](https://www.google.com/#newwindow=1&q=HanLP)和[issue区检索功能](https://github.com/hankcs/HanLP/issues)搜索了我的问题,也没有找到答案。
* 我明白开源社区是出于兴趣爱好聚集起来的自由社区,不承担任何责任或义务。我会礼貌发言,向每一个帮助我的人表示感谢。
* [ ] 我在此括号内输入x打钩,代表上述事项确认完毕。
## 版本号
<!-- 发行版请注明jar文件名去掉拓展名的部分;GitHub仓库版请注明master还是portable分支 -->
当前最新版本号是:
我使用的版本是:
<!--以上属于必填项,以下可自由发挥-->
## 我的问题
<!-- 请详细描述问题,越详细越可能得到解决 -->
## 复现问题
<!-- 你是如何操作导致产生问题的?比如修改了代码?修改了词典或模型?-->
### 步骤
1. 首先……
2. 然后……
3. 接着……
### 触发代码
```
public void testIssue1234() throws Exception
{
CustomDictionary.add("用户词语");
System.out.println(StandardTokenizer.segment("触发问题的句子"));
}
```
### 期望输出
<!-- 你希望输出什么样的正确结果?-->
```
期望输出
```
### 实际输出
<!-- HanLP实际输出了什么?产生了什么效果?错在哪里?-->
```
实际输出
```
## 其他信息
<!-- 任何可能有用的信息,包括截图、日志、配置文件、相关issue等等。-->
| closed | 2019-08-13T02:58:05Z | 2019-08-13T02:59:06Z | https://github.com/hankcs/HanLP/issues/1263 | [] | shihuaxing | 0 |
zihangdai/xlnet | tensorflow | 253 | Is this real factorization? | I find that in the `_local_perm` function, every token in `non_mask_tokens` seems to attend to all `non_mask_tokens`; for details, please refer to `perm_mask`. | open | 2019-12-19T04:25:15Z | 2019-12-19T04:25:15Z | https://github.com/zihangdai/xlnet/issues/253 | [] | fangwch | 0 |
kochlisGit/ProphitBet-Soccer-Bets-Predictor | seaborn | 22 | 71360 illegal hardware instruction python main.py | Macbook M1 2022
everything required is installed.
```
(base) ➜  ProphitBet-Soccer-Bets-Predictor git:(main) ✗ python main.py
[1]  71360 illegal hardware instruction  python main.py
```
| open | 2023-04-20T12:13:24Z | 2024-01-07T00:17:44Z | https://github.com/kochlisGit/ProphitBet-Soccer-Bets-Predictor/issues/22 | [] | KaOCode | 3 |
httpie/cli | api | 1,047 | Feature request: load whole request from a file | It's possible that HTTPie has such a feature already, but I was unsuccessful in trying to find it.
## What enhancement would you like to see?
To have a possibility to load the whole request (URL, HTTP method, headers, payload, etc.) from a file.
## What problem does it solve?
The use case is to have a library of requests stored as a hierarchy of directories and files. It should be platform-independent, so it works with HTTPie on any platform it runs on.
**What is it good for?**
When working on an API project, I want to have a library of API requests consisting of prepared complete request files that I can simply execute using HTTPie, so I don't have to look for URLs, headers, parameters, payloads, etc. in documentation or even in a source code every time I need to make a request.
Such library for a single project may look like the following:
project
|
+ users
| |
| + create.http
| + update.http
| + delete.http
| + list.http
|
+ products
|
...
Currently, I'm always creating such a library for API request payloads, storing them in files. But that's not enough to make a request. I still have to look what URL, headers, and other parameters a specific API endpoint or request requires. I can have a separate library of HTTPie commands for each payload or search them in the shell history, but it would be great to have a single file containing everything needed to make a valid API request, so the only information I need to provide to HTTPie is that file:
http --request project/users/list.http
HTTPie is currently even able to create those request files (see below), I just didn't find a way to load them.
## Provide any additional information, screenshots, or code examples below
### Possible solution using shell script
An easy solution would be to store request commands as shell scripts, for example `project/users/create.sh` might contain the following:
http [parameters] [method] <url> < ./create.json
I would have two files for each request – a shell script and a payload file (if it needs one).
The problem with this solution is that it's platform-dependent. Those shell scripts would work with bash only.
### Proposed solution
I have managed to find a feature that is basically doing the first half of what I need. It's the [Offline mode](https://httpie.io/docs#offline-mode) and the `--offline` parameter. The following command saves the whole request including the payload into the file `request.http` (without actually sending any request):
http --offline [parameters] [method] <url> < payload.json > request.http
The file it produces contains the whole RAW HTTP request as is being sent, everything needed to make a request:
POST /api/endpoint HTTP/1.1
User-Agent: HTTPie/2.3.0
Accept-Encoding: gzip, deflate
Accept: application/json, */*;q=0.5
Connection: keep-alive
Content-Type: application/json
Content-Length: 8
Host: test.dev
payload
It's possible to send such request using for example netcat:
nc <domain> <port> < request.http
But I didn't find a way to do the same with HTTPie. I have tried the analogy to the `nc` command (`http <domain> < request.http`), but HTTPie considers the file to be just a payload.
I would like to have a possibility similar to the following:
http --request request.http
**How I would expect it to work:**
1. If a request file is provided, HTTPie reads all request parameters (URL, method, headers, payload, etc.) from there
2. If any parameter is provided on the command line, it overwrites the value from the request file
**Current issues:**
1. The URL parameter is mandatory. It would be nice to read it from the file also, but if it would be against the HTTPie's design, it wouldn't be a big issue. I can provide only the domain part in the URL parameter and have the whole URL path part loaded from the file.
2. HTTPie can save the request to a file, but it cannot read it back (or I didn't find a way to do that; a stopgap wrapper sketch follows below).
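In the meantime, a rough cross-platform workaround sketch (assuming Python 3 and the `http` binary on PATH; the parsing below is deliberately simplified and hypothetical):

```python
#!/usr/bin/env python3
"""Replay a raw request.http file through HTTPie (simplified sketch)."""
import subprocess
import sys

def replay(path, scheme="http"):
    with open(path, "r", newline="") as f:
        raw = f.read().replace("\r\n", "\n")
    head, _, body = raw.partition("\n\n")
    request_line, *header_lines = head.splitlines()
    method, target, _ = request_line.split(" ", 2)   # "POST /api/endpoint HTTP/1.1"
    headers = dict(line.split(": ", 1) for line in header_lines)
    url = f"{scheme}://{headers['Host']}{target}"
    cmd = ["http", method, url]
    cmd += [f"{k}:{v}" for k, v in headers.items() if k != "Host"]
    subprocess.run(cmd, input=body.encode(), check=True)

if __name__ == "__main__":
    replay(sys.argv[1])
```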
### Alternative solution
Everything is the same as in the proposed solution, but instead of RAW HTTP request data, the request file contains HTTPie parameters in some form (e.g. similar to default options in the `config.json` file). | closed | 2021-03-27T14:14:39Z | 2021-05-05T11:10:24Z | https://github.com/httpie/cli/issues/1047 | [
"duplicate"
] | ferenczy | 3 |
mwaskom/seaborn | data-science | 3645 | Docs: Reference to root package missing in objects.inv file | As diagnosed [here at pydoctor](https://github.com/twisted/pydoctor/issues/763#issuecomment-1971662348), it seems that the objects.inv file from seaborn is missing a so-called "reference to the root package".
This causes a problem: using pydoctor as the API doc generator, I am not able to reference "seaborn" as a package. I can reference seaborn objects (functions, methods, classes) without problems, but not the package itself.
"question",
"docs"
] | buhtz | 6 |
deepset-ai/haystack | machine-learning | 8,571 | DocumentCleaner does not pass along dataframe | **Describe the bug**
DocumentCleaner does not pass along the `dataframe` field when it rebuilds the cleaned documents:
https://github.com/deepset-ai/haystack/blob/main/haystack/components/preprocessors/document_cleaner.py
```
cleaned_docs.append(Document(content=text, meta=deepcopy(doc.meta), id=doc.id if self.keep_id else ""))
```
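A minimal fix sketch (assumption: the `Document` constructor here still accepts a `dataframe` kwarg, mirroring its other fields):

```python
cleaned_docs.append(
    Document(
        content=text,
        dataframe=doc.dataframe,  # forward the dataframe instead of dropping it
        meta=deepcopy(doc.meta),
        id=doc.id if self.keep_id else "",
    )
)
```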
| closed | 2024-11-22T12:38:07Z | 2024-11-25T12:09:00Z | https://github.com/deepset-ai/haystack/issues/8571 | [
"P2"
] | tsoernes | 0 |
oegedijk/explainerdashboard | dash | 43 | logins with one username and one password | Checking https://explainerdashboard.readthedocs.io/en/latest/deployment.html?highlight=auth#setting-logins-and-password
I was testing with one login and one password.
`logins=["U", "P"]` doesn't work (see below) but `logins=[["U", "P"]]` does.
I don't suppose there is a `login` kwarg? Or could it handle a list of length 2? It seems this comes from dash_auth, so I could upstream it there.
```
File "src/dashboard_cel.py", line 24, in <module>
logins=["Celebrity", "Beyond"],
File "C:\Users\131416\AppData\Local\Continuum\anaconda3\envs\e\lib\site-packages\explainerdashboard\dashboards.py", line 369, in __init__
self.auth = dash_auth.BasicAuth(self.app, logins)
File "C:\Users\131416\AppData\Local\Continuum\anaconda3\envs\e\lib\site-packages\dash_auth\basic_auth.py", line 11, in __init__
else {k: v for k, v in username_password_list}
File "C:\Users\131416\AppData\Local\Continuum\anaconda3\envs\e\lib\site-packages\dash_auth\basic_auth.py", line 11, in <dictcomp>
else {k: v for k, v in username_password_list}
ValueError: too many values to unpack (expected 2)
```
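A small normalization sketch that would make both forms work (assumption: placed wherever `ExplainerDashboard` hands `logins` to `dash_auth.BasicAuth`):

```python
if (
    isinstance(logins, (list, tuple))
    and len(logins) == 2
    and all(isinstance(x, str) for x in logins)
):
    logins = [list(logins)]  # treat ["U", "P"] as a single credential pair
```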
| closed | 2020-12-08T22:17:21Z | 2020-12-10T10:57:47Z | https://github.com/oegedijk/explainerdashboard/issues/43 | [] | raybellwaves | 3 |
unionai-oss/pandera | pandas | 1,675 | Example on how to use Decimal as dtype for a column | #### Question about pandera
I want to implement a column that uses decimal.Decimal with a specific precision. However, I cannot find a clear example of how to use this in pandera.
Is it possible to provide a usage example?
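Something like this is what I have in mind (hypothetical usage; I don't know the real API, so the dtype name and arguments are assumptions):

```python
import pandera as pa

schema = pa.DataFrameSchema({
    # assumed dtype: a Decimal with a given precision and scale
    "price": pa.Column(pa.dtypes.Decimal(precision=10, scale=2)),
})
```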
| open | 2024-06-07T15:46:35Z | 2024-10-11T06:58:07Z | https://github.com/unionai-oss/pandera/issues/1675 | [
"question"
] | samdeleu | 1 |
Zeyi-Lin/HivisionIDPhotos | fastapi | 157 | must be share=True, where is it? | ValueError: When localhost is not accessible, a shareable link must be created. Please set share=True or check your proxy settings to allow access to localhost. | closed | 2024-09-21T01:38:53Z | 2024-09-22T08:07:01Z | https://github.com/Zeyi-Lin/HivisionIDPhotos/issues/157 | [] | luckc4 | 3 |
piccolo-orm/piccolo | fastapi | 759 | Cannot perform operation: another operation is in progress | ### Description
I am executing a data migration process and encountering an error while calling asyncio.gather with a list of contracts to persist. The error message I am receiving is Cannot perform operation: another operation is in progress.
### Code Snippet
```python
async def save(contract):
    try:
        await Contract(
            ...
        ).save()
    except Exception as ex:
        print(f"Unable to save {contract=} due to {ex}")


async def persist_contracts():
    ...
    asyncio.gather(
        *[save(contract=contract) for _, contract in contracts.iterrows()]
    )
```
### Steps to Reproduce
- Execute the data migration process.
- Call the persist_contracts function.
- Observe the error message Cannot perform operation: another operation is in progress.
### Expected Result
The contracts should be successfully persisted.
### Actual Result
The error message Cannot perform operation: another operation is in progress is raised.
### Environment
Python version: 3.11.1
Piccolo version: 0.105.0
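### Possible Workaround
A workaround sketch I'm evaluating (assumption: Piccolo's `PostgresEngine` connection-pool API, so each concurrent query checks out its own connection instead of sharing one):
```python
import asyncio
from piccolo.engine import engine_finder

async def persist_contracts(contracts):
    engine = engine_finder()
    await engine.start_connection_pool()
    try:
        await asyncio.gather(
            *[save(contract=contract) for _, contract in contracts.iterrows()]
        )
    finally:
        await engine.close_connection_pool()
```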
| open | 2023-02-01T12:32:44Z | 2023-02-21T13:49:13Z | https://github.com/piccolo-orm/piccolo/issues/759 | [] | devsarvesh92 | 5 |
redis/redis-om-python | pydantic | 569 | How to test a JSONModel ? | I have a similar problem as this at : https://stackoverflow.com/questions/76578825/how-to-test-a-jsonmodel-in-a-unittest
How does one test a simple JsonModel **without** a connection to a Redis instance?
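The closest thing I've found so far is this untested sketch (assumptions: fakeredis's optional RedisJSON emulation via `fakeredis[json]`, and that redis-om honors a swapped `Meta.database`):

```python
import fakeredis
from redis_om import Field, JsonModel

class Person(JsonModel):
    name: str = Field(index=True)

# hypothetical in-memory swap: no real Redis server involved
Person.Meta.database = fakeredis.FakeRedis()
Person(name="Ada").save()
```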
| closed | 2023-10-03T07:58:31Z | 2024-05-08T13:19:02Z | https://github.com/redis/redis-om-python/issues/569 | [] | efratmiy | 1 |
seleniumbase/SeleniumBase | pytest | 2,717 | Request headers not changed if UC mode and Cloudflare | Hi!
I want to use a different proxy for each request, so I developed a plugin for mitmproxy which takes a custom header `"X-Upstream-Proxy": "http://user:pass@ip:port"`.
If I open a simple site without Cloudflare, my custom header is used; otherwise the request goes to my proxy, but without the custom header.
Code:
```
from seleniumbase import Driver

driver = Driver(
    uc=True,
    headless=True,
    proxy="http://127.0.0.1:8080",
    multi_proxy=True,
    binary_location="c:\cloudparser\Chrome\chrome.exe",
    uc_cdp_events=True)

try:
    driver.execute_cdp_cmd("Network.enable", {})
    headers = {'headers': {"X-Upstream-Proxy": "http://user:pass@194.28.210.32:9940"}}
    driver.execute_cdp_cmd('Network.setExtraHTTPHeaders', headers)
    driver.sleep(3)
    driver.get('https://myip.ru/')
    driver.sleep(3)
    driver.save_screenshot('screen.png')
finally:
    driver.quit()
```
With `driver.get('https://myip.ru/')` I got the **proxy ip**

But for `driver.get('https://radar.cloudflare.com/ip')` the result is **my local ip**

And when I debug my mitmproxy plugin, I don't see my custom header.
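One thing I plan to try (assumption: UC Mode's reconnects reset previously sent CDP state, so the header must be re-applied after every reconnect):

```python
def apply_upstream_proxy(driver, proxy_url):
    # re-send the CDP commands so the header survives a UC reconnect
    driver.execute_cdp_cmd("Network.enable", {})
    driver.execute_cdp_cmd(
        "Network.setExtraHTTPHeaders",
        {"headers": {"X-Upstream-Proxy": proxy_url}},
    )

apply_upstream_proxy(driver, "http://user:pass@194.28.210.32:9940")
driver.get("https://radar.cloudflare.com/ip")
```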
Any idea to resolve it? | closed | 2024-04-25T07:54:19Z | 2024-04-26T12:30:30Z | https://github.com/seleniumbase/SeleniumBase/issues/2717 | [
"external",
"UC Mode / CDP Mode"
] | ivanovevgeny | 3 |
dsdanielpark/Bard-API | api | 92 | Error: Temporarily unavailable due to traffic or an error in cookie values. | I'm getting this error:
```
Response Error: b')]}\'\n\n38\n[["wrb.fr",null,null,null,null,[9]]]\n56\n[["di",70],["af.httprm",70,"-XXXXXXXXXXXXXXXXXXXX",29]]\n25\n[["e",4,null,null,131]]\n'.
Temporarily unavailable due to traffic or an error in cookie values. Please double-check the cookie values and verify your network environment.
```
This is my code:
```
from bardapi import Bard
token = '************************************************'
bard = Bard(token=token)
print(bard.get_answer("Hi Bard!")['content'])
```
I am in a country where Bard is not available (I'm in the EU), so I could only get the token using Opera's VPN; but Opera's VPN only covers the browser, not the whole system, so the Python request does NOT get covered by the VPN. If this is the cause of the problem, are there any workarounds? | closed | 2023-07-04T17:04:58Z | 2023-07-05T11:17:33Z | https://github.com/dsdanielpark/Bard-API/issues/92 | [] | tiagorangel1 | 2 |
aio-libs-abandoned/aioredis-py | asyncio | 1,053 | pubsub: subscription doesn't get registered when using asyncio.create_task() instead of await | Hi,
I'm not sure if this is really an issue or user error, so apologies upfront for any undue trouble.
In this code:
```
#!/usr/bin/env python3

import asyncio
import aioredis

async def main():
    redis = aioredis.from_url("redis://localhost")
    async with redis.pubsub() as pubsub:
        asyncio.create_task(pubsub.subscribe(""))  # vs. await pubsub.subscribe("")
        await pubsub.subscribe("xxx")
        while True:
            message = await pubsub.get_message(timeout=100)
            print(message)

asyncio.run(main())
```
If, instead of `await`ing the `pubsub.subscribe("")` call, we `asyncio.create_task()` it, sometimes the subscription somehow does not get registered in the pubsub object (as reflected by the missing subscription confirmation message).
Since this happens randomly I presume there's some missing synchronization point either in the above code, or inside aioredis itself.
The actual use case I'm trying to implement is akin to:
```
# register a subscription _before_ starting the evloop:
asyncio.get_event_loop().create_task(pubsub.subscribe(some_channel))
...
# start the event loop
asyncio.get_event_loop().run_until_complete(something)
```
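A compromise I also sketched (assumption: keeping the task handle and awaiting it once inside the running loop is a valid synchronization point):

```python
pending = asyncio.get_event_loop().create_task(pubsub.subscribe(some_channel))

async def something():
    await pending  # ensure the subscription is registered before reading
    while True:
        print(await pubsub.get_message(timeout=100))
```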
As I said, I'm not sure if this use case is supported, so please point me in the right direction.
aioredis version 2.0.0b1. | open | 2021-07-14T17:39:14Z | 2021-08-02T20:50:07Z | https://github.com/aio-libs-abandoned/aioredis-py/issues/1053 | [
"enhancement"
] | alecov | 15 |
encode/databases | asyncio | 302 | Cannot use boolean in case function | Hi,
I cannot use True or False in the `case` function, as shown below:
```
case([(table.c.item_id == None, False)], else_= True).label('saved')
```
it is showing me this error
```
File "asyncpg\protocol\protocol.pyx", line 181, in bind_execute
File "asyncpg\protocol\prepared_stmt.pyx", line 171, in asyncpg.protocol.protocol.PreparedStatementState._encode_bind_msg
asyncpg.exceptions.DataError: invalid input for query argument $3: False (expected str, got bool)
```
I tried it with plain SQLAlchemy and it correctly returns True/False values.
Is this an issue with the Postgres driver or with the databases library?
I tried replacing True/False with 'True'/'False' and that works.
Is there another way to return proper boolean values?
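For anyone else hitting this, a sketch I'd like feedback on (assumption: SQLAlchemy's typed boolean constants give asyncpg a typed expression instead of a bare Python bool parameter):

```python
from sqlalchemy import case
from sqlalchemy.sql.expression import true, false

case([(table.c.item_id.is_(None), false())], else_=true()).label('saved')
```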
Kind regards | closed | 2021-03-24T09:32:25Z | 2021-03-24T09:45:54Z | https://github.com/encode/databases/issues/302 | [] | psowa001 | 0 |
cobrateam/splinter | automation | 747 | splinter.driver.webdriver.WebDriverElement.screenshot: "'WebDriverElement' object has no attribute "recover_screen"" | When using the screenshot function on splinter.driver.webdriver.WebDriverElement, I get the following error:
'''
    def screenshot(self, name='', suffix=".png", full=False):
        name = name or ''
        (fd, filename) = tempfile.mkstemp(prefix=name, suffix=suffix)
        # don't hold the file
        os.close(fd)
        if full:
            self.parent.full_screen()
        target = self.screenshot_as_png()
>       self.parent.recover_screen()
E       AttributeError: 'WebDriverElement' object has no attribute "recover_screen"
'''
I believe it's because of the refactored `_find` function in .../Webdriver/init.py. When `WebDriverElement` calls the `_find` function, it instantiates the object using `self` on line 254:
    def _find(self, finder, selector):
        [...]
        if elements:
            elem_list = [self.element_class(element, self) for element in elements]
        return elem_list
and `self`, of type `WebDriverElement`, is passed in to become the parent of that element. That's when the error occurs, because the screenshot function expects `self.parent` to be of type `WebDriver`, which has the `recover_screen` function.
This was working before because the parent was always of type `WebDriver`. This can be seen in the find_by methods of `WebDriverElement`. For example, looking at older code (v11), you can see that `WebDriverElement.find_by_xpath` passes the parent instead of `self` when instantiating:
    def find_by_xpath(self, selector, original_find=None, original_query=None):
        elements = ElementList(self._element.find_elements_by_xpath(selector))
        return ElementList(
            [self.__class__(element, self.parent) for element in elements],
            find_by="xpath",
            query=selector,
        )
A quick solution is to pass the parent instead of `self` in the `_find` function when `self` is of type `WebDriverElement`, but I don't know the code well enough and it may cause more issues; a sketch follows.
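For reference, a sketch of what the first solution could look like (assumption: `WebDriverElement` always stores the owning `WebDriver` in `self.parent`):

```python
def _find(self, finder, selector):
    ...
    if elements:
        # hand the WebDriver down instead of a WebDriverElement, so that
        # element.parent keeps the driver-level API (recover_screen, ...)
        container = self.parent if isinstance(self, WebDriverElement) else self
        elem_list = [self.element_class(element, container) for element in elements]
    return elem_list
```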
Another possible solution is to implement the missing functions that the screenshot code expects, such as `recover_screen` and `full_screen`, on the `WebDriverElement` class.
Let me know if either of the solutions above is good and I'll create a pull request for it. | closed | 2019-12-18T06:20:50Z | 2020-04-01T19:13:53Z | https://github.com/cobrateam/splinter/issues/747 | [
"bug"
] | DH-MP | 2 |
onnx/onnx | deep-learning | 5,784 | [Feature request] onnx.printer / parser support ID with '/', ':', etc | ### System information
_No response_
### What is the problem that this feature solves?
Currently, onnx.printer prints IDs unquoted, like:
```
<
ir_version: 7,
opset_import: [ "" : 10 ]
>
agraph (float[N, 128] X, float[128, 10] W, float[10] B) => (float[N, 10] C)
{
Foo = MatMul(X, W)
Bar = Add(Foo, B)
C = Softmax(Bar)
}
```
It is fine if the ID contains only `[a-zA-Z_]`; however, a lot of models have special characters in their node IDs. For example, LLaMA has a node named `/model/layers.0/self_attn/Mul_3_output_0`, which contains `.` and `/`, and some other ops even contain `:`. I want to enhance the printer/parser, but I am not sure which spec is better:
1. Single-quoted IDs: any char except `'` can be used in the name. The printed ID is quoted, and the parser respects that too.
2. Don't quote; just treat `/`, `:`, and `.` like `_`. But I am not sure whether it would be confused with other syntax. (Option 1 is illustrated below.)
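For illustration, spec 1 would render an example like this (hypothetical syntax):
```
agraph (float[N] X) => (float[N] Y)
{
  '/model/layers.0/self_attn/Mul_3_output_0' = Relu(X)
  Y = Softmax('/model/layers.0/self_attn/Mul_3_output_0')
}
```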
Does anyone have any suggestions? Thank you.
### Alternatives considered
_No response_
### Describe the feature
Quoted the ID or extend the acceptable char in the parser.
### Will this influence the current api (Y/N)?
_No response_
### Feature Area
_No response_
### Are you willing to contribute it (Y/N)
Yes
### Notes
_No response_ | closed | 2023-11-30T21:37:27Z | 2024-12-23T06:45:00Z | https://github.com/onnx/onnx/issues/5784 | [
"topic: enhancement",
"stale"
] | yocox | 1 |
pytest-dev/pytest-xdist | pytest | 979 | Multiple "random" looking like failures in colour-science unit tests since 3.4.0 release | Hello,
It took me a few days to pinpoint the [source](https://github.com/colour-science/colour/actions/runs/7028449640/job/19124427128) [of](https://github.com/colour-science/colour/actions/runs/7027933293) [our](https://github.com/colour-science/colour/actions/runs/7009452236) [broken](https://github.com/colour-science/colour/actions/runs/7001719291) builds and it comes down to the 3.4.0 release 3 weeks ago. It seems like this release (and 3.5.0) is now causing issues with our caches as if I disable them the [unit tests are passing](https://github.com/colour-science/colour/actions/runs/7015999647/job/19127292945).
Looking at the [diff between](https://github.com/pytest-dev/pytest-xdist/compare/v3.3.1...v3.4.0) 3.3.1 which is working and 3.4.0 which is not, the only change that I think could cause this is that one: https://github.com/pytest-dev/pytest-xdist/commit/230ba6ad1057574c9f3d42a97f890788cd9ec6c3
To confirm the finding, I have banned the 3.4.0 and 3.5.0 and our builds are now passing again: https://github.com/colour-science/colour/actions/runs/7029639824/job/19127625456
Keen to hear your thoughts on this one.
Cheers,
Thomas | closed | 2023-11-29T08:16:36Z | 2024-01-21T01:19:38Z | https://github.com/pytest-dev/pytest-xdist/issues/979 | [] | KelSolaar | 7 |
deepset-ai/haystack | machine-learning | 8,902 | Prepare for end-of-life of Haystack 1.26 on March 11 | As we communicate on our old documentation pages, Haystack 1.26 will reach its end-of-life on March 11, 2025.
We should hide all documentation pages about Haystack version 1.x then and plan other related tasks, such as:
Start now
- [x] #8922
- [x] #8923
- [x] #8924
- [x] #8925
- [x] #8927
- [x] #8932
- [x] #8930
- [x] #8929
- [x] #9013
- [x] #8931
- [ ] #8920
Start after March 11
- [ ] #8921
- [x] #8933
- [x] #8934
- [x] #8928
- [ ] #8926 | open | 2025-02-21T13:50:35Z | 2025-03-17T09:24:31Z | https://github.com/deepset-ai/haystack/issues/8902 | [
"type:documentation",
"P2"
] | julian-risch | 3 |
PokeAPI/pokeapi | api | 192 | Evolution gender docs | The docs claim that EvolutionDetail's `gender` field is a named resource pointing to the gender object, but in reality it's just an int. This is a bug either in the code or in the docs. I think it should be changed to a named resource for the gender, but if not, then update the docs.
See: `/api/v2/evolution-chain/213/`
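For illustration, the evolution details currently come back with something roughly like this (abbreviated from memory, so treat the exact shape as an assumption):
```
"evolution_details": [{ "gender": 1, ... }]
```
whereas a named resource would look like `{"name": "female", "url": ".../api/v2/gender/1/"}`.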
| closed | 2016-05-25T00:01:55Z | 2017-06-12T12:52:02Z | https://github.com/PokeAPI/pokeapi/issues/192 | [] | sargunv | 2 |
QingdaoU/OnlineJudge | django | 259 | SPJ problem compile error (json) | 

| closed | 2019-08-30T12:53:32Z | 2020-11-08T10:28:31Z | https://github.com/QingdaoU/OnlineJudge/issues/259 | [] | Akvicor | 3 |
google-research/bert | nlp | 1,362 | Ispy | open | 2022-08-08T15:10:36Z | 2022-08-08T15:10:36Z | https://github.com/google-research/bert/issues/1362 | [] | Smmfh223 | 0 |
|
sinaptik-ai/pandas-ai | pandas | 1,407 | Additional guidance on configuring the pandasai.json file in the LLM setup process. | Path: /llms | closed | 2024-10-24T08:23:23Z | 2024-12-16T11:21:25Z | https://github.com/sinaptik-ai/pandas-ai/issues/1407 | [
"documentation"
] | Muhammad-Adam1 | 10 |
onnx/onnx | machine-learning | 5,955 | Convert format from NCHW to NHWC in onnx itself without adding extra transpose layer | ### System information
Latest
### What is the problem that this feature solves?
My accelerator hardware only supports the NHWC format. My SDK can take a tflite/onnx model as input and run it on the accelerator, but the issue is the format, i.e. NHWC. If this problem gets solved, I will be able to run all kinds of models directly on my hardware via ONNX.
### Alternatives considered
pytorch --> onnx --> openvino --> tflite, but this whole conversion chain is not very good. (A transpose-wrapping workaround is sketched below.)
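For context, the workaround I use today when staying in ONNX is wrapping the graph I/O with Transpose nodes; a rough sketch with `onnx.helper` (assumes a single 4-D input named in `graph.input[0]`; the file paths are hypothetical):

```python
import onnx
from onnx import helper, TensorProto

model = onnx.load("model_nchw.onnx")          # hypothetical path
graph = model.graph

old_in = graph.input[0]                        # original NCHW input
nhwc_in = helper.make_tensor_value_info("X_nhwc", TensorProto.FLOAT, None)
to_nchw = helper.make_node(
    "Transpose", inputs=["X_nhwc"], outputs=[old_in.name], perm=[0, 3, 1, 2]
)

graph.node.insert(0, to_nchw)                  # feed the old input from a transpose
graph.input.remove(old_in)
graph.input.insert(0, nhwc_in)
onnx.save(model, "model_nhwc_wrapped.onnx")
```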
### Describe the feature
Make ONNX the common layer that can be run on my hardware.
### Will this influence the current api (Y/N)?
not sure
### Feature Area
not sure
### Are you willing to contribute it (Y/N)
No
### Notes
_No response_ | open | 2024-02-23T04:08:15Z | 2024-02-23T04:08:15Z | https://github.com/onnx/onnx/issues/5955 | [
"topic: enhancement"
] | abhishek27m1992github | 0 |
dynaconf/dynaconf | fastapi | 839 | [bug][Documentation] Exporting: write() got an unexpected keyword argument 'merge' | **Describe the bug**
Following the example in the documentation for [exporting](https://www.dynaconf.com/advanced/#exporting) Dynaconf data to a file raises an exception due to the `merge` argument:
**To Reproduce**
~~~Python
loaders.write("/a/b/c", DynaBox(config).to_dict(), merge=False)
~~~
**Expected behavior**
The file should have been written
**Actual Behavior**
`TypeError: write() got an unexpected keyword argument 'merge'`
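For reference, dropping the kwarg makes the documented example run for me (a sketch; any merging behavior would then have to come from the loader's own defaults, if supported at all):
~~~python
loaders.write("/a/b/c", DynaBox(config).to_dict())
~~~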
Just a quick documentation fix,
thanks !
| closed | 2022-11-28T20:21:10Z | 2023-07-13T19:11:04Z | https://github.com/dynaconf/dynaconf/issues/839 | [
"bug"
] | Wenzel | 0 |
sinaptik-ai/pandas-ai | pandas | 573 | OpenAI should automatically switch to the model with more context | ### 🚀 The feature
Currently, when users encounter errors due to a context window that exceeds the model's capacity, they need to manually adjust their model choice to a larger version, which can be cumbersome and may disrupt the workflow. This feature would automate the process, ensuring that users get the best possible performance without the need for manual intervention.
## Proposed Solution
The proposed solution is to implement an automatic context model switching mechanism that checks the context window size and switches to a larger model if needed. Here's how it could work:
1. When a user sends a request with a specific model (e.g., "gpt-4"), the system checks the context window size.
2. If the context window size exceeds the capacity of the selected model (e.g., "gpt-4"), the system automatically switches to the corresponding larger model (e.g., "gpt-4-32k").
3. The system processes the request using the larger model to ensure the context fits within its capacity.
4. The user receives the response seamlessly, without having to manually change the model selection (see the sketch below).
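A minimal sketch of the switching logic described above (assumptions: `tiktoken` for token counting and a hypothetical capacity table; real limits depend on the provider):
```python
import tiktoken

# hypothetical capacity table; real limits depend on the provider
CONTEXT_LIMITS = {"gpt-4": 8192, "gpt-4-32k": 32768}
UPGRADES = {"gpt-4": "gpt-4-32k"}

def pick_model(model: str, prompt: str) -> str:
    enc = tiktoken.encoding_for_model(model)
    n_tokens = len(enc.encode(prompt))
    while n_tokens > CONTEXT_LIMITS[model] and model in UPGRADES:
        model = UPGRADES[model]  # fall forward to the larger-context variant
    return model
```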
## Implementation Considerations
To implement this feature, OpenAI may need to develop a mechanism for context size detection and automatic model switching within the API infrastructure.
## Example Scenario
A user sends a request to the API using "gpt-4" but accidentally provides a very long context that exceeds the model's capacity. Instead of receiving an error, the system automatically switches to "gpt-4-32k" to accommodate the larger context, and the user receives a timely and accurate response.
### Motivation, pitch
1. Improved user experience: Users won't have to manually switch models when encountering context-related errors, making the interaction with OpenAI models more seamless.
2. Error prevention: Automatic context model switching can help prevent errors caused by users inadvertently exceeding the context window of their selected model.
3. Efficient use of resources: By automatically selecting the appropriate model based on context size, the system can make efficient use of computational resources.
### Alternatives
_No response_
### Additional context
This feature could be particularly useful for applications that involve dynamic and variable-length input contexts, such as chatbots, language translation, and content generation. | closed | 2023-09-18T22:26:15Z | 2024-06-01T00:18:12Z | https://github.com/sinaptik-ai/pandas-ai/issues/573 | [
"enhancement",
"good first issue"
] | gventuri | 8 |
HumanSignal/labelImg | deep-learning | 462 | impossible to save : float division by zero | Hi,
It does not want to save labels. I have tried several versions; nothing works.
How can this be fixed?
I only have two very large images: high-precision satellite photos (15k×15k). I have to place more than 1000 labels per photo.
```
Img: C:\Users\wkn\Documents\IA\ortoIA\images\png5ca3a3d5e5635f000690af6b.png -> Its txt: C:\Users\wkn\Documents\IA\ortoIA\labels\png5ca3a3d5e5635f000690af6b.txt
Traceback (most recent call last):
File "<string>", line 1291, in saveFile
File "<string>", line 1320, in _saveFile
File "<string>", line 808, in saveLabels
File "Z:\home\darrenl\tmp\labelImg\build-tools\build\labelImg\out00-PYZ.pyz\libs.labelFile", line 83, in saveYoloFormat
File "Z:\home\darrenl\tmp\labelImg\build-tools\build\labelImg\out00-PYZ.pyz\libs.yolo_io", line 64, in save
File "Z:\home\darrenl\tmp\labelImg\build-tools\build\labelImg\out00-PYZ.pyz\libs.yolo_io", line 36, in BndBox2YoloLine
ZeroDivisionError: float division by zero
```
Thx !
- **Windows 10, 64-bit**
- **Python 3.7**
| open | 2019-04-17T23:13:49Z | 2020-04-30T14:27:54Z | https://github.com/HumanSignal/labelImg/issues/462 | [] | lucydjo | 3 |
ultralytics/ultralytics | deep-learning | 19,231 | Calculating AP and mAP for yolov8 model with some postprocessing techniques | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Hi,
I have a YOLOv8 model with postprocessing techniques like TTA.
I want to calculate the AP and mAP metrics for it, but when I use the code below, I get a lower mAP than with the plain YOLOv8 model, even though postprocessing techniques like TTA should give a higher mAP than the plain model.
When I analyze the results of the model with TTA, I see good IoU results.
I can't understand why. I thought that maybe YOLOv8's algorithm for calculating mAP is different.
Can you help me?
```
import numpy as np

def compute_iou(box1, box2):
    """
    Compute Intersection over Union (IoU) between two bounding boxes.
    Box format: [x_min, y_min, x_max, y_max]
    """
    x_min_inter = max(box1[0], box2[0])
    y_min_inter = max(box1[1], box2[1])
    x_max_inter = min(box1[2], box2[2])
    y_max_inter = min(box1[3], box2[3])

    inter_width = max(0, x_max_inter - x_min_inter)
    inter_height = max(0, y_max_inter - y_min_inter)
    intersection = inter_width * inter_height

    box1_area = (box1[2] - box1[0]) * (box1[3] - box1[1])
    box2_area = (box2[2] - box2[0]) * (box2[3] - box2[1])
    union = box1_area + box2_area - intersection

    iou = intersection / union if union > 0 else 0
    print("IOU", iou)
    return iou

def calculate_ap(true_boxes, pred_boxes, confidence_scores, iou_threshold=0.3):
    """
    Calculate Average Precision (AP) for object detection.

    Parameters:
    - true_boxes: List of ground truth boxes in [x_min, y_min, x_max, y_max] format.
    - pred_boxes: List of predicted boxes in [x_min, y_min, x_max, y_max] format.
    - confidence_scores: List of confidence scores corresponding to pred_boxes.
    - iou_threshold: IoU threshold for a prediction to be considered a true positive.

    Returns:
    - AP (Average Precision) value.
    """
    # Raise an error if the number of prediction boxes does not match the number of confidence scores
    if len(pred_boxes) != len(confidence_scores):
        raise ValueError("Number of prediction boxes and confidence scores must be equal.")

    # Sort the prediction boxes by confidence score (highest to lowest)
    pred_data = sorted(zip(pred_boxes, confidence_scores), key=lambda x: x[1], reverse=True)
    pred_boxes = [x[0] for x in pred_data]          # sorted prediction boxes
    confidence_scores = [x[1] for x in pred_data]   # sorted confidence scores

    tp = []  # True Positives
    fp = []  # False Positives
    matched_gt = set()  # keep track of the ground-truth boxes that have been matched

    for pred in pred_boxes:
        pred_box = pred  # coordinates [x_min, y_min, x_max, y_max]
        max_iou = 0
        matched_gt_idx = -1
        for i, gt_box in enumerate(true_boxes):
            iou = compute_iou(pred_box, gt_box)
            if iou > max_iou:
                max_iou = iou
                matched_gt_idx = i
        # TP if the IoU is above the threshold and the ground truth has not been matched before, otherwise FP
        if max_iou >= iou_threshold and matched_gt_idx not in matched_gt:
            tp.append(1)
            fp.append(0)
            matched_gt.add(matched_gt_idx)
        else:
            tp.append(0)
            fp.append(1)

    # Compute cumulative precision and recall
    tp = np.cumsum(tp)
    fp = np.cumsum(fp)
    total_true = len(true_boxes)
    precisions = tp / (tp + fp)
    recalls = tp / total_true

    # Compute AP (integration via the trapezoidal rule)
    ap = np.trapz(precisions, recalls)
    return ap
```
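For what it's worth, one likely explanation (an assumption on my part): Ultralytics reports COCO-style mAP@0.5:0.95 with 101-point interpolated precision per class, while the code above uses a single IoU threshold (0.3) and raw trapezoidal integration, so the two numbers aren't directly comparable. A sketch of 101-point interpolation for comparison:
```python
import numpy as np

def interpolated_ap(precisions, recalls):
    """COCO-style 101-point interpolated AP (sketch)."""
    precisions = np.asarray(precisions)
    recalls = np.asarray(recalls)
    ap = 0.0
    for r in np.linspace(0.0, 1.0, 101):
        mask = recalls >= r
        ap += precisions[mask].max() if mask.any() else 0.0
    return ap / 101
```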
### Additional
_No response_ | closed | 2025-02-13T11:19:13Z | 2025-02-17T12:26:30Z | https://github.com/ultralytics/ultralytics/issues/19231 | [
"question",
"detect"
] | teengineer | 3 |
jacobgil/pytorch-grad-cam | computer-vision | 268 | help me | For tuberculosis detection, which file should be run first? | closed | 2022-06-14T10:55:23Z | 2022-07-04T19:32:12Z | https://github.com/jacobgil/pytorch-grad-cam/issues/268 | [] | janasubrata | 1 |
scikit-optimize/scikit-optimize | scikit-learn | 900 | Broken tests on armv7 | I don't even know how it's possible for a pure Python package to break on a specific architecture, but it happens anyway.
While the tests succeed on all other architectures I tried (x86, x86_64, ppc64le, s390x, aarch64), they fail only on armv7.
```
============================= test session starts ==============================
platform linux -- Python 3.8.2, pytest-5.4.1, py-1.8.1, pluggy-0.13.1
rootdir: /builds/PureTryOut/aports/testing/py3-scikit-optimize/src/scikit-optimize-0.7.4, inifile: setup.cfg
Fatal Python error: Aborted
Current thread 0xf7de0558 (most recent call first):
File "/usr/lib/python3.8/site-packages/matplotlib/font_manager.py", line 991 in __init__
File "/usr/lib/python3.8/site-packages/matplotlib/font_manager.py", line 1334 in _rebuild
File "/usr/lib/python3.8/site-packages/matplotlib/font_manager.py", line 1343 in <module>
File "<frozen importlib._bootstrap>", line 219 in _call_with_frames_removed
File "<frozen importlib._bootstrap_external>", line 783 in exec_module
File "<frozen importlib._bootstrap>", line 671 in _load_unlocked
File "<frozen importlib._bootstrap>", line 975 in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 991 in _find_and_load
File "/usr/lib/python3.8/site-packages/matplotlib/contour.py", line 16 in <module>
File "<frozen importlib._bootstrap>", line 219 in _call_with_frames_removed
File "<frozen importlib._bootstrap_external>", line 783 in exec_module
File "<frozen importlib._bootstrap>", line 671 in _load_unlocked
File "<frozen importlib._bootstrap>", line 975 in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 991 in _find_and_load
File "/usr/lib/python3.8/site-packages/matplotlib/colorbar.py", line 31 in <module>
File "<frozen importlib._bootstrap>", line 219 in _call_with_frames_removed
File "<frozen importlib._bootstrap_external>", line 783 in exec_module
File "<frozen importlib._bootstrap>", line 671 in _load_unlocked
File "<frozen importlib._bootstrap>", line 975 in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 991 in _find_and_load
File "/usr/lib/python3.8/site-packages/matplotlib/pyplot.py", line 32 in <module>
File "<frozen importlib._bootstrap>", line 219 in _call_with_frames_removed
File "<frozen importlib._bootstrap_external>", line 783 in exec_module
File "<frozen importlib._bootstrap>", line 671 in _load_unlocked
File "<frozen importlib._bootstrap>", line 975 in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 991 in _find_and_load
File "<frozen importlib._bootstrap>", line 219 in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1042 in _handle_fromlist
File "/usr/lib/python3.8/site-packages/matplotlib/__init__.py", line 1280 in use
File "/usr/lib/python3.8/site-packages/matplotlib/cbook/deprecation.py", line 358 in wrapper
File "/usr/lib/python3.8/site-packages/matplotlib/cbook/deprecation.py", line 296 in wrapper
File "/builds/PureTryOut/aports/testing/py3-scikit-optimize/src/scikit-optimize-0.7.4/skopt/plots.py", line 15 in <module>
File "<frozen importlib._bootstrap>", line 219 in _call_with_frames_removed
File "<frozen importlib._bootstrap_external>", line 783 in exec_module
File "<frozen importlib._bootstrap>", line 671 in _load_unlocked
File "<frozen importlib._bootstrap>", line 975 in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 991 in _find_and_load
File "/usr/lib/python3.8/site-packages/py/_path/local.py", line 701 in pyimport
File "/usr/lib/python3.8/site-packages/_pytest/doctest.py", line 483 in collect
File "/usr/lib/python3.8/site-packages/_pytest/runner.py", line 264 in <lambda>
File "/usr/lib/python3.8/site-packages/_pytest/runner.py", line 244 in from_call
File "/usr/lib/python3.8/site-packages/_pytest/runner.py", line 264 in pytest_make_collect_report
File "/usr/lib/python3.8/site-packages/pluggy/callers.py", line 187 in _multicall
File "/usr/lib/python3.8/site-packages/pluggy/manager.py", line 84 in <lambda>
File "/usr/lib/python3.8/site-packages/pluggy/manager.py", line 93 in _hookexec
File "/usr/lib/python3.8/site-packages/pluggy/hooks.py", line 286 in __call__
File "/usr/lib/python3.8/site-packages/_pytest/runner.py", line 382 in collect_one_node
File "/usr/lib/python3.8/site-packages/_pytest/main.py", line 681 in genitems
File "/usr/lib/python3.8/site-packages/_pytest/main.py", line 684 in genitems
File "/usr/lib/python3.8/site-packages/_pytest/main.py", line 490 in _perform_collect
File "/usr/lib/python3.8/site-packages/_pytest/main.py", line 452 in perform_collect
File "/usr/lib/python3.8/site-packages/_pytest/main.py", line 257 in pytest_collection
File "/usr/lib/python3.8/site-packages/pluggy/callers.py", line 187 in _multicall
File "/usr/lib/python3.8/site-packages/pluggy/manager.py", line 84 in <lambda>
File "/usr/lib/python3.8/site-packages/pluggy/manager.py", line 93 in _hookexec
File "/usr/lib/python3.8/site-packages/pluggy/hooks.py", line 286 in __call__
File "/usr/lib/python3.8/site-packages/_pytest/main.py", line 246 in _main
File "/usr/lib/python3.8/site-packages/_pytest/main.py", line 191 in wrap_session
File "/usr/lib/python3.8/site-packages/_pytest/main.py", line 240 in pytest_cmdline_main
File "/usr/lib/python3.8/site-packages/pluggy/callers.py", line 187 in _multicall
File "/usr/lib/python3.8/site-packages/pluggy/manager.py", line 84 in <lambda>
File "/usr/lib/python3.8/site-packages/pluggy/manager.py", line 93 in _hookexec
File "/usr/lib/python3.8/site-packages/pluggy/hooks.py", line 286 in __call__
File "/usr/lib/python3.8/site-packages/_pytest/config/__init__.py", line 124 in main
File "/usr/bin/pytest", line 11 in <module>
Aborted (core dumped)
``` | closed | 2020-05-05T17:10:13Z | 2022-01-03T09:03:56Z | https://github.com/scikit-optimize/scikit-optimize/issues/900 | [] | PureTryOut | 1 |
sktime/sktime | scikit-learn | 7271 | [ENH] RBF neural networks for time series forecasting | We can implement RBF ANNs for time series forecasting; the basis can be the `basisfunction` implemented in #7261.
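For illustration, a minimal sketch of the kind of layer this could build on (assumptions: Gaussian basis functions with fixed centers; not tied to any existing sktime API):

```python
import numpy as np

class RBFLayer:
    """Gaussian radial-basis feature map: phi_j(x) = exp(-gamma * ||x - c_j||^2)."""

    def __init__(self, centers, gamma=1.0):
        self.centers = np.asarray(centers)   # shape (n_centers, n_features)
        self.gamma = gamma

    def __call__(self, x):
        x = np.atleast_2d(x)                 # shape (n_samples, n_features)
        d = np.linalg.norm(x[:, None, :] - self.centers[None, :, :], axis=-1)
        return np.exp(-self.gamma * d ** 2)  # shape (n_samples, n_centers)
```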
From there we can move forward and create a neural network for handling complex time series. | open | 2024-10-15T20:09:44Z | 2025-01-30T15:03:27Z | https://github.com/sktime/sktime/issues/7271 | [
"feature request",
"module:forecasting",
"enhancement"
] | phoeenniixx | 5 |
littlecodersh/ItChat | api | 578 | Can the download directory be specified? | As the title says: I want to integrate this with Django and put the downloaded files into Django's static resources directory. | closed | 2018-01-15T05:05:14Z | 2018-02-28T03:11:34Z | https://github.com/littlecodersh/ItChat/issues/578 | [
"question"
] | gzxy-0102 | 1 |
Evil0ctal/Douyin_TikTok_Download_API | api | 96 | Douyin homepage download does not support image galleries | When calling the API, I found that videos can be downloaded, but image galleries cannot. | closed | 2022-11-03T07:39:29Z | 2024-04-23T05:03:28Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/96 | [
"BUG",
"enhancement",
"help wanted"
] | liuliuzx | 7 |
ultrafunkamsterdam/undetected-chromedriver | automation | 1,282 | Headless not working? | Hi.
`driver = uc.Chrome(headless=True)` has been working fine for me for a long time;
suddenly I get:
```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "D:\Python\.venv\Lib\site-packages\undetected_chromedriver\__init__.py", line 386, in __init__
    if self.patcher.version_main < 108:
       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: '<' not supported between instances of 'NoneType' and 'int'
```
I have tried making an instance of the patcher and reading `release.version[0]`, which correctly returns "113", which is why I cannot understand how `self.patcher.version_main` could be `None`.
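A workaround that unblocks me for now (assumption: passing `version_main` explicitly skips the code path that compares the `None` default):

```python
import undetected_chromedriver as uc

driver = uc.Chrome(headless=True, version_main=113)  # pin to the installed Chrome major
```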
Versions:
chrome, Version 113.0.5672.127 (Official Build) (64-bit)
ChromeDriver 113.0.5672.63
Thank you in advance.
P.S. if you want some help with working around the detection on Bet365 let me know. I have some tips and tricks for making it work despite the detection happening when you navigate the site :) | open | 2023-05-22T12:04:00Z | 2023-08-09T11:32:46Z | https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/1282 | [] | bigredcardinal | 7 |
ray-project/ray | python | 50,679 | [core] Cover cpplint for ray/src/ray/scheduling | ## Description
As part of the initiative to introduce cpplint into the pre-commit hook, we are gradually cleaning up C++ folders to ensure compliance with code style requirements. This issue focuses on cleaning up `ray/src/ray/scheduling`.
## Goal
- Ensure all `.h` and `.cc` files in `ray/src/ray/scheduling` comply with cpplint rules.
- Address or suppress all cpplint warnings.
- Add `ray/src/ray/scheduling` to the pre-commit hook once it is clean.
### Steps to Complete
1. Checkout the latest main branch and install the pre-commit hook.
2. Manually modify all C++ files in `ray/src/ray/scheduling` to trigger cpplint (e.g., by adding a newline).
3. Run `git commit` to trigger cpplint and identify issues.
4. Fix the reported issues or suppress them using clang-tidy if necessary.
5. Once all warnings are resolved, update the pre-commit hook to include `ray/src/ray/scheduling`.
This is a sub issue from #50583
| closed | 2025-02-18T03:31:31Z | 2025-02-21T15:19:55Z | https://github.com/ray-project/ray/issues/50679 | [
"enhancement",
"core"
] | 400Ping | 2 |
datadvance/DjangoChannelsGraphqlWs | graphql | 42 | Potential dependency conflicts between django-channels-graphql-ws and graphql-core | Hi, as shown in the following full dependency graph of **_django-channels-graphql-ws_**, **_django-channels-graphql-ws_** requires **_graphql-core >=2.0,<2.3_**, while the installed version is **_graphql-core 2.2.12_**, and **_graphene 4.0.0_** requires **_graphql-core >=2.0.1_**.
According to Pip's _“first found wins”_ installation strategy, **_graphql-core 2.2.12_** is the actually installed version.
Although the first found package version **_graphql-core 2.2.12_** just satisfies the later dependency constraint **_(graphql-core >=2.0.1)_**, it will easily cause a build failure once the updated graphene introduces a higher version of **_Graphql-core_**.
According to graphene's release history, it habitually upgrades **_graphql-core_** in its releases. For instance, **_graphene 2.0.0_** upgraded **_graphql-core_**'s constraint from **_>=1.2.dev_** to **_>=2.0.dev_**, **_graphene 2.1.3_** upgraded it from **_>=2.0,<3.0_** to **_>=2.1,<3.0_**, and the next graphene version upgrades it from **_>=2.1,<3.0_** to **_>=3.1.0b1,<4_**.
### Dependency tree
```
django-channels-graphql-ws - 0.5.0
| +- aiohttp(install version:3.6.2 version range:>=3.5,<4.0)
| +- asgiref(install version:3.2.7 version range:>=3.2,<4.0)
| +- channels(install version:2.4.0 version range:>=2.2,<3.0)
| | +- asgiref(install version:3.2.7 version range:<4,>=3.2)
| | +- daphne(install version:2.5.0 version range:<3,>=2.3)
| | +- django(install version:3.0.5 version range:>=2.2)
| | | +- asgiref (install version: version range:=3.2)
| | | +- pytz(install version:2019.3 version range:*)
| | | +- sqlparse (install version:0.3.1 version range:>=0.2.2)
| +- django(install version:3.0.5 version range:>=2.2)
| | +- asgiref (install version: version range:=3.2)
| | +- pytz(install version:2019.3 version range:*)
| | +- sqlparse (install version:0.3.1 version range:>=0.2.2)
| +- graphene(install version:2.1.8 version range:>=2.1,<3.0)
| | +- aniso8601(install version:7 version range:>=3,<=7)
| | +- graphql-core(install version:2.3.1 version range:<3,>=2.1)
| | | +- promise(install version:2.3 version range:>=2.3,<3)
| | | | +- six(install version:1.14.0 version range:*)
| | | | +- typing(install version:3.7.4.1 version range:>=3.6.4)
| | | +- rx(install version:1.6.1 version range:>=1.6,<2)
| | | +- six(install version:1.14.0 version range:>=1.10.0)
| | +- graphql-relay(install version:2.0.1 version range:<3,>=2)
| | | +- graphql-core(install version:2.3.1 version range:>=2.2,<3)
| | | | +- promise(install version:2.3 version range:>=2.3,<3)
| | | | +- rx(install version:1.6.1 version range:>=1.6,<2)
| | | | +- six(install version:1.14.0 version range:>=1.10.0)
| | | +- promise(install version:2.3 version range:>=2.2,<3)
| | | | +- six(install version:1.14.0 version range:*)
| | | | +- typing(install version:3.7.4.1 version range:>=3.6.4)
| | | +- six(install version:1.14.0 version range:>=1.12)
| | +- six(install version:1.14.0 version range:>=1.10.0,<2)
| +- graphql-core(install version:2.3.1 version range:>=2.2,<3.0)
| | +- promise(install version:2.3 version range:>=2.3,<3)
| | | +- six(install version:1.14.0 version range:*)
| | | +- typing(install version:3.7.4.1 version range:>=3.6.4)
| | +- rx(install version:1.6.1 version range:>=1.6,<2)
| | +- six(install version:1.14.0 version range:>=1.10.0)
| +- msgpack(install version:0.6.2 version range:>=0.6.1,<0.7.0)
```
Thanks for your help.
Best,
Neolith
| closed | 2020-05-09T15:18:16Z | 2020-07-25T12:36:53Z | https://github.com/datadvance/DjangoChannelsGraphqlWs/issues/42 | [] | NeolithEra | 5 |
newpanjing/simpleui | django | 498 | After removing the default delete_selected action, the checkboxes disappear | open | 2024-05-11T12:34:26Z | 2024-05-11T12:34:26Z | https://github.com/newpanjing/simpleui/issues/498 | [
"bug"
] | davincilll | 0 |
|
huggingface/datasets | pytorch | 6,720 | TypeError: 'str' object is not callable | ### Describe the bug
I am trying to get the HPLT datasets on the hub. Downloading/re-uploading would be too time- and resource consuming so I wrote [a dataset loader script](https://huggingface.co/datasets/BramVanroy/hplt_mono_v1_2/blob/main/hplt_mono_v1_2.py). I think I am very close but for some reason I always get the error below. It happens during the clean-up phase where the directory cannot be removed because it is not empty.
My only guess would be that this may have to do with zstandard
```
Traceback (most recent call last):
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/builder.py", line 1744, in _prepare_split_single
writer.write(example, key)
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/arrow_writer.py", line 492, in write
self.write_examples_on_file()
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/arrow_writer.py", line 434, in write_examples_on_file
if self.schema
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/arrow_writer.py", line 409, in schema
else (pa.schema(self._features.type) if self._features is not None else None)
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/features/features.py", line 1643, in type
return get_nested_type(self)
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/features/features.py", line 1209, in get_nested_type
{key: get_nested_type(schema[key]) for key in schema}
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/features/features.py", line 1209, in <dictcomp>
{key: get_nested_type(schema[key]) for key in schema}
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/features/features.py", line 1221, in get_nested_type
value_type = get_nested_type(schema.feature)
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/features/features.py", line 1228, in get_nested_type
return schema()
TypeError: 'str' object is not callable
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/builder.py", line 1753, in _prepare_split_single
num_examples, num_bytes = writer.finalize()
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/arrow_writer.py", line 588, in finalize
self.write_examples_on_file()
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/arrow_writer.py", line 434, in write_examples_on_file
if self.schema
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/arrow_writer.py", line 409, in schema
else (pa.schema(self._features.type) if self._features is not None else None)
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/features/features.py", line 1643, in type
return get_nested_type(self)
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/features/features.py", line 1209, in get_nested_type
{key: get_nested_type(schema[key]) for key in schema}
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/features/features.py", line 1209, in <dictcomp>
{key: get_nested_type(schema[key]) for key in schema}
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/features/features.py", line 1221, in get_nested_type
value_type = get_nested_type(schema.feature)
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/features/features.py", line 1228, in get_nested_type
return schema()
TypeError: 'str' object is not callable
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/builder.py", line 959, in incomplete_dir
yield tmp_dir
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/builder.py", line 1005, in download_and_prepare
self._download_and_prepare(
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/builder.py", line 1767, in _download_and_prepare
super()._download_and_prepare(
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/builder.py", line 1100, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/builder.py", line 1605, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/builder.py", line 1762, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/pricie/vanroy/.config/JetBrains/PyCharm2023.3/scratches/scratch_5.py", line 4, in <module>
ds = load_dataset(
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/load.py", line 2549, in load_dataset
builder_instance.download_and_prepare(
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/builder.py", line 985, in download_and_prepare
with incomplete_dir(self._output_dir) as tmp_output_dir:
File "/home/pricie/vanroy/.pyenv/versions/3.10.13/lib/python3.10/contextlib.py", line 153, in __exit__
self.gen.throw(typ, value, traceback)
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/builder.py", line 966, in incomplete_dir
shutil.rmtree(tmp_dir)
File "/home/pricie/vanroy/.pyenv/versions/3.10.13/lib/python3.10/shutil.py", line 731, in rmtree
onerror(os.rmdir, path, sys.exc_info())
File "/home/pricie/vanroy/.pyenv/versions/3.10.13/lib/python3.10/shutil.py", line 729, in rmtree
os.rmdir(path)
OSError: [Errno 39] Directory not empty: '/home/pricie/vanroy/.cache/huggingface/datasets/BramVanroy___hplt_mono_v1_2/ky/1.2.0/7ab138629fe7e9e29fe93ce63d809d5ef9d963273b829f61ab538e012dc9cc47.incomplete'
```
Interestingly, though, this directory _does_ appear to be empty:
```shell
> cd /home/pricie/vanroy/.cache/huggingface/datasets/BramVanroy___hplt_mono_v1_2/ky/1.2.0/7ab138629fe7e9e29fe93ce63d809d5ef9d963273b829f61ab538e012dc9cc47.incomplete
> ls -lah
total 0
drwxr-xr-x. 1 vanroy vanroy 0 Mar 7 12:01 .
drwxr-xr-x. 1 vanroy vanroy 304 Mar 7 11:52 ..
> cd ..
> ls
7ab138629fe7e9e29fe93ce63d809d5ef9d963273b829f61ab538e012dc9cc47_builder.lock 7ab138629fe7e9e29fe93ce63d809d5ef9d963273b829f61ab538e012dc9cc47.incomplete
```
### Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset(
"BramVanroy/hplt_mono_v1_2",
"ky",
trust_remote_code=True
)
```
### Expected behavior
No error.
### Environment info
- `datasets` version: 2.16.1
- Platform: Linux-5.14.0-284.25.1.el9_2.x86_64-x86_64-with-glibc2.34
- Python version: 3.10.13
- `huggingface_hub` version: 0.20.2
- PyArrow version: 14.0.1
- Pandas version: 2.1.3
- `fsspec` version: 2023.10.0
| closed | 2024-03-07T11:07:09Z | 2024-03-08T07:34:53Z | https://github.com/huggingface/datasets/issues/6720 | [] | BramVanroy | 2 |
ultralytics/ultralytics | deep-learning | 19,452 | Bad results after converting to OpenVINO format and running in a DLStreamer pipeline | Original issue on the DLStreamer repository: https://github.com/dlstreamer/dlstreamer/issues/455
Hello everyone. I am getting some strange detection results after converting a YOLOv8s model, trained on a custom dataset (fire, smoke), to OpenVINO format. Please read the issue linked above for details. | open | 2025-02-27T04:46:41Z | 2025-03-05T18:35:51Z | https://github.com/ultralytics/ultralytics/issues/19452 | [
"detect",
"exports"
] | hungtrieu07 | 5 |
ghtmtt/DataPlotly | plotly | 312 | Encountering 'AttributeError: module 'pandas' has no attribute 'Series'' error while developing hypsometric curve graphs using the DataPlotly plugin in QGIS | I am attempting to develop hypsometric curve graphs in QGIS using the DataPlotly plugin, but I'm encountering the following error:
```
line 176, in is_homogeneous_array
or (pd and isinstance(v, (pd.Series, pd.Index)))
AttributeError: module 'pandas' has no attribute 'Series'
```
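For what it's worth, a hedged diagnostic aside: the usual cause of `module 'pandas' has no attribute 'Series'` is a shadowed or broken pandas install (for example, a stray local file named `pandas.py` on the path), so checking where pandas is imported from inside the QGIS Python console may narrow this down:

```python
# Hedged diagnostic: verify which pandas is actually being imported inside
# the QGIS Python console. A path outside site-packages suggests shadowing.
import pandas

print(pandas.__file__)  # should point into site-packages
print(getattr(pandas, "__version__", "no __version__ attribute -> shadowed?"))
```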
I've tried upgrading my Pandas version, but the problem persists. Can you provide any guidance on how to resolve this issue? Thank you for your assistance. | closed | 2023-02-28T15:52:12Z | 2023-03-01T11:35:23Z | https://github.com/ghtmtt/DataPlotly/issues/312 | [] | mfaisalhanif | 4 |
skypilot-org/skypilot | data-science | 4,614 | [RunPod] Reuse templates | Hi, we use SkyPilot to run clusters on RunPod.
It works great, but on every launch, it creates a new template and downloads an image from a remote repository. This process is slow and results in multiple templates being uploaded to RunPod.
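One way this could work is to look up an existing template by image name before creating a new one; a rough sketch (hedged: the `imageName` field name is an assumption about the return format of `runpod.getPodTemplates`, and may differ):

```python
# Hedged sketch: reuse an existing RunPod template that matches the image
# instead of creating a new one on every launch. Assumes runpod.api_key
# is already configured.
import runpod

def find_template(image_name: str):
    for tpl in runpod.getPodTemplates():
        if tpl.get("imageName") == image_name:  # field name is an assumption
            return tpl
    return None  # fall back to creating a fresh template

template = find_template("my-registry/my-image:latest")
```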
There should be logic for specifying an existing template, or for finding one based on the image name/credentials. The `runpod.getPodTemplates` function can retrieve all available templates, which can then be iterated over (as sketched above) to find the one that matches the provided image name/credentials. | open | 2025-01-28T00:36:58Z | 2025-02-06T07:25:12Z | https://github.com/skypilot-org/skypilot/issues/4614 | [] | Kovbo | 1 |
microsoft/nni | data-science | 5,667 | ProxylessNas on CIFAR | Hello everyone, I would like to test ProxylessNAS on a smaller dataset like CIFAR using NNI version 3.0. I used code that inherits from the DARTS example; however, it does not work. Could you please help me figure out how to fix this issue? Here is the error, followed by my code:
```
RuntimeError: Shape inference failed because no shape inference formula is found for AvgPool2d(kernel_size=3, stride=1, padding=1) of type AvgPool2d. Meanwhile the nested modules and functions inside failed to propagate the shape information. Please provide a `_shape_forward` member function or register a formula using `register_shape_inference_formula`.
```
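Before the full script, a note on the error (a hedged aside): the message suggests registering a shape-inference formula for `AvgPool2d`. Since `AvgPool2d(kernel_size=3, stride=1, padding=1)` preserves spatial dimensions, such a formula would simply return the input shape unchanged. The registration API named in the error (`register_shape_inference_formula`) is not called below, because its exact import path and signature in NNI 3.0 are assumptions that would need to be checked against the `nni.nas.profiler` docs.

```python
# Hypothetical sketch -- how a shape formula for this AvgPool2d could look.
# How it must be registered with NNI 3.0 is an assumption; see the error
# message's pointer to `register_shape_inference_formula`.
import torch.nn as nn

def avgpool2d_same_shape(module: nn.AvgPool2d, input_shape):
    # kernel_size=3, stride=1, padding=1 keeps (N, C, H, W) unchanged,
    # so the output shape equals the input shape.
    return input_shape
```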
```
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT license.
"""
Reproduction of experiments in `DARTS paper <https://arxiv.org/abs/1806.09055>`__.
"""
import argparse
import json
import os
import numpy as np
import torch
import nni
from nni.nas.evaluator.pytorch import Lightning, ClassificationModule, Trainer
from nni.nas.experiment import NasExperiment
from nni.nas.space import model_context
from nni.nas.hub.pytorch import DARTS
from nni.nas.strategy import DARTS as DartsStrategy
from pytorch_lightning.loggers import TensorBoardLogger
from torch.utils.data import DataLoader
from torch.utils.data.sampler import SubsetRandomSampler
from torchvision import transforms
from torchvision.datasets import CIFAR10
@nni.trace
class AuxLossClassificationModule(ClassificationModule):
"""Several customization for the training of DARTS, based on default Classification."""
model: DARTS
def __init__(self,
learning_rate: float = 0.001,
weight_decay: float = 0.,
auxiliary_loss_weight: float = 0.4,
max_epochs: int = 600):
self.auxiliary_loss_weight = auxiliary_loss_weight
self.max_epochs = max_epochs
super().__init__(learning_rate=learning_rate, weight_decay=weight_decay, num_classes=10)
def configure_optimizers(self):
"""Customized optimizer with momentum, as well as a scheduler."""
optimizer = torch.optim.SGD(
self.parameters(),
momentum=0.9,
lr=self.hparams.learning_rate, # type: ignore
weight_decay=self.hparams.weight_decay # type: ignore
)
return {
'optimizer': optimizer,
'lr_scheduler': torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, self.max_epochs, eta_min=1e-3)
}
def training_step(self, batch, batch_idx):
"""Training step, customized with auxiliary loss."""
x, y = batch
if self.auxiliary_loss_weight:
y_hat, y_aux = self(x)
loss_main = self.criterion(y_hat, y)
loss_aux = self.criterion(y_aux, y)
self.log('train_loss_main', loss_main)
self.log('train_loss_aux', loss_aux)
loss = loss_main + self.auxiliary_loss_weight * loss_aux
else:
y_hat = self(x)
loss = self.criterion(y_hat, y)
self.log('train_loss', loss, prog_bar=True)
for name, metric in self.metrics.items():
self.log('train_' + name, metric(y_hat, y), prog_bar=True)
return loss
def on_train_epoch_start(self):
"""Set drop path probability before every epoch. This has no effect if drop path is not enabled in model."""
self.model.set_drop_path_prob(self.model.drop_path_prob * self.current_epoch / self.max_epochs)
# Logging learning rate at the beginning of every epoch
self.log('lr', self.trainer.optimizers[0].param_groups[0]['lr'])
def cutout_transform(img, length: int = 16):
h, w = img.size(1), img.size(2)
mask = np.ones((h, w), np.float32)
y = np.random.randint(h)
x = np.random.randint(w)
y1 = np.clip(y - length // 2, 0, h)
y2 = np.clip(y + length // 2, 0, h)
x1 = np.clip(x - length // 2, 0, w)
x2 = np.clip(x + length // 2, 0, w)
mask[y1: y2, x1: x2] = 0.
mask = torch.from_numpy(mask)
mask = mask.expand_as(img)
img *= mask
return img
def get_cifar10_dataset(train: bool = True, cutout: bool = False):
CIFAR_MEAN = [0.49139968, 0.48215827, 0.44653124]
CIFAR_STD = [0.24703233, 0.24348505, 0.26158768]
if train:
transform = transforms.Compose([
transforms.RandomCrop(32, padding=4),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize(CIFAR_MEAN, CIFAR_STD),
])
if cutout:
transform.transforms.append(cutout_transform)
else:
transform = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(CIFAR_MEAN, CIFAR_STD),
])
return nni.trace(CIFAR10)(root='./data', train=train, download=True, transform=transform)
def search(log_dir: str, batch_size: int = 64, **kwargs):
model_space = DARTS(16, 8, 'cifar')
train_data = get_cifar10_dataset()
num_samples = len(train_data)
indices = np.random.permutation(num_samples)
split = num_samples // 2
train_loader = DataLoader(
train_data, batch_size=batch_size,
sampler=SubsetRandomSampler(indices[:split]),
pin_memory=True, num_workers=6
)
valid_loader = DataLoader(
train_data, batch_size=batch_size,
sampler=SubsetRandomSampler(indices[split:]),
pin_memory=True, num_workers=6
)
evaluator = Lightning(
AuxLossClassificationModule(0.025, 3e-4, 0., 50),
Trainer(
gpus=1,
max_epochs=50,
logger=TensorBoardLogger(log_dir, name='search')
),
train_dataloaders=train_loader,
val_dataloaders=valid_loader
)
# # Gradient clip needs to be put here because DARTS strategy doesn't support this configuration from trainer.
# strategy = DartsStrategy(gradient_clip_val=5.)
# # from nni.nas.space import RawFormatModelSpace
# # from nni.nas.execution import SequentialExecutionEngine
# # engine = SequentialExecutionEngine()
# # strategy(RawFormatModelSpace(model_space, evaluator), engine)
# # print(next(strategy.list_models()).sample)
# experiment = NasExperiment(model_space, evaluator, strategy)
# experiment.run()
# # return next(strategy.list_models()).sample
import json
from nni.nas.execution import SequentialExecutionEngine
from nni.nas.space import RawFormatModelSpace
from nni.nas.oneshot.pytorch.profiler import ExpectationProfilerPenalty
from nni.nas.strategy import Proxyless
from nni.nas.profiler.pytorch.flops import FlopsProfiler
print (model_space)
profiler = FlopsProfiler(model_space, torch.randn(1, 3, 32, 32), count_normalization=False, count_bias=False, count_activation=False)
penalty = ExpectationProfilerPenalty(profiler, 320e6, scale=0.1, nonlinear='absolute')
engine = SequentialExecutionEngine()
strategy = Proxyless(warmup_epochs=20, penalty=penalty, arc_learning_rate=1e-3)
strategy(RawFormatModelSpace(model_space, evaluator), engine)
arch = next(strategy.list_models()).sample
print(arch)
with open(os.path.join(log_dir, 'arch.json'), 'w') as f:
json.dump(arch, f)
def train(arch: dict, log_dir: str, batch_size: int = 96, ckpt_path: str = None, **kwargs):
with model_context(arch):
model = DARTS(36, 20, 'cifar', auxiliary_loss=True, drop_path_prob=0.2)
train_data = get_cifar10_dataset(cutout=True)
valid_data = get_cifar10_dataset(train=False)
fit_kwargs = {}
if ckpt_path:
fit_kwargs['ckpt_path'] = ckpt_path
evaluator = Lightning(
AuxLossClassificationModule(0.025, 3e-4, 0.4, 600),
Trainer(
gpus=1,
gradient_clip_val=5.,
max_epochs=600,
logger=TensorBoardLogger(log_dir, name='train')
),
train_dataloaders=DataLoader(train_data, batch_size=batch_size, pin_memory=True, shuffle=True, num_workers=6),
val_dataloaders=DataLoader(valid_data, batch_size=batch_size, pin_memory=True, num_workers=6),
fit_kwargs=fit_kwargs
)
evaluator.fit(model)
def test(arch, weight_file, batch_size: int = 512, **kwargs):
with model_context(arch):
model = DARTS(36, 20, 'cifar')
model.load_state_dict(torch.load(weight_file))
lightning_module = AuxLossClassificationModule(0.025, 3e-4, 0., 600)
lightning_module.set_model(model)
trainer = Trainer(gpus=1)
valid_data = get_cifar10_dataset(train=False)
valid_loader = DataLoader(valid_data, batch_size=batch_size, pin_memory=True, num_workers=6)
trainer.validate(lightning_module, valid_loader)
def main():
parser = argparse.ArgumentParser()
parser.add_argument('--mode', choices=['search', 'train', 'test', 'search_train'], default='search_train')
parser.add_argument('--batch_size', type=int)
parser.add_argument('--arch', type=str)
parser.add_argument('--weight_file', type=str)
parser.add_argument('--log_dir', default='lightning_logs', type=str)
parser.add_argument('--ckpt_path', type=str)
parsed_args = parser.parse_args()
config = {k: v for k, v in vars(parsed_args).items() if v is not None}
if 'arch' in config:
config['arch'] = json.loads(config['arch'])
if 'search' in config['mode']:
config['arch'] = search(**config)
json.dump(config['arch'], open(os.path.join(config['log_dir'], 'arch.json'), 'w'))
print('Searched config', config['arch'])
if 'train' in config['mode']:
train(**config)
if config['mode'] == 'test':
test(**config)
if __name__ == '__main__':
main()
``` | open | 2023-08-20T18:42:25Z | 2023-08-20T18:42:25Z | https://github.com/microsoft/nni/issues/5667 | [] | John1231983 | 0 |
Yorko/mlcourse.ai | pandas | 13 | Link to the third article | Please add a link to the third Habr article to the README. | closed | 2017-03-17T07:43:27Z | 2017-03-17T08:05:13Z | https://github.com/Yorko/mlcourse.ai/issues/13 | [
"minor_fix"
] | loopdigga96 | 2 |
microsoft/Bringing-Old-Photos-Back-to-Life | pytorch | 185 | Can't get global restoration to work. | I keep getting `Mapping: You are using the mapping model without global restoration.` despite having unzipped the checkpoints correctly. Any idea why? | open | 2021-07-18T01:31:50Z | 2024-09-15T00:30:41Z | https://github.com/microsoft/Bringing-Old-Photos-Back-to-Life/issues/185 | [] | acc3d | 15 |
Lightning-AI/pytorch-lightning | pytorch | 20,027 | [Fabric Lightning] Named barriers | ### Description & Motivation
To prevent ranks from losing alignment due to user error, it would be beneficial to have named barriers in Lightning, allowing ranks to move forward only when they all reach a barrier with the same name.
### Pitch
For example:
```
if fabric.global_rank == 0:
fabric.barrier("rank_0")
else:
fabric.barrier("not_rank_0")
```
would fail in this case, and upon timeout each rank would raise an error naming the barrier at which it is held up.
This is in contrast to the current behavior, where due to incorrect logic the ranks may take different code paths and reach some other barrier, which in turn lets the whole flow continue silently.
An issue that will likely keep coming up is with `fabric.save`. It is not obvious to new users (who don't dig into the documentation) that it must be called on all ranks, as it performs its own internal barrier call.
A typical mistake would be to construct
```
if fabric.global_rank == 0:
fabric.save(...)
fabric.barrier()
do_training_stuff
fabric.barrier()
```
In this case, rank 0 will start to lag behind as it performs an additional barrier call.
If `fabric.save` implemented `fabric.barrier("save")`, then the above program would exit with a message that there is an alignment issue.
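For illustration, a minimal sketch of how a named barrier could be built on top of `torch.distributed` (the naming and error handling below are assumptions, not a proposed final API):

```python
# Hedged sketch: a barrier that also verifies all ranks arrived with the
# same name, surfacing rank-divergence bugs instead of silently realigning.
# Assumes the default process group is already initialized.
import torch.distributed as dist

def named_barrier(name: str) -> None:
    world_size = dist.get_world_size()
    names = [None] * world_size
    dist.all_gather_object(names, name)  # collective; every rank must call it
    if any(other != name for other in names):
        raise RuntimeError(
            f"Barrier mismatch: rank {dist.get_rank()} reached {name!r} "
            f"but the ranks reported {names}"
        )
```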
### Alternatives
_No response_
### Additional context
https://github.com/Lightning-AI/pytorch-lightning/issues/19780
cc @borda @awaelchli | open | 2024-06-28T11:14:00Z | 2024-06-28T12:25:44Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20027 | [
"feature",
"help wanted",
"distributed"
] | tesslerc | 1 |
huggingface/datasets | numpy | 7,112 | cudf-cu12 24.4.1, ibis-framework 8.0.0 requires pyarrow<15.0.0a0,>=14.0.1,pyarrow<16,>=2 and datasets 2.21.0 requires pyarrow>=15.0.0 | ### Describe the bug
!pip install accelerate>=0.16.0 torchvision transformers>=4.25.1 datasets>=2.19.1 ftfy tensorboard Jinja2 peft==0.7.0
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
cudf-cu12 24.4.1 requires pyarrow<15.0.0a0,>=14.0.1, but you have pyarrow 17.0.0 which is incompatible.
ibis-framework 8.0.0 requires pyarrow<16,>=2, but you have pyarrow 17.0.0 which is incompatible.
To solve the above error, I ran:
!pip install pyarrow==14.0.1
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
datasets 2.21.0 requires pyarrow>=15.0.0, but you have pyarrow 14.0.1 which is incompatible.
### Steps to reproduce the bug
!pip install datasets>=2.19.1
### Expected behavior
run without dependency error
### Environment info
- Diffusers version: 0.31.0.dev0
- Platform: Linux-6.1.85+-x86_64-with-glibc2.35
- Running on Google Colab?: Yes
- Python version: 3.10.12
- PyTorch version (GPU?): 2.3.1+cu121 (True)
- Flax version (CPU?/GPU?/TPU?): 0.8.4 (gpu)
- Jax version: 0.4.26
- JaxLib version: 0.4.26
- Huggingface_hub version: 0.23.5
- Transformers version: 4.42.4
- Accelerate version: 0.32.1
- PEFT version: 0.7.0
- Bitsandbytes version: not installed
- Safetensors version: 0.4.4
- xFormers version: not installed
- Accelerator: Tesla T4, 15360 MiB
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
ycd/manage-fastapi | fastapi | 71 | Duplicate Code in Test File | `test_startproject.py` already contains the whole code for this test, so I think there is no need for the separate file [test_startapp.py](https://github.com/ycd/manage-fastapi/blob/master/tests/test_startapp.py). | closed | 2022-03-15T10:49:35Z | 2022-08-21T06:36:18Z | https://github.com/ycd/manage-fastapi/issues/71 | [] | Saksham1611 | 1 |
aleju/imgaug | machine-learning | 836 | Track Random Augmentations as They Are Applied | I am working on a project where I am tracking the roll estimations for a face in an image. When I possibly rotate an image using a Sequence of probabilistic augmentations, I need to return the rotation that occurred (if any).
    from imgaug.augmenters import Affine, Fliplr, Resize, Sequential, Sometimes

    train_aug = Sequential(
        [
            Resize(tgt_img_size[:2], interpolation="linear"),
            Fliplr(0.3),
            Sometimes(0.3, Affine(rotate=10)),
        ]
    )
Calling get_parameters() returns the distributions but not the parameters that were sampled from the distributions.
What I really need is something like:
    {
        "flipped_lr": True,
        "rotate": [True, -4.5],
    }
I would appreciate a code snippet demonstrating how people track imgaug operations as described above. I appreciate that this project is not actively maintained, but I believe this capability already exists and I am just missing the documentation or a code example for it.
Thanks in advance.
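For reference, one approach that appears to work with imgaug's public API (a hedged sketch, not an official feature): freeze the sampled randomness with `to_deterministic()` and augment probe keypoints alongside the image, then infer what happened from how the probes moved:

```python
import math

import numpy as np
import imgaug.augmenters as iaa
from imgaug.augmentables.kps import Keypoint, KeypointsOnImage

seq = iaa.Sequential([iaa.Fliplr(0.3), iaa.Sometimes(0.3, iaa.Affine(rotate=10))])

image = np.zeros((128, 128, 3), dtype=np.uint8)  # stand-in for your face image
det = seq.to_deterministic()          # freeze this call's sampled randomness
image_aug = det.augment_image(image)

# Probe two keypoints at known positions; how they move reveals what happened.
h, w = image.shape[:2]
probes = KeypointsOnImage(
    [Keypoint(x=0, y=0), Keypoint(x=w - 1, y=0)], shape=image.shape
)
p0, p1 = det.augment_keypoints(probes).keypoints

flipped_lr = p0.x > p1.x  # Fliplr reverses the x-order of the probes
angle = math.degrees(math.atan2(p1.y - p0.y, p1.x - p0.x))
# When flipped, the probe vector is reversed, so `angle` carries a ~180 degree
# offset; exactly disentangling a combined flip+rotation is left open here.
print({"flipped_lr": flipped_lr, "raw_probe_angle_deg": angle})
```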
| open | 2023-06-14T21:39:57Z | 2023-06-14T21:39:57Z | https://github.com/aleju/imgaug/issues/836 | [] | aidansmyth95 | 0 |
plotly/dash-component-boilerplate | dash | 30 | Extra package.json? | I think that the second `package.json` is extraneous, the one at `/{{cookiecutter.project_shortname}}/{{cookiecutter.project_shortname}}/package.json`.
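For illustration, a hedged sketch of how `setup.py` could read the top-level `package.json` (paths and keys are assumptions based on typical cookiecutter layouts):

```python
# Hedged sketch: read metadata from the top-level package.json instead of
# the nested one, so the two files don't have to be kept in sync.
import json
from setuptools import setup

with open("package.json") as f:
    package = json.load(f)

setup(
    name=package["name"].replace(" ", "_").replace("-", "_"),
    version=package["version"],
)
```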
It's being read from `setup.py`, but I think that `setup.py` should read `/{{cookiecutter.project_shortname}}/package.json` instead, no? | open | 2018-11-06T17:12:30Z | 2018-11-06T17:43:49Z | https://github.com/plotly/dash-component-boilerplate/issues/30 | [] | nicolaskruchten | 3 |
ultralytics/ultralytics | machine-learning | 18,721 | yolo with mysql | ### Search before asking
- [x] I have searched the Ultralytics [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar feature requests.
### Description
First of all, thank you for your tutorial. May I ask how to save YOLO11 detection results into a MySQL database? Are there any tutorials? Thank you very much.
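For reference, a hedged sketch of one way to do this (the table schema, credentials, weights file, and image name below are illustrative assumptions, and `mysql-connector-python` must be installed):

```python
# Hedged sketch: persist YOLO detections to MySQL. Schema and connection
# parameters are assumptions for illustration only.
import mysql.connector
from ultralytics import YOLO

model = YOLO("yolo11n.pt")
results = model("image.jpg")

conn = mysql.connector.connect(
    host="localhost", user="user", password="secret", database="detections_db"
)
cur = conn.cursor()
cur.execute(
    """CREATE TABLE IF NOT EXISTS detections (
           id INT AUTO_INCREMENT PRIMARY KEY,
           class_name VARCHAR(64), confidence FLOAT,
           x1 FLOAT, y1 FLOAT, x2 FLOAT, y2 FLOAT)"""
)
for box in results[0].boxes:
    x1, y1, x2, y2 = box.xyxy[0].tolist()
    cur.execute(
        "INSERT INTO detections (class_name, confidence, x1, y1, x2, y2) "
        "VALUES (%s, %s, %s, %s, %s, %s)",
        (model.names[int(box.cls)], float(box.conf), x1, y1, x2, y2),
    )
conn.commit()
conn.close()
```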
### Use case
_No response_
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | open | 2025-01-17T03:09:47Z | 2025-01-17T07:30:02Z | https://github.com/ultralytics/ultralytics/issues/18721 | [
"enhancement",
"question",
"detect"
] | dongqing998 | 3 |
marshmallow-code/apispec | rest-api | 119 | Ignore `load_only` fields if `dump=True` in `fields2jsonschema`? | Hi,
I'm making use of https://github.com/jmcarp/flask-apispec to automatically generate docs in a personal Flask project. This library uses your apispec swagger extension to generate docs for requests and responses. I noticed that responses in the generated docs were including marshmallow's `load_only` fields, which (at least in my case) is not desirable. In the line linked below, you exclude `dump_only` fields when `dump=False` is passed to that method. Do you think it would be a good idea to also ignore `load_only` fields when `dump=True`?
https://github.com/marshmallow-code/apispec/blob/dev/apispec/ext/marshmallow/swagger.py#L492
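To illustrate with a minimal (hypothetical) schema: `password` below is request-only, so it arguably should not appear in the documented response body, yet currently it does:

```python
from marshmallow import Schema, fields

class UserSchema(Schema):
    name = fields.Str()
    password = fields.Str(load_only=True)          # request-only
    created_at = fields.DateTime(dump_only=True)   # response-only
```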
I'm opening this for discussion, and I'll be happy to create a PR for that in case you are ok with the proposed functionality. | closed | 2017-03-15T15:27:33Z | 2018-10-07T10:13:19Z | https://github.com/marshmallow-code/apispec/issues/119 | [
"help wanted",
"feedback welcome",
"backwards incompat"
] | luisincrespo | 15 |
jacobgil/pytorch-grad-cam | computer-vision | 326 | Grad-CAM for object detection (RetinaNet) | Hi! In your README, you explain that gradient-based methods cannot be used on object detection problems (like mine, object detection with RetinaNet), but I found this: https://github.com/alexriedel1/detectron2-GradCAM. Is this, as you said, a specific method implemented to solve the linearity problem, and is that what makes it work?
I also have another question: what happens if the model outputs no instances at my confidence threshold? No output images? Should I use at least one instance, the one with the best score, to see what happens in my network?
Thanks a lot for your work! | closed | 2022-09-06T10:12:18Z | 2022-10-07T19:12:52Z | https://github.com/jacobgil/pytorch-grad-cam/issues/326 | [] | QuentinAndre11 | 1 |
junyanz/pytorch-CycleGAN-and-pix2pix | deep-learning | 1,310 | cyclegan: Shift and deformation in medical image translation - how to create a good training dataset | I am training with CycleGAN on medical (MRI T1) images of the head and neck from two hospitals to learn the difference in appearance. I would like the generated image to have similar contours (anatomy) but a different style.
It seems, however, that after some epochs I get more predictions that shift or deform the generated image. The head foreground region is not perfectly centered, because I did some random cropping to generate the slices at 256x256 resolution.
- I am running with the following command line: `python src\train.py --dataroot ./cyclegan/T1a_to_T1b --name T1a_to_T1b --model cycle_gan --input_nc 1 --output_nc 1 --gan_mode lsgan --preprocess crop`
- I have around 5000 grayscale images for trainA and trainB each (slices of around 15 different heads).
I will try the other `gan_mode` options, but wonder if this has to do with some bias in my training data. Should I ensure that the center of mass (image intensity) has a similar distribution for `trainA` and `trainB`? Can you give any recommendations how to create a good training dataset? Should I add e.g. rotation augmented training data?
To get a rough idea of the distribution of the training data:

| closed | 2021-08-25T12:45:49Z | 2021-12-09T07:09:29Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1310 | [] | dyollb | 2 |
aimhubio/aim | tensorflow | 2,586 | [Bug] Font choice on UI table-content makes Aim look unpolished | ## 🐛 Bug
Firstly, I really like Aim. It's what W&B should have been. I am planning to use it for a few research projects and recommend it to others at my company (Amazon).
My bug/quibble: The font in the table (Inconsolata) looks strange and out of place compared to the rest of the fonts on the page. In both Firefox and Chrome, `Inconsolata` is too tall and narrow. Compared to the rest of the fonts used on the page, this looks jarring and makes Aim seem a bit unfinished.
### To reproduce
Run `aim up` on a repo with some experiments, look at the Aim UI's table on any page (images below).
### Expected behavior
I made the following changes in my CSS (using Inspect Element) to produce the "proposed UI" image:
```
.Table__container--medium {
  /* other CSS as-is */
  --cell-font-size: 0.8rem;
  --font-family: "Inter", sans-serif;
}
```
The `"Inter",sans-serif` is what is used on other page widgets. The font-color (light grey) and font-weight (650) are sufficient to help distinguish it, but I had to adjust the cell-font-size.
### Environment
- Aim Version: v3.16.0
- Python version: 3.10
- pip version: 22.2.2
- OS: MacOS Monterey 12.6.1, running on a Macbook M1 Pro
- Any other relevant information: using Firefox 102.8.0esr (64-bit), at 100% zoom.
## Additional context
### Current table look in Aim v3.16 UI

### Proposed table UI (using `cell-font-size: 0.8rem` and `font-family: "Inter",sans-serif`)

| open | 2023-03-12T18:08:22Z | 2023-08-27T10:04:33Z | https://github.com/aimhubio/aim/issues/2586 | [
"type / bug",
"help wanted",
"phase / exploring"
] | adivekar-utexas | 7 |
gradio-app/gradio | data-science | 10,005 | multipages | - [ ] I have searched to see if a similar issue already exists.
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Additional context**
Add any other context or screenshots about the feature request here.
| closed | 2024-11-19T21:57:57Z | 2024-11-19T22:48:05Z | https://github.com/gradio-app/gradio/issues/10005 | [] | qingsonger | 1 |
pennersr/django-allauth | django | 3,179 | Incompatible with keycloak 20.0.0 | Unable to login with keycloak as provider after upgrading keycloak to v20.0.0, getting http error 500 when fetching userinfo (nullpointer in keycloak). Probably because allauth attempts to fetch userinfo with POST request. | closed | 2022-11-03T09:28:44Z | 2022-11-03T14:27:59Z | https://github.com/pennersr/django-allauth/issues/3179 | [] | snorrehu | 0 |
marcomusy/vedo | numpy | 176 | vedo convert is not working on Windows version | Hi, I installed the vedo library on python3.7 using pip on my Windows machine. It installed correctly but it is not working on the command prompt when I call the following line:-
```
vedo-convert demo.obj -to stl
```
Can you please help me out?
Thanks | closed | 2020-07-09T09:29:21Z | 2020-07-19T20:05:14Z | https://github.com/marcomusy/vedo/issues/176 | [] | Archan2605 | 2 |
huggingface/datasets | nlp | 7,423 | Row indexing a dataset with numpy integers | ### Feature request
Allow indexing datasets with a scalar numpy integer type.
### Motivation
Indexing a dataset with a scalar numpy.int* object raises a TypeError. This is due to the test in `datasets/formatting/formatting.py:key_to_query_type`
``` python
def key_to_query_type(key: Union[int, slice, range, str, Iterable]) -> str:
if isinstance(key, int):
return "row"
elif isinstance(key, str):
return "column"
elif isinstance(key, (slice, range, Iterable)):
return "batch"
_raise_bad_key_type(key)
```
In the row case, it checks if key is an int, which returns false when key is integer like but not a builtin python integer type. This is counterintuitive because a numpy array of np.int64s can be used for the batch case.
For example:
``` python
import numpy as np
import datasets
dataset = datasets.Dataset.from_dict({"a": [1, 2, 3, 4], "b": [5, 6, 7, 8]})
# Regular indexing
dataset[0]
dataset[:2]
# Indexing with numpy data types (expect same results)
idx = np.asarray([0, 1])
dataset[idx] # Succeeds when using an array of np.int64 values
dataset[idx[0]] # Fails with TypeError when using scalar np.int64
```
For the user, this can be solved by wrapping `idx[0]` in `int`, but the test could also be changed in `key_to_query_type` to accept a less strict definition of an integer.
``` diff
+import numbers
+
def key_to_query_type(key: Union[int, slice, range, str, Iterable]) -> str:
+ if isinstance(key, numbers.Integral):
- if isinstance(key, int):
return "row"
elif isinstance(key, str):
return "column"
elif isinstance(key, (slice, range, Iterable)):
return "batch"
_raise_bad_key_type(key)
```
Looking at how others do it, pandas has an `is_integer` definition that it checks which uses `is_integer_object` defined in `pandas/_libs/utils.pxd`:
``` cython
cdef inline bint is_integer_object(object obj) noexcept:
"""
Cython equivalent of
`isinstance(val, (int, np.integer)) and not isinstance(val, (bool, np.timedelta64))`
Parameters
----------
val : object
Returns
-------
is_integer : bool
Notes
-----
This counts np.timedelta64 objects as integers.
"""
return (not PyBool_Check(obj) and isinstance(obj, (int, cnp.integer))
and not is_timedelta64_object(obj))
```
This would be less flexible, as it explicitly checks for numpy integers, but it is worth noting that they needed to ensure the key is not a bool.
### Your contribution
I can submit a pull request with the above changes after checking that indexing succeeds with the numpy integer type. Or if there is a different integer check that would be preferred I could add that.
If there is a reason not to want this behavior that is fine too.
| open | 2025-02-25T18:44:45Z | 2025-03-03T17:55:24Z | https://github.com/huggingface/datasets/issues/7423 | [
"enhancement"
] | DavidRConnell | 1 |
postmanlabs/httpbin | api | 12 | Request: Include 'httpbin' executable script after installing | I'm trying to port my urllib3 tests to use httpbin, and it would be extra convenient if when I did `pip install httpbin`, it installed an `httpbin` script into my virtualenv's `bin` dir for running it.
Kind of ironic that httpbin doesn't have a bin. ;)
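For illustration, the console script could be declared via setuptools entry points; a hedged sketch (the `httpbin.core:main` target is an assumption about the module layout):

```python
# Hedged sketch of how the console script could be declared in setup.py.
from setuptools import setup

setup(
    name="httpbin",
    entry_points={
        "console_scripts": [
            "httpbin = httpbin.core:main",  # assumed module:function target
        ],
    },
)
```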
| closed | 2011-09-27T22:24:29Z | 2018-04-26T17:50:55Z | https://github.com/postmanlabs/httpbin/issues/12 | [] | shazow | 5 |
microsoft/hummingbird | scikit-learn | 684 | [sklearn] OneHotEncoder doesn't work correctly | Hello, I found this project last week, and thanks for all of this work.
I installed `Hummingbird-ml==0.47` via pip, and I would like to know which version of sklearn I should use.
I want to use sklearn's one-hot encoder to preprocess my categorical features, but the output dimensionality from sklearn differs from that of the converted PyTorch model: for sklearn, 15 features -> 69 dims, but for the converted PyTorch model, 15 features -> 76 dims.
After checking, I'm sure the problem is this argument of sklearn's OneHotEncoder:
> Changed in version 1.1: 'infrequent_if_exist' was added to automatically handle unknown categories and infrequent categories.
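A minimal repro sketch of the reported mismatch (hedged: the toy data and `min_frequency` choice are illustrative, and the converted model may error rather than merely disagree):

```python
# Hedged sketch: compare output dims between sklearn's OneHotEncoder with
# infrequent-category handling and its Hummingbird-converted counterpart.
import numpy as np
from sklearn.preprocessing import OneHotEncoder
from hummingbird.ml import convert

X = np.array([["a"], ["b"], ["c"], ["a"]])
enc = OneHotEncoder(handle_unknown="infrequent_if_exist", min_frequency=2).fit(X)
print("sklearn dim:", enc.transform(X).shape[1])

torch_enc = convert(enc, "pytorch")
print("converted dim:", torch_enc.transform(X).shape[1])
```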
Is there any way to solve this problem? Thanks for any solution! | open | 2023-02-09T13:22:06Z | 2023-04-04T22:37:34Z | https://github.com/microsoft/hummingbird/issues/684 | [
"bug",
"enhancement",
"help wanted"
] | faterazer | 6 |
litestar-org/litestar | api | 3,775 | Bug: Pydantic V1 DTO Validation for fields with the max_length validator. | ### Description
In the example below, which uses the Litestar DTO system with msgspec and Pydantic v1, the API validation fails with the following error: `litestar.exceptions.http_exceptions.ClientException: 400: Unsupported type: <class 'str'> - at $.display_name`.
This is a potential regression introduced by the fix for https://github.com/litestar-org/litestar/issues/3710, as this case works fine in Litestar 2.11.
Python version: 3.11.9
Pydantic version: 1.10.18
### URL to code causing the issue
_No response_
### MCVE
app.py
```
import time
from litestar import Litestar, Router
from controllers.graph_crud_v1_controller import GraphCrudController
def create_app() -> Litestar:
# Routes Declarations
base_v1_router = Router(
path="/api/v1/accounts/{account_id:str}",
route_handlers=[
GraphCrudController,
],
)
return Litestar(
route_handlers=[base_v1_router],
debug=True,
)
app = create_app()
```
graph_crud_v1_controller.py
```
__all__ = ["GraphCrudController"]
import logging
import shortuuid
from litestar import Controller, post
from litestar.dto import DTOData
from dto.test_dtos import GraphReadDto, GraphWriteDto, GraphDto
log = logging.getLogger(__name__)
class GraphCrudController(Controller):
path = "/graphs"
dto = GraphWriteDto
return_dto = GraphReadDto
temp_data: GraphDto | None = None
@post(
path="/create",
summary="Create Graph",
)
async def create_graph(self, account_id: str, data: DTOData[GraphDto]) -> GraphDto:
log.info(
"Got a request to create a new graph object in account: %s", account_id
)
current_ts = time.time()
self.temp_data = data.create_instance(
id=shortuuid.uuid(),
account_id=account_id,
created_at=current_ts,
created_by="mock_user",
updated_at=current_ts,
updated_by="mock_user",
)
return self.temp_data
```
test_dtos.py
```
from typing import Annotated, Any
import shortuuid
from litestar.contrib.pydantic import PydanticDTO
from litestar.dto import DTOConfig
from pydantic import BaseModel
from pydantic.class_validators import validator
from pydantic.config import Extra
from pydantic.fields import Field
import time
class GeneralIdentifiers(BaseModel):
id: str = Field(default_factory=lambda: shortuuid.uuid())
created_at: int = Field(default_factory=lambda: time.time())
created_by: str
updated_at: int = Field(default_factory=lambda: time.time())
updated_by: str
class GraphBaseMeta(BaseModel):
display_name: str = Field(default_factory=lambda: shortuuid.uuid(), max_length=64)
version: float = Field(default=1.0)
account_id: str | None = Field(default=None)
description: str | None = Field(default=None, max_length=600)
class NodePosition(BaseModel):
x: float
y: float
class NodeParamData(BaseModel):
value: Any
show: bool = True
class Config:
extra = Extra.forbid
class Node(BaseModel):
id: str = Field(default_factory=lambda: shortuuid.uuid())
type: str
data: dict[str, NodeParamData]
position: NodePosition
class Edge(BaseModel):
id: str = Field(default_factory=lambda: shortuuid.uuid())
source: str
target: str
class GraphNodesEdges(BaseModel):
nodes: list[Node]
edges: list[Edge]
class GraphBase(GraphBaseMeta, GraphNodesEdges):
pass
class Graph(GraphBase, GeneralIdentifiers):
pass
class GraphDto(Graph):
@validator("nodes")
def validate_nodes(cls, value: list[Node]) -> list[Node]:
node_ids: set[str] = set()
for node in value:
if node.id:
if node.id in node_ids:
raise ValueError("Duplicate node ids are not allowed")
node_ids.add(node.id)
return value
write_config = DTOConfig(
exclude={
"id",
"account_id",
"created_at",
"created_by",
"updated_at",
"updated_by",
},
max_nested_depth=3,
)
read_config = DTOConfig(max_nested_depth=3)
GraphWriteDto = PydanticDTO[Annotated[GraphDto, write_config]]
GraphReadDto = PydanticDTO[Annotated[GraphDto, read_config]]
```
### Steps to reproduce
```bash
1. Run the application with the above MCVE. The dto class is under the Python package "dto", and the controller class is under the Python package "controllers". app.py located is in the root level of the project.
2. Run the following cURL (change the host/port if yours are configured differently):
curl --location 'localhost:8080/api/v1/accounts/123/graphs/create' \
--header 'Content-Type: application/json' \
--data '{
"display_name": "Test Graph",
"edges": [
{
"source": "source_test_id",
"sourceHandle": "handle_test",
"target": "target_test_id",
"targetHandle": "handle_test"
}
],
"public": true,
"nodes": [
{
"id": "source_test_id",
"base_type": "test",
"type": "test",
"position": {
"x": 10.5,
"y": 18.31231231
},
"data": {
"name": {
"show": true,
"value": "test"
}
}
},
{
"id": "target_test_id",
"base_type": "test",
"type": "test",
"position": {
"x": 15.5,
"y": 32.31231231
},
"data": {
"name": {
"show": true,
"value": "test"
}
}
}
]
}'
```
### Screenshots
```bash
""
```
### Logs
```bash
import sys; print('Python %s on %s' % (sys.version, sys.platform))
/Users/sergeyk/PycharmProjects/litestar-playground/.venv/bin/python -X pycache_prefix=/Users/sergeyk/Library/Caches/JetBrains/PyCharm2024.2/cpython-cache /Applications/PyCharm.app/Contents/plugins/python-ce/helpers/pydev/pydevd.py --module --multiprocess --qt-support=auto --client 127.0.0.1 --port 57713 --file litestar run --host 0.0.0.0 --port 8080
Connected to pydev debugger (build 242.23339.19)
INFO: Started server process [99855]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:8080 (Press CTRL+C to quit)
INFO: 127.0.0.1:57717 - "POST /api/v1/accounts/123/graphs/create HTTP/1.1" 400 Bad Request
ERROR - 2024-10-01 19:48:33,156 - litestar - config - Uncaught exception (connection_type=http, path=/api/v1/accounts/123/graphs/create):
Traceback (most recent call last):
File "/Users/sergeyk/PycharmProjects/litestar-playground/.venv/lib/python3.11/site-packages/litestar/serialization/msgspec_hooks.py", line 139, in default_deserializer
raise TypeError(f"Unsupported type: {type(value)!r}")
TypeError: Unsupported type: <class 'str'>
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/sergeyk/PycharmProjects/litestar-playground/.venv/lib/python3.11/site-packages/litestar/serialization/msgspec_hooks.py", line 209, in decode_json
return msgspec.json.decode(
^^^^^^^^^^^^^^^^^^^^
msgspec.ValidationError: Unsupported type: <class 'str'> - at `$.display_name`
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/sergeyk/PycharmProjects/litestar-playground/.venv/lib/python3.11/site-packages/litestar/routes/http.py", line 173, in _get_response_data
kwargs = await parameter_model.to_kwargs(connection=request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/sergeyk/PycharmProjects/litestar-playground/.venv/lib/python3.11/site-packages/litestar/_kwargs/kwargs_model.py", line 380, in to_kwargs
await extractor(output, connection)
File "/Users/sergeyk/PycharmProjects/litestar-playground/.venv/lib/python3.11/site-packages/litestar/_kwargs/extractors.py", line 484, in extractor
values["data"] = await data_extractor(connection)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/sergeyk/PycharmProjects/litestar-playground/.venv/lib/python3.11/site-packages/litestar/_kwargs/extractors.py", line 502, in dto_extractor
return data_dto(connection).decode_bytes(body)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/sergeyk/PycharmProjects/litestar-playground/.venv/lib/python3.11/site-packages/litestar/contrib/pydantic/pydantic_dto_factory.py", line 104, in decode_bytes
return super().decode_bytes(value)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/sergeyk/PycharmProjects/litestar-playground/.venv/lib/python3.11/site-packages/litestar/dto/base_dto.py", line 115, in decode_bytes
return backend.populate_data_from_raw(value, self.asgi_connection)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/sergeyk/PycharmProjects/litestar-playground/.venv/lib/python3.11/site-packages/litestar/dto/_codegen_backend.py", line 143, in populate_data_from_raw
data_as_builtins=self._transfer_to_dict(self.parse_raw(raw, asgi_connection)),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/sergeyk/PycharmProjects/litestar-playground/.venv/lib/python3.11/site-packages/litestar/dto/_backend.py", line 241, in parse_raw
result = decode_json(value=raw, target_type=self.annotation, type_decoders=type_decoders, strict=False)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/sergeyk/PycharmProjects/litestar-playground/.venv/lib/python3.11/site-packages/litestar/serialization/msgspec_hooks.py", line 219, in decode_json
raise SerializationException(str(msgspec_error)) from msgspec_error
litestar.exceptions.base_exceptions.SerializationException: Unsupported type: <class 'str'> - at `$.display_name`
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/sergeyk/PycharmProjects/litestar-playground/.venv/lib/python3.11/site-packages/litestar/middleware/_internal/exceptions/middleware.py", line 159, in __call__
await self.app(scope, receive, capture_response_started)
File "/Users/sergeyk/PycharmProjects/litestar-playground/.venv/lib/python3.11/site-packages/litestar/_asgi/asgi_router.py", line 100, in __call__
await asgi_app(scope, receive, send)
File "/Users/sergeyk/PycharmProjects/litestar-playground/.venv/lib/python3.11/site-packages/litestar/routes/http.py", line 80, in handle
response = await self._get_response_for_request(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/sergeyk/PycharmProjects/litestar-playground/.venv/lib/python3.11/site-packages/litestar/routes/http.py", line 132, in _get_response_for_request
return await self._call_handler_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/sergeyk/PycharmProjects/litestar-playground/.venv/lib/python3.11/site-packages/litestar/routes/http.py", line 152, in _call_handler_function
response_data, cleanup_group = await self._get_response_data(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/sergeyk/PycharmProjects/litestar-playground/.venv/lib/python3.11/site-packages/litestar/routes/http.py", line 175, in _get_response_data
raise ClientException(str(e)) from e
litestar.exceptions.http_exceptions.ClientException: 400: Unsupported type: <class 'str'> - at `$.display_name`
```
### Litestar Version
2.12.1
### Platform
- [X] Linux
- [X] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above) | open | 2024-10-02T02:56:28Z | 2025-03-20T15:54:57Z | https://github.com/litestar-org/litestar/issues/3775 | [
"Bug :bug:"
] | sergeyk93 | 0 |
xlwings/xlwings | automation | 2,234 | Exception by reading a table-object from a not selected/visible spreadsheet | **Versions:**
- OS: Windows 10 Pro 22H2 (19045.2846)
- xlwings: several versions, but the focus lies on 0.27.11, 0.27.12 and 0.30.4 (latest version)
- Excel 2013 (german version) with all available updates
- Python 3.10.8
**I get this exception (version 0.27.12):**
```python
Traceback (most recent call last):
File "D:\Python\sandbox\test12.py", line 150, in <module>
a = xw.Range("Tabelle1[#Alle]").options(pd.DataFrame).value
File "D:\Python\sandbox\venv\lib\site-packages\xlwings\main.py", line 1675, in __init__
impl = apps.active.range(cell1).impl
File "D:\Python\sandbox\venv\lib\site-packages\xlwings\main.py", line 514, in range
return self.books.active.sheets.active.range(cell1, cell2)
File "D:\Python\sandbox\venv\lib\site-packages\xlwings\main.py", line 1323, in range
return Range(impl=self.impl.range(cell1, cell2))
File "D:\Python\sandbox\venv\lib\site-packages\xlwings\_xlwindows.py", line 948, in range
xl1 = self.xl.Range(arg1)
File "D:\Python\sandbox\venv\lib\site-packages\xlwings\_xlwindows.py", line 104, in __call__
v = self.__method(*args, **kwargs)
File "C:\Users\marti\AppData\Local\Temp\gen_py\3.10\00020813-0000-0000-C000-000000000046x0x1x8.py", line 44946, in Range
ret = self._oleobj_.InvokeTypes(197, LCID, 2, (9, 0), ((12, 1), (12, 17)),Cell1
pywintypes.com_error: (-2147352567, 'Ausnahmefehler aufgetreten.', (0, None, None, None, 0, -2146827284), None)
```
**I get this exception (version 0.30.4 (latest)):**
```python
Traceback (most recent call last):
File "D:\Python\sandbox\test12.py", line 150, in <module>
a = xw.Range("Tabelle1[#Alle]")
File "D:\Python\sandbox\venv\lib\site-packages\xlwings\main.py", line 1806, in __init__
impl = apps.active.range(cell1).impl
File "D:\Python\sandbox\venv\lib\site-packages\xlwings\main.py", line 554, in range
return self.books.active.sheets.active.range(cell1, cell2)
File "D:\Python\sandbox\venv\lib\site-packages\xlwings\main.py", line 1438, in range
return Range(impl=self.impl.range(cell1, cell2))
File "D:\Python\sandbox\venv\lib\site-packages\xlwings\_xlwindows.py", line 1022, in range
xl1 = self.xl.Range(arg1)
File "D:\Python\sandbox\venv\lib\site-packages\xlwings\_xlwindows.py", line 121, in __call__
v = self.__method(*args, **kwargs)
File "C:\Users\marti\AppData\Local\Temp\gen_py\3.10\00020813-0000-0000-C000-000000000046x0x1x8.py", line 44946, in Range
ret = self._oleobj_.InvokeTypes(197, LCID, 2, (9, 0), ((12, 1), (12, 17)),Cell1
pywintypes.com_error: (-2147352567, 'Ausnahmefehler aufgetreten.', (0, None, None, None, 0, -2146827284), None)
```
**My code (example):**
```python
import xlwings as xw
wb = xw.Book("test.xlsx")
a = xw.Range("Tabelle1[#Alle]")
```
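A workaround that appears to sidestep the active-sheet dependency (a hedged sketch; the sheet name is an assumption): resolve the range through the sheet that owns the table instead of the unqualified `xw.Range`:

```python
import xlwings as xw

wb = xw.Book("test.xlsx")
sht = wb.sheets["SheetWithTable"]  # assumed name of the sheet holding the table

# Address the table through its own sheet rather than the active sheet.
a = sht.range("Tabelle1[#Alle]").value
# Alternatively, via the tables collection:
b = sht.tables["Tabelle1"].range.value
```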
**Steps to reproduce this error**
1. You need an Excel file with at least two spreadsheets
2. Add a table-object on one of the spreadsheets
3. Select a different spreadsheet
4. Save the file
5. Run the script
**Which analyses have already been carried out?**
This exception occurs when a spreadsheet is selected (visible) that does not contain the desired table-object.
The problem has existed since version 0.27.12.
Tested with:
- 0.27.13
- 0.27.14
- 0.27.15
- 0.28.0
- 0.29.0
- 0.30.0
- 0.30.4 (latest)
Up to version 0.27.11 it doesn't matter which worksheet is selected. You can even access table-objects that reside on hidden spreadsheets. | closed | 2023-04-19T21:23:21Z | 2023-04-20T20:50:42Z | https://github.com/xlwings/xlwings/issues/2234 | [] | TenchiMuyo1984 | 6 |
cvat-ai/cvat | computer-vision | 8,764 | Can access CVAT over LAN but not Internet | Hi, I did everything in https://github.com/cvat-ai/cvat/issues/1095, but http://localhost:7070/auth/login does not log in and shows the message "Could not check authentication on the server. Open the Browser Console to get details", while http://localhost:8080/auth/login has no problems. Over the WAN (http://myip:7060/) I get the same behavior. Can you help me?
| closed | 2024-12-02T12:04:18Z | 2024-12-10T17:54:18Z | https://github.com/cvat-ai/cvat/issues/8764 | [
"need info"
] | alirezajafarishahedan | 1 |
dmlc/gluon-cv | computer-vision | 1,559 | Windows Numpy Version Issue on Install | On the install of gluoncv via pip on Windows you get a numpy related error as described [here](https://developercommunity.visualstudio.com/content/problem/1207405/fmod-after-an-update-to-windows-2004-is-causing-a.html) and [here](https://stackoverflow.com/questions/64654805/how-do-you-fix-runtimeerror-package-fails-to-pass-a-sanity-check-for-numpy-an).
The recommended fix is downgrading to numpy version 1.19.3 or earlier while Windows addresses the issue in January 2021. Despite installing numpy 1.19.3 prior to installing gluoncv it still errors, creating a numpy install in `Temp`, which I am assuming is 1.19.4:
`AppData\Local\Temp\pip-build-env-1h8cmr9n\overlay\Lib\site-packages\numpy\__init__.py`
Are there any ways to get around this issue? | closed | 2020-12-05T07:29:16Z | 2021-05-22T06:40:16Z | https://github.com/dmlc/gluon-cv/issues/1559 | [
"Stale"
] | HaydenFaulkner | 1 |
axnsan12/drf-yasg | rest-api | 439 | Write out multiline strings as yaml block scalars? | Is it possible to write out multi-line descriptions (and other strings) as YAML block scalars? The resulting file would be more readable.
For example, instead of:
```
paths:
/api/one:
post:
operationId: api_one
summary: Inserts a batch of completions.
description: "REST Endpoint Format:\n{\n \"username\": \"username\",\n \"\
course_key\": \"course-key\",\n \"blocks\": {\n \"block_key1\": 0.0,\n\
\ \"block_key2\": 1.0,\n \"block_key3\": 1.0,\n }\n}\n\n**Returns**\n\
\nA Response object, with an appropriate status code.\n\nIf successful, status\
\ code is 200.\n{\n \"detail\" : _(\"ok\")\n}\n\nOtherwise, a 400 or 404\
\ may be returned, and the \"detail\" content will explain the error."
parameters: []
responses:
'201':
description: ''
tags:
- api
parameters: []
```
write out the description like this:
```
paths:
/api/one:
post:
operationId: api_one
summary: Inserts a batch of completions.
description: |
REST Endpoint Format:
{
"username": "username",
"course_key": "course-key",
"blocks": {
"block_key1": 0.0,
"block_key2": 1.0,
"block_key3": 1.0,
}
}
**Returns**
A Response object, with an appropriate status code.
If successful, status code is 200.
{
"detail" : _("ok")
}
Otherwise, a 400 or 404 may be returned, and the "detail" content will explain the error.
parameters: []
responses:
'201':
description: ''
tags:
- api
parameters: []
```
The data is the same other than a trailing newline on the description. | closed | 2019-08-12T18:29:37Z | 2019-10-02T23:06:07Z | https://github.com/axnsan12/drf-yasg/issues/439 | [] | nedbat | 3 |
tensorpack/tensorpack | tensorflow | 1,397 | Why multiply 49 here? | https://github.com/tensorpack/tensorpack/blob/b28cfa8a50b732651b3711d09afd0818083861b2/examples/DoReFa-Net/resnet-dorefa.py#L107
What will happen if without this line? | closed | 2020-02-19T12:57:16Z | 2020-02-19T18:24:39Z | https://github.com/tensorpack/tensorpack/issues/1397 | [
"duplicate"
] | iamhankai | 1 |
mljar/mercury | data-visualization | 260 | Error displaying widget in databricks |

| closed | 2023-04-27T07:43:10Z | 2023-04-28T06:38:58Z | https://github.com/mljar/mercury/issues/260 | [] | Prashantramappa | 1 |
ymcui/Chinese-LLaMA-Alpaca-2 | nlp | 309 | Dataset loading hangs | ### Pre-submission checklist
- [X] Please make sure you are using the latest code from the repository (git pull); some issues have already been resolved and fixed.
- [X] I have read the [project documentation](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/wiki) and the [FAQ section](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/wiki/常见问题), and I have searched the existing issues without finding a similar problem or solution.
- [X] Third-party plugin problems: e.g. [llama.cpp](https://github.com/ggerganov/llama.cpp), [LangChain](https://github.com/hwchase17/langchain), [text-generation-webui](https://github.com/oobabooga/text-generation-webui), etc.; it is recommended to look for solutions in the corresponding projects as well.
### Issue type
Model training and fine-tuning
### Base model
Others
### Operating system
Linux
### Detailed description
```
lr=2e-4
lora_rank=64
lora_alpha=128
lora_trainable="q_proj,v_proj,k_proj,o_proj,gate_proj,down_proj,up_proj"
modules_to_save="embed_tokens,lm_head"
lora_dropout=0.05
pretrained_model=/home/work/models
chinese_tokenizer_path=/home/work/models
dataset_dir=/home/work/dataset/dianping
data_cache=/home/work/dataset/dianping/cache
per_device_train_batch_size=1
gradient_accumulation_steps=8
block_size=512
output_dir=/home/liweijie/output
deepspeed_config_file=ds_zero2_no_offload.json

torchrun --nnodes 1 --nproc_per_node 1 run_clm_pt_with_peft.py \
    --deepspeed ${deepspeed_config_file} \
    --model_name_or_path ${pretrained_model} \
    --tokenizer_name_or_path ${chinese_tokenizer_path} \
    --dataset_dir ${dataset_dir} \
    --data_cache_dir ${data_cache} \
    --validation_split_percentage 0.001 \
    --per_device_train_batch_size ${per_device_train_batch_size} \
    --do_train \
    --seed 87464 \
    --fp16 \
    --num_train_epochs 1 \
    --lr_scheduler_type cosine \
    --learning_rate ${lr} \
    --warmup_ratio 0.05 \
    --weight_decay 0.01 \
    --logging_strategy steps \
    --logging_steps 10 \
    --save_strategy steps \
    --save_total_limit 3 \
    --save_steps 200 \
    --gradient_accumulation_steps ${gradient_accumulation_steps} \
    --preprocessing_num_workers 8 \
    --block_size ${block_size} \
    --output_dir ${output_dir} \
    --overwrite_output_dir \
    --ddp_timeout 30000 \
    --logging_first_step True \
    --lora_rank ${lora_rank} \
    --lora_alpha ${lora_alpha} \
    --trainable ${lora_trainable} \
    --lora_dropout ${lora_dropout} \
    --modules_to_save ${modules_to_save} \
    --torch_dtype float16 \
    --load_in_kbits 16 \
    --gradient_checkpointing \
    --ddp_find_unused_parameters False
```
After launching, training gets stuck at the dataset-loading step. The dataset is a single txt file, and the GPU is an RTX 4090 with 24 GB of VRAM. The last line of output is:
```
09/22/2023 20:25:55 - INFO - datasets.info - Loading Dataset Infos from /environment/miniconda3/lib/python3.10/site-packages/datasets/packaged_modules/text
```
### Dependencies (required for code-related issues)
```
(base) ➜  training git:(main) ✗ pip list | grep -E 'transformers|peft|torch|sentencepiece|bitsandbytes'
bitsandbytes              0.41.0
peft                      0.5.0
sentencepiece             0.1.97
torch                     2.0.1+cu118
torchaudio                2.0.2+cu118
torchvision               0.15.2+cu118
transformers              4.31.0
```
### Logs or screenshots
```
[INFO|tokenization_utils_base.py:1837] 2023-09-22 20:25:53,865 >> loading file tokenizer.model
[INFO|tokenization_utils_base.py:1837] 2023-09-22 20:25:53,865 >> loading file added_tokens.json
[INFO|tokenization_utils_base.py:1837] 2023-09-22 20:25:53,865 >> loading file special_tokens_map.json
[INFO|tokenization_utils_base.py:1837] 2023-09-22 20:25:53,865 >> loading file tokenizer_config.json
Using custom data configuration default-2addd6d49e256145
09/22/2023 20:25:55 - INFO - datasets.builder - Using custom data configuration default-2addd6d49e256145
Loading Dataset Infos from /environment/miniconda3/lib/python3.10/site-packages/datasets/packaged_modules/text
09/22/2023 20:25:55 - INFO - datasets.info - Loading Dataset Infos from /environment/miniconda3/lib/python3.10/site-packages/datasets/packaged_modules/text
```
 | closed | 2023-09-22T12:39:30Z | 2023-11-07T06:45:04Z | https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/issues/309 | [
"stale"
] | airmanisvip | 4 |
noirbizarre/flask-restplus | api | 453 | host and additional content type not exported to swagger file | I ran the following scripts:
`def get(self):`
`from restfulplus import api`
`data = api.__schema__`
It didn't extract host and the additional request content-type. Do you know which settings I should use to let restplus automatically extract them?
<img width="604" alt="screenshot" src="https://user-images.githubusercontent.com/39745132/40737752-aa3c11fa-640f-11e8-848c-1911b0e13f6c.png">
| closed | 2018-05-30T17:46:16Z | 2018-05-31T18:15:15Z | https://github.com/noirbizarre/flask-restplus/issues/453 | [] | manbeing | 1 |
lanpa/tensorboardX | numpy | 127 | Need support for pytorch 0.4.0 | closed | 2018-04-25T06:17:05Z | 2018-05-08T17:17:11Z | https://github.com/lanpa/tensorboardX/issues/127 | [] | yaodi833 | 6 |
DistrictDataLabs/yellowbrick | matplotlib | 703 | Update fit to fit_transform in JointPlot plot in walkthrough docs | OS:Linux18.04LTS
Python=3.6.8 Anaconda Inc.
YellowBrick=0.9
from yellowbrick.features import JointPlotVisualizer
visualizer = JointPlotVisualizer(feature='temp', target='feelslike')
visualizer.fit(X['temp'], X['feelslike'])
visualizer.poof()
It compiles properly but no plots appear as it appeared in Rank2D.Shall I use as_matrix ?
| closed | 2019-01-29T12:27:44Z | 2019-01-31T16:34:51Z | https://github.com/DistrictDataLabs/yellowbrick/issues/703 | [
"type: documentation"
] | dnabanita7 | 2 |