repo_name (string, 9-75 chars) | topic (string, 30 classes) | issue_number (int64, 1-203k) | title (string, 1-976 chars) | body (string, 0-254k chars) | state (string, 2 classes) | created_at (string, 20 chars) | updated_at (string, 20 chars) | url (string, 38-105 chars) | labels (sequence, 0-9 items) | user_login (string, 1-39 chars) | comments_count (int64, 0-452) |
---|---|---|---|---|---|---|---|---|---|---|---|
serengil/deepface | deep-learning | 740 | seems to be failing with latest python | (faceenv) ankit@ankit-System-Product-Name:~$ conda install -c conda-forge deepface
Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: \
Found conflicts! Looking for incompatible packages.
This can take several minutes. Press CTRL-C to abort.
failed
UnsatisfiableError: The following specifications were found
to be incompatible with the existing python installation in your environment:
Specifications:
- deepface -> python[version='>=3.10,<3.11.0a0|>=3.8,<3.9.0a0|>=3.9,<3.10.0a0|>=3.7,<3.8.0a0']
Your python: python=3.11
If python is on the left-most side of the chain, that's the version you've asked for.
When python appears to the right, that indicates that the thing on the left is somehow
not available for the python version you are constrained to. Note that conda will not
change your python version to a different minor version unless you explicitly specify
that.
The following specifications were found to be incompatible with your system:
- feature:/linux-64::__cuda==12.1=0
- feature:/linux-64::__glibc==2.35=0
- feature:|@/linux-64::__glibc==2.35=0
- deepface -> tensorflow[version='>=1.9.0'] -> __cuda
- deepface -> tensorflow[version='>=1.9.0'] -> __glibc[version='>=2.17']
- python=3.11 -> libgcc-ng[version='>=11.2.0'] -> __glibc[version='>=2.17']
Your installed version is: 2.35
(faceenv) ankit@ankit-System-Product-Name:~$
| closed | 2023-05-01T13:40:52Z | 2023-05-01T14:17:11Z | https://github.com/serengil/deepface/issues/740 | [
"dependencies"
] | ankit-g | 1 |
jadore801120/attention-is-all-you-need-pytorch | nlp | 100 | Compare results between this one and the Tensorflow one | Can you give some comparative experiment results between this one and the TensorFlow one? Do these two perform similarly? | open | 2019-04-01T08:29:28Z | 2019-05-22T08:58:48Z | https://github.com/jadore801120/attention-is-all-you-need-pytorch/issues/100 | [] | Frozenmad | 1 |
ploomber/ploomber | jupyter | 611 | add links to cookbook/file-client in relevant sections in the docs | We recently added an example of using File clients; we should update the docs to link to the example wherever relevant: https://github.com/ploomber/projects/tree/master/cookbook/file-client | open | 2022-02-21T16:17:49Z | 2023-03-20T20:40:33Z | https://github.com/ploomber/ploomber/issues/611 | [
"documentation",
"good first issue"
] | edublancas | 0 |
dynaconf/dynaconf | fastapi | 768 | [RFC] Resolve deprecation warning for deprecated property kv | **Is your feature request related to a problem? Please describe.**
Yes. We are currently hitting a deprecation warning in hvac 0.11, since the `kv` property is deprecated and we are advised to use it from `Client.secrets` instead.
The warning:
DeprecationWarning: Call to deprecated property 'kv'. This property will be removed in version '0.9.0' Please use the 'kv' property on the 'Client.secrets' attribute moving forward
**Describe the solution you'd like**
Remove the direct usage of the `kv` property in dynaconf and use it from `Client.secrets`.
**Describe alternatives you've considered**
No alternative is required.
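For illustration, a minimal sketch of the access path this RFC points to. This is a hedged example: the Vault URL, token, and secret path below are placeholders, not values from dynaconf's code base.
```python
import hvac

client = hvac.Client(url="https://vault.example.com:8200", token="s.xxxxxxxx")

# Go through the `secrets` attribute instead of the deprecated top-level `kv`
# property that triggers the warning quoted above (KV v2 engine shown here).
kv = client.secrets.kv
secret = kv.v2.read_secret_version(path="myapp/config")
print(secret["data"]["data"])
```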
| closed | 2022-07-15T09:11:08Z | 2022-07-16T19:03:29Z | https://github.com/dynaconf/dynaconf/issues/768 | [
"Not a Bug",
"RFC"
] | jyejare | 0 |
ageitgey/face_recognition | machine-learning | 1,325 | ModuleNotFoundError: No module named 'sklearn.neighbors.base' | This is the error I am getting. I have tried installing missingpy and scikit-learn and it has not worked.
I am using Anaconda - Jupyter Notebook and have tried conda update all. Nothing fixes the issue. Any suggestions?
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-1-8a1fde735eb6> in <module>
12 from sklearn.metrics import confusion_matrix
13 from sklearn.model_selection import cross_val_score
---> 14 from missingpy import MissForest
C:\Anaconda\lib\site-packages\missingpy\__init__.py in <module>
----> 1 from .knnimpute import KNNImputer
2 from .missforest import MissForest
3
4 __all__ = ['KNNImputer', 'MissForest']
C:\Anaconda\lib\site-packages\missingpy\knnimpute.py in <module>
11 from sklearn.utils.validation import check_is_fitted
12 from sklearn.utils.validation import FLOAT_DTYPES
---> 13 from sklearn.neighbors.base import _check_weights
14 from sklearn.neighbors.base import _get_weights
15
ModuleNotFoundError: No module named 'sklearn.neighbors.base'
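A frequently suggested workaround for this import error, hedged as an assumption about this environment rather than a confirmed fix: scikit-learn 0.22+ renamed `sklearn.neighbors.base` to `sklearn.neighbors._base`, so aliasing the old name before importing missingpy lets the import resolve.
```python
import sys
import sklearn.neighbors._base

# Register the old module path that missingpy still imports from.
sys.modules["sklearn.neighbors.base"] = sklearn.neighbors._base

from missingpy import MissForest  # now imports without the ModuleNotFoundError
```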
| open | 2021-06-13T02:49:47Z | 2021-06-13T02:49:47Z | https://github.com/ageitgey/face_recognition/issues/1325 | [] | AVAC555 | 0 |
huggingface/datasets | numpy | 6,760 | Load codeparrot/apps raising UnicodeDecodeError in datasets-2.18.0 | ### Describe the bug
This happens with datasets 2.18.0; I downgraded to 2.14.6, which fixes it temporarily.
```
Traceback (most recent call last):
File "/home/xxx/miniconda3/envs/py310/lib/python3.10/site-packages/datasets/load.py", line 2556, in load_dataset
builder_instance = load_dataset_builder(
File "/home/xxx/miniconda3/envs/py310/lib/python3.10/site-packages/datasets/load.py", line 2228, in load_dataset_builder
dataset_module = dataset_module_factory(
File "/home/xxx/miniconda3/envs/py310/lib/python3.10/site-packages/datasets/load.py", line 1879, in dataset_module_factory
raise e1 from None
File "/home/xxx/miniconda3/envs/py310/lib/python3.10/site-packages/datasets/load.py", line 1831, in dataset_module_factory
can_load_config_from_parquet_export = "DEFAULT_CONFIG_NAME" not in f.read()
File "/home/xxx/miniconda3/envs/py310/lib/python3.10/codecs.py", line 322, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte
```
### Steps to reproduce the bug
1. Using Python3.10/3.11
2. Install datasets-2.18.0
3. test with
```
from datasets import load_dataset
dataset = load_dataset("codeparrot/apps")
```
### Expected behavior
Normally it should download and load the dataset without such an error.
### Environment info
Ubuntu, Python3.10/3.11 | open | 2024-03-28T03:44:26Z | 2024-06-19T07:06:40Z | https://github.com/huggingface/datasets/issues/6760 | [] | yucc-leon | 4 |
RobertCraigie/prisma-client-py | asyncio | 791 | Support pydantic >= 2 | ## Problem
I've tried using prisma-client-py with `fastapi = { version = "^0.100.0"}`, but they are incompatible because that FastAPI version requires Pydantic 2.
## Suggested Solution
Support for Pydantic v2 in prisma-client-py would be ideal, ensuring compatibility with the latest FastAPI server.
## Alternatives
As of now, I've had to forego Prisma and instead use `psycopg` for my database queries.
## Additional context
```
Because no versions of prisma match >0.9.1,<0.10.0
and prisma (0.9.1) depends on pydantic (>=1.8.0,<2.0.0), prisma (>=0.9.1,<0.10.0) requires pydantic (>=1.8.0,<2.0.0).
Because pydantic-extra-types (2.0.0) depends on pydantic (>=2.0b3)
and no versions of pydantic-extra-types match >2.0.0, pydantic-extra-types (>=2.0.0) requires pydantic (>=2.0b3).
Thus, prisma (>=0.9.1,<0.10.0) is incompatible with pydantic-extra-types (>=2.0.0).
And because fastapi (0.100.0) depends on pydantic-extra-types (>=2.0.0)
and no versions of fastapi match >0.100.0,<0.101.0, prisma (>=0.9.1,<0.10.0) is incompatible with fastapi (>=0.100.0,<0.101.0).
```
| closed | 2023-07-19T22:58:15Z | 2023-08-30T14:45:48Z | https://github.com/RobertCraigie/prisma-client-py/issues/791 | [] | glesperance | 9 |
QuivrHQ/quivr | api | 3,207 | Check for circular references | Adding knowledge with parent_id should check for circular reference | closed | 2024-09-16T08:36:53Z | 2024-12-20T12:09:18Z | https://github.com/QuivrHQ/quivr/issues/3207 | [
"enhancement",
"Stale"
] | linear[bot] | 2 |
PaddlePaddle/models | computer-vision | 5,344 | PointNet++ ext_op operator compilation error | Building the source inside Docker:
Base image: registry.baidubce.com/paddlepaddle/paddle:2.1.2-gpu-cuda11.2-cudnn8
The Paddle source build succeeds, but compiling the custom operators under ext_op/src fails.
Related error output:
> paddle/fluid/platform/complex.h(115): error: explicit type is missing ("int" assumed)
>
>paddle/fluid/platform/complex.h(115): error: qualified name is not allowed
>
>paddle/fluid/platform/complex.h(115): error: expected a ")"
>
>paddle/fluid/platform/complex.h(123): error: explicit type is missing ("int" assumed)
>
>paddle/fluid/platform/complex.h(123): error: qualified name is not allowed
>
>paddle/fluid/platform/complex.h(123): error: expected a ")"
>
>paddle/fluid/platform/complex.h(122): error: invalid redeclaration of member function template >"paddle::platform::complex<T>::complex(int)"
>(114): here | open | 2021-09-16T01:28:38Z | 2024-02-26T05:08:44Z | https://github.com/PaddlePaddle/models/issues/5344 | [] | zjuncd | 1 |
davidsandberg/facenet | tensorflow | 680 | Error while running ./contributed/clustering.py | I wanted to cluster my images. It worked using ./contributed/cluster.py, but while doing the same with the Chinese Whispers algorithm (./contributed/clustering.py), I got the following error:
Traceback (most recent call last): [5/440]
File "./contributed/clustering.py", line 268, in <module>
main(parse_args())
File "./contributed/clustering.py", line 242, in main
sorted_clusters = cluster_facial_encodings(facial_encodings)
File "./contributed/clustering.py", line 150, in cluster_facial_encodings
sorted_clusters = _chinese_whispers(facial_encodings.items())
File "./contributed/clustering.py", line 89, in _chinese_whispers
shuffle(cluster_nodes)
File "/usr/lib/python2.7/random.py", line 293, in shuffle
x[i], x[j] = x[j], x[i]
TypeError: 'NodeView' object does not support item assignment
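A hedged editorial note, not a reply from the thread: in networkx 2.x, `Graph.nodes()` returns an immutable NodeView, which `random.shuffle` cannot reorder in place; materialising it into a list first is the usual adjustment. A self-contained illustration (the toy graph below stands in for the one built in clustering.py):
```python
import random
import networkx as nx

G = nx.Graph()
G.add_edges_from([(1, 2), (2, 3)])

cluster_nodes = list(G.nodes())  # NodeView -> mutable list, so shuffle can assign items
random.shuffle(cluster_nodes)
```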
The error seems to occur when the shuffle function is called. | closed | 2018-04-02T15:17:34Z | 2018-04-07T15:20:26Z | https://github.com/davidsandberg/facenet/issues/680 | [] | buddhashrestha | 1 |
openapi-generators/openapi-python-client | fastapi | 698 | Optional datetime fields cannot be parsed and raise an exception | **Describe the bug**
I'm using FastAPI to build an OpenAPI restful service.
I'm defining a Pydantic schema for the response with an **optional datetime field**:
```
class ApprovalRequestOut(BaseModel):
    id: int
    metadata: Dict
    created_on: datetime
    status: ApprovalStatus
    approval_subject_id: int
    approver: Optional[str]
    requestor: Optional[str]
    updated_on: Optional[datetime]
    action_reason: Optional[str]
```
This is the generated OpenAPI schema
```
"ApprovalRequestOut": {
"title": "ApprovalRequestOut",
"required": [
"id",
"metadata",
"created_on",
"status",
"approval_subject_id"
],
"type": "object",
"properties": {
"id": {
"title": "Id",
"type": "integer"
},
"metadata": {
"title": "Metadata",
"type": "object"
},
"created_on": {
"title": "Created On",
"type": "string",
"format": "date-time"
},
"status": {
"$ref": "#/components/schemas/ApprovalStatus"
},
"approval_subject_id": {
"title": "Approval Subject Id",
"type": "integer"
},
"approver": {
"title": "Approver",
"type": "string"
},
"requestor": {
"title": "Requestor",
"type": "string"
},
"updated_on": {
"title": "Updated On",
"type": "string",
"format": "date-time"
},
"action_reason": {
"title": "Action Reason",
"type": "string"
}
}
},
```
As you can see, **updated_on** is optional and is returned as None (null) when there is no value.
The generated client does not parse the response properly when the returned **updated_on** field is None (null).
This is the generated code:
```
_updated_on = d.pop("updated_on", UNSET)
updated_on: Union[Unset, datetime.datetime]
if isinstance(_updated_on, Unset):
    updated_on = UNSET
else:
    updated_on = isoparse(_updated_on)
```
This is a simple fix of the above code:
```
_updated_on = d.pop("updated_on", UNSET)
updated_on: Union[Unset, datetime.datetime]
if isinstance(_updated_on, Unset):
    updated_on = UNSET
else:
    updated_on = isoparse(_updated_on) if _updated_on else None
```
**Expected behavior**
The generator should check for optional (nullable) datetime fields and not parse them if they are None.
**OpenAPI Spec File**
You can find details for the OpenAPI schema object above.
**Desktop (please complete the following information):**
- OS: Linux - Ubuntu 22.04
- Python Version: Python 3.9.15 (main, Oct 12 2022, 19:14:37)
- openapi-python-client version 0.11.6
| closed | 2022-11-11T14:15:15Z | 2023-01-17T15:26:28Z | https://github.com/openapi-generators/openapi-python-client/issues/698 | [
"🐞bug"
] | varadinov | 3 |
Sanster/IOPaint | pytorch | 323 | Where is the one-click installer file? | Hello, to support you I bought the one-click installer, but then nothing happened. Where is the link to download what I bought? | closed | 2023-06-08T10:17:58Z | 2023-08-30T06:02:38Z | https://github.com/Sanster/IOPaint/issues/323 | [] | wangxiugang666 | 1 |
howie6879/owllook | asyncio | 41 | Suggestion: index some literature sites | I suggest indexing some literature sites (I once searched for 《活着》 ("To Live"), but opening it showed an Internal Server Error) to broaden the user base. | closed | 2018-08-21T12:29:24Z | 2018-12-23T05:16:38Z | https://github.com/howie6879/owllook/issues/41 | [] | pl1612127 | 1 |
Evil0ctal/Douyin_TikTok_Download_API | fastapi | 381 | fetch_one_video returns a 400 error | ***Platform where the error occurred?***
e.g.: Douyin
***Endpoint where the error occurred?***
Docker Hub version: 4fa7ac0
/api/douyin/web/fetch_one_video
***Submitted input values?***
aweme_id=7356676215429811490
***Did you retry?***
Yes
***Have you read this project's README or the API documentation?***
Yes. fetch_user_post_videos returns normally, but fetch_one_video errors out after three retries; the log is as follows:
WARNING Response body empty on attempt 1, status code: 200,
URL:https://www.douyin.com/aweme/v1/web/aweme/detail/?device_platform=w
ebapp&aid=6383&channel=channel_pc_web&pc_client_type=1&version_code=190
500&version_name=19.5.0&cookie_enabled=true&screen_width=1920&screen_he
ight=1080&browser_language=zh-CN&browser_platform=Win32&browser_name=Fi
refox&browser_version=124.0&browser_online=true&engine_name=Gecko&engin
e_version=122.0.0.0&os_name=Windows&os_version=10&cpu_core_num=12&devic
e_memory=8&platform=PC&msToken=8z7bStncQKiXWIZmmqgDkMUiUNwIL7nn7VSH5Lhe
Z1_sbXY_uOLyR10hP1y8hoI0nPh_8iLi6zIbtuq1kPttblM8V_im6hgQqlGx47jpMi_gvfM
3J851scPxcva60A==&aweme_id=7356676215429811490&X-Bogus=DFSzswVYQxbANGVR
tRZvwl9WX7j8
WARNING Response body empty on attempt 2, status code: 200,
URL:https://www.douyin.com/aweme/v1/web/aweme/detail/?device_platform=w
ebapp&aid=6383&channel=channel_pc_web&pc_client_type=1&version_code=190
500&version_name=19.5.0&cookie_enabled=true&screen_width=1920&screen_he
ight=1080&browser_language=zh-CN&browser_platform=Win32&browser_name=Fi
refox&browser_version=124.0&browser_online=true&engine_name=Gecko&engin
e_version=122.0.0.0&os_name=Windows&os_version=10&cpu_core_num=12&devic
e_memory=8&platform=PC&msToken=8z7bStncQKiXWIZmmqgDkMUiUNwIL7nn7VSH5Lhe
Z1_sbXY_uOLyR10hP1y8hoI0nPh_8iLi6zIbtuq1kPttblM8V_im6hgQqlGx47jpMi_gvfM
3J851scPxcva60A==&aweme_id=7356676215429811490&X-Bogus=DFSzswVYQxbANGVR
tRZvwl9WX7j8
WARNING Response body empty on attempt 3, status code: 200,
URL:https://www.douyin.com/aweme/v1/web/aweme/detail/?device_platform=w
ebapp&aid=6383&channel=channel_pc_web&pc_client_type=1&version_code=190
500&version_name=19.5.0&cookie_enabled=true&screen_width=1920&screen_he
ight=1080&browser_language=zh-CN&browser_platform=Win32&browser_name=Fi
refox&browser_version=124.0&browser_online=true&engine_name=Gecko&engin
e_version=122.0.0.0&os_name=Windows&os_version=10&cpu_core_num=12&devic
e_memory=8&platform=PC&msToken=8z7bStncQKiXWIZmmqgDkMUiUNwIL7nn7VSH5Lhe
Z1_sbXY_uOLyR10hP1y8hoI0nPh_8iLi6zIbtuq1kPttblM8V_im6hgQqlGx47jpMi_gvfM
3J851scPxcva60A==&aweme_id=7356676215429811490&X-Bogus=DFSzswVYQxbANGVR
tRZvwl9WX7j8
The program raised an exception; please check the error message.
ERROR Invalid response type. Response type: <class 'NoneType'>
The program raised an exception; please check the error message.
INFO: 10.0.0.2:11321 - "GET /api/douyin/web/fetch_one_video?aweme_id=7356676215429811490 HTTP/1.1" 400 Bad Request
| closed | 2024-05-03T16:50:09Z | 2024-05-04T06:03:54Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/381 | [
"BUG"
] | xxccll | 0 |
deepinsight/insightface | pytorch | 2,407 | gaze estimation when eyes are partially/fully covered | I have been playing around with different gaze estimation algo and 1 challenge i am having is knowing when eyes are closed/covered. I have tried using eyes keypoints to calculate the percentage of eyes closing but it is not very accurate as different people have different eye sizes. I wonder if the gaze algo provided can already detect when the pupil is half/fully covered. | open | 2023-08-19T15:22:25Z | 2023-08-19T15:22:25Z | https://github.com/deepinsight/insightface/issues/2407 | [] | HeChengHui | 0 |
explosion/spaCy | data-science | 13,330 | Custom component to split coordinations | **Description**
Build a custom component (a minimal skeleton is sketched after this list) to:
1. identify coordinations in a document
2. split the coordinations
3. return a new `Doc` object with the split coordinations
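A minimal skeleton of such a component, purely for illustration; the component name and the placeholder logic are assumptions, since the actual splitting strategy is the open task here.
```python
import spacy
from spacy.language import Language
from spacy.tokens import Doc

@Language.component("split_coordinations")
def split_coordinations(doc: Doc) -> Doc:
    # 1. identify coordinations, e.g. tokens attached with the "conj" dependency
    conjuncts = [tok for tok in doc if tok.dep_ == "conj"]
    # 2./3. build and return a new Doc with the split coordinations;
    # left unimplemented here, so the doc passes through unchanged.
    return doc

nlp = spacy.load("en_core_web_sm")
nlp.add_pipe("split_coordinations", last=True)
```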
| open | 2024-02-15T12:53:35Z | 2024-02-16T11:53:01Z | https://github.com/explosion/spaCy/issues/13330 | [
"enhancement",
"feat / pipeline"
] | india-kerle | 0 |
pydantic/FastUI | pydantic | 343 | Return RedirectResponse or jinja2Templates' TemplateResponse from an endpoint? | I need to redirect the page to some template HTML pages from within a FastUI endpoint, for example to show the home template page when the home link is clicked. So I tried:
```
@router.get('/home', response_model=FastUI, response_model_exclude_none=True)
def home(request: Request) -> list[AnyComponent]:
    return templates.TemplateResponse(
        name="home.html",
        context={"request": request}
    )
```
but then I will get
```
Request Error
Response not valid JSON
```
I am sure there must be a way to do it; I just didn't do it correctly. I would appreciate any help! | open | 2024-08-14T07:09:36Z | 2024-09-07T13:01:00Z | https://github.com/pydantic/FastUI/issues/343 | [] | fmrib00 | 1 |
charlesq34/pointnet | tensorflow | 253 | Sem_seg running out of RAM when training on Colab | I forked the repo and tried to run it in Google Colab.
I made some revisions to the formatting and the data-fetching code.
My current version of Colab notebook is at https://colab.research.google.com/drive/1UxYltzePhc3MJzZhuBycBRR3ettQc8gL#revisionId=0B497FfEid-K5ZFNML3Y3SlBoZDZlT2tiUVBiUzkzU2VPaGxJPQ.
I was able to download the files. **However, when I started training the sem_seg data it runs out of RAM.**
When I have all batches and data loaded it has already consumed over 60% of available RAM, and when the session is initialized it takes up all the memory prior to starting the training. | open | 2020-07-12T20:30:39Z | 2021-06-24T14:45:54Z | https://github.com/charlesq34/pointnet/issues/253 | [] | ytxmobile98 | 1 |
ydataai/ydata-profiling | pandas | 1,241 | Bug Report | ### Current Behaviour
[This is how the High correlation in the Titanic dataset from Kaggle appeared previously.docx](https://github.com/ydataai/pandas-profiling/files/10486212/This.is.how.the.High.correlation.in.the.Titanic.dataset.from.Kaggle.appeared.previously.docx)
### Expected Behaviour
The only problem is that the correlations showed both sides of each pair, which is not a good idea. If A is correlated to B, then surely B is correlated to A and there is no need for it to appear again.
But besides this, some correct correlations are no longer appearing. The threshold may have changed.
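To make the symmetry point concrete, a generic pandas sketch (this is not pandas-profiling's internal code; the CSV path is illustrative):
```python
import numpy as np
import pandas as pd

df = pd.read_csv("train.csv")                    # Titanic data from Kaggle
corr = df.select_dtypes("number").corr()
mask = np.triu(np.ones(corr.shape, dtype=bool))  # upper triangle incl. diagonal
one_sided = corr.mask(mask)                      # keep A~B once; drop B~A and the diagonal
```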
### Data Description
Use the Titanic dataset from Kaggle.
### Code that reproduces the bug
_No response_
### pandas-profiling version
2.6.2
### Dependencies
```Text
no dependencies
```
### OS
Windows 10
### Checklist
- [X] There is not yet another bug report for this issue in the [issue tracker](https://github.com/ydataai/pandas-profiling/issues)
- [X] The problem is reproducible from this bug report. [This guide](http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) can help to craft a minimal bug report.
- [X] The issue has not been resolved by the entries listed under [Common Issues](https://pandas-profiling.ydata.ai/docs/master/pages/support_contrib/common_issues.html). | closed | 2023-01-24T04:06:33Z | 2023-01-24T22:25:30Z | https://github.com/ydataai/ydata-profiling/issues/1241 | [
"needs-triage"
] | marcelovalentimsilva | 1 |
iperov/DeepFaceLab | deep-learning | 5,483 | WINDOWS 11 - Xseg - data_dst mask - edit -> qt.qpa.plugin: Could not find the Qt platform plugin "windows" in ..... | Running XSeg editor.
qt.qpa.plugin: Could not find the Qt platform plugin "windows" in "C:\Users\IMA ISLAND BOY\OneDrive\PoŔÝtaŔ\DeepFaceLab\DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\python-3.6.8\Lib\site-packages\PyQt5\Qt\plugins"
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.
I did reinstall the program but the problem persists | open | 2022-02-21T10:55:19Z | 2023-06-08T23:18:52Z | https://github.com/iperov/DeepFaceLab/issues/5483 | [] | 4M0NK | 2 |
sigmavirus24/github3.py | rest-api | 679 | Issue with `betamax` not recording `iter_issues` requests | I'm running [a test][0] that looks like:
```
def test_issue_label_list(self):
    with self.recorder.use_cassette(self._make_cassette_name()):
        self.service.connect()
        content = self.service.issue_label_list(user, repo)
        return content
```
where [`self.recorder`][1] is built [using `self.service.gh._session`][2] as parameter so it can hijack the session.
where `self.service` is [the instance on which I'm testing a method][3] that implements github3.py as follows:
```
def connect(self):
    …
    self.gh.login(token=self._privatekey)
    self.username = self.gh.user().login
    …

def issue_label_list(self, user, repo):
    repository = self.gh.repository(user, repo)
    yield "Name"
    for l in repository.iter_labels():
        yield l.name
```
As a result, I'm only getting the `/user` API call recorded in the [cassette][4] (that one is made in `connect()`, not in `iter_labels`). When I call the same method from the command line with `urllib` debugging on, all three requests happen, along with the expected result.
From my experience with other API implementations, it's usually a case of requests happening on another session instance instead of the one hijacked by betamax. But I have checked my code: `self.gh._session` does not change, and it happens for tests that call:
* `Repository.iter_issues()`,
* `Repository.iter_labels()`,
* `Repository.iter_milestones()`
I'm still using `v0.9.5`, with betamax `v0.5.1`. If you feel I should rather report on betamax, please do tell!
[0]:https://github.com/guyzmo/git-repo/blob/514f34ab8bf17014274b5d70c88b8e6b897e2bcd/tests/integration/test_github.py#L359,L375
[1]:https://github.com/guyzmo/git-repo/blob/514f34ab8bf17014274b5d70c88b8e6b897e2bcd/tests/helpers.py#L384,L384
[2]:https://github.com/guyzmo/git-repo/blob/514f34ab8bf17014274b5d70c88b8e6b897e2bcd/tests/integration/test_github.py#L35,L36
[3]:https://github.com/guyzmo/git-repo/blob/514f34ab8bf17014274b5d70c88b8e6b897e2bcd/git_repo/services/ext/github.py#L350,L354
[4]:https://github.com/guyzmo/git-repo/blob/514f34ab8bf17014274b5d70c88b8e6b897e2bcd/tests/integration/cassettes/test_github_test_35_issue_label_list.json
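For readers unfamiliar with the arrangement being described, a compressed sketch of it (simplified from the linked helpers; the token, cassette directory, and repository names are illustrative):
```python
import betamax
import github3

gh = github3.GitHub()
gh.login(token="<token>")

# Hijack the client's underlying requests session so betamax records/replays its calls.
recorder = betamax.Betamax(gh._session, cassette_library_dir="tests/integration/cassettes")
with recorder.use_cassette("test_issue_label_list"):
    repository = gh.repository("user", "repo")
    labels = [label.name for label in repository.iter_labels()]
```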
| closed | 2017-01-31T16:47:08Z | 2017-01-31T19:58:23Z | https://github.com/sigmavirus24/github3.py/issues/679 | [] | guyzmo | 14 |
microsoft/qlib | deep-learning | 1,176 | What's the difference between model and model_ts? | ## ❓ Questions and Help
In the qlib source code, both model and model_ts variants exist: for instance, pytorch_alstm.py and pytorch_alstm_ts.py, pytorch_gats.py and pytorch_gats_ts.py, and so on. I want to know the difference.
"question",
"stale"
] | polaris-gogh | 3 |
jazzband/django-oauth-toolkit | django | 666 | Request: Upload sdist to PyPI for v1.2.0 | Related to #562, but requesting it for the latest version. | closed | 2018-11-10T11:20:43Z | 2021-12-16T14:55:35Z | https://github.com/jazzband/django-oauth-toolkit/issues/666 | [] | nehaljwani | 1 |
pyeve/eve | flask | 871 | 0.6.4 release? | Hi Nicola. Just wondering whether you have plans to release 0.6.4 in the near future? There are quite a few bugfix [changes](https://github.com/nicolaiarocci/eve/blob/develop/CHANGES) that I need for my current project. If possible I'd rather run an official package from PyPI on my prod servers than a github checkout of the dev branch. Cheers!
| closed | 2016-06-05T15:35:42Z | 2016-06-08T22:51:44Z | https://github.com/pyeve/eve/issues/871 | [] | amorphic | 3 |
python-gitlab/python-gitlab | api | 2,327 | Gitlab("http://mysite.com:1234", access_token) still connects to port 80, not 1234 | ## Description of the problem, including code/CLI snippet
gl = gitlab.Gitlab("http://mysite.com:1234", access_token) #still connect to port 80 but not 1234
## Expected Behavior
I need it to connect to port 1234 of mysite.com.
## Actual Behavior
It connects to port 80 of mysite.com.
## Specifications
- python-gitlab version: 3.10
- API version you are using (v3/v4): v3
- Gitlab server version (or gitlab.com):
| closed | 2022-10-18T14:14:20Z | 2023-10-23T01:13:32Z | https://github.com/python-gitlab/python-gitlab/issues/2327 | [
"need info"
] | ruanjianhui | 2 |
scrapy/scrapy | python | 6,476 | Undocumented (performance) behaviour with default value of `DOWNLOAD_DELAY` setting (0) | ### Description
On runs with the default value of the `DOWNLOAD_DELAY` setting (0), the request sending rate is limited only by CPU capabilities until the number of sent requests reaches `CONCURRENT_REQUESTS_PER_DOMAIN` (default 8, or `CONCURRENT_REQUESTS` if its value is lower).
As far as I can tell from the Scrapy documentation, this is not mentioned anywhere.
### Steps to Reproduce
Here is the script that reproduce this:
<details> <summary>script.py</summary>
```python
import scrapy
from scrapy.crawler import CrawlerProcess
from scrapy.signals import request_reached_downloader
class BooksToScrapeSpider(scrapy.Spider):
name = "books"
custom_settings = {
"CONCURRENT_REQUESTS": 8
}
def request_reached_downloader(self, request, spider):
self.logger.info(f"request to {request.url}")
def start_requests(self):
self.crawler.signals.connect(self.request_reached_downloader, signal=request_reached_downloader)
yield scrapy.Request(url='http://books.toscrape.com/catalogue/page-1.html', callback=self.parse)
def parse(self, response):
for book in response.css("h3 a::attr(href)").getall():
yield scrapy.Request(url=response.urljoin(book), callback=self.parse_product)
#if next_page := response.css("li.next a::attr(href)").get():
# yield scrapy.Request(url=response.urljoin(next_page), callback=self.parse, priority=-10)
def parse_product(self, response):
response.css('li.current::text').get()
pass
if __name__ == "__main__":
p = CrawlerProcess(settings={"LOG_DATEFORMAT": ""}); p.crawl(BooksToScrapeSpider); p.start()
```
</details>
<details> <summary>log_output</summary>
```
2024-09-12 16:19:15,967 [scrapy.utils.log] INFO: Scrapy 2.11.2 started (bot: scrapybot)
2024-09-12 16:19:16,000 [scrapy.utils.log] INFO: Versions: lxml 4.9.1.0, libxml2 2.9.14, cssselect 1.2.0, parsel 1.6.0, w3lib 1.21.0, Twisted 22.10.0, Python 3.10.8 | packaged by conda-forge | (main, Nov 24 2022, 14:07:00) [MSC v.1916 64 bit (AMD64)], pyOpenSSL 23.0.0 (OpenSSL 1.1.1w 11 Sep 2023), cryptography 39.0.1, Platform Windows-10-10.0.22631-SP0
2024-09-12 16:19:16,017 [scrapy.addons] INFO: Enabled addons:
[]
2024-09-12 16:19:16,031 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.selectreactor.SelectReactor
2024-09-12 16:19:16,062 [scrapy.extensions.telnet] INFO: Telnet Password: b8049b496606c603
2024-09-12 16:19:16,317 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.logstats.LogStats']
2024-09-12 16:19:16,317 [scrapy.crawler] INFO: Overridden settings:
{'CONCURRENT_REQUESTS': 8, 'LOG_DATEFORMAT': ''}
2024-09-12 16:19:16,605 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.offsite.OffsiteMiddleware',
'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2024-09-12 16:19:16,607 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2024-09-12 16:19:16,613 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2024-09-12 16:19:16,613 [scrapy.core.engine] INFO: Spider opened
2024-09-12 16:19:16,640 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2024-09-12 16:19:16,640 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2024-09-12 16:19:21,130 [books] INFO: request to http://books.toscrape.com/catalogue/page-1.html
2024-09-12 16:19:21,957 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://books.toscrape.com/catalogue/page-1.html> (referer: None)
2024-09-12 16:19:22,093 [books] INFO: request to http://books.toscrape.com/catalogue/scott-pilgrims-precious-little-life-scott-pilgrim-1_987/index.html
2024-09-12 16:19:22,093 [books] INFO: request to http://books.toscrape.com/catalogue/set-me-free_988/index.html
2024-09-12 16:19:22,093 [books] INFO: request to http://books.toscrape.com/catalogue/shakespeares-sonnets_989/index.html
2024-09-12 16:19:22,098 [books] INFO: request to http://books.toscrape.com/catalogue/starving-hearts-triangular-trade-trilogy-1_990/index.html
2024-09-12 16:19:22,098 [books] INFO: request to http://books.toscrape.com/catalogue/the-black-maria_991/index.html
2024-09-12 16:19:22,098 [books] INFO: request to http://books.toscrape.com/catalogue/the-boys-in-the-boat-nine-americans-and-their-epic-quest-for-gold-at-the-1936-berlin-olympics_992/index.html
2024-09-12 16:19:22,098 [books] INFO: request to http://books.toscrape.com/catalogue/the-coming-woman-a-novel-based-on-the-life-of-the-infamous-feminist-victoria-woodhull_993/index.html
2024-09-12 16:19:22,098 [books] INFO: request to http://books.toscrape.com/catalogue/the-dirty-little-secrets-of-getting-your-dream-job_994/index.html
2024-09-12 16:19:22,324 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://books.toscrape.com/catalogue/scott-pilgrims-precious-little-life-scott-pilgrim-1_987/index.html> (referer: http://books.toscrape.com/catalogue/page-1.html)
2024-09-12 16:19:22,324 [books] INFO: request to http://books.toscrape.com/catalogue/its-only-the-himalayas_981/index.html
2024-09-12 16:19:22,539 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://books.toscrape.com/catalogue/the-boys-in-the-boat-nine-americans-and-their-epic-quest-for-gold-at-the-1936-berlin-olympics_992/index.html> (referer: http://books.toscrape.com/catalogue/page-1.html)
2024-09-12 16:19:22,540 [books] INFO: request to http://books.toscrape.com/catalogue/libertarianism-for-beginners_982/index.html
2024-09-12 16:19:22,549 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://books.toscrape.com/catalogue/shakespeares-sonnets_989/index.html> (referer: http://books.toscrape.com/catalogue/page-1.html)
2024-09-12 16:19:22,549 [books] INFO: request to http://books.toscrape.com/catalogue/mesaerion-the-best-science-fiction-stories-1800-1849_983/index.html
2024-09-12 16:19:22,554 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://books.toscrape.com/catalogue/the-dirty-little-secrets-of-getting-your-dream-job_994/index.html> (referer: http://books.toscrape.com/catalogue/page-1.html)
2024-09-12 16:19:22,556 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://books.toscrape.com/catalogue/its-only-the-himalayas_981/index.html> (referer: http://books.toscrape.com/catalogue/page-1.html)
2024-09-12 16:19:22,556 [books] INFO: request to http://books.toscrape.com/catalogue/olio_984/index.html
2024-09-12 16:19:22,559 [books] INFO: request to http://books.toscrape.com/catalogue/our-band-could-be-your-life-scenes-from-the-american-indie-underground-1981-1991_985/index.html
2024-09-12 16:19:22,559 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://books.toscrape.com/catalogue/set-me-free_988/index.html> (referer: http://books.toscrape.com/catalogue/page-1.html)
2024-09-12 16:19:22,565 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://books.toscrape.com/catalogue/starving-hearts-triangular-trade-trilogy-1_990/index.html> (referer: http://books.toscrape.com/catalogue/page-1.html)
2024-09-12 16:19:22,568 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://books.toscrape.com/catalogue/the-black-maria_991/index.html> (referer: http://books.toscrape.com/catalogue/page-1.html)
2024-09-12 16:19:22,571 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://books.toscrape.com/catalogue/the-coming-woman-a-novel-based-on-the-life-of-the-infamous-feminist-victoria-woodhull_993/index.html> (referer: http://books.toscrape.com/catalogue/page-1.html)
2024-09-12 16:19:22,574 [books] INFO: request to http://books.toscrape.com/catalogue/rip-it-up-and-start-again_986/index.html
2024-09-12 16:19:22,575 [books] INFO: request to http://books.toscrape.com/catalogue/the-requiem-red_995/index.html
2024-09-12 16:19:22,575 [books] INFO: request to http://books.toscrape.com/catalogue/sapiens-a-brief-history-of-humankind_996/index.html
2024-09-12 16:19:22,578 [books] INFO: request to http://books.toscrape.com/catalogue/sharp-objects_997/index.html
2024-09-12 16:19:22,761 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://books.toscrape.com/catalogue/libertarianism-for-beginners_982/index.html> (referer: http://books.toscrape.com/catalogue/page-1.html)
2024-09-12 16:19:22,761 [books] INFO: request to http://books.toscrape.com/catalogue/soumission_998/index.html
2024-09-12 16:19:22,767 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://books.toscrape.com/catalogue/mesaerion-the-best-science-fiction-stories-1800-1849_983/index.html> (referer: http://books.toscrape.com/catalogue/page-1.html)
2024-09-12 16:19:22,773 [books] INFO: request to http://books.toscrape.com/catalogue/tipping-the-velvet_999/index.html
2024-09-12 16:19:22,776 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://books.toscrape.com/catalogue/olio_984/index.html> (referer: http://books.toscrape.com/catalogue/page-1.html)
2024-09-12 16:19:22,776 [books] INFO: request to http://books.toscrape.com/catalogue/a-light-in-the-attic_1000/index.html
2024-09-12 16:19:22,786 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://books.toscrape.com/catalogue/our-band-could-be-your-life-scenes-from-the-american-indie-underground-1981-1991_985/index.html> (referer: http://books.toscrape.com/catalogue/page-1.html)
2024-09-12 16:19:22,791 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://books.toscrape.com/catalogue/rip-it-up-and-start-again_986/index.html> (referer: http://books.toscrape.com/catalogue/page-1.html)
2024-09-12 16:19:22,793 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://books.toscrape.com/catalogue/sapiens-a-brief-history-of-humankind_996/index.html> (referer: http://books.toscrape.com/catalogue/page-1.html)
2024-09-12 16:19:22,796 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://books.toscrape.com/catalogue/sharp-objects_997/index.html> (referer: http://books.toscrape.com/catalogue/page-1.html)
2024-09-12 16:19:22,796 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://books.toscrape.com/catalogue/the-requiem-red_995/index.html> (referer: http://books.toscrape.com/catalogue/page-1.html)
2024-09-12 16:19:22,972 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://books.toscrape.com/catalogue/soumission_998/index.html> (referer: http://books.toscrape.com/catalogue/page-1.html)
2024-09-12 16:19:22,987 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://books.toscrape.com/catalogue/tipping-the-velvet_999/index.html> (referer: http://books.toscrape.com/catalogue/page-1.html)
2024-09-12 16:19:22,987 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://books.toscrape.com/catalogue/a-light-in-the-attic_1000/index.html> (referer: http://books.toscrape.com/catalogue/page-1.html)
2024-09-12 16:19:23,101 [scrapy.core.engine] INFO: Closing spider (finished)
2024-09-12 16:19:23,101 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 7055,
'downloader/request_count': 21,
'downloader/request_method_count/GET': 21,
'downloader/response_bytes': 78821,
'downloader/response_count': 21,
'downloader/response_status_count/200': 21,
'elapsed_time_seconds': 6.460656,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2024, 9, 12, 14, 19, 23, 101159, tzinfo=datetime.timezone.utc),
'httpcompression/response_bytes': 407846,
'httpcompression/response_count': 21,
'items_per_minute': None,
'log_count/DEBUG': 22,
'log_count/INFO': 31,
'request_depth_max': 1,
'response_received_count': 21,
'responses_per_minute': None,
'scheduler/dequeued': 21,
'scheduler/dequeued/memory': 21,
'scheduler/enqueued': 21,
'scheduler/enqueued/memory': 21,
'start_time': datetime.datetime(2024, 9, 12, 14, 19, 16, 640503, tzinfo=datetime.timezone.utc)}
2024-09-12 16:19:23,101 [scrapy.core.engine] INFO: Spider closed (finished)
```
</details>
From the log output we see that the first 8 requests were sent between 2024-09-12 16:19:22,093 and 2024-09-12 16:19:22,098 (5 milliseconds), at a rate of 1600 rpm (on my hardware).
Any application with `DOWNLOAD_DELAY=0` and some custom code that schedules new requests on the `spider_idle` signal will produce that "burst" of requests at a rate of over 100 rpm on each `spider_idle` signal event.
**Reproduces how often:** 100% if `DOWNLOAD_DELAY` is `0` (the default value). Every Scrapy application where the default value of `DOWNLOAD_DELAY` has not been changed is affected by this.
### Additional context
Despite the general recommendation to scrape politely, a lot of Scrapy users are not aware of the consequences of `DOWNLOAD_DELAY=0` under the current default settings and do not override it.
This request-sending pattern (8 very fast requests, with the 9th sent right after the 1st response arrives) can easily be recognised by antibot systems.
The obvious way to fully prevent this (as Scrapy's default behaviour) is to make a **backward-incompatible** change and increase the default value of `DOWNLOAD_DELAY` to a larger, non-zero value.
Taking into account the default value of `RANDOMIZE_DOWNLOAD_DELAY=True` and its logic in:
https://github.com/scrapy/scrapy/blob/e8cb5a03b382b98f2c8945355076390f708b918d/scrapy/core/downloader/__init__.py#L41-L43
it should be increased to at least `4`.
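For readers, a short sketch of what setting the delay explicitly looks like; the specific values are illustrative, not a recommendation from this report.
```python
import scrapy

class PoliteSpider(scrapy.Spider):
    name = "polite"
    start_urls = ["http://books.toscrape.com/catalogue/page-1.html"]
    custom_settings = {
        # With RANDOMIZE_DOWNLOAD_DELAY=True (the default), the effective delay is
        # sampled from 0.5 * DOWNLOAD_DELAY to 1.5 * DOWNLOAD_DELAY between requests.
        "DOWNLOAD_DELAY": 4,
        "CONCURRENT_REQUESTS_PER_DOMAIN": 2,
    }

    def parse(self, response):
        pass
```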
If a Scrapy user still wants to lower `DOWNLOAD_DELAY` or set it to `0` and face the related consequences, **it should be set explicitly by the user by updating the related settings (it is not supposed to be Scrapy's default behaviour, as it is now).** | open | 2024-09-12T15:00:32Z | 2025-02-06T17:24:39Z | https://github.com/scrapy/scrapy/issues/6476 | [] | GeorgeA92 | 8 |
LAION-AI/Open-Assistant | python | 3,141 | Is it "by design" that the website only has 3 models to chat with? | The website currently shows only these 3 models for me:

Is it supposed to be this way? I heard that OA also has other models that are not based on LLaMA and are fully open source. Are they left out on purpose, or not? | closed | 2023-05-12T21:03:22Z | 2023-05-12T21:09:23Z | https://github.com/LAION-AI/Open-Assistant/issues/3141 | [
"question"
] | DoctorKrolic | 1 |
huggingface/datasets | nlp | 7,313 | Cannot create a dataset with relative audio path | ### Describe the bug
Hello! I want to create a dataset of parquet files, with audios stored as separate .mp3 files. However, it says "No such file or directory" (see the reproducing code).
### Steps to reproduce the bug
Creating a dataset
```
from pathlib import Path
from datasets import Dataset, load_dataset, Audio
Path('my_dataset/audio').mkdir(parents=True, exist_ok=True)
Path('my_dataset/audio/file.mp3').touch(exist_ok=True)
Dataset.from_list(
    [{'audio': {'path': 'audio/file.mp3'}}]
).to_parquet('my_dataset/data.parquet')
```
Result:
```
# my_dataset
# ├── audio
# │ └── file.mp3
# └── data.parquet
```
Trying to load the dataset
```
dataset = (
    load_dataset('my_dataset', split='train')
    .cast_column('audio', Audio(sampling_rate=16_000))
)
dataset[0]
>>> FileNotFoundError: [Errno 2] No such file or directory: 'audio/file.mp3'
```
### Expected behavior
I expect the dataset to load correctly.
I've found 2 workarounds, but they are not very good:
1. I can specify an absolute path to the audio (sketched below); however, when I move the folder or upload to HF it will stop working.
2. I can set `'path': 'file.mp3'`, and load with `load_dataset('my_dataset', data_dir='audio')` - it seems to work, but does this mean that anyone from Hugging Face who wants to use this dataset should also pass the `data_dir` argument, otherwise it won't work?
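For concreteness, workaround 1 can be sketched with the same toy files as the reproduction code above; as noted, it breaks as soon as the folder moves.
```python
from pathlib import Path
from datasets import Dataset

audio_path = str(Path("my_dataset/audio/file.mp3").resolve())  # absolute path
Dataset.from_list(
    [{"audio": {"path": audio_path}}]
).to_parquet("my_dataset/data.parquet")
```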
### Environment info
datasets 3.1.0, Ubuntu 24.04.1 | open | 2024-12-09T07:34:20Z | 2024-12-12T13:46:38Z | https://github.com/huggingface/datasets/issues/7313 | [] | sedol1339 | 3 |
junyanz/pytorch-CycleGAN-and-pix2pix | computer-vision | 1,595 | Training loss does not decrease | Dear authors @junyanz and @taesungp,
First, thank you for your excellent work!
I am currently using the pix2pix model to predict future images (gray-scale) based on the current one in the medical field.
I modified the code to use 7 input channels and 1 output channel for Generator.
During the training, I observed that the loss values of G_GAN, G_L1, D_Real, and D_Fake have not changed much since the beginning of the training process. (Figure 1)

Regarding the training results, the fake_B images are very different from the real_B ones.

I am new to this task and any insights/explanations/recommendations are highly appreciated!
Thank you very much. | open | 2023-09-05T00:41:27Z | 2024-01-02T14:00:56Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1595 | [] | nguyenpbui | 3 |
tableau/server-client-python | rest-api | 1,057 | Publish datasource and connection elements | **Describe the bug**
It appears to me that the publish datasource method utilizes a `connectionCredentials` subelement, and not a `connections` element.
https://help.tableau.com/current/api/rest_api/en-us/REST/rest_api_ref_data_sources.htm#publish_data_source
The way I see it, we could solve this by one of two routes:
1. Combine the connections and connection_credentials inside the function call to a single object that gets passed to the publish request factory.
2. Make connection credentials accept a sequence, and surface an error to TSC users when they provide both.
**Versions**
Details of your environment, including:
- Tableau Server version (or note if using Tableau Online) N/A
- Python version 3.8.10
- TSC library version 0.19.0
| open | 2022-06-17T20:15:17Z | 2024-06-03T12:21:31Z | https://github.com/tableau/server-client-python/issues/1057 | [
"bug",
"in-progress"
] | jorwoods | 3 |
roboflow/supervision | machine-learning | 1,247 | Speed Estimator for Vehicle Tracking | ### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests.
### Question
Where and how do I specifically add the configurations for both "vehicles.mp4" and "vehicles-result.mp4" in the ultralytics script, "ultralytics_example.py"?
Does it simply replace the code-line: "--source_video_path" and "--target_video-path"?
Can you specifically send the 146-line ultralytics script to incorporate "vehicles.mp4" and "vehicles-result.mp4"?
### Additional
_No response_ | closed | 2024-05-30T11:10:04Z | 2024-05-30T11:38:05Z | https://github.com/roboflow/supervision/issues/1247 | [
"question"
] | bthoma48 | 1 |
sanic-org/sanic | asyncio | 3,018 | A more customisable CLI and REPL | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Is your feature request related to a problem? Please describe.
_No response_
### Describe the solution you'd like
Sanic CLI currently has options for different run configurations.
**Requested:**
- Ability to add custom commands for application related workflows. With application context available to the command.
Usage can be similar to how inspector commands are created currently.
Sanic REPL currently provides 4 objects by default In the repl context.
**Requested:**
- Ability to add/inject additional objects to the repl context.
- Ability to prepare REPL context. (e.g. pre importing modules, models)
Thanks
### Additional context
_No response_ | closed | 2024-12-24T10:36:21Z | 2025-03-05T13:29:07Z | https://github.com/sanic-org/sanic/issues/3018 | [
"feature request"
] | goodki-d | 7 |
matterport/Mask_RCNN | tensorflow | 2,812 | StopIteration | CUDA 11.4
tensorflow2.5-gpu
Training succeeds, but testing fails with this error:
Traceback (most recent call last):
File "E:/Mask_RCNN/samples/linemod/test.py", line 98, in <module>
file_names = next(os.walk(IMAGE_DIR))[2]
StopIteration | open | 2022-04-18T06:21:04Z | 2022-04-18T06:21:04Z | https://github.com/matterport/Mask_RCNN/issues/2812 | [] | kungkook | 0 |
tensorpack/tensorpack | tensorflow | 1,167 | Tensorflow 2.0 | Hi,
It is definitely not an issue, but probably soon will be :) What is the future of tensorpack in TF 2.0? Are there any plans to modify tensorpack to be compatible with TF 2.0? Or will you stick to TF 1.x? | open | 2019-04-26T08:10:36Z | 2024-02-08T05:59:22Z | https://github.com/tensorpack/tensorpack/issues/1167 | [
"enhancement"
] | soldierofhell | 14 |
Kludex/mangum | fastapi | 69 | Error when no headers in request to API gateway | I've got a normal setup, based closely on the example in mangum-cli and the Terraform docs on AWS Lambda. Using the AWS API Gateway test button throws this error:
```yml
# loosely translated from the API gateway log
Endpoint response body before transformations:
errorMessage:
message: 'NoneType' object has no attribute 'get'
errorType: AttributeError
stackTrace: >
File "/var/task/mangum/adapter.py", line 40, in __call__
raise exc
File "/var/task/mangum/adapter.py", line 35, in __call__
response = self.handler(event, context)
File "/var/task/mangum/adapter.py", line 50, in handler
response = handle_http(self.app, event, context)
File "/var/task/mangum/protocols/http.py", line 58, in handle_http
server, client = get_server_and_client(event)
File "/var/task/mangum/utils.py", line 22, in get_server_and_client
server_addr = event["headers"].get("Host", None)
```
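The last frame of that traceback points at an unguarded `event["headers"]` access. Purely as an illustration (this is not the patch mangum actually shipped), a guard of this shape avoids the crash when the test console sends an event whose headers field is null:
```python
from typing import Optional

def host_from_event(event: dict) -> Optional[str]:
    headers = event.get("headers") or {}  # tolerate a missing or null headers field
    return headers.get("Host")

print(host_from_event({"headers": None}))  # prints None instead of raising AttributeError
```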
Adding `Accept:application/json` to the textbox for headers resolves the issue, and I get my expected `{ greeting: "hello" }` JSON back. | closed | 2019-11-18T18:48:10Z | 2020-02-07T19:34:36Z | https://github.com/Kludex/mangum/issues/69 | [] | SKalt | 6 |
tflearn/tflearn | tensorflow | 957 | Cannot Load Model | I am trying to train, save and load a tensorflow model using tflearn
# Building convolutional network
network = input_data(shape=[None, imageSize, imageSize, 1], name='input')
network = conv_2d(network, imageSize, self.windowSize, activation='relu', regularizer="L2")
network = max_pool_2d(network, 2)
network = local_response_normalization(network)
network = conv_2d(network, imageSize * 2, self.windowSize, activation='relu', regularizer="L2")
network = max_pool_2d(network, 2)
network = local_response_normalization(network)
network = fully_connected(network, (dim4 * dim4) * (imageSize * 2), activation='tanh')
network = dropout(network, keep)
network = fully_connected(network, (dim4 * dim4) * (imageSize * 2), activation='tanh')
network = dropout(network, keep)
network = fully_connected(network, n_classes, activation='softmax')
network = regression(network, optimizer='adam', learning_rate=self.learningRate,
loss='categorical_crossentropy', name='target')
model = tflearn.DNN(network, tensorboard_verbose=0, tensorboard_dir='some/dir')
model.fit(
    {'input': np.array(myData.train_x).reshape(-1, self.imageSize, self.imageSize, 1)}, {'target': myData.train_y}, n_epoch=self.epochs,
    validation_set=(
        {'input': np.array(myData.test_x).reshape(-1, self.imageSize, self.imageSize, 1)},
        {'target': myData.test_y}),
    snapshot_step=100, show_metric=True, run_id='convnet')
model.save("some/path/model")
This part works. Next, I do:
model_path = "some/path/model.meta"
if os.path.exists(model_path):
    model.load(model_path)
else:
    return "need to train the model"
prediction = self.model.predict([<some_input>])
print(str(prediction))
return prediction
This fails at `model.load(model_path)`. I get the following error trace:
DataLossError (see above for traceback): Unable to open table file some/path/model.meta: Data loss: not an sstable (bad magic number): perhaps your file is in a different file format and you need to use a different restore operator?
[[Node: save_5/RestoreV2_4 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save_5/Const_0_0, save_5/RestoreV2_4/tensor_names, save_5/RestoreV2_4/shape_and_slices)]]
Caused by op 'save_5/RestoreV2_4', defined at:
What is meant by:
Data loss: not an sstable (bad magic number): perhaps your file is in a different file format and you need to use a different restore operator?
I can see that the model is indeed saved properly and is not an empty file. Why can't I load it?
What am I doing wrong? Is there something in the name of the model? Do I need to use a certain extension?
I have even tried changing the folder but it does not work.
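One hedged observation from editing, not a confirmed answer: the save call above writes the checkpoint under the prefix "some/path/model", while the load call passes the "some/path/model.meta" graph file. TensorFlow's restore expects the same prefix that was saved, so, reusing the `model` object built above, the matching load would look like:
```python
import os

model_path = "some/path/model"            # same prefix as model.save(), without ".meta"
if os.path.exists(model_path + ".meta"):  # the .meta file still marks that a save exists
    model.load(model_path)
```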
Version Information
tensorflow==1.4.0
tensorflow-tensorboard==0.4.0rc2
tflearn==0.3.2
Python 3.6.3 :: Anaconda, Inc.
MAC OS Sierra
| closed | 2017-11-14T16:18:27Z | 2017-11-14T20:59:59Z | https://github.com/tflearn/tflearn/issues/957 | [] | abtpst | 4 |
comfyanonymous/ComfyUI | pytorch | 6,428 | Intel Mac Help - Pytorch/Anaconda does not comply | ### Your question
Okay, trying to install ComfyUI on my macOS machine (model: iMac, Retina 5K, 27-inch, 2020, with a 3.1 GHz 6-Core Intel Core i5). Everything goes well, from installing Homebrew to the Python script. But typing in "pip3 install - -upgrade pip setuptools" results in: Usage:
pip3 install [options] <requirement specifier> [package-index-options] ...
pip3 install [options] -r <requirements file> [package-index-options] ...
pip3 install [options] [-e] <vcs project url> ...
pip3 install [options] [-e] <local project path> ...
pip3 install [options] <archive url/path> ...
no such option: -u
which after attempting PyTorch ends with:
pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu
error: externally-managed-environment
× This environment is externally managed
╰─> To install Python packages system-wide, try brew install
xyz, where xyz is the package you are trying to
install.
Heck, even running "pip3 install -r requirements.txt" results in it not being found, even though requirements.txt is right there.
So I tried to switch over to Anaconda. Everything seems to turn out great until I tried to verify MPS support with:
import torch
if torch.backends.mps.is_available():
    mps_device = torch.device("mps")
    x = torch.ones(1, device=mps_device)
    print (x)
else:
    print ("MPS device not found.")
The expected result should be "tensor([1.], device='mps:0')", but the actual result is not in directory.
I'm at my wits' end. What am I doing wrong here?
### Logs
_No response_
### Other
_No response_ | open | 2025-01-11T04:31:39Z | 2025-01-16T09:06:24Z | https://github.com/comfyanonymous/ComfyUI/issues/6428 | [
"User Support"
] | Ja-Do470 | 7 |
bmoscon/cryptofeed | asyncio | 231 | Change json library to yapic for performance | I have to say, thank you for making this easy. I dumped raw json data using examples/demo_raw_data.py, and then used it in the test below. The results show yapic is about 1.3x to 2.0x the speed of the standard library for parsing l2_book, ticker, and trades data from 100 MB files from Binance and Deribit feeds.
Based on these results, I am going to proceed with integrating yapic for further stability testing.
```
binance results
file lines: 211316
yapic: 13.38 seconds
json: 23.62 seconds
yapic lps: 47390.76 lines per second
json lps: 26838.02 lines per seond
json/yapic: 1.77x
deribit results
file lines: 310000
yapic: 18.85 seconds
json: 37.63 seconds
yapic lps: 49332.11 lines per second
json lps: 24717.06 lines per seond
json/yapic: 2.00x
```
```
import joblib
import timeit
iterations = 3
files = { 'binance' : 'BINANCE12f2ac65-b5c5-404d-b8b4-8f22c04da3a4.0',
          'deribit' : 'DERIBIT252a6e79-7a32-4038-a64a-b1f7b29d6837.3' }

def load_file(filepath):
    output = []
    with open(filepath) as fp:
        line = fp.readline()
        while line:
            output.append(line.split(' ')[1])
            line = fp.readline()
    joblib.dump(output, 'json.pkl')
    return len(output)

for k, f in files.items():
    length = load_file(f)
    yapic_time = timeit.timeit('[json.loads(msg,parse_float=Decimal) for msg in txt]',setup="from yapic import json; import joblib; from decimal import Decimal; txt = joblib.load('json.pkl')", number=iterations)
    yapic_lps = length * iterations / yapic_time
    json_time = timeit.timeit('[json.loads(msg,parse_float=Decimal) for msg in txt]',setup="import json; import joblib; from decimal import Decimal; txt = joblib.load('json.pkl')", number=iterations)
    json_lps = length * iterations / json_time
    ratio = json_time / yapic_time
    print(f'{k} results')
    print(f'file lines: {length}')
    print(f'yapic: {yapic_time:.2f} seconds')
    print(f'json: {json_time:.2f} seconds')
    print(f'yapic lps: {yapic_lps:.2f} lines per second')
    print(f'json lps: {json_lps:.2f} lines per seond')
    print(f'json/yapic: {ratio:.2f}x')
```
_Originally posted by @vincentmele in https://github.com/bmoscon/cryptofeed/pull/230#issuecomment-610076287_ | closed | 2020-04-06T22:50:54Z | 2020-04-21T00:04:34Z | https://github.com/bmoscon/cryptofeed/issues/231 | [] | vincentmele | 3 |
allenai/allennlp | nlp | 5,404 | Coreference Resolution training is failing with CUDA out of memory error |
```
```
## Environment
<!-- Provide the name of operating system below (e.g. OS X, Linux) -->
OS:
NAME="Ubuntu"
VERSION="18.04.2 LTS (Bionic Beaver)"
<!-- Provide the Python version you were using (e.g. 3.7.1) -->
Python version:
Python 3.7.11
<details>
<summary><b>Output of <code>pip freeze</code>:</b></summary>
<p>
<!-- Paste the output of `pip freeze` in between the next two lines below -->
```
```
</p>
</details>
## Steps to reproduce
<!-- Add a fully runnable example in between the next two lines below that will reproduce the bug -->
```
$conda create -n allennlp python=3.7
$source activate allennlp
$pip install allennlp
$pip install allennlp-models
$pip install --pre torch torchvision -f https://download.pytorch.org/whl/nightly/cu111/torch_nightly.html -U
$allennlp train coref_spanbert_large.jsonnet -s output_dir
coref_precision: 0.2097, coref_recall: 0.0115, coref_f1: 0.0217, mention_recall: 0.1283, batch_loss: 30.2774, loss: 26.6453 ||: 52%|#####2 | 1467/2802 [04:04<03:58,
coref_precision: 0.2100, coref_recall: 0.0115, coref_f1: 0.0218, mention_recall: 0.1285, batch_loss: 31.8822, loss: 26.6488 ||: 52%|#####2 | 1468/2802 [04:04<03:31,
coref_precision: 0.2109, coref_recall: 0.0116, coref_f1: 0.0219, mention_recall: 0.1289, batch_loss: 112.6335, loss: 26.7074 ||: 52%|#####2 | 1469/2802 [04:08<30:40
coref_precision: 0.2110, coref_recall: 0.0116, coref_f1: 0.0219, mention_recall: 0.1291, batch_loss: 79.8499, loss: 26.7435 ||: 52%|#####2 | 1470/2802 [04:08<22:46,
coref_precision: 0.2110, coref_recall: 0.0116, coref_f1: 0.0219, mention_recall: 0.1291, batch_loss: 79.8499, loss: 26.7435 ||: 52%|#####2 | 1470/2802 [04:09<03:45, 5.90it/s]
2021-09-10 16:20:58,437 - CRITICAL - root - Uncaught exception
RuntimeError: CUDA out of memory. Tried to allocate 1.75 GiB (GPU 0; 39.59 GiB total capacity; 32.47 GiB already allocated; 1.52 GiB free; 36.17 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
| closed | 2021-09-10T16:51:02Z | 2021-09-14T08:28:34Z | https://github.com/allenai/allennlp/issues/5404 | [
"bug"
] | ranjita-naik | 3 |
svc-develop-team/so-vits-svc | pytorch | 44 | An error occurs after running the training command | closed | 2023-03-17T23:21:10Z | 2023-03-17T23:50:48Z | https://github.com/svc-develop-team/so-vits-svc/issues/44 | [] | endlessland | 0
|
odoo/odoo | python | 202,428 | [18.0] module: POS terminal payment non ending loop : odoo community | ### Odoo Version
- [ ] 16.0
- [ ] 17.0
- [x] 18.0
- [ ] Other (specify)
### Steps to Reproduce
Go to the checkout screen and choose payment by payment terminal. If you switch to Cash by cancelling the terminal payment, the system hangs and the transaction keeps asking for the card.
https://github.com/user-attachments/assets/2abfb2d4-542c-4187-98e6-31d1c9029e4a


It keeps popping up, as shown in the screenshots.
### Log Output
```shell
Please check photos and videos.
```
### Support Ticket
_No response_ | open | 2025-03-18T22:43:40Z | 2025-03-18T22:44:34Z | https://github.com/odoo/odoo/issues/202428 | [] | ihtxam | 0 |
yt-dlp/yt-dlp | python | 12,030 | yt-dlp - Include supported site "https://www.documaniatv.com/" | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a new site support request
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that none of provided URLs [violate any copyrights](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#is-the-website-primarily-used-for-piracy) or contain any [DRM](https://en.wikipedia.org/wiki/Digital_rights_management) to the best of my knowledge
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and am willing to share it if required
### Region
Europe
### Example URLs
https://www.documaniatv.com/naturaleza/planeta-helado-ii-1-mundos-helados-video_9cc181965.html
https://www.documaniatv.com/naturaleza/planeta-helado-ii-2-oceanos-helados-video_a241bc95f.html
https://www.documaniatv.com/naturaleza/planeta-helado-ii-3-cumbres-heladas-video_7196522d7.html
https://www.documaniatv.com/naturaleza/planeta-helado-ii-4-el-sur-helado-video_c51eaf2d5.html
https://www.documaniatv.com/naturaleza/planeta-helado-5-otono-video_71d7e6c10.html
https://www.documaniatv.com/naturaleza/planeta-helado-ii-6-nuestro-planeta-helado-video_7fbe039e7.html
### Provide a description that is worded well enough to be understood
I am requesting that the website "https://www.documaniatv.com" be added to the list of supported yt-dlp sites (https://github.com/yt-dlp/yt-dlp/blob/master/supportedsites.md). Thanks.
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
https://www.documaniatv.com
```
| closed | 2025-01-08T08:04:48Z | 2025-01-13T08:56:34Z | https://github.com/yt-dlp/yt-dlp/issues/12030 | [
"duplicate",
"site-request"
] | spinbilly | 3 |
man-group/arctic | pandas | 678 | Builds fail with pytest 4.1 | Looking at: https://github.com/pytest-dev/pytest/issues/4546
this breaks our builds; IIRC the error reflected the missing get_marker. | closed | 2019-01-06T22:31:38Z | 2019-01-07T17:41:38Z | https://github.com/man-group/arctic/issues/678 | [] | shashank88 | 3
3b1b/manim | python | 1,299 | Update Coordinates - Always Redraw | I'm attempting to track the coordinates of a function, but I can't get `always_redraw` to work because it is not defined. Is this a new feature?
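For reference, an updater-based version of the same idea is below. This is only my sketch, assuming the same `GraphScene` helpers (`coords_to_point`, the `ValueTracker`, and the graph's `underlying_function`) used in the snippet that follows, in case `always_redraw` really isn't available in this version:

```python
# Sketch of an updater-based alternative to always_redraw.
# `track`, `func` and `coor` are the same objects defined in the snippet below.
moving_dot = Dot(coor(track.get_value(), func(track.get_value())))
moving_dot.add_updater(
    lambda d: d.move_to(coor(track.get_value(), func(track.get_value())))
)
```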
```
def construct(self):
self.wait()
self.setup_axes()
graph = self.get_graph(lambda x: x**3, x_min=-5, x_max=5)
initial_x = -5
final_x = 5
track = ValueTracker(initial_x)
coor = self.coords_to_point
dot = track.get_value
func = graph.underlying_function
moving_dot = always_redraw(lambda: Dot(coor(dot(), func(dot()))))
self.add(graph, moving_dot)
self.play(track.set_value, final_x, run_time=5)
self.wait()
``` | open | 2020-12-29T02:07:02Z | 2020-12-29T02:08:37Z | https://github.com/3b1b/manim/issues/1299 | [] | Othello14 | 0 |
jupyterhub/repo2docker | jupyter | 430 | Bug Report: Docs fail to build with recommended virtualenvironment | The docs are currently failing to build in the CI and if one follows the ["Set up a local virtual environment" instructions in the CONTRIBUTING.md](https://github.com/jupyter/repo2docker/blob/master/CONTRIBUTING.md#set-up-a-local-virtual-environment) then
```
$ cd docs
$ make html
```
will fail as `dev-requirements.txt` is missing both `sphinx` and `recommonmark`. I will open a PR that fixes this. | closed | 2018-10-11T20:55:31Z | 2018-10-12T19:24:26Z | https://github.com/jupyterhub/repo2docker/issues/430 | [] | matthewfeickert | 0 |
floodsung/Deep-Learning-Papers-Reading-Roadmap | deep-learning | 45 | What are awesome knowledge graph papers? | @songrotek Thank you. | open | 2017-03-20T06:39:10Z | 2017-03-20T06:40:10Z | https://github.com/floodsung/Deep-Learning-Papers-Reading-Roadmap/issues/45 | [] | guotong1988 | 0 |
miguelgrinberg/Flask-SocketIO | flask | 2,120 | Client disconnection reason on the server | **Is your feature request related to a problem? Please describe.**
It seems from the documentation that Flask-SocketIO doesn't support server-side reasons for client disconnection. That is, if a client is disconnected for some reason, there would be no way of understanding the reason on the server (as specified by https://socket.io/docs/v4/server-socket-instance/#disconnect).
**Describe the solution you'd like**
An argument in the `disconnect()` handler that will contain the relevant string.
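To illustrate, the handler I have in mind would look roughly like this; the `reason` argument is the hypothetical part, everything else is the existing Flask-SocketIO API:

```python
from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app)

@socketio.on('disconnect')
def on_disconnect(reason=None):
    # `reason` would carry the server-side disconnect reason string,
    # e.g. something like 'ping timeout' or 'client namespace disconnect'.
    print('Client disconnected:', reason)
```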
Perhaps I am misunderstanding the way the Socket.IO model works; please let me know if I'm missing a documentation remark on this or if this is already somehow implemented.
| closed | 2024-12-08T23:22:25Z | 2024-12-09T02:06:02Z | https://github.com/miguelgrinberg/Flask-SocketIO/issues/2120 | [] | maleksware | 1 |
streamlit/streamlit | data-visualization | 10,503 | unicode error | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [x] I added a very descriptive title to this issue.
- [x] I have provided sufficient information below to help reproduce this issue.
### Summary
windows10 pro, python13.2, streamlit 1.40, error while importing heyoo module like...
.
.
.
there is no error if run with windows10 pro, python9/11, streamlit1.39/1.38
### Reproducible Code Example
```Python
from heyoo import WhatsApp
```
### Steps To Reproduce
_No response_
### Expected Behavior
_No response_
### Current Behavior
UnicodeEncodeError: 'utf-8' codec can't encode characters in position 155-156: surrogates not allowed
Traceback:
File "C:\Users\niramay\PycharmProjects\clinic\.venv\Lib\site-packages\streamlit\runtime\scriptrunner\exec_code.py", line 121, in exec_func_with_error_handling
result = func()
File "C:\Users\niramay\PycharmProjects\clinic\.venv\Lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 591, in code_to_exec
exec(code, module.__dict__)
~~~~^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\niramay\PycharmProjects\clinic\pages\zPrescription.py", line 23, in <module>
import util
File "C:\Users\niramay\PycharmProjects\clinic\util.py", line 39, in <module>
from heyoo import WhatsApp
### Is this a regression?
- [x] Yes, this used to work in a previous version.
### Debug info
- Streamlit version:1.40
- Python version:13.2
- Operating System:windows10pro
- Browser:chrome latest
### Additional Information
_No response_ | open | 2025-02-25T07:19:20Z | 2025-03-03T18:53:45Z | https://github.com/streamlit/streamlit/issues/10503 | [
"type:bug",
"status:confirmed",
"priority:P4"
] | madhukarjadhav736 | 3 |
hatchet-dev/hatchet | fastapi | 525 | how to create dynamic steps | closed | 2024-05-23T00:47:29Z | 2024-05-28T16:02:49Z | https://github.com/hatchet-dev/hatchet/issues/525 | [] | koolay | 1 |
|
junyanz/pytorch-CycleGAN-and-pix2pix | computer-vision | 1,102 | Same output images for different input images in pix2pix model | I tried to implement the pix2pix model with the KAIST thermal-visible dataset to transfer thermal images to visible images. I trained for around 20 epochs and the test results are very unexpected. All the generated fake images for different test images are the same, with no detail.
I have tried many variations while training and ended up with the same problem.
@junyanz please help me with this issue. | open | 2020-07-24T06:34:54Z | 2020-07-24T08:54:24Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1102 | [] | mAmulya | 1 |
Teemu/pytest-sugar | pytest | 218 | Feature Request: Integration with Pytest-Parallel as well | I am looking into using pytest-sugar, and the fact that it supports pytest-xdist is great, but I was wondering if you were planning on supporting pytest-parallel as well. It has some advantages over pytest-xdist, especially for multithreaded tests / concurrency tests.
It mostly works at the moment out of the box, the only thing is that the progress bar does not currently work. https://pypi.org/project/pytest-parallel/ | open | 2020-11-05T17:55:56Z | 2022-11-05T08:47:19Z | https://github.com/Teemu/pytest-sugar/issues/218 | [
"enhancement",
"3rd-party"
] | Skylion007 | 0 |
lux-org/lux | jupyter | 419 | ValueError: Field "duration" has type "timedelta64[ns]" which is not supported by Altair. | It worked fine with 2 columns of datetimes and 2 columns of integers. I used df.apply(f(x)) to create a timedelta column and got the following warning. Full text:
```
/opt/conda/lib/python3.7/site-packages/IPython/core/formatters.py:918: UserWarning:
Unexpected error in rendering Lux widget and recommendations. Falling back to Pandas display.
Please report the following issue on Github: https://github.com/lux-org/lux/issues
/opt/conda/lib/python3.7/site-packages/lux/core/frame.py:632: UserWarning:Traceback (most recent call last):
File "/opt/conda/lib/python3.7/site-packages/lux/core/frame.py", line 594, in _ipython_display_
self.maintain_recs()
File "/opt/conda/lib/python3.7/site-packages/lux/core/frame.py", line 451, in maintain_recs
self._widget = rec_df.render_widget()
File "/opt/conda/lib/python3.7/site-packages/lux/core/frame.py", line 681, in render_widget
widgetJSON = self.to_JSON(self._rec_info, input_current_vis=input_current_vis)
File "/opt/conda/lib/python3.7/site-packages/lux/core/frame.py", line 721, in to_JSON
recCollection = LuxDataFrame.rec_to_JSON(rec_infolist)
File "/opt/conda/lib/python3.7/site-packages/lux/core/frame.py", line 749, in rec_to_JSON
chart = vis.to_code(language=lux.config.plotting_backend, prettyOutput=False)
File "/opt/conda/lib/python3.7/site-packages/lux/vis/Vis.py", line 334, in to_code
return self.to_vegalite(**kwargs)
File "/opt/conda/lib/python3.7/site-packages/lux/vis/Vis.py", line 310, in to_vegalite
self._code = renderer.create_vis(self)
File "/opt/conda/lib/python3.7/site-packages/lux/vislib/altair/AltairRenderer.py", line 99, in create_vis
chart_dict = chart.chart.to_dict()
File "/opt/conda/lib/python3.7/site-packages/altair/vegalite/v4/api.py", line 373, in to_dict
dct = super(TopLevelMixin, copy).to_dict(*args, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/altair/utils/schemapi.py", line 328, in to_dict
context=context,
File "/opt/conda/lib/python3.7/site-packages/altair/utils/schemapi.py", line 62, in _todict
for k, v in obj.items()
File "/opt/conda/lib/python3.7/site-packages/altair/utils/schemapi.py", line 63, in <dictcomp>
if v is not Undefined
File "/opt/conda/lib/python3.7/site-packages/altair/utils/schemapi.py", line 58, in _todict
return [_todict(v, validate, context) for v in obj]
File "/opt/conda/lib/python3.7/site-packages/altair/utils/schemapi.py", line 58, in <listcomp>
return [_todict(v, validate, context) for v in obj]
File "/opt/conda/lib/python3.7/site-packages/altair/utils/schemapi.py", line 56, in _todict
return obj.to_dict(validate=validate, context=context)
File "/opt/conda/lib/python3.7/site-packages/altair/vegalite/v4/api.py", line 363, in to_dict
copy.data = _prepare_data(original_data, context)
File "/opt/conda/lib/python3.7/site-packages/altair/vegalite/v4/api.py", line 84, in _prepare_data
data = _pipe(data, data_transformers.get())
File "/opt/conda/lib/python3.7/site-packages/toolz/functoolz.py", line 627, in pipe
data = func(data)
File "/opt/conda/lib/python3.7/site-packages/toolz/functoolz.py", line 303, in __call__
return self._partial(*args, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/altair/vegalite/data.py", line 19, in default_data_transformer
return curried.pipe(data, limit_rows(max_rows=max_rows), to_values)
File "/opt/conda/lib/python3.7/site-packages/toolz/functoolz.py", line 627, in pipe
data = func(data)
File "/opt/conda/lib/python3.7/site-packages/toolz/functoolz.py", line 303, in __call__
return self._partial(*args, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/altair/utils/data.py", line 149, in to_values
data = sanitize_dataframe(data)
File "/opt/conda/lib/python3.7/site-packages/altair/utils/core.py", line 317, in sanitize_dataframe
"".format(col_name=col_name, dtype=dtype)
ValueError: Field "duration" has type "timedelta64[ns]" which is not supported by Altair. Please convert to either a timestamp or a numerical value.
``` | closed | 2021-09-14T11:54:00Z | 2022-01-22T01:14:30Z | https://github.com/lux-org/lux/issues/419 | [
"bug",
"easy"
] | CiaranHaines | 4 |
pytorch/vision | computer-vision | 8,147 | Segfault in VideoClip test on python 3.8 | e.g. https://github.com/pytorch/vision/actions/runs/7099634313/job/19324043765?pr=7990
```
2023-12-05T11:15:45.8728795Z FAILED test/test_datasets_video_utils.py::TestVideo::test_video_clips_custom_fps - RuntimeError: DataLoader worker (pid(s) 35158) exited unexpectedly
2023-12-05T11:15:45.8730432Z = 1 failed, 21574 passed, 16925 skipped, 1 xfailed, 267 warnings in 1473.98s (0:24:33) =
```
This has been consistently failing for a few weeks, on 3.8 only. | closed | 2023-12-05T12:19:48Z | 2023-12-05T13:16:03Z | https://github.com/pytorch/vision/issues/8147 | [] | NicolasHug | 0 |
NVlabs/neuralangelo | computer-vision | 160 | Analytical gradient implementation | Hello, thanks for your work!
I was reproducing your project with analytical gradient but found:
`gradient = torch.autograd.grad(sdf.sum(), x, create_graph=True)[0]`
Could you explain why there is a sum operation in the code? | closed | 2023-11-26T10:56:14Z | 2023-11-26T11:23:11Z | https://github.com/NVlabs/neuralangelo/issues/160 | [] | ckrsls | 0 |
explosion/spaCy | data-science | 13,132 | Spacy-LLM code sample produces no output | Hi,
The code sample below - which is based on an example in Matthew Honnibal's blog "Against LLM maximalism" (https://explosion.ai/blog/against-llm-maximalism) - fails to produce any output. This is surprising given that the pipeline is configured to find the kind of entities present in the sentence being processed.
Note: This is a continuation of Issue #13096 (Spacy-LLM fails with storage not allocated on MPS device).
## How to reproduce the behavior
Here is the code:
```
import spacy
nlp = spacy.blank("en")
nlp.add_pipe("sentencizer")
nlp.add_pipe(
"llm",
config={
"task": {
"@llm_tasks": "spacy.NER.v1",
"labels": "SAAS_PLATFORM,PROGRAMMING_LANGUAGE,OPEN_SOURCE_LIBRARY"
},
"model": {
"@llm_models": "spacy.OpenLLaMA.v1",
"name": "open_llama_3b"
},
},
)
doc = nlp("There's no PyTorch bindings for Go. We just use Microsoft Cognitive Services.")
for ent in doc.ents:
print(ent.text, ent.label_, ent.sent)
```
## Your Environment
Platform: macOS-12.6-arm64-arm-64bit
Python Version: 3.11.4
spaCy Version: 3.6.1
| open | 2023-11-17T00:45:51Z | 2024-03-17T22:20:55Z | https://github.com/explosion/spaCy/issues/13132 | [
"feat/llm"
] | rkatriel | 16 |
microsoft/nni | machine-learning | 5,803 | nnictl resume cannot open GUI | As the title says, the GUI cannot be opened after I resume the experiment.
I observe that if the output log does not stay at "Web portal URLs: ...", the GUI will be unable to open. However, I can't find a way to keep the "nnictl resume ID" command running.
Complete log of the command:
```
[2024-08-26 13:10:28] Creating experiment, Experiment ID: z39sirw8
[2024-08-26 13:10:28] Starting web server...
[2024-08-26 13:10:29] INFO (main) Start NNI manager
[2024-08-26 13:10:29] INFO (RestServer) Starting REST server at port 8080, URL prefix: "/"
[2024-08-26 13:10:29] INFO (RestServer) REST server started.
[2024-08-26 13:10:29] INFO (NNIDataStore) Datastore initialization done
[2024-08-26 13:10:29] Setting up...
[2024-08-26 13:10:30] INFO (NNIManager) Resuming experiment: z39sirw8
[2024-08-26 13:10:30] INFO (NNIManager) Setup training service...
[2024-08-26 13:10:30] INFO (NNIManager) Setup tuner...
[2024-08-26 13:10:31] INFO (NNIManager) Number of current submitted trials: 621, where 0 is resuming.
[2024-08-26 13:10:31] INFO (NNIManager) Change NNIManager status from: INITIALIZED to: RUNNING
[2024-08-26 13:10:31] Web portal URLs: http://127.0.0.1:8080 http://172.17.0.2:8080
[2024-08-26 13:10:31] Stopping experiment, please wait...
[2024-08-26 13:10:31] Saving experiment checkpoint...
[2024-08-26 13:10:31] Stopping NNI manager, if any...
[2024-08-26 13:10:31] INFO (ShutdownManager) Initiate shutdown: REST request
[2024-08-26 13:10:31] INFO (RestServer) Stopping REST server.
[2024-08-26 13:10:31] ERROR (ShutdownManager) Error during shutting down NniManager: TypeError: Cannot read properties of undefined (reading 'getBufferedAmount')
at TunerServer.sendCommand (/usr/local/lib/python3.8/dist-packages/nni_node/core/tuner_command_channel.js:60:26)
at NNIManager.stopExperimentTopHalf (/usr/local/lib/python3.8/dist-packages/nni_node/core/nnimanager.js:303:25)
at NNIManager.stopExperiment (/usr/local/lib/python3.8/dist-packages/nni_node/core/nnimanager.js:292:20)
at /usr/local/lib/python3.8/dist-packages/nni_node/common/globals/shutdown.js:49:23
at Array.map (<anonymous>)
at ShutdownManager.shutdown (/usr/local/lib/python3.8/dist-packages/nni_node/common/globals/shutdown.js:47:51)
at ShutdownManager.initiate (/usr/local/lib/python3.8/dist-packages/nni_node/common/globals/shutdown.js:22:18)
at /usr/local/lib/python3.8/dist-packages/nni_node/rest_server/restHandler.js:366:40
at Layer.handle [as handle_request] (/usr/local/lib/python3.8/dist-packages/nni_node/node_modules/express/lib/router/layer.js:95:5)
at next (/usr/local/lib/python3.8/dist-packages/nni_node/node_modules/express/lib/router/route.js:144:13)
[2024-08-26 13:10:31] INFO (NNIManager) Change NNIManager status from: RUNNING to: STOPPING
[2024-08-26 13:10:31] INFO (NNIManager) Stopping experiment, cleaning up ...
[2024-08-26 13:10:31] INFO (ShutdownManager) Shutdown complete.
[2024-08-26 13:10:31] INFO (RestServer) REST server stopped.
[2024-08-26 13:10:31] Experiment stopped.
root@e44bc2dd4409:/workspace/MediaPipePyTorch# Traceback (most recent call last):
File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/usr/local/lib/python3.8/dist-packages/nni/__main__.py", line 85, in <module>
main()
File "/usr/local/lib/python3.8/dist-packages/nni/__main__.py", line 58, in main
dispatcher = MsgDispatcher(url, tuner, assessor)
File "/usr/local/lib/python3.8/dist-packages/nni/runtime/msg_dispatcher.py", line 71, in __init__
super().__init__(command_channel_url)
File "/usr/local/lib/python3.8/dist-packages/nni/runtime/msg_dispatcher_base.py", line 47, in __init__
self._channel.connect()
File "/usr/local/lib/python3.8/dist-packages/nni/runtime/tuner_command_channel/channel.py", line 58, in connect
self._channel.connect()
File "/usr/local/lib/python3.8/dist-packages/nni/runtime/command_channel/websocket/channel.py", line 23, in connect
self._ensure_conn()
File "/usr/local/lib/python3.8/dist-packages/nni/runtime/command_channel/websocket/channel.py", line 75, in _ensure_conn
self._conn.connect()
File "/usr/local/lib/python3.8/dist-packages/nni/runtime/command_channel/websocket/connection.py", line 65, in connect
self._ws = _wait(_connect_async(self._url))
File "/usr/local/lib/python3.8/dist-packages/nni/runtime/command_channel/websocket/connection.py", line 121, in _wait
return future.result()
File "/usr/lib/python3.8/concurrent/futures/_base.py", line 444, in result
return self.__get_result()
File "/usr/lib/python3.8/concurrent/futures/_base.py", line 389, in __get_result
raise self._exception
File "/usr/local/lib/python3.8/dist-packages/nni/runtime/command_channel/websocket/connection.py", line 135, in _connect_async
return await websockets.connect(url, max_size=None) # type: ignore
File "/usr/local/lib/python3.8/dist-packages/websockets/legacy/client.py", line 655, in __await_impl_timeout__
return await self.__await_impl__()
File "/usr/local/lib/python3.8/dist-packages/websockets/legacy/client.py", line 659, in __await_impl__
_transport, _protocol = await self._create_connection()
File "/usr/lib/python3.8/asyncio/base_events.py", line 1033, in create_connection
raise OSError('Multiple exceptions: {}'.format(
OSError: Multiple exceptions: [Errno 111] Connect call failed ('127.0.0.1', 8080), [Errno 99] Cannot assign requested address
```
| open | 2024-08-26T12:43:00Z | 2024-08-26T13:20:43Z | https://github.com/microsoft/nni/issues/5803 | [] | jimmy133719 | 0 |
polarsource/polar | fastapi | 5,104 | Discount not selected by default when using payment link with discount preset | ### Description
When creating a payment link with a discount selected by default, it will automatically apply it and also show it in the price breakdown, but it won't be selected in the discount input.
### Current Behavior
Currently the preselected discount is not being shown in the discount input.
### Expected Behavior
The discount should be selected in the discount input.
### Screenshots
When opening the payment URL you can see the discount being properly applied, but the discount input is empty:

This is how it should look like:

| closed | 2025-02-25T15:26:05Z | 2025-02-28T12:17:51Z | https://github.com/polarsource/polar/issues/5104 | [
"bug"
] | MarcMogdanz | 1 |
scikit-optimize/scikit-optimize | scikit-learn | 992 | Ubuntu 20.04: RuntimeError: can't start new thread | My objective was to use the Scikit-Optimize library in Python to minimize the function value in order to find the optimized parameters for an XGBoost model. The process involves running the model with different random parameters 5,000 times.
However, it seems that the loop stopped at some point and gave me a RuntimeError: can't start new thread. I am using Ubuntu 20.04 and running Python 3.8.5, with Scikit-Optimize version 0.8.1. When I ran the same code on Windows 10 I did not encounter this RuntimeError; however, the code runs much more slowly there.
It looks like the issue happens in the parallel.py module of the joblib library. If I reduce the number of iterations from 5,000 to 4,000, the code runs a bit longer (i.e. more iterations) before the threads are exhausted (giving me the "can't start new thread" error).
I think I may need a thread pool to solve this issue, but after searching the web I had no luck finding a solution to implement one.
Below is a simplified version of the code:
```
#This function will be passed to Scikit-Optimize to find the optimized parameters (Params)
def find_best_xgboost_para(params):
    # Defines the parameters that I want to optimize
    learning_rate,gamma,max_depth,min_child_weight,reg_alpha,reg_lambda,subsample,max_bin,num_parallel_tree,colsamp_lev,colsamp_tree,StopSteps\
        =float(params[0]),float(params[1]),int(params[2]),int(params[3]),\
        int(params[4]),int(params[5]),float(params[6]),int(params[7]),int(params[8]),float(params[9]),float(params[10]),int(params[11])
    xgbc=XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=colsamp_lev,
                       colsample_bytree=colsamp_tree, gamma=gamma, learning_rate=learning_rate, max_delta_step=0,
                       max_depth=max_depth, min_child_weight=min_child_weight, missing=None, n_estimators=nTrees,
                       objective='binary:logistic',random_state=101, reg_alpha=reg_alpha,
                       reg_lambda=reg_lambda, scale_pos_weight=1,seed=101,
                       subsample=subsample,importance_type='gain',gpu_id=GPUID,max_bin=max_bin,
                       tree_method='gpu_hist',num_parallel_tree=num_parallel_tree,predictor='gpu_predictor',verbosity=0,
                       refresh_leaf=0,grow_policy='depthwise',process_type=TreeUpdateStatus,single_precision_histogram=SinglePrecision)
    tscv = TimeSeriesSplit(CV_nSplit)
    error_data=xgboost.cv(xgbc.get_xgb_params(), CVTrain, num_boost_round=CVBoostRound, nfold=None, stratified=False, folds=tscv, metrics=(), \
                          obj=None, feval=f1_eval, maximize=False, early_stopping_rounds=StopSteps, fpreproc=None, as_pandas=True, \
                          verbose_eval=True, show_stdv=True, seed=101, shuffle=shuffle_trig)
    eval_set = [(X_train, y_train), (X_test, y_test)]
    xgbc.fit(X_train, y_train, eval_metric=f1_eval, early_stopping_rounds=StopSteps, eval_set=eval_set,verbose=True)
    xgbc_predictions=xgbc.predict(X_test)
    error=(1-metrics.f1_score(y_test, xgbc_predictions,average='macro'))
    del xgbc
    return error
#Define the range of values that Scikit-Optimize can choose from to find the optimized parameters
lr_low, lr_high=float(XgParamDict['lr_low']), float(XgParamDict['lr_high'])
gama_low, gama_high=float(XgParamDict['gama_low']), float(XgParamDict['gama_high'])
depth_low, depth_high=int(XgParamDict['depth_low']), int(XgParamDict['depth_high'])
child_weight_low, child_weight_high=int(XgParamDict['child_weight_low']), int(XgParamDict['child_weight_high'])
alpha_low,alpha_high=int(XgParamDict['alpha_low']),int(XgParamDict['alpha_high'])
lambda_low,lambda_high=int(XgParamDict['lambda_low']),int(XgParamDict['lambda_high'])
subsamp_low,subsamp_high=float(XgParamDict['subsamp_low']),float(XgParamDict['subsamp_high'])
max_bin_low,max_bin_high=int(XgParamDict['max_bin_low']),int(XgParamDict['max_bin_high'])
num_parallel_tree_low,num_parallel_tree_high=int(XgParamDict['num_parallel_tree_low']),int(XgParamDict['num_parallel_tree_high'])
colsamp_lev_low,colsamp_lev_high=float(XgParamDict['colsamp_lev_low']),float(XgParamDict['colsamp_lev_high'])
colsamp_tree_low,colsamp_tree_high=float(XgParamDict['colsamp_tree_low']),float(XgParamDict['colsamp_tree_high'])
StopSteps_low,StopSteps_high=float(XgParamDict['StopSteps_low']),float(XgParamDict['StopSteps_high'])
#Pass the target function (find_best_xgboost_para) as well as parameter ranges to Scikit-Optimize, 'res' will be an array of values that will need to be pass to another function
res=gbrt_minimize(find_best_xgboost_para,[(lr_low,lr_high),(gama_low, gama_high),(depth_low,depth_high),(child_weight_low,child_weight_high),\
(alpha_low,alpha_high),(lambda_low,lambda_high),(subsamp_low,subsamp_high),(max_bin_low,max_bin_high),\
(num_parallel_tree_low,num_parallel_tree_high),(colsamp_lev_low,colsamp_lev_high),(colsamp_tree_low,colsamp_tree_high),\
(StopSteps_low,StopSteps_high)],random_state=101,n_calls=5000,n_random_starts=1500,verbose=True,n_jobs=-1)
```
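As a possible direction for the thread-pool idea above (not a verified fix), the error message further below points at skopt fitting its GBRT surrogate through a `threading` joblib backend, so one thing I may try is bounding `n_jobs` instead of passing -1. A sketch, reusing the same names as my code above:

```python
# Hypothetical mitigation: same call as above, but with a small fixed n_jobs so each
# optimizer.tell() step opens a bounded thread pool rather than one sized by -1.
res = gbrt_minimize(
    find_best_xgboost_para,
    [(lr_low, lr_high), (gama_low, gama_high), (depth_low, depth_high),
     (child_weight_low, child_weight_high), (alpha_low, alpha_high),
     (lambda_low, lambda_high), (subsamp_low, subsamp_high),
     (max_bin_low, max_bin_high), (num_parallel_tree_low, num_parallel_tree_high),
     (colsamp_lev_low, colsamp_lev_high), (colsamp_tree_low, colsamp_tree_high),
     (StopSteps_low, StopSteps_high)],
    random_state=101, n_calls=5000, n_random_starts=1500, verbose=True,
    n_jobs=2,
)
```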
Below is the error message:
```
Traceback (most recent call last):
File "/home/FactorOpt.py", line 91, in <module>Opt(**FactorOptDict)
File "/home/anaconda3/lib/python3.8/site-packages/skopt/optimizer/gbrt.py", line 179, in gbrt_minimize return base_minimize(func, dimensions, base_estimator,
File "/home/anaconda3/lib/python3.8/site-packages/skopt/optimizer/base.py", line 301, in base_minimize
next_y = func(next_x)
File "/home/anaconda3/lib/python3.8/modelling/FactorOpt.py", line 456, in xgboost_opt
res=gbrt_minimize(find_best_xgboost_para,[(lr_low,lr_high),(gama_low, gama_high),(depth_low,depth_high),(child_weight_low,child_weight_high),\
File "/home/anaconda3/lib/python3.8/site-packages/skopt/optimizer/gbrt.py", line 179, in gbrt_minimize
return base_minimize(func, dimensions, base_estimator,
File "/home/anaconda3/lib/python3.8/site-packages/skopt/optimizer/base.py", line 302, in base_minimize
result = optimizer.tell(next_x, next_y)
File "/home/anaconda3/lib/python3.8/site-packages/skopt/optimizer/optimizer.py", line 493, in tell
return self._tell(x, y, fit=fit)
File "/home/anaconda3/lib/python3.8/site-packages/skopt/optimizer/optimizer.py", line 536, in _tell
est.fit(self.space.transform(self.Xi), self.yi)
File "/home/anaconda3/lib/python3.8/site-packages/skopt/learning/gbrt.py", line 85, in fit
self.regressors_ = Parallel(n_jobs=self.n_jobs, backend='threading')(
File "/home/anaconda3/lib/python3.8/site-packages/joblib/parallel.py", line 1048, in __call__
if self.dispatch_one_batch(iterator):
File "/home/anaconda3/lib/python3.8/site-packages/joblib/parallel.py", line 866, in dispatch_one_batch
self._dispatch(tasks)
File "/home/anaconda3/lib/python3.8/site-packages/joblib/parallel.py", line 784, in _dispatch
job = self._backend.apply_async(batch, callback=cb)
File "/home/anaconda3/lib/python3.8/site-packages/joblib/_parallel_backends.py", line 252, in apply_async
return self._get_pool().apply_async(
File "/home/anaconda3/lib/python3.8/site-packages/joblib/_parallel_backends.py", line 407, in _get_pool
self._pool = ThreadPool(self._n_jobs)
File "/home/anaconda3/lib/python3.8/multiprocessing/pool.py", line 925, in __init__
Pool.__init__(self, processes, initializer, initargs)
File "/home/anaconda3/lib/python3.8/multiprocessing/pool.py", line 232, in __init__
self._worker_handler.start()
File "/home/anaconda3/lib/python3.8/threading.py", line 852, in start
_start_new_thread(self._bootstrap, ())
RuntimeError: can't start new thread
```
| open | 2021-02-01T14:57:51Z | 2021-02-01T16:41:58Z | https://github.com/scikit-optimize/scikit-optimize/issues/992 | [] | tigertimwu | 2 |
BlinkDL/RWKV-LM | pytorch | 148 | 训练到这一步报错 build.ninja... | mitting ninja build file /home/hope/.cache/torch_extensions/py310_cu117/wkv_1024/build.ninja...
Building extension module wkv_1024...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/2] /usr/bin/nvcc -DTORCH_EXTENSION_NAME=wkv_1024 -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /home/hope/miniconda3/envs/rwkv/lib/python3.10/site-packages/torch/include -isystem /home/hope/miniconda3/envs/rwkv/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -isystem /home/hope/miniconda3/envs/rwkv/lib/python3.10/site-packages/torch/include/TH -isystem /home/hope/miniconda3/envs/rwkv/lib/python3.10/site-packages/torch/include/THC -isystem /home/hope/miniconda3/envs/rwkv/include/python3.10 -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 --compiler-options '-fPIC' -res-usage --maxrregcount 60 --use_fast_math -O3 -Xptxas -O3 --extra-device-vectorization -DTmax=1024 -std=c++14 -c /home/hope/work/RWKV-LM/RWKV-v4neo/cuda/wkv_cuda.cu -o wkv_cuda.cuda.o
FAILED: wkv_cuda.cuda.o
/usr/bin/nvcc -DTORCH_EXTENSION_NAME=wkv_1024 -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /home/hope/miniconda3/envs/rwkv/lib/python3.10/site-packages/torch/include -isystem /home/hope/miniconda3/envs/rwkv/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -isystem /home/hope/miniconda3/envs/rwkv/lib/python3.10/site-packages/torch/include/TH -isystem /home/hope/miniconda3/envs/rwkv/lib/python3.10/site-packages/torch/include/THC -isystem /home/hope/miniconda3/envs/rwkv/include/python3.10 -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 --compiler-options '-fPIC' -res-usage --maxrregcount 60 --use_fast_math -O3 -Xptxas -O3 --extra-device-vectorization -DTmax=1024 -std=c++14 -c /home/hope/work/RWKV-LM/RWKV-v4neo/cuda/wkv_cuda.cu -o wkv_cuda.cuda.o
In file included from /usr/include/cuda_runtime.h:83,
from <command-line>:
/usr/include/crt/host_config.h:138:2: error: #error -- unsupported GNU version! gcc versions later than 8 are not supported!
138 | #error -- unsupported GNU version! gcc versions later than 8 are not supported!
| ^~~~~
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
File "/home/hope/miniconda3/envs/rwkv/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1900, in _run_ninja_build
subprocess.run(
File "/home/hope/miniconda3/envs/rwkv/lib/python3.10/subprocess.py", line 526, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/hope/work/RWKV-LM/RWKV-v4neo/train.py", line 307, in <module>
from src.model import RWKV
File "/home/hope/work/RWKV-LM/RWKV-v4neo/src/model.py", line 80, in <module>
wkv_cuda = load(name=f"wkv_{T_MAX}", sources=["cuda/wkv_op.cpp", "cuda/wkv_cuda.cu"], verbose=True, extra_cuda_cflags=["-res-usage", "--maxrregcount 60", "--use_fast_math", "-O3", "-Xptxas -O3", "--extra-device-vectorization", f"-DTmax={T_MAX}"])
File "/home/hope/miniconda3/envs/rwkv/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1284, in load
return _jit_compile(
File "/home/hope/miniconda3/envs/rwkv/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1508, in _jit_compile
_write_ninja_file_and_build_library(
File "/home/hope/miniconda3/envs/rwkv/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1623, in _write_ninja_file_and_build_library
_run_ninja_build(
File "/home/hope/miniconda3/envs/rwkv/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1916, in _run_ninja_build
raise RuntimeError(message) from e
RuntimeError: Error building extension 'wkv_1024' | open | 2023-06-20T02:41:59Z | 2024-05-20T12:12:40Z | https://github.com/BlinkDL/RWKV-LM/issues/148 | [] | hopeforus | 4 |
microsoft/hummingbird | scikit-learn | 495 | XGBoost does not work for 1.4.0 for hummingbird | In #492 , we needed to pin to `xgboost<1.4.0` because `==1.4.0` ([released](https://github.com/dmlc/xgboost/commit/b9a4f3336af7c78f9f7b66f7c91a419108b7ec83) on April 12th 2021) broke our tests. We need to investigate why.
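A quick standalone probe (a sketch, not something already in our test suite) would be to check which feature-count attributes the fitted sklearn wrapper still exposes under 1.4.0, since our converter relies on the private `_features_count`:

```python
import numpy as np
import xgboost as xgb

model = xgb.XGBClassifier(n_estimators=2, max_depth=2)
model.fit(np.random.rand(20, 5), np.random.randint(2, size=20))

# If 1.4.0 dropped the private attribute, we probably need a public fallback
# for inferring the number of input features.
print(hasattr(model, "_features_count"), getattr(model, "n_features_in_", None))
```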
From [the pipeline](https://github.com/microsoft/hummingbird/pull/490/checks?check_run_id=2328236386):
```
tests/test_xgboost_converter.py F..FF...sFF...FFF...sFF..
=================================== FAILURES ===================================
__________ TestXGBoostConverter.test_float64_xgb_classifier_converter __________
self = <test_xgboost_converter.TestXGBoostConverter testMethod=test_float64_xgb_classifier_converter>
@unittest.skipIf(not xgboost_installed(), reason="XGBoost test requires XGBoost installed")
def test_float64_xgb_classifier_converter(self):
warnings.filterwarnings("ignore")
num_classes = 3
for max_depth in [1, 3, 8, 10, 12]:
model = xgb.XGBClassifier(n_estimators=10, max_depth=max_depth)
np.random.seed(0)
X = np.random.rand(100, 200)
y = np.random.randint(num_classes, size=100)
model.fit(X, y)
> torch_model = hummingbird.ml.convert(model, "torch", [])
tests/test_xgboost_converter.py:175:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/opt/hostedtoolcache/Python/3.6.13/x64/lib/python3.6/site-packages/hummingbird/ml/convert.py:433: in convert
return _convert_common(model, backend, test_input, device, extra_config)
/opt/hostedtoolcache/Python/3.6.13/x64/lib/python3.6/site-packages/hummingbird/ml/convert.py:383: in _convert_common
return _convert_xgboost(model, backend_formatted, test_input, device, extra_config)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
model = XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1,
colsample_bynode=1, colsample_bytre...lambda=1, scale_pos_weight=None, subsample=1,
tree_method='exact', validate_parameters=1, verbosity=None)
backend = 'torch', test_input = [], device = 'cpu'
extra_config = {'container': True, 'n_threads': 2}
def _convert_xgboost(model, backend, test_input, device, extra_config={}):
"""
This function is used to generate a *backend* model from a given input [XGBoost] model.
[XGBoost]: https://xgboost.readthedocs.io/
"""
assert (
xgboost_installed()
), "To convert XGboost models you need to instal XGBoost (or `pip install hummingbird-ml[extra]`)."
# XGBoostRegressor and Classifier have different APIs for extracting the number of features.
# In the former case we need to infer them from the test_input.
if "_features_count" in dir(model):
extra_config[constants.N_FEATURES] = model._features_count
elif test_input is not None:
if type(test_input) is np.ndarray and len(test_input.shape) == 2:
extra_config[constants.N_FEATURES] = test_input.shape[1]
else:
raise RuntimeError(
"XGBoost converter is not able to infer the number of input features.\
Please pass a test_input to convert or \
> fill an issue at https://github.com/microsoft/hummingbird/."
)
E RuntimeError: XGBoost converter is not able to infer the number of input features. Please pass a test_input to convert or fill an issue at https://github.com/microsoft/hummingbird/.
/opt/hostedtoolcache/Python/3.6.13/x64/lib/python3.6/site-packages/hummingbird/ml/convert.py:138: RuntimeError
----------------------------- Captured stdout call -----------------------------
[22:13:14] WARNING: ../src/learner.cc:1095: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
____________ TestXGBoostConverter.test_run_xgb_classifier_converter ____________
self = <test_xgboost_converter.TestXGBoostConverter testMethod=test_run_xgb_classifier_converter>
@unittest.skipIf(not xgboost_installed(), reason="XGBoost test requires XGBoost installed")
def test_run_xgb_classifier_converter(self):
warnings.filterwarnings("ignore")
for extra_config_param in ["tree_trav", "perf_tree_trav", "gemm"]:
model = xgb.XGBClassifier(n_estimators=1, max_depth=1)
np.random.seed(0)
X = np.random.rand(1, 1)
X = np.array(X, dtype=np.float32)
y = np.random.randint(2, size=1)
model.fit(X, y)
> torch_model = hummingbird.ml.convert(model, "torch", [], extra_config={"tree_implementation": extra_config_param})
tests/test_xgboost_converter.py:223:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/opt/hostedtoolcache/Python/3.6.13/x64/lib/python3.6/site-packages/hummingbird/ml/convert.py:433: in convert
return _convert_common(model, backend, test_input, device, extra_config)
/opt/hostedtoolcache/Python/3.6.13/x64/lib/python3.6/site-packages/hummingbird/ml/convert.py:383: in _convert_common
return _convert_xgboost(model, backend_formatted, test_input, device, extra_config)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
model = XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1,
colsample_bynode=1, colsample_bytre...eg_lambda=1, scale_pos_weight=1, subsample=1,
tree_method='exact', validate_parameters=1, verbosity=None)
backend = 'torch', test_input = [], device = 'cpu'
extra_config = {'container': True, 'n_threads': 2, 'tree_implementation': 'tree_trav'}
def _convert_xgboost(model, backend, test_input, device, extra_config={}):
"""
This function is used to generate a *backend* model from a given input [XGBoost] model.
[XGBoost]: https://xgboost.readthedocs.io/
"""
assert (
xgboost_installed()
), "To convert XGboost models you need to instal XGBoost (or `pip install hummingbird-ml[extra]`)."
# XGBoostRegressor and Classifier have different APIs for extracting the number of features.
# In the former case we need to infer them from the test_input.
if "_features_count" in dir(model):
extra_config[constants.N_FEATURES] = model._features_count
elif test_input is not None:
if type(test_input) is np.ndarray and len(test_input.shape) == 2:
extra_config[constants.N_FEATURES] = test_input.shape[1]
else:
raise RuntimeError(
"XGBoost converter is not able to infer the number of input features.\
Please pass a test_input to convert or \
> fill an issue at https://github.com/microsoft/hummingbird/."
)
E RuntimeError: XGBoost converter is not able to infer the number of input features. Please pass a test_input to convert or fill an issue at https://github.com/microsoft/hummingbird/.
/opt/hostedtoolcache/Python/3.6.13/x64/lib/python3.6/site-packages/hummingbird/ml/convert.py:138: RuntimeError
----------------------------- Captured stdout call -----------------------------
[22:13:15] WARNING: ../src/learner.cc:1095: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'binary:logistic' was changed from 'error' to 'logloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
__________ TestXGBoostConverter.test_xgb_binary_classifier_converter ___________
self = <test_xgboost_converter.TestXGBoostConverter testMethod=test_xgb_binary_classifier_converter>
@unittest.skipIf(not xgboost_installed(), reason="XGBoost test requires XGBoost installed")
def test_xgb_binary_classifier_converter(self):
> self._run_xgb_classifier_converter(2)
``` | closed | 2021-04-14T16:47:25Z | 2021-06-19T23:24:53Z | https://github.com/microsoft/hummingbird/issues/495 | [] | ksaur | 2 |
lukas-blecher/LaTeX-OCR | pytorch | 289 | Normalize latex code | Some words are separated by spaces (like cos, sin) after executing the preprocessing code preprocess_formulas.py, but yours aren't.
Here is my own latex data

Here is your latex data

| open | 2023-07-05T10:00:49Z | 2023-07-11T10:23:09Z | https://github.com/lukas-blecher/LaTeX-OCR/issues/289 | [] | yiMangGuo | 1 |
mwaskom/seaborn | data-visualization | 3,647 | UnboundLocalError: local variable 'boxprops' referenced before assignment | On Python 3.9.7, with a fresh pip install seaborn, at line 1373 of [this function](https://github.com/stefan-jansen/pyfolio-reloaded/blob/main/src/pyfolio/plotting.py), with this call:
```
sns.boxplot(
data=[is_returns, is_weekly, is_monthly],
palette=["#4c72B0", "#55A868", "#CCB974"],
ax=ax,
**kwargs,
)
```
I get this error:
```
File [~/Desktop/traders/rlpy/lib/python3.9/site-packages/pyfolio/plotting.py:1373](http://localhost:8888/lab/tree/Asset-Portfolio-Management-usingDeep-Reinforcement-Learning-/rlpy/lib/python3.9/site-packages/pyfolio/plotting.py#line=1372), in plot_return_quantiles(returns, live_start_date, ax, **kwargs)
1371 is_weekly = ep.aggregate_returns(is_returns, "weekly")
1372 is_monthly = ep.aggregate_returns(is_returns, "monthly")
-> 1373 sns.boxplot(
1374 data=[is_returns, is_weekly, is_monthly],
1375 palette=["#4c72B0", "#55A868", "#CCB974"],
1376 ax=ax,
1377 **kwargs,
1378 )
1380 if live_start_date is not None:
1381 oos_returns = returns.loc[returns.index >= live_start_date]
File [~/Desktop/traders/rlpy/lib/python3.9/site-packages/seaborn/categorical.py:1634](http://localhost:8888/lab/tree/Asset-Portfolio-Management-usingDeep-Reinforcement-Learning-/rlpy/lib/python3.9/site-packages/seaborn/categorical.py#line=1633), in boxplot(data, x, y, hue, order, hue_order, orient, color, palette, saturation, fill, dodge, width, gap, whis, linecolor, linewidth, fliersize, hue_norm, native_scale, log_scale, formatter, legend, ax, **kwargs)
1627 color = _default_color(
1628 ax.fill_between, hue, color,
1629 {k: v for k, v in kwargs.items() if k in ["c", "color", "fc", "facecolor"]},
1630 saturation=saturation,
1631 )
1632 linecolor = p._complement_color(linecolor, color, p._hue_map)
-> 1634 p.plot_boxes(
1635 width=width,
1636 dodge=dodge,
1637 gap=gap,
1638 fill=fill,
1639 whis=whis,
1640 color=color,
1641 linecolor=linecolor,
1642 linewidth=linewidth,
1643 fliersize=fliersize,
1644 plot_kws=kwargs,
1645 )
1647 p._add_axis_labels(ax)
1648 p._adjust_cat_axis(ax, axis=p.orient)
File [~/Desktop/traders/rlpy/lib/python3.9/site-packages/seaborn/categorical.py:745](http://localhost:8888/lab/tree/Asset-Portfolio-Management-usingDeep-Reinforcement-Learning-/rlpy/lib/python3.9/site-packages/seaborn/categorical.py#line=744), in _CategoricalPlotter.plot_boxes(self, width, dodge, gap, fill, whis, color, linecolor, linewidth, fliersize, plot_kws)
742 ax.add_container(BoxPlotContainer(artists))
744 legend_artist = _get_patch_legend_artist(fill)
--> 745 self._configure_legend(ax, legend_artist, boxprops)
UnboundLocalError: local variable 'boxprops' referenced before assignment
``` | open | 2024-03-03T15:47:42Z | 2024-10-02T15:43:10Z | https://github.com/mwaskom/seaborn/issues/3647 | [] | catskillsresearch | 20 |
viewflow/viewflow | django | 285 | Changing Process model instance value in admin causes error | When I tried to change the `to_size` value in admin I got the following error:
```
Traceback (most recent call last):
File "/srv/www/pmas/lib/python3.6/site-packages/django/core/handlers/exception.py", line 34, in inner
response = get_response(request)
File "/srv/www/pmas/lib/python3.6/site-packages/django/core/handlers/base.py", line 115, in _get_response
response = self.process_exception_by_middleware(e, request)
File "/srv/www/pmas/lib/python3.6/site-packages/django/core/handlers/base.py", line 113, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/srv/www/pmas/lib/python3.6/site-packages/django/contrib/admin/options.py", line 606, in wrapper
return self.admin_site.admin_view(view)(*args, **kwargs)
File "/srv/www/pmas/lib/python3.6/site-packages/django/utils/decorators.py", line 142, in _wrapped_view
response = view_func(request, *args, **kwargs)
File "/srv/www/pmas/lib/python3.6/site-packages/django/views/decorators/cache.py", line 44, in _wrapped_view_func
response = view_func(request, *args, **kwargs)
File "/srv/www/pmas/lib/python3.6/site-packages/django/contrib/admin/sites.py", line 223, in inner
return view(request, *args, **kwargs)
File "/srv/www/pmas/lib/python3.6/site-packages/django/contrib/admin/options.py", line 1637, in change_view
return self.changeform_view(request, object_id, form_url, extra_context)
File "/srv/www/pmas/lib/python3.6/site-packages/django/utils/decorators.py", line 45, in _wrapper
return bound_method(*args, **kwargs)
File "/srv/www/pmas/lib/python3.6/site-packages/django/utils/decorators.py", line 142, in _wrapped_view
response = view_func(request, *args, **kwargs)
File "/srv/www/pmas/lib/python3.6/site-packages/django/contrib/admin/options.py", line 1522, in changeform_view
return self._changeform_view(request, object_id, form_url, extra_context)
File "/srv/www/pmas/lib/python3.6/site-packages/django/contrib/admin/options.py", line 1554, in _changeform_view
form_validated = form.is_valid()
File "/srv/www/pmas/lib/python3.6/site-packages/django/forms/forms.py", line 185, in is_valid
return self.is_bound and not self.errors
File "/srv/www/pmas/lib/python3.6/site-packages/django/forms/forms.py", line 180, in errors
self.full_clean()
File "/srv/www/pmas/lib/python3.6/site-packages/django/forms/forms.py", line 383, in full_clean
self._post_clean()
File "/srv/www/pmas/lib/python3.6/site-packages/django/forms/models.py", line 398, in _post_clean
self.instance = construct_instance(self, self.instance, opts.fields, opts.exclude)
File "/srv/www/pmas/lib/python3.6/site-packages/django/forms/models.py", line 60, in construct_instance
f.save_form_data(instance, cleaned_data[f.name])
File "/srv/www/pmas/lib/python3.6/site-packages/django/db/models/fields/__init__.py", line 859, in save_form_data
setattr(instance, self.name, data)
File "/srv/www/pmas/lib/python3.6/site-packages/viewflow/fields.py", line 79, in __set__
obj.__dict__[self.field.name] = self.field.to_python(value)
File "/srv/www/pmas/lib/python3.6/site-packages/viewflow/fields.py", line 104, in to_python
return import_flow_by_ref(value)
File "/srv/www/pmas/lib/python3.6/site-packages/viewflow/fields.py", line 14, in import_flow_by_ref
app_label, flow_path = flow_strref.split('/')
ValueError: not enough values to unpack (expected 2, got 1)
```
my models
```python
class BaseChangeRequest(Process):
customer_code = models.CharField(max_length=25)
requester_email = models.EmailField()
vmh = models.CharField(max_length=255)
node = models.CharField(max_length=25, blank=False)
vmid = models.IntegerField()
vm_name = models.CharField(max_length=25)
approved = models.NullBooleanField(blank=False, choices=((False, 'No'), (True, 'Yes')))
decided_by = models.ForeignKey(get_user_model(), null=True, on_delete=models.DO_NOTHING)
decision_reason = models.TextField(blank=True)
note = models.TextField(blank=True)
class Meta:
abstract = True
class DiskSizeChangeRequest(BaseChangeRequest):
disk = models.CharField(max_length=25)
disk_description = models.CharField(max_length=255)
tier = models.SmallIntegerField(null=True)
from_size = models.IntegerField(null=True)
to_size = models.IntegerField()
```
admin.py
```
@admin.register(DiskSizeChangeRequest)
class DiskSizeChangeRequestAdmin(admin.ModelAdmin):
list_display = ('pk', 'customer_code', 'vmid', 'vm_name', 'requester_email', 'disk', 'from_size', 'to_size')
``` | closed | 2020-08-26T01:28:58Z | 2024-02-14T12:13:47Z | https://github.com/viewflow/viewflow/issues/285 | [
"request/enhancement",
"dev/flow"
] | variable | 2 |
tradingstrategy-ai/web3-ethereum-defi | pytest | 212 | Set gas price manually | Hi
How can I set the gas price manually in a Uniswap v2 or v3 DEX token swap? | closed | 2024-06-12T15:13:52Z | 2024-08-21T07:38:15Z | https://github.com/tradingstrategy-ai/web3-ethereum-defi/issues/212 | [] | azarshab-saeed | 1
tflearn/tflearn | tensorflow | 358 | 'tflearn_logs' dir permission issue. | I get a '/tmp/tflearn_logs/ permission denied' error.
My traceback is attached.
I think we can handle that by setting permissions for all users when tflearn creates the tflearn_logs dir.
> Run id: Q17392
> Log directory: /tmp/tflearn_logs/
> Traceback (most recent call last):
> File "learner.py", line 82, in <module>
> model.fit(trainX, trainY, validation_set=0.1, n_epoch = N_EPOCH, show_metric=True, batch_size=256, snapshot_epoch=True, snapshot_step=500)
> File "/usr/local/lib/python2.7/dist-packages/tflearn/models/dnn.py", line 213, in fit
> callbacks=callbacks)
> File "/usr/local/lib/python2.7/dist-packages/tflearn/helpers/trainer.py", line 231, in fit
> self.tensorboard_dir + run_id, self.session.graph_def)
> File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/summary_io.py", line 102, in **init**
> gfile.MakeDirs(self._logdir)
> File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/gfile.py", line 295, in MakeDirs
> os.makedirs(path, mode)
> File "/usr/lib/python2.7/os.py", line 157, in makedirs
> mkdir(name, mode)
> OSError: [Errno 13] Permission denied: '/tmp/tflearn_logs/Q17392'
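As a user-side stopgap while this is unfixed, pointing the logs at a writable location should avoid the crash. A rough sketch; the path is just an example and `net`, `trainX`, `trainY` are the network and data already built in my script:

```python
import tflearn

# Write TensorBoard logs to a directory the current user owns,
# instead of the shared default /tmp/tflearn_logs/.
model = tflearn.DNN(net, tensorboard_dir='/home/me/tflearn_logs/')
model.fit(trainX, trainY, n_epoch=N_EPOCH, show_metric=True, batch_size=256)
```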
| open | 2016-09-27T18:23:33Z | 2016-12-15T03:54:12Z | https://github.com/tflearn/tflearn/issues/358 | [] | changukshin | 2 |
Kanaries/pygwalker | pandas | 310 | [BUG] pygwalker bug report | **Describe the bug**
In config > format you can set a formatting string for numbers, but it is used only for charts, not for tables.
**To Reproduce**
Simply set a formatting string, for example '.2f', then set the mark type to table: nothing seems formatted.
**Expected behavior**
Numbers formatted in table same as in charts
**Versions**
- pygwalker version: 0.3.11
- python version: 3.11
- browser
| closed | 2023-11-08T11:35:24Z | 2024-02-03T14:15:24Z | https://github.com/Kanaries/pygwalker/issues/310 | [
"bug",
"good first issue",
"graphic-walker"
] | dev72 | 4 |
idealo/imagededup | computer-vision | 148 | Cannot install imagededup==0.0.1, .. imagededup==0.1.0 because these package versions have conflicting dependencies. | Not sure if this is a recent issue, or related to my using an M1 Mac. Nevertheless, the tail of a very long traceback is below:
```
ERROR: Cannot install imagededup==0.0.1, imagededup==0.0.2, imagededup==0.0.3, imagededup==0.0.4 and imagededup==0.1.0 because these package versions have conflicting dependencies.
The conflict is caused by:
imagededup 0.1.0 depends on tensorflow==2.0.0
imagededup 0.0.4 depends on tensorflow==2.0.0
imagededup 0.0.3 depends on tensorflow==1.13.1
imagededup 0.0.2 depends on tensorflow==1.13.1
imagededup 0.0.1 depends on tensorflow==1.13.1
To fix this you could try to:
1. loosen the range of package versions you've specified
2. remove package versions to allow pip attempt to solve the dependency conflict
``` | closed | 2021-06-07T13:54:24Z | 2022-10-28T11:27:23Z | https://github.com/idealo/imagededup/issues/148 | [] | robmarkcole | 7 |
vvbbnn00/WARP-Clash-API | flask | 67 | . | closed | 2024-02-26T12:51:10Z | 2024-02-26T12:52:42Z | https://github.com/vvbbnn00/WARP-Clash-API/issues/67 | [] | Marchccc | 0 |
|
pennersr/django-allauth | django | 3,044 | auth with github is not wokring | Hi I am trying to use django-allauth 0.43.0 and it is not working proeprerly.I am keep on getting extended uri with repeat param at the end like this. login/?next=/accounts/github/login/%3Fprocess%3Dlogin%26next%3D%252F
?next=/accounts/github/login/%3Fprocess%3Dlogin%26next%3D%252Faccounts%252Fgithub%252Flogin%252F%253Fprocess%253Dlogin%2526next%253D%25252F
Any idea what I did wrong while config django-allauth in my project? | closed | 2022-03-01T15:32:41Z | 2022-03-01T18:34:29Z | https://github.com/pennersr/django-allauth/issues/3044 | [] | jinaloo7 | 1 |
PaddlePaddle/PaddleHub | nlp | 1,563 | Question about wrapping multiple models into a single API for deployment | I currently have a detection model and a recognition model. How can I cascade the two models and then deploy them through the PaddleHub Serving service? | open | 2021-08-06T07:26:22Z | 2021-08-10T12:58:54Z | https://github.com/PaddlePaddle/PaddleHub/issues/1563 | [] | ztc125521 | 1
lux-org/lux | jupyter | 249 | Misdetected data type when numerical column contains null | When a column contains null values, it can get misclassified as a nominal type, even if it has high cardinality and is largely quantitative. For instance, in this example, the `# Instances` and `# Attributes` are detected as `nominal` but they are better suited as `quantitative`.
```python
import pandas as pd
import lux
df = pd.read_html("https://archive.ics.uci.edu/ml/datasets.php?format=&task=&att=&area=&numAtt=&numIns=&type=&sort=nameUp&view=table")[5]
df.columns = df.loc[0]
df = df.loc[1:]
df['Year'] = pd.to_datetime(df['Year'], format='%Y')
df.data_type
```
As a result, even when dropna occurs on these columns, the data type remains `nominal`.
```python
df[['# Instances','# Attributes']].dropna()
```

| closed | 2021-01-29T08:11:20Z | 2021-04-10T23:46:41Z | https://github.com/lux-org/lux/issues/249 | [
"bug",
"easy"
] | dorisjlee | 1 |
lukas-blecher/LaTeX-OCR | pytorch | 101 | How to use custom images | I used the command to generate my png images:
```
convert -density 250 -colorspace gray -flatten /tmp/eqcjtqs0o1.pdf -quality 90 /tmp/eqcjtqs0o1.png
identify /tmp/eqcjtqs0o1.png
/tmp/eqcjtqs0o1.png PNG 308x86 308x86+0+0 8-bit Gray 16c 1783B 0.000u 0:00.000
```
Then we generate the dataset. When training, it fails with the following error:
```
Traceback (most recent call last):
File "/home/long/Latex_ORC/LaTeX-OCR-main/train.py", line 89, in <module>
train(args)
File "/home/long//Latex_ORC/LaTeX-OCR-main/train.py", line 48, in train
encoded = encoder(im.to(device))
File "/home/long/anaconda3/envs/LaTexORC_py39/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/home/long/anaconda3/envs/LaTexORC_py39/lib/python3.9/site-packages/timm/models/vision_transformer.py", line 363, in forward
x = self.forward_features(x)
File "/home/long/Latex_ORC/LaTeX-OCR-main/models.py", line 80, in forward_features
x += self.pos_embed[:, pos_emb_ind]
RuntimeError: The size of tensor a (121) must match the size of tensor b (88) at non-singleton dimension 1
```
| closed | 2022-02-18T14:04:12Z | 2022-02-19T23:57:16Z | https://github.com/lukas-blecher/LaTeX-OCR/issues/101 | [] | chulaihunde | 1 |
iterative/dvc | data-science | 10,199 | partial fetch is broken | Partial fetch is broken in dvc.
## Steps to Reproduce
```bash
#! /bin/bash
set -ex
gen() {
    mkdir -p "$1"
    for i in {00..99}; do echo "$1/$i" > "$1/${i}.txt"; done
}
setup() {
    pip install -q -e "."
    pushd "$1"
    dvc init -q --no-scm
    gen data/dir01
    gen data/dir02
    ls data
    find data/dir01 -type f | wc -l
    dvc remote add -q -d local "$(mktemp -d)"
    if ! dvc add data; then
        # fix umask imports
        pip install dvc-objects==1
        dvc add data
    fi
    dvc push
    command rm -rf .dvc/{cache,tmp} data
    popd
}
repo="$(mktemp -d)"
setup "$repo" || exit 125
pushd "$repo"
dvc pull data/dir01 || exit 125
# 100 files + .dir file
[ "$(find .dvc/cache -type f | wc -l)" -eq 101 ] || exit 1
```
This breaks an important scenario, see https://dvc.org/doc/user-guide/data-management/modifying-large-datasets#modifying-remote-datasets.
This regressed in 4c0bb8d316844af39350d8bfd1a5902edbeecfca (#9424) during the index migration.
```console
git bisect start main 2342099bd876e4afe8da39d75578724de96f8346
git bisect run bash script.sh
``` | closed | 2023-12-25T14:11:01Z | 2023-12-27T13:20:34Z | https://github.com/iterative/dvc/issues/10199 | [
"bug",
"p1-important",
"regression",
"A: data-management"
] | skshetry | 2 |
indico/indico | sqlalchemy | 6,803 | Add the possibility to bind registrations to invitation email | ### Is your feature request related to a problem? Please describe.
A common workflow is to keep registration forms protected and to send invitations so that only the intended people get registered. The system, however, doesn't lock the email address in the registration form to that of the invitee. This can result in the invitee forwarding the email and someone else registering instead.
### Describe the solution you'd like
Add a "Bind to this email" configuration setting on the invitation form so that the registration will be bound to the email of the invitee. In practice, this would require the registration form to make the `Email` field read-only. Potentially, the `First Name` and `Last Name` fields could also be read-only.
### Describe alternatives you've considered
Another approach would be that registrations created from invitations are always bound to the email but this may be problematic. For instance, the event manager sends the invitation to an email address but the person already has an Indico account with a different email.
| open | 2025-03-14T14:23:25Z | 2025-03-14T14:25:59Z | https://github.com/indico/indico/issues/6803 | [
"enhancement"
] | OmeGak | 0 |
keras-team/keras | tensorflow | 20,191 | ops.image.affine_transform() does not work as a layer on GPU | Hi,
I notice that ops.image.affine_transform() does not work as part of a model on GPU.
TF version: 2.16.1
keras version: 3.5.0
Some observations from testing:
1. model.predict_step() works, but model.predict() does not
2. it works if using CPU only
3. other similar functions such as ops.image.resize(), ops.image.pad_images() work ok.
Sample code is below:
```
import tensorflow as tf
import keras
from keras import layers
from keras import ops
import numpy as np
print('TF version: {:s}'.format(tf.__version__))
print('keras version: {:s}'.format(keras.__version__))
shape = (20,18,1)
inputs = layers.Input(shape)
transform = ops.stack([1, 0, 0, 0, 1, 0, 0, 0], axis = 0)
if 1:
    img = ops.image.affine_transform(
        inputs,
        transform,
    )
else:
    img = ops.image.resize(
        inputs,
        (10,9),
    )
y = layers.Flatten()(img)
outputs = layers.Dense(1)(y)
model = keras.Model(inputs,outputs)
model.summary()
x = np.random.uniform(-1,1,(10000,*shape))
yp = model.predict_step(x)
print(yp)
yp = model.predict(x)
print(yp)
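
# Added isolation check, not part of the original report: run the same op
# eagerly and inside tf.function on the GPU to see whether graph mode alone
# triggers the failure. Shapes and values here are illustrative assumptions.
x_dbg = tf.random.uniform((4, *shape))
t_dbg = ops.cast(ops.stack([1, 0, 0, 0, 1, 0, 0, 0], axis=0), "float32")
eager_out = ops.image.affine_transform(x_dbg, t_dbg)

@tf.function
def _graph_affine(images, t):
    return ops.image.affine_transform(images, t)

graph_out = _graph_affine(x_dbg, t_dbg)
print(eager_out.shape, graph_out.shape)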
``` | open | 2024-08-30T22:55:19Z | 2024-09-18T14:24:19Z | https://github.com/keras-team/keras/issues/20191 | [
"stat:awaiting keras-eng",
"type:Bug"
] | kwchan7 | 5 |
ydataai/ydata-profiling | pandas | 867 | Integration Issue with Great Expectations | **Describe the bug**
Using pandas-profiling version 2.9.0 and great_expectations version 0.13.37 in a Python 3.8.8 environment (both installed with conda).
Running the sample code
`suite = profile.to_expectation_suite(suite_name="titanic_expectations")` results in this error:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-9-815c37b56485> in <module>
12 df = pd.read_csv(file_name)
13
---> 14 profile = ProfileReport(df, title="Pandas Profiling Report", explorative=True).to_expectation_suite(suite_name="titanic_expectations")
15 #profile.to_file(output_file="abc_pandas_profiling.html")
16
AttributeError: 'ProfileReport' object has no attribute 'to_expectation_suite'
```
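A quick check on my side (my own sketch; I'm assuming the Great Expectations integration only ships in newer pandas-profiling releases, so the exact version cut-off is a guess):
```python
import pandas_profiling
from pandas_profiling import ProfileReport

print(pandas_profiling.__version__)                      # 2.9.0 in my environment
print(hasattr(ProfileReport, "to_expectation_suite"))    # False here, so this release has no such method
```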
| closed | 2021-10-23T13:11:41Z | 2021-10-23T17:12:28Z | https://github.com/ydataai/ydata-profiling/issues/867 | [] | cyonghui81 | 1 |
ranaroussi/yfinance | pandas | 1,348 | No FinancialTemplateStore key in json data store | I wrote a python script to retrieve the latest AES key and it works fine. I'm able to get the balance sheet of MSFT.
But when I change the symbol to AAPL, I get another error. Do you know why?
```
#!/usr/bin/env python3
import yfinance as yf
msft = yf.Ticker("AAPL")
# hist = msft.history(period="max")
# print(hist)
print(msft.balance_sheet.to_string())
```
get_json_data_stores returned empty dict, and I got this error:
```
- APPL: Failed to create balance-sheet financials table for reason: YFinanceDataException("Parsing FinancialTemplateStore failed, reason: KeyError('FinancialTemplateStore')")
Empty DataFrame
Columns: []
Index: []
```
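To make the difference easier to report, here is a small comparison loop I'm using (nothing yfinance-specific beyond `Ticker.balance_sheet`; the printed exception is whatever yfinance raises internally):
```python
import yfinance as yf

for symbol in ("MSFT", "AAPL"):
    try:
        bs = yf.Ticker(symbol).balance_sheet
        print(symbol, "-> rows:", 0 if bs is None else len(bs.index))
    except Exception as exc:
        print(symbol, "-> failed:", exc)
```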
| closed | 2023-01-26T02:58:07Z | 2023-07-09T21:21:33Z | https://github.com/ranaroussi/yfinance/issues/1348 | [] | CaledoniaProject | 2 |
GibbsConsulting/django-plotly-dash | plotly | 137 | Cannot find the static files when running the demo app. FileNotFoundError bootstrap.min.css | I have installed the dependencies and started the redis server. Then I followed the configuration instructions in the `README.md` document, and when I open the 7th demo I encounter an error:
FileNotFoundError: [WinError 2] The system cannot find the file specified: 'D:\\dpd-env\\lib\\site-packages\\dpd_static_support\\bootstrap.min.css'
`dpd-env` is my virtualenv directory

`pip freeze` data:
```
aioredis==1.2.0
alabaster==0.7.12
argh==0.26.2
asgiref==2.3.2
astroid==2.2.5
async-timeout==3.0.1
atomicwrites==1.3.0
attrs==19.1.0
autobahn==19.3.3
Automat==0.7.0
Babel==2.6.0
bleach==3.1.0
certifi==2019.3.9
channels==2.1.7
channels-redis==2.3.3
chardet==3.0.4
Click==7.0
colorama==0.4.1
constantly==15.1.0
coverage==4.5.3
daphne==2.3.0
dash==0.38.0
dash-bootstrap-components==0.4.0
dash-core-components==0.43.1
dash-html-components==0.13.5
dash-renderer==0.19.0
dash-table==3.5.0
decorator==4.4.0
Django==2.1.8
django-bootstrap4==0.0.8
-e git+https://github.com/thesunlover/django-plotly-dash@82191f85b00d78c90852338a4e21a14c8b5a034e#egg=django_plotly_dash
django-redis==4.10.0
docopt==0.6.2
docutils==0.14
dpd-components==0.1.0
dpd-static-support==0.0.2
Flask==1.0.2
Flask-Compress==1.4.0
grip==4.5.2
hiredis==1.0.0
hyperlink==19.0.0
idna==2.8
imagesize==1.1.0
incremental==17.5.0
ipython-genutils==0.2.0
isort==4.3.17
itsdangerous==1.1.0
Jinja2==2.10.1
jsonschema==3.0.1
jupyter-core==4.4.0
lazy-object-proxy==1.3.1
livereload==2.6.0
Markdown==3.1
MarkupSafe==1.1.1
mccabe==0.6.1
more-itertools==7.0.0
msgpack==0.6.1
nbformat==4.4.0
numpy==1.16.2
packaging==19.0
pandas==0.24.2
path-and-address==2.0.1
pathtools==0.1.2
pkginfo==1.5.0.1
plotly==3.7.1
pluggy==0.9.0
port-for==0.3.1
py==1.8.0
Pygments==2.3.1
PyHamcrest==1.9.0
pylint==2.3.1
pyparsing==2.4.0
pyrsistent==0.14.11
pytest==4.4.0
pytest-cov==2.6.1
pytest-django==3.4.8
python-coveralls==2.9.1
python-dateutil==2.8.0
pytz==2019.1
pywin32==224
PyYAML==5.1
readme-renderer==24.0
redis==3.2.1
requests==2.21.0
requests-toolbelt==0.9.1
retrying==1.3.3
six==1.12.0
snowballstemmer==1.2.1
Sphinx==2.0.1
sphinx-autobuild==0.7.1
sphinxcontrib-applehelp==1.0.1
sphinxcontrib-devhelp==1.0.1
sphinxcontrib-htmlhelp==1.0.2
sphinxcontrib-jsmath==1.0.1
sphinxcontrib-qthelp==1.0.2
sphinxcontrib-serializinghtml==1.1.3
tornado==6.0.2
tqdm==4.31.1
traitlets==4.3.2
twine==1.13.0
Twisted==19.2.0
txaio==18.8.1
typed-ast==1.3.1
urllib3==1.24.1
watchdog==0.9.0
webencodings==0.5.1
Werkzeug==0.15.2
whitenoise==4.1.2
wrapt==1.11.1
zope.interface==4.6.0
``` | closed | 2019-04-15T10:05:36Z | 2020-06-21T05:22:09Z | https://github.com/GibbsConsulting/django-plotly-dash/issues/137 | [
"bug",
"question"
] | thesunlover | 8 |
huggingface/datasets | machine-learning | 7,185 | CI benchmarks are broken | Since Aug 30, 2024, CI benchmarks are broken: https://github.com/huggingface/datasets/actions/runs/11108421214/job/30861323975
```
{"level":"error","message":"Resource not accessible by integration","name":"HttpError","request":{"body":"{\"body\":\"<details>\\n<summary>Show benchmarks</summary>\\n\\nPyArrow==8.0.0\\n\\n<details>\\n<summary>Show updated benchmarks!</summary>\\n\\n### Benchmark: benchmark_array_xd.json\\n\\n| metric | read_batch_formatted_as_numpy after write_array2d |
...
"headers":{"accept":"application/vnd.github.v3+json","authorization":"token [REDACTED]","content-type":"application/json; charset=utf-8","user-agent":"octokit-rest.js/18.0.0 octokit-core.js/3.6.0 Node.js/16.20.2 (linux; x64)"},"method":"POST","request":{"agent":{"_events":{},"_eventsCount":2,"cache":
...
"response":{"data":{"documentation_url":"https://docs.github.com/rest/issues/comments#create-an-issue-comment","message":"Resource not accessible by integration","status":"403"},
...
"stack":"HttpError: Resource not accessible by integration\n at /usr/lib/node_modules/@dvcorg/cml/node_modules/@octokit/request/dist-node/index.js:86:21\n at processTicksAndRejections (node:internal/process/task_queues:96:5)\n at async Job.doExecute (/usr/lib/node_modules/@dvcorg/cml/node_modules/bottleneck/light.js:405:18)","status":403}
``` | closed | 2024-10-01T08:16:08Z | 2024-10-09T16:07:48Z | https://github.com/huggingface/datasets/issues/7185 | [
"maintenance"
] | albertvillanova | 1 |
sktime/sktime | scikit-learn | 7,295 | [ENH] rename `annotation` module to `detection` | We should rename the `annotation` module to `detection`.
This was suggested by @tveten, and I agree with the rationale, "detection" is a much more common and recognizable term for the learning tasks the module is dealing with, e.g., anomaly and changepoint detection, and segmentation (aka anomalous segment detection, set anomaly detection, if unsupervised).
Further, "annotation" is overloaded, especially due to the "human annotation" term commonly used for human AI labellers, and much less so anymore in, say, "image annotation" for the automated task.
The proposed deprecation strategy is as follows:
1. map all imports to `detection` module, while leaving the original module - https://github.com/sktime/sktime/pull/7294
2. move content from `annotation` to `detection`, reverse import direction
3. after an extended period of warnings, remove the `annotation` module (perhaps only in the major release 1.0.0)
Discussions appreciated - FYI @Alex-JG3, @sktime/core-developers | open | 2024-10-18T10:05:51Z | 2024-10-19T18:31:01Z | https://github.com/sktime/sktime/issues/7295 | [
"maintenance",
"module:detection",
"enhancement"
] | fkiraly | 5 |
ShishirPatil/gorilla | api | 94 | Allow the user to receive more verbose recommendations with descriptions and system dependencies | It is unclear exactly what the commands Gorilla recommends will interact with, where they come from, and why. The system should be able to output (possibly behind a flag) a detailed description of the recommendation, the systems/dependencies it interacts with, and the source of the recommendation. | open | 2023-08-10T17:16:12Z | 2023-08-10T17:16:12Z | https://github.com/ShishirPatil/gorilla/issues/94 | [
"enhancement"
] | Shane-Burns-Dot-US | 0 |
Asabeneh/30-Days-Of-Python | pandas | 310 | Nice work | You've helped me a lot in learning .py | closed | 2022-10-07T18:43:16Z | 2023-01-17T19:42:51Z | https://github.com/Asabeneh/30-Days-Of-Python/issues/310 | [] | HackEzra | 0 |
STVIR/pysot | computer-vision | 277 | VOT Testing AssertionError: | I created a VOT2018 folder and put the VOT2018.json file inside. To do a quick check, I ran:
```
python -u ../../tools/test.py \
    --snapshot snapshot/checkpoint_e5.pth \
    --config config.yaml \
    --dataset VOT2018 2>&1 | tee logs/test_dataset.log
```
And I got this error:
```
AssertionError: /home/thomas/pysot/tools/../testing_dataset/VOT2018/ants1/color/00000001.jpg
loading VOT2018: 0%| | 0/60 [00:00<?, ?it/s, ants1]
```
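From the assertion path, it looks like the tester expects the raw VOT2018 sequences (per-sequence image folders) to sit next to VOT2018.json; that is my reading of the error, not something from the pysot docs. A quick check:
```python
import os

root = "testing_dataset/VOT2018"
print(os.path.exists(os.path.join(root, "VOT2018.json")))
print(os.path.exists(os.path.join(root, "ants1", "color", "00000001.jpg")))  # path from the assertion
```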
What is the problem? Do I need to download the VOT2018 image dataset and put it inside the VOT2018 folder? I found pysot-toolkit, but I did not fully understand it; how do I download the VOT2018 dataset? | closed | 2020-01-08T12:49:30Z | 2020-01-09T15:23:35Z | https://github.com/STVIR/pysot/issues/277 | [
"good first issue"
] | ThomasBomjan | 2 |
xorbitsai/xorbits | numpy | 327 | ENH: `StorageAPI.get` supports converting GPU buffers into CPU buffers | ### Is your feature request related to a problem? Please describe
The supervisor doesn't have any workload on it. Thus, it should have nothing to do with GPUs.
But currently, the supervisor needs a cuda device to receive buffers from workers when fetching results.
### Describe the solution you'd like
Add an option `cpu` to `StorageAPI.get` that makes the returned buffers always CPU buffers.
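Hypothetical usage of the proposed option (the signature is part of this proposal, not the current API):
```python
# Supervisor-side fetch: ask the worker's storage service to hand back
# host (CPU) buffers even when the data lives on a GPU.
async def fetch_result(storage_api, data_key):
    return await storage_api.get(data_key, cpu=True)
```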
| closed | 2023-04-04T10:05:33Z | 2023-04-12T09:54:12Z | https://github.com/xorbitsai/xorbits/issues/327 | [
"enhancement",
"gpu"
] | UranusSeven | 0 |
deepspeedai/DeepSpeed | pytorch | 6,451 | [BUG] Significant difference between using DeepSpeed and not using DeepSpeed | **Describe the bug**
The goal is to train an LLM adapter (freeze the LLM and only train the adapter). However, training on a single GPU without DeepSpeed can achieve 79.24% on testing, while training with DeepSpeed on a single GPU or multiple GPUs only achieves 69.56% (single GPU) and 67.94% (7 GPUs).
The adapter will take a standard torch_geometric graph input, i.e. node embeddings with edge indices indicating the graph structure. I am following the ZeRO stage 2 optimization.
**To Reproduce**
Please check the ZeRO stage 2 configuration below
```json
{
    "fp16": {
        "enabled": "auto",
        "loss_scale": 0,
        "loss_scale_window": 1000,
        "initial_scale_power": 16,
        "hysteresis": 2,
        "min_loss_scale": 1
    },
    "bf16": {
        "enabled": "auto"
    },
    "train_micro_batch_size_per_gpu": "auto",
    "train_batch_size": "auto",
    "gradient_accumulation_steps": "auto",
    "zero_optimization": {
        "stage": 2,
        "overlap_comm": true,
        "contiguous_gradients": true,
        "sub_group_size": 1e9,
        "reduce_bucket_size": "auto"
    }
}
```
For the exact code, I am afraid it is still a private repo, but the training follows a standard transformers trainer manner.
**Expected behavior**
We should expect the model trained with DeepSpeed settings (single GPU or multiple GPUs) to have a similar performance as that without DeepSpeed.
**ds_report output**
Please run `ds_report` to give us details about your setup.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**System info (please complete the following information):**
- Server: single node with 8 A100 GPUs.
- Python: Python 3.10.9
- DeepSpeed: 0.11.1
- Transformers: 4.31.0
| closed | 2024-08-28T01:55:28Z | 2024-11-01T21:11:45Z | https://github.com/deepspeedai/DeepSpeed/issues/6451 | [
"bug",
"training"
] | hxu105 | 2 |
521xueweihan/HelloGitHub | python | 2,830 | The best ladder (proxy/VPN) | https://zli2b.wgy.life/auth/register?code=q3sk | closed | 2024-10-21T01:59:46Z | 2024-10-22T01:52:19Z | https://github.com/521xueweihan/HelloGitHub/issues/2830 | [
"恶意issue"
] | Ryder-lyq | 0 |
waditu/tushare | pandas | 921 | [BUG] Daily K-line data for 000550.SZ is incorrect | When using pro_bar to read the forward-adjusted (qfq) data for 000550.SZ (Jiangling Motors) up to 2019-02-13, I found incorrect data. The code that triggers the error is as follows:

The execution result is shown in the image below:

The forward-adjusted high price for 000550.SZ on 2018-01-22 returned by this code is 17.99, but querying the forward-adjusted high for that day in both Tonghuashun (10jqka) and East Money gives 18.31, as shown in the two images below:


Registration link: [https://tushare.pro/register?reg=217213] | closed | 2019-02-14T09:25:27Z | 2019-02-17T13:49:49Z | https://github.com/waditu/tushare/issues/921 | [] | ZodiacNailo | 1 |
errbotio/errbot | automation | 1,230 | errbot not starting automatically after a reboot on CentOS 7 | ### I am...
* [x] Reporting a bug
* [ ] Suggesting a new feature
* [ ] Requesting help with running my bot
* [ ] Requesting help writing plugins
* [ ] Here about something else
### I am running...
* Errbot version: 5.2.0
* OS version: CentOS Linux release 7.5.1804 (Core)
* Python version: 3.5.2
* Using a virtual environment: yes
### Issue description
I created the errbot.service for systemd as described in the [documentation](http://errbot.io/en/latest/user_guide/setup.html#starting-the-daemon) and activated the service with:
```
systemctl enable errbot
```
But errbot didn't start automatically after a reboot. I had to start errbot manually with ```systemctl start errbot```.
After modifying the errbot.service file based on another service, reboots are working fine for me now:
```
[Unit]
Description=Start Errbot chatbot
After=syslog.target network.target
Before=shutdown.target reboot.target halt.target
[Service]
Environment="LC_ALL=en_US.UTF-8"
ExecStart=/path/to/errbot/virtualenv/bin/errbot --config /path/to/errbot/config.py
WorkingDirectory=/path/to/errbot/
User=errbot
Restart=always
KillSignal=SIGINT
[Install]
WantedBy=multi-user.target
```
Just want to provide my systemd errbot configuration as a reference.
### Steps to reproduce
* configure errbot as a systemd-service on CentOS 7
* enable the service with ```systemctl enable errbot```
* perform a ```reboot```
* check the status of errbot ```systemctl status errbot``` => not running
| closed | 2018-06-02T11:04:54Z | 2019-06-20T01:35:27Z | https://github.com/errbotio/errbot/issues/1230 | [
"type: documentation"
] | mattpoel | 2 |
httpie/cli | python | 959 | Pretty Print HTML Output (with newlines and indentation)? | When printing `Content-Type: application/json` responses, httpie formats the JSON with newlines and indentation, making it very easy to read.
```
❯ http --body 'https://api2.nicehash.com/main/api/v2/legacy?method=stats.global.current'
{
    "method": "stats.global.current",
    "result": {
        "error": "Method not supported"
    }
}
```
However, when returning text/html, the output is syntax highlighted but not reformatted with newlines and indentation. For short HTML pages, reformatting would make the results a lot more easily readable.
```
❯ http --body google.com
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>301 Moved</TITLE></HEAD><BODY>
<H1>301 Moved</H1>
The document has moved
<A HREF="http://www.google.com/">here</A>.
</BODY></HTML>
```
Syntax highlighting is fantastic, but would be very easy to read like:
```
<HTML>
<HEAD>
<meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>301 Moved</TITLE>
</HEAD>
<BODY>
<H1>301 Moved</H1>
The document has moved
<A HREF="http://www.google.com/">here</A>.
</BODY>
</HTML>
``` | closed | 2020-08-07T00:26:54Z | 2024-02-19T15:00:07Z | https://github.com/httpie/cli/issues/959 | [] | ryanerwin | 5 |
fastapi/sqlmodel | pydantic | 370 | How to create a time Field for a model like django DateTimeField | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
from datetime import datetime
from typing import Optional
from sqlmodel import Field, SQLModel, DateTime
class Image_Save_Record(SQLModel, table=True):
id: Optional[int] = Field(default=None, primary_key=True)
camera_ip: str
camera_channel: int
file_path: str
file_name: str
create_time: DateTime
update_time: DateTime
```
### Description
When I create an Image_Save_Record instance and commit it to the db, the create_time field value should be the current system time.
``` python
# django version
create_time = models.DateTimeField(auto_now_add=True)
```
When I change the Image_Save_Record instance I created before and commit it to the db, the update_time field value should be the current system time.
``` python
# django version
update_time = models.DateTimeField(auto_now=True)
```
Does SQLModel have an option to do the same thing?
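A sketch of what I have in mind (not a confirmed answer; this leans on SQLAlchemy column defaults through `sa_column`, so the exact behaviour should be double-checked):
```python
from datetime import datetime
from typing import Optional

from sqlalchemy import Column, DateTime, func
from sqlmodel import Field, SQLModel


class ImageSaveRecord(SQLModel, table=True):
    id: Optional[int] = Field(default=None, primary_key=True)
    # auto_now_add equivalent: set once on INSERT
    create_time: Optional[datetime] = Field(
        sa_column=Column(DateTime(), server_default=func.now())
    )
    # auto_now equivalent: refreshed on every UPDATE
    update_time: Optional[datetime] = Field(
        sa_column=Column(DateTime(), server_default=func.now(), onupdate=func.now())
    )
```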
### Operating System
Windows
### Operating System Details
_No response_
### SQLModel Version
0.0.6
### Python Version
3.7.9
### Additional Context
_No response_ | closed | 2022-06-29T04:28:08Z | 2025-03-17T14:39:21Z | https://github.com/fastapi/sqlmodel/issues/370 | [
"question"
] | regainOWO | 12 |
ageitgey/face_recognition | python | 750 | MemoryError: bad allocation when using cnn model | * face_recognition version: 1.2.3
* Python version: 3.7.2
* Operating System: Windows 10 1809 Build 17763.253
EDIT: dlib version 19.16 compiled with CUDA extensions (GTX 1080 Ti)
### Description
I wanted to use the CNN model for face detection. If the image resolution is too big, I get the error instantly. If I scale the image down, only 2/8 faces get detected, whereas with the HOG method 7/8 faces get detected.
### What I Did
```
from PIL import Image, ImageDraw
import face_recognition
# Load the jpg file into a numpy array
image = face_recognition.load_image_file("unknown10.jpg")
# Find all the faces in the image using a pre-trained convolutional neural network.
# This method is more accurate than the default HOG model, but it's slower
# unless you have an nvidia GPU and dlib compiled with CUDA extensions. But if you do,
# this will use GPU acceleration and perform well.
# See also: find_faces_in_picture.py
face_locations = face_recognition.face_locations(image, number_of_times_to_upsample=0, model="cnn")
face_encodings = face_recognition.face_encodings(image, face_locations)
pil_image = Image.fromarray(image)
# Create a Pillow ImageDraw Draw instance to draw with
draw = ImageDraw.Draw(pil_image)
print("I found {} face(s) in this photograph.".format(len(face_locations)))
for (top, right, bottom, left), face_encoding in zip(face_locations, face_encodings):
draw.rectangle(((left, top), (right, bottom)), outline=(255, 0, 0))
del draw
pil_image.show()
pil_image.save("image_with_boxes.jpg")
---------------------------------------------------------------------------
MemoryError Traceback (most recent call last)
E:\Projects\Coding\mangekyou\source\test2.py in <module>
10 # this will use GPU acceleration and perform well.
11 # See also: find_faces_in_picture.py
---> 12 face_locations = face_recognition.face_locations(image, number_of_times_to_upsample=0, model="cnn")
13 face_encodings = face_recognition.face_encodings(image, face_locations)
14
e:\projects\coding\mangekyou\env\lib\site-packages\face_recognition\api.py in face_locations(img, number_of_times_to_upsample, model)
114 """
115 if model == "cnn":
--> 116 return [_trim_css_to_bounds(_rect_to_css(face.rect), img.shape) for face in _raw_face_locations(img, number_of_times_to_upsample, "cnn")]
117 else:
118 return [_trim_css_to_bounds(_rect_to_css(face), img.shape) for face in _raw_face_locations(img, number_of_times_to_upsample, model)]
e:\projects\coding\mangekyou\env\lib\site-packages\face_recognition\api.py in _raw_face_locations(img, number_of_times_to_upsample, model)
98 """
99 if model == "cnn":
--> 100 return cnn_face_detector(img, number_of_times_to_upsample)
101 else:
102 return face_detector(img, number_of_times_to_upsample)
MemoryError: bad allocation
```
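A workaround sketch I plan to try (my own guess that the CNN detector runs out of GPU memory on large frames, so shrink the image before detection and scale the boxes back up; the 0.5 factor is arbitrary):
```python
import PIL.Image
import numpy as np
import face_recognition

image = face_recognition.load_image_file("unknown10.jpg")
scale = 0.5
small = np.array(
    PIL.Image.fromarray(image).resize(
        (int(image.shape[1] * scale), int(image.shape[0] * scale))
    )
)
locations = face_recognition.face_locations(small, number_of_times_to_upsample=0, model="cnn")
# Rescale the boxes back to the original image size.
locations = [tuple(int(v / scale) for v in box) for box in locations]
```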
| closed | 2019-02-21T15:43:38Z | 2022-10-28T01:44:14Z | https://github.com/ageitgey/face_recognition/issues/750 | [] | h4ckd0tm3 | 4 |
hyperspy/hyperspy | data-visualization | 2,817 | Ambiguous labeling of edges in elements.py and degeneracy scaling with the 'factor' property | The P. Rez dataset of generalised oscillator strengths contains only the "odd" subshells, i.e. L1, N3, M5, etc.
This is, I suppose, due to the fact that the approximation used in the computation does not include any spin-orbit coupling. In this situation the missing edges are just shifted and scaled copies of the ones that are present.
This scaling accounts for the different degeneracy of the different levels (see Egerton, Appendix D).
For instance we have that, neglecting the shift in energy, L2 = 0.5*L3 or M4 = 2/3*M5.
This scaling is mostly handled by indicating for each edge which GOS file should actually be used, and including such scaling factors as 'factor' in the "elements.py" database.
See for instance the entry for the M4 edge of Europium:
```python
'M4': {'relevance': 'Major',
'threshold': 'Sharp peak',
'edge': 'Delayed maximum',
'factor': 0.6666666666666,
'onset_energy (eV)': 1061.0,
'filename': 'Eu.M5'},
```
These factors are in principle the same for all edges with the same 'name', and the edge-filename correspondence is trivial, so why store it in each entry with a lot of repetition?
It turns out that the database does not always distinguish between the subshells. A few elements, such as Na, Cl, or Al include only a combined L2,3 edge. The convention adopted, however, is to only label it L3 and scale it with a factor of 1.5 to account for the missing L2 edge:
```python
'L3': {'relevance': 'Major',
# overlaps
# with L2
'threshold': 'Sharp peak',
'edge': 'Delayed maximum',
'factor': 1.5,
'onset_energy (eV)': 31.0,
'filename': 'Na.L3'},
```
The problem is that if we want to include more options for the generalised oscillator strengths it may be worth removing from the elements database the information which is specific for one single GOS dataset.
My proposal is:
- I change all the peaks such as the latter one to names like L2,3 to remove ambiguity
- I remove all the 'filename' and 'factor' fields. This information can easily be included in a very small dictionary inside hartree_slater_gos.py, with entries like {'L2': ['L3', 0.5], 'L2,3': ['L3', 1.5]}, and the names can be generated on the fly in the loader function (see the sketch after this list).
- The representation of the component remains unvaried, the information is just shifted out of elements.py
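A rough sketch of what I mean, just to make the proposal concrete (names are illustrative; only the factors already quoted above are taken as given):
```python
# Hypothetical content for hartree_slater_gos.py: map each subshell to the
# GOS file actually shipped with the P. Rez dataset and its degeneracy factor.
_GOS_SOURCE = {
    "L2": ("L3", 0.5),
    "L2,3": ("L3", 1.5),
    "M4": ("M5", 2 / 3),
    # ... one entry per subshell that has no file of its own
}

def gos_file_and_factor(element, subshell):
    source, factor = _GOS_SOURCE.get(subshell, (subshell, 1.0))
    return f"{element}.{source}", factor
```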
I can do this quite quickly, as soon as I get a go-ahead.
On the other hand, I won't bother if this kind of change is not welcome. | closed | 2021-08-27T09:42:31Z | 2024-10-12T08:29:53Z | https://github.com/hyperspy/hyperspy/issues/2817 | [] | gguzzina | 1 |
LAION-AI/Open-Assistant | machine-learning | 3,368 | Add Support for Language Dialect Consistency in Conversations | Hello,
I’m reaching out to propose a new feature that I believe would enhance the user experience.
### Issue
Currently, when selecting a language, the system does not differentiate between language variants. For instance, when Spanish is selected, dialogues are mixed between European Spanish (Spain) and Latin American Spanish. Similarly, with Catalan, where there is a mix of dialects (Catalan, Valencia, Balearic), and with Portuguese (Brazil, Portugal). This occasionally results in sentences and phrases that, while technically correct, can be perceived as "off" or "weird" by native speakers. I presume this happens with many other languages as well.
### Proposed Solution
I would like to suggest adding a more granular control over the language setting so that we can choose a specific variant (e.g. European Spanish, Mexican Spanish, etc.) for each language. Ideally, once a variant is chosen, the conversation thread should maintain consistency in the use of that variant throughout.
**Suggested Implementation:**
- Include a dropdown or a set of options under the language selection for users to choose the desired variant.
- Store the language variant selection and use it to keep the conversation thread consistent.
### Benefits
- **Enhanced Readability**: Ensuring consistency in language variants makes the conversation more readable and relatable for native speakers.
- **Greater Precision**: Some dialects have unique expressions or terminology. Maintaining consistency in language variants allows for more precise communication.
- **Cultural Sensitivity**: Respecting and utilizing the correct language variant reflects cultural awareness and sensitivity. | closed | 2023-06-09T19:20:23Z | 2023-06-10T09:10:38Z | https://github.com/LAION-AI/Open-Assistant/issues/3368 | [] | salvacarrion | 0 |
mwaskom/seaborn | data-science | 3,818 | Twin axis issue with bar plots. | I am facing an issue with twinx (dual y-axis). It behaves very strangely.
```
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np
# Sample data
data = {
"Category": ["A", "B", "C", "D"],
"Metric1": [10, 15, 20, 25],
"Metric2": [100, 150, 200, 250],
}
df = pd.DataFrame(data)
# Bar width and positions
bar_width = 0.4
x = np.arange(len(df["Category"]))
# Create a figure and axis objects
fig, ax1 = plt.subplots(figsize=(8, 6))
# Plot the first bar plot (Metric1) with Seaborn
sns.barplot(
x=x - bar_width / 2, y="Metric1", data=df, color="skyblue", width=bar_width, ax=ax1
)
ax1.set_ylabel("Metric1 (blue)", color="skyblue")
ax1.tick_params(axis="y", labelcolor="skyblue")
# Create a twin axis for the second y-axis
ax2 = ax1.twinx()
# Plot the second bar plot (Metric2) with Seaborn
sns.barplot(
x=x + bar_width / 2, y="Metric2", data=df, color="orange", width=bar_width, ax=ax2
)
ax2.set_ylabel("Metric2 (orange)", color="orange")
ax2.tick_params(axis="y", labelcolor="orange")
# Set category labels on the x-axis
ax1.set_xticks(x)
ax1.set_xticklabels(df["Category"])
# Add a title and improve layout
plt.title("Dual Axis Side-by-Side Bar Plot")
fig.tight_layout()
# Show the plot
plt.show()
``` | closed | 2025-01-25T01:33:27Z | 2025-01-31T17:35:22Z | https://github.com/mwaskom/seaborn/issues/3818 | [] | washraf | 6 |
matterport/Mask_RCNN | tensorflow | 2,177 | Why the output mask's size different from the input image's size | I've used balloon.py to train an apple segmentation model,after training I've try it with a 960x1280 image,but the output mask's shape is 1024x1024.What should I do to change the output mask's size from 1024x1024 to 960x1280?(mask means r['masks'] in the end of inspect_balloon_data.ipynb) | closed | 2020-05-12T13:46:53Z | 2022-02-16T06:47:39Z | https://github.com/matterport/Mask_RCNN/issues/2177 | [] | zhaoguanao | 11 |
pydata/pandas-datareader | pandas | 829 | DataReader fails with change of the order of stocks | I run Webreader with a list of stocks. If I change the order of the symbols in the list, the Webreader fails. The two lists below are identical except I moved some stocks from the bottom of the list to the top.
The first symbols list works okay. The second symbols list failed to read symbols NVDA, LULU ad TAL, replacing with NaN warnings.
start = '2020-04-06'
end = '2020-09-02'
symbols = [] ## see below
web.DataReader(symbols,'yahoo',start,end)
## Symbols list 1##
symbols = [
'NVDA',
'FTNT',
'LULU',
'TAL',
'SEDG',
'PDD',
'TME',
'SQ',
'SHOP',
'CHWY',
'DDOG',
'TTD',
'INMD',
'LVGO',
'GSX',
'CODX',
'DOCU',
'ZM',
'TDOC',
'TEAM',
'NFLX',
'RNG',
'DXCM',
'PAYC',
'CRWD',
'WDAY',
'NOW',
'DBX',
'BABA',
'CARG',
'FND',
'XP',
'ALGN',
'MLNX',
'JD',
'REGN',
'PLMR',
'EDU',
'VEEV',
'AMZN'
]
##Symbols List 2##
symbols = [
'SEDG',
'PDD',
'TME',
'SQ',
'SHOP',
'CHWY',
'DDOG',
'TTD',
'INMD',
'LVGO',
'GSX',
'CODX',
'DOCU',
'ZM',
'TDOC',
'TEAM',
'NFLX',
'RNG',
'DXCM',
'PAYC',
'CRWD',
'WDAY',
'NOW',
'DBX',
'BABA',
'CARG',
'FND',
'XP',
'ALGN',
'MLNX',
'JD',
'REGN',
'PLMR',
'EDU',
'VEEV',
'AMZN',
'NVDA',
'FTNT',
'LULU',
'TAL'
]
Mike Scott | closed | 2020-09-22T20:04:28Z | 2021-07-21T17:01:28Z | https://github.com/pydata/pandas-datareader/issues/829 | [] | TraderMikeS | 3 |
pallets-eco/flask-wtf | flask | 401 | CSRF ValidationError messages translation | Messages in csrf.py like 'The CSRF token is missing' appear to be hard-coded in English(?). | closed | 2020-02-26T09:28:26Z | 2023-12-21T00:51:37Z | https://github.com/pallets-eco/flask-wtf/issues/401 | [
"csrf"
] | mybooc | 1 |
iperov/DeepFaceLab | deep-learning | 480 | Getting errors during conversion | Please see attached screen grab. Gives error at about the same point each time I try to convert. I can continue training, but won't let me convert. One thing I can tell you is I moved this from one computer to a different one. Just dragged the entire folder over the network, which I've done in the past. Not sure if that has anything to do with this. Just want to get the video finalized. Thanks!

| closed | 2019-11-06T21:54:18Z | 2020-03-28T05:41:45Z | https://github.com/iperov/DeepFaceLab/issues/480 | [] | kilerb | 1 |
jina-ai/clip-as-service | pytorch | 354 | Fixing documentation for fine-tuned model? | **Prerequisites**
> Please fill in by replacing `[ ]` with `[x]`.
* [x] Are you running the latest `bert-as-service`?
* [x] Did you follow [the installation](https://github.com/hanxiao/bert-as-service#install) and [the usage](https://github.com/hanxiao/bert-as-service#usage) instructions in `README.md`?
* [x] Did you check the [FAQ list in `README.md`](https://github.com/hanxiao/bert-as-service#speech_balloon-faq)?
* [x] Did you perform [a cursory search on existing issues](https://github.com/hanxiao/bert-as-service/issues)?
**System information**
== cat /etc/issue ===============================================
Linux tlv-entas-004 3.10.0-862.el7.x86_64 #1 SMP Wed Mar 21 18:14:51 EDT 2018 x86_64 x86_64 x86_64 GNU/Linux
VERSION="7.6 (Maipo)"
VERSION_ID="7.6"
REDHAT_BUGZILLA_PRODUCT_VERSION=7.6
REDHAT_SUPPORT_PRODUCT_VERSION="7.6"
== are we in docker =============================================
No
== compiler =====================================================
c++ (GCC) 4.8.5 20150623 (Red Hat 4.8.5-36)
Copyright (C) 2015 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
== uname -a =====================================================
Linux tlv-entas-004 3.10.0-862.el7.x86_64 #1 SMP Wed Mar 21 18:14:51 EDT 2018 x86_64 x86_64 x86_64 GNU/Linux
== check pips ===================================================
bert-tensorflow 1.0.1
numpy 1.16.3
protobuf 3.7.1
tensorflow 1.13.1
tensorflow-estimator 1.13.0
tensorflow-hub 0.4.0
== check for virtualenv =========================================
False
== tensorflow import ============================================
tf.version.VERSION = 1.13.1
tf.version.GIT_VERSION = b'v1.13.1-0-g6612da8951'
tf.version.COMPILER_VERSION = 4.8.5
---
### Description
I tried using a fine-tuned model to extract the embedding and according to the documentation I understood that only the model.ckpt-XXX file is needed in order to run the service. I created a directory for the tuned model and copied there only the model.ckpt-XXX file (as the fine-tuned model was trained on a different server I copied only the minimum needed files to save transfer time). However, after multiple attempts where I got the same error again and again (tensorflow.python.framework.errors_impl.NotFoundError: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for /export/home/gfuchs/projects/pl_guidance/BERT_models/fine_tuned_model_couples_17May2019_15cbd426-a1b3-4de9-afc8-c342a4ddf719/model.ckpt-15500), I solved the issue by also copying the following files to the fine-tuned model directory:
1. model.ckpt-15688.index
2. model.ckpt-15688.meta
Following that the service ran without any issues.
If the .index and .meta files are indeed needed in order to use a fine-tuned model, I recommend making the documentation more explicit about including those files with the fine-tuned model as well.
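For anyone else hitting this, a small self-check along these lines may help (it reflects the usual TF1 checkpoint layout of .data-*, .index and .meta files rather than anything bert-as-service specific, so treat the exact file set as an assumption):
```python
import glob, os

model_dir = "/path/to/fine_tuned_model"   # hypothetical path
for pattern in ("model.ckpt-*.data-*", "model.ckpt-*.index", "model.ckpt-*.meta"):
    matches = glob.glob(os.path.join(model_dir, pattern))
    print(pattern, "->", matches or "MISSING")
```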
... | open | 2019-05-19T07:38:07Z | 2019-05-20T02:58:42Z | https://github.com/jina-ai/clip-as-service/issues/354 | [
"bug"
] | gilad19 | 0 |
nl8590687/ASRT_SpeechRecognition | tensorflow | 135 | Can SpeechModel251 training not be interrupted midway? | Can training be resumed from a checkpoint after an interruption? Sometimes the machine crashes halfway through training; do I have to start over from scratch? | closed | 2019-08-16T11:32:24Z | 2021-11-22T14:04:14Z | https://github.com/nl8590687/ASRT_SpeechRecognition/issues/135 | [] | weigd2013 | 1 |