repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count
---|---|---|---|---|---|---|---|---|---|---|---
clovaai/donut
|
nlp
| 241 |
validation loss does not decrease
|
Hello,
I have been trying to fine-tune the Donut model on my custom dataset, but the validation loss stops decreasing after a few training epochs.
Here are the details of my dataset:
Total number of images in the training set: 12032
Total number of images in the validation set: 1290
Here are the config details that I have used for training:
```python
config = {
    "max_epochs": 30,
    "val_check_interval": 1.0,
    "check_val_every_n_epoch": 1,
    "gradient_clip_val": 1.0,
    "num_training_samples_per_epoch": 12032,
    "lr": 3e-5,
    "train_batch_sizes": [1],
    "val_batch_sizes": [1],
    # "seed": 2022,
    "num_nodes": 1,
    "warmup_steps": 36096,
    "result_path": "./result",
    "verbose": False,
}
```
Here is the training log:
```
Epoch 21: 99% | 13160/13320 [51:42<00:37, 4.24it/s, loss=0.0146, v_num=0]
Epoch : 0 | Train loss : 0.13534872224594618 | Validation loss : 0.06959894845040267
Epoch : 1 | Train loss : 0.06630147620920149 | Validation loss : 0.06210419170951011
Epoch : 2 | Train loss : 0.05352105059947349 | Validation loss : 0.07186826165058287
Epoch : 3 | Train loss : 0.04720975606560736 | Validation loss : 0.06583545940979477
Epoch : 4 | Train loss : 0.04027246460695355 | Validation loss : 0.07237467494971456
Epoch : 5 | Train loss : 0.03656758802423008 | Validation loss : 0.06615438500516262
Epoch : 6 | Train loss : 0.03334385565814249 | Validation loss : 0.0690448615986076
Epoch : 7 | Train loss : 0.030216083118764458 | Validation loss : 0.06872327175676446
Epoch : 8 | Train loss : 0.028938407997482745 | Validation loss : 0.06971958731054592
Epoch : 9 | Train loss : 0.02591740866504401 | Validation loss : 0.07369288451116424
Epoch : 10 | Train loss : 0.023537077281242467 | Validation loss : 0.09032832324105358
Epoch : 11 | Train loss : 0.023199086009602708 | Validation loss : 0.08460190268222034
Epoch : 12 | Train loss : 0.02142925070562108 | Validation loss : 0.08330771044260839
Epoch : 13 | Train loss : 0.023064635992034854 | Validation loss : 0.08292237208095442
Epoch : 14 | Train loss : 0.019547534460417258 | Validation loss : 0.0834848547896493
Epoch : 15 | Train loss : 0.018710007107520535 | Validation loss : 0.08551564997306298
Epoch : 16 | Train loss : 0.01841766658555733 | Validation loss : 0.08025501600490885
Epoch : 17 | Train loss : 0.017241064160256097 | Validation loss : 0.10344411130643169
Epoch : 18 | Train loss : 0.015813576313222295 | Validation loss : 0.10317703346507855
Epoch : 19 | Train loss : 0.015648367624887447 | Validation loss : 0.09659983590732446
Epoch : 20 | Train loss : 0.01492729377679406 | Validation loss : 0.09451819387128098
```
The validation loss appears to fluctuate without showing a consistent decreasing trend. I would appreciate any insights or suggestions on how to address this issue and potentially improve the validation loss convergence.
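For what it's worth, a training loss that keeps falling while the validation loss bottoms out around epoch 1 and then climbs is the classic overfitting pattern, so early stopping on validation loss is a common remedy. A minimal, framework-agnostic sketch (plain Python, not Donut-specific):

```python
def early_stop_epoch(val_losses, patience=3):
    """Return the epoch index at which training should stop (validation loss
    has not improved for `patience` epochs), or None if it never triggers."""
    best, best_epoch = float("inf"), 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, best_epoch = loss, epoch
        elif epoch - best_epoch >= patience:
            return epoch
    return None

# First six validation losses from the log above (rounded):
losses = [0.0696, 0.0621, 0.0719, 0.0658, 0.0724, 0.0662]
stop = early_stop_epoch(losses, patience=3)  # stops at epoch 4, keeps epoch 1
```

Applied to the full log, training would halt a few epochs after the epoch-1 minimum and keep that checkpoint instead of the last one.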
Thank you for your assistance.
|
open
|
2023-08-24T09:32:12Z
|
2024-05-27T13:55:38Z
|
https://github.com/clovaai/donut/issues/241
|
[] |
Mann1904
| 2 |
fastapi/sqlmodel
|
fastapi
| 281 |
I get a type error if I use `__root__` from pydantic while inheriting from SQLModel
|
### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
from sqlmodel import SQLModel
from pydantic import BaseModel
data = [
{ "id": 1, "name": "awesome-product" }
]
class ProductBase(SQLModel):
name: str
class ProductOut(ProductBase):
id: int
# Here 👋 If I inherit from `SQLModel` I get a type error; if I inherit from `BaseModel` I don't.
# UnComment below line and comment the `SQLModel` usage to resolve the type error
# class ProductList(BaseModel):
class ProductList(SQLModel):
__root__: list[ProductOut]
class SomeResponse(SQLModel):
products: ProductList
msg: str
product_list_model = ProductList.parse_obj(data)
SomeResponse(products=product_list_model, msg="Hello world")
```
### Description
I get a type error if I inherit `ProductList` model from `SQLModel` saying:
```
Argument of type "SQLModel" cannot be assigned to parameter "products" of type "ProductList" in function "__init__"
"SQLModel" is incompatible with "ProductList"
```
However, if I inherit from pydantic's `BaseModel`, the error goes away.
This line gives a type error:
```python
class ProductList(SQLModel):
```
This line is fine:
```python
class ProductList(BaseModel):
```
### Operating System
Linux
### Operating System Details
Ubuntu 21.10
### SQLModel Version
0.0.6
### Python Version
3.10.2
### Additional Context

|
open
|
2022-03-24T07:40:45Z
|
2022-03-24T07:40:45Z
|
https://github.com/fastapi/sqlmodel/issues/281
|
[
"question"
] |
jd-solanki
| 0 |
PrefectHQ/prefect
|
data-science
| 17,060 |
Deadlock when spawning tasks from a function and limiting concurrency
|
### Bug summary
I'm getting what seems to be a deadlock when I have Python functions that aren't tasks "spawning" new tasks (and limiting concurrency). At some point Prefect is just waiting on a bunch of futures but no new tasks get started.
Here's a simple reproduction of the issue:
```python
"""
Example flow that demonstrates task nesting patterns in a ThreadPoolTaskRunner context.
Each parent task spawns multiple child tasks, which can lead to resource contention.
"""
from random import random
from time import sleep
from prefect import flow, task
from prefect.task_runners import ThreadPoolTaskRunner
@task
def dependent_task(n: int) -> int:
"""Child task that simulates work with a random delay.
Returns the input number unchanged."""
sleep_time = random() * 3
print(f"Dependent task {n} sleeping for {sleep_time:.2f}s")
sleep(sleep_time)
return n
def task_spawner(n: int) -> list[int]:
"""Creates 5 identical child tasks for a given number n.
Returns the collected results as a list."""
dependent_futures = dependent_task.map([n] * 5)
return dependent_futures.result()
@task
def initial_task(n: int) -> list[int]:
"""Parent task that adds its own delay before spawning child tasks.
Returns a list of results from child tasks."""
sleep_time = random() * 2
print(f"Initial task {n} sleeping for {sleep_time:.2f}s")
sleep(sleep_time)
return task_spawner(n)
@flow(task_runner=ThreadPoolTaskRunner(max_workers=10))
def deadlock_example_flow() -> None:
"""
Creates a workflow where 10 parent tasks each spawn 5 child tasks (50 total tasks)
using a thread pool limited to 10 workers. Tasks execute concurrently within
these constraints.
The flow demonstrates how task dependencies and thread pool limitations interact,
though "deadlock" is a misnomer as the tasks will eventually complete given
sufficient time.
"""
# Create 10 parent tasks
initial_futures = initial_task.map(range(10))
# Collect results from all task chains
results = [f.result() for f in initial_futures]
print(f"Flow complete with results: {results}")
```
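The pattern described (parent tasks blocking on children that can never be scheduled because the parents occupy all the workers) is classic thread-pool starvation. A Prefect-free sketch of the same mechanism, using only the standard library, where a single-worker pool starves itself:

```python
from concurrent.futures import ThreadPoolExecutor
from concurrent.futures import TimeoutError as FutureTimeout

# One worker stands in for "10 workers, all busy running parents".
pool = ThreadPoolExecutor(max_workers=1)

def child() -> int:
    return 42

def parent() -> int:
    # child() is queued, but the only worker is busy running parent(),
    # so waiting on its future can never succeed.
    inner = pool.submit(child)
    return inner.result(timeout=2)  # would block forever without the timeout

outcome = "ok"
try:
    pool.submit(parent).result()
except FutureTimeout:
    outcome = "starved"
pool.shutdown(wait=True)
```

The repro above is this scaled up: 10 blocking parents fill the 10-worker pool, and the 50 children queue behind them. One common mitigation is to not block inside a task on futures that need a worker from the same pool (e.g. spawn and gather the children from the flow instead).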
Thanks a bunch!
### Version info
```Text
(crosswise-ai) [02/07/2025 04:58:22PM] [thomas:~/crosswise/crosswise_app]$ prefect version
Version: 3.1.12
API version: 0.8.4
Python version: 3.12.7
Git commit: e299e5a7
Built: Thu, Jan 9, 2025 10:09 AM
OS/Arch: linux/x86_64
Profile: ephemeral
Server type: cloud
Pydantic version: 2.9.2
Integrations:
prefect-aws: 0.5.3
```
### Additional context
_No response_
|
open
|
2025-02-08T01:34:29Z
|
2025-02-08T01:34:47Z
|
https://github.com/PrefectHQ/prefect/issues/17060
|
[
"bug"
] |
tboser
| 0 |
ydataai/ydata-profiling
|
pandas
| 1,706 |
Bug Report-font in all
|
### Current Behaviour
[0227.txt](https://github.com/user-attachments/files/19007883/0227.txt)
### Expected Behaviour
The profiling task should finish successfully~
### Data Description
<html>
<body>
Start date | Start weekday | End date | End weekday | Publish date | Publish weekday | Publish time | Rank | Video | Creator | Video index | Views | Likes | Comments | Video link | Video content keywords | Duration | Duration (s) | Index per view | Index per like | Index per comment | Index per view×like | Index per view×comment | Index per like×comment | Daily views | Daily likes | Daily comments | Publish weekday (numeric)
-- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | --
2025/1/20 | 星期一 | 2025/1/26 | 星期日 | 2025/1/21 | 星期一 | 9:17:00 AM | 1 | 这么爽的人生,竟然才过了四年#马斯克 #震惊 #爽剧 #小说 #首富 | 尤可妮妮 | 683,112 | 5670000 | 306000 | 25000 | https://www.douyin.com/video/7462357100434443554 | 马斯克,世界首富,小儿子,名利场,财富,特斯拉,spacex,亿万富翁,叔叔,姑姑,奶奶,四岁 | 0:00:37 | 37 | 0.12 | 2.23 | 27.32 | 36866.36 | 3011.96 | 55809.80 | 810000 | 43714.29 | 3571.429 | 2
2025/1/20 | 星期一 | 2025/1/26 | 星期日 | 2025/1/24 | 星期四 | 5:46:00 AM | 2 | 黄金的延展性也太好了 #黄金 #金的延展性 #冷知识 | 毒舌小疯子 | 616,272 | 9320000 | 99000 | 95000000 | https://www.douyin.com/video/7463415028633373967 | 黄金,延展性,冷知识 | 0:00:16 | 16 | 0.07 | 6.22 | 0.01 | 6546.24 | 6281742.49 | 591372121.21 | 1331429 | 14142.86 | 13571429 | 5
2025/1/20 | 星期一 | 2025/1/26 | 星期日 | 2025/1/23 | 星期三 | 12:01:00 PM | 3 | 清华大学教授韩秀云:明明通缩,为什么感觉东西没便宜? #农村 #保险 #经济 #师资合作 #韩秀云 韩秀云老师主讲:经济、资本 现在存款利率进入了一时代,现在把钱放到哪里好?最好是保本理财。比如说储蓄/国债。大学毕业后是考研还是考公?这两个各有好处各有风险,你考研的话,面临3年以后还得继续找工作,但考公本身,大家知道现在机构也在改革,发工资也很困难,除非你特别喜欢,否则的话,可能找一份工作比这俩都好。您说现在是通缩,为什么我觉得东西没有便宜?是这样啊,因为你没去买大件,比如房子车子等啊,你买都是生活必需品,它是刚需价格没弹性,鸡蛋蔬菜价格涨了,只占你收入吃才不到30%,所以我们通缩看物价指数,看CPI不是凭你个人的感受。我现在手里有50万现金是存银行好还是买国债好?我建议你呢,30万去存银行,20万可以去买国债,国债比例不能大于银行的储蓄。老师我现在兼职做半职业的炒股,这是抗风险的手段吗?实际就是一边工作,业余时间去炒点股票,是这样啊,练练手可以啊,千万记住不可太多,有人炒股说千拿2万吧,后来好吧10万进去越来越多,结果就发现你是半职炒股,没想到你全职,赚的钱都在半职中赔出去了。药剂师值得考试吗?药剂师值得考试,未来中国的药品一定变得规范化,药剂师在国外是一个很好的职业呢。宅基地您建议卖掉吗?如果不需要钱,一定要保留,因为有人曾经这样问有两套房,城里有一套农村有一套,卖哪套好?我的回答是卖城里这套能卖出价,卖农村那套不需要既卖出价,将来如果一旦在城里混不下去回到农村还有个窝。50岁了什么保险都没买,现在买什么保险合适,50岁的话还能交社保吗?50岁如果能交社保,尽量交哪怕补一点钱也交上啊,如果说不能交了也没有单位,那去买一个商业养老保险。您建议是早退休好还是晚退休好?对刚才我就看到有人提这个问题,说是50退休还是55岁退休好,当你有选择的时候,你一定要问自己,我退了还有个别的事干吗?如果没有,我建议晚退休好,否则的话这5年当中你会无处寄托自己,没退的时候特想退,等退了以后突然发现好失落,社会不需要了,家里找不到乐趣所在,如果是我的话,我想晚退休。 | 奇正师资 | 577,269 | 7730000 | 220000 | 53160000 | https://www.douyin.com/video/7457781386598944054 | 清华大学,韩秀云,通缩,物价,存款利率,保本理财,储蓄,国债,考研,考公,工作选择,50万现金规划,半职业炒股,药剂师考试,宅基地,50岁保险,社保,商业养老保险,早退休,晚退休 | 0:03:18 | 198 | 0.07 | 2.62 | 0.01 | 16429.39 | 3969937.91 | 139489182.00 | 1104286 | 31428.57 | 7594286 | 4
2025/1/20 | 星期一 | 2025/1/26 | 星期日 | 2025/1/20 | 星期日 | 10:07:00 AM | 4 | Dollor 崩了,RMB半小时升值600点,收复7.3 太牛掰了吧 | 股市百战 | 557,042 | 11870000 | 168000 | 16000 | https://www.douyin.com/video/7461998966633270586 | 美元,人民币,升值,高开低走 | 0:00:22 | 22 | 0.05 | 3.32 | 34.82 | 7884.00 | 750.86 | 53051.62 | 1695714 | 24000 | 2285.714 | 1
2025/1/20 | 星期一 | 2025/1/26 | 星期日 | 2025/1/21 | 星期一 | 6:54:00 AM | 5 | 这就是别人家的老板#老板 #崔培军 #万万没想到 | 毒舌小熊猫 | 497,880 | 8660000 | 93000 | 13000 | https://www.douyin.com/video/7462320233819278650 | 河南矿山集团,崔培军,老板,员工福利,发钱,年会,现金,加班补贴,员工父母,旅游 | 0:00:31 | 31 | 0.06 | 5.35 | 38.30 | 5346.75 | 747.39 | 69596.13 | 1237143 | 13285.71 | 1857.143 | 2
2025/1/20 | 星期一 | 2025/1/26 | 星期日 | 2025/1/21 | 星期一 | 8:24:00 AM | 6 | 特朗普宣誓就任美国总统,美元深夜跳水,离岸人民币大涨800点#美国#美元#财经(编辑:杨) | 风口财经 | 480,499 | 15280000 | 64000 | 26490000 | https://www.douyin.com/video/7462156939888184627 | 特朗普,宣誓就任美国总统,美元跳水,离岸人民币大涨,行政令,南部边境,国家紧急状态,非法移民,国家能源紧急状态,传统能源开采,绿色新政,电动车优惠政策,美国传统汽车工业,美股休市,中国资产上涨,在岸人民币 | 0:00:06 | 6 | 0.03 | 7.51 | 0.02 | 2012.56 | 833011.68 | 198881539.22 | 2182857 | 9142.857 | 3784286 | 2
2025/1/20 | 星期一 | 2025/1/26 | 星期日 | 链接已删除 | #VALUE! | 链接已删除 | 7 | 实体经济 #资本运作 #揭秘 #实体经济 #企业老板 #资本运作底层逻辑 这样一套运作流程,谁不迷糊呢? | 小太阳爱米粒 | 438,838 | 7980000 | 131000 | 12000 | 链接已删除 | 链接已删除 | 链接已删除 | #VALUE! | 0.05 | 3.35 | 36.57 | 7203.98 | 659.91 | 40198.90 | 1140000 | 18714.29 | 1714.286 | #VALUE!
2025/1/20 | 星期一 | 2025/1/26 | 星期日 | 2025/1/23 | 星期三 | 9:38:00 AM | 8 | 证监会主席吴清:引导大型国有保险公司增加A股投资规模和实际比例,其中从2025年起每年新增保费的30%用于投资A股。 | 央视新闻 | 385,342 | 14500000 | 125000 | 37000 | https://www.douyin.com/video/7462918201551228198 | 证监会主席,吴清,大型国有保险公司,A股投资规模,实际比例,2025年,新增保费,30% | 0:00:22 | 22 | 0.03 | 3.08 | 10.41 | 3321.91 | 983.29 | 114061.23 | 2071429 | 17857.14 | 5285.714 | 4
2025/1/20 | 星期一 | 2025/1/26 | 星期日 | 2025/1/24 | 星期四 | 12:46:00 PM | 9 | 车厘子大跳水背后的真相 #财经 #车厘子 #经济 #掘金计划2025 | 资本论 | 329,004 | 9270000 | 171000 | 26000 | https://www.douyin.com/video/7463334027441884431 | 车厘子,价格大跳水,大国博弈,南美国家,一带一路,钱凯港,运输成本,种植面积,产量,贸易合作 | 0:06:45 | 405 | 0.04 | 1.92 | 12.65 | 6069.01 | 922.77 | 50024.00 | 1324286 | 24428.57 | 3714.286 | 5
2025/1/20 | 星期一 | 2025/1/26 | 星期日 | 2025/1/20 | 星期日 | 12:49:00 PM | 10 | 量化宽松,疯狂印钞大放水?#买房 #卖房 #财商 #热点 #商业 | 海鸥财商说 | 328,927 | 5950000 | 51000 | 14000 | https://www.douyin.com/video/7461505228777540902 | 量化宽松,印钱,货币贬值,通货膨胀,债务稀释,财富分配,赚钱效应,现金,抵御通胀 | 0:01:39 | 99 | 0.06 | 6.45 | 23.49 | 2819.37 | 773.95 | 90293.69 | 850000 | 7285.714 | 2000 | 1
</body>
</html>
### Code that reproduces the bug
```Python
import os
import pandas as pd
from ydata_profiling import ProfileReport
# Input folder path
input_folder_path = '/Users/panyulong/Desktop/报告生成/报告'
# Output folder path
output_folder_path = '/Users/panyulong/Desktop/报告生成/报告'
# Ensure the output folder exists; create it if it does not
if not os.path.exists(output_folder_path):
    os.makedirs(output_folder_path)
# Iterate over all files in the input folder
for file_name in os.listdir(input_folder_path):
    # Only process .xlsx files
    if file_name.endswith('.xlsx'):
        # Build the full file path
        file_path = os.path.join(input_folder_path, file_name)
        # Read the Excel file
        df = pd.read_excel(file_path)
        # Build the report title from the file name (strip the extension, append "报告" ("report"))
        report_title = f"{os.path.splitext(file_name)[0]} 报告"
        # Generate the report
        profile = ProfileReport(df, title=report_title, explorative=True)
        # Build the save path (change the extension to .html, save to the output folder)
        save_path = os.path.join(output_folder_path, os.path.splitext(file_name)[0] + '.html')
        # Save the report
        profile.to_file(save_path)
        print(f"报告已生成并保存到:{save_path}")  # "Report generated and saved to: ..."
```
### pandas-profiling version
2.2.3
### Dependencies
```Text
annotated-types 0.6.0 py39hca03da5_0
attrs 24.3.0 py39hca03da5_0
blas 1.0 openblas
bottleneck 1.4.2 py39hbda83bc_0
brotli-python 1.0.9 py39h313beb8_9
ca-certificates 2025.2.25 hca03da5_0
certifi 2025.1.31 py39hca03da5_0
charset-normalizer 3.3.2 pyhd3eb1b0_0
contourpy 1.2.1 py39h48ca7d4_1
cycler 0.11.0 pyhd3eb1b0_0
dacite 1.8.1 py39hca03da5_0
et-xmlfile 2.0.0 pypi_0 pypi
fonttools 4.55.3 py39h80987f9_0
freetype 2.12.1 h1192e45_0
htmlmin 0.1.12 pyhd3eb1b0_1
idna 3.7 py39hca03da5_0
imagehash 4.3.1 py39hca03da5_0
importlib-metadata 8.5.0 py39hca03da5_0
importlib_metadata 8.5.0 hd3eb1b0_0
importlib_resources 6.4.0 py39hca03da5_0
jinja2 3.1.5 py39hca03da5_0
joblib 1.4.2 py39hca03da5_0
jpeg 9e h80987f9_3
kiwisolver 1.4.4 py39h313beb8_0
lcms2 2.16 he93ba84_0
lerc 4.0.0 h313beb8_0
libcxx 14.0.6 h848a8c0_0
libdeflate 1.22 h80987f9_0
libffi 3.4.4 hca03da5_1
libgfortran 5.0.0 11_3_0_hca03da5_28
libgfortran5 11.3.0 h009349e_28
libllvm14 14.0.6 h19fdd8a_4
libopenblas 0.3.21 h269037a_0
libpng 1.6.39 h80987f9_0
libtiff 4.5.1 hc9ead59_1
libwebp-base 1.3.2 h80987f9_1
llvm-openmp 14.0.6 hc6e5704_0
llvmlite 0.43.0 py39h313beb8_1
lz4-c 1.9.4 h313beb8_1
markupsafe 3.0.2 py39h80987f9_0
matplotlib-base 3.8.4 py39h46d7db6_0
multimethod 1.9.1 py39hca03da5_0
ncurses 6.4 h313beb8_0
networkx 3.2.1 py39hca03da5_0
numba 0.60.0 py39h313beb8_1
numexpr 2.10.1 py39h5d9532f_0
numpy 1.26.4 py39h3b2db8e_0
numpy-base 1.26.4 py39ha9811e2_0
openjpeg 2.5.2 h54b8e55_0
openpyxl 3.1.5 pypi_0 pypi
openssl 3.4.1 h81ee809_0 conda-forge
packaging 24.2 py39hca03da5_0
pandas 2.2.3 py39hcf29cfe_0
patsy 1.0.1 py39hca03da5_0
phik 0.12.3 py39h48ca7d4_0
pillow 11.1.0 py39h84e58ab_0
pip 25.0 py39hca03da5_0
pybind11-abi 4 hd3eb1b0_1
pydantic 2.10.3 py39hca03da5_0
pydantic-core 2.27.1 py39h2aea54e_0
pyparsing 3.0.9 py39hca03da5_0
pysocks 1.7.1 py39hca03da5_0
python 3.9.21 hb885b13_1
python-dateutil 2.9.0post0 py39hca03da5_2
python-tzdata 2023.3 pyhd3eb1b0_0
pytz 2024.1 py39hca03da5_0
pywavelets 1.5.0 py39hbda83bc_0
pyyaml 6.0.2 py39h80987f9_0
readline 8.2 h1a28f6b_0
requests 2.32.3 py39hca03da5_1
scipy 1.13.1 py39hd336fd7_1
seaborn 0.13.2 py39hca03da5_1
setuptools 75.8.0 py39hca03da5_0
six 1.16.0 pyhd3eb1b0_1
sqlite 3.45.3 h80987f9_0
statsmodels 0.14.4 py39h80987f9_0
tbb 2021.8.0 h48ca7d4_0
tk 8.6.14 h6ba3021_0
tqdm 4.67.1 py39h33ce5c2_0
typeguard 4.2.1 py39hca03da5_0
typing-extensions 4.12.2 py39hca03da5_0
typing_extensions 4.12.2 py39hca03da5_0
tzdata 2025a h04d1e81_0
unicodedata2 15.1.0 py39h80987f9_1
urllib3 2.3.0 py39hca03da5_0
visions 0.7.6 py39hca03da5_0
wheel 0.45.1 py39hca03da5_0
wordcloud 1.9.4 py39h80987f9_0
xz 5.6.4 h80987f9_1
yaml 0.2.5 h1a28f6b_0
ydata-profiling 4.12.2 pypi_0 pypi
zipp 3.21.0 py39hca03da5_0
zlib 1.2.13 h18a0788_1
zstd 1.5.6 hfb09047_0
```
### OS
macOS Ventura 13.5.2
### Checklist
- [x] There is not yet another bug report for this issue in the [issue tracker](https://github.com/ydataai/pandas-profiling/issues)
- [x] The problem is reproducible from this bug report. [This guide](http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) can help to craft a minimal bug report.
- [x] The issue has not been resolved by the entries listed under [Common Issues](https://docs.profiling.ydata.ai/latest/support-contribution/contribution_guidelines/).
|
open
|
2025-02-27T11:16:41Z
|
2025-02-27T14:22:32Z
|
https://github.com/ydataai/ydata-profiling/issues/1706
|
[
"needs-triage"
] |
patrickstar231
| 1 |
pydata/pandas-datareader
|
pandas
| 874 |
get_data_yahoo raise RemoteDataError(msg)!
|
```python
from datetime import datetime

import pandas_datareader as pdr

start = datetime(2019, 1, 1)
nio = pdr.get_data_yahoo('NIO', start=start)
```
Error:
```
raise RemoteDataError(msg)
pandas_datareader._utils.RemoteDataError: Unable to read URL: https://finance.yahoo.com/quote/NIO/history?period1=1546333200&period2=1625990399&interval=1d&frequency=1d&filter=history
Response Text:
```
|
closed
|
2021-07-11T11:40:15Z
|
2021-07-13T10:24:43Z
|
https://github.com/pydata/pandas-datareader/issues/874
|
[] |
waynelee123
| 3 |
pykaldi/pykaldi
|
numpy
| 114 |
Very inefficient dataloader without explicit matrix-to-numpy conversion when using pykaldi+pytorch
|
Thanks to PyKaldi, it is now very easy to incorporate Kaldi features into PyTorch for NN training related to speaker verification, with only a few lines of code.
However, I found that converting PyKaldi's SubMatrix into a PyTorch FloatTensor is very slow if there is no explicit conversion from SubMatrix to numpy first. This makes the data-loading phase the performance bottleneck when incorporating PyKaldi into PyTorch's dataloader scheme.
The initially problematic part of my dataloader.py looked like this:
```python
class Mydataset(object):
    ...
    def __getitem__(self, idx):
        uttid = self.uttids[idx]
        feat2d = self.utt2feat2d[uttid]  # self.utt2feat2d is built by read_utt2feat2d
        label = self.labels[uttid]
        return torch.FloatTensor(feat2d), torch.LongTensor(label)

    def read_utt2feat2d(self, iopath2feat2d):  # builds utt2feat2d
        rspec = 'scp:{0}/feat2d.scp'.format(iopath2feat2d)
        utt2feat2d = {}
        for key, val in SequentialMatrixReader(rspec):
            utt2feat2d[key] = val  # replacing this with utt2feat2d[key] = val.numpy() increases loading speed
        return utt2feat2d  # it occupies a large amount of memory, but the aim here is to debug
```
The above code results in a slow dataloader: with a batch size of 128 and 24 workers, it takes **4 min** to load 150 batches.
The GPU (doing model training) takes about 1 min and the CPU (data loading) takes 4 min; since they run nearly in parallel, the bottleneck is the data-loading part.
If I simply change `read_utt2feat2d` to store **utt2feat2d[key] = val.numpy()**, the total time drops to **1 min**.
I don't know the underlying reason and am curious about it.
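A possible explanation (an assumption, not verified against PyKaldi internals): `torch.FloatTensor(...)` falls back to slow per-element iteration for objects that don't expose a fast buffer/array interface, while `.numpy()` produces an array that does. The cost difference between the two paths is easy to see with plain numpy:

```python
import time
import numpy as np

rows, cols = 400, 300
# Stand-in for a lazy, non-buffer matrix view: a plain list of lists.
mat_list = [[float(i + j) for j in range(cols)] for i in range(rows)]
mat_np = np.array(mat_list, dtype=np.float32)

t0 = time.perf_counter()
a = np.array(mat_list, dtype=np.float32)   # per-element iteration: slow path
t_iter = time.perf_counter() - t0

t0 = time.perf_counter()
b = np.array(mat_np, dtype=np.float32)     # buffer-protocol copy: fast path
t_copy = time.perf_counter() - t0
```

Both produce identical arrays, but the buffer copy is orders of magnitude faster, which would explain why the eager `.numpy()` conversion speeds up the dataloader.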
|
closed
|
2019-04-28T05:24:57Z
|
2019-04-30T03:43:08Z
|
https://github.com/pykaldi/pykaldi/issues/114
|
[] |
JerryPeng21cuhk
| 2 |
QingdaoU/OnlineJudge
|
django
| 120 |
Runtime error in an armv7l environment
|
Device: Raspberry Pi 3B
OS: Raspbian 4.x
Docker images:
postgres: runs normally
redis: restarts endlessly; the log reports a missing file
The other two components fail to run at all; standard_init_linux.go:195 reports: exec user process caused "exec format error"
|
closed
|
2018-01-11T00:52:34Z
|
2018-01-11T20:22:40Z
|
https://github.com/QingdaoU/OnlineJudge/issues/120
|
[] |
iamapig120
| 2 |
BlinkDL/RWKV-LM
|
pytorch
| 85 |
Add `Model-based Deep Reinforcement Learning` to RWKV-LM?
|
What about adding some `Model-based Deep Reinforcement Learning` to RWKV-LM?
|
closed
|
2023-04-15T16:02:59Z
|
2023-04-17T19:19:40Z
|
https://github.com/BlinkDL/RWKV-LM/issues/85
|
[] |
linkerlin
| 2 |
pyppeteer/pyppeteer
|
automation
| 297 |
Page doesn't load properly when using page.setRequestInterception(True)
|
I want to capture all the requests and responses on the page through the interceptor, but enabling it prevents the page from loading properly.
It's just a simple demo: I listen with an interceptor and print out everything it captures. That's it.
|
open
|
2021-08-11T03:03:16Z
|
2021-10-24T03:48:50Z
|
https://github.com/pyppeteer/pyppeteer/issues/297
|
[
"waiting for info"
] |
ghost
| 4 |
tflearn/tflearn
|
data-science
| 283 |
Assertion on input dim in recurrent model
|
Hi,
I'd like to try an LSTM with an input of fewer than 3 dimensions, but I receive this error:
`AssertionError: Input dim should be at least 3.`
Does tflearn inherit this from TF? Would the recurrent model still work fine if I removed that assertion?
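For context, recurrent layers in TF-based libraries expect a 3-D input of shape (batch, timesteps, features), so the usual fix is to add the missing axis rather than remove the assertion. A minimal numpy sketch (the shapes here are illustrative assumptions, not from the issue):

```python
import numpy as np

x = np.random.rand(32, 10)        # (batch, timesteps) -- only 2 dims
x3 = np.expand_dims(x, axis=-1)   # (batch, timesteps, features=1) -- 3 dims
```

The reshaped array carries the same values; each timestep simply becomes a length-1 feature vector, which satisfies the dimensionality check.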
|
open
|
2016-08-15T18:54:29Z
|
2016-08-16T04:43:50Z
|
https://github.com/tflearn/tflearn/issues/283
|
[] |
sauberf
| 1 |
sktime/sktime
|
data-science
| 7,056 |
[BUG] `sktime` fails if an older version of `polars` is installed
|
Reported by @wirrywoo on discord.
If an older version of `polars` is installed, `sktime` fails due to import chains and module level generation of a test fixture with `DataFrame(strict=False)`, where the `strict` argument is not present in earlier `polars` versions.
The solution is to add the fixture only on `polars` versions that have the argument, and a workaround is to avoid older `polars` versions.
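The version-guarded fixture idea can be sketched generically (the helper and stand-in class names are mine; real code would inspect the installed `polars.DataFrame`):

```python
import inspect

def supports_kwarg(cls, name: str) -> bool:
    """Return True if cls.__init__ accepts the given keyword argument."""
    params = inspect.signature(cls.__init__).parameters
    return name in params or any(
        p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values()
    )

class OldFrame:                      # stand-in for an older polars.DataFrame
    def __init__(self, data=None):
        self.data = data

class NewFrame:                      # stand-in for a newer polars.DataFrame
    def __init__(self, data=None, strict=True):
        self.data, self.strict = data, strict

# A module-level fixture would then only be generated when the guard passes,
# e.g.: if supports_kwarg(NewFrame, "strict"): build_polars_fixture()
```

This keeps module import safe on every installed version instead of failing at collection time.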
|
closed
|
2024-08-30T14:27:49Z
|
2024-08-31T22:09:10Z
|
https://github.com/sktime/sktime/issues/7056
|
[
"bug",
"module:datatypes"
] |
fkiraly
| 1 |
aimhubio/aim
|
data-visualization
| 3,070 |
cannot import aimstack without aimos package
|
## ❓Question
I'm trying to set up a LangChain debugger on Windows. Since `aimos` cannot be installed on Windows, I installed `aimstack` and have the following code:
```python
def get_callbacks() -> list:
callbacks = []
aimos_url = os.environ["AIMOS_URL"]
if aimos_url:
try:
from aimstack.langchain_debugger.callback_handlers import \
GenericCallbackHandler
callbacks.append(GenericCallbackHandler(aimos_url))
except ImportError:
pass
return callbacks
```
For some reason I'm getting an ImportError. I've checked that the correct venv is used and double-checked that `aimstack` is installed. Please help.
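One debugging step worth trying (a sketch, not a fix): the bare `except ImportError: pass` hides *which* import failed — it may be a transitive dependency of `aimstack` rather than `aimstack` itself. Logging the exception makes that visible:

```python
import logging
import os

def get_callbacks() -> list:
    callbacks = []
    # .get avoids a KeyError when the variable is unset
    aimos_url = os.environ.get("AIMOS_URL", "")
    if aimos_url:
        try:
            from aimstack.langchain_debugger.callback_handlers import \
                GenericCallbackHandler
            callbacks.append(GenericCallbackHandler(aimos_url))
        except ImportError as exc:
            # Surface the exact failing module instead of swallowing it.
            logging.warning("aimstack callback unavailable: %s", exc)
    return callbacks
```

The warning message should show whether the failure comes from `aimstack` or from one of its dependencies.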
|
open
|
2023-12-21T14:47:42Z
|
2024-01-31T13:25:12Z
|
https://github.com/aimhubio/aim/issues/3070
|
[
"type / question"
] |
MrZoidberg
| 1 |
marcomusy/vedo
|
numpy
| 1,148 |
Vedo only renders changes when I move my mouse over the screen
|
The scene only updates when I move my mouse over the window, which makes the animation not smooth. Can you help me solve this issue?
|
open
|
2024-06-27T18:48:10Z
|
2024-06-28T14:34:10Z
|
https://github.com/marcomusy/vedo/issues/1148
|
[] |
OhmPuchiss
| 6 |
horovod/horovod
|
deep-learning
| 2,929 |
Key Error: 'ib0'
|
**Environment:**
1. Framework: pytorch
2. Framework version:1.8.0
3. Horovod version:0.21.3
4. MPI version: mpich 3.0.4
5. CUDA version: 10.1
6. NCCL version:
7. Python version:
8. Spark / PySpark version:
9. Ray version:
10. OS and version:
11. GCC version:
12. CMake version:
when I run `horovodrun -np 1 --start-timeout=180 --min-np 1 --max-np 3 --host-discovery-script ./discover_hosts.sh python -u pytorch_mnist_elastic.py `, I got the following error
**Bug report:**
```
Traceback (most recent call last):
  File "/home/test/dat01/txacs/anaconda3/envs/py36/bin/horovodrun", line 8, in <module>
    sys.exit(run_commandline())
  File "/home/test/dat01/txacs/anaconda3/envs/py36/lib/python3.6/site-packages/horovod/runner/launch.py", line 768, in run_commandline
    _run(args)
  File "/home/test/dat01/txacs/anaconda3/envs/py36/lib/python3.6/site-packages/horovod/runner/launch.py", line 756, in _run
    return _run_elastic(args)
  File "/home/test/dat01/txacs/anaconda3/envs/py36/lib/python3.6/site-packages/horovod/runner/launch.py", line 666, in _run_elastic
    gloo_run_elastic(settings, env, args.command)
  File "/home/test/dat01/txacs/anaconda3/envs/py36/lib/python3.6/site-packages/horovod/runner/gloo_run.py", line 336, in gloo_run_elastic
    launch_gloo_elastic(command, exec_command, settings, env, get_common_interfaces, rendezvous)
  File "/home/test/dat01/txacs/anaconda3/envs/py36/lib/python3.6/site-packages/horovod/runner/gloo_run.py", line 303, in launch_gloo_elastic
    server_ip = network.get_driver_ip(nics)
  File "/home/test/dat01/txacs/anaconda3/envs/py36/lib/python3.6/site-packages/horovod/runner/util/network.py", line 100, in get_driver_ip
    for addr in net_if_addrs()[iface]:
KeyError: 'ib0'
Launching horovod task function was not successful:
Exception in thread Thread-10:
Traceback (most recent call last):
  File "/home/test/dat01/txacs/anaconda3/envs/py36/lib/python3.6/threading.py", line 916, in _bootstrap_inner
    self.run()
  File "/home/test/dat01/txacs/anaconda3/envs/py36/lib/python3.6/threading.py", line 864, in run
    self._target(*self._args, **self._kwargs)
  File "/home/test/dat01/txacs/anaconda3/envs/py36/lib/python3.6/site-packages/horovod/runner/util/threads.py", line 58, in fn_execute
    res = fn(*arg[:-1])
  File "/home/test/dat01/txacs/anaconda3/envs/py36/lib/python3.6/site-packages/horovod/runner/driver/driver_service.py", line 87, in _exec_command
    os._exit(exit_code)
TypeError: an integer is required (got type NoneType)
Launching horovod task function was not successful:
```
Please help me, thanks!
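The `KeyError: 'ib0'` means the driver expects an InfiniBand interface named `ib0` that this node does not have. A quick diagnostic (psutil's `net_if_addrs` is the same call shown in the traceback) lists the interfaces actually visible on the node, so Horovod can be pointed at one that exists:

```python
import psutil

# Enumerate the network interfaces visible on this node.
interfaces = sorted(psutil.net_if_addrs().keys())
print(interfaces)
print("has ib0:", "ib0" in interfaces)
```

If `ib0` is absent, the fix is to make Horovod/Gloo use an interface that does appear in this list rather than the hard-coded InfiniBand name.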
|
closed
|
2021-05-22T12:58:57Z
|
2021-05-26T07:01:36Z
|
https://github.com/horovod/horovod/issues/2929
|
[
"question"
] |
TXacs
| 3 |
comfyanonymous/ComfyUI
|
pytorch
| 7,323 |
thanks
|
delete thanks
|
closed
|
2025-03-20T07:25:30Z
|
2025-03-20T07:55:14Z
|
https://github.com/comfyanonymous/ComfyUI/issues/7323
|
[
"Potential Bug"
] |
chenfeng6a
| 0 |
pykaldi/pykaldi
|
numpy
| 294 |
SingleUtteranceGmmDecoder.feature_pipeline() causes segmentation fault
|
Kaldi: compiled from master (d366a93aad)
PyKaldi: pykaldi-cpu 0.1.3 py37h14c3975_1 pykaldi
Python: 3.7.11
OS: Manjaro Linux VM
When calling `feature_pipeline()` from a `SingleUtteranceGmmDecoder` object twice, a segmentation fault occurs.
I am trying to run online GMM based decoding. I translated a similar c++ example that can be found here: <https://kaldi-asr.org/doc/online_decoding.html#GMM-based>, to PyKaldi. But when I try to feed the `OnlineFeaturePipeline` instance with audio data, it crashes with a segmentation fault.
I could narrow the error down to calling the feature pipeline getter twice, and created the following minimal working example:
```py
#!/usr/bin/env python
from kaldi.online2 import (
SingleUtteranceGmmDecoder,
OnlineGmmAdaptationState,
OnlineFeaturePipelineCommandLineConfig,
OnlineGmmDecodingConfig,
OnlineFeaturePipelineConfig,
OnlineFeaturePipeline,
OnlineGmmDecodingModels,
)
from kaldi.fstext import read_fst_kaldi
import subprocess, sys
from os.path import expanduser
base_path = expanduser("~/speech/kaldi/asr")
kaldi_root = expanduser("~/speech/kaldi/kaldi")
subprocess.run(
f"{kaldi_root}/src/bin/matrix-sum --binary=false scp:{base_path}/data/train/cmvn.scp - >/tmp/global_cmvn.stats", shell=True
)
feature_cmdline_config = OnlineFeaturePipelineCommandLineConfig()
feature_cmdline_config.feature_type = "mfcc"
feature_cmdline_config.mfcc_config = f"{base_path}/conf/mfcc.conf"
feature_cmdline_config.global_cmvn_stats_rxfilename = "/tmp/global_cmvn.stats"
feature_config = OnlineFeaturePipelineConfig.from_config(feature_cmdline_config)
decode_config = OnlineGmmDecodingConfig()
decode_config.faster_decoder_opts.beam = 11.0
decode_config.faster_decoder_opts.max_active = 7000
decode_config.model_rxfilename = f"{base_path}/exp/mono/final.mdl"
gmm_models = OnlineGmmDecodingModels(decode_config)
pipeline_prototype = OnlineFeaturePipeline(feature_config)
decode_fst = read_fst_kaldi(f"{base_path}/exp/mono/graph/HCLG.fst")
adaptation_state = OnlineGmmAdaptationState()
decoder = SingleUtteranceGmmDecoder(
decode_config, gmm_models, pipeline_prototype, decode_fst, adaptation_state
)
# this one does not crash, but using pipe.accept_waveform crashed for me
pipe = decoder.feature_pipeline()
# the next line crashes with "segmentation fault (core dumped)"
pipe = decoder.feature_pipeline()
```
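As a workaround sketch (my assumption, not a confirmed fix): fetch the pipeline exactly once and reuse the returned reference, so the getter is never invoked twice:

```python
# Workaround sketch: cache the result of feature_pipeline() so the
# underlying getter is only ever invoked once. ``decoder`` stands for the
# SingleUtteranceGmmDecoder instance from the example above.
class CachedPipeline:
    def __init__(self, decoder):
        self._decoder = decoder
        self._pipe = None

    @property
    def pipe(self):
        if self._pipe is None:
            self._pipe = self._decoder.feature_pipeline()
        return self._pipe
```

With this, `cached.pipe` can be read repeatedly while `feature_pipeline()` itself runs only once.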
|
closed
|
2022-01-15T13:52:38Z
|
2022-02-27T20:57:23Z
|
https://github.com/pykaldi/pykaldi/issues/294
|
[] |
vb42e
| 1 |
christabor/flask_jsondash
|
flask
| 47 |
Add raw log endpoint
|
Inspired by http://atlasboard.bitbucket.org
|
closed
|
2016-09-11T04:49:52Z
|
2016-11-30T23:02:59Z
|
https://github.com/christabor/flask_jsondash/issues/47
|
[
"enhancement",
"new feature"
] |
christabor
| 1 |
python-gino/gino
|
asyncio
| 635 |
Database Connection per Schema
|
* GINO version: 0.8.6
* Python version: 3.7.0
* asyncpg version: 0.20.1
* aiocontextvars version: 0.2.2
* PostgreSQL version: 11
### Description
We're trying to implement our database logic as a single connection pool to one database, using schemas as if they were separate databases. We need this because connecting to different databases hurts our performance, but we still need to keep the information in different schemas separated, as it is client-specific.
Currently we are doing this for each new request (because each request could be related to a different client and we need to set a new client schema):
```python
async def adjust_schemas(self):
"""Adjust schemas on all model tables."""
for table_name, table_dict in self.__db.tables.items():
table_dict.schema = self.__schema
```
but most probably you can already guess that this messes up with requests that have still not finished giving a response, since the schema could change from a different request and data would go into the wrong client schema.
We can't find any straightforward solution for this. Do you think we can achieve this with Gino?
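One framework-agnostic pattern worth considering (a sketch using only the standard library, not a GINO-specific API): keep the active client schema in a `ContextVar`, so each concurrent request sees its own value instead of mutating the shared table metadata:

```python
import contextvars

# Each asyncio task (i.e. each request) gets its own view of this variable,
# so setting it in one request never leaks into another concurrent request.
current_schema = contextvars.ContextVar("current_schema", default="public")

def qualified(table_name: str) -> str:
    # Build a schema-qualified table name from the per-request schema.
    return f"{current_schema.get()}.{table_name}"
```

A request handler would `set()` the client's schema at the start of the request and build queries through `qualified()` rather than rewriting `table_dict.schema` globally.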
|
closed
|
2020-03-06T12:06:33Z
|
2020-10-10T05:17:56Z
|
https://github.com/python-gino/gino/issues/635
|
[
"question"
] |
shsimeonova
| 3 |
ExpDev07/coronavirus-tracker-api
|
rest-api
| 172 |
Hi, I created an interactive map with your API, thanks for the dataset!
|
You can visit the map in [COVID-19 Map](https://python.robertocideos.com)
Thanks for the hardwork and the dataset!
|
closed
|
2020-03-25T06:28:38Z
|
2020-04-19T18:17:23Z
|
https://github.com/ExpDev07/coronavirus-tracker-api/issues/172
|
[
"user-created"
] |
rcideos
| 0 |
pytest-dev/pytest-django
|
pytest
| 923 |
`Database access not allowed` when passing function to default foreign key
|
I am getting the following error when I am setting a function as a `default` value for a foreign key. I have the decorator on many tests, but it doesn't even finish loading the first test with the decorator before exploding.
```
Failed: Database access not allowed, use the "django_db" mark, or the "db" or "transactional_db" fixtures to enable it.
```
Here is what I have:
```python
class Score(models.Model):
def default_value():
return Sport.objects.get(game='football').id
sport = models.ForeignKey(
Sport,
null=True,
blank=True,
on_delete=models.SET_NULL,
default=default_value
)
```
1. This works with django since default is either looking for a value or a callable.
2. It works in migrations since it is being called after all of the apps are initialized.
3. It also just works in the normal course of using the project
I suspect this is just tripping up the order of something getting loaded.
|
open
|
2021-04-20T22:20:59Z
|
2022-08-04T08:23:08Z
|
https://github.com/pytest-dev/pytest-django/issues/923
|
[
"needs-info"
] |
buddylindsey
| 2 |
Guovin/iptv-api
|
api
| 709 |
[Bug]:
|
### Don't skip these steps
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
- [X] I have checked through the search that there are no similar issues that already exist
- [X] I will not submit any issues that are not related to this project
### Occurrence environment
- [ ] Workflow
- [ ] GUI
- [X] Docker
- [ ] Command line
### Bug description
Docker full version. In the playback sources that get synced back, the URLs are followed by Chinese text such as "~湖北酒店源~" (Hubei hotel source). When imported directly into a player, these sources cannot be played.
### Error log
_No response_
|
closed
|
2024-12-19T07:05:42Z
|
2024-12-19T07:30:24Z
|
https://github.com/Guovin/iptv-api/issues/709
|
[
"invalid"
] |
wudixxqq
| 2 |
mars-project/mars
|
numpy
| 2,663 |
Add support for HTTP request rewriter
|
Sometimes we need to pass through proxies which have authorizations, and we need to rewrite our HTTP requests to meet those needs. A `request_rewriter` argument can be added to session objects to support this.
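A sketch of what such a hook might look like (names and the request shape are illustrative, not the actual Mars API): a `request_rewriter` callable that attaches proxy authorization to every outgoing request:

```python
import base64

def make_basic_auth_rewriter(user: str, password: str):
    """Return a rewriter that adds a Proxy-Authorization header."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()

    def rewrite(request: dict) -> dict:
        # ``request`` is modeled here as a plain dict with "url"/"headers";
        # existing headers are preserved and the auth header is added.
        headers = dict(request.get("headers", {}))
        headers["Proxy-Authorization"] = f"Basic {token}"
        return {**request, "headers": headers}

    return rewrite
```

A session could then call the configured rewriter on each request before sending it, e.g. `request = rewriter(request)`.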
|
closed
|
2022-01-29T14:34:53Z
|
2022-01-30T07:00:05Z
|
https://github.com/mars-project/mars/issues/2663
|
[
"type: enhancement",
"mod: web"
] |
wjsi
| 0 |
deepfakes/faceswap
|
deep-learning
| 984 |
No alignments file found
|
Hi, when I run "convert", it tells me no alignments file was found.
The command I used is:
python3 faceswap.py convert -i ~/faceswap/src/trump/ -o ~/faceswap/converted/ -m ~/faceswap/trump_cage_model/
The console output is :
03/11/2020 10:39:30 ERROR No alignments file found. Please provide an alignments file for your destination video (recommended) or enable on-the-fly conversion (not recommended).
I found that there is an xx_alignments.fsa file in the input dir, but no alignments.json file. So what should I do?
|
closed
|
2020-03-11T02:43:40Z
|
2024-02-02T16:16:05Z
|
https://github.com/deepfakes/faceswap/issues/984
|
[] |
chenbinghui1
| 13 |
anselal/antminer-monitor
|
dash
| 133 |
Change warning temp
|
I changed the warning temp in v0.4 from 80 to 90.
Now I cannot find where it is in v0.5.
Can you help me, please?
|
closed
|
2018-10-05T10:23:41Z
|
2018-10-05T13:08:27Z
|
https://github.com/anselal/antminer-monitor/issues/133
|
[
":octocat: help wanted"
] |
papampi
| 2 |
seleniumbase/SeleniumBase
|
pytest
| 2,449 |
Disabling the GPU causes `--enable-3d-apis` to not work
|
## Disabling the GPU causes `--enable-3d-apis` to not work
The fix for this is simple: if `--enable-3d-apis` / `enable_3d_apis=True` is used, don't disable the GPU. The GPU was being disabled via the Chromium option `--disable-gpu`, which is needed under several circumstances to prevent other issues. However, once the SeleniumBase option `--enable-3d-apis` is fixed, SeleniumBase will prioritize the 3D APIs over the GPU workaround when that option is used.
Related SeleniumBase issues:
* https://github.com/seleniumbase/SeleniumBase/issues/1384
* https://github.com/seleniumbase/SeleniumBase/issues/1873
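The prioritization described above can be sketched as a small decision helper (illustrative only, not SeleniumBase's actual implementation):

```python
def build_chromium_args(enable_3d_apis: bool, gpu_workaround_needed: bool) -> list:
    """Collect Chromium args, letting 3D APIs win over the GPU workaround."""
    args = []
    if gpu_workaround_needed and not enable_3d_apis:
        # --disable-gpu breaks WebGL/3D, so it is skipped whenever the
        # user asked for 3D APIs, even if the workaround would otherwise apply.
        args.append("--disable-gpu")
    return args
```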
|
closed
|
2024-01-25T15:14:54Z
|
2024-01-25T19:13:10Z
|
https://github.com/seleniumbase/SeleniumBase/issues/2449
|
[
"bug"
] |
mdmintz
| 1 |
onnx/onnx
|
machine-learning
| 5,922 |
Intermittent failing of ONNX model
|
# Bug Report
### Describe the bug
I have a script from compiling a `pytorch` model to ONNX that runs inference with the ONNX model, and when running inference on the GPU, it intermittently fails with the error:
```
File "/home/ec2-user/anaconda3/envs/onnx/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 220, in run
    return self._sess.run(output_names, input_feed, run_options)
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Non-zero status code returned while running Expand node. Name:'/Expand_591' Status Message: /Expand_591: left operand cannot broadcast on dim 0 LeftShape: {243}, RightShape: {267}.
```
Some additional notes:
1. In the script (see below), I'm running inference 10x (via a for loop). When it fails, it fails on the first iteration of the for loop and crashes the script. But, if I re-run the script, it sometimes doesn't fail on that first iteration and completes successfully. Thus, the intermittent nature here seems to be between iterations of the script, _not between iterations of the for loop_.
2. Each time it runs into the error, it does have the `Expand_591` node called out, and the `RightShape {267}` remains the same. However, the `LeftShape` (243 in the error example above) changes.
### System information
- OS Platform and Distribution (*e.g. Linux Ubuntu 20.04*):
<img width="317" alt="image" src="https://github.com/onnx/onnx/assets/124316637/20ccd564-5d0b-4515-a3e1-09fb27b5eb36">
- ONNX version (*e.g. 1.13*):
<img width="235" alt="image" src="https://github.com/onnx/onnx/assets/124316637/c1680d8f-17f0-4239-910f-e84c69ac1a2d">
- Python version: 3.10.6
- Torch version
<img width="158" alt="image" src="https://github.com/onnx/onnx/assets/124316637/ef625eb7-6ff7-4487-bf61-5a93e3afcd1f">
### Reproduction instructions
Script I'm using to test (with private details removed):
```python
import onnx
import onnxruntime
import torch
import numpy as np
device = torch.device("cuda")
input_tensor = torch.randn(1, 3, 1280, 896)
input_tensor = input_tensor.to(device)
def to_numpy(tensor):
return tensor.detach().cpu().numpy() if tensor.requires_grad else tensor.cpu().numpy()
onnx_model = onnx.load("exp_06_aug_stacked_strong_v5_step_50_epoch_69.onnx")
onnx.checker.check_model(onnx_model)
ort_session = onnxruntime.InferenceSession(
"exp_06_aug_stacked_strong_v5_step_50_epoch_69.onnx",
providers=['CUDAExecutionProvider']
)
# compute ONNX Runtime output prediction
ort_inputs = {ort_session.get_inputs()[0].name: to_numpy(input_tensor)}
for idx in range(10):
ort_outs = ort_session.run(None, ort_inputs)
```
### Expected behavior
I would expect the model to run successfully each time and not intermittently fail.
### Notes
We got several different flavors of warnings when compiling:
- TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
- TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
- TracerWarning: Converting a tensor to a Python float might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
- When we compiled the model on the GPU but ran on the CPU, it ran successfully each time. However, it did not produce the same results as the underlying pytorch model.
- When we compiled this same model on CPU and tested using the `CPUExecutionProvider`, we ran into this error 100% of the time:
```
2024-02-08 22:53:05.966710901 [E:onnxruntime:, sequential_executor.cc:514 ExecuteKernel] Non-zero status code returned while running Gather node.
Name:'/Gather_2452' Status Message: indices element out of data bounds,
idx=264 must be within the inclusive range [-264,263]
Traceback (most recent call last): File "/home/ec2-user/projects/onnx/test_onnx_model.py", line 60,
in <module> ort_outs = ort_session.run(None, ort_inputs)
File "/home/ec2-user/anaconda3/envs/onnx/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 220, in run return self._sess.run(output_names, input_feed, run_options)
```
|
closed
|
2024-02-08T23:51:24Z
|
2024-02-09T16:06:35Z
|
https://github.com/onnx/onnx/issues/5922
|
[
"bug",
"topic: runtime"
] |
sallamander317
| 2 |
vitalik/django-ninja
|
django
| 1,321 |
TestClient request mock HttpRequest is missing SessionStore session attribute
|
**AttributeError: Mock object has no attribute 'session'**
This error is raised when using TestClient to test a login endpoint that uses `django.contrib.auth.login` because the mock request object as defined here https://github.com/vitalik/django-ninja/blob/master/ninja/testing/client.py#L128-L138 is missing a session attribute.
**Possible Solution**
I was able to solve this issue on my own by monkey patching the test client by defining a function like
```python
from django.contrib.sessions.backends.db import SessionStore
def _new_build_request(self, *args, **kwargs) -> Mock:
"""Method to be monkey patched into the TestClient to add session store to the request mock"""
mock = self._old_build_request(*args, **kwargs)
mock.session = SessionStore()
return mock
```
and then using this new function to replace the `_build_request` function in my TestClient instance like
```python
client._old_build_request = client._build_request
client._build_request = _new_build_request.__get__(client)
```
Maybe a better solution would be to use a SessionStore mock?
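The monkey patch can be generalized into a small wrapper (a standard-library sketch; the session factory stands in for Django's `SessionStore`):

```python
from unittest.mock import Mock

def with_session(build_request, session_factory):
    """Wrap a request builder so every mock it returns carries a session."""
    def wrapper(*args, **kwargs) -> Mock:
        mock = build_request(*args, **kwargs)
        mock.session = session_factory()
        return mock
    return wrapper
```

Applied to the test client, this would look roughly like `client._build_request = with_session(client._build_request, SessionStore)`.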
|
open
|
2024-10-18T13:29:07Z
|
2024-10-29T08:53:24Z
|
https://github.com/vitalik/django-ninja/issues/1321
|
[] |
picturedots
| 1 |
ageitgey/face_recognition
|
machine-learning
| 1,369 |
Obtain hair outline as landmark
|
Hi,
This is a general question. I am able to get the face landmarks. However, I am also interested in the hair. Any way to extract this as landmarks?
Thanks.
|
open
|
2021-09-05T19:16:28Z
|
2021-09-05T19:16:28Z
|
https://github.com/ageitgey/face_recognition/issues/1369
|
[] |
SridharRamasami
| 0 |
dask/dask
|
numpy
| 11,610 |
`dataframe.read_parquet` crashed with DefaultAzureCredential cannot be deterministically hashed
|
**Describe the issue**:
Dask 2024.2.1 version in python 3.9 works as expected.
Dask 2024.12.0 version in python 3.12 crashed with
```
File "/home/user/conda-envs/dev-env/lib/python3.12/site-packages/dask/utils.py", line 772, in __call__
return meth(arg, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/conda-envs/dev-env/lib/python3.12/site-packages/dask/tokenize.py", line 159, in normalize_seq
return type(seq).__name__, _normalize_seq_func(seq)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/conda-envs/dev-env/lib/python3.12/site-packages/dask/tokenize.py", line 152, in _normalize_seq_func
return tuple(map(_inner_normalize_token, seq))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/conda-envs/dev-env/lib/python3.12/site-packages/dask/tokenize.py", line 146, in _inner_normalize_token
return normalize_token(item)
^^^^^^^^^^^^^^^^^^^^^
File "/home/user/conda-envs/dev-env/lib/python3.12/site-packages/dask/utils.py", line 772, in __call__
return meth(arg, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/conda-envs/dev-env/lib/python3.12/site-packages/dask/tokenize.py", line 210, in normalize_object
_maybe_raise_nondeterministic(
File "/home/user/conda-envs/dev-env/lib/python3.12/site-packages/dask/tokenize.py", line 89, in _maybe_raise_nondeterministic
raise TokenizationError(msg)
dask.tokenize.TokenizationError: Object <azure.identity.aio._credentials.default.DefaultAzureCredential object at 0x7fb2dad44d40> cannot be deterministically hashed. See https://docs.dask.org/en/latest/custom-collections.html#implementing-deterministic-hashing for more information.
```
Note that in the following example, if I replace `storage_options` with `filesystem`, it works.
```python
from adlfs.spec import AzureBlobFileSystem
filesystem = AzureBlobFileSystem(
**storage_options,
)
```
**Minimal Complete Verifiable Example**:
```python
import pyarrow as pa
import dask.dataframe as dd
from azure.identity.aio import DefaultAzureCredential
DEV_PA_SCHEMAS = pa.schema([
('dev_code', pa.string()),
('dev_value', pa.float64()),
])
storage_options = dict(
account_name='my_azure_blob_storage_name',
credential=DefaultAzureCredential(),
)
d = dd.read_parquet(
[
'az://my-container/2024-12-17/file1.parquet',
'az://my-container/2024-12-17/file2.parquet',
],
filters=None,
index=False,
columns=['dev_code'],
engine='pyarrow',
storage_options=storage_options,
open_file_options=dict(precache_options=dict(method='parquet')),
schema=DEV_PA_SCHEMAS,
)['dev_code'].unique().compute()
```
**Anything else we need to know?**:
**Environment**: Azure Kubernetes pod
- Dask version: 2024.12.0
- Python version: 3.12.8
- Operating System: Ubuntu 22.04
- Install method (conda, pip, source): conda
- Pandas version: 2.2.3
- Pyarrow version: 18.1.0
|
open
|
2024-12-18T01:53:03Z
|
2025-02-17T02:01:02Z
|
https://github.com/dask/dask/issues/11610
|
[
"needs attention",
"dask-expr"
] |
seanslma
| 0 |
deeppavlov/DeepPavlov
|
tensorflow
| 1,390 |
There is no config.json in pre-trained BERT models by DeepPavlov
|
BERT pre-trained models from http://docs.deeppavlov.ai/en/master/features/pretrained_vectors.html#bert have `bert_config.json` instead of `config.json`. This leads to errors when these models are used with HuggingFace Transformers:
```python
from transformers import AutoTokenizer
t = AutoTokenizer.from_pretrained("./conversational_cased_L-12_H-768_A-12_v1")
```
```
OSError Traceback (most recent call last)
<ipython-input-2-1a3f920b5ef3> in <module>
----> 1 t = AutoTokenizer.from_pretrained("/home/yurakuratov/.deeppavlov/downloads/bert_models/conversational_cased_L-12_H-768_A-12_v1")
~/anaconda3/envs/dp_tf1.15/lib/python3.7/site-packages/transformers/tokenization_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs)
184 config = kwargs.pop("config", None)
185 if not isinstance(config, PretrainedConfig):
--> 186 config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **kwargs)
187
188 if "bert-base-japanese" in pretrained_model_name_or_path:
~/anaconda3/envs/dp_tf1.15/lib/python3.7/site-packages/transformers/configuration_auto.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
185 """
186 config_dict, _ = PretrainedConfig.get_config_dict(
--> 187 pretrained_model_name_or_path, pretrained_config_archive_map=ALL_PRETRAINED_CONFIG_ARCHIVE_MAP, **kwargs
188 )
189
~/anaconda3/envs/dp_tf1.15/lib/python3.7/site-packages/transformers/configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, pretrained_config_archive_map, **kwargs)
268 )
269 )
--> 270 raise EnvironmentError(msg)
271
272 except json.JSONDecodeError:
OSError: Can't load '/home/yurakuratov/.deeppavlov/downloads/bert_models/conversational_cased_L-12_H-768_A-12_v1'. Make sure that:
- '/home/yurakuratov/.deeppavlov/downloads/bert_models/conversational_cased_L-12_H-768_A-12_v1' is a correct model identifier listed on 'https://huggingface.co/models'
- or '/home/yurakuratov/.deeppavlov/downloads/bert_models/conversational_cased_L-12_H-768_A-12_v1' is the correct path to a directory containing a 'config.json' file
```
Renaming `bert_config.json` to `config.json` should solve the problem.
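A sketch of that workaround (the directory layout is assumed from the error message above): copy `bert_config.json` to `config.json` inside the model directory, leaving the original file in place:

```python
import shutil
from pathlib import Path

def fix_bert_config(model_dir: str) -> Path:
    """Copy bert_config.json to config.json so Transformers can find it."""
    src = Path(model_dir) / "bert_config.json"
    dst = Path(model_dir) / "config.json"
    if src.exists() and not dst.exists():
        shutil.copyfile(src, dst)
    return dst
```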
|
closed
|
2021-01-27T15:16:01Z
|
2022-04-04T13:44:52Z
|
https://github.com/deeppavlov/DeepPavlov/issues/1390
|
[] |
yurakuratov
| 1 |
psf/requests
|
python
| 6,793 |
Cannot close the proxy
|
On Windows (PyCharm / JupyterLab), when I enable the Windows system proxy, requests uses the proxy I set at the system level, but I cannot disable this proxy to connect directly to the internet. I tried:
```python
response = requests.post(url, headers=headers, json=data, proxies=None)
response = requests.post(url, headers=headers, json=data, proxies={})
response = requests.post(url, headers=headers, json=data, proxies="")
```
None of these work.
## Expected Result
Requests does not use the proxy I set on Windows.
## Actual Result
I can see the connection still going through the proxy in Clash.
## Reproduction Steps
Windows, requests 2.31.0, run from PyCharm and Jupyter.
```python
import requests
response = requests.post(url, headers=headers, json=data, proxies=None)# I try None {} "" []
```
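For comparison — and this is my suggestion, not something from the report — the standard library forces a direct connection by installing an empty `ProxyHandler`, which overrides proxy auto-detection entirely (requests exposes an analogous switch, `Session.trust_env = False`, which stops it reading environment/system proxy settings):

```python
import urllib.request

# An empty mapping tells urllib to use no proxies at all, instead of
# auto-detecting the Windows system proxy settings.
no_proxy = urllib.request.ProxyHandler({})
opener = urllib.request.build_opener(no_proxy)
# opener.open(url) would now bypass the system proxy.
```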
## System Information
$ python -m requests.help
```json
{
"chardet": {
"version": null
},
"charset_normalizer": {
"version": "2.0.4"
},
"cryptography": {
"version": "41.0.7"
},
"idna": {
"version": "3.4"
},
"implementation": {
"name": "CPython",
"version": "3.11.5"
},
"platform": {
"release": "10",
"system": "Windows"
},
"pyOpenSSL": {
"openssl_version": "300000c0",
"version": "23.2.0"
},
"requests": {
"version": "2.31.0"
},
"system_ssl": {
"version": "300000c0"
},
"urllib3": {
"version": "1.26.18"
},
"using_charset_normalizer": true,
"using_pyopenssl": true
}
```
|
open
|
2024-08-27T10:37:43Z
|
2025-01-27T05:14:09Z
|
https://github.com/psf/requests/issues/6793
|
[] |
invisifire
| 1 |
huggingface/transformers
|
nlp
| 36,320 |
Support for Multi-Modality Models (DeepSeek Janus-Pro-7B)
|
### Feature request
I’m requesting support for multi_modality models in transformers, specifically for models like DeepSeek Janus-Pro-7B.
Currently, when attempting to load this model using AutoModel.from_pretrained(), I received the following error:
```
KeyError: 'multi_modality'

During handling of the above exception, another exception occurred:

ValueError                                Traceback (most recent call last)
/usr/local/lib/python3.11/dist-packages/transformers/models/auto/configuration_auto.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
   1092             config_class = CONFIG_MAPPING[config_dict["model_type"]]
   1093         except KeyError:
-> 1094             raise ValueError(
   1095                 f"The checkpoint you are trying to load has model type `{config_dict['model_type']}` "
   1096                 "but Transformers does not recognize this architecture. This could be because of an "

ValueError: The checkpoint you are trying to load has model type `multi_modality` but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date.
```
### Motivation
I’d like to use DeepSeek Janus-Pro-7B with transformers, but since it’s labeled as multi_modality, it cannot be loaded.
Is there an ETA for support?
Are there any workarounds to load the model until official support is added?
### Your contribution
At this time, I am unable to submit a PR, but I am happy to help with testing once support for multi_modality models is added. If there are any workarounds or steps to try, I’d be glad to assist in debugging and verifying compatibility.
|
closed
|
2025-02-21T06:49:16Z
|
2025-02-21T08:33:57Z
|
https://github.com/huggingface/transformers/issues/36320
|
[
"Feature request"
] |
smiling621
| 3 |
indico/indico
|
flask
| 6,034 |
Navigation bar vanishes in case of protected event in an otherwise unprotected category for logged in user
|
**Describe the bug / how to reproduce**
Maybe it's a feature, not a bug, but I'll leave it here, as I find it mildly annoying :sweat_smile:
I'm logged in to our Indico instance and can see all the categories and events that I have the permissions to see, and in each category I can also see protected events but can't open them, which is OK.
When I click on an unprotected event in an unprotected category (=unprotected when logged in, i.e. I have the rights to see it) I see the navigation bar at the top (Figure 1). If I now use the "go to previous event" (Older Event) or "go to next event" (Newer Event) this works fine, until I reach a protected event, where I see the "Access denied" message and all navigation vanishes (Figure 2). The only option is to go back in the history to the previously visited event, go up to the category, select the next unprotected event in the direction Older / Newer that I want to go, and then I can use the shortcuts to navigate again - until I encounter the next protected event in the category. Example (after navigating to category `Sub category Foo`:
```
Home page >> Some Category >> Sub category Foo
.
.
.
8. Event on Topic A
7. Event on Topic C
6. Event on Topic B
5. Conveners meeting (protected)
4. Event on Topic A
3. Event on Topic B
.
.
.
```
In this case if I start at event 7 by clicking on it, and proceed to go to `Older event` I reach event 6 - fine. If I click `Older event` again, I get to the protected event, get the `Access denied` message, and all of the navigation is gone. I can't just go to the category, nor just simply click `Older event` again to get to event 4 which would be the ideal case. The only way to continue is to go back in browser history as described above. Same if I start at event 1, 2, 3 and click `Newer Event`.
**Expected behavior**
Ideally, the navigation bar would still be present, even if the event is protected and the `Access denied` message is shown, to allow for easy navigation. It's still clear that I'm not supposed to see the content of that meeting, but I know it is there anyway from the category overview, but at least I can easily go to the older or newer event.
**Screenshots**

Figure 1: Usual navigation bar

Figure 2: For restricted / protected events I don't have the permissions to see
I hope the bug description is clear enough. If this is the desired behaviour, I'm happy to learn about the reasons :slightly_smiling_face:
|
open
|
2023-11-14T16:39:41Z
|
2023-11-14T17:32:07Z
|
https://github.com/indico/indico/issues/6034
|
[
"bug"
] |
chrishanw
| 0 |
plotly/dash
|
dash
| 2,706 |
Scattermapbox cluster says “The layer does not exist in the map’s style…”
|
**Describe your context**
```
dash 2.14.1
dash-auth 2.0.0
dash-bootstrap-components 1.4.1
dash-core-components 2.0.0
dash-extensions 1.0.1
dash-html-components 2.0.0
dash-leaflet 0.1.23
dash-table 5.0.0
plotly 5.18.0
```
**Describe the bug**
Hi,
I’m trying to create a webapp which uses the cluster function within scattermapbox. However, every so often, when loading the webapp, I’m presented with the following console error (which prevents any further interaction with the map):
```
Uncaught (in promise) Error: Mapbox error.
```
followed by multiple errors of the type:
```
Error: The layer 'plotly-trace-layer-4f7f6d-circle' does not exist in the map's style and cannot be queried for features.
```
I’ve created the following minimal example which throws up the same errors (they occur once every ~10 times I reload the webapp making the issue hard to track down). The example creates a list of random points around the world and plots them on a map. The example includes a simple callback to print the location of a point when clicking on it. I’ve tracked the issue down to the use of the cluster option in the “map_data” list (i.e. if I disable the cluster option, the errors no longer appear). From other posts/the documentation, I’m aware that the cluster option is not expected to work with OpenStreetMaps tiles hence the example requires a Mapbox access token.
```python
from dash import Dash, dcc, html
from dash import Input, Output
from random import randint, seed
# -- Fix the randomness
seed(10)
# -- Generate random data
npoints = 100
latitudes = [randint(-90, 90) for i in range(npoints)]
longitudes = [randint(-180, 180) for i in range(npoints)]
colors = ["green" for i in range(npoints)]
# -- Mapbox styles
mapbox_style = "streets"
mapbox_accesstoken = open(".mapbox_token").read().strip()
# -- Set map data
map_data = [
{
"type": "scattermapbox",
"lat": latitudes,
"lon": longitudes,
"mode": "markers",
"marker": {
"size": 15,
"color": colors,
},
"cluster": {
"enabled": True,
"color": "green",
"type": "circle",
"maxzoom": 10,
"size": 25,
"opacity": 0.7,
},
},
]
# -- Set map layout
map_layout = {
"mapbox": {
"style": mapbox_style,
"accesstoken": mapbox_accesstoken,
},
"clickmode": "event",
"margin": {"t": 0, "r": 0, "b": 0, "l": 0},
}
# -- Create div with map and a dummy div for the callback
layout = html.Div(
children=[
dcc.Graph(
id="world-map",
figure={"data": map_data, "layout": map_layout},
config={"displayModeBar": False, "scrollZoom": True},
style={"height": "100vh"},
),
html.Div(id="dummy"),
],
)
# -- Create app
app = Dash(
__name__,
)
app.layout = layout
# -- Simple callback to print click data
@app.callback(
Output("dummy", "children"),
Input("world-map", "clickData"),
prevent_initial_call=True,
)
def print_click(
clickData,
):
lat = clickData["points"][0]["lat"]
lon = clickData["points"][0]["lon"]
print("Clicked on point at lat/lon {}/{}".format(lat, lon))
return None
if __name__ == "__main__":
app.run_server(debug=True, use_reloader=False, host="0.0.0.0", port=8081)
```
I have tested the code on multiple computers with different browsers and they all present the same issue. The full console logs for the errors are:
```
Uncaught (in promise) Error: Mapbox error.
r plotly.min.js:8
fire plotly.min.js:8
fire plotly.min.js:8
queryRenderedFeatures plotly.min.js:8
queryRenderedFeatures plotly.min.js:8
hoverPoints plotly.min.js:8
ht plotly.min.js:8
hover plotly.min.js:8
hover plotly.min.js:8
l plotly.min.js:8
throttle plotly.min.js:8
hover plotly.min.js:8
initFx plotly.min.js:8
fire plotly.min.js:8
mousemove plotly.min.js:8
handleEvent plotly.min.js:8
addEventListener plotly.min.js:8
ki plotly.min.js:8
i plotly.min.js:8
createMap plotly.min.js:8
n plotly.min.js:8
plot plotly.min.js:8
plot plotly.min.js:8
drawData plotly.min.js:8
syncOrAsync plotly.min.js:8
_doPlot plotly.min.js:8
newPlot plotly.min.js:8
react plotly.min.js:8
React 3
commitLifeCycles react-dom@16.v2_14_1m1699425702.14.0.js:19949
commitLayoutEffects react-dom@16.v2_14_1m1699425702.14.0.js:22938
callCallback react-dom@16.v2_14_1m1699425702.14.0.js:182
invokeGuardedCallbackDev react-dom@16.v2_14_1m1699425702.14.0.js:231
invokeGuardedCallback react-dom@16.v2_14_1m1699425702.14.0.js:286
commitRootImpl react-dom@16.v2_14_1m1699425702.14.0.js:22676
unstable_runWithPriority react@16.v2_14_1m1699425702.14.0.js:2685
runWithPriority$1 react-dom@16.v2_14_1m1699425702.14.0.js:11174
commitRoot react-dom@16.v2_14_1m1699425702.14.0.js:22516
finishSyncRender react-dom@16.v2_14_1m1699425702.14.0.js:21942
performSyncWorkOnRoot react-dom@16.v2_14_1m1699425702.14.0.js:21928
flushSyncCallbackQueueImpl react-dom@16.v2_14_1m1699425702.14.0.js:11224
unstable_runWithPriority react@16.v2_14_1m1699425702.14.0.js:2685
runWithPriority$1 react-dom@16.v2_14_1m1699425702.14.0.js:11174
flushSyncCallbackQueueImpl react-dom@16.v2_14_1m1699425702.14.0.js:11219
workLoop react@16.v2_14_1m1699425702.14.0.js:2629
flushWork react@16.v2_14_1m1699425702.14.0.js:2584
performWorkUntilDeadline react@16.v2_14_1m1699425702.14.0.js:2196
EventHandlerNonNull* react@16.v2_14_1m1699425702.14.0.js:2219
<anonymous> react@16.v2_14_1m1699425702.14.0.js:15
<anonymous> react@16.v2_14_1m1699425702.14.0.js:16
```
and
```
Error: The layer 'plotly-trace-layer-4f7f6d-circle' does not exist in the map's style and cannot be queried for features.
queryRenderedFeatures plotly.min.js:8
queryRenderedFeatures plotly.min.js:8
hoverPoints plotly.min.js:8
ht plotly.min.js:8
hover plotly.min.js:8
hover plotly.min.js:8
l plotly.min.js:8
throttle plotly.min.js:8
hover plotly.min.js:8
initFx plotly.min.js:8
fire plotly.min.js:8
mousemove plotly.min.js:8
handleEvent plotly.min.js:8
addEventListener plotly.min.js:8
ki plotly.min.js:8
i plotly.min.js:8
createMap plotly.min.js:8
n plotly.min.js:8
plot plotly.min.js:8
plot plotly.min.js:8
drawData plotly.min.js:8
syncOrAsync plotly.min.js:8
_doPlot plotly.min.js:8
newPlot plotly.min.js:8
react plotly.min.js:8
React 3
commitLifeCycles react-dom@16.v2_14_1m1699425702.14.0.js:19949
commitLayoutEffects react-dom@16.v2_14_1m1699425702.14.0.js:22938
callCallback react-dom@16.v2_14_1m1699425702.14.0.js:182
invokeGuardedCallbackDev react-dom@16.v2_14_1m1699425702.14.0.js:231
invokeGuardedCallback react-dom@16.v2_14_1m1699425702.14.0.js:286
commitRootImpl react-dom@16.v2_14_1m1699425702.14.0.js:22676
unstable_runWithPriority react@16.v2_14_1m1699425702.14.0.js:2685
runWithPriority$1 react-dom@16.v2_14_1m1699425702.14.0.js:11174
commitRoot react-dom@16.v2_14_1m1699425702.14.0.js:22516
finishSyncRender react-dom@16.v2_14_1m1699425702.14.0.js:21942
performSyncWorkOnRoot react-dom@16.v2_14_1m1699425702.14.0.js:21928
flushSyncCallbackQueueImpl react-dom@16.v2_14_1m1699425702.14.0.js:11224
unstable_runWithPriority react@16.v2_14_1m1699425702.14.0.js:2685
runWithPriority$1 react-dom@16.v2_14_1m1699425702.14.0.js:11174
flushSyncCallbackQueueImpl react-dom@16.v2_14_1m1699425702.14.0.js:11219
workLoop react@16.v2_14_1m1699425702.14.0.js:2629
flushWork react@16.v2_14_1m1699425702.14.0.js:2584
performWorkUntilDeadline react@16.v2_14_1m1699425702.14.0.js:2196
EventHandlerNonNull* react@16.v2_14_1m1699425702.14.0.js:2219
<anonymous> react@16.v2_14_1m1699425702.14.0.js:15
<anonymous> react@16.v2_14_1m1699425702.14.0.js:16
plotly.min.js:8:2494743
fire plotly.min.js:8
queryRenderedFeatures plotly.min.js:8
queryRenderedFeatures plotly.min.js:8
hoverPoints plotly.min.js:8
ht plotly.min.js:8
hover plotly.min.js:8
hover plotly.min.js:8
l plotly.min.js:8
throttle plotly.min.js:8
hover plotly.min.js:8
initFx plotly.min.js:8
fire plotly.min.js:8
mousemove plotly.min.js:8
handleEvent plotly.min.js:8
(Async: EventListener.handleEvent)
addEventListener plotly.min.js:8
ki plotly.min.js:8
i plotly.min.js:8
createMap plotly.min.js:8
n plotly.min.js:8
plot plotly.min.js:8
plot plotly.min.js:8
drawData plotly.min.js:8
syncOrAsync plotly.min.js:8
_doPlot plotly.min.js:8
newPlot plotly.min.js:8
react plotly.min.js:8
React 3
commitLifeCycles react-dom@16.v2_14_1m1699425702.14.0.js:19949
commitLayoutEffects react-dom@16.v2_14_1m1699425702.14.0.js:22938
callCallback react-dom@16.v2_14_1m1699425702.14.0.js:182
invokeGuardedCallbackDev react-dom@16.v2_14_1m1699425702.14.0.js:231
invokeGuardedCallback react-dom@16.v2_14_1m1699425702.14.0.js:286
commitRootImpl react-dom@16.v2_14_1m1699425702.14.0.js:22676
unstable_runWithPriority react@16.v2_14_1m1699425702.14.0.js:2685
runWithPriority$1 react-dom@16.v2_14_1m1699425702.14.0.js:11174
commitRoot react-dom@16.v2_14_1m1699425702.14.0.js:22516
finishSyncRender react-dom@16.v2_14_1m1699425702.14.0.js:21942
performSyncWorkOnRoot react-dom@16.v2_14_1m1699425702.14.0.js:21928
flushSyncCallbackQueueImpl react-dom@16.v2_14_1m1699425702.14.0.js:11224
unstable_runWithPriority react@16.v2_14_1m1699425702.14.0.js:2685
runWithPriority$1 react-dom@16.v2_14_1m1699425702.14.0.js:11174
flushSyncCallbackQueueImpl react-dom@16.v2_14_1m1699425702.14.0.js:11219
workLoop react@16.v2_14_1m1699425702.14.0.js:2629
flushWork react@16.v2_14_1m1699425702.14.0.js:2584
performWorkUntilDeadline react@16.v2_14_1m1699425702.14.0.js:2196
(Async: EventHandlerNonNull)
<anonymous> react@16.v2_14_1m1699425702.14.0.js:2219
<anonymous> react@16.v2_14_1m1699425702.14.0.js:15
<anonymous> react@16.v2_14_1m1699425702.14.0.js:16
```
Any help on understanding the source of the issue and a way to remedy it would be greatly appreciated!
[This is a duplicate of [this post](https://community.plotly.com/t/scattermapbox-cluster-bug-the-layer-does-not-exist-in-the-maps-style/80132/1) on the Plotly forum]
|
open
|
2023-12-01T09:05:46Z
|
2024-08-13T19:43:44Z
|
https://github.com/plotly/dash/issues/2706
|
[
"bug",
"P3"
] |
stephenwinn16
| 7 |
onnx/onnx
|
scikit-learn
| 6,008 |
[Feature request] checking an input rank is within a specific range
|
### What is the problem that this feature solves?
Please keep in mind I am new to ONNX. I will be missing context on priorities with the code so this might be useless.
While looking into extending Microsoft's ORT functionality to accept a 5D input for Grid Sampling, I noticed it might be helpful to have shape-inference capabilities to check that an input's rank is within a range, for cases where the input's rank is known ahead of time.
Currently `shape_inference.h` has
```cpp
inline void checkInputRank(InferenceContext& ctx, size_t input_index, int expected_rank) {
// We check the rank only if a rank is known for the input:
if (hasInputShape(ctx, input_index)) {
auto rank = getInputShape(ctx, input_index).dim_size();
if (rank != expected_rank) {
fail_shape_inference("Input ", input_index, " expected to have rank ", expected_rank, " but has rank ", rank);
}
}
}
```
which works for only one rank. But if you want to extend an operator's functionality to work within a certain range of ranks, I believe it would be helpful to have an overload that accepts a range instead.
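Not part of the current ONNX API — just a minimal Python sketch of the logic the proposed overload would implement (the name and signature are hypothetical; the real helper would live in C++ in `shape_inference.h`):

```python
def check_input_rank_in_range(rank, input_index, min_rank, max_rank):
    """Hypothetical range-based analogue of checkInputRank: accept any
    rank in [min_rank, max_rank] instead of a single expected rank."""
    if not (min_rank <= rank <= max_rank):
        raise ValueError(
            f"Input {input_index} expected to have rank in "
            f"[{min_rank}, {max_rank}] but has rank {rank}"
        )

# e.g. a GridSample-like op extended to accept 4-D or 5-D inputs
check_input_rank_in_range(4, 0, 4, 5)  # ok
check_input_rank_in_range(5, 0, 4, 5)  # ok
```

As in the existing helper, the check would only run when a rank is actually known for the input.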
### Alternatives considered
downstream code can use their own implementation by reusing functions like `hasInputShape`, `getInputShape` and `fail_shape_inference`.
### Describe the feature
if it makes sense for the operator to work with different ranks, downstream code will not need to define their own function.
### Will this influence the current api (Y/N)?
no
### Feature Area
shape_inference
### Are you willing to contribute it (Y/N)
Yes
### Notes
I understand this is quite small and insignificant. Figured it was a good entry point to get to contributing to ONNX.
|
closed
|
2024-03-10T21:47:38Z
|
2024-03-12T21:06:28Z
|
https://github.com/onnx/onnx/issues/6008
|
[
"topic: enhancement",
"module: shape inference"
] |
ZelboK
| 6 |
noirbizarre/flask-restplus
|
flask
| 546 |
400 error in Swagger when using POST/PUT through reqparse
|
Hey all,
While testing out PUT/POST requests using reqparse through the Swagger UI (using _**Try it Out!**_), my application will throw a 400 error with the following message:
`{
"message": "The browser (or proxy) sent a request that this server could not understand."
}`
The same call succeeds when submitted through Postman, however. There is no stacktrace for the error. Also note that this issue only arises when passing the reqparse parser through @api.expect().
I can successfully pass a model through without any error when calling the API in Swagger. However, I need the option to pass things like choices etc. for the user.
I'm using Flask-restplus v 0.10.0 and Python v 3.6. My SQL is handled through pyodbc 4.0.23.
Here is the code I use for setting up the reqparse parser:
```python
parser = reqparse.RequestParser()
parser.add_argument("alternateNameId", type=int, required=False)
parser.add_argument("alternateName", type=str, required=True)
parser.add_argument("isColloquial", type=bool, required=True, default='False')
parser.add_argument("isSearchTerm", type=bool, required=True)
```
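As an aside (and not necessarily the cause of the 400): `type=bool` in the snippet above is a well-known reqparse gotcha, because Python's `bool()` treats any non-empty string as truthy, so a client sending the string `"False"` gets `True`. A minimal stdlib illustration, with a hypothetical helper name for the usual workaround:

```python
# bool() does not parse string form/query values:
print(bool("False"))  # True  (any non-empty string is truthy)
print(bool(""))       # False

# a small explicit parser is the usual workaround (name is illustrative):
def str_to_bool(value):
    if isinstance(value, bool):
        return value
    return str(value).strip().lower() in ("true", "1", "yes")

print(str_to_bool("False"))  # False
print(str_to_bool("true"))   # True
```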
and then it's called through the @api.expect decorator as follows:
```python
@api.route('/<int:diseaseId>/AlternateName', methods=['PUT'])
class AlternateName(Resource):
@api.doc(model=altNameModel, id='put_alternatename', responses={201: 'Success', 400: 'Validation Error'})
@api.expect(parser)
@auth.requires_auth
def put(self, diseaseId):
```
And here are screenshots of the swagger UI:


I have seen similar issues logged but nothing quite addressing the fact that the operation only fails through the swagger UI and a GET request operates as normal.
Has anyone seen this behavior before or understand how to mitigate it? My users would be using swagger as their main UI to access the endpoint.
|
open
|
2018-10-29T13:36:22Z
|
2018-10-29T13:36:22Z
|
https://github.com/noirbizarre/flask-restplus/issues/546
|
[] |
SonyaKaramchandani
| 0 |
blacklanternsecurity/bbot
|
automation
| 2,171 |
stats not attributing URLs to discovering modules
|
As an example - ffuf_shortnames discovers URL_UNVERIFIED events which are not tracked in stats, but are then checked by httpx, and some will become URL events. But despite the fact that ffuf_shortnames discovered them, it does not get attributed with the URL.
Expected behavior: when HTTPX finds a URL, the stats should be attributed to the module that supplied the URL_UNVERIFIED event rather than HTTPX itself, falling back to HTTPX if there isn't one.
This should apply to ffuf and excavate as well. In the case of excavate, I think it is much more useful to know a URL came from excavate than to have everything attributed to httpx.
|
open
|
2025-01-14T12:58:26Z
|
2025-01-14T12:58:27Z
|
https://github.com/blacklanternsecurity/bbot/issues/2171
|
[
"bug",
"low priority"
] |
liquidsec
| 0 |
keras-team/keras
|
pytorch
| 20,314 |
Keras fails to load TextVectorization layer from .keras file
|
When downloading a model I trained on Kaggle using the `.keras` format, it fails to load on my machine. I believe it is a codec error, because the TextVectorization layer uses the `utf-8` encoding but the error message appears to be using the `charmap` codec in Python. This is all just speculation though.
```
ValueError: A total of 2 objects could not be loaded. Example error message for object <TextVectorization name=text_vectorization, built=True>:
'charmap' codec can't decode byte 0x8d in position 8946: character maps to <undefined>
List of objects that could not be loaded:
[<TextVectorization name=text_vectorization, built=True>, <StringLookup name=string_lookup, built=False>]
```
In the notebook it was trained in, it loaded perfectly, so I don't understand why this fails to work.
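The codec speculation is easy to reproduce with the stdlib alone: byte 0x8d can appear inside a perfectly valid UTF-8 sequence, but it is undefined in cp1252, the Windows codec behind the `'charmap'` error message. (Whether this is the actual load path inside Keras is an assumption — the demo only shows the two codecs disagreeing on the same bytes.)

```python
data = "č".encode("utf-8")   # b'\xc4\x8d' -- contains byte 0x8d
print(data.decode("utf-8"))  # decodes fine as UTF-8

try:
    data.decode("cp1252")    # the Windows 'charmap' codec
except UnicodeDecodeError as err:
    # 'charmap' codec can't decode byte 0x8d ... character maps to <undefined>
    print(err)
```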
My Machine:
python version 3.10.5
```
Name Version Build Channel
_tflow_select 2.3.0 mkl
abseil-cpp 20211102.0 hd77b12b_0
absl-py 2.1.0 py310haa95532_0
aext-assistant 4.0.15 py310haa95532_jl4_0
aext-assistant-server 4.0.15 py310haa95532_0
aext-core 4.0.15 py310haa95532_jl4_0
aext-core-server 4.0.15 py310haa95532_1
aext-panels 4.0.15 py310haa95532_0
aext-panels-server 4.0.15 py310haa95532_0
aext-share-notebook 4.0.15 py310haa95532_0
aext-share-notebook-server 4.0.15 py310haa95532_0
aext-shared 4.0.15 py310haa95532_0
aiohappyeyeballs 2.4.0 py310haa95532_0
aiohttp 3.10.5 py310h827c3e9_0
aiosignal 1.2.0 pyhd3eb1b0_0
anaconda-cloud-auth 0.5.1 py310haa95532_0
anaconda-toolbox 4.0.15 py310haa95532_0
annotated-types 0.6.0 py310haa95532_0
anyio 4.2.0 py310haa95532_0
argon2-cffi 21.3.0 pyhd3eb1b0_0
argon2-cffi-bindings 21.2.0 py310h2bbff1b_0
asttokens 2.0.5 pyhd3eb1b0_0
astunparse 1.6.3 py_0
async-lru 2.0.4 py310haa95532_0
async-timeout 4.0.3 py310haa95532_0
attrs 23.1.0 py310haa95532_0
babel 2.11.0 py310haa95532_0
beautifulsoup4 4.12.3 py310haa95532_0
blas 1.0 mkl
bleach 4.1.0 pyhd3eb1b0_0
blinker 1.6.2 py310haa95532_0
brotli-python 1.0.9 py310hd77b12b_8
bzip2 1.0.8 h2bbff1b_6
c-ares 1.19.1 h2bbff1b_0
ca-certificates 2024.9.24 haa95532_0
cachetools 5.3.3 py310haa95532_0
certifi 2024.8.30 py310haa95532_0
cffi 1.17.1 py310h827c3e9_0
charset-normalizer 3.3.2 pyhd3eb1b0_0
click 8.1.7 py310haa95532_0
colorama 0.4.6 py310haa95532_0
comm 0.2.1 py310haa95532_0
cryptography 41.0.3 py310h3438e0d_0
debugpy 1.6.7 py310hd77b12b_0
decorator 5.1.1 pyhd3eb1b0_0
defusedxml 0.7.1 pyhd3eb1b0_0
exceptiongroup 1.2.0 py310haa95532_0
executing 0.8.3 pyhd3eb1b0_0
flatbuffers 2.0.0 h6c2663c_0
frozenlist 1.4.0 py310h2bbff1b_0
gast 0.4.0 pyhd3eb1b0_0
giflib 5.2.1 h8cc25b3_3
google-auth 2.29.0 py310haa95532_0
google-auth-oauthlib 0.4.4 pyhd3eb1b0_0
google-pasta 0.2.0 pyhd3eb1b0_0
grpc-cpp 1.48.2 hf108199_0
grpcio 1.48.2 py310hf108199_0
h11 0.14.0 py310haa95532_0
h5py 3.11.0 py310hed405ee_0
hdf5 1.12.1 h51c971a_3
httpcore 1.0.2 py310haa95532_0
httpx 0.27.0 py310haa95532_0
icc_rt 2022.1.0 h6049295_2
icu 58.2 ha925a31_3
idna 3.7 py310haa95532_0
importlib-metadata 7.0.1 py310haa95532_0
importlib_metadata 7.0.1 hd3eb1b0_0
intel-openmp 2023.1.0 h59b6b97_46320
ipykernel 6.28.0 py310haa95532_0
ipython 8.27.0 py310haa95532_0
jaraco.classes 3.2.1 pyhd3eb1b0_0
jedi 0.19.1 py310haa95532_0
jinja2 3.1.4 py310haa95532_0
jpeg 9e h827c3e9_3
json5 0.9.6 pyhd3eb1b0_0
jsonschema 4.19.2 py310haa95532_0
jsonschema-specifications 2023.7.1 py310haa95532_0
jupyter-lsp 2.2.0 py310haa95532_0
jupyter_client 8.6.0 py310haa95532_0
jupyter_core 5.7.2 py310haa95532_0
jupyter_events 0.10.0 py310haa95532_0
jupyter_server 2.14.1 py310haa95532_0
jupyter_server_terminals 0.4.4 py310haa95532_1
jupyterlab 4.2.5 py310haa95532_0
jupyterlab_pygments 0.1.2 py_0
jupyterlab_server 2.27.3 py310haa95532_0
keras 2.10.0 py310haa95532_0
keras-preprocessing 1.1.2 pyhd3eb1b0_0
keyring 24.3.1 py310haa95532_0
libcurl 8.9.1 h0416ee5_0
libffi 3.4.4 hd77b12b_1
libpng 1.6.39 h8cc25b3_0
libprotobuf 3.20.3 h23ce68f_0
libsodium 1.0.18 h62dcd97_0
libssh2 1.10.0 hcd4344a_2
markdown 3.4.1 py310haa95532_0
markupsafe 2.1.3 py310h2bbff1b_0
matplotlib-inline 0.1.6 py310haa95532_0
mistune 2.0.4 py310haa95532_0
mkl 2023.1.0 h6b88ed4_46358
mkl-service 2.4.0 py310h2bbff1b_1
mkl_fft 1.3.10 py310h827c3e9_0
mkl_random 1.2.7 py310hc64d2fc_0
more-itertools 10.3.0 py310haa95532_0
multidict 6.0.4 py310h2bbff1b_0
nbclient 0.8.0 py310haa95532_0
nbconvert 7.10.0 py310haa95532_0
nbformat 5.9.2 py310haa95532_0
nest-asyncio 1.6.0 py310haa95532_0
notebook 7.2.2 py310haa95532_0
notebook-shim 0.2.3 py310haa95532_0
numpy 1.26.4 py310h055cbcc_0
numpy-base 1.26.4 py310h65a83cf_0
oauthlib 3.2.2 py310haa95532_0
openssl 1.1.1w h2bbff1b_0
opt_einsum 3.3.0 pyhd3eb1b0_1
overrides 7.4.0 py310haa95532_0
packaging 24.1 py310haa95532_0
pandocfilters 1.5.0 pyhd3eb1b0_0
parso 0.8.3 pyhd3eb1b0_0
pip 24.2 py310haa95532_0
pkce 1.0.3 py310haa95532_0
platformdirs 3.10.0 py310haa95532_0
prometheus_client 0.14.1 py310haa95532_0
prompt-toolkit 3.0.43 py310haa95532_0
prompt_toolkit 3.0.43 hd3eb1b0_0
protobuf 3.20.3 py310hd77b12b_0
psutil 5.9.0 py310h2bbff1b_0
pure_eval 0.2.2 pyhd3eb1b0_0
pyasn1 0.4.8 pyhd3eb1b0_0
pyasn1-modules 0.2.8 py_0
pybind11-abi 5 hd3eb1b0_0
pycparser 2.21 pyhd3eb1b0_0
pydantic 2.8.2 py310haa95532_0
pydantic-core 2.20.1 py310hefb1915_0
pygments 2.15.1 py310haa95532_1
pyjwt 2.8.0 py310haa95532_0
pyopenssl 23.2.0 py310haa95532_0
pysocks 1.7.1 py310haa95532_0
python 3.10.13 h966fe2a_0
python-dateutil 2.9.0post0 py310haa95532_2
python-dotenv 0.21.0 py310haa95532_0
python-fastjsonschema 2.16.2 py310haa95532_0
python-flatbuffers 24.3.25 py310haa95532_0
python-json-logger 2.0.7 py310haa95532_0
pytz 2024.1 py310haa95532_0
pywin32 305 py310h2bbff1b_0
pywin32-ctypes 0.2.2 py310haa95532_0
pywinpty 2.0.10 py310h5da7b33_0
pyyaml 6.0.1 py310h2bbff1b_0
pyzmq 25.1.2 py310hd77b12b_0
re2 2022.04.01 hd77b12b_0
referencing 0.30.2 py310haa95532_0
requests 2.32.3 py310haa95532_0
requests-oauthlib 2.0.0 py310haa95532_0
rfc3339-validator 0.1.4 py310haa95532_0
rfc3986-validator 0.1.1 py310haa95532_0
rpds-py 0.10.6 py310h062c2fa_0
rsa 4.7.2 pyhd3eb1b0_1
scipy 1.13.1 py310h8640f81_0
semver 3.0.2 py310haa95532_0
send2trash 1.8.2 py310haa95532_0
setuptools 75.1.0 py310haa95532_0
six 1.16.0 pyhd3eb1b0_1
snappy 1.2.1 hcdb6601_0
sniffio 1.3.0 py310haa95532_0
soupsieve 2.5 py310haa95532_0
sqlite 3.45.3 h2bbff1b_0
stack_data 0.2.0 pyhd3eb1b0_0
tbb 2021.8.0 h59b6b97_0
tensorboard 2.10.0 py310haa95532_0
tensorboard-data-server 0.6.1 py310haa95532_0
tensorboard-plugin-wit 1.8.1 py310haa95532_0
tensorflow 2.10.0 mkl_py310hd99672f_0
tensorflow-base 2.10.0 mkl_py310h6a7f48e_0
tensorflow-estimator 2.10.0 py310haa95532_0
termcolor 2.1.0 py310haa95532_0
terminado 0.17.1 py310haa95532_0
tinycss2 1.2.1 py310haa95532_0
tk 8.6.14 h0416ee5_0
tomli 2.0.1 py310haa95532_0
tornado 6.4.1 py310h827c3e9_0
traitlets 5.14.3 py310haa95532_0
typing-extensions 4.11.0 py310haa95532_0
typing_extensions 4.11.0 py310haa95532_0
tzdata 2024a h04d1e81_0
urllib3 2.2.3 py310haa95532_0
vc 14.40 h2eaa2aa_1
vs2015_runtime 14.40.33807 h98bb1dd_1
wcwidth 0.2.5 pyhd3eb1b0_0
webencodings 0.5.1 py310haa95532_1
websocket-client 1.8.0 py310haa95532_0
werkzeug 3.0.3 py310haa95532_0
wheel 0.44.0 py310haa95532_0
win_inet_pton 1.1.0 py310haa95532_0
winpty 0.4.3 4
wrapt 1.14.1 py310h2bbff1b_0
xz 5.4.6 h8cc25b3_1
yaml 0.2.5 he774522_0
yarl 1.11.0 py310h827c3e9_0
zeromq 4.3.5 hd77b12b_0
zipp 3.17.0 py310haa95532_0
zlib 1.2.13 h8cc25b3_1
```
On kaggle I used the 2024-8-21 [docker container ](https://github.com/Kaggle/docker-python/releases/tag/5439620d9e9d1944f6c7ed0711374b2f8a603e27bdda6f44b3a207c225454d7b)
|
closed
|
2024-10-01T22:48:18Z
|
2024-11-14T02:01:56Z
|
https://github.com/keras-team/keras/issues/20314
|
[
"stat:awaiting response from contributor",
"stale",
"type:Bug"
] |
harsha7addanki
| 4 |
matplotlib/matplotlib
|
matplotlib
| 29,047 |
[ENH]: Registering custom markers
|
### Problem
While working on a library to make styles (with custom colors, etc.) I discovered that there is no easy way to register custom markers, unlike for colors and the like.
I found a workaround digging in `markers.py`:
```python
from matplotlib.markers import MarkerStyle
...
MarkerStyle.markers[marker_name] = marker_name
setattr(MarkerStyle, f'_set_{marker_name}', lambda self, path=marker_path: self._set_custom_marker(path))
```
which seems to work, and allows using the new marker in other files as
```python
plt.plot(x, y, marker = marker_name)
```
However, this code is quite clumsy and inelegant!
### Proposed solution
It would be nice to have a way to specify
```python
MarkerStyle.register(marker_name, marker_path)
```
similarly to [how it is done for colormaps](https://matplotlib.org/stable/api/cm_api.html).
This would be pretty easy because it could leverage internally `MarkerStyle._set_custom_marker`, which already implements most of the necessary functionality!
If this is welcome, I would be happy to have a go and submit a PR!
I have found that this is quite nice to drive up the engagement of students to be able to easily play with visuals in this way :)
|
open
|
2024-10-31T01:01:10Z
|
2024-10-31T01:01:10Z
|
https://github.com/matplotlib/matplotlib/issues/29047
|
[
"New feature"
] |
LorenzoPeri17
| 0 |
Miserlou/Zappa
|
django
| 1,286 |
Pillow (4.3.0) for manylinux1 is not packaged, instead zappa packages Pillow for Windows 64-bit
|
This is almost related to #398 / #841 , but instead no Pillow is packaged at all.
## Context
Python 3.6 on Windows (Anaconda)
## Expected Behavior
Pillow 4.3.0 is packaged. It seems that lambda-packages doesn't have Pillow 4.3.0 yet, only 3.4.2 (https://github.com/Miserlou/lambda-packages/tree/master/lambda_packages/Pillow); however, there is a [manylinux wheel](https://pypi.python.org/pypi/Pillow/4.3.0): Pillow-4.3.0-cp36-cp36m-manylinux1_x86_64.whl which should be usable, right?
## Actual Behavior
Pillow 4.3.0 is not packaged, and instead zappa uses PIL for Windows 64-bit:

Pillow-4.3.0.dist-info exists:

`WHEEL` contains:
```
Wheel-Version: 1.0
Generator: bdist_wheel (0.30.0)
Root-Is-Purelib: false
Tag: cp36-cp36m-win_amd64
```
## Possible Fix
Patch the zip and use the manylinux wheel manually?
## Steps to Reproduce
On Windows 64-bit:
```
pip install Pillow
```
## Your Environment
* Zappa version used: zappa==0.45.1
* Operating System and Python version: Windows 10 64-bit, Python 3.6
* The output of `pip freeze`:
```
argcomplete==1.9.2
Babel==2.5.1
base58==0.2.4
boto==2.48.0
boto3==1.4.8
botocore==1.8.11
cachetools==2.0.1
certifi==2017.11.5
cfn-flip==0.2.5
chardet==3.0.4
click==6.7
decorator==4.1.2
Django==2.0
django-appconf==1.0.2
django-imagekit==4.0.2
django-ipware==1.1.6
django-nine==0.1.13
django-phonenumber-field==1.3.0
django-qartez==0.7.1
django-s3-storage==0.12.1
docutils==0.14
durationpy==0.5
future==0.16.0
google-auth==1.2.1
hjson==3.0.1
httplib2==0.10.3
idna==2.6
jmespath==0.9.3
kappa==0.6.0
lambda-packages==0.19.0
oauth2client==4.1.2
olefile==0.44
phonenumberslite==8.8.5
pilkit==2.0
Pillow==4.3.0
placebo==0.8.1
psycopg2==2.7.3.2
pyasn1==0.4.2
pyasn1-modules==0.2.1
python-dateutil==2.6.1
python-slugify==1.2.4
pytz==2017.3
PyYAML==3.12
ratelim==0.1.6
requests==2.18.4
rsa==3.4.2
s3transfer==0.1.12
six==1.11.0
toml==0.9.3
tqdm==4.19.1
troposphere==2.1.2
Unidecode==0.4.21
uritemplate==3.0.0
urllib3==1.22
Werkzeug==0.12
wsgi-request-logger==0.4.6
zappa==0.45.1
```
* Your `zappa_settings.py`: (Note: this should be `zappa_settings.json`, perhaps you want to change the template?)
```
{
"prd": {
"aws_region": "us-east-1",
"django_settings": "samaraweb.settings",
"profile_name": "default",
"project_name": "samaraweb",
"runtime": "python3.6",
"s3_bucket": "samaraedu-code",
"domain": "keluargasamara.com",
"certificate_arn": "arn:aws:acm:us-east-1:703881650703:certificate/a5683018-90ee-4e47-b59b-bc0d147ed174",
"route53_enabled": false,
"exclude": ["snapshot"]
}
}
```
|
open
|
2017-12-10T10:44:39Z
|
2020-05-22T05:11:26Z
|
https://github.com/Miserlou/Zappa/issues/1286
|
[] |
ceefour
| 7 |
graphql-python/graphene-sqlalchemy
|
graphql
| 195 |
Development and Maintance of this package
|
Hey, it seems to me that this package lacks people to maintain and develop it.
I come to this conclusion because many issues go unanswered and pull requests unmerged.
What can we do about it? Who is willing to actively contribute in any way?
Are the current Maintainers willing to give some level of access to those people or should we gather around a fork?
|
closed
|
2019-04-01T15:25:24Z
|
2023-02-25T06:58:22Z
|
https://github.com/graphql-python/graphene-sqlalchemy/issues/195
|
[
"question"
] |
brasilikum
| 3 |
nvbn/thefuck
|
python
| 646 |
No module named 'thefuck'
|
When I install using the following commands, the terminal says:
File "/home/test/.local/bin/fuck", line 7, in <module>
from thefuck.not_configured import main
ImportError: No module named 'thefuck'
I don't know what to do next, so I created this issue.
OS:elementary os 0.4
using bash
|
open
|
2017-05-07T07:16:34Z
|
2023-11-29T08:40:58Z
|
https://github.com/nvbn/thefuck/issues/646
|
[] |
JamesLiAndroid
| 6 |
ghtmtt/DataPlotly
|
plotly
| 42 |
Update selection when already a selection is made
|
If the plot is made with the `selected features` checkbox, the expression (and the selection) is correct, but it loops over **all** the rows of the attribute table, not just the feature subset.
Handling this is quite tricky.
|
closed
|
2017-09-04T13:57:08Z
|
2019-10-22T06:53:55Z
|
https://github.com/ghtmtt/DataPlotly/issues/42
|
[
"bug",
"enhancement"
] |
ghtmtt
| 0 |
ageitgey/face_recognition
|
python
| 626 |
Properties of images for the best result
|
* face_recognition version:
* Python version:
* Operating System:
### Description
Using images to train the Model with face_recognition
### Query
What properties (e.g. image size, resolution) should all the images share so that face_recognition gives the best results?
|
open
|
2018-09-21T01:56:55Z
|
2022-09-19T03:54:50Z
|
https://github.com/ageitgey/face_recognition/issues/626
|
[] |
akhilgupta0221
| 6 |
piskvorky/gensim
|
machine-learning
| 3,266 |
Incorrect CBOW implementation in Gensim leads to inferior performance
|
#### Problem description
According to this article https://aclanthology.org/2021.insights-1.1.pdf:
<img width="636" alt="Screen Shot 2021-11-09 at 15 47 21" src="https://user-images.githubusercontent.com/610412/140945923-7d279468-a9e9-41b4-b7c2-919919832bc5.png">
#### Steps/code/corpus to reproduce
I haven't tried to verify / reproduce. Gensim's goal is to follow the original C implementation faithfully, which it does. So this is not a bug per se, more a question of whether / how much we want to deviate from the reference implementation. I'm in favour if the result is unambiguously better (more accurate, faster, no downsides).
#### Versions
All versions since the beginning of word2vec in Gensim.
|
closed
|
2021-11-09T14:51:57Z
|
2021-11-15T17:36:33Z
|
https://github.com/piskvorky/gensim/issues/3266
|
[
"bug",
"difficulty medium",
"reach MEDIUM",
"impact LOW"
] |
piskvorky
| 3 |
dsdanielpark/Bard-API
|
api
| 100 |
PaLM API Example
|
I am an Android developer. I tried to find the Bard API, and after a long time I found the PaLM API.
Here it is:
|
closed
|
2023-07-13T08:54:59Z
|
2024-03-05T08:21:54Z
|
https://github.com/dsdanielpark/Bard-API/issues/100
|
[] |
shakeel143
| 10 |
python-gino/gino
|
asyncio
| 532 |
GINO don't released the connection after exception in Starlette extension
|
* GINO version: 0.8.3
* Python version: 3.7.4
* asyncpg version: 0.18.3
* aiocontextvars version: 0.2.2
* PostgreSQL version: 11.3
* FastAPI version: 0.36.0
* Starlette version: 0.12.7
* uvicorn version: 0.8.6
* uvloop version: 0.12.2
### Description
I use GINO with FastAPI + uvicorn. In development mode I use uvicorn's autoreload, which works well, but if an exception is raised in an endpoint where I use GINO, GINO prevents the application from shutting down.
### What I Did
For example i have endpoint like this:
```python
@router.get('users/{user_id}', tags=['Users'], response_model=UserSchema)
async def retrieve_user(user_id: int):
user: User = await User.get(user_id)
return UserSchema.from_orm(user)
```
Now go to our server and try to get a user with a nonexistent ID (http://localhost:8000/users/1818456489489456). Oh no, we got "Internal Server Error". Well, let's fix it:
```python
@router.get('users/{user_id}', tags=['Users'], response_model=UserSchema)
async def retrieve_user(user_id: int):
user: User = await User.get(user_id)
if user:
return UserSchema.from_orm(user)
else:
raise HTTPException(status_code=404, detail="User with this ID not found")
```
Let's test it again. But wait, the server isn't responding. OK, let's look at the logs:
```
WARNING: Detected file change in 'api/v1/users.py'. Reloading...
INFO: Shutting down
INFO: Waiting for application shutdown.
*** minute wait ***
WARNING: Pool.close() is taking over 60 seconds to complete. Check if you have any unreleased connections left. Use asyncio.wait_for() to set a timeout for Pool.close().
```
Only a manual "hard" restart of the server helps.
### What i suggest
After a bit of research I think I found a bug (?). After an exception is raised in an endpoint, GINO's Starlette strategy (I haven't checked the implementations for other frameworks) doesn't release the connection. I added a try-finally block to the `_Middleware` class in `gino.ext.starlette` (inspired by [this](https://python-gino.readthedocs.io/en/latest/gino.engine.html#gino.engine.GinoEngine.acquire)).
This code:
```python
async def __call__(self, scope: Scope, receive: Receive,
send: Send) -> None:
if (scope['type'] == 'http' and
self.db.config['use_connection_for_request']):
scope['connection'] = await self.db.acquire(lazy=True)
await self.app(scope, receive, send)
conn = scope.pop('connection', None)
if conn is not None:
await conn.release()
return
```
I edited like this:
```python
async def __call__(self, scope: Scope, receive: Receive,
send: Send) -> None:
if (scope['type'] == 'http' and
self.db.config['use_connection_for_request']):
scope['connection'] = await self.db.acquire(lazy=True)
try:
await self.app(scope, receive, send)
finally:
conn = scope.pop('connection', None)
if conn is not None:
await conn.release()
return
```
and after that everything works great.
I am just starting to dive into the world of asynchronous Python, so I'm not sure whether this is a bug, and I'm not sure whether my change fixes it completely.
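The effect of the try/finally can be exercised without GINO at all — a stdlib-only sketch, with a hypothetical `FakePool` standing in for the engine, showing why the connection count drops back to zero even when the handler raises:

```python
import asyncio

class FakePool:
    """Hypothetical stand-in for the DB pool, counting open connections."""
    def __init__(self):
        self.open = 0
    async def acquire(self):
        self.open += 1
        return self
    async def release(self):
        self.open -= 1

async def handle_request(pool, app):
    conn = await pool.acquire()
    try:
        await app(conn)       # may raise, like an endpoint raising HTTPException
    finally:
        await conn.release()  # runs even on error, so Pool.close() can finish

async def main():
    pool = FakePool()

    async def failing_app(conn):
        raise RuntimeError("boom")

    try:
        await handle_request(pool, failing_app)
    except RuntimeError:
        pass
    return pool.open

print(asyncio.run(main()))  # 0 -- no leaked connection despite the exception
```

Without the try/finally, the release line is skipped on an exception and `pool.open` stays at 1 — exactly the unreleased connection that keeps `Pool.close()` waiting.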
|
closed
|
2019-08-27T21:54:27Z
|
2019-08-28T13:33:17Z
|
https://github.com/python-gino/gino/issues/532
|
[
"bug"
] |
qulaz
| 5 |
tableau/server-client-python
|
rest-api
| 1,472 |
Can we convert this in to ServerResponseError.from_response exception instead of NonXMLResponseError
|
It would be helpful if ServerResponseError.from_response were raised on line 173 instead of NonXMLResponseError.
https://github.com/tableau/server-client-python/blob/4259316ef2e2656531b0c65c71d043708b37b4a9/tableauserverclient/server/endpoint/endpoint.py#L173
|
closed
|
2024-09-22T07:31:55Z
|
2024-10-25T23:35:00Z
|
https://github.com/tableau/server-client-python/issues/1472
|
[] |
hprasad-tls
| 7 |
marcomusy/vedo
|
numpy
| 858 |
Fill in empty space in open mesh
|

Hi, I have an open mesh and want to fill some of the empty space.
So I try to create points for the empty space, and use reconstruct_surface() to create a mesh that fills it.
I want to get points for the empty space through plane slicing (intersect_with_plane()) and create a spline.
The result is similar to the image below.

Each line was recognized individually and there was no consistent ordering or direction, making it impossible to fill the empty space with the splines.

Can we order multiple lines made through intersect_with_plane(), like the image above?
Or is there any other way to fill in the empty space?
|
closed
|
2023-05-09T07:06:58Z
|
2023-05-10T23:46:07Z
|
https://github.com/marcomusy/vedo/issues/858
|
[] |
HyungJoo-Kwon
| 4 |
wiseodd/generative-models
|
tensorflow
| 52 |
what is h_dim in vanilla VAE implementation
|
I tried the VAE implementation but did not understand the algorithm, so I searched for implementations on GitHub and found yours. The problem I am facing with your implementation is understanding two things: first, what exactly is h_dim, and how is its value decided?
Thanks in advance
|
closed
|
2018-03-18T18:18:36Z
|
2018-03-20T07:33:48Z
|
https://github.com/wiseodd/generative-models/issues/52
|
[] |
R1j1t
| 1 |
0xTheProDev/fastapi-clean-example
|
graphql
| 3 |
Ask : Nestjs Architecture
|
Hi @Progyan1997 , first of all thanks for sharing this Project. 🙏🏻
I am used to NestJS, and it was mind-blowing to finally find a modern Python project structured in a way similar to NestJS.
By the way, is it just me, or is it kind of inspired by the NestJS project structure?
|
closed
|
2022-06-08T13:22:35Z
|
2022-07-30T18:29:36Z
|
https://github.com/0xTheProDev/fastapi-clean-example/issues/3
|
[] |
ejabu
| 2 |
nolar/kopf
|
asyncio
| 138 |
Travis CI fails for contributor PRs
|
> <a href="https://github.com/dlmiddlecote"><img align="left" height="50" src="https://avatars0.githubusercontent.com/u/9053880?v=4"></a> An issue by [dlmiddlecote](https://github.com/dlmiddlecote) at _2019-07-09 23:00:41+00:00_
> Original URL: https://github.com/zalando-incubator/kopf/issues/138
>
## Expected Behavior
Build passes if it should, i.e. if all tests pass.
## Actual Behavior
Tests pass but build fails because the coveralls command fails, see [here](https://travis-ci.org/dlmiddlecote/kopf/jobs/556541531).
### Side Note
Tags also build in forks, which could lead to versions of the library being uploaded to PyPI.
---
> <a href="https://github.com/dlmiddlecote"><img align="left" height="30" src="https://avatars0.githubusercontent.com/u/9053880?v=4"></a> Commented by [dlmiddlecote](https://github.com/dlmiddlecote) at _2019-07-14 14:21:20+00:00_
>
Solution to this is to turn on coveralls support for kopf fork repo.
|
closed
|
2020-08-18T19:57:11Z
|
2020-08-23T20:47:17Z
|
https://github.com/nolar/kopf/issues/138
|
[
"archive",
"automation"
] |
kopf-archiver[bot]
| 0 |
supabase/supabase-py
|
fastapi
| 51 |
unicode issues
|
When I follow the example to retrieve data, I'm greeted with the following stacktrace:
```
In [4]: supabase.table("countries").select("*").execute()
---------------------------------------------------------------------------
UnicodeEncodeError Traceback (most recent call last)
<ipython-input-4-91499f52c962> in <module>
----> 1 supabase.table("countries").select("*").execute()
/usr/lib/python3.8/site-packages/supabase_py/client.py in table(self, table_name)
72 Alternatively you can use the `._from()` method.
73 """
---> 74 return self.from_(table_name)
75
76 def from_(self, table_name: str) -> SupabaseQueryBuilder:
/usr/lib/python3.8/site-packages/supabase_py/client.py in from_(self, table_name)
79 See the `table` method.
80 """
---> 81 query_builder = SupabaseQueryBuilder(
82 url=f"{self.rest_url}/{table_name}",
83 headers=self._get_auth_headers(),
/usr/lib/python3.8/site-packages/supabase_py/lib/query_builder.py in __init__(self, url, headers, schema, realtime, table)
71 **headers,
72 }
---> 73 self.session = AsyncClient(base_url=url, headers=headers)
74 # self._subscription = SupabaseRealtimeClient(realtime, schema, table)
75 # self._realtime = realtime
/usr/lib/python3.8/site-packages/httpx/_client.py in __init__(self, auth, params, headers, cookies, verify, cert, http2, proxies, timeout, limits, pool_limits, max_redirects, event_hooks, base_url, transport, app, trust_env)
1209 trust_env: bool = True,
1210 ):
-> 1211 super().__init__(
1212 auth=auth,
1213 params=params,
/usr/lib/python3.8/site-packages/httpx/_client.py in __init__(self, auth, params, headers, cookies, timeout, max_redirects, event_hooks, base_url, trust_env)
98 self._auth = self._build_auth(auth)
99 self._params = QueryParams(params)
--> 100 self.headers = Headers(headers)
101 self._cookies = Cookies(cookies)
102 self._timeout = Timeout(timeout)
/usr/lib/python3.8/site-packages/httpx/_models.py in __init__(self, headers, encoding)
549 self._list = list(headers._list)
550 elif isinstance(headers, dict):
--> 551 self._list = [
552 (
553 normalize_header_key(k, lower=False, encoding=encoding),
/usr/lib/python3.8/site-packages/httpx/_models.py in <listcomp>(.0)
553 normalize_header_key(k, lower=False, encoding=encoding),
554 normalize_header_key(k, lower=True, encoding=encoding),
--> 555 normalize_header_value(v, encoding),
556 )
557 for k, v in headers.items()
/usr/lib/python3.8/site-packages/httpx/_utils.py in normalize_header_value(value, encoding)
54 if isinstance(value, bytes):
55 return value
---> 56 return value.encode(encoding or "ascii")
57
58
UnicodeEncodeError: 'ascii' codec can't encode character '\u2026' in position 50: ordinal not in range(128)
In [5]: data = supabase.table("countries").select("*").execute()
---------------------------------------------------------------------------
UnicodeEncodeError Traceback (most recent call last)
<ipython-input-5-a2ce57b52ae2> in <module>
----> 1 data = supabase.table("countries").select("*").execute()
/usr/lib/python3.8/site-packages/supabase_py/client.py in table(self, table_name)
72 Alternatively you can use the `._from()` method.
73 """
---> 74 return self.from_(table_name)
75
76 def from_(self, table_name: str) -> SupabaseQueryBuilder:
/usr/lib/python3.8/site-packages/supabase_py/client.py in from_(self, table_name)
79 See the `table` method.
80 """
---> 81 query_builder = SupabaseQueryBuilder(
82 url=f"{self.rest_url}/{table_name}",
83 headers=self._get_auth_headers(),
/usr/lib/python3.8/site-packages/supabase_py/lib/query_builder.py in __init__(self, url, headers, schema, realtime, table)
71 **headers,
72 }
---> 73 self.session = AsyncClient(base_url=url, headers=headers)
74 # self._subscription = SupabaseRealtimeClient(realtime, schema, table)
75 # self._realtime = realtime
/usr/lib/python3.8/site-packages/httpx/_client.py in __init__(self, auth, params, headers, cookies, verify, cert, http2, proxies, timeout, limits, pool_limits, max_redirects, event_hooks, base_url, transport, app, trust_env)
1209 trust_env: bool = True,
1210 ):
-> 1211 super().__init__(
1212 auth=auth,
1213 params=params,
/usr/lib/python3.8/site-packages/httpx/_client.py in __init__(self, auth, params, headers, cookies, timeout, max_redirects, event_hooks, base_url, trust_env)
98 self._auth = self._build_auth(auth)
99 self._params = QueryParams(params)
--> 100 self.headers = Headers(headers)
101 self._cookies = Cookies(cookies)
102 self._timeout = Timeout(timeout)
/usr/lib/python3.8/site-packages/httpx/_models.py in __init__(self, headers, encoding)
549 self._list = list(headers._list)
550 elif isinstance(headers, dict):
--> 551 self._list = [
552 (
553 normalize_header_key(k, lower=False, encoding=encoding),
/usr/lib/python3.8/site-packages/httpx/_models.py in <listcomp>(.0)
553 normalize_header_key(k, lower=False, encoding=encoding),
554 normalize_header_key(k, lower=True, encoding=encoding),
--> 555 normalize_header_value(v, encoding),
556 )
557 for k, v in headers.items()
/usr/lib/python3.8/site-packages/httpx/_utils.py in normalize_header_value(value, encoding)
54 if isinstance(value, bytes):
55 return value
---> 56 return value.encode(encoding or "ascii")
57
58
UnicodeEncodeError: 'ascii' codec can't encode character '\u2026' in position 50: ordinal not in range(128)
```
I've tried this on Python 3.7, 3.8, and 3.9 all with similar results. I've also tried different OSes (OSX, Linux), but both fail in similar fashion.
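For anyone else hitting this: the traceback shows the failure happens while encoding a header *value*, and `\u2026` is the "…" ellipsis character — which suggests the key or URL was copied from a UI that truncated it with a literal ellipsis, so this may be a configuration problem rather than a library bug. A minimal reproduction of the failing encode step, independent of supabase-py (the helper name is made up for illustration):

```python
def header_value_is_ascii(value: str) -> bool:
    # httpx encodes header values with the "ascii" codec by default, so any
    # non-ASCII character in a header value (here U+2026, "…") raises
    # UnicodeEncodeError — the same error as in the traceback above.
    try:
        value.encode("ascii")
        return True
    except UnicodeEncodeError:
        return False
```

Checking the configured key/URL for non-ASCII characters before constructing the client would surface this much earlier.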
|
closed
|
2021-09-30T00:55:50Z
|
2021-09-30T01:00:50Z
|
https://github.com/supabase/supabase-py/issues/51
|
[] |
dkvdm
| 1 |
scrapy/scrapy
|
web-scraping
| 5,899 |
Request.from_curl() with $-prefixed string literals
|
Chrome (and probably other things) sometimes generate curl commands with a [$-prefixed](https://www.gnu.org/software/bash/manual/html_node/ANSI_002dC-Quoting.html) data string, probably when it's easier to represent the string in that way or when it includes non-ASCII characters, e.g. the DiscoverQueryRendererQuery XHR on https://500px.com/popular is copied as
```
curl 'https://api.500px.com/graphql' \
<headers omitted>
--data-raw $'{"operationName":"DiscoverQueryRendererQuery",<omitted> "query":"query DiscoverQueryRendererQuery($filters: [PhotoDiscoverSearchFilter\u0021], <the rest omitted>' \
--compressed
```
most likely because of `\u0021` in this payload.
`scrapy.utils.curl.curl_to_request_kwargs()` isn't smart enough to understand this kind of shell escaping, so it puts the `$` into the request body, which is incorrect. Ideally we should support this, though I don't know whether there are existing libraries to unescape it.
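For what it's worth, a rough unescaping sketch: Python's `unicode_escape` codec happens to cover the escape sequences that bash's `$'...'` quoting shares with Python string literals (`\n`, `\t`, `\xHH`, `\uHHHH`, ...). It is not a full ANSI-C quoting parser, so treat it as a starting point only:

```python
import codecs

def unescape_ansi_c(data: str) -> str:
    # Decodes \n, \t, \xHH, \uHHHH, etc. NOT a complete ANSI-C quoting
    # parser: it mangles raw non-ASCII input (the string is re-encoded as
    # latin-1 first) and does not know bash-only escapes such as \cX or \E.
    return codecs.decode(data, "unicode_escape")
```

A real fix inside `curl_to_request_kwargs()` would also need to detect the leading `$` on the quoted argument before applying this.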
|
closed
|
2023-04-18T10:21:07Z
|
2023-04-19T09:35:03Z
|
https://github.com/scrapy/scrapy/issues/5899
|
[
"enhancement"
] |
wRAR
| 0 |
huggingface/transformers
|
deep-learning
| 36,506 |
model from_pretrained bug in 4.50.dev0 in these days
|
### System Info
- `transformers` version: 4.50.dev0
- Platform: Linux-5.10.101-1.el8.ssai.x86_64-x86_64-with-glibc2.31
- Python version: 3.10.16
- Huggingface_hub version: 0.29.1
- Safetensors version: 0.5.3
- Accelerate version: 1.4.0
- Accelerate config: not found
- DeepSpeed version: 0.15.4
- PyTorch version (GPU?): 2.5.1+cu124 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA A800-SXM4-80GB
### Who can help?
@amyeroberts, @qubvel
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
code sample
```
import torch
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor

model_path = "Qwen/Qwen2.5-VL-7B-Instruct"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True
)
processor = AutoProcessor.from_pretrained(model_path)
```
When I configured the environment and ran the code on a new machine as usual today, I encountered the following error
```
Loading checkpoint shards: 0%|
| 0/5 [00:00<?, ?it/s]
[rank0]: Traceback (most recent call last):
[rank0]: File "/mnt/……/Qwen2.5-VL/…r/script.py", line 14, in <module>
[rank0]: model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
[rank0]: File "/opt/conda/envs/…/lib/python3.10/site-packages/transformers/modeling_utils.py", line 269, in _wrapper
[rank0]: return func(*args, **kwargs)
[rank0]: File "/opt/conda/envs/…/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4417, in from_pretrained
[rank0]: ) = cls._load_pretrained_model(
[rank0]: File "/opt/conda/envs/…/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4985, in _load_pretrained_model
[rank0]: new_error_msgs, offload_index, state_dict_index = _load_state_dict_into_meta_model(
[rank0]: File "/opt/conda/envs/…/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
[rank0]: return func(*args, **kwargs)
[rank0]: File "/opt/conda/envs/…/lib/python3.10/site-packages/transformers/modeling_utils.py", line 795, in _load_state_dict_into_meta_model
[rank0]: full_tp_plan.update(getattr(submodule, "_tp_plan", {}))
[rank0]: TypeError: 'NoneType' object is not iterable
[rank0]:[W303 15:32:35.530123370 ProcessGroupNCCL.cpp:1250] Warning: WARNING: process group has NOT been destroyed before we destruct ProcessGroupNCCL. On normal program exit, the applicati
on should call destroy_process_group to ensure that any pending NCCL operations have finished in this process. In rare cases this process can exit before this point and block the progress o
f another member of the process group. This constraint has always been present, but this warning has only been added since PyTorch 2.4 (function operator())
```
The transformers version I use is 4.50.dev0, installed from the GitHub source.
The environment I configured a few days ago runs the same code without errors, but the new environment I set up today fails.
I worked around the problem by downgrading transformers from 4.50.dev0 to 4.49.0.
### Expected behavior
I want the model to load successfully.
|
closed
|
2025-03-03T07:51:04Z
|
2025-03-19T09:37:54Z
|
https://github.com/huggingface/transformers/issues/36506
|
[
"bug"
] |
M3Dade
| 7 |
CorentinJ/Real-Time-Voice-Cloning
|
tensorflow
| 794 |
Trying to Find Bottleneck When Using Nvidia Jetson Nano
|
Hi,
Great work on this! It's amazing to see this working!
I am testing this software out on a [4 GB NVIDIA Jetson Nano Developer Kit](https://developer.nvidia.com/embedded/jetson-nano-developer-kit), and am seeing ~1 minute needed to synthesize a waveform, and am trying to figure out what the bottleneck could be.
I originally tried this code on my Windows machine (Ryzen 7 2700X) and saw about 10 seconds for the waveform to be synthesized. This testing used the CPU for inference.
On the Jetson, it's using the GPU:
`"Found 1 GPUs available. Using GPU 0 (NVIDIA Tegra X1) of compute capability 5.3 with 4.1Gb total memory."`
It did seem to be RAM-limited at first, but I created a swap file to fill the gap and did not see the RAM change much during synthesis. I can see the swap being read during synthesis, and disk read time could be slowing everything down, but it looked like one of the four CPU cores was also under 100% load, making me think I'm CPU-bottlenecked.
I figured that since this project uses PyTorch, using a 128 CUDA core GPU would be faster than an 8 core CPU, but I may be missing some fundamentals, especially when seeing that one of my CPU cores is at 100% usage.
Is synthesis CPU and GPU constrained or would it rely mostly on GPU?
Here are images of the program just before it finished synthesizing and just after with jtop monitoring GPU, CPU, and RAM.
**Before:**
- 5.5GB of memory used. 3.4 is RAM, 2.089 is swap file on disk
- CPU1 at 100%
- CPU 2 at 25%
- GPU at 40%

**After:**
- 5.5GB of memory used. 3.4 is RAM, 2.089 is swap file on disk
- CPU1 at 12%
- CPU 2 at 98%
- GPU at 0%

Thank you!
voloved
|
closed
|
2021-07-10T17:31:49Z
|
2021-09-08T13:53:25Z
|
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/794
|
[] |
voloved
| 2 |
piskvorky/gensim
|
machine-learning
| 2,850 |
AttributeError: 'Doc2VecTrainables' object has no attribute 'vectors_lockf'
|
Python version 3.7.5
gensim version 3.6.0
apache-beam[gcp] 2.20.0
tensorflow==1.14
#### Problem description
Trying to create TF records using gensim Doc2Vec.
The expected result is that TF records are created with the given parameters.
With DirectRunner, TF record creation succeeds with gensim 3.6.0, but an AttributeError is raised with gensim 3.8.0 (`AttributeError: 'Doc2VecTrainables' object has no attribute 'vectors_lockf'`).
When running a Dataflow job, the AttributeError is raised even with gensim 3.6.0.
#### Steps/code/corpus to reproduce
```
pretrained_emb = 'glove.6B.100d.txt'
vector_size = 300
window_size = 15
min_count = 1
sampling_threshold = 1e-5
negative_size = 5
train_epoch = 100
dm = 0  # 0 = dbow; 1 = dmpv
worker_count = 1  # number of parallel processes
print('max_seq_len which is being passed above Doc2Vec', self.max_seq_len)
self.model = g.Doc2Vec(documents=None, size=vector_size,
                       window=window_size, min_count=min_count,
                       sample=sampling_threshold,
                       workers=worker_count, hs=0,
                       dm=dm, negative=negative_size,
                       dbow_words=1, dm_concat=1,
                       pretrained_emb=pretrained_emb,
                       iter=100)
print("Loaded Model")
```
`plot` is of type `str`:
```
embedding_vector = self.model.infer_vector([plot])
```
It raises an AttributeError when run with DataflowRunner; with DirectRunner, the issue only appears with gensim 3.8.0.
Error log:
I have pasted the entire error log.
```
textPayload: "Error message from worker: Traceback (most recent call last):
File "apache_beam/runners/common.py", line 950, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam/runners/common.py", line 547, in apache_beam.runners.common.SimpleInvoker.invoke_process
File "apache_beam/runners/common.py", line 1078, in apache_beam.runners.common._OutputProcessor.process_outputs
File "tfrecord_util/csv2tfrecord_train_valid.py", line 310, in process
x = self.preprocess(x)
File "tfrecord_util/csv2tfrecord_train_valid.py", line 233, in preprocess
embedding_vector = self._embedding(plot)
File "tfrecord_util/csv2tfrecord_train_valid.py", line 300, in _embedding
embedding_vector = self.model.infer_vector([plot])
File "/usr/local/lib/python3.7/site-packages/gensim/models/doc2vec.py", line 915, in infer_vector
learn_words=False, learn_hidden=False, doctag_vectors=doctag_vectors, doctag_locks=doctag_locks
File "gensim/models/doc2vec_inner.pyx", line 332, in gensim.models.doc2vec_inner.train_document_dbow
File "gensim/models/doc2vec_inner.pyx", line 254, in gensim.models.doc2vec_inner.init_d2v_config
AttributeError: 'Doc2VecTrainables' object has no attribute 'vectors_lockf'
```
I hope you understand the issue from the above details. Please let me know if you still need any additional information.
```
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/dataflow_worker/batchworker.py", line 647, in do_work
work_executor.execute()
File "/usr/local/lib/python3.7/site-packages/dataflow_worker/executor.py", line 176, in execute
op.start()
File "dataflow_worker/native_operations.py", line 38, in dataflow_worker.native_operations.NativeReadOperation.start
File "dataflow_worker/native_operations.py", line 39, in dataflow_worker.native_operations.NativeReadOperation.start
File "dataflow_worker/native_operations.py", line 44, in dataflow_worker.native_operations.NativeReadOperation.start
File "dataflow_worker/native_operations.py", line 54, in dataflow_worker.native_operations.NativeReadOperation.start
File "apache_beam/runners/worker/operations.py", line 329, in apache_beam.runners.worker.operations.Operation.output
File "apache_beam/runners/worker/operations.py", line 192, in apache_beam.runners.worker.operations.SingletonConsumerSet.receive
File "apache_beam/runners/worker/operations.py", line 682, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam/runners/worker/operations.py", line 683, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam/runners/common.py", line 952, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam/runners/common.py", line 1013, in apache_beam.runners.common.DoFnRunner._reraise_augmented
File "apache_beam/runners/common.py", line 950, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam/runners/common.py", line 547, in apache_beam.runners.common.SimpleInvoker.invoke_process
File "apache_beam/runners/common.py", line 1105, in apache_beam.runners.common._OutputProcessor.process_outputs
File "apache_beam/runners/worker/operations.py", line 192, in apache_beam.runners.worker.operations.SingletonConsumerSet.receive
File "apache_beam/runners/worker/operations.py", line 682, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam/runners/worker/operations.py", line 683, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam/runners/common.py", line 952, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam/runners/common.py", line 1028, in apache_beam.runners.common.DoFnRunner._reraise_augmented
File "/usr/local/lib/python3.7/site-packages/future/utils/__init__.py", line 421, in raise_with_traceback
raise exc.with_traceback(traceback)
File "apache_beam/runners/common.py", line 950, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam/runners/common.py", line 547, in apache_beam.runners.common.SimpleInvoker.invoke_process
File "apache_beam/runners/common.py", line 1078, in apache_beam.runners.common._OutputProcessor.process_outputs
File "tfrecord_util/csv2tfrecord_train_valid.py", line 310, in process
x = self.preprocess(x)
File "tfrecord_util/csv2tfrecord_train_valid.py", line 233, in preprocess
embedding_vector = self._embedding(plot)
File "tfrecord_util/csv2tfrecord_train_valid.py", line 300, in _embedding
embedding_vector = self.model.infer_vector([plot])
File "/usr/local/lib/python3.7/site-packages/gensim/models/doc2vec.py", line 915, in infer_vector
learn_words=False, learn_hidden=False, doctag_vectors=doctag_vectors, doctag_locks=doctag_locks
File "gensim/models/doc2vec_inner.pyx", line 332, in gensim.models.doc2vec_inner.train_document_dbow
File "gensim/models/doc2vec_inner.pyx", line 254, in gensim.models.doc2vec_inner.init_d2v_config
AttributeError: 'Doc2VecTrainables' object has no attribute 'vectors_lockf' [while running 'PreprocessData']
```
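The usual root cause of this error is a gensim version mismatch: the Dataflow workers install a different gensim than the one that trained and saved the model, and a model pickled by one version can lack attributes (like `vectors_lockf`) that another version's Cython code expects. Pinning `gensim==3.6.0` in the job's `requirements.txt`/`setup.py` typically fixes it. A small fail-fast sketch (the helper name is illustrative) to detect the skew on the worker before inference:

```python
def same_major_minor(installed: str, expected: str) -> bool:
    # True when two version strings share the same major.minor release,
    # e.g. compare gensim.__version__ on the worker against the version
    # that saved the model before calling infer_vector().
    def head(v: str):
        return tuple(int(p) for p in v.split(".")[:2])
    return head(installed) == head(expected)
```

Calling this in the DoFn's `setup()` and raising a clear error turns a cryptic Cython AttributeError into an actionable message.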
|
closed
|
2020-06-04T11:27:03Z
|
2020-06-05T04:05:14Z
|
https://github.com/piskvorky/gensim/issues/2850
|
[] |
rohithsiddhartha
| 1 |
pydata/xarray
|
numpy
| 10,085 |
set encoding parameters in addition to the original encoding
|
### Is your feature request related to a problem?
When writing to disk with `to_netcdf`, the `encoding` argument causes existing encoding to be dropped. This is described in the [docs](https://docs.xarray.dev/en/latest/generated/xarray.Dataset.to_netcdf.html).
What is a good approach to add encoding parameters in addition to the original encoding? e.g.
```python
import rioxarray
import xarray as xr
import numpy as np
# make some random dummy netcdf file
data = np.random.rand(4, 4)
lat = np.linspace(10, 20, 4)
lon = np.linspace(10, 20, 4)
ds = xr.Dataset({"dummy": (["lat", "lon"], data)}, coords={"lat": lat, "lon": lon})
ds.rio.set_spatial_dims("lon", "lat", inplace=True)
ds.rio.write_crs("EPSG:4326", inplace=True)
# note the spatial_ref coordinate
print(ds.dummy)
```
```
<xarray.DataArray 'dummy' (lat: 4, lon: 4)> Size: 128B
...
Coordinates:
* lat (lat) float64 32B 10.0 13.33 16.67 20.0
* lon (lon) float64 32B 10.0 13.33 16.67 20.0
spatial_ref int64 8B 0
```
```python
ds.to_netcdf("test.nc", mode="w")
# read it back in - ok
ds2 = xr.open_dataset("test.nc", decode_coords="all")
print(ds2.dummy)
```
```
<xarray.DataArray 'dummy' (lat: 4, lon: 4)> Size: 128B
...
Coordinates:
* lat (lat) float64 32B 10.0 13.33 16.67 20.0
* lon (lon) float64 32B 10.0 13.33 16.67 20.0
spatial_ref int64 8B ...
```
```python
# now compress
ds2.to_netcdf("test_compressed.nc", mode="w", encoding={"dummy": {"compression": "zstd"}})
# read it back in - drops the spatial_ref
ds3 = xr.open_dataset("test_compressed.nc", decode_coords="all")
print(ds3.dummy)
```
```
<xarray.DataArray 'dummy' (lat: 4, lon: 4)> Size: 128B
...
Coordinates:
* lat (lat) float64 32B 10.0 13.33 16.67 20.0
* lon (lon) float64 32B 10.0 13.33 16.67 20.0
```
This is because rioxarray stores `grid_mapping` in the encoding.
So what is a nice, generic way to specify encoding in addition to the original encoding?
```python
encoding = ds2.dummy.encoding.copy()
encoding["compression"] = "zstd"
ds2.to_netcdf("test_compressed_2.nc", mode="w", encoding={"dummy": encoding})
```
```
ValueError: unexpected encoding parameters for 'netCDF4' backend: ['szip', 'zstd', 'bzip2', 'blosc']. Valid encodings are: ...
```
It seems not possible to pass the original encoding back in (even unmodified) due to [additional checks](https://github.com/pydata/xarray/blob/5ea1e81f6ae7728dd9add2e97807f4357287fa6e/xarray/backends/api.py#L1968C1-L1969C1)
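As a stopgap until this is supported, one can merge the encodings manually while dropping the keys the backend rejects. A generic sketch (the function name is made up, and the `invalid` default just mirrors the keys from the ValueError above — the real set depends on the backend and xarray version):

```python
def merged_encoding(original: dict, extra: dict,
                    invalid: tuple = ("szip", "zstd", "bzip2", "blosc")) -> dict:
    # Keep the variable's existing encoding (e.g. rioxarray's grid_mapping),
    # strip keys the target backend would reject, then layer on the new
    # parameters.
    enc = {k: v for k, v in original.items() if k not in invalid}
    enc.update(extra)
    return enc
```

Usage would look like `ds2.to_netcdf("out.nc", encoding={"dummy": merged_encoding(dict(ds2.dummy.encoding), {"compression": "zstd"})})`, which preserves `grid_mapping` while adding compression.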
### Describe the solution you'd like
in `to_netcdf()` be able to specify `encoding` in addition to the original encoding
### Describe alternatives you've considered
_No response_
### Additional context
_No response_
|
open
|
2025-02-28T13:24:11Z
|
2025-02-28T13:24:15Z
|
https://github.com/pydata/xarray/issues/10085
|
[
"enhancement"
] |
samdoolin
| 1 |
Yorko/mlcourse.ai
|
pandas
| 659 |
Issue related to Lasso and Ridge regression notebook file - mlcourse.ai/jupyter_english/topic06_features_regression/lesson6_lasso_ridge.ipynb /
|
While plotting Ridge coefficients (weights) against alpha, the alphas used are different from `ridge_alphas`, yet when fitting the model and computing `ridge_cv.alpha_` we use `ridge_alphas`. So in the code below, the loop takes its alpha values from the `alphas` array defined for **Lasso**. If we plot using `ridge_alphas`, the plot is quite different. Please confirm which plot is correct.
```
n_alphas = 200
ridge_alphas = np.logspace(-2, 6, n_alphas)
coefs = []
for a in alphas:  # alphas = np.linspace(0.1, 10, 200) — it comes from the Lasso section
    model.set_params(alpha=a)
    model.fit(X, y)
    coefs.append(model.coef_)
```
|
closed
|
2020-03-24T11:08:45Z
|
2020-03-24T11:21:44Z
|
https://github.com/Yorko/mlcourse.ai/issues/659
|
[
"minor_fix"
] |
sonuksh
| 1 |
fugue-project/fugue
|
pandas
| 337 |
[FEATURE] Fix index warning in fugue_dask
|
**Is your feature request related to a problem? Please describe.**

**Describe the solution you'd like**
For newer versions of pandas, we need to do something similar to [this](https://github.com/fugue-project/triad/blob/4998449e8a714de2e4c02d51d841650fe2c068c5/triad/utils/pandas_like.py#L240)
|
closed
|
2022-07-11T07:28:07Z
|
2022-07-11T16:22:12Z
|
https://github.com/fugue-project/fugue/issues/337
|
[
"enhancement",
"pandas",
"dask"
] |
goodwanghan
| 0 |
google-research/bert
|
nlp
| 907 |
How do you get the training time on each epoch using TPUEstimator?
|
I am able to see `INFO:tensorflow:loss = 134.62343, step = 97`, but not the time.
|
open
|
2019-11-10T07:09:03Z
|
2019-11-10T07:09:03Z
|
https://github.com/google-research/bert/issues/907
|
[] |
elvinjgalarza
| 0 |
microsoft/unilm
|
nlp
| 1,140 |
DiT Licence?
|
What is the license for using DiT? I see that the whole repository is under the MIT License, but some of the projects contain different licensing. As there's no info mentioned for DiT, can you update it?
|
open
|
2023-06-14T06:02:06Z
|
2023-08-16T04:32:26Z
|
https://github.com/microsoft/unilm/issues/1140
|
[] |
KananVyas
| 2 |
sanic-org/sanic
|
asyncio
| 2,474 |
Different ways of websocket disconnection effects in task pending
|
**Describe the bug**
Hi, I am actually seeking help. I was following this gist https://gist.github.com/ahopkins/5b6d380560d8e9d49e25281ff964ed81 to build a chat server. Now that we have a frontend, I am stuck on a task-pending problem.
From a user's perspective, the most common way of leaving a web conversation is closing the tab directly. So I tried that, and the error occurs at server shutdown.
```bash
Task was destroyed but it is pending!
source_traceback: Object created at (most recent call last):
File "/home/yuzixin/workspace/sanicserver/server.py", line 30, in <module>
app.run(host="0.0.0.0", port=4017, debug=app.config.DEBUG, workers=1)
File "/home/yuzixin/workspace/sanicserver/venv/lib/python3.10/site-packages/sanic/mixins/runner.py", line 145, in run
self.__class__.serve(primary=self) # type: ignore
File "/home/yuzixin/workspace/sanicserver/venv/lib/python3.10/site-packages/sanic/mixins/runner.py", line 578, in serve
serve_single(primary_server_info.settings)
File "/home/yuzixin/workspace/sanicserver/venv/lib/python3.10/site-packages/sanic/server/runners.py", line 206, in serve_single
serve(**server_settings)
File "/home/yuzixin/workspace/sanicserver/venv/lib/python3.10/site-packages/sanic/server/runners.py", line 155, in serve
loop.run_forever()
File "/home/yuzixin/workspace/sanicserver/utils/decorators.py", line 34, in decorated_function
response = await f(request, *args, **kwargs)
File "/home/yuzixin/workspace/sanicserver/filesystem/blueprint.py", line 30, in feed
await client.receiver()
File "/home/yuzixin/workspace/sanicserver/filesystem/client.py", line 52, in receiver
message_str = await self.protocol.recv()
File "/home/yuzixin/workspace/sanicserver/venv/lib/python3.10/site-packages/sanic/server/websockets/impl.py", line 523, in recv
asyncio.ensure_future(self.assembler.get(timeout)),
File "/home/yuzixin/usr/lib/python3.10/asyncio/tasks.py", line 619, in ensure_future
return _ensure_future(coro_or_future, loop=loop)
File "/home/yuzixin/usr/lib/python3.10/asyncio/tasks.py", line 638, in _ensure_future
return loop.create_task(coro_or_future)
task: <Task pending name='Task-28' coro=<WebsocketFrameAssembler.get() done, defined at /home/yuzixin/workspace/sanicserver/venv/lib/python3.10/site-packages/sanic/server/websockets/frame.py:91> wait_for=<Future pending cb=[Task.task_wakeup()] created at /home/yuzixin/usr/lib/python3.10/asyncio/locks.py:210> created at /home/yuzixin/usr/lib/python3.10/asyncio/tasks.py:638>
```
Curiously, this does not happen when testing with Postman. I caught the `asyncio.CancelledError` in client.py to print the stack, and it turned out the cancelled errors were raised by different lines in impl.py.
The stack at Postman close:
```bash
Traceback (most recent call last):
File "/home/yuzixin/workspace/sanicserver/filesystem/client.py", line 52, in receiver
message_str = await self.protocol.recv()
File "/home/yuzixin/workspace/sanicserver/venv/lib/python3.10/site-packages/sanic/server/websockets/impl.py", line 534, in recv
raise asyncio.CancelledError()
asyncio.exceptions.CancelledError
```
The stack at tab close:
```
Traceback (most recent call last):
File "/home/yuzixin/workspace/sanicserver/filesystem/client.py", line 52, in receiver
message_str = await self.protocol.recv()
File "/home/yuzixin/workspace/sanicserver/venv/lib/python3.10/site-packages/sanic/server/websockets/impl.py", line 525, in recv
done, pending = await asyncio.wait(
File "/home/yuzixin/usr/lib/python3.10/asyncio/tasks.py", line 384, in wait
return await _wait(fs, timeout, return_when, loop)
File "/home/yuzixin/usr/lib/python3.10/asyncio/tasks.py", line 495, in _wait
await waiter
asyncio.exceptions.CancelledError
```
The code between lines 525 and 534 is:
```python
done, pending = await asyncio.wait(
tasks,
return_when=asyncio.FIRST_COMPLETED,
)
done_task = next(iter(done))
if done_task is self.recv_cancel:
# recv was cancelled
for p in pending:
p.cancel()
raise asyncio.CancelledError()
```
I am not quite familiar with async scripting, but if anything, this looks like some tasks were successfully created but not cancelled when `asyncio.wait` raised a cancelled error.
This is so far not affecting the server's function, but I am a bit worried it might indicate that some tasks keep running for the whole lifetime of a server that could run for months, dragging down performance. Perhaps there's something I could do to manually close the `protocol.recv` task when catching the error?
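For reference, a self-contained sketch of the pattern the traceback points at (generic asyncio, not a patch for sanic's `impl.py`): when racing tasks with `FIRST_COMPLETED`, the losing tasks have to be cancelled *and* awaited, otherwise they can still be pending at loop shutdown and trigger exactly this "Task was destroyed but it is pending!" warning.

```python
import asyncio

async def race(*aws):
    # Run awaitables concurrently and return the first finisher's result.
    tasks = [asyncio.ensure_future(a) for a in aws]
    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
    for p in pending:
        p.cancel()
    # Await the cancelled tasks so they finish unwinding before we return;
    # skipping this step is what leaves tasks "destroyed but pending".
    await asyncio.gather(*pending, return_exceptions=True)
    return next(iter(done)).result()

async def main():
    async def fast():
        await asyncio.sleep(0.01)
        return "fast"

    async def slow():
        await asyncio.sleep(60)
        return "slow"

    return await race(fast(), slow())
```

In application code, wrapping `protocol.recv()` this way (or shielding and cancelling it explicitly in the disconnect handler) should make the warning go away.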
**Code snippet**
https://gist.github.com/ahopkins/5b6d380560d8e9d49e25281ff964ed81
**Expected behavior**
A clean server shutdown with no errors reporting.
**Environment (please complete the following information):**
- OS: Debian
- Version buster
- python version: 3.10
|
closed
|
2022-06-01T17:34:56Z
|
2022-06-01T17:57:03Z
|
https://github.com/sanic-org/sanic/issues/2474
|
[] |
jrayu
| 1 |
aleju/imgaug
|
machine-learning
| 820 |
Assigning Probability in imgaug OneOf
|
Can we have a different probability for selecting augmentations in OneOf?
Its use case is, for example, when you want to select one of 3 augmentations but with prob = [0.5, 0.25, 0.25] instead of 1/3 for each of them.
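As far as I know this is not built into `OneOf`, but it is easy to approximate outside the augmenter tree with `random.choices`. A sketch (the function name and arguments are made up; with imgaug you would pass `Augmenter` instances and call the chosen one on your image):

```python
import random

def one_of_weighted(augmenters, weights, rng=random):
    # Pick exactly one augmenter according to the given (unnormalized)
    # weights. imgaug's OneOf samples uniformly; this helper moves the
    # weighted choice outside the augmenter pipeline.
    return rng.choices(augmenters, weights=weights, k=1)[0]
```

For example, `one_of_weighted([aug_a, aug_b, aug_c], [0.5, 0.25, 0.25])` selects `aug_a` half the time.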
|
open
|
2022-06-09T07:26:08Z
|
2022-10-22T22:23:59Z
|
https://github.com/aleju/imgaug/issues/820
|
[] |
g-jindal2001
| 1 |
falconry/falcon
|
api
| 1,857 |
Docs recipe: stream media with range request
|
Hello,
I'm trying to stream an mp4 video with range request support, but I cannot manage to make it work with `resp.stream`.
This code works:
```
import os

import falcon


def on_get(self, req, resp):
    media = 'test.mp4'
    resp.set_header('Content-Type', 'video/mp4')
    resp.accept_ranges = 'bytes'
    stream = open(media, 'rb')
    size = os.path.getsize(media)
    if req.range:
        end = req.range[1]
        if end < 0:
            end = size + end
        stream.seek(req.range[0])
        resp.content_range = (req.range[0], end, size)
        size = end - req.range[0] + 1
        resp.status = falcon.HTTP_206
    resp.content_length = size
    resp.body = stream.read(size)
```
but this will load the whole file into memory, which is not an option.
If I change the last two lines to
`resp.set_stream(stream, size)`,
I get an error:
```
SIGPIPE: writing to a closed pipe/socket/fd (probably the client disconnected) on request /api/stream/557 (ip 10.0.0.136) !!!
uwsgi_response_sendfile_do(): Broken pipe [core/writer.c line 645] during GET /api/stream/557 (10.0.0.136)
IOError: write error
```
I'm using uWSGI with nginx as a reverse proxy. I'm not sure it's related to Falcon, but I don't have any clue where to look.
Any idea?
Thanks
Johan
PS: I know it's not optimal to use Falcon for this, but I cannot expose the real video path to the client (in reality it comes from a DB), and performance is not really a problem in my case.
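One memory-friendly sketch (generic WSGI streaming, not an official Falcon recipe): wrap the byte range in a generator and assign it to `resp.stream` alongside `resp.content_length`, so only one chunk is in memory at a time. Also note that a broken pipe on a partially served media range often just means the browser aborted a probing request, which is normal during playback.

```python
def file_range(path, start, length, chunk_size=8192):
    # Yield `length` bytes of `path` starting at byte `start`, one chunk
    # at a time, so the whole range never sits in memory. Stops early if
    # the requested range runs past end of file.
    with open(path, "rb") as f:
        f.seek(start)
        remaining = length
        while remaining > 0:
            data = f.read(min(chunk_size, remaining))
            if not data:
                break
            remaining -= len(data)
            yield data
```

Usage would be `resp.stream = file_range(media, req.range[0], size)` together with the `resp.content_length`/`resp.content_range` lines above; since a generator has no `fileno()`, uWSGI should fall back to plain chunked writes instead of `sendfile`, which may avoid the broken-pipe write error.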
Edit:
Here are the Chrome requests when it doesn't work.
| Name | Url | Method | Status | Protocol |type | initiator | size | time
|--|--|--|--|--|--|--|--|--|
557 | http://10.1.12.2/api/stream/557 | GET | 206 | http/1.1 | media | Other | 32.6 kB | 5.24 s | 50883753
557 | http://10.1.12.2/api/stream/557 | GET | 206 | http/1.1 | media | Other | 28.1 kB | 365 ms | 27817
557 | http://10.1.12.2/api/stream/557 | GET | (canceled) | | media | Other | 0 B | 1 ms
|
closed
|
2021-02-05T12:11:34Z
|
2022-01-05T21:32:24Z
|
https://github.com/falconry/falcon/issues/1857
|
[
"documentation",
"question"
] |
belese
| 11 |
albumentations-team/albumentations
|
deep-learning
| 2,021 |
import RandomOrder
|
## Describe the bug
RandomOrder is not in [ composition.\_\_all\_\_](https://github.com/albumentations-team/albumentations/blob/526187b98bb8f66b77601e9cb32e2aa24d8a76a3/albumentations/core/composition.py#L27) therefore it is not possible to import it like any other transform
### To Reproduce
Steps to reproduce the behavior:
1. Try this sample:
```
import albumentations as A

t = A.SomeOf(...)       # this works
t = A.RandomOrder(...)  # doesn't work
```
### Expected behavior
RandomOrder is available when importing albumentations.
### Actual behavior
RandomOrder is not available when importing albumentations.
|
closed
|
2024-10-24T10:54:10Z
|
2024-10-24T19:51:54Z
|
https://github.com/albumentations-team/albumentations/issues/2021
|
[
"bug"
] |
nrudakov
| 2 |
deepfakes/faceswap
|
deep-learning
| 1,144 |
when support 3060ti?
|
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
|
closed
|
2021-04-03T07:58:46Z
|
2021-04-03T10:46:36Z
|
https://github.com/deepfakes/faceswap/issues/1144
|
[] |
soufunlab
| 1 |
taverntesting/tavern
|
pytest
| 703 |
skipif mark can't utilize global python variables or vars returned by fixtures
|
Hi, I believe this is a feature request for the `skipif` mark.
### **Issue**
I'm trying out the `skipif` mark (see three code examples below), but the `eval()` function is only able to access vars stored in tavern.util.dict_util (i.e. system environment variables and perhaps variables included via `!include` .yaml files). I tried `skipif: "global_python_var is True"`, which uses a global var created in conftest.py (I also tried `skipif: "global_python_var in globals()"`). Additionally, I tried accessing variables returned from fixtures (also defined in conftest.py) using the `skipif: '{var_name}'` format, but get the following error:
**ERROR:tavern.util.dict_util:Key(s) not found in format: url**, with this output (I set all env_values to None):
```
{'tavern':
{'env_vars':
{'NVM_INC': None, 'LDFLAGS': None, 'TERM_PROGRAM': None, 'PYENV_ROOT': None, 'NVM_CD_FLAGS': None, 'TERM': None, 'SHELL': None, 'CPPFLAGS': None, 'TMPDIR': None, 'GOOGLE_APPLICATION_CREDENTIALS': None, 'VAULT_ADDR': None, 'TERM_PROGRAM_VERSION': None, 'TERM_SESSION_ID': None, 'PYENV_VERSION': None, 'NVM_DIR': None, 'USER': None, 'SSH_AUTH_SOCK': None, 'PYENV_DIR': None, 'VIRTUAL_ENV': None, 'PATH': None, 'LaunchInstanceID': None, 'PWD': None, 'LANG': None, 'PYENV_HOOK_PATH': None, 'XPC_FLAGS': None, 'XPC_SERVICE_NAME': None, 'HOME': None, 'SHLVL': None, 'PYTHONPATH': None, 'LOGNAME': None, 'NVM_BIN': None, 'SECURITYSESSIONID': None, '__CF_USER_TEXT_ENCODING': None }
}
}
```
### **Request**
Could the eval function called by `skipif` access information just like the test that it is marking, **and** the Python global namespace? Alternatively, letting `skipif` use external functions (i.e. a function that returns either "True" or "False") would also be a good option. My overall goal is to skip all tests if my basic health-check test failed, following these steps:
### _My intended usage and test examples_
1. run healthcheck/base test
2. verify response with external function which will either create a global, or change an existing global created in conftest.py (returns True or False based on response)
3. Other tests are skipped if the `skipif eval()` finds that the global var == False. (Alternatively, skips if external function called in eval() returns `"False"`)
Here are the example code snippets I tried (located in test_name.tavern.yaml file):
```
marks:
- skipif: "'healthcheck_failed' in globals()"
```
```
marks:
- skipif: "'{autouse_session_fixture_returned_from_conftest}' is True"
```
```
marks:
- skipif:
- $ext:
- function: "utils:return_true"
```
|
closed
|
2021-07-12T18:35:16Z
|
2021-10-31T15:52:42Z
|
https://github.com/taverntesting/tavern/issues/703
|
[] |
JattMones
| 2 |
kaliiiiiiiiii/Selenium-Driverless
|
web-scraping
| 67 |
Weird window size
|
Even with the example code, the window is small. If I make it fullscreen, the rest of the window is blank
|
closed
|
2023-09-27T14:11:01Z
|
2023-12-24T20:39:16Z
|
https://github.com/kaliiiiiiiiii/Selenium-Driverless/issues/67
|
[] |
Fragaile
| 1 |
polakowo/vectorbt
|
data-visualization
| 621 |
Getting a KeyError when using IndicatorFactory.run()
|
Hello, I am trying to play around with some simple strategies to learn about the library, so I started with this :
```python
import datetime
import vectorbt as vbt
import numpy as np
import pandas as pd
import pytz
import talib
data = vbt.BinanceData.download('ETHUSDT',
start = datetime.datetime(2017, 1, 2,tzinfo=pytz.timezone('UTC')),
end = datetime.datetime(2018, 1, 1, tzinfo=pytz.timezone('UTC'))).get(['Close'])
def dummy_strat(Close, fast_ema, slow_ema):
ema1 = vbt.talib('EMA').run(Close, fast_ema).real.to_numpy()
ema2 = vbt.talib('EMA').run(Close, slow_ema).real.to_numpy()
stoch_rsi = vbt.talib('STOCHRSI').run(Close).fastk.to_numpy()
entries = (ema1 >ema2) & (stoch_rsi <80)
exits = (ema1 <ema2) & (stoch_rsi > 20)
#print(help(ema1))
return entries, exits
DummyStrat = vbt.IndicatorFactory(
class_name= 'TrueStrat',
short_name = 'TS',
input_names = ["Close"] ,
param_names = ["fast_ema", "slow_ema"],
output_names = ["entries", "exits"]
).from_apply_func(dummy_strat )
```
When I run
```
fast_ema = 10
slow_ema = 20
entries, exits = true_strat(data, fast_ema, slow_ema)
pf = vbt.Portfolio.from_signals(data, entries, exits, freq = '1H')
returns = pf.total_return()
```
it works as expected. But when I try this :
`entries, exits = TrueStrat.run(data,
fast_ema = np.arange(10, 50),
slow_ema = np.arange(30, 100),
param_product = True)`
I get a `KeyError: 0`
Can someone please help me and explain to me what I'm doing wrong?
Thanks
|
closed
|
2023-07-11T10:19:18Z
|
2024-03-16T10:47:55Z
|
https://github.com/polakowo/vectorbt/issues/621
|
[] |
myiroslav
| 1 |
sktime/pytorch-forecasting
|
pandas
| 1,006 |
Hello everyone, after training my model, how can I make it accept a dataset without the target column when predicting new values? In real life we do not yet know the value we are seeking from the prediction process.
|
open
|
2022-05-28T00:21:38Z
|
2022-06-10T11:05:01Z
|
https://github.com/sktime/pytorch-forecasting/issues/1006
|
[] |
daniwxcode
| 6 |
|
opengeos/leafmap
|
jupyter
| 492 |
leafmap add_raster function doesn't work on Windows
|
<!-- Please search existing issues to avoid creating duplicates. -->
### Environment Information
- leafmap version:0.22.0
- Python version:3.9
- Operating System:windows 10
### Description
error:
1019
1020 if http_error_msg:
-> 1021 raise HTTPError(http_error_msg, response=self)
1022
1023 def close(self):
HTTPError: 400 Client Error: BAD REQUEST for url: http://localhost:62933/api/metadata?&filename=D%3A%5Ccode%5Cpy%5Cimages%5CImage10.tif
### What I Did
```
m = leafmap.Map(center=[30.33049401, 104.10887847], zoom=18, height="800px")
m.add_basemap("SATELLITE")
m
image = "D:\\code\\py\\images\\Image10.tif"
tms_to_geotiff(output=image, bbox=bbox, zoom=19, source="Satellite", overwrite=True)
m.layers[-1].visible = False
m.add_raster(image, layer_name="Image")
m
```
|
closed
|
2023-07-14T02:26:33Z
|
2023-07-17T01:30:11Z
|
https://github.com/opengeos/leafmap/issues/492
|
[
"bug"
] |
mrpan
| 2 |
allenai/allennlp
|
pytorch
| 4,850 |
Have new multi-process data loader put batches directly on the target device from workers
|
closed
|
2020-12-07T20:45:31Z
|
2021-02-12T00:47:02Z
|
https://github.com/allenai/allennlp/issues/4850
|
[] |
epwalsh
| 2 |
|
yinkaisheng/Python-UIAutomation-for-Windows
|
automation
| 184 |
Questionable reliability of the GetChildren() method
|
The implementation of GetChildren() uses the Win32 API IUIAutomationTreeWalker::GetNextSiblingElement(). The Microsoft documentation (https://docs.microsoft.com/en-us/windows/win32/api/uiautomationclient/nf-uiautomationclient-iuiautomationtreewalker-getnextsiblingelement) says: "The structure of the Microsoft UI Automation tree changes as the visible UI elements on the desktop change. It is not guaranteed that an element returned as the next sibling element will be returned as the next sibling on subsequent passes." My understanding is that this API does not guarantee that a second traversal of the control tree will produce the same result.
Background: while using uiautomation, I found that accessing certain controls by index occasionally fails because the index has changed.
I am not sure whether my understanding is correct. Could someone help explain this?
|
open
|
2021-11-23T04:04:51Z
|
2022-10-18T05:20:30Z
|
https://github.com/yinkaisheng/Python-UIAutomation-for-Windows/issues/184
|
[] |
ludeen007
| 1 |
pydantic/FastUI
|
fastapi
| 285 |
demo loading failed
|
<img width="1063" alt="image" src="https://github.com/pydantic/FastUI/assets/4550421/8c4fdabf-0dd2-494b-a904-88322f0c4e29">
|
closed
|
2024-04-26T05:30:58Z
|
2024-04-26T13:40:32Z
|
https://github.com/pydantic/FastUI/issues/285
|
[
"documentation",
"duplicate"
] |
HakunamatataLeo
| 2 |
graphql-python/graphene-sqlalchemy
|
sqlalchemy
| 24 |
How to solve "'utf8' codec can't decode" error caused by the string "högskolan"
|
If the database contains the string "högskolan", the following error occurs:
{
"errors": [
{
"message": "'utf8' codec can't decode byte 0xf6 in position 34: invalid start byte",
"locations": [
{
"column": 3,
"line": 2
}
]
}
],
"data": {
"allDegreess": null
}
}
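For what it's worth, byte `0xf6` is "ö" in Latin-1 (and cp1252), so the column data is most likely Latin-1-encoded rather than UTF-8. A quick check in plain Python reproduces the error:

```python
# 'högskolan' stored as Latin-1 bytes fails to decode as UTF-8,
# which matches the error message above.
raw = "högskolan".encode("latin-1")
try:
    raw.decode("utf-8")
except UnicodeDecodeError as exc:
    print(exc)  # "... can't decode byte 0xf6 ..."

# Decoding with the right codec works fine:
assert raw.decode("latin-1") == "högskolan"
```

If that matches your situation, the usual fix is to make the database and connection use UTF-8 (or convert the stored data), rather than changing anything in graphene-sqlalchemy itself.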
|
closed
|
2016-11-29T06:25:41Z
|
2023-02-26T00:53:20Z
|
https://github.com/graphql-python/graphene-sqlalchemy/issues/24
|
[] |
chyroc
| 3 |
aminalaee/sqladmin
|
sqlalchemy
| 410 |
Support SQLAlchemy v2
|
### Checklist
- [X] There are no similar issues or pull requests for this yet.
### Is your feature related to a problem? Please describe.
A few days ago I started using SQLAlchemy for the first time - specifically, v2.0.0rc2 (released 2023-Jan-9). Today I decided to try setting up an admin UI, and after determining that Flask-Admin is broken and unmaintained, I decided to try `sqladmin` - but couldn't install it because your `pyproject.toml` specifies version `<1.5`.
### Describe the solution you would like.
Given that SQLAlchemy v2 is expected to come out in the next few weeks, now seems like the time to make sure sqladmin works with it, and then loosen the version specifier.
### Describe alternatives you considered
I don't see an alternative. I want to stay with SQLAlchemy v2, and sqladmin directly interacts with the models, so the backend and admin have to at least share the model code, which means they might as well be in the same project - which means they have to share the same list of package dependencies.
### Additional context
_No response_
|
closed
|
2023-01-12T17:01:04Z
|
2023-01-29T17:34:26Z
|
https://github.com/aminalaee/sqladmin/issues/410
|
[] |
odigity
| 3 |
ploomber/ploomber
|
jupyter
| 353 |
Request for a Binder example that combines Ploomber and Mlflow
|
I'm using MLflow, but MLflow doesn't have pipeline functionality.
Therefore, I would like to use a combination of MLflow and Ploomber.
Could you create a simple notebook example (MLflow + Ploomber) that can be reproduced in Binder?
|
closed
|
2021-10-08T20:23:09Z
|
2021-12-02T03:12:52Z
|
https://github.com/ploomber/ploomber/issues/353
|
[] |
kozo2
| 6 |
Zeyi-Lin/HivisionIDPhotos
|
fastapi
| 175 |
Tired of seeing Musk; please add an option to show or hide the example, or allow a custom example
|
Tired of seeing Musk; please add an option to show or hide the example, or allow a custom example.
|
open
|
2024-09-27T11:08:23Z
|
2024-10-18T01:12:56Z
|
https://github.com/Zeyi-Lin/HivisionIDPhotos/issues/175
|
[] |
Jio0oiJ
| 2 |
MilesCranmer/PySR
|
scikit-learn
| 725 |
Timeout in seconds not applying
|
### Discussed in https://github.com/MilesCranmer/PySR/discussions/724
<div type='discussions-op-text'>
<sup>Originally posted by **usebi** September 25, 2024</sup>
I tried the timeout_in_seconds option of the PySR regressor and set the timeout to 12 hours, but many hours past the limit the program is still running (I can see the resources being used), although it appears stalled because it no longer writes anything new.</div>
|
open
|
2024-09-25T14:05:56Z
|
2024-09-26T16:01:10Z
|
https://github.com/MilesCranmer/PySR/issues/725
|
[
"bug"
] |
MilesCranmer
| 2 |
pandas-dev/pandas
|
data-science
| 60,301 |
API: return value of `.values` for Series with the future string dtype (numpy array vs extension array)
|
Historically, the `.values` attribute returned a numpy array (except for categoricals). When we added more ExtensionArrays, for certain dtypes (e.g. tz-aware timestamps, or periods, ..) the EA could more faithfully represent the underlying values instead of the lossy conversion to numpy (e.g for tz-aware timestamps we decided to return a numpy object dtype array instead of "datetime64[ns]" to not lose the timezone information). At that point, instead of "breaking" the behaviour of `.values`, we decided to add an `.array` attribute that then always returns the EA.
But for generic ExtensionArrays (external, or non-default EAs like the masked ones or the Arrow ones), the `.values` has always already directly returned the EA as well. So in those cases, there is no difference between `.values` and `.array`.
Now to the point: with the new default `StringDtype`, the current behaviour is indeed to also always return the EA for both `.values` and `.array`.
This means this is one of the breaking changes for users when upgrading to pandas 3.0, that for a column which is inferred as string data, the `.values` no longer returns a numpy array.
**Are we OK with this breaking change now?**
Or, we could also decide to keep `.values` return the numpy array with `.array` returning the EA.
Of course, when we would move to use EAs for all dtypes (which is being considered in the logical dtypes and missing values PDEP discussions), then we would have this breaking change as well (or at least need to make a decision about it). But, that could also be a reason to not yet do it for the string dtype now, if we would change it for all dtypes later.
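For readers skimming, the distinction at stake can be sketched directly (behavior as of pandas 1.x/2.x, using the nullable `Int64` dtype as a stand-in for any masked EA):

```python
import numpy as np
import pandas as pd

# Default numpy-backed dtype: .values is the numpy array, .array is the EA wrapper.
s_np = pd.Series([1, 2, 3])
assert isinstance(s_np.values, np.ndarray)
assert isinstance(s_np.array, pd.api.extensions.ExtensionArray)

# Nullable (masked) dtype: .values already returns the ExtensionArray itself,
# which is the behavior the new default string dtype would also have.
s_ea = pd.Series([1, 2, None], dtype="Int64")
assert not isinstance(s_ea.values, np.ndarray)
assert isinstance(s_ea.values, pd.api.extensions.ExtensionArray)
```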
cc @pandas-dev/pandas-core
|
open
|
2024-11-13T14:36:21Z
|
2024-11-14T00:14:04Z
|
https://github.com/pandas-dev/pandas/issues/60301
|
[
"API Design",
"Strings"
] |
jorisvandenbossche
| 10 |
Kanaries/pygwalker
|
pandas
| 519 |
How to switch the language pack to Chinese
|
How to switch the language pack to Chinese
|
closed
|
2024-04-12T06:10:00Z
|
2024-04-13T01:46:27Z
|
https://github.com/Kanaries/pygwalker/issues/519
|
[
"good first issue"
] |
zxdmrg
| 2 |
cvat-ai/cvat
|
tensorflow
| 8,674 |
Interaction error when working with SAM-2
|
### Actions before raising this issue
- [X] I searched the existing issues and did not find anything similar.
- [X] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Steps to Reproduce
1. Set-up CVAT with serverless functions.
2. Host SAM-2 model.
### Expected Behavior
_No response_
### Possible Solution
_No response_
### Context
When using the SAM-2 model, the interface indicates it's waiting for SAM processing but immediately gives an error:
Interaction error occured
Error: Request failed with status code 503. "HTTPConnectionPool(host='host.docker.internal', port=34361): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fb7905337c0>: Failed to establish a new connection: [Errno 111] Connection refused'))".

The logs from the SAM2 container look like:

The logs from cvat_server show the error:
2024-11-11 07:02:14,648 DEBG 'runserver' stderr output:
[Mon Nov 11 07:02:14.648184 2024] [wsgi:error] [pid 141:tid 140427000612608] [remote 172.18.0.3:37396] [2024-11-11 07:02:14,648] ERROR django.request: Service Unavailable: /api/lambda/functions/pth-facebookresearch-sam2-vit-h
2024-11-11 07:02:14,648 DEBG 'runserver' stderr output:
[Mon Nov 11 07:02:14.648325 2024] [wsgi:error] [pid 141:tid 140427000612608] [remote 172.18.0.3:37396] ERROR:django.request:Service Unavailable: /api/lambda/functions/pth-facebookresearch-sam2-vit-h

### Environment
```Markdown
- Operating System and version (e.g. Linux, Windows, MacOS) --> Ubuntu 20.04.6
- Are you using Docker Swarm or Kubernetes? --> Docker
```
|
closed
|
2024-11-11T08:44:33Z
|
2024-11-11T11:19:33Z
|
https://github.com/cvat-ai/cvat/issues/8674
|
[
"bug"
] |
amrithkrish
| 1 |
taverntesting/tavern
|
pytest
| 946 |
Python function not callable from tavern script for saving
|
Hi, I am calling a save function after getting a response from my API. In the received response I need to format a string and save only a few of the elements present:
```
response:
status_code: 200
save:
headers:
res_key:
$ext:
function: testing_utils:extract_sessid
extra_kwargs:
head: headers
```
However, my tavern yaml is unable to call the extract_string method in the testing_utils file.
Other functions written in testing_utils work fine with the following syntax:
```
verify_response_with:
- function: testing_utils:check_jsonpath_value
```
Please help. Basically, with the approach shown above, the testing_utils file is not accessible (inside the save function), but in the same tavern script the existing test cases are able to access it (with verify_response_with).
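One possible cause, if memory of the tavern docs serves (worth double-checking): `$ext` for saving goes directly under `save:`, not nested under `headers:`, and the external function returns a dict of values to save. Something like the following sketch:

```yaml
response:
  status_code: 200
  save:
    $ext:
      function: testing_utils:extract_sessid
      extra_kwargs:
        head: headers
```

where `extract_sessid(response, head)` would return e.g. `{"res_key": <formatted value>}`, making `{res_key}` available to later stages. With the nested form in the snippet above, tavern may be treating `$ext` as a literal key path rather than a function call. This is a guess at the cause, not a confirmed fix.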
|
open
|
2024-11-13T05:47:48Z
|
2025-03-08T14:41:49Z
|
https://github.com/taverntesting/tavern/issues/946
|
[] |
ShreyanshAyanger-Nykaa
| 1 |
widgetti/solara
|
jupyter
| 145 |
TypeError: set_parent() takes 3 positional arguments but 4 were given
|
When trying the First script example from the Quickstart in the docs, it works correctly when executed in a Jupyter notebook, but it fails when run as a script directly via the solara executable.
When doing:
**solara run .\first_script.py**
the server starts but then it keeps logging the following error:
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "c:\users\jicas\anaconda3\envs\ml\lib\site-packages\uvicorn\protocols\websockets\websockets_impl.py", line 254, in run_asgi
result = await self.app(self.scope, self.asgi_receive, self.asgi_send)
File "c:\users\jicas\anaconda3\envs\ml\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 78, in __call__
return await self.app(scope, receive, send)
File "c:\users\jicas\anaconda3\envs\ml\lib\site-packages\starlette\applications.py", line 122, in __call__
await self.middleware_stack(scope, receive, send)
File "c:\users\jicas\anaconda3\envs\ml\lib\site-packages\starlette\middleware\errors.py", line 149, in __call__
await self.app(scope, receive, send)
File "c:\users\jicas\anaconda3\envs\ml\lib\site-packages\starlette\middleware\gzip.py", line 26, in __call__
await self.app(scope, receive, send)
File "c:\users\jicas\anaconda3\envs\ml\lib\site-packages\starlette\middleware\exceptions.py", line 79, in __call__
raise exc
File "c:\users\jicas\anaconda3\envs\ml\lib\site-packages\starlette\middleware\exceptions.py", line 68, in __call__
await self.app(scope, receive, sender)
File "c:\users\jicas\anaconda3\envs\ml\lib\site-packages\starlette\routing.py", line 718, in __call__
await route.handle(scope, receive, send)
File "c:\users\jicas\anaconda3\envs\ml\lib\site-packages\starlette\routing.py", line 341, in handle
await self.app(scope, receive, send)
File "c:\users\jicas\anaconda3\envs\ml\lib\site-packages\starlette\routing.py", line 82, in app
await func(session)
File "c:\users\jicas\anaconda3\envs\ml\lib\site-packages\solara\server\starlette.py", line 197, in kernel_connection
await thread_return
File "c:\users\jicas\anaconda3\envs\ml\lib\site-packages\anyio\to_thread.py", line 34, in run_sync
func, *args, cancellable=cancellable, limiter=limiter
File "c:\users\jicas\anaconda3\envs\ml\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "c:\users\jicas\anaconda3\envs\ml\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
result = context.run(func, *args)
File "c:\users\jicas\anaconda3\envs\ml\lib\site-packages\solara\server\starlette.py", line 190, in websocket_thread_runner
anyio.run(run)
File "c:\users\jicas\anaconda3\envs\ml\lib\site-packages\anyio\_core\_eventloop.py", line 68, in run
return asynclib.run(func, *args, **backend_options)
File "c:\users\jicas\anaconda3\envs\ml\lib\site-packages\anyio\_backends\_asyncio.py", line 204, in run
return native_run(wrapper(), debug=debug)
File "c:\users\jicas\anaconda3\envs\ml\lib\asyncio\runners.py", line 43, in run
return loop.run_until_complete(main)
File "c:\users\jicas\anaconda3\envs\ml\lib\asyncio\base_events.py", line 587, in run_until_complete
return future.result()
File "c:\users\jicas\anaconda3\envs\ml\lib\site-packages\anyio\_backends\_asyncio.py", line 199, in wrapper
return await func(*args)
File "c:\users\jicas\anaconda3\envs\ml\lib\site-packages\solara\server\starlette.py", line 182, in run
await server.app_loop(ws_wrapper, session_id, connection_id, user)
File "c:\users\jicas\anaconda3\envs\ml\lib\site-packages\solara\server\server.py", line 148, in app_loop
process_kernel_messages(kernel, msg)
File "c:\users\jicas\anaconda3\envs\ml\lib\site-packages\solara\server\server.py", line 179, in process_kernel_messages
kernel.set_parent(None, msg)
File "c:\users\jicas\anaconda3\envs\ml\lib\site-packages\solara\server\kernel.py", line 294, in set_parent
super().set_parent(ident, parent, channel)
TypeError: set_parent() takes 3 positional arguments but 4 were given
Is there anything I can do to avoid this error?
Thanks in advance.
|
closed
|
2023-06-06T10:05:14Z
|
2023-07-28T09:55:25Z
|
https://github.com/widgetti/solara/issues/145
|
[
"bug"
] |
jicastillow
| 5 |
healthchecks/healthchecks
|
django
| 1,004 |
Unexpected "down" after sending ping
|
I have a test check setup on healthchecks.io, configured with
Cron Expression | `* 9 * * *`
-- | --
Time Zone | America/Los_Angeles
Grace Time | 30 minutes
This triggers at 9:30AM local time (as expected), and I send a ping to put it back in the "up" state. ~30 minutes after the ping, the check goes back to "down".
Here's a screenshot of the details page, with the unexpected transitions highlighted:

Chronologically (with my comments):
```
May 21 | 09:30 | Status: up ➔ down. # expected, that's 30 minutes after the cron time.
May 21 | 09:34 | OK | HTTPS POST from x.x.x.x - python-requests/2.31.0 # manual ping
May 21 | 09:34 | Status: down ➔ up. # expected after ping
May 21 | 10:05 | Status: up ➔ down. # unexpected! suspiciously at "grace time" after the last ping.
May 21 | 10:21 | OK | HTTPS POST from x.x.x.x - python-requests/2.31.0 # manual ping to shut it up
May 21 | 10:21 | Status: down ➔ up. # expected after ping
```
```
May 22 | 09:30 | Status: up ➔ down. # expected, that's 30 minutes after the cron time.
May 22 | 09:41 | OK | HTTPS POST from x.x.x.x - Mozilla/5.0 ... # manual ping from UI
May 22 | 09:41 | Status: down ➔ up. # expected after ping
May 22 | 10:12 | Status: up ➔ down. # unexpected!
May 22 | 10:13 | OK | HTTPS POST from x.x.x.x - Mozilla/5.0 … # manual ping to shut it up
May 22 | 10:13 | Status: down ➔ up. # expected after ping
```
Is my expectation of how this should work incorrect? Could there be something funny going on due to the non-UTC timezone?
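Not the maintainer, but one reading that fits the timestamps exactly: `* 9 * * *` matches *every minute* of the 9 o'clock hour, not just once at 9:00. So after the 09:34 ping, the next expected run is 09:35, and the 30-minute grace expires around 10:05, which is precisely when the unexpected "down" happened. A minimal sketch of that schedule arithmetic:

```python
# "* 9 * * *" matches every minute whose hour is 9, so each ping inside the
# 9 o'clock hour only pushes the deadline one minute plus grace ahead.
from datetime import datetime, timedelta

def next_expected(after: datetime) -> datetime:
    """Next minute matching cron '* 9 * * *' strictly after `after`."""
    t = after.replace(second=0, microsecond=0) + timedelta(minutes=1)
    while t.hour != 9:
        t += timedelta(minutes=1)
    return t

ping = datetime(2024, 5, 21, 9, 34)
grace = timedelta(minutes=30)
deadline = next_expected(ping) + grace
print(deadline)  # 2024-05-21 10:05:00, matching the observed down transition
```

If a once-per-day 9:00 check is what's intended, `0 9 * * *` would avoid this behavior.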
|
closed
|
2024-05-22T20:16:07Z
|
2024-05-23T17:22:16Z
|
https://github.com/healthchecks/healthchecks/issues/1004
|
[] |
chriselion
| 2 |
coqui-ai/TTS
|
pytorch
| 3,996 |
[Bug] AttributeError: 'int' object has no attribute 'device'
|
### Describe the bug
example code gives error when saving.
### To Reproduce
```
import os
import time
import torch
import torchaudio
from TTS.tts.configs.xtts_config import XttsConfig
from TTS.tts.models.xtts import Xtts
print("Loading model...")
config = XttsConfig()
config.load_json("/path/to/xtts/config.json")
model = Xtts.init_from_config(config)
model.load_checkpoint(config, checkpoint_dir="/path/to/xtts/", use_deepspeed=True)
model.cuda()
print("Computing speaker latents...")
gpt_cond_latent, speaker_embedding = model.get_conditioning_latents(audio_path=["reference.wav"])
print("Inference...")
t0 = time.time()
chunks = model.inference_stream(
"It took me quite a long time to develop a voice and now that I have it I am not going to be silent.",
"en",
gpt_cond_latent,
speaker_embedding
)
wav_chuncks = []
for i, chunk in enumerate(chunks):
if i == 0:
print(f"Time to first chunck: {time.time() - t0}")
print(f"Received chunk {i} of audio length {chunk.shape[-1]}")
wav_chuncks.append(chunk)
wav = torch.cat(wav_chuncks, dim=0)
torchaudio.save("xtts_streaming.wav", wav.squeeze().unsqueeze(0).cpu(), 24000)
```
### Expected behavior
I expect it to save a wav file.
### Logs
Traceback (most recent call last):
```
if elements.device.type == "mps" and not is_torch_greater_or_equal_than_2_4:
AttributeError: 'int' object has no attribute 'device'
```
### Environment
```shell
{
"CUDA": {
"GPU": [
"NVIDIA GeForce RTX 3080",
"NVIDIA GeForce RTX 3080"
],
"available": true,
"version": "12.4"
},
"Packages": {
"PyTorch_debug": false,
"PyTorch_version": "2.4.1+cu124",
"TTS": "0.22.0",
"numpy": "1.22.0"
},
"System": {
"OS": "Linux",
"architecture": [
"64bit",
"ELF"
],
"processor": "x86_64",
"python": "3.10.14",
"version": "#1 SMP Thu Jan 11 04:09:03 UTC 2024"
}
}
```
### Additional context
AttributeError: 'int' object has no attribute 'device'
|
closed
|
2024-09-11T17:34:17Z
|
2025-01-04T12:21:07Z
|
https://github.com/coqui-ai/TTS/issues/3996
|
[
"bug",
"wontfix"
] |
CrackerHax
| 4 |
2noise/ChatTTS
|
python
| 567 |
decoder.yaml sha256 hash mismatch
|
After modifying the webui code:
parser = argparse.ArgumentParser(description="ChatTTS demo Launch")
parser.add_argument(
"--server_name", type=str, default="0.0.0.0", help="server name"
)
parser.add_argument("--server_port", type=int, default=8080, help="server port")
parser.add_argument("--root_path", type=str, default=None, help="root path")
parser.add_argument(
"--custom_path", type=str, default="D:\ChatTTS-Model", help="custom model path"
)
parser.add_argument(
"--coef", type=str, default=None, help="custom dvae coefficient"
)
args = parser.parse_args()
running it then fails with:
[+0800 20240713 10:03:04] [INFO] ChatTTS | core | try to load from local: D:\liu\ChatTTS-Model
[+0800 20240713 10:03:04] [INFO] ChatTTS | dl | checking assets...
[+0800 20240713 10:03:30] [INFO] ChatTTS | dl | checking configs...
[+0800 20240713 10:03:30] [WARN] ChatTTS | dl | D:\ChatTTS-Model\config\decoder.yaml sha256 hash mismatch.
[+0800 20240713 10:03:30] [INFO] ChatTTS | dl | expected: 0890ab719716b0ad8abcb9eba0a9bf52c59c2e45ddedbbbb5ed514ff87bff369
[+0800 20240713 10:03:30] [INFO] ChatTTS | dl | real val: 952d65eed43fa126e4ae257d4d7868163b0b1af23ccbe120288c3b28d091dae1
[+0800 20240713 10:03:30] [ERRO] ChatTTS | core | check models in custom path D:\ChatTTS-Model failed.
[+0800 20240713 10:03:30] [ERRO] WebUI | webui | Models load failed.
|
closed
|
2024-07-13T02:09:27Z
|
2024-07-15T05:00:19Z
|
https://github.com/2noise/ChatTTS/issues/567
|
[
"documentation",
"question"
] |
viviliuwqhduhnwqihwqwudceygysjiwuwnn
| 3 |
pydantic/pydantic-core
|
pydantic
| 1,476 |
Missing pre-build of the pydantic-core python package for musl lib on armv7.
|
It would be good to have a pre-built wheel of the pydantic-core Python package for musl libc on armv7.
https://github.com/pydantic/pydantic-core/blob/e3eff5cb8a6dae8914e3831b00c690d9dee4b740/.github/workflows/ci.yml#L430-L436
Related, docker build for [alpine linux on armv7](https://github.com/searxng/searxng/issues/3887#issuecomment-2394990168):
- https://github.com/searxng/searxng/issues/3887
|
closed
|
2024-10-07T10:42:39Z
|
2024-10-09T14:40:04Z
|
https://github.com/pydantic/pydantic-core/issues/1476
|
[] |
return42
| 0 |
iperov/DeepFaceLab
|
machine-learning
| 5,450 |
CPU use only efficiency core
|
Hello,
I recently upgraded my computer from an i5 9400f to an i9 12900k.
Before the upgrade (i5 9400f), DeepFaceLab used my CPU at around 100%; after upgrading to the i9, DeepFaceLab uses only the efficiency cores and not the performance cores.

I tried updating DeepFaceLab, but the issue occurs again.
Window 10 Pro
|
open
|
2021-12-28T17:23:19Z
|
2023-06-09T07:44:05Z
|
https://github.com/iperov/DeepFaceLab/issues/5450
|
[] |
VASAPOL
| 3 |
pydantic/FastUI
|
pydantic
| 21 |
`fastui-bootstrap` allow more customisation
|
`fastui-bootstrap` should accept functions matching `CustomRender` and `ClassNameGenerator` and pass them through to the corresponding internals, so you can use `fastui-bootstrap` while still overriding some components.
|
open
|
2023-12-01T17:59:51Z
|
2023-12-01T18:56:37Z
|
https://github.com/pydantic/FastUI/issues/21
|
[
"enhancement"
] |
samuelcolvin
| 0 |
pallets-eco/flask-sqlalchemy
|
flask
| 386 |
Is it possible to use classic mapping?
|
SQLAlchemy allows the user to use classical mapping - http://docs.sqlalchemy.org/en/rel_1_0/orm/mapping_styles.html#classical-mappings
But how can I use classic mapping when using flask-sqlalchemy?
|
closed
|
2016-03-28T02:44:56Z
|
2020-12-05T21:31:04Z
|
https://github.com/pallets-eco/flask-sqlalchemy/issues/386
|
[] |
johnnncodes
| 1 |
twopirllc/pandas-ta
|
pandas
| 612 |
Range Filter 5min indicator request
|
In my experience it's a great indicator for scalp traders. I tried to convert it to Python, but my values are wrong.
```
//@version=4
//Original Script > @DonovanWall
// Actual Version > @guikroth
//////////////////////////////////////////////////////////////////////////
// Settings for 5min chart, BTCUSDC. For Other coin, change the paremeters
//////////////////////////////////////////////////////////////////////////
study(title="Range Filter 5min", overlay=true)
// Source
src = input(defval=close, title="Source")
// Sampling Period
// Settings for 5min chart, BTCUSDC. For Other coin, change the paremeters
per = input(defval=100, minval=1, title="Sampling Period")
// Range Multiplier
mult = input(defval=3.0, minval=0.1, title="Range Multiplier")
// Smooth Average Range
smoothrng(x, t, m) =>
wper = t * 2 - 1
avrng = ema(abs(x - x[1]), t)
smoothrng = ema(avrng, wper) * m
smoothrng
smrng = smoothrng(src, per, mult)
// Range Filter
rngfilt(x, r) =>
rngfilt = x
rngfilt := x > nz(rngfilt[1]) ? x - r < nz(rngfilt[1]) ? nz(rngfilt[1]) : x - r :
x + r > nz(rngfilt[1]) ? nz(rngfilt[1]) : x + r
rngfilt
filt = rngfilt(src, smrng)
// Filter Direction
upward = 0.0
upward := filt > filt[1] ? nz(upward[1]) + 1 : filt < filt[1] ? 0 : nz(upward[1])
downward = 0.0
downward := filt < filt[1] ? nz(downward[1]) + 1 : filt > filt[1] ? 0 : nz(downward[1])
// Target Bands
hband = filt + smrng
lband = filt - smrng
// Colors
filtcolor = upward > 0 ? color.lime : downward > 0 ? color.red : color.orange
barcolor = src > filt and src > src[1] and upward > 0 ? color.lime :
src > filt and src < src[1] and upward > 0 ? color.green :
src < filt and src < src[1] and downward > 0 ? color.red :
src < filt and src > src[1] and downward > 0 ? color.maroon : color.orange
filtplot = plot(filt, color=filtcolor, linewidth=3, title="Range Filter")
// Target
hbandplot = plot(hband, color=color.aqua, transp=100, title="High Target")
lbandplot = plot(lband, color=color.fuchsia, transp=100, title="Low Target")
// Fills
fill(hbandplot, filtplot, color=color.aqua, title="High Target Range")
fill(lbandplot, filtplot, color=color.fuchsia, title="Low Target Range")
// Bar Color
barcolor(barcolor)
// Break Outs
longCond = bool(na)
shortCond = bool(na)
longCond := src > filt and src > src[1] and upward > 0 or
src > filt and src < src[1] and upward > 0
shortCond := src < filt and src < src[1] and downward > 0 or
src < filt and src > src[1] and downward > 0
CondIni = 0
CondIni := longCond ? 1 : shortCond ? -1 : CondIni[1]
longCondition = longCond and CondIni[1] == -1
shortCondition = shortCond and CondIni[1] == 1
//Alerts
plotshape(longCondition, title="Buy Signal", text="BUY", textcolor=color.white, style=shape.labelup, size=size.normal, location=location.belowbar, color=color.green, transp=0)
plotshape(shortCondition, title="Sell Signal", text="SELL", textcolor=color.white, style=shape.labeldown, size=size.normal, location=location.abovebar, color=color.red, transp=0)
alertcondition(longCondition, title="Buy Alert", message="BUY")
alertcondition(shortCondition, title="Sell Alert", message="SELL")
//For use like Strategy,
//1. Change the word "study" for "strategy" at the top
//2. Remove the "//" below
//strategy.entry( id = "Long", long = true, when = longCondition )
//strategy.close( id = "Long", when = shortCondition )
```
Can you translate this to Python, or we can fix the conversion starting from my code?
My code below :
```python
src = dfLB["close"]
per = 100
mult = 3
def smoothrng(x, t, m) :
wper = t * 2 - 1
avrng = ta.ema((np.absolute(x - x.shift())), t)
smoothrng = ta.ema(avrng, wper) * m
return smoothrng
smrng = smoothrng(src, 100, 3)
def rngfilt(x, r):
rngfilt = x
rngfilt = np.where(x > rngfilt.shift(),np.where((x-r) < rngfilt.shift(),rngfilt.shift(),x-r),np.where((x+r) > rngfilt.shift(),rngfilt.shift(),x+r))
return rngfilt
dfLB["filt"] = rngfilt(src, smrng)
dfLB["upward"] = 0.0
dfLB["upward"] = np.where((dfLB["filt"] > dfLB["filt"].shift()), dfLB["upward"].shift() + 1,np.where(dfLB["filt"] < dfLB["filt"].shift(), 0, dfLB["upward"].shift()))
dfLB["downward"] = 0.0
dfLB["downward"] = np.where((dfLB["filt"] < dfLB["filt"].shift()), dfLB["downward"].shift() + 1,np.where(dfLB["filt"] > dfLB["filt"].shift(), 0, dfLB["downward"].shift()))
hband = dfLB["filt"] + smrng
lband = dfLB["filt"] - smrng
longCond = np.where((((src > dfLB["filt"]) & (src > src.shift()) & (dfLB["upward"] > 0)) | ((src > dfLB["filt"]) & (src < src.shift()) & (dfLB["upward"] > 0))),1,0)
shortCond = np.where((((src < dfLB["filt"]) & (src < src.shift()) & (dfLB["downward"] > 0)) | ((src < dfLB["filt"]) & (src > src.shift()) & (dfLB["downward"] > 0))),1,0)
dfLB["CondIni"] = 0
dfLB["CondIni"] = np.where((longCond == 1), 1 , np.where((shortCond==1), -1 , dfLB["CondIni"].shift()))
longCondition = np.where(((longCond==1) & (dfLB["CondIni"].shift() == -1)),1,0)
shortCondition = np.where(((shortCond==1) & (dfLB["CondIni"].shift()== 1)),1,0)
```
you can check hband and lband values in tradingview ( https://tr.tradingview.com/chart/mLWdxhy9/?symbol=BITSTAMP%3AXRPUSD)
hband = blue values
lband = purple values
If you can translate this code to python I would be really grateful. Thank you.
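Not a full translation, but the likely source of the wrong values: Pine's `rngfilt` is recursive (`rngfilt[1]` refers to the previous *filtered output*), while the numpy version above uses `x.shift()`, i.e. the previous *input*. A hedged sketch of a recursive port, using plain pandas `ewm` as the EMA (pandas-ta's `ta.ema` should behave similarly; NaN handling during warm-up differs slightly from Pine's `nz()`):

```python
import numpy as np
import pandas as pd

def smoothrng(src: pd.Series, per: int, mult: float) -> pd.Series:
    """Pine smoothrng(): EMA of abs diff over per, EMA again over 2*per-1, times mult."""
    wper = per * 2 - 1
    avrng = src.diff().abs().ewm(span=per, adjust=False).mean()
    return avrng.ewm(span=wper, adjust=False).mean() * mult

def rngfilt(src: pd.Series, rng: pd.Series) -> pd.Series:
    """Recursive Pine rngfilt(): each step compares against the previous *output*."""
    x = src.to_numpy(dtype=float)
    r = rng.to_numpy(dtype=float)
    out = x.copy()
    for i in range(1, len(out)):
        prev = out[i - 1]
        if np.isnan(r[i]) or np.isnan(prev):
            continue  # keep out[i] = x[i] until the smoothed range is defined
        if x[i] > prev:
            out[i] = prev if x[i] - r[i] < prev else x[i] - r[i]
        else:
            out[i] = prev if x[i] + r[i] > prev else x[i] + r[i]
    return pd.Series(out, index=src.index)
```

The `upward`/`downward` counters and `CondIni` need the same sequential treatment, since they also reference their own previous values; `hband`/`lband` then follow as `filt + smrng` and `filt - smrng`.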
|
open
|
2022-10-24T13:06:09Z
|
2023-09-02T15:19:05Z
|
https://github.com/twopirllc/pandas-ta/issues/612
|
[
"enhancement",
"help wanted",
"good first issue"
] |
kaanguven
| 3 |
suitenumerique/docs
|
django
| 203 |
🧑💻 Add nginx conf for upload in dev mode
|
## Feature Request
Add an nginx conf for upload in the local dev mode that works with docker-compose.
We have two ways to develop locally: with `Tilt` (k8s stack) and with `docker-compose` (docker-compose stack). The image upload process works with Tilt but not with the docker-compose stack.
## Code
On Tilt dev:
https://github.com/numerique-gouv/impress/blob/67a20f249e33ffbea326f2be825e085847c34331/src/helm/env.d/dev/values.impress.yaml.gotmpl#L107-L119
Adapt this file to use the same conf:
https://github.com/numerique-gouv/impress/blob/main/docker/files/etc/nginx/conf.d/default.conf
----
See: #118
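A rough sketch of the kind of block the docker-compose nginx conf would need, mirroring the auth-proxy pattern in the Tilt values file (the service names `minio` and `backend`, ports, and the auth endpoint below are assumptions for illustration, not copied from the repo):

```nginx
# Hypothetical media-upload proxy for the docker-compose stack.
location /media/ {
    # Delegate authorization to the Django backend before serving the file.
    auth_request /media-auth;
    # Proxy to the MinIO service defined in docker-compose.yml.
    proxy_pass http://minio:9000;
}

location /media-auth {
    internal;
    proxy_pass http://backend:8000/api/v1.0/documents/media-auth/;
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    proxy_set_header X-Original-URL $request_uri;
}
```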
|
closed
|
2024-08-29T09:54:20Z
|
2024-08-29T16:31:27Z
|
https://github.com/suitenumerique/docs/issues/203
|
[
"enhancement",
"docker"
] |
AntoLC
| 0 |
ray-project/ray
|
data-science
| 51,056 |
CI test darwin://python/ray/tests:test_placement_group_3 is consistently_failing
|
CI test **darwin://python/ray/tests:test_placement_group_3** is consistently_failing. Recent failures:
- https://buildkite.com/ray-project/postmerge-macos/builds/4657#01955f62-ed51-458c-8bfb-a4a96b5b7134
- https://buildkite.com/ray-project/postmerge-macos/builds/4657#01955dd4-ae7a-4bd0-ab9d-14abaf0cdd17
DataCaseName-darwin://python/ray/tests:test_placement_group_3-END
Managed by OSS Test Policy
|
closed
|
2025-03-04T06:17:38Z
|
2025-03-04T13:06:58Z
|
https://github.com/ray-project/ray/issues/51056
|
[
"bug",
"triage",
"core",
"flaky-tracker",
"ray-test-bot",
"ci-test",
"weekly-release-blocker",
"stability"
] |
can-anyscale
| 2 |
deezer/spleeter
|
deep-learning
| 660 |
Many errors
|
Hi,
I am on an iMac running OS X El Capitan.
When I run the following in the terminal:
```
(base) iMac-de-mar:~ mar$ conda activate myenv
(myenv) iMac-de-mar:~ mar$ cd /Applications/SpleeterGui
(myenv) iMac-de-mar:SpleeterGui mar$ spleeter separate -i spleeter/cancion.mp3 -p spleeter:2stems -o output
```
I get the errors below. Please, how can I solve them?
```
Traceback (most recent call last):
  File "/Users/mar/opt/anaconda3/envs/myenv/bin/spleeter", line 11, in <module>
    sys.exit(entrypoint())
  File "/Users/mar/opt/anaconda3/envs/myenv/lib/python3.7/site-packages/spleeter/__main__.py", line 54, in entrypoint
    main(sys.argv)
  File "/Users/mar/opt/anaconda3/envs/myenv/lib/python3.7/site-packages/spleeter/__main__.py", line 46, in main
    entrypoint(arguments, params)
  File "/Users/mar/opt/anaconda3/envs/myenv/lib/python3.7/site-packages/spleeter/commands/separate.py", line 45, in entrypoint
    synchronous=False
  File "/Users/mar/opt/anaconda3/envs/myenv/lib/python3.7/site-packages/spleeter/separator.py", line 228, in separate_to_file
    sources = self.separate(waveform, audio_descriptor)
  File "/Users/mar/opt/anaconda3/envs/myenv/lib/python3.7/site-packages/spleeter/separator.py", line 195, in separate
    return self._separate_librosa(waveform, audio_descriptor)
  File "/Users/mar/opt/anaconda3/envs/myenv/lib/python3.7/site-packages/spleeter/separator.py", line 173, in _separate_librosa
    outputs = self._get_builder().outputs
  File "/Users/mar/opt/anaconda3/envs/myenv/lib/python3.7/site-packages/spleeter/model/__init__.py", line 301, in outputs
    self._build_outputs()
  File "/Users/mar/opt/anaconda3/envs/myenv/lib/python3.7/site-packages/spleeter/model/__init__.py", line 476, in _build_outputs
    self._outputs = self.masked_stfts
  File "/Users/mar/opt/anaconda3/envs/myenv/lib/python3.7/site-packages/spleeter/model/__init__.py", line 325, in masked_stfts
    self._build_masked_stfts()
  File "/Users/mar/opt/anaconda3/envs/myenv/lib/python3.7/site-packages/spleeter/model/__init__.py", line 440, in _build_masked_stfts
    for instrument, mask in self.masks.items():
  File "/Users/mar/opt/anaconda3/envs/myenv/lib/python3.7/site-packages/spleeter/model/__init__.py", line 319, in masks
    self._build_masks()
  File "/Users/mar/opt/anaconda3/envs/myenv/lib/python3.7/site-packages/spleeter/model/__init__.py", line 423, in _build_masks
    instrument_mask = self._extend_mask(instrument_mask)
  File "/Users/mar/opt/anaconda3/envs/myenv/lib/python3.7/site-packages/spleeter/model/__init__.py", line 397, in _extend_mask
    mask_shape[-1]))
  File "/Users/mar/opt/anaconda3/envs/myenv/lib/python3.7/site-packages/tensorflow_core/python/ops/array_ops.py", line 2338, in zeros
    output = _constant_if_small(zero, shape, dtype, name)
  File "/Users/mar/opt/anaconda3/envs/myenv/lib/python3.7/site-packages/tensorflow_core/python/ops/array_ops.py", line 2295, in _constant_if_small
    if np.prod(shape) < 1000:
  File "<__array_function__ internals>", line 6, in prod
  File "/Users/mar/opt/anaconda3/envs/myenv/lib/python3.7/site-packages/numpy/core/fromnumeric.py", line 3052, in prod
    keepdims=keepdims, initial=initial, where=where)
  File "/Users/mar/opt/anaconda3/envs/myenv/lib/python3.7/site-packages/numpy/core/fromnumeric.py", line 86, in _wrapreduction
    return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
  File "/Users/mar/opt/anaconda3/envs/myenv/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 736, in __array__
    " array.".format(self.name))
NotImplementedError: Cannot convert a symbolic Tensor (strided_slice_4:0) to a numpy array.
(myenv
```
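The final `NotImplementedError` matches a known incompatibility between NumPy >= 1.20 and the TensorFlow 1.x builds that older Spleeter versions use (`np.prod` on a symbolic tensor shape). A possible workaround (an assumption about this setup, not a confirmed fix) is pinning NumPy below 1.20 inside the active env with `pip install "numpy<1.20"`. A small helper to check whether the installed NumPy falls in the broken range:

```python
import numpy as np

def numpy_tf1_compatible(version: str = np.__version__) -> bool:
    """NumPy >= 1.20 changed array-function dispatch in a way that breaks
    np.prod on symbolic TF 1.x tensors; earlier versions work."""
    major, minor = (int(p) for p in version.split(".")[:2])
    return (major, minor) < (1, 20)
```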
|
open
|
2021-09-10T18:46:17Z
|
2021-09-10T18:46:17Z
|
https://github.com/deezer/spleeter/issues/660
|
[
"bug",
"invalid"
] |
lunatico67
| 0 |
deeppavlov/DeepPavlov
|
tensorflow
| 1,430 |
Multi class emotion classification for text in russian
|
How can I use the BERT Classifier for multi-class text classification? I have my own dataset and need to train the model on it.
Example input:
Я сегодня чувствую себя не очень хорошо ("I don't feel very well today")
Output:
Sadness
There should be 5 or 6 classes.
I know there is rusentiment_bert.json. As I understand it, this is pretrained and only has the classes positive / neutral / negative / speech / skip, whereas I need emotion classes (joy, sadness, etc.).
So it seems I need to modify the rusentiment_bert.json config? If so, what exactly should I change to set up this model?
Please help with guidance on how the whole process works.
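A minimal sketch of the kind of change involved (the key names follow the general DeepPavlov classification-config layout; the exact structure of rusentiment_bert.json may differ between versions, so treat the paths and file names here as assumptions):

```json
{
  "dataset_reader": {
    "class_name": "basic_classification_reader",
    "x": "text",
    "y": "label",
    "data_path": "~/my_emotion_dataset/"
  }
}
```

Because the class vocabulary is typically fitted on the training labels, pointing `data_path` at train/valid CSV files whose `label` column contains your 5-6 emotions and retraining (`python -m deeppavlov train my_config.json`) should produce a classifier over those classes.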
|
closed
|
2021-04-14T22:16:44Z
|
2021-04-19T13:17:03Z
|
https://github.com/deeppavlov/DeepPavlov/issues/1430
|
[
"enhancement"
] |
MuhammedTech
| 1 |
vaexio/vaex
|
data-science
| 2,022 |
[BUG-REPORT] Group By memory Issue
|
Hello,
I have a project running on vaex v4.0.0, wrapped with Flask to expose APIs on top of it. I was hoping to get some help related to memory.
I face memory leak issues while using groupby; here's an example:
```python
df.groupby(['rooms_count'], agg={vx.agg.mean('price_per_meter'), vx.agg.min('price_per_meter'), vx.agg.max('price_per_meter'), vx.agg.count('price_per_meter')})
```
My issue is not with the amount of memory being used, but that after the API call is executed, the memory is not released back to the OS. Scale that to multiple API requests and soon I am out of memory on the server. I have tried using garbage collection, but the memory still isn't released back to the OS.
I was asked to help replicate the issue. You can find the code and steps to replicate here:
[Link to the repo](https://github.com/MHK107/vaex-groupby-memory-issue/tree/main)
Please let me know if I can help in any way to replicate and resolve this.
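Garbage collection alone often isn't enough here: with glibc's malloc, memory freed by the process can stay cached in allocator arenas instead of being returned to the OS, which looks like a leak in RSS. A sketch of explicitly asking the allocator to release it after each request (this is a general CPython/glibc workaround, not a documented vaex API; `malloc_trim` only exists on glibc, so it degrades to a no-op elsewhere):

```python
import ctypes
import gc

def release_memory_to_os() -> bool:
    """Run the garbage collector, then ask glibc to return freed
    arenas to the OS. Returns False on non-glibc platforms."""
    gc.collect()
    try:
        libc = ctypes.CDLL("libc.so.6")  # glibc only (Linux)
        libc.malloc_trim(0)
        return True
    except OSError:
        return False
```

In a Flask app this could be called in an `after_request` hook; if RSS still grows, that points at a genuine reference leak rather than allocator caching.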
|
open
|
2022-04-18T07:25:11Z
|
2022-05-16T06:52:04Z
|
https://github.com/vaexio/vaex/issues/2022
|
[] |
MHK107
| 4 |
litestar-org/polyfactory
|
pydantic
| 534 |
Bug(CI): Updated lockfile changes type checking, causing CI failures
|
### Description
https://github.com/litestar-org/polyfactory/actions/runs/8928572773/job/24524431663
```
mypy.....................................................................Failed
- hook id: mypy
- exit code: 1
polyfactory/value_generators/constrained_dates.py:41: error: Redundant cast to "date" [redundant-cast]
polyfactory/factories/base.py:508: error: Argument 1 to "UUID" has incompatible type "bytes | str | UUID"; expected "str | None" [arg-type]
tests/test_random_configuration.py:68: error: Redundant cast to "int" [redundant-cast]
polyfactory/factories/pydantic_factory.py:546: error: Incompatible return value type (got "dict[Any, object]", expected "dict[Any, Callable[[], Any]]") [return-value]
tests/test_recursive_models.py:56: error: Non-overlapping identity check (left operand type: "PydanticNode", right operand type: "type[_Sentinel]") [comparison-overlap]
docs/examples/decorators/test_example_1.py:19: error: Returning Any from function declared to return "datetime" [no-any-return]
docs/examples/decorators/test_example_1.py:19: error: Redundant cast to "timedelta" [redundant-cast]
polyfactory/factories/beanie_odm_factory.py:32: error: Unused "type: ignore" comment [unused-ignore]
Found 8 errors in 7 files (checked 129 source files)
```
### URL to code causing the issue
_No response_
### MCVE
_No response_
### Steps to reproduce
_No response_
### Screenshots
_No response_
### Logs
_No response_
### Release Version
CI
### Platform
- [ ] Linux
- [ ] Mac
- [ ] Windows
- [X] Other (Please specify in the description above)
|
closed
|
2024-05-02T18:29:18Z
|
2025-03-20T15:53:16Z
|
https://github.com/litestar-org/polyfactory/issues/534
|
[
"bug",
"ci"
] |
JacobCoffee
| 0 |
PaddlePaddle/PaddleHub
|
nlp
| 1,817 |
How can the high-level `predict` function output the probabilities for all classes?
|
Also, can it output the raw values before the softmax layer, i.e. non-probability logits?
|
open
|
2022-03-23T11:22:35Z
|
2022-03-29T12:25:51Z
|
https://github.com/PaddlePaddle/PaddleHub/issues/1817
|
[] |
tangkai521
| 4 |