repo_name (string, length 9-75) | topic (string, 30 classes) | issue_number (int64, 1-203k) | title (string, length 1-976) | body (string, length 0-254k) | state (string, 2 classes) | created_at (string, length 20) | updated_at (string, length 20) | url (string, length 38-105) | labels (sequence, length 0-9) | user_login (string, length 1-39) | comments_count (int64, 0-452) |
---|---|---|---|---|---|---|---|---|---|---|---|
feder-cr/Jobs_Applier_AI_Agent_AIHawk | automation | 590 | [Enhancement]: answer.json should be nested having job application URLs as keys | ### Feature summary
answer.json should be nested, having job application URLs as keys
### Feature description
answer.json should be nested, having job application URLs as keys
### Motivation
Currently you know the question & answer, but you don't know which job application it was for
### Alternatives considered
_No response_
### Additional context
_No response_ | closed | 2024-10-24T03:43:14Z | 2025-01-22T22:54:04Z | https://github.com/feder-cr/Jobs_Applier_AI_Agent_AIHawk/issues/590 | [
"enhancement"
] | surapuramakhil | 0 |
graphistry/pygraphistry | jupyter | 202 | [BUG] as_files consistency checking | When using `as_files=True`:
* should compare settings
* should check for use-after-delete | open | 2021-01-27T01:07:27Z | 2021-07-16T19:47:53Z | https://github.com/graphistry/pygraphistry/issues/202 | [
"bug"
] | lmeyerov | 1 |
anselal/antminer-monitor | dash | 183 | new page mobile | It is possible to edit home.html to new page like mobile.html witch show only total hashrate and inactive and active miners count ?
Best Regards | closed | 2021-08-06T13:33:28Z | 2021-08-06T13:57:15Z | https://github.com/anselal/antminer-monitor/issues/183 | [
":question: question"
] | HMCmin | 3 |
sinaptik-ai/pandas-ai | pandas | 1,315 | How to connect pandasai to the internet to improve answers | | closed | 2024-08-07T12:38:35Z | 2024-11-13T16:06:40Z | https://github.com/sinaptik-ai/pandas-ai/issues/1315 | [] | junle-chen | 1 |
Anjok07/ultimatevocalremovergui | pytorch | 1,148 | karaoke 6 not working | When I use the "6_HP_Karaoke-UVR" model with GPU conversion, I get this error: "An error occurred: runtime error; if the error persists, please contact the developers with the error details..."
The error log says:
Last Error Received:
Process: VR Architecture
If this error persists, please contact the developers with the error details.
Raw Error Details:
RuntimeError: "Error(s) in loading state_dict for CascadedNet:
Missing key(s) in state_dict: "stg1_low_band_net.0.enc1.conv.0.weight", "stg1_low_band_net.0.enc1.conv.1.weight", "stg1_low_band_net.0.enc1.conv.1.bias", "stg1_low_band_net.0.enc1.conv.1.running_mean", "stg1_low_band_net.0.enc1.conv.1.running_var", "stg1_low_band_net.0.enc2.conv1.conv.0.weight", "stg1_low_band_net.0.enc2.conv1.conv.1.weight", "stg1_low_band_net.0.enc2.conv1.conv.1.bias", "stg1_low_band_net.0.enc2.conv1.conv.1.running_mean", "stg1_low_band_net.0.enc2.conv1.conv.1.running_var", "stg1_low_band_net.0.enc2.conv2.conv.0.weight", "stg1_low_band_net.0.enc2.conv2.conv.1.weight", "stg1_low_band_net.0.enc2.conv2.conv.1.bias", "stg1_low_band_net.0.enc2.conv2.conv.1.running_mean", "stg1_low_band_net.0.enc2.conv2.conv.1.running_var", "stg1_low_band_net.0.enc3.conv1.conv.0.weight", "stg1_low_band_net.0.enc3.conv1.conv.1.weight", "stg1_low_band_net.0.enc3.conv1.conv.1.bias", "stg1_low_band_net.0.enc3.conv1.conv.1.running_mean", "stg1_low_band_net.0.enc3.conv1.conv.1.running_var", "stg1_low_band_net.0.enc3.conv2.conv.0.weight", "stg1_low_band_net.0.enc3.conv2.conv.1.weight", "stg1_low_band_net.0.enc3.conv2.conv.1.bias", "stg1_low_band_net.0.enc3.conv2.conv.1.running_mean", "stg1_low_band_net.0.enc3.conv2.conv.1.running_var", "stg1_low_band_net.0.enc4.conv1.conv.0.weight", "stg1_low_band_net.0.enc4.conv1.conv.1.weight", "stg1_low_band_net.0.enc4.conv1.conv.1.bias", "stg1_low_band_net.0.enc4.conv1.conv.1.running_mean", "stg1_low_band_net.0.enc4.conv1.conv.1.running_var", "stg1_low_band_net.0.enc4.conv2.conv.0.weight", "stg1_low_band_net.0.enc4.conv2.conv.1.weight", "stg1_low_band_net.0.enc4.conv2.conv.1.bias", "stg1_low_band_net.0.enc4.conv2.conv.1.running_mean", "stg1_low_band_net.0.enc4.conv2.conv.1.running_var", "stg1_low_band_net.0.enc5.conv1.conv.0.weight", "stg1_low_band_net.0.enc5.conv1.conv.1.weight", "stg1_low_band_net.0.enc5.conv1.conv.1.bias", "stg1_low_band_net.0.enc5.conv1.conv.1.running_mean", "stg1_low_band_net.0.enc5.conv1.conv.1.running_var", "stg1_low_band_net.0.enc5.conv2.conv.0.weight", "stg1_low_band_net.0.enc5.conv2.conv.1.weight", "stg1_low_band_net.0.enc5.conv2.conv.1.bias", "stg1_low_band_net.0.enc5.conv2.conv.1.running_mean", "stg1_low_band_net.0.enc5.conv2.conv.1.running_var", "stg1_low_band_net.0.aspp.conv1.1.conv.0.weight", "stg1_low_band_net.0.aspp.conv1.1.conv.1.weight", "stg1_low_band_net.0.aspp.conv1.1.conv.1.bias", "stg1_low_band_net.0.aspp.conv1.1.conv.1.running_mean", "stg1_low_band_net.0.aspp.conv1.1.conv.1.running_var", "stg1_low_band_net.0.aspp.conv2.conv.0.weight", "stg1_low_band_net.0.aspp.conv2.conv.1.weight", "stg1_low_band_net.0.aspp.conv2.conv.1.bias", "stg1_low_band_net.0.aspp.conv2.conv.1.running_mean", "stg1_low_band_net.0.aspp.conv2.conv.1.running_var", "stg1_low_band_net.0.aspp.conv3.conv.0.weight", "stg1_low_band_net.0.aspp.conv3.conv.1.weight", "stg1_low_band_net.0.aspp.conv3.conv.1.bias", "stg1_low_band_net.0.aspp.conv3.conv.1.running_mean", "stg1_low_band_net.0.aspp.conv3.conv.1.running_var", "stg1_low_band_net.0.aspp.conv4.conv.0.weight", "stg1_low_band_net.0.aspp.conv4.conv.1.weight", "stg1_low_band_net.0.aspp.conv4.conv.1.bias", "stg1_low_band_net.0.aspp.conv4.conv.1.running_mean", "stg1_low_band_net.0.aspp.conv4.conv.1.running_var", "stg1_low_band_net.0.aspp.conv5.conv.0.weight", "stg1_low_band_net.0.aspp.conv5.conv.1.weight", "stg1_low_band_net.0.aspp.conv5.conv.1.bias", "stg1_low_band_net.0.aspp.conv5.conv.1.running_mean", "stg1_low_band_net.0.aspp.conv5.conv.1.running_var", "stg1_low_band_net.0.aspp.bottleneck.conv.0.weight", 
"stg1_low_band_net.0.aspp.bottleneck.conv.1.weight", "stg1_low_band_net.0.aspp.bottleneck.conv.1.bias", "stg1_low_band_net.0.aspp.bottleneck.conv.1.running_mean", "stg1_low_band_net.0.aspp.bottleneck.conv.1.running_var", "stg1_low_band_net.0.dec4.conv1.conv.0.weight", "stg1_low_band_net.0.dec4.conv1.conv.1.weight", "stg1_low_band_net.0.dec4.conv1.conv.1.bias", "stg1_low_band_net.0.dec4.conv1.conv.1.running_mean", "stg1_low_band_net.0.dec4.conv1.conv.1.running_var", "stg1_low_band_net.0.dec3.conv1.conv.0.weight", "stg1_low_band_net.0.dec3.conv1.conv.1.weight", "stg1_low_band_net.0.dec3.conv1.conv.1.bias", "stg1_low_band_net.0.dec3.conv1.conv.1.running_mean", "stg1_low_band_net.0.dec3.conv1.conv.1.running_var", "stg1_low_band_net.0.dec2.conv1.conv.0.weight", "stg1_low_band_net.0.dec2.conv1.conv.1.weight", "stg1_low_band_net.0.dec2.conv1.conv.1.bias", "stg1_low_band_net.0.dec2.conv1.conv.1.running_mean", "stg1_low_band_net.0.dec2.conv1.conv.1.running_var", "stg1_low_band_net.0.lstm_dec2.conv.conv.0.weight", "stg1_low_band_net.0.lstm_dec2.conv.conv.1.weight", "stg1_low_band_net.0.lstm_dec2.conv.conv.1.bias", "stg1_low_band_net.0.lstm_dec2.conv.conv.1.running_mean", "stg1_low_band_net.0.lstm_dec2.conv.conv.1.running_var", "stg1_low_band_net.0.lstm_dec2.lstm.weight_ih_l0", "stg1_low_band_net.0.lstm_dec2.lstm.weight_hh_l0", "stg1_low_band_net.0.lstm_dec2.lstm.bias_ih_l0", "stg1_low_band_net.0.lstm_dec2.lstm.bias_hh_l0", "stg1_low_band_net.0.lstm_dec2.lstm.weight_ih_l0_reverse", "stg1_low_band_net.0.lstm_dec2.lstm.weight_hh_l0_reverse", "stg1_low_band_net.0.lstm_dec2.lstm.bias_ih_l0_reverse", "stg1_low_band_net.0.lstm_dec2.lstm.bias_hh_l0_reverse", "stg1_low_band_net.0.lstm_dec2.dense.0.weight", "stg1_low_band_net.0.lstm_dec2.dense.0.bias", "stg1_low_band_net.0.lstm_dec2.dense.1.weight", "stg1_low_band_net.0.lstm_dec2.dense.1.bias", "stg1_low_band_net.0.lstm_dec2.dense.1.running_mean", "stg1_low_band_net.0.lstm_dec2.dense.1.running_var", "stg1_low_band_net.0.dec1.conv1.conv.0.weight", "stg1_low_band_net.0.dec1.conv1.conv.1.weight", "stg1_low_band_net.0.dec1.conv1.conv.1.bias", "stg1_low_band_net.0.dec1.conv1.conv.1.running_mean", "stg1_low_band_net.0.dec1.conv1.conv.1.running_var", "stg1_low_band_net.1.conv.0.weight", "stg1_low_band_net.1.conv.1.weight", "stg1_low_band_net.1.conv.1.bias", "stg1_low_band_net.1.conv.1.running_mean", "stg1_low_band_net.1.conv.1.running_var", "stg1_high_band_net.enc1.conv.0.weight", "stg1_high_band_net.enc1.conv.1.weight", "stg1_high_band_net.enc1.conv.1.bias", "stg1_high_band_net.enc1.conv.1.running_mean", "stg1_high_band_net.enc1.conv.1.running_var", "stg1_high_band_net.enc5.conv1.conv.0.weight", "stg1_high_band_net.enc5.conv1.conv.1.weight", "stg1_high_band_net.enc5.conv1.conv.1.bias", "stg1_high_band_net.enc5.conv1.conv.1.running_mean", "stg1_high_band_net.enc5.conv1.conv.1.running_var", "stg1_high_band_net.enc5.conv2.conv.0.weight", "stg1_high_band_net.enc5.conv2.conv.1.weight", "stg1_high_band_net.enc5.conv2.conv.1.bias", "stg1_high_band_net.enc5.conv2.conv.1.running_mean", "stg1_high_band_net.enc5.conv2.conv.1.running_var", "stg1_high_band_net.aspp.conv3.conv.1.bias", "stg1_high_band_net.aspp.conv3.conv.1.running_mean", "stg1_high_band_net.aspp.conv3.conv.1.running_var", "stg1_high_band_net.aspp.conv4.conv.1.bias", "stg1_high_band_net.aspp.conv4.conv.1.running_mean", "stg1_high_band_net.aspp.conv4.conv.1.running_var", "stg1_high_band_net.aspp.conv5.conv.1.bias", "stg1_high_band_net.aspp.conv5.conv.1.running_mean", 
"stg1_high_band_net.aspp.conv5.conv.1.running_var", "stg1_high_band_net.aspp.bottleneck.conv.0.weight", "stg1_high_band_net.aspp.bottleneck.conv.1.weight", "stg1_high_band_net.aspp.bottleneck.conv.1.bias", "stg1_high_band_net.aspp.bottleneck.conv.1.running_mean", "stg1_high_band_net.aspp.bottleneck.conv.1.running_var", "stg1_high_band_net.dec4.conv1.conv.0.weight", "stg1_high_band_net.dec4.conv1.conv.1.weight", "stg1_high_band_net.dec4.conv1.conv.1.bias", "stg1_high_band_net.dec4.conv1.conv.1.running_mean", "stg1_high_band_net.dec4.conv1.conv.1.running_var", "stg1_high_band_net.dec3.conv1.conv.0.weight", "stg1_high_band_net.dec3.conv1.conv.1.weight", "stg1_high_band_net.dec3.conv1.conv.1.bias", "stg1_high_band_net.dec3.conv1.conv.1.running_mean", "stg1_high_band_net.dec3.conv1.conv.1.running_var", "stg1_high_band_net.dec2.conv1.conv.0.weight", "stg1_high_band_net.dec2.conv1.conv.1.weight", "stg1_high_band_net.dec2.conv1.conv.1.bias", "stg1_high_band_net.dec2.conv1.conv.1.running_mean", "stg1_high_band_net.dec2.conv1.conv.1.running_var", "stg1_high_band_net.lstm_dec2.conv.conv.0.weight", "stg1_high_band_net.lstm_dec2.conv.conv.1.weight", "stg1_high_band_net.lstm_dec2.conv.conv.1.bias", "stg1_high_band_net.lstm_dec2.conv.conv.1.running_mean", "stg1_high_band_net.lstm_dec2.conv.conv.1.running_var", "stg1_high_band_net.lstm_dec2.lstm.weight_ih_l0", "stg1_high_band_net.lstm_dec2.lstm.weight_hh_l0", "stg1_high_band_net.lstm_dec2.lstm.bias_ih_l0", "stg1_high_band_net.lstm_dec2.lstm.bias_hh_l0", "stg1_high_band_net.lstm_dec2.lstm.weight_ih_l0_reverse", "stg1_high_band_net.lstm_dec2.lstm.weight_hh_l0_reverse", "stg1_high_band_net.lstm_dec2.lstm.bias_ih_l0_reverse", "stg1_high_band_net.lstm_dec2.lstm.bias_hh_l0_reverse", "stg1_high_band_net.lstm_dec2.dense.0.weight", "stg1_high_band_net.lstm_dec2.dense.0.bias", "stg1_high_band_net.lstm_dec2.dense.1.weight", "stg1_high_band_net.lstm_dec2.dense.1.bias", "stg1_high_band_net.lstm_dec2.dense.1.running_mean", "stg1_high_band_net.lstm_dec2.dense.1.running_var", "stg1_high_band_net.dec1.conv1.conv.0.weight", "stg1_high_band_net.dec1.conv1.conv.1.weight", "stg1_high_band_net.dec1.conv1.conv.1.bias", "stg1_high_band_net.dec1.conv1.conv.1.running_mean", "stg1_high_band_net.dec1.conv1.conv.1.running_var", "stg2_low_band_net.0.enc1.conv.0.weight", "stg2_low_band_net.0.enc1.conv.1.weight", "stg2_low_band_net.0.enc1.conv.1.bias", "stg2_low_band_net.0.enc1.conv.1.running_mean", "stg2_low_band_net.0.enc1.conv.1.running_var", "stg2_low_band_net.0.enc2.conv1.conv.0.weight", "stg2_low_band_net.0.enc2.conv1.conv.1.weight", "stg2_low_band_net.0.enc2.conv1.conv.1.bias", "stg2_low_band_net.0.enc2.conv1.conv.1.running_mean", "stg2_low_band_net.0.enc2.conv1.conv.1.running_var", "stg2_low_band_net.0.enc2.conv2.conv.0.weight", "stg2_low_band_net.0.enc2.conv2.conv.1.weight", "stg2_low_band_net.0.enc2.conv2.conv.1.bias", "stg2_low_band_net.0.enc2.conv2.conv.1.running_mean", "stg2_low_band_net.0.enc2.conv2.conv.1.running_var", "stg2_low_band_net.0.enc3.conv1.conv.0.weight", "stg2_low_band_net.0.enc3.conv1.conv.1.weight", "stg2_low_band_net.0.enc3.conv1.conv.1.bias", "stg2_low_band_net.0.enc3.conv1.conv.1.running_mean", "stg2_low_band_net.0.enc3.conv1.conv.1.running_var", "stg2_low_band_net.0.enc3.conv2.conv.0.weight", "stg2_low_band_net.0.enc3.conv2.conv.1.weight", "stg2_low_band_net.0.enc3.conv2.conv.1.bias", "stg2_low_band_net.0.enc3.conv2.conv.1.running_mean", "stg2_low_band_net.0.enc3.conv2.conv.1.running_var", "stg2_low_band_net.0.enc4.conv1.conv.0.weight", 
"stg2_low_band_net.0.enc4.conv1.conv.1.weight", "stg2_low_band_net.0.enc4.conv1.conv.1.bias", "stg2_low_band_net.0.enc4.conv1.conv.1.running_mean", "stg2_low_band_net.0.enc4.conv1.conv.1.running_var", "stg2_low_band_net.0.enc4.conv2.conv.0.weight", "stg2_low_band_net.0.enc4.conv2.conv.1.weight", "stg2_low_band_net.0.enc4.conv2.conv.1.bias", "stg2_low_band_net.0.enc4.conv2.conv.1.running_mean", "stg2_low_band_net.0.enc4.conv2.conv.1.running_var", "stg2_low_band_net.0.enc5.conv1.conv.0.weight", "stg2_low_band_net.0.enc5.conv1.conv.1.weight", "stg2_low_band_net.0.enc5.conv1.conv.1.bias", "stg2_low_band_net.0.enc5.conv1.conv.1.running_mean", "stg2_low_band_net.0.enc5.conv1.conv.1.running_var", "stg2_low_band_net.0.enc5.conv2.conv.0.weight", "stg2_low_band_net.0.enc5.conv2.conv.1.weight", "stg2_low_band_net.0.enc5.conv2.conv.1.bias", "stg2_low_band_net.0.enc5.conv2.conv.1.running_mean", "stg2_low_band_net.0.enc5.conv2.conv.1.running_var", "stg2_low_band_net.0.aspp.conv1.1.conv.0.weight", "stg2_low_band_net.0.aspp.conv1.1.conv.1.weight", "stg2_low_band_net.0.aspp.conv1.1.conv.1.bias", "stg2_low_band_net.0.aspp.conv1.1.conv.1.running_mean", "stg2_low_band_net.0.aspp.conv1.1.conv.1.running_var", "stg2_low_band_net.0.aspp.conv2.conv.0.weight", "stg2_low_band_net.0.aspp.conv2.conv.1.weight", "stg2_low_band_net.0.aspp.conv2.conv.1.bias", "stg2_low_band_net.0.aspp.conv2.conv.1.running_mean", "stg2_low_band_net.0.aspp.conv2.conv.1.running_var", "stg2_low_band_net.0.aspp.conv3.conv.0.weight", "stg2_low_band_net.0.aspp.conv3.conv.1.weight", "stg2_low_band_net.0.aspp.conv3.conv.1.bias", "stg2_low_band_net.0.aspp.conv3.conv.1.running_mean", "stg2_low_band_net.0.aspp.conv3.conv.1.running_var", "stg2_low_band_net.0.aspp.conv4.conv.0.weight", "stg2_low_band_net.0.aspp.conv4.conv.1.weight", "stg2_low_band_net.0.aspp.conv4.conv.1.bias", "stg2_low_band_net.0.aspp.conv4.conv.1.running_mean", "stg2_low_band_net.0.aspp.conv4.conv.1.running_var", "stg2_low_band_net.0.aspp.conv5.conv.0.weight", "stg2_low_band_net.0.aspp.conv5.conv.1.weight", "stg2_low_band_net.0.aspp.conv5.conv.1.bias", "stg2_low_band_net.0.aspp.conv5.conv.1.running_mean", "stg2_low_band_net.0.aspp.conv5.conv.1.running_var", "stg2_low_band_net.0.aspp.bottleneck.conv.0.weight", "stg2_low_band_net.0.aspp.bottleneck.conv.1.weight", "stg2_low_band_net.0.aspp.bottleneck.conv.1.bias", "stg2_low_band_net.0.aspp.bottleneck.conv.1.running_mean", "stg2_low_band_net.0.aspp.bottleneck.conv.1.running_var", "stg2_low_band_net.0.dec4.conv1.conv.0.weight", "stg2_low_band_net.0.dec4.conv1.conv.1.weight", "stg2_low_band_net.0.dec4.conv1.conv.1.bias", "stg2_low_band_net.0.dec4.conv1.conv.1.running_mean", "stg2_low_band_net.0.dec4.conv1.conv.1.running_var", "stg2_low_band_net.0.dec3.conv1.conv.0.weight", "stg2_low_band_net.0.dec3.conv1.conv.1.weight", "stg2_low_band_net.0.dec3.conv1.conv.1.bias", "stg2_low_band_net.0.dec3.conv1.conv.1.running_mean", "stg2_low_band_net.0.dec3.conv1.conv.1.running_var", "stg2_low_band_net.0.dec2.conv1.conv.0.weight", "stg2_low_band_net.0.dec2.conv1.conv.1.weight", "stg2_low_band_net.0.dec2.conv1.conv.1.bias", "stg2_low_band_net.0.dec2.conv1.conv.1.running_mean", "stg2_low_band_net.0.dec2.conv1.conv.1.running_var", "stg2_low_band_net.0.lstm_dec2.conv.conv.0.weight", "stg2_low_band_net.0.lstm_dec2.conv.conv.1.weight", "stg2_low_band_net.0.lstm_dec2.conv.conv.1.bias", "stg2_low_band_net.0.lstm_dec2.conv.conv.1.running_mean", "stg2_low_band_net.0.lstm_dec2.conv.conv.1.running_var", "stg2_low_band_net.0.lstm_dec2.lstm.weight_ih_l0", 
"stg2_low_band_net.0.lstm_dec2.lstm.weight_hh_l0", "stg2_low_band_net.0.lstm_dec2.lstm.bias_ih_l0", "stg2_low_band_net.0.lstm_dec2.lstm.bias_hh_l0", "stg2_low_band_net.0.lstm_dec2.lstm.weight_ih_l0_reverse", "stg2_low_band_net.0.lstm_dec2.lstm.weight_hh_l0_reverse", "stg2_low_band_net.0.lstm_dec2.lstm.bias_ih_l0_reverse", "stg2_low_band_net.0.lstm_dec2.lstm.bias_hh_l0_reverse", "stg2_low_band_net.0.lstm_dec2.dense.0.weight", "stg2_low_band_net.0.lstm_dec2.dense.0.bias", "stg2_low_band_net.0.lstm_dec2.dense.1.weight", "stg2_low_band_net.0.lstm_dec2.dense.1.bias", "stg2_low_band_net.0.lstm_dec2.dense.1.running_mean", "stg2_low_band_net.0.lstm_dec2.dense.1.running_var", "stg2_low_band_net.0.dec1.conv1.conv.0.weight", "stg2_low_band_net.0.dec1.conv1.conv.1.weight", "stg2_low_band_net.0.dec1.conv1.conv.1.bias", "stg2_low_band_net.0.dec1.conv1.conv.1.running_mean", "stg2_low_band_net.0.dec1.conv1.conv.1.running_var", "stg2_low_band_net.1.conv.0.weight", "stg2_low_band_net.1.conv.1.weight", "stg2_low_band_net.1.conv.1.bias", "stg2_low_band_net.1.conv.1.running_mean", "stg2_low_band_net.1.conv.1.running_var", "stg2_high_band_net.enc1.conv.0.weight", "stg2_high_band_net.enc1.conv.1.weight", "stg2_high_band_net.enc1.conv.1.bias", "stg2_high_band_net.enc1.conv.1.running_mean", "stg2_high_band_net.enc1.conv.1.running_var", "stg2_high_band_net.enc2.conv1.conv.0.weight", "stg2_high_band_net.enc2.conv1.conv.1.weight", "stg2_high_band_net.enc2.conv1.conv.1.bias", "stg2_high_band_net.enc2.conv1.conv.1.running_mean", "stg2_high_band_net.enc2.conv1.conv.1.running_var", "stg2_high_band_net.enc2.conv2.conv.0.weight", "stg2_high_band_net.enc2.conv2.conv.1.weight", "stg2_high_band_net.enc2.conv2.conv.1.bias", "stg2_high_band_net.enc2.conv2.conv.1.running_mean", "stg2_high_band_net.enc2.conv2.conv.1.running_var", "stg2_high_band_net.enc3.conv1.conv.0.weight", "stg2_high_band_net.enc3.conv1.conv.1.weight", "stg2_high_band_net.enc3.conv1.conv.1.bias", "stg2_high_band_net.enc3.conv1.conv.1.running_mean", "stg2_high_band_net.enc3.conv1.conv.1.running_var", "stg2_high_band_net.enc3.conv2.conv.0.weight", "stg2_high_band_net.enc3.conv2.conv.1.weight", "stg2_high_band_net.enc3.conv2.conv.1.bias", "stg2_high_band_net.enc3.conv2.conv.1.running_mean", "stg2_high_band_net.enc3.conv2.conv.1.running_var", "stg2_high_band_net.enc4.conv1.conv.0.weight", "stg2_high_band_net.enc4.conv1.conv.1.weight", "stg2_high_band_net.enc4.conv1.conv.1.bias", "stg2_high_band_net.enc4.conv1.conv.1.running_mean", "stg2_high_band_net.enc4.conv1.conv.1.running_var", "stg2_high_band_net.enc4.conv2.conv.0.weight", "stg2_high_band_net.enc4.conv2.conv.1.weight", "stg2_high_band_net.enc4.conv2.conv.1.bias", "stg2_high_band_net.enc4.conv2.conv.1.running_mean", "stg2_high_band_net.enc4.conv2.conv.1.running_var", "stg2_high_band_net.enc5.conv1.conv.0.weight", "stg2_high_band_net.enc5.conv1.conv.1.weight", "stg2_high_band_net.enc5.conv1.conv.1.bias", "stg2_high_band_net.enc5.conv1.conv.1.running_mean", "stg2_high_band_net.enc5.conv1.conv.1.running_var", "stg2_high_band_net.enc5.conv2.conv.0.weight", "stg2_high_band_net.enc5.conv2.conv.1.weight", "stg2_high_band_net.enc5.conv2.conv.1.bias", "stg2_high_band_net.enc5.conv2.conv.1.running_mean", "stg2_high_band_net.enc5.conv2.conv.1.running_var", "stg2_high_band_net.aspp.conv1.1.conv.0.weight", "stg2_high_band_net.aspp.conv1.1.conv.1.weight", "stg2_high_band_net.aspp.conv1.1.conv.1.bias", "stg2_high_band_net.aspp.conv1.1.conv.1.running_mean", "stg2_high_band_net.aspp.conv1.1.conv.1.running_var", 
"stg2_high_band_net.aspp.conv2.conv.0.weight", "stg2_high_band_net.aspp.conv2.conv.1.weight", "stg2_high_band_net.aspp.conv2.conv.1.bias", "stg2_high_band_net.aspp.conv2.conv.1.running_mean", "stg2_high_band_net.aspp.conv2.conv.1.running_var", "stg2_high_band_net.aspp.conv3.conv.0.weight", "stg2_high_band_net.aspp.conv3.conv.1.weight", "stg2_high_band_net.aspp.conv3.conv.1.bias", "stg2_high_band_net.aspp.conv3.conv.1.running_mean", "stg2_high_band_net.aspp.conv3.conv.1.running_var", "stg2_high_band_net.aspp.conv4.conv.0.weight", "stg2_high_band_net.aspp.conv4.conv.1.weight", "stg2_high_band_net.aspp.conv4.conv.1.bias", "stg2_high_band_net.aspp.conv4.conv.1.running_mean", "stg2_high_band_net.aspp.conv4.conv.1.running_var", "stg2_high_band_net.aspp.conv5.conv.0.weight", "stg2_high_band_net.aspp.conv5.conv.1.weight", "stg2_high_band_net.aspp.conv5.conv.1.bias", "stg2_high_band_net.aspp.conv5.conv.1.running_mean", "stg2_high_band_net.aspp.conv5.conv.1.running_var", "stg2_high_band_net.aspp.bottleneck.conv.0.weight", "stg2_high_band_net.aspp.bottleneck.conv.1.weight", "stg2_high_band_net.aspp.bottleneck.conv.1.bias", "stg2_high_band_net.aspp.bottleneck.conv.1.running_mean", "stg2_high_band_net.aspp.bottleneck.conv.1.running_var", "stg2_high_band_net.dec4.conv1.conv.0.weight", "stg2_high_band_net.dec4.conv1.conv.1.weight", "stg2_high_band_net.dec4.conv1.conv.1.bias", "stg2_high_band_net.dec4.conv1.conv.1.running_mean", "stg2_high_band_net.dec4.conv1.conv.1.running_var", "stg2_high_band_net.dec3.conv1.conv.0.weight", "stg2_high_band_net.dec3.conv1.conv.1.weight", "stg2_high_band_net.dec3.conv1.conv.1.bias", "stg2_high_band_net.dec3.conv1.conv.1.running_mean", "stg2_high_band_net.dec3.conv1.conv.1.running_var", "stg2_high_band_net.dec2.conv1.conv.0.weight", "stg2_high_band_net.dec2.conv1.conv.1.weight", "stg2_high_band_net.dec2.conv1.conv.1.bias", "stg2_high_band_net.dec2.conv1.conv.1.running_mean", "stg2_high_band_net.dec2.conv1.conv.1.running_var", "stg2_high_band_net.lstm_dec2.conv.conv.0.weight", "stg2_high_band_net.lstm_dec2.conv.conv.1.weight", "stg2_high_band_net.lstm_dec2.conv.conv.1.bias", "stg2_high_band_net.lstm_dec2.conv.conv.1.running_mean", "stg2_high_band_net.lstm_dec2.conv.conv.1.running_var", "stg2_high_band_net.lstm_dec2.lstm.weight_ih_l0", "stg2_high_band_net.lstm_dec2.lstm.weight_hh_l0", "stg2_high_band_net.lstm_dec2.lstm.bias_ih_l0", "stg2_high_band_net.lstm_dec2.lstm.bias_hh_l0", "stg2_high_band_net.lstm_dec2.lstm.weight_ih_l0_reverse", "stg2_high_band_net.lstm_dec2.lstm.weight_hh_l0_reverse", "stg2_high_band_net.lstm_dec2.lstm.bias_ih_l0_reverse", "stg2_high_band_net.lstm_dec2.lstm.bias_hh_l0_reverse", "stg2_high_band_net.lstm_dec2.dense.0.weight", "stg2_high_band_net.lstm_dec2.dense.0.bias", "stg2_high_band_net.lstm_dec2.dense.1.weight", "stg2_high_band_net.lstm_dec2.dense.1.bias", "stg2_high_band_net.lstm_dec2.dense.1.running_mean", "stg2_high_band_net.lstm_dec2.dense.1.running_var", "stg2_high_band_net.dec1.conv1.conv.0.weight", "stg2_high_band_net.dec1.conv1.conv.1.weight", "stg2_high_band_net.dec1.conv1.conv.1.bias", "stg2_high_band_net.dec1.conv1.conv.1.running_mean", "stg2_high_band_net.dec1.conv1.conv.1.running_var", "stg3_full_band_net.enc1.conv.0.weight", "stg3_full_band_net.enc1.conv.1.weight", "stg3_full_band_net.enc1.conv.1.bias", "stg3_full_band_net.enc1.conv.1.running_mean", "stg3_full_band_net.enc1.conv.1.running_var", "stg3_full_band_net.enc5.conv1.conv.0.weight", "stg3_full_band_net.enc5.conv1.conv.1.weight", "stg3_full_band_net.enc5.conv1.conv.1.bias", 
"stg3_full_band_net.enc5.conv1.conv.1.running_mean", "stg3_full_band_net.enc5.conv1.conv.1.running_var", "stg3_full_band_net.enc5.conv2.conv.0.weight", "stg3_full_band_net.enc5.conv2.conv.1.weight", "stg3_full_band_net.enc5.conv2.conv.1.bias", "stg3_full_band_net.enc5.conv2.conv.1.running_mean", "stg3_full_band_net.enc5.conv2.conv.1.running_var", "stg3_full_band_net.aspp.conv3.conv.1.bias", "stg3_full_band_net.aspp.conv3.conv.1.running_mean", "stg3_full_band_net.aspp.conv3.conv.1.running_var", "stg3_full_band_net.aspp.conv4.conv.1.bias", "stg3_full_band_net.aspp.conv4.conv.1.running_mean", "stg3_full_band_net.aspp.conv4.conv.1.running_var", "stg3_full_band_net.aspp.conv5.conv.1.bias", "stg3_full_band_net.aspp.conv5.conv.1.running_mean", "stg3_full_band_net.aspp.conv5.conv.1.running_var", "stg3_full_band_net.aspp.bottleneck.conv.0.weight", "stg3_full_band_net.aspp.bottleneck.conv.1.weight", "stg3_full_band_net.aspp.bottleneck.conv.1.bias", "stg3_full_band_net.aspp.bottleneck.conv.1.running_mean", "stg3_full_band_net.aspp.bottleneck.conv.1.running_var", "stg3_full_band_net.dec4.conv1.conv.0.weight", "stg3_full_band_net.dec4.conv1.conv.1.weight", "stg3_full_band_net.dec4.conv1.conv.1.bias", "stg3_full_band_net.dec4.conv1.conv.1.running_mean", "stg3_full_band_net.dec4.conv1.conv.1.running_var", "stg3_full_band_net.dec3.conv1.conv.0.weight", "stg3_full_band_net.dec3.conv1.conv.1.weight", "stg3_full_band_net.dec3.conv1.conv.1.bias", "stg3_full_band_net.dec3.conv1.conv.1.running_mean", "stg3_full_band_net.dec3.conv1.conv.1.running_var", "stg3_full_band_net.dec2.conv1.conv.0.weight", "stg3_full_band_net.dec2.conv1.conv.1.weight", "stg3_full_band_net.dec2.conv1.conv.1.bias", "stg3_full_band_net.dec2.conv1.conv.1.running_mean", "stg3_full_band_net.dec2.conv1.conv.1.running_var", "stg3_full_band_net.lstm_dec2.conv.conv.0.weight", "stg3_full_band_net.lstm_dec2.conv.conv.1.weight", "stg3_full_band_net.lstm_dec2.conv.conv.1.bias", "stg3_full_band_net.lstm_dec2.conv.conv.1.running_mean", "stg3_full_band_net.lstm_dec2.conv.conv.1.running_var", "stg3_full_band_net.lstm_dec2.lstm.weight_ih_l0", "stg3_full_band_net.lstm_dec2.lstm.weight_hh_l0", "stg3_full_band_net.lstm_dec2.lstm.bias_ih_l0", "stg3_full_band_net.lstm_dec2.lstm.bias_hh_l0", "stg3_full_band_net.lstm_dec2.lstm.weight_ih_l0_reverse", "stg3_full_band_net.lstm_dec2.lstm.weight_hh_l0_reverse", "stg3_full_band_net.lstm_dec2.lstm.bias_ih_l0_reverse", "stg3_full_band_net.lstm_dec2.lstm.bias_hh_l0_reverse", "stg3_full_band_net.lstm_dec2.dense.0.weight", "stg3_full_band_net.lstm_dec2.dense.0.bias", "stg3_full_band_net.lstm_dec2.dense.1.weight", "stg3_full_band_net.lstm_dec2.dense.1.bias", "stg3_full_band_net.lstm_dec2.dense.1.running_mean", "stg3_full_band_net.lstm_dec2.dense.1.running_var", "stg3_full_band_net.dec1.conv1.conv.0.weight", "stg3_full_band_net.dec1.conv1.conv.1.weight", "stg3_full_band_net.dec1.conv1.conv.1.bias", "stg3_full_band_net.dec1.conv1.conv.1.running_mean", "stg3_full_band_net.dec1.conv1.conv.1.running_var", "aux_out.weight".
Unexpected key(s) in state_dict: "stg2_bridge.conv.0.weight", "stg2_bridge.conv.1.weight", "stg2_bridge.conv.1.bias", "stg2_bridge.conv.1.running_mean", "stg2_bridge.conv.1.running_var", "stg2_bridge.conv.1.num_batches_tracked", "stg2_full_band_net.enc1.conv1.conv.0.weight", "stg2_full_band_net.enc1.conv1.conv.1.weight", "stg2_full_band_net.enc1.conv1.conv.1.bias", "stg2_full_band_net.enc1.conv1.conv.1.running_mean", "stg2_full_band_net.enc1.conv1.conv.1.running_var", "stg2_full_band_net.enc1.conv1.conv.1.num_batches_tracked", "stg2_full_band_net.enc1.conv2.conv.0.weight", "stg2_full_band_net.enc1.conv2.conv.1.weight", "stg2_full_band_net.enc1.conv2.conv.1.bias", "stg2_full_band_net.enc1.conv2.conv.1.running_mean", "stg2_full_band_net.enc1.conv2.conv.1.running_var", "stg2_full_band_net.enc1.conv2.conv.1.num_batches_tracked", "stg2_full_band_net.enc2.conv1.conv.0.weight", "stg2_full_band_net.enc2.conv1.conv.1.weight", "stg2_full_band_net.enc2.conv1.conv.1.bias", "stg2_full_band_net.enc2.conv1.conv.1.running_mean", "stg2_full_band_net.enc2.conv1.conv.1.running_var", "stg2_full_band_net.enc2.conv1.conv.1.num_batches_tracked", "stg2_full_band_net.enc2.conv2.conv.0.weight", "stg2_full_band_net.enc2.conv2.conv.1.weight", "stg2_full_band_net.enc2.conv2.conv.1.bias", "stg2_full_band_net.enc2.conv2.conv.1.running_mean", "stg2_full_band_net.enc2.conv2.conv.1.running_var", "stg2_full_band_net.enc2.conv2.conv.1.num_batches_tracked", "stg2_full_band_net.enc3.conv1.conv.0.weight", "stg2_full_band_net.enc3.conv1.conv.1.weight", "stg2_full_band_net.enc3.conv1.conv.1.bias", "stg2_full_band_net.enc3.conv1.conv.1.running_mean", "stg2_full_band_net.enc3.conv1.conv.1.running_var", "stg2_full_band_net.enc3.conv1.conv.1.num_batches_tracked", "stg2_full_band_net.enc3.conv2.conv.0.weight", "stg2_full_band_net.enc3.conv2.conv.1.weight", "stg2_full_band_net.enc3.conv2.conv.1.bias", "stg2_full_band_net.enc3.conv2.conv.1.running_mean", "stg2_full_band_net.enc3.conv2.conv.1.running_var", "stg2_full_band_net.enc3.conv2.conv.1.num_batches_tracked", "stg2_full_band_net.enc4.conv1.conv.0.weight", "stg2_full_band_net.enc4.conv1.conv.1.weight", "stg2_full_band_net.enc4.conv1.conv.1.bias", "stg2_full_band_net.enc4.conv1.conv.1.running_mean", "stg2_full_band_net.enc4.conv1.conv.1.running_var", "stg2_full_band_net.enc4.conv1.conv.1.num_batches_tracked", "stg2_full_band_net.enc4.conv2.conv.0.weight", "stg2_full_band_net.enc4.conv2.conv.1.weight", "stg2_full_band_net.enc4.conv2.conv.1.bias", "stg2_full_band_net.enc4.conv2.conv.1.running_mean", "stg2_full_band_net.enc4.conv2.conv.1.running_var", "stg2_full_band_net.enc4.conv2.conv.1.num_batches_tracked", "stg2_full_band_net.aspp.conv1.1.conv.0.weight", "stg2_full_band_net.aspp.conv1.1.conv.1.weight", "stg2_full_band_net.aspp.conv1.1.conv.1.bias", "stg2_full_band_net.aspp.conv1.1.conv.1.running_mean", "stg2_full_band_net.aspp.conv1.1.conv.1.running_var", "stg2_full_band_net.aspp.conv1.1.conv.1.num_batches_tracked", "stg2_full_band_net.aspp.conv2.conv.0.weight", "stg2_full_band_net.aspp.conv2.conv.1.weight", "stg2_full_band_net.aspp.conv2.conv.1.bias", "stg2_full_band_net.aspp.conv2.conv.1.running_mean", "stg2_full_band_net.aspp.conv2.conv.1.running_var", "stg2_full_band_net.aspp.conv2.conv.1.num_batches_tracked", "stg2_full_band_net.aspp.conv3.conv.0.weight", "stg2_full_band_net.aspp.conv3.conv.1.weight", "stg2_full_band_net.aspp.conv3.conv.2.weight", "stg2_full_band_net.aspp.conv3.conv.2.bias", "stg2_full_band_net.aspp.conv3.conv.2.running_mean", 
"stg2_full_band_net.aspp.conv3.conv.2.running_var", "stg2_full_band_net.aspp.conv3.conv.2.num_batches_tracked", "stg2_full_band_net.aspp.conv4.conv.0.weight", "stg2_full_band_net.aspp.conv4.conv.1.weight", "stg2_full_band_net.aspp.conv4.conv.2.weight", "stg2_full_band_net.aspp.conv4.conv.2.bias", "stg2_full_band_net.aspp.conv4.conv.2.running_mean", "stg2_full_band_net.aspp.conv4.conv.2.running_var", "stg2_full_band_net.aspp.conv4.conv.2.num_batches_tracked", "stg2_full_band_net.aspp.conv5.conv.0.weight", "stg2_full_band_net.aspp.conv5.conv.1.weight", "stg2_full_band_net.aspp.conv5.conv.2.weight", "stg2_full_band_net.aspp.conv5.conv.2.bias", "stg2_full_band_net.aspp.conv5.conv.2.running_mean", "stg2_full_band_net.aspp.conv5.conv.2.running_var", "stg2_full_band_net.aspp.conv5.conv.2.num_batches_tracked", "stg2_full_band_net.aspp.bottleneck.0.conv.0.weight", "stg2_full_band_net.aspp.bottleneck.0.conv.1.weight", "stg2_full_band_net.aspp.bottleneck.0.conv.1.bias", "stg2_full_band_net.aspp.bottleneck.0.conv.1.running_mean", "stg2_full_band_net.aspp.bottleneck.0.conv.1.running_var", "stg2_full_band_net.aspp.bottleneck.0.conv.1.num_batches_tracked", "stg2_full_band_net.dec4.conv.conv.0.weight", "stg2_full_band_net.dec4.conv.conv.1.weight", "stg2_full_band_net.dec4.conv.conv.1.bias", "stg2_full_band_net.dec4.conv.conv.1.running_mean", "stg2_full_band_net.dec4.conv.conv.1.running_var", "stg2_full_band_net.dec4.conv.conv.1.num_batches_tracked", "stg2_full_band_net.dec3.conv.conv.0.weight", "stg2_full_band_net.dec3.conv.conv.1.weight", "stg2_full_band_net.dec3.conv.conv.1.bias", "stg2_full_band_net.dec3.conv.conv.1.running_mean", "stg2_full_band_net.dec3.conv.conv.1.running_var", "stg2_full_band_net.dec3.conv.conv.1.num_batches_tracked", "stg2_full_band_net.dec2.conv.conv.0.weight", "stg2_full_band_net.dec2.conv.conv.1.weight", "stg2_full_band_net.dec2.conv.conv.1.bias", "stg2_full_band_net.dec2.conv.conv.1.running_mean", "stg2_full_band_net.dec2.conv.conv.1.running_var", "stg2_full_band_net.dec2.conv.conv.1.num_batches_tracked", "stg2_full_band_net.dec1.conv.conv.0.weight", "stg2_full_band_net.dec1.conv.conv.1.weight", "stg2_full_band_net.dec1.conv.conv.1.bias", "stg2_full_band_net.dec1.conv.conv.1.running_mean", "stg2_full_band_net.dec1.conv.conv.1.running_var", "stg2_full_band_net.dec1.conv.conv.1.num_batches_tracked", "stg3_bridge.conv.0.weight", "stg3_bridge.conv.1.weight", "stg3_bridge.conv.1.bias", "stg3_bridge.conv.1.running_mean", "stg3_bridge.conv.1.running_var", "stg3_bridge.conv.1.num_batches_tracked", "aux1_out.weight", "aux2_out.weight", "stg1_low_band_net.enc1.conv1.conv.0.weight", "stg1_low_band_net.enc1.conv1.conv.1.weight", "stg1_low_band_net.enc1.conv1.conv.1.bias", "stg1_low_band_net.enc1.conv1.conv.1.running_mean", "stg1_low_band_net.enc1.conv1.conv.1.running_var", "stg1_low_band_net.enc1.conv1.conv.1.num_batches_tracked", "stg1_low_band_net.enc1.conv2.conv.0.weight", "stg1_low_band_net.enc1.conv2.conv.1.weight", "stg1_low_band_net.enc1.conv2.conv.1.bias", "stg1_low_band_net.enc1.conv2.conv.1.running_mean", "stg1_low_band_net.enc1.conv2.conv.1.running_var", "stg1_low_band_net.enc1.conv2.conv.1.num_batches_tracked", "stg1_low_band_net.enc2.conv1.conv.0.weight", "stg1_low_band_net.enc2.conv1.conv.1.weight", "stg1_low_band_net.enc2.conv1.conv.1.bias", "stg1_low_band_net.enc2.conv1.conv.1.running_mean", "stg1_low_band_net.enc2.conv1.conv.1.running_var", "stg1_low_band_net.enc2.conv1.conv.1.num_batches_tracked", "stg1_low_band_net.enc2.conv2.conv.0.weight", 
"stg1_low_band_net.enc2.conv2.conv.1.weight", "stg1_low_band_net.enc2.conv2.conv.1.bias", "stg1_low_band_net.enc2.conv2.conv.1.running_mean", "stg1_low_band_net.enc2.conv2.conv.1.running_var", "stg1_low_band_net.enc2.conv2.conv.1.num_batches_tracked", "stg1_low_band_net.enc3.conv1.conv.0.weight", "stg1_low_band_net.enc3.conv1.conv.1.weight", "stg1_low_band_net.enc3.conv1.conv.1.bias", "stg1_low_band_net.enc3.conv1.conv.1.running_mean", "stg1_low_band_net.enc3.conv1.conv.1.running_var", "stg1_low_band_net.enc3.conv1.conv.1.num_batches_tracked", "stg1_low_band_net.enc3.conv2.conv.0.weight", "stg1_low_band_net.enc3.conv2.conv.1.weight", "stg1_low_band_net.enc3.conv2.conv.1.bias", "stg1_low_band_net.enc3.conv2.conv.1.running_mean", "stg1_low_band_net.enc3.conv2.conv.1.running_var", "stg1_low_band_net.enc3.conv2.conv.1.num_batches_tracked", "stg1_low_band_net.enc4.conv1.conv.0.weight", "stg1_low_band_net.enc4.conv1.conv.1.weight", "stg1_low_band_net.enc4.conv1.conv.1.bias", "stg1_low_band_net.enc4.conv1.conv.1.running_mean", "stg1_low_band_net.enc4.conv1.conv.1.running_var", "stg1_low_band_net.enc4.conv1.conv.1.num_batches_tracked", "stg1_low_band_net.enc4.conv2.conv.0.weight", "stg1_low_band_net.enc4.conv2.conv.1.weight", "stg1_low_band_net.enc4.conv2.conv.1.bias", "stg1_low_band_net.enc4.conv2.conv.1.running_mean", "stg1_low_band_net.enc4.conv2.conv.1.running_var", "stg1_low_band_net.enc4.conv2.conv.1.num_batches_tracked", "stg1_low_band_net.aspp.conv1.1.conv.0.weight", "stg1_low_band_net.aspp.conv1.1.conv.1.weight", "stg1_low_band_net.aspp.conv1.1.conv.1.bias", "stg1_low_band_net.aspp.conv1.1.conv.1.running_mean", "stg1_low_band_net.aspp.conv1.1.conv.1.running_var", "stg1_low_band_net.aspp.conv1.1.conv.1.num_batches_tracked", "stg1_low_band_net.aspp.conv2.conv.0.weight", "stg1_low_band_net.aspp.conv2.conv.1.weight", "stg1_low_band_net.aspp.conv2.conv.1.bias", "stg1_low_band_net.aspp.conv2.conv.1.running_mean", "stg1_low_band_net.aspp.conv2.conv.1.running_var", "stg1_low_band_net.aspp.conv2.conv.1.num_batches_tracked", "stg1_low_band_net.aspp.conv3.conv.0.weight", "stg1_low_band_net.aspp.conv3.conv.1.weight", "stg1_low_band_net.aspp.conv3.conv.2.weight", "stg1_low_band_net.aspp.conv3.conv.2.bias", "stg1_low_band_net.aspp.conv3.conv.2.running_mean", "stg1_low_band_net.aspp.conv3.conv.2.running_var", "stg1_low_band_net.aspp.conv3.conv.2.num_batches_tracked", "stg1_low_band_net.aspp.conv4.conv.0.weight", "stg1_low_band_net.aspp.conv4.conv.1.weight", "stg1_low_band_net.aspp.conv4.conv.2.weight", "stg1_low_band_net.aspp.conv4.conv.2.bias", "stg1_low_band_net.aspp.conv4.conv.2.running_mean", "stg1_low_band_net.aspp.conv4.conv.2.running_var", "stg1_low_band_net.aspp.conv4.conv.2.num_batches_tracked", "stg1_low_band_net.aspp.conv5.conv.0.weight", "stg1_low_band_net.aspp.conv5.conv.1.weight", "stg1_low_band_net.aspp.conv5.conv.2.weight", "stg1_low_band_net.aspp.conv5.conv.2.bias", "stg1_low_band_net.aspp.conv5.conv.2.running_mean", "stg1_low_band_net.aspp.conv5.conv.2.running_var", "stg1_low_band_net.aspp.conv5.conv.2.num_batches_tracked", "stg1_low_band_net.aspp.bottleneck.0.conv.0.weight", "stg1_low_band_net.aspp.bottleneck.0.conv.1.weight", "stg1_low_band_net.aspp.bottleneck.0.conv.1.bias", "stg1_low_band_net.aspp.bottleneck.0.conv.1.running_mean", "stg1_low_band_net.aspp.bottleneck.0.conv.1.running_var", "stg1_low_band_net.aspp.bottleneck.0.conv.1.num_batches_tracked", "stg1_low_band_net.dec4.conv.conv.0.weight", "stg1_low_band_net.dec4.conv.conv.1.weight", 
"stg1_low_band_net.dec4.conv.conv.1.bias", "stg1_low_band_net.dec4.conv.conv.1.running_mean", "stg1_low_band_net.dec4.conv.conv.1.running_var", "stg1_low_band_net.dec4.conv.conv.1.num_batches_tracked", "stg1_low_band_net.dec3.conv.conv.0.weight", "stg1_low_band_net.dec3.conv.conv.1.weight", "stg1_low_band_net.dec3.conv.conv.1.bias", "stg1_low_band_net.dec3.conv.conv.1.running_mean", "stg1_low_band_net.dec3.conv.conv.1.running_var", "stg1_low_band_net.dec3.conv.conv.1.num_batches_tracked", "stg1_low_band_net.dec2.conv.conv.0.weight", "stg1_low_band_net.dec2.conv.conv.1.weight", "stg1_low_band_net.dec2.conv.conv.1.bias", "stg1_low_band_net.dec2.conv.conv.1.running_mean", "stg1_low_band_net.dec2.conv.conv.1.running_var", "stg1_low_band_net.dec2.conv.conv.1.num_batches_tracked", "stg1_low_band_net.dec1.conv.conv.0.weight", "stg1_low_band_net.dec1.conv.conv.1.weight", "stg1_low_band_net.dec1.conv.conv.1.bias", "stg1_low_band_net.dec1.conv.conv.1.running_mean", "stg1_low_band_net.dec1.conv.conv.1.running_var", "stg1_low_band_net.dec1.conv.conv.1.num_batches_tracked", "stg1_high_band_net.enc1.conv1.conv.0.weight", "stg1_high_band_net.enc1.conv1.conv.1.weight", "stg1_high_band_net.enc1.conv1.conv.1.bias", "stg1_high_band_net.enc1.conv1.conv.1.running_mean", "stg1_high_band_net.enc1.conv1.conv.1.running_var", "stg1_high_band_net.enc1.conv1.conv.1.num_batches_tracked", "stg1_high_band_net.enc1.conv2.conv.0.weight", "stg1_high_band_net.enc1.conv2.conv.1.weight", "stg1_high_band_net.enc1.conv2.conv.1.bias", "stg1_high_band_net.enc1.conv2.conv.1.running_mean", "stg1_high_band_net.enc1.conv2.conv.1.running_var", "stg1_high_band_net.enc1.conv2.conv.1.num_batches_tracked", "stg1_high_band_net.aspp.conv3.conv.2.weight", "stg1_high_band_net.aspp.conv3.conv.2.bias", "stg1_high_band_net.aspp.conv3.conv.2.running_mean", "stg1_high_band_net.aspp.conv3.conv.2.running_var", "stg1_high_band_net.aspp.conv3.conv.2.num_batches_tracked", "stg1_high_band_net.aspp.conv4.conv.2.weight", "stg1_high_band_net.aspp.conv4.conv.2.bias", "stg1_high_band_net.aspp.conv4.conv.2.running_mean", "stg1_high_band_net.aspp.conv4.conv.2.running_var", "stg1_high_band_net.aspp.conv4.conv.2.num_batches_tracked", "stg1_high_band_net.aspp.conv5.conv.2.weight", "stg1_high_band_net.aspp.conv5.conv.2.bias", "stg1_high_band_net.aspp.conv5.conv.2.running_mean", "stg1_high_band_net.aspp.conv5.conv.2.running_var", "stg1_high_band_net.aspp.conv5.conv.2.num_batches_tracked", "stg1_high_band_net.aspp.bottleneck.0.conv.0.weight", "stg1_high_band_net.aspp.bottleneck.0.conv.1.weight", "stg1_high_band_net.aspp.bottleneck.0.conv.1.bias", "stg1_high_band_net.aspp.bottleneck.0.conv.1.running_mean", "stg1_high_band_net.aspp.bottleneck.0.conv.1.running_var", "stg1_high_band_net.aspp.bottleneck.0.conv.1.num_batches_tracked", "stg1_high_band_net.dec4.conv.conv.0.weight", "stg1_high_band_net.dec4.conv.conv.1.weight", "stg1_high_band_net.dec4.conv.conv.1.bias", "stg1_high_band_net.dec4.conv.conv.1.running_mean", "stg1_high_band_net.dec4.conv.conv.1.running_var", "stg1_high_band_net.dec4.conv.conv.1.num_batches_tracked", "stg1_high_band_net.dec3.conv.conv.0.weight", "stg1_high_band_net.dec3.conv.conv.1.weight", "stg1_high_band_net.dec3.conv.conv.1.bias", "stg1_high_band_net.dec3.conv.conv.1.running_mean", "stg1_high_band_net.dec3.conv.conv.1.running_var", "stg1_high_band_net.dec3.conv.conv.1.num_batches_tracked", "stg1_high_band_net.dec2.conv.conv.0.weight", "stg1_high_band_net.dec2.conv.conv.1.weight", "stg1_high_band_net.dec2.conv.conv.1.bias", 
"stg1_high_band_net.dec2.conv.conv.1.running_mean", "stg1_high_band_net.dec2.conv.conv.1.running_var", "stg1_high_band_net.dec2.conv.conv.1.num_batches_tracked", "stg1_high_band_net.dec1.conv.conv.0.weight", "stg1_high_band_net.dec1.conv.conv.1.weight", "stg1_high_band_net.dec1.conv.conv.1.bias", "stg1_high_band_net.dec1.conv.conv.1.running_mean", "stg1_high_band_net.dec1.conv.conv.1.running_var", "stg1_high_band_net.dec1.conv.conv.1.num_batches_tracked", "stg3_full_band_net.enc1.conv1.conv.0.weight", "stg3_full_band_net.enc1.conv1.conv.1.weight", "stg3_full_band_net.enc1.conv1.conv.1.bias", "stg3_full_band_net.enc1.conv1.conv.1.running_mean", "stg3_full_band_net.enc1.conv1.conv.1.running_var", "stg3_full_band_net.enc1.conv1.conv.1.num_batches_tracked", "stg3_full_band_net.enc1.conv2.conv.0.weight", "stg3_full_band_net.enc1.conv2.conv.1.weight", "stg3_full_band_net.enc1.conv2.conv.1.bias", "stg3_full_band_net.enc1.conv2.conv.1.running_mean", "stg3_full_band_net.enc1.conv2.conv.1.running_var", "stg3_full_band_net.enc1.conv2.conv.1.num_batches_tracked", "stg3_full_band_net.aspp.conv3.conv.2.weight", "stg3_full_band_net.aspp.conv3.conv.2.bias", "stg3_full_band_net.aspp.conv3.conv.2.running_mean", "stg3_full_band_net.aspp.conv3.conv.2.running_var", "stg3_full_band_net.aspp.conv3.conv.2.num_batches_tracked", "stg3_full_band_net.aspp.conv4.conv.2.weight", "stg3_full_band_net.aspp.conv4.conv.2.bias", "stg3_full_band_net.aspp.conv4.conv.2.running_mean", "stg3_full_band_net.aspp.conv4.conv.2.running_var", "stg3_full_band_net.aspp.conv4.conv.2.num_batches_tracked", "stg3_full_band_net.aspp.conv5.conv.2.weight", "stg3_full_band_net.aspp.conv5.conv.2.bias", "stg3_full_band_net.aspp.conv5.conv.2.running_mean", "stg3_full_band_net.aspp.conv5.conv.2.running_var", "stg3_full_band_net.aspp.conv5.conv.2.num_batches_tracked", "stg3_full_band_net.aspp.bottleneck.0.conv.0.weight", "stg3_full_band_net.aspp.bottleneck.0.conv.1.weight", "stg3_full_band_net.aspp.bottleneck.0.conv.1.bias", "stg3_full_band_net.aspp.bottleneck.0.conv.1.running_mean", "stg3_full_band_net.aspp.bottleneck.0.conv.1.running_var", "stg3_full_band_net.aspp.bottleneck.0.conv.1.num_batches_tracked", "stg3_full_band_net.dec4.conv.conv.0.weight", "stg3_full_band_net.dec4.conv.conv.1.weight", "stg3_full_band_net.dec4.conv.conv.1.bias", "stg3_full_band_net.dec4.conv.conv.1.running_mean", "stg3_full_band_net.dec4.conv.conv.1.running_var", "stg3_full_band_net.dec4.conv.conv.1.num_batches_tracked", "stg3_full_band_net.dec3.conv.conv.0.weight", "stg3_full_band_net.dec3.conv.conv.1.weight", "stg3_full_band_net.dec3.conv.conv.1.bias", "stg3_full_band_net.dec3.conv.conv.1.running_mean", "stg3_full_band_net.dec3.conv.conv.1.running_var", "stg3_full_band_net.dec3.conv.conv.1.num_batches_tracked", "stg3_full_band_net.dec2.conv.conv.0.weight", "stg3_full_band_net.dec2.conv.conv.1.weight", "stg3_full_band_net.dec2.conv.conv.1.bias", "stg3_full_band_net.dec2.conv.conv.1.running_mean", "stg3_full_band_net.dec2.conv.conv.1.running_var", "stg3_full_band_net.dec2.conv.conv.1.num_batches_tracked", "stg3_full_band_net.dec1.conv.conv.0.weight", "stg3_full_band_net.dec1.conv.conv.1.weight", "stg3_full_band_net.dec1.conv.conv.1.bias", "stg3_full_band_net.dec1.conv.conv.1.running_mean", "stg3_full_band_net.dec1.conv.conv.1.running_var", "stg3_full_band_net.dec1.conv.conv.1.num_batches_tracked".
size mismatch for stg1_high_band_net.enc2.conv1.conv.0.weight: copying a param with shape torch.Size([64, 32, 3, 3]) from checkpoint, the shape in current model is torch.Size([16, 8, 3, 3]).
size mismatch for stg1_high_band_net.enc2.conv1.conv.1.weight: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([16]).
size mismatch for stg1_high_band_net.enc2.conv1.conv.1.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([16]).
size mismatch for stg1_high_band_net.enc2.conv1.conv.1.running_mean: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([16]).
size mismatch for stg1_high_band_net.enc2.conv1.conv.1.running_var: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([16]).
size mismatch for stg1_high_band_net.enc2.conv2.conv.0.weight: copying a param with shape torch.Size([64, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([16, 16, 3, 3]).
size mismatch for stg1_high_band_net.enc2.conv2.conv.1.weight: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([16]).
size mismatch for stg1_high_band_net.enc2.conv2.conv.1.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([16]).
size mismatch for stg1_high_band_net.enc2.conv2.conv.1.running_mean: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([16]).
size mismatch for stg1_high_band_net.enc2.conv2.conv.1.running_var: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([16]).
size mismatch for stg1_high_band_net.enc3.conv1.conv.0.weight: copying a param with shape torch.Size([128, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 16, 3, 3]).
size mismatch for stg1_high_band_net.enc3.conv1.conv.1.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for stg1_high_band_net.enc3.conv1.conv.1.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for stg1_high_band_net.enc3.conv1.conv.1.running_mean: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for stg1_high_band_net.enc3.conv1.conv.1.running_var: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for stg1_high_band_net.enc3.conv2.conv.0.weight: copying a param with shape torch.Size([128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
size mismatch for stg1_high_band_net.enc3.conv2.conv.1.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for stg1_high_band_net.enc3.conv2.conv.1.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for stg1_high_band_net.enc3.conv2.conv.1.running_mean: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for stg1_high_band_net.enc3.conv2.conv.1.running_var: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for stg1_high_band_net.enc4.conv1.conv.0.weight: copying a param with shape torch.Size([256, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([48, 32, 3, 3]).
size mismatch for stg1_high_band_net.enc4.conv1.conv.1.weight: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([48]).
size mismatch for stg1_high_band_net.enc4.conv1.conv.1.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([48]).
size mismatch for stg1_high_band_net.enc4.conv1.conv.1.running_mean: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([48]).
size mismatch for stg1_high_band_net.enc4.conv1.conv.1.running_var: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([48]).
size mismatch for stg1_high_band_net.enc4.conv2.conv.0.weight: copying a param with shape torch.Size([256, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([48, 48, 3, 3]).
size mismatch for stg1_high_band_net.enc4.conv2.conv.1.weight: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([48]).
size mismatch for stg1_high_band_net.enc4.conv2.conv.1.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([48]).
size mismatch for stg1_high_band_net.enc4.conv2.conv.1.running_mean: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([48]).
size mismatch for stg1_high_band_net.enc4.conv2.conv.1.running_var: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([48]).
size mismatch for stg1_high_band_net.aspp.conv1.1.conv.0.weight: copying a param with shape torch.Size([256, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 64, 1, 1]).
size mismatch for stg1_high_band_net.aspp.conv1.1.conv.1.weight: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for stg1_high_band_net.aspp.conv1.1.conv.1.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for stg1_high_band_net.aspp.conv1.1.conv.1.running_mean: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for stg1_high_band_net.aspp.conv1.1.conv.1.running_var: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for stg1_high_band_net.aspp.conv2.conv.0.weight: copying a param with shape torch.Size([256, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 64, 1, 1]).
size mismatch for stg1_high_band_net.aspp.conv2.conv.1.weight: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for stg1_high_band_net.aspp.conv2.conv.1.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for stg1_high_band_net.aspp.conv2.conv.1.running_mean: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for stg1_high_band_net.aspp.conv2.conv.1.running_var: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for stg1_high_band_net.aspp.conv3.conv.0.weight: copying a param with shape torch.Size([256, 1, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
size mismatch for stg1_high_band_net.aspp.conv3.conv.1.weight: copying a param with shape torch.Size([256, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for stg1_high_band_net.aspp.conv4.conv.0.weight: copying a param with shape torch.Size([256, 1, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
size mismatch for stg1_high_band_net.aspp.conv4.conv.1.weight: copying a param with shape torch.Size([256, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for stg1_high_band_net.aspp.conv5.conv.0.weight: copying a param with shape torch.Size([256, 1, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
size mismatch for stg1_high_band_net.aspp.conv5.conv.1.weight: copying a param with shape torch.Size([256, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for stg3_full_band_net.enc2.conv1.conv.0.weight: copying a param with shape torch.Size([128, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 32, 3, 3]).
size mismatch for stg3_full_band_net.enc2.conv1.conv.1.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for stg3_full_band_net.enc2.conv1.conv.1.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for stg3_full_band_net.enc2.conv1.conv.1.running_mean: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for stg3_full_band_net.enc2.conv1.conv.1.running_var: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for stg3_full_band_net.enc2.conv2.conv.0.weight: copying a param with shape torch.Size([128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
size mismatch for stg3_full_band_net.enc2.conv2.conv.1.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for stg3_full_band_net.enc2.conv2.conv.1.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for stg3_full_band_net.enc2.conv2.conv.1.running_mean: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for stg3_full_band_net.enc2.conv2.conv.1.running_var: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for stg3_full_band_net.enc3.conv1.conv.0.weight: copying a param with shape torch.Size([256, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 64, 3, 3]).
size mismatch for stg3_full_band_net.enc3.conv1.conv.1.weight: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for stg3_full_band_net.enc3.conv1.conv.1.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for stg3_full_band_net.enc3.conv1.conv.1.running_mean: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for stg3_full_band_net.enc3.conv1.conv.1.running_var: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for stg3_full_band_net.enc3.conv2.conv.0.weight: copying a param with shape torch.Size([256, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]).
size mismatch for stg3_full_band_net.enc3.conv2.conv.1.weight: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for stg3_full_band_net.enc3.conv2.conv.1.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for stg3_full_band_net.enc3.conv2.conv.1.running_mean: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for stg3_full_band_net.enc3.conv2.conv.1.running_var: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for stg3_full_band_net.enc4.conv1.conv.0.weight: copying a param with shape torch.Size([512, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([192, 128, 3, 3]).
size mismatch for stg3_full_band_net.enc4.conv1.conv.1.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([192]).
size mismatch for stg3_full_band_net.enc4.conv1.conv.1.bias: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([192]).
size mismatch for stg3_full_band_net.enc4.conv1.conv.1.running_mean: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([192]).
size mismatch for stg3_full_band_net.enc4.conv1.conv.1.running_var: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([192]).
size mismatch for stg3_full_band_net.enc4.conv2.conv.0.weight: copying a param with shape torch.Size([512, 512, 3, 3]) from checkpoint, the shape in current model is torch.Size([192, 192, 3, 3]).
size mismatch for stg3_full_band_net.enc4.conv2.conv.1.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([192]).
size mismatch for stg3_full_band_net.enc4.conv2.conv.1.bias: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([192]).
size mismatch for stg3_full_band_net.enc4.conv2.conv.1.running_mean: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([192]).
size mismatch for stg3_full_band_net.enc4.conv2.conv.1.running_var: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([192]).
size mismatch for stg3_full_band_net.aspp.conv1.1.conv.0.weight: copying a param with shape torch.Size([512, 512, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 256, 1, 1]).
size mismatch for stg3_full_band_net.aspp.conv1.1.conv.1.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for stg3_full_band_net.aspp.conv1.1.conv.1.bias: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for stg3_full_band_net.aspp.conv1.1.conv.1.running_mean: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for stg3_full_band_net.aspp.conv1.1.conv.1.running_var: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for stg3_full_band_net.aspp.conv2.conv.0.weight: copying a param with shape torch.Size([512, 512, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 256, 1, 1]).
size mismatch for stg3_full_band_net.aspp.conv2.conv.1.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for stg3_full_band_net.aspp.conv2.conv.1.bias: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for stg3_full_band_net.aspp.conv2.conv.1.running_mean: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for stg3_full_band_net.aspp.conv2.conv.1.running_var: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for stg3_full_band_net.aspp.conv3.conv.0.weight: copying a param with shape torch.Size([512, 1, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
size mismatch for stg3_full_band_net.aspp.conv3.conv.1.weight: copying a param with shape torch.Size([512, 512, 1, 1]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for stg3_full_band_net.aspp.conv4.conv.0.weight: copying a param with shape torch.Size([512, 1, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
size mismatch for stg3_full_band_net.aspp.conv4.conv.1.weight: copying a param with shape torch.Size([512, 512, 1, 1]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for stg3_full_band_net.aspp.conv5.conv.0.weight: copying a param with shape torch.Size([512, 1, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
size mismatch for stg3_full_band_net.aspp.conv5.conv.1.weight: copying a param with shape torch.Size([512, 512, 1, 1]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for out.weight: copying a param with shape torch.Size([2, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([2, 32, 1, 1])."
Traceback Error: "
File "UVR.py", line 6638, in process_start
File "separate.py", line 1050, in seperate
File "torch\nn\modules\module.py", line 1667, in load_state_dict
"
Error Time Stamp [2024-02-04 19:10:47]
Full Application Settings:
vr_model: 6_HP-Karaoke-UVR
aggression_setting: 50
window_size: 1024
mdx_segment_size: 512
batch_size: Default
crop_size: 256
is_tta: False
is_output_image: False
is_post_process: False
is_high_end_process: False
post_process_threshold: 0.2
vr_voc_inst_secondary_model: No Model Selected
vr_other_secondary_model: No Model Selected
vr_bass_secondary_model: No Model Selected
vr_drums_secondary_model: No Model Selected
vr_is_secondary_model_activate: False
vr_voc_inst_secondary_model_scale: 0.9
vr_other_secondary_model_scale: 0.7
vr_bass_secondary_model_scale: 0.5
vr_drums_secondary_model_scale: 0.5
demucs_model: Choose Model
segment: Default
overlap: 0.25
overlap_mdx: 0.25
overlap_mdx23: 8
shifts: 2
chunks_demucs: Auto
margin_demucs: 44100
is_chunk_demucs: False
is_chunk_mdxnet: False
is_primary_stem_only_Demucs: False
is_secondary_stem_only_Demucs: False
is_split_mode: True
is_demucs_combine_stems: True
is_mdx23_combine_stems: True
demucs_voc_inst_secondary_model: No Model Selected
demucs_other_secondary_model: No Model Selected
demucs_bass_secondary_model: No Model Selected
demucs_drums_secondary_model: No Model Selected
demucs_is_secondary_model_activate: False
demucs_voc_inst_secondary_model_scale: 0.9
demucs_other_secondary_model_scale: 0.7
demucs_bass_secondary_model_scale: 0.5
demucs_drums_secondary_model_scale: 0.5
demucs_pre_proc_model: No Model Selected
is_demucs_pre_proc_model_activate: False
is_demucs_pre_proc_model_inst_mix: False
mdx_net_model: UVR-MDX-NET 3
chunks: Auto
margin: 44100
compensate: Auto
denoise_option: None
is_match_frequency_pitch: True
phase_option: Automatic
phase_shifts: None
is_save_align: False
is_match_silence: True
is_spec_match: False
is_mdx_c_seg_def: False
is_invert_spec: False
is_deverb_vocals: False
deverb_vocal_opt: Main Vocals Only
voc_split_save_opt: Lead Only
is_mixer_mode: False
mdx_batch_size: Default
mdx_voc_inst_secondary_model: No Model Selected
mdx_other_secondary_model: No Model Selected
mdx_bass_secondary_model: No Model Selected
mdx_drums_secondary_model: No Model Selected
mdx_is_secondary_model_activate: False
mdx_voc_inst_secondary_model_scale: 0.9
mdx_other_secondary_model_scale: 0.7
mdx_bass_secondary_model_scale: 0.5
mdx_drums_secondary_model_scale: 0.5
is_save_all_outputs_ensemble: True
is_append_ensemble_name: False
chosen_audio_tool: Manual Ensemble
choose_algorithm: Min Spec
time_stretch_rate: 2.0
pitch_rate: 2.0
is_time_correction: True
is_gpu_conversion: False
is_primary_stem_only: False
is_secondary_stem_only: False
is_testing_audio: False
is_auto_update_model_params: True
is_add_model_name: False
is_accept_any_input: False
is_task_complete: False
is_normalization: False
is_use_opencl: False
is_wav_ensemble: False
is_create_model_folder: False
mp3_bit_set: 320k
semitone_shift: 0
save_format: WAV
wav_type_set: PCM_16
device_set: NVIDIA GeForce GTX 1650:0
help_hints_var: True
set_vocal_splitter: VR Arc: 6_HP-Karaoke-UVR
is_set_vocal_splitter: True
is_save_inst_set_vocal_splitter: False
model_sample_mode: False
model_sample_mode_duration: 30
demucs_stems: All Stems
mdx_stems: All Stems
So, what can I do? It doesn't happen with other separation models :(
Thanks in advance | open | 2024-02-04T18:11:36Z | 2024-02-04T18:13:38Z | https://github.com/Anjok07/ultimatevocalremovergui/issues/1148 | [] | YONDEE76 | 0 |
yt-dlp/yt-dlp | python | 11,755 | Patreon alternative url | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
EU
### Provide a description that is worded well enough to be understood
When trying to download an entire Patreon channel/campaign, the URL the browser forwards you to has the format: https://www.patreon.com/c/(channel)/posts or https://www.patreon.com/c/(channel)
However, this format is not recognised by the Patreon support in yt-dlp. If you instead enter the channel as https://www.patreon.com/(channel)/posts or https://www.patreon.com/(channel) it does work, but that is not the URL the browser uses, which makes copying more tedious. Is it possible to add support for the new alternative?
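For now I work around it by rewriting the URL by hand before passing it to yt-dlp; a trivial sketch of that rewrite (plain Python; the channel name is just the example from the logs below):

```python
def strip_c_segment(url: str) -> str:
    """https://www.patreon.com/c/<channel>[/posts] -> https://www.patreon.com/<channel>[/posts]"""
    return url.replace("www.patreon.com/c/", "www.patreon.com/", 1)

print(strip_c_segment("https://www.patreon.com/c/OgSog/posts"))
# -> https://www.patreon.com/OgSog/posts
```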
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [X] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', '-s', '--cookies-from-browser', 'firefox', 'https://www.patreon.com/c/OgSog']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2024.12.06 from yt-dlp/yt-dlp [4bd265539] (win_exe)
[debug] Python 3.10.11 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 1.1.1t 7 Feb 2023)
[debug] exe versions: ffmpeg 4.2.2
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.40.1, urllib3-2.2.3, websockets-14.1
[debug] Proxy map: {}
Extracting cookies from firefox
[debug] Extracting cookies from: "C:\Users\User\AppData\Roaming\Mozilla\Firefox\Profiles\53ausj81.default-release\cookies.sqlite"
Extracted 278 cookies from firefox
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: stable@2024.12.06 from yt-dlp/yt-dlp
yt-dlp is up to date (stable@2024.12.06 from yt-dlp/yt-dlp)
[generic] Extracting URL: https://www.patreon.com/c/OgSog
[generic] OgSog: Downloading webpage
[redirect] Following redirect to https://www.patreon.com/c/OgSog/posts
[generic] Extracting URL: https://www.patreon.com/c/OgSog/posts
[generic] posts: Downloading webpage
WARNING: [generic] Falling back on generic information extractor
[generic] posts: Extracting information
[debug] Looking for embeds
ERROR: Unsupported URL: https://www.patreon.com/c/OgSog/posts
Traceback (most recent call last):
File "yt_dlp\YoutubeDL.py", line 1624, in wrapper
File "yt_dlp\YoutubeDL.py", line 1759, in __extract_info
File "yt_dlp\extractor\common.py", line 742, in extract
File "yt_dlp\extractor\generic.py", line 2553, in _real_extract
yt_dlp.utils.UnsupportedError: Unsupported URL: https://www.patreon.com/c/OgSog/posts
PS D:\Source\Mercurial\YoutubeDL\YoutubeDLGui2\bin\Debug\net8.0-windows10.0.19041.0\ffmpeg> ./yt-dlp.exe -vU -s --cookies-from-browser firefox https://www.patreon.com/OgSog
[debug] Command-line config: ['-vU', '-s', '--cookies-from-browser', 'firefox', 'https://www.patreon.com/OgSog']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2024.12.06 from yt-dlp/yt-dlp [4bd265539] (win_exe)
[debug] Python 3.10.11 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 1.1.1t 7 Feb 2023)
[debug] exe versions: ffmpeg 4.2.2
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.40.1, urllib3-2.2.3, websockets-14.1
[debug] Proxy map: {}
Extracting cookies from firefox
[debug] Extracting cookies from: "C:\Users\User\AppData\Roaming\Mozilla\Firefox\Profiles\53ausj81.default-release\cookies.sqlite"
Extracted 278 cookies from firefox
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: stable@2024.12.06 from yt-dlp/yt-dlp
yt-dlp is up to date (stable@2024.12.06 from yt-dlp/yt-dlp)
[patreon:campaign] Extracting URL: https://www.patreon.com/OgSog
[patreon:campaign] OgSog: Downloading webpage
[patreon:campaign] 8504388: Downloading campaign info
[download] Downloading playlist: OGSoG
[patreon:campaign] 8504388: Downloading posts page 1
[patreon:campaign] 8504388: Downloading posts page 2
[patreon:campaign] 8504388: Downloading posts page 3
[patreon:campaign] 8504388: Downloading posts page 4
[patreon:campaign] 8504388: Downloading posts page 5
[patreon:campaign] 8504388: Downloading posts page 6
[patreon:campaign] 8504388: Downloading posts page 7
[patreon:campaign] 8504388: Downloading posts page 8
(...)
```
| closed | 2024-12-06T22:00:53Z | 2024-12-12T13:44:21Z | https://github.com/yt-dlp/yt-dlp/issues/11755 | [
"site-bug"
] | Levi--G | 0 |
InstaPy/InstaPy | automation | 6230 | Cannot detect post media type. Skip https://www.instagram.com/p/CQDsnilnRGg/ | ## Expected Behavior : The bot should like the photos from the tags provided.
## Current Behavior : Cannot detect post media type. Skip https://www.instagram.com/p/CQDsnilnRGg/
## Possible Solution (optional) :
## InstaPy configuration : 0.6.13
This issue is persistent. It happens for all the tags I provide.
| open | 2021-06-13T11:12:45Z | 2022-01-22T18:59:06Z | https://github.com/InstaPy/InstaPy/issues/6230 | [] | theCuriousHAT | 9 |
yt-dlp/yt-dlp | python | 11,936 | ERROR: unable to download video data: HTTP Error 403: Forbidden | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
Portugal
### Provide a description that is worded well enough to be understood
[youtube:tab] Extracting URL: https://www.youtube.com/watch?v=1k8craCGpgs&list=PLUueilRKpIbIrG1_epVxnVBsL4lkmlBWL
[youtube:tab] Downloading playlist PLUueilRKpIbIrG1_epVxnVBsL4lkmlBWL - add --no-playlist to download just the video 1k8craCGpgs
[youtube:tab] PLUueilRKpIbIrG1_epVxnVBsL4lkmlBWL: Downloading webpage
[youtube:tab] Extracting URL: https://www.youtube.com/playlist?list=PLUueilRKpIbIrG1_epVxnVBsL4lkmlBWL
[youtube:tab] PLUueilRKpIbIrG1_epVxnVBsL4lkmlBWL: Downloading webpage
[youtube:tab] PLUueilRKpIbIrG1_epVxnVBsL4lkmlBWL: Redownloading playlist API JSON with unavailable videos
[download] Downloading playlist: Greatest Hits
[youtube:tab] PLUueilRKpIbIrG1_epVxnVBsL4lkmlBWL page 1: Downloading API JSON
[youtube:tab] Playlist Greatest Hits: Downloading 21 items of 21
[download] Downloading item 1 of 21
[youtube] Extracting URL: https://www.youtube.com/watch?v=1k8craCGpgs
[youtube] 1k8craCGpgs: Downloading webpage
[youtube] 1k8craCGpgs: Downloading ios player API JSON
[youtube] 1k8craCGpgs: Downloading mweb player API JSON
[youtube] 1k8craCGpgs: Downloading m3u8 information
[info] 1k8craCGpgs: Downloading 1 format(s): 251
ERROR: unable to download video data: HTTP Error 403: Forbidden
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [X] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['--no-flat-playlist', '--recode-video', 'mp3', '--extract-audio', '--audio-quality', '0', '--progress', '-P', 'Greatests', '--no-keep-video', '--ffmpeg-location', 'bin\\ffmpeg.exe', 'https://www.youtube.com/watch?v=1k8craCGpgs&list=PLUueilRKpIbIrG1_epVxnVBsL4lkmlBWL', '-vU']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2024.12.23 from yt-dlp/yt-dlp [65cf46cdd] (win_exe)
[debug] Python 3.10.11 (CPython AMD64 64bit) - Windows-10-10.0.26100-SP0 (OpenSSL 1.1.1t 7 Feb 2023)
[debug] exe versions: ffmpeg 7.1-full_build-www.gyan.dev (setts)
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.12.14, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.40.1, urllib3-2.3.0, websockets-14.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: stable@2024.12.23 from yt-dlp/yt-dlp
yt-dlp is up to date (stable@2024.12.23 from yt-dlp/yt-dlp)
[youtube:tab] Extracting URL: https://www.youtube.com/watch?v=1k8craCGpgs&list=PLUueilRKpIbIrG1_epVxnVBsL4lkmlBWL
[youtube:tab] Downloading playlist PLUueilRKpIbIrG1_epVxnVBsL4lkmlBWL - add --no-playlist to download just the video 1k8craCGpgs
[youtube:tab] PLUueilRKpIbIrG1_epVxnVBsL4lkmlBWL: Downloading webpage
[youtube:tab] Extracting URL: https://www.youtube.com/playlist?list=PLUueilRKpIbIrG1_epVxnVBsL4lkmlBWL
[youtube:tab] PLUueilRKpIbIrG1_epVxnVBsL4lkmlBWL: Downloading webpage
[youtube:tab] PLUueilRKpIbIrG1_epVxnVBsL4lkmlBWL: Redownloading playlist API JSON with unavailable videos
[download] Downloading playlist: Greatest Hits
[youtube:tab] PLUueilRKpIbIrG1_epVxnVBsL4lkmlBWL page 1: Downloading API JSON
[youtube:tab] Playlist Greatest Hits: Downloading 21 items of 21
[download] Downloading item 1 of 21
[youtube] Extracting URL: https://www.youtube.com/watch?v=1k8craCGpgs
[youtube] 1k8craCGpgs: Downloading webpage
[youtube] 1k8craCGpgs: Downloading ios player API JSON
[youtube] 1k8craCGpgs: Downloading mweb player API JSON
[debug] [youtube] 1k8craCGpgs: ios client https formats require a PO Token which was not provided. They will be skipped as they may yield HTTP Error 403. You can manually pass a PO Token for this client with --extractor-args "youtube:po_token=ios+XXX. For more information, refer to https://github.com/yt-dlp/yt-dlp/wiki/Extractors#po-token-guide . To enable these broken formats anyway, pass --extractor-args "youtube:formats=missing_pot"
[debug] [youtube] Extracting signature function js_03dbdfab_103
[debug] Loading youtube-sigfuncs.js_03dbdfab_103 from cache
[debug] Loading youtube-nsig.03dbdfab from cache
[debug] [youtube] Decrypted nsig xcFjgtQEcUThDeOAu => aD1dpXj_KhAXXQ
[debug] Loading youtube-nsig.03dbdfab from cache
[debug] [youtube] Decrypted nsig 1BNE-vzslYXwWEdm8 => Af5aeNh4_DWmsQ
[debug] [youtube] Extracting signature function js_03dbdfab_107
[debug] Loading youtube-sigfuncs.js_03dbdfab_107 from cache
[youtube] 1k8craCGpgs: Downloading m3u8 information
[debug] Sort order given by extractor: quality, res, fps, hdr:12, source, vcodec, channels, acodec, lang, proto
[debug] Formats sorted by: hasvid, ie_pref, quality, res, fps, hdr:12(7), source, vcodec, channels, acodec, lang, proto, size, br, asr, vext, aext, hasaud, id
[info] 1k8craCGpgs: Downloading 1 format(s): 251
[debug] Invoking http downloader on "https://rr2---sn-apn7en7s.googlevideo.com/videoplayback?expire=1735422821&ei=BR9wZ8uUJtL_xN8PrYDooQc&ip=2001%3A818%3Ae348%3A5700%3Aedb1%3A824c%3Aea5%3A4766&id=o-ACT93_x6_h-kCq94ZSp-cGP69qflHD4EhVTvbIXc8GWW&itag=251&source=youtube&requiressl=yes&xpc=EgVo2aDSNQ%3D%3D&met=1735401221%2C&mh=Hb&mm=31%2C29&mn=sn-apn7en7s%2Csn-apn7en7e&ms=au%2Crdu&mv=m&mvi=2&pl=42&rms=au%2Cau&gcr=pt&initcwndbps=3205000&bui=AfMhrI9nMXPjWRFcRuJVhIXPnQ0nSkpTR0q5HP3BiCju2z5zFSuJ8HnAJqCo3hrnYwDPElJekx_djQ2o&vprv=1&svpuc=1&mime=audio%2Fwebm&ns=DGLDagkRqm4fZ2U_LBNQpbwQ&rqh=1&gir=yes&clen=4080051&dur=250.241&lmt=1715026746312448&mt=1735400726&fvip=1&keepalive=yes&fexp=51326932%2C51331020%2C51335594%2C51371294&c=MWEB&sefc=1&txp=4502434&n=Af5aeNh4_DWmsQ&sparams=expire%2Cei%2Cip%2Cid%2Citag%2Csource%2Crequiressl%2Cxpc%2Cgcr%2Cbui%2Cvprv%2Csvpuc%2Cmime%2Cns%2Crqh%2Cgir%2Cclen%2Cdur%2Clmt&lsparams=met%2Cmh%2Cmm%2Cmn%2Cms%2Cmv%2Cmvi%2Cpl%2Crms%2Cinitcwndbps&lsig=AGluJ3MwRQIgIwTLxhXn3cBWkMlKNtfMvbGZDpFOw6FOPxoKU5fuF9oCIQCtQ85AYg-RCwU6m4Zh88cLaPrqyPi6W6v-WnLIlIlxNQ%3D%3D&sig=AJfQdSswRgIhAOV02As8oxH56O10UMeUwUg8WWWBGh2pfI3ZiwKgutMoAiEA62HB06Y07TslVdnItLVjyyYxgECyEN5BhaDwZA2O4Dg%3D"
ERROR: unable to download video data: HTTP Error 403: Forbidden
```
| closed | 2024-12-28T15:54:54Z | 2024-12-28T22:09:06Z | https://github.com/yt-dlp/yt-dlp/issues/11936 | [
"duplicate",
"site-bug",
"site:youtube"
] | Persona78 | 2 |
graphql-python/graphene-django | django | 832 | In and Range filter is not working. | closed | 2019-12-24T18:23:21Z | 2019-12-26T14:12:18Z | https://github.com/graphql-python/graphene-django/issues/832 | [] | zayazayazaya | 1 |
|
keras-team/keras | tensorflow | 20,444 | Model.fit() error. Someone please help me fix this error. I am not able to figure it out | I'm building a capsule network in TensorFlow for binary classification using a custom CapsuleLayer. My model and associated components are as follows:
```python
# Imports added for completeness; the original post did not include them.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras import backend as K
from tensorflow.keras.layers import Input, Conv2D, Reshape, Lambda


class CapsuleLayer(layers.Layer):
    def __init__(self, num_capsule, dim_capsule, routings=3, **kwargs):
        super(CapsuleLayer, self).__init__(**kwargs)
        self.num_capsule = num_capsule
        self.dim_capsule = dim_capsule
        self.routings = routings

    def build(self, input_shape):
        self.kernel = self.add_weight(name='capsule_kernel',
                                      shape=(input_shape[-1], self.num_capsule * self.dim_capsule),
                                      initializer='glorot_uniform',
                                      trainable=True)

    def call(self, inputs):
        inputs_hat = K.dot(inputs, self.kernel)
        inputs_hat = K.reshape(inputs_hat, (-1, self.num_capsule, self.dim_capsule))
        b = K.zeros_like(inputs_hat[:, :, 0])
        # dynamic routing between capsules
        for i in range(self.routings):
            c = tf.nn.softmax(b, axis=1)
            o = squash(tf.reduce_sum(c[..., None] * inputs_hat, 1))
            if i < self.routings - 1:
                b += tf.reduce_sum(inputs_hat * o[:, None, :], -1)
        return o


def squash(vectors, axis=-1):
    s_squared_norm = K.sum(K.square(vectors), axis, keepdims=True)
    scale = s_squared_norm / (1 + s_squared_norm) / K.sqrt(s_squared_norm + K.epsilon())
    return scale * vectors


# Network architecture and margin loss
def CapsNet(input_shape):
    inputs = Input(shape=input_shape)
    x = Conv2D(64, (9, 9), strides=1, activation='relu', padding='valid')(inputs)
    x = Conv2D(128, (9, 9), strides=2, activation='relu', padding='valid')(x)
    x = Reshape((-1, 8))(x)
    primary_caps = CapsuleLayer(num_capsule=10, dim_capsule=8, routings=3)(x)
    digit_caps = CapsuleLayer(num_capsule=2, dim_capsule=16, routings=3)(primary_caps)
    out_caps = Lambda(lambda z: K.sqrt(K.sum(K.square(z), -1)))(digit_caps)
    return models.Model(inputs, out_caps)


def margin_loss(y_true, y_pred):
    m_plus, m_minus, lambda_val = 0.9, 0.1, 0.5
    left = tf.square(tf.maximum(0., m_plus - y_pred))
    right = tf.square(tf.maximum(0., y_pred - m_minus))
    return tf.reduce_mean(tf.reduce_sum(y_true * left + lambda_val * (1 - y_true) * right, axis=-1))
```
When training, I receive this error:
```
ValueError: Cannot squeeze axis=-1, because the dimension is not 1.
```
I've set class_mode='categorical' in the ImageDataGenerator flow:
```python
train_generator = train_datagen.flow_from_directory(train_dir, target_size=(224, 224),
                                                    color_mode='grayscale', batch_size=64,
                                                    class_mode='categorical')
```
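For completeness, the way I wire these together is roughly the following (paraphrased from memory, since I did not paste my exact compile/fit lines above):

```python
# paraphrased training call; exact arguments may differ from my real script
model = CapsNet(input_shape=(224, 224, 1))   # grayscale 224x224, matching the generator above
model.compile(optimizer='adam', loss=margin_loss, metrics=['accuracy'])
model.fit(train_generator, epochs=10)        # this is where the ValueError appears
```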
I am using this model to classify an image dataset into 2 classes. Please help! | closed | 2024-11-04T01:29:39Z | 2024-12-21T02:00:56Z | https://github.com/keras-team/keras/issues/20444 | [
"stat:awaiting response from contributor",
"stale"
] | Israh-Abdul | 4 |
Nemo2011/bilibili-api | api | 248 | [Question] Unable to download historical danmaku | **Python version:** 3.10
**Module version:** 15.3.1
**Runtime environment:** Windows
---
As the title says, the code for downloading historical danmaku fails with an error saying the Credential class was not provided a sessdata
```python
import datetime
from bilibili_api import ass, sync, video
from bilibili_api import Credential
cred = Credential(sessdata="my sessdata",
……)
dt = datetime.date(2022, 5, 5)
sync(ass.make_ass_file_danmakus_protobuf(video.Video("BV1AV411x7Gs"), 0,
dt.strftime("%Y-%m-%d"), credential=cred,
date=dt))
```
 | closed | 2023-03-28T15:47:51Z | 2023-08-17T08:18:50Z | https://github.com/Nemo2011/bilibili-api/issues/248 | [
"question"
] | debuggerzh | 2 |
home-assistant/core | python | 141,112 | Statistics log division by zero errors | ### The problem
The statistics sensor produces division by zero errors in the log. This seems to be caused by having values that have identical change timestamps.
It might be that this was caused by sensors that were updated several times in a very short time interval and the precision of the timestamps is too low to distinguish the two change timestamps (that is just a guess though). It could also be that this is something triggered by the startup phase.
I also saw that there was a recent change in the code where timestamps were replaced with floats, which might have reduced the precision of the timestamp delta calculation.
I can easily reproduce the problem, so it is not a once in a lifetime exceptional case.
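To make the failure mode concrete, this is a minimal sketch of the guard I would expect around the division that fails in the traceback below (my own simplification, not the actual component code):

```python
def average_step(states: list[float], ages: list[float]) -> float | None:
    """Time-weighted (step) average; `ages` are the state-change timestamps as floats."""
    age_range_seconds = ages[-1] - ages[0]
    if age_range_seconds <= 0:
        # all samples share (effectively) the same timestamp -> avoid ZeroDivisionError
        return states[-1] if states else None
    area = sum(
        state * (next_age - age)
        for state, age, next_age in zip(states, ages, ages[1:])
    )
    return area / age_range_seconds
```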
### What version of Home Assistant Core has the issue?
core-2025.3.4
### What was the last working version of Home Assistant Core?
_No response_
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
statistics
### Link to integration documentation on our website
https://www.home-assistant.io/integrations/statistics/
### Diagnostics information
_No response_
### Example YAML snippet
```yaml
```
### Anything in the logs that might be useful for us?
```txt
`2025-03-22 09:57:43.540 ERROR (MainThread) [homeassistant.helpers.event] Error while dispatching event for sensor.inverter_production to <Job track state_changed event ['sensor.inverter_production'] HassJobType.Callback <bound method StatisticsSensor._async_stats_sensor_state_change_listener of <entity sensor.inverter_production_avg_15s=0.0>>>
Traceback (most recent call last):
File "/usr/src/homeassistant/homeassistant/helpers/event.py", line 355, in _async_dispatch_entity_id_event
hass.async_run_hass_job(job, event)
~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "/usr/src/homeassistant/homeassistant/core.py", line 940, in async_run_hass_job
hassjob.target(*args)
~~~~~~~~~~~~~~^^^^^^^
File "/usr/src/homeassistant/homeassistant/components/statistics/sensor.py", line 748, in _async_stats_sensor_state_change_listener
self._async_handle_new_state(event.data["new_state"])
~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/src/homeassistant/homeassistant/components/statistics/sensor.py", line 734, in _async_handle_new_state
self._async_purge_update_and_schedule()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "/usr/src/homeassistant/homeassistant/components/statistics/sensor.py", line 986, in _async_purge_update_and_schedule
self._update_value()
~~~~~~~~~~~~~~~~~~^^
File "/usr/src/homeassistant/homeassistant/components/statistics/sensor.py", line 1097, in _update_value
value = self._state_characteristic_fn(self.states, self.ages, self._percentile)
File "/usr/src/homeassistant/homeassistant/components/statistics/sensor.py", line 142, in _stat_average_step
return area / age_range_seconds
~~~~~^~~~~~~~~~~~~~~~~~~
ZeroDivisionError: float division by zero`
```
### Additional information
_No response_ | open | 2025-03-22T12:38:36Z | 2025-03-24T18:49:06Z | https://github.com/home-assistant/core/issues/141112 | [
"integration: statistics"
] | unfug-at-github | 3 |
cvat-ai/cvat | tensorflow | 8,738 | Image Quality documentation-reality discrepancy | ### Actions before raising this issue
- [X] I searched the existing issues and did not find anything similar.
- [X] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Steps to Reproduce
1) See [user manual - advanced configuration](https://docs.cvat.ai/docs/manual/basics/create_an_annotation_task/#advanced-configuration), the part about Image Quality and read that when Image Quality is set to 100% then the image is not compressed
2) Follow the procedure with the image below

3) Open the editor and get surprised to see a compressed image:

4) Try to monitor traffic and see that the default data fetching request is `.../data?quality=compressed`. This is where the documentation is wrong. According to the documentation there should be no compression applied. Using `.../data?quality=` returns the data in its original state.
### Expected Behavior
Opening the task, you would expect to see that image not compressed.
### Possible Solution
Respect the user settings. When image quality is set to 100% then either
A) Don't send `quality=compress` query params from frontend, send `quality=standard`
B) Send `quality=compress` but return uncompressed data because you identified user compression settings
Another solution is to fix the documentation so it would not state false information, but that would be a bummer.
### Context
I am trying to use CVAT for its original purpose with the data provided above
### Environment
```Markdown
- Using local deployment
- From `develop` branch commit `2cca2dd3cc61290aeac138443979cd28571e5846` from Oct 22nd
- Relevant links:
- 4yo closed issue: https://github.com/cvat-ai/cvat/issues/1900
- open discussion https://github.com/cvat-ai/cvat/discussions/3824
```
| closed | 2024-11-22T15:12:10Z | 2024-11-25T11:05:15Z | https://github.com/cvat-ai/cvat/issues/8738 | [
"bug"
] | jaroslavknotek | 4 |
newpanjing/simpleui | django | 104 | Suggestion: improve the Back button logic when using "Save and add another" and "Save and continue editing" | **What would you like to see improved?**
1. When using "Save and add another" or "Save and continue editing", clicking Back returns to the previous page rather than the list page. Impact: after saving this way several times, the Back button has to be clicked multiple times before reaching the list page, which hurts the user experience.
**Leave your contact information so we can get in touch with you**
Email: 13021080852@163.com
| closed | 2019-06-27T10:21:12Z | 2019-07-09T05:48:31Z | https://github.com/newpanjing/simpleui/issues/104 | [
"enhancement"
] | OneCat08 | 0 |
ivy-llc/ivy | tensorflow | 28,638 | Fix Frontend Failing Test: tensorflow - order_statistics.numpy.ptp | closed | 2024-03-19T14:35:05Z | 2024-03-25T13:27:01Z | https://github.com/ivy-llc/ivy/issues/28638 | [
"Sub Task"
] | ZenithFlux | 0 |
|
xzkostyan/clickhouse-sqlalchemy | sqlalchemy | 19 | Can't reflect database with Array field in a table | sqlalchemy can't reflect database if it has a table with an Array field.
```sql
CREATE TABLE test_db.test(
id Int64,
array_field Array(Float64)
) ENGINE = Memory()
```
```python
import sqlalchemy as sa
from clickhouse_sqlalchemy import make_session
engine = sa.create_engine('clickhouse+native://user:password@host:9000/test_db')
ch_session = make_session(engine)
metadata = sa.MetaData(bind=engine, quote_schema='')
metadata.reflect()
```
Raises:
```
TypeError Traceback (most recent call last)
<ipython-input-81-1a8e5f1aa1a4> in <module>()
2 ch_session = make_session(engine)
3 metadata = sa.MetaData(bind=engine, quote_schema='')
----> 4 metadata.reflect()
~/.pyenv/versions/jupyter3.6.4/lib/python3.6/site-packages/sqlalchemy/sql/schema.py in reflect(self, bind, schema, views, only, extend_existing, autoload_replace, **dialect_kwargs)
3907
3908 for name in load:
-> 3909 Table(name, self, **reflect_opts)
3910
3911 def append_ddl_listener(self, event_name, listener):
~/.pyenv/versions/jupyter3.6.4/lib/python3.6/site-packages/sqlalchemy/sql/schema.py in __new__(cls, *args, **kw)
437 except:
438 with util.safe_reraise():
--> 439 metadata._remove_table(name, schema)
440
441 @property
~/.pyenv/versions/jupyter3.6.4/lib/python3.6/site-packages/sqlalchemy/util/langhelpers.py in __exit__(self, type_, value, traceback)
64 self._exc_info = None # remove potential circular references
65 if not self.warn_only:
---> 66 compat.reraise(exc_type, exc_value, exc_tb)
67 else:
68 if not compat.py3k and self._exc_info and self._exc_info[1]:
~/.pyenv/versions/jupyter3.6.4/lib/python3.6/site-packages/sqlalchemy/util/compat.py in reraise(tp, value, tb, cause)
185 if value.__traceback__ is not tb:
186 raise value.with_traceback(tb)
--> 187 raise value
188
189 else:
~/.pyenv/versions/jupyter3.6.4/lib/python3.6/site-packages/sqlalchemy/sql/schema.py in __new__(cls, *args, **kw)
432 metadata._add_table(name, schema, table)
433 try:
--> 434 table._init(name, metadata, *args, **kw)
435 table.dispatch.after_parent_attach(table, metadata)
436 return table
~/.pyenv/versions/jupyter3.6.4/lib/python3.6/site-packages/sqlalchemy/sql/schema.py in _init(self, name, metadata, *args, **kwargs)
512 self._autoload(
513 metadata, autoload_with,
--> 514 include_columns, _extend_on=_extend_on)
515
516 # initialize all the column, etc. objects. done after reflection to
~/.pyenv/versions/jupyter3.6.4/lib/python3.6/site-packages/sqlalchemy/sql/schema.py in _autoload(self, metadata, autoload_with, include_columns, exclude_columns, _extend_on)
525 autoload_with.dialect.reflecttable,
526 self, include_columns, exclude_columns,
--> 527 _extend_on=_extend_on
528 )
529 else:
~/.pyenv/versions/jupyter3.6.4/lib/python3.6/site-packages/sqlalchemy/engine/base.py in run_callable(self, callable_, *args, **kwargs)
1532
1533 """
-> 1534 return callable_(self, *args, **kwargs)
1535
1536 def _run_visitor(self, visitorcallable, element, **kwargs):
~/.pyenv/versions/jupyter3.6.4/lib/python3.6/site-packages/sqlalchemy/engine/default.py in reflecttable(self, connection, table, include_columns, exclude_columns, **opts)
370 insp = reflection.Inspector.from_engine(connection)
371 return insp.reflecttable(
--> 372 table, include_columns, exclude_columns, **opts)
373
374 def get_pk_constraint(self, conn, table_name, schema=None, **kw):
~/.pyenv/versions/jupyter3.6.4/lib/python3.6/site-packages/sqlalchemy/engine/reflection.py in reflecttable(self, table, include_columns, exclude_columns, _extend_on)
596
597 for col_d in self.get_columns(
--> 598 table_name, schema, **table.dialect_kwargs):
599 found_table = True
600
~/.pyenv/versions/jupyter3.6.4/lib/python3.6/site-packages/sqlalchemy/engine/reflection.py in get_columns(self, table_name, schema, **kw)
372 coltype = col_def['type']
373 if not isinstance(coltype, TypeEngine):
--> 374 col_def['type'] = coltype()
375 return col_defs
376
TypeError: __init__() missing 1 required positional argument: 'item_type'
``` | closed | 2018-06-25T20:38:55Z | 2018-07-01T12:26:32Z | https://github.com/xzkostyan/clickhouse-sqlalchemy/issues/19 | [] | solokirrik | 1 |
nschloe/tikzplotlib | matplotlib | 31 | Missing brackets in line 654?! | I've tried this script with python3.2 and got syntax error in line 654. What you find there is,
``` python
print "Problem during transformation, continuing with original data"
```
If I just add the brackets, then the problem is solved:
``` python
print ("Problem during transformation, continuing with original data")
```
| closed | 2013-07-27T14:42:24Z | 2013-07-27T16:45:45Z | https://github.com/nschloe/tikzplotlib/issues/31 | [] | alexeyegorov | 1 |
onnx/onnx | tensorflow | 5,894 | CI Pipeline broken after onnxruntime 1.17 release | # Bug Report
### Describe the bug
The Mac OS CI pipeline is failing due to failures in pytest onnx/test/test_backend_onnxruntime.py
ONNXRunTime 1.17 was released yesterday (01/31/2024) https://pypi.org/project/onnxruntime/
which means that tests that were previously skipped are now getting executed (and fail).
See condition to skip tests: https://github.com/onnx/onnx/blob/main/onnx/test/test_backend_onnxruntime.py#L263
Also, it seems like these tests aren't running in the Linux CI because it passes there (fails on local Linux after updating ONNXRunTime to v1.17).
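In other words, the skip decision is keyed off the installed onnxruntime version; schematically it behaves like this sketch (my paraphrase only, the real condition is at the link above):

```python
# illustrative paraphrase, not the actual onnx test code
import unittest

import onnxruntime
from packaging.version import Version

skip_unless_new_ort = unittest.skipIf(
    Version(onnxruntime.__version__) < Version("1.17.0"),
    "requires onnxruntime >= 1.17",
)
# once 1.17.0 is installed from PyPI, previously skipped cases start executing (and currently fail)
```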
### Reproduction instructions
pip install onnxruntime==1.17.0
pytest onnx/test/test_backend_onnxruntime.py
### Expected behavior
All tests should pass
| closed | 2024-02-01T15:24:05Z | 2024-02-02T01:01:44Z | https://github.com/onnx/onnx/issues/5894 | [
"bug",
"topic: test"
] | galagam | 1 |
robotframework/robotframework | automation | 4588 | RF doc links should be able to open to a new page | The HTML of the RF documentation is such that one can only open a doc on the same page. Most of the time I need to open a number of them, so I have to open multiple pages. The current design breaks a very common use pattern of allowing the user to choose how to open a link.
http://robotframework.org/robotframework/ | closed | 2023-01-07T18:04:23Z | 2023-12-20T01:07:24Z | https://github.com/robotframework/robotframework/issues/4588 | [] | glueologist | 2 |
piskvorky/gensim | nlp | 3,201 | Web documentation don't correpond to Jupyter notebooks | I noticed that the web documentation sometimes don't correspond to what I see in the Jupyter notebooks.
For example, that's how I see it on the web:

And that's what I see on the Jupyter notebooks:

Here is another example:
On the web:

On Jupyter notebook (notice that the link to the python documentation is in full):

| open | 2021-07-23T09:02:51Z | 2021-07-23T12:14:24Z | https://github.com/piskvorky/gensim/issues/3201 | [
"bug",
"documentation"
] | raffaem | 0 |
PokeAPI/pokeapi | api | 1208 | Sylveon's back sprite is a blank value | When I tried to get the back sprite for Sylveon it didn't show up. I don't know whether this is normal or not (I'm new to this). Please don't get frustrated, I'm looking to help...
Thanks :) | open | 2025-02-23T01:06:28Z | 2025-03-17T00:09:09Z | https://github.com/PokeAPI/pokeapi/issues/1208 | [] | superL132 | 4 |
nteract/papermill | jupyter | 446 | Executing a notebook with parameters, but no expected output | Is there a possibility to run a notebook without either changing its original "template" (the notebook to process) or producing any output notebook? I.e., just take advantage of the parametrization, with the actual results produced as side effects rather than as notebook content?
There is a simple, yet not portable way to do it:
`$ papermill template.ipynb - -p value 5 > /dev/null`
or
`C:\> papermill template.ipynb - -p value 5 > NUL`
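On the Python-API side I don't know whether something equivalent already exists, but the ergonomics I have in mind would be roughly the following (purely illustrative; I'm not claiming this signature is supported today):

```python
import papermill as pm

# wish: inject parameters and execute, but write no output notebook at all
pm.execute_notebook("template.ipynb", None, parameters={"value": 5})
```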
Would it be possible to have a commandline option like `--no-output` which prevents any output creation, possibly enabling some optimization later on? | open | 2019-11-18T16:13:02Z | 2019-11-19T18:09:50Z | https://github.com/nteract/papermill/issues/446 | [] | tgandor | 2 |
mitmproxy/pdoc | api | 401 | Support GitHub Flavored Markdown or at least CommonMark | #### Problem Description
Given the proliferation and ubiquity of GitHub, I believe many unassuming developers (myself included) naively assumed that whatever Markdown syntax GH supports is part of the original 2004 Markdown spec. Even *if* developers knew the difference, they would likely prefer GitHub's flavour, as that's what's most common. https://github.com/mitmproxy/pdoc/issues/64 suggests this conflation is a problem.
I personally stumbled across this when I wanted a URL to automatically be hyperlinked with `pdoc`, not realizing the [original spec](https://daringfireball.net/projects/markdown/syntax#autolink) requires URLs to be wrapped in `<` and `>` characters. In a phrase: I've become spoiled by GFM.
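A tiny example of where this bites (hypothetical docstring, URL made up):

```python
def fetch_report():
    """Download the latest report.

    Original 2004 Markdown (and core CommonMark) only auto-link the wrapped form:
    <https://example.com/report>

    GFM's autolink extension also links the bare form:
    https://example.com/report
    """
```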
#### Proposal
Support the [GitHub Flavored Markdown spec](https://github.github.com/gfm/).
This can take place in a few possible ways:
* via a new `-d gfm` or `-d github-markdown` flag, reserving the `-d markdown` flag for the original 2004 spec
* replace the Markdown support in `pdoc` with the GFM version by default, optionally exposing a `-d original-markdown` for the 2004 spec
* A "pick n choose" approach where the most popular/used parts of GFM are supported on top of the current Markdown implementation (I don't really advocate for this one)
#### Alternatives
I guess not supporting GFM at all?
Independent of my proposal, `pdoc` should probably declare what version or flavour we colloquially know/call as "Markdown". The original is from 2004 released as a Perl script and has remained largely unchanged, warts and all.
[CommonMark](https://spec.commonmark.org/) appears to be the, well, *common* and well-defined and *versioned* specification of the original Markdown. Indeed, GFM is derived from CommonMark!
I would hope that `pdoc` would choose a version of the CommonMark spec and support that version explicitly.
#### Additional context
The [Differences from original Markdown](https://github.com/commonmark/commonmark-spec/#differences-from-original-markdown) section of [commonmark-spec](https://github.com/commonmark/commonmark-spec)'s README is enlightening. | closed | 2022-06-07T02:01:07Z | 2022-11-15T17:48:21Z | https://github.com/mitmproxy/pdoc/issues/401 | [
"enhancement"
] | f3ndot | 8 |
python-restx/flask-restx | flask | 601 | @api.marshal_with is meant to support headers but gets unexpected keyword argument | ```python
class World(Resource):
@api.marshal_with(bay_model, headers={"x-my-header": "description"})
def get(self, id):
        return {"hello": "world"}
```
### **Repro Steps** (if applicable)
1. Use a response specific header with marshal_with
2. Get an unexpected keyword argument error
### **Expected Behavior**
Make a response specific header doc
### **Actual Behavior**
Raises error
### **Error Messages/Stack Trace**
```
return marshal_with(fields, ordered=self.ordered, **kwargs)(func)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: marshal_with.__init__() got an unexpected keyword argument 'headers'
```
### Possible fix?
I think it can be fixed by adding `headers={}` to `__init__` in marshal_with.
```
def __init__(
self, fields, envelope=None, skip_none=False, mask=None, ordered=False, headers={}
):
```
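If this goes in, it may also be worth defaulting to None rather than a shared dict literal (a general Python habit more than anything else); a variant of the same sketch:

```python
def __init__(
    self, fields, envelope=None, skip_none=False, mask=None, ordered=False, headers=None
):
    self.headers = headers or {}  # sketch only: normalise None to an empty dict internally
```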
| open | 2024-04-15T08:23:30Z | 2024-07-24T15:54:39Z | https://github.com/python-restx/flask-restx/issues/601 | [
"bug"
] | hjmallon | 2 |
abhiTronix/vidgear | dash | 123 | Output video drops frames and then pads by duplicating the final frame many times | ## Description
### Acknowledgment
- [x ] A brief but descriptive Title of your issue
- [ x] I have searched the [issues](https://github.com/abhiTronix/vidgear/issues) for my issue and found nothing related or helpful.
- [x ] I have read the [FAQ](https://github.com/abhiTronix/vidgear/wiki/FAQ-&-Troubleshooting).
- [x ] I have read the [Wiki](https://github.com/abhiTronix/vidgear/wiki#vidgear).
- [x ] I have read the [Contributing Guidelines](https://github.com/abhiTronix/vidgear/blob/master/contributing.md).
### Environment
* VidGear version: 0.1.6
* Branch: <!--- Master/Testing/Development/PyPi -->
* Python version: 3.7.4
* pip version: 19.2.2
* Operating System and version: Windows 10
### Expected Behavior
When writing frames from a USB vision camera, I have vsync set to zero to prevent dropped or replicated frames. However, whenever the framerate of ffmpeg's encoding drops below the capture rate of my camera, vidgear/ffmpeg does not cache the frame in memory but instead drops it and compensates by replicating the final frame for each dropped frame.
Is there any way to prevent writer.write from sending the next frame to ffmpeg until the previous one has been written, or to ensure that frames are cached in system memory until they are written?
### Actual Behavior
Frames are dropped and the final frame is duplicated for each dropped frame
### Steps to reproduce
I am writing frames to a thread queue from the camera, then taking the frames from the queue in a separate thread and writing them to a video using vidgear.
The options I'm using are:
```python
output_params = {"-vcodec": "h264", "-bufsize": "1M", "-crf": 5, "-preset": "ultrafast", "-profile:v": "high", "-input_framerate": 90}  # define (codec, CRF, preset)
```
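For reference, the writer side of my pipeline is essentially this sketch (simplified; `output_params` is the dict above, and the WriteGear parameter names are from memory so they may not match 0.1.6 exactly):

```python
import queue

from vidgear.gears import WriteGear

frame_queue = queue.Queue()  # filled by the camera-capture thread (not shown here)

writer = WriteGear(output_filename="output.mp4", logging=True, **output_params)

while True:
    frame = frame_queue.get()   # blocks until the capture thread pushes a frame
    if frame is None:           # None is my stop sentinel
        break
    writer.write(frame)         # hands the frame to the ffmpeg pipe

writer.close()
```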
### Screenshot

| closed | 2020-04-08T04:41:37Z | 2020-04-13T04:42:44Z | https://github.com/abhiTronix/vidgear/issues/123 | [
"QUESTION :question:",
"SOLVED :checkered_flag:"
] | PolarBean | 15 |
hack4impact/flask-base | flask | 121 | 3 failing tests | <img width="842" alt="screen shot 2017-02-27 at 9 33 13 pm" src="https://cloud.githubusercontent.com/assets/4250521/23389174/75bead54-fd34-11e6-8038-b739ebf862ac.png">
| closed | 2017-02-28T02:34:09Z | 2017-02-28T02:34:36Z | https://github.com/hack4impact/flask-base/issues/121 | [] | yoninachmany | 1 |
neuml/txtai | nlp | 574 | Need urgent help | I'm facing issues while creating semantic search for tabular data. Please help me; I have tried several changes but haven't had any luck.
```
import txtai

# Build index with graph subindex
embeddings = txtai.Embeddings(
content=True,
defaults=False,
functions=[
{"name": "graph", "function": "indexes.act.graph.attribute"}
],
expressions=[
{"name": "topic", "expression": "graph(indexid, 'topic')"},
],
indexes={
"keywordProductTenderBrief": {
"path": "sentence-transformers/quora-distilbert-multilingual",
"columns": {
"text": "Keyword",
"text": "productname",
"text": "TenderBrief"
},
"graph": {
"topics": {}
}
},
"LocationIndex":{
"path": "sentence-transformers/quora-distilbert-multilingual",
"columns": {
"text": "City",
"text": "State"
}
}
}
)
```
This is my current embedding generation process, as demonstrated in one of the examples. I want to build hybrid semantic search for my tender dataset, but the results are not that great. I don't understand how to create multiple indexes properly or what their applications are. Please help. | closed | 2023-10-10T16:22:13Z | 2023-10-13T12:45:51Z | https://github.com/neuml/txtai/issues/574 | [] | raaj1v | 4 |
shibing624/text2vec | nlp | 51 | Why do the "evaluation results on Chinese matching datasets" and the "Chinese matching evaluation results of this project's released models" differ so much? | Sorry to bother you. I would like to ask why the "evaluation results on Chinese matching datasets" and the "Chinese matching evaluation results of this project's released models" listed in the README
differ so much. The released models seem noticeably lower across the board. Are the evaluation datasets different? Thanks! | closed | 2022-11-16T08:56:59Z | 2023-01-13T06:24:21Z | https://github.com/shibing624/text2vec/issues/51 | [
"question"
] | Ramlinbird | 1 |
NullArray/AutoSploit | automation | 1,074 | New | Not working properly in KALi | closed | 2019-05-06T07:56:42Z | 2019-05-06T08:08:56Z | https://github.com/NullArray/AutoSploit/issues/1074 | [] | lmx5200410 | 0 |
jina-ai/clip-as-service | pytorch | 607 | Good or bad embedding vectors? | Hi, this is not an issue. I just want to ask if you have any comment.
I am trying to create a domain-specific BERT by running further pre-training on my corpus from a Google checkpoint.
My goal is to create sentence embeddings, which I have already achieved with your repo, thanks.
But I don't know whether the generated embedding vectors are good or not, or good enough.
Is there any way to know the performance of embedding vectors?
I have read somewhere that you should just use them for a specific downstream task and check the result of that task. Is this the only way?
Any comment will be very appreciated. | open | 2020-12-02T09:40:07Z | 2020-12-02T09:40:07Z | https://github.com/jina-ai/clip-as-service/issues/607 | [] | reobroqn | 0 |
jupyter-incubator/sparkmagic | jupyter | 498 | Please delete | closed | 2018-12-17T14:54:32Z | 2018-12-17T15:28:56Z | https://github.com/jupyter-incubator/sparkmagic/issues/498 | [] | juliusvonkohout | 0 |
|
ijl/orjson | numpy | 335 | Segmentation fault - since 3.8.4 - alpine py-3.11 | Hello,
I am experiencing a `Segmentation fault` since version 3.8.4 when encoding nested data.
Issue can be replicated in docker `python:3.11-alpine` by running:
```
import datetime
import orjson
data = {'pfkrpavmb': 'maxyjzmvacdwjfiifmzwbztjmnqdsjesykpf', 'obtsdcnmi': 'psyucdnwjr', 'ghsccsccdwep': 1673954411550, 'vyqvkq': 'ilfcrjas', 'drfobem': {'mzqwuvwsglxx': 1673954411550, 'oup': 'mmimyli', 'pxfepg': {'pnqjr': 'ylttscz', 'rahfmy': 'xrcsutu', 'rccgrkom': 'fbt', 'xulnoryigkhtoybq': 'hubxdjrnaq', 'vdwriwvlgu': datetime.datetime(2023, 1, 15, 15, 23, 38, 686000, tzinfo=datetime.timezone.utc), 'fhmjsszqmxwfruiq': 'fzghfrbjxqccf', 'dyiurstuzhu': None, 'tdovgfimofmclc': datetime.datetime(2023, 1, 15, 15, 23, 38, 686000, tzinfo=datetime.timezone.utc), 'iyxkgbwxdlrdc': datetime.datetime(2023, 1, 17, 11, 19, 55, 761000, tzinfo=datetime.timezone.utc), 'jnjtckehsrtwhgzuhksmclk': ['tlejijcpbjzygepptbxgrugcbufncyupnivbljzhxe'], 'zewoojzsiykjf': datetime.datetime(2023, 1, 17, 11, 17, 46, 140960, tzinfo=datetime.timezone.utc), 'muzabbfnxptvqwzbeilkz': False, 'wdiuepootdqyniogblxgwkgcqezutcesb': None, 'lzkthufcerqnxdypdts': datetime.datetime(2023, 1, 17, 11, 19, 56, 73000, tzinfo=datetime.timezone.utc), 'epukgzafaubmn': 50000.0, 'cdpeessdedncodoajdqsos': 50000.0, 'adxucexfjgfwxo': 'jwuoomwdrfklgt', 'sotxdizdpuunbssidop': None, 'lxmgvysiltbzfkjne': None, 'wyeaarjbilfmjbfzjuzv': None, 'cwlcgx': -1272.22, 'oniptvyaub': -1275.75, 'hqsfeelokxlwnha': datetime.datetime(2023, 1, 17, 11, 19, 55, 886000, tzinfo=datetime.timezone.utc), 'nuidlcyrxcrkyytgrnmc': -733.5, 'wmofdeftonjcdnkg': -737.03, 'bnsttxjfxxgxphfiguqew': datetime.datetime(2023, 1, 17, 11, 19, 55, 886000, tzinfo=datetime.timezone.utc), 'audhoqqxjliwnsqttwsadmwwv': -737.03, 'badwwjzugwtdkbsamckoljfrrumtrt': datetime.datetime(2023, 1, 17, 11, 19, 55, 886000, tzinfo=datetime.timezone.utc), 'zlbggbbjgsugkgkqjycxwdx': -1241.28, 'fxueeffryeafcxtkfzdmlmgu': -538.72, 'yjmapfqummrsyujkosmixumjgfkwd': datetime.datetime(2023, 1, 16, 22, 59, 59, 999999, tzinfo=datetime.timezone.utc), 'qepdxlodjetleseyminybdvitcgd': None, 'ltokvpltajwbn': datetime.date(2023, 1, 17), 'ifzhionnrpeoorsupiniwbljek': datetime.datetime(2023, 1, 17, 11, 19, 49, 113000, tzinfo=datetime.timezone.utc), 'ljmmehacdawrlbhlhthm': -1241.28, 'jnwffrtloedorwctsclshnpwjq': -702.56, 'yhgssmtrmrcqhsdaekvoxyv': None, 'nfzljididdzkofkrjfxdloygjxfhhoe': None, 'mpctjlifbrgugaugiijj': None, 'ckknohnsefzknbvnmwzlxlajsckl': None, 'rfehqmeeslkcfbptrrghvivcrx': None, 'nqeovbshknctkgkcytzbhfuvpcyamfrafi': None, 'lptomdhvkvjnegsanzshqecas': 0, 'vkbijuitbghlywkeojjf': None, 'hzzmtggbqdglch': 'xgehztikx', 'yhmplqyhbndcfdafjvvr': False, 'oucaxvjhjapayexuqwvnnls': None, 'xbnagbhttfloffstxyr': 1673954411.5502248, 'eiqrshvbjmlyzqle': {'dkayiglkkhfrvbliqy': ['ohjuifj'], 'grqcjzqdiaslqaxhcqg': ['fuuxwsu']}, 'uflenvgkk': {'ehycwsz': {'jeikui': 'noxawd', 'gkrefq': 'hfonlfp', 'xkxs': 'jzt', 'ztpmv': 'mpscuot', 'zagmfzmgh': 'pdculhh', 'jgzsrpukwqoln': 100000.0, 'vlqzkxbwc': datetime.datetime(2023, 1, 17, 11, 19, 50, 867000, tzinfo=datetime.timezone.utc), 'cchovdmelbchcgvtg': -30.94, 'xvznnjfpwtdujqrh': 0.92059, 'tmsqwiiopyhlcovcxhojuzzyac': 1.0862009, 'tfzkaimjrpsbeswnrxeo': 0.0, 'isqjxmjupeiboufeaavkdj': -9.76, 'ywjqjiasfuifyqmz': 0.0, 'uvtlmdrk': 0.92028, 'dquzguej': None, 'guudreveynvhvhihegoybqrmejkj': datetime.datetime(2023, 1, 17, 11, 19, 56, 73000, tzinfo=datetime.timezone.utc), 'agvnijfztpbpatej': 'zym', 'mqsozcvnuvueixszfz': [{'oepzcayabl': 'givcnhztbdmili', 'rhhaorqbiziqvyhglecqw': True, 'paxvrmateisxfqs': 1.0862009, 'bydrnmhvj': {'kwqlickvqv': 'beinfgmofalgytujorwxqfvlxtbeujmqwrdqzkfpul', 'cxdikf': 'dfpbnpe', 'dnnhiy': 'reeenz', 'tx': datetime.datetime(2023, 1, 17, 11, 19, 56, 73000, 
tzinfo=datetime.timezone.utc), 'tck': datetime.date(2023, 1, 17), 'nvt': 0.92064, 'enc': 0.92059, 'icginezbybhcs': 1673954396073, 'gfamgxmknxirghgmtxl': 1673954411.5492423}}], 'dqiabsyky': {'hxzdtwunrr': 'fozhshbmijhujcznqykxtlaxfbtdpzvwvjtyuqzlyw', 'tmpscl': 'tbivvoa', 'vjjjvl': 'arukeb', 'fm': datetime.datetime(2023, 1, 17, 11, 19, 56, 73000, tzinfo=datetime.timezone.utc), 'rjq': datetime.date(2023, 1, 17), 'oax': 0.92064, 'gdv': 0.92059, 'vousomtllbpsh': 1673954396073, 'pgiblyqswxvwkpmpyay': 1673954411.5492423}, 'gebil': [{'bzrjh': 0.92065, 'izmljcvqinm': 3.25, 'legczrbxlrmcep': None}], 'eqg': [{'yngp': 'kako', 'udntq': {'wzygahsmwd': 'hplammnltegchpaorxaremhymtqtxdpfzzoyouimnw', 'iofcbwwgu': datetime.datetime(2023, 1, 17, 11, 19, 50, 867000, tzinfo=datetime.timezone.utc), 'nengib': 'zpyilz', 'sorpcw': 'ixhzipg', 'kruw': 'taq', 'vaqaj': 'kravspj', 'omkjhzkxp': 'watatag', 'ckwtjcqkjxmdn': 100000.0, 'kpjtgiuhfqx': 3.25, 'upkgqboyyg': 0.92065, 'gkshzyqtpmolnybr': 0.92065, 'oeiueaildnobcyzzpqwjwivkgj': 1.0861891, 'hiheqtjxyjnweryve': 0.0, 'wntcyohtaeylkylp': 0.0, 'jmebuufukzzymohzynpxzp': -9.76, 'rblubytyjuvbeurwrqmz': 0.0, 'xpscrgcnratymu': None}}], 'kpmgmiqgmswzebawzciss': -0.7, 'ktggnjdemtfxhnultritqokgbjucktdiooic': 0.92058, 'oawdfonaymcwwvmszhdlemjcnb': datetime.datetime(2023, 1, 17, 11, 19, 55, 886000, tzinfo=datetime.timezone.utc), 'bwfkzqjqqjdgbfbbjwoxhweihipy': 'lzvn', 'feslxjpurrukajwellwjqww': 0.0, 'ptuysyuuhwkfqlugjlxkohwanzijtzknupfikp': None, 'gquuleqhpsbyiluhijdddreenggl': datetime.datetime(2023, 1, 17, 11, 19, 50, 867000, tzinfo=datetime.timezone.utc), 'auhxrvhvvtszkkkpyhbhvpjlypjoyz': 'vqdxfdvgxqcu'}}}, 'qbov': 'vylhkevwf', 'uidiyv': {'qkyoj': {'cclzxqbosqmj': 1673954395761, 'rzijfrywwwcr': 1, 'toujesmzk': 'afnu', 'aqmpunnlt': 'nyreokscjljpfcrrstxgvwddphymgzkvuolbigqhla', 'ofrjrk': 'rlffwrw', 'legyfjl': {'byalenyqro': 'tbzhyxo', 'qxrtujt': 0.92028, 'onmhbvy': 0, 'cbhmp': 'vqrkzbg'}}}}}
orjson.dumps(data)
```
I've tested this also on 3.9-alpine, 3.10-alpine with same results, both used wheel for installation.
In debian base python:3.11 this works without issue.
| closed | 2023-01-17T13:14:45Z | 2023-02-09T14:56:26Z | https://github.com/ijl/orjson/issues/335 | [] | pbabics | 2 |
RobertCraigie/prisma-client-py | pydantic | 975 | Import generated types | ## Problem
I keep creating models that represent the data in my schema.prisma. I know that prisma python client has generated these types since I see them in the type errors. Is there a way for me to easily import these generated types into my codebase?
## Suggested solution
Similar to how the typescript client works, it would be great to be able to import the types from the python client.
EX:
```ts
import {
Education,
Gender,
SalaryRange,
} from "@prisma/client";
```
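For the Python client I would hope for something equivalent along these lines (the module paths are my assumption from skimming the generated client, and `Education`, `Gender`, `SalaryRange` come from my own schema):

```python
# hoped-for Python equivalent (assumed import paths; model/enum names from my schema)
from prisma.models import Education
from prisma.enums import Gender, SalaryRange
```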
| closed | 2024-07-17T06:08:15Z | 2024-07-21T16:11:48Z | https://github.com/RobertCraigie/prisma-client-py/issues/975 | [
"kind/question"
] | owencraston | 1 |
vitalik/django-ninja | pydantic | 1,133 | [BUG] | **Describe the bug**
Hello guys, the option `exclude_unset=True` isn't working for me for some reason:
**Versions (please complete the following information):**
- Python version: 3.12.1
- Django version: 5.0.1
- Django-Ninja version: 1.1.0
- Pydantic version: 2.6.4
Here is the snippet code:
```python
class ContractSchemaInput(ModelSchema):
sender_id: int
client_id: int
type_id: int
price_frequency: Literal["minute", "hourly", "daily", "weekly", "monthly"]
care_type: Literal["ambulante", "accommodation"]
attachment_ids: list[str] = []
class Meta:
model = Contract
exclude = ("id", "type", "sender", "client", "updated", "created")
@router.patch("/contracts/{int:id}/update", response=ContractSchema)
def update_client_contract(request: HttpRequest, id: int, contract: ContractSchemaInput):
print("Payload:", contract.dict(exclude_unset=True))
Contract.objects.filter(id=id).update(**contract.dict(exclude_unset=True))
return get_object_or_404(Contract, id=id)
```
Output error:
```bash
{
"detail": [
{
"type": "missing",
"loc": [
"body",
"contract",
"sender_id"
],
"msg": "Field required"
},
{
"type": "missing",
"loc": [
"body",
"contract",
"client_id"
],
"msg": "Field required"
},
{
"type": "missing",
"loc": [
"body",
"contract",
"type_id"
],
"msg": "Field required"
},
{
"type": "missing",
"loc": [
"body",
"contract",
"care_type"
],
"msg": "Field required"
},
{
"type": "missing",
"loc": [
"body",
"contract",
"start_date"
],
"msg": "Field required"
},
{
"type": "missing",
"loc": [
"body",
"contract",
"end_date"
],
"msg": "Field required"
},
{
"type": "missing",
"loc": [
"body",
"contract",
"care_name"
],
"msg": "Field required"
}
]
}
```
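The only workaround I can think of is a separate PATCH-specific schema where every field is explicitly optional, roughly like this (a sketch; the field types are guesses based on the error above):
```python
from typing import Optional
from ninja import Schema

class ContractPatchIn(Schema):
    # every field optional so a partial PATCH body passes validation;
    # contract.dict(exclude_unset=True) then drops whatever the client did not send
    sender_id: Optional[int] = None
    client_id: Optional[int] = None
    type_id: Optional[int] = None
    price_frequency: Optional[str] = None
    care_type: Optional[str] = None
    start_date: Optional[str] = None
    end_date: Optional[str] = None
    care_name: Optional[str] = None
```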
Please any suggestions? | open | 2024-04-19T11:00:57Z | 2024-04-19T14:07:40Z | https://github.com/vitalik/django-ninja/issues/1133 | [] | medram | 1 |
dpgaspar/Flask-AppBuilder | flask | 1,966 | Unable to turn off cert validation or point to CA bundle | In the following example, `API_BASE_URL` is an `https://` URL with self-signed certificates.
Requests fail with
> [2022-12-20T18:17:49.363+0000] {views.py:659} ERROR - Error authorizing OAuth access token: HTTPSConnectionPool(host='keycloak.redacted.redacted, port=443): Max retries exceeded with url: /auth/realms/redacted/protocol/openid-connect/token (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate (_ssl.c:1129)')))
How can we either:
- Set TLS verification to false, or
- Preferably, point FAB OAUTH_PROVIDERS config to a CA bundle to validate the cert?
```python
OAUTH_PROVIDERS = [
{
"name": "keycloak",
"token_key": "access_token",
"icon": "fa-key",
"remote_app": {
"api_base_url": API_BASE_URL,
"client_kwargs": {"scope": "email profile"},
"access_token_url": f"{API_BASE_URL}/token",
"authorize_url": f"{API_BASE_URL}/auth",
"request_token_url": None,
"client_id": CLIENT_ID,
"client_secret": CLIENT_SECRET,
},
}
]
``` | closed | 2022-12-20T18:42:20Z | 2023-02-23T09:04:10Z | https://github.com/dpgaspar/Flask-AppBuilder/issues/1966 | [] | brsolomon-deloitte | 1 |
slackapi/bolt-python | fastapi | 954 | Get Invalid 'user_id' when calling view.publish api for home tab view | I am getting this error when trying to publish to home tab
This is the Home tab event response:
```python
body: {'token': 'ET2WFtFts2w0ohKXgtAgXxVn', 'team_id': 'T044ELWP8RE', 'api_app_id': 'A04T4F7M77Z', 'event': {'type': 'app_home_opened', 'user': 'U059VM4U6PP', 'channel': 'D05A14JL1FE', 'tab': 'home', 'event_ts': '1693830258.248468'}, 'type': 'event_callback', 'event_id': 'Ev05RJS3616C', 'event_time': 1693830258, 'authorizations': [{'enterprise_id': None, 'team_id': 'T044ELWP8RE', 'user_id': 'U050X9B9XD2', 'is_bot': True, 'is_enterprise_install': False}], 'is_ext_shared_channel': False}
```
using this api
```
POST https://slack.com/api/views.publish
Content-type: application/json
Authorization: Bearer YOUR_TOKEN_HERE
{
"user_id": "YOUR_USER_ID",
"view": {
"type": "home",
"blocks": [
{
"type": "section",
"text": {
"type": "mrkdwn",
"text": "This is a Block Kit example"
},
"accessory": {
"type": "image",
"image_url": "https://api.slack.com/img/blocks/bkb_template_images/notifications.png",
"alt_text": "calendar thumbnail"
}
}
]
}
}
```
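For reference, the same publish through bolt-python's built-in client would look roughly like this (a sketch; app setup is abbreviated):
```python
from slack_bolt import App

app = App(token="xoxb-...", signing_secret="...")

@app.event("app_home_opened")
def publish_home(client, event, logger):
    # event["user"] is the member ID of the person who opened the Home tab
    client.views_publish(
        user_id=event["user"],
        view={
            "type": "home",
            "blocks": [
                {"type": "section",
                 "text": {"type": "mrkdwn", "text": "This is a Block Kit example"}}
            ],
        },
    )
```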
- I tried using both `event['user']` and `authorizations[0]['user_id']` as the user_id to replace YOUR_USER_ID
I am getting this error:
```
resp: {'ok': False, 'error': 'invalid_arguments', 'response_metadata': {'messages': ['[ERROR] invalid `user_id`']}}
```
I used the xoxb-xxxx-xxx token
What is the cause of this error? | closed | 2023-09-04T12:44:59Z | 2023-10-23T00:10:38Z | https://github.com/slackapi/bolt-python/issues/954 | [
"question",
"auto-triage-stale"
] | sunnex0 | 4 |
pallets/flask | python | 4,715 | remove ability to toggle lazy/eager loading | When the CLI was first introduced, it would always lazily load the application, and then resolve that lazy import the first time the app was needed. For the `run` command, in debug mode, that would be _after_ the server started, so that errors would show up in the debugger and the reloader would work across errors. This meant that errors wouldn't show up immediately when running the command, which was confusing. The `--eager-loading/--lazy-loading` option controlled whether that was disabled/enabled regardless of debug mode.
Later, this behavior was changed so that the app is always eagerly loaded the first time, and only lazily loaded on reloads. This makes errors show up consistently when running any command, including `run`, while still allowing the reloader to work across errors.
There shouldn't be a reason now to control loading. Errors will always be shown immediately in the terminal when a command is run. Lazy loading should always be used within the reloader to handle errors. | closed | 2022-07-30T17:13:39Z | 2022-08-16T00:06:54Z | https://github.com/pallets/flask/issues/4715 | [
"cli"
] | davidism | 0 |
feature-engine/feature_engine | scikit-learn | 843 | add additional ranking methods, like correlation, F-statistics and MI, to RFE and RFA | RFE and RFA rely on embedded methods to make the initial ranking of features, which will guide the order in which the features are added or selected. This makes these methods dependent on embedded methods (in other words, only suitable for linear models and tree-based models).
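Such a ranking does not need a fitted estimator at all; for instance, a mutual-information ranking can be computed directly (illustrative sketch with scikit-learn, not part of the current API):
```python
import pandas as pd
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import mutual_info_classif

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
mi = pd.Series(mutual_info_classif(X, y, random_state=0), index=X.columns)
ranking = mi.sort_values(ascending=False)  # order in which RFA could add features
```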
To make them fully model agnostic, we can add additional ranking methods, like correlation, F-statistics and MI | open | 2025-02-03T15:11:27Z | 2025-02-03T15:11:27Z | https://github.com/feature-engine/feature_engine/issues/843 | [] | solegalli | 0 |
suitenumerique/docs | django | 119 | ✨Version differences | ## Feature Request
See: https://github.com/numerique-gouv/impress/issues/112
It would be nice to be able to see the difference between 2 versions, like with `git diff`, but directly in the editor (or another suggestion?).
The editor comes from [blocknote ](https://www.blocknotejs.org/). Blocknote is based on [ProseMirror](https://prosemirror.net/).
Maybe a ProseMirror plugin already exists that implements this feature, or at least discusses it.
"enhancement",
"frontend",
"feature"
] | AntoLC | 0 |
krish-adi/barfi | streamlit | 21 | Add filtering in “Add Node” | if there are many nodes to select from, a filter will be required. | open | 2023-07-21T23:35:35Z | 2025-01-15T23:37:00Z | https://github.com/krish-adi/barfi/issues/21 | [
"enhancement"
] | GrimPixel | 1 |
dsdanielpark/Bard-API | api | 126 | Temporarily unavailable due to traffic or cookie issues | When running the API, I get this error:
Response Error: b')]}\'\n\n38\n[["wrb.fr",null,null,null,null,[8]]]\n56\n[["di",77],["af.httprm",77,"-6224477002209243600",24]]\n25\n[["e",4,null,null,131]]\n'.
Temporarily unavailable due to traffic or an error in cookie values. Please double-check the cookie values and verify your network environment.
I've triple checked that the cookie value is correct. I had this error when first using the API, and then it seemed to resolve but now it has come back. I am in the US so I'm not sure whether just to wait or if there is some solution that I can't figure out. Thank you! | closed | 2023-07-19T19:44:42Z | 2023-10-06T07:53:53Z | https://github.com/dsdanielpark/Bard-API/issues/126 | [] | valenmoore | 4 |
Significant-Gravitas/AutoGPT | python | 8,822 | Creator Dashboard - Sidebar different width than other pages | closed | 2024-11-27T13:10:11Z | 2024-12-09T16:30:51Z | https://github.com/Significant-Gravitas/AutoGPT/issues/8822 | [
"bug",
"UI",
"platform/frontend"
] | Swiftyos | 0 |
|
PaddlePaddle/models | nlp | 5,737 | Compiled with WITH_GPU, but no GPU found in runtime | 
FROM paddlepaddle/paddle:2.4.2-gpu-cuda11.7-cudnn8.4-trt8.4
I have used the above image as the base image.
RUN python -m pip install --no-cache-dir paddlepaddle-gpu==2.4.2.post117 -f https://www.paddlepaddle.org.cn/whl/linux/mkl/avx/stable.html
I am using the above version of paddle only because I get an error during export to ONNX with other versions: https://github.com/PaddlePaddle/Paddle2ONNX/issues/1147
The code runs fine on my local GPU, but on an ml.p2.xlarge instance with the AWS SageMaker Docker setup I get the above error. I have tried many combinations of images with the same issue. Can you help me with this? | open | 2023-09-11T06:15:40Z | 2024-02-26T05:07:42Z | https://github.com/PaddlePaddle/models/issues/5737 | [] | mahesh11T | 0 |
iperov/DeepFaceLive | machine-learning | 120 | Import Error : Dll load failed : unable to find the specified module WINDOWS 10 NVIDIA/DIRECTX12 | Same issue with Nvidia and Directx12 version
Running DeepFaceLive.
Traceback (most recent call last):
File "_internal\DeepFaceLive\main.py", line 95, in <module>
main()
File "_internal\DeepFaceLive\main.py", line 88, in main
args.func(args)
File "_internal\DeepFaceLive\main.py", line 30, in run_DeepFaceLive
from apps.DeepFaceLive.DeepFaceLiveApp import DeepFaceLiveApp
File "C:\DeepFaceLive_NVIDIA\_internal\DeepFaceLive\apps\DeepFaceLive\DeepFaceLiveApp.py", line 6, in <module>
from resources.gfx import QXImageDB
File "C:\DeepFaceLive_NVIDIA\_internal\DeepFaceLive\resources\gfx\__init__.py", line 1, in <module>
from .QXImageDB import QXImageDB
File "C:\DeepFaceLive_NVIDIA\_internal\DeepFaceLive\resources\gfx\QXImageDB.py", line 5, in <module>
from xlib.qt.gui.from_file import QXImage_from_file
File "C:\DeepFaceLive_NVIDIA\_internal\DeepFaceLive\xlib\qt\__init__.py", line 26, in <module>
from .gui.from_np import (QImage_ARGB32_from_buffer, QImage_BGR888_from_buffer,
File "C:\DeepFaceLive_NVIDIA\_internal\DeepFaceLive\xlib\qt\gui\from_np.py", line 6, in <module>
from ...image import ImageProcessor, get_NHWC_shape
File "C:\DeepFaceLive_NVIDIA\_internal\DeepFaceLive\xlib\image\__init__.py", line 1, in <module>
from .ImageProcessor import ImageProcessor
File "C:\DeepFaceLive_NVIDIA\_internal\DeepFaceLive\xlib\image\ImageProcessor.py", line 4, in <module>
import cv2
File "C:\DeepFaceLive_NVIDIA\_internal\python\lib\site-packages\cv2\__init__.py", line 181, in <module>
bootstrap()
File "C:\DeepFaceLive_NVIDIA\_internal\python\lib\site-packages\cv2\__init__.py", line 153, in bootstrap
native_module = importlib.import_module("cv2")
File "D:\obj\windows-release\37amd64_Release\msi_python\zip_amd64\__init__.py", line 127, in import_module
applySysPathWorkaround = True
ImportError: DLL load failed: Impossibile trovare il modulo specificato.
Premere un tasto per continuare . . .

| closed | 2023-01-18T11:09:52Z | 2023-01-18T11:38:26Z | https://github.com/iperov/DeepFaceLive/issues/120 | [] | NGIULO4444 | 0 |
marshmallow-code/marshmallow-sqlalchemy | sqlalchemy | 403 | Using the same attribute twice does not seem to work even with dump_only=True on a Nested field | I intend to use the same schema for loading and dumping. An example structure for this use case is: `Author -< Book`
### Loading
AuthorSchema field: `books = RelatedList(Related(), attribute='books')`
`AuthorSchema().load({ 'books': [ 1, 2 ] }) # {'books': [<Book>, <Book> ] }` ✅
### Dumping
AuthorSchema field: `books = Nested('BookSchema')`
`AuthorSchema().dump(Author.query.get(1)) # {'books': {'id': 1, 'title': 'Lorem ipsum' }}` ✅
### Both
Having `books = RelatedList(Related(), attribute='books')` and `books_list = Nested('BookSchema', dump_only=True)` as explained in https://github.com/marshmallow-code/marshmallow/issues/1038, works for loading but not for dumping. Should the field have the same name as the attribute? If so that would make it impossible for both to exist simultaneously.
However using Pluck as in https://github.com/marshmallow-code/marshmallow/issues/1038 gives the desired effect, only it would be tedious to replicate for all fields.
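To make the setup concrete, the two-field arrangement described above looks roughly like this (schema/model names are assumptions, and session wiring is omitted):
```python
from marshmallow import fields
from marshmallow_sqlalchemy import SQLAlchemySchema
from marshmallow_sqlalchemy.fields import Related, RelatedList

class AuthorSchema(SQLAlchemySchema):
    class Meta:
        model = Author  # assumes an Author model with a `books` relationship

    # accepts a list of primary keys when loading
    books = RelatedList(Related(), attribute="books")
    # serializes the related objects when dumping
    book_titles = fields.Pluck("BookSchema", "title", many=True,
                               attribute="books", dump_only=True)
```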
I might have an incomplete idea of how marshmallow/marshmallow-sqlalchemy works, so please feel free to elaborate, or to ask for further clarification on my specific use case.
_Addendum:_
I've noticed that in the case where you'd want to load a new `Author` as well as new nested `Book`s e.g. `AuthorSchema().load({ books: [{ 'title': 'My new book' }]})` (as show in [this SO questions](https://stackoverflow.com/questions/51751877/how-to-handle-nested-relationships-in-marshmallow-sqlalchemy)) you need the Nested field.
ETA: change example to more popular Author/Books model | closed | 2021-07-25T21:29:14Z | 2025-01-12T06:18:18Z | https://github.com/marshmallow-code/marshmallow-sqlalchemy/issues/403 | [] | aphilas | 1 |
noirbizarre/flask-restplus | api | 297 | Clone/Alias/Redirect-to a Namespace? | I have a namespace and API like:
```
ns = Namespace('myNamespace')
api = Api()
api.add_namespace(ns)
```
I can access the namespace at 'http://localhost/myNamespace'
I'd like to be able to **also** access the namespace at 'http://localhost/mynamespace'
What is the correct way to do this? Or is this not supported? I've tried cloning the namespace and adding a url_rule/route to the api, but if those methods should work I can't determine the correct syntax.
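Concretely, the kind of thing I've been attempting looks like this (I don't know whether registering the same namespace under two paths is even supported, hence the question):
```python
from flask_restplus import Api, Namespace

ns = Namespace('myNamespace')

api = Api()
api.add_namespace(ns, path='/myNamespace')
api.add_namespace(ns, path='/mynamespace')  # second, lower-case alias -- does this work?
```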
Thanks.
| open | 2017-06-30T03:40:10Z | 2022-05-10T13:40:14Z | https://github.com/noirbizarre/flask-restplus/issues/297 | [] | proximous | 2 |
jupyter-book/jupyter-book | jupyter | 1,929 | Issue on page /lectures/algo_analysis.html | Close, wrong issues repo… | closed | 2023-02-12T12:44:50Z | 2023-02-12T12:46:14Z | https://github.com/jupyter-book/jupyter-book/issues/1929 | [] | js-uri | 0 |
nonebot/nonebot2 | fastapi | 2,479 | Feature: support graceful shutdown (wait for running matchers to finish before exiting) | ### Problem to solve
I have a plugin, [nonebot-plugin-nagabus](https://github.com/bot-ssttkkl/nonebot-plugin-nagabus), that provides a naga game-record parsing service. Parsing consumes points; in other words, it costs money paid to naga. So on shutdown I'd like to wait until the user's request has been fully processed before exiting; otherwise the user spends points without getting a result.
### Describe the desired feature
When the bot shuts down, I don't want to interrupt the matcher that is currently running; I'd like to wait for it to finish before exiting.
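A condensed sketch of the behavior I'm after: count in-flight matchers and wait for them on shutdown (simplified, loosely based on the workaround linked below):
```python
import asyncio
import nonebot
from nonebot.message import run_preprocessor, run_postprocessor

running_matchers = 0

@run_preprocessor
async def _matcher_started():
    global running_matchers
    running_matchers += 1

@run_postprocessor
async def _matcher_finished():
    global running_matchers
    running_matchers -= 1

driver = nonebot.get_driver()

@driver.on_shutdown
async def _wait_for_matchers():
    # block shutdown until every in-flight matcher has finished
    while running_matchers > 0:
        await asyncio.sleep(0.1)
```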
At the moment this is implemented in an ugly way via hooks ( https://github.com/bot-ssttkkl/ssttkkl-nonebot-utils/blob/master/ssttkkl_nonebot_utils/interceptor/with_graceful_shutdown.py ); I hope NoneBot can provide official support for it. | closed | 2023-12-04T04:40:06Z | 2024-10-31T11:02:49Z | https://github.com/nonebot/nonebot2/issues/2479 | [
"wontfix"
] | ssttkkl | 5 |
ultralytics/ultralytics | pytorch | 18,845 | High GPU usage when arg show=False | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Hi,
I'm running YOLO prediction using a custom-trained model on an Nvidia T100 GPU, on a Debian-based Linux OS, using this command:
`truck_results = model_tk(MAINPATH+"streams/truck.streams", task='detect', stream=True, conf=0.7,imgsz=1280, save=False, show=True,verbose=False)`
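For completeness, the results generator is consumed with a plain loop like this (simplified sketch):
```python
# stream=True returns a generator, so nothing is inferred until this loop pulls frames;
# the speed of the loop body is what bounds GPU utilisation
for result in truck_results:
    boxes = result.boxes  # per-frame handling goes here
```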
If I run the inference with the argument show=False, the GPU usage is between 88-92%; if I instead set it to True and run it in the GUI, the usage is very low (10-20%).
What am I missing?
Thanks
### Additional
_No response_ | closed | 2025-01-23T12:08:39Z | 2025-01-25T17:24:52Z | https://github.com/ultralytics/ultralytics/issues/18845 | [
"question",
"detect"
] | lucamancusodev | 16 |
pallets-eco/flask-wtf | flask | 304 | Deprecation warning is less helpful than it could be | This is a kinda dumb issue, but I'm going through a bunch of deprecation warnings, and it's bothering me.
Basically, If I have out-of-date dependencies, I get error messages from flask like:
`FlaskWTFDeprecationWarning: "flask_wtf.Form" has been renamed to "FlaskForm" and will be removed in 1.0.` or `"flask_wtf.CsrfProtect" has been renamed to "CSRFProtect" and will be removed in 1.0.`
The problem here is that *sometimes*, the first aspect of the import is the part that's out of date, sometimes it's the second part of the import, and sometimes it's the entire thing (across multiple flask ext libraries):
- `from FlaskForm import Form` (This is common for lots of stuff that was in the `flask.ext.***` namespace
- `from flask_wtf import FlaskForm` (Correct option, in this case)
- `import FlaskForm`
Is there any reason the error message can't be: `FlaskWTFDeprecationWarning: "flask_wtf.Form" has been renamed to "flask_wtf.FlaskForm" and will be removed in 1.0.`
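Sketched out, a warning that names both fully-qualified paths could be emitted with something like this (illustrative only, not how flask_wtf actually does it):
```python
import warnings

class FlaskWTFDeprecationWarning(DeprecationWarning):
    pass

def _warn_renamed(old, new):
    # spell out both fully-qualified names so it's obvious which half of the import changed
    warnings.warn(
        '"{}" has been renamed to "{}" and will be removed in 1.0.'.format(old, new),
        FlaskWTFDeprecationWarning,
        stacklevel=3,
    )

# _warn_renamed("flask_wtf.Form", "flask_wtf.FlaskForm")
# _warn_renamed("flask_wtf.CsrfProtect", "flask_wtf.CSRFProtect")
```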
I'm not sure how the deprecation warning is getting thrown, but it takes what would normally be a simple replacement and turns it into a "dig through the docs to figure out what part actually changed".
deepinsight/insightface | pytorch | 2,421 | dlopen issue on arm64 - Mac silicon M1 environment | Getting this error in an arm64 terminal; the library works fine when running in 386 mode. Is this expected?
`
dlopen(/Users/user/stable-diffusion-webui/venv/lib/python3.10/site-packages/insightface/thirdparty/face3d/mesh/cython/mesh_core_cython.cpython-310-darwin.so, 0x0002): tried: '/Users/user/stable-diffusion-webui/venv/lib/python3.10/site-packages/insightface/thirdparty/face3d/mesh/cython/mesh_core_cython.cpython-310-darwin.so' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64')), '/System/Volumes/Preboot/Cryptexes/OS/Users/user/stable-diffusion-webui/venv/lib/python3.10/site-packages/insightface/thirdparty/face3d/mesh/cython/mesh_core_cython.cpython-310-darwin.so' (no such file), '/Users/user/stable-diffusion-webui/venv/lib/python3.10/site-packages/insightface/thirdparty/face3d/mesh/cython/mesh_core_cython.cpython-310-darwin.so' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64'))
`
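A quick way to confirm the interpreter/extension mismatch (sketch):
```python
import platform, sysconfig
print(platform.machine())        # e.g. 'arm64' for the interpreter itself
print(sysconfig.get_platform())  # platform tag that installed wheels should match
```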
Tried re-installing insightface and purging venv etc. to no luck | open | 2023-09-02T16:06:58Z | 2024-11-30T03:17:46Z | https://github.com/deepinsight/insightface/issues/2421 | [] | ramakay | 2 |
microsoft/nni | pytorch | 5,132 | After AGPPrunner,The Flops change to small, But model‘s input output and weight did not change anymore. convert to .onnx before prune and after prune. It also did not change anymore. | **Describe the issue**:
```python
torch.manual_seed(0)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = get_origin_model(cfg)
print(model)
# calculate_kpi(cfg, model, '/data/0530/test/origin_pred')

dummy_input = torch.rand([1, 12, 272, 480])
dummy_input = dummy_input.to(device)
# torch.onnx.export(model, dummy_input.to(device), 'origin.onnx', export_params=True, opset_version=9)
pre_flops, pre_params, _ = count_flops_params(model, dummy_input.to(device))

print('start finetuning...')
config_list = [
    {
        'sparsity': 0.5,
        # 'op_names': ['backbone.features.{}.conv.2'.format(x) for x in range(2, 14)],
        # 'op_names': ['backbone.features.{}.conv.0.0'.format(x) for x in range(1, 14)],
        # 'op_names': ['backbone.features.{}.conv.1.0'.format(x) for x in range(2, 18)],
        'op_names': ['block{}.conv1.0'.format(x) for x in range(15, 24)],
        'op_types': ['Conv2d']
        # 'op_names': ['backbone.features.0.0', 'backbone.features.2.conv.0.0']
    },
    # {
    #     'sparsity': 0.5,
    #     # 'op_names': ['backbone.features.{}.conv.2'.format(x) for x in range(2, 14)],
    #     # 'op_names': ['backbone.features.{}.conv.0.0'.format(x) for x in range(1, 14)],
    #     # 'op_names': ['backbone.features.{}.conv.1.0'.format(x) for x in range(2, 18)],
    #     'op_names': ['block{}.conv2.0'.format(x) for x in range(15, 24)],
    #     'op_types': ['Conv2d']
    #     # 'op_names': ['backbone.features.0.0', 'backbone.features.2.conv.0.0']
    # },
]

criterion = CurveSegmentLoss(cfg)
optimizer = torch.optim.Adam(params=model.parameters(), lr=cfg.train.learning_rate, weight_decay=cfg.train.weight_decay)
traced_optimizer = nni.trace(torch.optim.Adam)(params=model.parameters(), lr=cfg.train.learning_rate, weight_decay=cfg.train.weight_decay)

# pruner = AGPPruner(model, config_list, pruning_algorithm='slim', total_iteration=200, finetuner=finetuner,
#                    speedup=True, dummy_input=dummy_input, pruning_params={
#                        "trainer": trainer, "traced_optimizer": traced_optimizer, "criterion": criterion, "training_epochs": 1})
pruner = AGPPruner(model, config_list, optimizer, trainer, criterion,
                   num_iterations=200, epochs_per_iteration=1, pruning_algorithm='l2')  # 200 1
pruner.compress()

model_path = os.path.join("model_path")
mask_path = os.path.join("mask_path")
pruner.export_model(model_path=model_path, mask_path=mask_path)
pruner._unwrap_model()
ModelSpeedup(model, dummy_input.to(device), mask_path, device).speedup_model()

flops, params, _ = count_flops_params(model, dummy_input.to(device))
print(f'Pretrained model FLOPs {pre_flops/1e6:.2f} M, #Params: {pre_params/1e6:.2f}M')
print(f'pruned model FLOPs {flops/1e6:.2f} M, #Params: {params/1e6:.2f}M')

calculate_kpi(cfg, model, '/data/0530/test/0913_ni200_e20_block1_pred')  # '/data/0530/test/lld_pred'
torch.onnx.export(model, dummy_input.to(device), '0919_ni200_e1_block1.onnx', export_params=True, opset_version=9)
print('model convert successfully')
```
Before prune:

After prune:

| closed | 2022-09-20T02:04:43Z | 2022-09-27T03:02:40Z | https://github.com/microsoft/nni/issues/5132 | [
"model compression",
"support"
] | zctang | 3 |
aiortc/aiortc | asyncio | 374 | Add custom encryption | I am planning to add custom encryption for the video stream, like this one: https://stackoverflow.com/questions/12524994/encrypt-decrypt-using-pycrypto-aes-256. I have found that the latest Chrome version has an insertableStreams option which allows per-frame processing, as explained here: https://webrtchacks.com/true-end-to-end-encryption-with-webrtc-insertable-streams/. So is there any option with aiortc to add custom encryption? If not, can I use an approach like the OpenCV frame-processing example for encoding?
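From what I can tell, the closest hook aiortc exposes is wrapping a track and processing each frame before it is encoded, roughly like this (sketch; this is pre-encoding processing, not an insertable-streams style hook on the encoded payload):
```python
from aiortc import MediaStreamTrack

class ProcessedVideoTrack(MediaStreamTrack):
    """Wraps another track and transforms each frame before it is sent."""
    kind = "video"

    def __init__(self, track):
        super().__init__()
        self.track = track

    async def recv(self):
        frame = await self.track.recv()
        # per-frame processing would go here (e.g. scrambling raw pixel data);
        # note this happens on raw frames, before the encoder runs
        return frame
```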
Ref:
https://github.com/alvestrand/webrtc-media-streams/blob/master/explainer.md | closed | 2020-06-01T12:40:43Z | 2020-06-02T13:23:19Z | https://github.com/aiortc/aiortc/issues/374 | [] | SourceCodeZone | 3 |
mars-project/mars | numpy | 3,056 | [BUG] wrong index of the last chunk after auto merging | <!--
Thank you for your contribution!
Please review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue.
-->
**Describe the bug**
When I run the TPC-H 19th query at the 10G scale factor, it throws the exception shown below.

or

It seems that some input chunk indices are wrong.
After debugging, I found that the last chunk index is wrong after `df.merge()` in the TPC-H 19th query.
**To Reproduce**
Run the TPC-H 19th query at the 10G scale factor in a 5-node cluster.
| closed | 2022-05-20T08:22:20Z | 2022-05-23T02:57:51Z | https://github.com/mars-project/mars/issues/3056 | [
"type: bug",
"mod: dataframe"
] | yuyiming | 0 |
allenai/allennlp | pytorch | 5377 | How can I retrain the ELMo model using PyTorch? | Please ask questions on [Stack Overflow](https://stackoverflow.com/questions/tagged/allennlp) rather than on GitHub. We monitor and triage questions on Stack Overflow with the AllenNLP label and questions there are more easily searchable for others.
| closed | 2021-08-25T06:52:58Z | 2021-09-08T16:09:39Z | https://github.com/allenai/allennlp/issues/5377 | [
"question",
"stale"
] | xiaoqimiao7 | 2 |
okken/pytest-check | pytest | 19 | How to flip back to asserts for pdb/debugging | Is there a way to switch to strict/eager assertion fail mode, so that one can use `--pdb` on demand? | closed | 2019-06-04T06:14:48Z | 2020-12-14T19:22:03Z | https://github.com/okken/pytest-check/issues/19 | [
"help wanted",
"good first issue",
"documentation"
] | floer32 | 4 |
koaning/scikit-lego | scikit-learn | 131 | [FEATURE] print_step | I'd like there to be a `print_step` variant as well of the `log_step` function we have in `pandas_utils`. If you're just debugging in a notebook there's no need for a logger but I'd still want to have the logger around for the "production" step. | closed | 2019-05-09T06:54:03Z | 2019-06-20T20:54:16Z | https://github.com/koaning/scikit-lego/issues/131 | [
"enhancement"
] | koaning | 0 |
MagicStack/asyncpg | asyncio | 825 | getaddrinfo() error / exception not re-raised? | <!--
Thank you for reporting an issue/feature request.
If this is a feature request, please disregard this template. If this is
a bug report, please answer to the questions below.
It will be much easier for us to fix the issue if a test case that reproduces
the problem is provided, with clear instructions on how to run it.
Thank you!
-->
* **asyncpg version**: 0.22
* **PostgreSQL version**:
* **Do you use a PostgreSQL SaaS? If so, which? Can you reproduce
the issue with a local PostgreSQL install?**:
* **Python version**: 3.8
* **Platform**: linux ubuntu 20.04
* **Do you use pgbouncer?**:nope
* **Did you install asyncpg with pip?**:yes
* **If you built asyncpg locally, which version of Cython did you use?**:no
* **Can the issue be reproduced under both asyncio and
[uvloop]()?**: Did not test
<!-- Enter your issue details below this comment. -->
**Potential slight bug?**
I have an asyncio loop with a task that sequentially pops queries from an asyncio queue to execute, one at a time, in a remote timescaleDB (postgresql) database. Everything works like a charm. And I'd rather say quite performant!
But whenever I unplug the ethernet cable of the query requester (to simulate sudden connection losses), an uncaught exception pops up, related to the inability to resolve the host name in the DSN:
: 2021-09-10 10:10:44.846 ERROR: Future exception was never retrieved
: future: <Future finished exception=gaierror(-3, 'Temporary failure in name resolution')>
: Traceback (most recent call last):
: File "/home/venv/lib/python3.8/site-packages/asyncpg/connection.py", line 1393, in _cancel
: await connect_utils._cancel(
: File "/home/venv/lib/python3.8/site-packages/asyncpg/connect_utils.py", line 696, in _cancel
: tr, pr = await _create_ssl_connection(
: File "/home/venv/lib/python3.8/site-packages/asyncpg/connect_utils.py", line 544, in _create_ssl_connection
: tr, pr = await loop.create_connection(
: File "/usr/lib/python3.8/asyncio/base_events.py", line 986, in create_connection
: infos = await self._ensure_resolved(
: File "/usr/lib/python3.8/asyncio/base_events.py", line 1365, in _ensure_resolved
: return await loop.getaddrinfo(host, port, family=family, type=type,
: File "/usr/lib/python3.8/asyncio/base_events.py", line 825, in getaddrinfo
: return await self.run_in_executor(
: File "/usr/lib/python3.8/concurrent/futures/thread.py", line 57, in run
: result = self.fn(*self.args, **self.kwargs)
: File "/usr/lib/python3.8/socket.py", line 918, in getaddrinfo
: for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
: socket.gaierror: [Errno -3] Temporary failure in name resolution
I have the execute() code within a try/except block. Among the several exceptions I trap is the generic Exception. But this is of no use, because this error is not re-raised and stays within the asyncpg library.
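Roughly, the application-side handling looks like this (simplified; the two handler functions are placeholders, and it appears the gaierror above is raised inside asyncpg's background cancellation task rather than inside execute() itself):
```python
try:
    await connection.execute(query)               # query popped from the asyncio queue
except (asyncpg.PostgresError, OSError) as exc:
    handle_connection_loss(exc)                   # hypothetical recovery path
except Exception as exc:
    log_unexpected(exc)                           # generic catch -- never sees the gaierror
```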
Shouldn't it be re-raised so that the app can deal with the issue?
Thank you for this fantastic library!
| open | 2021-09-10T16:39:34Z | 2021-11-07T21:48:34Z | https://github.com/MagicStack/asyncpg/issues/825 | [] | bsense-rius | 1 |
scikit-learn/scikit-learn | python | 30,212 | Missing documentation on ConvergenceWarning? | ### Describe the issue linked to the documentation
Hi!
I was looking to learn more about the convergence warning and found [this link](https://scikit-learn.org/1.5/modules/generated/sklearn.exceptions.ConvergenceWarning.html), which redirects towards sklearn.utils. However, when scrolling through the left pane menu in sklearn.utils, I can't find it. Is it because it's deprecated and doesn't exist anymore (it's still referenced extensively in the code, so I don't think so)? Shouldn't this page say so if that's the case?

### Suggest a potential alternative/fix
If not deprecated, it would be nice to put the link directly in this page to the new page. | closed | 2024-11-04T16:06:08Z | 2024-11-08T15:53:51Z | https://github.com/scikit-learn/scikit-learn/issues/30212 | [
"Documentation",
"Needs Triage"
] | MarieSacksick | 2 |
plotly/dash | dash | 2,667 | Dropdown reordering options by value on search | dash 2.14.0, 2.9.2
Windows 10
Chrome 118.0.5993.71
**Description**
When entering a `search_value` in `dcc.Dropdown`, the matching options are ordered by _value_, ignoring the original option order. This behavior only happens when the option values are integers or integer-like strings (i.e. 3 or "3" or 3.0 but not "03" or 3.1).
**Expected behavior**
The original order of the dropdown options should be preserved when searching.
**Example**
Searching "*" in the dropdown below returns all three results, but the options are reordered by ascending value.
```python
from dash import Dash, html, dcc
app = Dash(
name=__name__,
)
my_options = [
{"label": "three *", "value": 3},
{"label": "two *", "value": 2},
{"label": "one *", "value": 1},
]
app.layout = html.Div(
[
dcc.Dropdown(
id="my-dropdown",
options=my_options,
searchable=True,
),
]
)
```

**Attempted fixes**
I expected Dynamic Dropdowns to give control over custom search behavior. However, even when a particular order of options is specified, the matching options are re-ordered by ascending value.
```python
# This callback should overwrite the built-in search behavior.
# Instead, the filtered options are sorted by ascending value.
@app.callback(
Output("my-dropdown", "options"),
Input("my-dropdown", "search_value"),
)
def custom_search_sort(search_value):
if not search_value:
raise PreventUpdate
return [o for o in my_options if search_value in str(o)]
```
| open | 2023-10-18T20:02:10Z | 2024-08-13T19:41:22Z | https://github.com/plotly/dash/issues/2667 | [
"bug",
"P3"
] | TGeary | 0 |
nerfstudio-project/nerfstudio | computer-vision | 3,440 | point cloud from RGB and depth | I rendered RGB and depth, and generated point clouds from multiple views using the intrinsic and extrinsic parameters estimated by COLMAP. Why do the point clouds generated from multiple views not overlap with each other? | open | 2024-09-24T04:33:38Z | 2024-10-19T05:35:12Z | https://github.com/nerfstudio-project/nerfstudio/issues/3440 | [] | Lizhinwafu | 1 |
ivy-llc/ivy | pytorch | 28,165 | Fix Ivy Failing Test: paddle - elementwise.allclose | closed | 2024-02-03T11:57:58Z | 2024-02-06T12:24:51Z | https://github.com/ivy-llc/ivy/issues/28165 | [
"Sub Task"
] | MuhammadNizamani | 0 |
|
FactoryBoy/factory_boy | sqlalchemy | 239 | Multiple fields not supported in django_get_or_create | I am currently using factory_boy with faker to generate test data for my django application. The model that I am currently testing looks like the following;
```python
class Thing(models.Model):
name = models.CharField(max_length=50)
code = models.CharField(max_length=20, unique=True)
short_code = models.CharField(max_length=5, unique=True)
```
My factory looks like this;
```python
class ThingFactory(factory.django.DjangoModelFactory):
class Meta:
model = app.models.Thing
django_get_or_create = ('code', 'short_code')
name = factory.lazy_attribute(lambda x: faker.company())
code = factory.sequence(lambda n: 'thing-code-%05d' % n)
short_code = factory.sequence(lambda n: '%05d' % n)
```
and my test to verify that this is functioning properly is;
```python
def test_thing_code(self):
ThingFactory.create(code="test")
things = Thing.objects.all()
assert_equal(len(things), 1)
thing = Thing.objects.get(code="test")
ThingFactory.create(short_code=thing.short_code)
ThingFactory.create(code="test")
things = Thing.objects.all()
assert_equal(len(things), 1)
```
The expected behaviour of this test is that when I run a create using the existing short_code, I should instead get the same object that already exists. Additionally, when I attempt to create a new object using the same code as the first line, I should also get the same object. When I execute this, I receive the following error message:
```python
test_thing_code ... ERROR
======================================================================
ERROR: test_thing_code
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/app/test/tests.py", line 109, in test_thing_code
ThingFactory.create(short_code=thing.short_code)
File "/home/factory_boy/factory/base.py", line 559, in create
return cls._generate(True, attrs)
File "/home/factory_boy/factory/django.py", line 288, in wrapped_generate
return generate_method(*args, **kwargs)
File "/home/factory_boy/factory/base.py", line 484, in _generate
obj = cls._prepare(create, **attrs)
File "/home/factory_boy/factory/base.py", line 459, in _prepare
return cls._create(model_class, *args, **kwargs)
File "/home/factory_boy/factory/django.py", line 147, in _create
return cls._get_or_create(model_class, *args, **kwargs)
File "/home/factory_boy/factory/django.py", line 138, in _get_or_create
obj, _created = manager.get_or_create(*args, **key_fields)
File "/home/django/django/db/models/manager.py", line 134, in get_or_create
return self.get_query_set().get_or_create(**kwargs)
File "/home/django/django/db/models/query.py", line 452, in get_or_create
obj.save(force_insert=True, using=self.db)
File "/home/django/django/utils/functional.py", line 11, in _curried
return _curried_func(*(args+moreargs), **dict(kwargs, **morekwargs))
File "/home/app/models.py", line 415, in capture_user_on_save
output = old_save(self, *args, **kwargs)
File "/home/app/models.py", line 264, in save
super(Thing, self).save(*args, **kwargs)
File "/home/django/django/db/models/base.py", line 463, in save
self.save_base(using=using, force_insert=force_insert, force_update=force_update)
File "/home/django/django/db/models/base.py", line 551, in save_base
result = manager._insert([self], fields=fields, return_id=update_pk, using=using, raw=raw)
File "/home/django/django/db/models/manager.py", line 203, in _insert
return insert_query(self.model, objs, fields, **kwargs)
File "/home/django/django/db/models/query.py", line 1593, in insert_query
return query.get_compiler(using=using).execute_sql(return_id)
File "/home/django/django/db/models/sql/compiler.py", line 912, in execute_sql
cursor.execute(sql, params)
File "/home/django/django/db/backends/postgresql_psycopg2/base.py", line 52, in execute
return self.cursor.execute(query, args)
IntegrityError: duplicate key value violates unique constraint "app_thing_short_code_key"
DETAIL: Key (short_code)=(000) already exists.
```
This error was received using factory_boy 2.5.2 and Django 1.4.20 final
If I change the order in the django_get_or_create attribute, whichever one is listed second is the test that causes the database error.
| closed | 2015-10-23T14:39:52Z | 2021-08-31T02:10:43Z | https://github.com/FactoryBoy/factory_boy/issues/239 | [] | stephenross | 11 |
xonsh/xonsh | data-science | 4,845 | Changes in whole_word_jumping xontrib (ptk_win32) break loading under linux | The changes introduced in #4788 break loading xontrib `whole_word_jumping` under linux.
## xonfig
```
$ xonfig
+------------------+----------------------+
| xonsh | 0.12.5.dev2 |
| Git SHA | f077e23b |
| Commit Date | Jun 17 12:53:59 2022 |
| Python | 3.10.5 |
| PLY | 3.11 |
| have readline | True |
| prompt toolkit | 3.0.29 |
| shell type | prompt_toolkit |
| history backend | sqlite |
| pygments | 2.11.2 |
| on posix | True |
| on linux | True |
| distro | manjaro |
| on wsl | False |
| on darwin | False |
| on windows | False |
| on cygwin | False |
| on msys2 | False |
| is superuser | False |
| default encoding | utf-8 |
| xonsh encoding | utf-8 |
| encoding errors | surrogateescape |
| xontrib 1 | cmd_done |
| xontrib 2 | prompt_ret_code |
| xontrib 3 | fzf-widgets |
| xontrib 4 | prompt_bar |
| xontrib 5 | xog |
| xontrib 6 | pipeliner |
| xontrib 7 | readable-traceback |
| RC file 1 | /home/user/.xonshrc |
+------------------+----------------------+
```
## Expected Behavior
Xontrib is loaded as with xonsh 0.12.4.
## Current Behavior
Xontrib fails to load with:
```shell
AssertionError
Failed to load xontrib whole_word_jumping.
```
## Steps to Reproduce
`xontrib load whole_word_jumping`
## Possible Change
The problem is the import of `prompt_toolkit.input.win32` which is not feasible under Linux. A conditional import might help:
```python
from xonsh.platform import ON_WINDOWS
if ON_WINDOWS:
import prompt_toolkit.input.win32 as ptk_win32
```
| closed | 2022-06-19T10:16:28Z | 2022-06-30T05:12:06Z | https://github.com/xonsh/xonsh/issues/4845 | [
"windows",
"xontrib",
"good first issue"
] | hroemer | 2 |
keras-team/keras | python | 20,118 | Testing functional models as layers | In keras V2 it was possible to test functional models as layers with TestCase.run_layer_test
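For context, the pattern being tested is a functional model reused as a layer, e.g. (minimal sketch):
```python
import keras
from keras import layers

inner_in = keras.Input((4,))
inner = keras.Model(inner_in, layers.Dense(2)(inner_in))  # functional model...

outer_in = keras.Input((4,))
outer = keras.Model(outer_in, inner(outer_in))            # ...reused as a layer
```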
But in Keras 3 it is not possible, due to an issue with deserialization: https://colab.research.google.com/drive/1OUnnbeLOvI7eFnWYDvQiiZKqPMF5Rl0M?usp=sharing
The root issue is input_shape type in model config is a list, while layers expect a tuple.
As far as I understand, the root issue is the JSON dump/load in the serialization test. Can we omit this step? | closed | 2024-08-14T08:42:59Z | 2024-10-21T06:37:10Z | https://github.com/keras-team/keras/issues/20118 | [
"type:Bug"
] | shkarupa-alex | 3 |
jmcnamara/XlsxWriter | pandas | 474 | Add ability to change color of clicked on hyperlink | Hello,
This is a little nit picky but it actually is important to see the links within a spreadsheet change color after they have been clicked on (it is really helpful to see the rows that have already been worked on). Looking at this [article](https://www.extendoffice.com/documents/excel/1695-excel-prevent-hyperlink-from-changing-color.html) it looks like Excel uses a theme color to change the color of a hyperlink. Is there any way to add this functionality into XlsxWriter? | closed | 2017-10-12T00:11:07Z | 2017-10-12T06:54:47Z | https://github.com/jmcnamara/XlsxWriter/issues/474 | [] | jdarrah | 1 |
pytorch/pytorch | machine-learning | 149,151 | [ONNX] Cover dynamic_shapes checks within verify=True | https://github.com/pytorch/pytorch/blob/38e81a53324146d445a81eb8f80bccebe623eb35/torch/onnx/_internal/exporter/_verification.py#L137
We can try a second set of inputs with different shapes to exercise the dynamic_shapes, so that users (and we) can catch issues before actually deploying the model, and save the trouble of writing another code snippet to test it.
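Roughly the kind of extra check meant here, sketched with onnxruntime (the model variable, file name, and input name are placeholders):
```python
import numpy as np
import onnxruntime as ort
import torch

sess = ort.InferenceSession("model.onnx")
x2 = torch.randn(5, 3)                        # a shape different from export time
expected = model(x2).detach().cpu().numpy()   # reference output from the original module
got = sess.run(None, {"x": x2.numpy()})[0]
np.testing.assert_allclose(got, expected, rtol=1e-3, atol=1e-5)
```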
cc @justinchuby | open | 2025-03-13T20:14:19Z | 2025-03-13T20:42:11Z | https://github.com/pytorch/pytorch/issues/149151 | [
"module: onnx",
"triaged"
] | titaiwangms | 2 |
pyro-ppl/numpyro | numpy | 1,334 | Trying to reproduce the gaussian mixture example from bayesian-methods-for-hackers | Hi everyone! I'm starting exploring this package and I had some problems recreating the gaussian mixture example from bayesian methods for hackers (https://github.com/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/blob/master/Chapter3_MCMC/Ch3_IntroMCMC_PyMC3.ipynb).
This is my snippet
```python
from jax import random
import jax.numpy as jnp
import numpyro
import numpyro.distributions as dist
from numpyro.infer import MCMC, NUTS
import pandas as pd
def model(data):
with numpyro.plate('sample_i', len(data)):
p_i = numpyro.sample('p_i', dist.Uniform(0, 1))
c_i = numpyro.sample('c_i', dist.Bernoulli(p_i))
mu1 = numpyro.sample('mu1', dist.Normal(120, 10))
mu2 = numpyro.sample('mu2', dist.Normal(190, 10))
sd1 = numpyro.sample('sd1', dist.Uniform(0, 100))
sd2 = numpyro.sample('sd2', dist.Uniform(0, 100))
center_i = jnp.where(c_i < 0.5, mu1, mu2)
sd_i = jnp.where(c_i < 0.5, sd1, sd2)
with numpyro.plate('sample_i', len(data)):
obs = numpyro.sample('obs', dist.Normal(center_i, sd_i), obs=data)
data = pd.read_csv('https://raw.githubusercontent.com/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/master/Chapter3_MCMC/data/mixture_data.csv', header=None)
data = data.values.flatten()
rng_key = random.PRNGKey(42)
rng_key, rng_key_ = random.split(rng_key)
kernel = NUTS(model)
num_samples = 2000
mcmc = MCMC(kernel, num_warmup=1000, num_samples=num_samples)
mcmc.run(
rng_key_, data=data
)
```
The problem is that in the example linked above, the averages of the `p_i` ranges from `0` to `1`
<img width="851" alt="download" src="https://user-images.githubusercontent.com/64030770/153747312-ae0eaf22-0c11-4aba-b7a6-0196f57219ac.png">
while in my implementation using `numpyro` they range only between about `0.35` and `0.65`.
Is it related to some problem/bug in my implementation, or could it be some convergence issue?
Thanks!
| closed | 2022-02-13T09:48:46Z | 2022-02-13T14:27:21Z | https://github.com/pyro-ppl/numpyro/issues/1334 | [] | salvomcl | 2 |
amidaware/tacticalrmm | django | 1,567 | Local install without lets encrypt | **Is your feature request related to a problem? Please describe.**
I want to install RMM inside our network. We have our own CA and internal domains, and the server won't be reachable from the internet, but the installer always forces me to use a Let's Encrypt certificate.
**Describe the solution you'd like**
A mode in the installer where I can choose something like "local installation" and provide my own certificates, or tell the installer that the server runs behind an SSL proxy.
Or is there some other way to install RMM locally without the Let's Encrypt requirement getting in my way?
| closed | 2023-07-18T09:57:34Z | 2023-07-18T09:58:32Z | https://github.com/amidaware/tacticalrmm/issues/1567 | [] | mhktc | 0 |
biolab/orange3 | scikit-learn | 6,497 | Column width in Test and Score shouldn't reset on new data | **What's your use case?**
I am using Orange3 version 3.35.0 on Windows 11. In the [Test and Score] widget, when I add a classification model (such as logistic regression) and display the "Evaluation results for target," the CA (Classification Accuracy) and F1 scores are displayed as "0.9..." without showing the decimal places beyond the second digit. However, I have observed that it is possible to adjust the display width and show up to three decimal places, for example, displaying the score as 0.965. The problem arises when I adjust the 'C' parameter of the logistic regression, as it resets the display precision, and the scores once again appear as "0.9...". This makes it extremely difficult to find the optimal value for 'C'.
**What's your proposed solution?**
Is there a way to fix the display precision and keep it consistent?
| open | 2023-07-05T12:29:10Z | 2023-08-19T17:37:54Z | https://github.com/biolab/orange3/issues/6497 | [
"bug"
] | nim-hrkn | 3 |
d2l-ai/d2l-en | machine-learning | 1,792 | Inconsistent use of np.dot and torch.mv in Section 2.3.8 | The final paragraph of Section 2.3.8 *Matrix-Vector Products* mentions the use of `np.dot` but not `torch.mv`. The subsequent code example uses `torch.mv` but not `np.dot`.
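For reference, the two calls compute the same product:
```python
import numpy as np
import torch

A = np.arange(6.0).reshape(2, 3)
x = np.ones(3)
np.dot(A, x)          # matrix-vector product mentioned in the text

At = torch.arange(6.0).reshape(2, 3)
xt = torch.ones(3)
torch.mv(At, xt)      # equivalent call used in the PyTorch code example
```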
Either the text should describe `torch.mv` or the code should use `np.dot` | closed | 2021-06-13T16:39:31Z | 2021-06-16T20:10:14Z | https://github.com/d2l-ai/d2l-en/issues/1792 | [] | dowobeha | 1 |
oegedijk/explainerdashboard | plotly | 100 | Module is not able to handle pipeline | Imagine that you want to use a pipeline that performs feature engineering and data cleaning and then trains your model. This could be useful at the production level: as new data comes in, it goes through the pipeline for cleaning, building features (based on the training data), and predicting its label. This module is not able to handle such a case. Adding such an ability could be very useful.
Here is a very simple example that the module is not able to handle and throws an error. I tried to log transform the target variable and then fit the model. I can send more examples upon request.
````
mdl = TransformedTargetRegressor(regressor=RandomForestRegressor(),
func=np.log1p, inverse_func=np.exp,
check_inverse=False)
explainer = RegressionExplainer(mdl, X_test, y_test,
shap = 'kernel',
cats=['Sex', 'Deck', 'Embarked'],
idxs=test_names,
descriptions = feature_descriptions,
target='Fare',
units="$")
ExplainerDashboard(explainer).run()
````
| closed | 2021-03-05T00:14:11Z | 2021-03-18T19:26:13Z | https://github.com/oegedijk/explainerdashboard/issues/100 | [] | jkiani64 | 8 |
encode/databases | asyncio | 240 | Separate DB Drivers Required? | If the need is to create and interact with a Postgres database, it seems from the documentation that standard SQL queries are executed to create the tables. Once created Databases can interact with them using the Sqlalchemy (SQLA) Core syntax.
What if we wish to generate the database using SQLA syntax as well with create_all() and not standard SQL?
The following code fails with app.models.sqla holding valid SQLA models to create the database:
```
# Standard libraries
import asyncio
# Third party libraries
import asyncpg
import uvicorn
import databases
import sqlalchemy as sa
# Local libraries
import app.core.security
from app.models.sqla import *
# database
DATABASE_URL = "postgresql://postgres:abc123@localhost:5432/db"
engine = sa.create_engine("postgresql://postgres:abc123@localhost:5432/db")
metadata.create_all(engine)
db = databases.Database(DATABASE_URL)
```
Error:
```ModuleNotFoundError: No module named 'psycopg2'```
Attempts to define the driver dialect explicitly fail as well with "postgresql+asnycpg://..."
Databases does not support psycopg2 and SQLA does not support asyncpg. This presents some complexity right off the bat. Is it standard practice to import both drivers; psycopg2 to create the database, and then use the database through Databases with asyncpg?
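In other words, is the expected setup really something like the following, with both drivers installed (sketch)?
```python
import databases
import sqlalchemy as sa

DATABASE_URL = "postgresql://postgres:abc123@localhost:5432/db"

metadata = sa.MetaData()
notes = sa.Table("notes", metadata, sa.Column("id", sa.Integer, primary_key=True))

# table creation goes through SQLAlchemy's sync engine (psycopg2 by default)...
engine = sa.create_engine(DATABASE_URL)
metadata.create_all(engine)

# ...while runtime queries go through `databases`, which uses asyncpg for Postgres
db = databases.Database(DATABASE_URL)
```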
| closed | 2020-08-27T18:51:08Z | 2021-08-27T05:20:08Z | https://github.com/encode/databases/issues/240 | [] | liquidgenius | 6 |
automagica/automagica | automation | 115 | Word.export_to_html() is broken | Automagica version 2.0.25
```
wrd = Word(file_path=self.file_path)
wrd.export_to_html("D:\\temp\\output.html")
```
Cause error:
```
Traceback (most recent call last):
File "D:/Automagica/flow1/wordParser.py", line 30, in <module>
print(exc.exctract_name().name)
File "D:/Automagica/flow1/wordParser.py", line 17, in exctract_name
wrd.export_to_html("D:\\temp\\output.html")
File "C:\Users\vkhomenko\AppData\Local\Programs\Python\Python38\lib\site-packages\automagica\utilities.py", line 17, in wrapper
return func(*args, **kwargs)
File "C:\Users\vkhomenko\AppData\Local\Programs\Python\Python38\lib\site-packages\automagica\activities.py", line 3334, in export_to_html
word.app.ActiveDocument.WebOptions.RelyOnCSS = 1
NameError: name 'word' is not defined
```
| closed | 2020-03-23T10:56:11Z | 2020-04-03T11:32:59Z | https://github.com/automagica/automagica/issues/115 | [] | vasichkin | 2 |
dpgaspar/Flask-AppBuilder | rest-api | 1,515 | [question] How to upload a file with rest api and ModelRestApi | Hello,
I am trying to upload a pdf file from a form in a React component.
I get an error when I make a post request with a `'Content-Type': 'multipart/form-data'` header to the model endpoint `http://localhost:5000/api/v1/template/` . The error is `error 400: Bad request` and the message is `Request is not JSON`.
This means that the API does not accept a 'multipart/form-data' request. So how can I upload a file to the REST API? Thank you for your answer.
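For reference, the kind of workaround I imagine is a small custom endpoint that reads the multipart body directly, roughly like this (untested sketch; the resource name and save path are placeholders):
```python
from flask import request
from flask_appbuilder.api import BaseApi, expose

class TemplateUploadApi(BaseApi):
    resource_name = "template_upload"

    @expose("/", methods=["POST"])
    def post(self):
        # accept multipart/form-data instead of JSON
        pdf = request.files.get("file")
        if pdf is None:
            return self.response_400(message="missing file")
        pdf.save("/tmp/" + pdf.filename)  # illustrative destination only
        return self.response(201, message="uploaded")
```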
| closed | 2020-11-10T21:19:16Z | 2021-06-29T00:56:11Z | https://github.com/dpgaspar/Flask-AppBuilder/issues/1515 | [
"question",
"stale"
] | mjozan | 2 |
BlinkDL/RWKV-LM | pytorch | 111 | Record failed attempts | Can you create a place, for example the Issues tab of a designated repo, where people can report prompts they have tried that failed to produce the expected result, even though other LLMs like ChatGPT-4 can provide it?
For example, I have tried the prompts of https://github.com/theseamusjames/gpt3-python-maze-solver on rwkv, but failed.
Sharing failed attempts can save other people's time by avoiding the same attempts. And maybe people can discuss how to fix the prompts for rwkv to get a better result.
Also, we can share the successful prompts, just like civitai.com's image gallery. Successful prompts can provide experiences for new comers.
These records should be sorted by which models and strategies they used.
| closed | 2023-05-12T06:35:17Z | 2023-05-18T06:53:24Z | https://github.com/BlinkDL/RWKV-LM/issues/111 | [] | w2404 | 1 |
kubeflow/katib | scikit-learn | 2,353 | Create Katib Controller Failure | ### What happened?
Environment: k8s 1.28
Installation Type: With cert-manager
Katib version: v0.15.0-rc.0, v0.17.0-rc.0
Issue Description:
Cert-Manager works well, the cert has been correctly issued, and the training-operator/CRDs start up correctly. When starting the katib-controller, it reports the following error:
```
kubectl logs katib-controller-ff8db7d89-xfd77 -n kubeflow
{"level":"info","ts":"2024-06-12T12:03:10Z","logger":"entrypoint","msg":"Config:","experiment-suggestion-name":"default","webhook-port":8443,"metrics-addr":":8080","healthz-addr":":18080","inject-security-context":false,"enable-grpc-probe-in-suggestion":true,"trial-resources":[{"Group":"batch","Version":"v1","Kind":"Job"},{"Group":"kubeflow.org","Version":"v1","Kind":"TFJob"},{"Group":"kubeflow.org","Version":"v1","Kind":"PyTorchJob"},{"Group":"kubeflow.org","Version":"v1","Kind":"MPIJob"},{"Group":"kubeflow.org","Version":"v1","Kind":"XGBoostJob"},{"Group":"kubeflow.org","Version":"v1","Kind":"MXJob"}]}
{"level":"info","ts":"2024-06-12T12:03:10Z","logger":"entrypoint","msg":"Registering Components."}
{"level":"info","ts":"2024-06-12T12:03:10Z","logger":"entrypoint","msg":"Setting up health checker."}
{"level":"info","ts":"2024-06-12T12:03:10Z","logger":"entrypoint","msg":"Starting the manager."}
{"level":"info","ts":"2024-06-12T12:03:10Z","logger":"controller-runtime.metrics","msg":"Starting metrics server"}
{"level":"info","ts":"2024-06-12T12:03:10Z","msg":"starting server","kind":"health probe","addr":"[::]:18080"}
{"level":"info","ts":"2024-06-12T12:03:10Z","logger":"controller-runtime.metrics","msg":"Serving metrics server","bindAddress":":8080","secure":false}
{"level":"info","ts":"2024-06-12T12:03:11Z","logger":"cert-generator","msg":"Waiting for certs to get ready."}
{"level":"info","ts":"2024-06-12T12:03:11Z","logger":"cert-generator","msg":"Succeeded to be mounted certs inside the container."}
{"level":"info","ts":"2024-06-12T12:03:11Z","logger":"entrypoint","msg":"Certs ready"}
{"level":"info","ts":"2024-06-12T12:03:11Z","logger":"entrypoint","msg":"Setting up controller."}
{"level":"info","ts":"2024-06-12T12:03:11Z","logger":"experiment-controller","msg":"Using the default suggestion implementation"}
{"level":"info","ts":"2024-06-12T12:03:11Z","logger":"experiment-controller","msg":"Experiment controller created"}
{"level":"info","ts":"2024-06-12T12:03:11Z","msg":"Starting Controller","controller":"suggestion-controller"}
{"level":"info","ts":"2024-06-12T12:03:11Z","msg":"Starting workers","controller":"suggestion-controller","worker count":1}
{"level":"info","ts":"2024-06-12T12:03:11Z","msg":"Starting EventSource","controller":"experiment-controller","source":"kind source: *v1beta1.Experiment"}
{"level":"info","ts":"2024-06-12T12:03:11Z","msg":"Starting EventSource","controller":"suggestion-controller","source":"kind source: *v1beta1.Suggestion"}
{"level":"info","ts":"2024-06-12T12:03:11Z","msg":"Starting EventSource","controller":"suggestion-controller","source":"kind source: *v1.Deployment"}
{"level":"info","ts":"2024-06-12T12:03:11Z","msg":"Starting EventSource","controller":"suggestion-controller","source":"kind source: *v1.Service"}
{"level":"info","ts":"2024-06-12T12:03:11Z","msg":"Starting EventSource","controller":"suggestion-controller","source":"kind source: *v1.PersistentVolumeClaim"}
{"level":"info","ts":"2024-06-12T12:03:11Z","logger":"suggestion-controller","msg":"Suggestion controller created"}
{"level":"info","ts":"2024-06-12T12:03:11Z","msg":"Starting EventSource","controller":"experiment-controller","source":"kind source: *v1beta1.Trial"}
{"level":"info","ts":"2024-06-12T12:03:11Z","msg":"Starting EventSource","controller":"experiment-controller","source":"kind source: *v1beta1.Suggestion"}
{"level":"info","ts":"2024-06-12T12:03:11Z","msg":"Starting Controller","controller":"experiment-controller"}
{"level":"info","ts":"2024-06-12T12:03:11Z","msg":"Starting EventSource","controller":"trial-controller","source":"kind source: *v1beta1.Trial"}
{"level":"info","ts":"2024-06-12T12:03:11Z","msg":"Starting Controller","controller":"trial-controller"}
{"level":"info","ts":"2024-06-12T12:03:11Z","msg":"Starting workers","controller":"experiment-controller","worker count":1}
{"level":"info","ts":"2024-06-12T12:03:11Z","msg":"Starting workers","controller":"trial-controller","worker count":1}
{"level":"info","ts":"2024-06-12T12:03:11Z","msg":"Starting EventSource","controller":"trial-controller","source":"kind source: *unstructured.Unstructured"}
{"level":"info","ts":"2024-06-12T12:03:11Z","logger":"trial-controller","msg":"Job watch added successfully","CRD Group":"batch","CRD Version":"v1","CRD Kind":"Job"}
{"level":"info","ts":"2024-06-12T12:03:11Z","msg":"Starting EventSource","controller":"trial-controller","source":"kind source: *unstructured.Unstructured"}
{"level":"info","ts":"2024-06-12T12:03:11Z","logger":"trial-controller","msg":"Job watch added successfully","CRD Group":"kubeflow.org","CRD Version":"v1","CRD Kind":"TFJob"}
{"level":"info","ts":"2024-06-12T12:03:11Z","msg":"Starting EventSource","controller":"trial-controller","source":"kind source: *unstructured.Unstructured"}
{"level":"info","ts":"2024-06-12T12:03:11Z","logger":"trial-controller","msg":"Job watch added successfully","CRD Group":"kubeflow.org","CRD Version":"v1","CRD Kind":"PyTorchJob"}
{"level":"info","ts":"2024-06-12T12:03:11Z","msg":"Starting EventSource","controller":"trial-controller","source":"kind source: *unstructured.Unstructured"}
{"level":"info","ts":"2024-06-12T12:03:11Z","logger":"trial-controller","msg":"Job watch added successfully","CRD Group":"kubeflow.org","CRD Version":"v1","CRD Kind":"MPIJob"}
{"level":"info","ts":"2024-06-12T12:03:11Z","msg":"Starting EventSource","controller":"trial-controller","source":"kind source: *unstructured.Unstructured"}
{"level":"info","ts":"2024-06-12T12:03:11Z","logger":"trial-controller","msg":"Job watch added successfully","CRD Group":"kubeflow.org","CRD Version":"v1","CRD Kind":"XGBoostJob"}
{"level":"info","ts":"2024-06-12T12:03:11Z","msg":"Starting EventSource","controller":"trial-controller","source":"kind source: *unstructured.Unstructured"}
{"level":"info","ts":"2024-06-12T12:03:11Z","logger":"trial-controller","msg":"Job watch added successfully","CRD Group":"kubeflow.org","CRD Version":"v1","CRD Kind":"MXJob"}
{"level":"info","ts":"2024-06-12T12:03:11Z","logger":"trial-controller","msg":"Trial controller created"}
{"level":"info","ts":"2024-06-12T12:03:11Z","logger":"entrypoint","msg":"Setting up webhooks."}
{"level":"info","ts":"2024-06-12T12:03:11Z","logger":"controller-runtime.webhook","msg":"Starting webhook server"}
{"level":"info","ts":"2024-06-12T12:03:11Z","logger":"controller-runtime.webhook","msg":"Registering webhook","path":"/validate-experiment"}
{"level":"info","ts":"2024-06-12T12:03:11Z","logger":"controller-runtime.webhook","msg":"Registering webhook","path":"/mutate-experiment"}
{"level":"info","ts":"2024-06-12T12:03:11Z","logger":"controller-runtime.webhook","msg":"Registering webhook","path":"/mutate-pod"}
{"level":"info","ts":"2024-06-12T12:03:11Z","logger":"controller-runtime.certwatcher","msg":"Updated current TLS certificate"}
{"level":"info","ts":"2024-06-12T12:03:11Z","msg":"Stopping and waiting for non leader election runnables"}
{"level":"info","ts":"2024-06-12T12:03:11Z","msg":"Stopping and waiting for leader election runnables"}
{"level":"info","ts":"2024-06-12T12:03:11Z","msg":"Shutdown signal received, waiting for all workers to finish","controller":"trial-controller"}
{"level":"info","ts":"2024-06-12T12:03:11Z","msg":"Shutdown signal received, waiting for all workers to finish","controller":"suggestion-controller"}
{"level":"info","ts":"2024-06-12T12:03:11Z","msg":"All workers finished","controller":"suggestion-controller"}
{"level":"info","ts":"2024-06-12T12:03:11Z","msg":"Shutdown signal received, waiting for all workers to finish","controller":"experiment-controller"}
{"level":"info","ts":"2024-06-12T12:03:11Z","msg":"All workers finished","controller":"experiment-controller"}
{"level":"info","ts":"2024-06-12T12:03:11Z","msg":"All workers finished","controller":"trial-controller"}
{"level":"info","ts":"2024-06-12T12:03:11Z","msg":"Stopping and waiting for caches"}
{"level":"error","ts":"2024-06-12T12:03:11Z","logger":"controller-runtime.source.EventHandler","msg":"failed to get informer from cache","error":"Timeout: failed waiting for *unstructured.Unstructured Informer to sync","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind).Start.func1.1\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.17.3/pkg/internal/source/kind.go:68\nk8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext.func1\n\t/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/loop.go:53\nk8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext\n\t/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/loop.go:54\nk8s.io/apimachinery/pkg/util/wait.PollUntilContextCancel\n\t/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:33\nsigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind).Start.func1\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.17.3/pkg/internal/source/kind.go:56"}
{"level":"error","ts":"2024-06-12T12:03:11Z","logger":"controller-runtime.source.EventHandler","msg":"failed to get informer from cache","error":"Timeout: failed waiting for *unstructured.Unstructured Informer to sync","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind).Start.func1.1\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.17.3/pkg/internal/source/kind.go:68\nk8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext.func1\n\t/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/loop.go:53\nk8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext\n\t/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/loop.go:54\nk8s.io/apimachinery/pkg/util/wait.PollUntilContextCancel\n\t/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:33\nsigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind).Start.func1\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.17.3/pkg/internal/source/kind.go:56"}
{"level":"error","ts":"2024-06-12T12:03:11Z","logger":"controller-runtime.source.EventHandler","msg":"failed to get informer from cache","error":"Timeout: failed waiting for *unstructured.Unstructured Informer to sync","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind).Start.func1.1\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.17.3/pkg/internal/source/kind.go:68\nk8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext.func1\n\t/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/loop.go:53\nk8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext\n\t/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/loop.go:54\nk8s.io/apimachinery/pkg/util/wait.PollUntilContextCancel\n\t/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:33\nsigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind).Start.func1\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.17.3/pkg/internal/source/kind.go:56"}
{"level":"error","ts":"2024-06-12T12:03:11Z","logger":"controller-runtime.source.EventHandler","msg":"failed to get informer from cache","error":"Timeout: failed waiting for *unstructured.Unstructured Informer to sync","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind).Start.func1.1\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.17.3/pkg/internal/source/kind.go:68\nk8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext.func1\n\t/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/loop.go:53\nk8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext\n\t/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/loop.go:54\nk8s.io/apimachinery/pkg/util/wait.PollUntilContextCancel\n\t/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:33\nsigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind).Start.func1\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.17.3/pkg/internal/source/kind.go:56"}
{"level":"error","ts":"2024-06-12T12:03:11Z","logger":"controller-runtime.source.EventHandler","msg":"failed to get informer from cache","error":"Timeout: failed waiting for *unstructured.Unstructured Informer to sync","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind).Start.func1.1\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.17.3/pkg/internal/source/kind.go:68\nk8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext.func1\n\t/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/loop.go:53\nk8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext\n\t/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/loop.go:54\nk8s.io/apimachinery/pkg/util/wait.PollUntilContextCancel\n\t/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:33\nsigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind).Start.func1\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.17.3/pkg/internal/source/kind.go:56"}
{"level":"error","ts":"2024-06-12T12:03:11Z","logger":"controller-runtime.source.EventHandler","msg":"failed to get informer from cache","error":"Timeout: failed waiting for *unstructured.Unstructured Informer to sync","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind).Start.func1.1\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.17.3/pkg/internal/source/kind.go:68\nk8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext.func1\n\t/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/loop.go:53\nk8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext\n\t/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/loop.go:54\nk8s.io/apimachinery/pkg/util/wait.PollUntilContextCancel\n\t/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:33\nsigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind).Start.func1\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.17.3/pkg/internal/source/kind.go:56"}
{"level":"info","ts":"2024-06-12T12:03:11Z","msg":"Stopping and waiting for webhooks"}
{"level":"info","ts":"2024-06-12T12:03:11Z","msg":"Stopping and waiting for HTTP servers"}
{"level":"info","ts":"2024-06-12T12:03:11Z","logger":"controller-runtime.metrics","msg":"Shutting down metrics server with timeout of 1 minute"}
{"level":"info","ts":"2024-06-12T12:03:11Z","msg":"shutting down server","kind":"health probe","addr":"[::]:18080"}
{"level":"info","ts":"2024-06-12T12:03:11Z","msg":"Wait completed, proceeding to shutdown the manager"}
{"level":"error","ts":"2024-06-12T12:03:11Z","logger":"entrypoint","msg":"Unable to run the manager","error":"too many open files","stacktrace":"main.main\n\t/go/src/github.com/kubeflow/katib/cmd/katib-controller/v1beta1/main.go:163\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:271"}
```
We have deployed on Kubernetes 1.20 and 1.27 and it works there (the trial-controller `*unstructured.Unstructured` informer issue also exists on those versions), but this time the controller never deploys successfully. Please kindly help us.
### What did you expect to happen?
Because version 0.17 supports Kubernetes 1.27-1.29 according to the roadmap, we expect it to work normally.
### Environment
Kubernetes version: 1.28
```bash
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.2", GitCommit:"7f6f68fdabc4df88cfea2dcf9a19b2b830f1e647", GitTreeState:"clean", BuildDate:"2023-05-17T14:20:07Z", GoVersion:"go1.20.4", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v5.0.1
Server Version: version.Info{Major:"1", Minor:"28", GitVersion:"v1.28.6", GitCommit:"be3af46a4654bdf05b4838fe94e95ec8c165660c", GitTreeState:"clean", BuildDate:"2024-01-17T13:39:00Z", GoVersion:"go1.20.13", Compiler:"gc", Platform:"linux/amd64"}
```
Katib controller version:
```bash
$ kubectl get pods -n kubeflow -l katib.kubeflow.org/component=controller -o jsonpath="{.items[*].spec.containers[*].image}"
docker.io/kubeflowkatib/katib-controller:v0.17.0-rc.0
```
Katib Python SDK version:
```bash
$ pip show kubeflow-katib
```
### Impacted by this bug?
Give it a 👍 We prioritize the issues with most 👍 | closed | 2024-06-12T12:13:42Z | 2024-06-25T23:34:40Z | https://github.com/kubeflow/katib/issues/2353 | [
"kind/bug"
] | rogeryuchao | 10 |
pykaldi/pykaldi | numpy | 304 | nnet3-compute output dim not as expected | I load a tdnnf15_l33r33_2048x384 model, and the kaldi tool can output the content of context * 384, but the output dim obtained by using pykaldi is 2704.
```python
def __init__(self, transition_model, acoustic_model,
             online_ivector_period=10):
    if not isinstance(acoustic_model, nnet3.AmNnetSimple):
        raise TypeError("acoustic_model should be a AmNnetSimple object")
    self.transition_model = transition_model
    self.acoustic_model = acoustic_model
    nnet = self.acoustic_model.get_nnet()
    nnet3.set_batchnorm_test_mode(True, nnet)
    nnet3.set_dropout_test_mode(True, nnet)
    nnet3.collapse_model(nnet3.CollapseModelConfig(), nnet)
    priors = Vector(0)
    ivector_dim = max(0, nnet.input_dim("ivector"))
    ivector = Vector(ivector_dim)
    ivector.set_randn_()
    # self.decodable_opts = NnetSimpleLoopedComputationOptions()
    # info = DecodableNnetSimpleLoopedInfo.from_priors(self.decodable_opts , priors, nnet)
    # num_frames = 5 + random.randint(1, 100)
    input_dim = nnet.input_dim("input")
    # input = Matrix(num_frames, input_dim)
    m = kaldi_io.read_mat_scp('feat.scp')
    for key, mat in m:
        input = Matrix(mat)
        break
    # input.set_randn_()
    num_frames = input.size()[0]
    # decodable = DecodableNnetSimpleLooped(info, input,
    #                                       ivector if ivector_dim else None)
    self.decodable_opts = nnet3.NnetSimpleComputationOptions()
    compiler = nnet3.CachingOptimizingCompiler.new_with_optimize_opts(
        nnet, self.decodable_opts.optimize_config)
    # self.online_ivector_period = online_ivector_period
    decodable = DecodableNnetSimple(self.decodable_opts, nnet, priors, input, compiler,
                                    ivector if ivector_dim else None)
    print('expected 384')
    print(f'decodable.output_dim() {decodable.output_dim()}')
    output_dim = 384
    output2 = Matrix(num_frames, output_dim)
    for t in range(num_frames):
        decodable.get_output_for_frame(t, output2[t])
``` | closed | 2022-07-07T12:39:53Z | 2022-07-08T13:03:59Z | https://github.com/pykaldi/pykaldi/issues/304 | [] | xesdiny | 1 |
TheKevJames/coveralls-python | pytest | 195 | KeyError: 'url' and no result on coveralls.io | `coveralls` executes with the following output and no coverage report is pushed to coveralls.io:
```
Submitting coverage to coveralls.io...
Coverage submitted!
Couldn't find a repository matching this job.
Traceback (most recent call last):
File "/home/travis/virtualenv/python3.6.3/bin/coveralls", line 11, in <module>
sys.exit(main())
File "/home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages/coveralls/cli.py", line 80, in main
log.info(result['url'])
KeyError: 'url'
```
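The failing line reads `result['url']` directly, but when coveralls.io answers "Couldn't find a repository matching this job" the response apparently has no `url` key. A more defensive variant of that line (just a sketch, not necessarily the right upstream fix) would be:

```python
# Sketch of a guard around coveralls/cli.py line 80; `result` is the dict parsed from the
# coveralls.io response and `log` is the module logger already used on that line.
url = result.get('url')
if url:
    log.info(url)
else:
    # the 'message' key name is an assumption about the error payload
    log.warning(result.get('message', 'coveralls.io returned no result URL'))
```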
The problem is occurring on this pull request:
https://github.com/AKSW/QuitStore/pull/221
The output can be seen here:
https://travis-ci.org/AKSW/QuitStore/jobs/495597359
And I've also enabled `coveralls debug` in the next commit:
https://travis-ci.org/AKSW/QuitStore/jobs/495675977 | closed | 2019-02-20T08:51:22Z | 2019-03-20T03:07:28Z | https://github.com/TheKevJames/coveralls-python/issues/195 | [] | white-gecko | 3 |
feder-cr/Jobs_Applier_AI_Agent_AIHawk | automation | 328 | This **problem is related to indentation**. Once the **indentation** in your file matches that of the files in data_folder_example, you'll be able to run it. | @feder-cr This **problem is related to indentation**. Once the **indentation** in your file matches that of the files in data_folder_example, you'll be able to run it. Hope this helps.
If you get a **runtime error**, it will be due to a parsing error in one of the YAML files. I think most people will make errors in the resume.yaml file.
Note: check your code and your opening and closing braces.
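A quick way to catch such parsing errors up front is to load the file with PyYAML (a generic check, not part of the project itself; the file path below is just an example):

```python
# Minimal YAML sanity check; reports the line/column of the first parse error.
import yaml

path = "data_folder/plain_text_resume.yaml"  # hypothetical path; point it at your own file
with open(path, "r", encoding="utf-8") as f:
    try:
        yaml.safe_load(f)
        print("YAML parsed successfully")
    except yaml.YAMLError as exc:
        print(f"YAML error in {path}: {exc}")
```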
_Originally posted by @madhuptiwari in https://github.com/feder-cr/linkedIn_auto_jobs_applier_with_AI/issues/4#issuecomment-2337136759_ | closed | 2024-09-09T05:16:29Z | 2024-09-09T06:42:24Z | https://github.com/feder-cr/Jobs_Applier_AI_Agent_AIHawk/issues/328 | [] | madhuptiwari | 2 |
codertimo/BERT-pytorch | nlp | 98 | why specify `ignore_index=0` in the NLLLoss function in BERTTrainer? | # trainer/pretrain.py
```python
class BERTTrainer:
    def __init__(self, ...):
        ...
        # Using Negative Log Likelihood Loss function for predicting the masked_token
        self.criterion = nn.NLLLoss(ignore_index=0)
        ...
```
I cannot understand why `ignore_index=0` is specified when calculating NLLLoss. If the ground truth of `is_next` is False (label = 0) for the NSP task but BERT predicts True, then the NLLLoss will be 0 (or nan)... so what is the purpose of `ignore_index=0`?
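A small sanity check (my own illustration, not code from the repository) shows what `ignore_index=0` does to the NSP loss when the `is_next` label is 0:

```python
import torch
import torch.nn as nn

log_probs = torch.log_softmax(torch.randn(4, 2), dim=-1)   # fake NSP outputs for 4 sentence pairs
targets = torch.tensor([0, 0, 1, 1])                        # 0 = NotNext, 1 = IsNext

plain = nn.NLLLoss()(log_probs, targets)                    # every pair contributes
ignoring = nn.NLLLoss(ignore_index=0)(log_probs, targets)   # the label-0 (NotNext) pairs are dropped
print(plain, ignoring)  # if all targets were 0, `ignoring` would be nan (empty average)
```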
====================
Well, I've found that `ignore_index = 0` is useful to the MLM task, but I still can't agree the NSP task should share the same NLLLoss with MLM. | open | 2022-07-07T02:46:50Z | 2023-01-10T16:17:06Z | https://github.com/codertimo/BERT-pytorch/issues/98 | [] | Jasmine969 | 1 |
codertimo/BERT-pytorch | nlp | 29 | when training the masked LM, the unmasked words (have label 0) were trained together with masked words? | According to the code
```
def random_word(self, sentence):
    tokens = sentence.split()
    output_label = []

    for i, token in enumerate(tokens):
        prob = random.random()
        if prob < 0.15:
            # 80% randomly change token to make token
            if prob < prob * 0.8:
                tokens[i] = self.vocab.mask_index
            # 10% randomly change token to random token
            elif prob * 0.8 <= prob < prob * 0.9:
                tokens[i] = random.randrange(len(self.vocab))
            # 10% randomly change token to current token
            elif prob >= prob * 0.9:
                tokens[i] = self.vocab.stoi.get(token, self.vocab.unk_index)
            output_label.append(self.vocab.stoi.get(token, self.vocab.unk_index))
        else:
            tokens[i] = self.vocab.stoi.get(token, self.vocab.unk_index)
            output_label.append(0)

    return tokens, output_label
```
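For context, here is my own sketch (not code from the repository) of how those 0 labels interact with the trainer's `NLLLoss(ignore_index=0)`: positions labeled 0 simply drop out of the masked-LM loss.

```python
import torch
import torch.nn as nn

vocab_size = 8
log_probs = torch.log_softmax(torch.randn(5, vocab_size), dim=-1)  # fake per-token predictions
labels = torch.tensor([0, 0, 3, 0, 6])  # 0 = unmasked position, >0 = original id of a masked token

criterion = nn.NLLLoss(ignore_index=0)  # same setting as BERTTrainer
loss = criterion(log_probs, labels)     # only positions 2 and 4 contribute to the loss
print(loss)
```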
Do we need to exclude the unmasked words when training the LM? | open | 2018-10-23T07:35:37Z | 2018-10-30T07:47:04Z | https://github.com/codertimo/BERT-pytorch/issues/29 | [
"enhancement",
"question"
] | coddinglxf | 6 |
sinaptik-ai/pandas-ai | pandas | 1,655 | AttributeError: 'LangchainLLM' object has no attribute '_llm_type' | ### System Info
pandasai==3.0.0b14
OS: Windows 10 and Ubuntu 22.04
Python 3.11
### 🐛 Describe the bug
```python
from langchain.chat_models import ChatOpenAI
import pandasai as pai
from pandasai_langchain import LangchainLLM

dataset_path = "qshop/log-data"

try:
    sql_table = pai.create(
        path=dataset_path,
        description="XXXXXXXXXXXXXX",
        source={
            "type": "mysql",
            "connection": {
                "host": "192.168.0.4",
                "port": 8096,
                "user": "qshop_rw",
                "password": "Hd43eN+DkNaR",
                "database": "qshop"
            },
            "table": "tb_log"
        },
        columns=[
            {
                "name": "Id",
                "type": "string",
                "description": "每条数据的唯一标识符"
            },
            {
                "name": "UserID",
                "type": "string",
                "description": "此条操作记录的用户,无就代表用户没登录"
            },
            {
                "name": "CreateTime",
                "type": "datetime",
                "description": "此条操作记录的产生的时间"
            },
            {
                "name": "PageName",
                "type": "string",
                "description": "此条操作记录访问的页面名称"
            },
            {
                "name": "GoodsName",
                "type": "string",
                "description": "此条操作记录访问的产品的名称,或者需求的名称,或者视频资讯的名称"
            },
            {
                "name": "Col1",
                "type": "string",
                "description": "辅助判断列,如果值为小模型发布则说明GoodsName对应的是产品,如果值为小模型需求则说明GoodsName对应的是需求,如果值为小模型视频说明GoodsName对应的是视频资讯"
            }
        ]
    )
    print(f"成功创建新数据集: {dataset_path}")
except Exception as e:
    print(f"创建数据集时出错: {e}")

llm = ChatOpenAI(base_url='https://XXXX.XXX.XX.XX:XXX/v1/',
                 api_key='sk-proj-1234567890',
                 model='deepseek-r1-distill-qwen',
                 request_timeout=300)
llm1 = LangchainLLM(langchain_llm=llm)

pai.config.set({
    "llm": llm1,
    "timeout": 300,
    "enable_cache": False,
})

# fetch data from the connector
agent = pai.load('qshop/log-data')
# example query
ans = agent.chat("请根据这个表格生成一份访问分析报告,并根据报告给出后续的运营建议。")
print(ans)
```
```
Exception has occurred: AttributeError
'LangchainLLM' object has no attribute '_llm_type'
  File "E:\develop\aiagent\pandasaitest.py", line 84, in <module>
    ans = agent.chat("请根据这个表格生成一份访问分析报告,并根据报告给出后续的运营建议。")
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'LangchainLLM' object has no attribute '_llm_type'
```
| closed | 2025-03-04T04:09:26Z | 2025-03-14T17:03:38Z | https://github.com/sinaptik-ai/pandas-ai/issues/1655 | [] | ban0228 | 1 |
nltk/nltk | nlp | 2,520 | Computer restarts after nltk.download() | I've installed NLTK through Anaconda. When I run:
```
>>> import nltk
>>> nltk.download()
```
in the terminal, my Mac screen goes blank and the machine restarts.
Does anyone know what the problem is? | closed | 2020-03-25T10:54:36Z | 2020-04-12T22:37:49Z | https://github.com/nltk/nltk/issues/2520 | [] | ShirleneLiu | 1 |
DistrictDataLabs/yellowbrick | matplotlib | 512 | Remove old datasets code and rewire with new datasets.load_* api | As a follow up to reworking the datasets API, we need to go through and remove redundant old code in these locations:
- [x] `yellowbrick/download.py`
- [ ] `tests/dataset.py`
Part of this will be a requirement to rewire tests and examples as needed. There will also likely be some slight data transformations in code that have to happen.
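For reference, a rewired call site might look roughly like this (a sketch; the loader name is assumed from the new `yellowbrick.datasets` module):

```python
# Hypothetical rewired example: replace the old download helpers with the new loader API.
from yellowbrick.datasets import load_occupancy  # assumed loader name in the new API

X, y = load_occupancy()  # features and target, ready to feed a visualizer or a test fixture
print(X.shape, y.shape)
```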
@DistrictDataLabs/team-oz-maintainers
| closed | 2018-07-19T17:35:30Z | 2019-02-06T14:40:20Z | https://github.com/DistrictDataLabs/yellowbrick/issues/512 | [
"priority: high",
"type: technical debt",
"level: intermediate"
] | ndanielsen | 3 |
facebookresearch/fairseq | pytorch | 5,051 | License for NLLB-200's tokenizer(SPM-200) | ## ❓ Questions and Help
#### What is your question?
What is the license for the tokenizer model used in NLLB(SPM-200)?
The NLLB model itself is cc-by-nc-4.0, but it is unclear if the SPM-200 model also shares the same license | open | 2023-04-02T13:17:06Z | 2023-04-02T13:17:06Z | https://github.com/facebookresearch/fairseq/issues/5051 | [
"question",
"needs triage"
] | chris-ha458 | 0 |
lukas-blecher/LaTeX-OCR | pytorch | 267 | RuntimeError: PytorchStreamReader failed reading zip archive: failed finding central directory | closed | 2023-04-23T08:59:00Z | 2023-04-24T09:29:35Z | https://github.com/lukas-blecher/LaTeX-OCR/issues/267 | [] | ltw321 | 2 |
|
sunscrapers/djoser | rest-api | 229 | OAuth client integration | Use it for Facebook/Google/Github etc. authentication.
Should we push it to separate package? e.g. djoser-oauth? | closed | 2017-09-25T21:07:05Z | 2017-11-05T01:11:35Z | https://github.com/sunscrapers/djoser/issues/229 | [] | pszpetkowski | 0 |
deepinsight/insightface | pytorch | 2,253 | Getting error when fetching model | I am getting this error
RuntimeError: Failed downloading url http://insightface.cn-sh2.ufileos.com/models/inswapper_128.onnx.zip . Is this available via some other url? | closed | 2023-02-24T22:08:13Z | 2023-05-14T08:06:08Z | https://github.com/deepinsight/insightface/issues/2253 | [] | bharatsingh430 | 2 |
dunossauro/fastapi-do-zero | pydantic | 297 | Add an explanation that the code blocks CONTAIN additional information | Something like this (but done properly, of course):
 | closed | 2025-02-05T23:15:01Z | 2025-02-20T18:52:13Z | https://github.com/dunossauro/fastapi-do-zero/issues/297 | [] | dunossauro | 0 |
dagster-io/dagster | data-science | 28,342 | Error when using type annotation for `AssetExecutionContext` | ### What's the issue?
When I use a type annotation for `AssetExecutionContext` in an asset that also uses any other resource, dagster gets confused and thinks that the type annotation is wrong and gives me this error (paths in the error might be a bit off as I had to remove sensitive data from them):
```
dagster._core.errors.DagsterInvalidDefinitionError: Cannot annotate `context` parameter with type AssetExecutionContext. `context` must be annotated with AssetExecutionContext, AssetCheckExecutionContext, OpExecutionContext, or left blank.
File "/code/test_dagster/.virtualenv/lib/python3.12/site-packages/dagster/_grpc/server.py", line 420, in __init__
self._loaded_repositories: Optional[LoadedRepositories] = LoadedRepositories(
^^^^^^^^^^^^^^^^^^^
File "/code/test_dagster/.virtualenv/lib/python3.12/site-packages/dagster/_grpc/server.py", line 253, in __init__
loadable_targets = get_loadable_targets(
^^^^^^^^^^^^^^^^^^^^^
File "/code/test_dagster/.virtualenv/lib/python3.12/site-packages/dagster/_grpc/utils.py", line 51, in get_loadable_targets
else loadable_targets_from_python_module(module_name, working_directory)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/code/test_dagster/.virtualenv/lib/python3.12/site-packages/dagster/_core/workspace/autodiscovery.py", line 33, in loadable_targets_from_python_module
module = load_python_module(
^^^^^^^^^^^^^^^^^^^
File "/code/test_dagster/.virtualenv/lib/python3.12/site-packages/dagster/_core/code_pointer.py", line 135, in load_python_module
return importlib.import_module(module_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python312/lib/python3.12/importlib/__init__.py", line 90, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1387, in _gcd_import
File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 999, in exec_module
File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
File "/code/test_dagster/dagster/__init__.py", line 41, in <module>
from test_dagster import my_ctx
File "/code/test_dagster/dagster/my_ctx/__init__.py", line 6, in <module>
from test_dagster.my_ctx import staging
File "/code/test_dagster/dagster/my_ctx/staging/__init__.py", line 5, in <module>
from test_dagster.my_ctx.staging import data
File "/code/test_dagster/dagster/my_ctx/staging/data.py", line 471, in <module>
@asset()
^^^^^^^
File "/code/test_dagster/.virtualenv/lib/python3.12/site-packages/dagster/_core/definitions/decorators/asset_decorator.py", line 339, in inner
return create_assets_def_from_fn_and_decorator_args(args, fn)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/code/test_dagster/.virtualenv/lib/python3.12/site-packages/dagster/_core/definitions/decorators/asset_decorator.py", line 538, in create_assets_def_from_fn_and_decorator_args
return builder.create_assets_definition()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/code/test_dagster/.virtualenv/lib/python3.12/site-packages/dagster/_core/definitions/decorators/decorator_assets_definition_builder.py", line 576, in create_assets_definition
node_def=self.create_op_definition(),
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/code/test_dagster/.virtualenv/lib/python3.12/site-packages/dagster/_core/definitions/decorators/decorator_assets_definition_builder.py", line 556, in create_op_definition
return _Op(
^^^^
File "/code/test_dagster/.virtualenv/lib/python3.12/site-packages/dagster/_core/definitions/decorators/op_decorator.py", line 123, in __call__
op_def = OpDefinition.dagster_internal_init(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/code/test_dagster/.virtualenv/lib/python3.12/site-packages/dagster/_core/definitions/op_definition.py", line 201, in dagster_internal_init
return OpDefinition(
^^^^^^^^^^^^^
File "/code/test_dagster/.virtualenv/lib/python3.12/site-packages/dagster/_core/decorator_utils.py", line 195, in wrapped_with_pre_call_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/code/test_dagster/.virtualenv/lib/python3.12/site-packages/dagster/_core/definitions/op_definition.py", line 144, in __init__
_validate_context_type_hint(self._compute_fn.decorated_fn)
File "/code/test_dagster/.virtualenv/lib/python3.12/site-packages/dagster/_core/definitions/op_definition.py", line 593, in _validate_context_type_hint
raise DagsterInvalidDefinitionError(
```
### What did you expect to happen?
It should work as I'm using a type annotation of the type that the error is expecting.
### How to reproduce?
`dagster dev` works in this case:
```python
@asset()
def data(context, my_resource: ResourceParam[MyResource]):
    pass
```
`dagster dev` fails to start in this case with the error mentioned above:
```python
@asset()
def data(context: AssetExecutionContext, my_resource: ResourceParam[MyResource]):
    pass
```
### Dagster version
dagster, version 1.10.4
### Deployment type
None
### Deployment details
I'm running Python 3.12.8 locally
### Additional information
_No response_
### Message from the maintainers
Impacted by this issue? Give it a 👍! We factor engagement into prioritization. | open | 2025-03-09T23:18:02Z | 2025-03-11T20:16:46Z | https://github.com/dagster-io/dagster/issues/28342 | [
"type: bug",
"area: asset"
] | ElPincheTopo | 1 |
Yorko/mlcourse.ai | seaborn | 376 | Topic 3, Some graphviz images missing | It looks like graphviz images after code cell 9 and 13 were not rendered. Considering presence of the graph after the 6th cell, it isn't a browser error and there should not be many difficulties restoring them. | closed | 2018-10-16T12:31:36Z | 2018-10-17T21:34:24Z | https://github.com/Yorko/mlcourse.ai/issues/376 | [
"minor_fix"
] | foghegehog | 1 |
hankcs/HanLP | nlp | 1,330 | Inaccurate POS tagging: '明确提出' is tagged as a noun | <img width="855" alt="企业微信截图_15745009854716" src="https://user-images.githubusercontent.com/49904623/69476574-7e583200-0e16-11ea-9fe3-ec9d0c7f39a1.png">
| closed | 2019-11-23T09:27:30Z | 2019-11-23T15:49:13Z | https://github.com/hankcs/HanLP/issues/1330 | [] | mociwang | 0 |
HIT-SCIR/ltp | nlp | 263 | ltpcsharp fails at runtime, and there is very little online material about using LTP from C#; hoping for help, thanks |


The project compiles and builds normally; the exception is only thrown when the program is executed.
| closed | 2017-11-28T08:30:02Z | 2020-06-25T11:20:38Z | https://github.com/HIT-SCIR/ltp/issues/263 | [] | hxw8187 | 0 |
CorentinJ/Real-Time-Voice-Cloning | python | 726 | How can I use my trained voice models with SV2TTS? | How can I use my trained voice models with SV2TTS?
I have trained a voice model in Colab, and I want to load my trained model into SV2TTS. | closed | 2021-04-06T18:24:10Z | 2021-04-09T19:22:23Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/726 | [] | CrazyPlaysHD | 1 |
widgetti/solara | flask | 446 | Markdown's code highlight, math and Mermaid support are broken in VS Code | ````python
import solara
@solara.component
def Page():
    solara.Markdown(r'''
# Large
## Smaller
## List items
* item 1
* item 2
## Math
Also, $x^2$ is rendered as math.
Or multiline math:
$$
\int_0^1 x^2 dx = \frac{1}{3}
$$
## Code highlight support
```python
code = "formatted" and "supports highlighting"
```
## Mermaid support!
See [Mermaid docs](https://mermaid-js.github.io/)
```mermaid
graph TD;
A-->B;
A-->C;
B-->D;
C-->D;
```
''')
Page()
````

vscode-jupyter version: v2024.1.100 | closed | 2024-01-05T19:58:54Z | 2024-02-09T15:59:58Z | https://github.com/widgetti/solara/issues/446 | [] | Chaoses-Ib | 1 |
KevinMusgrave/pytorch-metric-learning | computer-vision | 94 | Example notebooks should import torchvision.models as a different name | In the example notebooks, torchvision.models is getting imported as "models" but then that gets overwritten later on by the "models" dictionary, and that is confusing. | closed | 2020-05-08T17:47:38Z | 2020-05-09T16:40:44Z | https://github.com/KevinMusgrave/pytorch-metric-learning/issues/94 | [
"enhancement"
] | KevinMusgrave | 1 |
marcomusy/vedo | numpy | 217 | How to make the iso-surface closed? |
I am using `Volume(mask).isosurface(1)` to extract the surface from the mask, and the result is shown below:

The extracted result is not closed. How can I make the surface closed?
It seems that vtkvmtkCapPolyData may help.
```
vtkvmtkCapPolyData - Add caps to boundaries.
Superclass: vtkPolyDataAlgorithm
This class closes the boundaries of a surface with a cap. Each cap is
made of triangles sharing the boundary baricenter. Boundary
baricenters are added to the dataset. It is possible to retrieve the
ids of the added points with GetCapCenterIds. Boundary baricenters
can be displaced along boundary normals through the Displacement
parameter. Since this class is used as a preprocessing step for
Delaunay tessellation, displacement is meant to avoid the occurence
of degenerate tetrahedra on the caps.
```
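If vmtk is not available, I guess a rough alternative is to fill the open boundaries directly in vedo (a sketch only; I am assuming `fillHoles` accepts a size threshold and that `isClosed()` reports watertightness in this version):

```python
import numpy as np
from vedo import Volume

mask = np.zeros((60, 60, 60), dtype=np.uint8)
mask[10:50, 10:50, 10:50] = 1              # stand-in for the real segmentation mask
surf = Volume(mask).isosurface(1)          # the open surface as above
closed = surf.fillHoles(size=1000)         # hole-size threshold is a guess; tune to the opening size
print(closed.isClosed())                   # True once the mesh is watertight
```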
| closed | 2020-09-23T08:12:04Z | 2020-09-25T01:23:30Z | https://github.com/marcomusy/vedo/issues/217 | [] | NeuZhangQiang | 2 |
jonra1993/fastapi-alembic-sqlmodel-async | sqlalchemy | 63 | Feature Request: Rate Limiter | ## Feature Request: Rate Limiter
Could you please help add a rate limiter to the login endpoints? Consider using this: https://github.com/long2ice/fastapi-limiter.
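A rough sketch of what this could look like on a login route, following fastapi-limiter's README (the Redis URL, limits, and route path below are placeholders, not this project's actual wiring):

```python
import redis.asyncio as redis
from fastapi import Depends, FastAPI
from fastapi_limiter import FastAPILimiter
from fastapi_limiter.depends import RateLimiter

app = FastAPI()

@app.on_event("startup")
async def startup() -> None:
    # point fastapi-limiter at the project's Redis instance (URL is a placeholder)
    await FastAPILimiter.init(redis.from_url("redis://localhost:6379", encoding="utf-8"))

# allow at most 5 login attempts per minute per client
@app.post("/login", dependencies=[Depends(RateLimiter(times=5, seconds=60))])
async def login() -> dict:
    return {"msg": "ok"}  # placeholder handler; the real endpoint lives in the auth router
```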
Thank you. | closed | 2023-04-15T19:03:10Z | 2023-05-08T23:51:58Z | https://github.com/jonra1993/fastapi-alembic-sqlmodel-async/issues/63 | [] | jymchng | 4 |
jmcarpenter2/swifter | pandas | 26 | swifter apply for resample groups | I've used swifter to speed up apply calls on DataFrames, but this isn't the only context apply is used in pandas. Would it be simple to implement for resample objects also?
See: [pandas.DataFrame.resample](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.resample.html)
Can we go from:
`series.resample('3T').apply(custom_resampler)`
to:
`series.resample('3T').swifter.apply(custom_resampler)`? | closed | 2018-10-30T09:56:22Z | 2019-11-25T07:11:34Z | https://github.com/jmcarpenter2/swifter/issues/26 | [
"enhancement"
] | harahu | 11 |