repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count
---|---|---|---|---|---|---|---|---|---|---|---
BlinkDL/RWKV-LM | pytorch | 127 | finetune for other languages? | How can I fine-tune it for Vietnamese for a dialog chatbot? Thanks guys | closed | 2023-05-25T08:48:41Z | 2023-06-07T16:56:20Z | https://github.com/BlinkDL/RWKV-LM/issues/127 | [] | batman-do | 3 |
pytorch/pytorch | deep-learning | 149,097 | Aten arange behavior when dtype is int64 and step size is greater than range | ### 🐛 Describe the bug
While testing corner cases of torch.arange, I see the following behavior when dtype is int64 and the step size is greater than the range.
On CPU, I get the following behavior for arange:
>>> a = torch.arange(0, 0.5, 1, dtype=torch.int64)
>>> a
tensor([], dtype=torch.int64)
>>> a = torch.arange(0, 0.5, 1, dtype=torch.int32)
>>> a
tensor([0], dtype=torch.int32)
Why is the size of `a` 0 when dtype is int64, whereas it is 1 for int32? Logically speaking, the first element is 0 either way, so the size should be 1 even for the int64 type, shouldn't it?
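The asymmetry would be consistent with the bounds being truncated to the target dtype before the size computation in the int64 path, while the int32 path computes the size in floating point first. A small sketch that reproduces both sizes under that assumption (the casting-order explanation is a guess on my part, not taken from the ATen source):
```python
import math
import torch

print(torch.arange(0, 0.5, 1, dtype=torch.int64).numel())  # 0
print(torch.arange(0, 0.5, 1, dtype=torch.int32).numel())  # 1

# Assumed size formulas (not verified against ATen):
print(math.ceil((int(0.5) - int(0)) / 1))  # 0 -> bounds cast to int first
print(math.ceil((0.5 - 0) / 1))            # 1 -> size computed in float first
```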
### Versions
2025-03-13 05:10:24 (2.62 MB/s) - ‘collect_env.py’ saved [24353/24353]
Collecting environment information...
PyTorch version: 2.6.0+hpu_1.21.0-202.git603340c
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.5 (ssh://git@github.com/habana-internal/tpc_llvm10 6423f90703886aa37631daf63eaf24f24df9ba3d)
CMake version: version 3.29.5
Libc version: glibc-2.35
Python version: 3.10.12 (main, Feb 4 2025, 14:57:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-131-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6132 CPU @ 2.60GHz
CPU family: 6
Model: 85
Thread(s) per core: 1
Core(s) per socket: 6
Socket(s): 2
Stepping: 0
BogoMIPS: 5187.81
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon nopl xtopology tsc_reliable nonstop_tsc cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 invpcid avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xsaves arat pku ospke md_clear flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: VMware
Virtualization type: full
L1d cache: 384 KiB (12 instances)
L1i cache: 384 KiB (12 instances)
L2 cache: 12 MiB (12 instances)
L3 cache: 38.5 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX flush not necessary, SMT disabled
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==7.0.0
[pip3] habana-torch-dataloader==1.21.0+git9d09025dd
[pip3] habana-torch-plugin==1.21.0+git9d09025dd
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] torch==2.6.0+hpu.1.21.0.202.git603340c
[pip3] torch_tb_profiler==0.4.0
[pip3] torchaudio==2.5.1a0+1661daf
[pip3] torchdata==0.9.0+d4bb3e6
[pip3] torchmetrics==1.2.1
[pip3] torchtext==0.18.0a0+9bed85d
[pip3] torchvision==0.20.1a0+3ac97aa
[conda] Could not collect
cc @albanD | open | 2025-03-13T03:11:13Z | 2025-03-17T04:22:59Z | https://github.com/pytorch/pytorch/issues/149097 | [
"triaged",
"module: python frontend"
] | satheeshhab | 1 |
amdegroot/ssd.pytorch | computer-vision | 300 | I am training with my own dataset, but the loss is NaN. | My PC:
GTX 1080 Ti, CentOS 7, learning rate 1e-5
Has anyone run into the same situation? Please help me. Thanks.
iter 310 || Loss: nan || timer: 0.1921 sec.
iter 320 || Loss: nan || timer: 0.2041 sec.
iter 330 || Loss: nan || timer: 0.2006 sec.
iter 340 || Loss: nan || timer: 0.2043 sec.
iter 350 || Loss: nan || timer: 0.2128 sec.
iter 360 || Loss: nan || timer: 0.2072 sec.
iter 370 || Loss: nan || timer: 0.2091 sec.
iter 380 || Loss: nan || timer: 0.2141 sec.
iter 390 || Loss: nan || timer: 0.2486 sec.
iter 400 || Loss: nan || timer: 0.1914 sec.
iter 410 || Loss: nan || timer: 0.2052 sec.
iter 420 || Loss: nan || timer: 0.1976 sec.
iter 430 || Loss: nan || timer: 0.1952 sec.
iter 440 || Loss: nan || timer: 0.1942 sec.
iter 450 || Loss: nan || timer: 0.2101 sec.
iter 460 || Loss: nan || timer: 0.1934 sec. | open | 2019-03-07T14:23:59Z | 2019-06-28T08:50:30Z | https://github.com/amdegroot/ssd.pytorch/issues/300 | [] | dazhangzhang | 6 |
MycroftAI/mycroft-core | nlp | 2,600 | Ubuntu CLI commands | Hi, love that Mycroft is working on Kubuntu 18.10. I only had to fiddle with PulseAudio (restart?) to get the audio out working (although the audio test worked). I've been trying to get voice command to work for years, starting with WSP, Vocola and Dragonfly on Windows, but the WinAPI and similar calls are very limited and poorly documented. It's great that Kubuntu can use Python calls.
So the first thing I wanted to do was/is to voice-control Vim (like in this old video using .NET? https://www.youtube.com/watch?v=TEBMlXRjhZY) or at least run some CLI commands. Unfortunately, it looks like using the KDE Plasmoid is the only way to do this? Please correct me if I'm wrong. I do see there are window-navigation controls with the Desktop Control Skill (is the plasmoid necessary for this? Shouldn't there be a way to use the desktop control commands through the command line without the whole plasmoid feature?), which would be handy, but I can't seem to get the plasmoid installed. So if there is a direct way to hook into Mycroft output and redirect it to bash or some other basic form, that would be my simplest solution. I have used the CLI debug tool, which is great, but I don't see how that could be redirected yet. I realize Mycroft was built to operate on devices without keyboards, but output as text to CLI commands seems like a basic tool that is fundamental, even for workarounds such as a plasmoid not installing.
Installation of the plasmoid hangs on "Installing../lottie/qmldir" and had two package install errors for qtdeclarative5-qtquick2-plugin and qtdeclarative5-models-plugin. Similar to this issue for installing on Debian (https://github.com/MycroftAI/installers/issues/9), except I'm using Kubuntu 18.10 Cosmic, which doesn't have these packages in its repos. I'm not sure if I can install them manually. I've been using the AppImage installer, but will try the manual install for Debian again. No, actually that ended where the instructions say "sudo chmod +x /usr/share/plasma/plasmoids/org.kde.plasma.mycroftplasmoid/contents/code/startservice.sh" because there is no 'code' directory created, which would have the scripts to run manually. I'm not sure if I need those scripts, but after the hung AppImage install I do have a plasmoid, which gives these errors:
```
"Error loading QML file: file:///usr/share/plasma/plasmoids/org.kde.plasma.mycroftplasmoid/contents/ui/main.qml:33:34: Type FullRepresentation unavailable
file:///usr/share/plasma/plasmoids/org.kde.plasma.mycroftplasmoid/contents/ui/FullRepresentation.qml:31:1: module "Mycroft" is not installed"
```
I may try a restart after finishing the tasks left in my browser windows, but would love a path forward that doesn't require any plasmoid and all the dependencies that install required. Thanks for any pointers. | closed | 2020-05-31T16:54:53Z | 2020-06-07T22:12:24Z | https://github.com/MycroftAI/mycroft-core/issues/2600 | [] | auwsom | 13 |
dask/dask | numpy | 11,285 | BUG: `array.asarray` does not respect `dtype` arg | **Describe the issue**:
`dask.array.asarray` does not respect the `dtype` argument.
**Minimal Complete Verifiable Example**:
```python
>>> import numpy as np
>>> import dask.array as da
>>> Zm = da.asarray([[1, 2, 3]])
>>> Zm
dask.array<array, shape=(1, 3), dtype=int64, chunksize=(1, 3), chunktype=numpy.ndarray>
>>> Z = da.asarray(Zm, dtype=da.float64)
>>> Z
dask.array<array, shape=(1, 3), dtype=int64, chunksize=(1, 3), chunktype=numpy.ndarray>
>>> Z.compute().dtype
dtype('int64')
# same issue is present with `np` dtypes directly
>>> Z = da.asarray(Zm, dtype=np.float64)
>>> Z
dask.array<array, shape=(1, 3), dtype=int64, chunksize=(1, 3), chunktype=numpy.ndarray>
>>> Z.compute().dtype
dtype('int64')
```
**Anything else we need to know?**:
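Casting explicitly does produce the requested dtype, so the problem seems confined to how `asarray` handles its `dtype` argument. A minimal workaround sketch using the standard `astype` method (reusing `Zm` and `np` from the example above):
```python
# Workaround: cast explicitly instead of passing dtype to asarray.
Z = Zm.astype(np.float64)
print(Z.dtype)            # float64
print(Z.compute().dtype)  # float64
```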
**Environment**:
- Dask version: 2024.8.0+3.g65270980
- Python version: 3.12.4
- Operating System: Ubuntu
- Install method (conda, pip, source): `python -m pip install git+https://github.com/dask/dask.git`
| closed | 2024-08-08T12:12:01Z | 2024-08-12T14:46:21Z | https://github.com/dask/dask/issues/11285 | [
"array"
] | lucascolley | 1 |
matplotlib/matplotlib | data-visualization | 29,146 | [MNT]: Add Type Checking to Avoid AttributeError in Functions Handling Units | ### Summary
Currently, in unit-related example scripts, the code raises an AttributeError when numpy.float64 objects are used without being converted to objects with a convert_to method. This can create issues for users who pass NumPy float values directly, unaware that the function expects specific unit-handling objects.
### Proposed fix
Add a type check at the beginning of functions to ensure that input is of the correct type. If it is not, raise an exception with a clear message (e.g., "Values must be unit-handling objects, not float"). | closed | 2024-11-15T21:18:07Z | 2025-01-04T18:08:48Z | https://github.com/matplotlib/matplotlib/issues/29146 | [
"status: needs clarification",
"Maintenance"
] | MehdiNemri | 3 |
pydantic/pydantic-core | pydantic | 1,462 | 2.24.0: not ready for `pyupgrade --py39-plus` (fails on linking DSO) | Next month python 3.8mwill be EOSed.
I've tested patch generated by `pyupgrade --py39-plus` and looks like with that patch build fails on linking with
```console
+ /usr/bin/python3 -sBm build -w --no-isolation
* Getting build dependencies for wheel...
* Building wheel...
Running `maturin pep517 build-wheel -i /usr/bin/python3 --compatibility off`
📦 Including license file "/home/tkloczko/rpmbuild/BUILD/pydantic-core-2.24.0/LICENSE"
🍹 Building a mixed python/rust project
🔗 Found pyo3 bindings
🐍 Found CPython 3.10 at /usr/bin/python3
📡 Using build options features, bindings from pyproject.toml
Compiling proc-macro2 v1.0.86
Compiling unicode-ident v1.0.12
Compiling target-lexicon v0.12.14
Compiling python3-dll-a v0.2.10
Compiling once_cell v1.19.0
Compiling autocfg v1.3.0
Compiling stable_deref_trait v1.2.0
Compiling libc v0.2.155
Compiling heck v0.5.0
Compiling version_check v0.9.5
Compiling litemap v0.7.3
Compiling writeable v0.5.5
Compiling rustversion v1.0.17
Compiling memchr v2.7.4
Compiling icu_locid_transform_data v1.5.0
Compiling radium v0.7.0
Compiling cfg-if v1.0.0
Compiling tinyvec_macros v0.1.1
Compiling static_assertions v1.1.0
Compiling smallvec v1.13.2
Compiling icu_properties_data v1.5.0
Compiling serde v1.0.209
Compiling tap v1.0.1
Compiling serde_json v1.0.128
Compiling indoc v2.0.5
Compiling write16 v1.0.0
Compiling percent-encoding v2.3.1
Compiling unindent v0.2.3
Compiling utf8_iter v1.0.4
Compiling hashbrown v0.14.5
Compiling unicode-bidi v0.3.15
Compiling icu_normalizer_data v1.5.0
Compiling funty v2.0.0
Compiling utf16_iter v1.0.5
Compiling zerocopy v0.7.34
Compiling regex-syntax v0.8.4
Compiling equivalent v1.0.1
Compiling ryu v1.0.18
Compiling itoa v1.0.11
Compiling uuid v1.10.0
Compiling hex v0.4.3
Compiling base64 v0.22.1
Compiling lexical-util v0.8.5
Compiling tinyvec v1.6.1
Compiling wyz v0.5.1
Compiling form_urlencoded v1.2.1
Compiling aho-corasick v1.1.3
Compiling bitvec v1.0.1
Compiling indexmap v2.2.6
Compiling lexical-parse-integer v0.8.6
Compiling unicode-normalization v0.1.23
Compiling lexical-parse-float v0.8.5
Compiling quote v1.0.36
Compiling syn v2.0.68
Compiling ahash v0.8.11
Compiling idna v0.5.0
Compiling getrandom v0.2.15
Compiling num-traits v0.2.19
Compiling memoffset v0.9.1
Compiling url v2.5.2
Compiling regex-automata v0.4.7
Compiling pyo3-build-config v0.22.2
Compiling num-integer v0.1.46
Compiling num-bigint v0.4.6
Compiling regex v1.10.6
Compiling synstructure v0.13.1
Compiling pyo3-ffi v0.22.2
Compiling pyo3-macros-backend v0.22.2
Compiling pyo3 v0.22.2
Compiling jiter v0.5.0
Compiling pydantic-core v2.24.0 (/home/tkloczko/rpmbuild/BUILD/pydantic-core-2.24.0)
error: failed to run custom build command for `pydantic-core v2.24.0 (/home/tkloczko/rpmbuild/BUILD/pydantic-core-2.24.0)`
note: To improve backtraces for build dependencies, set the CARGO_PROFILE_RELEASE_BUILD_OVERRIDE_DEBUG=true environment variable to enable debug information generation.
Caused by:
process didn't exit successfully: `/home/tkloczko/rpmbuild/BUILD/pydantic-core-2.24.0/target/release/build/pydantic-core-176bfdeef7d000ae/build-script-build` (exit status: 101)
--- stdout
cargo:rustc-check-cfg=cfg(Py_LIMITED_API)
cargo:rustc-check-cfg=cfg(PyPy)
cargo:rustc-check-cfg=cfg(GraalPy)
cargo:rustc-check-cfg=cfg(py_sys_config, values("Py_DEBUG", "Py_REF_DEBUG", "Py_TRACE_REFS", "COUNT_ALLOCS"))
cargo:rustc-check-cfg=cfg(invalid_from_utf8_lint)
cargo:rustc-check-cfg=cfg(pyo3_disable_reference_pool)
cargo:rustc-check-cfg=cfg(pyo3_leak_on_drop_without_reference_pool)
cargo:rustc-check-cfg=cfg(diagnostic_namespace)
cargo:rustc-check-cfg=cfg(c_str_lit)
cargo:rustc-check-cfg=cfg(Py_3_7)
cargo:rustc-check-cfg=cfg(Py_3_8)
cargo:rustc-check-cfg=cfg(Py_3_9)
cargo:rustc-check-cfg=cfg(Py_3_10)
cargo:rustc-check-cfg=cfg(Py_3_11)
cargo:rustc-check-cfg=cfg(Py_3_12)
cargo:rustc-check-cfg=cfg(Py_3_13)
cargo:rustc-cfg=Py_3_6
cargo:rustc-cfg=Py_3_7
cargo:rustc-cfg=Py_3_8
cargo:rustc-cfg=Py_3_9
cargo:rustc-cfg=Py_3_10
cargo:rustc-check-cfg=cfg(has_coverage_attribute)
cargo:rustc-check-cfg=cfg(specified_profile_use)
cargo:rerun-if-changed=python/pydantic_core/core_schema.py
cargo:rerun-if-changed=generate_self_schema.py
--- stderr
Traceback (most recent call last):
File "/home/tkloczko/rpmbuild/BUILD/pydantic-core-2.24.0/generate_self_schema.py", line 247, in <module>
main()
File "/home/tkloczko/rpmbuild/BUILD/pydantic-core-2.24.0/generate_self_schema.py", line 217, in main
value = get_schema(s, definitions)
File "/home/tkloczko/rpmbuild/BUILD/pydantic-core-2.24.0/generate_self_schema.py", line 57, in get_schema
return type_dict_schema(obj, definitions)
File "/home/tkloczko/rpmbuild/BUILD/pydantic-core-2.24.0/generate_self_schema.py", line 156, in type_dict_schema
raise ValueError(f'Unknown Schema forward ref: {fr_arg}')
ValueError: Unknown Schema forward ref: list[CoreSchema]
thread 'main' panicked at build.rs:29:9:
generate_self_schema.py failed with exit status: 1
stack backtrace:
0: 0x560bdf04ef75 - std::backtrace_rs::backtrace::libunwind::trace::h5c85e557799ed486
at /builddir/build/BUILD/rust-1.81.0-build/rustc-1.81.0-src/library/std/src/../../backtrace/src/backtrace/libunwind.rs:116:5
1: 0x560bdf04ef75 - std::backtrace_rs::backtrace::trace_unsynchronized::ha97b107185df65bb
at /builddir/build/BUILD/rust-1.81.0-build/rustc-1.81.0-src/library/std/src/../../backtrace/src/backtrace/mod.rs:66:5
2: 0x560bdf04ef75 - std::sys::backtrace::_print_fmt::h490acf9e9b8c6eb2
at /builddir/build/BUILD/rust-1.81.0-build/rustc-1.81.0-src/library/std/src/sys/backtrace.rs:65:5
3: 0x560bdf04ef75 - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::h9c32407e5a23c650
at /builddir/build/BUILD/rust-1.81.0-build/rustc-1.81.0-src/library/std/src/sys/backtrace.rs:40:26
4: 0x560bdf06f17b - core::fmt::rt::Argument::fmt::hae324c745842212e
at /builddir/build/BUILD/rust-1.81.0-build/rustc-1.81.0-src/library/core/src/fmt/rt.rs:173:76
5: 0x560bdf06f17b - core::fmt::write::h8e3a6cb8df1f9a95
at /builddir/build/BUILD/rust-1.81.0-build/rustc-1.81.0-src/library/core/src/fmt/mod.rs:1182:21
6: 0x560bdf04ccef - std::io::Write::write_fmt::h83bcab37323a9399
at /builddir/build/BUILD/rust-1.81.0-build/rustc-1.81.0-src/library/std/src/io/mod.rs:1827:15
7: 0x560bdf0500c1 - std::sys::backtrace::BacktraceLock::print::hd3c35caa6032e632
at /builddir/build/BUILD/rust-1.81.0-build/rustc-1.81.0-src/library/std/src/sys/backtrace.rs:43:9
8: 0x560bdf0500c1 - std::panicking::default_hook::{{closure}}::hd3c6083514eb2656
at /builddir/build/BUILD/rust-1.81.0-build/rustc-1.81.0-src/library/std/src/panicking.rs:269:22
9: 0x560bdf04fd9c - std::panicking::default_hook::h94d20e9291e6eb42
at /builddir/build/BUILD/rust-1.81.0-build/rustc-1.81.0-src/library/std/src/panicking.rs:296:9
10: 0x560bdf050691 - std::panicking::rust_panic_with_hook::hfa25182080856bef
at /builddir/build/BUILD/rust-1.81.0-build/rustc-1.81.0-src/library/std/src/panicking.rs:800:13
11: 0x560bdf050587 - std::panicking::begin_panic_handler::{{closure}}::h2fc3fd5367175cd3
at /builddir/build/BUILD/rust-1.81.0-build/rustc-1.81.0-src/library/std/src/panicking.rs:674:13
12: 0x560bdf04f439 - std::sys::backtrace::__rust_end_short_backtrace::h877093daaa72bd28
at /builddir/build/BUILD/rust-1.81.0-build/rustc-1.81.0-src/library/std/src/sys/backtrace.rs:168:18
13: 0x560bdf050214 - rust_begin_unwind
at /builddir/build/BUILD/rust-1.81.0-build/rustc-1.81.0-src/library/std/src/panicking.rs:665:5
14: 0x560bdf017f33 - core::panicking::panic_fmt::hfc4c464a0d356173
at /builddir/build/BUILD/rust-1.81.0-build/rustc-1.81.0-src/library/core/src/panicking.rs:74:14
15: 0x560bdf019cd8 - build_script_build::generate_self_schema::hf9f929900624c562
at /home/tkloczko/rpmbuild/BUILD/pydantic-core-2.24.0/build.rs:29:9
16: 0x560bdf019cd8 - build_script_build::main::ha5db4a51bcd9603f
at /home/tkloczko/rpmbuild/BUILD/pydantic-core-2.24.0/build.rs:48:5
17: 0x560bdf018703 - core::ops::function::FnOnce::call_once::h4e11fd2c02563b95
at /builddir/build/BUILD/rust-1.81.0-build/rustc-1.81.0-src/library/core/src/ops/function.rs:250:5
18: 0x560bdf018703 - std::sys::backtrace::__rust_begin_short_backtrace::h0b129e115204002e
at /builddir/build/BUILD/rust-1.81.0-build/rustc-1.81.0-src/library/std/src/sys/backtrace.rs:152:18
19: 0x560bdf0186f9 - std::rt::lang_start::{{closure}}::hd691eceb39629f76
at /builddir/build/BUILD/rust-1.81.0-build/rustc-1.81.0-src/library/std/src/rt.rs:162:18
20: 0x560bdf0490f0 - core::ops::function::impls::<impl core::ops::function::FnOnce<A> for &F>::call_once::h90440e1dec31addc
at /builddir/build/BUILD/rust-1.81.0-build/rustc-1.81.0-src/library/core/src/ops/function.rs:284:13
21: 0x560bdf0490f0 - std::panicking::try::do_call::h864c0af700b810b6
at /builddir/build/BUILD/rust-1.81.0-build/rustc-1.81.0-src/library/std/src/panicking.rs:557:40
22: 0x560bdf0490f0 - std::panicking::try::h81dc1c4c7a744be2
at /builddir/build/BUILD/rust-1.81.0-build/rustc-1.81.0-src/library/std/src/panicking.rs:521:19
23: 0x560bdf0490f0 - std::panic::catch_unwind::hce4947710c9959a6
at /builddir/build/BUILD/rust-1.81.0-build/rustc-1.81.0-src/library/std/src/panic.rs:350:14
24: 0x560bdf0490f0 - std::rt::lang_start_internal::{{closure}}::hb8ca788eb716154b
at /builddir/build/BUILD/rust-1.81.0-build/rustc-1.81.0-src/library/std/src/rt.rs:141:48
25: 0x560bdf0490f0 - std::panicking::try::do_call::hbe20672d94e23c41
at /builddir/build/BUILD/rust-1.81.0-build/rustc-1.81.0-src/library/std/src/panicking.rs:557:40
26: 0x560bdf0490f0 - std::panicking::try::h08906107fe8c4aae
at /builddir/build/BUILD/rust-1.81.0-build/rustc-1.81.0-src/library/std/src/panicking.rs:521:19
27: 0x560bdf0490f0 - std::panic::catch_unwind::h20cf014a4ed35f8b
at /builddir/build/BUILD/rust-1.81.0-build/rustc-1.81.0-src/library/std/src/panic.rs:350:14
28: 0x560bdf0490f0 - std::rt::lang_start_internal::he74de233149dbe8b
at /builddir/build/BUILD/rust-1.81.0-build/rustc-1.81.0-src/library/std/src/rt.rs:141:20
29: 0x560bdf019e3f - main
30: 0x7f6aa2e461c8 - __libc_start_call_main
31: 0x7f6aa2e4628b - __libc_start_main@GLIBC_2.2.5
32: 0x560bdf018625 - _start
33: 0x0 - <unknown>
warning: build failed, waiting for other jobs to finish...
💥 maturin failed
Caused by: Failed to build a native library through cargo
Caused by: Cargo build finished with "exit status: 101": `env -u CARGO PYO3_ENVIRONMENT_SIGNATURE="cpython-3.10-64bit" PYO3_PYTHON="/usr/bin/python3" PYTHON_SYS_EXECUTABLE="/usr/bin/python3" "cargo" "rustc" "--features" "pyo3/extension-module" "--message-format" "json-render-diagnostics" "--manifest-path" "/home/tkloczko/rpmbuild/BUILD/pydantic-core-2.24.0/Cargo.toml" "--release" "--lib" "--crate-type" "cdylib"`
Error: command ['maturin', 'pep517', 'build-wheel', '-i', '/usr/bin/python3', '--compatibility', 'off'] returned non-zero exit status 1
ERROR Backend subprocess exited when trying to invoke build_wheel
``` | closed | 2024-09-21T18:23:39Z | 2024-10-22T13:24:07Z | https://github.com/pydantic/pydantic-core/issues/1462 | [] | kloczek | 5 |
igorbenav/fastcrud | pydantic | 60 | IN and NOT IN filter | Thanks for this awesome package.
**Is your feature request related to a problem? Please describe.**
It should be possible to filter records using the **IN** and **NOT IN** operators in the **get** and **get_multi** functions.
**Describe the solution you'd like**
```python
db_asset = await crud_users.get(
db=db,
schema_to_select=User,
return_as_model=True,
id=id,
filter=User.id.not_in(ids),
is_deleted=False,
)
```
It would be great to utilize logical operators such as `and_` and `or_`.
**Describe alternatives you've considered**
Currently, one must rely on SQLAlchemy methods to execute such filtering operations.
```python
smt = select(User).where(User.reference_id == ref_id).filter(User.id.not_in(ids))
```
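A filter shape that would match the double-underscore suffix style FastCRUD already uses for comparison filters (e.g. `age__gt`, if I'm reading the docs right) could look like this — purely hypothetical, not an implemented API:
```python
# Hypothetical __in / __not_in suffixes, mirroring the existing comparison suffixes:
users = await crud_users.get_multi(
    db=db,
    id__in=ids,        # WHERE id IN (...)
    is_deleted=False,
)
excluded = await crud_users.get_multi(
    db=db,
    id__not_in=ids,    # WHERE id NOT IN (...)
)
```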
| closed | 2024-04-24T17:36:27Z | 2024-05-07T03:16:35Z | https://github.com/igorbenav/fastcrud/issues/60 | [
"enhancement",
"FastCRUD Methods"
] | FCMHUB | 2 |
junyanz/pytorch-CycleGAN-and-pix2pix | pytorch | 724 | Stuck on "WARNING:root:Setting up a new session" | I downloaded the `facades` dataset. I then ran `python train.py --dataroot ./datasets/facades --name facades_pix2pix --model pix2pix --direction BtoA`, but I'm stuck at `WARNING:root:Setting up a new session`. Even after a few hours, it stays there and doesn't seem to progress. Why is this?
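For what it's worth, that warning is printed by the visdom client when it opens a connection to a display server, so the hang may be on the visualization side rather than in training itself — a guess based on the message text, plus two things worth trying (the flag behavior is assumed from similar versions of the repo):
```python
# In a separate terminal, start the visdom display server first:
#   python -m visdom.server
# Or try disabling the visdom display when launching training:
#   python train.py --dataroot ./datasets/facades --name facades_pix2pix \
#       --model pix2pix --direction BtoA --display_id 0
```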
| closed | 2019-08-06T04:45:16Z | 2019-09-21T22:33:10Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/724 | [] | za13 | 1 |
MaartenGr/BERTopic | nlp | 2,211 | There are Chinese characters in my project, but after calling the visualize_document_datamap() method, the characters appear as garbled text. | ### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Describe the bug
fig = topic_model.visualize_document_datamap(
sentences,
topics=topics,
reduced_embeddings=reduced_embeddings,
#custom_labels=custom_labels,
title='文档和主题的分布',
sub_title='基于 BERTopic 的主题建模',
width=1200,
height=1200
)
Even after setting `plt.rcParams['font.sans-serif'] = ['SimHei']`, I still can't see the characters.
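Registering a CJK font file directly with Matplotlib's font manager before plotting is worth trying when rcParams alone doesn't take effect — a sketch (the font path is an example and must point at a CJK font installed on your system):
```python
from matplotlib import font_manager
import matplotlib.pyplot as plt

font_path = "/usr/share/fonts/truetype/simhei/SimHei.ttf"  # example path; adjust
font_manager.fontManager.addfont(font_path)
plt.rcParams["font.family"] = font_manager.FontProperties(fname=font_path).get_name()
plt.rcParams["axes.unicode_minus"] = False  # keeps minus signs rendering with CJK fonts
```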
### Reproduction
```python
from bertopic import BERTopic
# with the reduced embeddings
reduced_embeddings = UMAP(n_neighbors=15, n_components=2, min_dist=0.0, metric='cosine').fit_transform(embeddings)
fig = topic_model.visualize_document_datamap(
sentences,
topics=topics,
reduced_embeddings=reduced_embeddings,
#custom_labels=custom_labels,
title='文档和主题的分布',
sub_title='基于 BERTopic 的主题建模',
width=1200,
height=1200
)
```
### BERTopic Version
0.16.4 | open | 2024-11-12T08:08:52Z | 2024-12-06T21:25:29Z | https://github.com/MaartenGr/BERTopic/issues/2211 | [
"bug"
] | superseanyoung | 4 |
jacobgil/pytorch-grad-cam | computer-vision | 18 | Support of Batch Input | Hi,
it appears that the original code does not directly support batch input. I forked the repo and made a simple modification (you may discard the part that loads my own models):
https://github.com/CielAl/pytorch-grad-cam_batch/blob/master/grad_cam.py#L114
Hope this might be useful in case others try to apply your implementation :))) | closed | 2019-03-31T08:07:54Z | 2021-05-01T17:38:05Z | https://github.com/jacobgil/pytorch-grad-cam/issues/18 | [] | CielAl | 4 |
brightmart/text_classification | tensorflow | 111 | Issue with cnn_multiple_layers in a02 | In the multi-layer CNN function cnn_multiple_layers in a02's p7_TextCNN_model.py: because the first conv layer's padding is SAME, the convolution preserves the feature-map size, so its output has shape [batch_size, sequence_length, embedding_size, num_filters]. The reshape on line 136, at the start of the second conv layer, writes 1 as the last dimension, which turns the first dimension into batch_size * embedding_size and makes it easy to run out of memory. The reshape can in fact be removed; after fixing this, memory usage returns to normal.
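To see the blow-up the reshape causes, here is a NumPy sketch of the shapes (the concrete sizes are illustrative, and the exact reshape arguments in the repo may differ slightly):
```python
import numpy as np

batch, seq_len, emb, num_filters = 32, 100, 128, 64

# Output of the first SAME-padded conv keeps the full feature map:
conv1_out = np.zeros((batch, seq_len, emb, num_filters), dtype=np.float32)

# A reshape that forces the trailing dim to 1 silently folds
# embedding_size into the batch dimension:
reshaped = conv1_out.reshape(-1, seq_len, num_filters, 1)
print(reshaped.shape)  # (4096, 100, 64, 1); 4096 == batch * embedding_size
```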
Also, personally I don't think the first layer needs SAME padding. This isn't like an image: horizontally each row is a complete word vector, so the padded values have no real meaning. The first layer can simply use VALID padding, making the horizontal width 1, so subsequent convolutions only operate vertically across words. | closed | 2019-03-11T02:50:15Z | 2019-03-14T02:15:49Z | https://github.com/brightmart/text_classification/issues/111 | [] | snaillp | 1 |
Gozargah/Marzban | api | 893 | IP limit | Isn't there a way to define an IP limit in Marzban?
There are a couple of scripts, but they are very buggy and their author is unresponsive. | closed | 2024-03-27T07:27:43Z | 2024-03-27T14:15:22Z | https://github.com/Gozargah/Marzban/issues/893 | [
"Duplicate",
"Invalid"
] | hossein2 | 1 |
harry0703/MoneyPrinterTurbo | automation | 586 | Bro, could you integrate support for local SD image generation? | ### Does a similar feature request already exist?
- [x] I have searched the existing feature requests
### Pain point
Bro, could you integrate this project to support SD image generation? https://github.com/Anning01/ComicTweets
### Proposed solution
Bro, could you integrate this project to support SD image generation? https://github.com/Anning01/ComicTweets
### Useful resources
_No response_
### Other information
_No response_ | open | 2025-02-11T05:08:44Z | 2025-02-19T13:54:57Z | https://github.com/harry0703/MoneyPrinterTurbo/issues/586 | [
"enhancement"
] | Weststreet | 1 |
serengil/deepface | machine-learning | 739 | despite cuda drivers being installed, I'm seeing this issue | [2023-05-01 12:17:25 +0000] [8] [INFO] Booting worker with pid: 8
2023-05-01 12:17:26.167599: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2023-05-01 12:17:26.168772: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
2023-05-01 12:17:26.189715: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
2023-05-01 12:17:26.189953: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI AVX512_BF16 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-05-01 12:17:26.572682: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
Directory /root /.deepface created
Directory /root /.deepface/weights created
| closed | 2023-05-01T12:20:05Z | 2023-05-01T14:15:56Z | https://github.com/serengil/deepface/issues/739 | [
"dependencies"
] | ankit-g | 1 |
ploomber/ploomber | jupyter | 936 | improving the onboarding tutorial - your first pipeline | We want to improve the onboarding experience of the basic tutorials on binder.
## general observations
* the objective of the initial tutorial should be to convince people to give ploomber a try. the current version tries to teach them ploomber, we want to change that. our value proposition should be clear: ploomber allows you to run more experiments faster
* since our purpose is to convince, there is no need to show low-level details like the pipeline.yaml. We can mention it and add a link to open it, but it should be optional
* there is no context on the dataset. the example analyzes covid data; we should make a few comments on it during the example; telling a story will make it more compelling
* we should simplify and prettify the output HTML reports. hide code, make the plots prettier (use ggplot [style](https://matplotlib.org/stable/gallery/style_sheets/ggplot.html)), larger fonts, etc.
* I think (?) the sample dataset has data per country, we could select a single country (or maybe continent if there's a continent column), then, at the end of the tutorial, show that they can go and change the initial task, re-run the pipeline and re-generate all outputs for the new country. we can implement this with a pipeline parameter so the outputs are stored in different folders, and the user can switch the parameter to generate the outputs for the new country
* the pipeline should generate both ipynb and HTML outputs. at the end of the tutorial, [we can use this](https://sklearn-evaluation.readthedocs.io/en/latest/user_guide/NotebookCollection.html) to explore the outputs from the ipynb files
* instead of using shell commands (e.g. `ploomber build`), we should use the Python API, because it gives a better experience when executed in a notebook: e.g. shell commands get stuck while running, while the Python API shows a progress bar
* I enabled on binder the option to open py files as notebooks with a single click; we should update the tutorial since it still says you need to right-click
## libraries
I came across two projects that can help us improve the onboarding experience on binder.
[ipylab](https://github.com/jtpio/ipylab) allows interacting with the frontend from Python. This can effectively demonstrate what Ploomber is doing as the user progresses in a specific tutorial. For example, in the introduction tutorial (that users might run from Binder), we could open the data preview after we run the pipeline, the HTML reports, etc.
[jupyterlab-tour](https://github.com/jupyterlab-contrib/jupyterlab-tour) allows creating a "tour" on JupyterLab by highlighting specific areas in the user interface. I'm unsure how granular it is, since it looks like it only allows highlighting entire UI sections, so I'm not sure how useful this could be.
## share your feedback while searching the docs | closed | 2022-07-23T03:54:32Z | 2022-08-18T12:00:28Z | https://github.com/ploomber/ploomber/issues/936 | [
"documentation",
"stash"
] | edublancas | 1 |
explosion/spaCy | data-science | 13,407 | Import broken python 3.9 |
## How to reproduce the behaviour
```import spacy```
## Your Environment
* Operating System: Windows 10.0.19045
* Python Version Used: 3.9.13
* spaCy Version Used: 3.7.4
```
Traceback (most recent call last):
File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\runpy.py", line 87, in _run_code
exec(code, run_globals)
File "C:\Users\User\AppData\Local\Programs\Python\Python39\Scripts\spacy.exe\__main__.py", line 4, in <module>
File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\spacy\__init__.py", line 13, in <module>
from . import pipeline # noqa: F401
File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\spacy\pipeline\__init__.py", line 1, in <module>
from .attributeruler import AttributeRuler
File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\spacy\pipeline\attributeruler.py", line 8, in <module>
from ..language import Language
File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\spacy\language.py", line 43, in <module>
from .pipe_analysis import analyze_pipes, print_pipe_analysis, validate_attrs
File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\spacy\pipe_analysis.py", line 6, in <module>
from .tokens import Doc, Span, Token
File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\spacy\tokens\__init__.py", line 1, in <module>
from ._serialize import DocBin
File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\spacy\tokens\_serialize.py", line 14, in <module>
from ..vocab import Vocab
File "spacy\vocab.pyx", line 1, in init spacy.vocab
File "spacy\tokens\doc.pyx", line 49, in init spacy.tokens.doc
File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\spacy\schemas.py", line 287, in <module>
class TokenPattern(BaseModel):
File "pydantic\main.py", line 299, in pydantic.main.ModelMetaclass.__new__
File "pydantic\fields.py", line 411, in pydantic.fields.ModelField.infer
File "pydantic\fields.py", line 342, in pydantic.fields.ModelField.__init__
File "pydantic\fields.py", line 451, in pydantic.fields.ModelField.prepare
File "pydantic\fields.py", line 545, in pydantic.fields.ModelField._type_analysis
File "pydantic\fields.py", line 550, in pydantic.fields.ModelField._type_analysis
File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\typing.py", line 852, in __subclasscheck__
return issubclass(cls, self.__origin__)
TypeError: issubclass() arg 1 must be a class
``` | closed | 2024-04-04T03:27:58Z | 2024-04-04T08:43:54Z | https://github.com/explosion/spaCy/issues/13407 | [
"install"
] | SilverDew-sg | 1 |
AUTOMATIC1111/stable-diffusion-webui | pytorch | 15,526 | [Bug]: v1.9.0 GFPGAN and CodeFormer not work | ### Checklist
- [X] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [X] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
After updating to version 1.9, face restoration broke.
This applies to both GFPGAN and CodeFormer:
- If the value is 0 or 1, generation completes successfully with the specified setting.
- If the value is set above 0 but below 1, the following error appears:
ValueError: images do not match
*** Error completing request
*** Arguments: ('task(23eulwtbdi5llkk)', 0.0, <PIL.Image.Image image mode=RGBA size=1024x1150 at 0x2891D69F8B0>, None, '', '', True, True, 0.0, 4, 0.0, 512, 512, True, 'None', 'None', 0, True, 0.696, False, 1, 0, False, 0.5, 0.2, False, 0.9, 0.15, 0.5, False, False, 384, 768, 4096, 409600, 'Maximize area', 0.1, False, ['Horizontal'], False, ['Deepbooru']) {}
Traceback (most recent call last):
File "W:\stablediffusion v2\webui\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
File "W:\stablediffusion v2\webui\modules\call_queue.py", line 36, in f
res = func(*args, **kwargs)
File "W:\stablediffusion v2\webui\modules\postprocessing.py", line 131, in run_postprocessing_webui
return run_postprocessing(*args, **kwargs)
File "W:\stablediffusion v2\webui\modules\postprocessing.py", line 71, in run_postprocessing
scripts.scripts_postproc.run(initial_pp, args)
File "W:\stablediffusion v2\webui\modules\scripts_postprocessing.py", line 198, in run
script.process(single_image, **process_args)
File "W:\stablediffusion v2\webui\scripts\postprocessing_gfpgan.py", line 29, in process
res = Image.blend(pp.image, res, gfpgan_visibility)
File "W:\stablediffusion v2\system\python\lib\site-packages\PIL\Image.py", line 3340, in blend
return im1._new(core.blend(im1.im, im2.im, alpha))
ValueError: images do not match
---
### Steps to reproduce the problem
1. Go to Extra
2. Drop image
3. Activate GFPGAN or CodeFormer
4. Set the parameter to 1
5. Click on "Generate"
6. Set the parameter to 0
7. Click on "Generate"
8. Set the parameter to a value between 0 and 1
9. Click on "Generate"
### What should have happened?
The sensitivity of facial reconstruction should change.
### What browsers do you use to access the UI ?
Mozilla Firefox, Google Chrome
### Sysinfo
[sysinfo-2024-04-15-12-29.json](https://github.com/AUTOMATIC1111/stable-diffusion-webui/files/14979457/sysinfo-2024-04-15-12-29.json)
### Console logs
```Shell
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.9.0
Commit hash: adadb4e3c7382bf3e4f7519126cd6c70f4f8557b
Installing requirements
[Auto-Photoshop-SD] Attempting auto-update...
[Auto-Photoshop-SD] switch branch to extension branch.
checkout_result: Your branch is up to date with 'origin/master'.
[Auto-Photoshop-SD] Current Branch.
branch_result: * master
[Auto-Photoshop-SD] Fetch upstream.
fetch_result:
[Auto-Photoshop-SD] Pull upstream.
pull_result: Already up to date.
All models for DeOldify are already downloaded.
Installing yt-dlp for DeOldify extension.
Installing yt-dlp
If submitting an issue on github, please provide the full startup log for debugging purposes.
Initializing Dreambooth
Dreambooth revision: 45a12fe5950bf93205b6ef2b7511eb94052a241f
Checking xformers...
Checking bitsandbytes...
Checking bitsandbytes (ALL!)
Checking Dreambooth requirements...
Installed version of bitsandbytes: 0.43.0
[Dreambooth] bitsandbytes v0.43.0 is already installed.
Installed version of accelerate: 0.21.0
[Dreambooth] accelerate v0.21.0 is already installed.
Installed version of dadaptation: 3.2
[Dreambooth] dadaptation v3.2 is already installed.
Installed version of diffusers: 0.27.2
[Dreambooth] diffusers v0.25.0 is already installed.
Installed version of discord-webhook: 1.3.0
[Dreambooth] discord-webhook v1.3.0 is already installed.
Installed version of fastapi: 0.94.0
[Dreambooth] fastapi is already installed.
Installed version of gitpython: 3.1.32
[Dreambooth] gitpython v3.1.40 is not installed.
Successfully installed gitpython-3.1.43
Installed version of pytorch_optimizer: 2.12.0
[Dreambooth] pytorch_optimizer v2.12.0 is already installed.
Installed version of Pillow: 9.5.0
[Dreambooth] Pillow is already installed.
Installed version of tqdm: 4.66.2
[Dreambooth] tqdm is already installed.
Installed version of tomesd: 0.1.3
[Dreambooth] tomesd v0.1.2 is already installed.
Installed version of tensorboard: 2.13.0
[Dreambooth] tensorboard v2.13.0 is already installed.
[+] torch version 2.1.2+cu121 installed.
[+] torchvision version 0.16.2+cu121 installed.
[+] accelerate version 0.21.0 installed.
[+] diffusers version 0.27.2 installed.
[+] bitsandbytes version 0.43.0 installed.
[+] xformers version 0.0.23.post1 installed.
Launching Web UI with arguments:
No module 'xformers'. Proceeding without it.
*** Error loading script: img2img.py
Traceback (most recent call last):
File "W:\stablediffusion v2\webui\modules\scripts.py", line 508, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "W:\stablediffusion v2\webui\modules\script_loading.py", line 14, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "W:\stablediffusion v2\webui\scripts\img2img.py", line 16, in <module>
from imwatermark import WatermarkEncoder
ModuleNotFoundError: No module named 'imwatermark'
---
*** Error loading script: txt2img.py
Traceback (most recent call last):
File "W:\stablediffusion v2\webui\modules\scripts.py", line 508, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "W:\stablediffusion v2\webui\modules\script_loading.py", line 14, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "W:\stablediffusion v2\webui\scripts\txt2img.py", line 14, in <module>
from imwatermark import WatermarkEncoder
ModuleNotFoundError: No module named 'imwatermark'
---
python_server_full_path: W:\stablediffusion v2\webui\extensions\Auto-Photoshop-StableDiffusion-Plugin\server/python_server
[-] ADetailer initialized. version: 24.4.1, num models: 10
*** Error loading script: main.py
Traceback (most recent call last):
File "W:\stablediffusion v2\webui\modules\scripts.py", line 508, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "W:\stablediffusion v2\webui\modules\script_loading.py", line 14, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "W:\stablediffusion v2\webui\extensions\openpose-editor\scripts\main.py", line 14, in <module>
from basicsr.utils.download_util import load_file_from_url
ModuleNotFoundError: No module named 'basicsr'
---
CivitAI Browser+: Aria2 RPC started
ControlNet preprocessor location: W:\stablediffusion v2\webui\extensions\sd-webui-controlnet\annotator\downloads
2024-04-15 15:13:39,359 - ControlNet - INFO - ControlNet v1.1.443
2024-04-15 15:13:39,516 - ControlNet - INFO - ControlNet v1.1.443
[sdwi2iextender] Developper warning:
[sdwi2iextender] ./modules/img2img.py is being recompiled at run time with a patch. Your debugger will not work in this file.
[sdwi2iextender] If you need debug tools in this file, disable all extensions that use the sdwi2iextender library.
[sdwi2iextender] This patch is temporary and will be removed when v1.9 will be released.
Loading weights [dcd690123c] from W:\stablediffusion v2\webui\models\Stable-diffusion\Stable SR\Models\v2-1_768-ema-pruned.safetensors
[LyCORIS]-WARNING: LyCORIS legacy extension is now loaded, if you don't expext to see this message, please disable this extension.
2024-04-15 15:13:43,634 - ControlNet - INFO - ControlNet UI callback registered.
W:\stablediffusion v2\webui\extensions\model_preset_manager\scripts\main.py:446: GradioDeprecationWarning: 'scale' value should be an integer. Using 0.1 will cause issues.
with gr.Column(min_width=100, scale = 0.1):
W:\stablediffusion v2\webui\extensions\model_preset_manager\scripts\main.py:463: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
model_generation_data = gr.Textbox(label = model_generation_data_label_text(), value = "", lines = 3, elem_id = "def_model_gen_data_textbox").style(show_copy_button=True)
W:\stablediffusion v2\webui\extensions\model_preset_manager\scripts\main.py:466: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
triggerWords = gr.CheckboxGroup([], multiselect=True, label="Trigger Words", interactive = True).style(container=True, item_container=True)
W:\stablediffusion v2\webui\extensions\model_preset_manager\scripts\main.py:466: GradioDeprecationWarning: The `item_container` parameter is deprecated.
triggerWords = gr.CheckboxGroup([], multiselect=True, label="Trigger Words", interactive = True).style(container=True, item_container=True)
W:\stablediffusion v2\webui\extensions\model_preset_manager\scripts\main.py:493: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
output_textbox = gr.Textbox(interactive=False, label="Output").style(show_copy_button=True)
W:\stablediffusion v2\webui\modules\gradio_extensons.py:25: GradioDeprecationWarning: `height` is deprecated in `Interface()`, please use it within `launch()` instead.
res = original_IOComponent_init(self, *args, **kwargs)
W:\stablediffusion v2\webui\extensions\stable-diffusion-webui-Prompt_Generator\scripts\prompt_generator.py:229: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
row.style(equal_height=True)
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
COMMANDLINE_ARGS does not contain --api, API won't be mounted.
Startup time: 101.2s (prepare environment: 80.5s, import torch: 5.7s, import gradio: 1.0s, setup paths: 1.4s, initialize shared: 0.2s, other imports: 1.5s, load scripts: 7.9s, create ui: 1.6s, gradio launch: 1.1s).
Creating model from config: W:\stablediffusion v2\webui\repositories\stable-diffusion-stability-ai\configs\stable-diffusion\v2-inference-v.yaml
Loading VAE weights specified in settings: W:\stablediffusion v2\webui\models\VAE\vqgan_cfw_00011_vae_only.ckpt
Applying attention optimization: Doggettx... done.
Model loaded in 8.5s (load weights from disk: 0.1s, find config: 3.0s, create model: 0.1s, apply weights to model: 4.1s, load VAE: 0.4s, load textual inversion embeddings: 0.1s, calculate empty prompt: 0.5s).
Advanced elements visible: False
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.9.0
Commit hash: adadb4e3c7382bf3e4f7519126cd6c70f4f8557b
Installing requirements
Launching Web UI with arguments:
No module 'xformers'. Proceeding without it.
*** Error loading script: img2img.py
Traceback (most recent call last):
File "W:\stablediffusion v2\webui\modules\scripts.py", line 508, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "W:\stablediffusion v2\webui\modules\script_loading.py", line 14, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "W:\stablediffusion v2\webui\scripts\img2img.py", line 16, in <module>
from imwatermark import WatermarkEncoder
ModuleNotFoundError: No module named 'imwatermark'
---
*** Error loading script: txt2img.py
Traceback (most recent call last):
File "W:\stablediffusion v2\webui\modules\scripts.py", line 508, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "W:\stablediffusion v2\webui\modules\script_loading.py", line 14, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "W:\stablediffusion v2\webui\scripts\txt2img.py", line 14, in <module>
from imwatermark import WatermarkEncoder
ModuleNotFoundError: No module named 'imwatermark'
---
Loading weights [dcd690123c] from W:\stablediffusion v2\webui\models\Stable-diffusion\Stable SR\Models\v2-1_768-ema-pruned.safetensors
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 18.4s (prepare environment: 6.7s, import torch: 5.7s, import gradio: 1.0s, setup paths: 1.4s, initialize shared: 0.1s, other imports: 0.6s, load scripts: 1.8s, create ui: 0.7s, gradio launch: 0.1s).
Creating model from config: W:\stablediffusion v2\webui\repositories\stable-diffusion-stability-ai\configs\stable-diffusion\v2-inference-v.yaml
Loading VAE weights specified in settings: W:\stablediffusion v2\webui\models\VAE\vqgan_cfw_00011_vae_only.ckpt
Applying attention optimization: Doggettx... done.
Model loaded in 4.7s (load weights from disk: 0.1s, find config: 1.9s, apply weights to model: 1.9s, load VAE: 0.4s, calculate empty prompt: 0.1s).
*** Error completing request
*** Arguments: ('task(23eulwtbdi5llkk)', 0.0, <PIL.Image.Image image mode=RGBA size=1024x1150 at 0x2891D69F8B0>, None, '', '', True, True, 0.0, 4, 0.0, 512, 512, True, 'None', 'None', 0, True, 0.696, False, 1, 0, False, 0.5, 0.2, False, 0.9, 0.15, 0.5, False, False, 384, 768, 4096, 409600, 'Maximize area', 0.1, False, ['Horizontal'], False, ['Deepbooru']) {}
Traceback (most recent call last):
File "W:\stablediffusion v2\webui\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
File "W:\stablediffusion v2\webui\modules\call_queue.py", line 36, in f
res = func(*args, **kwargs)
File "W:\stablediffusion v2\webui\modules\postprocessing.py", line 131, in run_postprocessing_webui
return run_postprocessing(*args, **kwargs)
File "W:\stablediffusion v2\webui\modules\postprocessing.py", line 71, in run_postprocessing
scripts.scripts_postproc.run(initial_pp, args)
File "W:\stablediffusion v2\webui\modules\scripts_postprocessing.py", line 198, in run
script.process(single_image, **process_args)
File "W:\stablediffusion v2\webui\scripts\postprocessing_gfpgan.py", line 29, in process
res = Image.blend(pp.image, res, gfpgan_visibility)
File "W:\stablediffusion v2\system\python\lib\site-packages\PIL\Image.py", line 3340, in blend
return im1._new(core.blend(im1.im, im2.im, alpha))
ValueError: images do not match
---
Loading model Deliberate\Deliberate_v5.safetensors (2 out of 2)
Calculating sha256 for W:\stablediffusion v2\webui\models\Stable-diffusion\Deliberate\Deliberate_v5.safetensors: 636fe404e3fd0c612ea3f2bd5d6f66fe8f005c026fac4fb54ee5c811ecd0da2c
Loading weights [636fe404e3] from W:\stablediffusion v2\webui\models\Stable-diffusion\Deliberate\Deliberate_v5.safetensors
Creating model from config: W:\stablediffusion v2\webui\configs\v1-inference.yaml
Applying attention optimization: Doggettx... done.
Model loaded in 8.7s (calculate hash: 7.1s, load config: 0.2s, create model: 0.3s, apply weights to model: 0.8s, calculate empty prompt: 0.1s).
*** Error completing request
*** Arguments: ('task(xq5y56sd551hk5t)', 0.0, <PIL.Image.Image image mode=RGBA size=1024x1150 at 0x289123D0910>, None, '', '', True, True, 0.0, 4, 0.0, 512, 512, True, 'None', 'None', 0, True, 0.696, False, 1, 0, False, 0.5, 0.2, False, 0.9, 0.15, 0.5, False, False, 384, 768, 4096, 409600, 'Maximize area', 0.1, False, ['Horizontal'], False, ['Deepbooru']) {}
Traceback (most recent call last):
File "W:\stablediffusion v2\webui\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
File "W:\stablediffusion v2\webui\modules\call_queue.py", line 36, in f
res = func(*args, **kwargs)
File "W:\stablediffusion v2\webui\modules\postprocessing.py", line 131, in run_postprocessing_webui
return run_postprocessing(*args, **kwargs)
File "W:\stablediffusion v2\webui\modules\postprocessing.py", line 71, in run_postprocessing
scripts.scripts_postproc.run(initial_pp, args)
File "W:\stablediffusion v2\webui\modules\scripts_postprocessing.py", line 198, in run
script.process(single_image, **process_args)
File "W:\stablediffusion v2\webui\scripts\postprocessing_gfpgan.py", line 29, in process
res = Image.blend(pp.image, res, gfpgan_visibility)
File "W:\stablediffusion v2\system\python\lib\site-packages\PIL\Image.py", line 3340, in blend
return im1._new(core.blend(im1.im, im2.im, alpha))
ValueError: images do not match
---
*** Error completing request
*** Arguments: ('task(3mdy8dmj69j1luo)', 0.0, <PIL.Image.Image image mode=RGBA size=1024x1150 at 0x289123D2200>, None, '', '', True, True, 0.0, 4, 0.0, 512, 512, True, 'None', 'None', 0, True, 0.345, False, 1, 0, False, 0.5, 0.2, False, 0.9, 0.15, 0.5, False, False, 384, 768, 4096, 409600, 'Maximize area', 0.1, False, ['Horizontal'], False, ['Deepbooru']) {}
Traceback (most recent call last):
File "W:\stablediffusion v2\webui\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
File "W:\stablediffusion v2\webui\modules\call_queue.py", line 36, in f
res = func(*args, **kwargs)
File "W:\stablediffusion v2\webui\modules\postprocessing.py", line 131, in run_postprocessing_webui
return run_postprocessing(*args, **kwargs)
File "W:\stablediffusion v2\webui\modules\postprocessing.py", line 71, in run_postprocessing
scripts.scripts_postproc.run(initial_pp, args)
File "W:\stablediffusion v2\webui\modules\scripts_postprocessing.py", line 198, in run
script.process(single_image, **process_args)
File "W:\stablediffusion v2\webui\scripts\postprocessing_gfpgan.py", line 29, in process
res = Image.blend(pp.image, res, gfpgan_visibility)
File "W:\stablediffusion v2\system\python\lib\site-packages\PIL\Image.py", line 3340, in blend
return im1._new(core.blend(im1.im, im2.im, alpha))
ValueError: images do not match
---
*** Error completing request
*** Arguments: ('task(lv65ayupoqo2u2a)', 0.0, <PIL.Image.Image image mode=RGBA size=1024x1150 at 0x289123E1810>, None, '', '', True, True, 0.0, 4, 0.0, 512, 512, True, 'None', 'None', 0, True, 0.036, False, 1, 0, False, 0.5, 0.2, False, 0.9, 0.15, 0.5, False, False, 384, 768, 4096, 409600, 'Maximize area', 0.1, False, ['Horizontal'], False, ['Deepbooru']) {}
Traceback (most recent call last):
File "W:\stablediffusion v2\webui\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
File "W:\stablediffusion v2\webui\modules\call_queue.py", line 36, in f
res = func(*args, **kwargs)
File "W:\stablediffusion v2\webui\modules\postprocessing.py", line 131, in run_postprocessing_webui
return run_postprocessing(*args, **kwargs)
File "W:\stablediffusion v2\webui\modules\postprocessing.py", line 71, in run_postprocessing
scripts.scripts_postproc.run(initial_pp, args)
File "W:\stablediffusion v2\webui\modules\scripts_postprocessing.py", line 198, in run
script.process(single_image, **process_args)
File "W:\stablediffusion v2\webui\scripts\postprocessing_gfpgan.py", line 29, in process
res = Image.blend(pp.image, res, gfpgan_visibility)
File "W:\stablediffusion v2\system\python\lib\site-packages\PIL\Image.py", line 3340, in blend
return im1._new(core.blend(im1.im, im2.im, alpha))
ValueError: images do not match
---
```
### Additional information
The WebUI has only received automatic updates, up to the latest version to date, 1.9. | open | 2024-04-15T13:13:48Z | 2024-06-21T20:15:57Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/15526 | [
"bug-report"
] | PawAlldeller | 3 |
huggingface/datasets | tensorflow | 6,740 | Support for loading geotiff files as a part of the ImageFolder | ### Feature request
Request to add rasterio support for loading GeoTIFFs as part of ImageFolder, instead of using PIL.
### Motivation
As of now, there are many datasets on the Hugging Face Hub that are focused on, or come from, remote sensing. The current ImageFolder (if I have understood correctly) uses PIL. This is not really optimal, because these datasets mostly have images with many channels plus additional metadata, and loading them with PIL loses all of that unless we provide a custom script. Hence, maybe an API could be added to handle this in a common way?
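For reference, reading a GeoTIFF with rasterio keeps every band plus the geo-metadata that a PIL load would drop — a sketch (`scene.tif` is a placeholder path):
```python
import rasterio

with rasterio.open("scene.tif") as src:  # placeholder path
    bands = src.read()                   # ndarray of shape (bands, height, width)
    print(bands.shape, src.count)        # all channels preserved
    print(src.crs, src.transform)        # geospatial metadata PIL cannot carry
```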
### Your contribution
If the issue is accepted, I can contribute the code, because I would like to have it automated and generalised. | closed | 2024-03-18T20:00:39Z | 2024-03-27T18:19:48Z | https://github.com/huggingface/datasets/issues/6740 | [
"enhancement"
] | sunny1401 | 0 |
tqdm/tqdm | pandas | 649 | tqdm_notebook bar malformed when bar_format is specified. | ### System Info
```sh
>>> import tqdm, sys
>>> print(tqdm.__version__, sys.version, sys.platform)
4.28.1 3.6.7 |Anaconda, Inc.| (default, Oct 23 2018, 14:01:38)
[GCC 4.2.1 Compatible Clang 4.0.1 (tags/RELEASE_401/final)] darwin
```
### Issue:
With the bar format below, the progress bars are rendered correctly in the terminal but incorrectly in a Jupyter notebook (instead of the bar filling up, another bar is constructed to its right).
<img width="945" alt="screen shot 2018-12-05 at 11 59 51 am" src="https://user-images.githubusercontent.com/1762463/49540671-8590d380-f885-11e8-8b7c-649d47e94aa2.png">
| closed | 2018-12-05T20:45:20Z | 2018-12-11T11:55:53Z | https://github.com/tqdm/tqdm/issues/649 | [
"duplicate 🗐",
"to-fix ⌛",
"p2-bug-warning ⚠",
"submodule-notebook 📓"
] | neerajprad | 4 |
oegedijk/explainerdashboard | plotly | 69 | random uuid with seed | Is there a reason why you are using UUIDs in the first place? Thinking you could just set seed and do randomization with numbers to get deterministic names.
E.g. line 177 in dashboard_methods.py:
```python
if not hasattr(self, "name") or self.name is None:
    self.name = name or "uuid" + shortuuid.ShortUUID().random(length=5)
```
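A seeded variant of those same lines would keep component names deterministic across runs — a sketch (the module-level `_rng` is illustrative, not existing code):
```python
import random

_rng = random.Random(42)  # fixed seed -> same sequence of names every run

if not hasattr(self, "name") or self.name is None:
    self.name = name or f"id{_rng.randrange(10**5):05d}"  # e.g. "id00042"
```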
_Originally posted by @carlryn in https://github.com/oegedijk/explainerdashboard/issues/38#issuecomment-758700981_ | closed | 2021-01-12T19:25:11Z | 2021-02-15T11:23:28Z | https://github.com/oegedijk/explainerdashboard/issues/69 | [] | oegedijk | 7 |
huggingface/transformers | machine-learning | 36,584 | Significant Increase in Computation Time When Using Attention Mask in SDPA Attention | ### System Info
- `transformers` version: 4.46.3
- Platform: Linux-5.15.0-91-generic-x86_64-with-glibc2.10
- Python version: 3.8.18
- Huggingface_hub version: 0.25.2
- Safetensors version: 0.4.5
- Accelerate version: 1.0.1
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: DEEPSPEED
- use_cpu: False
- debug: False
- num_processes: 8
- machine_rank: 0
- num_machines: 1
- rdzv_backend: static
- same_network: True
- main_training_function: main
- enable_cpu_affinity: False
- PyTorch version (GPU?): 2.4.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: False
- Using GPU in script?: True
- GPU type: NVIDIA A800-SXM4-40GB
### Who can help?
@ylacombe, @eustlb
### Information
- [x] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Hi,
I am experiencing a significant increase in computation time when using an attention mask with the WhisperSdpaAttention in the transformers library. I am not sure if this is expected behavior or a potential bug. Below is the code I used to test this:
```python
import torch
import time
from transformers.models.whisper.modeling_whisper import WhisperSdpaAttention
def build_mask(x, x_lens):
batch_size = x_lens.size(0)
max_seq_len = x_lens.max()
# Create a sequence tensor of shape (batch_size, max_seq_len)
seq_range = (
torch.arange(
0,
max_seq_len,
dtype=x_lens.dtype,
device=x_lens.device,
)
.unsqueeze(0)
.expand(batch_size, max_seq_len)
)
lengths_expand = x_lens.unsqueeze(1).expand(batch_size, max_seq_len)
# Create mask
padding_mask = seq_range >= lengths_expand
audio_attention_mask_ = padding_mask.view(batch_size, 1, 1, max_seq_len).expand(
batch_size, 1, max_seq_len, max_seq_len
)
audio_attention_mask = audio_attention_mask_.to(
dtype=x.dtype,
device=x_lens.device,
)
audio_attention_mask[audio_attention_mask_] = float("-inf")
return audio_attention_mask
device = torch.device("cuda:0")
x = torch.randn(2, 200, 128).half().to(device)
x_lens = torch.tensor([200, 160]).long().to(device)
attn1 = WhisperSdpaAttention(embed_dim=128, num_heads=1, is_causal=False)
attn1.to(device).half()
with torch.no_grad():
begin = time.time()
z = attn1(x)
print("sdpa without mask: ", time.time() - begin)
begin = time.time()
mask = build_mask(x, x_lens).to(device)
out = attn1(x, attention_mask=mask)
print("sdpa with mask: ", time.time() - begin)
```
The output times are as follows:
SDPA without mask: 0.028657197952270508
SDPA with mask: 0.13893771171569824
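(Side note on methodology, which is my own assumption rather than part of the measurement above: CUDA kernels launch asynchronously, so a variant with explicit synchronization may give more reliable wall-clock numbers.)

```python
# Hedged timing sketch: synchronize so time.time() measures kernel execution,
# not just the launch.
torch.cuda.synchronize()
begin = time.time()
out = attn1(x, attention_mask=mask)
torch.cuda.synchronize()
print("sdpa with mask (synced): ", time.time() - begin)
```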
### Expected behavior
As you can see, the computation time increases significantly when an attention mask is used. Could you please let me know if this is expected behavior or if there might be an issue with the implementation?
Thank you! | closed | 2025-03-06T12:21:38Z | 2025-03-08T04:11:34Z | https://github.com/huggingface/transformers/issues/36584 | [
"bug"
] | tartarleft | 4 |
flasgger/flasgger | rest-api | 148 | How to set "Parameter content type" | I need to change "Parameter content type" from "application/json" to "text/plain" for a "body" type parameter.
How can I do it?
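For context, this is what I'm trying (a sketch; Swagger 2.0 defines a per-operation `consumes` list, but I'm not sure flasgger honors it for `in: body` parameters, which is exactly my question):

```python
from flasgger import Swagger
from flask import Flask

app = Flask(__name__)
swagger = Swagger(app)

@app.route("/echo", methods=["POST"])
def echo():
    """Echo a plain-text payload.
    ---
    consumes:
      - text/plain
    parameters:
      - name: payload
        in: body
        required: true
        schema:
          type: string
    responses:
      200:
        description: the same text back
    """
    return "ok"
```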
Thanks. | closed | 2017-08-16T12:16:35Z | 2018-05-25T16:34:54Z | https://github.com/flasgger/flasgger/issues/148 | [] | frizner | 1 |
lepture/authlib | django | 432 | TokenValidator.scope_insufficient seems wrong | There have been changes in [TokenValidator.scope_insufficient()](https://github.com/lepture/authlib/blob/1089d5441c8e780a5165ca859b289fc8485ec5eb/authlib/oauth2/rfc6749/resource_protector.py#L33) to support nested required scopes, but I think they introduce a bug.
```
>>> from authlib.oauth2.rfc6749.resource_protector import TokenValidator
>>> TokenValidator.scope_insufficient(token_scopes=["read"], required_scopes=["read", "write"])
False
```
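For comparison, the check I would have expected (a sketch that ignores the nested/space-separated scope feature for brevity):

```python
@staticmethod
def scope_insufficient(token_scopes, required_scopes):
    if not required_scopes:
        return False
    token_scopes = set(token_scopes or [])
    # insufficient if ANY required scope is missing from the token
    return any(scope not in token_scopes for scope in required_scopes)
```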
This seems wrong, since the token does not have all the required scopes. The reason is that the function now loops over the required scopes and, as soon as it finds one matching scope, returns `False` (i.e. it reports the scopes as sufficient); it therefore never checks the required `write` scope. | closed | 2022-02-24T17:40:45Z | 2022-03-02T08:01:25Z | https://github.com/lepture/authlib/issues/432 | [
"documentation"
] | abompard | 2 |
pydantic/logfire | fastapi | 408 | We're changing database | ## Rollout
We're gradually rolling out queries to the new database now. If you're affected, you'll see a banner like this:
<img width="770" alt="Screenshot 2024-09-18 at 14 42 24" src="https://github.com/user-attachments/assets/11990bfa-f669-4ca5-bf1a-45c8359da344">
**If you notice queries taking longer or returning errors or different results, please let us know below** or [contact us via email or Slack](https://docs.pydantic.dev/logfire/help/#email).
**If you need to continue querying the old database**, you can do so by right-clicking on your profile picture in the top right and setting the query engine to 'TS' (Timescale, the old database):
<img width="342" alt="Screenshot 2024-09-18 at 14 44 53" src="https://github.com/user-attachments/assets/f04b2aa3-1484-4ab7-8efe-0ffd7063547e">
**To get rid of the warning banner**, set the query engine to 'TS' and then back to 'FF' (FusionFire, the new database) again.
We will be increasing the percentage of users whose default query engine is FF over time and monitoring the impact. We may decrease it again if we notice problems. If you set a query engine explicitly to either TS or FF, this won't affect you. Otherwise, your query engine may switch back and forth. For most users, there shouldn't be a noticeable difference.
Most queries should be *faster* with FF, especially if they aggregate lots of data over a long time period. If your dashboards were timing out before with TS, try using FF. However some specific queries that are very fast with TS are slower with FF. In particular, TS can look up trace and span IDs almost instantly without needing a specific time range. **If you click on a link to a trace/span ID in a table, it will open the live view with a time range of 30 days because it doesn't know any better. If this doesn't load, reduce the time range.**
## Summary
We're changing the database that stores observability data in the Logfire platform from [Timescale](https://www.timescale.com/) to a custom database built on [Apache Datafusion](https://datafusion.apache.org/).
This should bring big improvements in performance, but will lead to some SQL compatibility issues initially (details below).
## Background
Timescale is great, it can be really performant when you know the kind of queries you regularly run (so you can set up continuous aggregates) and when you can enable their compression features (which both save money and make queries faster).
Unfortunately we can't use either of those features:
* our users can query their data however they like using SQL, so continuous aggregates aren't that helpful
* Timescale's compression features are incompatible with row level permissions — in Timescale/PostgreSQL we have to have row level permissions since we're running users SQL directly against the database
Earlier this year, as the volume of data the Logfire platform received increased in the beta, these limitations became clearer and clearer.
The other more fundamental limitation of Timescale was their open/closed source business model.
The ideal data architecture for us (and any analytics database I guess) is separated storage and compute: data is stored in S3/GCS as parquet (or equivalent), with an external index used by the query/compute nodes. Timescale has this, but it's completely closed source. So we can either get a scaleable architecture but be forced to use their SAAS, or run Timescale as a traditional "coupled storage and compute" database ourselves.
For lots of companies either of those solutions would be satisfactory, but if Logfire scales as we hope it does, we'd be scuppered with either.
## Datafusion
We settled on Datafusion as the foundation for our new database for a few reasons:
1. It's completely open source so we can build the separated storage and compute solution we want
2. It's all Rust, quite a few of our team are comfortable writing Rust, meaning the database isn't just a black box, we can dive in and improve it as we wish (as an example, Datafusion didn't have JSON querying support until we implemented it in [`datafusion-functions-json`](https://github.com/datafusion-contrib/datafusion-functions-json)). Since starting to use datafusion, our team has contributed 20 or 30 pull requests to datafusion, and associated projects like `arrow-rs` and `sqlparser-rs`
3. Datafusion is extremely extensible, we can adjust the SQL syntax, how queries are planned and run and build indexes exactly as we need them
4. Datafusion's [SQL parser](https://github.com/sqlparser-rs/sqlparser-rs) has pretty good compatibility with Postgres, and again, it's just Rust so we can improve it fairly easily
5. The project is excellently run, part of Apache, leverages the Arrow/Parquet ecosystem, and is used by large organizations like InfluxDB, Apple and Nvidia
## Transition
For the last couple of months we've been double-writing to Timescale and Fusionfire (our cringey internal name for the new datafusion-based database), working on improving reliability and performance of Fusionfire for all types of queries.
Fusionfire is now significantly (sometimes >10x) faster than timescale for most queries. There's a few low latency queries on very recent data which are still faster on timescale that we're working on improving.
Currently by default the live view, explore view, dashboards and alerts use timescale by default. **You can try fusionfire now for everything except alerts by right clicking on your profile picture in the top right and selecting "FF" as the query engine.**
In the next couple of weeks we'll migrate fully to Fusionfire and retire timescale.
We're working hard to make Fusionfire more compatible with PostgreSQL (see https://github.com/sqlparser-rs/sqlparser-rs/pull/1398, https://github.com/sqlparser-rs/sqlparser-rs/pull/1394, https://github.com/sqlparser-rs/sqlparser-rs/pull/1360, https://github.com/apache/arrow-rs/pull/6211, https://github.com/apache/datafusion/pull/11896, https://github.com/apache/datafusion/pull/11876, https://github.com/apache/datafusion/pull/11849, https://github.com/apache/datafusion/pull/11321, https://github.com/apache/arrow-rs/pull/6319, https://github.com/apache/arrow-rs/pull/6208, https://github.com/apache/arrow-rs/pull/6197, https://github.com/apache/arrow-rs/pull/6082, https://github.com/apache/datafusion/pull/11307), but there are still a few expressions which currently don't run correctly (a lot related to intervals):
* `generate_series('2024-08-28 00:00:00'::timestamptz, '2024-08-28 00:00:60'::timestamptz, INTERVAL '10 seconds')`
* `3 * interval '10 seconds'`
* `end_timestamp - interval '1 second' > start_timestamp` — will be fixed by https://github.com/sqlparser-rs/sqlparser-rs/pull/1398
* `extract(seconds from end_timestamp - start_timestamp)` — (`second` without the trailing `s` works thanks to https://github.com/sqlparser-rs/sqlparser-rs/pull/1394)
* JSON functions like `jsonb_array_elements` aren't available yet
If you notice any other issues, please let us know on this issue or a new issue, and we'll let you know how quickly we can fix it. | closed | 2024-08-29T19:24:54Z | 2024-10-15T09:12:32Z | https://github.com/pydantic/logfire/issues/408 | [] | samuelcolvin | 5 |
strawberry-graphql/strawberry-django | graphql | 351 | how to create a single field filter? | I am trying to define a field on a type that can be filtered; the field only returns one object, so no list.
my attempt is this:
```python
chat: auto = strawberry_django.field(
field_name="chat_set",
filters=ChatFilter,
default_factory=lambda: None,
)
```
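A custom resolver also crossed my mind, something like this sketch (`RoomType`/`models.Room` are placeholders, and I'm assuming `strawberry_django.filters.apply` is usable here; please correct me if not):

```python
import strawberry_django

@strawberry_django.type(models.Room)  # placeholder model
class RoomType:
    @strawberry_django.field
    def chat(self, info, filters: ChatFilter | None = None) -> "ChatType | None":
        qs = self.chat_set.all()
        if filters is not None:
            qs = strawberry_django.filters.apply(filters, qs, info)
        return qs.first()  # a single object instead of a list
```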
but it still returns a list. I wonder if there is a way to express something like `self.chat_set.get(**filters)` instead of `self.chat_set.filter(**filters)`? | closed | 2023-08-28T23:10:31Z | 2025-03-20T15:57:18Z | https://github.com/strawberry-graphql/strawberry-django/issues/351 | [] | hyusetiawan | 4 |
OpenBB-finance/OpenBB | python | 6,840 | [🕹️]No-Code Side Quests Twitter thread | ### What side quest or challenge are you solving?
I have published a Twitter thread.
### Points
(🕹️ 150-500 Points)
### Description
_No response_
### Provide proof that you've completed the task
Thread Link is [here](https://x.com/adil_kadival/status/1848576037954982310)
| closed | 2024-10-22T09:06:29Z | 2024-10-24T09:18:17Z | https://github.com/OpenBB-finance/OpenBB/issues/6840 | [] | adilkadivala | 7 |
vitalik/django-ninja | django | 356 | DIFFERENT SCHEMA BASED ON API VERSION ON SWAGGER UI | I have two API versions, but each has its own schemas

The problem is that all schemas are displayed on both versions.
How can I separate the schemas so that each one is displayed only under its correct version?


The image above shows duplicate schemas with the `Paged` prefix.
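For context, my setup is roughly this (a sketch); I expected each `NinjaAPI` instance to produce its own docs page containing only its own schemas:

```python
from django.urls import path
from ninja import NinjaAPI

api_v1 = NinjaAPI(version="1.0.0")
api_v2 = NinjaAPI(version="2.0.0")

# each instance serves its own /docs, and I assumed its own schema section too
urlpatterns = [
    path("api/v1/", api_v1.urls),
    path("api/v2/", api_v2.urls),
]
```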
| closed | 2022-02-10T19:41:12Z | 2023-01-13T10:09:31Z | https://github.com/vitalik/django-ninja/issues/356 | [] | rngallen | 1 |
strawberry-graphql/strawberry | asyncio | 3,607 | Pydantic type `all_fields` does not include computed fields | ## Describe the Bug
If a Pydantic model defines a computed field, that field is excluded from the model when using the `all_fields` kwarg of `strawberry.experimental.pydantic.type`.
I would expect them to be included by default as well, or for there to be a flag like `include_computed_fields` that I could specify to ensure they're exposed by the GraphQL type.
Extending the converted model to include the computed field with their proper type works. `strawberry.auto` does not work.
See the following:
```python
import strawberry
from pydantic import BaseModel, computed_field
class SomeModel(BaseModel):
name: str
@computed_field
@property
def normalized_name(self) -> str:
return f"normalized:{self.name}"
@strawberry.experimental.pydantic.type(SomeModel, all_fields=True)
class ModelType:
pass
# normalized_name: str
@strawberry.type
class Query:
@strawberry.field(graphql_type=ModelType)
def model(self) -> SomeModel:
return SomeModel(name="hello")
res = strawberry.Schema(query=Query).execute_sync(
"""
query {
model {
name
normalizedName
}
}
"""
)
print(res)
```
In the above code, `normalizedName` doesn't exist on the schema and therefore returns an error. After uncommenting the field from the type, the query returns properly.
If the computed field in the converted type is typed with `strawberry.auto`, I get
`TypeError: ModelType fields cannot be resolved. Unexpected type 'typing.Any'`
## System Information
- Operating system: Linux
- Strawberry version (if applicable): `0.235.2`
## Other information
I'm not sure if this is a bug or not, but the return typing for the query is also a bit funky. I cannot type the field to return the converted model type. Instead, I have to type the field as the actual pydantic model and specify `graphql_type` in the field arguments. During runtime, both work (incorrect typing and valid typing). | open | 2024-08-28T18:59:28Z | 2025-03-20T15:56:50Z | https://github.com/strawberry-graphql/strawberry/issues/3607 | [
"bug"
] | thearchitector | 2 |
gee-community/geemap | streamlit | 873 | Get_ee_stac_list() fails | <!-- Please search existing issues to avoid creating duplicates. -->
### Environment Information
- geemap version: 0.11.0
- Python version: 3.7
- Operating System: On Google Colab
### Description
Hi, when trying to run datasets.get_ee_stac_list() it returns []
The URL https://earthengine-stac.storage.googleapis.com/catalog/catalog.json is accessible, so I'm unsure why it returns a blank list.
Thanks | closed | 2022-01-18T11:05:51Z | 2022-01-19T05:14:33Z | https://github.com/gee-community/geemap/issues/873 | [
"bug"
] | adityachopra1 | 2 |
dask/dask | pandas | 11,394 | Discrepancy in column property with actual structure after grouping | **Describe the issue**:
After `groupby` and `reset_index`, the DataFrame's `columns` property has one column missing and one with an incorrect name, while the computed DataFrame has the proper structure.
**Minimal Complete Verifiable Example**:
```python
import pandas as pd
import dask.dataframe as dd
data = {
'id': [1, 1, 1, 2, 2, 2],
'date': pd.to_datetime(['2023-01-01', '2023-01-04', '2023-01-05', '2023-01-01', '2023-01-04', '2023-01-05']),
'metric': [1,1,1,1,1,1]
}
pd_df = pd.DataFrame(data).astype({'id': 'int64', 'metric': 'int64', 'date': 'datetime64[ns]'})
df = dd.from_pandas(pd_df)
df = (
df
.groupby(by=['id'])
.apply(lambda x: x, include_groups=False, meta={'date': 'datetime64[ns]', "metric": "int64", })
.reset_index(drop=False)
.persist()
)
print('Actual:')
print(df.compute())
print(df.columns)
pd_df = (
pd_df
.groupby(by=['id'])
.apply(lambda x: x, include_groups=False)
.reset_index(drop=False)
)
print("\n\nExpected:")
print(pd_df)
print(pd_df.columns)
```
```
Actual:
id level_1 date metric
0 1 0 2023-01-01 1
1 1 1 2023-01-04 1
2 1 2 2023-01-05 1
3 2 3 2023-01-01 1
4 2 4 2023-01-04 1
5 2 5 2023-01-05 1
Index(['index', 'date', 'metric'], dtype='object') <---------- extra 'index' column and missing 'id' and 'level_1'
Expected:
id level_1 date metric
0 1 0 2023-01-01 1
1 1 1 2023-01-04 1
2 1 2 2023-01-05 1
3 2 3 2023-01-01 1
4 2 4 2023-01-04 1
5 2 5 2023-01-05 1
Index(['id', 'level_1', 'date', 'metric'], dtype='object')
```
**Environment**:
- Dask version: 2024.8.0
- Python version: 3.10
- Operating System: WSL
- Install method (conda, pip, source): poetry
| open | 2024-09-17T16:04:55Z | 2025-03-10T01:51:02Z | https://github.com/dask/dask/issues/11394 | [
"dataframe",
"needs attention"
] | dbalabka | 0 |
dmlc/gluon-cv | computer-vision | 998 | run demo_ssd.py gets error | ----------Python Info----------
Version : 3.5.6
Compiler : GCC 7.3.0
Build : ('default', 'Aug 26 2018 21:41:56')
Arch : ('64bit', '')
------------Pip Info-----------
Version : 10.0.1
Directory : /home/z440/miniconda3/envs/mxnet/lib/python3.5/site-packages/pip
----------MXNet Info-----------
Version : 1.6.0
Directory : /home/z440/miniconda3/envs/mxnet/lib/python3.5/site-packages/mxnet
Commit Hash : b1932c027ba8df081ca398dd8b5d3a893c5bc61d
Library : ['/home/z440/miniconda3/envs/mxnet/lib/python3.5/site-packages/mxnet/libmxnet.so'] | closed | 2019-10-22T09:39:08Z | 2021-06-07T07:04:22Z | https://github.com/dmlc/gluon-cv/issues/998 | [
"Stale"
] | ghost | 2 |
google-research/bert | nlp | 558 | if we only use mask LM in training and disable the 'next sentence', how should I modify the create_pretraining_data.py | In the pre-training code, I figure I can just disable the 'next sentence loss' part, but I'm not sure what else needs to change in create_pretraining_data.py.
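Roughly, the change I have in mind (a sketch against the public scripts; treat the details as my assumptions):

```python
# run_pretraining.py -- drop the NSP term from the total loss:
# total_loss = masked_lm_loss + next_sentence_loss
total_loss = masked_lm_loss  # masked LM only

# create_pretraining_data.py -- with NSP gone, the 50% "random next sentence"
# sampling is unnecessary, so is_random_next can always be False.
```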
| open | 2019-04-06T08:20:00Z | 2019-04-10T07:20:17Z | https://github.com/google-research/bert/issues/558 | [] | SeekPoint | 1 |
huggingface/transformers | python | 36,071 | modeling_phi3 errors with AttributeError: 'DynamicCache' object has no attribute 'get_max_length' | ### System Info
- `transformers` version: 4.49.0.dev0 (315a9f494e0e00d8652722ce950be590852a4727~1)
- Platform: Windows-10-10.0.20348-SP0
- Python version: 3.11.7
- Huggingface_hub version: 0.28.1
- Safetensors version: 0.5.2
- Accelerate version: 1.3.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.6.0+cu124 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): 0.10.2 (cpu)
- Jax version: 0.5.0
- JaxLib version: 0.5.0
- Using distributed or parallel set-up in script?: no
- Using GPU in script?: no
- GPU type: NVIDIA RTX A5000
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Use Phi3 with any cache configuration, including default (DynamicCache)
I think get_max_length is probably declared on a mixin that isn't on the cache classes yet?
```
comfy_extras\nodes\nodes_language.py:361: in execute
return model.generate(tokens, max_new_tokens, repetition_penalty, seed, sampler),
comfy\language\transformers_model_management.py:228: in generate
output_ids = transformers_model.generate(
..\..\.venv\Lib\site-packages\torch\utils\_contextlib.py:116: in decorate_context
return func(*args, **kwargs)
..\..\.venv\Lib\site-packages\transformers\generation\utils.py:2224: in generate
result = self._sample(
..\..\.venv\Lib\site-packages\transformers\generation\utils.py:3198: in _sample
model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs)
C:\Users\bberman\.cache\huggingface\modules\transformers_modules\c1358f8a35e6d2af81890deffbbfa575b978c62f\modeling_phi3.py:1292: in prepare_inputs_for_generation
max_cache_length = past_key_values.get_max_length()
```
### Expected behavior
related to #35168?
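In the meantime, a workaround sketch that might unblock this (it assumes `get_max_length` was simply renamed to `get_max_cache_shape` in recent transformers, which I haven't fully verified):

```python
from transformers.cache_utils import DynamicCache

# Re-expose the old name for remote code (like phi-3) that still calls it.
if not hasattr(DynamicCache, "get_max_length"):
    DynamicCache.get_max_length = DynamicCache.get_max_cache_shape
```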
I'm not sure why this is only coming up with phi-3 so far | open | 2025-02-06T16:50:38Z | 2025-03-20T16:54:58Z | https://github.com/huggingface/transformers/issues/36071 | [
"bug"
] | doctorpangloss | 11 |
yezz123/authx | pydantic | 251 | raise NotImplementedError in BaseDBBackend | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to AuthX but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to AuthX but to [FastAPI](https://github.com/tiangolo/fastapi).
### Example Code
```python
Following this example: https://github.com/yezz123/authx/blob/main/example/app/main.py
```
### Description
I cloned the repository, and I'm trying out AuthX as it looks to be what I want. I've run into the following issues:
1. A sqlite DB isn't generated. I can see that the `BaseDBBackend` class contains a bunch of `raise NotImplementedError` statements; does this mean the BaseDBBackend isn't finished yet?
2. When starting the app and navigating to `/docs`, I can see a bunch of endpoints, but the `register` endpoint, for example, doesn't let me put in any parameters.
When will the sqlite DB backend be finished?
### Operating System
Linux
### Operating System Details
_No response_
### FastAPI Version
0.77.1
### Python Version
3.10.4
### Additional Context

| closed | 2022-07-01T15:33:17Z | 2023-03-06T09:31:11Z | https://github.com/yezz123/authx/issues/251 | [
"enhancement",
"question"
] | nickshanks347 | 2 |
ets-labs/python-dependency-injector | asyncio | 364 | feature request: provider for "self" | I would like a provider to be able to pass a container as an argument:
```python
class Container(containers.DeclarativeContainer):
foo = providers.Callable(calc_foo, containers.MarkerForContainer)
bar = providers.Object('hello')
container = Container()
container.override_providers(container=container)
def calc_foo(container):
print(container.bar())
container.foo() # prints "hello" - ?
```
I assume that is impossible to do directly, right now? I guess perhaps I could do:
```python
class Container(containers.DeclarativeContainer):
container = providers.DependenciesContainer()
foo = providers.Callable(calc_foo, container)
bar = providers.Object('hello')
container = Container()
container.override_providers(container=container)
def calc_foo(container):
print(container.bar())
container.foo() # prints "hello" - ?
```
But having the container work without having to put cheese in it (to use a mousetrap analogy) would be great... any chance of something like the former (if indeed it isn't possible)? | closed | 2021-01-19T10:05:35Z | 2021-02-09T13:17:00Z | https://github.com/ets-labs/python-dependency-injector/issues/364 | [
"feature"
] | shaunc | 9 |
graphql-python/flask-graphql | graphql | 32 | Is this project still mantained? | @syrusakbary thank you for this great project! I noticed that there have been a lot of commits since the last release, of which the last one was 6 months ago. Are you still planning on working on this project?
Best regards | closed | 2017-09-11T13:15:59Z | 2021-01-04T16:23:12Z | https://github.com/graphql-python/flask-graphql/issues/32 | [] | lucasrcosta | 9 |
koxudaxi/datamodel-code-generator | pydantic | 1,450 | Unique items should be allowed as list in Pydantic v2 | **Describe the bug**
When Pydantic v2 is used, i.e., `--output-model-type pydantic_v2.BaseModel`, any field tagged with unique items will always be created as a set. While it is understandable to use a set to ensure unique items, there are distinct differences between set and list. Some important ones are:
* Set does not preserve order like list.
* Set requires items to be hashable.
As such, for many applications it is desirable to use a list to store data even when unique items are requested.
Note that there is a `--use-unique-items-as-set` flag, which usually implies that a list is used by default (that is the case for the other output model types). May I suggest using a list by default for Pydantic v2 as well? Alternatively, could we support a `--use-unique-items-as-list` flag?
**To Reproduce**
Example schema:
```json
{
"$schema": "https://json-schema.org/draft/2020-12/schema",
"title": "Example",
"type": "object",
"properties": {
"data": {
"type": "array",
"uniqueItems": true
}
}
}
```
Used commandline:
```
$ datamodel-codegen --output-model-type pydantic_v2.BaseModel --input schema.json --output model.py
```
**Expected behavior**
Ability to use list instead of set in the output model.
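Concretely, the output I'd like for the schema above (a hypothetical model, not what the tool emits today):

```python
from typing import Any, List, Optional

from pydantic import BaseModel


class Example(BaseModel):
    # list keeps insertion order and allows unhashable items; uniqueness could
    # still be enforced with a validator if desired
    data: Optional[List[Any]] = None
```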
**Version:**
- OS: macOS
- Python version: 3.11.4
- datamodel-code-generator version: 0.21.2
| closed | 2023-07-25T02:18:28Z | 2023-08-09T15:34:07Z | https://github.com/koxudaxi/datamodel-code-generator/issues/1450 | [
"bug"
] | xu-cheng | 4 |
biolab/orange3 | numpy | 6,719 | ROC Curve widget sets a wrong prior probability | With apologies to self for writing such a bad bug report: I have no time to properly explore it now, but I have to write this down lest I forget.
I encountered a situation (on Pima diabetes data) in which the ROC widget's target was set to 1, but the prior probability was that for class 0. I changed the target to 0 and back to 1, and the prior probabilty was reset properly. My hunch is that if the widget was loaded in the workflow, the target is retrieved from settings, but the prior probability is set before and disregarding the target. Changing the target back and forth calls the necessary callbacks and updates the prior probability. This is just a hypothesis, I don't have time to actually reproduce the bug and check the code. | closed | 2024-01-26T15:14:10Z | 2024-03-28T15:24:45Z | https://github.com/biolab/orange3/issues/6719 | [
"bug",
"snack"
] | janezd | 1 |
pyro-ppl/numpyro | numpy | 1,370 | Using dirichlet sampler directly in Dirichlet distribution | After https://github.com/google/jax/pull/9906, `jax.random.dirichlet` should be robust for small concentration, so we can remove the current trick that we put in the Dirichlet sampler. | closed | 2022-03-19T05:09:56Z | 2022-04-13T10:45:43Z | https://github.com/pyro-ppl/numpyro/issues/1370 | [
"enhancement"
] | fehiepsi | 1 |
comfyanonymous/ComfyUI | pytorch | 7,027 | Wan model is not working in MacOs if scheduler is `uni_pc` | ### Expected Behavior
A normal video output
### Actual Behavior
https://github.com/user-attachments/assets/66051ca7-ccd2-4fb9-a186-a9bf4e974772
### Steps to Reproduce
I was trying to use the Wan 2.1 model in ComfyUI on my MacBook Pro (M2), using the example workflow from the [blog examples](https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/blob/main/example%20workflows_Wan2.1/image_to_video_wan_480p_example.json).
### Debug Logs
```powershell
env PYTORCH_ENABLE_MPS_FALLBACK=1 python main.py --force-upcast-attention
[START] Security scan
[DONE] Security scan
## ComfyUI-Manager: installing dependencies done.
** ComfyUI startup time: 2025-03-01 13:19:21.836
** Platform: Darwin
** Python version: 3.11.11 (main, Jan 5 2025, 06:40:04) [Clang 19.1.6 ]
** Python executable: /Users/edwin/AI/.venv/bin/python
** ComfyUI Path: /Users/edwin/AI/ComfyUI
** ComfyUI Base Folder Path: /Users/edwin/AI/ComfyUI
** User directory: /Users/edwin/AI/ComfyUI/user
** ComfyUI-Manager config path: /Users/edwin/AI/ComfyUI/user/default/ComfyUI-Manager/config.ini
** Log path: /Users/edwin/AI/ComfyUI/user/comfyui.log
Prestartup times for custom nodes:
0.0 seconds: /Users/edwin/AI/ComfyUI/custom_nodes/rgthree-comfy
1.1 seconds: /Users/edwin/AI/ComfyUI/custom_nodes/comfyui-manager
Checkpoint files will always be loaded safely.
Total VRAM 65536 MB, total RAM 65536 MB
pytorch version: 2.7.0.dev20250210
xformers version: 0.0.29.post3
Set vram state to: SHARED
Device: mps
Using sub quadratic optimization for attention, if you have memory or speed issues try using: --use-split-cross-attention
ComfyUI version: 0.3.18
[Prompt Server] web root: /Users/edwin/AI/ComfyUI/web
### Loading: ComfyUI-Manager (V3.18.1)
### ComfyUI Version: v0.3.18 | Released on '2025-02-26'
(pysssss:WD14Tagger) [DEBUG] Available ORT providers: CoreMLExecutionProvider, AzureExecutionProvider, CPUExecutionProvider
(pysssss:WD14Tagger) [DEBUG] Using ORT providers: CUDAExecutionProvider, CPUExecutionProvider
/Users/edwin/AI/ComfyUI/custom_nodes/ComfyUI-WanVideoWrapper/wanvideo/modules/model.py:29: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
@amp.autocast(enabled=False)
/Users/edwin/AI/ComfyUI/custom_nodes/ComfyUI-WanVideoWrapper/wanvideo/modules/model.py:42: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
@amp.autocast(enabled=False)
[rgthree-comfy] Loaded 42 fantastic nodes. 🎉
Total VRAM 65536 MB, total RAM 65536 MB
pytorch version: 2.7.0.dev20250210
xformers version: 0.0.29.post3
Set vram state to: SHARED
Device: mps
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
------------------------------------------
Comfyroll Studio v1.76 : 175 Nodes Loaded
------------------------------------------
** For changes, please see patch notes at https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes/blob/main/Patch_Notes.md
** For help, please see the wiki at https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes/wiki
------------------------------------------
Import times for custom nodes:
0.0 seconds: /Users/edwin/AI/ComfyUI/custom_nodes/websocket_image_save.py
0.0 seconds: /Users/edwin/AI/ComfyUI/custom_nodes/comfyui_ipadapter_plus
0.0 seconds: /Users/edwin/AI/ComfyUI/custom_nodes/comfyui-wd14-tagger
0.0 seconds: /Users/edwin/AI/ComfyUI/custom_nodes/comfyui-custom-scripts
0.0 seconds: /Users/edwin/AI/ComfyUI/custom_nodes/rgthree-comfy
0.0 seconds: /Users/edwin/AI/ComfyUI/custom_nodes/ComfyUI-GGUF
0.0 seconds: /Users/edwin/AI/ComfyUI/custom_nodes/ComfyUI_Comfyroll_CustomNodes
0.0 seconds: /Users/edwin/AI/ComfyUI/custom_nodes/ComfyUI-IPAdapter-Flux
0.1 seconds: /Users/edwin/AI/ComfyUI/custom_nodes/ComfyUI-WanVideoWrapper
0.2 seconds: /Users/edwin/AI/ComfyUI/custom_nodes/comfyui-manager
0.2 seconds: /Users/edwin/AI/ComfyUI/custom_nodes/comfyui-kjnodes
0.3 seconds: /Users/edwin/AI/ComfyUI/custom_nodes/comfyui-videohelpersuite
0.6 seconds: /Users/edwin/AI/ComfyUI/custom_nodes/comfyui-mvadapter
0.8 seconds: /Users/edwin/AI/ComfyUI/custom_nodes/comfyui-florence2
Starting server
To see the GUI go to: http://127.0.0.1:8188
FETCH ComfyRegistry Data: 5/35
got prompt
Using split attention in VAE
Using split attention in VAE
VAE load device: mps, offload device: cpu, dtype: torch.bfloat16
FETCH ComfyRegistry Data: 10/35
Requested to load CLIPVisionModelProjection
loaded completely 9.5367431640625e+25 1208.09814453125 True
Requested to load WanTEModel
loaded completely 9.5367431640625e+25 6419.477203369141 True
CLIP/text encoder model load device: cpu, offload device: cpu, current: cpu, dtype: torch.float16
FETCH ComfyRegistry Data: 15/35
FETCH ComfyRegistry Data: 20/35
FETCH ComfyRegistry Data: 25/35
FETCH ComfyRegistry Data: 30/35
FETCH ComfyRegistry Data: 35/35
FETCH ComfyRegistry Data [DONE]
[ComfyUI-Manager] default cache updated: https://api.comfy.org/nodes
nightly_channel: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/remote
FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json [DONE]
[ComfyUI-Manager] All startup tasks have been completed.
Requested to load WanVAE
loaded completely 9.5367431640625e+25 242.02829551696777 True
/Users/edwin/AI/ComfyUI/custom_nodes/ComfyUI-GGUF/loader.py:65: UserWarning: The given NumPy array is not writable, and PyTorch does not support non-writable tensors. This means writing to this tensor will result in undefined behavior. You may want to copy the array to protect its data or make it writable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/utils/tensor_numpy.cpp:209.)
torch_tensor = torch.from_numpy(tensor.data) # mmap
ggml_sd_loader:
0 823
12 360
14 120
model weight dtype torch.bfloat16, manual cast: None
model_type FLOW
Requested to load WAN21
loaded completely 9.5367431640625e+25 10943.232666015625 True
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [58:06<00:00, 174.34s/it]
Requested to load WanVAE
loaded completely 9.5367431640625e+25 242.02829551696777 True
Prompt executed in 3739.42 seconds
```
### Other
However, I found that I could fix it after I changed the KSampler sampler name to euler or euler-ancestral and ~KSampler scheduler to normal~ (Edit, it is not important). (Thanks for this [reddit post](https://www.reddit.com/r/comfyui/comments/1izktly/comment/mf3omam/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button)) | open | 2025-03-01T07:27:04Z | 2025-03-10T18:15:32Z | https://github.com/comfyanonymous/ComfyUI/issues/7027 | [
"Potential Bug"
] | edwin0cheng | 6 |
pyro-ppl/numpyro | numpy | 1,838 | `nuts.get_extra_fields()["num_steps"]=0` after warmup | I have the following piece of code:
``` python
nuts = MCMC(
NUTS(model_logreg),
num_warmup=2**13,
num_samples=2**10,
num_chains=2**5,
chain_method="vectorized",
)
nuts.warmup(jr.key(2), x_train, labels_train, extra_fields=("num_steps",))
warmup_steps = nuts.get_extra_fields()["num_steps"]
print(f"num warmup steps: {warmup_steps}")
```
which returns
```
warmup: 100%|██████████| 8192/8192 [00:41<00:00, 199.13it/s]
num warmup steps: [0 0 0 ... 0 0 0]
```
If I do `nuts.run(jr.key(2), x_train, labels_train, extra_fields=("num_steps",))` it works just fine and reports a non-zero number of steps (although I suspect it doesn't count the warmup steps). Also the sampling itself works as intended and results in the correct distribution, so the problem probably isn't in my code. And the warmup does indeed work, because if I set `num_warmup=0`, then the output becomes biased towards the initial value.
This is quite bad because it makes it seem that NUTS can achieve good results with a very small number of gradient evaluations, giving it an unfair advantage over other samplers.
Also I saw this issue mentioned in the following thread, but it apparently hasn't been addressed yet:
https://forum.pyro.ai/t/how-to-calculate-effective-sample-size-per-gradient-evaluation/5398/7 | closed | 2024-07-26T12:39:07Z | 2024-08-05T22:55:42Z | https://github.com/pyro-ppl/numpyro/issues/1838 | [
"question"
] | andyElking | 4 |
roboflow/supervision | tensorflow | 1,022 | How to register detection in `PolygonZone` for any overlap | ### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests.
### Question
How can I register a detection inside a `PolygonZone` when there is any overlap, without requiring the entire bounding box to be contained inside the zone?

I tried using `triggering_anchors` but it didn't work:
```python
polygon_zone = sv.PolygonZone(
polygon=zone_ndarray,
frame_resolution_wh=(img_width, img_height),
# Make a detection be considered inside the zone if there is any
# overlap.
triggering_anchors=[
Position.TOP_LEFT,
Position.TOP_RIGHT,
Position.BOTTOM_RIGHT,
Position.BOTTOM_LEFT,
],
)
```
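In case it clarifies the ask, this is the behavior I'm after, sketched with shapely (my own workaround idea, not supervision API):

```python
from shapely.geometry import Polygon, box

zone_poly = Polygon(zone_ndarray)
# a detection counts as "inside" if its box overlaps the zone at all
in_zone = [
    zone_poly.intersects(box(x1, y1, x2, y2))
    for x1, y1, x2, y2 in detections.xyxy
]
```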
Thanks!
### Additional
_No response_ | open | 2024-03-19T11:22:32Z | 2024-07-26T22:02:03Z | https://github.com/roboflow/supervision/issues/1022 | [
"question"
] | marcospgp | 4 |
piccolo-orm/piccolo | fastapi | 238 | Add `create` method | If you want to create an object and save it, the current approach is:
```python
band = Band(name="Pythonistas")
await band.save().run()
```
We can add a `create` method, which might be preferable for some people coming from other ORMs:
```python
band = await Band.objects().create(name="Pythonistas").run()
```
| closed | 2021-09-16T15:09:00Z | 2021-09-20T21:05:39Z | https://github.com/piccolo-orm/piccolo/issues/238 | [
"enhancement"
] | dantownsend | 1 |
iMerica/dj-rest-auth | rest-api | 460 | Login immediately on register when using JWT HTTP only | Hi there,
Is there a way to have a user logged in immediately on register? I have set `ACCOUNT_EMAIL_VERIFICATION = 'optional'` and want the flow to log the user in once they register (they can then verify their email at their convenience), but the register view doesn't set the JWT cookies, so the user is still required to hit the Login view separately after registering...
Is there a configuration or adjustment I can make to log in a user with JWT immediately after they register?
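For reference, this is roughly what I'm attempting (a sketch; I'm assuming `jwt_encode` and `set_jwt_cookies` are the right hooks, please correct me):

```python
from dj_rest_auth.jwt_auth import set_jwt_cookies
from dj_rest_auth.registration.views import RegisterView
from dj_rest_auth.utils import jwt_encode

class LoginOnRegisterView(RegisterView):
    def perform_create(self, serializer):
        user = super().perform_create(serializer)
        self._registered_user = user  # keep a handle for the response step
        return user

    def create(self, request, *args, **kwargs):
        response = super().create(request, *args, **kwargs)
        access, refresh = jwt_encode(self._registered_user)
        set_jwt_cookies(response, access, refresh)
        return response
```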
Thanks :) | open | 2022-12-09T12:50:54Z | 2022-12-09T12:50:54Z | https://github.com/iMerica/dj-rest-auth/issues/460 | [] | ainthateasy | 0 |
huggingface/datasets | pandas | 7,473 | Webdataset data format problem | ### Describe the bug
Please see https://huggingface.co/datasets/ejschwartz/idioms/discussions/1
Error code: FileFormatMismatchBetweenSplitsError
All three splits, train, test, and validation, use webdataset. But only the train split has more than one file. How can I force the other two splits to also be interpreted as being the webdataset format? (I don't think there is currently a way, but happy to be told that I am wrong.)
### Steps to reproduce the bug
```python
import datasets

datasets.load_dataset("ejschwartz/idioms")
```
### Expected behavior
The dataset loads. Alternatively, there is a YAML syntax for manually specifying the format.
### Environment info
- `datasets` version: 3.2.0
- Platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.28.1
- PyArrow version: 19.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.9.0 | closed | 2025-03-21T17:23:52Z | 2025-03-21T19:19:58Z | https://github.com/huggingface/datasets/issues/7473 | [] | edmcman | 1 |
Kanaries/pygwalker | matplotlib | 571 | [BUG] Param "dark" not work | `dark` does not work in the current version.
It worked well in older versions; I didn't change my code, I just re-ran an old notebook.
I can only click the theme buttons to change the theme, and my choice won't be remembered.
Name: pygwalker
Version: 0.4.8.9
Python 3.9 Jupyter Lab | closed | 2024-06-07T08:50:01Z | 2024-06-07T12:53:03Z | https://github.com/Kanaries/pygwalker/issues/571 | [
"bug"
] | Erimus-Koo | 5 |
strawberry-graphql/strawberry | fastapi | 3,552 | export-schema does not include ENUM descriptions | <!--- Provide a general summary of the changes you want in the title above. -->
allow `strawberry export-schema` to use, say, the first line of the docstring (or the full docstring) as a description and store it as part of the `schema.graphql` file - for entities and for attributes.
<!--- This template is entirely optional and can be removed, but is here to help both you and us. -->
<!--- Anything on lines wrapped in comments like these will not show up in the final text. -->
## Feature Request Type
- [ ] Core functionality
- [ ] Alteration (enhancement/optimization) of existing feature(s)
- [x] New behavior
## Description
Our Federated GQL schema requires each attribute to have comments.
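As a concrete sketch of what I mean (names are illustrative):

```python
from enum import Enum

import strawberry

@strawberry.enum
class Flavour(Enum):
    """Available ice-cream flavours."""  # <- I'd like export-schema to emit this

    VANILLA = "vanilla"
    CHOCOLATE = "chocolate"

# Desired schema.graphql output (sketch):
#   """Available ice-cream flavours."""
#   enum Flavour {
#     VANILLA
#     CHOCOLATE
#   }
```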
It seems that currently there is no way to auto-add docstrings to the exported schema file via strawberry. | closed | 2024-06-28T16:24:49Z | 2025-03-20T15:56:46Z | https://github.com/strawberry-graphql/strawberry/issues/3552 | [] | Casyfill | 1 |
HumanSignal/labelImg | deep-learning | 187 | Verify Image button has no icon | The "Verify Image" button has no icon in the Windows binaries v1.5.2
| closed | 2017-10-30T14:39:19Z | 2017-11-01T14:50:43Z | https://github.com/HumanSignal/labelImg/issues/187 | [] | jensdenbraber | 3 |
gunthercox/ChatterBot | machine-learning | 1,446 | Cannot view the data in actual postgresql | @gunthercox I have connected my chatterbot with PostgreSQL, and it trained on the data that I specified in the file. After training, it created the db **"jer"**.
Here's my code:
```python
bot = ChatBot(
    "Terminal",
    storage_adapter="chatterbot.storage.SQLStorageAdapter",
    trainer='chatterbot.trainers.ListTrainer',
    database_uri='postgresql://postgres:root@localhost:5432/jer',
    database="jer"
)
```
Can you please suggest how to view the data in PostgreSQL? I can see that the db was created, but I cannot see the data in the actual Postgres database.
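For what it's worth, I believe the SQL storage adapter keeps statements in a table named `statement` (an assumption on my part), so something like `SELECT text FROM statement;` after `\c jer` in psql should show the trained data if it is there.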
Can't we have control of the database that is created after running the chatterbot python code? | closed | 2018-10-08T05:04:36Z | 2019-08-06T20:45:17Z | https://github.com/gunthercox/ChatterBot/issues/1446 | [] | Jereemi | 6 |
streamlit/streamlit | data-visualization | 10,747 | Add support for Jupyter widgets / ipywidgets | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [x] I added a descriptive title and summary to this issue.
### Summary
Jupyter Widgets are [interactive browser controls](https://github.com/jupyter-widgets/ipywidgets/blob/main/docs/source/examples/Index.ipynb) for Jupyter notebooks. Implement support for using ipywidgets elements in a Streamlit app.
### Why?
_No response_
### How?
```python
import ipywidgets as widgets
widget = st.ipywidgets(widgets.IntSlider())
st.write(widget.value)
```
### Additional Context
- Related to https://github.com/streamlit/streamlit/issues/10746
- Related discussion: https://discuss.streamlit.io/t/ipywidgets-wip/3870 | open | 2025-03-12T16:22:36Z | 2025-03-18T10:31:37Z | https://github.com/streamlit/streamlit/issues/10747 | [
"type:enhancement",
"feature:custom-components",
"type:possible-component"
] | lukasmasuch | 1 |
capitalone/DataProfiler | pandas | 1,156 | ModuleNotFoundError: No module named 'numpy.lib.histograms' | **General Information:**
- OS: Sonoma 14.5
- Python version: 3.9.10
- Library version: 0.12.0
**Describe the bug:**
When attempting to set up a Python virtual environment, I run `make setup` per this [Contribution](https://github.com/capitalone/DataProfiler/blob/main/.github/CONTRIBUTING.md) guideline. When the Makefile executes `pre-commit run`, the `check-manifest` stage fails with an error of
```
ImportError:
A module that was compiled using NumPy 1.x cannot be run in
NumPy 2.0.0 as it may crash. To support both 1.x and 2.x
versions of NumPy, modules must be compiled with NumPy 2.0.
Some module may need to rebuild instead e.g. with 'pybind11>=2.12'.
If you are a user of the module, the easiest solution will be to
downgrade to 'numpy<2' or try to upgrade the affected module.
We expect that some modules will need time to support NumPy 2.
Traceback (most recent call last): [...]
ModuleNotFoundError: No module named 'numpy.lib.histograms'
```
I believe this to be a result of the latest [NumPy 2.0.0 release](https://github.com/numpy/numpy/releases) as of three weeks ago.
**To Reproduce:**
Run `make setup` per this [Contribution](https://github.com/capitalone/DataProfiler/blob/main/.github/CONTRIBUTING.md) guideline.
**Expected behavior:**
The Python virtual environment should be successfully set up. Instead, I encounter this NumPy error.
**Screenshots:**
<img width="683" alt="image" src="https://github.com/capitalone/DataProfiler/assets/83050155/0e581739-a63f-44e1-b8f8-ff37a625e626">
**Additional context:**
This is similar to #1154; however, I encounter this issue when setting up the virtual environment rather than when running a Python file that imports DataProfiler.
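For what it's worth, the error message itself suggests downgrading to `numpy<2`, which I assume would unblock `make setup` until the affected dependencies are rebuilt against NumPy 2.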
| open | 2024-07-06T21:13:06Z | 2024-09-10T16:15:50Z | https://github.com/capitalone/DataProfiler/issues/1156 | [
"Bug"
] | alexjdean | 2 |
CorentinJ/Real-Time-Voice-Cloning | deep-learning | 357 | class ZoneoutLSTMCell(tf.nn.rnn_cell.RNNCell): AttributeError: module 'tensorflow.python.ops.nn' has no attribute 'rnn_cell' | class ZoneoutLSTMCell(tf.nn.rnn_cell.RNNCell):
AttributeError: module 'tensorflow.python.ops.nn' has no attribute 'rnn_cell' | closed | 2020-06-08T07:15:27Z | 2020-07-04T15:11:37Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/357 | [] | wumingzhibei | 1 |
tensorlayer/TensorLayer | tensorflow | 900 | TensorLayer 2.0 | # NETWORK API REFACTORING - TO DO LIST
## [Design Docs](https://github.com/luomai/tensorlayer2-design/issues/7)
## [Refactoring Codes](https://github.com/zsdonghao/tensorlayer2)
Dear Contributors,
@DEKHTIARJonathan @akaraspt @luomai @lgarithm @JingqingZ @fangde et al.
As we discussed previously, TensorLayer 2.0 should support both eager and graph mode. The new API design is here https://github.com/luomai/tensorlayer2-design/issues/7
To make the refactoring faster, I simply forked tensorlayer/tensorlayer into zsdonghao/tensorlayer2 (https://github.com/zsdonghao/tensorlayer2); we can merge the branch back to tensorlayer/tensorlayer when the refactoring is finished. In doing so, the contributions will be preserved as many commits rather than only one.
# Work to be done
## Layers
- [x] **core.py:**
* Layer:
- [x] refactored @JingqingZ 2019/01/28
- [x] tested @JingqingZ 2019/01/31 2019/03/06
- [x] documentation @JingqingZ 2019/03/06
* ModelLayer:
- [x] created @JingqingZ 2019/01/28
- [x] tested @JingqingZ 2019/03/06
- [x] documentation @JingqingZ 2019/03/06
* LayerList:
- [x] created @JingqingZ 2019/01/28 @ChrisWu1997
- [x] tested @JingqingZ 2019/03/06
- [x] documentation @JingqingZ 2019/03/06
* LayerNode:
- [x] created @ChrisWu1997
- [x] tested @ChrisWu1997 2019/03/22
- [x] documentation @ChrisWu1997 2019/03/22
- [x] **activation.py:**
* PRelu:
- [x] refactored @zsdonghao 2018/12/04 @JingqingZ 2019/03/20
- [x] tested @JingqingZ 2019/03/20
- [x] documentation @JingqingZ 2019/03/20
* PRelu6:
- [x] refactored @zsdonghao 2018/12/04 @JingqingZ 2019/03/20
- [x] tested @JingqingZ 2019/03/20
- [x] documentation @JingqingZ 2019/03/20
* PTRelu6:
- [x] refactored @zsdonghao 2018/12/04 @JingqingZ 2019/03/20
- [x] tested @JingqingZ 2019/03/20
- [x] documentation @JingqingZ 2019/03/20
- **convolution/**
* AtrousConv1dLayer, AtrousConv2dLayer and AtrousDeConv2d are removed, use Conv1d/2d and DeConv2d with `dilation_rate` instead. (🀄️remember to change CN docs)
* BinaryConv2d:
- [x] refactored @zsdonghao 2018/12/05
- [x] tested @warshallrho 2019/03/16
- [x] documentation @warshallrho 2019/03/20
* Conv1d:
- [x] refactored @zsdonghao 2019/01/16
- [x] tested @warshallrho 2019/03/15
- [x] documentation @warshallrho 2019/03/17
* Conv2d:
- [x] refactored @zsdonghao 2019/01/16
- [x] tested @warshallrho 2019/03/15
- [x] documentation @warshallrho 2019/03/17
* Conv3d:
- [x] add @zsdonghao 2019/01/16 : (🀄️remember to change CN docs)
- [x] tested @warshallrho 2019/03/15
- [x] documentation @warshallrho 2019/03/17
* Conv1dLayer:
- [x] refactored @zsdonghao 2018/12/05
- [x] tested @warshallrho 2019/03/15
- [x] documentation @warshallrho 2019/03/17
* Conv2dLayer:
- [x] refactored @zsdonghao 2018/12/05
- [x] tested @warshallrho 2019/03/15
- [x] documentation @warshallrho 2019/03/17
* Conv3dLayer:
- [x] refactored @zsdonghao 2018/12/05
- [x] tested @warshallrho 2019/03/15
- [x] documentation @warshallrho 2019/03/17
* DeConv1dLayer:
- [x] refactored @warshallrho 2019/03/16
- [x] tested @warshallrho 2019/03/16
- [x] documentation @warshallrho 2019/03/17
* DeConv2dLayer:
- [x] refactored @zsdonghao 2018/12/06
- [x] tested @warshallrho 2019/03/15
- [x] documentation @warshallrho 2019/03/17
* DeConv3dLayer:
- [x] refactored @zsdonghao 2018/12/06
- [x] tested @warshallrho 2019/03/15
- [x] documentation @warshallrho 2019/03/17
* DeConv2d:
- [x] refactored @zsdonghao 2019/01/16
- [x] tested @warshallrho 2019/03/15
- [x] documentation @warshallrho 2019/03/17
* DeConv3d:
- [x] refactored @zsdonghao 2019/01/16
- [x] tested @warshallrho 2019/03/15
- [x] documentation @warshallrho 2019/03/17
* DeformableConv2d:
- [x] refactored @warshallrho 2019/03/18
- [x] tested @warshallrho 2019/03/18
- [x] documentation @warshallrho 2019/03/18
* DepthwiseConv2d:
- [x] refactored @zsdonghao 2018/12/05
- [x] tested @warshallrho 2019/03/15
- [x] documentation @warshallrho 2019/03/18
* DorefaConv2d:
- [x] refactored @zsdonghao 2018/12/06
- [x] tested @warshallrho 2019/03/17
- [x] documentation @warshallrho 2019/03/20
* GroupConv2d:
- [x] refactored @zsdonghao 2018/12/06
- [x] tested @warshallrho 2019/03/17
- [x] documentation @warshallrho 2019/03/20
* QuanConv2d:
- [x] refactored @zsdonghao 2018/12/06
- [x] tested @warshallrho 2019/03/17
- [x] documentation @warshallrho 2019/03/20
* QuanConv2dWithBN:
- [ ] refactored
- [ ] tested
- [ ] documentation
* SeparableConv1d:
- [x] refactored @zsdonghao 2019/01/16
- [x] tested @warshallrho 2019/03/17
- [x] documentation @warshallrho 2019/03/18
* SeparableConv2d:
- [x] refactored @zsdonghao 2019/01/16
- [x] tested @warshallrho 2019/03/17
- [x] documentation @warshallrho 2019/03/18
* SubpixelConv1d:
- [x] refactored @zsdonghao 2018/12/05 @warshallrho 2019/03/18
- [x] tested @warshallrho 2019/03/18
- [x] documentation @warshallrho 2019/03/18
* SubpixelConv2d:
- [x] refactored @zsdonghao 2018/12/05 @warshallrho 2019/03/18
- [x] tested @warshallrho 2019/03/18
- [x] documentation @warshallrho 2019/03/18
* TernaryConv2d:
- [x] refactored @zsdonghao 2018/12/06
- [x] tested @warshallrho 2019/03/17
- [x] documentation @warshallrho 2019/03/20
- **dense/** [WIP] @ChrisWu1997
* BinaryDense:
- [x] refactored @zsdonghao 2018/12/06
- [x] tested @ChrisWu1997 2019/04/23 _need further test by example_
- [x] documentation @ChrisWu1997 2019/04/23
* Dense:
- [x] refactored @zsdonghao 2018/12/04 @JingqingZ 2019/01/28
- [x] tested @JingqingZ 2019/01/31 2019/03/06 2019/03/15
- [x] documentation @JingqingZ 2019/03/15
* DorefaDense:
- [x] refactored @zsdonghao 2018/12/04
- [x] tested @ChrisWu1997 2019/04/23 _need further test by example_
- [x] documentation @ChrisWu1997 2019/04/23
* DropconnectDense:
- [x] refactored @zsdonghao 2018/12/05
- [x] tested @ChrisWu1997 2019/04/23 _need further test by example_
- [x] documentation @ChrisWu1997 2019/04/23
* QuanDense:
- [x] refactored @zsdonghao 2018/12/06
- [x] tested @ChrisWu1997 2019/04/23 _need further test by example_
- [x] documentation @ChrisWu1997 2019/04/23
* QuanDenseWithBN:
- [ ] refactored
- [ ] tested
- [ ] documentation
* TernaryDense:
- [x] refactored @zsdonghao 2018/12/06
- [x] tested @ChrisWu1997 2019/04/23 _need further test by example_
- [x] documentation @ChrisWu1997 2019/04/23
- **dropout.py**
* Dropout:
- [x] refactored @zsdonghao 2018/12/04 @JingqingZ 2019/01/28
- [x] tested @JingqingZ 2019/01/31 2019/03/06 2019/03/15
- [x] documentation @JingqingZ 2019/03/15
- **extend.py**
* ExpandDims:
- [x] refactored @zsdonghao 2018/12/04 @JingqingZ 2019/03/22
- [x] tested @JingqingZ 2019/03/22
- [x] documentation @JingqingZ 2019/03/22
* Tile:
- [x] refactored @zsdonghao 2018/12/04 @JingqingZ 2019/03/22
- [x] tested @JingqingZ 2019/03/22
- [x] documentation @JingqingZ 2019/03/22
- **image_resampling.py**
* UpSampling2d:
- [x] refactored @zsdonghao 2018/12/04 @ChrisWu1997 2019/04/03
- [x] tested @ChrisWu1997 2019/04/03
- [x] documentation @ChrisWu1997 2019/04/03
* DownSampling2d:
- [x] refactored @zsdonghao 2018/12/04 @ChrisWu1997 2019/04/03
- [x] tested @ChrisWu1997 2019/04/03
- [x] documentation @ChrisWu1997 2019/04/03
- **importer.py**
* SlimNets:
- [ ] refactored
- [ ] tested
- [ ] documentation
* Keras:
- [ ] refactored
- [ ] tested
- [ ] documentation
- **inputs.py**
* Input:
- [x] refactored @zsdonghao 2018/12/04 @JingqingZ 2019/01/28
- [x] tested @JingqingZ 2019/03/06
- [x] documentation @JingqingZ 2019/03/06
- **embedding.py**
* OneHotInput: --> OneHot (🀄️remember to change CN docs)
- [x] refactored @zsdonghao 2018/12/04 @JingqingZ 2019/02/23
- [x] tested @JingqingZ 2019/03/19
- [x] documentation @JingqingZ 2019/03/19
* Word2vecEmbeddingInput: --> Word2vecEmbedding (🀄️remember to change CN docs)
- [x] refactored @zsdonghao 2018/12/04 @JingqingZ 2019/02/21
- [x] tested @JingqingZ 2019/03/19
- [x] documentation @JingqingZ 2019/03/19
* EmbeddingInput: --> Embedding
- [x] refactored @zsdonghao 2018/12/04 @JingqingZ 2019/02/22
- [x] tested @JingqingZ 2019/03/19
- [x] documentation @JingqingZ 2019/03/19
* AverageEmbeddingInput: --> AverageEmbedding (🀄️remember to change CN docs)
- [x] refactored @zsdonghao 2018/12/04 @JingqingZ 2019/02/20
- [x] tested @JingqingZ 2019/03/19
- [x] documentation @JingqingZ 2019/03/19
- **lambda_layers.py**
* ElementwiseLambda:
- [x] refactored @JingqingZ 2019/03/24
- [x] tested @JingqingZ 2019/03/24
- [x] documentation @JingqingZ 2019/03/24
* Lambda:
- [x] refactored @JingqingZ 2019/03/24
- [x] tested @JingqingZ 2019/03/24
- [x] documentation @JingqingZ 2019/03/24
- **merge.py**
* Concat:
- [x] refactored @zsdonghao 2018/12/04
- [x] tested @JingqingZ 2019/03/15
- [x] documentation @JingqingZ 2019/03/15
* Elementwise:
- [x] refactored @zsdonghao 2018/12/04 @JingqingZ 2019/03/15
- [x] tested @JingqingZ 2019/03/15
- [x] documentation @JingqingZ 2019/03/15
- **noise.py**
* GaussianNoise:
- [x] refactored @zsdonghao 2018/12/04
- [x] tested @warshallrho 2019/03/20
- [x] documentation @warshallrho 2019/03/20
- **normalization.py**
* BatchNorm:
- [x] refactored @ChrisWu1997 2019/01/22 @ChrisWu1997 2019/03/05
- [x] tested @ChrisWu1997 2019/03/22
- [x] documentation @ChrisWu1997 2019/03/22
* BatchNorm1d:
- [x] refactored @ChrisWu1997 2019/03/05
- [x] tested @ChrisWu1997 2019/03/22
- [x] documentation @ChrisWu1997 2019/03/22
* BatchNorm2d:
- [x] refactored @ChrisWu1997 2019/03/05
- [x] tested @ChrisWu1997 2019/03/22
- [x] documentation @ChrisWu1997 2019/03/22
* BatchNorm3d:
- [x] refactored @ChrisWu1997 2019/03/05
- [x] tested @ChrisWu1997 2019/03/22
- [x] documentation @ChrisWu1997 2019/03/22
* GroupNorm:
- [x] refactored @zsdonghao 2018/12/05
- [ ] tested
- [ ] documentation
* InstanceNorm:
- [x] refactored @zsdonghao 2018/12/05
- [ ] tested
- [ ] documentation
* LayerNorm:
- [x] refactored @ChrisWu1997 2019/01/23
- [ ] tested
- [ ] documentation
* LocalResponseNorm:
- [x] refactored @zsdonghao 2018/12/05
- [ ] tested
- [ ] documentation
* SwitchNorm:
- [x] refactored @zsdonghao 2018/12/05
- [ ] tested
- [ ] documentation
- **padding.py**
* PadLayer:
- [x] refactored @zsdonghao 2018/12/04
- [x] tested @warshallrho 2019/03/21
- [x] documentation @warshallrho 2019/03/21
* ZeroPad1d:
- [x] refactored @zsdonghao 2018/12/04
- [x] tested @warshallrho 2019/03/21
- [x] documentation @warshallrho 2019/03/21
* ZeroPad2d:
- [x] refactored @zsdonghao 2018/12/04
- [x] tested @warshallrho 2019/03/21
- [x] documentation @warshallrho 2019/03/21
* ZeroPad3d:
- [x] refactored @zsdonghao 2018/12/04
- [x] tested @warshallrho 2019/03/21
- [x] documentation @warshallrho 2019/03/21
- **pooling/**
* MaxPool1d:
- [x] refactored @zsdonghao 2019/01/08
- [x] tested @warshallrho 2019/03/15
- [x] documentation @warshallrho 2019/03/19
* MaxPool2d:
- [x] refactored @zsdonghao 2018/12/06
- [x] tested @warshallrho 2019/03/15
- [x] documentation @warshallrho 2019/03/19
* MaxPool3d:
- [x] refactored @zsdonghao 2019/01/08
- [x] tested @warshallrho 2019/03/15
- [x] documentation @warshallrho 2019/03/19
* MeanPool1d:
- [x] refactored @zsdonghao 2019/01/08
- [x] tested @warshallrho 2019/03/15
- [x] documentation @warshallrho 2019/03/19
* MeanPool2d:
- [x] refactored @zsdonghao 2019/01/08
- [x] tested @warshallrho 2019/03/15
- [x] documentation @warshallrho 2019/03/19
* MeanPool3d:
- [x] refactored @zsdonghao 2019/01/08
- [x] tested @warshallrho 2019/03/15
- [x] documentation @warshallrho 2019/03/19
* GlobalMaxPool1d:
- [x] refactored @zsdonghao 2018/12/06
- [x] tested @warshallrho 2019/03/15
- [x] documentation @warshallrho 2019/03/15
* GlobalMaxPool2d:
- [x] refactored @zsdonghao 2018/12/06
- [x] tested @warshallrho 2019/03/15
- [x] documentation @warshallrho 2019/03/15
* GlobalMaxPool3d:
- [x] refactored @zsdonghao 2018/12/06
- [x] tested @warshallrho 2019/03/15
- [x] documentation @warshallrho 2019/03/15
* GlobalMeanPool1d:
- [x] refactored @zsdonghao 2018/12/06
- [x] tested @warshallrho 2019/03/15
- [x] documentation @warshallrho 2019/03/15
* GlobalMeanPool2d:
- [x] refactored @zsdonghao 2018/12/06
- [x] tested @warshallrho 2019/03/15
- [x] documentation @warshallrho 2019/03/15
* GlobalMeanPool3d:
- [x] refactored @zsdonghao 2018/12/06
- [x] tested @warshallrho 2019/03/15
- [x] documentation @warshallrho 2019/03/15
* PoolLayer:
- [x] refactored @zsdonghao 2018/12/04
- [x] tested @warshallrho 2019/03/15
- [x] documentation @warshallrho 2019/03/18
- **quantize_layers.py**
* Sign:
- [x] refactored
- [ ] tested
- [ ] documentation
- **recurrent/**
* BiRNN:
- [x] refactored @JingqingZ 2019/04/08
- [x] tested @JingqingZ 2019/04/08
- [x] documentation @JingqingZ 2019/04/08
* ConvLSTM:
- [ ] refactored
- [ ] tested
- [ ] documentation
* RNN:
- [x] refactored @JingqingZ 2019/03/31
- [x] tested @JingqingZ 2019/03/31
- [x] documentation @JingqingZ 2019/03/31
* Seq2Seq:
- [ ] refactored
- [ ] tested
- [ ] documentation
- **shape.py**
* Flatten:
- [x] refactored @zsdonghao 2018/12/04 @JingqingZ 2019/03/22
- [x] tested @JingqingZ 2019/03/22
- [x] documentation @JingqingZ 2019/03/22
* Reshape:
- [x] refactored @zsdonghao 2018/12/04 @JingqingZ 2019/03/22
- [x] tested @JingqingZ 2019/03/22
- [x] documentation @JingqingZ 2019/03/22
* Transpose:
- [x] refactored @zsdonghao 2018/12/04 @JingqingZ 2019/03/22
- [x] tested @JingqingZ 2019/03/22
- [x] documentation @JingqingZ 2019/03/22
- **scale.py**
* Scale:
- [x] refactored @zsdonghao 2018/12/04 @JingqingZ 2019/03/22
- [x] tested @JingqingZ 2019/03/22
- [x] documentation @JingqingZ 2019/03/22
- **contrib**
* ROIPooling:
- [ ] refactored
- [ ] tested
- [ ] documentation
- **spatial_transformer.py**
* SpatialTransformer2dAffine: see **test_layers_spatial_transformer.py**
- [ ] refactored
- [ ] tested
- [ ] documentation
- **stack.py** [WIP] @ChrisWu1997
* Stack:
- [x] refactored @zsdonghao 2018/12/04
- [x] tested @ChrisWu1997 2019/04/23
- [x] documentation @ChrisWu1997 2019/04/23
* UnStack:
- [x] refactored @zsdonghao 2018/12/04
- [x] tested @ChrisWu1997 2019/04/23
- [x] documentation @ChrisWu1997 2019/04/23
- **time_distribution.py** **Remove, as eager mode supports this feature** (🀄️remember to change CN docs)
* TimeDistributed:
## tl.models
- **core.py**
* Model:
- [x] refactored @JingqingZ 2019/01/28 @ChrisWu1997 2019/02/16 2019/02/22
- [x] tested @ChrisWu1997 2019/03/21
- [x] documentation @ChrisWu1997 2019/03/21
- **vgg.py**
* vgg:
- [x] refactored @warshallrho 2019/02/19
- [ ] tested
- [x] documentation @warshallrho 2019/03/21 @ChrisWu1997 2019/03/21
* vgg16:
- [x] refactored @warshallrho 2019/02/19
- [ ] tested
- [x] documentation @warshallrho 2019/03/21 @ChrisWu1997 2019/03/21
* vgg19:
- [x] refactored @warshallrho 2019/03/09
- [ ] tested
- [x] documentation @warshallrho 2019/03/21 @ChrisWu1997 2019/03/21
- **mobilenetv1.py**
* MobileNet:
- [x] refactored @ChrisWu1997 2019/04/23
- [x] tested @ChrisWu1997 2019/04/23
- [x] documentation @ChrisWu1997 2019/04/23
* SqueezeNet:
- [x] refactored @ChrisWu1997 2019/04/23
- [x] tested @ChrisWu1997 2019/04/23
- [x] documentation @ChrisWu1997 2019/04/23
## Examples
- basic_tutorials
  Too many basic tutorials; some of this code can be removed.
- [x] Static model example MNIST @JingqingZ 2019/01/28 2019/03/24
- [x] Dynamic model example MNIST @JingqingZ 2019/01/28 2019/03/24
- [x] Static model example CIFAR10 (with dataset API) @ChrisWu1997 2019/03/24
- [x] Siamese example MNIST @ChrisWu1997 2019/03/26
- tutorial_mnist_float16.py removed by @ChrisWu1997
- tutorial_mnist_simple.py removed by @ChrisWu1997
- data_process
- tutorial_fast_affine_transform.py
- [x] refactored @ChrisWu1997 2019/04/11
- [x] tested @ChrisWu1997 2019/04/11
- tutorial_image_preprocess.py removed by @zsdonghao
- tutorial_tf_dataset_voc.py
- [x] refactored @ChrisWu1997 2019/04/11
- [x] tested @ChrisWu1997 2019/04/11
- tutorial_tfrecord.py
- [x] refactored @ChrisWu1997 2019/04/11
- [x] tested @ChrisWu1997 2019/04/11
- tutorial_tfrecord2.py
- [x] refactored @ChrisWu1997 2019/04/11
- [x] tested @ChrisWu1997 2019/04/11
- tutorial_tfrecord3.py
- [ ] refactored
- [ ] tested
- database
- [ ] refactored
- [ ] tested
- distributed_training
- tutorial_cifar10_distributed_trainer.py
- [ ] refactored
- [ ] tested
- tutorial_mnist_distributed_trainer.py
- [ ] refactored
- [ ] tested
- keras_tfslim
- tutorial_keras.py
- [x] refactored @ChrisWu1997 2019/04/11
- [x] tested @ChrisWu1997 2019/04/11
- tutorial_tfslim.py removed by @ChrisWu1997
- pretrained_cnn
- tutorial_inceptionV3_tfslim.py
- tutorial_mobilenet.py removed by @ChrisWu1997 2019/04/23
- tutorial_models_mobilenetv1.py
- [x] refactored @ChrisWu1997 2019/04/23
- [x] tested @ChrisWu1997 2019/04/23
- tutorial_models_squeezenetv1.py
- [x] refactored @ChrisWu1997 2019/04/23
- [x] tested @ChrisWu1997 2019/04/23
- tutorial_models_vgg.py
- [x] refactored @warshallrho 2019/04/30
- [ ] tested
- tutorial_models_vgg_static.py
- [x] refactored @warshallrho 2019/04/30
- [ ] tested
- tutorial_models_vgg16.py
- [x] refactored @warshallrho 2019/02/19
- [ ] tested
- tutorial_models_vgg19.py
- [x] refactored @warshallrho 2019/03/09
- [ ] tested
- tutorial_squeezenet.py removed by @ChrisWu1997 2019/04/23
- tutorial_vgg16.py removed by @warshallrho 2019/04/30
- tutorial_vgg19.py removed by @warshallrho 2019/04/30
- quantized_net
- tutorial_binarynet_cifar10_tfrecord.py
- [x] refactored
- [x] tested
- tutorial_binarynet_mnist_cnn.py
- [x] refactored
- [x] tested
- tutorial_dorefanet_cifar10_tfrecord.py
- [x] refactored
- [x] tested
- tutorial_dorefanet_mnist_cnn.py
- [x] refactored
- [x] tested
- tutorial_quanconv_cifar10.py
- [x] refactored
- [x] tested
- tutorial_quanconv_mnist.py
- [x] refactored
- [x] tested
- tutorial_ternaryweight_cifar10_tfrecord.py
- [x] refactored
- [x] tested
- tutorial_ternaryweight_mnist_cnn.py
- [x] refactored
- [x] tested
- reinforcement_learning
- tutorial_atari_pong.py @zsdonghao 2019/01/21
- [x] refactored
- [x] tested
- tutorial_bipedalwalker_a3c_continuous_action.py
- [ ] refactored
- [ ] tested
- tutorial_cartpole_ac.py @zsdonghao 2019/02/17
- [x] refactored
- [x] tested
- tutorial_frozenlake_dqn.py @zsdonghao 2019/02/16
- [x] refactored
- [x] tested
- tutorial_frozenlake_q_table.py @zsdonghao 2019/02/16
- [x] refactored
- [x] tested
- text_classification
- tutorial_imdb_fasttext.py @JingqingZ 2019/03/14
- [x] refactored
- [x] tested
- text_generation
- tutorial_generate_text.py
- [ ] refactored
- [ ] tested
- text_ptb
Are they duplicated?
- tutorial_ptb_lstm_state_is_tuple.py
- [ ] refactored
- [ ] tested
- tutorial_ptb_lstm.py
- [ ] refactored
- [ ] tested
- text_word_embedding
- tutorial_word2vec_basic.py @JingqingZ 2019/02/21 2019/03/19
- [x] refactored
- [x] tested
## Others
- tl.activation.py
- [x] refactored @JingqingZ 2019/03/06
- [x] tested @JingqingZ 2019/03/06
- [x] documentation @JingqingZ 2019/03/06
- tl.cli
- [x] refactored _no update needed_ @ChrisWu1997 2019/04/12
- tl.decorators
- [x] refactored _no update needed_ @ChrisWu1997 2019/04/12
- tl.logging
- [x] refactored _no update needed_ @ChrisWu1997 2019/04/12
- tl.optimizers
- [ ] refactored
- tl.third_party
- [ ] refactored
- tl.array_ops
- [x] refactored _no update needed_ @ChrisWu1997 2019/04/12
- tl.cost
- [x] refactored @ChrisWu1997 2019/04/12
- [x] documentation @ChrisWu1997 2019/04/12
- tl.db [WIP] @ChrisWu1997
- [ ] refactored
- tl.distributed
- [ ] refactored
- tl.initializers
- [x] refactored @ChrisWu1997 2019/04/12
- [x] tested @ChrisWu1997 2019/04/12
- [x] documentation @ChrisWu1997 2019/04/12
- tl.iterate
- [x] refactored _no update needed_ @ChrisWu1997 2019/04/12
- tl.lazy_imports
- [x] refactored _no update needed_ @ChrisWu1997 2019/04/12
- tl.nlp @OliverZijia @JingqingZ
- [x] refactored
- tl.package_info
- [ ] refactored
- tl.prepro
- [x] refactored @ChrisWu1997 2019/04/11
- tl.rein
- [ ] refactored
- tl.utils
- [x] refactored @ChrisWu1997 2019/04/17
- [x] tested _by `tutorial_mnist_simple.py`_ @ChrisWu1997 2019/04/17
- [x] documentation @ChrisWu1997 2019/04/17
- tl.visualize
- [x] refactored _no update needed_ @ChrisWu1997 2019/04/12
## Unittests Status:
- performance_test
- VGG @JingqingZ @ChrisWu1997 @warshallrho 2019/03/20
- layers
- test_layernode.py @ChrisWu1997 2019/03/22
- test_layers_activation.py @JingqingZ 2019/03/20
- test_layers_convolution.py (1d, 2d, 3d) @warshallrho 2019/03/20
- test_layers_core_basedense_dropout.py @JingqingZ 2019/03/06
- test_layers_convolution_deformable.py @warshallrho 2019/03/18
- test_layers_embedding.py @JingqingZ 2019/03/19
- test_layers_extend.py @JingqingZ 2019/03/22
- test_layers_lambda.py @JingqingZ 2019/03/24
- test_layers_merge.py @JingqingZ 2019/03/15
- test_layers_noise.py @warshallrho 2019/03/21
- test_layers_padding.py @warshallrho 2019/03/21
- test_layers_pooling.py @warshallrho 2019/03/18
- test_layers_recurrent.py @JingqingZ 2019/03/06
- test_layers_scale.py @JingqingZ 2019/03/22
- test_layers_shape.py @JingqingZ 2019/03/22
- test_activations.py @JingqingZ 2019/03/06
- models
- test_model_save_graph.py @warshallrho 2019/04/30
## Unittests Status (Pending):
Some of this test code can be removed.
- test_array_ops.py
- test_decorators.py
- test_documentation.py
- test_layers_basic.py
- test_layers_flow_control.py **removed** in favour of eager mode @zsdonghao 2018/12/04 (🀄️remember to change CN docs)
- test_layers_importer.py
- test_layers_normalization.py
- test_layers_padding.py
- test_layers_spatial_transformer.py
- test_layers_stack.py
- test_layers_super_resolution.py
- test_layers_time_distributed.py
- test_logging.py
- test_logging_hyperdash.py
- test_mnist_simple.py
- test_model_compilednetwork.py
- test_models.py
- test_network_custom_2d.py
- test_network_custom_input_layers.py
- test_network_custom_multiple_inputs.py
- test_network_custom_multiple_outputs.py
- test_network_sequential_1d.py
- test_network_sequential_2d.py
- test_network_sequential_3d.py
- test_network_sequential_rnn.py
- test_optimizer_amsgrad.py
- test_pydocstyle.py
- test_reuse_mlp.py
- test_tf_layers.py
- test_timeout.py
- test_utils_predict.py
- test_yapf_format.py
## tl.files
All save/load methods are also wrapped as class methods in the model core.
- save_hdf5_graph
- [x] created @warshallrho 2019/04/27
- [x] tested @warshallrho 2019/04/27
- [x] documentation @warshallrho 2019/04/27
- load_hdf5_graph
- [x] created @warshallrho 2019/04/27
- [x] tested @warshallrho 2019/04/27
- [x] documentation @warshallrho 2019/04/27
- save_weights_to_hdf5
- [x] created
- [x] tested @ChrisWu1997 2019/03/26
- [x] documentation @ChrisWu1997 2019/03/26
- load_hdf5_to_weights_in_order
- [x] created
- [x] tested @ChrisWu1997 2019/03/26
- [x] documentation @ChrisWu1997 2019/03/26
- load_hdf5_to_weights
- [x] created
- [x] tested @ChrisWu1997 2019/03/26
- [x] documentation @ChrisWu1997 2019/03/26
- save_npz([save_list, name, sess]) @ChrisWu1997 2019/02/21 --> save_npz([save_list, name]) @ChrisWu1997 2019/03/21
- [x] refactored
- [x] tested @ChrisWu1997 2019/03/26
- [x] documentation @ChrisWu1997 2019/03/26
- load_npz([path, name]) @ChrisWu1997 2019/02/21
- [x] refactored
- [x] tested @ChrisWu1997 2019/03/26
- [x] documentation @ChrisWu1997 2019/03/26
- assign_params(sess, params, network) --> assign_weights (🀄️remember to change CN docs) @ChrisWu1997 2019/02/22
- [x] refactored
- [ ] tested
- load_and_assign_npz([sess, name, network]) @ChrisWu1997 2019/02/21 --> load_and_assign_npz([name, network]) @ChrisWu1997 2019/03/21
- [x] refactored
- [x] tested @ChrisWu1997 2019/03/26
- [x] documentation @ChrisWu1997 2019/03/26
- save_npz_dict([save_list, name, sess]) @ChrisWu1997 2019/02/22 --> save_npz_dict([save_list, name]) @ChrisWu1997 2019/03/21
- [x] refactored
- [x] tested @ChrisWu1997 2019/03/26
- [x] documentation @ChrisWu1997 2019/03/26
- load_and_assign_npz_dict([name, sess]) --> ([name, network]) @ChrisWu1997 2019/03/21
- [x] refactored
- [x] tested @ChrisWu1997 2019/03/26
- [x] documentation @ChrisWu1997 2019/03/26
- save_ckpt([sess, mode_name, save_dir, …]) @ChrisWu1997 2019/02/22
- [x] refactored
- [ ] tested
- load_ckpt([sess, mode_name, save_dir, …]) @ChrisWu1997 2019/02/22
- [x] refactored
- [ ] tested | closed | 2018-12-04T07:57:15Z | 2019-05-13T15:29:32Z | https://github.com/tensorlayer/TensorLayer/issues/900 | [
"help_wanted",
"discussion",
"refactoring"
] | zsdonghao | 5 |
zappa/Zappa | flask | 727 | [Migrated] Zappa Deploy FileExistsError | Originally from: https://github.com/Miserlou/Zappa/issues/1839 by [enotuniq](https://github.com/enotuniq)
I am very new to Zappa and AWS. I successfully installed zappa and managed to go through zappa init. However, when I try to deploy it with zappa deploy, I keep getting this error below.
I cleared the temp directory and tried again and again but nothing changed.
**Error**
```
Traceback (most recent call last):
File "c:\users\xx\appdata\local\programs\python\python37-32\Lib\distutils\dir_util.py", line 70, in mkpath
os.mkdir(head, mode)
FileExistsError: [WinError 183] File exists
'C:\\Users\\xx\\AppData\\Local\\Temp\\zappa-project_jcpoxaq\\hjson'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\users\xx\desktop\botdeneme\botenv\lib\site-packages\zappa\cli.py", line 2779, in handle
sys.exit(cli.handle())
File "c:\users\xx\desktop\botdeneme\botenv\lib\site-packages\zappa\cli.py", line 509, in handle
self.dispatch_command(self.command, stage)
File "c:\users\xx\desktop\botdeneme\botenv\lib\site-packages\zappa\cli.py", line 546, in dispatch_command
self.deploy(self.vargs['zip'])
File "c:\users\xx\desktop\botdeneme\botenv\lib\site-packages\zappa\cli.py", line 718, in deploy
self.create_package()
File "c:\users\xx\desktop\botdeneme\botenv\lib\site-packages\zappa\cli.py", line 2267, in create_package
disable_progress=self.disable_progress
File "c:\users\xx\desktop\botdeneme\botenv\lib\site-packages\zappa\core.py", line 629, in create_lambda_zip
copy_tree(temp_package_path, temp_project_path, update=True)
File "c:\users\xx\appdata\local\programs\python\python37-32\Lib\distutils\dir_util.py", line 159, in copy_tree
verbose=verbose, dry_run=dry_run))
File "c:\users\xx\appdata\local\programs\python\python37-32\Lib\distutils\dir_util.py", line 135, in copy_tree
mkpath(dst, verbose=verbose)
File "c:\users\xx\appdata\local\programs\python\python37-32\Lib\distutils\dir_util.py", line 74, in mkpath
"could not create '%s': %s" % (head, exc.args[-1]))
distutils.errors.DistutilsFileError: could not create 'C:\Users\xx\AppData\Local\Temp\zappa-project_jcpoxaq\hjson' File exists
``` | closed | 2021-02-20T12:41:16Z | 2024-04-13T18:14:40Z | https://github.com/zappa/Zappa/issues/727 | [
"no-activity",
"auto-closed"
] | jneves | 2 |
Lightning-AI/pytorch-lightning | pytorch | 19,940 | Custom batch selection for logging | ### Description & Motivation
We need to be able to select the same batch in every logging cycle. For generation pipelines similar to Stable Diffusion, it is very hard to gauge performance over the course of training if we keep choosing random batches.
### Pitch
The user should be able to select the batch to log, and it should stay constant across all logging cycles.
### Alternatives
It's possible to load the data again in `train_batch_end()` or `validation_batch_end()` and call the logging there.
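In the meantime, a minimal workaround sketch along those lines — a callback that caches one batch at fit start and reuses it at every validation epoch. The hooks are standard Lightning callback API; the datamodule access and the generation/logging calls are assumptions to adapt to your pipeline:

```python
import pytorch_lightning as pl


class FixedBatchLogger(pl.Callback):
    """Pin one batch once, so every logging cycle sees the same inputs."""

    def on_fit_start(self, trainer, pl_module):
        # Assumption: a datamodule is attached; adapt if dataloaders are passed directly.
        self.fixed_batch = next(iter(trainer.datamodule.val_dataloader()))

    def on_validation_epoch_end(self, trainer, pl_module):
        x, _ = self.fixed_batch  # assumption: batches are (inputs, targets) tuples
        x = x.to(pl_module.device)
        # samples = pl_module(x)          # placeholder for the generation step
        # trainer.logger.log_image(...)   # placeholder for the logger call
```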
### Additional context
_No response_
cc @borda | open | 2024-06-04T10:29:40Z | 2024-06-08T11:03:18Z | https://github.com/Lightning-AI/pytorch-lightning/issues/19940 | [
"feature",
"needs triage"
] | bhosalems | 3 |
twopirllc/pandas-ta | pandas | 38 | VWAP indicator is calculating wrong vwap values for past days in dataframe | @twopirllc Thanks for creating this wonderful python module. I'm extensively using this module for my algos.
I found an issue with the VWAP indicator when I ran it against my backtesting data. By definition, VWAP should be calculated on daily (intraday) data.
Since we pass series data that contains past dates as well, the cumulative sum is calculated incorrectly in that case. Each day's opening volume and HLC price will certainly differ.
Thus, the VWAP calculation should start fresh with each day's data, e.g. the cumsum().
Note: the calculation is absolutely correct when the series contains only a single day of data.
Maybe we could try grouping the series data by date and performing the calculation per group. It's just my thought, but I would be happy to hear from you, as this matters for cross-checking strategies against backtesting data. | closed | 2020-04-25T13:25:37Z | 2021-02-07T14:46:33Z | https://github.com/twopirllc/pandas-ta/issues/38 | [
"bug",
"enhancement"
] | codesutras | 11 |
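A minimal sketch of the per-day reset suggested in the issue above, assuming a DatetimeIndex and standard high/low/close/volume columns — this is illustrative only, not pandas-ta's actual implementation:

```python
import pandas as pd


def daily_vwap(df: pd.DataFrame) -> pd.Series:
    # Typical price weighted by volume, with cumulative sums that reset
    # at every new calendar date so each session starts fresh.
    typical = (df["high"] + df["low"] + df["close"]) / 3
    pv = typical * df["volume"]
    day = df.index.date
    return pv.groupby(day).cumsum() / df["volume"].groupby(day).cumsum()
```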
Evil0ctal/Douyin_TikTok_Download_API | api | 338 | Reversing the TikTok API | I have a question: which endpoint must I reverse to find the endpoint used in your code?
This endpoint is not working now, and you are not responding to issues; if I knew how to find a new endpoint, I would make a pull request.
How can I find the same endpoint, but one that works? | closed | 2024-03-23T13:25:24Z | 2024-03-25T22:29:30Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/338 | [
"enhancement"
] | sheldygg | 10 |
gevent/gevent | asyncio | 2,026 | Why does gevent affect the asyncio usage of child thread? | * gevent version: 20.10.2
* Python version: cPython 3.9
* Operating System: macOS 14.3.1(M3)
### Description:
I use gevent patch for my program, but in the program, I need to execute asyncio related code in sub threads
When multiple sub threads execute the same coroutine, it triggers "RuntimeError: This event loop is already running"
They are different threads, I generated its own event loop for each sub thread,I cannot understand this issue
When I commented out the monkey patch, the program executed as I expected
```python-traceback
ERROR:root:This event loop is already running
Traceback (most recent call last):
File "/Users/computer1/pytest/test.py", line 30, in func1
loop.run_until_complete(asyncf1())
File "/opt/homebrew/Cellar/python@3.9/3.9.18_2/Frameworks/Python.framework/Versions/3.9/lib/python3.9/asyncio/base_events.py", line 623, in run_until_complete
self._check_running()
File "/opt/homebrew/Cellar/python@3.9/3.9.18_2/Frameworks/Python.framework/Versions/3.9/lib/python3.9/asyncio/base_events.py", line 583, in _check_running
raise RuntimeError('This event loop is already running')
RuntimeError: This event loop is already running
/Users/computer1/pytest/test.py:32: RuntimeWarning: coroutine 'asyncf1' was never awaited
logging.error(e, exc_info=True)
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
ERROR:root:This event loop is already running
Traceback (most recent call last):
File "/Users/computer1/pytest/test.py", line 30, in func1
loop.run_until_complete(asyncf1())
File "/opt/homebrew/Cellar/python@3.9/3.9.18_2/Frameworks/Python.framework/Versions/3.9/lib/python3.9/asyncio/base_events.py", line 623, in run_until_complete
self._check_running()
File "/opt/homebrew/Cellar/python@3.9/3.9.18_2/Frameworks/Python.framework/Versions/3.9/lib/python3.9/asyncio/base_events.py", line 583, in _check_running
raise RuntimeError('This event loop is already running')
RuntimeError: This event loop is already running
```
### What I've run:
```python
import logging
import gevent.monkey
gevent.monkey.patch_all()
import concurrent.futures
import threading
import time
import asyncio
pool = concurrent.futures.ThreadPoolExecutor()
async def asyncf1():
print("aa")
def func1():
# print(f"thread:{threading.get_ident()},gevent:{id(gevent.getcurrent())}")
try:
try:
loop = asyncio.get_event_loop()
except RuntimeError:
loop = asyncio.new_event_loop()
# asyncio.set_event_loop(loop)
loop.run_until_complete(asyncf1())
except Exception as e:
logging.error(e, exc_info=True)
print(threading.current_thread())
time.sleep(3)
for i in range(3):
pool.submit(func1)
time.sleep(10)
```
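A small probe to test my hypothesis, reusing the `pool` from the snippet above (assumption: with the monkey patch, the pool's "threads" are greenlets inside one OS thread, so `asyncio.get_event_loop()` hands every worker the same thread-local loop):

```python
import asyncio
import threading


def probe():
    loop = asyncio.get_event_loop()
    # If the patch is active, this prints different thread idents
    # (greenlet ids) but the same loop id for every worker, which
    # would explain the "already running" RuntimeError.
    print(threading.get_ident(), id(loop))


for _ in range(3):
    pool.submit(probe)
```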
| open | 2024-04-03T09:26:16Z | 2024-06-10T12:03:55Z | https://github.com/gevent/gevent/issues/2026 | [] | ssppest | 1 |
jwkvam/celluloid | matplotlib | 12 | Edges of plot disappear after first loop | I am using celluloid to plot a function over 17 years and I love it so far; it works great!
I have one small problem though: the edges of my plot disappear after the first loop. I have attached images of how this looks.
First loop:

Second loop:

I am using cartopy and matplotlib in a Jupyter notebook, and this is my code for the animation:
```python
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
from IPython.display import HTML
from celluloid import Camera

# nphi, nthe, mag, start, stop and out come from my data preparation (not shown)
fig = plt.figure(figsize=(9, 5))
cmap = matplotlib.cm.RdBu_r
norm = matplotlib.colors.Normalize(vmin=0, vmax=50)
ax = plt.axes(projection=ccrs.PlateCarree(), extent=[-180, 180, -90, 90])
ax.set_xticks([-180, -120, -60, 0, 60, 120, 180], crs=ccrs.PlateCarree())
ax.set_yticks([-90, -60, -30, 0, 30, 60, 90], crs=ccrs.PlateCarree())
plt.xlabel('Longitude [deg]')
plt.ylabel('Latitude [deg]')
camera = Camera(fig)
for i in range(0, (stop - start) + 1):
    ax.coastlines()
    plt.scatter(nphi[i], nthe[i], c=mag[i], s=40, norm=norm, cmap=cmap, edgecolor="k")
    ax.text(0, 1.05, 'Global Observatory Plot of SV magnitude from target year '
            + str(start + i) + ' in the dB_' + out + '-direction', fontsize=9, transform=ax.transAxes)
    camera.snap()
cbar = plt.colorbar()
cbar.set_label('Magnitude of SV [nT/yr$^2$]')
animation = camera.animate(interval=800)
animation.save('Figures/GlobalSVMag.mp4')
HTML(animation.to_html5_video())
```
Is there a way to make the edge appear all the way through the animation?
| open | 2020-03-06T17:59:49Z | 2020-12-04T20:07:05Z | https://github.com/jwkvam/celluloid/issues/12 | [] | rasmusmlbl | 1 |
inducer/pudb | pytest | 421 | PuDB does not update for terminal size changes [Urwid issue] | When I resize the terminal (gnome-terminal), the view stays the same size although the window becomes bigger. After I move the cursor it becomes full size. It is weird, as this only happens with pudb, not with other terminal programs (I use Ubuntu and the i3 WM).

_Originally posted by @makrobios in https://github.com/inducer/pudb/issues/410#issuecomment-758295335_ | open | 2021-01-12T00:32:53Z | 2022-07-19T14:38:34Z | https://github.com/inducer/pudb/issues/421 | [] | pyrrhull | 6 |
mljar/mercury | data-visualization | 216 | add arrow when mouse enters app card | closed | 2023-02-20T17:15:20Z | 2023-02-20T17:17:08Z | https://github.com/mljar/mercury/issues/216 | [] | pplonski | 0 |
|
autogluon/autogluon | scikit-learn | 4,414 | TabularPredictor. Shuffle=False?? | Hi everyone,
First of all, thank you for the well-documented library.
I have a question regarding the use of TabularPredictor for creating a stacking ensemble model. I’m unsure how AutoGluon handles hyperparameter tuning when both tuning_data and train_data are provided. Specifically, does AutoGluon perform hyperparameter tuning using K-Fold cross-validation? If so, is there a way to configure it to set shuffle=False? I’d appreciate any clarification on this point. | open | 2024-08-20T19:03:24Z | 2024-08-23T09:07:20Z | https://github.com/autogluon/autogluon/issues/4414 | [] | olitei | 2 |
roboflow/supervision | computer-vision | 1,395 | Results differ when using cv2 vs pillow | ### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar bug report.
### Bug
From [this comment](https://github.com/roboflow/supervision/issues/1038#issuecomment-2018147877), I understand supervision doesn't change channel order, and the issue I highlight here is likely best addressed by documentation. I observe that if I open an image with cv2 versus Pillow, the predictions are different. The model was trained using Ultralytics, which I believe also uses cv2, so when I use Pillow the channel order is changed. I suggest adding a note to the docs to check which library was used in training, then use that library with supervision. Comparisons below:
cv2:

pillow:

### Environment
_No response_
### Minimal Reproducible Example
```python
import cv2
import numpy as np
import supervision as sv
from PIL import Image
from ultralytics import YOLO

model = YOLO("my_model.pt")  # placeholder: load your trained Ultralytics model

image_path = "my.png"
# change between the two loaders to reproduce the difference
image = cv2.imread(image_path)              # BGR channel order
# image = np.array(Image.open(image_path))  # RGB channel order

def callback(image_slice: np.ndarray) -> sv.Detections:
    result = model(image_slice)[0]
    return sv.Detections.from_ultralytics(result)

slicer = sv.InferenceSlicer(callback=callback)
detections = slicer(image)
detections = detections[detections.class_id == 1]

box_annotator = sv.BoxAnnotator()
label_annotator = sv.LabelAnnotator()
annotated_image = box_annotator.annotate(scene=image, detections=detections)
annotated_image = label_annotator.annotate(scene=annotated_image, detections=detections)
```
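For anyone hitting the same mismatch, a minimal conversion sketch (assuming the model was trained on BGR frames, i.e. the OpenCV convention):

```python
import cv2
import numpy as np
from PIL import Image

# Convert a Pillow-loaded RGB array to the BGR order cv2.imread() produces,
# so predictions match regardless of which loader was used.
rgb = np.array(Image.open("my.png").convert("RGB"))
bgr = cv2.cvtColor(rgb, cv2.COLOR_RGB2BGR)
```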
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | closed | 2024-07-22T23:13:29Z | 2024-08-06T07:24:25Z | https://github.com/roboflow/supervision/issues/1395 | [
"bug"
] | robmarkcole | 9 |
WZMIAOMIAO/deep-learning-for-image-processing | pytorch | 15 | VGG model structure | Hi, I am using the VGG model. I find that the dense layers have 4096 units in references online, but your code uses 2048. | closed | 2020-04-01T10:46:46Z | 2020-04-01T13:06:13Z | https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/issues/15 | [] | hobbitlzy | 2 |
darrenburns/posting | rest-api | 133 | Bearer Token Auth type support | I want to propose adding a Bearer Token auth type to the `Auth` menu, like Postman does:
<img width="1082" alt="Screenshot 2567-11-13 at 10 54 05" src="https://github.com/user-attachments/assets/3bf21341-8420-4c2c-9a27-1c7059484c54">
Currently, I work around this by manually setting the `Authorization` header in the 'Headers' menu. Having this support would shorten the typing flow a little. | closed | 2024-11-13T03:57:20Z | 2024-11-16T17:28:22Z | https://github.com/darrenburns/posting/issues/133 | [] | wingyplus | 0 |
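For reference, the workaround above amounts to sending the standard RFC 6750 header — a tiny illustrative sketch, with `token` standing in for whatever the API issued:

```python
token = "eyJhbGciOi..."  # placeholder value, not a real credential
headers = {"Authorization": f"Bearer {token}"}
```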
nschloe/tikzplotlib | matplotlib | 298 | Table row sep argument not included in tikzpicture | The `row sep` is not included in the tikz picture.
How I save my figure:
```python
matplotlib2tikz.save(out, figure=fig, textsize=8, extra_axis_parameters=extra_axis_param, float_format="{:.5f}", table_row_sep=r"\\")
```
Results in:
```latex
\addplot [semithick, color0]
table{%
4.00000 0.00000\\5.00000 0.00000\\6.00000 0.00000\\7.00000 0.00000\\8.00000 0.00000\\9.00000 0.00000\\10.00000 0.00000\\11.00000 0.00000\\12.00000 0.00000\\13.00000 0.00000\\14.00000 0.00000\\15.00000 0.00000\\16.00000 0.00000\\17.00000 0.00000\\18.00000 0.00000\\19.00000 0.00000\\20.00000 0.00000\\21.00000 0.00000\\22.00000 0
```
But it should be rendered/output as:
```latex
\addplot [semithick, color0]
table[row sep=\\] {%
4.00000 0.00000\\5.00000 0.00000\\6.00000 0.00000\\7.00000 0.00000\\8.00000 0.00000\\9.00000 0.00000\\10.00000 0.00000\\11.00000 0.00000\\12.00000 0.00000\\13.00000 0.00000\\14.00000 0.00000\\15.00000 0.00000\\16.00000 0.00000\\17.00000 0.00000\\18.00000 0.00000\\19.00000 0.00000\\20.00000 0.00000\\21.00000 0.00000\\22.00000 0
```
| closed | 2019-05-09T12:36:13Z | 2019-10-13T12:00:31Z | https://github.com/nschloe/tikzplotlib/issues/298 | [] | GillesC | 1 |
Zeyi-Lin/HivisionIDPhotos | machine-learning | 140 | Does the API service still need the upload endpoint? | {StatusCode: 404, ReasonPhrase: 'Not Found', Version: 1.1, Content: System.Net.Http.HttpConnectionResponseContent, Headers:
{
Date: Tue, 17 Sep 2024 07:20:55 GMT
Server: uvicorn
Content-Length: 22
Content-Type: application/json
}}
Calling /idphoto directly returns a 404 error.
modoboa/modoboa | django | 3,038 | imap_migration generates traceback | # Impacted versions
* OS Type: Ubuntu
* OS Version: 22.04 LTS
* Database Type: MySQL
* Database version: 8
* Modoboa: 2.1.2
* installer used: Yes
* Webserver: Nginx
# Steps to reproduce
- have offlineimap installed and configured for a domain
- I double checked: all migrations are applied successfully
- go to the shell and run the following command:
- `python manage.py generate_offlineimap_config`
# Current behavior
I get the following error message:
```
Traceback (most recent call last):
File "/srv/modoboa/env/lib/python3.10/site-packages/django/template/base.py", line 829, in _resolve_lookup
current = current[bit]
TypeError: 'Migration' object is not subscriptable
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/srv/modoboa/env/lib/python3.10/site-packages/cryptography/fernet.py", line 133, in _verify_signature
h.verify(data[-32:])
File "/srv/modoboa/env/lib/python3.10/site-packages/cryptography/hazmat/primitives/hmac.py", line 72, in verify
ctx.verify(signature)
File "/srv/modoboa/env/lib/python3.10/site-packages/cryptography/hazmat/backends/openssl/hmac.py", line 85, in verify
raise InvalidSignature("Signature did not match digest.")
cryptography.exceptions.InvalidSignature: Signature did not match digest.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/srv/modoboa/instance/manage.py", line 22, in <module>
main()
File "/srv/modoboa/instance/manage.py", line 18, in main
execute_from_command_line(sys.argv)
File "/srv/modoboa/env/lib/python3.10/site-packages/django/core/management/__init__.py", line 419, in execute_from_command_line
utility.execute()
File "/srv/modoboa/env/lib/python3.10/site-packages/django/core/management/__init__.py", line 413, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/srv/modoboa/env/lib/python3.10/site-packages/django/core/management/base.py", line 354, in run_from_argv
self.execute(*args, **cmd_options)
File "/srv/modoboa/env/lib/python3.10/site-packages/django/core/management/base.py", line 398, in execute
output = self.handle(*args, **options)
File "/srv/modoboa/env/lib/python3.10/site-packages/modoboa/imap_migration/management/commands/generate_offlineimap_config.py", line 41, in handle
content = render_to_string(
File "/srv/modoboa/env/lib/python3.10/site-packages/django/template/loader.py", line 62, in render_to_string
return template.render(context, request)
File "/srv/modoboa/env/lib/python3.10/site-packages/django/template/backends/django.py", line 61, in render
return self.template.render(context)
File "/srv/modoboa/env/lib/python3.10/site-packages/django/template/base.py", line 170, in render
return self._render(context)
File "/srv/modoboa/env/lib/python3.10/site-packages/django/template/base.py", line 162, in _render
return self.nodelist.render(context)
File "/srv/modoboa/env/lib/python3.10/site-packages/django/template/base.py", line 938, in render
bit = node.render_annotated(context)
File "/srv/modoboa/env/lib/python3.10/site-packages/django/template/base.py", line 905, in render_annotated
return self.render(context)
File "/srv/modoboa/env/lib/python3.10/site-packages/django/template/defaulttags.py", line 214, in render
nodelist.append(node.render_annotated(context))
File "/srv/modoboa/env/lib/python3.10/site-packages/django/template/base.py", line 905, in render_annotated
return self.render(context)
File "/srv/modoboa/env/lib/python3.10/site-packages/django/template/base.py", line 988, in render
output = self.filter_expression.resolve(context)
File "/srv/modoboa/env/lib/python3.10/site-packages/django/template/base.py", line 671, in resolve
obj = self.var.resolve(context)
File "/srv/modoboa/env/lib/python3.10/site-packages/django/template/base.py", line 796, in resolve
value = self._resolve_lookup(context)
File "/srv/modoboa/env/lib/python3.10/site-packages/django/template/base.py", line 837, in _resolve_lookup
current = getattr(current, bit)
File "/srv/modoboa/env/lib/python3.10/site-packages/modoboa/imap_migration/models.py", line 50, in password
return decrypt(self._password)
File "/srv/modoboa/env/lib/python3.10/site-packages/modoboa/lib/cryptutils.py", line 42, in decrypt
return smart_text(_get_fernet().decrypt(smart_bytes(encrypted_value)))
File "/srv/modoboa/env/lib/python3.10/site-packages/cryptography/fernet.py", line 90, in decrypt
return self._decrypt_data(data, timestamp, time_info)
File "/srv/modoboa/env/lib/python3.10/site-packages/cryptography/fernet.py", line 151, in _decrypt_data
self._verify_signature(data)
File "/srv/modoboa/env/lib/python3.10/site-packages/cryptography/fernet.py", line 135, in _verify_signature
raise InvalidToken
cryptography.fernet.InvalidToken
```
# Expected behavior
Getting the input files for offlineimap.
| closed | 2023-08-03T19:45:19Z | 2023-08-29T14:53:10Z | https://github.com/modoboa/modoboa/issues/3038 | [] | dorsax | 1 |
allure-framework/allure-python | pytest | 783 | allure and pytest.mark should not be mixed up |
#### I'm submitting a ...
- [x] bug report
- [ ] feature request
- [ ] support request => Please do not submit support request here, see note at the top of this template.
#### What is the current behavior?
Labels appear in the report that should not be there (see the screenshot below).
#### If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem
```python
import allure
import pytest  # @pytest.mark.repeat needs the pytest-repeat plugin


@allure.link("https://www.baidu.com/", name="baidu")
@allure.issue("https://www.baidu.com/", "BUG")
@allure.testcase("YHZ-123")
@pytest.mark.repeat(1)
@pytest.mark.parametrize("coupons_type", [1, 2])
def test_xxx(coupons_type):  # parametrize requires the test to accept the parameter
    pass
```

#### What is the expected behavior?
Expected result: allure and pytest.mark should not be mixed up.
#### What is the motivation / use case for changing the behavior?
#### Please tell us about your environment:
Python = 3.9.0
allure = 2.17.2
allure-pytest = 2.13.2
#### Other information
| open | 2023-12-20T12:48:35Z | 2023-12-20T12:48:35Z | https://github.com/allure-framework/allure-python/issues/783 | [] | yanghuizhi | 0 |
onnx/onnx | machine-learning | 6,589 | TypeError: unsupported operand type(s) for //: 'NoneType' and 'int' | # Bug Report
### Describe the bug
I am trying to convert Nvidia NeMo's FilterbankFeaturesTA class to ONNX. Here is my code -
```
from nemo.collections.asr.parts.preprocessing.features import (
FilterbankFeatures,
FilterbankFeaturesTA,
make_seq_mask_like,
)
_model = FilterbankFeaturesTA(
sample_rate= 16000,
# window_size = 0.02,
# window_stride = 0.01,
n_window_size = None,
n_window_stride = None,
window = "hann",
normalize = "per_feature",
n_fft = None,
preemph = 0.97,
# features = 64,
lowfreq = 0,
highfreq = None,
log = True,
log_zero_guard_type = "add",
log_zero_guard_value = 2 ** -24,
dither = 1e-5,
pad_to = 16,
frame_splicing = 1,
exact_pad = False,
pad_value = 0,
mag_power = 2.0,
rng = None,
nb_augmentation_prob = 0.0,
nb_max_freq = 4000,
# use_torchaudio = False,
mel_norm = "slaney",
stft_exact_pad = False,
stft_conv = False,
)
_model.eval()
example_input_1 = torch.randn(1, 18432) # Input for x1
example_input_2 = torch.randn(18432) # Input for x2
# _model(example_input_1, example_input_2)
example_out = _model.forward(example_input_1, example_input_2,)
# example_out
onnx_file_path = "preprocessor.onnx"
args = (example_input_1, example_input_2)
# kwargs = {"seq_len": example_input_2}
onnx_model, _ = torch.onnx.dynamo_export(
_model, # Model to export
*args,
# **kwargs,
export_options=torch.onnx.ExportOptions(
dynamic_shapes=True,
),
)
# Save the ONNX model to file
onnx_model.save(onnx_file_path)
```
Running this code gives me the following error -
```
{
"name": "TypeError",
"message": "unsupported operand type(s) for //: 'NoneType' and 'int'",
"stack": "---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[66], line 9
1 # trying to export features.py FilterbankFeatures to onnx for web inference
2 # from nemo.collections.asr.parts.preprocessing import FilterbankFeatures
3 from nemo.collections.asr.parts.preprocessing.features import (
4 FilterbankFeatures,
5 FilterbankFeaturesTA,
6 make_seq_mask_like,
7 )
----> 9 _model = FilterbankFeaturesTA(
10 sample_rate= 16000,
11 # window_size = 0.02,
12 # window_stride = 0.01,
13 n_window_size = None,
14 n_window_stride = None,
15 window = \"hann\",
16 normalize = \"per_feature\",
17 n_fft = None,
18 preemph = 0.97,
19 # features = 64,
20 lowfreq = 0,
21 highfreq = None,
22 log = True,
23 log_zero_guard_type = \"add\",
24 log_zero_guard_value = 2 ** -24,
25 dither = 1e-5,
26 pad_to = 16,
27 frame_splicing = 1,
28 exact_pad = False,
29 pad_value = 0,
30 mag_power = 2.0,
31 rng = None,
32 nb_augmentation_prob = 0.0,
33 nb_max_freq = 4000,
34 # use_torchaudio = False,
35 mel_norm = \"slaney\",
36 stft_exact_pad = False,
37 stft_conv = False,
38 )
40 _model.eval()
42 example_input_1 = torch.randn(1, 18432) # Input for x1
File ~/Documents/aakhor/asr/NeMo/nemo/collections/asr/parts/preprocessing/features.py:555, in __init__(self, sample_rate, n_window_size, n_window_stride, normalize, nfilt, n_fft, preemph, lowfreq, highfreq, log, log_zero_guard_type, log_zero_guard_value, dither, window, pad_to, pad_value, mel_norm, use_grads, max_duration, frame_splicing, exact_pad, nb_augmentation_prob, nb_max_freq, mag_power, rng, stft_exact_pad, stft_conv)
553 self.dither = dither
554 self.pad_to = pad_to
--> 555 self.pad_value = pad_value
556 self.n_fft = n_fft
557 self._mel_spec_extractor: torchaudio.transforms.MelSpectrogram = torchaudio.transforms.MelSpectrogram(
558 sample_rate=self._sample_rate,
559 win_length=self.win_length,
(...)
568 wkwargs={\"periodic\": False},
569 )
File ~/miniconda3/envs/nemo/lib/python3.11/site-packages/torchaudio/transforms/_transforms.py:587, in MelSpectrogram.__init__(self, sample_rate, n_fft, win_length, hop_length, f_min, f_max, pad, n_mels, window_fn, power, normalized, wkwargs, center, pad_mode, onesided, norm, mel_scale)
585 self.n_fft = n_fft
586 self.win_length = win_length if win_length is not None else n_fft
--> 587 self.hop_length = hop_length if hop_length is not None else self.win_length // 2
588 self.pad = pad
589 self.power = power
TypeError: unsupported operand type(s) for //: 'NoneType' and 'int'"
}
```
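Reading the traceback, `win_length` reaches torchaudio's `MelSpectrogram` as `None` because `n_window_size`/`n_window_stride` are `None`. A workaround sketch that sidesteps the crash by passing explicit sizes in samples — 25 ms windows with a 10 ms hop at 16 kHz are assumptions, so use your model's actual config (other arguments as in the snippet above):

```python
_model = FilterbankFeaturesTA(
    sample_rate=16000,
    n_window_size=400,    # assumed: 0.025 s * 16000 samples
    n_window_stride=160,  # assumed: 0.010 s * 16000 samples
)
```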
### System information
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.11.10 (main, Oct 3 2024, 07:29:13) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-49-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.5.119
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060
Nvidia driver version: 550.120
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: 12th Gen Intel(R) Core(TM) i5-12400F
CPU family: 6
Model: 151
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
Stepping: 5
CPU max MHz: 4400.0000
CPU min MHz: 800.0000
BogoMIPS: 4992.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 288 KiB (6 instances)
L1i cache: 192 KiB (6 instances)
L2 cache: 7.5 MiB (6 instances)
L3 cache: 18 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.24.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.20.1
[pip3] onnxscript==0.1.0.dev20241218
[pip3] open_clip_torch==2.29.0
[pip3] pytorch-lightning==2.4.0
[pip3] pytorch-triton==3.2.0+git0d4682f0
[pip3] torch==2.5.1
[pip3] torchaudio==2.5.1
[pip3] torchdiffeq==0.2.5
[pip3] torchmetrics==1.6.0
[pip3] torchsde==0.2.6
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] numpy 1.24.4 py311h64a7726_0 conda-forge
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] open-clip-torch 2.29.0 pypi_0 pypi
[conda] pytorch-lightning 2.4.0 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git0d4682f0 pypi_0 pypi
[conda] torch 2.5.1 pypi_0 pypi
[conda] torchaudio 2.5.1 pypi_0 pypi
[conda] torchdiffeq 0.2.5 pypi_0 pypi
[conda] torchmetrics 1.6.0 pypi_0 pypi
[conda] torchsde 0.2.6 pypi_0 pypi
[conda] torchvision 0.20.1 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
### Reproduction instructions
1. Clone the NeMo github repo.
2. Run the code from above.
### Expected behavior
The model should export to onnx.
| closed | 2024-12-19T16:24:34Z | 2025-01-14T20:51:24Z | https://github.com/onnx/onnx/issues/6589 | [
"bug",
"topic: converters"
] | kabyanil | 1 |
LibreTranslate/LibreTranslate | api | 699 | How to achieve concurrency? | Multiple threads have been opened, and when concurrent, the result is empty. | closed | 2024-10-21T01:26:04Z | 2024-10-21T01:44:02Z | https://github.com/LibreTranslate/LibreTranslate/issues/699 | [
"possible bug"
] | junceo | 1 |
arogozhnikov/einops | tensorflow | 320 | [BUG] Basic code from documentation does not work | I tried both code snippets below from the [official docs](https://einops.rocks/api/repeat/) and they do not seem to work.
```
# change it to RGB format by repeating in each channel
>>> repeat(image, 'h w -> h w c', c=3).shape
(30, 40, 3)
# repeat image 2 times along height (vertical axis)
>>> repeat(image, 'h w -> (repeat h) w', repeat=2).shape
(60, 40)
```
I get the following errors
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<__array_function__ internals>", line 198, in repeat
TypeError: repeat() got an unexpected keyword argument 'c'
```
and
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<__array_function__ internals>", line 198, in repeat
TypeError: repeat() got an unexpected keyword argument 'repeat'
```
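A hedged observation: the `__array_function__ internals` frames above suggest `numpy.repeat` was invoked rather than einops' — for example if the import was missing or shadowed. With the explicit import, the documented calls should produce the advertised shapes:

```python
import numpy as np
from einops import repeat  # not numpy's repeat

image = np.zeros((30, 40))
print(repeat(image, 'h w -> h w c', c=3).shape)              # (30, 40, 3)
print(repeat(image, 'h w -> (repeat h) w', repeat=2).shape)  # (60, 40)
```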
Since it is the official documentation code, I thought it is better to ask here and get a quick answer instead of asking stackoverflow (none of the chatbot fixes worked). Below is the version of einops I have:
```
pip show einops
Name: einops
Version: 0.8.0
Summary: A new flavour of deep learning operations
Home-page:
Author: Alex Rogozhnikov
Author-email:
License: MIT
Location: /Users/myname/.pyenv/versions/3.8.16/lib/python3.8/site-packages
Requires:
Required-by:
```
Please let me know how to fix this; I'm in a bit of a hurry. | closed | 2024-05-11T09:35:25Z | 2024-09-15T14:39:12Z | https://github.com/arogozhnikov/einops/issues/320 | [] | vignesh99 | 1 |
| closed | 2024-05-11T09:35:25Z | 2024-09-15T14:39:12Z | https://github.com/arogozhnikov/einops/issues/320 | [] | vignesh99 | 1 |
onnx/onnx | machine-learning | 6,708 | Error when testing latest ONNX commit on ORT | # Ask a Question
### Question
It seems there have been changes to `onnx::OpSchema` after 1.17 that cause an ORT build failure.
Is this expected?
```c++
...
/onnxruntime/onnxruntime/core/graph/contrib_ops/contrib_defs.cc: In function ‘void onnxruntime::contrib::RegisterContribSchemas()’:
/onnxruntime/onnxruntime/core/graph/contrib_ops/contrib_defs.cc:2904:46: error: conversion from ‘onnx::OpSchema’ to non-scalar type ‘onnx::OpSchemaRegistry::OpSchemaRegisterOnce’ requested 2904 | .SetContextDependentFunctionBodyBuilder(
...
```
Btw, here's the [onnx.patch](https://github.com/microsoft/onnxruntime/blob/yifanl/oss/cmake/patches/onnx/onnx.patch) that is synced to the latest onnx commit; deps.txt is pinned to the latest as well.
| open | 2025-02-14T21:47:52Z | 2025-02-15T15:50:19Z | https://github.com/onnx/onnx/issues/6708 | [
"question"
] | yf711 | 1 |
dpgaspar/Flask-AppBuilder | flask | 1,451 | [Minor issue] [App Icon rendering] Using the macro navbar_block | Hello Team,
Thank you for this amazing framework. Just a note about a minor issue that can be solved quickly (sorry for not using the usual issue template).
The code inside the macro navbar_block is missing some HTML attributes compared to the navbar.html file.
This is related to the app_icon rendering.
**Line 17 : `<img src="{{appbuilder.app_icon}}" height="100%" width="auto">`**
from source ->
https://github.com/dpgaspar/Flask-AppBuilder/blob/98b1be8b3390cd592dc20f215062e55d27e08eec/flask_appbuilder/templates/appbuilder/navbar.html
and
**Line 93 : `<img src="{{appbuilder.app_icon}}" >`**
from source -> https://github.com/dpgaspar/Flask-AppBuilder/blob/1e900bba85452de6d988f7da191f9a26fec62226/flask_appbuilder/templates/appbuilder/baselib.html
As a result, the app_icon is rendered in different ways depending on whether we use the macro or extend directly from baselayout.html.
Thank you all again for your great work.
| open | 2020-08-17T15:03:14Z | 2020-08-26T07:17:55Z | https://github.com/dpgaspar/Flask-AppBuilder/issues/1451 | [
"bug",
"mvc"
] | pasqwal | 2 |
tflearn/tflearn | tensorflow | 806 | Why is tflearn.data_utils.shuffle() not used in all CIFAR-10 Examples? | In the **covnet_cifar10.py** and **network_in_network.py** examples, the CIFAR-10 data is shuffled after it's loaded, using the `tflearn.data_utils.shuffle()` function:
```python
from tflearn.datasets import cifar10
from tflearn.data_utils import shuffle

(X, Y), (X_test, Y_test) = cifar10.load_data()
X, Y = shuffle(X, Y)
```
However, in the **residual_network_cifar10.py** and **resnext_cifar10.py** examples this step is not taken after the data is loaded.
Is there a reason why this shuffle step is not included in these examples?
Is it just that the data is not required to be shuffled for these models to work? Or, is the shuffling of the data taking place during the `.fit()` training where the shuffle parameter is set to true `shuffle=True`?
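If the latter is the case, the explicit pre-shuffle may simply be redundant. A sketch of what that would look like, continuing the snippet above — `model` stands for a `tflearn.DNN` wrapping whichever network the example defines, and `shuffle` is a documented argument of `DNN.fit()`:

```python
model.fit(X, Y, n_epoch=10, shuffle=True,
          validation_set=(X_test, Y_test), show_metric=True)
```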
| open | 2017-06-22T16:39:56Z | 2017-06-26T03:21:19Z | https://github.com/tflearn/tflearn/issues/806 | [] | rhammell | 1 |
matplotlib/matplotlib | data-visualization | 29,425 | [ENH]: Set list in marker parameter when plotting using matplotlib.pyplot.plot | ### Problem
Hi all,
It would be great to have the option to set the marker as a list, so that each point gets a different value. That way, instead of having to do this:
```
import matplotlib.pyplot as plt
x = [0, 1, 2, 3, 4]
y = [20, 13, 25, 36, 74]
markers = ['o', 's', '^', 'D', 'x']
for i in range(len(x)):
plt.plot(x[i], y[i], marker=markers[i])
plt.show()
```
We could directly do:
```
import matplotlib.pyplot as plt
x = [0, 1, 2, 3, 4]
y = [20, 13, 25, 36, 74]
markers = ['o', 's', '^', 'D', 'x']
plt.plot(x, y, marker=markers)
plt.show()
```
Which results in: `ValueError: Unrecognized marker style ['o', 's', '^', 'D', 'x']`.
The reason I need this is that I could then store the plot result in a variable as a single Line2D and use it easily elsewhere, for example for hover annotations.
The same should work for other parameters (color, linewidth, etc.) in principle.
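In the meantime, a workaround sketch — it still creates one scatter artist per distinct marker rather than the single Line2D I'm after, but it avoids the per-point loop:

```python
import matplotlib.pyplot as plt

x = [0, 1, 2, 3, 4]
y = [20, 13, 25, 36, 74]
markers = ['o', 's', '^', 'D', 'x']

(line,) = plt.plot(x, y)  # one Line2D for the connecting line
for m in set(markers):
    idx = [i for i, mk in enumerate(markers) if mk == m]
    plt.scatter([x[i] for i in idx], [y[i] for i in idx], marker=m)
plt.show()
```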
Thanks,
Alba
### Proposed solution
_No response_ | closed | 2025-01-07T14:25:13Z | 2025-01-08T10:50:16Z | https://github.com/matplotlib/matplotlib/issues/29425 | [
"Community support"
] | albavilanova | 2 |
holoviz/panel | plotly | 7,152 | file_dropper extension not loaded, but no file_dropper extension after adding | ```python
2024-08-15 15:43:37,814 pn.extension was initialized but 'file_dropper' extension was not loaded. In order for the required resources to be initialized ensure the extension is loaded with the following argument(s):
pn.extension('file_dropper')
```
The correct name is `filedropper`, but I think this message is auto-formatted somewhere.
ultrafunkamsterdam/undetected-chromedriver | automation | 1,757 | help with this issue | `ImportError: cannot import name 'Chrome' from partially initialized module 'seleniumwire.undetected_chromedriver.webdriver'`
`from .webdriver import Chrome, ChromeOptions # noqa: F401` | closed | 2024-02-22T17:47:24Z | 2024-02-23T10:09:16Z | https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/1757 | [] | ConnorMAD | 1 |
aleju/imgaug | machine-learning | 735 | Augment image as if somebody took a photo of the same image | We have a situation where we want to distinguish "real" photos from photos taken of other photos.
Wonder if there's a way to simulate taking a photo of a photo. Perhaps even taking photos from monitor screens.
Screen glare/spectral effects/monitor pixel effects/matte effect... etc. All of this could be useful as image augmentations.
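A rough starting point sketched with stock augmenters (assumptions: perspective skew stands in for the camera angle, blur/noise/JPEG artifacts for the re-capture, and brightness/contrast jitter loosely mimics glare — imgaug has no dedicated glare or moiré augmenter that I know of):

```python
import imgaug.augmenters as iaa

photo_of_photo = iaa.Sequential([
    iaa.PerspectiveTransform(scale=(0.02, 0.08)),
    iaa.GaussianBlur(sigma=(0.0, 1.5)),
    iaa.AdditiveGaussianNoise(scale=(0, 0.03 * 255)),
    iaa.JpegCompression(compression=(50, 90)),
    iaa.LinearContrast((0.7, 1.1)),
    iaa.MultiplyBrightness((0.8, 1.3)),
])
# images_aug = photo_of_photo(images=images)  # images: uint8 array batch
```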
Any ideas? Has anybody worked on anything like this? Perhaps as an academic paper on this. | open | 2020-12-02T23:58:44Z | 2021-11-04T09:47:56Z | https://github.com/aleju/imgaug/issues/735 | [] | CMCDragonkai | 1 |
Gurobi/gurobi-logtools | plotly | 24 | Incorrect status reported for incomplete logs | In latest master branch, if a MIP log is incomplete (i.e. cut off with no termination message for whatever reason), we might report optimal status incorrectly. For example:
```
Variable types: 23522 continuous, 2343 integer (0 binary)
Root barrier log...
Barrier solved model in 50 iterations and 72.71 seconds (53.24 work units)
Optimal objective -1.76339641e+08
Solved with barrier
Root relaxation: objective -1.763396e+08, 104343 iterations, 108.23 seconds (79.42 work units)
Nodes | Current Node | Objective Bounds | Work
Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time
```
Here we get 'OPTIMAL' status from `ContinuousParser`, but no termination message from `NodeLogParser`.
grblogtoolsv1 would give an 'incomplete log' warning in this situation (and report unknown status? I'm not sure).
We should check for this with some custom logic for Status and Runtime, something like the following (a rough sketch in code follows the list):
- If the model is continuous, we can get Runtime and Status from ContinuousParser
- If the model is (a) a MIP or (b) a continuous model solved as a MIP, we should ignore Runtime and Status from ContinuousParser
- (a) We can check using model type in `SingleLogParser`
- (b) Look for the message `Solving as a MIP` in header or presolve
- If TerminationParser reports runtime or status, it should take precedence (this already happens) | open | 2022-04-01T04:41:25Z | 2022-04-04T08:05:36Z | https://github.com/Gurobi/gurobi-logtools/issues/24 | [] | simonbowly | 2 |
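A rough sketch of the precedence logic from the list above, in plain Python — the helper names and dict shapes are illustrative, not the actual parser API:

```python
def resolve_status(is_mip, solved_as_mip, continuous, nodelog, termination):
    if termination.get("Status") is not None:
        return termination["Status"]       # TerminationParser always wins
    if is_mip or solved_as_mip:
        return nodelog.get("Status")       # None here means: incomplete log
    return continuous.get("Status")        # safe only for truly continuous runs
```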
dunossauro/fastapi-do-zero | sqlalchemy | 109 | Question about implementing a test | closed | 2024-03-11T17:52:34Z | 2024-03-11T18:11:35Z | https://github.com/dunossauro/fastapi-do-zero/issues/109 | [] | azmovi | 0 |
|
scikit-image/scikit-image | computer-vision | 7,295 | `intensity_limits` would be a better name for `dtype_limits` | At a first glance I thought, `dtype_limits(...)` would give me the largest and lowest representable number of the dtype of a given image (like `np.finfo()` only for integers and floats). Though, the function actually returns our intensity conventions for a given dtype. So I propose to refactor the function:
```python
@deprecate_function(...)
def dtype_limits(image, clip_negative=False):
...
# to
def intensity_limits(dtype, *, clip_negative=False):
...
```
I'm guessing that this function isn't used too much in our user base and the new name should make the functions purpose a lot clearer. I don't think this refactor needs to be a high priority but I'd like to eventually get to it. At the very latest for skimage2. | open | 2024-01-14T15:12:42Z | 2024-07-16T02:28:27Z | https://github.com/scikit-image/scikit-image/issues/7295 | [
":wrench: type: Maintenance",
":arrow_down_small: Deprecation",
":scroll: type: API",
":sleeping: Dormant",
":ice_cube: Backburner"
] | lagru | 2 |
ets-labs/python-dependency-injector | asyncio | 495 | Is it possible to pass the same factory dependency to all dependants? | For example, I have one use case with three dependencies - Session, ProductRepository, and UserRepository; the repositories depend on the session. Could I pass a single SQLAlchemy session to all of them? When I create a second use case, the session should be different. | open | 2021-08-25T05:43:49Z | 2024-02-23T21:35:36Z | https://github.com/ets-labs/python-dependency-injector/issues/495 | [] | AlexanderFarkas | 12 |
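A hedged sketch for the question above — Session, UseCase and the repositories are stand-ins for the user's classes. Building the object graph inside one factory function guarantees both repositories share the session created for that use case, while every `use_case()` call gets a brand-new session:

```python
from dependency_injector import containers, providers

class Session: ...  # stand-in for a SQLAlchemy session
class ProductRepository:
    def __init__(self, session): self.session = session
class UserRepository:
    def __init__(self, session): self.session = session
class UseCase:
    def __init__(self, products, users): self.products, self.users = products, users

def build_use_case():
    session = Session()  # one session per use-case instance
    return UseCase(ProductRepository(session), UserRepository(session))

class Container(containers.DeclarativeContainer):
    use_case = providers.Factory(build_use_case)

c = Container()
a, b = c.use_case(), c.use_case()
assert a.products.session is a.users.session          # shared within one graph
assert a.products.session is not b.products.session   # fresh per use case
```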
nonebot/nonebot2 | fastapi | 2,914 | Plugin: Avalon | ### PyPI project name
nonebot-plugin-avalon
### Plugin import package name
nonebot_plugin_avalon
### Tags
[{"label":"game","color":"#ea5252"}]
### Plugin configuration options
_No response_ | closed | 2024-08-21T00:06:40Z | 2024-09-01T03:04:45Z | https://github.com/nonebot/nonebot2/issues/2914 | [
"Plugin"
] | SamuNatsu | 3 |
serengil/deepface | machine-learning | 1,181 | New answer | What do you recommend if the photo quality is not very good, but the image is produced by combining frames from a video, and I leave face selection to the neural network (letting it decide where there is a face and where there is not)? | closed | 2024-04-11T11:52:57Z | 2024-04-11T12:38:13Z | https://github.com/serengil/deepface/issues/1181 | [
"question"
] | Naster17 | 3 |
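If the goal is to let the network decide where the faces are in a low-quality, frame-stitched image, something along these lines may help (a sketch; the file name is a placeholder):
```python
from deepface import DeepFace

faces = DeepFace.extract_faces(
    img_path="combined_frames.png",   # placeholder for your stitched image
    detector_backend="retinaface",    # detector chosen for small/blurry faces
    enforce_detection=False,          # don't raise when no confident face is found
)
```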
huggingface/datasets | nlp | 6,611 | `load_from_disk` with large dataset from S3 runs into `botocore.exceptions.ClientError` | ### Describe the bug
When loading a large dataset (>1000GB) from S3 I run into the following error:
```
Traceback (most recent call last):
File "/home/alp/.local/lib/python3.10/site-packages/s3fs/core.py", line 113, in _error_wrapper
return await func(*args, **kwargs)
File "/home/alp/.local/lib/python3.10/site-packages/aiobotocore/client.py", line 383, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (RequestTimeTooSkewed) when calling the GetObject operation: The difference between the request time and the current time is too large.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/alp/phoneme-classification.monorepo/aws_sagemaker/data_processing/inspect_final_dataset.py", line 13, in <module>
dataset = load_from_disk("s3://speech-recognition-processed-data/whisper/de/train_data/", storage_options=storage_options)
File "/home/alp/.local/lib/python3.10/site-packages/datasets/load.py", line 1902, in load_from_disk
return Dataset.load_from_disk(dataset_path, keep_in_memory=keep_in_memory, storage_options=storage_options)
File "/home/alp/.local/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 1686, in load_from_disk
fs.download(src_dataset_path, dest_dataset_path.as_posix(), recursive=True)
File "/home/alp/.local/lib/python3.10/site-packages/fsspec/spec.py", line 1480, in download
return self.get(rpath, lpath, recursive=recursive, **kwargs)
File "/home/alp/.local/lib/python3.10/site-packages/fsspec/asyn.py", line 121, in wrapper
return sync(self.loop, func, *args, **kwargs)
File "/home/alp/.local/lib/python3.10/site-packages/fsspec/asyn.py", line 106, in sync
raise return_result
File "/home/alp/.local/lib/python3.10/site-packages/fsspec/asyn.py", line 61, in _runner
result[0] = await coro
File "/home/alp/.local/lib/python3.10/site-packages/fsspec/asyn.py", line 604, in _get
return await _run_coros_in_chunks(
File "/home/alp/.local/lib/python3.10/site-packages/fsspec/asyn.py", line 257, in _run_coros_in_chunks
await asyncio.gather(*chunk, return_exceptions=return_exceptions),
File "/usr/lib/python3.10/asyncio/tasks.py", line 408, in wait_for
return await fut
File "/home/alp/.local/lib/python3.10/site-packages/s3fs/core.py", line 1193, in _get_file
body, content_length = await _open_file(range=0)
File "/home/alp/.local/lib/python3.10/site-packages/s3fs/core.py", line 1184, in _open_file
resp = await self._call_s3(
File "/home/alp/.local/lib/python3.10/site-packages/s3fs/core.py", line 348, in _call_s3
return await _error_wrapper(
File "/home/alp/.local/lib/python3.10/site-packages/s3fs/core.py", line 140, in _error_wrapper
raise err
PermissionError: The difference between the request time and the current time is too large.
```
The usual cause of this error is that the local machine's clock is out of sync with the actual time. However, this is not the case here: I checked the time and even reset it, with no success. See resources here:
- https://stackoverflow.com/questions/4770635/s3-error-the-difference-between-the-request-time-and-the-current-time-is-too-la
- https://stackoverflow.com/questions/25964491/aws-s3-upload-fails-requesttimetooskewed
The error does not appear when loading a smaller dataset (e.g. our test set) from the same s3 path.
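For completeness, one way to measure the skew directly against S3 (a diagnostic sketch, not part of `datasets`):
```python
import datetime
import email.utils

import requests

resp = requests.head("https://s3.amazonaws.com")
server_time = email.utils.parsedate_to_datetime(resp.headers["Date"])
local_time = datetime.datetime.now(datetime.timezone.utc)
print("clock skew:", server_time - local_time)  # should be within a few seconds
```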
### Steps to reproduce the bug
1. Create large dataset
2. Try loading it from s3 using:
```
dataset = load_from_disk("s3://...", storage_options=storage_options)
```
### Expected behavior
Load dataset without running into this error.
### Environment info
- `datasets` version: 2.13.1
- Platform: Linux-5.15.0-91-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.19.3
- PyArrow version: 12.0.1
- Pandas version: 2.0.3 | open | 2024-01-23T12:37:57Z | 2024-01-23T12:37:57Z | https://github.com/huggingface/datasets/issues/6611 | [] | zotroneneis | 0 |
JaidedAI/EasyOCR | pytorch | 405 | Generation 2 model files do not work with PyTorch 1.4 | Trying to load generation 2 models with `reader = easyocr.Reader(['en'], download_enabled=False)` yields the following error with PyTorch 1.4:
`RuntimeError: version_ <= kMaxSupportedFileFormatVersion INTERNAL ASSERT FAILED at /pytorch/caffe2/serialize/inline_container.cc:132, please report a bug to PyTorch. Attempted to read a PyTorch file with version 3, but the maximum supported version for reading is 2. Your PyTorch installation may be too old. (init at /pytorch/caffe2/serialize/inline_container.cc:132)` | closed | 2021-03-30T08:53:38Z | 2022-03-02T09:24:57Z | https://github.com/JaidedAI/EasyOCR/issues/405 | [] | suyanzhe | 2 |
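A possible workaround, assuming access to an environment with PyTorch >= 1.6 (a sketch; `english_g2.pth` stands in for whichever generation 2 model file is affected):
```python
import torch

# Load in a modern PyTorch, then re-save in the legacy (non-zipfile)
# serialization format, which PyTorch 1.4 can still read.
state = torch.load("english_g2.pth", map_location="cpu")
torch.save(state, "english_g2_legacy.pth", _use_new_zipfile_serialization=False)
```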
d2l-ai/d2l-en | pytorch | 1,736 | Add "Open with Google Colab" feature in every notebook | The notebooks in this book do not have the feature to run on Google Colab. This feature will be very helpful for those who are just beginning with deep learning and will help us familiarize ourselves with the code in a better way. | closed | 2021-04-26T06:14:52Z | 2021-08-11T19:03:22Z | https://github.com/d2l-ai/d2l-en/issues/1736 | [] | Rukmini-Meda | 2 |
modoboa/modoboa | django | 3,234 | password_scheme [ '"sha512crypt" is not a valid choice.' ] | # Impacted versions
* OS Type: Debian
* OS Version: 12
* Database Type: postgres
* Database version: 15.6
* Modoboa: 2.2.4
* installer used: yes
* Webserver: nginx
# Steps to reproduce
1. login into a new-admin
2. go to /new-admin/parameters/core (go to settings > general)
3. ctrl+shift+k (open debug console in your browser)
4. No need to change anything; just click the green floppy-disk icon in the bottom-right corner, then see the response from the server indicating failure
# Current behavior
This was installed yesterday; almost everything is at default settings (except perhaps the imported users and domains CSVs).
I believe this might be caused by the users CSV import, but I tried removing those.
## Response
```
XHR PUT https://mail.<blablabla>/api/v2/parameters/core/
[HTTP/2 400 112ms]
password_scheme [ '"sha512crypt" is not a valid choice.' ]
```
# Expected behavior
status 200
# Video/Screenshot link (optional)

| closed | 2024-04-11T11:47:04Z | 2024-04-11T13:09:36Z | https://github.com/modoboa/modoboa/issues/3234 | [] | usernamehyphen | 4 |
pytest-dev/pytest-qt | pytest | 325 | Is there a way to query for widgets? | Does pytest-qt provide some way to query for widgets in the "tree" of the UI without making them "public" attributes of some parent widget? I'm thinking of something similar to the `queryByTestId()` facility that some testing libraries use in the context of javascript frontend applications. | closed | 2020-12-01T15:18:15Z | 2020-12-01T17:09:19Z | https://github.com/pytest-dev/pytest-qt/issues/325 | [] | samfrances | 1 |
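As far as I know pytest-qt does not ship such a query helper, but Qt's `objectName` lookup can play the role of a test id (a sketch using PyQt5; the fixture and names are hypothetical):
```python
from PyQt5.QtCore import Qt
from PyQt5.QtWidgets import QWidget


def query_by_test_id(root: QWidget, test_id: str) -> QWidget:
    # Widgets opt in by calling setObjectName(test_id) in application code.
    widget = root.findChild(QWidget, test_id)
    assert widget is not None, f"no widget with objectName {test_id!r}"
    return widget


def test_save_button(qtbot, main_window):  # main_window: hypothetical fixture
    button = query_by_test_id(main_window, "saveButton")
    qtbot.mouseClick(button, Qt.LeftButton)
```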
jupyterhub/repo2docker | jupyter | 1,030 | line buffering (buffering=1) warning on Python 3.8 | Since the recent change made in #1014, I got a warning like below when launching images based on Python 3.8 (`python=3.8` in environment.yml).
```
/srv/conda/envs/notebook/lib/python3.8/subprocess.py:848: RuntimeWarning: line buffering (buffering=1) isn't supported in binary mode, the default buffer size will be used
```
I believe this issue is related to [the change](https://bugs.python.org/issue32236) in Python 3.8, where it started warning about `buffering=1` in binary mode, which previous versions silently ignored. I believe the related code is below:
https://github.com/jupyterhub/repo2docker/blob/a5f5bbbb75a9945d1f8fe8f8ff4844dfd4481742/repo2docker/buildpacks/repo2docker-entrypoint#L40-L46
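For reference, the warning is easy to reproduce in isolation (a minimal sketch):
```python
import subprocess
import sys

# bufsize=1 means line buffering, which Python 3.8 only supports in text
# mode; with a binary-mode pipe it emits exactly this RuntimeWarning.
proc = subprocess.Popen(
    [sys.executable, "-c", "print('hello')"],
    stdout=subprocess.PIPE,
    bufsize=1,
)
proc.wait()
```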
Not sure if related, I also noticed texts on the console (mostly) lost colors after this recent change; Jupyter log messages used to have colored headings for warning/info/etc, but they are now all monochrome. Yet some texts are still printed in colors (i.e. Julia banner lost colors, but its prompt still has a color). | open | 2021-03-26T00:29:33Z | 2021-04-13T14:42:29Z | https://github.com/jupyterhub/repo2docker/issues/1030 | [] | tomyun | 9 |
scikit-hep/awkward | numpy | 3,170 | Support Numpy 2 varlen strings | ### Description of new feature
This is not a feature request per se; rather, it's tracking the future possibility of ingesting / exporting NumPy 2 varlen strings.
I took a brief glance at this again today (it's amazing how quickly this stuff fades once you're not doing it every day), and it's clear that right now we have some work ahead of us if we want to ingest these strings into Awkward.
NumPy's choice to have each string be its own arena-allocated object means that there's no trivial way to ask for a single flat buffer of UTF8 code-units. I only spent a few minutes to look at this, and so far it seems we probably can use the NumPy C API to avoid needing to convert the string into UTF-32 in order to produce a flat buffer. This conversion would need to iterate over every string object and fill a buffer.
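To make that cost concrete, here is what the per-element flattening looks like from pure Python on NumPy 2 (a sketch; the real implementation would presumably do the equivalent via the C API):
```python
import numpy as np

arr = np.array(["spam", "egg", "bäcon"], dtype=np.dtypes.StringDType())

# Each element must be materialized and encoded individually; there is
# no contiguous UTF-8 buffer to view directly.
encoded = [s.encode("utf-8") for s in arr]
offsets = np.zeros(len(encoded) + 1, dtype=np.int64)
offsets[1:] = np.cumsum([len(b) for b in encoded])
content = np.frombuffer(b"".join(encoded), dtype=np.uint8)
```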
In the return direction, I don't _think_ we can lean on the simple slice-based view that we have internally. The C API for NumPy varlen strings is opaque w.r.t. the allocators, so we would need to exactly reverse the ingest method (i.e., write each substring using the C API).
| open | 2024-06-27T11:00:15Z | 2024-06-27T16:23:10Z | https://github.com/scikit-hep/awkward/issues/3170 | [
"feature"
] | agoose77 | 2 |
pydantic/pydantic-ai | pydantic | 548 | Use Griffe's public API | Any reason you're importing from the internal API?
https://github.com/pydantic/pydantic-ai/blob/b9ec73fe8d47d7859dbf7eefbade198f3cc0eb34/pydantic_ai_slim/pydantic_ai/_griffe.py#L7-L8
You're exposing yourself to breakages if I change these internals :sweat_smile:
Public equivalent:
```python
from griffe import DocstringSectionKind, Docstring, Object as GriffeObject
```
If it's to avoid loading too many things, note that `_griffe.models` imports a lot of stuff anyway:
```python
from _griffe.c3linear import c3linear_merge
from _griffe.docstrings.parsers import DocstringStyle, parse
from _griffe.enumerations import Kind, ParameterKind, Parser
from _griffe.exceptions import AliasResolutionError, BuiltinModuleError, CyclicAliasError, NameResolutionError
from _griffe.expressions import ExprCall, ExprName
from _griffe.logger import logger
from _griffe.mixins import ObjectAliasMixin
``` | closed | 2024-12-26T16:04:44Z | 2024-12-26T17:27:12Z | https://github.com/pydantic/pydantic-ai/issues/548 | [] | pawamoy | 1 |
deepset-ai/haystack | machine-learning | 8,540 | Add a ranker component that uses an LLM to rerank documents | **Describe the solution you'd like**
I'd like to add a new ranker component that leverages an LLM to rerank retrieved documents by their relevance to the query. This would better assess the quality of the top-ranked documents, helping ensure that only relevant results are passed to the LLM that answers the question.
Additionally, the ability for the LLM to choose how many documents to keep would be nice: a sort of dynamic top-k, if you will.
**Additional context**
We have started to employ this for some clients, especially in situations where we need to provide extensive references: for a given answer we need to provide all relevant documents that support the answer text, and a single reference is not enough. In those situations we are willing to pay the extra cost of using an LLM to rerank and keep only the most relevant documents.
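To illustrate the idea, a rough sketch (not a proposed Haystack API; `call_llm` and the prompt format are placeholders):
```python
def llm_rerank(query: str, documents: list[str], call_llm) -> list[str]:
    numbered = "\n".join(f"[{i}] {d}" for i, d in enumerate(documents))
    prompt = (
        "Return the indices of ALL documents that directly support an answer "
        f"to the query, as a comma-separated list.\nQuery: {query}\n{numbered}"
    )
    reply = call_llm(prompt)  # e.g. "0, 3, 4": the model decides how many to keep
    keep = [int(tok) for tok in reply.split(",") if tok.strip().isdigit()]
    return [documents[i] for i in keep if 0 <= i < len(documents)]
```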
| open | 2024-11-12T14:59:54Z | 2025-01-23T09:48:44Z | https://github.com/deepset-ai/haystack/issues/8540 | [
"P3"
] | sjrl | 6 |
nolar/kopf | asyncio | 301 | Custom Scheduler? | > <a href="https://github.com/cliffburdick"><img align="left" height="50" src="https://avatars1.githubusercontent.com/u/30670611?v=4"></a> An issue by [cliffburdick](https://github.com/cliffburdick) at _2020-01-28 23:22:41+00:00_
> Original URL: https://github.com/zalando-incubator/kopf/issues/301
>
Great project, and I enjoyed your talk at kubecon 2019!
## Problem
Kubernetes 1.17 added a new scheduler framework, but at the same time, are slowly deprecating the ability to use Python as a scheduler language. Most of the new hooks added must be written in Go, and the old scheduler extension framework that was language agnostic is going away.
Since custom schedulers are similar to operators, but just watch for the scheduler-name field to appear in the spec, kopf might be able to fit that need.
## Proposal
I don't know enough about the internals of kopf, but if the framework of callbacks when a pod using a custom scheduler appeared or disappeared could be reused, that would be ideal. Instead of dealing with a CRD (or in addition to), kopf could provide the scheduler framework the ability to bind to particular nodes.
## Checklist
- [X ] Many users can benefit from this feature, it is not a one-time case
- [X ] The proposal is related to the K8s operator framework, not to the K8s client libraries
Edit: Maybe this is already possible and I'm overthinking it. If kopf is registered to handle a CRD that always has the scheduler-name field set, the default scheduler won't touch it. Will kopf.on.create be enough to trigger the scheduler to know it's there?
---
> <a href="https://github.com/nolar"><img align="left" height="30" src="https://avatars0.githubusercontent.com/u/544296?v=4"></a> Commented by [nolar](https://github.com/nolar) at _2020-01-31 09:35:03+00:00_
>
Can you please give some links with a description of this new scheduler? — To better understand the change.
---
> <a href="https://github.com/nolar"><img align="left" height="30" src="https://avatars0.githubusercontent.com/u/544296?v=4"></a> Commented by [nolar](https://github.com/nolar) at _2020-01-31 10:51:59+00:00_
>
I only found https://kubernetes.io/docs/concepts/configuration/scheduling-framework/ — but it is about the pod scheduling, i.e. assigning them to the nodes. I'm not sure this was ever doable with operators (both Go- & Python-based).
Technically, Kopf is able to handle built-in resources now, including pods. But this handling is limited to watching over them and patching their fields (either spec or status or metadata). If this is enough for scheduling, then it can be done now.
Can you provide some more detailed description of the idea? E.g. with some hypothetical code samples and a step-by-step flow of events explained?
To the level of my low knowledge of Kubernetes internals, the plugins are only possible when embedded into the Kubernetes itself. We can probably write a Go-based "mediator" plugin that will communicate with the scheduler framework internally, but with no own logic implemented, just by getting the statuses/values from the pod's fields — and by putting them back. And then, a regular controller/operator can actually do the job and "talk" to the plugin via these fields, which implies "talking" to the scheduler. Is that what you mean?
---
> <a href="https://github.com/cliffburdick"><img align="left" height="30" src="https://avatars1.githubusercontent.com/u/30670611?v=4"></a> Commented by [cliffburdick](https://github.com/cliffburdick) at _2020-01-31 15:31:00+00:00_
>
Hi [nolar](https://github.com/nolar), thanks for the response. Let me give you a bit more background.
We have a custom Python scheduler we are developing here: https://github.com/Viasat/nhd, that we presented at Kubecon last year. Right now the scheduler does its own reconciliation loop, where the only difference (I believe) from a normal operator is it looks for the "schedulerName" field in the pod spec, and is allowed to bind the pod to a node if that field is set for it. The main feature of the scheduler is it makes decisions based on available hardware resources on the node. As you can imagine, that involves watching for new pods to show up, taking resources from a node when they are scheduled, and freeing those resources when the pod dies. The piece where it watches for new pods to show up and be deleted seem to be the same as a kopf.on.create/delete. I had to write my own (poor) logic to do this, even though it's been done a million times before. It's no problem, and even preferable, to require a CRD for these pods, since they are very similar, but not quite the same, as a StatefulSet. You mentioned kopf is limited to watching over pods and patching, and I think other than calling the client API bind command, a scheduler is nothing more than that (at least a simple one). With a CRD, I don't think we have to worry about the "scheduler" part of it, because the CRD would be the first object to come in, and that being created gives kopf the ability to deploy the pods.
So the work flow would be:
1. Watch for CRD type ABC to show up
2. on.create handler for ABC creates pods with the scheduler-name set to scheduler-ABC
3. on.create pod handler with filter on scheduler-name or some other identifier sees pod come in and binds it to a node // This is the one I'm not sure is possible. Would on.create be triggered if the pod is in a pending state without a node bound to it? Does this handler only get triggered when the pod is successfully deployed?
4. on.delete pod handlers for that same type, and schedulers frees appropriate internal resources.
My idea is that the kopf framework is running an operator that looks for this CRD to show up, launches an appropriate number of pods as owned by that CRD, and sees when the pods (or CRD) are deleted. Because much of the difficulty is in writing the reconciliation loop, which you've already solved, I figured the scheduler could simply be a wrapper around on.create/delete. I saw a lot of issues related to watching pods under CRDs, and it wasn't entirely clear to me if that's supported/working yet, since it seemed you were still coming up with ideas on how to handle that.
Also, after the CRD is deployed, I wanted to have another kopf operator silently watching the child pods of these CRDs as well, since it's in charge of doing things like sending messages to these pods when they come up, providing them configuration, etc.
For more context on Kubernetes, what I really wanted to do was make our scheduler into a [scheduler extension](https://kubernetes.io/docs/concepts/extend-kubernetes/extend-cluster/#scheduler-extensions). In the past, this was simply making a webhook that was called after the main scheduler was run, and it acted as a pre-filter step for other schedulers by doing the normal things like removing dead nodes/unhealthy nodes, etc. Unfortunately, it seems scheduler extensions via a webhook is going to be deprecated at some point in the future in favor of the scheduler framework added recently. The reason is the scheduler framework allows more flexibility as to where you want to plug in to compared to the webhook. The scheduler framework, at least my understanding, requires your scheduler to be written in Go, so this effectively will make writing a scheduler extension in Python impossible in the future without hacking in a Go module. But I digress.
I hope this helps describe the goal, and maybe you can say whether that's possible or not, since I really like the idea of kopf and think it would be a great application.
---
> <a href="https://github.com/nolar"><img align="left" height="30" src="https://avatars0.githubusercontent.com/u/544296?v=4"></a> Commented by [nolar](https://github.com/nolar) at _2020-01-31 21:13:57+00:00_
>
**For the first part,** as I understand, you want something like this:
```python
import kopf
import pykube

@kopf.on.create('zalando.org', 'v1', 'kopfexamples')
def spawn_kexy_pods(**_):
    pod_body = {'spec': {'scheduler-name': 'scheduler-ABC', ...}, ...}  # or parse a yaml template
    kopf.adopt(pod_body)  # for cascaded deletions
    # kopf.label(pod_body, {'mykexypod': 'yes'})  # perhaps, not needed
    api = pykube.HTTPClient()
    pod = pykube.Pod(api, pod_body)
    pod.create()

def _designated_for_us(spec, **_):
    return spec.get('scheduler-name') == 'scheduler-ABC'

@kopf.on.create('', 'v1', 'pods',
                # labels={'mykexypod': 'yes'},  # perhaps, not needed
                when=_designated_for_us)
def bind_pod_to_node(namespace, name, patch, **_):
    node_name = call_api_to_bind_it(namespace, name)  # hypothetical binding call
    patch.setdefault('metadata', {}).setdefault('labels', {})['node'] = node_name

# The code below is optional:
def _assigned_to_a_node(old, new, **_):
    old_node = old.get('metadata', {}).get('labels', {}).get('node')
    new_node = new.get('metadata', {}).get('labels', {}).get('node')
    return new_node is not None and old_node != new_node

@kopf.on.update('', 'v1', 'pods', when=_assigned_to_a_node)
def notice_node_assigned(**_):
    pass  # congrats!
```
Specifically:
> Would on.create be triggered if the pod is in a pending state without a node bound to it? Does this handler only get triggered when the pod is successfully deployed?
On-creation handlers are triggered when the pod is seen for the first time. I.e. when it is created. Usually, you can expect the handling to be done near instantly, much before the pod is actually started by any schedulers (but it already exists).
You might want to take a look into the `@on.update` handlers, or `@on.event` low-level handlers — to track when the pod is assigned/bound to the nodes. If you know the field where this information is stored, you can also use `@kopf.on.field(..., field="metadata.labels.node")`.
on-create, on-update, on-delete, on-field handlers will be retried until succeeded. on-event handler is fire-and-forget: if it fails, it will not be retried (or until the new event arrives some time later).
There are no special events for the pod's conditions (yet), and there is no special treatment for the pods above other resource kinds (yet). But that can be expressed via the `when=` filters.
Or do I miss something?
---
> <a href="https://github.com/cliffburdick"><img align="left" height="30" src="https://avatars1.githubusercontent.com/u/30670611?v=4"></a> Commented by [cliffburdick](https://github.com/cliffburdick) at _2020-02-01 19:47:35+00:00_
>
[nolar](https://github.com/nolar) thanks! I think this sounds very promising, and I'll do some prototyping over the coming weeks. I really appreciate your comments, and I'll let you know when I update our project to use it.
By the way, I know nodes are not objects necessarily, but the ability to watch node status and be alerted is also really important for schedulers. This is likely outside the scope of kopf, though, and can still be done by separate code.
---
> <a href="https://github.com/cliffburdick"><img align="left" height="30" src="https://avatars1.githubusercontent.com/u/30670611?v=4"></a> Commented by [cliffburdick](https://github.com/cliffburdick) at _2020-08-19 14:59:21+00:00_
>
Marking this closed as this is now integrated into NHD:
https://github.com/Viasat/nhd | closed | 2020-08-18T20:03:08Z | 2020-08-23T20:55:01Z | https://github.com/nolar/kopf/issues/301 | [
"enhancement",
"archive"
] | kopf-archiver[bot] | 0 |
plotly/dash | flask | 2,948 | Excel files do not work in the `parse_contents()` function provided in the sample code for dcc.Upload | Thank you so much for helping improve the quality of Dash!
We do our best to catch bugs during the release process, but we rely on your help to find the ones that slip through.
**Describe the bug**
I noticed this in my own code, then referred to the sample code provided and found the same issue.
When I click the 'Drag and Drop or Select Files' button in the documentation's sample code for [dcc.Upload](https://dash.plotly.com/dash-core-components/upload) and select an Excel file, I receive the error 'There was an error processing this file.'.
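For comparison, a variant of the docs' `parse_contents` with the Excel branch written out (a sketch; note that `pd.read_excel` needs `openpyxl` installed for `.xlsx` files, which may be the missing piece):
```python
import base64
import io

import pandas as pd


def parse_contents(contents, filename):
    _content_type, content_string = contents.split(",")
    decoded = base64.b64decode(content_string)
    if filename.lower().endswith((".xls", ".xlsx")):
        return pd.read_excel(io.BytesIO(decoded))
    return pd.read_csv(io.StringIO(decoded.decode("utf-8")))
```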
**Expected behavior**
I expect the file to upload, the contents to be parsed, and a table to be generated.
**Screenshots**

| closed | 2024-08-13T17:31:50Z | 2024-08-17T21:32:06Z | https://github.com/plotly/dash/issues/2948 | [
"bug",
"P3"
] | lucasprshepherd | 5 |
pywinauto/pywinauto | automation | 1,266 | onHover event in ComboBox changes selected_item | I am trying to extract the selected value from a ComboBox with `combo_box.selected_text()` through the `win32` backend, with no success: whenever my mouse hovers over another, **not selected** item, this function returns the hovered item's text rather than my selected item's text.
Am I missing something, or is this intentional behavior on the `onHover` event?
I've tried numerous ways to extract the selected value, and also tried to detect whether the combo box list is visible (which would mean the user might be hovering over some items), but with no success at all.
I noticed (through Accessibility Insights) that there is a property on the `ComboBox` item indicating the `selectedValue`; how do I extract it?
<img width="414" alt="Screen Shot 2022-12-18 at 10 19 54" src="https://user-images.githubusercontent.com/5401999/208290452-633c13d6-6263-4681-a59f-3de2bf09ff81.png">
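One thing that may be worth trying is reading that property through the UIA backend, where the LegacyIAccessible `Value` usually reflects the committed selection rather than the hovered item (a sketch; the window title is hypothetical):
```python
from pywinauto import Application

app = Application(backend="uia").connect(title_re=".*MyApp.*")  # hypothetical title
combo = app.top_window().child_window(control_type="ComboBox")

# 'Value' from the LegacyIAccessible pattern: the same property that
# Accessibility Insights displays for the combo box.
print(combo.legacy_properties().get("Value"))
```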
| open | 2022-12-18T09:20:37Z | 2023-01-03T17:09:00Z | https://github.com/pywinauto/pywinauto/issues/1266 | [
"question"
] | ErezCsillag | 2 |
open-mmlab/mmdetection | pytorch | 11,365 | After exporting a mask2former model to TorchScript for GPU with pytorch2torchscript.py, accuracy degrades badly, while the CPU export does not | I also tried exporting the model directly with the code below; as above, there were no errors, but the result still did not change.
```python
model = init_model(config_path, checkpoint_path, device='cuda:0')
verify = True
imgs = torch.randn(1, 3, 512, 512).to("cuda")
traced_model = torch.jit.trace(model, example_inputs=imgs, check_trace=verify)
traced_model.save(output_file)
```
Inference with `result = mmseg.apis.inference_model(model, img)` gives normal accuracy, **and inference with the CPU TorchScript model is also normal**.
**Is this a bug or incompatibility in mmsegmentation when exporting TorchScript models for GPU?**
I would like to export with `traced_model = torch.jit.script(model)` instead, but I hit errors there that I cannot resolve right away.
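One way to quantify the GPU accuracy loss is to compare the traced model against the eager model on identical inputs (a sketch; it assumes the patched forward used for tracing accepts a raw tensor, as in the snippet above):
```python
import torch

model.eval()
x = torch.randn(1, 3, 512, 512, device="cuda")
with torch.no_grad():
    eager_out = model(x)
    traced_out = traced_model(x)
print("max abs diff:", (eager_out - traced_out).abs().max().item())
```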
| closed | 2024-01-12T11:12:32Z | 2024-01-12T16:58:30Z | https://github.com/open-mmlab/mmdetection/issues/11365 | [] | edition3234 | 1 |
healthchecks/healthchecks | django | 952 | Check goes down after one hour | Hello,
I am having a very strange issue with healthchecks.
I have three backup scripts that run daily and do a curl at the beginning and at the end so I can also measure execution time.
Two of them give me no issues, but one is misbehaving in a very strange way. The issue goes as follows:
- Script starts and the first curl is sent
- Healthchecks receives the curl and starts counting time
- Script ends and the second curl is sent
- Healthchecks receives the second curl and posts the execution time
- One hour later the check goes down with no other curls received or sent
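For reference, the ping pattern these scripts follow is roughly the following (a sketch in Python rather than curl; the check UUID and backup command are placeholders):
```python
import subprocess

import requests

PING_URL = "https://hc-ping.com/your-check-uuid"  # placeholder UUID

requests.get(PING_URL + "/start", timeout=10)   # start signal: begin timing
subprocess.run(["/usr/local/bin/backup.sh"], check=True)
requests.get(PING_URL, timeout=10)              # success signal: stop timing
```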
You can see a screenshot of the last two days below.

What am I missing? | closed | 2024-02-05T13:39:24Z | 2024-02-07T08:50:04Z | https://github.com/healthchecks/healthchecks/issues/952 | [] | The-Inamati | 4 |