repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count
---|---|---|---|---|---|---|---|---|---|---|---|
deeppavlov/DeepPavlov | nlp | 784 | Add __repr__() method for Chainer class | closed | 2019-03-29T14:05:34Z | 2019-05-02T19:04:17Z | https://github.com/deeppavlov/DeepPavlov/issues/784 | [] | yoptar | 1 |
|
unit8co/darts | data-science | 2,113 | QUESTION: How best to include COVID in models | Hi
I am looking at including the COVID lockdowns to help improve my forecasts. I was just going to include them as a binary covariate feature. However, I don't know if encoders would be better suited, or maybe a custom holiday?
Thanks | closed | 2023-12-08T15:59:42Z | 2024-04-17T07:00:42Z | https://github.com/unit8co/darts/issues/2113 | ["question"] | AndrewJGroves | 1 |
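The binary-covariate idea from the issue above can be sketched without darts itself: a simple 0/1 lockdown flag per date, built with the standard library. The lockdown windows below are hypothetical, for illustration only.

```python
from datetime import date, timedelta

def lockdown_flag(day, windows):
    """Return 1 if `day` falls inside any (start, end) lockdown window, else 0."""
    return int(any(start <= day <= end for start, end in windows))

# Hypothetical lockdown windows, not real policy dates.
windows = [(date(2020, 3, 23), date(2020, 6, 1)),
           (date(2020, 11, 5), date(2020, 12, 2))]

# Build a daily binary covariate over a date range; in darts this list
# could then be wrapped into a TimeSeries as a past covariate.
start, end = date(2020, 3, 1), date(2020, 12, 31)
covariate = [(start + timedelta(days=i),
              lockdown_flag(start + timedelta(days=i), windows))
             for i in range((end - start).days + 1)]
```

A custom-holiday approach would encode the same information, so the choice mostly affects how the feature is generated, not what the model sees.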
mwaskom/seaborn | matplotlib | 3,457 | Deprecation warnings when seaborn is used with Pandas 2.1.0 | Pandas 2.1.0 has [deprecated](https://pandas.pydata.org/docs/whatsnew/v2.1.0.html#other-deprecations) a number of functions, and this results in `FutureWarning`s when Seaborn is used.
For example:
```py
import seaborn as sns
tips = sns.load_dataset("tips")
sns.relplot(data=tips, x="total_bill", y="tip")
sns.relplot(data=tips, x="total_bill", y="tip")
#> /opt/homebrew/lib/python3.11/site-packages/seaborn/_oldcore.py:1498: FutureWarning: is_categorical_dtype is deprecated and will be removed in a future version. Use isinstance(dtype, CategoricalDtype) instead
#> /opt/homebrew/lib/python3.11/site-packages/seaborn/_oldcore.py:1498: FutureWarning: is_categorical_dtype is deprecated and will be removed in a future version. Use isinstance(dtype, CategoricalDtype) instead
#> /opt/homebrew/lib/python3.11/site-packages/seaborn/_oldcore.py:1498: FutureWarning: is_categorical_dtype is deprecated and will be removed in a future version. Use isinstance(dtype, CategoricalDtype) instead
#> /opt/homebrew/lib/python3.11/site-packages/seaborn/_oldcore.py:1498: FutureWarning: is_categorical_dtype is deprecated and will be removed in a future version. Use isinstance(dtype, CategoricalDtype) instead
#> /opt/homebrew/lib/python3.11/site-packages/seaborn/_oldcore.py:1498: FutureWarning: is_categorical_dtype is deprecated and will be removed in a future version. Use isinstance(dtype, CategoricalDtype) instead
#> /opt/homebrew/lib/python3.11/site-packages/seaborn/_oldcore.py:1498: FutureWarning: is_categorical_dtype is deprecated and will be removed in a future version. Use isinstance(dtype, CategoricalDtype) instead
#> <seaborn.axisgrid.FacetGrid object at 0x292c403d0>
``` | closed | 2023-08-31T14:44:49Z | 2023-08-31T14:58:26Z | https://github.com/mwaskom/seaborn/issues/3457 | [] | wch | 1 |
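The replacement pandas suggests in that warning can be checked directly. A minimal sketch of the post-2.1 idiom, testing the dtype object with `isinstance` instead of calling the deprecated `is_categorical_dtype`:

```python
import pandas as pd

s_cat = pd.Series(["a", "b"], dtype="category")
s_num = pd.Series([1, 2, 3])

# Post-pandas-2.1 idiom: inspect the dtype object rather than calling
# the deprecated pd.api.types.is_categorical_dtype().
def is_categorical(series):
    return isinstance(series.dtype, pd.CategoricalDtype)
```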
ijl/orjson | numpy | 69 | Ignore null serialize | Hello, I tried to serialize a dataclass; how do I ignore null values?
```py
import orjson
from dataclasses import dataclass

@dataclass
class CDCObject:
    schema_ver: str

def default(obj):
    print(type(obj))
    if isinstance(obj, int):
        print(123)
        return str(obj)
    raise TypeError

obj = CDCObject(None)
result = orjson.dumps(obj, default=default, option=orjson.OPT_SERIALIZE_DATACLASS)
print(result)
```
and my default function is not invoked ( | closed | 2020-04-01T08:01:56Z | 2020-04-01T17:31:55Z | https://github.com/ijl/orjson/issues/69 | [] | ihatiko | 1 |
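The `default` hook in the question is only consulted for types the serializer cannot handle natively, and `None` serializes natively as `null`, so the hook never fires. One way to drop null fields is to filter the dict before dumping; sketched here with the stdlib's `dataclasses` and `json` rather than orjson itself, but the same filtered dict could be passed to `orjson.dumps`.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class CDCObject:
    schema_ver: str

def dumps_without_nulls(obj):
    # asdict() converts the dataclass to a dict; the comprehension
    # drops every key whose value is None before serializing.
    return json.dumps({k: v for k, v in asdict(obj).items() if v is not None})

result = dumps_without_nulls(CDCObject(None))  # → "{}"
```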
lexiforest/curl_cffi | web-scraping | 139 | 当post的data过长时 data超出部分会消失 | 版本0.5.9
请求data长度为5000左右 然后fd抓包只会收到4900左右 超出部分会自动被截断,使用原生requests post发送则是正常发出
我用'1111'伪造5000+长度也尝试过了 一样收到的不全 | closed | 2023-10-07T22:27:12Z | 2023-10-08T08:07:00Z | https://github.com/lexiforest/curl_cffi/issues/139 | [] | Kise1223 | 2 |
HumanSignal/labelImg | deep-learning | 321 | labelImg crashes at launch when installed as an app on macOS | Hello, I tried to follow the instructions in the [README.rst](https://github.com/tzutalin/labelImg/blob/master/README.rst) to install labelImg as an app on macOS, but I get the error message "_labelImg has encountered a fatal error, and will now terminate._" when I try to launch it after successfully completing the installation:

I used this manual installation:
```bash
brew install python3
pip install pipenv
pipenv --three
pipenv shell
pip install py2app
pip install PyQt5 lxml
make qt5py3
rm -rf build dist
python setup.py py2app -A
mv "dist/labelImg.app" /Applications
```
Moreover, I had to modify my `~/.bash_profile` to solve a localization problem when using `pipenv --three`. I added the following lines ([source](https://github.com/pypa/pipenv/issues/187)):
```bash
export LC_ALL=en_US.UTF-8
export LANG=en_US.UTF-8
```
I don't know if this could be the problem or not but I am also using Anaconda and therefore my python version is the following: `Python 3.6.5 :: Anaconda, Inc.`
Thanks a lot for your time.
- **OS:** macOS High Sierra 10.13.15
- **PyQt version:** PyQt5
| open | 2018-07-01T12:06:32Z | 2019-05-02T04:23:41Z | https://github.com/HumanSignal/labelImg/issues/321 | [] | LucasVandroux | 4 |
jina-ai/serve | fastapi | 5,252 | Wallet Address | 0x626fA38bA7B1f5a646b1349b70aF6DA814d5C598 | closed | 2022-10-07T01:24:31Z | 2022-10-07T03:00:43Z | https://github.com/jina-ai/serve/issues/5252 | [] | urica12 | 0 |
ageitgey/face_recognition | machine-learning | 894 | Dlib build error during face recognition installation | * face_recognition version:
* Python version: 3.5
* Operating System: ubuntu 16.04
### Description
I was trying to install the face_recognition module, but building the dlib wheel throws the following exception. I have previously installed dlib and face_recognition the same way on the same system.
### What I Did
```
pip install dlib
Collecting dlib
Using cached https://files.pythonhosted.org/packages/05/57/e8a8caa3c89a27f80bc78da39c423e2553f482a3705adc619176a3a24b36/dlib-19.17.0.tar.gz
Building wheels for collected packages: dlib
Building wheel for dlib (setup.py) ... error
ERROR: Command errored out with exit status 1:
command: /.virtualenvs/env/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-eape6x3c/dlib/setup.py'"'"'; __file__='"'"'/tmp/pip-install-eape6x3c/dlib/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-naal67g1 --python-tag cp35
cwd: /tmp/pip-install-eape6x3c/dlib/
Complete output (522 lines):
running bdist_wheel
running build
running build_py
package init file 'dlib/__init__.py' not found (or not a regular file)
running build_ext
Building extension for Python 3.5.2 (default, Nov 12 2018, 13:43:14)
Invoking CMake setup: 'cmake /tmp/pip-install-eape6x3c/dlib/tools/python -DCMAKE_LIBRARY_OUTPUT_DIRECTORY=/tmp/pip-install-eape6x3c/dlib/build/lib.linux-x86_64-3.5 -DPYTHON_EXECUTABLE= /.virtualenvs/env/bin/python3 -DCMAKE_BUILD_TYPE=Release'
-- The C compiler identification is GNU 5.5.0
-- The CXX compiler identification is GNU 5.5.0
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found PythonInterp: /home/bin/python3 (found version "3.5.2")
-- Found PythonLibs: /usr/lib/x86_64-linux-gnu/libpython3.5m.so
-- Performing Test HAS_CPP14_FLAG
-- Performing Test HAS_CPP14_FLAG - Success
-- pybind11 v2.2.2
-- Using CMake version: 3.5.1
-- Compiling dlib version: 19.17.0
-- SSE4 instructions can be executed by the host processor.
-- AVX instructions can be executed by the host processor.
-- Enabling AVX instructions
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Looking for pthread_create
-- Looking for pthread_create - not found
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE
-- Looking for XOpenDisplay in /usr/lib/x86_64-linux-gnu/libX11.so;/usr/lib/x86_64-linux-gnu/libXext.so
-- Looking for XOpenDisplay in /usr/lib/x86_64-linux-gnu/libX11.so;/usr/lib/x86_64-linux-gnu/libXext.so - found
-- Looking for gethostbyname
-- Looking for gethostbyname - found
-- Looking for connect
-- Looking for connect - found
-- Looking for remove
-- Looking for remove - found
-- Looking for shmat
-- Looking for shmat - found
-- Looking for IceConnectionNumber in ICE
-- Looking for IceConnectionNumber in ICE - found
-- Found X11: /usr/lib/x86_64-linux-gnu/libX11.so
-- Looking for png_create_read_struct
-- Looking for png_create_read_struct - found
-- Looking for jpeg_read_header
-- Looking for jpeg_read_header - found
-- Searching for BLAS and LAPACK
-- Searching for BLAS and LAPACK
-- Found PkgConfig: /usr/bin/pkg-config (found version "0.29.1")
-- Checking for module 'cblas'
-- No package 'cblas' found
-- Checking for module 'lapack'
-- Found lapack, version 0.2.18
-- Looking for sys/types.h
-- Looking for sys/types.h - found
-- Looking for stdint.h
-- Looking for stdint.h - found
-- Looking for stddef.h
-- Looking for stddef.h - found
-- Check size of void*
-- Check size of void* - done
-- Found OpenBLAS library
-- Looking for sgetrf_single
-- Looking for sgetrf_single - found
-- Using OpenBLAS's built in LAPACK
-- Looking for cblas_ddot
-- Looking for cblas_ddot - found
-- Looking for sgesv
-- Looking for sgesv - not found
-- Looking for sgesv_
-- Looking for sgesv_ - not found
-- Found CUDA: /usr/local/cuda-8.0 (found suitable version "8.0", minimum required is "7.5")
-- Looking for cuDNN install...
-- Found cuDNN: /usr/local/cuda-8.0/lib64/libcudnn.so
-- Building a CUDA test project to see if your compiler is compatible with CUDA...
-- Checking if you have the right version of cuDNN installed.
-- Try OpenMP C flag = [-fopenmp]
-- Performing Test OpenMP_FLAG_DETECTED
-- Performing Test OpenMP_FLAG_DETECTED - Success
-- Try OpenMP CXX flag = [-fopenmp]
-- Performing Test OpenMP_FLAG_DETECTED
-- Performing Test OpenMP_FLAG_DETECTED - Success
-- Found OpenMP: -fopenmp
-- Enabling CUDA support for dlib. DLIB WILL USE CUDA
-- C++11 activated.
-- Configuring done
-- Generating done
-- Build files have been written to: /tmp/pip-install-eape6x3c/dlib/build/temp.linux-x86_64-3.5
Invoking CMake build: 'cmake --build . --config Release -- -j6'
[ 1%] Building NVCC (Device) object dlib_build/CMakeFiles/dlib.dir/cuda/dlib_generated_cusolver_dlibapi.cu.o
[ 2%] Building NVCC (Device) object dlib_build/CMakeFiles/dlib.dir/cuda/dlib_generated_cuda_dlib.cu.o
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9220): error: argument of type "const void *" is incompatible with parameter of type "const float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9231): error: argument of type "const void *" is incompatible with parameter of type "const float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9244): error: argument of type "const void *" is incompatible with parameter of type "const double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9255): error: argument of type "const void *" is incompatible with parameter of type "const double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9268): error: argument of type "const void *" is incompatible with parameter of type "const float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9279): error: argument of type "const void *" is incompatible with parameter of type "const float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9292): error: argument of type "const void *" is incompatible with parameter of type "const double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9303): error: argument of type "const void *" is incompatible with parameter of type "const double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9316): error: argument of type "const void *" is incompatible with parameter of type "const int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9327): error: argument of type "const void *" is incompatible with parameter of type "const int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9340): error: argument of type "const void *" is incompatible with parameter of type "const long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9352): error: argument of type "const void *" is incompatible with parameter of type "const long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9365): error: argument of type "const void *" is incompatible with parameter of type "const int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9376): error: argument of type "const void *" is incompatible with parameter of type "const int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9389): error: argument of type "const void *" is incompatible with parameter of type "const long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9401): error: argument of type "const void *" is incompatible with parameter of type "const long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9410): error: argument of type "void *" is incompatible with parameter of type "float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9419): error: argument of type "void *" is incompatible with parameter of type "float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9428): error: argument of type "void *" is incompatible with parameter of type "double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9437): error: argument of type "void *" is incompatible with parameter of type "double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9445): error: argument of type "void *" is incompatible with parameter of type "float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9454): error: argument of type "void *" is incompatible with parameter of type "float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9463): error: argument of type "void *" is incompatible with parameter of type "double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9472): error: argument of type "void *" is incompatible with parameter of type "double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9481): error: argument of type "void *" is incompatible with parameter of type "int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9490): error: argument of type "void *" is incompatible with parameter of type "int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9499): error: argument of type "void *" is incompatible with parameter of type "long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9508): error: argument of type "void *" is incompatible with parameter of type "long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9517): error: argument of type "void *" is incompatible with parameter of type "int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9526): error: argument of type "void *" is incompatible with parameter of type "int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9535): error: argument of type "void *" is incompatible with parameter of type "long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9544): error: argument of type "void *" is incompatible with parameter of type "long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512pfintrin.h(55): error: argument of type "const void *" is incompatible with parameter of type "const long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512pfintrin.h(63): error: argument of type "const void *" is incompatible with parameter of type "const int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512pfintrin.h(73): error: argument of type "const void *" is incompatible with parameter of type "const long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512pfintrin.h(81): error: argument of type "const void *" is incompatible with parameter of type "const int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512pfintrin.h(91): error: argument of type "void *" is incompatible with parameter of type "const long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512pfintrin.h(100): error: argument of type "void *" is incompatible with parameter of type "const int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512pfintrin.h(109): error: argument of type "void *" is incompatible with parameter of type "const long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512pfintrin.h(117): error: argument of type "void *" is incompatible with parameter of type "const int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512pfintrin.h(127): error: argument of type "void *" is incompatible with parameter of type "const long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512pfintrin.h(136): error: argument of type "void *" is incompatible with parameter of type "const int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512pfintrin.h(145): error: argument of type "void *" is incompatible with parameter of type "const long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9220): error: argument of type "const void *" is incompatible with parameter of type "const float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512pfintrin.h(153): error: argument of type "void *" is incompatible with parameter of type "const int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9231): error: argument of type "const void *" is incompatible with parameter of type "const float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9244): error: argument of type "const void *" is incompatible with parameter of type "const double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9255): error: argument of type "const void *" is incompatible with parameter of type "const double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9268): error: argument of type "const void *" is incompatible with parameter of type "const float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9279): error: argument of type "const void *" is incompatible with parameter of type "const float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9292): error: argument of type "const void *" is incompatible with parameter of type "const double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9303): error: argument of type "const void *" is incompatible with parameter of type "const double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9316): error: argument of type "const void *" is incompatible with parameter of type "const int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9327): error: argument of type "const void *" is incompatible with parameter of type "const int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9340): error: argument of type "const void *" is incompatible with parameter of type "const long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9352): error: argument of type "const void *" is incompatible with parameter of type "const long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9365): error: argument of type "const void *" is incompatible with parameter of type "const int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9376): error: argument of type "const void *" is incompatible with parameter of type "const int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9389): error: argument of type "const void *" is incompatible with parameter of type "const long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9401): error: argument of type "const void *" is incompatible with parameter of type "const long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9410): error: argument of type "void *" is incompatible with parameter of type "float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9419): error: argument of type "void *" is incompatible with parameter of type "float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9428): error: argument of type "void *" is incompatible with parameter of type "double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9437): error: argument of type "void *" is incompatible with parameter of type "double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9445): error: argument of type "void *" is incompatible with parameter of type "float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9454): error: argument of type "void *" is incompatible with parameter of type "float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9463): error: argument of type "void *" is incompatible with parameter of type "double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9472): error: argument of type "void *" is incompatible with parameter of type "double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9481): error: argument of type "void *" is incompatible with parameter of type "int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9490): error: argument of type "void *" is incompatible with parameter of type "int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9499): error: argument of type "void *" is incompatible with parameter of type "long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9508): error: argument of type "void *" is incompatible with parameter of type "long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9517): error: argument of type "void *" is incompatible with parameter of type "int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9526): error: argument of type "void *" is incompatible with parameter of type "int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9535): error: argument of type "void *" is incompatible with parameter of type "long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9544): error: argument of type "void *" is incompatible with parameter of type "long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512pfintrin.h(55): error: argument of type "const void *" is incompatible with parameter of type "const long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512pfintrin.h(63): error: argument of type "const void *" is incompatible with parameter of type "const int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512pfintrin.h(73): error: argument of type "const void *" is incompatible with parameter of type "const long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512pfintrin.h(81): error: argument of type "const void *" is incompatible with parameter of type "const int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512pfintrin.h(91): error: argument of type "void *" is incompatible with parameter of type "const long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512pfintrin.h(100): error: argument of type "void *" is incompatible with parameter of type "const int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512pfintrin.h(109): error: argument of type "void *" is incompatible with parameter of type "const long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512pfintrin.h(117): error: argument of type "void *" is incompatible with parameter of type "const int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512pfintrin.h(127): error: argument of type "void *" is incompatible with parameter of type "const long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512pfintrin.h(136): error: argument of type "void *" is incompatible with parameter of type "const int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512pfintrin.h(145): error: argument of type "void *" is incompatible with parameter of type "const long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512pfintrin.h(153): error: argument of type "void *" is incompatible with parameter of type "const int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(10799): error: argument of type "const void *" is incompatible with parameter of type "const float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(10811): error: argument of type "const void *" is incompatible with parameter of type "const float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(10823): error: argument of type "const void *" is incompatible with parameter of type "const double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(10835): error: argument of type "const void *" is incompatible with parameter of type "const double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(10847): error: argument of type "const void *" is incompatible with parameter of type "const float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(10859): error: argument of type "const void *" is incompatible with parameter of type "const float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(10871): error: argument of type "const void *" is incompatible with parameter of type "const double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(10883): error: argument of type "const void *" is incompatible with parameter of type "const double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(10895): error: argument of type "const void *" is incompatible with parameter of type "const int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(10907): error: argument of type "const void *" is incompatible with parameter of type "const int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(10919): error: argument of type "const void *" is incompatible with parameter of type "const long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(10931): error: argument of type "const void *" is incompatible with parameter of type "const long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(10943): error: argument of type "const void *" is incompatible with parameter of type "const int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(10955): error: argument of type "const void *" is incompatible with parameter of type "const int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(10967): error: argument of type "const void *" is incompatible with parameter of type "const long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(10979): error: argument of type "const void *" is incompatible with parameter of type "const long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(10989): error: argument of type "void *" is incompatible with parameter of type "float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11000): error: argument of type "void *" is incompatible with parameter of type "float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11009): error: argument of type "void *" is incompatible with parameter of type "float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11020): error: argument of type "void *" is incompatible with parameter of type "float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11029): error: argument of type "void *" is incompatible with parameter of type "double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11040): error: argument of type "void *" is incompatible with parameter of type "double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11049): error: argument of type "void *" is incompatible with parameter of type "double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11060): error: argument of type "void *" is incompatible with parameter of type "double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11069): error: argument of type "void *" is incompatible with parameter of type "float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11080): error: argument of type "void *" is incompatible with parameter of type "float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11089): error: argument of type "void *" is incompatible with parameter of type "float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11100): error: argument of type "void *" is incompatible with parameter of type "float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11109): error: argument of type "void *" is incompatible with parameter of type "double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11120): error: argument of type "void *" is incompatible with parameter of type "double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11129): error: argument of type "void *" is incompatible with parameter of type "double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11140): error: argument of type "void *" is incompatible with parameter of type "double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11149): error: argument of type "void *" is incompatible with parameter of type "int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11160): error: argument of type "void *" is incompatible with parameter of type "int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11169): error: argument of type "void *" is incompatible with parameter of type "int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11180): error: argument of type "void *" is incompatible with parameter of type "int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11189): error: argument of type "void *" is incompatible with parameter of type "long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11200): error: argument of type "void *" is incompatible with parameter of type "long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11209): error: argument of type "void *" is incompatible with parameter of type "long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11220): error: argument of type "void *" is incompatible with parameter of type "long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11229): error: argument of type "void *" is incompatible with parameter of type "int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11240): error: argument of type "void *" is incompatible with parameter of type "int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11249): error: argument of type "void *" is incompatible with parameter of type "int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11260): error: argument of type "void *" is incompatible with parameter of type "int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11269): error: argument of type "void *" is incompatible with parameter of type "long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11280): error: argument of type "void *" is incompatible with parameter of type "long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11289): error: argument of type "void *" is incompatible with parameter of type "long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11300): error: argument of type "void *" is incompatible with parameter of type "long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(10799): error: argument of type "const void *" is incompatible with parameter of type "const float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(10811): error: argument of type "const void *" is incompatible with parameter of type "const float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(10823): error: argument of type "const void *" is incompatible with parameter of type "const double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(10835): error: argument of type "const void *" is incompatible with parameter of type "const double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(10847): error: argument of type "const void *" is incompatible with parameter of type "const float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(10859): error: argument of type "const void *" is incompatible with parameter of type "const float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(10871): error: argument of type "const void *" is incompatible with parameter of type "const double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(10883): error: argument of type "const void *" is incompatible with parameter of type "const double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(10895): error: argument of type "const void *" is incompatible with parameter of type "const int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(10907): error: argument of type "const void *" is incompatible with parameter of type "const int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(10919): error: argument of type "const void *" is incompatible with parameter of type "const long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(10931): error: argument of type "const void *" is incompatible with parameter of type "const long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(10943): error: argument of type "const void *" is incompatible with parameter of type "const int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(10955): error: argument of type "const void *" is incompatible with parameter of type "const int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(10967): error: argument of type "const void *" is incompatible with parameter of type "const long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(10979): error: argument of type "const void *" is incompatible with parameter of type "const long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(10989): error: argument of type "void *" is incompatible with parameter of type "float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11000): error: argument of type "void *" is incompatible with parameter of type "float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11009): error: argument of type "void *" is incompatible with parameter of type "float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11020): error: argument of type "void *" is incompatible with parameter of type "float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11029): error: argument of type "void *" is incompatible with parameter of type "double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11040): error: argument of type "void *" is incompatible with parameter of type "double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11049): error: argument of type "void *" is incompatible with parameter of type "double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11060): error: argument of type "void *" is incompatible with parameter of type "double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11069): error: argument of type "void *" is incompatible with parameter of type "float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11080): error: argument of type "void *" is incompatible with parameter of type "float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11089): error: argument of type "void *" is incompatible with parameter of type "float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11100): error: argument of type "void *" is incompatible with parameter of type "float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11109): error: argument of type "void *" is incompatible with parameter of type "double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11120): error: argument of type "void *" is incompatible with parameter of type "double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11129): error: argument of type "void *" is incompatible with parameter of type "double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11140): error: argument of type "void *" is incompatible with parameter of type "double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11149): error: argument of type "void *" is incompatible with parameter of type "int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11160): error: argument of type "void *" is incompatible with parameter of type "int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11169): error: argument of type "void *" is incompatible with parameter of type "int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11180): error: argument of type "void *" is incompatible with parameter of type "int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11189): error: argument of type "void *" is incompatible with parameter of type "long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11200): error: argument of type "void *" is incompatible with parameter of type "long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11209): error: argument of type "void *" is incompatible with parameter of type "long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11220): error: argument of type "void *" is incompatible with parameter of type "long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11229): error: argument of type "void *" is incompatible with parameter of type "int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11240): error: argument of type "void *" is incompatible with parameter of type "int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11249): error: argument of type "void *" is incompatible with parameter of type "int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11260): error: argument of type "void *" is incompatible with parameter of type "int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11269): error: argument of type "void *" is incompatible with parameter of type "long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11280): error: argument of type "void *" is incompatible with parameter of type "long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11289): error: argument of type "void *" is incompatible with parameter of type "long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11300): error: argument of type "void *" is incompatible with parameter of type "long long *"
92 errors detected in the compilation of "/tmp/tmpxft_00004979_00000000-7_cusolver_dlibapi.cpp1.ii".
CMake Error at dlib_generated_cusolver_dlibapi.cu.o.cmake:266 (message):
Error generating file
/tmp/pip-install-eape6x3c/dlib/build/temp.linux-x86_64-3.5/dlib_build/CMakeFiles/dlib.dir/cuda/./dlib_generated_cusolver_dlibapi.cu.o
dlib_build/CMakeFiles/dlib.dir/build.make:70: recipe for target 'dlib_build/CMakeFiles/dlib.dir/cuda/dlib_generated_cusolver_dlibapi.cu.o' failed
make[2]: *** [dlib_build/CMakeFiles/dlib.dir/cuda/dlib_generated_cusolver_dlibapi.cu.o] Error 1
make[2]: *** Waiting for unfinished jobs....
92 errors detected in the compilation of "/tmp/tmpxft_00004971_00000000-7_cuda_dlib.cpp1.ii".
CMake Error at dlib_generated_cuda_dlib.cu.o.cmake:266 (message):
Error generating file
/tmp/pip-install-eape6x3c/dlib/build/temp.linux-x86_64-3.5/dlib_build/CMakeFiles/dlib.dir/cuda/./dlib_generated_cuda_dlib.cu.o
dlib_build/CMakeFiles/dlib.dir/build.make:63: recipe for target 'dlib_build/CMakeFiles/dlib.dir/cuda/dlib_generated_cuda_dlib.cu.o' failed
make[2]: *** [dlib_build/CMakeFiles/dlib.dir/cuda/dlib_generated_cuda_dlib.cu.o] Error 1
CMakeFiles/Makefile2:140: recipe for target 'dlib_build/CMakeFiles/dlib.dir/all' failed
make[1]: *** [dlib_build/CMakeFiles/dlib.dir/all] Error 2
Makefile:83: recipe for target 'all' failed
make: *** [all] Error 2
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-install-eape6x3c/dlib/setup.py", line 261, in <module>
'Topic :: Software Development',
File " /.virtualenvs/env/lib/python3.5/site-packages/setuptools/__init__.py", line 145, in setup
return distutils.core.setup(**attrs)
File "/usr/lib/python3.5/distutils/core.py", line 148, in setup
dist.run_commands()
File "/usr/lib/python3.5/distutils/dist.py", line 955, in run_commands
self.run_command(cmd)
File "/usr/lib/python3.5/distutils/dist.py", line 974, in run_command
cmd_obj.run()
File "/home/lib/python3.5/site-packages/wheel/bdist_wheel.py", line 192, in run
self.run_command('build')
File "/usr/lib/python3.5/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/usr/lib/python3.5/distutils/dist.py", line 974, in run_command
cmd_obj.run()
File "/usr/lib/python3.5/distutils/command/build.py", line 135, in run
self.run_command(cmd_name)
File "/usr/lib/python3.5/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/usr/lib/python3.5/distutils/dist.py", line 974, in run_command
cmd_obj.run()
File "/tmp/pip-install-eape6x3c/dlib/setup.py", line 135, in run
self.build_extension(ext)
File "/tmp/pip-install-eape6x3c/dlib/setup.py", line 175, in build_extension
subprocess.check_call(cmake_build, cwd=build_folder)
File "/usr/lib/python3.5/subprocess.py", line 581, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', '--build', '.', '--config', 'Release', '--', '-j6']' returned non-zero exit status 2
----------------------------------------
ERROR: Failed building wheel for dlib
Running setup.py clean for dlib
Failed to build dlib
Installing collected packages: dlib
Running setup.py install for dlib ... error
ERROR: Command errored out with exit status 1:
command: /home/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-eape6x3c/dlib/setup.py'"'"'; __file__='"'"'/tmp/pip-install-eape6x3c/dlib/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-j8x2batv/install-record.txt --single-version-externally-managed --compile --install-headers /home/include/site/python3.5/dlib
cwd: /tmp/pip-install-eape6x3c/dlib/
Complete output (524 lines):
running install
running build
running build_py
package init file 'dlib/__init__.py' not found (or not a regular file)
running build_ext
Building extension for Python 3.5.2 (default, Nov 12 2018, 13:43:14)
Invoking CMake setup: 'cmake /tmp/pip-install-eape6x3c/dlib/tools/python -DCMAKE_LIBRARY_OUTPUT_DIRECTORY=/tmp/pip-install-eape6x3c/dlib/build/lib.linux-x86_64-3.5 -DPYTHON_EXECUTABLE=/home/bin/python3 -DCMAKE_BUILD_TYPE=Release'
-- The C compiler identification is GNU 5.5.0
-- The CXX compiler identification is GNU 5.5.0
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found PythonInterp: /home/bin/python3 (found version "3.5.2")
-- Found PythonLibs: /usr/lib/x86_64-linux-gnu/libpython3.5m.so
-- Performing Test HAS_CPP14_FLAG
-- Performing Test HAS_CPP14_FLAG - Success
-- pybind11 v2.2.2
-- Using CMake version: 3.5.1
-- Compiling dlib version: 19.17.0
-- SSE4 instructions can be executed by the host processor.
-- AVX instructions can be executed by the host processor.
-- Enabling AVX instructions
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Looking for pthread_create
-- Looking for pthread_create - not found
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE
-- Looking for XOpenDisplay in /usr/lib/x86_64-linux-gnu/libX11.so;/usr/lib/x86_64-linux-gnu/libXext.so
-- Looking for XOpenDisplay in /usr/lib/x86_64-linux-gnu/libX11.so;/usr/lib/x86_64-linux-gnu/libXext.so - found
-- Looking for gethostbyname
-- Looking for gethostbyname - found
-- Looking for connect
-- Looking for connect - found
-- Looking for remove
-- Looking for remove - found
-- Looking for shmat
-- Looking for shmat - found
-- Looking for IceConnectionNumber in ICE
-- Looking for IceConnectionNumber in ICE - found
-- Found X11: /usr/lib/x86_64-linux-gnu/libX11.so
-- Looking for png_create_read_struct
-- Looking for png_create_read_struct - found
-- Looking for jpeg_read_header
-- Looking for jpeg_read_header - found
-- Searching for BLAS and LAPACK
-- Searching for BLAS and LAPACK
-- Found PkgConfig: /usr/bin/pkg-config (found version "0.29.1")
-- Checking for module 'cblas'
-- No package 'cblas' found
-- Checking for module 'lapack'
-- Found lapack, version 0.2.18
-- Looking for sys/types.h
-- Looking for sys/types.h - found
-- Looking for stdint.h
-- Looking for stdint.h - found
-- Looking for stddef.h
-- Looking for stddef.h - found
-- Check size of void*
-- Check size of void* - done
-- Found OpenBLAS library
-- Looking for sgetrf_single
-- Looking for sgetrf_single - found
-- Using OpenBLAS's built in LAPACK
-- Looking for cblas_ddot
-- Looking for cblas_ddot - found
-- Looking for sgesv
-- Looking for sgesv - not found
-- Looking for sgesv_
-- Looking for sgesv_ - not found
-- Found CUDA: /usr/local/cuda-8.0 (found suitable version "8.0", minimum required is "7.5")
-- Looking for cuDNN install...
-- Found cuDNN: /usr/local/cuda-8.0/lib64/libcudnn.so
-- Building a CUDA test project to see if your compiler is compatible with CUDA...
-- Checking if you have the right version of cuDNN installed.
-- Try OpenMP C flag = [-fopenmp]
-- Performing Test OpenMP_FLAG_DETECTED
-- Performing Test OpenMP_FLAG_DETECTED - Success
-- Try OpenMP CXX flag = [-fopenmp]
-- Performing Test OpenMP_FLAG_DETECTED
-- Performing Test OpenMP_FLAG_DETECTED - Success
-- Found OpenMP: -fopenmp
-- Enabling CUDA support for dlib. DLIB WILL USE CUDA
-- C++11 activated.
-- Configuring done
-- Generating done
-- Build files have been written to: /tmp/pip-install-eape6x3c/dlib/build/temp.linux-x86_64-3.5
Invoking CMake build: 'cmake --build . --config Release -- -j6'
[ 1%] Building NVCC (Device) object dlib_build/CMakeFiles/dlib.dir/cuda/dlib_generated_cusolver_dlibapi.cu.o
[ 2%] Building NVCC (Device) object dlib_build/CMakeFiles/dlib.dir/cuda/dlib_generated_cuda_dlib.cu.o
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9220): error: argument of type "const void *" is incompatible with parameter of type "const float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9231): error: argument of type "const void *" is incompatible with parameter of type "const float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9244): error: argument of type "const void *" is incompatible with parameter of type "const double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9255): error: argument of type "const void *" is incompatible with parameter of type "const double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9268): error: argument of type "const void *" is incompatible with parameter of type "const float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9279): error: argument of type "const void *" is incompatible with parameter of type "const float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9292): error: argument of type "const void *" is incompatible with parameter of type "const double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9303): error: argument of type "const void *" is incompatible with parameter of type "const double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9316): error: argument of type "const void *" is incompatible with parameter of type "const int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9327): error: argument of type "const void *" is incompatible with parameter of type "const int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9340): error: argument of type "const void *" is incompatible with parameter of type "const long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9352): error: argument of type "const void *" is incompatible with parameter of type "const long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9365): error: argument of type "const void *" is incompatible with parameter of type "const int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9376): error: argument of type "const void *" is incompatible with parameter of type "const int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9389): error: argument of type "const void *" is incompatible with parameter of type "const long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9401): error: argument of type "const void *" is incompatible with parameter of type "const long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9410): error: argument of type "void *" is incompatible with parameter of type "float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9419): error: argument of type "void *" is incompatible with parameter of type "float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9428): error: argument of type "void *" is incompatible with parameter of type "double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9437): error: argument of type "void *" is incompatible with parameter of type "double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9445): error: argument of type "void *" is incompatible with parameter of type "float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9454): error: argument of type "void *" is incompatible with parameter of type "float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9463): error: argument of type "void *" is incompatible with parameter of type "double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9472): error: argument of type "void *" is incompatible with parameter of type "double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9481): error: argument of type "void *" is incompatible with parameter of type "int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9490): error: argument of type "void *" is incompatible with parameter of type "int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9499): error: argument of type "void *" is incompatible with parameter of type "long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9508): error: argument of type "void *" is incompatible with parameter of type "long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9517): error: argument of type "void *" is incompatible with parameter of type "int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9526): error: argument of type "void *" is incompatible with parameter of type "int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9535): error: argument of type "void *" is incompatible with parameter of type "long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9544): error: argument of type "void *" is incompatible with parameter of type "long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512pfintrin.h(55): error: argument of type "const void *" is incompatible with parameter of type "const long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512pfintrin.h(63): error: argument of type "const void *" is incompatible with parameter of type "const int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512pfintrin.h(73): error: argument of type "const void *" is incompatible with parameter of type "const long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512pfintrin.h(81): error: argument of type "const void *" is incompatible with parameter of type "const int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512pfintrin.h(91): error: argument of type "void *" is incompatible with parameter of type "const long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512pfintrin.h(100): error: argument of type "void *" is incompatible with parameter of type "const int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512pfintrin.h(109): error: argument of type "void *" is incompatible with parameter of type "const long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512pfintrin.h(117): error: argument of type "void *" is incompatible with parameter of type "const int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512pfintrin.h(127): error: argument of type "void *" is incompatible with parameter of type "const long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512pfintrin.h(136): error: argument of type "void *" is incompatible with parameter of type "const int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512pfintrin.h(145): error: argument of type "void *" is incompatible with parameter of type "const long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512pfintrin.h(153): error: argument of type "void *" is incompatible with parameter of type "const int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9220): error: argument of type "const void *" is incompatible with parameter of type "const float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9231): error: argument of type "const void *" is incompatible with parameter of type "const float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9244): error: argument of type "const void *" is incompatible with parameter of type "const double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9255): error: argument of type "const void *" is incompatible with parameter of type "const double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9268): error: argument of type "const void *" is incompatible with parameter of type "const float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9279): error: argument of type "const void *" is incompatible with parameter of type "const float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9292): error: argument of type "const void *" is incompatible with parameter of type "const double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9303): error: argument of type "const void *" is incompatible with parameter of type "const double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9316): error: argument of type "const void *" is incompatible with parameter of type "const int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9327): error: argument of type "const void *" is incompatible with parameter of type "const int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9340): error: argument of type "const void *" is incompatible with parameter of type "const long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9352): error: argument of type "const void *" is incompatible with parameter of type "const long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9365): error: argument of type "const void *" is incompatible with parameter of type "const int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9376): error: argument of type "const void *" is incompatible with parameter of type "const int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9389): error: argument of type "const void *" is incompatible with parameter of type "const long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9401): error: argument of type "const void *" is incompatible with parameter of type "const long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9410): error: argument of type "void *" is incompatible with parameter of type "float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9419): error: argument of type "void *" is incompatible with parameter of type "float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9428): error: argument of type "void *" is incompatible with parameter of type "double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9437): error: argument of type "void *" is incompatible with parameter of type "double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9445): error: argument of type "void *" is incompatible with parameter of type "float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9454): error: argument of type "void *" is incompatible with parameter of type "float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9463): error: argument of type "void *" is incompatible with parameter of type "double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9472): error: argument of type "void *" is incompatible with parameter of type "double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9481): error: argument of type "void *" is incompatible with parameter of type "int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9490): error: argument of type "void *" is incompatible with parameter of type "int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9499): error: argument of type "void *" is incompatible with parameter of type "long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9508): error: argument of type "void *" is incompatible with parameter of type "long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9517): error: argument of type "void *" is incompatible with parameter of type "int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9526): error: argument of type "void *" is incompatible with parameter of type "int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9535): error: argument of type "void *" is incompatible with parameter of type "long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(10799): error: argument of type "const void *" is incompatible with parameter of type "const float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9544): error: argument of type "void *" is incompatible with parameter of type "long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(10811): error: argument of type "const void *" is incompatible with parameter of type "const float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(10823): error: argument of type "const void *" is incompatible with parameter of type "const double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(10835): error: argument of type "const void *" is incompatible with parameter of type "const double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(10847): error: argument of type "const void *" is incompatible with parameter of type "const float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(10859): error: argument of type "const void *" is incompatible with parameter of type "const float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(10871): error: argument of type "const void *" is incompatible with parameter of type "const double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(10883): error: argument of type "const void *" is incompatible with parameter of type "const double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(10895): error: argument of type "const void *" is incompatible with parameter of type "const int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(10907): error: argument of type "const void *" is incompatible with parameter of type "const int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(10919): error: argument of type "const void *" is incompatible with parameter of type "const long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(10931): error: argument of type "const void *" is incompatible with parameter of type "const long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(10943): error: argument of type "const void *" is incompatible with parameter of type "const int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(10955): error: argument of type "const void *" is incompatible with parameter of type "const int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(10967): error: argument of type "const void *" is incompatible with parameter of type "const long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(10979): error: argument of type "const void *" is incompatible with parameter of type "const long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512pfintrin.h(55): error: argument of type "const void *" is incompatible with parameter of type "const long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(10989): error: argument of type "void *" is incompatible with parameter of type "float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512pfintrin.h(63): error: argument of type "const void *" is incompatible with parameter of type "const int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512pfintrin.h(73): error: argument of type "const void *" is incompatible with parameter of type "const long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11000): error: argument of type "void *" is incompatible with parameter of type "float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512pfintrin.h(81): error: argument of type "const void *" is incompatible with parameter of type "const int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512pfintrin.h(91): error: argument of type "void *" is incompatible with parameter of type "const long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11009): error: argument of type "void *" is incompatible with parameter of type "float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512pfintrin.h(100): error: argument of type "void *" is incompatible with parameter of type "const int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512pfintrin.h(109): error: argument of type "void *" is incompatible with parameter of type "const long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11020): error: argument of type "void *" is incompatible with parameter of type "float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512pfintrin.h(117): error: argument of type "void *" is incompatible with parameter of type "const int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512pfintrin.h(127): error: argument of type "void *" is incompatible with parameter of type "const long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512pfintrin.h(136): error: argument of type "void *" is incompatible with parameter of type "const int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512pfintrin.h(145): error: argument of type "void *" is incompatible with parameter of type "const long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11029): error: argument of type "void *" is incompatible with parameter of type "double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512pfintrin.h(153): error: argument of type "void *" is incompatible with parameter of type "const int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11040): error: argument of type "void *" is incompatible with parameter of type "double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11049): error: argument of type "void *" is incompatible with parameter of type "double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11060): error: argument of type "void *" is incompatible with parameter of type "double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11069): error: argument of type "void *" is incompatible with parameter of type "float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11080): error: argument of type "void *" is incompatible with parameter of type "float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11089): error: argument of type "void *" is incompatible with parameter of type "float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11100): error: argument of type "void *" is incompatible with parameter of type "float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11109): error: argument of type "void *" is incompatible with parameter of type "double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11120): error: argument of type "void *" is incompatible with parameter of type "double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11129): error: argument of type "void *" is incompatible with parameter of type "double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11140): error: argument of type "void *" is incompatible with parameter of type "double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11149): error: argument of type "void *" is incompatible with parameter of type "int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11160): error: argument of type "void *" is incompatible with parameter of type "int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11169): error: argument of type "void *" is incompatible with parameter of type "int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11180): error: argument of type "void *" is incompatible with parameter of type "int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11189): error: argument of type "void *" is incompatible with parameter of type "long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11200): error: argument of type "void *" is incompatible with parameter of type "long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11209): error: argument of type "void *" is incompatible with parameter of type "long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11220): error: argument of type "void *" is incompatible with parameter of type "long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11229): error: argument of type "void *" is incompatible with parameter of type "int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11240): error: argument of type "void *" is incompatible with parameter of type "int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11249): error: argument of type "void *" is incompatible with parameter of type "int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11260): error: argument of type "void *" is incompatible with parameter of type "int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11269): error: argument of type "void *" is incompatible with parameter of type "long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11280): error: argument of type "void *" is incompatible with parameter of type "long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11289): error: argument of type "void *" is incompatible with parameter of type "long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11300): error: argument of type "void *" is incompatible with parameter of type "long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(10799): error: argument of type "const void *" is incompatible with parameter of type "const float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(10811): error: argument of type "const void *" is incompatible with parameter of type "const float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(10823): error: argument of type "const void *" is incompatible with parameter of type "const double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(10835): error: argument of type "const void *" is incompatible with parameter of type "const double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(10847): error: argument of type "const void *" is incompatible with parameter of type "const float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(10859): error: argument of type "const void *" is incompatible with parameter of type "const float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(10871): error: argument of type "const void *" is incompatible with parameter of type "const double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(10883): error: argument of type "const void *" is incompatible with parameter of type "const double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(10895): error: argument of type "const void *" is incompatible with parameter of type "const int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(10907): error: argument of type "const void *" is incompatible with parameter of type "const int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(10919): error: argument of type "const void *" is incompatible with parameter of type "const long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(10931): error: argument of type "const void *" is incompatible with parameter of type "const long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(10943): error: argument of type "const void *" is incompatible with parameter of type "const int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(10955): error: argument of type "const void *" is incompatible with parameter of type "const int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(10967): error: argument of type "const void *" is incompatible with parameter of type "const long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(10979): error: argument of type "const void *" is incompatible with parameter of type "const long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(10989): error: argument of type "void *" is incompatible with parameter of type "float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11000): error: argument of type "void *" is incompatible with parameter of type "float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11009): error: argument of type "void *" is incompatible with parameter of type "float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11020): error: argument of type "void *" is incompatible with parameter of type "float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11029): error: argument of type "void *" is incompatible with parameter of type "double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11040): error: argument of type "void *" is incompatible with parameter of type "double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11049): error: argument of type "void *" is incompatible with parameter of type "double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11060): error: argument of type "void *" is incompatible with parameter of type "double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11069): error: argument of type "void *" is incompatible with parameter of type "float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11080): error: argument of type "void *" is incompatible with parameter of type "float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11089): error: argument of type "void *" is incompatible with parameter of type "float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11100): error: argument of type "void *" is incompatible with parameter of type "float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11109): error: argument of type "void *" is incompatible with parameter of type "double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11120): error: argument of type "void *" is incompatible with parameter of type "double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11129): error: argument of type "void *" is incompatible with parameter of type "double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11140): error: argument of type "void *" is incompatible with parameter of type "double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11149): error: argument of type "void *" is incompatible with parameter of type "int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11160): error: argument of type "void *" is incompatible with parameter of type "int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11169): error: argument of type "void *" is incompatible with parameter of type "int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11180): error: argument of type "void *" is incompatible with parameter of type "int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11189): error: argument of type "void *" is incompatible with parameter of type "long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11200): error: argument of type "void *" is incompatible with parameter of type "long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11209): error: argument of type "void *" is incompatible with parameter of type "long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11220): error: argument of type "void *" is incompatible with parameter of type "long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11229): error: argument of type "void *" is incompatible with parameter of type "int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11240): error: argument of type "void *" is incompatible with parameter of type "int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11249): error: argument of type "void *" is incompatible with parameter of type "int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11260): error: argument of type "void *" is incompatible with parameter of type "int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11269): error: argument of type "void *" is incompatible with parameter of type "long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11280): error: argument of type "void *" is incompatible with parameter of type "long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11289): error: argument of type "void *" is incompatible with parameter of type "long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512vlintrin.h(11300): error: argument of type "void *" is incompatible with parameter of type "long long *"
92 errors detected in the compilation of "/tmp/tmpxft_00004c96_00000000-7_cusolver_dlibapi.cpp1.ii".
CMake Error at dlib_generated_cusolver_dlibapi.cu.o.cmake:266 (message):
Error generating file
/tmp/pip-install-eape6x3c/dlib/build/temp.linux-x86_64-3.5/dlib_build/CMakeFiles/dlib.dir/cuda/./dlib_generated_cusolver_dlibapi.cu.o
dlib_build/CMakeFiles/dlib.dir/build.make:70: recipe for target 'dlib_build/CMakeFiles/dlib.dir/cuda/dlib_generated_cusolver_dlibapi.cu.o' failed
make[2]: *** [dlib_build/CMakeFiles/dlib.dir/cuda/dlib_generated_cusolver_dlibapi.cu.o] Error 1
make[2]: *** Waiting for unfinished jobs....
92 errors detected in the compilation of "/tmp/tmpxft_00004c8e_00000000-7_cuda_dlib.cpp1.ii".
CMake Error at dlib_generated_cuda_dlib.cu.o.cmake:266 (message):
Error generating file
/tmp/pip-install-eape6x3c/dlib/build/temp.linux-x86_64-3.5/dlib_build/CMakeFiles/dlib.dir/cuda/./dlib_generated_cuda_dlib.cu.o
dlib_build/CMakeFiles/dlib.dir/build.make:63: recipe for target 'dlib_build/CMakeFiles/dlib.dir/cuda/dlib_generated_cuda_dlib.cu.o' failed
make[2]: *** [dlib_build/CMakeFiles/dlib.dir/cuda/dlib_generated_cuda_dlib.cu.o] Error 1
CMakeFiles/Makefile2:140: recipe for target 'dlib_build/CMakeFiles/dlib.dir/all' failed
make[1]: *** [dlib_build/CMakeFiles/dlib.dir/all] Error 2
Makefile:83: recipe for target 'all' failed
make: *** [all] Error 2
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-install-eape6x3c/dlib/setup.py", line 261, in <module>
'Topic :: Software Development',
File "/home/lib/python3.5/site-packages/setuptools/__init__.py", line 145, in setup
return distutils.core.setup(**attrs)
File "/usr/lib/python3.5/distutils/core.py", line 148, in setup
dist.run_commands()
File "/usr/lib/python3.5/distutils/dist.py", line 955, in run_commands
self.run_command(cmd)
File "/usr/lib/python3.5/distutils/dist.py", line 974, in run_command
cmd_obj.run()
File "/home/lib/python3.5/site-packages/setuptools/command/install.py", line 61, in run
return orig.install.run(self)
File "/usr/lib/python3.5/distutils/command/install.py", line 583, in run
self.run_command('build')
File "/usr/lib/python3.5/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/usr/lib/python3.5/distutils/dist.py", line 974, in run_command
cmd_obj.run()
File "/usr/lib/python3.5/distutils/command/build.py", line 135, in run
self.run_command(cmd_name)
File "/usr/lib/python3.5/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/usr/lib/python3.5/distutils/dist.py", line 974, in run_command
cmd_obj.run()
File "/tmp/pip-install-eape6x3c/dlib/setup.py", line 135, in run
self.build_extension(ext)
File "/tmp/pip-install-eape6x3c/dlib/setup.py", line 175, in build_extension
subprocess.check_call(cmake_build, cwd=build_folder)
File "/usr/lib/python3.5/subprocess.py", line 581, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', '--build', '.', '--config', 'Release', '--', '-j6']' returned non-zero exit status 2
----------------------------------------
ERROR: Command errored out with exit status 1: /home/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-eape6x3c/dlib/setup.py'"'"'; __file__='"'"'/tmp/pip-install-eape6x3c/dlib/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-j8x2batv/install-record.txt --single-version-externally-managed --compile --install-headers /home/include/site/python3.5/dlib Check the logs for full command output.
```
Please help me to find the issue. | open | 2019-07-30T10:59:17Z | 2024-08-08T16:34:58Z | https://github.com/ageitgey/face_recognition/issues/894 | [] | curious1me | 18 |
aleju/imgaug | machine-learning | 802 | Change saturation of yellow tone | Hello,
I'm looking for a way to change the strength of the yellow tones in the image.
My first thought was to change the temperature of the image with `ChangeColorTemperature()`, however that throws a known error ([Issue #720](https://github.com/aleju/imgaug/issues/720)).
My next idea was to change the image from RGB to a different colorspace and then augment only one of the channels, however CMYK is not available as a colorspace so that also doesn't work.
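For what it's worth, the colorspace idea can be sketched without CMYK: convert to HSV and scale the saturation channel only for pixels whose hue falls near yellow. Below is a minimal stdlib illustration — the `boost_yellow_saturation` helper, the hue window, and the flat pixel-list representation are my own assumptions, not imgaug API:

```python
import colorsys

def boost_yellow_saturation(pixels, factor=1.5, hue_center=1 / 6, hue_width=1 / 12):
    """Scale saturation for pixels whose hue lies near yellow.

    pixels: list of (r, g, b) floats in [0, 1]; pure yellow sits at hue 1/6.
    """
    out = []
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        # shortest distance to the yellow hue on the hue circle
        dist = min(abs(h - hue_center), 1 - abs(h - hue_center))
        if dist < hue_width:
            s = min(1.0, s * factor)
        out.append(colorsys.hsv_to_rgb(h, s, v))
    return out

# a muted yellow gets more saturated; blue is left untouched
print(boost_yellow_saturation([(0.8, 0.8, 0.4), (0.2, 0.2, 0.8)]))
```

With real image arrays the same per-pixel logic could be vectorized (e.g. with NumPy) on the HSV-converted image before converting back.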
Any help would be highly appreciated! | open | 2021-12-01T16:09:05Z | 2021-12-01T16:09:05Z | https://github.com/aleju/imgaug/issues/802 | [] | MariaKalt | 0 |
vanna-ai/vanna | data-visualization | 320 | The message is too long and exceeds the maximum number of tokens | This model's maximum context length is 16385 tokens. However, your messages resulted in 26620 tokens. Please reduce the length of the messages. | closed | 2024-03-28T09:31:26Z | 2024-03-28T09:36:42Z | https://github.com/vanna-ai/vanna/issues/320 | [
"bug"
] | strawberrymmmm | 0 |
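A generic way to avoid a token-limit error like the one above is to trim the retrieved context to a token budget before the prompt is assembled. A minimal sketch — the rough 4-characters-per-token estimate and the `trim_to_budget` helper are assumptions of mine, not Vanna's API:

```python
def estimate_tokens(text):
    # rough heuristic: roughly 4 characters per token for English text
    return max(1, len(text) // 4)

def trim_to_budget(chunks, budget):
    """Keep whole chunks, in order, until the estimated token budget is spent."""
    kept, used = [], 0
    for chunk in chunks:
        cost = estimate_tokens(chunk)
        if used + cost > budget:
            break
        kept.append(chunk)
        used += cost
    return kept

docs = ["a" * 400, "b" * 400, "c" * 400]  # ~100 estimated tokens each
print(trim_to_budget(docs, 250))
```

A real implementation would use the model's actual tokenizer instead of a character heuristic, but the budget-and-truncate shape stays the same.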
pytorch/pytorch | python | 149786 | CudaGraphs Failing on Blackwell | # Summary
Run repro:
```py
import torch
def func(a):
return torch.softmax(a, dim=-1, dtype=torch.float32)
a = torch.randn(4, 16, dtype=torch.float16, device="cuda")
g = torch.cuda.CUDAGraph()
torch.cuda.synchronize()
with torch.cuda.graph(g):
out = func(a)
torch.cuda.synchronize()
g.replay()
torch.cuda.synchronize()
print(out.shape)
```
Result
```Shell
Traceback (most recent call last):
File "/home/drisspg/meta/scripts/misc/cuda_graph.py", line 13, in <module>
out = func(a)
^^^^^^^
File "/home/drisspg/meta/scripts/misc/cuda_graph.py", line 4, in func
return torch.softmax(a, dim=-1, dtype=torch.float32)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: CUDA error: operation not permitted when stream is capturing
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
During handling of the above exception, another exception occurred:
```
cc @ptrblck @msaroufim @eqy @mcarilli @ezyang @eellison @penguinwu @BoyuanFeng | open | 2025-03-22T00:33:04Z | 2025-03-24T17:35:36Z | https://github.com/pytorch/pytorch/issues/149786 | [
"module: cuda",
"triaged",
"module: cuda graphs",
"Blackwell"
] | drisspg | 1 |
miguelgrinberg/flasky | flask | 170 | sqlite3.OperationalError: (5a) | Hoping for help! I'm new to Flask.
First, I check out `5a`:
```
git checkout 5a
```
Then, I run `hello.py`:
```
python hello.py
```
and I got this error
```
(venv) [root@HarryPotter flasky]# python hello.py
/root/pythonapp/Flask_Web_Development/flasky/venv/lib/python2.7/site-packages/sqlalchemy/engine/default.py:312: SAWarning: Exception attempting to detect unicode returns: OperationalError('(OperationalError) near "\xf1\x90\x81\x93\xf1\x90\x81\x8c\xf5\x80\x81\x83\xf0\xb0\x80\xa0\xf4\xb0\x81\x81\xfa\x80\x81\x94\xfd\x80\x80\xa7\xfc\xb0\x81\xa5\xf8\x80\x81\xb4\xfb\x80\x81\xb0\xfa\x90\x81\xa1\xf8\x80\x81\xae\xf9\x90\x81\xb2\xfd\x90\x81\xb4\xfb\xa0\x81\xb2\xf9\xb0\x81\xb3\xf0\x90\x80\xa0\xf8\x80\x81\x93\xf0\x90\x81\x96\xf0\xb0\x81\x92\xf0\x90\x81\x88\xfa\x80\x81\x92\xfc\x80\x80\xb6\xfa\x90\x80\xa9\xf0\x90\x80\xa0\xf8\x80\x81\x93\xfb\xa0\x81\xa1\xfb\xa0\x81\xaf\xfc\x90\x81\x9f": syntax error',)
results = set([check_unicode(test) for test in tests])
/root/pythonapp/Flask_Web_Development/flasky/venv/lib/python2.7/site-packages/sqlalchemy/engine/default.py:312: SAWarning: Exception attempting to detect unicode returns: OperationalError('(OperationalError) near "\xf1\x90\x81\x93\xf1\x90\x81\x8c\xf5\x80\x81\x83\xf0\xb0\x80\xa0\xf4\xb0\x81\x81\xfa\x80\x81\x94\xfd\x80\x80\xa7\xfc\xb0\x81\xa5\xf8\x80\x81\xb4\xfb\xa0\x81\xb5\xf8\xb0\x81\xa9\xf9\x80\x81\xaf\xf8\x80\x81\xa5\xf9\x90\x81\xb2\xfd\x90\x81\xb4\xfb\xa0\x81\xb2\xf9\xb0\x81\xb3\xf0\x90\x80\xa0\xf8\x80\x81\x93\xf0\x90\x81\x96\xf0\xb0\x81\x92\xf0\x90\x81\x88\xfa\x80\x81\x92\xfc\x80\x80\xb6\xfa\x90\x80\xa9\xf0\x90\x80\xa0\xf8\x80\x81\x93\xfb\xa0\x81\xa1\xfb\xa0\x81\xaf\xfc\x90\x81\x9f": syntax error',)
results = set([check_unicode(test) for test in tests])
Traceback (most recent call last):
File "hello.py", line 74, in <module>
db.create_all()
File "/root/pythonapp/Flask_Web_Development/flasky/venv/lib/python2.7/site-packages/flask_sqlalchemy/__init__.py", line 856, in create_all
self._execute_for_all_tables(app, bind, 'create_all')
File "/root/pythonapp/Flask_Web_Development/flasky/venv/lib/python2.7/site-packages/flask_sqlalchemy/__init__.py", line 848, in _execute_for_all_tables
op(bind=self.get_engine(app, bind), tables=tables)
File "/root/pythonapp/Flask_Web_Development/flasky/venv/lib/python2.7/site-packages/sqlalchemy/sql/schema.py", line 3420, in create_all
tables=tables)
File "/root/pythonapp/Flask_Web_Development/flasky/venv/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1727, in _run_visitor
with self._optional_conn_ctx_manager(connection) as conn:
File "/usr/local/lib/python2.7/contextlib.py", line 17, in __enter__
return self.gen.next()
File "/root/pythonapp/Flask_Web_Development/flasky/venv/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1720, in _optional_conn_ctx_manager
with self.contextual_connect() as conn:
File "/root/pythonapp/Flask_Web_Development/flasky/venv/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1910, in contextual_connect
self.pool.connect(),
File "/root/pythonapp/Flask_Web_Development/flasky/venv/lib/python2.7/site-packages/sqlalchemy/pool.py", line 338, in connect
return _ConnectionFairy._checkout(self)
File "/root/pythonapp/Flask_Web_Development/flasky/venv/lib/python2.7/site-packages/sqlalchemy/pool.py", line 645, in _checkout
fairy = _ConnectionRecord.checkout(pool)
File "/root/pythonapp/Flask_Web_Development/flasky/venv/lib/python2.7/site-packages/sqlalchemy/pool.py", line 440, in checkout
rec = pool._do_get()
File "/root/pythonapp/Flask_Web_Development/flasky/venv/lib/python2.7/site-packages/sqlalchemy/pool.py", line 1058, in _do_get
return self._create_connection()
File "/root/pythonapp/Flask_Web_Development/flasky/venv/lib/python2.7/site-packages/sqlalchemy/pool.py", line 285, in _create_connection
return _ConnectionRecord(self)
File "/root/pythonapp/Flask_Web_Development/flasky/venv/lib/python2.7/site-packages/sqlalchemy/pool.py", line 416, in __init__
exec_once(self.connection, self)
File "/root/pythonapp/Flask_Web_Development/flasky/venv/lib/python2.7/site-packages/sqlalchemy/event/attr.py", line 250, in exec_once
self(*args, **kw)
File "/root/pythonapp/Flask_Web_Development/flasky/venv/lib/python2.7/site-packages/sqlalchemy/event/attr.py", line 260, in __call__
fn(*args, **kw)
File "/root/pythonapp/Flask_Web_Development/flasky/venv/lib/python2.7/site-packages/sqlalchemy/util/langhelpers.py", line 1219, in go
return once_fn(*arg, **kw)
File "/root/pythonapp/Flask_Web_Development/flasky/venv/lib/python2.7/site-packages/sqlalchemy/engine/strategies.py", line 166, in first_connect
dialect.initialize(c)
File "/root/pythonapp/Flask_Web_Development/flasky/venv/lib/python2.7/site-packages/sqlalchemy/engine/default.py", line 248, in initialize
self._check_unicode_description(connection):
File "/root/pythonapp/Flask_Web_Development/flasky/venv/lib/python2.7/site-packages/sqlalchemy/engine/default.py", line 335, in _check_unicode_description
]).compile(dialect=self)
sqlite3.OperationalError: near "���𐀠�������l": syntax error
```
| closed | 2016-08-03T14:07:49Z | 2018-03-10T16:35:55Z | https://github.com/miguelgrinberg/flasky/issues/170 | [
"question"
] | 363549406 | 6 |
robotframework/robotframework | automation | 4544 | [BUG] Invalid parsing of reused variables in ***VARIABLES*** section | ## 1. VERSION INFORMATION
- Robot Framework 5.0.1
- Python 3.9.10
- macOS 10.15
## 2. STEPS
I'd like to reuse variables to create new ones.
When I try to load this into a Robot model, it cannot parse `${var2}` - see the traceback below.
I can create a workaround: check the variable store items with the regex `r"(\$\{.+\})"` and replace each reference with its actual value, but then emitting the Robot file will write my second variable as `${var2}= Say Hi!`, which is not what I want. This seems to be a parsing problem.
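The regex workaround described above can be sketched as follows — a minimal illustration only; the `resolve` helper and its fixed-point loop are my own construction, not Robot Framework's actual resolution logic:

```python
import re

VAR_RE = re.compile(r"\$\{([^}]+)\}")

def resolve(variables):
    """Repeatedly substitute ${name} references until nothing changes.

    variables: mapping of plain names to string values,
    e.g. {"var1": "Hi!", "var2": "Say ${var1}"}.
    """
    resolved = dict(variables)
    for _ in range(len(resolved)):  # enough passes for chained references
        def substitute(match):
            name = match.group(1)
            value = resolved.get(name)
            # only inline values that are themselves fully resolved;
            # unknown names are left untouched
            if value is not None and not VAR_RE.search(value):
                return value
            return match.group(0)
        updated = {k: VAR_RE.sub(substitute, v) for k, v in resolved.items()}
        if updated == resolved:
            break
        resolved = updated
    return resolved

print(resolve({"var1": "Hi!", "var2": "Say ${var1}"}))
```

Note this resolves values in memory only; it does not round-trip back into a `.robot` file, which is where the rewriting problem described above appears.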
#### ROBOT
```robotframework
***VARIABLES***
${var1} Hi!
${var2} Say ${var1}
```
## 3. TRACEBACK
```shell
Error in file '<_io.TextIOWrapper name='tests\\mdcli.robot' mode='r' encoding='utf-8'>' on line 22: Setting variable '${var2}' failed: Variable '${var1}' not found.
Traceback (most recent call last):
File "C:\Users\johndoe\venv\lib\site-packages\robot\variables\store.py", line 46, in _resolve_delayed
self.data[name] = value.resolve(self._variables)
File "C:\Users\johndoe\venv\lib\site-packages\robot\variables\tablesetter.py", line 69, in resolve
return self._replace_variables(self._values, variables)
File "C:\Users\johndoe\venv\lib\site-packages\robot\variables\tablesetter.py", line 105, in _replace_variables
return variables.replace_scalar(values[0])
File "C:\Users\johndoe\venv\lib\site-packages\robot\variables\variables.py", line 55, in replace_scalar
return self._replacer.replace_scalar(item, ignore_errors)
File "C:\Users\johndoe\venv\lib\site-packages\robot\variables\replacer.py", line 83, in replace_scalar
return self._replace_scalar(match, ignore_errors)
File "C:\Users\johndoe\venv\lib\site-packages\robot\variables\replacer.py", line 92, in _replace_scalar
return self.replace_string(match, ignore_errors=ignore_errors)
File "C:\Users\johndoe\venv\lib\site-packages\robot\variables\replacer.py", line 104, in replace_string
return self._replace_string(match, unescaper, ignore_errors)
File "C:\Users\johndoe\venv\lib\site-packages\robot\variables\replacer.py", line 111, in _replace_string
safe_str(self._get_variable_value(match, ignore_errors))
File "C:\Users\johndoe\venv\lib\site-packages\robot\variables\replacer.py", line 125, in _get_variable_value
value = self._finder.find(match)
File "C:\Users\johndoe\venv\lib\site-packages\robot\variables\finders.py", line 49, in find
variable_not_found(name, self._store.data)
File "C:\Users\johndoe\venv\lib\site-packages\robot\variables\notfound.py", line 33, in variable_not_found
raise VariableError(message)
robot.errors.VariableError: Variable '${var1}' not found.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\johndoe\getvars.py", line 12, in <module>
model = load_model(getLogger("SATS VARS"), f)
File "C:\Users\johndoe\venv\lib\site-packages\oatscommon\robot_models.py", line 990, in load_model
suites = RobotTestSuite.create_models(log, test_settings, suite)
File "C:\Users\johndoe\venv\lib\site-packages\oatscommon\robot_models.py", line 883, in create_models
variables = RobotVariable.create_models(log, variable_store.as_dict())
File "C:\Users\johndoe\venv\lib\site-packages\robot\variables\store.py", line 125, in as_dict
return NormalizedDict(variables, ignore='_')
File "C:\Users\johndoe\venv\lib\site-packages\robot\utils\normalizing.py", line 65, in __init__
self._add_initial(initial)
File "C:\Users\johndoe\venv\lib\site-packages\robot\utils\normalizing.py", line 69, in _add_initial
for key, value in items:
File "C:\Users\johndoe\venv\lib\site-packages\robot\variables\store.py", line 122, in <genexpr>
variables = (self._decorate(name, self[name]) for name in self)
File "C:\Users\johndoe\venv\lib\site-packages\robot\variables\store.py", line 64, in __getitem__
return self._resolve_delayed(name, self.data[name])
File "C:\Users\johndoe\venv\lib\site-packages\robot\variables\store.py", line 52, in _resolve_delayed
variable_not_found('${%s}' % name, self.data)
File "C:\Users\johndoe\venv\lib\site-packages\robot\variables\notfound.py", line 33, in variable_not_found
raise VariableError(message)
robot.errors.VariableError: Variable '${var2}' not found.
```
I've examined the `TestSuite().resource.variables` object and got `Variable()` objects with:
```
name: ${var1}
value: ("Hi!",)
name: ${var2}
value: ("Say ${var1}",)
``` | closed | 2022-11-23T15:40:33Z | 2023-12-20T01:15:15Z | https://github.com/robotframework/robotframework/issues/4544 | [] | mkaskow | 6 |
freqtrade/freqtrade | python | 10924 | Regarding the time difference between bot data and my place of residence |
## Describe your environment
* Operating system: Windows 11
* Python Version: Python 3.12.7
* CCXT version: ccxt==4.4.24
* Freqtrade Version: freqtrade 2024.10
## Your question
You've worked hard, brother. You've helped me so much, and my gratitude is beyond words.
I am in China and I am using a cloud server in Hong Kong, because my OKX account's trading times are calculated in domestic time, i.e. the UTC+8 (China Standard Time) zone. I have verified that my server time matches both my account and my place of residence. So why is the data I see in FreqUI and Telegram delayed rather than shown in real time, even though this is just a simulated trade?
Or is it because OKX's data does not allow real-time collection?
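As a sanity check on the clocks involved: epoch timestamps (such as the JWT `iat` value that appears in the log below) are UTC instants, and only the display step should shift them to UTC+8. A small sketch of that conversion — whether FreqUI actually converts client-side is an assumption on my part:

```python
from datetime import datetime, timezone, timedelta

# Epoch values are timezone-free UTC instants; 1731441218 is the "iat"
# field of the JWT visible in the log, decoded here purely for illustration.
iat = 1731441218
utc = datetime.fromtimestamp(iat, tz=timezone.utc)
cst = utc.astimezone(timezone(timedelta(hours=8)))  # UTC+8, China Standard Time
print(utc.isoformat(), "->", cst.isoformat())
```

If server, container, and exchange all agree on the underlying epoch values, an apparent "delay" is more likely a candle-close/refresh interval than a timezone mismatch.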
**server information:**
> root@ser575683352015:~/docker/ft_userdata# timedatectl status
> Local time: Wed 2024-11-13 04:07:47 CST
> Universal time: Tue 2024-11-12 20:07:47 UTC
> RTC time: Tue 2024-11-12 20:07:48
> Time zone: Asia/Shanghai (CST, +0800)
> System clock synchronized: yes
> systemd-timesyncd.service active: yes
> RTC in local TZ: no
> root@ser575683352015:~/docker/ft_userdata# date
> Wed Nov 13 04:07:51 CST 2024
> root@ser575683352015:~/docker/ft_userdata# docker exec -it freqtrade /bin/bash
> ftuser@c71890feabd6:/freqtrade$ date
> Wed Nov 13 04:08:04 CST 2024
**Docker-compose.yml file**
I added the `environment: TZ: Asia/Shanghai` parameter:
```
---
services:
freqtrade:
image: freqtradeorg/freqtrade:stable
environment:
TZ: Asia/Shanghai
# image: freqtradeorg/freqtrade:develop
# Use plotting image
# image: freqtradeorg/freqtrade:develop_plot
# # Enable GPU Image and GPU Resources (only relevant for freqAI)
# # Make sure to uncomment the whole deploy section
# deploy:
# resources:
# reservations:
# devices:
# - driver: nvidia
# count: 1
# capabilities: [gpu]
# Build step - only needed when additional dependencies are needed
# build:
# context: .
# dockerfile: "./docker/Dockerfile.custom"
restart: unless-stopped
container_name: freqtrade
volumes:
- "./user_data:/freqtrade/user_data"
# Expose api on port 8080 (localhost only)
# Please read the https://www.freqtrade.io/en/stable/rest-api/ documentation
# for more information.
ports:
- "0.0.0.0:8080:8080"
# Default command used when running `docker compose up`
command: >
trade
--logfile /freqtrade/user_data/logs/freqtrade.log
--db-url sqlite:////freqtrade/user_data/tradesv3.sqlite
--config /freqtrade/user_data/config.json
--strategy MeanReversionLongStrategy
```
Why do these errors appear every time I refresh my FreqUI?
**log:**
```
freqtrade | 2024-11-13 04:02:33,258 - uvicorn.error - INFO - Uvicorn running on http://0.0.0.0:8080 (Press CTRL+C to quit)
freqtrade | 2024-11-13 04:02:33,312 - freqtrade.resolvers.iresolver - INFO - Using resolved pairlist StaticPairList from '/freqtrade/freqtrade/plugins/pairlist/StaticPairList.py'...
freqtrade | 2024-11-13 04:02:33,324 - freqtrade.plugins.pairlist.IPairList - WARNING - Pair MATIC/USDT:USDT is not compatible with exchange OKX. Removing it from whitelist..
freqtrade | 2024-11-13 04:02:33,330 - freqtrade.plugins.pairlistmanager - INFO - Whitelist with 8 pairs: ['DOGE/USDT:USDT', 'SOL/USDT:USDT', 'ETH/USDT:USDT', 'BTC/USDT:USDT', 'LTC/USDT:USDT', 'XRP/USDT:USDT', 'BNB/USDT:USDT', 'DOT/USDT:USDT']
freqtrade | 2024-11-13 04:02:33,332 - freqtrade.strategy.hyper - INFO - No params for buy found, using default values.
freqtrade | 2024-11-13 04:02:33,332 - freqtrade.strategy.hyper - INFO - No params for sell found, using default values.
freqtrade | 2024-11-13 04:02:33,333 - freqtrade.strategy.hyper - INFO - No params for protection found, using default values.
freqtrade | 2024-11-13 04:02:33,333 - freqtrade.plugins.protectionmanager - INFO - No protection Handlers defined.
freqtrade | 2024-11-13 04:02:33,333 - freqtrade.rpc.rpc_manager - INFO - Sending rpc message: {'type': status, 'status': 'running'}
freqtrade | 2024-11-13 04:02:33,337 - freqtrade.worker - INFO - Changing state to: RUNNING
freqtrade | 2024-11-13 04:02:33,351 - freqtrade.rpc.rpc_manager - INFO - Sending rpc message: {'type': warning, 'status': 'Dry run is enabled. All trades are simulated.'}
freqtrade | 2024-11-13 04:02:33,351 - freqtrade.rpc.rpc_manager - INFO - Sending rpc message: {'type': startup, 'status': "*Exchange:* `okx`\n*Stake per trade:* `unlimited USDT`\n*Minimum ROI:* `{'0': 0.04, '30': 0.06, '120': 0.1}`\n*Trailing Stoploss:* `-0.1`\n*Position adjustment:* `Off`\n*Timeframe:* `5m`\n*Strategy:* `MeanReversionLongStrategy`"}
freqtrade | 2024-11-13 04:02:33,352 - freqtrade.rpc.telegram - INFO - Notification 'startup' not sent.
freqtrade | 2024-11-13 04:02:33,399 - freqtrade.rpc.rpc_manager - INFO - Sending rpc message: {'type': startup, 'status': "Searching for USDT pairs to buy and sell based on [{'StaticPairList': 'StaticPairList'}]"}
freqtrade | 2024-11-13 04:02:33,399 - freqtrade.rpc.telegram - INFO - Notification 'startup' not sent.
freqtrade | 2024-11-13 04:02:33,682 - telegram.ext.Application - INFO - Application started
freqtrade | 2024-11-13 04:02:36,497 - uvicorn.error - INFO - ('87.254.23.2', 39397) - "WebSocket /api/v1/message/ws?token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpZGVudGl0eSI6eyJ1IjoiZnJlcXRyYWRlciJ9LCJleHAiOjE3MzE0NDIxMTgsImlhdCI6MTczMTQ0MTIxOCwidHlwZSI6ImFjY2VzcyJ9.OwQBPwIRQ-E_Uf6KPcpYEO4qcwkPaa9KC4DpMR66Jho" [accepted]
freqtrade | 2024-11-13 04:02:36,497 - freqtrade.rpc.api_server.ws.channel - INFO - Connected to channel - WebSocketChannel(04363232, ('87.254.23.2', 39397))
freqtrade | 2024-11-13 04:02:36,498 - uvicorn.error - INFO - connection open
freqtrade | 2024-11-13 04:02:36,978 - uvicorn.error - INFO - ('87.254.23.2', 46939) - "WebSocket /api/v1/message/ws?token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpZGVudGl0eSI6eyJ1IjoiZnJlcXRyYWRlciJ9LCJleHAiOjE3MzE0NDIxMTgsImlhdCI6MTczMTQ0MTIxOCwidHlwZSI6ImFjY2VzcyJ9.OwQBPwIRQ-E_Uf6KPcpYEO4qcwkPaa9KC4DpMR66Jho" [accepted]
freqtrade | 2024-11-13 04:02:36,979 - freqtrade.rpc.api_server.ws.channel - INFO - Connected to channel - WebSocketChannel(8417eb7c, ('87.254.23.2', 46939))
freqtrade | 2024-11-13 04:02:36,979 - uvicorn.error - INFO - connection open
freqtrade | 2024-11-13 04:02:38,409 - freqtrade.worker - INFO - Bot heartbeat. PID=1, version='2024.10', state='RUNNING'
freqtrade | 2024-11-13 04:02:54,573 - uvicorn.error - INFO - connection closed
freqtrade | 2024-11-13 04:02:54,574 - freqtrade.rpc.api_server.ws.channel - INFO - Disconnected from channel - WebSocketChannel(04363232, ('87.254.23.2', 39397))
freqtrade | 2024-11-13 04:02:54,584 - uvicorn.error - INFO - connection closed
freqtrade | 2024-11-13 04:02:54,584 - freqtrade.rpc.api_server.ws.channel - INFO - Disconnected from channel - WebSocketChannel(8417eb7c, ('87.254.23.2', 46939))
freqtrade | 2024-11-13 04:02:56,520 - uvicorn.error - INFO - ('87.254.23.2', 51883) - "WebSocket /api/v1/message/ws?token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpZGVudGl0eSI6eyJ1IjoiZnJlcXRyYWRlciJ9LCJleHAiOjE3MzE0NDIxMTgsImlhdCI6MTczMTQ0MTIxOCwidHlwZSI6ImFjY2VzcyJ9.OwQBPwIRQ-E_Uf6KPcpYEO4qcwkPaa9KC4DpMR66Jho" [accepted]
freqtrade | 2024-11-13 04:02:56,521 - freqtrade.rpc.api_server.ws.channel - INFO - Connected to channel - WebSocketChannel(79fdd51e, ('87.254.23.2', 51883))
freqtrade | 2024-11-13 04:02:56,522 - uvicorn.error - INFO - connection open
freqtrade | 2024-11-13 04:02:57,033 - uvicorn.error - INFO - ('87.254.23.2', 58439) - "WebSocket /api/v1/message/ws?token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpZGVudGl0eSI6eyJ1IjoiZnJlcXRyYWRlciJ9LCJleHAiOjE3MzE0NDIxMTgsImlhdCI6MTczMTQ0MTIxOCwidHlwZSI6ImFjY2VzcyJ9.OwQBPwIRQ-E_Uf6KPcpYEO4qcwkPaa9KC4DpMR66Jho" [accepted]
freqtrade | 2024-11-13 04:02:57,033 - freqtrade.rpc.api_server.ws.channel - INFO - Connected to channel - WebSocketChannel(e46374bc, ('87.254.23.2', 58439))
freqtrade | 2024-11-13 04:02:57,034 - uvicorn.error - INFO - connection open
freqtrade | 2024-11-13 04:03:38,414 - freqtrade.worker - INFO - Bot heartbeat. PID=1, version='2024.10', state='RUNNING'
freqtrade | 2024-11-13 04:03:56,355 - uvicorn.error - INFO - connection closed
freqtrade | 2024-11-13 04:03:56,356 - freqtrade.rpc.api_server.ws.channel - INFO - Disconnected from channel - WebSocketChannel(79fdd51e, ('87.254.23.2', 51883))
freqtrade | 2024-11-13 04:03:56,875 - uvicorn.error - INFO - connection closed
freqtrade | 2024-11-13 04:03:56,876 - freqtrade.rpc.api_server.ws.channel - INFO - Disconnected from channel - WebSocketChannel(e46374bc, ('87.254.23.2', 58439))
freqtrade | 2024-11-13 04:04:07,615 - uvicorn.error - INFO - ('87.254.23.2', 60077) - "WebSocket /api/v1/message/ws?token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpZGVudGl0eSI6eyJ1IjoiZnJlcXRyYWRlciJ9LCJleHAiOjE3MzE0NDIxMTgsImlhdCI6MTczMTQ0MTIxOCwidHlwZSI6ImFjY2VzcyJ9.OwQBPwIRQ-E_Uf6KPcpYEO4qcwkPaa9KC4DpMR66Jho" [accepted]
freqtrade | 2024-11-13 04:04:07,616 - freqtrade.rpc.api_server.ws.channel - INFO - Connected to channel - WebSocketChannel(16b8640c, ('87.254.23.2', 60077))
freqtrade | 2024-11-13 04:04:07,617 - uvicorn.error - INFO - connection open
freqtrade | 2024-11-13 04:04:07,630 - uvicorn.error - INFO - ('87.254.23.2', 35585) - "WebSocket /api/v1/message/ws?token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpZGVudGl0eSI6eyJ1IjoiZnJlcXRyYWRlciJ9LCJleHAiOjE3MzE0NDIxMTgsImlhdCI6MTczMTQ0MTIxOCwidHlwZSI6ImFjY2VzcyJ9.OwQBPwIRQ-E_Uf6KPcpYEO4qcwkPaa9KC4DpMR66Jho" [accepted]
freqtrade | 2024-11-13 04:04:07,630 - freqtrade.rpc.api_server.ws.channel - INFO - Connected to channel - WebSocketChannel(c29a60d8, ('87.254.23.2', 35585))
freqtrade | 2024-11-13 04:04:07,631 - uvicorn.error - INFO - connection open
freqtrade | 2024-11-13 04:04:38,418 - freqtrade.worker - INFO - Bot heartbeat. PID=1, version='2024.10', state='RUNNING'
freqtrade | 2024-11-13 04:05:37,450 - uvicorn.error - INFO - connection closed
freqtrade | 2024-11-13 04:05:37,451 - freqtrade.rpc.api_server.ws.channel - INFO - Disconnected from channel - WebSocketChannel(16b8640c, ('87.254.23.2', 60077))
freqtrade | 2024-11-13 04:05:37,466 - uvicorn.error - INFO - connection closed
freqtrade | 2024-11-13 04:05:37,466 - freqtrade.rpc.api_server.ws.channel - INFO - Disconnected from channel - WebSocketChannel(c29a60d8, ('87.254.23.2', 35585))
freqtrade | 2024-11-13 04:05:41,002 - freqtrade.worker - INFO - Bot heartbeat. PID=1, version='2024.10', state='RUNNING'
freqtrade | 2024-11-13 04:05:48,008 - uvicorn.error - INFO - ('87.254.23.2', 37557) - "WebSocket /api/v1/message/ws?token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpZGVudGl0eSI6eyJ1IjoiZnJlcXRyYWRlciJ9LCJleHAiOjE3MzE0NDIxMTgsImlhdCI6MTczMTQ0MTIxOCwidHlwZSI6ImFjY2VzcyJ9.OwQBPwIRQ-E_Uf6KPcpYEO4qcwkPaa9KC4DpMR66Jho" [accepted]
freqtrade | 2024-11-13 04:05:48,008 - freqtrade.rpc.api_server.ws.channel - INFO - Connected to channel - WebSocketChannel(45808a71, ('87.254.23.2', 37557))
freqtrade | 2024-11-13 04:05:48,009 - uvicorn.error - INFO - connection open
freqtrade | 2024-11-13 04:05:48,041 - uvicorn.error - INFO - ('87.254.23.2', 42295) - "WebSocket /api/v1/message/ws?token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpZGVudGl0eSI6eyJ1IjoiZnJlcXRyYWRlciJ9LCJleHAiOjE3MzE0NDIxMTgsImlhdCI6MTczMTQ0MTIxOCwidHlwZSI6ImFjY2VzcyJ9.OwQBPwIRQ-E_Uf6KPcpYEO4qcwkPaa9KC4DpMR66Jho" [accepted]
freqtrade | 2024-11-13 04:05:48,041 - freqtrade.rpc.api_server.ws.channel - INFO - Connected to channel - WebSocketChannel(9368db98, ('87.254.23.2', 42295))
freqtrade | 2024-11-13 04:05:48,042 - uvicorn.error - INFO - connection open
freqtrade | 2024-11-13 04:06:41,006 - freqtrade.worker - INFO - Bot heartbeat. PID=1, version='2024.10', state='RUNNING'
freqtrade | 2024-11-13 04:06:47,852 - uvicorn.error - INFO - connection closed
freqtrade | 2024-11-13 04:06:47,854 - freqtrade.rpc.api_server.ws.channel - INFO - Disconnected from channel - WebSocketChannel(45808a71, ('87.254.23.2', 37557))
freqtrade | 2024-11-13 04:06:47,875 - uvicorn.error - INFO - connection closed
freqtrade | 2024-11-13 04:06:47,876 - freqtrade.rpc.api_server.ws.channel - INFO - Disconnected from channel - WebSocketChannel(9368db98, ('87.254.23.2', 42295))
freqtrade | 2024-11-13 04:06:58,618 - uvicorn.error - INFO - ('87.254.23.2', 37617) - "WebSocket /api/v1/message/ws?token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpZGVudGl0eSI6eyJ1IjoiZnJlcXRyYWRlciJ9LCJleHAiOjE3MzE0NDIxMTgsImlhdCI6MTczMTQ0MTIxOCwidHlwZSI6ImFjY2VzcyJ9.OwQBPwIRQ-E_Uf6KPcpYEO4qcwkPaa9KC4DpMR66Jho" [accepted]
freqtrade | 2024-11-13 04:06:58,619 - freqtrade.rpc.api_server.ws.channel - INFO - Connected to channel - WebSocketChannel(dea9048b, ('87.254.23.2', 37617))
freqtrade | 2024-11-13 04:06:58,620 - uvicorn.error - INFO - connection open
freqtrade | 2024-11-13 04:06:58,637 - uvicorn.error - INFO - ('87.254.23.2', 32857) - "WebSocket /api/v1/message/ws?token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpZGVudGl0eSI6eyJ1IjoiZnJlcXRyYWRlciJ9LCJleHAiOjE3MzE0NDIxMTgsImlhdCI6MTczMTQ0MTIxOCwidHlwZSI6ImFjY2VzcyJ9.OwQBPwIRQ-E_Uf6KPcpYEO4qcwkPaa9KC4DpMR66Jho" [accepted]
freqtrade | 2024-11-13 04:06:58,637 - freqtrade.rpc.api_server.ws.channel - INFO - Connected to channel - WebSocketChannel(5089c0d9, ('87.254.23.2', 32857))
freqtrade | 2024-11-13 04:06:58,638 - uvicorn.error - INFO - connection open
freqtrade | 2024-11-13 04:07:41,011 - freqtrade.worker - INFO - Bot heartbeat. PID=1, version='2024.10', state='RUNNING'
freqtrade | 2024-11-13 04:07:58,460 - uvicorn.error - INFO - connection closed
freqtrade | 2024-11-13 04:07:58,462 - freqtrade.rpc.api_server.ws.channel - INFO - Disconnected from channel - WebSocketChannel(dea9048b, ('87.254.23.2', 37617))
freqtrade | 2024-11-13 04:07:58,482 - uvicorn.error - INFO - connection closed
freqtrade | 2024-11-13 04:07:58,483 - freqtrade.rpc.api_server.ws.channel - INFO - Disconnected from channel - WebSocketChannel(5089c0d9, ('87.254.23.2', 32857))
freqtrade | 2024-11-13 04:08:09,649 - uvicorn.error - INFO - ('87.254.23.2', 35325) - "WebSocket /api/v1/message/ws?token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpZGVudGl0eSI6eyJ1IjoiZnJlcXRyYWRlciJ9LCJleHAiOjE3MzE0NDIxMTgsImlhdCI6MTczMTQ0MTIxOCwidHlwZSI6ImFjY2VzcyJ9.OwQBPwIRQ-E_Uf6KPcpYEO4qcwkPaa9KC4DpMR66Jho" [accepted]
freqtrade | 2024-11-13 04:08:09,650 - freqtrade.rpc.api_server.ws.channel - INFO - Connected to channel - WebSocketChannel(0daac54c, ('87.254.23.2', 35325))
freqtrade | 2024-11-13 04:08:09,651 - uvicorn.error - INFO - connection open
freqtrade | 2024-11-13 04:08:09,657 - uvicorn.error - INFO - ('87.254.23.2', 55945) - "WebSocket /api/v1/message/ws?token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpZGVudGl0eSI6eyJ1IjoiZnJlcXRyYWRlciJ9LCJleHAiOjE3MzE0NDIxMTgsImlhdCI6MTczMTQ0MTIxOCwidHlwZSI6ImFjY2VzcyJ9.OwQBPwIRQ-E_Uf6KPcpYEO4qcwkPaa9KC4DpMR66Jho" [accepted]
freqtrade | 2024-11-13 04:08:09,657 - freqtrade.rpc.api_server.ws.channel - INFO - Connected to channel - WebSocketChannel(9dc8cafb, ('87.254.23.2', 55945))
freqtrade | 2024-11-13 04:08:09,658 - uvicorn.error - INFO - connection open
freqtrade | 2024-11-13 04:08:41,016 - freqtrade.worker - INFO - Bot heartbeat. PID=1, version='2024.10', state='RUNNING'
freqtrade | 2024-11-13 04:09:09,398 - uvicorn.error - INFO - connection closed
freqtrade | 2024-11-13 04:09:09,400 - freqtrade.rpc.api_server.ws.channel - INFO - Disconnected from channel - WebSocketChannel(0daac54c, ('87.254.23.2', 35325))
freqtrade | 2024-11-13 04:09:09,408 - uvicorn.error - INFO - connection closed
freqtrade | 2024-11-13 04:09:09,409 - freqtrade.rpc.api_server.ws.channel - INFO - Disconnected from channel - WebSocketChannel(9dc8cafb, ('87.254.23.2', 55945))
freqtrade | 2024-11-13 04:09:20,552 - uvicorn.error - INFO - ('87.254.23.2', 59161) - "WebSocket /api/v1/message/ws?token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpZGVudGl0eSI6eyJ1IjoiZnJlcXRyYWRlciJ9LCJleHAiOjE3MzE0NDIxMTgsImlhdCI6MTczMTQ0MTIxOCwidHlwZSI6ImFjY2VzcyJ9.OwQBPwIRQ-E_Uf6KPcpYEO4qcwkPaa9KC4DpMR66Jho" 403
freqtrade | 2024-11-13 04:09:20,554 - uvicorn.error - INFO - connection rejected (403 Forbidden)
freqtrade | 2024-11-13 04:09:20,555 - uvicorn.error - INFO - connection closed
freqtrade | 2024-11-13 04:09:20,599 - uvicorn.error - INFO - ('87.254.23.2', 47725) - "WebSocket /api/v1/message/ws?token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpZGVudGl0eSI6eyJ1IjoiZnJlcXRyYWRlciJ9LCJleHAiOjE3MzE0NDIxMTgsImlhdCI6MTczMTQ0MTIxOCwidHlwZSI6ImFjY2VzcyJ9.OwQBPwIRQ-E_Uf6KPcpYEO4qcwkPaa9KC4DpMR66Jho" 403
freqtrade | 2024-11-13 04:09:20,600 - uvicorn.error - INFO - connection rejected (403 Forbidden)
freqtrade | 2024-11-13 04:09:20,601 - uvicorn.error - INFO - connection closed
freqtrade | 2024-11-13 04:09:41,020 - freqtrade.worker - INFO - Bot heartbeat. PID=1, version='2024.10', state='RUNNING'
freqtrade | 2024-11-13 04:10:46,002 - freqtrade.worker - INFO - Bot heartbeat. PID=1, version='2024.10', state='RUNNING'
freqtrade | 2024-11-13 04:11:46,008 - freqtrade.worker - INFO - Bot heartbeat. PID=1, version='2024.10', state='RUNNING'
freqtrade | 2024-11-13 04:12:46,012 - freqtrade.worker - INFO - Bot heartbeat. PID=1, version='2024.10', state='RUNNING'
freqtrade | 2024-11-13 04:13:46,017 - freqtrade.worker - INFO - Bot heartbeat. PID=1, version='2024.10', state='RUNNING'
freqtrade | 2024-11-13 04:14:36,345 - uvicorn.error - INFO - ('87.254.23.2', 35705) - "WebSocket /api/v1/message/ws?token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpZGVudGl0eSI6eyJ1IjoiZnJlcXRyYWRlciJ9LCJleHAiOjE3MzE0NDMwNjMsImlhdCI6MTczMTQ0MjE2MywidHlwZSI6ImFjY2VzcyJ9.MpsZYBRBVkqYU-aaJhy0Ntp2evkwPUeiAWeEK74fwKY" [accepted]
freqtrade | 2024-11-13 04:14:36,347 - freqtrade.rpc.api_server.ws.channel - INFO - Connected to channel - WebSocketChannel(100358a2, ('87.254.23.2', 35705))
freqtrade | 2024-11-13 04:14:36,348 - uvicorn.error - INFO - connection open
freqtrade | 2024-11-13 04:14:36,727 - uvicorn.error - INFO - ('87.254.23.2', 34689) - "WebSocket /api/v1/message/ws?token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpZGVudGl0eSI6eyJ1IjoiZnJlcXRyYWRlciJ9LCJleHAiOjE3MzE0NDMwNjMsImlhdCI6MTczMTQ0MjE2MywidHlwZSI6ImFjY2VzcyJ9.MpsZYBRBVkqYU-aaJhy0Ntp2evkwPUeiAWeEK74fwKY" [accepted]
freqtrade | 2024-11-13 04:14:36,728 - freqtrade.rpc.api_server.ws.channel - INFO - Connected to channel - WebSocketChannel(edf4ee06, ('87.254.23.2', 34689))
freqtrade | 2024-11-13 04:14:36,729 - uvicorn.error - INFO - connection open
freqtrade | 2024-11-13 04:14:46,023 - freqtrade.worker - INFO - Bot heartbeat. PID=1, version='2024.10', state='RUNNING'
```
**time difference**
For example, for me right now it's 4 o'clock on the 13th, but the data inside the bot is still from around 8 o'clock on the 12th
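To quantify it: the lag is about 20 hours, which is larger than any timezone offset could explain (quick stdlib check, using the times from my example above):

```python
from datetime import datetime, timedelta

# Times taken from the description above (local wall-clock times).
now = datetime(2024, 11, 13, 4, 0)       # "4 o'clock on the 13th"
bot_data = datetime(2024, 11, 12, 8, 0)  # "around 8 o'clock on the 12th"

gap = now - bot_data
print(gap)  # 20:00:00
```

Since the maximum real UTC offset is 14 hours, a plain timezone mismatch does not fully account for this.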


| closed | 2024-11-12T20:19:25Z | 2024-11-13T05:43:32Z | https://github.com/freqtrade/freqtrade/issues/10924 | [
"Question"
] | Sadcatcat | 5 |
Yorko/mlcourse.ai | seaborn | 649 | can you help find email for Измайлов Константин | I see
Измайлов Константин Константинович (@Izmajlovkonstantin)
Can you help me find an email address for Измайлов Константин?
I am trying to reach him to ask for the code for
https://sphere.mail.ru/curriculum/program/discipline/818/
especially for video
https://www.youtube.com/watch?v=fit-ZAWexZ0&list=PLrCZzMib1e9p6lpNv-yt6uvHGyBxQncEh&index=8
11. Введение в SQL ("Introduction to SQL"). Course "ВВЕДЕНИЕ В АНАЛИЗ ДАННЫХ" ("Introduction to Data Analysis") | Технострим
from
mlcourse.ai/jupyter_russian/tutorials/boruta_tutorial_Izmajlovkonstantin.ipynb
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<center>\n",
"<img src=\"../../img/ods_stickers.jpg\">\n",
"## Открытый курс по машинному обучению\n",
"<center>Автор материала: Измайлов Константин Константинович (@Izmajlovkonstantin)."
]
} | closed | 2020-01-30T21:33:58Z | 2020-01-30T23:28:54Z | https://github.com/Yorko/mlcourse.ai/issues/649 | [
"invalid"
] | Sandy4321 | 1 |
AUTOMATIC1111/stable-diffusion-webui | pytorch | 16,429 | [Bug]: Long freezes at the end of hi-rez fix | ### Checklist
- [X] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [X] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
Hi! The hi-rez fix freezes during the last step. The freeze can last 20 seconds or 2 minutes; it's completely random. My PC really struggles while it happens. Without hi-rez it's okay. I have a 12 GB NVIDIA GPU and I generate standard 512x768 pictures with a 2x hi-rez fix upscale.
I've already downloaded a 16-bit VAE (I thought maybe the VAE is the reason, because the freeze happens at the end of generation), which supposedly improves this, and configured it to take priority, but it doesn't help. Interestingly, when I use Forge UI this problem never occurs: generation is fast and consumes little memory. But ControlNet doesn't work properly there, so I want the Automatic1111 UI to work properly so I can play with ControlNet. I tried various optimization methods, but they don't help either. Can I fix that? Thanks in advance.
### Steps to reproduce the problem
1. Download any SDXL model.
2. Use it with hi-rez fix.
### What should have happened?
Normal generation
### What browsers do you use to access the UI ?
Google Chrome
### Sysinfo
[sysinfo-2024-08-27-10-12.json](https://github.com/user-attachments/files/16760455/sysinfo-2024-08-27-10-12.json)
### Console logs
```Shell
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.10.1
Commit hash: 82a973c04367123ae98bd9abdf80d9eda9b910e2
Launching Web UI with arguments: --opt-split-attention-v1 --opt-sub-quad-attention
No module 'xformers'. Proceeding without it.
Loading weights [e3c47aedb0] from E:\stable-diffusion-webui\webui\models\Stable-diffusion\animagineXLV31_v31.safetensors
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Creating model from config: E:\stable-diffusion-webui\webui\repositories\generative-models\configs\inference\sd_xl_base.yaml
Startup time: 10.6s (prepare environment: 2.4s, import torch: 3.6s, import gradio: 0.9s, setup paths: 1.0s, initialize shared: 0.2s, other imports: 0.5s, load scripts: 0.8s, create ui: 0.5s, gradio launch: 0.6s).
E:\stable-diffusion-webui\system\python\lib\site-packages\huggingface_hub\file_download.py:1150: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
warnings.warn(
Loading VAE weights specified in settings: E:\stable-diffusion-webui\webui\models\VAE\fixFP16ErrorsSDXLLowerMemoryUse_v10.safetensors
Applying attention optimization: sub-quadratic... done.
Model loaded in 7.3s (load weights from disk: 0.7s, create model: 2.4s, apply weights to model: 3.6s, load VAE: 0.1s, calculate empty prompt: 0.2s).
100%|██████████████████████████████████████████████████████████████████████████████████| 30/30 [00:05<00:00, 5.06it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 30/30 [00:23<00:00, 1.26it/s]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 60/60 [01:30<00:00, 1.51s/it]
100%|██████████████████████████████████████████████████████████████████████████████████| 30/30 [00:11<00:00, 2.72it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 30/30 [00:26<00:00, 1.12it/s]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 60/60 [01:05<00:00, 1.09s/it]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 60/60 [01:05<00:00, 1.25it/s]
```
### Additional information
_No response_ | closed | 2024-08-27T10:20:35Z | 2024-08-27T12:18:42Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16429 | [
"bug-report"
] | Astarot11 | 0 |
mwouts/itables | jupyter | 70 | Offline mode | At the moment itables does not have an [offline mode](https://github.com/mwouts/itables/issues/8). While the table data is embedded in the notebook, the jquery and datatables.net are loaded from a CDN, see [require.config](https://github.com/mwouts/itables/blob/main/itables/javascript/load_datatables_connected.js) and [table template](https://github.com/mwouts/itables/blob/main/itables/datatables_template.html), so an internet connection is required to display the tables.
Is there a way to add offline usage? | closed | 2022-04-09T17:29:45Z | 2022-06-23T23:01:37Z | https://github.com/mwouts/itables/issues/70 | [] | BBassi | 52 |
pytest-dev/pytest-html | pytest | 809 | pytest-html 4.x.x incompatible with pytest-metadata 3.1.1 ? | Hello,
I was recently still using pytest 7.3.1 and upgraded pytest-html from 3.2.0 to 4.1.1.
Without changing anything else, it failed to generate the report with the following error:
```
INTERNALERROR> File "C:\FSBMS\tools\main\environment\python\Python310\lib\site-packages\pytest_html\basereport.py", line 291, in _process_report
INTERNALERROR> self._report.add_test(data, report, outcome, processed_logs)
INTERNALERROR> File "C:\FSBMS\tools\main\environment\python\Python310\lib\site-packages\pytest_html\report_data.py", line 140, in add_test
INTERNALERROR> test_data["log"] = _handle_ansi("\n".join(logs))
INTERNALERROR> TypeError: sequence item 0: expected str instance, table found
```
I have the full log if necessary. I didn't prepare a sample code because I don't think it's relevant, what happens is probably not related to the content of the test.
As you can see I'm using python 3.10.
Upgrading pytest to 8.1.1 solved the issue, but I figured it's still worth noting since I couldn't find any incompatibility note in the doc.
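For context, the `TypeError` in the traceback is plain `str.join` behaviour when the sequence contains a non-string item (a `table` object in pytest-html's case); a minimal stand-alone reproduction:

```python
class Table:
    """Stand-in for the non-string "table" object named in the traceback."""

logs = [Table(), "second log entry"]
try:
    "\n".join(logs)
except TypeError as exc:
    message = str(exc)
print(message)  # sequence item 0: expected str instance, Table found
```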
| closed | 2024-04-21T09:21:55Z | 2024-04-24T17:53:47Z | https://github.com/pytest-dev/pytest-html/issues/809 | [] | supermete | 4 |
Avaiga/taipy | automation | 2,393 | Fix: Decoupled timer_start from Gui to avoid circular dependencies | ### Description
This PR refactors timer_start to accept callbacks instead of directly interacting with Taipy's GUI, resolving circular dependency issues. It also includes tests and documentation updates.
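The shape of the change, as I understand it (illustrative sketch only; the names and signature here are stand-ins, the real code is in the PR):

```python
import threading

def timer_start(interval_s, callback):
    """Schedule callback after interval_s seconds with no reference to any Gui."""
    timer = threading.Timer(interval_s, callback)
    timer.daemon = True
    timer.start()
    return timer

# The GUI layer supplies its own callback instead of being imported here:
fired = threading.Event()
timer_start(0.01, fired.set)
fired.wait(timeout=2.0)
print(fired.is_set())  # True
```

Because the timer only holds a callback, the timer module no longer needs to import anything from the GUI, which is what breaks the circular dependency.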
### Acceptance Criteria
- [ ] Any new code is covered by a unit test.
- [ ] Check code coverage is at least 90%.
### Code of Conduct
- [X] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [ ] I am willing to work on this issue (optional) | closed | 2025-01-11T14:26:08Z | 2025-01-13T17:25:16Z | https://github.com/Avaiga/taipy/issues/2393 | [
"📈 Improvement"
] | The1W2c | 0 |
mwaskom/seaborn | data-science | 3,191 | Only the upper limit is plotted when `so.Bar` handles log-transformed axes | I find that when setting the y-axis to log-transformed coordinates, it seems that only the upper limit is plotted.
```
(
so.Plot(x=[1,2,3], y=[10, 100, 1000])
.add(so.Bar())
) # original
```

```
(
so.Plot(x=[1,2,3], y=[10, 100, 1000])
.add(so.Bar())
.scale(y='log')
) # log-transformed
```

```
fig = (
so.Plot(x=[1,2,3], y=[10, 100, 1000])
.add(so.Bar())
.plot()._figure
)
ax = fig.axes[0]
ax.set_yscale('log')
fig # expected
```

| closed | 2022-12-19T05:44:02Z | 2022-12-19T05:48:31Z | https://github.com/mwaskom/seaborn/issues/3191 | [] | liuzj039 | 1 |
home-assistant/core | python | 141,186 | Tado integration loading error | ### The problem
Hello,
Since yesterday I have been getting this message: "Error during Tado setup: Login failed for unknown reason with status code 403"!
I tried reloading the integration, but I get the same result.
My credentials (login/password) are working.
Thanks
### What version of Home Assistant Core has the issue?
2025.3.4
### What was the last working version of Home Assistant Core?
_No response_
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
Tado
### Link to integration documentation on our website
https://www.home-assistant.io/integrations/tado
### Diagnostics information
_No response_
### Example YAML snippet
```yaml
```
### Anything in the logs that might be useful for us?
```txt
```
### Additional information
_No response_ | closed | 2025-03-23T10:06:06Z | 2025-03-23T10:33:39Z | https://github.com/home-assistant/core/issues/141186 | [
"integration: tado"
] | cseb17 | 2 |
pytest-dev/pytest-cov | pytest | 191 | `slaveoutput` and `slaveinput` are now called `workeroutput` and `workerinput` in pytest-xdist | In the latest version of pytest-xdist (1.22.1), the "master/slave" terminology has been replaced by "master/worker" (https://github.com/pytest-dev/pytest-xdist/pull/268).
I noticed the issue because I kept getting the following warning:
> The following slaves failed to return coverage data, ensure that pytest-cov is installed on these slaves.
They provide aliases for backward compatibility but they don't seem to be available everywhere... | closed | 2018-02-26T12:06:56Z | 2018-02-26T21:52:22Z | https://github.com/pytest-dev/pytest-cov/issues/191 | [
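Until everything has caught up, plugin code can read the payload defensively under either name (sketch; the `node` classes below are fakes, only the two attribute names are real):

```python
def get_worker_output(node):
    """Fetch xdist's per-worker payload under the new or the old attribute name."""
    for attr in ("workeroutput", "slaveoutput"):
        if hasattr(node, attr):
            return getattr(node, attr)
    return None

class NewStyleNode:
    workeroutput = {"cov_worker": "coverage data"}

class OldStyleNode:
    slaveoutput = {"cov_slave": "coverage data"}

print(get_worker_output(NewStyleNode()))  # {'cov_worker': 'coverage data'}
print(get_worker_output(OldStyleNode()))  # {'cov_slave': 'coverage data'}
```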
"question"
] | martinmaillard | 6 |
WZMIAOMIAO/deep-learning-for-image-processing | pytorch | 706 | Asked a very silly question | EvalCOCOMetric(data_loader.dataset.coco, "keypoints", ["key_results.json"](https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/blob/master/pytorch_keypoint/HRNet/train_utils/train_eval_utils.py#:~:text=keypoints%22%2C%20%22-,key_results.json,-%22)))
May I ask where this key_results.json file is generated? Thank you.
pytest-dev/pytest-django | pytest | 884 | django_db_block access wrong database bug | When I run the command ``` pytest -s --import-mode=importlib ```, it works fine.
But when I run ``` pytest -s --import-mode=importlib app/tests/test_filename.py ```,
the first run produces a general failure output, and after that I get this error:
```
===================================================================================== ERRORS =====================================================================================
_________________________________________________________________ ERROR at setup of test_endpoint_containerList __________________________________________________________________
self = <django.db.backends.utils.CursorWrapper object at 0x107070190>
sql = 'INSERT INTO "auth_user" ("password", "last_login", "is_superuser", "username", "first_name", "last_name", "email", "is_staff", "is_active", "date_joined") VALUES (%s, %s,
%s, %s, %s, %s, %s, %s, %s, %s) RETURNING "auth_user"."id"'
params = ('pbkdf2_sha256$180000$UlJQcZHRuts0$jpA8wYudq5I+QdAPXWe6lvqU7V4t3CvADtn4iXpfR64=', None, False, 'test', '', '', ...)
ignored_wrapper_args = (False, {'connection': <django.db.backends.postgresql.base.DatabaseWrapper object at 0x1068fd290>, 'cursor': <django.db.backends.utils.CursorWrapper object
at 0x107070190>})
def _execute(self, sql, params, *ignored_wrapper_args):
self.db.validate_no_broken_transaction()
with self.db.wrap_database_errors:
if params is None:
# params default might be backend specific.
return self.cursor.execute(sql)
else:
> return self.cursor.execute(sql, params)
E psycopg2.errors.UniqueViolation: duplicate key value violates unique constraint "auth_user_username_key"
E DETAIL: Key (username)=(test) already exists.
../../.pyenv/versions/3.7.4/envs/dsl/lib/python3.7/site-packages/django/db/backends/utils.py:86: UniqueViolation
The above exception was the direct cause of the following exception:
django_db_blocker = <pytest_django.plugin._DatabaseBlocker object at 0x105377f90>
@pytest.fixture(scope='session')
def test_client(django_db_blocker):
with django_db_blocker.unblock():
> User.objects.create_user('test', 'test@test.com', 'test123')
conftest.py:20:
```
I checked my postgres database ```test_dbname``` first and selected from its auth_user table: it was empty, and I couldn't find the user I saved. I found the user inserted into the auth_user table of my original database ```dbname``` instead. This only happens when I test a specific test file.
my ``` conftest.py ```
```
import pytest
from django.contrib.auth.models import User  # assuming the default Django user model
from rest_framework.test import APIClient


@pytest.fixture(scope='session')
def test_client(django_db_blocker):
with django_db_blocker.unblock():
User.objects.create_user('test', 'test@test.com', 'test123')
client = APIClient()
client.login(username='test', password='test123')
return client
```
my project structure
```
│ conftest.py
│
└───app_folder
│ │ apps.py
│ │ models.py
│ │ ...
│ └───tests
│ │ test_file1.py
│ │ test_file2.py
```
| open | 2020-10-16T09:03:13Z | 2022-03-25T17:14:52Z | https://github.com/pytest-dev/pytest-django/issues/884 | [] | lieric7766 | 2 |
mljar/mercury | jupyter | 117 | propagate widgets values if there is historical task on notebook open | closed | 2022-06-30T14:38:08Z | 2022-07-01T11:05:04Z | https://github.com/mljar/mercury/issues/117 | [
"enhancement"
] | pplonski | 1 |
|
pytorch/pytorch | deep-learning | 149,735 | DISABLED test_binary_op_with_scalar_self_support__foreach_pow_is_fastpath_True_cuda_int64 (__main__.TestForeachCUDA) | Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_binary_op_with_scalar_self_support__foreach_pow_is_fastpath_True_cuda_int64&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/39166913362).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_binary_op_with_scalar_self_support__foreach_pow_is_fastpath_True_cuda_int64`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1159, in test_wrapper
return test(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/mock.py", line 1833, in _inner
return f(*args, **kw)
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 327, in test_binary_op_with_scalar_self_support
self._binary_test(
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 263, in _binary_test
actual = op(inputs, self.is_cuda, is_fastpath)
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 90, in __call__
assert mta_called == (expect_fastpath and (not zero_size)), (
AssertionError: mta_called=False, expect_fastpath=True, zero_size=False, self.func.__name__='_foreach_pow', keys=('aten::_foreach_pow', 'Unrecognized', 'aten::empty_strided', 'cudaLaunchKernel', 'Lazy Function Loading', 'cudaDeviceSynchronize')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3153, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3153, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3153, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 454, in instantiated_test
result = test(self, **param_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1171, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 1: SampleInput(input=TensorList[Tensor[size=(20, 20), device="cuda:0", dtype=torch.int64], Tensor[size=(19, 19), device="cuda:0", dtype=torch.int64], Tensor[size=(18, 18), device="cuda:0", dtype=torch.int64], Tensor[size=(17, 17), device="cuda:0", dtype=torch.int64], Tensor[size=(16, 16), device="cuda:0", dtype=torch.int64], Tensor[size=(15, 15), device="cuda:0", dtype=torch.int64], Tensor[size=(14, 14), device="cuda:0", dtype=torch.int64], Tensor[size=(13, 13), device="cuda:0", dtype=torch.int64], Tensor[size=(12, 12), device="cuda:0", dtype=torch.int64], Tensor[size=(11, 11), device="cuda:0", dtype=torch.int64], Tensor[size=(10, 10), device="cuda:0", dtype=torch.int64], Tensor[size=(9, 9), device="cuda:0", dtype=torch.int64], Tensor[size=(8, 8), device="cuda:0", dtype=torch.int64], Tensor[size=(7, 7), device="cuda:0", dtype=torch.int64], Tensor[size=(6, 6), device="cuda:0", dtype=torch.int64], Tensor[size=(5, 5), device="cuda:0", dtype=torch.int64], Tensor[size=(4, 4), device="cuda:0", dtype=torch.int64], Tensor[size=(3, 3), device="cuda:0", dtype=torch.int64], Tensor[size=(2, 2), device="cuda:0", dtype=torch.int64], Tensor[size=(1, 1), device="cuda:0", dtype=torch.int64]], args=(10), kwargs={}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 PYTORCH_TEST_WITH_SLOW_GRADCHECK=1 python test/test_foreach.py TestForeachCUDA.test_binary_op_with_scalar_self_support__foreach_pow_is_fastpath_True_cuda_int64
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99 | open | 2025-03-21T15:42:06Z | 2025-03-21T15:42:10Z | https://github.com/pytorch/pytorch/issues/149735 | [
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] | pytorch-bot[bot] | 1 |
chiphuyen/stanford-tensorflow-tutorials | nlp | 9 | can you also upload processed/vocab_1000.tsv ? | I noticed this file is missing.
thanks | closed | 2017-03-19T01:27:58Z | 2017-03-22T23:54:32Z | https://github.com/chiphuyen/stanford-tensorflow-tutorials/issues/9 | [] | shaoweipng | 3 |
coqui-ai/TTS | deep-learning | 3,742 | [Bug] xtts_v2, AttributeError: 'TTS' object has no attribute 'speakers' | ### Describe the bug
If I run tts.speakers after loading xtts_v2, it throws an error: 'TTS' object has no attribute 'speakers'.
### To Reproduce
```python
import torch
from TTS.api import TTS

device = "cuda" if torch.cuda.is_available() else "cpu"
model = "tts_models/multilingual/multi-dataset/xtts_v2"
tts = TTS(model).to(device)
tts.speakers
```
### Expected behavior
It should list all the speaker names.
### Logs
```shell
Traceback (most recent call last):
File "<stdin>", line 1, in <module>.venv\Lib\site-packages\torch\nn\modules\module.py", line 1709, in __getattr__
raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")
AttributeError: 'TTS' object has no attribute 'speakers'
```
### Environment
```shell
{
"CUDA": {
"GPU": [],
"available": false,
"version": null
},
"Packages": {
"PyTorch_debug": false,
"PyTorch_version": "2.3.0+cpu",
"TTS": "0.22.0",
"numpy": "1.26.4"
},
"System": {
"OS": "Windows",
"architecture": [
"64bit",
"WindowsPE"
],
"processor": "Intel64 Family 6 Model 94 Stepping 3, GenuineIntel",
"python": "3.11.9",
"version": "10.0.19045"
}
}
```
### Additional context
Other multi speaker models like vctk/vits work fine using the same method. | closed | 2024-05-16T13:45:01Z | 2024-07-29T09:59:50Z | https://github.com/coqui-ai/TTS/issues/3742 | [
"bug"
] | chigkim | 8 |
AUTOMATIC1111/stable-diffusion-webui | pytorch | 15,412 | [Bug]: RuntimeWarning: Invalid Value encountered in cast x_sample = x_sample.astype(np.uint8) | ### Checklist
- [ ] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [X] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [ ] The issue exists in the current version of the webui
- [X] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
After trying to run txt2img I am getting this error, and at the end I end up with just a black image: "C:\Users\Nikola\stable-diffusion-webui\modules\processing.py:966: RuntimeWarning: invalid value encountered in cast
x_sample = x_sample.astype(np.uint8)". I have tried removing my VAE, because at first I thought it was the problem, but then it happened again without it too. What's more, when I tried it, it sometimes worked without errors and sometimes with them. Sometimes the line reads "py:68" instead of "processing.py:966", if that helps.
### Steps to reproduce the problem
1. I start the webui-user.bat file
2. Put my input: score_9, score_8_up, score_7_up BREAK handsomize, 2boys, son goku, vegeta, dragon ball z, outdoors <lora:handsomize_x_pdxl_v1:1.2>
3. Set sampling steps to 34
4. Change image size to anything higher than 768x1024(w,h)
5. The VAE I am using is: sharpspectrumvae_v10.ckpt (SharpSpectrumVAE is what it is called on CivitAI; it fixes faces in my art there)
6. Run
7. Gets error after first or second try, it's about my luck
### What should have happened?
I should get an image similar to what I asked for, but instead I get a black image with errors.
### What browsers do you use to access the UI ?
Google Chrome
### Sysinfo
[sysinfo-2024-03-30-13-52.json](https://github.com/AUTOMATIC1111/stable-diffusion-webui/files/14812162/sysinfo-2024-03-30-13-52.json)
### Console logs
```Shell
Already up to date.
venv "C:\Users\Nikola\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.8.0
Commit hash: bef51aed032c0aaa5cfd80445bc4cf0d85b408b5
Launching Web UI with arguments: --disable-nan-check
No module 'xformers'. Proceeding without it.
Loading weights [fbcf965a62] from C:\Users\Nikola\stable-diffusion-webui\models\Stable-diffusion\anythingelseV4_v45.ckpt
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 13.7s (prepare environment: 3.0s, import torch: 4.9s, import gradio: 1.4s, setup paths: 1.5s, initialize shared: 0.3s, other imports: 0.7s, load scripts: 1.0s, create ui: 0.5s, gradio launch: 0.2s).
Creating model from config: C:\Users\Nikola\stable-diffusion-webui\configs\v1-inference.yaml
Loading VAE weights specified in settings: C:\Users\Nikola\stable-diffusion-webui\models\VAE\sharpspectrumvae_v10.ckpt
Applying attention optimization: Doggettx... done.
Model loaded in 24.7s (load weights from disk: 21.2s, create model: 0.4s, apply weights to model: 1.5s, load VAE: 1.0s, calculate empty prompt: 0.5s).
100%|██████████████████████████████████████████████████████████████████████████████████| 34/34 [04:50<00:00, 8.54s/it]
C:\Users\Nikola\stable-diffusion-webui\modules\processing.py:966: RuntimeWarning: invalid value encountered in cast/it]
x_sample = x_sample.astype(np.uint8)
Total progress: 100%|██████████████████████████████████████████████████████████████████| 34/34 [05:43<00:00, 10.10s/it]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 34/34 [05:43<00:00, 8.08s/it]
```
### Additional information
I have updated my GPU drivers, and my GPU is a GTX 1660 Ti.
"bug-report"
] | DzoniTS | 4 |
geex-arts/django-jet | django | 19 | No date is selected when clicking on today in date widget | Hi, when clicking on "today", on date widget, no date is selected:

| closed | 2015-11-02T21:41:43Z | 2015-11-07T09:52:21Z | https://github.com/geex-arts/django-jet/issues/19 | [] | carlosfvieira | 2 |
sigmavirus24/github3.py | rest-api | 329 | Repo.stargazers_count always returning 0 | As far as I can tell the bug exists on 0.9.3 and 1.0.0a1
``` python
gh = github3.login("jamiis", "notrealpwd")
repos = gh.all_repositories()
for r in repos:
print r.stargazers_count
```
r.stargazers_count is always 0
| closed | 2014-12-21T10:39:56Z | 2014-12-21T15:59:22Z | https://github.com/sigmavirus24/github3.py/issues/329 | [] | jamiis | 1 |
modAL-python/modAL | scikit-learn | 35 | Question: Does ActiveLearner support a trained RandomForestClassifier? | Hello,
I initialize my active learner with a saved, trained RandomForest classifier (loaded with pickle) together with its training samples, as you can see in the code below.
Does this impact the performance of the ActiveLearner?
The results I obtained are very bad, and I get better results with a random selection of the same number of samples.
I would appreciate any feedback or advice !
Thank you in advance,
```python
import pickle
from copy import deepcopy

import numpy as np
import pandas as pd
from modAL.models import ActiveLearner
from modAL.uncertainty import entropy_sampling

# old model
Model = pickle.load(open(OldModel, 'rb'))[0]

# training samples used by the old model
TrainDset0 = pd.read_csv(OldTrainFile, sep=",")
X_train0 = np.array(TrainDset0.loc[:, TrainDset0.loc[:, 'band_0':'band_129'].columns.tolist()])
y_train0 = np.array(TrainDset0.loc[:, str(ClassLabel)])

# new train samples
TrainDset2 = pd.read_csv(NewTrainFile, sep=",")
X_train2 = np.array(TrainDset2.loc[:, TrainDset2.loc[:, 'band_0':'band_129'].columns.tolist()])
y_train2 = np.array(TrainDset2.loc[:, str(ClassLabel)])

# validation samples
ValidationDset = pd.read_csv(NewValidationFile, sep=",")
X_validation = np.array(ValidationDset.loc[:, ValidationDset.loc[:, 'band_0':'band_129'].columns.tolist()])
y_validation = np.array(ValidationDset.loc[:, str(ClassLabel)])

# Active learner
AdditionalSamples = 10
MaxScore = 0.9
estimator = deepcopy(Model)
Learner = ActiveLearner(estimator=estimator, query_strategy=entropy_sampling,
                        X_training=X_train0, y_training=y_train0)
while Learner.score(X_validation, y_validation) < MaxScore:
    query_idx, query_inst = Learner.query(X_train2, n_instances=AdditionalSamples)
    Learner.teach(X=query_inst, y=y_train2[query_idx], only_new=False)
    X_train2 = np.delete(X_train2, query_idx, axis=0)
    y_train2 = np.delete(y_train2, query_idx)
```
Some results (this is the case with many iterations and data):
with AL samples [0.13, 0.0, 0.7, 0.66, 0.60, 0.49, 0.56, 0.81,................... 0.56, 0.71]
with Random samples [0.13, 0.0, 0.60, 0.70, 0.72, 0.71, 0.85, 0.84,................... 0.87, 0.88]
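As a sanity check on the selection itself: entropy sampling simply ranks candidates by predictive entropy. Here is a minimal NumPy sketch of that step, independent of modAL (the function name is made up for illustration):

```python
import numpy as np

def entropy_query(probas: np.ndarray, n_instances: int) -> np.ndarray:
    """Indices of the `n_instances` rows with the highest predictive
    entropy; `probas` has shape (n_samples, n_classes)."""
    eps = 1e-12  # guard against log(0)
    entropy = -np.sum(probas * np.log(probas + eps), axis=1)
    return np.argsort(entropy)[-n_instances:][::-1]

probas = np.array([[0.9, 0.1],   # confident  -> low entropy
                   [0.5, 0.5],   # uncertain  -> high entropy
                   [0.7, 0.3]])
print(entropy_query(probas, 2).tolist())  # [1, 2]
```

If the entropy-selected samples still underperform random selection, the issue is more likely in the data or the warm-started estimator than in the query strategy.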
| closed | 2019-02-14T08:36:28Z | 2019-03-12T06:21:26Z | https://github.com/modAL-python/modAL/issues/35 | [] | YousraH | 1 |
ionelmc/pytest-benchmark | pytest | 259 | Add an option to store profiling information | In some cases I'd like to investigate the profiling information after the benchmark, maybe with [snakeviz](https://jiffyclub.github.io/snakeviz/).
I suggest adding `--benchmark-cprofile-save=FILE` that will save the cProfile information (pstats) to `FILE`.
If this flag is specified, it implies that `--benchmark-cprofile` is set as well, perhaps defaulting to `cumtime` if not explicitly specified. | open | 2024-05-22T17:51:34Z | 2024-05-22T17:51:34Z | https://github.com/ionelmc/pytest-benchmark/issues/259 | [] | tebeka | 0 |
deepset-ai/haystack | pytorch | 8,929 | Remove explicit mention of Haystack "2.x" in tutorials | closed | 2025-02-25T10:56:05Z | 2025-03-11T10:01:31Z | https://github.com/deepset-ai/haystack/issues/8929 | [
"P1"
] | julian-risch | 0 |
|
keras-team/keras | machine-learning | 20,061 | Support advanced ND scatter/reduce operations | ## Proposal
Support operations that scatter updates into a tensor with the following requirements:
- Support a reduction with overlapping values (e.g. min, max, sum)
- Supports a default-value `tensor` whose entries also take part in the reduction
- Supports ND updates & indices
Essentially, support tensorflow's `tensor_scatter_nd_*` operations, like:
- [`tf.tensor_scatter_nd_update`](https://www.tensorflow.org/api_docs/python/tf/tensor_scatter_nd_update)
- [`tf.tensor_scatter_nd_max`](https://www.tensorflow.org/api_docs/python/tf/tensor_scatter_nd_max)
- [`tf.tensor_scatter_nd_min`](https://www.tensorflow.org/api_docs/python/tf/tensor_scatter_nd_min)
## Design discussion
### API design
Currently, there is already a [`keras.ops.scatter_update`](https://github.com/keras-team/keras/blob/v3.3.3/keras/src/ops/core.py#L75) function. This only supports a default value of 0 for the `tensor`, and doesn't support reductions.
**Option 1:** Upgrade `keras.ops.scatter_update` to take in optional `tensor` and `reduction` arguments.
**Option 2:** Create a dedicated `tensor_scatter_nd` api for these new functions.
### Technical implementation
Overall the technical implementation is straightforward: flatten the indices to the 1D case and invoke the segmentation functions of each backend (e.g. `segment_max` for TF and `scatter_reduce` for torch).
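As a concrete illustration of that flatten-and-reduce idea, here is a minimal NumPy sketch (NumPy stands in for a backend; the function name and signature mirror the proposal but are an assumption, not the eventual Keras API):

```python
import numpy as np

def tensor_scatter_nd(tensor, indices, updates, reduction="max"):
    """Scatter `updates` into a copy of `tensor` at ND `indices`,
    combining overlapping writes with `reduction` ("max"/"min"/"sum")."""
    out = tensor.copy()
    index_depth = indices.shape[-1]
    # Flatten the indexed leading dims so ND indices become 1D offsets.
    flat = out.reshape((-1,) + tensor.shape[index_depth:])
    flat_idx = np.ravel_multi_index(
        tuple(np.moveaxis(indices.reshape(-1, index_depth), -1, 0)),
        tensor.shape[:index_depth],
    )
    flat_updates = updates.reshape((-1,) + tensor.shape[index_depth:])
    op = {"max": np.maximum.at, "min": np.minimum.at, "sum": np.add.at}[reduction]
    op(flat, flat_idx, flat_updates)  # unbuffered, so overlaps reduce correctly
    return flat.reshape(tensor.shape)

# Overlapping writes at index 1 keep the max, and the default values
# already present in `tensor` take part in the reduction:
result = tensor_scatter_nd(
    np.array([0.0, 9.0, 0.0, 0.0]),
    np.array([[1], [1], [3]]),
    np.array([2.0, 5.0, 1.0]),
    reduction="max",
)
print(result.tolist())  # [0.0, 9.0, 0.0, 1.0]
```

The `np.*.at` calls are the NumPy analogue of the per-backend segment/scatter-reduce primitives mentioned above.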
## Implementation plan
I have already implemented a `tensor_scatter_nd` function for Nuro's internal use and it works well for all our cases (and is XLA compatible for all backends). Once we settle on a design, I can upstream the implementation. | open | 2024-07-29T15:45:06Z | 2024-08-01T17:09:49Z | https://github.com/keras-team/keras/issues/20061 | [
"type:feature"
] | aboubezari | 2 |
PaddlePaddle/PaddleHub | nlp | 1724 | The eval log output during PaddleHub transfer learning is abnormal, as shown in the figure below. | The eval log output during PaddleHub transfer learning is abnormal, as shown in the figure below.

paddlefsl 1.0.0
paddlehub 2.0.4
paddlenlp 2.1.1
paddlepaddle-gpu 2.2.0.post101
tb-paddle 0.3.6 | open | 2021-12-09T01:57:29Z | 2021-12-09T07:50:36Z | https://github.com/PaddlePaddle/PaddleHub/issues/1724 | [] | livingbody | 2 |
strawberry-graphql/strawberry | django | 2,992 | Strawberry must provide server side ping messages | <!-- Provide a general summary of the bug in the title above. -->
Server side ping messages are necessary to keep the websocket connection open on all types of platforms.
The particular platform I'm working with is react-native on Android
<!--- This template is entirely optional and can be removed, but is here to help both you and us. -->
<!--- Anything on lines wrapped in comments like these will not show up in the final text. -->
## Describe the Bug
React-Native on Android Websockets close due to no server side PING messages within 8-10 seconds.
You can follow the crux of the discussion here: https://discord.com/channels/689806334337482765/1134350180653740065
I have verified the issue with the author of `graphql-ws` repo.
<!-- A clear and concise description of what the bug is. -->
## System Information
- Operating system:
- Strawberry version (if applicable):
## Additional Context
I would recommend sending PINGs from strawberry every 6 seconds to account for most types of client websocket timeouts.
Probably would require changes to handlers.py within these lines:
```py
async def handle_connection_init(self, message: ConnectionInitMessage) -> None:
if self.connection_timed_out:
# No way to reliably excercise this case during testing
return # pragma: no cover
if self.connection_init_timeout_task:
self.connection_init_timeout_task.cancel()
if message.payload is not UNSET and not isinstance(message.payload, dict):
await self.close(code=4400, reason="Invalid connection init payload")
return
self.connection_params = message.payload
if self.connection_init_received:
reason = "Too many initialisation requests"
await self.close(code=4429, reason=reason)
return
self.connection_init_received = True
await self.send_message(ConnectionAckMessage())
self.connection_acknowledged = True
async def handle_ping(self, message: PingMessage) -> None:
await self.send_message(PongMessage())
async def handle_pong(self, message: PongMessage) -> None:
pass
```
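The keepalive itself could be a small `asyncio` loop alongside those handlers. This is only a sketch of the idea, not Strawberry's API: `fake_send` below is a stand-in for the handler's real `send_message` coroutine, and the 6-second cadence suggested above is shortened so the demo finishes instantly:

```python
import asyncio

async def keepalive(send_message, interval: float, max_pings: int) -> None:
    """Send a ping message on a fixed cadence.

    In a real handler this task would start after ConnectionAck and be
    cancelled when the socket closes; `max_pings` only bounds the demo.
    """
    for _ in range(max_pings):
        await send_message({"type": "ping"})
        await asyncio.sleep(interval)

sent = []

async def fake_send(message):  # stand-in for the handler's send_message
    sent.append(message)

asyncio.run(keepalive(fake_send, interval=0.001, max_pings=3))
print(sent)  # [{'type': 'ping'}, {'type': 'ping'}, {'type': 'ping'}]
```

In the real handler the task would be created in `handle_connection_init` and cancelled on close, mirroring how `connection_init_timeout_task` is managed.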
<!-- Add any other relevant information about the problem here. --> | open | 2023-07-30T09:20:59Z | 2025-03-20T15:56:19Z | https://github.com/strawberry-graphql/strawberry/issues/2992 | [
"bug"
] | XChikuX | 6 |
remsky/Kokoro-FastAPI | fastapi | 224 | TTS output is skipping some input text | **Describe the bug**
Converting following text using this tool skips `group)—and` text altogether.
`high level of security with only a small group)—and at the same time, we realized that`
Here is the audio generated using the docker image
[kokoro-fastapi.zip](https://github.com/user-attachments/files/19115490/kokoro-fastapi.zip)
This is the one generated using [this other tool](https://huggingface.co/spaces/webml-community/kokoro-webgpu)
[kokoro-webgpu.zip](https://github.com/user-attachments/files/19115496/kokoro-webgpu.zip)
Voice used: `af_sarah`
**Branch / Deployment used**
Using docker image
ghcr.io/remsky/kokoro-fastapi-gpu:v0.2.2
**Operating System**
OS
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=24.04
DISTRIB_CODENAME=noble
DISTRIB_DESCRIPTION="Ubuntu 24.04.2 LTS"
Docker
Docker version 28.0.1, build 068a01e
NVIDIA GPU
NVIDIA-SMI 550.120
Driver Version: 550.120
CUDA Version: 12.4 | closed | 2025-03-06T21:33:03Z | 2025-03-24T05:57:22Z | https://github.com/remsky/Kokoro-FastAPI/issues/224 | [] | thekalinga | 8 |
fastapi-users/fastapi-users | asyncio | 563 | Cannot access /docs because of "TypeError: Object of type 'Depends' is not JSON serializable" | ## Describe the bug
When following the steps in [Dependency callables](https://frankie567.github.io/fastapi-users/usage/dependency-callables/#dependency-callables) from the website to [get the current active superuser](https://frankie567.github.io/fastapi-users/usage/dependency-callables/#get-the-current-active-superuser), I can no longer check the /docs page as I get "Failed to load API definition." and the console reports the error `TypeError: Object of type 'Depends' is not JSON serializable`
## To Reproduce
My code (just the relevant part) looks like this:
```python
fastapi_users = FastAPIUsers(
models.user_db,
auth.auth_backends,
models.User,
models.UserCreate,
models.UserUpdate,
models.UserDB,
)
current_user = fastapi_users.current_user(active=True, verified=True)
@app.get("/alerts", response_model=List[schemas.Alert])
async def get_alerts(
skip: int = 0, limit: int = 100, dependencies=[Depends(current_user)]
):
return await crud.get_alerts(skip=skip, limit=limit)
```
This problem started happening just as I have added the Depends part.
## Expected behavior
I expect the /docs page of my API to load
## Configuration
- Python version : 3.9.1
- FastAPI version : 0.63.0
- FastAPI Users version : 5.1.2
| closed | 2021-03-23T15:58:05Z | 2021-03-23T16:36:32Z | https://github.com/fastapi-users/fastapi-users/issues/563 | [
"bug"
] | alexferrari88 | 2 |
MaartenGr/BERTopic | nlp | 2,009 | Compare LDA, NMF, LSA with BERTopic (w/ embedding: all-MiniLM-L6-v2 + dim_red: UMAP + cluster: HDBSCAN) | Hi @MaartenGr ,
Given a dataset of texts, we want to extract topics using LDA, NMF, LSA and BERTopic (w/ embedding: all-MiniLM-L6-v2 + dim_red: UMAP + cluster: HDBSCAN).
In order to select the best algorithm for this dataset, the intuition was to optimize a mathematical combination of an applicable topic coherence measure and an applicable topic diversity measure. In one of the previous issues, [#90](https://github.com/MaartenGr/BERTopic/issues/90), I observed that when calculating topic coherence, you treated the concatenation of texts belonging to a cluster as a single document.
However, for calculating topic coherence for LDA, LSA and NMF, we simply get the BoW representation of given texts and calculate topic coherence.
To the best of my understanding, shouldn't we ensure that the corpus and dictionary passed to initialize the CoherenceModel object from gensim.coherencemodel are the same between BERTopic and LSA/LDA/NMF, so that we can actually compare the topic coherence values achieved by all algorithms and then select the one with the highest coherence?
Apologies for such a long description.
Thanks,
Abi | open | 2024-05-23T22:15:40Z | 2024-05-24T15:34:58Z | https://github.com/MaartenGr/BERTopic/issues/2009 | [] | abis330 | 1 |
plotly/dash | plotly | 2,736 | [Feature Request] Python 3.12 support | **Is your feature request related to a problem? Please describe.**
Currently CI only tests on 3.9 and 3.
**Describe the solution you'd like**
I'd like to see CI run against 3.8 and 3.12
**Describe alternatives you've considered**
n/a
**Additional context**
n/a
| closed | 2024-01-31T11:39:12Z | 2024-07-23T23:10:44Z | https://github.com/plotly/dash/issues/2736 | [] | graingert-coef | 2 |
geopandas/geopandas | pandas | 2,884 | gpd.read_file(gpd.datasets.get_path('naturalearth_lowres')) throws error: Symbol not found: (_ZSTD_compressBound) | - [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of geopandas.
- [ ] (optional) I have confirmed this bug exists on the main branch of geopandas.
---
**Note**: Please read [this guide](https://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) detailing how to provide the necessary information for us to reproduce your bug.
#### Code Sample, a copy-pastable example
```python
world = gpd.read_file(gpd.datasets.get_path('naturalearth_lowres'))
ImportError Traceback (most recent call last)
Input In [13], in <module>
1 df_coords = pd.DataFrame(df_meta_samples["Latitude and Longitude"].apply(lambda x: list(format_coords(x))).to_dict(), index=["Latitude", "Longitude"]).T
3 gdf_coords = gpd.GeoDataFrame(data=df_coords, geometry=gpd.points_from_xy(x=df_coords["Longitude"], y=df_coords["Latitude"]), index=df_coords.index).set_crs("EPSG:4326")
----> 5 world = gpd.read_file(gpd.datasets.get_path('naturalearth_lowres'))
6 mmin = np.min(gdf_coords[["Latitude", "Longitude"]].values.ravel())
7 mmax = np.max(gdf_coords[["Latitude", "Longitude"]].values.ravel())
File ~/anaconda3/envs/soothsayer_py3.9_env/lib/python3.9/site-packages/geopandas/io/file.py:242, in _read_file(filename, bbox, mask, rows, engine, **kwargs)
172 def _read_file(filename, bbox=None, mask=None, rows=None, engine=None, **kwargs):
173 """
174 Returns a GeoDataFrame from a file or URL.
175
(...)
240 by using the encoding keyword parameter, e.g. ``encoding='utf-8'``.
241 """
--> 242 engine = _check_engine(engine, "'read_file' function")
244 filename = _expand_user(filename)
246 from_bytes = False
File ~/anaconda3/envs/soothsayer_py3.9_env/lib/python3.9/site-packages/geopandas/io/file.py:112, in _check_engine(engine, func)
110 _check_pyogrio(func)
111 elif engine is None:
--> 112 raise ImportError(
113 f"The {func} requires the 'pyogrio' or 'fiona' package, "
114 "but neither is installed or imports correctly."
115 f"\nImporting fiona resulted in: {fiona_import_error}"
116 f"\nImporting pyogrio resulted in: {pyogrio_import_error}"
117 )
119 return engine
ImportError: The 'read_file' function requires the 'pyogrio' or 'fiona' package, but neither is installed or imports correctly.
Importing fiona resulted in: dlopen(/Users/jespinoz/anaconda3/envs/soothsayer_py3.9_env/lib/python3.9/site-packages/fiona/ogrext.cpython-39-darwin.so, 0x0002): Symbol not found: (_ZSTD_compressBound)
Referenced from: '/Users/jespinoz/anaconda3/envs/soothsayer_py3.9_env/lib/libgdal.30.dylib'
Expected in: '/Users/jespinoz/anaconda3/envs/soothsayer_py3.9_env/lib/libblosc.1.21.2.dylib'
Importing pyogrio resulted in: No module named 'pyogrio'
```
#### Problem description
Failed loading backend programs. I've uninstalled and reinstalled both pyogrio and fiona.
#### Output of ``geopandas.show_versions()``
<details>
SYSTEM INFO
-----------
python : 3.9.15 | packaged by conda-forge | (main, Nov 22 2022, 08:55:37) [Clang 14.0.6 ]
executable : /Users/jespinoz/anaconda3/envs/soothsayer_py3.9_env/bin/python
machine : macOS-12.6-x86_64-i386-64bit
GEOS, GDAL, PROJ INFO
---------------------
GEOS : 3.10.2
GEOS lib : /Users/jespinoz/anaconda3/envs/soothsayer_py3.9_env/lib/libgeos_c.dylib
GDAL : 3.5.0
GDAL data dir: None
PROJ : 9.0.0
PROJ data dir: /Users/jespinoz/anaconda3/envs/soothsayer_py3.9_env/share/proj
PYTHON DEPENDENCIES
-------------------
geopandas : 0.12.2
numpy : 1.21.5
pandas : 1.4.0
pyproj : 3.3.1
shapely : 1.8.2
fiona : None
geoalchemy2: None
geopy : 2.3.0
matplotlib : 3.5.1
mapclassify: 2.5.0
pygeos : 0.12.0
pyogrio : v0.4.1
psycopg2 : None
pyarrow : None
rtree : 1.0.1
</details>
| closed | 2023-04-28T16:57:21Z | 2023-04-28T20:14:09Z | https://github.com/geopandas/geopandas/issues/2884 | [
"needs triage"
] | jolespin | 1 |
sinaptik-ai/pandas-ai | data-visualization | 1,689 | Based on this project, I developed a visualization interface that focuses on Excel | At present, the targeted users are Chinese people because there are free interfaces available for use.
https://github.com/via007/pandas-ai-excel
Online Experience:
https://huggingface.co/spaces/viaho/pandas-ai-excel | closed | 2025-03-20T02:29:31Z | 2025-03-20T07:52:07Z | https://github.com/sinaptik-ai/pandas-ai/issues/1689 | [] | via007 | 1 |
open-mmlab/mmdetection | pytorch | 12,307 | how to update config 2.x to 3.x | I've looked at the docs and changed everything I can, but I'm still getting the error
```
/home/ps/mambaforge/envs/mesh-fitter/lib/python3.10/site-packages/mmengine/optim/optimizer/zero_optimizer.py:11: DeprecationWarning: `TorchScript` support for functional optimizers is deprecated and will be removed in a future PyTorch release. Consider using the `torch.compile` optimizer instead.
from torch.distributed.optim import \
Loads checkpoint by local backend from path: /home/ps/.cache/torch/hub/checkpoints/mmpose_anime-face_hrnetv2.pth
/home/ps/mambaforge/envs/mesh-fitter/lib/python3.10/site-packages/mmengine/runner/checkpoint.py:347: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(filename, map_location=map_location)
/home/ps/mambaforge/envs/mesh-fitter/lib/python3.10/site-packages/mmpose/apis/inference.py:121: UserWarning: Can not load dataset_meta from the checkpoint or the model config. Use COCO metainfo by default.
warnings.warn('Can not load dataset_meta from the checkpoint or the '
/home/ps/mambaforge/envs/mesh-fitter/lib/python3.10/site-packages/mmpose/datasets/datasets/utils.py:102: UserWarning: The metainfo config file "configs/_base_/datasets/coco.py" does not exist. A matched config file "/home/ps/mambaforge/envs/mesh-fitter/lib/python3.10/site-packages/mmpose/.mim/configs/_base_/datasets/coco.py" will be used instead.
warnings.warn(
Loads checkpoint by local backend from path: /home/ps/.cache/torch/hub/checkpoints/mmdet_anime-face_yolov3.pth
/home/ps/mambaforge/envs/mesh-fitter/lib/python3.10/site-packages/mmengine/runner/checkpoint.py:347: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(filename, map_location=map_location)
Traceback (most recent call last):
File "/home/ps/mambaforge/envs/mesh-fitter/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/ps/mambaforge/envs/mesh-fitter/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/home/ps/.vscode-server/extensions/ms-python.debugpy-2025.0.0/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/__main__.py", line 71, in <module>
cli.main()
File "/home/ps/.vscode-server/extensions/ms-python.debugpy-2025.0.0/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 501, in main
run()
File "/home/ps/.vscode-server/extensions/ms-python.debugpy-2025.0.0/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 351, in run_file
runpy.run_path(target, run_name="__main__")
File "/home/ps/.vscode-server/extensions/ms-python.debugpy-2025.0.0/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 310, in run_path
return _run_module_code(code, init_globals, run_name, pkg_name=pkg_name, script_name=fname)
File "/home/ps/.vscode-server/extensions/ms-python.debugpy-2025.0.0/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 127, in _run_module_code
_run_code(code, mod_globals, init_globals, mod_name, mod_spec, pkg_name, script_name)
File "/home/ps/.vscode-server/extensions/ms-python.debugpy-2025.0.0/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 118, in _run_code
exec(code, run_globals)
File "/home/ps/workspace/projects/mesh-fitter/src/main.py", line 24, in <module>
ai.detect_face.detect_face(img, output_path="models/base_female_face_with_landmarks.jpg")
File "/home/ps/workspace/projects/mesh-fitter/src/ai/detect_face.py", line 11, in detect_face
results = detector(new_image)
File "/home/ps/workspace/projects/mesh-fitter/src/ai/anime_face_detector/detector.py", line 133, in __call__
boxes = self._detect_faces(image)
File "/home/ps/workspace/projects/mesh-fitter/src/ai/anime_face_detector/detector.py", line 73, in _detect_faces
boxes = inference_detector(self.face_detector, image)[0]
File "/home/ps/mambaforge/envs/mesh-fitter/lib/python3.10/site-packages/mmdet/apis/inference.py", line 189, in inference_detector
results = model.test_step(data_)[0]
File "/home/ps/mambaforge/envs/mesh-fitter/lib/python3.10/site-packages/mmengine/model/base_model/base_model.py", line 145, in test_step
return self._run_forward(data, mode='predict') # type: ignore
File "/home/ps/mambaforge/envs/mesh-fitter/lib/python3.10/site-packages/mmengine/model/base_model/base_model.py", line 361, in _run_forward
results = self(**data, mode=mode)
File "/home/ps/mambaforge/envs/mesh-fitter/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/ps/mambaforge/envs/mesh-fitter/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
File "/home/ps/mambaforge/envs/mesh-fitter/lib/python3.10/site-packages/mmdet/models/detectors/base.py", line 94, in forward
return self.predict(inputs, data_samples)
File "/home/ps/mambaforge/envs/mesh-fitter/lib/python3.10/site-packages/mmdet/models/detectors/single_stage.py", line 109, in predict
x = self.extract_feat(batch_inputs)
File "/home/ps/mambaforge/envs/mesh-fitter/lib/python3.10/site-packages/mmdet/models/detectors/single_stage.py", line 146, in extract_feat
x = self.backbone(batch_inputs)
File "/home/ps/mambaforge/envs/mesh-fitter/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/ps/mambaforge/envs/mesh-fitter/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
File "/home/ps/mambaforge/envs/mesh-fitter/lib/python3.10/site-packages/mmdet/models/backbones/darknet.py", line 157, in forward
x = cr_block(x)
File "/home/ps/mambaforge/envs/mesh-fitter/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/ps/mambaforge/envs/mesh-fitter/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
File "/home/ps/mambaforge/envs/mesh-fitter/lib/python3.10/site-packages/mmcv/cnn/bricks/conv_module.py", line 281, in forward
x = self.conv(x)
File "/home/ps/mambaforge/envs/mesh-fitter/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/ps/mambaforge/envs/mesh-fitter/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
File "/home/ps/mambaforge/envs/mesh-fitter/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 458, in forward
return self._conv_forward(input, self.weight, self.bias)
File "/home/ps/mambaforge/envs/mesh-fitter/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 454, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
TypeError: conv2d() received an invalid combination of arguments - got (list, Parameter, NoneType, tuple, tuple, tuple, int), but expected one of:
* (Tensor input, Tensor weight, Tensor bias = None, tuple of ints stride = 1, tuple of ints padding = 0, tuple of ints dilation = 1, int groups = 1)
didn't match because some of the arguments have invalid types: (list of [Tensor], Parameter, NoneType, tuple of (int, int), tuple of (int, int), tuple of (int, int), int)
* (Tensor input, Tensor weight, Tensor bias = None, tuple of ints stride = 1, str padding = "valid", tuple of ints dilation = 1, int groups = 1)
didn't match because some of the arguments have invalid types: (list of [Tensor], Parameter, NoneType, tuple of (int, int), tuple of (int, int), tuple of (int, int), int)
/home/ps/mambaforge/envs/mesh-fitter/lib/python3.10/tempfile.py:869: ResourceWarning: Implicitly cleaning up <TemporaryDirectory '/tmp/tmppfgq15nb'>
_warnings.warn(warn_message, ResourceWarning)
```
yolo3 config:
```python
model = dict(type='YOLOV3',
backbone=dict(type='Darknet', depth=53, out_indices=(3, 4, 5)),
neck=dict(type='YOLOV3Neck',
num_scales=3,
in_channels=[1024, 512, 256],
out_channels=[512, 256, 128]),
bbox_head=dict(type='YOLOV3Head',
num_classes=1,
in_channels=[512, 256, 128],
out_channels=[1024, 512, 256],
anchor_generator=dict(type='YOLOAnchorGenerator',
base_sizes=[[(116, 90),
(156, 198),
(373, 326)],
[(30, 61),
(62, 45),
(59, 119)],
[(10, 13),
(16, 30),
(33, 23)]],
strides=[32, 16, 8]),
bbox_coder=dict(type='YOLOBBoxCoder'),
featmap_strides=[32, 16, 8]),
test_cfg=dict(nms_pre=1000,
min_bbox_size=0,
score_thr=0.05,
conf_thr=0.005,
nms=dict(type='nms', iou_threshold=0.45),
max_per_img=100))
test_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='Resize', scale=(608, 608), keep_ratio=True),
dict(
type='PackDetInputs',
meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape',
'scale_factor')),
]
test_dataloader = dict(
batch_size=1,
persistent_workers=True,
drop_last=False,
sampler=dict(type='DefaultSampler', shuffle=False),
dataset=dict(
type="CocoDataset",
test_mode=True,
pipeline=test_pipeline
)
)
```
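A note that may help: in MMDetection 3.x, the normalization, padding, and list-to-batched-tensor stacking that the old `Normalize`/`Pad` transforms performed is handled by the model's `data_preprocessor`, and the new-style config above does not define one. That would leave `batch_inputs` as a Python list of tensors, which matches the `conv2d() received an invalid combination of arguments - got (list, Parameter, ...)` error. A hedged sketch of the missing fragment (field names follow the stock mmdet 3.x YOLOv3 configs; verify against your installed version):

```python
# Hypothetical fix sketch: a 3.x-style model config with a data_preprocessor,
# so inputs are normalized, padded and stacked into one batched tensor instead
# of staying a list. Values mirror the old Normalize/Pad transforms below.
model = dict(
    type='YOLOV3',
    data_preprocessor=dict(
        type='DetDataPreprocessor',
        mean=[0, 0, 0],              # old Normalize(mean=[0, 0, 0])
        std=[255.0, 255.0, 255.0],   # old Normalize(std=[255., 255., 255.])
        bgr_to_rgb=True,             # old Normalize(to_rgb=True)
        pad_size_divisor=32),        # old Pad(size_divisor=32)
    backbone=dict(type='Darknet', depth=53, out_indices=(3, 4, 5)),
    # neck=..., bbox_head=..., test_cfg=... unchanged from the config above
)
```

The mean/std/pad values above are carried over from the old pipeline's `Normalize` and `Pad` transforms, so treat them as assumptions to double-check.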
old config:
```python
model = dict(type='YOLOV3',
backbone=dict(type='Darknet', depth=53, out_indices=(3, 4, 5)),
neck=dict(type='YOLOV3Neck',
num_scales=3,
in_channels=[1024, 512, 256],
out_channels=[512, 256, 128]),
bbox_head=dict(type='YOLOV3Head',
num_classes=1,
in_channels=[512, 256, 128],
out_channels=[1024, 512, 256],
anchor_generator=dict(type='YOLOAnchorGenerator',
base_sizes=[[(116, 90),
(156, 198),
(373, 326)],
[(30, 61),
(62, 45),
(59, 119)],
[(10, 13),
(16, 30),
(33, 23)]],
strides=[32, 16, 8]),
bbox_coder=dict(type='YOLOBBoxCoder'),
featmap_strides=[32, 16, 8]),
test_cfg=dict(nms_pre=1000,
min_bbox_size=0,
score_thr=0.05,
conf_thr=0.005,
nms=dict(type='nms', iou_threshold=0.45),
max_per_img=100))
test_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='MultiScaleFlipAug',
img_scale=(608, 608),
flip=False,
transforms=[
dict(type='Resize', keep_ratio=True),
dict(type='RandomFlip'),
dict(type='Normalize',
mean=[0, 0, 0],
std=[255.0, 255.0, 255.0],
to_rgb=True),
dict(type='Pad', size_divisor=32),
dict(type='ImageToTensor', keys=['img']),
dict(type='Collect', keys=['img'])
])
]
data = dict(test=dict(pipeline=test_pipeline))
``` | open | 2025-02-12T11:50:45Z | 2025-02-13T03:26:54Z | https://github.com/open-mmlab/mmdetection/issues/12307 | [] | vipcxj | 2 |
pandas-dev/pandas | data-science | 60,581 | BUG: pandas.api.types.is_datetime64_any_dtype returns True for 'M' str | ### Pandas version checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [X] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas
pandas.api.types.is_datetime64_any_dtype('M') # True
```
### Issue Description
The string 'M' cannot be converted to a datetime using pandas.to_datetime(). It raises the error
```
Given date string M not likely a datetime present at position 0
```
I'm unsure what 'M' is supposed to represent. 'Y' and 'D' both return False, so I don't think these are strftime format codes; perhaps there's another significance I'm missing?
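One plausible explanation, offered as an assumption to verify rather than a confirmed cause: NumPy accepts the single character `'M'` as its *kind* code for `datetime64`, so if the pandas check falls back to NumPy's dtype parsing, the bare `'M'` parses as a generic (unitless) datetime64 dtype:

```python
import numpy as np

# NumPy accepts the bare character 'M' as its kind code for datetime64,
# so np.dtype('M') parses to a generic, unitless datetime64 dtype:
dt = np.dtype('M')
print(dt.kind)  # 'M'

# By contrast, 'D' happens to be NumPy's character code for complex128
# (kind 'c'), and 'Y' is not a valid dtype string at all; neither has
# kind 'M', which would explain why they return False in
# is_datetime64_any_dtype while the bare 'M' returns True.
print(np.dtype('D').kind)  # 'c'
```

If that is indeed the code path, whether a bare kind character "counts" as a datetime64 dtype is arguably a design question rather than a plain bug.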
### Expected Behavior
I think the 'M' char on its own should return False.
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0f437949513225922d851e9581723d82120684a6
python : 3.11.7.final.0
python-bits : 64
OS : Windows
OS-release : 10
Version : 10.0.26100
machine : AMD64
processor : Intel64 Family 6 Model 186 Stepping 3, GenuineIntel
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : English_United Kingdom.1252
pandas : 2.0.3
numpy : 1.26.4
pytz : 2024.1
dateutil : 2.9.0.post0
setuptools : 68.2.2
pip : 23.3.1
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 3.1.4
IPython : 8.26.0
pandas_datareader: None
bs4 : 4.12.3
bottleneck : None
brotli : None
fastparquet : None
fsspec : None
gcsfs : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pyreadstat : None
pyxlsb : None
s3fs : None
scipy : None
snappy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
zstandard : None
tzdata : 2024.1
qtpy : None
pyqt5 : None
</details>
| closed | 2024-12-16T14:58:52Z | 2024-12-16T22:24:24Z | https://github.com/pandas-dev/pandas/issues/60581 | [
"Bug",
"Datetime",
"Closing Candidate"
] | ScottWilliamAnderson | 3 |
Lightning-AI/pytorch-lightning | machine-learning | 20,459 | ModelCheckpointCallback is triggered by mistake after every validation stage when using manual optimization | ### Bug description
I set the `every_n_epochs` param of `ModelCheckpoint` to 1 and the trainer's `val_check_interval` to 200. One epoch has 1000 iterations in total, so checkpoints should not be saved after each mid-epoch validation check, but they are.

### What version are you seeing the problem on?
v2.4
### How to reproduce the bug
_No response_
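For reference, a minimal configuration sketch of the reported setup (assuming the current `lightning.pytorch` API; the checkpoint path and the commented-out model are illustrative, and this is a config fragment rather than a runnable reproduction):

```python
# Hypothetical configuration matching the report: checkpoints expected once
# per epoch, with mid-epoch validation every 200 of the 1000 iterations.
from lightning.pytorch import Trainer
from lightning.pytorch.callbacks import ModelCheckpoint

checkpoint_cb = ModelCheckpoint(
    dirpath="checkpoints/",  # illustrative path
    every_n_epochs=1,        # expected: save at most once per epoch
)

trainer = Trainer(
    max_epochs=10,
    val_check_interval=200,  # validate every 200 training iterations
    callbacks=[checkpoint_cb],
)
# trainer.fit(model)  # where the LightningModule sets
#                     # self.automatic_optimization = False
```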
### Error messages and logs
```
# Error messages and logs here please
```
### Environment
<details>
<summary>Current environment</summary>
```
#- PyTorch Lightning Version (e.g., 2.4.0):
#- PyTorch Version (e.g., 2.4):
#- Python version (e.g., 3.12):
#- OS (e.g., Linux):
#- CUDA/cuDNN version:
#- GPU models and configuration:
#- How you installed Lightning(`conda`, `pip`, source):
```
</details>
### More info
_No response_
cc @tchaton @justusschock @awaelchli @borda | open | 2024-11-29T03:39:34Z | 2025-01-03T07:21:14Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20459 | [
"working as intended",
"design"
] | silverbulletmdc | 5 |
microsoft/nlp-recipes | nlp | 60 | Investigate ML Flow | MLflow integrates with the AzureML experimentation service, so it would be good to investigate whether we can use it throughout the repo and then show how to integrate with AzureML.
https://mlflow.org/ | closed | 2019-05-14T15:15:47Z | 2019-08-13T15:13:16Z | https://github.com/microsoft/nlp-recipes/issues/60 | [
"investigate"
] | heatherbshapiro | 5 |
wagtail/wagtail | django | 12,890 | Display total count in model index views | ### Is your proposal related to a problem?
Since we're now implementing most data views in Wagtail (also due to the flexible `ModelViewSet`/`SnippetViewSet` 👍), it would be nice to have the total count of items somewhere on the screen. Currently, you only see the number of found items if you've filtered/searched something.
### Describe the solution you'd like
I'm not a UI guy, but I hope we could find a nice place/solution.
**Django:**
In Django index views, if no filters are active, the count is displayed within the paginator at the bottom:

(description: 4227 users in total)
I don't like the fact that you always have to scroll to the bottom to find the number, but that's my personal opinion.
If filters are active, the total count is displayed beside the filtered item count (and basically links to the default view without filters):

(description: 18 found items, 4227 in total)
**Our temporary solution:**
In our case, we have quick-fixed it by (mis-)using the "sublabel" of the last breadcrumb item:

(description: 27 found items, 234 in total)
### Working on this
Anyone can contribute to this. View our [contributing guidelines](https://docs.wagtail.org/en/latest/contributing/index.html), add a comment to the issue once you’re ready to start.
| open | 2025-02-18T10:23:56Z | 2025-02-26T21:54:50Z | https://github.com/wagtail/wagtail/issues/12890 | [
"type:Enhancement"
] | th3hamm0r | 2 |
deepset-ai/haystack | machine-learning | 8,481 | Add support for Chinese language | **Is your feature request related to a problem? Please describe.**
https://github.com/deepset-ai/haystack/discussions/4585
| closed | 2024-10-22T17:02:52Z | 2024-12-08T02:10:34Z | https://github.com/deepset-ai/haystack/issues/8481 | [
"stale",
"community-triage"
] | aonoa | 1 |
scikit-image/scikit-image | computer-vision | 7,708 | Add OOF for 3D Vessel Segmentation | ### Description:
Dear Maintainers,
Optimally Oriented Flux (OOF) is a powerful 3D vessel segmentation, which shows better vesselness than the classic Frangi method, and adding it would greatly benefit the users in medical image processing.
The official MATLAB code is: [OOF MATLAB code](https://www.mathworks.com/matlabcentral/fileexchange/41612-optimally-oriented-flux-oof-for-3d-curvilinear-structure), and the 3rd party python implementation: [OOF Python code](https://github.com/fepegar/optimally-oriented-flux).
Could you please consider adding it to skimage, thank you very much!
Ref.
[1] M.W.K. Law and A.C.S. Chung, Three Dimensional Curvilinear Structure Detection using Optimally Oriented Flux, ECCV 2008.
[2] M.W.K. Law et al., Dilated Divergence based Scale-Space Representation for Curve Analysis, ECCV 2012. | open | 2025-02-21T04:54:05Z | 2025-02-21T04:54:05Z | https://github.com/scikit-image/scikit-image/issues/7708 | [
":pray: Feature request"
] | Spritea | 0 |
Nekmo/amazon-dash | dash | 84 | High CPU usage on Raspberry Pi | Put an `x` into all the boxes [ ] relevant to your *issue* (like this: `[x]`)
### What is the purpose of your *issue*?
- [x] Bug report (encountered problems with amazon-dash)
- [ ] Feature request (request for a new functionality)
- [ ] Question
- [ ] Other
### Guideline for bug reports
You can delete this section if your report is not a bug
* amazon-dash version: 1.2.0
* Python version: 2.7.13
* Pip & Setuptools version: 9.0.1, 33.1.1
* Operating System: Raspbian (latest)
How to get your version: pip install
```
amazon-dash --version
python --version
pip --version
easy_install --version
```
- [x] The `pip install` or `setup install` command has been completed without errors
- [x] The `python -m amazon_dash.install` command has been completed without errors
- [x] The `amazon-dash discovery` command works without errors
- [x] I have created/edited the configuration file
- [x] *Amazon-dash service* or `amazon-dash --debug run` works
#### Description
Upgraded to the latest version of amazon-dash (1.2.0). The CPU usage is a little lower than in the previous version, but it still hovers around 40-50%.
I am using the latest Raspbian on a Raspberry Pi 3B+. The same Raspberry Pi is also running Home Assistant.
```
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
287 root 20 0 35084 31252 6472 S 70.6 3.3 5:55.02 amazon-dash
435 homeass+ 20 0 472424 88784 16560 S 60.7 9.4 2:35.33 hass
2025 pi 20 0 8044 3220 2712 R 0.7 0.3 0:01.08 top
66 root 20 0 0 0 0 S 0.3 0.0 0:02.35 mmcqd/0
```
#### What I Did
Upgraded from 1.1.1 to 1.2.0. The CPU usage came down by about 10%.
| closed | 2018-09-03T22:33:57Z | 2018-12-02T12:57:41Z | https://github.com/Nekmo/amazon-dash/issues/84 | [] | saurabhsharma001 | 3 |
lux-org/lux | jupyter | 408 | [BUG] Lux showing only max 10 values on x-axis. How to increase it? | Sorry, not sure if this is a bug, but I cannot find the information anywhere.
When visualizing more than 10 values I get (" + X more ...") in the top-right corner, but I cannot "unroll" it. It seems Lux is limited to just 10 values on the x-axis; how do I increase this limit?
ivy-llc/ivy | numpy | 28,575 | Fix Frontend Failing Test: tensorflow - creation.paddle.tril | To-do List: https://github.com/unifyai/ivy/issues/27499 | closed | 2024-03-13T00:18:59Z | 2024-03-21T19:50:06Z | https://github.com/ivy-llc/ivy/issues/28575 | [
"Sub Task"
] | ZJay07 | 0 |
charlesq34/pointnet | tensorflow | 122 | KeyError: "Unable to open object (object 'data' doesn't exist)" | Thanks for your awesome code share!
I run the sem_seg code following readme step by step, but when I run`python train.py --log_dir log6 --test_area 6`, there is an error:`KeyError: "Unable to open object (object 'data' doesn't exist)"`, here is details:
```
Traceback (most recent call last):
File "train.py", line 70, in <module>
data_batch, label_batch = provider.loadDataFile(h5_filename)
File "/usr/Downloads/pointnet/provider.py", line 97, in loadDataFile
return load_h5(filename)
File "/usr/Downloads/pointnet/provider.py", line 92, in load_h5
data = f['data'][:]
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "/usr/local/lib/python2.7/dist-packages/h5py/_hl/group.py", line 177, in __getitem__
oid = h5o.open(self.id, self._e(name), lapl=self._lapl)
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "h5py/h5o.pyx", line 190, in h5py.h5o.open
KeyError: "Unable to open object (object 'data' doesn't exist)"
```
I run the code in the [tensorflow docker 1.7.1-devel-gpu](https://hub.docker.com/r/tensorflow/tensorflow/tags/) image with `Python 2.7`, and I have solved all the dependency problems.
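In similar reports, the usual culprit is that the `.h5` files were not fully downloaded (a failed download can leave an HTML error page behind with a `.h5` extension), so the `'data'` key genuinely doesn't exist. A quick stdlib sanity check, offered as a sketch; it only verifies the 8-byte HDF5 signature, not the file contents:

```python
# Sketch: check whether a downloaded .h5 file is actually an HDF5 file.
# A failed/redirected download often leaves an HTML error page behind.
HDF5_MAGIC = b"\x89HDF\r\n\x1a\n"  # standard 8-byte HDF5 signature

def looks_like_hdf5(path):
    """Return True if the file starts with the HDF5 magic bytes."""
    with open(path, "rb") as f:
        return f.read(8) == HDF5_MAGIC
```

If this returns `False` for a file, re-download the dataset; if it returns `True`, the file may simply use different keys, which `list(f.keys())` in h5py would reveal.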
Looking forward to your response, thanks a lot! | open | 2018-07-26T06:52:43Z | 2020-09-15T17:50:51Z | https://github.com/charlesq34/pointnet/issues/122 | [] | qixuxiang | 4 |
PeterL1n/RobustVideoMatting | computer-vision | 160 | Where is the mIoU evaluation code? | Where is the mIoU evaluation code?
I would like to know where the mean IoU evaluation code for segmentation is; I could not find it in the evaluation folder.
liangliangyy/DjangoBlog | django | 600 | How should the Baidu Analytics section be added? Could you provide instructions? The code check shows the referrer is disabled. | <!--
If you do not carefully tick the items below, I may close your issue directly.
Before asking, it is recommended to read https://github.com/ruby-china/How-To-Ask-Questions-The-Smart-Way
-->
**I confirm that I have checked** (mark `[ ]` as `[x]`)
- [x] [DjangoBlog's readme](https://github.com/liangliangyy/DjangoBlog/blob/master/README.md)
- [x] [Configuration instructions](https://github.com/liangliangyy/DjangoBlog/blob/master/bin/config.md)
- [x] [Other issues](https://github.com/liangliangyy/DjangoBlog/issues)
----
**I am requesting** (mark `[ ]` as `[x]`)
- [ ] Bug report
- [ ] New feature or functionality
- [x] Technical support
In the admin backend -> site configuration -> site analytics code, I pasted in the code provided by Baidu Analytics and saved it. But the Baidu Analytics code check then reports that the referrer is disabled, and clicking Baidu Analytics on the page shows a 500 error. | closed | 2022-08-21T17:09:58Z | 2022-09-06T10:02:10Z | https://github.com/liangliangyy/DjangoBlog/issues/600 | [] | zyan-repository | 3 |
pykaldi/pykaldi | numpy | 153 | Error installing Protobuf | I'm trying to install Pykaldi from source.
When installing Protobuf using "./install_protobuf.sh" I get the following error.
Installing Protobuf C++ library...
+ autoreconf -f -i -Wall,no-obsolete
configure.ac:30: error: possibly undefined macro: AC_PROG_LIBTOOL
If this token and others are legitimate, please use m4_pattern_allow.
See the Autoconf documentation.
autoreconf: /usr/bin/autoconf failed with exit status: 1
Any ideas?
Thanks | closed | 2019-08-12T17:12:12Z | 2019-08-12T19:00:32Z | https://github.com/pykaldi/pykaldi/issues/153 | [] | jsullvan | 1 |
electricitymaps/electricitymaps-contrib | data-visualization | 7,250 | Missing exchange configs between France, Jersey and Guernsey | There is a connection between France and Jersey but it is missing from the map (Normandie 1, 2 and 3).
Jersey gets 96% of its electricity from France (https://www.jec.co.uk/about-us/our-vision/sustainability/protecting-the-environment).
Guernsey has an interconnection from Jersey called GJ1 (https://www.electricity.gg/electricity/electricity-in-guernsey/importing-electricity/).
Channel Islands shape files may need to be added to the map. | open | 2024-09-30T21:22:50Z | 2024-10-08T00:55:03Z | https://github.com/electricitymaps/electricitymaps-contrib/issues/7250 | [
"exchange config"
] | AJDelusion | 2 |
CorentinJ/Real-Time-Voice-Cloning | pytorch | 420 | New Audio Issue: Assertion Failed | This was working fine yesterday, and no big changes were made.
However, today starting up the demo toolbox produced:
Assertion failed!
Program: C:\Users\paul1\AppData\Local\Programs\Python\Python37\python.exe
File: src/hostapi/wdmks/pa_win_wdmks.c, Line 1061
Expression: FALSE
I have tried reinstalling visual studio as well, but to no avail. Any thoughts on this would be deeply appreciated.
| closed | 2020-07-11T22:04:58Z | 2020-07-12T02:20:03Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/420 | [] | kyukenmyo | 3 |
coqui-ai/TTS | python | 3,965 | [Bug] Assertion srcIndex < srcSelectDimSize | ### Describe the bug
Assertion `srcIndex < srcSelectDimSize` failed.
RuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling cublasLtMatmul with transpose_mat1 0 transpose_mat2 0 m 4096 n 108 k 1024 mat1_ld 4096 mat2_ld 1024 result_ld 4096 abcType 0 computeType 68 scaleType 0
CUDA Error Details:
RuntimeError: CUDA error: device-side assert triggered
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
This is the error I'm facing when I deploy the model with FastAPI: if I send a request with 300 words twice, the first request processes fine, then the second request triggers this error. It suggests a GPU memory problem, but my memory never peaked or reached its limit.
This is how I load the model: `tts = TTS(model_path="./models/xtts", config_path='./models/xtts/config.json').to(device)`
And this is how I generate a file: `tts.tts_to_file(text=text, speaker_wav=f"./voices/{voice}", language=language, file_path=output_file)`
Am I loading or using it wrong, or should I limit the word count?
### To Reproduce
This is the error I'm facing when I deploy the model with FastAPI: if I send a request with 600 words twice, the first request processes fine, then the second request triggers this error. It suggests a GPU memory problem, but my memory never peaked or reached its limit.
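In similar reports, `Assertion srcIndex < srcSelectDimSize` in an embedding lookup usually means the token/position indices exceed the embedding table size, i.e. the request text is longer than the model's tokenizer/positional limit, rather than a GPU memory issue. A hedged workaround sketch, splitting text into sentence-aligned chunks before synthesis (the 250-character budget is a guessed safety margin, not an official XTTS limit):

```python
import re

def chunk_text(text, max_chars=250):
    """Split text into sentence-aligned chunks no longer than max_chars.

    Sketch only: a single sentence longer than max_chars is kept as one
    oversized chunk, and the 250-char default is an assumption to tune
    against your model/tokenizer.
    """
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current = [], ""
    for sentence in sentences:
        # Start a new chunk when appending would exceed the budget.
        if current and len(current) + 1 + len(sentence) > max_chars:
            chunks.append(current)
            current = sentence
        else:
            current = f"{current} {sentence}".strip() if current else sentence
    if current:
        chunks.append(current)
    return chunks
```

Each chunk would then be synthesized with its own `tts.tts_to_file(...)` call and the resulting audio concatenated.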
### Expected behavior
_No response_
### Logs
../aten/src/ATen/native/cuda/Indexing.cu:1236: indexSelectSmallIndex: block: [3,0,0], thread: [93,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1236: indexSelectSmallIndex: block: [3,0,0], thread: [94,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1236: indexSelectSmallIndex: block: [3,0,0], thread: [95,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
RuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling cublasLtMatmul with transpose_mat1 0 transpose_mat2 0 m 4096 n 108 k 1024 mat1_ld 4096 mat2_ld 1024 result_ld 4096 abcType 0 computeType 68 scaleType 0
CUDA Error Details:
RuntimeError: CUDA error: device-side assert triggered
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
CUDA Error Details:
ERROR:root:RuntimeError during TTS generation: CUDA error: device-side assert triggered
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
### Environment
```shell
-XTTS 2.2
-cuda 11.8.0
-python 3.8
-ubuntu 22.04
-pip3 install torch==2.3.1+cu118 torchaudio==2.3.1+cu118 --extra-index-url https://download.pytorch.org/whl/cu118
-GPU T4 or 3060Ti
```
### Additional context
_No response_ | closed | 2024-08-13T05:27:51Z | 2025-01-03T08:49:02Z | https://github.com/coqui-ai/TTS/issues/3965 | [
"bug",
"wontfix"
] | davaavirtualplus | 1 |
0b01001001/spectree | pydantic | 301 | Exclude routes from OpenAPI spec | By default, all routes that are defined in a Starlette application are included in the OpenAPI schema. Is it possible to specifically exclude some of them?
EDIT: Seems like overriding `bypass` method can do this, but it would be nice if `@spectree.validate()` allowed to mark method for exclusion from the OpenAPI schema. | closed | 2023-04-19T18:47:39Z | 2024-11-24T12:55:54Z | https://github.com/0b01001001/spectree/issues/301 | [] | and3rson | 1 |
akfamily/akshare | data-science | 5,353 | AKShare 接口问题报告 | AKShare Interface Issue Report | Interface:
Hong Kong stock valuation metrics
Interface: stock_hk_valuation_baidu
Target URL: https://gushitong.baidu.com/stock/hk-06969
Description: Baidu Gushitong - HK stocks - financial statements - valuation data
Limit: a single call fetches historical data for the specified symbol, indicator and period
stock_hk_valuation_baidu_df = ak.stock_hk_valuation_baidu(symbol="02358", indicator="总市值", period="近一年")
Problem: the data returned is the Hong Kong free-float market cap, not the total market cap ("总市值").
Market cap returned for 00921:

Xueqiu shows that what is returned is the Hong Kong free-float market cap, about one third of the total market cap:

| closed | 2024-11-21T03:19:30Z | 2024-11-21T10:04:50Z | https://github.com/akfamily/akshare/issues/5353 | [
"bug"
] | Thalesoflaf | 1 |
napari/napari | numpy | 7,591 | [Labels] Consider a keybind for reporting total number of labels | ## 🧰 Task
A user asked me how to get the total number of objects labeled in a labels layer.
I realized I once asked this on zulip.
https://napari.zulipchat.com/#narrow/channel/212875-general/topic/.E2.9C.94.20number.20of.20labels
`np.count_nonzero(np.unique(labels.data))`
where `labels` is the Labels layer.
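The one-liner above counts the distinct non-zero label values (the background value `0` is excluded, and label IDs need not be consecutive). A small self-contained illustration:

```python
import numpy as np

# Toy label image: background 0 plus labels 1 and 3. Label IDs need not
# be consecutive, so the count is of distinct non-zero values.
labels_data = np.array([
    [0, 0, 1],
    [0, 1, 1],
    [3, 3, 0],
])

n_objects = np.count_nonzero(np.unique(labels_data))
print(n_objects)  # 2
```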
It might be worth adding this as a keybinding for the layer. For Points, if you select all points using the keybindings, you get a notification of how many there were, so something similar might make sense. Otherwise, consider it for the Layers menu?
"task",
"enhancement",
"UI/UX"
] | psobolewskiPhD | 1 |
comfyanonymous/ComfyUI | pytorch | 6,750 | I can't configure or install the environment on my first installation of a ComfyUI model (Janus Pro) | ### Your question
Hi:
Sorry to disturb you; I have a question and need your help.
I am a student, new here, trying to learn about AI models. I am not familiar with Python or environment management, but I am trying to understand them. This is my first time installing a ComfyUI model, Janus Pro 7B. I have already installed a base model, DeepSeek-R1:14B (I think that counts as a base model?). As you can see, I deploy it locally.
My ComfyUI and ComfyUI Manager are already installed, and I can see the GUI.
I downloaded and installed Janus Pro from ComfyUI Manager.
But it doesn't work now; the screen shows the following:
Local model not found at C:\temple2\ComfyUI_windows_portable\ComfyUI\models\Janus-Pro\Janus-Pro-7B. Please download the model and place it in the ComfyUI/models/Janus-Pro folder
I created a "Janus-Pro" folder under "ComfyUI\models", but this report still appears when I queue the workflow again in ComfyUI.
So I downloaded the model files from this link (every file in the "main" branch) and put them in the "Janus-Pro" folder.
the link: https://huggingface.co/deepseek-ai/Janus-Pro-7B/tree/main
BTW, I noticed there is a file named README.md in the file list. I read it, but got stuck at the "git clone XXX Janus pro 7B" step: it says the target folder is not empty, but I can't find where that folder is.
So I don't know what the next step is. This is a somewhat complex problem for me; if you know how to solve it, I would be glad to learn and try again.
This is my first time opening an issue on GitHub, so please forgive any mistakes I've made; I will fix them. I am not a native English speaker, but I will try my best to understand everything you say and to solve this.
Thank you~
good day! : )





### Logs
```powershell
I don't know how to upload the logs that are needed, but I have explained the problem as well as I can in the question above. Sorry.
```
### Other
_No response_ | closed | 2025-02-08T18:34:20Z | 2025-02-09T10:29:41Z | https://github.com/comfyanonymous/ComfyUI/issues/6750 | [
"User Support"
] | FrankElson | 3 |
matterport/Mask_RCNN | tensorflow | 2,936 | ModuleNotFoundError: No module named 'keras.engine.base_layer_v1' | I had Mask_RCNN installed and working, but my GPU was not being detected, so I tried to use the newest version of TensorFlow in a new environment running Python 3.8:
```
pip install tensorflow scikit-image matplotlib
pip install imgaug
pip install numpy
```
And then, following some instructions from an implementation article, I ran:
```
import os
import sys
import random
import math
import numpy as np # 1.24.2
import skimage.io # 0.20.0
import matplotlib # 3.7.1
import matplotlib.pyplot as plt
import tensorflow as tf # 2.11.0
import imgaug # 0.4.0
import keras # 2.11.0
# Root directory of the project
ROOT_DIR = os.path.abspath("../")
# Import Mask RCNN
sys.path.append(ROOT_DIR) # To find local version of the library
from mrcnn import utils
import mrcnn.model as modellib
from mrcnn import visualize
# Import COCO config
sys.path.append(os.path.join(ROOT_DIR, "samples/coco/")) # To find local version
import coco
%matplotlib inline
# Directory to save logs and trained model
MODEL_DIR = os.path.join(ROOT_DIR, "logs")
# Local path to trained weights file
COCO_MODEL_PATH = os.path.join(ROOT_DIR, "mask_rcnn_coco.h5")
# Download COCO trained weights from Releases if needed
if not os.path.exists(COCO_MODEL_PATH):
utils.download_trained_weights(COCO_MODEL_PATH)
# Directory of images to run detection on
IMAGE_DIR = os.path.join(ROOT_DIR, "images")
```
and
```
class InferenceConfig(coco.CocoConfig):
# Set batch size to 1 since we'll be running inference on
# one image at a time. Batch size = GPU_COUNT * IMAGES_PER_GPU
GPU_COUNT = 1
IMAGES_PER_GPU = 1
config = InferenceConfig()
config.display()
```
No errors. But then running this final section for the model instantiation yields the titular error:
```
# Create model object in inference mode.
model = modellib.MaskRCNN(mode="inference", model_dir=MODEL_DIR, config=config)
# Load weights trained on MS-COCO
model.load_weights(COCO_MODEL_PATH, by_name=True)
```
# **Error Message**
```
ModuleNotFoundError Traceback (most recent call last)
Cell In[13], line 2
1 # Create model object in inference mode.
----> 2 model = modellib.MaskRCNN(mode="inference", model_dir=MODEL_DIR, config=config)
4 # Load weights trained on MS-COCO
5 model.load_weights(COCO_MODEL_PATH, by_name=True)
File [c:\Users\urp6gg\WorkingProjects\Mask_RCNN\mrcnn\model.py:1838](file:///C:/Users/urp6gg/WorkingProjects/Mask_RCNN/mrcnn/model.py:1838), in MaskRCNN.__init__(self, mode, config, model_dir)
1836 self.model_dir = model_dir
1837 self.set_log_dir()
-> 1838 self.keras_model = self.build(mode=mode, config=config)
File [c:\Users\urp6gg\WorkingProjects\Mask_RCNN\mrcnn\model.py:1856](file:///C:/Users/urp6gg/WorkingProjects/Mask_RCNN/mrcnn/model.py:1856), in MaskRCNN.build(self, mode, config)
1851 raise Exception("Image size must be dividable by 2 at least 6 times "
1852 "to avoid fractions when downscaling and upscaling."
1853 "For example, use 256, 320, 384, 448, 512, ... etc. ")
1855 # Inputs
-> 1856 input_image = KL.Input(
1857 shape=[None, None, config.IMAGE_SHAPE[2]], name="input_image")
1858 input_image_meta = KL.Input(shape=[config.IMAGE_META_SIZE],
1859 name="input_image_meta")
1860 if mode == "training":
1861 # RPN GT
...
File :991, in _find_and_load(name, import_)
File :973, in _find_and_load_unlocked(name, import_)
ModuleNotFoundError: No module named 'keras.engine.base_layer_v1'
``` | closed | 2023-03-14T22:40:32Z | 2023-08-18T03:17:15Z | https://github.com/matterport/Mask_RCNN/issues/2936 | [] | domattioli | 1 |
labmlai/annotated_deep_learning_paper_implementations | deep-learning | 112 | fixing typo (theta -> \theta) | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/d36a5bceb0ef6fb2ee391670e7eaac54c288d5c2/labml_nn/diffusion/ddpm/__init__.py#L129
```suggestion
where $\epsilon_\theta$ is a learned function that predicts $\epsilon$ given $(x_t, t)$.
```
| closed | 2022-03-20T07:03:45Z | 2022-04-10T08:15:24Z | https://github.com/labmlai/annotated_deep_learning_paper_implementations/issues/112 | [] | tkgw | 1 |
raphaelvallat/pingouin | pandas | 206 | BUGFIX v0.5.0 - Handling of missing values in repeated measurements | :bangbang: | This issue explains a critical bug that will be fixed in Pingouin v0.5.0. Please read carefully, and make sure to double check all your results with another statistical software.
:---: | :---
### One-way repeated measures ANOVA
Let's create a repeated measures dataset, in three different formats:
```python
import numpy as np
import pandas as pd
import pingouin as pg
# Create a wide-format dataframe, with missing values
df_a = pg.read_dataset("rm_anova_wide")
df_a['sub'] = np.arange(df_a.shape[0])
df_a['group'] = ['A'] * 5 + ['B'] * 7
df_a.set_index(['sub', 'group'], inplace=True)
# Convert to long-format
df_b = df_a.melt(var_name="time", ignore_index=False).sort_index().reset_index()
# Convert to long-format and remove rows with missing values
df_c = df_b.dropna().reset_index(drop=True)
```

Note how `df_c` has no explicit missing value. Instead, the missing values are "implicit", such that there are no rows in the dataframe when the data is missing. However, all these datasets contain exactly the same non-missing data. Therefore, any call to [pg.rm_anova](https://pingouin-stats.org/generated/pingouin.rm_anova.html) should lead to similar results. Unfortunately, this was not the case in versions of Pingouin <0.5.0. Indeed, `df_a` and `df_b` would return similar (correct) results, but `df_c` gave wrong results.
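One way to surface such implicit missing values before running the ANOVA is to pivot the long table to wide form, where absent (subject, time) cells become explicit `NaN`. A small sketch on toy data (the column names are illustrative):

```python
import pandas as pd

# Toy long-format data with an *implicit* missing value: subject 1 has no
# row for time 't2', so nothing in the frame is literally NaN.
long_df = pd.DataFrame({
    "sub":   [0, 0, 1, 2, 2],
    "time":  ["t1", "t2", "t1", "t1", "t2"],
    "value": [1.0, 2.0, 3.0, 4.0, 5.0],
})

wide = long_df.pivot(index="sub", columns="time", values="value")
print(wide.isna().sum().sum())  # 1 -> the implicit gap is now explicit
```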
```python
print(pg.rm_anova(df_a))
print(df_b.rm_anova(dv="value", within="time", subject="sub"))
print(df_c.rm_anova(dv="value", within="time", subject="sub"))
```

Pay attention to the degrees of freedom (`ddof2`): `df_c` has more degrees of freedom, because no listwise deletion was applied. This leads to a smaller p-value. By contrast, in `df_a` and `df_b`, any subjects/rows with missing values were completely removed before calculating the ANOVA. This is the default behavior of [JASP](https://jasp-stats.org/download/) and other statistical software packages, aka complete-case analysis. This has now been fixed in Pingouin 0.5.0, such that results would be similar for all three dataframes.
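For reference, the same complete-case behavior can be enforced by hand, whatever the input format, by round-tripping through wide form and dropping incomplete subjects. A hedged sketch on toy data (column names are illustrative):

```python
import pandas as pd

# Sketch: enforce listwise deletion by hand. Subjects with *any* missing
# cell (explicit NaN or an absent row) are dropped entirely before
# converting back to long format.
long_df = pd.DataFrame({
    "sub":   [0, 0, 1, 2, 2],
    "time":  ["t1", "t2", "t1", "t1", "t2"],  # subject 1 is missing 't2'
    "value": [1.0, 2.0, 3.0, 4.0, 5.0],
})

complete = (
    long_df.pivot(index="sub", columns="time", values="value")
           .dropna()                                   # listwise deletion
           .melt(value_name="value", ignore_index=False)
           .reset_index()
)
print(sorted(complete["sub"].unique()))  # [0, 2] -> subject 1 removed
```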
Importantly, this issue did not impact [pg.pairwise_ttests](https://pingouin-stats.org/generated/pingouin.pairwise_ttests.html). Indeed:
```python
df_b.pairwise_ttests(dv="value", within="time", subject="sub")
```
would lead to valid results (note how the degrees of freedom is 8 meaning that listwise deletion of rows with missing values was automatically performed)

However, calling `df_c` would lead to an error:
```python
df_c.pairwise_ttests(dv="value", within="time", subject="sub")
```

Of note, we can also disable the automatic listwise deletion by using `nan_policy="pairwise"`, in which case the degrees of freedom are larger because missing values are only removed separately for each pairwise test:

****
### Mixed ANOVA
The same issue applies to [pg.mixed_anova](https://pingouin-stats.org/generated/pingouin.mixed_anova.html). Indeed,
```python
print(df_b.mixed_anova(dv="value", within="time", between="group", subject="sub"))
print(df_c.mixed_anova(dv="value", within="time", between="group", subject="sub"))
```

Here again, `df_b` is the correct one. The degrees of freedom are smaller, meaning that a listwise deletion was applied: all participants with one or more missing value were entirely removed. This is also the default behavior in [JASP](https://jasp-stats.org/download/).
As before, this issue did not affect [pg.pairwise_ttests](https://pingouin-stats.org/generated/pingouin.pairwise_ttests.html), as it would return an error for `df_c`.
****
### Two-way repeated measures ANOVA
```python
# Long-format dataframe with explicit NaN
df_b = pg.read_dataset("rm_anova2").sort_values(by=['Time', 'Metric']).reset_index(drop=True)
df_b['Performance'] = df_b['Performance'].astype(float)
df_b.at[0, "Performance"] = np.nan
# Implicit missing value
df_c = df_b.copy().drop(index=0).reset_index(drop=True)
# Wide-format, explicit listwise deletion and convert to long-format
df_piv = df_b.pivot_table(index=['Subject'], columns=['Time', 'Metric'], values="Performance")
df_a = df_piv.dropna().melt(value_name="Performance", ignore_index=False).reset_index()
```

This is the output that we get in [JASP](https://jasp-stats.org/download/) (when using `df_piv`, but without `.dropna()`):

Now, if we run this same two-way repeated measures ANOVA in Pingouin <0.5.0, only `df_a` will produce the same output as JASP. This is because we have enforced a complete-case analysis with [pandas.dropna](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.dropna.html). This is also the new behavior in Pingouin >=0.5.0. You can check with these lines:
```python
df_a.rm_anova(dv="Performance", within=["Time", 'Metric'], subject="Subject")
df_b.rm_anova(dv="Performance", within=["Time", 'Metric'], subject="Subject")
df_c.rm_anova(dv="Performance", within=["Time", 'Metric'], subject="Subject")
```
****
**TL;DR**
In Pingouin <0.5.0, listwise deletion of subjects (or rows) with missing values was not strictly enforced in repeated measures or mixed ANOVA, depending on the input data format (if missing values were explicit or implicit). Pingouin 0.5.0 now uses a stricter complete-case analysis regardless of the input data format, which is the same behavior as JASP. Practically, this may lead to a decrease in the degrees of freedom and an increase in the p-values. We therefore highly recommend that you double check any results obtained with these functions. For future analyses, we also highly recommend that you manually deal with missing values (via imputation or listwise removal) before calling Pingouin's functions. | closed | 2021-10-23T17:39:39Z | 2022-06-19T18:13:27Z | https://github.com/raphaelvallat/pingouin/issues/206 | [
"bug :boom:",
"IMPORTANT❗"
] | raphaelvallat | 3 |
reloadware/reloadium | flask | 94 | Reloadium not working properly with Odoo - Ignoring configuration of Project Settings and not able to work with Form tool | ## Describe the bug
I'm trying to use Reloadium when running Odoo, which might to some extent be similar to running it with Django.
When I try to run it, it's incredibly slow, as it loads the whole of Odoo plus my new module's files

Then I try to configure it in the Project Settings, selecting just the folder of my new module and disabling the option "Add sources roots to reloadable paths" (as I have the core of Odoo as sources) as well as the option "Add current working directory to reloadable paths", since otherwise it again loads ALL the files.
If both are disabled and I only load my module, Reloadium doesn't work, as it doesn't find any files to reload (which is wrong, since my breakpoints are in the module configured as a reloadable path).


However, if I enable "Add current working directory to reloadable paths", it loads again ALL the files, and then becomes incredibly slow.

Apart from that, reloadium fails with the utility of Odoo called Form (https://github.com/odoo/odoo/blob/16.0/odoo/tests/common.py#L2094)
When I try to execute it reloadium breaks


## Expected behavior
I would expect Reloadium to be usable for debugging Odoo, reloading just the folders configured in the Project Settings; otherwise it is unusable due to the overhead it produces.
It should also be able to work with any Odoo tool like Form, especially for unit tests.
## Desktop or remote (please complete the following information):
- OS: Ubuntu
- OS version: 20.04
- Reloadium package version: 0.9.10
- PyCharm plugin version: 0.9.5
- Editor: PyCharm Professional
- Python version: 3.9.2
- Python architecture: 64-bit
- Run mode: Debug
| closed | 2023-01-30T09:13:17Z | 2023-04-30T01:09:28Z | https://github.com/reloadware/reloadium/issues/94 | [
"bug"
] | BT-rmartin | 2 |
junyanz/pytorch-CycleGAN-and-pix2pix | pytorch | 1,108 | Using part of the dataset | While downloading data from the source, how can I specify the number of samples? Is there a way to train on only part of the dataset? I am using pix2pix, night2day. | closed | 2020-07-29T15:38:17Z | 2020-07-29T20:02:41Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1108 | [] | erdemarpaci | 1 |
psf/requests | python | 6,021 | CVE-2021-33503: bump up urllib3 version | | Severity | High |
| Identifier | CVE-2021-33503 |
| URL | https://nvd.nist.gov/vuln/detail/CVE-2021-33503 |
| Scanner | gemnasium |
| Message | Uncontrolled Resource Consumption in urllib3 |
| Package | urllib3 1.25.11 |
| Solution | Upgrade to version 1.26.5 or above. |
| Path | requests 2.26.0 > urllib3 1.25.11 |
| File | pipdeptree.json | closed | 2021-12-29T13:21:44Z | 2022-03-29T14:06:03Z | https://github.com/psf/requests/issues/6021 | [] | laurentdelosieresmano | 1 |
howie6879/owllook | asyncio | 82 | How to debug the source code | Which IDE did you use to write the code, and how do you debug the source?
I'm using VS Code, and there is no gunicorn option when debugging. Any guidance would be appreciated. | closed | 2020-03-17T06:47:42Z | 2020-04-13T06:10:01Z | https://github.com/howie6879/owllook/issues/82 | [] | jason163 | 3 |
ludwig-ai/ludwig | data-science | 4,000 | Error running inference on Llama3 model | When I run inference on a Llama3 model finetuned using Ludwig, I keep getting this error:
```
set_cols, feature, missing_value_strategy, computed_fill_value, backend)
1756 logger.warning(
1757 f"DROP_ROW missing value strategy applied. Dropped {len_before_dropped_rows - len_after_dropped_rows} "
1758 f"samples out of {len_before_dropped_rows} from column {feature[COLUMN]}. The rows containing these "
1759 f"samples will ultimately be dropped from the dataset."
1760 )
1761 else:
-> 1762 raise ValueError(f"Invalid missing value strategy {missing_value_strategy}")
ValueError: Invalid missing value strategy fill_with_const
```
Here is my training script:
```
qlora_fine_tuning_config = yaml.safe_load(
"""
model_type: llm
base_model: meta-llama/Meta-Llama-3-8B-Instruct
input_features:
- name: Prompt
type: text
preprocesssing:
max_sequence_length :256
output_features:
- name: Response
type: text
preprocesssing:
max_sequence_length :150
prompt:
template: >-
### Prompt: {Prompt}
### responses :
quantization:
bits: 4
generation:
temperature: 0.1
max_new_tokens: 150
preprocessing:
split:
probabilities:
- 1.0
- 0.0
- 0.0
adapter:
type: lora
trainer:
type: finetune
epochs: 10
batch_size: 1
eval_batch_size: 1
enable_gradient_checkpointing: true
gradient_accumulation_steps: 16
learning_rate: 0.00001
optimizer:
type: paged_adam
params:
eps: 1.e-8
betas:
- 0.9
- 0.999
weight_decay: 0
learning_rate_scheduler:
warmup_fraction: 0.03
reduce_on_plateau: 0
"""
)
new_model = LudwigModel(config=qlora_fine_tuning_config, logging_level=logging.INFO)
results = new_model.train(dataset=train_df)
```
And for inference:
`new_model.predict(_test_df.loc[0:1])`
Here is the full trace:
```
4 def predict(index):
----> 5 test_predictions = new_model.predict(_test_df.loc[index:index])[0]
7 completion = oclient.chat.completions.create(
8 model="gpt-3.5-turbo",
9 temperature = 0.1,
(...)
42 ]
43 )
44 results = completion.choices[0].message.content
File ~/anaconda3/envs/vineeth_10/lib/python3.10/site-packages/ludwig/api.py:1141, in LudwigModel.predict(self, dataset, data_format, split, batch_size, generation_config, skip_save_unprocessed_output, skip_save_predictions, output_directory, return_type, callbacks, **kwargs)
1139 start_time = time.time()
1140 logger.debug("Preprocessing")
-> 1141 dataset, _ = preprocess_for_prediction( # TODO (Connor): Refactor to use self.config_obj
1142 self.config_obj.to_dict(),
1143 dataset=dataset,
1144 training_set_metadata=self.training_set_metadata,
1145 data_format=data_format,
1146 split=split,
1147 include_outputs=False,
1148 backend=self.backend,
1149 callbacks=self.callbacks + (callbacks or []),
1150 )
1152 logger.debug("Predicting")
1153 with self.backend.create_predictor(self.model, batch_size=batch_size) as predictor:
File ~/anaconda3/envs/vineeth_10/lib/python3.10/site-packages/ludwig/data/preprocessing.py:2334, in preprocess_for_prediction(config, dataset, training_set_metadata, data_format, split, include_outputs, backend, callbacks)
2332 training_set, test_set, validation_set, training_set_metadata = processed
2333 else:
-> 2334 processed = data_format_processor.preprocess_for_prediction(
2335 config, dataset, features, preprocessing_params, training_set_metadata, backend, callbacks
2336 )
2337 dataset, training_set_metadata, new_hdf5_fp = processed
2338 training_set_metadata = training_set_metadata.copy()
File ~/anaconda3/envs/vineeth_10/lib/python3.10/site-packages/ludwig/data/preprocessing.py:276, in DataFramePreprocessor.preprocess_for_prediction(config, dataset, features, preprocessing_params, training_set_metadata, backend, callbacks)
273 if isinstance(dataset, pd.DataFrame):
274 dataset = backend.df_engine.from_pandas(dataset)
--> 276 dataset, training_set_metadata = build_dataset(
277 config,
278 dataset,
279 features,
280 preprocessing_params,
281 mode="prediction",
282 metadata=training_set_metadata,
283 backend=backend,
284 callbacks=callbacks,
285 )
286 return dataset, training_set_metadata, None
File ~/anaconda3/envs/vineeth_10/lib/python3.10/site-packages/ludwig/data/preprocessing.py:1271, in build_dataset(config, dataset_df, features, global_preprocessing_parameters, mode, metadata, backend, random_seed, skip_save_processed_input, callbacks)
1269 for feature_config in feature_configs:
1270 preprocessing_parameters = feature_name_to_preprocessing_parameters[feature_config[NAME]]
-> 1271 handle_missing_values(dataset_cols, feature_config, preprocessing_parameters, backend)
1273 # Happens after missing values are handled to avoid NaN casting issues.
1274 logger.debug("cast columns")
File ~/anaconda3/envs/vineeth_10/lib/python3.10/site-packages/ludwig/data/preprocessing.py:1703, in handle_missing_values(dataset_cols, feature, preprocessing_parameters, backend)
1701 missing_value_strategy = preprocessing_parameters["missing_value_strategy"]
1702 computed_fill_value = preprocessing_parameters.get("computed_fill_value")
-> 1703 _handle_missing_values(dataset_cols, feature, missing_value_strategy, computed_fill_value, backend)
File ~/anaconda3/envs/vineeth_10/lib/python3.10/site-packages/ludwig/data/preprocessing.py:1762, in _handle_missing_values(dataset_cols, feature, missing_value_strategy, computed_fill_value, backend)
1756 logger.warning(
1757 f"DROP_ROW missing value strategy applied. Dropped {len_before_dropped_rows - len_after_dropped_rows} "
1758 f"samples out of {len_before_dropped_rows} from column {feature[COLUMN]}. The rows containing these "
1759 f"samples will ultimately be dropped from the dataset."
1760 )
1761 else:
-> 1762 raise ValueError(f"Invalid missing value strategy {missing_value_strategy}")
ValueError: Invalid missing value strategy fill_with_const
```
-3.0
- 0.10.3
| open | 2024-04-24T17:49:22Z | 2024-10-21T18:55:16Z | https://github.com/ludwig-ai/ludwig/issues/4000 | [
"bug",
"llm"
] | vinven7 | 5 |
dynaconf/dynaconf | django | 1,221 | [RFC] support for specifying specific secret versions via the Vault API | Vault secrets can be accessed by the version. The current implementation brings the latest version of the secret.
Sometimes we would like to set a new value for the secret (which creates a new version in Vault), but we want the "production" system to continue to use the old value/version.
The Vault versions can differ between environments, so it would be nice to define the desired versions via environment variables:
```
VAULT_VERSIONS_FOR_DYNACONF='{"default": 17, "dev": 11, "production": 10, "staging": 12}'
```
Then you can use the per-environment secret version in the Vault call:
```
version = obj.VAULT_VERSIONS_FOR_DYNACONF.get(env)
version_query = f"?version={version}" if version else ""
path = "/".join([obj.VAULT_PATH_FOR_DYNACONF, env]) + version_query
```
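Putting the proposal together, a minimal runnable sketch (the env var name is taken from this proposal; the base path `secret/data/myapp` is hypothetical):

```python
import json
import os

# Assume the per-environment versions arrive via the proposed env var.
os.environ.setdefault(
    "VAULT_VERSIONS_FOR_DYNACONF",
    '{"default": 17, "dev": 11, "production": 10, "staging": 12}',
)
versions = json.loads(os.environ["VAULT_VERSIONS_FOR_DYNACONF"])

def vault_path(env: str, base: str = "secret/data/myapp") -> str:
    # Fall back to the "default" version when the environment is not listed.
    version = versions.get(env, versions.get("default"))
    version_query = f"?version={version}" if version else ""
    return "/".join([base, env]) + version_query

print(vault_path("production"))  # → secret/data/myapp/production?version=10
```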
Thanks in advance
Dmitry | open | 2025-01-09T10:41:55Z | 2025-01-09T10:41:55Z | https://github.com/dynaconf/dynaconf/issues/1221 | [
"Not a Bug",
"RFC"
] | yukels | 0 |
neuml/txtai | nlp | 851 | Add notebook that analyzes NeuML LinkedIn posts | Add notebook that analyzes NeuML LinkedIn posts with Graphs and Agents. | closed | 2025-01-12T14:58:28Z | 2025-01-12T14:59:37Z | https://github.com/neuml/txtai/issues/851 | [] | davidmezzetti | 0 |
Kludex/mangum | fastapi | 101 | Document an example project that uses WebSockets | Probably will use Serverless Framework for this in a separate repo. Not sure yet. | closed | 2020-05-04T08:18:20Z | 2020-06-28T01:52:35Z | https://github.com/Kludex/mangum/issues/101 | [
"docs",
"websockets"
] | jordaneremieff | 1 |
gradio-app/gradio | deep-learning | 10,727 | Feature request: Better alignment of components on same row | - [ ] I have searched to see if a similar issue already exists.
**Is your feature request related to a problem? Please describe.**
When components on the same row have descriptions of significantly different lengths, the interactive parts of the components are not aligned as expected. An image speaks a thousand words:

I would like:

| open | 2025-03-04T20:42:11Z | 2025-03-04T20:59:50Z | https://github.com/gradio-app/gradio/issues/10727 | [
"enhancement"
] | JackismyShephard | 0 |
Sanster/IOPaint | pytorch | 558 | How to debug the code? | Hi, I recently managed to run the Lama IOPaint code locally, and I'd like to use a debugger to learn how these plugin modules are invoked, so I can implement custom image pre-/post-processing and calls to my own model. However, no matter how I change the parameters, nothing seems to take effect. How should I debug this plugin code?
Command executed:
`iopaint start --model=lama --device=cuda --port=8080`
| closed | 2024-08-13T06:26:31Z | 2024-08-14T02:49:49Z | https://github.com/Sanster/IOPaint/issues/558 | [] | xuyu666 | 0 |
cvat-ai/cvat | tensorflow | 8,645 | Support for Yolo tracking format | ### Actions before raising this issue
- [X] I searched the existing issues and did not find anything similar.
- [X] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Is your feature request related to a problem? Please describe.
To my knowledge, it is not possible to import/export tracks in the yolo format.
Ultralytics Yolo supports tracking of objects and exports those detections / tracks in the following format:
`<class_id> <center_x> <center_y> <width> <height> <track_id>`
However, when importing such formatted label files, the following error is raised:
```
cvat.apps.dataset_manager.bindings.CvatImportError: Failed to import item ('frame_000342', 'train') annotation: Unexpected field count 6 in the bbox description. Expected 5 fields (label, xc, yc, w, h).
```
Additionally, when exporting CVAT track annotations in the YOLO format, the track id is lost as well.
### Describe the solution you'd like
Extend the import functionality to accept YOLO coordinates, with an optional track id as the last number per line.
Extend the export with an additional "YOLOv8 Tracking" format which includes the track id per bounding box as the last entry per line.
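As a sketch of what an accepting importer could do with the optional sixth field (illustrative only, not CVAT code):

```python
def parse_yolo_line(line: str):
    """Parse one YOLO label line; the sixth field (track id) is optional."""
    parts = line.split()
    if len(parts) not in (5, 6):
        raise ValueError(f"expected 5 or 6 fields, got {len(parts)}")
    class_id = int(parts[0])
    xc, yc, w, h = map(float, parts[1:5])
    track_id = int(parts[5]) if len(parts) == 6 else None
    return class_id, (xc, yc, w, h), track_id

print(parse_yolo_line("0 0.5 0.5 0.2 0.3 7"))
# → (0, (0.5, 0.5, 0.2, 0.3), 7)
```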
### Describe alternatives you've considered
Convert the yolo tracking results into a different tracking format which is supported by CVAT.
### Additional context
_No response_ | open | 2024-11-05T12:49:25Z | 2024-11-05T13:19:02Z | https://github.com/cvat-ai/cvat/issues/8645 | [
"enhancement"
] | gboeer | 0 |
PablocFonseca/streamlit-aggrid | streamlit | 87 | Usability of Custom CSS | Hi,
First, thanks for putting all the effort in this awesome project!
Recently I needed to inject custom CSS into AgGrid. I saw that there is a setting for this - nice! I figured out that I just need to write the CSS, and I would be done:
`AgGrid(df, ..., custom_css=".some_class {some-property: 0 1 2 3;} .other_class {some-property: 3 4 5;}")`
Not so fast! This didn't work. So, I have looked into the `AgGrid()` docstring that this has to be a dict. Still no idea what should I put into the dict. Then, I had to search through the frontend code to find this:
```TypeScript
type CSSDict = {[key: string]: {[key: string]: string}}
function getCSS(styles: CSSDict): string {
var css = [];
for (let selector in styles) {
let style = selector + " {";
for (let prop in styles[selector]) {
style += prop + ": " + styles[selector][prop] + ";";
}
style += "}";
css.push(style);
}
return css.join("\n");
}
function addCustomCSS(custom_css: CSSDict): void {
var css = getCSS(custom_css)
var styleSheet = document.createElement("style")
styleSheet.type = "text/css"
styleSheet.innerText = css
console.log(`Adding cutom css: `, css)
document.head.appendChild(styleSheet)
}
```
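In Python terms, this means `custom_css` must map selectors to property dicts. A small sketch mirroring `getCSS` (the `get_css` helper and the class names here are mine, for illustration):

```python
# Python mirror of the frontend's getCSS(), illustrating the expected
# CSSDict shape: {selector: {property: value}}.
def get_css(styles):
    css = []
    for selector, props in styles.items():
        body = "".join(f"{prop}: {value};" for prop, value in props.items())
        css.append(selector + " {" + body + "}")
    return "\n".join(css)

custom_css = {
    ".some_class": {"some-property": "0 1 2 3"},
    ".other_class": {"some-property": "3 4 5"},
}
print(get_css(custom_css))
# → .some_class {some-property: 0 1 2 3;}
#   .other_class {some-property: 3 4 5;}
```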
Adding support for plain strings in custom_css has 3 benefits:
* You don't have to document the CSSDict format, since everyone already knows how to write CSS as a string.
* Users can just write CSS, without the need to learn a new format. Python has multiline strings `''' '''`, so it can be nicely formatted and retain all the readability benefits of your dict-based syntax.
* With larger stylesheets, users can load their custom_css from a .css file in their app and they can benefit from IDE support when writing CSS.
Many thanks,
Franek | closed | 2022-05-08T09:46:55Z | 2024-04-04T17:53:20Z | https://github.com/PablocFonseca/streamlit-aggrid/issues/87 | [] | franekp | 1 |
marshmallow-code/apispec | rest-api | 797 | Support with marshmallow.fields.Enum | [marshmallow.fields.Enum](https://marshmallow.readthedocs.io/en/stable/marshmallow.fields.html#marshmallow.fields.Enum) is now available in marshmallow.
Could apispec support it? Thanks! | closed | 2022-09-27T06:24:02Z | 2025-01-21T18:22:43Z | https://github.com/marshmallow-code/apispec/issues/797 | [] | vba34520 | 1 |
ultralytics/yolov5 | pytorch | 12,489 | Using LeakyReLU/ReLU break the model when exporting to tflite | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and found no similar bug report.
### YOLOv5 Component
Training, Export
### Bug
If you use the leaky relu activation function (or just a relu), specifying it in the .yaml, the training goes well, but the exported tflite model is broken:
Running:
`python3 train.py --data coco.yaml --epochs 50 --weights '' --cfg ./hub/yolov5n-LeakyReLU.yaml --batch-size 204`
where yolov5n-LeakyReLU.yaml is the same as yolov5s-LeakyReLU.yaml, with the difference:
`width_multiple: 0.25 # layer channel multiple`
The performance I get after training is the following; all good:
```
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.206
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.359
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.209
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.106
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.231
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.265
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.211
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.374
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.430
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.240
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.477
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.561
```
After exporting:
`python3 export.py --weights runs/train/exp20/weights/best.pt --include tflite --int8`
and testing the tflite model with:
`python3 val.py --weights runs/train/exp20/weights/best-int8.tflite `
I get this performance:
```
Class Images Instances P R mAP50 mAP50-95: 100%|██████████| 128/128 [00:16<00:00,7.54it/s]
all 128 929 0.104 0.0495 0.00417 0.000997
```
I know quantization should reduce the accuracy, but here it is somehow breaking the network.
Exporting to tflite in fp or onnx doesn't hurt the model.
Any idea what is going on?
In particular, looking at the output I see that the network outputs corresponding to the width and height of the box are always zero after conversion to tflite. The rest of the output seems okay.
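A toy, stdlib-only illustration of one way per-tensor int8 quantization can zero out a small-valued output (the numbers are hypothetical; the actual cause here may differ):

```python
def quantize(x, scale, zero_point=0):
    """Map a float to int8 with a single per-tensor scale (toy model)."""
    q = round(x / scale) + zero_point
    return max(-128, min(127, q))

# Hypothetical: a shared scale sized for a channel spanning roughly [-40, 40]
# collapses a channel whose values live in [0, 0.05] onto a single bucket.
scale = 80 / 255
print(quantize(0.04, scale))   # → 0: the small width/height-like value vanishes
print(quantize(12.5, scale))   # → 40: the larger value survives
```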
### Environment
Yolov5 latest
Ubuntu 22.04
python3.10
Nvidia A10G Driver Version: 535.129.03 CUDA Version: 12.2
tensorflow-cpu==2.15.0
torch==2.1.1
torchvision==0.16.1
### Minimal Reproducible Example
(I did it on a yolov5n, but it is the same on "s")
`python3 train.py --data coco.yaml --epochs 30 --weights '' --cfg ./hub/yolov5s-LeakyReLU.yaml --batch-size 128`
(replace exp20 with your folder)
`python3 export.py --weights runs/train/exp20/weights/best.pt --include tflite --int8`
`python3 val.py --weights runs/train/exp20/weights/best-int8.tflite `
### Additional
_No response_
### Are you willing to submit a PR?
- [x] Yes I'd like to help by submitting a PR! | closed | 2023-12-10T12:01:01Z | 2024-01-14T01:14:53Z | https://github.com/ultralytics/yolov5/issues/12489 | [
"bug",
"Stale"
] | Corallo | 6 |
explosion/spaCy | data-science | 13,595 | Document good practices for caching spaCy models in CI setup | I use spaCy in a Jupyter book which currently downloads multiple spaCy models on every CI run, which wastes time and bandwidth.
The best solution would be to download and cache the models once, and get them restored on subsequent CI runs.
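For illustration, since these models are pip-installable, one pattern (a sketch; the `python -m spacy download` invocation is spaCy's standard CLI, while the stdlib guard is my own) is to skip the download when the package is already importable, and let CI cache pip's cache directory:

```python
import importlib.util
import subprocess
import sys

def ensure_model(name: str) -> None:
    # Download the pip-installed model package only when it is not importable
    # yet; combined with a CI cache of pip's cache dir, repeat runs are cheap.
    if importlib.util.find_spec(name) is None:
        subprocess.check_call([sys.executable, "-m", "spacy", "download", name])

ensure_model("json")  # stdlib stand-in: already importable, so no download
```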
Are there any bits of documentation covering this concern somewhere? I could not find any in the official documentation.
Cheers. | open | 2024-08-13T15:19:09Z | 2024-08-13T15:19:09Z | https://github.com/explosion/spaCy/issues/13595 | [] | ghisvail | 0 |
jina-ai/serve | fastapi | 5,343 | make port argument support multiple ports | Make the `port` argument support multiple ports, and adapt the k8s YAML to create one service per port. | closed | 2022-11-02T11:18:57Z | 2022-11-18T10:04:56Z | https://github.com/jina-ai/serve/issues/5343 | [] | alaeddine-13 | 1 |
CorentinJ/Real-Time-Voice-Cloning | python | 736 | Issue running: python demo_toolbox.py | Hello,
I have satisfied all of my requirements and when I try to run the command
`python demo_toolbox.py`
I get an output that looks something like this:
`ModuleNotFoundError: No module named 'torch'`
I believe this is saying I don't have PyTorch. However, I installed PyTorch a few months ago and have been using programs requiring it since.
When I first installed PyTorch I used this command:
`conda install --yes -c PyTorch pytorch=1.7.1 torchvision cudatoolkit=11.0`
I am doing all of this in PowerShell on Windows 10. Additionally, I am running Python 3.6.8.
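(For reference, a quick stdlib check of which interpreter PowerShell is actually running, and whether that interpreter can see torch — a conda environment mismatch is one possible explanation:)

```python
import importlib.util
import sys

# Which interpreter is actually being used, and can it import torch?
print(sys.executable)
print("torch importable:", importlib.util.find_spec("torch") is not None)
```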
Any thoughts would be great.
| closed | 2021-04-14T06:41:44Z | 2021-04-20T02:56:32Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/736 | [] | alexp-12 | 2 |
litestar-org/polyfactory | pydantic | 26 | Implement an init | ### What?
Similar to other factory boy extensions, I'd expect to be able to simply construct the object and not have to call `.build`, like the basic usage example from the [factory boy docs](https://factoryboy.readthedocs.io/en/stable/introduction.html#basic-usage)
### Behavior I want
```python
from pydantic import BaseModel
from pydantic_factories import ModelFactory
class Person(BaseModel):
id: int
name: str
phone_number: str
class PersonFactory(ModelFactory):
__model__ = Person
# rn I have to do PersonFactory.build()
print(PersonFactory())
#> id=6744 name='wqIOcAeNxRziXJlQEBqr' phone_number='lEJcvhasvbUJWRPvHMez'
```
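For what it's worth, a toy sketch of how the requested behavior could work via a metaclass (purely illustrative — not pydantic-factories code):

```python
class FactoryMeta(type):
    """Toy sketch: calling the factory class delegates to .build()."""
    def __call__(cls, **kwargs):
        return cls.build(**kwargs)

class ToyFactory(metaclass=FactoryMeta):
    @classmethod
    def build(cls, **kwargs):
        # Stand-in for the real build(); kwargs override the stub defaults.
        return {"id": 1, "name": "stub", **kwargs}

print(ToyFactory(name="Ada"))  # → {'id': 1, 'name': 'Ada'}
```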
### Why do I care?
It'd be nice if I could change this in my packages that serialize using Pydantic and not change every downstream test to use `.build`
| closed | 2022-02-08T22:09:01Z | 2022-02-09T08:37:45Z | https://github.com/litestar-org/polyfactory/issues/26 | [] | zschumacher | 1 |
TencentARC/GFPGAN | pytorch | 40 | I can't use GFPGANv1.pth | I already use GFPGANCleanv1-NoCE-C2.pth and it is working great. Please help me with this problem...
Thanks.
python inference_gfpgan.py --upscale 2 --model_path experiments/pretrained_models/GFPGANv1.pth --test_path inputs/whole_imgs --save_root results
```
inference_gfpgan.py:37: UserWarning: The unoptimized RealESRGAN is very slow on CPU. We do not use it. If you really want to use it, please modify the corresponding codes.
warnings.warn('The unoptimized RealESRGAN is very slow on CPU. We do not use it. '
Traceback (most recent call last):
File "inference_gfpgan.py", line 98, in <module>
main()
File "inference_gfpgan.py", line 52, in main
restorer = GFPGANer(
File "C:\Users\Zeus\Downloads\GFPGAN\gfpgan\utils.py", line 65, in __init__
self.gfpgan.load_state_dict(loadnet[keyname], strict=True)
File "C:\Users\Zeus\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 1406, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for GFPGANv1Clean:
Missing key(s) in state_dict: "conv_body_first.weight", "conv_body_first.bias", "conv_body_down.0.conv1.weight", "conv_body_down.0.conv1.bias", "conv_body_down.0.conv2.weight", "conv_body_down.0.conv2.bias", "conv_body_down.0.skip.weight", "conv_body_down.1.conv1.weight", "conv_body_down.1.conv1.bias", "conv_body_down.1.conv2.weight", "conv_body_down.1.conv2.bias", "conv_body_down.1.skip.weight", "conv_body_down.2.conv1.weight", "conv_body_down.2.conv1.bias", "conv_body_down.2.conv2.weight", "conv_body_down.2.conv2.bias", "conv_body_down.2.skip.weight", "conv_body_down.3.conv1.weight", "conv_body_down.3.conv1.bias", "conv_body_down.3.conv2.weight", "conv_body_down.3.conv2.bias", "conv_body_down.3.skip.weight", "conv_body_down.4.conv1.weight", "conv_body_down.4.conv1.bias", "conv_body_down.4.conv2.weight", "conv_body_down.4.conv2.bias", "conv_body_down.4.skip.weight", "conv_body_down.5.conv1.weight", "conv_body_down.5.conv1.bias", "conv_body_down.5.conv2.weight", "conv_body_down.5.conv2.bias", "conv_body_down.5.skip.weight", "conv_body_down.6.conv1.weight", "conv_body_down.6.conv1.bias", "conv_body_down.6.conv2.weight", "conv_body_down.6.conv2.bias", "conv_body_down.6.skip.weight", "final_conv.weight", "final_conv.bias", "conv_body_up.0.conv1.weight", "conv_body_up.0.conv1.bias", "conv_body_up.0.conv2.bias", "conv_body_up.1.conv1.weight", "conv_body_up.1.conv1.bias", "conv_body_up.1.conv2.bias", "conv_body_up.2.conv1.weight", "conv_body_up.2.conv1.bias", "conv_body_up.2.conv2.bias", "conv_body_up.3.conv1.weight", "conv_body_up.3.conv1.bias", "conv_body_up.3.conv2.bias", "conv_body_up.4.conv1.weight", "conv_body_up.4.conv1.bias", "conv_body_up.4.conv2.bias", "conv_body_up.5.conv1.weight", "conv_body_up.5.conv1.bias", "conv_body_up.5.conv2.bias", "conv_body_up.6.conv1.weight", "conv_body_up.6.conv1.bias", "conv_body_up.6.conv2.bias", "stylegan_decoder.style_mlp.9.weight", "stylegan_decoder.style_mlp.9.bias", "stylegan_decoder.style_mlp.11.weight", 
"stylegan_decoder.style_mlp.11.bias", "stylegan_decoder.style_mlp.13.weight", "stylegan_decoder.style_mlp.13.bias", "stylegan_decoder.style_mlp.15.weight", "stylegan_decoder.style_mlp.15.bias", "stylegan_decoder.style_conv1.bias", "stylegan_decoder.style_convs.0.bias", "stylegan_decoder.style_convs.1.bias", "stylegan_decoder.style_convs.2.bias", "stylegan_decoder.style_convs.3.bias", "stylegan_decoder.style_convs.4.bias", "stylegan_decoder.style_convs.5.bias", "stylegan_decoder.style_convs.6.bias", "stylegan_decoder.style_convs.7.bias", "stylegan_decoder.style_convs.8.bias", "stylegan_decoder.style_convs.9.bias", "stylegan_decoder.style_convs.10.bias", "stylegan_decoder.style_convs.11.bias", "stylegan_decoder.style_convs.12.bias", "stylegan_decoder.style_convs.13.bias".
Unexpected key(s) in state_dict: "conv_body_first.0.weight", "conv_body_first.1.bias", "conv_body_down.0.conv1.0.weight", "conv_body_down.0.conv1.1.bias", "conv_body_down.0.conv2.1.weight", "conv_body_down.0.conv2.2.bias", "conv_body_down.0.skip.1.weight", "conv_body_down.1.conv1.0.weight", "conv_body_down.1.conv1.1.bias", "conv_body_down.1.conv2.1.weight", "conv_body_down.1.conv2.2.bias", "conv_body_down.1.skip.1.weight", "conv_body_down.2.conv1.0.weight", "conv_body_down.2.conv1.1.bias", "conv_body_down.2.conv2.1.weight", "conv_body_down.2.conv2.2.bias", "conv_body_down.2.skip.1.weight", "conv_body_down.3.conv1.0.weight", "conv_body_down.3.conv1.1.bias", "conv_body_down.3.conv2.1.weight", "conv_body_down.3.conv2.2.bias", "conv_body_down.3.skip.1.weight", "conv_body_down.4.conv1.0.weight", "conv_body_down.4.conv1.1.bias", "conv_body_down.4.conv2.1.weight", "conv_body_down.4.conv2.2.bias", "conv_body_down.4.skip.1.weight", "conv_body_down.5.conv1.0.weight", "conv_body_down.5.conv1.1.bias", "conv_body_down.5.conv2.1.weight", "conv_body_down.5.conv2.2.bias", "conv_body_down.5.skip.1.weight", "conv_body_down.6.conv1.0.weight", "conv_body_down.6.conv1.1.bias", "conv_body_down.6.conv2.1.weight", "conv_body_down.6.conv2.2.bias", "conv_body_down.6.skip.1.weight", "final_conv.0.weight", "final_conv.1.bias", "conv_body_up.0.conv1.0.weight", "conv_body_up.0.conv1.1.bias", "conv_body_up.0.conv2.activation.bias", "conv_body_up.1.conv1.0.weight", "conv_body_up.1.conv1.1.bias", "conv_body_up.1.conv2.activation.bias", "conv_body_up.2.conv1.0.weight", "conv_body_up.2.conv1.1.bias", "conv_body_up.2.conv2.activation.bias", "conv_body_up.3.conv1.0.weight", "conv_body_up.3.conv1.1.bias", "conv_body_up.3.conv2.activation.bias", "conv_body_up.4.conv1.0.weight", "conv_body_up.4.conv1.1.bias", "conv_body_up.4.conv2.activation.bias", "conv_body_up.5.conv1.0.weight", "conv_body_up.5.conv1.1.bias", "conv_body_up.5.conv2.activation.bias", "conv_body_up.6.conv1.0.weight", 
"conv_body_up.6.conv1.1.bias", "conv_body_up.6.conv2.activation.bias", "stylegan_decoder.style_mlp.2.weight", "stylegan_decoder.style_mlp.2.bias", "stylegan_decoder.style_mlp.4.weight", "stylegan_decoder.style_mlp.4.bias", "stylegan_decoder.style_mlp.6.weight", "stylegan_decoder.style_mlp.6.bias", "stylegan_decoder.style_mlp.8.weight", "stylegan_decoder.style_mlp.8.bias", "stylegan_decoder.style_conv1.activate.bias", "stylegan_decoder.style_convs.0.activate.bias", "stylegan_decoder.style_convs.1.activate.bias", "stylegan_decoder.style_convs.2.activate.bias", "stylegan_decoder.style_convs.3.activate.bias", "stylegan_decoder.style_convs.4.activate.bias", "stylegan_decoder.style_convs.5.activate.bias", "stylegan_decoder.style_convs.6.activate.bias", "stylegan_decoder.style_convs.7.activate.bias", "stylegan_decoder.style_convs.8.activate.bias", "stylegan_decoder.style_convs.9.activate.bias", "stylegan_decoder.style_convs.10.activate.bias", "stylegan_decoder.style_convs.11.activate.bias", "stylegan_decoder.style_convs.12.activate.bias", "stylegan_decoder.style_convs.13.activate.bias".
size mismatch for conv_body_up.3.conv2.weight: copying a param with shape torch.Size([128, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
size mismatch for conv_body_up.3.skip.weight: copying a param with shape torch.Size([128, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 256, 1, 1]).
size mismatch for conv_body_up.4.conv2.weight: copying a param with shape torch.Size([64, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 256, 3, 3]).
size mismatch for conv_body_up.4.skip.weight: copying a param with shape torch.Size([64, 128, 1, 1]) from checkpoint, the shape in current model is torch.Size([128, 256, 1, 1]).
size mismatch for conv_body_up.5.conv2.weight: copying a param with shape torch.Size([32, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 128, 3, 3]).
size mismatch for conv_body_up.5.skip.weight: copying a param with shape torch.Size([32, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 128, 1, 1]).
size mismatch for conv_body_up.6.conv2.weight: copying a param with shape torch.Size([16, 32, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 64, 3, 3]).
size mismatch for conv_body_up.6.skip.weight: copying a param with shape torch.Size([16, 32, 1, 1]) from checkpoint, the shape in current model is torch.Size([32, 64, 1, 1]).
size mismatch for toRGB.3.weight: copying a param with shape torch.Size([3, 128, 1, 1]) from checkpoint, the shape in current model is torch.Size([3, 256, 1, 1]).
size mismatch for toRGB.4.weight: copying a param with shape torch.Size([3, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([3, 128, 1, 1]).
size mismatch for toRGB.5.weight: copying a param with shape torch.Size([3, 32, 1, 1]) from checkpoint, the shape in current model is torch.Size([3, 64, 1, 1]).
size mismatch for toRGB.6.weight: copying a param with shape torch.Size([3, 16, 1, 1]) from checkpoint, the shape in current model is torch.Size([3, 32, 1, 1]).
size mismatch for stylegan_decoder.style_convs.6.modulated_conv.weight: copying a param with shape torch.Size([1, 256, 512, 3, 3]) from checkpoint, the shape in current model is torch.Size([1, 512, 512, 3, 3]).
size mismatch for stylegan_decoder.style_convs.7.modulated_conv.weight: copying a param with shape torch.Size([1, 256, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([1, 512, 512, 3, 3]).
size mismatch for stylegan_decoder.style_convs.7.modulated_conv.modulation.weight: copying a param with shape torch.Size([256, 512]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for stylegan_decoder.style_convs.7.modulated_conv.modulation.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for stylegan_decoder.style_convs.8.modulated_conv.weight: copying a param with shape torch.Size([1, 128, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([1, 256, 512, 3, 3]).
size mismatch for stylegan_decoder.style_convs.8.modulated_conv.modulation.weight: copying a param with shape torch.Size([256, 512]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for stylegan_decoder.style_convs.8.modulated_conv.modulation.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for stylegan_decoder.style_convs.9.modulated_conv.weight: copying a param with shape torch.Size([1, 128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([1, 256, 256, 3, 3]).
size mismatch for stylegan_decoder.style_convs.9.modulated_conv.modulation.weight: copying a param with shape torch.Size([128, 512]) from checkpoint, the shape in current model is torch.Size([256, 512]).
size mismatch for stylegan_decoder.style_convs.9.modulated_conv.modulation.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for stylegan_decoder.style_convs.10.modulated_conv.weight: copying a param with shape torch.Size([1, 64, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([1, 128, 256, 3, 3]).
size mismatch for stylegan_decoder.style_convs.10.modulated_conv.modulation.weight: copying a param with shape torch.Size([128, 512]) from checkpoint, the shape in current model is torch.Size([256, 512]).
size mismatch for stylegan_decoder.style_convs.10.modulated_conv.modulation.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for stylegan_decoder.style_convs.11.modulated_conv.weight: copying a param with shape torch.Size([1, 64, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([1, 128, 128, 3, 3]).
size mismatch for stylegan_decoder.style_convs.11.modulated_conv.modulation.weight: copying a param with shape torch.Size([64, 512]) from checkpoint, the shape in current model is torch.Size([128, 512]).
size mismatch for stylegan_decoder.style_convs.11.modulated_conv.modulation.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for stylegan_decoder.style_convs.12.modulated_conv.weight: copying a param with shape torch.Size([1, 32, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([1, 64, 128, 3, 3]).
size mismatch for stylegan_decoder.style_convs.12.modulated_conv.modulation.weight: copying a param with shape torch.Size([64, 512]) from checkpoint, the shape in current model is torch.Size([128, 512]).
size mismatch for stylegan_decoder.style_convs.12.modulated_conv.modulation.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for stylegan_decoder.style_convs.13.modulated_conv.weight: copying a param with shape torch.Size([1, 32, 32, 3, 3]) from checkpoint, the shape in current model is torch.Size([1, 64, 64, 3, 3]).
size mismatch for stylegan_decoder.style_convs.13.modulated_conv.modulation.weight: copying a param with shape torch.Size([32, 512]) from checkpoint, the shape in current model is torch.Size([64, 512]).
size mismatch for stylegan_decoder.style_convs.13.modulated_conv.modulation.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for stylegan_decoder.to_rgbs.3.modulated_conv.weight: copying a param with shape torch.Size([1, 3, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 3, 512, 1, 1]).
size mismatch for stylegan_decoder.to_rgbs.3.modulated_conv.modulation.weight: copying a param with shape torch.Size([256, 512]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for stylegan_decoder.to_rgbs.3.modulated_conv.modulation.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for stylegan_decoder.to_rgbs.4.modulated_conv.weight: copying a param with shape torch.Size([1, 3, 128, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 3, 256, 1, 1]).
size mismatch for stylegan_decoder.to_rgbs.4.modulated_conv.modulation.weight: copying a param with shape torch.Size([128, 512]) from checkpoint, the shape in current model is torch.Size([256, 512]).
size mismatch for stylegan_decoder.to_rgbs.4.modulated_conv.modulation.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for stylegan_decoder.to_rgbs.5.modulated_conv.weight: copying a param with shape torch.Size([1, 3, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 3, 128, 1, 1]).
size mismatch for stylegan_decoder.to_rgbs.5.modulated_conv.modulation.weight: copying a param with shape torch.Size([64, 512]) from checkpoint, the shape in current model is torch.Size([128, 512]).
size mismatch for stylegan_decoder.to_rgbs.5.modulated_conv.modulation.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for stylegan_decoder.to_rgbs.6.modulated_conv.weight: copying a param with shape torch.Size([1, 3, 32, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 3, 64, 1, 1]).
size mismatch for stylegan_decoder.to_rgbs.6.modulated_conv.modulation.weight: copying a param with shape torch.Size([32, 512]) from checkpoint, the shape in current model is torch.Size([64, 512]).
size mismatch for stylegan_decoder.to_rgbs.6.modulated_conv.modulation.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for condition_scale.3.0.weight: copying a param with shape torch.Size([128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
size mismatch for condition_scale.3.0.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for condition_scale.3.2.weight: copying a param with shape torch.Size([128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
size mismatch for condition_scale.3.2.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for condition_scale.4.0.weight: copying a param with shape torch.Size([64, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]).
size mismatch for condition_scale.4.0.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for condition_scale.4.2.weight: copying a param with shape torch.Size([64, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]).
size mismatch for condition_scale.4.2.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for condition_scale.5.0.weight: copying a param with shape torch.Size([32, 32, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
size mismatch for condition_scale.5.0.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for condition_scale.5.2.weight: copying a param with shape torch.Size([32, 32, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
size mismatch for condition_scale.5.2.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for condition_scale.6.0.weight: copying a param with shape torch.Size([16, 16, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
size mismatch for condition_scale.6.0.bias: copying a param with shape torch.Size([16]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for condition_scale.6.2.weight: copying a param with shape torch.Size([16, 16, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
size mismatch for condition_scale.6.2.bias: copying a param with shape torch.Size([16]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for condition_shift.3.0.weight: copying a param with shape torch.Size([128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
size mismatch for condition_shift.3.0.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for condition_shift.3.2.weight: copying a param with shape torch.Size([128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
size mismatch for condition_shift.3.2.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for condition_shift.4.0.weight: copying a param with shape torch.Size([64, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]).
size mismatch for condition_shift.4.0.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for condition_shift.4.2.weight: copying a param with shape torch.Size([64, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]).
size mismatch for condition_shift.4.2.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for condition_shift.5.0.weight: copying a param with shape torch.Size([32, 32, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
size mismatch for condition_shift.5.0.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for condition_shift.5.2.weight: copying a param with shape torch.Size([32, 32, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
size mismatch for condition_shift.5.2.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for condition_shift.6.0.weight: copying a param with shape torch.Size([16, 16, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
size mismatch for condition_shift.6.0.bias: copying a param with shape torch.Size([16]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for condition_shift.6.2.weight: copying a param with shape torch.Size([16, 16, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
size mismatch for condition_shift.6.2.bias: copying a param with shape torch.Size([16]) from checkpoint, the shape in current model is torch.Size([32]).
``` | closed | 2021-08-12T02:39:47Z | 2021-08-13T12:09:17Z | https://github.com/TencentARC/GFPGAN/issues/40 | [] | paconaranjo | 2 |
man-group/arctic | pandas | 340 | initialize_library can't create a collection to persist data | #### Arctic Version
1.40.0
#### Arctic Store
VERSION_STORE
#### Platform and version
python 3.6 and IDE is PyCharm 2016.3.2
#### Description of problem and/or code sample that reproduces the issue
I followed demo.py to create my project. When I run the project, it throws an error:
`<class 'pymongo.errors.OperationFailure'> stats [Collection [arctic_NASDAQ.stock] not found.], retrying 3`
My code is simple:
```
from arctic.arctic import Arctic  # import added; it was implied in the original snippet

store = Arctic('localhost')
store.initialize_library('NASDAQ.stock')
library = store['NASDAQ.stock']
```
Later, I found that the library (db) arctic_NASDAQ does not include the collection named "stock".
When I create the "stock" collection in arctic_NASDAQ using a MongoDB client and run the project again, there are no errors anymore.
Why can't `store.initialize_library('NASDAQ.stock')` create the collection named "stock" (even though it already creates collections like stock.ARCTIC, stock.changes, stock.versions, etc.)?
Are there any errors in my code, or am I using Arctic in a wrong way?
For now, I have found a workaround:
```
import pandas as pd
from arctic.arctic import Arctic
from pymongo import MongoClient

store = Arctic('localhost')
store.initialize_library('NASDAQ.stock')
library = store['NASDAQ.stock']

# use pymongo to create the "stock" collection
client = MongoClient('localhost', 27017)
db = client['arctic_NASDAQ']
# check whether the "stock" collection has already been created
if 'stock' not in db.collection_names():
    db.create_collection('stock')
```
But I think it is probably my fault for using Arctic in a wrong way. How can I make it correct?
Any information will help. Thank you guys!
| closed | 2017-03-26T06:46:03Z | 2017-06-01T18:48:23Z | https://github.com/man-group/arctic/issues/340 | [] | bai343901438 | 5 |
mars-project/mars | numpy | 2,682 | [BUG] Optimization that compacts multiple filters into `eval` generates unexpected node in graph | <!--
Thank you for your contribution!
Please review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue.
-->
**Describe the bug**
Optimization that compacts multiple filters into `eval` generates an unexpected node in the graph.
**To Reproduce**
To help us reproducing this bug, please provide information below:
1. Your Python version
2. The version of Mars you use
3. Versions of crucial packages, such as numpy, scipy and pandas
4. Full stack of the error.
5. Minimized code to reproduce the error.
```python
@enter_mode(build=True)
def test_arithmetic_query(setup):
    df1 = md.DataFrame(raw, chunk_size=10)
    df2 = md.DataFrame(raw2, chunk_size=10)
    df3 = df1.merge(df2, on='A', suffixes=('', '_'))
    df3['K'] = df4 = df3["A"] * (1 - df3["B"])
    graph = TileableGraph([df3.data])
    next(TileableGraphBuilder(graph).build())
    records = optimize(graph)
    opt_df4 = records.get_optimization_result(df4.data)
    assert opt_df4.op.expr == "(`A`) * ((1) - (`B`))"
    assert len(graph) == 5  # for now len(graph) is 6
    assert len([n for n in graph if isinstance(n.op, DataFrameEval)]) == 1  # and 2 evals exist
```
| closed | 2022-02-07T10:28:18Z | 2022-02-09T02:04:49Z | https://github.com/mars-project/mars/issues/2682 | [
"type: bug",
"mod: dataframe",
"task: medium"
] | qinxuye | 0 |
TencentARC/GFPGAN | pytorch | 441 | Could you offer the new train option file for v1.4? I have some questions about the degradation settings in the new version... | open | 2023-09-11T03:01:45Z | 2023-09-13T03:55:39Z | https://github.com/TencentARC/GFPGAN/issues/441 | [] | codinker | 1 |
|
marshmallow-code/marshmallow-sqlalchemy | sqlalchemy | 51 | Expand docs clarifying the benefits of using `field_for()` rather than typical `fields.Str()` | I'm still trying to understand the benefits of using marshmallow-sqlalchemy above what marshmallow already gives me.
The obvious one is auto-generation of fields, but for my schemas, most of the fields require additional arguments such as `dump_only` or `required`, so this doesn't add much for me.
I checked the docs, but couldn't find much. However, I was just reading the code and noticed that a length validator is automatically included when sqlalchemy has a length constraint on the underlying db column. Similarly, https://github.com/marshmallow-code/marshmallow-sqlalchemy/issues/47#issuecomment-164083869 mentions that fields that aren't allowed to be null have marshmallow `required` added.
Both are clever optimizations--and I think it'd be worth mentioning in the docs.
Ultimately, it'd be nice if there were a clear set of 'here are the benefits of this extension over and above vanilla marshmallow' as well as 'here's what using `field_for(column)` provides over and above the typical `fields.Str()` or `fields.Integer()`'.
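To make the 'over and above' concrete for the docs, the point of `field_for(column)` is that it derives field arguments from column metadata instead of making you repeat them. Conceptually it behaves something like the toy sketch below (my own illustration of the idea, not the real implementation; the kwarg names here are made up):

```python
def kwargs_from_column(length=None, nullable=True, default=None):
    """Toy model of what field_for() reads off a SQLAlchemy Column."""
    kwargs = {}
    if length is not None:
        # the real code attaches a marshmallow Length(max=...) validator
        kwargs["max_length"] = length
    if not nullable and default is None:
        # NOT NULL with no default becomes required=True on the field
        kwargs["required"] = True
    return kwargs

# e.g. name = Column(String(100), nullable=False) would yield:
print(kwargs_from_column(length=100, nullable=False))
# {'max_length': 100, 'required': True}
```

With a plain `fields.Str()` you would have to restate the length limit and requiredness by hand and keep them in sync with the model.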
| closed | 2015-12-18T11:31:37Z | 2020-02-09T21:21:40Z | https://github.com/marshmallow-code/marshmallow-sqlalchemy/issues/51 | [] | jeffwidman | 2 |
django-import-export/django-import-export | django | 1,096 | Admin import action - TypeError: 'str' object is not callable | I have a fairly basic setup that appears to be failing on clicking the Import action from /admin.
admin.py:
```
from django.contrib import admin
from import_export import resources
from import_export.admin import ImportMixin

from .models import MarketData  # this import was missing from the snippet


class MarketDataResource(resources.ModelResource):
    class Meta:
        model = MarketData


@admin.register(MarketData)
class MarketDataAdmin(ImportMixin, admin.ModelAdmin):
    resource_class = 'MarketDataResource'
```
models.py:
```
from django.db import models


class MarketData(models.Model):
    name = models.CharField('Market data stream name', max_length=100)

    def __str__(self):
        return self.name
```
```
django_1 | 172.22.0.1 - - [10/Mar/2020 12:29:03] "GET /admin/users/marketdata/import/ HTTP/1.1" 500 -
django_1 | Traceback (most recent call last):
django_1 | File "/usr/local/lib/python3.7/dist-packages/django/contrib/staticfiles/handlers.py", line 65, in __call__
django_1 | return self.application(environ, start_response)
django_1 | File "/usr/local/lib/python3.7/dist-packages/django/core/handlers/wsgi.py", line 141, in __call__
django_1 | response = self.get_response(request)
django_1 | File "/usr/local/lib/python3.7/dist-packages/django/core/handlers/base.py", line 75, in get_response
django_1 | response = self._middleware_chain(request)
django_1 | File "/usr/local/lib/python3.7/dist-packages/django/core/handlers/exception.py", line 36, in inner
django_1 | response = response_for_exception(request, exc)
django_1 | File "/usr/local/lib/python3.7/dist-packages/django/core/handlers/exception.py", line 90, in response_for_exception
django_1 | response = handle_uncaught_exception(request, get_resolver(get_urlconf()), sys.exc_info())
django_1 | File "/usr/local/lib/python3.7/dist-packages/django/core/handlers/exception.py", line 125, in handle_uncaught_exception
django_1 | return debug.technical_500_response(request, *exc_info)
django_1 | File "/usr/local/lib/python3.7/dist-packages/django_extensions/management/technical_response.py", line 37, in null_technical_500_response
django_1 | six.reraise(exc_type, exc_value, tb)
django_1 | File "/usr/local/lib/python3.7/dist-packages/six.py", line 702, in reraise
django_1 | raise value.with_traceback(tb)
django_1 | File "/usr/local/lib/python3.7/dist-packages/django/core/handlers/exception.py", line 34, in inner
django_1 | response = get_response(request)
django_1 | File "/usr/local/lib/python3.7/dist-packages/django/core/handlers/base.py", line 115, in _get_response
django_1 | response = self.process_exception_by_middleware(e, request)
django_1 | File "/usr/local/lib/python3.7/dist-packages/django/core/handlers/base.py", line 113, in _get_response
django_1 | response = wrapped_callback(request, *callback_args, **callback_kwargs)
django_1 | File "/usr/lib/python3.7/contextlib.py", line 74, in inner
django_1 | return func(*args, **kwds)
django_1 | File "/usr/local/lib/python3.7/dist-packages/django/utils/decorators.py", line 142, in _wrapped_view
django_1 | response = view_func(request, *args, **kwargs)
django_1 | File "/usr/local/lib/python3.7/dist-packages/django/views/decorators/cache.py", line 44, in _wrapped_view_func
django_1 | response = view_func(request, *args, **kwargs)
django_1 | File "/usr/local/lib/python3.7/dist-packages/django/contrib/admin/sites.py", line 223, in inner
django_1 | return view(request, *args, **kwargs)
django_1 | File "/usr/local/lib/python3.7/dist-packages/import_export/admin.py", line 327, in import_action
django_1 | resource = self.get_import_resource_class()(**res_kwargs)
django_1 | TypeError: 'str' object is not callable
```
Does anyone happen to have any clues on how to fix this?
Using version 2.0.2.
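For anyone who lands here: the traceback ends at `self.get_import_resource_class()(**res_kwargs)`, and in the admin.py above `resource_class` holds the *string* `'MarketDataResource'`, so import-export ends up calling a `str`. A minimal Django-free sketch of the difference (the helper name below is mine):

```python
class MarketDataResource:
    """Stand-in for the real resource class."""

def get_import_resource_class_result(resource_class):
    # import-export effectively does resource_class(**kwargs); a str is not callable
    return resource_class()

try:
    get_import_resource_class_result('MarketDataResource')  # quoted, as in admin.py above
except TypeError as exc:
    print(exc)  # 'str' object is not callable

obj = get_import_resource_class_result(MarketDataResource)  # pass the class object
print(type(obj).__name__)  # MarketDataResource
```

So the fix should simply be `resource_class = MarketDataResource` (no quotes).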
| closed | 2020-03-10T12:33:27Z | 2020-03-10T21:51:59Z | https://github.com/django-import-export/django-import-export/issues/1096 | [] | aerospatiale | 1 |
PeterL1n/RobustVideoMatting | computer-vision | 137 | How does this algorithm handle dynamic backgrounds? In my tests it does not seem to work well; could you explain why, or how to improve it? | open | 2022-01-26T10:09:11Z | 2022-01-26T10:09:11Z | https://github.com/PeterL1n/RobustVideoMatting/issues/137 | [] | PFC-star | 0 |
|
KaiyangZhou/deep-person-reid | computer-vision | 587 | Dataset Used for Training | Hello,
Can you provide the list of datasets used during training? Were the datasets listed [here](https://github.com/KaiyangZhou/deep-person-reid?tab=readme-ov-file#datasets) the ones used?
davidteather/TikTok-Api | api | 1,116 | cause of "EmptyResponseException: None -> TikTok returned an empty response" error |
Does anyone know what the cause of the ultra-common "EmptyResponseException: None -> TikTok returned an empty response" error is?
This article states that "TikTok's free APIs have usage restrictions. The commercial content API allows a maximum of 600 requests per day."
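If it does turn out to be rate limiting, I assume the usual workaround is to retry with exponential backoff. A generic sketch (the exception class below is a stand-in I wrote, not TikTok-Api's actual `EmptyResponseException`):

```python
import random
import time

class EmptyResponseError(Exception):
    """Stand-in for TikTok-Api's EmptyResponseException."""

def with_backoff(fn, tries=5, base=1.0, cap=60.0):
    """Retry fn() with exponential backoff plus jitter."""
    for attempt in range(tries):
        try:
            return fn()
        except EmptyResponseError:
            if attempt == tries - 1:
                raise  # give up after the last attempt
            delay = min(cap, base * 2 ** attempt) * random.uniform(0.5, 1.5)
            time.sleep(delay)

calls = {"n": 0}
def flaky_fetch():
    calls["n"] += 1
    if calls["n"] < 3:
        raise EmptyResponseError("TikTok returned an empty response")
    return "payload"

print(with_backoff(flaky_fetch, base=0.01))  # payload
```

Swapping in the real exception and the real API call would make this usable, though whether backoff actually helps depends on whether the empty responses are in fact rate-limit related.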
Is the "EmptyResponseException: None -> TikTok returned an empty response" error caused by a rate limit? | open | 2024-02-15T22:43:37Z | 2024-04-01T19:46:29Z | https://github.com/davidteather/TikTok-Api/issues/1116 | [
"bug"
] | calvin-walters | 4 |
marcomusy/vedo | numpy | 154 | vectorized/parallel version of IntersectWithLine for ray casting | Hi @marcomusy, I was checking on your example [here](https://stackoverflow.com/a/57092022) and I was wondering whether it is possible to apply ray casting with multiple rays at once, without a `for` loop. For example, if I have an origin point at `p1: [0,0,0]` and multiple rays in different directions `pnts: [[0,0,1], [0.25, 1, 0.5], [0.5, 1, 0.25]]`, how can I get the intersection points with a surface all at once?
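For context, this is the kind of computation I mean, done for several rays in one go: a pure-Python Moller-Trumbore sketch over a toy triangle (my own code, not vedo's API; for real meshes I believe libraries like trimesh accept whole arrays of ray origins and directions):

```python
def ray_triangle(orig, d, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore: hit point of ray orig + t*d with triangle (v0, v1, v2), or None."""
    def cross(a, b):
        return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    e1 = [v1[i] - v0[i] for i in range(3)]
    e2 = [v2[i] - v0[i] for i in range(3)]
    p = cross(d, e2)
    det = dot(e1, p)
    if abs(det) < eps:
        return None                      # ray parallel to the triangle plane
    t_vec = [orig[i] - v0[i] for i in range(3)]
    u = dot(t_vec, p) / det
    if u < 0 or u > 1:
        return None
    q = cross(t_vec, e1)
    v = dot(d, q) / det
    if v < 0 or u + v > 1:
        return None
    t = dot(e2, q) / det
    if t < 0:
        return None                      # intersection behind the origin
    return [orig[i] + t * d[i] for i in range(3)]

# one origin, many directions, no per-ray intersectWithLine() call
p1 = [0.0, 0.0, 0.0]
dirs = [[0.0, 0.0, 1.0], [0.25, 1.0, 0.5], [0.5, 1.0, 0.25]]
tri = ([-10.0, -10.0, 1.0], [10.0, -10.0, 1.0], [0.0, 10.0, 1.0])  # lies in the plane z=1
hits = [ray_triangle(p1, d, *tri) for d in dirs]
print(hits)  # all three rays hit the z=1 plane inside the triangle
```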
I was also checking whether I could maybe use `intersectWithLine()` with `numpy.vectorize()` or `numpy.apply_along_axis()`, but I am not sure how to do it. | closed | 2020-06-03T18:56:07Z | 2020-06-06T00:07:06Z | https://github.com/marcomusy/vedo/issues/154 | [] | ttsesm | 10 |
hankcs/HanLP | nlp | 699 | word2vector accuracy test, seemingly no different from the C version | <!--
The notes and version number are required, otherwise there will be no reply. If you want a quick reply, please fill in the template carefully. Thanks for your cooperation.
-->
## Notes
Please confirm the following:
* I have carefully read the following documents and did not find an answer:
 - [Home page documentation](https://github.com/hankcs/HanLP)
 - [wiki](https://github.com/hankcs/HanLP/wiki)
 - [FAQ](https://github.com/hankcs/HanLP/wiki/FAQ)
* I have searched for my question via [Google](https://www.google.com/#newwindow=1&q=HanLP) and the [issue search](https://github.com/hankcs/HanLP/issues), and did not find an answer either.
* I understand that the open-source community is a free community gathered out of shared interest and assumes no responsibility or obligation. I will speak politely and thank everyone who helps me.
* [x] I type an x inside these brackets to confirm the items above.
## Version
<!-- For releases, give the jar file name without its extension; for the GitHub repo version, state whether it is the master or portable branch -->
The current latest version is: 1.5.2
The version I am using is: 1.5.2
<!-- The items above are required; below you may write freely -->
## My question
<!-- Please describe the problem in detail; the more detail, the more likely it is to be solved -->
1. Parameter configuration of word2vector in HanLP
2. For the C version, accuracy is 10% lower than your reported test results
3. I tested several implementations of word2vector and found the accuracy differences are not large
* On 1: in the source code, as soon as the cbow or hs argument is found it is set to true, regardless of whether its value is 0 or 1. So when I tested hs=0, HanLP was actually using hierarchical softmax while the C version was not. In [《word2vec原理推导与代码分析》](http://www.hankcs.com/nlp/word2vec.html) the parameters look identical, but the actual training process differs; I don't know whether this is what causes the fairly large accuracy gap. I tested HanLP's accuracy both with hs=1 and without passing the hs argument at all.
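To illustrate the parsing issue in isolation: if the argument parser only checks whether `-hs` is present, then `-hs 0` enables hierarchical softmax anyway. A toy comparison (my own sketch, not HanLP's actual code):

```python
def parse_flag_presence(argv, name):
    """Buggy: -hs 0 still turns the feature on, because only presence is checked."""
    return name in argv

def parse_flag_value(argv, name, default=False):
    """Correct: respect the 0/1 value that follows the flag."""
    if name not in argv:
        return default
    i = argv.index(name)
    return i + 1 < len(argv) and argv[i + 1] == '1'

argv = ['-cbow', '1', '-hs', '0', '-negative', '25']
print(parse_flag_presence(argv, '-hs'))  # True  (hierarchical softmax wrongly enabled)
print(parse_flag_value(argv, '-hs'))     # False (what -hs 0 should mean)
```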
* On 2: for the C version, training was done with the C code and accuracy was computed with gensim. I have read the source and run the C accuracy tool; the two results agree, so that part is fine, but gensim is faster and its log is clearer, so I ran the gensim one.
Here are the test results, about 10% lower than the C numbers in [《Accuracy rate seems to be 10% lower than the original version》](https://github.com/kojisekig/word2vec-lucene/issues/21), and I don't know why:
./word2vec -train text8 -output vectors.bin -cbow 1 -size 200 -window 8 -negative 25 -hs 0 -sample 1e-4 -threads 8 -binary 0 -iter 15
2017-11-28 17:29:30,471 : INFO : loading projection weights from E:/data/word2vec/text8.google_c.word2vec.txt_1
2017-11-28 17:29:42,375 : INFO : loaded (71291L, 200L) matrix from E:/data/word2vec/text8.google_c.word2vec.txt_1
2017-11-28 17:29:42,436 : INFO : precomputing L2-norms of word weight vectors
2017-11-28 17:29:46,578 : INFO : capital-common-countries: 77.5% (392/506)
2017-11-28 17:30:15,301 : INFO : capital-world: 45.6% (1626/3564)
2017-11-28 17:30:20,082 : INFO : currency: 19.5% (116/596)
2017-11-28 17:30:38,799 : INFO : city-in-state: 41.2% (959/2330)
2017-11-28 17:30:42,157 : INFO : family: 61.7% (259/420)
2017-11-28 17:30:50,121 : INFO : gram1-adjective-to-adverb: 13.8% (137/992)
2017-11-28 17:30:56,214 : INFO : gram2-opposite: 13.1% (99/756)
2017-11-28 17:31:07,010 : INFO : gram3-comparative: 60.6% (807/1332)
2017-11-28 17:31:14,960 : INFO : gram4-superlative: 25.0% (248/992)
2017-11-28 17:31:23,447 : INFO : gram5-present-participle: 38.6% (408/1056)
2017-11-28 17:31:35,607 : INFO : gram6-nationality-adjective: 77.6% (1181/1521)
2017-11-28 17:31:48,147 : INFO : gram7-past-tense: 34.8% (543/1560)
2017-11-28 17:31:58,815 : INFO : gram8-plural: 49.5% (659/1332)
2017-11-28 17:32:05,812 : INFO : gram9-plural-verbs: 30.8% (268/870)
2017-11-28 17:32:05,812 : INFO : total: 43.2% (7702/17827)
* On 3: I tested Google's C version, gensim, HanLP, and deeplearning4j; except that for deeplearning4j I did not test hs=0, everything was tested. The statistics show that accuracy is somewhat worse with hierarchical softmax, which I guess is because the data is small and too sparse. With hs=0 every implementation reaches roughly 43%; with hs=1 roughly 35%.
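For reference, every percentage in the logs below comes from the questions-words analogy benchmark: for each a : b :: c : ?, the predicted word is the vocabulary entry whose vector is most cosine-similar to vec(b) - vec(a) + vec(c), with a, b, c excluded. A toy sketch of that scoring (hand-made 2-d vectors, my own code):

```python
import math

def cos(a, b):
    """Cosine similarity of two dense vectors."""
    num = sum(x * y for x, y in zip(a, b))
    return num / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def analogy(vecs, a, b, c):
    """a : b :: c : ?  ->  vocabulary word closest to vec(b) - vec(a) + vec(c)."""
    target = [vecs[b][i] - vecs[a][i] + vecs[c][i] for i in range(len(vecs[a]))]
    cands = {w: v for w, v in vecs.items() if w not in (a, b, c)}
    return max(cands, key=lambda w: cos(cands[w], target))

vecs = {'man': [1, 0], 'king': [2, 0], 'woman': [1, 1],
        'queen': [2, 1], 'apple': [0, 3], 'banana': [0.5, 3]}
print(analogy(vecs, 'man', 'king', 'woman'))  # queen
```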
:gensim
model = word2vec.Word2Vec(sentences, size=200, window=8, negative=25, hs=1, sample=0.0001, workers=8, iter=15)
2017-11-29 11:49:46,647 : INFO : loading projection weights from E:/data/word2vec/text8.gensim.word2vec.txt
2017-11-29 11:50:00,520 : INFO : loaded (71290L, 200L) matrix from E:/data/word2vec/text8.gensim.word2vec.txt
2017-11-29 11:50:00,599 : INFO : precomputing L2-norms of word weight vectors
2017-11-29 11:50:04,786 : INFO : capital-common-countries: 76.5% (387/506)
2017-11-29 11:50:33,871 : INFO : capital-world: 37.9% (1349/3564)
2017-11-29 11:50:38,687 : INFO : currency: 7.0% (42/596)
2017-11-29 11:50:57,526 : INFO : city-in-state: 40.4% (942/2330)
2017-11-29 11:51:01,313 : INFO : family: 47.4% (199/420)
2017-11-29 11:51:09,776 : INFO : gram1-adjective-to-adverb: 10.8% (107/992)
2017-11-29 11:51:16,038 : INFO : gram2-opposite: 9.0% (68/756)
2017-11-29 11:51:26,976 : INFO : gram3-comparative: 51.4% (685/1332)
2017-11-29 11:51:34,859 : INFO : gram4-superlative: 19.8% (196/992)
2017-11-29 11:51:43,236 : INFO : gram5-present-participle: 25.5% (269/1056)
2017-11-29 11:51:55,519 : INFO : gram6-nationality-adjective: 73.0% (1111/1521)
2017-11-29 11:52:07,953 : INFO : gram7-past-tense: 35.5% (554/1560)
2017-11-29 11:52:18,648 : INFO : gram8-plural: 49.2% (655/1332)
2017-11-29 11:52:25,628 : INFO : gram9-plural-verbs: 21.8% (190/870)
2017-11-29 11:52:25,628 : INFO : total: 37.9% (6754/17827)
model = word2vec.Word2Vec(sentences, size=200, window=8, negative=25, hs=0, sample=0.0001, workers=8, iter=15)
2017-11-29 11:53:14,415 : INFO : loading projection weights from E:/data/word2vec/text8.gensim.word2vec.txt_1
2017-11-29 11:53:27,427 : INFO : loaded (71290L, 200L) matrix from E:/data/word2vec/text8.gensim.word2vec.txt_1
2017-11-29 11:53:27,505 : INFO : precomputing L2-norms of word weight vectors
2017-11-29 11:53:31,894 : INFO : capital-common-countries: 72.9% (369/506)
2017-11-29 11:54:01,937 : INFO : capital-world: 51.1% (1822/3564)
2017-11-29 11:54:06,974 : INFO : currency: 18.0% (107/596)
2017-11-29 11:54:26,329 : INFO : city-in-state: 41.5% (966/2330)
2017-11-29 11:54:29,640 : INFO : family: 59.3% (249/420)
2017-11-29 11:54:37,565 : INFO : gram1-adjective-to-adverb: 14.3% (142/992)
2017-11-29 11:54:43,559 : INFO : gram2-opposite: 13.6% (103/756)
2017-11-29 11:54:54,144 : INFO : gram3-comparative: 64.3% (857/1332)
2017-11-29 11:55:02,068 : INFO : gram4-superlative: 23.1% (229/992)
2017-11-29 11:55:10,453 : INFO : gram5-present-participle: 36.0% (380/1056)
2017-11-29 11:55:22,509 : INFO : gram6-nationality-adjective: 73.7% (1121/1521)
2017-11-29 11:55:34,861 : INFO : gram7-past-tense: 34.3% (535/1560)
2017-11-29 11:55:45,290 : INFO : gram8-plural: 49.8% (664/1332)
2017-11-29 11:55:52,154 : INFO : gram9-plural-verbs: 31.5% (274/870)
2017-11-29 11:55:52,155 : INFO : total: 43.9% (7818/17827)
:hanlp
-input E:\data\word2vec\text8 -output E:\data\word2vec\text8.hanlp.word2vec.txt -size 200 -window 8 -negative 25 -hs 0 -cbow 1 -sample 1e-4 -threads 8 -binary 1 -iter 15
2017-11-28 16:53:03,293 : INFO : loading projection weights from E:/data/word2vec/text8.hanlp.word2vec.txt
2017-11-28 16:53:15,493 : INFO : loaded (71290L, 200L) matrix from E:/data/word2vec/text8.hanlp.word2vec.txt
2017-11-28 16:53:15,553 : INFO : precomputing L2-norms of word weight vectors
2017-11-28 16:53:19,831 : INFO : capital-common-countries: 69.8% (353/506)
2017-11-28 16:53:49,194 : INFO : capital-world: 30.3% (1079/3564)
2017-11-28 16:53:54,053 : INFO : currency: 4.9% (29/596)
2017-11-28 16:54:12,895 : INFO : city-in-state: 35.7% (831/2330)
2017-11-28 16:54:16,322 : INFO : family: 31.9% (134/420)
2017-11-28 16:54:24,401 : INFO : gram1-adjective-to-adverb: 7.7% (76/992)
2017-11-28 16:54:30,487 : INFO : gram2-opposite: 9.9% (75/756)
2017-11-28 16:54:41,328 : INFO : gram3-comparative: 38.3% (510/1332)
2017-11-28 16:54:49,278 : INFO : gram4-superlative: 13.5% (134/992)
2017-11-28 16:54:58,219 : INFO : gram5-present-participle: 21.6% (228/1056)
2017-11-28 16:55:10,444 : INFO : gram6-nationality-adjective: 72.4% (1101/1521)
2017-11-28 16:55:22,950 : INFO : gram7-past-tense: 28.5% (445/1560)
2017-11-28 16:55:33,730 : INFO : gram8-plural: 45.9% (612/1332)
2017-11-28 16:55:40,694 : INFO : gram9-plural-verbs: 17.1% (149/870)
2017-11-28 16:55:40,696 : INFO : total: 32.3% (5756/17827)
-input E:\data\word2vec\text8 -output E:\data\word2vec\text8.hanlp.word2vec.txt_1 -size 200 -window 8 -negative 25 -cbow 1 -sample 1e-4 -threads 8 -binary 1 -iter 15
2017-11-29 11:15:27,628 : INFO : loading projection weights from E:/data/word2vec/text8.hanlp.word2vec.txt_1
2017-11-29 11:15:42,361 : INFO : loaded (71290L, 200L) matrix from E:/data/word2vec/text8.hanlp.word2vec.txt_1
2017-11-29 11:15:42,461 : INFO : precomputing L2-norms of word weight vectors
2017-11-29 11:15:47,365 : INFO : capital-common-countries: 80.0% (405/506)
2017-11-29 11:16:20,013 : INFO : capital-world: 46.2% (1647/3564)
2017-11-29 11:16:25,338 : INFO : currency: 14.4% (86/596)
2017-11-29 11:16:46,128 : INFO : city-in-state: 46.4% (1081/2330)
2017-11-29 11:16:49,861 : INFO : family: 53.1% (223/420)
2017-11-29 11:16:58,723 : INFO : gram1-adjective-to-adverb: 15.7% (156/992)
2017-11-29 11:17:05,424 : INFO : gram2-opposite: 9.9% (75/756)
2017-11-29 11:17:17,216 : INFO : gram3-comparative: 51.1% (680/1332)
2017-11-29 11:17:26,082 : INFO : gram4-superlative: 20.0% (198/992)
2017-11-29 11:17:35,536 : INFO : gram5-present-participle: 29.9% (316/1056)
2017-11-29 11:17:49,177 : INFO : gram6-nationality-adjective: 82.4% (1254/1521)
2017-11-29 11:18:03,059 : INFO : gram7-past-tense: 32.5% (507/1560)
2017-11-29 11:18:15,029 : INFO : gram8-plural: 53.7% (715/1332)
2017-11-29 11:18:22,894 : INFO : gram9-plural-verbs: 26.7% (232/870)
2017-11-29 11:18:22,894 : INFO : total: 42.5% (7575/17827)
:deeplearning4j
```java
Word2Vec vec = new Word2Vec.Builder()
        .layerSize(200)
        .windowSize(8)
        .negativeSample(25)
        .minWordFrequency(5)
        .useHierarchicSoftmax(true)
        .sampling(0.0001)
        .workers(8)
        .iterations(15)
        .epochs(15)
        .iterate(iter)
        .elementsLearningAlgorithm("org.deeplearning4j.models.embeddings.learning.impl.elements.CBOW")
        .tokenizerFactory(t)
        .build();
```
2017-11-28 16:46:26,894 : INFO : loading projection weights from E:/data/word2vec/text8.deeplearning4j.word2vec.txt
2017-11-28 16:46:39,391 : INFO : loaded (71290L, 200L) matrix from E:/data/word2vec/text8.deeplearning4j.word2vec.txt
2017-11-28 16:46:39,453 : INFO : precomputing L2-norms of word weight vectors
2017-11-28 16:46:43,596 : INFO : capital-common-countries: 67.4% (341/506)
2017-11-28 16:47:12,592 : INFO : capital-world: 33.9% (1208/3564)
2017-11-28 16:47:17,515 : INFO : currency: 6.0% (36/596)
2017-11-28 16:47:36,332 : INFO : city-in-state: 36.6% (852/2330)
2017-11-28 16:47:39,834 : INFO : family: 38.3% (161/420)
2017-11-28 16:47:47,898 : INFO : gram1-adjective-to-adverb: 9.0% (89/992)
2017-11-28 16:47:53,953 : INFO : gram2-opposite: 7.0% (53/756)
2017-11-28 16:48:04,632 : INFO : gram3-comparative: 38.7% (515/1332)
2017-11-28 16:48:12,653 : INFO : gram4-superlative: 11.8% (117/992)
2017-11-28 16:48:21,220 : INFO : gram5-present-participle: 23.0% (243/1056)
2017-11-28 16:48:33,519 : INFO : gram6-nationality-adjective: 76.7% (1166/1521)
2017-11-28 16:48:46,165 : INFO : gram7-past-tense: 27.2% (424/1560)
2017-11-28 16:48:56,894 : INFO : gram8-plural: 48.2% (642/1332)
2017-11-28 16:49:03,973 : INFO : gram9-plural-verbs: 19.2% (167/870)
2017-11-28 16:49:03,974 : INFO : total: 33.7% (6014/17827)
:google_c
./word2vec -train text8 -output vectors.bin -cbow 1 -size 200 -window 8 -negative 25 -hs 1 -sample 1e-4 -threads 8 -binary 0 -iter 15
2017-11-28 16:49:29,132 : INFO : loading projection weights from E:/data/word2vec/text8.google_c.word2vec.txt
2017-11-28 16:49:41,848 : INFO : loaded (71291L, 200L) matrix from E:/data/word2vec/text8.google_c.word2vec.txt
2017-11-28 16:49:41,914 : INFO : precomputing L2-norms of word weight vectors
2017-11-28 16:49:46,154 : INFO : capital-common-countries: 75.7% (383/506)
2017-11-28 16:50:15,078 : INFO : capital-world: 33.2% (1184/3564)
2017-11-28 16:50:19,993 : INFO : currency: 6.0% (36/596)
2017-11-28 16:50:38,967 : INFO : city-in-state: 36.0% (838/2330)
2017-11-28 16:50:42,348 : INFO : family: 47.4% (199/420)
2017-11-28 16:50:50,315 : INFO : gram1-adjective-to-adverb: 10.6% (105/992)
2017-11-28 16:50:56,355 : INFO : gram2-opposite: 7.8% (59/756)
2017-11-28 16:51:07,065 : INFO : gram3-comparative: 48.3% (644/1332)
2017-11-28 16:51:14,905 : INFO : gram4-superlative: 18.0% (179/992)
2017-11-28 16:51:23,299 : INFO : gram5-present-participle: 29.0% (306/1056)
2017-11-28 16:51:35,345 : INFO : gram6-nationality-adjective: 70.1% (1066/1521)
2017-11-28 16:51:47,733 : INFO : gram7-past-tense: 31.9% (498/1560)
2017-11-28 16:51:58,316 : INFO : gram8-plural: 50.1% (667/1332)
2017-11-28 16:52:05,321 : INFO : gram9-plural-verbs: 20.0% (174/870)
2017-11-28 16:52:05,322 : INFO : total: 35.6% (6338/17827)
./word2vec -train text8 -output vectors.bin -cbow 1 -size 200 -window 8 -negative 25 -hs 0 -sample 1e-4 -threads 8 -binary 0 -iter 15
2017-11-28 17:29:30,471 : INFO : loading projection weights from E:/data/word2vec/text8.google_c.word2vec.txt_1
2017-11-28 17:29:42,375 : INFO : loaded (71291L, 200L) matrix from E:/data/word2vec/text8.google_c.word2vec.txt_1
2017-11-28 17:29:42,436 : INFO : precomputing L2-norms of word weight vectors
2017-11-28 17:29:46,578 : INFO : capital-common-countries: 77.5% (392/506)
2017-11-28 17:30:15,301 : INFO : capital-world: 45.6% (1626/3564)
2017-11-28 17:30:20,082 : INFO : currency: 19.5% (116/596)
2017-11-28 17:30:38,799 : INFO : city-in-state: 41.2% (959/2330)
2017-11-28 17:30:42,157 : INFO : family: 61.7% (259/420)
2017-11-28 17:30:50,121 : INFO : gram1-adjective-to-adverb: 13.8% (137/992)
2017-11-28 17:30:56,214 : INFO : gram2-opposite: 13.1% (99/756)
2017-11-28 17:31:07,010 : INFO : gram3-comparative: 60.6% (807/1332)
2017-11-28 17:31:14,960 : INFO : gram4-superlative: 25.0% (248/992)
2017-11-28 17:31:23,447 : INFO : gram5-present-participle: 38.6% (408/1056)
2017-11-28 17:31:35,607 : INFO : gram6-nationality-adjective: 77.6% (1181/1521)
2017-11-28 17:31:48,147 : INFO : gram7-past-tense: 34.8% (543/1560)
2017-11-28 17:31:58,815 : INFO : gram8-plural: 49.5% (659/1332)
2017-11-28 17:32:05,812 : INFO : gram9-plural-verbs: 30.8% (268/870)
2017-11-28 17:32:05,812 : INFO : total: 43.2% (7702/17827)
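The per-category percentages in these logs come from gensim's analogy evaluation on `questions-words.txt`: each question is of the form a:b :: c:?, and a prediction counts as correct when the vocabulary word closest (by cosine) to b − a + c is the expected answer d. A self-contained toy sketch of that 3CosAdd scoring (the vectors below are illustrative values of my own, not the text8 models evaluated above):

```python
import math

def cosine(u, v):
    dot = sum(x * y for x, y in zip(u, v))
    norm_u = math.sqrt(sum(x * x for x in u))
    norm_v = math.sqrt(sum(x * x for x in v))
    return dot / (norm_u * norm_v)

def analogy_accuracy(vectors, questions):
    """Score a:b :: c:d questions by 3CosAdd: argmax_w cos(w, b - a + c),
    excluding the query words a, b, c from the candidates, as gensim does."""
    correct = 0
    for a, b, c, d in questions:
        target = [vb - va + vc
                  for va, vb, vc in zip(vectors[a], vectors[b], vectors[c])]
        best = max((w for w in vectors if w not in (a, b, c)),
                   key=lambda w: cosine(vectors[w], target))
        correct += best == d
    return correct / len(questions)

# Toy embedding in which the capital -> country offset is consistent.
vectors = {
    "paris":   [1.0, 0.0, 1.0],
    "france":  [1.0, 0.0, 0.0],
    "rome":    [0.0, 1.0, 1.0],
    "italy":   [0.0, 1.0, 0.0],
    "berlin":  [0.5, 0.5, 1.0],
    "germany": [0.5, 0.5, 0.0],
}
questions = [
    ("paris", "france", "rome", "italy"),
    ("rome", "italy", "berlin", "germany"),
]
print(analogy_accuracy(vectors, questions))  # → 1.0
```

On the real models above, the same procedure over the 17,827 questions produces the per-category and total accuracies reported in the logs.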
| closed | 2017-11-29T06:38:13Z | 2017-12-04T07:08:17Z | https://github.com/hankcs/HanLP/issues/699 | ["question"] | tiandiweizun | 2 |
bmoscon/cryptofeed | asyncio | 623 | How to get historical candle data for a time period? | Hello,
Thanks for your contribution to this project.
Is it possible to get historical 1m candle data from Binance for a given start and end time? I can see references to BinanceRestMixin, but I'm not sure how it should be wired into the feed; an example would be great. | closed | 2021-09-06T22:00:20Z | 2021-09-10T00:17:38Z | https://github.com/bmoscon/cryptofeed/issues/623 | ["question"] | msounthar | 2 |
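Setting cryptofeed aside for a moment: the underlying public endpoint, `GET /api/v3/klines` (parameters `symbol`, `interval`, `startTime`, `endTime` in epoch milliseconds), returns at most 1000 candles per request, so any start/end range longer than about 16.6 hours of 1m bars must be paginated. A stdlib-only sketch of that windowing (the helper names are my own, not part of cryptofeed or the Binance API):

```python
from datetime import datetime, timezone

BINANCE_MAX_KLINES = 1000  # /api/v3/klines returns at most 1000 rows per call

def ms(dt):
    """Naive-UTC datetime -> epoch-millisecond timestamp, as Binance expects."""
    return int(dt.replace(tzinfo=timezone.utc).timestamp() * 1000)

def kline_windows(start_ms, end_ms, interval_ms=60_000, limit=BINANCE_MAX_KLINES):
    """Yield (startTime, endTime) pairs covering [start_ms, end_ms),
    each window small enough for a single klines request."""
    span = interval_ms * limit
    t = start_ms
    while t < end_ms:
        yield t, min(t + span, end_ms) - 1
        t += span

windows = list(kline_windows(ms(datetime(2021, 1, 1)), ms(datetime(2021, 1, 2))))
print(len(windows))  # → 2 (1440 one-minute bars, at most 1000 per request)
```

Each (startTime, endTime) pair can then be fed to one request, whether via raw HTTP or via cryptofeed's BinanceRestMixin.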
vaexio/vaex | data-science | 2,171 | [BUG-REPORT] Joining filtered datasets throws an error | I'm attempting to combine the columns from two filtered one-row dataframes. For example:
```python
import vaex
df1 = vaex.from_arrays(numbers=['one', 'two', 'three'])
df2 = vaex.from_arrays(letters=['aaa', 'bbb', 'ccc'])
df1_filtered = df1[df1.numbers == 'two']
print('df1_filtered')
print(df1_filtered)
df2_filtered = df2[df2.letters == 'aaa']
print('df2_filtered')
print(df2_filtered)
joined = df1_filtered.join(df2_filtered)
print('joined')
print(joined)
```
This produces the following output:
```
df1_filtered
# numbers
0 two
df2_filtered
# letters
0 aaa
Traceback (most recent call last):
File "/Users/maxharlow/Desktop/example.py", line 16, in <module>
joined = df1_filtered.join(df2_filtered)
File "/opt/homebrew/lib/python3.9/site-packages/vaex/dataframe.py", line 6686, in join
return vaex.join.join(**kwargs)
File "/opt/homebrew/lib/python3.9/site-packages/vaex/join.py", line 287, in join
dataset = left.dataset.merged(right_dataset)
File "/opt/homebrew/lib/python3.9/site-packages/vaex/dataset.py", line 1372, in merged
return DatasetMerged(self, rhs)
File "/opt/homebrew/lib/python3.9/site-packages/vaex/dataset.py", line 1194, in __init__
raise ValueError(f'Merging datasets with unequal row counts ({self.left.row_count} != {self.right.row_count})')
ValueError: Merging datasets with unequal row counts (3 != 1)
```
I had expected the output to be a dataframe like so:
```
┌─────────┬─────────┐
│ numbers │ letters │
├─────────┼─────────┤
│ two │ aaa │
└─────────┴─────────┘
```
I'm using Python 3.9.13, and Vaex version: `{'vaex': '4.11.1', 'vaex-core': '4.11.1', 'vaex-viz': '0.5.2', 'vaex-hdf5': '0.12.3', 'vaex-server': '0.8.1', 'vaex-astro': '0.9.1', 'vaex-jupyter': '0.8.0', 'vaex-ml': '0.18.0'}` | closed | 2022-08-14T14:50:26Z | 2022-08-14T18:33:39Z | https://github.com/vaexio/vaex/issues/2171 | [] | maxharlow | 3 |
d2l-ai/d2l-en | computer-vision | 2,253 | Error in the section GoogleLeNet | When running the notebook using TensorFlow in google Colab, I encountered the following error

It occurs in the GoogLeNet section. | closed | 2022-08-17T22:09:40Z | 2023-05-15T14:28:17Z | https://github.com/d2l-ai/d2l-en/issues/2253 | [] | COD1995 | 3 |
JaidedAI/EasyOCR | deep-learning | 632 | display error and not related to the Pytorch version | ```
>>> reader = easyocr.Reader(['en'],gpu=False)
Using CPU. Note: This module is much faster with a GPU.
>>> reader.readtext('~/data/000047287-rk.jpg')
段错误
```
Hmm... why does this happen?
For reference, the output 段错误 translates to "segmentation fault". | closed | 2022-01-01T13:40:23Z | 2022-08-07T05:04:35Z | https://github.com/JaidedAI/EasyOCR/issues/632 | [] | mrzhu666 | 0 |
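Not an EasyOCR-specific fix, but a generic way to narrow down a hard crash like this: Python's stdlib `faulthandler` module prints the Python-level traceback when the process takes a segmentation fault, which usually points at the native extension (e.g. torch or opencv) being called at the time:

```python
import faulthandler

# Enable this at the very top of the script, before importing easyocr:
# on a segmentation fault the interpreter dumps the Python-level stack
# to stderr instead of dying silently.
faulthandler.enable()
print(faulthandler.is_enabled())  # → True

# Hypothetical repro to run with the handler enabled:
# import easyocr
# reader = easyocr.Reader(['en'], gpu=False)
# reader.readtext('~/data/000047287-rk.jpg')
```

The same effect is available without code changes by running `python -X faulthandler your_script.py`.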