QuestionId
int64 74.8M
79.8M
| UserId
int64 56
29.4M
| QuestionTitle
stringlengths 15
150
| QuestionBody
stringlengths 40
40.3k
| Tags
stringlengths 8
101
| CreationDate
stringdate 2022-12-10 09:42:47
2025-11-01 19:08:18
| AnswerCount
int64 0
44
| UserExpertiseLevel
int64 301
888k
| UserDisplayName
stringlengths 3
30
⌀ |
|---|---|---|---|---|---|---|---|---|
74,870,806
| 1,066,291
|
Pickle custom collections.abc.Mapping as dict
|
<p>I have a custom facade that implements the collections.abc.Mapping mixin interface.</p>
<pre><code>from collections import abc
from typing import Mapping
class MyMapping(abc.Mapping):
def __init__(self, item:Mapping):
self._item = item
def __getitem__(self,key):
return self._item[key]
def __iter__(self):
return iter(self._item)
def __len__(self):
return len(self._item)
</code></pre>
<p>I would like it if this class would be pickled as a dict. That is, I want the following to be true:</p>
<pre><code>import pickle
#I'd like this statement to be true
pickle.dumps(MyMapping({'a':1})) == pickle.dumps({'a':1})
</code></pre>
<p>That being said I'd rather not have to convert all my types to dicts just to pickle them.</p>
<p>I've tried using <code>__reduce__</code> with a dict type in the first of the returned tuple but that doesn't work. Is there some other trick? Also, creating my own callable object to handle this or registering a dispatch reduction function also seems way overkill.</p>
|
<python><pickle>
|
2022-12-21 02:36:33
| 0
| 8,319
|
Mark Rucker
|
74,870,797
| 35,812
|
Format braces within regex count specifier braces
|
<blockquote>
<p>I need to find from "aaaa" -> 'aa', 'aa', 'aa', 'aaa', 'aaa', 'aaaa'.</p>
<p>I tried <code>re.findall(r'(.)\1{1,}')</code>, but all I find is 'a'.</p>
</blockquote>
<p>From <a href="https://stackoverflow.com/questions/74870518/python-regex-get-all-possible-repeated-substrings">this question</a>, I attempted to form a regex to get the desired results. However, there are <code>format</code> braces within the regex count specifier braces.</p>
<p>I think I've seen how that is handled but can't find it.</p>
<pre><code>for n in range(1, 3):
for m in re.finditer(r'(?=((.)\2{{0}}))'.format(n), 'aaaa'):
print(m.group(1))
</code></pre>
<p>This gives:</p>
<pre><code>a
a
a
a
a
a
a
a
</code></pre>
<p>but I want:</p>
<pre><code>aa
aa
aa
aaa
aaa
</code></pre>
|
<python><python-3.x><regex>
|
2022-12-21 02:34:12
| 1
| 6,671
|
Chris Charley
|
74,870,692
| 5,411,494
|
How to check if an image is "JPEG 2000" in Python?
|
<p>How to check if an image file is <code>JPEG 2000</code> in Python?</p>
<p>If possible, without installing a heavy image library.</p>
|
<python><image><jpeg><jpeg2000>
|
2022-12-21 02:09:57
| 1
| 3,328
|
Marcel
|
74,870,540
| 9,092,563
|
In Python scrapy, With Multiple projects, How Do You Import A Class Method From One project Into Another project?
|
<h1>PROBLEM</h1>
<p>I need to import a function/method located in scrapy project #1 into a spider in scrapy project # 2 <em>and use it in one of the spiders of project #2</em>.</p>
<h2>DIRECTORY STRUCTURE</h2>
<p>For starters, here's my directory structure (assume these are all under one root directory):</p>
<pre><code>/importables # scrapy project #1
/importables
/spiders
title_collection.py # take class functions defined from here
/alibaba # scrapy project #2
/alibaba
/spiders
alibabaPage.py # use them here
</code></pre>
<h1>WHAT I WANT</h1>
<p>As shown above, I am trying to get scrapy to:</p>
<ol>
<li>Run <code>alibabaPage.py</code></li>
<li>From <code>title_collection.py</code>, import a class method named <code>saveTitleInTitlesCollection</code> out of a class in that file named <code>TitleCollectionSpider</code></li>
<li>I want to use <code>saveTitleInTitlesCollection</code> inside functions that are called in the <code>alibabaPage.py</code> spider.</li>
</ol>
<h1>HOW IT'S GOING...</h1>
<p>Here's what I've done so far at the top of <code>alibabaPage.py</code>:<br></p>
<ol>
<li><p><code>from importables.importables.spiders import saveTitleInTitlesCollection</code></p>
<ul>
<li><p>nope. Fails and the error says <code>builtins.ModuleNotFoundError: No module named 'importables'</code></p>
</li>
<li><p>How can that be? That answer I got from <a href="https://stackoverflow.com/questions/43350207/scrapy-import-item-from-py-file-failes">this answer</a>.</p>
</li>
</ul>
</li>
<li><p><code>sys.path.append(os.path.join(os.path.dirname(__file__), '../..'))</code>
Then, I did this...
<code>from importables.importables.spiders import saveTitleInTitlesCollection</code></p>
<ul>
<li>nope, Fails and I get the same error as the first attempt. Taken from <a href="https://stackoverflow.com/questions/18196458/scrapy-importing-a-package-from-the-project-thats-not-in-the-same-directory">this answer</a>.</li>
</ul>
</li>
<li><p>Re-reading the post in the link from answer #1, I realized the guy put the two files in the same directory, so, I tried doing that (making a copy of <code>title_collection.py</code> and putting it in like so:</p>
</li>
</ol>
<pre><code>/alibaba # scrapy project #2
/alibaba
/spiders
alibabaPage.py # use them here
title_collection.py # added this
</code></pre>
<ul>
<li>Well, that appeared to work but didn't in the end. This threw no errors...</li>
</ul>
<pre><code>from alibaba.spiders.title_collection import TitleCollectionSpiderAlibaba
</code></pre>
<p>Leading me to assume everything worked. I added a test function named <code>testForImport</code> and tried importing it, ended up getting error: <code>builtins.ModuleNotFoundError: No module named 'alibaba.spiders.title_collection.testForImport'; 'alibaba.spiders.title_collection' is not a package</code></p>
<ul>
<li><p>Unfortunately, this wasn't actually achieving the goal of importing the class method I want to use, named <code>saveTitleInTitlesCollection</code>.</p>
</li>
<li><p>I have numerous scrapy projects and want to really just have one project of spiders <strong>that I can just import into every other project with ease</strong>.</p>
</li>
<li><p>This is not that solution so, the quest for a true solution to importing a bunch of class methods from one scrapy project to many continues... <em>can this even be done I wonder...</em></p>
</li>
<li><p>WAIT, this actually didn't work after all because when builtins.ModuleNotFoundError:
No module named 'TitleCollectionSpiderAlibaba'</p>
</li>
</ul>
<ol start="4">
<li><code>from alibaba.spiders.title_collection import testForImport</code></li>
</ol>
<ul>
<li><p>nope. This failed too.</p>
<p>But, this time it gave me slightly different error...</p>
</li>
</ul>
<pre><code>builtins.ImportError:
cannot import name 'testForImport' from 'alibaba.spiders.title_collection'
(C:\Users\User\\scrapy-webscrapers\alibaba\alibaba\spiders\title_collection.py)
</code></pre>
<h1>Consider this now solved!</h1>
<p>Due to Umair's answer I was able to do this:</p>
<pre><code># typical scrapy spider imports...
import scrapy
from ..items import AlibabaItem
# import this near the top of the page
sys.path.append(os.path.join(os.path.abspath('../')))
from importables.importables.spiders.title_collection import TitleCollectionSpider
...
# then, in parse method I did this...
def parse(self, response):
alibaba_item = AlibabaItem()
title_collection_spider_obj = TitleCollectionSpider()
title_collection_spider_obj.testForImportTitlesCollection()
# terminal showed this, proving it worked...
# "testForImport worked if you see this!"
</code></pre>
|
<python><web-scraping><import><module><scrapy>
|
2022-12-21 01:32:56
| 1
| 692
|
rom
|
74,870,518
| 10,226,040
|
Regex to get all possible repeated substrings
|
<p>I need to find from "aaaa" -> 'aa', 'aa', 'aa', 'aaa', 'aaa', 'aaaa'.</p>
<p>I tried <code>re.findall(r'(.)\1{1,}')</code>, but all I find is 'a'.</p>
|
<python><regex>
|
2022-12-21 01:27:37
| 1
| 311
|
Chainsaw
|
74,870,389
| 626,664
|
How to assign multiple column in one go with dictionary values?
|
<p>I have a pandas dataframe and a function that returns a dictionary. I want to assign the result of the function to multiple columns. But the problem is, the function returns a dictionary that I want to save only the values to columns. I can do this by sending only the values from the function. However, I want to solve if from the recipient side.</p>
<p>Example code:</p>
<pre><code>import pandas as pd
data = {
"col1": [420, 380, 390],
"col2": [50, 40, 45]
}
#load data into a DataFrame object:
df = pd.DataFrame(data)
</code></pre>
<p>The following function returns a dictionary. ( Can solve by returning return fun_dict.values(), but I want to send the dictionary and solve it from the calling side)</p>
<pre><code>def fun_function(my_input):
fun_dict = {}
fun_dict["num1"] = str(my_input)[0]
fun_dict["num2"] = str(my_input)[1]
return fun_dict
</code></pre>
<p>Driving side/recipient side:</p>
<pre><code>df["first_num_col"], df["last_num_col"] = zip(*df["col2"].map(fun_function))
</code></pre>
<p>With this code, I only get the key values, under each column. Any help/suggestion is appreciated. Thank you.</p>
|
<python><pandas><multiple-columns><python-zip>
|
2022-12-21 01:00:35
| 1
| 1,559
|
Droid-Bird
|
74,870,318
| 2,066,572
|
Problems installing pytables on M1 Mac
|
<p>I'm trying to install pytables on an M1 Mac (MacOS 12.6.1, python 3.11 and hdf5 1.12.2 installed using homebrew). Following the advice in <a href="https://stackoverflow.com/a/74276925">https://stackoverflow.com/a/74276925</a> I did the following:</p>
<pre><code>pip install cython
brew install hdf5
brew install c-blosc
export HDF5_DIR=/opt/homebrew/opt/hdf5
export BLOSC_DIR=/opt/homebrew/opt/c-blosc
pip install tables
</code></pre>
<p><code>pip install tables</code> produces the following output:</p>
<pre><code>Collecting tables
Using cached tables-3.7.0.tar.gz (8.2 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Requirement already satisfied: numpy>=1.19.0 in /opt/homebrew/lib/python3.11/site-packages (from tables) (1.24.0)
Collecting numexpr>=2.6.2
Using cached numexpr-2.8.4-cp311-cp311-macosx_11_0_arm64.whl (89 kB)
Requirement already satisfied: packaging in /opt/homebrew/lib/python3.11/site-packages (from tables) (22.0)
Building wheels for collected packages: tables
Building wheel for tables (pyproject.toml) ... error
error: subprocess-exited-with-error
× Building wheel for tables (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [307 lines of output]
Error in sitecustomize; set PYTHONVERBOSE for traceback:
AssertionError:
/var/folders/82/c0s0s7md2m1c5nzd6s08j1tn09h343/T/lzo_version_dateuyld430w.c:2:5: error: implicit declaration of function 'lzo_version_date' is invalid in C99 [-Werror,-Wimplicit-function-declaration]
lzo_version_date();
^
1 error generated.
/var/folders/82/c0s0s7md2m1c5nzd6s08j1tn09h343/T/lzo_version_date4yl230lw.c:2:5: error: implicit declaration of function 'lzo_version_date' is invalid in C99 [-Werror,-Wimplicit-function-declaration]
lzo_version_date();
^
1 error generated.
/var/folders/82/c0s0s7md2m1c5nzd6s08j1tn09h343/T/BZ2_bzlibVersionowbloxlk.c:2:5: error: implicit declaration of function 'BZ2_bzlibVersion' is invalid in C99 [-Werror,-Wimplicit-function-declaration]
BZ2_bzlibVersion();
^
1 error generated.
cpuinfo failed, assuming no CPU features: No module named 'cpuinfo'
* Using Python 3.11.0 (main, Oct 25 2022, 13:57:33) [Clang 14.0.0 (clang-1400.0.29.202)]
* Found cython 0.29.32
* USE_PKGCONFIG: True
* Found HDF5 headers at ``/opt/homebrew/opt/hdf5/include``, library at ``/opt/homebrew/opt/hdf5/lib``.
* Could not find LZO 2 headers and library; disabling support for it.
* Could not find LZO 1 headers and library; disabling support for it.
* Could not find bzip2 headers and library; disabling support for it.
* pkg-config header dirs for blosc: /opt/homebrew/Cellar/c-blosc/1.21.3/include
* pkg-config library dirs for blosc: /opt/homebrew/Cellar/c-blosc/1.21.3/lib
* Found blosc headers at ``/opt/homebrew/Cellar/c-blosc/1.21.3/include``, library at ``/opt/homebrew/Cellar/c-blosc/1.21.3/lib``.
running bdist_wheel
running build
running build_py
creating build
creating build/lib.macosx-12-arm64-cpython-311
creating build/lib.macosx-12-arm64-cpython-311/tables
[snip]
clang -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX12.sdk -DNDEBUG=1 -DHAVE_BLOSC_LIB=1 -Ihdf5-blosc/src -I/usr/local/include -I/sw/include -I/opt/include -I/opt/local/include -I/usr/include -I/include -I/opt/homebrew/opt/hdf5/include -I/opt/homebrew/Cellar/c-blosc/1.21.3/include -I/opt/homebrew/lib/python3.11/site-packages/numpy/core/include -I/opt/homebrew/opt/python@3.11/Frameworks/Python.framework/Versions/3.11/include/python3.11 -c tables/utilsextension.c -o build/temp.macosx-12-arm64-cpython-311/tables/utilsextension.o -Isrc -DH5_USE_18_API -DH5Acreate_vers=2 -DH5Aiterate_vers=2 -DH5Dcreate_vers=2 -DH5Dopen_vers=2 -DH5Eclear_vers=2 -DH5Eprint_vers=2 -DH5Epush_vers=2 -DH5Eset_auto_vers=2 -DH5Eget_auto_vers=2 -DH5Ewalk_vers=2 -DH5E_auto_t_vers=2 -DH5Gcreate_vers=2 -DH5Gopen_vers=2 -DH5Pget_filter_vers=2 -DH5Pget_filter_by_id_vers=2 -DH5Tarray_create_vers=2 -DH5Tget_array_dims_vers=2 -DH5Z_class_t_vers=2 -DNPY_NO_DEPRECATED_API=NPY_1_7_API_VERSION
tables/utilsextension.c:8032:52: warning: comparison of integers of different signs: 'hsize_t' (aka 'unsigned long long') and 'long long' [-Wsign-compare]
__pyx_t_2 = (((__pyx_v_maxdims[__pyx_v_i]) == -1LL) != 0);
~~~~~~~~~~~~~~~~~~~~~~~~~~ ^ ~~~~
tables/utilsextension.c:12367:33: warning: comparison of integers of different signs: 'int' and 'hsize_t' (aka 'unsigned long long') [-Wsign-compare]
for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) {
~~~~~~~~~ ^ ~~~~~~~~~
tables/utilsextension.c:15186:35: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
for (__pyx_t_5 = 0; __pyx_t_5 < __pyx_t_4; __pyx_t_5+=1) {
~~~~~~~~~ ^ ~~~~~~~~~
tables/utilsextension.c:15413:52: warning: comparison of integers of different signs: 'hsize_t' (aka 'unsigned long long') and 'long long' [-Wsign-compare]
__pyx_t_3 = (((__pyx_v_maxdims[__pyx_v_j]) == -1LL) != 0);
~~~~~~~~~~~~~~~~~~~~~~~~~~ ^ ~~~~
tables/utilsextension.c:22030:23: error: no member named 'exc_type' in 'struct _err_stackitem'
while ((exc_info->exc_type == NULL || exc_info->exc_type == Py_None) &&
~~~~~~~~ ^
tables/utilsextension.c:22030:53: error: no member named 'exc_type' in 'struct _err_stackitem'
while ((exc_info->exc_type == NULL || exc_info->exc_type == Py_None) &&
~~~~~~~~ ^
tables/utilsextension.c:22044:23: error: no member named 'exc_type' in 'struct _err_stackitem'
*type = exc_info->exc_type;
~~~~~~~~ ^
tables/utilsextension.c:22046:21: error: no member named 'exc_traceback' in 'struct _err_stackitem'
*tb = exc_info->exc_traceback;
~~~~~~~~ ^
tables/utilsextension.c:22060:26: error: no member named 'exc_type' in 'struct _err_stackitem'
tmp_type = exc_info->exc_type;
~~~~~~~~ ^
tables/utilsextension.c:22062:24: error: no member named 'exc_traceback' in 'struct _err_stackitem'
tmp_tb = exc_info->exc_traceback;
~~~~~~~~ ^
tables/utilsextension.c:22063:15: error: no member named 'exc_type' in 'struct _err_stackitem'
exc_info->exc_type = type;
~~~~~~~~ ^
tables/utilsextension.c:22065:15: error: no member named 'exc_traceback' in 'struct _err_stackitem'
exc_info->exc_traceback = tb;
~~~~~~~~ ^
tables/utilsextension.c:22147:30: error: no member named 'exc_type' in 'struct _err_stackitem'
tmp_type = exc_info->exc_type;
~~~~~~~~ ^
tables/utilsextension.c:22149:28: error: no member named 'exc_traceback' in 'struct _err_stackitem'
tmp_tb = exc_info->exc_traceback;
~~~~~~~~ ^
tables/utilsextension.c:22150:19: error: no member named 'exc_type' in 'struct _err_stackitem'
exc_info->exc_type = local_type;
~~~~~~~~ ^
tables/utilsextension.c:22152:19: error: no member named 'exc_traceback' in 'struct _err_stackitem'
exc_info->exc_traceback = local_tb;
~~~~~~~~ ^
tables/utilsextension.c:22201:43: warning: 'ob_shash' is deprecated [-Wdeprecated-declarations]
hash1 = ((PyBytesObject*)s1)->ob_shash;
^
/opt/homebrew/opt/python@3.11/Frameworks/Python.framework/Versions/3.11/include/python3.11/cpython/bytesobject.h:7:5: note: 'ob_shash' has been explicitly marked deprecated here
Py_DEPRECATED(3.11) Py_hash_t ob_shash;
^
/opt/homebrew/opt/python@3.11/Frameworks/Python.framework/Versions/3.11/include/python3.11/pyport.h:336:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
tables/utilsextension.c:22202:43: warning: 'ob_shash' is deprecated [-Wdeprecated-declarations]
hash2 = ((PyBytesObject*)s2)->ob_shash;
^
/opt/homebrew/opt/python@3.11/Frameworks/Python.framework/Versions/3.11/include/python3.11/cpython/bytesobject.h:7:5: note: 'ob_shash' has been explicitly marked deprecated here
Py_DEPRECATED(3.11) Py_hash_t ob_shash;
^
/opt/homebrew/opt/python@3.11/Frameworks/Python.framework/Versions/3.11/include/python3.11/pyport.h:336:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
tables/utilsextension.c:22347:26: error: no member named 'exc_type' in 'struct _err_stackitem'
tmp_type = exc_info->exc_type;
~~~~~~~~ ^
tables/utilsextension.c:22349:24: error: no member named 'exc_traceback' in 'struct _err_stackitem'
tmp_tb = exc_info->exc_traceback;
~~~~~~~~ ^
tables/utilsextension.c:22350:15: error: no member named 'exc_type' in 'struct _err_stackitem'
exc_info->exc_type = *type;
~~~~~~~~ ^
tables/utilsextension.c:22352:15: error: no member named 'exc_traceback' in 'struct _err_stackitem'
exc_info->exc_traceback = *tb;
~~~~~~~~ ^
tables/utilsextension.c:23020:5: error: incomplete definition of type 'struct _frame'
__Pyx_PyFrame_SetLineNumber(py_frame, py_line);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
tables/utilsextension.c:445:62: note: expanded from macro '__Pyx_PyFrame_SetLineNumber'
#define __Pyx_PyFrame_SetLineNumber(frame, lineno) (frame)->f_lineno = (lineno)
~~~~~~~^
/opt/homebrew/opt/python@3.11/Frameworks/Python.framework/Versions/3.11/include/python3.11/pytypedefs.h:22:16: note: forward declaration of 'struct _frame'
typedef struct _frame PyFrameObject;
^
6 warnings and 17 errors generated.
error: command '/usr/bin/clang' failed with exit code 1
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for tables
Failed to build tables
ERROR: Could not build wheels for tables, which is required to install pyproject.toml-based projects
</code></pre>
<p><a href="https://stackoverflow.com/a/65696724/2066572">https://stackoverflow.com/a/65696724/2066572</a> suggests a possible solution to the error <code>no member named 'exc_type' in 'struct _err_stackitem'</code> from the above output is to replace <code>pip install pytables</code> with <code>pip install --global-option build --global-option --force tables</code>. However, this fails with a different error:</p>
<pre><code>Collecting tables
Using cached tables-3.7.0.tar.gz (8.2 MB)
Installing build dependencies ... error
error: subprocess-exited-with-error
× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> [145 lines of output]
DEPRECATION: --no-binary currently disables reading from the cache of locally built wheels. In the future --no-binary will not influence the wheel cache. pip 23.1 will enforce this behaviour change. A possible replacement is to use the --no-cache-dir option. You can use the flag --use-feature=no-binary-enable-wheel-cache to test the upcoming behaviour. Discussion can be found at https://github.com/pypa/pip/issues/11453
Collecting setuptools>=42.0
Using cached setuptools-65.6.3.tar.gz (2.6 MB)
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'done'
Preparing metadata (pyproject.toml): started
Preparing metadata (pyproject.toml): finished with status 'done'
Collecting wheel
Using cached wheel-0.38.4.tar.gz (67 kB)
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Collecting oldest-supported-numpy
Using cached oldest-supported-numpy-2022.11.19.tar.gz (4.9 kB)
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Collecting packaging
Using cached packaging-22.0.tar.gz (125 kB)
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'done'
Preparing metadata (pyproject.toml): started
Preparing metadata (pyproject.toml): finished with status 'done'
Collecting Cython>=0.29.21
Using cached Cython-0.29.32.tar.gz (2.1 MB)
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Collecting numpy==1.23.2
Using cached numpy-1.23.2.tar.gz (10.7 MB)
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'error'
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> [97 lines of output]
Error in sitecustomize; set PYTHONVERBOSE for traceback:
AssertionError:
<string>:71: RuntimeWarning: NumPy 1.23.2 may not yet support Python 3.11.
Running from numpy source directory.
<string>:86: DeprecationWarning:
`numpy.distutils` is deprecated since NumPy 1.23.0, as a result
of the deprecation of `distutils` itself. It will be removed for
Python >= 3.12. For older Python versions it will remain present.
It is recommended to use `setuptools < 60.0` for those Python versions.
For more details, see:
https://numpy.org/devdocs/reference/distutils_status_migration.html
running egg_info
running build_src
INFO: build_src
creating numpy.egg-info
writing numpy.egg-info/PKG-INFO
writing dependency_links to numpy.egg-info/dependency_links.txt
writing entry points to numpy.egg-info/entry_points.txt
writing top-level names to numpy.egg-info/top_level.txt
writing manifest file 'numpy.egg-info/SOURCES.txt'
/opt/homebrew/lib/python3.11/site-packages/setuptools/command/egg_info.py:643: SetuptoolsDeprecationWarning: Custom 'build_py' does not implement 'get_data_files_without_manifest'.
Please extend command classes from setuptools instead of distutils.
warnings.warn(
INFO: unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options
Traceback (most recent call last):
File "/opt/homebrew/lib/python3.11/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 351, in <module>
main()
File "/opt/homebrew/lib/python3.11/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 333, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 118, in get_requires_for_build_wheel
return hook(config_settings)
^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/setuptools/build_meta.py", line 338, in get_requires_for_build_wheel
return self._get_build_requires(config_settings, requirements=['wheel'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/setuptools/build_meta.py", line 320, in _get_build_requires
self.run_setup()
File "/opt/homebrew/lib/python3.11/site-packages/setuptools/build_meta.py", line 484, in run_setup
self).run_setup(setup_script=setup_script)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/setuptools/build_meta.py", line 335, in run_setup
exec(code, locals())
File "<string>", line 493, in <module>
File "<string>", line 485, in setup_package
File "/opt/homebrew/lib/python3.11/site-packages/setuptools/__init__.py", line 87, in setup
return distutils.core.setup(**attrs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/setuptools/_distutils/core.py", line 185, in setup
return run_commands(dist)
^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/setuptools/_distutils/core.py", line 201, in run_commands
dist.run_commands()
File "/opt/homebrew/lib/python3.11/site-packages/setuptools/_distutils/dist.py", line 968, in run_commands
self.run_command(cmd)
File "/opt/homebrew/lib/python3.11/site-packages/setuptools/dist.py", line 1217, in run_command
super().run_command(command)
File "/opt/homebrew/lib/python3.11/site-packages/setuptools/_distutils/dist.py", line 987, in run_command
cmd_obj.run()
File "/private/var/folders/82/c0s0s7md2m1c5nzd6s08j1tn09h343/T/pip-install-lomp8yz3/numpy_c85c0322750845cfba16acab1a65e8fa/numpy/distutils/command/egg_info.py", line 25, in run
_egg_info.run(self)
File "/opt/homebrew/lib/python3.11/site-packages/setuptools/command/egg_info.py", line 308, in run
self.find_sources()
File "/opt/homebrew/lib/python3.11/site-packages/setuptools/command/egg_info.py", line 316, in find_sources
mm.run()
File "/opt/homebrew/lib/python3.11/site-packages/setuptools/command/egg_info.py", line 560, in run
self.add_defaults()
File "/opt/homebrew/lib/python3.11/site-packages/setuptools/command/egg_info.py", line 597, in add_defaults
sdist.add_defaults(self)
File "/opt/homebrew/lib/python3.11/site-packages/setuptools/command/sdist.py", line 107, in add_defaults
self._add_defaults_build_sub_commands()
File "/opt/homebrew/lib/python3.11/site-packages/setuptools/command/sdist.py", line 127, in _add_defaults_build_sub_commands
self.filelist.extend(chain.from_iterable(files))
File "/opt/homebrew/lib/python3.11/site-packages/setuptools/command/egg_info.py", line 503, in extend
self.files.extend(filter(self._safe_path, paths))
File "/opt/homebrew/lib/python3.11/site-packages/setuptools/command/sdist.py", line 126, in <genexpr>
files = (c.get_source_files() for c in cmds if hasattr(c, "get_source_files"))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/setuptools/command/sdist.py", line 125, in <genexpr>
cmds = (self.get_finalized_command(c) for c in missing_cmds)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/setuptools/_distutils/cmd.py", line 306, in get_finalized_command
cmd_obj.ensure_finalized()
File "/opt/homebrew/lib/python3.11/site-packages/setuptools/_distutils/cmd.py", line 109, in ensure_finalized
self.finalize_options()
File "/private/var/folders/82/c0s0s7md2m1c5nzd6s08j1tn09h343/T/pip-install-lomp8yz3/numpy_c85c0322750845cfba16acab1a65e8fa/numpy/distutils/command/config_compiler.py", line 69, in finalize_options
v = getattr(c, a)
^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/Cython/Distutils/old_build_ext.py", line 157, in __getattr__
return _build_ext.build_ext.__getattr__(self, name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/setuptools/_distutils/cmd.py", line 105, in __getattr__
raise AttributeError(attr)
AttributeError: fcompiler. Did you mean: 'compiler'?
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
</code></pre>
<p>Searches for the above error <code>AttributeError: fcompiler. Did you mean: 'compiler'?</code> have come up dry for me.</p>
<p>How do I install pytables in this environment? Is there a way to avoid building from source? If not, how do I get it to build successfully?</p>
|
<python><macos><pip><homebrew><pytables>
|
2022-12-21 00:44:07
| 2
| 438
|
jhaiduce
|
74,870,204
| 13,597,979
|
Difficulty extending tuples in Python
|
<p>I'm using Python 3.9.6. Why do the classes <code>A</code> and <code>B</code> work with the <code>*args</code> as a parameter, but <code>C</code> does not? Python throws this error (before printing <code>args</code>):</p>
<p><code>TypeError: tuple expected at most 1 argument, got 3</code>.</p>
<p>Note the only difference between the classes is if it is extending a class, and what class it's extending. I'm asking because I'm trying to make an object that behaves exactly like a tuple but would be distinguishable from it using <code>isinstance</code>.</p>
<pre class="lang-py prettyprint-override"><code>class A:
def __init__(self, *args):
print(args)
class B(list):
def __init__(self, *args):
print(args)
super().__init__(args)
class C(tuple):
def __init__(self, *args):
print(args)
super().__init__(args)
A(1,2,3)
B(4,5,6)
C(7,8,9)
</code></pre>
|
<python><tuples>
|
2022-12-21 00:22:07
| 1
| 550
|
TimH
|
74,870,085
| 9,403,538
|
Cannot get celery to run task with Flask-SockeIO and eventlet monkey patching
|
<h1>Update 2:</h1>
<p>The solution is in how monkey patching actually gets done. See my answer below.</p>
<h1>Update 1:</h1>
<p>The issue is the monkey patching of eventlet. Monkey patching is pretty magic to me, so I don't fully understand why exactly. I can get the celery task to run if I don't monkey patch eventlet. However, if I don't monkey patch eventlet, I cannot use a SocketIO instance in another process, as I cannot use redis as the SocketIO message queue without monkey patching. Trying gevent and still running into issues but will update with results.</p>
<p>Also note, I had to change the Celery object instantiation to <code>Celery("backend")</code> rather than <code>Celery(app.import_name)</code> or <code>Celery(app.name)</code> or <code>Celery(__name__)</code> to get the non-monkey patched task to run. Because I am not using anything from the app context in my task, I actually don't even need the <code>make_celery.py</code> module, and can just instantiate it directly within <code>backend.py</code>.</p>
<p>I also tried different databases within redis, thought it might be a conflict.</p>
<p>I also moved to remote debugging of the celery task through telnet as described <a href="https://docs.celeryq.dev/en/latest/userguide/debugging.html#debugging-tasks-remotely-using-pdb" rel="nofollow noreferrer">here</a>. Again, when NOT monkey patching, the task runs, the external socketio object exists, though it cannot communicate to the main server to emit. When I DO monkey patch, the task won't even run.</p>
<p>When using gevent and monkey patching, the application doesn't even start up.</p>
<h1>The Goal:</h1>
<p>Run a real-time web application that pushes data created in parallel process to the client via Socket.IO. I am using Flask and Flask-SocketIO.</p>
<h1>What I've Tried:</h1>
<p>I originally was just emitting from an external process as described in <a href="https://flask-socketio.readthedocs.io/en/latest/deployment.html?highlight=celery#emitting-from-an-external-process" rel="nofollow noreferrer">the docs</a> (see my original minimum working example, <a href="https://github.com/jacoblapenna/Eventlet_Multiprocessing.git" rel="nofollow noreferrer">here</a>). However, this proved to be buggy. Specifically, it worked flawlessly when the data streaming object was instantiated and the external process started within the <code>if __name__ == "__main__:</code> block, but failed when it was instantiated and started on demand from a Socket.IO event. Much research led me to <a href="https://github.com/eventlet/eventlet/issues/210" rel="nofollow noreferrer">this eventlet issue</a>, which is still open and suggests eventlet and multiprocessing do not play well together. I then tried gevent and it worked for a while but was still buggy when left running for long periods of time (e.g. 12 hours).</p>
<p><a href="https://stackoverflow.com/a/40066117/9403538">This answer</a> led me to try Celery in my app and I have been struggling ever since. Specifically, my issues is that the task status shows pending for a while (I'm guessing the defualt timeout amount of time) and then shows failure. When running the worker with debug logging level, the error shows <code>Received unregistered task of type '__main__.stream_data'.</code> I've tried every way I can think of to start the worker and register the task. I am confused because my Celery instance is defined in the same scope as the task definition, and, like the countless tutorials and examples I've found online, I start the worker with <code>celery -A backend.cel worker -l DEBUG</code> to tell it to queue from the celery instance within the <code>backend.py</code> module (at least that's what I think that command is doing).</p>
<h1>My Current Project State:</h1>
<pre><code>.
├── backend.py
├── static
│ └── js
│ ├── main.js
│ ├── socket.io.min.js
│ └── socket.io.min.js.map
└── templates
└── index.html
</code></pre>
<h4>backend.py</h4>
<pre><code>import eventlet
eventlet.monkey_patch()
# ^^^ COMMENT/UNCOMMENT to get the task to RUN/NOT RUN
from random import randrange
import time
from redis import Redis
from flask import Flask, render_template, request
from flask_socketio import SocketIO
from celery import Celery
from celery.contrib import rdb
def message_queue(db):
# thought it might be conflicting redis databases so I allowed choice
# this was not the issue.
return f"redis://localhost:6379/{db}"
app = Flask(__name__)
socketio = SocketIO(app, message_queue=message_queue(0))
cel = Celery("backend", broker=message_queue(0), backend=message_queue(0))
@app.route('/')
def index():
return render_template("index.html")
@socketio.on("start_data_stream")
def start_data_stream():
socketio.emit("new_data", {"value" : 666}) # <<< sanity check, socket server is working here
stream_data.delay(request.sid)
@cel.task()
def stream_data(sid):
data_socketio = SocketIO(message_queue=message_queue(0))
i = 1
while i <= 100:
value = randrange(0, 10000, 1) / 100
data_socketio.emit("new_data", {"value" : value})
i += 1
time.sleep(0.01)
# rdb.set_trace() # <<<< comment/uncomment as needed for debugging, see: https://docs.celeryq.dev/en/latest/userguide/debugging.html
return i, value
if __name__ == "__main__":
r = Redis()
r.flushall()
if r.ping():
pass
else:
raise Exception("You need redis: https://redis.io/docs/getting-started/installation/. Check that redis-server.service is running!")
ip = "192.168.1.8" # insert LAN address here
port = 8080
socketio.run(app, host=ip, port=port, use_reloader=False, debug=True)
</code></pre>
<h4>index.html</h4>
<pre><code><!DOCTYPE html>
<html>
<head>
<title>Minimal Example</title>
<script src="{{ url_for('static', filename='js/socket.io.min.js') }}"></script>
</head>
<body>
<button id="start" onclick="button_handler()">Start Stream</button>
<span id="data"></span>
<script type="text/javascript" src="{{ url_for('static', filename='js/main.js') }}"></script>
</body>
</html>
</code></pre>
<h4>main.js</h4>
<pre><code>var socket = io(location.origin);
var span = document.getElementById("data");
function button_handler() {
socket.emit("start_data_stream");
}
socket.on("new_data", function(data) {
span.innerHTML = data.value;
});
</code></pre>
<h4>dependencies</h4>
<pre><code>Package Version
---------------- -------
amqp 5.1.1
async-timeout 4.0.2
bidict 0.22.0
billiard 3.6.4.0
celery 5.2.7
click 8.1.3
click-didyoumean 0.3.0
click-plugins 1.1.1
click-repl 0.2.0
Deprecated 1.2.13
dnspython 2.2.1
eventlet 0.33.2
Flask 2.2.2
Flask-SocketIO 5.3.2
gevent 22.10.2
gevent-websocket 0.10.1
greenlet 2.0.1
itsdangerous 2.1.2
Jinja2 3.1.2
kombu 5.2.4
MarkupSafe 2.1.1
packaging 22.0
pip 22.3.1
prompt-toolkit 3.0.36
python-engineio 4.3.4
python-socketio 5.7.2
pytz 2022.6
redis 4.4.0
setuptools 58.1.0
six 1.16.0
vine 5.0.0
wcwidth 0.2.5
Werkzeug 2.2.2
wrapt 1.14.1
zope.event 4.6
zope.interface 5.5.2
</code></pre>
<h1>Questions:</h1>
<p>Is this still an issue with eventlet? <a href="https://stackoverflow.com/a/40066117/9403538">This answer</a> leads me to believe that Celery is the workaround to the eventlet issue and suggests Celery doesn't even need eventlet to work. However, eventlet seems entrenched in my source, as nothing works on the redis side if I do not monkey patch. Also, <a href="https://flask-socketio.readthedocs.io/en/latest/intro.html#requirements" rel="nofollow noreferrer">the docs</a> suggest that Flask-SocketIO automatically looks for eventlet, so just by instantiating the external SocketIO server in the celery task would bring in eventlet, correct? Is there something else I am doing wrong? Perhaps there are better ways to debug the worker and task?</p>
<p>Any help would be greatly appreciated, thanks!</p>
|
<python><flask><celery><flask-socketio><eventlet>
|
2022-12-20 23:58:16
| 1
| 1,107
|
jacob
|
74,870,051
| 17,620,776
|
Recieving output text when a TensorFlow model name != x
|
<p>I have made a model which detects when a person has their face to the right, left, or in the middle. I am making a prediction using the following code:</p>
<pre><code>from keras.models import load_model
from PIL import Image, ImageOps
import numpy as np
# Disable scientific notation for clarity
np.set_printoptions(suppress=True)
# Load the model
model = load_model('models//keras_Model.h5', compile=False)
# Load the labels
class_names = open('models//labels.txt', 'r').readlines()
# Create the array of the right shape to feed into the keras model
# The 'length' or number of images you can put into the array is
# determined by the first position in the shape tuple, in this case 1.
data = np.ndarray(shape=(1, 224, 224, 3), dtype=np.float32)
# Replace this with the path to your image
image = Image.open('IMG.png').convert('RGB')
# resize the image to a 224x224 with the same strategy as in TM2:
# resizing the image to be at least 224x224 and then cropping from the center
size = (224, 224)
image = ImageOps.fit(image, size, Image.Resampling.LANCZOS)
# turn the image into a numpy array
image_array = np.asarray(image)
# Normalize the image
normalized_image_array = (image_array.astype(np.float32) / 127.0) - 1
# Load the image into the array
data[0] = normalized_image_array
# run the inference
prediction = model.predict(data)
index = np.argmax(prediction)
class_name = class_names[index]
confidence_score = prediction[0][index]
print('Class:', class_name, end='')
print('Confidence score:', confidence_score)
</code></pre>
<p>That code gives me the proper prediction. Here is the labels.txt file:</p>
<pre><code>0 Left_Side_Face
1 Middle_Face
2 Right_Side_Face
</code></pre>
<p>What I want to do now is say for example the person has turned their head to the right side, I want to print the message 'face the computer' but when I tried doing it I failed.</p>
<p>Here is the code I tried:</p>
<pre><code>if class_name == "2 Right_Side_Face":
print('face the computer')
if class_name == "0 Left_Side_Face":
print('face the computer')
else:
print('Invalid class name')
print('exiting.....')
</code></pre>
<p>The problem is that whenever I run the code above it goes to the else and print 'Invalid class name' and when I remove the else there are no errors in the code, it just skips the if statements.</p>
<p><strong>Question:</strong></p>
<p>How can I do something (print a message, alarm, etc.) when a specific class is detected by a TensorFlow model?</p>
<p>EDIT:
Here is the output of the script when I feed the model with a right side face image:</p>
<pre><code>Class: 2 Right_Side_Face
Confidence score: 0.84736574
Invalid class name
exiting....
</code></pre>
|
<python><tensorflow><keras>
|
2022-12-20 23:52:46
| 1
| 357
|
JoraSN
|
74,869,964
| 10,181,236
|
regex that match amazon link and replace it
|
<p>I'm trying to create a regex that matches amazon links on a string and replace it with another string. The code i wrote for the moment does not work since it just substitute a part of the url. I want to substitute all the url. this is the code</p>
<pre><code>import re
regex = r"https://amzn.to/[a-zA-Z0-9]+" + "|" + r"https://www.amazon.it/[a-zA-Z0-9]+"
string="https://amzn.to/3Ueforw"
string1="https://www.amazon.it/dp/B08F9LM1FB/?tag=seller050-21&psc=1"
string = re.sub(regex, "URL", string)
string1 = re.sub(regex, "URL", string1)
print(string)
print(string1) # here I want to URL too not "URL/other part of the url"
</code></pre>
|
<python><regex>
|
2022-12-20 23:35:22
| 2
| 512
|
JayJona
|
74,869,890
| 4,507,231
|
How to modify integer values in a Pandas DataFrame whilst avoiding SettingWithCopyWarning?
|
<p>I have a heterogenous Pandas DataFrame - columns are a mix of data types. I only want to subtract the values in one of the integer columns for all rows by a fixed constant. That's it, and it's that simple. But I keep running into <code>SettingWithCopyWarning</code>.</p>
<p>Take a DataFrame of two columns. The first is of integer, and the second is of string:</p>
<pre><code>df = pd.DataFrame({"a":[10,20,30] , "b":["x","y","z"]})
</code></pre>
<p>I want to subtract one from each cell in column "a", so the returning Dataframe would be:</p>
<pre><code>a b
9 x
19 y
29 z
</code></pre>
<p>I've tried so many examples, and 9/10 (for want of a better word), don't even inform the Reader that their example will result in a SettingWithCopyWarning warning.</p>
|
<python><pandas>
|
2022-12-20 23:23:50
| 1
| 1,177
|
Anthony Nash
|
74,869,795
| 1,914,781
|
add color to markers with plotly python
|
<p>How can I change marker's color based on the z array? The colors can be anything but need differ if value of z is differ! i understand that plotly express can do it, but i need to use plotly graph objects</p>
<p>I try to add a color=z entry but it report error!</p>
<pre><code>import numpy as np
import plotly.graph_objects as go
def plot(x,y,z):
fig = go.Figure()
trace1 = go.Scatter(
x=x, y=y,
z = z, #<- error line
mode='markers',
name='markers')
fig.add_traces([trace1])
tickvals = [0,np.pi/2,np.pi,np.pi*3/2,2*np.pi]
ticktext = ["0","$\\frac{\pi}{2}$","$\pi$","$\\frac{3\pi}{4}$","$2\pi$"]
layout = dict(
title="demo",
xaxis_title="X",
yaxis_title="Y",
title_x=0.5,
margin=dict(l=10,t=20,r=10,b=40),
height=300,
xaxis=dict(
side='bottom',
linecolor='black',
tickangle=0,
tickvals = tickvals,
ticktext=ticktext,
#tickmode='auto',
ticks='outside',
ticklen=7,
tickwidth=1,
tickcolor='black',
tickson="labels",
title=dict(standoff=5),
showticklabels=True,
),
yaxis=dict(
showgrid=True,
zeroline=True,
zerolinewidth=.5,
zerolinecolor='black',
showline=True,
linecolor='black',
showticklabels=True,
)
)
fig.update_traces(
marker_size=14,
)
fig.update_layout(layout)
save_fig(fig,"/vagrant/work/demo.png")
fig.show()
return
n = 20
x = np.linspace(0.0, 2*np.pi, n)
y = np.sin(x)
z = np.random.randint(0,100,size=n)
plot(x,y,z)
</code></pre>
|
<python><plotly>
|
2022-12-20 23:08:26
| 1
| 9,011
|
lucky1928
|
74,869,781
| 2,386,605
|
Overwrite environment variables with monkeypatch in pytest
|
<p>Currently, I have a file which contains</p>
<pre><code>DATABASE_URL = os.environ.get("DATABASE_URL")
engine = create_async_engine(DATABASE_URL, echo=False, future=True)
</code></pre>
<p>Now, I would like to overwrite DATABASE_URL with monkeypatch in pytest, but sadly whatever I tried it stocked to the "old" values.</p>
<p>Do you have an idea how I can do it?</p>
|
<python><environment-variables><pytest><monkeypatching>
|
2022-12-20 23:06:01
| 1
| 879
|
tobias
|
74,869,773
| 11,628,437
|
How do I get all 1000 results using the GitHub Search API?
|
<p>I understand that the GitHub Search API limits to 1000 results and 100 results per page. Therefore I wrote the following to view all 1000 results for a code search process that looks for a string <code>torch</code>-</p>
<pre><code>import requests
for i in range(1,11):
url = "https://api.github.com/search/code?q=torch +in:file + language:python&per_page=100&page="+str(i)
headers = {
'Authorization': 'xxxxxxxx'
}
response = requests.request("GET", url, headers=headers).json()
try:
print(len(response['items']))
except:
print("response = ", response)
</code></pre>
<p>Here is the output -</p>
<pre><code>15
62
response = {'documentation_url': 'https://docs.github.com/en/free-pro-team@latest/rest/overview/resources-in-the-rest-api#secondary-rate-limits', 'message': 'You have exceeded a secondary rate limit. Please wait a few minutes before you try again.'}
</code></pre>
<ol>
<li>It seems to hit the secondary rate limit just after the second iteration</li>
<li>The values in the pages aren't consistent. For instance, page 1 shows 15 results when I ran this time. However, if I run it again, it will be another number. I believe there should be 100 results per page.</li>
</ol>
<p>Does there exist an efficient way to get all 1000 results from the Search API?</p>
|
<python><github><github-api><github-search>
|
2022-12-20 23:04:03
| 1
| 1,851
|
desert_ranger
|
74,869,745
| 6,724,526
|
How to return a CSV from Pandas Dataframe using StreamingResponse in FastAPI?
|
<p>I'm trying to return a CSV response to a <code>GET</code> query (end library requirement..), and I am hoping to find a way to do this without writing the CSV to disk. A similar question has been answered <a href="https://stackoverflow.com/questions/61140398/fastapi-return-a-file-response-with-the-output-of-a-sql-query">here</a>, but I have one additional bit of complexity where I'm returning data from another function.</p>
<pre class="lang-py prettyprint-override"><code>import json
import time
from typing import Optional
import io
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
from fastapi.responses import FileResponse
from fastapi.responses import StreamingResponse
import xarray as xr
import pandas as pd
import numpy as np
import gc
async def get_intensity_csv(x,y):
#Use select on the xarray datasource, returning a data array and then convert to a pandas dataframe
#Index
da = ds["intensity"].sel(longitude=y, latitude=x, method="nearest")
df = da.to_dataframe()
df.drop(columns=['latitude', 'longitude'], inplace=True)
#pandas to csv ready for streaming to the client
stream = io.StringIO()
resp_csv = df.to_csv(stream, index = False)
return resp_csv
</code></pre>
<p>The above is being called from:</p>
<pre class="lang-py prettyprint-override"><code>#create an app.get that returns intensity as a csv stream
@app.get("/intensity_csv/{x}/{y}", tags=["intensity_csv"])
async def get_intensity_csv(x: float, y: float):
res = await int_process(x,y)
return StreamingResponse(io.StringIO(res), media_type="text/csv")
</code></pre>
<p>The error I'm receiving is:</p>
<pre><code>return StreamingResponse(io.StringIO(res), media_type="text/csv")
TypeError: initial_value must be str or None, not dict
</code></pre>
<p>I believe my issue is getting data from the first sample via these couple of lines back to the <code>def async get_intensity_csv</code></p>
<pre><code>#pandas to csv ready for streaming to the client
stream = io.StringIO()
resp_csv = df.to_csv(stream, index = False)
</code></pre>
<p>I'm not sure how to pass the content of the dataframe back so I can then use <code>StreamingResponse</code> to deliver the csv to the client.</p>
|
<python><pandas><dataframe><streaming><fastapi>
|
2022-12-20 23:00:07
| 0
| 1,258
|
anakaine
|
74,869,685
| 10,146,441
|
How to run external command in neovim on BufWritePre set in Lua
|
<p>I would like to format my code using <code>/bin/black</code> everytime I save a python file in <code>neovim</code>. So I have the following in my <code>~/.config/nvim/init.lua</code></p>
<pre><code>a.nvim_create_autocmd( { "BufWritePre" }, {
pattern = { "*.py" },
command = [[ /bin/black ]],
})
</code></pre>
<p>but I get the error like:</p>
<pre><code>E492: Not an editor command: /bin/black
</code></pre>
<p>Could someone let me know how can I run <code>black</code> (an external command) in <code>BufWritePre</code> set.</p>
<p>Cheers,
DD.</p>
|
<python><lua><neovim><python-black>
|
2022-12-20 22:52:11
| 1
| 684
|
DDStackoverflow
|
74,869,684
| 3,482,266
|
Concatenating a list of Multi-indices pandas dataframe
|
<p>I have a list whose elements are MultiIndex objects like:</p>
<pre><code>MultiIndex([(48, 39),
(48, 40),
(48, 41),
(48, 42),
(48, 43),
(49, 39),
(49, 40),
(49, 41),
(49, 42),
(49, 43)],
)
MultiIndex([(48, 48),
(48, 49),
(49, 48),
(49, 49)],
)
</code></pre>
<p>I want to concatenate both, vertically, such that I have:</p>
<pre><code>MultiIndex([(48, 39),
(48, 40),
(48, 41),
(48, 42),
(48, 43),
(49, 39),
(49, 40),
(49, 41),
(49, 42),
(49, 43),
(48, 48),
(48, 49),
(49, 48),
(49, 49)],
)
</code></pre>
<p>If possible, I would also like it to:</p>
<ol>
<li>contain only unique pairs(but a,b is different from b,a )</li>
<li>and be ordered (the above is not ordered since two pairs starting with 48 show up after pairs with 49).</li>
</ol>
|
<python><pandas><multi-index>
|
2022-12-20 22:52:09
| 3
| 1,608
|
An old man in the sea.
|
74,869,677
| 3,361,462
|
Create class from inherited class instance
|
<p>I have simple class:</p>
<pre><code>class Dog:
def __init__(self, size):
self.size = size
# there can be many fields
def bar(self):
print(f"WOOF {self.size}")
</code></pre>
<p>And two methods I cannot modify:</p>
<pre><code>def get_dog():
return Dog("big")
def foo(dog: Dog):
dog.bar()
</code></pre>
<p>Now my goal is to expand functionality of <code>Dog.bar</code> so the <code>foo</code> function will call modified version.</p>
<pre><code>class BigDog(Dog):
def bar(self):
print("WOFWOF")
</code></pre>
<p>So I search for a way of creating <code>BigDog</code> from a <code>Dog</code> instance taken from the <code>get_dog</code>. The problem is I don't know what attributes are in <code>Dog</code> class and what is the implementation on <code>get_dog</code>.</p>
<p>The simplest way I considered was:</p>
<pre><code>class BigDog:
def __init__(self, dog: Dog):
self.dog = dog
def bar(self):
...
</code></pre>
<p>But now a <code>Dog</code> is not a dog and mypy complains.</p>
<p>How to properly implement <code>BigDog</code> to be subclass of <code>Dog</code> and allow it to be initialized from the <code>Dog</code> instance?</p>
|
<python><inheritance>
|
2022-12-20 22:51:30
| 1
| 7,278
|
kosciej16
|
74,869,658
| 10,443,817
|
What is [AssertionCredentials] parameter in BigQuery library and how to get it?
|
<p><a href="https://github.com/tylertreat/BigQuery-Python" rel="nofollow noreferrer">BigQuery-Python</a> is a small python package that provides functions to interact with GCP's BigQuery. To use it one needs to first instantiate a client object. The documentation for the function <a href="https://github.com/tylertreat/BigQuery-Python/blob/master/bigquery/client.py#L61" rel="nofollow noreferrer">get_client() states</a> that</p>
<pre><code>def get_client(project_id=None, credentials=None,
service_url=None, service_account=None,
private_key=None, private_key_file=None,
json_key=None, json_key_file=None,
readonly=True, swallow_results=True,
num_retries=0):
"""Return a singleton instance of BigQueryClient. Either
AssertionCredentials or a service account and private key combination need
to be provided in order to authenticate requests to BigQuery.
Parameters
----------
project_id : str, optional
The BigQuery project id, required unless json_key or json_key_file is
provided.
credentials : oauth2client.client.SignedJwtAssertionCredentials, optional
AssertionCredentials instance to authenticate requests to BigQuery
(optional, must provide `service_account` and (`private_key` or
`private_key_file`) or (`json_key` or `json_key_file`) if not included
</code></pre>
<p>Can someone explain to me what is the credentials parameter is and how to use it properly?</p>
|
<python><google-cloud-platform><google-bigquery><google-oauth>
|
2022-12-20 22:49:31
| 1
| 4,125
|
exan
|
74,869,243
| 17,696,880
|
Set up a search group with regular expressions that match one or two numeric values, but do not match any more immediately following numeric values
|
<pre class="lang-py prettyprint-override"><code>import re
input_text = "del 2065 de 42 52 de 200 de 2222 25 de 25 del 26. o del 8" #example input
num_pattern = r"(\d{1,2})"
identification_regex = r"(?:del|de[\s|]*el|de|)[\s|]*" + num_pattern
input_text = re.sub(identification_regex, "AA", input_text)
print(repr(input_text)) # --> output
</code></pre>
<p>This is the <strong>wrong output</strong> that I am getting if I use this search regex pattern for the numeric values identification in the input string</p>
<pre><code>'AAAA AAAA AAAA AAAAAA AA AA. o AA'
</code></pre>
<p>Since this is a simplification of a program, it has simplified the logic to just replace by <code>"AA"</code>, and this is the <strong>output</strong> we should get. In this output you can be noted that if there are more than 2 immediately consecutive numerical values, the replacement is not performed.</p>
<pre><code>"del 2065 AA AA de 200 de 2222 AA AA AA. o AA"
</code></pre>
<p>The problem with this search pattern <code>r"(\d{1,2})"</code> is that while it succeeds in capturing 2 numeric digits correctly, the problem is that it captures even when there are numbers of 4 or more digits immediately following each other.</p>
<p>What kind of additional constraint do I need to put on the character search pattern for the match and replace algorithm to work correctly?</p>
|
<python><python-3.x><regex><replace><regex-group>
|
2022-12-20 21:49:59
| 0
| 875
|
Matt095
|
74,869,164
| 9,537,439
|
Python - Enclosing time data from another table
|
<p>I have these two dataframes:</p>
<pre><code>import pandas as pd
from pandas import Timestamp
df_1 = pd.DataFrame({'id': {0: 'A',
1: 'A',
2: 'B',
3: 'C',
4: 'C'},
'IdOrder': {0: 1, 1: 2, 2: 1, 3: 1, 4: 2},
'TrackDateTime': {0: Timestamp('2020-01-21 23:28:35'),
1: Timestamp('2020-01-28 17:12:15'),
2: Timestamp('2020-01-07 12:41:48'),
3: Timestamp('2020-01-01 22:13:44'),
4: Timestamp('2020-01-01 22:49:53')}})
df_1
</code></pre>
<p><a href="https://i.sstatic.net/m5TiC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/m5TiC.png" alt="enter image description here" /></a></p>
<pre><code>df_2 = pd.DataFrame({'id': {0: 'A',
1: 'B',
2: 'C',
3: 'D',
4: 'E'},
'InitialDate': {0: Timestamp('2020-01-21 23:28:35'),
1: Timestamp('2020-01-07 12:41:48'),
2: Timestamp('2020-01-01 22:13:44'),
3: Timestamp('2020-01-02 15:45:10'),
4: Timestamp('2020-01-02 22:21:36')},
'EndDate': {0: Timestamp('2020-01-28 00:00:00'),
1: Timestamp('2020-01-08 00:00:00'),
2: Timestamp('2020-01-03 00:00:00'),
3: Timestamp('2020-01-06 00:00:00'),
4: Timestamp('2020-04-10 00:00:00')}})
df_2
</code></pre>
<p><a href="https://i.sstatic.net/2aLba.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2aLba.png" alt="enter image description here" /></a></p>
<p>And I'm looking to enclose each <code>id</code> in <code>df_1</code> with the <code>InitialDate</code> and <code>EndDate</code> from <code>df_2</code>, which would give this expected output:</p>
<p><a href="https://i.sstatic.net/x9bqQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/x9bqQ.png" alt="enter image description here" /></a></p>
<p>Also please consider that:</p>
<ul>
<li><code>IdOrder=0</code> for <code>InitialDate</code> and <code>IdOrder=-1</code> for <code>EndDate</code></li>
<li>I would like to keep the time granularity up to seconds (the output image doesn't show it because of the excel format)</li>
</ul>
<p>I haven't been able to figure out a solution for this. Any suggestions? :)</p>
|
<python><pandas><numpy>
|
2022-12-20 21:37:21
| 1
| 2,081
|
Chris
|
74,869,070
| 2,653,179
|
Python Jinja2: How to add an empty row in a table, which is created from a nested loop
|
<p>I create a table from 2 for loops, and I'd like to add an empty row when each inner loop is finished.</p>
<p>Do I need to keep track of both outer loop and inner loop for it, like this? Or is there a better way to do it?</p>
<pre><code><table border="1" style="border-collapse:collapse;">
{% for dict in table_data %}
{% set outer_loop = loop %}
{% for test_name, test_dict in dict["tests_data"].items() %}
....
{% if not outer_loop.last or not loop.last %} <!-- Add an empty row -->
<tr bgcolor="FFFFFF">
<td colspan="{{titles|length}}"><br/></td>
</tr>
{% endif %}
{% endfor %}
{% endfor %}
</table>
</code></pre>
|
<python><jinja2>
|
2022-12-20 21:24:54
| 0
| 423
|
user2653179
|
74,868,981
| 940,300
|
Cannot connect to MongoDB replicaSet cluster
|
<p>I set up a MongoDB cluster with Docker which uses replicaSet <code>rs0</code>.</p>
<p>Checking the status of the cluster everything seems fine:</p>
<pre><code>{
set: 'rs0',
date: ISODate("2022-12-20T21:00:46.972Z"),
myState: 1,
term: Long("1"),
syncSourceHost: '',
syncSourceId: -1,
heartbeatIntervalMillis: Long("2000"),
majorityVoteCount: 2,
writeMajorityCount: 2,
votingMembersCount: 3,
writableVotingMembersCount: 3,
optimes: {
lastCommittedOpTime: { ts: Timestamp({ t: 1671570040, i: 1 }), t: Long("1") },
lastCommittedWallTime: ISODate("2022-12-20T21:00:40.112Z"),
readConcernMajorityOpTime: { ts: Timestamp({ t: 1671570040, i: 1 }), t: Long("1") },
appliedOpTime: { ts: Timestamp({ t: 1671570040, i: 1 }), t: Long("1") },
durableOpTime: { ts: Timestamp({ t: 1671570040, i: 1 }), t: Long("1") },
lastAppliedWallTime: ISODate("2022-12-20T21:00:40.112Z"),
lastDurableWallTime: ISODate("2022-12-20T21:00:40.112Z")
},
lastStableRecoveryTimestamp: Timestamp({ t: 1671570000, i: 1 }),
electionCandidateMetrics: {
lastElectionReason: 'electionTimeout',
lastElectionDate: ISODate("2022-12-20T20:45:09.901Z"),
electionTerm: Long("1"),
lastCommittedOpTimeAtElection: { ts: Timestamp({ t: 1671569099, i: 1 }), t: Long("-1") },
lastSeenOpTimeAtElection: { ts: Timestamp({ t: 1671569099, i: 1 }), t: Long("-1") },
numVotesNeeded: 2,
priorityAtElection: 1,
electionTimeoutMillis: Long("10000"),
numCatchUpOps: Long("0"),
newTermStartDate: ISODate("2022-12-20T20:45:09.999Z"),
wMajorityWriteAvailabilityDate: ISODate("2022-12-20T20:45:10.856Z")
},
members: [
{
_id: 0,
name: 'mongo1:27017',
health: 1,
state: 1,
stateStr: 'PRIMARY',
uptime: 973,
optime: { ts: Timestamp({ t: 1671570040, i: 1 }), t: Long("1") },
optimeDate: ISODate("2022-12-20T21:00:40.000Z"),
lastAppliedWallTime: ISODate("2022-12-20T21:00:40.112Z"),
lastDurableWallTime: ISODate("2022-12-20T21:00:40.112Z"),
syncSourceHost: '',
syncSourceId: -1,
infoMessage: '',
electionTime: Timestamp({ t: 1671569109, i: 1 }),
electionDate: ISODate("2022-12-20T20:45:09.000Z"),
configVersion: 1,
configTerm: 1,
self: true,
lastHeartbeatMessage: ''
},
{
_id: 1,
name: 'mongo2:27017',
health: 1,
state: 2,
stateStr: 'SECONDARY',
uptime: 947,
optime: { ts: Timestamp({ t: 1671570040, i: 1 }), t: Long("1") },
optimeDurable: { ts: Timestamp({ t: 1671570040, i: 1 }), t: Long("1") },
optimeDate: ISODate("2022-12-20T21:00:40.000Z"),
optimeDurableDate: ISODate("2022-12-20T21:00:40.000Z"),
lastAppliedWallTime: ISODate("2022-12-20T21:00:40.112Z"),
lastDurableWallTime: ISODate("2022-12-20T21:00:40.112Z"),
lastHeartbeat: ISODate("2022-12-20T21:00:45.203Z"),
lastHeartbeatRecv: ISODate("2022-12-20T21:00:46.101Z"),
pingMs: Long("0"),
lastHeartbeatMessage: '',
syncSourceHost: 'mongo1:27017',
syncSourceId: 0,
infoMessage: '',
configVersion: 1,
configTerm: 1
},
{
_id: 2,
name: 'mongo3:27017',
health: 1,
state: 2,
stateStr: 'SECONDARY',
uptime: 947,
optime: { ts: Timestamp({ t: 1671570040, i: 1 }), t: Long("1") },
optimeDurable: { ts: Timestamp({ t: 1671570040, i: 1 }), t: Long("1") },
optimeDate: ISODate("2022-12-20T21:00:40.000Z"),
optimeDurableDate: ISODate("2022-12-20T21:00:40.000Z"),
lastAppliedWallTime: ISODate("2022-12-20T21:00:40.112Z"),
lastDurableWallTime: ISODate("2022-12-20T21:00:40.112Z"),
lastHeartbeat: ISODate("2022-12-20T21:00:45.203Z"),
lastHeartbeatRecv: ISODate("2022-12-20T21:00:46.100Z"),
pingMs: Long("0"),
lastHeartbeatMessage: '',
syncSourceHost: 'mongo1:27017',
syncSourceId: 0,
infoMessage: '',
configVersion: 1,
configTerm: 1
}
],
ok: 1,
'$clusterTime': {
clusterTime: Timestamp({ t: 1671570040, i: 1 }),
signature: {
hash: Binary(Buffer.from("0000000000000000000000000000000000000000", "hex"), 0),
keyId: Long("0")
}
},
operationTime: Timestamp({ t: 1671570040, i: 1 })
}
</code></pre>
<p>I am using <code>pymongo</code> to connect to the cluster.
If I use just the node name of the primary node and connect directly to that, everything works just fine:</p>
<pre class="lang-py prettyprint-override"><code>client = MongoClient("mongodb://localhost:27017")
collection = client["db"]["test"]
collection.insert_one({"abc": "def"})
print(list(collection.find()))
</code></pre>
<p>However, from what I can tell, this is NOT how you connect to a MongoDB cluster. Instead of directly choosing the primary Node, one instead should provide all Nodes in the cluster plus the replicaSet:</p>
<pre class="lang-py prettyprint-override"><code>client = MongoClient("mongodb://localhost:27017,localhost:27018,localhost:27019?replicaSet=rs0")
collection = client["db"]["test"]
collection.insert_one({"abc": "def"}) # Hangs and eventually times out
</code></pre>
<p>The error raised:</p>
<pre><code>pymongo.errors.ServerSelectionTimeoutError:
mongo1:27017: [Errno 8] nodename nor servname provided, or not known,
mongo3:27017: [Errno 8] nodename nor servname provided, or not known,
mongo2:27017: [Errno 8] nodename nor servname provided, or not known,
Timeout: 30s,
Topology Description:
<TopologyDescription
id: 63a2230d721313f82429a19e,
topology_type: ReplicaSetNoPrimary,
servers: [
<ServerDescription ('mongo1', 27017)
server_type: Unknown,
rtt: None,
error=AutoReconnect('mongo1:27017: [Errno 8] nodename nor servname provided, or not known')>,
<ServerDescription ('mongo2', 27017)
server_type: Unknown,
rtt: None,
error=AutoReconnect('mongo2:27017: [Errno 8] nodename nor servname provided, or not known')>,
<ServerDescription ('mongo3', 27017)
server_type: Unknown,
rtt: None,
error=AutoReconnect('mongo3:27017: [Errno 8] nodename nor servname provided, or not known')>]>
</code></pre>
<p>I believe this might be because mongo1/2/3 are not actual hostnames but rather docker containers which are only valid hostnames within the Docker network, meaning, to access those hostnames I'd have to run my Python script within the same network.</p>
<p>I also tested if secondary nodes are reachable if I use <code>localhost:27018</code> and indeed, they are. <code>pymongo</code> tells me that I cannot insert directly into a secondary node, which makes sense.</p>
<p>My <code>docker-compose.yaml</code>:</p>
<pre><code>version: '3'
services:
mongo1:
image: mongo:latest
container_name: mongo1
command: mongod --replSet rs0 --bind_ip localhost,mongo1
ports:
- "27017:27017"
networks:
- mongoCluster
volumes:
- /mnt/workspace/mongodb-cluster/node1:/data/db
mongo2:
image: mongo:latest
container_name: mongo2
command: mongod --replSet rs0 --bind_ip localhost,mongo2
ports:
- "27018:27017"
networks:
- mongoCluster
volumes:
- /mnt/workspace/mongodb-cluster/node2:/data/db
mongo3:
image: mongo:latest
container_name: mongo3
command: mongod --replSet rs0 --bind_ip localhost,mongo3
ports:
- "27019:27017"
networks:
- mongoCluster
volumes:
- /mnt/workspace/mongodb-cluster/node3:/data/db
networks:
mongoCluster:
driver: bridge
</code></pre>
<p>I also tried to not use a dedicated network and instead give the docker containers access to the hostnetwork, thus being able to define the replicaSet with <code>localhost:<PORT></code>. While I was able to bring up the cluster, I still get the same error when trying to use it from Python.</p>
<p>All that being said, my question is now how I need to change my Docker or <code>pymongo</code> configuration such that I can specify all nodes in the connection string.</p>
|
<python><mongodb><docker>
|
2022-12-20 21:14:37
| 1
| 11,490
|
Basic Coder
|
74,868,876
| 5,212,614
|
How can we plot City borders, or County borders, on a Folium Map?
|
<p>I have some code that generates a nice Heat Map for me. Here is my code.</p>
<pre><code>import folium
from folium.plugins import HeatMap
max_amount = float(df_top20['Total_Minutes'].max())
hmap = folium.Map(location=[35.5, -82.5], zoom_start=7, )
hm_wide = HeatMap(list(zip(df_top20.Latitude.values, df_top20.Longitude.values, df_top20.Total_Minutes.values)),
min_opacity=0.2,
max_val=max_amount,
radius=25,
blur=20,
max_zoom=1,
)
hmap.add_child(hm_wide)
</code></pre>
<p><a href="https://i.sstatic.net/V4PhM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/V4PhM.png" alt="enter image description here" /></a></p>
<p>How can I overlay specific North Carolina cities, or counties, on this map? I have cities/counties in a dataframe.</p>
|
<python><python-3.x><folium>
|
2022-12-20 21:02:18
| 1
| 20,492
|
ASH
|
74,868,814
| 7,829,241
|
Conda environment.yml - automatically add environment and extensions to Jupyter
|
<p>Whenever I install a sample <code>environment.yml</code>, say:</p>
<pre class="lang-yaml prettyprint-override"><code>name: my_env
dependencies:
- python
- numpy
- pip:
- scipy
- ipykernel
- ipywidgets
</code></pre>
<p>that I install by <code>conda env create -f environment.yml</code> I then always have to add it to the Jupyter "kernelspec":</p>
<pre><code>python -m ipykernel install --user --name my_env --display-name "my_env"
</code></pre>
<p>and enable extensions:</p>
<pre><code>jupyter nbextension enable --py widgetsnbextension
</code></pre>
<p>It would be simpler to have these two steps part of the <code>environment.yml</code> to start off with. Is this possible?</p>
|
<python><pip><jupyter-notebook><anaconda><conda>
|
2022-12-20 20:57:26
| 0
| 474
|
Zoom
|
74,868,767
| 1,098,759
|
Is there a way to disable positional arguments given that a flag is true with Python argparse?
|
<p>I am building a command line tool which should work as follows:</p>
<pre><code>mytool [-h] [-c|--config FILE] [-l|--list] ACTION
positional arguments:
ACTION the action to be performed
optional arguments:
-h, --help show this help message and exit
-v, --version show program's version number and exit
-l, --list show list of configured actions and exit
-c, --config CONFIG use CONFIG instead of the default configuration file
</code></pre>
<p>I am using <code>argparse</code> to parse the command line and I am facing what seems to be a limitation of the library. Let me explain through some use-cases.</p>
<p><strong>Use-case 1</strong></p>
<pre><code>$ mytool -h
$ mytool -c path/to/file -h
$ mytool -l -h
$ mytool some-action -h
</code></pre>
<p>All the above invocations of <code>mytool</code> shall print the help message and exit, exactly as it is shown above, more importantly showing <code>ACTIONS</code> to be mandatory.</p>
<p><strong>Use-case 2</strong></p>
<pre><code>$ mytool -l
$ mytool -c path/to/file --list
$ mytool --list some-action
$ mytool --list --config path/to/file
</code></pre>
<p>All the above invocations must list the configured actions given the content of the configuration files, and exit. The allowed values of <code>ACTION</code> depend on the content of the configuration file, they are not simply hard-coded in the program.</p>
<p>Notice that even if an action is given, it is ignored because <code>-l|--list</code> has a higher precendance, similar to how <code>-h</code> works against other flags and arguments.</p>
<p>Also, please note that solutions such as <a href="https://stackoverflow.com/a/69789554/1098759">this</a>, which implement custom <code>argparse.Action</code> sub-classes won't work because the action of the listing flag needs the value of the configuration flag, so the parsing must complete (at least partially) for the listing to begin.</p>
<p>Finally, the absence of the otherwise required positional argument <code>ACTION</code> does not cause the parser to abort with an error.</p>
<p><strong>Use-case 3</strong>
In the absence of <code>-l|--list</code> the parser works as expected:</p>
<pre><code>$ mytool -c path/to/file # exits with error, positional argument missing
$ mytool some-action # ok
</code></pre>
<p>In simple words, I am trying to make the <code>-l|--list</code> flag "disable" the mandatory enforcing of the positional argument <code>ACTION</code>. Moreover I am looking for a solution that allows me to perform the listing (the action of the <code>-l|--list</code> flag) after the parsing has (at least partially) completed, so the value of the <code>-c|--config</code> flag is available.</p>
|
<python><argparse>
|
2022-12-20 20:52:08
| 2
| 423
|
Alexandru N. Onea
|
74,868,696
| 10,576,322
|
pytest coverage with conditional imports and optional requirements
|
<p>I got a hint to use optional requirements and conditional import to provide a function that can use pandas or not, depending whether it's available.
See here for reference:<br />
<a href="https://stackoverflow.com/a/74862141/10576322">https://stackoverflow.com/a/74862141/10576322</a></p>
<p>This solution works, but if I test this code I get always a bad coverage since I either have pandas imported or not. So even if I configure hatch to create environments for both tests, it looks like the tests don't cover this if/else function definition sufficiently.</p>
<p>Is there a proper way around to eg. combine the two results? Or can I tell coverage that the result is expected for that block of code?</p>
<h1>Code</h1>
<p>The module is looking like that:</p>
<pre class="lang-py prettyprint-override"><code>try:
import pandas as pd
PANDAS_INSTALLED = True
except ImportError:
PANDAS_INSTALLED = False
if PANDAS_INSTALLED:
def your_function(...):
# magic with pandas
return output
else:
def your_function(...):
# magic without pandas
return output
</code></pre>
<p>The idea is that the two version of the two functions work exactly the same beside the inner procedures. So everybody no matter where can use my_module.my_function and don't need to start writing code depending on what environment they are on.</p>
<p>The same is true for testing. I can write tests for my_module.my_function and if the venv has pandas I am testing one part of it and if not the test is testing the other part.</p>
<pre class="lang-py prettyprint-override"><code>from mypackage import my_module
def test_my_function:
res = 'foo'
assert my_module.my_function() == res
</code></pre>
<p>That is working fine, but coverage evaluation is complicated.</p>
<h1>Paths to solution</h1>
<p>Till now I am ware of two solutions.</p>
<h2>1. mocking the behavior</h2>
<p>@TYZ suggested to have always pandad as dependency for testing and mock the global variable.<br />
I tried that, but it didn't work as I expected it. The reason is that I can of course mock the PANDAS_INSTALLED variable, but the function defifintion already took place during import and is not affected anymore by the variable.<br />
I tried to check if I can mock the import in another test module, but didn't succeed.</p>
<h2>2. defining venvs w and w/o pandas and combine results</h2>
<p>I found that coverage and pytest-cov have the abillity to append test results between environments or combine different results.<br />
In a first test I changed the pytest-cov script in hatch to include <code>--cov-append</code>. That worked, but it's totally global. That means if I get complete coverage in Python 3.8, but for whatever reason the switch doesn't work in Python 3.9 I wouldn't see it.</p>
<p>What I like to do is to combine the different results by some logic inherited from hatchs test.matrix. Like <code>coverage combine py38.core py38.pandas</code> and the same for 3.9. So I would see if I have same coverage for all tested versions.</p>
<p>I guess that there are possibly solutions to do that with tox, but maybe I don't need to include another tool.</p>
|
<python><pytest><coverage.py><test-coverage><hatch>
|
2022-12-20 20:44:14
| 1
| 426
|
FordPrefect
|
74,868,664
| 14,425,501
|
ResourceExhaustedError only when fine-tuning a EfficientNetV2L in TensorFlow
|
<p>I am doing a multiclass image classification and this code is working fine, when I put <code>base_model.trainable = False</code>:</p>
<pre><code>file_paths = train['image'].values # train is a pd.DataFrame
labels = train['label'].values
valfile_paths = val['image'].values
vallabels = val['label'].values
ds_train = tf.data.Dataset.from_tensor_slices((file_paths, labels))
ds_val = tf.data.Dataset.from_tensor_slices((valfile_paths, vallabels))
def read_image(image_file, label):
image = tf.io.read_file(image_file)
image = tf.image.decode_jpeg(image, channels=3)
image = tf.image.resize(image, (300, 500))
return image, label
def augment(image, label):
image = tf.image.random_flip_left_right(image)
image = tf.image.random_brightness(image, max_delta=0.5)
image = tf.image.random_contrast(image, lower=0.5, upper=1.5)
return image, label
ds_train = ds_train.map(read_image).map(augment).batch(28)
ds_val = ds_val.map(read_image).batch(28)
base_model = EfficientNetV2L(input_shape = (300, 500, 3),
include_top = False,
weights = 'imagenet',
include_preprocessing = True)
base_model.trainable = False
x = base_model.output
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dropout(0.2)(x)
x = layers.Dense(128, activation = 'relu')(x)
x = layers.Dropout(0.2)(x)
x = layers.Dense(6, activation = 'softmax')(x)
model = tf.keras.Model(inputs = base_model.input, outputs = x)
model.compile(optimizer = optimizers.Adam(learning_rate = 0.001),
loss = losses.SparseCategoricalCrossentropy(),
metrics = ['accuracy'])
callback = [callbacks.EarlyStopping(monitor = 'val_loss', patience = 2)]
history = model.fit(ds_train, batch_size = 28, validation_data = ds_val, epochs = 10, verbose = 1, callbacks = callback)
</code></pre>
<p>After training the model for 8 epoch (early stopping), I want to fine tune it by setting training = True, But when I turn <code>base_model.trainable = True</code>,then it gives me <code>ResourceExhaustedError</code>:</p>
<pre><code>base_model.trainable = True
model.compile(optimizer = optimizers.Adam(learning_rate = 0.0001),
loss = losses.SparseCategoricalCrossentropy(),
metrics = ['accuracy'])
callback = [callbacks.EarlyStopping(monitor = 'val_loss', patience = 2)]
history = model.fit(ds_train, batch_size = 16, validation_data = ds_val, epochs = 10, verbose = 1, callbacks = callback)
</code></pre>
<p>error:</p>
<pre><code>Epoch 1/10
---------------------------------------------------------------------------
ResourceExhaustedError Traceback (most recent call last)
<ipython-input-56-6bda2975dd16> in <module>
1 callback = [callbacks.EarlyStopping(monitor = 'val_loss', patience = 2)]
2
----> 3 history = model.fit(ds_train, batch_size = 16, validation_data = ds_val, epochs = 10, verbose = 1, callbacks = callback)
1 frames
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
52 try:
53 ctx.ensure_initialized()
---> 54 tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
55 inputs, attrs, num_outputs)
56 except core._NotOkStatusException as e:
ResourceExhaustedError: Graph execution error:
Detected at node 'model/block3b_project_conv/Conv2D' defined at (most recent call last):
File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/usr/local/lib/python3.8/dist-packages/ipykernel_launcher.py", line 16, in <module>
app.launch_new_instance()
File "/usr/local/lib/python3.8/dist-packages/traitlets/config/application.py", line 992, in launch_instance
app.start()
File "/usr/local/lib/python3.8/dist-packages/ipykernel/kernelapp.py", line 612, in start
self.io_loop.start()
File "/usr/local/lib/python3.8/dist-packages/tornado/platform/asyncio.py", line 149, in start
self.asyncio_loop.run_forever()
File "/usr/lib/python3.8/asyncio/base_events.py", line 570, in run_forever
self._run_once()
File "/usr/lib/python3.8/asyncio/base_events.py", line 1859, in _run_once
handle._run()
File "/usr/lib/python3.8/asyncio/events.py", line 81, in _run
self._context.run(self._callback, *self._args)
File "/usr/local/lib/python3.8/dist-packages/tornado/ioloop.py", line 690, in <lambda>
lambda f: self._run_callback(functools.partial(callback, future))
File "/usr/local/lib/python3.8/dist-packages/tornado/ioloop.py", line 743, in _run_callback
ret = callback()
File "/usr/local/lib/python3.8/dist-packages/tornado/gen.py", line 787, in inner
self.run()
File "/usr/local/lib/python3.8/dist-packages/tornado/gen.py", line 748, in run
yielded = self.gen.send(value)
File "/usr/local/lib/python3.8/dist-packages/ipykernel/kernelbase.py", line 365, in process_one
yield gen.maybe_future(dispatch(*args))
File "/usr/local/lib/python3.8/dist-packages/tornado/gen.py", line 209, in wrapper
yielded = next(result)
File "/usr/local/lib/python3.8/dist-packages/ipykernel/kernelbase.py", line 268, in dispatch_shell
yield gen.maybe_future(handler(stream, idents, msg))
File "/usr/local/lib/python3.8/dist-packages/tornado/gen.py", line 209, in wrapper
yielded = next(result)
File "/usr/local/lib/python3.8/dist-packages/ipykernel/kernelbase.py", line 543, in execute_request
self.do_execute(
File "/usr/local/lib/python3.8/dist-packages/tornado/gen.py", line 209, in wrapper
yielded = next(result)
File "/usr/local/lib/python3.8/dist-packages/ipykernel/ipkernel.py", line 306, in do_execute
res = shell.run_cell(code, store_history=store_history, silent=silent)
File "/usr/local/lib/python3.8/dist-packages/ipykernel/zmqshell.py", line 536, in run_cell
return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/IPython/core/interactiveshell.py", line 2854, in run_cell
result = self._run_cell(
File "/usr/local/lib/python3.8/dist-packages/IPython/core/interactiveshell.py", line 2881, in _run_cell
return runner(coro)
File "/usr/local/lib/python3.8/dist-packages/IPython/core/async_helpers.py", line 68, in _pseudo_sync_runner
coro.send(None)
File "/usr/local/lib/python3.8/dist-packages/IPython/core/interactiveshell.py", line 3057, in run_cell_async
has_raised = await self.run_ast_nodes(code_ast.body, cell_name,
File "/usr/local/lib/python3.8/dist-packages/IPython/core/interactiveshell.py", line 3249, in run_ast_nodes
if (await self.run_code(code, result, async_=asy)):
File "/usr/local/lib/python3.8/dist-packages/IPython/core/interactiveshell.py", line 3326, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-56-6bda2975dd16>", line 3, in <module>
history = model.fit(ds_train, batch_size = 16, validation_data = ds_val, epochs = 10, verbose = 1, callbacks = callback)
File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 64, in error_handler
return fn(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1409, in fit
tmp_logs = self.train_function(iterator)
File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1051, in train_function
return step_function(self, iterator)
File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1040, in step_function
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1030, in run_step
outputs = model.train_step(data)
File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 889, in train_step
y_pred = self(x, training=True)
File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 64, in error_handler
return fn(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 490, in __call__
return super().__call__(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 64, in error_handler
return fn(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py", line 1014, in __call__
outputs = call_fn(inputs, *args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 92, in error_handler
return fn(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/keras/engine/functional.py", line 458, in call
return self._run_internal_graph(
File "/usr/local/lib/python3.8/dist-packages/keras/engine/functional.py", line 596, in _run_internal_graph
outputs = node.layer(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 64, in error_handler
return fn(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py", line 1014, in __call__
outputs = call_fn(inputs, *args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 92, in error_handler
return fn(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/keras/layers/convolutional/base_conv.py", line 250, in call
outputs = self.convolution_op(inputs, self.kernel)
File "/usr/local/lib/python3.8/dist-packages/keras/layers/convolutional/base_conv.py", line 225, in convolution_op
return tf.nn.convolution(
Node: 'model/block3b_project_conv/Conv2D'
OOM when allocating tensor with shape[96,384,1,1] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[{{node model/block3b_project_conv/Conv2D}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.
[Op:__inference_train_function_351566]
</code></pre>
<p>I tried setting <code>batch_size</code> = 1, but it is still not working. Any solution?</p>
|
<python><tensorflow><machine-learning><deep-learning>
|
2022-12-20 20:40:30
| 1
| 1,933
|
Adarsh Wase
|
74,868,633
| 1,866,833
|
Python format date in jsonfiy output
|
<p>How to intercept datetime value and make custom format?</p>
<p>Here is how i fetch records and put them in json format:</p>
<pre><code>cursor.execute(sql)
row_headers = [x[0] for x in cursor.description]
json_data = []
for row in cursor.fetchall():
json_data.append(dict(zip(row_headers, row)))
return jsonify(json_data)
</code></pre>
<p>When I send to console i have something like:</p>
<blockquote>
<p>('2022000390100002', '00002', 'CTD2X405AXXX', '2022/39/1', 48.0,
datetime.date(2022, 12, 20), 4, None)</p>
</blockquote>
<p>The json output looks like:</p>
<pre><code>{
"BoxNumber": "00002",
"BoxQty": 48.0,
"DateofSPC": "Tue, 20 Dec 2022 00:00:00 GMT",
"GPDI_Codice": null,
"Lot": "2022000390100002",
"OrderNumber": "2022/39/1",
"Product": "CTD2X405AXXX",
"TestResult": 4
}
</code></pre>
<p>What I want to do is to change DateofSPC output to dd.mm.yyyyy</p>
|
<python><flask><jsonify>
|
2022-12-20 20:37:06
| 1
| 2,714
|
Josef
|
74,868,580
| 8,262,535
|
conda create environment ResolvePackageNotFound when pip is successful
|
<p>I am trying to create a conda environment using the following environment.yml file:</p>
<pre><code>conda env create -n med -f environment.yml
</code></pre>
<p>File contents:</p>
<pre><code>name: med
channels:
- defaults
- conda-forge
dependencies:
- python=3.8
- batchgenerators==0.23
- pandas==1.1.5
- SimpleITK==2.2.1
- tensorboard==2.11.0
- tqdm
- pip
- pip:
- --extra-index-url https://download.pytorch.org/whl/cu117
- torch==1.13.1+cu117
- torchvision==0.14.1+cu117
</code></pre>
<p>The setup fails with:</p>
<pre><code>ResolvePackageNotFound:
- batchgenerators
</code></pre>
<p>If I delete batchgenerators from the yml file, create the environment, conda activate it and try pip install batchgenerators - it is successful.</p>
<p>Further, using pip also works</p>
<pre><code>conda create -n med
conda activate med
pip install -r requirements.txt
batchgenerators==0.23
pandas==1.1.5
SimpleITK==2.2.1
tensorboard==2.11.0
torch==1.13.1+cu117
torchvision==0.14.1+cu117
tqdm
</code></pre>
<p>Any suggestions to make conda work directly?
Thanks,
Bogdan</p>
|
<python><pip><anaconda>
|
2022-12-20 20:29:01
| 1
| 385
|
illan
|
74,868,489
| 14,584,978
|
How to extract letters from a string (object) column in pandas dataframe
|
<p>I am trying to extract letters from a string column in pandas. This is used for identifying the type of data I am looking at.</p>
<p>I want to take a column of:</p>
<blockquote>
<p>GE5000341</p>
<p>R22256544443</p>
<p>PDalirnamm</p>
<p>AAddda</p>
</blockquote>
<p>and create a new column of:</p>
<blockquote>
<p>GE</p>
<p>R</p>
<p>PDALIRN</p>
<p>AD</p>
</blockquote>
|
<python><pandas><string>
|
2022-12-20 20:17:41
| 2
| 374
|
Isaacnfairplay
|
74,868,363
| 11,550,733
|
Subclasses fail to instantiate base class variables with super().__init__() when base class inherits from Protocol
|
<p>Given the following code, I'd expect both <code>a.x</code> and <code>a.y</code> to be declared during instantiation. When class <code>P</code> doesn't inherit from <code>Protocol</code> both assertions pass. In the debugger, it doesn't seem that class <code>P</code>'s constructor is ever being ran. I suspect this has to do with the MRO in some way, but my main question is whether I can still use <code>super()__init__()</code> while still having my base class inherit from <code>Protocol</code></p>
<pre class="lang-py prettyprint-override"><code>
from typing import Protocol
class P(Protocol):
y: str
def __init__(self, y: str) -> None:
self.y = y
class A(P):
x: str
def __init__(self, x: str, y: str) -> None:
self.x = x
super().__init__(y=y)
a = A("x", "y")
assert a.x == "x"
assert a.y == "y" # AttributeError: 'A' object has no attribute 'y'
</code></pre>
|
<python><python-typing>
|
2022-12-20 20:04:26
| 2
| 902
|
Jamie.Sgro
|
74,868,286
| 10,309,712
|
How do I modify this function to return a 4d array instead of 3d?
|
<p>I created this function that takes in a <code>dataframe</code> to return an <code>ndarrays</code> of input and label.</p>
<pre class="lang-py prettyprint-override"><code>def transform_to_array(dataframe, chunk_size=100):
grouped = dataframe.groupby('id')
# initialize accumulators
X, y = np.zeros([0, 1, chunk_size, 4]), np.zeros([0,]) # original inpt shape: [0, 1, chunk_size, 4]
# loop over each group (df[df.id==1] and df[df.id==2])
for _, group in grouped:
inputs = group.loc[:, 'A':'D'].values
label = group.loc[:, 'label'].values[0]
# calculate number of splits
N = (len(inputs)-1) // chunk_size
if N > 0:
inputs = np.array_split(
inputs, [chunk_size + (chunk_size*i) for i in range(N)])
else:
inputs = [inputs]
# loop over splits
for inpt in inputs:
inpt = np.pad(
inpt, [(0, chunk_size-len(inpt)),(0, 0)],
mode='constant')
# add each inputs split to accumulators
X = np.concatenate([X, inpt[np.newaxis, np.newaxis]], axis=0)
y = np.concatenate([y, label[np.newaxis]], axis=0)
return X, y
</code></pre>
<p>The function returned <code>X</code> of shape <code>(n_samples, 1, chunk_size, 4)</code> and <code>y</code> of shape <code>(n_samples, )</code>.</p>
<p>For examples:</p>
<pre class="lang-py prettyprint-override"><code>N = 10_000
id = np.arange(N)
labels = np.random.randint(5, size=N)
df = pd.DataFrame(data = np.random.randn(N, 4), columns=list('ABCD'))
df['label'] = labels
df.insert(0, 'id', id)
df = df.loc[df.id.repeat(157)]
df.head()
id A B C D label
0 0 -0.571676 -0.337737 -0.019276 -1.377253 1
0 0 -0.571676 -0.337737 -0.019276 -1.377253 1
0 0 -0.571676 -0.337737 -0.019276 -1.377253 1
0 0 -0.571676 -0.337737 -0.019276 -1.377253 1
0 0 -0.571676 -0.337737 -0.019276 -1.377253 1
</code></pre>
<p>To generate the followings:</p>
<pre class="lang-py prettyprint-override"><code>X, y = transform_to_array(df)
X.shape # shape of input
(20000, 1, 100, 4)
y.shape # shape of label
(20000,)
</code></pre>
<p>This function works fine as intended, however, it takes long time to finish execution:</p>
<pre class="lang-py prettyprint-override"><code>start_time = time.time()
X, y = transform_to_array(df)
end_time = time.time()
print(f'Time taken: {end_time - start_time} seconds.')
Time taken: 227.83956217765808 seconds.
</code></pre>
<p>In attempt to improve performance of the function (minimise exec. time), I created the following modified func:</p>
<pre class="lang-py prettyprint-override"><code>def modified_transform_to_array(dataframe, chunk_size=100):
# group data by 'id'
grouped = dataframe.groupby('id')
# initialize lists to store transformed data
X, y = [], []
# loop over each group (df[df.id==1] and df[df.id==2])
for _, group in grouped:
# get input and label data for group
inputs = group.loc[:, 'A':'D'].values
label = group.loc[:, 'label'].values[0]
# calculate number of splits
N = (len(inputs)-1) // chunk_size
if N > 0:
# split input data into chunks
inputs = np.array_split(
inputs, [chunk_size + (chunk_size*i) for i in range(N)])
else:
inputs = [inputs]
# loop over splits
for inpt in inputs:
# pad input data to have a chunk size of chunk_size
inpt = np.pad(
inpt, [(0, chunk_size-len(inpt)),(0, 0)],
mode='constant')
# add each input split and corresponding label to lists
X.append(inpt)
y.append(label)
# convert lists to numpy arrays
X = np.array(X)
y = np.array(y)
return X, y
</code></pre>
<p>At first, it seems like I succeeded reducing time taken:</p>
<pre class="lang-py prettyprint-override"><code>start_time = time.time()
X2, y2 = modified_transform_to_array(df)
end_time = time.time()
print(f'Time taken: {end_time - start_time} seconds.')
Time taken: 5.842168092727661 seconds.
</code></pre>
<p>However, the result is that it changes the shape of the intended returned value.</p>
<pre class="lang-py prettyprint-override"><code>X2.shape # this should be (20000, 1, 100, 4)
(20000, 100, 4)
y.shape # this is fine
(20000, )
</code></pre>
<p><strong>Question</strong></p>
<p>How do I modify <code>modified_transform_to_array()</code> to return the intended array shape <code>(n_samples, 1, chunk_size, 4)</code> since it is much faster?</p>
|
<python><pandas><dataframe><numpy><numpy-ndarray>
|
2022-12-20 19:56:49
| 2
| 4,093
|
arilwan
|
74,868,270
| 19,797,660
|
DataFrame How to vectorize this for loop?
|
<p>I need help vectorizing this <code>for</code> loop.
i couldn't come up with my own solution.
So the general idea is that I want to calculate the number of bars since the last time the condition was true.<br />
I have DataFrame with initial values <code>0</code> and <code>1</code> where <code>0</code> is the anchor point for counting to start and stop (0 means that the condition was met for the index in this cell).
For example inital DataFrame would look like this (I am typing only the series raw values and I am ommiting column names etc.)</p>
<pre><code>NaN, NaN, NaN, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0
</code></pre>
<p>The output should look like this:</p>
<pre><code>NaN, NaN, NaN, 0, 1, 2, 3, 4, 0, 1, 2, 0, 1, 2, 3, 4, 5, 0
</code></pre>
<p>My current code:</p>
<pre><code>cond_count = pd.DataFrame(index=range(cond.shape[0]), columns=range(1))
cond_count.rename(columns={0: 'Bars Since'})
cond_count['Bars Since'] = 'NaN'
cond_count['Bars Since'].iloc[indices_cond_met] = 0
cond_count['Bars Since'].iloc[indices_cond_cut] = 1
for i in range(cond_count.shape[0]):
if cond_count['Bars Since'].iloc[i] == 'NaN'
pass
elif cond_count['Bars Since'].iloc[i] == 0:
while cond_count['Bars Since'].iloc[j] != 0:
cond_count['Bars Since'].iloc[j] = cond_count['Bars Since'].shift(1).iloc[j] + 1
else:
pass
</code></pre>
|
<python><pandas>
|
2022-12-20 19:55:03
| 1
| 329
|
Jakub Szurlej
|
74,868,258
| 80,857
|
Batch call to PlaylistItems: insert YouTube API v3 with Google API Python client library produces intermittent HttpError 500
|
<p>I'm using the <a href="https://github.com/googleapis/google-api-python-client" rel="nofollow noreferrer">Google API Python client library</a> to run a batch of <a href="https://developers.google.com/youtube/v3/docs/playlistItems/insert" rel="nofollow noreferrer">PlaylistItems: insert</a> calls per the <code>batch</code> method <a href="https://github.com/googleapis/google-api-python-client/blob/main/docs/batch.md" rel="nofollow noreferrer">described in the docs</a>. When it's all put together it looks like this:</p>
<pre><code>youtube = get_authenticated_service()
batch = youtube.new_batch_http_request(callback=insert_callback)
playlistId = "[redacted]"
video_ids = [
"d7ypnPjz81I",
"vZv9-TWdBJM",
"7uG6E6bVKU0",
"8EzfBYFU8Q0"
]
playlist_items = []
for video_id in video_ids:
batch.add(youtube.playlistItems().insert(
part="snippet",
body={
"snippet": {
"playlistId": playlistId,
"resourceId": {
"kind": "youtube#video",
"videoId": video_id
}
}
}
)
)
response = batch.execute()
print(response)
def insert_callback(request_id, response, exception):
if exception is not None:
print(exception)
else:
print(response)
</code></pre>
<p>The script loops through the videos and makes an API call to add each one to the playlist, printing the result of the API call to the screen. The outcome of each call is unpredictable. In some cases, the video is successfully added to the playlist and in others, I get the following error response:</p>
<blockquote>
<p><HttpError 500 when requesting
<a href="https://youtube.googleapis.com/youtube/v3/playlistItems?part=snippet&alt=json" rel="nofollow noreferrer">https://youtube.googleapis.com/youtube/v3/playlistItems?part=snippet&alt=json</a>
returned "Internal error encountered.". Details: "[{'message':
'Internal error encountered.', 'domain': 'global', 'reason':
'backendError'}]"></p>
</blockquote>
<p>There is no discernable pattern of success or failure. A video can fail to be added on one run and then succeed on the next.</p>
<p>Is there something I can do to overcome this error? If not, is there any advantage running batches and then retrying the videos that error out until they succeed?</p>
<p><strong>EDIT:</strong></p>
<p>The Google API team suggested that I implement a delay between API calls. Can this be done using the Python client library?</p>
|
<python><youtube-api><batch-processing>
|
2022-12-20 19:53:45
| 0
| 5,324
|
Keyslinger
|
74,868,056
| 11,693,768
|
Making a dataframe where new row is created after every nth column using only semi colons as delimiters
|
<p>I have the following string in a column within a row in a pandas dataframe. You could just treat it as a string.</p>
<pre><code>;2;613;12;1;Ajc hw EEE;13;.387639;1;EXP;13;2;128;12;1;NNN XX Ajc;13;.208966;1;SGX;13;..
</code></pre>
<p>It goes on like that.</p>
<p>I want to convert it into a table and use the semi colon <code>;</code> symbol as a delimiter. The problem is there is no new line delimiter and I have to estimate it to be every 10 items.</p>
<p>So, it should look something like this.</p>
<pre><code>;2;613;12;1;Ajc hw EEE;13;.387639;1;EXP;13;
2;128;12;1;NNN XX Ajc;13;.208966;1;SGX;13;..
</code></pre>
<p>How do I convert that string into a new dataframe in pandas. After every 10 semi colon delimiters, a new row should be created.</p>
<p>I have no idea how to do this, any help would be greatly appreciated in terms of tools or ideas.</p>
|
<python><pandas><string><csv><delimiter>
|
2022-12-20 19:33:01
| 1
| 5,234
|
anarchy
|
74,868,004
| 4,918,319
|
How to find a complementary parallelogram
|
<p>On <a href="https://stackoverflow.com/a/8862483/4918319">https://stackoverflow.com/a/8862483/4918319</a>, an algorithm is put forward with which to determine whether a ray in 3-space goes through a given rectangle, if we specify the ray by two vectors (one specifying the origin of the ray and the other its direction) and the rectangle by three vectors (one specifying the location of one corner and the other two giving the relative positions of the two corners adjacent to that one).</p>
<p>The author then asserts that this algorithm is not limited to squares & rectangles, but also works for any parallelogram. However, while that initially appears to be true, and works to a limited extent, the fact is that, because projection is used, which involves moving a point onto a line by a course perpendicular to the line, it does not work, as it is, for just any parallelogram.</p>
<p>However, it appears that every parallelogram, once you have selected a corner, has one complementary parallelogram. Each parallelogram covers the area which that algorithm would attribute to its complement, and vice versa. For example, if you draw a rhombus and select an obtuse corner of it, and then shade the area of the paper that projects to each of the two sides which that corner joins, the shaded areas will overlap to form another rhombus, with one acute corner at the point which you selected. Then, if you draw the acute rhombus and then shade the areas of the paper which project to the sides adjacent to the originally selected corner, the shaded area will form the same shape, size, and orientation as your original rhombus.</p>
<p>The question remains, how to derive the complement of a parallelogram once you know the three vectors which define the first?</p>
|
<python><matlab><geometry><intersection><geometry-surface>
|
2022-12-20 19:27:59
| 0
| 728
|
Post169
|
74,867,992
| 18,758,062
|
Missing Matplotlib Animated Figure in VSCode Jupyter Notebook
|
<p>I am trying to create a simple <code>matplotlib</code> animation in a Jupyter notebook running inside vscode.</p>
<pre class="lang-py prettyprint-override"><code>%matplotlib notebook
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
fig, ax = plt.subplots()
line, = ax.plot([]) # A tuple unpacking to unpack the only plot
ax.set_xlim(0, 2*np.pi)
ax.set_ylim(-1.1, 1.1)
def animate(frame_num):
y = np.sin(x + 2*np.pi * frame_num/100)
line.set_data((x, y))
return line
anim = FuncAnimation(fig, animate, frames=100, interval=5)
plt.show()
</code></pre>
<p>However, no figure appears after running the cell, just the green tick symbol.</p>
<p><a href="https://i.sstatic.net/G6Nqd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/G6Nqd.png" alt="enter image description here" /></a></p>
<p>Using Jupyter extension, both release version <code>v2022.11.1003412109</code> and pre-release version <code>v2023.1.1003441034</code>.</p>
<p>Is it possible to get the animated matplotlib figure to work inside Vscode Jupyter notebook?</p>
|
<python><matplotlib><visual-studio-code><jupyter-notebook><matplotlib-animation>
|
2022-12-20 19:25:40
| 1
| 1,623
|
gameveloster
|
74,867,754
| 11,397,243
|
ModuleNotFoundError: No module named 'CGI.Util'; 'CGI' is not a package
|
<p>I am the author of <a href="https://github.com/snoopyjc/pythonizer/" rel="nofollow noreferrer">Pythonizer</a> - a Perl to Python translator and I'm currently trying to translate CGI.pm into Python. The output file is CGI.py. I am getting this error on this import:</p>
<pre><code>perllib.init_package("CGI", is_class=True) # Creates builtins.CGI = namespace
sys.path[0:0] = ["./PyModules"]
from CGI.Util import (ascii2ebcdic, ebcdic2ascii, escape, expires, make_attributes, rearrange, rearrange_header, unescape)
ModuleNotFoundError: No module named 'CGI.Util'; 'CGI' is not a package
</code></pre>
<p>The <code>PyModules</code> directory contains <code>CGI/Util.py</code>. Note that the name of the file generated that contains this code is <code>CGI.py</code>. I understand that this is a no-no, and also note that <code>perllib.init_package("CGI"...)</code> creates a <code>CGI</code> attribute of <code>builtins</code> such that when you mention <code>CGI</code> you get that namespace.</p>
<p>So my question is - any idea how to avoid this issue? I'm thinking of using __import__ for this case if that's the only option, and by "this case" I mean that the filename I'm generating is the same as the prefix of the thing I'm importing -or- I have a namespace by that name (not sure which is causing the issue - probably both). Insights would be appreciated.</p>
|
<python><python-3.x><import><module><package>
|
2022-12-20 19:03:47
| 1
| 633
|
snoopyjc
|
74,867,671
| 1,245,262
|
When is the watch() function required in Tensorflow to enable tracking of gradients?
|
<p>I'm puzzled by the fact that I've seen blocks of code that require tf.GradientTape().watch() to work and blocks that seem to work without it.</p>
<p>For example, this block of code requires the watch() function:</p>
<pre><code> x = tf.ones((2,2)) #Note: In original posting of question,
# I didn't include this key line.
with tf.GradientTape() as t:
# Record the actions performed on tensor x with `watch`
t.watch(x)
# Define y as the sum of the elements in x
y = tf.reduce_sum(x)
# Let z be the square of y
z = tf.square(y)
# Get the derivative of z wrt the original input tensor x
dz_dx = t.gradient(z, x)
</code></pre>
<p>But, this block does not:</p>
<pre><code>with tf.GradientTape() as tape:
logits = model(images, images = True)
loss_value = loss_object(labels, logits)
loss_history.append(loss_value.numpy.mean()
grads = tape.gradient(loss_value, model.trainable_variables)
optimizer.apply_gradient(zip(grads, model.trainable_vaiables))
</code></pre>
<p>What is the difference between these two cases?</p>
|
<python><tensorflow>
|
2022-12-20 18:54:32
| 2
| 7,555
|
user1245262
|
74,867,657
| 12,436,050
|
Generating new columns from row values in Python
|
<p>I have following pandas dataframe (HC_subset_umls)</p>
<pre><code> term code source term_normlz CUI CODE SAB TTY STR
0 B-cell lymphoma meddra:10003899 meddra b-cell lymphoma C0079731 MTHU019696 OMIM PTCS b-cell lymphoma
1 B-cell lymphoma meddra:10003899 meddra b-cell lymphoma C0079731 10003899 MDR PT b-cell lymphoma
2 Astrocytoma meddra:10003571 meddra astrocytoma C0004114 10003571 MDR PT astrocytoma
3 Astrocytoma meddra:10003571 meddra astrocytoma C0004114 D001254 MSH MH astrocytoma
</code></pre>
<p>I would like to group rows based on common CUI and generate new columns.</p>
<p>The desired output is:</p>
<pre><code> term code source term_normlz CUI OMIM_CODE OMIM_TTY OMIM_STR MDR_CODE MDR_TTY MDR_STR MSH_CODE MSH_TTY MSH_STR
0 B-cell lymphoma meddra:10003899 meddra b-cell lymphoma C0079731 MTHU019696 PTCS b-cell lymphoma 10003899 PT b-cell lymphoma NA NA NA NA
2 Astrocytoma meddra:10003571 meddra astrocytoma C0004114 NA NA NA 10003571 MDR PT astrocytoma D001254 MSH MH astrocytoma
</code></pre>
<p>I am using following lines of code.</p>
<pre><code>HC_subset_umls['OMIM_CODE'] = (
HC_subset_umls['CUI']
.map(
HC_subset_umls
.groupby('CUI')
.apply(lambda x: x.loc[x['SAB'].isin(['OMIM']), 'CODE'].values[0])
)
)
HC_subset_umls['OMIM_TERM'] = (
HC_subset_umls['CUI']
.map(
HC_subset_umls
.groupby('CUI')
.apply(lambda x: x.loc[x['SAB'].isin(['OMIM']), 'STR'].values[0])
)
)
HC_subset_umls['OMIM_TTY'] = (
HC_subset_umls['CUI']
.map(
HC_subset_umls
.groupby('CUI')
.apply(lambda x: x.loc[x['SAB'].isin(['OMIM']), 'TTY'].values[0])
)
)
HC_subset_umls = HC_subset_umls[~(HC_subset_umls['SAB'].isin(['OMIM']))]
</code></pre>
<p>And subsequently for the other 'SAB' like 'MDR' and so on. However, I am getting following error.</p>
<pre><code>IndexError: index 0 is out of bounds for axis 0 with size 0
</code></pre>
<p>Any help is highly appreciated.</p>
|
<python><pandas>
|
2022-12-20 18:53:23
| 1
| 1,495
|
rshar
|
74,867,493
| 1,928,054
|
Setting values in subselection of MultiIndex pandas
|
<p>Consider the following DataFrame:</p>
<pre><code>import numpy as np
import pandas as pd
arrays1 = [
[
"A",
"A",
"A",
"B",
"B",
"B",
"C",
"C",
"C",
"D",
"D",
"D",
],
[
"qux",
"quux",
"corge",
"qux",
"quux",
"corge",
"qux",
"quux",
"corge",
"qux",
"quux",
"corge",
],
[
"one",
"two",
"three",
"one",
"two",
"three",
"one",
"two",
"three",
"one",
"two",
"three",
],
]
tuples1 = list(zip(*arrays1))
index_values1 = pd.MultiIndex.from_tuples(tuples1)
df1 = pd.DataFrame(
np.ones((12, 12)), index=index_values1, columns=index_values1
)
</code></pre>
<p>Yielding:</p>
<pre><code> A B C D
qux quux corge qux quux corge qux quux corge qux quux corge
one two three one two three one two three one two three
A qux one 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0
quux two 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0
corge three 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0
B qux one 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0
quux two 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0
corge three 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0
C qux one 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0
quux two 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0
corge three 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0
D qux one 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0
quux two 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0
corge three 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0
</code></pre>
<p>Say I want to set everything to zero, except for the following rows and columns:</p>
<pre><code> A B C D
qux quux corge qux quux corge qux quux corge qux quux corge
one two three one two three one two three one two three
A qux one 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
quux two 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
corge three 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
B qux one 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
quux two 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
corge three 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
C qux one 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
quux two 1.0 1.0 1.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0
corge three 1.0 1.0 1.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0
D qux one 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
quux two 1.0 1.0 1.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0
corge three 1.0 1.0 1.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0
</code></pre>
<p>That is set everything to zero except for column A and B, and (quux, two), (corge, three) in index C and D.</p>
<p>To select column A and B and <code>(quux, two), (corge, three)</code> in both index B and C, I expected to be able to do:</p>
<pre><code>l_col_lvl0 = ['A', 'B']
l_idx_lvl0 = ['C', 'D']
l_idx_lvl1 = [("quux", "two"), ("corge", "three")]
df1_a_bc_qc = df1.loc[(l_idx_lvl0, l_idx_lvl1), l_col_lvl0]
</code></pre>
<p>However, this returns an empty DataFrame, and the following error message:</p>
<pre><code>FutureWarning: The behavior of indexing on a MultiIndex with a nested sequence of
labels is deprecated and will change in a future version. `series.loc[label, sequence]`
will raise if any members of 'sequence' or not present in the index's second level.
To retain the old behavior, use `series.index.isin(sequence, level=1)`
df1_a_bc_qc = df1.loc[(l_idx_lvl0, l_idx_lvl1), l_col_lvl0]
</code></pre>
<p>In turn, I can select column A and B, and indices C and D.</p>
<pre><code>df1_ab_cd = df1.loc[l_idx_lvl0, l_col_lvl0]
A B
qux quux corge qux quux corge
one two three one two three
C qux one 1.0 1.0 1.0 1.0 1.0 1.0
quux two 1.0 1.0 1.0 1.0 1.0 1.0
corge three 1.0 1.0 1.0 1.0 1.0 1.0
D qux one 1.0 1.0 1.0 1.0 1.0 1.0
quux two 1.0 1.0 1.0 1.0 1.0 1.0
corge three 1.0 1.0 1.0 1.0 1.0 1.0
</code></pre>
<p>Moreover, I can select (quux, two), (corge, three) in either index C or D:</p>
<pre><code>df1_ab_c_qc = df1.loc['C',l_col_lvl0].loc[l_idx_lvl1]
A B
qux quux corge qux quux corge
one two three one two three
quux two 1.0 1.0 1.0 1.0 1.0 1.0
corge three 1.0 1.0 1.0 1.0 1.0 1.0
df1_ab_d_qc = df1.loc['D',l_col_lvl0].loc[l_idx_lvl1]
A B
qux quux corge qux quux corge
one two three one two three
quux two 1.0 1.0 1.0 1.0 1.0 1.0
corge three 1.0 1.0 1.0 1.0 1.0 1.0
</code></pre>
<p>However, if I understand correctly, chained assignments are discouraged.</p>
<p>Moreover, if I try to pass <code>l_idx_lvl0</code> instead, I get the following error message:</p>
<pre><code>df1_ab_cd_qc = df1.loc[l_idx_lvl0,l_col_lvl0].loc[l_idx_lvl1]
ValueError: operands could not be broadcast together with shapes (2,2) (3,) (2,2)
</code></pre>
<p>In conclusion, how can I set verything to 0, except for column A and B, and (quux, two), (corge, three) in index B and C?</p>
<p>I believe that the solution to Question 6 (<a href="https://stackoverflow.com/questions/53927460/select-rows-in-pandas-multiindex-dataframe">Select rows in pandas MultiIndex DataFrame</a>) is very close to what I'm looking for, though I haven't gotten it to work for this case.</p>
<p>I would like to be flexible in the assignment by passing lists, instead of individual labels. Moreover, labels in level 1 and 2 are preferably not separated. That is, (quux, two) and (corge, three) should be passed together, instead of per level. The reason I mention this, is that I see that in Q6, labels are passed per level (i.e. <code>df.loc[(('a', 'b'), ('u', 'v')), :]</code>).</p>
<p>Any help is much appreciated.</p>
|
<python><pandas>
|
2022-12-20 18:35:42
| 2
| 503
|
BdB
|
74,867,404
| 9,415,280
|
How to clean nan in tf.data.Dataset in sequences multivariates inputs for LSTM
|
<p>I try to feed huge dataset (out of memory) to my lstm model.
I want to make some transformation on my data using the tf.data.Dataset.
I first turn my numpy data to dataset using tf.keras.utils.timeseries_dataset_from_array.
This is an exemple of my data:</p>
<p><a href="https://i.sstatic.net/uKp1K.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uKp1K.png" alt="enter image description here" /></a></p>
<p>6 first columns are a feature, last one is my target and row are timesteps.</p>
<p>I turn my 7 features inputs to sequences of 5 timesteps and want to predict the output of one value using this code:</p>
<pre><code>input_dataset = tf.keras.utils.timeseries_dataset_from_array(
data[:,:-1], None, sequence_length=5, sequence_stride=1, shuffle=True, seed=1)
target_dataset = tf.keras.utils.timeseries_dataset_from_array(
data[:,-1], None, sequence_length=1, sequence_stride=1,
shuffle=True, seed=1)
</code></pre>
<p>as you see in my data, sometimes values are missing. What I try is to remove all sequences (input with associated output) with a 'nan' in the input OR output.</p>
<p>I try to adapt an exemple and get this:</p>
<pre><code>filter_nan = lambda i, j: not tf.reduce_any(tf.math.is_nan(i)) and not tf.math.is_nan(j)
ds = tf.data.Dataset.zip((input_dataset, output_dataset)).filter(filter_nan)
</code></pre>
<p>but get this error :</p>
<pre><code>Using a symbolic `tf.Tensor` as a Python `bool` is not allowed in Graph execution. Use Eager execution or decorate this function with @tf.function.
</code></pre>
<p>I take a look to @tf.function but it is out of my comprehension for the moment and not sure my innitial trial was correct anyway.</p>
|
<python><tensorflow><lstm><tf.data.dataset><multivariate-time-series>
|
2022-12-20 18:26:43
| 1
| 451
|
Jonathan Roy
|
74,867,309
| 12,149,817
|
Save figure from a for loop in one pdf file in python
|
<p>I am trying to save scatter plots from a list of dataframes in one pdf file using this:</p>
<pre><code>pdf = matplotlib.backends.backend_pdf.PdfPages("./output.pdf")
for df in list_degs_dfs:
dfplt = df.plot.scatter("DEratio", "MI_scaled")
pdf.savefig(dfplt)
pdf.close()
</code></pre>
<p>But I get this error:</p>
<blockquote>
<p>ValueError: No figure AxesSubplot(0.125,0.11;0.775x0.77)</p>
</blockquote>
<p>I thought it can be an issue with converting the plots to matplotlib figure and tried this:</p>
<pre><code>import matplotlib.pyplot as plt
pdf = matplotlib.backends.backend_pdf.PdfPages("./output.pdf")
for df in list_degs_dfs:
dfplt = df.plot.scatter("DEratio", "MI_scaled")
dfplt = plt.figure(dfplt)
pdf.savefig(dfplt.Figure)
pdf.close()
</code></pre>
<p>and got this:</p>
<blockquote>
<p>TypeError: int() argument must be a string, a bytes-like object or a
number, not 'AxesSubplot'</p>
</blockquote>
<p>How to save all dfplt figrues from all df dataframes from a list of dataframes in one file?</p>
|
<python><matplotlib>
|
2022-12-20 18:17:21
| 1
| 720
|
Yulia Kentieva
|
74,867,297
| 20,793,070
|
Properly groupby and filter with Polars
|
<p>I have df for my work with 3 main columns: <code>cid1, cid2, cid3</code>, and more 7 columns <code>cid4, cid5, etc</code>.</p>
<p><code>cid1</code> and <code>cid2</code> is <code>int</code>, another columns is <code>float</code>.</p>
<p>Each combitations of <code>cid1</code> and <code>cid2</code> is a workset with some rows where is values of all other columns is different. I want to filter df and receive my df with only max values in column <code>cid3</code> for each combination of <code>cid1</code> and <code>cid2</code>. <code>cid4</code> and next columns must be leaved without changes.</p>
<p>This code helps me with one part of my task:</p>
<pre class="lang-py prettyprint-override"><code>df = (df
.group_by("cid1", "cid2")
.agg(pl.max("cid3").alias("max_cid3"))
)
</code></pre>
<p>It's receives only 3 columns: <code>cid1</code>, <code>cid2</code>, <code>max_cid3</code> and filter all rows when <code>cid3</code> is not maximal.
But I can't find how to receive all another columns (<code>cid4, etc</code>) for that rows without changes.</p>
<pre class="lang-py prettyprint-override"><code>df = (df
.group_by("cid1", "cid2")
.agg(pl.max("cid3").alias("max_cid3"), pl.col("cid4"))
)
</code></pre>
<p>I tried to add <code>pl.col("cid4")</code> to list of aggs but in column I see as values different lists of some <code>cid4</code> values.</p>
<p>How I can make it properly? Maybe Polars haves another way to make it then group_by?</p>
<p>In Pandas I can make it:</p>
<pre><code>import pandas as pd
import numpy as np
df["max_cid3"] = df.groupby(['cid1', 'cid2'])['cid3'].transform(np.max)
</code></pre>
<p>And then filter df wherever <code>cid3==max_cid3</code>
But I can't find a way to make it in Polars.</p>
<p>Thank you!</p>
|
<python><dataframe><window-functions><python-polars>
|
2022-12-20 18:16:24
| 1
| 433
|
Jahspear
|
74,867,266
| 12,415,855
|
Upload file with ftplib not working and without any error message?
|
<p>i try to upload an image to my ftp-server using ftplib with the following code -</p>
<pre><code>import ftplib
import os
import sys
ADDR = "68.66.248.00"
USERNAME = "MyUser"
PW = "MyPw"
path = os.path.abspath(os.path.dirname(sys.argv[0]))
fn = os.path.join(path, "test.png")
session = ftplib.FTP()
session.connect(ADDR)
session.login(USERNAME, PW)
session.cwd("tmp/")
session.dir()
file = open(fn,'rb')
print("A")
session.storbinary('testXYZ.png', file)
print("B")
file.close()
session.quit()
</code></pre>
<p>When i run this code i get this output:</p>
<pre><code>$ python testFTP.py
drwxr-xr-x 7 rapidtec rapidtec 4096 Sep 20 00:16 .
drwx--x--x 37 rapidtec rapidtec 4096 Dec 20 09:49 ..
drwx------ 2 rapidtec rapidtec 4096 Oct 12 2020 analog
drwx------ 2 rapidtec rapidtec 4096 Oct 12 2020 awstats
drwx------ 2 rapidtec rapidtec 4096 Oct 12 2020 webalizer
A
</code></pre>
<p>As you can see the program stopps working at the line with "session.storbinary".
When i run this in vscode even the terminal is not anymore interruptable using Ctrl-C...</p>
<p>Why is the upload to the ftp-server not working and without any error?</p>
<p>It seems that the session-creation, and the session.cwd works fine but the file-transfer is for whatever reason not working</p>
<p>Why is this ftp-upload not working?</p>
|
<python><ftp><ftplib>
|
2022-12-20 18:13:58
| 1
| 1,515
|
Rapid1898
|
74,867,254
| 4,335,168
|
How to round 6.25 to 6.3 and 6.24 to 6.2 in Python?
|
<p>How to round 6.25 to 6.3 and 6.24 to 6.2 in Python?</p>
<p>The 'round' function is rounding 6.25 to 6.2:</p>
<pre><code>mynum = 6.25
print(round(mynum, 1))
</code></pre>
<p>produces 6.2 and I really need it to produce 6.3
It seems to only round up with 6 like 6.26 produces 6.3</p>
<p>Just to be clear: I want 6.25 to round to 6.3 and 6.24 to round to 6.2</p>
|
<python><python-3.x>
|
2022-12-20 18:12:34
| 0
| 2,120
|
Mattman85208
|
74,867,071
| 10,976,885
|
How to get execution date/logical date in PostgresOperator in Airflow?
|
<p>How do I get access to the execution date in the PostgresOperator in Apache Airflow?</p>
<p>With PythonOperator I get it as below</p>
<pre><code>def fun(context):
print(context["execution_date"])
print(context["logical_date"]) # also the same
task = PythonOperator(
task_id="python_test",
python_callable=fun,
)
</code></pre>
<p>My question is how do I get the same functionality with PostgresOperator.
I have the below task.</p>
<pre><code>task = PostgresOperator(
task_id="task_id",
postgres_conn_id="local_conn",
sql="sql/some_query.sql"
)
</code></pre>
<p>and some_query.sql looks like this.</p>
<pre><code>SELECT *
FROM TABLE
WHERE date = '2022-11-10'
</code></pre>
<p>I want to parameterize the date above with the execution date.
Something like:</p>
<pre><code>WHERE date = {{context.logical_date}}
</code></pre>
<p>How can I achieve that?
also is this the best practice to adjust the behaviour of a task based on a date?</p>
|
<python><postgresql><airflow><etl><pipeline>
|
2022-12-20 17:56:30
| 1
| 851
|
Dominik Sajovic
|
74,867,013
| 8,260,088
|
Getting a list of unique values within a pandas column
|
<p>Can you please help me with the following issue. Imagine, I have a following df:</p>
<pre><code>data = {
'A':['A1, B2, C', 'A2, A9, C', 'A3', 'A4, Z', 'A5, A1, Z'],
'B':['B1', 'B2', 'B3', 'B4', 'B4'],
}
df = pd.DataFrame(data)
</code></pre>
<p>How can I create a list with unique value that are stored in column 'A'? I want to smth like this:</p>
<pre><code> list_A = [A1, B2, C, A2, A9, A3, A4, Z, A5]
</code></pre>
|
<python><pandas>
|
2022-12-20 17:51:55
| 3
| 875
|
Alberto Alvarez
|
74,866,946
| 1,524,501
|
"ImportError: DLL load failed: %1 is not a valid Win32 application." but both Python and matplotlib are 64 bit
|
<p>I am trying to run the same code on my new PC (Windows 11). I installed Python 2.7 and matplotlib in both 64 bit but I still get "ImportError: DLL load failed: %1 is not a valid Win32 application."</p>
<p>I know this has been asked many times (e.g., <a href="https://stackoverflow.com/questions/35345434/matplotlib-importerror-dll-load-failed-1-is-not-a-valid-win32-application">here</a>, <a href="https://stackoverflow.com/questions/18898131/matplotlib-1-3-0-importerror-dll-load-failed-1-is-not-a-valid-win32-applicati">here</a>, <a href="https://stackoverflow.com/questions/4676433/solving-dll-load-failed-1-is-not-a-valid-win32-application-for-pygame">here</a>, <a href="https://stackoverflow.com/questions/46640587/importerror-dll-load-failed-1-is-not-a-valid-win32-application-while-importin">here</a>, here, here) and after reading all of these previously-asked posts, it seems you get this error when Python and the module bit do not match (e.g., Python in 32 bit and module in 64 bit, and vice versa). That is why I made sure that both are running in 64 bit. I restarted my PC, and I even installed pywin32 in 64 bit but that did not fix this issue.</p>
<p>My environment and versions
Windows 11 Pro 64 bit
Python 2.7.16 (v2.7.16:413a49145e, Mar 4 2019, 01:37:19) [MSC v.1500 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
matplotlib: 2.2.5
pywin32: 228</p>
<p>Codes when the error occurs in PowerShell
import csv, os, os.path, pdb
import matplotlib.pyplot as plt</p>
<p>I also get the same error when I try to import _tkinter (based on <a href="https://stackoverflow.com/questions/35345434/matplotlib-importerror-dll-load-failed-1-is-not-a-valid-win32-application">another question</a>)</p>
<p>I have been trying to figure this out for days now and I would appreciate it if anybody could suggest what else I could do to make it work. Thanks!</p>
|
<python><python-2.7><matplotlib><64-bit><pywin32>
|
2022-12-20 17:46:00
| 0
| 2,111
|
owl
|
74,866,906
| 3,378,985
|
Automatically detect unused code and exclude it from wheel build
|
<p>I have a large(ish) python library that does a lot of inter-related stuff (training some neural networks, setting up datasets, testing, evaluation, running inference on the networks, etc.). I want to build an extremely lightweight cut-down bare bones wheel package, which would only allow running of the inference and a few related things, all in all I only need to call maybe 5 different functions. This would have fewer deps and other benefits (not the least a faster start time).</p>
<p>I started doing this by hand, by specifying only the necessary packages and removing the no-longer needed imports via <code>sed</code> in my build script. But this isn't very scalable at all, as soon as I make any changes to the files affected, the build script will need to be updated.</p>
<p>I've been reading up on tools such as Vulture, which would do static code analysis and show me dead code branches, but it's not quite what I need. For one, there are many CLI scripts which are not used (and which won't be discovered), and for the other the results are not immediately usable, I would still need to build something around it.</p>
<p>I cannot believe there isn't a tool to do this. What I'm after is specifying a bunch of functions (as imports?) and getting out the minimum amount of code that is needed to run those successfully. Of course I could build something of that sort, but that feels like reinventing the wheel. Is there a tool out there which I could either just use, or possibly bend to my needs? Thanks!</p>
|
<python><dependency-management><python-wheel>
|
2022-12-20 17:42:10
| 0
| 446
|
BIOStheZerg
|
74,866,858
| 3,052,832
|
In Pyside6 (Or C++ Qt), how do I set a custom QML style at runtime?
|
<p>I want to use a custom style for my QML application. I have followed these instructions: <a href="https://doc.qt.io/qt-6/qtquickcontrols2-customize.html#creating-a-custom-style" rel="nofollow noreferrer">https://doc.qt.io/qt-6/qtquickcontrols2-customize.html#creating-a-custom-style</a></p>
<p>I have a directory structure like this</p>
<pre><code>MyApp/
├─ main.py
├─ QML/
│ ├─ Main.qml
├─ MyStyle/
│ ├─ qmldir
│ ├─ ToolTip.qml
</code></pre>
<p>I have created a style by creating a module (using a qmldir, and putting QML files inside it)</p>
<p>My <code>qmldir</code> contains</p>
<pre><code>module MyStyle
ToolTip 3.0 ToolTip.qml
</code></pre>
<p>And <code>ToolTip.qml</code> contains</p>
<pre><code>import QtQuick.Templates 2.0 as T
import QtQuick
import QtQuick.Controls
T.ToolTip {
id: control
text: qsTr("A descriptive tool tip of what the button does")
contentItem: Text {
text: control.text
font: control.font
color: "red"
}
background: Rectangle {
border.color: "grey"
}
}
</code></pre>
<p>I then tried to apply this in <code>main.py</code> (The API is very similar to the C++ one)</p>
<pre><code>app = QApplication(sys.argv)
engine = QQmlApplicationEngine()
QQuickStyle.setStyle("MyStyle")
engine.load(QUrl.fromLocalFile(f"{os.path.dirname(__file__)}/QML/Main.qml"))
app.exec()
</code></pre>
<p>But during <code>engine.load</code> I get the error:</p>
<pre><code>file:///D:/MyApp/QML/Main.qml: module "MyStyle" is not installed `QQuickStyle.setStyle`
</code></pre>
<p>It is interesting that the error only happens once <code>engine.load</code> is reached. <code>QQuickStyle.setStyle</code> does not give an error (however, it also does not error even if nonsense is given).</p>
<p>I have tried adding:</p>
<pre><code>engine.registerModule("MyStyle", os.path.dirname(__file__) + "/MyStyle")
</code></pre>
<p>And:</p>
<pre><code>engine.setImportPathList([os.path.dirname(__file__) + "/MyStyle"])
</code></pre>
<p>But these also does not work.</p>
<p>How do I install my custom style so I can use it, just like I would use:</p>
<pre><code>QQuickStyle.setStyle("Material")
</code></pre>
<p>Thanks</p>
<p><em>edit</em></p>
<p>If I add:</p>
<pre><code>engine.addPluginPath(f"{os.path.dirname(__file__)}/MyStyle")
qmlRegisterModule("MyStyle", 1, 0)
</code></pre>
<p>Then I do not get the error, but my ToolTip style is still not applied.</p>
|
<python><qt><qml><pyside><qtquick2>
|
2022-12-20 17:37:56
| 2
| 2,054
|
Blue7
|
74,866,798
| 4,391,249
|
Does the Python built-in library have an incrementable counter?
|
<p>Is there anything like this in the Python built-in library:</p>
<pre class="lang-py prettyprint-override"><code>class Counter:
def __init__(self, start=0):
self.val = start
def consume(self):
val = self.val
self.val += 1
return val
</code></pre>
<p>I see this as much safer way of implementing code where a counter needs to be used and then immediately incremented on the next line. Kind of like using <code>i++</code> in other languages. But I'm trying to avoid clogging up my library with definitions like this if there's an inbuilt method.</p>
|
<python>
|
2022-12-20 17:33:15
| 1
| 3,347
|
Alexander Soare
|
74,866,786
| 6,263,000
|
python-oracledb thin client returns DPY-6005
|
<p>I'm trying to connect to a 21c ATP and 19c ADP (free tier, ACL enabled/configured with "My Address", TLS enabled (mTLS set to "Not required"), connection string contains "ssl_server_dn_match=yes") using Python's <strong>thin client</strong> but at the point of making a connection or setting up a connection pool, I get:</p>
<blockquote>
<pre><code>OperationalError: DPY-6005: cannot connect to database. Connection
failed with "[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify
failed: self signed certificate in certificate chain (_ssl.c:1131)"
</code></pre>
</blockquote>
<p><strong>Envioronment</strong>:</p>
<p>DB: ATP 21c and ADP 19c</p>
<p>Python client library: oracledb-1.2.1 (I've tried 1.2.0 and 1.1.1, as well, but to no avail)</p>
<p>Environment: Python 3.10.4 and 3.8.10 (running on Mac OS)</p>
<p><strong>Code sample:</strong></p>
<pre><code>import oracledb
# copied from the ATP's "Database Connection"
cs='''(description= (retry_count=20)(retry_delay=3)(address=(protocol=tcps)(port=1521)(host=adb.uk-london-1.oraclecloud.com))(connect_data=(service_name=xxxx.adb.oraclecloud.com))(security=(ssl_server_dn_match=yes)))'''
connection = oracledb.connect(user="admin", password="<password>", dsn=cs)
with connection.cursor() as cursor:
try:
sql = """select systimestamp from dual"""
for r in cursor.execute(sql):
print(r)
except oracledb.Error as e:
error, = e.args
print(error.message)
print(sql)
if (error.offset):
print('^'.rjust(error.offset+1, ' '))
</code></pre>
<p><strong>References:</strong></p>
<p>I've used the following documents as a reference:</p>
<ul>
<li><a href="https://blogs.oracle.com/opal/post/easy-way-to-connect-python-applications-to-oracle-autonomous-databases" rel="nofollow noreferrer">https://blogs.oracle.com/opal/post/easy-way-to-connect-python-applications-to-oracle-autonomous-databases</a></li>
<li><a href="https://blogs.oracle.com/developers/post/writing-a-flask-application-using-python-oracledb" rel="nofollow noreferrer">https://blogs.oracle.com/developers/post/writing-a-flask-application-using-python-oracledb</a></li>
<li><a href="https://python-oracledb.readthedocs.io/en/latest/user_guide/installation.html" rel="nofollow noreferrer">https://python-oracledb.readthedocs.io/en/latest/user_guide/installation.html</a></li>
<li><a href="https://docs.oracle.com/en/cloud/paas/autonomous-database/adbsa/connecting-python-tls.html#GUID-CA446B91-BC48-4A66-BF69-B8D54B9CBAD4" rel="nofollow noreferrer">https://docs.oracle.com/en/cloud/paas/autonomous-database/adbsa/connecting-python-tls.html#GUID-CA446B91-BC48-4A66-BF69-B8D54B9CBAD4</a></li>
</ul>
|
<python><oracle-database><python-oracledb>
|
2022-12-20 17:32:18
| 1
| 609
|
Babak Tourani
|
74,866,740
| 1,073,786
|
Python lru_cache nested decorator
|
<p>I want to basically ignore a dict parameter in a method that want to decorate with <code>functools.lru_cache</code>.</p>
<p>This is the code I have written so far:</p>
<pre class="lang-py prettyprint-override"><code>import functools
class BlackBox:
"""All BlackBoxes are the same."""
def __init__(self, contents):
self._contents = contents
@property
def contents(self):
return self._contents
def __eq__(self, other):
return isinstance(other, type(self))
def __hash__(self):
return hash(type(self))
def lru_cache_ignore(func):
lru_decorator = functools.lru_cache(maxsize=2048)
@functools.wraps(func)
def wrapper_decorator(self, arg1, *args, **kwargs):
@lru_decorator
def helper(arg1_w: BlackBox, *args, **kwargs):
print(f"Cache miss {arg1_w} {args} {kwargs}")
# unpack the first argument from Blackbox
original_arg = arg1_w.contents
# Invoke the user function seamlessly
return func(self, original_arg, *args, **kwargs)
boxed_arg = BlackBox(arg1)
print(helper)
print(helper.__wrapped__)
return helper(boxed_arg, *args, **kwargs)
return wrapper_decorator
</code></pre>
<p>The intention of the usage is clear in the unit tests as follow:</p>
<pre class="lang-py prettyprint-override"><code>def test_actual_function_is_called():
my_mock = MagicMock()
class A:
@lru_cache_ignore
def my_fun(self, dict_arg, str_arg):
my_mock.call_fun(dict_arg, str_arg)
A().my_fun({}, 'test')
my_mock.call_fun.assert_called_once_with({}, 'test')
def test_second_call_is_cached():
my_mock = MagicMock()
my_mock.call_fun.return_value = [1]
class A:
@lru_cache_ignore
def my_fun(self, dict_arg, str_arg):
return my_mock.call_fun(dict_arg, str_arg)
a = A()
first_result = a.my_fun({}, 'test')
second_result = a.my_fun({}, 'test')
my_mock.call_fun.assert_called_once_with({}, 'test')
assert first_result == second_result
</code></pre>
<p>The issue is that the second test does not pass. The decorated function is called twice even though the input is the same.</p>
<p>This is the <code>pytest</code> output:</p>
<pre class="lang-bash prettyprint-override"><code>[CPython38-test] msg = "Expected 'call_fun' to be called once. Called 2 times.\nCalls: [call({}, 'test'), call({}, 'test')]."
[CPython38-test]
[CPython38-test] def assert_called_once_with(_mock_self, *args, **kwargs):
[CPython38-test] """assert that the mock was called exactly once and that that call was
[CPython38-test] with the specified arguments."""
[CPython38-test] self = _mock_self
[CPython38-test] if not self.call_count == 1:
[CPython38-test] msg = ("Expected '%s' to be called once. Called %s times.%s"
[CPython38-test] % (self._mock_name or 'mock',
[CPython38-test] self.call_count,
[CPython38-test] self._calls_repr()))
[CPython38-test] > raise AssertionError(msg)
[CPython38-test] E AssertionError: Expected 'call_fun' to be called once. Called 2 times.
[CPython38-test] E Calls: [call({}, 'test'), call({}, 'test')].
[CPython38-test]
[CPython38-test] /.../mock.py:956: AssertionError
[CPython38-test] ----------------------------- Captured stdout call -----------------------------
[CPython38-test] <functools._lru_cache_wrapper object at 0x7f8eacf31430>
[CPython38-test] <function lru_cache_ignore.<locals>.wrapper_decorator.<locals>.helper at 0x7f8eacf27e50>
[CPython38-test] Cache miss <ads.adapter.rule_engine.lru_utils.BlackBox object at 0x7f8eacb7e8e0> ('test',) {}
[CPython38-test] <functools._lru_cache_wrapper object at 0x7f8eacf31430>
[CPython38-test] <function lru_cache_ignore.<locals>.wrapper_decorator.<locals>.helper at 0x7f8eacf27e50>
[CPython38-test] Cache miss <ads.adapter.rule_engine.lru_utils.BlackBox object at 0x7f8eacb7e8e0> ('test',) {}
</code></pre>
<p>I can see that the functions that are invoked in the decorator are the same. What am I missing here?
I have adapted the answer from this other <a href="https://stackoverflow.com/a/30738279/1073786">post</a></p>
<p>Thank you!</p>
|
<python><python-3.x><decorator><python-decorators><python-lru-cache>
|
2022-12-20 17:28:32
| 1
| 2,184
|
Sanandrea
|
74,866,638
| 5,510,540
|
Numpy array of strings into an array of integers
|
<p>I have the following array:</p>
<pre><code>pattern = array([['[0, 0, 1, 0, 0]'],
['[0, 1, 1, 1, 1]'],
['[0, 1, 1, 1, 0]'],
['[0, 0, 1, 1, 1]'],
['[0, 0, 0, 1, 1]'],
['[0, 0, 1, 0, 1]'],
['[0, 0, 0, 0, 1]'],
['[1, 0, 1, 0, 0]'],
['[0, 1, 0, 1, 1]'],
['[0, 0, 1, 1, 0]'],
['[1, 1, 1, 1, 1]'],
['[1, 1, 1, 1, 0]']], dtype='<U15')
</code></pre>
<p>and I want to get it in non-string format as the following:</p>
<pre><code>import numpy
my_array = numpy.array([[0, 0, 1, 0, 0],
[0, 1, 1, 1, 1],
[0, 1, 1, 1, 0],
[0, 0, 1, 1, 1],
[0, 0, 0, 1, 1],
[0, 0, 1, 0, 1],
[0, 0, 0, 0, 1],
[1, 0, 1, 0, 0],
[0, 1, 0, 1, 1],
[0, 0, 1, 1, 0],
[1, 1, 1, 1, 1],
[1, 1, 1, 1, 0]
])
</code></pre>
<p>Any idea on how to do it non-manually?</p>
|
<python><arrays><string><numpy>
|
2022-12-20 17:19:52
| 1
| 1,642
|
Economist_Ayahuasca
|
74,866,582
| 11,036,109
|
Selenium, how to locate and click a particular button
|
<p>I'm using selenium to try and scrape a listing of products in this website:
<a href="https://www.zonacriativa.com.br/harry-potter" rel="nofollow noreferrer">https://www.zonacriativa.com.br/harry-potter</a></p>
<p>However, I'm having trouble getting the full listing of products. the page list 116 products, yet only a few are shown at a time. If I want to see the other ones, I need to click on the "Carregar mais Produtos" (load more products) button at the bottom a few times to get the full listing.</p>
<p>I'm having trouble locating this button, as it doesn't have an id and its class is a huge string. I've tried several things, like the examples below, but they don't seem to work. Any suggestions?</p>
<pre><code>driver.find_element("xpath", "//button[text()='Carregar mais Produtos']").click()
</code></pre>
<pre><code>driver.find_element("css selector", ".vtex-button__label.flex.items-center.justify-center.h-100.ph5").click()
</code></pre>
<pre><code>driver.find_element(By.CLASS_NAME, "vtex-button.bw1.ba.fw5.v-mid.relative.pa0.lh-solid.br2.min-h-small.t-action--small.bg-action-primary.b--action-primary.c-on-action-primary.hover-bg-action-primary.hover-b--action-primary.hover-c-on-action-primary.pointer").click()
</code></pre>
|
<python><selenium><selenium-webdriver><xpath><scroll>
|
2022-12-20 17:14:39
| 2
| 411
|
Alain
|
74,866,244
| 9,770,831
|
Try to return list of bills
|
<p>New to fast API, not able to find where I am doing wrong</p>
<p>Here is my router -</p>
<pre><code>@router.get('/{user_id}/salesbill', response_model=list[schemas.ShowSalesBill], status_code=status.HTTP_200_OK)
def get_all_salesbills(user_id: int, db: Session = Depends(get_db)):
objs = db.query(bill_model.SalesBillModel).filter(bill_model.SalesBillModel.user_id==user_id)
print(objs, "=================")
return objs
</code></pre>
<p>Here is my schemas --</p>
<pre><code>class BillBase(BaseModel):
bill_no: int
amount: int
about: str
class ShowSalesBill(BillBase):
id: int
class Config:
orm_mode = True
</code></pre>
<p>Here is my model -</p>
<pre><code>
class SalesBillModel(Base):
__tablename__ = "salesbill"
id = Column(Integer, primary_key=True, index=True)
bill_no = Column(Integer, index=True)
amount = Column(Integer, nullable=False)
about = Column(String(50), nullable=True)
user_id = Column(Integer, ForeignKey("users.id", ondelete='CASCADE'))
user = relationship("User", back_populates="salesbills")
</code></pre>
<p>So, I am trying to get all the bills added by the user but getting an error</p>
<blockquote>
<p>pydantic.error_wrappers.ValidationError: 1 validation error for ShowSalesBill
response
value is not a valid list (type=type_error.list)</p>
</blockquote>
<p>Getting this error</p>
<pre><code>SELECT salesbill.id AS salesbill_id, salesbill.bill_no AS salesbill_bill_no, salesbill.amount AS salesbill_amount, salesbill.about AS salesbill_about, salesbill.user_id AS salesbill_user_id
FROM salesbill
WHERE salesbill.user_id = ? =================
INFO: 127.0.0.1:47410 - "GET /1/salesbill HTTP/1.1" 500 Internal Server Error
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/home/mdhv/fastapi/env/lib/python3.10/site-packages/uvicorn/protocols/http/httptools_impl.py", line 419, in run_asgi
result = await app( # type: ignore[func-returns-value]
</code></pre>
|
<python><sqlalchemy><model><fastapi><pydantic>
|
2022-12-20 16:44:10
| 1
| 657
|
mdhv_kothari
|
74,866,235
| 7,437,143
|
Importing/re-using test files from another module in a pip package?
|
<h2>Context</h2>
<p>As part of a pip package, I wrote a test which I would like to partially re-use for different configurations. (A single configuration takes +- 15 minutes (on my device), so ideally I would like to be able to test them separately, instead of sequentially in a single test). However, for the pip package I have the following folder structure:</p>
<pre><code>Projectname/
|-- src/
| |-- Projectname
| |--|-- __init__.py
| |--|-- main.py
| |--|-- somefile.py
|
|-- tests/
| |-- some_testfolder/
| | |-- __init__.py
| | |-- test_one.py
| |-- another_testfolder/
| | |-- __init__.py
| | |-- test_two.py
| |-- __init__.py
|
|-- setup.py
|-- README
</code></pre>
<p>Analogous to <a href="https://github.com/PyCQA/flake8/blob/main/tests/__init__.py" rel="nofollow noreferrer">the flake8 folder structure</a>. And the example class that is to be imported is analogue to <a href="https://stackoverflow.com/a/29987595">this answer</a>.</p>
<pre class="lang-py prettyprint-override"><code>class BasetTestCase(unittest.TestCase):
def setUp(self):
# ran by all subclasses
def helper(self):
# help
class TestCaseOne(BaseTestCase):
def setUp(self):
# additional setup
super(TestCaseOne, self).setUp()
def test_something(self):
self.helper() # <- from base
</code></pre>
<p>However, when I try to import a class from <code>tests/test_one.py</code> I get the error:</p>
<pre><code>================================================================================================== ERRORS ==================================================================================================
__________________________________________________________________ ERROR collecting tests/sparse/MDSA/test_snn_results_with_adaptation.py __________________________________________________________________
ImportError while importing test module '/projectname/tests/some_testfolder/test_one.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
../../../anaconda/envs/snncompare/lib/python3.10/importlib/__init__.py:126: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
tests/sparse/MDSA/test_snn_results_with_adaptation.py:8: in <module>
from projectname.tests.some_testfolder.test_one import (
E ModuleNotFoundError: No module named 'projectname.tests'
====================================================================
</code></pre>
<p>I suspect this is because the root directory named <code>Projectname</code> does not have an <code>__init__</code> file. However, if I add one there, then I get the error:</p>
<pre><code>No module named 'Projectname.somefile'
</code></pre>
<p>I think this is because then it starts looking from the root directory, which only contains the <code>src</code> folder, but not the <code>Projectname</code> folder.</p>
<p>However, I would expect the code to import from the pip package named <code>Projectname</code>, instead of from the <code>Projectname</code> folder.</p>
<h2>Question</h2>
<p>Where should I place the <code>__init__</code> files such that I can import functions from other test files, whilst also preserving imports from the pip packages instead of directory names?</p>
|
<python><pip><init><directory-structure>
|
2022-12-20 16:43:23
| 1
| 2,887
|
a.t.
|
74,866,193
| 730,285
|
How do I keep scipy's loadmat() from implicitly converting complex data types into reals while keeping the original data types?
|
<p>Using SciPy 1.9.3 in Python 3.11.1, I am attempting to load data from a MATLAB <code>mat</code> file, which contains a couple matrices with complex doubles. When I execute <code>scipi.io.loadmat(filename, matlab_compatible=True)</code>, I get the following warning message:</p>
<pre><code>ComplexWarning: Casting complex values to real discards the imaginary part
</code></pre>
<p>And, indeed, the loaded matrices contain just the real component of the complex numbers. After trying some variations, I was finally able to get complex data loaded, with a caveat. It seems the issue is related to the <code>mat_dtype=True</code> parameter, which is implicitly set with <code>matlab_compatible=True</code>. If I set it to <code>False</code>, then I get complex numbers. However, it also means that various other loaded arrays in the file may be of a smaller storage class than they should be (because MATLAB will potentially use a smaller storage class to save space).</p>
<p>So, how do I keep the original storage class (as would be loaded in MATLAB), but also keep the complex numbers?</p>
|
<python><scipy>
|
2022-12-20 16:40:38
| 0
| 2,070
|
seairth
|
74,865,977
| 499,721
|
Wrong result when comparing ref and WeakMethod in Python?
|
<p>I'm using a <code>set</code> to hold weak references to callables. These can be functions, callable instances (i.e. using the <code>__call__</code> method), and bound methods. Following the <a href="https://docs.python.org/3/library/weakref.html#module-weakref" rel="nofollow noreferrer">docs</a>, I'm using <code>weakref.WeakMethod</code> for bound methods, and <code>weakref.ref</code> for other callables.</p>
<p>The issue I'm facing, is best explained by an example:</p>
<pre><code>from weakref import ref, WeakMethod
class Callbacks:
def method(self, *args, **kwargs):
print('method()')
def __call__(self, *args, **kwargs):
print('__call__()')
cb = Callbacks()
listeners = set()
listeners.add(ref(cb))
print(f'#listeners: expected = 1, actual = {len(listeners)}')
listeners.add(WeakMethod(cb.method))
print(f'#listeners: expected = 2, actual = {len(listeners)}')
</code></pre>
<p>This prints:</p>
<blockquote>
<p>#listeners: expected = 1, actual = 1 <br/>
#listeners: expected = 2, actual = 1</p>
</blockquote>
<p>Digging in, I see that indeed <code>WeakMethod(cb.method) == ref(cb)</code>, even though <code>cb.method != cb</code>. What am I missing?</p>
|
<python><weak-references>
|
2022-12-20 16:20:38
| 2
| 11,117
|
bavaza
|
74,865,894
| 7,581,507
|
Pandas rolling average that takes missing points into account
|
<p>I would like to calculate a moving average on a dataframe that contains missing "points"</p>
<p>For the following example, we can see that the period of <code>09:00:02</code> has an average of <code>1.0</code>, where i would expect it to have the value of <code>0.5</code> because i would like to treat missing points (such as <code>09:00:01</code>) as zeroes.</p>
<pre><code>In [1]: import pandas as pd
...: import numpy as np
In [2]: df_time = pd.DataFrame({'B': [1, 1, 2, np.nan, 4]},
...: index = [pd.Timestamp('20130101 09:00:00'),
...: pd.Timestamp('20130101 09:00:02'),
...: pd.Timestamp('20130101 09:00:03'),
...: pd.Timestamp('20130101 09:00:05'),
...: pd.Timestamp('20130101 09:00:06')])
...:
In [3]: print(df_time)
...:
B
2013-01-01 09:00:00 1.0
2013-01-01 09:00:02 1.0
2013-01-01 09:00:03 2.0
2013-01-01 09:00:05 NaN
2013-01-01 09:00:06 4.0
In [4]: rolling = df_time.rolling('2s').mean()
In [5]: print(rolling)
B
2013-01-01 09:00:00 1.0
2013-01-01 09:00:02 1.0
2013-01-01 09:00:03 1.5
2013-01-01 09:00:05 NaN
2013-01-01 09:00:06 4.0
</code></pre>
<p>Is there an option to do that with pandas rolling?</p>
<p>Of course there is the option to fill the missing dates with reindex - but i am looking for a "native" rolling option, assuming it can also be more efficient</p>
|
<python><pandas><moving-average>
|
2022-12-20 16:13:24
| 0
| 1,686
|
Alonme
|
74,865,883
| 453,673
|
How to use the settings package's `--settings` when using ArgParse?
|
<p><strong>Background:</strong><br />
Python has a <code>simple_settings</code> [package][1] which allows easy import of program settings from an external file. A program someone wrote, used to supply the settings to the program from the commandline as <code>python prog.py --settings=someFolder.settings</code>, and the <code>settings.py</code> file would be located in <code>./someFolder</code>. The person who wrote it, didn't use any argument parser. The <code>--settings</code> argument was just detected by the <code>simple_settings</code> package.</p>
<p><strong>Problem that started:</strong><br />
But when I started using [argparse][2], I was forced by <code>argparse</code> to introduce this line <code>parser.add_argument("-s", "--settings", help="To enter the settings to be used. Use as </code>python3 thisProg.py --settings=somefolder.settings<code>")</code>. Else it'd throw an error that <code>--settings</code> is unrecognized.</p>
<p>Problem is, now <code>simple_settings</code> cannot find the <code>--settings</code> flag anymore. I get an error when I try to access any settings stored in the <code>settings.py</code> file.</p>
<pre><code> File "run.py", line 8, in <module>
from prog import app_logger
File "/home/user/project/prog/app_logger.py", line 17, in <module>
prog_fh = logging.FileHandler(settings.LOG_FILE)
File "/home/user/.local/lib/python3.8/site-packages/simple_settings/core.py", line 95, in __getattr__
self.setup()
File "/home/user/.local/lib/python3.8/site-packages/simple_settings/core.py", line 69, in setup
self._load_settings_pipeline()
File "/home/user/.local/lib/python3.8/site-packages/simple_settings/core.py", line 76, in _load_settings_pipeline
strategy = self._get_strategy_by_file(settings_file)
File "/home/user/.local/lib/python3.8/site-packages/simple_settings/core.py", line 92, in _get_strategy_by_file
raise RuntimeError('Invalid settings file [{}]'.format(settings_file))
RuntimeError: Invalid settings file [someFolder.settings]
</code></pre>
<p><strong>How I tried solving it:</strong><br />
Wrote a small program to emulate <code>argv</code>, but it seems to be the wrong way to do it and does not work.</p>
<pre><code>from argparse import ArgumentParser
from simple_settings import settings
if __name__ == '__main__':
parser = ArgumentParser()
parser.add_argument("-d", "--desk", help="To enter the desk ID. Use as `python3 thisProg.py -d=<ID>`")
parser.add_argument("-s", "--settings", help="To enter the settings to be used. Use as `python3 thisProg.py --settings=somefolder.settings`")
args = parser.parse_args()
argv = []
argv.append(f"--settings={args.settings}")
serv = settings.RT_SERVICE,
sec = settings.CACHE_DATA_EXPIRY_SECONDS
</code></pre>
<p><strong>settings.py</strong></p>
<pre><code>RT_SERVICE = 'rt'
CACHE_DATA_EXPIRY_SECONDS = 260
LOG_FILE = 'somefile.log'
</code></pre>
<p><strong>Question:</strong><br />
What is the right way to supply the settings to the <code>simple_settings</code> program?<br />
[1]: <a href="https://pypi.org/project/simple-settings/" rel="nofollow noreferrer">https://pypi.org/project/simple-settings/</a>
[2]: <a href="https://docs.python.org/3/library/argparse.html" rel="nofollow noreferrer">https://docs.python.org/3/library/argparse.html</a></p>
|
<python><python-3.x><command-line-arguments><argparse>
|
2022-12-20 16:12:34
| 2
| 20,826
|
Nav
|
74,865,755
| 4,019,495
|
Why is the first expression interpreted as an int, and the second as a string?
|
<p>Using PyYaml</p>
<pre class="lang-py prettyprint-override"><code>import yaml
yaml.full_load(StringIO('a: 16:00:00'))
# {'a': 57600}
yaml.full_load(StringIO('a: 09:31:00'))
# {'a': '09:31:00'}
</code></pre>
<p>Why is there a difference in those behaviors?</p>
|
<python><yaml><pyyaml>
|
2022-12-20 16:00:56
| 2
| 835
|
extremeaxe5
|
74,865,570
| 4,343,563
|
Recombining chunked data in Python not working properly?
|
<p>I have a file that is 1.1GB. I need to transfer it to a s3 bucket in a different AWS environment. Due to permission restrictions that can't be changed, I can't use aws s3 cp to move the file or just upload it in s3. My only option is a code pipeline that can only upload files of 25MB or less. So, I split the file into smaller files using the command:</p>
<pre><code>split -b 23m file.dat chunkfolder/newfiles
</code></pre>
<p>After splitting the files, I am testing out how to recombine them to get it all as one dataframe. After reading in the original 1.1GB file, I have a shape of (3958282, 60). To get this I use the following code, where dtypes is a dictionary of the data type each column should be:</p>
<pre><code>file = '/to/path/file.dat'
data = open(file, "rb")
dt = data.read()
df = pd.read_csv(BytesIO(dt), sep = '|', dtype = dtypes)
print('orig shape:', df.shape)
</code></pre>
<p>Then, to read in the chunked files I do:</p>
<pre><code>#reading in first chunk with col names
file = '/to/path/chunkfolder/newfilesaa'
data = open(file, "rb")
dt = data.read()
full_df = pd.read_csv(BytesIO(dt), sep = '|', dtype = dtypes)
print('full_df shape:', full_df.shape)
print(full_df.head())
#reading in rest of files
filepath = '/to/path/chunkfolder'
files = os.listdir(filepath)
for file in files:
if file != 'newfilesaa':
print(file)
data = open(filepath + '/' + file, "rb")
dt = data.read()
df = pd.read_csv(BytesIO(dt), sep = '|', header = None, names = list(full_df), dtype = dtypes)
full_df = full_df.append(df)
print('combined shape:', full_df.shape)
print('combined: ', full_df.head())
</code></pre>
<p>But, I now get an error <code>cannot safely cast non-equivalent float64 to int64</code> from the first file in the in the loop getting read in.</p>
<p>When I don't specify dtype, it goes through and the full_df has a shape of (3958326, 60). This is larger than what it should be. And, I get numerous warning errors like:</p>
<pre><code>newfilesau
sys:1: DtypeWarning: Columns (11,12,21,23,24,29,32,58) have mixed types.Specify dtype option on import or set low_memory=False.
newfilesav
sys:1: DtypeWarning: Columns (30,40,42,43,51,58) have mixed types.Specify dtype option on import or set low_memory=False.
</code></pre>
<p>I tried to check which rows don't match using:</p>
<pre><code>print(full_df.merge(df,indicator = True, how='outer').loc[lambda x : x['_merge']!='both'])
</code></pre>
<p>But this gives the error</p>
<pre><code>ValueError: You are trying to merge on object and float64 columns. If you wish to proceed you should use pd.concat
</code></pre>
<p>How can I recombine these chunked files to get a dataframe that matches the original(unchunked) dat file? Why am I able to specify dtypes and the original file be read in correctly, but the chunked data can't be read in with dtypes?</p>
|
<python><chunks>
|
2022-12-20 15:45:00
| 1
| 700
|
mjoy
|
74,865,438
| 4,576,519
|
How to get a flattened view of PyTorch model parameters?
|
<p>I want to have a flattened view of the parameters belonging to my Pytorch model. Note it should be a <strong>view</strong> and not a copy of the parameters. In other words, when I modify the parameters in the view it should also modify the model parameter. I can get the model parameters as follows:</p>
<pre class="lang-py prettyprint-override"><code>import torch
model = torch.nn.Sequential(
torch.nn.Linear(1, 10),
torch.nn.Tanh(),
torch.nn.Linear(10, 1)
)
params = list(model.parameters())
for p in params:
print(p)
</code></pre>
<p>Here <code>params</code> is a list of tensors. I need it to be a 1D tensor of all the parameter instead. It is trivial to do</p>
<pre class="lang-py prettyprint-override"><code>params = torch.cat([p.flatten() for p in model.parameters()])
print(params.shape) # torch.Size([31])
</code></pre>
<p>However, now modifying a parameter in <code>params</code> then does not change the actual model parameter (since <code>torch.cat()</code> copies the memory). <strong>Is it possible to get a 1D tensor view of model parameters?</strong></p>
|
<python><memory><parameters><pytorch><neural-network>
|
2022-12-20 15:33:44
| 1
| 6,829
|
Thomas Wagenaar
|
74,865,427
| 6,534,180
|
Cannot import script functions from another script
|
<p>I've the following hierarchy:</p>
<pre><code>.root
/program/main.py
/functions/myfunctions.py
</code></pre>
<p>And using my main.py I want to use the functions present in myfunctions.py script. For that I've defined my <strong>PYTHONPATH</strong> AS ./root/functions and I have this as the import on my script:</p>
<pre><code>from functions import myfunctions as func
</code></pre>
<p>But I'm getting this error:</p>
<pre><code>ModuleNotFoundError: No module named 'functions'
</code></pre>
<p>How can I solve this?</p>
<p>Thanks</p>
|
<python><python-import>
|
2022-12-20 15:32:50
| 2
| 1,054
|
Pedro Alves
|
74,865,406
| 6,919,582
|
Reading complete metadata from images in Python
|
<p>I want to read the full list of metadata from images (e.g. jpg) as it is provided for example by ExifTool by Phil Harvey. I cannot use this command line tool and its Python wrapper exiftool due to security restrictions. Unfortunately other Python packages such as PIL only provide a fraction of the available metadata.</p>
<p>Is there any other Python library that provides the full image-metadata as it's returned by ExifTool?</p>
|
<python><image><metadata>
|
2022-12-20 15:31:30
| 2
| 675
|
MikeHuber
|
74,865,368
| 12,666,814
|
KL Divergence loss Equation
|
<p>I had a quick question regarding the KL divergence loss as while I'm researching I have seen numerous different implementations. The two most commmon are these two. However, while look at the mathematical equation, I'm not sure if mean should be included.</p>
<pre><code>KL_loss = -0.5 * torch.sum(1 + torch.log(sigma**2) - mean**2 - sigma**2)
OR
KL_loss = -0.5 * torch.sum(1 + torch.log(sigma**2) - mean**2 - sigma**2)
KL_loss = torch.mean(KL_loss)
</code></pre>
<p>Thank you!</p>
|
<python><machine-learning><pytorch>
|
2022-12-20 15:28:39
| 1
| 577
|
AliY
|
74,865,226
| 7,800,760
|
networkX DiGraph: list all neighbouring nodes disregarding edge direction
|
<p>Is there a better way to obtain the list of nodes that connect to a given one in a directed graph via a single edge (whether inbound or outbound)?</p>
<p>Here is what I came up with. First I build and draw a demo DiGraph:</p>
<pre><code>import networkx as nx
G = nx.DiGraph()
G.add_edges_from([
('A','B'),
('B','C'),
('C','D'),
('D','E'),
('F','B'),
('B','G'),
('B','D'),
])
nx.draw(
G,
pos=nx.nx_agraph.graphviz_layout(G, prog='dot'),
node_color='#FF0000',
with_labels=True
)
</code></pre>
<p><a href="https://i.sstatic.net/M5mnF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/M5mnF.png" alt="simple demo directed graph" /></a></p>
<p>and now I'd like to get all "neighbouring nodes for node 'B'" and currently I do so with:</p>
<pre><code>node= 'B'
incoming = [n for n in G.predecessors(node)]
outgoing = [n for n in G.successors(node)]
neighbours = incoming + outgoing
print(neighbours)
['A', 'F', 'C', 'G', 'D']
</code></pre>
<p>Is there a simpler, better, faster way of achieving this result?</p>
|
<python><networkx>
|
2022-12-20 15:17:21
| 2
| 1,231
|
Robert Alexander
|
74,865,185
| 10,062,025
|
Why Scraping a mobile url link using requests does not return all value?
|
<p>I am trying to scrape this mobile link <a href="https://www.tokopedia.com/now/sumo-beras-putih-kemasan-merah-5-kg" rel="nofollow noreferrer">https://www.tokopedia.com/now/sumo-beras-putih-kemasan-merah-5-kg</a> using a simple requests. That can only be open in app on mobile phone on tokopedia only.</p>
<p>It should return the price and product name however I am not finding it in the content of the request. Do I have to use selenium to wait for it to load? Please do help.</p>
<p>Currently the code is just a simple</p>
<pre><code>resp = requests.get("https://www.tokopedia.com/now/sumo-beras-putih-kemasan-merah-5-kg", headers = {'User-Agent':'Mozilla/5.0'})
</code></pre>
<p>I tried searching for the price using in however it's not there. What should I do?</p>
|
<python><python-requests>
|
2022-12-20 15:13:43
| 1
| 333
|
Hal
|
74,865,101
| 9,274,940
|
convert plotly figure to image and then use this image as normal PNG
|
<p>I'm trying to convert a plotly express figure to image, then use this image to save it on a power point slide. This is my code:</p>
<pre><code>import plotly.express as px
import plotly.io as pio
from pptx import Presentation
wide_df = px.data.medals_wide()
fig = px.bar(wide_df, x="nation", y=["gold", "silver", "bronze"], title="Wide-Form Input, relabelled",
labels={"value": "count", "variable": "medal"})
# Convert the figure to a bytes object
img_bytes = pio.to_image(fig, format='png')
ppt = Presentation(
"template.pptx"
)
slide = ppt.slides[3]
placeholder = slide.placeholders[13]
placeholder.insert_picture(
img_bytes
)
</code></pre>
<p>But I'm getting the following <strong>error</strong> message:</p>
<pre><code>'bytes' object has no attribute 'seek'
</code></pre>
|
<python><plotly><powerpoint>
|
2022-12-20 15:07:31
| 1
| 551
|
Tonino Fernandez
|
74,865,069
| 10,557,442
|
How to run in parallel several docker services on a docker-compose file and then check for the global status code?
|
<p>I need to test a python application on different environments (different python versions, different pip packages) with pytest. For that, I created a Dockerfile which accepts different build arguments, and fill those args from a docker compose template, like this:</p>
<pre><code>version: '3.9'
services:
python37:
image: py3.7.0
build:
context: .
args:
PYTHON_VER: "3.7.0"
PIP_VER: "19.0.3"
NUMPY_VER: "1.17.5"
PANDAS_VER: "0.24.0"
SKLEARN_VER: "0.19.2"
python38:
image: py3.8.0
build:
context: .
args:
PYTHON_VER: "3.8.0"
PIP_VER: "20.3.2"
NUMPY_VER: "1.20.0"
PANDAS_VER: "1.0.0"
SKLEARN_VER: "0.24.2"
python310:
image: py3.10.0
build:
context: .
args:
PYTHON_VER: "3.10.0"
PIP_VER: "22.3.1"
NUMPY_VER: "1.24.0"
PANDAS_VER: "1.5.2"
SKLEARN_VER: "1.2.0"
</code></pre>
<p>The CMD in the Dockerfile just calls pytest over some test I've defined.
What I want is to run all services, as in <code>docker-compose up</code> but in a way that if any of the services fails its tests, then the whole execution should also fail.</p>
<p>I've tried <code>docker-compose up --exit-code-from</code> but this only works for a particular service and I want to check the global exit code</p>
<p>Also tried <code>docker-compose up --abort-on-container-exit</code> but the returning code of this is seems to be always 0.</p>
<p>How can I achieve that?</p>
|
<python><docker><docker-compose><continuous-integration><health-check>
|
2022-12-20 15:04:56
| 0
| 544
|
Dani
|
74,864,992
| 14,729,820
|
How to handle txt file using pandas and save it is results
|
<p>I have this input txt file that contains (image_name, other meta data not needed, and last column tokens separated by | character )
example of input: <strong>input.txt</strong></p>
<pre><code>a01-000u-00 ok 154 19 408 746 1661 89 A|MOVE|to|stop|Mr.|Gaitskell|from
a01-000u-01 ok 156 19 395 932 1850 105 nominating|any|more|Labour|life|Peers
a01-000u-02 ok 157 16 408 1106 1986 105 is|to|be|made|at|a|meeting|of|Labour
</code></pre>
<p>The expected output I want : is <strong>expected_out.txt</strong> or data frame that has only <code>image_name and text</code></p>
<pre><code>a01-000u-00.png A MOVE to stop Mr. Gaitskell from
a01-000u-01.png nominating any more Labour life Peers
a01-000u-02.png is to be made at a meeting of Labour
</code></pre>
<p>the script to process the file is as below :</p>
<pre><code>train_text = 'input.txt'
def load_data() -> pd.DataFrame:
data = []
with open(train_text) as infile:
for line in infile:
file_name, _, _, _, _, _, _, _, text = line.strip().split(' ')
data.append((file_name, process_last_column(text)))
df = pd.DataFrame(data, columns=['file_name', 'text'])
df.rename(columns={0: 'file_name', 8: 'text'}, inplace=True)
df['file_name'] = df['file_name'].apply(lambda x: x + '.png')
df = df[['file_name', 'text']]
return df
def process_last_column(input_text: str) -> str:
return input_text.replace('|', ' ')
</code></pre>
<p>The error I got is :</p>
<pre><code>Traceback (most recent call last):
File "train.py", line 205, in <module>
main()
File "train.py", line 146, in main
df = load_data()
File "train.py", line 108, in load_iam
file_name, _, _, _, _, _, _, _, text = line.strip().split(" ")
ValueError: not enough values to unpack (expected 9, got 1)
</code></pre>
|
<python><pandas><dataframe><text><pytorch>
|
2022-12-20 14:59:17
| 2
| 366
|
Mohammed
|
74,864,929
| 739,619
|
Number of replacements performed by str.replace
|
<p>I'd have expected to find at least a question on this subject, instead... nothing.</p>
<p>I am writing a script for the smart patching of a project, so I have a dictionary where keys are file paths and values are lists of tuples, and each tuple is an original/replacement pair. I loop on files, slurp each file, apply changes to the contents, rename the original, and write the changed contents to the original name. Pretty standard stuff.</p>
<p>The changes I have to apply were for a two month old version of the project, so I decided I did not want to use diff and patch: the sections I am changing now may have moved by many lines, but they <em>shouldn't</em> be changed – mostly.</p>
<p>This means I have to check that each diff <strong>is</strong> correctly applied – but it seems there is no way to obtain the number of replacements from <code>str.replace()</code>.</p>
<p>Of course I can compare the string before and after the replacement, but this seems inefficient, and does not allow to check the number of replacements performed.</p>
<p>Annoyingly, regexps have a <code>subn()</code> method that does exactly this, but my patterns may contain lots of special characters, so using regexps would be a pain. And apparently there is no equivalent method for strings.</p>
<p>I'm going to write a function to do a search/replace in a loop for each replacement – but it seems a sort of an overkill... Does anybody have a better idea?</p>
<p>EDIT: oh, here's a <a href="https://stackoverflow.com/questions/56659742/how-to-count-no-of-operations-performed-by-str-replace">similar question</a>, but the replacement pattern is a regex, so <code>re.subn()</code> is an acceptable solution...</p>
|
<python><python-3.x>
|
2022-12-20 14:53:37
| 0
| 533
|
Francesco Marchetti-Stasi
|
74,864,895
| 867,889
|
A pythonic way to init inherited dataclass from an object of parent type
|
<p>Given a dataclass structure:</p>
<pre><code>@dataclass
class RichParent:
many_fields: int = 1
more_fields: bool = False
class Child(RichParent):
some_extra: bool = False
def __init__(seed: RichParent):
# how to init more_fields and more_fields fields here without referencing them directly?
# self.more_fields = seed.more_fields
# self.many_fields = seed.many_fields
pass
</code></pre>
<p>What would be the right way to shallow copy seed object fields into the new child object?
I wouldn't mind even converting seed to Child type since there is no use for parent object after initialization.</p>
<p>Why do I do that? I want to avoid changing Child class every time RichParent has a change as long as parent stays a plain dataclass.</p>
|
<python><initialization><python-dataclasses>
|
2022-12-20 14:51:22
| 1
| 10,083
|
y.selivonchyk
|
74,864,878
| 17,903,744
|
Is there a way to make a "oscilloscope styled" double cursor delta while plotting in matplotlib?
|
<p>I was looking for more information about whether or not it was possible to make a "oscilloscope styled" double cursor delta (like the example below) while plotting with <code>matplotlib</code>?</p>
<p><a href="https://i.sstatic.net/A3IwM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/A3IwM.png" alt="Oscilloscope example" /></a></p>
|
<python><matplotlib>
|
2022-12-20 14:50:14
| 1
| 345
|
Guillaume G
|
74,864,809
| 148,298
|
Is it possible to debug python code running under an embedded interpreter without attaching it to its PID?
|
<p>I'm debugging a script running in embedded interpreter hosted in an application from VS Code using PDB. Constantly launching the host and selecting its process ID from a dialog is a bit cumbersome. Sometimes its windows hide behind the IDE, which upsets my window placements in order to bring it to the foreground. The attaching process also makes it impossible to debug code in certain scenarios where the target script executes during the host's startup, which usually occurs well before I have a chance to attach to it.</p>
<p>Is there an alternative solution, such as a plugin or VSCode configuration? In C and C++, it's possible to insert a special function to break into the debugger. Does this exist for Python?</p>
|
<python><vscode-debugger><pdb>
|
2022-12-20 14:45:29
| 1
| 9,638
|
ATL_DEV
|
74,864,680
| 5,740,397
|
Plotly: refresh data passed to px.line via dropdown from Database
|
<p>I want to use the plotly express lineplot to create a simple interactive plot that refreshes its data when choosing a new item in a dropdown box.</p>
<p>I have created a simple function <code>prepare_dashboard_data</code> which loads data from a database and does some filtering. The dashboard is showing the data from the initial dataload but I am completely lost on how to create a callback to the <code>px.line</code> function, such that the plot is updated with new data loaded from the database.</p>
<p>I have taken inspiration from this <a href="https://stackoverflow.com/questions/61461583/plotly-dropdown-selection-does-not-update-plots-correctly/61463810#61463810">post</a> where <code>plotly.graph_objs</code> are used. But i quite like the functionality of the default lineplot.
And in the example a preloaded dataframe is simply filtered based on the dropdown choice. This is not what I want.</p>
<p>I have some limited knowledge with ipython widgets and the observer pattern, but I am completely lost in this case. Here is a rough sceme of my current code:</p>
<pre><code>import plotly.graph_objs as go
import plotly.express as px
def prepare_dashboard_data(serverconfig, shopname = "myshop"):
# Data is loaded form a db
# Transformed filtered by shopname and so on ...
# This returns a datframe with has an timestamp as an index and many items as columns
return df
# Prepare Dropdown menues
shopnames = df.columns # This just gives a list of available shopnames.
# plotly start
fig = go.Figure()
fig = px.line(prepare_dashboard_data(serverconfig=conf, shopname = "myshop"),width=1600, height=800)
# menu setup
updatemenu= []
# buttons for menu 1, shopnames
buttons=[]
# should i create traces for each shopnames alread here:
for webshop_name in shopnames :
buttons.append(dict(method='update',
label=webshop_name,
visible=True,
args=[#{I guess here goes the magic?}
]
)
)
# some adjustments to the updatemenus
updatemenu=[]
your_menu=dict()
updatemenu.append(your_menu)
updatemenu[0]['buttons']=buttons
updatemenu[0]['direction']='down'
updatemenu[0]['showactive']=True
fig.update_layout(
autosize=False,
width=1800,
height=800,
margin=dict(
l=50,
r=50,
b=100,
t=100,
pad=4
),
updatemenus=updatemenu,
paper_bgcolor="LightSteelBlue",
)
fig.show()
</code></pre>
<p>Any help would be really appreciated. I have tried to make sense of the documentation but i think i would need a pointer. Currently i generate my plots in a Vscode/jupyter notebook, not as a standalone app .</p>
|
<python><plotly><dashboard><reload><interactive>
|
2022-12-20 14:34:58
| 1
| 565
|
NorrinRadd
|
74,864,621
| 7,791,963
|
When writing in Python a dictionary to a YAML file, how to make sure the string in the YAML file is split based on '\n'?
|
<p>I have a long string in a dictionary which I will dump to a YAML file.</p>
<p>As an example</p>
<pre><code>d = {'test': {'long_string':'this is a long string that does not succesfully split when it sees the character '\n' which is an issue'}}
ff = open('./test.yaml', 'w+')
yaml.safe_dump(d, ff)
</code></pre>
<p>Which produces the following output in the YAML file</p>
<pre><code> test:
long_string: "this is a long string that does not successfully split when it sees\
\ the character '\n' which is an issue"
</code></pre>
<p>I want the string which is inside the YAML file to only be split into a new line when it sees the "\n", also, I don't want any characters indicating that it's a newline. I want the output as follows:</p>
<pre><code> test:
long_string: "this is a long string that does not successfully split when it sees the character ''
which is an issue"
</code></pre>
<p>What do I need to do to make the <code>yaml.dump</code> or <code>yaml.safe_dump</code> fulfill this?</p>
|
<python><yaml>
|
2022-12-20 14:30:53
| 1
| 697
|
Kspr
|
74,864,501
| 4,879,688
|
How can I get PID(s) of a running Jupyter notebook?
|
<p>I have run several (<code>n</code>) Jupyter notebooks in parallel. As they use FEniCS, each has spawned a number (much more than <code>m - 1</code>) of threads (PIDs) which can occupy all available cores (<code>m</code>). Now I have much more than <code>m * n</code> threads (of at least <code>n</code> processes) which compete for resources (cores, RAM etc.). That significantly damps their performance.</p>
<p>As I need results of some of the notebooks more urgently the of the others, I would like to set priorities of their PIDs accordingly (<code>renice</code> them). Sadly I have no idea which PIDs belongs to which notebook. How can I verify that without interrupting the notebooks?</p>
<p>I know I can stop all my notebooks but one (the least important), use <code>htop</code> to <code>renice</code> the processes which still use CPU, then run the other notebooks one by one and repeat the procedure. But is there a way to do it while all the notebooks are running?</p>
|
<python><linux><jupyter-notebook><process><fenics>
|
2022-12-20 14:21:13
| 0
| 2,742
|
abukaj
|
74,864,419
| 11,687,381
|
Failed building wheel for scikit-image
|
<p>I'm trying to install scikit-image library. As mentioned in official docs I'm running the command:</p>
<pre><code>python -m pip install -U scikit-image
</code></pre>
<p>What I have already:</p>
<ol>
<li>I have a virtual environment created for my project named env</li>
<li>I have numpy installed in my virtualenv</li>
<li>Python version 3.11.1</li>
</ol>
<p>I'm attaching logs which I think should be relevant for diagnosing the error.</p>
<pre><code>error: subprocess-exited-with-error
× Building wheel for scikit-image (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [760 lines of output]
setup.py:9: DeprecationWarning:
`numpy.distutils` is deprecated since NumPy 1.23.0, as a result
of the deprecation of `distutils` itself. It will be removed for
Python >= 3.12. For older Python versions it will remain present.
It is recommended to use `setuptools < 60.0` for those Python versions.
For more details, see:
https://numpy.org/devdocs/reference/distutils_status_migration.html
from numpy.distutils.command.build_ext import build_ext as npy_build_ext
C:\Users\apoor\AppData\Local\Temp\pip-build-env-nxyulh5k\overlay\Lib\site-packages\pythran\tables.py:4555: FutureWarning: In the future `np.bytes` will be defined as the corresponding NumPy scalar. (This may have returned Python scalars in past versions.
obj = getattr(themodule, elem)
Partial import of skimage during the build process.
There are a lot of logs between these two sections!
INFO: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils
INFO: customize MSVCCompiler
INFO: customize MSVCCompiler using ConditionalOpenMP
A lot more logs...
CPU baseline :
Requested : 'min'
Enabled : SSE SSE2 SSE3
Flags : none Extra checks: none
CPU dispatch : Requested : 'max -xop -fma4' Enabled : SSSE3 SSE41 POPCNT SSE42 AVX F16C FMA3 AVX2 AVX512F AVX512CD AVX512_SKX AVX512_CLX AVX512_CNL AVX512_ICL Generated : none
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for scikit-image
Failed to build scikit-image
ERROR: Could not build wheels for scikit-image, which is required to install pyproject.toml-based projects
</code></pre>
|
<python><numpy><scikit-image>
|
2022-12-20 14:15:31
| 1
| 609
|
Apoorv pandey
|
74,864,265
| 13,505,957
|
Selenium can't detect text in html from Google Maps
|
<p>I am trying to scrape travel times between two points from Google Maps:
<a href="https://i.sstatic.net/Q8yt1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Q8yt1.png" alt="enter image description here" /></a></p>
<p>I have used inspect to find the XPath of the travel time in the HTML file:
<a href="https://i.sstatic.net/rShRt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rShRt.png" alt="enter image description here" /></a></p>
<p>The following code raises an exception:</p>
<pre><code>from selenium.webdriver.chrome.service import Service
from selenium import webdriver
from selenium.webdriver.common.by import By
url = 'https://www.google.com/maps/dir/35.7290613,+51.238014/35.7504171,51.2244444/@35.7296931,51.2580754,13z/data=!4m8!4m7!1m5!1m1!1s0x0:0x3a1893ebcae30b2e!2m2!1d51.238014!2d35.7290613!1m0'
#initialize web driver
with webdriver.Chrome() as driver:
#navigate to the url
driver.get(url)
#find element by xpath
myDiv = driver.find_element(By.XPATH, '/html/body/div/div[12]/div[2]/div/div[2]/div/div[3]/button[1]/div/div[1]/div/h1/span[2]/span/span')
print(myDiv)
print(myDiv.text)
</code></pre>
<p>The exception:</p>
<pre><code>NoSuchElementException: no such element: Unable to locate element: {"method":"xpath","selector":"/html/body/div/div[12]/div[2]/div/div[2]/div/div[3]/button[1]/div/div[1]/div/h1/span[2]/span/span"} (Session info: chrome=108.0.5359.125)
</code></pre>
<p>Are there any workarounds?</p>
|
<python><selenium><web-scraping><xpath><webdriverwait>
|
2022-12-20 14:03:43
| 1
| 1,107
|
ali bakhtiari
|
74,864,234
| 18,432,809
|
how create a python service, executed when needed
|
<p>I want to create a notification service, that will be used in diferent applications. But I would run it only sometimes. How I achieve this? I will try to use webhooks, informing my service to notify something</p>
<p>basically, I don't keep my service running. I would deploy this in heroku, and heroku sleeps my app when it doen'st receive any request, but I think there is a methodology (I mean, a type of application that runs when needed) about this.</p>
<p>So, I would create webhooks in my main app that inform my service when notify, and I want to run the service only when this communication occurs.
By the way, I never use webhooks, so maybe I misleading something</p>
|
<python><service><webhooks>
|
2022-12-20 14:01:03
| 1
| 389
|
vitxr
|
74,864,198
| 10,748,412
|
Django - Get all Columns from Both tables
|
<p>models.py</p>
<pre><code>from django.db import models
class DepartmentModel(models.Model):
DeptID = models.AutoField(primary_key=True)
DeptName = models.CharField(max_length=100)
def __str__(self):
return self.DeptName
class Meta:
verbose_name = 'Department Table'
class EmployeeModel(models.Model):
Level_Types = (
('A', 'a'),
('B', 'b'),
('C', 'c'),
)
EmpID = models.AutoField(primary_key=True)
EmpName = models.CharField(max_length=100)
Email = models.CharField(max_length=100,null=True)
EmpLevel = models.CharField(max_length=20, default="A", choices=Level_Types)
EmpPosition = models.ForeignKey(DepartmentModel, null=True, on_delete=models.SET_NULL)
class Meta:
verbose_name = 'EmployeeTable' # Easy readable tablename - verbose_name
def __str__(self):
return self.EmpName
</code></pre>
<p>This is my models.py file</p>
<p>I have 2 tables and want to join both of them. also want all columns from both tables.</p>
<pre><code>emp_obj = EmployeeModel.objects.select_related('EmpPosition'). \
only('EmpID', 'EmpName', 'Email','EmpLevel', 'DeptName')
</code></pre>
<p>I have tried to do this but there is an error saying <code>EmployeeModel has no field named 'DeptName'</code></p>
<p>How can I get all these columns?</p>
|
<python><django><django-rest-framework><orm>
|
2022-12-20 13:57:16
| 1
| 365
|
ReaL_HyDRA
|
74,864,195
| 3,214,538
|
How do I create relationships across different schemas in SQLAlchemy ORM?
|
<p>I'm writing a Python application for an existing MySQL database ecosystem with tables spread across multiple schemas that I can't change without breaking a lot of legacy code (i.e. it's not an option). With my previous approach (see below) I was able to address all tables regardless of their schema so it didn't matter for the actual python code which schema a given table was in. However for my current project I want to implement some relationships that reach across different schemas and I noticed that this doesn't work.
Is there a way to implement the different table classes in a way where the application using them doesn't need to know the schema and still support relationships between all of them?</p>
<pre class="lang-py prettyprint-override"><code>from config import DB_USER, DB_PASS, DB_IP, DB_PORT
from sqlalchemy import create_engine, MetaData, Column, ForeignKey, Integer, String
from sqlalchemy.orm import declarative_base, sessionmaker, scoped_session, Session
DB_URI = f'mysql+pymysql://{DB_USER}:{DB_PASS}@{DB_IP}:{DB_PORT}'
SchemaA = declarative_base()
SchemaB = declarative_base()
engineA = create_engine(DB_URI + '/schemaA')
engineB = create_engine(DB_URI + '/schemaB')
class RoutingSession(Session):
def get_bind(self, mapper=None, clause=None):
if mapper and issubclass(mapper.class_, SchemaA):
return engineA
elif mapper and issubclass(mapper.class_, SchemaB):
return engineB
session_factory = sessionmaker(class_=RoutingSession)
Session = scoped_session(session_factory)
SchemaA.metadata = MetaData(bind=engineA)
SchemaB.metadata = MetaData(bind=engineB)
class Table1(SchemaA):
__tablename__ = 'table1'
__bind_key__ = 'schemaA'
id = Column(Integer, primary_key=True)
value = Column(String)
class Table2(SchemaB):
__tablename__ = 'table2'
__bind_key__ = 'schemaB'
id = Column(Integer, primary_key=True)
table1_id = Column(Integer, ForeignKey('schemaA.table1.id'))
value = Column(String)
# relationships
tab1 = relationship('Table1', backref=backref('tab2'))
SchemaA.metadata.create_all()
SchemaB.metadate.create_all()
with Session() as session:
t1 = session.query(Table1).get(1) # This works
t2 = t1.tab2 # This doesn't work
</code></pre>
<p>I believe this has something to do with SchemaA and SchemaB not sharing the same declarative_base but I don't know how to fix this. Any ideas would be greatly appreciated. Also the example is a simplified version of my original project (that works) with changed names and the added relationship. I didn't run <em>this</em> code specifically, so it could potentially contain some errors and typos but the concept itself is sound.</p>
<p>edit: The error message I'm getting is:</p>
<blockquote>
<p>sqlalchemy.exc.InvalidRequestError: When initializing mapper mapped
class Table2->tab2, expression 'Table1' failed to locate a name
('Table1'). If this is a class name, consider adding this
relationship() to the <class 'Table2'> class after both dependent
classes have been defined.</p>
</blockquote>
<p>I already tried the obvious solution of simply moving the class Table2
in front of the class Table1 but that didn't fix my problem (error message remained the same).
Also in case it matters, in my non-simplified original code the engines, the RoutingSession and the Metadata binding all happen in a file <code>database.py</code> while the table classes are all defined in a file <code>models.py</code> importing Session and the schemas from database.</p>
|
<python><sqlalchemy><orm>
|
2022-12-20 13:57:02
| 1
| 443
|
Midnight
|
74,864,096
| 5,110,870
|
Arcgis & Python: Azure YAML pipeline fails with "Command 'krb5-config --libs gssapi' returned non-zero exit status 127."
|
<p>I am deploying my Python code to an Azure Function with Python runtime 3.9, using the following yaml pipeline:</p>
<pre class="lang-yaml prettyprint-override"><code>trigger:
branches:
include:
- dev
- test
- uat
- prod
pool:
vmImage: ubuntu-latest
stages:
- stage: Build
displayName: Build stage
condition: always()
jobs:
- job: Build_Stage
displayName: Build Stage
steps:
#Define Python Version
- task: UsePythonVersion@0
displayName: "Setting Python version to 3.9"
inputs:
versionSpec: '3.9'
architecture: 'x64'
#Install Python packages
- bash: |
if [ -f extensions.csproj ]
then
dotnet build extensions.csproj --output ./bin
fi
pip install --target="./.python_packages/lib/site-packages" -r ./requirements.txt
displayName: 'Install Python packages in venv'
#Archive files to create zip folder for the build
- task: ArchiveFiles@2
displayName: 'Archive files to create zip folder for the build'
inputs:
rootFolderOrFile: '$(Build.SourcesDirectory)' #Change to '$(Build.SourcesDirectory)/FolderNameWithFunctionAppInside' if a new parent folder is added
includeRootFolder: false
archiveType: 'zip'
archiveFile: '$(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip'
replaceExistingArchive: true
#Publish zip to Azure Pipeline
- task: PublishPipelineArtifact@1
displayName: 'Publish zip Package to Azure Pipeline'
inputs:
targetPath: '$(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip'
artifactName: $(artifactName)
artifactType: 'pipeline'
#Deploy to DEV
- stage: DEV
displayName: Deploy to DEV
dependsOn: Build
variables:
- group: <my-variable-group>
condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/dev'))
jobs:
- job: Deploy
steps:
#Download artifact to make it available to this stage
- task: DownloadPipelineArtifact@2
inputs:
source: 'current'
path: '$(Pipeline.Workspace)'
#Deploy
- task: AzureFunctionApp@1
displayName: Deploy Linux function app
inputs:
azureSubscription: $(azureRmConnection.Id)
appType: 'functionAppLinux'
appName: $(functionAppName)
package: '$(Pipeline.Workspace)/**/*.zip'
deploymentMethod: auto
#Deploy to TEST
- stage: TEST
displayName: Deploy to TEST
dependsOn: Build
variables:
- group: <my-variable-group>
condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/test'))
jobs:
- job: Deploy
steps:
#Download artifact to make it available to this stage
- task: DownloadPipelineArtifact@2
inputs:
source: 'current'
path: '$(Pipeline.Workspace)'
#Deploy
- task: AzureFunctionApp@1
displayName: Deploy Linux function app
inputs:
azureSubscription: $(azureRmConnection.Id)
appType: 'functionAppLinux'
appName: $(functionAppName)
package: '$(Pipeline.Workspace)/**/*.zip'
deploymentMethod: auto
#Deploy to UAT
- stage: UAT
displayName: Deploy to UAT
dependsOn: Build
variables:
- group: <my-variable-group>
condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/uat'))
jobs:
- job: Deploy
steps:
#Download artifact to make it available to this stage
- task: DownloadPipelineArtifact@2
inputs:
source: 'current'
path: '$(Pipeline.Workspace)'
#Deploy
- task: AzureFunctionApp@1
displayName: Deploy Linux function app
inputs:
azureSubscription: $(azureRmConnection.Id)
appType: 'functionAppLinux'
appName: $(functionAppName)
package: '$(Pipeline.Workspace)/**/*.zip'
deploymentMethod: auto
#Deploy to PROD
- stage: PROD
displayName: Deploy to PROD
dependsOn: Build
variables:
- group: <my-variable-group>
condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/prod'))
jobs:
- job: Deploy
steps:
#Download artifact to make it available to this stage
- task: DownloadPipelineArtifact@2
inputs:
source: 'current'
path: '$(Pipeline.Workspace)'
#Deploy
- task: AzureFunctionApp@1
displayName: Deploy Linux function app
inputs:
azureSubscription: $(azureRmConnection.Id)
appType: 'functionAppLinux'
appName: $(functionAppName)
package: '$(Pipeline.Workspace)/**/*.zip'
deploymentMethod: auto
</code></pre>
<p>When I trigger this pipeline, my build stage fails with an error message pointing at <code>krb5-config</code>:</p>
<pre class="lang-bash prettyprint-override"><code>
File "/opt/hostedtoolcache/Python/3.9.16/x64/lib/python3.9/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 351, in <module>
main()
File "/opt/hostedtoolcache/Python/3.9.16/x64/lib/python3.9/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 333, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "/opt/hostedtoolcache/Python/3.9.16/x64/lib/python3.9/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 118, in get_requires_for_build_wheel
return hook(config_settings)
File "/tmp/pip-build-env-j844587_/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 338, in get_requires_for_build_wheel
return self._get_build_requires(config_settings, requirements=['wheel'])
File "/tmp/pip-build-env-j844587_/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 320, in _get_build_requires
self.run_setup()
File "/tmp/pip-build-env-j844587_/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 335, in run_setup
exec(code, locals())
File "<string>", line 109, in <module>
File "<string>", line 22, in get_output
File "/opt/hostedtoolcache/Python/3.9.16/x64/lib/python3.9/subprocess.py", line 424, in check_output
return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
File "/opt/hostedtoolcache/Python/3.9.16/x64/lib/python3.9/subprocess.py", line 528, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command 'krb5-config --libs gssapi' returned non-zero exit status 127.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
##[error]Bash exited with code '1'.
</code></pre>
<p>Is there a way I can fix this?</p>
<p><strong>EDIT</strong></p>
<p>The requirements are as follows:</p>
<pre class="lang-py prettyprint-override"><code>pip==22.3.1
azure-functions
azure-functions-durable
datetime
requests==2.28.1
arcgis==2.0.1
openpyxl==3.0.10
numpy
</code></pre>
|
<python><azure><arcgis><azure-pipelines-yaml><gssapi>
|
2022-12-20 13:49:21
| 1
| 7,979
|
FaCoffee
|
74,864,059
| 517,752
|
Greengrass V2 generates extra `stdout` logs
|
<p>I deploy greengrass components into my EC2 instance. The deploy greengrass components have been generating logs which wraps around my python log.</p>
<p>what is causing the "wrapping" around it? how can I remove these wraps.</p>
<p>For example, the logs in <strong>bold</strong> are wraps the original python log.
The log in <em>emphasis</em> is generated by my python log formatter.</p>
<p><strong>2022-12-13T23:59:56.926Z [INFO] (Copier) com.bolt-data.iot.RulesEngineCore: stdout.</strong> <em>[2022-12-13 23:59:56,925][DEBUG ][iot-ipc] checking redis pub-sub health (io_thread[140047617824320]:pub_sub_redis:_connection_health_nanny_task:61).</em>
<strong>{scriptName=services.com.bolt-data.iot.RulesEngineCore.lifecycle.Run, serviceName=com.bolt-data.iot.RulesEngineCore, currentState=RUNNING}</strong></p>
<p>The following is my python log formatter.</p>
<pre><code> formatter = logging.Formatter(
fmt="[%(asctime)s][%(levelname)-7s][%(name)s] %(message)s (%(threadName)s[%(thread)d]:%(module)s:%(funcName)s:%(lineno)d)"
)
# TODO: when we're running in the lambda function, don't stream to stdout
_handler = logging.StreamHandler(stream=stdout)
_handler.setLevel(get_level_from_environment())
_handler.setFormatter(formatter)
</code></pre>
|
<python><amazon-web-services><logging><python-logging><aws-iot-greengrass>
|
2022-12-20 13:46:50
| 1
| 1,342
|
Pankesh Patel
|
74,863,976
| 10,792,871
|
Removing _x000D_ from Text Records in Pandas Dataframe
|
<p>I have a Pandas data frame that I imported from an Excel file. One of the columns looks like the following:</p>
<pre><code>Link
=========
A-324
A-76_x000D_\nA-676
A-95
A-95_x00D_n\nA-495
...
</code></pre>
<p>I was able to use regex to remove the <code>\n</code> characters, but I am unable to remove <code>_x000D_</code>. Does anyone know what this is? Why am I unable to use traditional remove methods?</p>
<p>Here is what I've done:</p>
<pre><code>dat['Link'] = dat['Link'].replace("_x000D_", " ")
dat['Link'] = dat['Link'].replace(r'\s+|\\n', ' ', regex=True)
</code></pre>
|
<python><python-3.x><excel><pandas><data-cleaning>
|
2022-12-20 13:41:01
| 4
| 724
|
324
|
74,863,925
| 15,376,262
|
Check if a value exists per group and remove groups without this value in a pandas df
|
<p>I have a pandas df that looks like this:</p>
<pre><code>import pandas as pd
d = {'value1': [1, 1, 1, 2, 3, 3, 4, 4, 4, 4], 'value2': ['A', 'B', 'C', 'C', 'A', 'B', 'B', 'A', 'A', 'B']}
df = pd.DataFrame(data=d)
df
</code></pre>
<p>Per group in column <code>value1</code> I would like to check if that group contains at least one value 'C' in column <code>value2</code>. If a group doesn't have a 'C' value, I would like to exclude that group</p>
<pre><code> value1 value2
1 A
1 B
1 C
2 C
3 A
3 B
4 B
4 A
4 A
4 B
</code></pre>
<p>The resulting df should look like this:</p>
<pre><code> value1 value2
1 A
1 B
1 C
2 C
</code></pre>
<p>What's the best way to achieve this?</p>
|
<python><pandas><dataframe>
|
2022-12-20 13:36:08
| 2
| 479
|
sampeterson
|
74,863,858
| 12,695,210
|
pytest capture logging from function
|
<p>For some reason I am unable to obtain the logs genereated in a tested function when using pytest. I've had a read around for all the usual solutions but am stumped, feel like am missing something obvious. Any advice would be much appreciated:</p>
<p>minimum reproducable example:</p>
<pre><code>import logging
def function_to_test():
message = "log me"
logger = logging.getLogger("my-logger")
logger.setLevel(logging.INFO)
logger.propagate = True
logger.info(message)
class Test():
def test_function(self, caplog):
with caplog.at_level(logging.INFO, logger="my-logger"):
function_to_test()
assert caplog.text != ""
</code></pre>
|
<python><logging><pytest><python-logging>
|
2022-12-20 13:29:29
| 0
| 695
|
Joseph
|
74,863,813
| 15,076,691
|
Regex splitting with multiple end brackets
|
<p>I have an input of <code>10+sqrt(10+(100*20)+20)+sqrt(5)</code> which I need to be able to split up into <code>sqrt(...)</code> as many times as <code>sqrt</code> appears (in this instance, twice). The problem I am having is trying to split this up, I have tried on my own and come up with this <code>(sqrt\()(?<=sqrt\()(.+?)(\)+)</code> but regex only registers the first <code>)</code> it comes across whereas I need it to find the closing bracket.</p>
<p><a href="https://i.sstatic.net/FOXzx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FOXzx.png" alt="My REGEX attempt" /></a></p>
<p>As you can see in the picture the orange marker only covers up to the first bracket but i need it to end at the <code>+20</code>.</p>
<p>The desired output is a list as follows:</p>
<p><code>['10+', 'sqrt(', '10+(100*20)+20', ')', '+', 'sqrt(', '5', ')']</code></p>
<p>Thanks in advance</p>
|
<python><list>
|
2022-12-20 13:25:27
| 1
| 315
|
Hunter
|
74,863,748
| 16,389,095
|
Kivy md tabs: how to add tooltips
|
<p>I'm trying to develop an interface in Python / Kivy Md. The container Box Layout should include three widgets: tabs, a label and a button. I would like to add a tooltip for each tab. The tooltip should be displayed when the mouse moves on the tab. I added the tooltip in my code, but this is displayed only when the mouse is moved outside the corresponding tab, such as on the label in the middle of the screen. Here my code:</p>
<pre><code>from kivy.lang import Builder
from kivymd.app import MDApp
from kivymd.uix.tab import MDTabsBase
from kivymd.uix.floatlayout import MDFloatLayout
from kivymd.icon_definitions import md_icons
from kivymd.uix.tooltip import MDTooltip
from kivy.properties import StringProperty
KV = '''
MDBoxLayout:
orientation: "vertical"
padding: 10, 0, 10, 10
MDTabs:
id: tabs
on_tab_switch: app.on_tab_switch(*args)
MDLabel:
text: app.myLabel
pos_hint: {"x":0.5}
MDRaisedButton:
text: 'CONFIGURE'
size_hint_x: 1
pos_hint: {"center_y":0.5}
'''
class Tab(MDFloatLayout, MDTabsBase, MDTooltip):
'''Class implementing content for a tab.'''
class MainApp(MDApp):
icons = ["clock", "video-3d", "speedometer"]
icons_tooltips = ["TIMESTAMP", "ORIENTATION", "HIGH RATE"]
myLabel = StringProperty()
def build(self):
self.title = 'XSENS MTi-7 OUTPUT CONFIGURATION'
return Builder.load_string(KV)
def on_start(self):
for icon in range(len(self.icons)):
self.root.ids.tabs.add_widget(Tab(icon = self.icons[icon], tooltip_text = self.icons_tooltips[icon]))
def on_tab_switch(
self, instance_tabs, instance_tab, instance_tab_label, tab_text
):
'''
Called when switching tabs.
:type instance_tabs: <kivymd.uix.tab.MDTabs object>;
:param instance_tab: <__main__.Tab object>;
:param instance_tab_label: <kivymd.uix.tab.MDTabsLabel object>;
:param tab_text: text or name icon of tab;
'''
count_icon = instance_tab.icon
self.myLabel = count_icon
if __name__ == '__main__':
MainApp().run()
</code></pre>
<p>Any suggestion to make the tooltip appear when the mouse is on the tab?</p>
|
<python><kivy><kivy-language><kivymd>
|
2022-12-20 13:20:20
| 1
| 421
|
eljamba
|
74,863,582
| 8,849,755
|
Whittaker–Shannon (sinc) interpolation in Python
|
<p>I am using <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.interp1d.html#scipy.interpolate.interp1d" rel="nofollow noreferrer"><code>interp1d</code></a> from Scipy to interpolate a function with <code>linear</code> interpolation. Now I need to upgrade to <a href="https://en.wikipedia.org/wiki/Whittaker%E2%80%93Shannon_interpolation_formula" rel="nofollow noreferrer">Whittaker–Shannon interpolation</a>. Is this already implemented somewhere? I am surprised it is not among the options of <code>interp1d</code> as it is a very common interpolation algorithm.</p>
|
<python><interpolation>
|
2022-12-20 13:06:39
| 2
| 3,245
|
user171780
|
74,863,409
| 10,024,802
|
Pyqt5 window open in secondary display
|
<p>I have below code to open pyqt5 in secondary display although <code>print</code> gives secondary screen resolution, but opening in first display where is my fault?</p>
<pre><code>a = QApplication(sys.argv)
w = App()
# screen_count = a.desktop().screenCount()
screen_resolution = a.desktop().screenGeometry(1)
window_width = screen_resolution.width()
window_height = screen_resolution.height()
print(window_width,window_height) # it gives me secondary window res
x = (screen_resolution.width() - window_width) / 2
y = (screen_resolution.height() - window_height) / 2
w.move(x, y)
w.show()
</code></pre>
|
<python><python-3.x><pyqt5>
|
2022-12-20 12:54:54
| 1
| 566
|
CKocar
|
74,863,300
| 20,051,041
|
How to strip html elements from string in nested list, Python
|
<p>I decided to use BeautifulSoup for extracting string integers from Pandas column. BeautifulSoup works well applied on a simple example, however, does not work for a list column in Pandas. I cannot find any mistake. Can you help?</p>
<p>Input:</p>
<pre><code>df = pd.DataFrame({
"col1":[["<span style='color: red;'>9</span>", "abcd"], ["a", "b, d"], ["a, b, z, x, y"], ["a, y","y, z, b"]],
"col2":[0, 1, 0, 1],
})
for list in df["col1"]:
for item in list:
if "span" in item:
soup = BeautifulSoup(item, features = "lxml")
item = soup.get_text()
else:
None
print(df)
</code></pre>
<p><a href="https://i.sstatic.net/HIlGi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HIlGi.png" alt="This is what I get" /></a></p>
<p>Desired output:</p>
<pre><code>df = pd.DataFrame({
"col1":[["9", "abcd"], ["a", "b, d"], ["a, b, z, x, y"], ["a, y","y, z, b"]],
"col2":[0, 1, 0, 1],
})
</code></pre>
|
<python><html><pandas><beautifulsoup><xml-parsing>
|
2022-12-20 12:44:57
| 2
| 580
|
Mr.Slow
|
74,863,090
| 5,449,789
|
Should you normalizing a dataset per label, or across the range of entire dataset at once?
|
<p>So I'm looking to train a CNN model in Keras. The labels (Y) in my dataset are of shape (1080, 1920, 2). The values themselves however are quite large. Ranging from floating point numbers up to 80,000.</p>
<p>To smooth out the training process I want to normalize each label (array) using the following code where y is the array in question:</p>
<pre><code>y/np.linalg.norm(y)
</code></pre>
<p>In order to denormalize my array I would simply do the opposite:</p>
<pre><code>y * np.linalg.norm(y)
</code></pre>
<p>Should I normalize each label individually, or should I normalize across the entire Y dataset at once? I ask because, when it comes to denormalizing my models output, I won't be sure which normalization rate to use (np.linalg.norm(y) output) if I normalize each label individually.</p>
<p>Am I thinking of this the right way?</p>
<p>I did read this post here: <a href="https://stackoverflow.com/questions/59738160/denormalization-of-output-from-neural-network">Denormalization of output from neural network</a></p>
<p>It appears to address denormalization per label. Which, I don't understand how that would denormalize a models output correctly, if each label has it's own range and the model was trained on all labels.</p>
|
<python><numpy><dataset>
|
2022-12-20 12:26:55
| 0
| 461
|
junfanbl
|
74,863,017
| 10,024,802
|
How can I open virtual keyboard
|
<p>How can I open the virtual keyboard when I click spesific QlineEdit area ?</p>
<p>I am using below codes but I've error how can I do this correctly:</p>
<pre><code>self.phoneUi.phoneNumber.focusInEvent = self.show_keyboard
def show_keyboard(self, event):
input_method = QInputMethod.queryFocusObject()
input_method.show()
</code></pre>
<p>But I get this error:</p>
<pre><code>, line 323, in show_keyboard
input_method = QInputMethod.queryFocusObject()
TypeError: queryFocusObject(Qt.InputMethodQuery, Any): not enough arguments
</code></pre>
<p>What is the correct argument I am using pyqt5</p>
<p><strong>EDIT</strong></p>
<p>When I try with below codes:</p>
<pre><code> def show_keyboard(self, event):
self.phoneUi.phoneNumber.setFocus()
input_method = QInputMethod.queryFocusObject(Qt.InputMethodQuery.ImHints, None)
input_method.show()
</code></pre>
<p>I get</p>
<pre><code>input_method.show()
AttributeError: 'int' object has no attribute 'show'
</code></pre>
<p>How can I solve please help</p>
<p><strong>EDIT 2</strong></p>
<pre><code> self.phoneUi.phoneNumber.focusInEvent = self.show_keyboard
def show_keyboard(self, event):
print("hey")
input_method = QApplication.inputMethod()
input_method.show()
</code></pre>
<p>I also try with above code I can see hey print but virtual key does not open</p>
<pre><code>pip show pyqt5
Name: PyQt5
Version: 5.15.7
</code></pre>
|
<python><pyqt5>
|
2022-12-20 12:19:44
| 0
| 566
|
CKocar
|
74,862,986
| 4,869,005
|
First and last element in numpy array comes as NaN while reading with genfromtxt
|
<p>My csv file looks like this <a href="https://extendsclass.com/csv-editor.html#ef4a190" rel="nofollow noreferrer">https://extendsclass.com/csv-editor.html#ef4a190</a></p>
<pre><code>from numpy import genfromtxt
my_data = genfromtxt('2857_54065_N.csv', encoding=None, delimiter=',')
my_data
</code></pre>
<p>Which gives result like this.</p>
<p>array([ nan, 636., 654., ..., 572., 547., nan])</p>
<p>Why first andlast element comes as <code>nan</code> ?</p>
|
<python><numpy>
|
2022-12-20 12:17:47
| 1
| 2,257
|
user2129623
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.