QuestionId (int64, 74.8M to 79.8M)
| UserId (int64, 56 to 29.4M)
| QuestionTitle (string, 15 to 150 chars)
| QuestionBody (string, 40 to 40.3k chars)
| Tags (string, 8 to 101 chars)
| CreationDate (string date, 2022-12-10 09:42:47 to 2025-11-01 19:08:18)
| AnswerCount (int64, 0 to 44)
| UserExpertiseLevel (int64, 301 to 888k)
| UserDisplayName (string, 3 to 30 chars, nullable ⌀) |
|---|---|---|---|---|---|---|---|---|
74,787,475
| 13,517,174
|
How do you slice an array of a dynamic dimension in numba?
|
<p>I have a script that looks like the following:</p>
<pre><code>import numba as nb
import numpy as np

@nb.njit
def test(v):
    n = 1
    if len(v.shape) > 1:
        n = max(n, v.shape[1])
    return n

test(np.array([1, 2]))
</code></pre>
<p>This works fine without the <code>njit</code> decorator, but when I use it as in my example, I get the error message:</p>
<pre><code>numba.core.errors.TypingError: Failed in nopython mode pipeline (step: nopython frontend)
tuple index out of range
During: typing of static-get-item at my_script.py (7)
File "my_script.py", line 7:
def test(v):
<source elided>
n = max(n,v.shape[1])
^
</code></pre>
<p>How can I rewrite this function to make it dynamically check the shape of my array?</p>
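<p>For illustration, a minimal sketch of one way to sidestep the typing error: resolve the shape check in plain Python, where indexing <code>v.shape</code> is always safe, and pass the result into the jitted function (the helper name <code>_test_impl</code> is made up for this sketch):</p>
<pre><code>import numba as nb
import numpy as np

@nb.njit
def _test_impl(v, n):
    # the jitted core only receives the pre-computed n
    return n

def test(v):
    n = 1
    if v.ndim > 1:
        n = max(n, v.shape[1])
    return _test_impl(v, n)

print(test(np.array([1, 2])))        # 1
print(test(np.array([[1, 2, 3]])))   # 3
</code></pre>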
|
<python><numba>
|
2022-12-13 15:47:04
| 0
| 453
|
Yes
|
74,787,251
| 4,613,465
|
TensorFlow display progress while iterating dataset
|
<p>I have a very large dataset (raw files ~750GB) and I created a cached dataset pipeline using the TensorFlow data API like this:</p>
<pre><code>dataset = tf.data.Dataset.from_generator(MSUMFSD(pathlib.Path(dataset_locations["mfsd"]), True), output_types=(tf.string, tf.float32))
</code></pre>
<p>This dataset consists of all file paths I want to use for processing. After that I use this <code>interleave</code> transformation, to generate the actual input data for my model:</p>
<pre><code>class DatasetTransformer:
    def __init__(self, haar_cascade_path, window, img_shape):
        self.rppg_extractor = RPPGExtractionChrom(haar_cascade_path, window, img_shape)
        self.window = window
        self.img_shape = img_shape

    def generator(self, file, label):
        for signal, frame in self.rppg_extractor.process_file_iter(file.decode()):
            yield (signal, frame), [label]

    def __call__(self, file, label):
        output_signature = (
            (
                tensorflow.TensorSpec(shape=(self.window), dtype=tensorflow.float32),
                tensorflow.TensorSpec(shape=(self.img_shape[0], self.img_shape[1], 3), dtype=tensorflow.float32)
            ),
            tensorflow.TensorSpec(shape=(1), dtype=tensorflow.float32))
        return tensorflow.data.Dataset.from_generator(self.generator, args=(file, label), output_signature=output_signature)

dataset = dataset.interleave(
    DatasetTransformer("rppg/haarcascade_frontalface_default.xml", window_size, img_shape),
    num_parallel_calls=tf.data.AUTOTUNE
)
dataset = dataset.prefetch(tf.data.AUTOTUNE).shuffle(320).cache(cache_filename)
</code></pre>
<p>Now I want to iterate through the dataset once to create the cached dataset (consisting of the real input for the model) and to obtain the dataset size. Is there a way to show the progress of iteration? My attempt was to obtain the number of files before the interleave transformation like this:</p>
<pre><code>dataset_file_amount = dataset.reduce(0, lambda x,_: x + 1).numpy()
</code></pre>
<p>and then show a progress bar using tqdm while iterating through the "real" dataset like this:</p>
<pre><code>def dataset_reducer(x, pbar):
    pbar.update()
    return x + 1

pbar = tqdm(total=dataset_file_amount, desc="Preprocessing files...")
size = dataset.reduce(0, lambda x, _: dataset_reducer(x, pbar)).numpy()
</code></pre>
<p>When running this code I get a progress bar with the correct total amount (number of files), but the progress bar isn't updated. It's stuck at 0%, and once the processing has finished, it just continues execution. Do you have an idea how to show (at least for the number of processed files) the progress of preprocessing? Thanks already!</p>
<p><strong>Edit</strong>
Actually, the progress bar is stuck at <code>1/X</code> instead of 0%.</p>
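<p>For illustration, a hedged sketch of one workaround: <code>dataset.reduce</code> runs its lambda in graph mode, so the Python side effect (<code>pbar.update()</code>) appears to execute only while the function is traced. Iterating the dataset in a plain eager Python loop runs the side effect per element while still populating the cache:</p>
<pre><code>from tqdm import tqdm

size = 0
# pulling every element once writes the cache and counts the elements;
# dataset_file_amount counts files, not generated samples, so treat the
# total as a rough figure only
for _ in tqdm(dataset, total=dataset_file_amount, desc="Preprocessing files..."):
    size += 1
print(size)
</code></pre>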
|
<python><tensorflow><dataset><pipeline><data-preprocessing>
|
2022-12-13 15:30:37
| 1
| 772
|
Fatorice
|
74,787,173
| 8,755,105
|
marshmallow - How can I map the schema attribute to another key when deserializing?
|
<p>I need to have a "from" field in my marshmallow schema, but since it is a Python reserved keyword, I am unable to use the name.</p>
<p>The input data has a "from" key, and the deserialized dict should also have a "from" key.
I stumbled upon a <a href="https://stackoverflow.com/questions/51727441/marshmallow-how-can-i-map-the-schema-attribute-to-another-key-when-serializing">similar question</a> about serializing objects, but unfortunately <code>data_key</code> only accomplishes the first part of the goal: processing the "from" key from the input data.</p>
<p>How can I make the key in deserialized data have the target name?<br />
Example schema:</p>
<pre><code>class TestSchema(Schema):
    _from = fields.Str(
        required=False,
        missing='',
        data_key='from',
    )
</code></pre>
<p>Desired result: Python dictionary with key "from"</p>
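<p>For illustration, a hedged sketch of one way to get that result: since "from" is only illegal as an identifier, not as a dictionary key, <code>Schema.from_dict</code> (marshmallow 3) lets the field be named "from" directly, so the loaded dict keeps that key:</p>
<pre><code>from marshmallow import Schema, fields

# "from" is fine as a dict key, so the schema can be built from a dict
TestSchema = Schema.from_dict({
    "from": fields.Str(required=False, missing=""),
})

print(TestSchema().load({"from": "somewhere"}))  # {'from': 'somewhere'}
</code></pre>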
|
<python><marshmallow>
|
2022-12-13 15:25:11
| 1
| 903
|
Roman Yakubovich
|
74,787,150
| 2,725,810
|
Cache performance of Python array (not list)
|
<p>I understand that Python's <code>array</code>, provided by the array module, stores the actual values consecutively (not pointers to them). Hence I would expect that, when elements of such an array are read in order, the CPU cache would play a role.</p>
<p>Thus I would expect that Code A below should be faster than Code B (the difference between the two is in the order of reading the elements).</p>
<p>Code A:</p>
<pre class="lang-py prettyprint-override"><code>import array
import time
arr = array.array('l', range(100000000))
sum = 0
begin = time.time()
for i in range(10000):
for j in range(10000):
sum += arr[i * 10000 + j]
print(sum)
print(time.time() - begin)
</code></pre>
<p>Code B:</p>
<pre class="lang-py prettyprint-override"><code>import array
import time
arr = array.array('l', range(100000000))
sum = 0
begin = time.time()
for i in range(10000):
for j in range(10000):
sum += arr[j * 10000 + i]
print(sum)
print(time.time() - begin)
</code></pre>
<p>The two versions' timings are almost identical (a difference of only ~3%). Am I missing something about the workings of the array?</p>
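<p>For what it's worth, a rough illustration (an addition, not part of the original question): each <code>arr[...]</code> access boxes the value into a Python int and goes through the interpreter, and that per-access overhead is large enough to mostly hide cache effects. When the inner loop runs in compiled code (NumPy is used here purely as an illustration), the contiguous traversal order is clearly faster:</p>
<pre class="lang-py prettyprint-override"><code>import time
import numpy as np

a = np.arange(100_000_000, dtype=np.int64).reshape(10_000, 10_000)

t0 = time.time()
row_sum = sum(int(a[i, :].sum()) for i in range(10_000))   # contiguous rows
t1 = time.time()
col_sum = sum(int(a[:, j].sum()) for j in range(10_000))   # strided columns
t2 = time.time()

print(row_sum == col_sum)
print(f"rows: {t1 - t0:.2f}s  columns: {t2 - t1:.2f}s")
</code></pre>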
|
<python><arrays><cpu-cache>
|
2022-12-13 15:23:13
| 1
| 8,211
|
AlwaysLearning
|
74,787,078
| 11,252,809
|
Alpine x-transition doesn't work inside jinja2 template for loop
|
<p>Inside this code, I can get the two divs to toggle when showOriginal is toggled, but the animation simply doesn't work. In this case, summaries are SQLModel objects rendered by Jinja2.</p>
<pre><code> {% for summary in summaries %}
<div x-data="{showOriginal: true }" class=" flex flex-row">
<div class="ml-4 py-2">
<div>
<p @click="showOriginal = !showOriginal" class="text-xs"><span x-show="showOriginal">Show
Original</span><span x-show="!showOriginal">Show
Summarised</span>
</p>
<div x-show="showOriginal" x-transition.delay.1000ms>
{{summary.title}} </div>
<div x-show="!showOriginal" x-transition.delay.1000ms>
{{summary.paper.title}}</div>
</div>
<a href={{ url_for('summary', summary_id=summary.id) }} class=" md:text-lg text-baseline text-underline">
</a>
<p class="text-sm text-green-300">{{ summary.paper.authors[0].name }},
{{summary.paper.authors[-1].name}} et
al.
</p>
</div>
</div>
{% endfor %}
</code></pre>
<p>This is how I include the JS in base.html, which the above template inherits from:</p>
<pre class="lang-html prettyprint-override"><code><head>
<meta charset="UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<script src="https://cdn.tailwindcss.com"></script>
<script src="https://cdn.jsdelivr.net/gh/alpinejs/alpine@v2.7.0/dist/alpine.min.js"></script>
<script src="https://unpkg.com/htmx.org@1.8.4"
integrity="sha384-wg5Y/JwF7VxGk4zLsJEcAojRtlVp1FKKdGy1qN+OMtdq72WRvX/EdRdqg/LOhYeV"
crossorigin="anonymous"></script>
{% if DEBUG %} {{ hot_reload.script(url_for('hot-reload')) | safe }}
{% else %}
{% endif %}
<link href={{url_for('static', path='/dist/output.css' )}} rel="stylesheet" />
<title>Openpaper.science </title>
</head>
</code></pre>
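<p>One thing worth checking (an observation, not a verified fix): the page loads Alpine v2.7.0, while modifiers such as <code>x-transition.delay.1000ms</code> are Alpine v3 syntax, so v2 would silently ignore them. A sketch of switching the CDN to the v3 build:</p>
<pre class="lang-html prettyprint-override"><code><script defer src="https://cdn.jsdelivr.net/npm/alpinejs@3.x.x/dist/cdn.min.js"></script>
</code></pre>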
|
<javascript><python><jinja2><fastapi><alpine.js>
|
2022-12-13 15:18:12
| 1
| 565
|
phil0s0pher
|
74,787,036
| 10,357,604
|
How to resolve this pip install error (when trying to install mxnet)?
|
<p>I am working with Anaconda and ran the command <code>pip install mxnet</code>.
I have already tried upgrading <code>pip, wheel, setuptools</code>,
and installing with <code>--no-use-pep517</code>, <code>--no-cache-dir</code>
and <code>--pre</code> as recommended here: <a href="https://bobbyhadz.com/blog/python-eror-legacy-instal-failure" rel="nofollow noreferrer">https://bobbyhadz.com/blog/python-eror-legacy-instal-failure</a>, and I installed the Visual C++ build tools.
I still get the following error: error-legacy-install-failure</p>
<pre><code> Collectin mxnet
Using cached mxnet-1.7.0.post2-py2.py3-none- win_amd64.whl(33.1MB)
Collectin numpy<1.17.0,>=1.8.2
Using cached numpy-1.16.6.zip (5.1MB)
Preparingmetadata(setup.py)...done
Req already satisfied:reqs<2.19.0,>=2.18.4 in c:\path\anaconda3\lib\site-packages (from mxnet)(2.18.4)
Collecting graphviz<0.9.0,>=0.8.1
Using cached graphviz-0.8.4-py2.py3-none-any.whl(16kB)
Requirement already satisfied:idna<2.7,>=2.5 in c:\path\anaconda3\lib\site-packages (from requests<2.19.0,>=2.18.4->mxnet)(2.6)
Requirement already satisfied:certifi>=2017.4.17 in c:\path\anaconda3\lib\site-packages (from requests<2.19.0,>=2.18.4->mxnet)(2022.9.14)
Requirement already satisfied:chardet<3.1.0,>=3.0.2 in c:\path\anaconda3\lib\site-packages (from requests<2.19.0,>=2.18.4->mxnet)(3.0.4)
Requirement alredy satisfied:urllib3<1.23,>=1.21.1 in c:\path\anaconda3\lib\site-packages (from requests<2.19.0,>=2.18.4->mxnet)(1.22)
Building wheels for collected packages:numpy
Building wheel for numpy (setup.py)
eror:subprocess-exited-with-eror
× python setup.py bdist_wheel didnt run successfuly
│ exitcode:1
╰─>[264lines of output]
Runing from numpy source dir.
C:\path\AppData\Local\Temp\pip-instal-tt_fysn_\numpy_4c416a770c4e4a078f07d959c6bf4a7c\numpy\distutils\misc_util.py:476:SyntaxWarning:"is" with a literal. Did u mean "=="?
return is_string(s)& ('*'in s or'?'is s)
blas_opt_info blas_mkl_info:
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs mkl_rt not found in['C:\\path\\anaconda3\\lib','C:\\','C:\\path\\anaconda3\\libs']
NOT AVAILABLE
blis_info:
No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils
customize MSVCCompiler
libs blis not found in['C:\\path\\anaconda3\\lib','C:\\','C:\\path\\anaconda3\\libs']
NOT AVAILABLE
openblas_info:
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs openblas not found in['C:\\path\\anaconda3\\lib','C:\\','C:\\path\\anaconda3\\libs']
get_default_fcompiler:matching types:'['gnu','intelv','absoft','compaqv','intelev','gnu95','g95','intelvem','intelem','flang']'
customize GnuFCompiler
couldnt locate executable g77,f77
customize IntelVisualFCompiler
couldnt locate executable ifort,executable ifl
customize AbsoftFCompiler
couldnt locate executable f90
customize CompaqVisualFCompiler
couldnt locate executable DF
customize IntelItaniumVisualFCompiler
couldnt locate executable efl
customize Gnu95FCompiler
couldnt locate executable gfortran,executable f95
customize G95FCompiler
couldnt locate executable g95
customize IntelEM64VisualFCompiler,IntelEM64TFCompiler
couldnt locate executable efort,executable efc
customize PGroupFlangCompiler
couldnt locate executable flang
don't know how to compile Fortran code on platform 'nt'
NOT AVAILABLE
atlas_3_10_blas_threads_info:
Setting PTATLAS=ATLAS
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs tatlas not found in['C:\\path\\anaconda3\\lib','C:\\','C:\\path\\anaconda3\\libs']
NOT AVAILABLE
atlas_3_10_blas_info:
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs satlas not found in['C:\\path\\anaconda3\\lib','C:\\','C:\\path\\anaconda3\\libs']
NOT AVAILABLE
atlas_blas_threads_info:
Setting PTATLAS=ATLAS
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs ptf77blas,ptcblas,atlas not found in['C:\\path\\anaconda3\\lib','C:\\','C:\\path\\anaconda3\\libs']
NOT AVAILABLE
atlas_blas_info:
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs f77blas,cblas,atlas not found in['C:\\path\\anaconda3\\lib','C:\\','C:\\path\\anaconda3\\libs']
NOT AVAILABLE
accelerate_info:
NOT AVAILABLE
C:\path\AppData\Local\Temp\pip-instal-tt_fysn_\numpy_4c416a770c4e4a078f07d959c6bf4a7c\numpy\distutils\system_info.py:639:UserWarning:
Atlas (http://math-atlas.sourceforge.net/)libs not found.
dirs to search for the libs can be specified inthe
numpy/distutils/site.cfg file (section[atlas])orby setting
the ATLAS env var.
self.calc_info()
blas_info:
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs blas not found in['C:\\path\\anaconda3\\lib','C:\\','C:\\path\\anaconda3\\libs']
NOT AVAILABLE
C:\path\AppData\Local\Temp\pip-instal-tt_fysn_\numpy_4c416a770c4e4a078f07d959c6bf4a7c\numpy\distutils\system_info.py:639:UserWarning:
Blas (http://www.netlib.org/blas/)libs not found.
dirs to search for the libs can be specified inthe
numpy/distutils/site.cfg file (section[blas])orby setting
the BLAS env var.
self.calc_info()
blas_src_info:
NOT AVAILABLE
C:\path\AppData\Local\Temp\pip-instal-tt_fysn_\numpy_4c416a770c4e4a078f07d959c6bf4a7c\numpy\distutils\system_info.py:639:UserWarning:
Blas (http://www.netlib.org/blas/)sources not found.
Dirs to search for the sources can be specified inthe
numpy/distutils/site.cfg file (section[blas_src])orby setting
the BLAS_SRC env var
self.calc_info()
NOT AVAILABLE
The command "svnversion" is either misspelled or could not be found.
non-existing path in 'numpy\\distutils':'site.cfg'
lapack_opt_info:
lapack_mkl_info:
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs mkl_rt not found in['C:\path\\anaconda3\\lib','C:\\','C:\\path\\anaconda3\\libs']
NOT AVAILABLE
openblas_lapack_info:
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs openblas not found in['C:\\path\\anaconda3\\lib','C:\\','C:\\path\\anaconda3\\libs']
NOT AVAILABLE
openblas_clapack_info:
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs openblas,lapack not found in['C:\\path\\anaconda3\\lib','C:\\','C:\\path\\anaconda3\\libs']
NOT AVAILABLE
atlas_3_10_threads_info:
Setting PTATLAS=ATLAS
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs lapack_atlas not found in C:\path\anaconda3\lib
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs tatlas,tatlas not found in C:\path\anaconda3\lib
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs lapack_atlas not found in C:\
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs tatlas,tatlas not found in C:\
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs lapack_atlas not found in C:\path\anaconda3\libs
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs tatlas,tatlas not found in C:\path\anaconda3\libs
<class 'numpy.distutils.system_info.atlas_3_10_threads_info'>
NOT AVAILABLE
atlas_3_10_info:
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs lapack_atlas not found in C:\path\anaconda3\lib
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs satlas,satlas not found in C:\path\anaconda3\lib
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs lapack_atlas not found in C:\
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs satlas,satlas not found in C:\
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs lapack_atlas not found in C:\path\anaconda3\libs
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs satlas,satlas not found in C:\path\anaconda3\libs
<class 'numpy.distutils.system_info.atlas_3_10_info'>
NOT AVAILABLE
atlas_threads_info:
Setting PTATLAS=ATLAS
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs lapack_atlas not found in C:\path\anaconda3\lib
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs ptf77blas,ptcblas,atlas not found in C:\path\anaconda3\lib
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs lapack_atlas not found in C:\
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs ptf77blas,ptcblas,atlas not found in C:\
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs lapack_atlas not found in C:\path\anaconda3\libs
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs ptf77blas,ptcblas,atlas not found in C:\path\anaconda3\libs
<class 'numpy.distutils.system_info.atlas_threads_info'>
NOT AVAILABLE
atlas_info:
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs lapack_atlas not found in C:\path\anaconda3\lib
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs f77blas,cblas,atlas not found in C:\path\anaconda3\lib
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs lapack_atlas not found in C:\
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs f77blas,cblas,atlas not found in C:\
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs lapack_atlas not found in C:\path\anaconda3\libs
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs f77blas,cblas,atlas not found in C:\path\anaconda3\libs
<class 'numpy.distutils.system_info.atlas_info'>
NOT AVAILABLE
lapack_info:
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs lapack not found in['C:\\path\\anaconda3\\lib','C:\\','C:\\path\\anaconda3\\libs']
NOT AVAILABLE
C:\path\AppData\Local\Temp\pip-instal-tt_fysn_\numpy_4c416a770c4e4a078f07d959c6bf4a7c\numpy\distutils\system_info.py:639:UserWarning:
Lapack (http://www.netlib.org/lapack/)libs not found.
dirs to search for the libs can be specified inthe
numpy/distutils/site.cfg file (section[lapack])orby setting
the LAPACK env var.
self.calc_info()
lapack_src_info:
NOT AVAILABLE
C:\path\AppData\Local\Temp\pip-instal-tt_fysn_\numpy_4c416a770c4e4a078f07d959c6bf4a7c\numpy\distutils\system_info.py:639:UserWarning:
Lapack (http://www.netlib.org/lapack/)sources not found.
dirs to search for the sources can be specified inthe
numpy/distutils/site.cfg file (section[lapack_src])orby setting
the LAPACK_SRC env var.
self.calc_info()
NOT AVAILABLE
C:\path\anaconda3\lib\site-packages\setuptools\_distutils\dist.py:265:UserWarning:Unknown distribution option:'define_macros'
warnings.warn(msg)
running bdist_wheel
running build
running config_cc
unifing config_cc, config, build_clib, build_ext, build commands --compiler options
running config_fc
unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options
running build_src
build_src
building py_modules sources
creating build
creating build\src.win-amd64-3.9
creating build\src.win-amd64-3.9\numpy
creating build\src.win-amd64-3.9\numpy\distutils
building lib "npymath" sources
No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils
error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
[end of output]
</code></pre>
<p>note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for numpy
Running setup.py clean for numpy
error: subprocess-exited-with-error</p>
<pre><code> × python setup.py clean did not run successfully.
│ exit code:1
╰─>[10 lines of output]
Running from numpy source directory.
`setup.py clean` is not supported,use one of the following instead:
- `git clean -xdf` (cleans all files)
- `git clean -Xdf` (cleans all versioned files,doesn't touch
files that aren't checked into the git repo)
Add `--force` to your command to use it anyway if you must (unsupported).
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
</code></pre>
<p>error: Failed cleaning build dir for numpy
Failed to build numpy
Installing collected packages: numpy, graphviz, mxnet
Attempting uninstall: numpy
Found existing installation: numpy 1.23.5
Uninstalling numpy-1.23.5:
Successfully uninstalled numpy-1.23.5
Running setup.py install for numpy ... error
error: subprocess-exited-with-error</p>
<p>× Running setup.py install for numpy did not run successfully.
│ exit code: 1
╰─> [269 lines of output]
Running from numpy source directory.</p>
<pre><code> Note: if you need reliable uninstall behavior, then install with pip instead of using `setup.py install`:
 - `pip install .` (from a git repo or downloaded source release)
 - `pip install numpy` (last NumPy release on PyPi)
blas_opt_info:
blas_mkl_info:
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs mkl_rt not found in['C:\\path\\anaconda3\\lib','C:\\','C:\\path\\anaconda3\\libs']
NOT AVAILABLE
blis_info:
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs blis not found in['C:\\path\\anaconda3\\lib','C:\\','C:\\path\\anaconda3\\libs']
NOT AVAILABLE
openblas_info:
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs openblas not found in['C:\\path\\anaconda3\\lib','C:\\','C:\\path\\anaconda3\\libs']
get_default_fcompiler:matching types:'['gnu','intelv','absoft','compaqv','intelev','gnu95','g95','intelvem','intelem','flang']'
customize GnuFCompiler
couldnt locate executable g77,executable f77
customize IntelVisualFCompiler
couldnt locate executable ifort,ifl
customize AbsoftFCompiler
couldnt locate executable f90
customize CompaqVisualFCompiler
couldnt locate executable DF
customize IntelItaniumVisualFCompiler
couldnt locate executable efl
customize Gnu95FCompiler
couldnt locate executable gfortran,f95
customize G95FCompiler
couldnt locate executable g95
customize IntelEM64VisualFCompiler,IntelEM64TFCompiler
couldnt locate executable efort,efc
customize PGroupFlangCompiler
couldnt locate executable flang
don't know how to compile Fortran code on platform 'nt'
NOT AVAILABLE
atlas_3_10_blas_threads_info:
Setting PTATLAS=ATLAS
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs tatlas not found in['C:\\path\\anaconda3\\lib','C:\\','C:\\path\\anaconda3\\libs']
NOT AVAILABLE
atlas_3_10_blas_info:
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs satlas not found in['C:\\path\\anaconda3\\lib','C:\\','C:\\path\\anaconda3\\libs']
NOT AVAILABLE
atlas_blas_threads_info:
Setting PTATLAS=ATLAS
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs ptf77blas,ptcblas,atlas not found in['C:\\path\\anaconda3\\lib','C:\\','C:\\path\\anaconda3\\libs']
NOT AVAILABLE
atlas_blas_info:
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs f77blas,cblas,atlas not found in['C:\\path\\anaconda3\\lib','C:\\','C:\\path\\anaconda3\\libs']
NOT AVAILABLE
accelerate_info:
NOT AVAILABLE
C:\path\AppData\Local\Temp\pip-instal-tt_fysn_\numpy_4c416a770c4e4a078f07d959c6bf4a7c\numpy\distutils\system_info.py:639:UserWarning:
Atlas (http://math-atlas.sourceforge.net/)libs not found.
dirs to search for the libs can be specified inthe
numpy/distutils/site.cfg file (section[atlas])orby setting
the ATLAS env var.
self.calc_info()
blas_info:
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs blas not found in['C:\\path\\anaconda3\\lib','C:\\','C:\\path\\anaconda3\\libs']
NOT AVAILABLE
C:\path\AppData\Local\Temp\pip-instal-tt_fysn_\numpy_4c416a770c4e4a078f07d959c6bf4a7c\numpy\distutils\system_info.py:639:UserWarning:
Blas (http://www.netlib.org/blas/)libs not found.
dirs to search for the libs can be specified inthe
numpy/distutils/site.cfg file (section[blas])orby settin
the BLAS envi. var.
self.calc_info()
blas_src_info:
NOT AVAILABLE
C:\path\AppData\Local\Temp\pip-instal-tt_fysn_\numpy_4c416a770c4e4a078f07d959c6bf4a7c\numpy\distutils\system_info.py:639:UserWarning:
Blas (http://www.netlib.org/blas/)sources not found.
dirs to search for the sources can be specified inthe
numpy/distutils/site.cfg file (section[blas_src])orby setting
the BLAS_SRC env var.
self.calc_info()
NOT AVAILABLE
The cmd"svnversion" is eithertyped wrong orcant be find.
non-existing path in 'numpy\\distutils':'site.cfg'
lapack_opt_info:
lapack_mkl_info:
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs mkl_rt not found in['C:\\path\\anaconda3\\lib','C:\\','C:\\path\\anaconda3\\libs']
NOT AVAILABLE
openblas_lapack_info:
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs openblas not found in['C:\\path\\anaconda3\\lib','C:\\','C:\\path\\anaconda3\\libs']
NOT AVAILABLE
openblas_clapack_info:
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs openblas,lapack not found in['C:\\path\\anaconda3\\lib','C:\\','C:\\path\\anaconda3\\libs']
NOT AVAILABLE
atlas_3_10_threads_info:
Setting PTATLAS=ATLAS
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs lapack_atlas not found in C:\path\anaconda3\lib
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs tatlas,tatlas not found in C:\path\anaconda3\lib
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs lapack_atlas not found in C:\
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs tatlas,tatlas not found in C:\
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs lapack_atlas not found in C:\path\anaconda3\libs
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryig from distutils
customize MSVCCompiler
libs tatlas,tatlas not found in C:\path\anaconda3\libs
<class 'numpy.distutils.system_info.atlas_3_10_threads_info'>
NOT AVAILABLE
atlas_3_10_info:
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs lapack_atlas not found in C:\path\anaconda3\lib
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs satlas,satlas not found in C:\path\anaconda3\lib
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs lapack_atlas not found in C:\
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs satlas,satlas not found in C:\
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs lapack_atlas not found in C:\path\anaconda3\libs
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs satlas,satlas not found in C:\path\anaconda3\libs
<class 'numpy.distutils.system_info.atlas_3_10_info'>
NOT AVAILABLE
atlas_threads_info:
Setting PTATLAS=ATLAS
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs lapack_atlas not found in C:\path\anaconda3\lib
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs ptf77blas,ptcblas,atlas not found in C:\path\anaconda3\lib
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs lapack_atlas not found in C:\
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs ptf77blas,ptcblas,atlas not found in C:\
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs lapack_atlas not found in C:\path\anaconda3\libs
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs ptf77blas,ptcblas,atlas not found in C:\path\anaconda3\libs
<class 'numpy.distutils.system_info.atlas_threads_info'>
NOT AVAILABLE
atlas_info:
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs lapack_atlas not found in C:\path\anaconda3\lib
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs f77blas,cblas,atlas not found in C:\path\anaconda3\lib
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs lapack_atlas not found in C:\
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs f77blas,cblas,atlas not found in C:\
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs lapack_atlas not found in C:\path\anaconda3\libs
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs f77blas,cblas,atlas not found in C:\path\anaconda3\libs
<class 'numpy.distutils.system_info.atlas_info'>
NOT AVAILABLE
lapack_info:
No module named'numpy.distutils._msvccompiler' in numpy.distutils; tryin from distutils
customize MSVCCompiler
libs lapack not found in['C:\\path\\anaconda3\\lib','C:\\','C:\\path\\anaconda3\\libs']
NOT AVAILABLE
C:\path\AppData\Local\Temp\pip-instal-tt_fysn_\numpy_4c416a770c4e4a078f07d959c6bf4a7c\numpy\distutils\system_info.py:639:UserWarning:
Lapack (http://www.netlib.org/lapack/)libs not found.
dirs to search for the libs can be specified inthe
numpy/distutils/site.cfg file(section[lapack])orby setting
the LAPACK env var.
self.calc_info()
lapack_src_info:
NOT AVAILABLE
C:\path\AppData\Local\Temp\pip-instal-tt_fysn_\numpy_4c416a770c4e4a078f07d959c6bf4a7c\numpy\distutils\system_info.py:639:UserWarning:
Lapack (http://www.netlib.org/lapack/)sources not found.
dirs to search for the sources can be specified inthe
numpy/distutils/site.cfg file (section[lapack_src])orby setting
the LAPACK_SRC env var.
self.calc_info()
NOT AVAILABLE
C:\path\anaconda3\lib\site-packages\setuptools\_distutils\dist.py:265:UserWarning:Unknown distribution option:'define_macros'
warnings.warn(msg)
running install
C:\path\anaconda3\lib\site-packages\setuptools\command\install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
warnings.warn(
running build
running config_cc
unifing config_cc, config, build_clib, build_ext, build commands --compiler options
running config_fc
unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options
running build_src
build_src
building py_modules sources
building lib "npymath" sources
No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils
error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://...
[end of output]
</code></pre>
<p>note: This error originates from a subprocess, and is likely not a problem with pip.
Rolling back uninstall of numpy
Moving to c:\path\anaconda3\lib\site-packages\numpy-1.23.5.dist-info<br />
from C:\path\anaconda3\Lib\site-packages~umpy-1.23.5.dist-info
Moving to c:\path\anaconda3\lib\site-packages\numpy<br />
from C:\path\anaconda3\Lib\site-packages~umpy
Moving to c:\path\anaconda3\scripts\f2py.exe
from C:\path\AppData\Local\Temp\pip-uninstal-xr8_bke0\f2py.exe
error: legacy-install-failure</p>
<p>× Encountered error while trying to install package.
╰─> numpy</p>
<p>note: This is an issue with the package mentioned above, not pip.
hint: See above for output from the failure</p>
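<p>For context, a hedged reading of the log (assumptions about version compatibility, not verified here): mxnet 1.7 pins <code>numpy<1.17.0,>=1.8.2</code>, and that NumPy range has no prebuilt wheel for Python 3.9 on Windows (the build directory shows <code>win-amd64-3.9</code>), so pip falls back to compiling NumPy from source and then needs the MSVC build tools. One commonly suggested workaround is to install mxnet into an environment with an older Python where a prebuilt numpy 1.16 wheel exists, for example:</p>
<pre><code>conda create -n mxnet_env python=3.7
conda activate mxnet_env
pip install mxnet
</code></pre>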
|
<python><numpy><pip><legacy><mxnet>
|
2022-12-13 15:15:06
| 1
| 1,355
|
thestruggleisreal
|
74,786,966
| 12,760,550
|
How to confirm that sequences of ids in a column comply to a given rule?
|
<p>I have a dataframe that contains the information of different "contract types" in a company: Employee, Consultant, Non Employee.</p>
<p>Each row represents a contract and one person (meaning someone with same first name and last name) can have more than 1 contract, and the contract can be either one of the 3 values mentioned, and there is an "ID" column that identifies the "contract number" of the person. This is the dataframe:</p>
<pre><code>FirstName LastName Employee Type ID
Paulo Cortez Employee 2
Paulo Cortez Employee 1
Paulo Cortez Consultant 1
Paulo Cortez Non Worker 1
Paulo Cortez Employee 3
Felipe Cardoso Employee np.nan
Vera Lucia Employee 2
Pedro Machado Consultant 1
Lucio Mario Employee 1
Lucio Mario Employee 1
Lucio Mario Consultant 1
Maria Hebbe Employee 1
Maria Hebbe Consultant 1
Maria Hebbe Consultant 1
</code></pre>
<p>The Logic I would need to validate is the following:</p>
<ul>
<li><p>For each person (same first name and last name) I would need to analyze how many times this person appears in this dataframe. Then, for each row and each "Employee Type", the "ID" column needs to be checked as follows:</p>
<ul>
<li>The ID should be 1 for the first contract of each "Employee Type" for a person; for example, if the person appears 3 times in the data, once for each "Employee Type" (Employee, Consultant, Non Worker), the "ID" column should be 1 in all three rows (the position of the record in the table does not matter). Then, for each subsequent contract of the same type, the ID should increase by 1.</li>
</ul>
</li>
</ul>
<p>I need a way to filter this dataframe by fetching all persons not following this logic (even if some of their contract types are correct; for example, Lucio Mario's "Consultant" contract is correct but his "Employee" contracts are not), resulting in this dataframe:</p>
<pre><code>FirstName LastName Employee Type ID
Felipe Cardoso Employee np.nan
Vera Lucia Employee 2
Lucio Mario Employee 1
Lucio Mario Employee 1
Lucio Mario Consultant 1
Maria Hebbe Employee 1
Maria Hebbe Consultant 1
Maria Hebbe Consultant 1
</code></pre>
<p>What would be the best way to achieve it?</p>
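<p>For illustration, a minimal sketch of one possible approach (column names taken from the example; the helper <code>ids_ok</code> is made up): since order does not matter, the IDs of each person/type group should be exactly 1..n, and any group where that fails marks the whole person as invalid:</p>
<pre><code>import pandas as pd

def ids_ok(s):
    # valid when the non-null IDs of the group are exactly 1..n
    vals = sorted(s.dropna())
    return len(vals) == len(s) and vals == list(range(1, len(s) + 1))

group_cols = ["FirstName", "LastName", "Employee Type"]
ok = df.groupby(group_cols)["ID"].transform(ids_ok)
bad_people = df.loc[~ok, ["FirstName", "LastName"]].drop_duplicates()
result = df.merge(bad_people, on=["FirstName", "LastName"])
print(result)
</code></pre>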
|
<python><pandas><dataframe><filter><lambda>
|
2022-12-13 15:10:16
| 1
| 619
|
Paulo Cortez
|
74,786,941
| 17,762,566
|
How to match substring in a list of nested dictionary in python?
|
<p>I am stuck with the below issue, where I am trying to search for a specific <code>value</code> in a nested dictionary inside a nested list. It doesn't have to be a complete match; it's more like <code>contains</code>. Below is the example data:</p>
<pre><code>data=[[{'A': 'test1'}, {'A': 'test2'}, {'BB': {'A': '111testabc'}}],
[{'A': 'test1'}, {'A': 'test3'}, {'BB': {'A': '999000abc'}}],
[{'A': 'test1'}, {'A': 'test4'}, {'BB': {'A': '99111testabc123'}}],
[{'A': 'test1'}, {'A': 'test5'}, {'BB': {'A': '123456abc'}}]]
</code></pre>
<p>I want to extract every nested list whose dictionaries match, including the dictionaries themselves. The search string is <code>111testabc</code>, so my expected output would be:</p>
<pre><code>[[{'A': 'test1'}, {'A': 'test2'}, {'BB': {'A': '111testabc'}}],
[{'A': 'test1'}, {'A': 'test4'}, {'BB': {'A': '99111testabc123'}}]]
</code></pre>
<p>I am trying to solve this with:</p>
<pre><code>key, value = 'A', '111testabc'
dictList = []
for lp in data:
    dictList.append([myDict for myDict in lp if myDict.get(key) in value])
</code></pre>
<p>Can anyone please let me know how to resolve this? I am using <code>python3.9</code></p>
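<p>For illustration, a hedged sketch (the helper <code>contains_value</code> is made up for this example) that recurses into nested dictionaries and keeps a sub-list when any of its dictionaries contains the search string under the given key:</p>
<pre><code>def contains_value(obj, key, needle):
    # recursively look for a key whose string value contains needle
    if isinstance(obj, dict):
        for k, v in obj.items():
            if k == key and isinstance(v, str) and needle in v:
                return True
            if contains_value(v, key, needle):
                return True
    return False

key, value = 'A', '111testabc'
dictList = [lp for lp in data if any(contains_value(d, key, value) for d in lp)]
print(dictList)
# [[{'A': 'test1'}, {'A': 'test2'}, {'BB': {'A': '111testabc'}}],
#  [{'A': 'test1'}, {'A': 'test4'}, {'BB': {'A': '99111testabc123'}}]]
</code></pre>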
|
<python><python-3.9>
|
2022-12-13 15:08:34
| 3
| 793
|
Preeti
|
74,786,854
| 5,921,731
|
NetworkX cutting off the image
|
<p>I am trying to create a graph using NetworkX using the following code:</p>
<pre><code>import matplotlib.pyplot as plt
import networkx as nx
import pylab

def plotgraph(stringdatafile, alldelays, columns):
    """Plots a temporal causal graph showing all discovered causal relationships annotated with the time delay between cause and effect."""
    G = nx.DiGraph()
    for c in columns:
        G.add_node(c)
    for pair in alldelays:
        p1, p2 = pair
        nodepair = (columns[p2], columns[p1])
        print("nodepair", nodepair)
        G.add_edges_from([nodepair], weight=alldelays[pair])
    edge_labels = dict([((u, v,), d['weight'])
                        for u, v, d in G.edges(data=True)])
    pos = nx.circular_layout(G)
    nx.draw_networkx_edge_labels(G, pos, edge_labels=edge_labels)
    nx.draw(G, pos, node_color='white', edge_color='blue', node_size=10000, with_labels=True)
    ax = plt.gca()
    ax.collections[0].set_edgecolor("#000000")
    pylab.plot()
    plt.savefig("TCDF/result" + "result.png")
</code></pre>
<p>I am facing a few issues:</p>
<ol>
<li>The figure is cut off in the saved image: <a href="https://i.sstatic.net/AVRna.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AVRna.jpg" alt="Cut-off image" /></a></li>
<li>The node labels go outside the nodes and get cut off. How can I adapt the node size to fit the labels?</li>
<li>It's not displaying the self loops.</li>
</ol>
<p>Can someone please help me with it?</p>
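<p>For the first two points, a hedged sketch (not a complete fix; self-loop drawing also depends on the networkx/matplotlib versions): give the figure a fixed size, add axes margins so the large nodes stay inside the plotting area, and save with a tight bounding box. <code>G</code> and <code>edge_labels</code> are the objects built in <code>plotgraph</code> above:</p>
<pre><code>import matplotlib.pyplot as plt
import networkx as nx

fig, ax = plt.subplots(figsize=(10, 10))
pos = nx.circular_layout(G)
nx.draw(G, pos, ax=ax, node_color='white', edge_color='blue',
        node_size=10000, with_labels=True, font_size=9)
nx.draw_networkx_edge_labels(G, pos, edge_labels=edge_labels, ax=ax)
ax.margins(0.15)                       # extra room so big nodes are not clipped
ax.collections[0].set_edgecolor("#000000")
fig.savefig("TCDF/result" + "result.png", dpi=150, bbox_inches='tight')
</code></pre>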
|
<python><networkx>
|
2022-12-13 15:02:59
| 1
| 472
|
sidra Aleem
|
74,786,712
| 12,709,265
|
Read image labels from a csv file
|
<p>I have a dataset of medical images (<code>.dcm</code>) which I can read into <code>TensorFlow</code> as a batch. However, the problem that I am facing is that the labels of these images are in a <code>.csv</code>. The <code>.csv</code> file contains two columns - <code>image_path</code> (location of the image) and <code>image_labels</code> (0 for no; 1 for yes). I wanted to know how I can read the labels into a <code>TensorFlow</code> dataset batch wise. I am using the following code to load the images batch wise:-</p>
<pre><code>import tensorflow as tf
import tensorflow_io as tfio

def process_image(filename):
    image_bytes = tf.io.read_file(filename)
    image = tf.squeeze(
        tfio.image.decode_dicom_image(image_bytes, on_error='strict', dtype=tf.uint16),
        axis=0
    )
    x = tfio.image.decode_dicom_data(image_bytes, tfio.image.dicom_tags.PhotometricInterpretation)
    image = (image - tf.reduce_min(image)) / (tf.reduce_max(image) - tf.reduce_min(image))
    if (x == "MONOCHROME1"):
        image = 1 - image
    image = image * 255
    image = tf.cast(tf.image.resize(image, (512, 512)), tf.uint8)
    return image

# train_images is a list containing the locations of .dcm images
dataset = tf.data.Dataset.from_tensor_slices(train_images)
dataset = dataset.map(process_image, num_parallel_calls=4).batch(50)
</code></pre>
<p>Hence, I can load the images into the <code>TensorFlow</code> dataset. But I would like to know how I can load the image labels batch wise.</p>
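<p>For illustration, a hedged sketch (the CSV column names are taken from the question; the file name <code>labels.csv</code> is made up): build the dataset from (path, label) pairs and map the existing <code>process_image</code> over the path element only, so the labels stay aligned and are batched together with the images:</p>
<pre><code>import pandas as pd
import tensorflow as tf

labels_df = pd.read_csv("labels.csv")
paths = labels_df["image_path"].tolist()
labels = labels_df["image_labels"].astype("float32").tolist()

dataset = tf.data.Dataset.from_tensor_slices((paths, labels))
dataset = dataset.map(lambda path, label: (process_image(path), label),
                      num_parallel_calls=4).batch(50)
</code></pre>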
|
<python><python-3.x><tensorflow><tensorflow2.0>
|
2022-12-13 14:53:59
| 1
| 1,428
|
Shawn Brar
|
74,786,592
| 9,390,633
|
Split a dataframe by length of characters
|
<p>I have a table like</p>
<pre class="lang-none prettyprint-override"><code>--------------------|
Val
--------------------|
1, M A ,HELLO,WORLD |
2, M 1A,HELLO WORLD |
---------------------
</code></pre>
<p>I want to split the above dataframe so it contains the three columns below.</p>
<pre class="lang-none prettyprint-override"><code>----------------------
a | b | c |
----------------------
1 | M A | HELLO,WORLD|
2 | M 1A| HELLO WORLD|
----------------------
</code></pre>
<p>I have used the code below, but it does not work as expected. Is there a way to put everything after character 5 into column <em>c</em>, characters 2-5 into column <em>b</em>, and so on?</p>
<pre><code>df = df.withColumn('Splitted', F.split(hadf37dr_df['Val'], ',')).withColumn('a', F.col('Splitted')[0]).withColumn('b', F.col('Splitted')[1]).withColumn('c', F.col('Splitted')[2])
</code></pre>
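<p>For illustration, a hedged sketch using fixed character positions instead of splitting on commas (the offsets below follow the "characters 2-5 / after character 5" description and are assumptions; they may need adjusting against the real data):</p>
<pre><code>from pyspark.sql import functions as F

df = (df
      .withColumn('a', F.substring('Val', 1, 1))        # first character
      .withColumn('b', F.substring('Val', 2, 4))        # characters 2-5
      .withColumn('c', F.expr('substring(Val, 6)')))    # everything after character 5
</code></pre>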
|
<python><apache-spark><pyspark>
|
2022-12-13 14:44:14
| 4
| 363
|
lunbox
|
74,786,423
| 726,730
|
python check web radio url
|
<p><strong>File: check_web_radio.py</strong></p>
<pre class="lang-py prettyprint-override"><code>from subprocess import Popen
import subprocess
import os
import sys
def check_retransmition(stream_url):
try:
print("Checking url:"+str(stream_url))
if os.path.exists("outputfile.mp3"):
os.remove("outputfile.mp3")
ffmpeg_path = os.path.abspath("ffmpeg.exe")
p1 = Popen([ffmpeg_path,'-y','-t','10','-i',stream_url,'outputfile.mp3'],stdout=subprocess.PIPE)
p1.wait()
try:
streamdata = p1.communicate()[0]
except:
pass
rc = p1.returncode
p1.terminate()
if os.path.exists("outputfile.mp3"):
os.remove("outputfile.mp3")
if int(rc)==0:
print("Result: True")
return True
else:
print("Result: False")
return False
except Exception as e:
print(e)
print("Result: False (exception)")
return False
check_retransmition("https://impradio.bytemasters.gr/8002/LIVE")
sys.exit()
</code></pre>
<p>Result:</p>
<pre><code>C:\Users\chris\Desktop>python check_web_radio.py
Checking url:https://impradio.bytemasters.gr/8002/LIVE
ffmpeg version N-108547-gaaf4109a5f-20221006 Copyright (c) 2000-2022 the FFmpeg
developers
built with gcc 12.1.0 (crosstool-NG 1.25.0.55_3defb7b)
configuration: --prefix=/ffbuild/prefix --pkg-config-flags=--static --pkg-conf
ig=pkg-config --cross-prefix=x86_64-w64-mingw32- --arch=x86_64 --target-os=mingw
32 --enable-gpl --enable-version3 --disable-debug --disable-w32threads --enable-
pthreads --enable-iconv --enable-libxml2 --enable-zlib --enable-libfreetype --en
able-libfribidi --enable-gmp --enable-lzma --enable-fontconfig --enable-libvorbi
s --enable-opencl --disable-libpulse --enable-libvmaf --disable-libxcb --disable
-xlib --enable-amf --enable-libaom --enable-libaribb24 --enable-avisynth --enabl
e-libdav1d --enable-libdavs2 --disable-libfdk-aac --enable-ffnvcodec --enable-cu
da-llvm --enable-frei0r --enable-libgme --enable-libkvazaar --enable-libass --en
able-libbluray --enable-libjxl --enable-libmp3lame --enable-libopus --enable-lib
rist --enable-libssh --enable-libtheora --enable-libvpx --enable-libwebp --enabl
e-lv2 --enable-libmfx --enable-libopencore-amrnb --enable-libopencore-amrwb --en
able-libopenh264 --enable-libopenjpeg --enable-libopenmpt --enable-librav1e --en
able-librubberband --enable-schannel --enable-sdl2 --enable-libsoxr --enable-lib
srt --enable-libsvtav1 --enable-libtwolame --enable-libuavs3d --disable-libdrm -
-disable-vaapi --enable-libvidstab --enable-vulkan --enable-libshaderc --enable-
libplacebo --enable-libx264 --enable-libx265 --enable-libxavs2 --enable-libxvid
--enable-libzimg --enable-libzvbi --extra-cflags=-DLIBTWOLAME_STATIC --extra-cxx
flags= --extra-ldflags=-pthread --extra-ldexeflags= --extra-libs=-lgomp --extra-
version=20221006
libavutil 57. 39.100 / 57. 39.100
libavcodec 59. 50.100 / 59. 50.100
libavformat 59. 34.100 / 59. 34.100
libavdevice 59. 8.101 / 59. 8.101
libavfilter 8. 49.101 / 8. 49.101
libswscale 6. 8.112 / 6. 8.112
libswresample 4. 9.100 / 4. 9.100
libpostproc 56. 7.100 / 56. 7.100
[tls @ 00000000004db100] Creating security context failed (0x80090302)
https://impradio.bytemasters.gr/8002/LIVE: Unknown error occurred
Result: False
C:\Users\chris\Desktop>
</code></pre>
<p>Any thoughts on how I can fix it?</p>
<p>Note that <code>check_retransmition("http://impradio.bytemasters.gr/8002/LIVE")</code> (http instead of https) works after allowing it in Avast!</p>
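<p>As a side note, a hedged alternative check (a different technique than the ffmpeg-based one above) that avoids ffmpeg's schannel TLS stack entirely by pulling a small chunk of the stream with <code>requests</code>:</p>
<pre class="lang-py prettyprint-override"><code>import requests

def check_stream(stream_url, timeout=10):
    # fetch a small chunk; any readable data counts as "online"
    try:
        with requests.get(stream_url, stream=True, timeout=timeout) as r:
            r.raise_for_status()
            chunk = next(r.iter_content(chunk_size=4096), b"")
            return len(chunk) > 0
    except requests.RequestException as exc:
        print(exc)
        return False

print(check_stream("https://impradio.bytemasters.gr/8002/LIVE"))
</code></pre>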
|
<python><webradio>
|
2022-12-13 14:29:46
| 1
| 2,427
|
Chris P
|
74,786,177
| 3,247,006
|
split() vs rsplit() in Python
|
<p>I used <code>split()</code> and <code>rsplit()</code> as shown below:</p>
<pre class="lang-py prettyprint-override"><code>test = "1--2--3--4--5"
print(test.split("--")) # Here
print(test.rsplit("--")) # Here
</code></pre>
<p>Then, I got the same result as shown below:</p>
<pre class="lang-none prettyprint-override"><code>['1', '2', '3', '4', '5'] # split()
['1', '2', '3', '4', '5'] # rsplit()
</code></pre>
<p>So, what's the difference between <code>split()</code> and <code>rsplit()</code>?</p>
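<p>For reference, the difference only becomes visible when <code>maxsplit</code> limits the number of splits: <code>split()</code> consumes separators from the left, <code>rsplit()</code> from the right:</p>
<pre class="lang-py prettyprint-override"><code>test = "1--2--3--4--5"
print(test.split("--", 2))   # ['1', '2', '3--4--5']
print(test.rsplit("--", 2))  # ['1--2--3', '4', '5']
</code></pre>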
|
<python><split>
|
2022-12-13 14:11:16
| 2
| 42,516
|
Super Kai - Kazuya Ito
|
74,786,109
| 1,142,881
|
Zombie xlwings Python processes hanging after closing excel
|
<p>Is there a way from VBA to kill hanging zombie <a href="https://docs.xlwings.org/en/stable/index.html" rel="nofollow noreferrer">xlwings</a> Python processes before opening or after closing Excel? I'm happy to take a solution from either VBA or Python, I don't really mind which.</p>
<p>For example, in VBA I could do the following, but this is imprecise: I need to kill specific Python processes, namely those resulting from xlwings and connected to a specific workbook, i.e. those that have <code>MyWorkbook.xlsx</code> in the file name. Is there a way to use <code>TASKKILL</code> in a more precise manner?</p>
<pre><code>Private Sub Workbook_Open()
    Dim KillPython As String
    KillPython = "TASKKILL /F /IM Python.exe"
    Shell KillPython, vbHide
    'do more stuff, now I can safely rely that there are no
    'automation errors coming from xlwings' zombie processes
    RunPython("import mymodule; mymodule.do_stuff()")
End Sub
</code></pre>
<p>In Python I could do the following, which is more precise:</p>
<pre><code>import os
import psutil

my_pid = os.getpid()
my_proc = None
for p in psutil.process_iter():
    if 'python' in p.name() and 'MyWorkbook' in ' '.join(p.cmdline()):
        if p.pid != my_pid:
            p.kill()
        else:
            my_proc = p

# finally try to kill self, if needed
my_proc.kill()
</code></pre>
<p>Here there seems to be a relationship between my_pid and the p I am getting, as sometimes it satisfies the condition <code>p.pid != my_pid</code> yet it kills the current process. Maybe there is a parent-child process relationship when I match the process names.</p>
<p>On a separate note, why is xlwings always triggering the creation of two Python processes instead of only one?</p>
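<p>For what it's worth, a hedged variant of the Python approach above that skips processes it cannot inspect and only ever kills <em>other</em> interpreters whose command line mentions the workbook (attribute names follow the psutil API):</p>
<pre><code>import os
import psutil

my_pid = os.getpid()
for p in psutil.process_iter(["pid", "name", "cmdline"]):
    try:
        name = (p.info["name"] or "").lower()
        cmdline = " ".join(p.info["cmdline"] or [])
        if "python" in name and "MyWorkbook" in cmdline and p.info["pid"] != my_pid:
            p.kill()
    except (psutil.AccessDenied, psutil.NoSuchProcess):
        continue
</code></pre>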
|
<python><excel><vba><xlwings><taskkill>
|
2022-12-13 14:06:57
| 1
| 14,469
|
SkyWalker
|
74,786,100
| 1,264,820
|
Typing and pint
|
<p>I'm using <code>pint</code> to handle and convert units. I wanted to create classes which restrict the quantities to only "[time]" or "[length]" dimensions, so as a first approach I did the following:</p>
<pre class="lang-py prettyprint-override"><code>from pint import Quantity, DimensionalityError
class Time(Quantity):
def __new__(cls, v: str | Quantity) -> Quantity:
obj = Quantity(v)
if not obj.check("[time]"):
raise DimensionalityError(v, "[time]")
return obj
class Length(Quantity):
def __new__(cls, v: str | Quantity) -> Quantity:
obj = Quantity(v)
if not obj.check("[length]"):
raise DimensionalityError(v, "[length]")
return obj
</code></pre>
<p>At runtime it works as expected, i.e: I can do the following:</p>
<pre class="lang-py prettyprint-override"><code>1hour = Time("1h") # Works ok, variable 1hour contains `<Quantity(1, 'hour')>`
bad = Time("1meter") # As expected, raises pint.errors.DimensionalityError: Cannot convert from '1meter' to '[time]'
1meter = Length("1meter") # Ok
bad_again = Length("1h") # Ok, raises DimensionalityError
</code></pre>
<p>However, from a typing perspective, something is wrong:</p>
<pre class="lang-py prettyprint-override"><code>def myfunc(t: Time) -> str:
return f"The duration is {t}"
print(myfunc(Time("1h"))) # Ok
print(myfunc(Length("1m"))) # Type error?
</code></pre>
<p>The second call to <code>myfunc()</code> is a type error, since I'm passing a <code>Length</code> instead of a <code>Time</code>. However <code>mypy</code> is happy with the code. So I have some questions:</p>
<ol>
<li>Why doesn't mypy catches the error?</li>
<li>How to do it properly?</li>
</ol>
<p>For 1. I guess that something fishy is happening in pint's <code>Quantity</code> implementation. I tried:</p>
<pre class="lang-py prettyprint-override"><code>foo = Quantity("3 pounds")
reveal_type(foo)
</code></pre>
<p>and the revealed type is <code>Any</code> instead of <code>Quantity</code> which is very suspicious.</p>
<p>So I tried removing the base class <code>Quantity</code> from my <code>Time</code> and <code>Length</code> classes (i.e: they derive now from <code>object</code> instead of <code>Quantity</code>), and in this case, <code>mypy</code> correctly manages the typing errors.</p>
<p>But it fails again as soon as I try something like <code>Length("60km")/Time("1h")</code>. <code>mypy</code> complains that <code>Length</code> object does not implement the required method for performing that division (although the code works ok at runtime because, after all, <code>Length</code> and <code>Time</code> <code>__new__()</code> method is returning a <code>Quantity</code> object which <em>does</em> implement the arithmetical operations).</p>
<p>So, again, is there any workaround for making the idea work both at run-time and for <code>mypy</code>?</p>
|
<python><mypy><typing><pint>
|
2022-12-13 14:06:07
| 1
| 1,483
|
JLDiaz
|
74,786,083
| 6,729,591
|
Fill a plot with color from a y upwards to infinity
|
<p>I am trying to indicate a "dangerous" zone in my sns boxplot.</p>
<p>So far this is what i came up with:</p>
<pre class="lang-py prettyprint-override"><code>plt.fill_between([0, 6], [danger, danger], danger * 1.2, color='salmon', alpha=0.5)
</code></pre>
<p>But since I use an <code>sns</code> boxplot, the x values are totally messed up and I had to play with it to get the following result:</p>
<p><a href="https://i.sstatic.net/rgvdO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rgvdO.png" alt="enter image description here" /></a></p>
<p>I can't really increase the polygon size, as the plot grows with it (understandably).
Ideally I would simply paint everything above this threshold red, like so:
<a href="https://i.sstatic.net/1y2Yb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1y2Yb.png" alt="enter image description here" /></a></p>
|
<python><matplotlib><seaborn>
|
2022-12-13 14:04:59
| 0
| 1,404
|
Dr. Prof. Patrick
|
74,786,028
| 20,078,696
|
How to convert window coords into turtle coords (Python Turtle)
|
<p>I am trying to create a program to move the turtle to where the mouse is.
I am doing:</p>
<pre><code>import turtle

t = turtle.Turtle()
canvas = turtle.getcanvas()

while True:
    mouseX, mouseY = canvas.winfo_pointerxy()
    t.goto(mouseX, mouseY)
</code></pre>
<p>but the turtle keeps moving off the screen.
I read from <a href="https://stackoverflow.com/questions/35732851/turtle-how-to-get-mouse-cursor-position-in-window">this question</a> that canvas.winfo_pointerxy() returns 'window coordinates' (0, 0 at the top left of the window) and that I need to convert them to 'turtle coordinates' (0, 0 at the center of the window) but I don't know how to do that.</p>
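<p>For illustration, a sketch of that conversion: subtract the canvas's screen position to get coordinates relative to the canvas, then shift the origin to the centre and flip the y axis:</p>
<pre><code>import turtle

t = turtle.Turtle()
screen = turtle.Screen()
canvas = turtle.getcanvas()
t.speed(0)

while True:
    # pointer position relative to the canvas (origin top-left, y downward)
    x = canvas.winfo_pointerx() - canvas.winfo_rootx()
    y = canvas.winfo_pointery() - canvas.winfo_rooty()
    # turtle coordinates: origin at the centre, y upward
    t.goto(x - screen.window_width() / 2,
           screen.window_height() / 2 - y)
</code></pre>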
|
<python><coordinates><turtle-graphics><python-turtle>
|
2022-12-13 14:01:13
| 2
| 789
|
sbottingota
|
74,786,016
| 2,457,899
|
Install GRASS GIS and use it with python in Linux machines?
|
<p>Is there a bash script to install GRASS GIS and use it with Python on Linux machines?</p>
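<p>For illustration, a heavily hedged sketch for Debian/Ubuntu-style systems (package names and paths are assumptions and vary by distribution and GRASS version):</p>
<pre><code>#!/usr/bin/env bash
set -e
sudo apt-get update
sudo apt-get install -y grass grass-dev
# expose the Python bindings shipped with GRASS
export GISBASE="$(grass --config path)"
export PYTHONPATH="$GISBASE/etc/python:$PYTHONPATH"
python3 -c "import grass.script; print('GRASS Python bindings available')"
</code></pre>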
|
<python><linux><gis><grass>
|
2022-12-13 13:59:58
| 2
| 4,041
|
gcamargo
|
74,785,987
| 7,800,760
|
Python: sum dataframe rows with same datetime year
|
<p>I have a dataframe with two datetime columns and a third column with a numeric value. Here is an example:</p>
<pre><code>2019-01-01 00:00:00 2019-12-31 00:00:00 118433.0
2020-01-01 00:00:00 2020-12-31 00:00:00 120087.0
2021-01-01 00:00:00 2021-06-30 00:00:00 63831.0
2021-07-01 00:00:00 2021-12-31 00:00:00 63089.0
2022-01-01 00:00:00 2022-06-30 00:00:00 60753.0
2022-07-08 00:00:00 2022-11-30 00:00:00 9067.17
</code></pre>
<p>As you can see, while 2019 and 2020 are full years, the other rows describe different time spans.</p>
<p>I need to sum all the numeric values pertaining to the same year to get something like:</p>
<pre><code>2019 118422.0
2020 120087.0
2021 123842.0
2022 9067.0
</code></pre>
<p>The from-to dates on every row always fall in the same year.</p>
<p>Any given year does not have to be a full year.</p>
<p>I'd love to avoid simple iterations and learn a properly pythonic way of achieving this (list comprehension / vectorization).</p>
<p>Thank you</p>
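<p>For illustration, a minimal sketch, assuming the columns are named <code>start</code>, <code>end</code> and <code>value</code> (the example has no header, so these names are made up): group by the year of the start date, which is safe because each row falls within a single year:</p>
<pre><code>import pandas as pd

df.columns = ["start", "end", "value"]
df["start"] = pd.to_datetime(df["start"])
totals = df.groupby(df["start"].dt.year)["value"].sum()
print(totals)
</code></pre>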
|
<python><pandas><datetime>
|
2022-12-13 13:57:24
| 0
| 1,231
|
Robert Alexander
|
74,785,938
| 7,133,942
|
How to change the size of the axis values in Matplotlib
|
<p>I have the following Matplotlib plot:</p>
<pre><code>import numpy as np
import statsmodels.api as sm
import matplotlib.pyplot as plt
price_values = [[4.2],
[4.1],
[4],
[3.8],
[3.9],
[4.2],
[4.5],
[4.8],
[5.2],
[5.2],
[5.2],
[5.6],
[5.2],
[5.1],
[5.3],
[6],
[6.2],
[6.3],
[6.2],
[6],
[5.5] ,
[5.2],
[4.8],
[4.6]]
sample = np.asarray(price_values).squeeze()
ecdf = sm.distributions.ECDF(sample)
x = np.linspace(min(sample), max(sample))
y = ecdf(x)
plt.step(x, y)
ax=plt.gca()
ax.set_xlabel("Price [Cent/kWh]", fontsize = 14, labelpad=8)
ax.set_ylabel("Cumulative frequency", fontsize = 14,labelpad=8)
plt.show()
fig = plt.figure(linewidth=3, figsize=(7, 4))
ax.tick_params(axis='both', which='major', labelsize=140)
plt.savefig('C:/Users/User1/Desktop/Diagramm_EDF.png', edgecolor='black', dpi=300, bbox_inches='tight')
</code></pre>
<p>I would like to change the size of the tick values on the x and y axes (not the axis labels). I tried the following, <code>ax.tick_params(axis='both', which='major', labelsize=140)</code>, but it does not work; the <code>labelsize</code> does not have any effect. Further, I would like to increase the linewidth of the diagram. Any suggestions on how I can do that?</p>
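<p>For illustration, a hedged sketch of the usual fix: in the code above, <code>tick_params</code> is applied to an axes that belongs to the already-shown figure, and <code>savefig</code> saves the new, empty figure created afterwards. Styling the same axes the data is drawn on, before showing, and saving that figure avoids both problems (<code>x</code> and <code>y</code> come from the code above):</p>
<pre><code>import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(7, 4), linewidth=3)
ax.step(x, y)
ax.set_xlabel("Price [Cent/kWh]", fontsize=14, labelpad=8)
ax.set_ylabel("Cumulative frequency", fontsize=14, labelpad=8)
ax.tick_params(axis='both', which='major', labelsize=14, width=2, length=6)
fig.savefig('Diagramm_EDF.png', edgecolor='black', dpi=300, bbox_inches='tight')
plt.show()
</code></pre>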
|
<python><matplotlib>
|
2022-12-13 13:54:04
| 0
| 902
|
PeterBe
|
74,785,911
| 12,113,958
|
Add row duplicated for a value missed in a Series
|
<p>I want to add a row that duplicates the previous one for a missing value.</p>
<pre><code>df = pd.DataFrame({'A':[0,1,2,3,5,6,7], 'B':['green','red','blue','blue','black','white','green']})
</code></pre>
<p>In this case the missing value in column A is 4, so I want to add a row with value 4 in column A and blue in column B, because that is the value of the previous row.</p>
<p>Desired Output</p>
<pre><code>
A B
0 0 green
1 1 red
2 2 blue
3 3 blue
4 4 blue
5 5 black
6 6 white
7 7 green
</code></pre>
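<p>For illustration, a minimal sketch: reindex column <code>A</code> over the full integer range and forward-fill <code>B</code>:</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'A': [0, 1, 2, 3, 5, 6, 7],
                   'B': ['green', 'red', 'blue', 'blue', 'black', 'white', 'green']})

out = (df.set_index('A')
         .reindex(range(df['A'].min(), df['A'].max() + 1))
         .ffill()
         .reset_index())
print(out)
</code></pre>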
|
<python><pandas><dataframe>
|
2022-12-13 13:52:00
| 1
| 529
|
luka
|
74,785,745
| 986,437
|
doubts about python3 absolute imports
|
<p>I know that in Python we can use absolute imports, and these imports start at the root folder of the project, for example:</p>
<pre><code>MyProject
├── module1.py
├── module2.py
├── package1
| ├──__init__.py
| └── module3.py
└── package2
├── __init__.py
└── module4.py
</code></pre>
<p>My question is: how does Python know what the root folder is?</p>
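<p>Roughly speaking, Python does not detect a project root at all: absolute imports are resolved against the directories on <code>sys.path</code>, and the directory of the script you run (or the current directory, for <code>python -m</code> and the interactive prompt) is what gets prepended there. A quick way to see it:</p>
<pre><code>import sys

print(sys.path[0])   # directory of the script being run ('' for the interactive prompt)
print(sys.path)      # the full search path used for absolute imports
</code></pre>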
|
<python><import>
|
2022-12-13 13:38:41
| 1
| 2,892
|
GionJh
|
74,785,736
| 3,752,268
|
How to periodically call instance method from a separate process
|
<p>I'm trying to write a class to help with buffering some data that takes a while to read in, and which needs to be periodically updated. The python version is 3.7.
There are 3 criteria I would like the class to satisfy:</p>
<ul>
<li>Manual update: An instance of the class should have an 'update' function, which reads in new data.</li>
<li>Automatic update: An instance's update method should be periodically run, so the buffered data never gets too old. As reading takes a while, I'd like to do this without blocking the main process.</li>
<li>Self contained: Users should be able to inherit from the class and overwrite the method for refreshing data, i.e. the automatic updating should work out of the box.</li>
</ul>
<p>I've tried having instances create their own subprocess for running the updates. This causes problems because simply passing the instance to another process seems to create a copy, so the desired instance is not updated automatically.</p>
<p>Below is an example of the approach I'm trying. Can anyone help getting the automatic update to work?</p>
<pre><code>import multiprocessing as mp
import random
import time
def refresh_helper(buffer, lock):
"""Periodically calls refresh method in a buffer instance."""
while True:
with lock.acquire():
buffer._refresh_data()
time.sleep(10)
class Buffer:
def __init__(self):
# Set up a helper process to periodically update data
self.lock = mp.Lock()
self.proc = mp.Process(target=refresh_helper, args=(self, self.lock), daemon=True)
self.proc.start()
# Do an initial update
self.data = None
self.update()
def _refresh_data(self):
"""Pretends to read in some data. This would take a while for real data"""
numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9]
data = [random.choice(numbers) for _ in range(3)]
self.data = data
def update(self):
with self.lock.acquire():
self._refresh_data()
def get_data(self):
return self.data
#
if __name__ == '__main__':
buffer = Buffer()
data_first = buffer.get_data()
time.sleep(11)
data_second = buffer.get_data() # should be different from first
</code></pre>
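<p>For comparison, a minimal sketch of one possible alternative that side-steps the copy problem by using a background <em>thread</em> (shared memory) instead of a separate process; this is an assumption about your requirements, not a drop-in fix:</p>
<pre><code>import random
import threading
import time

class Buffer:
    def __init__(self):
        self._lock = threading.Lock()
        self.data = None
        self.update()  # initial fill
        # daemon thread keeps refreshing this very instance (no copy is made)
        threading.Thread(target=self._refresh_loop, daemon=True).start()

    def _refresh_loop(self):
        while True:
            time.sleep(10)
            self.update()

    def _refresh_data(self):
        self.data = [random.choice(range(1, 10)) for _ in range(3)]

    def update(self):
        with self._lock:
            self._refresh_data()

    def get_data(self):
        with self._lock:
            return self.data
</code></pre>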
|
<python><multiprocessing>
|
2022-12-13 13:37:37
| 2
| 2,816
|
bjarkemoensted
|
74,785,720
| 1,635,525
|
python: prevent discarding of the function return value
|
<p>There's a common error in the code where people write something like:</p>
<pre><code>if (id):
query.filter(row.id == id)
</code></pre>
<p>instead of</p>
<pre><code>if (id):
query = query.filter(row.id == id)
</code></pre>
<p>The code looks "valid" and it's very hard to spot these by hand. In C++ there's the <code>[[nodiscard]]</code> function attribute that effectively prevents this mistake by enforcing the usage of the return value. I wasn't able to find anything similar in Python, does it exist?</p>
<p><strong>UPD</strong> A similar question: <a href="https://stackoverflow.com/questions/51858215/how-to-make-pylint-report-unused-return-values">How to make pylint report unused return values</a></p>
<p>I've checked pylint and mypy, apparently neither of them catch this as of today.</p>
<p><strong>UPD2</strong> A pylint feature request: <a href="https://github.com/PyCQA/pylint/issues/7935" rel="noreferrer">https://github.com/PyCQA/pylint/issues/7935</a></p>
|
<python><mypy><pylint><nodiscard>
|
2022-12-13 13:35:46
| 0
| 586
|
salmin
|
74,785,680
| 15,852,600
|
How to format a dataframe having many NaN values, join all rows to those not starting with NaN
|
<p>I have the follwing <code>df</code>:</p>
<pre><code>df = pd.DataFrame({
    'col1': [1, np.nan, np.nan, np.nan, 5, np.nan, np.nan, np.nan],
    'col2': [np.nan, 2, np.nan, np.nan, np.nan, 6, np.nan, np.nan],
    'col3': [np.nan, np.nan, 3, np.nan, np.nan, np.nan, 7, np.nan],
    'col4': [np.nan, np.nan, np.nan, 4, np.nan, np.nan, np.nan, 8]
})
</code></pre>
<p>It has the following display:</p>
<pre><code> col1 col2 col3 col4
0 1.0 NaN NaN NaN
1 NaN 2.0 NaN NaN
2 NaN NaN 3.0 NaN
3 NaN NaN NaN 4.0
4 5.0 NaN NaN NaN
5 NaN 6.0 NaN NaN
6 NaN NaN 7.0 NaN
7 NaN NaN NaN 8.0
</code></pre>
<p>My goal is to keep all rows beginning with a float (not a NaN value) and join the remaining ones to them.</p>
<p>The <code>new_df</code> I want to get is:</p>
<pre><code> col1 col2 col3 col4
0 1 2 3 4
4 5 6 7 8
</code></pre>
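<p>For reference, a minimal sketch of one possible approach (assuming a new block starts whenever <code>col1</code> is not NaN):</p>
<pre><code>block = df['col1'].notna().cumsum()          # one label per block of rows
new_df = df.groupby(block).first()           # first() skips NaN per column
new_df.index = df.index[df['col1'].notna()]  # keep the original row labels 0 and 4
</code></pre>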
<p>Any help from your side will be highly appreciated (I upvote all answers).</p>
<p>Thank you!</p>
|
<python><pandas><dataframe>
|
2022-12-13 13:32:03
| 3
| 921
|
Khaled DELLAL
|
74,785,615
| 4,865,723
|
Seaborn's tight_layout cuts the plot title
|
<p>In the plot, the title is cut off on its right side because I used <code>tight_layout()</code>.</p>
<p><a href="https://i.sstatic.net/3U0q0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3U0q0.png" alt="enter image description here" /></a></p>
<p>Can I prevent this somehow?</p>
<p>Here is the MWE for that plot:</p>
<pre><code>#!/usr/bin/env python3
import seaborn as sns
df = sns.load_dataset('titanic')
plot = sns.catplot(kind='box', data=df, y='fare')
plot.set(title='Lorem ipsum dolor sit amet, consetetur sadipscing elitr')
plot.tight_layout()
plot.figure.show()
</code></pre>
|
<python><seaborn>
|
2022-12-13 13:27:00
| 1
| 12,450
|
buhtz
|
74,785,610
| 2,244,274
|
GUNICORN FastAPI Deployment with Proxy
|
<p>I am currently setting up a FastAPI application to run on an AWS RockyLinux 8 instance with a <a href="https://gunicorn.org/" rel="nofollow noreferrer">gunicorn</a> deployment. Most of the documentation that I have read recommends NGINX as a proxy server.</p>
<p>I currently have an Apache sever running on this instance and would rather not install another HTTP daemon if I can help it.</p>
<p>My question is: Why is a proxy HTTP server needed at all? <strong>gunicorn</strong> seems to run just fine without a proxy HTTP server simply by specifying a port other than 80. Is this a security recommendation or am I missing something?</p>
|
<python><nginx><fastapi><gunicorn>
|
2022-12-13 13:26:39
| 0
| 999
|
Jamie_D
|
74,785,520
| 13,775,706
|
generate a occupancy map from x, y coordinates of an irregular shape, script dying from SIGKILL
|
<p>I have a number of CSV files with x, y, and z coordinates. These coordinates are not long/lat, but rather a distance from an origin. So within the CSV, there is a 0,0 origin, and all other x, y locations are a distance from that origin point in meters.</p>
<p>The x, and y values will be both negative and positive float values. The largest file I have is ~1.4 million data points, the smallest is ~20k.</p>
<p>The files represent an irregular shaped map of sorts. The distance values will not produce a uniform shape such as a rectangle, circle, etc. I need to generate a bounding box that fits the most area within the values that are contained within the csv files.</p>
<p>logically, here are the steps I want to take.</p>
<ul>
<li>Read the points from the file</li>
<li>Get the minimum and maximum x coordinates</li>
<li>get the minimum and maximum y coordinates.</li>
<li>Use min/max coordinates to get a bounding rectangle with (xmin,ymin), (xmax,ymin), (xmin,ymax) and (xmax,ymax) that will contain the entirety of the values of the CSV file.</li>
<li>Create a grid across that rectangle with a 1 m resolution. Set that grid as a boolean array for the occupancy.</li>
<li>Round the map coordinates to the nearest integer.</li>
<li>For every rounded map coordinate switch the occupancy to True.</li>
<li>Use a morphological filter to erode the edges of the occupancy map.</li>
<li>Now when a point is selected check the nearest integer value and whether it falls within the occupancy map.</li>
</ul>
<p>I'm facing multiple issues, but thus far my biggest issue is memory resources. For some reason this script keeps dying with a SIGKILL, or at least I think that is what is occurring.</p>
<pre><code>class GridBuilder:
"""_"""
def __init__(self, filename, search_radius) -> None:
"""..."""
self.filename = filename
self.search_radius = search_radius
self.load_map()
self.process_points()
def load_map(self):
"""..."""
data = np.loadtxt(self.filename, delimiter=",")
self.x_coord = data[:, 0]
self.y_coord = data[:, 1]
self.z_coord = data[:, 2]
def process_points(self):
"""..."""
min_x = math.floor(np.min(self.x_coord))
min_y = math.floor(np.min(self.y_coord))
max_x = math.floor(np.max(self.x_coord))
max_y = math.floor(np.max(self.y_coord))
int_x_coord = np.floor(self.x_coord).astype(np.int32)
int_y_coord = np.floor(self.y_coord).astype(np.int32)
x = np.arange(min_x, max_x, 1)
y = np.arange(min_y, max_y, 1)
xx, yy = np.meshgrid(x, y, copy=False)
if __name__ == "__main__":
"""..."""
MAP_FILE_DIR = r"/sample_data"
FILE = "testfile.csv"
fname = os.path.join(MAP_FILE_DIR, FILE)
builder = GridBuilder(fname, 500)
</code></pre>
<p>my plan was to take the grid with the coordinates and update each location with a dataclass.</p>
<pre><code>@dataclass
class LocationData:
"""..."""
coord: list
occupied: bool
</code></pre>
<p>This identifies the grid location, and if its found within the CSV file map.</p>
<p>I understand this is going to be time consuming process, but I figured this would be my first attempt.</p>
<p>I know Stackoverflow generally dislikes attachements, but I figured it might be useful for a sample dataset of what I'm working with. So I've uploaded a file for sharing. <a href="https://1drv.ms/u/s!Ak_Up0H34vXskM0jdxFhY06L-DZZcg?e=FEo104" rel="nofollow noreferrer">test_file</a></p>
<p>UPDATE:
the original code utilized itertools to generate a grid for each location. I ended up switching away from itertools and used numpy's meshgrid() instead. This caused the same issue, but meshgrid() has a copy parameter that can be set to False to preserve memory resources. This fixed the memory issue.</p>
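<p>For reference, a minimal sketch of the boolean occupancy grid described above that avoids building a dense meshgrid at all (variable names follow the snippet above):</p>
<pre><code># sketch: mark occupied 1 m cells directly from the integer coordinates
occupancy = np.zeros((max_x - min_x + 1, max_y - min_y + 1), dtype=bool)
occupancy[int_x_coord - min_x, int_y_coord - min_y] = True

def is_occupied(x, y):
    """Check whether a query point falls on an occupied 1 m cell."""
    i, j = math.floor(x) - min_x, math.floor(y) - min_y
    return 0 <= i < occupancy.shape[0] and 0 <= j < occupancy.shape[1] and occupancy[i, j]
</code></pre>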
|
<python><occupancy-map>
|
2022-12-13 13:19:29
| 2
| 304
|
Michael
|
74,785,502
| 2,919,585
|
Python: Iteratively change superclass methods
|
<p>I want to create a class <code>Packet</code> identical to <code>list</code> except that it can be compared to <code>int</code> objects. Comparing to an <code>int</code> shall return the same result as comparing to a <code>Packet</code> containing only that <code>int</code>. The following definition does what I want.</p>
<pre class="lang-py prettyprint-override"><code>class Packet(list):
def __init__(self, iterable=()):
super().__init__()
for x in iterable:
if isinstance(x, list):
self.append(type(self)(x))
else:
self.append(x)
def __lt__(self, x):
if isinstance(x, int):
return self.__lt__(type(self)([x]))
return super().__lt__(x)
def __le__(self, x):
if isinstance(x, int):
return self.__le__(type(self)([x]))
return super().__le__(x)
def __eq__(self, x):
if isinstance(x, int):
return self.__eq__(type(self)([x]))
return super().__eq__(x)
def __ne__(self, x):
if isinstance(x, int):
return self.__ne__(type(self)([x]))
return super().__ne__(x)
def __ge__(self, x):
if isinstance(x, int):
return self.__ge__(type(self)([x]))
return super().__ge__(x)
def __gt__(self, x):
if isinstance(x, int):
return self.__gt__(type(self)([x]))
return super().__gt__(x)
a = Packet([2, 3, 5])
b = Packet([[2], 3, [[5]]])
c = Packet([2, [3, 4]])
d = 2
assert a == b
assert a < c
assert a > d
assert b < c
assert b > d
assert c > d
</code></pre>
<p>However, this is rather repetitive; I wrote basically the same code six times. There's got to be a way to do this in a loop or at least using a decorator, right? How can I create an identical class without repeating myself?</p>
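<p>For reference, a minimal sketch of one possible way to generate the six methods in a loop (a sketch, not necessarily the cleanest solution):</p>
<pre><code>class Packet(list):
    def __init__(self, iterable=()):
        super().__init__()
        for x in iterable:
            self.append(type(self)(x) if isinstance(x, list) else x)

def _make_cmp(name):
    def method(self, x):
        # wrap bare ints in a single-element Packet, then defer to list's comparison
        if isinstance(x, int):
            x = type(self)([x])
        return getattr(super(Packet, self), name)(x)
    return method

for _name in ('__lt__', '__le__', '__eq__', '__ne__', '__ge__', '__gt__'):
    setattr(Packet, _name, _make_cmp(_name))
</code></pre>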
|
<python><inheritance><operator-overloading><super>
|
2022-12-13 13:18:00
| 1
| 571
|
schtandard
|
74,785,476
| 13,635,877
|
use a list plus some strings to select columns from a dataframe
|
<p>I am trying to make a dynamic list and then combine it with a fixed string to select columns from a dataframe:</p>
<pre><code>import pandas as pd
df = pd.DataFrame([], columns=['c1','c2','c3','c4'])
column_list= ['c2','c3']
df2 = df[['c1',column_list]]
</code></pre>
<p>but I get the following error:</p>
<pre><code>TypeError: unhashable type: 'list'
</code></pre>
<p>I tried a dict as well, but that gives a similar error.</p>
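<p>For reference, a minimal sketch of one possible fix: build a single flat list of labels instead of nesting a list inside another list:</p>
<pre><code>df2 = df[['c1'] + column_list]
# or, equivalently, with unpacking
df2 = df[['c1', *column_list]]
</code></pre>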
|
<python><pandas><dataframe>
|
2022-12-13 13:15:36
| 1
| 452
|
lara_toff
|
74,785,427
| 17,696,880
|
How to capture a string of characters from an indicated position up to the first period followed by a line break?
|
<pre class="lang-py prettyprint-override"><code>import re
x = """44
5844 44554 Hi hi! , sahhashash; asakjas. jjksakjaskjas.
ooooooppkkk"""
#both patterns start capturing after the last line break within their capture range
# ((?:\w+)?) ---> with a capturing group this pattern captures a substring of alphanumeric characters (uppercase and lowercase) until it reaches a space, a comma or a dot
# ((?:\w\s*)+) ---> this pattern is similar to the previous one but it does not stop at spaces
regex_patron_m1 = r"\s*((?:\w+)?) \s*\¿?(?:del |de |)\s*((?:\w\s*)+)\s*\??"
m1 = re.search(regex_patron_m1, x, re.IGNORECASE) # With this I check whether the regex matches before entering the code block
if m1:
word, association = m1.groups()
print(repr(word)) #print captured substring by first capture group
print(repr(association)) #print captured substring by second capture group
</code></pre>
<p>The output that I get with this two patterns</p>
<pre><code>'5844'
'44554 Hi hi'
</code></pre>
<p>What should I modify to get the following? since I don't understand why both capture groups start their capture after the newline</p>
<p>And what should I do so that the capture of the second capture group is up to the full stop point <code>".[\s|]*\n*"</code> or <code>".\n*"</code>? To get</p>
<pre><code>'44'
'5844 44554 Hi hi! , sahhashash; asakjas. jjksakjaskjas.'
</code></pre>
<p>And if I didn't want it to stop at the line break, to get something like this, what should I do?</p>
<pre><code>'44'
'5844 44554 Hi hi! , sahhashash; asakjas. jjksakjaskjas.
ooooooppkkk'
</code></pre>
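<p>For illustration, one possible pattern (a sketch I have only checked against the example above, so treat it as an assumption): with <code>re.DOTALL</code>, a lazy group ending in a period that must be followed by a line break produces the first two desired groups, and dropping that requirement gives the last variant:</p>
<pre><code>m = re.search(r"(?s)\s*(\w+)\s*(.*?\.)\s*\n", x)   # '44' and '5844 ... jjksakjaskjas.'
m_all = re.search(r"(?s)\s*(\w+)\s*(.*)", x)       # '44' and everything up to the end
</code></pre>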
|
<python><python-3.x><regex><string><regex-group>
|
2022-12-13 13:11:16
| 2
| 875
|
Matt095
|
74,785,327
| 1,831,518
|
Programmatically trigger a SageMaker's Notebook instance
|
<p>So to start with, I currently have a system on AWS that does the following</p>
<ol>
<li>Collects user data (consent is assumed of course) and store it in an S3 bucket</li>
<li>Uses Sagemaker notebook instance to train on the user data and deploys an endpoint. The endpoint is later invoked by a lambda function (I followed this <a href="https://towardsdatascience.com/using-aws-sagemakers-linear-learner-to-solve-regression-problems-36732d802ba6" rel="nofollow noreferrer">tutorial</a>)</li>
</ol>
<p>The problem is, I have to</p>
<ol>
<li>manually run the notebook instance</li>
<li>manually change the path for each user's data</li>
</ol>
<p>What I would like to do,</p>
<h3>First:</h3>
<ul>
<li><p>Trigger the notebook instance programmatically.</p>
</li>
<li><p>This can be periodically or through an event. I could call the notebook instance from a lambda function or another AWS service.</p>
</li>
</ul>
<h3>Second:</h3>
<ul>
<li>Pass an argument to the notebook instance (e.g., bucket path where
data is stored)</li>
</ul>
<h3>Approach</h3>
<h4>Notebook Jobs</h4>
<p>I found some resources that suggested Notebook jobs. However, notebook jobs only give the ability to schedule notebooks.</p>
<p>There is an option to send parameters (which I was not able to get to work), but that option just means you pass the same argument on every run.</p>
<p>My question is this, am I following the right approach? Or is there something else I should consider?</p>
<p>Thanks in advance</p>
|
<python><amazon-web-services><amazon-s3><jupyter-notebook><amazon-sagemaker>
|
2022-12-13 13:03:34
| 1
| 3,094
|
A.Shoman
|
74,785,245
| 1,969,638
|
How do I use a match statement to pattern match the class of multiple values in python?
|
<p>I have a union type, and I can create a value for it like so:</p>
<pre><code>import random
class X:
s: str = 'ab'
MyType = int | X
def get_value() -> MyType:
if random.random() > 0.5:
return 3
return X()
a = get_value()
</code></pre>
<p>And I can use a match statement to pattern match on the class:</p>
<pre><code>match a:
case int():
print(a + 1)
case X():
print(a.s)
</code></pre>
<p>But I want to be able to match on multiple variables at the same time. The typical way to do this in other languages is with a tuple, but I'm not sure I'm doing this correctly:</p>
<pre><code>a = get_value()
b = get_value()
match a, b:
case (int(),int()):
print(a + 1) # <-- Operator "+" not supported for types "MyType" and "Literal[1]"
case (X(), X()):
print(a.s) # <-- Cannot access member "s" for type "int"
</code></pre>
<p>The code does run with <code>python3.12</code>, but the above errors are shown when I'm using the language server <code>pyright 1.1.282</code>. Is there a problem with my code? Is there a way to do this that avoids diagnostic errors in my editor?</p>
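<p>For reference, a minimal sketch of one possible workaround (an assumption on my side): binding new names with <code>as</code> capture patterns gives the type checker fresh variables to narrow, instead of the unnarrowed <code>a</code> and <code>b</code>:</p>
<pre><code>match a, b:
    case (int() as x, int() as y):
        print(x + 1)     # x and y are narrowed to int
    case (X() as x, X() as y):
        print(x.s)       # x and y are narrowed to X
</code></pre>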
|
<python><switch-statement><pattern-matching><union-types><pyright>
|
2022-12-13 12:57:17
| 1
| 905
|
Zantier
|
74,785,188
| 4,564,080
|
Pytorch complaining about input and label batch size mismatch
|
<p>I am using Huggingface to implement a BERT model using <code>BertForSequenceClassification.from_pretrained()</code>.</p>
<p>The model is trying to predict 1 of 24 classes. I am using a batch size of 32 and a sequence length of 66.</p>
<p>When I try to call the model in training, I get the following error:</p>
<pre class="lang-py prettyprint-override"><code>ValueError: Expected input batch_size (32) to match target batch_size (768).
</code></pre>
<p>However, my target shape is 32x24. It seems like somewhere when the model is called, this is being flattened to 768x1. Here is a test I ran to check:</p>
<pre class="lang-py prettyprint-override"><code>for i in train_dataloader:
i = tuple(t.to(device) for t in i)
print(i[0].shape, i[1].shape, i[2].shape) # here i[2].shape is (32, 24)
output = model(i[0], attention_mask=i[1], labels=i[2]) # here PyTorch complains that i[2]'s shape is now (768, 1)
print(output.logits.shape)
break
</code></pre>
<p>This outputs:</p>
<pre class="lang-py prettyprint-override"><code>torch.Size([32, 66]) torch.Size([32, 66]) torch.Size([32, 24])
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-68-c69db6168cc3> in <module>
2 i = tuple(t.to(device) for t in i)
3 print(i[0].shape, i[1].shape, i[2].shape)
----> 4 output = model(i[0], attention_mask=i[1], labels=i[2])
5 print(output.logits.shape)
6 break
4 frames
/usr/local/lib/python3.8/dist-packages/torch/nn/functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction, label_smoothing)
3024 if size_average is not None or reduce is not None:
3025 reduction = _Reduction.legacy_get_string(size_average, reduce)
-> 3026 return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)
3027
3028
ValueError: Expected input batch_size (32) to match target batch_size (768).
</code></pre>
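<p>For reference, a minimal sketch of one possible cause and fix (an assumption: the 32x24 target is a one-hot encoding). When the classification head is set up for 24 classes, the default cross-entropy loss expects class <em>indices</em> of shape <code>(batch_size,)</code>, so the one-hot matrix could be converted before the call:</p>
<pre class="lang-py prettyprint-override"><code>labels = i[2].argmax(dim=1)                       # (32, 24) one-hot -> (32,) class indices
output = model(i[0], attention_mask=i[1], labels=labels)
</code></pre>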
|
<python><pytorch><huggingface-transformers>
|
2022-12-13 12:53:33
| 1
| 4,635
|
KOB
|
74,785,155
| 14,058,726
|
How to slice a pandas DataFrame between two dates (day/month) ignoring the year?
|
<p>I want to filter a pandas DataFrame with a DatetimeIndex, for multiple years, between the 15th of April and the 16th of September. Afterwards I want to set a value on the masked rows.</p>
<p>I was hoping for a function similar to <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.between_time.html" rel="nofollow noreferrer"><code>between_time()</code></a>, but this doesn't exist.</p>
<p>My actual solution is a loop over the unique years.</p>
<h2>Minimal Example</h2>
<pre><code>import pandas as pd
df = pd.DataFrame({'target':0}, index=pd.date_range('2020-01-01', '2022-01-01', freq='H'))
start_date = "04-15"
end_date = "09-16"
for year in df.index.year.unique():
    # normal approach
    # df[f'{year}-{start_date}':f'{year}-{end_date}'] = 1
    # similar approach, slightly faster
df.iloc[df.index.get_loc(f'{year}-{start_date}'):df.index.get_loc(f'{year}-{end_date}')+1]=1
</code></pre>
<p>Does a solution exist where I can avoid the loop and maybe improve the performance?</p>
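<p>For reference, a minimal sketch of one loop-free possibility: build a month*100+day key from the index and mask on it (treating both boundaries as inclusive, which is an assumption):</p>
<pre><code>day_key = df.index.month * 100 + df.index.day        # e.g. 15 April -> 415
df.loc[(day_key >= 415) & (day_key <= 916), 'target'] = 1
</code></pre>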
|
<python><pandas><dataframe>
|
2022-12-13 12:51:31
| 1
| 6,392
|
mosc9575
|
74,785,131
| 9,333,987
|
Nested dictionary with key: list[key:value] pairs to dataframe
|
<p>I'm currently struggling with creating a dataframe based on a dictionary that is nested like <code>{key1:[{key:value},{key:value}, ...],key2:[{key:value},{key:value},...]}</code>
And I want this to go into a dataframe where the values of <code>key1 and key2</code> are the index, while the nested <code>key:value</code> pairs in the lists become the <code>column and record</code> values.</p>
<p>Now, for each <code>key1, key2, etc</code> the list of key:value pairs can differ in size. Example data:</p>
<pre><code>some_dict = {'0000297386FB11E2A2730050568F1BAB': [{'FILE_ID': '0000297386FB11E2A2730050568F1BAB'},
{'FileTime': '1362642335'},
{'Size': '1016439'},
{'DocType_Code': 'AF3BD580734A77068DD083389AD7FDAF'},
{'Filenr': 'F682B798EC9481FF031C4C12865AEB9A'},
{'DateRegistered': 'FAC4F7F9C3217645C518D5AE473DCB1E'},
{'TITLE': '2096158F036B0F8ACF6F766A9B61A58B'}],
'000031EA51DA11E397D30050568F1BAB': [{'FILE_ID': '000031EA51DA11E397D30050568F1BAB'},
{'FileTime': '1384948248'},
{'Size': '873514'},
{'DatePosted': '7C6BCB90AC45DA1ED6D1C376FC300E7B'},
{'DocType_Code': '28F404E9F3C394518AF2FD6A043D3A81'},
{'Filenr': '13A6A062672A88DE75C4D35917F3C415'},
{'DateRegistered': '8DD4262899F20DE45F09F22B3107B026'},
{'Comment': 'AE207D73C9DDB76E1EEAA9241VJGN02'},
{'TITLE': 'DF96336A6FE08E34C5A94F6A828B4B62'}]}
</code></pre>
<p>The final result should look like this:</p>
<pre><code>Index | File_ID | ... | DatePosted | ... | Comment | Title
0000297386FB11E2A2730050568F1BAB|0000297386FB11E2A2730050568F1BAB|...|NaN|...|NaN|2096158F036B0F8ACF6F766A9B61A58B
000031EA51DA11E397D30050568F1BAB|000031EA51DA11E397D30050568F1BAB|...|7C6BCB90AC45DA1ED6D1C376FC300E7B|...|AE207D73C9DDB76E1EEAA9241VJGN02|DF96336A6FE08E34C5A94F6A828B4B62
</code></pre>
<p>Now I've tried to parse the dict directly to pandas using a comprehension as suggested in <a href="https://stackoverflow.com/questions/19736080/creating-dataframe-from-a-dictionary-where-entries-have-different-lengths">Creating dataframe from a dictionary where entries have different lengths</a>, and tried to flatten the dict more and then parse it to pandas as in <a href="https://stackoverflow.com/questions/6027558/flatten-nested-dictionaries-compressing-keys">Flatten nested dictionaries, compressing keys</a>. Both to no avail.</p>
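<p>For reference, a minimal sketch of one possible approach: merge each list of single-pair dicts into one flat dict per key, then build the frame from that:</p>
<pre><code>records = {k: {kk: vv for d in lst for kk, vv in d.items()}
           for k, lst in some_dict.items()}
df = pd.DataFrame.from_dict(records, orient='index')   # missing keys become NaN
</code></pre>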
|
<python><pandas><dataframe>
|
2022-12-13 12:49:15
| 2
| 339
|
Wokkel
|
74,785,045
| 8,496,414
|
Best way to get a specific column as y in pandas DataFrame
|
<p>I want to extract one specific column as y from a pandas DataFrame.<br />
I found two ways to do this so far:</p>
<pre><code># The First way
y_df = df[specific_column]
y_array = np.array(y_df)
X_df = df.drop(columns=[specific_column])
X_array = np.array(X_df)
# The second way
features = ['some columns in my dataset']
y_df = np.array(df.loc[:, [specific_column]].values)
X_df = df.loc[:, features].values
</code></pre>
<p>But when I compare the values in each y array, I see they are not equal:</p>
<pre><code>y[:4]==y_array[:4]
array([[ True, True, False, False],
[ True, True, False, False],
[False, False, True, True],
[False, False, True, True]])
</code></pre>
<p>But I am sure that these two arrays contain the same elements:</p>
<pre><code>y[:4], y_array[:4]
(array([[0],
[0],
[1],
[1]], dtype=int64),
array([0, 0, 1, 1], dtype=int64))
</code></pre>
<p>So, why do I see False values when I compare them together?</p>
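<p>For what it's worth, a minimal sketch of what is probably happening (an assumption based on the shapes above): one array has shape <code>(n, 1)</code> and the other <code>(n,)</code>, so <code>==</code> broadcasts them to an <code>(n, n)</code> matrix instead of comparing element-wise; flattening one of them restores the expected comparison:</p>
<pre><code>print(y.shape, y_array.shape)      # e.g. (n, 1) vs (n,)
print(y.ravel() == y_array)        # element-wise comparison after flattening
</code></pre>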
|
<python><pandas><feature-extraction>
|
2022-12-13 12:42:37
| 1
| 537
|
Sara
|
74,785,000
| 10,938,315
|
regex split on uppercase, but ignore titlecase
|
<p>How can I split <code>This Is ABC Title</code> into <code>This Is, ABC, Title</code> in Python? If I use <code>[A-Z]</code> as the regex expression it will be split into <code>This, Is, ABC, Title</code>. I do not want to split on whitespace.</p>
|
<python><regex>
|
2022-12-13 12:39:12
| 1
| 881
|
Omega
|
74,784,836
| 7,713,770
|
How to calculate the difference between dictionary and list values?
|
<p>I currently have a <code>dictionary</code> and a <code>list</code>, and I would like to subtract the list values from the respective dictionary values (based on index position). This is what I have tried so far:</p>
<pre class="lang-py prettyprint-override"><code>dict1 = {
"appel": 3962.00,
"waspeen": 5018.75,
"ananas": 3488.16
}
number_fruit = [
3962,
5018.74,
3488.17
]
dictionary_values = list(number_fruit)
list_values = list(dict1.values())
are_values_equal = dictionary_values == list_values
</code></pre>
<p>But this only returns a single <code>True</code> or <code>False</code> telling me whether there is a difference.</p>
<p>What I want to achieve is to return the difference between the <code>number_fruit</code> list and the <code>dict1</code> dictionary, but only where the difference is not equal to zero.</p>
<p>Here is the expected output:</p>
<pre class="lang-py prettyprint-override"><code>waspeen : 0.01
ananas : -0.01
</code></pre>
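<p>For reference, a minimal sketch of one possible approach (assuming the list is ordered the same way as the dictionary, and rounding to avoid floating-point noise):</p>
<pre class="lang-py prettyprint-override"><code>for (name, dict_value), list_value in zip(dict1.items(), number_fruit):
    diff = round(dict_value - list_value, 2)
    if diff != 0:
        print(f"{name} : {diff}")
</code></pre>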
|
<python>
|
2022-12-13 12:25:28
| 2
| 3,991
|
mightycode Newton
|
74,784,774
| 3,416,725
|
How to sum specific row values together in Sparse COO matrix to reshape matrix
|
<p>I have a sparse coo matrix built in python using the scipy library. An example data set looks something like this:</p>
<pre><code>>>> v.toarray()
array([[1, 0, 2, 4],
[0, 0, 3, 1],
[4, 5, 6, 9]])
</code></pre>
<p>I would like to add column 0 and column 2 together, and column 1 and column 3 together, so the shape would change from (3, 4) to (3, 2).</p>
<p>However, looking at the docs, their <a href="https://numpy.org/doc/stable/reference/generated/numpy.matrix.sum.html" rel="nofollow noreferrer">sum function</a> doesn't support that kind of slicing. So the only way I have thought of to do something like that would be to loop over the matrix as an array and then use numpy to get the summed values, like so:</p>
<pre><code>a_col = []
b_col = []
for x in range(len(v.toarray())):
a_col.append(np.sum(v.toarray()[x, [0, 2]], axis=0))
b_col.append(np.sum(v.toarray()[x, [1, 3]], axis=0))
</code></pre>
<p>Then use those values for <code>a_col</code> and <code>b_col</code> to create the matrix again.</p>
<p>But surely there should be a way to handle it with the <code>sum</code> method?</p>
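<p>For reference, a minimal sketch of one possible approach that stays sparse: multiply by an indicator matrix whose columns select which input columns are summed (the 0+2 / 1+3 grouping is taken from the question):</p>
<pre><code>from scipy import sparse

group = sparse.csr_matrix([[1, 0],
                           [0, 1],
                           [1, 0],
                           [0, 1]])
result = v @ group    # shape (3, 2): col 0 = cols 0+2, col 1 = cols 1+3
print(result.toarray())
</code></pre>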
|
<python><numpy><scipy><sparse-matrix>
|
2022-12-13 12:20:40
| 2
| 493
|
mp252
|
74,784,575
| 17,277,677
|
summing by keywords and by groups in pandas
|
<p>I have a following problem:</p>
<p>a dataframe with keywords and groups:</p>
<p><a href="https://i.sstatic.net/xKaxR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xKaxR.png" alt="enter image description here" /></a></p>
<p>my task is to look for these keywords in another dataframe in the given description and calculate the occurrences in those descriptions - by adding columns. <- I've done that part, using the code:</p>
<pre><code>keywords = df["keyword"].to_list()
for key in keywords:
new_df[key] = new_df["description"].str.lower().str.count(key)
</code></pre>
<p>and the result df is:</p>
<p><a href="https://i.sstatic.net/jacJx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jacJx.png" alt="enter image description here" /></a></p>
<p>I would like to sum them by the group given in the keywords dataframe.
So all the columns within one group (gas/gases/gasoline) are merged into a single column <strong>gas</strong> and their results summed.</p>
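<p>For reference, a minimal sketch of one possible approach (assuming the keywords dataframe has columns named <code>keyword</code> and <code>group</code>, which is a guess from the screenshot):</p>
<pre><code>mapping = dict(zip(df["keyword"], df["group"]))
grouped = (new_df[keywords]
           .rename(columns=mapping)    # keyword columns -> group names
           .T.groupby(level=0).sum()   # sum columns sharing the same group
           .T)
result = pd.concat([new_df.drop(columns=keywords), grouped], axis=1)
</code></pre>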
|
<python><pandas><dataframe>
|
2022-12-13 12:01:57
| 1
| 313
|
Kas
|
74,784,338
| 5,024,631
|
Pandas new column with groupby correlation results
|
<p>Here's my dataframe:</p>
<pre><code>sample_df = pd.DataFrame({'id': ['A', 'A', 'A', 'A', 'B', 'B', 'B', 'B'],
'values_1':[2,4,6,8,12,13,13,17],
'values_2':[3,6,7,9,3,2,2,5]})
</code></pre>
<p>I would like to add a new column with some summary statistics such as the correlation coefficient between values_1 and values_2, within groups.</p>
<p>For the mean this is clear, either of these work:</p>
<pre><code>sample_df['overall_mean'] = sample_df.groupby('id')['values_1'].transform('mean')
sample_df['overall_mean'] = sample_df.groupby('id')['values_1'].mean()
</code></pre>
<p>And so I would assume from that, that to get the correlations, something like this would work:</p>
<pre><code>sample_df['overall_cor'] = sample_df.groupby('id')['values_1','values_2'].transform('corr')
sample_df['overall_cor'] = sample_df.groupby('id')['values_1','values_2'].corr()
</code></pre>
<p>But neither works. Does anyone have a solution to this, and can anyone elaborate upon why the same approaches that return the mean don't work for the correlation coefficient?</p>
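<p>For reference, a minimal sketch of one possible approach (an assumption on my side): <code>corr</code> between two specific columns yields a single scalar per group rather than a transformed column, so it can be computed with <code>apply</code> and mapped back:</p>
<pre><code>per_group = sample_df.groupby('id').apply(lambda g: g['values_1'].corr(g['values_2']))
sample_df['overall_cor'] = sample_df['id'].map(per_group)
</code></pre>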
|
<python><pandas><group-by><correlation>
|
2022-12-13 11:44:51
| 2
| 2,783
|
pd441
|
74,784,295
| 9,182,743
|
Install package using setup.py
|
<p>I want to create and install my own packages, so I can import functionality.py into script.py and into other functions in the program. Following on from <a href="https://www.datasciencelearner.com/importerror-attempted-relative-import-parent-package/" rel="nofollow noreferrer">these instructions</a>, I have:</p>
<h2>Project structure</h2>
<pre class="lang-py prettyprint-override"><code>/SRC
/package_one
__init__.py
script.py #Note: script.py imports from functionality.py
/package_two
__init__.py
functionality.py
setup.py
</code></pre>
<h2>script.py</h2>
<pre class="lang-py prettyprint-override"><code>from package_two import functionality
functionality.execute()
</code></pre>
<h2>functionality.py</h2>
<pre class="lang-py prettyprint-override"><code>def execute():
print ("Running functinality")
</code></pre>
<h2>setup.py</h2>
<pre class="lang-py prettyprint-override"><code>from setuptools import setup, find_packages
setup(name = 'pckage_two', packages = find_packages())
</code></pre>
<h2>The problem</h2>
<p>However, when i run:</p>
<pre class="lang-py prettyprint-override"><code>C:\Users\XXXX\XXXX\src> python setup.py install
</code></pre>
<p>from Terminal (in VS code with Anaconda)
I get the following error:</p>
<pre><code>the process cannot access the file because it is being used by another process: 'c:\\users\\XXXX\\anaconda3\\lib\\site-packages\\pckage_two-0.0.0-py3.9.egg'
</code></pre>
<h3>other infomation</h3>
<p>I am using Anaconda and VS Code; I have run <code>python setup.py</code> both from the VS Code terminal and the Anaconda terminal.</p>
|
<python><import><setup.py>
|
2022-12-13 11:41:09
| 1
| 1,168
|
Leo
|
74,784,277
| 4,720,018
|
How to impose constraints on sub-dependencies in setuptools?
|
<h3>Background</h3>
<p>Suppose my package depends on the <code>foo</code> package, and <code>foo</code>, in turn, depends on the <code>bar</code> package, specifically <code>bar>=1.0.0</code>.</p>
<p>In other words, <code>bar</code> is a sub-dependency for my package.</p>
<p>Following <a href="https://packaging.python.org/en/latest/discussions/install-requires-vs-requirements/" rel="nofollow noreferrer">best practice</a>, my <code>pyproject.toml</code> (or <code>setup.cfg</code>) specifies only <em><strong>direct</strong></em> dependencies, <em><strong>no</strong></em> sub-dependencies. For example:</p>
<pre class="lang-ini prettyprint-override"><code>[project]
# ...
dependencies = [
"foo>=2.0",
]
</code></pre>
<h3>Problem</h3>
<p>Now suppose <code>bar</code> version <code>1.0.1</code> has been released in order to patch a security vulnerability, but <code>foo</code> has not picked up on this yet (it still uses <code>bar>=1.0.0</code>).</p>
<p>Instead of waiting for <code>foo</code> to catch up, I want to make sure my package enforces the use of <code>bar>=1.0.1</code>.</p>
<h3>Current solution</h3>
<p>The obvious solution would be to add a constraint to my <code>pyproject.toml</code> as follows:</p>
<pre class="lang-ini prettyprint-override"><code>[project]
# ...
dependencies = [
"foo>=2.0",
# constraint for sub-dependency
"bar>=1.0.1",
]
</code></pre>
<p>This works, but now it looks like my project depends directly on <code>bar</code>, whereas <code>bar</code> is only a sub-dependency: I don't import anything directly from <code>bar</code>, and if <code>foo</code> ever decides not to use <code>bar</code> anymore, I won't need it either.</p>
<h3>Question</h3>
<p>Is there a better way to specify a constraint for a sub-dependency using <a href="https://setuptools.pypa.io/en/latest/userguide/dependency_management.html" rel="nofollow noreferrer">setuptools</a>?</p>
|
<python><dependencies><setuptools>
|
2022-12-13 11:39:43
| 1
| 14,749
|
djvg
|
74,784,248
| 12,811,183
|
List comprehension with pattern match in Python
|
<p>I have one list named <em>columns</em>, and I have to create one nested list based on grouping the elements by their first three words.</p>
<p>For example, I will reduce the element '101 Drive 1 A' to the prefix '101 Drive 1' and group on that.</p>
<pre><code>columns = ['101 Drive 1 A','101 Drive 1 B','102 Drive 2 A','102 Drive 2 B','102 Drive 2 C','103 Drive 1 A']
</code></pre>
<p>The output will look like this:</p>
<pre><code>[
['101 Drive 1 A', '101 Drive 1 B'],
['102 Drive 2 A', '102 Drive 2 B', '102 Drive 2 C'],
['103 Drive 1 A']
]
</code></pre>
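<p>For reference, a minimal sketch of one possible approach (assuming elements sharing a prefix are always adjacent, as in the example):</p>
<pre><code>from itertools import groupby

prefix = lambda s: s.rsplit(' ', 1)[0]          # '101 Drive 1 A' -> '101 Drive 1'
result = [list(g) for _, g in groupby(columns, key=prefix)]
</code></pre>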
|
<python><list-comprehension>
|
2022-12-13 11:37:42
| 2
| 521
|
sizo_abe
|
74,784,126
| 848,746
|
List comprehension on a list of dictionaries
|
<p>I have a list of dictionaries like so:</p>
<pre><code>[{'end': 34,
'entity_group': 'ORG',
'score': 0.99919325,
'start': 0,
'word': ' Community College Alabama'},
{'end': 54,
'entity_group': 'LOC',
'score': 0.90115756,
'start': 42,
'word': ' Maxwell Blvd'},
{'end': 66,
'entity_group': 'LOC',
'score': 0.9890175,
'start': 56,
'word': ' Montgomery'},
{'end': 70,
'entity_group': 'LOC',
'score': 0.9988833,
'start': 68,
'word': ' AL'}]
</code></pre>
<p>I would like to extract values of <code>word</code>, but only the ones where <code>'entity_group': 'LOC'</code>. So for the above example, that would be:</p>
<pre class="lang-py prettyprint-override"><code>[' Maxwell Blvd', ' Montgomery', ' AL']
</code></pre>
<p>I have tried to do this:</p>
<pre class="lang-py prettyprint-override"><code>[[item for item in d.items()] for d in a]
</code></pre>
<p>... but this does not yield what I want.</p>
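<p>For reference, a minimal sketch of one possible comprehension (assuming the list of dicts is named <code>data</code>):</p>
<pre class="lang-py prettyprint-override"><code>locations = [d['word'] for d in data if d['entity_group'] == 'LOC']
</code></pre>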
|
<python>
|
2022-12-13 11:28:44
| 3
| 5,913
|
AJW
|
74,783,807
| 1,892,584
|
Making tqdm write to log files
|
<p><a href="https://github.com/tqdm/tqdm" rel="noreferrer">tqdm</a> is a nice python library to keep track of progress through an iterable.</p>
<p>Its default mode of operation is to repeatedly clear a line and <a href="https://github.com/tqdm/tqdm/issues/750" rel="noreferrer">redraw with a carriage return</a>, but this produces quite nasty output when combined with logging. Is there a way I can get it to write to log files periodically rather than using this print-style output?</p>
<p>This is the best I've got is my own hacky implementation:</p>
<pre class="lang-py prettyprint-override"><code>def my_tqdm(iterable):
"Like tqdm but us logging. Include estimated time and time taken."
start = time.time()
for i, item in enumerate(iterable):
elapsed = time.time() - start
rate = elapsed / (i + 1)
estimated = rate * len(iterable) - elapsed
num_items = len(iterable)
LOGGER.info(
"Processed %d of %d items (%.1f%%) in %.1fs (%.1fs remaining, %.1f s/item)",
i,
num_items,
i / num_items * 100,
elapsed,
estimated,
rate,
)
yield item
</code></pre>
<p>But it'd be better if I could do this with tqdm itself so that people don't moan at me in code reviews.</p>
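<p>For what it's worth, a minimal sketch of one possible approach using tqdm's <code>file=</code> argument (the wrapper class is my own assumption, not part of tqdm):</p>
<pre class="lang-py prettyprint-override"><code>import io
from tqdm import tqdm

class TqdmToLogger(io.StringIO):
    """File-like object that forwards tqdm's status lines to the logger."""
    def write(self, buf):
        line = buf.strip("\r\n\t ")
        if line:
            LOGGER.info(line)

# emit at most one progress line per minute into the log
for item in tqdm(iterable, file=TqdmToLogger(), mininterval=60):
    ...
</code></pre>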
|
<python><logging><tqdm><python-logging>
|
2022-12-13 11:03:20
| 2
| 1,947
|
Att Righ
|
74,783,571
| 16,389,095
|
How to find all occurrences of a substring in a numpy string array
|
<p>I'm trying to find all occurrences of a substring in a numpy string array. Let's say:</p>
<pre><code>myArray = np.array(['Time', 'utc_sec', 'UTC_day', 'Utc_Hour'])
sub = 'utc'
</code></pre>
<p>It should be case insensitive, so the method should return [1,2,3].</p>
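<p>For reference, a minimal sketch of one vectorized possibility using <code>numpy.char</code>:</p>
<pre><code>idx = np.flatnonzero(np.char.find(np.char.lower(myArray), sub.lower()) >= 0)
# array([1, 2, 3])
</code></pre>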
|
<python><arrays><string><numpy>
|
2022-12-13 10:45:03
| 3
| 421
|
eljamba
|
74,783,412
| 6,212,718
|
Python streamlit - subprocess
|
<p>I have created a streamlit app which runs fine on my local machine. However, I cannot get it running on the streamlit-cloud. In a nutshell my app does the following:</p>
<ol>
<li>take some user input</li>
<li>create a markdown file from the input ("deck.md")</li>
<li>convert markdown file to html file ("deck.html") using a <code>npx</code> command via <code>subprocess</code></li>
<li>opens a new tab showing the html file via <code>subprocess</code></li>
</ol>
<p>On my local machine I use the following command to do steps 3 and 4:</p>
<pre class="lang-py prettyprint-override"><code>import subprocess
def markdown2marp(file):
# Create HTML
marp_it = f"npx @marp-team/marp-cli@latest --theme custom.css --html {file}"
file = file.split(".")[0] # remove .md
proc = subprocess.run([marp_it], shell=True, stdout=subprocess.PIPE)
# Open HTML in Browser
subprocess.Popen(['open', f'{file}.html'])
return proc
</code></pre>
<p>Now on the streamlit cloud this is not working obviously.</p>
<p><strong>Question</strong>: is there a workaround to achieve the described functionality in the streamlit-cloud.</p>
<p>Any help is much appreciated!</p>
|
<python><subprocess><streamlit><npx>
|
2022-12-13 10:31:33
| 0
| 1,489
|
FredMaster
|
74,783,366
| 142,234
|
Python multiple modules in the same root folder import error
|
<p>I'm trying to organise my python project with the following structure:</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code><ul id="myUL">
<li><span>RootFolder</span>
<ul>
<li><span>module1(folder)</span>
<ul>
<li><span>submodule1(folder)</span>
<ul>
<li><span>pyfile1.py</span>
<ul>
<li>function test1()</li>
</ul>
</li>
</ul>
<li><span>main.py</span>
</li>
</li>
</ul>
</li>
<li><span>module2(folder)</span>
<ul>
<li><span>submodule2(folder)</span>
<ul>
<li><span>pyfile2.py</span>
<ul>
<li>function test2()</li>
</ul>
</li>
</ul>
<li><span>main.py</span>
</li>
</li>
</ul>
</li>
</ul>
</li>
</ul></code></pre>
</div>
</div>
</p>
<p>Two separate modules live in the same RootFolder.
This is the content of the module1/main.py file:</p>
<pre><code>import module1.submodule1 as sm
if __name__ == '__main__':
sm.pyfile1.test1()
</code></pre>
<p>But when I'm running <code>python -m module1.main</code> I have the following error</p>
<blockquote>
<p>module 'module1.submodule1' has no attribute 'pyfile1'</p>
</blockquote>
<p>My goal is to have multiple modules with their own main entry points under the same rootFolder.
What is wrong? I don't understand why this is not correct.</p>
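<p>For what it's worth, a minimal sketch of what I believe is going on (an assumption on my side): importing the package <code>module1.submodule1</code> does not automatically import the module <code>pyfile1</code> inside it, so either the submodule's <code>__init__.py</code> has to import it, or main.py can import it explicitly:</p>
<pre><code># module1/main.py
from module1.submodule1 import pyfile1

if __name__ == '__main__':
    pyfile1.test1()
</code></pre>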
|
<python><python-module>
|
2022-12-13 10:28:23
| 0
| 13,915
|
bAN
|
74,783,300
| 125,673
|
Python Pandas: Using corrwith and getting "outputs are collapsed"
|
<p>I want to find out how a column of data in a matrix correlates with the other columns in the matrix.
The data looks like;
<a href="https://i.sstatic.net/CickG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CickG.png" alt="enter image description here" /></a>
I use the following code;</p>
<pre><code>selected_product = "5002.0"
df_single_product = df_recommender[selected_product]
df_similar_to_selected_product=df_recommender.corrwith(df_single_product)
df_similar_to_selected_product.head()
</code></pre>
<p>The output from the head command is not produced. Instead I get a message saying "Outputs are collapsed". Why is this happening? Is this an error I can trap, or is the code wrong?
Maybe there are too many rows? I am using Visual Studio Code.</p>
|
<python><pandas><visual-studio-code><jupyter>
|
2022-12-13 10:22:22
| 1
| 10,241
|
arame3333
|
74,783,236
| 10,581,944
|
Is there a way to compare two dataframes and report which column is different in Pyspark?
|
<p>I'm using <code>df1.subtract(df2).rdd.isEmpty()</code> to compare two dataframes (assuming the schemas of these two df are the same, or at least we expect them to be the same), but if one of the columns doesn't match, I can't tell from the output logs, and it takes a long time for me to find the issue in the data (and it's exhausting when the datasets are quite big).</p>
<p>Is there a way that we can compare two df and return which column doesn't match with Pyspark? Thanks a lot.</p>
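<p>For illustration, a minimal sketch of one possible column-by-column check; it compares each column as a multiset (rows present in df1 but not in df2), which may or may not be exactly what you need, so treat it as an assumption:</p>
<pre><code>for c in df1.columns:
    if df1.select(c).exceptAll(df2.select(c)).take(1):
        print(f"Column {c} differs between the two dataframes")
</code></pre>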
|
<python><dataframe><apache-spark><pyspark><apache-spark-sql>
|
2022-12-13 10:16:50
| 1
| 3,433
|
wawawa
|
74,783,147
| 20,279,779
|
Count the frequency of users occuring in a list using python
|
<p>I have a table in postgresql database. From that table I have extracted data using the sql statement mentioned below:</p>
<pre><code>sql_statement = """ select a.slno, a.clientid, a.filename, a.user1_id, b.username, a.user2_id, c.username as username2, a.uploaded_ts, a.status_id
from masterdb.xmlform_joblist a
left outer join masterdb.auth_user b
on a.user1_id = b.id
left outer join masterdb.auth_user c
on a.user2_id = c.id
"""
cursor.execute(sql_statement)
result = cursor.fetchall()
</code></pre>
<p>From this code I accessed the database, extracted data from each field, and added the data into lists using the code below:</p>
<pre><code>date = []
username1 = []
username2 = []
user1_id = []
user2_id = []
status_id = []
cient_id = []
filename = []
#extracting specific data from specified fields in the database
for row in result:
date.append(row[7])
username1.append(row[4])
username2.append(row[6])
status_id.append(row[8])
cient_id.append(row[1])
filename.append(row[2])
#creating log file for the extracted fields
logger.info("Date | {} , username1 | {} , username2 | {} , status_id | {} , client_id | {} , filename | {} ".format(row[7], row[4], row[6], row[8], row[1], row[2]))
</code></pre>
<p>Now I want to check the frequency of usernames from the data I have collected.</p>
<p>username1 looks like:</p>
<pre><code>username1 = ['Dean','Sarah','Sarah','Alan','Dean',......'Alan']
</code></pre>
<p>I want to know the count of the users in the list
Expected result:</p>
<pre><code>Dean = 10
Sarah = 6
Alan = 2
</code></pre>
<p>Is it possible to achieve this result using Python?
I have tried pandas, but it isn't working since my data comes from a PostgreSQL database.</p>
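<p>For reference, a minimal sketch of one possible approach with the standard library:</p>
<pre><code>from collections import Counter

for name, count in Counter(username1).most_common():
    print(f"{name} = {count}")
</code></pre>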
|
<python><postgresql>
|
2022-12-13 10:10:58
| 2
| 343
|
AstroInTheOcean
|
74,783,117
| 7,295,169
|
How to translate C double two-dim data using python ctypes?
|
<p>I am trying to use Python <code>ctypes</code> to call a DLL and translate its <code>api</code> into Python code. But I have come across a function that takes a two-dimensional array and I don't know how to translate it; I have forgotten my <code>C++&C</code>. <code>TCHAR</code> is a type like <code>char *p</code>.</p>
<h4>Shown in the red rectangle.</h4>
<p><a href="https://i.sstatic.net/xhOR1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xhOR1.png" alt="enter image description here" /></a></p>
|
<python><ctypes>
|
2022-12-13 10:09:35
| 1
| 1,193
|
jett chen
|
74,783,071
| 5,376,493
|
Reading binary file to find a sequences of ints little endian (permutations)
|
<p>I am trying to read a binary file (firmware) looking for sequences like</p>
<pre><code>\x01\x00\x00\x00\x03\x00\x00\x00\x02\x00\x00\x00\x04\x00\x00\x00
</code></pre>
<p>Little endian integer 1,3,2,4</p>
<p>Attempt:</p>
<pre><code>with open("firm.bin", 'rb') as f:
s = f.read()
N = 16
allowed = set(range(4))
for val in allowed:
val = bytes(val)+b'\x00\x00\x00'
for index, b in enumerate(s):
print(b)
i = b.hex()
b= b'\x00\x00\x00'+bytes(bytes.fromhex(f'{i:x}'))
if b in allowed and set(s[index:index + N]) == allowed:
print(f'Found sequence {s[index:index + N]} at offset {index}')
</code></pre>
<p>The above does not seem to work; it fails with this error:</p>
<pre><code>ValueError: Unknown format code 'x' for object of type 'str'
</code></pre>
<p>Why?</p>
<p>The problem I am trying to solve:</p>
<p>How can I find, in a binary file, sequences like this consisting of 16 little-endian ints with values from 0 to 15, i.e.</p>
<pre><code>[0-15,0-15,0-15,0-15,0-15,0-15,0-15,0-15,0-15,0-15,0-15,0-15,0-15,0-15,0-15,0-15]
</code></pre>
<p><strong>Update 1:</strong></p>
<p>I tried the proposed answer, but it finds no results where it should:</p>
<pre><code>import numpy as np
import sys
# Synthesize firmware with 100 integers, all equal to 1
#firmware = np.full(100, 1, dtype=np.uint32)
#firmware = np.fromfile('firm.ori', dtype='uint32')
a1D = np.array([1, 2, 3, 4, 6, 5, 7, 8, 10, 9, 11, 13, 12, 14, 15, 0],dtype='uint32')
print(a1D)
r = np.convolve(a1D, [1]*16, mode='same')[8:-8]
np.set_printoptions(threshold=sys.maxsize)
print(r)
r = np.where(r < (16*15))
print(r)
print(a1D[r])
</code></pre>
<p>Ideally it should report offset 0, but printing the values would also be fine, i.e.:</p>
<pre><code>[ 1 2 3 4 6 5 7 8 10 9 11 13 12 14 15 0]
</code></pre>
<p>Now it outputs:</p>
<pre><code>[ 1 2 3 4 6 5 7 8 10 9 11 13 12 14 15 0]
[]
(array([], dtype=int64),)
[]
</code></pre>
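<p>For illustration, a minimal sketch of one possible way to scan for such a window, with <code>s</code> being the bytes read in the first snippet (it assumes the sequence is aligned to a 4-byte boundary in the file, which may not hold):</p>
<pre><code>import numpy as np

data = np.frombuffer(s[:len(s) // 4 * 4], dtype='<u4')   # little-endian uint32 view
for offset in range(len(data) - 15):
    window = data[offset:offset + 16]
    # 16 distinct values, all below 16 -> a permutation of 0..15
    if window.max() < 16 and len(set(window.tolist())) == 16:
        print(f'Found sequence {window.tolist()} at byte offset {offset * 4}')
</code></pre>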
|
<python>
|
2022-12-13 10:06:17
| 3
| 1,189
|
dev
|
74,783,066
| 1,070,092
|
Python/PySide6: Apply different style to sub class
|
<p>I can't figure out the solution to this problem.
The style of the subclass is identical to the base class:</p>
<pre><code>import sys
from PySide6.QtGui import *
from PySide6.QtWidgets import *
from PySide6.QtCore import *
class MyLabel(QLabel):
def __init__(self, pText: str) -> None:
super().__init__()
self.setText(pText)
class MainWindow(QWidget):
def __init__(self):
super(MainWindow, self).__init__()
self.setFixedWidth(200)
self.setFixedHeight(200)
layout = QVBoxLayout(self)
self.widget = QLabel("QLabel")
layout.addWidget(self.widget)
self.mywidget = MyLabel("MyLabel")
layout.addWidget(self.mywidget)
self.setLayout(layout)
stylesheet = ("""
MyLabel {
border: 20px solid black;
}
QLabel {
border: 20px solid red;
}
"""
)
self.setStyleSheet(stylesheet)
if __name__ == "__main__":
app = QApplication(sys.argv)
main = MainWindow()
main.show()
sys.exit(app.exec())
</code></pre>
<p>The expectation is to have the inherited label in a different color.
E.g. MyLabel should be black and QLabel should be red.
Thanks for the help</p>
|
<python><pyside6>
|
2022-12-13 10:05:59
| 1
| 345
|
Vik
|
74,783,029
| 10,992,997
|
avoid blank line when reading names from .txt into python list
|
<p>I'm trying read a list of names from a .txt file into python so I can work with it.</p>
<pre><code>humans = ['Barry', 'Finn', 'John', 'Jacob', '', 'George', 'Ringo', '']
with open(p_to_folder / 'humans.txt', 'w') as f:#using pathlib for my paths
for h in humans:
f.write(f'{h}\n')
</code></pre>
<p>What I'd like to do is read the .txt file back in so that I can work with names again, but leave out the blanks.</p>
<p>I have tried</p>
<pre><code>with open(p_to_folder / 'humans.txt', 'r') as f:#using pathlib for my paths
new_humans = [line.rstrip() for line in f.readlines() if line != '']
</code></pre>
<p>When I inspect this list I get</p>
<pre><code>['Barry', 'Finn', 'John', 'Jacob', '', 'George', 'Ringo', '']
</code></pre>
<p>I can just run the line</p>
<pre><code>new_humans = [i for i in new_humans if i != '']
</code></pre>
<p>and get</p>
<pre><code>['Barry', 'Finn', 'John', 'Jacob', 'George', 'Ringo']
</code></pre>
<p>but I'd like to be able to do it in one line, and also to understand why my if statement isn't working in the original list comprehension.</p>
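<p>For what it's worth, a minimal sketch of why the comprehension keeps the blanks: each line from <code>readlines()</code> still ends in <code>'\n'</code>, so a blank line is <code>'\n'</code> rather than <code>''</code>; stripping before the test handles it in one line:</p>
<pre><code>with open(p_to_folder / 'humans.txt', 'r') as f:
    new_humans = [line.rstrip() for line in f if line.strip()]
</code></pre>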
<p>Thank you</p>
|
<python><if-statement><list-comprehension>
|
2022-12-13 10:03:05
| 2
| 581
|
KevOMalley743
|
74,782,990
| 1,139,541
|
Patch method in a package that is shadowed in __init__.py
|
<p>I have a folder structure like this:</p>
<pre><code>├─ some_module/
│ ├─ __init__.py
│ ├─ module.py
├─ main.py
</code></pre>
<p>With the sub-module being shadowed by its content:</p>
<pre><code># __init__.py
from .module import module
__all__ = ["module"]
</code></pre>
<pre><code># module.py
def module_b():
print("In module_b")
def module():
print("In module")
module_b()
</code></pre>
<pre><code># main.py
from some_module.module import module
module()
</code></pre>
<p>The output of <code>main.py</code>:</p>
<pre><code>In module
In module_b
</code></pre>
<p>I want to patch the method <code>module_b()</code> from <code>main.py</code> with the method <code>lambda: print("other module")</code> (leaving the content of <code>some_module</code> folder as it is). Expected output after running <code>main.py</code>:</p>
<pre><code>In module
other module
</code></pre>
<p>How can this be done?</p>
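<p>For reference, a minimal sketch of one possible approach with <code>unittest.mock</code>: patch the name in the module where it is looked up (<code>some_module.module.module_b</code>):</p>
<pre><code># main.py
from unittest import mock
from some_module.module import module

with mock.patch("some_module.module.module_b", lambda: print("other module")):
    module()
</code></pre>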
|
<python>
|
2022-12-13 10:00:37
| 1
| 852
|
Ilya
|
74,782,988
| 12,777,005
|
Why are half my uniforms missing when initializing a compute shader using python modernGL?
|
<p>Here is the problem: I'm creating a compute shader in python for radiative transfer using modernGL. But upon creation, it does not contain all the uniforms I declare in the shader code.
Here is my (simplified) python code:</p>
<pre><code># Creation of the context
context = mgl.create_standalone_context(require=440)
# Creation of the compute shader with the source code passed as a string stored in a separate file
compute_shader = context.compute_shader(src_code)
# I set the uniform values stored in a dictionary defined previously. The keys of the dictionary match the uniforms defined in the shader source code.
for key, value in uniforms.items():
compute_shader.get(key, default=None).write(value)
</code></pre>
<p>Here is the head of the shader source file 'src_code':</p>
<pre><code>#version 440
uniform float instrument_azimut;
uniform float instrument_elevation;
uniform float instrument_altitude;
uniform float instrument_lon;
uniform float instrument_lat;
uniform float instrument_area;
uniform float instrument_fov;
uniform float wavelength;
uniform int los_nb_points;
uniform float map_delta_az;
uniform bool is_point_source;
uniform int atm_nb_altitudes;
uniform int atm_nb_angles;
uniform bool use_aerosol;
// Iterable uniforms
uniform float instrument_los[3];
uniform float los_altitudes[100];
uniform float atm_altitudes[121];
uniform float map_azimuts[10];
uniform float map_distances[10];
uniform float los_volumes[100];
uniform float atm_total_abs[121];
uniform float atm_ray_beta[121];
uniform float atm_aer_beta[121];
uniform float atm_angles[100];
uniform float atm_aer_Pfct[121];
uniform float atm_aer_Pfct_DoLP[121];
uniform float los_transmittance[100];
#define X 1
#define Y 1
#define Z 100
// Set up our compute groups
layout(local_size_x = X, local_size_y = Y, local_size_z = Z) in;
// Define constant values
const float EARTH_RADIUS = 6371.; //km
const float PI = 3.14159;
const float DtoR = PI / 180;
const float RtoD = 1. / DtoR;
// Stokes paramter buffers
layout(std430, binding=0) buffer in_0{
float in_V[];};
layout(std430, binding=1) buffer in_1{
float in_Vcos[];};
layout(std430, binding=2) buffer in_2{
float in_Vsin[];};
layout(std430, binding=3) buffer out_3{
float out_V[];};
layout(std430, binding=4) buffer out_4{
float out_Vcos[];};
layout(std430, binding=5) buffer out_5{
float out_Vsin[];};
//...All useful functions...
</code></pre>
<p>When I try to list the uniforms contained in the compute shader object, almost half are missing! The following loop</p>
<pre><code>for u in compute_shader:
print(u)
</code></pre>
<p>outputs:</p>
<pre><code>instrument_elevation
instrument_altitude
instrument_area
map_delta_az
is_point_source
atm_nb_altitudes
atm_nb_angles
use_aerosol
los_altitudes
atm_altitudes
map_azimuts
map_distances
los_volumes
atm_total_abs
atm_ray_beta
atm_aer_beta
atm_angles
atm_aer_Pfct
atm_aer_Pfct_DoLP
los_transmittance
</code></pre>
<p>So when I try to write the value of the uniform "instrument_lon" (float) or "instrument_los" (float array), for example, it doesn't exist in the compute shader object. I'm lost here...
I thought about a format issue, so I tried to explicitly set the type of the uniform values in the dictionary. I also tried to declare them in a different order, but it is always the same ones that are missing. And I can't see any difference from the ones that are initialized correctly...
Any help is very welcome!</p>
|
<python><glsl><compute-shader><python-moderngl>
|
2022-12-13 10:00:28
| 1
| 325
|
Azireo
|
74,782,925
| 5,191,553
|
select color attributes via python in blender
|
<p>Is it possible to select color attributes in Blender with Python? The idea is to do the same as clicking on the vertex color in the viewport. The goal is to make the colors visible in the viewport.</p>
<p>My current approach looks like this:</p>
<pre><code># accessing color attributes
test_1 = bpy.data.meshes['Cube'].color_attributes['test_1']
test_2 = bpy.data.meshes['Cube'].color_attributes['test_2']
# try to change selection
bpy.ops.geometry.color_attribute_render_set(name="test_2")
</code></pre>
<p>Unfortunately this is not working. Is there an easy approach to solve this? Thanks in advance.</p>
|
<python><colors><blender><vertex>
|
2022-12-13 09:54:43
| 1
| 531
|
Christoph Müller
|
74,782,862
| 12,740,468
|
Fancy indexing in numpy
|
<p>I am basically trying to do something like this but without the for-loop... I tried with <code>np.put_along_axis</code> but it requires <code>times</code> to be of dimension 10 (same as last index of <code>src</code>).</p>
<pre class="lang-py prettyprint-override"><code>
import numpy as np
src = np.zeros((5,5,10), dtype=np.float64)
ix = np.array([4, 0, 0])
iy = np.array([1, 3, 4])
times = np.array([1 ,2, 4])
values = np.array([25., 10., -65.])
for i, time in enumerate(times):
src[ix, iy, time] += values[i]
</code></pre>
|
<python><numpy>
|
2022-12-13 09:50:12
| 1
| 358
|
Antoine Collet
|
74,782,778
| 18,396,935
|
Pass from one file to another a constant variable, without using function parameters
|
<p>I have the following code to generate probability labels on generated tabular dataframes.</p>
<pre><code>from sklearn.model_selection import StratifiedKFold
from sklearn.model_selection import KFold
from sklearn.model_selection import GridSearchCV, cross_val_score
from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasClassifier
from keras.optimizers import Adam
from sklearn.metrics import accuracy_score
def baseline_model(optimizer='adam', learn_rate=0.1):
model = Sequential()
model.add(Dense(100, input_dim=X.shape[1], activation='relu'))
model.add(Dense(50, activation='relu')) # 8 is the dim/ the number of hidden units (units are the kernel)
model.add(Dense(2, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=['accuracy'])
return model
def get_probability_labels(x, y, optimizer='adam'):
all_predictions = []
cv_5 = StratifiedKFold(n_splits=5, random_state=None, shuffle=False)
estimator = KerasClassifier(optimizer=optimizer, batch_size=32, epochs=100, build_fn=baseline_model, verbose=0)
for train_index, test_index in cv_5.split(x, y):
X_train, X_test = x.iloc[train_index], x.iloc[test_index]
y_train, y_test = y.iloc[train_index], y.iloc[test_index]
estimator.fit(X_train, y_train)
predictions = estimator.predict(X_test)
all_predictions.append(predictions)
a = [j for i in all_predictions for j in i] #remove nested list
return a
def add_labels(real_data, synthetic_data):
# add labels 0 for real and 1 for synthetic
data = pd.concat([real_data, synthetic_data], ignore_index=True)
o_labels = np.zeros((len(real_data)), dtype=int)
s_labels = np.ones((len(synthetic_data)), dtype=int)
labels = np.concatenate([o_labels, s_labels], axis=0)
data['class'] = labels
x = data.drop('class', axis=1)
y = data['class']
return x, y
# other file
def main():
X, Y = add_labels(df, df_synth)
probability_labels = get_probability_labels(X, Y)
print(probability_labels)
</code></pre>
<p>I have fixed the following code based on the above problem. <a href="https://stackoverflow.com/questions/74743921/error-optimizer-parameter-not-legal-in-keras-function">Error optimizer parameter not legal in Keras function</a></p>
<p>The problem is that I cannot add <code>X</code> as a parameter to the <code>baseline_model</code> function (it generates the error fixed in the previous post). But <code>baseline_model</code> needs <code>X.shape[1]</code>. How could I receive that value somehow, considering that the main is in another file and I can't pass it as a parameter to the <code>baseline_model</code> function?</p>
|
<python><dataframe><keras><scikit-learn>
|
2022-12-13 09:44:08
| 1
| 366
|
Carola
|
74,782,709
| 5,515,287
|
How to embed and run a jar file using python
|
<p>I have a python script that will run the jar file when script is executed:</p>
<pre><code>import subprocess
jdk_path = "jdk\\bin\\java.exe"
jfx_path = "javafx\\lib"
modules = "javafx.controls,javafx.fxml,javafx.swing"
jar_file = "setup.jar"
command = jdk_path + " --module-path " + jfx_path + " --add-modules " + modules + " -jar " + jar_file
subprocess.call(command, shell=True)
</code></pre>
<p><a href="https://i.sstatic.net/JUPI9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JUPI9.png" alt="enter image description here" /></a></p>
<p>I want to create an <strong>exe</strong> of my script and include the setup.jar file in it using <strong>auto-py-to-exe</strong> or any other library. When the script file is clicked, the script calls the setup.jar file which is embedded inside my exe version of my script.</p>
|
<python><auto-py-to-exe>
|
2022-12-13 09:38:58
| 1
| 3,123
|
Mustafa Poya
|
74,782,602
| 9,786,534
|
How to add a constant to negative values in array
|
<p>Given the xarray below, I would like to add 10 to all negative values (i.e., -5 becomes 5, -4 becomes 6, ..., -1 becomes 9; all other values remain unchanged).</p>
<pre class="lang-py prettyprint-override"><code>a = xr.DataArray(np.arange(25).reshape(5, 5)-5, dims=("x", "y"))
</code></pre>
<p>I tried:</p>
<ul>
<li><code>a[a<0]=10+a[a<0]</code>, but it returns <code>2-dimensional boolean indexing is not supported</code>.</li>
<li>Several attempts with <code>a.where</code>, but it seems that the <code>other</code> argument can only replace the mapped values with a constant rather than with indexed values.</li>
</ul>
<p>I also considered using numpy as suggested <a href="https://stackoverflow.com/questions/56013101/xarray-computation-on-one-variable-based-on-condition-of-anther-variable">here</a>, but my actual dataset is ~ 80 Gb and loaded with dask and using numpy crashes my Jupyter console.</p>
<p>Is there any way to achieve this using only <code>xarray</code>?</p>
<h2>Update</h2>
<p>I updated the code using @SpaceBurger's suggestion and <a href="https://stackoverflow.com/questions/60720294/2-dimensional-boolean-indexing-in-dask">this</a>. However, my initial example used a DataArray, whereas my real problem uses a Dataset:</p>
<pre class="lang-py prettyprint-override"><code>a = xr.DataArray(np.arange(25).reshape(5, 5)-5, dims=("x", "y"))
a = a.to_dataset(name='variable')
</code></pre>
<p>Now, if I do this:</p>
<pre class="lang-py prettyprint-override"><code>a1 = a['variable']
a2 = 10+a1.copy()
a['variable'] = dask.array.where(a['variable'] < 0, a2, a1)
</code></pre>
<p>I get this error:</p>
<pre><code>MissingDimensionsError: cannot set variable 'variable' with 2-dimensional data without explicit dimension names. Pass a tuple of (dims, data) instead.
</code></pre>
<p>Can anyone suggest a proper syntax?</p>
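<p>For reference, a minimal sketch that avoids the dimension-name problem entirely (assuming the only goal is to add 10 to the negative values): <code>xr.where</code> keeps the dimension names and evaluates lazily when the data are dask-backed:</p>
<pre><code>import xarray as xr

# add 10 to negative values, leave everything else untouched
a['variable'] = xr.where(a['variable'] < 0, a['variable'] + 10, a['variable'])
</code></pre>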
|
<python><dask><python-xarray>
|
2022-12-13 09:30:02
| 2
| 324
|
e5k
|
74,782,472
| 2,075,630
|
Redirect subprocess stdout to stderr
|
<p>A standard feature of Python's <code>subprocess</code> API is to combine STDERR and STDOUT by using the keyword argument</p>
<pre><code>stderr = subprocess.STDOUT
</code></pre>
<p>But currently I need the opposite: redirect the STDOUT of a command to STDERR. Right now I am doing this manually using <code>subprocess.getoutput</code> or <code>subprocess.check_output</code> and printing the result, but is there a more concise option?</p>
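<p>A minimal sketch of what I mean by "more concise" (assuming <code>sys.stderr</code> is backed by a real file descriptor, which it normally is outside of some IDEs): the <code>stdout</code> argument accepts any file object with a file descriptor, so the child's output can be pointed directly at STDERR:</p>
<pre><code>import subprocess
import sys

# the child process writes to our STDERR instead of STDOUT
subprocess.run(["echo", "hello"], stdout=sys.stderr)
</code></pre>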
|
<python><subprocess><io-redirection>
|
2022-12-13 09:19:28
| 1
| 4,456
|
kdb
|
74,782,345
| 8,481,155
|
Apache Beam Count Rows and store in a variable Python
|
<p>I'm trying to count the number of elements in a PCollection and store it in a variable which I want to use it for further calculations. Any guidance on how I can do it?</p>
<pre><code>import apache_beam as beam
pipeline = beam.Pipeline()
total_elements = (
pipeline
| 'Create elements' >> beam.Create(['one', 'two', 'three', 'four', 'five', 'six'])
| 'Count all elements' >> beam.combiners.Count.Globally()
| beam.Map(print))
print(total_elements + 10)
pipeline.run()
</code></pre>
<p>Printing <code>total_elements</code> works, but my requirement is to add an integer to it.
I tried using <code>int(total_elements)</code> and that doesn't work either.</p>
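<p>A minimal sketch of the in-pipeline alternative (a <code>PCollection</code> is not a plain Python value, so the addition has to happen inside the pipeline as another transform):</p>
<pre><code>import apache_beam as beam

with beam.Pipeline() as pipeline:
    (
        pipeline
        | 'Create elements' >> beam.Create(['one', 'two', 'three', 'four', 'five', 'six'])
        | 'Count all elements' >> beam.combiners.Count.Globally()
        # the count only exists inside the pipeline, so the +10 is applied as a transform
        | 'Add ten' >> beam.Map(lambda n: n + 10)
        | beam.Map(print)
    )
</code></pre>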
|
<python><python-3.x><apache-beam>
|
2022-12-13 09:07:07
| 2
| 701
|
Ashok KS
|
74,782,227
| 607,846
|
Combine yielded values into a list
|
<p>I wish to create a list of objects in a unit test, but skip an object's creation if its corresponding config is None. This is an attempt, but can it be done better?</p>
<pre><code>from itertools import chain

def object_factory(config, model):
if config is not None:
yield model(**config)
def objects_factory(config1, config2):
return list(
chain(
object_factory(config1, Model1),
object_factory(config2, Model2)
)
)
</code></pre>
<p>So this input:</p>
<pre><code>objects_factory(None, dict(pk=1))
</code></pre>
<p>will give the following:</p>
<pre><code>[Model2(pk=1)]
</code></pre>
<p>and this:</p>
<pre><code>objects_factory(dict(pk=1), dict(pk=1))
</code></pre>
<p>will give me:</p>
<pre><code>[Model1(pk=1), Model2(pk=1)]
</code></pre>
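<p>For comparison, a minimal sketch without the generator indirection (whether this is "better" is of course subjective):</p>
<pre><code>def objects_factory(config1, config2):
    # pair each config with its model and keep only the ones that are not None
    pairs = [(config1, Model1), (config2, Model2)]
    return [model(**config) for config, model in pairs if config is not None]
</code></pre>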
|
<python>
|
2022-12-13 08:58:19
| 1
| 13,283
|
Baz
|
74,782,137
| 12,201,407
|
Python Datetime Subtraction include 0 to hour section
|
<p>I have two datetime strings. I need to subtract them to get a duration, but the hour section of the result only has one digit, e.g. 1:30, 2:30. My question is: can I get the subtracted datetime (or string) with a leading 0, e.g. 01:30, 02:30?
The leading 0 is only needed for 1 - 9 hours.
Assume that there will be no recording with a length of more than a day.</p>
<pre><code>d1 = '2022-12-10T08:59:02Z'
d2 = '2022-12-10T10:06:35Z'
</code></pre>
<p>For subtraction I use the code below.</p>
<pre><code>from dateutil import parser
parsed_d1 = parser.parse(d1)
parsed_d2 = parser.parse(d2)
result = str(parsed_d2 - parsed_d1)
print(result)
>>> '1:07:33'
</code></pre>
<p>If you want to use datetime strptime or strftime then the format is</p>
<pre><code>format = '%Y-%m-%dT%H:%M:%Sz'
</code></pre>
<p>Currently I'm getting the desired output by doing this</p>
<pre><code>duration_list = result.split(":")
if int(duration_list[0]) <= 9:
result = f"0{result}"
print(result)
>>> '01:07:33'
</code></pre>
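<p>A minimal sketch of a more direct way to format the duration (splitting the <code>timedelta</code> into components instead of post-processing its string form):</p>
<pre><code>from dateutil import parser

d1 = '2022-12-10T08:59:02Z'
d2 = '2022-12-10T10:06:35Z'

delta = parser.parse(d2) - parser.parse(d1)
total_seconds = int(delta.total_seconds())
hours, remainder = divmod(total_seconds, 3600)
minutes, seconds = divmod(remainder, 60)

result = f"{hours:02d}:{minutes:02d}:{seconds:02d}"
print(result)  # 01:07:33
</code></pre>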
|
<python><datetime><python-datetime><timedelta><python-dateutil>
|
2022-12-13 08:49:52
| 2
| 738
|
jeevu94
|
74,781,783
| 2,975,438
|
Pandas: how to add column with True/False if another column contain duplicate or not
|
<p>I have the following dataframe:</p>
<pre><code>d_test = {'name' : ['Beach', 'Dog', 'Bird', 'Dog', 'Ant', 'Beach']}
df_test = pd.DataFrame(d_test)
</code></pre>
<p>I want to add a column <code>duplicate</code> with True/False for each entry. I want <code>False</code> only when there is a single entry with that value in the <code>name</code> column, and <code>True</code> in all other cases. Here is the expected output:</p>
<pre><code> name duplicate
0 Beach True
1 Dog True
2 Bird False
3 Dog True
4 Ant False
5 Beach True
</code></pre>
<p>I have been looking at the <code>df.groupby('...')</code> method, but I am not sure how to apply it to my case.</p>
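<p>A minimal sketch of one approach I am considering (it seems <code>Series.duplicated</code> with <code>keep=False</code> flags every occurrence of a repeated value, not just the later ones):</p>
<pre><code>import pandas as pd

d_test = {'name': ['Beach', 'Dog', 'Bird', 'Dog', 'Ant', 'Beach']}
df_test = pd.DataFrame(d_test)

# True for every row whose name appears more than once
df_test['duplicate'] = df_test['name'].duplicated(keep=False)
print(df_test)
</code></pre>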
|
<python><pandas>
|
2022-12-13 08:15:53
| 1
| 1,298
|
illuminato
|
74,781,771
| 14,046,645
|
How can we resolve the "Solving environment: failed with initial frozen solve. Retrying with flexible solve." issue while installing a new conda package
|
<p>I tried to install a new package in conda for Windows using the following command:</p>
<p><code>conda install -c conda-forge python-pdfkit</code></p>
<p>but got the following error:</p>
<blockquote>
<p>Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.</p>
</blockquote>
<p>I have tried the following workarounds, but to no avail; I am still getting the same error:</p>
<p>Workaround 1:</p>
<pre><code>$conda create --name myenv
$conda activate myenv
</code></pre>
<p>Workaround 2:</p>
<pre><code>conda config --set ssl_verify false
</code></pre>
|
<python><python-3.x><anaconda><pdfkit><python-pdfkit>
|
2022-12-13 08:14:37
| 6
| 471
|
Rajesh Potnuru
|
74,781,686
| 16,971,617
|
Adding np array objects to a list
|
<p>I am trying to append sub-images, which are numpy array objects, to a list.</p>
<pre class="lang-py prettyprint-override"><code> temp = []
for color in COLOUR_RANGE:
#Some code to extract the color region which is represented in numpy
if cv2.contourArea(coloured_cnt) > 400:
x, y, w, h = cv2.boundingRect(coloured_cnt)
coloursRect = cv2.cvtColor(img[y:y+h, x:x+w], cv2.COLOR_BGR2RGB)
coloursRect_1D = np.vstack(coloursRect)/255
temp.append(coloursRect_1D.copy())
print(len(temp))
print(type(temp))
</code></pre>
<p>But the length of the list increases by two on every loop iteration.</p>
<pre><code>2
<class 'list'>
4
<class 'list'>
6
<class 'list'>
8
<class 'list'>
10
</code></pre>
<p>However, if I create a new list as follows,</p>
<pre><code> temp = [
np.vstack(coloursRectList['blue'])/255,
np.vstack(coloursRectList['green'])/255,
np.vstack(coloursRectList['yellow'])/255,
np.vstack(coloursRectList['red'])/255,
    np.vstack(coloursRectList['black'])/255,
    ]
print('len of temp', len(temp))
print('type of temp', type(temp))
</code></pre>
<p>The output is as expected</p>
<pre><code>len of temp 6
type of temp <class 'list'>
</code></pre>
<p>I prefer the first method as it would be more dynamic, so that I can do</p>
<pre><code>EXTRACTED = np.array(np.vstack(temp))
</code></pre>
<p>I am wondering how to append numpy object to a list properly. Any help is appreciated.</p>
<hr />
<p>I tried the code below to reproduce the error with numpy, but the problem does not reappear.
I am not sure what is going wrong.</p>
<pre><code>import numpy as np
mylist = []
a = np.zeros(shape=(96, 93, 3))
b = np.vstack(a)/255
print(a.shape)
print(type(a))
print(b.shape)
print(type(b))
for i in range(5):
mylist.append(b.copy())
print(len(mylist))
</code></pre>
<p>Output</p>
<pre><code>(96, 93, 3)
<class 'numpy.ndarray'>
(8928, 3)
<class 'numpy.ndarray'>
1
2
3
4
5
</code></pre>
|
<python><numpy>
|
2022-12-13 08:05:12
| 1
| 539
|
user16971617
|
74,781,683
| 12,064,467
|
Applymap on all but one column of a Pandas DataFrame?
|
<p>I have a DataFrame <code>df</code> that looks like this:</p>
<pre><code> 0 1 2 3 4 5
0 first M A F I L
1 second M A F I L
2 third M S F I I
3 fourth M S F I L
4 fifth M L F F I
</code></pre>
<p>I would like to change each element of each column <em>except for the first</em> to its corresponding integer ASCII code (i.e. "M" gets mapped to the integer 77, "A" gets mapped to 65, etc.).</p>
<p>I can achieve this result with the following:</p>
<pre class="lang-py prettyprint-override"><code>new_df = df.loc[:, 1:].applymap(ord)
new_df.insert(0, 0, df[0])
</code></pre>
<p>Is there a better way to do this than by creating a new DataFrame? Perhaps a way to do <code>applymap</code> in-place on a subset of columns?</p>
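<p>A minimal sketch of the in-place variant I have in mind (assigning the transformed block back to the same column selection):</p>
<pre><code># apply ord to every column except the first and write the result back in place
df.loc[:, 1:] = df.loc[:, 1:].applymap(ord)
</code></pre>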
|
<python><pandas><dataframe>
|
2022-12-13 08:04:42
| 1
| 522
|
DataScienceNovice
|
74,781,617
| 1,106,951
|
How to import a class located in another root directory folder
|
<p>Using Python 3.7.9 and having a file directory hierarchy like this</p>
<pre><code>Application
| |
| └── __init__.py
|
├── CLS
│ └── Article.py
│ └── __init__.py
│
└── OUT
└── publish.py
└── __init__.py
</code></pre>
<p>As you can see, I have an <code>__init__.py</code> in all of the folders <code>Application</code>, <code>CLS</code>, and <code>OUT</code>. Now, when I try to import the <code>Article.py</code> file (which contains a Python class definition) into <code>publish.py</code> like</p>
<pre><code>from Application.CLS.Article import Article
</code></pre>
<p>or</p>
<pre><code>from CLS.Article import Article
</code></pre>
<p>or</p>
<pre><code> from ...CLS.Article import Article
</code></pre>
<p>I am getting this error message</p>
<pre><code>ModuleNotFoundError Traceback (most recent call last)
<ipython-input-9-8542540653f1> in <module>
----> 1 from Application.CLS.Article import Article
ModuleNotFoundError: No module named 'Application.CLS.Article'; 'CLS.Article' is not a package
</code></pre>
<p>Why is this still happening even though I have an <code>__init__.py</code> at every folder level?</p>
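<p>A minimal sketch of one workaround I am aware of (an assumption on my part: the folder that contains <code>Application</code> has to be on <code>sys.path</code> for the absolute import to resolve):</p>
<pre><code># publish.py
import os
import sys

# put the directory that CONTAINS "Application" on the import path
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), "..", "..")))

from Application.CLS.Article import Article
</code></pre>
<p>Running the script as a module from the directory above <code>Application</code> (<code>python -m Application.OUT.publish</code>) should have a similar effect, but I would prefer a cleaner solution if one exists.</p>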
|
<python>
|
2022-12-13 07:57:40
| 1
| 6,336
|
Behseini
|
74,781,604
| 4,821,337
|
How to read values from a Kubernetes secret in a Python application and set them as environment variables
|
<p>I am trying to read values from a Kubernetes secret in my Python application as environment variables. I see that the secrets are created as separate files and mounted on a specific path (in my case I mount them on etc/secrets/azure-bs). There are five secret files, namely:</p>
<ol>
<li>accessKeyId</li>
<li>bucket.properties</li>
<li>storageAccount</li>
<li>key.json</li>
<li>bucketName.</li>
</ol>
<p>Now, bucket.properties contains some key-value pairs. There is a property-source parser used in the application, and it is abstracted away from my team; it usually parses secret values. However, I am only able to parse bucket.properties, since it has key-value pairs. I want to read the contents of the other files and store them as environment variables, but I am not sure how to go about that. The contents of these other files are not in the "key=value" format; instead, the key is the filename itself and the value is the contents of the file.</p>
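<p>A minimal sketch of what I am trying to achieve (assuming the secrets stay mounted as files under the path mentioned above): read each mounted file and export it, with the filename as the variable name and the file contents as the value:</p>
<pre><code>import os
from pathlib import Path

# mount point from the question (assumed to be an absolute path)
SECRET_DIR = Path("/etc/secrets/azure-bs")

for secret_file in SECRET_DIR.iterdir():
    if secret_file.is_file():
        os.environ[secret_file.name] = secret_file.read_text().strip()
</code></pre>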
|
<python><environment-variables><azure-blob-storage><kubernetes-helm><kubernetes-secrets>
|
2022-12-13 07:56:28
| 1
| 359
|
capedCoder
|
74,781,497
| 10,394,971
|
How to create a dataframe using pandas based on comparing index with values of multiple columns in other dataframe
|
<p>I have two data source:</p>
<pre><code>raw_data = {'site_394$line_2420$tag_144': {1670231589000: 7,
1671231589000: 7,
1672231589000: 9,
1673231589000: 7},
'site_395$line_2420$tag_154': {1670231589000: 9,
1671231589000: 10,
1672231589000: 25,
1673231589000: 6}}
</code></pre>
<p>and</p>
<pre><code>events_data=[
{
"tag":"site_394$line_2420$tag_144",
"from_date": 1670231589000,
"to_date": 1670232589000,
"event_name": "Event One"
},
{
"tag":"site_394$line_2420$tag_144",
"from_date": 1671231589000,
"to_date": 1671332589000,
"event_name": "Event Two"
},
{
"tag":"site_394$line_2420$tag_144",
"from_date": 1671231589000,
"to_date": 1671332589000,
"event_name": "Event Two Update"
},
{
"tag":"site_394$line_2420$tag_144",
"from_date": 1670231589100,
"to_date": 1670232589200,
"event_name": "Event Three"
},
{
"tag":"site_395$line_2420$tag_154",
"from_date": 1670231589000,
"to_date": 1670232589000,
"event_name": "Event One"
},
{
"tag":"site_395$line_2420$tag_154",
"from_date": 1671231589000,
"to_date": 1671332589000,
"event_name": "Event Two"
},
{
"tag":"site_395$line_2420$tag_154",
"from_date": 1670231589100,
"to_date": 1670232589200,
"event_name": "Event Three"
}
]
</code></pre>
<p>I would like to combine the two into a single dataframe as shown below. The logic is: for a column in <code>raw_data</code>, if the index of the raw data falls between <code>from_date</code> and <code>to_date</code> in <code>events_data</code>, then the value of the respective column should be replaced with <code>event_name</code>. One catch is that if there are multiple matches, the event names should be appended, comma separated. If the value of the column in <code>raw_data</code> is integer,</p>
<p>Expected result:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: right;"></th>
<th style="text-align: left;">site_394$line_2420$tag_144</th>
<th style="text-align: left;">site_395$line_2420$tag_154</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">1670231589000</td>
<td style="text-align: left;">Event One</td>
<td style="text-align: left;">Event One</td>
</tr>
<tr>
<td style="text-align: right;">1671231589000</td>
<td style="text-align: left;">Event Two,Event Two update</td>
<td style="text-align: left;">Event Two</td>
</tr>
<tr>
<td style="text-align: right;">1672231589000</td>
<td style="text-align: left;">9</td>
<td style="text-align: left;">25.0</td>
</tr>
<tr>
<td style="text-align: right;">1673231589000</td>
<td style="text-align: left;">7</td>
<td style="text-align: left;">6.0</td>
</tr>
</tbody>
</table>
</div>
<p>Any help or hint is appreciated.</p>
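<p>For reference, a straightforward (non-vectorized) sketch of the matching logic described above; it is probably too slow for large data, but it shows the intended behavior:</p>
<pre><code>import pandas as pd

raw_df = pd.DataFrame(raw_data)
events_df = pd.DataFrame(events_data)

result = raw_df.astype(object)
for col in raw_df.columns:
    for ts in raw_df.index:
        # events whose tag matches the column and whose interval contains the timestamp
        matches = events_df.loc[
            (events_df["tag"] == col)
            & (events_df["from_date"] <= ts)
            & (ts <= events_df["to_date"]),
            "event_name",
        ]
        if not matches.empty:
            result.loc[ts, col] = ",".join(matches)

print(result)
</code></pre>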
|
<python><pandas>
|
2022-12-13 07:46:51
| 1
| 2,635
|
Irfanuddin
|
74,781,114
| 12,559,770
|
Group overlapping start-end coordinates within groups in pandas
|
<p>I have a dataframe such as:</p>
<pre><code>Groups Scaff start end
G1 Scaff1 2278 4437
G1 Scaff1 2788 3048
G1 Scaff1 3959 4183
G1 Scaff1 4201 4407
G1 Scaff2 4553 5000
G1 Scaff2 6321 7700
G1 Scaff3 2870 5083
G1 Scaff4 1923 2042
G1 Scaff5 663 2885
G1 Scaff5 2145 2825
</code></pre>
<p>And I would like to assign a new group to each set of overlapping coordinates within a <code>Groups</code>-<code>Scaff</code> pair.</p>
<p>Let's first take <code>G1-Scaff1</code> as an example:</p>
<pre><code>Groups Scaff start end
G1 Scaff1 2278 4437
G1 Scaff1 2788 3048
G1 Scaff1 3959 4183
G1 Scaff1 4201 4407
</code></pre>
<p>As you can see, all the coordinates overlap with each other:</p>
<ul>
<li><code>2278 - 4437</code> overlaps with <code>2788 - 3048</code></li>
<li><code>2788 - 3048</code> overlaps with <code>3959 - 4183</code></li>
<li><code>3959 - 4183</code> overlaps with <code>4201 - 4407</code></li>
</ul>
<p>so I group them all under the same new group G1:</p>
<pre><code>Groups Scaff start end New_group
G1 Scaff1 2278 4437 G1
G1 Scaff1 2788 3048 G1
G1 Scaff1 3959 4183 G1
G1 Scaff1 4201 4407 G1
</code></pre>
<p>By overlap I mean, for instance, that comparing 1-10 and 3-7 would give an overlap of 4.</p>
<p>For the other example, in <code>G1 - Scaff2</code> there is no overlap, so I put the two rows into two different groups:</p>
<pre><code>Groups Scaff start end New_group
G1 Scaff2 4553 5000 G2
G1 Scaff2 6321 7700 G3
</code></pre>
<p>I should then get overall:</p>
<pre><code>Groups Scaff start end New_group
G1 Scaff1 2278 4437 G1
G1 Scaff1 2788 3048 G1
G1 Scaff1 3959 4183 G1
G1 Scaff1 4201 4407 G1
G1 Scaff2 4553 5000 G2
G1 Scaff2 6321 7700 G3
G1 Scaff3 2870 5083 G4
G1 Scaff4 1923 2042 G5
G1 Scaff5 663 2885 G6
G1 Scaff5 2145 2825 G6
</code></pre>
<p>So far I tried the following code:</p>
<pre><code>is_overlapped = lambda x: x['start'] >= x['end'].shift(fill_value=-1)
tab['New_group'] = tab.sort_values(['Groups','Scaff','start','end']).groupby(['Groups','Scaff'],as_index=False).apply(is_overlapped).droplevel(0).cumsum()
</code></pre>
<p>Which gives:</p>
<pre><code> Groups Scaff start end New_group
0 G1 Scaff1 2278 4437.0 1
1 G1 Scaff1 2788 3048.0 1
2 G1 Scaff1 3959 4183.0 2
3 G1 Scaff1 4201 4407.0 3
4 G1 Scaff2 4553 5000.0 4
5 G1 Scaff2 6321 7700.0 5
6 G1 Scaff3 2870 5083.0 6
7 G1 Scaff4 1923 2042 7
8 G1 Scaff5 663 2885 9
9 G1 Scaff5 2145 2825.0 8
</code></pre>
<p>and as you can see, rows 0,1,2 and 3 should all be in the same <code>New_group</code>...</p>
<p>Here is the dataframe in dict format, in case it helps:</p>
<pre><code>{'Groups': {0: 'G1', 1: 'G1', 2: 'G1', 3: 'G1', 4: 'G1', 5: 'G1', 6: 'G1', 7: 'G1', 8: 'G1', 9: 'G1'}, 'Scaff': {0: 'Scaff1', 1: 'Scaff1', 2: 'Scaff1', 3: 'Scaff1', 4: 'Scaff2', 5: 'Scaff2', 6: 'Scaff3', 7: 'Scaff4', 8: 'Scaff5', 9: 'Scaff5'}, 'start': {0: 2278, 1: 2788, 2: 3959, 3: 4201, 4: 4553, 5: 6321, 6: 2870, 7: 1923, 8: 663, 9: 2145}, 'end': {0: 4437, 1: 3048, 2: 4183, 3: 4407, 4: 5000, 5: 7700, 6: 5083, 7: 2042, 8: 2885, 9: 2825}}
</code></pre>
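<p>A sketch along the lines of my attempt above; the key change (an assumption on my part) is comparing each <code>start</code> against the running maximum of <code>end</code> within the <code>Groups</code>/<code>Scaff</code> block, rather than against the immediately preceding row only:</p>
<pre><code>import pandas as pd

# tab is the dataframe from the question
tab = tab.sort_values(['Groups', 'Scaff', 'start', 'end'])

# running maximum of "end" seen so far within each block, shifted by one row;
# fill_value=-1 makes the first row of every block start a new group
prev_max_end = (
    tab.groupby(['Groups', 'Scaff'])['end']
       .transform(lambda s: s.cummax().shift(fill_value=-1))
)

tab['New_group'] = 'G' + (tab['start'] > prev_max_end).cumsum().astype(str)
print(tab)
</code></pre>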
|
<python><python-3.x><pandas>
|
2022-12-13 07:09:41
| 1
| 3,442
|
chippycentra
|
74,781,063
| 8,551,737
|
Converted tf.keras.layers.Conv2DTranspose to tflite can't run on the gpu
|
<p>I tried converting a TF2 Keras model with a Conv2DTranspose layer to tflite.</p>
<p>But after converting to tflite, <strong>this model automatically generates <code>Shape</code> and <code>Pack</code> operations</strong>. That is quite a problem for me because <strong>these operations can't be supported by the mobile GPU</strong>. Why does tflite work like this, and how do I fix it?</p>
<p>Tflite converting code:</p>
<pre><code>def get_model():
inputs = tf.keras.Input(shape=(64, 64, 3))
outputs = keras.layers.Conv2DTranspose(3, 3, strides=2, padding="same", activation="relu", name="deconv1")(inputs)
model = keras.Model(inputs=inputs, outputs=outputs, name="custom")
x = tf.ones((1, 64, 64, 3))
y = model(x)
return model
def convert_model(saved_model_dir, tflite_save_dir):
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float32]
tflite_model = converter.convert()
with open(tflite_save_dir, "wb") as f:
f.write(tflite_model)
if __name__=="__main__":
model = get_model()
current_path = os.path.dirname(os.path.realpath(__file__))
save_dir = os.path.join(current_path, "custom/1/")
tf.saved_model.save(model, save_dir)
tflite_save_dir = os.path.join(current_path, "my_model.tflite")
convert_model(save_dir, tflite_save_dir)
test_tflite(tflite_save_dir)
</code></pre>
<p>Tflite Model Visualizing:</p>
<p><a href="https://i.sstatic.net/EsewD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EsewD.png" alt="enter image description here" /></a></p>
<p><strong>P.S.</strong> The <code>tf.compat.v1.layers.conv2d_transpose</code> graph does not generate the <code>Shape</code>-<code>StridedSlice</code>-<code>Pack</code> operations. That means <code>tf.compat.v1.layers.conv2d_transpose</code> can run on the mobile GPU. Why does this difference occur?</p>
<p>Tflite converting code:</p>
<pre><code>tfv1 = tf.compat.v1
def generate_tflite_model_from_v1(saved_model_dir):
tflite_save_dir = os.path.join(saved_model_dir, "my_model_tfv1.tflite")
with tf.Graph().as_default() as graph:
inputs = tfv1.placeholder(tf.float32, shape=[1, 64, 64, 3])
x = tfv1.layers.conv2d_transpose(inputs, 3, 3, strides=2, padding="same", name="deconv1")
outputs = tf.nn.relu(x)
with graph.as_default(), tfv1.Session(graph=graph) as session:
session.run(tfv1.global_variables_initializer())
converter = tfv1.lite.TFLiteConverter.from_session(
session, input_tensors=[inputs], output_tensors=[outputs])
with open(tflite_save_dir, "wb") as f:
f.write(converter.convert())
</code></pre>
<p><a href="https://i.sstatic.net/XVv2i.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XVv2i.png" alt="enter image description here" /></a></p>
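<p>One thing that may be worth trying (this is an assumption on my side, not something the converter documentation confirms): the <code>Shape</code>/<code>StridedSlice</code>/<code>Pack</code> pattern usually computes the transposed convolution's output shape at runtime because the batch dimension is unknown, so a static batch size might let the converter fold it away:</p>
<pre><code>def get_model():
    # hypothetical tweak: static batch size so the Conv2DTranspose output shape
    # is known at conversion time instead of being computed at runtime
    inputs = tf.keras.Input(shape=(64, 64, 3), batch_size=1)
    outputs = keras.layers.Conv2DTranspose(3, 3, strides=2, padding="same",
                                           activation="relu", name="deconv1")(inputs)
    return keras.Model(inputs=inputs, outputs=outputs, name="custom")
</code></pre>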
|
<python><tensorflow><keras><tensorflow2.0><tflite>
|
2022-12-13 07:03:39
| 1
| 455
|
YeongHwa Jin
|
74,781,034
| 8,888,469
|
How to plot the frequency chart and word cloud from the frequency list created in Pandas
|
<p>I have a data frame of n-gram frequencies, created by the code below. How can I create a frequency plot and a word cloud from this output?</p>
<p><strong>Code</strong></p>
<pre><code>from sklearn.feature_extraction.text import CountVectorizer
word_vectorizer = CountVectorizer(ngram_range=(3,3), analyzer='word')
sparse_matrix = word_vectorizer.fit_transform(dfs[dfs.columns[2]])
frequencies = sum(sparse_matrix).toarray()[0]
df =pd.DataFrame(frequencies, index=word_vectorizer.get_feature_names(), columns=['frequency'])
</code></pre>
<p><strong>Output</strong></p>
<pre><code> frequency
abbott issues deliver 10
abiding govt india 15
ability adapt totally 4
ability case emergencies 35
ability even client 11
ability get things 6
ability grow kn 1 18
</code></pre>
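<p>A minimal sketch of what I have in mind (assuming the <code>wordcloud</code> package is available): the dataframe already maps each n-gram to its frequency, so both plots can be driven from it directly:</p>
<pre><code>import matplotlib.pyplot as plt
from wordcloud import WordCloud

# horizontal bar chart of the 20 most frequent n-grams
top = df.sort_values('frequency', ascending=False).head(20)
top.plot.barh(y='frequency', legend=False)
plt.gca().invert_yaxis()
plt.tight_layout()
plt.show()

# word cloud built directly from the {ngram: frequency} mapping
wc = WordCloud(width=800, height=400, background_color='white')
wc.generate_from_frequencies(df['frequency'].to_dict())
plt.imshow(wc, interpolation='bilinear')
plt.axis('off')
plt.show()
</code></pre>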
|
<python><pandas><matplotlib><nltk><word-cloud>
|
2022-12-13 06:59:41
| 0
| 933
|
aeapen
|
74,780,965
| 8,208,006
|
ls command - on progressive chunks in linux - large directory
|
<p>I have a problem listing files from a really big directory.</p>
<p>The <code>ls -l *.pdf</code> command takes a lot of time, and after every disconnection it starts again from the beginning.</p>
<p>I have tried a few things:</p>
<pre><code>ls -l /**/*.pdf
</code></pre>
<p>and</p>
<pre><code>find ./*.pdf -type d -maxdepth 1 -printf '%f\n' >> files.txt
</code></pre>
<p>and</p>
<pre><code>glob.glob(f"{dir}/**/*.pdf", recursive=True)
</code></pre>
<p>These take hours without any acknowledgement of progress.</p>
<p>Is there any way to list the files progressively, in chunks of the directory, and to resume from the point where it left off within a single directory? That way it would be possible to resume after a disconnection instead of starting from the beginning.</p>
<p>I spent time exploring many blogs, but it didn't help.</p>
<p><code>displit</code> - may have chance
<a href="https://unix.stackexchange.com/a/247353/314903">https://unix.stackexchange.com/a/247353/314903</a></p>
<p>Any small suggestions or leads would be highly appreciated.</p>
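<p>A minimal Python sketch of the kind of progressive, resumable listing I am after (an assumption: names already written are skipped on re-run, although the directory still has to be re-scanned from the start):</p>
<pre><code>import os

output = "files.txt"

# names already written by a previous (interrupted) run
seen = set()
if os.path.exists(output):
    with open(output) as f:
        seen = {line.strip() for line in f}

# os.scandir yields entries lazily, so progress is written out as we go
with open(output, "a") as out, os.scandir(".") as it:
    for entry in it:
        if entry.name.endswith(".pdf") and entry.name not in seen:
            out.write(entry.name + "\n")
            out.flush()
</code></pre>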
|
<python><linux><powershell><shell>
|
2022-12-13 06:51:58
| 0
| 4,607
|
Naga kiran
|
74,780,953
| 8,219,760
|
Add row-wise accuracy to a seaborn heatmap
|
<pre class="lang-py prettyprint-override"><code>import seaborn as sb
import numpy as np
from matplotlib import pyplot as plt
A = np.array([[10, 5], [3, 10]], dtype=np.int32)
plt.figure()
sb.heatmap(
A,
square=True,
annot=True,
xticklabels=["False", "Positive"],
yticklabels=["False", "Positive"],
cbar=False,
fmt="2d",
)
plt.title("Example plot")
plt.show()
</code></pre>
<p>This shows an example of a heatmap. I wish to add the accuracy of each row to the left side of the image.</p>
<p>The plot should be similar to</p>
<p><a href="https://i.sstatic.net/FAWCR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FAWCR.png" alt="enter image description here" /></a></p>
<p>Can this be achieved?</p>
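<p>A minimal sketch of one option (an assumption on my part): compute the per-row accuracy and fold it into the y tick labels, which sit on the left side of the heatmap:</p>
<pre><code>import numpy as np
import seaborn as sb
from matplotlib import pyplot as plt

A = np.array([[10, 5], [3, 10]], dtype=np.int32)

# per-row accuracy: diagonal element divided by the row sum
row_acc = A.diagonal() / A.sum(axis=1)
ylabels = [f"{name}\nacc={acc:.2f}" for name, acc in zip(["False", "Positive"], row_acc)]

plt.figure()
sb.heatmap(A, square=True, annot=True, fmt="2d", cbar=False,
           xticklabels=["False", "Positive"], yticklabels=ylabels)
plt.title("Example plot")
plt.show()
</code></pre>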
|
<python><numpy><matplotlib><seaborn>
|
2022-12-13 06:50:40
| 1
| 673
|
vahvero
|
74,780,800
| 59,300
|
How can I get a correct decimal results for a Python modulo operation (without extra libs)?
|
<p>I am trying to get a correct decimal (not scientific) result for the following expression. It seems that, by default, Python prints scientific notation. I am using Python on an embedded system, so I cannot easily install libs. I am looking for a default way to get the correct result.</p>
<pre><code>>>> 4292739359**(-1) % 4292739360
2.329514830439068e-10
</code></pre>
<p>The <a href="https://www.wolframalpha.com/input?i=4292739359%5E%28-1%29%20mod%204292739360" rel="nofollow noreferrer">correct result</a> is 4292739359. Python shows a different result.</p>
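<p>For reference, a minimal sketch in case the intent is the modular inverse (assuming Python 3.8+, where the three-argument <code>pow</code> accepts a negative exponent and works entirely in integer arithmetic):</p>
<pre><code># exact integer modular inverse, no floating point involved (Python 3.8+)
print(pow(4292739359, -1, 4292739360))  # 4292739359
</code></pre>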
|
<python><math><modulo>
|
2022-12-13 06:31:22
| 1
| 7,437
|
wishi
|
74,780,599
| 1,473,517
|
How to do division of very large integers rounded to nearest integer?
|
<p>I have two very large integers x, y. How can I compute x/y rounded to the nearest integer in pure Python? x//y will give me the correct answer but rounded down.</p>
<p>The numbers are big enough that round(x/y) will not give the correct answer due to floating point precision.</p>
<p>I would like to avoid using any libraries if at all possible. Although Decimal would work, I am using pypy and Decimal is very slow in pypy. If there were a suitable library that was fast in pypy I would be happy to use it.</p>
<p>Examples:</p>
<pre><code>x1 = 10**100 - 1
x2 = 10**100 + 1
y = 2 * 10**100
assert divide(x1, y) == 0
assert divide(x2, y) == 1
assert divide(x1, -y) == 0
assert divide(x2, -y) == -1
assert divide(-x1, y) == 0
assert divide(-x2, y) == -1
assert divide(-x1, -y) == 0
assert divide(-x2, -y) == 1
</code></pre>
<p><code>def divide(x, y): return round(x/y)</code> gives <code>0</code> for all 8 test cases, failing the <code>x2</code> test cases.</p>
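<p>For reference, a pure-integer sketch built on <code>divmod</code> (it passes the eight asserts above; note that exact halves round toward positive infinity here, so the comparison would need adjusting if a different tie-breaking rule is wanted):</p>
<pre><code>def divide(x, y):
    # divmod gives a floored quotient and a remainder with the same sign as y,
    # so the fractional part of x/y is abs(r)/abs(y); round up when it is >= 1/2
    q, r = divmod(x, y)
    if 2 * abs(r) >= abs(y):
        q += 1
    return q
</code></pre>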
|
<python>
|
2022-12-13 06:06:30
| 2
| 21,513
|
Simd
|
74,780,524
| 6,787,723
|
How to get a Ray task again when the driver that submitted the task died?
|
<p>I have a question a bit similar to <a href="https://stackoverflow.com/questions/69613739/how-to-kill-ray-tasks-when-the-driver-is-dead">How to kill ray tasks when the driver is dead</a></p>
<pre><code>ray.init(f"ray://{head_ip}:10001")
@ray.remote
def compute(compute_params):
# do some long computation
return some_result
# ==== driver part
ray_res = [compute.remote(cp) for cp in many_computations]
</code></pre>
<p>but I wish to get the task status even if the driver that submitted the tasks is dead.
I know I can do this if I use a named Actor, since a name can be used to get the actor from another driver connected to the same cluster in the same namespace,
but it seems this doesn't work for tasks.
I am hoping there is an API where I can use something like a task name, or a future id, to get the task object again from another driver.
I found that I can set the name of a task, but there is no API to get it back by name or id.</p>
<pre><code>compute.options(name="foobar").remote(compute_params)
>> ObjectRef(c8ef45ccd0112571ffffffffffffffffffffffff0100000001000000)
</code></pre>
<p>like</p>
<pre><code>ray.get_task_by_name('foobar')
</code></pre>
<p>or</p>
<pre><code>ray.get_task_by_id("c8ef45ccd0112571ffffffffffffffffffffffff0100000001000000")
</code></pre>
<hr />
<p>Thanks to the Ray community, my problem is resolved now:
there is no way to achieve this at the moment (2022-12); you can only use an Actor, or use the alpha library.
<a href="https://discuss.ray.io/t/how-to-get-ray-task-again-while-the-driver-submit-the-task-died/8636" rel="nofollow noreferrer">https://discuss.ray.io/t/how-to-get-ray-task-again-while-the-driver-submit-the-task-died/8636</a></p>
|
<python><ray>
|
2022-12-13 05:55:25
| 1
| 645
|
Tarjintor
|
74,780,473
| 2,975,438
|
How to do effective matrix computation and not get memory overload for similarity scoring?
|
<p>I have the following code for similarity scoring:</p>
<pre><code>from rapidfuzz import process, fuzz
import numpy as np
import pandas as pd
d_test = {
'name' : ['South Beach', 'Dog', 'Bird', 'Ant', 'Big Dog', 'Beach', 'Dear', 'Cat'],
'cluster_number' : [1, 2, 3, 3, 2, 1, 4, 2]
}
df_test = pd.DataFrame(d_test)
names = df_test["name"]
scores = pd.DataFrame(process.cdist(names, names, workers=-1), columns=names, index=names)
x, y = np.where(scores > 50)
groups = (pd.DataFrame(scores.index[x], scores.index[y])
.groupby(level=0)
.agg(frozenset)
.drop_duplicates()
.reset_index(drop=True)
.reset_index()
.explode("name"))
groups.rename(columns={'index': 'id'}, inplace=True)
groups.id+= 1
df_test = df_test.merge(groups, how="left")
</code></pre>
<p>I want to identify similar names in <code>name</code> column if those names belong to one cluster number and create unique id for them. For example <code>South Beach</code> and <code>Beach</code> belong to cluster number <code>1</code> and their similarity score is pretty high. So we associate it with unique id, say <code>1</code>. Next cluster is number <code>2</code> and three entities from <code>name</code> column belong to this cluster: <code>Dog</code>, <code>Big Dog</code> and <code>Cat</code>. <code>Dog</code> and <code>Big Dog</code> have high similarity score and their unique id will be, say <code>2</code>. For <code>Cat</code> unique id will be, say <code>3</code>. And so on.</p>
<p>Code generates expected result:</p>
<pre><code> name cluster_number id
0 South Beach 1 1
1 Dog 2 2
2 Bird 3 3
3 Ant 3 4
4 Big Dog 2 2
5 Beach 1 1
6 Dear 4 5
7 Cat 2 6
</code></pre>
<p>The code above is an efficient, vectorized method for similarity scoring. It works perfectly for small data sets, but when I try a dataframe with 1 million rows I get a <code>MemoryError</code> from <code>rapidfuzz.process.cdist(...)</code>. As mentioned in the comment section below, this function returns a matrix of len(queries) x len(choices) x size(dtype). By default this dtype is float or int32_t depending on the scorer (for the default scorer it is float). So for 1 million names, the result matrix would require about 4 terabytes of memory. My PC has 12GB of free RAM, which is nowhere near enough. Any ideas on how to avoid overloading RAM while keeping the computation vectorized?</p>
<p>Following @J.M.Arnold's solution, including his comment, the code may be rewritten as:</p>
<pre><code>d_test = {
'name' : ['South Beach', 'Dog', 'Bird', 'Ant', 'Big Dog', 'Beach', 'Dear', 'Cat'],
'cluster_number' : [1, 2, 3, 3, 2, 1, 4, 2]
}
df_test = pd.DataFrame(d_test)
df_test = df_test.sort_values(['cluster_number', 'name'])
df_test.reset_index(drop=True, inplace=True)
names = df_test["name"]
def calculate_similarity_matrix(names):
scores = pd.DataFrame(process.cdist(names, names, workers=-1), columns=names, index=names)
return scores
chunks = np.array_split(names, 1000)
_ = []
for i, chunk in enumerate(chunks):
matrix = calculate_similarity_matrix(chunk)
_.append(matrix)
finished = pd.concat(_)
x, y = np.where(finished > 50)
groups = (pd.DataFrame(finished.index[x], finished.index[y])
.groupby(level=0)
.agg(frozenset)
.drop_duplicates()
.reset_index(drop=True)
.reset_index()
.explode("name"))
groups.rename(columns={'index': 'id'}, inplace=True)
groups.id+= 1
df_test = df_test.merge(groups, how="left")
</code></pre>
<p>But it will not generate correct results:</p>
<pre><code> name cluster_number id
0 Beach 1 2
1 South Beach 1 8
2 Big Dog 2 3
3 Cat 2 5
4 Dog 2 7
5 Ant 3 1
6 Bird 3 4
7 Dear 4 6
</code></pre>
<p>Note that, e.g., <code>Dog</code> and <code>Big Dog</code> have different <code>id</code> values, but they should have the same one.</p>
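<p>A sketch of the direction I am considering (under the assumption, suggested by the expected output, that names only ever need to match within the same <code>cluster_number</code>): compute the similarity matrix per cluster instead of over the full name list, which both bounds the memory and avoids mixing names across clusters the way the chunked attempt above does:</p>
<pre><code>import numpy as np
import pandas as pd
from rapidfuzz import process

df_test["id"] = None
next_id = 1

for _, sub in df_test.groupby("cluster_number"):
    names = sub["name"].reset_index(drop=True)
    # each per-cluster matrix is far smaller than len(all names) x len(all names)
    scores = process.cdist(names, names, workers=-1)

    x, y = np.where(scores > 50)
    # for every column, collect the set of matching row positions, then
    # de-duplicate those sets to obtain the groups inside this cluster
    member_sets = pd.Series(x).groupby(pd.Series(y)).agg(frozenset).drop_duplicates()

    for members in member_sets:
        df_test.loc[sub.index[list(members)], "id"] = next_id
        next_id += 1

print(df_test)
</code></pre>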
|
<python><pandas><dataframe><rapidfuzz>
|
2022-12-13 05:49:25
| 1
| 1,298
|
illuminato
|
74,780,461
| 1,652,954
|
How to convert nested loops with bodies to parallelized iterables for multiprocessing
|
<p>I have the two nested loops below. I want to use them as iterables passed to the <code>.map</code> operator to parallelize their execution. I am familiar with the following notation:</p>
<pre><code>with PoolExec(max_workers=int(config['MULTIPROCESSING']['proceses_count']),initializer=self.initPool,initargs=(arg0,arg1,arg2,arg3,arg4,arg5,arg6,arg7,arg8,arg9,)) as GridCells10mX10mIteratorPool.__poolExec:
self.__chunkSize = PoolUtils.getChunkSizeForLenOfIterables(lenOfIterablesList=self.__maxNumOfCellsVertically*self.__maxNumOfCellsHorizontally,cpuCount=int(config['MULTIPROCESSING']['cpu_count']))
for res in GridCells10mX10mIteratorPool.__poolExec.map(self.run,[(i,j) for i in range(0,1800,10) for j in range(0,2000,10)] ,chunksize=self.__chunkSize):
</code></pre>
<p>But as shown in the code below, there are two lines of code right after the outer loop header and another two right after the inner one. How can I convert these two loops to the above-mentioned notation?</p>
<p><strong>code</strong>:</p>
<pre><code>for x in range(row,row + gVerticalStep):
if rowsCnt == gVerticalStep:
rowsCnt = 0
for y in range(col,col + gHorizontalStep):
if colsCnt == gHorizontalStep:
colsCnt = 0
</code></pre>
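<p>A minimal sketch of the direction I am considering (an assumption on my part): build the iterable with <code>itertools.product</code> and derive the <code>rowsCnt</code>/<code>colsCnt</code> bookkeeping inside the worker from the coordinates themselves, so nothing has to be tracked across iterations:</p>
<pre><code>from concurrent.futures import ProcessPoolExecutor
from itertools import product

# hypothetical stand-ins for the original loop bounds
row, col = 0, 0
gVerticalStep, gHorizontalStep = 4, 5

def run(coord):
    x, y = coord
    # what the loop bodies did with rowsCnt/colsCnt can be derived from the
    # coordinates, e.g. the position within the current step
    rows_cnt = (x - row) % gVerticalStep
    cols_cnt = (y - col) % gHorizontalStep
    return x, y, rows_cnt, cols_cnt

if __name__ == "__main__":
    coords = product(range(row, row + gVerticalStep),
                     range(col, col + gHorizontalStep))
    with ProcessPoolExecutor() as pool:
        for res in pool.map(run, coords, chunksize=2):
            print(res)
</code></pre>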
|
<python><multiprocessing><python-multiprocessing>
|
2022-12-13 05:47:47
| 2
| 11,564
|
Amrmsmb
|
74,780,293
| 12,249,019
|
AWS S3 conditions and AWS lambda suffix/prefix
|
<p>I would like to upload image files to an AWS S3 bucket, which triggers a Lambda function that modifies the images, using the Python <code>boto3</code> library and presigned URLs.<br />
And I would like to implement the following rules:</p>
<ol>
<li>Use the Content-Type condition to restrict the files to be uploaded only to image files (e.g., jpeg and png)</li>
<li>Use the suffix or prefix so that Lambda function is activated only when image files are uploaded</li>
</ol>
<p>Currently my code and settings are as follows:<br />
[Python code]</p>
<pre><code>boto3.client('s3', ...).generate_presigned_post(...,
Key='userupload/' + key,
Fields={
"Content-Type": "image/jpeg",
},
Conditions=[
["starts-with", "$Content-Type", "image/"],
]
)
</code></pre>
<p>[Lambda settings]</p>
<ul>
<li>Prefix: userupload</li>
<li>Suffix: .jpg</li>
</ul>
<p>But when I try to upload a jpeg file, the file is successfully uploaded to the expected S3 bucket (in the <code>userupload</code> folder), yet the Lambda function is not triggered.<br />
I also found that the uploaded object doesn't have the standard "Type" value, though it has the following Metadata:</p>
<ul>
<li>Type: System defined</li>
<li>Key: Content-Type</li>
<li>Value: image/jpeg</li>
</ul>
<p>Are there any good ways to obtain the expected behavior?<br />
Thank you very much for your help in advance!</p>
|
<python><amazon-web-services><amazon-s3><aws-lambda><boto3>
|
2022-12-13 05:17:50
| 1
| 389
|
yh6
|
74,780,215
| 7,984,318
|
Where are the python packages installed in docker
|
<p>I'm using docker in my Mac.</p>
<p>Dockerfile:</p>
<pre><code>#Dockerfile,Image,Container
FROM python:3.8
ADD main.py .
ADD test.py .
ADD spy511282022.csv .
RUN pip install requests pandas sqlalchemy
RUN pip list
CMD ["python","./test.py"]
</code></pre>
<p>My question is: where are the packages sqlalchemy and requests installed, i.e. in which directory? I want to modify some code in these packages, so I need to know where they have been installed.</p>
<p>Another question: can I customize the installation path and directory?</p>
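<p>A minimal sketch of how to check this from inside the container (for the official <code>python:3.8</code> base image, <code>pip</code> normally installs into <code>/usr/local/lib/python3.8/site-packages</code>, but the packages themselves can tell you):</p>
<pre><code># find_packages.py - run inside the container, e.g.:
#   docker run --rm <your-image> python find_packages.py
import requests
import sqlalchemy

# __file__ points at the installed location of each package inside the image
print(sqlalchemy.__file__)
print(requests.__file__)
</code></pre>
<p><code>pip show sqlalchemy</code> inside the container also prints a <code>Location:</code> line with the same directory.</p>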
|
<python><docker><docker-compose><dockerfile>
|
2022-12-13 05:05:23
| 1
| 4,094
|
William
|
74,780,164
| 13,285,583
|
PySpark Pandas UDF doesn't work on M1 Apple Silicon
|
<p>My goal is to use PySpark. The problem is that when I try to use a Pandas UDF, it throws an error.</p>
<p>What I did:</p>
<ol>
<li><code>pip install pyspark</code>.</li>
<li>Initialize spark</li>
</ol>
<pre><code>from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
</code></pre>
<ol start="3">
<li>Create the dataframe</li>
</ol>
<pre><code>from datetime import date, datetime

rdd = spark.sparkContext.parallelize([
(1, 2., 'string1', date(2000, 1, 1), datetime(2000, 1, 1, 12, 0)),
(2, 3., 'string2', date(2000, 2, 1), datetime(2000, 1, 2, 12, 0)),
(3, 4., 'string3', date(2000, 3, 1), datetime(2000, 1, 3, 12, 0))
])
df = spark.createDataFrame(rdd, schema=['a', 'b', 'c', 'd', 'e'])
df
</code></pre>
<ol start="4">
<li>Tried to use Pandas UDF</li>
</ol>
<pre><code>import pandas as pd
from pyspark.sql.functions import pandas_udf
@pandas_udf('long')
def pandas_plus_one(series: pd.Series) -> pd.Series:
# Simply plus one by using pandas Series.
return series + 1
df.select(pandas_plus_one(df.a)).show()
</code></pre>
<p>It throws an error.</p>
<p>The error</p>
<pre><code>22/12/13 11:51:10 ERROR Executor: Exception in task 0.0 in stage 30.0 (TID 91)
org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pandas/__init__.py", line 16, in <module>
raise ImportError(
ImportError: Unable to import required dependencies:
numpy:
IMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE!
Importing the numpy C-extensions failed. This error can happen for
many reasons, often due to issues with your setup or how NumPy was
installed.
We have compiled some common reasons and troubleshooting tips at:
https://numpy.org/devdocs/user/troubleshooting-importerror.html
Please note and check the following:
* The Python version is: Python3.10 from "/usr/local/bin/python3"
* The NumPy version is: "1.23.5"
and make sure that they are the versions you expect.
Please carefully study the documentation linked above for further help.
Original error was: dlopen(/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/numpy/core/_multiarray_umath.cpython-310-darwin.so, 0x0002): tried: '/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/numpy/core/_multiarray_umath.cpython-310-darwin.so' (mach-o file, but is an incompatible architecture (have 'arm64', need 'x86_64'))
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:559)
at org.apache.spark.sql.execution.python.PythonArrowOutput$$anon$1.read(PythonArrowOutput.scala:101)
at org.apache.spark.sql.execution.python.PythonArrowOutput$$anon$1.read(PythonArrowOutput.scala:50)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:512)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:491)
at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage2.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:760)
at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:364)
at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:890)
at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:890)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:329)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:136)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1504)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
22/12/13 11:51:10 WARN TaskSetManager: Lost task 0.0 in stage 30.0 (TID 91) (192.168.1.2 executor driver): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pandas/__init__.py", line 16, in <module>
raise ImportError(
ImportError: Unable to import required dependencies:
numpy:
IMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE!
Importing the numpy C-extensions failed. This error can happen for
many reasons, often due to issues with your setup or how NumPy was
installed.
We have compiled some common reasons and troubleshooting tips at:
https://numpy.org/devdocs/user/troubleshooting-importerror.html
Please note and check the following:
* The Python version is: Python3.10 from "/usr/local/bin/python3"
* The NumPy version is: "1.23.5"
and make sure that they are the versions you expect.
Please carefully study the documentation linked above for further help.
Original error was: dlopen(/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/numpy/core/_multiarray_umath.cpython-310-darwin.so, 0x0002): tried: '/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/numpy/core/_multiarray_umath.cpython-310-darwin.so' (mach-o file, but is an incompatible architecture (have 'arm64', need 'x86_64'))
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:559)
at org.apache.spark.sql.execution.python.PythonArrowOutput$$anon$1.read(PythonArrowOutput.scala:101)
at org.apache.spark.sql.execution.python.PythonArrowOutput$$anon$1.read(PythonArrowOutput.scala:50)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:512)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:491)
at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage2.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:760)
at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:364)
at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:890)
at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:890)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:329)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:136)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1504)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
22/12/13 11:51:10 ERROR TaskSetManager: Task 0 in stage 30.0 failed 1 times; aborting job
</code></pre>
<ol start="5">
<li>Tried to reinstall numpy and pyspark</li>
</ol>
<pre><code>pip uninstall numpy pyspark
pip install numpy pyspark
</code></pre>
<p>However, the error persisted.</p>
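<p>One thing that may be worth checking (an assumption on my side, based on the "have 'arm64', need 'x86_64'" part of the traceback): the Python worker that Spark launches may not be the same interpreter/architecture as the one whose numpy is installed. Pointing both the driver and the workers at a single interpreter before creating the session looks like this:</p>
<pre><code>import os
import sys

# make the driver and the Python workers use exactly the same interpreter,
# so the numpy build they import matches that interpreter's architecture
os.environ["PYSPARK_PYTHON"] = sys.executable
os.environ["PYSPARK_DRIVER_PYTHON"] = sys.executable

from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
</code></pre>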
|
<python><pandas><numpy><apache-spark><pyspark>
|
2022-12-13 04:58:24
| 0
| 2,173
|
Jason Rich Darmawan
|
74,780,158
| 982,786
|
ESports Match Predictor (Machine Learning)
|
<p>I think I'm starting to get the hang of some of the basics of machine learning; I've already completed three tutorial projects: a stock price predictor, a wine rating predictor, and a Moneyball (MLB) project. Now I am looking to do something a little more advanced with a personal project, but I can't seem to find information on how I would go about it.</p>
<p>I watch quite a bit of ESports and I found a game that has an extensive API with statistics of every rated match played, and I wanted to see if I could put together a match predictor.</p>
<p>The game is a 3v3 game where the users pick an Avatar (about 50 in total) and battle each other on a specific map (about 10 in total). I think the important details to accomplish this are the Avatars on each team, the position/role that each avatar plays, the map that was played, each player's personal rank, and what the final score was.</p>
<p>I pulled down 500k matches to train my data on and I created my "match objects" look like this -</p>
<pre><code>{
"map": "SomeMap",
"team1score": 1,
"team2score": 4
"team1": [
{"avatar": "whatever1", "position": "role1", "playerRank": 14},
{"avatar": "whatever2", "position": "role2", "playerRank": 10},
{"avatar": "whatever3", "position": "role4", "playerRank": 11},
],
"team2": [
{"avatar": "whatever4", "position": "role2", "playerRank": 11},
{"avatar": "whatever5", "position": "role3", "playerRank": 12},
{"avatar": "whatever6", "position": "role1", "playerRank": 11},
]
}
</code></pre>
<p>In the object above, there is the map, the team score, the avatars on each team, the role of each avatar, and the person's online rank.</p>
<p>I think where I am struggling is that there are more dimensions than in my previous projects. I originally started to code this using a flat table like this:</p>
<pre><code>avatar1, avatar1_rank, avatar1_tier, avatar2, avatar2_rank, avatar2_tier, etc
</code></pre>
<p>but I quickly discovered that, in doing so, I was giving weight to where "on the team" the avatar was (index 1, index 2, or index 3), when it shouldn't matter where on the team the avatar is, just that he is on the team. Ordering alphabetically doesn't solve this either, as there are 50 or so avatars to choose from, so an avatar is pretty much just as likely to be in position 1 as in position 3. So essentially I am looking to do an analysis on two teams of players, where the stats for the players are given and the players could appear in any order on the team (1-3).</p>
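<p>To make the last point concrete, here is a small sketch of the order-invariant encoding I am imagining (all names here are hypothetical stand-ins): one column per (side, avatar) pair holding that player's rank, 0 when the avatar was not picked, so the slot an avatar occupied on the team carries no weight:</p>
<pre><code>import pandas as pd

AVATARS = [f"whatever{i}" for i in range(1, 7)]  # stand-in for the ~50 real avatars

match = {
    "map": "SomeMap",
    "team1score": 1,
    "team2score": 4,
    "team1": [{"avatar": "whatever1", "playerRank": 14},
              {"avatar": "whatever2", "playerRank": 10},
              {"avatar": "whatever3", "playerRank": 11}],
    "team2": [{"avatar": "whatever4", "playerRank": 11},
              {"avatar": "whatever5", "playerRank": 12},
              {"avatar": "whatever6", "playerRank": 11}],
}

row = {"map": match["map"], "team1_wins": int(match["team1score"] > match["team2score"])}
for side in ("team1", "team2"):
    ranks = {p["avatar"]: p["playerRank"] for p in match[side]}
    for avatar in AVATARS:
        row[f"{side}_{avatar}_rank"] = ranks.get(avatar, 0)

print(pd.DataFrame([row]))
</code></pre>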
<p>So my question is: does anyone have any code, or know of any similar sample projects?</p>
<p>Thank you all.</p>
|
<python><python-3.x><machine-learning>
|
2022-12-13 04:57:10
| 2
| 3,599
|
Nefariis
|
74,780,128
| 4,451,521
|
A scattermapbox shows but it does not hover
|
<p>I have a dataframe df and I do</p>
<pre><code> fig.add_trace(go.Scattermapbox(
lat=one_mesh_p['lat'],
lon=one_mesh_p['lon'],
mode="markers",
marker = { 'size': 5, 'color': "blue" },
name="hello",
hoverinfo = 'text',
text=[str(one_mesh_p['id'])],
showlegend = True,
))
fig.update_layout(margin={"r": 20, "t": 40, "l": 20, "b": 20},
title={"text":file_title, "font":{"size":30}},
mapbox_style="open-street-map",
hovermode='closest',
mapbox=dict(
bearing=0,
center=go.layout.mapbox.Center(
lat=center[0],
lon=center[1]
),
pitch=0,
zoom=zoom_level
),
)
</code></pre>
<p>However, although I see the points, there is no hovering.
I have checked, and <code>one_mesh_p['id']</code> contains the correct data.</p>
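<p>One thing I am unsure about (an observation, not a confirmed fix): <code>text=[str(one_mesh_p['id'])]</code> produces a single string for the whole trace, while <code>hoverinfo='text'</code> normally expects one entry per point, i.e. a sequence whose length matches <code>lat</code>/<code>lon</code>:</p>
<pre><code>fig.add_trace(go.Scattermapbox(
    lat=one_mesh_p['lat'],
    lon=one_mesh_p['lon'],
    mode="markers",
    marker={'size': 5, 'color': "blue"},
    name="hello",
    hoverinfo='text',
    text=one_mesh_p['id'].astype(str),  # one hover label per point
    showlegend=True,
))
</code></pre>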
|
<python><plotly>
|
2022-12-13 04:54:15
| 0
| 10,576
|
KansaiRobot
|
74,780,034
| 8,303,951
|
TypeError: cannot pickle '_io.BufferedReader' object when sending e-mail with Django-mailer and Djoser
|
<h1>The problem</h1>
<p>I'm trying to send a <a href="https://djoser.readthedocs.io/en/latest/settings.html#send-activation-email" rel="nofollow noreferrer">Djoser user's activation email</a> using <a href="https://github.com/pinax/django-mailer/" rel="nofollow noreferrer">django-mailer</a>.</p>
<p>However, I receive the following error:</p>
<p><strong>TL;DR:</strong></p>
<pre><code>TypeError: cannot pickle '_io.BufferedReader' object
</code></pre>
<p><a href="https://i.sstatic.net/mSw4g.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mSw4g.png" alt="print of code snippet" /></a></p>
<p><a href="https://i.sstatic.net/xqApS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xqApS.png" alt="debug" /></a></p>
<p><strong>FULL:</strong></p>
<pre><code>Traceback (most recent call last):
File "/Users/joaoalbuquerque/pyenv/versions/3.11.0/envs/garvy/lib/python3.11/site-packages/django/core/handlers/exception.py", line 55, in inner
response = get_response(request)
^^^^^^^^^^^^^^^^^^^^^
File "/Users/joaoalbuquerque/pyenv/versions/3.11.0/envs/garvy/lib/python3.11/site-packages/django/core/handlers/base.py", line 197, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/joaoalbuquerque/pyenv/versions/3.11.0/envs/garvy/lib/python3.11/site-packages/django/views/decorators/csrf.py", line 54, in wrapped_view
return view_func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/joaoalbuquerque/pyenv/versions/3.11.0/envs/garvy/lib/python3.11/site-packages/rest_framework/viewsets.py", line 125, in view
return self.dispatch(request, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/joaoalbuquerque/pyenv/versions/3.11.0/envs/garvy/lib/python3.11/site-packages/rest_framework/views.py", line 509, in dispatch
response = self.handle_exception(exc)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/joaoalbuquerque/pyenv/versions/3.11.0/envs/garvy/lib/python3.11/site-packages/rest_framework/views.py", line 469, in handle_exception
self.raise_uncaught_exception(exc)
File "/Users/joaoalbuquerque/pyenv/versions/3.11.0/envs/garvy/lib/python3.11/site-packages/rest_framework/views.py", line 480, in raise_uncaught_exception
raise exc
File "/Users/joaoalbuquerque/pyenv/versions/3.11.0/envs/garvy/lib/python3.11/site-packages/rest_framework/views.py", line 506, in dispatch
response = handler(request, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/joaoalbuquerque/pyenv/versions/3.11.0/envs/garvy/lib/python3.11/site-packages/rest_framework/mixins.py", line 19, in create
self.perform_create(serializer)
File "/Users/joaoalbuquerque/pyenv/versions/3.11.0/envs/garvy/lib/python3.11/site-packages/djoser/views.py", line 144, in perform_create
settings.EMAIL.activation(self.request, context).send(to)
File "/Users/joaoalbuquerque/pyenv/versions/3.11.0/envs/garvy/lib/python3.11/site-packages/templated_mail/mail.py", line 78, in send
super(BaseEmailMessage, self).send(*args, **kwargs)
File "/Users/joaoalbuquerque/pyenv/versions/3.11.0/envs/garvy/lib/python3.11/site-packages/django/core/mail/message.py", line 298, in send
return self.get_connection(fail_silently).send_messages([self])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/joaoalbuquerque/pyenv/versions/3.11.0/envs/garvy/lib/python3.11/site-packages/mailer/backend.py", line 15, in send_messages
messages = Message.objects.bulk_create([
^
File "/Users/joaoalbuquerque/pyenv/versions/3.11.0/envs/garvy/lib/python3.11/site-packages/mailer/backend.py", line 16, in <listcomp>
Message(email=email) for email in email_messages
^^^^^^^^^^^^^^^^^^^^
File "/Users/joaoalbuquerque/pyenv/versions/3.11.0/envs/garvy/lib/python3.11/site-packages/django/db/models/base.py", line 554, in __init__
_setattr(self, prop, value)
File "/Users/joaoalbuquerque/pyenv/versions/3.11.0/envs/garvy/lib/python3.11/site-packages/mailer/models.py", line 150, in _set_email
self.message_data = email_to_db(val)
^^^^^^^^^^^^^^^^
File "/Users/joaoalbuquerque/pyenv/versions/3.11.0/envs/garvy/lib/python3.11/site-packages/mailer/models.py", line 94, in email_to_db
return base64_encode(pickle.dumps(email)).decode('ascii')
^^^^^^^^^^^^^^^^^^^
TypeError: cannot pickle '_io.BufferedReader' object
Internal Server Error: /api/auth/users/
Traceback (most recent call last):
File "/Users/joaoalbuquerque/pyenv/versions/3.11.0/envs/garvy/lib/python3.11/site-packages/django/core/handlers/exception.py", line 55, in inner
response = get_response(request)
^^^^^^^^^^^^^^^^^^^^^
File "/Users/joaoalbuquerque/pyenv/versions/3.11.0/envs/garvy/lib/python3.11/site-packages/django/core/handlers/base.py", line 197, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/joaoalbuquerque/pyenv/versions/3.11.0/envs/garvy/lib/python3.11/site-packages/django/views/decorators/csrf.py", line 54, in wrapped_view
return view_func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/joaoalbuquerque/pyenv/versions/3.11.0/envs/garvy/lib/python3.11/site-packages/rest_framework/viewsets.py", line 125, in view
return self.dispatch(request, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/joaoalbuquerque/pyenv/versions/3.11.0/envs/garvy/lib/python3.11/site-packages/rest_framework/views.py", line 509, in dispatch
response = self.handle_exception(exc)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/joaoalbuquerque/pyenv/versions/3.11.0/envs/garvy/lib/python3.11/site-packages/rest_framework/views.py", line 469, in handle_exception
self.raise_uncaught_exception(exc)
File "/Users/joaoalbuquerque/pyenv/versions/3.11.0/envs/garvy/lib/python3.11/site-packages/rest_framework/views.py", line 480, in raise_uncaught_exception
raise exc
File "/Users/joaoalbuquerque/pyenv/versions/3.11.0/envs/garvy/lib/python3.11/site-packages/rest_framework/views.py", line 506, in dispatch
response = handler(request, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/joaoalbuquerque/pyenv/versions/3.11.0/envs/garvy/lib/python3.11/site-packages/rest_framework/mixins.py", line 19, in create
self.perform_create(serializer)
File "/Users/joaoalbuquerque/pyenv/versions/3.11.0/envs/garvy/lib/python3.11/site-packages/djoser/views.py", line 144, in perform_create
settings.EMAIL.activation(self.request, context).send(to)
File "/Users/joaoalbuquerque/pyenv/versions/3.11.0/envs/garvy/lib/python3.11/site-packages/templated_mail/mail.py", line 78, in send
super(BaseEmailMessage, self).send(*args, **kwargs)
File "/Users/joaoalbuquerque/pyenv/versions/3.11.0/envs/garvy/lib/python3.11/site-packages/django/core/mail/message.py", line 298, in send
return self.get_connection(fail_silently).send_messages([self])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/joaoalbuquerque/pyenv/versions/3.11.0/envs/garvy/lib/python3.11/site-packages/mailer/backend.py", line 15, in send_messages
messages = Message.objects.bulk_create([
^
File "/Users/joaoalbuquerque/pyenv/versions/3.11.0/envs/garvy/lib/python3.11/site-packages/mailer/backend.py", line 16, in <listcomp>
Message(email=email) for email in email_messages
^^^^^^^^^^^^^^^^^^^^
File "/Users/joaoalbuquerque/pyenv/versions/3.11.0/envs/garvy/lib/python3.11/site-packages/django/db/models/base.py", line 554, in __init__
_setattr(self, prop, value)
File "/Users/joaoalbuquerque/pyenv/versions/3.11.0/envs/garvy/lib/python3.11/site-packages/mailer/models.py", line 150, in _set_email
self.message_data = email_to_db(val)
^^^^^^^^^^^^^^^^
File "/Users/joaoalbuquerque/pyenv/versions/3.11.0/envs/garvy/lib/python3.11/site-packages/mailer/models.py", line 94, in email_to_db
return base64_encode(pickle.dumps(email)).decode('ascii')
^^^^^^^^^^^^^^^^^^^
TypeError: cannot pickle '_io.BufferedReader' object
</code></pre>
<h2>Libraries and versions used (Python 3.11):</h2>
<pre><code> "djangorestframework==3.14.0",
"dj-database-url==1.0.0",
"psycopg2-binary==2.9.5",
"django-cors-headers==3.13.0",
"gunicorn==20.1.0",
"pyjwt==2.6.0",
"python-dotenv==0.21.0",
"djoser==2.1.0",
"djangorestframework-simplejwt==4.8.0",
"django-mailer==2.1",
</code></pre>
<h2>What I've already tried:</h2>
<p>I found a related issue on Djoser's GitHub; however, the resolution there did not work with django-mailer 2.1 in my case.</p>
<p>Related issue: <a href="https://github.com/sunscrapers/djoser/issues/609" rel="nofollow noreferrer">https://github.com/sunscrapers/djoser/issues/609</a></p>
|
<python><django><django-rest-framework><djoser><django-mailer>
|
2022-12-13 04:38:46
| 0
| 476
|
Joao Albuquerque
|
74,779,902
| 678,572
|
Any dict-like mappable objects in Python that only show preview of contents when __repr__?
|
<p>I'm looking for an object I can use to store largish dataframes and sklearn objects. Ideally I would like to store them as <code>pd.Series</code> because it has the behavior I'm looking for in that I can do the following:</p>
<ul>
<li>Get the object using some key:value pair</li>
<li>Preview truncates large outputs</li>
<li>Can reveal the keys inside easily</li>
</ul>
<p>I can use <code>pd.Series</code> objects but don't know if the practice of storing complicated objects as values is frowned upon.</p>
<p><strong>Are there any other options I can use in the builtin library, <code>pandas</code>, <code>numpy</code>, <code>scipy</code>, or <code>scikit-learn</code>?</strong></p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
x = np.arange(100)
y = list("abcde")*20
d = {"x":x, "y":y}
d
# {'x': array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99]), 'y': ['a', 'b', 'c', 'd', 'e', 'a', 'b', 'c', 'd', 'e', 'a', 'b', 'c', 'd', 'e', 'a', 'b', 'c', 'd', 'e', 'a', 'b', 'c', 'd', 'e', 'a', 'b', 'c', 'd', 'e', 'a', 'b', 'c', 'd', 'e', 'a', 'b', 'c', 'd', 'e', 'a', 'b', 'c', 'd', 'e', 'a', 'b', 'c', 'd', 'e', 'a', 'b', 'c', 'd', 'e', 'a', 'b', 'c', 'd', 'e', 'a', 'b', 'c', 'd', 'e', 'a', 'b', 'c', 'd', 'e', 'a', 'b', 'c', 'd', 'e', 'a', 'b', 'c', 'd', 'e', 'a', 'b', 'c', 'd', 'e', 'a', 'b', 'c', 'd', 'e', 'a', 'b', 'c', 'd', 'e', 'a', 'b', 'c', 'd', 'e']}
from collections import namedtuple
nt = namedtuple("Example", ["x","y"])
nt(x=x, y=y)
# Example(x=array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,
# 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33,
# 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50,
# 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67,
# 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84,
# 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99]), y=['a', 'b', 'c', 'd', 'e', 'a', 'b', 'c', 'd', 'e', 'a', 'b', 'c', 'd', 'e', 'a', 'b', 'c', 'd', 'e', 'a', 'b', 'c', 'd', 'e', 'a', 'b', 'c', 'd', 'e', 'a', 'b', 'c', 'd', 'e', 'a', 'b', 'c', 'd', 'e', 'a', 'b', 'c', 'd', 'e', 'a', 'b', 'c', 'd', 'e', 'a', 'b', 'c', 'd', 'e', 'a', 'b', 'c', 'd', 'e', 'a', 'b', 'c', 'd', 'e', 'a', 'b', 'c', 'd', 'e', 'a', 'b', 'c', 'd', 'e', 'a', 'b', 'c', 'd', 'e', 'a', 'b', 'c', 'd', 'e', 'a', 'b', 'c', 'd', 'e', 'a', 'b', 'c', 'd', 'e', 'a', 'b', 'c', 'd', 'e'])
from sklearn.utils import Bunch
b = Bunch(x=x, y=y)
b
# {'x': array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99]), 'y': ['a', 'b', 'c', 'd', 'e', 'a', 'b', 'c', 'd', 'e', 'a', 'b', 'c', 'd', 'e', 'a', 'b', 'c', 'd', 'e', 'a', 'b', 'c', 'd', 'e', 'a', 'b', 'c', 'd', 'e', 'a', 'b', 'c', 'd', 'e', 'a', 'b', 'c', 'd', 'e', 'a', 'b', 'c', 'd', 'e', 'a', 'b', 'c', 'd', 'e', 'a', 'b', 'c', 'd', 'e', 'a', 'b', 'c', 'd', 'e', 'a', 'b', 'c', 'd', 'e', 'a', 'b', 'c', 'd', 'e', 'a', 'b', 'c', 'd', 'e', 'a', 'b', 'c', 'd', 'e', 'a', 'b', 'c', 'd', 'e', 'a', 'b', 'c', 'd', 'e', 'a', 'b', 'c', 'd', 'e', 'a', 'b', 'c', 'd', 'e']}
import pandas as pd
pd.Series({"x":x, "y":y})
# x [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 7...
# y [a, b, c, d, e, a, b, c, d, e, a, b, c, d, e, a, b, c, d, e, a, b, c, d, e, a, b, c, d, e, a, b, c, d, e, a, b, c, d, e, a, b, c, d, e, a, b, c, d, e, a, b, c, d, e, a, b, c, d, e, a, b, c, d, e, a, b, c, d, e, a, b, c, d, e, a, b, c, d, e, a, b, c, d, e, a, b, c, d, e, a, b, c, d, e, a, b, c, d...
# dtype: object
</code></pre>
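<p>For context, this is roughly the behaviour I am after, sketched as a small <code>dict</code> subclass whose <code>__repr__</code> previews values via <code>reprlib</code>; I am only illustrating the truncation I want, not proposing this as the answer:</p>
<pre class="lang-py prettyprint-override"><code>import reprlib

class PreviewDict(dict):
    # A dict that truncates long values in its repr but behaves normally otherwise.
    def __repr__(self):
        short = reprlib.Repr()
        short.maxstring = 40   # truncate long strings
        short.maxother = 60    # truncate arrays, lists, estimators, ...
        items = ', '.join(f'{k!r}: {short.repr(v)}' for k, v in self.items())
        return f'{type(self).__name__}({{{items}}})'

d = PreviewDict(x=x, y=y)
d          # repr previews the values instead of dumping them in full
d['x']     # the full object is still accessible by key
list(d)    # ['x', 'y']
</code></pre>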
|
<python><pandas><dictionary><mapping><namedtuple>
|
2022-12-13 04:14:11
| 0
| 30,977
|
O.rka
|
74,779,670
| 219,153
|
How to make a title for this multi-axis matplotlib plot?
|
<p>This function:</p>
<pre><code>def plotGrid(ax, grid, text=''):
    ax.imshow(grid, cmap=cmap, norm=Normalize(vmin=0, vmax=9))
    ax.grid(True, which='both', color='lightgrey', linewidth=0.5)
    ax.set_yticks([x-0.5 for x in range(1+len(grid))])
    ax.set_xticks([x-0.5 for x in range(1+len(grid[0]))])
    ax.set_xticklabels([])
    ax.set_yticklabels([])
    ax.set_title(text)

def plotTaskGrids(task):
    nTrain = len(task['train'])
    fig, ax = plt.subplots(2, nTrain, figsize=(3*nTrain, 3*2))
    for i in range(nTrain):
        plotGrid(ax[0, i], task['train'][i]['input'], 'train input')
        plotGrid(ax[1, i], task['train'][i]['output'], 'train output')
    plt.tight_layout()
    plt.title('title')
    plt.show()
</code></pre>
<p>displays this window:</p>
<p><a href="https://i.sstatic.net/1AaTd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1AaTd.png" alt="enter image description here" /></a></p>
<p>I would like to replace <code>Figure 1</code> in the window title with <code>title</code>, but <code>plt.title('title')</code> doesn't accomplish that; instead it changes the title of one of the subplots. What is the solution?</p>
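<p>In case it helps narrow things down, these are the two calls I have come across while searching but have not yet confirmed for this case: <code>fig.suptitle</code> for a single title above all the subplots, and <code>fig.canvas.manager.set_window_title</code> for the window's title bar. A minimal sketch:</p>
<pre><code>import matplotlib.pyplot as plt

fig, ax = plt.subplots(2, 3, figsize=(9, 6))
fig.suptitle('title')                          # one title drawn above all the subplots
fig.canvas.manager.set_window_title('title')   # text shown in the window's title bar
plt.show()
</code></pre>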
|
<python><matplotlib>
|
2022-12-13 03:30:13
| 1
| 8,585
|
Paul Jurczak
|
74,779,644
| 20,762,114
|
Mapping a Python dict to a Polars series
|
<p>In Pandas we can use the <code>map</code> function to map a dict to a series to create another series with the mapped values. More generally speaking, I believe it invokes the index operator of the argument, i.e. <code>[]</code>.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
dic = { 1: 'a', 2: 'b', 3: 'c' }
pd.Series([1, 2, 3, 4]).map(dic) # returns ["a", "b", "c", NaN]
</code></pre>
<p>I haven't found a way to do so directly in Polars, but have found a few alternatives. Would any of these be the recommended way to do so, or is there a better way?</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
dic = { 1: 'a', 2: 'b', 3: 'c' }
# Approach 1 - map_elements
pl.Series([1, 2, 3, 4]).map_elements(lambda v: dic.get(v, None)) # returns ["a", "b", "c", null]
# Approach 2 - left join
(
    pl.Series([1, 2, 3, 4])
    .alias('key')
    .to_frame()
    .join(
        pl.DataFrame({
            'key': list(dic.keys()),
            'value': list(dic.values()),
        }),
        on='key', how='left',
    )['value']
) # returns ["a", "b", "c", null]
# Approach 3 - to pandas and back
pl.from_pandas(pl.Series([1, 2, 3, 4]).to_pandas().map(dic)) # returns ["a", "b", "c", null]
</code></pre>
<p>I saw <a href="https://stackoverflow.com/questions/73702931/how-to-map-a-dict-of-expressions-to-a-dataframe">this answer on mapping a dict of expressions</a>, but since it chains <code>when/then/otherwise</code> it might not work well for huge dicts.</p>
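<p>For reference, this is my reading of that answer spelled out, so the scaling concern is concrete; it is only a sketch and I have not benchmarked it on a large dict:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl

dic = { 1: 'a', 2: 'b', 3: 'c' }
df = pl.Series('key', [1, 2, 3, 4]).to_frame()

# Approach 4 - one chained when/then per dict entry, with a null fallback
items = iter(dic.items())
k, v = next(items)
expr = pl.when(pl.col('key') == k).then(pl.lit(v))
for k, v in items:
    expr = expr.when(pl.col('key') == k).then(pl.lit(v))
expr = expr.otherwise(None)

df.select(expr.alias('value'))['value'] # returns ["a", "b", "c", null]
</code></pre>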
|
<python><dataframe><python-polars>
|
2022-12-13 03:24:27
| 5
| 317
|
T.H Rice
|
74,779,376
| 656,780
|
Perl's Inline::Python Leads to Odd Error on AlmaLinux 8, but Not CentOS 7
|
<p>I'm trying to use <code>Inline::Python</code> to access the Braintree API, which isn't available in Perl directly. The code has been working for years on a CentOS 7 server, but as I move it to a new AlmaLinux 8 server, the very same code I've used for six years won't run, failing immediately when I try to import the Braintree Python module.</p>
<p>For example:</p>
<pre><code>#!/usr/bin/perl
use strict;
use warnings;
use v5.10;
use Inline 'Python' => <<'END_OF_PYTHON_CODE';
import braintree
END_OF_PYTHON_CODE
</code></pre>
<p>Results in this error:</p>
<pre><code>[tbutler@cedar tcms]# perl -MInline=info,force ~/testInline2.pl
<-----------------------Information Section----------------------------------->
Information about the processing of your Inline Python code:
Your source code needs to be compiled. I'll use this build directory:
/home/tbutler/public_html/cgi-bin/tcms/_Inline/build/testInline2_pl_4db4
and I'll install the executable as:
/home/tbutler/public_html/cgi-bin/tcms/_Inline/lib/auto/testInline2_pl_4db4/testInline2_pl_4db4.pydat
Traceback (most recent call last):
File "<string>", line 2, in <module>
ImportError: No module named braintree
Error -- py_eval raised an exception at /usr/local/lib64/perl5/Inline/Python.pm line 177.
<-----------------------End of Information Section---------------------------->
Traceback (most recent call last):
File "<string>", line 2, in <module>
ImportError: No module named braintree
Error -- py_eval raised an exception at /usr/local/lib64/perl5/Inline/Python.pm line 177.
BEGIN failed--compilation aborted at /home/tbutler/testInline2.pl line 9.
</code></pre>
<p>However that same import works just fine on the older CentOS 7 system. It also works fine if I do that import straight within Python on the new system.</p>
<p>I thought perhaps it has something to do with the version of Python on the system, so I tried running this code inline:</p>
<pre><code>import sys
print("Python version")
print (sys.version)
</code></pre>
<p>On the system where <code>Inline::Python</code> works ok with the <code>braintree</code> module, here is what it reports back:</p>
<pre><code>2.7.5 (default, Jun 28 2022, 15:30:04)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-44)]
</code></pre>
<p>On the system where it fails:</p>
<pre><code>2.7.18 (default, Oct 18 2022, 11:09:45)
[GCC 8.5.0 20210514 (Red Hat 8.5.0-15)]
</code></pre>
<p>So everything seems reasonably similar. Is there some kind of setting Inline needs in order to function properly?</p>
<p>(To answer the inevitable question: the reason I'm using Inline rather than using Python is that the rest of the code I'm using in written in Perl and am reasonably proficient at Perl programming, but not Python programming.)</p>
<p>Edit (December 12 at 8:34 PM): I realize when I try the Python code directly, I've been using Python3, but the Inline tool is using Python 2.7, as reported above. That worked just fine on the old CentOS system, but not so much on the AlmaLinux one. I think this might be resolved by using Python 3.x instead, but have not yet figured out how to tell <code>Inline::Python</code> to use Python 3.x (both python2 and python3 are installed). Here is the error when I try to run the same import of Braintree on Python 2 on the new system:</p>
<pre><code> [tbutler@cedar safari]# python2 ~/testInline.py
Traceback (most recent call last):
File "/tbutler/testInline.py", line 2, in <module>
import braintree
File "/usr/lib/python2.7/site-packages/braintree/__init__.py", line 1, in <module>
from braintree.ach_mandate import AchMandate
File "/usr/lib/python2.7/site-packages/braintree/ach_mandate.py", line 2, in <module>
from braintree.util.datetime_parser import parse_datetime
File "/usr/lib/python2.7/site-packages/braintree/util/__init__.py", line 4, in <module>
from braintree.util.http import Http
File "/usr/lib/python2.7/site-packages/braintree/util/http.py", line 3, in <module>
from base64 import encodebytes
ImportError: cannot import name encodebytes
</code></pre>
<p>So, I either need to figure out why Python 2.7 runs the code on the one system but not the other <em>or</em> get <code>Inline::Python</code> to prefer Python 3 somehow.</p>
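<p>To compare the two machines further, the next diagnostic I plan to run from inside the same <code>END_OF_PYTHON_CODE</code> heredoc is simply printing the embedded interpreter's details:</p>
<pre><code>import sys
print(sys.version)
print(sys.prefix)
print(sys.path)  # is /usr/lib/python2.7/site-packages (where braintree lives) on the path?
</code></pre>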
|
<python><perl><braintree>
|
2022-12-13 02:29:51
| 1
| 1,139
|
Timothy R. Butler
|
74,779,301
| 14,109,040
|
unable to assign free hole to a shell error when flattening polygons
|
<p>I have a list of polygons and I want to group them by the road_name property and flatten the polygons.</p>
<p>I tried the following:</p>
<pre><code>{"type": "FeatureCollection", "features": [{"type": "Feature", "properties": {"compositeid": "06000_2.5_2.78_5_B3_F0_PT3_T4", "index": 520, "road_name": "Gorge Rd"}, "geometry": {"type": "MultiPolygon", "coordinates": [[[[138.6288, -34.8485], [138.6268, -34.8502], [138.6267, -34.8503], [138.6267, -34.8504], [138.6261, -34.8521], [138.6263, -34.8525], [138.6264, -34.8539], [138.6266, -34.8541], [138.6268, -34.8547], [138.6271, -34.8555], [138.6272, -34.8557], [138.6268, -34.8564], [138.6261, -34.8575], [138.6259, -34.8582], [138.6264, -34.8593], [138.6259, -34.8601], [138.6268, -34.8596], [138.6275, -34.8605], [138.6287, -34.8611], [138.6274, -34.8624], [138.629, -34.8613], [138.63, -34.8621], [138.6312, -34.8627], [138.6313, -34.8628], [138.6318, -34.8629], [138.6332, -34.8631], [138.6334, -34.8632], [138.6354, -34.863], [138.6356, -34.8632], [138.6371, -34.8629], [138.6378, -34.8624], [138.6382, -34.8625], [138.64, -34.8614], [138.6419, -34.8613], [138.6422, -34.8614], [138.6435, -34.8618], [138.6444, -34.8627], [138.6445, -34.8628], [138.6445, -34.8629], [138.6455, -34.8637], [138.6466, -34.864], [138.647, -34.8643], [138.6473, -34.8647], [138.6472, -34.866], [138.6473, -34.8665], [138.6477, -34.8673], [138.6476, -34.8683], [138.6477, -34.8691], [138.6479, -34.8701], [138.6479, -34.8708], [138.6481, -34.8719], [138.647, -34.8733], [138.6469, -34.8737], [138.6466, -34.874], [138.6447, -34.8755], [138.6445, -34.8772], [138.6445, -34.8773], [138.6444, -34.8776], [138.6435, -34.8791], [138.6433, -34.88], [138.6444, -34.8807], [138.6444, -34.8808], [138.6444, -34.8809], [138.6444, -34.881], [138.6443, -34.8809], [138.6422, -34.8814], [138.6418, -34.8812], [138.6414, -34.8809], [138.6412, -34.8798], [138.64, -34.8794], [138.6398, -34.8792], [138.6378, -34.8792], [138.6376, -34.8792], [138.6366, -34.8791], [138.6359, -34.8788], [138.6356, -34.8788], [138.6336, -34.8789], [138.6334, -34.8788], [138.6316, -34.8791], [138.6312, -34.8791], [138.6298, -34.8809], [138.629, -34.8814], [138.6275, -34.8827], [138.6268, -34.8835], [138.6265, -34.8845], [138.6264, -34.8848], [138.626, -34.8862], [138.6258, -34.8871], [138.626, -34.888], [138.6261, -34.8887], [138.6265, -34.8898], [138.6254, -34.8911], [138.6268, -34.89], [138.6285, -34.8903], [138.6286, -34.8916], [138.6274, -34.893], [138.629, -34.8924], [138.63, -34.8927], [138.6312, -34.8934], [138.6296, -34.8948], [138.6291, -34.8952], [138.631, -34.8955], [138.6312, -34.8957], [138.6315, -34.8968], [138.6314, -34.897], [138.6312, -34.8973], [138.6295, -34.8988], [138.629, -34.8993], [138.6274, -34.9006], [138.6275, -34.9019], [138.6275, -34.9024], [138.627, -34.9041], [138.6269, -34.9042], [138.6268, -34.9043], [138.6255, -34.906], [138.6246, -34.9076], [138.6245, -34.9078], [138.6245, -34.908], [138.6246, -34.9092], [138.6248, -34.9095], [138.6249, -34.9096], [138.6246, -34.9099], [138.6242, -34.91], [138.6225, -34.9114], [138.6224, -34.9114], [138.6223, -34.9115], [138.6215, -34.9132], [138.6212, -34.9142], [138.6219, -34.915], [138.6217, -34.9156], [138.6225, -34.9167], [138.6232, -34.9162], [138.6246, -34.9153], [138.6257, -34.916], [138.6261, -34.9168], [138.6266, -34.9171], [138.6268, -34.9172], [138.6284, -34.9173], [138.629, -34.9172], [138.6306, -34.9174], [138.6312, -34.9178], [138.6326, -34.9175], [138.6334, -34.9182], [138.6339, -34.9182], [138.6356, -34.9179], [138.6365, -34.9179], [138.6374, -34.9186], [138.6377, -34.9187], [138.6378, -34.9188], [138.638, -34.9186], [138.638, -34.9185], [138.6387, -34.9168], [138.64, 
-34.9165], [138.6408, -34.9162], [138.6422, -34.9158], [138.6439, -34.915], [138.6443, -34.9133], [138.6443, -34.9132], [138.6444, -34.9132], [138.6444, -34.9132], [138.6445, -34.9132], [138.6449, -34.9146], [138.6446, -34.915], [138.6449, -34.9164], [138.6466, -34.9165], [138.6469, -34.9166], [138.6488, -34.9167], [138.6504, -34.9155], [138.6498, -34.9168], [138.6488, -34.9175], [138.6475, -34.9179], [138.6466, -34.9181], [138.6459, -34.9186], [138.6444, -34.9191], [138.6436, -34.9193], [138.6422, -34.9197], [138.6408, -34.9198], [138.64, -34.9196], [138.6389, -34.9204], [138.6391, -34.9212], [138.6396, -34.9222], [138.6384, -34.9235], [138.64, -34.9224], [138.6417, -34.9226], [138.6422, -34.9227], [138.6438, -34.9227], [138.6444, -34.9227], [138.6454, -34.9232], [138.6466, -34.9237], [138.6469, -34.9238], [138.6488, -34.924], [138.6489, -34.9239], [138.651, -34.9236], [138.6529, -34.9222], [138.6531, -34.9222], [138.6532, -34.9222], [138.6533, -34.9222], [138.6537, -34.9236], [138.6541, -34.924], [138.6544, -34.9248], [138.6553, -34.925], [138.656, -34.9253], [138.6575, -34.9254], [138.6579, -34.9255], [138.6591, -34.9258], [138.6578, -34.9274], [138.6597, -34.926], [138.6618, -34.926], [138.6619, -34.926], [138.6626, -34.9258], [138.6641, -34.9255], [138.6644, -34.9256], [138.6648, -34.9258], [138.6651, -34.9268], [138.665, -34.9276], [138.6653, -34.9284], [138.6657, -34.9294], [138.6658, -34.9298], [138.6662, -34.9312], [138.6662, -34.9313], [138.6663, -34.933], [138.6649, -34.9342], [138.6663, -34.933], [138.6677, -34.9336], [138.668, -34.9348], [138.6678, -34.9354], [138.6676, -34.9366], [138.6663, -34.9383], [138.666, -34.9384], [138.6647, -34.9397], [138.6663, -34.9385], [138.6685, -34.9384], [138.6685, -34.9395], [138.6687, -34.9384], [138.6707, -34.937], [138.6728, -34.9366], [138.6727, -34.9349], [138.6707, -34.9351], [138.6706, -34.9349], [138.6706, -34.9348], [138.67, -34.9336], [138.6702, -34.933], [138.6707, -34.9319], [138.6708, -34.9312], [138.6708, -34.9311], [138.6707, -34.9308], [138.6702, -34.9298], [138.67, -34.9294], [138.6707, -34.9284], [138.6725, -34.9279], [138.6729, -34.9277], [138.675, -34.9277], [138.6751, -34.9277], [138.6765, -34.9282], [138.6773, -34.9284], [138.6781, -34.9276], [138.6795, -34.9264], [138.6803, -34.9258], [138.6816, -34.9251], [138.6828, -34.9249], [138.6838, -34.9246], [138.6851, -34.924], [138.6856, -34.9225], [138.6859, -34.9222], [138.686, -34.922], [138.6866, -34.9218], [138.6882, -34.9207], [138.6898, -34.9204], [138.6896, -34.9193], [138.6882, -34.9187], [138.6882, -34.9187], [138.6882, -34.9186], [138.6876, -34.9174], [138.6873, -34.9168], [138.6882, -34.9154], [138.6896, -34.9157], [138.6904, -34.9162], [138.6921, -34.9154], [138.6926, -34.9153], [138.6928, -34.915], [138.6927, -34.9149], [138.6926, -34.9147], [138.6916, -34.914], [138.6914, -34.9132], [138.6908, -34.9129], [138.6904, -34.9121], [138.69, -34.9117], [138.69, -34.9114], [138.6904, -34.911], [138.6915, -34.9096], [138.6926, -34.9085], [138.6933, -34.9078], [138.6948, -34.9063], [138.6963, -34.9066], [138.697, -34.9066], [138.6981, -34.9069], [138.6992, -34.9074], [138.7004, -34.9068], [138.7002, -34.9078], [138.701, -34.9081], [138.7014, -34.9084], [138.7024, -34.9088], [138.7036, -34.9094], [138.7054, -34.9078], [138.7058, -34.9073], [138.7064, -34.906], [138.7061, -34.9057], [138.7058, -34.9053], [138.7047, -34.9051], [138.7045, -34.9042], [138.7047, -34.9033], [138.7049, -34.9024], [138.7041, -34.902], [138.7036, -34.9011], [138.7032, -34.901], [138.7029, 
-34.9006], [138.7036, -34.8997], [138.7039, -34.8988], [138.7037, -34.8987], [138.7036, -34.8986], [138.7023, -34.8981], [138.7014, -34.8975], [138.7006, -34.8977], [138.7005, -34.897], [138.7014, -34.8963], [138.7021, -34.8952], [138.7036, -34.894], [138.7043, -34.8934], [138.7058, -34.8918], [138.7059, -34.8916], [138.7061, -34.8914], [138.7069, -34.8898], [138.708, -34.889], [138.7089, -34.889], [138.7102, -34.8891], [138.7114, -34.888], [138.7109, -34.8874], [138.7102, -34.8864], [138.71, -34.8864], [138.7098, -34.8862], [138.7083, -34.886], [138.708, -34.8858], [138.7069, -34.8853], [138.7063, -34.8845], [138.7077, -34.8828], [138.7079, -34.8827], [138.708, -34.8824], [138.7086, -34.8809], [138.7102, -34.8801], [138.711, -34.8791], [138.7117, -34.8778], [138.7118, -34.8773], [138.7123, -34.8768], [138.7145, -34.8755], [138.7145, -34.8755], [138.7145, -34.8755], [138.7145, -34.8754], [138.7147, -34.8737], [138.7167, -34.8726], [138.7188, -34.872], [138.7189, -34.8719], [138.719, -34.8719], [138.719, -34.8718], [138.7189, -34.8716], [138.7182, -34.8707], [138.7182, -34.8701], [138.7185, -34.8686], [138.7186, -34.8683], [138.7189, -34.8673], [138.7203, -34.8672], [138.7211, -34.8666], [138.7213, -34.8665], [138.7215, -34.8661], [138.722, -34.8647], [138.7233, -34.8638], [138.7245, -34.8637], [138.7255, -34.8637], [138.7261, -34.8629], [138.7258, -34.8626], [138.7255, -34.8624], [138.724, -34.8623], [138.7233, -34.8614], [138.7231, -34.8613], [138.723, -34.8611], [138.7221, -34.8602], [138.722, -34.8593], [138.7216, -34.8589], [138.7211, -34.8585], [138.7201, -34.8583], [138.7197, -34.8575], [138.7193, -34.8572], [138.7189, -34.8569], [138.7178, -34.8566], [138.7167, -34.8564], [138.7159, -34.8564], [138.7145, -34.8561], [138.7136, -34.8564], [138.7123, -34.8568], [138.7114, -34.8564], [138.7114, -34.8557], [138.7119, -34.8542], [138.7102, -34.8539], [138.71, -34.854], [138.7098, -34.8539], [138.7102, -34.8532], [138.7112, -34.8521], [138.7108, -34.8515], [138.7102, -34.8511], [138.7098, -34.8506], [138.7095, -34.8503], [138.7095, -34.849], [138.7094, -34.8485], [138.7092, -34.8475], [138.7087, -34.8467], [138.7086, -34.8462], [138.7083, -34.8449], [138.7102, -34.8439], [138.711, -34.8431], [138.7113, -34.8421], [138.7112, -34.8413], [138.711, -34.8406], [138.7114, -34.8395], [138.7116, -34.8383], [138.7115, -34.8377], [138.711, -34.837], [138.7102, -34.8363], [138.7099, -34.8361], [138.7098, -34.8359], [138.7095, -34.8347], [138.7091, -34.8341], [138.7102, -34.8337], [138.7106, -34.8337], [138.711, -34.8341], [138.7106, -34.8355], [138.7123, -34.8348], [138.7133, -34.8351], [138.7141, -34.8359], [138.7131, -34.8371], [138.7145, -34.8376], [138.7154, -34.8359], [138.7167, -34.8345], [138.7181, -34.8348], [138.7189, -34.8345], [138.7207, -34.8341], [138.7211, -34.834], [138.7231, -34.8323], [138.7228, -34.8309], [138.7228, -34.8305], [138.7222, -34.8296], [138.7219, -34.8287], [138.7221, -34.8279], [138.7219, -34.8269], [138.7217, -34.8264], [138.7213, -34.8251], [138.7229, -34.8236], [138.7211, -34.8247], [138.7205, -34.8238], [138.7203, -34.8233], [138.7203, -34.8222], [138.7202, -34.8215], [138.7199, -34.8207], [138.7191, -34.8197], [138.7211, -34.8179], [138.7189, -34.8186], [138.7186, -34.8197], [138.7174, -34.8209], [138.7168, -34.8215], [138.7173, -34.8228], [138.7173, -34.8233], [138.7172, -34.8247], [138.7171, -34.8251], [138.7173, -34.8264], [138.7173, -34.8269], [138.7168, -34.8286], [138.7169, -34.8287], [138.7167, -34.8288], [138.7167, -34.8288], [138.7145, -34.8291], 
[138.7133, -34.8305], [138.7123, -34.8309], [138.7119, -34.8308], [138.7102, -34.8309], [138.7096, -34.831], [138.708, -34.8309], [138.7073, -34.831], [138.7058, -34.831], [138.7051, -34.8311], [138.7036, -34.8315], [138.7022, -34.8316], [138.7014, -34.8317], [138.7005, -34.8312], [138.6992, -34.8305], [138.6992, -34.8305], [138.6992, -34.8305], [138.6989, -34.8289], [138.6987, -34.8287], [138.6989, -34.8271], [138.699, -34.8269], [138.6991, -34.8252], [138.6991, -34.8251], [138.6992, -34.825], [138.6993, -34.825], [138.7014, -34.8247], [138.7021, -34.8245], [138.7035, -34.8251], [138.7035, -34.8251], [138.7036, -34.8251], [138.7036, -34.8251], [138.7058, -34.8235], [138.7075, -34.8233], [138.7064, -34.8228], [138.7073, -34.8215], [138.708, -34.8207], [138.7086, -34.8197], [138.7085, -34.8193], [138.708, -34.8185], [138.7076, -34.8182], [138.7075, -34.8179], [138.7078, -34.8162], [138.7078, -34.8161], [138.7077, -34.8145], [138.7058, -34.8146], [138.7047, -34.8152], [138.7036, -34.8148], [138.703, -34.8161], [138.7031, -34.8165], [138.703, -34.8179], [138.7014, -34.8192], [138.7004, -34.8187], [138.6992, -34.8181], [138.699, -34.818], [138.699, -34.8179], [138.6989, -34.8163], [138.6989, -34.8161], [138.698, -34.8153], [138.697, -34.8148], [138.6965, -34.8147], [138.6965, -34.8143], [138.6954, -34.8138], [138.6963, -34.8125], [138.697, -34.8117], [138.6983, -34.8114], [138.6992, -34.8112], [138.701, -34.8107], [138.7, -34.8101], [138.7002, -34.8089], [138.7, -34.8082], [138.6992, -34.8079], [138.6986, -34.8076], [138.697, -34.8077], [138.6948, -34.8089], [138.6948, -34.8089], [138.6948, -34.8089], [138.6926, -34.8099], [138.6915, -34.8107], [138.6915, -34.8116], [138.6912, -34.8125], [138.6912, -34.8137], [138.6914, -34.8143], [138.6911, -34.8156], [138.6907, -34.8161], [138.6904, -34.8175], [138.69, -34.8164], [138.6898, -34.8161], [138.6894, -34.8151], [138.6882, -34.8144], [138.6881, -34.8144], [138.6881, -34.8143], [138.6882, -34.814], [138.6886, -34.8125], [138.6886, -34.8122], [138.6882, -34.8119], [138.6875, -34.8113], [138.6867, -34.8107], [138.6866, -34.8102], [138.6861, -34.8089], [138.6861, -34.8089], [138.6865, -34.8071], [138.6866, -34.8066], [138.686, -34.8069], [138.6844, -34.8071], [138.6855, -34.8075], [138.686, -34.8089], [138.686, -34.809], [138.6856, -34.8107], [138.6848, -34.8117], [138.6841, -34.8125], [138.6838, -34.8131], [138.6835, -34.8143], [138.6818, -34.816], [138.6838, -34.8152], [138.6843, -34.8158], [138.6844, -34.8161], [138.6838, -34.8165], [138.6824, -34.8179], [138.6821, -34.8193], [138.6821, -34.8197], [138.682, -34.8213], [138.6819, -34.8215], [138.6816, -34.8218], [138.6799, -34.8233], [138.6802, -34.8245], [138.6805, -34.8251], [138.6805, -34.826], [138.6813, -34.8269], [138.6795, -34.8282], [138.679, -34.8287], [138.6788, -34.8292], [138.6795, -34.8292], [138.6807, -34.8295], [138.6816, -34.8299], [138.6819, -34.8303], [138.6819, -34.8305], [138.6816, -34.8321], [138.6816, -34.8323], [138.6795, -34.8336], [138.6778, -34.8341], [138.6773, -34.8342], [138.6756, -34.8359], [138.6751, -34.8363], [138.6737, -34.837], [138.6735, -34.8359], [138.6751, -34.8347], [138.6765, -34.8341], [138.6755, -34.8338], [138.6751, -34.8337], [138.6733, -34.8337], [138.6729, -34.8337], [138.6718, -34.8341], [138.6714, -34.8353], [138.6715, -34.8359], [138.6707, -34.8361], [138.6705, -34.8361], [138.6685, -34.8364], [138.668, -34.8363], [138.6663, -34.8359], [138.6662, -34.8359], [138.6653, -34.8359], [138.6642, -34.8358], [138.6641, -34.8351], [138.6637, -34.8344], 
[138.6639, -34.8341], [138.6635, -34.8328], [138.6627, -34.8323], [138.6623, -34.8319], [138.662, -34.8305], [138.662, -34.8304], [138.6619, -34.8298], [138.6618, -34.8305], [138.6616, -34.8307], [138.661, -34.8323], [138.6597, -34.8329], [138.6591, -34.8328], [138.6575, -34.8333], [138.6562, -34.8334], [138.6553, -34.8334], [138.654, -34.8334], [138.6531, -34.8332], [138.651, -34.834], [138.651, -34.8327], [138.6509, -34.8341], [138.6506, -34.8343], [138.649, -34.8359], [138.6488, -34.8377], [138.6488, -34.8377], [138.6495, -34.8389], [138.6497, -34.8395], [138.6492, -34.8409], [138.6502, -34.8413], [138.6488, -34.8416], [138.6484, -34.8416], [138.6484, -34.8413], [138.6479, -34.8402], [138.6466, -34.84], [138.645, -34.8408], [138.6444, -34.841], [138.6437, -34.8413], [138.6442, -34.8415], [138.6441, -34.8431], [138.6429, -34.8443], [138.6444, -34.8441], [138.6446, -34.8447], [138.645, -34.8449], [138.646, -34.8453], [138.6466, -34.8455], [138.6478, -34.8456], [138.648, -34.8467], [138.6479, -34.8474], [138.648, -34.8485], [138.6466, -34.8498], [138.6449, -34.8498], [138.6444, -34.8496], [138.6426, -34.8503], [138.6422, -34.8505], [138.6401, -34.852], [138.64, -34.8519], [138.6398, -34.8504], [138.6398, -34.8503], [138.6395, -34.8489], [138.6378, -34.8493], [138.637, -34.8492], [138.6356, -34.8488], [138.635, -34.849], [138.6334, -34.849], [138.6326, -34.8491], [138.6312, -34.849], [138.6302, -34.8493], [138.6296, -34.8485], [138.6302, -34.8475], [138.629, -34.8471], [138.6288, -34.8485]], [[138.6575, -34.8395], [138.6575, -34.8395], [138.6568, -34.8383], [138.6566, -34.8377], [138.6575, -34.837], [138.6586, -34.8368], [138.6597, -34.8371], [138.6608, -34.8368], [138.6605, -34.8377], [138.6597, -34.8388], [138.6577, -34.8395], [138.6575, -34.8395], [138.6575, -34.8395]], [[138.6799, -34.8553], [138.6796, -34.8539], [138.6795, -34.8538], [138.6796, -34.8521], [138.6797, -34.8519], [138.6797, -34.8503], [138.6816, -34.8493], [138.6826, -34.8485], [138.6822, -34.848], [138.6816, -34.8468], [138.6816, -34.8467], [138.6816, -34.8467], [138.6816, -34.846], [138.6818, -34.8449], [138.6818, -34.8447], [138.6816, -34.8447], [138.6811, -34.8435], [138.6805, -34.8431], [138.6805, -34.8423], [138.6804, -34.8413], [138.6816, -34.8402], [138.6826, -34.8395], [138.6838, -34.8388], [138.6846, -34.8389], [138.686, -34.8392], [138.6862, -34.8394], [138.6864, -34.8395], [138.6864, -34.841], [138.6882, -34.8405], [138.6898, -34.8395], [138.6904, -34.8392], [138.6907, -34.8393], [138.6926, -34.8393], [138.693, -34.8392], [138.6948, -34.8388], [138.696, -34.8377], [138.697, -34.8371], [138.6987, -34.8363], [138.6992, -34.8363], [138.6996, -34.8359], [138.7008, -34.8346], [138.7013, -34.8341], [138.7014, -34.8339], [138.7016, -34.8339], [138.7036, -34.834], [138.7036, -34.834], [138.7036, -34.8341], [138.7043, -34.8353], [138.7041, -34.8359], [138.704, -34.8374], [138.704, -34.8377], [138.7039, -34.8392], [138.7039, -34.8395], [138.7036, -34.8397], [138.7026, -34.8403], [138.7014, -34.841], [138.7001, -34.8406], [138.6992, -34.8402], [138.698, -34.8413], [138.697, -34.842], [138.6953, -34.8431], [138.696, -34.8439], [138.6962, -34.8449], [138.6964, -34.8454], [138.697, -34.846], [138.6974, -34.8464], [138.6977, -34.8467], [138.697, -34.8472], [138.6951, -34.8485], [138.6948, -34.8485], [138.6947, -34.8485], [138.6946, -34.8485], [138.6937, -34.8476], [138.6926, -34.847], [138.6925, -34.8485], [138.6925, -34.8486], [138.6925, -34.8503], [138.6925, -34.8504], [138.6926, -34.8521], [138.6926, -34.8521], [138.6926, 
-34.8525], [138.6929, -34.8536], [138.6929, -34.8539], [138.6926, -34.8541], [138.6907, -34.8557], [138.6904, -34.8558], [138.6902, -34.8558], [138.6894, -34.8557], [138.6884, -34.8556], [138.6882, -34.8555], [138.6865, -34.8553], [138.686, -34.8552], [138.6845, -34.8551], [138.6838, -34.8551], [138.6822, -34.8552], [138.6816, -34.8552], [138.6799, -34.8553]], [[138.6281, -34.914], [138.6287, -34.9132], [138.629, -34.9128], [138.6307, -34.9114], [138.6312, -34.911], [138.6334, -34.9097], [138.6334, -34.9097], [138.6338, -34.9111], [138.6345, -34.9114], [138.6334, -34.9119], [138.6318, -34.9127], [138.6312, -34.9129], [138.6308, -34.9132], [138.629, -34.9139], [138.6281, -34.914]], [[138.6902, -34.908], [138.6903, -34.9078], [138.6904, -34.9077], [138.6906, -34.9077], [138.6907, -34.9078], [138.6904, -34.908], [138.6902, -34.908]], [[138.668, -34.8417], [138.6679, -34.8413], [138.6685, -34.8409], [138.6698, -34.8395], [138.6707, -34.839], [138.6719, -34.8385], [138.6729, -34.8387], [138.6732, -34.8393], [138.6731, -34.8395], [138.6729, -34.84], [138.6718, -34.8404], [138.6707, -34.8405], [138.6693, -34.8413], [138.6685, -34.8418], [138.668, -34.8417]]], [[[138.5991, -34.927], [138.6005, -34.9263], [138.6015, -34.9268], [138.6027, -34.927], [138.6031, -34.9273], [138.6042, -34.9276], [138.6047, -34.9278], [138.6049, -34.928], [138.6065, -34.9281], [138.6071, -34.9281], [138.6091, -34.9276], [138.6093, -34.9276], [138.6113, -34.9258], [138.6115, -34.9257], [138.6134, -34.924], [138.6137, -34.9236], [138.6146, -34.9233], [138.6159, -34.9228], [138.6163, -34.9222], [138.6163, -34.9219], [138.6159, -34.9204], [138.6161, -34.9202], [138.616, -34.9186], [138.616, -34.9185], [138.6159, -34.9186], [138.6157, -34.9186], [138.6137, -34.9192], [138.6136, -34.9187], [138.6131, -34.9186], [138.6122, -34.918], [138.6115, -34.9172], [138.6113, -34.917], [138.6111, -34.9168], [138.6114, -34.9151], [138.6093, -34.9159], [138.6085, -34.9157], [138.6071, -34.9152], [138.6067, -34.9154], [138.6065, -34.915], [138.6053, -34.9147], [138.6056, -34.9132], [138.6058, -34.9125], [138.6049, -34.9124], [138.6038, -34.9124], [138.6027, -34.9121], [138.6017, -34.9132], [138.601, -34.9146], [138.6013, -34.915], [138.6005, -34.9168], [138.6027, -34.9155], [138.6041, -34.9157], [138.6038, -34.9168], [138.6027, -34.9178], [138.6018, -34.9186], [138.6012, -34.9199], [138.6009, -34.9204], [138.6005, -34.9214], [138.599, -34.9222], [138.6001, -34.9226], [138.5994, -34.924], [138.5996, -34.9248], [138.6001, -34.9258], [138.5991, -34.927]]], [[[138.6432, -34.8368], [138.6444, -34.8375], [138.645, -34.8372], [138.6466, -34.8365], [138.6479, -34.8359], [138.647, -34.8355], [138.6466, -34.8354], [138.6455, -34.8349], [138.6444, -34.8348], [138.6428, -34.8359], [138.6432, -34.8368]]], [[[138.6334, -34.8683], [138.6334, -34.8695], [138.6335, -34.8683], [138.6335, -34.8682], [138.6334, -34.8665], [138.6334, -34.8664], [138.6334, -34.8665], [138.6334, -34.8665], [138.6334, -34.8665], [138.6333, -34.8683], [138.6334, -34.8683]]], [[[138.7081, -34.816], [138.7102, -34.8154], [138.7112, -34.8143], [138.7123, -34.8125], [138.7118, -34.8125], [138.7105, -34.8122], [138.7102, -34.8124], [138.71, -34.8125], [138.7095, -34.813], [138.7083, -34.8143], [138.7081, -34.816]]], [[[138.5954, -34.9228], [138.5961, -34.9226], [138.5978, -34.9222], [138.5983, -34.9218], [138.5995, -34.9204], [138.5989, -34.92], [138.5983, -34.9192], [138.5978, -34.9204], [138.5961, -34.9218], [138.5954, -34.9222], [138.5954, -34.9228]]], [[[138.6465, -34.8305], 
[138.6466, -34.8305], [138.6466, -34.8305], [138.6466, -34.8305], [138.6466, -34.8304], [138.6465, -34.8305], [138.6465, -34.8305]]], [[[138.7057, -34.7999], [138.7058, -34.8001], [138.7058, -34.7999], [138.7058, -34.7999], [138.7058, -34.7999], [138.7057, -34.7999], [138.7057, -34.7999]]], [[[138.7182, -34.8167], [138.7189, -34.8177], [138.7193, -34.8161], [138.7203, -34.815], [138.7189, -34.8156], [138.7178, -34.8161], [138.7182, -34.8167]]], [[[138.6023, -34.91], [138.6027, -34.9098], [138.6028, -34.9096], [138.6035, -34.909], [138.6027, -34.9096], [138.6025, -34.9096], [138.6023, -34.91]]]]}}]}
import geopandas as gpd
gdf = gpd.GeoDataFrame.from_features(polygons_list['features'])
gdf.geometry.unary_union
</code></pre>
<p>This gives me the following error: <code>TopologyException: unable to assign free hole to a shell at 138.63339999999999 -34.869500000000002</code></p>
<p>Any ideas on how I can fix this or work around it?</p>
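<p>One workaround I am considering (untested on this data, so treat it as a sketch) is repairing the geometries before merging, either with <code>make_valid</code> from Shapely 1.8+ or the older zero-width buffer trick, and then dissolving by <code>road_name</code>:</p>
<pre><code>import geopandas as gpd
from shapely.validation import make_valid  # requires shapely &gt;= 1.8

gdf = gpd.GeoDataFrame.from_features(polygons_list['features'])

# repair self-intersections / misassigned holes before merging
gdf['geometry'] = gdf.geometry.apply(make_valid)
# alternative for older shapely versions:
# gdf['geometry'] = gdf.geometry.buffer(0)

merged = gdf.dissolve(by='road_name')  # one (multi)polygon per road_name
</code></pre>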
|
<python><geopandas><shapely>
|
2022-12-13 02:18:08
| 1
| 712
|
z star
|
74,779,287
| 8,124,392
|
How to access a Google Form from the Google API?
|
<p>The following code accesses Google Sheets in my account:</p>
<pre><code>import gspread, json
import pandas as pd
from google.auth import default
from google.colab import auth
auth.authenticate_user()
creds, _ = default()
gc = gspread.authorize(creds)
SCOPE = ["https://spreadsheets.google.com/feeds"]
SECRETS_FILE = "file.json"
SPREADSHEET = "File sheet"
# Based on docs here - http://gspread.readthedocs.org/en/latest/oauth2.html
# Load in the secret JSON key (must be a service account)
json_key = json.load(open(SECRETS_FILE))
# Authenticate using the signed key
#credentials = SignedJwtAssertionCredentials(json_key['client_email'], json_key['private_key'], SCOPE)
gc = gspread.authorize(creds)
print("The following sheets are available")
for sheet in gc.openall():
    print("{} - {}".format(sheet.title, sheet.id))
# Open up the workbook based on the spreadsheet name
workbook = gc.open(SPREADSHEET)
# Get the first sheet
sheet = workbook.sheet1
# Extract all data into a dataframe
data = pd.DataFrame(sheet.get_all_records())
column_names = {}
data.rename(columns=column_names, inplace=True)
# data.timestamp = pd.to_datetime(data.timestamp)
print(data.head())
</code></pre>
<p>I have a Google Form in this account as well. How can I access the Google Form's responses and return them as a Pandas dataframe, as I am doing here for the spreadsheet?</p>
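<p>In case it clarifies what I am after, this is the rough shape I imagine, based on the Google Forms API (v1) and <code>google-api-python-client</code>; <code>FORM_ID</code> is a placeholder and I have not verified the required scopes or field names:</p>
<pre><code>from googleapiclient.discovery import build
import pandas as pd

FORM_ID = 'your-form-id-here'  # placeholder

forms_service = build('forms', 'v1', credentials=creds)
result = forms_service.forms().responses().list(formId=FORM_ID).execute()

# flatten the nested response records into a dataframe
responses = pd.json_normalize(result.get('responses', []))
print(responses.head())
</code></pre>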
|
<python><pandas><google-sheets><google-api><google-forms>
|
2022-12-13 02:14:15
| 0
| 3,203
|
mchd
|
74,779,215
| 904,050
|
Building Python against Openssl 3.0 in nonstandard location
|
<p>I have successfully source compiled OpenSSL 3.0 and installed it in <code>/some/dir</code> via the following sequence:</p>
<pre><code>$ wget https://github.com/openssl/openssl/archive/refs/tags/openssl-3.0.7.tar.gz
$ tar -xvf openssl-3.0.7.tar.gz
$ cd openssl-openssl-3.0.7/
$ ./Configure CC=clang --prefix=/some/dir --openssldir=/some/dir '-Wl,-rpath,$(LIBRPATH)'
</code></pre>
<p>I then try to build Python3.11 on top of it via:</p>
<pre><code>$ wget https://www.python.org/ftp/python/3.11.1/Python-3.11.1.tgz
$ tar -xvf Python-3.11.1.tgz
$ cd Python-3.11.1/
$ ./configure CC=clang --with-openssl-rpath=auto --with-openssl=/some/dir
checking for openssl/ssl.h in /some/dir... yes
checking whether compiling and linking against OpenSSL works... yes
checking for --with-openssl-rpath... auto
checking whether OpenSSL provides required ssl module APIs... no
checking whether OpenSSL provides required hashlib module APIs... no
checking for --with-ssl-default-suites... python
checking for --with-builtin-hashlib-hashes... md5,sha1,sha256,sha512,sha3,blake2
...
checking for stdlib extension module _md5... yes
checking for stdlib extension module _sha1... yes
checking for stdlib extension module _sha256... yes
checking for stdlib extension module _sha512... yes
checking for stdlib extension module _sha3... yes
</code></pre>
<p>However, when I build, I get the following error:</p>
<pre><code>$ make
...
The necessary bits to build these optional modules were not found:
_hashlib _ssl _tkinter
To find the necessary bits, look in setup.py in detect_modules() for the module's name.
Could not build the ssl module!
Python requires a OpenSSL 1.1.1 or newer
Custom linker flags may require --with-openssl-rpath=auto
</code></pre>
<p>(FWIW, I have tried with and without <code>--with-openssl-rpath=auto</code>. I have also seen that <a href="https://github.com/python/cpython/issues/83001" rel="nofollow noreferrer">python seems to support it</a>.)</p>
<p>How can I build this successfully with OpenSSL 3.0? OpenSSL 1.1.1 does not emit this warning.</p>
<p>As an aside, <code>hashlib</code> seems to work anyway:</p>
<pre class="lang-py prettyprint-override"><code>>>> import hashlib
>>> m = hashlib.sha256()
>>> m.update(b"Nobody inspects")
>>> m.digest()
b'\xe7\xa3\xf8\x08\xcb\x06\x87\xfd6`\xe9V\xa5\xdf\x0f\x00\xe2>\xda\xc5e\x07i\xec5N\xe6p\xb6X\x85\x8c'
</code></pre>
<p>but <code>ssl</code> does not:</p>
<pre class="lang-py prettyprint-override"><code>>>> import ssl
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/some/dir/Python-3.11.1/Lib/ssl.py", line 100, in <module>
import _ssl # if we can't import it, let the error propagate
^^^^^^^^^^^
ModuleNotFoundError: No module named '_ssl'
</code></pre>
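<p>For reference, once the <code>_ssl</code> module does build, this is the quick check I intend to run to confirm the interpreter really linked against the OpenSSL 3.0 in <code>/some/dir</code>:</p>
<pre class="lang-py prettyprint-override"><code>import ssl
print(ssl.OPENSSL_VERSION)         # e.g. a string starting with 'OpenSSL 3.0.7'
print(ssl.OPENSSL_VERSION_NUMBER)  # numeric form of the linked library version
</code></pre>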
|
<python><openssl>
|
2022-12-13 02:01:03
| 0
| 5,311
|
user14717
|
74,779,179
| 3,102,471
|
Python CSV returning individual characters, expecting strings
|
<p>This program uses Python's CSV module to process a stream containing a CR/LF-delimited list of comma-separated values (CSV). Instead of getting a list of strings, one string per field between the delimiters (the commas), I'm getting lists of single characters. The program uses <code>subprocess.run()</code> to capture a stream containing rows of data separated by commas and newlines (CSV). The captured stream is printed and that output appears as expected (i.e. formatted as CSV). The program:</p>
<pre><code>import os
import subprocess
import csv
for file in os.listdir("/Temp/Video"):
    if file.endswith(".mkv"):
        print(os.path.join("/Temp/Video", file))
        ps = subprocess.run(["ffprobe", "-show_streams", "-print_format", "csv", "-i", "/Temp/Video/" + file], capture_output = True, text = True)
        print("------------------------------------------")
        print(ps.stdout)
        print("------------------------------------------")
        reader = csv.reader(ps.stdout)
        for row in reader:
            print(row)
        exit(0)
</code></pre>
<p>The output from the <code>print(ps.stdout)</code> statement:</p>
<pre><code>stream,0,h264,H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10,High,video,[0][0][0][0],0x0000,1920,1080,1920,1080,0,0,2,1:1,16:9,yuv420p,40,unknown,unknown,unknown,unknown,left,progressive,1,true,4,N/A,24000/1001,24000/1001,1/1000,0,0.000000,N/A,N/A,N/A,N/A,8,N/A,N/A,N/A,46,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,eng,17936509,01:20:18.271791666,115523,10802870592,001011,MakeMKV v1.16.4 win(x64-release),2021-08-20 19:09:26,BPS DURATION NUMBER_OF_FRAMES NUMBER_OF_BYTES SOURCE_ID,Lavc59.7.102 libx264,00:01:30.010000000
stream,1,vorbis,Vorbis,unknown,audio,[0][0][0][0],0x0000,fltp,48000,3,3.0,0,0,N/A,0/0,0/0,1/1000,0,0.000000,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,3314,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,eng,Surround 3.0,2422660,01:20:18.272000000,451713,1459129736,001100,MakeMKV v1.16.4 win(x64-release),2021-08-20 19:09:26,BPS DURATION NUMBER_OF_FRAMES NUMBER_OF_BYTES SOURCE_ID,Lavc59.7.102 libvorbis,00:01:30.003000000
</code></pre>
<p>And the some of the output from the <code>for</code> loop:</p>
<pre><code>['s']
['t']
['r']
['e']
['a']
['m']
['', '']
['0']
['', '']
['h']
['2']
['6']
['4']
['', '']
['H']
['.']
['2']
['6']
['4']
[' ']
['/']
[' ']
['A']
['V']
['C']
[' ']
['/']
[' ']
['M']
['P']
['E']
['G']
['-']
['4']
[' ']
</code></pre>
<p>What I was expecting was this:</p>
<pre><code>[stream,0,h264,H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10. ...]
[stream,1,vorbis,Vorbis,unknown,audio,[0][0][0][0] ...]
</code></pre>
<p>Why is <code>row</code> a list of characters and not a list of strings?</p>
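<p>For what it is worth, my current suspicion is that it has to do with handing <code>csv.reader</code> the captured text as one plain string rather than something line-oriented; this is the variation of the loop body I am about to try (untested), using the same <code>ps</code> as above:</p>
<pre><code>import csv
import io

# ps.stdout is a single str; give the reader an iterable of lines instead
reader = csv.reader(io.StringIO(ps.stdout))
# or equivalently: reader = csv.reader(ps.stdout.splitlines())
for row in reader:
    print(row)
</code></pre>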
|
<python><csv>
|
2022-12-13 01:54:37
| 1
| 1,150
|
mbmast
|
74,779,110
| 1,897,151
|
dataclass json for python make optional attribute
|
<p>I have this dataclass, which I wish to reuse in 2 different scenarios:</p>
<pre><code>from dataclasses import dataclass
from typing import Optional

@dataclass
class TestResponse:
    name: str
    parameters: Optional[list[ActionParameters]]
</code></pre>
<p>When I call it with:</p>
<pre><code>TestResponse(
    name=name,
)
</code></pre>
<p>Without passing the <code>parameters</code> attribute, it will hit an error. How can I make this attribute optional?</p>
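<p>What I think I am looking for (untested sketch, reusing the <code>ActionParameters</code> class from above) is giving the field a default so it no longer has to be passed:</p>
<pre><code>from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TestResponse:
    name: str
    parameters: Optional[list[ActionParameters]] = None
    # or, if a fresh list is preferable to None:
    # parameters: list[ActionParameters] = field(default_factory=list)

TestResponse(name=name)  # now works without passing parameters
</code></pre>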
|
<python>
|
2022-12-13 01:39:19
| 0
| 503
|
user1897151
|