Column schema (name, dtype, min, max; string min/max are lengths):

QuestionId          int64    74.8M                  79.8M
UserId              int64    56                     29.4M
QuestionTitle       string   15 chars               150 chars
QuestionBody        string   40 chars               40.3k chars
Tags                string   8 chars                101 chars
CreationDate        date     2022-12-10 09:42:47    2025-11-01 19:08:18
AnswerCount         int64    0                      44
UserExpertiseLevel  int64    301                    888k
UserDisplayName     string   3 chars                30 chars
74,906,285
444,644
yt-dl command to download audio files - in a python script
<p>I want to convert this command into a python call:</p> <pre><code>yt-dlp --continue --no-overwrites --extract-audio --audio-format mp3 -o &quot;ST_%(upload_date)s_%(title).50s.%(ext)s&quot; https://www.youtube.com/channel/UCYt3jqP4rRP2rr5Ye8fs0LQ/videos --playlist-reverse --restrict-filenames --add-metadata --postprocessor-args &quot;-metadata albumartist=rAmAnuja-dayA&quot; --download-archive ./ytdl-archive.txt </code></pre> <p>The python call will look something like:</p> <pre><code> with yt_dlp.YoutubeDL(ydl_opts) as ydl: error_code = ydl.download([url]) </code></pre> <p>What should my options dict look like? Here is a start (not sure if it is right):</p> <pre><code>ydl_opts_base = { 'format': 'm4a/bestaudio/best', # ℹ️ See help(yt_dlp.postprocessor) for a list of available Postprocessors and their arguments 'postprocessors': [{ # Extract audio using ffmpeg 'key': 'FFmpegExtractAudio', 'preferredcodec': 'm4a', }], 'verbose': True, # Useful for checking if we have the latest version. 'playlistreverse': True, 'restrictfilenames': True, &quot;nooverwrites&quot;: True, &quot;continuedl&quot;: True, &quot;outtmpl&quot;: {&quot;default&quot;: &quot;ST_%(upload_date)s_%(title).50s.mp3&quot;} } ydl_opts = copy(ydl_opts_base) ydl_opts.update(options_dict) ydl_opts[&quot;paths&quot;] = {&quot;home&quot;: dest_dir} ydl_opts[&quot;download_archive&quot;] = os.path.join(dest_dir, &quot;ytdl-archive.txt&quot;) ydl_opts['postprocessor_args'] = {&quot;metadata&quot;: {&quot;albumartist&quot;: &quot;rAmAnuja-dayA&quot;}} </code></pre> <p>(This works with the latest version of yt-dl - downloads an mp3.)</p>
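<p>A minimal sketch of one possible options dict, mapping each CLI flag to the option name it is believed to correspond to (the FFmpegMetadata postprocessor stands in for --add-metadata, and the postprocessor_args keying is an assumption; double-check names against help(yt_dlp.YoutubeDL), since they can drift between versions):</p> <pre><code>import yt_dlp

ydl_opts = {
    'format': 'bestaudio/best',
    'postprocessors': [
        {'key': 'FFmpegExtractAudio', 'preferredcodec': 'mp3'},  # --extract-audio --audio-format mp3
        {'key': 'FFmpegMetadata'},                               # --add-metadata
    ],
    # --postprocessor-args "-metadata albumartist=..." (keyed by postprocessor/executable)
    'postprocessor_args': {'ffmpeg': ['-metadata', 'albumartist=rAmAnuja-dayA']},
    'playlistreverse': True,      # --playlist-reverse
    'restrictfilenames': True,    # --restrict-filenames
    'nooverwrites': True,         # --no-overwrites
    'continuedl': True,           # --continue
    'outtmpl': 'ST_%(upload_date)s_%(title).50s.%(ext)s',
    'download_archive': './ytdl-archive.txt',
}

with yt_dlp.YoutubeDL(ydl_opts) as ydl:
    error_code = ydl.download(['https://www.youtube.com/channel/UCYt3jqP4rRP2rr5Ye8fs0LQ/videos'])
</code></pre>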
<python><youtube-dl><yt-dlp>
2022-12-24 07:50:20
0
3,207
vishvAs vAsuki
74,906,113
7,317,408
Keep only day 1 from multi day runners - stock market, Python
<p>I am trying to build a strategy around intraday runners in the stock market, and I need to eliminate from my dataset any date from a symbol which occurs after the first day of a multi-day runner.</p> <p>However, only when that symbol runs for multiple days. If it is only a day 1, then I need to keep it.</p> <p>So if a stock moves X% every day for 3 days, I only want to keep the 1st day.</p> <p>Once the run ends for that symbol I need to repeat the process (e.g. keep only the first day from all other runs).</p> <p>Say I already have a dataframe of tickers and dates which fit my gap criteria, like so:</p> <pre><code> symbol date 1 FOXO 2022-12-22 // day 1 - keep 2 FOXO 2022-12-23 // day 2 - remove 3 FOXO 2022-12-27 // day 3 - remove - we had trading breaks here for Christmas and weekends, therefore it's still considered day 3 // 4 FOXO 2022-12-29 // day 1 - keep 5 FOXO 2022-12-30 // day 2 - remove 6 FOXO 2023-01-03 // day 1 - keep 7 FOXO 2023-01-04 // day 2 - remove 8 FOXO 2023-01-05 // day 3 - remove 6 APPL 2023-01-03 // day 1 - keep 7 APPL 2023-01-04 // day 2 - remove 8 APPL 2023-01-05 // day 3 - remove </code></pre> <p>How can I achieve the desired result with pandas?</p>
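<p>One hedged sketch of the grouping logic: treat a gap of exactly one business day (given a holiday calendar you supply; the list below is a placeholder) as a continuation of the run, and keep only rows that start a run:</p> <pre><code>import numpy as np
import pandas as pd

holidays = ['2022-12-26']  # placeholder: supply your exchange's holiday calendar

df['date'] = pd.to_datetime(df['date'])
df = df.sort_values(['symbol', 'date'])

def keep_day_ones(g):
    d = g['date'].to_numpy().astype('datetime64[D]')
    # business-day distance to the previous row; 1 means "next trading day", i.e. same run
    gaps = np.busday_count(d[:-1], d[1:], holidays=holidays)
    starts_run = np.r_[True, gaps > 1]  # the first row of each symbol always starts a run
    return g[starts_run]

result = df.groupby('symbol', group_keys=False).apply(keep_day_ones)
</code></pre>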
<python><pandas><numpy><stock><trading>
2022-12-24 07:06:40
2
3,436
a7dc
74,905,654
8,281,509
Run MATLAB executable from Python
<p>I am trying to run a MATLAB executable (main.exe) from Python. main.exe file was generated using the .m files in my project, using the application compiler. To run the executable from Python, I tried</p> <pre><code>import subprocess cmd = r&quot;C:/Windows/System32/cmd I:/sim/main/main.exe&quot; process = subprocess.Popen(cmd, stdout=subprocess.PIPE, creationflags=0x08000000) process.wait() </code></pre> <p>But this doesn't generate the output file. In MATLAB's command prompt, when I run the executable (!main) output is saved in the results folder in 50 secs. But the output file isn't generated while running from Python.</p> <p>Suggestions on how to run the executable in Python will be really helpful.</p>
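<p>For comparison, a minimal sketch that runs the executable directly (no cmd wrapper), sets the working directory so relative output paths resolve, and surfaces stdout/stderr so failures become visible; the paths reuse the ones from the question:</p> <pre><code>import subprocess

result = subprocess.run(
    [r'I:\sim\main\main.exe'],
    cwd=r'I:\sim\main',       # relative output paths (e.g. the results folder) resolve here
    capture_output=True,
    text=True,
    timeout=300,              # generous cap; the run reportedly takes ~50 s in MATLAB
)
print(result.returncode)
print(result.stdout)
print(result.stderr)
</code></pre>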
<python><matlab><subprocess><executable>
2022-12-24 04:53:36
1
1,571
Natasha
74,905,601
14,108,609
how to run a conda environment based script via rc.local
<p>I have a lengthy python scrip <code>program.py</code> sitting inside my downloads folder. I am able to run this script only after activating my specific conda environment using <code>source /home/machineX/miniconda3/bin/activate my_env</code>. I have written the below bash script <code>trigger.sh</code> to activate my conda environment and run my python script.</p> <pre><code>#!/bin/bash cd /home/machineX/Downloads/ source /home/machineX/miniconda3/bin/activate my_env python /home/machineX/Downloads/program.py </code></pre> <p>I am running my script using the following command <code>source /home/machineX/trigger.sh</code></p> <p>Normally when I run it, first I activate my conda environment conda activate the_env and then run it by writing python program.py in my bash terminal.</p> <p>My goal is to run my <code>program.py</code> at the powering on of the machine. So I am trying to execute <code>trigger.sh</code> via <code>rc.local</code>. So I added the following before <code>exit 0</code> in my <code>etc/rc.local</code></p> <pre><code>su machineX -c '/home/machineX/trigger.sh' </code></pre> <p>Everything looks alright, my <code>rc.local</code> runs all types of bash scripts using the above line. But it just gives up at conda based script.</p>
<python><linux><raspberry-pi><conda>
2022-12-24 04:33:28
1
1,351
Janzaib M Baloch
74,905,558
1,145,760
Python type hint on default parameter explodes
<p>Without <code>: int</code> this program runs fine:</p> <pre><code>#!/usr/bin/env python3 import typing # not needed def foo(): return (1,2,3) def bar(i = foo()[0]: int): # adding ': int' breaks the Universe return i print(bar()) </code></pre>
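<p>For reference, Python's grammar puts the annotation before the default value, so the working form is:</p> <pre><code>def bar(i: int = foo()[0]):  # annotation first, then the default
    return i
</code></pre>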
<python><types>
2022-12-24 04:16:29
1
9,246
Vorac
74,905,253
17,981,859
Why does sphinx automodule work properly only on local but not on readthedocs?
<h2>Local Sphinx</h2> <p><a href="https://i.sstatic.net/5zVkf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5zVkf.png" alt="Local Sphinx" /></a></p> <h2>ReadTheDocs</h2> <p><a href="https://i.sstatic.net/BxkBd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BxkBd.png" alt="readthedocs" /></a></p> <p>As you can see in the ReadTheDocs image, there is no <code>EstraClient</code> imported, whereas it appears in Local Sphinx. The automodule output is missing on ReadTheDocs. If you want to see all the code, please visit my <a href="https://github.com/StawaDev/Estrapy-API/tree/main/docs" rel="nofollow noreferrer">GitHub</a></p> <p>This is my code for the <code>EstraClient</code>, and this is my first time using Sphinx, so I don't fully understand it yet. Please respond if you know a solution or can spot the problem.</p> <pre><code>EstraClient ============ .. automodule:: Estrapy :members: :undoc-members: :show-inheritance: </code></pre> <p>I had already included code to insert the modules in my <code>conf.py</code>, but they still did not appear in readthedocs.</p> <pre><code>import os import sys sys.path.insert(0, os.path.abspath(&quot;../&quot;)) </code></pre> <h2>Update 1</h2> <p>After upgrading to Python v3.10, I got a <code>sphinx.errors.ConfigError</code> saying the config directory doesn't contain a conf.py file; here's the full error output.</p> <pre><code>Running Sphinx v5.3.0 Traceback (most recent call last): File &quot;/home/docs/.asdf/installs/python/3.10.8/lib/python3.10/site-packages/sphinx/cmd/build.py&quot;, line 276, in build_main app = Sphinx(args.sourcedir, args.confdir, args.outputdir, File &quot;/home/docs/.asdf/installs/python/3.10.8/lib/python3.10/site-packages/sphinx/application.py&quot;, line 202, in __init__ self.config = Config.read(self.confdir, confoverrides or {}, self.tags) File &quot;/home/docs/.asdf/installs/python/3.10.8/lib/python3.10/site-packages/sphinx/config.py&quot;, line 170, in read raise ConfigError(__(&quot;config directory doesn't contain a conf.py file (%s)&quot;) % sphinx.errors.ConfigError: config directory doesn't contain a conf.py file (/home/docs/checkouts/readthedocs.org/user_builds/estrapy-api/checkouts/latest) Configuration error: config directory doesn't contain a conf.py file (/home/docs/checkouts/readthedocs.org/user_builds/estrapy-api/checkouts/latest) </code></pre> <p>And lastly, my <code>.readthedocs.yaml</code>:</p> <pre><code>version: 2 sphinx: configuration: docs/conf.py builder: html build: os: ubuntu-20.04 tools: python: &quot;3.10&quot; commands: - pip install myst-parser - pip install sphinx-rtd-theme - python -m sphinx -T -E -b html -d _build/doctrees -D language=en . _build/html </code></pre>
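<p>One possible culprit: a custom <code>build.commands</code> list replaces Read the Docs' default Sphinx invocation, and the custom command above runs from the repository root, where there is no conf.py. A hedged sketch of a <code>.readthedocs.yaml</code> that keeps the default build instead (it assumes dependencies are listed in docs/requirements.txt, which is a placeholder path):</p> <pre><code>version: 2

build:
  os: ubuntu-20.04
  tools:
    python: "3.10"

sphinx:
  configuration: docs/conf.py

python:
  install:
    - requirements: docs/requirements.txt  # placeholder: list myst-parser, sphinx-rtd-theme here
</code></pre>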
<python><python-sphinx><read-the-docs>
2022-12-24 02:16:25
1
355
Stawa
74,905,175
825,227
Intersection of non-NaN entries for pivot table in Python
<p>I have a collection of SQL data output that I'm reshaping via Pandas <code>pivot_table</code> to be row-based instead of column-based. Each of the new columns will be a time-series with associated datetime index, but the resulting columns may not have coinciding timeseries history.</p> <p>What's the best way to keep the intersection of non-NaN data for the resulting columns? Sample data and example below:</p> <p><strong>df</strong></p> <p><a href="https://i.sstatic.net/jSkRl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jSkRl.png" alt="enter image description here" /></a></p> <p><strong>post-pivot df</strong></p> <p><a href="https://i.sstatic.net/BoEqp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BoEqp.png" alt="enter image description here" /></a></p> <p>And I'd like it such that the resulting final dataframe has start index date of 2004-11-19, per the below.</p> <p><strong>post-pivot df</strong></p> <p><a href="https://i.sstatic.net/aRzNL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/aRzNL.png" alt="enter image description here" /></a></p>
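<p>A minimal sketch with placeholder column names: after the pivot, dropping rows that contain any NaN leaves exactly the datetime intersection of the series:</p> <pre><code>import pandas as pd

# 'date', 'series' and 'value' are placeholder names for the SQL output columns
pivot = df.pivot_table(index='date', columns='series', values='value')
common = pivot.dropna(how='any')   # keep only dates where every column has data
# equivalent: pivot.loc[pivot.notna().all(axis=1)]
</code></pre>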
<python><python-3.x><pandas><pivot-table>
2022-12-24 01:51:23
1
1,702
Chris
74,904,917
2,316,663
Keras model_plot generates random text in figure
<p>I am currently testing some dummy CNN code. I created the following traditional model</p> <pre><code>model = Sequential() model.add(Conv2D(16, (3, 3), activation = 'relu', padding='same', input_shape = (6,6,1))) model.add(MaxPooling2D((2, 2), padding='same')) model.add(Conv2D(32, (3, 3), activation = 'relu', padding='same', input_shape = (6,6,1))) model.add(MaxPooling2D((2, 2), padding='same')) model.add(Conv2D(64, (3, 3), activation = 'relu', padding='same', input_shape = (6,6,1))) model.add(MaxPooling2D((2, 2), padding='same')) model.add(Conv2D(64, (3, 3), activation = 'relu', padding='same', input_shape = (6,6,1))) model.add(MaxPooling2D((2, 2), padding='same')) model.add(Flatten()) model.add(Dense(units = 32, activation = 'relu')) model.add(Dropout(0.2)) model.add(Dense(units = 1, activation = 'sigmoid')) model.compile(loss = 'binary_crossentropy', optimizer = 'adam', metrics = ['accuracy']) </code></pre> <p>Now, I am trying to plot the model using plot_model:</p> <pre><code>from tensorflow.keras.utils import plot_model plot_model(model, to_file=&quot;test.png&quot;, dpi=120, show_shapes=True) </code></pre> <p>However, the generated output figure includes random text that does not make sense to me:</p> <p><a href="https://i.sstatic.net/Jzkvz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Jzkvz.png" alt="enter image description here" /></a></p> <p>I tried reinstalling pydot and pydotplus:</p> <pre><code>conda uninstall pydot conda uninstall pydotplus conda install pydot conda install pydotplus </code></pre> <p>However, no progress. Any help with that?</p>
<python><keras>
2022-12-24 00:16:56
0
363
Osama El-Ghonimy
74,904,866
3,875,378
Repeated Categorical X-Axis Labels in Matplotlib
<p>I have a simple question: <strong>why are my x-axis labels repeated?</strong></p> <p><a href="https://i.sstatic.net/G9trl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/G9trl.png" alt="MWE Plot" /></a></p> <p>Here's an MWE: <a href="https://colab.research.google.com/drive/1yJHJdqIiZkkG9yZntyLWywKJNKY60nWU?usp=sharing" rel="nofollow noreferrer"><strong>X-Axis Labels MWE</strong></a></p> <pre><code>import matplotlib.pyplot as plt from brokenaxes import brokenaxes a = { # DATA -- 'CATEGORY': (VALUE, ERROR) 'Cats': (1, 0.105), 'Dogs': (2, 0.023), 'Pigs': (2.6, 0.134) } compositions = list(a.keys()) # MAKE INTO LIST a_vals = [i[0] for i in a.values()] # EXTRACT VALUES a_errors = [i[1] for i in a.values()] # EXTRACT ERRORS fig = plt.figure(figsize=(8, 6)) # DICTATE FIGURE SIZE bax = brokenaxes(ylims=((0,1.5), (1.7, 3)), hspace = 0.05) # BREAK AXES bax.plot(compositions, a_vals, marker = 'o') # PLOT DATA for i in range(0, len(a_errors)): # PLOT ALL ERROR BARS bax.errorbar(i, a_vals[i], yerr = a_errors[i], capsize = 5, fmt = 'red') # FORMAT ERROR BAR </code></pre> <p>Here's stuff I tried:</p> <ul> <li><a href="https://stackoverflow.com/questions/54388670/how-to-make-a-plot-with-repeating-labels-on-x-axis">Manually setting x-axis tick marks using <code>xticks</code></a></li> <li><a href="https://matplotlib.org/stable/gallery/ticks/ticks_too_many.html" rel="nofollow noreferrer">Converting strings to floats using <code>np.asarray(x, float)</code></a></li> <li><a href="https://stackoverflow.com/questions/6682784/reducing-number-of-plot-ticks">Reducing # ticks using <code>pyplot.locator_params(nbins=3)</code></a></li> </ul>
<python><matplotlib><axis-labels>
2022-12-24 00:03:23
1
447
DarkRunner
74,904,647
1,054,322
How can I restrict this dataclass serialization to only attributes on the base class?
<p>Here's some code:</p> <pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass from dataclasses_json import dataclass_json @dataclass_json @dataclass class Foo: f: str @dataclass_json @dataclass class Baz(Foo): b: str def full(self): return self.to_dict() # as expected, returns {&quot;f&quot;:&quot;f&quot;, &quot;b&quot;:&quot;b&quot;} def partial(self): return Foo.to_dict(self) # also returns {&quot;f&quot;:&quot;f&quot;, &quot;b&quot;:&quot;b&quot;} # how can I make it just return {&quot;f&quot;:&quot;f&quot;}? print(Baz(f=&quot;f&quot;, b=&quot;b&quot;).partial()) </code></pre> <p>output:</p> <pre class="lang-json prettyprint-override"><code>{&quot;f&quot;: &quot;f&quot;, &quot;b&quot;: &quot;b&quot;} </code></pre> <p>How can I restrict the value returned by <code>partial</code> to only <code>f</code> and not both <code>b</code> and <code>f</code>?</p>
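<p>One sketch that stays close to the question's code: take the full dict and filter it down to the fields declared on the base class, via dataclasses.fields:</p> <pre><code>from dataclasses import fields

def partial(self):
    base_names = {f.name for f in fields(Foo)}   # fields declared on the base class
    return {k: v for k, v in self.to_dict().items() if k in base_names}
    # returns {'f': 'f'} for Baz(f='f', b='b')
</code></pre>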
<python><python-dataclasses>
2022-12-23 23:08:59
1
9,389
MatrixManAtYrService
74,904,556
15,875,806
appendPlainText without new line in pyqt5
<p>I have a QPlainTextEdit box which is read-only and is used as an application console. I basically want to append a string WITHOUT a newline or <code>\r</code> to the PlainTextEdit widget using <code>appendPlainText</code>. Looking through similar questions, people seem to just set cursor to the end with <code>myTextEdit-&gt;moveCursor (QTextCursor::End);</code> and then simply append. But this did not work for me.</p> <p>In my case I use pyqtSignal to append text. So, I first signal the mainwindow to set the cursor to the end and then append my string to the QPlainTextEdit widget. But it still didn't work.</p> <p>Since my MainWindow has thousands of lines of code, I will just paste the pieces related to my question.</p> <p>Code:</p> <pre><code>class Emitter(QThread): from PyQt5.QtCore import pyqtSignal, QObject, Qt from PyQt5.QtGui import QPixmap, QStandardItemModel, QStandardItem, QIcon from multiprocessing import Process, Queue, Pipe ConsoleSGL = pyqtSignal(str) consoleCursorSGL = pyqtSignal(str) def __init__(self, from_process: Pipe): super().__init__() self.data_from_process = from_process def run(self): while True: try: signalData = self.data_from_process.recv() except EOFError: break else: if (len(signalData) &gt; 1 and signalData[0] == &quot;ConsoleSGL&quot;): self.ConsoleSGL.emit(signalData[1]) elif (len(signalData) &gt; 1 and signalData[0] == &quot;consoleCursorSGL&quot;): self.consoleCursorSGL.emit(signalData[1]) class MainWindow(QMainWindow, Ui_MainWindow): from multiprocessing import Process, Queue, Pipe def __init__(self, child_process_queue: Queue, emitter: Emitter): QMainWindow.__init__(self) self.process_queue = child_process_queue self.emitter = emitter self.emitter.daemon = True self.emitter.start() self.setupUi(self) self.pyqt5GuiCodeMain() def pyqt5GuiCodeMain(self): self.emitter.ConsoleSGL.connect(self.consoleQPlainTextEdit.appendPlainText) self.emitter.consoleCursorSGL.connect(self.moveDevConsoleCSR) def moveDevConsoleCSR(self, position): if position == &quot;End&quot;: print(&quot;Going to end&quot;) self.consoleQPlainTextEdit.moveCursor(QTextCursor.End) </code></pre> <p>Whenever I want to append to my plainTextWidget, I do this:</p> <pre><code>for text in range(0, 10): self.signaller.send([&quot;consoleCursorSGL&quot;, &quot;End&quot;]) self.signaller.send([&quot;ConsoleSGL&quot;, text]) </code></pre> <p>The output I get in my widget is:</p> <pre><code>0 1 2 3 4 5 6 7 8 9 </code></pre> <p>Note: I do see <code>print(&quot;Going to end&quot;)</code> prints in my python console output.</p> <p>Can anyone tell me what I am doing wrong, or how to append without a newline using pyqt5 signals?</p>
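<p>For reference, appendPlainText always starts a new paragraph by design. A sketch of a slot that appends without a newline, using insertPlainText after moving the cursor (connect the signal to this instead of appendPlainText):</p> <pre><code>from PyQt5.QtGui import QTextCursor

def appendNoNewline(self, text):
    self.consoleQPlainTextEdit.moveCursor(QTextCursor.End)
    self.consoleQPlainTextEdit.insertPlainText(str(text))  # no paragraph break added
    self.consoleQPlainTextEdit.moveCursor(QTextCursor.End)

# in pyqt5GuiCodeMain:
#     self.emitter.ConsoleSGL.connect(self.appendNoNewline)
</code></pre>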
<python><pyqt5>
2022-12-23 22:47:42
0
305
hashy
74,904,444
9,947,140
Connect to a GCP Cloud SQL Auth Proxy using SQL alchemy and a GCP service account
<p>I am using terraform to set up a simple application that has a postgres db via Cloud SQL in google cloud platform (GCP). I set up a GCP Cloud SQL Auth proxy for my postgresql db using <a href="https://github.com/GoogleCloudPlatform/cloud-sql-proxy" rel="nofollow noreferrer">this guide</a>. I set up the proxy as a sidecar to my main kubernetes application. I also set up a GCP service account to be used for authentication in the cloud proxy. In other words, I set the <code>service_account_name</code> in the <code>kubernetes_deployment</code> resource in my terraform file to be a gcp service account with the necessary roles to connect to the database.</p> <p>Now, I'd like to use python and sql alchemy to connect to this postgresql db through the Cloud SQL proxy. Everything I found online (like <a href="https://cloud.google.com/sql/docs/mysql/connect-admin-proxy" rel="nofollow noreferrer">this documentation</a>) suggest that I need to add a username and password like this to connect to the cloud proxy: <code>mysql+pymysql://&lt;db_user&gt;:&lt;db_pass&gt;@&lt;db_host&gt;:&lt;db_port&gt;/&lt;db_name&gt;</code>. However, my google service account doesn't have a username and password.</p> <p>My question: is there a way to connect to the google cloud auth proxy without a password using my gcp service account?</p>
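<p>Yes: with IAM database authentication there is no password. A hedged sketch using the Cloud SQL Python Connector (an in-process alternative to the sidecar proxy); the instance connection name, user, and database below are placeholders:</p> <pre><code>import sqlalchemy
from google.cloud.sql.connector import Connector

connector = Connector()

def getconn():
    return connector.connect(
        'my-project:my-region:my-instance',  # placeholder instance connection name
        'pg8000',
        user='my-sa@my-project.iam',         # IAM service-account DB user, no password
        db='my-db',
        enable_iam_auth=True,
    )

engine = sqlalchemy.create_engine('postgresql+pg8000://', creator=getconn)
</code></pre>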
<python><postgresql><kubernetes><google-cloud-platform><sqlalchemy>
2022-12-23 22:26:19
1
342
randomrabbit
74,904,389
14,159,985
How to check if pyspark dataframe is empty QUICKLY
<p>I'm trying to check if my pyspark dataframe is empty and I have tried different ways to do that, like:</p> <pre><code>df.count() == 0 df.rdd.isEmpty() df.first().isEmpty() </code></pre> <p>But all these solutions are too slow, taking up to 2 minutes to run. How can I quickly check if my pyspark dataframe is empty or not? Does anyone have a solution for that?</p> <p>Thank you in advance!</p>
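<p>A sketch of the usual cheap checks: limit the scan to a single row first, so Spark can stop early instead of counting everything:</p> <pre><code>is_empty = len(df.take(1)) == 0
# or, equivalently:
is_empty = df.limit(1).count() == 0
# Spark >= 3.3 also ships DataFrame.isEmpty()
</code></pre>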
<python><apache-spark><pyspark>
2022-12-23 22:16:42
2
338
fernando fincatti
74,904,373
9,470,078
PyTorch create combination of ranges
<p>Is there a way to nest <code>arange</code>'s easily to create all the combinations of two ranges in PyTorch? For example:</p> <pre><code>x = torch.arange(2, 4) y = torch.arange(0, 3) something(x, y) # should be [[2,0], [2,1], [2,2], [3,0], [3,1], [3,2]] </code></pre> <p>I.e., something with the same functionality as this python code:</p> <pre><code>l = [] for x in range(2, 4): for y in range(0, 3): l.append([x, y]) </code></pre> <p>where we can change the <code>range(x,y)</code>.</p>
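<p>torch.cartesian_prod does exactly this:</p> <pre><code>import torch

x = torch.arange(2, 4)
y = torch.arange(0, 3)
pairs = torch.cartesian_prod(x, y)
# tensor of shape (6, 2): [[2,0],[2,1],[2,2],[3,0],[3,1],[3,2]]
</code></pre>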
<python><pytorch>
2022-12-23 22:13:27
2
1,157
Monolith
74,904,322
1,285,419
Looking for an approach to break a B-spline/Cubic spline into multiple sections
<p>I am self-learning game development. The current topic I am learning is path design. I get some random points and need a good math model to design a smooth path through those points (it does not need to pass through all points, just fit them well). I read a book about splines (B-spline and cubic spline) and they seem like a good tool. However, those two splines make the path pass through the points instead of giving the best-fitting smooth curve. I did some research and found that scipy has a smoothing spline interpolation that may help:</p> <pre><code>from scipy.interpolate import BSpline, CubicSpline from scipy.interpolate import splrep from scipy.interpolate import BPoly import numpy as np # x, y are the given points x = np.array([ 0. , 1.2, 1.9, 3.2, 5, 6.5]) y = np.array([ 0. , 2.3, 3. , 4.3, 2.9, 3.1]) t, c, k = splrep(x, y, s=0.8, k=3) # make it smooth by setting s=0.8 spl = BSpline(t, c, k) # create B-spline </code></pre> <p>The Bspline does help to generate a smooth path. For some reasons, I need to break the path into multiple sections of curves. My goal is to change the Bspline into multiple sections of Bezier curves. So I tried the following two approaches found online.</p> <ol> <li>I found a question <a href="https://stackoverflow.com/questions/56859792/how-to-create-bezier-curves-from-b-splines-in-sympy">How to create Bezier curves from B-Splines in Sympy?</a> in which someone asks about the same thing. I copied the code from there (adding the missing <code>periodic</code> parameter so it runs):</li> </ol> <pre><code>import aggdraw import numpy as np import scipy.interpolate as si from PIL import Image # from https://stackoverflow.com/a/35007804/2849934 def scipy_bspline(cv, degree=3, periodic=False): &quot;&quot;&quot; cv: Array of control vertices degree: Curve degree &quot;&quot;&quot; count = cv.shape[0] degree = np.clip(degree, 1, count-1) kv = np.clip(np.arange(count+degree+1)-degree, 0, count-degree) max_param = count - (degree * (1-periodic)) spline = si.BSpline(kv, cv, degree) return spline, max_param # based on https://math.stackexchange.com/a/421572/396192 def bspline_to_bezier(cv): cv_len = cv.shape[0] assert cv_len &gt;= 4, &quot;Provide at least 4 control vertices&quot; spline, max_param = scipy_bspline(cv, degree=3) for i in range(1, max_param): spline = si.insert(i, spline, 2) return spline.c[:3 * max_param + 1] def draw_bezier(d, bezier): path = aggdraw.Path() path.moveto(*bezier[0]) for i in range(1, len(bezier) - 1, 3): v1, v2, v = bezier[i:i+3] path.curveto(*v1, *v2, *v) d.path(path, aggdraw.Pen(&quot;black&quot;, 2)) cv = np.array([[ 40., 148.], [ 40., 48.], [244., 24.], [160., 120.], [240., 144.], [210., 260.], [110., 250.]]) im = Image.fromarray(np.ones((400, 400, 3), dtype=np.uint8) * 255) bezier = bspline_to_bezier(cv) d = aggdraw.Draw(im) draw_bezier(d, bezier) d.flush() # show/save im </code></pre> <p>In the code, I don't quite understand <code>v1, v2, v = bezier[i:i+3]</code>: is v1 the start point of the bezier curve, v2 the middle control point and v the end point? So is the bezier curve always quadratic?
I am trying to plot the v1, v2, v for each bezier extracted in the above code as follows, and I get something very strange:</p> <pre><code> for i in range(1, len(bezier) - 1, 3): v1, v2, v = bezier[i:i+3] cc = np.array([(v1[0], v1[1]), (v2[0], v2[1]), (v[0], v[1])]) curve = BPoly(cc[:, None, :], [0,1]) X = np.linspace(0, 1, 20) p = curve(X) plt.gca().set_aspect('equal') plt.plot(*p.T) </code></pre> <p><a href="https://i.sstatic.net/BcPyP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BcPyP.png" alt="enter image description here" /></a> Could someone help explain if anything is wrong with the code above?</p> <ol start="2"> <li>I have spent a long time figuring out how to break the B-spline into multiple Bezier curves, with no good results, so I tried a different way. I use CubicSpline instead, so the piecewise polynomial is available.</li> </ol> <pre><code>from scipy.interpolate import BSpline, CubicSpline from scipy.interpolate import splrep from scipy.interpolate import BPoly import numpy as np x = np.array([ 0. , 1.2, 1.9, 3.2, 5, 6.5]) y = np.array([ 0. , 2.3, 3. , 4.3, 2.9, 3.1]) t, c, k = splrep(x, y, s=0.8, k=3) spl = BSpline(t, c, k) import matplotlib.pyplot as plt fig, ax = plt.subplots() xx = np.linspace(0, 6, 500) ax.plot(xx, spl(xx), 'b-', lw=4, alpha=0.7, label='BSpline') ax.plot(x, y, 'rs') xx = x yy = spl(xx) cu = CubicSpline(xx, yy) plt.plot(xx, yy, 'ko') for i in range(len(cu.x)-1): xs = np.linspace(cu.x[i], cu.x[i+1], 100) plt.plot(xs, np.polyval(cu.c[:,i], xs - cu.x[i])) </code></pre> <p>The code does give me multiple segments, but the number of segments depends on the xx I choose. If xx is very dense, I may end up with too many segments. My goal is to use as few polynomial segments as possible to approximate the spline. Could anyone give me some information on any way to do that? Thanks.</p> <p>Looking for some advice on how to approach the problem.</p>
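<p>As one alternative to the Bezier route, scipy can expose the smoothing spline directly as piecewise cubics, one per knot interval, so the segment count follows the knots (controlled by s) rather than a densified x grid; a sketch:</p> <pre><code>import numpy as np
from scipy.interpolate import splrep, PPoly

x = np.array([0., 1.2, 1.9, 3.2, 5, 6.5])
y = np.array([0., 2.3, 3., 4.3, 2.9, 3.1])
tck = splrep(x, y, s=0.8, k=3)

pp = PPoly.from_spline(tck)   # explicit cubic pieces of the same B-spline
# pp.x are the breakpoints, pp.c the per-segment coefficients;
# a larger s gives fewer knots and therefore fewer polynomial segments
</code></pre>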
<python><numpy><interpolation><bezier><spline>
2022-12-23 22:06:15
0
2,225
user1285419
74,904,304
4,541,649
How to stack summing vectors to numpy 3d array?
<p>I have a 3d <code>numpy</code> array that looks like so:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; g array([[[ 1., 1., 1., 1., 1.], [ 0., 0., 0., 0., 0.], [ 1., 2., 3., 4., 6.]], [[ 0., 0., 0., 0., 0.], [11., 22., 33., 44., 66.], [ 0., 0., 0., 0., 0.]]]) </code></pre> <ol> <li>I know I can calculate a sum along the first axis with <code>gs = g.sum(axis=1)</code> that will result in this array:</li> </ol> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; gs array([[ 2., 3., 4., 5., 7.], [11., 22., 33., 44., 66.]]) </code></pre> <p>How do I stack this summing array onto the original one as the fourth vector in each of the two inside groups? The expected result would be:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; g array([[[ 1., 1., 1., 1., 1.], [ 0., 0., 0., 0., 0.], [ 1., 2., 3., 4., 6.], [ 2., 3., 4., 5., 7.]], [[ 0., 0., 0., 0., 0.], [11., 22., 33., 44., 66.], [ 0., 0., 0., 0., 0.], [ 11., 22., 33., 44., 66.]]]) </code></pre> <ol start="2"> <li>And I have the same question about the summing array along the 0th dimension which is calculated with <code>gss = g.sum(axis=0)</code> and looks like so:</li> </ol> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; gss array([[ 1., 1., 1., 1., 1.], [11., 22., 33., 44., 66.], [ 1., 2., 3., 4., 6.]]) </code></pre> <p>How do I stack it onto the original array to get the result shown below?</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; g array([[[ 1., 1., 1., 1., 1.], [ 0., 0., 0., 0., 0.], [ 1., 2., 3., 4., 6.]], [[ 0., 0., 0., 0., 0.], [11., 22., 33., 44., 66.], [ 0., 0., 0., 0., 0.]], [[ 1., 1., 1., 1., 1.], [11., 22., 33., 44., 66.], [ 1., 2., 3., 4., 6.]]]) </code></pre>
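<p>A sketch of both stackings: keepdims=True preserves the summed axis as size 1, so the result concatenates straight back onto the original along that axis:</p> <pre><code>import numpy as np

g4 = np.concatenate([g, g.sum(axis=1, keepdims=True)], axis=1)  # shape (2, 4, 5)
g3 = np.concatenate([g, g.sum(axis=0, keepdims=True)], axis=0)  # shape (3, 3, 5)
</code></pre>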
<python><arrays><numpy><multidimensional-array><sum>
2022-12-23 22:02:40
2
1,655
Sergey Zakharov
74,904,236
15,446,076
Python: ValueError Data cardinality is ambiguous
<p>I'm trying to make an AI that can convert handwriting to plain text. But I'm encountering the following error:</p> <pre><code>ValueError: Data cardinality is ambiguous: x sizes: 8 y sizes: 1 Make sure all arrays contain the same number of samples. </code></pre> <p>I have 8 images as testing dataset, each image has its own label file which looks something like this: <code>87 70 66 75 0 66 67 0 63 79 66 75 68 81</code>.</p> <p>Then I combine those into 1 file (combined_labels.csv):</p> <pre><code>39 46 70 0 51 66 70 71 66 13,42 66 0 73 62 62 81 0 80 81 66 66 65 80,74 66 66 79 0 83 62 75 0 71 66 69 82 74 76 79,87 70 66 75 0 66 67 0 63 79 66 75 68 81,68 66 87 66 73 73 70 68 69 66 70 65 0 70 75,65 66 0 72 73 62 80 2,38 66 75 70 66 81 0 83 62 75 0 65 66,83 62 72 62 75 81 70 66 2 </code></pre> <p>Relevant code:</p> <pre><code>image_paths = [] for i in range(8): image_paths.append(&quot;Dataset/Images/000000/000000_{}.png&quot;.format(str(i).zfill(2))) images = [] for image_path in image_paths: image = Image.open(image_path) # Resize and grayscale the image image = image.resize((28, 28)) image = image.convert(&quot;L&quot;) images.append(np.array(image)) images = np.array(images) images = images / 255.0 images = images.reshape(-1, 28, 28, 1) ... combined_labels = pd.read_csv(&quot;combined_labels.csv&quot;, names=[&quot;labels&quot;]) combined_labels = combined_labels[&quot;labels&quot;].str.split(&quot;,&quot;, expand=True) labels = combined_labels.values labels = labels.flatten() model.fit(images, labels, epochs=10) # Error line </code></pre>
<python><pandas><dataframe><numpy><artificial-intelligence>
2022-12-23 21:49:17
0
374
Tetie
74,904,197
12,242,085
How to find columns where a punctuation mark appears as a single value in Python Pandas?
<p>I have a DataFrame like below:</p> <pre><code>COL1 | COL2 | COL3 -----|------|-------- abc | P | 123 b.bb | , | 22 1 | B | 2 ... |... | ... </code></pre> <p>And I need to find columns where a punctuation mark like !&quot;#$%&amp;'()*+,-./:;&lt;=&gt;?@[]^_`{|}~ appears on its own as a value.</p> <p>So as a result I need something like below (only COL2, because COL1 also contains a punctuation mark, but combined with other characters).</p> <pre><code>COL2 ------- P , B ... </code></pre>
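<p>A sketch using string.punctuation (essentially the set quoted above): keep the columns where some cell is a single punctuation mark on its own:</p> <pre><code>import string

mask = df.astype(str).apply(lambda col: col.isin(list(string.punctuation)).any())
result = df.loc[:, mask]   # here: just COL2
</code></pre>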
<python><pandas><string><dataframe>
2022-12-23 21:42:19
2
2,350
dingaro
74,904,129
5,716,192
How to add concrete type annotations for np.recarray?
<p>I have the following code in a file <code>scratch.py</code>:</p> <pre><code>import numpy as np def my_array(arr: np.recarray) -&gt; None: print(arr.x) my_array(np.rec.array([(1.0, 2), (3.0, 4)], dtype=[('x', '&lt;f8'), ('y', '&lt;i8')])) </code></pre> <p>Running <code>mypy scratch.py --disallow-any-generics</code> gives the following error:</p> <pre><code>scratch.py:3: error: Missing type parameters for generic type &quot;recarray&quot; [type-arg] </code></pre> <p>However, the following code gets rid of the error above:</p> <pre><code>def my_array(arr: np.recarray[Any, Any]) -&gt; None: print(arr.x) </code></pre> <p>but I would prefer more specific types like:</p> <pre><code>def my_array(arr: np.recarray[np.dtype[[('x', float), ('y', int)]]]) -&gt; None: print(arr.x) </code></pre> <p>but I haven't figured out how to do this correctly.</p> <pre><code>from typing import Union import numpy as np def my_array(arr: np.recarray[Union[float, int], np.dtype[Union[np.float_, np.int_]]]) -&gt; None: print(arr.x) my_array(np.rec.array([(1.0, 2), (3.0, 4)], dtype=[('x', '&lt;f8'), ('y', '&lt;i8')])) </code></pre> <p>This last snippet passes the check, but I don't know why.</p>
<python><numpy><mypy>
2022-12-23 21:30:16
1
693
Victory Omole
74,904,086
8,401,294
Services on the same network cannot be seen by alias, only by ip
<p>I have the following <code>bridge</code> network locally:</p> <pre class="lang-bash prettyprint-override"><code>$ docker network ls NETWORK ID NAME DRIVER SCOPE 08e7e0710fd9 bridge bridge local </code></pre> <p>I have 3 containers linked in this network:</p> <pre class="lang-bash prettyprint-override"><code>$ docker network inspect bridge &quot;Containers&quot;: { &quot;0b76b1bb23bf805b181ee1696d1366f35e5f49df446203deb32f13bd2caaba32&quot;: { &quot;Name&quot;: &quot;local-mysql&quot;, &quot;EndpointID&quot;: &quot;0288da5e7374456d8c877a8230e9515a63799bfe20f1081ac03003cfab8e0f59&quot;, &quot;MacAddress&quot;: &quot;02:42:ac:11:00:03&quot;, &quot;IPv4Address&quot;: &quot;172.17.0.3/16&quot;, &quot;IPv6Address&quot;: &quot;&quot; }, &quot;340c09255c23104e1b52f6b6b2f1adbb4ce6d5a0a139e8ec4ac613198a853022&quot;: { &quot;Name&quot;: &quot;local-app&quot;, &quot;EndpointID&quot;: &quot;dedccf1c9830cb2e28e2ac3e3953c73006e1294b45a3f70475d01a918787c03c&quot;, &quot;MacAddress&quot;: &quot;02:42:ac:11:00:02&quot;, &quot;IPv4Address&quot;: &quot;172.17.0.2/16&quot;, &quot;IPv6Address&quot;: &quot;&quot; }, &quot;ccabe6e1231144b023ddfd49403f63c49de03747ce4c31f8b55533c5b5c69f91&quot;: { &quot;Name&quot;: &quot;local-phpmyadmin&quot;, &quot;EndpointID&quot;: &quot;3f13e0677c07da79df6b7ad32038b437619250c38d4d82656b3aa7dbc0aaebe2&quot;, &quot;MacAddress&quot;: &quot;02:42:ac:11:00:04&quot;, &quot;IPv4Address&quot;: &quot;172.17.0.4/16&quot;, &quot;IPv6Address&quot;: &quot;&quot; } }, </code></pre> <p>I have the following containers running:</p> <pre class="lang-bash prettyprint-override"><code>$ docker container ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES a4395f3e72f6 local-php-app &quot;docker-php-entrypoi…&quot; 16 minutes ago Up 16 minutes 0.0.0.0:8080-&gt;80/tcp local-php-app ccabe6e12311 local-phpmyadmin &quot;/docker-entrypoint.…&quot; 16 minutes ago Up 16 minutes 0.0.0.0:3305-&gt;80/tcp local-phpmyadmin 0b76b1bb23bf local-mysql &quot;docker-entrypoint.s…&quot; 16 minutes ago Up 16 minutes 0.0.0.0:3306-&gt;3306/tcp, 33060/tcp local-mysql 340c09255c23 local-app &quot;/core/run.sh&quot; 4 hours ago Up 26 minutes 0.0.0.0:15384-&gt;5005/tcp, 0.0.0.0:20069-&gt;8080/tcp, 0.0.0.0:19286-&gt;8443/tcp local-app </code></pre> <p>Python code trying to connect:</p> <pre class="lang-py prettyprint-override"><code> mysql_pass = '' mysql_endpoint = 'local-mysql:3306' mysql_host = mysql_endpoint.split(&quot;:&quot;)[0] mysql_port = int(mysql_endpoint.split(&quot;:&quot;)[1]) mysql_db = &quot;localdb&quot; mysql_user = 'root' db_connection = pymysql.connect(host=mysql_host, port=mysql_port, user=mysql_user, passwd=mysql_pass, db=mysql_db) </code></pre> <p>So using <code>mysql_endpoint=&quot;local-mysql:3306&quot;</code>, I get the following error:</p> <pre><code>OperationalError pymysql.err.OperationalError: (2003, &quot;Can't connect to MySQL server on 'local-mysql' ([Errno -2] Name or service not known)&quot;) </code></pre> <p>So I decided to get the <code>local-mysql</code> container IP to use directly:</p> <pre><code>$ docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' 0b76b1bb23bf 172.17.0.3 </code></pre> <p>If I change it to <code>mysql_endpoint=&quot;172.17.0.3:3306&quot;</code>, it works perfectly:</p> <pre><code>&quot;Database version: ('8.0.31',)&quot; </code></pre> <p>Why?</p> <p>The fact that they are all on the same network should already be possible to make the connection using the &quot;service name&quot;?</p> <p>What's the difference between using 
<code>&quot;alias&quot;</code> and using <code>IP</code>?</p> <p>How do I use &quot;links&quot; when I have services with different &quot;docker-compose.yml&quot;?</p>
<python><docker><docker-compose>
2022-12-23 21:22:41
0
365
José Victor
74,904,066
1,664,557
Python utf-8 encoding not following unicode rules
<p>Background: I've got a byte file that is encoded using unicode. However, I can't figure out the right method to get Python to decode it to a string. Sometimes it uses 1-byte ASCII text. The majority of the time it uses 2-byte &quot;plain latin&quot; text, but it can possibly contain any unicode character. So my python program needs to be able to decode that and handle it. Unfortunately <code>byte_string.decode('unicode')</code> isn't a thing, so I need to specify another encoding scheme. I'm using Python 3.9.</p> <p>I've read through the Python doc on unicode and utf-8 <a href="https://docs.python.org/3/howto/unicode.html" rel="nofollow noreferrer">Python doc</a>. If Python uses unicode for its strings, and utf-8 as default, this should be pretty straightforward, yet I keep getting incorrect decodes.</p> <p>If I understand how unicode works, the most significant byte is the character code, and the least significant byte is the lookup value in the decode table. So I would expect 0x00_41 to decode to &quot;A&quot;,<br> 0x00_F2 =&gt;<a href="https://i.sstatic.net/C4WkR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/C4WkR.png" alt="enter image description here" /></a><br> x65_03_01 =&gt; é (e with combining acute accent).</p> <p>I wrote a short test file to experiment with these byte combinations, and I'm running into a few situations that I don't understand (despite extensive reading).</p> <p>Example code:</p> <pre><code>def main(): print(&quot;Starting MAIN...&quot;) vrsn_bytes = b'\x76\x72\x73\x6E' serato_bytes = b'\x00\x53\x00\x65\x00\x72\x00\x61\x00\x74\x00\x6F' special_bytes = b'\xB2\xF2' combining_bytes = b'\x41\x75\x64\x65\x03\x01' print(f&quot;vrsn_bytes: {vrsn_bytes}&quot;) print(f&quot;serato_bytes: {serato_bytes}&quot;) print(f&quot;special_bytes: {special_bytes}&quot;) print(f&quot;combining_bytes: {combining_bytes}&quot;) encoding_method = 'utf-8' # also tried latin-1 and cp1252 vrsn_str = vrsn_bytes.decode(encoding_method) serato_str = serato_bytes.decode(encoding_method) special_str = special_bytes.decode(encoding_method) combining_str = combining_bytes.decode(encoding_method) print(f&quot;vrsn_str: {vrsn_str}&quot;) print(f&quot;serato_str: {serato_str}&quot;) print(f&quot;special_str: {special_str}&quot;) print(f&quot;combining_str: {combining_str}&quot;) return True if __name__ == '__main__': print(&quot;Starting Command Line Experiment!&quot;) if not main(): print(&quot;\n Command Line Test FAILED!!&quot;) else: print(&quot;\n Command Line Test PASSED!!&quot;) </code></pre> <p>Issue 1: utf-8 encoding. As the experiment is written, I get the following error: <br><code>UnicodeDecodeError: 'utf-8' codec can't decode byte 0xb2 in position 0: invalid start byte</code></p> <p>I don't understand why this fails to decode; according to the <a href="https://www.ssec.wisc.edu/%7Etomw/java/unicode.html#x0300" rel="nofollow noreferrer">unicode decode table</a>, 0x00B2 should be &quot;SUPERSCRIPT TWO&quot;.
In fact, it seems like anything above 0x7F returns the same UnicodeDecodeError.</p> <p>I know that some encoding schemes only support 7 bits, which seems to be what is happening here, but utf-8 should support not only 8 bits, but multiple bytes.</p> <p>If I change <code>encoding_method</code> to <code>encoding_method = 'latin-1'</code>, which extends the original 128 ASCII characters to 256 characters (up to 0xFF), then I get better output:</p> <pre><code>vrsn_str: vrsn serato_str: Serato special_str: ²ò combining_str: Aude </code></pre> <p>However, this encoding is not handling the 2-byte codes properly. \x00_53 should be <code>S</code>, not <code>�S</code>, and none of the encoding methods I'll mention in this post handle the combining acute accent after <code>Aude</code> properly.</p> <p>So far I've tried many different encoding methods, but the ones that are closest are: unicode_escape, latin-1, and cp1252. While I expect utf-8 to be what I'm supposed to use, it does not behave as described in the Python doc linked above.</p> <p>Any help is appreciated. Besides trying more methods, I don't understand why this isn't decoding according to the table in link 3.</p> <p>UPDATE:</p> <p>After some more reading, and seeing your responses, I understand why you're so confused. I'm going to explain further so that hopefully this helps someone in the future.</p> <p>The byte file that I'm decoding is not mine (hence why the encoding does not make sense). What I see now is that the bytes represent the code point, not the byte representation of the unicode character.</p> <p>For example: I want 0x00_B2 to translate to ò. But the actual byte representation of ò is 0xC3_B2. What I have is the integer representation of the code point. So while I was trying to decode, what I actually need to do is convert 0x00B2 to an integer = 178; then I can use chr(178) to convert to unicode.</p> <p>I don't know why the file was written this way, and I can't change it. But I see now why the decoding wasn't working. Hopefully this helps someone avoid the frustration I went through.</p> <p>Thanks!</p>
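<p>A sketch of that conclusion in code, assuming fixed-width big-endian 16-bit code points as in the examples (the mixed 1-byte records in the file would need their own handling):</p> <pre><code>def decode_codepoints(data: bytes) -> str:
    # interpret each 2-byte big-endian value as a Unicode code point
    return ''.join(chr(int.from_bytes(data[i:i + 2], 'big'))
                   for i in range(0, len(data), 2))

print(decode_codepoints(b'\x00\x53\x00\x65\x00\x72\x00\x61\x00\x74\x00\x6F'))  # Serato
</code></pre>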
<python><unicode><encoding><utf-8>
2022-12-23 21:20:18
1
410
krose
74,903,930
14,336,726
Module not found error: what is Jupyter core?
<p>I think this code</p> <pre><code>jupyter nbconvert --to script weather_observations.ipynb </code></pre> <p>should convert a Jupyter notebook to a Python script, but I get this error</p> <pre><code>File &quot;/Library/Frameworks/Python.framework/Versions/3.8/bin/jupyter&quot;, line 6, in &lt;module&gt; from jupyter_core.command import main ModuleNotFoundError: No module named 'jupyter_core' (base) Users-MacBook-Air-2:~ user$ python weather_observations.py /System/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python: can't open file 'weather_observations.py': [Errno 2] No such file or directory </code></pre> <p>What is that jupyter_core?</p> <p>Here is where I got the ipynb:</p> <p><a href="https://aaltoscicomp.github.io/python-for-scicomp/_downloads/4b858dab9366f77b3641c99adece5fd2/weather_observations.ipynb" rel="nofollow noreferrer">https://aaltoscicomp.github.io/python-for-scicomp/_downloads/4b858dab9366f77b3641c99adece5fd2/weather_observations.ipynb</a></p>
<python><jupyter-notebook><jupyter-lab>
2022-12-23 20:56:56
1
480
Espejito
74,903,737
3,543,200
MyPy shows "untyped decorator" error with a ContextDecorator?
<p>I have a timer that I use as both a context manager and as a decorator, i.e.</p> <pre class="lang-py prettyprint-override"><code>import time import types from contextlib import ContextDecorator from typing import Optional, Type class Timer(ContextDecorator): &quot;&quot;&quot;Context manager which times block execution and adds results to APMContext instance&quot;&quot;&quot; def __init__(self, key: str, apm_context: Optional[APMContext] = None) -&gt; None: self._key = key self._apm_context = apm_context def __enter__(self) -&gt; None: self._start_time = time.monotonic() def __exit__( self, typ: Optional[Type[BaseException]], value: Optional[BaseException], traceback: Optional[types.TracebackType], ) -&gt; None: duration = max(time.monotonic() - self._start_time, 0) * 1000.0 log_metric(self._key, round(duration, 3), self._apm_context) </code></pre> <p>I'm adding type hints to my project gradually, but when I have a file that uses <code>Timer</code> as a decorator, I get the error</p> <pre><code>results.py:52: error: Untyped decorator makes function &quot;get_results&quot; untyped [misc] @apm.Timer(key='get_results') </code></pre> <p>If I move the class definition to the file where it's used, <code>mypy</code> does not complain, presumably it is finding the correct typeshed forward declaration, but I guess it is not seeing the base class across the import.</p>
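<p>One workaround sketch, assuming mypy is treating the defining module as untyped (e.g. due to import-following settings or a missing py.typed marker): give Timer an explicitly typed __call__ override, so call sites always see a typed decorator:</p> <pre><code>from contextlib import ContextDecorator
from typing import Any, Callable, TypeVar

_F = TypeVar('_F', bound=Callable[..., Any])

class Timer(ContextDecorator):
    def __call__(self, func: _F) -> _F:
        # same behaviour as the inherited wrapper, but with a visible signature
        return super().__call__(func)
</code></pre>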
<python><mypy><python-typing>
2022-12-23 20:26:23
1
997
gmoss
74,903,722
9,367,543
How to install tensorflow on MacOS
<p>In a new virtual env (Python 3.9), I tried to install tensorflow using pip:</p> <pre><code>pip install tensorflow </code></pre> <p>I keep getting the error</p> <blockquote> <p>ERROR: Could not find a version that satisfies the requirement tensorflow (from versions: none) ERROR: No matching distribution found for tensorflow</p> </blockquote> <p>When following the installation guidelines I still have the following error:</p> <pre><code>RuntimeError: module compiled against API version 0x10 but this version of numpy is 0xe RuntimeError: module compiled against API version 0x10 but this version of numpy is 0xe ImportError: numpy.core._multiarray_umath failed to import ImportError: numpy.core.umath failed to import </code></pre> <p>I am using an M1 Mac on macOS Monterey 12.6.1</p>
<python><tensorflow><apple-silicon>
2022-12-23 20:23:39
1
338
welu
74,903,289
2,313,307
Plotly Python - Heatmap - Include and update additional label parameters in Hovertext when using slider
<p>I want to plot data in a heatmap in Plotly and include a slider to toggle between different categories of a certain slider column.</p> <p>I am currently able to update the text in the <code>hovertemplate</code>, but I would also like to include additional information in the box, which should also be updated whenever the slider is changed.</p> <p>As an example,</p> <pre><code># create Pandas dataframe df = pd.DataFrame({'x_label': [1,2,3,1,2,3,1,2,3], 'y_label': [3,4,5,3,4,5,3,4,5], 'z_label':[6,7,8,9,10,11,12,13,14], 'A': [10, 11, 12,13,14,15,16,17,18], 'B': [10, 12, 14,16,18,20,22,24,26], 'C': [12, 14, 16,18,20,22,24,26,28], 'slider':['a','a','a','b','b','b','c','c','c']}) # create list of dataframes where each dataframe is a filtered dataframe based on the selected slider category multi_dfs = [df[df['slider'] == s] for s in df['slider'].unique()] # create and name each frame in the heatmap based on the slider name frames = [ go.Frame(data=go.Heatmap(z=df['z_label'], x=df['x_label'], y=df['y_label']), name=df['slider'].iloc[0], ) for i, df in enumerate(multi_dfs) ] # plot the heatmap figure fig = go.Figure(data=frames[0].data, frames=frames).update_layout( # iterate over frames to generate slider steps sliders=[{&quot;active&quot;:1, &quot;currentvalue&quot;:{&quot;prefix&quot;: &quot;slider: &quot;}, &quot;steps&quot;: [{&quot;args&quot;: [[f.name],{&quot;frame&quot;: {&quot;duration&quot;: 0, &quot;redraw&quot;: True}, &quot;mode&quot;: &quot;immediate&quot;,},], &quot;label&quot;: f.name, &quot;method&quot;: &quot;animate&quot;,} for f in frames],}] ) # update hovertemplate labels and information fig.update_traces( hovertemplate=&quot;X Custom Label: %{x}&quot; &quot;&lt;br&gt;Y Custom Label: %{y}&quot; &quot;&lt;br&gt;Z Custom Label: %{z}&lt;extra&gt;&lt;/extra&gt;&quot; ) </code></pre> <p>I get the following heatmap</p> <p><a href="https://i.sstatic.net/cFkSC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cFkSC.png" alt="enter image description here" /></a></p> <p>I would like to add to the hover information box the data that are in columns <code>A</code>, <code>B</code>, and <code>C</code>.</p> <p>I am not sure how to pass and update these parameters when the plot has a slider.</p>
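<p>One avenue to explore (unverified for heatmaps built from 1-D inputs like the ones above): attach the extra columns as customdata on each frame's trace and reference them in the template, so they swap along with z when the slider changes. This assumes numpy is imported as np:</p> <pre><code>frames = [
    go.Frame(
        data=go.Heatmap(
            z=d['z_label'], x=d['x_label'], y=d['y_label'],
            customdata=np.stack([d['A'], d['B'], d['C']], axis=-1),
        ),
        name=d['slider'].iloc[0],
    )
    for d in multi_dfs
]

# hovertemplate addition:
#   "<br>A: %{customdata[0]}<br>B: %{customdata[1]}<br>C: %{customdata[2]}"
</code></pre>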
<python><plotly><plotly-dash><heatmap>
2022-12-23 19:18:20
1
1,419
finstats
74,903,173
7,462,275
Problem in Pandas: impossible to do sum of int with arbitrary precision
<p>I tried to do the sum of large integers in pandas and the answer is not as expected.</p> <p>Input file : <code>my_file_lg_int</code></p> <pre><code>my_int 111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111 222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222 </code></pre> <p>Python code</p> <pre><code>file = 'my_file_lg_int' data = pd.read_csv(file) data['my_int'].sum() </code></pre> <p>The output is :</p> <pre><code>111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222 </code></pre> <p>As the integers are too long, they are read not as integers but as strings, so the &quot;sum&quot; concatenates them. I tried <code>data = pd.read_csv(file,dtype = {'my_int': int})</code>, but I get an overflow error. How can I solve it?</p>
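<p>A sketch using converters, which keeps the values as Python ints (arbitrary precision, object dtype) instead of a fixed-width integer column:</p> <pre><code>import pandas as pd

data = pd.read_csv('my_file_lg_int', converters={'my_int': int})
total = data['my_int'].sum()   # exact big-integer arithmetic, no overflow
</code></pre>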
<python><pandas>
2022-12-23 19:04:00
3
2,515
Stef1611
74,903,112
4,541,649
How to turn a 3d numpy array into a pandas dataframe of numpy 1d arrays?
<p>I have a <code>numpy</code> 3d array. I'd like to create a <code>pandas</code> dataframe off of it which would have as many rows as the array's 1st dimension's size and the number of columns would be the size of the 2nd dimension. The values of the dataframe should be the actual vectors (<code>numpy</code> arrays) with the size of the third dimension.</p> <p>Like if I have this array of size <code>(2, 3, 5)</code>:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; arr array([[[ 1., 1., 1., 1., 1.], [ 0., 0., 0., 0., 0.], [ 1., 2., 3., 4., 6.]], [[ 0., 0., 0., 0., 0.], [11., 22., 33., 44., 66.], [ 0., 0., 0., 0., 0.]]]) </code></pre> <p>I want to turn it into this dataframe (and do this efficiently with native <code>numpy</code> and/or <code>pandas</code> methods):</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; df_arr col0 col1 col2 0 [1.0, 1.0, 1.0, 1.0, 1.0] [0.0, 0.0, 0.0, 0.0, 0.0] [1.0, 2.0, 3.0, 4.0, 6.0] 1 [0.0, 0.0, 0.0, 0.0, 0.0] [11.0, 22.0, 33.0, 44.0, 66.0] [0.0, 0.0, 0.0, 0.0, 0.0] </code></pre>
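<p>A compact sketch: iterating the array yields the 2-D blocks, and listing each block yields its 1-D rows, so each dataframe cell keeps its length-5 vector:</p> <pre><code>import pandas as pd

df_arr = pd.DataFrame(list(map(list, arr)),
                      columns=[f'col{j}' for j in range(arr.shape[1])])
</code></pre>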
<python><arrays><pandas><numpy><multidimensional-array>
2022-12-23 18:54:43
1
1,655
Sergey Zakharov
74,903,025
3,788
Missing "nonce" claim with Quickbooks + Authlib
<p>When I try to implement an OAuth flow into Quickbooks Online with the <code>openid</code> scope, I receive an error <code>authlib.jose.errors.MissingClaimError: missing_claim: Missing &quot;nonce&quot; claim</code>.</p> <p>Here is the code:</p> <pre class="lang-py prettyprint-override"><code>from authlib.integrations.flask_client import OAuth oauth = OAuth(app) oauth.register( name=&quot;qbo&quot;, client_id='x', client_secret='x', server_metadata_url='https://developer.api.intuit.com/.well-known/openid_sandbox_configuration', client_kwargs={&quot;scope&quot;: &quot;openid email profile com.intuit.quickbooks.accounting&quot;}, ) @app.route(&quot;/login&quot;) def login(): redirect_uri = url_for(&quot;callback&quot;, _external=True) client = getattr(oauth, 'qbo') return client.authorize_redirect(redirect_uri, state='hello') @app.route(&quot;/callback&quot;) def callback(): client = getattr(oauth, 'qbo') token = client.authorize_access_token() return 'authorized' </code></pre> <p>The line <code>client.authorize_access_token()</code> is failing. This also fails when I pass a <code>nonce</code> param to the <code>authorize_redirect()</code> method.</p> <p>When I remove the <code>openid email profile</code> scopes, then this works without an issue. I have similar code for openid and Google, and that works without any issues.</p> <p>Any ideas on what is happening in this case?</p>
<python><openid><quickbooks><authlib>
2022-12-23 18:44:42
0
19,469
poundifdef
74,902,874
17,148,496
Finding if the values in one list are found between values of another sub-list
<p>I know the title is a bit confusing; it was hard to find a title for this. I'll give an example of what I need, which I think will make it clear.</p> <p>I have a list of coefficients (contains 15 values), let's call it listA:</p> <pre><code>Coefficients: [ 0.04870086 0.57480212 0.89015539 2.3233615 4.55812396 7.13551459 -1.08155996 2.17696328 -2.63679501 -2.33568303 1.44485836 2.57565872 1.49871307 0.26896593 4.91618077 4.40561426] </code></pre> <p>And I have a list of lists (contains 15 sub-lists). Each sub-list is made of two numbers, let's call it listB:</p> <pre><code> [[-0.006242951417219811, 0.2695363035421434], [0.18216832098326075, 1.2135053544677805] , [-5.7767682655295856, 8.644974491878234], [-2.6175178748619965, 11.350843901384977], [-3.5832555764006813, 19.889930736681176], [-18.98605217513358, -0.44537447407901887], [-4.66448539414492, 10.687900677104983], [-8.502439858318859, 3.8546296063721726], [-17.319599857758103, 18.476221095928576], [-5.287099091734136, 7.830321030743221], [-11.37116751629717, 24.648615759994385], [-8.133549393916292, 5.702535573546525], [-10.412791226300737, 13.0758676055572], [-3.5332459196432042, 13.790340644751073], [1.1737906639770186, 9.66211063676472]] </code></pre> <p>What I want to do is check if the values in listA are found between the two numbers in the corresponding sub-list, and present that in a vector of boolean values or something like that.</p> <p>For example, <code>listA[0]</code> is 0.04870086 and yes it can be found in <code>listB[0]</code> which is <code>[-0.006242951417219811, 0.2695363035421434]</code> (between the values). Then the same for <code>listA[1]</code> and <code>listB[1]</code> and so on.</p> <p>Could you guys help me do this with a for loop or something, and then store the results in a vector of <code>True/False</code> values that I can show?</p> <p>Thanks!</p>
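<p>A sketch of the pairwise check with zip (tuple unpacking reads the two bounds straight out of each sub-list):</p> <pre><code>flags = [lo <= a <= hi for a, (lo, hi) in zip(listA, listB)]
# flags[0] is True: -0.0062 <= 0.0487 <= 0.2695
</code></pre>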
<python><list>
2022-12-23 18:25:27
2
375
Kev
74,902,698
9,720,696
Keeping the structure of a list after operating a nested loop
<p>Suppose I have two lists as follows:</p> <pre><code>x = [['a','b','c'],['e','f']] y = ['w','x','y'] </code></pre> <p>I want to add each element of list <code>x</code> with each element of list <code>y</code> while keeping the structure given in <code>x</code>. The desired output should look like:</p> <pre><code>[['aw','ax','ay','bw','bx','by','cw','cx','cy'],['ew','ex','ey','fw','fx','fy']] </code></pre> <p>So far I've done:</p> <pre><code>res = [] for i in range(len(x)): for j in range(len(x[i])): for t in range(len(y)): res.append(x[i][j]+y[t]) </code></pre> <p>where <code>res</code> produces the sums correctly, but I am losing the structure; I get:</p> <pre><code>['aw','ax','ay','bw','bx','by','cw','cx','cy','ew','ex','ey','fw','fx','fy'] </code></pre> <p>Also, is there a better way of doing this than many nested loops?</p>
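<p>A nested list comprehension keeps the outer structure in one expression:</p> <pre><code>res = [[a + b for a in sub for b in y] for sub in x]
# [['aw', 'ax', 'ay', 'bw', 'bx', 'by', 'cw', 'cx', 'cy'], ['ew', 'ex', 'ey', 'fw', 'fx', 'fy']]
</code></pre>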
<python><list><nested-loops><list-manipulation>
2022-12-23 18:02:28
3
1,098
Wiliam
74,902,695
10,197,418
Multiple aggregations on multiple columns in Python polars
<p>Checking out how to implement binning with Python polars, I can easily calculate aggregates for individual columns:</p> <pre class="lang-py prettyprint-override"><code>import polars as pl import numpy as np t, v = np.arange(0, 100, 2), np.arange(0, 100, 2) df = pl.DataFrame({&quot;t&quot;: t, &quot;v0&quot;: v, &quot;v1&quot;: v}) df = df.with_columns((pl.datetime(2022,10,30) + pl.duration(seconds=df[&quot;t&quot;])).alias(&quot;datetime&quot;)).drop(&quot;t&quot;) df.group_by_dynamic(&quot;datetime&quot;, every=&quot;10s&quot;).agg(pl.col(&quot;v0&quot;).mean()) </code></pre> <pre><code>shape: (10, 2) ┌─────────────────────┬──────┐ │ datetime ┆ v0 │ │ --- ┆ --- │ │ datetime[μs] ┆ f64 │ ╞═════════════════════╪══════╡ │ 2022-10-30 00:00:00 ┆ 4.0 │ │ 2022-10-30 00:00:10 ┆ 14.0 │ │ 2022-10-30 00:00:20 ┆ 24.0 │ │ 2022-10-30 00:00:30 ┆ 34.0 │ │ ... ┆ ... │ </code></pre> <p>or calculate multiple aggregations like</p> <pre class="lang-py prettyprint-override"><code>df.group_by_dynamic(&quot;datetime&quot;, every=&quot;10s&quot;).agg( pl.col(&quot;v0&quot;).mean().alias(&quot;v0_binmean&quot;), pl.col(&quot;v0&quot;).count().alias(&quot;v0_bincount&quot;) ) ┌─────────────────────┬────────────┬─────────────┐ │ datetime ┆ v0_binmean ┆ v0_bincount │ │ --- ┆ --- ┆ --- │ │ datetime[μs] ┆ f64 ┆ u32 │ ╞═════════════════════╪════════════╪═════════════╡ │ 2022-10-30 00:00:00 ┆ 4.0 ┆ 5 │ │ 2022-10-30 00:00:10 ┆ 14.0 ┆ 5 │ │ 2022-10-30 00:00:20 ┆ 24.0 ┆ 5 │ │ 2022-10-30 00:00:30 ┆ 34.0 ┆ 5 │ │ ... ┆ ... ┆ ... │ </code></pre> <p>or calculate one aggregation for multiple columns like</p> <pre class="lang-py prettyprint-override"><code>cols = [c for c in df.columns if &quot;datetime&quot; not in c] df.group_by_dynamic(&quot;datetime&quot;, every=&quot;10s&quot;).agg( pl.col(f&quot;{c}&quot;).mean().alias(f&quot;{c}_binmean&quot;) for c in cols ) ┌─────────────────────┬────────────┬────────────┐ │ datetime ┆ v0_binmean ┆ v1_binmean │ │ --- ┆ --- ┆ --- │ │ datetime[μs] ┆ f64 ┆ f64 │ ╞═════════════════════╪════════════╪════════════╡ │ 2022-10-30 00:00:00 ┆ 4.0 ┆ 4.0 │ │ 2022-10-30 00:00:10 ┆ 14.0 ┆ 14.0 │ │ 2022-10-30 00:00:20 ┆ 24.0 ┆ 24.0 │ │ 2022-10-30 00:00:30 ┆ 34.0 ┆ 34.0 │ │ ... ┆ ... ┆ ... │ </code></pre> <p><strong>However</strong>, combining both approaches fails!</p> <pre class="lang-py prettyprint-override"><code>df.group_by_dynamic(&quot;datetime&quot;, every=&quot;10s&quot;).agg( [ pl.col(f&quot;{c}&quot;).mean().alias(f&quot;{c}_binmean&quot;), pl.col(f&quot;{c}&quot;).count().alias(f&quot;{c}_bincount&quot;) ] for c in cols ) </code></pre> <pre><code>DuplicateError: column with name 'literal' has more than one occurrences </code></pre> <p>Is there a &quot;polarustic&quot; approach to calculate multiple statistical parameters for multiple (all) columns of the dataframe in one go?</p> <p>related, pandas-specific: <a href="https://stackoverflow.com/q/43172970/10197418">Python pandas groupby aggregate on multiple columns</a></p>
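<p>The generator above yields a 2-element list per column rather than individual expressions, which appears to be what trips the duplicate-name check. Building one flat list of expressions works:</p> <pre><code>exprs = []
for c in cols:
    exprs += [
        pl.col(c).mean().alias(f'{c}_binmean'),
        pl.col(c).count().alias(f'{c}_bincount'),
    ]

df.group_by_dynamic('datetime', every='10s').agg(exprs)
</code></pre>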
<python><dataframe><group-by><aggregate><python-polars>
2022-12-23 18:02:01
1
26,076
FObersteiner
74,902,561
12,282,349
SQLAlchemy backref and backpopulates conflicts with relationship and mapper mapped has no property
<p>In a FastAPI application I am using the sqlalchemy library. I have tried to establish a many-to-many relationship as described below:</p> <pre><code>PostCity = Table('PostCity', Base.metadata, Column('id', Integer, primary_key=True), Column('post_id', Integer, ForeignKey('post.id')), Column('city_id', Integer, ForeignKey('city.id'))) class DbPost(Base): __tablename__ = 'post' id = Column(Integer, primary_key=True, index=True) image_url = Column(String) cities = relationship('DbCity', secondary=PostCity, backref='post') class DbCity(Base): __tablename__ = 'city' id = Column(Integer, primary_key=True, index=True) name = Column(String) posts = relationship('DbPost', secondary=PostCity, backref='city') </code></pre> <p>During add and commit commands it throws an error:</p> <pre><code>SAWarning: relationship 'DbCity.posts' will copy column city.id to column PostCity.city_id, which conflicts with relationship(s): 'DbPost.cities' (copies city.id to PostCity.city_id). If this is not the intention, consider if these relationships should be linked with back_populates, or if viewonly=True should be applied to one or more if they are read-only. For the less common case that foreign key constraints are partially overlapping, the orm.foreign() annotation can be used to isolate the columns that should be written towards. To silence this warning, add the parameter 'overlaps=&quot;cities,post&quot;' to the 'DbCity.posts' relationship. (Background on this error at: https://sqlalche.me/e/14/qzyx) city = DbCity(name=city_name) </code></pre> <p>This <a href="https://stackoverflow.com/questions/68322485/conflicts-with-relationship-between-tables">answer</a> suggests using <strong>back_populates=</strong>; however, when I replace <strong>backref</strong> with <strong>back_populates</strong> in both the DbPost and DbCity classes, it gives another error:</p> <pre><code>sqlalchemy.exc.InvalidRequestError: Mapper 'mapped class DbCity-&gt;city' has no property 'post' </code></pre> <p>What is the reason for this and how can I fix it?</p>
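<p>For reference, with back_populates each side must name the other side's attribute (not a new name like 'post'/'city'), and both relationships share the same secondary table; a sketch:</p> <pre><code>class DbPost(Base):
    __tablename__ = 'post'
    id = Column(Integer, primary_key=True, index=True)
    image_url = Column(String)
    # names the 'posts' attribute on DbCity
    cities = relationship('DbCity', secondary=PostCity, back_populates='posts')

class DbCity(Base):
    __tablename__ = 'city'
    id = Column(Integer, primary_key=True, index=True)
    name = Column(String)
    # names the 'cities' attribute on DbPost
    posts = relationship('DbPost', secondary=PostCity, back_populates='cities')
</code></pre>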
<python><sqlalchemy><fastapi>
2022-12-23 17:45:17
1
513
Tomas Am
74,902,542
19,838,445
Track invocations of methods and functions
<p>I'm looking for the library which allows to track invocation of methods and functions. Think of it as of <a href="https://docs.python.org/3/library/unittest.mock.html#unittest.mock.Mock.called" rel="nofollow noreferrer">Mock</a> providing <code>called</code> and <code>call_count</code> properties.</p> <p>Example of end-result needed:</p> <pre class="lang-py prettyprint-override"><code>s = MagicProxyLib() @s class MyClass: def not_called(self): print(&quot;This is not called&quot;) def first_method(self): print(&quot;First is called&quot;) def second_method(self): print(&quot;Second is called&quot;) mc = MyClass() mc.first_method() mc.second_method() mc.second_method() </code></pre> <p>I can implement such a decorator myself, but do not want reinvent the wheel if there is already some library with similar functionality.</p> <p>I expect to be able to use this library is a such way</p> <pre class="lang-py prettyprint-override"><code>assert not s.called(mc.not_called) assert s.called(mc.first_method) assert s.call_count(mc.second_method) == 2 </code></pre> <p>I have checked <a href="https://stackoverflow.com/questions/13581254/tracking-number-of-executions-of-methods-and-functions-in-python-package">this answer</a> but profiling/tracing does not quite serve the same purpose as here. Thanks for you package suggestions.</p>
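<p>Short of a dedicated package, the standard library's <code>unittest.mock</code> can already do this by wrapping the real method — a sketch, not a library recommendation:</p> <pre class="lang-py prettyprint-override"><code>from unittest import mock

mc = MyClass()
with mock.patch.object(mc, 'second_method', wraps=mc.second_method) as m:
    mc.second_method()
    mc.second_method()
    assert m.call_count == 2  # the wrapped method still runs normally
</code></pre>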
<python><python-3.x><profiling><python-decorators><proxy-object>
2022-12-23 17:43:10
1
720
GopherM
74,902,387
1,436,222
import error attempting to import from the directory above
<p>Not a python developer and I am clearly missing something fundamental here. Using Python 3.10.7 and I am getting an error:</p> <pre><code>from ..class_one import ClassOne
ImportError: attempted relative import beyond top-level package
</code></pre> <p>when attempting to execute the <code>python run_me.py</code> script in the example below.</p> <p>I have the following structure with the following import statements:</p> <pre><code>\Project
    \data_processor
        data_process.py
            from ..class_one import ClassOne   &lt;-- getting an error here
    run_me.py
        from data_processor.data_process import DataProcess
    class_one.py
</code></pre> <p>Interestingly, when I type the line <code>from ..class_one import ClassOne</code> in <code>data_process.py</code> my IDE thinks it is completely legitimate and IntelliSense works, suggesting that I import <code>ClassOne</code>.</p> <p>Most of the solutions I found apply to Python versions earlier than 3.3 (which changed the way packages are handled), which isn't the case here.</p>
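<p>A sketch of one common fix: when <code>run_me.py</code> is executed directly, the <code>Project</code> directory is just the top of <code>sys.path</code>, not a package, so <code>..</code> cannot climb above <code>data_processor</code>. An absolute import sidesteps this:</p> <pre class="lang-py prettyprint-override"><code># data_processor/data_process.py
# class_one.py sits next to run_me.py, so import it absolutely
from class_one import ClassOne
</code></pre> <p>Alternatively, keep the relative import, make <code>Project</code> itself a package, and run it as a module from one level above: <code>python -m Project.run_me</code>.</p>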
<python><import><package><python-import>
2022-12-23 17:26:16
2
459
concentriq
74,902,212
12,361,700
multiprocessing Pool seems to get stuck for no apparent reason
<p>I have a python script with the following, more or less, code:</p> <pre><code>import multiprocessing as mp

def some_function(x):   # map() passes one item per call
    pass

class SomeClass:
    def __init__(self):
        self.pool = mp.Pool(10)

    def do_smth(self):
        self.pool.map(some_function, range(10))

if __name__ == '__main__':
    cls = SomeClass()
    for _ in range(1000):
        print(&quot;*&quot;)
        cls.do_smth()
</code></pre> <p>The jobs are obviously much heavier than this; however, at some point it just gets stuck, in the sense that no error is reported, the terminal signals that the script is still running, but no more &quot;*&quot; are printed, and the CPU monitor of my PC reports 3% CPU usage. So it seems like it just &quot;crashed&quot; without saying anything to anybody.</p> <p>For the moment, I think it might be a memory issue (however, during the time it works, RAM stays at 70%), but I have no idea... do you have any idea?</p> <p>I'm working on a Macbook Pro M1 Max with a 24-core GPU and 32GB of RAM.</p>
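<p>A diagnostic sketch, assuming the real workload matches this shape: use the pool as a context manager and recycle workers periodically, so resources leaked inside workers cannot accumulate across the 1000 batches:</p> <pre class="lang-py prettyprint-override"><code>import multiprocessing as mp

def some_function(x):
    return x * x

if __name__ == '__main__':
    # maxtasksperchild replaces each worker after N tasks; useful when a
    # worker slowly leaks memory or file handles
    with mp.Pool(10, maxtasksperchild=100) as pool:
        for _ in range(1000):
            print('*')
            pool.map(some_function, range(10))
</code></pre>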
<python><multithreading><terminal>
2022-12-23 17:05:24
1
13,109
Alberto
74,902,103
3,460,864
Rolling sum between two tables with date gaps
<p><strong>Background and goal</strong>: I have two tables: counts and picks. The base table is counts and is what I want the result of all of this to be merged with. Screenshot of a reproducible example below.</p> <p><a href="https://i.sstatic.net/dxZVF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dxZVF.png" alt="pic of reproducible example" /></a></p> <p>The result I want is, for each row within the counts table (left side, 2 rows here), the rolling 7-day sum of number_picked for the appropriate group, prior to or equal to this date. The first row is group A on November 11th, 2022 at 2pm, so this would look up all records in the pick table that are in group A and fall between November 11, 2022 at 2pm minus 7 days and that timestamp, then sum. This would be 4. For row 2 it would be group B, summing number_picked over November 14th, 2022 at 9am minus 7 days, which would be 16.</p> <p>If I loop over each row and then do a temp table merge with the pick data, this will work, but it's terrible practice to loop over rows in Python. In reality my count table is like 200k rows and the picks are millions. I cannot think of a smart way to do this that handles the date gaps and is efficient.</p> <p>Python code to reproduce below:</p> <pre><code>import pandas as pd

counts = pd.DataFrame(columns=['count_date_time', 'group', 'outcome'],
                      data=[[&quot;November 11, 2022 2:00 PM&quot;, 'A', 1],
                            [&quot;November 14, 2022 9:00 AM&quot;, 'B', 0]])
counts['count_date_time'] = pd.to_datetime(counts['count_date_time'])

picks = pd.DataFrame(columns=['pick_date_time', 'group', 'number_picked'],
                     data=[[&quot;November 1, 2022 10:00 AM&quot;, &quot;A&quot;, 3],
                           [&quot;November 1, 2022 11:00 AM&quot;, &quot;A&quot;, 7],
                           [&quot;November 7, 2022 2:00 PM&quot;, &quot;A&quot;, 4],
                           [&quot;November 12, 2022 3:00 PM&quot;, &quot;A&quot;, 2],
                           [&quot;November 2, 2022 11:00 AM&quot;, &quot;B&quot;, 3],
                           [&quot;November 8, 2022 4:00 AM&quot;, &quot;B&quot;, 2],
                           [&quot;November 10, 2022 6:00 PM&quot;, &quot;B&quot;, 4],
                           [&quot;November 12, 2022 6:00 PM&quot;, &quot;B&quot;, 10]])
picks['pick_date_time'] = pd.to_datetime(picks['pick_date_time'])
</code></pre>
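<p>A sketch, under the assumption that a per-group merge fits in memory (with millions of picks you may need to process one group at a time): join counts to picks on <code>group</code>, keep only the picks inside each row's 7-day window, then sum.</p> <pre class="lang-py prettyprint-override"><code>merged = counts.merge(picks, on='group')
window = (
    (merged['pick_date_time'] &lt;= merged['count_date_time'])
    &amp; (merged['pick_date_time'] &gt; merged['count_date_time'] - pd.Timedelta(days=7))
)
sums = (merged[window]
        .groupby(['group', 'count_date_time'])['number_picked']
        .sum()
        .rename('picked_7d'))
out = counts.merge(sums, on=['group', 'count_date_time'], how='left')
# yields 4 for the group A row and 16 for the group B row
</code></pre>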
<python><pandas><rolling-computation>
2022-12-23 16:52:04
2
411
user137698
74,902,068
10,634,126
Accessing PyMongo UpdateOne operation properties
<p>If I am receiving a list of prepared PyMongo UpdateOne operations (e.g. below)...</p> <pre><code>print(type(to_load[0])) &gt; &lt;class 'pymongo.operations.UpdateOne'&gt; print(to_load[0]) &gt; UpdateOne({'id': XXX}, {'$set': {'name': 'YYY'}}, True, None, None, None) </code></pre> <p>...is it possible to then extract information from these? For instance, if I want to get a list of all of the affected <code>'id'</code> values <code>[XXX, ...]</code>, is there something like the below (which does not work) that will work?</p> <pre><code>for record in to_load: print(record['filter']) </code></pre>
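<p>One hedged option: <code>UpdateOne</code> stores its constructor arguments on private attributes (<code>_filter</code>, <code>_doc</code>, <code>_upsert</code> — that is what its <code>repr</code> prints). Reading them works today but relies on PyMongo internals that may change between versions; keeping your own record of the ids before building the operations is the robust route.</p> <pre class="lang-py prettyprint-override"><code># private API — verify against your installed PyMongo version
affected_ids = [record._filter.get('id') for record in to_load]
print(affected_ids)
</code></pre>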
<python><mongodb><pymongo>
2022-12-23 16:47:50
1
909
OJT
74,901,980
10,025,767
Plotly can't display png images in subplots
<p>Is there any way to put a png image in place of one of these Scatter graphs? I could only find how to use the image as a logo or background image.</p> <pre><code>from plotly.subplots import make_subplots
import plotly.graph_objects as go

fig = make_subplots(rows=1, cols=2)

fig.add_trace(
    go.Scatter(x=[1, 2, 3], y=[4, 5, 6]),
    row=1, col=1
)

fig.add_trace(
    go.Scatter(x=[20, 30, 40], y=[50, 60, 70]),
    row=1, col=2
)

fig.update_layout(height=600, width=800, title_text=&quot;Side By Side Subplots&quot;)
fig.show()
</code></pre> <p>I tried to add these lines before <code>fig.update_layout</code>:</p> <pre><code>from PIL import Image
import plotly.express as px

img = Image.open('plot1.png')
plotly_img = px.imshow(img)
fig.add_trace(go.Image(plotly_img), row=1, col=2)
#fig.add_trace(go.Image(img), row=1, col=2)
</code></pre> <p>but it doesn't work:</p> <blockquote> <p>ValueError: The first argument to the plotly.graph_objs.Image constructor must be a dict or an instance of :class:<code>plotly.graph_objs.Image</code></p> </blockquote>
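<p>A sketch of the likely fix: <code>px.imshow</code> returns a whole <code>Figure</code>, not a trace, which is what triggers the <code>ValueError</code>. <code>go.Image</code> itself takes the pixel data via its <code>z</code> argument:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np
from PIL import Image

img = np.asarray(Image.open('plot1.png').convert('RGB'))
fig.add_trace(go.Image(z=img), row=1, col=2)  # image as an ordinary trace
</code></pre>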
<python><html><plotly>
2022-12-23 16:38:57
1
558
James Flash
74,901,976
8,070,090
How to give permission to Google Spreadsheet created by Service Account?
<p>I created a Google Service Account with this step:</p> <ol> <li>Open this link: <a href="https://console.cloud.google.com/apis/credentials?project=MyProject" rel="noreferrer">https://console.cloud.google.com/apis/credentials?project=MyProject</a></li> <li>Click on <strong>CREATE CREDENTIALS</strong> button, then select <strong>Service account</strong></li> <li>Fill the <strong>Service account details</strong> field like this. And then <strong>CREATE AND CONTINUE</strong> <a href="https://i.sstatic.net/pTdCk.png" rel="noreferrer"><img src="https://i.sstatic.net/pTdCk.png" alt="enter image description here" /></a></li> <li>On the <strong>Grant this service account access to project</strong>, click <strong>+ADD ROLE</strong>, then I select <strong>Quick Access -&gt; Basic -&gt; Owner</strong>, then <strong>CONTINUE</strong></li> </ol> <p><a href="https://i.sstatic.net/cspvD.png" rel="noreferrer"><img src="https://i.sstatic.net/cspvD.png" alt="enter image description here" /></a></p> <ol start="5"> <li>On the <strong>Grant users access to this service account</strong>, I fill it in with my normal email address. Then click on the <strong>DONE</strong> button.</li> </ol> <p><a href="https://i.sstatic.net/P602z.png" rel="noreferrer"><img src="https://i.sstatic.net/P602z.png" alt="enter image description here" /></a></p> <p>Now I have something like this on my Service Accounts list:</p> <p><a href="https://i.sstatic.net/6vrCj.png" rel="noreferrer"><img src="https://i.sstatic.net/6vrCj.png" alt="enter image description here" /></a></p> <ol start="6"> <li>Click on the service accounts, then go to the <strong>PERMISSIONS</strong> tab, then click on the <strong>GRANT ACCESS</strong> button:</li> </ol> <p><a href="https://i.sstatic.net/t8PNF.png" rel="noreferrer"><img src="https://i.sstatic.net/t8PNF.png" alt="enter image description here" /></a></p> <ol start="7"> <li><strong>Add principals</strong> with my email address, and I set the role as <strong>Service Account Admin</strong>, then <strong>SAVE</strong>.</li> </ol> <p><a href="https://i.sstatic.net/hxIBQ.png" rel="noreferrer"><img src="https://i.sstatic.net/hxIBQ.png" alt="enter image description here" /></a></p> <ol start="8"> <li>Now click on the <strong>KEYS</strong> tab, then click on <strong>ADD KEY</strong> dropdown, and select <strong>Create new key</strong></li> </ol> <p><a href="https://i.sstatic.net/hrnnF.png" rel="noreferrer"><img src="https://i.sstatic.net/hrnnF.png" alt="enter image description here" /></a></p> <ol start="9"> <li><p><strong>CREATE</strong> the key as <strong>JSON</strong> type.</p> </li> <li><p>After the JSON file gets downloaded I put it on my code path. 
And the code that I am running is the code from the Google Spreadsheet API documentation to create a new spreadsheet; here is the snippet:</p> </li> </ol> <pre>
def create(title):
    creds = service_account.Credentials.from_service_account_file('odoo-spreadsheet-371808-7186d4c03b4c.json', scopes=SCOPES)

    service = build('sheets', 'v4', credentials=creds)
    spreadsheet = {
        'properties': {
            'title': title
        },
    }
    spreadsheet = service.spreadsheets().create(body=spreadsheet)
    response = spreadsheet.execute()
    print('response', response)
    print('spreadsheetId:', response.get('spreadsheetId'))
    print('spreadsheetUrl:', response.get('spreadsheetUrl'))
</pre> <p>The code runs successfully and creates a new spreadsheet under the service account created above, but when I open the spreadsheetUrl in my browser with the email that was granted access in steps 5 and 7 above, I get Access Denied — I'm seeing a screen like this, which means I don't have access to the spreadsheet.</p> <p><a href="https://i.sstatic.net/lvush.png" rel="noreferrer"><img src="https://i.sstatic.net/lvush.png" alt="enter image description here" /></a></p> <p>Haven't I already given my email access on the service account in steps 5 and 7 above? So why does my email still not have permission to access the spreadsheet created by my service account?</p>
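<p>A sketch of the usual fix: IAM roles on the service account do not grant access to Drive files the service account owns — the file itself must be shared. Assuming <code>SCOPES</code> includes a Drive scope such as <code>https://www.googleapis.com/auth/drive</code>, the Drive API can share the new spreadsheet with your personal account (the email address below is a placeholder):</p> <pre><code>drive = build('drive', 'v3', credentials=creds)
drive.permissions().create(
    fileId=response.get('spreadsheetId'),
    body={'type': 'user', 'role': 'writer', 'emailAddress': 'you@example.com'},
    fields='id',
).execute()
</code></pre>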
<python><google-cloud-platform><google-sheets-api><service-accounts>
2022-12-23 16:38:33
1
3,109
Tri
74,901,881
4,527,628
Django - Import Models from an installed Package
<p>I have all of my Django models in another package which I install using pip in a Django app.</p> <pre><code>models_package
 | - models.py
 | - setup.py
</code></pre> <p>and in <code>models.py</code> I have</p> <pre><code>from django.contrib.auth.models import AbstractUser

class User(AbstractUser):
    ....
</code></pre> <p>In my Django app I have</p> <pre><code>my_django_app
 | ...
 | models.py
website
 | ...
 | settings.py
manage.py
</code></pre> <p>In <code>my_django_app.models</code> I have</p> <pre><code>from models_package.models import *
</code></pre> <p>and in <code>website</code> I have added <code>my_django_app</code> as an app (added it to INSTALLED_APPS), and in <code>website.settings.py</code> I have</p> <pre><code>AUTH_USER_MODEL = &quot;my_django_app.User&quot;
</code></pre> <p>but when I run <code>python manage.py runserver</code> I get:</p> <pre><code>RuntimeError: Model class my_django_app.models.User doesn't declare an explicit app_label and isn't in an application in INSTALLED_APPS.
</code></pre> <p>The thing is, <code>User</code> comes from <code>models_package.models</code>, and <code>models_package</code> is not a Django app that I add to <code>INSTALLED_APPS</code> in settings.py. It is only a package containing all the shared models that I need in multiple different Django apps.</p> <p>Is there any way to use the models in <code>models_package.models</code> without adding it to INSTALLED_APPS inside <code>website.settings.py</code>?</p>
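<p>A common pattern — sketched here, not the only option — is to keep the shared models <em>abstract</em> in the pip package and make them concrete inside an installed app, so every concrete model is owned by an app in INSTALLED_APPS:</p> <pre><code># models_package/models.py
from django.contrib.auth.models import AbstractUser

class UserBase(AbstractUser):
    class Meta:
        abstract = True   # no table, no app_label needed

# my_django_app/models.py
from models_package.models import UserBase

class User(UserBase):
    pass                  # concrete model, owned by my_django_app
</code></pre>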
<python><django><django-models>
2022-12-23 16:30:12
1
1,225
M.Armoun
74,901,872
6,446,053
How to average multiindex row in Pandas
<p>The objective is to average over the first level of a multi-index row.</p> <p>For example, the task is to average the rows (s1,s2) and (s1,s3).</p> <p>Given the following <code>df</code></p> <pre><code>            a   fe  gg  new_text
(s1, s2)    4   0   3   t
(s1, s3)    3   3   1   t
(s2, s3)    3   2   4   t
(s2, s4)    0   0   4   t
(s3, s1)    2   1   0   t
(s3, s4)    1   1   0   t
</code></pre> <p>The expected output is as below</p> <pre><code>     a   fe  gg  new_text
s1   7   3   4   t
s2   3   2   8   t
s3   3   3   0   t
</code></pre> <p>I tried using the following syntax</p> <pre><code>df.groupby(level=0).agg(['mean'])
</code></pre> <p>Which produced undesired output</p> <pre><code>            a     fe    gg
            mean  mean  mean
(s1, s2)    4.0   0.0   3.0
(s1, s3)    3.0   3.0   1.0
(s2, s3)    3.0   2.0   4.0
(s2, s4)    0.0   0.0   4.0
(s3, s1)    2.0   1.0   0.0
(s3, s4)    1.0   1.0   0.0
</code></pre> <p>May I know how to address this problem.</p> <p>The output can be reproduced using the following codes</p> <pre><code>import pandas as pd
import numpy as np

np.random.seed(0)
arr = np.random.randint(5, size=(6, 3))
df = pd.DataFrame(data=arr,
                  index=[('s1','s2'),('s1','s3'),('s2','s3'),('s2','s4'),('s3','s1'),('s3','s4')],
                  columns=['a','fe','gg'])
df['new_text'] = 't'
df2 = df.groupby(level=0).agg(['mean'])
</code></pre>
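<p>A sketch of one approach, with the caveat that the expected output shown looks like a <em>sum</em> rather than a mean: the index holds plain tuples, so <code>level=0</code> refers to the whole tuple. Converting it to a real MultiIndex first makes the grouping behave:</p> <pre class="lang-py prettyprint-override"><code>df.index = pd.MultiIndex.from_tuples(df.index)
out = df.groupby(level=0).sum(numeric_only=True)   # or .mean(numeric_only=True)
out['new_text'] = 't'                              # reattach the constant column
</code></pre>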
<python><pandas><multi-index>
2022-12-23 16:28:37
4
3,297
rpb
74,901,825
386,861
How to solve error in VS Code - exited with code=127
<p>Baffled by this. I've been using VSCode for a few weeks and have python installed.</p> <pre><code>def print_menu():
    print (&quot;Let's play a game of Wordle!&quot;)
    print (&quot;Please type in a 5-letter word&quot;)

print_menu()
print_menu()
</code></pre> <p>So far so simple, but when I run it I get this:</p> <pre><code>[Running] python -u &quot;/Users/davidelks/Dropbox/Personal/worldle.py&quot;
/bin/sh: python: command not found

[Done] exited with code=127 in 0.006 seconds
</code></pre> <p>What does this mean? I'm guessing it failed, but why? This appears to be trivial.</p> <p>UPDATE:</p> <p>Tried:</p> <pre><code>def print_menu():
    print (&quot;Let's play a game of Wordle!&quot;)
    print (&quot;Please type in a 5-letter word&quot;)

print_menu()
</code></pre> <p>It failed.</p>
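<p>A hedged note: exit code 127 comes from the shell, not from Python — <code>/bin/sh</code> could not find a <code>python</code> executable at all (recent macOS ships only <code>python3</code>), so the script's contents never ran. A quick check in the integrated terminal:</p> <pre><code>which python3      # should print a path, e.g. /usr/bin/python3
python3 -u &quot;/Users/davidelks/Dropbox/Personal/worldle.py&quot;
</code></pre> <p>If that works, point whatever runs the file (here apparently the Code Runner extension) at <code>python3</code>, or select an interpreter in VS Code and run it via the Python extension.</p>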
<python><visual-studio-code>
2022-12-23 16:23:26
6
7,882
elksie5000
74,901,804
12,282,349
Sqlalchemy engine - run a function only after a database with tables has been created
<p>In my FastAPI application I am using the SQLAlchemy library:</p> <pre><code>from db import models
from db.database import engine

models.Base.metadata.create_all(engine)
...
</code></pre> <p>If the db does not exist it creates one. Sometimes I delete that db and recreate it, but then I lose all the data. I run a function to repopulate the db like this:</p> <pre><code>models.Base.metadata.create_all(engine)
initate_cities() #some code that puts different data to db from csv files
</code></pre> <p>However, each time I reload the app the population function runs. I would like it to run only when the database has just been created. How could I achieve this?</p>
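<p>A sketch using SQLAlchemy's inspector (1.4+): check for one of your tables before <code>create_all</code>, and only seed when it was missing. The table name <code>'cities'</code> is a guess — use whatever table <code>initate_cities()</code> fills:</p> <pre class="lang-py prettyprint-override"><code>from sqlalchemy import inspect

first_run = not inspect(engine).has_table('cities')  # table name is an assumption
models.Base.metadata.create_all(engine)
if first_run:
    initate_cities()
</code></pre>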
<python><sqlalchemy><fastapi>
2022-12-23 16:19:38
1
513
Tomas Am
74,901,684
6,702,598
AWS Cognito: Receive sorted user list
<p>I'm using Python with boto3 for accessing my AWS Cognito user data.</p> <h5>Problem</h5> <p>I'm using <a href="https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/cognito-idp.html?highlight=list_users_#CognitoIdentityProvider.Client.list_users" rel="nofollow noreferrer"><code>list_users</code></a> to retrieve a paginated list of users to eventually show them in a web browser. The problem is, the list is not sorted, so entries are difficult to find, and every time I reload my Web UI a different order is shown.</p> <h5>Question</h5> <p>How can I receive a <strong>sorted</strong> list of users?</p> <p>In <a href="https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/cognito-idp.html?highlight=list_users_#CognitoIdentityProvider.Client.list_users" rel="nofollow noreferrer">the documentation</a> I cannot find any information about sorting.</p>
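<p>As far as I can tell, the ListUsers API offers filtering but no server-side sort, so a common workaround is to paginate everything and sort client-side — a sketch (the user pool id is a placeholder):</p> <pre class="lang-py prettyprint-override"><code>import boto3

client = boto3.client('cognito-idp')
users = []
for page in client.get_paginator('list_users').paginate(UserPoolId='eu-west-1_example'):
    users.extend(page['Users'])
users.sort(key=lambda u: u['Username'].lower())  # stable, case-insensitive order
</code></pre>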
<python><amazon-web-services><boto3><amazon-cognito>
2022-12-23 16:07:12
0
3,673
DarkTrick
74,901,522
453,673
Can MediaPipe specify which parts of the face mesh are the lips or nose or eyes?
<p>MediaPipe is capable of providing the x,y,z points of multiple points on the face, enabling it to generate a face mesh. However, the output is just in x,y,z points. Is there any way to know which of those points are those of the lips?</p> <pre><code>import cv2 import mediapipe as mp mp_drawing = mp.solutions.drawing_utils mp_drawing_styles = mp.solutions.drawing_styles mp_face_mesh = mp.solutions.face_mesh # For static images: IMAGE_FILES = [] drawing_spec = mp_drawing.DrawingSpec(thickness=1, circle_radius=1) with mp_face_mesh.FaceMesh( static_image_mode=True, max_num_faces=1, refine_landmarks=True, min_detection_confidence=0.5) as face_mesh: for idx, file in enumerate(IMAGE_FILES): image = cv2.imread(file) # Convert the BGR image to RGB before processing. results = face_mesh.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB)) # Print and draw face mesh landmarks on the image. if not results.multi_face_landmarks: continue annotated_image = image.copy() for face_landmarks in results.multi_face_landmarks: print('face_landmarks:', face_landmarks) mp_drawing.draw_landmarks( image=annotated_image, landmark_list=face_landmarks, connections=mp_face_mesh.FACEMESH_TESSELATION, landmark_drawing_spec=None, connection_drawing_spec=mp_drawing_styles .get_default_face_mesh_tesselation_style()) mp_drawing.draw_landmarks( image=annotated_image, landmark_list=face_landmarks, connections=mp_face_mesh.FACEMESH_CONTOURS, landmark_drawing_spec=None, connection_drawing_spec=mp_drawing_styles .get_default_face_mesh_contours_style()) mp_drawing.draw_landmarks( image=annotated_image, landmark_list=face_landmarks, connections=mp_face_mesh.FACEMESH_IRISES, landmark_drawing_spec=None, connection_drawing_spec=mp_drawing_styles .get_default_face_mesh_iris_connections_style()) cv2.imwrite('/tmp/annotated_image' + str(idx) + '.png', annotated_image) </code></pre> <p>Output:</p> <pre><code>landmark { x: 0.5328186750411987 y: 0.3934963345527649 z: -0.008206618018448353 } landmark { x: 0.5807108879089355 y: 0.3586674928665161 z: 0.017649170011281967 } landmark { x: 0.5844370126724243 y: 0.3515523076057434 z: 0.01841720938682556 } ...and more such points </code></pre>
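<p>A hedged pointer: the face-mesh module ships index sets for the facial features — e.g. <code>FACEMESH_LIPS</code> and <code>FACEMESH_LEFT_EYE</code> (and, in newer releases, <code>FACEMESH_NOSE</code>) — as frozensets of (start, end) landmark-index pairs. Collecting those indices lets you pull just the lip coordinates from the question's own <code>face_landmarks</code>:</p> <pre class="lang-py prettyprint-override"><code>lip_idx = {i for pair in mp_face_mesh.FACEMESH_LIPS for i in pair}
for i in sorted(lip_idx):
    lm = face_landmarks.landmark[i]
    print(i, lm.x, lm.y, lm.z)
</code></pre>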
<python><python-3.x><mediapipe>
2022-12-23 15:47:19
3
20,826
Nav
74,901,511
494,134
Module attribute mysteriously appears
<pre><code>$ python Python 3.8.10 (default, Nov 14 2022, 12:59:47) [GCC 9.4.0] on linux Type &quot;help&quot;, &quot;copyright&quot;, &quot;credits&quot; or &quot;license&quot; for more information. &gt;&gt;&gt; import urllib &gt;&gt;&gt; urllib.request Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; AttributeError: module 'urllib' has no attribute 'request' &gt;&gt;&gt; urllib.request &lt;module 'urllib.request' from '/usr/lib/python3.8/urllib/request.py'&gt; </code></pre> <p>Why is <code>urllib.request</code> recognized as an attribute the <em>second</em> time I try to access it?</p>
<python>
2022-12-23 15:46:50
0
33,765
John Gordon
74,901,508
9,422,346
How to print output of pexpect consisting '\r' in next line of the terminal?
<p>I have the following code snippet from a pexpect script:</p> <pre><code>import pexpect
import time

username=&lt;username&gt;
password=&lt;password&gt;
child = pexpect.spawn('ssh &lt;username&gt;@192.168.1.219', timeout=40)
child.expect(['MyUbuntu','\$','\#'])
child.sendline(&lt;password&gt;)
child.expect(['MyUbuntu','\$','\#'])
time.sleep(2)
child.sendline('execute ping 192.168.1.2')
child.expect(['MyUbuntu','\$','\#'])
k = child.before
k = k.splitlines()   # was k.splitline, which does not exist
for line in k:
    print(line)
</code></pre> <p>However it gives me an output as follows:</p> <pre><code>b' execute ping 192.168.1.2\r\r\nPING 192.168.1.2: 56 data bytes\r\n64 bytes from 192.168.1.2: icmp_seq=0 ttl=62 time=0.2 ms\r\n64 bytes from 192.168.1.2: icmp_seq=1 ttl=62 time=0.2 ms\r\n64 bytes from 192.168.1.2: icmp_seq=2 ttl=62 time=0.2 ms\r\n64 bytes from 192.168.1.2: icmp_seq=3 ttl=62 time=0.2 ms\r\n64 bytes from 192.168.1.2: icmp_seq=4 ttl=62 time=0.1 ms\r\n\r\n--- 192.168.1.2 ping statistics ---\r\n5 packets transmitted, 5 packets received, 0% packet loss\r\nround-trip min/avg/max = 0.1/0.1/0.2 ms\r\n\r\nMyUbuntu '
</code></pre> <p>I want the terminal to show a properly readable version of the output, with line breaks starting new lines as a normal ping would produce. How do I do that?</p>
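<p>A sketch of the likely fix: <code>child.before</code> is a <em>bytes</em> object, so even when split correctly each line prints as a <code>b'...'</code> literal with escaped <code>\r\n</code>. Decode first, then split:</p> <pre class="lang-py prettyprint-override"><code>output = child.before.decode('utf-8', errors='replace')
for line in output.splitlines():   # handles \r\n and stray \r
    print(line)
</code></pre>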
<python><ubuntu><ssh><pexpect>
2022-12-23 15:46:36
1
407
mrin9san
74,901,355
5,212,614
My geopy.geocoders is throwing error: SSL: CERTIFICATE_VERIFY_FAILED. How can I resolve this?
<p>When I try to run this code:</p> <pre><code>from geopy.geocoders import Nominatim

geolocator = Nominatim(user_agent=&quot;ryan_data&quot;)
location = geolocator.geocode(&quot;175 5th Avenue NYC&quot;)
print(location.address)
</code></pre> <p>I get this error:</p> <p>GeocoderUnavailable: HTTPSConnectionPool(host='nominatim.openstreetmap.org', port=443): Max retries exceeded with url: /search?q=175+5th+Avenue+NYC&amp;format=json&amp;limit=1 (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1129)')))</p> <p>This should be a pretty simple thing, but I can't get the code to run. I'm trying to run the code on my corporate laptop. The exact same code works perfectly fine on my personal laptop. Any idea what's wrong here?</p> <p>Documentation is here.</p> <p><a href="https://geopy.readthedocs.io/en/stable/#module-geopy.geocoders" rel="nofollow noreferrer">https://geopy.readthedocs.io/en/stable/#module-geopy.geocoders</a></p>
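<p>A sketch, assuming the corporate network does TLS interception: Python rejects the proxy's certificate because the issuing CA is not in its trust store. geopy lets you supply an SSL context, so pointing it at certifi's bundle (with the corporate root CA appended to that bundle) usually resolves it:</p> <pre class="lang-py prettyprint-override"><code>import ssl
import certifi
import geopy.geocoders
from geopy.geocoders import Nominatim

ctx = ssl.create_default_context(cafile=certifi.where())
geopy.geocoders.options.default_ssl_context = ctx
geolocator = Nominatim(user_agent='ryan_data')
</code></pre>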
<python><python-3.x><geopy>
2022-12-23 15:29:24
0
20,492
ASH
74,901,315
15,825,321
Pandas rolling mean with offset by (not continuously available) date
<p>given the following example table</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Index</th> <th>Date</th> <th>Weekday</th> <th>Value</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>05/12/2022</td> <td>2</td> <td>10</td> </tr> <tr> <td>2</td> <td>06/12/2022</td> <td>3</td> <td>20</td> </tr> <tr> <td>3</td> <td>07/12/2022</td> <td>4</td> <td>40</td> </tr> <tr> <td>4</td> <td>09/12/2022</td> <td>6</td> <td>10</td> </tr> <tr> <td>5</td> <td>10/12/2022</td> <td>7</td> <td>60</td> </tr> <tr> <td>6</td> <td>11/12/2022</td> <td>1</td> <td>30</td> </tr> <tr> <td>7</td> <td>12/12/2022</td> <td>2</td> <td>40</td> </tr> <tr> <td>8</td> <td>13/12/2022</td> <td>3</td> <td>50</td> </tr> <tr> <td>9</td> <td>14/12/2022</td> <td>4</td> <td>60</td> </tr> <tr> <td>10</td> <td>16/12/2022</td> <td>6</td> <td>20</td> </tr> <tr> <td>11</td> <td>17/12/2022</td> <td>7</td> <td>50</td> </tr> <tr> <td>12</td> <td>18/12/2022</td> <td>1</td> <td>10</td> </tr> <tr> <td>13</td> <td>20/12/2022</td> <td>3</td> <td>20</td> </tr> <tr> <td>14</td> <td>21/12/2022</td> <td>4</td> <td>10</td> </tr> <tr> <td>15</td> <td>22/12/2022</td> <td>5</td> <td>40</td> </tr> </tbody> </table> </div> <p>I want to calculate a rolling average of the last three observations (at least) a week ago. I cannot use .shift as some dates are randomly missing, and .shift would therefore not produce a reliable output.</p> <p>Desired output example for last three rows in the example dataset:</p> <p><code>Index 13: Avg of indices 8, 7, 6 = (30+40+50) / 3 = 40</code></p> <p><code>Index 14: Avg of indices 9, 8, 7 = (40+50+60) / 3 = 50</code></p> <p><code>Index 15: Avg of indices 9, 8, 7 = (40+50+60) / 3 = 50</code></p> <p>What would be a working solution for this? Thanks!</p>
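<p>A sketch of a direct (if O(n²)) approach, assuming a DataFrame <code>df</code> with the <code>Date</code> and <code>Value</code> columns above — fine for moderate sizes: for each row, take the mean of the last three values whose date is at least 7 days older.</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd

df['Date'] = pd.to_datetime(df['Date'], dayfirst=True)

def lagged_mean(ts):
    eligible = df.loc[df['Date'] &lt;= ts - pd.Timedelta(days=7), 'Value']
    return eligible.tail(3).mean()   # last three observations a week or more ago

df['lagged_avg'] = df['Date'].apply(lagged_mean)
# rows 13-15 come out as 40, 50 and 50, matching the examples
</code></pre>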
<python><pandas><moving-average><rolling-computation>
2022-12-23 15:26:06
2
303
Paul1911
74,901,273
769,449
scrapy.Request does not execute callback function to process custom URL
<p>I would expect to see &quot;HIT&quot; in my Visual Studio console but the <code>process_listing</code> function is never executed. When I run <code>scrapy crawl foo -O foo.json</code> I get error:</p> <blockquote> <p>start_requests = iter(self.spider.start_requests()) TypeError: 'NoneType' object is not iterable</p> </blockquote> <p>I already checked <a href="https://stackoverflow.com/questions/45075386/scrapy-request-does-not-callback-my-function">here</a>.</p> <pre><code>import json import re import os import requests import scrapy import time from scrapy.selector import Selector from scrapy.http import HtmlResponse import html2text class FooSpider(scrapy.Spider): name = 'foo' start_urls = ['https://www.example.com/item.json?lang=en'] def start_requests(self): r = requests.get(self.start_urls[0]) cont = r.json() self.parse(cont) def parse(self, response): for o in response['objects']: if o.get('option') == &quot;buy&quot; and o.get('is_available'): listing_url = &quot;https://www.example.com/&quot; + \ o.get('brand').lower().replace(' ','-') + &quot;-&quot; + \ o.get('model').lower() + &quot;-&quot; if o.get('make') is not None: listing_url += o.get('make') + &quot;-&quot; listing_url += o.get('year').lower() print(listing_url) #a valid url is printed here yield scrapy.Request( url=response.urljoin(listing_url), callback=self.process_listing ) def process_listing(self, response): #this function is never executed print('HIT') yield item </code></pre> <p>I tried:</p> <ul> <li><code>url=response.urljoin(listing_url)</code></li> <li><code>url=listing_url</code></li> </ul>
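<p>A sketch of the probable cause: <code>start_requests</code> must return an iterable of <code>Request</code> objects, but here it fetches the JSON with <code>requests</code>, calls <code>self.parse</code> directly, and implicitly returns <code>None</code> — hence <code>'NoneType' object is not iterable</code>, and the <code>yield</code>s inside <code>parse</code> are never consumed by Scrapy. Letting Scrapy do the fetching fixes both:</p> <pre class="lang-py prettyprint-override"><code>def start_requests(self):
    yield scrapy.Request(self.start_urls[0], callback=self.parse)

def parse(self, response):
    data = json.loads(response.text)   # response is a Response, not a dict
    for o in data['objects']:
        ...                            # build listing_url and yield as before
</code></pre>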
<python><json><scrapy>
2022-12-23 15:22:04
1
6,241
Adam
74,901,265
11,092,636
Set label font to bold without changing its size and its font
<p>I know it's possible to use another object than <code>tk.Label</code>, but I want to do it with <code>tk.Label</code> (for various reasons, I'm not allowed to do it with another object).</p> <p>What I don't understand is why the font and the size of the label change when I just set it to bold. Here is a Minimum Reproducible Example:</p> <pre class="lang-py prettyprint-override"><code>import tkinter as tk

# make a tkinter window with two labels
window = tk.Tk()
window.title(&quot;Login&quot;)
window.geometry(&quot;300x200&quot;)

label1 = tk.Label(window, text=&quot;Username&quot;)
label1.config(font=&quot;bold&quot;)
label1.grid(row=0, column=0)

label2 = tk.Label(window, text=&quot;Password&quot;)
label2.grid(row=1, column=0)

window.mainloop()
</code></pre> <p>Is there any way to change the label to bold while keeping its size and its font? I tried to get the font of label1 before setting it to bold, so that maybe I could restore the size and the font afterwards, but I get <code>('font', 'font', 'Font', &lt;string object: 'TkDefaultFont'&gt;, 'bold')</code> and I don't really know what to do with it.</p>
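<p>A sketch of why and how: <code>font=&quot;bold&quot;</code> is parsed as a font <em>family</em> named &quot;bold&quot;, so Tk substitutes a different face and size. Cloning the label's current font and only flipping the weight keeps family and size intact:</p> <pre class="lang-py prettyprint-override"><code>import tkinter.font as tkfont

f = tkfont.Font(font=label1.cget('font'))  # copy the label's current font
f.configure(weight='bold')
label1.config(font=f)
</code></pre>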
<python><tkinter><fonts>
2022-12-23 15:21:31
1
720
FluidMechanics Potential Flows
74,901,114
1,611,898
Prevent Python CSV to JSON for loop iteration from overwriting previous entry
<p>I have a pretty basic Python For statement that I'm using to try to remap a CSV file into geoJSON format. My script looks like this:</p> <pre><code>def make_json(csvFilePath, jsonFilePath): # create a dictionary data = { &quot;type&quot;: &quot;FeatureCollection&quot;, &quot;features&quot;: [] } feature = { &quot;type&quot;: &quot;Feature&quot;, &quot;geometry&quot;: { &quot;type&quot;: &quot;Point&quot;, &quot;coordinates&quot;: [] }, &quot;properties&quot;: {} } # Open a csv reader called DictReader with open(csvFilePath, encoding='utf-8') as csvf: csvReader = csv.DictReader(csvf) # Convert each row into a dictionary # and add it to data for rows in csvReader: feature['geometry']['coordinates'] = [float(rows['s_dec']),float(rows['s_ra'])] feature['properties'] = rows data['features'].append(feature) # Open a json writer, and use the json.dumps() # function to dump data with open(jsonFilePath, 'w', encoding='utf-8') as jsonf: jsonf.write(json.dumps(data, indent=4)) </code></pre> <p>However, this is causing each new row entry to overwrite the previous. My output looks like this:</p> <pre><code>{ &quot;type&quot;: &quot;FeatureCollection&quot;, &quot;features&quot;: [ { &quot;type&quot;: &quot;Feature&quot;, &quot;geometry&quot;: { &quot;type&quot;: &quot;Point&quot;, &quot;coordinates&quot;: [ -67.33190277777777, 82.68714791666666 ] }, &quot;properties&quot;: { &quot;dataproduct_type&quot;: &quot;image&quot;, &quot;s_ra&quot;: &quot;82.68714791666666&quot;, &quot;s_dec&quot;: &quot;-67.33190277777777&quot;, &quot;t_min&quot;: &quot;59687.56540044768&quot;, &quot;t_max&quot;: &quot;59687.5702465162&quot;, &quot;s_region&quot;: &quot;POLYGON 82.746588309 -67.328433557 82.78394862 -67.338513769&quot; } }, { &quot;type&quot;: &quot;Feature&quot;, &quot;geometry&quot;: { &quot;type&quot;: &quot;Point&quot;, &quot;coordinates&quot;: [ -67.33190277777777, 82.68714791666666 ] }, &quot;properties&quot;: { &quot;dataproduct_type&quot;: &quot;image&quot;, &quot;s_ra&quot;: &quot;82.68714791666666&quot;, &quot;s_dec&quot;: &quot;-67.33190277777777&quot;, &quot;t_min&quot;: &quot;59687.56540044768&quot;, &quot;t_max&quot;: &quot;59687.5702465162&quot;, &quot;s_region&quot;: &quot;POLYGON 82.746588309 -67.328433557 82.78394862 -67.338513769&quot; } } ]} </code></pre> <p>Any thoughts on what I'm doing wrong here?</p>
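<p>A sketch of the likely culprit: <code>feature</code> is created once, and the very same dict object is appended on every iteration, so each pass mutates all previously appended entries. Building a fresh dict inside the loop fixes it:</p> <pre class="lang-py prettyprint-override"><code>for rows in csvReader:
    feature = {                       # new object per row
        &quot;type&quot;: &quot;Feature&quot;,
        &quot;geometry&quot;: {
            &quot;type&quot;: &quot;Point&quot;,
            &quot;coordinates&quot;: [float(rows['s_dec']), float(rows['s_ra'])],
        },
        &quot;properties&quot;: rows,
    }
    data['features'].append(feature)
</code></pre>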
<python><for-loop><csvtojson>
2022-12-23 15:04:46
2
631
thefreeline
74,901,076
17,945,841
Join two lists into one list of lists
<p>I have two lists of the same length. I want to merge them into a list of lists like this example:</p> <pre><code>left_list = [-1,-3,15,3,1.7]
right_list = [1.2,2,17,3.5,2]

res_list = [[-1,1.2],[-3,2],[15,17],[3,3.5],[1.7,2]]
</code></pre> <p>Notice the <code>left_list</code> has the smaller values, so the order matters. Thanks!!</p>
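<p>A minimal sketch with <code>zip</code>, which pairs elements positionally and so preserves the left/right order:</p> <pre class="lang-py prettyprint-override"><code>res_list = [list(pair) for pair in zip(left_list, right_list)]
</code></pre>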
<python><list>
2022-12-23 15:01:13
1
1,352
Programming Noob
74,900,932
4,495,790
How to flatten grouped Pandas DF columns by ID?
<p>I have the following Pandas data frame (number of rows with the same ID are always the same):</p> <pre><code>ID VALUE --------- 1 11 1 12 2 21 2 22 3 31 3 32 </code></pre> <p>I would like to get a flattened version of it where each ID have one rows with N columns with the respective values belonging to ID in VALUE column (by sequence order) like this:</p> <pre><code>ID v1 v2 ---------- 1 11 12 2 21 22 3 31 32 </code></pre> <p>How can I get the desired result with Pandas?</p>
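<p>A sketch using a per-group counter plus <code>pivot</code> — assuming, as stated, that every ID has the same number of rows and that file order is the sequence order:</p> <pre class="lang-py prettyprint-override"><code>df['n'] = df.groupby('ID').cumcount() + 1   # 1, 2, ... within each ID
out = (df.pivot(index='ID', columns='n', values='VALUE')
         .add_prefix('v')                   # columns become v1, v2, ...
         .reset_index())
</code></pre>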
<python><pandas>
2022-12-23 14:47:03
2
459
Fredrik
74,900,861
12,323,468
How to write df.query to select all columns from a dataframe with NaN?
<p>I have the following code which returns a df with all columns having NaN values:</p> <pre><code>df.loc[:, df.isna().any()] </code></pre> <p>How would I write this code using df.query?</p>
<python>
2022-12-23 14:38:44
0
329
jack homareau
74,900,797
843,458
python error : codec can't decode byte 0xe4 in position 2857: invalid continuation byte
<p>I am reading a LaTeX file using</p> <pre><code>with open(inputFileName, 'r', encoding=&quot;utf8&quot;) as inputFileHandle:
    for lineInput in inputFileHandle:
</code></pre> <p>It fails, with lineInput showing &quot;% Declare common style&quot;, with the error</p> <pre><code>can't decode byte 0xe4 in position 2857: invalid continuation byte
</code></pre> <p>I cannot see any strange characters in the LaTeX file. What is the byte 0xe4 and how can I identify it in the tex file?</p>
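<p>A hedged note: 0xE4 is <code>ä</code> in Latin-1/Windows-1252, and a lone 0xE4 byte is not valid UTF-8 — so the file is most likely Latin-1/cp1252 encoded. A sketch to locate the byte and read the file:</p> <pre class="lang-py prettyprint-override"><code>raw = open(inputFileName, 'rb').read()
pos = raw.find(b'\xe4')
print(pos, raw[max(0, pos - 20):pos + 20])   # show the byte in context

with open(inputFileName, 'r', encoding='latin-1') as fh:
    for lineInput in fh:
        ...
</code></pre>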
<python><encoding>
2022-12-23 14:32:46
0
3,516
Matthias Pospiech
74,900,770
4,576,519
Fast way to calculate Hessian matrix of model parameters in PyTorch
<p>I want to calculate the Hessian matrix of a loss w.r.t. model parameters in PyTorch, but using <a href="https://pytorch.org/docs/stable/generated/torch.autograd.functional.hessian.html" rel="nofollow noreferrer"><code>torch.autograd.functional.hessian</code></a> is not an option for me since it recomputes the model output and loss which I already have from previous calls. My current implementation is as follows:</p> <pre class="lang-py prettyprint-override"><code>import torch import time # Create model model = torch.nn.Sequential(torch.nn.Linear(1, 100), torch.nn.Tanh(), torch.nn.Linear(100, 1)) num_param = sum(p.numel() for p in model.parameters()) # Evaluate some loss on a random dataset x = torch.rand((1000,1)) y = torch.rand((1000,1)) y_hat = model(x) loss = ((y_hat - y)**2).mean() ''' Calculate Hessian ''' start = time.time() # Allocate Hessian size H = torch.zeros((num_param, num_param)) # Calculate Jacobian w.r.t. model parameters J = torch.autograd.grad(loss, list(model.parameters()), create_graph=True) J = torch.cat([e.flatten() for e in J]) # flatten # Fill in Hessian for i in range(num_param): result = torch.autograd.grad(J[i], list(model.parameters()), retain_graph=True) H[i] = torch.cat([r.flatten() for r in result]) # flatten print(time.time() - start) </code></pre> <p>Is there any way to do this faster? Perhaps without using the for loop, since it is calling <code>autograd.grad</code> for every single model variable.</p>
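<p>A sketch of one speed-up, assuming PyTorch ≥ 1.11: <code>torch.autograd.grad</code> accepts <code>is_grads_batched=True</code>, which vmaps over a batch of <code>grad_outputs</code>, so all Hessian rows can be requested in one call instead of one call per parameter (the feature is documented as experimental, so verify against your version):</p> <pre class="lang-py prettyprint-override"><code>I_N = torch.eye(num_param)
rows = torch.autograd.grad(J, list(model.parameters()),
                           grad_outputs=I_N,          # one basis vector per row
                           is_grads_batched=True,
                           retain_graph=True)
H = torch.cat([r.reshape(num_param, -1) for r in rows], dim=1)
</code></pre>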
<python><optimization><pytorch><hessian>
2022-12-23 14:30:14
1
6,829
Thomas Wagenaar
74,900,699
7,415,412
How to append to list in python module written in rust?
<p>I am writing a python module for heavy calculations in Rust using the pyo3 bindings. However, for a Rust struct with a field containing an empty list, I cannot append to that field from Python.</p> <p>Does anybody know how to do this? See below for an MWE (that I cannot get to work):</p> <p>I have a rust file called lib.rs:</p> <pre class="lang-rust prettyprint-override"><code>// lib.rs
use pyo3::prelude::*;
use std::vec::Vec;

#[pyclass(subclass)]
pub struct TestClass {
    #[pyo3(get)]
    pub id: i32,
    #[pyo3(get, set)]
    pub test_list: Vec&lt;f32&gt;
}

#[pymethods]
impl TestClass {
    #[new]
    pub fn new(id: i32) -&gt; TestClass {
        TestClass{ id, test_list: Vec::new() }
    }
}

/// A Python module implemented in Rust.
#[pymodule]
fn test_rust(_py: Python, m: &amp;PyModule) -&gt; PyResult&lt;()&gt; {
    m.add_class::&lt;TestClass&gt;()?;
    Ok(())
}
</code></pre> <p>The library is built using the <code>maturin</code> package. When I instantiate <code>TestClass</code> in Python, that works as expected. However, when I try to append to the class attribute <code>test_list</code>, nothing happens: <code>test_list</code> is still an empty list. See the example below:</p> <pre class="lang-python prettyprint-override"><code>from test_rust import TestClass

foo = TestClass(1)
print(foo.test_list)
# output: []

foo.test_list.append(2.3)
print(foo.test_list)
# output: [] - expected: [2.3]
</code></pre> <p>The <a href="https://pyo3.rs/v0.12.3/conversions/tables.html" rel="nofollow noreferrer">pyo3 documentation</a> states that the types <code>Vec&lt;T&gt;</code> and <code>list[T]</code> are accepted. However, this does not work.</p> <p>Any help would be very much appreciated.</p>
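<p>A hedged explanation with a small demo: with <code>#[pyo3(get)]</code> on a <code>Vec&lt;f32&gt;</code>, every attribute access converts the Rust vector into a <em>brand-new</em> Python list, so the append mutates a temporary copy. One fix on the Rust side is to store a real Python list (<code>Py&lt;PyList&gt;</code>) in the struct instead; the demo below just shows the copy semantics:</p> <pre class="lang-py prettyprint-override"><code>foo = TestClass(1)
tmp = foo.test_list    # conversion: a fresh Python list built from the Vec
tmp.append(2.3)        # mutates only the temporary copy
print(foo.test_list)   # another fresh copy of the unchanged Vec: []
</code></pre>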
<python><rust><pyo3>
2022-12-23 14:24:06
0
599
westr
74,900,683
5,472,037
Wordnet taxonomy construction
<p>I'd like to build a minimum encompassing taxonomic tree for a given set of wordnet synsets. For a set of 2 synsets the tree would be one where they are both children nodes of their lowest common hypernym.</p> <p>For the following set:</p> <pre><code>[{'name': 'tench.n.01'}, {'name': 'goldfish.n.01'}, {'name': 'great_white_shark.n.01'}, {'name': 'tiger_shark.n.01'}, {'name': 'hammerhead.n.03'}] </code></pre> <p>The required result is:</p> <pre><code>{'name': 'fish.n.01', 'children': [{'name': 'cyprinid.n.01', 'children': [{'name': 'tench.n.01'}, {'name': 'goldfish.n.01'}]}, {'name': 'shark.n.01', 'children': [{'name': 'tiger_shark.n.01'}, {'name': 'great_white_shark.n.01'}, {'name': 'hammerhead.n.03'}]}]} </code></pre> <p>I had some success with relatively small sets. Once I try larger sets things start to break down.</p> <p>E.g. for a 30 long set I got a tree that can be visualized as follows:</p> <p><a href="https://i.sstatic.net/tam4d.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tam4d.png" alt="enter image description here" /></a></p> <p>One can see for example that the great gray owl is not classified under bird.</p> <h1>Code example</h1> <p>Below I give a reproducible example in python of what I got so far:</p> <h2>Define tree building function</h2> <pre><code># import nltk # nltk.download('wordnet') from nltk.corpus import wordnet as wn from itertools import combinations import pandas as pd def synset_tree(synsets): # find similarities between all leaf nodes synsets_sim = [] for i,j in combinations(range(len(synsets)),2): synsets_sim.append(pd.DataFrame({'syn1':[synsets[i][&quot;name&quot;]], 'syn2':[synsets[j][&quot;name&quot;]], 'sim':[wn.synset(synsets[i][&quot;name&quot;]).path_similarity(wn.synset(synsets[j][&quot;name&quot;]))]})) synsets_sim = pd.concat(synsets_sim, axis=0) while len(synsets)&gt;1: synsets_sim = synsets_sim.sort_values('sim', ascending=False) # Find common ancestor of 2 closest leaf nodes common_hype = wn.synset(synsets_sim.syn1.iloc[0]).lowest_common_hypernyms(wn.synset(synsets_sim.syn2.iloc[0]))[0].name() # extract 2 leaf nodes syn_dict1 = list(filter(lambda x: x[&quot;name&quot;] == synsets_sim.syn1.iloc[0], synsets))[0] syn_dict2 = list(filter(lambda x: x[&quot;name&quot;] == synsets_sim.syn2.iloc[0], synsets))[0] # remove lead nodes from leaf node list synsets = [syn_dict for syn_dict in synsets if syn_dict not in [syn_dict1, syn_dict2]] # The common hypernym will replace the 2 leaf nodes. Calculate it's similarity to all remaining leaf nodes. 
new_sim = [] for i in range(len(synsets)): new_sim.append(pd.DataFrame({'syn1':[synsets[i][&quot;name&quot;]], 'syn2':[common_hype], 'sim':[wn.synset(synsets[i][&quot;name&quot;]).path_similarity(wn.synset(common_hype))]})) if len(new_sim) &gt; 0: new_sim = pd.concat(new_sim, axis=0) new_sim = new_sim[new_sim.sim&lt;1] synsets_sim = pd.concat([synsets_sim, new_sim],axis=0) # Add children of the nodes being removed to the common hypernym if common_hype == syn_dict1[&quot;name&quot;]: if syn_dict1.get(&quot;children&quot;): common_hype = {&quot;name&quot;:common_hype, &quot;children&quot;:[syn_dict2] + syn_dict1.get(&quot;children&quot;)} else: common_hype = {&quot;name&quot;:common_hype, &quot;children&quot;:[syn_dict2]} synsets_sim = synsets_sim[~((synsets_sim.syn1 == syn_dict2[&quot;name&quot;]) | (synsets_sim.syn2 == syn_dict2[&quot;name&quot;]))] elif common_hype == syn_dict2[&quot;name&quot;]: if syn_dict2.get(&quot;children&quot;): common_hype = {&quot;name&quot;:common_hype, &quot;children&quot;:[syn_dict1] + syn_dict2.get(&quot;children&quot;)} else: common_hype = {&quot;name&quot;:common_hype, &quot;children&quot;:[syn_dict1]} synsets_sim = synsets_sim[~((synsets_sim.syn1 == syn_dict1[&quot;name&quot;]) | (synsets_sim.syn2 == syn_dict1[&quot;name&quot;]))] elif common_hype in [x[&quot;name&quot;] for x in synsets]: for i in range(len(synsets)): if common_hype == synsets[i][&quot;name&quot;]: if synsets[i][&quot;children&quot;]: synsets[i][&quot;children&quot;] = synsets[i][&quot;children&quot;] + [syn_dict1, syn_dict2] else: synsets[i][&quot;children&quot;] = [syn_dict1, syn_dict2] else: common_hype = {&quot;name&quot;:common_hype, &quot;children&quot;:[syn_dict1, syn_dict2]} synsets_sim = synsets_sim[~((synsets_sim.syn1 == syn_dict1[&quot;name&quot;]) | (synsets_sim.syn2 == syn_dict1[&quot;name&quot;]))] synsets_sim = synsets_sim[~((synsets_sim.syn1 == syn_dict2[&quot;name&quot;]) | (synsets_sim.syn2 == syn_dict2[&quot;name&quot;]))] synsets.append(common_hype) return synsets[0] </code></pre> <h2>Input set that works</h2> <pre><code>synsets = [{'name': 'tench.n.01'}, {'name': 'goldfish.n.01'}, {'name': 'great_white_shark.n.01'}, {'name': 'tiger_shark.n.01'}, {'name': 'hammerhead.n.03'}, {'name': 'electric_ray.n.01'}, {'name': 'stingray.n.01'}, {'name': 'cock.n.05'}, {'name': 'hen.n.02'}, {'name': 'ostrich.n.02'}, {'name': 'brambling.n.01'}, {'name': 'goldfinch.n.02'}, {'name': 'house_finch.n.01'}, {'name': 'junco.n.01'}, {'name': 'indigo_bunting.n.01'}, {'name': 'robin.n.02'}, {'name': 'bulbul.n.01'}, {'name': 'jay.n.02'}, {'name': 'magpie.n.01'}, {'name': 'chickadee.n.01'}, {'name': 'water_ouzel.n.01'}, {'name': 'kite.n.04'}, {'name': 'bald_eagle.n.01'}, {'name': 'vulture.n.01'}, {'name': 'great_grey_owl.n.01'}, {'name': 'european_fire_salamander.n.01'}, {'name': 'common_newt.n.01'}, {'name': 'eft.n.01'}, {'name': 'spotted_salamander.n.01'}, {'name': 'axolotl.n.01'}] wow = synset_tree(synsets[:5]) wow </code></pre> <pre><code>{'name': 'fish.n.01', 'children': [{'name': 'cyprinid.n.01', 'children': [{'name': 'tench.n.01'}, {'name': 'goldfish.n.01'}]}, {'name': 'shark.n.01', 'children': [{'name': 'tiger_shark.n.01'}, {'name': 'great_white_shark.n.01'}, {'name': 'hammerhead.n.03'}]}]} </code></pre> <h2>Input set that fails</h2> <pre><code>wow = synset_tree(synsets) # this gives the tree which produces the above image </code></pre> <pre><code></code></pre>
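<p>A sketch of an alternative that avoids the pairwise-similarity merging entirely: every synset already knows its full hypernym chain, so one can insert each chain into a nested-dict trie and afterwards prune single-child internal nodes. This naturally keeps the owl under bird. (The function name is mine, and only the first of possibly several <code>hypernym_paths()</code> is used.)</p> <pre class="lang-py prettyprint-override"><code>from nltk.corpus import wordnet as wn

def build_taxonomy(names):
    root = {}
    for name in names:
        chain = [s.name() for s in wn.synset(name).hypernym_paths()[0]]
        node = root
        for n in chain:
            node = node.setdefault(n, {})   # walk/extend the trie
    return root   # nested dict: {synset_name: {child_name: {...}}}
</code></pre>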
<python><nlp><nltk><wordnet>
2022-12-23 14:22:54
1
641
Iyar Lin
74,900,576
9,385,568
How to map values from one dataframe to another?
<p>I have two different dataframes as follows:</p> <pre class="lang-py prettyprint-override"><code>df.head() ext_id credit_debit_indicator index_name business_date trench_tag trench_tag_l2 0 4SL19N2YQLCU62TY C ib-prodfulltext-t24-transhist-202208 2022-07-31 XXX9999999 XXX99 1 1EXHR74Y2YXBN4AM D ib-prodfulltext-t24-transhist-202208 2022-07-31 XXX9999999 XXX99 2 OI0001WMRUD C ib-prodfulltext-t24-transhist-202208 2022-07-31 XXX9999999 XXX99 3 OI0001WKKXA C ib-prodfulltext-t24-transhist-202208 2022-07-31 XXX9999999 XXX99 4 SGW7000490024199 C ib-prodfulltext-t24-transhist-202208 2022-07-31 XXX9999999 XXX99 </code></pre> <p>and</p> <pre class="lang-py prettyprint-override"><code>mapping_df.head() trench_code trench_level fink_code fink_level 0 COM0101001 4 PREPAID_01 2 1 COM0101002 4 PREPAID_01 2 2 COM0101003 4 PREPAID_01 2 3 COM0101099 4 PREPAID_01 2 4 COM0101999 4 PREPAID_01 2 </code></pre> <p>I need to match <code>df.trench_tag</code> with <code>mapping_df.trench_code</code>, and where there's a match, I want to copy <code>mapping_df.trench_code</code> into a new column in the original dataset <code>df.fink_sub_tag_key</code>. If I don't find a match, then I need to try match <code>df.trench_code_l2</code> with <code>mapping_df.trench_code</code>.</p> <p>I tried:</p> <pre class="lang-py prettyprint-override"><code>df2 = df.merge(mapping_df, left_on='trench_tag', right_on='trench_code', how='left') df2 = df.merge(mapping_df, left_on='trench_tag_l2', right_on='trench_code', how='left') </code></pre> <p>where the second join overwrites the first one.</p> <p>Help would be appreciated.</p>
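<p>A sketch with <code>Series.map</code> plus a fallback — assuming the value you actually want in <code>fink_sub_tag_key</code> is the matched <code>fink_code</code> (copying <code>trench_code</code> back would merely duplicate <code>trench_tag</code>):</p> <pre class="lang-py prettyprint-override"><code>lookup = mapping_df.set_index('trench_code')['fink_code']

df['fink_sub_tag_key'] = (df['trench_tag'].map(lookup)
                          .fillna(df['trench_tag_l2'].map(lookup)))  # fallback match
</code></pre>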
<python><pandas>
2022-12-23 14:10:38
1
873
Stanislav Jirak
74,900,505
360,362
Convert SQLAlchemy Column type to Python data type respecting variant
<p>I have added support for sqlite in some of my SQLAlchemy models, so some columns appear with variants based on the dialect</p> <pre><code>dt = Column(DateTime().with_variant(String, 'sqlite')) </code></pre> <p>I have a logic that relies on the python data type for that field, such as <code> col.type.python_type</code> but this now returns <code>Variant</code>, in which the <code>python_type</code> attribute raise <code>NotImplementedError</code>.</p> <p>In my code I already know whether my engine is based off <code>sqlite</code> or <code>sqlserver</code>, so is there a way to tell sqlalchemy to give me the <code>.type.python_type</code> for the dialect that I am using instead of returning <code>Variant</code>?</p>
<python><sqlalchemy>
2022-12-23 14:02:33
1
9,790
Meitham
74,900,454
16,389,095
Python TypeError: takes 4 positional arguments but 5 were given
<p>I'm trying to design a user interface in Python / Kivy MD. Starting from an <a href="https://github.com/kivymd/KivyMD/wiki/Components-DropDownItem" rel="nofollow noreferrer">example</a>, I developed an interface with a simple drop down widget, such as a combobox. When the app runs, the widget should display its elements. Here is the code:</p> <pre><code>from kivy.lang import Builder from kivy.metrics import dp from kivy.properties import StringProperty from kivymd.uix.list import OneLineIconListItem from kivymd.app import MDApp from kivymd.uix.menu import MDDropdownMenu KV = ''' #:import toast kivymd.toast.toast MDScreen MDDropDownItem: id: drop_item pos_hint: {'center_x': .5, 'center_y': .5} text: 'FREQUENCY' on_release: app.menu.open() select: toast(self.current_item) ''' class MainApp(MDApp): def __init__(self, **kwargs): super().__init__(**kwargs) self.screen = Builder.load_string(KV) myItems = ['300 Hz', '200 Hz', '100 Hz'] menu_items = self.create_combobox_items(self.screen.ids.drop_item, myItems) # WAY 1: WORKS --- COMMENT THIS BLOCK ######################################## self.menu = MDDropdownMenu( caller = self.screen.ids.drop_item, items = menu_items, position = &quot;bottom&quot;, #top, bottom, center, auto width_mult = 2, ) ######################################## # WAY 2: DOESN'T WORK --- UNCOMMENT ONLY THE FOLLOWING LINE ######################################## self.menu = self.create_dropdown_object(self.screen.ids.drop_item, menu_items, 'auto', 2) ######################################## self.menu.bind() def create_dropdown_object(dropDownItem, menuItems, pos, width): ddMenu = MDDropdownMenu( caller = dropDownItem, items = menuItems, position = pos, #top, bottom, center, auto width_mult = width, ) return ddMenu def create_combobox_items(self, dropDownItem, itemList): comboBoxItems = [ { &quot;viewclass&quot;: &quot;OneLineListItem&quot;, &quot;text&quot;: itemList[i], &quot;height&quot;: dp(56), &quot;on_release&quot;: lambda x = itemList[i]: self.set_item(dropDownItem, self.menu, x), } for i in range(len(itemList)) ] return comboBoxItems def set_item(self, dropDownItem, dropDownMenu, textItem): dropDownItem.set_item(textItem) dropDownMenu.dismiss() def build(self): return self.screen if __name__ == '__main__': MainApp().run() </code></pre> <p>This code (WAY 1) works fine. I'm trying to develop a method for the MDDropDownItem widget creation, namely <em>create_dropdown_object</em>. When I comment the way 1 code block, and use the method defined below (WAY 2), I get the <em>TypeError: create_dropdown_object() takes 4 positional arguments but 5 were given</em>. How it can be possible considering that I passed 4 arguments to the function?</p>
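<p>A sketch of the fix: <code>create_dropdown_object</code> is defined inside the class but without <code>self</code>, and Python passes the instance automatically when calling <code>self.create_dropdown_object(...)</code> — so the four explicit arguments become five. Adding <code>self</code> resolves it:</p> <pre class="lang-py prettyprint-override"><code>def create_dropdown_object(self, dropDownItem, menuItems, pos, width):
    return MDDropdownMenu(caller=dropDownItem, items=menuItems,
                          position=pos, width_mult=width)
</code></pre>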
<python><kivy><kivy-language><kivymd>
2022-12-23 13:58:08
1
421
eljamba
74,900,422
4,865,723
Open & write to a file-like object but avoid AttributeError about missing open()
<p>I want to have a function that can write content into a file-like object. It accepts <code>pathlib.Path</code> objects or <code>io.StringIO</code>. The first one needs to be <code>open()</code>'ed first, the second one not.</p> <p>Because of that, it seems to me I have to explicitly type-check the object to know whether I have to call <code>open()</code> on it or not.</p> <p>Is there an elegant and pythonic way to work around this?</p> <p>Here is an MWE.</p> <pre><code>#!/usr/bin/env python3
import io
import pathlib
import typing


def foobar(file_like_obj: typing.Union[pathlib.Path, typing.IO]):
    with file_like_obj.open('w') as handle:
        handle.write('foobar')


if __name__ == '__main__':
    p = pathlib.Path.home() / 'my.txt'
    foobar(p)

    sio = io.StringIO()
    foobar(sio)
</code></pre> <p>The second call of <code>foobar()</code> here causes this error:</p> <pre><code>AttributeError: '_io.StringIO' object has no attribute 'open'
</code></pre> <p>One pythonic-like way I know to prevent explicit type checking is to use <code>try-except</code> blocks. But this would break my <code>with</code> block.</p>
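<p>One sketch that keeps the <code>with</code> block: wrap an already-open stream in <code>contextlib.nullcontext</code>, so both branches hand back a context manager (and the caller's stream is not closed). A single <code>isinstance</code> check remains, but confined to one line:</p> <pre class="lang-py prettyprint-override"><code>import contextlib
import pathlib

def foobar(file_like_obj):
    if isinstance(file_like_obj, pathlib.Path):
        ctx = file_like_obj.open('w')
    else:
        ctx = contextlib.nullcontext(file_like_obj)  # stream: use as-is
    with ctx as handle:
        handle.write('foobar')
</code></pre>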
<python><file>
2022-12-23 13:54:47
1
12,450
buhtz
74,900,305
12,845,199
Regex that keeps only the &quot;steps&quot; strings with a single number before the dash
<p>So I have a pandas.Series as such:</p> <pre><code>s = pd.Series(['1-Onboarding + Retorno', '1.1-Onboarding escolha de bot',
               '2-Seleciona produto', '3-Informa localizacao e cpf',
               '3.1-CPF valido (V.2.0)', '3.2-Obtencao de CEP'], name='Steps')

0           1-Onboarding + Retorno
1    1.1-Onboarding escolha de bot
2              2-Seleciona produto
3      3-Informa localizacao e cpf
4           3.1-CPF valido (V.2.0)
5              3.2-Obtencao de CEP
</code></pre> <p>The idea here is to &quot;filter&quot; the Series so I keep only the strings with a single number before the dash.</p> <pre><code>s = pd.Series(['1-Onboarding + Retorno', '2-Seleciona produto',
               '3-Informa localizacao e cpf'], name='Steps')

0         1-Onboarding + Retorno
1            2-Seleciona produto
2    3-Informa localizacao e cpf
Name: Steps, dtype: object
</code></pre> <p>Any ideas on how I could do that? I am having difficulties formulating the regex. I know I should use something like the following to apply such a filter in Pandas:</p> <pre><code>s.str.contains('', regex=True)
</code></pre>
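<p>A sketch: anchor at the start and require digits followed immediately by the dash, so &quot;1.1-&quot; and &quot;3.2-&quot; fail to match:</p> <pre class="lang-py prettyprint-override"><code>filtered = s[s.str.match(r'\d+-')]   # match() anchors at the start of the string
# equivalently: s[s.str.contains(r'^\d+-', regex=True)]
</code></pre>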
<python><pandas><regex>
2022-12-23 13:42:49
2
1,628
INGl0R1AM0R1
74,899,785
14,224,895
psycopg2.errors.ActiveSqlTransaction: CREATE DATABASE cannot run inside a transaction block
<p>I am trying to create a Django app that creates a new database for every user when he/she signs up. I am going with this approach for certain reasons. I have tried many ways, using management commands and even Celery, but I keep getting the same error.</p> <pre><code>2022-12-23 07:16:07.410 UTC [49] STATEMENT:  CREATE DATABASE tenant_asdadsad
[2022-12-23 07:16:07,415: ERROR/ForkPoolWorker-4] Task user.utils.create_database[089b0bc0-0b5f-4199-8cf3-bc336acc7624] raised unexpected: ActiveSqlTransaction('CREATE DATABASE cannot run inside a transaction block\n')
Traceback (most recent call last):
  File &quot;/usr/local/lib/python3.9/site-packages/celery/app/trace.py&quot;, line 451, in trace_task
    R = retval = fun(*args, **kwargs)
  File &quot;/usr/local/lib/python3.9/site-packages/celery/app/trace.py&quot;, line 734, in __protected_call__
    return self.run(*args, **kwargs)
  File &quot;/app/user/utils.py&quot;, line 45, in create_database
    cursor.execute(f'CREATE DATABASE tenant_{tenant_id}')
psycopg2.errors.ActiveSqlTransaction: CREATE DATABASE cannot run inside a transaction block
</code></pre> <p>This is my task:</p> <pre><code>@shared_task
def create_database(tenant_id):
    conn = psycopg2.connect(database=&quot;mydb&quot;, user=&quot;dbuser&quot;, password=&quot;mypass&quot;, host=&quot;db&quot;)
    cursor = conn.cursor()
    transaction.set_autocommit(True)
    cursor.execute(f'CREATE DATABASE tenant_{tenant_id}')
    cursor.execute(f'GRANT ALL PRIVILEGES ON DATABASE tenant_{tenant_id} TO dbuser')
    cursor.close()
    conn.close()
</code></pre> <p>I have tried several ways but I always get the same error.</p> <p>This is my API call:</p> <pre><code>def create(self, request, *args, **kwargs):
    serializer_class = mySerializer(data=request.data)
    if serializer_class.is_valid():
        validated_data = serializer_class.validated_data
        org_data = validated_data[&quot;org&quot;]      # renamed: 'or' is a reserved word
        org = Org.objects.create(**org_data)
        create_database.delay(str(org.id))
        return Response(create_user(validated_data))
</code></pre>
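<p>A sketch of the usual fix: it is the raw psycopg2 connection that must be in autocommit mode — <code>transaction.set_autocommit(True)</code> only affects Django's own default connection, not this hand-opened one:</p> <pre class="lang-py prettyprint-override"><code>@shared_task
def create_database(tenant_id):
    conn = psycopg2.connect(database='mydb', user='dbuser',
                            password='mypass', host='db')
    conn.autocommit = True   # no implicit transaction block around execute()
    with conn.cursor() as cursor:
        cursor.execute(f'CREATE DATABASE tenant_{tenant_id}')
        cursor.execute(f'GRANT ALL PRIVILEGES ON DATABASE tenant_{tenant_id} TO dbuser')
    conn.close()
</code></pre>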
<python><django><postgresql><psycopg2>
2022-12-23 12:44:47
1
927
Abdullah Mujahid
74,899,706
4,718,423
bottom up method development parameter passing
<p>I develop bottom up, starting with small simple methods and building towards the big full-fledged implementation:</p> <pre><code>class Pop(object):

    def welcome(self, name, new_member=False):
        response = &quot;&quot;
        if new_member:
            response = &quot; NOT&quot;
        return str(&quot;hello there &quot;+name+&quot;, you seem&quot;+response+&quot; to be a member\n&quot;)

    def ageVerification(self, name, age, new_member=False):
        the_welcome_string = self.welcome(name, new_member)
        minimum = &quot;&quot;
        excuse = &quot;&quot;
        if age &lt; 16:
            minimum = &quot; NOT&quot;
            excuse = &quot;, sorry&quot;
        return str(the_welcome_string+str(age)+&quot; is&quot;+minimum+&quot; the minimum required age to buy beer in Belgium&quot;+excuse+&quot;\n&quot;)

    def theWholething(self, name, age, address, new_member=False):
        if age &lt; 16:
            apology = str(&quot;you cannot order any beer\n&quot;)
        else:
            apology = str(&quot;your beer will be shipped to &quot;+address+&quot;\n&quot;)
        return str(self.ageVerification(name, age, new_member)+apology)

# EOF
</code></pre> <p>My question is whether it is normal that, when I reach the <code>theWholething</code> method, I carry along all the parameters of the previously defined methods. Is this pythonic?</p> <p>My population class has almost 20 &quot;helper&quot; methods called in <code>theWholething</code>, and it seems I am just fiddling with parameters to get them in the right order ...</p> <pre><code>theWholeThing(self,
              name,
              age,
              address,
              first_date_entered,
              last_date_entered,
              purchased_amount,
              favorite_beer,
              promotional_code,
              and_so_on0,
              and_so_on1,
              and_so_on2,
              and_so_on3,
              and_so_on4,
              and_so_on5,
              and_so_on6,
              and_so_on7,
              and_so_on8,
              and_so_on9,
              and_so_on10,
              registered=True):
</code></pre>
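<p>A common remedy — a sketch, one of several options — is to bundle the parameters into a small value object (a dataclass) so each helper takes one argument instead of twenty:</p> <pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass

@dataclass
class Member:
    name: str
    age: int
    address: str
    registered: bool = True
    # ... remaining fields ...

class Pop:
    def the_whole_thing(self, member: Member) -&gt; str:
        ...   # helpers also take the Member, reading only what they need
</code></pre>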
<python>
2022-12-23 12:34:31
1
1,446
hewi
74,899,586
3,423,825
How to add permissions in Django post_save signal?
<p>I have a problem adding permissions in a Django post_save signal, because the m2m relationships don't persist. They are correctly displayed with <code>user_permissions.all()</code> after assignment, but as soon as I save the model again the queryset comes back empty. What am I doing wrong?</p> <p>I don't have the same problem if I assign permissions at the object level with <code>guardian</code>.</p> <p><strong>models.py</strong></p> <pre><code>class User(AbstractBaseUser, PermissionsMixin):
    username = models.CharField(db_index=True, max_length=255, unique=True, null=True, blank=True)
    last_name = models.CharField(max_length=255, null=True, blank=True)
    first_name = models.CharField(max_length=255, null=True, blank=True)
    email = models.EmailField(db_index=True, unique=True)

    def save(self, *args, **kwargs):
        return super(User, self).save(*args, **kwargs)


class Manager(User):
    ...

    def save(self, *args, **kwargs):
        return super(Manager, self).save(*args, **kwargs)
</code></pre> <p><strong>signals.py</strong></p> <pre><code>@receiver(post_save, sender=Manager)
def assign_manager_permissions(sender, instance, created, raw, using, **kwargs):
    content_type = ContentType.objects.get_for_model(Manager)
    print('Current permissions:')
    for p in instance.user_permissions.all():
        print(p)
    for codename in ['view_manager', 'change_manager', 'add_manager', 'delete_manager']:
        permission = Permission.objects.get(content_type=content_type, codename=codename)
        instance.user_permissions.add(permission)
    instance.refresh_from_db()
    print('After refresh permissions:')
    for p in instance.user_permissions.all():
        print(p)
</code></pre> <p>When I save the model the result is always the same:</p> <pre><code>django-1  | Current permissions:
django-1  |
django-1  | After refresh permissions:
django-1  | user | manager | Can add manager
django-1  | user | manager | Can change manager
django-1  | user | manager | Can delete manager
django-1  | user | manager | Can view manager
</code></pre>
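<p>A hedged sketch: if the instance is saved through the admin or a ModelForm, the form saves its own m2m data <em>after</em> <code>post_save</code> fires, wiping what the signal added. Deferring the assignment until the surrounding transaction commits sidesteps that ordering:</p> <pre class="lang-py prettyprint-override"><code>from django.db import transaction

@receiver(post_save, sender=Manager)
def assign_manager_permissions(sender, instance, created, **kwargs):
    if not created:
        return

    def add_perms():
        ct = ContentType.objects.get_for_model(Manager)
        perms = Permission.objects.filter(
            content_type=ct,
            codename__in=['view_manager', 'change_manager',
                          'add_manager', 'delete_manager'])
        instance.user_permissions.add(*perms)

    transaction.on_commit(add_perms)   # runs after any form m2m save
</code></pre>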
<python><django>
2022-12-23 12:21:01
1
1,948
Florent
74,899,506
996,366
How to get log mel spectrogram of specific shape using librosa
<p>I have some audio files which I want to convert to log mel spectrograms. I need the log mel spectrogram to be of shape <code>(512,512)</code>. I changed n_mels to 512 to get the first dimension to 512, but I am unable to get the second dimension to 512 for all audios. I tried experimenting with hop_length values by trial and error; for some audio files it works and for others it doesn't. How do we get a log mel spectrogram of a specific shape using librosa?</p> <pre><code>path = &quot;path/to/my/file&quot;
scale, sr = librosa.load(path)
mel_spectrogram = librosa.feature.melspectrogram(scale, sr, n_fft=2048, hop_length=512, n_mels=512, fmax=8000)
log_mel_spectrogram = librosa.power_to_db(mel_spectrogram)
librosa.display.specshow(log_mel_spectrogram, x_axis=&quot;time&quot;, y_axis=&quot;mel&quot;, sr=sr)
</code></pre>
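<p>A sketch: with the default <code>center=True</code> the number of frames is roughly <code>1 + len(y) // hop_length</code>, so one fixed <code>hop_length</code> can never yield 512 frames for every clip length. Derive the hop from the signal length, then pad/trim to exactly 512 (the rounding is an assumption worth verifying on your files):</p> <pre class="lang-py prettyprint-override"><code>import numpy as np
import librosa

target = 512
hop = max(1, int(np.ceil(len(scale) / target)))
mel = librosa.feature.melspectrogram(y=scale, sr=sr, n_fft=2048,
                                     hop_length=hop, n_mels=512, fmax=8000)
mel = librosa.util.fix_length(mel, size=target, axis=1)  # pad/trim frames
log_mel = librosa.power_to_db(mel)                       # shape (512, 512)
</code></pre>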
<python><audio><librosa><spectrogram><mel>
2022-12-23 12:13:09
1
15,212
Eka
74,899,431
19,574,336
Python C API, send a python function pointer to c and execute it
<p>I want to create a function in Python, pass its function pointer to C and execute it there.</p> <p>So my Python file:</p> <pre><code>import ctypes
import example

def tester_print():
    print(&quot;Hello&quot;)

my_function_ptr = ctypes.CFUNCTYPE(None)(tester_print)

example.pass_func(my_function_ptr)
</code></pre> <p>And here is what my function in C looks like:</p> <pre class="lang-c prettyprint-override"><code>typedef void (*MyFunctionType)(void);

PyObject* pass_func(PyObject *self, PyObject* args)
{
    PyObject* callable_object;
    if (!PyArg_ParseTuple(args, &quot;O&quot;, &amp;callable_object))
        return NULL;

    if (!PyCallable_Check(callable_object))
    {
        PyErr_SetString(PyExc_TypeError, &quot;The object is not a callable function.&quot;);
        return NULL;
    }

    PyObject* function_pointer = PyCapsule_New(callable_object, &quot;my_function_capsule&quot;, NULL);
    if (function_pointer == NULL) return NULL;

    MyFunctionType my_function = (MyFunctionType) PyCapsule_GetPointer(function_pointer, &quot;my_function_capsule&quot;);

    if (my_function == NULL) return NULL;

    my_function(); // Or (*my_function)() Both same result.
    // PyCapsule_Free(function_pointer);

    Py_RETURN_NONE;
}
</code></pre> <p>Doing this causes a segfault on the my_function() call. How can I do this?</p>
<python><c><python-c-api>
2022-12-23 12:02:44
1
859
Turgut
74,899,353
10,430,394
Discord bot doesn't show up in server members, even though invitation button says it has joined
<p>I made a bot with discord.py and have already invited it to my own private server. It works on that one without issues, and now I wanted to add it to another server on which I have mod permissions. So I sent an invitation to the new channel to my bot and clicked the accept button. I got redirected to the chat, and the button in the DMs of the bot shows that it has joined the server. However, it does not show up in the members of the server, neither in the specific channel I invited it to, nor in the complete members list. When I run my py script with that server's guild ID, it gives me the following error message.</p> <pre class="lang-py prettyprint-override"><code>[2022-12-23 04:27:15] [ERROR   ] discord.client: Ignoring exception in on_ready
Traceback (most recent call last):
  File &quot;C:\Python\Python38\lib\site-packages\discord\client.py&quot;, line 409, in _run_event
    await coro(*args, **kwargs)
  File &quot;C:\Users\chris\Desktop\Jap App\Japanese_Vocabulary_Bot.py&quot;, line 75, in on_ready
    await tree.sync(guild=discord.Object(id=guild))
  File &quot;C:\Python\Python38\lib\site-packages\discord\app_commands\tree.py&quot;, line 1071, in sync
    data = await self._http.bulk_upsert_guild_commands(self.client.application_id, guild.id, payload=payload)
  File &quot;C:\Python\Python38\lib\site-packages\discord\http.py&quot;, line 738, in request
    raise Forbidden(response, data)
discord.errors.Forbidden: 403 Forbidden (error code: 50001): Missing Access
</code></pre> <p>The only solutions to this error message I've seen are to give the bot the necessary permissions, either with a role or in the creation tab of the bot. I already gave it permissions during creation, and I can't give it a role since I can't see it on the server (even though it tells me it already joined).</p> <p>So what can I do? It still works fine with the guild ID of my private server. But when I give it the guild ID of the channel in the other server, it gives me that error. Even though it says it has joined that server.</p> <p>EDIT: I tried to add the OAuth2 option <code>application.commands</code> and a few others in the &quot;Bot&quot; tab, but I cannot confirm my settings anywhere. The moment I tick one of the boxes, change tabs and go back, all my settings have been reset. I don't know how to make the changes on the webpage stick. As far as I can tell, the problem is that Discord has changed something about the way you authorise bots to do things in a server/channel, and because of that I need to go to <a href="https://discord.com/developers/applications/APPLICATION_ID/oauth2/url-generator" rel="nofollow noreferrer">https://discord.com/developers/applications/APPLICATION_ID/oauth2/url-generator</a></p> <p>and set the appropriate options for my application. But I don't understand what the <code>url-generator</code> does and how to confirm my choices on the webpage. I selected &quot;bot&quot; and &quot;applications.commands&quot; in the OAuth2 tab under the URL section. 
That opened a bunch of checkboxes with permissions, from which I selected the following:</p> <ul> <li><a href="https://i.sstatic.net/HqukG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HqukG.png" alt="selected bot permissions" /></a></li> </ul> <p>Now I want to confirm my choices, but I don't know how.</p> <p>EDIT 2: I figured out that I'm supposed to choose my bot's privileges and then use the generated URL to allow it to join a particular server, but no matter which options I pick, the &quot;Generated URL&quot; box only ever says: &quot;Please provide a template uri&quot;.</p> <p>I'm going through the docs from A-Z right now, but I don't understand why the box doesn't generate any kind of link. I'm using Brave as a browser, so I deactivated Shields just in case, but no change.</p>
<python><discord><discord.py>
2022-12-23 11:54:36
1
534
J.Doe
74,899,335
9,185,021
TensorFlow unable to set large precision of tensors for 12 decimals with float 64
<p>Hello, I am using TensorFlow. When I try to load float64 tensor values, the decimals are lost in the displayed output. Any help?</p> <pre><code>tensor = tf.constant([57.695030261422, 11.911734826863], dtype=tf.float64)
</code></pre> <p><a href="https://i.sstatic.net/mG0Q3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mG0Q3.png" alt="enter image description here" /></a></p>
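<p>For what it's worth, a quick sketch to check whether the digits are actually lost or merely truncated by the default repr (a float64 carries roughly 15-16 significant digits):</p> <pre><code>import numpy as np
import tensorflow as tf

tensor = tf.constant([57.695030261422, 11.911734826863], dtype=tf.float64)
np.set_printoptions(precision=15)      # widen NumPy's display precision
print(tensor.numpy())                  # the stored float64 values, digits intact
print(f'{tensor.numpy()[0]:.12f}')     # 57.695030261422
</code></pre>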
<python><tensorflow><precision><floating-accuracy>
2022-12-23 11:52:31
0
339
Mixalis Navridis
74,899,302
12,242,085
How to convert JSON and nested JSON inside DataFrame columns into new columns in Python Pandas?
<p>I have a DataFrame like below.</p> <p>Data types:</p> <ul> <li>COL1 - float</li> <li>COL2 - int</li> <li>COL3 - int</li> <li>COL4 - float</li> <li>COL5 - float</li> <li>COL6 - object</li> <li>COL7 - object</li> </ul> <p>Source code:</p> <pre><code>a = pd.DataFrame()
a[&quot;COL1&quot;] = [0.0, 800.0]
a[&quot;COL2&quot;] = [2, 3]
a[&quot;COL3&quot;] = [123, 444]
a[&quot;COL4&quot;] = [1500.0, 1600.0]
a[&quot;COL5&quot;] = [700.0, 850.0]
a[&quot;COL6&quot;] = ['{&quot;account&quot;: {&quot;sector&quot;: 2, &quot;other&quot;: 15}}', np.nan]
a[&quot;COL7&quot;] = ['{&quot;value&quot;: &quot;ab&quot;}', np.nan]
</code></pre> <p><a href="https://i.sstatic.net/OapAk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OapAk.png" alt="enter image description here" /></a></p> <ul> <li>COL6 and COL7 contain JSON; COL6 contains nested JSON.</li> <li>Furthermore, there could be missing values in both COL6 and COL7.</li> <li>I need to convert the values from COL6 and COL7 to a &quot;normal&quot; form; however, I cannot even imagine how to convert COL6 (nested JSON) into DataFrame columns with values.</li> </ul> <p>Desired output:</p> <p>For COL7 the output is like below; however, I cannot even imagine how the output for COL6 should look:</p> <pre><code>COL1  | COL2 | COL3 | COL4   | COL5  | value |
------|------|------|--------|-------|-------|
0.0   | 2    | 123  | 1500.0 | 700.0 | ab    |
800.0 | 3    | 444  | 1600.0 | 850.0 | NaN   |
</code></pre> <p>How can I do that in Python Pandas?</p> <p>The following solution does not work: <code>pd.json_normalize(df['COL7'].apply(ast.literal_eval))</code>, ERROR: <code>ValueError: malformed node or string: nan</code></p> <p>Source code (be aware that when I read it in Pandas there are also NaNs):</p> <pre><code>{'COL1': [0.0, 0.0, 0.0],
 'COL2': [2, 0, 33],
 'COL3': [2162561990, 2167912785, 599119703],
 'COL4': [1500.0, 500.0, 3500.0],
 'COL5': [750.0, 0.0, 3500.0],
 'COL6': ['{&quot;account&quot;: {&quot;sector&quot;: 4, &quot;other&quot;: 10}
          , &quot;account_2&quot;: {&quot;sector&quot;: 0, &quot;other&quot;: 0}
          , &quot;account_3&quot;: {&quot;sector&quot;: 6, &quot;other&quot;: 8}}'],
 'COL7': ['{&quot;value&quot;: &quot;cc&quot;
          , &quot;value_2&quot;: 15.58
          , &quot;value_3&quot;: 646}']}
</code></pre>
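<p>For context, <code>ast.literal_eval</code> raises exactly that <code>malformed node or string: nan</code> error when it is handed the float <code>NaN</code> instead of a string. A minimal sketch of a guard that skips the missing rows (this covers the flat COL7 case; the nested COL6 case is the part I cannot figure out):</p> <pre><code>import ast
import pandas as pd

parsed = a['COL7'].apply(lambda s: ast.literal_eval(s) if isinstance(s, str) else {})
flat = pd.json_normalize(parsed.tolist())          # empty dicts become all-NaN rows
out = a.drop(columns=['COL6', 'COL7']).join(flat)  # assumes the default RangeIndex
</code></pre>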
<python><json><pandas><dataframe>
2022-12-23 11:48:43
2
2,350
dingaro
74,899,260
6,346,482
Don't make a second level when aggregating in Pandas
<p>Indexes and levels in Pandas still drive me nuts. My dataframe structure looks like this:</p> <pre><code>Indexstruktur
Datensatz
Workload
Selectivity
Dimensionen
Anzahl Datenpunkte
Operation
Measure
Wert
</code></pre> <p>I now group by all columns except the last one (&quot;Wert&quot;). That one is used to aggregate, where I calculate the average and the standard deviation:</p> <pre><code>mean_df = df.groupby([&quot;Indexstruktur&quot;, &quot;Datensatz&quot;,&quot;Workload&quot;, &quot;Selectivity&quot;, &quot;Dimensionen&quot;, &quot;Anzahl Datenpunkte&quot;, &quot;Operation&quot;, &quot;Measure&quot;]).agg({&quot;Wert&quot;:['mean','std']})
mean_df.reset_index(inplace=True)
</code></pre> <p>The result has two column levels:</p> <pre><code>index
Indexstruktur
Datensatz
Workload
Selectivity
Dimensionen
Anzahl Datenpunkte
Operation
Measure
Wert
mean
std
</code></pre> <p>How can I get rid of &quot;Wert&quot; and just make &quot;mean&quot; and &quot;std&quot; two columns on the same level as the rest?</p> <p>mean_df.columns returns:</p> <pre><code>MultiIndex([(     'Indexstruktur',     ''),
            (         'Datensatz',     ''),
            (          'Workload',     ''),
            (       'Selectivity',     ''),
            (       'Dimensionen',     ''),
            ('Anzahl Datenpunkte',     ''),
            (         'Operation',     ''),
            (           'Measure',     ''),
            (              'Wert', 'mean'),
            (              'Wert',  'std')],
           )
</code></pre> <p>I tried reset_index, droplevel, as_index=False, but nothing changes anything. What is the solution for this?</p>
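<p>For reference, a sketch of the same aggregation written with pandas' named aggregation, which never creates the second level in the first place:</p> <pre><code>mean_df = (df.groupby([&quot;Indexstruktur&quot;, &quot;Datensatz&quot;, &quot;Workload&quot;, &quot;Selectivity&quot;,
                       &quot;Dimensionen&quot;, &quot;Anzahl Datenpunkte&quot;, &quot;Operation&quot;, &quot;Measure&quot;])
             .agg(mean=(&quot;Wert&quot;, &quot;mean&quot;), std=(&quot;Wert&quot;, &quot;std&quot;))
             .reset_index())
</code></pre> <p>Alternatively, an existing two-level result can be flattened with <code>mean_df.columns = [c[1] or c[0] for c in mean_df.columns]</code>, which keeps the first-level name wherever the second level is empty.</p>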
<python><pandas><multi-index>
2022-12-23 11:44:18
1
804
Hemmelig
74,899,104
2,444,023
How to get a convolution of 3 or more continuous PDFs to obtain the average of the PDFs in Python and or R
<p>Say I have three random variables. I would like to do a convolution to obtain the <strong>average</strong>. How do I do this in Python and/or R?</p> <h3>Edit 1</h3> <p>Also, it seems the default behavior is to have the convolution size larger than any of the inputs. I will assume that all of the inputs are the same size. Is it possible to have the resulting convolution be the same size as the vectors which are being used as inputs to the convolution?</p> <p>For example, if <code>x1</code> is <code>n=100</code> then I would like the resulting convolution to be <code>n=100</code></p> <h2>Edit 2 - Added Example</h2> <p>In theory the convolution should be close to what I can calculate analytically.</p> <pre class="lang-py prettyprint-override"><code>import numpy as np
from functools import reduce

rng = np.random.default_rng(42)

n, u1, u2, u3, sd = 100, 10, 20, 6, 5
u_avg = np.mean([u1,u2,u3])

a = rng.normal(u1, sd, size=n)
b = rng.normal(u2, sd, size=n)
c = rng.normal(u3, sd, size=n)
z = rng.normal(u_avg, sd/np.sqrt(3), size=n)

convolution = rng.choice(reduce(np.convolve, [a, b, c]), size=n)

print(&quot;true distribution&quot;)
print(np.round(np.quantile(z, [0.01, 0.25, 0.5, 0.75, 0.99]), 2))
print(&quot;convolution&quot;)
print(np.round(np.quantile(convolution, [0.01, 0.25, 0.5, 0.75, 0.99]),2))
</code></pre> <p>If the convolution is working then the <code>convolution</code> should be close to the <code>true</code> distribution.</p> <pre><code>true distribution
[ 3.9   9.84 12.83 14.89 18.45]
convolution
[5.73630000e+03 5.47855750e+05 2.15576037e+06 6.67763665e+06 8.43843281e+06]
</code></pre> <p>It looks like the convolution is not even close.</p>
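<p>One hedged note: the distribution of the average can also be sampled directly by element-wise averaging, which gives a baseline that any density-level convolution should agree with. <code>np.convolve</code> applied to raw sample arrays is a different operation (a sliding dot product of the sample vectors, hence the huge numbers above):</p> <pre><code>avg_samples = (a + b + c) / 3   # distribution of the average, sampled directly
print(np.round(np.quantile(avg_samples, [0.01, 0.25, 0.5, 0.75, 0.99]), 2))
# lands near the 'true distribution' quantiles, unlike the np.convolve result
</code></pre>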
<python><r><statistics><probability><convolution>
2022-12-23 11:26:10
1
2,838
Alex
74,899,068
7,333,766
Is it considered bad code practice to put code into __init__.py?
<p>Most of the <code>__init__.py</code> files of a project I work on (Python 3.7) are empty, but some have code in them.</p> <p>I am in charge of refactoring.</p> <p>Should I move the code out of the <code>__init__.py</code> files as much as possible?</p>
<python><python-3.x>
2022-12-23 11:21:02
0
2,215
Eli O.
74,898,922
5,362,515
Exception handling in Python asyncio code
<p>I am running the following example:</p> <pre class="lang-py prettyprint-override"><code>import asyncio

async def longwait(n):
    print('doing long wait')
    await asyncio.sleep(n)

async def main():
    long_task = asyncio.create_task(longwait(10))
    try:
        result = await asyncio.wait_for(asyncio.shield(long_task), timeout=1)  # line 'a'
        print(result)
    except asyncio.exceptions.TimeoutError:
        print('Taking longer than usual. Please wait.')
        result = await long_task   # line 'b'
        print(result)

asyncio.run(main())

# why not use the `result` from try block?
# maybe coz we tried to print result before it was assigned any value in try block?
</code></pre> <p>What I don't get is why we are writing line 'b' when we already have line 'a' where we are assigning a value to the <code>result</code> variable. We could directly use the <code>result</code> from the <code>try</code> clause, no?</p> <p>My reasoning is that if a <code>TimeoutError</code> exception is raised before the future in line 'a' is completed, it is immediately handled in the <code>except</code> clause. The <code>except</code> clause will try to print <code>result</code>. If we don't use line 'b', then since no value was assigned to <code>result</code> in the <code>try</code> clause, the <code>except</code> clause will raise <code>UnboundLocalError</code>. Is this correct reasoning? Or is there something I am missing?</p>
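<p>A minimal sketch (independent of asyncio) of the behaviour this reasoning relies on: a name assigned only in the interrupted part of a <code>try</code> block is simply unbound inside the handler:</p> <pre><code>def demo():
    try:
        raise TimeoutError          # stands in for asyncio.wait_for timing out
        result = 'never reached'
    except TimeoutError:
        print(result)               # UnboundLocalError: local used before assignment

demo()
</code></pre>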
<python><python-asyncio>
2022-12-23 11:04:14
1
327
mayankkaizen
74,898,806
4,290,315
Warning: MariaDB version ['8.0', '31'] is less than 10.6 which is not supported by Frappe
<p>I am trying to create a new site with <code>bench new-site library.test</code>:</p> <pre class="lang-none prettyprint-override"><code>Warning: MariaDB version ['8.0', '31'] is less than 10.6 which is not supported by Frappe
Installing frappe...
Updating DocTypes for frappe        : [=====       ] 12%Syntax error in query:
create sequence if not exists access_log_id_seq nocache nocycle
None
There was an issue while migrating the DocType: Access Log
</code></pre> <p>My current MariaDB version is <strong>mariadb Ver 15.1 Distrib 10.10.2-MariaDB, for osx10.17 (arm64) using EditLine wrapper</strong>.</p>
<python><mariadb><frappe>
2022-12-23 10:51:32
1
606
ketan pradhan
74,898,775
19,155,645
pandas: plot .value_counts() of same column from two different dataframes
<p>I have two dataframes with the same columns (different values; it is important for me not to combine the two).</p> <p>I would like to make a bar plot of the <code>.value_counts()</code> of the same column for these dataframes (e.g. column 'A' in dataframe1 will be green, and column 'A' in dataframe2 will be blue).</p> <p>Of course, it is also important that the X-axis values are compared correctly (that is, each X label shows the value_counts of val1 for both, then val2, etc.).</p> <p>For now I can only do it separately for each, for example: <code>df1['A'].value_counts(normalize=True).plot(kind='bar',title='col A distribution dataframe1')</code></p> <p>It is not vital for me to do it in one line (I can also use &quot;native&quot; matplotlib).</p>
<python><pandas><matplotlib>
2022-12-23 10:48:14
1
512
ArieAI
74,898,759
15,178,267
Django: how to write a conditional statement to check if a post status was changed from live to cancel in Django?
<p>I want to write a condition that shows a warning message when a user tries to change the status of an object to <strong>cancelled</strong> when the object already has a <strong>participant</strong>. The conditional statement seems not to be working.</p> <pre><code> class PredictionUpdate(ProductInline, UpdateView):

    def get_context_data(self, **kwargs):
        ctx = super(PredictionUpdate, self).get_context_data(**kwargs)
        ctx['named_formsets'] = self.get_named_formsets()
        return ctx

    def get_current_object(self, id):
        prediction = Predictions.objects.get(id=id)
        return {
            &quot;prediction&quot;:prediction
        }

    def get_named_formsets(self):
        return {
            'variants': PredictionDataFormSet(self.request.POST or None, self.request.FILES or None, instance=self.object, prefix='variants'),
        }

    def dispatch(self, request ,*args, **kwargs):
        obj = self.get_object()
        if obj.user != self.request.user:
            messages.warning(self.request, &quot;You are not allowed to edit this bet!&quot;)
            return redirect(&quot;core:dashboard&quot;)
        if obj.status == &quot;finished&quot;:
            messages.warning(self.request, &quot;You cannot edit this bet again, send an update request to continue.&quot;)
            return redirect(&quot;core:dashboard&quot;)
        # raise Http404(&quot;You are not allowed to edit this Post&quot;)
        return super(PredictionUpdate, self).dispatch(request, *args, **kwargs)

    def form_valid(self, form):
        instance = form.instance
        if instance.participants.exists() and instance.status == 'cancelled':
            messages.warning(self.request, &quot;You cannot cancel a bet that already have participants&quot;)
            return redirect(&quot;core:dashboard&quot;)
        else:
            form.save()
        return HttpResponseRedirect(self.get_success_url())
</code></pre> <p>models.py</p> <pre><code>STATUS = (
    (&quot;live&quot;, &quot;Live&quot;),
    (&quot;in_review&quot;, &quot;In review&quot;),
    (&quot;pending&quot;, &quot;Pending&quot;),
    (&quot;cancelled&quot;, &quot;Cancelled&quot;),
    (&quot;finished&quot;, &quot;Finished&quot;),
)

class Predictions(models.Model):
    user = models.ForeignKey(User, on_delete=models.SET_NULL, null=True)
    title = models.CharField(max_length=1000)
    participants = models.ManyToManyField(User, related_name=&quot;participants&quot;, blank=True)
    status = models.CharField(choices=STATUS, max_length=100, default=&quot;in_review&quot;)
</code></pre> <p>It sets the post to cancelled even if there is already a participant. What I want is this: provided there is already at least 1 participant in the object, I want to show a warning message that they cannot cancel the post, and redirect them back to the dashboard.</p>
<python><django>
2022-12-23 10:46:26
2
851
Destiny Franks
74,898,743
18,749,472
Django session variables not saving when back button clicked
<p>On my page I am trying to implement a recently viewed section on my home page. The problem is that when I append a new item to request.session[&quot;recently-viewed&quot;], the item I just viewed gets deleted from the list when I load a new page.</p> <p>The item view is a page which displays the details about a specific item. I want that particular item to be added and saved into a session variable. When the user visits any other page, the session variable &quot;recently-viewed&quot; should be saved. Recently viewed items can then be displayed on the home page.</p> <p>There is a similar question that has been asked, but the only answer was a solution using JavaScript. If possible, solutions should stay away from JavaScript.</p> <p><em>views.py</em></p> <pre><code>def item(request, item_id):

    if &quot;recently-viewed&quot; not in request.session:
        request.session[&quot;recently-viewed&quot;] = [item_id]
    else:
        request.session[&quot;recently-viewed&quot;].append(item_id)
</code></pre> <p><strong>when in item view:</strong></p> <p><code>request.session[&quot;recently-viewed&quot;] = [&quot;item1&quot;, &quot;item2&quot;]</code></p> <p><strong>when another page is loaded:</strong></p> <p><code>request.session[&quot;recently-viewed&quot;] = [&quot;item1&quot;]</code></p>
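<p>For reference, a sketch of the mutation pattern Django's session documentation calls out: an in-place <code>append</code> on a value stored in the session is not detected as a change, so the key has to be reassigned (or <code>modified</code> set) for the session to be saved (assuming default session settings):</p> <pre><code>def item(request, item_id):
    viewed = request.session.get(&quot;recently-viewed&quot;, [])
    viewed.append(item_id)
    request.session[&quot;recently-viewed&quot;] = viewed  # reassignment marks the session as dirty
    # equivalently: request.session.modified = True
</code></pre>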
<python><django><session><session-variables><django-sessions>
2022-12-23 10:43:53
1
639
logan_9997
74,898,631
17,491,224
Cannot load table object with sqlalchemy asyncio
<p>I am attempting to load a table using sqlalchemy asyncio. The synchronous way I would run it is as follows:</p> <pre><code>connect_string = 'db_handle://user:password@db_address:port/database'
# where db_handle is postgresql+psycopg2
engine = create_engine(connect_string)
table = Table(table, metadata, autoload=True, autoload_with=engine)
</code></pre> <p>None of the solutions I implemented allow me (a SQLAlchemy Core user) to load my table object to then use for querying (i.e. <code>stmt = select([table.c.col])</code>).</p> <p>I have attempted the following:</p> <pre><code>connect_string = 'db_handle://user:password@db_address:port/database'
# where db_handle is postgresql+asyncpg
engine = create_async_engine(connect_string, echo=True)
metadata = MetaData()
</code></pre> <pre><code>#try 1
table = await Table(table, metadata, autoload=True, autoload_with=engine)
# error 1: sqlalchemy.exc.NoInspectionAvailable: Inspection on an AsyncEngine is currently not supported. Please obtain a connection then use ``conn.run_sync`` to pass a callable where it's possible to call ``inspect`` on the passed connection. (Background on this error at: https://sqlalche.me/e/14/xd3s)
</code></pre> <pre><code>#try 2
metadata.bind(db_engine_object)
table = await Table(table, metadata, autoload=True)
# error 2: TypeError: 'NoneType' object is not callable
</code></pre> <pre><code>#try 3
connection = db_engine_object.connect()
table = await Table(table, metadata, autoload=True, autoload_with=connection)
# error 3: sqlalchemy.exc.NoInspectionAvailable: Inspection on an AsyncConnection is currently not supported. Please use ``run_sync`` to pass a callable where it's possible to call ``inspect`` on the passed connection. (Background on this error at: https://sqlalche.me/e/14/xd3s)
</code></pre> <pre><code>#try 4
connection = db_engine_object.connect()
table = await Table(table, metadata, autoload=True, autoload_with=connection.run_sync())
# error 4: TypeError: run_sync() missing 1 required positional argument: 'fn'
</code></pre> <p>I can't run a query without having a table to direct the query to, and I can't find out how to get the table object.</p>
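<p>For completeness, a sketch of the pattern the error messages keep pointing at: run the reflection on the sync side via <code>run_sync</code> from inside an async context (untested here):</p> <pre><code>import asyncio
from sqlalchemy import MetaData, Table
from sqlalchemy.ext.asyncio import create_async_engine

async def load_table(name):
    engine = create_async_engine(connect_string)
    metadata = MetaData()
    async with engine.connect() as conn:
        # reflection needs a sync connection, which run_sync supplies
        return await conn.run_sync(
            lambda sync_conn: Table(name, metadata, autoload_with=sync_conn)
        )

table = asyncio.run(load_table('my_table'))
</code></pre>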
<python><asynchronous><sqlalchemy>
2022-12-23 10:31:42
1
518
Christina Stebbins
74,898,454
8,913,983
Calculating difference in multi index pandas data frame
<p>I have a <code>df</code>:</p> <pre><code>df_test = sns.load_dataset(&quot;flights&quot;)
df_test['cat_2'] = np.random.choice(range(10), df_test.shape[0])

df_test.pivot_table(index='month', columns='year', values=['passengers', 'cat_2'])\
    .swaplevel(0,1, axis=1)\
    .sort_index(axis=1, level=0)\
    .fillna(0)
</code></pre> <p>I am trying to calculate the difference for <code>cat_2</code> and <code>passengers</code> for each <code>year</code> compared to the year before.</p> <p>What is the best way to achieve this? The desired output would look similar to this:</p> <pre><code>year   1949                      1950                         1951
       cat_2 passengers % diff   cat_2 passengers % diff      cat_2 passengers % diff
month
Jan    6     112        0        6     115        115/112     6     90         90/115
Feb    0     118        0        6     126        126/118     6     150        150/126
Mar    2     132        0        7     141                    7     141
Apr    0     129        0        9     135                    9     135
May    5     121        0        4     125                    4     125
Jun    1     135        0        3     149                    3     149
Jul    6     148        0        5     170                    5     170
Aug    5     148        0        2     170                    2     170
Sep    1     136        0        4     158                    4     158
Oct    5     119        0        5     133                    5     133
Nov    0     104        0        1     114                    1     114
Dec    7     118        0        1     140                    1     140
</code></pre> <p>I only showed the desired calculations for the <code>passengers</code> columns, but the same calculation method can be used for <code>cat_2</code> as well. As there is nothing to compare the first year against, I filled those values with <code>0</code>.</p>
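<p>For reference, a sketch of the ratio for a single metric, done on a plain single-level pivot by shifting along the columns; presumably the same can be applied per metric before reassembling the two-level frame:</p> <pre><code>pv = df_test.pivot_table(index='month', columns='year', values='passengers')
ratio = pv.div(pv.shift(axis=1)).fillna(0)  # each year divided by the previous year
</code></pre>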
<python><pandas>
2022-12-23 10:11:01
1
4,870
Jonas Palačionis
74,898,227
8,070,090
Can I use the Google Sheets API only with an API Key, or with Client ID and Client secret, but without client_secret.json?
<p>The Python code provided in this <a href="https://developers.google.com/sheets/api/quickstart/python#configure_the_sample" rel="nofollow noreferrer">quickstart</a> uses <strong>credentials.json</strong>, as in this line:</p> <pre><code>flow = InstalledAppFlow.from_client_secrets_file('credentials.json', SCOPES)
</code></pre> <p>I have enabled the Sheets and Drive APIs. I created credentials from the <strong>APIs &amp; Services</strong> menu: on the <strong>Credentials</strong> tab, I clicked on the <strong>CREATE CREDENTIALS</strong> button, then clicked on <strong>OAuth client ID</strong>, and as the <strong>Application type</strong> I selected <strong>Desktop app</strong>. Then I downloaded its JSON file and set the <strong>credentials.json</strong> file to the right path, like this:</p> <p><code>flow = InstalledAppFlow.from_client_secrets_file('client_secret_274233513361-l7vpffd7g9oree4tg5tledq9keqrevk3.apps.googleusercontent.com.json', SCOPES)</code></p> <p>Then, when I run the quickstart code above, <strong>it shows a new browser pop-up that requires me to log in</strong>. After successfully logging in, I can run the Python code without any error.</p> <p>But I don't want a new pop-up that requires me to log in first.</p> <p>So my question: can I use the Google Sheets API with only my <strong>Client ID and Client secret</strong> credentials</p> <p><a href="https://i.sstatic.net/NeqnO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NeqnO.png" alt="enter image description here" /></a></p> <p>or only with an <strong>API Key</strong>?</p> <p><a href="https://i.sstatic.net/WZRZK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WZRZK.png" alt="enter image description here" /></a></p> <p>If it is possible, how do I do it? Is there any documentation on how to achieve that?</p>
<python><google-api><google-oauth><google-sheets-api>
2022-12-23 09:46:19
2
3,109
Tri
74,897,874
257,299
FastAPI + uvicorn: Is it possible to accept multiple connections with a single worker?
<p>Here is some sample code to demonstrate the issue:</p> <pre class="lang-py prettyprint-override"><code>import asyncio
import datetime
import time

import uvicorn
from fastapi import FastAPI
from starlette.responses import PlainTextResponse

app = FastAPI()

@app.get(path=&quot;/sync&quot;)
def get_sync():
    print(f&quot;sync: {datetime.datetime.now()}: Before sleep&quot;)
    time.sleep(5)
    print(f&quot;sync: {datetime.datetime.now()}: After sleep&quot;)
    return PlainTextResponse(content=f&quot;sync: {datetime.datetime.now()}: Hello, World!&quot;)

@app.get(path=&quot;/async&quot;)
async def get_async():
    print(f&quot;async: {datetime.datetime.now()}: Before sleep&quot;)
    await asyncio.sleep(5)
    print(f&quot;async: {datetime.datetime.now()}: After sleep&quot;)
    return PlainTextResponse(content=f&quot;async: {datetime.datetime.now()}: Hello, World!&quot;)

if __name__ == &quot;__main__&quot;:
    uvicorn.run(app=app, host=&quot;0.0.0.0&quot;, port=1911)
</code></pre> <ol> <li>Pick any endpoint above: <code>GET /sync</code> or <code>GET /async</code></li> <li>Call the endpoint from two different web browser tabs (or use cURL, etc.) to create two parallel requests</li> <li>The first request blocks the second request.</li> </ol> <p>I expected <code>GET /sync</code> to run on a threadpool. I expected <code>GET /async</code> to use some asyncio magic.</p> <p>I cannot use multiple workers. Is there a solution to allow concurrent requests with a single worker?</p> <p>FYI: I am using Python 3.7 (64-bit/Win10) and the latest versions of FastAPI + uvicorn.</p>
<python><asynchronous><async-await><fastapi><uvicorn>
2022-12-23 09:01:25
1
21,609
kevinarpe
74,897,789
9,476,917
Concat List of DataFrames row-wise - throws pandas.errors.InvalidIndexError:
<p>I am trying to concatenate a list <code>df_l</code> of ~200 DataFrames, which all have the same number of columns and the same column names.</p> <p>When I try to run:</p> <p><code>df = pd.concat(df_l, axis=0)</code></p> <p>it throws the error:</p> <blockquote> <p>pandas.errors.InvalidIndexError: Reindexing only valid with uniquely valued Index objects</p> </blockquote> <p>Following <a href="https://stackoverflow.com/questions/35084071/concat-dataframe-reindexing-only-valid-with-uniquely-valued-index-objects">this post</a> I tried to reset the index of each dataframe, but I still get the same error.</p> <pre><code>new_l = [df.reset_index(drop=True) for df in df_l]
pd.concat(new_l, axis=0)
</code></pre> <p>Also, <code>pd.concat</code> arguments like <code>ignore_index=True</code> did not help in any combination. Any advice?</p> <p>Running on <code>python 3.8</code> and <code>pandas 1.4.2</code>.</p>
<python><pandas><dataframe><concatenation>
2022-12-23 08:51:43
1
755
Maeaex1
74,897,772
3,004,472
How to match names when one group is not present in RegEx
<p>I am stuck with the following regex. My intention is to match the following:</p> <pre><code>abc_123_v1_f1_t21
12c_1sdsd_f1_t1
Android_v1_t21
</code></pre> <p>The regex I am trying is:</p> <pre><code>_(v[1-9]{1,3}\d?_f[1-9]{1,3}\d?_t[1-9]{1,3}\d?)(?!\d)
</code></pre> <p>I am not able to get a regex match if my names don't contain the full sequence (v_f_t). How do I get a regex match when the pattern is <strong>missing v, f, or t</strong>? <em>At least two should be present: v_t, f_v, f_t, v_f_t, f_t_v, etc.</em></p> <p>If I give <code>adcg_f1_t1</code>, it should match.</p> <p>If I give <code>adcg_v2_f1</code>, it should match.</p> <p>If I give <code>adcg_v2_t12</code>, it should match.</p> <p>How do I do this?</p>
<python><regex>
2022-12-23 08:49:53
2
880
BigD
74,897,769
20,793,070
Display exponential values as float in Polars
<p>I have some float columns in a Polars DataFrame. Some values display like <code>9.4036e15</code>. How can I display them in plain (non-scientific) notation?</p>
<python><dataframe><python-polars>
2022-12-23 08:49:10
1
433
Jahspear
74,897,725
3,131,604
Tensorflow 2 graph mode - For loop in a model train_step() function?
<p>I am struggling to make a loop work in a model train_step() function in graph mode.</p> <p>=====&gt; Please jump directly to UPDATE below</p> <p>The following snippet, which works in eager mode but not in graph mode, is not my train_step() code but if someone could explain how to make it work when the decorator is uncommented, I think it will help me to complete my train_step().</p> <pre><code>import tensorflow as tf # @tf.function def fct1(): y = tf.constant([2.3, 5.3, 4.1]) yd = tf.shape(y)[0] for t in tf.range(0, yd): if t == 1: return t print(fct1()) </code></pre> <p>====== UPDATE =======</p> <p>It turned out that the snippet above did not capture the <strong>&quot;TypeError: 'Tensor' object cannot be interpreted as an integer&quot;</strong> I have at the <em>for</em> line. Please ignore it.</p> <p>To reproduce my problem please run the following working code :</p> <pre><code>import tensorflow as tf @tf.function def fct1(): yd = tf.constant(5, dtype=tf.int32) for t in range(yd): pass fct1() </code></pre> <p>then add the following 3 lines of code in a working train_step() whose model is compiled with run_eagerly=False:</p> <pre><code>yd = tf.constant(5, dtype=tf.int32) for t in range(yd): pass </code></pre> <p>and get the error:</p> <p>File</p> <p>&quot;D:\gCloud\GoogleDrive\colabai\tfe\nlp\translators\seq2seq_bahdanau_11\seq2seq_bahdanau_lib.py&quot;, line 180, in train_step for t in range(yod):</p> <pre><code>TypeError: 'Tensor' object cannot be interpreted as an integer </code></pre> <p>The conclusion seems to be that using the decorator @tf.function to enable the graph mode does not behave the same way as using the run_eagerly parameter of the model.compile() :</p> <pre><code>model.compile( optimizer=tf.keras.optimizers.RMSprop(), loss=tf.keras.losses.CategoricalCrossentropy(), metrics=[tf.keras.metrics.CategoricalAccuracy()], run_eagerly=False, ) </code></pre> <p>Thanks in advance for your ideas.</p>
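<p>For reference, a sketch of the reduced snippet that should trace in graph mode: swapping the Python <code>range</code> for <code>tf.range</code>, which AutoGraph can lower to a graph-side loop (whether this carries over cleanly to the real <code>train_step()</code> is the open question):</p> <pre><code>import tensorflow as tf

@tf.function
def fct1():
    yd = tf.constant(5, dtype=tf.int32)
    for t in tf.range(yd):   # tf.range instead of Python range
        pass

fct1()
</code></pre>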
<python><tensorflow2.0><eager-execution>
2022-12-23 08:41:54
2
7,433
u2gilles
74,897,712
13,647,125
How to make dictionary from list
<p>I have these variables:</p> <pre><code>fab_centrum = ['A', 'B']
sales_line = ['1AA', '2BB', '3CC', '1AA', '2BB', '3CC']
keys = ['feasibility_score', 'capacity_score']
feasibility_score = [10,30,40, 10,30,40]
capacity_score = [50,60,70, 50,60,70]
</code></pre> <p>And I would like to make a dict with this structure:</p> <pre><code>bach = {
    A: {'1AA': {
            feasibility_score: 10,
            capacity_score: 50,
        }
        '2BB': {
            feasibility_score: 10,
            capacity_score: 50,
        }
    }
    B: {'1AA': {
            feasibility_score: 10,
            capacity_score: 50,
        }
        ....
    }
}
</code></pre> <p>I know I can use something like</p> <pre><code>batch = dict.fromkeys(fab_centrum, sales_line)
</code></pre> <p>But I don't know how to make a dict with nested values like my structure. I am new to Python.</p>
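<p>A sketch of one way to read the data, assuming the six <code>sales_line</code> entries split evenly and in order across the two <code>fab_centrum</code> values (that assumption may not hold for the real data):</p> <pre><code>rows = list(zip(sales_line, feasibility_score, capacity_score))
per_centrum = len(rows) // len(fab_centrum)

batch = {}
for i, centrum in enumerate(fab_centrum):
    chunk = rows[i * per_centrum:(i + 1) * per_centrum]
    batch[centrum] = {
        line: {'feasibility_score': f, 'capacity_score': c}
        for line, f, c in chunk
    }
</code></pre>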
<python><loops><dictionary>
2022-12-23 08:40:20
1
755
onhalu
74,897,669
1,354,400
pytype is not recognizing int or float output type as Number
<p>I have a function that returns a number, but it's not obvious what the user enters. It could accept a str, int or float. So I've type-hinted it as follows:</p> <pre class="lang-py prettyprint-override"><code>import math
from numbers import Number
from typing import Union

def to_number(value: Union[str, Number]) -&gt; Number:
    if isinstance(value, Number) and math.isnan(value):
        raise ValueError(f&quot;No NaNsense&quot;)
    try:
        return int(value)
    except ValueError:
        try:
            return float(value)
        except ValueError as exc:
            raise ValueError(f&quot;'{value}' is not a number&quot;) from exc
</code></pre> <p>Upon running pytype on the file I get the following error:</p> <pre><code>File &quot;whatever.py&quot;, line 8, in evaluate: bad return type [bad-return-type]
         Expected: numbers.Number
         Actually returned: int
File &quot;whatever.py&quot;, line 11, in evaluate: bad return type [bad-return-type]
         Expected: numbers.Number
         Actually returned: float
</code></pre> <p>But in the REPL:</p> <pre><code>&gt;&gt;&gt; isinstance(int(5), numbers.Number)
True
&gt;&gt;&gt; isinstance(float(5), numbers.Number)
True
</code></pre> <p>What's the resolution here? I'm on CPython 3.7.13, PyType 2022.12.15</p>
<python><python-typing><pytype>
2022-12-23 08:34:11
0
902
Syafiq Kamarul Azman
74,897,649
13,546,726
Mapping for entering values into a dataframe using a dictionary
<p>I have a dictionary that looks like this: {1:&quot;A&quot;, 2:&quot;B&quot;, 3:&quot;C&quot;}</p> <p>And dataframe:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;">col_1</th> <th style="text-align: center;">col_2</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">1,3,2</td> <td style="text-align: center;"></td> </tr> <tr> <td style="text-align: left;">1</td> <td style="text-align: center;"></td> </tr> <tr> <td style="text-align: left;">2,3</td> <td style="text-align: center;"></td> </tr> </tbody> </table> </div> <p>How can I organize the mapping so that I get the following result:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;">col_1</th> <th style="text-align: center;">col_2</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">1,3,2</td> <td style="text-align: center;">A,C,B</td> </tr> <tr> <td style="text-align: left;">1</td> <td style="text-align: center;">A</td> </tr> <tr> <td style="text-align: left;">2,3</td> <td style="text-align: center;">B,C</td> </tr> </tbody> </table> </div>
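<p>For reference, a sketch of one direct mapping, assuming the frame is named <code>df</code>, <code>col_1</code> holds comma-separated strings, and the dictionary keys are integers:</p> <pre><code>d = {1: &quot;A&quot;, 2: &quot;B&quot;, 3: &quot;C&quot;}
df[&quot;col_2&quot;] = df[&quot;col_1&quot;].apply(
    lambda s: &quot;,&quot;.join(d[int(k)] for k in str(s).split(&quot;,&quot;))
)
</code></pre>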
<python><pandas>
2022-12-23 08:32:06
2
309
Sam324
74,897,630
12,631,415
Maya 2022 crashes while I run a Python script
<p>I am trying to create a menu, a submenu and an optionBox which lets the submenu item create and remove itself from the main menu. When I continuously try to add and remove the item from the main menu, Maya crashes with the message</p> <blockquote> <p>Fatal Error. Attempting to save in &quot;{temp location}&quot;.ma</p> </blockquote> <p>It seems the name keeps recreating itself and the app crashes, or something similar is happening.</p> <pre><code>import pymel.core as pm

MainMayaWindow = pm.language.melGlobals['gMainWindow']
c = None
customMenu = pm.menu('Custom Menu', parent=MainMayaWindow, tearOff=True)
a = pm.menuItem(label=&quot;menu item 1&quot;, parent=customMenu, subMenu=True)
b = pm.menuItem(label=&quot;click me&quot;, parent=a)
pm.menuItem(command=&quot;newItem()&quot;, parent=a, optionBox=True)

def newItem():
    global c
    if not c:
        c = pm.menuItem(label=&quot;click me&quot;, parent=customMenu)
        pm.menuItem(command=&quot;removeItem()&quot;, parent=customMenu, optionBox=True)
    else:
        print(&quot;already exists&quot;)

def removeItem():
    global b, c
    print(&quot;removed item&quot;)
    pm.deleteUI(c, control=True)
    c = None
</code></pre> <p>I used a tear-off menu to test, as it was quicker/easier to test with continuous clicks. I also tried making the label unique, which didn't work either. I tried this with maya.cmds as well.</p>
<python><maya><pymel><maya-api>
2022-12-23 08:30:09
1
381
aNup
74,897,486
1,926,852
How to resolve Pandas Read csv not working for Mock S3?
<p>I created a mock S3 bucket with moto and put an object into it. I am trying to read it with pandas read_csv.</p> <pre><code>import os

import boto3
import pandas as pd
from moto import mock_s3

S3_BUCKET_NAME = 'mock_bucket'

def s3_bucket():
    mocks3 = mock_s3()
    mocks3.start()
    os.environ[&quot;AWS_DEFAULT_REGION&quot;] = &quot;us-east-1&quot;
    s3 = boto3.resource('s3')
    s3.create_bucket(Bucket=S3_BUCKET_NAME)
    return s3

s3 = s3_bucket()
bucket = s3.Bucket(S3_BUCKET_NAME)
bucket.put_object(Key='2022/12/14/6/25/t_ext_gop/t.csv', Body='test')

a = pd.read_csv('s3://mock_bucket/2022/12/14/6/25/t_ext_gop/t.csv')
</code></pre> <p>I get the following error:</p> <pre><code>AttributeError: 'MockRawResponse' object has no attribute 'raw_headers'
</code></pre> <p>How do I resolve this?</p>
<python><pandas><unit-testing><mocking><moto>
2022-12-23 08:12:58
0
602
Luv
74,897,395
8,924,587
AsyncIO: AttributeError: Can't pickle local object
<p>I am working on code that accesses an API that's not made using asyncio, along with a discord bot that uses asyncio. In my discord commands I created local functions with the blocking parts of the API, and I want to run them with <code>BaseEventLoop.run_in_executor</code> and a <code>ProcessPoolExecutor</code> object.</p> <p>Example:</p> <pre class="lang-py prettyprint-override"><code>@client.command()
async def someCommand(...):
    ... processing where lots of local variables are defined
    def task():
        ... blocking code making use of lots of local variables
    loop.run_in_executor(processPool, task)
</code></pre> <p>Some very useful answers are given around the multiprocessing module <a href="https://stackoverflow.com/questions/72766345/attributeerror-cant-pickle-local-object-in-multiprocessing">here</a>. Are there any similar solutions that do not move the functions to global space, where a lot of unwanted clutter would be created :( ? Having the blocking functions defined locally makes the code more clean and readable. I already have hundreds of functions in my file and don't want to add unnecessary global-space clutter.</p>
<python><multiprocessing><python-asyncio><pickle>
2022-12-23 08:00:57
1
339
Hein Gertenbach
74,897,362
11,332,693
Removing inverted commas and brackets from python dataframe column items
<p>df1</p> <pre><code>   Place                        Actor
  ['new york','washington']    ['chris evans','John']
  ['new york','los angeles']   ['kate','lopez']
</code></pre> <p>I want to remove the brackets and inverted commas from each column's items.</p> <p>Expected output:</p> <p>df1</p> <pre><code>   Place                  Actor
  new york,washington    chris evans,John
  new york,los angeles   kate,lopez
</code></pre> <p>My try:</p> <pre><code>cols = ['Place', 'Actor']
df1[cols].apply(lambda x : x [' + ', '.join(df1[cols]) + ']')
</code></pre>
<python><pandas><string><text><split>
2022-12-23 07:56:23
1
417
AB14
74,897,316
11,693,768
Split dataframe into multiple dataframes where each frame contains only rows and columns where the data isn't missing
<p>I have the following dataframe called <code>v</code>:</p> <pre><code>          date     x1     x2     x3     x4      x5  dname
1     20200705   8119   8013   8133   8031  100806     D1
2     20200706   8031   7950   8271   8200  443809     D1
3     20200707   8200   8188   8281   8217  303151     D1
4     20200708   8217   8200   8365   8334  509629     D1
5     20200709   8334   8139   8370   8204  588634     D1
.................................
55    20221216  17340  16675  17525  16775    7266     D2
56    20221219  16690  16395  16770  16495    4393     D2
57    20221220  16325  16275  17095  16840    5601     D2
58    20221221  16870  16670  16885  16735    2295     D2
59    20221222  16725  16470  16850  16485    3359     D2
.................................
125   20200705   9131   9000   9146   9014             D3
126   20200706   9014   8918   9352   9277             D3
127   20200707   9277   9207   9379   9255             D3
128   20200708   9255   9231   9473   9430             D3
129   20200709   9430   9165   9472   9237             D3
.................................
500   20221218   1179   1173   1197   1183             D7
501   20221219   1183   1165   1195   1176             D7
502   20221220   1176   1151   1229   1216             D7
503   20221221   1216   1204   1222   1212             D7
504   20221222   1212   1183   1221   1186             D7
.................................
992                                                    D8
993   20200721    181                                  D9
994   20200818     50                                  D9
995   20200831     96                                  D9
996   20200925     84                                  D9
.................................
1006  20220705     36                                  D11
1007  20220718     48                                  D11
1008  20220728     22                                  D11
1009  20220818     68                                  D11
1010  20220923    108                                  D11
</code></pre> <p>As you can see, certain columns are missing. Sometimes x1-x4 are missing, sometimes x5 is missing; when they are missing, they contain a blank space character. Sometimes x2-x3 are missing.</p> <p>I want to create one dataframe per case and group up each frame based on which columns it has. So, for example, all the rows which have all columns will be in one frame, then those without x5 will have their own frame, etc.</p> <p>Right now I am manually programming each case. Is there a way to dynamically program this behaviour?</p> <p>Here is my code:</p> <pre><code>import pandas as pd

v = pd.read_csv(filepath)

d1 = v[v.x5 == &quot; &quot;]
d2 = v[v.x5 != &quot; &quot;]
d3 = v[(v.x2 != &quot; &quot;) &amp; (v.x3 != &quot; &quot;)]
</code></pre> <p>I also have to manually go and see which combinations of missing columns exist before I do that. I have many dataframes like this.</p> <p>Is there a faster, more efficient way to do it, so that I end up with multiple dataframes like below, where each dataframe has the same columns of data not missing?</p> <p>df1</p> <pre><code>          date     x1     x2     x3     x4      x5  dname
1     20200705   8119   8013   8133   8031  100806     D1
2     20200706   8031   7950   8271   8200  443809     D1
3     20200707   8200   8188   8281   8217  303151     D1
4     20200708   8217   8200   8365   8334  509629     D1
5     20200709   8334   8139   8370   8204  588634     D1
.................................
55    20221216  17340  16675  17525  16775    7266     D2
56    20221219  16690  16395  16770  16495    4393     D2
57    20221220  16325  16275  17095  16840    5601     D2
58    20221221  16870  16670  16885  16735    2295     D2
59    20221222  16725  16470  16850  16485    3359     D2
</code></pre> <p>df2</p> <pre><code>          date     x1     x2     x3     x4  dname
125   20200705   9131   9000   9146   9014     D3
126   20200706   9014   8918   9352   9277     D3
127   20200707   9277   9207   9379   9255     D3
128   20200708   9255   9231   9473   9430     D3
129   20200709   9430   9165   9472   9237     D3
.................................
500   20221218   1179   1173   1197   1183     D7
501   20221219   1183   1165   1195   1176     D7
502   20221220   1176   1151   1229   1216     D7
503   20221221   1216   1204   1222   1212     D7
504   20221222   1212   1183   1221   1186     D7
</code></pre> <p>etc.</p>
<python><pandas><dataframe>
2022-12-23 07:50:21
1
5,234
anarchy
74,897,222
3,423,825
How to set different permissions for GET and POST methods with ListCreateAPIView?
<p>I would like to set the <code>IsAuthenticated</code> permission for <code>GET</code> and the <code>IsTeamLeader</code> permission for <code>POST</code> with <code>ListCreateAPIView</code> and <code>ModelSerializer</code>, but without having a single permission that checks the request method in <code>has_permission</code>, as suggested in these questions <a href="https://stackoverflow.com/questions/53158273/how-to-set-permission-public-for-get-method-and-admin-for-post-method-in-django">here</a> and <a href="https://stackoverflow.com/questions/74085003/unable-to-post-with-listcreateapiview">here</a>.</p> <p>How could I do that?</p> <pre><code>@permission_classes([IsAuthenticated])
class ManagerListView(ListCreateAPIView):
    queryset = Manager.objects.all()
    serializer_class = ManagerSerializer

class IsTeamLeader(permissions.BasePermission):
    def has_permission(self, request, view):
        if Manager.objects.filter(pk=request.user.pk).exists():
            return Manager.objects.get(pk=request.user.pk).is_team_leader

class ManagerSerializer(serializers.ModelSerializer):
    password1 = serializers.CharField(write_only=True)
    password2 = serializers.CharField(write_only=True)
    fields = serializers.JSONField(write_only=True)

    def validate(self, data):
        if data['password1'] != data['password2']:
            raise serializers.ValidationError('Passwords must match.')
        return data

    def create(self, validated_data):
        data = {
            key: value for key, value in validated_data.items()
            if key not in ('password1', 'password2')
        }
        data['password'] = validated_data['password1']
        user = self.Meta.model.objects.create_user(**data)
        return user

    class Meta:
        model = Manager
        fields = ('id', 'email', 'first_name', 'last_name', 'username', 'role', 'is_team_leader',
                  'password1', 'password2', 'fields')
        read_only_fields = ('id', 'first_name', 'last_name', 'role', 'is_team_leader', 'address', 'contact')
</code></pre>
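<p>For context, a sketch of the variant usually suggested that keeps both permission classes untouched: overriding <code>get_permissions()</code> on the view, so the method check lives in the view rather than inside <code>has_permission</code>:</p> <pre><code>class ManagerListView(ListCreateAPIView):
    queryset = Manager.objects.all()
    serializer_class = ManagerSerializer

    def get_permissions(self):
        if self.request.method == 'POST':
            return [IsTeamLeader()]
        return [IsAuthenticated()]
</code></pre> <p>Whether that still counts as &quot;checking the request method&quot; for the purposes of this question is part of what is being asked.</p>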
<python><django><django-rest-framework>
2022-12-23 07:37:56
1
1,948
Florent
74,897,182
13,454,049
Python equality if variables can be NaN
<p>Is there a better way to compare variables if they can be NaN? Surely there must be something built-in for this, right?</p> <pre class="lang-py prettyprint-override"><code>NaN = float(&quot;NaN&quot;)

def remove(obj, value_1):
    for key in reversed([
        i
        for i, value_2 in enumerate(obj)
        if value_1 != value_1 and value_2 != value_2 or value_1 == value_2]):
        del obj[key]
    return obj

def test(value_1, value_2):
    assert (
        value_1 != value_1 and value_2 != value_2 or value_1 == value_2)

print(remove([0, NaN], NaN))
test(NaN, NaN)
</code></pre> <p>Output:</p> <pre class="lang-none prettyprint-override"><code>[0]
# No assertion error
</code></pre> <p>I'm writing a JSON patcher and I want to be able to remove NaN values from a list with an operation, or raise an error if a value is not equal to NaN; I don't think the normal behaviour is particularly useful here. The full code is a bit too much to share here, I'm afraid.</p>
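<p>For reference, a sketch of the comparison factored into a helper built on <code>math.isnan</code> (assuming float-like values; anything non-numeric falls through to plain equality):</p> <pre><code>import math

def eq_nan(a, b):
    try:
        if math.isnan(a) and math.isnan(b):  # NaN compares equal to NaN here
            return True
    except TypeError:
        pass                                 # not float-like: use normal equality
    return a == b
</code></pre>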
<python>
2022-12-23 07:32:22
1
1,205
Nice Zombies
74,897,149
16,223,122
typing recursive function with nested type
<p>Background: solving some algorithm problem.</p> <h2>Problem</h2> <p>I'm trying to use a recursive function with a nested type in VSCode, and it keeps throwing an error at me. I reduced it to this:</p> <pre class="lang-py prettyprint-override"><code>from typing import Type

NestedStr = list[str | Type[&quot;NestedStr&quot;]]

def get_first(x: str | NestedStr) -&gt; str:
    if isinstance(x, str):
        return x
    return get_first(x[0])  # Argument of type &quot;str | NestedStr&quot; cannot be assigned to parameter &quot;x&quot; of type &quot;str | NestedStr&quot; in function &quot;get_first&quot;

assert get_first([&quot;a&quot;, &quot;b&quot;]) == &quot;a&quot;
# No error thrown here
assert get_first([[&quot;a&quot;, &quot;b&quot;]]) == &quot;a&quot;
# Argument of type &quot;list[list[str]]&quot; cannot be assigned to parameter &quot;x&quot; of type &quot;str | NestedStr&quot; in function &quot;get_first&quot;
</code></pre> <p>Obviously, when <code>x</code> is not a <code>str</code> it should be a <code>NestedStr</code>, hence it can be an infinitely nested list, but Pylance does not seem to know it.<br /> The code runs perfectly, but the error is annoying. Is there any way to suppress it (except &quot;type: ignore&quot;)?</p> <h2>Related</h2> <ul> <li><a href="https://peps.python.org/pep-0483/" rel="nofollow noreferrer">PEP483</a></li> <li><a href="https://stackoverflow.com/questions/53845024/defining-a-recursive-type-hint-in-python/53845083#comment132174059_53845083">Defining a recursive type hint in Python?</a></li> </ul> <h2>Appendices</h2> <h3>Full Error Messages</h3> <ul> <li>on the recursive call <code>get_first(x[0])</code></li> </ul> <pre><code>Argument of type &quot;str | NestedStr&quot; cannot be assigned to parameter &quot;x&quot; of type &quot;str | NestedStr&quot; in function &quot;get_first&quot;
  Type &quot;str | NestedStr&quot; cannot be assigned to type &quot;str | NestedStr&quot;
    Type &quot;NestedStr&quot; cannot be assigned to type &quot;str | NestedStr&quot;
      &quot;Type[type]&quot; is incompatible with &quot;Type[str]&quot;
      &quot;Type[type]&quot; is incompatible with &quot;NestedStr&quot;
Pylance reportGeneralTypeIssues
</code></pre> <ul> <li>on the call with <code>list[list[str]]</code></li> </ul> <pre><code>Argument of type &quot;list[list[str]]&quot; cannot be assigned to parameter &quot;x&quot; of type &quot;str | NestedStr&quot; in function &quot;get_first&quot;
  Type &quot;list[str]&quot; cannot be assigned to type &quot;str | NestedStr&quot;
    &quot;list[str]&quot; is incompatible with &quot;str&quot;
    Type &quot;list[str]&quot; cannot be assigned to type &quot;NestedStr&quot;
Pylance reportGeneralTypeIssues
</code></pre>
<python><mypy><python-typing><pylance>
2022-12-23 07:28:30
2
1,453
Pablo LION
74,897,123
11,622,114
Get last and first n elements of a list
<p>I need an answer where the number of elements to keep from the start and from the end can easily be adjusted. Thanks.</p> <p>This is my code:</p> <pre><code>ls = [1,2,3,4,5,6]

# desired output: [1,2,5,6]

# Tried the following:
ls[-2:2:1]
ls[2:4:-1]
# Both return an empty list
</code></pre>
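<p>A sketch of the adjustable slicing meant here (a single slice cannot skip the middle, so two slices are concatenated; this assumes <code>len(ls) &gt;= n_head + n_tail</code>):</p> <pre><code>ls = [1, 2, 3, 4, 5, 6]
n_head, n_tail = 2, 2
result = ls[:n_head] + ls[-n_tail:]
print(result)  # [1, 2, 5, 6]
</code></pre>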
<python><list>
2022-12-23 07:25:28
6
522
Sumant Agnihotri