Dataset columns (type, observed min/max):
- QuestionId: int64, 74.8M to 79.8M
- UserId: int64, 56 to 29.4M
- QuestionTitle: string, length 15 to 150
- QuestionBody: string, length 40 to 40.3k
- Tags: string, length 8 to 101
- CreationDate: date, 2022-12-10 09:42:47 to 2025-11-01 19:08:18
- AnswerCount: int64, 0 to 44
- UserExpertiseLevel: int64, 301 to 888k
- UserDisplayName: string, length 3 to 30
75,002,435
7,462,275
Can "fsolve (scipy)" find many roots of a function?
<p>I read in the scipy documentation (<a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.fsolve.html" rel="nofollow noreferrer">https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.fsolve.html</a>) that <strong>fsolve</strong>: <em>Find the root<strong>S</strong> of a function.</em> But even with a simple example, I obtain only <strong>one</strong> root:</p> <pre><code>from scipy.optimize import fsolve

def my_eq(x):
    y = x*x - 1
    return y

roots = fsolve(my_eq, 0.1)
print(roots)
</code></pre> <p>So, is it possible to obtain multiple roots with fsolve in one call?</p> <p>For the other solvers, the doc is clear (<a href="https://docs.scipy.org/doc/scipy/reference/optimize.html#id2" rel="nofollow noreferrer">https://docs.scipy.org/doc/scipy/reference/optimize.html#id2</a>): they <em>Find <strong>a</strong> root of a function</em>.</p> <p>(N.B. I know that multiple root finding is difficult.)</p>
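Not part of the original question, but a common workaround: `fsolve` converges to one root per call, so sweep several starting guesses and de-duplicate the results. A minimal sketch (the guess grid and rounding tolerance are arbitrary choices):

```python
import numpy as np
from scipy.optimize import fsolve

def my_eq(x):
    return x * x - 1

# one fsolve call per starting guess; each call converges to (at most) one root
guesses = [-2.0, -0.5, 0.5, 2.0]
roots = np.array([fsolve(my_eq, g)[0] for g in guesses])

# collapse near-duplicates reached from different starting points
unique_roots = np.unique(roots.round(6))
print(unique_roots)
```

This finds both roots of x² - 1, but it is still heuristic: a root is only found if some guess lies in its basin of attraction.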
<python><scipy><scipy-optimize>
2023-01-04 07:57:18
2
2,515
Stef1611
75,002,216
17,696,880
Insert a substring at a specified position in a string when a pattern matches but that substring is missing
<pre class="lang-py prettyprint-override"><code>import re, datetime input_text = 'del dia 10 a las 10:00 am hasta el 15 de noviembre de 2020' #example 1 input_text = 'de el 10 hasta el 15 a las 20:00 pm de noviembre del año 2020' #example 2 input_text = 'desde el 10 hasta el 15 de noviembre del año 2020' #example 3 input_text = 'del 10 a las 10:00 am hasta el 15 a las 20:00 pm de noviembre de 2020' #example 4 identificate_day_or_month = r&quot;\b(\d{1,2})\b&quot; identificate_hours = r&quot;[\s|]*(\d{1,2}):(\d{1,2})[\s|]*(?:a.m.|a.m|am|p.m.|p.m|pm)[\s|]*&quot; months = r&quot;(?:enero|febrero|marzo|abril|mayo|junio|julio|agosto|septiembre|octubre|noviembre|diciembre|este mes|mes que viene|siguiente mes|mes siguiente|mes pasado|pasado mes|anterior año|mes anterior)&quot; identificate_years = r&quot;(?:del[\s|]*del[\s|]*año|de[\s|]*el[\s|]*año|del[\s|]*del[\s|]*ano|de[\s|]*el[\s|]*ano|del|de)[\s|]*(?:el|)[\s|]*(?:este[\s|]*año[\s|]*\d*|este[\s|]*año|año[\s|]*que[\s|]*viene|siguiente[\s|]*año|año[\s|]*siguiente|año[\s|]*pasado|pasado[\s|]*año|anterior[\s|]*año|año[\s|]*anterior|este[\s|]*ano[\s|]*\d*|este[\s|]*ano|ano[\s|]*que[\s|]*viene|siguiente[\s|]*ano|ano[\s|]*siguiente|ano[\s|]*pasado|pasado[\s|]*ano|anterior[\s|]*ano|ano[\s|]*anterior|este[\s|]*\d*|año \d*|ano \d*|el \d*|\d*)&quot; #Identification pattern conformed to the sequence of characters with which I am trying to define the search pattern identification_re_0 = r&quot;(?:(?&lt;=\s)|^)(?:desde[\s|]*el|desde|del|de[\s|]*el|de )[\s|]*(?:día|dia|)[\s|]*&quot; + identificate_day_or_month + identificate_hours + r&quot;[\s|]*(?:hasta|al|a )[\s|]*(?:el|)[\s|]*&quot; + identificate_day_or_month + identificate_hours + r&quot;[\s|]*(?:del|de[\s|]*el|de)[\s|]*(?:mes|)[\s|]*(?:de|)[\s|]*(?:&quot; + identificate_day_or_month + r&quot;|&quot; + months + r&quot;|este mes|mes[\s|]*que[\s|]*viene|siguiente[\s|]*mes|mes[\s|]*siguiente|mes[\s|]*pasado|pasado[\s|]*mes|anterior[\s|]*mes|mes[\s|]*anterior)[\s|]*&quot; + 
r&quot;(?:&quot; + identificate_years + r&quot;|&quot; + r&quot;)&quot; #Replacement in the input string by a string with built-in corrections where necessary input_text = re.sub(identification_re_0, lambda m: , input_text, re.IGNORECASE) print(repr(input_text)) # --&gt; output </code></pre> <p>I was trying to get that if the pattern <code>identification_re_0</code> is found incomplete, that is, without the times indicated, then it completes them with <code>&quot;a las 00:00 am&quot;</code>, which represents the beginning of that indicated day with that date.</p> <p>Within the same input string there may be more than one occurrence of this pattern where this procedure must be performed, therefore the number of replacements in the <code>re.sub()</code> function has not been limited. And I have added the <code>re.IGNORECASE</code> flag since capital letters should not have relevance when performing time recognition within a text.</p> <p>And the <strong>correct output</strong> in each of the cases should be like this.</p> <pre class="lang-py prettyprint-override"><code>'del dia 10 a las 10:00 am hasta el 15 a las 00:00 am de noviembre de 2020' #for the example 1 'de el 10 a las 00:00 am hasta el 15 a las 20:00 pm de noviembre del año 2020' #for the example 2 'desde el 10 a las 00:00 am hasta el 15 a las 00:00 am de noviembre del año 2020' #for the example 3 'del 10 a las 10:00 am hasta el 15 a las 20:00 pm de noviembre de 2020' #for the example 4, NOT modify </code></pre> <p>In <strong>example 1</strong> , <code>&quot;a las 00:00 am&quot;</code> has been added to the first date (reading from left to right).</p> <p>In <strong>example 2</strong> , <code>&quot;a las 00:00 am&quot;</code> has been added to the second date.</p> <p>And in <strong>example 3</strong>, <code>&quot;a las 00:00 am&quot;</code> has been added to both dates that make up the time interval.</p> <p>Note that in <strong>example 4</strong> it was not necessary to add anything, since the times 
associated with the dates are already indicated (following the model pattern).</p>
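A much-simplified sketch of the idea, not the asker's full pattern: find a day number that is not already followed by a time, and append "a las 00:00 am". The reduced pattern below is a hypothetical illustration that happens to handle the four examples, not a general Spanish date parser. (Side note: in the question's `re.sub(pattern, repl, text, re.IGNORECASE)` call, the flag is passed positionally into the `count` parameter; it must be `flags=re.IGNORECASE`.)

```python
import re

# hypothetical, reduced pattern: "el/dia/día <1-2 digit day>" NOT already
# followed by an "a las HH:MM" time
day_without_time = re.compile(
    r"\b(el|dia|día)\s+(\d{1,2})\b(?!\s+a\s+las\s+\d{1,2}:\d{2})",
    re.IGNORECASE,  # passed as a keyword of compile, not as re.sub's count
)

def complete_times(text):
    # keep the matched marker and day, append the default midnight time
    return day_without_time.sub(r"\1 \2 a las 00:00 am", text)

print(complete_times("desde el 10 hasta el 15 de noviembre del año 2020"))
```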
<python><python-3.x><regex><replace><regex-group>
2023-01-04 07:35:15
1
875
Matt095
75,002,215
1,652,954
Cannot pickle 'generator' object created using the yield keyword
<p>As shown in the code below, I call <code>__yieldIterables()</code> to generate iterables from the contents of the list <code>self.__itersList</code>. At run time I receive the following error:</p> <pre><code>TypeError: cannot pickle 'generator' object
</code></pre> <p>Please let me know how to correct the code below so that I can still convert the contents of <code>self.__itersList</code> into iterables that can be passed to <code>self.run()</code>.</p> <p><strong>code</strong>:</p> <pre><code>def postTask(self):
    arg0 = self.getDistancesModel().getFieldCoordinatesAsTextInWKTEPSG25832()
    self.__itersList.append(self.getDistancesModel().getNZCCCenterPointsAsString())
    self.__itersList.append(self.getDistancesModel().getZCCCenterPointsAsString())
    self.__itersList.append(self.getDistancesModel().getNoDataCCenterPointsAsString())
    with Pool(processes=self.__processesCount, initializer=self.initPool, initargs=(arg0,)) as DistancesRecordsPool.pool:
        self.__iterables = self.__yieldIterables()
        self.__chunkSize = PoolUtils.getChunkSizeForLenOfIterables(lenOfIterablesList=len(self.__itersList), cpuCount=self.__cpuCount)
        for res in DistancesRecordsPool.pool.map(func=self.run, iterable=self.__iterables, chunksize=self.__chunkSize):
            self.__res.append(res)
        DistancesRecordsPool.pool.join()
    gPostgreSQLHelperObject.closeConnection()
    return self.__res

def __yieldIterables(self):
    for i in self.__itersList:
        yield i
</code></pre>
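The likely cause (an assumption from the code shown): passing the bound method `self.run` to `Pool.map` pickles the instance, and the instance holds the generator in `self.__iterables`, which cannot be pickled. A small sketch of the underlying limitation and the usual fix of materialising the values first:

```python
import pickle

def yield_iterables(items):
    for i in items:
        yield i

gen = yield_iterables([1, 2, 3])

# generators cannot be serialised at all
try:
    pickle.dumps(gen)
    raised = False
except TypeError:
    raised = True

# the usual fix: materialise into a list; Pool.map accepts any iterable,
# but nothing reachable from the pickled function/instance may be a generator
materialised = list(yield_iterables([1, 2, 3]))
print(raised, materialised)
```

In the question's code that would mean not storing the generator on `self` at all, e.g. passing `list(self.__itersList)` (or the list itself) to `map`.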
<python><generator><pickle><python-multiprocessing>
2023-01-04 07:35:10
0
11,564
Amrmsmb
75,002,193
5,134,333
How to export a huge table from BigQuery into a Google cloud bucket as one file
<p>I am trying to export a huge table (2,000,000,000 rows, roughly 600 GB in size) from BigQuery into a Google Cloud Storage bucket as a single file. All tools suggested in <a href="https://cloud.google.com/bigquery/docs/exporting-data#python" rel="nofollow noreferrer">Google's documentation</a> are limited in export size and will create multiple files. Is there a pythonic way to do it without needing to hold the entire table in memory?</p>
<python><google-bigquery><google-bucket>
2023-01-04 07:33:28
1
3,468
Roee Anuar
75,002,037
19,633,374
Marking valid days on time series
<p>The following is my code:</p> <pre><code>x = ts.loc[::-1, &quot;validday&quot;].eq(0)
x = x.groupby(x.index.to_period('M'), sort=False).cumsum().head(35)
x.head(35)
</code></pre> <p>Current output:</p> <pre><code>Date
2022-11-14    1
2022-11-13    1
2022-11-12    1
2022-11-11    2
2022-11-10    3
2022-11-09    4
2022-11-08    5
2022-11-07    6
2022-11-06    6
2022-11-05    7
2022-11-04    7
2022-11-03    8
...
2019-09-14    .
</code></pre> <p>The goal is to detect the last valid day, the second-to-last valid day and the third-to-last valid day available in the dataset.</p> <p>Can someone please tell me how I can achieve this?</p>
<python><pandas><dataframe><date><time-series>
2023-01-04 07:15:15
1
642
Bella_18
75,001,968
2,632,748
How to sort a numpy array that contains floats and strings in numeric order?
<p>I've got a numpy array that contains numbers and strings in separate columns:</p> <pre><code>a = np.array([[3e-05, 'A'],
              [2,     'B'],
              [1e-05, 'C']])
print(a[a[:, 0].argsort()])
</code></pre> <p>However, when I try to sort it based on the first column using <code>.argsort()</code>, it is sorted in string order, not numeric order:</p> <pre><code>[['1e-05' 'C']
 ['2' 'B']
 ['3e-05' 'A']]
</code></pre> <p>How do I get the array to sort in numeric order based on the first column?</p>
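One way (not from the original post): a mixed float/string array is stored under a single string dtype, so every comparison is lexicographic. Casting the first column to float just for the `argsort` restores numeric order:

```python
import numpy as np

a = np.array([[3e-05, 'A'],
              [2,     'B'],
              [1e-05, 'C']])             # stored as one string dtype

order = a[:, 0].astype(float).argsort()  # compare as numbers, not strings
print(a[order])
```

A structured array (or a pandas DataFrame) that keeps a float column and a string column separately avoids the cast entirely.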
<python><numpy>
2023-01-04 07:07:02
2
336
John Westlund
75,001,965
19,826,650
Pass a multidimensional PHP array to Python and use it there
<p>I have a multiple array of $dataraw from the code below</p> <pre><code>$_SESSION['image']; $dataraw = $_SESSION['image']; echo '&lt;pre&gt;'; print_r($dataraw); echo '&lt;/pre&gt;'; </code></pre> <p>print_r($dataraw); output looks like this :</p> <pre><code>array(2) { [0]=&gt; array(4) { [&quot;FileName&quot;]=&gt; string(19) &quot;20221227_202035.jpg&quot; [&quot;Model&quot;]=&gt; string(8) &quot;SM-A528B&quot; [&quot;Longitude&quot;]=&gt; float(106.524251) [&quot;Latitude&quot;]=&gt; float(-6.367665) } [1]=&gt; array(4) { [&quot;FileName&quot;]=&gt; string(19) &quot;20221227_202157.jpg&quot; [&quot;Model&quot;]=&gt; string(8) &quot;SM-A528B&quot; [&quot;Longitude&quot;]=&gt; float(106.9522428) [&quot;Latitude&quot;]=&gt; float(-6.984758099722223) } } </code></pre> <p>As you can see, its a multiple array of data. Then i have a button click run python</p> <pre><code>$dataraw = $_SESSION['image']; $datagambar = json_encode($dataraw); $escaped_json = escapeshellarg($datagambar); if(isset($_POST['runpython'])){ echo shell_exec('D:\xampp\htdocs\Klasifikasi_KNN\admin\Klasifikasi.py'.$escaped_json); print(&quot;success&quot;); unset($_SESSION['image']); } </code></pre> <p>This is my python script to call it back</p> <pre><code>escaped_json = sys.argv[1] parsed_data = json.loads(escaped_json) print (parsed_data[0]) </code></pre> <p>How to pass the multiple array of data to python side? and calls it in python? i need the data for python to operate.</p> <p>parsed_data = json.loads(escaped_json) Seems give error</p> <p><code>JSONDecodeError: Expecting value: line 1 column 1 (char 0)</code></p>
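A `JSONDecodeError` at `char 0` usually means `sys.argv[1]` is empty or missing. In the PHP snippet there is no space between the script path and `$escaped_json`, and no `python` interpreter in the command, so the payload may never reach the script (an assumption, since the PHP side can't be run here). On the Python side, a defensive sketch that fails loudly instead:

```python
import json

def parse_payload(argv):
    """Parse the JSON handed over as the first CLI argument."""
    if len(argv) < 2 or not argv[1].strip():
        raise SystemExit("no JSON payload received - check the shell_exec command")
    return json.loads(argv[1])

# simulated argv, as the PHP json_encode side should have produced it
argv = ["Klasifikasi.py",
        '[{"FileName": "20221227_202035.jpg", "Model": "SM-A528B",'
        ' "Longitude": 106.524251, "Latitude": -6.367665}]']
data = parse_payload(argv)
print(data[0]["FileName"])
```

In the real script the call would be `parse_payload(sys.argv)`, and the PHP command would need the interpreter and a separating space, e.g. `shell_exec('python ...Klasifikasi.py ' . $escaped_json)`.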
<python><php>
2023-01-04 07:06:53
0
377
Jessen Jie
75,001,954
20,771,881
Repeat entire group 0 or more times (one or more words separated by +'s)
<p>I am trying to match words separated with the <code>+</code> character as input from a user in python and check if each of the words is in a predetermined list. I am having trouble creating a regular expression to match these words (words are comprised of more than one <code>A-z</code> characters). For example, an input string <code>foo</code> should match as well as <code>foo+bar</code> and <code>foo+bar+baz</code> with each of the words (not <code>+</code>'s) being captured.</p> <p>So far, I have tried a few regular expressions but the closest I have got is this:</p> <pre><code>/^([A-z+]+)\+([A-z+]+)$/ </code></pre> <p>However, this only matches the case in which there are two words separated with a <code>+</code>, I need there to be <em>one or more</em> words. My method above would have worked if I could somehow repeat the second group (<code>\+([A-z+]+)</code>) zero or more times. So hence my question is: How can I repeat a capturing group zero or more times? <br> If there is a better way to do what I am doing, please let me know.</p>
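A regex engine only keeps the last repetition of a capture group, so the usual approach is to validate the whole string with a repeated non-capturing group and then split on `+`. (Side note: `[A-z]` is a common trap; it also matches `[ \ ] ^ _` and the backtick, while `[A-Za-z]` is what is meant.) A sketch:

```python
import re

WORDS = re.compile(r"^[A-Za-z]+(?:\+[A-Za-z]+)*$")  # word(+word)* shape

def parse(s):
    """Return the list of words if s is valid, else None."""
    if not WORDS.fullmatch(s):
        return None
    return s.split("+")

print(parse("foo"), parse("foo+bar+baz"), parse("foo+"))
```

Each returned word can then be checked against the predetermined list individually.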
<python><regex><regex-group>
2023-01-04 07:06:00
2
361
Nasser Kessas
75,001,839
5,686,015
Resetting multi-index dataframe with categorical index failing with ValueError: The result input must be a ndarray
<p>I've the following multiindex dataframe:</p> <pre><code> item_quantity current_stock_qty_9XU7 month name zone product type January East Product 18111.0 17799.0 Subtotal 19343.0 17803.0 North Combo 2457.0 34.0 Product 16900.0 31708.0 Subtotal 19357.0 31742.0 South Combo 3042.0 24.0 Product 19453.0 6630.0 Subtotal 22495.0 6654.0 West Combo 2903.0 6.0 Product 19185.0 114959.0 Subtotal 22088.0 114965.0 February East Combo 845.0 0.0 Product 6820.0 0.0 Subtotal 7665.0 0.0 North Combo 2050.0 0.0 Product 14054.0 0.0 Subtotal 16104.0 0.0 </code></pre> <p>The index for <code>month name</code> has to be a Categorical Index because I need the month order to be preserved during sort. But when I try to <code>reset_index</code> to a single index, pandas throws a ValueError:</p> <pre><code>ValueError Traceback (most recent call last) ~/github-repos/dolphin/Dolphin_Dashboard/app/compute_engine_wrapper.py in transform_df(df, rows, columns, measure_col_map, measure_order, hide_subtotals, measures_first) 1120 try: -&gt; 1121 pt.reset_index(level=1, inplace=True) 1122 except ValueError: ~/github-repos/dolphin/dd_env/lib/python3.8/site-packages/pandas/core/frame.py in reset_index(self, level, drop, inplace, col_level, col_fill) 4707 # to ndarray and maybe infer different dtype -&gt; 4708 level_values = _maybe_casted_values(lev, lab) 4709 new_obj.insert(0, name, level_values) ~/github-repos/dolphin/dd_env/lib/python3.8/site-packages/pandas/core/frame.py in _maybe_casted_values(index, labels) 4658 if mask.any(): -&gt; 4659 values, changed = maybe_upcast_putmask(values, mask, np.nan) 4660 ~/github-repos/dolphin/dd_env/lib/python3.8/site-packages/pandas/core/dtypes/cast.py in maybe_upcast_putmask(result, mask, other) 231 if not isinstance(result, np.ndarray): --&gt; 232 raise ValueError(&quot;The result input must be a ndarray.&quot;) 233 ValueError: The result input must be a ndarray. </code></pre> <p>This is on pandas <code>0.25.3</code>. 
Upgrading my pandas version isn't an option due to various dependencies. Is there any way to work around this bug?</p> <p>PS: <a href="https://github.com/pandas-dev/pandas/pull/36876" rel="nofollow noreferrer">Here's the bug report on github</a>. Monkey patching this fix works but I'd rather not.</p>
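A low-tech workaround that avoids patching pandas, under the assumption that casting the categorical level away sidesteps the buggy code path in 0.25: convert the categorical index level to plain object before `reset_index`, and re-apply the categorical dtype afterwards if the month ordering is still needed. A sketch with made-up data:

```python
import pandas as pd

months = pd.CategoricalIndex(["January", "February"],
                             categories=["January", "February"], ordered=True)
idx = pd.MultiIndex.from_product([months, ["East", "North"]],
                                 names=["month name", "zone"])
pt = pd.DataFrame({"item_quantity": [1, 2, 3, 4]}, index=idx)

# cast the categorical level to plain object before resetting
pt.index = pt.index.set_levels(pt.index.levels[0].astype(object), level=0)
pt = pt.reset_index(level=1)
print(pt.columns.tolist())
```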
<python><pandas><dataframe><multi-index>
2023-01-04 06:50:31
1
1,874
Judy T Raj
75,001,725
16,142,496
How to merge these DataFrames in pandas?
<p>I have 3 DataFrames with the following shapes: <br /> (34376, 13), (52389, 28), (16531, 14)</p> <p>This is the first DataFrame: <a href="https://i.sstatic.net/HwJCN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HwJCN.png" alt="enter image description here" /></a></p> <p>This is the second DataFrame: <a href="https://i.sstatic.net/lHtc3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lHtc3.png" alt="enter image description here" /></a></p> <p>This is the third DataFrame: <a href="https://i.sstatic.net/4tXDN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4tXDN.png" alt="enter image description here" /></a></p> <p>The main task is to merge these on the Accession Number.</p> <p>DF1 has exactly the 34376 Accessions we want.</p> <p>DF2 has around 28000 Accessions that we want; the remaining Accessions in that table are not needed.</p> <p>DF3 has around 9200 Accessions that we want.</p> <p>How can we merge all three DataFrames on Accession Number, so that the extra columns of DF2 and DF3 are merged into DF1? Note that DF2 has 52389 rows, so if an Accession Number is repeated in DF2 we still want to merge it, with the matching DF1 row repeated for each extra DF2 row, and the same for DF3. Where an Accession is present in DF1 but has no match in DF2/DF3, the merged columns should be null.</p>
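What is described is two chained left merges keyed on the accession column. A toy sketch with a hypothetical `Accession` column name: duplicates in DF2 repeat the DF1 row, accessions absent from DF2/DF3 get NaN, and DF2-only accessions are dropped.

```python
import pandas as pd

df1 = pd.DataFrame({"Accession": ["A1", "A2", "A3"], "x": [1, 2, 3]})
df2 = pd.DataFrame({"Accession": ["A1", "A1", "A4"], "y": [10, 11, 12]})
df3 = pd.DataFrame({"Accession": ["A2"], "z": [100]})

merged = (df1.merge(df2, on="Accession", how="left")   # A1 row repeats, A4 dropped
             .merge(df3, on="Accession", how="left"))  # missing matches -> NaN
print(merged)
```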
<python><pandas><dataframe><merge><row>
2023-01-04 06:35:55
1
351
Manas Jadhav
75,001,691
19,633,374
Turning the dataframe upside down and performing groupby
<pre><code>x = ts.loc[::-1, &quot;column_1&quot;].eq(0)  # first line of code, for reference
x.groupby(pd.Grouper(freq=&quot;M&quot;)).cumsum().head(35)  # second line of code, for reference
</code></pre> <p><code>Goal</code>: I have a time-series DataFrame which I need to turn upside down and then perform a groupby on.</p> <p><code>Problem</code>: the <code>first line of code</code> above successfully turns my DataFrame upside down, but in the <code>second line of code</code> the groupby automatically puts my DataFrame back into chronological order before applying its function.</p> <p>Can someone tell me how to overcome this (how to apply the groupby while my DataFrame is still reversed)?</p> <pre><code>Sample time-series dataset:

Date
01.01.2000
02.01.2000
03.01.2000
..
26.01.2000
27.01.2000
28.01.2000
29.01.2000
30.01.2000
31.01.2000
01.02.2000
02.02.2000
</code></pre>
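One approach: instead of `pd.Grouper`, group by an explicit key computed from the reversed index and pass `sort=False`, so the row order of the reversed frame is preserved. A sketch with made-up data:

```python
import pandas as pd

idx = pd.date_range("2000-01-28", "2000-02-03")       # hypothetical sample
ts = pd.DataFrame({"column_1": [0, 1, 0, 0, 1, 0, 0]}, index=idx)

x = ts.loc[::-1, "column_1"].eq(0)                    # upside down
# key the groupby on month periods, without re-sorting the reversed index
out = x.groupby(x.index.to_period("M"), sort=False).cumsum()
print(out)
```

`cumsum` is a transformation, so the result stays aligned with the reversed input: February rows come first, and the running count restarts at each month boundary.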
<python><pandas><dataframe><date><time-series>
2023-01-04 06:31:51
1
642
Bella_18
75,001,535
800,735
In GCP Dataflow/Apache Beam Python SDK, is there a time limit for DoFn.process?
<p>In Apache Beam Python SDK running on GCP Dataflow, I have a <code>DoFn.process</code> that takes a long time. My DoFn takes a long time for reasons that are not that important - I have to accept them due to requirements out of my control. But if you must know, it is making network calls to external services that take quite long (several seconds) and it is processing multiple elements from a previous <code>GroupByKey</code> - leading to <code>DoFn.process</code> calls that take minutes.</p> <p>Anyways, my question is: Is there a time limit for the runtime length of a <code>DoFn.process</code> call? I'm asking because I'm seeing logs that look like:</p> <pre><code>WARNING 2023-01-03T13:12:12.679957Z ReportProgress() took long: 1m49.15726646s
WARNING 2023-01-03T13:12:14.474585Z ReportProgress() took long: 1m7.166061638s
WARNING 2023-01-03T13:12:14.864634Z ReportProgress() took long: 1m58.479671042s
WARNING 2023-01-03T13:12:16.967743Z ReportProgress() took long: 1m40.379289919s
2023-01-03 08:16:47.888 EST Error message from worker: SDK harness sdk-0-6 disconnected.
2023-01-03 08:21:25.826 EST Error message from worker: SDK harness sdk-0-2 disconnected.
2023-01-03 08:21:36.011 EST Error message from worker: SDK harness sdk-0-4 disconnected.
</code></pre> <p>It seems to me like the Apache Beam Fn API progress-reporting machinery thinks that my <code>DoFn.process</code> function is stuck not making any progress and eventually terminates the &quot;unresponsive&quot; SDK harness. Is this happening because my <code>DoFn.process</code> is taking too long to process a single element? If so, how do I report progress to the Dataflow worker engine to let it know that my <code>DoFn.process</code> is still alive?</p>
<python><timeout><google-cloud-dataflow><apache-beam><apache-beam-internals>
2023-01-04 06:10:45
1
965
cozos
75,001,534
10,937,025
How to pass a file link in Postman instead of uploading a downloaded file, in Django
<p>I created an API that takes an xlsx file as input for the POST method and gives me back an edited xlsx file.</p> <p><strong>Problem</strong>: I get the file from a link, so every time I have to download the xlsx file and put it in Postman.</p> <p><strong>What I want</strong>: to put the link directly in Postman as the input file.</p> <p><strong>Note</strong>: every time, the link contains only one xlsx file.</p> <p>I looked for solutions in the documentation, but I can't find anything on how to pass a link for the input file.</p>
<python><django><postman>
2023-01-04 06:10:38
1
427
ZAVERI SIR
75,001,178
13,194,245
How to visualise nested functions in Django Admin documentation generator
<p>I have a Django project and inside my <code>views.py</code> file I have the following (theoretical) functions, where you can see that <code>get_fullname()</code> calls the other functions:</p> <pre><code>def dataframerequest(request, pk):
    df_re = database_sql.objects.filter(projectid=pk).values()
    df_list = [k for k in df_re]
    df = pd.DataFrame(df_list)
    return df

def get_first_name():
    first_name = 'Jack'
    return first_name

def get_last_name():
    last_name = 'Smith'
    return last_name

def get_fullname():
    first_name = get_first_name()
    last_name = get_last_name()
    full_name = first_name + ' ' + last_name
    return full_name
</code></pre> <p>Is there a way to visualise nested function calls throughout <code>views.py</code> with the Django Admin documentation generator? Or is there any other product that can easily visualise the links and usage between functions?</p> <p>I have resorted to using a spreadsheet to track them, but I am wondering if there is a better way/product that can do this automatically. This is my desired output, which helps trace back where things are used.</p> <p><a href="https://i.sstatic.net/N0UkJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/N0UkJ.png" alt="enter image description here" /></a></p>
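The admindocs app only documents views, models and template tags; it does not draw call graphs. As one possible replacement for the spreadsheet (an assumption, not a Django feature), Python's own `ast` module can extract which function calls which in a few lines:

```python
import ast

# stand-in for the contents of views.py
source = '''
def get_first_name():
    return 'Jack'

def get_last_name():
    return 'Smith'

def get_fullname():
    return get_first_name() + ' ' + get_last_name()
'''

tree = ast.parse(source)
calls = {}
for node in ast.walk(tree):
    if isinstance(node, ast.FunctionDef):
        # names directly called inside this function's body
        called = sorted({n.func.id for n in ast.walk(node)
                         if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)})
        calls[node.name] = called
print(calls)
```

Feeding the resulting mapping into a graph tool (e.g. graphviz) would give the desired picture; tools like `pyan` do essentially this.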
<python><django><documentation><documentation-generation>
2023-01-04 05:09:33
1
1,812
SOK
75,001,076
3,264,407
Flask-Sqlalchemy pagination - how to paginate through a table when there are multiple tables on same page
<p>I would like to have multiple tables on the same page and to be able to paginate through each table individually. Currently, if I click &quot;Next&quot; to go to the next page of results for one table, then all of the tables on the page go to their next page. This is an issue when the tables have a different number of pages, as the one with fewer pages will cause a 404 error. Edit: I just realised that if I set <code>error_out=False</code> in the paginate arguments it does not 404 on me, but just provides an empty table for the shorter table.</p> <p>I am using Flask-SQLAlchemy to query a table of support tickets and filter by the support teams that the support agent is a member of. A support agent can be in more than one team.</p> <p>Here is my route:</p> <pre><code>@interaction_bp.route(&quot;/my_teams_tix/&quot;)
def my_teams_tix():
    user = User.query.filter_by(full_name=str(current_user)).first()  # get the current user
    teams = user.teams  # a list of all the teams the current user is in
    page = request.args.get('page', 1, type=int)
    tickets = {}
    for team in teams:
        team_ticket = (
            Ticket.query.filter(Ticket.support_team_id == team)
            .filter(Ticket.status != &quot;Closed&quot;)
            .order_by(Ticket.ticket_number.asc())
            .paginate(page=page, per_page=current_app.config[&quot;ROWS_PER_PAGE&quot;])  # this is 5
        )
        tickets[team.name] = team_ticket
    return render_template('interaction/my_teams_work.html', tickets=tickets)
</code></pre> <p>So this lists all of the tickets that are assigned to a team that the current user is in.</p> <p>I'm using a Jinja2 template like so (stripping out all the CSS and most fields to simplify this example):</p> <pre><code>{% for team, pagination in tickets.items() %}
&lt;h2&gt;{{ team }}&lt;/h2&gt;
&lt;table id=&quot;{{team}}-table&quot;&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Ticket ID&lt;/th&gt;
      &lt;th&gt;Title&lt;/th&gt;
      &lt;th&gt;Owner&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    {% for ticket in pagination.items %}
    &lt;tr&gt;
      &lt;td&gt;{{ ticket.ticket_number }}&lt;/td&gt;
      &lt;td&gt;{{ ticket.type }}&lt;/td&gt;
      &lt;td&gt;{{ ticket.supporter.name }}&lt;/td&gt;
    &lt;/tr&gt;
    {% endfor %}
  &lt;/tbody&gt;
&lt;/table&gt;
{% if pagination.has_prev %}
  &lt;a href=&quot;{{ url_for('interaction_bp.my_teams_tix', team_name=team, page=pagination.prev_num) }}&quot;&gt;Previous&lt;/a&gt;
{% endif %}
Page {{ pagination.page }} of {{ pagination.pages }}
{% if pagination.has_next %}
  &lt;a href=&quot;{{ url_for('interaction_bp.my_teams_tix', team_name=team, page=pagination.next_num) }}&quot;&gt;Next&lt;/a&gt;
{% endif %}
{% endfor %}
</code></pre> <p>This all works fine. In this example the user is a member of two teams, so I end up with two tables of tickets like so:</p> <pre><code>ITSM
Ticket ID  Title     Owner
1006       Incident
1012       Request
1013       Incident
1015       Request
1016       Request
Page 1 of 3  Next

Database Admin
Ticket ID  Title     Owner
1001       Incident
1007       Request
1008       Incident
1010       Incident
1014       Request
Page 1 of 2  Next
</code></pre> <p>The first &quot;Next&quot; link above is this:</p> <pre><code>http://localhost:5000/my_teams_tix/?team_name=ITSM&amp;page=2
</code></pre> <p>and the other one is:</p> <pre><code>http://localhost:5000/my_teams_tix/?team_name=Database+Admin&amp;page=2
</code></pre> <p>When I click &quot;Next&quot; on any of the tables, it moves both tables to the next page. As one has 3 pages and the other 2 pages, when I click &quot;Next&quot; on the one with 3 pages I then get a 404 error.</p> <p>I'd like to have all these tables on the same page (there might be 4 or 5 of them, depending on the number of teams the support agent belongs to) and paginate through them independently.</p> <p>My gut tells me I'll need to do it using JavaScript, but I'm not sure how to go about it. Any help appreciated.</p>
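One server-side alternative to JavaScript (a sketch with a hypothetical naming scheme): give every table its own query parameter, e.g. `?page_ITSM=2&page_Database+Admin=1`, and read them independently in the route. A small helper:

```python
def page_for(args, team, default=1):
    """Read the page number for one team's table from the query args."""
    try:
        return max(int(args.get(f"page_{team}", default)), 1)
    except (TypeError, ValueError):
        return default

# in the route this would be: page = page_for(request.args, team.name)
print(page_for({"page_ITSM": "2"}, "ITSM"),
      page_for({"page_ITSM": "2"}, "Database Admin"))
```

Each table's "Next" link would then pass only its own parameter while re-sending the others, e.g. `url_for('interaction_bp.my_teams_tix', **{**request.args, f"page_{team}": pagination.next_num})`, so advancing one table leaves the rest on their current pages. Team names with spaces may be worth slugging first.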
<python><ajax><pagination><flask-sqlalchemy>
2023-01-04 04:48:23
2
473
calabash
75,001,056
4,451,521
Correct way to test a function that has some unused parameter
<p>Rather than coding, this is a question about how to correctly test a function.</p> <p><strong>Background</strong></p> <p>I am using pytest to test a function. A bit of background on this function:</p> <p>Originally the developers wrote a function like</p> <pre><code>def the_function(first_df, second_df)
</code></pre> <p>However, it seems they realized something was not working with their function, so they modified it to</p> <pre><code>def the_function(first_df, second_df, third_df)
</code></pre> <p>I have seen the code, and in this new implementation they stopped using <code>second_df</code> at all in their logic. The logical thing would have been to rewrite the function as <code>def the_function(first_df, third_df)</code>, but they didn't. They just left <code>second_df</code> there, unused.</p> <p>So now I have to write some unit tests for this function.</p> <p><strong>The question</strong></p> <p>Since <code>second_df</code> is not being used at all, I am thinking of preparing the data needed for <code>first_df</code> and <code>third_df</code> and just passing an empty DataFrame for <code>second_df</code> (since it is not used at all).</p> <p>Would this strategy be OK?</p> <p>I am a bit worried because one of the goals of unit testing is to keep any rewriting or refactoring of the function from introducing errors. What if in the future someone refactors <code>the_function</code> to actually use <code>second_df</code>?</p> <p>On the other hand, if someone does that and then tests the function, surely an empty DataFrame will signal an error, but in that case, will the unit test have to be rewritten?</p>
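A sketch of the empty-DataFrame strategy with an explicitly named sentinel, so the intent is documented and a future refactor that starts reading the parameter fails loudly (the function body here is a hypothetical stand-in for the real one):

```python
import pandas as pd

def the_function(first_df, second_df, third_df):
    # hypothetical stand-in: second_df is ignored, as in the real code
    return len(first_df) + len(third_df)

UNUSED_DF = pd.DataFrame()  # sentinel: documents that this argument is a don't-care

def test_the_function_ignores_second_df():
    first = pd.DataFrame({"a": [1, 2]})
    third = pd.DataFrame({"b": [3]})
    assert the_function(first, UNUSED_DF, third) == 3

test_the_function_ignores_second_df()
```

If a refactor starts using `second_df`, operations on the empty frame will typically raise (a KeyError, or a wrong result), so the test fails and must then be rewritten with representative data; that is the honest answer to the final question.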
<python><unit-testing>
2023-01-04 04:45:21
1
10,576
KansaiRobot
75,000,973
6,335,363
How can I cleanly assign class attributes as an instance of that class in Python?
<p>Consider the code below:</p> <pre class="lang-py prettyprint-override"><code>class Color:
    RED: &quot;Color&quot;
    GREEN: &quot;Color&quot;
    BLUE: &quot;Color&quot;
    WHITE: &quot;Color&quot;
    BLACK: &quot;Color&quot;

    def __init__(self, r: int, g: int, b: int) -&gt; None:
        self.r = r
        self.g = g
        self.b = b

Color.RED = Color(255, 0, 0)
Color.GREEN = Color(0, 255, 0)
Color.BLUE = Color(0, 0, 255)
Color.WHITE = Color(255, 255, 255)
Color.BLACK = Color(0, 0, 0)
</code></pre> <p>Here, I am creating a few color definitions, which can be accessed from the <code>Color</code> class, as well as creating custom color instances. However, it feels a little repetitive needing to declare then instantiate the values in two different places in my file. Optimally, I would do the following, but because it is self-referencing, I get a <code>NameError</code>.</p> <pre class="lang-py prettyprint-override"><code>class Color:
    RED = Color(255, 0, 0)
    GREEN = Color(0, 255, 0)
    BLUE = Color(0, 0, 255)
    WHITE = Color(255, 255, 255)
    BLACK = Color(0, 0, 0)

    def __init__(self, r: int, g: int, b: int) -&gt; None:
        self.r = r
        self.g = g
        self.b = b
</code></pre> <p>Is there a way for me to cleanly define my preset colors in one place whilst maintaining type safety and readability, or is the first example already as good as it's going to get?</p>
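One way to keep a single source of truth for the values (a sketch, not the only option): keep the annotations in the class body for type checkers, and fill the attributes from one dict right after the class statement:

```python
class Color:
    RED: "Color"
    GREEN: "Color"
    BLUE: "Color"

    def __init__(self, r: int, g: int, b: int) -> None:
        self.r, self.g, self.b = r, g, b

# the single place where the preset values live
_PRESETS = {"RED": (255, 0, 0), "GREEN": (0, 255, 0), "BLUE": (0, 0, 255)}
for _name, _rgb in _PRESETS.items():
    setattr(Color, _name, Color(*_rgb))

print(Color.RED.r, Color.GREEN.g, Color.BLUE.b)
```

The annotations still name each preset once, so this trades one kind of repetition for another; an `enum.Enum` subclass would remove the duplication entirely at the cost of a different access style (`Color.RED.value`).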
<python><class><attributes>
2023-01-04 04:28:54
1
2,081
Maddy Guthridge
75,000,688
1,187,968
Python List Indexing: What's the advantage of using Inclusive index for lower bound, and Exclusive index for upper bound?
<p>For example, <code>range(0, 6)</code> only generates the numbers 0 to 5: 0 is included, but 6 is excluded.</p> <p>I also see this in list slicing: in <code>mylist[:6]</code>, indexes 0 to 5 are included, but index 6 is excluded.</p> <p>What are the benefits of such an indexing mechanism? I find it strange that the lower bound is included while the upper bound is excluded.</p>
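The classic justification (due to Dijkstra) is that with half-open intervals the length is simply upper minus lower, adjacent ranges chain without overlap or gaps, and slices at the same index partition a list cleanly:

```python
mylist = list(range(10))
k = 6

# slices at the same index partition the list exactly, with no off-by-one
print(mylist[:k] + mylist[k:] == mylist)

# the length of a slice or range is just upper - lower
print(len(mylist[2:6]) == 6 - 2)
print(len(range(0, 6)) == 6 - 0)
```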
<python><list><indexing>
2023-01-04 03:29:57
1
8,146
user1187968
75,000,519
6,114,709
Comparing two dataframes on given columns
<p>I have two DataFrames, DF1 and DF2, with different column sets:</p> <pre><code>DF1
col1 col2 col3  col4 col5 col6 col7
-----------------------------------
Asr  Suh  dervi xyz  yes  NY   2022-04-11
Bas  nav  dervi xyz  yes  CA   2022-04-11
Naz  adn  otc   xyz  no   NJ   2022-05-01

DF2
col1 col2 col3  col4
--------------------
Asr  Suh  dervi xyz
Bas  nav  dervi xyz
</code></pre> <p>I want to compare these two DataFrames based on col2 and col3 and filter out the non-matching rows from DF1.</p> <p>So the compare operation should give the dataset below:</p> <pre><code>df3
Naz  adn  otc  xyz  no  NJ  2022-05-01
</code></pre>
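In pandas this is a left anti-join, which can be expressed with `merge(..., indicator=True)`. A sketch using only the columns that matter, reconstructed from the question:

```python
import pandas as pd

df1 = pd.DataFrame({"col1": ["Asr", "Bas", "Naz"],
                    "col2": ["Suh", "nav", "adn"],
                    "col3": ["dervi", "dervi", "otc"],
                    "col6": ["NY", "CA", "NJ"]})
df2 = pd.DataFrame({"col2": ["Suh", "nav"],
                    "col3": ["dervi", "dervi"]})

# left anti-join: keep DF1 rows with no (col2, col3) match in DF2
m = df1.merge(df2[["col2", "col3"]].drop_duplicates(),
              on=["col2", "col3"], how="left", indicator=True)
df3 = m[m["_merge"] == "left_only"].drop(columns="_merge")
print(df3)
```

Since the question is also tagged pyspark: the same operation there is `df1.join(df2, ["col2", "col3"], "left_anti")`.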
<python><pandas><dataframe><pyspark>
2023-01-04 02:52:25
2
301
Koushur
75,000,224
2,930,156
Why is tensorflow prediction_step extremely slow when the input features are python primitives instead of tensors?
<p>I spent half an hour debugging the slowness of the following code snippet:</p> <pre><code>import time

feature_values = {'query': ['hello', 'world'], 'ctr': [0.1, 0.2]}
model = tf.saved_model.load(model_path)

start = time.time()
output = model.prediction_step(feature_values)
print(time.time() - start)
</code></pre> <p>The above took a few minutes to finish. Then I found out that I need to convert the inputs to tensors first; after that it became very fast, as expected:</p> <pre><code>feature_values = {k: tf.constant(v) for k, v in feature_values.items()}
</code></pre> <p>My question is: why is there such a big latency difference, and why didn't the first approach even raise an error?</p>
<python><tensorflow2.0>
2023-01-04 01:43:39
1
985
John Jiang
75,000,164
4,682,839
Can't play video created with OpenCV VideoWriter
<h1>MWE</h1> <pre class="lang-py prettyprint-override"><code>import cv2

FPS = 30
KEY_ESC = 27
OUTPUT_FILE = &quot;vid.mp4&quot;

cam = cv2.VideoCapture(0)
codec = cv2.VideoWriter.fourcc(*&quot;mp4v&quot;)  # MPEG-4 http://mp4ra.org/#/codecs
frame_size = cam.read()[1].shape[:2]
video_writer = cv2.VideoWriter(OUTPUT_FILE, codec, FPS, frame_size)

# record until user exits with ESC
while True:
    success, image = cam.read()
    cv2.imshow(&quot;window&quot;, image)
    video_writer.write(image)
    if cv2.waitKey(5) == KEY_ESC:
        break

cam.release()
video_writer.release()
</code></pre> <h1>Problem</h1> <p>Video does not play.</p> <p>Firefox reports &quot;No video with supported format and MIME type found.&quot;.</p> <p>VLC reports &quot;cannot find any /moov/trak&quot; &quot;No steams found&quot;.</p>
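A likely cause (an assumption, but a very common one with this exact symptom): `shape[:2]` is `(height, width)` while `cv2.VideoWriter` expects `frameSize` as `(width, height)`; on a size mismatch OpenCV silently writes no frames, leaving an mp4 with no moov track. The slicing fix, shown with numpy only so it runs without a camera:

```python
import numpy as np

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # a typical 640x480 BGR frame

h, w = frame.shape[:2]   # numpy order: (rows, cols) = (height, width)
frame_size = (w, h)      # VideoWriter wants (width, height)
print(frame_size)
```

In the MWE that means `frame_size = cam.read()[1].shape[1::-1]`, or swapping the tuple explicitly as above.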
<python><opencv>
2023-01-04 01:31:30
1
2,096
mcp
75,000,045
4,019,495
Is there a clean way to reindex a pandas dataframe, where you don't know if the index is a multiindex or a normal index?
<p>I have a pandas dataframe whose index might be a multiindex or may just be a normal index. This results from doing a groupby where there are one or more groups.</p> <p>Regardless, I try to reindex with an index constructed from pd.MultiIndex.from_product. However this doesn't work.</p> <pre><code>a = pd.DataFrame([1,2,3], index=[1,2,3]) a.reindex(pd.MultiIndex.from_product([[1,2,3]])) 0 1 NaN 2 NaN 3 NaN </code></pre> <p>The behavior I want is</p> <pre><code>a.reindex(pd.Index([1,2,3])) 0 1 1 2 2 3 3 </code></pre> <p>using code generic enough to support MultiIndexes.</p>
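One sketch of a generic approach: build the target index from the level lists, using a flat `pd.Index` when there is a single level and `pd.MultiIndex.from_product` only when there are several (a one-level `MultiIndex` is not equal to a flat `Index`, which is why the reindex in the question produces all-NaN rows):

```python
import pandas as pd

a = pd.DataFrame([1, 2, 3], index=[1, 2, 3])

levels = [[1, 2, 3]]  # one list per groupby key, in this case just one

# Use a MultiIndex only when there is more than one level.
if len(levels) > 1:
    target = pd.MultiIndex.from_product(levels)
else:
    target = pd.Index(levels[0])

r = a.reindex(target)
print(r)
```

With two or more level lists the same code produces a proper `MultiIndex`, so the caller does not need to know in advance which kind of index the groupby produced.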
<python><pandas>
2023-01-04 01:03:04
1
835
extremeaxe5
74,999,927
2,482,149
KeyError When Trying to Insert Nested Dictionary Into Postgres Table PsycoPg2
<p>I'm getting this error <code>KeyError: 'country.id'</code> when I use psycopg2 to insert a list with a nested dictionary into a table in postgres:</p> <pre><code>import psycopg2 import logging from psycopg2.extras import LoggingConnection def insert_fixture_data(match: dict): conn = psycopg2.connect( connection_factory=LoggingConnection, **db_settings) home_team = match['home_team'] home_manager = home_team['managers'] sql_updates = ( &quot;INSERT INTO manager (id,country_id,name,nickname,birth_date) VALUES (%(id)s,%(country.id)s,%(name)s,%(nickname)s,%(dob)s) ON CONFLICT DO NOTHING RETURNING id;&quot; ) try: cursor = conn.cursor() cursor.executemany(sql_updates, home_manager) except Exception as error: print(error) finally: conn.close() </code></pre> <p><code>home_manager</code> looks like this:</p> <pre><code>[{'id': 665, 'name': 'Ivan Juric', 'nickname': None, 'dob': '1975-08-25', 'country': {'id': 56, 'name': 'Croatia'}}] </code></pre> <p>The schema and column names were correct when I checked in postgresql.</p>
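The `%(country.id)s` placeholder cannot reach into a nested dict: psycopg2's named-parameter lookup only sees top-level keys, so `'country.id'` raises `KeyError`. One sketch of a fix is to flatten each record one level before passing it to `executemany` (the `flatten` helper below is hypothetical, not part of psycopg2; the key names mirror the question's SQL):

```python
# Hypothetical helper: flatten one level of nesting so that a placeholder
# like %(country.id)s can resolve against a plain top-level key.
def flatten(record):
    flat = {}
    for key, value in record.items():
        if isinstance(value, dict):
            for sub_key, sub_value in value.items():
                flat[f"{key}.{sub_key}"] = sub_value
        else:
            flat[key] = value
    return flat

home_manager = [{'id': 665, 'name': 'Ivan Juric', 'nickname': None,
                 'dob': '1975-08-25',
                 'country': {'id': 56, 'name': 'Croatia'}}]

params = [flatten(m) for m in home_manager]
print(params[0]['country.id'])  # the key executemany could not find before
```

With this in place, `cursor.executemany(sql_updates, params)` should find every named parameter at the top level.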
<python><sql><postgresql><psycopg2>
2023-01-04 00:36:37
1
1,226
clattenburg cake
74,999,869
10,430,394
Clip patch not actually clipping in matplotlib
<p>I've made an animation that is supposed to show a Japanese hiragana character being drawn. To that end I am using a clip path so that my drawn line will only fill out a predetermined area. However, the clip path doesn't seem to do anything. It's not returning any errors and the clip patch is just not being applied in the animation. What am I doing wrong?</p> <pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt from svgpath2mpl import parse_path from matplotlib.patches import PathPatch from matplotlib.animation import FuncAnimation clip_patch = &quot;M181 359C174 359 165 363 166 371C167 382 176 389 183 396C198 412 199 435 204 455C222 538 238 623 276 700C300 745 333 786 374 816C385 824 400 826 411 818C418 814 420 805 420 797C421 780 417 762 420 745C422 720 430 696 436 671C437 666 438 657 431 654C424 652 417 658 414 664C399 686 387 710 372 731C370 734 366 735 364 732C347 718 331 702 319 683C303 658 294 629 285 601C276 570 267 538 260 506C254 477 250 448 249 419C249 409 254 399 253 389C251 379 240 375 232 370C216 363 199 359 181 359Z&quot; stroke_path = &quot;M 174,370 218,397 276,642 316,734 394,801 396,733 423,673&quot; clip_patch = parse_path(clip_patch) stroke_path = parse_path(stroke_path) fig, ax = plt.subplots() ax.plot([],[],'k',lw=15) ax.set_clip_path(clip_patch,transform=ax.transData) clip_patch = PathPatch(clip_patch,fc=(0.3,0.3,0.3,0.3),ec=(0,0,0,0)) ax.add_patch(clip_patch) def init(): ax.set_xlim(0,1024) ax.set_ylim(0,1024) ax.set_aspect('equal') ax.invert_yaxis() return ax.lines def update(frame): x = stroke_path.vertices[:frame,0] y = stroke_path.vertices[:frame,1] ax.lines[0].set_data(x,y) return ax.lines ani = FuncAnimation(fig, update, frames=len(stroke_path),init_func=init, blit=True) plt.show() </code></pre> <p>As is, the line is being drawn, but the clip path (which is indicated in grey) doesn't actually clip the black line.</p>
<python><matplotlib><animation><clip>
2023-01-04 00:23:10
1
534
J.Doe
74,999,834
2,299,692
MemoryError in Jupyter but not in Python
<p>I'm running</p> <pre><code>NAME=&quot;CentOS Linux&quot; VERSION=&quot;7 (Core)&quot; ID=&quot;centos&quot; ID_LIKE=&quot;rhel fedora&quot; VERSION_ID=&quot;7&quot; </code></pre> <p>with plenty of memory</p> <pre><code> total used free shared buff/cache available Mem: 125G 3.3G 104G 879M 17G 120G </code></pre> <p>64 bit Anaconda <a href="https://repo.anaconda.com/archive/Anaconda3-2022.10-Linux-x86_64.sh" rel="nofollow noreferrer">https://repo.anaconda.com/archive/Anaconda3-2022.10-Linux-x86_64.sh</a></p> <p>I have set max_buffer_size to 64GB in both jupyter_notebook_config.json and jupyter_notebook_config.py, and just to make sure specify it on the command line:</p> <p><code>jupyter notebook --certfile=ssl/mycert.perm --keyfile ssl/mykey.key --no-browser --NotebookApp.max_buffer_size=64000000000</code></p> <p>And also</p> <pre><code>cat /proc/sys/vm/overcommit_memory 1 </code></pre> <p>I run a simple memory allocation snippet:</p> <pre class="lang-py prettyprint-override"><code> size = int(6e9) chunk = size * ['r'] print (chunk.__sizeof__()/1e9) </code></pre> <p>as a standalone .py file and it works:</p> <pre><code>python ../readgzip.py 48.00000004 </code></pre> <p>happily reporting that it allocated 48GB for my list.</p> <p>However, the same code in a jupyter notebook only works up to 7.76GB:</p> <pre class="lang-py prettyprint-override"><code> size = int(9.7e8) chunk = size * ['r'] print (chunk.__sizeof__()/1e9) 7.76000004 </code></pre> <p>and fails after increase the array size from 9.7e8 to 9.75e8</p> <pre class="lang-py prettyprint-override"><code>--------------------------------------------------------------------------- MemoryError Traceback (most recent call last) /tmp/ipykernel_12328/3436837519.py in &lt;module&gt; 1 size = int(9.75e8) ----&gt; 2 chunk = size * ['r'] 3 print (chunk.__sizeof__()/1e9) MemoryError: </code></pre> <p>Also, on my home Windows11 machine with 64GB of memory I can easily run the code above and allocate 32GB of memory.</p> 
<p>It seems like I'm missing something about the Jupyter setup on Linux.</p> <p>What am I missing?</p> <p>Thank you</p>
<python><jupyter-notebook><anaconda><out-of-memory><centos7>
2023-01-04 00:13:15
1
1,938
David Makovoz
74,999,761
1,275,942
Sqlalchemy many-to-many association proxy: silently reject duplicates
<p>I have a many to many association using association proxies as follows:</p> <pre><code>import sqlalchemy.orm as orm import sqlalchemy as sa import sqlalchemy.ext.associationproxy as ap class Asset(BaseModel): __tablename__ = 'assets' id = sa.Column(sa.Integer, primary_key=True) name = sa.Column(sa.VARCHAR(255)) asset_tags = orm.relationship( &quot;AssetTag&quot;, back_populates='asset', cascade='all, delete-orphan') tags = ap.association_proxy( &quot;asset_tags&quot;, &quot;tag&quot;, creator=lambda tag: AssetTag(tag=tag)) class AssetTag(BaseModel): __tablename__ = 'asset_tags' asset_id = sa.Column(sa.Integer, sa.ForeignKey(&quot;assets.id&quot;), primary_key=True) tag_id = sa.Column(sa.Integer, sa.ForeignKey(&quot;tags.id&quot;), primary_key=True) asset = orm.relationship(&quot;Asset&quot;, back_populates='asset_tags') tag = orm.relationship(&quot;Tag&quot;, back_populates='asset_tags') class Tag(BaseModel): __tablename__ = 'tags' id = sa.Column(sa.Integer, primary_key=True) name = sa.Column(sa.VARCHAR(255)) asset_tags = orm.relationship(&quot;AssetTag&quot;, back_populates='tag', cascade='all, delete-orphan') assets = ap.association_proxy(&quot;asset_tags&quot;, &quot;asset&quot;, creator=lambda asset: AssetTag(asset=asset)) </code></pre> <p>Note that the asset_tags table has a uniqueness constraint on (asset_id, tag_id).</p> <p>If I do</p> <pre><code>with Session() as s: a = s.get(Asset, 1) tag = a.tags[0] a.tags.append(tag) s.commit() </code></pre> <p>SQLAlchemy creates a new AssetTag between Asset&lt;1&gt; and Tag&lt;31&gt; (example number), and tries to commit that, violating the uniqueness constraint.</p> <pre><code>pymysql.err.IntegrityError: (1062, &quot;Duplicate entry '1-31' for key 'PRIMARY'&quot;) </code></pre> <p>Is there any way to make <code>asset.tags</code> have the behavior of a set, where adding an existing item is skipped?</p> <pre><code>asset.tags.append(tag) asset.tags.append(tag) # fails silently </code></pre>
<python><sqlalchemy>
2023-01-03 23:57:19
1
899
Kaia
74,999,752
5,750,741
Executing parameterized BigQuery SQL inside the Python function
<p>I am trying to pass some parameters through a Python function. Inside the function, I am trying to execute BigQuery SQL and update an existing table(creating and replacing tables). I keep getting</p> <pre><code>BadRequest: 400 1.2 - 1.118: Unrecognized token CREATE. [Try using standard SQL (https://cloud.google.com/bigquery/docs/reference/standard-sql/enabling-standard-sql)] (job ID: 7417d5d6-fdcd-420e-b7ac-4aaa8bb3347c) -----Query Job SQL Follows----- | . | . | . | . | . | . | . | . | . | . | . | 1: CREATE OR REPLACE TABLE `analytics-mkt-cleanroom.MKT_DS.PXV2DWY_HS_MODEL_INTRMDT_TAB_01` AS SELECT '2021-07-01' AS DT | . | . | . | . | . | . | . | . | . | . | . | </code></pre> <p>error.</p> <p>Here's my complete Jupyter Notebook code:</p> <pre><code># Creating and initializing a random table: %%bigquery CREATE OR REPLACE TABLE `analytics-mkt-cleanroom.MKT_DS.Home_Services_PXV2DWY_HS_MODEL_INTRMDT_TABLE_01` AS SELECT CURRENT_DATE AS DT # Checking what's the current date: %%bigquery SELECT * FROM `analytics-mkt-cleanroom.MKT_DS.Home_Services_PXV2DWY_HS_MODEL_INTRMDT_TABLE_01` # Initializing random str date variable: from_date = f&quot;'2021-07-01'&quot; to_date = f&quot;'2022-06-30'&quot; # Creating a Python function to update the existing table using a parameter: from google.cloud import bigquery def my_func(from_date): client = bigquery.Client(project='analytics-mkt-cleanroom') job_config = bigquery.QueryJobConfig() job_config.use_legacy_sql = True destination_table_id = f'`analytics-mkt-cleanroom.MKT_DS.PXV2DWY_HS_MODEL_INTRMDT_TAB_01`' sql = &quot;&quot;&quot; CREATE OR REPLACE TABLE &quot;&quot;&quot; + destination_table_id + &quot;&quot;&quot; AS SELECT {0} AS DT &quot;&quot;&quot;.format(from_date) query = client.query(sql, job_config=job_config) query.result() return # Checking what's the SQL that is getting generated inside: destination_table_id = f'`analytics-mkt-cleanroom.MKT_DS.PXV2DWY_HS_MODEL_INTRMDT_TAB_01`' sql = &quot;&quot;&quot; CREATE OR REPLACE 
TABLE &quot;&quot;&quot; + destination_table_id + &quot;&quot;&quot; AS SELECT {0} AS DT &quot;&quot;&quot;.format(from_date) sql my_func(from_date) </code></pre> <p>This will be just a small part of larger project where I have to create data pipelines using Python and BigQuery.</p>
<python><google-bigquery>
2023-01-03 23:55:02
1
1,459
Piyush
74,999,616
9,485,834
shutil.copy() throwing up invalid argument error when repeating
<p>I'm seeing some very odd behavior with shutil's copy method.</p> <p>Here is my basic code:</p> <pre class="lang-py prettyprint-override"><code>def time_and_log(func): def inner_function(*args,**kwargs): times = np.array([]) repititions = kwargs.pop(&quot;repititions&quot;,1) for i in range(repititions): print(f&quot;i: {i}&quot;) start_time = time.perf_counter() logfile = func(*args,**kwargs) end_time = time.perf_counter() elapsed_time = end_time - start_time times = np.append(times,elapsed_time) #Other stuff ommitted here, but basically just logging to a file return inner_function @time_and_log def copy(src,dst,logfile,repititions=1): if Path(src).is_dir(): shutil.copytree(src,dst) else: #I originally didn't have this but I thought the existence of the file was blocking it somehow? if os.path.isfile(dst): os.remove(dst) shutil.copy(src,dst) return logfile </code></pre> <p>the time_and_log decorator just takes the number of repetitions and loops it that many times and then does some custom formatting and logging.</p> <p>I'm writing to a mounted file system via WSL2, the first iteration of the loop it runs, fine, then the second iteration of the loop returns an error:</p> <pre><code>OSError: [Errno 22] Invalid argument: '\\\\wsl.localhost\\\\Ubuntu\\\\home\\\\me\\\\mountpoint\\\\myfile.bin' </code></pre> <p>But then I can take that exact argument and do this:</p> <pre class="lang-py prettyprint-override"><code>shutil.copy(src,'\\\\wsl.localhost\\\\Ubuntu\\\\home\\\\me\\\\mountpoint\\\\myfile.bin') </code></pre> <p>and that works.</p> <p>I'm kind of baffled. 
Anyone know why this is happening?</p> <p>Here's the code to recreate it</p> <pre><code>LOG_FILE = &quot;log.txt&quot; REPS = 5 dst = r&quot;\\wsl.localhost\\Ubuntu\\home\\me\\mountpoint\\&quot; src_path = &quot;.\\anypath\\&quot; test_file = &quot;myfile.bin&quot; copy(src=src_path+test_file,dst=dst+test_file,logfile=LOG_FILE,repititions=REPS) </code></pre> <p>For clarity, the reason I'm doing this is I'm testing performance for different methods to mount to azure storage, blobfuse vs smb file share vs NFSv3 vs RClone</p>
<python><shutil><python-os>
2023-01-03 23:23:26
0
600
Jamalan
74,999,514
12,713,678
Can't bind a Python function to a QML button
<p>I've been struggling with binding QT Quick Application front-end and back-end. Here's my code. For a clear and short example the function prints &quot;Hello from back-end&quot; as an output without getting any arguments.</p> <p><strong>main.py</strong></p> <pre><code>from pathlib import Path from PySide6.QtGui import QGuiApplication from PySide6.QtQml import QQmlApplicationEngine, QmlElement from PyQt6.QtCore import QObject, pyqtSignal, pyqtSlot class Calculator(QObject): def __init__(self): QObject.__init__(self) @pyqtSlot() def greet(self): print(&quot;Hello from backend&quot;) if __name__ == &quot;__main__&quot;: app = QGuiApplication(sys.argv) engine = QQmlApplicationEngine() backend = Calculator() engine.rootContext().setContextProperty(&quot;backend&quot;, backend) engine.load(&quot;main.qml&quot;) engine.quit.connect(app.quit) sys.exit(app.exec_()) </code></pre> <p><strong>main.qml</strong></p> <pre><code>import QtCore import QtQuick import QtQuick.Controls import QtQuick.Dialogs ApplicationWindow { id: mainWindow ... Button { objectName: &quot;checkButton&quot; id: modelCheck text: qsTr(&quot;Check model&quot;) onClicked: backend.greet() } Connections { target: backend function onSignalPrintTxt(boolValue) { return } } } </code></pre> <p>When running this code, I get the following error:</p> <pre><code>TypeError: Property 'greet' of object QVariant(PySide::PyObjectWrapper, 0x1006571f0) is not a function </code></pre> <p>What am I doing wrong?</p>
<python><qt><qml><pyside6>
2023-01-03 23:06:28
1
347
Philipp
74,999,497
14,141,126
Return key/value pairs from dict if values are found in list
<p>I have a list with several Active Directory codes. I also have a dictionary that is built from a live query of the AD, meaning this dictionary does not hold constant key/value pairs. I get a large number of AD events each time the server is queried, but I'm only interested in some of them. AD events are categorized by event IDs, which contain a brief description of what they are.</p> <p>My dictionary contains both the description and the ID, and my list contains only the codes I'm interested in.</p> <pre><code>event_code_list = ['4662','4738','4763','4733','4730','4737','4753','4670','4754','4750'] events_and_ids = {'logged-in': '4624', 'Directory Service Access': '4662', 'permissions-changed': '4670', 'created-process': '4688', 'logged-out': '4634', 'exited-process': '4689', 'user-member-enumerated': '4799', 'Directory Service Access': '4662'} </code></pre> <p>What I'm trying to achieve is to see which dict values match the IDs in the list, or vice-versa, and then return a dictionary with the matching key/value pairs.</p> <pre><code>{'Directory Service Access': '4662','permissions-changed': '4670'} </code></pre> <p>All my attempts have so far failed. The sample below returns all key/value pairs in the dictionary regardless of whether the value matches or not, which is obviously wrong.</p> <pre><code>for e in event_code_list: if e in events_and_ids.values() and e in event_code_list: print(events_and_ids.items()) </code></pre> <p>Any help is appreciated.</p>
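One way this could be sketched is a dict comprehension that keeps only the pairs whose value is in the code list (converting the list to a set first makes each membership test O(1)):

```python
event_code_list = ['4662', '4738', '4763', '4733', '4730', '4737',
                   '4753', '4670', '4754', '4750']
events_and_ids = {'logged-in': '4624', 'Directory Service Access': '4662',
                  'permissions-changed': '4670', 'created-process': '4688',
                  'logged-out': '4634', 'exited-process': '4689',
                  'user-member-enumerated': '4799'}

# Keep only the key/value pairs whose value appears among the interesting codes.
wanted = set(event_code_list)
matching = {name: code for name, code in events_and_ids.items() if code in wanted}
print(matching)  # {'Directory Service Access': '4662', 'permissions-changed': '4670'}
```

The original loop printed `events_and_ids.items()` (the whole dict) on every hit, which is why every pair appeared regardless of the match.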
<python>
2023-01-03 23:03:41
1
959
Robin Sage
74,999,450
4,262,057
Fetch does not send body in POST request (request.get_json(silent=True) is empty)
<p>I am having the following custom component in which I am trying to send a HTTP post request to a server. The curl of this command is as follows:</p> <pre><code>curl --location --request POST 'https://CLOUD-FUNCTION-HTTP-URL' \ --header 'Content-Type: application/json' \ --header 'Authorization: Bearer foo' \ --header 'Accept: application/json' \ --data-raw '{ &quot;message&quot;: &quot;hello&quot; }' </code></pre> <p>When I send the request with Postman, my server does receive the content of the <code>body</code>.</p> <p>However, when I send it from my React app, the body is not found (the content is empty).</p> <p>On the back-end side, I combined the following two tutorials</p> <ul> <li><a href="https://cloud.google.com/functions/docs/samples/functions-http-cors" rel="nofollow noreferrer">https://cloud.google.com/functions/docs/samples/functions-http-cors</a> (replaced GET with POST)</li> <li><a href="https://cloud.google.com/functions/docs/samples/functions-http-content" rel="nofollow noreferrer">https://cloud.google.com/functions/docs/samples/functions-http-content</a> (<code>request.get_json(silent=True)</code> is empty when sent from React)</li> </ul> <p>What is causing this mismatch?</p> <pre><code>import * as React from &quot;react&quot;; import { useState } from &quot;react&quot;; import CircularProgress from &quot;@mui/material/CircularProgress&quot;; import { Button, TextField } from &quot;@mui/material&quot;; import { getBearerToken } from &quot;../auth/firebase/AuthProvider&quot;; export default function Wrapper() { const [prompt, setPrompt] = useState(&quot;Do something&quot;); const [data, setData] = useState([]); const [status, setStatus] = useState(null); const handleSubmit = (e) =&gt; { e.preventDefault(); setStatus(&quot;loading&quot;); getBearerToken().then( function (token) {success(token);}, function (error) {setStatus(&quot;error&quot;);} ); function success(token) { const requestOptions = { method: &quot;POST&quot;, headers: { 
&quot;Accept&quot;: &quot;application/json&quot;, &quot;Content-Type&quot;: &quot;application/json&quot;, &quot;Authorization&quot;: &quot;Bearer &quot; + token }, body: JSON.stringify({ &quot;message&quot;: &quot;hello&quot; }), } fetch(&quot;https://CLOUD-FUNCTION-HTTP-URL&quot;, requestOptions) .then((res) =&gt; res.json()) .then((data) =&gt; { console.log(data); setData(data); setStatus(&quot;success&quot;); }) .catch((err) =&gt; { console.log(err) console.log(err.message); setStatus(&quot;error&quot;); }); } }; function handleSetPrompt(e) { setPrompt(e.target.value); } return ( &lt;&gt; &lt;form onSubmit={handleSubmit}&gt; &lt;TextField id=&quot;standard-basic&quot; label=&quot;Standard&quot; variant=&quot;standard&quot; onChange={handleSetPrompt} value={prompt} /&gt; &lt;Button color=&quot;inherit&quot; type=&quot;submit&quot;&gt;Fetch&lt;/Button&gt; &lt;LoadingSpinner shouldHide={status !== &quot;loading&quot;} /&gt; &lt;/form&gt; &lt;div&gt; &lt;p&gt;{prompt}&lt;/p&gt; &lt;p&gt;{status}&lt;/p&gt; &lt;/div&gt; &lt;/&gt; ); } const LoadingSpinner = (props) =&gt; ( &lt;div className={props.shouldHide ? &quot;hidden&quot; : undefined}&gt; &lt;CircularProgress&gt;&lt;/CircularProgress&gt; &lt;/div&gt; ); </code></pre>
<javascript><python><reactjs><fetch-api>
2023-01-03 22:57:44
1
7,054
WJA
74,999,181
18,758,062
How to cycle colors in Matplotlib PatchCollection?
<p>I am trying to automatically give each <code>Patch</code> in a <code>PatchCollection</code> a color from a color map like <code>tab20</code>.</p> <pre><code>from matplotlib.collections import PatchCollection import matplotlib.pyplot as plt fig, ax = plt.subplots(figsize=(5,5)) coords = [ (0, 0), (1, 2), (1, 3), (2, 2), ] patches = [plt.Circle(coords[i], 0.1) for i in range(len(coords))] patch_collection = PatchCollection(patches, cmap='tab20', match_original=True) ax.add_collection(patch_collection) ax.set_xlim(-1, 3) ax.set_ylim(-1, 4) plt.axis('equal') </code></pre> <p>But the above code is drawing each circle using the same color. How can the colors be cycled?</p> <p><a href="https://i.sstatic.net/Z9K3i.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Z9K3i.png" alt="enter image description here" /></a></p>
<python><matplotlib>
2023-01-03 22:15:58
2
1,623
gameveloster
74,999,093
3,103,957
Are builtins like str() and int() functions or classes?
<p>In Python, when we call <code>str(&lt;something&gt;)</code>, are we calling the function, or are we calling the class constructor (the class being <code>str</code>)? Similarly, <code>int()</code>, <code>complex()</code> etc.</p>
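A short experiment makes the distinction concrete: `str`, `int`, and `complex` are classes (instances of `type`), so calling them invokes the type's constructor machinery (`type.__call__`, which runs `__new__` and `__init__`), whereas a genuine builtin function like `len` is not a type:

```python
# str, int, complex are classes, not functions.
assert isinstance(str, type)
assert isinstance(int, type)
assert isinstance(complex, type)

# Calling str(...) therefore constructs an instance of the class str.
assert type(str("x")) is str

# By contrast, len is a builtin function, not a class.
assert not isinstance(len, type)

print(type(str))  # <class 'type'>
```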
<python><function><class>
2023-01-03 22:02:40
0
878
user3103957
74,998,958
12,436,050
Compare dataframes and add new rows in python
<p>I have two pandas dataframes.</p> <pre><code>df1 col1 col2 col3 col4 A C0079731 s1 abc A C0079731 s2 abc </code></pre> <pre><code>df2 col1 col2 col3 A C0079731 s1 A C0079731 s2 AA C0079731 s3 </code></pre> <p>I would like to compare col2 and, if any 'col3' value is missing, add it to 'df1'. The expected output is:</p> <pre><code>df1 col1 col2 col3 col4 A C0079731 s1 abc A C0079731 s2 abc AA C0079731 s3 abc </code></pre> <p>What I have tried so far is merging the two dataframes, but how can I get the expected output above?</p> <p>df_2 = df1.merge(df2, left_on='col2', right_on = 'col2', how = 'inner')</p>
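One sketch: an outer merge on the shared columns keeps the df2 rows that df1 lacks (their col4 comes out NaN), and those NaNs can then be filled. The fill value is an assumption here — since every existing col4 is 'abc', the sketch reuses that constant:

```python
import pandas as pd

df1 = pd.DataFrame({'col1': ['A', 'A'], 'col2': ['C0079731'] * 2,
                    'col3': ['s1', 's2'], 'col4': ['abc', 'abc']})
df2 = pd.DataFrame({'col1': ['A', 'A', 'AA'], 'col2': ['C0079731'] * 3,
                    'col3': ['s1', 's2', 's3']})

# Outer merge keeps the df2 rows missing from df1; col4 is NaN for those rows.
out = df1.merge(df2, on=['col1', 'col2', 'col3'], how='outer')

# Assumption: new rows inherit the existing col4 value.
out['col4'] = out['col4'].fillna(df1['col4'].iloc[0])
print(out)
```

An inner merge (as in the attempted code) can never add rows, which is why it could not produce the expected output.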
<python><pandas>
2023-01-03 21:47:44
1
1,495
rshar
74,998,947
12,603,110
What's python's LOAD_FAST bytecode instruction fast at?
<p><a href="https://docs.python.org/3/library/dis.html#opcode-LOAD_FAST" rel="nofollow noreferrer">https://docs.python.org/3/library/dis.html#opcode-LOAD_FAST</a></p> <p>I have been wondering, why is it specifically named fast and not just &quot;load&quot;?</p> <p>What alternatives could be used instead that are &quot;slower&quot;?</p> <p>When are these alternatives used?</p>
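In short: LOAD_FAST reads a local variable from a fixed-size array by integer index, which is faster than the name-based dict lookups done by its "slower" cousins LOAD_GLOBAL (module/builtin scope) and LOAD_NAME (used where the scope is not statically known, e.g. class bodies and module level). Disassembling a small function shows both kinds of access side by side:

```python
import dis

def f(x):
    y = 1                 # local: STORE_FAST / LOAD_FAST (array index lookup)
    return x + y + glob   # glob is not local: LOAD_GLOBAL (dict lookup by name)

ops = [ins.opname for ins in dis.get_instructions(f)]
print(ops)
```

Locals can use the fast array because the compiler knows every local name in a function at compile time and can assign each a fixed slot.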
<python>
2023-01-03 21:46:53
1
812
Yorai Levi
74,998,715
353,337
Add multiple elements to pathlib path
<p>I have a Python pathlib <code>Path</code> and a list of strings, and I'd like to concatenate the strings to the path. This works</p> <pre class="lang-py prettyprint-override"><code>from pathlib import Path a = Path(&quot;a&quot;) lst = [&quot;b&quot;, &quot;c&quot;, &quot;d&quot;] for item in lst: a = a / item print(a) </code></pre> <pre><code>a/b/c/d </code></pre> <p>but is a little clumsy. Can the <code>for</code> loop be replaced by something else?</p>
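One way the loop could collapse to a single call is `Path.joinpath`, which accepts any number of segments (and `Path("a", *lst)` works the same way in the constructor); `functools.reduce` over `/` is another equivalent spelling:

```python
from functools import reduce
from pathlib import Path

lst = ["b", "c", "d"]

# joinpath takes multiple segments, so no explicit loop is needed:
a = Path("a").joinpath(*lst)
print(a.as_posix())  # a/b/c/d

# The same result via reduce over the / operator:
b = reduce(lambda p, seg: p / seg, lst, Path("a"))
assert a == b
```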
<python><pathlib>
2023-01-03 21:16:29
2
59,565
Nico Schlömer
74,998,706
3,672,883
How can I add external python modules in runtime?
<p>I am trying to load a torch model; in order to do this, the model <code>.pt</code> file needs two modules.</p> <p>In my code I have the following structure:</p> <pre><code>src __init__.py .... </code></pre> <p>I am trying to add those two modules to src at runtime with <code>sys.path.insert(0, path)</code>, but it doesn't work; it only works if I copy the modules at the same level as src, like</p> <pre><code>module_a src </code></pre> <p>The problem is that I don't want to put these modules in the code base of that application; I only want to load them from an external folder.</p> <p>Is this possible?</p> <p>Thanks</p>
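A minimal sketch of loading a module from an external folder at runtime (the folder and module here are hypothetical stand-ins created on the fly). Two details that are easy to miss: `sys.path.insert` must use the folder *containing* the module, and `importlib.invalidate_caches()` is needed if the folder appeared after the interpreter started. Note that `torch.load` unpickles by module path, so the module name on `sys.path` must match the name recorded in the `.pt` file:

```python
import importlib
import sys
import tempfile
from pathlib import Path

# Hypothetical stand-in for the external folder holding module_a:
external = Path(tempfile.mkdtemp())
(external / "module_a.py").write_text("VALUE = 42\n")

sys.path.insert(0, str(external))
importlib.invalidate_caches()  # refresh finder caches for the new directory

module_a = importlib.import_module("module_a")
print(module_a.VALUE)  # 42
```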
<python><pytorch>
2023-01-03 21:14:59
0
5,342
Tlaloc-ES
74,998,629
443,836
How to call a field of CFUNCTYPE?
<p>I have a DLL (<code>.dll</code> for Windows, <code>.so</code> for Linux) and this autogenerated C header file for that DLL (excerpt):</p> <hr /> <p><strong>Edit notes</strong></p> <p><strong>tl;dr:</strong> I have updated my header excerpt and Python code because I had left out too much.</p> <p><strong>Details:</strong> I had left out a bunch of declarations in the header that I deemed unimportant but that turned out to be crucial (revealed by <a href="https://stackoverflow.com/a/75000278/443836">the accepted answer</a>). Additionally, I had reduced struct nesting and renamed a struct to reduce clutter. (<code>kotlin.root.com.marcoeckstein.klib</code> was just <code>myLib</code>.) But since the accepted answer was created based on the actual DLL (as requested in the comments and shared outside of Stack Overflow), I have also revised my header excerpt and Python code to be consistent with that DLL and thus the accepted answer.</p> <hr /> <pre><code>typedef struct { // #region // The details of most of these declarations are not important, // but as it turned out, their memory requirements in the // binary layout are: void (*DisposeStablePointer)(libnative_KNativePtr ptr); void (*DisposeString)(const char* string); libnative_KBoolean (*IsInstance)(libnative_KNativePtr ref, const libnative_KType* type); libnative_kref_kotlin_Byte (*createNullableByte)(libnative_KByte); libnative_kref_kotlin_Short (*createNullableShort)(libnative_KShort); libnative_kref_kotlin_Int (*createNullableInt)(libnative_KInt); libnative_kref_kotlin_Long (*createNullableLong)(libnative_KLong); libnative_kref_kotlin_Float (*createNullableFloat)(libnative_KFloat); libnative_kref_kotlin_Double (*createNullableDouble)(libnative_KDouble); libnative_kref_kotlin_Char (*createNullableChar)(libnative_KChar); libnative_kref_kotlin_Boolean (*createNullableBoolean)(libnative_KBoolean); libnative_kref_kotlin_Unit (*createNullableUnit)(void); // #endregion struct { struct { struct { struct { struct { const 
char* (*createMessage)(); // More nested structs here // (irrelevant for this question) } klib; } marcoeckstein; } com; } root; } kotlin; } libnative_ExportedSymbols; extern libnative_ExportedSymbols* libnative_symbols(void); </code></pre> <p>I want to call the function <code>createMessage</code> from Python. I have tried this:</p> <pre><code>import ctypes class Klib(ctypes.Structure): _fields_ = [(&quot;createMessage&quot;, ctypes.CFUNCTYPE(ctypes.c_char_p))] class Marcoeckstein(ctypes.Structure): _fields_ = [(&quot;klib&quot;, Klib)] class Com(ctypes.Structure): _fields_ = [(&quot;marcoeckstein&quot;, Marcoeckstein)] class Root(ctypes.Structure): _fields_ = [(&quot;com&quot;, Com)] class Kotlin(ctypes.Structure): _fields_ = [(&quot;root&quot;, Root)] class Libnative_ExportedSymbols(ctypes.Structure): _fields_ = [(&quot;kotlin&quot;, Kotlin)] dll = ctypes.CDLL(&quot;path/to/dll&quot;) #region #The following two lines turned out to be wrong. See below. dll.libnative_symbols.restype = Libnative_ExportedSymbols libnative = dll.libnative_symbols() #endregion libnative.kotlin.root.com.marcoeckstein.klib.createMessage() </code></pre> <p>On Windows I get:</p> <p><code>OSError: exception: access violation writing 0x00007FFA51556340</code></p> <p>On Linux I get:</p> <p><code>Segmentation fault (core dumped)</code></p> <p>On Windows, I have also tried <code>WINFUNCTYPE</code> instead of <code>CFUNCTYPE</code>, but I got the same error.</p> <p>The wrapper generator <a href="https://github.com/ctypesgen/ctypesgen" rel="nofollow noreferrer">ctypesgen</a> generates the same type for <code>createMessage</code> as I did manually.</p> <p>So what is going on here? 
How can I call a field of <code>CFUNCTYPE</code>?</p> <hr /> <h2>Partial answer (edit)</h2> <p>As <a href="https://stackoverflow.com/users/1709364">jasonharper</a> noted in a comment, I needed to change this code...</p> <pre><code>dll.libnative_symbols.restype = Libnative_ExportedSymbols libnative = dll.libnative_symbols() </code></pre> <p>... to this:</p> <pre><code>dll.libnative_symbols.restype = ctypes.POINTER(Libnative_ExportedSymbols) libnative = dll.libnative_symbols().contents </code></pre> <p>This change turned out to be necessary but not sufficient. I got no error now, but <code>createMessage</code> was not properly called. It returned an empty instance of <code>bytes</code>, but the function had been implemented to always return a non-empty string.</p> <hr /> <h2>Additional Notes (edit)</h2> <ul> <li>There is a <a href="https://stackoverflow.com/questions/74987486/how-to-use-struct-pointer-from-c-in-python">related issue</a>.</li> <li>The DLL and header file were automatically created by <a href="https://kotlinlang.org/docs/native-dynamic-libraries.html#use-generated-headers-from-c" rel="nofollow noreferrer">Kotlin Multiplatform/Native</a>, but this is irrelevant for the question.</li> </ul>
<python><ctypes>
2023-01-03 21:02:34
1
4,878
Marco Eckstein
74,998,540
5,832,020
Filter list of rows based on a column value in PySpark
<p>I have a list of rows after using collect. How can I get the &quot;num_samples&quot; value where sample_label == 0? That is to say, how can I filter list of rows based on a column value?</p> <pre><code>[Row(sample_label=1, num_samples=14398), Row(sample_label=0, num_samples=12500), Row(sample_label=2, num_samples=98230] </code></pre>
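Since `pyspark.sql.Row` behaves like a named tuple, a plain list comprehension over the collected rows works; the sketch below uses a `namedtuple` stand-in so it runs without Spark, but the comprehension is identical either way. (When possible it is cheaper to filter before collecting, e.g. `df.filter(df.sample_label == 0).collect()`.)

```python
from collections import namedtuple

# Stand-in for pyspark.sql.Row so the sketch runs without Spark.
Row = namedtuple("Row", ["sample_label", "num_samples"])
rows = [Row(sample_label=1, num_samples=14398),
        Row(sample_label=0, num_samples=12500),
        Row(sample_label=2, num_samples=98230)]

# Filter the collected rows on a column value and pull out num_samples.
matches = [r.num_samples for r in rows if r.sample_label == 0]
print(matches)  # [12500]

value = matches[0] if matches else None
```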
<python><pyspark>
2023-01-03 20:51:11
2
483
Salty Gold Fish
74,998,392
12,574,341
Python reverse() vs [::-1] slice performance
<p>Python provides two ways to reverse a list:</p> <p>List slicing notation</p> <pre class="lang-py prettyprint-override"><code>['a','b','c'][::-1] # ['c','b','a'] </code></pre> <p>Built-in reversed() function</p> <pre class="lang-py prettyprint-override"><code>reversed(['a','b','c']) # ['c','b','a'] </code></pre> <p>Are there any relevant differences in implementation/performance, or scenarios when one is preferred over the other?</p>
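One relevant difference worth showing up front: `reversed()` returns a lazy iterator, not a list, so `list(reversed(x))` is the apples-to-apples comparison with the slice copy `x[::-1]`; `list.reverse()` is a third option that reverses in place with O(1) extra memory. A quick `timeit` sketch compares the two copying forms:

```python
import timeit

data = list(range(1000))

# reversed() is lazy; materialize it to compare fairly with the slice copy.
assert list(reversed(data)) == data[::-1]

slice_t = timeit.timeit(lambda: data[::-1], number=1000)
rev_t = timeit.timeit(lambda: list(reversed(data)), number=1000)
print(f"slice copy: {slice_t:.5f}s  list(reversed): {rev_t:.5f}s")

# In-place reversal mutates the existing list instead of copying it.
data.reverse()
```

If only iteration is needed (e.g. `for item in reversed(x)`), the iterator avoids allocating a second list at all, which is the usual reason to prefer it over slicing.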
<python><performance>
2023-01-03 20:32:22
1
1,459
Michael Moreno
74,998,328
20,803,947
The mypy daemon executable ('dmypy') was not found
<p>I installed mypy with poetry and after that I installed the mypy extension in vs code, but the message :</p> <p>The mypy daemon executable ('dmypy') was not found on your PATH. Please install mypy or adjust the mypy.dmypyExecutable setting.</p> <p>when I run the command 'which mypy', I get the result:</p> <p>/Users/luicruz/Projects/google-seasonality/.venv/bin/mypy</p> <p>so the mypy is installed...</p> <p>It is showing up every hour in the bottom right corner of my Visual Studio Code. How do I fix it?</p> <p><a href="https://i.sstatic.net/Y90kK.png" rel="noreferrer"><img src="https://i.sstatic.net/Y90kK.png" alt="enter image description here" /></a></p> <p>I tried following some forum tutorials but it didn't work when I messed with the settings.json in the Visual Studio code, I tried putting a key &quot;mypy.path&quot;: &quot;path to mypy&quot; and it still didn't work:</p> <p>&quot;mypy.path&quot;:&quot;/Users/luicruz/Projects/google-seasonality/.venv/bin/mypy&quot;,</p>
<python><mypy>
2023-01-03 20:25:32
4
309
Louis
74,998,294
5,212,614
How to find number of months between two dates where datatype is datetime64[ns]
<p>I have two fields in a dataframe, both of which are <code>datetime64[ns]</code></p> <p>I thought I could just do this...</p> <pre><code>df_hist['MonthsBetween'] = (df_hist.Last_Date - df_hist.Begin_Time) / pd.Timedelta(months=1) </code></pre> <p>One field has only a date and one has a date/time, but both are of datatype datetime64[ns]. I Googled this and it seems like it should work, but I'm getting an error message saying:</p> <pre><code>TypeError: '&lt;' not supported between instances of 'str' and 'int' </code></pre> <p>I thought these were both datetime64[ns], and neither str nor int.</p>
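The error comes from the `pd.Timedelta(months=1)` construction itself: a month has no fixed length, so `Timedelta` does not accept a `months` argument. One sketch of computing whole months instead is arithmetic on the `.dt.year` and `.dt.month` accessors (the sample dates below are made up for illustration):

```python
import pandas as pd

df_hist = pd.DataFrame({
    "Begin_Time": pd.to_datetime(["2021-01-15 08:00", "2021-06-01 12:30"]),
    "Last_Date": pd.to_datetime(["2021-04-15", "2022-06-01"]),
})

# No fixed-length "month" Timedelta exists, so count months from the calendar:
df_hist["MonthsBetween"] = (
    (df_hist.Last_Date.dt.year - df_hist.Begin_Time.dt.year) * 12
    + (df_hist.Last_Date.dt.month - df_hist.Begin_Time.dt.month)
)
print(df_hist)
```

If a fractional answer is acceptable, dividing the timedelta by an average month length (e.g. `pd.Timedelta(days=30.44)`) is an alternative, at the cost of calendar accuracy.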
<python><python-3.x><pandas>
2023-01-03 20:21:07
1
20,492
ASH
74,998,079
4,586,180
how to display python time.time() stored as a float in bigquery as a datetime?
<p>I am a BigQuery newbie.</p> <p>We have a table that stores the output from time.time() as a float. For example:</p> <pre><code>1671057937.23425 1670884516.891432 </code></pre> <p>How can I select these values such that they are formatted/displayed as a date and timestamp?</p> <p>I have tried casting to int64, using various date/time functions like DATE_FROM_UNIX_DATE and TIMESTAMP_MICROS</p> <p>Any suggestions would be greatly appreciated</p>
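In BigQuery standard SQL the usual route is `TIMESTAMP_SECONDS(CAST(ts AS INT64))`, or `TIMESTAMP_MICROS(CAST(ts * 1e6 AS INT64))` to keep the sub-second part (`TIMESTAMP_MICROS` on its own fails because it expects integer microseconds, not float seconds). The Python below shows the equivalent conversion, to confirm what those expressions should yield for the sample value:

```python
from datetime import datetime, timezone

ts = 1671057937.23425  # float seconds since the Unix epoch, as from time.time()

# Equivalent of TIMESTAMP_MICROS(CAST(ts * 1e6 AS INT64)) in BigQuery:
dt = datetime.fromtimestamp(ts, tz=timezone.utc)
print(dt.isoformat())
```

`DATE_FROM_UNIX_DATE` is the wrong function here: it interprets its argument as *days* since the epoch, not seconds, which may explain some of the attempts failing.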
<python><time><google-bigquery>
2023-01-03 19:54:18
2
968
AEDWIP
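Regarding the BigQuery question above: `time.time()` produces seconds since the Unix epoch, so `TIMESTAMP_MICROS` is off by a factor of a million; something like `TIMESTAMP_SECONDS(CAST(col AS INT64))` seems like the right direction. A quick stdlib check of what the two sample floats from the question represent:

```python
from datetime import datetime, timezone

raw = [1671057937.23425, 1670884516.891432]

# time.time() returns seconds since the Unix epoch, so the floats are
# plain POSIX timestamps; no unit conversion is needed.
stamps = [datetime.fromtimestamp(v, tz=timezone.utc) for v in raw]
for s in stamps:
    print(s.isoformat())
```

Both values decode to dates in December 2022, which is consistent with them being second-resolution timestamps.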
74,998,040
466,316
Python: Crypto package error: "No module called '_number_new'"
<p>I have a software which installs its own local Python 3.9. Included in its python39/lib/site-packages is Crypto package, which causes errors and seems old and incompatible with Python 3.9. It includes long integers, like 1L, which I fixed by removing the &quot;L&quot;. But I'm still getting the error below, even though the file</p> <pre><code>...\python39\lib\site-packages\Crypto\Util\_number_new.py </code></pre> <p>exists. For now, I'm trying to fix such errors manually, to avoid dealing with other incompatibility issues that will show up, if I try to update the whole Crypto package. The line in number.py:</p> <pre><code># New functions from _number_new import * </code></pre> <p>Error message:</p> <pre><code>&gt; Traceback (most recent call last): File &quot;C:\Program &gt; Files\Soft\python39\lib\site-packages\Crypto\Util\number.py&quot;, line 62, &gt; in &lt;module&gt; &gt; from _number_new import * File &quot;C:\Program Files\Soft\python39\lib\site-packages-forced\shiboken2\files.dir\shibokensupport\__feature__.py&quot;, &gt; line 142, in _import &gt; return original_import(name, *args, **kwargs) ModuleNotFoundError: No module named '_number_new' </code></pre> <p>...\python39\lib\site-packages\Crypto\Util listing:</p> <p><a href="https://i.sstatic.net/cF39S.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cF39S.png" alt="lib\site-packages\Crypto\Util" /></a></p>
<python><package><cryptography>
2023-01-03 19:50:20
1
639
MrSparkly
74,997,987
678,572
Why does Scikit-Learn's DecisionTreeClassifier return zero weighted features after removing zero weighted features and refitting?
<p>Ive been trying to figure out why this is happening. I'm fitting a <code>DecisionTreeClassifier</code> and the model determines that a few features are not informative for the prediction. Fitting the same model with the same hyperparameters using all of the informative features (i.e., features that have a weight &gt; 0), now I get other features that have zero weights that had non-zero weights before?</p> <p>My questions:</p> <ul> <li><p>Is this behavior expected?</p> </li> <li><p>If so, how can I use a <code>while</code> loop to remove features until none of the feature weights are zero?</p> </li> </ul> <pre class="lang-py prettyprint-override"><code>import pandas as pd import numpy as np # Data y = pd.Series({1: 'Negative', 2: 'Positive', 3: 'Positive', 4: 'Negative', 5: 'Positive', 6: 'Negative', 7: 'Negative', 8: 'Negative', 9: 'Negative', 10: 'Negative', 11: 'Negative', 12: 'Negative', 13: 'Negative', 14: 'Negative', 15: 'Negative', 16: 'Negative', 17: 'Negative', 18: 'Negative', 19: 'Negative', 20: 'Negative', 21: 'Negative', 22: 'Negative', 23: 'Negative', 24: 'Negative', 25: 'Negative', 26: 'Negative', 27: 'Negative', 28: 'Negative', 29: 'Negative', 30: 'Negative', 31: 'Negative', 32: 'Negative', 33: 'Negative', 34: 'Negative', 35: 'Negative', 36: 'Positive', 37: 'Negative', 38: 'Positive', 39: 'Positive', 40: 'Positive', 41: 'Positive', 42: 'Negative', 43: 'Negative', 44: 'Positive', 45: 'Positive', 46: 'Negative', 47: 'Negative', 48: 'Positive', 49: 'Positive', 50: 'Negative', 51: 'Negative', 52: 'Negative', 53: 'Positive', 54: 'Positive', 55: 'Positive', 56: 'Negative', 57: 'Positive', 58: 'Positive', 59: 'Positive', 60: 'Negative', 61: 'Negative', 62: 'Negative', 63: 'Positive', 64: 'Positive', 65: 'Positive', 66: 'Negative', 67: 'Positive', 68: 'Negative', 69: 'Negative', 70: 'Negative', 71: 'Positive', 72: 'Positive', 73: 'Negative', 74: 'Positive', 75: 'Positive', 76: 'Positive', 77: 'Positive', 78: 'Positive', 79: 'Positive', 80: 'Negative'}) X = 
pd.DataFrame({'ASV019': {1: 0, 2: 0, 3: 0, 4: 344, 5: 0, 6: 1468, 7: 669, 8: 646, 9: 1192, 10: 169, 11: 801, 12: 793, 13: 147, 14: 27, 15: 34, 16: 1324, 17: 196, 18: 181, 19: 955, 20: 144, 21: 460, 22: 1563, 23: 253, 24: 1590, 25: 429, 26: 973, 27: 523, 28: 901, 29: 766, 30: 417, 31: 726, 32: 955, 33: 630, 34: 580, 35: 1002, 36: 0, 37: 696, 38: 0, 39: 20, 40: 0, 41: 0, 42: 0, 43: 0, 44: 0, 45: 0, 46: 87, 47: 162, 48: 0, 49: 0, 50: 173, 51: 215, 52: 634, 53: 0, 54: 40, 55: 0, 56: 17, 57: 0, 58: 0, 59: 0, 60: 787, 61: 503, 62: 439, 63: 0, 64: 25, 65: 0, 66: 365, 67: 0, 68: 252, 69: 382, 70: 1694, 71: 0, 72: 0, 73: 21, 74: 0, 75: 3069, 76: 0, 77: 2, 78: 80, 79: 0, 80: 0}, 'ASV552': {1: 0, 2: 0, 3: 0, 4: 81, 5: 0, 6: 0, 7: 0, 8: 0, 9: 0, 10: 0, 11: 0, 12: 0, 13: 0, 14: 0, 15: 0, 16: 0, 17: 0, 18: 0, 19: 0, 20: 0, 21: 0, 22: 0, 23: 0, 24: 0, 25: 0, 26: 0, 27: 0, 28: 0, 29: 0, 30: 0, 31: 0, 32: 0, 33: 0, 34: 0, 35: 0, 36: 0, 37: 0, 38: 0, 39: 0, 40: 0, 41: 0, 42: 0, 43: 0, 44: 0, 45: 0, 46: 0, 47: 0, 48: 15, 49: 16, 50: 0, 51: 0, 52: 13, 53: 0, 54: 0, 55: 0, 56: 0, 57: 0, 58: 0, 59: 0, 60: 0, 61: 0, 62: 0, 63: 0, 64: 0, 65: 0, 66: 0, 67: 0, 68: 0, 69: 0, 70: 0, 71: 0, 72: 0, 73: 0, 74: 0, 75: 0, 76: 0, 77: 0, 78: 0, 79: 0, 80: 0}, 'ASV007': {1: 217, 2: 1673, 3: 4694, 4: 669, 5: 2734, 6: 388, 7: 210, 8: 213, 9: 568, 10: 329, 11: 703, 12: 677, 13: 776, 14: 505, 15: 987, 16: 400, 17: 334, 18: 133, 19: 0, 20: 405, 21: 475, 22: 740, 23: 766, 24: 364, 25: 705, 26: 1099, 27: 143, 28: 270, 29: 134, 30: 229, 31: 317, 32: 84, 33: 449, 34: 92, 35: 207, 36: 9288, 37: 461, 38: 135, 39: 342, 40: 464, 41: 1043, 42: 4693, 43: 2858, 44: 197, 45: 2083, 46: 223, 47: 822, 48: 1036, 49: 11656, 50: 0, 51: 348, 52: 1089, 53: 465, 54: 72, 55: 0, 56: 3885, 57: 2849, 58: 1000, 59: 4091, 60: 0, 61: 639, 62: 459, 63: 619, 64: 2563, 65: 919, 66: 1266, 67: 3038, 68: 622, 69: 521, 70: 296, 71: 10603, 72: 828, 73: 4849, 74: 5995, 75: 1252, 76: 3165, 77: 682, 78: 4219, 79: 3732, 80: 1603}, 'ASV135': {1: 
0, 2: 0, 3: 0, 4: 0, 5: 0, 6: 0, 7: 0, 8: 0, 9: 0, 10: 0, 11: 0, 12: 0, 13: 0, 14: 0, 15: 0, 16: 0, 17: 0, 18: 0, 19: 0, 20: 0, 21: 700, 22: 0, 23: 0, 24: 0, 25: 0, 26: 0, 27: 0, 28: 0, 29: 0, 30: 0, 31: 0, 32: 0, 33: 0, 34: 0, 35: 0, 36: 0, 37: 0, 38: 0, 39: 0, 40: 0, 41: 0, 42: 0, 43: 0, 44: 0, 45: 0, 46: 0, 47: 0, 48: 0, 49: 0, 50: 0, 51: 92, 52: 767, 53: 0, 54: 0, 55: 0, 56: 0, 57: 0, 58: 0, 59: 0, 60: 0, 61: 0, 62: 0, 63: 0, 64: 0, 65: 0, 66: 0, 67: 0, 68: 0, 69: 408, 70: 0, 71: 0, 72: 0, 73: 0, 74: 0, 75: 0, 76: 0, 77: 0, 78: 0, 79: 0, 80: 0}, 'ASV122': {1: 0, 2: 0, 3: 0, 4: 0, 5: 1303, 6: 6, 7: 26, 8: 0, 9: 0, 10: 5, 11: 0, 12: 0, 13: 0, 14: 0, 15: 0, 16: 0, 17: 0, 18: 0, 19: 0, 20: 0, 21: 0, 22: 0, 23: 0, 24: 0, 25: 19, 26: 0, 27: 0, 28: 0, 29: 0, 30: 0, 31: 0, 32: 0, 33: 0, 34: 17, 35: 0, 36: 0, 37: 0, 38: 82, 39: 0, 40: 0, 41: 0, 42: 0, 43: 0, 44: 0, 45: 0, 46: 0, 47: 0, 48: 0, 49: 0, 50: 0, 51: 0, 52: 0, 53: 0, 54: 0, 55: 0, 56: 0, 57: 70, 58: 0, 59: 0, 60: 411, 61: 0, 62: 37, 63: 32, 64: 0, 65: 0, 66: 0, 67: 0, 68: 0, 69: 11, 70: 0, 71: 0, 72: 0, 73: 0, 74: 0, 75: 5, 76: 12, 77: 0, 78: 252, 79: 0, 80: 0}, 'ASV952': {1: 0, 2: 0, 3: 0, 4: 0, 5: 0, 6: 0, 7: 0, 8: 0, 9: 0, 10: 9, 11: 0, 12: 0, 13: 0, 14: 0, 15: 0, 16: 0, 17: 0, 18: 0, 19: 0, 20: 0, 21: 0, 22: 0, 23: 0, 24: 0, 25: 0, 26: 0, 27: 0, 28: 0, 29: 0, 30: 0, 31: 0, 32: 0, 33: 0, 34: 0, 35: 0, 36: 0, 37: 0, 38: 0, 39: 6, 40: 0, 41: 0, 42: 0, 43: 0, 44: 0, 45: 0, 46: 0, 47: 0, 48: 0, 49: 0, 50: 0, 51: 0, 52: 0, 53: 0, 54: 0, 55: 0, 56: 0, 57: 0, 58: 0, 59: 0, 60: 0, 61: 0, 62: 0, 63: 5, 64: 0, 65: 0, 66: 7, 67: 0, 68: 0, 69: 0, 70: 0, 71: 0, 72: 0, 73: 0, 74: 0, 75: 0, 76: 0, 77: 0, 78: 0, 79: 0, 80: 0}, 'ASV156': {1: 0, 2: 26, 3: 0, 4: 3, 5: 72, 6: 2, 7: 0, 8: 0, 9: 0, 10: 0, 11: 0, 12: 0, 13: 0, 14: 0, 15: 0, 16: 0, 17: 0, 18: 0, 19: 0, 20: 0, 21: 0, 22: 0, 23: 0, 24: 0, 25: 0, 26: 12, 27: 0, 28: 0, 29: 0, 30: 0, 31: 0, 32: 0, 33: 0, 34: 0, 35: 0, 36: 0, 37: 0, 38: 0, 39: 0, 40: 22, 41: 0, 42: 0, 
43: 2, 44: 4, 45: 0, 46: 9, 47: 0, 48: 11, 49: 15, 50: 0, 51: 0, 52: 0, 53: 0, 54: 0, 55: 0, 56: 0, 57: 0, 58: 35, 59: 0, 60: 0, 61: 0, 62: 0, 63: 0, 64: 0, 65: 7, 66: 8, 67: 88, 68: 67, 69: 15, 70: 0, 71: 0, 72: 0, 73: 76, 74: 1069, 75: 14, 76: 4, 77: 49, 78: 3, 79: 5, 80: 24}, 'ASV062': {1: 199, 2: 209, 3: 0, 4: 315, 5: 0, 6: 49, 7: 63, 8: 25, 9: 29, 10: 22, 11: 24, 12: 141, 13: 0, 14: 62, 15: 49, 16: 0, 17: 288, 18: 274, 19: 0, 20: 59, 21: 134, 22: 10, 23: 147, 24: 22, 25: 101, 26: 78, 27: 0, 28: 25, 29: 47, 30: 105, 31: 0, 32: 0, 33: 74, 34: 53, 35: 110, 36: 0, 37: 8, 38: 0, 39: 0, 40: 6, 41: 0, 42: 226, 43: 21, 44: 0, 45: 373, 46: 98, 47: 126, 48: 5, 49: 8, 50: 186, 51: 93, 52: 35, 53: 21, 54: 0, 55: 0, 56: 720, 57: 3, 58: 220, 59: 0, 60: 230, 61: 41, 62: 118, 63: 0, 64: 0, 65: 0, 66: 151, 67: 0, 68: 186, 69: 225, 70: 6, 71: 22, 72: 13, 73: 97, 74: 0, 75: 2, 76: 5, 77: 134, 78: 0, 79: 0, 80: 84}}) # Model params = {'ccp_alpha': 0.0, 'class_weight': None, 'criterion': 'entropy', 'max_depth': None, 'max_features': 'log2', 'max_leaf_nodes': None, 'min_impurity_decrease': 0.0, 'min_samples_leaf': 1, 'min_samples_split': 2, 'min_weight_fraction_leaf': 0.0, 'random_state': 0, 'splitter': 'best'} estimator=DecisionTreeClassifier(**params) # Fit model estimator.fit(X,y) estimator.feature_importances_ # array([0.68181101, 0. , 0.10029598, 0. , 0.03051763, # 0. , 0. , 0.18737538]) # Mask zero weighted features and refit X_1 = X.loc[:,estimator.feature_importances_ &gt; 0] estimator.fit(X_1,y) estimator.feature_importances_ # array([0.51290959, 0.11922515, 0. , 0.36786526]) # One more time X_2 = X_1.loc[:,estimator.feature_importances_ &gt; 0] estimator.fit(X_2,y) estimator.feature_importances_ # array([0.38116661, 0.32724164, 0.29159175]) </code></pre>
<python><scikit-learn><classification><decision-tree><feature-selection>
2023-01-03 19:44:40
1
30,977
O.rka
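For the decision-tree question above, the shifting importances are expected: with `max_features='log2'` the tree samples a different candidate pool once columns are removed, so previously used features can drop to zero. A hedged sketch of the requested `while` loop, on synthetic data rather than the ASV table from the question:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.random((80, 8))
y = rng.integers(0, 2, 80)

est = DecisionTreeClassifier(random_state=0, max_features="log2")

# Refit on the surviving columns until every remaining feature has a
# non-zero importance; each pass can only shrink the candidate set.
keep = np.arange(X.shape[1])
while True:
    est.fit(X[:, keep], y)
    nonzero = est.feature_importances_ > 0
    if nonzero.all():
        break
    keep = keep[nonzero]

print("surviving features:", keep)
```

The loop terminates because `keep` strictly shrinks on every pass that does not break, and a single-feature tree on non-constant `y` must use that feature.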
74,997,979
12,127,295
Complex lambda with sqlalchemy filter query
<p>Consider an ORM class describing a <em>&quot;Store&quot;</em>:</p> <pre class="lang-py prettyprint-override"><code>class Store(Base, AbstractSchema): __tablename__ = 'store' id = Column(Integer, primary_key=True) ... s2_cell_id = Column(BigInteger, unique=False, nullable=True) </code></pre> <p><code>s2_cell_id</code> describes the location of a store as an id of S2 cell <em>i.e Google S2</em> I am using <code>s2sphere</code> library in python for that.</p> <p>Goal is to write a query that searches for stores in a certain range. I tried to use <code>@hybrid_method</code> as follows:</p> <pre class="lang-py prettyprint-override"><code> @hybrid_method def lies_inside(self, cells : list[Cell]): for c in cells: is_inside = c.contains(Cell(CellId(self.s2_cell_id))) if is_inside : return True return False @lies_inside.expression def lies_inside(cls, cells: list[Cell]): for c in cells: is_inside = c.contains(Cell(CellId(cls.s2_cell_id))) if is_inside : return True return False </code></pre> <p>Usage is like this:</p> <pre class="lang-py prettyprint-override"><code># Compute the set of cell that intersect the search area candidates = router.location_manager.get_covering_cells(lat, lng, r, 13) # Query the database for stores whose cell IDs are in the set of intersecting cells query = db.query(Store).filter(Store.lies_inside(candidates)).all() </code></pre> <p>I get the following error unfortunately:</p> <pre class="lang-py prettyprint-override"><code>Traceback (most recent call last): File &quot;C:\Users\FARO-User\Anaconda3\envs\woher-backend\lib\site-packages\uvicorn\protocols\http\httptools_impl.py&quot;, line 419, in run_asgi result = await app( # type: ignore[func-returns-value] File &quot;C:\Users\FARO-User\Anaconda3\envs\woher-backend\lib\site-packages\uvicorn\middleware\proxy_headers.py&quot;, line 78, in __call__ return await self.app(scope, receive, send) File &quot;C:\Users\FARO-User\Anaconda3\envs\woher-backend\lib\site-packages\fastapi\applications.py&quot;, 
line 270, in __call__ await super().__call__(scope, receive, send) File &quot;C:\Users\FARO-User\Anaconda3\envs\woher-backend\lib\site-packages\starlette\applications.py&quot;, line 124, in __call__ await self.middleware_stack(scope, receive, send) File &quot;C:\Users\FARO-User\Anaconda3\envs\woher-backend\lib\site-packages\starlette\middleware\errors.py&quot;, line 184, in __call__ raise exc File &quot;C:\Users\FARO-User\Anaconda3\envs\woher-backend\lib\site-packages\starlette\middleware\errors.py&quot;, line 162, in __call__ await self.app(scope, receive, _send) File &quot;C:\Users\FARO-User\Anaconda3\envs\woher-backend\lib\site-packages\starlette\middleware\exceptions.py&quot;, line 79, in __call__ raise exc File &quot;C:\Users\FARO-User\Anaconda3\envs\woher-backend\lib\site-packages\starlette\middleware\exceptions.py&quot;, line 68, in __call__ await self.app(scope, receive, sender) File &quot;C:\Users\FARO-User\Anaconda3\envs\woher-backend\lib\site-packages\fastapi\middleware\asyncexitstack.py&quot;, line 21, in __call__ raise e File &quot;C:\Users\FARO-User\Anaconda3\envs\woher-backend\lib\site-packages\fastapi\middleware\asyncexitstack.py&quot;, line 18, in __call__ await self.app(scope, receive, send) File &quot;C:\Users\FARO-User\Anaconda3\envs\woher-backend\lib\site-packages\starlette\routing.py&quot;, line 706, in __call__ await route.handle(scope, receive, send) File &quot;C:\Users\FARO-User\Anaconda3\envs\woher-backend\lib\site-packages\starlette\routing.py&quot;, line 276, in handle await self.app(scope, receive, send) File &quot;C:\Users\FARO-User\Anaconda3\envs\woher-backend\lib\site-packages\starlette\routing.py&quot;, line 66, in app response = await func(request) File &quot;C:\Users\FARO-User\Anaconda3\envs\woher-backend\lib\site-packages\fastapi\routing.py&quot;, line 235, in app raw_response = await run_endpoint_function( File &quot;C:\Users\FARO-User\Anaconda3\envs\woher-backend\lib\site-packages\fastapi\routing.py&quot;, line 163, in 
run_endpoint_function return await run_in_threadpool(dependant.call, **values) File &quot;C:\Users\FARO-User\Anaconda3\envs\woher-backend\lib\site-packages\starlette\concurrency.py&quot;, line 41, in run_in_threadpool return await anyio.to_thread.run_sync(func, *args) File &quot;C:\Users\FARO-User\Anaconda3\envs\woher-backend\lib\site-packages\anyio\to_thread.py&quot;, line 31, in run_sync return await get_asynclib().run_sync_in_worker_thread( File &quot;C:\Users\FARO-User\Anaconda3\envs\woher-backend\lib\site-packages\anyio\_backends\_asyncio.py&quot;, line 937, in run_sync_in_worker_thread return await future File &quot;C:\Users\FARO-User\Anaconda3\envs\woher-backend\lib\site-packages\anyio\_backends\_asyncio.py&quot;, line 867, in run result = context.run(func, *args) File &quot;C:\Users\FARO-User\Desktop\personal\dev\repos\woher-backend\.\src\routers\search.py&quot;, line 33, in get_nearby_stores query = db.query(Store).filter(Store.lies_inside(candidates)).all() File &quot;c:\users\faro-user\desktop\personal\dev\repos\woher-backend\src\sql\models\store.py&quot;, line 55, in lies_inside is_inside = c.contains(Cell(CellId(cls.s2_cell_id))) File &quot;C:\Users\FARO-User\Anaconda3\envs\woher-backend\lib\site-packages\s2sphere\sphere.py&quot;, line 2354, in __init__ face, i, j, orientation = cell_id.to_face_ij_orientation() File &quot;C:\Users\FARO-User\Anaconda3\envs\woher-backend\lib\site-packages\s2sphere\sphere.py&quot;, line 1298, in to_face_ij_orientation face = self.face() File &quot;C:\Users\FARO-User\Anaconda3\envs\woher-backend\lib\site-packages\s2sphere\sphere.py&quot;, line 1057, in face return self.id() &gt;&gt; self.__class__.POS_BITS File &quot;C:\Users\FARO-User\Anaconda3\envs\woher-backend\lib\site-packages\sqlalchemy\sql\operators.py&quot;, line 458, in __rshift__ return self.operate(rshift, other) File &quot;C:\Users\FARO-User\Anaconda3\envs\woher-backend\lib\site-packages\sqlalchemy\sql\elements.py&quot;, line 868, in operate return 
op(self.comparator, *other, **kwargs) File &quot;C:\Users\FARO-User\Anaconda3\envs\woher-backend\lib\site-packages\sqlalchemy\sql\operators.py&quot;, line 458, in __rshift__ return self.operate(rshift, other) File &quot;C:\Users\FARO-User\Anaconda3\envs\woher-backend\lib\site-packages\sqlalchemy\sql\type_api.py&quot;, line 77, in operate return o[0](self.expr, op, *(other + o[1:]), **kwargs) File &quot;C:\Users\FARO-User\Anaconda3\envs\woher-backend\lib\site-packages\sqlalchemy\sql\default_comparator.py&quot;, line 181, in _unsupported_impl raise NotImplementedError( NotImplementedError: Operator 'rshift' is not supported on this expression </code></pre> <p>Any idea how to design such query appropriately?</p> <p><strong>Some Analysis</strong>:</p> <p>The issue occurs because internally in s2sphere, this line is computed somewhere which appears in the traceback as well: <code>return self.id() &gt;&gt; self.__class__.POS_BITS</code> which clashes with sqlalchemy</p>
<python><sqlalchemy><s2>
2023-01-03 19:43:08
0
455
John C.
74,997,885
9,947,140
How to specify a search path with SQL Alchemy and pg8000?
<p>I'm trying to connect to a postgres db using SQL Alchemy and the pg8000 driver. I'd like to specify a search path for this connection. With the Psycopg driver, I could do this by doing something like</p> <pre><code>engine = create_engine( 'postgresql+psycopg2://dbuser@dbhost:5432/dbname', connect_args={'options': '-csearch_path={}'.format(dbschema)}) </code></pre> <p>However, this does not work for the pg8000 driver. Is there a good way to do this?</p>
<python><sqlalchemy><pg8000>
2023-01-03 19:30:02
2
342
randomrabbit
74,997,732
4,645,982
An expression attribute name used in the document path is not defined; attribute name: #buyer
<p>I am using reserve DynamoDB keywords live value,users, name. I have create entry in DynamoDB with</p> <pre><code>{ &quot;id&quot;:1 &quot;poc_name&quot;: &quot;ABC&quot; } </code></pre> <p>I want to update exiting records with</p> <pre><code>{ &quot;id&quot;: 1, &quot;poc_name&quot;: &quot;ABC&quot;, &quot;buyer&quot;: { &quot;value&quot;: &quot;id1&quot;, &quot;label&quot;: &quot;Test&quot; } } </code></pre> <p>I am using reserve keyword &quot;value&quot;. When I try to update the record I am getting error:</p> <blockquote> <p>An expression attribute name used in the document path is not defined; attribute name: #buyer</p> </blockquote> <p>Update is not working because buyer map does not exist in DynamoDB. That's why I am getting document path is not found. I using following code snippet to handle the reserve keyword, It will generate the following updateValues, updateExpression, expression_attributes_names</p> <pre><code>updateValues: {':poc_name': ABC, ':buyer_value': 'id1', ':buyer_label': 'Test'} updateExpression: ['set ','poc_name = :poc_name,','#buyer.#value = :buyer_value,', 'buyer.label = :buyer_label,'] expression_attributes_names: {'#demand_poc_action': 'demand_poc_action', '#value': 'value', '#buyer': 'buyer'} </code></pre> <p>Code snippet:</p> <pre><code>for key, value in dictData.items(): if key in RESERVER_DDB_KEYWORDS or ( &quot;.&quot; in key and key.split(&quot;.&quot;)[1] in RESERVER_DDB_KEYWORDS ): key1 = key.replace(&quot;.&quot;, &quot;.#&quot;) updateExpression.append(f&quot;#{key1} = :{key.replace('.', '_')},&quot;) updateValues[f&quot;:{key.replace('.', '_')}&quot;] = value if &quot;.&quot; in key: expression_attributes_names[f&quot;#{key.split('.')[0]}&quot;] = key.split(&quot;.&quot;)[0] expression_attributes_names[f&quot;#{key.split('.')[1]}&quot;] = key.split(&quot;.&quot;)[1] else: expression_attributes_names[f&quot;#{key}&quot;] = key else: updateExpression.append(f&quot;{key} = :{key.replace('.', '_')},&quot;) 
updateValues[f&quot;:{key.replace('.', '_')}&quot;] = value UpdateExpression=&quot;&quot;.join(updateExpression)[:-1], ExpressionAttributeValues=updateValues, ReturnValues=&quot;UPDATED_NEW&quot;, ExpressionAttributeNames=expression_attributes_names, </code></pre> <p>The problem is that if buyer already exist in the DynamoDB then I will be able to update buyer record, however I am not able to update record for buyer which doesn't have buyer In DynamoDB then I am getting document path error. So my approach is that I will create the entry of buyer every time I do the update. However I am not able to fix above code.</p>
<python><python-3.x><amazon-dynamodb><boto3>
2023-01-03 19:13:42
1
2,676
Neelabh Singh
74,997,433
991,076
how to share code between python project and airflow?
<p>We have a python project structure as follows; airflow is new:</p> <pre><code>├── python │   ├── airflow │   │   ├── airflow.cfg │   │   ├── config │   │   ├── dags │   │   ├── logs │   │   ├── requirements.txt │   │   └── webserver_config.py │   ├── shared_utils │   │   ├── auth │   │   ├── datadog │   │   ├── drivers │   │   ├── entities │   │   ├── formatter │   │   ├── helpers │   │   └── system ... </code></pre> <p>We have several other packages at the same level as shared_utils; some are common libraries and some are standalone backend services.</p> <p>We want to keep the airflow part independent while still benefiting from the common libraries. We have the python folder in PYTHONPATH, and python/airflow is in PYTHONPATH as well (currently airflow doesn't import any code from other packages).</p> <p>I am wondering how I can call code from shared_utils in my airflow dags, or how I should organize the project structure to make it possible?</p> <p>UPDATE:</p> <p><strong>it seems there is no conflict when I set both python and python/airflow in PYTHONPATH; after adding the requirements from shared_utils to airflow, it works as expected.</strong></p>
<python><airflow>
2023-01-03 18:40:48
2
2,630
seaguest
74,997,386
3,278,050
Offset parallel overlapping lines
<p>I am creating a step plot with matplotlib, but some parallel line sections overlap others (hiding the lines beneath).</p> <pre><code>import matplotlib.pyplot as plt x = [0, 1, 2, 3, 4, 5] y1 = [12, 12, 8, 10, 12, 11] y2 = [11, 11, 8, 9, 11, 11] y3 = [11, 10, 7, 11, 11, 11] for y in [y1, y2, y3]: plt.step(x, y, lw=3, where='mid') </code></pre> <p><a href="https://i.sstatic.net/y3hJC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/y3hJC.png" alt="enter image description here" /></a></p> <p>I would like something more like this: <a href="https://i.sstatic.net/JMwCB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JMwCB.png" alt="enter image description here" /></a></p> <p>The problem is a bit like drawing a metro map, where some lines run between the same two stations.</p> <p>Is there a package or algorithm I can use to help me re-align my lines?</p> <p>This question <a href="https://stackoverflow.com/questions/40766909/suggestions-to-plot-overlapping-lines-in-matplotlib">Suggestions to plot overlapping lines in matplotlib?</a> is related, but none of the solutions work for me.</p>
<python><matplotlib>
2023-01-03 18:36:19
1
355
onewhaleid
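One way to approach the overlapping-step-lines question above is to compute, at every x position, a small symmetric offset for the series that share the same y value, then plot the offset copies. The spacing value is arbitrary; this is only a sketch of the bookkeeping, not a full metro-map algorithm:

```python
import numpy as np

x = np.array([0, 1, 2, 3, 4, 5])
ys = np.array([
    [12, 12, 8, 10, 12, 11],
    [11, 11, 8,  9, 11, 11],
    [11, 10, 7, 11, 11, 11],
], dtype=float)

# At each x, nudge series that share the same y so coincident step
# segments no longer hide each other (hypothetical spacing of 0.08).
spacing = 0.08
offset = np.zeros_like(ys)
for j in range(ys.shape[1]):
    for value in np.unique(ys[:, j]):
        idx = np.where(ys[:, j] == value)[0]
        # Center the duplicates symmetrically around the original value.
        offset[idx, j] = (np.arange(len(idx)) - (len(idx) - 1) / 2) * spacing

ys_plot = ys + offset
```

Each row of `ys_plot` can then be passed to `plt.step(x, row, lw=3, where='mid')` as in the question; the symmetric offsets keep the mean of each column unchanged.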
74,996,835
5,489,190
Problems with using C dll in python
<p>I'm trying to import my own C dll into Python code. The function in the dll accepts a 10x1 vector of floats and returns 1 float as the result. This is an MWE of how I try to use it:</p> <pre><code>from ctypes import* import numpy as np mydll = cdll.LoadLibrary(&quot;.\Test.dll&quot;) input=[18.1000, 1.1000, 0.2222, 0.2115, 0.0663, 0.5000, 352.0000, 0, 0, 42.0000] input=np.transpose(input) result=mydll.Test(input) </code></pre> <p>This fails with the following error:</p> <pre><code>----&gt; 7 result=mydll.Test(input) ArgumentError: argument 1: &lt;class 'TypeError'&gt;: Don't know how to convert parameter 1 </code></pre> <p>Can you help me figure it out? My first idea was that maybe the input data format does not match, but it doesn't matter whether I transpose or not; the error is the same.</p>
<python><python-3.x><dll><ctypes><dllimport>
2023-01-03 17:42:31
1
749
Karls
74,996,834
9,671,120
Memory-efficient DataFrame.stack()
<p>If I <code>memray</code> the following code, <code>df.stack()</code> allocates 22MB, while the df is only 5MB.</p> <pre><code>import numpy as np import pandas as pd columns = list('abcdefghijklmnopqrstuvwxyz') df = pd.DataFrame(np.random.randint(0,100,size=(1000, 26*26)), columns=pd.MultiIndex.from_product([columns, columns])) print(df.memory_usage().sum()) # 5408128, ~5MB df.stack() # reshape: (1000,26*26) -&gt; (1000*26,26) </code></pre> <p>Why does DataFrame.stack() consume so much memory? It allocates 30% on dropna and the remaining 70% re-allocating the underlying array 3 times to reshape. Shall I move to native <code>numpy.reshape</code>, or is there anything I can do to make it slimmer?</p>
<python><pandas><memory>
2023-01-03 17:42:25
0
386
C. Claudio
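For the `stack()` memory question above, the pure-NumPy equivalent of this particular reshape is a single transpose-and-reshape on the underlying array (one copy instead of several). The axis bookkeeping below assumes the sorted `from_product` column layout from the question:

```python
import numpy as np
import pandas as pd

letters = list("abcdefghijklmnopqrstuvwxyz")
df = pd.DataFrame(
    np.random.randint(0, 100, size=(1000, 26 * 26)),
    columns=pd.MultiIndex.from_product([letters, letters]),
)

# Flat column k corresponds to (outer, inner) = divmod(k, 26); stack()
# moves the inner level onto the rows, so swap the last two axes first.
arr = df.to_numpy().reshape(len(df), 26, 26).transpose(0, 2, 1).reshape(-1, 26)
print(arr.shape)  # (26000, 26)
```

This loses the labeled MultiIndex, of course; if labels matter, the array can be wrapped back into a DataFrame with an index built from `pd.MultiIndex.from_product([df.index, letters])`.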
74,996,702
11,853,066
PIL/Numpy: enlarging white areas of black/white mask image
<p>I want to produce a Python algorithm which takes in a 'mask' RGB image comprised exclusively of black and white pixels. Basically, each mask is a black image with one or more white shapes on it (see below).</p> <p><a href="https://i.sstatic.net/XlAiE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XlAiE.png" alt="enter image description here" /></a></p> <p>I want to transform this image by enlarging the white areas by a factor x:</p> <p><a href="https://i.sstatic.net/UwOUh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UwOUh.png" alt="enter image description here" /></a></p> <p>So far I have only got it to work by drawing rectangles around the shapes using PIL:</p> <pre><code>def add_padding_to_mask(mask, padding): # Create a copy of the original mask padded_mask = mask.copy() draw = ImageDraw.Draw(padded_mask) # Iterate over the pixels in the original mask for x in range(mask.width): for y in range(mask.height): # If the pixel is white, draw a white rectangle with the desired padding around it if mask.getpixel((x, y)) == (255, 255, 255): draw.rectangle((x-padding, y-padding, x+padding, y+padding), fill=(255, 255, 255)) return padded_mask </code></pre> <p>This is suboptimal since I want to retain the original white shapes (only make them larger). I can't figure out an efficient way to approach this problem. Any help greatly appreciated.</p>
<python><numpy><python-imaging-library><dilation>
2023-01-03 17:29:49
2
355
Ben123
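A dependency-free way to grow the white areas in the mask question above is a plain binary dilation implemented with shifted slices of a padded array; `radius` plays the role of the enlargement factor x, and the padding keeps shapes at the border from wrapping around. This is a sketch, not tuned for speed:

```python
import numpy as np

def dilate(mask: np.ndarray, radius: int) -> np.ndarray:
    """Grow the True region of a boolean mask by `radius` pixels
    (square structuring element), preserving the original shape."""
    padded = np.pad(mask, radius, constant_values=False)
    out = np.zeros_like(mask)
    h, w = mask.shape
    # OR together every shifted window within the (2r+1) x (2r+1) square.
    for dy in range(2 * radius + 1):
        for dx in range(2 * radius + 1):
            out |= padded[dy:dy + h, dx:dx + w]
    return out

m = np.zeros((7, 7), dtype=bool)
m[3, 3] = True
print(dilate(m, 1).sum())  # a single pixel grows into a 3x3 block -> 9
```

Going to and from PIL would be something like `np.array(img.convert('1'))` on the way in and `Image.fromarray(out.astype(np.uint8) * 255)` on the way out; `scipy.ndimage.binary_dilation` does the same job with more options if SciPy is available.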
74,996,648
925,913
How to set a side_effect on an instance method of a class that's been patched?
<p>How do I get a method of a <code>@patch</code>'ed class to throw an Exception when called?</p> <p><code>B</code> is just a class that you can call <code>go()</code> on, which in turn just prints that it happened:</p> <pre><code># module_b.py class B: def __init__(self) -&gt; None: self.x = True print(&quot;B constructed OK&quot;) def go(self): print(&quot;called B.go()&quot;) </code></pre> <p><code>A</code> is a class that holds an instance of <code>B</code>. It also has a <code>go()</code> method, which calls the <code>B</code> instance's <code>go()</code> method:</p> <pre><code># module_a.py from module_b import B class A: def __init__(self) -&gt; None: try: self.b = B() print(&quot;A constructed OK&quot;) except Exception as e: print(&quot;A's constructor threw an exception: &quot; + repr(e)) def go(self): try: self.b.go() print(&quot;b.go() called with no problems&quot;) except Exception as e: print(&quot;a.go() threw an exception: &quot; + repr(e)) </code></pre> <p>And here's the test code:</p> <pre><code># main.py from module_a import A from module_b import B from unittest import mock # a = A() # a.go() # output: called B.go() @mock.patch(&quot;module_a.B&quot;) # learned the hard way to do this, not &quot;module_b.B&quot; def test_b_constructor_throws(mock_b: mock.Mock): mock_b.side_effect = Exception(&quot;test&quot;) a = A() print(&quot;-- test_b_constructor_throws() --&quot;) test_b_constructor_throws() print(&quot;&quot;) @mock.patch(&quot;module_b.B.go&quot;) @mock.patch(&quot;module_a.B&quot;) def test_b_method_go_throws_1( mock_b: mock.Mock, mock_b_go: mock.Mock ): # --- attempt 1 --- # mock_b_go.side_effect = Exception(&quot;test&quot;) # a = A() # a.go() # --- attempt 2 --- mock_b.return_value.mock_b_go.side_effect = Exception(&quot;test&quot;) a = A() a.go() print(&quot;-- test_b_method_go_throws_1() --&quot;) test_b_method_go_throws_1() print(&quot;&quot;) </code></pre> <p>Finally, here's the output:</p> <pre class="lang-none 
prettyprint-override"><code>-- test_b_constructor_throws() -- B's constructor threw an exception: Exception('test') -- test_b_method_go_throws_1() -- A constructed OK b.go() called with no problems </code></pre> <p><a href="https://trinket.io/python3/d7bf3308d1" rel="nofollow noreferrer">Above code on Trinket.io</a></p> <p>I've tried a variety of other things too, like <code>autospec=True</code> and using <code>@patch.object()</code> instead, but no go. I've read a bunch of seemingly related questions but have been unable to come up with a solution:</p> <ul> <li><a href="https://stackoverflow.com/questions/32529934/using-mock-side-effect-for-an-instance-method">Using Mock.side_effect for an instance method</a></li> <li><a href="https://stackoverflow.com/questions/32419283/why-does-mock-ignore-the-instance-object-passed-to-a-mocked-out-method-when-it-i">Why does mock ignore the instance/object passed to a mocked out method when it is called?</a></li> <li><a href="https://stackoverflow.com/questions/49556209/python-side-effect-mocking-behavior-of-a-method">python side_effect - mocking behavior of a method</a></li> <li><a href="https://stackoverflow.com/questions/64992044/pytest-mock-multiple-calls-of-same-method-with-different-side-effect">Pytest: Mock multiple calls of same method with different side_effect</a></li> <li><a href="https://stackoverflow.com/questions/56550060/how-do-i-mock-a-method-on-an-object-created-by-an-patch-decorator">How do I mock a method on an object created by an @patch decorator?</a></li> </ul>
<python><python-unittest.mock>
2023-01-03 17:24:50
1
30,423
Andrew Cheong
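For the mock question above, the key detail is that patching the class makes `A.__init__` store `mock_b.return_value`, so the exception belongs on `return_value.go` — the question's attempt set `return_value.mock_b_go`, which just creates an unrelated child mock. A self-contained sketch that fakes the two modules so it runs anywhere:

```python
import sys
import types
from unittest import mock

# Build stand-ins for module_b / module_a so the example is self-contained.
module_b = types.ModuleType("module_b")
class B:
    def go(self):
        print("called B.go()")
module_b.B = B
sys.modules["module_b"] = module_b

module_a = types.ModuleType("module_a")
exec(
    "from module_b import B\n"
    "class A:\n"
    "    def __init__(self):\n"
    "        self.b = B()\n"
    "    def go(self):\n"
    "        self.b.go()\n",
    module_a.__dict__,
)
sys.modules["module_a"] = module_a

# The patched class mock's return_value is what A stores as self.b, so
# the side effect goes on return_value.go (the method's real name).
with mock.patch("module_a.B") as mock_b:
    mock_b.return_value.go.side_effect = Exception("test")
    a = module_a.A()
    try:
        a.go()
        raised = False
    except Exception as exc:
        raised = str(exc) == "test"

print(raised)  # True
```

In a real project layout the `types.ModuleType` scaffolding is unnecessary — only the `mock_b.return_value.go.side_effect = ...` line differs from the question's attempt.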
74,996,476
20,920,790
How to make user function (group_by().apply and def function with agg functions)
<p>friends!</p> <p>I got this code:</p> <pre><code>def agr_list(x): f = {} f['quantity'] = sum(x['quantity']) f['revenue'] = sum(x['quantity']*x['price']) return pd.Series(f, index=['quantity', 'revenue']) z6_g = ( z6. groupby('name', as_index=False). apply(agr_list) ) </code></pre> <p>It's working, result is:</p> <pre><code> name quantity revenue 0 Avocaddo 20 3556 1 Got meat! 2 818 2 Pineapple 2 620 3 Pineapple sort2 1 219 4 Dove 1 229 5 Cotico 1 149 6 Orange 6 1014 7 Peanut 7 315 8 Baguette 2 251 9 Kotanyi 1 59 </code></pre> <p>Table to work with:</p> <pre><code> name quantity price 0 Product_names 1 169 </code></pre> <p>I just can't figure out how to make one custom function with this code. Also I can't remake this code without def agr_list.</p> <p>I can just get needed output with 2 custom functions:</p> <pre><code>def agr_list(x): f = {} f['quantity'] = sum(x['quantity']) f['revenue'] = sum(x['quantity']*x['price']) return pd.Series(f, index=['quantity', 'revenue']) def gr(df, gr_col): return df.groupby(gr_col, as_index=False).apply(agr_list) </code></pre> <p>Expected output:</p> <pre><code> name quantity revenue 0 Avocaddo 20 3556 1 Got meat! 2 818 2 Pineapple 2 620 3 Pineapple sort2 1 219 4 Dove 1 229 5 Cotico 1 149 6 Orange 6 1014 7 Peanut 7 315 8 Baguette 2 251 9 Kotanyi 1 59 </code></pre> <p>P. S. I just found out how to make it. I just put my code here. Thx, if spend your time for this.</p> <pre><code>def gr(df, gr_col): def agr_list(x): f = {} f['quantity'] = sum(x['quantity']) f['revenue'] = sum(x['quantity']*x['price']) return pd.Series(f, index=['quantity', 'revenue']) return df.groupby(gr_col, as_index=False).apply(agr_list) </code></pre>
<python><pandas>
2023-01-03 17:08:33
1
402
John Doe
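The nested-function version at the end of the groupby question above works; an alternative sketch using pandas named aggregation avoids the row-wise `apply` entirely (the sample numbers here are made up):

```python
import pandas as pd

z6 = pd.DataFrame({
    "name": ["Avocaddo", "Avocaddo", "Orange"],
    "quantity": [8, 12, 6],
    "price": [178, 178, 169],
})

# Precompute the revenue column, then aggregate both columns with sums
# in one reusable function -- no inner def needed.
def gr(df, gr_col):
    return (
        df.assign(revenue=df["quantity"] * df["price"])
          .groupby(gr_col, as_index=False)
          .agg(quantity=("quantity", "sum"), revenue=("revenue", "sum"))
    )

print(gr(z6, "name"))
```

Named aggregation (`agg(out_col=(in_col, func))`) keeps the output column names explicit and tends to be faster than `groupby().apply` on larger frames.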
75,004,889
802,339
Where is Qiskit's RepetitionCode class now?
<p>Now that Qiskit's <code>ignis</code> module has been deprecated, where does <code>qiskit.ignis.verification.topological_codes.RepetitionCode</code> or its equivalent reside now?</p>
<qiskit><python>
2023-01-03 16:57:48
1
1,398
bisarch
74,996,280
5,106,834
Reducing number of repeated queries in Django when creating new objects in a loop
<p>I'm populating a database using Django from an Excel sheet. In the example below, the <code>purchases</code> list simulates the format of that sheet.</p> <p>My problem is that I'm repeatedly using <code>Customer.objects.get</code> to retrieve the same <code>Customer</code> object when creating new <code>Purchase</code> entries. This results in repeated database queries for the same information.</p> <p>I've tried using <code>customers = Customer.objects.all()</code> to read in the whole <code>Customer</code> table into memory, but that doesn't seem to have worked since <code>customers.get()</code> still creates a new database query each time.</p> <pre class="lang-py prettyprint-override"><code>from django.db import models # My classes class Customer(models.Model): name = models.CharField(&quot;Name&quot;, max_length=50, unique=True) class Item(models.Model): name = models.CharField(&quot;Name&quot;, max_length=50, unique=True) class Purchase(models.Model): customer = models.ForeignKey(&quot;Customer&quot;, on_delete=models.CASCADE) item = models.ForeignKey(&quot;Customer&quot;, on_delete=models.CASCADE) # My data purchase_table =[ {'customer': 'Joe', 'item': 'Toaster'}, {'customer': 'Joe', 'item': 'Slippers'}, {'customer': 'Jane', 'item': 'Microwave'} ] # Populating DB for purchase in purchase_table: # Get item and customer (this is where the inefficient use of 'get' takes place) item = Item.objects.get(name=purchase['item']) customer = Customer.objects.get(name=purchase['customer']) # Create new purchase Purchase(customer, item).save() </code></pre>
<python><django><performance>
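The caching idea behind avoiding the repeated `get()` calls can be sketched without a database: fetch each unique name once, keep it in a dict, and reuse it. In Django this corresponds to building the lookup dict from a single queryset (e.g. `{c.name: c for c in Customer.objects.all()}`, or `Customer.objects.in_bulk(field_name='name')` on a unique field); the counter and `fake_get` below are stand-ins for the ORM call, used only to show the query count dropping.

```python
# Plain-Python sketch of caching ORM lookups: the "database" is only
# hit the first time a name is seen.
query_count = 0

def fake_get(name):
    """Stand-in for Customer.objects.get(name=...) that counts queries."""
    global query_count
    query_count += 1
    return {"name": name}

cache = {}

def cached_get(name):
    # Only query for names we have not seen yet; reuse the cached object after.
    if name not in cache:
        cache[name] = fake_get(name)
    return cache[name]

purchase_table = [
    {"customer": "Joe", "item": "Toaster"},
    {"customer": "Joe", "item": "Slippers"},
    {"customer": "Jane", "item": "Microwave"},
]
customers = [cached_get(p["customer"]) for p in purchase_table]
```

With this pattern the three rows above cost two lookups instead of three, and every repeated name reuses the exact same object.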
2023-01-03 16:51:48
2
607
Andrew Plowright
74,996,216
12,415,855
Selecting value from dropdown-box is not possible with selenium?
<p>I am trying to select the value &quot;Ukrainian Division&quot; in the dropdown box of the following site:</p> <p><a href="https://www.cyberarena.live/schedule-efootball" rel="nofollow noreferrer">https://www.cyberarena.live/schedule-efootball</a></p> <p>with the following code:</p> <pre><code>from selenium import webdriver from selenium.webdriver.chrome.options import Options from selenium.webdriver.chrome.service import Service from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import Select from webdriver_manager.chrome import ChromeDriverManager import time if __name__ == '__main__': WAIT = 3 options = Options() options.add_experimental_option ('excludeSwitches', ['enable-logging']) options.add_argument(&quot;start-maximized&quot;) options.add_argument('window-size=1920x1080') options.add_argument('--no-sandbox') options.add_argument('--disable-gpu') srv=Service(ChromeDriverManager().install()) driver = webdriver.Chrome (service=srv, options=options) link = f&quot;https://www.cyberarena.live/schedule-efootball&quot; driver.get (link) time.sleep(WAIT) select = Select(driver.find_elements(By.XPATH,&quot;//select&quot;)[1]) select.select_by_visible_text('Ukrainian Division') # select.select_by_value(&quot;1&quot;) input(&quot;Press!&quot;) driver.quit() </code></pre> <p>But unfortunately, nothing happens - the options are not selected with this code. I also tried it with select_by_value with this line</p> <pre><code>select.select_by_value(&quot;1&quot;) </code></pre> <p>instead of</p> <pre><code>select.select_by_visible_text('Ukrainian Division') </code></pre> <p>but this doesn't work either.</p> <p>How can I select this option from the dropdown box?</p>
<python><selenium><selenium-webdriver><xpath><webdriverwait>
2023-01-03 16:45:21
1
1,515
Rapid1898
74,996,170
4,724,240
How to make a function switch with numba
<p>I have 3 jitted functions, <code>a(x)</code>, <code>b(x)</code> and <code>c(x)</code>.</p> <p>I need a switch function that does this:</p> <pre><code>@nb.njit def switch(i, x): if i == 0: return a(x) elif i == 1: return b(x) elif i == 2: return c(x) </code></pre> <p>But I would like to write it in a more concise way without performance loss, such as:</p> <pre><code>functions = (a, b, c) @nb.njit def switch(i, x): return functions[i](x) </code></pre> <p>However, nopython jit compilation can't handle tuples of functions. How can I do this?</p>
<python><numba>
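One workaround often used with numba is to generate the if/elif chain from the tuple with closures, so every call target is a compile-time constant. Since numba may not be importable here, the sketch below shows the pure-Python shape of the pattern (with numba, each generated `switch` would additionally be decorated with `nb.njit`, and `a`/`b`/`c` are illustrative stand-ins):

```python
def make_switch(functions):
    # Recursively build an if/else chain: each closure calls exactly one
    # concrete function, which is what nopython mode can compile.
    head, tail = functions[0], functions[1:]
    if not tail:
        def switch(i, x):
            return head(x)
    else:
        rest = make_switch(tail)
        def switch(i, x):
            if i == 0:
                return head(x)
            return rest(i - 1, x)
    return switch

# Example functions standing in for the jitted a, b, c.
def a(x): return x + 1
def b(x): return x * 2
def c(x): return x - 3

switch = make_switch((a, b, c))
```

The dispatch stays concise at the call site (`switch(i, x)`) while the chain itself is generated once, at definition time.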
2023-01-03 16:40:37
1
1,334
Axel Puig
74,996,089
4,971,866
Jupyter widgets: disable a button if input is empty else enable?
<pre class="lang-py prettyprint-override"><code>widgets.VBox([ widgets.Text(value=&quot;&quot;), widgets.Button( description=&quot;Click&quot;, disabled=# ???, ) ] ) </code></pre> <p>How could I make the button disable based on the input value?</p> <p>Disable when input is <code>''</code>, else make the button enabled?</p> <p>Is this possible?</p>
<python><jupyter-notebook><jupyter-lab>
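With ipywidgets the usual approach is to react to value changes via `observe()` and flip the button's `disabled` flag in the handler. Because ipywidgets may not be importable outside a notebook, the wiring below uses minimal stand-in objects; with real widgets the registration line would be `text.observe(on_change, names="value")`.

```python
class FakeWidget:
    """Minimal stand-in for widgets.Text / widgets.Button."""
    def __init__(self, **attrs):
        self.__dict__.update(attrs)

text = FakeWidget(value="")
button = FakeWidget(description="Click", disabled=True)

def on_change(change):
    # Disable while the (stripped) input is empty, enable otherwise.
    button.disabled = change["new"].strip() == ""

on_change({"new": text.value})   # apply the initial state
on_change({"new": "hello"})      # simulate the user typing something
```

After the simulated keystroke the button is enabled; clearing the field disables it again.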
2023-01-03 16:32:45
1
2,687
CSSer
74,995,953
925,913
Why is setting a method to have a side_effect causing construction of that method's class to throw?
<p>I'm trying to create an MVCE for another problem I'm having. But I ran into an intermediate problem. You should be able to see and run an MVCE <a href="https://trinket.io/python3/f5e6dfe9d9" rel="nofollow noreferrer">here</a>, but here it is laid out:</p> <p><strong>B</strong> is just a class that you can call <strong>go()</strong> on, which in turn just prints that it happened.</p> <pre><code># module_b.py class B: def __init__(self) -&gt; None: self.x = True print(&quot;B constructed OK&quot;) def go(self): print(&quot;called B.go()&quot;) </code></pre> <p><strong>A</strong> is a class that holds an instance of B. It also has a <strong>go()</strong> method, which calls the B instance's go() method.</p> <pre><code># module_a.py from module_b import B class A: def __init__(self) -&gt; None: try: self.b = B() print(&quot;A constructed OK&quot;) except Exception as e: print(&quot;A's constructor threw an exception: &quot; + repr(e)) def go(self): try: self.b.go() except Exception as e: print(&quot;a.go() threw an exception: &quot; + repr(e)) </code></pre> <p>And here's the test code:</p> <pre><code># main.py from module_a import A from module_b import B from unittest import mock @mock.patch(&quot;module_a.B&quot;) @mock.patch(&quot;module_a.B.go&quot;) def test_b_method_go_throws_1( mock_b: mock.Mock, mock_b_go: mock.Mock ): mock_b.return_value = mock.Mock() mock_b_go.side_effect = Exception(&quot;test&quot;) # &lt;-- This line seems to cause the issue. 
a = A() a.go() print(&quot;-- test_b_method_go_throws_1() --&quot;) test_b_method_go_throws_1() print(&quot;&quot;) </code></pre> <p>Finally, here's the output:</p> <pre class="lang-none prettyprint-override"><code>-- test_b_method_go_throws_1() -- A's constructor threw an exception: Exception('test') a.go() threw an exception: AttributeError(&quot;'A' object has no attribute 'b'&quot;) </code></pre> <p>Originally I was trying to patch an instance's method to throw an error—I'm pretty sure I wasn't doing that right. But here I'm trying to understand why setting <code>mock_b_go.side_effect</code> is causing the construction of B to throw an exception.</p>
<python><unit-testing><exception><mocking>
2023-01-03 16:23:05
0
30,423
Andrew Cheong
74,995,951
3,482,266
Slicing from a generator in blocks of specific size
<pre><code>list_values = [...] gen = ( list_values[pos : pos + bucket_size] for pos in range(0, len(list_values), bucket_size) ) </code></pre> <p>Is there a way to make this work if list_values is a generator instead? My objective is to reduce RAM usage.</p> <p>I know that I can use itertools.islice to slice an iterator.</p> <pre><code>gen = ( islice(list_values, pos, pos + bucket_size) for pos in range(0, len(list_values), bucket_size) ) </code></pre> <p>The problem is:</p> <ul> <li>How would I remove/substitute <code>len(list_values)</code>, which doesn't work for generators?</li> <li>Will the use of islice, in this case, reduce peak RAM usage?</li> </ul>
<python><generator>
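A common way to chunk an arbitrary iterator without knowing its length is to repeatedly apply `islice` to the *same* iterator and stop when a chunk comes back empty; peak memory is then one bucket rather than the whole sequence.

```python
from itertools import islice

def buckets(iterable, bucket_size):
    it = iter(iterable)
    while True:
        # islice consumes the shared iterator, so each call picks up
        # where the previous chunk stopped.
        block = list(islice(it, bucket_size))
        if not block:
            return
        yield block

gen = (x * x for x in range(7))   # a generator with no len()
chunks = list(buckets(gen, 3))    # [[0, 1, 4], [9, 16, 25], [36]]
```

Materializing each chunk with `list()` keeps only `bucket_size` items in memory at a time, which is the RAM saving asked about.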
2023-01-03 16:23:01
1
1,608
An old man in the sea.
74,995,910
7,215,536
how to add conditionals in html attribute
<p>I am using Django 4.1.4 and I am new to it.</p> <p>In a form I want to set the correct URL based on the variable <code>update</code>. If the variable is True, the template is used for updating (a different URL); otherwise it saves new data.</p> <p>I created something like this:</p> <pre><code>&lt;form action=&quot;{% if update %} {% url 'in.edit_session' session.id %} {% else %} {% url 'in.save_session' %} {% endif %}&quot; method=&quot;POST&quot; class=&quot;form&quot;&gt; </code></pre> <p>but it is not working.</p> <p>This will work... but I don't want to duplicate code:</p> <pre><code> {% if update %} &lt;form action=&quot;{% url 'in.edit_session' session.id %}&quot; method=&quot;POST&quot; class=&quot;form&quot;&gt; {% else %} &lt;form action=&quot;{% url 'in.save_session' %}&quot; method=&quot;POST&quot; class=&quot;form&quot;&gt; {% endif %} </code></pre> <p>How can I do this?</p>
<python><django>
2023-01-03 16:18:20
1
1,021
calin24
74,995,851
7,161,082
Copy files from gridfs to another gridfs database
<p>I am searching for a way to copy files in gridfs. The idea is to keep all additional data and metadata of the file, like the same &quot;_id&quot; etc.</p> <p>The use case is to set up a testing database with a fraction of the files in gridfs while maintaining the references to other collections and documents that are copied.</p> <p>My attempt in Python was the following, but it already creates a new ObjectId for the inserted file.</p> <pre><code>import pymongo import gridfs ... fs1 = gridfs.GridFS(database=db1, collection=&quot;photos&quot;) buffer = fs1.find_one({&quot;_id&quot;: photo[&quot;binaryId&quot;]}) fs2 = gridfs.GridFS(database=db2, collection=&quot;photos&quot;) fs2.put(buffer) </code></pre> <p><em><strong>Update</strong></em></p> <p>I found a place where the information is kept.</p> <pre><code> fs2.put(buffer, **buffer._file) </code></pre>
<python><pymongo><gridfs>
2023-01-03 16:13:31
1
493
samusa
74,995,792
6,677,891
Using statannot with split violin plot
<p>I'm plotting some data as split violin and using <code>statannot</code> library to do T-test. I managed to get the plot and the annotation but I would like to <strong>tune the horizontal offset</strong> of the annotation. I checked the <code>statannot</code> example on <a href="https://github.com/webermarcolivier/statannot/blob/master/example/example.ipynb" rel="nofollow noreferrer">Github</a>, there seems to be options to set up y-offsets but not x.</p> <p>My code:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import statannot from statannot import add_stat_annotation l1 = [272.504,202.512,193.069,167.855,154.232,152.334,151.609,148.53,146.403,145.133,144.645,143.988,139.266,138.655,133.887,133.661,133.434,133.17,131.556,131.256,130.978,129.105,128.842,126.593,126.333,124.542,123.76,123.492,123.22,121.734,120.375,119.656,119.108,118.642,118.401,118.309,116.836,116.416,115.601,114.028,113.992,113.288,113.144,112.626,112.616,112.496,111.561,110.76,110.111,110.092,109.331,109.135,109.012,108.903,108.552,106.474,106.42,106.019,105.885,105.525,105.439,105.366,104.982,104.805,104.744,104.664,104.335,104.184,103.707,103.457,103.323,103.249,103.098,102.37,102.315,102.016,101.378,99.8643,99.5172,99.2789,99.2084,99.0091,97.7338,97.6874,97.6814,97.4388,96.432,96.2432,96.0918,96.0247,95.839,95.297,95.1323,94.2848,93.2593,93.2341,93.0388,92.3403,91.8436,91.8328,91.6193,91.3894,91.222,90.8495,90.734,90.7058,90.6775,90.5114,90.1493,90.0493,89.7854,89.6923,89.0814,89.0667,88.995,88.9301,88.7436,88.3927,88.3131,88.0119,87.4159,87.0414,86.9434,86.8593,86.7524,86.7398,86.2706,86.176,86.0691,85.9919,85.7879,85.6841,85.6708,85.3463,84.1721,84.066,83.7766,83.1467,83.0892,82.9535,82.816,82.5275,82.4752,82.471,82.4688,82.4646,81.9346,81.5371,81.3621,81.3368,80.9333,80.7021,80.6517,80.5567,80.1604,79.892,79.586,79.5603,79.4095,79.3454,79.2852,79.2086,79.0203,78.8051,78.706,78.6477,78.3607,78.0428,77.8758,77.
5691,77.5387,77.5311,77.5263,77.3324,77.29,77.2234,77.0692,77.0666,77.0413,76.7028,76.5951,76.4135,76.392,76.3454,76.0672,75.6534,75.5475,75.2686,75.0235,74.9092,74.4248,74.2518,74.2257,74.0593,73.9968,73.9246,73.8414,73.8408,73.7763,73.4484,73.2291,73.145,73.0303,72.9797,72.9751,72.9024,72.826,72.6676,72.4249,72.4102,72.0856,71.9704,71.9172,71.7447,71.7206,71.5958,71.439,71.4298,71.2355,71.2251,71.0718,71.053,70.5732,70.4904,70.1866,69.8816,69.4722,69.4509,69.3045,69.2798,69.1982,69.1069,69.0067,68.6909,68.5816,68.4733,68.4249,68.024,68.0039,67.9734,67.7469,67.7361,67.713,67.6114,67.6018,67.4642,67.2782,67.0869,67.0519,66.8222,66.7673,66.6673,66.2306,66.1987,66.1889,66.0974,65.9233,65.8449,65.6032,65.3384,65.0957,65.0477,64.8658,64.7849,64.7369,64.7189,64.6764,64.6755,64.6455,64.4755,64.306,64.246,64.2147,64.0402,64.0325,63.8272,63.8245,63.6099,63.3386,63.2451,63.0935,63.0666,62.5662,62.4672,62.4578,62.4012,62.2842,62.2709,62.1582,61.9314,61.904,61.827,61.8042,61.7518,61.6945,61.6762,61.6359,61.5333,61.4807,61.304,61.1901,61.1656,61.1317,61.1297,60.9643,60.8862,60.6479,60.5511,60.5056,60.4721,60.4157,60.3676,60.1445,60.091,60.0761,60.0467,60.0336,59.9166,59.8809,59.8731,59.6334,59.5349,59.4131,59.3963,59.3804,59.361,59.3408,59.3174,59.2815,59.199,59.1076,58.8374,58.8146,58.8089,58.7414,58.5541,58.4368,58.2797,58.2499,58.1815,58.1174,58.1069,58.105,58.0428,57.9885,57.8912,57.8868,57.8822,57.676,57.6351,57.4839,57.4002,57.2343,57.1967,57.1731,57.0824,57.0567,56.8848,56.7218,56.4508,56.3671,56.3229,56.3205,56.1875,56.1257,56.0954,55.9887,55.8555,55.8546,55.809,55.7402,55.6578,55.5996,55.5602,55.5339,55.4469,55.3462,55.3444,55.3205,55.3117,55.2178,55.1261,55.0886,55.0576,54.9589,54.81,54.7532,54.7457,54.7033,54.5629,54.4999,54.3627,54.3135,54.311,54.2713,54.1049,54.0707,53.9745,53.77,53.5395,53.5006,53.3687,53.3117,53.2583,53.2529,53.2011,53.177,52.9779,52.909,52.905,52.6222,52.6088,52.5059,52.4902,52.4878,52.3373,52.3323,52.3029,52.3012,52.234,52.0334,51.9882,51.9661,
51.9633,51.9121,51.8698,51.8114,51.788,51.6296,51.6248,51.4913,51.4739,51.3667,51.2958,51.294,51.2281,51.1854,51.1286,51.1162,51.0855,50.8662,50.5744,50.5062,50.4847,50.4691,50.4474,50.4179,50.2679,50.2074,50.1622,50.1334,49.7874,49.48,49.4539,49.4231,49.272,49.2575,49.1364,49.092,49.0221,48.8856,48.8175,48.7647,48.6915,48.647,48.6371,48.5609,48.5368,48.4042,48.234,48.1098,48.0656,47.9973,47.8853,47.7601,47.2543,47.2029,47.1206,47.1194,47.0535,46.9547,46.8645,46.8117,46.6366,46.3982,46.2792,46.2024,46.0142,45.8344,45.8233,45.8128,45.7977,45.7417,45.5822,45.5592,45.4089,45.3844,45.3696,45.136,45.0556,45.0016,45.0011,44.7754,44.6891,44.6553,44.6383,44.507,44.1616,44.1418,44.0245,43.9803,43.9753,43.968,43.8356,43.8177,43.6451,43.5594,43.3305,43.2629,43.2173,43.186,42.9444,42.8459,42.8095,42.7937,42.6379,42.5311,42.5082,42.4254,42.2728,42.1527,42.0969,42.0669,41.9709,41.9636,41.9144,41.9029,41.8869,41.8605,41.716,41.4758,41.4287,41.0717,41.0059,41.0017,40.9169,40.8871,40.7063,40.6942,40.6885,40.5788,40.5018,40.3402,40.3285,40.3031,40.2406,40.1971,40.167,40.1385,40.0427,39.8397,39.699,39.6931,39.6284,39.2814,39.2729,39.263,39.1405,39.1393,39.1139,39.0489,38.9667,38.8357,38.6099,38.4053,38.009,37.9577,37.8777,37.3667,37.1155,37.0287,36.9916,36.5744,36.2806,36.1317,36.0718,36.0612,36.039,35.9404,35.8914,35.8179,35.738,35.6264,35.5055,35.4206,35.4184,35.3975,35.3173,35.2659,35.1784,35.0709,34.8165,34.7544,34.7423,34.6369,34.6334,34.3476,34.3354,34.2578,34.1394,33.9828,33.9191,33.6455,33.6272,33.598,33.5212,33.3422,33.2855,33.2689,32.9122,32.8871,32.8818,32.4971,32.3061,32.2169,32.1022,31.9352,31.6088,31.5756,31.5731,31.4806,31.349,31.3405,31.2251,31.0717,30.7287,30.6552,30.5264,30.3869,30.3099,30.2343,30.1371,30.0559] l2 = 
[207.888,171.346,167.564,159.4,151.16,150,138.694,131.992,129.483,122.415,121.331,121.249,121.237,120.543,119.089,117.743,114.647,114.494,113.767,113.196,112.84,112.651,112.403,111.544,110.565,110.173,109.998,109.696,109.386,107.889,107.482,107.319,106.861,106.735,106.491,106.49,105.362,104.869,104.615,104.414,104.242,104.082,103.716,103.393,103.26,102.775,102.182,101.64,100.905,100.28,99.9022,99.884,99.3238,98.0795,97.7133,96.9512,96.9184,96.8695,96.7221,96.413,96.232,96.1422,95.8459,95.8459,95.8131,95.7728,95.3913,95.2376,94.8845,94.5705,93.2414,93.1552,92.5173,92.3415,92.2056,91.9852,91.9534,91.9501,91.9433,91.7053,91.49,91.3532,91.3028,91.1127,91.0899,90.8153,90.6707,90.4659,90.2175,90.2155,90.159,89.9459,89.5651,89.5639,89.1779,89.145,88.9312,88.8858,88.455,88.3704,88.2895,88.1412,87.9855,87.9778,87.9778,87.9013,87.6009,87.5154,87.1016,86.9321,86.7951,86.7836,86.7587,86.4046,86.1803,85.904,85.7661,85.7291,85.6081,85.5804,85.497,85.1705,85.0928,84.6087,84.4984,84.2268,83.8976,83.7881,83.7389,83.7389,83.5095,83.3326,83.0616,82.8301,82.794,82.794,82.7641,82.5966,82.4473,82.4219,82.2223,82.1793,82.0124,81.9174,81.8531,81.376,81.0667,81.0464,80.8922,80.8331,80.6349,80.5288,79.8589,79.8148,79.8148,79.6995,79.6146,79.6007,79.5844,79.4964,79.3517,79.298,79.1793,79.1632,79.0621,79.0124,78.9803,78.9774,78.8984,78.8976,78.8976,78.8805,78.8545,78.7977,78.7145,78.6846,78.6846,78.5711,78.4882,78.3885,78.1814,78.1159,78.0742,78.0108,77.9611,77.8295,77.8264,77.8001,77.5285,77.5232,77.2644,77.1439,77.1424,77.0046,76.9793,76.9487,76.8315,76.8283,76.8189,76.8189,76.7983,76.7027,76.7009,76.4747,76.4317,76.4124,76.3816,76.2862,76.1642,76.1527,75.9325,75.8247,75.7142,75.47,75.4662,75.4423,75.4232,75.333,75.0098,74.898,74.8773,74.8275,74.8053,74.7623,74.7514,74.6993,74.6993,74.6796,74.673,74.6625,74.2592,74.0637,74.0032,73.8717,73.6659,73.6505,73.5091,73.4831,73.4567,73.3633,73.2646,73.1528,72.9344,72.9313,72.8652,72.7537,72.5718,72.4492,72.4051,72.2192,72.1682,72.1634,72.0665,72.028
9,71.6249,71.5958,71.5181,71.4805,71.3533,71.1772,71.1761,71.0991,71.053,71.0416,70.8338,70.488,70.4225,70.1584,70.1289,70.0825,70.0229,69.997,69.9822,69.9385,69.7941,69.723,69.5848,69.4934,69.4398,69.3906,69.3726,69.3726,69.2188,69.0563,68.8404,68.763,68.7606,68.7362,68.6994,68.657,68.6141,68.4659,68.3962,68.2867,68.2724,67.7605,67.6517,67.5337,67.5027,67.2305,67.2183,67.1789,67.0512,66.9404,66.69,66.6746,66.6483,66.6417,66.4847,66.4695,66.4503,66.4422,66.2737,66.0067,65.9082,65.8823,65.8823,65.8703,65.8449,65.7907,65.7635,65.7635,65.5651,65.5406,65.5047,65.4879,65.3959,65.2771,65.0997,65.0984,65.0023,64.7704,64.6576,64.6212,64.5761,64.482,64.3861,64.3453,64.2428,64.0602,64.0358,63.9648,63.94,63.8353,63.7891,63.7872,63.7437,63.7111,63.6128,63.5088,63.4259,63.3954,63.3381,63.2467,63.1638,63.1347,63.0981,63.0643,62.9795,62.9157,62.7673,62.7355,62.6865,62.6326,62.5611,62.3845,62.2136,62.1582,62.1221,62.0804,61.754,61.754,61.7538,61.5985,61.4088,61.392,61.2851,61.2619,61.2156,61.1782,61.1557,61.0956,61.0415,61.0297,60.9654,60.7681,60.6919,60.6803,60.5969,60.3806,60.3681,60.2862,60.2253,60.1682,60.1682,60.1452,60.1341,60.0051,59.9962,59.9775,59.9159,59.7761,59.6807,59.5799,59.5299,59.491,59.45,59.2914,59.1757,59.1162,58.9643,58.9336,58.916,58.9104,58.818,58.8159,58.6025,58.5532,58.5496,58.4647,58.2838,58.2579,58.1108,58.0855,58.0039,57.926,57.8545,57.8401,57.7977,57.4247,57.3585,57.3285,57.3285,57.2432,57.211,57.1942,57.1202,56.9691,56.8711,56.8627,56.8571,56.8326,56.7461,56.7351,56.7261,56.723,56.6504,56.6161,56.5315,56.4358,56.3995,56.3191,56.1791,56.1123,56.0066,55.9533,55.9467,55.9302,55.8873,55.7623,55.6959,55.5229,55.5216,55.4979,55.4929,55.4713,55.4565,55.4425,55.3594,55.3594,55.3583,55.3003,55.2434,55.0464,55.009,54.9714,54.9485,54.8953,54.8631,54.8077,54.8048,54.7574,54.6345,54.1857,54.0751,54.0189,53.948,53.712,53.6314,53.5875,53.5715,53.5661,53.5427,53.2477,53.077,53.0167,52.9753,52.8383,52.8235,52.8006,52.7509,52.7491,52.7464,52.5655,52.5612,52.5335,52.392,5
2.3605,52.2919,52.2889,52.2433,52.2337,51.9662,51.9639,51.9633,51.744,51.6756,51.6653,51.6248,51.6233,51.5476,51.5279,51.512,51.4396,51.3542,51.3091,51.1867,51.1765,51.1572,51.1562,50.9844,50.8947,50.8875,50.8654,50.7811,50.6834,50.6828,50.4879,50.3947,50.3924,50.3499,50.2686,50.2493,50.111,50.0893,50.0671,49.986,49.9593,49.9301,49.8905,49.872,49.7097,49.5889,49.5522,49.5216,49.5046,49.5,49.4539,49.444,49.2926,49.2884,49.2499,49.1404,49.1304,49.0876,49.0147,49.0147,48.9785,48.9616,48.8961,48.8185,48.5901,48.5854,48.5466,48.4818,48.459,48.4493,48.4289,48.2729,48.1958,48.1527,48.1499,48.14,48.0727,48.0451,47.999,47.9989,47.987,47.933,47.9187,47.8933,47.7601,47.7205,47.7088,47.7049,47.6592,47.627,47.5553,47.5182,47.4763,47.3752,47.2772,47.2718,47.2354,47.2021,47.1803,47.1628,47.1002,46.7306,46.7075,46.6655,46.6347,46.4787,46.4693,46.4693,46.4686,46.4304,46.4304,46.3982,46.3877,46.2395,46.2374,46.2374,46.1992,46.1781,46.1278,46.1276,46.1276,46.0755,45.8223,45.7552,45.7022,45.6475,45.4832,45.454,45.4023,45.1494,45.1494,45.0928,45.0437,45.0041,44.9834,44.8849] data = {'control':l1, 'wt': l2} df = pd.DataFrame(data) df1 = df.melt().assign(x='') order = [&quot;control&quot;, &quot;wt&quot;] boxpairs=[(&quot;control&quot;, &quot;wt&quot;)] a = sns.violinplot(data=df1, x=&quot;x&quot;, y=&quot;value&quot;, hue=&quot;variable&quot;, split=True, inner=&quot;quart&quot;, linewidth=2.5) a1, test_results = add_stat_annotation(a, data=df1, x=&quot;variable&quot;, y=&quot;value&quot;, order=order, box_pairs=boxpairs, test='t-test_ind', text_format='star', comparisons_correction=None, loc='outside', line_offset_to_box=0.2, line_offset=0.2, line_height=0.05, text_offset=10, verbose=0) </code></pre> <p>I have tried to illustrate what I actually want in the output -</p> <p><a href="https://i.sstatic.net/KPFu7.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KPFu7.jpg" alt="enter image description here" /></a></p> <p>Does anyone know some workaround to move the annotation 
as shown above?</p>
<python><pandas><matplotlib><seaborn><violin-plot>
2023-01-03 16:07:27
2
495
gaya
74,995,791
5,516,353
Regex split but keep delimiter returning different than expected value
<p>I am trying to split a string that represents a simple mathematical equation over <code>+</code>, <code>-</code>, <code>&lt;&lt;</code> and <code>&gt;&gt;</code> while keeping the symbol. I can't find where the problem is.</p> <pre><code>&gt;&gt;&gt; re.split(r'( \+ )|( \&lt;\&lt; )|( \- )|( \&gt;\&gt; )', 'x - x') &lt;&lt;&lt; ['x', None, None, ' - ', None, 'x'] # Expected ['x', '-', 'x'] &gt;&gt;&gt; re.split(r'( \+ )| \&lt;\&lt; | \- | \&gt;\&gt; ', 'x - x') &lt;&lt;&lt; ['x', None, 'x'] # Expected ['x', '-', 'x'] &gt;&gt;&gt; re.split(r'( \+ )| \&lt;\&lt; | \- | \&gt;\&gt; ', 'x + x') &lt;&lt;&lt; ['x', '+', 'x'] # The form I am looking for </code></pre>
<python><regex>
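The `None` entries appear because the pattern has one capturing group per alternative: `re.split()` emits the match of *every* group, and groups that did not participate yield `None`. Putting all operators inside a single group avoids that:

```python
import re

# One capture group containing all the operators; << and >> need no escaping.
pattern = r' (\+|-|<<|>>) '

parts_minus = re.split(pattern, 'x - x')    # ['x', '-', 'x']
parts_plus  = re.split(pattern, 'x + x')    # ['x', '+', 'x']
parts_shift = re.split(pattern, 'x << x')   # ['x', '<<', 'x']
```

Only the single group's match is kept per delimiter, so the split result contains exactly the operands and the operator.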
2023-01-03 16:07:25
1
312
Ziad Amerr
74,995,719
6,758,739
Identify the sql statements using python
<p>The task is to identify if the SQL statement is a DML / DDL or not.</p> <p>So I had to use an array and push all the DML/DDL patterns into that and search for them by iterating.</p> <p>Below is a simple code snippet where</p> <ol> <li>I am sending an SQL query as a parameter</li> <li>Check if it has update, alter, drop, delete</li> <li>Print</li> </ol> <pre class="lang-py prettyprint-override"><code>def check_dml_statement(self, sql): actions = ['update', 'alter', 'drop', 'delete'] for action in actions: if re.search(action, sql.lower()): print(&quot;This is a dml statement&quot;) </code></pre> <p>But there are some edge cases which I need to code for</p> <ul> <li><p>To consider</p> <pre class="lang-sql prettyprint-override"><code>Update table table_name where ... alter table table_name create table table_name delete * from table_name drop table_name </code></pre> </li> <li><p>Not to consider:</p> <pre class="lang-sql prettyprint-override"><code>select * from table where action='drop' </code></pre> </li> </ul> <p>So, the task is to identify only SQL statements which modify, drop, create, alter table etc.</p> <p>One specific idea is to check if an SQL statement starts with the above array values using the <code>startswith</code> function.</p>
<python><sql>
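The `startswith` idea can be expressed as a single anchored regex: matching the keyword only at the *start* of the statement means values like `action='drop'` inside a WHERE clause no longer trigger a match. This is a sketch, not a full SQL parser — it does not handle leading comments or CTEs — and the keyword list is illustrative:

```python
import re

DML_DDL = ('update', 'alter', 'drop', 'delete', 'create', 'insert', 'truncate')

# ^\s* anchors to the statement start (after whitespace); \b stops
# partial-word matches such as "updates_table".
_pattern = re.compile(r'^\s*(' + '|'.join(DML_DDL) + r')\b', re.IGNORECASE)

def is_dml_or_ddl(sql):
    return _pattern.match(sql) is not None

checks = [
    is_dml_or_ddl("Update table_name set x = 1"),          # True
    is_dml_or_ddl("drop table_name"),                      # True
    is_dml_or_ddl("select * from t where action='drop'"),  # False
]
```

`re.match` already anchors at the beginning, so the `^` is redundant but makes the intent explicit.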
2023-01-03 16:00:51
1
992
LearningCpp
74,995,609
11,611,246
SPy error: "Axes don't match array" when calling .open_memmap()
<p>I have some <code>.bsq</code> files with corresponding header files (<code>.hdr</code>). I want to open and edit them in Python, using the <code>spectral</code> (<a href="https://www.spectralpython.net/" rel="nofollow noreferrer">SPy</a>) module.</p> <p>Since I need to edit the files from within a Python Toolbox in ArcMap (that is, a single Python script which uses the Python installation that comes with ArcMap), I decided to copy the <a href="https://github.com/spectralpython/spectral" rel="nofollow noreferrer">GitHub repository of the SPy module</a> and import it as</p> <pre><code>import sys sys.path.append(&quot;/path/to/module&quot;) import spatial as spy </code></pre> <p>in order to be able to run the toolbox on any computer without having to install pip or other software. (I intend to just copy the Toolbox and the module folder, or, in a later step, create a single Python script comprising the Toolbox as well as the SPy-module code.)</p> <p>I opened some <code>.bsq</code>-file and tried to subsequently edit it as <code>memmap</code>, following <a href="https://www.spectralpython.net/class_func_ref.html#envi-create-image" rel="nofollow noreferrer">this example</a>.</p> <p>First, I opened the image as <code>spectral.io.bsqfile.BsqFile</code>:</p> <pre><code>path = &quot;/path/to/image_header.hdr&quot; img = spy.open_image(path) </code></pre> <p>I am able to apply various methods to <code>img</code> (such as: view metadata, read bands as array, etc), hence, I assume there were no issues with the path or the image file. 
I can also read single bands with <code>memmap = True</code>:</p> <pre><code>mem_band = img.read_band(0, use_memmap = True) </code></pre> <p>Reading a single band results in an array of shape <code>(lines, samples)</code> with <code>dtype: float64</code> and where lines and sample correspond to the respective values in the <code>.hdr</code> file.</p> <p>However, trying to apply the <code>.open_memmap()</code> method to the <code>BsqFile</code> instance as follows:</p> <pre><code>mem = img.open_memmap(writable = True) </code></pre> <p>results in the following error:</p> <pre><code>Runtime error Traceback (most recent call last): File &quot;&lt;string&gt;&quot;, line 1, in &lt;module&gt; File &quot;/path/to/module/spectral/io/spyfile.py&quot;, line 809, in open_memmap dst_inter)) File &quot;/path/to/lib/site-packages/numpy/core/fromnumeric.py&quot;, line 536, in transpose return _wrapit(a, 'transpose', axes) File &quot;/path/to/lib/site-packages/numpy/core/fromnumeric.py&quot;, line 45, in _wrapit result = getattr(asattr(obj), method)(*args, **kwds) ValueError: axes don't match array </code></pre> <p>Is this due to some incompatibilities between the <code>numpy</code> version that comes with the ArcMap-Python-Installation which I am required to use (numpy version 1.9.2)? Or are there other issues with the code or set-up?</p> <ul> <li>Python version: 2.7.10</li> <li>numpy version: 1.9.2</li> <li>spectral version: 0.23.1</li> </ul> <p><strong>Edit</strong></p> <p>With the given Python version, <code>spectral.envi.create_image()</code> cannot create images of the given size due to an <code>OverflowError</code>. Potentially, this older Python version does not handle large numbers &quot;correctly&quot;?</p> <p>Using another Python installation, the <code>.open_memmap()</code> method worked without issues.</p>
<python><numpy><spectral-python>
2023-01-03 15:50:42
0
1,215
Manuel Popp
74,995,422
19,303,365
Fetching the Max Payment from Group with Payer ID in pandas dataframe
<p>Existing Dataframe:</p> <pre><code>Group Payer_ID status Payment_Amount A 11 P 100 A 12 P 100 A 13 Q 50 A 14 P 100 A 15 P - B 11 P 10 B 16 Q 150 </code></pre> <p>Expected Dataframe:</p> <pre><code>Group Payer_ID Payment_Amount A 11 100 B 16 150 </code></pre> <p><strong>If the payment amount is the same, it should take the first Payer_ID.</strong></p> <p>With the code below I could get the max payment amount, but I also need to fetch the respective Payer_ID.</p> <p>What changes need to be done?</p> <pre><code>Max_Payment_Amt = df.groupby('Group',as_index=False)['Payment_Amount'].max() </code></pre>
<python><pandas><dataframe>
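Assuming pandas is available, `groupby(...)['Payment_Amount'].idxmax()` gives the row label of the *first* occurrence of each group's maximum, which is exactly the "same amount → first Payer_ID" rule; indexing back with `df.loc` then recovers the whole row. The `'-'` entry is treated as missing by coercing the column to numeric first:

```python
import pandas as pd

df = pd.DataFrame({
    'Group':          ['A', 'A', 'A', 'A', 'A', 'B', 'B'],
    'Payer_ID':       [11, 12, 13, 14, 15, 11, 16],
    'status':         ['P', 'P', 'Q', 'P', 'P', 'P', 'Q'],
    'Payment_Amount': ['100', '100', '50', '100', '-', '10', '150'],
})

# '-' is not a number: coerce it to NaN so max/idxmax ignore it.
df['Payment_Amount'] = pd.to_numeric(df['Payment_Amount'], errors='coerce')

# idxmax returns the index of the first occurrence of each group maximum.
result = df.loc[
    df.groupby('Group')['Payment_Amount'].idxmax(),
    ['Group', 'Payer_ID', 'Payment_Amount'],
].reset_index(drop=True)
```

For group A the three rows tied at 100 resolve to the first one (Payer_ID 11), and group B resolves to Payer_ID 16.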
2023-01-03 15:33:31
0
365
Roshankumar
74,995,213
2,013,672
backslashes in strings
<p>When printing a string containing a backslash, I expect the backslash (<code>\</code>) to stay untouched.</p> <pre><code>test1 = &quot;This is a \ test String?&quot; print(test1) 'This is a \\ test String?' test2 = &quot;This is a '\' test String?&quot; print(test2) &quot;This is a '' test String?&quot; </code></pre> <p>What I expect is &quot;<code>This is a \ test String!</code>&quot; or &quot;<code>This is a '\' test String!</code>&quot; respectively. How can I achieve that?</p>
<python>
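The doubled backslash appears when the interactive shell echoes the `repr()` of an expression; `print()` itself emits the single character. A literal backslash is written as `\\` (or via a raw string) but is stored as one character:

```python
test1 = "This is a \\ test String?"   # the \\ escape stores ONE backslash

print(test1)         # This is a \ test String?
print(repr(test1))   # 'This is a \\ test String?'  <- what the REPL echoes

# A raw string sidesteps the escaping entirely (it still stores one backslash):
raw = r"This is a '\' test String?"
```

So no conversion is needed: the string already contains a single backslash, and `print()` shows it untouched.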
2023-01-03 15:13:52
3
4,449
Sadık
74,994,991
1,139,541
Select sub-packages inside a main package for build
<p>I have a folder structure where there is a main package (mypkg) holding inside several sub-packages (subpkg1, subpkg2, ..., subpkgN):</p> <pre><code>project_root_directory ├── pyproject.toml ├── ... └── mypkg/ ├── __init__.py ├── subpkg1/ │ ├── __init__.py │ ├── ... │ └── module1.py └── subpkg2/ │ ├── __init__.py │ ├── ... │ └── module1.py ... └── subpkgN/ ├── __init__.py ├── ... └── moduleN.py </code></pre> <p>How to select only specific packages inside a main package using <code>pyproject.toml</code>? I tried setting</p> <pre><code>[tool.setuptools.packages.find] include = [ &quot;subpkg1&quot;, &quot;subpkg2&quot;, ] namespaces = false </code></pre> <p>But the problem is that <code>mypkg/__init__.py</code> is not added to the build. Is there a way to add this file?</p>
<python><setuptools>
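With setuptools' automatic discovery, the `include` patterns in `[tool.setuptools.packages.find]` match full dotted package names, so listing bare `subpkg1`/`subpkg2` matches nothing under `mypkg`, and omitting `mypkg` itself is what drops `mypkg/__init__.py` from the build. A hedged sketch (the trailing `*` also pulls in any nested subpackages):

```toml
[tool.setuptools.packages.find]
# Patterns match dotted package names; include the parent package itself
# so mypkg/__init__.py ships, then the wanted subpackages.
include = [
    "mypkg",
    "mypkg.subpkg1*",
    "mypkg.subpkg2*",
]
namespaces = false
```

The other subpackages (subpkg3 … subpkgN) are simply not matched by any pattern, so they stay out of the distribution.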
2023-01-03 14:52:53
1
852
Ilya
74,994,921
1,324,023
python pandas generate pdf file from dataframe
<p>We can easily export a pandas dataframe to a csv file. Please find the code below:</p> <pre><code>import pandas as pd df = pd.read_csv('/Users/ashy/Downloads/weather_data.csv') df.to_csv('/Users/ashy/Downloads/file1.csv') </code></pre> <p>Is there an easy way to generate a pdf file from a dataframe? There isn't any method available to do so. What alternate methods can we use to generate the file from a dataframe? Please let me know.</p> <p>Thanks</p>
<python><pandas><pdf>
2023-01-03 14:47:29
1
1,599
Ashy Ashcsi
74,994,907
11,894,831
Complex Json parsing: key not found but seems there yet
<p>I have this &quot;very nested&quot; Json: <a href="https://textdoc.co/iWgRduZxl2JEkXU1" rel="nofollow noreferrer">https://textdoc.co/iWgRduZxl2JEkXU1</a> that i have to get through a webservice (no issue there) and then operate with Python (issue there).</p> <p>I have used this online tools to create the python class from the (validated) json file: <a href="https://json2csharp.com/code-converters/json-to-python" rel="nofollow noreferrer">https://json2csharp.com/code-converters/json-to-python</a></p> <p>So, here's what i have so far (after light corrections: declarations were weirdly ordered):</p> <pre><code> @dataclass class DocDiffCheck: data_name_description: str data_name_in_script: str data_value: object @staticmethod def from_dict(obj: Any) -&gt; 'DocDiffCheck': _data_name_description = str(obj.get(&quot;data_name_description&quot;)) _data_name_in_script = str(obj.get(&quot;data_name_in_script&quot;)) _data_value = str(obj.get(&quot;data_value&quot;)) return DocDiffCheck(_data_name_description, _data_name_in_script, _data_value) @dataclass class DataValue: hashtag_value: str hashtag_description: str hashtag_descripteurs: str hashtag_dat_collect_start: str hashtag_dat_collect_end: str @staticmethod def from_dict(obj: Any) -&gt; 'DataValue': _hashtag_value = str(obj.get(&quot;hashtag_value&quot;)) _hashtag_description = str(obj.get(&quot;hashtag_description&quot;)) _hashtag_descripteurs = str(obj.get(&quot;hashtag_descripteurs&quot;)) _hashtag_dat_collect_start = str(obj.get(&quot;hashtag_dat_collect_start&quot;)) _hashtag_dat_collect_end = str(obj.get(&quot;hashtag_dat_collect_end&quot;)) return DataValue(_hashtag_value, _hashtag_description, _hashtag_descripteurs, _hashtag_dat_collect_start, _hashtag_dat_collect_end) @dataclass class EndsIn30: data_name_in_script: str data_name_description: str data_value: List[DataValue] @staticmethod def from_dict(obj: Any) -&gt; 'EndsIn30': _data_name_in_script = str(obj.get(&quot;data_name_in_script&quot;)) 
_data_name_description = str(obj.get(&quot;data_name_description&quot;)) _data_value = [DataValue.from_dict(y) for y in obj.get(&quot;data_value&quot;)] return EndsIn30(_data_name_in_script, _data_name_description, _data_value) @dataclass class GlobalDbStat: Type: str TotalCount: int CrawledCount: int @staticmethod def from_dict(obj: Any) -&gt; 'GlobalDbStat': _Type = str(obj.get(&quot;Type&quot;)) _TotalCount = int(obj.get(&quot;TotalCount&quot;)) _CrawledCount = int(obj.get(&quot;CrawledCount&quot;)) return GlobalDbStat(_Type, _TotalCount, _CrawledCount) @dataclass class VolumétrieCategorie: Categorie: str Total: int Crawled: int @staticmethod def from_dict(obj: Any) -&gt; 'VolumétrieCategorie': _Categorie = str(obj.get(&quot;Categorie&quot;)) _Total = int(obj.get(&quot;Total&quot;)) _Crawled = int(obj.get(&quot;Crawled&quot;)) return VolumétrieCategorie(_Categorie, _Total, _Crawled) @dataclass class Hashtag: VolumétrieCategorie: List[VolumétrieCategorie] EndsIn30: List[EndsIn30] @staticmethod def from_dict(obj: Any) -&gt; 'Hashtag': _VolumétrieCategorie = [VolumétrieCategorie.from_dict(y) for y in obj.get(&quot;VolumétrieCategorie&quot;)] _EndsIn30 = [EndsIn30.from_dict(y) for y in obj.get(&quot;EndsIn30&quot;)] return Hashtag(_VolumétrieCategorie, _EndsIn30) @dataclass class Website: CorpusName: str TotalCount: int CrawlCount: int @staticmethod def from_dict(obj: Any) -&gt; 'Website': _CorpusName = str(obj.get(&quot;CorpusName&quot;)) _TotalCount = int(obj.get(&quot;TotalCount&quot;)) _CrawlCount = int(obj.get(&quot;CrawlCount&quot;)) return Website(_CorpusName, _TotalCount, _CrawlCount) @dataclass class RSN: Type: str TotalCount: int CrawlCount: int @staticmethod def from_dict(obj: Any) -&gt; 'RSN': _Type = str(obj.get(&quot;Type&quot;)) _TotalCount = int(obj.get(&quot;TotalCount&quot;)) _CrawlCount = int(obj.get(&quot;CrawlCount&quot;)) return RSN(_Type, _TotalCount, _CrawlCount) @dataclass class UGC: Plateforme: str TotalCount: int CrawlCount: int @staticmethod def from_dict(obj: Any) -&gt; 'UGC': _Plateforme =
str(obj.get(&quot;Plateforme&quot;)) _TotalCount = int(obj.get(&quot;TotalCount&quot;)) _CrawlCount = int(obj.get(&quot;CrawlCount&quot;)) return UGC(_Plateforme, _TotalCount, _CrawlCount) @dataclass class Root: global_db_stats: List[GlobalDbStat] Hashtags: List[Hashtag] Websites: List[Website] UGC: List[UGC] RSN: List[RSN] Doc_Diff_Check: List[DocDiffCheck] @staticmethod def from_dict(obj: Any) -&gt; 'Root': _global_db_stats = [GlobalDbStat.from_dict(y) for y in obj.get(&quot;global_db_stats&quot;)] _Hashtags = [Hashtag.from_dict(y) for y in obj.get(&quot;Hashtags&quot;)] _Websites = [Website.from_dict(y) for y in obj.get(&quot;Websites&quot;)] _UGC = [UGC.from_dict(y) for y in obj.get(&quot;UGC&quot;)] _RSN = [RSN.from_dict(y) for y in obj.get(&quot;RSN&quot;)] _Doc_Diff_Check = [DocDiffCheck.from_dict(y) for y in obj.get(&quot;Doc_Diff_Check&quot;)] return Root(_global_db_stats, _Hashtags, _Websites, _UGC, _RSN, _Doc_Diff_Check) </code></pre> <p>Then I used this code to get the json:</p> <pre><code>json_api_response = json.loads(strApiResponse) my_json = Root.from_dict(json_api_response) </code></pre> <p>I have this exception:</p> <pre><code>File &quot;XXXX.py&quot;, line 210, in MODULE_NAME return Root.from_dict(json_api_response) File &quot;XXXX.py&quot;, line 168, in from_dict _Hashtags = [Hashtag.from_dict(y) for y in obj.get(&quot;Hashtags&quot;)] File &quot;XXXX.py&quot;, line 168, in &lt;listcomp&gt; _Hashtags = [Hashtag.from_dict(y) for y in obj.get(&quot;Hashtags&quot;)] File &quot;XXXX.py&quot;, line 114, in from_dict _EndsIn30 = [EndsIn30.from_dict(y) for y in obj.get(&quot;EndsIn30&quot;)] TypeError: 'NoneType' object is not iterable </code></pre> <p>I know that it means that one of the keys (the traceback points at <code>EndsIn30</code>) is not found, but I don't get why.</p>
<python><json>
2023-01-03 14:45:48
1
475
8oris
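The traceback above actually ends inside `Hashtag.from_dict`: `obj.get("EndsIn30")` returned `None`, i.e. one element of the `"Hashtags"` array lacks that key. A minimal sketch of a defensive variant, using hypothetical `Item`/`Container` classes standing in for the generated ones:

```python
from dataclasses import dataclass
from typing import Any, List

@dataclass
class Item:
    name: str

    @staticmethod
    def from_dict(obj: Any) -> 'Item':
        return Item(str(obj.get("name")))

@dataclass
class Container:
    items: List[Item]

    @staticmethod
    def from_dict(obj: Any) -> 'Container':
        # obj.get(...) returns None for a missing key; "or []" keeps the list
        # comprehension from raising "'NoneType' object is not iterable"
        return Container([Item.from_dict(y) for y in obj.get("items") or []])

print(Container.from_dict({}).items)   # []
```

The same `or []` guard in each generated `from_dict` makes missing arrays parse as empty lists instead of crashing.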
74,994,901
10,430,394
Can't set matplotlib.path.Path as clip_path in matplotlib. How to provide transform to patch?
<p>I'm trying to animate a patch in matplotlib. The patch is supposed to be filled incrementally with a stroke of black while using the patch as a clip path. Essentially, I am trying to make <a href="https://github.com/parsimonhi/animCJK/tree/master/svgsKana" rel="nofollow noreferrer">these</a> svg animations from the animCJK project in matplotlib. For that I extract the bezier curve info from the svg files for the patches (clip paths) as well as the stroking paths which can be found at the bottom of the svg files.</p> <p>So what I have currently done is to copy this <a href="https://matplotlib.org/stable/gallery/images_contours_and_fields/image_clip_path.html" rel="nofollow noreferrer">matplotlib example</a> on clip paths and tried to replace the circle patch with my own patch that I create using the <a href="https://pypi.org/project/svgpath2mpl/" rel="nofollow noreferrer">svgpath2mpl</a> library. Then I would animate the filling of those patches as in the svg animations and put my own spin on them.
The problem is that I get the following error when I set my patch from the svg file as the clip path instead of the circle patch from the matplotlib example:</p> <pre><code>Traceback (most recent call last):
  File &quot;C:\Users\Chris\Desktop\quick.py&quot;, line 15, in &lt;module&gt;
    im.set_clip_path(patch)
  File &quot;C:\Users\Chris\AppData\Local\Programs\Python\Python39\lib\site-packages\matplotlib\artist.py&quot;, line 793, in set_clip_path
    self._clippath = TransformedPath(path, transform)
  File &quot;C:\Users\Chris\AppData\Local\Programs\Python\Python39\lib\site-packages\matplotlib\transforms.py&quot;, line 2708, in __init__
    _api.check_isinstance(Transform, transform=transform)
  File &quot;C:\Users\Chris\AppData\Local\Programs\Python\Python39\lib\site-packages\matplotlib\_api\__init__.py&quot;, line 92, in check_isinstance
    raise TypeError(
TypeError: 'transform' must be an instance of matplotlib.transforms.Transform, not a None
</code></pre> <p>So I need to provide <code>transform=ax.transData</code> to my patch (Path object) so that the clipping works like in the matplotlib example, but I can't seem to figure out how to do that. Can anyone tell me how to get this to work?
Here's my code.</p> <pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
import matplotlib.patches as patches
import matplotlib.cbook as cbook
from svgpath2mpl import parse_path

string = &quot;M181 359C174 359 165 363 166 371C167 382 176 389 183 396C198 412 199 435 204 455C222 538 238 623 276 700C300 745 333 786 374 816C385 824 400 826 411 818C418 814 420 805 420 797C421 780 417 762 420 745C422 720 430 696 436 671C437 666 438 657 431 654C424 652 417 658 414 664C399 686 387 710 372 731C370 734 366 735 364 732C347 718 331 702 319 683C303 658 294 629 285 601C276 570 267 538 260 506C254 477 250 448 249 419C249 409 254 399 253 389C251 379 240 375 232 370C216 363 199 359 181 359Z&quot;
patch = parse_path(string)

with cbook.get_sample_data('grace_hopper.jpg') as image_file:
    image = plt.imread(image_file)

fig, ax = plt.subplots()
im = ax.imshow(image)
im.set_clip_path(patch)
ax.axis('off')
plt.show()
</code></pre>
<python><matplotlib><svg><clip>
2023-01-03 14:45:05
0
534
J.Doe
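For the clip-path question above: `Artist.set_clip_path` accepts an optional transform as a second argument, so passing `ax.transData` alongside the bare `Path` avoids constructing a `TransformedPath` with `transform=None`. A minimal headless sketch, with a hypothetical triangular path standing in for the parsed SVG outline:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.path import Path

fig, ax = plt.subplots()
im = ax.imshow(np.zeros((10, 10)))

# hypothetical stand-in for the path returned by svgpath2mpl.parse_path
patch = Path([(1, 1), (8, 1), (4, 8), (1, 1)],
             [Path.MOVETO, Path.LINETO, Path.LINETO, Path.CLOSEPOLY])

# supplying the transform explicitly avoids
# "TypeError: 'transform' must be an instance of ... Transform, not a None"
im.set_clip_path(patch, transform=ax.transData)
```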
74,994,857
15,406,157
How to upgrade pip offline
<p>I have a machine with CentOS 7.9 and pip version <code>pip 9.0.3 from /usr/lib/python3.6/site-packages (python 3.6)</code>.</p> <p>I need to install <code>impacket</code> on my machine. But installation fails because pip is too old. So I upgrade pip with the command</p> <p><code>pip3 install --upgrade pip</code> and after that everything works perfectly fine.</p> <p>The problem is that I did this while my machine was connected to the internet. My customer's machine is not connected to the internet. So I need to update all the stuff offline.</p> <p>My plan was to:</p> <ul> <li>download 2 packages: <code>impacket</code> and the latest version of <code>pip</code>.</li> <li>send these files to the customer's machine.</li> <li>upgrade <code>pip</code> offline.</li> <li>install <code>impacket</code> offline.</li> </ul> <p>The first thing I'm stuck on is upgrading pip. Following <a href="https://stackoverflow.com/questions/32321927/offline-pip-installation">this</a> advice I tried to download pip with <code>yum</code>, using this command:</p> <p><code>yum install --downloadonly python3-pip -y</code></p> <p>But I get this result and nothing happens:</p> <pre><code>Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * base: centos.spd.co.il
 * epel: mirror.dogado.de
 * extras: centos.spd.co.il
 * updates: centos.spd.co.il
Package python3-pip-9.0.3-8.el7.noarch already installed and latest version
Nothing to do
</code></pre> <p>Or with the command <code>yum upgrade --downloadonly python3-pip -y</code> also nothing happens:</p> <pre><code>Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * base: centos.spd.co.il
 * epel: mirror.dogado.de
 * extras: centos.spd.co.il
 * updates: centos.spd.co.il
No packages marked for update
</code></pre> <p>How can I implement my plan? (or maybe there is a better plan?)</p>
<python><python-3.x><pip><yum>
2023-01-03 14:41:25
1
338
Igor_M
74,994,827
818,131
Python/Django: setuptools/flit entry_point module path not found?
<p>I have a problem that I can't solve for days. I created a python module &quot;medux_timetracker&quot; packaged as <a href="https://gdaps.readthedocs.og" rel="nofollow noreferrer">GDAPS</a> plugin, details below, using an entry point. But the entry point is not recognized - and I'm stuck. The module is installed using <code>flit install --symlink .</code> from within the medux-timetracker directory, using the venv of the main project.</p> <p>pyproject.toml</p> <pre class="lang-ini prettyprint-override"><code>[build-system]
requires = [&quot;flit_core &gt;=3.2,&lt;4&quot;]
build-backend = &quot;flit_core.buildapi&quot;

[project]
name = &quot;medux_timetracker&quot;
# ...

[tool.flit.module]
name=&quot;medux.plugins.timetracker:apps.TimeTrackerConfig&quot;  # medux.plugins are namespace modules.

[project.entry-points.&quot;medux.plugins&quot;]
timetracker = &quot;medux.plugins.timetracker&quot;
</code></pre> <p>The <code>class TimeTrackerConfig</code> in this module is basically a Django app, inheriting <code>AppConfig</code>.</p> <pre class="lang-py prettyprint-override"><code># settings.py
INSTALLED_APPS = [
    # ...
]
INSTALLED_APPS += PluginManager.find_plugins(&quot;medux.plugins&quot;)
</code></pre> <p>Here the installed apps are dynamically appended by the PluginManager's method which is this (the relevant part where GDAPS/Django loads the entry_point):</p> <pre class="lang-py prettyprint-override"><code>    @classmethod
    def find_plugins(cls, group: str) -&gt; List[str]:
        &quot;&quot;&quot;Finds plugins from setuptools entry points.

        This function is supposed to be called in settings.py after the
        INSTALLED_APPS variable....

        :param group: a dotted path where to find plugin apps. This is used as
            'group' for setuptools' entry points.
        :returns: A list of dotted app_names, which can be appended to
            INSTALLED_APPS.
        &quot;&quot;&quot;
        # ...
        cls.group = group  # =&quot;medux.plugins&quot;
        installed_plugin_apps = []
        # here iter_entry_points returns nothing, so the for loop does never execute:
        for entry_point in iter_entry_points(group=group, name=None):
            appname = entry_point.module_name
            if entry_point.attrs:
                appname += &quot;.&quot; + &quot;.&quot;.join(entry_point.attrs)
            installed_plugin_apps.append(appname)
            logger.info(&quot;Found plugin '{}'.&quot;.format(appname))
        return installed_plugin_apps
</code></pre> <p>The error message when I start the Django server:</p> <pre><code>  File &quot;/home/christian/Projekte/medux/.venv/lib/python3.10/site-packages/django/apps/config.py&quot;, line 193, in create
    import_module(entry)
  File &quot;/usr/lib/python3.10/importlib/__init__.py&quot;, line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1050, in _gcd_import
  File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1027, in _find_and_load
  File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1004, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'medux.plugins.timetracker'
</code></pre> <p>So, GDAPS finds the entry point from <code>pyproject.toml</code>, &quot;medux.plugins.timetracker&quot; is appended to INSTALLED_APPS correctly, but this module is not found. I suppose it's a PYTHONPATH problem, but can't find out why.</p> <p>in the venv, under .../lib/python3.10/site-packages, I have:</p> <pre><code>medux/
    plugins/
        timetracker -&gt; /home/christian/Projekte/medux-timetracker  # symlink
medux_timetracker-0.0.1.dist-info/
    direct_url.json
    entry_points.txt
    INSTALLER
    METADATA
    RECORD
    REQUESTED
</code></pre> <p>So the symlink to the package is there. I don't know where to go from here.</p>
<python><django><entry-point><flit>
2023-01-03 14:38:22
1
1,092
nerdoc
74,994,811
4,796,942
Import all values in .bashrc as enviroment variables
<p>I have created a <code>.bashrc</code> file using:</p> <pre><code>touch ~/.bashrc
</code></pre> <p>and I am trying to get all the variables in it to be environment variables in my current environment. I saw online that you can source into it as <code>source ~/.bashrc</code> but nothing changed when I did this (I could not access the variables), however I could run <code>cat ~/.bashrc</code> and still see the variable names as the key and the variables as the password.</p> <p>I tried to also loop through it as</p> <pre><code>import os

# open the .bashrc file in the home directory (~/)
with open('~/.bashrc') as f:
    # read the lines in the file
    lines = f.readlines()

# iterate over the lines in the file
for line in lines:
    # split the line into parts
    parts = line.split('=')
    # if the line has the correct format (key=value)
    if len(parts) == 2:
        # extract the key and value
        key, value = parts
        # remove any leading or trailing whitespace from the key and value
        key = key.strip()
        value = value.strip()
        # set the key as an environment variable with the corresponding value
        os.environ[key] = value
</code></pre> <p>but the open did not run, giving the error.</p> <pre><code>FileNotFoundError: [Errno 2] No such file or directory: '~/.bashrc'
</code></pre> <p>How can I import all the variables in my <code>.bashrc</code> file?</p>
<python><bash><environment-variables>
2023-01-03 14:37:11
2
1,587
user4933
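One reason the snippet above fails is that `open()` does not expand `~`; `os.path.expanduser` does. A sketch of a parser that also tolerates `export` prefixes and comment lines, demonstrated on a temporary file standing in for a real `~/.bashrc`:

```python
import os
import tempfile

def load_env_file(path):
    """Parse simple KEY=value / export KEY=value lines into os.environ."""
    path = os.path.expanduser(path)   # '~' is expanded by the shell, not by open()
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue                       # skip blanks and comments
            if line.startswith("export "):
                line = line[len("export "):]   # tolerate 'export KEY=value'
            key, sep, value = line.partition("=")
            if sep and key.strip():
                os.environ[key.strip()] = value.strip().strip('"').strip("'")

# demo on a temporary file instead of a real ~/.bashrc
with tempfile.NamedTemporaryFile("w", suffix=".bashrc", delete=False) as tmp:
    tmp.write('export API_KEY="secret123"\nDB_HOST=localhost\n')

load_env_file(tmp.name)
print(os.environ["API_KEY"])   # secret123
```

Note this only handles plain assignments; lines that use shell features (command substitution, variable references) would need a real shell to evaluate.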
74,994,760
4,137,061
Fuseki returns no results when called from Python vs curl or online
<p>I'm setting up Fuseki/Jena to host a persistent collection of rdf and I'd like to control uploads through a python interface using rdflib graph objects as a means of pre-filtering/mastering the content before it reaches the persistence layer.</p> <p>I've started the server, and can access data via the online SPARQL endpoint, or by <code>curl</code>-ing the store with simple command like:</p> <pre><code>curl -X POST -d &quot;query=select ?s ?p ?o where { ?s ?p ?o.}&quot; localhost:3030/models/query </code></pre> <p>Which returns multiple results, as per expected.</p> <p>It also generates the following lines in the server log:</p> <pre><code>14:21:01 INFO Fuseki :: [49] POST http://localhost:3030/models/query 14:21:01 INFO Fuseki :: [49] Query = select ?s ?p ?o where { ?s ?p ?o.} 14:21:01 INFO Fuseki :: [49] 200 OK (34 ms) </code></pre> <p>I'd like to access this sparql enpoint in python via the rdflib graph.query api - and have tried the following:</p> <pre><code>from rdflib import Graph from rdflib.plugins.stores import sparqlstore jena = sparqlstore.SPARQLStore(&quot;http://localhost:3030/models/query&quot;) g = Graph(store=jena) qr = g.query(&quot;&quot;&quot;select ?s ?p ?o where { ?s ?p ?o. 
}&quot;&quot;&quot;) [r for r in qr] </code></pre> <p>But this returns no results.</p> <p>The log entry for this action is a little longer, and looks like this:</p> <pre><code>14:22:09 INFO Fuseki :: [50] GET http://localhost:3030/models/query?query=PREFIX+xml%3A+%3Chttp%3A%2F%2Fwww.w3.org%2FXML%2F1998%2Fnamespace%3E%0APREFIX+rdf%3A+%3Chttp%3A%2F%2Fwww.w3.org%2F1999%2F02%2F22-rdf-syntax-ns%23%3E%0APREFIX+rdfs%3A+%3Chttp%3A%2F%2Fwww.w3.org%2F2000%2F01%2Frdf-schema%23%3E%0APREFIX+xsd%3A+%3Chttp%3A%2F%2Fwww.w3.org%2F2001%2FXMLSchema%23%3E%0APREFIX+xml%3A+%3Chttp%3A%2F%2Fwww.w3.org%2FXML%2F1998%2Fnamespace%3E%0APREFIX+rdf%3A+%3Chttp%3A%2F%2Fwww.w3.org%2F1999%2F02%2F22-rdf-syntax-ns%23%3E%0APREFIX+rdfs%3A+%3Chttp%3A%2F%2Fwww.w3.org%2F2000%2F01%2Frdf-schema%23%3E%0APREFIX+xsd%3A+%3Chttp%3A%2F%2Fwww.w3.org%2F2001%2FXMLSchema%23%3E%0A%0Aselect+%3Fs+%3Fp+%3Fo+where+%7B+%3Fs+%3Fp+%3Fo.+%7D&amp;default-graph-uri=Nf346bf5146514bcd8940bf9a31ae9c9c 14:22:09 INFO Fuseki :: [50] Query = PREFIX xml: &lt;http://www.w3.org/XML/1998/namespace&gt; PREFIX rdf: &lt;http://www.w3.org/1999/02/22-rdf-syntax-ns#&gt; PREFIX rdfs: &lt;http://www.w3.org/2000/01/rdf-schema#&gt; PREFIX xsd: &lt;http://www.w3.org/2001/XMLSchema#&gt; PREFIX xml: &lt;http://www.w3.org/XML/1998/namespace&gt; PREFIX rdf: &lt;http://www.w3.org/1999/02/22-rdf-syntax-ns#&gt; PREFIX rdfs: &lt;http://www.w3.org/2000/01/rdf-schema#&gt; PREFIX xsd: &lt;http://www.w3.org/2001/XMLSchema#&gt; select ?s ?p ?o where { ?s ?p ?o. } 14:22:09 INFO Fuseki :: [50] 200 OK (12 ms) </code></pre> <p>I can see that the majority of the difference here is some additional <code>PREFIX</code> notation, and that the call appears to be a get, rather than a POST. 
Additionally, there's an additional parameter at the end of the call, referencing <code>default-graph-uri</code> - I don't think I'm using this, or haven't specifically set anything up as a named default graph.</p> <p>If I try an alternate <code>curl</code> however, I can copy the GET call and get it to work by dropping the final <code>default-graph-uri</code> parameter:</p> <pre><code>curl http://localhost:3030/models/query?query=PREFIX+xml%3A+%3Chttp%3A%2F%2Fwww.w3.org%2FXML%2F1998%2Fnamespace%3E%0APREFIX+rdf%3A+%3Chttp%3A%2F%2Fwww.w3.org%2F1999%2F02%2F22-rdf-syntax-ns%23%3E%0APREFIX+rdfs%3A+%3Chttp%3A%2F%2Fwww.w3.org%2F2000%2F01%2Frdf-schema%23%3E%0APREFIX+xsd%3A+%3Chttp%3A%2F%2Fwww.w3.org%2F2001%2FXMLSchema%23%3E%0APREFIX+xml%3A+%3Chttp%3A%2F%2Fwww.w3.org%2FXML%2F1998%2Fnamespace%3E%0APREFIX+rdf%3A+%3Chttp%3A%2F%2Fwww.w3.org%2F1999%2F02%2F22-rdf-syntax-ns%23%3E%0APREFIX+rdfs%3A+%3Chttp%3A%2F%2Fwww.w3.org%2F2000%2F01%2Frdf-schema%23%3E%0APREFIX+xsd%3A+%3Chttp%3A%2F%2Fwww.w3.org%2F2001%2FXMLSchema%23%3E%0A%0Aselect+%3Fs+%3Fp+%3Fo+where+%7B+%3Fs+%3Fp+%3Fo.+%7D </code></pre> <p>Which returns the results I'd expect to see using a modified call from rdflib's api.</p> <p>So my question is, how do I get rdflib to issue its queries without the breaking <code>default-graph-uri</code> parameter - OR - how should I configure Fuseki to handle this more <s>gracefully</s> in line with my expectations?</p>
<python><fuseki><rdflib>
2023-01-03 14:33:35
1
11,127
Thomas Kimber
74,994,614
720,877
How to implement simple terminal line editing similar to input() but handling escape character
<p>How can I implement a Python function to run on Unix systems (minimally, just Linux) that is similar to builtin <code>input()</code>, but supports using the escape key to cancel inputting text? So:</p> <ul> <li>Enter a single line of text at the command line (multiple lines would be OK also)</li> <li>with simple line editing (left/right arrow keys, backspace to delete a character, control-a to jump to start of line, control-e to jump to end of line, control-w to delete a word)</li> <li>(and allowing pastes from primary or clipboard selection in X/wayland -- I guess this requires no special support)</li> <li>submit the text by hitting the return key</li> <li>or hit the escape key to exit and cancel (it would be OK but not ideal if I couldn't tell the difference between this and the user entering an empty string and hitting return)</li> </ul> <p>How can I achieve that?</p> <p>I've tried curses, but that is focused on whole-screen input -- it seems hard to implement something that does not clear the screen.</p> <p>I've tried termios, but I don't know how to implement the backspace key for example, and I suspect I might end up wanting slightly more functionality than is immediately obvious -- for example control-w, which on my linux machine <code>input()</code> implements to delete a whole word. I don't need to reproduce every little detail of input()'s behaviour, but a few things like that (see the list above).</p>
<python><unix><terminal>
2023-01-03 14:20:14
1
2,820
Croad Langshan
74,994,560
11,901,834
Mock pandas read_csv using pytest
<p>I have a python function that takes a BigQuery bucket name &amp; file path as inputs and does the following:</p> <ul> <li>Check if the bucket exists</li> <li>Check if the file is in the bucket</li> <li>Read the file into a dataframe and return the dataframe</li> </ul> <p>The function looks something like this:</p> <pre><code>def function_to_test(client, bucket_name, delimiter, full_path=None, header=None):
    try:
        bucket = client.get_bucket(bucket_name)
        assert bucket is not None
    except (gcp_exceptions.GoogleCloudError, AssertionError) as ex:
        raise AirflowFailException(f&quot;Failed to access bucket: {bucket_name}&quot;) from ex

    try:
        blob = bucket.get_blob(full_path)
        assert blob is not None
        if blob.size == 0:
            print(f'File is empty')
            return None
        df = pd.read_csv(f&quot;gs://{bucket_name}/{full_path}&quot;, sep=delimiter, dtype='str', header=header)
        return df
    except (gcp_exceptions.GoogleCloudError, AssertionError) as ex:
        raise AirflowFailException(f&quot;Failed to retrieve blob from bucket: {bucket_name}&quot;) from ex
</code></pre> <p>I am trying to write pytest tests for this function and so far I have the following:</p> <pre><code>def test_func(mocker, generic_df):
    bucket_name = 'test_bucket'
    full_path = 'test_path'
    mock_client = mocker.patch('google.cloud.storage.Client', autospec=True)
    mock_bucket = mock_client.get_bucket(bucket_name)
    actual_df = function_to_test(client=mock_client, bucket_name=bucket_name, delimiter=',', full_path=full_path, header=0)
</code></pre> <p>Currently, the above mocking of the bucket is sufficient to pass the bucket requirements in the function.</p> <p>However, I am unable to figure out how to mock the <code>read_csv</code> functionality and as such, the dataframe creation is failing.</p> <p>Is there a way I can mock this function so that the dataframe is mocked too?</p>
<python><pytest>
2023-01-03 14:15:45
0
1,579
nimgwfc
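For the `read_csv` part, one option is to patch `pandas.read_csv` with `unittest.mock` (pytest-mock's `mocker.patch` takes the same target string). A sketch, with a hypothetical `load_frame` standing in for the CSV-reading part of `function_to_test`:

```python
import pandas as pd
from unittest import mock

def load_frame(path):
    # hypothetical stand-in for the pd.read_csv call inside function_to_test
    return pd.read_csv(path)

fake_df = pd.DataFrame({"a": [1, 2]})

# Patch the name where the code under test looks it up; load_frame resolves
# pd.read_csv on the pandas module at call time, so "pandas.read_csv" works here.
# In real code the target would usually be "your_module.pd.read_csv".
with mock.patch("pandas.read_csv", return_value=fake_df) as fake_read:
    result = load_frame("gs://test_bucket/test_path")

fake_read.assert_called_once_with("gs://test_bucket/test_path")
print(result.equals(fake_df))   # True
```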
74,994,530
12,559,770
pivot table in pandas
<p>I have a dataframe such as:</p> <pre><code>Groups Species Value
G1     SP1     YES
G1     SP2     YES
G1     SP3     NO
G1     SP4     YES
G2     SP1     NO
G2     SP2     NO
G2     SP4     YES
G3     SP1     YES
G3     SP2     YES
G4     SP1     NO
</code></pre> <p>And I would simply like to pivot the table such as:</p> <pre><code>Species G1  G2  G3  G4
SP1     YES NO  YES NO
SP2     YES NO  YES NA
SP3     NO  NA  NA  NA
SP4     YES YES NA  NA
</code></pre> <p>So far I tried:</p> <pre><code>df.pivot(columns='Groups', index='Species', values='Value')
</code></pre> <p>But I get:</p> <pre><code>ValueError: Index contains duplicate entries, cannot reshape
</code></pre>
<python><pandas>
2023-01-03 14:13:25
0
3,442
chippycentra
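`pivot` raises that `ValueError` as soon as some (Species, Groups) pair occurs more than once in the data; `pivot_table` with an explicit aggregation handles duplicates. A sketch, with a deliberately duplicated row since the sample shown has none:

```python
import pandas as pd

df = pd.DataFrame({
    "Groups":  ["G1", "G1", "G1", "G2", "G2", "G2"],
    "Species": ["SP1", "SP2", "SP3", "SP1", "SP1", "SP4"],  # (G2, SP1) occurs twice
    "Value":   ["YES", "YES", "NO", "NO", "NO", "YES"],
})

# pivot() refuses duplicate (index, column) pairs; pivot_table() with an
# aggregation function does not -- "first" keeps the first occurrence.
wide = df.pivot_table(index="Species", columns="Groups", values="Value", aggfunc="first")
print(wide)
```

Missing combinations come out as NaN, matching the desired NA cells.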
74,994,508
4,391,249
How to safely use the file system as a sort of shared memory in Python?
<p>TLDR: Script A creates a directory and writes files in it. Script B periodically checks that directory. How does script B know when script A is done writing so that it can access the files?</p> <p>I have a Python script (call it the render server) that receives requests to generate images and associated data. I need to run a separate Python application (call it the consumer) that makes use of this data. The consumer does not know when new data will be available. Ideally it should not have to know of the presence of script A, just that data somehow becomes available.</p> <p>My quick and dirty solution is to have an <code>outputs</code> directory known to both Python scripts. In that directory, the render server creates timestamped directories and saves several files within those directories.</p> <p>The render server does something like:</p> <pre><code>os.makedirs('outputs/' + timestamped_subdir)
# Write files into that directory.
</code></pre> <p>The consumer checks that directory kind of like:</p> <pre class="lang-py prettyprint-override"><code>dirs = set()
while True:
    new_dirs = set(glob('outputs/*')).difference(dirs)
    if not len(new_dirs):
        continue
    # Do stuff with the contents of the latest new directory.
</code></pre> <p>The problem is that the consumer checks the contents of the directory before the render server finishes writing (and this is evident in a <code>FileNotFoundError</code>). I tried to fix this by making the render server do:</p> <pre class="lang-py prettyprint-override"><code>os.makedirs('temp')
# Write files into that directory.
shutil.copytree('temp', 'outputs/' + timestamped_subdir)
</code></pre> <p>But the consumer is still able to know of the presence of the <code>timestamped_subdir</code> before the files within are done being copied (again there's a <code>FileNotFoundError</code>).
What's one &quot;right&quot; way to do what I'm trying to achieve?</p> <p>Note: While writing this I realised I should do <code>shutil.move</code> instead of <code>shutil.copytree</code> and that seems to have fixed it. But I'm still not sure enough of the underlying mechanisms of that operation to know for sure that it works correctly.</p>
<python><multiprocessing>
2023-01-03 14:11:51
1
3,347
Alexander Soare
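The `shutil.move` observation at the end of the question is the key point: a same-filesystem move of a directory is a single `rename(2)` call, so the consumer can never observe a half-written directory, whereas `copytree` creates the destination first and then copies file by file. A sketch with a hypothetical `publish` helper (both directories live under the same temp filesystem here):

```python
import os
import shutil
import tempfile
import uuid

outputs = tempfile.mkdtemp()          # stands in for the shared "outputs" dir

def publish(files):
    # Write into a private staging dir, then move it into place in one step.
    # When staging and outputs are on the same filesystem, shutil.move is a
    # rename(2), which is atomic: the consumer sees either no directory at
    # that path or a complete one, never a partial one.
    staging = tempfile.mkdtemp()
    for name, data in files.items():
        with open(os.path.join(staging, name), "w") as f:
            f.write(data)
    final = os.path.join(outputs, uuid.uuid4().hex)
    shutil.move(staging, final)
    return final

path = publish({"result.txt": "done"})
print(os.listdir(path))   # ['result.txt']
```

If staging and outputs could be on different filesystems, `shutil.move` falls back to copy-and-delete and the atomicity is lost, so the staging directory should be created next to `outputs`.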
74,994,477
5,852,692
Checking the return value of API-Functions Python
<p>I am using a C++ API via Python using <code>ctypes.CDLL</code>:</p> <pre><code>api = CDLL(f'{path}/dll_name.dll')
</code></pre> <p>This api has functions like open, remove, delete, select, read, write, etc... Whenever I use these functions it returns something. When I call these api functions via debugger one by one, I can of course see the return value.</p> <p>However, since the return value of these functions is just some information about what happened after calling the function, it is not really necessary to save it in a variable; I simply call them and proceed.</p> <p>Return values are integers between 0 and 15:</p> <pre><code>0: everything okay
1: license missing
2: wrong name
...
</code></pre> <p>I would like to call 3 of these functions one after another, while also checking that their return value is 0, and otherwise raise an error. The simple way would be something like the following, but it is not that elegant, since I have to rewrite the same thing lots of times:</p> <pre><code>status = api.select('something')
if not status:
    pass
else:
    raise SomeError(status)

status = api.open('something')
if not status:
    pass
else:
    raise SomeError(status)
...
</code></pre> <p>Is there a way to handle this in a better way?</p>
<python><error-handling><ctypes>
2023-01-03 14:09:53
1
1,588
oakca
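One way to avoid repeating the check is a small wrapper that maps any nonzero status to an exception. A sketch with hypothetical stand-ins for the API functions and an assumed partial code table:

```python
API_ERRORS = {1: "license missing", 2: "wrong name"}  # hypothetical subset of the 0..15 codes

class ApiError(RuntimeError):
    pass

def check(func, *args, **kwargs):
    """Call an API function and raise ApiError on a nonzero status."""
    status = func(*args, **kwargs)
    if status:  # 0 means everything okay
        raise ApiError(f"{func.__name__} failed: {API_ERRORS.get(status, status)}")
    return status

# hypothetical stand-ins for api.select / api.open
def fake_select(arg):
    return 0

def fake_open(arg):
    return 1

check(fake_select, "something")       # returns 0, no exception
try:
    check(fake_open, "something")
except ApiError as exc:
    print(exc)                        # fake_open failed: license missing
```

The three real calls then become `check(api.select, 'something')`, `check(api.open, 'something')`, and so on, with the status handling written once.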
74,994,476
2,196,409
passing **kwargs to a de-referencing function
<p>Hope the title is conveying the correct information.</p> <p>My problem is that I don't understand why the call <code>kwarg_function(some_func, a=1, b=2, c=3)</code> fails. I would have thought that as 'c' isn't referenced within <code>some_func()</code> it would simply be ignored. Can anyone explain why 'c' isn't simply ignored?</p> <pre><code>def kwarg_function(function, **kwargs):
    print(kwargs)
    function(**kwargs)


def some_func(a, b):
    print(f&quot;type: {type(a)} values: {a}&quot;)
    print(f&quot;type: {type(b)} values: {b}&quot;)


kwarg_function(some_func, a=1, b=2)        # called successfully
kwarg_function(some_func, a=1, b=2, c=3)   # fails with unexpected keyword arg 'c'
</code></pre>
<python><python-3.x>
2023-01-03 14:09:49
2
1,173
eklektek
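Python itself never ignores extra keywords: `function(**kwargs)` unpacks every key, and `some_func` declares neither a `c` parameter nor a `**kwargs` catch-all, so the call fails. If the forwarding function should drop unknown keywords, it can filter by the callee's signature. A sketch:

```python
import inspect

def kwarg_function(function, **kwargs):
    # keep only the keyword arguments that `function` actually declares
    accepted = inspect.signature(function).parameters
    filtered = {k: v for k, v in kwargs.items() if k in accepted}
    return function(**filtered)

def some_func(a, b):
    return a + b

print(kwarg_function(some_func, a=1, b=2, c=3))   # 3 -- 'c' is silently dropped
```

Note this filtering is too strict if the callee itself takes `**kwargs`; in that case every keyword should be forwarded unchanged.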
74,994,161
159,072
Is it possible to use two separate event handler functions on the server side?
<p>In the supplied source code,</p> <ol> <li><p>The client-side has two event-handlers: <code>connect</code> and <code>server_to_client</code>. When the page loads for the first time, the <code>connect</code> emits texts: <code>hello!</code> and <code>world!</code>.</p> </li> <li><p>Then, on the server side, the function <code>server_to_client()</code> received this message, prints it on the console, and subsequently emits another message <code>received from the server</code> to the client.</p> </li> <li><p>Finally, on the client side, the event-handler <code>server_to_client</code> prints the server side message to an H2-tag.</p> </li> </ol> <p>As you can see, two functions are working at the client side, and only one function is working at the server side.</p> <hr /> <p><strong>On the server side</strong>, the same function is handling <code>client_to_server</code> event and raising <code>server_to_client</code> event.</p> <p>Say, I want to use two different functions on the server side. I.e. one function will print <code>hello! 
world!</code>, and a different function will emit <code>received from the server</code> to the client.</p> <p>Is it possible?</p> <hr /> <p><strong>server_sends_client_receives.html</strong></p> <pre><code>&lt;!DOCTYPE HTML&gt; &lt;html&gt; &lt;head&gt; &lt;title&gt;Long task&lt;/title&gt; &lt;script src=&quot;https://code.jquery.com/jquery-3.6.3.js&quot;&gt;&lt;/script&gt; &lt;script src=&quot;https://cdn.socket.io/4.5.4/socket.io.min.js&quot;&gt;&lt;/script&gt; &lt;script type=&quot;text/javascript&quot; charset=&quot;utf-8&quot;&gt; $(document).on('click', '.widget input', function (event) { namespace = '/test'; var socket = io(namespace); socket.on('connect', function() { $('#messages').append('&lt;br/&gt;' + $('&lt;div/&gt;').text('Requesting task to run').html()); ////myText = $(&quot;messages&quot;).text() socket.emit('client_to_server', {'hello': 'hello!', 'world': 'world!'}); }); socket.on('server_to_client', function(msg, cb) { $('#messages').text(msg.data); if (cb) cb(); }); event.preventDefault(); }); &lt;/script&gt; &lt;/head&gt; &lt;body&gt; &lt;div class=&quot;widget&quot;&gt; &lt;input type=&quot;submit&quot; value=&quot;Click me&quot; /&gt; &lt;/div&gt; &lt;h3&gt;Messages&lt;/h3&gt; &lt;H2 id=&quot;messages&quot; &gt;xxx yyy zzz&lt;/H2&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <p><strong>server_sends_client_receives.py</strong></p> <pre><code>from flask import Flask, render_template from flask_socketio import SocketIO, emit, disconnect from time import sleep async_mode = None app = Flask(__name__) socketio_obj = SocketIO(app, async_mode=async_mode) @app.route('/') def index(): return render_template('server_sends_client_receives.html', sync_mode=socketio_obj.async_mode) @socketio_obj.on('client_to_server', namespace='/test') def server_to_client(arg1): try: print('received from the client : ', arg1['hello']) print('received from the client : ', arg1['world']) emit('server_to_client', {'data': 'received from the server'}) disconnect() except 
Exception as ex: print(ex) if __name__ == '__main__': socketio_obj.run(app, host='0.0.0.0', port=80, debug=True) </code></pre>
<python><flask><flask-socketio>
2023-01-03 13:40:16
1
17,446
user366312
74,993,968
5,896,319
How to check user is authorised or not in urls.py?
<p>I'm using a library for creating several calls in the front end but I have a problem. The library does not have authenticated user control, which is a severe problem, and also I cannot change the library for some reason.</p> <p>Is there a way to control the user login in urls.py?</p> <p><strong>urls.py</strong></p> <pre><code>from drf_auto_endpoint.router import router ... path('api/v1/', include(router.urls)), </code></pre>
<python><django>
2023-01-03 13:24:06
1
680
edche
74,993,877
6,485,881
Different behavior of apply(str) and astype(str) for datetime64[ns] pandas columns
<p>I'm working with datetime information in pandas and wanted to convert a bunch of <code>datetime64[ns]</code> columns to <code>str</code>. I noticed a different behavior from the two approaches that I expected to yield the same result.</p> <p>Here's a <a href="https://stackoverflow.com/help/minimal-reproducible-example">MCVE</a>.</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd

# Create a dataframe with dates according to ISO8601
df = pd.DataFrame({&quot;dt_column&quot;: [&quot;2023-01-01&quot;, &quot;2023-01-02&quot;, &quot;2023-01-02&quot;]})

# Convert the strings to datetimes
# (I expect the time portion to be 00:00:00)
df[&quot;dt_column&quot;] = pd.to_datetime(df[&quot;dt_column&quot;])

df[&quot;str_from_astype&quot;] = df[&quot;dt_column&quot;].astype(str)
df[&quot;str_from_apply&quot;] = df[&quot;dt_column&quot;].apply(str)

print(df)
print()
print(&quot;Datatypes of the dataframe&quot;)
print(df.dtypes)
</code></pre> <p><strong>Output</strong></p> <pre><code>   dt_column str_from_astype       str_from_apply
0 2023-01-01      2023-01-01  2023-01-01 00:00:00
1 2023-01-02      2023-01-02  2023-01-02 00:00:00
2 2023-01-02      2023-01-02  2023-01-02 00:00:00

Datatypes of the dataframe
dt_column          datetime64[ns]
str_from_astype            object
str_from_apply             object
dtype: object
</code></pre> <p>If I use <code>.astype(str)</code> the time information is lost and when I use <code>.apply(str)</code> the time information is retained (or inferred).</p> <p>Why is that?</p> <p>(Pandas v1.5.2, Python 3.9.15)</p>
<python><pandas><datetime>
2023-01-03 13:15:49
1
13,322
Maurice
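The difference comes from which formatter runs: `astype(str)` goes through pandas' own datetime formatting, which drops the time component when it is midnight for every value, while `apply(str)` calls `str()` on each individual `Timestamp`, which always prints it. When the exact text matters, `dt.strftime` makes the format explicit either way. A small sketch:

```python
import pandas as pd

s = pd.to_datetime(pd.Series(["2023-01-01", "2023-01-02"]))

# astype(str): pandas' vectorized formatter, minimal representation
# apply(str): str(Timestamp), always includes the time
# dt.strftime: the format is spelled out, so there is no surprise
date_only = s.dt.strftime("%Y-%m-%d")
with_time = s.dt.strftime("%Y-%m-%d %H:%M:%S")

print(date_only[0])   # 2023-01-01
print(with_time[0])   # 2023-01-01 00:00:00
```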
74,993,869
2,263,683
Nested curly brackets in Psycopg2 SQL Composition query
<p>I'm trying to create a query using <code>Psycopg2</code>'s <a href="https://www.psycopg.org/docs/sql.html" rel="nofollow noreferrer">SQL String Composition</a>, in which I need to use curly brackets inside my query to update a key value in a jsonb column. Something like this:</p> <pre><code>update myschema.users set data = jsonb_set(data, '{someId}', '100')
</code></pre> <p>This is how I'm trying to write this query using an SQL composition string in Python:</p> <pre><code>statement = SQL(
    &quot;UPDATE {schema}.{table} set data = jsonb_set(data, '{{key}}', '{value}') {where};&quot;
).format(
    schema=Identifier(schema_var),
    table=Identifier(table_var),
    key=SQL(id_key),
    value=SQL(id_value),
    where=SQL(where),
)
</code></pre> <p>But by running this, a new key called <code>key</code> will be added in the jsonb value. And if I try to run it with just one pair of curly brackets like this:</p> <pre><code>statement = SQL(
    &quot;UPDATE {schema}.{table} set data = jsonb_set(data, '{key}' ....&quot;
    # The rest is the same
</code></pre> <p>I get this error:</p> <blockquote> <p>Array value must start with &quot;{&quot; or dimension information</p> </blockquote> <p>How can I fix this?</p>
<python><postgresql><psycopg2>
2023-01-03 13:15:09
2
15,775
Ghasem
74,993,391
1,806,392
Use a list of filters within polars
<p>Is there a way to filter a polars DataFrame by multiple conditions?</p> <p>This is my use case and how I currently solve it, but I wonder how I would solve it if my list of dates were longer:</p> <pre><code>dates = [&quot;2018-03-25&quot;, &quot;2019-03-31&quot;, &quot;2020-03-29&quot;] timechange_forward = [(datetime.strptime(x+&quot;T02:00&quot;, '%Y-%m-%dT%H:%M'), datetime.strptime(x+&quot;T03:01&quot;, '%Y-%m-%dT%H:%M')) for x in dates] df.filter( pl.col(&quot;time&quot;).is_between(*timechange_forward[0]) | pl.col(&quot;time&quot;).is_between(*timechange_forward[1]) | pl.col(&quot;time&quot;).is_between(*timechange_forward[2]) ) </code></pre>
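One generic way to handle an arbitrary-length list of conditions is to fold them together with `functools.reduce` and `operator.or_`; the same pattern applies to polars expressions, since they overload `|`. A stdlib-only sketch of the folding step:

```python
from functools import reduce
import operator

def any_of(conditions):
    """OR together an arbitrary-length list of conditions.

    With polars this would look like (sketch, assuming `pl` and the
    `timechange_forward` list from the question):
        df.filter(reduce(operator.or_,
                         [pl.col("time").is_between(*t) for t in timechange_forward]))
    """
    return reduce(operator.or_, conditions)

print(any_of([False, True, False]))  # True
print(any_of([False, False]))        # False
```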
<python><dataframe><python-polars>
2023-01-03 12:29:10
3
2,314
nik
74,993,303
11,170,350
How to remove box overlapping 90% with other bounding box
<p>I have list of bounding boxes. When I plot them, there are some boxes overlapping with other. How can I identify them?</p> <p>Here is attached example</p> <p><a href="https://i.sstatic.net/eSVv9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eSVv9.png" alt="enter image description here" /></a></p> <p>Here we can see box 3,4 is overlapping, 10,11 is overlapping and box 7 is inside box 6. So I want to remove box which are 80% overlapping or 80% inside other box.</p> <p>Here are the coordinates</p> <pre><code>boxes=[(0, (156.52566528320312, 411.3934326171875, 508.0946350097656, 445.0401611328125)), (1, (153.34573364257812, 447.56744384765625, 1044.3194580078125, 612.4976196289062)), (2, (150.6321258544922, 662.0474243164062, 1076.75439453125, 899.3271484375)), (3, (154.38674926757812, 945.8661499023438, 1060.038330078125, 1026.8682861328125)), (4, (138.6205596923828, 951.3151245117188, 1035.56884765625, 1027.590087890625)), (5, (1245.50048828125, 410.4453430175781, 1393.0701904296875, 445.3376770019531)), (6, (1240.206787109375, 456.7169189453125, 2139.934326171875, 659.1046752929688)), (7, (1236.478759765625, 568.0098876953125, 2145.948486328125, 654.7606201171875)), (8, (1244.784912109375, 702.7620239257812, 2121.079345703125, 736.1748046875)), (9, (1244.885986328125, 746.2633666992188, 2151.534423828125, 991.8198852539062)), (10, (1251.84814453125, 1031.8487548828125, 2134.333251953125, 1153.9320068359375)), (11, (1254.38330078125, 1035.0196533203125, 2163.969970703125, 1153.2939453125))] </code></pre> <p>Here is the code, I used to generate above image</p> <pre><code>import cv2 import matplotlib.pyplot as plt import numpy as np img = np.ones([1654,2339,3],dtype=np.uint8)*255 for i in boxes: box=[int(i) for i in i[1]] image = cv2.rectangle(img, (box[0],box[1]), (box[2],box[3]), (0,0,0), 5) cv2.putText( img = image, text = str(i[0]), org = (box[0]+int(np.random.randint(0, high=500, size=1)),box[1]), fontFace = cv2.FONT_HERSHEY_DUPLEX, fontScale = 
3.0, color = (0, 0, 0), thickness = 3 ) plt.imshow(img) </code></pre> <p>The output I want is boxes 0, 1, 2, 3 or 4 (the larger one), 5, 6, 8, 9, 10 or 11 (the larger one).</p> <p>The solution I found works, but not for all cases. Here is my solution:</p> <pre><code>#x_1, y_1, x_2, y_2 for i in range(len(boxes)-1): x_min_1,y_min_1,x_max_1,y_max_1=boxes[i][1][0],boxes[i][1][1],boxes[i][1][2],boxes[i][1][3] x_min_2,y_min_2,x_max_2,y_max_2=boxes[i+1][1][0],boxes[i+1][1][1],boxes[i+1][1][2],boxes[i+1][1][3] box_1_in_box_2 = ((x_max_2&gt; x_min_1 &gt;= x_min_2) or \ (x_max_2&gt;= x_max_1 &gt;x_min_2)) and \ ((y_max_2&gt; y_min_1 &gt;= y_min_2) or \ (y_max_2&gt;= y_max_1 &gt; y_min_2)) box_2_in_box_1 = ((x_max_1&gt; x_min_2 &gt;= x_min_1) or (x_max_1&gt;= x_max_2 &gt;x_min_1)) and ((y_max_1&gt; y_min_2 &gt;= y_min_1) or (y_max_1&gt;= y_max_2 &gt; y_min_1)) overlap = box_1_in_box_2 or box_2_in_box_1 print(i,overlap) </code></pre>
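One common way to express "80% inside" is intersection area divided by the area of the smaller box; when that ratio exceeds the threshold, keep only the larger box. A self-contained sketch (the threshold value and the tie-breaking rule for equal areas are assumptions):

```python
def intersection_area(a, b):
    # a, b are (x_min, y_min, x_max, y_max)
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(0.0, w) * max(0.0, h)

def area(box):
    return (box[2] - box[0]) * (box[3] - box[1])

def filter_boxes(boxes, threshold=0.8):
    """Drop a box when >= `threshold` of the smaller box lies inside another box."""
    keep = []
    for i, (idx_a, a) in enumerate(boxes):
        drop = False
        for j, (idx_b, b) in enumerate(boxes):
            if i == j:
                continue
            smaller = min(area(a), area(b))
            if smaller and intersection_area(a, b) / smaller >= threshold:
                # keep only the larger of the overlapping pair
                if area(a) < area(b) or (area(a) == area(b) and i > j):
                    drop = True
                    break
        if not drop:
            keep.append((idx_a, a))
    return keep

boxes = [(0, (0, 0, 10, 10)), (1, (1, 1, 9, 9)), (2, (20, 20, 30, 30))]
print([i for i, _ in filter_boxes(boxes)])  # [0, 2] -- box 1 is inside box 0
```

Unlike the pairwise loop in the question, this compares every box against every other box, not only against its immediate neighbour in the list.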
<python><numpy><opencv><non-maximum-suppression>
2023-01-03 12:21:56
3
2,979
Talha Anwar
74,993,236
13,839,945
Make function of class unusable after usage
<p>I want to create a one-time usage method in my class. It should be possible to call the function one time, use a variable of the class, remove the variable and also remove the function. If the method is called afterwards, it should raise an error that this method does not exist. I've created a workaround which kind of achieves what I want, but the traceback is still not perfect, since the method still exists but raises the correct error.</p> <pre class="lang-py prettyprint-override"><code>class MyClass: def __init__(self): self.a = 1 def foo(self): if hasattr(self, 'a'): delattr(self, 'a') else: raise AttributeError('foo') </code></pre> <p>This gives</p> <pre class="lang-py prettyprint-override"><code>Test = MyClass() Test.foo() Test.foo() </code></pre> <p>output:</p> <pre><code>--------------------------------------------------------------------------- AttributeError Traceback (most recent call last) /Users/path in &lt;cell line: 3&gt;() 1 Test = MyClass() 2 Test.foo() ----&gt; 3 Test.foo() /Users/path in MyClass.foo(self) 6 delattr(self, 'a') 7 else: ----&gt; 8 raise AttributeError('foo') AttributeError: foo </code></pre> <p>This is the closest I got.</p> <p>Edit:</p> <p>Use case: The use case for this is a bit complicated, since it is completely out of context. But the idea is that I initialize a lot of these classes and need a one-time read-out of some information directly afterwards. More specifically, the class is splitting data in a specific way via the <code>__getitem__</code> method, which is most efficient. But I still need the full index list used, which should be the output of the one-time function.</p> <p>I know this is not necessary, I could just never use the method again and everything is fine, but it feels odd having a method that is actually not working anymore. So I am just interested in whether this is somewhat possible.</p>
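For a cleaner traceback, one option is to shadow the method on the instance after the first call, so the second lookup hits a raising stub instead of re-entering `foo`. A sketch:

```python
class MyClass:
    def __init__(self):
        self.a = 1

    def foo(self):
        value = self.a
        del self.a

        def _gone(*args, **kwargs):
            raise AttributeError(
                f"{type(self).__name__!r} object has no attribute 'foo'")

        # Shadow the bound method on this instance only; other instances
        # of MyClass keep a working foo().
        self.foo = _gone
        return value


obj = MyClass()
print(obj.foo())  # 1
try:
    obj.foo()
except AttributeError as e:
    print(e)
```

The traceback now ends inside `_gone` rather than in the body of `foo`, though `hasattr(obj, 'foo')` still reports `True` since the stub is itself an attribute.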
<python>
2023-01-03 12:16:09
3
341
JD.
74,993,173
12,810,223
The included URLconf does not appear to have any patterns in it. Error in Django
<pre><code>The included URLconf '&lt;module 'myapp.urls' from 'C:\\Users\\Hp\\Desktop\\Python\\Django Projects\\myproject\\myapp\\urls.py'&gt;' does not appear to have any patterns in it. If you see the 'urlpatterns' variable with valid patterns in the file then the issue is probably caused by a circular import. </code></pre> <p>This is the error that I am getting while building my django app.</p> <p>Here is urls.py from myapp -</p> <pre><code>from django.urls import path from . import views urlpatterns=[ path('', views.index, name='index') ] </code></pre> <p>Here is views.py -</p> <pre><code>from django.shortcuts import render from django import HttpResponse # Create your views here. def index(request): return HttpResponse('&lt;h1&gt;Hey, Welcome&lt;/h1&gt;') </code></pre> <p>this is from urls.py from myproject-</p> <pre><code>&quot;&quot;&quot;myproject URL Configuration The `urlpatterns` list routes URLs to views. For more information please see: https://docs.djangoproject.com/en/4.1/topics/http/urls/ Examples: Function views 1. Add an import: from my_app import views 2. Add a URL to urlpatterns: path('', views.home, name='home') Class-based views 1. Add an import: from other_app.views import Home 2. Add a URL to urlpatterns: path('', Home.as_view(), name='home') Including another URLconf 1. Import the include() function: from django.urls import include, path 2. Add a URL to urlpatterns: path('blog/', include('blog.urls')) &quot;&quot;&quot; from django.contrib import admin from django.urls import path, include urlpatterns = [ path('admin/', admin.site.urls), path('', include('myapp.urls')), ] </code></pre>
<python><django>
2023-01-03 12:09:24
1
1,874
Shreyansh Sharma
74,993,119
498,504
keras always return same values in a Human horses CNN model example
<p>I'm working on a CNN model with Keras for Human vs Horses dataset to predict some images.</p> <p>with following codes I build the model and save in a file:</p> <pre><code>import tensorflow as tf from tensorflow.keras.preprocessing.image import ImageDataGenerator from keras.optimizers import RMSprop training_dir = 'horse-or-human/training' train_datagen = ImageDataGenerator( rescale=1/255, rotation_range=40, width_shift_range= 0.2, height_shift_range= 0.2, shear_range=0.2, zoom_range= 0.2, horizontal_flip= True, fill_mode='nearest' ) train_generator = train_datagen.flow_from_directory(training_dir , target_size=(300,300) , class_mode='binary') model = tf.keras.models.Sequential([ tf.keras.layers.Conv2D(16 , (3,3), activation=tf.nn.relu , input_shape = (300,300,3)), tf.keras.layers.MaxPooling2D(2,2), tf.keras.layers.Conv2D(32 , (3,3), activation=tf.nn.relu), tf.keras.layers.MaxPooling2D(2,2), tf.keras.layers.Conv2D(64 , (3,3), activation=tf.nn.relu), tf.keras.layers.MaxPooling2D(2,2), tf.keras.layers.Conv2D(64 , (3,3), activation=tf.nn.relu), tf.keras.layers.MaxPooling2D(2,2), tf.keras.layers.Conv2D(64 , (3,3), activation=tf.nn.relu), tf.keras.layers.MaxPooling2D(2,2), tf.keras.layers.Flatten(), tf.keras.layers.Dense(512 ,activation=tf.nn.relu ), tf.keras.layers.Dense(1, activation = tf.nn.sigmoid) ]) model.compile(optimizer = RMSprop(learning_rate = 0.001) , metrics=['accuracy'] , loss='binary_crossentropy' ) validation_dir = 'horse-or-human/validation' validation_datagen = ImageDataGenerator(rescale=1/255) validation_generator = validation_datagen.flow_from_directory( validation_dir , target_size=(300,300) , class_mode='binary' ) model.fit(train_generator , epochs= 15 ,validation_data=validation_generator) model.save('human-horses-model.h5') </code></pre> <p>And this part of my code that using that model to predict s specific image :</p> <pre><code>import tensorflow as tf from ipyfilechooser import FileChooser import keras.utils as image import numpy as np model 
= tf.keras.models.load_model('human-horses-model.h5') fc = FileChooser() display(fc) img = image.load_img(fc.selected , target_size=(300,300)) img = image.img_to_array(img) img /= 255. img = np.expand_dims(img , axis=0) output = model.predict(img) if output[0]&gt; 0.5 : print('selected Image is a Human') else : print('selected Image is a Horses') </code></pre> <p>And following is output of each epochs:</p> <pre><code>Found 256 images belonging to 2 classes. Epoch 1/15 33/33 [==============================] - 83s 2s/step - loss: 0.7800 - accuracy: 0.5686 - val_loss: 0.6024 - val_accuracy: 0.5859 Epoch 2/15 33/33 [==============================] - 73s 2s/step - loss: 0.6430 - accuracy: 0.6777 - val_loss: 0.8060 - val_accuracy: 0.5586 Epoch 3/15 33/33 [==============================] - 77s 2s/step - loss: 0.5252 - accuracy: 0.7595 - val_loss: 0.7498 - val_accuracy: 0.6875 Epoch 4/15 33/33 [==============================] - 79s 2s/step - loss: 0.4754 - accuracy: 0.7731 - val_loss: 1.7478 - val_accuracy: 0.5938 Epoch 5/15 33/33 [==============================] - 77s 2s/step - loss: 0.3966 - accuracy: 0.8130 - val_loss: 2.0004 - val_accuracy: 0.5234 Epoch 6/15 33/33 [==============================] - 73s 2s/step - loss: 0.4196 - accuracy: 0.8442 - val_loss: 0.3918 - val_accuracy: 0.8281 Epoch 7/15 33/33 [==============================] - 73s 2s/step - loss: 0.2859 - accuracy: 0.8802 - val_loss: 1.6727 - val_accuracy: 0.6680 Epoch 8/15 33/33 [==============================] - 74s 2s/step - loss: 0.2489 - accuracy: 0.8929 - val_loss: 3.1737 - val_accuracy: 0.6484 Epoch 9/15 33/33 [==============================] - 76s 2s/step - loss: 0.2829 - accuracy: 0.8948 - val_loss: 1.8389 - val_accuracy: 0.7109 Epoch 10/15 33/33 [==============================] - 76s 2s/step - loss: 0.2140 - accuracy: 0.9250 - val_loss: 1.8419 - val_accuracy: 0.7930 Epoch 11/15 33/33 [==============================] - 73s 2s/step - loss: 0.2341 - accuracy: 0.9299 - val_loss: 1.5261 - val_accuracy: 
0.6914 Epoch 12/15 33/33 [==============================] - 74s 2s/step - loss: 0.1576 - accuracy: 0.9464 - val_loss: 0.9359 - val_accuracy: 0.8398 Epoch 13/15 33/33 [==============================] - 75s 2s/step - loss: 0.2002 - accuracy: 0.9250 - val_loss: 1.9854 - val_accuracy: 0.7344 Epoch 14/15 33/33 [==============================] - 79s 2s/step - loss: 0.1854 - accuracy: 0.9406 - val_loss: 0.7637 - val_accuracy: 0.8164 Epoch 15/15 33/33 [==============================] - 80s 2s/step - loss: 0.1160 - accuracy: 0.9611 - val_loss: 1.6901 - val_accuracy: 0.7656 </code></pre> <p>My model always returns 1 or a number very near to 1, which indicates all images are <strong>Human</strong>, while in reality they are horses.</p> <p>I searched a lot but did not find the answer!</p> <p>Can anyone help me find and solve the problem?</p>
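One thing worth checking (a guess, not a diagnosis): `flow_from_directory` assigns integer labels to the class subfolders in alphabetical order, so whether `output > 0.5` means "human" depends entirely on the folder names. The mapping can be reproduced without Keras; the folder names below are assumptions, and the real mapping is visible in `train_generator.class_indices`:

```python
# flow_from_directory labels class subfolders alphabetically; inspect
# train_generator.class_indices to confirm. With these (assumed) names:
folders = ["humans", "horses"]
class_indices = {name: i for i, name in enumerate(sorted(folders))}
print(class_indices)  # {'horses': 0, 'humans': 1}

# With this mapping a sigmoid output > 0.5 means "humans"; if the folders
# were named the other way around, every prediction would read inverted.
```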
<python><tensorflow><keras>
2023-01-03 12:03:36
1
6,614
Ahmad Badpey
74,992,979
1,987,258
How to use an installed x509 Certificate as a Client certificate when programming a web request in python?
<p>I am working on a python script that is supposed to do a web request using a client certificate from a store (so not from a file path). The certificate is already installed and the web request is already succeeding (when programmed in C# in a way similar to <a href="https://dotnetfiddle.net/1uAhSt" rel="nofollow noreferrer">this</a>).</p> <p>My python code is failing and I do not know how to fix it. Here is the failing code (the variables <code>url</code> and <code>in_subject</code> are set at the top of the file).</p> <pre><code>import ssl from cryptography import x509 import requests def get_certificate(to_find_in_subject: str): for store in [&quot;CA&quot;, &quot;ROOT&quot;, &quot;MY&quot;]: for cert, encoding, trust in ssl.enum_certificates(store): certificate = x509.load_der_x509_certificate(cert, backend=None) if to_find_in_subject in f'{certificate.subject}': return certificate certifcate = get_certificate(in_subject) print(certifcate is None) response = requests.request(method=&quot;GET&quot;, url=url, cert=certifcate) print(response.text) </code></pre> <p>This is the output (shown below). Apparently, I cannot pass the certificate object as the certificate argument. That is not a big problem if I know what to do instead. So what should I do instead? How do I use an installed x509 certificate as a client certificate when programming a web request in Python?</p> <p><em>False</em></p> <p><em>Traceback (most recent call last):</em></p> <p><em>File &quot;C:\Users\daan\PycharmProjects\certificateCall\reproduceforStackoverflow.py&quot;, line 19, in </em></p> <p><em>response = requests.request(method=&quot;GET&quot;, url=url, cert=certifcate)</em></p> <p>.........</p> <p><em>TypeError: 'builtins.Certificate' object is not subscriptable</em></p> <p><em>Process finished with exit code 1</em></p>
<python><x509certificate><x509><client-certificates>
2023-01-03 11:49:52
0
3,058
Daan
74,992,948
10,035,978
Get specific OID from SNMP
<p>I would like to get the sysUpTime from SNMP.</p> <p>It's my first time working on this, so I am not sure how to make it work. I downloaded the <strong>MIB Browser</strong> app and I get this:</p> <p><a href="https://i.sstatic.net/cRGUE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cRGUE.png" alt="enter image description here" /></a></p> <p>How can I do this in Python? I am trying this code:</p> <pre><code>from pysnmp.hlapi import * import sys def walk(host, oid): for (errorIndication, errorStatus, errorIndex, varBinds) in nextCmd(SnmpEngine(), CommunityData('public'), UdpTransportTarget((host, 161)), ContextData(), ObjectType(ObjectIdentity(oid)), lookupMib=False, lexicographicMode=False): if errorIndication: print(errorIndication, file=sys.stderr) break elif errorStatus: print('%s at %s' % (errorStatus.prettyPrint(), errorIndex and varBinds[int(errorIndex) - 1][0] or '?'), file=sys.stderr) break else: for varBind in varBinds: print('%s = %s' % varBind) walk('192.188.14.126', '1.3.6.1.2.1.1.3.0') </code></pre> <p>But I get an error:</p> <blockquote> <p>No SNMP response received before timeout</p> </blockquote>
<python><snmp><pysnmp>
2023-01-03 11:47:16
1
1,976
Alex
74,992,934
12,752,172
How to create a setup file that run on hosted server python?
<p>I'm creating a python app to get details from a website. I'm using selenium and pyodbc to create my app. It is getting all the details and saves them into a SQL server database. It is working fine on my pycharm IDE. Now I need to use this app on a hosted server like Linux or ubuntu server. How can I create a .exe file to run my app on a hosted server? And I used pyinstaller to create a .exe file using the following command.</p> <pre><code>pyinstaller --one main.py </code></pre> <p>I don't know what are the initial things that I should install on my server. Or is it not necessary to install any of the things to run my app?</p>
<python><selenium>
2023-01-03 11:46:20
1
469
Sidath
74,992,915
9,827,719
Panel for Python how to change Gauge colors and layout to be a semicircle
<p>I am using the Panel framework from <a href="https://panel.holoviz.org" rel="nofollow noreferrer">https://panel.holoviz.org</a> in order to generate graphs in Python. I have generated a Gauge chart using the code from <a href="https://panel.holoviz.org/reference/panes/ECharts.html" rel="nofollow noreferrer">https://panel.holoviz.org/reference/panes/ECharts.html</a>.</p> <p>My code:</p> <pre><code>import panel as pn pn.extension('echarts') def main: gauge = { 'tooltip': { 'formatter': '{a} &lt;br/&gt;{b} : {c}%' }, 'series': [ { 'name': 'Incidents', 'type': 'gauge', 'detail': {'formatter': '{value}%'}, 'data': [{'value': 50, 'name': f'Stable'}] } ] } gauge_pane = pn.pane.ECharts(gauge, width=400, height=400) gauge_pane.save(filename=&quot;incidents.png&quot;) if __name__ == '__main__': main() </code></pre> <p><strong>This results in the following gauge graph</strong></p> <p><a href="https://i.sstatic.net/xxHUd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xxHUd.png" alt="enter image description here" /></a></p> <p><strong>However, I want it to be a semicircle that is colorful, like this:</strong></p> <p><a href="https://i.sstatic.net/g9bGu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/g9bGu.png" alt="enter image description here" /></a></p> <p>How can I edit the graph so that it is more like what I want?</p>
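For reference, ECharts gauges (which the Panel pane wraps) support `startAngle`/`endAngle` for a semicircle and a list of `[fraction, color]` stops on `axisLine.lineStyle.color` for the colored bands. A sketch of the options dict; the specific angles, widths, and colors are illustrative:

```python
# ECharts gauge options for a colored semicircle; pass this dict to
# pn.pane.ECharts(...) exactly as in the question.
gauge = {
    "tooltip": {"formatter": "{a} <br/>{b} : {c}%"},
    "series": [{
        "name": "Incidents",
        "type": "gauge",
        "startAngle": 180,  # left edge of the semicircle
        "endAngle": 0,      # right edge
        "axisLine": {
            "lineStyle": {
                "width": 30,
                # color stops: first 30% of the dial, 30-70%, then the rest
                "color": [[0.3, "#91cc75"], [0.7, "#fac858"], [1, "#ee6666"]],
            }
        },
        "detail": {"formatter": "{value}%"},
        "data": [{"value": 50, "name": "Stable"}],
    }],
}
print(gauge["series"][0]["startAngle"], gauge["series"][0]["endAngle"])
```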
<python><panel>
2023-01-03 11:44:25
1
1,400
Europa
74,992,739
906,704
Dynamic change of selected value inside select tag in html page
<p>I'm a complete beginner in web development and I'm here to straight ask someone to create a javascript code to help my python flask app be a bit more user-friendly.</p> <p>My HTML page consists of a table with several rows (it can be up to hundreds). and I want to allow the user to give rank to the most beautiful places on the table. My wish is to have the select tags change dynamically when new places get ranked or if the rank change.<br /> For example, in the code below you can see that there are 5 places and 3 of them have already a selected rank.</p> <p>My wish is that if a new place gets the rank 2, the already ranked places moved to a number up. The new place gets ranked 2 -&gt; Place previously ranked 2 moved to 3 -&gt; place ranked 3 moved to 4 and so on. In a way, I don’t have different places with the same rank.</p> <pre><code> &lt;!DOCTYPE html&gt; &lt;body&gt; &lt;h1&gt; Beatiful Places Ranking Page&lt;/h1&gt; &lt;h6&gt; Please rank the following places you would like to travel &lt;/h6&gt; &lt;table&gt; &lt;tr&gt; &lt;th&gt;Region&lt;/th&gt; &lt;th&gt;Ranking&lt;/th&gt; &lt;/tr&gt; &lt;tr&gt; &lt;td&gt;Torres del Paine National Park, Chile&lt;/td&gt; &lt;td&gt; &lt;select id=&quot;regions&quot; name=0 &gt; &lt;option value=&quot;0&quot;&gt;-&lt;/option&gt; &lt;option value=&quot;1&quot;&gt;1&lt;/option&gt; &lt;option selected value=&quot;2&quot;&gt;2&lt;/option&gt; &lt;option value=&quot;3&quot;&gt;3&lt;/option&gt; &lt;option value=&quot;4&quot;&gt;4&lt;/option&gt; &lt;option value=&quot;5&quot;&gt;5&lt;/option&gt; &lt;option value=&quot;6&quot;&gt;6&lt;/option&gt; &lt;option value=&quot;7&quot;&gt;7&lt;/option&gt; &lt;option value=&quot;8&quot;&gt;8&lt;/option&gt; &lt;option value=&quot;9&quot;&gt;9&lt;/option&gt; &lt;option value=&quot;10&quot;&gt;10&lt;/option&gt; &lt;/select&gt; &lt;/td&gt; &lt;/tr&gt; &lt;tr&gt; &lt;td&gt;Bagan, Myanmar&lt;/td&gt; &lt;td&gt; &lt;select id=&quot;regions&quot; name=1 &gt; &lt;option 
value=&quot;0&quot;&gt;-&lt;/option&gt; &lt;option value=&quot;1&quot;&gt;1&lt;/option&gt; &lt;option value=&quot;2&quot;&gt;2&lt;/option&gt; &lt;option selected value=&quot;3&quot;&gt;3&lt;/option&gt; &lt;option value=&quot;4&quot;&gt;4&lt;/option&gt; &lt;option value=&quot;5&quot;&gt;5&lt;/option&gt; &lt;option value=&quot;6&quot;&gt;6&lt;/option&gt; &lt;option value=&quot;7&quot;&gt;7&lt;/option&gt; &lt;option value=&quot;8&quot;&gt;8&lt;/option&gt; &lt;option value=&quot;9&quot;&gt;9&lt;/option&gt; &lt;option value=&quot;10&quot;&gt;10&lt;/option&gt; &lt;/select&gt; &lt;/td&gt; &lt;/tr&gt; &lt;tr&gt; &lt;td&gt;Monteverde Cloud Forest Biological Reserve, Costa Rica&lt;/td&gt; &lt;td&gt; &lt;select id=&quot;regions&quot; name=2 &gt; &lt;option value=&quot;0&quot;&gt;-&lt;/option&gt; &lt;option selected value=&quot;1&quot;&gt;1&lt;/option&gt; &lt;option value=&quot;2&quot;&gt;2&lt;/option&gt; &lt;option value=&quot;3&quot;&gt;3&lt;/option&gt; &lt;option value=&quot;4&quot;&gt;4&lt;/option&gt; &lt;option value=&quot;5&quot;&gt;5&lt;/option&gt; &lt;option value=&quot;6&quot;&gt;6&lt;/option&gt; &lt;option value=&quot;7&quot;&gt;7&lt;/option&gt; &lt;option value=&quot;8&quot;&gt;8&lt;/option&gt; &lt;option value=&quot;9&quot;&gt;9&lt;/option&gt; &lt;option value=&quot;10&quot;&gt;10&lt;/option&gt; &lt;/select&gt; &lt;/td&gt; &lt;/tr&gt; &lt;tr&gt; &lt;td&gt;Anse Source d'Argent, Seychelles&lt;/td&gt; &lt;td&gt; &lt;select id=&quot;regions&quot; name=3 &gt; &lt;option value=&quot;0&quot;&gt;-&lt;/option&gt; &lt;option value=&quot;1&quot;&gt;1&lt;/option&gt; &lt;option value=&quot;2&quot;&gt;2&lt;/option&gt; &lt;option value=&quot;3&quot;&gt;3&lt;/option&gt; &lt;option value=&quot;4&quot;&gt;4&lt;/option&gt; &lt;option value=&quot;5&quot;&gt;5&lt;/option&gt; &lt;option value=&quot;6&quot;&gt;6&lt;/option&gt; &lt;option value=&quot;7&quot;&gt;7&lt;/option&gt; &lt;option value=&quot;8&quot;&gt;8&lt;/option&gt; &lt;option value=&quot;9&quot;&gt;9&lt;/option&gt; &lt;option 
value=&quot;10&quot;&gt;10&lt;/option&gt; &lt;/select&gt; &lt;/td&gt; &lt;/tr&gt; &lt;tr&gt; &lt;td&gt;Grand Canyon, Arizona&lt;/td&gt; &lt;td&gt; &lt;select id=&quot;regions&quot; name=4 &gt; &lt;option value=&quot;0&quot;&gt;-&lt;/option&gt; &lt;option value=&quot;1&quot;&gt;1&lt;/option&gt; &lt;option value=&quot;2&quot;&gt;2&lt;/option&gt; &lt;option value=&quot;3&quot;&gt;3&lt;/option&gt; &lt;option value=&quot;4&quot;&gt;4&lt;/option&gt; &lt;option value=&quot;5&quot;&gt;5&lt;/option&gt; &lt;option value=&quot;6&quot;&gt;6&lt;/option&gt; &lt;option value=&quot;7&quot;&gt;7&lt;/option&gt; &lt;option value=&quot;8&quot;&gt;8&lt;/option&gt; &lt;option value=&quot;9&quot;&gt;9&lt;/option&gt; &lt;option value=&quot;10&quot;&gt;10&lt;/option&gt; &lt;/select&gt; &lt;/td&gt; &lt;/tr&gt; &lt;/table&gt; &lt;/body&gt; </code></pre> <p>Thank you all in advance</p>
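The shifting rule itself can be kept DOM-free, which makes it easy to test: when one place takes `newRank`, every other place currently ranked at `newRank` or higher moves up by one (0 meaning "unranked"). A sketch in plain JavaScript; wiring it to the page would mean listening to each `<select>`'s `change` event and writing the updated array back into the selects' values:

```javascript
// Hedged sketch of the rank-shifting rule only; the DOM wiring and the
// handling of ranks pushed past the maximum (10) are left out.
function reassignRanks(ranks, changedIndex, newRank) {
  const next = ranks.slice();
  for (let i = 0; i < next.length; i++) {
    if (i !== changedIndex && next[i] >= newRank && next[i] > 0) {
      next[i] += 1; // shift already-ranked places up by one
    }
  }
  next[changedIndex] = newRank;
  return next;
}

// Place 3 takes rank 2: old ranks 2 and 3 shift to 3 and 4.
console.log(reassignRanks([2, 3, 1, 0, 0], 3, 2)); // [3, 4, 1, 2, 0]
```

As an aside, the repeated `id="regions"` in the markup should probably be a class, since element ids must be unique within a page.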
<javascript><python><html><flask>
2023-01-03 11:28:13
1
533
Carlos
74,992,610
5,468,372
Using Gunicorn with custom getMessage() logging --> logging error
<p>I'm using a custom <code>LogRecord</code> to handle an additional variable (<code>session_id</code>) each time a log record is created:</p> <pre class="lang-py prettyprint-override"><code>class LogRecord(logging.LogRecord): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self.session_id = None def getMessage(self): msg = str(self.msg) if self.args: msg = msg % self.args self.session_id = self.args[0] return msg logging.setLogRecordFactory(LogRecord) </code></pre> <p>I'm using this inside a <code>Dash</code> application. When starting the application with <code>Dash</code>, all logs work as intended. When I run the app with <code>gunicorn</code>, it uses its own logging machinery but with my custom <code>LogRecord</code>, which results in the following error trace:</p> <pre><code> File &quot;.../lib/python3.9/logging/__init__.py&quot;, line 1083, in emit msg = self.format(record) File &quot;.../lib/python3.9/logging/__init__.py&quot;, line 927, in format return fmt.format(record) File &quot;.../lib/python3.9/logging/__init__.py&quot;, line 663, in format record.message = record.getMessage() File &quot;.../LipViz/log/classes_log.py&quot;, line 17, in getMessage self.session_id = self.args[0] File &quot;.../lib/python3.9/site-packages/gunicorn/glogging.py&quot;, line 108, in __getitem__ if k.startswith(&quot;{&quot;): AttributeError: 'int' object has no attribute 'startswith' </code></pre> <p>This is because <code>gunicorn</code> uses its own array of custom arguments for logs. I'd need to find a way to tell <code>gunicorn</code> to use its own <code>LogRecord.getMessage()</code> for its own logs and my custom one for the application it is serving.</p>
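One possible workaround (a sketch, not a gunicorn-specific API): the traceback shows gunicorn passing a dict-like "safe atoms" object as `record.args`, so blindly indexing `args[0]` fails. Guarding the session-id extraction on the args type lets both loggers share the same record factory:

```python
import logging

class LogRecord(logging.LogRecord):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.session_id = None

    def getMessage(self):
        msg = str(self.msg)
        if self.args:
            msg = msg % self.args
            # gunicorn's access log passes a dict-like atoms object as
            # record.args; only plain tuples carry our session id.
            if isinstance(self.args, tuple):
                self.session_id = self.args[0]
        return msg

logging.setLogRecordFactory(LogRecord)

rec = LogRecord("demo", logging.INFO, __file__, 1,
                "session %s started", ("abc123",), None)
print(rec.getMessage(), rec.session_id)  # session abc123 started abc123
```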
<python><logging><gunicorn><plotly-dash>
2023-01-03 11:15:28
1
401
Luggie
74,992,510
6,394,722
Difference of writing jinja2 macro in one line V.S. writing macro in multiple lines?
<p>I have the following Jinja2 template:</p> <p><strong>cfg.jinja2:</strong></p> <pre><code>{% macro cfg() %}ABC{% endmacro %} {% macro cfg2() %} ABC {% endmacro %} resource: {{ cfg()|indent(4) }} {{ cfg2()|indent(4) }} </code></pre> <p>And the following Python file:</p> <p><strong>test.py:</strong></p> <pre><code>import os from jinja2 import Environment, FileSystemLoader path_dir = &quot;.&quot; loader = FileSystemLoader(searchpath=path_dir) env = Environment(loader=loader) template = env.get_template(&quot;cfg.jinja2&quot;) data = template.render() print(data) </code></pre> <p>It shows the following result:</p> <pre><code>$ python3 test.py resource: ABC ABC </code></pre> <p>I wonder why the <code>indent</code> filter has no effect on <code>cfg()</code> while <code>cfg2()</code> works as expected?</p>
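The likely explanation, per Jinja's filter semantics: `indent` does not indent the first line by default, and `cfg()` renders to the single line `ABC`, so there is nothing for the filter to touch; `cfg2()` renders with surrounding newlines (from the block form of the macro), so its later lines do get indented. A small sketch, assuming Jinja2 is installed:

```python
from jinja2 import Template

# indent(width) leaves the first line alone by default, so a single-line
# value comes back unchanged while later lines of a multi-line value move.
single = Template("{{ s | indent(4) }}").render(s="ABC")
multi = Template("{{ s | indent(4) }}").render(s="A\nB")

print(repr(single))  # 'ABC'
print(repr(multi))   # 'A\n    B'
```

In recent Jinja2 versions the filter takes a `first` flag, e.g. `indent(4, first=true)`, to indent the first line as well.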
<python><jinja2>
2023-01-03 11:07:31
1
32,101
atline