Dataset columns (type and value/length range):
QuestionId: int64, 74.8M to 79.8M
UserId: int64, 56 to 29.4M
QuestionTitle: string, lengths 15 to 150
QuestionBody: string, lengths 40 to 40.3k
Tags: string, lengths 8 to 101
CreationDate: date string, 2022-12-10 09:42:47 to 2025-11-01 19:08:18
AnswerCount: int64, 0 to 44
UserExpertiseLevel: int64, 301 to 888k
UserDisplayName: string, lengths 3 to 30
75,061,982
16,978,074
How can I print multiple indexes of a certain value in a dictionary?
<p>I'm just learning Python and I have a problem. How can I print multiple indexes of a certain value in a dictionary? In particular, I want to print the index of each element of the dictionary_title array which has <code>genre_ids</code> as the key.</p> <pre><code>dictionary_title={ {'label': 'Green', 'genre_ids': 878}, {'label': 'Pink', 'genre_ids': 16}, {'label': 'Orange', 'genre_ids': 28}, {'label': 'Yellow', 'genre_ids': 9648}, {'label': 'Red', 'genre_ids': 878}, {'label': 'Brown', 'genre_ids': 12}, {'label': 'Black', 'genre_ids': 28}, {'label': 'White', 'genre_ids': 14}, {'label': 'Blue', 'genre_ids': 28}, {'label': 'Light Blue', 'genre_ids': 10751}, {'label': 'Magenta', 'genre_ids': 28}, {'label': 'Gray', 'genre_ids': 28}} </code></pre> <p>This is my code:</p> <pre><code> for values in dictionary_title[&quot;genre_ids&quot;]: for item in values: if item == 28: print(values.index(item)) </code></pre> <p>For example, I want to print the indexes 2, 6, 8, 10, 11, which are the indexes of the items whose <code>genre_ids</code> key equals 28. How can I do it?</p>
<python><dictionary><indexing>
2023-01-09 19:16:46
2
337
Elly
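One way to get those positions, sketched under the assumption that dictionary_title is meant to be a list of dicts (the posted set literal would raise a TypeError, since dicts are unhashable):

```python
dictionary_title = [
    {'label': 'Green', 'genre_ids': 878},
    {'label': 'Pink', 'genre_ids': 16},
    {'label': 'Orange', 'genre_ids': 28},
    {'label': 'Yellow', 'genre_ids': 9648},
    {'label': 'Red', 'genre_ids': 878},
    {'label': 'Brown', 'genre_ids': 12},
    {'label': 'Black', 'genre_ids': 28},
    {'label': 'White', 'genre_ids': 14},
    {'label': 'Blue', 'genre_ids': 28},
    {'label': 'Light Blue', 'genre_ids': 10751},
    {'label': 'Magenta', 'genre_ids': 28},
    {'label': 'Gray', 'genre_ids': 28},
]

# enumerate() yields (index, item) pairs, so the positions can be collected directly.
indexes = [i for i, item in enumerate(dictionary_title) if item['genre_ids'] == 28]
print(indexes)  # [2, 6, 8, 10, 11]
```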
75,061,820
6,440,589
Caching data: numpy vs pandas vs MySQL
<p>I am currently processing time series data stored into h5 files, each file containing one hour of data.</p> <p>In order to move towards real time processing, I would like to process time series data, <em>one second at a time</em>. The plan is to <strong>aggregate</strong> one second of data, <strong>process</strong> the data, <strong>clear</strong> the cache and <strong>repeat</strong>.</p> <p>My first idea was to do this using numpy arrays or pandas dataframe, but a colleague suggested caching the data to a MySQL database instead.</p> <p>In order to benchmark the performance of each approach, I ran a simple timing exercise, trying to access 1,000 samples:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Method</th> <th>Execution time</th> </tr> </thead> <tbody> <tr> <td>Pandas</td> <td>1.36 µs</td> </tr> <tr> <td>Numpy</td> <td>790 ns</td> </tr> <tr> <td>MySQL</td> <td>552 ns</td> </tr> </tbody> </table> </div> <p>The code used to obtain these results is detailed below.</p> <p>From this limited exercise, it looks like the MySQL approach is the winner, but since most of the processing relies on numpy and pandas functions anyways, I am not sure whether it would make much sense to cache the data into a database prior to writing them to a numpy array or a pandas dataframe.</p> <p>So here's my question: apart from improved performance, <strong>what are the benefits of using a MySQL database to cache data?</strong></p> <hr /> <h1>Benchmark</h1> <pre><code>import pandas as pd import numpy as np import mysql.connector from timeit import timeit </code></pre> <h2>Pandas dataframe:</h2> <pre><code>df = pd.DataFrame() df['test'] = np.arange(1,1000) %timeit df['test'] </code></pre> <p>This returns <code>1.36 µs ± 26.2 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)</code></p> <h2>Numpy array:</h2> <pre><code>%timeit np.arange(1,1000) </code></pre> <p>This returns <code>790 ns ± 21.9 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)</code></p> <h2>MySQL database:</h2> <pre><code>cnx = mysql.connector.connect(user='root', password='', host='127.0.0.1', database='mydb') try: cursor = cnx.cursor() cursor.execute(&quot;&quot;&quot; select * from dummy_data &quot;&quot;&quot;) %timeit result_mysql = [item[0] for item in cursor.fetchall()] finally: cnx.close() </code></pre> <p>This yields <code>552 ns ± 26.3 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)</code></p>
<python><mysql><pandas><numpy><caching>
2023-01-09 18:59:31
1
4,770
Sheldon
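A caveat worth flagging about the benchmark above: cursor.fetchall() drains the cursor on its first call, so most of the %timeit iterations only time a list comprehension over an empty result, which likely explains the 552 ns figure. A fairer, if still rough, comparison re-runs the query inside the timed function; this sketch assumes the same local server and dummy_data table:

```python
import mysql.connector
from timeit import timeit

# Assumes the same local MySQL server and dummy_data table as above.
cnx = mysql.connector.connect(user='root', password='', host='127.0.0.1', database='mydb')

def fetch_from_mysql():
    # Re-run the query every time: once fetchall() has drained a cursor, timing
    # only the list comprehension mostly measures iterating an empty result.
    cursor = cnx.cursor()
    cursor.execute("select * from dummy_data")
    rows = [item[0] for item in cursor.fetchall()]
    cursor.close()
    return rows

print(timeit(fetch_from_mysql, number=1000) / 1000, "seconds per query")
cnx.close()
```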
75,061,799
3,932,263
Combine two pyplot axes into a single axis
<p>Note: This is different from the following questions which make the following assumptions:</p> <ul> <li><a href="https://stackoverflow.com/questions/45810557/pyplot-copy-an-axes-content-and-show-it-in-a-new-figure">pyplot - copy an axes content and show it in a new figure</a>: Deletes lines from an axis</li> <li><a href="https://stackoverflow.com/questions/45861656/plot-something-in-one-figure-and-use-it-again-later-for-another-figure">Plot something in one figure, and use it again later for another figure</a>: code to plot the lines again is available</li> <li><a href="https://stackoverflow.com/questions/15962849/matplotlib-duplicate-plot-from-one-figure-to-another">matplotlib - duplicate plot from one figure to another?</a>: moves onto a new empty figure</li> </ul> <p>Here instead, the goal is to add lines to an existing axis which already has other lines on it.</p> <p>I want to merge two axes that have been created as follows:</p> <pre><code>xx = np.arange(12) fig1, ax1 = plt.subplots() ax1.plot(xx, np.sin(xx), label=&quot;V1&quot;) fig2, ax2 = plt.subplots() ax2.plot(xx, 0*xx, label=&quot;V2&quot;) </code></pre> <p>Suppose I no longer have access to the code used to create these plots, but just to <code>fig1, ax1, fig2, ax2</code> objects (e.g. via pickling).</p> <p>The two plots should be merged such that the output is (visually) the same (up to colors) as the output of:</p> <pre><code>fig, ax = plt.subplots() ax.plot(xx, np.sin(xx), label=&quot;V1&quot;) ax.plot(xx, 0*xx, label=&quot;V2&quot;) </code></pre> <p>I have tried</p> <pre><code># move lines from ax1 to ax2 def move_lines_to_axis(ax1, ax2): for line in ax1.lines: line.remove() line.set_linestyle(&quot;dashed&quot;) line.recache(always=True) ax2.add_line(line) return ax2 ax2 = move_lines_to_axis(ax1, ax2) ax2.figure </code></pre> <p>but this gives the wrong scaling.</p> <p>How can I copy the lines from one figure to the other?</p> <p>Fig1: <a href="https://i.sstatic.net/hb4wH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hb4wH.png" alt="enter image description here" /></a> Fig2: <a href="https://i.sstatic.net/xZRWg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xZRWg.png" alt="enter image description here" /></a> Expected merged figure: <a href="https://i.sstatic.net/K0VAt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/K0VAt.png" alt="enter image description here" /></a> What the above code gives (note the wrong y-scale of the sinus): <a href="https://i.sstatic.net/l6OTI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/l6OTI.png" alt="enter image description here" /></a> This seems to be related to the axis transformation, but looking at the code of <code>add_line</code>, it sets the transformation to <code>ax.transData</code>.</p>
<python><matplotlib><axes>
2023-01-09 18:56:41
0
1,399
Maximilian Mordig
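One common cause of the wrong scaling is that the target axes never recomputes its data limits after add_line. A sketch of the move with relim() and autoscale_view() added afterwards (this may not cover every difference between the two figures, such as limits set explicitly):

```python
import numpy as np
import matplotlib.pyplot as plt

xx = np.arange(12)
fig1, ax1 = plt.subplots()
ax1.plot(xx, np.sin(xx), label="V1")
fig2, ax2 = plt.subplots()
ax2.plot(xx, 0 * xx, label="V2")

def move_lines_to_axis(src_ax, dst_ax):
    for line in list(src_ax.lines):     # copy: the list shrinks as lines are removed
        line.remove()
        dst_ax.add_line(line)
    dst_ax.relim()                      # recompute data limits from the artists now present
    dst_ax.autoscale_view()             # and rescale the view to them
    return dst_ax

ax2 = move_lines_to_axis(ax1, ax2)
ax2.legend()
fig2.savefig("merged.png")
```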
75,061,729
4,152,567
Keras Custom Layer gives errors when saving the full model
<pre><code>class ConstLayer(tf.keras.layers.Layer): def __init__(self, x, **kwargs): super(ConstLayer, self).__init__(**kwargs) self.x = tf.Variable(x, trainable=False) def call(self, input): return self.x def get_config(self): #Note: all original model has eager execution disabled config = super(ConstLayer, self).get_config() config['x'] = self.x return config model_test_const_layer = keras.Sequential([ keras.Input(shape=(784)), ConstLayer([[1.,1.]], name=&quot;anchors&quot;), keras.layers.Dense(10), ]) model_test_const_layer.summary() model_test_const_layer.save(&quot;../models/my_model_test_constlayer.h5&quot;) del model_test_const_layer model_test_const_layer = keras.models.load_model(&quot;../models/my_model_test_constlayer.h5&quot;,custom_objects={'ConstLayer': ConstLayer,}) model_test_const_layer.summary() </code></pre> <p>This code is a sandbox replication of an error given by a larger Keras model with a RESNet 101 backbone.</p> <p><strong>Errors</strong>: If the model includes the custom layer ConstLayer:</p> <ul> <li><p><strong>without</strong> this line: <code>config['x'] = self.x</code> error when loading the saved model with <code>keras.models.load_model</code>: TypeError: <code>__init__()</code> missing 1 required positional argument: <code>'x'</code></p> </li> <li><p><strong>with</strong> <code>config['x'] = self.x</code> error: NotImplementedError: <strong>deepcopy</strong>() is only available when eager execution is enabled. <strong>Note</strong>: The larger model, requires eager execution disabled <code>tf.compat.v1.disable_eager_execution()</code></p> </li> </ul> <p>Any help and clues are greatly appreciated!</p>
<python><keras><tensorflow2.0>
2023-01-09 18:50:04
1
512
Mihai.Mehe
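A sketch of one likely fix, assuming x is a plain nested list of floats: keep the raw value that was passed in and return that from get_config, rather than the tf.Variable, which cannot be deep-copied into the config (especially with eager execution disabled). The snippet below runs with eager execution in its default state:

```python
import tensorflow as tf
from tensorflow import keras

class ConstLayer(tf.keras.layers.Layer):
    def __init__(self, x, **kwargs):
        super().__init__(**kwargs)
        self._x_value = x                        # plain, serialisable value
        self.x = tf.Variable(x, trainable=False)

    def call(self, inputs):
        return self.x

    def get_config(self):
        config = super().get_config()
        config['x'] = self._x_value              # store the value, not the tf.Variable
        return config

model = keras.Sequential([
    keras.Input(shape=(784,)),
    ConstLayer([[1., 1.]], name="anchors"),
    keras.layers.Dense(10),
])
model.save("my_model_test_constlayer.h5")
reloaded = keras.models.load_model("my_model_test_constlayer.h5",
                                   custom_objects={'ConstLayer': ConstLayer})
reloaded.summary()
```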
75,061,711
8,322,295
Trying to run sklearn on a loop and store predictions in a dataframe
<p><strong>My problem</strong></p> <p>I've been working for some time on an ML algorithm which predicts redshifts of galaxies based on magnitudes. An MWE here:</p> <p><a href="https://pastebin.com/G19Qx2Yj" rel="nofollow noreferrer">https://pastebin.com/G19Qx2Yj</a></p> <p>The MWE is a self-contained version of my full code, which is more complex, but this is as close as I can get it to the actual script.</p> <p>In lines 120-122, I add the predictions as a column to <code>X_test</code>. This works as intended.</p> <p><strong>What I'm trying to achieve</strong></p> <p>On each iteration through the loop, I'm trying to add another column to <code>X_test</code>, labelled <code>X_test1</code>, <code>X_test2</code>, <code>X_test3</code>, etc. so I can calculate the average of all of them.</p> <p><strong>What I've tried</strong></p> <ul> <li><p>I've tried using</p> <p>X_test['z_spec'] = y_test X_test['z_phot'+str(i)] = y_pred</p> </li> </ul> <p>but this ends up just recording y_pred from the final run.</p> <ul> <li><p>I've also tried moving <code>train_test_split</code> to line 102, before the loop, so that I'm not reinitialising <code>X_test</code> on each iteration, but this returns an error:</p> <pre><code> ValueError: Input 0 of layer &quot;sequential_15&quot; is incompatible with the layer: expected shape=(None, 5), found shape=(None, 8) </code></pre> </li> <li><p>I've tried making a new dataframe to store the predictions:</p> </li> </ul> <pre><code>z_df = pd.DataFrame([]) z_df ['z_spec'] = y_test z_df ['z_phot'+str(i)] = y_pred z_df ['delta_z'] = z_df ['z_spec'] - z_df ['z_phot'] </code></pre> <p>but this records only the final <code>y_pred</code>.</p> <p>(Because my data will contain <code>NaN</code>s, I can't put <code>z_df = pd.DataFrame([])</code> before the loop. -- <strong>Ignore</strong>)</p> <p>How can I achieve this?</p>
<python><machine-learning><scikit-learn>
2023-01-09 18:48:37
0
1,546
Jim421616
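Since the MWE lives on pastebin, the model and data below are placeholders (LinearRegression on random numbers), but the pattern should carry over: split once outside the loop, collect each run's predictions in a dict keyed by run number, and build one frame at the end. Keeping predictions out of X_test also avoids the shape-mismatch error that appears when prediction columns are appended to the features and fed back to the model:

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression   # stand-in for the real model

rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(200, 5)), columns=[f"mag{i}" for i in range(5)])
y = pd.Series(rng.normal(size=200), name="z_spec")

# Split once, outside the loop, so X_test keeps the same rows on every iteration.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

predictions = {"z_spec": y_test}
for i in range(1, 4):
    model = LinearRegression().fit(X_train, y_train)
    predictions[f"z_phot{i}"] = pd.Series(model.predict(X_test), index=X_test.index)

z_df = pd.DataFrame(predictions)
z_df["z_phot_mean"] = z_df.filter(like="z_phot").mean(axis=1)
print(z_df.head())
```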
75,061,605
20,589,275
How to add two models to one page in Django
<p>When I try to add two models to one page, it doesn't work and returns the HTML code:<a href="https://i.sstatic.net/ZxiNN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZxiNN.png" alt="enter image description here" /></a></p> <p>How can I add two models to one page?</p> <p><strong>views.py</strong></p> <pre><code>def home(request): home_results = MainPageInfo.objects.all(); context_home = {'home_results': home_results} navigation_results_hone = Navigation.objects.all(); context_navigation_home = {'navigation_results_hone': navigation_results_hone} return render(request, 'index.html', context_home, context_navigation_home) </code></pre> <p><strong>models.py</strong></p> <pre><code>class Navigation(models.Model): title = models.FileField(upload_to='photos/%Y/%m/%d', blank = False, verbose_name=' SVG') class MainPageInfo(models.Model): title = models.CharField(max_length=255, verbose_name='Info') </code></pre> <p><strong>admin.py</strong></p> <pre><code>admin.site.register(Navigation) </code></pre>
<python><django>
2023-01-09 18:36:12
1
650
Proger228
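render() accepts a single context dict as its third positional argument, so the second dict above is being passed as the content_type parameter. A sketch of the usual fix, assuming both models live in the same app, is to merge everything into one context:

```python
# views.py: render() takes one context dict, so put both querysets in it.
from django.shortcuts import render
from .models import MainPageInfo, Navigation   # assumed to live in this app's models.py

def home(request):
    context = {
        'home_results': MainPageInfo.objects.all(),
        'navigation_results_hone': Navigation.objects.all(),  # key name kept from the question
    }
    return render(request, 'index.html', context)
```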
75,061,437
11,937,086
Resample each ID of a dataframe with a given date range
<p>I have a dataframe like the one below. Each week, different IDs receive different tests.</p> <pre><code>date id test received 2023-01-02 a1 a 1 2023-01-02 c3 a 1 2023-01-02 e5 a 1 2023-01-02 b2 b 1 2023-01-02 d4 b 1 2023-01-09 a1 c 1 2023-01-09 b2 c 1 2023-01-09 c3 c 1 </code></pre> <pre><code>d = { &quot;date&quot;: [ &quot;2023-01-02&quot;, &quot;2023-01-02&quot;, &quot;2023-01-02&quot;, &quot;2023-01-02&quot;, &quot;2023-01-02&quot;, &quot;2023-01-09&quot;, &quot;2023-01-09&quot;, &quot;2023-01-09&quot;, ], &quot;id&quot;: [&quot;a1&quot;, &quot;c3&quot;, &quot;e5&quot;, &quot;b2&quot;, &quot;d4&quot;, &quot;a1&quot;, &quot;b2&quot;, &quot;c3&quot;], &quot;test&quot;: [&quot;a&quot;, &quot;a&quot;, &quot;a&quot;, &quot;b&quot;, &quot;b&quot;, &quot;c&quot;, &quot;c&quot;, &quot;c&quot;], &quot;received&quot;: [1, 1, 1, 1, 1, 1, 1, 1], } df = pd.DataFrame(data=d) </code></pre> <p>I want to resample it so that every ID is listed beside all the tests administered that week, and received = 1 or 0 depending on if they received it.</p> <pre><code>week_starting id test received 02/01/2023 a1 a 1 02/01/2023 b2 a 0 02/01/2023 c3 a 1 02/01/2023 d4 a 0 02/01/2023 e5 a 1 02/01/2023 a1 b 0 02/01/2023 b2 b 1 02/01/2023 c3 b 0 02/01/2023 d4 b 1 02/01/2023 e5 b 0 09/01/2023 a1 c 1 09/01/2023 b2 c 1 09/01/2023 c3 c 1 09/01/2023 d4 c 0 09/01/2023 e5 c 0 </code></pre> <p>Resampling by date is covered on StackOverflow, but resampling / padding by ID is not. Help?</p>
<python><pandas><padding><resampling>
2023-01-09 18:18:49
1
378
travelsandbooks
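One way to get that layout, assuming "administered that week" means any (date, test) pair that appears in the data: pivot so every id becomes a column (missing ids get 0), then stack back to long form.

```python
import pandas as pd

d = {"date": ["2023-01-02"] * 5 + ["2023-01-09"] * 3,
     "id":   ["a1", "c3", "e5", "b2", "d4", "a1", "b2", "c3"],
     "test": ["a", "a", "a", "b", "b", "c", "c", "c"],
     "received": [1] * 8}
df = pd.DataFrame(d)

# pivot_table builds a column for every id in the data, so ids that did not
# receive a given week's test get the fill_value of 0; stacking turns that grid
# back into one row per (week, test, id).
out = (df.pivot_table(index=["date", "test"], columns="id",
                      values="received", aggfunc="max", fill_value=0)
         .stack()
         .rename("received")
         .reset_index()
         .rename(columns={"date": "week_starting"})
         [["week_starting", "id", "test", "received"]])
print(out)
```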
75,061,426
2,261,950
Python json.loads: how to clear line breaks outside of key-value pairs but keep them within values?
<p>We receive AWS notifications to an automated mailbox in JSON format, I have a python script that should process these, however when im loading the content/body of these emails into JSON it is erroring with</p> <p><code>json.decoder.JSONDecodeError: Extra data: line 5 column 3007 (char 3159)</code></p> <p>When I looked at the content I can see it is full of line breaks where it seems the json has been formatted for readability in the body of the message. I need to maintain the line breaks in the values of the data but outside of the values they need stripping so I can load the content into readable JSON</p> <p>here is a sample of the content, does anyone have any ideas?</p> <p>Thanks</p> <pre class="lang-json prettyprint-override"><code>'{\r\n &quot;Type&quot; : &quot;Notification&quot;,\r\n &quot;MessageId&quot; : &quot;afad72049c0cb1&quot;,\r\n &quot;TopicArn&quot; : &quot;arn:aws:sns:eu-west-1:793738:aws-health&quot;,\r\n &quot;Message&quot; : &quot;{\\&quot;version\\&quot;:\\&quot;0\\&quot;,\\&quot;id\\&quot;:\\&quot;3f059336-bdd1-e27b423d5\\&quot;,\\&quot;detail-type\\&quot;:\\&quot;AWS Health Event\\&quot;,\\&quot;source\\&quot;:\\&quot;aws.health\\&quot;,\\&quot;account\\&quot;:\\&quot;7954138\\&quot;,\\&quot;time\\&quot;:\\&quot;2022-10-19T08:55:00Z\\&quot;,\\&quot;region\\&quot;:\\&quot;eu-west-1\\&quot;,\\&quot;resources\\&quot;:[\\&quot;docker/b\\&quot;,\\&quot;master/phub\\&quot;],\\&quot;detail\\&quot;:{\\&quot;eventArn\\&quot;:\\&quot;arn:aws:health:eu-west-1::event/ECS/AWS_ECS_SECURITY_NOTIFICATION/AWS_ECS_SECURITY_NOTIFICATION_3986a573dbe33a823860ad3272f72e\\&quot;,\\&quot;service\\&quot;:\\&quot;ECS\\&quot;,\\&quot;eventTypeCode\\&quot;:\\&quot;AWS_ECS_SECURITY_NOTIFICATION\\&quot;,\\&quot;eventTypeCategory\\&quot;:\\&quot;accountNotification\\&quot;,\\&quot;startTime\\&quot;:\\&quot;Wed, 19 Oct 2022 08:55:00 GMT\\&quot;,\\&quot;eventDescription\\&quot;:[{\\&quot;language\\&quot;:\\&quot;en_US\\&quot;,\\&quot;latestDescription\\&quot;:\\&quot;A software update has been deployed to Fargate which includes CVE patches or other critical patches. No action is required on your part. All new tasks launched automatically uses the latest software version. For running tasks, your tasks need to be restarted in order for these updates to apply. Your tasks running as part of the following ECS Services will be automatically updated beginning October 31, 2022.\\\\n\\\\nA list of your affected resource(s) can be found in the \'Affected resources\' tab in the \\\\\\&quot;Cluster | Service\\\\\\&quot; format.\\\\n\\\\nAfter October 31, 2022, Fargate will begin gradually restarting these tasks. Typically, services should see little to no interruption during the update and no action is required. Data your task has stored on local ephemeral storage will no longer be available, similar to a scaling down event. If you would like to control the timing of this restart you can update the service before October 31, 2022, by running the update-service command from the ECS command-line interface specifying force-new-deployment. 
For example:\\\\n\\\\n$ aws ecs update-service --service service_name \\\\\\\\\\\\n--cluster cluster_name --force-new-deployment\\\\n\\\\nFor further details on Fargate\'s update process, please refer to the ECS developer guide [1].\\\\n\\\\nIf you have any questions or concerns, please contact AWS Support [2].\\\\n\\\\n[1] https://eur02.safelinks.protection.outlook.com/?url=https%3A%2F%2Fdocs.aws.amazon.com%2FAmazonECS%2Flatest%2Fuserguide%2Ftask-maintenance.html%2F%2Fn&amp;amp;data=05%7C01%7Cnetguru%40domain.com%7C18ecb8a6d7454302640808dab1df762e%7C9168a104f43a47ffa70848b8545e1691%7C0%7C0%7C638017870565849523%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000%7C%7C%7C&amp;amp;sdata=8GnV6bDohXEG8AYo4mOwSY9dLuqRLknLuXnaelVS%2FnI%3D&amp;amp;reserved=0[2] https://eur02.safelinks.protection.outlook.com/?url=https%3A%2F%2Faws.amazon.com%2Fsupport%2F%2F762e%7C9168a104f43a47ffa70848b8545e1691%7C0%7C0%7C638017870565849523%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000%7C%7C%7C&amp;amp;sdata=Ll8kJRsNgFw46znXWhmH9Ph%2Bu2zBchweMzqq1feqjQk%3D&amp;amp;reserved=0&quot;}],\\&quot;affectedEntities\\&quot;:[{\\&quot;entityValue\\&quot;:\\&quot;docker/rcure-hub\\&quot;},{\\&quot;entityValue\\&quot;:\\&quot;master/rcure-hub\\&quot;}]}}&quot;,\r\n &quot;Timestamp&quot; : &quot;2022-10-19T14:37:30.976Z&quot;,\r\n &quot;SignatureVersion&quot; : &quot;1&quot;,\r\n &quot;Signature&quot; : &quot;taT/Hxpaywf/WurHI/hs0wmZxA0hqhjDX1tFk9KmmY2Vyj6zXTzF6k78XoSiLvfGK7pOZCL+oruqZKBFyRy8SvKvDMa0ZT6ekKj9uAEwmpAItDZfkNvJM1hmSSNEV+8SpKRBU0GSQ8v4UkXMHQUNqGIURKRJpoJEORy8Yd7/Qsw8cNlZhrEAGzj/L7O6Fo84cUsjBASqDyjOwAnUmys0CVdxrEUYPoc6m4tPfazrTkw+GSteBQ904kSvSbEL7AR61n7TK4nqv6t3xJ7HcEiP6vO0m7mj3rhOjIgeFtQrPbFONUHdWt3hP1OD9Fa84tVEwPDHJiFm+w0+aJu+WhEUTg==&quot;,\r\n &quot;SigningCertURL&quot; : &quot;https://eur02.safelinks.protection.outlook.com/?url=https%3A%2F%2Fsns.eu-west-1.amazonaws.com%2FSimpleNotificationService-56e67fcb41f6fec09b0196692625d385.pem&amp;amp;data=05%7C01%7Cnetguru%40domain.com%7C18ecb8a6d7454302640808dab1df762e%7C9168a104f43a47ffa70848b8545e1691%7C0%7C0%7C638017870565849523%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000%7C%7C%7C&amp;amp;sdata=8%2BWv%2FP64OBM3lk0CXurmLbYlIZCxHoR%2BeWCbWZUoUQw%3D&amp;amp;reserved=0&quot;,\r\n &quot;UnsubscribeURL&quot; : &quot;https://eur02.safelinks.protection.outlook.com/?url=https%3A%2F%2Fsns.eu-west-1.amazonaws.com%2F%3FAction%3DUnsubscribe%26SubscriptionArn%3Darn%3Aaws%3Asns%3Aeu-west-1%3A793726854138%3Aaws-health%3A6de24e4d-ae74-4aaa-bf78-36b6e95c335f&amp;amp;data=05%7C01%7Cnetguru%40domain.com%7C18ecb8a6d7454302640808dab1df762e%7C9168a104f43a47ffa70848b8545e1691%7C0%7C0%7C6V%2FvQ6tB2outb%2FrNzKRsJMJ3DE%3D&amp;amp;reserved=0&quot;\r\n}\r\n\r\n' </code></pre>
<python><json>
2023-01-09 18:17:56
2
2,163
AlexW
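Raw CR/LF characters are only legal between JSON tokens, while the line breaks that matter inside values are stored as the two-character escape sequence \n, so one simple repair is to delete the literal control characters before parsing; json.loads restores the escaped ones afterwards. A sketch with a small stand-in payload (if "Extra data" persists after this, the body may contain two concatenated JSON documents, which json.JSONDecoder().raw_decode can peel off one at a time):

```python
import json

def load_wrapped_json(raw_body: str) -> dict:
    # Literal CR/LF are only legal *between* JSON tokens; the meaningful breaks
    # inside values are the two-character sequence \n and survive this untouched.
    cleaned = raw_body.replace('\r', '').replace('\n', '')
    return json.loads(cleaned)

# Small stand-in payload; the real AWS notification body is much larger.
sample = '{\r\n  "Type" : "Notification",\r\n  "Message" : "line one\\nline two"\r\n}\r\n'
print(load_wrapped_json(sample))
# {'Type': 'Notification', 'Message': 'line one\nline two'}
```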
75,061,424
11,462,274
Working with numpy.where when datetime64 could not be promoted by str_
<pre class="lang-python prettyprint-override"><code>import pandas as pd from datetime import timedelta import numpy as np df = pd.DataFrame({ 'open_local_data':['2022-08-24 15:00:00','2022-08-24 18:00:00'], 'result':['WINNER',''] }) df['open_local_data'] = pd.to_datetime(df['open_local_data']) df['clock_now'] = np.where( df['result'] != '', df['open_local_data'] + timedelta(minutes=150), '' ) print(df[['open_local_data','clock_now']]) </code></pre> <p>Since I must work using conditions and only later decide whether to handle changes in a column, what should I do in case I receive this error:</p> <pre class="lang-none prettyprint-override"><code> df['clock_now'] = np.where( File &quot;&lt;__array_function__ internals&gt;&quot;, line 180, in where TypeError: The DType &lt;class 'numpy.dtype[datetime64]'&gt; could not be promoted by &lt;class 'numpy.dtype[str_]'&gt;. This means that no common DType exists for the given inputs. For example they cannot be stored in a single array unless the dtype is `object`. The full list of DTypes is: (&lt;class 'numpy.dtype[datetime64]'&gt;, &lt;class 'numpy.dtype[str_]'&gt;) </code></pre>
<python><pandas><numpy><datetime>
2023-01-09 18:17:36
2
2,222
Digital Farmer
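np.where materialises both branches into one NumPy array, and datetime64 and str have no common dtype. Keeping the operation in pandas with Series.where (and NaT as the "empty" value) avoids the promotion entirely; if a literal empty string is really required, the column has to become dtype object instead:

```python
import pandas as pd
from datetime import timedelta

df = pd.DataFrame({
    'open_local_data': ['2022-08-24 15:00:00', '2022-08-24 18:00:00'],
    'result': ['WINNER', ''],
})
df['open_local_data'] = pd.to_datetime(df['open_local_data'])

# Series.where keeps the datetime64 dtype and fills NaT where the condition fails.
df['clock_now'] = (df['open_local_data'] + timedelta(minutes=150)).where(df['result'] != '', pd.NaT)

print(df[['open_local_data', 'clock_now']])
```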
75,061,362
16,319,191
Convert categories to binary columns (concat the category columns)
<p>Want to convert the categories to binary columns, concatenated to the df. Category column values should be new columns with 0 or 1s for each id based on if the value is present or not.</p> <pre><code>df = pd.DataFrame({&quot;id&quot;: [0,1,1,3,3], &quot;value1&quot;: [&quot;ryan&quot;, &quot;delta&quot;, &quot;delta&quot;, &quot;delta&quot;, &quot;alpha&quot;], &quot;category&quot;: [&quot;teacher&quot;, &quot;pilot&quot;, &quot;engineer&quot;, &quot;pilot&quot;, &quot;teacher&quot;], &quot;value2&quot;: [1, 1, 2, 3, 7]}) df </code></pre> <p>Answer df should be:</p> <pre><code>finaldf = pd.DataFrame({&quot;id&quot;: [0,1,3], &quot;teacher&quot;:[1,0,1], &quot;pilot&quot;:[0,1,1], &quot;engineer&quot;: [0,1,0]}) </code></pre>
<python><pandas>
2023-01-09 18:11:41
1
392
AAA
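One way to build those indicator columns: pd.crosstab counts the (id, category) pairs and clip(upper=1) turns the counts into 0/1 flags (the column order comes out alphabetical rather than teacher/pilot/engineer):

```python
import pandas as pd

df = pd.DataFrame({"id": [0, 1, 1, 3, 3],
                   "value1": ["ryan", "delta", "delta", "delta", "alpha"],
                   "category": ["teacher", "pilot", "engineer", "pilot", "teacher"],
                   "value2": [1, 1, 2, 3, 7]})

# crosstab counts each (id, category) pair; clip caps repeated pairs at 1.
finaldf = (pd.crosstab(df["id"], df["category"])
             .clip(upper=1)
             .reset_index()
             .rename_axis(columns=None))
print(finaldf)
```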
75,061,348
4,429,002
Pandas DataFrame Hash Values Differ Between Unix and Windows
<p>I've noticed that hash values created from Pandas DataFrames change depending whether the below snippet is executed on Unix or Windows.</p> <pre><code>import pandas as pd import numpy as np import hashlib df = pd.DataFrame(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), columns=['a', 'b', 'c']) hashvalue_new = hashlib.md5(df.values.flatten().data).hexdigest() print(hashvalue_new) </code></pre> <p>The above code prints <code>d0ecb84da86002807de1635ede730f0a</code> on Windows machines and <code>586962852295d584ec08e7214393f8b2</code> on Unix machines. Can someone more knowledgeable (or smarter) than me explain to me why this is happening and suggest a way to create a consistent hash value across platforms? I'm running Python 3.8.5 and pandas 1.2.5.</p>
<python><pandas><hash><operating-system><hashlib>
2023-01-09 18:10:45
1
309
Moritz
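The usual culprit is the integer dtype: np.array of Python ints defaults to 32-bit on Windows and 64-bit on most Unix builds, so the raw buffer (and therefore the MD5) differs. Pinning the dtype is one fix; pandas.util.hash_pandas_object is a dtype-aware alternative worth considering:

```python
import hashlib
import numpy as np
import pandas as pd

# Pin the dtype so the underlying bytes are identical on every platform.
df = pd.DataFrame(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=np.int64),
                  columns=['a', 'b', 'c'])

hashvalue = hashlib.md5(df.values.flatten().data).hexdigest()
print(hashvalue)
```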
75,061,330
6,423,456
Can Django's Replace be used to replace multiple substrings at once?
<p>In Django, I can have queries that look like this:</p> <pre class="lang-py prettyprint-override"><code>from django.db.models import Value from django.db.models.functions import Replace MyModel.objects.update(description=Replace(&quot;description&quot;, Value(&quot;old_1&quot;), Value(&quot;new_1&quot;))) MyModel.objects.update(description=Replace(&quot;description&quot;, Value(&quot;old_2&quot;), Value(&quot;new_2&quot;))) </code></pre> <p>The first <code>.update</code> will go through the database, look for the &quot;old_1&quot; substring in the description field, and replace it with the &quot;new_1&quot; substring. The second <code>.update</code> call will do the same thing for the <code>old_2</code> substring, replacing it with the <code>new_2</code> substring.</p> <p>Can this be done in a single query?</p>
<python><django>
2023-01-09 18:09:16
1
2,774
John
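Replace() is an ordinary database expression, so the calls can be nested and the whole thing runs as a single UPDATE; note that the replacements still apply in sequence, so text introduced by the inner replacement can be matched by the outer one. A sketch assuming the same MyModel as above:

```python
from django.db.models import Value
from django.db.models.functions import Replace

# The inner Replace becomes the input of the outer one, giving one UPDATE query.
MyModel.objects.update(
    description=Replace(
        Replace("description", Value("old_1"), Value("new_1")),
        Value("old_2"),
        Value("new_2"),
    )
)
```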
75,061,301
2,543,622
Running SQL code in Python: difference between r''' and f'''
<p>I have code like the example below. I noticed that I can replace <code>f'''</code> with <code>r'''</code>. What is the difference between these two options, and when should each be used? It seems that <code>r'''</code> is used when there is a regex in the code? I tried to google it but didn't get any good results.</p> <pre><code>query = f''' with a as ( select some sql or hive code from a ''' </code></pre>
<python><sql><string>
2023-01-09 18:06:23
1
6,946
user2543622
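The two prefixes are unrelated: f makes an f-string, which interpolates {} expressions, while r makes a raw string, in which backslashes stay literal (which is why it shows up around regexes). With no braces and no backslashes in the SQL both produce the same text, but an f-string will reject stray braces. A small illustration with a hypothetical table name:

```python
table = "my_table"

f_query = f'''select * from {table} where path = 'C:\\new' '''
r_query = r'''select * from {table} where path = 'C:\new' '''

print(f_query)  # {table} was substituted, \\ collapsed to a single backslash
print(r_query)  # {table} left as-is, backslash kept literally, so \n is not a newline
```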
75,061,006
1,862,861
Store all keyboard keys currently being pressed in PyQt5
<p>I'm trying to write a PyQt5 GUI that captures all keyboard keys that are currently being pressed. Based on <a href="https://stackoverflow.com/a/32727764/1862861">this answer</a>, I've tried the following minimal code:</p> <pre class="lang-py prettyprint-override"><code>import sys from PyQt5.QtWidgets import QApplication, QWidget from PyQt5.QtCore import QEvent class MainWindow(QWidget): def __init__(self): super().__init__() QApplication.instance().installEventFilter(self) self.pressedKeys = [] def eventFilter(self, source, event): if event.type() == QEvent.KeyPress: if int(event.key()) not in self.pressedKeys: self.pressedKeys.append(int(event.key())) print(self.pressedKeys) elif event.type() == QEvent.KeyRelease: if int(event.key()) in self.pressedKeys: self.pressedKeys.remove(int(event.key())) print(self.pressedKeys) return super().eventFilter(source, event) if __name__ == &quot;__main__&quot;: app = QApplication(sys.argv) demo = MainWindow() demo.show() sys.exit(app.exec_()) </code></pre> <p>When I run this, if I hold down a key the output list keeps flipping back and forth between one containing the key value and being empty. Similarly, holding down multiple keys adds the keys to the list, but alternates back and forth between containing and removing the final key that I have pressed. It seems that if I hold down keys the <code>KeyRelease</code> event still keeps getting triggered for the last key I pressed.</p> <p>Is there are way to hold all current key presses in PyQt5, or should I use a different package (e.g., using one or other of the packages suggested in <a href="https://stackoverflow.com/questions/24072790/how-to-detect-key-presses">this question</a>)?</p> <p>Note, I've also tried:</p> <pre class="lang-py prettyprint-override"><code>import sys from PyQt5.QtWidgets import QApplication, QWidget class MainWindow(QWidget): def __init__(self): super().__init__() self.pressedKeys = [] def keyPressEvent(self, event): if int(event.key()) not in self.pressedKeys: self.pressedKeys.append(int(event.key())) print(self.pressedKeys) def keyReleaseEvent(self, event): if int(event.key()) in self.pressedKeys: self.pressedKeys.remove(int(event.key())) print(self.pressedKeys) if __name__ == &quot;__main__&quot;: app = QApplication(sys.argv) demo = MainWindow() demo.show() sys.exit(app.exec_()) </code></pre> <p>which results in pretty much the same behaviour.</p>
<python><pyqt5><keyboard>
2023-01-09 17:36:16
1
7,300
Matt Pitkin
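Holding a key down makes the window system deliver synthetic press/release pairs (auto-repeat); Qt marks those with QKeyEvent.isAutoRepeat(), so skipping them keeps the list stable. A sketch based on the second (keyPressEvent) variant:

```python
import sys
from PyQt5.QtWidgets import QApplication, QWidget

class MainWindow(QWidget):
    def __init__(self):
        super().__init__()
        self.pressedKeys = []

    def keyPressEvent(self, event):
        if event.isAutoRepeat():          # synthetic repeat while the key is held
            return
        if event.key() not in self.pressedKeys:
            self.pressedKeys.append(event.key())
            print(self.pressedKeys)

    def keyReleaseEvent(self, event):
        if event.isAutoRepeat():          # matching synthetic release
            return
        if event.key() in self.pressedKeys:
            self.pressedKeys.remove(event.key())
            print(self.pressedKeys)

if __name__ == "__main__":
    app = QApplication(sys.argv)
    demo = MainWindow()
    demo.show()
    sys.exit(app.exec_())
```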
75,060,938
10,260,243
Line and text don't align in matplotlib
<p>I'm trying to draw a line with text written on it using the following code:</p> <pre><code>def angle_line(A, B): x = B[0] - A[0] y = B[1] - A[1] angle = math.atan2(y, x ) return angle*180/np.pi fig, ax = plt.subplots() xy_init = (0,0) xy_end = (1, 0.5) #PLOT LINE ax.plot((xy_init[0], xy_end[0]), (xy_init[1], xy_end[1]), color='black', linewidth=0.5, linestyle='--') center = (xy_init[0]+xy_end[0])/2, (xy_init[1]+xy_end[1])/2 angle = angle_line(xy_init, xy_end) #PLOT TEXT ax.text(*center, 'TEST', horizontalalignment='center', rotation = angle, verticalalignment='center',fontdict={'fontsize': 20}, color='red') </code></pre> <p>Basically, I draw the line using the two points, and then for the text, I calculate the angle and rotate the text accordingly. But, for some reason I cannot comprehend, they are misaligned, as shown in the picture below:</p> <p><a href="https://i.sstatic.net/RCtad.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RCtad.png" alt="Plot" /></a></p> <p>Do I understand something wrong about the parameters of the matplotlib functions, or is there any problem with the code above?</p>
<python><matplotlib>
2023-01-09 17:29:21
0
4,678
Bruno Mello
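The rotation passed to ax.text is applied in screen space, while the angle above is computed in data space, so the two only agree when the axes aspect happens to be 1. One fix is to measure the angle from the pixel positions of the endpoints (drawing first so the limits are final); the angle goes stale if the figure is resized afterwards. Newer Matplotlib versions also accept transform_rotates_text=True on ax.text, which is meant to do this conversion automatically.

```python
import math
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
xy_init, xy_end = (0, 0), (1, 0.5)
ax.plot((xy_init[0], xy_end[0]), (xy_init[1], xy_end[1]),
        color='black', linewidth=0.5, linestyle='--')
center = ((xy_init[0] + xy_end[0]) / 2, (xy_init[1] + xy_end[1]) / 2)

fig.canvas.draw()  # make sure the final axes limits are in place before measuring

# Rotation is applied in screen space, so measure the angle between the *pixel*
# positions of the endpoints rather than between their data coordinates.
p1, p2 = ax.transData.transform([xy_init, xy_end])
screen_angle = math.degrees(math.atan2(p2[1] - p1[1], p2[0] - p1[0]))

ax.text(*center, 'TEST', rotation=screen_angle,
        horizontalalignment='center', verticalalignment='center',
        fontdict={'fontsize': 20}, color='red')
plt.show()
```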
75,060,885
14,958,374
GPU out of memory when FastAPI is used with SentenceTransformers inference
<p>I'm currently using FastAPI with <strong>Gunicorn</strong>/<strong>Uvicorn</strong> as my server engine. Inside FastAPI <code>GET</code> method I'm using <code>SentenceTransformer</code> model with <strong>GPU</strong>:</p> <pre><code># ... from sentence_transformers import SentenceTransformer encoding_model = SentenceTransformer(model_name, device='cuda') # ... app = FastAPI() @app.get(&quot;/search/&quot;) def encode(query): return encoding_model.encode(query).tolist() # ... def main(): uvicorn.run(app, host=&quot;127.0.0.1&quot;, port=8000) if __name__ == &quot;__main__&quot;: main() </code></pre> <p>I'm using the following config for <strong>Gunicorn</strong>:</p> <pre><code>TIMEOUT 0 GRACEFUL_TIMEOUT 120 KEEP_ALIVE 5 WORKERS 10 </code></pre> <p><strong>Uvicorn</strong> has all default settings, and is started in docker container casually:</p> <pre><code>CMD [&quot;uvicorn&quot;, &quot;app.main:app&quot;, &quot;--host&quot;, &quot;0.0.0.0&quot;, &quot;--port&quot;, &quot;8000&quot;] </code></pre> <p>So, inside docker container I have 10 gunicorn workers, <strong>each using <strong>GPU</strong></strong>.</p> <p><strong>The problem is the following:</strong></p> <p>After some load my API fails with the following message:</p> <pre><code>torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 734.00 MiB (GPU 0; 15.74 GiB total capacity; 11.44 GiB already allocated; 189.56 MiB free; 11.47 GiB reserved in total by PyTorch) If reserved memory is &gt;&gt; allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF </code></pre>
<python><pytorch><fastapi><sentence-transformers>
2023-01-09 17:23:48
1
331
Nick Zorander
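With 10 Gunicorn workers, each process loads its own copy of the model onto the GPU, so memory use is roughly ten times a single worker before any request arrives. The usual mitigation is one worker per GPU (Uvicorn still handles request concurrency within it), optionally with a lock so only one encode runs on the GPU at a time; a sketch, with a placeholder model name:

```python
# Sketch: one model copy per process, at most one encode on the GPU at a time.
# Assumes the Gunicorn WORKERS setting is reduced to 1 (or one per GPU).
from threading import Lock

from fastapi import FastAPI
from sentence_transformers import SentenceTransformer

model_name = "all-MiniLM-L6-v2"          # placeholder, not the asker's model
encoding_model = SentenceTransformer(model_name, device="cuda")
gpu_lock = Lock()

app = FastAPI()

@app.get("/search/")
def encode(query: str):
    # A 'def' (not 'async def') endpoint runs in FastAPI's threadpool, so the
    # lock serialises only the GPU work, not the whole event loop.
    with gpu_lock:
        return encoding_model.encode(query).tolist()
```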
75,060,822
5,446,749
Right/Left align several values together in Python Logging Formatter
<p>In order to left-align the <code>levelname</code> value inside the logs with 8 chars, one can use <code>%(levelname)-8s</code>.</p> <pre class="lang-py prettyprint-override"><code>import logging logging.basicConfig( level=logging.DEBUG, format=&quot;[%(asctime)s] (%(module)s:%(funcName)s::%(lineno)s) - %(levelname)-8s - %(message)s &quot;, handlers=[logging.StreamHandler()] ) def fake_function(): logging.info(&quot;This is a info message&quot;) fake_function() </code></pre> <p>will give:</p> <pre><code>[2023-01-09 18:03:48,842] (example:fake_function::12)-100s - INFO - This is a info message </code></pre> <p>However, I am more interested in left-aligning the 3 values <code>(%(module)s:%(funcName)s::%(lineno)s</code>. I want to do it in one block, ie having:</p> <pre><code>[2023-01-09 18:07:14,743] (example:fake_function::12) - INFO - This is a info message [2023-01-09 18:07:14,745] (another_example:another_fake_function::123456) - INFO - This is a info message [2023-01-09 18:07:14,758] (a:b::1) - INFO - This is a info message </code></pre> <p>I know I could left-align these 3 values separately, but it would leave a lot of spaces between the <code>module</code>, the <code>funcName</code> and the <code>lineno</code> making it too messy for my taste.</p> <p>I tried to use <code>%(%(module)s:%(funcName)s::%(lineno)s)-100s</code> but it did not work (it simply printed <code>-100s</code>).</p> <p>Is there a way to right/left-align several values from the logs together as one?</p>
<python><logging><python-logging>
2023-01-09 17:17:12
1
32,794
vvvvv
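%-style format strings cannot nest, but a small Formatter subclass can assemble the (module:funcName::lineno) block first and pad it as one unit; the width of 40 below is an arbitrary choice:

```python
import logging

class AlignedFormatter(logging.Formatter):
    """Pad the '(module:funcName::lineno)' block as a single left-aligned field."""

    def format(self, record):
        record.location = f"({record.module}:{record.funcName}::{record.lineno})".ljust(40)
        return super().format(record)

handler = logging.StreamHandler()
handler.setFormatter(AlignedFormatter(
    "[%(asctime)s] %(location)s - %(levelname)-8s - %(message)s"))
logging.basicConfig(level=logging.DEBUG, handlers=[handler])

def fake_function():
    logging.info("This is an info message")

fake_function()
```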
75,060,820
11,913,986
How to add a pyspark rolling window based on restricted duplicate values
<p>I have a dataframe like this: <a href="https://i.sstatic.net/T2TiV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/T2TiV.png" alt="enter image description here" /></a></p> <p>Reproduce:</p> <pre><code>df = spark.createDataFrame([(1, 4, 3), (2, 4, 2), (3, 4, 5), (1, 5, 3), (2, 5, 2), (3, 6, 5)], ['a', 'b', 'c']) </code></pre> <p>I want to restrict the duplicates of column 'b' to two, only two duplicates will be kept, rest will be dropped. After that, I want to add a new column as 'd', where there will be a rolling window of numeric values in Ascending order as 1,2 like:</p> <p><a href="https://i.sstatic.net/d9RN9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/d9RN9.png" alt="enter image description here" /></a></p> <p>Is there anything like pandas rolling window equivalent in Pyspark which I have failed to dig out from Stack Overflow and documentation where I can do something like what I may have done on pandas:</p> <pre><code>y1 = y[df.COL3 == 'b'] y1 = y1.rolling(window).apply(lambda x: np.max(x) if len(x)&gt;0 else 0).fillna('drop')y = y1.reindex(y.index, fill_value = 0).loc[lambda x : x!='drop'] </code></pre> <p>I am new to PySpark, thanks in advance.</p>
<python><dataframe><apache-spark><pyspark><apache-spark-sql>
2023-01-09 17:17:04
1
739
Strayhorn
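A sketch of the usual Spark equivalent: number the rows inside each group of identical 'b' values with row_number() over a window, keep at most two per group, and reuse that number as 'd'. The orderBy column is an assumption, since some deterministic ordering is required:

```python
from pyspark.sql import SparkSession, Window, functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([(1, 4, 3), (2, 4, 2), (3, 4, 5),
                            (1, 5, 3), (2, 5, 2), (3, 6, 5)], ['a', 'b', 'c'])

# Number rows within each group of identical 'b' values, then keep the first two.
w = Window.partitionBy('b').orderBy('a')          # 'a' chosen as a deterministic order
result = (df.withColumn('d', F.row_number().over(w))
            .filter(F.col('d') <= 2))
result.show()
```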
75,060,772
8,610,286
How to search for multiple terms in Wikidata API using 'srsearch' (e.g. from Python)?
<p>I am trying to search multiple terms on Wikidata and then parse the results locally using Python.</p> <p>I am currently looping through a list of terms and running the following piece of code:</p> <pre><code>import requests term_list = [&quot;term a&quot;, &quot;term b&quot;] for search_term in term_list: base_url = &quot;https://www.wikidata.org/w/api.php&quot; payload = { &quot;action&quot;: &quot;query&quot;, &quot;list&quot;: &quot;search&quot;, &quot;srsearch&quot;: search_term, &quot;language&quot;: &quot;en&quot;, &quot;format&quot;: &quot;json&quot;, &quot;origin&quot;: &quot;*&quot;, } res = requests.get(base_url, params=payload) </code></pre> <p>This takes a lot of time, as each iteration makes new requests.</p> <p>Is there a way I could send a batch of terms simultaneously to the Wikidata API, thereby saving me time and saving resources to the API?</p> <p><strong>edit</strong></p> <p>By digging deeper in Phabricator, it seems that I can't actually do it (<a href="https://phabricator.wikimedia.org/T194016" rel="nofollow noreferrer">https://phabricator.wikimedia.org/T194016</a>). If anyone has more information on it, it would be very useful.</p>
<python><wikidata><phabricator><wikidata-api>
2023-01-09 17:12:50
0
349
Tiago Lubiana
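The search module takes a single srsearch string per request, so the terms cannot be batched server-side; what can be done client-side is to issue the requests concurrently, which usually removes most of the wall-clock cost. A sketch with a small thread pool (kept modest to stay polite to the API):

```python
import requests
from concurrent.futures import ThreadPoolExecutor

BASE_URL = "https://www.wikidata.org/w/api.php"
term_list = ["term a", "term b"]

def search(term):
    payload = {"action": "query", "list": "search", "srsearch": term,
               "language": "en", "format": "json", "origin": "*"}
    return term, requests.get(BASE_URL, params=payload, timeout=30).json()

# A handful of worker threads is usually plenty for this kind of I/O-bound loop.
with ThreadPoolExecutor(max_workers=5) as pool:
    results = dict(pool.map(search, term_list))
print(results.keys())
```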
75,060,698
20,615,590
Scientific notation not working in Python
<p>I am using python version <code>3.9.5</code>, and trying to use scientific notation using the <code>format</code> method, but it's not working!</p> <p>I have searched across the web for this, but didn't get anything.</p> <p>I used the formatting method of <code>format(num, f'.{precision}e')</code> to turn it into scientific notation, and it works for non-decimal numbers, but when I use decimal numbers, it doesn't work:</p> <pre class="lang-py prettyprint-override"><code>num = int(10**9) # 1000000000 dec = int(10**-4) # 0.0001 print(format(num, '.5e')) print(format(dec, '.5e')) # 1.00000e+09 # 0.00000e+00 </code></pre> <p>As you can see, the second output is meant to be a different result but I got <code>0.00000e+00</code></p> <hr /> <p>Can somebody please help me solve this, thanks in advance.</p>
<python><python-3.x><formatting><format><scientific-notation>
2023-01-09 17:07:27
1
423
Pythoneer
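The formatting is fine; the int() call is what breaks it: 10**-4 evaluates to the float 0.0001 and int() truncates that to 0, so 0 is what gets formatted. Dropping the conversion gives the expected output:

```python
num = 10 ** 9      # 1000000000 (an int is fine here)
dec = 10 ** -4     # 0.0001 -- note: int(10**-4) would truncate this to 0

print(format(num, '.5e'))   # 1.00000e+09
print(format(dec, '.5e'))   # 1.00000e-04
```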
75,060,695
1,229,624
Why can't TensorFlow predict an equation of the third degree?
<p>I'm new to TensorFlow. I was able to make a simple prediction, but when I made changes it stopped working. Why, and how can I fix it?</p> <p>I used this demo, and I was able to solve an equation like this:</p> <p><code>y=2x-1</code></p> <p>By using this code:</p> <pre><code>model=Sequential([Dense(units=1,input_shape=[1])]) model.compile(optimizer='sgd',loss='mean_squared_error') xs=np.array([-1.0,0.0,1.0,2.0]) ys=np.array([-3.0,-1.0,1.0,3.0]) model.fit(xs,ys,epochs=400) print(model.predict([11,0])) </code></pre> <p>Then I tried the same concept to solve this equation:</p> <p><code>3x^3+5x^2+10</code></p> <p>This is the new code:</p> <pre><code>model=Sequential([Dense(units=1,input_shape=[1])]) model.compile(optimizer='sgd',loss='mean_squared_error') xs=np.array([5.0,6.0,7.0,8.0,10.0]) ys=np.array([435.0,730.0,1137.0,1674.0,3210.0]) model.fit(xs,ys,epochs=1000) print(model.predict([11,0])) </code></pre> <p>My question is: how do I change my code so that it solves this correctly?</p>
<python><tensorflow>
2023-01-09 17:07:08
1
24,785
Aminadav Glickshtein
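A single Dense(1) layer is a purely linear model (y = wx + b), so it can fit 2x - 1 but not a cubic; hidden layers with a nonlinear activation let the network approximate it. The layer sizes, learning rate, and epoch count below are illustrative rather than tuned, the training range is widened to include x = 11 (networks extrapolate poorly outside it), and scaling the targets would further help convergence:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras.layers import Dense

# Training data for y = 3x^3 + 5x^2 + 10, covering the point we want to predict.
xs = np.linspace(-12, 12, 500).reshape(-1, 1)
ys = 3 * xs**3 + 5 * xs**2 + 10

model = keras.Sequential([
    Dense(64, activation='relu', input_shape=(1,)),
    Dense(64, activation='relu'),
    Dense(1),                      # linear output for regression
])
model.compile(optimizer=keras.optimizers.Adam(1e-3), loss='mean_squared_error')
model.fit(xs, ys, epochs=500, verbose=0)

print(model.predict(np.array([[11.0]])))   # true value: 3*1331 + 5*121 + 10 = 4608
```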
75,060,673
2,156,115
How to print the actual shortest path with Dijkstra in Python
<p>I am an adventofcode solver, and I have a my solution using Dijkastra in Python (see attached code). I have successfully calculated how many steps it takes to get to letter &quot;E&quot;. The solution for a sample data was 31.</p> <p>My solution: <a href="https://github.com/xjantoth/aoc2022/blob/main/day12/solution.py" rel="nofollow noreferrer">https://github.com/xjantoth/aoc2022/blob/main/day12/solution.py</a></p> <p>Riddle: <a href="https://adventofcode.com/2022/day/12" rel="nofollow noreferrer">https://adventofcode.com/2022/day/12</a></p> <p>Input data</p> <pre><code>Sabqponm abcryxxl accszExk acctuvwj abdefghi </code></pre> <p>One thin which is not clear to me (since it is a first time I have used this algorithm) is how to print actual shortest path. It is definitely not a &quot;buffer&quot;, neither the &quot;seen&quot; variable I have used. So how to print this 31 coordinates which represent shortest path in Dijkastra ? Thx</p> <p>Any ideas</p> <pre><code>#!/usr/bin/env python3 from collections import deque data = [line for line in open(0).read().splitlines()] m = {&quot;S&quot;: &quot;a&quot;, &quot;E&quot;: &quot;z&quot;} E = 0 + 0j S = 0 + 0j co = {} for y, d in enumerate(data): for x, i in enumerate(d): if i == &quot;E&quot;: E = x + y*1j if i == &quot;S&quot;: S = x + y*1j co[(x + y*1j)] = [m.get(i, i), ord(m.get(i, i))] def dfs(grid, start, end): buffer = deque([(start, 0)]) seen = set() while buffer: current = buffer.popleft() if current[0] in seen: continue # Part 1 if current[0] == end: return current[1], seen, len(seen) seen.add(current[0]) neighbours = [ current[0] + (-1 + 0j), current[0] + (1 + 0j), current[0] + (0 + -1j), current[0] + (0 + 1j) ] for n in neighbours: if n.real &lt; 0 or n.imag &lt; 0 or n.real &gt;= len(data[0]) or n.imag &gt;= len(data): #print(n) continue if grid[n][1] &lt;= grid[current[0]][1] + 1: buffer.append((n, current[1]+1)) return False print(dfs(co, S, E)[0]) </code></pre>
<python><shortest-path>
2023-01-09 17:05:09
1
1,266
user2156115
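The path is not stored in buffer or seen; the standard trick is to record, for every cell that gets enqueued, which cell it was reached from (a predecessor map) and then walk back from E to S. A sketch adapted from the posted function (same complex-number coordinates and grid layout), to be called as bfs_with_path(co, S, E, len(data[0]), len(data)):

```python
from collections import deque

def bfs_with_path(grid, start, end, width, height):
    parents = {start: None}            # predecessor map: how each cell was first reached
    buffer = deque([start])
    while buffer:
        current = buffer.popleft()
        if current == end:
            # Walk the predecessor chain back from the goal to the start.
            path, node = [], end
            while node is not None:
                path.append(node)
                node = parents[node]
            return path[::-1]          # start -> ... -> end; len(path) - 1 steps
        for step in (-1, 1, -1j, 1j):
            n = current + step
            if n in parents:           # already reached: doubles as the 'seen' set
                continue
            if not (0 <= n.real < width and 0 <= n.imag < height):
                continue
            if grid[n][1] <= grid[current][1] + 1:
                parents[n] = current
                buffer.append(n)
    return None
```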
75,060,600
1,061,892
Reindex from MultiIndex removes row values
<p>I have a case where I have a dataset that I am trying to fill in missing dates for each value in the categorical column I am additionally grouping by data. I was following this <a href="https://stackoverflow.com/questions/14856941/insert-0-values-for-missing-dates-within-multiindex">solution</a>, and at the point where I reindex the cartesian product, I see that all row values for <code>ap</code> and <code>qps</code> columns (Non-indexed) are now NaN. I'm curious where I might have gone wrong with trying to use this solution? Is there something I missed with re-indexing?</p> <p><strong>Issues in question:</strong></p> <pre><code>new_dma_values_df = dma_values_groupby.reindex(new_index) created_at (Level 1) | dma_description (Level 2) | ap | qps 2021-01-01 | ALBANY - SCHENECTADY - TROY | NaN | NaN ... </code></pre> <p><strong>Full Code:</strong></p> <pre><code>dma_values_groupby.columns Index(['created_at', 'dma_description', 'ap', 'qps'], dtype='object') idx = pd.period_range(min(dma_values_groupby['created_at']), max(dma_values_groupby['created_at'])) PeriodIndex(['2022-01-01', '2022-01-02', '2022-01-03', '2022-01-04', '2022-01-05', '2022-01-06', '2022-01-07', '2022-01-08', '2022-01-09', '2022-01-10', ... '2022-12-13', '2022-12-14', '2022-12-15', '2022-12-16', '2022-12-17', '2022-12-18', '2022-12-19', '2022-12-20', '2022-12-21', '2022-12-22'], dtype='period[D]', length=356) dma_values_groupby.set_index(['created_at', 'dma_description'], inplace=True) dma_values_groupby.head(5) created_at (Level 1) | dma_description (Level 2) | ap | qps 2021-01-01 | ALBANY - SCHENECTADY - TROY | 3 | 1 ... (created_at_index, dma_description_index) = dma_values_groupby.index.levels new_index = pd.MultiIndex.from_product([idx, dma_description_index]) new_index MultiIndex([('2022-01-01', 'ABILENE - SWEETWATER'), ('2022-01-01', 'ALBANY - SCHENECTADY - TROY'), ('2022-01-01', 'ALBANY, GA'), ... names=[None, 'dma_description'], length=72268) new_dma_values_df = dma_values_groupby.reindex(new_index) created_at (Level 1) | dma_description (Level 2) | ap | qps 2021-01-01 | ALBANY - SCHENECTADY - TROY | NaN | NaN ... new_dma_values_df = new_dma_values_df.fillna(0).astype(int) # Value I plan to use after the fix </code></pre>
<python><pandas>
2023-01-09 16:59:51
0
5,934
cphill
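The most likely cause: pd.period_range produces Period labels, while the created_at level of the existing index holds Timestamps, so nothing in the new MultiIndex matches and reindex fills every row with NaN. Building the full range with pd.date_range (or converting created_at to periods) keeps the labels comparable; a sketch with a small stand-in frame:

```python
import pandas as pd

# Toy stand-in for dma_values_groupby; the real data comes from the asker's query.
dma_values_groupby = pd.DataFrame({
    "created_at": pd.to_datetime(["2022-01-01", "2022-01-03"]),
    "dma_description": ["ALBANY - SCHENECTADY - TROY", "ABILENE - SWEETWATER"],
    "ap": [3, 5],
    "qps": [1, 2],
})

# Use Timestamps (date_range), not Periods, so the labels match the existing index.
idx = pd.date_range(dma_values_groupby["created_at"].min(),
                    dma_values_groupby["created_at"].max(), freq="D")

dma_values_groupby = dma_values_groupby.set_index(["created_at", "dma_description"])
new_index = pd.MultiIndex.from_product(
    [idx, dma_values_groupby.index.levels[1]],
    names=["created_at", "dma_description"])

new_df = dma_values_groupby.reindex(new_index).fillna(0).astype(int)
print(new_df)
```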
75,060,534
12,990,185
Python GET Rest API - package is downloaded but I cannot open it (invalid)
<p>I must run python to get some artifacts from repository in following syntax (invoked from batch with its variables) so this part to pass arguments is not changeable.</p> <pre><code>python get_artifacts.py %USERNAME%:%PASSWORD% http://url/artifactory/package.zip </code></pre> <p>My python script is the following:</p> <pre><code>import sys import requests from requests.auth import HTTPBasicAuth def get_artifact(url, save_artifact_name, username, password, chunk_size=128): try: get_method = requests.get(url, auth = HTTPBasicAuth(username, password), stream=True) with open(save_artifact_name, 'wb') as artifact: for chunk in get_method.iter_content(chunk_size=chunk_size): artifact.write(chunk) except requests.exceptions.RequestException as error: sys.exit(str(error)) if __name__ == '__main__': username_and_password = sys.argv[1].split(':') username = username_and_password[0] password = username_and_password[1] url = sys.argv[2] save_artifact_name = url.split(&quot;/&quot;)[-1] print(f'Retrieving artifact {save_artifact_name}...') get_artifact(url, save_artifact_name, username, password) print(&quot;Finished successfully!&quot;) </code></pre> <p>Now I CAN see my package downloaded, but my zip package is <strong>invalid</strong>. Of course with some other tool like <strong>curl.exe</strong> the same works. So definitely I am missing something in python script but not able to determine what am I missing (download works but package is invalid).</p> <p>Thanks a lot!</p>
<python><rest><python-3.8>
2023-01-09 16:54:09
2
1,260
vel
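A common reason the saved file is invalid is that the server answered with an error page (bad credentials, wrong path, redirect to a login page) and that body was written to disk as if it were the archive. Checking the status before writing makes that visible; a sketch of the download function with raise_for_status() added:

```python
import sys
import requests
from requests.auth import HTTPBasicAuth

def get_artifact(url, save_artifact_name, username, password, chunk_size=8192):
    try:
        with requests.get(url, auth=HTTPBasicAuth(username, password), stream=True) as resp:
            # If authentication or the URL is wrong, the body is an error page,
            # not the zip; raise instead of silently saving it.
            resp.raise_for_status()
            with open(save_artifact_name, 'wb') as artifact:
                for chunk in resp.iter_content(chunk_size=chunk_size):
                    artifact.write(chunk)
    except requests.exceptions.RequestException as error:
        sys.exit(str(error))
```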
75,060,528
12,991,231
Given multiple lists, how to find the values between neighboring lists?
<p>I have multiple lists of increasing numbers. The elements in each list are strictly greater than those in its previous neighbor. i.e. <code>[1,2,3], [6,7,8], [10,11,12]</code>.</p> <p>How to find the numbers between neighboring lists? In this case, the results would be <code>[4,5], [9]</code>.</p> <p>If there are only two lists, I can use something like</p> <pre><code>a = [1,2,3] b = [4,5,6] result = list(range(a[-1]+1,b[0])) </code></pre> <p>but I can't think of a simple and fast way to construct a loop to do this if I have more than two lists.</p>
<python><list>
2023-01-09 16:53:32
2
337
sensationti
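The two-list recipe generalises directly by pairing each list with its right-hand neighbour via zip:

```python
lists = [[1, 2, 3], [6, 7, 8], [10, 11, 12]]

# zip() pairs each list with the next one, so the two-list formula applies per pair.
gaps = [list(range(a[-1] + 1, b[0])) for a, b in zip(lists, lists[1:])]
print(gaps)   # [[4, 5], [9]]
```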
75,060,361
15,500,727
Combining specific rows conditionally and adding the output to an existing row in pandas
<p>suppose I have following data frame :</p> <pre><code>data = {'age' :[10,11,12,11,11,10,11,13,13,13,14,14,15,15,15], 'num1':[10,11,12,13,14,15,16,17,18,19,20,21,22,23,24], 'num2':[20,21,22,23,24,25,26,27,28,29,30,31,32,33,34]} df = pd.DataFrame(data) </code></pre> <p>I want to sum rows for age 14 and 15 and keep those new values as age 14. my expected output would be like this:</p> <pre><code> age time1 time2 1 10 10 20 2 11 11 21 3 12 12 22 4 11 13 23 5 11 14 24 6 10 15 25 7 11 16 26 8 13 17 27 9 13 18 28 10 13 19 29 11 14 110 160 </code></pre> <p>in the code below, I have tried to <code>group.by</code> age but it does not work for me:</p> <pre><code>df1 =df.groupby(age[age &gt;=14])['num1', 'num2'].apply(', '.join).reset_index(drop=True).to_frame() </code></pre>
<python><pandas>
2023-01-09 16:39:47
2
485
mehmo
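One way to get that result: keep the rows below age 14 untouched and collapse the age 14 and 15 rows into a single summed row labelled 14, then concatenate:

```python
import pandas as pd

data = {'age':  [10, 11, 12, 11, 11, 10, 11, 13, 13, 13, 14, 14, 15, 15, 15],
        'num1': [10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24],
        'num2': [20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34]}
df = pd.DataFrame(data)

# Sum the age >= 14 rows into one row labelled age 14; leave the rest as they are.
mask = df['age'] >= 14
combined = df.loc[mask, ['num1', 'num2']].sum().to_frame().T
combined.insert(0, 'age', 14)

result = pd.concat([df.loc[~mask], combined], ignore_index=True)
print(result)   # last row: age 14, num1 110, num2 160
```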
75,060,332
1,373,258
Reading survey data CSV with multiple selection sub-columns?
<p>I would like to import this data from a Navigraph survey results. <a href="https://navigraph.com/blog/survey2022" rel="nofollow noreferrer">https://navigraph.com/blog/survey2022</a></p> <p>The dataset is here: <a href="https://download.navigraph.com/docs/flightsim-community-survey-by-navigraph-2022-data.zip" rel="nofollow noreferrer">https://download.navigraph.com/docs/flightsim-community-survey-by-navigraph-2022-data.zip</a></p> <p>However, I noticed the structure is something I'm not quite used to, and perhaps this is how a lot of polling data is shared. The semicolons being separators is not an issue. It's the fact there's a mix of &quot;select multiple&quot; responses as columns. The tidiest thing is starting at the third row, each row is a single respondent.</p> <p>How can I clean up this data so it is as &quot;tidy&quot; as possible? How would I <code>melt()</code> these columns into rows? How do I handle the multiple selection responses in the sub-columns?</p> <p>I'd like the questions and responses to simply be two columns respectively.</p> <p><a href="https://i.sstatic.net/fNmTE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fNmTE.png" alt="enter image description here" /></a></p>
<python><pandas>
2023-01-09 16:37:49
1
11,617
tmn
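A sketch of one tidying approach, under the assumption that the export has two header rows (the question text, then the sub-option of "select multiple" questions), one respondent per row, and ';' as the separator; the file name is illustrative:

```python
import pandas as pd

raw = pd.read_csv("2022-navigraph-survey.csv", sep=";", header=[0, 1])
raw.columns.names = ["question", "option"]

# Stack both column levels so each answer becomes one row, then merge the
# question text and its sub-option into a single question label.
tidy = (raw.stack(level=["question", "option"])
           .rename("response")
           .reset_index(level=["question", "option"]))
tidy["question"] = tidy["question"].str.cat(tidy["option"], sep=" | ")
tidy = tidy[["question", "response"]]
print(tidy.head())
```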
75,060,298
8,512,262
Why do the tkinter geometry manager methods return None instead of returning the widget on which they're called?
<p><em><strong>EDIT: I have submitted this as a feature improvement proposal via the GitHub issue tracker for cpython. See <a href="https://github.com/python/cpython/issues/100891" rel="nofollow noreferrer">Issue 100891</a>.</strong></em></p> <hr> <p><em>N.B.: I recognize that this question may border on &quot;opinion-based&quot; - if there is a better venue for this discussion, please let me know and I'll remove it. I appreciate your candor!</em></p> <p>It seems like an extremely common pitfall for newcomers to tkinter to do the following:</p> <pre><code>my_button = tkinter.Button(text='Hello').pack() </code></pre> <p>only to run into errors because <code>my_button</code> evaluates to <code>None</code> after being chained to a geometry manager method, i.e. <code>pack</code>, <code>place</code>, or <code>grid</code>.</p> <p>Typical practice for this reason is to declare widgets separately from adding them to a geometry manager, e.g.:</p> <pre><code>my_button = tkinter.Button(text='Hello') my_button.pack() </code></pre> <hr> <p>A quick look into the <a href="https://github.com/python/cpython/blob/main/Lib/tkinter/__init__.py#L2431" rel="nofollow noreferrer">source code</a> for the geometry manager classes shows that it would be an extremely trivial change to return the widget on which they're called. The same line can be added to each geometry manager's respective <code>_configure</code> method: <code>return self</code> (I have done so below)</p> <h4>pack</h4> <pre><code>class Pack: &quot;&quot;&quot;Geometry manager Pack. Base class to use the methods pack_* in every widget.&quot;&quot;&quot; def pack_configure(self, cnf={}, **kw): &quot;&quot;&quot;Pack a widget in the parent widget. Use as options: after=widget - pack it after you have packed widget anchor=NSEW (or subset) - position widget according to given direction before=widget - pack it before you will pack widget expand=bool - expand widget if parent size grows fill=NONE or X or Y or BOTH - fill widget if widget grows in=master - use master to contain this widget in_=master - see 'in' option description ipadx=amount - add internal padding in x direction ipady=amount - add internal padding in y direction padx=amount - add padding in x direction pady=amount - add padding in y direction side=TOP or BOTTOM or LEFT or RIGHT - where to add this widget. &quot;&quot;&quot; self.tk.call( ('pack', 'configure', self._w) + self._options(cnf, kw)) return self # return the widget passed to this method pack = configure = config = pack_configure </code></pre> <h4>place</h4> <pre><code>class Place: &quot;&quot;&quot;Geometry manager Place. Base class to use the methods place_* in every widget.&quot;&quot;&quot; def place_configure(self, cnf={}, **kw): &quot;&quot;&quot;Place a widget in the parent widget. 
Use as options: in=master - master relative to which the widget is placed in_=master - see 'in' option description x=amount - locate anchor of this widget at position x of master y=amount - locate anchor of this widget at position y of master relx=amount - locate anchor of this widget between 0.0 and 1.0 relative to width of master (1.0 is right edge) rely=amount - locate anchor of this widget between 0.0 and 1.0 relative to height of master (1.0 is bottom edge) anchor=NSEW (or subset) - position anchor according to given direction width=amount - width of this widget in pixel height=amount - height of this widget in pixel relwidth=amount - width of this widget between 0.0 and 1.0 relative to width of master (1.0 is the same width as the master) relheight=amount - height of this widget between 0.0 and 1.0 relative to height of master (1.0 is the same height as the master) bordermode=&quot;inside&quot; or &quot;outside&quot; - whether to take border width of master widget into account &quot;&quot;&quot; self.tk.call( ('place', 'configure', self._w) + self._options(cnf, kw)) return self # return the widget passed to this method place = configure = config = place_configure </code></pre> <h4>grid</h4> <pre><code>class Grid: &quot;&quot;&quot;Geometry manager Grid. Base class to use the methods grid_* in every widget.&quot;&quot;&quot; # Thanks to Masazumi Yoshikawa (yosikawa@isi.edu) def grid_configure(self, cnf={}, **kw): &quot;&quot;&quot;Position a widget in the parent widget in a grid. Use as options: column=number - use cell identified with given column (starting with 0) columnspan=number - this widget will span several columns in=master - use master to contain this widget in_=master - see 'in' option description ipadx=amount - add internal padding in x direction ipady=amount - add internal padding in y direction padx=amount - add padding in x direction pady=amount - add padding in y direction row=number - use cell identified with given row (starting with 0) rowspan=number - this widget will span several rows sticky=NSEW - if cell is larger on which sides will this widget stick to the cell boundary &quot;&quot;&quot; self.tk.call( ('grid', 'configure', self._w) + self._options(cnf, kw)) return self # return the widget passed to this method grid = configure = config = grid_configure </code></pre> <p>The crux of my question is: <strong>Can anyone explain the design rationale behind this &quot;feature&quot; of tkinter? Ultimately, is a change like this worth a pull request / PEP? Would this cause undue breaking changes to tkinter?</strong></p> <p>This isn't necessarily a problem so much as it is the result of a deep dive taken after seeing <em>so many</em> questions here regarding this behavior.</p>
<python><tkinter>
2023-01-09 16:34:20
2
7,190
JRiggles
75,060,194
9,944,937
Get the time in hours of a time-series in python
<p>this might seem like a trivia question: I have a list of datapoints that have been recorded every 5 minutes with an overlap of 2.5 minutes (2 and a half minutes). I also have the timestamp of the start of the recording and another timestamp from where I need to start counting the time (e.g. the chronometer start):<a href="https://i.sstatic.net/YeiSy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YeiSy.png" alt="enter image description here" /></a></p> <p>I need to calculate how many hours have past from the start of the chronometer to the end of the recordings and make a dataframe where in one column I have the recordings and in another column the hour from the start of the chronometer to which that recording belongs: e.g.</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>recording</th> <th>hours from chronometer start</th> </tr> </thead> <tbody> <tr> <td>0.262</td> <td>0</td> </tr> <tr> <td>0.243</td> <td>0</td> </tr> <tr> <td>0.263</td> <td>0</td> </tr> <tr> <td>0.342</td> <td>1</td> </tr> <tr> <td>0.765</td> <td>1</td> </tr> <tr> <td>0.111</td> <td>1</td> </tr> <tr> <td>...</td> <td>...</td> </tr> </tbody> </table> </div> <p>This is how I'm doing it in python:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import pandas as pd from math import floor recordings = list(np.random.rand(1000)) # an example of recording chronometer_start = 1670000000 #timestamp start_recording = 1673280570 #timestamp gap_in_seconds = start_recording - chronometer_start # given that the recordings are of 5 minutes each but with 2.5 minutes overlap, # I can calculate how many Null values to add at the beginning of the recording to # fill the gap from the chronometer start: gap_in_n_records = round(gap_in_seconds / 60 / 2.5) # fill the gap with null values recordings = [np.nan for _ in range(gap_in_n_records)] + recordings minutes = [5] # the first recording has no overlap for _ in range(len(recordings)-1): minutes += [minutes[-1]+2.5] hours = pd.Series(minutes).apply(lambda x: floor(x/60)) df = pd.DataFrame({ 'recording' : recordings, 'hour' : hours }) </code></pre> <p>But I'm worried I'm making some mistakes because then my data don't align with my results. Is there a better way of doing this?</p>
<python><pandas><numpy><time><time-series>
2023-01-09 16:26:04
1
1,101
Fabio Magarelli
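An alternative that avoids counting padding rows by hand: since consecutive 5-minute windows start 2.5 minutes apart, the start time of window i is start_recording + 150 * i seconds, and the hour index follows by integer division from the chronometer start. Whether a window is attributed to its start, midpoint, or end is a modelling choice; this sketch uses the start:

```python
import numpy as np
import pandas as pd

recordings = list(np.random.rand(1000))   # example recordings, as in the question
chronometer_start = 1670000000            # timestamp
start_recording = 1673280570              # timestamp

# Window i starts 150 * i seconds after the first one (5-minute windows, 2.5-minute overlap).
start_times = start_recording + 150 * np.arange(len(recordings))
hours = (start_times - chronometer_start) // 3600

df = pd.DataFrame({"recording": recordings, "hour": hours})
print(df.head())
```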
75,060,025
668,498
Unable to locate package apt-get when building a custom Google Cloud Workstation image
<p>I am trying to &quot;build and push a modified workstations image to a container registry&quot; as explained in this previous <a href="https://stackoverflow.com/questions/74246955/install-php-in-home-folder-of-persistent-disk-in-google-cloud-workstations">SO answer</a>.</p> <p>This is the <code>Dockerfile</code> that I am trying to use:</p> <pre><code>FROM us-central1-docker.pkg.dev/cloud-workstations-images/predefined/code-oss:latest RUN \ sudo apt-get update &amp;&amp; \ sudo apt-get install -y apache2 \ sudo apt-get install -y libbz2-dev \ sudo apt-get install -y php &amp;&amp; \ sudo apt-get install php-mysqli </code></pre> <p>When I try to build the image using <code>docker build -t my-custom-image .</code> I eventually receive this error:</p> <pre><code>E: Unable to locate package apt-get E: Unable to locate package install E: Unable to locate package apt-get E: Unable to locate package install The command '/bin/sh -c sudo apt-get update &amp;&amp; sudo apt-get install -y apache2 sudo apt-get install -y libbz2-dev sudo apt-get install -y php &amp;&amp; sudo apt-get install php-mysqli' returned a non-zero code: 100 </code></pre> <p>What am I doing wrong? Why can't I build this image?</p>
<python><google-cloud-platform><apt-get><google-cloud-workstations>
2023-01-09 16:09:51
1
3,615
DanielAttard
75,060,009
4,865,723
Normalizing one (umlaut) character results in two
<p>I assume I didn't fully understand <code>unicodedata.normalize()</code> function in Python.</p> <pre><code>from unicodedata import normalize result = normalize('NFKD', 'Ä') print(result) # 'A' print(len(result)) # 2 print(result == 'A') # False print(result[0] == 'A') # True </code></pre> <p>I'm confused why the <code>len()</code> is 2 instead of 1.</p>
<python><normalize>
2023-01-09 16:08:31
0
12,450
buhtz
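This is expected: NFKD (like NFD) decomposes 'Ä' into the base letter 'A' followed by U+0308 COMBINING DIAERESIS, so the result is two code points that render as one glyph; NFC/NFKC recompose them. A short illustration, including the common follow-up of stripping the combining marks:

```python
import unicodedata

decomposed = unicodedata.normalize('NFKD', 'Ä')
print([unicodedata.name(ch) for ch in decomposed])
# ['LATIN CAPITAL LETTER A', 'COMBINING DIAERESIS']

# NFC puts the pair back together into the single precomposed code point.
print(len(unicodedata.normalize('NFC', decomposed)))   # 1

# A common reason for using NFKD: drop the combining marks to strip accents.
stripped = ''.join(ch for ch in decomposed if not unicodedata.combining(ch))
print(stripped)   # 'A'
```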
75,059,959
3,182,044
sklearn2pmml omits field names
<p>I export an instance of <code>sklearn.preprocessing.StandardScaler</code> into a pmml-file. The problem is, that the names of the fields do not appear in the pmml-file, e.g. when using the iris dataset then the original field names <code>['sepal length (cm)','sepal width (cm)','petal length (cm)','petal width (cm)']</code> do not appear. Instead only names like x1,x2, etc appear. Is there a way to get the original field names in the pmml-file? The Following code should be runnable:</p> <pre><code>from sklearn2pmml import sklearn2pmml, PMMLPipeline, make_pmml_pipeline from sklearn.datasets import load_iris from sklearn.pipeline import Pipeline from sklearn.preprocessing import StandardScaler import pandas as pd data = load_iris() dfIris = pd.DataFrame(data=data.data, columns=data.feature_names) ssModel = StandardScaler() ssModel.fit(dfIris) pipe = PMMLPipeline([(&quot;StandardScaler&quot;, ssModel)]) sklearn2pmml(pipeline=make_pmml_pipeline(pipe), pmml=&quot;ssIris.pmml&quot;) </code></pre> <p>In the ssIris.pmml I see this: <a href="https://i.sstatic.net/movw4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/movw4.png" alt="enter image description here" /></a></p>
<python><scikit-learn><sklearn2pmml>
2023-01-09 16:04:57
2
345
dba
75,059,828
12,170,254
event.key seems to not be Key [pynput]
<p>I am writing an application which uses pynput to gather raw keyboard input. I needed a form of key event that could be instantiated, deleted, enabled, and disabled arbitrarily during runtime, so pynput's Global Hotkey system wouldn't work. So, I created my own event class:</p> <pre class="lang-py prettyprint-override"><code>keyEvents = [] class keyEvent(): def __init__(self, key, callback, onPress, onRelease): self.key = key, self.callback = callback self.onPress = onPress self.onRelease = onRelease self.active = True self.calls = [] keyEvents.append(self) # Called from listener thread, do not call callbacks from listener thread because then things happen at unpredictable times def fire(self, state): if self.active: print('{} fired {}({})'.format(self.key, self.callback, state)) if self.onPress and state: self.calls.append(True) elif self.onRelease and not state: self.calls.append(False) def _onKeyPress(key): print(key, key == keyboard.Key.enter) for event in keyEvents: if event.key == key: event.fire(True) else: print('Event did not fire {} != {}'.format(event.key, key)) def _onKeyRelease(key): for event in keyEvents: if event.key == key: event.fire(False) </code></pre> <p>And here I create several events, which are polled by <code>menu.exec</code>:</p> <pre class="lang-py prettyprint-override"><code>class menu(): def __init__(self, name): self.name = name self.items = [] self.keyEvents = [ keyEvent(keyboard.Key.left, self._keyLeft, True, False), keyEvent(keyboard.Key.right, self._keyRight, True, False), keyEvent(keyboard.Key.up, self._keyUp, True, False), keyEvent(keyboard.Key.down, self._keyDown, True, False), keyEvent(keyboard.Key.enter, self._keyEnter, True, False) ] for event in self.keyEvents: event.active = False ... def exec(self): for event in self.keyEvents: event.active = True self.refresh() self.active = True while self.active: for event in self.keyEvents: for call in event.calls: event.callback(call) time.sleep(0.1) </code></pre> <p>When I run the app, it gives me this output after I press the enter key:</p> <pre><code>Key.enter True Event did not fire (&lt;Key.left: &lt;65361&gt;&gt;,) != Key.enter Event did not fire (&lt;Key.right: &lt;65363&gt;&gt;,) != Key.enter Event did not fire (&lt;Key.up: &lt;65362&gt;&gt;,) != Key.enter Event did not fire (&lt;Key.down: &lt;65364&gt;&gt;,) != Key.enter Event did not fire (&lt;Key.enter: &lt;65293&gt;&gt;,) != Key.enter </code></pre> <p>The first line tells me that the key passed to <code>_onKeyPress</code> is indeed <code>keyboard.Key.enter</code>. The last 5 lines tell me that <code>_onKeyPress</code> refused to call <code>event.fire</code> for all 5 events, including the one that was assigned to <code>keyboard.Key.enter</code>. Nowhere else in the code does <code>event.key</code> get modified. It is first set in <code>keyEvent.__init__</code> and accessed in <code>_onKeyPressed</code> for the comparison and yet, the enter key that <code>_onKeyPressed</code> sees in the <code>event</code> object is different. Why is this?</p>
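<p>For completeness, here is a tiny standalone snippet (nothing to do with pynput itself) that reproduces the kind of comparison shown in the log output, in case the tuple-looking repr of <code>event.key</code> is relevant to what I am seeing:</p>
<pre><code>key = 'enter'
a = key,    # note the trailing comma: this builds a 1-element tuple
b = key

print(a)          # ('enter',)
print(a == key)   # False
print(b == key)   # True
</code></pre>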
<python><pynput>
2023-01-09 15:56:02
1
521
AwesomeCronk
75,059,798
1,218,369
MacOS `subprocess` raising: FileNotFoundError: /usr/sbin/sysctl
<p>I've suddenly started getting an error on all versions of Python on MacOS reporting the following when using <code>subprocess</code>:</p> <p><code>FileNotFoundError: [Errno 2] No such file or directory: '/usr/sbin/sysctl -n machdep.cpu.brand_string'</code></p> <p>However, <code>/usr/sbin/sysctl</code> does exist; I can run the command myself, under my normal user, without any issue - just not with a Python interpreter. When launching a Python interpreter owned by root I don't get this issue.</p> <p>The permissions and ownership are reported as the following:</p> <p><code>-rwxr-xr-x 1 root wheel 135296 Oct 28 09:43 /usr/sbin/sysctl*</code></p> <p>Changing the permissions/ownership doesn't appear possible even under <code>sudo</code> anyway, as <code>Operation not permitted</code> is reported.</p>
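<p>For context, this is roughly the call I would expect to work (a minimal sketch, not the exact code that raises the error, which comes from a third-party package):</p>
<pre><code>import subprocess

# arguments passed as a list, no shell involved
out = subprocess.check_output(['/usr/sbin/sysctl', '-n', 'machdep.cpu.brand_string'])
print(out.decode().strip())
</code></pre>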
<python><macos><subprocess><sysctl>
2023-01-09 15:53:48
1
1,035
luke
75,059,652
9,006,687
How to get the document of an enumeration in python
<p>I have plenty of files formatted as follows:</p> <pre><code># Content of Enumeration1.py from enum import IntEnum class Enumeration1(IntEnum): &quot;&quot;&quot; Some documentation. &quot;&quot;&quot; key_0 = 0 key_1 = 1 key_2 = 2 </code></pre> <p>How can I extract the documentation using Python code from another &quot;main.py&quot;, i.e.,</p> <pre><code>path_to_file = &quot;./Enumeration1.py&quot; doc = get_documentation(path_to_file) # how does this function work? print(doc) # outputs &quot;Some documentation.&quot; </code></pre>
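<p>One direction I have been considering is loading the module from its path with <code>importlib</code> and reading <code>__doc__</code> (just a sketch; the hard-coded class name is obviously not what I want in the general case):</p>
<pre><code>import importlib.util

def get_documentation(path_to_file, class_name='Enumeration1'):
    spec = importlib.util.spec_from_file_location('enum_module', path_to_file)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return getattr(module, class_name).__doc__
</code></pre>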
<python><enums>
2023-01-09 15:42:48
1
461
Theophile Champion
75,059,631
16,511,234
How to add a new row with new header information in same dataframe
<p>I have written code to retrieve JSON data from a URL. It works fine. I give the start and end date and it loops through the date range and appends everything to a dataframe.</p> <p>The columns are populated with the JSON data <code>sensor</code> and its corresponding values, hence the column names are like <code>sensor_1</code>. When I request the data from the URL it sometimes happens that there are new sensors and the old ones are switched off and deliver no data anymore, and often the length of the columns changes. In that case my code just adds new columns.</p> <p>What I want is a new header row in the ongoing dataframe instead of new columns.</p> <p>What I currently get with my code:</p> <pre><code>datetime;sensor_1;sensor_2;sensor_3;new_sensor_8;new_sensor_9;sensor_10;sensor_11; 2023-01-01;23.2;43.5;45.2;NaN;NaN;NaN;NaN;NaN; 2023-01-02;13.2;33.5;55.2;NaN;NaN;NaN;NaN;NaN; 2023-01-03;26.2;23.5;76.2;NaN;NaN;NaN;NaN;NaN; 2023-01-04;NaN;NaN;NaN;75;12;75;93;123; 2023-01-05;NaN;NaN;NaN;23;31;24;15;136; 2023-01-06;NaN;NaN;NaN;79;12;96;65;72; </code></pre> <p>What I want:</p> <pre><code>datetime;sensor_1;sensor_2;sensor_3; 2023-01-01;23.2;43.5;45.2; 2023-01-02;13.2;33.5;55.2; 2023-01-03;26.2;23.5;76.2; datetime;new_sensor_8;new_sensor_9;sensor_10;sensor_11; 2023-01-04;75;12;75;93;123; 2023-01-05;23;31;24;15;136; 2023-01-06;79;12;96;65;72; </code></pre> <p>My loop to retrieve the data:</p> <pre><code>start_date = datetime.datetime(2023,1,1,0,0) end_date = datetime.datetime(2023,1,6,0,0) sensor_data = pd.DataFrame() while start_date &lt; end_date: q = 'url' r = requests.get(q) j = json.loads(r.text) sub_data = pd.DataFrame() if 'result' in j: datetime = pd.to_datetime(np.array(j['result']['data'])[:,0]) sensors = np.array(j['result']['sensors']) data = np.array(j['result']['data'])[:,1:] df_new = pd.DataFrame(data, index=datetime, columns=sensors) sub_data = pd.concat([sub_data, df_new]) sensor_data = pd.concat([sensor_data, sub_data]) start_date += timedelta(days=1) </code></pre>
<python><pandas><dataframe><datetime>
2023-01-09 15:40:42
1
351
Gobrel
75,059,619
897,272
Fastest way to delete all files in large volume in python?
<p>I want to completely clear the content of a very large Linux volume containing a huge number of small files. I know how to delete files, but just doing a for loop that calls delete on each one is very slow.</p> <p>I'd just send a command down to bash to use the bash tools, but we're running in a Docker Alpine Linux container, so all the tools I would use don't exist. I suppose I could change the Dockerfile to ensure they're there, but that feels a bit ugly.</p>
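<p>For reference, this is roughly the per-file loop I have today (the path is a placeholder); this is the pattern I would like to speed up or replace:</p>
<pre><code>import os

root = '/data'   # placeholder for the mount point of the volume

# bottom-up walk so directories are already empty when we remove them
for dirpath, dirnames, filenames in os.walk(root, topdown=False):
    for name in filenames:
        os.unlink(os.path.join(dirpath, name))
    for name in dirnames:
        os.rmdir(os.path.join(dirpath, name))
</code></pre>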
<python><python-3.x><performance>
2023-01-09 15:39:16
1
6,521
dsollen
75,059,248
13,083,583
Set comparison optimization
<h3>Description</h3> <p>I have two large lists of sets</p> <pre class="lang-py prettyprint-override"><code>A = [ {...}, ..., {...} ] B = [ {...}, ..., {...} ] </code></pre> <p>I'm performing a very cost-intensive list comprehension that, for every element in every set in A, checks if there is a match with any element in B's sets and, if so, returns B's respective sets.</p> <pre class="lang-py prettyprint-override"><code>[find_sets(i) for i in A] </code></pre> <h3>Example</h3> <p>A minimal example looks like this:</p> <pre><code>import secrets # create sample data def generate_random_strings(num_strings, string_length): random_strings = [] for i in range(num_strings): random_strings.append(secrets.token_hex(string_length)) random_strings = set(random_strings) return random_strings A = [generate_random_strings(5, 1) for i in range(10000)] B = [generate_random_strings(5, 1) for i in range(10000)] # set checker def find_sets(A): matching_sets = [] for b_set in B: if A &amp; b_set: matching_sets.append(b_set) return matching_sets result = [find_sets(i) for i in A] </code></pre> <h3>Multiprocessing</h3> <p>It's obviously faster on all my 32 CPU cores:</p> <pre class="lang-py prettyprint-override"><code>from tqdm.contrib.concurrent import process_map pool = multiprocessing.Pool(processes=32) results = process_map(find_sets, A, chunksize=100) </code></pre> <h3>Problem</h3> <p>While for a few thousand elements in A and B the list comprehension runs fairly fast on my machine, and multiprocessing helps to scale it up to around 50.000 elements, it becomes very slow for 500.000 elements in each list, which is my actual size.</p> <p>Is there any way to speed up my function code-wise with vectorization, hashing the sets beforehand, or working with some kind of optimized data types (frozensets didn't help)?</p>
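<p>One idea I have been toying with, but have not benchmarked on the full data yet, is an inverted index over B that maps each element to the positions of the sets containing it, so each lookup only touches candidate sets instead of scanning all of B. A rough sketch:</p>
<pre><code>from collections import defaultdict

# map each element to the indices of the sets in B that contain it
index = defaultdict(set)
for j, b_set in enumerate(B):
    for element in b_set:
        index[element].add(j)

def find_sets_indexed(a_set):
    hits = set()
    for element in a_set:
        hits.update(index.get(element, ()))
    return [B[j] for j in hits]
</code></pre>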
<python><pandas><numpy><multiprocessing><vectorization>
2023-01-09 15:10:08
2
2,368
do-me
75,059,230
5,455,532
Using `drop_duplicates` on a Pandas dataframe isn't dropping rows
<p><strong>Situation</strong></p> <p>I have dataframe similar to below ( although I've removed many of the rows for this example, as evidenced in the 'index' column):</p> <p><code>df</code></p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>index</th> <th>id</th> <th>name</th> <th>last_updated</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>1518</td> <td>Maker</td> <td>2022-12-31T03:02:00.000Z</td> </tr> <tr> <td>1</td> <td>1518</td> <td>Maker</td> <td>2022-12-31T02:02:00.000Z</td> </tr> <tr> <td>2</td> <td>1518</td> <td>Maker</td> <td>2022-12-31T14:02:00.000Z</td> </tr> <tr> <td>3</td> <td>1518</td> <td>Maker</td> <td>2022-12-31T16:02:00.000Z</td> </tr> <tr> <td>23</td> <td>1518</td> <td>Maker</td> <td>2022-12-31T17:02:00.000Z</td> </tr> <tr> <td>24</td> <td>2280</td> <td>Filecoin</td> <td>2022-12-31T01:02:00.000Z</td> </tr> <tr> <td>25</td> <td>2280</td> <td>Filecoin</td> <td>2022-12-31T03:01:00.000Z</td> </tr> <tr> <td>26</td> <td>2280</td> <td>Filecoin</td> <td>2022-12-31T02:01:00.000Z</td> </tr> <tr> <td>27</td> <td>2280</td> <td>Filecoin</td> <td>2022-12-31T00:02:00.000Z</td> </tr> <tr> <td>47</td> <td>2280</td> <td>Filecoin</td> <td>2022-12-31T08:02:00.000Z</td> </tr> <tr> <td>48</td> <td>4558</td> <td>Flow</td> <td>2022-12-31T01:02:00.000Z</td> </tr> <tr> <td>49</td> <td>4558</td> <td>Flow</td> <td>2022-12-31T02:01:00.000Z</td> </tr> <tr> <td>71</td> <td>4558</td> <td>Flow</td> <td>2022-12-31T05:02:00.000Z</td> </tr> <tr> <td>72</td> <td>5026</td> <td>Orchid</td> <td>2022-12-31T01:02:00.000Z</td> </tr> <tr> <td>73</td> <td>5026</td> <td>Orchid</td> <td>2022-12-31T03:02:00.000Z</td> </tr> <tr> <td>74</td> <td>5026</td> <td>Orchid</td> <td>2022-12-31T02:01:00.000Z</td> </tr> <tr> <td>75</td> <td>5026</td> <td>Orchid</td> <td>2022-12-31T00:02:00.000Z</td> </tr> </tbody> </table> </div> <p>I want a version of the above dataframe but with only 1 row for each <code>id</code> parameter. Keeping the last instance.</p> <p><strong>This is my code:</strong></p> <p><code>df.drop_duplicates(subset=['id'], keep='last')</code></p> <p><strong>Expectation</strong></p> <p>That the new df would retain only 4 rows, the 'last' instance for each 'id' value in dataframe <code>df</code>.</p> <p><strong>Result</strong></p> <p>After running the <code>drop_duplicates</code> command, the <code>df</code> returns the exact same dataframe. Same shape as prior to my <code>drop_duplicates</code> attempt.</p> <p>I've been trying to use this post to sort it out, but obvs there's something I'm not getting right:</p> <p><a href="https://stackoverflow.com/questions/66215844/pandas-select-rows-with-no-duplicate">pandas select rows with no duplicate</a></p> <p>I'd appreciate any input on why the last instance of rows with duplicate 'id' values are not being dropped.</p>
<python><pandas><dataframe><drop-duplicates>
2023-01-09 15:08:35
2
301
dsx
75,059,168
9,173,710
How to convert multi column expressions from Pandas to Polars
<p>I just found out about the Polars lib and I wanted to convert some old functions to get familiar.</p> <p>However, I stumbled upon an issue with my code. The &quot;Mean_Angle&quot; column is not calculated, and I have no idea if the last part even works as intended, it aborts during the group_by operation as the column is missing.</p> <p>This is the <strong>pandas</strong> code I want to convert:</p> <pre class="lang-py prettyprint-override"><code>def calc_mean_and_error(df: pd.DataFrame, columns=None, groupby=&quot;Magn_Pos&quot;) -&gt; pd.DataFrame: data = df.copy() if columns is None: columns = ['Left_Angle', 'Right_Angle', 'Magn_Pos', 'Magn_Field'] if 'Left_Angle' in columns and 'Right_Angle' in columns: data['Mean_Angle'] = (data['Left_Angle'] + data['Right_Angle']) / 2 columns.append('Mean_Angle') grouped_df = data[columns].groupby(groupby,sort=False) num_points_per_group = grouped_df.size().values mean_df = grouped_df.mean() # standard deviation mean_df[['Left_Angle_SDEV','Right_Angle_SDEV','Mean_Angle_SDEV']] = grouped_df[['Left_Angle','Right_Angle','Mean_Angle']].std() # standard error, 1 sigma confidence interval mean_df[['Left_Angle_SEM_68','Right_Angle_SEM_68','Mean_Angle_SEM_68']] = grouped_df[['Left_Angle','Right_Angle','Mean_Angle']].sem() # standard error, 2 sigma confidence interval - t distribution t_fac_95_conf_int = stats.t.ppf(0.95, num_points_per_group) # factor according to https://en.wikipedia.org/wiki/Student%27s_t-distribution mean_df[['Left_Angle_SEM_95','Right_Angle_SEM_95','Mean_Angle_SEM_95']] = mean_df[['Left_Angle_SEM_68','Right_Angle_SEM_68','Mean_Angle_SEM_68']].multiply(t_fac_95_conf_int, axis=0) # standard error, 3 sigma confidence interval - t distribution t_fac_99_conf_int = stats.t.ppf(0.997, num_points_per_group) mean_df[['Left_Angle_SEM_99','Right_Angle_SEM_99','Mean_Angle_SEM_99']] = mean_df[['Left_Angle_SEM_68','Right_Angle_SEM_68','Mean_Angle_SEM_68']].multiply(t_fac_99_conf_int, axis=0) mean_df = mean_df.reset_index() return mean_df </code></pre> <p>This is what I have so far:</p> <pre class="lang-py prettyprint-override"><code>def calc_mean_and_error(df: pl.DataFrame, columns=None, group_by=&quot;Magn_Pos&quot;) -&gt; pl.DataFrame: if columns is None: columns = ['Left_Angle', 'Right_Angle', 'Magn_Pos', 'Magn_Field'] if 'Left_Angle' in columns and 'Right_Angle' in columns: # this doesn't work? 
df.with_columns( pl.struct('Left_Angle', 'Right_Angle').map_elements(lambda x: (x['Left_Angle'] + x['Right_Angle']) / 2).alias(&quot;Mean_Angle&quot;) ) columns.append('Mean_Angle') grouped_df = df.select(columns).group_by(group_by) num_points_per_group = grouped_df.count()['count'][0] mean_df = grouped_df.mean() t_fac_95_conf_int = stats.t.ppf(0.95, num_points_per_group) # factor according to https://en.wikipedia.org/wiki/Student%27s_t-distribution t_fac_99_conf_int = stats.t.ppf(0.997, num_points_per_group) # standard deviation mean_df = df.select(columns).group_by(group_by).agg( pl.all().mean(), pl.all().std().name.suffix('_SDEV'), pl.all().std().map_elements(lambda x: x / np.sqrt(num_points_per_group)).name.suffix('_SEM_68'), # standard error pl.all().std().map_elements(lambda x: x*t_fac_95_conf_int / np.sqrt(num_points_per_group)).name.suffix('_SEM_95'), pl.all().std().map_elements(lambda x: x*t_fac_99_conf_int / np.sqrt(num_points_per_group)).name.suffix('_SEM_99'), ) return mean_df </code></pre> <p>Example:</p> <pre class="lang-py prettyprint-override"><code>import polars as pl from scipy import stats data_raw = &quot;&quot;&quot;Time\tRepetition\tLeft_Angle\tRight_Angle\tMagn_Pos\tMagn_Field 0.0\t0\t111.62539060014953\t111.65929559305457\t20.0\t0.05012 289.75\t1\t113.43406129503042\t113.29101205027376\t20.0\t0.05012 343.420999999973\t2\t113.21669960326668\t113.30918399000467\t20.0\t0.05012 397.68700000003446\t0\t114.50650196149256\t114.78488582815113\t10.0\t0.1317 456.10900000005495\t1\t114.7078936381882\t114.70239460290726\t10.0\t0.1317 507.8279999999795\t2\t115.71894177915732\t115.70104461571628\t10.0\t0.1317 565.3429999999935\t0\t121.71521327349599\t121.55379420624988\t5.0\t0.2276 612.045999999973\t1\t122.53171995914443\t122.4555143281342\t5.0\t0.2276 668.3120000000345\t2\t121.65748098845367\t121.60313424823333\t5.0\t0.2276 714.484000000055\t0\t130.88884567117995\t130.82365731381574\t2.5\t0.3011 774.9679999999935\t1\t132.72366563179372\t132.59019277520363\t2.5\t0.3011 817.765000000014\t2\t133.5549497954158\t133.4637401535662\t2.5\t0.3011 891.7029999999795\t0\t139.9155468732065\t139.78384156146674\t0.0\t0.3907 940.655999999959\t1\t143.34707217674438\t143.2278696177915\t0.0\t0.3907 984.125\t2\t144.30042471080577\t144.16800277145435\t0.0\t0.3907&quot;&quot;&quot;.encode(&quot;utf8&quot;) df = pl.read_csv(data_raw, separator='\t') df = calc_mean_and_error(df, columns=['Left_Angle', 'Right_Angle', 'Magn_Pos', 'Magn_Field']) print(df) </code></pre> <p>Error:</p> <pre><code># ColumnNotFoundError: Mean_Angle </code></pre> <p>I'm not really sure about the last part though! I am not entirely familiar with the syntax of the expressions. And I am not sure how to prevent calling group_by twice. Can someone lead me in the right direction? Thanks!</p>
<python><python-polars>
2023-01-09 15:04:10
1
1,215
Raphael
75,059,027
572,575
K-Folds cross-validator show KeyError: None of Int64Index
<p>I am trying to use the K-Folds cross-validator with a decision tree. I use a for loop to train and test on the data from KFold, like this:</p> <pre><code>df = pd.read_csv(r'C:\\Users\data.csv') # split data into X and y X = df.iloc[:,:200] Y = df.iloc[:,200] X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.2) clf = DecisionTreeClassifier() kf =KFold(n_splits=5, shuffle=True, random_state=3) cnt = 1 # Cross-Validate for train, test in kf.split(X, Y): print(f'Fold:{cnt}, Train set: {len(train)}, Test set:{len(test)}') cnt += 1 X_train = X[train] y_train = Y[train] X_test = X[test] y_test = Y[test] clf = clf.fit(X_train,y_train) predictions = clf.predict(X_test) accuracy = accuracy_score(y_test, predictions) print(&quot;test&quot;) print(y_test) print(&quot;predict&quot;) print(predictions) print(&quot;Accuracy: %.2f%%&quot; % (accuracy * 100.0)) </code></pre> <p>When I run it, it shows an error like this:</p> <pre><code>KeyError: &quot;None of [Int64Index([ 0, 1, 2, 5, 7, 8, 9, 10, 11, 12,\n ...\n 161, 164, 165, 166, 167, 168, 169, 170, 171, 173],\n dtype='int64', length=120)] </code></pre> <p>How can I fix it?</p>
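<p>As far as I understand, <code>kf.split</code> yields positional indices, so purely for illustration this is the kind of row selection I think the loop needs (using <code>.iloc</code>); I am not sure whether this is the right fix or whether my problem is elsewhere:</p>
<pre><code>for train, test in kf.split(X, Y):
    X_train, X_test = X.iloc[train], X.iloc[test]
    y_train, y_test = Y.iloc[train], Y.iloc[test]
</code></pre>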
<python><scikit-learn><cross-validation><k-fold>
2023-01-09 14:53:10
1
1,049
user572575
75,058,810
12,883,297
Create list type new column based on division operation of the existing columns in pandas
<p>I have a data frame</p> <pre><code>df = pd.DataFrame([[&quot;X&quot;,62,5],[&quot;Y&quot;,16,3],[&quot;Z&quot;,27,4]],columns=[&quot;id&quot;,&quot;total&quot;,&quot;days&quot;]) </code></pre> <pre><code>id total days X 62 5 Y 16 3 Z 27 4 </code></pre> <p>Divide the <em>total</em> column by the <em>days</em> column and create a new column <em>plan</em>, which is a list whose number of elements equals the divisor (<em>days</em>) and whose values equal the quotient; if there is any remainder, increase that many values by one, starting from the end (negative indexing).</p> <p><strong>Expected Output:</strong></p> <pre><code>df_out = pd.DataFrame([[&quot;X&quot;,62,5,[12,12,12,13,13]],[&quot;Y&quot;,16,3,[5, 5, 6]],[&quot;Z&quot;,27,4,[6, 7, 7, 7]]],columns=[&quot;id&quot;,&quot;total&quot;,&quot;days&quot;,&quot;plan&quot;]) </code></pre> <pre><code>id total days plan X 62 5 [12, 12, 12, 13, 13] Y 16 3 [5, 5, 6] Z 27 4 [6, 7, 7, 7] </code></pre> <p>How to do it in pandas?</p>
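<p>To make the rule concrete, this plain-Python sketch reproduces the expected lists (I would still like to know the idiomatic pandas way of doing it):</p>
<pre><code>def plan(total, days):
    q, r = divmod(total, days)
    return [q] * (days - r) + [q + 1] * r

df['plan'] = [plan(t, d) for t, d in zip(df['total'], df['days'])]
# X: [12, 12, 12, 13, 13], Y: [5, 5, 6], Z: [6, 7, 7, 7]
</code></pre>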
<python><python-3.x><pandas><list><dataframe>
2023-01-09 14:35:50
2
611
Chethan
75,058,761
3,696,490
subprocess still needs file after add-data in pyinstaller
<p>I am trying to generate one .exe file with pyinstaller</p> <p>my python file</p> <pre><code>import subprocess proc=r&quot;.\file.exe&quot; CLI_VERSION=subprocess.check_output([proc, '-v'],shell=True).decode('utf-8').strip() print (CLI_VERSION) </code></pre> <p>with file.exe being in the same folder as the python file <code>python myfile.py</code> works just fine and prints the expected output</p> <p>Now when I try to package that as .exe and include file.exe, subprocess still fails to find the file</p> <p><code>pyinstaller.exe --onefile --add-data &quot;.\file.exe;.&quot; .\myfile.py</code></p> <p>now take the generated.exe and try to run it:</p> <blockquote> <p>.\file.exe' is not recognized as an internal or external command, ... ... ... subprocess.CalledProcessError: Command '['.\file.exe', '-v']' returned non-zero exit status 1.</p> </blockquote> <p>I tried add-binary instead of add-data since this is an exe file but it is still not working. Please note that the file.exe always returns 0, if it is called properly.</p> <p>I am assuming this has to do with how subprocess works? is there a way to get it to work?</p> <p>Is there a way to list files included in the .exe package? Judging by the filesize variation, I think the file.exe has been added, but I believe subprocess access the filesystem directly without passing by the files included in .exe package, is that the case?</p> <p>I tried to do add-data for some files then inside the python file do &quot;dir&quot; (ls equivalent of ls) I don't see the files I included with <code>--add-data</code> in the list</p> <p>Edit: I unpackaged the generated .exe using pyinstxtractor. The outcome is that my file.exe is well included in the same place as the name_of_my_python_file.pyc in the extracted package.</p> <p>This makes me believe more in my theory: subprocess accesses the filesystem directly and does not read inside the packaged data. Does anyone have the knowledge to confirm and suggest a workaround (if possible)?</p> <p>Thanks</p>
<python><subprocess><pyinstaller>
2023-01-09 14:32:29
0
550
user206904
75,058,758
4,125,116
Python logging MemoryHandler not passing logs to handler
<p>I have this as python logging configuration in a project with the intention to batch the logs before printing it. But it seems none of the logs are getting printed..</p> <pre><code>logging.config.dictConfig({ &quot;version&quot;: 1, &quot;disable_existing_loggers&quot;: True, &quot;handlers&quot;: { &quot;stream_handler&quot;: { &quot;class&quot;: &quot;logging.StreamHandler&quot;, &quot;stream&quot;: sys.stdout, &quot;level&quot;: &quot;INFO&quot;, &quot;formatter&quot;: &quot;opentelemetry_formatter&quot; }, &quot;opentelemetry_to_console&quot;: { &quot;capacity&quot;:1, &quot;class&quot;: &quot;logging.handlers.MemoryHandler&quot;, &quot;flushLevel&quot;: &quot;DEBUG&quot;, &quot;target&quot;: &quot;stream_handler&quot;, } }, &quot;filters&quot;: {}, &quot;formatters&quot;: { &quot;opentelemetry_formatter&quot;: { &quot;()&quot;: OpentelemetryLogFormatter, &quot;use_traces&quot;: True, &quot;restrict_attributes_to&quot;: [], &quot;discard_attributes_from&quot;: RESERVED_ATTRS, &quot;meta_character_limit&quot;: 1000, &quot;body_character_limit&quot;: 500, &quot;resource_attributes&quot;: resource_attributes } }, &quot;loggers&quot;: { &quot;&quot;: { &quot;level&quot;: &quot;DEBUG&quot;, &quot;handlers&quot;: [&quot;opentelemetry_to_console&quot;], # &quot;handlers&quot;: [], &quot;propagate&quot;: True } } }) </code></pre>
<python><logging><open-telemetry>
2023-01-09 14:32:00
2
2,042
Rajat Jain
75,058,536
458,700
systemd service keep giving me error when start or get status
<p>I have a Python application and I need it to run as a service. I tried many methods and I was advised to make it a systemd service.</p> <p>I searched and tried some code.</p> <p>Here is my unit file:</p> <pre><code>[Unit] Description=Airnotifier Service After=network.target [Service] Type=idle Restart=on-failure User=root ExecStart=python3 /home/airnotifier/airnotifier/app.py [Install] WantedBy=multi-user.target </code></pre> <p>And then I run the following commands:</p> <pre><code>sudo systemctl daemon-reload sudo systemctl enable airnotifier.service sudo systemctl start airnotifier.service sudo systemctl status airnotifier.service </code></pre> <p>The service does not run and I am getting these errors:</p> <pre><code>airnotifier@airnotifier:~$ sudo systemctl status airnotifier.service ● airnotifier.service - Airnotifier Service Loaded: loaded (/lib/systemd/system/airnotifier.service; enabled; vendor preset: enabled) Active: failed (Result: exit-code) since Mon 2023-01-09 14:07:38 UTC; 1s ago Process: 2072 ExecStart=/usr/bin/python3 /home/airnotifier/airnotifier/app.py (code=exited, status=1/FAILURE) Main PID: 2072 (code=exited, status=1/FAILURE) Jan 09 14:07:38 airnotifier systemd[1]: airnotifier.service: Scheduled restart job, restart counter is at 5. Jan 09 14:07:38 airnotifier systemd[1]: Stopped Airnotifier Service. Jan 09 14:07:38 airnotifier systemd[1]: airnotifier.service: Start request repeated too quickly. Jan 09 14:07:38 airnotifier systemd[1]: airnotifier.service: Failed with result 'exit-code'. Jan 09 14:07:38 airnotifier systemd[1]: Failed to start Airnotifier Service. </code></pre>
<python><ubuntu><systemd><systemctl>
2023-01-09 14:13:36
1
9,464
Amira Elsayed Ismail
75,058,447
11,291,663
Tensorflow 2.4.1 can't find GPUs
<p>I'm trying to run TensorFlow on a Linux machine (Ubuntu). I've created a Conda env and installed the required packages, but I think there's something wrong with my versions:</p> <p>Updated versions</p> <blockquote> <p><s>cudatoolkit 11.6.0</s> cudatoolkit 11.2.0</p> <p>cudnn 8.1.0.77</p> <p>tensorflow-gpu 2.4.1</p> <p>python 3.9.15</p> </blockquote> <p>Running <code>nvcc -V</code> gives:</p> <blockquote> <p>nvcc: NVIDIA (R) Cuda compiler driver Copyright (c) 2005-2022 NVIDIA Corporation Built on Mon_Oct_24_19:12:58_PDT_2022 Cuda compilation tools, release 12.0, V12.0.76 Build cuda_12.0.r12.0/compiler.31968024_0</p> </blockquote> <p>and running <code>python3 -c &quot;import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))&quot;</code> returns an empty list.</p> <p>It seems that <code>release 12.0</code> is the problem here, but I'm not sure, and it's not my machine that I'm running on, so I don't want to make big changes on my own.</p> <p>Also, from TensorFlow's site, it seems that <code>tensorflow-2.4.0</code> should run with <code>python 3.6-3.8</code> and <code>CUDA 11.0</code>, but the versions I mentioned are the ones that Conda chose for me.</p> <p>I know that similar questions have been asked before, but I couldn't find an answer that works for me.</p>
<python><linux><tensorflow><conda>
2023-01-09 14:06:37
1
313
RedYoel
75,058,358
19,125,840
Using ARRAY of POINT-s (Point[]) in SQLModel, SQLAlchemy and PostgreSQL
<p>I have just started working with SQLModel, which is built on top of SQLAlchemy. I want to create a class model for a table to do selects and inserts. I have one column that is an array of Points. The DDL for that table looks like this:</p> <pre class="lang-sql prettyprint-override"><code>CREATE TABLE &quot;public&quot;.&quot;site_metrics&quot; ( &quot;site_metric_id&quot; integer DEFAULT GENERATED BY DEFAULT AS IDENTITY NOT NULL, &quot;site_id&quot; integer NOT NULL, &quot;metric_id&quot; integer NOT NULL, &quot;created_at&quot; timestamp NOT NULL, &quot;updated_at&quot; timestamp NOT NULL, &quot;n_value&quot; double precision, &quot;a_value&quot; point[], &quot;deleted_at&quot; timestamp, CONSTRAINT &quot;PK_f30bdeddd128eea8b65c72b3653&quot; PRIMARY KEY (&quot;site_metric_id&quot;) ) </code></pre> <p>Now I want to create a model in SQLModel that will represent that table. I have found some threads that use geoalchemy for creating Point-type columns, but I have not seen them used with SQLModel or with arrays. So I have tried this after looking at all the examples so far:</p> <pre class="lang-py prettyprint-override"><code>from sqlmodel import Field, SQLModel, ARRAY from geoalchemy2 import Geometry from datetime import datetime class SiteMetrics(SQLModel, table=True): site_metric_id: Optional[int] = Field(default=None, primary_key=True) site_id: int metric_id: int created_at: datetime updated_at: datetime n_value: float a_value: ARRAY(Geometry(geometry_type='POINT')) deleted_at: datetime </code></pre> <p>But when I try this it throws this error:</p> <pre><code>error checking inheritance of ARRAY(Geometry(geometry_type='POINT', from_text='ST_GeomFromEWKT', name='geometry')) (type: ARRAY) </code></pre> <p>Is there anything else I can do? I am kinda stuck on this.</p> <p>UPDATE: I also tried this, with SQLAlchemy schema columns:</p> <pre class="lang-py prettyprint-override"><code>from sqlmodel import Field, Session, SQLModel, create_engine, select, ARRAY, Integer, String, FLOAT from geoalchemy2 import Geometry from typing import Optional from datetime import datetime from urllib.parse import quote_plus from typing import List, Optional, Set from sqlalchemy.sql.schema import Column class Site_Metrics(SQLModel, table=True): site_metric_id: Optional[int] = Field(default=None, primary_key=True) site_id: int metric_id: int created_at: datetime updated_at: datetime n_value: float a_value: Optional[List] = Field(default_factory=list, sa_column=Column(ARRAY(Geometry('POINT')))) deleted_at: datetime </code></pre> <p>With this code I was able to at least create the class, and now I am trying to select. I am also thinking about creating a Type on my own - let's see if I will be able to.</p>
<python><postgresql><sqlalchemy><psycopg2><sqlmodel>
2023-01-09 14:00:05
0
460
demetere._
75,058,236
6,054,066
Sphinx Autosummary (Autodoc) partially imports modules
<p>Sphinx autosummary/autodoc gives error for some of the modules, but not all.</p> <p>My code is opensource: <a href="https://github.com/dream-faster/krisi" rel="nofollow noreferrer">https://github.com/dream-faster/krisi</a></p> <h1>I get the following error:</h1> <pre><code>WARNING: autodoc: failed to import module 'metric'; the following exception was raised: No module named 'metric' WARNING: autodoc: failed to import module 'report'; the following exception was raised: No module named 'report' </code></pre> <p>It imports some of the modules (eg.: <code>compare.py</code>) but fails to import others (regardless of which subdirectory they are in).</p> <p><strong>The directory structure:</strong></p> <pre><code>library_name │ └───src │ │ │ └───library_name │ └─ __init__.py │ │ │ └───module_1.py │ │ └─ __init__.py │ │ └─ compare.py │ │ └─ report.py │ │ │ └───module_2.py │ └─ __init__.py │ └─ evaluate.py │ └─ metric.py │ └───docs └───source └─ conf.py </code></pre> <h1>Solutions I have tried:</h1> <p><strong>1. Specifying the path (although it finds the module partially)</strong></p> <p>I have tried all variations of appending the <code>path</code> to <code>sys.path</code>:</p> <pre class="lang-py prettyprint-override"><code> current_dir = os.path.dirname(__file__) target_dir = os.path.abspath(os.path.join(current_dir, &quot;../../src/project_name&quot;)) sys.path.insert(0, target_dir) </code></pre> <pre class="lang-py prettyprint-override"><code> sys.path.insert(0, os.path.abspath(&quot;../..&quot;)) </code></pre> <pre class="lang-py prettyprint-override"><code> sys.path.insert(0, os.path.abspath(&quot;../../src&quot;)) </code></pre> <pre class="lang-py prettyprint-override"><code> sys.path.insert(0, os.path.abspath(&quot;../../src/project_name&quot;)) </code></pre> <pre class="lang-py prettyprint-override"><code> for x in os.walk(&quot;../../src&quot;): sys.path.append(x[0]) </code></pre> <p><strong>2. Checking if all dependencies are installed.</strong></p> <p>I did a clean new <code>conda</code> environment and installed my package with <code>pip install -e .</code> All tests pass, that cover all modules.</p> <p><strong>3. Checking if cross module import is the culprit</strong></p> <p>Some modules reference other modules, eg.: <code>module_1.metric</code> references <code>module_2.type</code> However modules that were imported correctly do the same without an error.</p> <p>What am I overlooking?</p>
<python><python-sphinx><python-packaging><autodoc><autosummary>
2023-01-09 13:51:11
1
450
semyd
75,058,235
4,421,575
`pd.compare` when dataframes have different shape
<p>I want to compare two dataframes using <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.compare.html" rel="nofollow noreferrer"><code>pd.compare()</code></a>. The method works really well if the shapes of the dataframes are the same, ie, <code>df1.shape == df2.shape</code>.</p> <p>For example:</p> <pre><code>In [75]: df1 Out[75]: NAME INTERFACE_NAME INTERFACE_IP 0 A Pipo 7.7.7.8/32 1 A loop210 1.1.1.210/32 2 A loop245 1.1.1.246/32 3 B loop230 1.1.1.230/32 4 B loop231 1.1.1.231/32 5 B loop8 11.11.11.29/32 6 B loopback0 10.204.64.55/32 In [76]: df2 Out[76]: NAME INTERFACE_NAME INTERFACE_IP 0 A Pipo 7.7.7.8/32 1 A loop210 1.1.1.210/32 2 A loop245 1.1.1.245/32 3 B loop230 1.1.1.230/32 4 B loop231 1.1.1.231/32 5 B loop8 11.11.11.29/32 6 B loopback0 10.204.64.55/32 </code></pre> <p>If I want to see what has changed between <code>df1</code> and <code>df2</code> I would do ...</p> <pre><code>In [79]: df1.compare(df2, result_names=('pre','post'), keep_shape=True) Out[79]: NAME INTERFACE_NAME INTERFACE_IP pre post pre post pre post 0 NaN NaN NaN NaN NaN NaN 1 NaN NaN NaN NaN NaN NaN 2 NaN NaN NaN NaN 1.1.1.246/32 1.1.1.245/32 3 NaN NaN NaN NaN NaN NaN 4 NaN NaN NaN NaN NaN NaN 5 NaN NaN NaN NaN NaN NaN 6 NaN NaN NaN NaN NaN NaN </code></pre> <p>However, let's say <code>df1</code> has more records that have disappeared in <code>df2</code>...</p> <pre><code>In [86]: df1_more Out[86]: NAME INTERFACE_NAME INTERFACE_IP 0 A Pipo 7.7.7.8/32 1 A loop210 1.1.1.210/32 2 A loop245 1.1.1.245/32 3 A loop246 1.1.1.246/32 4 A test88 10.0.0.5/32 5 B loop230 1.1.1.230/32 6 B loop231 1.1.1.231/32 7 B loop8 11.11.11.29/32 8 B loopback0 10.204.64.55/32 </code></pre> <p>... then I get an exception when trying to compare:</p> <pre><code>In [87]: df1_more.compare(df2) --------------------------------------------------------------------------- ValueError Traceback (most recent call last) Cell In[87], line 1 ----&gt; 1 df1_more.compare(df2) [ .... ] 293 # axis=1 is default for DataFrame-with-Series op 294 axis = left._get_axis_number(axis) if axis is not None else 1 ValueError: Can only compare identically-labeled DataFrame objects </code></pre> <p>How can I fix this? I mean, I'm particularly interested in using <code>pd.compare()</code> because I can easily spot which variable/column is the one that has changed.</p> <p>I could use, for example, <code>pd.merge(indicator='change', how='outer')</code> but in that case I get to see all the columns regardless of their change and I'm only interested specifically in the changed columns. Let's say I have a 10-column dataFrame, the <code>pd.merge()</code> will output all the columns even if only one has a difference between <code>df1</code> and <code>df2</code>.</p> <p>thanks.</p>
<python><pandas><compare><dimensions>
2023-01-09 13:51:08
0
1,509
Lucas Aimaretto
75,058,224
5,208,088
How to extract field value from protobuf/python object
<p>Using proto3 syntax I am using the protobuf protocol to generate messages and I am programmatically parsing them via python3. Is there some API or method to extract the value in specific fields and manipulate them/do further calculations?</p> <p>Take the following message as an example.</p> <pre><code>message File { bytes data =1; } </code></pre> <p>If I want to get the size of the bytestring and run len(msg.file.data) I get the error message <code>object of type 'File' has no len()</code> which is valid because the object File has no such method built-in, so how do I extract the bytes as bytes only independent from the object?</p>
<python><protocol-buffers>
2023-01-09 13:49:32
1
1,192
Mnemosyne
75,058,215
8,551,424
Obtain the percentage of groupby() count(), save it and plot it
<p>I have a dataframe like the following,</p> <pre><code> Time Value Cluster 1 2020-08-11 06:09:59 0 0 2 2020-08-11 06:14:59 0 12 3 2020-08-11 06:19:59 1 103 4 2020-08-11 06:24:59 0 0 5 2020-08-11 06:29:59 0 12 </code></pre> <p>Basically there are 3 columns, the first one is date and time (<code>Time</code>), the second one refers to the group to which the data belongs (<code>Cluster</code>) and the third one (<code>Value</code>) is the result (in binary).</p> <p>Therefore, what I would like is to be able to make a statistic according to the group that I am in this instant, the probability that the next instant the result is 1 or 0.</p> <p>I have thought of the following,</p> <ol> <li>Move the result column (<code>Value</code>) one position (one instant of time).</li> <li>Group the data and count them.</li> </ol> <p>This is the code I have made, for a single cluster,</p> <pre><code>test_df[(test_df['Cluster'] == 2)].groupby('ValueT1')['Cluster'].count() </code></pre> <p>With this out:</p> <pre><code>ValueT1 0 7406 1 7787 Name: Cluster, dtype: int64 </code></pre> <p>My question is, how do you get the percentage from here? Could you make a double bar chart (one next to the other with the set of clusters)?</p> <p>For the above example I would like to have as a result something similar to the following,</p> <pre><code>ValueT1 0 48.7% 1 51.2% Name: Cluster, dtype: int64 </code></pre> <p>And in addition, I would like to have a graph that gathers these percentages for each of the clusters.</p> <p>Thank you very much.</p>
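<p>For the percentage part, I guess something like normalising the counts by their sum would work (a sketch based on the snippet above - I am not sure it is the idiomatic way):</p>
<pre><code>counts = test_df[test_df['Cluster'] == 2].groupby('ValueT1')['Cluster'].count()
percentages = 100 * counts / counts.sum()
print(percentages.round(1))
</code></pre>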
<python><pandas>
2023-01-09 13:48:41
1
1,373
Lleims
75,058,198
5,166,312
How to reduce size of python/qml application after using cx_Freeze
<p>I do not know why, but the exe version of my python+qml/QtQuick application is very large. The whole package is over 550 MB (the PySide6 folder under lib alone is 400 MB). I suppose there are a lot of useless items in the PySide6 folder - how can I eliminate them? Another thing is that on a clean Windows installation the exe cannot be started: <a href="https://i.sstatic.net/yKPTv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yKPTv.png" alt="enter image description here" /></a></p>
<python><qtquick2><cx-freeze>
2023-01-09 13:46:48
0
337
WITC
75,058,098
7,104,332
Determine if all the phrases are found within the input string
<p>What am i doing wrong in the code</p> <h1>Objective of the code is to</h1> <p>Determine if all the phrases are found within the input string.</p> <p>If they're all found (using distance as a measure of leeway) return True. Else False. Example:</p> <ul> <li>input = 'can i go to the bathroom in the morning',</li> <li>phrases = ['can go', 'bathroom morning']</li> <li>if distance is 1 then this won't result in a match because 'bathroom', 'morning' has 2 words between it</li> <li>if distance is 2 then 'bathroom in the morning' is counted as a valid phrase</li> </ul> <h2>Expected Output</h2> <pre><code> input = &quot;can i go to the bathroom in the morning&quot; phrases = ['can go', 'bathroom morning'] distance = 2 print('Output',get_compound_keyword_match(input, phrases, distance)) </code></pre> <p><code>Output True </code></p> <p>My Code:</p> <pre><code>def get_compound_keyword_match(input: str, phrases: list, distance: int) -&gt; bool: if not distance: # We have no leeway for a match. if all(phrase in input for phrase in phrases): return True keywords = input.split() for phrase in phrases: phrase_matched = False ck_words = phrase.split() first_word_matches = [ i for i, x in enumerate(keywords) if x == ck_words[0] ] print('first word matches', first_word_matches) if not first_word_matches: return False for first_word_match in first_word_matches: old_match_index = first_word_match matched = False for i in range(0, len(ck_words)): try: match_index = keywords.index(ck_words[i]) if match_index - old_match_index &gt; (distance + 1): matched = False old_match_index = match_index except ValueError: print('value error false') matched = False if matched: phrase_matched = True break if not phrase_matched: print('phrase_matched false') return False return True if __name__ == &quot;__main__&quot;: input = &quot;can i go to the bathroom in the morning&quot; phrases = ['can go', 'bathroom morning'] distance = 2 print('Output',get_compound_keyword_match(input, phrases, distance)) </code></pre>
<python>
2023-01-09 13:38:37
1
474
Rohit Sthapit
75,058,042
7,084,115
NameError: name 'pytest' is not defined - GitHub Actions
<p>I have a simple <code>Docker</code> image that has some python test cases.</p> <p>Here's my <code>Dockerfile</code></p> <pre><code>ARG PYTHON_VERSION=3.8.3-alpine # Use the python image as the base image FROM python:${PYTHON_VERSION} # upgrade pip RUN pip install --upgrade pip # set CUSTOMER_NAME as environment variables ENV CUSTOMER_NAME=${CUSTOMER_NAME:-A} # Set the working directory WORKDIR /app # Create a non-root user and add the permissions RUN adduser -D &quot;${USERNAME:-jananath}&quot; # set the username - HERE WE ARE USING A DEFAULT VALUE FOR THE USERNAME, WHICH IS &quot;jananath&quot; USER &quot;${USERNAME:-jananath}&quot; # copy the requirements file and install the dependencies COPY --chown=&quot;${USERNAME:-jananath}&quot;:&quot;${USERNAME:-jananath}&quot; requirements.txt requirements.txt RUN pip install --no-cache-dir --upgrade --user -r requirements.txt ENV PATH=&quot;/home/${USERNAME:-jananath}/.local/bin:${PATH}&quot; # copy the app code COPY --chown=${USERNAME:-jananath}:${USERNAME:-jananath} . . # expose the default Flask port EXPOSE 80 # set the entrypoint to run the app CMD [&quot;uvicorn&quot;, &quot;main:app&quot;, &quot;--proxy-headers&quot;, &quot;--host&quot;, &quot;0.0.0.0&quot;, &quot;--port&quot;, &quot;80&quot;] </code></pre> <p>Then I build the image:</p> <pre><code>docker build -t hello:v1 . </code></pre> <p>Then I run the app</p> <pre><code>docker run --rm -itd -p 80:80 -e CUSTOMER_NAME=A hello:v1 </code></pre> <p>Then I execute the <code>pytest</code></p> <pre><code>docker exec -it 570e3153ab90 pytest </code></pre> <p>And it successfully gives the <code>pytest</code> output as below:</p> <pre><code>================================================= test session starts ================================================= platform linux -- Python 3.8.3, pytest-7.2.0, pluggy-1.0.0 rootdir: /app, configfile: pytest.ini plugins: anyio-3.6.2 collected 2 items test_main.py .. [100%] ================================================== 2 passed in 0.24s ================================================== </code></pre> <p>Everything works fine, except I run the same image in the `GitHub Actions.</p> <pre><code>. . . container-test-job: runs-on: ubuntu-latest container: image: ghcr.io/&lt;USERNAME&gt;/&lt;REPO&gt;/hello:v2 credentials: username: ${{ github.actor }} password: ${{ secrets.TOKEN_REPOSITORY }} env: CUSTOMER_NAME: A steps: - name: Test shell: python run: | pytest . . . </code></pre> <p>But I get the below error:</p> <blockquote> <p>Traceback (most recent call last): File &quot;/__w/_temp/598eea6e-7acf-4d7c-964f-a69440ea38c2.py&quot;, line 1, in pytest NameError: name 'pytest' is not defined Error: Process completed with exit code 1.</p> </blockquote> <p>Can someone help me understand the issue here and how to fix it?</p> <p>Thank you!</p>
<python><docker><github><github-actions>
2023-01-09 13:33:06
1
4,101
Jananath Banuka
75,058,019
9,374,372
Adding a new nested level value to a MultiIndex DataFrame
<p>How can I add another level value to a MultiIndex Initialized to a certain value (for example None). Hard to describe with words, better graphically, how to add the <code>new</code> value level:</p> <pre><code>df_before a b c d l1 l2 bar one 24 13 8 9 two 11 30 7 23 baz one 21 31 12 30 two 2 5 19 24 foo one 15 18 3 16 two 2 24 28 11 qux one 23 9 6 12 two 29 28 11 21 df_after a b c d l1 l2 bar one 24 13 8 9 two 11 30 7 23 new None None None None baz one 21 31 12 30 two 2 5 19 24 new None None None None foo one 15 18 3 16 two 2 24 28 11 new None None None None qux one 23 9 6 12 two 29 28 11 21 new None None None None </code></pre> <p><strong>Note</strong>: my DataFrame indeed has three levels, so a solution that could generalize to more levels would be appreciated. My best attempt was getting the unique values for the old level, append a new value and set the new level, but it didn't produce my desired result</p> <pre><code># this is a failed attempt of what I wanted to do new_level_values = [*list(df.index.get_level_values(2).unique()), &quot;new&quot;] df.index = df.index.set_levels(levels=new_level_values, level=2) df </code></pre>
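<p>Another variant I have been looking at (for the two-level example above) is rebuilding the full index with the extra value and reindexing, though I am not sure how cleanly it generalises to my real three-level index:</p>
<pre><code>new_inner = [*df.index.get_level_values(1).unique(), 'new']
new_index = pd.MultiIndex.from_product(
    [df.index.get_level_values(0).unique(), new_inner],
    names=df.index.names,
)
df_after = df.reindex(new_index)
</code></pre>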
<python><pandas><multi-index>
2023-01-09 13:31:05
1
505
Fernando Jesus Garcia Hipola
75,057,968
11,479,825
How to compare coordinates in two dataframes?
<p>I have two dataframes</p> <p>df1</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>x1</th> <th>y1</th> <th>x2</th> <th>y2</th> <th>label</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>0</td> <td>1240</td> <td>1755</td> <td>label1</td> </tr> <tr> <td>0</td> <td>0</td> <td>1240</td> <td>2</td> <td>label2</td> </tr> </tbody> </table> </div> <p>df2</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>x1</th> <th>y1</th> <th>x2</th> <th>y2</th> <th>text</th> </tr> </thead> <tbody> <tr> <td>992.0</td> <td>943.0</td> <td>1166.0</td> <td>974.0</td> <td>tex1</td> </tr> <tr> <td>1110.0</td> <td>864.0</td> <td>1166.0</td> <td>890.0</td> <td>text2</td> </tr> </tbody> </table> </div> <p>Based on a condition like the following:</p> <pre><code>if df1['x1'] &gt;= df2['x1'] or df1['y1'] &gt;= df2['y1']: # I want to add a new column 'text' in df1 with the text from df2. df1['text'] = df2['text'] </code></pre> <p>What's more, it is possible in <code>df2</code> to have more than one row that makes the above-mentioned condition <code>True</code>, so I will need to add another <code>if</code> statement for df2 to get the best match.</p> <p>My problem here is not the conditions but how am I supposed to approach the interaction between both data frames. Any help, or advice would be appreciated.</p>
<python><pandas>
2023-01-09 13:27:31
1
985
Yana
75,057,908
12,430,846
"ValueError: A given column is not a column of the dataframe" while combining text and categorical features
<p>I have a pandas dataframe:</p> <p>df3:</p> <pre><code>Text | Topic | Label some text | 2 | 0 other text | 1 | 0 text 3 | 3 | 1 </code></pre> <p>I divide in training and test set:</p> <pre><code>x_train, x_test, y_train, y_test = train_test_split(df3[['Text', 'Topic']],df3['Label'], test_size=0.3, random_state=434) </code></pre> <p>I want to use both Text and Topic feature to predict Label.</p> <pre><code>from sklearn.compose import ColumnTransformer from sklearn.pipeline import Pipeline from sklearn.impute import SimpleImputer from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.preprocessing import OneHotEncoder from sklearn.svm import SVC # pipeline for text data text_features = df3['Text'] text_transformer = Pipeline(steps=[ ('vectorizer', TfidfVectorizer(stop_words=&quot;english&quot;)) ]) # pipeline for categorical data categorical_features = df3['Topic'] categorical_transformer = Pipeline(steps=[ ('imputer', SimpleImputer(strategy='constant', fill_value='missing')), ('onehot', OneHotEncoder(handle_unknown='ignore')) ]) </code></pre> <p>Then, i try to combine input variables:</p> <pre><code># combine preprocessing with ColumnTransformer preprocessor = ColumnTransformer( transformers=[ ('Text', text_transformer, text_features), ('Topic', categorical_transformer, categorical_features) ]) # add model to be part of pipeline clf_pipe = Pipeline(steps=[('preprocessor', preprocessor), (&quot;model&quot;, SVC()) ]) </code></pre> <p>Finally I use fit:</p> <pre><code>x_train = preprocessor.fit_transform(x_train) x_test = preprocessor.transform(x_test) clf_s= SVC().fit(x_train, y_train) clf_s.score(x_test, y_test) </code></pre> <p>Output says:</p> <p>&quot;ValueError: A given column is not a column of the dataframe&quot;</p> <p>The error is refereed to the line:</p> <pre><code>x_train = preprocessor.fit_transform(x_train) </code></pre> <p>Where did I go wrong?</p>
<python><python-3.x><pandas><scikit-learn>
2023-01-09 13:23:21
1
543
coelidonum
75,057,804
7,949,129
Convenient way to log each line of a function without cluttering the code in Python
<p>Let's say I have an annoying function which has a lot of if-elses in it, and this function has a bug which occurs randomly.</p> <p>I cannot reproduce or debug it, and I want to add some logging to see which parts of the function have been executed when the error occurs again.</p> <p>Now it would be possible to add a log before each line like this:</p> <pre><code>def my_annoying_function(self): logger.info(&quot;self._call_other_function()&quot;) self._call_other_function() logger.info(&quot;self.myvar = 5&quot;) self.myvar = 5 logger.info(&quot;i = 4&quot;) i = 4 logger.info(&quot;i = i + 5&quot;) i = i + 5 if self.myothervar is True: logger.info(&quot;self.call_other_function2()&quot;) self.call_other_function2() ... </code></pre> <p>Does somebody know a way to achieve the same behaviour for the logging and log each statement before or after execution without cluttering the code?</p>
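<p>The direction I have been experimenting with is a decorator based on <code>sys.settrace</code> that logs line numbers as they execute (a rough sketch below; I am not sure whether relying on the trace hook like this is considered acceptable practice, hence the question):</p>
<pre><code>import functools
import logging
import sys

logger = logging.getLogger(__name__)

def trace_lines(func):
    # logs every line executed inside the decorated function
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        def tracer(frame, event, arg):
            if event == 'line' and frame.f_code is func.__code__:
                logger.info('executing line %s of %s', frame.f_lineno, func.__name__)
            return tracer
        previous = sys.gettrace()
        sys.settrace(tracer)
        try:
            return func(*args, **kwargs)
        finally:
            sys.settrace(previous)
    return wrapper
</code></pre>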
<python><debugging><logging>
2023-01-09 13:14:20
2
359
A. L
75,057,759
14,958,374
FastApi with gunicorn/uvicorn stops responding
<p>I'm currently using <strong>FastApi</strong> with <strong>Gunicorn</strong>/<strong>Uvicorn</strong> as my server engine.</p> <p>I'm using the following config for <strong>Gunicorn</strong>:</p> <pre><code>TIMEOUT 0 GRACEFUL_TIMEOUT 120 KEEP_ALIVE 5 WORKERS 10 </code></pre> <p><strong>Uvicorn</strong> has all default settings and is started in the Docker container as usual:</p> <pre><code>CMD [&quot;uvicorn&quot;, &quot;app.main:app&quot;, &quot;--host&quot;, &quot;0.0.0.0&quot;, &quot;--port&quot;, &quot;8000&quot;] </code></pre> <p>Everything is packed in a Docker container.</p> <p><strong>The problem is the following:</strong></p> <p>After some time (somewhere between 1 day and 1 week, depending on load) my app stops responding (even a simple <code>curl http://0.0.0.0:8000</code> command hangs forever). The Docker container keeps running, there are no application errors in the logs, and there are no connection issues, but none of my workers are getting the request (and so I'm never getting my response). It seems like my request is lost somewhere between the server engine and my application. Any ideas how to fix it?</p> <p><strong>UPDATE</strong>: I've managed to reproduce this behaviour with a custom <strong>locust</strong> load profile:<a href="https://i.sstatic.net/FcuYJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FcuYJ.png" alt="load picture" /></a> The scenario was the following:</p> <ol> <li>In the first 15 minutes, ramp up to 50 users (30 of them will send requests requiring GPU at 1 rps, and 20 will send requests that do not require GPU at 10 rps)</li> <li>Work for another 4 hours. As the plot shows, after about 30 minutes the API stops responding. (And still, there are no error messages/warnings in the output)</li> </ol> <p><strong>UPDATE 2</strong>: Can there be any hidden memory leak or deadlock due to an incorrect <strong>Gunicorn</strong> setup or a bug (such as <a href="https://github.com/tiangolo/fastapi/issues/596" rel="nofollow noreferrer">https://github.com/tiangolo/fastapi/issues/596</a>)?</p> <p><strong>UPDATE 4</strong>: I went inside my container and executed the <code>ps</code> command. It shows:</p> <pre><code> PID TTY TIME CMD 120 pts/0 00:00:00 bash 134 pts/0 00:00:00 ps </code></pre> <p>Which means my <strong>Gunicorn</strong> server app just silently shut down. There is also a binary file named <code>core</code> in the app directory, which obviously means that something has crashed.</p>
<python><docker><fastapi><gunicorn><uvicorn>
2023-01-09 13:10:23
1
331
Nick Zorander
75,057,500
8,188,120
Indexing Pandas DataFrame containing lists of strings after loading pickled DataFrame
<p>I'm trying to get the row indices of a pandas DataFrame, where the entries are being queried for containing a string amongst a list.</p> <p>However, after pickling a DataFrame and then loading, the column data seems to handle differently than when working prior to pickling.</p> <p>e.g. for the following DataFrame</p> <pre><code>import pandas as pd DB = pd.DataFrame(columns=['message', 'authors', 'style']) DB['message'] = ['hello', 'i wrote', 'this passage'] DB.loc[1]['authors'] = ['Adam', 'Bob'] DB.loc[2]['authors'] = ['Bob', 'Charlie'] print(DB) </code></pre> <p>...output...</p> <pre><code> message authors style 0 hello NaN NaN 1 i wrote [Adam, Bob] NaN 2 this passage [Bob, Charlie] NaN </code></pre> <p>Retrieving all the column indices where the <code>'author'</code> column contains the word <code>'Bob'</code> can be done using the <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.str.contains.html#pandas.Series.str.contains" rel="nofollow noreferrer">pandas.Series.str.contains</a> method:</p> <pre><code>DB.authors.astype(str).str.contains('Adam') </code></pre> <p>Out:</p> <pre><code>0 False 1 True 2 False Name: authors, dtype: bool </code></pre> <p>Leaving the DataFrame column in its original format (I think...) which can be seen by querying the following:</p> <pre><code>print(DB2.authors[1]) </code></pre> <p>Out:</p> <pre><code>['Adam', 'Bob'] </code></pre> <p>However, if you first pickle the DataFrame for reloading somewhere else and then perform the same operation, you can see that the format of the DataFrame column has been changed in place:</p> <pre><code>DB.to_pickle('dummy_database.pkl') DB2 = pd.read_pickle('dummy_database.pkl') print(DB2.authors.astype(str).str.contains('Adam')) print(DB2.authors[1]) </code></pre> <p>Out:</p> <pre><code>0 False 1 True 2 False Name: authors, dtype: bool &quot;['Adam', 'Bob']&quot; </code></pre> <p>...as shown by the quotations around what was originally a list of strings.</p> <p>Is there anyway to undo this apparent in place change of datatype, and could anyone explain what I'm doing wrong which is causing this behaviour?</p> <hr /> <p>Useful note: prior to the query operation <code>DB2.authors.astype(str).str.contains('Adam')</code> you can still call <code>DB2.authors[1]</code> as before, and you appear to have the same format: <code>Out: ['Adam', 'Bob']</code>.</p>
<python><pandas><string><dataframe><dtype>
2023-01-09 12:48:48
0
925
user8188120
75,057,460
2,707,864
Convert string of a named expression in sympy to the expression itself
<p><strong>Description of the practical problem</strong>: <br> I have defined many expression using <code>sympy</code>, as in</p> <pre><code>import sympy as sp a, b = sp.symbols('a,b', real=True, positive=True) Xcharles_YclassA_Zregion1 = 1.01 * a**1.01 * b**0.99 Xbob_YclassA_Zregion1 = 1.009999 * a**1.01 * b**0.99 Xbob_YclassA_Zregion2 = 1.009999 * a**1.01 * b**0.99000000001 ... </code></pre> <p>So I have used the names of the expressions to describe options (e.g., <code>charles</code>, <code>bob</code>) within categories (e.g., <code>X</code>).</p> <p>Now I want a function that takes two strings (e.g., <code>'Xcharles_YclassA_Zregion1'</code> and <code>'Xbob_YclassA_Zregion1'</code>) and returns its simplified ratio (in this example, <code>1.00000099009999</code>), so I can quickly check &quot;how different&quot; they are, in terms of result, not in terms of how they are written. E.g., <code>2*a</code> and <code>a*2</code> are the same for my objective.</p> <p><strong>How can I achieve this?</strong></p> <p><strong>Notes</strong>:</p> <ol> <li>The expressions in the example are hardcoded for the sake of simplicity. But in my actual case they come from a sequence of many other expressions and operations.</li> <li>Not all combinations of options for all categories would exist. E.g., <code>Xcharles_YclassA_Zregion2</code> may not exist. Actually, if I were to write a table for existing expression names, it would be sparsely filled.</li> <li>I guess rewriting my code using <code>dict</code> to store the table <em>might</em> solve my problem. But I would have to modify a lot of code for that.</li> <li>Besides the practical aspects of my objective, I don't know if there is any formal difference between <code>Symbol</code> (which is a specific class) and <em>expression</em>. From the sources I read (e.g., <a href="https://docs.sympy.org/latest/tutorials/intro-tutorial/manipulation.html" rel="nofollow noreferrer">this</a>) I did not arrive to a conclusion. This understanding may help in solving the question.</li> </ol> <hr> <p><strong>TL;DR - What I tried</strong></p> <p>I aimed at something like</p> <pre><code>def verify_ratio(vstr1, vstr2): &quot;&quot;&quot;Compare the result of two different computations of the same quantity&quot;&quot;&quot; ratio = sp.N(sp.parsing.sympy_parser.parse_expr(vstr1)) / sp.parsing.sympy_parser.parse_expr(vstr2) print(vstr1 + ' / ' + vstr2, '=', sp.N(ratio)) return </code></pre> <p>This did not work. Code below shows why</p> <pre><code>import sympy as sp a, b = sp.symbols('a,b', real=True, positive=True) expr2 = 1.01 * a**1.01 * b**0.99 print(type(expr2), '-&gt;', expr2) expr2b = sp.parsing.sympy_parser.parse_expr('expr2') print(type(expr2b), '-&gt;', expr2b) expr2c = sp.N(sp.parsing.sympy_parser.parse_expr('expr2')) print(type(expr2c), '-&gt;', expr2c) #print(sp.N(sp.parsing.sympy_parser.parse_expr('expr2'))) expr2d = sp.sympify('expr2') print(type(expr2d), '-&gt;', expr2d) </code></pre> <p>with output</p> <pre><code>&lt;class 'sympy.core.mul.Mul'&gt; -&gt; 1.01*a**1.01*b**0.99 &lt;class 'sympy.core.symbol.Symbol'&gt; -&gt; expr2 &lt;class 'sympy.core.symbol.Symbol'&gt; -&gt; expr2 &lt;class 'sympy.core.symbol.Symbol'&gt; -&gt; expr2 </code></pre> <p>I need something that takes the string <code>'expr2'</code> and returns the expression <code>1.01 * a**1.01 * b**0.99</code>.</p> <hr> <p>None of my attempts achieved the objective. 
Questions or links which did not help (at least for me):</p> <ol> <li><a href="https://stackoverflow.com/questions/33606667/from-string-to-sympy-expression">From string to sympy expression</a></li> <li><a href="https://docs.sympy.org/latest/tutorials/intro-tutorial/basic_operations.html" rel="nofollow noreferrer">https://docs.sympy.org/latest/tutorials/intro-tutorial/basic_operations.html</a></li> <li><a href="https://docs.sympy.org/latest/modules/parsing.html" rel="nofollow noreferrer">https://docs.sympy.org/latest/modules/parsing.html</a></li> <li><a href="https://docs.sympy.org/latest/modules/core.html#sympy.core.sympify.sympify" rel="nofollow noreferrer">https://docs.sympy.org/latest/modules/core.html#sympy.core.sympify.sympify</a></li> <li><a href="https://docs.sympy.org/latest/tutorials/intro-tutorial/manipulation.html" rel="nofollow noreferrer">https://docs.sympy.org/latest/tutorials/intro-tutorial/manipulation.html</a></li> </ol>
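<p>A minimal sketch of one way to get this behaviour, assuming the expressions live as module-level names (the <code>verify_ratio</code> helper and the namespace lookup are illustrative, not the only option): instead of parsing the string, look the already-built expression object up by name, via <code>globals()</code> or an explicit dict, and simplify the quotient. <code>parse_expr</code> and <code>sympify</code> only see the literal text <code>'expr2'</code>, which is why they return a bare <code>Symbol</code> unless they are handed a namespace (e.g. <code>local_dict</code>).</p>
<pre><code>import sympy as sp

a, b = sp.symbols('a,b', real=True, positive=True)
Xcharles_YclassA_Zregion1 = 1.01 * a**1.01 * b**0.99
Xbob_YclassA_Zregion1 = 1.009999 * a**1.01 * b**0.99

def verify_ratio(name1, name2, namespace=None):
    # Look the expressions up by name instead of parsing the strings,
    # so the existing Mul objects are reused as-is.
    ns = namespace if namespace is not None else globals()
    ratio = sp.simplify(ns[name1] / ns[name2])
    print(name1, '/', name2, '=', sp.N(ratio))
    return ratio

verify_ratio('Xcharles_YclassA_Zregion1', 'Xbob_YclassA_Zregion1')
</code></pre>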
<python><expression><sympy>
2023-01-09 12:44:23
2
15,820
sancho.s ReinstateMonicaCellio
75,057,423
14,494,483
Streamlit how to use session state with Aggrid to keep the selection even after switching pages?
<p>This is potentially an easy one, but I just can’t figure out how to do it. Here’s a simple reproducible code example. How to use session state to keep the tickbox selection, even after switching pages (you will need to create a page folder to include multi pages)?</p> <pre><code>import pandas as pd import streamlit as st from st_aggrid import GridOptionsBuilder, AgGrid, GridUpdateMode, DataReturnMode, ColumnsAutoSizeMode data = { &quot;calories&quot;: [420, 380, 390], &quot;duration&quot;: [50, 40, 45], &quot;random1&quot;: [5, 12, 1], &quot;random2&quot;: [230, 23, 1] } df = pd.DataFrame(data) gb = GridOptionsBuilder.from_dataframe(df[[&quot;calories&quot;, &quot;duration&quot;]]) gb.configure_selection(selection_mode=&quot;single&quot;, use_checkbox=True) gb.configure_side_bar() gridOptions = gb.build() data = AgGrid(df, gridOptions=gridOptions, enable_enterprise_modules=True, allow_unsafe_jscode=True, update_mode=GridUpdateMode.SELECTION_CHANGED, columns_auto_size_mode=ColumnsAutoSizeMode.FIT_CONTENTS) selected_rows = data[&quot;selected_rows&quot;] if len(selected_rows) != 0: selected_rows[0] </code></pre> <p>For example, when I select the tickbox, and after I switch to page 2, then back to test page, the tickbox selection still remains. <a href="https://i.sstatic.net/tdJox.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tdJox.png" alt="enter image description here" /></a></p>
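<p>A partial sketch, reusing the <code>df</code> and <code>gridOptions</code> defined above: the last selection can at least be remembered across page switches by copying it into <code>st.session_state</code>. Re-ticking the checkbox when the grid is rebuilt needs the grid to be initialised with pre-selected rows, and how to do that depends on the st_aggrid version, so that part is left out here.</p>
<pre><code>import streamlit as st

# st.session_state survives page switches within the same session.
if 'grid_selection' not in st.session_state:
    st.session_state['grid_selection'] = []

data = AgGrid(df, gridOptions=gridOptions,
              update_mode=GridUpdateMode.SELECTION_CHANGED)

if data['selected_rows']:
    st.session_state['grid_selection'] = data['selected_rows']

st.write('last selection:', st.session_state['grid_selection'])
</code></pre>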
<python><ag-grid><session-state><streamlit>
2023-01-09 12:40:47
1
474
Subaru Spirit
75,057,418
8,792,159
How to create a polar plot with error bands in plotly?
<p>This post is closely related to <a href="https://stackoverflow.com/questions/41497257/error-bars-on-a-radar-plot">this one</a> but I need a solution that works with plotly and python. I would like to use <code>plotly</code> to create a polar plot with error bands. My dataset can be divided into multiple groups, where each of them should have its own trace. Samples within each group should be aggregated so that only the mean line and the error band are plotted.</p> <p>I noticed that seaborn has the function <code>sns.lineplot</code> implemented that already goes into the right direction but I would like to bend the x-axis in a 360 degree circle so we end up with a polar plot:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns import plotly.express as px rng = np.random.default_rng(12345) # create data a = rng.random(size=(10,10)) df = pd.DataFrame(a,columns=[f&quot;r_{idx}&quot; for idx in range(10)]) df['id'] = [1,2,3,4,5,6,7,8,9,10] df['group'] = ['a','a','a','a','a','b','b','b','b','b'] df = df.melt(id_vars=['id','group'],var_name='region') # use seaborn plt.figure() sns.lineplot(data=df,x='region',y='value',hue='group') </code></pre> <p><a href="https://i.sstatic.net/UlCDy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UlCDy.png" alt="enter image description here" /></a></p> <p><code>plotly.express</code> in contrast offers the function <code>px.scatter_polar</code> which creates a polar plot but apparently does not allow to aggregate the samples which leads to a quite unreadable plot:</p> <pre class="lang-py prettyprint-override"><code># plot scatter polarplot with plotly. Does not allow to aggregate fig = px.scatter_polar(df,r='value',theta='region',color='group') fig.show() </code></pre> <p><a href="https://i.sstatic.net/C66iD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/C66iD.png" alt="enter image description here" /></a></p>
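<p>One possible sketch, assuming the same <code>df</code> as above: aggregate mean and standard deviation per group and region with pandas, then draw each group as a mean line plus a filled band using <code>go.Scatterpolar</code>; the band is a polygon that goes out along the upper bound and back along the lower bound, closed by <code>fill='toself'</code>.</p>
<pre><code>import plotly.graph_objects as go

stats = df.groupby(['group', 'region'])['value'].agg(['mean', 'std']).reset_index()

fig = go.Figure()
for grp, sub in stats.groupby('group'):
    theta = sub['region'].tolist()
    upper = (sub['mean'] + sub['std']).tolist()
    lower = (sub['mean'] - sub['std']).tolist()
    # error band: out along the upper bound, back along the lower bound
    fig.add_trace(go.Scatterpolar(r=upper + lower[::-1],
                                  theta=theta + theta[::-1],
                                  fill='toself', opacity=0.3,
                                  line=dict(width=0), showlegend=False))
    # mean line on top
    fig.add_trace(go.Scatterpolar(r=sub['mean'], theta=theta,
                                  mode='lines', name=grp))
fig.show()
</code></pre>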
<python><plotly><polar-coordinates><radar-chart>
2023-01-09 12:40:28
1
1,317
Johannes Wiesner
75,057,304
11,857,780
What is the fastest way to filter a pandas time series?
<p>What is the fastest way to filter a pandas time series? For now I use boolean masking to filter the time series ts:</p> <pre><code>import time from datetime import datetime import pandas as pd import statistics # create time series idx = pd.date_range(start='2022-01-01', end='2023-01-01', freq=&quot;min&quot;) ts = pd.Series(1, index=idx) start_dt = datetime(2022, 1, 1, 0, 0, 0) end_dt = datetime(2022, 1, 2, 0, 0, 0) time_lst = [] # measure performance of boolean masking for i in range(100): start = time.time() # 1st method mask = (ts.index &gt; start_dt) &amp; (ts.index &lt;= end_dt) # 2nd method, nearly same velociy # mask = np.where((ts.index &gt; start_dt) &amp; (ts.index &lt;= end_dt), True, False) time_lst.append(time.time() - start) print(statistics.mean(time_lst)) filtered_ts = ts.loc[mask] </code></pre> <p>I am wondering, if this is already the fastest way (here ~0.003 s per run) or are there other methods? I use the masking many thousands of times for different <em>start_dt</em> and <em>end_dt</em> and it sums up to a significant time which I want to reduce.</p>
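<p>Since the index is a sorted <code>DatetimeIndex</code>, one alternative sketch is to avoid building a boolean mask at all and let pandas binary-search the index; note that label slicing with <code>.loc</code> is inclusive on both ends, whereas <code>searchsorted</code> can reproduce the original exclusive-start/inclusive-end mask.</p>
<pre><code># label-based slicing on the sorted index (inclusive of both endpoints)
filtered_ts = ts.loc[start_dt:end_dt]

# reproduce the original mask semantics (start exclusive, end inclusive)
lo = ts.index.searchsorted(start_dt, side='right')
hi = ts.index.searchsorted(end_dt, side='right')
filtered_ts = ts.iloc[lo:hi]
</code></pre>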
<python><pandas><filter><series><mask>
2023-01-09 12:31:30
1
325
DerDressing
75,057,274
8,405,296
Saving Custom TableNet Model (VGG19 based) for table extraction - Azure Databricks
<p>I have a model based on <a href="https://github.com/jainammm/TableNet" rel="nofollow noreferrer">TableNet</a> and <a href="https://www.mathworks.com/help/deeplearning/ref/vgg19.html" rel="nofollow noreferrer">VGG19</a>, the data (Marmoot) for training and the saving path is mapped to a datalake storage (using Azure).</p> <p>I'm trying to save it in the following ways and get the following errors on <a href="https://learn.microsoft.com/en-us/azure/databricks/introduction/" rel="nofollow noreferrer">Databricks</a>:</p> <ol> <li><p><strong>First approach:</strong></p> <pre class="lang-py prettyprint-override"><code>import pickle pickle.dump(model, open(filepath, 'wb')) </code></pre> <p>This saves the model and gives the following output:</p> <pre class="lang-py prettyprint-override"><code>WARNING:absl:Found untraced functions such as _jit_compiled_convolution_op, _jit_compiled_convolution_op, _jit_compiled_convolution_op, _jit_compiled_convolution_op, _jit_compiled_convolution_op while saving (showing 5 of 31). These functions will not be directly callable after loading. </code></pre> <p>Now when I try to reload the mode using:</p> <pre class="lang-py prettyprint-override"><code>loaded_model = pickle.load(open(filepath, 'rb')) </code></pre> <p>I get the following error (<a href="https://learn.microsoft.com/en-us/azure/databricks/introduction/" rel="nofollow noreferrer">Databricks</a> show in addition to the following error the entire stderr and stdout but this is the gist):</p> <pre class="lang-py prettyprint-override"><code>ValueError: Unable to restore custom object of type _tf_keras_metric. Please make sure that any custom layers are included in the `custom_objects` arg when calling `load_model()` and make sure that all layers implement `get_config` and `from_config`. </code></pre> </li> <li><p><strong>Second approach:</strong></p> <pre class="lang-py prettyprint-override"><code>model.save(filepath) </code></pre> <p>and for the I get the following error:</p> <pre class="lang-py prettyprint-override"><code>Fatal error: The Python kernel is unresponsive. The Python process exited with exit code 139 (SIGSEGV: Segmentation fault). The last 10 KB of the process's stderr and stdout can be found below. See driver logs for full logs. --------------------------------------------------------------------------- Last messages on stderr: Mon Jan 9 08:04:31 2023 Connection to spark from PID 1285 Mon Jan 9 08:04:31 2023 Initialized gateway on port 36597 Mon Jan 9 08:04:31 2023 Connected to spark. 2023-01-09 08:05:53.221618: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA </code></pre> <p>and much more, its hard to find the proper place of error form all of the stderr and stdout. 
It shows the entire stderr and stdout which makes it very hard to find the solution (it shows all the stderr and stdout including the training and everything)</p> </li> <li><p><strong>Third approach (partially):</strong></p> <p>I also tried:</p> <pre class="lang-py prettyprint-override"><code>model.save_weights(weights_path) </code></pre> <p>but once again I was unable to reload them (this approach was tried the least)</p> </li> </ol> <hr /> <p>Also I tried saving the checkpoints by adding this:</p> <pre><code>model_checkpoint = tf.keras.callbacks.ModelCheckpoint(filepath, monitor = &quot;val_table_mask_loss&quot;, verbose = 1, save_weights_only=True) </code></pre> <p>as a callback in the <code>fit</code> method (<code>callbacks=[model_checkpoint]</code>) but in the end of the first epoch it will generate the following error(I show the end of the Traceback):</p> <pre><code>h5py/_objects.pyx in h5py._objects.with_phil.wrapper() h5py/_objects.pyx in h5py._objects.with_phil.wrapper() h5py/h5f.pyx in h5py.h5f.create() OSError: Unable to create file (file signature not found) </code></pre> <hr /> <p>When I use the second approach on a platform that is not <a href="https://learn.microsoft.com/en-us/azure/databricks/introduction/" rel="nofollow noreferrer">Databricks</a> it works fine, but then when I try to load the model I get an error similar to the first approach loading.</p> <hr /> <h3>Update 1</h3> <p>my variable <code>filepath</code> that I try to save to is a <code>dbfs</code> reference, and my <code>dbfs</code> is mapped to the datalake storage</p> <hr /> <h3>Update 2</h3> <p>When trying as suggested in the comments, with the following <a href="https://stackoverflow.com/a/67020148/8405296">answer</a> I get the following error:</p> <pre class="lang-py prettyprint-override"><code>----&gt; 3 model2 = keras.models.load_model(&quot;/tmp/model-full2.h5&quot;) ... ValueError: Unknown layer: table_mask. Please ensure this object is passed to the `custom_objects` argument. See https://www.tensorflow.org/guide/keras/save_and_serialize#registering_the_custom_object for details. </code></pre> <h3>Update 3:</h3> <p>So I try following the error plus this <a href="https://stackoverflow.com/a/63053063/8405296">answer</a>:</p> <pre class="lang-py prettyprint-override"><code>model2 = keras.models.load_model(&quot;/tmp/model-full2.h5&quot;, custom_objects={'table_mask': table_mask}) </code></pre> <p>but then I get the following error:</p> <pre class="lang-py prettyprint-override"><code>TypeError: 'KerasTensor' object is not callable </code></pre>
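<p>Not the actual TableNet code, but a generic sketch of the pattern the <code>load_model()</code> errors point at: any custom layer needs a <code>get_config()</code> and has to be registered (or passed via <code>custom_objects</code>) so Keras can rebuild it on load. The <code>TableMask</code> layer, its <code>filters</code> argument and the <code>/dbfs/tmp</code> path below are assumptions for illustration only.</p>
<pre><code>import tensorflow as tf

@tf.keras.utils.register_keras_serializable(package='tablenet')
class TableMask(tf.keras.layers.Layer):
    # Hypothetical custom layer; get_config() lets Keras re-create it on load.
    def __init__(self, filters=256, **kwargs):
        super().__init__(**kwargs)
        self.filters = filters
        self.conv = tf.keras.layers.Conv2D(filters, 1, padding='same')

    def call(self, inputs):
        return self.conv(inputs)

    def get_config(self):
        config = super().get_config()
        config.update({'filters': self.filters})
        return config

# model.save('/dbfs/tmp/tablenet_model')                      # SavedModel format
# restored = tf.keras.models.load_model('/dbfs/tmp/tablenet_model')
</code></pre>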
<python><azure><tensorflow><azure-databricks><vgg-net>
2023-01-09 12:28:07
1
1,362
Lidor Eliyahu Shelef
75,057,218
8,583,502
How to combine a masked loss with tensorflow2 TimeSeriesGenerator
<p>We are trying to use a convolutional LSTM to predict the values of an image given the past 7 timesteps. We have used the tensorflow2 TimeSeriesGenerator method to create our time series data:</p> <pre><code> train_gen = TimeseriesGenerator( data, data, length=7, batch_size=32, shuffle=False ) </code></pre> <p>Every image (timestep) has the shape (55, 50, 1), therefore the generator has produced data with the shape (32, 7, 55, 50, 1) and their targets (32, 55, 50, 1). However, there is a twist, we only want to compute the loss of a prediction for a masked region of the image. This mask is constant and we have stored it in a tensor constant in the following way:</p> <pre><code>mask = tf.keras.backend.constant(mask) </code></pre> <p>Our idea was then to give this constant as a second input to our model and use it to compute a masked loss using a custom loss function:</p> <pre><code>def masked_MSE_loss(y_true, y_pred, mask): y_pred_masked = tf.math.multiply(y_pred, mask) mse = tf.keras.losses.mean_squared_error(y_true = y_true, y_pred = y_pred_masked) return mse </code></pre> <p>Our model then looks like the following:</p> <pre><code># Define the input tensors inputs = Input(shape=(lookback, 55, 50, 1)) input_mask = Input(tensor=mask) # First stack of convlstm layers convlstm1 = layers.ConvLSTM2D(filters=128, kernel_size=(3, 3), padding='same', activation='tanh', return_sequences=True)(inputs) bathnorm1 = layers.BatchNormalization()(convlstm1) convlstm2 = layers.ConvLSTM2D(filters=128, kernel_size=(3, 3), padding='same', activation='tanh', return_sequences=False)(bathnorm1) # Second stack of convlstm layers convlstm3 = layers.ConvLSTM2D(filters=128, kernel_size=(3, 3), padding='same', activation='tanh', return_sequences=True)(inputs) batchnorm2 = layers.BatchNormalization()(convlstm3) convlstm4 = layers.ConvLSTM2D(filters=128, kernel_size=(3, 3), padding='same', activation='tanh', return_sequences=False)(batchnorm2) # Concatenate outputs of two stacks concatenation = layers.concatenate([convlstm2, convlstm4]) outputs = layers.Conv2D(filters=1, kernel_size=1, padding=&quot;same&quot;, activation='tanh')(concatenation) # Create the model model = Model(inputs=[inputs, input_mask], outputs=outputs) model.add_loss(masked_MSE_loss(inputs, outputs, input_mask)) # Compile the model model.compile(optimizer='adam', loss=None, metrics=['mae']) </code></pre> <p>Finally, we tried fitting the model in a rather unique way in an attempt to merge our TimeSeriesGenerator with our constant input:</p> <pre><code>for batch in train_gen: batch_input, batch_target = batch model.fit(x=[batch_input, np.repeat(mask[np.newaxis, :, :, :], len(batch), axis=0)], y=batch_target, epochs=1) </code></pre> <p>We loop over each batch and repeat the constant <code>len(batch)</code> times before feeding it to our network and training it for 1 epoch (each batch). 
This gives us the following error:</p> <pre><code>ValueError: Input 1 of layer &quot;model&quot; is incompatible with the layer: expected shape=(None, 50, 1), found shape=(32, 55, 50, 1) </code></pre> <p>This made us think that we needed to feed the mask constant only once:</p> <pre><code>for batch in train_gen: batch_input, batch_target = batch model.fit(x=[batch_input, mask], y=batch_target, epochs=100) </code></pre> <p>But this gave us the error:</p> <pre><code>ValueError: Data cardinality is ambiguous: x sizes: 32, 55 y sizes: 32 Make sure all arrays contain the same number of samples </code></pre> <p>Clearly, the model expects the first argument to be the batch size, but it almost feels like the error contradicts the previously tried solution where we repeated the constant according to <code>len(batch)</code>.</p> <p><strong>So our question is: How do we fit our model with a constant tensor as our second input (to compute a masked loss over our predictions) combined with our TimeSeriesGenerator data?</strong></p>
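<p>One way around the whole second-input problem, sketched under the assumption that the mask really is constant: close over the mask in a custom loss and compile the single-input model with it, so the generator batches can be passed to <code>fit</code> unchanged. Masking <code>y_true</code> as well makes the pixels outside the region contribute exactly zero error.</p>
<pre><code>import tensorflow as tf

def make_masked_mse(mask):
    # mask has shape (55, 50, 1) and broadcasts over the batch dimension
    def masked_mse(y_true, y_pred):
        y_true_masked = tf.math.multiply(y_true, mask)
        y_pred_masked = tf.math.multiply(y_pred, mask)
        return tf.reduce_mean(tf.square(y_true_masked - y_pred_masked))
    return masked_mse

# model = Model(inputs=inputs, outputs=outputs)     # single input again
# model.compile(optimizer='adam', loss=make_masked_mse(mask), metrics=['mae'])
# model.fit(train_gen, epochs=100)
</code></pre>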
<python><time-series><conv-neural-network><lstm><tensorflow2.0>
2023-01-09 12:22:28
1
402
Boomer
75,057,110
458,700
How to keep a Python web app always running on an Ubuntu machine
<p>I have a web app deployed to a machine running Ubuntu 20. To run the app I have to open an SSH session to the Ubuntu machine and then run these commands:</p> <pre><code>cd mywebapp python3 app.py </code></pre> <p>This works, but as soon as I close the SSH console, reboot the machine, or anything else happens, the app stops and I have to repeat these commands.</p> <p>I tried to add it as a cron job to run after a machine reboot, but it does not work.</p> <p>I posted a question about that in the following link: <a href="https://stackoverflow.com/questions/75025188/run-python-app-after-server-restart-does-not-work-using-crontab">run python app after server restart does not work using crontab</a></p> <p>Nothing has worked for me, and I have to make sure that this web app is always running because it should be sending push notifications to mobile devices.</p> <p>Can anyone please advise? I have been searching and trying for a long time.</p>
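<p>One common way to keep such an app running is a systemd service; the sketch below is a minimal unit file, where the service name, the <code>ubuntu</code> user and the paths are assumptions that would need to match the actual setup.</p>
<pre><code># /etc/systemd/system/mywebapp.service
[Unit]
Description=My Python web app
After=network.target

[Service]
User=ubuntu
WorkingDirectory=/home/ubuntu/mywebapp
ExecStart=/usr/bin/python3 app.py
Restart=always

[Install]
WantedBy=multi-user.target
</code></pre>
<p>After creating the file, something like <code>sudo systemctl daemon-reload</code> followed by <code>sudo systemctl enable --now mywebapp</code> starts the app and keeps starting it on every reboot.</p>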
<python><linux><ubuntu>
2023-01-09 12:11:53
2
9,464
Amira Elsayed Ismail
75,057,097
3,650,983
pytorch dataset with 2 transformation train and validation
<p>I would like to use 2 different transformations, one for training and a second one for validation and test. I mean to add some augmentation during the training process and to validate/test without this augmentation.</p> <p>What's the PyTorch way to do this?</p> <p>I would like to run the 2 different transforms with torch.utils.data.Subset or torch.utils.data.DataLoader rather than creating 2 datasets.</p>
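<p>A common sketch for this: keep one untransformed base dataset, split it with <code>random_split</code> (or <code>Subset</code>), and wrap each split in a thin dataset that applies its own transform. The <code>TransformedSubset</code> name and the 800/200 split below are illustrative only.</p>
<pre><code>from torch.utils.data import Dataset, DataLoader, random_split
from torchvision import transforms

class TransformedSubset(Dataset):
    # Applies its own transform on top of any dataset/subset it wraps.
    def __init__(self, subset, transform=None):
        self.subset = subset
        self.transform = transform

    def __len__(self):
        return len(self.subset)

    def __getitem__(self, idx):
        x, y = self.subset[idx]
        if self.transform is not None:
            x = self.transform(x)
        return x, y

# base_ds = ...  # dataset returning raw (untransformed) samples
# train_sub, val_sub = random_split(base_ds, [800, 200])
# train_ds = TransformedSubset(train_sub, transforms.Compose(
#     [transforms.RandomHorizontalFlip(), transforms.ToTensor()]))
# val_ds = TransformedSubset(val_sub, transforms.ToTensor())
# train_loader = DataLoader(train_ds, batch_size=32, shuffle=True)
</code></pre>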
<python><machine-learning><pytorch><dataset><training-data>
2023-01-09 12:11:05
1
4,119
ChaosPredictor
75,056,713
10,035,978
Connect to OPCUA server with username and password
<p>I am using UAExpert application and i am connecting to my machine with these settings: <a href="https://i.sstatic.net/UcGy0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UcGy0.png" alt="enter image description here" /></a></p> <p>I want to connect to my device with python. I have this code but it doesn't work.</p> <pre><code>from opcua import Client client = Client(&quot;opc.tcp://&lt;ip&gt;:4840&quot;) client.set_user(&quot;username&quot;) client.set_password(&quot;password&quot;) client.set_security_string(&quot;Basic256Sha256,Sign,cert.der,key.pem&quot;) client.connect() </code></pre> <p>I am getting this error:</p> <blockquote> <p>raise ua.UaError(&quot;No matching endpoints: {0}, {1}&quot;.format(security_mode, policy_uri)) opcua.ua.uaerrors._base.UaError: No matching endpoints: 2, <a href="http://opcfoundation.org/UA/SecurityPolicy#Basic256Sha256" rel="nofollow noreferrer">http://opcfoundation.org/UA/SecurityPolicy#Basic256Sha256</a></p> </blockquote> <p>UPDATE:</p> <p>I think it's the issue of the certificate. So i found from UAExpert settings where it gets the certificate from. <a href="https://i.sstatic.net/8dV2k.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8dV2k.png" alt="enter image description here" /></a> I use the same path for <code>cert.der</code> but i don't know where i can find the <code>key.pem</code></p>
<python><opc-ua><opc>
2023-01-09 11:36:22
1
1,976
Alex
75,056,435
13,517,174
How can you run singular parametrized tests in pytest if the parameter is a string that contains spaces?
<p>I have a test that looks as following:</p> <pre><code>@pytest.mark.parametrize('param', ['my param', 'my param 2']) def test_param(self,param): ... </code></pre> <p>This works fine when calling this test with</p> <pre><code>python3 -m pytest -s -k &quot;test_param&quot; </code></pre> <p>However, if I want to target a specific test as following:</p> <pre><code>python3 -m pytest -s -k &quot;test_param[my param]&quot; </code></pre> <p>I get the error message</p> <pre><code>ERROR: Wrong expression passed to '-k': my param: at column 4: expected end of input; got identifier </code></pre> <p>Also, when my input string contains a quotation mark <code>'</code>, I get the error</p> <pre><code>ERROR: Wrong expression passed to '-k': ... : at column 51: expected end of input; got left parenthesis </code></pre> <p>and if my string contains both <code>&quot;</code> and <code>'</code>, I am completely unable to call it with the <code>-k</code> option without the string terminating in the middle.</p> <p>How can I run tests with string parameters that contain these symbols? I am currently creating a dict and supplying <code>range(len(my_dict))</code> as the parameter so I can access these variables via index, but I would prefer to be able to directly enter them in the commandline.</p> <p>EDIT:</p> <p>The current suggestions are all great and already solve some of my problems. However, I'm still not sure how I would call singular tests if my test function looked like this (it has more than one entry as opposed to this minimal example):</p> <pre><code>@pytest.mark.parametrize('input, expected', [ ( &quot;&quot;&quot; integer :: &amp; my_var !&lt; my comment &quot;&quot;&quot;, {'my_var': 'my comment'} ) ]) def test_fetch_variable_definitions_multiline(input,expected): ... </code></pre>
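<p>One way to sidestep the quoting problems entirely is to give the parametrized cases explicit ids with <code>pytest.param</code>, so the generated node ids contain no spaces or quotes; the ids below are illustrative.</p>
<pre><code>import pytest

@pytest.mark.parametrize('param', [
    pytest.param('my param', id='my_param'),
    pytest.param('my param 2', id='my_param_2'),
])
def test_param(param):
    assert isinstance(param, str)
</code></pre>
<p>A single case can then be run with <code>python3 -m pytest -k my_param_2</code> or, by node id, <code>python3 -m pytest 'test_file.py::test_param[my_param_2]'</code>.</p>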
<python><pytest>
2023-01-09 11:11:11
5
453
Yes
75,056,418
1,479,974
Anaconda Installation Failed on macOS Ventura
<p>I am trying to install Anaconda on my new MacBook which has Ventura 13.1 installed.</p> <p>I am installing only for myself and the installation fails.</p> <p>Can someone please help?</p> <p><strong>Edit 1</strong></p> <p>I have followed <a href="https://docs.anaconda.com/anaconda/user-guide/troubleshooting/#the-installation-failed-message-when-running-a-pkg-installer-on-osx" rel="nofollow noreferrer">this</a> link but I do not see that.</p> <p><strong>Edit 2</strong></p> <p>As it was a new setup, I erased the disk and was able to reinstall it.</p> <p><a href="https://i.sstatic.net/3jtUA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3jtUA.png" alt="enter image description here" /></a></p>
<python><python-3.x><macos><anaconda><macos-ventura>
2023-01-09 11:10:19
3
6,544
chintan s
75,056,311
13,505,957
Explode raises values error ValueError: columns must have matching element counts
<p>I have the following dataframe:</p> <pre><code>list1 = [1, 6, 7, [46, 56, 49], 45, [15, 10, 12]] list2 = [[49, 57, 45], 3, 7, 8, [16, 19, 12], 41] data = {'A':list1, 'B': list2} data = pd.DataFrame(data) </code></pre> <p>I can explode the dataframe using this piece of code:</p> <pre><code>data.explode('A').explode('B') </code></pre> <p>but when I run this one to do the same operation a value error is raised:</p> <pre><code>data.explode(['A', 'B']) ValueError Traceback (most recent call last) &lt;ipython-input-97-efafc6c7cbfa&gt; in &lt;module&gt; 5 'B': list2} 6 data = pd.DataFrame(data) ----&gt; 7 data.explode(['A', 'B']) ~\AppData\Roaming\Python\Python38\site-packages\pandas\core\frame.py in explode(self, column, ignore_index) 9033 for c in columns[1:]: 9034 if not all(counts0 == self[c].apply(mylen)): -&gt; 9035 raise ValueError(&quot;columns must have matching element counts&quot;) 9036 result = DataFrame({c: df[c].explode() for c in columns}) 9037 result = df.drop(columns, axis=1).join(result) ValueError: columns must have matching element counts </code></pre> <p>Can anyone explain why?</p>
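<p>The multi-column form <code>explode(['A', 'B'])</code> (pandas 1.3+) zips the two columns element-wise, so every row must hold the same number of elements in A as in B, which is not the case here (e.g. the first row has one value in A and three in B); chaining <code>.explode('A').explode('B')</code> instead builds the per-row cross product, which is why it succeeds. A small sketch of the difference:</p>
<pre><code>import pandas as pd

ok = pd.DataFrame({'A': [[1, 2], [3]], 'B': [[10, 20], [30]]})
print(ok.explode(['A', 'B']))        # works: element counts match per row

bad = pd.DataFrame({'A': [[1, 2]], 'B': [[10, 20, 30]]})
# bad.explode(['A', 'B'])            # ValueError: columns must have matching element counts
print(bad.explode('A').explode('B')) # cross product per row instead
</code></pre>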
<python><pandas><dataframe>
2023-01-09 10:59:54
2
1,107
ali bakhtiari
75,056,198
8,771,126
Using SQLAlchemy to execute an SQL statement with named parameters
<p>Why can't I raw insert a list of dicts with SQLalchemy ?</p> <pre><code>import os import sqlalchemy import pandas as pd def connect_unix_socket() -&gt; sqlalchemy.engine: db_user = os.environ[&quot;DB_USER&quot;] db_pass = os.environ[&quot;DB_PASS&quot;] db_name = os.environ[&quot;DB_NAME&quot;] unix_socket_path = os.environ[&quot;INSTANCE_UNIX_SOCKET&quot;] return sqlalchemy.create_engine( sqlalchemy.engine.url.URL.create( drivername=&quot;postgresql+pg8000&quot;, username=db_user, password=db_pass, database=db_name, query={&quot;unix_sock&quot;: f&quot;{unix_socket_path}/.s.PGSQL.5432&quot;}, ) ) def _insert_ecoproduct(df: pd.DataFrame) -&gt; None: db = connect_unix_socket() db_matching = { 'gtin': 'ecoproduct_id', 'ITEM_NAME_AS_IN_MARKETPLACE' : 'ecoproductname', 'ITEM_WEIGHT_WITH_PACKAGE_KG' : 'ecoproductweight', 'ITEM_HEIGHT_CM' : 'ecoproductlength', 'ITEM_WIDTH_CM' : 'ecoproductwidth', 'test_gtin' : 'gtin_test', 'batteryembedded' : 'batteryembedded' } df = df[db_matching.keys()] df.rename(columns=db_matching, inplace=True) data = df.to_dict(orient='records') sql_query = &quot;&quot;&quot;INSERT INTO ecoproducts( ecoproduct_id, ecoproductname, ecoproductweight, ecoproductlength, ecoproductwidth, gtin_test, batteryembedded) VALUES (%(ecoproduct_id)s, %(ecoproductname)s,%(ecoproductweight)s,%(ecoproductlength)s, %(ecoproductwidth)s,%(gtin_test)s,%(batteryembedded)s) ON CONFLICT(ecoproduct_id) DO NOTHING;&quot;&quot;&quot; with db.connect() as conn: result = conn.exec_driver_sql(sql_query, data) print(f&quot;{result.rowcount} new rows were inserted.&quot;) </code></pre> <p>I keep having this error : <a href="https://i.sstatic.net/C7Ago.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/C7Ago.png" alt="enter image description here" /></a></p> <p>Is it possible to map parameters with th dialect pg8000 ? Or maybe I should use psycopg2 ?</p> <p>What is the problem here ?</p> <h2>EDIT 1: see variable data details :</h2> <pre><code>print(data) print(type(data)) [{'ecoproduct_id': '6941487202157', 'ecoproductname': 'HUAWEI FreeBuds Pro Bluetooth sans Fil ', 'ecoproductweight': '4', 'ecoproductlength': '0.220', 'ecoproductwidth': '0.99', 'gtin_test': False, 'batteryembedded': 0}] &lt;class 'list'&gt; </code></pre>
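<p>A sketch of the usual driver-independent SQLAlchemy way to do this, reusing the <code>db</code> engine and the <code>data</code> list of dicts from above: wrap the statement in <code>sqlalchemy.text()</code> with <code>:name</code> parameters, and pass the list of dicts to <code>execute()</code>, which runs it as an executemany.</p>
<pre><code>import sqlalchemy

sql_query = sqlalchemy.text('''
    INSERT INTO ecoproducts(
        ecoproduct_id, ecoproductname, ecoproductweight,
        ecoproductlength, ecoproductwidth, gtin_test, batteryembedded)
    VALUES (:ecoproduct_id, :ecoproductname, :ecoproductweight,
            :ecoproductlength, :ecoproductwidth, :gtin_test, :batteryembedded)
    ON CONFLICT(ecoproduct_id) DO NOTHING
''')

with db.begin() as conn:                 # commits on success, rolls back on error
    result = conn.execute(sql_query, data)
    print(f'{result.rowcount} new rows were inserted.')
</code></pre>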
<python><python-3.x><sqlalchemy><pg8000>
2023-01-09 10:50:56
1
358
IndiaSke
75,056,150
15,445,589
Communication between Microservices in Google App Engine
<p>I currently have 5 different services in Google App Engine, all of them FastAPI apps in the Python standard environment. When a service gets called, it calls an authorization service and continues if the permissions are valid. I'm using a firewall rule to block all incoming requests except from my computer. With the firewall rule in place I cannot call the other service because it returns Access Forbidden. I then found something saying that for requests in Python on GAE you have to use Google's URLFetch to make calls to other services. But when I use the <code>monkeypatch()</code> function from <code>requests_toolbelt.adapters.appengine</code> I receive an error in App Engine</p> <pre class="lang-py prettyprint-override"><code> File &quot;/layers/google.python.pip/pip/lib/python3.10/site-packages/requests_toolbelt/adapters/appengine.py&quot;, line 121, in __init__ self.appengine_manager = gaecontrib.AppEngineManager( File &quot;/layers/google.python.pip/pip/lib/python3.10/site-packages/urllib3/contrib/appengine.py&quot;, line 107, in __init__ raise AppEnginePlatformError( urllib3.contrib.appengine.AppEnginePlatformError: URLFetch is not available in this environment. </code></pre> <p>The main reason to restrict the APIs is that nobody should be able to read the docs of the services.</p>
<python><google-app-engine><google-cloud-platform><microservices><fastapi>
2023-01-09 10:46:33
1
641
Kevin Rump
75,056,147
1,711,088
Numpy. How to split 2D array to multiple arrays by grid?
<p>I have a numpy 2D-array:</p> <pre><code>c = np.arange(36).reshape(6, 6) [[ 0, 1, 2, 3, 4, 5], [ 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17], [18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29], [30, 31, 32, 33, 34, 35]] </code></pre> <p>I want to split it to multiple 2D-arrays by grid 3x3. (It's like a split big image to 9 small images by grid 3x3):</p> <pre><code>[[ 0, 1,| 2, 3,| 4, 5], [ 6, 7,| 8, 9,| 10, 11], ---------+--------+--------- [12, 13,| 14, 15,| 16, 17], [18, 19,| 20, 21,| 22, 23], ---------+--------+--------- [24, 25,| 26, 27,| 28, 29], [30, 31,| 32, 33,| 34, 35]] </code></pre> <p>At final i need array with 9 2D-arrays. Like this:</p> <pre><code>[[[0, 1], [6, 7]], [[2, 3], [8, 9]], [[4, 5], [10, 11]], [[12, 13], [18, 19]], [[14, 15], [20, 21]], [[16, 17], [22, 23]], [[24, 25], [30, 31]], [[26, 27], [32, 33]], [[28, 29], [34, 35]]] </code></pre> <p>It's just a sample what i need. I want to know how to make small 2D arrays from big 2D array by grid (N,M)</p>
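<p>A compact sketch using only a reshape and an axis swap, assuming the array shape divides evenly by the grid; <code>split_by_grid</code> is just an illustrative helper name.</p>
<pre><code>import numpy as np

def split_by_grid(arr, n_rows, n_cols):
    # Cuts arr into an (n_rows x n_cols) grid of equal blocks and returns
    # them as one array of shape (n_rows * n_cols, block_h, block_w).
    h, w = arr.shape
    bh, bw = h // n_rows, w // n_cols
    return (arr.reshape(n_rows, bh, n_cols, bw)
               .swapaxes(1, 2)
               .reshape(-1, bh, bw))

c = np.arange(36).reshape(6, 6)
blocks = split_by_grid(c, 3, 3)
print(blocks[0])   # [[0 1] [6 7]]
print(blocks[1])   # [[2 3] [8 9]]
</code></pre>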
<python><arrays><numpy><split><slice>
2023-01-09 10:46:14
1
976
Massimo
75,056,143
3,540,903
threadpool executor inside processpoolexecutor RuntimeError: There is no current event loop in thread
<p>I have a processpoolexecutor into which I submit multiple disk read/write calls. I want to create a threadpool inside every process for performance benefits.</p> <p>below is my attempt to override and modify _process_worker method of concurrent.futures process.py to use with ProcessPoolExecutor. I am trying to run the function in a ThreadPoolExecutor inside -</p> <pre><code>from concurrent.futures import process as process_futures class ProcessPoolExecutor(process_futures.ProcessPoolExecutor): &quot;&quot;&quot;Override process creation to use our processes&quot;&quot;&quot; def _adjust_process_count(self): &quot;&quot;&quot;This is copy-pasted from concurrent.futures to override the Process class&quot;&quot;&quot; for _ in range(len(self._processes), self._max_workers): p = Process( target=_process_worker, args=(self._call_queue, self._result_queue, None, None)) p.start() self._processes[p.pid] = p def _process_worker(call_queue, result_queue): with ThreadPoolExecutor(max_workers=8) as executor: # starting a Threadpool while True: call_item = call_queue.get(block=True) if call_item is None: # Wake up queue management thread result_queue.put(os.getpid()) return try: if 1: # my changes , problem with this code future = executor.submit(call_item.fn, *call_item.args, **call_item.kwargs) future.add_done_callback( functools.partial(_return_result, call_item, result_queue)) else: # original code with only processpool as in futures process.py r = call_item.fn(*call_item.args, **call_item.kwargs) except BaseException as e: result_queue.put(process_futures._ResultItem(call_item.work_id, exception=e)) else: result_queue.put(process_futures._ResultItem(call_item.work_id, result=r)) </code></pre> <p>when I add a threadpoolexecutor inside processpoolexecutor , i get below error</p> <pre><code>RuntimeError: There is no current event loop in thread '&lt;threadedprocess._ThreadPoolExecutor object at 0x000001C5897B1FA0&gt;_0'. </code></pre> <p>I understand that eventloop are not created on child threads, so its complaining of no current event loop. and so, even if i add new event loop -</p> <pre><code>def _process_worker(call_queue, result_queue, a, b): try: import asyncio loop = asyncio.new_event_loop() asyncio.set_event_loop(loop) except Exception as e: logger.info(&quot;eexception {} &quot;.format(e)) with ThreadPoolExecutor(max_workers=8) as executor: while True: call_item = call_queue.get(block=True) if call_item is None: # Wake up queue management thread result_queue.put(os.getpid()) return try: if 1: # my changes , problem with this code job_func = functools.partial(call_item.fn, *call_item.args, **call_item.kwargs) try: loop.run_in_executor(executor, job_func) except Exception as e: logger.info(&quot;exception recvd {}&quot;.format(e)) else: # original code with only processpool as in futures process.py r = call_item.fn(*call_item.args, **call_item.kwargs) except BaseException as e: result_queue.put(process_futures._ResultItem(call_item.work_id, exception=e)) else: result_queue.put(process_futures._ResultItem(call_item.work_id, result=r)) </code></pre> <p>I get a new error -</p> <pre><code>concurrent.futures.process.BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending. </code></pre> <p>how can i change _process_worker to run the work in a threadpool ? Any suggestions please.</p>
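<p>A much simpler pattern that avoids touching the <code>concurrent.futures</code> internals at all, sketched with hypothetical file names: submit an ordinary function to the <code>ProcessPoolExecutor</code>, and let that function create its own <code>ThreadPoolExecutor</code> inside the worker process, so no executor or event loop ever crosses the process boundary.</p>
<pre><code>import os
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

def process_chunk(paths):
    # Runs inside a worker process; fans its chunk out to its own threads.
    def read_one(path):
        with open(path, 'rb') as fh:
            return len(fh.read())
    with ThreadPoolExecutor(max_workers=8) as threads:
        return list(threads.map(read_one, paths))

if __name__ == '__main__':
    chunks = [['a.bin', 'b.bin'], ['c.bin', 'd.bin']]   # hypothetical files
    with ProcessPoolExecutor(max_workers=os.cpu_count()) as procs:
        results = list(procs.map(process_chunk, chunks))
</code></pre>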
<python><python-asyncio><concurrent.futures>
2023-01-09 10:46:05
1
312
CodeTry
75,056,128
460,544
When is a default object created in Python?
<p>I have a Python (3) structure like following:</p> <ul> <li>main_script.py</li> <li>util_script.py</li> <li>AccessClass.py</li> </ul> <p>The <code>main</code> script is calling a function in <code>util</code> with following signature:</p> <pre><code>def migrate_entity(project, name, access=AccessClass.AccessClass()): </code></pre> <p>The call itself in the main script is:</p> <pre><code>migrate_entity(project_from_file, name_from_args, access=access_object) </code></pre> <p>All objects do have values when the call is done. However, As soon as the <code>main</code> script is executed the <code>AccessClass</code> in the function parameters defaults is initialized, even though it is never used. For example this <code>main</code> script <code>__init__</code> will create the default class in the function signature:</p> <pre><code>if __name__ == &quot;__main__&quot;: argparser = argparse.ArgumentParser(description='Migrate support data') argparser.add_argument('--name', dest='p_name', type=str, help='The entity name to migrate') load_dotenv() fileConfig('logging.ini') # Just for the sake of it quit() # The rest of the code... # ...and then migrate_entity(project_from_file, name_from_args, access=access_object) </code></pre> <p>Even with the <code>quit()</code> added the <code>AccessClass</code> is created. And if I run the script with <code>./main_script.py -h</code> the <code>AccessClass</code> in the function signature is created. And even though the only call to the function really is with an access object I can see that the call is made to the <code>AccessClass.__init__</code>.</p> <p>If I replace the default with <code>None</code> and instead check the parameter inside the function and then create it, everything is working as expected, i.e. the <code>AccessClass</code> is not created if not needed.</p> <p>Can someone please enlighten me why this is happening and how defaults are expected to work?</p> <p>Are parameter defaults always created in advance in Python?</p>
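<p>A tiny sketch of the behaviour being asked about: default values are evaluated once, when the <code>def</code> statement itself runs (typically at import time), not when the function is called, which is why the <code>None</code>-sentinel idiom behaves differently.</p>
<pre><code>class Probe:
    def __init__(self):
        print('Probe() evaluated')

def f(x, probe=Probe()):      # prints immediately, when this def is executed
    return probe

def g(x, probe=None):         # common idiom: create the default lazily
    if probe is None:
        probe = Probe()
    return probe
</code></pre>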
<python>
2023-01-09 10:45:01
2
962
Sven
75,055,890
2,280,741
FastAPI - Cannot use `Response` as a return type when `status_code` is set to 204
<p>I've been using the following code for my <code>/healthz</code>:</p> <pre class="lang-py prettyprint-override"><code>@router.get(&quot;/healthz&quot;, status_code=status.HTTP_204_NO_CONTENT, tags=[&quot;healthz&quot;], summary=&quot;Service for 'Health Check'&quot;, description=&quot;This entrypoint is used to check if the service is alive or dead.&quot;, # include_in_schema=False ) def get_healthz() -&gt; Response: return Response(status_code=status.HTTP_204_NO_CONTENT) </code></pre> <p>This has been working since some years ago.</p> <p>Today I updated FastAPI from 0.88.0 to 0.89.0 and now I get <code>AssertionError: Status code 204 must not have a response body</code>. The full tracebakc can be seen below:</p> <pre><code>Traceback (most recent call last): File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1234, in _handle_fromlist File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 241, in _call_with_frames_removed File &quot;......../src/routers/healthz.py&quot;, line 20, in &lt;module&gt; @router.get(&quot;/healthz&quot;, status_code=status.HTTP_204_NO_CONTENT, tags=[&quot;healthz&quot;], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/..../.local/share/virtualenvs/........../lib/python3.11/site-packages/fastapi/routing.py&quot;, line 633, in decorator self.add_api_route( File &quot;/Users/..../.local/share/virtualenvs/......../lib/python3.11/site-packages/fastapi/routing.py&quot;, line 572, in add_api_route route = route_class( ^^^^^^^^^^^^ File &quot;/Users/...../.local/share/virtualenvs/....../lib/python3.11/site-packages/fastapi/routing.py&quot;, line 396, in __init__ assert is_body_allowed_for_status_code( AssertionError: Status code 204 must not have a response body python-BaseException </code></pre> <p>Here: <a href="https://i.sstatic.net/UFgSE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UFgSE.png" alt="enter image description here" /></a></p> <p>My question is:</p> <p>Is this a bug from the version 0.89.0 , or should I write the <code>/heathz</code> In a different way?</p> <p>Even with <code>return Response(status_code=status.HTTP_204_NO_CONTENT, content=None)</code> is failling.</p> <p>Changelog of 0.89.0: <a href="https://i.sstatic.net/DripA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DripA.png" alt="enter image description here" /></a></p> <p>Thanks</p>
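<p>This looks related to 0.89.0 starting to infer the response model from the return type annotation; a possible workaround, sketched below, is to drop the annotation (or keep it and pass <code>response_model=None</code>) so no body model is generated for the 204 route. Checking for a newer patch release is also worthwhile.</p>
<pre><code>from fastapi import APIRouter, Response, status

router = APIRouter()

@router.get('/healthz', status_code=status.HTTP_204_NO_CONTENT, response_model=None)
def get_healthz():
    # no return annotation here, so no response model is inferred
    return Response(status_code=status.HTTP_204_NO_CONTENT)
</code></pre>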
<python><fastapi>
2023-01-09 10:25:27
1
3,996
Rui Martins
75,055,788
5,774,969
Get Azure Batch job and task ID from python
<p>When running a custom python script using Azure Batch I am interested in getting access to, and logging, the JobID and TaskID of the tasks I am running on the compute node. Mainly for later traceability and debugging purposes. My desire is to get access to these parameters from within the python script that is being executed on the compute node. I have trawled through the Microsoft documentation, but have been unable to figure out how to access this information.</p> <p>I had assumed that it was similar to the way I'm accessing the custom job parameters through the <code>activity.json</code>, but that does not appear to be the case.</p> <p>How do I get the job ID and task ID of the Azure Batch task from within the python script that is being executed by the task?</p>
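<p>Batch sets a number of <code>AZ_BATCH_*</code> environment variables for every task it runs on a compute node, so inside the script they can be read directly; a minimal sketch:</p>
<pre><code>import os

job_id = os.environ.get('AZ_BATCH_JOB_ID')
task_id = os.environ.get('AZ_BATCH_TASK_ID')
print(f'running as job={job_id}, task={task_id}')
</code></pre>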
<python><azure><azure-batch>
2023-01-09 10:16:33
1
502
AstroAT
75,055,750
16,607,067
ModuleNotFoundError: No module named 'config' outside django app
<p>I am using <code>cookiecutter-django</code> on my project and having problem while trying to import settings from config outside app. Getting error <code>ModuleNotFoundError: No module named 'config'</code> <strong>Project structure</strong></p> <pre class="lang-bash prettyprint-override"><code>project ┣ .envs ┃ ┗ .local ┃ ┃ ┣ .bot ┃ ┃ ┣ .django ┣ bot ┃ ┣ __init__.py ┃ ┗ bot.py ┣ compose ┃ ┣ local ┃ ┃ ┣ django ┃ ┃ ┃ ┣ Dockerfile ┃ ┃ ┃ ┗ start ┃ ┃ ┗ pytelegrambot ┃ ┃ ┃ ┣ Dockerfile ┃ ┃ ┃ ┗ start ┣ config ┃ ┣ settings ┃ ┃ ┣ __init__.py ┃ ┃ ┣ base.py ┃ ┃ ┣ local.py ┃ ┣ __init__.py ┃ ┣ urls.py ┃ ┗ wsgi.py ┣ project ┃ ┣ app ┃ ┃ ┣ migrations ┃ ┃ ┃ ┗ __init__.py ┃ ┃ ┣ admin.py ┃ ┃ ┣ apps.py ┃ ┃ ┣ signals.py ┃ ┃ ┣ models.py ┃ ┃ ┗ views.py ┣ requirements ┃ ┣ base.txt ┣ README.md ┣ local.yml ┣ manage.py </code></pre> <p><strong>bot.py</strong></p> <pre class="lang-py prettyprint-override"><code>import telebot from config.settings.base import env bot = telebot.TeleBot(env('BOT_TOKEN')) def send_welcome(message): print(message) if __name__ == '__main__': bot.infinity_polling() </code></pre> <p><strong>signals.py</strong></p> <pre class="lang-py prettyprint-override"><code>from bot.bot import send_welcome @receiver(post_save, sender=Model) def translate(sender, instance, created, **kwargs): send_wlcome(&quot;Hi&quot;) </code></pre> <p>Here I am sending message on telegram bot when object created. If I try to use <code>os.environ['BOT_TOKEN']</code> it gives me another error from <strong>signals.py</strong> <code>KeyError: 'BOT_TOKEN'</code>. BOT_TOKEN is located in <code>.envs/.local/.bot</code> Please, can anyone help ?</p>
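<p>The bot process runs outside Django, so <code>config</code> is only importable if the project root is on <code>sys.path</code> (and the Django settings are set up); a simpler sketch is to read the token directly from the env file instead of going through the settings module. The path below is taken from the layout above and assumes the bot is started from the project root.</p>
<pre><code># bot/bot.py
import os
import telebot
from dotenv import load_dotenv

load_dotenv('.envs/.local/.bot')   # read BOT_TOKEN without importing Django settings
bot = telebot.TeleBot(os.environ['BOT_TOKEN'])
</code></pre>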
<python><django><cookiecutter-django>
2023-01-09 10:12:00
1
439
mirodil
75,055,683
10,748,412
Python - What is wrong with this function declaration?
<pre><code>def convert(self, path: str): ^ SyntaxError: invalid syntax </code></pre> <p>I am getting a SyntaxError. I checked online and saw this is how it should be declared. What is wrong with this?</p>
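<p>The annotation syntax itself is valid in any Python 3, so a quick sanity check is to confirm which interpreter is actually executing the file (running it under Python 2 produces exactly this error); an unclosed bracket on the line just above the <code>def</code> can also surface as a SyntaxError at this point.</p>
<pre><code>import sys
print(sys.version)   # annotations such as path: str require Python 3
</code></pre>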
<python><django><django-rest-framework>
2023-01-09 10:04:50
2
365
ReaL_HyDRA
75,055,530
20,732,098
Get Duplicated Rows in Dataframe and Overwrite them Python
<p>I have the following Dataframe:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>index</th> <th>errorId</th> <th>start</th> <th>end</th> <th>timestamp</th> <th>uniqueId</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>1404</td> <td>2022-04-25 02:10:41</td> <td>2022-04-25 02:10:46</td> <td>2022-04-25</td> <td>1404_2022-04-25</td> </tr> <tr> <td>1</td> <td>1302</td> <td>2022-04-25 02:10:41</td> <td>2022-04-25 02:10:46</td> <td>2022-04-25</td> <td>1302_2022-04-25</td> </tr> <tr> <td>2</td> <td>1404</td> <td>2022-04-27 12:54:46</td> <td>2022-04-27 12:54:51</td> <td>2022-04-25</td> <td>1404_2022-04-25</td> </tr> <tr> <td>3</td> <td>1302</td> <td>2022-04-27 13:34:43</td> <td>2022-04-27 13:34:50</td> <td>2022-04-25</td> <td>1302_2022-04-25</td> </tr> <tr> <td>4</td> <td>1404</td> <td>2022-04-29 04:30:22</td> <td>2022-04-29 04:30:29</td> <td>2022-04-25</td> <td>1404_2022-04-25</td> </tr> <tr> <td>5</td> <td>1302</td> <td>2022-04-29 08:26:25</td> <td>2022-04-29 08:26:32</td> <td>2022-04-25</td> <td>1302_2022-04-25</td> </tr> </tbody> </table> </div> <p>The unique_ID is a combination from the column errorId and uniqueId. It should be checked whether the column 'uniqueID' contains a duplicate value. If this is the case, the row should be taken where it appears for the first time. In the example for errorId 1404, it would be the column at index 0. Afterwards, the value in the column 'end' should be overwritten with the value where it appears for the last time. In the example here, at index 4.<br> The same for errorId 1302</p> <p>In the End it should look like this:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>index</th> <th>errorId</th> <th>start</th> <th>end</th> <th>timestamp</th> <th>uniqueId</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>1404</td> <td>2022-04-25 02:10:41</td> <td>2022-04-29 04:30:29</td> <td>2022-04-25</td> <td>1404_2022-04-25</td> </tr> <tr> <td>1</td> <td>1302</td> <td>2022-04-25 02:10:41</td> <td>2022-04-29 08:26:32</td> <td>2022-04-25</td> <td>1302_2022-04-25</td> </tr> </tbody> </table> </div>
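<p>One sketch of this with a single groupby, assuming the rows should be treated in chronological order (hence the sort by <code>start</code>): take the first value of most columns and the last value of <code>end</code> per <code>uniqueId</code>.</p>
<pre><code>out = (df.sort_values('start')
         .groupby('uniqueId', as_index=False)
         .agg(errorId=('errorId', 'first'),
              start=('start', 'first'),
              end=('end', 'last'),
              timestamp=('timestamp', 'first')))
print(out)
</code></pre>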
<python><pandas><dataframe>
2023-01-09 09:51:08
1
336
ranqnova
75,055,413
2,707,342
GraphQL returns null for update and delete mutation
<p>I am using graphene-django library for integrating GraphQL Schema into my Django application.</p> <p>I have implemented Queries; get all and get specific, as well as implemented create, update and delete Mutations. All endpoints are working as expected except for the <strong>update</strong> and <strong>delete</strong>.</p> <p>Here is what my model looks like:</p> <pre><code>COUNTRIES = ( (&quot;sierra leone&quot;, &quot;Sierra Leone&quot;), (&quot;guinea&quot;, &quot;Guinea&quot;), ) class School(models.Model): name = models.CharField(_(&quot;Name&quot;), max_length=255) abreviation = models.CharField(_(&quot;Abreviation&quot;), max_length=10) label = models.TextField(_(&quot;Label&quot;), max_length=255, blank=True, null=True) school_id = models.CharField(_(&quot;School ID&quot;), max_length=100, unique=True, blank=True, null=True) adresse_line_1 = models.CharField(_(&quot;Adresse Line 1&quot;), max_length=255, blank=True) adresse_line_2 = models.CharField(_(&quot;Adresse Line 2&quot;), max_length=255, blank=True) city = models.CharField(_(&quot;City&quot;), max_length=255, blank=True) country = models.CharField(max_length=60, choices=COUNTRIES, blank=True, null=True) phone_number = models.CharField(_(&quot;Phone number&quot;), max_length=15) email = models.EmailField(_(&quot;Email&quot;)) website = models.CharField(_(&quot;Website&quot;), max_length=50, blank=True) logo = models.ImageField(_(&quot;Logo&quot;), upload_to='logo/', blank=True) small_logo = models.ImageField(_(&quot;Small Logo&quot;), upload_to='logo/', blank=True, null=True) site_favicon = models.ImageField(_(&quot;Favicon&quot;), upload_to='logo/', blank=True, null=True) </code></pre> <p>And here is the code for my update and delete mutations:</p> <pre><code>class SchoolType(DjangoObjectType): class Meta: model = School fields = ( &quot;name&quot;, &quot;abreviation&quot;, &quot;label&quot;, &quot;school_id&quot;, &quot;adresse_line_1&quot;, &quot;adresse_line_2&quot;, &quot;city&quot;, &quot;country&quot;, &quot;phone_number&quot;, &quot;email&quot;, &quot;website&quot;, &quot;logo&quot;, &quot;small_logo&quot;, &quot;site_favicon&quot;, ) interfaces = (graphene.relay.Node,) convert_choices_to_enum = False class UpdateSchoolMutation(graphene.Mutation): school = graphene.Field(SchoolType) success = graphene.Boolean() class Arguments: id = graphene.String(required=True) name = graphene.String() abreviation = graphene.String() label = graphene.String() school_id = graphene.String() adresse_line_1 = graphene.String() adresse_line_2 = graphene.String() city = graphene.String() country = graphene.String() phone_number = graphene.String() email = graphene.String() website = graphene.String() logo = Upload() small_logo = Upload() site_favicon = Upload() @classmethod def mutate(self, info, id, **kwargs): id = int(from_global_id(id)[1]) try: school = School.objects.get(pk=id) except School.DoesNotExist: raise Exception(&quot;School does not exist&quot;.format(id)) for field, value in kwargs.items(): setattr(school, field, value) school.save() return UpdateSchoolMutation(school=school, success=True) class DeleteSchoolMutation(graphene.Mutation): success = graphene.Boolean() class Arguments: id = graphene.String(required=True) @classmethod def mutate(self, info, id, **kwargs): id = int(from_global_id(id)[1]) try: school = School.objects.get(pk=id) except School.DoesNotExist: raise Exception(&quot;School does not exist&quot;.format(id)) school.archived = True school.save() return DeleteSchoolMutation(success=True) </code></pre> 
<p>When I carry out the delete mutation as such:</p> <pre><code>mutation { deleteSchool(id: &quot;U2Nob29sVHlwZToy&quot;) { success } } </code></pre> <p>I get the following results;</p> <pre><code>{ &quot;data&quot;: { &quot;deleteSchool&quot;: { &quot;success&quot;: null } } } </code></pre> <p>The same goes for the update mutation. These are the versions I am using incase if it helps:</p> <pre><code>django==4.0.8 graphene-django==3.0.0 django-filter==22.1 django-graphql-jwt==0.3.4 graphene-file-upload==1.3.0 </code></pre>
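<p>A likely cause, shown as a sketch for the delete mutation only: with <code>@classmethod</code>, graphene invokes the bound method as <code>mutate(root, info, **arguments)</code>, so the declared signature needs to be <code>(cls, root, info, id)</code>; with <code>(self, info, id, **kwargs)</code> the <code>id</code> parameter receives the <code>ResolveInfo</code> positionally while also being passed as a keyword, the resolver raises, and the field resolves to null.</p>
<pre><code>class DeleteSchoolMutation(graphene.Mutation):
    success = graphene.Boolean()

    class Arguments:
        id = graphene.String(required=True)

    @classmethod
    def mutate(cls, root, info, id):
        pk = int(from_global_id(id)[1])
        school = School.objects.get(pk=pk)
        school.archived = True
        school.save()
        return DeleteSchoolMutation(success=True)
</code></pre>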
<python><python-3.x><graphql><graphene-django>
2023-01-09 09:40:52
1
571
Harith
75,055,126
16,978,074
create an edge list that groups films by genre, i.e. join two films of the same genre
<p>I've just started using Python and I want to build an edge list that groups together the titles of movies that have a genre in common. I have this dictionary:</p> <pre><code>dictionary_title_withonegenere= {28: ['Avatar: The Way of Water', 'Violent Night', 'Puss in Boots: The Last Wish'], 12: ['Avatar: The Way of Water', 'The Chronicles of Narnia: The Lion, the Witch and the Wardrobe'], 16: ['Puss in Boots: The Last Wish', 'Strange World']} </code></pre> <p>Now 28, 12, 16 are the genres of the movies. I want to create an edge list that groups movies by genre, i.e. I join two movies of the same genre:</p> <pre><code>source target Avatar: The Way of Water Violent Night Avatar: The Way of Water Puss in Boots: The Last Wish Violent Night Puss in Boots: The Last Wish Avatar: The Way of Water The Chronicles of Narnia: The Lion, the Witch and the Wardrobe Puss in Boots: The Last Wish Strange World </code></pre> <p>This is my idea:</p> <pre><code>edges=[] genres=[28,12,16] for i in range(0,len(genres)): for genres[i] in dictionary_title_withonegenere[genres[i]]: for genres[i] in dictionary_title_withonegenere[genres[i]][1:]: edges.append({&quot;sorce&quot;:dictionary_title_withonegenere[genres[i]][0],&quot;target&quot;:dictionary_title_withonegenere[genres[i]][y]}) print((edges)) </code></pre> <p>My code doesn't work. How can I do this?</p>
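<p>A small sketch of one way to build the pairs, using <code>itertools.combinations</code> on each genre's title list (a set of sorted pairs could be added on top if the same pair must not appear under several genres):</p>
<pre><code>from itertools import combinations

edges = []
for genre, titles in dictionary_title_withonegenere.items():
    for source, target in combinations(titles, 2):
        edges.append({'source': source, 'target': target})

print(edges)
</code></pre>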
<python><dictionary><edge-list>
2023-01-09 09:13:09
2
337
Elly
75,055,028
6,708,782
Print pandas rows cells as string
<p>I'm trying to print a data frame where each cell appears as a string:</p> <p><strong>Dataset</strong></p> <pre><code> a b c 0 car new york queens 1 bus california los angeles 2 aircraft illinois chicago 3 rocket texas houston 4 subway maine augusta 5 train florida miami </code></pre> <p><strong>My script:</strong></p> <pre><code>for index, row in df.iterrows(): print(df[&quot;a&quot;], &quot;\n&quot;, testes[&quot;c&quot;], &quot;\n&quot;, testes[&quot;b&quot;]) </code></pre> <p><strong>My output:</strong></p> <p>0 car</p> <p>1 bus</p> <p>2 aircraft</p> <p>3 rocket</p> <p>4 subway</p> <p>5 train</p> <p>Name: a, dtype: object</p> <p>...</p> <p><strong>Desired output:</strong></p> <pre><code>car queens new york bus los angeles california ... </code></pre>
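<p>Two possible sketches, assuming the frame really is called <code>df</code> with columns <code>a</code>, <code>b</code>, <code>c</code> as in the sample: either print the cells of each row inside the loop, or skip the loop and use <code>to_string</code> without index and header.</p>
<pre><code>for _, row in df.iterrows():
    print(row['a'], row['c'], row['b'])

# or, without an explicit loop:
print(df[['a', 'c', 'b']].to_string(index=False, header=False))
</code></pre>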
<python><pandas>
2023-01-09 09:02:21
2
602
ladybug
75,054,985
1,057,473
Django Unit test is written but coverage report says there are missing statements
<p>I try to get coverage reports using a simple django model.</p> <p>Unit test for model was written but report says there are missing statements.</p> <p>The model:</p> <pre><code>from django.db import models from model_utils import FieldTracker from generics.base_model import BaseModelMixin class AModel(BaseModelMixin): record_name = models.CharField(max_length=255) tracker = FieldTracker() class Meta(BaseModelMixin.Meta): db_table = 'a_model_records' def __str__(self): record _name = self. record_ name if record_name is None: record_name = &quot;N/A&quot; return record name </code></pre> <p>The unit test:</p> <pre><code>class TestAModel(TestCase): def setUp(self): super().setUp() def test_amodel(self): record_name = &quot;some name&quot; is_active = True user = User.objects.first() record = AModel.objects.create( record_name=record_name, is_active=is_active, created_by_id=user.id, ) self.assertEqual(str(record), record_name) self.assertEqual(record.name, record_name) self.assertEqual(record.is_active, is_active) </code></pre> <p>The command line code I run:</p> <pre><code>source activate set -a source local.env set +a coverage run --source=&quot;unit_test_app&quot; --rcfile=.coveragerc project/manage.py test -v 2 coverage report coverage html </code></pre> <p>The coverage configuration is like this:</p> <pre><code>[run] omit= */migrations/* */tests/* */manage.py */apps.py */settings.py */urls.py */filters.py */serializers.py */generics/* [report] show_missing = true </code></pre> <p>Coverage result shows missing statements:</p> <p><a href="https://i.sstatic.net/cJCK7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cJCK7.png" alt="enter image description here" /></a></p> <p>This is the simplest example.</p> <p>There are many ignored functions by coverage which descends percentage average, but actually covered by unit tests.</p> <p>Why does coverage act like this?</p>
<python><django><unit-testing><code-coverage>
2023-01-09 08:58:09
0
1,271
Sencer H.
75,054,857
4,495,238
Reading GPKG format file over Apache Beam
<p>I have a requirement to parse and load <code>gpgk</code> extension file to Bigquery table through apache beam (Dataflow runner). I could see that beam has feature called <a href="https://beam.apache.org/documentation/io/developing-io-python/" rel="nofollow noreferrer">Geobeam</a>, but i couldn't see reference for loading of <code>gpgk</code> files.</p> <p>Q1: Which Beam library can help me to load <code>geopakage</code> file? Q2: As an alternate solution i am trying to read <code>geopakage</code> file as Binary file and over <code>ParDo</code> can transform it and get it loaded. How we can read <code>Binary</code> file over Apache beam?</p> <p>Does any one has experience over the same and share experience.</p> <p><em><strong>Update: Alternate solution</strong></em> I have a requirement to read Binary Coded file through Python Apache beam (Dataflow as a runner).</p> <p>I am trying to replicate following <a href="https://beam.apache.org/documentation/io/developing-io-python/" rel="nofollow noreferrer">example</a> <code>Reading from a new Source</code> over my code to read Binary files.</p> <p>My code looks is given below, can you help me where its going wrong:-</p> <pre><code>#------------Import Lib-----------------------# from apache_beam.options.pipeline_options import PipelineOptions, StandardOptions import apache_beam as beam, os, sys, argparse from apache_beam.options.pipeline_options import SetupOptions from apache_beam.io import iobase #------------Set up BQ parameters-----------------------# project = 'proj-dev' dataset_id = 'sandbox' table_schema_Audit = ('id:STRING,name:STRING') input = 'gs://bucket/beers.csv' BUCKET = 'bucket' #-------------Splitting Of Records----------------------# class Transaction(iobase.BoundedSource): def process(self): # Open the Shapefile import fiona with fiona.open('gs://bucket/2022_data.gpkg', 'r') as input_file: parsed_data = [[{&quot;id&quot;: json.loads(json.dumps(feature['properties']))['Id'], &quot;name&quot;: json.loads(json.dumps(feature['properties']))['Name']}] for feature in input_file] return parsed_data def run(argv=None, save_main_session=True): pipeline_args = [ '--project={0}'.format(project), '--job_name=loadstructu', '--staging_location=gs://{0}/staging/'.format(BUCKET), '--temp_location=gs://{0}/staging/'.format(BUCKET), '--region=us-yyyy1', '--runner=DataflowRunner', '--subnetwork=https://www.googleapis.com/compute/v1/projects/proj-dev/regions/us-yyyy1/subnetworks/xxxxxx-dev-subnet' ] pipeline_options = PipelineOptions(pipeline_args) pipeline_options.view_as(SetupOptions).save_main_session = save_main_session p1 = beam.Pipeline(options=pipeline_options) data_loading = ( p1 | 'ReadData' &gt;&gt; beam.io.ReadFromText(Transaction()) ) #---------------------Type = load---------------------------------------------------------------------------------------------------------------------- result = ( data_loading | 'Write-Audit' &gt;&gt; beam.io.WriteToBigQuery( table='structdata', dataset=dataset_id, project=project, schema=table_schema_Audit, create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED, write_disposition=beam.io.BigQueryDisposition.WRITE_TRUNCATE )) result = p1.run() result.wait_until_finish() if __name__ == '__main__': run() </code></pre> <p>It popping error as given below:-</p> <pre><code>~/apache-beam-2.43.0/packages/beam/sdks/python/apache_beam/io/textio.py in __init__(self, file_pattern, min_bundle_size, compression_type, strip_trailing_newlines, coder, validate, skip_header_lines, delimiter, escapechar, **kwargs) 
772 skip_header_lines=skip_header_lines, 773 delimiter=delimiter, --&gt; 774 escapechar=escapechar) 775 776 def expand(self, pvalue): ~/apache-beam-2.43.0/packages/beam/sdks/python/apache_beam/io/textio.py in __init__(self, file_pattern, min_bundle_size, compression_type, strip_trailing_newlines, coder, buffer_size, validate, skip_header_lines, header_processor_fns, delimiter, escapechar) 133 min_bundle_size, 134 compression_type=compression_type, --&gt; 135 validate=validate) 136 137 self._strip_trailing_newlines = strip_trailing_newlines ~/apache-beam-2.43.0/packages/beam/sdks/python/apache_beam/io/filebasedsource.py in __init__(self, file_pattern, min_bundle_size, compression_type, splittable, validate) 110 '%s: file_pattern must be of type string' 111 ' or ValueProvider; got %r instead' % --&gt; 112 (self.__class__.__name__, file_pattern)) 113 114 if isinstance(file_pattern, str): TypeError: _TextSource: file_pattern must be of type string or ValueProvider; got &lt;__main__.Transaction object at 0x7fcc79ffc250&gt; instead </code></pre>
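<p>On Q2, a sketch of a more idiomatic route: instead of subclassing <code>BoundedSource</code> or forcing the object through <code>ReadFromText</code> (which only accepts a file pattern string, hence the error above), read the GeoPackage inside a <code>DoFn</code> with fiona and yield one dict per feature. The local path below is an assumption; fiona generally cannot open <code>gs://</code> URLs directly, so the file would first need to be staged onto the worker (or exposed through a mount or GDAL virtual filesystem).</p>
<pre><code>import apache_beam as beam

class ReadGpkg(beam.DoFn):
    def process(self, path):
        import fiona
        with fiona.open(path, 'r') as src:
            for feature in src:
                props = feature['properties']
                yield {'id': props.get('Id'), 'name': props.get('Name')}

# rows = (p1
#         | beam.Create(['/tmp/2022_data.gpkg'])
#         | beam.ParDo(ReadGpkg())
#         | beam.io.WriteToBigQuery(...))
</code></pre>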
<python><gis><google-cloud-dataflow><apache-beam><apache-beam-io>
2023-01-09 08:45:48
1
699
Vibhor Gupta
75,054,569
9,751,001
How can I add a title and change other plot aesthetics for an UpSet plot in python?
<p>I have installed and imported the following (using Google Colab):</p> <pre><code>!pip install upsetplot import numpy as np import pandas as pd import matplotlib.pyplot as plt import upsetplot from upsetplot import generate_data, plot from upsetplot import UpSet from upsetplot import from_contents </code></pre> <p>Versions:</p> <ul> <li>Python 3.8.16</li> <li>Numpy version: 1.21.6</li> <li>Pandas version: 1.3.5</li> <li>matplotlib version: 3.2.2</li> <li>upsetplot 0.8.0</li> </ul> <p>...and defined a plot colour:</p> <pre><code>plot_colour = &quot;#4F84B9&quot; </code></pre> <p>I have the following pandas dataframe:</p> <pre><code>df = pd.DataFrame({'File':['File_1', 'File_2', 'File_3'], 'A':[1,1,0], 'B':[0,1,1], 'C':[1,0,1]}) </code></pre> <p><a href="https://i.sstatic.net/8gj2g.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8gj2g.png" alt="view of dataframe" /></a></p> <p>I re-shape it to prepare it for an UpSet plot:</p> <pre><code>files_labelled_A = set(df.loc[df[&quot;A&quot;]==1, &quot;File&quot;]) files_labelled_B = set(df.loc[df[&quot;B&quot;]==1, &quot;File&quot;]) files_labelled_C = set(df.loc[df[&quot;C&quot;]==1, &quot;File&quot;]) contents = {'A': files_labelled_A, 'B': files_labelled_B, 'C': files_labelled_C} from_contents(contents) </code></pre> <p><a href="https://i.sstatic.net/nBJ4N.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nBJ4N.png" alt="view of updated data" /></a></p> <p>I create and view the UpSet plot successfully:</p> <pre><code>plt = UpSet(from_contents(contents), subset_size='count', facecolor=plot_colour).plot() </code></pre> <p><a href="https://i.sstatic.net/X1rIv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/X1rIv.png" alt="view of UpSet plot" /></a></p> <p>How do I add a title and change other plot aesthetics as I usually do with matplotlib plots? When I try adding:</p> <pre><code>plt.title('my title here') </code></pre> <p>I get an error:</p> <blockquote> <p>AttributeError: 'dict' object has no attribute 'title'</p> </blockquote> <p>I've found some guidance at <a href="https://upsetplot.readthedocs.io/en/latest/auto_examples/plot_sizing.html" rel="nofollow noreferrer">https://upsetplot.readthedocs.io/en/latest/auto_examples/plot_sizing.html</a> which creates the plot using a different method:</p> <pre><code>example = generate_counts() print(example) plot(example) plt.suptitle('Defaults') plt.show() </code></pre> <p>...and then successfully modifies the aesthetics in the typical matplotlib way, e.g.:</p> <pre><code>fig = plt.figure(figsize=(10, 3)) plot(example, fig=fig, element_size=None) plt.suptitle('Setting figsize explicitly') plt.show() </code></pre> <p>...but I can't follow this same approach as I don't know how the 'example' data was created using generate_counts(). I don't know how to use this same approach with my data.</p> <p>Can anyone help me to figure out either how to:</p> <p>(1) use the approach that uses generate_counts(), or (2) modify my approach so that I can change the matplotlib aesthetics (for example adding a title)?</p> <p>Full code examples using my data would be appreciated, rather than just descriptions of what to do.</p>
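<p>Worth noting first: the snippet above rebinds the name <code>plt</code> to the dictionary of axes returned by <code>UpSet.plot()</code>, which is exactly why <code>plt.title(...)</code> then fails with the 'dict' error. A sketch that keeps the two apart and adds a figure-level title the same way the sizing example does:</p>
<pre><code>import matplotlib.pyplot as plt
from upsetplot import UpSet, from_contents

upset = UpSet(from_contents(contents), subset_size='count', facecolor=plot_colour)
axes_dict = upset.plot()        # keep the returned axes under their own name
plt.suptitle('my title here')   # figure-level title, as in the plot_sizing example
plt.show()
print(axes_dict.keys())         # individual axes, available for further styling
</code></pre>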
<python><pandas><matplotlib><visualization><upsetplot>
2023-01-09 08:10:46
1
631
code_to_joy
75,054,166
898,042
How to update a count in a DynamoDB table with 2 fields
<p>The Dynamodb table has 1 partition key and 2 fields. I'm trying to increment a or b by 1.</p> <p>I get cur_count for a or b(which is 0) from the table and +1 to it.</p> <p>The error:</p> <pre><code>An error occurred (ValidationException) </code></pre> <p>When calling the UpdateItem operation:</p> <pre><code>Invalid UpdateExpression: Syntax error; token: &quot;=&quot;, near: &quot;#a =#cur_count&quot; </code></pre> <p><a href="https://i.sstatic.net/KI4eR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KI4eR.png" alt="io" /></a></p> <pre><code>def update_count(vote): logging.info('update count....') print('update count....') print('vote' + str(vote)) ''' table.update_item( Key={'voter': 'count'}, UpdateExpression=&quot;ADD #vote :incr&quot;, ExpressionAttributeNames={'#vote': vote}, ExpressionAttributeValues={':incr': 1} ) ''' cur_count = 0 try: if vote == 'b': print('extracting cur count for b') q = table.get_item(Key={'voter':'count'}) res = q['Item'] print(res) cur_count = int(res['b']) print('****** cur count %d ' % cur_count) cur_count = str(cur_count) table.update_item( Key={'voter':'count'}, UpdateExpression=&quot;ADD #b =#cur_count + :incr&quot;, ExpressionAttributeNames={'#cur_count': cur_count}, ExpressionAttributeValues={':incr': 1}) print('******* b %d ' % b) elif vote == 'a': print('extracting cur count for a') q = table.get_item(Key={'voter':'count'}) res = q['Item'] print(res) cur_count = int(res['a']) print('****** cur count %d ' % cur_count) cur_count = str(cur_count) table.update_item( Key={'voter':'count'}, UpdateExpression=&quot;ADD #a =#cur_count + :incr&quot;, ExpressionAttributeNames={'#cur_count': cur_count}, ExpressionAttributeValues={':incr': 1}) print('***** a %d ' % a) except Exception as e: print('catching error here') print(e) </code></pre>
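<p>For reference, a minimal sketch of the ADD form that avoids this syntax error: an ADD action takes just an attribute name and a value (no <code>=</code> sign), and it increments atomically, so the current count never needs to be read first. This is essentially the commented-out block at the top of the function.</p>
<pre><code>def update_count(vote):
    # vote is 'a' or 'b'; ADD creates the attribute at 0 if it does not exist yet
    table.update_item(
        Key={'voter': 'count'},
        UpdateExpression='ADD #attr :incr',
        ExpressionAttributeNames={'#attr': vote},
        ExpressionAttributeValues={':incr': 1}
    )
</code></pre>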
<python><amazon-web-services><amazon-dynamodb>
2023-01-09 07:19:28
1
24,573
ERJAN
75,054,152
17,402,986
Not Authorised Error: AWS lambda ec2.run_instances from launch template with boto3
<p>I am trying to run aws ec2 instances from my lambda. Creating instance from local machine works when I tried this -</p> <pre><code>import boto3 launchTemplateId = 'lt-000' ec2 = boto3.client('ec2', region_name='ap-xx-1') template_specifics = { 'LaunchTemplateId': launchTemplateId } resp = ec2.run_instances( MaxCount=1, MinCount=1, LaunchTemplate=template_specifics, ImageId='ami-00000' ) print(resp['ResponseMetadata']['HTTPStatusCode']) </code></pre> <p>And I am trying this on lambda -</p> <pre><code>def create_instance(lt_id, img_id, region): &quot;&quot;&quot; creates instance from launch template. &quot;&quot;&quot; ec2 = boto3.client('ec2', region_name=region) resp = ec2.run_instances( MaxCount=1, MinCount=1, LaunchTemplate={ 'LaunchTemplateId':lt_id }, ImageId=img_id ) return(resp['ResponseMetadata']['HTTPStatusCode']) </code></pre> <p>with IAM policy -</p> <pre><code>.... { # &quot;Sid&quot;: &quot;PassExecutionRole&quot;, &quot;Effect&quot;: &quot;Allow&quot;, &quot;Action&quot;: [ &quot;iam:PassRole&quot;, ], &quot;Resource&quot;: &quot;arn:aws:iam::${data.aws_caller_identity.current.account_id}:role/${aws_iam_role.xx_role.name}&quot; }, { &quot;Effect&quot;: &quot;Allow&quot;, &quot;Action&quot;: [ &quot;ec2:*&quot; # &quot;ec2:StartInstances&quot;, # &quot;ec2:RunInstances&quot; ], # &quot;resource&quot;: &quot;*&quot; &quot;Resource&quot;: &quot;arn:aws:ec2:${var.aws_region}:${data.aws_caller_identity.current.account_id}:*&quot; } ...... </code></pre> <p>Notice I even tried with wildcard * too, even added passRole as <a href="https://stackoverflow.com/questions/54788320/access-issue-with-lambda-trying-to-launch-ec2-instance">a comment suggested</a> but every time it just <strong>shows this error -</strong></p> <pre><code>ClientError: An error occurred (UnauthorizedOperation) when calling the RunInstances operation: You are not authorized to perform this operation. ...... Traceback (most recent call last): File &quot;/var/task/xxxx.py&quot;, line 153, in xxxx_handler instance_create_resp = create_instance(lt_id, img_id, region) File &quot;/var/task/xxxx.py&quot;, line 79, in create_instance resp = ec2.run_instances( File &quot;/var/runtime/botocore/client.py&quot;, line 391, in _api_call return self._make_api_call(operation_name, kwargs) File &quot;/var/runtime/botocore/client.py&quot;, line 719, in _make_api_call raise error_class(parsed_response, operation_name) </code></pre> <p>What am I doing wrong? Any ideas will be much helpful.</p> <p><strong>UPDATE</strong> I was able to track down the problem and it's 'tags' and 'InstanceProfile' which are causing this error.</p> <pre><code>TagSpecifications=[{ 'ResourceType': 'instance', 'Tags': [ { 'Key': 'Name', 'Value': 'name' }] }], IamInstanceProfile={ 'Name': PROFILE }, </code></pre> <p>This causes the same error, else it works.</p>
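<p>A hedged debugging sketch (the details below are illustrative, not a confirmed fix): since the update narrows the failure to <code>TagSpecifications</code> and <code>IamInstanceProfile</code>, the missing permissions are usually <code>ec2:CreateTags</code> on the instance being launched and <code>iam:PassRole</code> on the instance profile's role, rather than anything in the Python itself. When an UnauthorizedOperation error carries an encoded authorization failure message, STS can decode it to show the exact denied action:</p>
<pre><code>import boto3
from botocore.exceptions import ClientError

def create_instance(lt_id, img_id, region):
    ec2 = boto3.client('ec2', region_name=region)
    try:
        resp = ec2.run_instances(
            MaxCount=1, MinCount=1,
            LaunchTemplate={'LaunchTemplateId': lt_id},
            ImageId=img_id,
        )
        return resp['ResponseMetadata']['HTTPStatusCode']
    except ClientError as err:
        msg = err.response['Error']['Message']
        marker = 'Encoded authorization failure message: '
        if marker in msg:
            # requires sts:DecodeAuthorizationMessage on the Lambda role
            sts = boto3.client('sts', region_name=region)
            decoded = sts.decode_authorization_message(
                EncodedMessage=msg.split(marker, 1)[1])
            print(decoded['DecodedMessage'])
        raise
</code></pre>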
<python><amazon-web-services><amazon-ec2><boto3>
2023-01-09 07:18:18
1
1,205
ashraf minhaj
75,054,091
3,099,733
How to print a literal/quoted string in python for safety purposes?
<p>Given the following python script:</p> <pre class="lang-py prettyprint-override"><code>s = &quot;hello world&quot; print(s) </code></pre> <p>When you run it you will get</p> <pre class="lang-bash prettyprint-override"><code>hello world </code></pre> <p>If I want the output to be</p> <pre><code>&quot;hello world&quot; </code></pre> <p>Is there any build-in quote/escape method can do this? For example</p> <pre class="lang-py prettyprint-override"><code>s = &quot;hello world&quot; print(quote(s)) </code></pre> <p>Here is my real world use case: I want to run <code>glob</code> on a remote machine via fabric. And the search pattern of <code>glob</code> is provided by user. So I need to ensure the string are quoted properly. Here is the sample code (I already know <code>repr</code> is the right method)</p> <pre class="lang-py prettyprint-override"><code>import shlex glob_pattern = 'some-data/*' # user input, maybe malform script = 'from glob import glob; print(glob({}))'.format(repr(glob_pattern)) cmd = 'python -c {}'.format(shlex.quote(script)) connection.run(cmd) # use a fabric connection to run script on remote node </code></pre>
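<p>A small sketch of the built-ins that cover the common cases here, no extra packages needed: <code>repr</code> gives a single-quoted Python literal, <code>json.dumps</code> gives a double-quoted form with escapes handled, and <code>shlex.quote</code> is the one to use when splicing into a shell command.</p>
<pre><code>import json
import shlex

s = 'hello world'
print(repr(s))         # 'hello world'  (Python literal, single-quoted)
print(json.dumps(s))   # &quot;hello world&quot;  (double-quoted, escapes handled)
print(shlex.quote(s))  # 'hello world'  (safe to embed in a shell command line)
</code></pre>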
<python>
2023-01-09 07:10:31
2
1,959
link89
75,053,839
3,179,698
Python pandas: keep one column's order unchanged while sorting the other column in ascending order
<p>Hi, I want to keep the order of the infoid column unchanged but sort date in increasing (ascending) order. Is that possible?</p> <pre><code>statisticsdate infoid 20230108 46726004 20230106 46726004 20230108 46725082 20230107 46725082 20230108 46725081 20230108 46724162 20230108 46720662 </code></pre> <p>It should be like:</p> <pre><code>statisticsdate infoid 20230106 46726004 20230108 46726004 20230107 46725082 20230108 46725082 20230108 46725081 20230108 46724162 20230108 46720662 </code></pre>
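<p>A minimal sketch of one way to do this (column names taken from the sample): number each infoid block in order of first appearance, then do a stable sort on that block number plus the date.</p>
<pre><code>import pandas as pd

# 0, 0, 1, 1, 2, 3, 4 ... in the order the infoid values first appear
block = df.groupby('infoid', sort=False).ngroup()

out = (df.assign(_block=block)
         .sort_values(['_block', 'statisticsdate'], kind='stable')
         .drop(columns='_block')
         .reset_index(drop=True))
print(out)
</code></pre>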
<python><pandas><dataframe>
2023-01-09 06:32:37
2
1,504
cloudscomputes
75,053,771
12,724,372
fastest way to replace values in one df with values from another df
<p>I have a dataframe df1 that looks like this:</p> <pre><code>class val 12 1271 12 1271 34 142 34 142 </code></pre> <p>and another df2 that looks like this:</p> <pre><code>class val 12 123 34 141 69 667 </code></pre> <p>What would be the fastest way to map correctVal to df1 such that the resultant df is:</p> <pre><code>class val 12 123 12 123 34 141 34 141 </code></pre> <p>Ideally I would join the 2 dfs with df.merge, drop the val field, and rename correctVal to val like so:</p> <pre><code>df2 = df2.rename(columns={'val':'correctVal'}) df_resultant = df1.merge(df2, how='left', on='class') df_resultant = df_resultant.drop(columns='val').rename(columns={'correctVal':'val'}) </code></pre> <p>but this might not be the fastest way, right?</p>
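<p>One likely faster alternative, sketched under the assumption that <code>class</code> is unique in df2: build a lookup Series once and use <code>map</code>, which skips the merge/drop/rename round-trip entirely.</p>
<pre><code>import pandas as pd

# Series indexed by class; classes missing from df2 simply become NaN
lookup = df2.set_index('class')['val']
df1['val'] = df1['class'].map(lookup)
</code></pre>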
<python><pandas>
2023-01-09 06:21:14
1
1,275
Devarshi Goswami
75,053,702
2,913,139
python mysql - is commit() required for SELECT?
<p>I have a very simple piece of python code getting some data from mysql:</p> <pre><code> version = 1 with conn.cursor() as cur: query = f'select latest_version from version' cur.execute(query) for row in cur: version = int(row[0]) cur.close() print(version) </code></pre> <p>That code is executed from AWS Lambda accessing my private mysql instance (but none of that should matter imho).</p> <p>It was working perfectly fine, then I updated the version values in the DB using SQL:</p> <pre><code>UPDATE version set latest_version=11; COMMIT; FLUSH TABLES; </code></pre> <p>The problem: after running the above SQL code, when I run my python code I get the old value from the DB (10 instead of 11), like some caching....</p> <p>Now, when I add commit() to my python code like this:</p> <pre><code> version = 1 with conn.cursor() as cur: query = f'select latest_version from version' cur.execute(query) conn.commit() for row in cur: version = int(row[0]) cur.close() print(version) </code></pre> <p>It all started to work fine and I got the new version.</p> <p>This problem has occurred for me several times with different SQL tables (and I had to use the above &quot;workaround&quot;). Always the same results, tested many times.</p> <p>I have quite a lot of other code operating on that DB running many inserts on other tables (but I never insert into the version table via python code). Also, I am running those workers quite intensively in many parallel threads operating on the same DB at the same time (but there are no dependencies, there should not be any deadlocks, and all workers finish their work in time).</p> <p>Could you please help me understand why this is happening?</p> <p>One of my guesses was that I did not close connections to mysql correctly in other scripts, but in that case I guess I would run out of connections/db handlers (workers) and would not be able to connect anymore, rather than getting cached values... So I am clueless here. Any ideas?</p> <p>Thanks,</p>
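<p>For context, a plausible explanation and a sketch of the two usual fixes: InnoDB's default isolation level (REPEATABLE READ) gives every transaction a consistent snapshot, and the connector opens a transaction implicitly on the first statement. On a long-lived (warm Lambda) connection, each SELECT therefore keeps reading the snapshot taken before the UPDATE until that transaction ends, which is exactly why the extra commit() makes the fresh value appear. Either end the transaction around each read, or turn on autocommit (the attribute name below assumes mysql-connector-python):</p>
<pre><code># Option 1: every statement runs in its own transaction
conn.autocommit = True

# Option 2: explicitly end the snapshot after each read
with conn.cursor() as cur:
    cur.execute('select latest_version from version')
    row = cur.fetchone()
    version = int(row[0]) if row else 1
conn.commit()  # closes the REPEATABLE READ snapshot before the next read
</code></pre>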
<python><mysql>
2023-01-09 06:09:48
1
617
user2913139
75,053,357
8,283,848
url tag not finding existing 'google_login' view function of django-allauth
<p>I have a simple HTML content point towards the Google Login, as below.</p> <pre><code># foo.html &lt;a class=&quot;p-2 text-dark&quot; href=&quot;{% url 'google_login' %}&quot;&gt;Google Login&lt;/a&gt; </code></pre> <p>This simple demo project <em><strong>was</strong></em> using <code>django==3.2.x</code> and <code>django-allauth==0.43.x</code>. Recently, I have upgraded packages to the latest versions, viz, <code>django==4.1.x</code> and <code>django-allauth==0.52.x</code>.</p> <p>After the upgrade, I was getting the following error -</p> <blockquote> <p>Reverse for 'google_login' not found. 'google_login' is not a valid view function or pattern name.</p> </blockquote> <p>I have confirmed the <code>google_login</code> is present by executing the <code>reverse(...)</code> function from the shell.</p> <pre><code>In [1]: from django.urls import reverse In [2]: reverse(&quot;google_login&quot;) Out[2]: '/accounts/google/login/' </code></pre> <h3>Error traceback</h3> <pre><code>Environment: Request Method: GET Request URL: http://127.0.0.1:1234/ Django Version: 4.1.5 Python Version: 3.9.11 Installed Applications: ['django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'whitenoise.runserver_nostatic', 'django.contrib.staticfiles', 'django.contrib.sites', 'allauth', 'allauth.account', 'allauth.socialaccount', 'allauth.socialaccount.providers.google', 'crispy_forms', 'debug_toolbar', 'drf_yasg', 'django_extensions', 'rest_framework', 'drf_spectacular', 'accounts', 'pages', 'polls', 'extra', 'attendance'] Installed Middleware: ['django.middleware.security.SecurityMiddleware', 'whitenoise.middleware.WhiteNoiseMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.common.CommonMiddleware', 'debug_toolbar.middleware.DebugToolbarMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware'] Template error: In template /home/jpg/work-dir/projects/-personal/generic-django-example/templates/_base.html, error at line 32 Reverse for 'google_login' not found. 'google_login' is not a valid view function or pattern name. 
22 : &lt;link rel=&quot;stylesheet&quot; href=&quot;{% static 'css/base.css' %}&quot;&gt; 23 : {% endblock %} 24 : &lt;/head&gt; 25 : 26 : &lt;body&gt; 27 : &lt;main role=&quot;main&quot;&gt; 28 : &lt;div class=&quot;d-flex flex-column flex-md-row align-items-center p-3 px-md-4 mb-3 bg-white border-bottom shadow-sm&quot;&gt; 29 : &lt;h5 class=&quot;my-0 mr-md-auto font-weight-normal&quot;&gt; 30 : &lt;a href=&quot;{% url 'home' %}&quot;&gt;DjangoX&lt;/a&gt; 31 : &lt;/h5&gt; 32 : &lt;a class=&quot;p-2 text-dark&quot; href=&quot; {% url 'google_login' %} &quot;&gt;Google Login&lt;/a&gt; 33 : &lt;a class=&quot;p-2 text-dark&quot; href=&quot;{% url 'polls-api:api-root' %}&quot;&gt;Polls APIs&lt;/a&gt; 34 : &lt;a class=&quot;p-2 text-dark&quot; href=&quot;{% url 'drf-yasg:schema-swagger-ui' %}&quot;&gt;API Docs (drf-yasg)&lt;/a&gt; 35 : &lt;a class=&quot;p-2 text-dark&quot; href=&quot;{% url 'drf-spectacular:swagger-ui' %}&quot;&gt;API Docs (drf-spectacular)&lt;/a&gt; 36 : &lt;a class=&quot;p-2 text-dark&quot; href=&quot;{% url 'about' %}&quot;&gt;About&lt;/a&gt; 37 : &lt;a class=&quot;p-2 text-dark&quot; href=&quot;{% url 'admin:index' %}&quot;&gt;Admin&lt;/a&gt; 38 : &lt;nav class=&quot;my-2 my-md-0 mr-md-3&quot;&gt; 39 : {% if user.is_authenticated %} 40 : &lt;ul class=&quot;navbar-nav ml-auto&quot;&gt; 41 : &lt;li class=&quot;nav-item&quot;&gt; 42 : &lt;a class=&quot;nav-link dropdown-toggle&quot; href=&quot;#&quot; id=&quot;userMenu&quot; Traceback (most recent call last): File &quot;/home/jpg/.local/share/virtualenvs/generic-django-example-Kc6WXfau/lib/python3.9/site-packages/django/core/handlers/exception.py&quot;, line 55, in inner response = get_response(request) File &quot;/home/jpg/.local/share/virtualenvs/generic-django-example-Kc6WXfau/lib/python3.9/site-packages/django/core/handlers/base.py&quot;, line 220, in _get_response response = response.render() File &quot;/home/jpg/.local/share/virtualenvs/generic-django-example-Kc6WXfau/lib/python3.9/site-packages/django/template/response.py&quot;, line 114, in render self.content = self.rendered_content File &quot;/home/jpg/.local/share/virtualenvs/generic-django-example-Kc6WXfau/lib/python3.9/site-packages/django/template/response.py&quot;, line 92, in rendered_content return template.render(context, self._request) File &quot;/home/jpg/.local/share/virtualenvs/generic-django-example-Kc6WXfau/lib/python3.9/site-packages/django/template/backends/django.py&quot;, line 62, in render return self.template.render(context) File &quot;/home/jpg/.local/share/virtualenvs/generic-django-example-Kc6WXfau/lib/python3.9/site-packages/django/template/base.py&quot;, line 175, in render return self._render(context) File &quot;/home/jpg/.local/share/virtualenvs/generic-django-example-Kc6WXfau/lib/python3.9/site-packages/django/test/utils.py&quot;, line 111, in instrumented_test_render return self.nodelist.render(context) File &quot;/home/jpg/.local/share/virtualenvs/generic-django-example-Kc6WXfau/lib/python3.9/site-packages/django/template/base.py&quot;, line 1005, in render return SafeString(&quot;&quot;.join([node.render_annotated(context) for node in self])) File &quot;/home/jpg/.local/share/virtualenvs/generic-django-example-Kc6WXfau/lib/python3.9/site-packages/django/template/base.py&quot;, line 1005, in &lt;listcomp&gt; return SafeString(&quot;&quot;.join([node.render_annotated(context) for node in self])) File &quot;/home/jpg/.local/share/virtualenvs/generic-django-example-Kc6WXfau/lib/python3.9/site-packages/django/template/base.py&quot;, line 
966, in render_annotated return self.render(context) File &quot;/home/jpg/.local/share/virtualenvs/generic-django-example-Kc6WXfau/lib/python3.9/site-packages/django/template/loader_tags.py&quot;, line 157, in render return compiled_parent._render(context) File &quot;/home/jpg/.local/share/virtualenvs/generic-django-example-Kc6WXfau/lib/python3.9/site-packages/django/test/utils.py&quot;, line 111, in instrumented_test_render return self.nodelist.render(context) File &quot;/home/jpg/.local/share/virtualenvs/generic-django-example-Kc6WXfau/lib/python3.9/site-packages/django/template/base.py&quot;, line 1005, in render return SafeString(&quot;&quot;.join([node.render_annotated(context) for node in self])) File &quot;/home/jpg/.local/share/virtualenvs/generic-django-example-Kc6WXfau/lib/python3.9/site-packages/django/template/base.py&quot;, line 1005, in &lt;listcomp&gt; return SafeString(&quot;&quot;.join([node.render_annotated(context) for node in self])) File &quot;/home/jpg/.local/share/virtualenvs/generic-django-example-Kc6WXfau/lib/python3.9/site-packages/django/template/base.py&quot;, line 966, in render_annotated return self.render(context) File &quot;/home/jpg/.local/share/virtualenvs/generic-django-example-Kc6WXfau/lib/python3.9/site-packages/django/template/defaulttags.py&quot;, line 472, in render url = reverse(view_name, args=args, kwargs=kwargs, current_app=current_app) File &quot;/home/jpg/.local/share/virtualenvs/generic-django-example-Kc6WXfau/lib/python3.9/site-packages/django/urls/base.py&quot;, line 88, in reverse return resolver._reverse_with_prefix(view, prefix, *args, **kwargs) File &quot;/home/jpg/.local/share/virtualenvs/generic-django-example-Kc6WXfau/lib/python3.9/site-packages/django/urls/resolvers.py&quot;, line 828, in _reverse_with_prefix raise NoReverseMatch(msg) Exception Type: NoReverseMatch at / Exception Value: Reverse for 'google_login' not found. 'google_login' is not a valid view function or pattern name. </code></pre> <p>What could be the issue? How can I fix it?</p> <hr /> <h3>Update-1</h3> <pre><code># urls.py from django.conf import settings from django.contrib import admin from django.urls import include, path urlpatterns = [ path(&quot;admin/&quot;, admin.site.urls), path(&quot;accounts/&quot;, include(&quot;allauth.urls&quot;)), path(&quot;polls/&quot;, include(&quot;polls.urls&quot;)), path(&quot;attendance/&quot;, include(&quot;attendance.urls&quot;)), path(&quot;gql/strawberry/&quot;, include(&quot;strawberry_gql.urls&quot;)), path(&quot;&quot;, include(&quot;pages.urls&quot;)), ] if settings.DEBUG: import debug_toolbar urlpatterns = [ path(&quot;__debug__/&quot;, include(debug_toolbar.urls)), ] + urlpatterns </code></pre>
<python><django><django-urls><django-allauth>
2023-01-09 05:02:06
0
89,380
JPG
75,053,273
1,436,800
How to save a list of objects in DRF
<p>I am new to django. I have the following model:</p> <pre><code>class Standup(models.Model): team = models.ForeignKey(&quot;Team&quot;, on_delete=models.CASCADE) standup_time = models.DateTimeField(auto_now_add=True) employee = models.ForeignKey(&quot;Employee&quot;, on_delete=models.CASCADE) update_time = models.DateTimeField(auto_now_add=True) status = models.CharField(max_length=50) work_done_yesterday = models.TextField() work_to_do = models.TextField() blockers = models.TextField() </code></pre> <p>The serializer class looks like this:</p> <pre><code>class StandupSerializer(serializers.ModelSerializer): class Meta: model = Standup fields = '__all__' </code></pre> <p>The viewset is like this:</p> <pre><code>class StandupDetail(viewsets.ModelViewSet): queryset = Standup.objects.all() serializer_class = StandupSerializer </code></pre> <p>My task is to hit a single API which will save the data of all employees, instead of saving the data of each employee separately. In the current implementation, each employee has to hit the API separately to save the data in the database. Each employee will select a team first, as one employee can be a part of multiple teams. We will save a list of objects. Any leads on how to do it?</p>
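<p>One possible route, sketched on the assumption that the request payload is a JSON array of standup objects: DRF's <code>ModelSerializer</code> already handles bulk creation when instantiated with <code>many=True</code>, so the viewset only needs a small <code>create</code> override.</p>
<pre><code>from rest_framework.response import Response

class StandupDetail(viewsets.ModelViewSet):
    queryset = Standup.objects.all()
    serializer_class = StandupSerializer

    def create(self, request, *args, **kwargs):
        # Accept either a single object or a list of objects in one request
        many = isinstance(request.data, list)
        serializer = self.get_serializer(data=request.data, many=many)
        serializer.is_valid(raise_exception=True)
        serializer.save()
        return Response(serializer.data, status=201)
</code></pre>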
<python><django><django-models><django-rest-framework><django-views>
2023-01-09 04:46:31
3
315
Waleed Farrukh
75,053,239
11,266,345
Downloading pytorch3d: installation fails, no C++ compiler
<p>I have been trying to download pytorch3d on my PC however it keeps failing. I believe its because of the c++ compiler but I am not sure.</p> <pre><code>FAILED: C:/Users/Virtual Machine/AppData/Local/Temp/pip-req-build-fu0xd6su/build/temp.win-amd64-cpython-310/Release/Users/Virtual Machine/AppData/Local/Temp/pip-req-build-fu0xd6su/pytorch3d/csrc/point_mesh/point_mesh_cpu.obj cl /showIncludes /nologo /O2 /W3 /GL /DNDEBUG /MD /MD /wd4819 /wd4251 /wd4244 /wd4267 /wd4275 /wd4018 /wd4190 /EHsc &quot;-IC:\Users\Virtual Machine\AppData\Local\Temp\pip-req-build-fu0xd6su\pytorch3d\csrc&quot; &quot;-IC:\Users\Virtual Machine\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\include&quot; &quot;-IC:\Users\Virtual Machine\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\include\torch\csrc\api\include&quot; &quot;-IC:\Users\Virtual Machine\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\include\TH&quot; &quot;-IC:\Users\Virtual Machine\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\include\THC&quot; &quot;-IC:\Users\Virtual Machine\AppData\Local\Programs\Python\Python310\include&quot; &quot;-IC:\Users\Virtual Machine\AppData\Local\Programs\Python\Python310\Include&quot; -c &quot;C:\Users\Virtual Machine\AppData\Local\Temp\pip-req-build-fu0xd6su\pytorch3d\csrc\point_mesh\point_mesh_cpu.cpp&quot; /Fo&quot;C:\Users\Virtual Machine\AppData\Local\Temp\pip-req-build-fu0xd6su\build\temp.win-amd64-cpython-310\Release\Users\Virtual Machine\AppData\Local\Temp\pip-req-build-fu0xd6su\pytorch3d\csrc\point_mesh\point_mesh_cpu.obj&quot; -std=c++14 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0 /std:c++14 cl : Command line warning D9002 : ignoring unknown option '-std=c++14' C:\Users\Virtual Machine\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\include\c10/macros/Macros.h(3): fatal error C1083: Cannot open include file: 'cassert': No such file or directory ninja: build stopped: subcommand failed.''' </code></pre> <p>Also I am trying to use ninja but I get this error</p> <pre><code> File &quot;C:\Users\Virtual Machine\AppData\Local\Programs\Python\Python310\lib\site-packages\setuptools\command\build_ext.py&quot;, line 246, in build_extension _build_ext.build_extension(self, ext) File &quot;C:\Users\Virtual Machine\AppData\Local\Programs\Python\Python310\lib\site-packages\setuptools\_distutils\command\build_ext.py&quot;, line 547, in build_extension objects = self.compiler.compile( File &quot;C:\Users\Virtual Machine\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\utils\cpp_extension.py&quot;, line 815, in win_wrap_ninja_compile _write_ninja_file_and_compile_objects( File &quot;C:\Users\Virtual Machine\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\utils\cpp_extension.py&quot;, line 1573, in _write_ninja_file_and_compile_objects _run_ninja_build( File &quot;C:\Users\Virtual Machine\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\utils\cpp_extension.py&quot;, line 1916, in _run_ninja_build raise RuntimeError(message) from e RuntimeError: Error compiling objects for extension [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: legacy-install-failure × Encountered error while trying to install package. ╰─&gt; pytorch3d </code></pre>
<python><c++><pip><pytorch><pytorch3d>
2023-01-09 04:39:12
0
341
Sai Veeramachaneni
75,053,229
668,498
ModuleNotFoundError: No module named '_bz2' in Google Cloud Workstation
<p>I launched a new Google Cloud Workstation and created a single python file with these contents:</p> <pre><code>import bz2 import binascii original_data = 'This is the original text.' print ('Original :', len(original_data), original_data) compressed = bz2.compress(original_data) print ('Compressed :', len(compressed), binascii.hexlify(compressed)) decompressed = bz2.decompress(compressed) print ('Decompressed :', len(decompressed), decompressed) </code></pre> <p>When I tried to run this code I received this error:</p> <pre><code>Traceback (most recent call last): File &quot;/home/user/01/app.py&quot;, line 1, in &lt;module&gt; import bz2 File &quot;/usr/local/lib/python3.10/bz2.py&quot;, line 17, in &lt;module&gt; from _bz2 import BZ2Compressor, BZ2Decompressor ModuleNotFoundError: No module named '_bz2' </code></pre> <p>What am I doing wrong?</p>
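<p>Two separate things are worth noting here, both offered as likely rather than certain: the <code>ModuleNotFoundError</code> means the interpreter baked into the workstation image was built without the <code>_bz2</code> extension (the libbz2 development headers were missing when that Python was compiled), so the fix is an interpreter or base image with bz2 support rather than a pip install. Separately, once the import works, <code>bz2.compress</code> expects bytes, so the sample needs byte strings, e.g.:</p>
<pre><code>import bz2
import binascii

original_data = b'This is the original text.'  # bytes, not str
print('Original    :', len(original_data), original_data)

compressed = bz2.compress(original_data)
print('Compressed  :', len(compressed), binascii.hexlify(compressed))

decompressed = bz2.decompress(compressed)
print('Decompressed:', len(decompressed), decompressed)
</code></pre>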
<python><google-cloud-platform><bzip2><bz2><google-cloud-workstations>
2023-01-09 04:36:03
0
3,615
DanielAttard
75,053,220
12,131,472
From dataframe to the body of an email automatically: several formatting issues (thousand separator, red for negative numbers and green for positive)
<p>I have a dataframe look like this</p> <pre><code> date value value 2 daily value change shortCode TD1 2023-01-06 38.67 15162.0 -1.00 TD2 2023-01-06 53.42 33952.0 -0.40 TD3C 2023-01-06 52.91 30486.0 -0.36 TD6 2023-01-06 169.61 90824.0 -3.83 TD7 2023-01-06 168.56 66685.0 -1.25 TD8 2023-01-06 244.29 71413.0 -2.42 TD9 2023-01-06 129.38 24498.0 -2.50 TD14 2023-01-06 251.19 81252.0 -0.81 TD15 2023-01-06 54.03 32382.0 -0.56 TD18 2023-01-06 425.08 71615.0 -2.42 </code></pre> <p>I wish to send it as the BODY of the Email with Outlook, it would be great to automate it in the future (as daily report without human intervention) but for the moment I just struggle to achieve some formatting</p> <ol> <li>how to get it directly to the body of Email or I have to go via Excel?</li> <li>to have all the column headers shown properly, when go through Excel they are partly hidden and have to click manually to show the full title</li> <li>add thousand separator without adding the unnecessary .0 to the &quot;TCE value&quot; column, not sure why it has .0 now</li> <li>in the columns like &quot;daily value change&quot;(I have a few more columns not shown due to size),<br /> having green color for positive numbers and red for negatives.</li> </ol> <p>what I did: for thousand separator</p> <pre><code>df_bdti_final[['value', 'TCE value', ]] = df_bdti_final[['value', 'TCE value']].iloc[:, :].applymap('{:,}'.format) </code></pre>
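<p>A sketch of one possible route (column names taken from the question; assumes pandas ≥ 1.3 for <code>Styler.to_html</code> and pywin32 with Outlook on the sending machine): let a pandas Styler do the thousand separators and the red/green colouring, then hand the resulting HTML straight to Outlook as the message body, which avoids going through Excel altogether.</p>
<pre><code>import win32com.client as win32  # pywin32; assumes Outlook is installed locally

def colour_change(v):
    # red for negative numbers, green for positive
    return 'color: red' if v &lt; 0 else 'color: green'

styler = (df_bdti_final.style
          .format({'value': '{:,.2f}',
                   'TCE value': '{:,.0f}'})                 # separators, no trailing .0
          .applymap(colour_change, subset=['daily value change']))

html_table = styler.to_html()

outlook = win32.Dispatch('Outlook.Application')
mail = outlook.CreateItem(0)        # 0 = olMailItem
mail.To = 'someone@example.com'     # placeholder address
mail.Subject = 'Daily report'
mail.HTMLBody = html_table          # dataframe lands directly in the email body
mail.Send()
</code></pre>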
<python><pandas><dataframe><email><formatting>
2023-01-09 04:34:27
1
447
neutralname
75,053,092
597,858
Able to plot 2 graphs in a row but not 3, getting ValueError: values must be a 1D array
<p>This is the dataframe:</p> <pre><code>Data for last 8 months date close volume change% obv compare close_trend 6 2022-06-30 00:00:00+05:30 18760.40 358433 5.52 1358338 True 18482.242046 7 2022-07-31 00:00:00+05:30 20015.10 252637 6.27 1610975 True 18905.447351 8 2022-08-31 00:00:00+05:30 18739.75 317107 -6.81 1293868 False 19328.826505 9 2022-09-30 00:00:00+05:30 19139.15 561137 2.09 1855005 True 19753.246889 10 2022-10-31 00:00:00+05:30 19246.95 243999 0.56 2099004 True 20179.207712 11 2022-11-30 00:00:00+05:30 20237.80 311138 4.90 2410142 True 20606.824373 12 2022-12-31 00:00:00+05:30 21367.20 386070 5.29 2796212 True 21035.629608 13 2023-01-31 00:00:00+05:30 22250.00 101527 3.97 2897739 True 21464.925515 </code></pre> <p>I am able to plot 2 graphs in a row using matplotlib in <strong>jupyter notebook</strong>.</p> <pre><code> fig = plt.figure(figsize=(7,2)) plt.subplot(1,2,1) plt.plot(df['date'], df['close'], color='red', figure=fig) plt.subplot(1,2,2) plt.plot(df[['close','close_trend']],figure=fig) plt.tight_layout() plt.show() </code></pre> <p>I get:</p> <p><a href="https://i.sstatic.net/ZidIF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZidIF.png" alt="enter image description here" /></a></p> <p>But when I try to plot 3 graphs like this, I get <strong><code>ValueError: values must be a 1D array</code></strong></p> <pre><code> fig = plt.figure(figsize=(7,2)) plt.subplot(1,3,1) plt.plot(df['date'], df['close'], color='red', figure=fig) plt.subplot(1,3,2) plt.plot(df[['close','close_trend']],figure=fig) plt.subplot(1,3,3) plt.plot(df.index, df['obv'],color='blue', figure=fig) plt.tight_layout() plt.show() </code></pre> <p>How do I get 3 plots in a row?</p>
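<p>A hedged sketch of one way to sidestep the error (column names as shown above): give every subplot explicit x values and 1-D y arrays, so matplotlib never has to guess how to unpack a whole DataFrame, and build the three axes with <code>plt.subplots</code>.</p>
<pre><code>import matplotlib.pyplot as plt

fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(10, 2))

ax1.plot(df['date'], df['close'], color='red')

# plot the two series separately instead of passing a 2-column DataFrame
ax2.plot(df['date'], df['close'].to_numpy(), label='close')
ax2.plot(df['date'], df['close_trend'].to_numpy(), label='close_trend')
ax2.legend()

ax3.plot(df.index.to_numpy(), df['obv'].to_numpy(), color='blue')

plt.tight_layout()
plt.show()
</code></pre>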
<python><matplotlib>
2023-01-09 04:04:53
0
10,020
KawaiKx