Dataset schema (column · dtype · min – max):
QuestionId · int64 · 74.8M – 79.8M
UserId · int64 · 56 – 29.4M
QuestionTitle · string · lengths 15 – 150
QuestionBody · string · lengths 40 – 40.3k
Tags · string · lengths 8 – 101
CreationDate · date · 2022-12-10 09:42:47 – 2025-11-01 19:08:18
AnswerCount · int64 · 0 – 44
UserExpertiseLevel · int64 · 301 – 888k
UserDisplayName · string · lengths 3 – 30
75,114,080
1,594,077
How to sort a 4D tensor based on the first value in the 4th dimension?
<p>I have a 4D tensor that I want to sort. The order of values in the 4th dimension is important to stay the same, but I want to sort arrays in the 3rd dimension based on the first value in the 4th dimension. I am using TensorFlow 2.11. I have tried with tf.argsort() and tf.gather_nd(), but I can't make it work.</p> <p>For example, I have the following tensor:</p> <pre><code>&lt;tf.Tensor: shape=(1, 6, 4, 2), dtype=int64, numpy= array([[[[51, 92], [14, 71], [60, 20], [82, 86]], [[74, 74], [87, 99], [23, 2], [21, 52]], [[ 1, 87], [29, 37], [ 1, 63], [59, 20]], [[32, 75], [57, 21], [88, 48], [90, 58]], [[41, 91], [59, 79], [14, 61], [61, 46]], [[61, 50], [54, 63], [ 2, 50], [ 6, 20]]]])&gt; </code></pre> <p>I want it to be sorted like this:</p> <pre><code>&lt;tf.Tensor: shape=(1, 6, 4, 2), dtype=int64, numpy= array([[[[14, 71], [51, 92], [60, 20], [82, 86]], [[21, 52], [23, 2], [74, 74], [87, 99]], [[ 1, 87], [ 1, 63], [29, 37], [59, 20]], [[32, 75], [57, 21], [88, 48], [90, 58]], [[14, 61], [41, 91], [59, 79], [61, 46]], [[ 2, 50], [ 6, 20], [54, 63], [61, 50]]]])&gt; </code></pre>
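<p>A minimal sketch of one approach (assuming the tensor is bound to a variable <code>t</code>): take the sort order of the first value in the last dimension with <code>tf.argsort</code>, then apply it along axis 2 with <code>tf.gather</code> and <code>batch_dims</code>:</p> <pre><code>import tensorflow as tf

order = tf.argsort(t[..., 0], axis=-1)      # shape (1, 6, 4): ranking by the first value of the 4th dim
result = tf.gather(t, order, batch_dims=2)  # reorders axis 2 per (batch, row); the 4th dim stays intact
</code></pre>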
<python><numpy><tensorflow>
2023-01-13 20:15:46
1
353
Soli Technology LLC
75,113,929
236,594
factory_boy: make a factory that returns the result of a function
<p>I have a function that generates a list of objects. The objects have complex relationships that are handled in the generator function.</p> <p>How do I make a factory (not a SubFactory!) that, when asked to generate a value, just calls this function?</p>
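<p>A sketch of one possible approach: override the <code>_build</code>/<code>_create</code> hooks so the factory delegates to the generator function (<code>generate_objects</code> here is a stand-in for the real function):</p> <pre><code>import factory

def generate_objects(**kwargs):
    ...  # stand-in for the real generator function

class ObjectListFactory(factory.Factory):
    class Meta:
        model = list  # required by Factory, but unused because the hooks are overridden

    @classmethod
    def _build(cls, model_class, *args, **kwargs):
        return generate_objects(**kwargs)

    @classmethod
    def _create(cls, model_class, *args, **kwargs):
        return generate_objects(**kwargs)
</code></pre>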
<python><factory-boy>
2023-01-13 19:56:54
1
2,098
breadjesus
75,113,746
16,978,074
read the edge list from a csv file and create a graph with networkx
<p>Hi everyone, I want to read an edge list from a CSV file and create a graph with networkx to calculate the betweenness centrality with Python. My code is:</p> <pre><code>import pandas as pd import networkx as nx df = pd.read_csv('edges1.csv') Graphtype = nx.Graph() G = nx.from_pandas_edgelist(df, edge_attr='genre_ids', create_using=Graphtype) centrality = nx.betweenness_centrality(G, normalize=False) print(centrality) </code></pre> <p>edges1.csv has 97,180 rows:</p> <pre><code>Surce,Target,genre_ids Avatar,Violent Night,18 Harry Potter,The Woman King,20 Happy Feet, Froze,23 so on.... </code></pre> <p>My code gives me the error <code>KeyError: 'source'</code>. How can I fix it?</p>
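<p>A likely fix (a sketch based on the header shown above): <code>from_pandas_edgelist</code> defaults to columns named <code>source</code> and <code>target</code>, so point it at the actual column names; note the &quot;Surce&quot; spelling in the CSV header:</p> <pre><code>G = nx.from_pandas_edgelist(df, source='Surce', target='Target',
                            edge_attr='genre_ids', create_using=nx.Graph())
</code></pre>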
<python><csv><graph><networkx>
2023-01-13 19:33:40
1
337
Elly
75,113,742
4,796,942
Improving performance for a nested for loop iterating over dates
<p>I am looking to learn how to improve the performance of code over a large dataframe (10 million rows) and my solution loops over multiple dates <code>(2023-01-10, 2023-01-20, 2023-01-30)</code> for different combinations of <code>category_a</code> and <code>category_b</code>.</p> <p>The working approach is shown below, which iterates over the dates for different pairings of the two-category data by first locating a subset of a particular pair. However, I would want to refactor it to see if there is an approach that is more efficient.</p> <p>My input (<code>df</code>) looks like:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: right;"></th> <th style="text-align: left;">date</th> <th style="text-align: right;">category_a</th> <th style="text-align: right;">category_b</th> <th style="text-align: right;">outflow</th> <th style="text-align: right;">open</th> <th style="text-align: right;">inflow</th> <th style="text-align: right;">max</th> <th style="text-align: right;">close</th> <th style="text-align: right;">buy</th> <th style="text-align: left;">random_str</th> </tr> </thead> <tbody> <tr> <td style="text-align: right;">0</td> <td style="text-align: left;">2023-01-10</td> <td style="text-align: right;">4</td> <td style="text-align: right;">1</td> <td style="text-align: right;">1</td> <td style="text-align: right;">0</td> <td style="text-align: right;">0</td> <td style="text-align: right;">10</td> <td style="text-align: right;">0</td> <td style="text-align: right;">0</td> <td style="text-align: left;">a</td> </tr> <tr> <td style="text-align: right;">1</td> <td style="text-align: left;">2023-01-20</td> <td style="text-align: right;">4</td> <td style="text-align: right;">1</td> <td style="text-align: right;">2</td> <td style="text-align: right;">0</td> <td style="text-align: right;">0</td> <td style="text-align: right;">20</td> <td style="text-align: right;">nan</td> <td style="text-align: right;">nan</td> <td style="text-align: left;">a</td> </tr> <tr> <td style="text-align: right;">2</td> <td style="text-align: left;">2023-01-30</td> <td style="text-align: right;">4</td> <td style="text-align: right;">1</td> <td style="text-align: right;">10</td> <td style="text-align: right;">0</td> <td style="text-align: right;">0</td> <td style="text-align: right;">20</td> <td style="text-align: right;">nan</td> <td style="text-align: right;">nan</td> <td style="text-align: left;">a</td> </tr> <tr> <td style="text-align: right;">3</td> <td style="text-align: left;">2023-01-10</td> <td style="text-align: right;">4</td> <td style="text-align: right;">2</td> <td style="text-align: right;">2</td> <td style="text-align: right;">0</td> <td style="text-align: right;">0</td> <td style="text-align: right;">10</td> <td style="text-align: right;">0</td> <td style="text-align: right;">0</td> <td style="text-align: left;">b</td> </tr> <tr> <td style="text-align: right;">4</td> <td style="text-align: left;">2023-01-20</td> <td style="text-align: right;">4</td> <td style="text-align: right;">2</td> <td style="text-align: right;">2</td> <td style="text-align: right;">0</td> <td style="text-align: right;">0</td> <td style="text-align: right;">20</td> <td style="text-align: right;">nan</td> <td style="text-align: right;">nan</td> <td style="text-align: left;">b</td> </tr> <tr> <td style="text-align: right;">5</td> <td style="text-align: left;">2023-01-30</td> <td style="text-align: right;">4</td> <td style="text-align: right;">2</td> <td style="text-align: right;">0</td> 
<td style="text-align: right;">0</td> <td style="text-align: right;">0</td> <td style="text-align: right;">20</td> <td style="text-align: right;">nan</td> <td style="text-align: right;">nan</td> <td style="text-align: left;">b</td> </tr> </tbody> </table> </div> <p>with 2 pairs <code>(4, 1)</code> and <code>(4,2)</code> over the days and my expected output (<code>results</code>) looks like this:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: right;"></th> <th style="text-align: left;">date</th> <th style="text-align: right;">category_a</th> <th style="text-align: right;">category_b</th> <th style="text-align: right;">outflow</th> <th style="text-align: right;">open</th> <th style="text-align: right;">inflow</th> <th style="text-align: right;">max</th> <th style="text-align: right;">close</th> <th style="text-align: right;">buy</th> <th style="text-align: left;">random_str</th> </tr> </thead> <tbody> <tr> <td style="text-align: right;">0</td> <td style="text-align: left;">2023-01-10</td> <td style="text-align: right;">4</td> <td style="text-align: right;">1</td> <td style="text-align: right;">1</td> <td style="text-align: right;">0</td> <td style="text-align: right;">0</td> <td style="text-align: right;">10</td> <td style="text-align: right;">-1</td> <td style="text-align: right;">23</td> <td style="text-align: left;">a</td> </tr> <tr> <td style="text-align: right;">1</td> <td style="text-align: left;">2023-01-20</td> <td style="text-align: right;">4</td> <td style="text-align: right;">1</td> <td style="text-align: right;">2</td> <td style="text-align: right;">-1</td> <td style="text-align: right;">23</td> <td style="text-align: right;">20</td> <td style="text-align: right;">20</td> <td style="text-align: right;">10</td> <td style="text-align: left;">a</td> </tr> <tr> <td style="text-align: right;">2</td> <td style="text-align: left;">2023-01-30</td> <td style="text-align: right;">4</td> <td style="text-align: right;">1</td> <td style="text-align: right;">10</td> <td style="text-align: right;">20</td> <td style="text-align: right;">10</td> <td style="text-align: right;">20</td> <td style="text-align: right;">20</td> <td style="text-align: right;">nan</td> <td style="text-align: left;">a</td> </tr> <tr> <td style="text-align: right;">3</td> <td style="text-align: left;">2023-01-10</td> <td style="text-align: right;">4</td> <td style="text-align: right;">2</td> <td style="text-align: right;">2</td> <td style="text-align: right;">0</td> <td style="text-align: right;">0</td> <td style="text-align: right;">10</td> <td style="text-align: right;">-2</td> <td style="text-align: right;">24</td> <td style="text-align: left;">b</td> </tr> <tr> <td style="text-align: right;">4</td> <td style="text-align: left;">2023-01-20</td> <td style="text-align: right;">4</td> <td style="text-align: right;">2</td> <td style="text-align: right;">2</td> <td style="text-align: right;">-2</td> <td style="text-align: right;">24</td> <td style="text-align: right;">20</td> <td style="text-align: right;">20</td> <td style="text-align: right;">0</td> <td style="text-align: left;">b</td> </tr> <tr> <td style="text-align: right;">5</td> <td style="text-align: left;">2023-01-30</td> <td style="text-align: right;">4</td> <td style="text-align: right;">2</td> <td style="text-align: right;">0</td> <td style="text-align: right;">20</td> <td style="text-align: right;">0</td> <td style="text-align: right;">20</td> <td style="text-align: right;">20</td> <td style="text-align: 
right;">nan</td> <td style="text-align: left;">b</td> </tr> </tbody> </table> </div> <p>I have a working solution using pandas dataframes to take a subset then loop over it to get a solution but I would like to see how I can improve the performance of this using perhaps ;<code>numpy</code>, <code>numba</code>, <code>pandas-multiprocessing</code> or <code>dask</code>. Another great idea was to rewrite it in BigQuery SQL.</p> <p>I am not sure what the best solution would be and I would appreciate any help in improving the performance.</p> <p><em><strong>Minimum working example</strong></em></p> <p>The code below generates the input dataframe.</p> <pre><code>import pandas as pd import numpy as np # prepare the input df df = pd.DataFrame({ 'date' : ['2023-01-10', '2023-01-20','2023-01-30', '2023-01-10', '2023-01-20','2023-01-30'] , 'category_a' : [4, 4,4,4, 4, 4] , 'category_b' : [1, 1,1, 2, 2,2] , 'outflow' : [1.0, 2.0,10.0, 2.0, 2.0, 0.0], 'open' : [0.0, 0.0, 0.0, 0.0, 0.0, 0.0] , 'inflow' : [0.0, 0.0, 0.0, 0.0, 0.0, 0.0] , 'max' : [10.0, 20.0, 20.0 , 10.0, 20.0, 20.0] , 'close' : [0.0, np.nan,np.nan, 0.0, np.nan, np.nan] , 'buy' : [0.0, np.nan,np.nan, 0.0, np.nan,np.nan], 'random_str' : ['a', 'a', 'a', 'b', 'b', 'b'] }) df['date'] = pd.to_datetime(df['date']) # get unique pairs of category_a and category_b in a dictionary unique_pairs = df.groupby(['category_a', 'category_b']).size().reset_index().rename(columns={0:'count'})[['category_a', 'category_b']].to_dict('records') unique_dates = np.sort(df['date'].unique()) </code></pre> <p>Using this input dataframe and Numpy, the code below is what I am trying to optmizize.</p> <pre><code>df = df.set_index('date') day_0 = unique_dates[0] # first date # Using Dictionary comprehension list_of_numbers = list(range(len(unique_pairs))) myset = {key: None for key in list_of_numbers} for count_pair, value in enumerate(unique_pairs): # pair of category_a and category_b category_a = value['category_a'] category_b = value['category_b'] # subset the dataframe for the pair df_subset = df.loc[(df['category_a'] == category_a) &amp; (df['category_b'] == category_b)] log.info(f&quot; running for {category_a} and {category_b}&quot;) # day 0 df_subset.loc[day_0, 'close'] = df_subset.loc[day_0, 'open'] + df_subset.loc[day_0, 'inflow'] - df_subset.loc[day_0, 'outflow'] # loop over single pair using date for count, date in enumerate(unique_dates[1:], start=1): previous_date = unique_dates[count-1] df_subset.loc[date, 'open'] = df_subset.loc[previous_date, 'close'] df_subset.loc[date, 'close'] = df_subset.loc[date, 'open'] + df_subset.loc[date, 'inflow'] - df_subset.loc[date, 'outflow'] # check if closing value is negative, if so, set inflow to buy for next weeks deficit if df_subset.loc[date, 'close'] &lt; df_subset.loc[date, 'max']: df_subset.loc[previous_date, 'buy'] = df_subset.loc[date, 'max'] - df_subset.loc[date, 'close'] + df_subset.loc[date, 'inflow'] elif df_subset.loc[date, 'close'] &gt; df_subset.loc[date, 'max']: df_subset.loc[previous_date, 'buy'] = 0 else: df_subset.loc[previous_date, 'buy'] = df_subset.loc[date, 'inflow'] df_subset.loc[date, 'inflow'] = df_subset.loc[previous_date, 'buy'] df_subset.loc[date, 'close'] = df_subset.loc[date, 'open'] + df_subset.loc[date, 'inflow'] - df_subset.loc[date, 'outflow'] # store all the dataframes in a container myset myset[count_pair] = df_subset # make myset into a dataframe result = pd.concat(myset.values()).reset_index(drop=False) result </code></pre> <p>After which we can check that the solution is the same 
as what we expected.</p> <pre><code>from pandas.testing import assert_frame_equal expected = pd.DataFrame({ 'date' : [pd.Timestamp('2023-01-10 00:00:00'), pd.Timestamp('2023-01-20 00:00:00'), pd.Timestamp('2023-01-30 00:00:00'), pd.Timestamp('2023-01-10 00:00:00'), pd.Timestamp('2023-01-20 00:00:00'), pd.Timestamp('2023-01-30 00:00:00')] , 'category_a' : [4, 4, 4, 4, 4, 4] , 'category_b' : [1, 1, 1, 2, 2, 2] , 'outflow' : [1, 2, 10, 2, 2, 0] , 'open' : [0.0, -1.0, 20.0, 0.0, -2.0, 20.0] , 'inflow' : [0.0, 23.0, 10.0, 0.0, 24.0, 0.0] , 'max' : [10, 20, 20, 10, 20, 20] , 'close' : [-1.0, 20.0, 20.0, -2.0, 20.0, 20.0] , 'buy' : [23.0, 10.0, np.nan, 24.0, 0.0, np.nan] , 'random_str' : ['a', 'a', 'a', 'b', 'b', 'b'] }) # check that the result is the same as expected assert_frame_equal(result, expected) </code></pre> <p><em><strong>SQL to create first table</strong></em></p> <p>The solution can also be in sql, if so you can use the following code to create the initial table.</p> <p>I am busy trying to implement a solution in big query sql using a user defined function to keep the logic going too. This would be a nice approach to solving the problem too.</p> <pre><code>WITH data AS ( SELECT DATE '2023-01-10' as date, 4 as category_a, 1 as category_b, 1 as outflow, 0 as open, 0 as inflow, 10 as max, 0 as close, 0 as buy, 'a' as random_str UNION ALL SELECT DATE '2023-01-20' as date, 4 as category_a, 1 as category_b, 2 as outflow, 0 as open, 0 as inflow, 20 as max, NULL as close, NULL as buy, 'a' as random_str UNION ALL SELECT DATE '2023-01-30' as date, 4 as category_a, 1 as category_b, 10 as outflow, 0 as open, 0 as inflow, 20 as max, NULL as close, NULL as buy, 'a' as random_str UNION ALL SELECT DATE '2023-01-10' as date, 4 as category_a, 2 as category_b, 2 as outflow, 0 as open, 0 as inflow, 10 as max, 0 as close, 0 as buy, 'b' as random_str UNION ALL SELECT DATE '2023-01-20' as date, 4 as category_a, 2 as category_b, 2 as outflow, 0 as open, 0 as inflow, 20 as max, NULL as close, NULL as buy, 'b' as random_str UNION ALL SELECT DATE '2023-01-30' as date, 4 as category_a, 2 as category_b, 0 as outflow, 0 as open, 0 as inflow, 20 as max, NULL as close, NULL as buy, 'b' as random_str ) SELECT ROW_NUMBER() OVER (ORDER BY date) as &quot; &quot;, date, category_a, category_b, outflow, open, inflow, max, close, buy, random_str FROM data </code></pre>
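<p>One way to speed this up without changing the logic (a sketch, not benchmarked): run the same recurrence on plain NumPy arrays inside a single <code>groupby().apply</code>, avoiding the repeated label-based <code>.loc</code> indexing. It assumes the original <code>df</code> (before <code>set_index('date')</code>):</p> <pre><code>def simulate(g):
    g = g.sort_values('date')
    out = g['outflow'].to_numpy()
    inf_ = g['inflow'].to_numpy(copy=True)
    mx = g['max'].to_numpy()
    op = g['open'].to_numpy(copy=True)
    cl = g['close'].to_numpy(copy=True)
    buy = g['buy'].to_numpy(copy=True)
    cl[0] = op[0] + inf_[0] - out[0]                  # day 0
    for i in range(1, len(g)):
        op[i] = cl[i - 1]
        cl[i] = op[i] + inf_[i] - out[i]
        if cl[i] &lt; mx[i]:
            buy[i - 1] = mx[i] - cl[i] + inf_[i]
        elif cl[i] &gt; mx[i]:
            buy[i - 1] = 0.0
        else:
            buy[i - 1] = inf_[i]
        inf_[i] = buy[i - 1]
        cl[i] = op[i] + inf_[i] - out[i]
    return g.assign(open=op, inflow=inf_, close=cl, buy=buy)

result = (df.groupby(['category_a', 'category_b'], group_keys=False)
            .apply(simulate)
            .reset_index(drop=True))
</code></pre> <p>On the six-row example this matches <code>expected</code> (up to dtypes); the inner loop now only touches NumPy scalars, and <code>numba.njit</code> could be layered on top if it is still too slow.</p>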
<python><sql><pandas><numpy><google-bigquery>
2023-01-13 19:32:44
1
1,587
user4933
75,113,685
7,800,760
Python: finding longest version of names
<p>I am using Python to parse a news article and obtain a set of people names contained within it. Currently every <strong>Named Entity</strong> classified as a <strong>PER</strong>son (by <em>Stanford's Stanza</em> NLP library) gets added to a <strong>set</strong> as follows:</p> <pre><code>maxnames = set() # initialize an empty set for PER references for entity in doc.entities: if entity.type == &quot;PER&quot;: if entity.text not in maxnames: maxnames.add(entity.text) </code></pre> <p>Here is a <strong>real example</strong> I end up with:</p> <pre><code>{'von der Leyen', 'Meloni', 'Lars Danielsson', 'Filippo Mannino', 'Danielsson', 'Giorgia Meloni', 'Ursula von der Leyen', 'Matteo Piantedosi', 'Lamberto Giannini'} </code></pre> <p>What I'm trying to achieve is to keep only the most complete name. In the above example this should become:</p> <pre><code>{'Lars Danielsson', 'Filippo Mannino', 'Giorgia Meloni', 'Ursula von der Leyen', 'Matteo Piantedosi', 'Lamberto Giannini'} </code></pre> <p>because in the first set:</p> <ul> <li>'von der Leyen' should be suppressed by 'Ursula von der Leyen'</li> <li>'Meloni' suppressed by 'Giorgia Meloni' and so on.</li> </ul> <p>This is how I'm trying to do it, but I'm getting lost :( Can you please spot the error?</p> <pre><code>def longestname(reference: str, nameset: set[str]) -&gt; set[str]: &quot;&quot;&quot; Return the longest name in a set of names &quot;&quot;&quot; for name in nameset.copy(): lenname = len(name) lenref = len(reference) if lenref &lt; lenname: if reference in name: nameset.add(name) else: nameset.remove(name) nameset.add(reference) return nameset nameset = set() nameset = longestname(&quot;von der Leyen&quot;, nameset) nameset = longestname(&quot;Meloni&quot;, nameset) nameset = longestname(&quot;Lars Danielsson&quot;, nameset) nameset = longestname(&quot;Lars&quot;, nameset) nameset = longestname(&quot;Giorgia Meloni&quot;, nameset) nameset = longestname(&quot;Ursula von der Leyen&quot;, nameset) nameset = longestname(&quot;Giorgia&quot;, nameset) print(nameset) # should contain exactly: # {'Lars Danielsson', 'Giorgia Meloni', 'Ursula von der Leyen'} </code></pre>
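<p>A compact alternative (a sketch; plain substring matching can over-match inside words, so a word-boundary check may be safer in production): build the result in one pass, keeping only names that are not contained in any longer name:</p> <pre><code>def longest_names(names: set[str]) -&gt; set[str]:
    &quot;&quot;&quot;Keep only names that are not substrings of another name.&quot;&quot;&quot;
    return {n for n in names if not any(n != m and n in m for m in names)}

names = {'von der Leyen', 'Meloni', 'Lars Danielsson', 'Lars',
         'Giorgia Meloni', 'Ursula von der Leyen', 'Giorgia'}
print(longest_names(names))
# {'Lars Danielsson', 'Giorgia Meloni', 'Ursula von der Leyen'}
</code></pre>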
<python><string>
2023-01-13 19:25:21
1
1,231
Robert Alexander
75,113,505
4,822,772
Gravity form API with python
<p>The documentation of the API is <a href="https://docs.gravityforms.com/rest-api-v2/" rel="nofollow noreferrer">here</a>, and I am trying to implement this line in Python:</p> <pre><code>//retrieve entries created on a specific day (use the date_created field) //this example returns entries created on September 10, 2019 https://localhost/wp-json/gf/v2/entries?search={&quot;field_filters&quot;: [{&quot;key&quot;:&quot;date_created&quot;,&quot;value&quot;:&quot;09/10/2019&quot;,&quot;operator&quot;:&quot;is&quot;}]} </code></pre> <p>But when I try to do this with Python in the following code, I get an error:</p> <pre><code>import json import oauthlib from requests_oauthlib import OAuth1Session consumer_key = &quot;&quot; client_secret = &quot;&quot; session = OAuth1Session(consumer_key, client_secret=client_secret,signature_type=oauthlib.oauth1.SIGNATURE_TYPE_QUERY) url = 'https://localhost/wp-json/gf/v2/entries?search={&quot;field_filters&quot;: [{&quot;key&quot;:&quot;date_created&quot;,&quot;value&quot;:&quot;09/01/2023&quot;,&quot;operator&quot;:&quot;is&quot;}]}' r = session.get(url) print(r.content) </code></pre> <p>The error message is:</p> <pre><code>ValueError: Error trying to decode a non urlencoded string. Found invalid characters: {']', '['} in the string: 'search=%7B%22field_filters%22:%20[%7B%22key%22:%22date_created%22,%22value%22:%2209/01/2023%22,%22operator%22:%22is%22%7D]%7D'. Please ensure the request/response body is x-www-form-urlencoded. </code></pre> <p>One solution is to parameterize the url:</p> <pre><code>import requests import json url = 'https://localhost/wp-json/gf/v2/entries' params = { &quot;search&quot;: {&quot;field_filters&quot;: [{&quot;key&quot;:&quot;date_created&quot;,&quot;value&quot;:&quot;09/01/2023&quot;,&quot;operator&quot;:&quot;is&quot;}]} } headers = {'Content-type': 'application/json'} response = session.get(url, params=params, headers=headers) print(response.json()) </code></pre> <p>But in the retrieved entries, the data is not filtered with the specified date.</p> <p>In the official documentation, they gave a date in this format &quot;09/01/2023&quot;, but in my dataset, the format is &quot;2023-01-10 19:16:59&quot;. Do I have to transform the format? I tried a different format for the date:</p> <pre><code>date_created = &quot;09/01/2023&quot; date_created = datetime.strptime(date_created, &quot;%d/%m/%Y&quot;).strftime(&quot;%Y-%m-%d %H:%M:%S&quot;) </code></pre> <p>What alternative solutions can I test?</p>
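<p>One thing worth trying (a sketch): serialize the filter with <code>json.dumps</code> so <code>requests</code> URL-encodes a proper JSON string, and match the stored <code>date_created</code> format; since the column holds &quot;2023-01-10 19:16:59&quot;, a <code>YYYY-MM-DD</code> value with the <code>is</code> operator may be what the filter expects:</p> <pre><code>import json

search = {&quot;field_filters&quot;: [{&quot;key&quot;: &quot;date_created&quot;, &quot;value&quot;: &quot;2023-01-09&quot;, &quot;operator&quot;: &quot;is&quot;}]}
response = session.get(&quot;https://localhost/wp-json/gf/v2/entries&quot;,
                       params={&quot;search&quot;: json.dumps(search)})
print(response.json())
</code></pre>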
<python><wordpress-rest-api><gravity-forms-plugin>
2023-01-13 19:05:14
2
1,718
John Smith
75,113,503
2,738,155
run tests on a lambda container image?
<p>I'm using Lambda container images to package complicated libraries like opencv and pdf2image in Python.</p> <p>Is there a way to run unit tests against it so I can get code coverage for tools like Sonar? With normal code, I could do the following: <code>python -m unittest -v</code></p> <p>But I'm not sure how to do that if the code is inside a container image.</p> <p>I'm using Bitbucket Pipelines as well.</p>
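<p>One pattern that should work with the AWS-provided Python base images (a sketch; it assumes the image has a shell and the code and tests live in <code>/var/task</code>, the default for those images): override the entrypoint in the pipeline and run unittest inside the container:</p> <pre><code># image name is hypothetical
docker build -t my-lambda-image .
docker run --rm --entrypoint /bin/sh my-lambda-image -c &quot;cd /var/task &amp;&amp; python -m unittest -v&quot;
</code></pre> <p>Coverage works the same way by swapping in <code>coverage run -m unittest</code> and mounting a volume to collect the report for Sonar.</p>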
<python><aws-lambda><bitbucket-pipelines>
2023-01-13 19:04:27
1
1,352
chomp
75,113,464
9,779,999
Trying to run SparkSession on an M1 Mac, but getting RuntimeError: Java gateway process exited before sending its port number
<p>I am trying to run a simple command <code>spark = SparkSession.builder.appName(&quot;Basics&quot;).getOrCreate()</code> in my M1 Mac, Monterey 12.6.2, but it throws an error:</p> <pre><code>The operation couldn’t be completed. Unable to locate a Java Runtime. Please visit http://www.java.com for information on installing Java. /Users/user/miniforge3/envs/bigdata/lib/python3.9/site-packages/pyspark/bin/spark-class: line 96: CMD: bad array subscript head: illegal line count -- -1 Output exceeds the size limit. Open the full output data in a text editor --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) Cell In[2], line 2 1 # May take a little while on a local computer ----&gt; 2 spark = SparkSession.builder.appName(&quot;Basics&quot;).getOrCreate() File ~/miniforge3/envs/bigdata/lib/python3.9/site-packages/pyspark/sql/session.py:269, in SparkSession.Builder.getOrCreate(self) 267 sparkConf.set(key, value) 268 # This SparkContext may be an existing one. --&gt; 269 sc = SparkContext.getOrCreate(sparkConf) 270 # Do not update `SparkConf` for existing `SparkContext`, as it's shared 271 # by all sessions. 272 session = SparkSession(sc, options=self._options) File ~/miniforge3/envs/bigdata/lib/python3.9/site-packages/pyspark/context.py:483, in SparkContext.getOrCreate(cls, conf) 481 with SparkContext._lock: 482 if SparkContext._active_spark_context is None: --&gt; 483 SparkContext(conf=conf or SparkConf()) 484 assert SparkContext._active_spark_context is not None 485 return SparkContext._active_spark_context File ~/miniforge3/envs/bigdata/lib/python3.9/site-packages/pyspark/context.py:195, in SparkContext.__init__(self, master, appName, sparkHome, pyFiles, environment, batchSize, serializer, conf, gateway, jsc, profiler_cls, udf_profiler_cls) 189 if gateway is not None and gateway.gateway_parameters.auth_token is None: 190 raise ValueError( 191 &quot;You are trying to pass an insecure Py4j gateway to Spark. This&quot; ... --&gt; 106 raise RuntimeError(&quot;Java gateway process exited before sending its port number&quot;) 108 with open(conn_info_file, &quot;rb&quot;) as info: 109 gateway_port = read_int(info) RuntimeError: Java gateway process exited before sending its port number </code></pre> <p>I googled a lot, and finally decided to follow this solution here <a href="https://stackoverflow.com/questions/71900906/runtimeerror-java-gateway-process-exited-before-sending-its-port-number">###RuntimeError: Java gateway process exited before sending its port number</a> , and thus I need to go to zshrc by <code> ~/.zshrc</code> to add a line:</p> <p><code>export JAVA_HOME=&quot;/path/to/java_home/&quot;</code>. However it gives me this error <code>zsh: permission denied: /Users/user/.zshrc</code> I have tried these solutions here, but it doesn't work. <a href="https://www.stellarinfo.com/blog/fixed-zsh-permission-denied-in-mac-terminal/" rel="nofollow noreferrer">https://www.stellarinfo.com/blog/fixed-zsh-permission-denied-in-mac-terminal/</a>. I have given Full Disk Access rights to Terminal.</p> <p>Therefore I have 2 problems right now,</p> <ol> <li>Java gateway process exited before sending its port number.</li> <li>zsh permission denied.</li> </ol> <p>Would anyone please help?</p>
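<p>A sketch of one way to tackle both problems (assuming Homebrew is available). For the Java runtime, install a JDK and point <code>JAVA_HOME</code> at it via macOS's <code>java_home</code> helper; for the second problem, <code>zsh: permission denied</code> on <code>~/.zshrc</code> usually means the file is owned by another user (often root), so take ownership back:</p> <pre><code>brew install openjdk@11
sudo ln -sfn $(brew --prefix)/opt/openjdk@11/libexec/openjdk.jdk /Library/Java/JavaVirtualMachines/openjdk-11.jdk

sudo chown $(whoami) ~/.zshrc
echo 'export JAVA_HOME=$(/usr/libexec/java_home -v 11)' &gt;&gt; ~/.zshrc
</code></pre>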
<python><macos><pyspark><zsh><java-home>
2023-01-13 19:01:48
1
1,669
yts61
75,113,403
2,146,381
scipy.integrate.quad with non plausible values
<p>Hey guys,</p> <p>I'm a little bit in trouble with math. There is an <a href="https://stackoverflow.com/questions/74775404/">old question</a> and it's not really solved. I thought about editing the old one, but I think it's better to start a new question.</p> <p>As you can see below, there is an example of my experimental code. There are the functions kf and kfm. kf is a bit of fancy math from an engineer. kfm is just the integral of kf.</p> <p>When I use the <a href="https://en.wikipedia.org/wiki/Trapezoidal_rule" rel="nofollow noreferrer">trapezoid rule</a>, the value of kfm(0.05) is ~67. It's easy, because kf(0.05) is ~135 and the first step of the trapezoid is 1/2 of kf.</p> <p>So I don't know why kfm (the integral of kf) starts at zero. I think it is wrong!</p> <pre><code>'''python code ''' import matplotlib.pyplot as plt from scipy import integrate import math # umf is just the x axis for kf and kfm list_umf = [0.05, 0.1, 0.15000000000000002, 0.2, 0.25, 0.3, 0.35000000000000003, 0.4, 0.45, 0.5, 0.55, 0.6000000000000001, 0.6500000000000001, 0.7000000000000001, 0.7500000000000001, 0.8] # the real kf function def kf(phi): k = 841.17 m1 = -0.00112 m2 = 0.17546 m3 = 0.01271 m4 = 0.00116 phi_p = 3.42 theta_einlauf = 1200 if phi &lt; 0.02: phi = 0.02 return k * math.exp(m1 * theta_einlauf) * (phi ** m2) * (phi_p ** m3) * math.exp(m4 / phi) list_kf_func = [kf(x) for x in list_umf] # the real kfm function def kfm(u, o): return integrate.quad(kf, u, o)[0] u = list_umf[0] list_kfm_func = [kfm(u, o) for o in list_umf] # here are some plots plt.plot(list_umf, list_kf_func, label='kf_func') plt.plot(list_umf, list_kfm_func, label='kfm_func') plt.legend(loc=&quot;upper left&quot;) </code></pre> <p>Here is the plot, generated with the code: <a href="https://i.sstatic.net/8Zb98.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8Zb98.png" alt="Here is the plot, generated with the code" /></a></p>
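<p>A note on the math plus a sketch: <code>kfm(u, o)</code> is the definite integral from <code>u</code> to <code>o</code>, so <code>kfm(0.05, 0.05)</code> covers a zero-width interval and is exactly 0; the ~67 from the trapezoid rule is the area of the first trapezoid, not the value of the integral at its starting point, so <code>quad</code> is behaving correctly. To reproduce the cumulative trapezoid behaviour with SciPy:</p> <pre><code>from scipy.integrate import cumulative_trapezoid

# cumulative integral of kf sampled on list_umf; starts at 0 by construction
list_kfm_trap = cumulative_trapezoid(list_kf_func, list_umf, initial=0)
</code></pre>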
<python><math><scipy>
2023-01-13 18:54:49
1
322
tux007
75,113,366
1,028,270
pip install --upgrade is installing my "local" version suffixed packages
<p>During development I build and push feature branch versions of my package that look like: <code>1.2.3+mybranch</code>.</p> <p>So I'll have packages named <code>1.2.3</code>, <code>1.2.3+mybranch</code> and <code>1.2.4+mybranch</code>, and <code>1.2.4</code>.</p> <p>The problem is it seems pip has no problem installing a package with a <code>+suffix</code> when doing a regular <code>pip install --upgrade</code>.</p> <p>I don't want pip to do that.</p> <p>Is there a way I can have only release versions installed with <code>pip install --upgrade</code>? I would think pip would do this by default.</p>
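<p>For context, pip treats PEP 440 local versions (<code>+suffix</code>) as perfectly valid candidates once they are published to an index it consults. One workaround, sketched here with placeholder names, is to publish branch builds only to a separate index and opt in explicitly:</p> <pre><code># mypkg and the dev index URL are placeholders
pip install --upgrade mypkg   # default index carries releases only
pip install --upgrade --extra-index-url https://dev.example.com/simple mypkg   # opt in to +branch builds
</code></pre>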
<python><pip>
2023-01-13 18:50:12
1
32,280
red888
75,113,344
8,401,374
Selenium can't find dynamically loaded input field element even after waiting for that specific element
<p>I'm trying to access an input field of username (of a login page) with Selenium. The page is JavaScript-based. <code>driver.get()</code> waits by default for the complete page to load. In my case, it is unable to find the element. I can inspect the element in the browser (Firefox) and I get this:</p> <pre><code>&lt;input type=&quot;text&quot; autocomplete=&quot;username&quot; name=&quot;username&quot;&gt; </code></pre> <p>I tried to wait for that specific element with <code>EC.presence_of_element_located</code>.</p> <p>Code trials:</p> <pre><code>driver = webdriver.Firefox() driver.get(url) delay = 10 # seconds try: myElem = WebDriverWait(driver, delay).until(EC.presence_of_element_located((By.NAME, 'username'))) print(&quot;Page is ready!&quot;) except TimeoutException: print(&quot;Loading took too much time!&quot;) print(driver.page_source) </code></pre> <p>I get <code>Loading took too much time!</code>, even though the element is there, as I can inspect it in the browser. I also tried <code>EC.presence_of_element_located((By.TAG_NAME, 'input')))</code> but it also can't find the tag.</p> <p>Update from the comments: url='https://drive.inditex.com/drfrcomr/login'</p>
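<p>One common cause worth checking (a sketch; whether it applies depends on the page's markup): JavaScript-rendered login forms often live inside an <code>iframe</code>, and <code>presence_of_element_located</code> never looks inside frames. Switching into the frame first may help:</p> <pre><code>from selenium.webdriver.support import expected_conditions as EC

WebDriverWait(driver, delay).until(
    EC.frame_to_be_available_and_switch_to_it((By.TAG_NAME, &quot;iframe&quot;)))
myElem = WebDriverWait(driver, delay).until(
    EC.presence_of_element_located((By.NAME, &quot;username&quot;)))
</code></pre>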
<python><selenium><xpath><css-selectors><webdriverwait>
2023-01-13 18:48:27
3
1,710
Shaida Muhammad
75,113,200
5,924,264
How to check the amount of memory a single/set of lines in python is using?
<p>I added several lines to a Python codebase and now the memory is exceeding the limits. Is there a way to see how much memory a particular line or set of lines is using in Python?</p> <p>Here's the set of lines that I added:</p> <pre><code> df[&quot;value_weighted_average&quot;] = ( df.reset_index().merge(df2merge, on='vid', suffixes=['','_x']) .assign(proportions=lambda x: (x[['first','first_x']].min(axis=1) - x[['second','second_x']].max(axis=1))) .query('proportions &gt;= 0') .groupby('index')['proportions'].sum() ) </code></pre> <p><code>df1.shape</code> is typically about <code>O(10^5) x O(10^1)</code>. <code>df2.shape</code> is typically about <code>O(10^7) x O(10^1)</code>.</p> <p>I'm not well versed in Python, but I think the <code>merge</code> operation is what's introducing most of the memory usage.</p>
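<p>A sketch using the standard library's <code>tracemalloc</code>, which attributes allocations to source lines (<code>memory_profiler</code>'s <code>@profile</code> decorator is a popular alternative):</p> <pre><code>import tracemalloc

tracemalloc.start()
before = tracemalloc.take_snapshot()

# ... the suspect lines, e.g. the merge/assign/groupby chain ...

after = tracemalloc.take_snapshot()
for stat in after.compare_to(before, 'lineno')[:10]:
    print(stat)   # top allocation growth, attributed to file and line
</code></pre>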
<python><memory><memory-management>
2023-01-13 18:32:45
0
2,502
roulette01
75,113,164
1,275,973
Understanding Python dictionary "lookups" between dictionaries to replace keys
<p>I have two dictionaries and my objective was to replace the keys in first_dict with the values in second_dict.</p> <p>I got the code working, but largely through trial and error, so I would like some help understanding and translating exactly what is going on here in Python.</p> <pre><code>first_dict={&quot;FirstName&quot;: &quot;Jeff&quot;, &quot;Town&quot;: &quot;Birmingham&quot;} second_dict={&quot;FirstName&quot;: &quot;c1&quot;, &quot;Town&quot;: &quot;c2&quot;} new_dict = {second_dict[k]: v for k, v in first_dict.items()} </code></pre> <p>This gives me what I want, a new dict as follows:</p> <pre><code>{'c1': 'Jeff', 'c2': 'Birmingham'} </code></pre> <p>How is this working?</p> <ul> <li>&quot;new_dict&quot; creates a new dictionary</li> <li>so &quot;in first_dict.items()&quot;, i.e. for each key-value pair in &quot;first_dict&quot;:</li> <li>the value in the new_dict is the value from &quot;first_dict&quot;</li> <li>the key in the new_dict is the value from the second_dict</li> </ul> <p>How does &quot;second_dict[k]&quot; do this? It seems like it is doing some sort of a lookup to match between the keys of first_dict and second_dict? Is this right, and if so, how does it work?</p>
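<p>The comprehension is equivalent to this explicit loop (the same logic, spelled out):</p> <pre><code>new_dict = {}
for k, v in first_dict.items():   # k = &quot;FirstName&quot;, v = &quot;Jeff&quot;; then k = &quot;Town&quot;, v = &quot;Birmingham&quot;
    new_key = second_dict[k]      # plain dict lookup: second_dict[&quot;FirstName&quot;] -&gt; &quot;c1&quot;
    new_dict[new_key] = v         # new_dict[&quot;c1&quot;] = &quot;Jeff&quot;
</code></pre> <p>So yes: each key of <code>first_dict</code> is used as a lookup key into <code>second_dict</code>, and whatever <code>second_dict</code> maps it to becomes the new key; it only works because both dictionaries share exactly the same keys.</p>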
<python><dictionary>
2023-01-13 18:28:40
1
326
alexei7
75,113,080
676,430
Finding files in a directory that have an underscore and no extension and adding an extension to it
<p>I have a directory with some files <strong>with an underscore and no extension</strong> to which I would like to <strong>add an extension and get rid of the underscore</strong>.</p> <p><strong>Example: list of files in directory</strong></p> <pre><code>filename1_.jpg file_name_2_ file_name3 </code></pre> <p><strong>The renamed files should be changed to look like what's below:</strong></p> <pre><code>file_name_2.jpg file_name3.jpg </code></pre> <p>I was going to start with just looking for files with no extension, but that just lists all the files.</p> <p><strong>Code I tried below:</strong></p> <pre><code># Finding files with extension using for loop #import os module import os # Specifies the path in path variable path = &quot;/tmp/0/jpg/&quot; for i in os.listdir(path): # List files with extension if i.endswith(&quot;&quot;): print(&quot;Files with extension no extension:&quot;,i) </code></pre> <p><strong>PS: I'm trying to do this all in Python since it's going to be used in another piece of Python code.</strong></p>
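<p>A sketch for the no-extension case (assuming a trailing underscore should be dropped; files like <code>filename1_.jpg</code>, which already have an extension, would need an extra branch on the same pattern):</p> <pre><code>import os

path = &quot;/tmp/0/jpg/&quot;
for name in os.listdir(path):
    root, ext = os.path.splitext(name)
    if not ext:                              # no extension at all
        new_name = root.rstrip(&quot;_&quot;) + &quot;.jpg&quot;
        os.rename(os.path.join(path, name), os.path.join(path, new_name))
</code></pre>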
<python><python-3.x>
2023-01-13 18:19:46
3
3,419
Rick T
75,112,863
11,462,274
Columns changing names when I try to filter rows from a DataFrame that aren't in another DataFrame according to specific columns
<p>Following this answer:</p> <p><a href="https://stackoverflow.com/a/47107164/11462274">https://stackoverflow.com/a/47107164/11462274</a></p> <p>I am trying to create a DataFrame containing only the rows not found in another DataFrame, matching on only some specific columns rather than all of them, so I tried to do it this way:</p> <pre class="lang-python prettyprint-override"><code>import pandas as pd df1 = pd.DataFrame(data = {'col1' : [1, 2, 3, 4, 5, 3], 'col2' : [10, 11, 12, 13, 14, 10], 'col3' : [1,5,7,9,6,7]}) df2 = pd.DataFrame(data = {'col1' : [1, 2, 3], 'col2' : [10, 11, 12], 'col3' : [1,5,8]}) df_merge = df1.merge(df2.drop_duplicates(), on=['col1','col3'], how='left', indicator=True) df_merge = df_merge.query(&quot;_merge == 'left_only'&quot;)[df1.columns] print(df_merge) </code></pre> <p>But note that when not all columns are used in the merge, the remaining ones get renamed, like <code>col2</code> to <code>col2_x</code>:</p> <pre class="lang-none prettyprint-override"><code> col1 col2_x col3 col2_y _merge 0 1 10 1 10.0 both 1 2 11 5 11.0 both 2 3 12 7 NaN left_only 3 4 13 9 NaN left_only 4 5 14 6 NaN left_only 5 3 10 7 NaN left_only </code></pre> <p>So when I try to create the final DataFrame without the unnecessary columns, the renamed columns are not found, and the desired filter cannot be generated:</p> <pre class="lang-none prettyprint-override"><code>KeyError(f&quot;{not_found} not in index&quot;) KeyError: &quot;['col2'] not in index&quot; </code></pre>
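<p>One way to avoid the suffixes entirely (a sketch): merge against only the key columns of <code>df2</code>, so no non-key column can collide:</p> <pre><code>df_merge = df1.merge(df2[['col1', 'col3']].drop_duplicates(),
                     on=['col1', 'col3'], how='left', indicator=True)
df_merge = df_merge.query(&quot;_merge == 'left_only'&quot;)[df1.columns]
</code></pre>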
<python><pandas><dataframe>
2023-01-13 17:58:01
2
2,222
Digital Farmer
75,112,812
10,492,521
How do I make a Conan test package require the package that it is testing? Specifically if the package version is dynamic?
<p>let's say I have a package:</p> <pre><code>from conans import ConanFile class MainLibraryPackage(ConanFile): name = 'main' description = 'stub' def set_version(self): self.version = customFunctionToGetVersion() ... </code></pre> <p>And I have a test package for it:</p> <pre><code>import os from conans import ConanFile, CMake class MainLibraryTests(ConanFile): &quot;&quot;&quot; Conan recipe to run C++ tests for main library &quot;&quot;&quot; settings = 'arch', 'build_type', 'os', 'compiler' generators = &quot;cmake&quot; def requirements(self): self.requires(&quot;gtest/1.12.1&quot;) self.requires(&lt;my main library, somehow?&gt;) def build(self): cmake = CMake(self) cmake.configure() cmake.build() def test(self): print(&quot;THIS IS A TEST OF THE TEST&quot;) if not tools.cross_building(self): os.chdir(&quot;bin&quot;) self.run(&quot;.%smain_tests&quot; % os.sep) </code></pre> <p>How do I actually add the main package as a requirement? And if I do so, will that properly populate the <code>CONAN_LIBS</code> variable in the <code>CMakeLists.txt</code> for my test package? Thanks!</p>
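<p>In recent Conan 1.x (1.44 or later, if memory serves; a sketch, so check the version in use), the reference under test is exposed to the test-package recipe and can be required explicitly:</p> <pre><code>def requirements(self):
    self.requires(&quot;gtest/1.12.1&quot;)
    self.requires(self.tested_reference_str)  # the main package, dynamic version included
</code></pre> <p>With the requirement in place, the <code>cmake</code> generator should populate <code>CONAN_LIBS</code> as usual; note that if the test package declares no explicit requirement at all, <code>conan create</code> injects the tested reference automatically.</p>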
<python><c++><cmake><conan>
2023-01-13 17:52:36
1
515
Danny
75,112,776
4,083,786
vscode remote ssh integrated terminal not reading correct python
<p>I followed the instructions on how to set up a remote SSH development environment. I have a host: Windows 10. I go to my terminal and ssh into a remote Linux server for Python development. When I remote in for the first time, it's a fresh environment. When I type</p> <pre><code>which python </code></pre> <p>I receive usr/bin/python.</p> <p>I then copy a bash_profile located at a custom directory into the root directory, log off the remote session, log back in, and type</p> <pre><code>which python </code></pre> <p>I receive alias python='/opt/anaconda2/bin/python2.7' /opt/anaconda2/bin/python2.7</p> <pre><code>echo $PYTHONPATH </code></pre> <p>gives me /opt/Iceetcetc-3.4.2_51/python:/home/myusername/dev/py:/opt/py:/opt/html</p> <p>Then when I type python I see the following on my remote session:</p> <p>Python 2.7.14 |Anaconda Inc.| (default, Oct 16 2017) [GCC 7.2.8] on linux2</p> <p>import company_module</p> <p>works fine.</p> <p>Now, from my local host, a Windows machine, I started a new VS Code window and pressed &quot;connect to remote host&quot;. I typed in my username@hostname with the requisite password and I'm logged in. When I click on 'new terminal' inside VS Code, in the integrated terminal, I typed hostname and saw that I am on the same host. However,</p> <pre><code>echo $PYTHONPATH </code></pre> <p>gives me an empty variable rather than /opt/Iceetcetc-3.4.2_51/python:/home/myusername/dev/py:/opt/py:/opt/html</p> <p>How can I get the integrated terminal to pick up the same PYTHONPATH as my remote shell? Thanks.</p>
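<p>A likely explanation (hedged, since it depends on the remote shell setup): <code>~/.bash_profile</code> is only read by login shells, and the VS Code integrated terminal does not necessarily start one, while an SSH session does. One sketch of a fix is to force a login shell via the remote terminal profile settings:</p> <pre><code>// settings.json (Remote SSH settings)
&quot;terminal.integrated.profiles.linux&quot;: {
    &quot;bash&quot;: { &quot;path&quot;: &quot;bash&quot;, &quot;args&quot;: [&quot;-l&quot;] }
},
&quot;terminal.integrated.defaultProfile.linux&quot;: &quot;bash&quot;
</code></pre> <p>Moving the exports from the bash_profile into <code>~/.bashrc</code> is another common route, since non-login interactive shells read that file.</p>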
<python><visual-studio-code><ssh>
2023-01-13 17:49:52
1
1,182
turtle_in_mind
75,112,699
850,781
Compute correlations of several vectors
<p>I have several pairs of vectors (arranged as two matrices) and I want to compute the <em>vector</em> of their pairwise correlation coefficients (or, better yet, angles between them - but since correlation coefficient is its cosine, I am using <a href="https://numpy.org/doc/stable/reference/generated/numpy.corrcoef.html" rel="nofollow noreferrer"><code>numpy.corrcoef</code></a>):</p> <pre><code>np.array([np.corrcoef(m1[:,i],m2[:,i])[0,1] for i in range(m1.shape[1])]) </code></pre> <p>I wonder if there is a way to &quot;vectorize&quot; this, i.e., avoid calling <code>corrcoef</code> several times.</p>
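<p>A vectorized sketch: center each column, then form the normalized column-wise dot products; <code>np.arccos</code> of the result gives the angles directly:</p> <pre><code>import numpy as np

a = m1 - m1.mean(axis=0)
b = m2 - m2.mean(axis=0)
corr = (a * b).sum(axis=0) / np.sqrt((a * a).sum(axis=0) * (b * b).sum(axis=0))
angles = np.arccos(np.clip(corr, -1.0, 1.0))   # radians; clip guards against rounding
</code></pre>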
<python><numpy><linear-algebra><pearson-correlation>
2023-01-13 17:42:02
1
60,468
sds
75,112,530
4,556,675
Pytest monkeypatch mock leak
<p>I have a module that I am trying to unit test, and when running individual tests by themselves, each test will pass. When I try to run the full set of tests together in a single test session using <code>pytest test_some_module.py</code>, some tests will fail due to a monkeypatch mock that is leaking across tests.</p> <p>I have an example module, <code>handler.py</code> that is as follows:</p> <pre class="lang-py prettyprint-override"><code>import os from redis import Redis host = os.environ[&quot;REDIS_HOSTNAME&quot;] r = Redis(host=host, db=0) def lambda_handler(event, context): &quot;&quot;&quot;Lambda Handler&quot;&quot;&quot; for key, value in event.items(): r.set(key, value) keys = r.keys() return keys </code></pre> <p>Then I have a test module, <code>test_redis.py</code>, with the following code:</p> <pre class="lang-py prettyprint-override"><code> import pytest import redis from fakeredis import FakeRedis @pytest.fixture() def hostname(): return &quot;a fake host&quot; @pytest.fixture() def fake_redis(hostname): return FakeRedis(host=hostname, db=0) @pytest.fixture(autouse=True) def mock_redis(monkeypatch, fake_redis, hostname): def _redis(*args, **kwargs): return fake_redis monkeypatch.setattr(redis, &quot;Redis&quot;, _redis) monkeypatch.setattr(redis, &quot;StrictRedis&quot;, _redis) monkeypatch.setenv(&quot;REDIS_HOSTNAME&quot;, hostname) def test_cache_is_set(hostname): from handler import lambda_handler result = lambda_handler({&quot;a_key&quot;: &quot;value&quot;}, {}) assert len(result) == 1 def test_cache_is_empty(hostname): from handler import lambda_handler result = lambda_handler({}, {}) assert len(result) == 0 </code></pre> <p>When I go to run either of the test functions by themselves, they will pass. However, when I run <code>pytest test_redis.py</code>, the <code>test_cache_is_empty</code> unit test will fail due to keys being present in the monkeypatched Redis database using <code>FakeRedis</code>.</p> <p>This is failing for me on Linux. Is there a good way to do this to ensure that mocked objects don't leak across unit tests?</p> <p>I'm using <code>pytest 6.2.4</code> and <code>fakeredis 1.6.0</code>.</p> <p>For context, I have tried adding the following code to the <code>mock_redis</code> test fixture and it still won't work as intended.</p> <pre class="lang-py prettyprint-override"><code>@pytest.fixture(autouse=True) def mock_redis(monkeypatch, fake_redis, hostname): def _redis(*args, **kwargs): return fake_redis monkeypatch.setattr(redis, &quot;Redis&quot;, _redis) monkeypatch.setattr(redis, &quot;StrictRedis&quot;, _redis) monkeypatch.setenv(&quot;REDIS_HOSTNAME&quot;, hostname) yield fake_redis.flushall() # &lt;- This should work and flush the db for the next test function to use </code></pre>
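<p>One explanation worth checking (a sketch): <code>handler</code> is imported once and cached in <code>sys.modules</code>, so its module-level <code>r</code> keeps pointing at the <code>FakeRedis</code> instance created during the first test, while the function-scoped <code>fake_redis</code> fixture (and its <code>flushall</code> teardown) targets a fresh instance on every test. Forcing a re-import isolates the tests:</p> <pre><code>import sys

@pytest.fixture(autouse=True)
def fresh_handler():
    # drop any cached import so each test re-imports handler against its own fake redis
    sys.modules.pop(&quot;handler&quot;, None)
    yield
    sys.modules.pop(&quot;handler&quot;, None)
</code></pre>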
<python><redis><mocking><pytest><monkeypatching>
2023-01-13 17:26:47
0
5,868
CaptainDriftwood
75,112,453
2,706,344
Actual interpolation based on date
<p>This question is a follow-up that arose in the comments of <a href="https://stackoverflow.com/q/75110993/2706344">Resampling on a multi index</a>.</p> <p>We start with the following data:</p> <pre><code>data=pd.DataFrame({'dates':['2004','2008','2012'],'values':[k*(1+4*365) for k in range(3)]}) data['dates']=pd.to_datetime(data['dates']) data=data.set_index('dates') </code></pre> <p>That is what it produces: <a href="https://i.sstatic.net/2fwST.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2fwST.png" alt="enter image description here" /></a></p> <p>Now, when I resample and interpolate by</p> <pre><code>data.resample('A').mean().interpolate() </code></pre> <p>I obtain the following: <a href="https://i.sstatic.net/Q1lNt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Q1lNt.png" alt="enter image description here" /></a></p> <p>But what I want (and the problem is already the resampling and not the interpolation step) is</p> <pre><code>2004-12-31 365 2005-12-31 730 2006-12-31 1095 2007-12-31 1460 2008-12-31 1826 2009-12-31 2191 2010-12-31 2556 2011-12-31 2921 2012-12-31 3287 </code></pre> <p>So I want an actual linear interpolation on the given data.</p> <p>To make it even clearer, I wrote a function which does the job. However, I'm still looking for a built-in solution (my own function is bad code with a very ugly runtime):</p> <pre><code>def fillResampleCorrectly(data,resample): for i in range(len(resample)): currentDate=resample.index[i] for j in range(len(data)): if currentDate&gt;=data.index[j]: if j&lt;len(data)-1: continue valueBefore=data[data.columns[0]].iloc[j-1] valueAfter=data[data.columns[0]].iloc[j] dateBefore=data.index[j-1] dateAfter=data.index[j] currentValue=valueBefore+(valueAfter-valueBefore)*((currentDate-dateBefore)/(dateAfter-dateBefore)) resample[data.columns[0]].iloc[i]=currentValue break </code></pre>
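<p>A built-in-only sketch that reproduces the desired numbers: union the original index with the year-end stamps, interpolate with <code>method='time'</code> (true time-weighted linear interpolation), then keep only the year ends:</p> <pre><code>year_ends = pd.date_range('2004-12-31', '2012-12-31', freq='A')
result = (data.reindex(data.index.union(year_ends))
              .interpolate(method='time')
              .reindex(year_ends))
</code></pre>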
<python><pandas><interpolation>
2023-01-13 17:19:09
3
4,346
principal-ideal-domain
75,112,399
4,040,743
How to round down a datetime to the nearest 5 Minutes?
<p>I need a Python3 function that rounds down a <code>datetime.datetime</code> object to the nearest 5 minutes. Yes, this has been discussed in previous SO posts <a href="https://stackoverflow.com/questions/32723150/rounding-up-to-nearest-30-minutes-in-python">here</a> and <a href="https://stackoverflow.com/questions/41595754/round-down-datetime-to-previous-hour">here</a> and even <a href="https://stackoverflow.com/questions/25754405/how-can-i-extract-hours-and-minutes-from-a-datetime-datetime-object">here</a>, but I'm having no luck implementing their solutions.</p> <p><strong>NOTE: I can not use pandas</strong></p> <p>I want a function, given the below DateTime (<code>%Y%m%d%H%M</code>) objects, returns the following:</p> <pre><code>INPUT OUTPUT 202301131600 202301131600 202301131602 202301131600 202301131604 202301131600 202301131605 202301131605 202301131609 202301131605 202301131610 202301131610 </code></pre> <p>Here's my code, using <a href="https://docs.python.org/3/library/datetime.html#datetime.timedelta" rel="nofollow noreferrer">timedelta</a> as a mechanism:</p> <pre><code>from datetime import datetime from datetime import timedelta def roundDownDateTime(dt): # Arguments: # dt DateTime object delta = timedelta(minutes=5) return dt - (datetime.min - dt) % delta tmpDate = datetime.now() # Print the current time and then rounded-down time: print(&quot;\t&quot;+tmpDate.strftime('%Y%m%d%H%M')+&quot; --&gt; &quot;+(roundDownDateTime(tmpDate)).strftime('%Y%m%d%H%M') ) </code></pre> <p>Here's some output when I test the code multiple times:</p> <pre><code>202301131652 --&gt; 202301131650 202301131700 --&gt; 202301131655 202301131701 --&gt; 202301131657 </code></pre> <p>Ugh, no good! I adapted my function to this:</p> <pre><code>def roundDownDateTime(dt): # Arguments: # dt DateTime object n = dt - timedelta(minutes=5) return datetime(year=n.year, month=n.month, day=n.day, hour=n.hour) </code></pre> <p>But that was even <strong>worse</strong>:</p> <pre><code>202301131703 --&gt; 202301131600 202301131707 --&gt; 202301131700 202301131710 --&gt; 202301131700 </code></pre> <p>I am all thumbs when figuring out this basic <code>datetime</code> arithmetic stuff; can anyone see my error?</p>
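<p>The first function is nearly right; the subtraction inside the modulo is just the wrong way round, so it computes the complement of the remainder instead of the remainder. A corrected sketch:</p> <pre><code>def roundDownDateTime(dt):
    delta = timedelta(minutes=5)
    return dt - (dt - datetime.min) % delta   # remainder past the last 5-minute mark

# 202301131609 -&gt; 202301131605, 202301131610 -&gt; 202301131610
</code></pre> <p>Because <code>datetime.min</code> has zero seconds and microseconds, this also truncates those fields, matching the table above.</p>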
<python><python-3.x><datetime><rounding>
2023-01-13 17:13:47
3
1,599
Pete
75,112,340
6,932,839
Tkinter - After Second Button Click, Change Button Function to Close Window
<p>I am trying to figure out a way to change a button's text and functionality after I have clicked the <code>Submit</code> button a second time. In the below instance, I am trying to:</p> <p><strong>1)</strong> Change the button's text from <code>Submit</code> to <code>Close</code> after I have entered in the username/password fields for <code>SecondName</code> and have clicked <code>Submit</code></p> <p><strong>2)</strong> Use the <code>Close()</code> function to close the window.</p> <p>I have attempted to accomplish these two processes by using an <code>if/else</code> statement.</p> <p><strong>Tkinter Code</strong></p> <pre><code>import tkinter as tk root = tk.Tk() user_var = tk.StringVar() pass_var = tk.StringVar() entries = {} def Submit(): user = user_var.get() passw = pass_var.get() label_text = user_label[&quot;text&quot;] char = label_text.split()[0] entries[char] = (user, passw) if char == &quot;FirstName&quot;: user_label[&quot;text&quot;] = &quot;SecondName &quot; + user_label[&quot;text&quot;].split()[1] pass_label[&quot;text&quot;] = &quot;SecondName &quot; + pass_label[&quot;text&quot;].split()[1] user_var.set(&quot;&quot;) pass_var.set(&quot;&quot;) print(entries) def Close(): root.quit() user_label = tk.Label(root, text=&quot;FirstName Username&quot;, width=21) user_entry = tk.Entry(root, textvariable=user_var) pass_label = tk.Label(root, text=&quot;FirstName Password&quot;, width=21) pass_entry = tk.Entry(root, textvariable=pass_var, show=&quot;•&quot;) if user_entry[&quot;text&quot;] == &quot;SecondName&quot;: sub_btn = tk.Button(root, text=&quot;Close&quot;, command=Close) else: sub_btn = tk.Button(root, text=&quot;Submit&quot;, command=Submit) sub_btn.grid(row=2, column=0) user_label.grid(row=0, column=0) user_entry.grid(row=0, column=1) pass_label.grid(row=1, column=0) pass_entry.grid(row=1, column=1) root.mainloop() </code></pre> <p><strong>Current Result</strong></p> <p><a href="https://i.sstatic.net/1AEwG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1AEwG.png" alt="enter image description here" /></a></p> <p><strong>Expected Result</strong></p> <p><a href="https://i.sstatic.net/mE1jP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mE1jP.png" alt="enter image description here" /></a></p>
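<p>The <code>if/else</code> around the button creation runs only once, before <code>mainloop</code>, so it can never react to later clicks. A sketch of the usual pattern: create the button once, then reconfigure it from inside <code>Submit</code> when the second set of credentials has been captured:</p> <pre><code>def Submit():
    user = user_var.get()
    passw = pass_var.get()
    char = user_label[&quot;text&quot;].split()[0]
    entries[char] = (user, passw)
    if char == &quot;FirstName&quot;:
        user_label[&quot;text&quot;] = &quot;SecondName Username&quot;
        pass_label[&quot;text&quot;] = &quot;SecondName Password&quot;
        user_var.set(&quot;&quot;)
        pass_var.set(&quot;&quot;)
    else:
        sub_btn.config(text=&quot;Close&quot;, command=Close)   # repurpose the existing button
</code></pre>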
<python><tkinter><label><tkinter-entry><tkinter-button>
2023-01-13 17:08:09
1
1,141
arnpry
75,112,280
978,434
Type support for dynamically added instance variable
<p>Because of working a lot with a 3rd party code (Django-ORM), I need to add some variables to the class that are not declared initially.</p> <p>The simplest Python example would be:</p> <pre class="lang-py prettyprint-override"><code>def enrich(obj): obj.c = 5 return obj </code></pre> <p>It works in Python, but strict type checking fails, and I want to benefit from static type analysis</p> <p>Full example:</p> <pre class="lang-py prettyprint-override"><code>import typing class A: a = 1 b = 2 class IExt(typing.Protocol): c: int T = typing.TypeVar('T') def enrich(obj: T) -&gt; IExt | T: typing.cast(IExt, obj).c = 5 return obj a = A() print(a.a) # OK - Works because class is used explicitly print(enrich(a).c) # Not OK - Auto-complete works, but type checking shows error print(enrich(a).b) # Not OK - Auto-complete works, but type checking shows error print(enrich(a).x) # OK - Fails and is expected to fail </code></pre> <p>The error says</p> <pre><code>main.py:21: error: Item &quot;A&quot; of &quot;Union[IExt, A]&quot; has no attribute &quot;c&quot; [union-attr] main.py:22: error: Item &quot;IExt&quot; of &quot;Union[IExt, A]&quot; has no attribute &quot;b&quot; [union-attr] </code></pre> <p>I understand that <code>Union</code> is a bit wrong choice here, because it says that variable is either of two types.</p> <p>What I want to do is to explain type checker that return type extends <code>Type[T]</code></p> <p>So I'd like to do something like</p> <pre><code>class IExt(typing.Protocol[T], typing.Type[T]): c: int = 0 </code></pre> <p>But looks like this is not possible. Any ideas for the workaround?</p> <p>I definitely want to keep it Generic, as <code>A</code> here is just example.</p>
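<p>Python's type system has no intersection type yet, so &quot;<code>T</code> plus an extra attribute&quot; cannot be expressed directly. One workaround (a sketch, with the obvious cost that original attributes are reached through <code>.obj</code>) is an explicit generic wrapper:</p> <pre><code>import typing

T = typing.TypeVar('T')

class Enriched(typing.Generic[T]):
    &quot;&quot;&quot;Carries the wrapped object plus the extra attribute.&quot;&quot;&quot;
    def __init__(self, obj: T, c: int) -&gt; None:
        self.obj = obj
        self.c = c

def enrich(obj: T) -&gt; Enriched[T]:
    return Enriched(obj, c=5)

e = enrich(A())
print(e.c)      # type-checks
print(e.obj.b)  # type-checks, but access goes through .obj
</code></pre>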
<python><mypy><python-typing>
2023-01-13 17:02:46
0
3,179
Igor
75,112,254
10,967,961
Creating nodes for an undirected graph starting from pandas
<p>I have a dataframe that looks like this (I have 170,000 observations in reality):</p> <pre><code>Firm pat cited_pat F_1 [p0,p1,p2] [p0,p1,p2] F_2 [] [] F_3 [p3,p6,p2] [p5,p0,p23,p29,p12,p8] F_4 [p0,p9,p25] [p0,p29,p31] ... </code></pre> <p>The idea is this:</p> <ol> <li>Create all possible couples of F_i, F_j;</li> <li>If two F_i, F_j have one (or more) &quot;ps&quot; in common, then put an edge of 1 and stop;</li> <li>If they do not, then take <code>cited_pat</code> and check how many &quot;ps&quot; are in common there. If more than 50% are in common, then create an edge=1.</li> </ol> <p>Now, I am struggling to find an easy way to do it. Could you please help me with this?</p>
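<p>A sketch with <code>itertools.combinations</code>; since &quot;more than 50% in common&quot; is ambiguous, the overlap here is measured against the smaller of the two <code>cited_pat</code> sets (an assumption to adjust):</p> <pre><code>from itertools import combinations
import networkx as nx

G = nx.Graph()
G.add_nodes_from(df['Firm'])
rows = df.set_index('Firm')[['pat', 'cited_pat']].to_dict('index')

for f1, f2 in combinations(rows, 2):
    a, b = rows[f1], rows[f2]
    if set(a['pat']) &amp; set(b['pat']):        # rule 2: at least one shared patent
        G.add_edge(f1, f2)
        continue
    c1, c2 = set(a['cited_pat']), set(b['cited_pat'])
    if c1 and c2 and len(c1 &amp; c2) / min(len(c1), len(c2)) &gt; 0.5:   # rule 3
        G.add_edge(f1, f2)
</code></pre> <p>With 170,000 firms the all-pairs loop is enormous (~1.4e10 pairs); building an inverted index from patent to firms and only testing firm pairs that share at least one patent or citation would cut the work drastically.</p>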
<python><pandas><networkx>
2023-01-13 17:00:12
1
653
Lusian
75,112,136
16,529,391
Python unable to install guesslang
<p>I'm trying to install <code>guesslang</code> with pip but it seems that the last version (which was released on August 2021) depends on an obsolete version of Tensorflow (<code>2.5.0</code>). The problem is that I can't find this version anywhere. So, how can I install it? Or is there any other python library that does language detection?</p> <p>However here's the error I get when trying to install it, maybe I misunderstood...</p> <pre><code>&gt; pip install guesslang Collecting guesslang Using cached guesslang-2.2.1-py3-none-any.whl (2.5 MB) Using cached guesslang-2.2.0-py3-none-any.whl (2.5 MB) Using cached guesslang-2.0.3-py3-none-any.whl (2.1 MB) Using cached guesslang-2.0.1-py3-none-any.whl (2.1 MB) Using cached guesslang-2.0.0-py3-none-any.whl (13.0 MB) Using cached guesslang-0.9.3-py3-none-any.whl (3.2 MB) Collecting numpy Using cached numpy-1.24.1-cp310-cp310-win_amd64.whl (14.8 MB) Collecting guesslang Using cached guesslang-0.9.1-py3-none-any.whl (3.2 MB) ERROR: Cannot install guesslang==0.9.1, guesslang==0.9.3, guesslang==2.0.0, guesslang==2.0.1, guesslang==2.0.3, guesslang==2.2.0 and guesslang==2.2.1 because these package versions have conflicting dependencies. The conflict is caused by: guesslang 2.2.1 depends on tensorflow==2.5.0 guesslang 2.2.0 depends on tensorflow==2.5.0 guesslang 2.0.3 depends on tensorflow==2.5.0 guesslang 2.0.1 depends on tensorflow==2.2.0 guesslang 2.0.0 depends on tensorflow==2.2.0 guesslang 0.9.3 depends on tensorflow==1.7.0rc1 guesslang 0.9.1 depends on tensorflow==1.1.0 To fix this you could try to: 1. loosen the range of package versions you've specified 2. remove package versions to allow pip attempt to solve the dependency conflict ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts </code></pre>
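<p>A sketch of a workaround: <code>tensorflow==2.5.0</code> only has wheels for Python 3.6 through 3.9, so on a newer interpreter pip cannot satisfy the pin at all. Creating a Python 3.9 environment first (conda is assumed here) lets the resolver succeed:</p> <pre><code>conda create -n guesslang python=3.9
conda activate guesslang
pip install guesslang   # now resolves tensorflow==2.5.0
</code></pre>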
<python><pip><language-detection>
2023-01-13 16:49:24
2
483
pasta64
75,112,083
4,439,524
Pandas eval returns error on date arithmetic using Timedelta
<p>When I try to use basic date arithmetic in Pandas' <code>eval()</code>, it gives the following error:</p> <p><code>Cannot convert input [1 days 00:00:00] of type &lt;class 'pandas._libs.tslibs.timedeltas.Timedelta'&gt; to Timestamp</code></p> <p>The same expression run against the data frame works fine, though. Is this a feature of <code>eval()</code> or am I missing something?</p> <p>A minimal example:</p> <pre><code>import pandas as pd df = pd.DataFrame(data={ &quot;date&quot;: pd.date_range(start=&quot;2020-01-01&quot;, periods=10, freq='D') }) # works fine df.date + pd.Timedelta('1D') # errors out df.eval(&quot;date + @pd.Timedelta('1D')&quot;) </code></pre> <p>Incidentally, the expression complains about undefined &quot;pd&quot;, even if it's included in the <code>local_dict</code> parameter to <code>eval()</code>. The only way I found to get around it is with the <code>@</code> prefix. For what it's worth, I'm using the latest version of <code>pandas==1.5.2</code>.</p>
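<p>Two things worth trying (hedged, since the numexpr-backed engine is what rejects the <code>Timedelta</code>): fall back to the python engine inside <code>eval</code>, or keep date arithmetic outside <code>eval</code> altogether:</p> <pre><code>df.eval(&quot;date + @pd.Timedelta('1D')&quot;, engine='python')   # may sidestep the numexpr path

df.assign(next_day=df['date'] + pd.Timedelta('1D'))       # equivalent without eval
</code></pre>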
<python><pandas><eval>
2023-01-13 16:44:39
0
1,456
gherka
75,111,895
10,134,422
Secure way to utilize AWS SSM parameter store to make API call
<p>I need to write a lambda function which makes an API call (to Airflow) using credentials stored in AWS SSM parameter store. I have been supplied with the key id for the credentials.</p> <p>How can I securely query the credentials and integrate them (again securely) into the API call?</p> <p>Is this on the right track:</p> <pre><code>Import boto3 key_supplied = 'the key I was supplied with' client = boto3.client('ssm') def lambda_handler(event, context): parameter = client.get_parameter(Name='key_supplied', WithDecryption=True) print(parameter) return parameter ['Parameter']['Value'] </code></pre>
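<p>Close; one bug and two hardening notes (a sketch): <code>Name='key_supplied'</code> passes the literal string rather than the variable, the client can live outside the handler so it is reused across invocations, and the decrypted value should never be printed to logs:</p> <pre><code>import boto3

key_supplied = 'the key I was supplied with'
client = boto3.client('ssm')

def lambda_handler(event, context):
    parameter = client.get_parameter(Name=key_supplied, WithDecryption=True)
    secret = parameter['Parameter']['Value']
    # use `secret` in the Airflow API call here; avoid print/return paths that expose it
    return {'status': 'ok'}
</code></pre>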
<python><aws-lambda><airflow><aws-ssm><airflow-api>
2023-01-13 16:26:05
1
460
Sanchez333
75,111,782
14,735,451
Efficient way to compare if items in a list exist in a list of lists
<p>I have 2 lists:</p> <pre><code>list_1 = ['Denver Broncos', 'Carolina Panthers', &quot;Levi's Stadium in the San Francisco Bay Area at Santa Clara, California&quot;, 'Carolina Panthers', 'gold'] list_2 = [['Denver Broncos', 'Denver Broncos', 'Denver Broncos'], ['Carolina Panthers', 'Carolina Panthers', 'Carolina Panthers'], ['Santa Clara, California', &quot;Levi's Stadium&quot;, &quot;Levi's Stadium in the San Francisco Bay Area at Santa Clara, California.&quot;], ['Denver Broncos', 'Denver Broncos', 'Denver Broncos'], ['gold', 'gold', 'gold']] </code></pre> <p>Is there an efficient way to compare whether each item from <code>list_1</code> appears in the corresponding (based on its index) sublist of <code>list_2</code>?</p> <p>Currently I have it as a <code>for loop</code>:</p> <pre><code>score = 0 for i in range(len(list_1)): if list_1[i] in list_2[i]: score+=1 average_score = score/len(list_1) </code></pre> <p>but my lists are huge and I run this multiple times with multiple lists. I was hoping there's a more efficient way (I have access to a GPU if it helps).</p>
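<p>A sketch: <code>zip</code> removes the index arithmetic, and converting each sublist to a <code>set</code> once makes each membership test O(1), which pays off when the sublists are long or reused across runs:</p> <pre><code>sets_2 = [set(sub) for sub in list_2]   # build once, reuse for every comparison list
score = sum(a in s for a, s in zip(list_1, sets_2))
average_score = score / len(list_1)
</code></pre>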
<python><list>
2023-01-13 16:14:30
0
2,641
Penguin
75,111,707
10,798,503
How to reset Discord.OptionSelect in Python Pycord?
<p>I am trying to implement a &quot;Back&quot; button that will return to the previous dropdown. I am using Pycord.</p> <p>I have a dropdown with options to pick different food categories; after you pick a category, the dropdown menu changes to a new dropdown where you see items in that category. In addition, you have a &quot;Back&quot; button that should get you to the previous dropdown.</p> <p>At the moment I get the error <code>In components.0.components.0.options.1: The specified option value is already used</code> after I click the back button and click the same category again.</p> <p>Here is how I recreate the issue: first I run the slash command <code>/shop</code> and click the &quot;Meats&quot; category</p> <p><a href="https://i.sstatic.net/LFM2L.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LFM2L.png" alt="enter image description here" /></a></p> <p>Then I get to a new dropdown and I click the &quot;Back&quot; button:</p> <p><a href="https://i.sstatic.net/W7UXc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/W7UXc.png" alt="enter image description here" /></a></p> <p>I get back to the original dropdown, and if I click the &quot;Meats&quot; category again, it crashes.</p> <p><a href="https://i.sstatic.net/kIJ0M.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kIJ0M.png" alt="enter image description here" /></a></p> <p><strong>main.py</strong></p> <pre><code>import discord from example1 import CategoryView bot = discord.Bot() @bot.command() async def shop(ctx): await ctx.respond(&quot;Choose a category from the dropdown below! ✨&quot;, view=CategoryView()) @bot.event async def on_ready(): print(&quot;Ready!&quot;) bot.run(&quot;TOKEN&quot;) </code></pre> <p><strong>example1.py</strong></p> <pre><code>import discord from example2 import CategoryPriceView class CategoryView(discord.ui.View): def __init__(self): super().__init__() @discord.ui.select( placeholder = &quot;Choose a food category!&quot;, min_values = 1, max_values = 1, options = [ discord.SelectOption( label=&quot;Meats&quot;, emoji='🍖' ), discord.SelectOption( label=&quot;Salads&quot;, emoji='🥗' ) ] ) async def select_callback(self, select, interaction): category = select.values[0] await interaction.response.edit_message(content=&quot;Choose your item!&quot;, view=CategoryPriceView(category, self)) </code></pre> <p><strong>example2.py</strong></p> <pre><code>import discord options = [] class CategoryPriceView(discord.ui.View): def __init__(self, category, father): super().__init__() global options self.category = category self.father = father if self.category == 'Meats': options.append(discord.SelectOption(label='Steak', description=&quot;Price: 40$&quot;)) elif self.category == 'Salads': options.append(discord.SelectOption(label='Greek Salad', description=&quot;Price: 30$&quot;)) @discord.ui.button(label=&quot;Back&quot;, row=1, style=discord.ButtonStyle.blurple) async def button_callback(self, button, interaction): await interaction.response.edit_message(content=&quot;Choose a category from the dropdown below! ✨&quot;, view=self.father) @discord.ui.select( placeholder = &quot;Choose an item!&quot;, min_values = 1, max_values = 1, options = options ) async def select_callback(self, select, interaction): item = select.values[0] await interaction.response.edit_message(content=f&quot;You chose {item}! &quot;, view=None) </code></pre>
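<p>The crash traces back to the module-level <code>options</code> list in <code>example2.py</code>: every <code>CategoryPriceView</code> appends to the same shared list, so revisiting a category duplicates option values. A sketch that builds the select per instance instead (subclassing <code>discord.ui.Select</code>, a usual Pycord pattern for dynamic options):</p> <pre><code>class ItemSelect(discord.ui.Select):
    def __init__(self, category):
        if category == 'Meats':
            opts = [discord.SelectOption(label='Steak', description=&quot;Price: 40$&quot;)]
        else:
            opts = [discord.SelectOption(label='Greek Salad', description=&quot;Price: 30$&quot;)]
        super().__init__(placeholder=&quot;Choose an item!&quot;, min_values=1, max_values=1, options=opts)

    async def callback(self, interaction):
        await interaction.response.edit_message(content=f&quot;You chose {self.values[0]}! &quot;, view=None)

class CategoryPriceView(discord.ui.View):
    def __init__(self, category, father):
        super().__init__()
        self.father = father
        self.add_item(ItemSelect(category))

    @discord.ui.button(label=&quot;Back&quot;, row=1, style=discord.ButtonStyle.blurple)
    async def button_callback(self, button, interaction):
        await interaction.response.edit_message(content=&quot;Choose a category from the dropdown below! ✨&quot;, view=self.father)
</code></pre>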
<python><discord.py><pycord>
2023-01-13 16:08:07
1
1,142
yarin Cohen
75,111,694
17,277,677
How to change dataframe from long to wide shape without losing duplicated values?
<p>I have given example dataframe:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd df = pd.DataFrame({ 'company_name': ['do holdings co', 'real estate b.v.', 'real estate b.v.','real coiffure', 'real coiffure', 'elendom', 'theatre media ltd'], 'sector_1': ['Industrials', 'Finance', 'Finance','Consumer', 'Consumer','Real Estate', 'Media'], 'company_country': ['USA', 'Poland', 'Poland','USA','USA', 'Poland', 'Canada'], 'keyword': ['holding', 'real', 'estate','real','coiffure', 'elendom', 'theatre'], 'value': [1,1,1,1,1,1,1], 'sector': ['Finance', 'Real Estate', 'Real Estate', 'Real Estate', 'Consumer', 'Real Estate', 'Media'] }) </code></pre> <p>I was checking if keywords exists in a company name, if they do - I was assigning them matching sector (column sector, sector_1 - please ignore for now).</p> <p>I have a list of keywords and as you can see they duplicate in a keyword column - because I was checking per each company. I already filtered out the keyword with 0 occurrences.</p> <p>I would like to change the table to wide format, but where we have duplication with key words - then assign two sectors, the results should be as below:</p> <pre class="lang-py prettyprint-override"><code>df_results = pd.DataFrame({ 'company_name': ['do holdings co', 'real estate b.v.', 'real coiffure', 'elendom', 'theatre media ltd'], 'sector_1': ['Industrials', 'Finance','Consumer', 'Real Estate', 'Media'], 'company_country': ['USA', 'Poland','USA', 'Poland', 'Canada'], 'holding': [1,0,0,0,0], 'real': [0,1,1,0,0], 'estate': [0,1,0,0,0], 'coiffure': [0,0,1,0,0], 'elendom': [0,0,0,1,0], 'theatre': [0,0,0,0,1], 'sector': ['Finance', ['Real Estate', 'Real Estate'],['Real Estate', 'Consumer'], 'Real Estate', 'Media'] }) </code></pre> <p>I have a problem approaching this task, appreciate the help.</p> <p>EDIT:</p> <p>This is what I've been trying, still not perfect but almost there:</p> <pre class="lang-py prettyprint-override"><code>df_wide = pd.crosstab(index=df['company_name'], columns=df['keyword'], values=df['value'], aggfunc='sum') df_wide['sector'] = df.groupby('company_name')['sector'].apply(lambda x: list(set(x))) df_results = pd.merge(df_wide, df[['company_name','sector_1','company_country']], on='company_name', how='left') </code></pre>
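<p>A sketch close to the attempt above; the main differences are aggregating the sectors with <code>agg(list)</code> (which keeps duplicates, though single-sector companies end up as one-element lists rather than bare strings) and deduplicating the metadata columns before merging:</p> <pre><code>import pandas as pd

wide = (pd.crosstab(df['company_name'], df['keyword'],
                    values=df['value'], aggfunc='sum')
          .fillna(0).astype(int)
          .reset_index())
sectors = (df.groupby('company_name')['sector']
             .agg(list)  # keeps duplicates, e.g. ['Real Estate', 'Real Estate']
             .reset_index())
meta = df[['company_name', 'sector_1', 'company_country']].drop_duplicates()
df_results = meta.merge(wide, on='company_name').merge(sectors, on='company_name')
</code></pre>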
<python><pandas><dataframe><formatting><dummy-variable>
2023-01-13 16:07:23
1
313
Kas
75,111,658
2,913,106
transform colors in colorbar, not the ticks
<p>When using a custom normalization, for instance <code>PowerNorm</code>, we can adjust the mapping between values and the colors. If we then show a corresponding colorbar, we can see the change when observing the ticks (compare left and right plot in the following picture).</p> <p>Is there a way to use the normalization like on the left, but then have a colorbar where the <em>colours</em> are &quot;squished&quot; to one end, but the ticks remain equidistant (like on the right side)?</p> <p><a href="https://i.sstatic.net/oRDFd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oRDFd.png" alt="enter image description here" /></a></p> <pre class="lang-py prettyprint-override"><code>import numpy as np import matplotlib.pyplot as plt from matplotlib.colors import PowerNorm x, _ = np.meshgrid(*2*(np.linspace(-1, 1, 100),)) # with normalization: transformation is applied to image as desired, but the ticks at the colorbar are not equidistant plt.subplot(121) plt.imshow(x, norm=PowerNorm(gamma=4, vmin=-1, vmax=1)) plt.colorbar() # without normalization plt.subplot(122) plt.imshow(x) plt.colorbar() plt.show() </code></pre>
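<p>One way is to bake the normalization into the colormap itself: sample the base colormap at norm-transformed positions and draw the image with a plain linear norm, so the colorbar keeps equidistant ticks while its colours are squished. A sketch, assuming the default <code>viridis</code> colormap:</p> <pre><code>import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import PowerNorm, ListedColormap

norm = PowerNorm(gamma=4, vmin=-1, vmax=1)
base = plt.get_cmap('viridis')
# colour at linear position p becomes base(norm(value at p))
squished = ListedColormap(base(norm(np.linspace(-1, 1, 256))))

x, _ = np.meshgrid(*2*(np.linspace(-1, 1, 100),))
plt.imshow(x, cmap=squished, vmin=-1, vmax=1)  # linear norm -&gt; equidistant ticks
plt.colorbar()
plt.show()
</code></pre>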
<python><matplotlib><transform><normalization><colorbar>
2023-01-13 16:04:17
1
11,728
flawr
75,111,569
13,142,245
Streamlit on AWS: serverless options?
<p>My goal is to deploy a Streamlit application to an AWS Serverless architecture. Streamlit does not appear to function properly without a Docker container, so the architecture would need to support containers.</p> <p>From various tutorials, EC2 is the most popular deployment option for Streamlit, which I have no interest in pursuing due to the server management aspect.</p> <p>AWS Lambda would be my preferred deployment option if viable. I see that <a href="https://docs.aws.amazon.com/lambda/latest/dg/images-create.html" rel="nofollow noreferrer">Lambda can support containers</a>, but I'm curious what the pros &amp; cons of Lambda vs Fargate is for containerized apps.</p> <p>My question is: Is Lambda or Fargate better for a serverless deployment of a Streamlit web app?</p>
<python><amazon-web-services><docker><serverless><streamlit>
2023-01-13 15:57:43
1
1,238
jbuddy_13
75,111,524
7,920,004
Python - handle empty list when iterating through dict
<p>I have a list of dicts and need to retrieve the <code>events</code> key, which is a list. However, that list is not always filled with data, depending on the case.</p> <p>How do I iterate through them and not get a <code>list index out of range</code> error? <code>[-1]</code> does work, but when <code>events</code> is an empty list, I get that error.</p> <p>Sample input:</p> <pre><code>jobs = [ { &quot;JobName&quot;:&quot;xyz&quot;, &quot;JobRunState&quot;:&quot;SUCCEEDED&quot;, &quot;LogGroupName&quot;:&quot;xyz&quot;, &quot;Id&quot;:&quot;xyz&quot;, &quot;events&quot;:[ ] }, { &quot;JobName&quot;:&quot;xyz2&quot;, &quot;JobRunState&quot;:&quot;SUCCEEDED&quot;, &quot;LogGroupName&quot;:&quot;xyz&quot;, &quot;Id&quot;:&quot;xyz&quot;, &quot;events&quot;:[ { &quot;timestamp&quot;:1673596884835, &quot;message&quot;:&quot;....&quot;, &quot;ingestionTime&quot;:1673598934350 }, { &quot;timestamp&quot;:1673599235711, &quot;message&quot;:&quot;....&quot;, &quot;ingestionTime&quot;:1673599236353 } ] } ] </code></pre> <p>Code:</p> <pre><code> success = [ { &quot;name&quot;: x[&quot;JobName&quot;], &quot;state&quot;: x[&quot;JobRunState&quot;], &quot;event&quot;: self.logs_client.get_log_events( logGroupName=x[&quot;LogGroupName&quot;] + &quot;/output&quot;, logStreamName=x[&quot;Id&quot;], )[&quot;events&quot;][-1][&quot;message&quot;], } for x in jobs if x[&quot;JobRunState&quot;] in self.SUCCESS ] </code></pre> <p>Expected behavior: when <code>[&quot;events&quot;]</code> is empty, return <code>&quot;event&quot;</code> as an empty list.</p> <pre><code>[ {'name': 'xyz', 'state': 'SUCCEEDED', 'event': []}, {'name': 'xyz2', 'state': 'SUCCEEDED', 'event': &quot;....&quot;} ] </code></pre> <p>Error code:</p> <pre><code>&quot;event&quot;: self.logs_client.get_log_events( IndexError: list index out of range </code></pre>
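<p>A small helper keeps the comprehension readable and returns <code>[]</code> when <code>events</code> is empty; a sketch built on the code above:</p> <pre><code>def last_message(log_response):
    events = log_response['events']
    return events[-1]['message'] if events else []

success = [
    {
        'name': x['JobName'],
        'state': x['JobRunState'],
        'event': last_message(
            self.logs_client.get_log_events(
                logGroupName=x['LogGroupName'] + '/output',
                logStreamName=x['Id'],
            )
        ),
    }
    for x in jobs
    if x['JobRunState'] in self.SUCCESS
]
</code></pre>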
<python>
2023-01-13 15:54:46
2
1,509
marcin2x4
75,111,518
11,462,274
Analyze if the value of a column is less than another and this another is less than another and so on
<p>Currently I do it this way:</p> <pre class="lang-python prettyprint-override"><code>import pandas as pd dt = pd.DataFrame({ '1st':[1,0,1,0,1], '2nd':[2,1,2,1,2], '3rd':[3,0,3,2,3], '4th':[4,3,4,3,4], '5th':[5,0,5,4,5], 'minute_traded':[6,5,6,5,6] }) dt = dt[ (dt['1st'] &lt; dt['2nd']) &amp; (dt['2nd'] &lt; dt['3rd']) &amp; (dt['3rd'] &lt; dt['4th']) &amp; (dt['4th'] &lt; dt['5th']) &amp; (dt['5th'] &lt; dt['minute_traded']) ] print(dt) </code></pre> <p>Result:</p> <pre class="lang-none prettyprint-override"><code> 1st 2nd 3rd 4th 5th minute_traded 0 1 2 3 4 5 6 2 1 2 3 4 5 6 3 0 1 2 3 4 5 4 1 2 3 4 5 6 </code></pre> <p>Is there a more correct method for an analysis like this that always uses the same pattern and only changes the columns to be analyzed?</p>
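<p>A sketch that expresses &quot;each column strictly less than the next&quot; once, for any ordered list of columns:</p> <pre><code>import numpy as np

cols = ['1st', '2nd', '3rd', '4th', '5th', 'minute_traded']
arr = dt[cols].to_numpy()
mask = (arr[:, :-1] &lt; arr[:, 1:]).all(axis=1)  # strictly increasing left to right
dt = dt[mask]
</code></pre>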
<python><pandas><dataframe>
2023-01-13 15:54:18
2
2,222
Digital Farmer
75,111,462
3,719,713
Annotate a mutating function default param
<p>Let's say I have this function:</p> <pre class="lang-py prettyprint-override"><code> def foo(inp = None): if inp is None: inp = [] inp.append(&quot;a&quot;) print(inp) </code></pre> <p>Note: <code>None</code> as default param must be used to avoid updating the same list.</p> <p>I want to annotate the <code>inp</code> param which is <code>None</code> but is going to become a list..I tried something like:</p> <pre class="lang-py prettyprint-override"><code>def foo(inp: None = None): if inp is None: # this will be flagged as error by type checker, e.g. mypy inp: list[int] = [] inp.append(&quot;a&quot;) print(inp) </code></pre> <p>But that won't work because the inp type was already defined as None. What is the recommended way of doing this?</p>
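<p>The conventional annotation is <code>Optional[...]</code> (or <code>list[str] | None</code> on Python 3.10+): the parameter may be <code>None</code>, and after the <code>is None</code> check a type checker narrows it to the list type, so no re-annotation inside the body is needed. A sketch:</p> <pre><code>from typing import Optional

def foo(inp: Optional[list[str]] = None) -&gt; None:
    if inp is None:
        inp = []  # mypy narrows inp to list[str] from here on
    inp.append('a')
    print(inp)
</code></pre>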
<python><mypy>
2023-01-13 15:50:12
2
1,208
diegus
75,111,217
53,491
How do I run DBT models from a Python script or program?
<p>I have a DBT project, and a python script will be grabbing data from the postgresql to produce output.</p> <p>However, part of the python script will need to make the DBT run. I haven't found the library that will let me cause a DBT run from an external script, but I'm pretty sure it exists. How do I do this?</p> <p>ETA: The correct answer may be to download the DBT CLI and then use python system calls to use that.... I was hoping for a library, but I'll take what I can get.</p>
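<p>Until a supported Python entry point exists, shelling out to the CLI is the usual route; a sketch using <code>subprocess</code> (the project path is hypothetical):</p> <pre><code>import subprocess

result = subprocess.run(
    ['dbt', 'run', '--project-dir', '/path/to/dbt/project'],
    capture_output=True, text=True)
print(result.stdout)
if result.returncode != 0:
    raise RuntimeError(result.stderr)
</code></pre>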
<python><dbt>
2023-01-13 15:28:30
1
12,317
Brian Postow
75,111,196
16,315,671
YOLOv8 : RuntimeError: An attempt has been made to start a new process before the current process has finished its bootstrapping phase
<p>An attempt has been made to start a new process before the current process has finished its bootstrapping phase.</p> <pre><code> This probably means that you are not using fork to start your child processes and you have forgotten to use the proper idiom in the main module: if __name__ == '__main__': freeze_support() ... The &quot;freeze_support()&quot; line can be omitted if the program is not going to be frozen to produce an executable. </code></pre> <p><strong>This error shows up when trying to train a YOLOv8 model in a Python environment.</strong></p> <pre><code>from ultralytics import YOLO  # Load a model model = YOLO(&quot;yolov8n.yaml&quot;) # build a new model from scratch model = YOLO(&quot;yolov8n.pt&quot;) # load a pretrained model (recommended for training) # Use the model results = model.train(data=&quot;coco128.yaml&quot;, epochs=3) # train the model results = model.val() # evaluate model performance on the validation set results = model(&quot;https://ultralytics.com/images/bus.jpg&quot;) # predict on an image success = YOLO(&quot;yolov8n.pt&quot;).export(format=&quot;onnx&quot;) # export a model to ONNX format </code></pre>
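<p>This is the standard spawn-based multiprocessing complaint: the trainer starts dataloader worker processes, so the training call has to sit behind an <code>if __name__ == '__main__':</code> guard so that child processes can re-import the module without re-running the training. A sketch:</p> <pre><code>from ultralytics import YOLO

def main():
    model = YOLO('yolov8n.pt')  # load a pretrained model
    model.train(data='coco128.yaml', epochs=3)

if __name__ == '__main__':  # required with spawn-based worker processes
    main()
</code></pre>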
<python><yolo>
2023-01-13 15:26:30
3
461
Prince David Nyarko
75,111,097
10,003,538
How to convert byte array to hex string?
<p>Here is the sample code in JS:</p> <pre><code>function toHexString(bytes) { return bytes.map(function(byte) { return (&quot;00&quot; + (byte &amp; 0xFF).toString(16)).slice(-2); }).join(''); } input -&gt; Buffer.from(&quot;333138383223633D77DB&quot;, 'hex') output -&gt; 333138383223630770 </code></pre> <p>Here is what I have tried so far in <code>Python</code>:</p> <pre><code>def toHexString(byteArray): return ''.join('{:02x}'.format(x) for x in byteArray) input -&gt; bytearray.fromhex(&quot;333138383223633D77DB&quot;) output -&gt; 333138383223633d77db </code></pre> <p>I think the logic is correct, but I don't know what is wrong.</p> <p>I expect the result of the Python code to match the result of the <code>JS</code> code.</p> <p>How should I update the <code>Python</code> code to get the exact same result as the <code>JS</code> code?</p>
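<p>For what it's worth, the two implementations already agree: JS <code>toString(16)</code> also produces lowercase hex, so <code>333138383223633d77db</code> is the expected round-trip for both (the JS output shown above looks like a transcription slip). <code>bytes.hex()</code> is the idiomatic Python spelling, with <code>.upper()</code> if the input casing is wanted back:</p> <pre><code>ba = bytearray.fromhex('333138383223633D77DB')
print(ba.hex())          # 333138383223633d77db
print(ba.hex().upper())  # 333138383223633D77DB
</code></pre>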
<javascript><python>
2023-01-13 15:18:34
3
1,225
Chau Loi
75,111,080
2,173,320
Training wav2vec2 for multiple (classification) tasks
<p>I trained a wav2vec2 model using PyTorch and Hugging Face transformers. Here is the code: <a href="https://github.com/padmalcom/wav2vec2-nonverbalvocalization" rel="nofollow noreferrer">https://github.com/padmalcom/wav2vec2-nonverbalvocalization</a></p> <p>I now want to train the model on a second task, e.g. age classification or speech recognition (ASR).</p> <p>My problem is that I do not really understand how I can configure my model to accept a second input and train another output. Can anybody give me a short explanation?</p> <p>I know that I have to use multiple heads in my model and that the thing I want to achieve is called &quot;multi task learning&quot;. My problem is that I don't know how to write the model for that.</p>
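<p>A common pattern is one shared wav2vec2 encoder with a separate head per task, summing the per-task losses during training. A minimal sketch for two classification tasks (for ASR you would instead need a CTC head over the full hidden-state sequence, not a pooled vector; the checkpoint name and class layout are illustrative):</p> <pre><code>import torch.nn as nn
from transformers import Wav2Vec2Model

class MultiTaskWav2Vec2(nn.Module):
    def __init__(self, n_vocal_classes, n_age_classes):
        super().__init__()
        self.encoder = Wav2Vec2Model.from_pretrained('facebook/wav2vec2-base')
        hidden = self.encoder.config.hidden_size
        self.vocal_head = nn.Linear(hidden, n_vocal_classes)  # task 1
        self.age_head = nn.Linear(hidden, n_age_classes)      # task 2

    def forward(self, input_values, labels_vocal=None, labels_age=None):
        hidden_states = self.encoder(input_values).last_hidden_state
        pooled = hidden_states.mean(dim=1)  # mean-pool over time
        logits_vocal = self.vocal_head(pooled)
        logits_age = self.age_head(pooled)
        loss = None
        if labels_vocal is not None and labels_age is not None:
            ce = nn.CrossEntropyLoss()
            loss = ce(logits_vocal, labels_vocal) + ce(logits_age, labels_age)
        return loss, logits_vocal, logits_age
</code></pre>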
<python><pytorch><classification><huggingface>
2023-01-13 15:16:51
1
1,507
padmalcom
75,110,993
2,706,344
Resampling on a multi index
<p>I have a DataFrame of the following form: <a href="https://i.sstatic.net/o3Vhf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/o3Vhf.png" alt="enter image description here" /></a></p> <p>You see that it has a multi index. For each <code>muni</code> index I want to do a resampling of the form <code>.resample('A').mean()</code> of the <code>popDate</code> index. Hence, I want python to fill in the missing years. NaN values shall be replaced by a linear interpolation. How do I do that?</p> <p>Update: Some mock input DataFrame:</p> <pre><code>interData=pd.DataFrame({'muni':['Q1','Q1','Q1','Q2','Q2','Q2'],'popDate':['2015','2021','2022','2015','2017','2022'],'population':[5,11,22,15,17,22]}) interData['popDate']=pd.to_datetime(interData['popDate']) interData=interData.set_index(['muni','popDate']) </code></pre>
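<p>A sketch: move <code>muni</code> out of the index, resample each group to year-end frequency (the <code>'A'</code> alias) and interpolate within the group, so the filling never crosses municipality boundaries:</p> <pre><code>out = (interData
       .reset_index(level='muni')
       .groupby('muni')['population']
       .apply(lambda s: s.resample('A').mean().interpolate()))
</code></pre>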
<python><pandas>
2023-01-13 15:10:54
1
4,346
principal-ideal-domain
75,110,767
7,437,143
Position of images as nodes in networkx plot?
<p>After using <a href="https://stackoverflow.com/a/53968787/7437143">this answer</a> to generate a plot with images as nodes, I am experiencing some difficulties in making the y-coordinate positions of the nodes line up with the y-coordinate positions of the images: <a href="https://i.sstatic.net/dWFLD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dWFLD.png" alt="enter image description here" /></a></p> <p>The <code>x,y</code>-positions of the nodes is set as:</p> <pre><code>pos={ 0: [0, 0], 1: [1, 0], 2: [2, 0], 3: [3, 0], 4: [4, 0], 5: [5, 0], 6: [6, 0], 7: [5, 1] } </code></pre> <p>and the code to generate the plot is:</p> <pre class="lang-py prettyprint-override"><code>pos:Dict={} for nodename in G.nodes: pos[nodename]=G.nodes[nodename][&quot;pos&quot;] print(f'pos={pos}') fig=plt.figure(figsize=(1,1)) ax=plt.subplot(111) ax.set_aspect('equal') nx.draw_networkx_edges(G,pos,ax=ax) # plt.xlim(-1,10) # plt.ylim(-1.5,1.5) trans=ax.transData.transform trans2=fig.transFigure.inverted().transform piesize=0.3 # this is the image size p2=piesize/2.0 for n in G: xx,yy=trans(pos[n]) # figure coordinates #print(f'xx,yy={xx,yy}') xa,ya=trans2((xx,yy)) # axes coordinates print(f'xa,ya={xa,ya}') a = plt.axes([xa-p2,ya-p2, piesize, piesize]) a.set_aspect('equal') a.imshow(G.nodes[n]['image']) a.axis('off') ax.axis('off') plt.show() </code></pre> <p>However, the y-coordinates of the edges does not align with the y-coordinates of the images. I expect this is because the coordinate transformation applied for the <code>plt.axes[[xa-p2...</code> line, is not applied to the edge drawings.</p> <h2>Question</h2> <p>How can I ensure the images are placed on their <code>x,y</code> coordinate positions a figure with size <code>[x_min,x_max,y_min,y_max]</code> whilst ensuring the edges of the networkx Graph also point to the accompaning coordinates?</p>
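<p>An alternative that sidesteps the transform round-trip entirely: draw each image as an <code>AnnotationBbox</code> anchored in data coordinates, so images and edges share the same coordinate system by construction. A sketch reusing <code>G</code> and <code>pos</code> from above:</p> <pre><code>from matplotlib.offsetbox import OffsetImage, AnnotationBbox

fig, ax = plt.subplots(figsize=(8, 3))
nx.draw_networkx_edges(G, pos, ax=ax)
for n in G:
    ab = AnnotationBbox(OffsetImage(G.nodes[n]['image'], zoom=0.1),
                        pos[n], frameon=False)  # xy is in data coordinates
    ax.add_artist(ab)
ax.axis('off')
plt.show()
</code></pre>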
<python><image><matplotlib><networkx>
2023-01-13 14:50:17
1
2,887
a.t.
75,110,765
3,896,008
Truncating osmnx by bbox (or polygon): How to create dummy nodes at boundaries?
<p>I am trying to truncate an osmnx graph by bbox. It works as per the documentation. The reproducible self-explanatory code is given below:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import osmnx as ox import geopandas as gpd import networkx as nx import matplotlib.pyplot as plt N, S, E, W = 1.3235381983186159, 1.319982801681384, \ 103.85361309942331 , 103.84833190057668, graph = ox.graph_from_bbox(N, S, E, W, \ network_type='drive') nodes= ox.graph_to_gdfs(graph, nodes=True, edges=False) edges= ox.graph_to_gdfs(graph, edges=True, nodes=False) fig, ax = ox.plot.plot_graph( graph, ax=None, figsize=(10, 10), bgcolor=&quot;white&quot;, node_color=&quot;red&quot;, node_size=5, node_alpha=None, node_edgecolor=&quot;none&quot;, node_zorder=1, edge_color=&quot;black&quot;, edge_linewidth=0.1, edge_alpha=None, show=False, close=False, save=False, bbox=None, ) W_ = W + (E-W) * 0.8 S_ = S + (N-S)*0.7 width = (E - W)*0.07 height = (N - S)*0.1 rect = plt.Rectangle((W_, S_), width, height, facecolor=&quot;green&quot;, alpha=0.3, edgecolor=None) ax.add_patch(rect) plt.show() g_truncated = ox.truncate.truncate_graph_bbox(graph, S_ + height, S_, W_+width, W_, truncate_by_edge=False) ox.plot_graph(g_truncated) </code></pre> <p>The bbox and the extracted graphs are shown below:</p> <p><a href="https://i.sstatic.net/lfmbkm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lfmbkm.png" alt="enter image description here" /></a></p> <p><a href="https://i.sstatic.net/VpBJim.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VpBJim.png" alt="enter image description here" /></a></p> <p>If I want to extract the subgraph such that I introduce dummy nodes at the boundaries, how can I do that? To be specific, I am trying to get a subgraph as it is visible in the picture. (i.e. a subgraph with 6 nodes in black as shown below:</p> <p><a href="https://i.sstatic.net/SRkyo.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SRkyo.jpg" alt="enter image description here" /></a></p> <p>Given the wide popularity of osmnx, does there exist a simple/straightforward way to acheive this?</p>
<python><graph><truncate><osmnx>
2023-01-13 14:50:08
1
1,347
lifezbeautiful
75,110,739
14,607,802
Python Request, UnicodeEncodeError: 'charmap' codec can't encode character '\u0421' in position 1228799: character maps to <undefined>
<p>I am trying to request some information from Coincodex via Python:</p> <pre><code>url = &quot;https://coincodex.com/apps/coincodex/cache/all_coins.json&quot; response = requests.get(url) data = json.loads(response.text.encode('utf-8')) print(data) </code></pre> <p>However, I keep getting the following error:<code>UnicodeEncodeError: 'charmap' codec can't encode character '\u0421' in position 1228799: character maps to &lt;undefined&gt;</code></p> <p>I have tried <code>text.encode</code> and <code>content.decode</code>, but I still can't find a solution that works for me.</p>
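<p>The <code>charmap</code> codec points at <code>print()</code> writing to a Windows cp1252 console, not at the download itself (an assumption based on the codec name); <code>response.json()</code> already decodes the payload. A sketch that avoids the console encoding by writing UTF-8 to a file:</p> <pre><code>import json
import requests

url = 'https://coincodex.com/apps/coincodex/cache/all_coins.json'
data = requests.get(url).json()

with open('all_coins.json', 'w', encoding='utf-8') as f:
    json.dump(data, f, ensure_ascii=False)
</code></pre>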
<python>
2023-01-13 14:47:20
1
817
Charmalade
75,110,689
3,591,044
Generating text word by word for transformers
<p>I’m currently using GPT-J for generating text as shown below. This works well but it takes up to 5 seconds to generate the 100 tokens.</p> <p>Is it possible to do the generation word by word or sentence by sentence? Similar to what ChatGPT is doing (ChatGPT seems to produce the output word by word).</p> <pre><code>import transformers from transformers import GPTJForCausalLM config = transformers.GPTJConfig.from_pretrained(&quot;EleutherAI/gpt-j-6B&quot;) tokenizer = transformers.AutoTokenizer.from_pretrained(&quot;EleutherAI/gpt-j-6B&quot;, pad_token='&lt;|endoftext|&gt;', eos_token='&lt;|endoftext|&gt;', truncation_side='left') model = GPTJForCausalLM.from_pretrained( &quot;EleutherAI/gpt-j-6B&quot;, revision=&quot;float16&quot;, torch_dtype=torch.float16, low_cpu_mem_usage=True, use_cache=True, gradient_checkpointing=True, ) model.to(&quot;cuda&quot;) prompt = tokenizer(&quot;This is a test sentence, which should be completed&quot;, return_tensors='pt', truncation=True, max_length=2000) prompt = {key: value.to(&quot;cuda&quot;) for key, value in prompt.items()} out = model.generate(**prompt, n=1, min_length=16, max_new_tokens=100, do_sample=True, top_k=15, top_p=0.9, batch_size=1, temperature=1, no_repeat_ngram_size=4, clean_up_tokenization_spaces=True, use_cache=True, pad_token_id=tokenizer.eos_token_id, ) res = tokenizer.decode(out[0]) </code></pre>
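<p>Newer transformers releases ship a <code>TextStreamer</code> that prints tokens as they are produced (whether it is available in the version installed here is an assumption — it arrived well after the releases current when this question was asked):</p> <pre><code>from transformers import TextStreamer

streamer = TextStreamer(tokenizer, skip_prompt=True)
out = model.generate(**prompt,
                     max_new_tokens=100,
                     do_sample=True,
                     streamer=streamer,  # tokens are printed as they are generated
                     pad_token_id=tokenizer.eos_token_id)
</code></pre>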
<python><huggingface-transformers><huggingface>
2023-01-13 14:43:17
0
891
BlackHawk
75,110,664
5,810,060
How to find best string match out of multiple possibilities in a dataframe?
<p>I have a DF that looks like this:</p> <pre><code> Row Master Option1 Option2 1 00150042 plc WAGON PLC wegin llp 2 01 telecom, ltd. 01 TELECOM LTD telecom 1 3 0404 investments limited 0404 Investments Ltd 404 Limited Investments </code></pre> <p>What I am trying to do is to compare the <code>option1</code> and <code>option2</code> columns to the master columns separately and obtain a similarity score for each.</p> <p>I have got the code that provides the score:</p> <pre class="lang-py prettyprint-override"><code> from difflib import SequenceMatcher def similar(a, b): return SequenceMatcher(None, a, b).ratio() </code></pre> <p>What I need help with is for the logic on how to implement this.</p> <p>Is it a for loop that will iterate over the Option1 and the master columns, get the score saved on a new column called Option1_score, and then do the same thing with the Option2 column?</p> <p>Any help is highly appreciated!</p>
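<p>A row-wise application per option column is enough here; a sketch using the <code>similar</code> helper above:</p> <pre><code>for col in ['Option1', 'Option2']:
    df[f'{col}_score'] = df.apply(
        lambda row: similar(row['Master'], row[col]), axis=1)
</code></pre>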
<python><python-3.x><pandas><for-loop><similarity>
2023-01-13 14:41:12
1
906
Raul Gonzales
75,110,631
15,673,412
python plotly - produce plot upon two clicks
<p>Let's suppose I have a plotly <code>graph_objects.Figure</code> object containing a scatterplot:</p> <pre><code>y = np.random.rand(10) x = np.random.rand(10) names = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'L'] fig = go.FigureWidget() fig.add_trace(go.Scatter( x=x, y=y, name='Base scatter', mode='markers', marker=dict(size=7), customdata = names, hovertemplate= '&lt;b&gt;x:&lt;/b&gt; %{x}&lt;br&gt;&lt;b&gt;y:&lt;/b&gt; %{y}&lt;br&gt;&lt;b&gt;Name:&lt;/b&gt; %{customdata}&lt;br&gt;')) fig.update_layout(title='Test') fig.show() </code></pre> <p>Which produces the following image: <a href="https://i.sstatic.net/tFG3N.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tFG3N.png" alt="enter image description here" /></a></p> <p>I would like that upon click on <strong>two</strong> different points, a new temporary plot appears with information about the two points (e.g. those two points in another canvas connected by a line).</p> <p>I've looked into <a href="https://plotly.com/python/click-events/" rel="nofollow noreferrer">click handlers</a> but I can't figure out how to make it work.</p> <p>Thanks</p>
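<p>With a <code>FigureWidget</code> in a live notebook you can register a click handler on the trace and act once two point indices have been collected; a sketch that draws the temporary pair as a new trace on the same figure:</p> <pre><code>clicked = []

def handle_click(trace, points, selector):
    if not points.point_inds:
        return
    clicked.append(points.point_inds[0])
    if len(clicked) == 2:
        i, j = clicked
        fig.add_trace(go.Scatter(x=[x[i], x[j]], y=[y[i], y[j]],
                                 mode='lines+markers', name='selected pair'))
        clicked.clear()

fig.data[0].on_click(handle_click)
fig  # display the widget; clicks only fire in a notebook front end
</code></pre>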
<python><onclick><plotly><plotly.graph-objects>
2023-01-13 14:37:45
0
480
Sala
75,110,547
815,653
Tensorflow's random.truncated_normal returns different results with the same seed
<p>The following lines are supposed to get the same result:</p> <pre class="lang-py prettyprint-override"><code>import tensorflow as tf print(tf.random.truncated_normal(shape=[2],seed=1234)) print(tf.random.truncated_normal(shape=[2],seed=1234)) </code></pre> <p>But I got:</p> <pre class="lang-py prettyprint-override"><code>tf.Tensor([-0.12297685 -0.76935077], shape=(2,), dtype=tf.float32) tf.Tensor([0.37034193 1.3367208 ], shape=(2,), dtype=tf.float32) </code></pre> <p>Why?</p>
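<p>This is documented behaviour: the op-level <code>seed</code> is combined with the global seed plus an internal counter, so successive calls deliberately differ. To replay the same numbers, reset the global seed before each call, or use the stateless RNG, which is a pure function of its seed:</p> <pre><code>import tensorflow as tf

tf.random.set_seed(42)
print(tf.random.truncated_normal(shape=[2], seed=1234))
tf.random.set_seed(42)  # resetting replays the same sequence
print(tf.random.truncated_normal(shape=[2], seed=1234))

# stateless variant: identical output for identical seeds
print(tf.random.stateless_truncated_normal(shape=[2], seed=[12, 34]))
print(tf.random.stateless_truncated_normal(shape=[2], seed=[12, 34]))
</code></pre>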
<python><tensorflow><machine-learning><tensor><random-seed>
2023-01-13 14:29:34
2
10,344
zell
75,110,411
11,622,712
Group datetime values by datetime ranges and calculate the min and max values per range
<p>I have the following DatetimeIndex values:</p> <pre><code>DatetimeIndex(['2021-01-18 01:32:00', '2021-01-18 01:33:00', '2021-01-18 01:34:00', '2021-01-18 01:35:00', '2021-01-18 01:36:00', '2021-01-18 01:37:00', '2021-12-16 12:07:00', '2021-12-16 12:08:00', '2021-12-16 12:09:00', '2021-12-16 12:10:00'], dtype='datetime64[ns]', length=10, freq=None) </code></pre> <p>I need to group them by datetime ranges and calculate the min and max values per range.</p> <p>This is the expected result:</p> <pre><code>range range_min range_max 1 2021-01-18 01:32:00 2021-01-18 01:37:00 2 2021-12-16 12:07:00 2021-12-16 12:10:00 </code></pre> <p>How can I do it?</p> <p>I can get min and max across the complete set of values of timestamps, but I don't know how to group timestamps into ranges.</p> <pre><code>import numpy as np import pandas as pd pd.DataFrame(my_timestamps,columns=[&quot;timestamp&quot;]).agg({&quot;timestamp&quot; : [np.min, np.max]}) </code></pre>
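<p>Start a new group whenever the gap to the previous timestamp exceeds a threshold, then aggregate per group; a sketch (the one-minute threshold is an assumption based on the spacing in the data above):</p> <pre><code>import pandas as pd

s = pd.Series(my_timestamps)
group_id = s.diff().gt(pd.Timedelta(minutes=1)).cumsum()
result = s.groupby(group_id).agg(range_min='min', range_max='max')
</code></pre>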
<python><datetime>
2023-01-13 14:19:05
2
2,998
Fluxy
75,110,264
7,698,116
How to perform a synchronous task in an asynchronous FastAPI REST endpoint without blocking the event loop?
<p>I have an application setup using FastAPI and Celery for some CPU intensive tasks (basically some synchronous functions defined). One of the REST endpoints is <code>request_data</code> defined as <code>async</code> that users can call to request data. It provides an optional parameter <code>force</code> that if <code>False</code> would simply return the data from Cassandra. But, if <code>force</code> is <code>True</code>, then the REST endpoint has to perform the CPU intensive task in sync and return the results back in the same request.</p> <p>Here my problem is that, since the function is defined as <code>async</code>, calling the CPU intensive task would block the event loop and hamper FastAPI from handling other requests.</p> <p>I tried to look for following:</p> <ul> <li>Define two functions (one sync and one <code>async</code>). If FastAPI would allow me to add a middleware that can route my request to respective function but it doesn't seem possible.</li> <li>Submit the task to celery in <code>async</code> fashion. But it doesn't seem I can <code>await</code> on the result and so waiting otherwise would simply block the event loop.</li> <li>As a last option, I can make the REST endpoint function as sync but wanted to avoid that. Reason is that I am using Cassandra that provides <code>async</code> support via <code>execute_async</code> on which I can <code>await</code>. Making the REST endpoint function sync would take away <code>async</code> benefits on Cassandra IO.</li> </ul> <p>Does any one has any suggestions on this? I understand that calling a synchronous function in an asynchronous function would eventually mean that event loop would have to perform the CPU intensive task. My only concern is how to do that without blocking the event loop—in other words, without blocking FastAPI from handling other requests?</p> <pre class="lang-py prettyprint-override"><code>from fastapi import FastAPI import cassandra_wrapper # Class wrapper providing access to Cassandra database app = FastAPI() @app.get(&quot;/data&quot;) async def request_data(force=False): if force: # perform computations and return data return cpu_intensive_task() else: return cassandra_wrapper.get_data() # Can call this celery task to do computations @celery.task(bind=True, name=&quot;cpu_intensive_task&quot;) def cpu_intensive_celery_task(): return cpu_intensive_task # Or can call this function directly to do computations def cpu_intensive_task(): import time time.sleep(5) </code></pre>
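<p>For a CPU-bound task the usual answer is to hand the call to a process pool via <code>run_in_executor</code>, which gives back an awaitable and leaves the event loop free (a thread pool would still contend for the GIL). A sketch:</p> <pre><code>import asyncio
from concurrent.futures import ProcessPoolExecutor

process_pool = ProcessPoolExecutor()

@app.get('/data')
async def request_data(force: bool = False):
    if force:
        loop = asyncio.get_running_loop()
        # runs in a separate process; the event loop keeps serving requests
        return await loop.run_in_executor(process_pool, cpu_intensive_task)
    return cassandra_wrapper.get_data()
</code></pre>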
<python><asynchronous><concurrency><celery><fastapi>
2023-01-13 14:06:00
0
368
ATK
75,110,240
9,805,238
Comparing PDF files with varying degrees of strictness
<p>I have two folders, each including ca. 100 PDF files resulting from different runs of the same PDF generation program. After performing some changes to this program, the resulting PDF should always stay equal and nothing should break the layout, the fonts, any potential graphs and so on. This is why I would like to check for visual equality while ignoring any metadata that might have changed due to running the program at different times.</p> <p>My first approach was based on <a href="https://www.geeksforgeeks.org/check-if-two-pdf-documents-are-identical-with-python/" rel="nofollow noreferrer">this post</a> and attempted to compare the hashes of each file:</p> <pre><code>h1 = hashlib.sha1() h2 = hashlib.sha1() with open(fileName1, &quot;rb&quot;) as file: chunk = 0 while chunk != b'': chunk = file.read(1024) h1.update(chunk) with open(fileName2, &quot;rb&quot;) as file: chunk = 0 while chunk != b'': chunk = file.read(1024) h2.update(chunk) return (h1.hexdigest() == h2.hexdigest()) </code></pre> <p>This always returns &quot;False&quot;. I assume that this is due to different time dependent metadata, which is why I would like to ignore them. I've already found a way to set the modification and creation data to &quot;None&quot;:</p> <pre><code>pdf1 = pdfrw.PdfReader(fileName1) pdf1.Info.ModDate = pdf1.Info.CreationDate = None pdfrw.PdfWriter().write(fileName1, pdf1) pdf2 = pdfrw.PdfReader(fileName2) pdf2.Info.ModDate = pdf2.Info.CreationDate = None pdfrw.PdfWriter().write(fileName2, pdf2) </code></pre> <p>Looping through all files in each folder and running the second method before the first curiously sometimes results in a return value of &quot;True&quot; and sometimes in a return value of &quot;False&quot;.</p> <p>Thanks to the kind help of @jorj-mckie (see answer below), I've the following methods checking for xref equality:</p> <pre><code>doc1 = fitz.open(fileName1) xrefs1 = doc1.xref_length() # cross reference table 1 doc2 = fitz.open(fileName2) xrefs2 = doc2.xref_length() # cross reference table 2 if (xrefs1 != xrefs2): print(&quot;Files are not equal&quot;) return False for xref in range(1, xrefs1): # loop over objects, index 0 must be skipped # compare the PDF object definition sources if (doc1.xref_object(xref) != doc2.xref_object(xref)): print(f&quot;Files differ at xref {xref}.&quot;) return False if doc1.xref_is_stream(xref): # compare binary streams stream1 = doc1.xref_stream_raw(xref) # read binary stream try: stream2 = doc2.xref_stream_raw(xref) # read binary stream except: # stream extraction doc2 did not work! print(f&quot;stream discrepancy at xref {xref}&quot;) return False if (stream1 != stream2): print(f&quot;stream discrepancy at xref {xref}&quot;) return False return True </code></pre> <p>and xref equality without metadata:</p> <pre><code>doc1 = fitz.open(fileName1) xrefs1 = doc1.xref_length() # cross reference table 1 doc2 = fitz.open(fileName2) xrefs2 = doc2.xref_length() # cross reference table 2 info1 = doc1.xref_get_key(-1, &quot;Info&quot;) # extract the info object info2 = doc2.xref_get_key(-1, &quot;Info&quot;) if (info1 != info2): print(&quot;Unequal info objects&quot;) return False if (info1[0] == &quot;xref&quot;): # is there metadata at all? 
info_xref1 = int(info1[1].split()[0]) # xref of info object doc1 info_xref2 = int(info2[1].split()[0]) # xref of info object doc2 else: info_xref1 = 0 info_xref2 = 0 for xref in range(1, xrefs1): # loop over objects, index 0 must be skipped # compare the PDF object definition sources if (xref != info_xref1): if (doc1.xref_object(xref) != doc2.xref_object(xref)): print(f&quot;Files differ at xref {xref}.&quot;) return False if doc1.xref_is_stream(xref): # compare binary streams stream1 = doc1.xref_stream_raw(xref) # read binary stream try: stream2 = doc2.xref_stream_raw(xref) # read binary stream except: # stream extraction doc2 did not work! print(f&quot;stream discrepancy at xref {xref}&quot;) return False if (stream1 != stream2): print(f&quot;stream discrepancy at xref {xref}&quot;) return False return True </code></pre> <p>If I run the last two functions on my PDF files, whose timestamps have already been set to &quot;None&quot; (see above), I end up with some equality checks resulting in a &quot;True&quot; return value and others resulting in &quot;False&quot;.</p> <p>I'm using the <a href="https://docs.reportlab.com/" rel="nofollow noreferrer">reportlab library</a> to generate the PDFs. Do I just have to live with the fact that some PDFs will always have a different internal structure, resulting in different hashes even if the files look exactly the same? I would be very happy to learn that this is not the case and there is indeed a way to check for equality without actually having to export all pages to images first.</p>
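<p>If the xref-level comparison keeps flagging harmless structural differences, rendering the pages in memory with PyMuPDF and comparing the raw pixel buffers checks visual equality directly, without exporting image files. A sketch (the <code>dpi</code> keyword needs a reasonably recent PyMuPDF):</p> <pre><code>import fitz  # PyMuPDF

def pages_look_equal(fileName1, fileName2, dpi=72):
    doc1, doc2 = fitz.open(fileName1), fitz.open(fileName2)
    if doc1.page_count != doc2.page_count:
        return False
    for page1, page2 in zip(doc1, doc2):
        pix1 = page1.get_pixmap(dpi=dpi)
        pix2 = page2.get_pixmap(dpi=dpi)
        if pix1.samples != pix2.samples:  # raw bytes of the rendered pages
            return False
    return True
</code></pre>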
<python><pdf><hash><pdfcompare>
2023-01-13 14:03:53
2
3,730
Hagbard
75,110,202
2,706,344
Why does the filtering by notnull() not work?
<p>I have a DataFrame <code>cMean</code>. It originates from resampling some data. It contains many NaN values and I wanted to get rid of them, so I tried <code>cMean[cMean.notnull()]</code>. However, they still show up: <a href="https://i.sstatic.net/xpXgy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xpXgym.png" alt="enter image description here" /></a></p> <p>Can you explain what is going on here? It seems <code>cMean.notnull()</code> works correctly, as you can see here: <a href="https://i.sstatic.net/xxpPD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xxpPDm.png" alt="enter image description here" /></a></p>
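<p>Indexing a DataFrame with a boolean DataFrame keeps the original shape: cells where the mask is <code>False</code> simply come back as NaN, which is why nothing seems to change. To actually drop them, use <code>dropna</code>; a sketch:</p> <pre><code>cMean = cMean.dropna()           # drop rows containing any NaN
# or, only drop rows that are NaN in every column:
cMean = cMean.dropna(how='all')
</code></pre>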
<python><pandas>
2023-01-13 13:59:49
1
4,346
principal-ideal-domain
75,110,151
8,170,368
How to make a module function use a locally named variable as a default parameter?
<p>I use the following function in several scripts in my project folder:</p> <pre><code>verbose_print = config[&quot;verbose&quot;].getboolean('print') def verbose(text, *args, **kwargs): if verbose_print: print(text, *args, **kwargs) return None </code></pre> <p><code>verbose_print</code> is a boolean value I get from a single config file when executing the script (every script shares the same config file).</p> <p>This project folder has a &quot;helper functions&quot; <code>lib</code> folder which acts as a local package of modules.</p> <p>Since it is rather ugly to define this same function at the top of every script (after <code>verbose_print</code> has been established), I wonder if there's any clean and hopefully elegant way to add it to the <code>lib</code> folder to make it use the <code>verbose_print</code> variable when imported to any of these scripts. Using <code>locals()</code> and searching for a variable with that desired name is the only solution I can think of, but I'm looking for better, cleaner solutions if possible so I can leave a clean codebase for when I quit.</p> <p>P.s.: it's a small startup data science team, so I'm not worried about &quot;best practice&quot; methods but more about cleanliness, readability, convenience. Suggestions are still appreciated, however.</p>
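<p>One clean option is to give the helper module its own state plus a one-line configure call per script, so the function never has to discover a caller-local variable; a sketch (<code>lib/verbose.py</code> is a hypothetical module name):</p> <pre><code># lib/verbose.py
_enabled = False

def configure(flag: bool) -&gt; None:
    global _enabled
    _enabled = flag

def verbose(text, *args, **kwargs):
    if _enabled:
        print(text, *args, **kwargs)
</code></pre> <p>Each script then runs <code>configure(config['verbose'].getboolean('print'))</code> once after loading the shared config, and imports <code>verbose</code> from the package everywhere else.</p>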
<python><function>
2023-01-13 13:54:49
1
388
mariogarcc
75,110,046
353,337
Function in Python list comprehension, don't eval twice
<p>I'm composing a Python list from an input list run through a transforming function. I would like to include only those items in the output list for which the result isn't <code>None</code>. This works:</p> <pre class="lang-py prettyprint-override"><code>def transform(n): # expensive irl, so don't execute twice return None if n == 2 else n**2 a = [1, 2, 3] lst = [] for n in a: t = transform(n) if t is not None: lst.append(t) print(lst) </code></pre> <pre><code>[1, 9] </code></pre> <p>I have a hunch that this can be simplified with a comprehension. However, the straighforward solution</p> <pre class="lang-py prettyprint-override"><code>def transform(n): return None if n == 2 else n**2 a = [1, 2, 3] lst = [transform(n) for n in a if transform(n) is not None] print(lst) </code></pre> <p>is no good since <code>transform()</code> is applied twice to each entry. Any way around this?</p>
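<p>If you are on Python 3.8+, an assignment expression does exactly this in one pass; mapping first is the equivalent without the walrus operator:</p> <pre><code>lst = [t for n in a if (t := transform(n)) is not None]

# equivalent without the walrus operator:
lst = [t for t in map(transform, a) if t is not None]
</code></pre>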
<python><list-comprehension>
2023-01-13 13:45:44
2
59,565
Nico Schlömer
75,110,008
12,596,824
Assign column adding columns in pandas dynamically (method chaining in python)
<p>I want to create a new column named total which adds all the year columns (everything in these columns is an integer). I want to do it dynamically because as each year passes there will be a new column (for example 2024).</p> <p>How can I do this in Python using method chaining and the assign operator?</p> <pre><code>id name 2018 2019 2020 2021 2022 type 1 John 0 1 0 0 2 A 2 Bill 1 5 4 0 0 B 3 Tom 0 0 2 0 5 B 4 Mary 0 1 1 0 0 A </code></pre> <p><strong>Expected Output:</strong></p> <pre><code>id name 2018 2019 2020 2021 2022 type total 1 John 0 1 0 0 2 A 3 2 Bill 1 5 4 0 0 B 10 3 Tom 0 0 2 0 5 B 7 4 Mary 0 1 1 0 0 A 2 </code></pre> <p>I have this solution but I don't like it; is there a more elegant way of writing this code?</p> <p><strong>Temporary Solution:</strong></p> <pre><code>( df .assign(Total = lambda x: x['2018'] + x['2019'] + x['2020'] + x['2021'] + x['2022']) ) </code></pre>
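<p>A sketch that keeps the chain but selects the year columns by pattern, assuming the year labels are strings as in <code>x['2018']</code> above:</p> <pre><code>df = df.assign(total=lambda d: d.filter(regex=r'^\d{4}$').sum(axis=1))
</code></pre>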
<python><pandas><method-chaining>
2023-01-13 13:41:59
3
1,937
Eisen
75,109,888
2,290,493
How to decorate iterables with an error handler?
<p>Suppose we have two kinds of methods: one returns a list, the other returns an iterator. So they are very comparable in the sense that both return values are iterable.</p> <p>I'd like to write a decorator that catches errors inside the iteration. The problem is that the iterator is returned without iteration and so no errors will be caught.</p> <p>In the below code, the <code>wrapped_properly</code> decorator works around the issue by providing two separate wrappers, a default one (<code>wrapper</code>) and one specifically for generator functions (<code>generatorfunctionwrapper</code>). The approach feels quite complicated and verbose.</p> <pre><code>from inspect import isgeneratorfunction from functools import wraps def failing_generator(): for i in range(1, 5): if i % 2 == 0: print('I dont like even numbers.') raise ValueError(i) yield i def wrapped_badly(fn): @wraps(fn) def wrapper(*args, **kwargs): try: return fn(*args, **kwargs) except ValueError as err: print('Not to worry.') return wrapper def wrapped_properly(fn): @wraps(fn) def wrapper(*args, **kwargs): try: return fn(*args, **kwargs) except ValueError as err: print('Not to worry.') @wraps(fn) def generatorfunctionwrapper(*args, **kwargs): try: yield from fn(*args, **kwargs) except ValueError as err: print('Not to worry.') if isgeneratorfunction(fn): return generatorfunctionwrapper else: return wrapper for x in wrapped_properly(failing_generator)(): print(x) # Prints: # 1 # I dont like even numbers. # Not to worry. for x in wrapped_badly(failing_generator)(): print(x) # Prints: # 1 # I dont like even numbers. # Traceback (most recent call last): # ... # ValueError: 2 </code></pre> <p>Is there a better/more pythonic way to do this?</p>
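<p>If callers only ever iterate the result (and never need an actual list object back), a single wrapper can cover both kinds of functions by returning a generator that lazily iterates whatever <code>fn</code> returns; a sketch, with the caveat that list-returning functions now hand back a generator:</p> <pre><code>from functools import wraps

def wrapped_uniformly(fn):
    @wraps(fn)
    def wrapper(*args, **kwargs):
        def safe_iter():
            try:
                yield from fn(*args, **kwargs)  # works for lists and generators alike
            except ValueError:
                print('Not to worry.')
        return safe_iter()
    return wrapper
</code></pre>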
<python><error-handling><generator><decorator><iterable>
2023-01-13 13:30:07
1
846
Paul
75,109,865
12,011,020
Python: kedro viz SQLAlchemy DeprecationWarning
<p>I tried to work with kedro and started with the spaceflight tutorial. I installed the src/requirements.txt in a .venv. When running <code>kedro viz </code>(or <code>kedro run</code> or even <code>kedro --version</code>), I get lets of Deprecation Warnings. One of which is the following (relating to kedro viz)</p> <pre><code>kedro_viz\models\experiment_tracking.py:16: MovedIn20Warning: [31mDeprecated API features warnings.py:109 detected! These feature(s) are not compatible with SQLAlchemy 2.0. [32mTo prevent incompatible upgrades prior to updating applications, ensure requirements files are pinned to &quot;sqlalchemy&lt;2.0&quot;. [36mSet environment variable SQLALCHEMY_WARN_20=1 to show all deprecation warnings. Set environment variable SQLALCHEMY_SILENCE_UBER_WARNING=1 to silence this message.[0m (Background on SQLAlchemy 2.0 at: https://sqlalche.me/e/b8d9) Base = declarative_base() </code></pre> <h2>Context</h2> <p>This is a minor issue, but ofc I would like to setup the project to be as clean as possible.</p> <h2>Steps to Reproduce</h2> <ol> <li>Setup a fresh kedro installation (Version 0.18.4)</li> <li>Create a .venv and install the standard requirements</li> <li>Run any kedro command (e.g. <code>kedro --version</code>)</li> </ol> <h2>What I've tried</h2> <p>I tried to put <code>sqlalchemy&lt;=2.0</code> in the requirements.txt and again run <code>pip install -r src/requirements.txt</code>, but that did not resolve it. Double checked with <code>pip freeze</code> that the following version of SQLAlchemy is installed: <code>SQLAlchemy==1.4.46</code></p>
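<p>The banner itself names the escape hatches for staying on 1.4: keep <code>SQLAlchemy&lt;2.0</code> pinned (already the case with 1.4.46) and set <code>SQLALCHEMY_SILENCE_UBER_WARNING=1</code> before SQLAlchemy is first imported. Exporting it in the shell before running <code>kedro viz</code> is simplest; a Python sketch (whether any given entry point runs early enough is an assumption):</p> <pre><code>import os
os.environ.setdefault('SQLALCHEMY_SILENCE_UBER_WARNING', '1')
# must execute before the first `import sqlalchemy` in the process
</code></pre>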
<python><sqlalchemy><dependencies><kedro>
2023-01-13 13:28:23
1
491
SysRIP
75,109,854
3,105,485
How to get the precise position of an error within the line in Python
<p>How to get the precise position of an error within the line in Python? The Python interpreter gives the line of the error and the type of the Error, but if there are more points in the line that could cause that error then there is ambiguity, here is a toy example:</p> <p><code>example.py</code></p> <pre><code>xs = [] ys = {&quot;item&quot;: xs} zs= {&quot;item&quot;:ys} print(zs['item']['item']['item']) </code></pre> <p>Where the error is:</p> <pre><code>Traceback (most recent call last): File &quot;p.py&quot;, line 4, in &lt;module&gt; print(zs['item']['item']['item']) TypeError: list indices must be integers or slices, not str </code></pre> <p>Here, considering that <code>xs</code>, <code>ys</code> and <code>zs</code> could be the result of long computation, it could not be clear which one of the <code>['item']</code> triggered the <code>TypeError</code>.</p> <p>I would prefer an error message like:</p> <pre><code>Traceback (most recent call last): File &quot;p.py&quot;, line 4, in &lt;module&gt; print(zs['item']['item']['item']) ^------- TypeError: list indices must be integers or slices, not str </code></pre> <p>That tells me that the problem is in the last accessing with <code>['item']</code>.</p> <p>I am using Python 3.8.16</p>
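<p>Two practical options: upgrade to Python 3.11, whose tracebacks underline the exact failing subexpression (PEP 657), or unchain the subscripts so the line number alone disambiguates; a sketch of the latter, which works on 3.8:</p> <pre><code>level1 = zs['item']
level2 = level1['item']
level3 = level2['item']  # the traceback now names the exact failing step
print(level3)
</code></pre>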
<python><exception><runtime-error><error-messaging>
2023-01-13 13:27:40
1
6,751
Caridorc
75,109,379
12,018,177
Why is the PyTorch Dataset class not returning a list?
<p>I am trying to use torch.utils.Dataset on a custom dataset. In my dataset, in a single row I have a list of 10 images like as follow:</p> <pre><code>| word | images | gold_image | |:-----|:-------|:-----------| |'andromeda'|['image.1.jpg','image.2.jpg','image.3.jpg']|[0,0,1]| </code></pre> <p>I expect to return batch from dataloader like this, with batch_size=4</p> <pre><code>('word_1', 'word_2', 'word_3', 'word_4'), ([image_1,image_2,image_3],[image_4,image_5,image_6],[image_7,image_8,image_9], [image_10,image11,image_12]), ([0,0,1],[1,0,0],[0,1,0],[0,1,0]) </code></pre> <p>But, I am getting like this,</p> <pre><code>('word_1', 'word_2', 'word_3', 'word_4'), [(image_1,image_2,image_3,image_4),(image_5,image_6,image_7,image_8), (image_9,image_10,image_11,image_12)], [(0,1,0,0),(1,0,0,0),(0,1,0,1)] </code></pre> <p>Here is my code:</p> <pre><code>class ImageTextDataset(Dataset): def __init__(self, data_dir, train_df, tokenizer, feature_extractor, data_type,device, text_augmentation=False): self.data_dir = data_dir if data_type == &quot;train&quot;: # this is for the original train set of the task # reshape all images to size [1440,1810] self.tokenizer = tokenizer self.feature_extractor=feature_extractor self.transforms = transforms.Compose([transforms.Resize([512,512]),transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]) self.all_image_names = list(train_df['images']) self.keywords = list(train_df['word']) self.context = list(train_df['description']) self.gold_images = list(train_df['gold_image']) def __len__(self): return len(self.context) def __getitem__(self, idx): context = self.context[idx] # print(context) keyword = self.keywords[idx] #loading images label = [] images = self.all_image_names[idx] image = [] for i, img in enumerate(images): path = os.path.join(self.data_dir, &quot;trial_images_v1&quot;, img) img = Image.open(path) if img.mode != &quot;RGB&quot;: img = img.convert('RGB') img = self.transforms(img) image.append(img) label.append(1.0) if img == self.gold_images[idx] else label.append(0.0) # sample = {'context':context, 'images': images, 'label': label} return (context, image, label) </code></pre> <p>I can't figure it out what is the issue. Can anyone help?</p> <p>TIA.</p>
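<p>The transposition is done by the DataLoader's default <code>collate_fn</code>, not by the <code>torch.utils.data.Dataset</code> itself; passing a custom collate function preserves the per-sample structure. A sketch (<code>my_dataset</code> being an instance of the class above):</p> <pre><code>from torch.utils.data import DataLoader

def collate(batch):
    # group by field, keeping each sample's image list and label list whole
    contexts, images, labels = zip(*batch)
    return list(contexts), list(images), list(labels)

loader = DataLoader(my_dataset, batch_size=4, collate_fn=collate)
</code></pre>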
<python><pytorch><dataset><huggingface-datasets>
2023-01-13 12:42:55
1
383
Shantanu Nath
75,109,307
7,497,912
How to open a document from a Notes View with python noteslib?
<p>I have an established connection with a Notes database and I am able to loop through all the records in a view. What I am curious about is whether it is possible to open a document and get the data from it using Python (like double-clicking on a record from an HCL Notes client). Here is my code, simplified:</p> <pre><code>import noteslib db = noteslib.Database('my-domino-server','my-db.nsf', 'mypassword') view = db.GetView('my_view') doc = view.GetFirstDocument() while doc: print(doc.ColumnValues) #here, after printing the column values, I want to open the document and store its values in a variable. doc = view.GetNextDocument(doc) </code></pre> <p>I tried googling about LotusScript and I found the Open() method, but doc.Open() did not work.</p>
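<p>noteslib hands back COM-wrapped <code>NotesDocument</code> objects, so the LotusScript properties and methods are available directly; <code>GetItemValue</code> reads a field without any UI-style &quot;open&quot;. A sketch (the field names are hypothetical):</p> <pre><code>while doc:
    subject = doc.GetItemValue('Subject')  # returns a tuple of the item's values
    body = doc.GetItemValue('Body')
    print(subject, body)
    doc = view.GetNextDocument(doc)
</code></pre>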
<python><lotus-notes><hcl-notes>
2023-01-13 12:37:30
1
417
Looz
75,108,992
7,558,835
Is it possible to have a Spark DataFrame partitioned by multiple columns, and at the same time partitioned by all the individual columns?
<p>To get more efficient joins in pyspark, I would like to repartition my dataframes on multiple columns at the same time.</p> <p>This is not what the <code>repartition</code> function already does. For example, if I am partitioning on columns 'c1' and 'c2', the <code>reparition</code> function only ensures that all rows with the pairs of values <code>(c1, c2)</code> fall in the same partition. Instead, I would like to have a partitioning that ensures that that all rows with the same value of <code>c1</code> fall on the same parition, and the same for <code>c2</code>.</p> <p>With this, I would like to optimize my pipeline when doing a join on <code>c1</code> and then another join on <code>c2</code>, without having to reparition (implicitly or explicitely) 2 times.</p> <p>Is it possible to achieve this?</p>
<python><python-3.x><apache-spark><pyspark><apache-spark-sql>
2023-01-13 12:07:39
0
1,164
Diego Palacios
75,108,952
8,661,471
Converting Matplotlib axis to log only updates labels and not step spacing
<p>I am trying to scale the space on the vertical axis here so it is spaced logarithmically.</p> <p>After searching the internet the proposed solution was</p> <pre><code>ax.set_zscale('log') </code></pre> <p>After trying that you can see the result below that only the labels where changed and not the actual spacings.</p> <p><strong>Before</strong></p> <p><strong><a href="https://i.sstatic.net/JPceK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JPceK.png" alt="Before" /></a></strong></p> <p><strong>After</strong></p> <p><a href="https://i.sstatic.net/PIEDl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PIEDl.png" alt="Chart" /></a></p>
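<p>mplot3d's log-scale support is known to be spotty; a common workaround is to plot <code>log10</code> of the data and relabel the z ticks, so spacing and labels agree. A sketch (assuming an <code>Axes3D</code> named <code>ax</code> and surface data <code>X, Y, Z</code> with <code>Z &gt; 0</code>):</p> <pre><code>import numpy as np
import matplotlib.ticker as mticker

ax.plot_surface(X, Y, np.log10(Z))  # plot in log10 space

def log_tick_formatter(val, pos=None):
    return f'$10^{{{int(val)}}}$'

ax.zaxis.set_major_formatter(mticker.FuncFormatter(log_tick_formatter))
ax.zaxis.set_major_locator(mticker.MaxNLocator(integer=True))
</code></pre>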
<python><matplotlib><data-analysis>
2023-01-13 12:03:46
1
694
codeThinker123
75,108,872
17,124,619
Invalid identifier in sql create table query
<p>I understand from online articles that this issue is related to case-sensitivity with oracle sql. However, I cannot see the error in my query: For example:</p> <pre><code>ENGINE = create_engine('credentials', pool_pre_ping = True) connection = ENGINE.raw_connection() cursor = connection.cursor() cursor.execute(&quot;CREATE TABLE US (NAME VARCHAR2(255), ACTION_NAME VARCHAR2(255), CONNECTORNAME VARCHAR2(255), TRIGGER VARCHAR2(255), UPDATEDBY DATE, IDSET VARCHAR2(255), CONNECTORID VARCHAR2(255))&quot;) </code></pre> <p>Will give the error:</p> <blockquote> <p>cx_Oracle.DatabaseError: ORA-00904: : invalid identifier</p> </blockquote>
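<p><code>TRIGGER</code> is a reserved word in Oracle, so the parser stops at that column and reports ORA-00904. Renaming the column is the least painful fix (quoting it as <code>&quot;TRIGGER&quot;</code> would also work but makes the name permanently case-sensitive); a sketch:</p> <pre><code>cursor.execute(&quot;&quot;&quot;
    CREATE TABLE US (
        NAME          VARCHAR2(255),
        ACTION_NAME   VARCHAR2(255),
        CONNECTORNAME VARCHAR2(255),
        TRIGGER_NAME  VARCHAR2(255),
        UPDATEDBY     DATE,
        IDSET         VARCHAR2(255),
        CONNECTORID   VARCHAR2(255)
    )&quot;&quot;&quot;)
</code></pre>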
<python><sql><cx-oracle>
2023-01-13 11:55:51
0
309
Emil11
75,108,871
8,489,687
How to read the logs of the DagFile Processor Process in Airflow?
<p>I have a python file that generates logs dynamically, reading from a table in a database. I always edit this file blindly because I can't debug the execution of it.</p> <p>I know Airflow triggers a subprocess to process this file (the <code>DagFileProcessorProcess</code>), I just want to be able to read the logs of this process to debug it. I've already tried changing the <code>logging.dag_processor_log_target</code> config to stdout and changing the log location as well with <code>logging.dag_processor_manager_log_location</code>. Nothing worked, I can just read scheduler logs and task execution logs.</p> <p>I'm using Airflow 2.2.5, running scheduler + webserver locally.</p>
<python><logging><airflow>
2023-01-13 11:55:50
1
355
Yago Dórea
75,108,809
1,668,622
How can a pre-built Python-installation be modified to work in another directory?
<p>For a project shipping with a pre-built customized Python distribution I need to be able to compile packages from source using <code>pip</code> (within the installed environment).</p> <p>This is what the file system structure for two installations of the final product might look like:</p> <pre><code>/opt ├── my-program-v1 │   ├── some-files │   ├── custom-python-3.9 ├── my-program-v2 │   ├── some-files │   ├── custom-python-3.11 </code></pre> <p>Since the readily installed program (together with its Python installation) might be installed to any directory, <code>pip</code> needs a way to find the header files for the used Python installation.</p> <p>Just copying the whole Python installation to the desired directory will result in errors when trying to <code>pip install</code> a package that needs to be built from source (e.g. <code>ibm_db</code>):</p> <pre><code>$ python3 -m pip install ibm_db Collecting ibm_db Downloading ibm_db-3.1.4.tar.gz (1.4 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.4/1.4 MB 9.5 MB/s eta 0:00:00 Installing build dependencies ... done Getting requirements to build wheel ... error error: subprocess-exited-with-error × Getting requirements to build wheel did not run successfully. │ exit code: 1 ╰─&gt; [8 lines of output] Detected 64-bit Python Detected platform = linux, uname = x86_64 Downloading https://public.dhe.ibm.com/ibmdl/export/pub/software/data/db2/drivers/odbc_cli/linuxx64_odbc_cli.tar.gz Downloading DSDriver from url = https://public.dhe.ibm.com/ibmdl/export/pub/software/data/db2/drivers/odbc_cli/linuxx64_odbc_cli.tar.gz Pre-requisite check [gcc] : Passed No Python.h header file detected. Please install python-devel or python3-devel as instructed in &quot;https://github.com/ibmdb/python-ibmdb/blob/master/README.md#KnownIssues&quot; and continue with the installation [end of output] </code></pre> <p>Usually, when you build Python from source you would set the final destination folder using <code>configure --prefix=/opt/my-program-v2</code>, which would be stored inside the sysconfigdata of that readily built Python installation.</p> <p>But how can this be done when you need to pre-build Python and don't know the installation directory in advance?</p> <p>One possible way is to manually modify the generated sysconfig-data, (usually found in <code>lib/&lt;python-version&gt;/_sysconfigdata__linux_x86_64-linux-gnu.py</code>) but this is not fun (due to the used format) and introduces a step outside of the CI.</p> <p>How do others do that? Is there an intended way to place/relocate a prebuilt installation of python?</p>
<python><pip><distutils>
2023-01-13 11:50:06
0
9,958
frans
75,108,790
4,451,521
How to convert a difference in timestamp to milliseconds?
<p>I have two dates as timestamps and their difference is</p> <pre><code>dt 0.006951093673706055 </code></pre> <p>dt is: <code>(1669983551.287477-1669983551.280526)</code></p> <p>I want to generate several dates (in datetime) with that difference</p> <p>Now normally I would do</p> <pre><code>date_list = [datetime.now() + datetime.timedelta(milliseconds=x) for x in range(n)] </code></pre> <p>but here timedelta uses the number of milliseconds.</p> <p>So my question is: how can I convert the timestamp difference dt <code>0.006951093673706055</code> to milliseconds?</p>
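<p>The difference of two Unix timestamps is already in seconds, so either hand it to <code>timedelta(seconds=...)</code> directly or multiply by 1000 for milliseconds; a sketch (<code>n</code> is a hypothetical count):</p> <pre><code>from datetime import datetime, timedelta

dt = 1669983551.287477 - 1669983551.280526  # seconds, as a float
step = timedelta(seconds=dt)                # == timedelta(milliseconds=dt * 1000)
n = 10
date_list = [datetime.now() + step * x for x in range(n)]
</code></pre>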
<python><datetime><time><timedelta>
2023-01-13 11:47:59
1
10,576
KansaiRobot
75,108,709
6,029,488
Python Pandas: groupby.diff calculates difference between the last element of a group and the first element of the following group
<p>I have the following - already sorted - pandas dataframe:</p> <pre><code>instrumentExtId Date proxyMethod isForceXS xsValue curveValue .ID1 2008-03-28 00:00:00 CrossSectional FALSE 6.86046681 6.86046681 .ID1 2008-03-31 00:00:00 CrossSectional FALSE 6.97468855 6.97468855 .ID1 2008-04-01 00:00:00 CrossSectional FALSE 6.83893432 6.83893432 .ID1 2008-04-02 00:00:00 CrossSectional FALSE 6.70250452 6.70250452 .ID2 2008-03-28 00:00:00 CrossSectional FALSE 3.10441877 3.10441877 .ID2 2008-03-31 00:00:00 CrossSectional FALSE 3.5104612 3.5104612 .ID2 2008-04-01 00:00:00 CrossSectional FALSE 3.52994089 3.52994089 .ID2 2008-04-02 00:00:00 CrossSectional FALSE 3.24236585 3.24236585 </code></pre> <p>For each ID and for each date, I want to apply the inverse hyperbolic sine function (<code>np.arcsinh</code>) on columns &quot;xsValue&quot; and &quot;curveValue&quot; and then, calculate the date to date difference for each ID. I want columns &quot;proxyMethod&quot; and &quot;isForceXS&quot; to be preserved.</p> <p>I wrote the following code, but it seems that for the first row of the second ID, the difference is calculated between the first observation of the second ID and the last observation of the first ID. I was expecting a <code>nan</code> there (i.e. what I would like to see). What do I miss?</p> <pre><code>df = df.groupby(['instrumentExtId', 'Date', 'proxyMethod', 'isForceXS'], group_keys = True)[[&quot;xsValue&quot;,&quot;curveValue&quot;]].\ apply(lambda x:np.arcsinh(x)). \ diff(). \ reset_index().\ drop([&quot;level_4&quot;], axis=1) </code></pre> <pre><code>instrumentExtId Date proxyMethod isForceXS xsValue curveValue .ID1 2008-03-28 00:00:00 CrossSectional FALSE nan nan .ID1 2008-03-31 00:00:00 CrossSectional FALSE 0.016342295 0.016342295 .ID1 2008-04-01 00:00:00 CrossSectional FALSE -0.01945289 -0.01945289 .ID1 2008-04-02 00:00:00 CrossSectional FALSE -0.019934368 -0.019934368 .ID2 2008-03-28 00:00:00 CrossSectional FALSE -0.750188187 -0.75018818 .ID2 2008-03-31 00:00:00 CrossSectional FALSE 0.117631041 0.117631041 .ID2 2008-04-01 00:00:00 CrossSectional FALSE 0.005323083 0.005323083 .ID2 2008-04-02 00:00:00 CrossSectional FALSE -0.081488875 -0.081488875 </code></pre>
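<p>Grouping by <code>Date</code> (and the other constant columns) makes almost every group a single row, and chaining <code>.diff()</code> after the grouped <code>apply</code> runs it on the re-concatenated frame, which is why values bleed across IDs. Transform first, then diff per instrument only; a sketch:</p> <pre><code>import numpy as np

cols = ['xsValue', 'curveValue']
df[cols] = np.arcsinh(df[cols])
df[cols] = df.groupby('instrumentExtId')[cols].diff()  # NaN at each ID's first row
</code></pre>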
<python><pandas><group-by><diff>
2023-01-13 11:40:08
1
479
Whitebeard13
75,108,586
9,510,800
How to sum up the value from previous row to subsequent rows pandas
<p>I have a dataframe with the below specs</p> <pre><code> | ID | Name | count | | -- | ---- | ----- | | 1 | A | 75 | | 2 | B | 10 | | 3 | A | 15 | | 4 | A | 10 | | 5 | A | 5 | | 6 | A | 3 | </code></pre> <p>If I set the threshold for the count to be 15, I want the rows below to get added up uniformly. So the output should be</p> <pre><code> | ID | Name | count | | -- | ---- | ----- | | 1 | A | 15 | | 2 | B | 10 | | 3 | A | 30 | | 4 | A | 25 | | 5 | A | 20 | | 6 | A | 18 | </code></pre> <p>The 75 from ID 1 gets added up based on the group &quot;Name&quot; and it is always based on the threshold value. Please advise</p>
<python><pandas><numpy>
2023-01-13 11:29:20
1
874
python_interest
75,108,567
10,409,093
MongoDB: conditional updates considering arrays as unordered
<p>I need each document in a collection to be updated only if its content is different, regardless of the order of the elements in nested lists.</p> <p>Fundamentally, two versions should be the same if the elements are identical regardless of their order. MongoDB does not do that, by default.</p> <pre><code>def upsert(query, update): # collection is a pymongo.collection.Collection object result = collection.update_one(query, update, upsert=True) print(&quot;\tFound match: &quot;, result.matched_count &gt; 0) print(&quot;\tCreated: &quot;, result.upserted_id is not None) print(&quot;\tModified existing: &quot;, result.modified_count &gt; 0) query = {&quot;name&quot;: &quot;Some name&quot;} update = {&quot;$set&quot;: { &quot;products&quot;: [ {&quot;product_name&quot;: &quot;a&quot;}, {&quot;product_name&quot;: &quot;b&quot;}, {&quot;product_name&quot;: &quot;c&quot;}] }} print(&quot;First update&quot;) upsert(query, update) print(&quot;Same update&quot;) upsert(query, update) update = {&quot;$set&quot;: { &quot;products&quot;: [ {&quot;product_name&quot;: &quot;c&quot;}, {&quot;product_name&quot;: &quot;b&quot;}, {&quot;product_name&quot;: &quot;a&quot;}] }} print(&quot;Update with different order of products&quot;) upsert(query, update) </code></pre> <p>Output:</p> <pre><code>First update Found match: False Created: True Modified existing: False Same update Found match: True Created: False Modified existing: False Update with different order of products Found match: True Created: False Modified existing: True </code></pre> <p>The last update does modify the document because the order of products are indeed different.</p> <p>I did find a working solution which is to compare a sorting of the queried document's content and a sorting of the new one.</p> <p>Thanks to <a href="https://stackoverflow.com/users/1014938/zero-piraeus">Zero Piraeus</a>'s <a href="https://stackoverflow.com/a/25851972/10409093">response</a> for the short and convenient way to sort for comparison.</p> <pre><code>def ordered(obj): if isinstance(obj, dict): return sorted((k, ordered(v)) for k, v in obj.items()) if isinstance(obj, list): return sorted(ordered(x) for x in obj) else: return obj </code></pre> <p>I apply it to compare the current and the new versions of the document. If their sorting are different, I apply the update.</p> <pre><code>new_update = { &quot;products&quot;: [ {&quot;product_name&quot;: &quot;b&quot;}, {&quot;product_name&quot;: &quot;c&quot;}, {&quot;product_name&quot;: &quot;a&quot;}] } returned_doc = collection.find_one(query) # Merging remote document with local dictionary merged_doc = {**returned_doc, **new_update} if ordered(returned_doc) != ordered(merged_doc): upsert(query, {&quot;$set&quot;: new_update}) print(&quot;Updated&quot;) else: print(&quot;Not Updated&quot;) </code></pre> <p>Output:</p> <pre><code>Not Updated </code></pre> <p>That works, but that relies on python to do the comparison, introducing a delay between the read and the write.</p> <p>Is there a way to do it atomically ? Or, even better, a way to set a MongoDB Collection to adopt some kind of &quot;order inside arrays doesn't matter&quot; mode ?</p> <p>This is part of a generic implementation. Documents can have any kind of nesting in their structure.</p>
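<p>There is no collection-level &quot;arrays are unordered&quot; mode, but the read-modify-write gap can be closed with optimistic concurrency: include the previously read state in the update filter, so the write only lands if the document is still exactly what was read. A sketch:</p> <pre><code>merged_doc = {**returned_doc, **new_update}
if ordered(returned_doc) != ordered(merged_doc):
    result = collection.update_one(
        {'_id': returned_doc['_id'],
         'products': returned_doc.get('products')},  # must still match the read state
        {'$set': new_update})
    if result.matched_count == 0:
        pass  # another writer got there first: re-read and retry
</code></pre>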
<python><mongodb><pymongo>
2023-01-13 11:26:41
2
2,177
Whole Brain
75,108,533
20,999,526
Why signInWithPopup() does not work with pywebview?
<p>I am trying Google sign-in with Firebase and loading the page through pywebview.</p> <pre><code>from tkinter import * import webview as webview root = Tk() win_width = root.winfo_screenwidth() win_height = root.winfo_screenheight() root.geometry(&quot;%dx%d&quot; % (win_width, win_height)) webview.create_window(title='My Window', url='http://localhost:81',confirm_close=True) webview.start() root.destroy() </code></pre> <p>When opened in a browser it works fine, but when opened with my code and I click sign in, it shows the following message</p> <blockquote> <p>Unable to establish a connection with the popup. it may have been blocked by the browser.</p> </blockquote> <p><img src="https://i.sstatic.net/zrb98.png" alt="image" /></p> <p>What is the solution?</p>
<python><firebase><firebase-authentication><google-signin><pywebview>
2023-01-13 11:21:31
3
337
George
75,108,476
12,268,570
Unable to view in-memory table from SQLite database in PyCharm
<p>I am using PyCharm to capture some data from the web and push it into an in-memory database table on SQLite. I have debugged the code, it works fine, and in the debugger I can see data being fetched and pushed into the db[table] location.</p> <p>The Python code is below:</p> <pre><code>import requests import dataset from bs4 import BeautifulSoup from urllib.parse import urljoin, urlparse def begin(): db = dataset.connect('sqlite:///quotes.db') authors_seen = set() base_url = 'http://quotes.toscrape.com/' def clean_url(url): # Clean '/author/Steve-Martin' to 'Steve-Martin' # Use urljoin to make an absolute URL url = urljoin(base_url, url) # Use urlparse to get out the path part path = urlparse(url).path # Now split the path by '/' and get the second part # E.g. '/author/Steve-Martin' -&gt; ['','author', 'Steve-Martin'] return path.split('/')[2] def scrape_quotes(html_soup): for quote in html_soup.select('div.quote'): quote_text = quote.find(class_='text').get_text(strip=True) quote_author_url = clean_url(quote.find(class_='author').find_next_sibling('a').get('href')) quote_tag_urls = [clean_url(a.get('href')) for a in quote.find_all('a', class_='tag')] authors_seen.add(quote_author_url) # Store this quote and its tags quote_id = db['quotes'].insert({'text' : quote_text, 'author' : quote_author_url}) db['quotes_tags'].insert_many([{'quote_id' : quote_id, 'tag_id' : tag} for tag in quote_tag_urls]) def scrape_author(html_soup, author_id): author_name = html_soup.find(class_='author-title').get_text(strip=True) author_born_date = html_soup.find(class_='author-born-date').get_text(strip=True) author_born_loc = html_soup.find(class_='author-born-location').get_text(strip=True) author_desc = html_soup.find(class_='author-description').get_text(strip=True) db['authors'].insert({'author_id': author_id, 'name': author_name, 'born_date': author_born_date, 'born_location': author_born_loc, 'description': author_desc}) # Start by scraping all the quote pages print('*****Beginning scraping process - quotes first.*****') url = base_url while True: print('Now scraping page:', url) r = requests.get(url) html_soup = BeautifulSoup(r.text, 'html.parser') # Scrape the quotes scrape_quotes(html_soup) # Is there a next page? next_a = html_soup.select('li.next &gt; a') if not next_a or not next_a[0].get('href'): break url = urljoin(url, next_a[0].get('href')) # Now fetch out the author information print('*****Scraping authors data.*****') for author_id in authors_seen: url = urljoin(base_url, '/author/' + author_id) print('Now scraping author:', url) r = requests.get(url) html_soup = BeautifulSoup(r.text, 'html.parser') # Scrape the author information scrape_author(html_soup, author_id) db.commit() db.close() </code></pre> <p>What I am struggling with is the PyCharm IDE connection. As shown in the figure below, I can see the quotes.sqlite database. It has only one table listed, sqlite_master.
Under server objects there are collations, modules and routines, which are part of the infrastructure provided by SQLite.</p> <p><a href="https://i.sstatic.net/xaCqb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xaCqb.png" alt="SQLite connection view" /></a></p> <p>Also, when I view the db object (Python's driver to SQLite) in the debugger, I can see the relevant table as shown in the picture below:</p> <p><a href="https://i.sstatic.net/ADPR1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ADPR1.png" alt="Debugger showing tables" /></a></p> <p>Any ideas why PyCharm refuses to show the relevant table/collection in the IDE?</p>
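<p>One detail worth checking, given the names above: the code connects to <code>sqlite:///quotes.db</code> while the IDE shows a <code>quotes.sqlite</code> data source, so PyCharm may simply be attached to a different (and empty) file. A sketch of removing the ambiguity, with a placeholder path:</p> <pre class="lang-py prettyprint-override"><code># 'sqlite:///quotes.db' resolves relative to the working directory;
# an absolute path (four slashes) ensures code and IDE point at the same file
db = dataset.connect('sqlite:////full/path/to/quotes.db')
</code></pre>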
<python><sqlite><pycharm>
2023-01-13 11:15:47
0
646
Amogh Sarpotdar
75,108,450
1,186,904
How to override the default login for Django rest framework browsable API
<p>I have a Django Rest Framework (DRF) application, and when we use the browsable API page and click login, it goes to the default Django login page. I want to override it with another page.</p> <p>Default API: <code>https://my-app/api-auth/login/</code>. But the new login page I want is this: <code>http://my-app/signin/</code>. The auth is done by Okta in the application.</p> <p>I did try overriding this:</p> <pre><code>LOGIN_REDIRECT_URL = '/signin/'
LOGIN_URL = '/signin/'
</code></pre>
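<p>A sketch of one workaround, assuming the goal is only to redirect the browsable API's login link: declare a route for <code>api-auth/login/</code> ahead of the DRF include so it takes precedence (URL paths below mirror the question; nothing else is assumed):</p> <pre class="lang-py prettyprint-override"><code># urls.py
from django.urls import include, path
from django.views.generic import RedirectView

urlpatterns = [
    # matched first, so the browsable API's login link lands on /signin/
    path('api-auth/login/',
         RedirectView.as_view(url='/signin/', query_string=True)),
    path('api-auth/', include('rest_framework.urls', namespace='rest_framework')),
]
</code></pre>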
<python><django><django-rest-framework><okta>
2023-01-13 11:12:41
1
823
najeeb
75,108,405
12,450,117
Access Scalene Profile Object via Python Code
<p>I want to access the profiler output in Python code after scalene_profiler.stop(), but I can't seem to find any function that gives me access to it. The reason I need it is that I want the time consumed in seconds instead of the percentages in the generated report, and I want to save this data in my own format. Is that possible somehow?</p> <p>An attempt I made on my own (although I don't think it's the most efficient):</p> <pre class="lang-py prettyprint-override"><code>from scalene import scalene_profiler import time scalene_profiler.start() time.sleep(3) # my code - I would like only stats of the lines of code in between start and stop in a dictionary. scalene_profiler.stop() def ddict2dict(d): if not isinstance(d, dict): return d new_d = {} for k, v in d.items(): if isinstance(v, dict): new_d[k] = ddict2dict(v) else: new_d[k] = v return new_d # an example of a way I've tried so far - I see a RunningStats object here instead of the actual stats, so not sure if this is the most efficient data = {n: ddict2dict(getattr(scalene_profiler.Scalene._Scalene__stats, n)) for n in scalene_profiler.ScaleneStatistics.payload_contents} print(data) </code></pre> <p>Output: <code>{'max_footprint': 0, 'max_footprint_loc': None, 'current_footprint': 0, 'elapsed_time': 3.0018277168273926, 'alloc_samples': 0, 'total_cpu_samples': 2.391642999999993, 'cpu_samples_c': {'tmp.py': {18: 0.0016429999999999865}}, 'cpu_samples_python': {'tmp.py': {18: 2.389999999999993}}, 'bytei_map': {}, 'cpu_samples': {'tmp.py': 2.391642999999993}, 'cpu_utilization': {'tmp.py': {18: &lt;scalene.runningstats.RunningStats object at 0x1232298d0&gt;}}, 'memory_malloc_samples': {}, 'memory_python_samples': {}, 'memory_free_samples': {}, 'memcpy_samples': {}, 'memory_max_footprint': {}, 'per_line_footprint_samples': {}, 'total_memory_free_samples': 0.0, 'total_memory_malloc_samples': 0.0, 'memory_footprint_samples': [], 'function_map': {}, 'firstline_map': {}, 'gpu_samples': {'tmp.py': {18: 0.0}}, 'total_gpu_samples': 0.0, 'memory_malloc_count': {}, 'memory_free_count': {}} </code></p> <p>Thank you!</p>
<python><profiling><scalene>
2023-01-13 11:08:54
0
480
Ramsha Siddiqui
75,108,358
12,103,188
How to display a 3D plot in Python?
<p>I have the following code segment, but when I run it I only get a blank white screen and the plot is not displayed. I'm using Python 3.10.9. Any ideas about the issue?:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt from matplotlib import cm from mpl_toolkits.mplot3d import Axes3D def np_bivariate_normal_pdf(domain, mean, variance): X = np.arange(-domain+mean, domain+mean, variance) Y = np.arange(-domain+mean, domain+mean, variance) X, Y = np.meshgrid(X, Y) R = np.sqrt(X**2 + Y**2) Z = ((1. / np.sqrt(2 * np.pi)) * np.exp(-.5*R**2)) return X+mean, Y+mean, Z def plt_plot_bivariate_normal_pdf(x, y, z): fig = plt.figure(figsize=(12, 6)) ax = Axes3D(fig) ax.plot_surface(x, y, z, cmap=cm.coolwarm, linewidth=0, antialiased=True) ax.set_xlabel('x') ax.set_ylabel('y') ax.set_zlabel('z') plt.show() def main(): plt_plot_bivariate_normal_pdf(*np_bivariate_normal_pdf(4, 0, .25)) if __name__ == '__main__': main() </code></pre>
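<p>If this is the Matplotlib 3.4+ behaviour change where <code>Axes3D(fig)</code> no longer attaches the axes to the figure automatically, a sketch of the usual fix is to create the axes through the figure instead:</p> <pre class="lang-py prettyprint-override"><code>def plt_plot_bivariate_normal_pdf(x, y, z):
    fig = plt.figure(figsize=(12, 6))
    ax = fig.add_subplot(projection='3d')  # attaches the 3D axes to the figure
    ax.plot_surface(x, y, z, cmap=cm.coolwarm, linewidth=0, antialiased=True)
    ax.set_xlabel('x')
    ax.set_ylabel('y')
    ax.set_zlabel('z')
    plt.show()
</code></pre>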
<python><matplotlib>
2023-01-13 11:04:45
1
469
terett
75,108,339
20,999,380
Python cannot find a file that is definitely there
<p>I am trying to upload an MP3 file and play it using pydub:</p> <pre><code>import pydub from pydub import AudioSegment from pydub.playback import play blast_file = AudioSegment.from_mp3( &quot;C:/Users/am650/Downloads/radio_static.mp3&quot;) </code></pre> <p>From this, I get the following error:</p> <pre><code>C:\Users\am650\PycharmProjects\pythonProject\venv\Scripts\python.exe C:\Users\am650\PycharmProjects\pythonProject\crtt_control.py C:\Users\am650\PycharmProjects\pythonProject\venv\Lib\site-packages\pydub\utils.py:170: RuntimeWarning: Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work warn(&quot;Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work&quot;, RuntimeWarning) C:\Users\am650\PycharmProjects\pythonProject\venv\Lib\site-packages\pydub\utils.py:198: RuntimeWarning: Couldn't find ffprobe or avprobe - defaulting to ffprobe, but may not work warn(&quot;Couldn't find ffprobe or avprobe - defaulting to ffprobe, but may not work&quot;, RuntimeWarning) Traceback (most recent call last): File &quot;C:\Users\am650\PycharmProjects\pythonProject\crtt_control.py&quot;, line 38, in &lt;module&gt; blast_file = AudioSegment.from_mp3( ^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\am650\PycharmProjects\pythonProject\venv\Lib\site-packages\pydub\audio_segment.py&quot;, line 796, in from_mp3 return cls.from_file(file, 'mp3', parameters=parameters) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\am650\PycharmProjects\pythonProject\venv\Lib\site-packages\pydub\audio_segment.py&quot;, line 728, in from_file info = mediainfo_json(orig_file, read_ahead_limit=read_ahead_limit) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\am650\PycharmProjects\pythonProject\venv\Lib\site-packages\pydub\utils.py&quot;, line 274, in mediainfo_json res = Popen(command, stdin=stdin_parameter, stdout=PIPE, stderr=PIPE) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\am650\AppData\Local\Programs\Python\Python311\Lib\subprocess.py&quot;, line 1024, in __init__ self._execute_child(args, executable, preexec_fn, close_fds, File &quot;C:\Users\am650\AppData\Local\Programs\Python\Python311\Lib\subprocess.py&quot;, line 1493, in _execute_child hp, ht, pid, tid = _winapi.CreateProcess(executable, args, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ FileNotFoundError: [WinError 2] The system cannot find the file specified Process finished with exit code 1 </code></pre> <p>I have messed around with forward and back slashes, I tried putting r in front of the file call (<code>r&quot;C:/Users/am650/Downloads/radio_static.mp3&quot;)</code>, I have moved the file to different locations, etc. I have also tried other files and file types. It seems that Python cannot find any file of mine...</p> <p>I originally wrote this code on a Mac (where it worked fine) and moved it to a PC. This error is occurring on the PC (Windows 10). I am using Python 3.11.1 and I only have one version of Python downloaded. I had a similar problem earlier where Python would not recognise any of my pip installs, but I got around this by adding the packages directly in PyCharm using PyPI. I now wonder if these two issues are related?</p> <p>It is also worth noting that I am using a school computer that was configured such that all downloads automatically save to OneDrive, not the local computer. I have moved Python (and the audio file) to the computer drive but maybe I have missed a file somewhere?
I do not have another PC on which I can test these theories, and my IT department took a look and could not figure it out.</p>
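<p>The two RuntimeWarnings suggest the file Windows cannot find is ffmpeg/ffprobe rather than the MP3: pydub shells out to ffmpeg to decode MP3s, and <code>CreateProcess</code> fails when the executable is absent. A sketch, assuming ffmpeg has been unpacked to <code>C:\ffmpeg</code> (a hypothetical location; adjust to wherever <code>ffmpeg.exe</code> actually lives, or add it to PATH system-wide):</p> <pre class="lang-py prettyprint-override"><code>import os

# make ffmpeg.exe and ffprobe.exe discoverable *before* pydub is imported,
# because pydub locates the converter at import time
os.environ['PATH'] += os.pathsep + r'C:\ffmpeg\bin'

from pydub import AudioSegment

blast_file = AudioSegment.from_mp3(r'C:\Users\am650\Downloads\radio_static.mp3')
</code></pre>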
<python><windows><pydub>
2023-01-13 11:03:23
1
345
grace.cutler
75,108,175
11,737,958
AttributeError: class object has no attribute
<p>I am new to Python. I am trying to access the attribute <strong>acnt_amount</strong> from the class bank_Customer, but it throws an &quot;AttributeError&quot;. How do I access the attribute set in <strong>getDetails</strong> from <strong>withdraw</strong> within the class, i.e. from one function to another? What is the mistake I am making? Any help will be much appreciated! Thanks in advance!</p> <p><strong>Code:</strong></p> <pre><code>class bank_Customer: def getDetails(self, cname, acnt_no, acnt_type, acnt_amount): self.cname = cname self.acnt_no = acnt_no self.acnt_type = acnt_type self.acnt_amount = acnt_amount row = self.cname + &quot;,&quot; + str(self.acnt_no) + &quot;,&quot; + self.acnt_type + &quot;,&quot; + str(self.acnt_amount) + &quot;\n&quot; file = open('cust_details.csv','a') file.write(str(row)) file.close() print('*'*40) print(&quot;Account has been added successfully!&quot;) return self.acnt_amount def withdraw(self): cash = int(input(&quot;Please enter the amount to be withdrawn: &quot;)) self.acnt_amount = self.acnt_amount - cash f&quot;balance amount is {balance}&quot; return balance base = bank_Customer() base.withdraw() </code></pre> <p><strong>Error:</strong></p> <pre><code>Traceback (most recent call last): File &quot;C:\Users\kisha\IdeaProjects\Git projects in python\ATM application.py&quot;, line 96, in &lt;module&gt; base.withdraw() File &quot;C:\Users\kisha\IdeaProjects\Git projects in python\ATM application.py&quot;, line 66, in withdraw self.acnt_amount = self.acnt_amount - cash AttributeError: 'bank_Customer' object has no attribute 'acnt_amount' </code></pre>
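<p>A sketch of a fix: the attribute only exists after <code>getDetails()</code> has run, so either call that first or, more idiomatically, set the attributes in <code>__init__</code>; <code>withdraw()</code> also referenced an undefined <code>balance</code> name:</p> <pre class="lang-py prettyprint-override"><code>class BankCustomer:
    def __init__(self, cname, acnt_no, acnt_type, acnt_amount):
        # set the attributes at construction time so every method can rely on them
        self.cname = cname
        self.acnt_no = acnt_no
        self.acnt_type = acnt_type
        self.acnt_amount = acnt_amount

    def withdraw(self):
        cash = int(input('Please enter the amount to be withdrawn: '))
        self.acnt_amount -= cash
        print(f'balance amount is {self.acnt_amount}')
        return self.acnt_amount

base = BankCustomer('Kishan', 1001, 'savings', 5000)
base.withdraw()
</code></pre>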
<python>
2023-01-13 10:48:07
3
362
Kishan
75,107,982
10,924,836
Choropleth map in Python
<p>Below you can see an example of a choropleth map for Italy:</p> <pre><code>import pandas as pd import geopandas as gpd regions = ['Trentino Alto Adige', &quot;Valle d'Aosta&quot;, 'Veneto', 'Lombardia', 'Emilia-Romagna', 'Toscana', 'Friuli-Venezia Giulia', 'Liguria', 'Piemonte', 'Marche', 'Lazio', 'Umbria', 'Abruzzo', 'Sardegna', 'Puglia', 'Molise', 'Basilicata', 'Calabria', 'Sicilia', 'Campania'] df = pd.DataFrame([regions,[10+(i/2) for i in range(20)]]).transpose() df.columns = ['region','quantity'] #Download a geojson of the region geometries gdf = gpd.read_file(filename=r'https://raw.githubusercontent.com/openpolis/geojson-italy/master/geojson/limits_IT_municipalities.geojson') gdf = gdf.dissolve(by='reg_name') #The geojson is to detailed, dissolve boundaries by reg_name attribute gdf = gdf.reset_index() #gdf.reg_name[~gdf.reg_name.isin(regions)] Two regions are missing in your df #16 Trentino-Alto Adige/Südtirol #18 Valle d'Aosta/Vallée d'Aoste gdf = pd.merge(left=gdf, right=df, how='left', left_on='reg_name', right_on='region') ax = gdf.plot( column=&quot;quantity&quot;, legend=True, figsize=(15, 10), cmap='OrRd', missing_kwds={'color': 'lightgrey'}); ax.set_axis_off(); </code></pre> <p>The output of this plot is shown below</p> <p><a href="https://i.sstatic.net/OffAc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OffAc.png" alt="enter image description here" /></a></p> <p>Now I want to make the same choropleth map, but for Armenia. You can see the data below</p> <pre><code>data_arm = { 'region': ['Aragatsotn','Ararat','Armavir','Gegharkunik','Kotayk','Lori','Shirak','Syunik','Tavush','Vayots Dzor','Yerevan'], 'quantity':[0.2560,0.083,0.0120,0.9560,0.423,0.420,0.2560,0.043,0.0820,0.4560,0.019] } df = pd.DataFrame(data_arm, columns = ['region', 'quantity' ]) df </code></pre> <p>So can anybody help me implement this?</p>
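<p>A sketch following the same pattern, assuming a GeoJSON of Armenian provinces is available locally and that its name attribute is called <code>shapeName</code> (both the file name and the attribute are placeholders; sources such as geoBoundaries or GADM publish ADM1 boundaries for Armenia, and the spellings must match the <code>region</code> values, e.g. 'Gegharkunik'):</p> <pre class="lang-py prettyprint-override"><code>import geopandas as gpd

gdf = gpd.read_file('armenia_adm1.geojson')  # hypothetical local file

gdf = pd.merge(left=gdf, right=df, how='left',
               left_on='shapeName',   # adjust to the file's region-name column
               right_on='region')

ax = gdf.plot(column='quantity', legend=True, figsize=(15, 10),
              cmap='OrRd', missing_kwds={'color': 'lightgrey'})
ax.set_axis_off()
</code></pre>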
<python><geopandas><choropleth>
2023-01-13 10:32:29
1
2,538
silent_hunter
75,107,892
4,622,046
Write pandas dataframe (from CSV) to BigQuery in batch mode
<p>I have a list of csv files, I want to copy the rows and push them to BQ sequentially. At the moment, I am using pandas to read the csv files, and the <code>to_gbq</code> method to get the data in bigquery. However, since the files are big (few gigs each), I wanted to ingest the data in a batch mode to avoid any memory error.</p>
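<p>A sketch of one chunked approach, assuming pandas-gbq is installed and that the file list, table and project names are placeholders: reading each CSV in fixed-size chunks and appending them keeps only one chunk in memory at a time.</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd

csv_files = ['file1.csv', 'file2.csv']   # hypothetical list of files

for path in csv_files:
    for chunk in pd.read_csv(path, chunksize=100_000):
        chunk.to_gbq('my_dataset.my_table',    # hypothetical destination
                     project_id='my-project',  # hypothetical project
                     if_exists='append')
</code></pre>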
<python><pandas><google-bigquery>
2023-01-13 10:24:54
1
11,318
Zabir Al Nazi Nabil
75,107,845
1,717,026
Json field truncated in sqlalchemy
<p>I am getting my data from my Postgres database but it is truncated. For VARCHAR, I know it's possible to set the max size, but is it also possible with JSON, or is there another way?</p> <p>Here is my query:</p> <pre class="lang-py prettyprint-override"><code>robot_id_cast = cast(RobotData.data.op(&quot;-&gt;&gt;&quot;)(&quot;id&quot;), String) robot_camera_cast = cast(RobotData.data.op(&quot;-&gt;&gt;&quot;)(self.camera_name), JSON) # Get the last upload time for this robot and this camera subquery_last_upload = ( select([func.max(RobotData.time).label(&quot;last_upload&quot;)]) .where(robot_id_cast == self.robot_id) .where(robot_camera_cast != None) ).alias(&quot;subquery_last_upload&quot;) main_query = ( select( [subquery_last_upload.c.last_upload,RobotData.data.op(&quot;-&gt;&quot;)(self.camera_name).label(self.camera_name),]) .where(RobotData.time == subquery_last_upload.c.last_upload) .where(robot_id_cast == self.robot_id) .where(robot_camera_cast != None) ) </code></pre> <p>The problem is with this select part: <code>RobotData.data.op(&quot;-&gt;&quot;)(self.camera_name).label(self.camera_name)</code></p> <p>Here is my table:</p> <pre class="lang-py prettyprint-override"><code>class RobotData(PGBase): __tablename__ = &quot;wr_table&quot; time = Column(DateTime, nullable=False, primary_key=True) data = Column(JSON, nullable=False) </code></pre> <p>Edit: My JSON is 429 characters</p>
<python><json><sqlalchemy>
2023-01-13 10:20:56
1
3,265
David Bensoussan
75,107,763
10,197,418
Why binary mode when reading/writing TOML in Python?
<p>When reading a <code>toml</code> file in normal read (<code>&quot;r&quot;</code>) mode, I get an error</p> <pre class="lang-py prettyprint-override"><code>import tomli with open(&quot;path_to_file/conf.toml&quot;, &quot;r&quot;) as f: # have to use &quot;rb&quot; ! toml_dict = tomli.load(f) </code></pre> <blockquote> <p>TypeError: File must be opened in binary mode, e.g. use <code>open('foo.toml', 'rb')</code></p> </blockquote> <p>Same happens when writing a <code>toml</code> file. Why?</p> <p><a href="https://github.com/hukkin/tomli" rel="noreferrer">tomli github readme</a> says</p> <blockquote> <p>The file must be opened in binary mode (with the &quot;rb&quot; flag). Binary mode will enforce decoding the file as UTF-8 with universal newlines disabled, both of which are required to correctly parse TOML.</p> </blockquote> <p>I thought the age of typewriters was over, so why is the &quot;universal newline&quot; not allowed? <a href="https://toml.io/en/v1.0.0#spec" rel="noreferrer">toml spec</a> says &quot;<em>Newline means LF (0x0A) or CRLF (0x0D 0x0A)</em>&quot; (poor Mac users) - that also doesn't clarify the reason to me... so, what am I missing?</p>
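<p>Put differently, a text-mode handle applies the platform's locale encoding and universal-newline translation before tomli ever sees the bytes; binary mode hands tomli the raw bytes so it can enforce UTF-8 and exact newlines itself. A sketch of two forms that both parse correctly:</p> <pre class="lang-py prettyprint-override"><code>import tomli

# let tomli do the decoding (the intended usage)
with open('conf.toml', 'rb') as f:
    cfg = tomli.load(f)

# or decode yourself with newline translation disabled and use loads()
with open('conf.toml', encoding='utf-8', newline='') as f:
    cfg = tomli.loads(f.read())
</code></pre>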
<python><utf-8><toml>
2023-01-13 10:15:27
1
26,076
FObersteiner
75,107,749
1,889,762
Poetry self update hangs
<p>When running <code>poetry update</code>, as well as other related commands, I get the process stuck at</p> <pre><code>Resolving dependencies... </code></pre> <p>I'm using poetry version 1.2.2, so I wanted to upgrade it by running <code>poetry self update -vvv</code></p> <p>The process hangs indefinitely at this point</p> <pre><code>Source (PyPI): Downloading sdist: msgpack-1.0.4.tar.gz Creating new session for files.pythonhosted.org </code></pre> <p>If it is a bug, is there a workaround to it?</p>
<python><python-poetry>
2023-01-13 10:14:31
2
3,760
HAL9000
75,107,569
16,332,690
Databricks not saving dataframes as Parquet properly in the blob storage
<p>I am using Databricks with a mounted blob storage. When I execute my Python notebook, which creates large pandas DataFrames and tries to store them as .parquet files, they show up having 0 bytes.</p> <p>The saving takes place in a submodule that I import and not in the main notebook itself. The strange thing is that saving the dataframe as a parquet file always stores it as an empty file, i.e. with 0 bytes. However, if I try to save a dataframe as a .parquet file in the main notebook itself, it works.</p> <p>The problem seems to be very similar to this issue: <a href="https://community.databricks.com/s/question/0D58Y00009MIWkfSAH/how-can-i-save-a-parquet-file-using-pandas-with-a-data-factory-orchestrated-notebook" rel="nofollow noreferrer">https://community.databricks.com/s/question/0D58Y00009MIWkfSAH/how-can-i-save-a-parquet-file-using-pandas-with-a-data-factory-orchestrated-notebook</a></p> <p>I have installed both pyarrow and pandas and try to save a dataframe as follows:</p> <pre class="lang-py prettyprint-override"><code>df.to_parquet(&quot;blob storage location.parquet&quot;, index=False, engine=&quot;pyarrow&quot;) </code></pre> <p>Everything works fine locally but running this in Databricks is causing issues. I first tried to save my dataframes as HDF5 files, but the saving process doesn't work in Databricks it seems. I then switched to Parquet but I am running into the issue described above.</p> <p>Does anyone have a solution or an explanation as to why this is happening?</p>
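<p>One common cause worth ruling out, sketched below with a placeholder path: pandas writes through the driver's local filesystem API, so a DBFS mount usually has to be addressed with the <code>/dbfs</code> prefix, otherwise the bytes may never reach the blob container:</p> <pre class="lang-py prettyprint-override"><code># hypothetical mount point; note the local-filesystem /dbfs prefix
df.to_parquet('/dbfs/mnt/mycontainer/myframe.parquet',
              index=False, engine='pyarrow')
</code></pre>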
<python><azure><blob><databricks><parquet>
2023-01-13 09:59:34
1
308
brokkoo
75,107,534
10,924,836
Extracting years from data
<p>I want to compute year differences for the dates in the table below. The data are in <code>datetime64</code> format, as you can see below</p> <pre><code>import numpy as np import pandas as pd data = { 'Date': ['2021-01-01','2020-01-01','2019-01-01','2028-01-01'] } df = pd.DataFrame(data, columns = ['Date' ]) df['Date']= df['Date'].astype('datetime64') df </code></pre> <p>Now I want to subtract these dates from 2023-01-01 in order to calculate how many years lie between that date and the dates in the table. To do this I tried the line below, but unfortunately it did not work</p> <pre><code>df['Date']-['2023-01-01'].astype('datetime64') </code></pre> <p>So can anybody help me with how to solve this problem?</p>
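<p>A sketch of two ways to get the difference in years: subtracting Timestamps yields Timedeltas, while <code>.dt.year</code> gives plain calendar years (dates after the reference, like 2028-01-01, come out negative):</p> <pre class="lang-py prettyprint-override"><code>ref = pd.Timestamp('2023-01-01')

# elapsed years (approximate; 365.25 averages out leap years)
df['years_elapsed'] = (ref - df['Date']).dt.days / 365.25

# or a plain calendar-year difference
df['year_diff'] = ref.year - df['Date'].dt.year
print(df)
</code></pre>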
<python><time>
2023-01-13 09:56:13
2
2,538
silent_hunter
75,107,525
4,291,923
Create pivot table from DataFrame with value columns on the "bottom"
<p>There is a dataframe:</p> <pre class="lang-py prettyprint-override"><code>df = pd.DataFrame.from_dict({ 'A': ['A1','A1','A1','A1','A2','A2','A2','A2'], 'B': ['B1','B1','B2','B2','B3','B3','B4','B4'], 'C': ['one','two','one','two','one','two','one','two'], 'D': [0, 0, np.nan, 1, 0, np.nan, 1, 1], 'E': [1, 1, np.nan, 1, 0, np.nan, 1, 1] }) </code></pre> <p>So, as a table it looks like this:</p> <p><a href="https://i.sstatic.net/L4wsy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/L4wsy.png" alt="enter image description here" /></a></p> <p>I am trying to group it by <code>A</code> and <code>B</code> and move column <code>C</code> into the header, so the columns will be renamed to <code>('one', 'D'), ('one', 'E'), ('two', 'D'), ('two', 'E')</code> and it will take the following look:</p> <p><a href="https://i.sstatic.net/uvfXi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uvfXi.png" alt="enter image description here" /></a></p> <p>To achieve this I tried <code>pivot_table</code> and <code>group + unstack</code> methods:</p> <pre class="lang-py prettyprint-override"><code># Method 1 df.pivot_table(index=['A', 'B'], columns='C', values=['D', 'E'], aggfunc='sum', fill_value=0) # Method 2 df.groupby(['A', 'B', 'C']).agg('sum').unstack(level=['D', 'E']) </code></pre> <p>Both methods return the same result, where the value column names are at the very top:</p> <p><a href="https://i.sstatic.net/eyifu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eyifu.png" alt="enter image description here" /></a></p> <p>How can the column levels be swapped, or a pivot table created with the value columns on the lowest level?</p> <p>Or, more precisely: how do I get the dataframe from image 2 instead of the dataframe from image 3 from <code>df</code>?</p>
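<p>A sketch: keep the <code>pivot_table</code> call and swap the two column levels afterwards, sorting so the D/E pairs group under 'one' and 'two':</p> <pre class="lang-py prettyprint-override"><code>out = df.pivot_table(index=['A', 'B'], columns='C', values=['D', 'E'],
                     aggfunc='sum', fill_value=0)

# move the 'C' labels to the top level and 'D'/'E' to the bottom
out = out.swaplevel(0, 1, axis=1).sort_index(axis=1)
print(out)
</code></pre>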
<python><pandas><dataframe><pivot-table>
2023-01-13 09:55:10
1
510
koshachok
75,107,407
6,682,498
Issue in using __repr__ function to return a non-string value from a method in a class
<p>I have a class that contains an <code>__init__</code> method, a method which changes the init value, and a <code>__repr__</code> function that should print out the adjusted value.</p> <p>The draft of the code is as follows</p> <pre><code>class Workflow: def __init__(self, a): self.a = a def build(self): self.a += 1 def __repr__(self): value = self.build() return value # Driver Code t = Workflow(1234) print(t) </code></pre> <p>And I got an error as follows</p> <pre class="lang-none prettyprint-override"><code>--------------------------------------------------------------------------- TypeError Traceback (most recent call last) Cell In[71], line 3 1 # Driver Code 2 t = Workflow(1234) ----&gt; 3 print(t) TypeError: __str__ returned non-string (type NoneType) </code></pre> <p>What's the mistake that I have made? In this case, if I want to print out the value that has been changed by a method, how should I do that?</p>
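<p>A sketch of a fix: <code>__repr__</code> must return a string, and <code>build()</code> must actually return the new value instead of implicitly returning <code>None</code>:</p> <pre class="lang-py prettyprint-override"><code>class Workflow:
    def __init__(self, a):
        self.a = a

    def build(self):
        self.a += 1
        return self.a

    def __repr__(self):
        # repr must be a str, so convert the computed value
        return str(self.build())

t = Workflow(1234)
print(t)  # 1235
</code></pre> <p>Note that calling <code>build()</code> inside <code>__repr__</code> mutates the object every time it is printed; computing the value once and only formatting it in <code>__repr__</code> is usually the cleaner design.</p>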
<python>
2023-01-13 09:45:57
3
389
Pak Hang Leung
75,107,329
9,850,681
How to return an object after insert with sqlalchemy?
<p>I would like to retrieve the object after inserting it into the database; by object I mean the Base class. Some examples:</p> <pre class="lang-py prettyprint-override"><code>class EdaToken(Base): __tablename__ = &quot;eda_token&quot; &quot;&quot;&quot;id, primary key&quot;&quot;&quot; id = Column( Integer(), primary_key=True ) #... etc etc </code></pre> <p>This works and returns EdaToken objects:</p> <pre class="lang-py prettyprint-override"><code> @classmethod async def get_all(cls) -&gt; List['EdaToken']: &quot;&quot;&quot; Get all records in database &quot;&quot;&quot; async with get_session() as conn: result = await conn.execute( select(EdaToken) ) return result.scalars().all() </code></pre> <p>The problem is in the insert:</p> <pre class="lang-py prettyprint-override"><code>#various tests @classmethod async def create_eda_token( cls, token: EdaTokenInputOnCreate ) -&gt; 'EdaToken': &quot;&quot;&quot; Create a token and returning its new id &quot;&quot;&quot; async with get_session() as conn: result = await conn.execute( insert(EdaToken).values(label=token.label,token=token.token) ) return result.scalars().unique().first() #?? </code></pre> <p>What I'd like to return is the new database entry as an EdaToken object.</p> <p>Error:</p> <pre><code>&lt;sqlalchemy.engine.cursor.CursorResult object at 0x7fedd7c24250&gt; 'CursorResult' object has no attribute 'id' </code></pre> <p>Another test (below) doesn't seem to work either: it only lets me insert a new token once; further tokens are not inserted and it always returns the previously inserted token, the only one in the table.</p> <pre class="lang-py prettyprint-override"><code> @classmethod async def create_eda_token(cls, token: EdaTokenInputOnCreate) -&gt; 'EdaToken': &quot;&quot;&quot; Create a token and returning its new id &quot;&quot;&quot; async with get_session() as conn: result = await conn.execute( insert(EdaToken).values(label=token.label,token=token.token).returning(EdaToken) ) await conn.flush() token_id = result.scalars().unique().first() result = await conn.execute( select(EdaToken).where(EdaToken.id == token_id) ) return result.scalars().unique().first() </code></pre> <p><code>psycopg2==2.9.3</code> <code>sqlalchemy==1.4.46</code> <code>asyncpg==0.27.0</code></p>
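<p>A sketch of the ORM-style alternative, assuming <code>get_session</code> yields an <code>AsyncSession</code> and commits on exit: adding an instance and flushing populates the primary key on the object you already hold, so no second SELECT is needed:</p> <pre class="lang-py prettyprint-override"><code>@classmethod
async def create_eda_token(cls, token: 'EdaTokenInputOnCreate') -&gt; 'EdaToken':
    async with get_session() as session:
        obj = EdaToken(label=token.label, token=token.token)
        session.add(obj)
        await session.flush()   # INSERT is emitted; obj.id is now populated
        return obj              # commit is assumed to happen in get_session
</code></pre>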
<python><sqlalchemy><fastapi>
2023-01-13 09:38:24
1
460
Plaoo
75,107,327
6,195,489
Is it possible to use zoom from one graph in a Dash app to select input for second graph
<p>I have a dash app that plots a dataframe which has a date component, and an entry that is either true or false. There are two graphs in the dashboard, one with the data vs date, and one with a percentage of True/False like below:</p> <p><a href="https://i.sstatic.net/xGIPN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xGIPN.png" alt="dash app" /></a></p> <p>I can zoom in on the date range and select a subset clicking with the mouse.</p> <p>I would like to feed this range back into the second graph.</p> <p>At the moment to produce the above dashboard the relevant part of the code looks like:</p> <pre><code>from re import template import pandas as pd import plotly.express as px from dash import Dash, Input, Output, dcc, html from flask import globals def init_dashboard(server): evicted_df = pd.read_csv(&quot;app/data/evicted_jobs_node.csv&quot;, sep=&quot;\t&quot;) all_df = pd.read_csv(&quot;app/data/all_jobs_node.csv&quot;, sep=&quot;\t&quot;) all_df[&quot;datetime&quot;] = pd.to_datetime(all_df[&quot;datetime&quot;]) all_df = all_df.set_index([&quot;datetime&quot;]) all_df[&quot;evicted&quot;] = all_df[&quot;id_job&quot;].isin(evicted_df[&quot;id_job&quot;]) app = Dash(__name__, server=server, routes_pathname_prefix=&quot;/dash/&quot;) app.layout = html.Div( [ html.Div( className=&quot;row&quot;, children=[ html.Div( className=&quot;six columns&quot;, children=[dcc.Graph(id=&quot;graph-with-dropdown&quot;)], style=dict(width=&quot;75%&quot;), ), html.Div( className=&quot;six columns&quot;, children=[dcc.Graph(id=&quot;graph-with-dropdown2&quot;)], style=dict(width=&quot;25%&quot;), ), ], style=dict(display=&quot;flex&quot;), ), html.Div( className=&quot;row&quot;, children=[ html.Div( className=&quot;six columns&quot;, children=[ dcc.Dropdown( id=&quot;partition-dropdown&quot;, options=[ &quot;Partition (default is all)&quot;, *all_df[&quot;partition&quot;].unique(), ], value=&quot;Partition (default is all)&quot;, clearable=False, searchable=False, ) ], style={ &quot;width&quot;: &quot;50%&quot;, &quot;justify-content&quot;: &quot;center&quot;, }, ), html.Div( className=&quot;six columns&quot;, children=[ dcc.Dropdown( id=&quot;node-dropdown&quot;, options=[ &quot;Number of Nodes (default is all)&quot;, *sorted( [ int(nodes) for nodes in all_df[&quot;nodes_alloc&quot;].unique() ] ), ], value=&quot;Number of Nodes (default is all)&quot;, clearable=False, searchable=False, ) ], style=dict(width=&quot;50%&quot;), ), ], style=dict(display=&quot;flex&quot;), ), ] ) init_callbacks(app, df, all_df) return app.server def init_callbacks(app, df, all_df): @app.callback( Output(&quot;graph-with-dropdown2&quot;, &quot;figure&quot;), [Input(&quot;node-dropdown&quot;, &quot;value&quot;), Input(&quot;partition-dropdown&quot;, &quot;value&quot;)], ) def update_evicted_fig(selected_nodes, selected_partition): if selected_nodes != &quot;Number of Nodes (default is all)&quot;: filtered_df = all_df[all_df[&quot;nodes_alloc&quot;] == selected_nodes] else: filtered_df = all_df if selected_partition != &quot;Partition (default is all)&quot;: filtered_df = filtered_df[filtered_df[&quot;partition&quot;] == selected_partition] x = [&quot;Not Evicted&quot;, &quot;Evicted&quot;] df1 = filtered_df.groupby([&quot;evicted&quot;]).count().reset_index() fig = px.bar( df1, y=[ 100 * filtered_df[filtered_df[&quot;evicted&quot;] == False].size / filtered_df.size, 100 * filtered_df[filtered_df[&quot;evicted&quot;] == True].size / filtered_df.size, ], x=x, color=&quot;evicted&quot;, 
color_discrete_map={True: &quot;red&quot;, False: &quot;green&quot;}, labels={&quot;x&quot;: &quot;Job Status&quot;, &quot;y&quot;: &quot;% of Jobs&quot;}, ) fig.update_layout(transition_duration=500) return fig @app.callback( Output(&quot;graph-with-dropdown&quot;, &quot;figure&quot;), [Input(&quot;node-dropdown&quot;, &quot;value&quot;), Input(&quot;partition-dropdown&quot;, &quot;value&quot;)], ) def update_evicted_fig(selected_nodes, selected_partition): if selected_nodes != &quot;Number of Nodes (default is all)&quot;: filtered_df = all_df[all_df[&quot;nodes_alloc&quot;] == selected_nodes] else: filtered_df = all_df if selected_partition != &quot;Partition (default is all)&quot;: filtered_df = filtered_df[filtered_df[&quot;partition&quot;] == selected_partition] print( filtered_df[filtered_df[&quot;evicted&quot;] == True] .groupby([pd.Grouper(freq=&quot;6H&quot;)]) .sum(numeric_only=True)[&quot;node_hours&quot;] ) fig = px.bar( x=filtered_df[filtered_df[&quot;evicted&quot;] == False] .groupby([pd.Grouper(freq=&quot;6H&quot;)]) .sum(numeric_only=True)[&quot;node_hours&quot;] .index, y=filtered_df[filtered_df[&quot;evicted&quot;] == False] .groupby([pd.Grouper(freq=&quot;6H&quot;)]) .sum(numeric_only=True)[&quot;node_hours&quot;], labels={ &quot;x&quot;: &quot;Date&quot;, &quot;y&quot;: &quot;Node hours&quot;, }, title=&quot;Job Status&quot;, barmode=&quot;stack&quot;, ) fig.add_bar( name=&quot;Evicted&quot;, x=filtered_df[filtered_df[&quot;evicted&quot;] == True] .groupby([pd.Grouper(freq=&quot;6H&quot;)]) .sum(numeric_only=True)[&quot;node_hours&quot;] .index, y=filtered_df[filtered_df[&quot;evicted&quot;] == True] .groupby([pd.Grouper(freq=&quot;6H&quot;)]) .sum(numeric_only=True)[&quot;node_hours&quot;], ) fig.update_layout(transition_duration=500) return fig return app.server </code></pre> <p>Is what I am hoping to do possible, and if so is there some documentation or a worked example someone could highlight for me?</p>
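<p>A sketch of the usual mechanism, assuming the component IDs from the layout above: the time-series graph's <code>relayoutData</code> property carries the zoomed x-range, so adding it as an <code>Input</code> to the second graph's callback lets you filter the dataframe to the selected window (<code>make_percentage_figure</code> is a hypothetical helper standing in for the existing figure-building body):</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd
from dash import Input, Output

@app.callback(
    Output('graph-with-dropdown2', 'figure'),
    [Input('node-dropdown', 'value'),
     Input('partition-dropdown', 'value'),
     Input('graph-with-dropdown', 'relayoutData')],
)
def update_evicted_fig(selected_nodes, selected_partition, relayout_data):
    filtered_df = all_df
    # 'xaxis.range[0]' is only present after a zoom/pan, not after a reset
    if relayout_data and 'xaxis.range[0]' in relayout_data:
        start = pd.to_datetime(relayout_data['xaxis.range[0]'])
        end = pd.to_datetime(relayout_data['xaxis.range[1]'])
        filtered_df = filtered_df.loc[start:end]  # slice on the datetime index
    return make_percentage_figure(filtered_df)
</code></pre>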
<python><plotly-dash>
2023-01-13 09:38:12
1
849
abinitio
75,107,276
14,037,055
How many images are generated when Image augmentation is used (either for individual image or for all image)
<p>How can I determine how many images will be created by image augmentation via TensorFlow's ImageDataGenerator? What will that number be for an individual image, and likewise for all images?</p> <p>I have 17 images in total (16 images for training and 1 for testing).</p> <pre class="lang-py prettyprint-override"><code> import tensorflow as tf from tensorflow.keras.preprocessing.image import ImageDataGenerator aug = ImageDataGenerator(rotation_range=25, width_shift_range=0.1, height_shift_range=0.1, shear_range=0.2, zoom_range=0.2, horizontal_flip=True, fill_mode=&quot;nearest&quot;) history_model = model.fit_generator( aug.flow(X_train , y_train, batch_size = 32), epochs=10, verbose=0) </code></pre>
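<p>A sketch illustrating the counting, assuming <code>X_train</code> holds the 16 training images: <code>flow()</code> is an endless generator, so no fixed set of new files is created; each epoch draws one randomly transformed variant of every original image, i.e. <code>len(X_train)</code> augmented samples per epoch, or roughly 16 x 10 distinct augmented views over 10 epochs.</p> <pre class="lang-py prettyprint-override"><code>gen = aug.flow(X_train, y_train, batch_size=32)

print(len(gen))   # batches per epoch = ceil(16 / 32) = 1
print(gen.n)      # 16 source images; augmentation happens on the fly
x_batch, y_batch = next(gen)
print(x_batch.shape)  # (16, H, W, C): one randomly augmented copy of each image
</code></pre>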
<python><tensorflow><data-augmentation><image-augmentation>
2023-01-13 09:33:55
0
469
Pranab
75,107,072
5,810,060
Iterating over 2 columns and comparing similarities in Python
<p>I have a DF that looks like this:</p> <pre><code>Row Account_Name_HGI company_name_Ignite 1 00150042 plc WAGON PLC 2 01 telecom, ltd. 01 TELECOM LTD 3 0404 investments limited 0404 Investments Ltd </code></pre> <p>What I am trying to do is iterate through the <code>Account_Name_HGI</code> and the <code>company_name_Ignite</code> columns and compare the two strings in each row (e.g. row 1) to get a similarity score. I have got the code that provides the score:</p> <pre><code>from difflib import SequenceMatcher def similar(a, b): return SequenceMatcher(None, a, b).ratio() </code></pre> <p>That produces the similarity score I want, but I am stuck on the logic of a loop that iterates over the two columns and returns the similarity score per row. Any help will be appreciated.</p>
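<p>A sketch: rather than a manual for loop, the existing function can be applied row-wise across the two columns:</p> <pre class="lang-py prettyprint-override"><code>df['similarity'] = df.apply(
    lambda row: similar(row['Account_Name_HGI'], row['company_name_Ignite']),
    axis=1,
)
print(df)
</code></pre>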
<python><python-3.x><pandas><for-loop><similarity>
2023-01-13 09:15:27
2
906
Raul Gonzales
75,106,834
10,062,025
Scrape shopee using request is getting error type 2
<p>I am trying to scrape Shopee sites using requests. Here is an example site: <a href="https://shopee.co.id/Paha-Fillet-Ayam-Organik-Lacto-Farm-500gr-Paha-Fillet-Segar-Ayam-Probiotik-Organik-Paha-Boneless-Ayam-MPASI-Ayam-Sehat-Ayam-Anti-Alergi-Daging-Ayam-MPASI-i.382368918.8835294847" rel="nofollow noreferrer">https://shopee.co.id/Paha-Fillet-Ayam-Organik-Lacto-Farm-500gr-Paha-Fillet-Segar-Ayam-Probiotik-Organik-Paha-Boneless-Ayam-MPASI-Ayam-Sehat-Ayam-Anti-Alergi-Daging-Ayam-MPASI-i.382368918.8835294847</a></p> <p>I notice that it is using an API.</p> <p>My current code is as follows</p> <pre><code>import requests url='https://shopee.co.id/api/v4/item/get?itemid=8835294847&amp;shopid=382368918' header={ &quot;x-api-source&quot;: 'pc', 'af-ac-enc-dat': 'null' } response=requests.get(url,headers=header,verify=True) </code></pre> <p>The response JSON that I am getting:</p> <pre><code>{'tracking_id': '396e3995-dff2-4813-82e7-f7326026d714', 'action_type': 2, 'error': 90309999, 'is_customized': False, 'is_login': False, 'platform': 0, 'report_extra_info': 'eyJlbmNyeXB0X2tleSI6Im.....} </code></pre> <p>The response headers are as follows:</p> <pre><code> {'Server': 'SGW', 'Date': 'Sat, 14 Jan 2023 02:14:33 GMT', 'Content-Type': 'application/json', 'Transfer-Encoding': 'chunked', 'Connection': 'keep-alive', 'Vary': 'Accept-Encoding', 'cache-control': 'no-store, max-age=0', 'Content-Encoding': 'gzip'} </code></pre> <p>Can someone help me? I am not understanding why it does not return the JSON response properly.</p>
<python><python-requests>
2023-01-13 08:50:38
1
333
Hal
75,106,725
3,088,891
How can I use .on(fig) without distorting the legend position in seaborn.objects?
<p>I am creating a plot in <strong>seaborn.objects</strong>. This plot has a legend, and I would also like to change its size.</p> <p>This can be done using the <code>.theme()</code> method, which affects the <strong>matplotlib</strong> <code>rcParams</code>:</p> <pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt import seaborn.objects as so import pandas as pd dat = pd.DataFrame({'group':['a','a','b','b'], 'x': [1, 2, 1, 2], 'y': [4, 3, 2, 1]}) # Choosing a very distorted figure size here so you can see when it works (so.Plot(dat, x = 'x', y = 'y', color = 'group') .add(so.Line()) .theme({'figure.figsize': (8,2)})) </code></pre> <p><a href="https://i.sstatic.net/RP3y6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RP3y6.png" alt="Line graph with legend and (8,2) figure size" /></a></p> <p>However, in order to solve the problem outlined in <a href="https://stackoverflow.com/questions/75039156/how-can-i-rotate-axis-labels-in-a-faceted-seaborn-objects-plot/75041356#75041356">this post</a>, I need to create a <strong>matplotlib</strong> figure object and then graph <code>.on()</code> that. When I do this, the <code>'figure.figsize'</code> setting in <code>.theme()</code> is ignored (some other <code>.theme()</code> settings do still work, but not this one or a couple others I've tried). Also if you look closely you can see the right edge of the legend being pushed off the edge of the image.</p> <p>(Note also that the <code>'legend.loc'</code> <code>rcParam</code> is ignored with or without <code>.on(fig)</code>: <strong>seaborn.objects</strong> has its own legend placement system I think.)</p> <pre class="lang-py prettyprint-override"><code>fig = plt.figure() # Choosing a very distorted figure size here so you can see when it works (so.Plot(dat, x = 'x', y = 'y', color = 'group') .on(fig) .add(so.Line()) .theme({'figure.figsize': (8,2)})) </code></pre> <p><a href="https://i.sstatic.net/RUstu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RUstu.png" alt="Line graph with legend and a figure size that is not (8,2)" /></a></p> <p>I can, however, now set <code>figsize</code> in the <code>plt.figure()</code> function. But when I do this, the legend positioning is thrown much further out of whack and is largely cut off.</p> <pre class="lang-py prettyprint-override"><code>fig = plt.figure(figsize = (8,2)) # Choosing a very distorted figure size here so you can see when it works (so.Plot(dat, x = 'x', y = 'y', color = 'group') .on(fig) .add(so.Line())) </code></pre> <p><a href="https://i.sstatic.net/diSqN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/diSqN.png" alt="Line graph with misplaced legend and (8,2) figure size" /></a></p> <p>How can I include both <code>.on(fig)</code> with a legend without pushing the legend away? As pointed out in this <a href="https://stackoverflow.com/questions/73747899/how-to-move-the-legend-position-in-the-seaborn-objects-api">other question</a> standard tools designed for legend movement in regular <strong>matplotlib</strong>/<strong>seaborn</strong> don't work in the same way for <strong>seaborn.objects</strong>. 
Although to be clear, my question isn't really about how to move the legend (while that would be one way to fix this problem) - <strong>seaborn.objects</strong> already knows how to properly place the legend for a resized figure if it's through <code>.theme()</code>, I ideally just want that to work through <code>plt.figure()</code> too.</p> <p>(Edit: immediately after posting it occurred to me to try changing the <code>figsize</code> using the <code>rcParams</code> but from within <strong>matplotlib</strong> but this doesn't matter: <code>import matplotlib as mpl; mpl.rcParams['figure.figsize'] = (8,2)</code> produces the same result as the <code>fig = plt.figure(figsize = (8,2))</code> attempt)</p>
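<p>For what it's worth, recent seaborn versions also expose sizing through the objects interface itself via <code>Plot.layout</code> (a sketch below); whether this interacts correctly with <code>.on(fig)</code> may depend on the seaborn version, so treat it as something to try rather than a confirmed fix:</p> <pre class="lang-py prettyprint-override"><code>(so.Plot(dat, x='x', y='y', color='group')
   .add(so.Line())
   .layout(size=(8, 2)))
</code></pre>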
<python><matplotlib><seaborn><seaborn-objects>
2023-01-13 08:40:40
1
1,253
NickCHK
75,106,677
572,616
Is it necessary to use abc.ABC for each base class in multiple inheritance?
<p>Consider the following code snippet:</p> <pre><code>import abc class Base(abc.ABC): @abc.abstractmethod def foo(self): pass class WithAbstract(Base, abc.ABC): @abc.abstractmethod def bar(self): pass class WithoutAbstract(Base): @abc.abstractmethod def bar(self): pass </code></pre> <p>I have two questions regarding the code above:</p> <ol> <li>Is it necessary to inherit <code>WithAbstract</code> from <code>abc.ABC</code> as well, or is it sufficient to inherit <code>WithoutAbstract</code> only from <code>Base</code>?</li> <li>What is the pythonic way of going about it? What is the best practice?</li> </ol>
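<p>A sketch showing why inheriting from <code>Base</code> alone is sufficient: <code>abc.ABC</code> exists only to install <code>ABCMeta</code> as the metaclass, and metaclasses are inherited, so <code>WithoutAbstract</code> is already abstract:</p> <pre class="lang-py prettyprint-override"><code>import abc

class Base(abc.ABC):
    @abc.abstractmethod
    def foo(self): ...

class WithoutAbstract(Base):
    @abc.abstractmethod
    def bar(self): ...

print(type(WithoutAbstract))  # &lt;class 'abc.ABCMeta'&gt;, inherited from Base
try:
    WithoutAbstract()
except TypeError as exc:
    print(exc)  # can't instantiate abstract class WithoutAbstract ...
</code></pre>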
<python><multiple-inheritance><abc>
2023-01-13 08:36:40
2
14,083
Woltan
75,106,657
3,062,183
Marshmallow schema from pydantic model
<p>Given <code>pydantic</code> models, what are the best/easiest ways to generate equivalent <code>marshmallow</code> schemas from them (if it's even possible)?</p> <p>I found <a href="https://gist.github.com/kmatarese/a5492f4a02449e13ea85ace8801b8dfb" rel="nofollow noreferrer">this snippet</a> and some other similar links which do the opposite (generate <code>pydantic</code> models from <code>marshmallow</code> schemas), but couldn't manage to find the direction I need.</p>
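<p>In the absence of a ready-made converter in this direction, a minimal hand-rolled sketch for flat models with scalar fields, using pydantic v1's <code>__fields__</code> metadata and marshmallow's <code>Schema.from_dict</code>; nested models, validators and constraints would all need extra handling:</p> <pre class="lang-py prettyprint-override"><code>from marshmallow import Schema, fields
from pydantic import BaseModel

TYPE_MAP = {int: fields.Integer, float: fields.Float,
            str: fields.String, bool: fields.Boolean}

def to_marshmallow(model):
    # map each pydantic field's python type to a marshmallow field
    attrs = {
        name: TYPE_MAP[f.outer_type_](required=f.required)
        for name, f in model.__fields__.items()
    }
    return Schema.from_dict(attrs, name=f'{model.__name__}Schema')

class User(BaseModel):
    name: str
    age: int

UserSchema = to_marshmallow(User)
print(UserSchema().load({'name': 'a', 'age': 3}))  # {'name': 'a', 'age': 3}
</code></pre>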
<python><pydantic><marshmallow>
2023-01-13 08:34:50
0
1,142
Dean Gurvitz
75,106,643
5,901,318
django template rendering error when template have comparison 'equal'
<p>I have a problem with template rendering in Django. It shows up when there is some kind of integer comparison with '=='.</p> <p>I have a view like this:</p> <pre><code> def mailtemplate_GetSchema(request, id=None): if id == None: svars = [] this_var = {} this_var['s_id']=str(uuid4()) this_var['varname'] = f&quot;varname_{this_var['s_id']}&quot; this_var['varlabel'] = f&quot;varlabel_{this_var['s_id']}&quot; this_var['vartype'] = f&quot;vartype_{this_var['s_id']}&quot; this_var['varformat'] = f&quot;varformat_{this_var['_sid']}&quot; this_var['varname_value'] = '' this_var['varlabel_value'] = '' this_var['vartype_value'] = 1 this_var['varformat_value'] = '' svars.append(this_var) return render(request, 'mailtemplate_all.html', {'svars': svars}) try : obj = MailTemplate.objects.get(pk=id) schema = obj.schema schema = json.loads(schema) # it's a list of dictionary vars = [] for s in schema: this_var = {} this_var['field_id']=str(uuid4()) this_var['varname'] = f&quot;varname_{this_var['field_id']}&quot; this_var['varlabel'] = f&quot;varlabel_{this_var['field_id']}&quot; this_var['vartype'] = f&quot;vartype_{this_var['field_id']}&quot; this_var['varformat'] = f&quot;varformat_{this_var['field_id']}&quot; this_var['varname_value'] = s.get('name','') this_var['varlabel_value'] = s.get('label','') this_var['vartype_value'] = s.get('type',1) this_var['varformat_value'] = s.get('format',None) if this_var['varformat_value'] == '': this_var['varformat_value']=False vars.append(this_var) print(f'VARS:{vars}') return render(request, 'mailtemplate_all.html', {'vars': vars}) except MailTemplate.DoesNotExist: return HttpResponseNotFound() </code></pre> <p>The 'mailtemplate_all.html' template is:</p> <pre><code> {% for v in vars%} &lt;div id={{ v.field_id }} class=&quot;divTableRow&quot;&gt; &lt;div class=&quot;divTableCell&quot;&gt; &lt;input type=&quot;text&quot; name={{v.varname}} value={{ v.varname_value}} class=&quot;vTextField&quot; maxlength=&quot;20&quot; &gt; &lt;/div&gt; &lt;div class=&quot;divTableCell&quot;&gt; &lt;input type=&quot;text&quot; name={{v.varlabel}} value={{ v.varlabel_value}} class=&quot;vTextField&quot; maxlength=&quot;20&quot; &gt; &lt;/div&gt; &lt;div class=&quot;divTableCell&quot;&gt; &lt;select name={{v.vartype}}&gt; {% if v.vartype_value==1 %} &lt;option value=1 selected&gt; {% else %} &lt;option value=1&gt; {% endif %} Integer&lt;/option&gt; {% if v.vartype_value==2 %} &lt;option value=2 selected&gt; {% else %} &lt;option value=2&gt; {% endif %} String&lt;/option&gt; {% if v.vartype_value==3 %} &lt;option value=3 selected&gt; {% else %} &lt;option value=3&gt; {% endif %} Date&lt;/option&gt; &lt;/select&gt; &lt;/div&gt; &lt;div class=&quot;divTableCell&quot;&gt; &lt;input type=&quot;text&quot; name={{v.varformat}} {% if v.varformat_value %}value={{v.varformat_value}}{% endif %} class=&quot;vTextField&quot; maxlength=&quot;20&quot; &gt; &lt;/div&gt; &lt;/div&gt; {% endfor %} </code></pre> <p>When the view is called, I get this error:</p> <pre><code>[13/Jan/2023 08:23:12] &quot;GET /static/admin/fonts/Roboto-Bold-webfont.woff HTTP/1.1&quot; 200 86184 VARS:[{'field_id': 'ac9a3707-3906-4e75-9efd-514959056ddb', 'varname': 'varname_ac9a3707-3906-4e75-9efd-514959056ddb', 'varlabel': 'varlabel_ac9a3707-3906-4e75-9efd-514959056ddb', 'vartype': 'vartype_ac9a3707-3906-4e75-9efd-514959056ddb', 'varformat': 'varformat_ac9a3707-3906-4e75-9efd-514959056ddb', 'varname_value': 'kepada', 'varlabel_value': 'Kepada', 'vartype_value': 2, 'varformat_value': False}] Internal Server Error: /mst/htmx/get_schema/2 Traceback
(most recent call last): File &quot;/home/bino/.venv/htmx01/lib/python3.10/site-packages/django/template/smartif.py&quot;, line 179, in translate_token op = OPERATORS[token] KeyError: 'v.vartype_value==1' During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;/home/bino/.venv/htmx01/lib/python3.10/site-packages/django/core/handlers/exception.py&quot;, line 55, in inner response = get_response(request) File &quot;/home/bino/.venv/htmx01/lib/python3.10/site-packages/django/core/handlers/base.py&quot;, line 197, in _get_response response = wrapped_callback(request, *callback_args, **callback_kwargs) File &quot;/home/bino/Documents/dynform/htmx01/myapp/views.py&quot;, line 48, in mailtemplate_GetSchema return render(request, 'mailtemplate_all.html', {'vars': vars}) ... File &quot;/home/bino/.venv/htmx01/lib/python3.10/site-packages/django/template/smartif.py&quot;, line 181, in translate_token return self.create_var(token) File &quot;/home/bino/.venv/htmx01/lib/python3.10/site-packages/django/template/defaulttags.py&quot;, line 889, in create_var return TemplateLiteral(self.template_parser.compile_filter(value), value) File &quot;/home/bino/.venv/htmx01/lib/python3.10/site-packages/django/template/base.py&quot;, line 600, in compile_filter return FilterExpression(token, self) File &quot;/home/bino/.venv/htmx01/lib/python3.10/site-packages/django/template/base.py&quot;, line 703, in __init__ raise TemplateSyntaxError( django.template.exceptions.TemplateSyntaxError: Could not parse the remainder: '==1' from 'v.vartype_value==1' [13/Jan/2023 08:23:12] &quot;GET /mst/htmx/get_schema/2 HTTP/1.1&quot; 500 200297 Not Found: /favicon.ico [13/Jan/2023 08:23:12] &quot;GET /favicon.ico HTTP/1.1&quot; 404 2212 </code></pre> <p>So I take the 'vars' printed out from my views, and use it to test from python shell</p> <pre><code> (htmx01) bino@corobalap  ~/Documents/dynform/htmx01  python3 Python 3.10.6 (main, Nov 14 2022, 16:10:14) [GCC 11.3.0] on linux Type &quot;help&quot;, &quot;copyright&quot;, &quot;credits&quot; or &quot;license&quot; for more information. &gt;&gt;&gt; from jinja2 import Template &gt;&gt;&gt; vars=[{'field_id': 'ac9a3707-3906-4e75-9efd-514959056ddb', 'varname': 'varname_ac9a3707-3906-4e75-9efd-514959056ddb', 'varlabel': 'varlabel_ac9a3707-3906-4e75-9efd-514959056ddb', 'vartype': 'vartype_ac9a3707-3906-4e75-9efd-514959056ddb', 'varformat': 'varformat_ac9a3707-3906-4e75-9efd-514959056ddb', 'varname_value': 'kepada', 'varlabel_value': 'Kepada', 'vartype_value': 2, 'varformat_value': False}] &gt;&gt;&gt; with open('myapp/templates/mailtemplate_all.html','r') as tfile: ... template = Template(tfile.read()) ... 
&gt;&gt;&gt; o=template.render(vars=vars) &gt;&gt;&gt; print(o) &lt;div id=ac9a3707-3906-4e75-9efd-514959056ddb class=&quot;divTableRow&quot;&gt; &lt;div class=&quot;divTableCell&quot;&gt; &lt;input type=&quot;text&quot; name=varname_ac9a3707-3906-4e75-9efd-514959056ddb value=kepada class=&quot;vTextField&quot; maxlength=&quot;20&quot; &gt; &lt;/div&gt; &lt;div class=&quot;divTableCell&quot;&gt; &lt;input type=&quot;text&quot; name=varlabel_ac9a3707-3906-4e75-9efd-514959056ddb value=Kepada class=&quot;vTextField&quot; maxlength=&quot;20&quot; &gt; &lt;/div&gt; &lt;div class=&quot;divTableCell&quot;&gt; &lt;select name=vartype_ac9a3707-3906-4e75-9efd-514959056ddb&gt; &lt;option value=1&gt; Integer&lt;/option&gt; &lt;option value=2 selected&gt; String&lt;/option&gt; &lt;option value=3&gt; Date&lt;/option&gt; &lt;/select&gt; &lt;/div&gt; &lt;div class=&quot;divTableCell&quot;&gt; &lt;input type=&quot;text&quot; name=varformat_ac9a3707-3906-4e75-9efd-514959056ddb class=&quot;vTextField&quot; maxlength=&quot;20&quot; &gt; &lt;/div&gt; &lt;/div&gt; &gt;&gt;&gt; &gt;&gt;&gt; </code></pre> <p>I got no error, and the output is just as expected. This shows there is no problem in Jinja2 with integer comparison.</p> <p>Kindly tell me what's wrong with my views.py.</p>
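<p>For the record, the traceback points at tokenisation: Django's <code>{% if %}</code> tag splits its arguments on whitespace, so <code>{% if v.vartype_value==1 %}</code> is read as the single token <code>'v.vartype_value==1'</code> (hence the <code>KeyError</code> in <code>smartif.py</code> and &quot;Could not parse the remainder: '==1'&quot;), whereas <code>{% if v.vartype_value == 1 %}</code>, with spaces around the operator, parses fine. Jinja2 has a real expression lexer and tolerates the missing spaces, which is why the test with <code>jinja2.Template</code> in the shell succeeded even though the Django template engine fails.</p>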
<python><django><django-templates><jinja2>
2023-01-13 08:32:47
2
615
Bino Oetomo
75,106,604
2,201,789
RobotFramework to delete older folder or file
<p>I want to delete folders/files when their modified date is more than 5 days older than today's date.</p> <p>Below is the sample test that I wrote in Robot Framework.</p> <p>The test passes and all content is deleted.</p> <p>In this example, I have set the current year to 2022 so that 2022 is not equal to 2023 and the deletion is triggered, just for testing purposes.</p> <p><strong>How do I set up the test to delete content whose modified date is more than 5 days old in the actual test?</strong></p> <pre><code>*** Settings *** Library OperatingSystem *** Variables *** ${curr_y} 2022 ${curr_m} 01 *** Test Cases *** Old Files # ${curr_y} ${curr_m} Get Time year,month Log To Console 'current year is = ${curr_y}' Log To Console 'current month is = ${curr_m}' ${files}= List Files In Directory C:/trydel/ver1 absolute=True FOR ${file} IN @{files} ${y} ${m} = Get Modified Time ${file} year,month Log To Console 'modified year is = ${y}' Log To Console 'modified month is = ${m}' IF '${curr_y}' != '${y}' Empty Directory C:/trydel/ver1 ELSE IF '${curr_m}' != '${m}' Empty Directory C:/trydel/ver1 END END </code></pre>
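<p>A sketch of an age-based check, assuming a plain-Python helper is acceptable (it could be exposed to Robot Framework as a library keyword; alternatively, <code>Get Modified Time</code> and <code>Get Time</code> both accept the <code>epoch</code> format, which allows the same arithmetic inside the test): compare each entry's modification time against now minus five days.</p> <pre class="lang-py prettyprint-override"><code>import os
import shutil
import time

def remove_older_than(directory, days=5):
    """Delete files/folders whose modification time is more than `days` days old."""
    cutoff = time.time() - days * 24 * 60 * 60
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        if os.path.getmtime(path) &lt; cutoff:
            if os.path.isdir(path):
                shutil.rmtree(path)   # remove old folder and its contents
            else:
                os.remove(path)       # remove old file

remove_older_than('C:/trydel/ver1')
</code></pre>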
<python><robotframework>
2023-01-13 08:28:40
1
1,201
user2201789
75,106,452
3,423,825
How to combine two or more QuerySets from different models and order objects chronologically?
<p>I have two querysets I need to combine and iterate through the objects chronologically, based on a <code>datetime</code> field which is common to both models. What is the best way to do that ?</p> <p>I'm able to combine querysets with <code>union</code> but objects are not sorted properly.</p> <pre><code>model_combination = model_set1.union(model_set2, all=True) </code></pre>
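<p>A sketch of the usual in-Python approach, assuming both models share the datetime field (its name below is a placeholder): chain the two querysets and sort the combined objects.</p> <pre class="lang-py prettyprint-override"><code>from itertools import chain
from operator import attrgetter

combined = sorted(
    chain(model_set1, model_set2),
    key=attrgetter('created_at'),  # hypothetical shared datetime field name
)
for obj in combined:
    print(obj)
</code></pre>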
<python><django>
2023-01-13 08:14:47
1
1,948
Florent
75,106,356
1,436,800
Unable to apply migration on altered model in django
<p>I am new to Django. I have changed some fields in my already created Django model, but it shows this message when I try to apply migrations:</p> <pre><code>It is impossible to add a non-nullable field 'name' to table_name without specifying a default. This is because the database needs something to populate existing rows. Please select a fix: 1) Provide a one-off default now (will be set on all existing rows with a null value for this column) 2) Quit and manually define a default value in models.py. </code></pre> <p>Although I have deleted this table's data from the database, I cannot set a default value because the field has to store unique values. Do I need to delete my previous migration file related to that table?</p> <p>I have applied a data migration, but I am still getting the same error when applying migrations again:</p> <pre><code>def add_name_and_teacher(apps, schema_editor): Student = apps.get_model('app_name', 'Student') Teacher = apps.get_model('app_name', 'Teacher') for student in Student.objects.all(): student.name = 'name' student.teacher = Teacher.objects.get(id=1) student.save() class Migration(migrations.Migration): dependencies = [ ('app', '0045_standup_standupupdate'), ] operations = [ migrations.RunPython(add_name_and_teacher), ] </code></pre>
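<p>A sketch of the usual sequence for a unique, non-nullable field, with placeholder names: the prompt reappears because the schema migration itself still adds a non-nullable column with no default, so the field is added as nullable first, populated with unique values in a data migration, and only then tightened.</p> <pre class="lang-py prettyprint-override"><code># step 1: models.py, temporarily nullable
#   name = models.CharField(max_length=100, unique=True, null=True)

# step 2: a data migration that fills in guaranteed-unique values
def fill_names(apps, schema_editor):
    Student = apps.get_model('app_name', 'Student')
    for student in Student.objects.all():
        student.name = f'name-{student.pk}'   # pk makes the value unique
        student.save(update_fields=['name'])

# step 3: models.py, change to null=False and run makemigrations/migrate again
</code></pre>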
<python><django><django-models><django-rest-framework><django-migrations>
2023-01-13 08:03:50
1
315
Waleed Farrukh
75,106,282
6,753,182
How to reindex a datetime-based multiindex in pandas
<p>I have a dataframe that counts the number of times an event has occured per user per day. Users may have 0 events per day and (since the table is an aggregate from a raw event log) rows with 0 events are missing from the dataframe. I would like to add these missing rows and group the data by week so that each user has one entry per week (including 0 if applicable).</p> <p>Here is an example of my input:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import pandas as pd np.random.seed(42) df = pd.DataFrame({ &quot;person_id&quot;: np.arange(3).repeat(5), &quot;date&quot;: pd.date_range(&quot;2022-01-01&quot;, &quot;2022-01-15&quot;, freq=&quot;d&quot;), &quot;event_count&quot;: np.random.randint(1, 7, 15), }) # end of each week # Note: week 2022-01-23 is not in df, but should be part of the result desired_index = pd.to_datetime([&quot;2022-01-02&quot;, &quot;2022-01-09&quot;, &quot;2022-01-16&quot;, &quot;2022-01-23&quot;]) df </code></pre> <pre><code>| | person_id | date | event_count | |---:|------------:|:--------------------|--------------:| | 0 | 0 | 2022-01-01 00:00:00 | 4 | | 1 | 0 | 2022-01-02 00:00:00 | 5 | | 2 | 0 | 2022-01-03 00:00:00 | 3 | | 3 | 0 | 2022-01-04 00:00:00 | 5 | | 4 | 0 | 2022-01-05 00:00:00 | 5 | | 5 | 1 | 2022-01-06 00:00:00 | 2 | | 6 | 1 | 2022-01-07 00:00:00 | 3 | | 7 | 1 | 2022-01-08 00:00:00 | 3 | | 8 | 1 | 2022-01-09 00:00:00 | 3 | | 9 | 1 | 2022-01-10 00:00:00 | 5 | | 10 | 2 | 2022-01-11 00:00:00 | 4 | | 11 | 2 | 2022-01-12 00:00:00 | 3 | | 12 | 2 | 2022-01-13 00:00:00 | 6 | | 13 | 2 | 2022-01-14 00:00:00 | 5 | | 14 | 2 | 2022-01-15 00:00:00 | 2 | </code></pre> <p>This is how my desired result looks like:</p> <pre><code>| | person_id | level_1 | event_count | |---:|------------:|:--------------------|--------------:| | 0 | 0 | 2022-01-02 00:00:00 | 9 | | 1 | 0 | 2022-01-09 00:00:00 | 13 | | 2 | 0 | 2022-01-16 00:00:00 | 0 | | 3 | 0 | 2022-01-23 00:00:00 | 0 | | 4 | 1 | 2022-01-02 00:00:00 | 0 | | 5 | 1 | 2022-01-09 00:00:00 | 11 | | 6 | 1 | 2022-01-16 00:00:00 | 5 | | 7 | 1 | 2022-01-23 00:00:00 | 0 | | 8 | 2 | 2022-01-02 00:00:00 | 0 | | 9 | 2 | 2022-01-09 00:00:00 | 0 | | 10 | 2 | 2022-01-16 00:00:00 | 20 | | 11 | 2 | 2022-01-23 00:00:00 | 0 | </code></pre> <p>I can produce it using:</p> <pre class="lang-py prettyprint-override"><code>( df .groupby([&quot;person_id&quot;, pd.Grouper(key=&quot;date&quot;, freq=&quot;w&quot;)]).sum() .groupby(&quot;person_id&quot;).apply( lambda df: ( df .reset_index(drop=True, level=0) .reindex(desired_index, fill_value=0)) ) .reset_index() ) </code></pre> <p>However, according to the docs of <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.reindex.html" rel="nofollow noreferrer"><code>reindex</code></a>, I should be able to use it with <code>level=1</code> as a kwarg directly and without having to do another <code>groupby</code>. 
However, when I do this I get an &quot;inner join&quot; of the two indices instead of an &quot;outer join&quot;:</p> <pre class="lang-py prettyprint-override"><code>result = ( df .groupby([&quot;person_id&quot;, pd.Grouper(key=&quot;date&quot;, freq=&quot;w&quot;)]).sum() .reindex(desired_index, level=1) .reset_index() ) </code></pre> <pre><code>| | person_id | date | event_count | |---:|------------:|:--------------------|--------------:| | 0 | 0 | 2022-01-02 00:00:00 | 9 | | 1 | 0 | 2022-01-09 00:00:00 | 13 | | 2 | 1 | 2022-01-09 00:00:00 | 11 | | 3 | 1 | 2022-01-16 00:00:00 | 5 | | 4 | 2 | 2022-01-16 00:00:00 | 20 | </code></pre> <p>Why is that, and how am I supposed to use <code>df.reindex</code> correctly?</p> <hr /> <p>I have found <a href="https://stackoverflow.com/questions/56953517/reindex-specific-level-of-pandas-multiindex">a similar SO question</a> on reindexing a multi-index level, but the accepted answer there uses <code>df.unstack</code>, which doesn't work for me, because not every level of my desired index occurs in my current index (and vice versa).</p>
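<p>A sketch without the second groupby: build the full product index explicitly and reindex against it. <code>reindex(..., level=1)</code> only aligns labels within the (person, week) pairs that already exist; it never creates missing combinations, which is why it behaves like an inner join here.</p> <pre class="lang-py prettyprint-override"><code>weekly = df.groupby(['person_id', pd.Grouper(key='date', freq='w')]).sum()

full_index = pd.MultiIndex.from_product(
    [df['person_id'].unique(), desired_index],
    names=['person_id', 'date'],
)
result = weekly.reindex(full_index, fill_value=0).reset_index()
</code></pre>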
<python><pandas><multi-index><datetimeindex>
2023-01-13 07:56:52
1
3,290
FirefoxMetzger
75,106,167
3,247,006
How to hide the column assigned to "list_display" and "list_display_links" for "list_editable" in Django?
<p>I have <strong><code>Person</code> model</strong> below:</p> <pre class="lang-py prettyprint-override"><code># &quot;store/models.py&quot;

from django.db import models

class Person(models.Model):
    first_name = models.CharField(max_length=20)
    last_name = models.CharField(max_length=20)
</code></pre> <p>Then, I assigned <code>&quot;first_name&quot;</code> and <code>&quot;last_name&quot;</code> to <a href="https://docs.djangoproject.com/en/4.1/ref/contrib/admin/#django.contrib.admin.ModelAdmin.list_display" rel="nofollow noreferrer">list_display</a> and <a href="https://docs.djangoproject.com/en/4.1/ref/contrib/admin/#django.contrib.admin.ModelAdmin.list_editable" rel="nofollow noreferrer">list_editable</a> to make them editable as shown below:</p> <pre class="lang-py prettyprint-override"><code># &quot;store/admin.py&quot;

from django.contrib import admin
from .models import Person

@admin.register(Person)
class PersonAdmin(admin.ModelAdmin):
    list_display = (&quot;first_name&quot;, &quot;last_name&quot;) # Here
    list_editable = (&quot;first_name&quot;, &quot;last_name&quot;) # Here
</code></pre> <p>Then, I got the error below:</p> <blockquote> <p>ERRORS: &lt;class 'store.admin.PersonAdmin'&gt;: (admin.E124) The value of 'list_editable[0]' refers to the first field in 'list_display' ('first_name'), which cannot be used unless 'list_display_links' is set.</p> </blockquote> <p>So, I assigned <code>&quot;id&quot;</code> to <code>list_display</code> and <a href="https://docs.djangoproject.com/en/4.1/ref/contrib/admin/#django.contrib.admin.ModelAdmin.list_display_links" rel="nofollow noreferrer">list_display_links</a> as shown below:</p> <pre class="lang-py prettyprint-override"><code># &quot;store/admin.py&quot;

from django.contrib import admin
from .models import Person

@admin.register(Person)
class PersonAdmin(admin.ModelAdmin):
    # Here
    list_display = (&quot;first_name&quot;, &quot;last_name&quot;, &quot;id&quot;)
    list_editable = (&quot;first_name&quot;, &quot;last_name&quot;)
    list_display_links = (&quot;id&quot;, ) # Here
</code></pre> <p>Then, the error was solved and 3 columns were displayed as shown below. Now, I want to hide <strong>the 3rd column &quot;ID&quot;</strong> which I don't need:</p> <p><a href="https://i.sstatic.net/u9Qzb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/u9Qzb.png" alt="enter image description here" /></a></p> <p>So, how can I hide <strong>the 3rd column &quot;ID&quot;</strong>?</p>
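<p>A sketch of one possible fix (assumption: on recent Django versions the <code>admin.E124</code> check is skipped when <code>list_display_links</code> is explicitly <code>None</code>, which is the documented way to remove links entirely): drop the <code>&quot;id&quot;</code> column and disable the links instead. The trade-off is that rows no longer link to their change pages, so records would be edited inline only.</p> <pre class="lang-py prettyprint-override"><code># &quot;store/admin.py&quot; - a sketch, not verified on every Django version

from django.contrib import admin
from .models import Person

@admin.register(Person)
class PersonAdmin(admin.ModelAdmin):
    list_display = (&quot;first_name&quot;, &quot;last_name&quot;)   # no &quot;id&quot; column needed
    list_editable = (&quot;first_name&quot;, &quot;last_name&quot;)
    list_display_links = None  # removes change-page links from the list
</code></pre>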
<python><django><django-admin><hide><changelist>
2023-01-13 07:43:31
2
42,516
Super Kai - Kazuya Ito
75,106,098
11,795,964
How to apply itertools to a series
<p>I have a dataset of patient surgery. Many of the patients have had multiple operations and the value_counts aggregation of their multiple operation codes (there are 4 codes) is shown below.</p> <pre><code>['O011'] 2785
['O012'] 1813
['O011', 'O011'] 811
['O013'] 532
['O012', 'O012'] 522
['O014'] 131
['O013', 'O013'] 125
['O014', 'O014'] 26
['O012', 'O011'] 24
['O011', 'O012'] 20
['O011', 'O011', 'O011'] 14
['O011', 'O013'] 12
['O012', 'O012', 'O011'] 6
['O011', 'O012', 'O012'] 6
['O011', 'O011', 'O011', 'O011'] 5
['O013', 'O013', 'O013'] 5
['O013', 'O011'] 4
['O012', 'O012', 'O012'] 4
['O012', 'O013'] 3
['O013', 'O014'] 3
['O011', 'O013', 'O013'] 3
['O012', 'O014'] 3
['O011', 'O012', 'O011'] 2
['O012', 'O013', 'O013'] 2
['O011', 'O014'] 2
['O013', 'O012', 'O012'] 2
['O014', 'O014', 'O014'] 2
['O013', 'O012'] 1
['O012', 'O012', 'O013', 'O013', 'O013'] 1
['O012', 'O011', 'O012'] 1
['O011', 'O011', 'O012'] 1
['O013', 'O013', 'O011'] 1
['O011', 'O011', 'O012', 'O012'] 1
['O014', 'O013', 'O013'] 1
['O013', 'O013', 'O012'] 1
['O012', 'O011', 'O011'] 1
['O011', 'O012', 'O013'] 1
['O013', 'O011', 'O011'] 1
['O012', 'O012', 'O012', 'O012'] 1
['O013', 'O013', 'O012', 'O012'] 1
['O014', 'O013', 'O011', 'O011'] 1
['O012', 'O011', 'O011', 'O011'] 1
['O013', 'O011', 'O012'] 1
</code></pre> <p>This shows the sequence of their operations by patient count, so 2785 patients have had just the one procedure, O011. I want to create a new column with a boolean 'Are all the operations the same'. There is an itertools recipe for comparing the values in a list <a href="https://stackoverflow.com/a/3844832/11795964">here</a>. I am a surgeon and my Python skills are not up to applying it to the series. How do I create a new column using this function?</p> <p>The series is <code>OPERTN_01_list</code>. I tried</p> <pre><code>from itertools import groupby

def all_equal(iterable):
    g = groupby(iterable)
    return next(g, True) and not next(g, False)
</code></pre> <p>My dataset is <code>mo</code> (multiple operations), so I tried to apply the function <code>all_equal</code> to the series</p> <pre><code>mo['eq'] = all_equal(mo['OPERTN_01_list'])
</code></pre> <p>but the new column <code>mo['eq']</code> had all false values.</p> <p>I am not sure of the best way to apply the function.</p>
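<p>A sketch of what I believe is the fix: <code>all_equal(mo['OPERTN_01_list'])</code> calls the function once on the whole Series, so it compares the rows to each other and the single result is broadcast to every row. <code>Series.apply</code> calls it once per row's list instead. The small dataframe below is a hypothetical stand-in for <code>mo</code>:</p> <pre class="lang-py prettyprint-override"><code>from itertools import groupby

import pandas as pd

def all_equal(iterable):
    # True if groupby collapses the iterable into at most one group
    g = groupby(iterable)
    return next(g, True) and not next(g, False)

# hypothetical stand-in for the real `mo` dataframe
mo = pd.DataFrame({'OPERTN_01_list': [['O011'], ['O012', 'O011'], ['O013', 'O013']]})

# apply() runs all_equal once per row, i.e. once per list of codes
mo['eq'] = mo['OPERTN_01_list'].apply(all_equal)
print(mo)
#   OPERTN_01_list     eq
# 0         [O011]   True
# 1   [O012, O011]  False
# 2   [O013, O013]   True
</code></pre> <p>One caveat (an assumption about the data): if the lists were read back from a CSV they may actually be strings like <code>&quot;['O011', 'O011']&quot;</code>, in which case they need parsing first (e.g. with <code>ast.literal_eval</code>) before the function is applied.</p>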
<python><python-itertools>
2023-01-13 07:35:45
1
363
capnahab
75,106,068
4,835,496
GeoPandas: Apply function to row multiple times
<p>I have a GeoDataFrame with the following columns. The column <em>node_locations</em> is a dictionary of OSM node IDs with the corresponding coordinates.</p> <pre><code>{
    &quot;geometry&quot;: LineString(LINESTRING (8.6320625 49.3500941, 8.632062 49.3501782)),
    &quot;node_locations&quot;: {75539413: {&quot;lat&quot;: 52.5749342, &quot;lon&quot;: 13.3008981},
                       75539407: {&quot;lat&quot;: 52.5746156, &quot;lon&quot;: 13.3029441},
                       75539412: {&quot;lat&quot;: 52.5751579, &quot;lon&quot;: 13.3012622}
                       ...
}
</code></pre> <p>My goal is to split all intersecting lines, but <strong>only</strong> if the intersection point exists in the <em>node_locations</em> column. E.g. in the picture only the lines with the green dot should be split, because the green dot is a point in the node_locations. The red one does not appear there, so it should not be split.</p> <p>As I have a lot of data, I want to use <em>apply()</em> to make it more performant. I created a function <em>split_intersecting_ways</em> that iterates over each row and determines all intersecting geometries. Then I use another apply that calls <em>split_intersecting_geometries</em> on all these intersecting rows and passes my row from the first apply function as an argument to split the geometry. This new split geometry should be used in the next iteration. As I can have multiple intersecting geometries where I should split, it should split the original geometry iteratively and use the previously split GeometryCollection as input for the next iteration.</p> <pre><code>def split_intersecting_geometries(intersecting, row):
    if intersecting.name != row.name and intersecting.geometry.type != 'GeometryCollection':
        intersection = row.geometry.intersection(intersecting.geometry)
        if intersection.type == 'Point':
            lon, lat = intersection.coords.xy
            for key, value in row.node_locations.items():
                if lat[0] == value[&quot;lat&quot;] and lon[0] == value[&quot;lon&quot;]:
                    # creates a GeometryCollection with the split lines
                    return split(row.geometry, intersecting.geometry)
    return row.geometry

def split_intersecting_ways(row, data):
    intersecting_rows = data[data.geometry.intersects(row.geometry)]
    data['geometry'] = intersecting_rows.apply(split_intersecting_geometries, args=(row,), axis=1)
    return data['geometry']

edges['geometry'] = edges.apply(split_intersecting_ways, args=(edges,), axis=1)
</code></pre> <p>After some iterations I get the error <strong>Columns must be same length as key</strong>. How can I fix this?</p> <p><a href="https://i.sstatic.net/HAG2g.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HAG2g.png" alt="enter image description here" /></a></p>
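<p>A sketch of one way I would restructure this, under an assumption about the cause: <code>apply(axis=1)</code> can expand an iterable result such as a GeometryCollection into several columns, and assigning that multi-column result to the single <code>geometry</code> column raises <em>Columns must be same length as key</em>. The sketch avoids mutating <code>data</code> inside the apply, splits each row's geometry part by part, and assumes shapely 2.x (<code>.geoms</code>); the function name <code>split_row</code> is hypothetical.</p> <pre class="lang-py prettyprint-override"><code>from shapely.geometry import GeometryCollection
from shapely.ops import split

def split_row(row, data):
    # all rows whose geometry touches this row's geometry
    candidates = data[data.geometry.intersects(row.geometry)]
    parts = [row.geometry]
    for other in candidates.itertuples():
        if other.Index == row.name:
            continue
        intersection = row.geometry.intersection(other.geometry)
        if intersection.geom_type != 'Point':
            continue
        lon, lat = intersection.coords.xy
        if any(lat[0] == v['lat'] and lon[0] == v['lon']
               for v in row.node_locations.values()):
            # split every part produced so far by this splitter geometry,
            # feeding the previous result into the next iteration
            parts = [piece
                     for part in parts
                     for piece in split(part, other.geometry).geoms]
    return parts[0] if len(parts) == 1 else GeometryCollection(parts)

# result_type='reduce' keeps one object per row instead of letting
# pandas expand an iterable result into several columns
edges['geometry'] = edges.apply(split_row, args=(edges,), axis=1, result_type='reduce')
</code></pre> <p>Note the all-pairs <code>intersects</code> check is roughly quadratic; if performance matters, querying <code>edges.sindex</code> for each row's bounding box first should cut the candidate set considerably.</p>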
<python><geopandas>
2023-01-13 07:32:26
1
1,681
Kewitschka