Dataset schema (column: type, observed range):
QuestionId: int64, 74.8M – 79.8M
UserId: int64, 56 – 29.4M
QuestionTitle: string, lengths 15 – 150
QuestionBody: string, lengths 40 – 40.3k
Tags: string, lengths 8 – 101
CreationDate: date, 2022-12-10 09:42:47 – 2025-11-01 19:08:18
AnswerCount: int64, 0 – 44
UserExpertiseLevel: int64, 301 – 888k
UserDisplayName: string, lengths 3 – 30
75,071,603
19,303,365
Dropping unnecessary text from a list of strings
Existing df:

```
Id  dates
01  ['ATIVE 04/2018 to 03/2020', ' XYZ mar 2020 – Jul 2022', 'June 2021 - 2023 XYZ']
```

Expected df:

```
Id  dates
01  ['04/2018 to 03/2020', 'mar 2020 – Jul 2022', 'June 2021 - 2023']
```

I am looking to clean the lists under the `dates` column. I tried the function below, but it doesn't do the job. Any leads?

```python
def clean_dates_list(dates_list):
    cleaned_dates_list = []
    for date_str in dates_list:
        cleaned_date_str = re.sub(r'[^A-Za-z\s\d]+', '', date_str)
        cleaned_dates_list.append(cleaned_date_str)
    return cleaned_dates_list
```
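A minimal sketch of one possible approach, not a confirmed answer: instead of deleting unwanted characters, extract the date-range portion with a pattern that runs from the first date-like token (`04/2018`, `mar 2020`, `June 2021`) to the last one, dropping whatever surrounds it. The pattern below covers only the formats shown in the question.

```python
import re

# Matches "<start date> ... <end date>", where a date is mm/yyyy,
# "mar 2020"-style month+year, or a bare 4-digit year at the end.
DATE_RANGE = re.compile(
    r'(\d{2}/\d{4}|[A-Za-z]{3,9} \d{4})'   # start: 04/2018 or "mar 2020"
    r'.*?'                                 # separator: "to", "-", "–"
    r'(\d{2}/\d{4}|\d{4})'                 # end: 03/2020 or bare 2023
)

def clean_dates_list(dates_list):
    return [m.group(0) if (m := DATE_RANGE.search(s)) else s
            for s in dates_list]

print(clean_dates_list(['ATIVE 04/2018 to 03/2020',
                        ' XYZ mar 2020 – Jul 2022',
                        'June 2021 - 2023 XYZ']))
# ['04/2018 to 03/2020', 'mar 2020 – Jul 2022', 'June 2021 - 2023']
```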
<python><regex><dataframe>
2023-01-10 14:42:09
1
365
Roshankumar
75,071,174
7,123,933
How to calculate month by month change in value per user in pandas?
I was looking for similar topics, but I only found the overall change by month. I would like the month-over-month change in a value (e.g. UPL) per user, as in the example below.

| user_id | month               | UPL |
|--------:|:--------------------|----:|
|       1 | 2022-01-01 00:00:00 | 100 |
|       1 | 2022-02-01 00:00:00 | 200 |
|       2 | 2022-01-01 00:00:00 | 100 |
|       2 | 2022-02-01 00:00:00 |  50 |
|       1 | 2022-03-01 00:00:00 | 150 |

And I'd like an additional column named "UPL change month by month":

| user_id | month               | UPL | UPL_change_by_month |
|--------:|:--------------------|----:|--------------------:|
|       1 | 2022-01-01 00:00:00 | 100 |                   0 |
|       1 | 2022-02-01 00:00:00 | 200 |                 100 |
|       2 | 2022-01-01 00:00:00 | 100 |                   0 |
|       2 | 2022-02-01 00:00:00 |  50 |                 -50 |
|       1 | 2022-03-01 00:00:00 | 150 |                 -50 |

Is this possible with an aggfunc or the shift function in Pandas?
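A minimal sketch of one possible answer, not confirmed by the poster: sort by user and month, then `groupby(...).diff()` subtracts each user's previous-month value, which is exactly a per-group shift.

```python
import pandas as pd

df = pd.DataFrame({
    "user_id": [1, 1, 2, 2, 1],
    "month": pd.to_datetime(["2022-01-01", "2022-02-01",
                             "2022-01-01", "2022-02-01", "2022-03-01"]),
    "UPL": [100, 200, 100, 50, 150],
})

df = df.sort_values(["user_id", "month"])
# diff() within each user; the first month has no predecessor, hence fillna(0)
df["UPL_change_by_month"] = df.groupby("user_id")["UPL"].diff().fillna(0).astype(int)
print(df.sort_index())
```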
<python><pandas>
2023-01-10 14:08:16
1
359
Lukasz
75,071,155
8,610,286
How to convert a requests GET request in Python to asyncio with payloads?
I am trying to parallelize requests to the Wikidata API using Python's asyncio module.

My current synchronous script does the following:

```python
import requests

base_url = "https://www.wikidata.org/w/api.php"
payload = {
    "action": "query",
    "list": "search",
    "srsearch": search_term,
    "language": "en",
    "format": "json",
    "origin": "*",
}
res = requests.get(base_url, params=payload)
```

I am trying to do the same using `asyncio`, to send the requests asynchronously.

From [this blog post](https://pawelmhm.github.io/asyncio/python/aiohttp/2016/04/22/asyncio-aiohttp.html) and the documentation, I understood that I need something like:

```python
from aiohttp import ClientSession

async with ClientSession() as session:
    async with session.get(url) as response:
        response = await response.read()
```

However, I could not find how to add the payload to the request. Do I have to reconstruct the URL manually, or is there a way to send the payload with asyncio?
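A minimal sketch, assuming aiohttp: `session.get()` accepts a `params=` mapping just like `requests.get()`, so the payload does not need to be encoded into the URL by hand. The search terms here are illustrative.

```python
import asyncio
import aiohttp

BASE_URL = "https://www.wikidata.org/w/api.php"

async def search(session: aiohttp.ClientSession, term: str) -> dict:
    payload = {
        "action": "query", "list": "search", "srsearch": term,
        "language": "en", "format": "json", "origin": "*",
    }
    # aiohttp builds the query string from the params mapping
    async with session.get(BASE_URL, params=payload) as resp:
        return await resp.json()

async def main() -> None:
    async with aiohttp.ClientSession() as session:
        # gather() runs the requests concurrently
        results = await asyncio.gather(*(search(session, t)
                                         for t in ["Python", "Wikidata"]))
        print([r["query"]["searchinfo"]["totalhits"] for r in results])

asyncio.run(main())
```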
<python><python-requests><python-asyncio><wikidata-api>
2023-01-10 14:07:02
1
349
Tiago Lubiana
75,071,048
6,694,814
Leaflet on-click circle doesn't work in Python folium
I would like to make the Leaflet circle an on-click feature in my Python folium map.

Unfortunately, the code does nothing, and the console reports no errors.

```python
class Circle(MacroElement):
    """
    https://leafletjs.com/reference.html#circle
    """
    _template = Template(
        """
        {% macro script(this, kwargs) %}
        function newCircle(e){
            var circle_job = L.Circle().setLatLng(e.latlng).addTo({{this._parent.get_name()}});
        }
        {% endmacro %}
        """
    )

job_range = Circle()
map.add_child(job_range)
```

What have I done wrong here?
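A minimal sketch of one possible fix, stated as an assumption rather than a confirmed answer. Two likely problems with the original: `L.Circle()` is constructed without a latlng or radius (the usual Leaflet form is the lowercase factory `L.circle(latlng, options)`), and `newCircle` is defined but never attached to any click event. The radius and map setup below are illustrative.

```python
import folium
from branca.element import MacroElement, Template  # branca ships with folium

class ClickCircle(MacroElement):
    # Register a click handler on the parent map that draws a circle
    # at the clicked location.
    _template = Template(
        """
        {% macro script(this, kwargs) %}
        {{ this._parent.get_name() }}.on('click', function (e) {
            L.circle(e.latlng, {radius: 1000}).addTo({{ this._parent.get_name() }});
        });
        {% endmacro %}
        """
    )

m = folium.Map(location=[51.5, -0.1], zoom_start=10)
m.add_child(ClickCircle())
m.save("map.html")
```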
<javascript><python><leaflet><folium>
2023-01-10 13:59:09
0
1,556
Geographos
75,070,946
360,829
How to override a class __init__ method while keeping the types
What's the proper way to extend a class's `__init__` method while keeping the type annotations intact?

Take this example class:

```python
class Base:
    def __init__(self, *, a: str):
        pass
```

I would like to subclass `Base` and add a new parameter `b` to the `__init__` method:

```python
from typing import Any

class Sub(Base):
    def __init__(self, *args: Any, b: str, **kwargs: Any):
        super().__init__(*args, **kwargs)
```

The problem with that approach is that `Sub` now basically accepts anything. For example, mypy will happily accept the following:

```python
Sub(a="", b="", invalid=1)  # throws __init__() got an unexpected keyword argument 'invalid'
```

I also don't want to redefine `a` in `Sub`, since `Base` might be from an external library that I don't fully control.
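A sketch of one possible compromise, assuming PEP 692-era typing and stated as such: declare the parent's keyword arguments once in a TypedDict so `Sub` stays fully checkable without copying `Base`'s whole signature into the `def` line. `BaseKwargs` is a name invented for this example, and it does still restate `a` once.

```python
from typing import TypedDict, Unpack  # Unpack: Python 3.11+, PEP 692 kwargs: 3.12 / mypy 1.x

class BaseKwargs(TypedDict):
    a: str

class Base:
    def __init__(self, *, a: str):
        pass

class Sub(Base):
    # **kwargs is typed as exactly the keys of BaseKwargs
    def __init__(self, *, b: str, **kwargs: Unpack[BaseKwargs]):
        super().__init__(**kwargs)

Sub(a="", b="")           # OK
# Sub(a="", b="", bad=1)  # mypy: Unexpected keyword argument "bad"
```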
<python><mypy><python-typing>
2023-01-10 13:50:55
3
20,443
Cesar Canassa
75,070,928
3,875,720
Unable to understand the looping logic
I am trying to solve a problem using Python. The problem: given a string like 'a b c d', replace the spaces with '_' in all combinations. Here the expected output would be: 'a_b c d', 'a_b_c d', 'a_b_c_d', 'a b_c d', 'a b_c_d', 'a b c_d'.

I am using a sliding window approach; here's my code:

```python
ip = 'a b c d'
org = ip
res = []

for i in range(len(ip)):
    if ip[i] == ' ':
        i += 1
        for j in range(i + 1, len(ip)):
            if ip[j] == ' ':
                ip = ip[:j] + '_' + ip[j+1:]
                res.append(ip)
                j += 1
        i += 1
        ip = org
```

The problem is that the second for loop runs twice and appends duplicate results.

Result: `['a_b c d', 'a_b_c d', 'a_b_c_d', 'a b_c d', 'a b_c_d', 'a b_c d', 'a b_c_d', 'a b c_d', 'a b c_d']`

I cannot figure out why this happens and would appreciate any help.

Thanks!
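Hard to be sure without running it, but note that rebinding `i` or `j` inside a `for` statement never affects the iteration: `range` keeps driving the loop variable, which is a common source of repeated inner passes. A minimal sketch of the same idea without index bookkeeping, building every contiguous window of spaces directly:

```python
ip = 'a b c d'
spaces = [i for i, ch in enumerate(ip) if ch == ' ']

res = []
for start in range(len(spaces)):
    for end in range(start, len(spaces)):
        s = list(ip)
        for pos in spaces[start:end + 1]:  # replace this window of spaces
            s[pos] = '_'
        res.append(''.join(s))

print(res)
# ['a_b c d', 'a_b_c d', 'a_b_c_d', 'a b_c d', 'a b_c_d', 'a b c_d']
```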
<python><python-3.x><list>
2023-01-10 13:49:28
4
323
FenderBender
75,070,918
12,349,101
How to bind ONLY ASCII keys in tkinter?
I want to bind only ASCII keys using tkinter. I know how to bind keys selectively (per key) or catch every keyboard key (by binding `<Key>` or `<KeyPress>`), but I don't know how to do the same for every ASCII key only.

Here is what I have tried so far:

1. Using a `<Key>` or `<KeyPress>` binding to catch all keyboard keys (doesn't cover mouse buttons):

```python
import tkinter as tk

def key_press(event):
    label.config(text=f'char Pressed: {event.char!r}')
    label2.config(text=f'keysym Pressed: {event.keysym!r}')

root = tk.Tk()
label = tk.Label(root, text='Press a key')
label2 = tk.Label(root, text='Press a key')
label.pack()
label2.pack()
root.bind('<Key>', key_press)
root.mainloop()
```

2. Using per-key bindings (you need to know the name/keysym first, as listed in the [tkinter documentation](https://anzeljg.github.io/rin2/book2/2405/docs/tkinter/key-names.html)):

```python
import tkinter as tk

def key_press(event):
    label.config(text=f'char Pressed: {event.char!r}')
    label2.config(text=f'keysym Pressed: {event.keysym!r}')

root = tk.Tk()
label = tk.Label(root, text='Press a key')
label2 = tk.Label(root, text='Press a key')
label.pack()
label2.pack()

# here we only use the K and BackSpace keys as an example
root.bind('<BackSpace>', key_press)
root.bind('<K>', key_press)
root.mainloop()
```

How can I bind a function to all ASCII keys, and only those, using just tkinter? (No third-party modules if possible.)
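A minimal sketch of one possible answer, not a confirmed one: bind `<Key>` once and filter inside the handler. `event.char` is non-empty only for keys that produce a character, so checking it against the ASCII range keeps letters, digits and punctuation while ignoring arrows, function keys and other specials.

```python
import tkinter as tk

def key_press(event):
    # Keep only keys whose character is ASCII (codepoint < 128)
    if len(event.char) == 1 and ord(event.char) < 128:
        label.config(text=f'ASCII key pressed: {event.char!r}')

root = tk.Tk()
label = tk.Label(root, text='Press a key')
label.pack()
root.bind('<Key>', key_press)
root.mainloop()
```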
<python><tkinter><tk-toolkit><key-bindings>
2023-01-10 13:48:19
1
553
secemp9
75,070,784
7,951,365
Print exact variable values with percentage symbols on pie chart
[screenshot of the current pie chart: https://i.sstatic.net/0OB42.png]

I would like the pie chart to show the exact values of the "Percentage Answers" column with a percentage symbol (100 % instead of 100.0). I researched similar questions on Stack Overflow, and they seem to use `autopct`, but I don't seem to be using it properly (and I don't understand it either) to display my column's values with %.

Thanks in advance for your help!

Here is a small reproducible example:

```python
# Import pandas library
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np

# initialize list of lists
data = [['Basics 1', 100.0], ['Basics 2', 100.0], ['Basics 3', 40.0]]

# Create the pandas DataFrame
df = pd.DataFrame(data, columns=['Course', 'Percentage Answers'])

# Plot teachers feedback percentages
my_labels = list(df['Course'])
plt.pie(df["Percentage Answers"], labels=my_labels, autopct='%0.0f%%')
plt.title("Percentage of Teacher's Feedback Participation")
plt.axis('equal')
plt.show()
```
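A minimal sketch of one possible answer, not a confirmed one: `autopct` receives each wedge's share of the total, so converting that share back to the original column value and formatting it without decimals prints "100 %" instead of the recomputed percentage.

```python
import matplotlib.pyplot as plt
import pandas as pd

data = [['Basics 1', 100.0], ['Basics 2', 100.0], ['Basics 3', 40.0]]
df = pd.DataFrame(data, columns=['Course', 'Percentage Answers'])

values = df['Percentage Answers']
plt.pie(values, labels=list(df['Course']),
        # pct is the wedge's share of the total; map it back to the raw value
        autopct=lambda pct: f'{pct * values.sum() / 100:.0f} %')
plt.title("Percentage of Teacher's Feedback Participation")
plt.axis('equal')
plt.show()
```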
<python><matplotlib><pie-chart>
2023-01-10 13:36:56
1
490
Kathia
75,070,619
2,406,499
How to fix a .csv file with inconsistent column counts due to commas within a field
I have 60 .xlsx files with 300K-500K rows each that I was hoping to read into a df for some analysis. The problem I'm facing is that when I read the files into a df I get extra columns, caused by commas within a field. Looking into one of those Excel files, I see the following situation right in the file: the first row's ProductName should be "reg (.com,.net)", the second row is fine, and the third row's ProductName should be "reg(.com,.ca,.net)".

```
ProductName | Product Code | Term | Amnt
reg (.com.  | .net)        | X123 | 12   | 7.99
wh          | Y987         | 36   | 5.99
reg (.com.  | .net         | .ca) | X123 | 12. | 7.99
```

This happens in almost (if not) all of the Excel files I need to work with, and fixing it manually would take forever given the massive number of records.

Is there a way to fix those rows with Python and make them all even somehow?

P.S.

1. To read the files I simply read each Excel file into a dataframe, store them in a list, and then concatenate the list into a single df:

```python
data_frame_list = []
files_in_folder = glob.glob('drive/MyDrive/partialdataset/*')

# read data into dataframes
for file in files_in_folder:
    data_frame_list.append(pd.read_excel(file))

# concatenate the dataframes
df = pd.concat(data_frame_list)
df
```

2. The original files are .ods, which I converted to .xlsx because I was running into memory issues in Colab (even reading a single .ods file) when reading them into a dataframe. When I open the .ods in Excel, the same issue as in the example shows up.
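A minimal sketch under a stated assumption: every row has the same four logical fields, so any extra cells must come from a ProductName that got split on its internal commas. Re-joining the leading cells until the expected column count is reached makes the rows even again. Column names and the file name are illustrative.

```python
import pandas as pd

EXPECTED = ['ProductName', 'ProductCode', 'Term', 'Amnt']

def repair_row(cells, n_expected=len(EXPECTED)):
    cells = [c for c in cells if pd.notna(c)]
    extra = len(cells) - n_expected
    if extra > 0:
        # glue the overflow cells back onto the product name
        cells = [','.join(map(str, cells[:extra + 1]))] + cells[extra + 1:]
    return cells

raw = pd.read_excel('file.xlsx', header=None, skiprows=1)
fixed = pd.DataFrame(
    (repair_row(list(row)) for row in raw.itertuples(index=False)),
    columns=EXPECTED,
)
```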
<python><pandas><dataframe>
2023-01-10 13:22:34
1
1,268
Francisco Cortes
75,070,552
4,869,005
How to apply a label to a strip based on the symbols present in two parallel arrays
I have data strips of length 3600. Each strip has two values:

1. symbols
2. aux_note — it looks like this: https://extendsclass.com/csv-editor.html#031e1f1

If the aux_note is empty, the previous aux_note value applies.

The above dataframe has 100000+ rows, and I want to create strips (windows) of length 3600.

I apply `set` on each strip to get the unique symbols and aux_notes per strip: the first array contains the symbols, the other contains the aux_notes.

We get a symbol value every time, but an aux_note value only if there is some event. If the aux_note value is empty, the previous aux_note value (the previous event state) applies.

The two arrays look like this when we apply the `set` operation on each strip:

```
sym_list = set(strip['symbol'])
aux_note_list = set(strip['aux_note'])
print(sym_list, cur_aux_note)

{'+', 'V', 'N'} {'', '(N\x00'}
{'N'} {''}
{'N'} {''}
{'N'} {''}
{'N'} {''}
{'N'} {''}
{'N'} {''}
{'V', 'N'} {''}
{'V', 'N'} {''}
{'N'} {''}
{'N'} {''}
{'V', 'N'} {''}
{'N'} {''}
{'A', 'N'} {''}
{'V', 'N'} {''}
{'V', 'N'} {''}
{'N'} {''}
{'N'} {''}
{'N'} {''}
{'N'} {''}
{'V', 'N'} {''}
{'V', 'N'} {''}
{'|', 'V', 'N'} {''}
{'N'} {''}
{'A', 'a', 'N'} {''}
{'V', 'A', 'a', 'N'} {''}
{'A', 'N'} {''}
{'|', 'N'} {''}
{'N', 'A'} {''}
{'N'} {''}
{'N'} {''}
{'N'} {''}
{'N'} {''}
{'N'} {''}
{'N'} {''}
{'N'} {''}
{'N'} {''}
{'A', 'a', 'N'} {''}
{'+', 'A', 'a', 'N'} {'', '(AFIB\x00', '(N\x00'}
{'+', 'A', 'a', 'N'} {'', '(AFIB\x00'}
{'N'} {''}
{'N'} {''}
{'V', 'N'} {''}
{'+', 'F', 'A', 'N'} {'', '(AFIB\x00', '(N\x00'}
{'a', 'N'} {''}
{'N'} {''}
{'N'} {''}
{'N'} {''}
{'N'} {''}
{'N'} {''}
{'+', 'N'} {'', '(AFL'}
{'N'} {''}
{'+', 'V', 'N'} {'', '(AFIB'}
{'N'} {''}
{'N'} {''}
{'N'} {''}
{'a', 'N'} {''}
{'N'} {''}
{'N'} {''}
{'N'} {''}
```

I want to give each strip a label based on the values present in the strip's symbols and aux_note. The aux_note is the decision maker here:

1. If the aux_note and symbols are the same, use the aux_note as the strip's label. E.g. {N} {N} then label = N.
2. If the aux_note is a value other than N, and that value is present in the symbols, use the aux_note value as the label. E.g. {A,N} {A} then label = A.
3. Give the highest priority to A. E.g. {A,V,N} {A} then label = A.

My attempt:

```python
for i in range(num_strips):
    strp = {}
    strp['signal'] = df_data[i * len_ecg_strip : (i+1)*len_ecg_strip]['MLII']
    temp = df_descr[df_descr['sample'] >= i * len_ecg_strip]
    temp = temp[df_descr['sample'] <= (i+1)*len_ecg_strip]
    strp['sample'] = temp['sample']
    strp['symbol'] = temp['symbol']
    strp['aux_note'] = temp['aux_note']
    prev_aux_note = strip_validate(strp, prev_aux_note)


def strip_validate(strip, pre_aux):
    sym_list = set(strip['symbol'])
    aux_note_list = list(set(strip['aux_note']))
    aux_note_list = list(filter(None, aux_note_list))
    pdb.set_trace()
    if len(aux_note_list) == 0:
        cur_aux_note = pre_aux
        label = cur_aux_note
    elif len(aux_note_list) == 1:
        cur_aux_note = aux_note_list[0]
        label = cur_aux_note
    elif 'AFIB' in aux_note_list:
        cur_aux_note = 'AFIB'
        label = cur_aux_note
    else:
        cur_aux_note = aux_note_list[1]
        label = cur_aux_note
    print(sym_list, cur_aux_note)
    return cur_aux_note, label
```
<python><for-loop><if-statement>
2023-01-10 13:15:51
0
2,257
user2129623
75,070,547
6,875,304
How to put all the templates in a single file and access them in Jinja2 from Python code
I have 30 to 40 SQL queries, and I want to store them in Jinja2 template files so that the Python code and the SQL queries are kept separate.

Currently I create a separate template file for each SQL query and load it with the Python code below:

```python
from jinja2 import Environment, FileSystemLoader

def main():
    file_loader = FileSystemLoader('path to templates')
    template_group = Environment(loader=file_loader)
    select_template = template_group.get_template("select_query.txt")
    print(select_template.render())
```

The content of select_query.txt is:

```sql
select * from employee.details where department="cse"
```

This is fine for just 3 queries in 3 separate template files, but I have 30 to 40 queries and they may increase in the future.

Is there any way to put all the SQL queries in a single template file and access each of them from the Python code?

Thanks in advance.
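A minimal sketch of one possible layout, not a confirmed answer: keep every query as a `{% macro %}` in a single template file and call the macros through `template.module`. The file and macro names are illustrative.

```python
from jinja2 import Environment, FileSystemLoader

# queries.sql (one file holding all queries):
#
#   {% macro select_employees(department) -%}
#   select * from employee.details where department="{{ department }}"
#   {%- endmacro %}
#
#   {% macro count_employees() -%}
#   select count(*) from employee.details
#   {%- endmacro %}

env = Environment(loader=FileSystemLoader('path to templates'))
queries = env.get_template('queries.sql').module  # exposes the macros

print(queries.select_employees('cse'))
print(queries.count_employees())
```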
<python><python-3.x><python-2.7><templates><jinja2>
2023-01-10 13:14:55
1
401
Akhil
75,070,527
433,267
Correct way to add dynamic form fields to WagtailModelAdminForm
I have a use case where I need to add dynamic form fields to a `WagtailModelAdminForm`. With standard Django I would normally just create a custom subclass and add the fields in the `__init__` method of the form. In Wagtail, because the forms are built up with the edit_handlers, this becomes a nightmare to deal with.

I have the following dynamic form:

```python
class ProductForm(WagtailAdminModelForm):
    class Meta:
        model = get_product_model()
        exclude = ['attributes', 'state', 'variant_of']

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        if self.instance:
            self.inject_attribute_fields()

    def inject_attribute_fields(self):
        for k, attr in self.instance.attributes.items():
            field_klass = None
            field_data = attr.get("input")
            field_args = {
                'label': field_data['name'],
                'help_text': field_data['help_text'],
                'required': field_data['is_required'],
                'initial': attr['value'],
            }
            if 'choices' in field_data:
                field_args['choices'] = (
                    (choice["id"], choice["value"])
                    for choice in field_data['choices']
                )
                if field_data['is_multi_choice']:
                    field_klass = forms.MultipleChoiceField
                else:
                    field_klass = forms.ChoiceField
            else:
                typ = field_data['attr_type']
                if typ == 'text':
                    field_klass = forms.CharField
                elif typ == 'textarea':
                    field_klass = forms.CharField
                    field_args['widget'] = forms.Textarea
                elif typ == 'bool':
                    field_klass = forms.BooleanField
                elif typ == 'int':
                    field_klass = forms.IntegerField
                elif typ == 'decimal':
                    field_klass = forms.DecimalField
                elif typ == 'date':
                    field_klass = forms.DateField
                    field_args['widget'] = AdminDateInput
                elif typ == 'time':
                    field_klass = forms.TimeField
                    field_args['widget'] = AdminTimeInput
                elif typ == 'datetime':
                    field_klass = forms.DateTimeField
                    field_args['widget'] = AdminDateTimeInput

            if field_klass is None:
                raise AttributeError('Cannot create widgets for invalid field types.')

            # Create the custom key
            self.fields[f"attributes__{k}"] = field_klass(**field_args)
```

Next I customized the ModelAdmin `EditView` (attributes are not present in the create view):

```python
class EditProductView(EditView):
    def get_edit_handler(self):
        summary_panels = [
            FieldPanel('title'),
            FieldPanel('description'),
            FieldPanel('body'),
        ]

        # NOTE: Product attributes are dynamic, so we generate them
        attributes_panel = get_product_attributes_panel(self.instance)

        variants_panel = []
        if self.instance.is_variant:
            variants_panel.append(
                InlinePanel(
                    'stockrecords', classname="collapsed",
                    heading="Variants & Prices"
                )
            )
        else:
            variants_panel.append(ProductVariantsPanel())

        return TabbedInterface([
            ObjectList(summary_panels, heading='Summary'),
            # This panel creates dynamic panels related to the dynamic form fields,
            # but raises an error saying that the "fields are missing".
            # Understandable because it's not present on the original model
            # ObjectList(attributes_panel, heading='Attributes'),
            ObjectList(variants_panel, heading='Variants'),
            ObjectList(promote_panels, heading='Promote'),
            ObjectList(settings_panels, heading='Settings'),
        ], base_form_class=ProductForm).bind_to_model(self.model_admin.model)
```

Here is the `get_product_attributes_panel()` function for reference:

```python
def get_product_attributes_panel(product) -> list:
    panels = []
    for key, attr in product.attributes.items():
        widget = None
        field_name = "attributes__" + key
        attr_type = attr['input'].get('attr_type')
        if attr_type == 'date':
            widget = AdminDateInput()
        elif attr_type == 'datetime':
            widget = AdminDateTimeInput()
        else:
            if attr_type is None and 'choices' in attr['input']:
                if attr['input']['is_multi_choice']:
                    widget = forms.SelectMultiple
                else:
                    widget = forms.Select
            else:
                widget = forms.TextInput()
        if widget:
            panels.append(FieldPanel(field_name, widget=widget))
        else:
            panels.append(FieldPanel(field_name))
    return panels
```

So the problem is...

A) Adding the ProductForm the way I did above (using it as the base_form_class in TabbedInterface) *almost works*: it adds the fields to the form, BUT I have no control over the rendering.

B) If I *uncomment* the line `ObjectList(attributes_panel, heading='Attributes'),` (to get nice rendering of the fields), then I get an error for my dynamic fields, saying that they are missing.

This is a very important requirement in the project I'm working on.

A temporary workaround is to create a **custom panel** to render the dynamic fields directly in the HTML template, but then I lose the Django form validation, which is also an important requirement.

Is there any way to add dynamic fields to the WagtailModelAdminForm that **preserves** the modeladmin features such as formsets, permissions etc.?
<python><django><wagtail><wagtail-admin>
2023-01-10 13:13:04
1
1,322
Andre
75,070,442
11,724,014
How to stop scipy.optimize.minimize after n iterations without improvement
I am looking to stop [scipy's minimization](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html) when there has been **no improvement in the score after n iterations**.

Think of a **counter that resets** whenever the minimization finds a better solution (a new minimum of the score). When the counter exceeds a given value, the search stops and the result is returned.

There is the `maxiter` parameter in `options`, but it caps the total number of iterations rather than the number of iterations without progress.
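A minimal sketch of one workaround, not a confirmed answer: `minimize` has no built-in patience option, but a callback can track the best score seen and abort by raising once `patience` iterations pass without improvement; the best point is kept by hand. The test function is illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def make_early_stop(fun, patience=10):
    state = {"best": np.inf, "best_x": None, "stale": 0}

    def callback(xk):
        val = fun(xk)  # note: one extra evaluation per iteration
        if val < state["best"]:
            state["best"], state["best_x"], state["stale"] = val, xk.copy(), 0
        else:
            state["stale"] += 1
            if state["stale"] >= patience:
                raise StopIteration  # propagates out of minimize()
    return callback, state

fun = lambda x: (x[0] - 2) ** 2 + (x[1] + 1) ** 2
cb, state = make_early_stop(fun, patience=5)
try:
    minimize(fun, x0=[10.0, 10.0], method="Nelder-Mead", callback=cb)
except StopIteration:
    pass
print(state["best_x"], state["best"])
```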
<python><scipy><minimization>
2023-01-10 13:05:12
0
1,314
Vincent Bénet
75,070,392
7,583,953
Why does the value of this class variable persist even after resetting it?
I have the following code, which is executed multiple times on different binary trees:

```python
class Solution:
    ans = []

    def getLonelyNodes(self, root: Optional[TreeNode]) -> List[int]:
        def helper(root):
            if root.left is None and root.right:
                self.ans.append(root.right.val)
                return helper(root.right)
            if root.left and root.right is None:
                self.ans.append(root.left.val)
                return helper(root.left)
            if root.left and root.right:
                return helper(root.left), helper(root.right)
            if root.left is None and root.right is None:
                return

        helper(root)
        ans = self.ans
        self.ans = []
        return ans
```

The problem is that even though I reset `self.ans` at the end, the values in `ans` persist into the next run (i.e. every run adds the previous answer to the current one).

I fixed the issue by moving `self.ans = []` above the helper function. But I don't understand why it makes a difference whether the answer is reset at the beginning or at the end of the function call. How come resetting `self.ans` at the end doesn't work, but doing it at the beginning does?
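A minimal sketch illustrating the mechanics (not the original class): `ans = []` is a *class* attribute, so `self.ans.append(...)` mutates the one list shared by every instance. The `self.ans = []` at the end creates a fresh *instance* attribute on that one object only; the class attribute still holds the accumulated values, and the next instance sees them. Resetting at the beginning instead creates the instance attribute before any appends, so the appends never touch the shared list.

```python
class Solution:
    ans = []

    def run(self, value):
        self.ans.append(value)  # no instance attribute yet: mutates Solution.ans
        out = self.ans
        self.ans = []           # shadows the class attribute on this instance only
        return out

print(Solution().run(1))  # [1]
print(Solution().run(2))  # [1, 2]  <- the previous run leaked in
```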
<python><class>
2023-01-10 13:00:52
1
9,733
Alec
75,070,236
4,492,738
Saving a redacted PDF file in Python to mask underneath text
I read a PDF file into Python, added a text box on top of the text that I'd like to redact, and saved the change to a new PDF file. When I search for the text in the redacted PDF with a PDF reader, the text can still be found.

Is there a way to save the PDF as a single-layer file? Or is there a way to ensure that the text under the text box is removed?

```python
import PyPDF2
import re
import fitz
import io
import os
import pandas
import numpy as np
from PyPDF2 import PdfFileReader, PdfFileWriter
from reportlab.pdfgen import canvas
from reportlab.lib.pagesizes import A4
from reportlab.graphics import renderPDF
from reportlab.lib import colors
from reportlab.graphics.shapes import *

reader = PyPDF2.PdfReader(files)

packet = io.BytesIO()
can = canvas.Canvas(packet, pagesize=A4)
can.rect(65, 750, 40, 30, stroke=1, fill=1)
can.setFillColorRGB(1, 1, 1)
can.save()
packet.seek(0)

new_pdf = PdfFileReader(packet)
output = PyPDF2.PdfFileWriter()
pageToOutput = reader.getPage(1)
pageToOutput.mergePage(new_pdf.getPage(0))
output.addPage(pageToOutput)
outputStream = open('NewFile.pdf', "wb")
output.write(outputStream)
outputStream.close()
```
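A minimal sketch of true redaction with PyMuPDF (`fitz`, already imported in the question), stated as one possible approach: redaction annotations physically delete the text underneath when applied, so the string can no longer be found by a PDF reader. The file name and search term are illustrative.

```python
import fitz  # PyMuPDF

doc = fitz.open("input.pdf")
for page in doc:
    # search_for returns the rectangles covering each match
    for rect in page.search_for("confidential"):
        page.add_redact_annot(rect, fill=(0, 0, 0))
    page.apply_redactions()  # removes the covered text, not just hides it
doc.save("NewFile.pdf")
```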
<python><pdf><redaction>
2023-01-10 12:48:44
2
883
TTZ
75,070,080
1,200,914
Create different python packages from same repository
I'm building a Python package from a source code repository using a `setup.py` script with `setuptools.setup(...)`. In this call I list all the Python libraries the project needs via the `install_requires` argument.

However, I noticed some users do not use all the sub-packages of this package, only some specific ones, which don't need some of the huge libraries (e.g. torch).

Given this situation, can I offer, from the same repository, something like `myrepo['full']` or `myrepo['little']`? Is there any documentation on how to do this, if it's possible?
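A minimal sketch, assuming standard setuptools "extras": keep the heavy dependencies out of `install_requires` and offer them as an optional extra, installed with `pip install "myrepo[full]"`. Package and dependency names are illustrative.

```python
from setuptools import setup, find_packages

setup(
    name="myrepo",
    version="1.0.0",
    packages=find_packages(),
    install_requires=["numpy"],   # needed by every sub-package
    extras_require={
        "full": ["torch"],        # only for the heavy sub-packages
    },
)
```

With this, `pip install myrepo` installs the light set, and `pip install "myrepo[full]"` pulls in torch as well.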
<python><repository><setuptools><python-packaging>
2023-01-10 12:36:21
1
3,052
Learning from masters
75,069,888
4,948,165
Perform numpy mean over a matrix using labels as indicators
```python
import numpy as np

arr = np.random.random((5, 3))
labels = [1, 1, 2, 2, 3]

arr
Out[136]:
array([[0.20349907, 0.1330621 , 0.78268978],
       [0.71883378, 0.24783927, 0.35576746],
       [0.17760916, 0.25003952, 0.29058267],
       [0.90379712, 0.78134806, 0.49941208],
       [0.08025936, 0.01712403, 0.53479622]])

labels
Out[137]: [1, 1, 2, 2, 3]
```

Assume I have this dataset. I would like, using the labels as indicators, to perform np.mean over the rows.

(The labels indicate the class of each row. They could also be `[0, 1, 1, 0, 4, 1, 4]`, so make no assumptions about them.)

So the output here will be an average over the:

```
1st and 2nd row.
3rd and 4th row.
5th row.
```

computed in the most efficient way numpy offers, like so:

```python
[np.mean(arr[:2], axis=0), np.mean(arr[2:4], axis=0), np.mean(arr[4:], axis=0)]
Out[180]:
[array([0.46116642, 0.19045069, 0.56922862]),
 array([0.54070314, 0.51569379, 0.39499737]),
 array([0.08025936, 0.01712403, 0.53479622])]
```

(In a real-life scenario the matrix dimensions could be `(100000, 256)`.)
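A minimal sketch of one vectorized answer, not a confirmed one: map the labels to group indices with `np.unique`, accumulate row sums per group with `np.add.at`, and divide by the group sizes from `bincount`. This works for arbitrary label values and makes one pass over shapes like `(100000, 256)`.

```python
import numpy as np

arr = np.random.random((5, 3))
labels = np.array([1, 1, 2, 2, 3])

uniq, inv = np.unique(labels, return_inverse=True)
sums = np.zeros((uniq.size, arr.shape[1]))
np.add.at(sums, inv, arr)                    # scatter-add rows per group
means = sums / np.bincount(inv)[:, None]     # divide by group sizes
print(means)  # one averaged row per unique label, in sorted label order
```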
<python><numpy>
2023-01-10 12:19:36
2
3,238
Eran Moshe
75,069,677
4,507,231
rpy2 throws a NotImplementedError concerning Conversion rules
I'm trying to run R code inside some Python (3.10) software using rpy2 (3.5.7). I want to know whether I can get rpy2 to work before trying anything complicated. This is an "off-the-shelf" execution, using one of the earliest examples in the documentation introduction. I am running this from inside the PyCharm IDE. There is no mention of any prerequisites in the documentation.

There is a slight nuance to this simple code: it is being executed within an event callback (clicking a button) using the DearPyGUI package.

This is the rpy2 code:

```python
import rpy2.robjects as robjects

print(robjects.r)
```

Unfortunately, this throws:

```
... raise NotImplementedError(_missingconverter_msg)
NotImplementedError: Conversion rules for `rpy2.robjects` appear to be missing. Those rules are in a Python contextvars.ContextVar. This could be caused by multithreading code not passing context to the thread.
```

This is a working example of the error:

```python
import dearpygui.dearpygui as dpg
import rpy2.robjects as robjects

def testFunction():
    print(robjects.r)

dpg.create_context()
dpg.create_viewport()
dpg.setup_dearpygui()

with dpg.window(label="Example Window"):
    dpg.add_text("Hello world")
    dpg.add_button(label="Save", callback=testFunction)

dpg.show_viewport()
dpg.start_dearpygui()
dpg.destroy_context()
```

With the full error message:

```
Traceback (most recent call last):
  File "/home/anthony/CPRD-software/test.py", line 6, in testFunction
    print(robjects.r)
  File "/home/anthony/anaconda3/envs/CPRD-software/lib/python3.10/site-packages/rpy2/robjects/__init__.py", line 451, in __str__
    version = self['version']
  File "/home/anthony/anaconda3/envs/CPRD-software/lib/python3.10/site-packages/rpy2/robjects/__init__.py", line 440, in __getitem__
    res = conversion.get_conversion().rpy2py(res)
  File "/home/anthony/anaconda3/envs/CPRD-software/lib/python3.10/functools.py", line 889, in wrapper
    return dispatch(args[0].__class__)(*args, **kw)
  File "/home/anthony/anaconda3/envs/CPRD-software/lib/python3.10/site-packages/rpy2/robjects/conversion.py", line 370, in _raise_missingconverter
    raise NotImplementedError(_missingconverter_msg)
NotImplementedError: Conversion rules for `rpy2.robjects` appear to be missing. Those rules are in a Python contextvars.ContextVar. This could be caused by multithreading code not passing context to the thread.
```

What is going on?
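A minimal sketch of one possible workaround, not a confirmed fix: the DearPyGui callback runs on a different thread, which does not inherit the contextvars holding rpy2's conversion rules. Activating the default converter locally inside the callback restores them for that thread.

```python
from rpy2.robjects import conversion, default_converter
import rpy2.robjects as robjects

def testFunction():
    # re-establish conversion rules in this thread's context
    with conversion.localconverter(default_converter):
        print(robjects.r)
```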
<python><r><rpy2>
2023-01-10 12:02:21
2
1,177
Anthony Nash
75,069,583
12,886,858
Get a db session inside pytest with FastAPI
I am trying to delete, at the end of a test, the test user I create while testing, but I get the error: `FAILED tests_main.py::test_delete_new_users - AttributeError: 'Depends' object has no attribute 'query'`

This is my code:

```python
def test_delete_new_users(db: Session = Depends(database.get_db)):
    auth = client.post('/token', data={'username': 'test123', 'password': 'test123'})
    access_token = auth.json().get('access_token')
    user = db.query(models.User).filter(models.User.username == 'test123').first()
    response = client.post('/user/delete/' + user.user_id,
                           headers={'Authorization': 'bearer' + access_token})
    assert response.status_code == 200
```

Do I need a separate API call to retrieve this user (for instance, a get_user_id_with_username endpoint that takes the username), or is there a way to make a db call within a pytest test? From what I've read, `Depends` can only be used in API routes.
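A minimal sketch of one possible answer, not a confirmed one: `Depends(...)` is only resolved by FastAPI while handling a request, so in tests the session has to be built directly. Assuming `database.get_db` is the usual generator dependency that yields a Session, a fixture can drive it by hand (`database` and `models` come from the application under test).

```python
import pytest

@pytest.fixture
def db():
    gen = database.get_db()   # the same generator FastAPI would use
    session = next(gen)
    try:
        yield session
    finally:
        gen.close()           # runs the dependency's cleanup code

def test_delete_new_users(db):
    user = db.query(models.User).filter(models.User.username == 'test123').first()
    assert user is not None
```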
<python><pytest><fastapi>
2023-01-10 11:53:27
0
633
Vedo
75,069,418
1,422,096
Log stderr to file, prefixed with datetime
I do proper logging with the `logging` module (`logger.info`, `logger.debug`, ...) and it gets written to a file.

But in some corner cases (external modules, uncaught exceptions, etc.) I sometimes still have errors written to `stderr`.

I log those to a file with:

```python
import sys
sys.stdout, sys.stderr = open("stdout.log", "a+", buffering=1), open("stderr.log", "a+", buffering=1)
print("hello")
1/0
```

**It works, but how can I also log the datetime before each error?**

Note: I'd like to avoid using `logging` for this part and use something more low-level.

I also want to avoid this solution:

```python
def exc_handler(ex_cls, ex, tb):
    with open('mylog.log', 'a') as f:
        dt = time.strftime('%Y-%m-%d %H:%M:%S')
        f.write(f"{dt}\n")
        traceback.print_tb(tb, file=f)
        f.write(f"{dt}\n")

sys.excepthook = exc_handler
```

because some external modules might override it. Is there a lower-level solution, like overriding `sys.stderr.print`?
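A minimal sketch of one low-level option, not a confirmed answer: replace `sys.stderr` with a small file-like wrapper whose `write()` stamps the datetime at the start of every line. Anything printed to stderr, including uncaught tracebacks, goes through `write()`, so no excepthook is involved and external modules cannot bypass it by overriding `sys.excepthook`.

```python
import sys
import time

class TimestampedFile:
    def __init__(self, stream):
        self.stream = stream
        self.at_line_start = True

    def write(self, text):
        for line in text.splitlines(keepends=True):
            if self.at_line_start and line.strip():
                self.stream.write(time.strftime('%Y-%m-%d %H:%M:%S '))
            self.stream.write(line)
            self.at_line_start = line.endswith('\n')

    def flush(self):
        self.stream.flush()

sys.stderr = TimestampedFile(open("stderr.log", "a+", buffering=1))
1 / 0  # each traceback line lands in stderr.log with a datetime prefix
```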
<python><windows><logging><stderr>
2023-01-10 11:37:36
2
47,388
Basj
75,069,376
7,462,275
How to select rows filtered on a condition over the previous and next rows in pandas and put them in an empty df?
Consider the following dataframe `df`:

```python
df = pd.DataFrame(
    {
        "col1": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
        "col2": ["A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K"],
        "col3": [1e-0, 1e-1, 1e-2, 1e-3, 1e-4, 1e-5, 1e-6, 1e-7, 1e-8, 1e-9, 1e-10],
        "col4": [0, 4, 2, 5, 6, 7, 6, 3, 6, 2, 1],
    }
)
```

I would like to select the rows whose `col4` value is greater than the `col4` values of both the previous and the next rows, and store them in an empty frame.

I wrote the following code, which works:

```python
df1 = pd.DataFrame()
for i in range(1, len(df) - 1, 1):
    if (df.iloc[i]['col4'] > df.iloc[i+1]['col4']) and (df.iloc[i]['col4'] > df.iloc[i-1]['col4']):
        df1 = pd.concat([df1, df.iloc[i:i+1]])
```

I get the expected dataframe `df1`:

```
  col1 col2          col3  col4
1    1    B  1.000000e-01     4
5    5    F  1.000000e-05     7
8    8    I  1.000000e-08     6
```

But this code is ugly and hard to read. Is there a better solution?
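A minimal sketch of one possible answer, not a confirmed one: compare `col4` against its neighbours with `shift()` and keep the rows where both conditions hold, with no loop and no incremental concat. The boundary rows drop out naturally because comparisons against the `NaN` produced by `shift` are False.

```python
import pandas as pd

df = pd.DataFrame({
    "col1": range(11),
    "col2": list("ABCDEFGHIJK"),
    "col3": [10.0 ** -i for i in range(11)],
    "col4": [0, 4, 2, 5, 6, 7, 6, 3, 6, 2, 1],
})

mask = (df["col4"] > df["col4"].shift(1)) & (df["col4"] > df["col4"].shift(-1))
df1 = df[mask]
print(df1)  # rows 1 (B, 4), 5 (F, 7) and 8 (I, 6), as in the question
```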
<python><pandas><dataframe>
2023-01-10 11:34:31
1
2,515
Stef1611
75,069,342
8,548,828
Is there an optimized way to convert a numpy array to fortran order when using memmaps
I have a memmapped numpy array:

```python
arr = np.load("a.npy", mmap_mode='r')
```

It is bigger than my memory. For my further computation I need it in Fortran order instead of C order. So I can use `np.asfortranarray` to convert it and then `np.save` to store it in a new file.

However, when I do this my memory usage grows in proportion to the input, which makes me think an object is being created in memory; I would like a fully file-to-file interaction.

How can I convert `a.npy` into Fortran order without holding the full object in memory?

---

For example, I have the array:

```python
arr = array([[1, 2],
             [3, 4]])
```

This is stored on disk by numpy as:

```
fortran_order: False
01 02 03 04
```

I can do the following transformations:

1.

```python
np.asfortranarray(arr)
array([[1, 2],
       [3, 4]])
```

saved as:

```
fortran_order: True
01 03 02 04
```

2.

```python
np.asfortranarray(arr.T)
array([[1, 3],
       [2, 4]])
```

saved as:

```
fortran_order: True
01 02 03 04
```

3.

```python
arr.T
array([[1, 3],
       [2, 4]])
```

saved as:

```
fortran_order: True
01 02 03 04
```

`arr.T` only converts the high-level accessibility; I need the on-disk ordering to be swapped (this will help my overall task by keeping the array indexing with C in cache). That can be done by calling `np.asfortranarray` without `arr.T`, but this incurs a full data copy instead of a transposed view being created and written in the transposed order.
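A minimal sketch of one possible approach, not a confirmed answer: create the destination `.npy` with `fortran_order=True` via `np.lib.format.open_memmap`, then copy the source in column blocks, so only one block's worth of data is materialized at a time and the OS pages both files in and out.

```python
import numpy as np

src = np.load("a.npy", mmap_mode="r")
dst = np.lib.format.open_memmap(
    "a_fortran.npy", mode="w+",
    dtype=src.dtype, shape=src.shape, fortran_order=True,
)

block = 1024  # columns per copy; tune to the available memory
for start in range(0, src.shape[1], block):
    stop = min(start + block, src.shape[1])
    dst[:, start:stop] = src[:, start:stop]
dst.flush()
```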
<python><numpy><numpy-memmap>
2023-01-10 11:31:41
0
3,266
Tarick Welling
75,069,262
1,862,861
Change background colour of PyQt5 QPushButton without losing the default button style
I have basic code for a PyQt5 GUI with two buttons. In it, I want to change the background colour of one of the buttons by setting the `background-color` style-sheet attribute for it. This works; however, under Windows it seems to remove all the other style attributes of the button, leaving an unattractive button compared to the standard one, as shown in the image:

[screenshot: the GUI on Windows — https://i.sstatic.net/fUsTO.png]

The same code under Linux does not lose the other button stylings and produces:

[screenshot: the GUI on Linux — https://i.sstatic.net/VpZ7t.png]

where things like the default rounded corners and hover attributes are kept.

The code is:

```python
import sys
from PyQt5.QtWidgets import QApplication, QGridLayout, QPushButton, QWidget

class Window(QWidget):
    def __init__(self):
        super().__init__()
        self.layout = QGridLayout()
        button1 = QPushButton("A")
        button1.setFixedSize(64, 64)
        button2 = QPushButton("B")
        button2.setFixedSize(64, 64)
        button2.setStyleSheet("background-color: #ff0000")
        self.layout.addWidget(button1, 0, 0)
        self.layout.addWidget(button2, 0, 1)
        self.setLayout(self.layout)
        self.show()

app = QApplication([])
demo = Window()
demo.show()
sys.exit(app.exec())
```

Is it possible to set the background without losing the other attributes under Windows (11)?

On Windows, I'm running in a conda environment with:

```
pyqt          5.12.3
pyqt5-sip     4.19.18
pyqtchart     5.12
pyqtwebengine 5.12.1
qt            5.12.9
```

and on Linux (Ubuntu 20.04.5 running via WSL2) in a conda environment with:

```
pyqt         5.15.7
pyqt5-sip    12.11.0
qt-main      5.15.2
qt-webengine 5.15.9
qtconsole    5.3.2
```
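A sketch of one common workaround, stated as such rather than a confirmed fix: once a stylesheet rule touches a button, Windows drops its native styling for it, so the missing look (border, radius, hover state) can be supplied in the same stylesheet. The colours and metrics here are arbitrary.

```python
button2.setStyleSheet("""
    QPushButton {
        background-color: #ff0000;
        border: 1px solid #adadad;
        border-radius: 4px;
    }
    QPushButton:hover {
        background-color: #ff4040;
        border: 1px solid #0078d7;
    }
""")
```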
<python><pyqt5>
2023-01-10 11:24:31
2
7,300
Matt Pitkin
75,069,247
340,819
‘OSError: [Errno 22] Invalid argument’ when attempting to open a file in the %LOCALAPPDATA%\Microsoft\WindowsApps directory
Python won't let me open this file. Why not? The format of the call is correct, and it's definitely a file that exists on disk.

```
Python 3.10.5 (tags/v3.10.5:f377153, Jun  6 2022, 16:14:13) [MSC v.1929 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import os, os.path
>>> path = os.path.join(os.getenv('LOCALAPPDATA'), r"Microsoft\WindowsApps\ilspy.exe")
>>> f = open(path, "rb")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
OSError: [Errno 22] Invalid argument: 'C:\\Users\\<my name>\\AppData\\Local\\Microsoft\\WindowsApps\\ilspy.exe'
>>>
```
<python><windows>
2023-01-10 11:23:44
0
22,450
Hammerite
75,069,239
11,867,978
How to convert code to write data to .xlsx instead of .csv using Python without pandas?
I have written Python code to store data in CSV format but now need to store it in an .xlsx file. How do I convert the code below to write the output to an .xlsx file?

```python
details_file = self.get_local_storage_path() + '/Myfile.xlsx'
csv_columns = (
    'id',
    'name',
    'place',
    'salary',
    'email',
)
with open(details_file, 'w') as csvfile:
    writer = csv.DictWriter(csvfile, fieldnames=csv_columns, extrasaction='ignore')
    writer.writeheader()
    for data in details:
        writer.writerow(data)
```

I tried the following, but get the error `TypeError: int() argument must be a string, a bytes-like object or a number, not 'dict'`:

```python
with open(details_file, 'w') as csvfile:
    print("inside open")
    workbook = xlsxwriter.Workbook(csvfile)
    worksheet = workbook.add_worksheet()
    print("before for loop")
    for data in details:
        worksheet.write(data)
    workbook.close()
```
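A minimal sketch of one possible conversion, not a confirmed answer: xlsxwriter opens the file itself (pass the path, not an open file object), and `write()` takes a row, a column and a single value, so the header goes in with `write_row` and each dict is written one cell at a time. The sample data is illustrative.

```python
import xlsxwriter

columns = ('id', 'name', 'place', 'salary', 'email')
details = [{'id': 1, 'name': 'Mia', 'place': 'X', 'salary': 10, 'email': 'a@x'}]

workbook = xlsxwriter.Workbook('Myfile.xlsx')   # path, not a file object
worksheet = workbook.add_worksheet()
worksheet.write_row(0, 0, columns)              # header row
for r, data in enumerate(details, start=1):
    for c, col in enumerate(columns):
        worksheet.write(r, c, data.get(col, ''))
workbook.close()
```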
<python><python-3.x><csv><xlsx>
2023-01-10 11:22:57
1
448
Mia
75,069,099
17,487,457
Expand a multidimensional array with another array of a different shape
I have the following arrays:

```python
A = np.array([
    [[[0, 1, 2, 3], [3, 0, 1, 2], [2, 3, 0, 1], [1, 3, 2, 1], [1, 2, 3, 0]]],
    [[[9, 8, 7, 6], [5, 4, 3, 2], [0, 9, 8, 3], [1, 9, 2, 3], [1, 0, -1, 2]]],
    [[[0, 7, 1, 2], [1, 2, 1, 0], [0, 2, 0, 7], [-1, 3, 0, 1], [1, 0, 1, 0]]]
])
A.shape  # (3, 1, 5, 4)

B = np.array([
    [[[1, 0], [-1, 2], [9, 1], [8, 2], [7, 0]]],
    [[[9, 6], [5, 2], [0, 3], [1, 9], [1, 0]]],
    [[[0, 7], [1, 0], [0, 7], [-1, 1], [0, 0]]]
])
B.shape  # (3, 1, 5, 2)
```

I want to expand array `A` with `B` along the last dimension of `A`, such that the result `X` is:

```python
X = np.array([
    [[[0, 1, 2, 3, 1, 0], [3, 0, 1, 2, -1, 2], [2, 3, 0, 1, 9, 1], [1, 3, 2, 1, 8, 2], [1, 2, 3, 0, 7, 0]]],
    [[[9, 8, 7, 6, 9, 6], [5, 4, 3, 2, 5, 2], [0, 9, 8, 3, 0, 3], [1, 9, 2, 3, 1, 9], [1, 0, -1, 2, 1, 0]]],
    [[[0, 7, 1, 2, 0, 7], [1, 2, 1, 0, 1, 0], [0, 2, 0, 7, 0, 7], [-1, 3, 0, 1, -1, 1], [1, 0, 1, 0, 0, 0]]]
])
X.shape  # (3, 1, 5, 6)
```
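A minimal sketch: the shapes agree on every axis except the last, so this is a plain concatenation along `axis=-1`, taking `(3, 1, 5, 4)` and `(3, 1, 5, 2)` to `(3, 1, 5, 6)`. `A` and `B` are the arrays from the question.

```python
import numpy as np

X = np.concatenate([A, B], axis=-1)
print(X.shape)  # (3, 1, 5, 6)
```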
<python><arrays><numpy><multidimensional-array><numpy-ndarray>
2023-01-10 11:11:25
1
305
Amina Umar
75,069,062
17,210,463
module 'numpy' has no attribute 'object'
I am getting the error below when running an `mlflow` app:

> raise AttributeError("module {!r} has no attribute " AttributeError: module 'numpy' has no attribute 'object'

Can someone help me with this?
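A sketch of the usual cause, stated as an assumption: numpy 1.24 removed the long-deprecated alias `np.object`, so code (often in an older dependency) that still uses it breaks. Either update the offending code to plain `object`, or pin an older numpy in the image.

```python
# In your own code, the one-line fix:
dtype = object            # instead of np.object

# Or, in the Dockerfile / requirements, pin numpy below 1.24:
#   pip install "numpy<1.24"
```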
<python><python-3.x><numpy><kubernetes><dockerfile>
2023-01-10 11:08:51
4
369
Divya
75,069,045
7,841,521
OR-Tools in Python: how to set the power of a variable
Here is a Google OR-Tools example that optimizes a function:

```python
from ortools.linear_solver import pywraplp

def LinearProgrammingExample():
    """Linear programming sample."""
    # Instantiate a Glop solver, naming it LinearExample.
    solver = pywraplp.Solver.CreateSolver('GLOP')
    if not solver:
        return

    # Create the two variables and let them take on any non-negative value.
    x = solver.NumVar(0, solver.infinity(), 'x')
    y = solver.NumVar(0, solver.infinity(), 'y')
    print('Number of variables =', solver.NumVariables())

    # Constraint 0: x + 2y <= 14.
    solver.Add(x + 2 * y <= 14.0)
    # Constraint 1: 3x - y >= 0.
    solver.Add(3 * x - y >= 0.0)
    # Constraint 2: x - y <= 2.
    solver.Add(x - y <= 2.0)
    print('Number of constraints =', solver.NumConstraints())

    # Objective function: 3x + 4y.
    solver.Maximize(3 * x + 4 * y)

    # Solve the system.
    status = solver.Solve()
    if status == pywraplp.Solver.OPTIMAL:
        print('Solution:')
        print('Objective value =', solver.Objective().Value())
        print('x =', x.solution_value())
        print('y =', y.solution_value())
    else:
        print('The problem does not have an optimal solution.')

    print('\nAdvanced usage:')
    print('Problem solved in %f milliseconds' % solver.wall_time())
    print('Problem solved in %d iterations' % solver.iterations())

LinearProgrammingExample()
```

But instead of optimizing `3*x + 4*y`, I would like to optimize `3*x**2 + 4*y`. How do I set the power of x? I tried `x*x`, `**` and `np.power`, but x is a solver variable object, so none of these work. Any solution?
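A minimal sketch of one alternative, stated as such: GLOP is a pure linear-programming solver and cannot represent `x**2` at all. With integer variables, OR-Tools' CP-SAT solver can, via `AddMultiplicationEquality`. The bounds below are illustrative; the constraints mirror the example.

```python
from ortools.sat.python import cp_model

model = cp_model.CpModel()
x = model.NewIntVar(0, 100, 'x')
y = model.NewIntVar(0, 100, 'y')
x2 = model.NewIntVar(0, 100 * 100, 'x2')
model.AddMultiplicationEquality(x2, [x, x])  # x2 == x * x

model.Add(x + 2 * y <= 14)
model.Add(3 * x - y >= 0)
model.Add(x - y <= 2)
model.Maximize(3 * x2 + 4 * y)

solver = cp_model.CpSolver()
status = solver.Solve(model)
if status in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print(solver.Value(x), solver.Value(y), solver.ObjectiveValue())
```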
<python><or-tools>
2023-01-10 11:06:27
1
347
lelorrain7
75,069,012
5,283,030
Set initial axis limits while preserving pan/zoom in DearPyGui
Using DearPyGui, the x-axis and y-axis limits of a plot can be set using the `dpg.set_axis_limits` function, i.e.

```python
dpg.set_axis_limits("xaxis", xmin, xmax)
dpg.set_axis_limits("yaxis", ymin, ymax)
```

This *does* set the limits of the respective axis, but it also prevents the plot from being panned or zoomed like a default DearPyGui plot (when `dpg.set_axis_limits` is **not** used).

Is it possible to set the *initial axis limits* of a DearPyGui plot (similar to the above) but still have the ability to pan and zoom beyond those *initial* limits?

Note: due to the structure of my data, relying on auto-formatting via `dpg.set_axis_limits_auto` does not work for setting the initial plot view.
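A minimal sketch of one workaround, not a confirmed answer: apply the initial limits, render one frame so they take effect, then release them with `set_axis_limits_auto` so panning and zooming work again. The plot contents are illustrative.

```python
import dearpygui.dearpygui as dpg

dpg.create_context()
with dpg.window(label="Plot"):
    with dpg.plot(width=400, height=300):
        dpg.add_plot_axis(dpg.mvXAxis, tag="xaxis")
        dpg.add_plot_axis(dpg.mvYAxis, tag="yaxis")
        dpg.add_line_series([0, 1, 2], [0, 1, 4], parent="yaxis")

dpg.set_axis_limits("xaxis", 0, 2)   # initial view
dpg.set_axis_limits("yaxis", 0, 4)

dpg.create_viewport(width=500, height=400)
dpg.setup_dearpygui()
dpg.show_viewport()
dpg.render_dearpygui_frame()         # one frame with the limits applied
dpg.set_axis_limits_auto("xaxis")    # now release them: pan/zoom works
dpg.set_axis_limits_auto("yaxis")
dpg.start_dearpygui()
dpg.destroy_context()
```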
<python><dearpygui>
2023-01-10 11:03:51
1
1,155
hyperdelia
75,068,956
6,870,955
How to mock inner method's default parameter in Pytest?
I am having problems trying to mock/patch a default parameter of a method that is called inside the method under test with pytest. The code looks like this:

```python
class Repository:
    DEFAULT_VERSION = "0.1.10"
    ...

    @classmethod
    def _get_metadata(cls, id: str, version: str = DEFAULT_VERSION) -> Dict[str, str]:
        return ...

    def write(self, df: DataFrame, id: str) -> None:
        ...
        metadata = self._get_metadata(id)


class TestRepository:
    def test_write(self, ...):
        assert df.write(df=test_df, id="1").count() > 1


TEST_DEFAULT_VERSION = "0.2.20"
```

Now, I would like to mock the value of the `DEFAULT_VERSION` parameter to be the value of `TEST_DEFAULT_VERSION`. How can I do that in pytest?
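A minimal sketch of one option, stated as such: default argument values are bound when the function is defined, so patching `DEFAULT_VERSION` afterwards has no effect on `_get_metadata`. Re-wrapping the classmethod so the default comes from `TEST_DEFAULT_VERSION` does work:

```python
TEST_DEFAULT_VERSION = "0.2.20"

def test_write(monkeypatch):
    original = Repository._get_metadata.__func__  # unwrap the classmethod

    def patched(cls, id: str, version: str = TEST_DEFAULT_VERSION):
        return original(cls, id, version)

    monkeypatch.setattr(Repository, "_get_metadata", classmethod(patched))
    # ... exercise Repository().write(...) as in the real test
```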
<python><unit-testing><pytest>
2023-01-10 10:59:36
1
1,187
Bartosz Gajda
75,068,952
5,672,950
Keras: inverse scaling of model predictions causes broadcasting problems with shapes
I have built a multi-class classification model with Keras, and after training I would like to predict the value for one of my test inputs.

This is the part where I scaled the features:

```python
x = dataframe.drop("workTime", axis=1)
x = dataframe.drop("creation", axis=1)

from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
x = pd.DataFrame(sc.fit_transform(x))
y = dataframe["workTime"]

import seaborn as sb
corr = dataframe.corr()
sb.heatmap(corr, cmap="Blues", annot=True)

print("Scaled features:", x.head(3))
```

Then I did:

```python
y_cat = to_categorical(y)
x_train, x_test, y_train, y_test = train_test_split(x.values, y_cat, test_size=0.2)
```

And built the model:

```python
model = Sequential()
model.add(Dense(16, input_shape=(9,), activation="relu"))
model.add(Dense(8, activation="relu"))
model.add(Dropout(0.5))
model.add(Dense(6, activation="softmax"))
model.compile(Adam(lr=0.0001), "categorical_crossentropy", metrics=["categorical_accuracy"])
model.summary()
model.fit(x_train, y_train, verbose=1, batch_size=8, epochs=100, shuffle=True)
```

After training finished, I wanted to take the first element of the test data and predict/classify it:

```python
print(x_test.shape, x_train.shape)  # (1550, 9) (6196, 9)

firstTest = x_test[:1]
# [[ 2.76473141  1.21064165  0.18816548 -0.94077449 -0.30981017 -0.37723917
#   -0.44471711 -1.44141792  0.20222467]]

prediction = model.predict(firstTest)
print(prediction)
# [[7.5265622e-01 2.4710520e-01 2.3643016e-04 2.1405797e-06 3.8411264e-19
#   9.4137732e-23]]

unscaled = sc.inverse_transform(prediction)
print("prediction", unscaled)
```

During this I get:

```
ValueError: operands could not be broadcast together with shapes (1,6) (9,) (1,6)
```

I think it may be related to my scalers. Please correct me if I'm wrong, but what I want to achieve here is either a single output value telling me how this entry was classified, or an array of probabilities for each class label.

Thank you for any hints.
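A minimal sketch of the likely reading, stated as such: the scaler was fit on the 9 input features, while the prediction is a 6-class probability vector, so the two are unrelated and `inverse_transform` cannot be applied to it. To read the prediction, take the most probable class (or keep the whole probability vector). `model` and `firstTest` are from the question.

```python
import numpy as np

prediction = model.predict(firstTest)              # shape (1, 6)
predicted_class = int(np.argmax(prediction, axis=1)[0])
print("class:", predicted_class)                   # e.g. 0
print("probabilities:", prediction[0])             # per-class confidences
```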
<python><pandas><tensorflow><keras><scikit-learn>
2023-01-10 10:59:31
1
954
Ernesto
75,068,739
1,422,096
How to make f"..." string formatting use a comma instead of a dot as the decimal separator?
I tried:

```python
import locale
print(locale.locale_alias)
locale.setlocale(locale.LC_ALL, '')
locale.setlocale(locale.LC_NUMERIC, "french")
print(f"{3.14:.2f}")
```

but the output is `3.14`, whereas I would like `3,14`.

**How can I do this with f"..." string formatting?**

Note: I don't want to use `.replace(".", ",")`.

Note: I'm looking for a Windows solution, and the answers to [How to format a float with a comma as decimal separator in an f-string?](https://stackoverflow.com/questions/55379722/how-to-format-a-float-with-a-comma-as-decimal-separator-in-an-f-string) don't work (so this is not a duplicate on Windows):

```python
locale.setlocale(locale.LC_ALL, 'nl_NL')
# or
locale.setlocale(locale.LC_ALL, 'fr_FR')
```

> locale.Error: unsupported locale setting
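A minimal sketch for Windows, not a confirmed answer: Windows uses its own locale names (e.g. `'French_France.1252'` rather than `'fr_FR'`), and the locale-aware `n` presentation type has to be used instead of `f`, which always prints a dot. Note that the precision of `n` counts significant digits, like `g`.

```python
import locale

locale.setlocale(locale.LC_NUMERIC, 'French_France.1252')  # Windows-style name
x = 3.14159
print(f"{x:.3n}")  # 3,14
```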
<python><windows><floating-point><decimal><locale>
2023-01-10 10:38:32
3
47,388
Basj
75,068,551
9,703,418
Pytest showing error if path to test is not specified
I have a conftest.py inside my tests/ folder which contains a fixture with a Spark context, as follows:

```python
import pytest
from pyspark import SparkConf
from sedona.utils import SedonaKryoRegistrator, KryoSerializer
from sedona.register import SedonaRegistrator
from pyspark.sql import SparkSession

@pytest.fixture
def spark_session_sedona():
    parameters = {'spark.driver.maxResultSize': '3g',
                  'spark.hadoop.fs.s3a.impl': 'org.apache.hadoop.fs.s3a.S3AFileSystem',
                  'spark.sql.execution.arrow.pyspark.enabled': True,
                  'spark.scheduler.mode': 'FAIR'}
    spark_conf = SparkConf().setAll(parameters.items())
    spark_session_conf = (
        SparkSession.builder.appName('appName')
        .enableHiveSupport()
        .config('spark.jars.packages',
                'org.apache.hadoop:hadoop-common:3.3.4,'
                'org.apache.hadoop:hadoop-azure:3.3.4,'
                'com.microsoft.azure:azure-storage:8.6.6,'
                'io.delta:delta-core_2.12:1.0.0,'
                'org.apache.sedona:sedona-python-adapter-3.0_2.12:1.2.1-incubating,'
                'org.datasyslab:geotools-wrapper:1.3.0-27.2')
        .config(conf=spark_conf)
        .config("spark.serializer", KryoSerializer.getName)
        .config("spark.kryo.registrator", SedonaKryoRegistrator.getName)
    )
    return spark_session_conf.getOrCreate()
```

Then I have a test that takes this fixture as a parameter and basically executes:

```python
SedonaRegistrator.registerAll(spark)
```

When I execute the command

> pytest

it returns the error:

> TypeError: 'JavaPackage' object is not callable

However, if I execute:

> pytest src/tests/test_sedona.py

the test passes without any issue.

Does anybody know what's going on?

Full error:

```
src/tests/utils/test_lanes_scale.py:39:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
src/adas_lanes/utils/lanes_scale.py:112: in lanes_sql_line
    SedonaRegistrator.registerAll(spark)
/home/vscode/.local/lib/python3.8/site-packages/sedona/register/geo_registrator.py:43: in registerAll
    cls.register(spark)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cls = <class 'sedona.register.geo_registrator.SedonaRegistrator'>
spark = <pyspark.sql.session.SparkSession object at 0x7f847a72f340>

    @classmethod
    def register(cls, spark: SparkSession):
>       return spark._jvm.SedonaSQLRegistrator.registerAll(spark._jsparkSession)
E       TypeError: 'JavaPackage' object is not callable
```
<python><pyspark><pytest><apache-sedona>
2023-01-10 10:21:32
1
384
Rorepio
75,068,402
7,714,681
Add values to nonexistent keys in a dictionary
I have a directory with many pickled files, each containing a dictionary. The filename indicates the settings of the dictionary, e.g. `20NewsGroup___10___Norm-False___Probs-True___euclidean.pickle`.

I want to combine these different dicts into one large dict. To do this, I have written the following code:

```python
PATH = '<SOME PATH>'
all_dicts = os.listdir(PATH)

one_dict = dict()
for i, filename in enumerate(all_dicts):
    infile = open(PATH + filename, 'rb')
    new_dict = pickle.load(infile)
    infile.close()

    splitted = filename.split('___')
    splitted[4] = splitted[4].split('.')[0]

    one_dict[splitted[0]][splitted[1]][splitted[2]][splitted[3]][splitted[4]] = new_dict
```

However, when I run this I get a `KeyError`, as the key for `splitted[0]` does not exist yet. What is the best way to populate a dictionary like the one I envision?
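A minimal sketch of one possible answer, not a confirmed one: walk down the nesting with `setdefault`, which creates each missing level on the way, then attach the loaded dict at the innermost key.

```python
import os
import pickle

PATH = '<SOME PATH>'

one_dict = {}
for filename in os.listdir(PATH):
    with open(os.path.join(PATH, filename), 'rb') as infile:
        new_dict = pickle.load(infile)
    parts = filename.rsplit('.', 1)[0].split('___')  # drop ".pickle", split settings
    node = one_dict
    for key in parts[:-1]:
        node = node.setdefault(key, {})              # create missing levels
    node[parts[-1]] = new_dict
```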
<python><dictionary><pickle><keyerror>
2023-01-10 10:09:33
2
1,752
Emil
75,068,385
12,131,472
How to convert the latest 2 dates per category in a dataframe column to strings
I have this Pandas dataframe:

```
   category       date        value
0     A6TCE 2023-01-06          NaN
1     A6TCE 2023-01-09          NaN
2      BDTI 2023-01-06          NaN
3      BDTI 2023-01-09          NaN
4     S2TCE 2023-01-06          NaN
5     S2TCE 2023-01-09          NaN
6       TD1 2023-01-06        38.67
7       TD1 2023-01-09        37.39
8      TD14 2023-01-09       250.31
9      TD14 2023-01-10       248.31
10     TD15 2023-01-06        54.03
11     TD15 2023-01-09        52.36
12     TD18 2023-01-06       425.08
13     TD18 2023-01-09       417.08
14     TD19 2023-01-06       182.94
15     TD19 2023-01-09       201.38
16      TD2 2023-01-06        53.42
17      TD2 2023-01-09        51.59
18     TD20 2023-01-06        92.05
19     TD20 2023-01-09        93.95
20     TD21 2023-01-06       314.00
21     TD21 2023-01-09       301.00
22     TD22 2023-01-06   8437500.00
23     TD22 2023-01-09   8411111.00
24     TD23 2023-01-06        68.19
25     TD23 2023-01-09        67.38
26     TD25 2023-01-06       161.43
27     TD25 2023-01-09       151.43
28     TD26 2023-01-06       140.00
29     TD26 2023-01-09       137.81
30     TD3C 2023-01-06        52.91
31     TD3C 2023-01-09        50.77
32      TD6 2023-01-06       169.61
33      TD6 2023-01-09       168.67
34      TD7 2023-01-06       168.56
35      TD7 2023-01-09       168.25
36      TD8 2023-01-09       242.86
37      TD8 2023-01-10       241.79
38      TD9 2023-01-06       129.38
39      TD9 2023-01-09       128.44
40    V2TCE 2023-01-06          NaN
41    V2TCE 2023-01-09          NaN
```

These are the latest 2 available dates of a time series per category. Different categories don't necessarily share the same dates: for example, TD8's latest 2 dates are Jan 9 and 10 while the others' are Jan 6 and 9, and we cannot know this before retrieving the data.

I wish to replace the later of each category's two dates with the string "2nd day" and the earlier one with "1st day", so it looks like this (extract from the middle):

```
34     TD7  1st day      168.56
35     TD7  2nd day      168.25
36     TD8  1st day      242.86
37     TD8  2nd day      241.79
38     TD9  1st day      129.38
39     TD9  2nd day      128.44
```

What I tried: since I thought all categories would have the same dates, I did

```python
df_last_2d["date"] = df_last_2d["date"].dt.strftime("%Y-%m-%d")
days = dict(zip(sorted(df_last_2d["date"].unique()), ["1st day", "2nd day"]))
df_last_2d["date"] = df_last_2d["date"].apply(lambda x: days[x])
```

but the last line fails with

```
    df_last_2d["date"] = df_last_2d["date"].apply(lambda x: days[x])
KeyError: '2023-01-10'
```
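A minimal sketch of one possible answer, not a confirmed one: rank the dates within each category instead of using one global date-to-label mapping, so categories with different trading days (like TD8) are handled correctly. `df_last_2d` is the frame from the question, with `date` still a datetime column.

```python
rank = df_last_2d.groupby("category")["date"].rank(method="dense")
df_last_2d["date"] = rank.map({1.0: "1st day", 2.0: "2nd day"})
```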
<python><pandas><dataframe><time-series>
2023-01-10 10:08:33
1
447
neutralname
75,068,372
1,068,980
Update dictionary keys in one line
I am updating the key names of a list of dictionaries this way:

```python
def update_keys(vars: list) -> None:
    keys = ["A", "V", "C"]
    for v in vars:
        for key in keys:
            if key in v:
                v[key.lower()] = v.pop(key)
```

Is there a pythonic way to do the key loop/update in a single line? Thank you in advance!
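A minimal sketch of one possible answer, not a confirmed one: rebuild each dict with a comprehension, lowercasing only the selected keys; the slice assignment keeps the in-place, return-None behaviour of the original.

```python
def update_keys(dicts: list) -> None:
    keys = {"A", "V", "C"}
    dicts[:] = [{(k.lower() if k in keys else k): v for k, v in d.items()}
                for d in dicts]
```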
<python><python-3.x><loops><dictionary><list-comprehension>
2023-01-10 10:07:26
1
369
P. Solar
75,068,249
7,454,177
Why does Python list parsing work differently inline?
<p>In our project we try to work with the output of an aggregation, which is more challenging than expected. The following code throws an <code>IndexError</code>, even though our object looks (when using the <code>print</code> command) like this <code>[{'_id': None, 'sum': 2700}]</code>.</p> <pre class="lang-py prettyprint-override"><code>output = get_mongo_collection().aggregate([ {'$match': {'topic_name': topic, 'internal_id': self.internal_id }}, {'$group': {'_id': None, 'sum': {'$sum': '$payload.val'}}}]) print(list(output)[0][&quot;sum&quot;]) </code></pre> <p>Things we tried and were confused by</p> <pre class="lang-py prettyprint-override"><code>out = list(output) print(out) # [{'_id': None, 'sum': 2700}] print(len(out)) # 1 print(out[0][&quot;sum&quot;]) # 2700 print(list(output)) # [] print(len(list(output))) # 0 </code></pre> <p>It seems that this has something to do with the inline <code>list</code> keyword usage. Maybe this has something to do with the pymongo aggregation object? Because trying to reproduce it with a list created manually, it behaves as expected.</p>
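<p>For illustration, the same behaviour can be reproduced with any one-shot iterator, which is what an aggregation cursor is; the first <code>list()</code> call drains it, so later calls see nothing:</p> <pre><code>output = iter([{'_id': None, 'sum': 2700}])  # stand-in for the pymongo cursor
print(list(output))  # [{'_id': None, 'sum': 2700}]
print(list(output))  # [] -- the iterator is already exhausted
</code></pre>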
<python><pymongo>
2023-01-10 09:57:47
0
2,126
creyD
75,068,061
5,638,513
Finding All DataFrame Matches for One Column to Get Combinations
<p>Let's say I have a DataFrame <code>base_df</code> that reads:</p> <pre><code> 0 1 2 3 0 2 'A' 'B' NaN 1 2 'A' 'C' NaN 2 2 'A' NaN 'D' 3 2 'A' NaN 'E' 4 2 'A' NaN 'F' </code></pre> <p>How can I expand through the cells and columns, preferably without needing to iterate, to produce:</p> <pre><code> 0 1 2 3 0 2 'A' 'B' NaN 1 2 'A' 'C' NaN 2 2 'A' NaN 'D' 3 2 'A' NaN 'E' 4 2 'A' NaN 'F' 5 3 'A' 'B' 'D' 6 3 'A' 'C' 'D' 7 3 'A' 'B' 'E' 8 3 'A' 'C' 'E' 9 3 'A' 'B' 'F' 10 3 'A' 'C' 'F' </code></pre> <p>Column 0 I can handle fine with <code>base_df.count(axis=1)</code>, but my solutions are generally forcing me to iterate through the rows with <code>.iterrows()</code>. Is there a better approach in pandas?</p> <p>Edit: I managed to work this out, though it's hardly fast enough to be advantageous:</p> <pre><code>DF = pd.DataFrame in_def = &lt;A STRING-NAN DF&gt; colspan = len(d.PG_LANGS) + 1 cols = range(1, colspan) for keep_len in range(3, len(d.PG_LANGS) + 1): out_df: DF = DF(columns=range(colspan)) print('KEEP LEN:', keep_len) for dex_a in cols: for dex_b in cols: if dex_a == dex_b: continue a_df: DF = in_df[in_df[dex_a].notna()] sansb_df: DF = a_df[a_df[dex_b].isna()] withb_df: DF = a_df[a_df[dex_b].notna()] shared_as: set[str] = \ set(sansb_df[dex_a]) &amp; set(withb_df[dex_a]) # type: ignore for sha in shared_as: sansb: DF = \ sansb_df[sansb_df[dex_a] == sha] # type: ignore withb: DF = \ withb_df[withb_df[dex_a] == sha] # type: ignore # print('SANS', sansb.shape[0]) # print('WITH', withb.shape[0]) if sansb.shape[0] == 0: continue if withb.shape[0] == 0: continue sansb = \ pd.concat([sansb] * withb.shape[0], # type: ignore axis=0, ignore_index=True) withb = \ pd.concat([withb] * sansb.shape[0], # type: ignore axis=0, ignore_index=True) sansb[dex_b] = withb[dex_b] sansb.drop_duplicates(ignore_index=True, inplace=True) # print(sansb) out_df = \ pd.concat([out_df, sansb], axis=0, # type: ignore ignore_index=True, sort=False) out_df.reset_index() out_df[0] = out_df.count(axis=1) # type: ignore out_df.drop_duplicates(ignore_index=True, inplace=True) print(out_df) in_df = out_df </code></pre>
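<p>One vectorised sketch for the three-column case shown, assuming each row carries a value in exactly one of columns 2 or 3: split on which column is populated and cross-merge on the shared key in column 1.</p> <pre><code>import numpy as np
import pandas as pd

base_df = pd.DataFrame({0: [2] * 5, 1: list('AAAAA'),
                        2: ['B', 'C', np.nan, np.nan, np.nan],
                        3: [np.nan, np.nan, 'D', 'E', 'F']})

with_c2 = base_df.loc[base_df[2].notna(), [1, 2]]
with_c3 = base_df.loc[base_df[3].notna(), [1, 3]]
combos = with_c2.merge(with_c3, on=1)   # cross-join within each key in column 1
combos[0] = combos.count(axis=1)        # recompute the non-NaN count
out = pd.concat([base_df, combos], ignore_index=True)
</code></pre>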
<python><pandas><combinatorics>
2023-01-10 09:42:09
2
357
Joshua Harwood
75,068,007
959,894
Embed Python in Python?
<p>I wrote a &quot;compiler&quot; <a href="https://github.com/sloisel/pyptex" rel="nofollow noreferrer">PypTeX</a> that converts an input file <code>a.tex</code> containing <code>Hello @{3+4}</code> to an ouput file <code>a.pyptex</code> containing <code>Hello 7</code>. I evaluate arbitrary Python fragments like <code>@{3+4}</code> using something like <code>eval(compile('3+4','a.tex',mode='eval'),myglobals)</code>, where <code>myglobals</code> is some (initially empty) dict. This creates a thin illusion of an embedded interpreter for running code in <code>a.tex</code>, however the call stack when running <code>'3+4'</code> looks pretty weird, because it backs up all the way into the PypTeX interpreter, instead of topping out at the user code <code>'3+4'</code> in <code>a.tex</code>.</p> <p>Is there a way of doing something like <code>eval</code> but chopping off the top of the call stack?</p> <h1>Motivation: debugging</h1> <p>Imagine an exception is raised by the Python fragment deep inside numpy, and pdb is launched. The user types <code>up</code> until they reach the scope of their user code and then they type <code>list</code>. The way I've done it, this displays the <code>a.tex</code> file, which is the right context to be showing to the user and is the reason why I've done it this way. However, if the user types <code>up</code> again, the user ends up in the bowels of the PypTeX compiler.</p> <p>An analogy would be if the <code>g++</code> compiler had an error deep in a template, displayed a template &quot;call stack&quot; in its error message, but that template call stack backed all the way out into the bowels of the actual g++ call stack and exposed internal g++ details that would only serve to confuse the user.</p> <h1>Embedding Python in Python</h1> <p>Maybe the problem is that the illusion of the &quot;embedded interpreter&quot; created by <code>eval</code> is slightly too thin. <code>eval</code> allows to specify globals, but it inherits whatever call stack the caller has, so if one could somehow supply <code>eval</code> with a truncated call stack, that would resolve my problem. Alternatively, if <code>pdb</code> could be told &quot;you shall go no further up&quot; past a certain stack frame, that would help too. For example, if I could chop off a part of the stack in the traceback object and then pass it to <code>pdb.post_mortem()</code>.</p> <p>Or if one could do <code>from sys import Interpreter; foo = Interpreter(); foo.eval(...)</code>, meaning that <code>foo</code> is a clean embedded interpreter with a distinct call stack, global variables, etc..., that would also be good.</p> <p>Is there a way of doing this?</p> <h1>A rejected alternative</h1> <p>One way that is not good is to extract all Python fragments from <code>a.tex</code> by regular expression, dump them into a temporary file <code>a.py</code> and then run them by invoking a fresh new Python interpreter at the command line. This causes <code>pdb</code> to eventually top out into <code>a.py</code>. I've tried this and it's a very bad user experience. <code>a.py</code> should be an implementation detail; it is automatically generated and will look very unfamiliar to the user. It is hard for the user to figure out what bits of <code>a.py</code> came from what bits of <code>a.tex</code>. For large documents, I found this to be much too hard to use. See also <a href="https://github.com/gpoore/pythontex" rel="nofollow noreferrer">pythontex</a>.</p>
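<p>One partial workaround I'm aware of (a sketch, not a full sub-interpreter): the traceback object is a linked list, so the frames belonging to the host compiler can be dropped before handing it to <code>pdb</code> by advancing <code>tb_next</code>:</p> <pre><code>import pdb
import sys

def run_fragment(src, filename, globs):
    code = compile(src, filename, mode='eval')
    try:
        return eval(code, globs)
    except Exception:
        tb = sys.exc_info()[2]
        # tb's first frame is this function; tb_next starts at the user code
        pdb.post_mortem(tb.tb_next)
</code></pre>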
<python><compiler-construction><introspection>
2023-01-10 09:38:00
1
556
Sébastien Loisel
75,067,668
1,476,512
Locust failed to fire events when running as lib
<p>The following code is from the <a href="https://github.com/locustio/locust/blob/master/examples/use_as_lib.py" rel="nofollow noreferrer">tutorial</a>. I just added some code to fire the <code>test_start</code> event (not sure if I fire it in the right place?) and to listen to both the <code>init</code> and <code>test_start</code> events.</p> <pre><code>import gevent
from locust import HttpUser, task, events
from locust.env import Environment
from locust.stats import stats_printer, stats_history
from locust.log import setup_logging

setup_logging(&quot;INFO&quot;, None)


class MyUser(HttpUser):
    host = &quot;https://docs.locust.io&quot;

    @task
    def t(self):
        self.client.get(&quot;/&quot;)


@events.init.add_listener
def on_locust_init(**kwargs):
    print(&quot;on locust init ...&quot;)


@events.test_start.add_listener
def on_test_start(**kwargs):
    print(&quot;on test start ...&quot;)


# setup Environment and Runner
env = Environment(user_classes=[MyUser])
runner = env.create_local_runner()

# start a WebUI instance
web_ui = env.create_web_ui(&quot;127.0.0.1&quot;, 8089)

# execute init event handlers (only really needed if you have registered any)
env.events.init.fire(environment=env, runner=runner, web_ui=web_ui)

# start a greenlet that periodically outputs the current stats
gevent.spawn(stats_printer(env.stats))

# start a greenlet that save current stats to history
gevent.spawn(stats_history, env.runner)

# start the test
runner.start(1, spawn_rate=1)

# execute test_start event handlers (only really needed if you have registered any)
env.events.test_start.fire(environment=env, runner=runner, web_ui=web_ui)

# in 10 seconds stop the runner
gevent.spawn_later(10, lambda: runner.quit())

# wait for the greenlets
runner.greenlet.join()

# stop the web server for good measures
web_ui.stop()
</code></pre> <p>When I ran it as a library (e.g. <code>python use_as_lib.py</code>), the two listener messages didn't print. But if I remove that run-as-lib code and run it as a tool (e.g. <code>locust -f use_as_lib.py --headless -u 1 -r 1 -t=10s</code>), the messages are printed in the console. It seems I missed something...</p> <p>Here's my locust version.</p> <pre><code>locust 2.13.0 from /Users/myuser/workspace/tmp/try_python/venv/lib/python3.8/site-packages/locust (python 3.8.12)
</code></pre> <p>Any ideas? Thanks!</p>
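<p>For what it's worth, a sketch of one likely fix: when used as a library, <code>Environment</code> creates its own fresh events object unless you hand it the module-level one that the decorators registered on (assuming your locust version accepts the <code>events</code> keyword):</p> <pre><code>env = Environment(user_classes=[MyUser], events=events)
runner = env.create_local_runner()
env.events.init.fire(environment=env, runner=runner)
</code></pre>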
<python><locust>
2023-01-10 09:05:46
1
2,851
mCY
75,067,665
6,510,276
How to fetch (pop) N elements from a Python list iteratively until the list is exhausted?
<p>I have the following Python list with a predefined N:</p> <pre><code>l = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
N = 3
</code></pre> <p>I would like to have a collection of N elements (a list of lists, for example) from the list (if <code>len(l) % N != 0</code> then the last collection could be shorter than N). So something like this:</p> <pre><code>[[1, 2, 3], [4, 5, 6], [7, 8, 9], [10]]
</code></pre> <p>How can I achieve the desired output? (Actually the order of elements in the output doesn't matter in my case; I just need all the elements to appear once, in groups of the defined size.)</p>
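<p>A compact sketch with plain slicing, which naturally leaves a shorter final chunk:</p> <pre><code>l = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
N = 3
chunks = [l[i:i + N] for i in range(0, len(l), N)]
print(chunks)  # [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10]]
</code></pre>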
<python><list><collections>
2023-01-10 09:05:23
1
1,168
Hendrik
75,067,408
880,783
How to use `pylint` on code supporting multiple Python versions?
<p>Since <code>hashlib.file_digest</code> was introduced only in Python 3.11, I use a fallback to the previous code:</p> <pre class="lang-py prettyprint-override"><code>if sys.version_info &lt; (3, 11): digest = hashlib.sha256() digest.update(file.read()) else: digest = hashlib.file_digest(file, hashlib.sha256) </code></pre> <p>Running <code>pylint</code> on Python 3.10 on this file, I get the following error:</p> <blockquote> <p><code>Module 'hashlib' has no 'file_digest' member (no-member)</code></p> </blockquote> <p>I can add <code># pylint: disable=no-member</code> to the bottom branch of the code, but then I will get</p> <blockquote> <p><code>Useless suppression of 'no-member' (useless-suppression)</code></p> </blockquote> <p>when <code>pylint</code> is run in Python 3.11.</p>
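<p>One sketch that sidesteps the version-dependent member check on both interpreters, at the cost of some indirection (an assumption about acceptable style, not the only option):</p> <pre><code>if sys.version_info &lt; (3, 11):
    digest = hashlib.sha256()
    digest.update(file.read())
else:
    # getattr hides the attribute from pylint's static analysis
    digest = getattr(hashlib, 'file_digest')(file, hashlib.sha256)
</code></pre>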
<python><pylint><error-suppression>
2023-01-10 08:41:14
0
6,279
bers
75,067,333
3,734,914
Compare Polars DataFrames That Have a Polars Date Column
<p>I want to test that two Polars DataFrame objects, each containing a column that represents dates, are equivalent.</p> <p>If I use <code>datetime.date</code> from the standard library I don't have any problems:</p> <pre class="lang-py prettyprint-override"><code>import datetime as dt
import polars as pl
from polars.testing import assert_frame_equal

assert_frame_equal(pl.DataFrame({&quot;foo&quot;: [1], &quot;bar&quot;: [dt.date(2000, 1, 1)]}),
                   pl.DataFrame({&quot;foo&quot;: [1], &quot;bar&quot;: [dt.date(2000, 1, 1)]}))
</code></pre> <p>But if I try to use the <code>Date</code> type from polars the comparison fails, with a <code>PanicException: not implemented</code> exception.</p> <pre class="lang-py prettyprint-override"><code>assert_frame_equal(pl.DataFrame({&quot;foo&quot;: [1], &quot;bar&quot;: [pl.Date(2000, 1, 1)]}),
                   pl.DataFrame({&quot;foo&quot;: [1], &quot;bar&quot;: [pl.Date(2000, 1, 1)]}))
</code></pre> <p>Is there a way to use the polars <code>Date</code> type in the <code>DataFrame</code> and still be able to compare the two objects?</p>
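<p>A sketch of what may be intended: <code>pl.Date</code> is a dtype rather than a value constructor, so one option is to keep <code>datetime.date</code> values and declare the dtype in the schema (the <code>schema</code> keyword is assumed for recent polars versions):</p> <pre><code>df = pl.DataFrame({&quot;foo&quot;: [1], &quot;bar&quot;: [dt.date(2000, 1, 1)]},
                  schema={&quot;foo&quot;: pl.Int64, &quot;bar&quot;: pl.Date})
assert_frame_equal(df, df)
</code></pre>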
<python><date><datetime><python-polars>
2023-01-10 08:33:47
2
9,017
Batman
75,067,279
2,998,077
Pandas to fill empty cells in column according to another column
<p>A dataframe looks like this, and I want to fill the empty cells in the 'Date' column (when the &quot;Area&quot; is West or North), with content in &quot;Year&quot; column plus &quot;0601&quot;.</p> <p><a href="https://i.sstatic.net/sD726.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sD726.png" alt="enter image description here" /></a></p> <p>Wanted result is as follows:</p> <p><a href="https://i.sstatic.net/62Tt4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/62Tt4.png" alt="enter image description here" /></a></p> <p>What I have tried:</p> <pre><code>from io import StringIO import pandas as pd csvfile = StringIO( &quot;&quot;&quot; Name Area Date Year David West 2014 Mike North 20220919 2022 Kate West 2017 Lilly East 20221226 2022 Peter North 20221226 2022 Cara Middle 2016 &quot;&quot;&quot;) df = pd.read_csv(csvfile, sep = '\t', engine='python') L1 = ['West','North'] m1 = df['Date'].isnull() m2 = df['Area'].isin(L1) df['Date'] = df['Date'].mask(m1 &amp; m2, df['Year'] + '0601') # Try_1 df['Date'] = np.where(np.where(m1 &amp; m2, df['Year'] + '0601')) # Try_2 </code></pre> <p>Both Try_1 and Try_2 pop the same error.</p> <p>What's the right way to write the lines?</p> <pre><code>Traceback (most recent call last): File &quot;C:\Python38\lib\site-packages\pandas\core\ops\array_ops.py&quot;, line 142, in _na_arithmetic_op result = expressions.evaluate(op, left, right) File &quot;C:\Python38\lib\site-packages\pandas\core\computation\expressions.py&quot;, line 235, in evaluate return _evaluate(op, op_str, a, b) # type: ignore[misc] File &quot;C:\Python38\lib\site-packages\pandas\core\computation\expressions.py&quot;, line 69, in _evaluate_standard return op(a, b) numpy.core._exceptions.UFuncTypeError: ufunc 'add' did not contain a loop with signature matching types (dtype('&lt;U21'), dtype('&lt;U21')) -&gt; dtype('&lt;U21') During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;C:\My Documents\Scripts\(Desktop) WSS 20200323\GG.py&quot;, line 336, in &lt;module&gt; df['Date'] = np.where(np.where(m1 &amp; m2, df['Year'] + '0601')) # try 2 File &quot;C:\Python38\lib\site-packages\pandas\core\ops\common.py&quot;, line 65, in new_method return method(self, other) File &quot;C:\Python38\lib\site-packages\pandas\core\arraylike.py&quot;, line 89, in __add__ return self._arith_method(other, operator.add) File &quot;C:\Python38\lib\site-packages\pandas\core\series.py&quot;, line 4998, in _arith_method result = ops.arithmetic_op(lvalues, rvalues, op) File &quot;C:\Python38\lib\site-packages\pandas\core\ops\array_ops.py&quot;, line 189, in arithmetic_op res_values = _na_arithmetic_op(lvalues, rvalues, op) File &quot;C:\Python38\lib\site-packages\pandas\core\ops\array_ops.py&quot;, line 149, in _na_arithmetic_op result = _masked_arith_op(left, right, op) File &quot;C:\Python38\lib\site-packages\pandas\core\ops\array_ops.py&quot;, line 111, in _masked_arith_op result[mask] = op(xrav[mask], y) numpy.core._exceptions.UFuncTypeError: ufunc 'add' did not contain a loop with signature matching types (dtype('&lt;U21'), dtype('&lt;U21')) -&gt; dtype('&lt;U21') </code></pre>
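<p>A sketch of the likely culprit: <code>Year</code> is parsed as an integer column, so the string concatenation needs an explicit cast (assuming the rest of the masking logic is as intended):</p> <pre><code>df['Date'] = df['Date'].mask(m1 &amp; m2, df['Year'].astype(str) + '0601')
</code></pre>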
<python><pandas><dataframe>
2023-01-10 08:27:07
1
9,496
Mark K
75,067,269
4,288,259
SQLAlchemy Imperatively Mapping a Composite Entity without __composite_values__
<p>I'd like to use SQLAlchemy to build my relational schema, but due to project constraints, the central model should not have any dependencies on any third-parties, and I'd like to avoid adding a <code>__composite_values__</code> method to any class that could be used as a composite in the database.</p> <p>As a concrete example, suppose I have the following entities:</p> <pre class="lang-py prettyprint-override"><code>@dataclass(kw_only=True) class Transaction: id: int value: Money description: str timestamp: datetime.datetime @dataclass(kw_only=True) class Money: amount: int currency: str </code></pre> <p>Of course, when I attempt to create an imperative mapping using these classes, I get <code>AttributeError: 'Money' object has no attribute '__composite_values__'</code>:</p> <pre class="lang-py prettyprint-override"><code>transaction_table = Table( &quot;transaction&quot;, mapper_registry.metadata, Column(&quot;id&quot;, BigInteger, primary_key=True), Column(&quot;description&quot;, String(1024)), Column( &quot;timestamp&quot;, DateTime(timezone=False), nullable=False, server_default=text(&quot;NOW()&quot;), ), Column(&quot;value_amount&quot;, Integer(), nullable=False), Column(&quot;value_currency&quot;, String(5), nullable=False), ) mapper_registry.map_imperatively( Transaction, transaction_table, properties={ &quot;value&quot;: composite( Money, transaction_table.c.value_amount, transaction_table.c.value_currency, ) }, ) </code></pre> <p>So, what are my options for mapping these classes? So far, I've only been able to think of the solution where I create a duplicate wrapper for each entity which <em>does</em> have the ORM-specific attachments, but this seems quite nasty.</p>
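<p>One sketch that keeps the domain class dependency-free: map a thin subclass that lives next to the mapping code and adds only the protocol method (the name <code>MoneyComposite</code> is hypothetical; equality comes from the dataclass itself):</p> <pre><code>class MoneyComposite(Money):
    def __composite_values__(self):
        return self.amount, self.currency

# then reference MoneyComposite instead of Money in the composite() call
</code></pre>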
<python><sqlalchemy><orm>
2023-01-10 08:25:37
1
984
Andrew Lalis
75,067,208
386,861
How to access specific custom attribute elements with Beautifulsoup
<p>I'm still trying to understand the syntax for BeautifulSoup and hope someone can put this right.</p> <p>I've got an article - <a href="https://www.bbc.co.uk/news/world-europe-49345912" rel="nofollow noreferrer">https://www.bbc.co.uk/news/world-europe-49345912</a></p> <p>I want to do some NLP on the body text.</p> <p>I worked out some script which gets to the point to an extent - thanks to <a href="https://towardsdatascience.com/super-simple-way-to-scrape-bbc-news-articles-in-python-5fe1e6ee82d9" rel="nofollow noreferrer">https://towardsdatascience.com/super-simple-way-to-scrape-bbc-news-articles-in-python-5fe1e6ee82d9</a></p> <pre><code>import requests from bs4 import BeautifulSoup as bs class BBC: def __init__(self, url:str): article = requests.get(url) self.soup = bs(article.content, &quot;html.parser&quot;) self.body = self.get_body() self.title = self.get_title() def get_body(self) -&gt; list: body = self.soup.find(&quot;article&quot;) return [p.text for p in body.find_all(&quot;p&quot;, class_=&quot;ssrcss-1q0x1qg-Paragraph eq5iqo00&quot;)] def get_title(self) -&gt; str: return self.soup.find(&quot;h1&quot;).text print(BBC(&quot;https://www.bbc.co.uk/news/world-europe-49345912&quot;).body) print(BBC(&quot;https://www.bbc.co.uk/news/world-europe-49345912&quot;).title) </code></pre> <p>So far so groovy. But say I want to filter on something like div blocks that have the attribute 'data-component=&quot;text-block&quot;' and then filter out the p tags within them. At this point I'm lost. How do I identify a custom 'data-component' attribute? Here's an example.</p> <pre><code>&lt;div data-component=&quot;text-block&quot; class=&quot;ssrcss-11r1m41-RichTextComponentWrapper ep2nwvo0&quot;&gt;&lt;div class=&quot;ssrcss-7uxr49-RichTextContainer e5tfeyi1&quot;&gt;&lt;p class=&quot;ssrcss-1q0x1qg-Paragraph eq5iqo00&quot;&gt;&quot;This is a big symbolic moment,&quot; he said. &quot;Climate change doesn't have a beginning or end and I think the philosophy behind this plaque is to place this warning sign to remind ourselves that historical events are happening, and we should not normalise them. We should put our feet down and say, okay, this is gone, this is significant.&quot;&lt;/p&gt;&lt;/div&gt;&lt;/div&gt; </code></pre>
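<p>Custom <code>data-*</code> attributes can be matched through the <code>attrs</code> dictionary, so a sketch for the block in question looks like this:</p> <pre><code>blocks = self.soup.find_all(&quot;div&quot;, attrs={&quot;data-component&quot;: &quot;text-block&quot;})
paragraphs = [p.text for div in blocks for p in div.find_all(&quot;p&quot;)]
</code></pre>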
<python><beautifulsoup>
2023-01-10 08:19:57
1
7,882
elksie5000
75,067,141
10,194,070
pip3 + is not a supported wheel on this platform
<p>We tried to install several <code>lxml</code> Python wheels on <code>RHEL 7.x</code>, without success, as shown below</p> <pre><code>pip3 install --no-index --find-links /tmp lxml-4.9.2-cp39-cp39-musllinux_1_1_x86_64.whl
WARNING: Running pip install with root privileges is generally not a good idea. Try `pip3 install --user` instead.
ERROR: lxml-4.9.2-cp39-cp39-musllinux_1_1_x86_64.whl is not a supported wheel on this platform.


pip3 install --no-index --find-links /tmp lxml-4.9.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl
WARNING: Running pip install with root privileges is generally not a good idea. Try `pip3 install --user` instead.
ERROR: lxml-4.9.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl is not a supported wheel on this platform.

pip3 install --no-index --find-links /tmp lxml-4.9.2-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.whl
WARNING: Running pip install with root privileges is generally not a good idea. Try `pip3 install --user` instead.
ERROR: lxml-4.9.2-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.whl is not a supported wheel on this platform.
</code></pre> <p><strong>pip3 version is</strong></p> <pre><code>pip3 --version
pip 19.3.1 from /etc/rh/rh-python38/root/usr/lib/python3.8/site-packages/pip (python 3.8)
</code></pre> <p>rhel release is:</p> <pre><code>more /etc/redhat-release
Red Hat Enterprise Linux Server release 7.9 (Maipo)

uname -r
3.10.0-1160.el7.x86_64

python3 --version
Python 3.8.0
</code></pre> <p>Any idea how to solve the &ldquo;<code>is not a supported wheel on this platform</code>&rdquo; problem?</p> <p>Note - the RHEL 7.x machine has no external network access, so module installation is done offline.</p>
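<p>For reference, a sketch of fetching a compatible wheel on a networked machine for later offline transfer; the interpreter here is CPython 3.8 on glibc, so <code>musllinux</code> (musl libc) and <code>cp39</code> (Python 3.9 ABI) wheels will always be rejected:</p> <pre><code>pip3 download lxml --only-binary=:all: \
    --platform manylinux2014_x86_64 --implementation cp \
    --python-version 38 --abi cp38 -d /tmp/wheels
</code></pre>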
<python><python-3.x><pip><rhel>
2023-01-10 08:12:49
0
1,927
Judy
75,066,525
19,303,365
Parsing dates in different formats from text
<p>I have a dataframe where the raw text column contains dates in different formats. I am looking to extract these dates into a separate column.</p> <p>Sample raw text:</p> <blockquote> <p>&quot;Sales Assistant @ DFS Duration - <strong>June 2021 - 2023</strong> Currently working in XYZ Within the role I am expected to achieve sales targets which I currently have no problems reaching. Job Role/Establishment - Plasterer @ XX Plasterer’s Duration - <strong>September 2016 - Nov 2016</strong> Job Role/Establishment - Customer Advisor @ AA Duration - <strong>(2015 – 2016)</strong> Job Role/Establishment - Warehouse Operative @ xyz Duration - <strong>03/2014 to 08/2015</strong> In the xyz warehouse Job Role/Establishment - Airport Terminal Assistant @ port Duration - <strong>01/2012 - 06/2013</strong> Working at the airport . Job Role/Establishment - Apprentice Floorer @ YY Floors Duration - <strong>DEC 2010 – APRIL 2012</strong> &quot;</p> </blockquote> <p>Expected dataframe:</p> <pre><code>id    Raw_text              Dates
01    &quot;sample_raw_text&quot;     June 2021 - 2023 , September 2016 - Nov 2016,(2015 – 2016),03/2014 to 08/2015 , 01/2012 - 06/2013, DEC 2010 – APRIL 2012
</code></pre> <p>I have tried the pattern below:</p> <pre><code>def extract_dates(df, column):
    # Define the regex pattern to match dates in different month formats
    pattern = r'(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)?[-,\s]*\d{1,2}[-,\s]*(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)?[-,\s]*\d{2,4}\s*[-–]\s*(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)?[-,\s]*\d{1,2}[-,\s]*(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)?[-,\s]*\d{2,4}'

    # Extract the dates from the specified column
    df['Dates'] = df[column].str.extract(pattern)
</code></pre> <p>With the above I am unable to fetch the required output. Please guide me on what I am missing.</p>
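<p>A sketch with <code>str.findall</code> instead of <code>str.extract</code> (extract returns only the first match's groups), using one alternation per date style seen in the sample; the pattern is an assumption fitted to these examples only:</p> <pre><code>import re

month = r'(?:Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)[a-z]*'
single = rf'(?:{month}\s+\d{{4}}|\d{{1,2}}/\d{{4}}|\d{{4}})'
pattern = rf'\(?{single}\s*(?:-|–|to)\s*{single}\)?'
df['Dates'] = (df['Raw_text']
               .str.findall(pattern, flags=re.IGNORECASE)
               .str.join(', '))
</code></pre>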
<python><pandas><regex>
2023-01-10 07:04:20
1
365
Roshankumar
75,066,390
241,515
Pandas dataframe: change unique values in each column to NaNs
<p>I have a <code>DataFrame</code> arranged in a manner similar to this:</p> <pre><code>ID Sample_1 Sample_2 A 0.182 0.754 B 0.182 0.754 C 0.182 0.01 D 0.182 0.2 E 0.9 0.2 </code></pre> <p>As you can see, there are some repeated values (&quot;true&quot; measurements) and single values (that are actually &quot;bad&quot; measurements). What I need to do is to replace all unique values (that are so-called &quot;bad&quot;) with NAs. This needs to be done for all columns.</p> <p>In other words, the final dataframe should look like this:</p> <pre><code>ID Sample_1 Sample_2 A 0.182 0.754 B 0.182 0.754 C 0.182 NaN D 0.182 0.2 E NaN 0.2 </code></pre> <p>A possible solution I've thought about involves <code>groupby</code> and <code>filter</code> to get the index values (like in <a href="https://stackoverflow.com/questions/31049111/get-indexes-of-unique-values-in-column-pandas">Get indexes of unique values in column (pandas)</a>) and then replace the values, but the issue is that it works only for one column at a time:</p> <pre><code>unique_loc = df.groupby(&quot;Sample_1&quot;).filter(lambda x: len(x) == 1).index df.loc[unique_loc, &quot;Sample_1&quot;] = np.nan </code></pre> <p>This means it would need to get repeated for many columns (and I have many in the actual data). Is there a more efficient solution?</p>
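<p>A vectorised sketch across all columns at once: map each value to its within-column frequency and mask the singletons.</p> <pre><code>cols = ['Sample_1', 'Sample_2']
df[cols] = df[cols].apply(lambda s: s.mask(s.map(s.value_counts()) == 1))
</code></pre>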
<python><pandas><dataframe>
2023-01-10 06:47:59
2
4,973
Einar
75,066,078
19,826,650
Running Python from PHP: code not executed
<p>I have PHP code and Python code; I use Visual Studio Code as my editor and XAMPP Apache as a local server. The code is below:</p> <p>HTML run button</p> <pre><code>&lt;form action=&quot;&quot; method=&quot;post&quot;&gt;
  &lt;div class=&quot;card-header py-3 mb-30 mt-20 pb-20 pt-20&quot;&gt;
    &lt;div class=&quot;center&quot;&gt;
     &lt;input type=&quot;submit&quot; name=&quot;runpython&quot; value=&quot;Run&quot; class=&quot;btn btn-primary&quot;&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/form&gt;
</code></pre> <p>This is the PHP that runs the Python</p> <pre><code>&lt;?php
 if(isset($_POST['runpython'])){
  $output = shell_exec('python3 /xampp/htdocs/Klasifikasi_KNN/admin test.py');
  echo $output;
  echo shell_exec('/xampp/htdocs/Klasifikasi_KNN/admin test.py');
  $output = shell_exec('python3 /xampp/htdocs/Klasifikasi_KNN/admin test.py');
  echo $output;
  echo shell_exec('xampp/htdocs/Klasifikasi_KNN/admin test.py');
  echo shell_exec('/xampp/htdocs/Klasifikasi_KNN/admin test.py');
  echo shell_exec('/xampp/htdocs/Klasifikasi_KNN/admin/test.py');
  echo shell_exec('xampp/htdocs/Klasifikasi_KNN/admin/test.py');
  echo shell_exec('python3 xampp/htdocs/Klasifikasi_KNN/admin/test.py');
  echo shell_exec('python3 /xampp/htdocs/Klasifikasi_KNN/admin/test.py');
 }
?&gt;
</code></pre> <p>This is the test.py</p> <pre><code>#! C:/Users/Jessen PC/AppData/Local/Microsoft/WindowsApps/python.exe
print(&quot;Hello World&quot;)
</code></pre> <p>What I've tried:</p> <ol> <li>Check that shell_exec exists</li> </ol> <pre><code>    if(function_exists('exec')) {
        echo &quot;exec is enabled&quot;;
    }
    if(function_exists('shell_exec')) {
        echo &quot;shell_exec is enabled&quot;;
    }
</code></pre> <ol start="2"> <li>Prove that exec is working</li> </ol> <pre><code>if (exec('echo TEST') == 'TEST') 
{ 
    echo 'exec works!';
} 
</code></pre> <p>output: exec works! exec is enabled shell_exec is enabled</p> <ol start="3"> <li>Added a handler script in httpd.conf in the XAMPP config</li> </ol> <pre><code>AddHandler cgi-script .py
ScriptInterpreterSource Registry-Strict
</code></pre> <ol start="4"> <li>If I use the D drive like this</li> </ol> <pre><code>$output = shell_exec('D:/xampp/htdocs/Klasifikasi_KNN/admin/test.py');
echo $output;
</code></pre> <p>This opens like the image below <a href="https://i.sstatic.net/1XzQT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1XzQT.png" alt="output" /></a> What I'm aiming for is that the script is executed by Python and its output displayed on the website.</p> <p>Is there something missing that needs to be done to execute Python files from PHP?</p>
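<p>A sketch of the most likely problem: several of the paths contain a space (<code>admin test.py</code> instead of <code>admin/test.py</code>), and Windows paths with spaces must be quoted; redirecting stderr also surfaces errors. The drive letter and <code>python</code> being on PATH are assumptions here.</p> <pre><code>$output = shell_exec('python &quot;D:/xampp/htdocs/Klasifikasi_KNN/admin/test.py&quot; 2&gt;&amp;1');
echo $output;
</code></pre>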
<python><php><shell-exec>
2023-01-10 06:05:41
0
377
Jessen Jie
75,065,768
9,648,520
Python - Adding quotes on a multi word phrase in a string
<p>Suppose there is a string like follow:</p> <pre><code>'jon doe AND (james OR james david)' </code></pre> <p>What I want to do is to add double quotes around the multi word phrase like in jon doe and in james david as follow:</p> <pre><code>'&quot;jon doe&quot; AND (james OR &quot;james david&quot;)' </code></pre> <p>I think it can be possible using regex but not sure how? As I am nooby in regex. I have tried writing my own python code without regex and was able to do so with string which do not have parenthesis. Like follow</p> <pre><code>'jon doe AND james OR james david' </code></pre> <p>to</p> <pre><code>'&quot;jon doe&quot; AND james OR &quot;james david&quot;' </code></pre> <p>but not with the parenthesis. If anyone has done this before do let me know. Thanks</p> <p>Edit 1: The method I have wrote is also not neat and clean and thus want a better solution also.</p> <p>Edit 2: In case someone wants to see the code here it is. It does the job and successfully does what I want but it does not seems to be a nice way. And I am sure there are more good ways than this:</p> <pre><code>s = 'jon doe AND (james OR james david)' new = '' new_list = [] splited = s.split() for i in range(0, len(splited)): word = splited[i] if word in ['AND', 'OR', 'NOT']: if len(new_list) &gt; 1: pharase = ' '.join(x for x in new_list) if pharase.endswith(&quot;)&quot;): pharase = pharase.replace(&quot;)&quot;, '&quot;') new += '&quot;{})'.format(pharase) elif pharase.startswith(&quot;(&quot;): pharase = pharase.replace(&quot;(&quot;, '&quot;') new += '({}&quot;'.format(pharase) else: new += '&quot;{}&quot;'.format(pharase) new += &quot; {} &quot;.format(word) else: pharase = ' '.join(x for x in new_list) new += '{}'.format(pharase) new += &quot; {} &quot;.format(word) new_list = [] elif i==len(splited)-1: new_list.append(word) pharase = ' '.join(x for x in new_list) if len(new_list) &gt; 1: if pharase.endswith(&quot;)&quot;): pharase = pharase.replace(&quot;)&quot;, '&quot;') new += '&quot;{})'.format(pharase) elif pharase.startswith(&quot;(&quot;): pharase = pharase.replace(&quot;(&quot;, '&quot;') new += '({}&quot;'.format(pharase) else: new += '&quot;{}&quot;'.format(pharase) else: new += '{}'.format(pharase) else: new_list.append(word) </code></pre> <p>Output: <code>'&quot;jon doe&quot; AND (james OR &quot;james david&quot;)'</code></p>
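<p>A regex sketch that handles parentheses by quoting any run of two or more words, while refusing to start or continue a run on an operator (fitted to the examples given, so treat it as an assumption):</p> <pre><code>import re

s = 'jon doe AND (james OR james david)'
op = r'(?:AND|OR|NOT)\b'
pattern = rf'\b(?!{op})(\w+(?:\s+(?!{op})\w+)+)'
print(re.sub(pattern, r'&quot;\1&quot;', s))
# &quot;jon doe&quot; AND (james OR &quot;james david&quot;)
</code></pre>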
<python><regex><string>
2023-01-10 05:20:31
1
2,262
FightWithCode
75,065,700
8,391,469
Pandas merge or update an empty dataframe with another dataframe?
<p>I have an empty dataframe <code>df1</code></p> <pre><code>import numpy as np import pandas as pd df1 = pd.DataFrame(columns=['A','B','C','D','E']) df1 A B C D E </code></pre> <p>I want to merge or update this dataframe with another dataframe <code>df2</code></p> <pre><code>df2 = pd.DataFrame({ 'B': [1,2,3], 'D': [4,5,6], 'E': [7,8,9]}) df2 B D E 0 1 4 7 1 2 5 8 2 3 6 9 </code></pre> <p>to get a merged or updated dataframe as</p> <pre><code> A B C D E 0 NaN 1 NaN 4 7 1 NaN 2 NaN 5 8 2 NaN 3 NaN 6 9 </code></pre> <p>Besides, efficiency is required because I have a long <code>df1</code> and <code>df2</code>.</p> <p>Any good idea? Thank you.</p>
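<p>Since <code>df1</code> only contributes its column layout, <code>reindex</code> is a cheap one-liner sketch; columns missing from <code>df2</code> are filled with NaN.</p> <pre><code>out = df2.reindex(columns=df1.columns)
</code></pre>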
<python><pandas><dataframe>
2023-01-10 05:08:33
2
495
Johnny Tam
75,065,651
10,829,044
Pandas filter using one column and replace on another column
<p>I have a dataframe like as below</p> <pre><code>df = pd.DataFrame( {'stud_id' : [101, 101, 101, 101, 101, 101, 101, 101], 'sub_code' : ['CSE01', 'CSE02', 'CSE03', 'CSE06', 'CSE05', 'CSE04', 'CSE07', 'CSE08'], 'marks' : ['A','B','C','D', 'E','F','G','H']} ) </code></pre> <p>I would like to do the below</p> <p>a) Filter my dataframe based on <code>sub_code</code> using a list of values</p> <p>b) For the filtered/selected rows, replace their <code>marks</code> value by a constant value - <code>FAIL</code></p> <p>So, I tried the below but it doesn't work and results in NA for non-filtered rows. Instead of NA, I would like to see the actual value</p> <pre><code>sub_list = ['CSE01', 'CSE02', 'CSE03','CSE06', 'CSE05', 'CSE04'] df['marks'] = df[df['sub_code'].isin(sub_list)]['marks'].replace(r'^([A-Za-z])*$','FAIL', regex=True) </code></pre> <p>I expect my output to be like as below</p> <p><a href="https://i.sstatic.net/NeRhl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NeRhl.png" alt="enter image description here" /></a></p>
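<p>A sketch with boolean-indexed assignment, which writes the constant only into the filtered rows and leaves the rest untouched:</p> <pre><code>df.loc[df['sub_code'].isin(sub_list), 'marks'] = 'FAIL'
</code></pre>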
<python><pandas><list><dataframe><filter>
2023-01-10 05:00:28
1
7,793
The Great
75,065,591
708,305
Basic Python error when importing a module - getting ModuleNotFoundError
<p>I'm doing something basic with python, and I'm getting a pretty common error, but not able to find exactly what's wrong. I'm trying to use a custom module (built by someone else). I have the folder structure like this:</p> <p><a href="https://i.sstatic.net/KF6Hg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KF6Hg.png" alt="enter image description here" /></a></p> <p>There is the <code>test</code> folder, and I have a file <code>testing.py</code> within that:</p> <p><a href="https://i.sstatic.net/AsaTp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AsaTp.png" alt="enter image description here" /></a></p> <p>The contents of <code>testing.py</code> is:</p> <pre><code>from util import get_data, plot_data fruits = [&quot;apple&quot;, &quot;banana&quot;, &quot;cherry&quot;] for x in fruits: print(x) </code></pre> <p>When I run this file, using python testing.py, I get this:</p> <p><a href="https://i.sstatic.net/1t4ju.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1t4ju.png" alt="enter image description here" /></a></p> <p>I went through the other questions that speak about paths, and this looks fine, so not sure what I am missing here. My environment is setup using conda, and the environment is active.</p> <p><strong>EDIT</strong></p> <p>As per @allan-wind, I made the relative edit, which got me past the error, but now getting different errors:</p> <p>I tried the relative import, and it got past that error, but then it is now throwing this error:</p> <pre><code>Traceback (most recent call last): File &quot;C:\ProgramData\Anaconda3\envs\ml4t\lib\multiprocessing\context.py&quot;, line 190, in get_context ctx = _concrete_contexts[method] KeyError: 'fork' During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;grade_analysis.py&quot;, line 21, in &lt;module&gt; from grading.grading import ( File &quot;E:\_Repo\GT\CS7646\mls4tsp23\grading\grading.py&quot;, line 15, in &lt;module&gt; multiprocessing.set_start_method('fork') File &quot;C:\ProgramData\Anaconda3\envs\ml4t\lib\multiprocessing\context.py&quot;, line 246, in set_start_method self._actual_context = self.get_context(method) File &quot;C:\ProgramData\Anaconda3\envs\ml4t\lib\multiprocessing\context.py&quot;, line 238, in get_context return super().get_context(method) File &quot;C:\ProgramData\Anaconda3\envs\ml4t\lib\multiprocessing\context.py&quot;, line 192, in get_context raise ValueError('cannot find context for %r' % method) ValueError: cannot find context for 'fork' </code></pre> <p>`</p>
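<p>For the second error specifically, a hedged sketch: the <code>fork</code> start method does not exist on Windows, so code written for macOS/Linux can fall back to <code>spawn</code> (whether the library then behaves correctly is a separate question):</p> <pre><code>import multiprocessing

try:
    multiprocessing.set_start_method('fork')
except ValueError:
    multiprocessing.set_start_method('spawn')  # Windows has no fork
</code></pre>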
<python>
2023-01-10 04:50:58
2
4,857
M.R.
75,065,483
668,498
Python 3.10.7 Import Error ModuleNotFoundError: No module named '_bz2'
<p>I am trying to create a <code>Google Cloud Workstation</code> that has all of my required tooling built-in such as PHP, MYSQL and PYTHON with BZ2 support. I understand that this involves creating a custom container image. Here is the <code>Dockerfile</code> that I used:</p> <pre><code>FROM us-central1-docker.pkg.dev/cloud-workstations-images/predefined/code-oss:latest RUN apt-get update &amp;&amp; apt-get install -y \ apache2 \ bzip2 \ libbz2-dev \ php </code></pre> <p>This installs the necessary modules, as far as I know. The next thing I did was <code>docker run --rm -it --entrypoint=bash myimage:latest</code> to run the image locally. Then I used this command <code>bzip2 --version</code> to verify that bz2 has been installed.</p> <p>The next thing I did was to tag the image with the name of the Google Cloud Repository: <code>docker tag myimage gcr.io/myproject/myimage:latest</code> and then push the image to the the repository with <code>docker -- push gcr.io/myproject/myimage:latest</code></p> <p>The next step was to launch a new workstation based on the custom image. When I did this I noticed that <code>Python 3.10.7</code> was installed automatically without having to specify this in the <code>Dockerfile</code>. When I launch python and try to import the <code>bz2</code> module, I receive this error:</p> <pre><code>Python 3.10.7 (main, Jan 3 2023, 22:08:44) [GCC 10.2.1 20210110] on linux Type &quot;help&quot;, &quot;copyright&quot;, &quot;credits&quot; or &quot;license&quot; for more information. &gt;&gt;&gt; import bz2 Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; File &quot;/usr/local/lib/python3.10/bz2.py&quot;, line 17, in &lt;module&gt; from _bz2 import BZ2Compressor, BZ2Decompressor ModuleNotFoundError: No module named '_bz2' </code></pre> <p>What am I doing wrong?</p>
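<p>A sketch of one common cause and remedy, stated as an assumption: the preinstalled 3.10.7 was compiled before <code>libbz2-dev</code> was present, and installing the headers afterwards cannot retrofit the <code>_bz2</code> extension, so Python has to be rebuilt (the source-tree location below is hypothetical):</p> <pre><code>RUN apt-get update &amp;&amp; apt-get install -y libbz2-dev build-essential &amp;&amp; \
    cd /usr/src/Python-3.10.7 &amp;&amp; ./configure &amp;&amp; make &amp;&amp; make install
</code></pre>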
<python><google-cloud-platform><apt-get><google-cloud-workstations>
2023-01-10 04:27:50
2
3,615
DanielAttard
75,065,363
11,013,499
Why is the output of curve fitting incorrect?
<p>I am trying to fit a curve to my data. My function is a polynomial of order 3, as below:</p> <pre><code>def objective3(x, a, b, c,d):
    return a * x + b * x**2 + c * x**3 +d
y=center_1080[itr,:]
x=[1000 ,2000, 3000, 4000, 5000, 6000, 8000, 10000, 12000]
popt, pcov,info,msg, ier= curve_fit(objective3, x, y,full_output=True)
a, b, c ,d= popt
</code></pre> <p><code>y</code> has the same shape as <code>x</code>. After that, I used the following code to find the new values for y based on this curve:</p> <pre><code>x_line4 = arange(min(x), max(x), 1)
# calculate the output for the range
y_line4 = objective3(x_line4, a, b, c,d)
</code></pre> <p>Suppose <code>x_line4</code> is as follows (shape (5,)):</p> <pre><code>11995
11996
11997
11998
11999
</code></pre> <p>When I use <code>objective3(x_line4,a,b,c,d)</code> the output is:</p> <pre><code>66.4718
66.4732
66.4746
66.4759
66.4773
</code></pre> <p>But when I use each element of x_line4 separately as input, the output is different. For example, <code>objective3(11999,a,b,c,d)=81.11075844620781</code> but <code>objective3(x_line4[4],a,b,c,d)=66.4773</code>! What is the problem? <code>a=0.003184157353698613,b=-2.2820353448818053e-07,c=8.475420387015893e-12,d=61.11802131658904</code></p>
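<p>A sketch of one plausible cause: <code>arange</code> over Python ints can return a 32-bit integer array on Windows, and <code>x**3</code> then silently overflows inside <code>objective3</code>, while a scalar like <code>11999</code> stays a Python int with arbitrary precision. Forcing floats would rule that out:</p> <pre><code>from numpy import arange

x_line4 = arange(min(x), max(x), 1, dtype=float)  # avoid int32 overflow in x**3
y_line4 = objective3(x_line4, a, b, c, d)
</code></pre>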
<python>
2023-01-10 04:06:30
1
1,295
david
75,065,299
17,103,465
Pandas: remove duplicates within a list of values and identify ids that share the same values
<p>I have a pandas dataframe:</p> <p>I had duplicate test_no values, so I removed the duplicates with</p> <pre><code>df['test_no'] = df['test_no'].apply(lambda x: ','.join(set(x.split(','))))
</code></pre> <p>but as you can see, the duplicates are still there; I think it's due to extra spaces, which I want to clean up.</p> <p>Part 1:</p> <pre><code>  my_id               test_no
0   10000000000055910   461511, 461511
1   10000000000064510   528422
2   10000000000064222   528422,528422 , 528421
3   10000000000161538   433091.0, 433091.0
4   10000000000231708   nan,nan
</code></pre> <p>Expected Output</p> <pre><code>  my_id               test_no
0   10000000000055910   461511
1   10000000000064510   528422
2   10000000000064222   528422, 528421
3   10000000000161538   433091.0
4   10000000000231708   nan
</code></pre> <p>Part 2:</p> <p>I also want to check whether any of the my_id values share any of the test_no values; for example:</p> <pre><code>my_id                matched_myid
10000000000064222    10000000000064510
</code></pre>
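<p>A sketch for Part 1 that normalises each token before de-duplicating; <code>dict.fromkeys</code> keeps the original order, and <code>strip()</code> removes the stray spaces that defeat the <code>set</code>:</p> <pre><code>df['test_no'] = df['test_no'].apply(
    lambda x: ', '.join(dict.fromkeys(t.strip() for t in str(x).split(','))))
</code></pre> <p>For Part 2, one route would be to explode the cleaned column into one row per test_no and self-merge on it.</p>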
<python><pandas>
2023-01-10 03:52:37
1
349
Ash
75,065,276
5,928,682
String manipulation in Python for reading a JSON object and removing quotes
<p>I am trying to construct a role in AWS where I am trying to have list of resources.</p> <p>Below is an example</p> <pre><code>shared ={ &quot;mts&quot;:{ &quot;account_id&quot;:&quot;11111&quot;, &quot;workbench&quot;:&quot;aaaaa&quot;, &quot;prefix&quot;:&quot;rad600-ars-sil,rad600-srr-sil-stage1,rad600-srr-sil-stage2&quot; }, &quot;tsf&quot;:{ &quot;account_id&quot;:&quot;22222&quot;, &quot;workbench&quot;:&quot;bbbbb&quot;, &quot;prefix&quot;:&quot;yyyy&quot; } } </code></pre> <p>I am trying to construct a list with</p> <pre><code>role_arn=[] for key in shared: role_arn.append(f&quot;arn:aws:iam::'{shared[key]['account_id']}':role/'{shared[key]['workbench']}'_role&quot;) </code></pre> <p>here is my output:</p> <pre><code>[&quot;arn:aws:iam::'11111':role/'aaaaa'_role&quot;, &quot;arn:aws:iam::'22222':role/'bbbbb'_role&quot;] </code></pre> <p>I want the <code>''</code> to be removed from the list while appending into the list itself.</p> <p>desired output:</p> <pre><code>[&quot;arn:aws:iam::11111:role/aaaaa_role&quot;, &quot;arn:aws:iam::22222:role/bbbbb_role&quot;] </code></pre> <p>I am trying my hands on python. IS there a way to achieve it?</p>
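<p>For reference, a sketch where the stray quotes are simply dropped from the f-string; the braces alone delimit the expressions, so no quoting is needed inside the template:</p> <pre><code>role_arn = [
    f&quot;arn:aws:iam::{shared[key]['account_id']}:role/{shared[key]['workbench']}_role&quot;
    for key in shared
]
</code></pre>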
<python><python-3.x><amazon-web-services>
2023-01-10 03:47:53
2
677
Sumanth Shetty
75,065,247
19,321,677
How to get Effect Size from tt_ind_solve_power?
<p>I am trying to get the Effect Size given my alpha, power, sample size, ratio. I found tt_ind_solve_power to do this but how would this work for 4 variants + 1 control?</p> <p>This is how I have it currently</p> <pre><code>from statsmodels.stats.power import tt_ind_solve_power effect_size = tt_ind_solve_power(nobs1=X, alpha=0.05, power=0.8, ratio=1, alternative='two-sided') </code></pre> <p>My goal is to get the effect size for my experiment with 4 variants. How do I define my nobs=X parameter in the function above? And would the outcome be the effect size per variant or in aggregate?</p> <pre><code>Sample Sizes: Variant 1: 990 Variant 2: 1001 Variant 3: 1100 Variant 4: 999 Control: 1002 </code></pre> <p>Any help is very much appreciated!</p>
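<p>A sketch of one defensible setup, with the multiple-comparison handling flagged as an assumption: treat each variant-vs-control pair as its own two-sample test (giving one minimum detectable effect size per variant) and, if desired, Bonferroni-correct alpha for the four comparisons:</p> <pre><code>from statsmodels.stats.power import tt_ind_solve_power

variants = {'v1': 990, 'v2': 1001, 'v3': 1100, 'v4': 999}
control = 1002
for name, n in variants.items():
    es = tt_ind_solve_power(nobs1=n, alpha=0.05 / len(variants), power=0.8,
                            ratio=control / n, alternative='two-sided')
    print(name, round(es, 4))
</code></pre>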
<python><scipy><statsmodels><scipy.stats>
2023-01-10 03:43:25
1
365
titutubs
75,065,231
1,128,648
Realtime data update to streamlit from python variable
<p>New to python world. I am trying to understand how to update realtime data in streamlit app.(Reference <a href="https://blog.streamlit.io/how-to-build-a-real-time-live-dashboard-with-streamlit/" rel="nofollow noreferrer">Blog</a>).</p> <p>I am looping through the numbers 1 to 100, display the <code>number</code> along with <code>number * 10</code>. But streamlit always shows number as 1. How can I update the numbers(realtime update) in streamlit?</p> <p>My code:</p> <pre><code>import threading import streamlit as st import time global val, multiply def test_run(): global val, multiply for x in range(1, 100): val = x multiply = val * 10 print(val) time.sleep(1) return val, multiply threading.Thread(target=test_run).start() # dashboard title st.title(&quot;Stramlit Learning&quot;) # creating a single-element container. placeholder = st.empty() with placeholder.container(): col1, col2 = st.columns(2) col1.metric(label=&quot;Current Value&quot;, value=val) col2.metric(label=&quot;Multiply by 10 &quot;, value=multiply) </code></pre>
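<p>A sketch of the usual pattern from that tutorial: drive the loop in the main script and rewrite the placeholder each iteration, since work done on a background thread does not trigger a Streamlit rerender on its own.</p> <pre><code>import time
import streamlit as st

st.title(&quot;Streamlit Learning&quot;)
placeholder = st.empty()

for val in range(1, 100):
    with placeholder.container():
        col1, col2 = st.columns(2)
        col1.metric(label=&quot;Current Value&quot;, value=val)
        col2.metric(label=&quot;Multiply by 10&quot;, value=val * 10)
    time.sleep(1)
</code></pre>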
<python><streamlit>
2023-01-10 03:40:43
1
1,746
acr
75,064,970
13,543,225
How do I use a Semaphore with asyncio.as_completed in Python?
<p><strong>SETUP:</strong> I have a large list (over 100+) of tasks (coroutines) that connect to a REST API database server. The coroutines use a client connection pool. I think that the client connection pool is cutting me off, because I am not able to get all my results. I also think that I could use a Semaphore to limit the concurrent connections to the API server, and get all my results before my script finishes. Here's a minimal example:</p> <pre><code>q = Queue(-1) progress = tqdm(total=total_hits) sem = asyncio.Semaphore(1) for task in asyncio.as_completed(tasks): async with sem: res = await task q.put_nowait(res[&quot;data&quot;]) progress.update(len(res[&quot;data&quot;])) while res[&quot;links&quot;].get(&quot;next&quot;, None) is not None: res = await client.get_json_async(res[&quot;links&quot;][&quot;next&quot;]) q.put_nowait(res[&quot;data&quot;]) progress.update(len(res[&quot;data&quot;])) </code></pre> <p><strong>PROBLEM:</strong> I know that I have 10,000 data points to capture. However, I consistently only capture about half of those. I think it's because the client is limiting my TCP connections to the server.</p> <p>Any ideas?</p>
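<p>A sketch of the ordering issue, assuming the aim is to cap in-flight requests: by the time <code>as_completed</code> yields anything, every coroutine has already been scheduled, so the semaphore has to be acquired inside a wrapper around each task rather than around <code>await task</code>:</p> <pre><code>import asyncio

sem = asyncio.Semaphore(10)  # cap chosen arbitrarily here

async def gated(coro):
    async with sem:          # limits concurrent API calls
        return await coro

for fut in asyncio.as_completed([gated(t) for t in tasks]):
    res = await fut
</code></pre>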
<python><python-asyncio><semaphore>
2023-01-10 02:50:29
1
650
j7skov
75,064,949
11,402,025
How to fix "No module named..." error in Sphinx?
<p>I am getting the following error :</p> <pre><code>WARNING: autodoc: failed to import module 'create_alias' from module 'src.create_alias'; the following exception was raised: No module named 'helpers' </code></pre> <p>Project Structure looks like</p> <pre><code>project-1 | |---src |---create_alias |---__init__.py |---create_alias.py |---helpers |---__init__.py |---helper.py |---shared_lib | |---__init__.py |---common_helper.py | |--__init__.py </code></pre> <p>project-1 is the project directory and contains the following files/folder</p> <pre><code>README.md docs samconfig.toml template.yml __init__.py pyproject.toml src venv </code></pre> <p>I have added the following extensions in conf.py</p> <pre><code>sys.path.insert(0, os.path.abspath('.')) sys.path.insert(0, os.path.abspath('..')) sys.path.insert(0, os.path.abspath('../../project-1')) extensions = [ 'sphinx.ext.autodoc', 'sphinx.ext.viewcode', 'sphinx.ext.todo', 'sphinx.ext.napoleon', 'sphinx.ext.intersphinx', 'sphinx.ext.ifconfig', 'sphinx.ext.githubpages', 'sphinxcontrib.confluencebuilder' ] </code></pre> <p>Sphinx version : v6.1.2</p>
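<p>A sketch of the usual fix, with the relative path an assumption based on the tree shown: <code>create_alias.py</code> does a top-level <code>import helpers</code>, so the <code>src</code> directory itself (the parent of the <code>helpers</code> package) must be on <code>sys.path</code>, not just the project root:</p> <pre><code>sys.path.insert(0, os.path.abspath('../../src'))
</code></pre>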
<python><python-sphinx><autodoc>
2023-01-10 02:46:34
0
1,712
Tanu
75,064,934
1,135,144
Convert encryption function from JavaScript to Python
<p>I'm trying to convert this code from Javascript to Python3:</p> <pre><code>import crypto from 'crypto'; const secretKey = 'NgTriSCalcUltAbLoGResOnOuSeAKeSTraLryOuR' function verifySignature(rawBody) { const calculatedSignature = crypto .createHmac('sha256', secretKey) .update(rawBody, 'utf8') .digest('base64'); return calculatedSignature; } console.log(verifySignature('a')); </code></pre> <p>With that code I get this output: <code>vC8XBte0duRLElGZ4jCsplsbXnVTwBW4BJsUV1qgZbo=</code></p> <p>So I'm trying to convert the same function to Python using this code:</p> <p><strong>UPDATED</strong></p> <pre><code>import hmac import hashlib message = &quot;a&quot; key= &quot;NgTriSCalcUltAbLoGResOnOuSeAKeSTraLryOuR&quot; hmac1 = hmac.new(key=key.encode(), msg=message.encode(), digestmod=hashlib.sha256) message_digest1 = hmac1.hexdigest() print(message_digest1) </code></pre> <p>But I get this error: <strong>AttributeError: 'hash' object has no attribute 'digest_size'</strong></p> <p>Can someone tell me what I am missing to achieve the same output in Python?</p> <p>Thanks you! :)</p>
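<p>For comparison, a sketch that mirrors the Node code exactly; the base64 step is what <code>hexdigest()</code> skips, and it reproduces the value quoted above:</p> <pre><code>import base64
import hashlib
import hmac

key = 'NgTriSCalcUltAbLoGResOnOuSeAKeSTraLryOuR'
mac = hmac.new(key.encode(), 'a'.encode(), hashlib.sha256)
print(base64.b64encode(mac.digest()).decode())
# vC8XBte0duRLElGZ4jCsplsbXnVTwBW4BJsUV1qgZbo=
</code></pre>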
<javascript><python><python-3.x><cryptojs>
2023-01-10 02:43:13
1
473
Nacho Sarmiento
75,064,821
5,928,682
JSON object export to environment variable in Python returning string without ""
<p>I am creating a CDK stack using Python. Here I am exporting a JSON object into a Linux environment variable as part of a CodeBuild step.</p> <pre><code>f&quot;export SHARED=\&quot;{json.dumps(shared)}\&quot;&quot;
</code></pre> <p>The only reason to use <code>\&quot;</code> is that I was getting an error for spaces within the JSON object.</p> <p>When I try to read the environment variable back and load it as JSON, I get a JSON object without <code>&quot;&quot;</code>.</p> <pre><code>{
    mts:{
      account_id:11111,
      workbench:aaaaa,
      prefix:rad600-ars-sil,rad600-srr-sil-stage1,rad600-srr-sil-stage2
   },
   tsf:{
      account_id:22222,
      workbench:bbbbb,
      prefix:yyyy
   }
}
</code></pre> <p>With this object, the <code>loads</code> below does not work and raises <code>json.decoder.JSONDecodeError: Expecting property name enclosed in double quotes</code></p> <pre><code>SHARED = json.loads(os.environ[&quot;SHARED&quot;])
</code></pre> <p>Am I missing something, or is there a better way to send a JSON object as an environment variable?</p>
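<p>A sketch using <code>shlex.quote</code>, which wraps the payload in single quotes so the inner double quotes reach the environment intact (the <code>\&quot;</code> escaping is what the shell strips):</p> <pre><code>import json
import shlex

export_line = f'export SHARED={shlex.quote(json.dumps(shared))}'
</code></pre>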
<python><amazon-web-services><aws-cdk>
2023-01-10 02:19:11
3
677
Sumanth Shetty
75,064,768
5,510,818
Trying to numerically match Python Log-Mel Spectrogram in Accelerate / Swift
<p><a href="https://github.com/tanmayb123/OpenAI-Whisper-CoreML/pull/2" rel="nofollow noreferrer">I am working on a native port</a> of OpenAI's Whisper for macOS and iOS via CoreML and Accelerate / AVFoundation, and in doing so noticed numerical differences in my Log Mel Spectrogram and code Whispers.</p> <p><a href="https://colab.research.google.com/drive/1r9ghakH8__jGqGiYHC2DXtKaW_ozdSrV?usp=sharing" rel="nofollow noreferrer">This python Notebook</a> extracts the Log Mel Spectrogram exactly how Whisper does. I've decomposed the steps clearly.</p> <p>My Swift code matches:</p> <ul> <li>Raw Audio Samples</li> <li>Conversion to Float</li> <li>Conversion to Normalized Float</li> <li>Hamming Windows calculated</li> <li>Loading the precomputed Mel Filters</li> </ul> <p>What begins to differ:</p> <ul> <li>Complex Numbers from the STFT process</li> <li>Results of Matrix ops.</li> </ul> <p>However when I do my windowed FFTs, I notice similar values ranges, but inconsistent values that dont seem to match remotely. As in, not a rounding error.</p> <p><a href="https://github.com/vade/OpenAI-Whisper-CoreML/blob/feature/AccelerateMEL/Whisper/Whisper/MelSpectrogram.swift" rel="nofollow noreferrer">My Full Mel class can be found here:</a></p> <p>The Python code I am trying to match:</p> <pre><code> window = torch.hann_window(N_FFT).to(audio.device) stft = torch.stft(audio, N_FFT, HOP_LENGTH, window=window, return_complex=True) magnitudes = stft[:, :-1].abs() ** 2 filters = mel_filters(audio.device, n_mels) mel_spec = filters @ magnitudes log_spec = torch.clamp(mel_spec, min=1e-10).log10() log_spec = torch.maximum(log_spec, log_spec.max() - 8.0) log_spec = (log_spec + 4.0) / 4.0 </code></pre> <p>And the core section of my swift code:</p> <p>Points of interest:</p> <ul> <li>Python appears to be outputting STFT of size 201 and 3001, whereas im not sure why they get 201, or 3001 from. My math seems to work out to generating 200 x 3000 exactly with the params from Whisper?</li> <li>Am I calculating the overlapping hops correctly? I am not merging them into a single FFT. Im also padding by 200 samples before and after my audio frame of samples.</li> <li>I am assuming that the 3000 sets 200 length complex numbers generated are STFT output output from PyTorch</li> <li>Im assuming that vDSP's FFT implementation more or less matches whatever is happening in PyTorch</li> <li>The STFT from Pytorch 0ths element and last element both have zero'd imaginary components. Mine do not. How!?</li> </ul> <p>I know this is a large, complex question, but maybe someone brave will help me out?</p> <p>Thank you in advance!</p> <pre><code> func processData(audio: [Int16]) -&gt; [Float] { assert(self.sampleCount == audio.count) var audioFloat:[Float] = [Float](repeating: 0, count: audio.count) vDSP.convertElements(of: audio, to: &amp;audioFloat) vDSP.divide(audioFloat, 32768.0, result: &amp;audioFloat) // insert numFFT/2 samples before and numFFT/2 after so we have a extra numFFT amount to process audioFloat.insert(contentsOf: [Float](repeating: 0, count: self.numFFT/2), at: 0) audioFloat.append(contentsOf: [Float](repeating: 0, count: self.numFFT/2)) // Split Complex arrays holding the mel spectrogram var allSampleReal = [[Float]](repeating: [Float](repeating: 0, count: self.numFFT/2), count: self.melSampleCount) var allSampleImaginary = [[Float]](repeating: [Float](repeating: 0, count: self.numFFT/2), count: self.melSampleCount) // we need to create 200 x 3000 matrix of STFTs - note we appear to want to output complex numbers (?) 
for (i) in 0 ..&lt; self.melSampleCount { // Slice numFFTs every hop count (barf) and make a mel spectrum out of it var audioFrame = Array&lt;Float&gt;( audioFloat[ (i * self.hopCount) ..&lt; ( (i * self.hopCount) + self.numFFT) ] ) assert(audioFrame.count == self.numFFT) // Split Complex arrays holding a single FFT result, which gets appended to the var sampleReal:[Float] = [Float](repeating: 0, count: self.numFFT/2) var sampleImaginary:[Float] = [Float](repeating: 0, count: self.numFFT/2) sampleReal.withUnsafeMutableBufferPointer { realPtr in sampleImaginary.withUnsafeMutableBufferPointer { imagPtr in vDSP.multiply(audioFrame, hanningWindow, result: &amp;audioFrame) var complexSignal = DSPSplitComplex(realp: realPtr.baseAddress!, imagp: imagPtr.baseAddress!) audioFrame.withUnsafeBytes { unsafeAudioBytes in vDSP.convert(interleavedComplexVector: [DSPComplex](unsafeAudioBytes.bindMemory(to: DSPComplex.self)), toSplitComplexVector: &amp;complexSignal) } self.fft.forward(input: complexSignal, output: &amp;complexSignal) } } allSampleReal[i] = sampleReal allSampleImaginary[i] = sampleImaginary } // We create flattened 3000 x 200 array of DSPSplitComplex values var flattnedReal:[Float] = allSampleReal.flatMap { $0 } var flattnedImaginary:[Float] = allSampleImaginary.flatMap { $0 } // Take the magnitude squared of the matrix, which results in a Result flat array of 3000 x 200 of real floats // Then multiply it with our mel filter bank let count = flattnedReal.count var magnitudes = [Float](repeating: 0, count: count) var melSpectroGram = [Float](repeating: 0, count: 80 * 3000) flattnedReal.withUnsafeMutableBytes { unsafeReal in flattnedImaginary.withUnsafeMutableBytes { unsafeImaginary in let matrix = [DSPSplitComplex](repeating: DSPSplitComplex(realp: unsafeReal.bindMemory(to: Float.self).baseAddress!, imagp: unsafeImaginary.bindMemory(to: Float.self).baseAddress!), count: count) // populate magnitude matrix with magnitudes squared vDSP_zvmags(matrix, 1, &amp;magnitudes, 1, vDSP_Length(count)) // transpose magnitudes to get our 200 x 3000 matrix vDSP_mtrans(magnitudes, 1, &amp;magnitudes, 1, 3000, 200) // Matrix A, a MxK sized matrix // Matrix B, a KxN sized matrix // MATRIX A mel filters is 80 rows x 200 columns // MATRIX B magnitudes is 3000 x 200 // MATRIX B is TRANSPOSED to be 200 rows x 3000 columns // MATRIX C melSpectroGram is 80 rows x 3000 columns let M: Int32 = 80 // number of rows in matrix A let N: Int32 = 3000 // number of columns in matrix B let K: Int32 = 200 // number of columns in matrix A and number of rows in // matrix multiply magitude squared matrix with our filter bank // see https://www.advancedswift.com/matrix-math/ cblas_sgemm(CblasRowMajor, CblasNoTrans, // Transpose A CblasNoTrans, // M, // M Number of rows in matrices A and C. N, // N Number of columns in matrices B and C. K, // K Number of columns in matrix A; number of rows in matrix B. 1.0, // Alpha Scaling factor for the product of matrices A and B. self.melFilterMatrix, // Matrix A K, // LDA The size of the first dimension of matrix A; if you are passing a matrix A[m][n], the value should be m. magnitudes, // Matrix B N, // LDB The size of the first dimension of matrix B; if you are passing a matrix B[m][n], the value should be m. 0, // Beta Scaling factor for matrix C. &amp;melSpectroGram, // Matrix C N) // LDC The size of the first dimension of matrix C; if you are passing a matrix C[m][n], the value should be m. 
// } var minValue: Float = 1e-10 var maxValue: Float = 0.0 var maxIndex: vDSP_Length = 0 var minIndex: vDSP_Length = 0 let melCount = melSpectroGram.count // get the current max value vDSP_maxvi(melSpectroGram, 1, &amp;maxValue, &amp;maxIndex, vDSP_Length(melCount)) // Clip to a set min value, keeping the current max value vDSP_vclip(melSpectroGram, 1, &amp;minValue, &amp;maxValue, &amp;melSpectroGram, 1, vDSP_Length(melCount)) // Take the log base 10 var melCountInt32:UInt32 = UInt32(melCount) vvlog10f(&amp;melSpectroGram, melSpectroGram, &amp;melCountInt32) // get the new max value vDSP_maxvi(melSpectroGram, 1, &amp;maxValue, &amp;maxIndex, vDSP_Length(melCount)) // get the new min value vDSP_minvi(melSpectroGram, 1, &amp;minValue, &amp;minIndex, vDSP_Length(melCount)) // emulate // log_spec = torch.maximum(log_spec, log_spec.max() - 8.0) // we effectively clamp to max - 8.0 var newMin = maxValue - 8.0 // Clip to new max and updated min vDSP_vclip(melSpectroGram, 1, &amp;newMin, &amp;maxValue, &amp;melSpectroGram, 1, vDSP_Length(melCount)) // Add 4 and Divide by 4 var four:Float = 4.0 vDSP_vsadd(melSpectroGram, 1, &amp;four, &amp;melSpectroGram, 1, vDSP_Length(melCount)) vDSP_vsdiv(melSpectroGram, 1, &amp;four, &amp;melSpectroGram, 1, vDSP_Length(melCount)) } } return melSpectroGram } </code></pre>
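<p>On the 201 x 3001 point, a small numeric sketch (Whisper's constants N_FFT=400 and HOP_LENGTH=160 assumed): a one-sided FFT keeps <code>n_fft // 2 + 1</code> frequency bins, and centred framing of 30 s at 16 kHz yields one extra frame, which is why the Python side reports 201 and 3001 rather than 200 and 3000.</p> <pre><code>n_fft, hop, n_samples = 400, 160, 480_000
print(n_fft // 2 + 1)        # 201 frequency bins
print(1 + n_samples // hop)  # 3001 frames with center=True padding
</code></pre>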
<python><audio><fft><accelerate-framework><openai-whisper>
2023-01-10 02:09:36
0
783
vade
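<p>For cross-checking the Accelerate pipeline above, here is a small NumPy sketch of the normalization stage that the final vDSP block emulates (clip to 1e-10, log10, clamp to max - 8, then (x + 4) / 4); the input is assumed to be the mel-filtered power spectrogram:</p>
<pre><code>import numpy as np

def log_mel_normalize(mel_power):
    # mel_power: (80, 3000) mel-filtered magnitude-squared spectrogram
    log_spec = np.log10(np.clip(mel_power, 1e-10, None))
    log_spec = np.maximum(log_spec, log_spec.max() - 8.0)  # clamp to max - 8
    return (log_spec + 4.0) / 4.0                          # shift and scale
</code></pre>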
75,064,727
13,994,829
How to share dictionary memory in different process?
<p>Is it possible to share a dictionary variable in Python Multiprocess (<code>pathos.multiprocess</code>)?</p>
<p>I use the following code; however, it doesn't work as expected.</p>
<p>I expect <code>skus</code> to end up as <code>{0: 0, 1: 10, 2: 20, ...}</code></p>
<pre><code>from pathos.multiprocessing import Pool as ProcessPool

def outer():
    skus = {}

    def process(skus, sku):
        skus[sku] = sku * 10

    with ProcessPool() as pool:
        pool.starmap(process, ((skus, sku) for sku in range(100)), chunksize=3)
    print(skus)

if __name__ == &quot;__main__&quot;:
    outer()
</code></pre>
<p><strong>Output</strong>:</p>
<pre><code>skus = {}
</code></pre>
<p>So I used <code>Manager().dict</code> as my variable, but now I get another error.</p>
<p>Where is the problem, and how can I correctly share a <code>dict</code> across processes?</p>
<pre><code>from pathos.multiprocessing import Pool as ProcessPool
from multiprocessing import Manager

def outer():
    manager = Manager()
    skus = manager.dict()

    def process(sku):
        skus[sku] = sku * 10

    with ProcessPool() as pool:
        pool.map(process, range(100), chunksize=3)
    print(skus)

if __name__ == &quot;__main__&quot;:
    outer()
</code></pre>
<p><strong>Output: (Error)</strong></p>
<pre><code>....
    raise AuthenticationError('digest sent was rejected')
multiprocessing.context.AuthenticationError: digest sent was rejected
</code></pre>
<python><dictionary><memory><multiprocessing><python-multiprocessing>
2023-01-10 02:02:21
2
545
Xiang
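<p>A minimal sketch of one way to get the shared dict working, using the stdlib pool and a top-level worker (the nested <code>process</code> function is a likely source of the trouble, and the Manager proxy can simply be passed to the workers as an argument):</p>
<pre><code>from multiprocessing import Manager, Pool

def process(skus, sku):
    skus[sku] = sku * 10      # skus is a Manager proxy shared across workers

if __name__ == '__main__':
    with Manager() as manager:
        skus = manager.dict()
        with Pool() as pool:
            pool.starmap(process, ((skus, sku) for sku in range(100)), chunksize=3)
        print(dict(skus))     # {0: 0, 1: 10, 2: 20, ...}
</code></pre>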
75,064,699
14,057,599
How to fill 2D binary Numpy array without using for loop?
<p>Suppose I have a Numpy array <code>a</code> and I want to fill the inner with all 1 like array <code>b</code></p> <pre><code>print(a) array([[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], [0., 0., 1., 1., 1., 1., 1., 0., 0., 0.], [0., 1., 0., 1., 1., 1., 1., 1., 0., 0.], [0., 1., 0., 0., 1., 0., 0., 0., 1., 0.], [0., 1., 0., 1., 0., 0., 0., 0., 1., 0.], [0., 1., 0., 1., 0., 0., 0., 1., 0., 0.], [0., 1., 0., 1., 0., 0., 0., 1., 0., 0.], [0., 0., 1., 0., 0., 0., 0., 1., 0., 0.], [0., 0., 0., 1., 1., 1., 1., 0., 0., 0.], [0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]]) print(b) array([[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], [0., 0., 1., 1., 1., 1., 1., 0., 0., 0.], [0., 1., 1., 1., 1., 1., 1., 1., 0., 0.], [0., 1., 1., 1., 1., 1., 1., 1., 1., 0.], [0., 1., 1., 1., 1., 1., 1., 1., 1., 0.], [0., 1., 1., 1., 1., 1., 1., 1., 0., 0.], [0., 1., 1., 1., 1., 1., 1., 1., 0., 0.], [0., 0., 1., 1., 1., 1., 1., 1., 0., 0.], [0., 0., 0., 1., 1., 1., 1., 0., 0., 0.], [0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]]) </code></pre> <p>I'm currently using the <code>for</code> loop to do this, are there any ways to do this <strong>without</strong> using the <code>for</code> loop and <strong>only</strong> using Numpy? Thanks.</p> <pre><code>b = np.zeros(a.shape) for i in range(a.shape[0]): occupied = np.where(a[i] == 1)[0] if len(occupied) &gt; 0: for j in range(occupied[0], occupied[-1] + 1): b[i][j] = 1 </code></pre> <p><strong>Edit</strong>:</p> <ul> <li>Only using Numpy</li> <li>The areas I want to fill always have contiguous boundaries.</li> </ul>
<python><numpy>
2023-01-10 01:56:41
2
317
Qimin Chen
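<p>A loop-free sketch equivalent to the posted loop: find the first and last 1 per row with <code>argmax</code> and fill between them by broadcasting; rows without any 1 stay all zero:</p>
<pre><code>import numpy as np

def fill_rows(a):
    mask = a == 1
    n = a.shape[1]
    has_one = mask.any(axis=1)
    # column index of the first and last 1 in each row
    first = np.where(has_one, mask.argmax(axis=1), n)
    last = np.where(has_one, n - 1 - mask[:, ::-1].argmax(axis=1), -1)
    cols = np.arange(n)
    return ((cols &gt;= first[:, None]) &amp; (cols &lt;= last[:, None])).astype(a.dtype)
</code></pre>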
75,064,656
18,758,062
Printing Pytorch Tensor from gpu, or move to cpu and/or detach?
<p>I'm starting Pytorch and still trying to understand the basic concepts.</p> <p>If I have a network <code>n</code> on the GPU that produces an output tensor <code>out</code>, can it be printed to stdout directly? Or should it first be moved to the cpu, or be detached from the graph before printing?</p> <p>Tried several combinations below involving <code>.cpu()</code> and <code>.detach()</code></p> <pre class="lang-py prettyprint-override"><code>import torch.nn as nn import torch class Net(nn.Module): def __init__(self): super().__init__() self.layers = nn.Sequential( nn.Linear(5, 10), nn.ReLU(), nn.Linear(10, 10), nn.ReLU(), nn.Linear(10, 3), ) def forward(self, x): return self.layers(x) device = torch.device(&quot;cuda:0&quot;) # assume its available x = torch.rand(10, 5).to(device) net = Net().to(device) # Pretend we are in a training loop iteration out = net(x) print(f&quot;The output is {out.max()}&quot;) print(f&quot;The output is {out.max().detach()}&quot;) print(f&quot;The output is {out.max().cpu()}&quot;) print(f&quot;The output is {out.max().cpu().detach()}&quot;) # continue training iteration and repeat more iterations in training loop </code></pre> <p>I got the same output for all 4 methods. Which is the correct way?</p>
<python><pytorch>
2023-01-10 01:48:16
1
1,623
gameveloster
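<p>All four variants print fine: printing a CUDA tensor neither breaks the autograd graph nor requires an explicit move. A tiny CPU-runnable sketch of the one conversion that often matters, extracting a plain Python number:</p>
<pre><code>import torch

x = torch.rand(3, requires_grad=True)
y = (x * 2).max()
print(y)          # printing alone is fine and does not affect the graph
print(y.item())   # .item() detaches, copies to CPU and returns a plain float
</code></pre>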
75,064,572
7,250,111
How to download certain columns of a table using Selenium and Python?
<p>I want to get 2 columns(Symbol, Name) of a table on this website : <a href="https://www.nasdaq.com/market-activity/quotes/nasdaq-ndx-index" rel="nofollow noreferrer">https://www.nasdaq.com/market-activity/quotes/nasdaq-ndx-index</a> <a href="https://i.sstatic.net/jDeoS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jDeoS.png" alt="enter image description here" /></a></p> <p>It seemed that each row has its class name as &quot;nasdaq-ndx-index__row&quot; so I tried</p> <pre><code>from selenium.webdriver.common.by import By from selenium import webdriver driver = webdriver.Chrome('D:\\PROGRAM\\chromedriver.exe') url = &quot;https://www.nasdaq.com/market-activity/quotes/nasdaq-ndx-index&quot; driver.get(url) driver.implicitly_wait(1) xpath = &quot;/html/body/div[2]/div/main/div[2]/article/div[2]/div/div[3]/div[3]/div[2]/table/tbody/tr[2]&quot; tb = driver.find_elements(By.CLASS_NAME, &quot;nasdaq-ndx-index__row&quot;) </code></pre> <p>but <code>tb</code> is just a list of</p> <pre><code>&lt;selenium.webdriver.remote.webelement.WebElement (session=&quot;f66740af2c8b92f4c81f30e893044cdc&quot;, element=&quot;57a9c748-bbdc-4626-8e31-2cb6f7b680f4&quot;)&gt; &lt;selenium.webdriver.remote.webelement.WebElement (session=&quot;f66740af2c8b92f4c81f30e893044cdc&quot;, element=&quot;b1c72b3b-9934-418a-a743-1aedf5dcd65c&quot;)&gt; &lt;selenium.webdriver.remote.webelement.WebElement (session=&quot;f66740af2c8b92f4c81f30e893044cdc&quot;, element=&quot;c68764ea-d82a-42f1-8f3d-0801d9604eae&quot;)&gt; &lt;selenium.webdriver.remote.webelement.WebElement (session=&quot;f66740af2c8b92f4c81f30e893044cdc&quot;, element=&quot;a2dad31a-ad7b-4b41-ba58-d6424b093b96&quot;)&gt; </code></pre> <p>What am I missing and how could I get the 2 columns?</p>
<python><selenium>
2023-01-10 01:32:38
1
2,056
maynull
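<p>The find_elements call returns WebElement objects, so the text still has to be read off each one; a sketch using the row class from the question (the site's markup is dynamic and may change, so the selector is an assumption):</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get('https://www.nasdaq.com/market-activity/quotes/nasdaq-ndx-index')
driver.implicitly_wait(10)

rows = driver.find_elements(By.CLASS_NAME, 'nasdaq-ndx-index__row')
for row in rows:
    cells = row.find_elements(By.TAG_NAME, 'td')
    if len(cells) &gt;= 2:
        print(cells[0].text, cells[1].text)   # Symbol, Name
driver.quit()
</code></pre>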
75,064,415
3,362,334
db.create_all() not generating db
<p>I'm trying to test Flask with SQLAlchemy and I stumbled across this problem. First, I have to note that I read all of the related threads and none of them solves my problem. My problem is that db.create_all() doesn't generate the table I defined. I have a model class in the file person.py:</p>
<pre><code>from website import db

class Person(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    username = db.Column(db.String, nullable=False)
    password = db.Column(db.String)
    width = db.Column(db.Integer)
    height = db.Column(db.Integer)
    agent = db.Column(db.String)
    user_data_dir = db.Column(db.String)
</code></pre>
<p>And in my website.py, which is the file from which I launch the app:</p>
<pre><code>from flask import Flask, jsonify, render_template, request
from flask_sqlalchemy import SQLAlchemy

# create the extension
db = SQLAlchemy()

def start_server(host, port, debug=False):
    from person import Person
    # create the app
    app = Flask(__name__, static_url_path='',
                static_folder='web/static',
                template_folder='web/templates')
    # configure the SQLite database, relative to the app instance folder
    app.config[&quot;SQLALCHEMY_DATABASE_URI&quot;] = &quot;sqlite:///database0.db&quot;
    # initialize the app with the extension
    db.init_app(app)
    print('initialized db')

    print('creating tables...')
    with app.app_context():
        db.create_all()
        db.session.add(Person(username=&quot;example33&quot;))
        db.session.commit()

        person = db.session.execute(db.select(Person)).scalar()
        print('persons')
        print(person.username)

if __name__ == '__main__':
    start_server(host='0.0.0.0', port=5002, debug=True)
</code></pre>
<p>I think the problem might be that the Person class is not importing properly, because when I put the class inside the start_server function it executes fine and creates the table, but I don't know why this is happening. I followed all the advice and imported it before everything else, and I also share the same db object between the two files.</p>
<python><flask><sqlalchemy>
2023-01-10 00:58:50
2
2,228
user3362334
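<p>One common cause of this exact symptom: running <code>python website.py</code> makes that module <code>__main__</code>, while <code>from website import db</code> in person.py imports website.py a second time and creates a second, unrelated <code>db</code>; the model registers on one while <code>create_all()</code> runs on the other. A sketch of a layout that avoids the double import (file names are illustrative):</p>
<pre><code># extensions.py
from flask_sqlalchemy import SQLAlchemy
db = SQLAlchemy()

# person.py
from extensions import db

class Person(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    username = db.Column(db.String, nullable=False)

# website.py
from flask import Flask
from extensions import db
from person import Person   # imported before create_all so the model is registered

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///database0.db'
db.init_app(app)
with app.app_context():
    db.create_all()
</code></pre>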
75,064,356
5,431,132
Using Numba njit with np.array
<p>I have two Python functions that I am trying to speed up with <code>njit</code> as they are impacting the performance of my program. Below is a MWE that reproduces the following error when we add the <code>@njit(fastmath=True)</code> decorator to <code>f</code>. Otherwise it works. I believe the error is because the array <code>A</code> has dtype object. Can I use Numba to decorate <code>f</code> in addition to <code>g</code>? If not, what is the fastest way to map <code>g</code> to the elements of <code>A</code>? Roughly, the length of A = B ~ 5000. These functions are called around 500 MM times though as part of a hpc workflow.</p> <pre><code>@njit(fastmath=True) def g(a, B): # some function of a and B return 19.12 / (len(a) + len(B)) def f(A, B): total = 0.0 for i in range(len(B)): total += g(A[i], B) return total A = [[2, 5], [4, 5, 6, 7], [0, 8], [6, 7], [1, 8], [0, 1], [1, 3], [1, 3], [2, 4]] B = [1, 1, 1, 1, 1, 1, 1, 1, 1] A = np.array([np.array(a, dtype=int) for a in A], dtype=object) B = np.array(B, dtype=int) f(A, B) </code></pre> <blockquote> <p>TypingError: Failed in nopython mode pipeline (step: nopython frontend) non-precise type array(pyobject, 1d, C) During: typing of argument at /var/folders/9x/hnb8fg0x2p1c9p69p_70jnn40000gq/T/ipykernel_59724/1681580915.py (8)</p> <p>File &quot;../../../../var/folders/9x/hnb8fg0x2p1c9p69p_70jnn40000gq/T/ipykernel_59724/1681580915.py&quot;, line 8: &lt;source missing, REPL/exec in use?&gt;</p> </blockquote>
<python><numpy><numba><hpc>
2023-01-10 00:43:36
2
582
AngusTheMan
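<p>One way to keep everything in nopython mode is to flatten the ragged list into one integer array plus an offsets array (a CSR-style layout), so no object-dtype array is ever handed to Numba; a sketch:</p>
<pre><code>import numpy as np
from numba import njit

@njit(fastmath=True)
def g(a, B):
    return 19.12 / (a.size + B.size)

@njit(fastmath=True)
def f(flat, offsets, B):
    total = 0.0
    for i in range(offsets.size - 1):
        # slice out the i-th sublist without any Python objects
        total += g(flat[offsets[i]:offsets[i + 1]], B)
    return total

A = [[2, 5], [4, 5, 6, 7], [0, 8], [6, 7], [1, 8], [0, 1], [1, 3], [1, 3], [2, 4]]
B = np.ones(9, dtype=np.int64)
flat = np.concatenate([np.asarray(a, dtype=np.int64) for a in A])
offsets = np.zeros(len(A) + 1, dtype=np.int64)
offsets[1:] = np.cumsum([len(a) for a in A])
print(f(flat, offsets, B))
</code></pre>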
75,064,347
491,894
In python, when loading a yaml file, is it possible to delay evaluation of a value until a specific *different* key is set?
<p>I use <code>python</code> and <code>ruamel.yaml</code> to load a configuration file. I am currently allowing a [<code>!NETRC</code>][1] entry evaluated from the user's <code>.netrc</code> file when a password or token is needed.</p> <p>This is working ok, but it can sometimes be frustrating when some functions take a while before attempting to connect and fail.</p> <p>However, to evaluate the netrc tag, I need the host entry (it should be a URL, but netrc allows any string in the host field), which is a sibling key in the yaml file.</p> <p>The relevant part of my yaml file looks like this.</p> <pre class="lang-yaml prettyprint-override"><code>connect: url: https://my.company/path/to/service login: mylogin token: !NETRC </code></pre> <p>I need <code>token</code> to <strong>not</strong> be evaluated until <code>url</code> is and a way to access that value.</p> <p>Am I expecting too much? Is there a way to do this with ruamel.yaml?</p> <p>A simplified example of the code looks like the following:</p> <pre class="lang-py prettyprint-override"><code>import ruamel.yaml yaml = ruamel.yaml.YAML(typ='safe') yaml.default_flow_style = False class NetrcTag(str): yaml_tag = '!NETRC' def __new__(cls, value): newvalue = str.__new__(cls, '!NETRC') # newvalue.netrctag = load_netrc(cfg, url value goes here) &lt;---- newvalue.netrctag = value return newvalue @classmethod def from_yaml(cls, constructor, node): return cls(node.value) @classmethod def to_yaml(cls, represented, node): return representer.represent_scalar(cls.yaml_tag, node.netrctag) </code></pre> <p>[1]: Thanks @Anthon! <a href="https://stackoverflow.com/q/75022789/491894">I&#39;ve loaded a yaml file with `!ENV SOME_VAR` and replaced the string with the value. How do I save the original string and not the changed string?</a></p>
<python><yaml><ruamel.yaml>
2023-01-10 00:42:32
1
1,304
harleypig
75,064,251
15,171,387
Random selection of value in a dataframe with multiple conditions
<p>Let's say I have a dataframe like this.</p>
<pre><code>import pandas as pd
import numpy as np

np.random.seed(0)
df = pd.DataFrame(np.random.choice(list(['a', 'b', 'c', 'd']), 50), columns=list('1'))

print(df.value_counts())
1
d    18
a    12
b    12
c     8
dtype: int64
</code></pre>
<p>Now I am trying to sample based on the frequency of each value in the column. For example, if the count of a value is below 8 (here value c), then select 50% of the rows; if between 8 and 12, then select 40%; and if &gt;12, 30%.</p>
<p>Here is what I thought might be a way to do it, but it does not produce exactly what I am looking for.</p>
<pre><code>sample_df = df.groupby('1').apply(lambda x: x.sample(frac=.2)).reset_index(drop=True)
print(sample_df.value_counts())
1
d    4
a    2
b    2
c    2
</code></pre>
<python><pandas><dataframe><random><sampling>
2023-01-10 00:21:59
2
651
armin
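<p>A sketch that chooses the fraction per group from the group's own size inside the groupby-apply (thresholds as stated in the question):</p>
<pre><code>import numpy as np
import pandas as pd

np.random.seed(0)
df = pd.DataFrame(np.random.choice(list('abcd'), 50), columns=list('1'))

def frac_for(n):
    if n &lt; 8:
        return 0.5
    if n &lt;= 12:
        return 0.4
    return 0.3

sample_df = (df.groupby('1', group_keys=False)
               .apply(lambda g: g.sample(frac=frac_for(len(g))))
               .reset_index(drop=True))
print(sample_df.value_counts())   # with these counts: d 5, a 5, b 5, c 3
</code></pre>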
75,064,235
11,295,630
Sklearn's One-Class SVM with separable Nu Parameter
<p>The existing One-Class Classification (OCC) model implementation from Sklearn (OneClassSVM) has a parameter (Nu) that handles both the upper bound of training errors and the lower bound of support vectors. I have data that has no training error, however, I still want support vectors. Is there any known Python implementation of OneClassSVM that has separable parameters so that I can specify training error and support vectors separately?</p> <p>Currently, my alternative is to purposely contaminate my one-class data.</p> <p>Documentation: <a href="https://scikit-learn.org/stable/modules/generated/sklearn.svm.OneClassSVM.html" rel="nofollow noreferrer">https://scikit-learn.org/stable/modules/generated/sklearn.svm.OneClassSVM.html</a></p> <p>Alternatively, is there any pure one-class SVM implementation that does not assume an outlier class instance in training?</p>
<python><scikit-learn><svm><data-mining>
2023-01-10 00:19:17
1
403
Riley K
75,064,169
2,687,317
matplotlib subplots don't honor ylim
<p>I'm trying to create 9 subplots (3x3) <em><strong>all with the same x and y ranges</strong></em>. I setup the plots using this code, and all but the last subplot use the correct y-range (ylim).</p> <pre><code>brightM = -22 # Plot range... limitM = -14 dataIn = np.load(&quot;testLFdata.tst.npy&quot;,allow_pickle=True) fig, axs = plt.subplots(nrows=3, ncols=3, sharex='all',sharey='all',figsize=(15,15)) linaxs = axs.reshape(-1) for i, ax in zip(range(dataIn[0].size), linaxs): print(&quot;z={}&quot;.format(dataIn[0,i])) xdata = dataIn[1,i][0] ydata = dataIn[1,i][1] ax.plot(xdata,ydata,ls='-',marker='o', c=&quot;b&quot;) # Format plt ax.set_yscale('log') ax.set_ylim([5e-6,1e-1]) ax.set_yticks(np.logspace(-6,-1,6)) ax.set_xscale('linear') ax.set_xlim([brightM-0.85,limitM+0.5]) ax.set_xticks(np.arange(brightM,limitM+1,2)) ax.set_xticklabels([str(i) for i in np.arange(brightM,limitM+1,2)]) ax.grid(b=True) gc.collect() plt.subplots_adjust(left=0.07, bottom=0.08, right=1.0, top=1.0, wspace=.0, hspace=.0) gc.collect() </code></pre> <p><a href="https://www.dropbox.com/s/t716a8foddqk7tl/testLFdata.tst.npy?dl=0" rel="nofollow noreferrer">Here's the data for the data struct &quot;dataIn&quot;</a>.</p> <p>Here's the plot I get... As you can see the lower-right plot has the wrong y-limits!</p> <p><a href="https://i.sstatic.net/ieQUa.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ieQUa.png" alt="9x subplots" /></a></p>
<python><matplotlib>
2023-01-10 00:07:39
0
533
earnric
75,064,150
14,278,839
Sort a list by index with default if index doesn't exist
<p>I have a 2 dimensional list. I want to sort this list by multiple criteria in the sublist, but provide a default if one of the chosen index criteria doesn't exist.</p> <pre><code>my_list = [[5, 4], [1], [6, 8, 1]] my_list.sort(key=lambda x: (x[0], x[1]) </code></pre> <p>Obviously, this will throw an out of range error. How can I get around this and provide a default value if the index doesn't exist?</p>
<python><list><sorting>
2023-01-10 00:03:16
3
461
YangTegap
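<p>A sketch using a conditional inside the key; the default decides where the short sublists sort (<code>float('-inf')</code> puts them first, <code>float('inf')</code> last):</p>
<pre><code>my_list = [[5, 4], [1], [6, 8, 1]]

DEFAULT = float('-inf')   # missing second elements sort first
my_list.sort(key=lambda x: (x[0], x[1] if len(x) &gt; 1 else DEFAULT))
print(my_list)            # [[1], [5, 4], [6, 8, 1]]
</code></pre>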
75,064,080
1,142,502
Python error The truth value of a DataFrame is ambiguous
<p>My Python Pandas DF block is giving me the below error:</p> <pre><code>The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all(). Traceback (most recent call last): File &quot;/var/task/lambda_function.py&quot;, line 58, in lambda_handler df, charter, charter_filename = clean_df(read_excel_data, file) File &quot;/var/task/lambda_function.py&quot;, line 159, in clean_df if df[df['program_code'] == 'F23-IPS-SD-ENG']: File &quot;/opt/python/pandas/core/generic.py&quot;, line 1443, in __nonzero__ f&quot;The truth value of a {type(self).__name__} is ambiguous. &quot; </code></pre> <p>I have the following lines of code:</p> <pre><code>if df[df['program_code'] == 'name_of_the_special_program']: print(&quot;Special Load&quot;) else: print(&quot;Regular Load&quot;) </code></pre> <p>I also tried:</p> <pre><code>array = ['name_of_the_special_program'] if df.loc[df['program_code'].isin(array)]: print(&quot;Special Load&quot;) else: print(&quot;Regular Load&quot;) </code></pre> <p>I did my research before posting here, which states:</p> <blockquote> <p>This error occurs because the if statement requires a truth value, i.e., a statement evaluating to True or False. In the above example, the &lt; operator used against a dataframe will return a boolean series, containing a combination of True and False for its values. Since a series is returned, Python doesn't know which value to use, meaning that the series has an ambiguous truth value.</p> <p>Instead, we can pass this statement into dataframe brackets to get the desired values</p> </blockquote> <p>That's why I wrote this:</p> <blockquote> <p>df[df['program_code'] == 'name_of_the_special_program']</p> </blockquote> <p>Sample data from the DF:</p> <p><a href="https://i.sstatic.net/ZWUu8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZWUu8.png" alt="enter image description here" /></a></p>
<python><pandas>
2023-01-09 23:49:09
3
427
aki2all
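<p>The comparison yields a boolean Series, and a bare <code>if</code> needs a single truth value; collapsing the Series with <code>.any()</code> is one way, sketched on a toy frame:</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'program_code': ['F23-IPS-SD-ENG', 'OTHER']})

# .any() asks: does at least one row match?
if (df['program_code'] == 'F23-IPS-SD-ENG').any():
    print('Special Load')
else:
    print('Regular Load')
</code></pre>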
75,064,033
15,781,591
How to bin values in python based on dynamic ranges?
<p>I have the following datafame, df1:</p> <pre><code> ID Min_Value Max_Value --------------------------------------- 0 ID_1 100 150 1 ID_1 150 170 2 ID_2 80 105 3 ID_2 105 120 </code></pre> <p>I then have another dataframe, df2, with data that looks like:</p> <pre><code> ID Value ---------------------- 0 ID_1 102 1 ID_1 101 2 ID_1 155 3 ID_1 165 4 ID_1 162 5 ID_1 159 ... 55 ID_1 105 56 ID_1 121 57 ID_1 143 58 ID_1 137 59 ID_1 155 60 ID_1 165 ... 100 ID_2 95 101 ID_2 81 102 ID_2 91 103 ID_2 101 104 ID_2 115 105 ID_2 117 ... 165 ID_2 91 166 ID_2 90 167 ID_2 105 168 ID_2 119 169 ID_2 84 170 ID_2 86 ... </code></pre> <p>And so df1 shows for each unique &quot;ID&quot; there are two ranges, or bins. For ID_1, we have a lower bin: 100-150, and an upper bin: 150-170. And then for ID_2, we have a lower bin: 80-105, and an upper bin: 105-120. And then I have df2, which contains hundreds of rows, showing a value for each ID, for where in this case there are only 2 IDs, ID_1 and ID_2. What I want to do is bin the values of df2 to find out how many of its values fall within each of the bins for each ID in df1.</p> <p>And so I want to create the following df3:</p> <pre><code> ID Bin_1 Bin_2 Proportion_Pop ------------------------------------------------- 0 ID_1 XX XX 0.XX 1 ID_1 XX XX 0.XX 2 ID_2 XX XX 0.XX 3 ID_2 XX XX 0.XX </code></pre> <p>Where in this df3, I am finding out, for each unique ID, here ID_1 and ID_2, how many of the corresponding values fall within the lower bin - Bin_1, and then how many of the corresponding values fall within the upper bin-Bin_2? And then, what proportion of the total population of each ID_2 fall within each corresponding bin? These &quot;Proportion_Pop&quot; values for each ID should sum to 1.0.</p> <p>I am having trouble figuring out how to approach this in a way that is dynamic and can accommodate if there perhaps happen to be more IDs, e.g. ID_3, ID_4, ID_5, etc., and as well more than 2 bins, e.g. Bin_3, Bin_4, Bin_5.</p> <p>What I am thinking to do is capture the value ranges for each Bin for each ID, and then place them in a dictionary, and then after, loop through that dictionary for each ID, and then count the values in each bin via <code>value_counts.()</code> to derive the proportion of the total population, but this seems to be getting messy. Is there a straightforward way to accomplish this using '.value_counts()'?</p>
<python><pandas><dataframe>
2023-01-09 23:36:43
0
641
LostinSpatialAnalysis
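<p>A sketch that builds the bin edges per ID from df1 and counts with <code>pd.cut</code>, assuming, as in the example, that each ID's bins are contiguous; values on a shared boundary fall into the lower bin here:</p>
<pre><code>import pandas as pd

df1 = pd.DataFrame({'ID': ['ID_1', 'ID_1', 'ID_2', 'ID_2'],
                    'Min_Value': [100, 150, 80, 105],
                    'Max_Value': [150, 170, 105, 120]})
df2 = pd.DataFrame({'ID': ['ID_1'] * 4 + ['ID_2'] * 4,
                    'Value': [102, 155, 165, 121, 95, 101, 115, 84]})

parts = []
for id_, bins in df1.groupby('ID'):
    # contiguous bins: all lower edges plus the final upper edge
    edges = list(bins['Min_Value']) + [bins['Max_Value'].iloc[-1]]
    counts = (pd.cut(df2.loc[df2['ID'] == id_, 'Value'], bins=edges, include_lowest=True)
                .value_counts(sort=False))
    out = counts.rename_axis('bin').reset_index(name='count')
    out.insert(0, 'ID', id_)
    out['Proportion_Pop'] = out['count'] / out['count'].sum()
    parts.append(out)

df3 = pd.concat(parts, ignore_index=True)
print(df3)
</code></pre>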
75,064,024
996,815
Code coverage for Python using Bazel 6.0.0
<p>I want to generate a code coverage report using Bazel for a Python project (my environment: macOS Ventura, M1 arm64, Python 3.10, Bazel 6.0).</p> <p>The <a href="https://bazel.build/configure/coverage#python" rel="nofollow noreferrer">documentation</a> states that for this task Bazel 6.0 and a modified version of <code>coverage.py</code> should work.</p> <p>It suggests to create a <code>requiremnts.txt</code> with the following content:</p> <pre><code>git+https://github.com/ulfjack/coveragepy.git@lcov-support </code></pre> <p>extend the <code>WORKSPACE</code> file this way:</p> <pre><code>load(&quot;@bazel_tools//tools/build_defs/repo:http.bzl&quot;, &quot;http_archive&quot;) http_archive( name = &quot;rules_python&quot;, url = &quot;https://github.com/bazelbuild/rules_python/releases/download/0.5.0/rules_python-0.5.0.tar.gz&quot;, sha256 = &quot;cd6730ed53a002c56ce4e2f396ba3b3be262fd7cb68339f0377a45e8227fe332&quot;, ) load(&quot;@rules_python//python:pip.bzl&quot;, &quot;pip_install&quot;) pip_install( name = &quot;python_deps&quot;, requirements = &quot;//:requirements.txt&quot;, ) </code></pre> <p>And modify <code>py_test</code>s this way:</p> <p>load(&quot;@python_deps//:requirements.bzl&quot;, &quot;entry_point&quot;)</p> <pre><code>alias( name = &quot;python_coverage_tools&quot;, actual = entry_point(&quot;coverage&quot;), ) py_test( name = &quot;test&quot;, srcs = [&quot;test.py&quot;], env = { &quot;PYTHON_COVERAGE&quot;: &quot;$(location :python_coverage_tools)&quot;, }, deps = [ &quot;:main&quot;, &quot;:python_coverage_tools&quot;, ], ) </code></pre> <p>I followed this approach but get this error message:</p> <pre><code>ERROR: An error occurred during the fetch of repository 'python_deps': Traceback (most recent call last): File &quot;/private/var/tmp/_bazel_vertexwahn/401174f4ee3f8d1aff86fa2a0c8c5dbe/external/rules_python/python/pip_install/pip_repository.bzl&quot;, line 345, column 13, in _pip_repository_impl fail(&quot;rules_python failed: %s (%s)&quot; % (result.stdout, result.stderr)) Error in fail: rules_python failed: (Traceback (most recent call last): File &quot;/opt/homebrew/Cellar/python@3.10/3.10.9/Frameworks/Python.framework/Versions/3.10/lib/python3.10/runpy.py&quot;, line 196, in _run_module_as_main return _run_code(code, main_globals, None, File &quot;/opt/homebrew/Cellar/python@3.10/3.10.9/Frameworks/Python.framework/Versions/3.10/lib/python3.10/runpy.py&quot;, line 86, in _run_code exec(code, run_globals) File &quot;/private/var/tmp/_bazel_vertexwahn/401174f4ee3f8d1aff86fa2a0c8c5dbe/external/rules_python/python/pip_install/extract_wheels/parse_requirements_to_bzl.py&quot;, line 322, in &lt;module&gt; main(requirement_file) File &quot;/private/var/tmp/_bazel_vertexwahn/401174f4ee3f8d1aff86fa2a0c8c5dbe/external/rules_python/python/pip_install/extract_wheels/parse_requirements_to_bzl.py&quot;, line 309, in main generate_parsed_requirements_contents( File &quot;/private/var/tmp/_bazel_vertexwahn/401174f4ee3f8d1aff86fa2a0c8c5dbe/external/rules_python/python/pip_install/extract_wheels/parse_requirements_to_bzl.py&quot;, line 114, in generate_parsed_requirements_contents repo_names_and_reqs = repo_names_and_requirements( File &quot;/private/var/tmp/_bazel_vertexwahn/401174f4ee3f8d1aff86fa2a0c8c5dbe/external/rules_python/python/pip_install/extract_wheels/parse_requirements_to_bzl.py&quot;, line 69, in repo_names_and_requirements return [ File 
&quot;/private/var/tmp/_bazel_vertexwahn/401174f4ee3f8d1aff86fa2a0c8c5dbe/external/rules_python/python/pip_install/extract_wheels/parse_requirements_to_bzl.py&quot;, line 71, in &lt;listcomp&gt; bazel.sanitise_name(ir.name, prefix=repo_prefix), File &quot;/private/var/tmp/_bazel_vertexwahn/401174f4ee3f8d1aff86fa2a0c8c5dbe/external/rules_python/python/pip_install/extract_wheels/bazel.py&quot;, line 25, in sanitise_name return prefix + name.replace(&quot;-&quot;, &quot;_&quot;).replace(&quot;.&quot;, &quot;_&quot;).lower() AttributeError: 'NoneType' object has no attribute 'replace' </code></pre> <p>Any ideas how to get proper code coverage for a Bazel based Python project are welcome.</p> <p>I tried also to run the code coverage command this way:</p> <pre><code>bazel coverage --test_env=PYTHON_COVERAGE=/Users/vertexahn/dev/coveragepy/__main__.py //tests:test_main </code></pre> <p>This results in:</p> <pre><code>exec ${PAGER:-/usr/bin/less} &quot;$0&quot; || exit 1 Executing tests from //tests:test_main ----------------------------------------------------------------------------- Unrecognized option '[run] relative_files=' in config file /private/var/tmp/_bazel_vertexwahn/401174f4ee3f8d1aff86fa2a0c8c5dbe/sandbox/darwin-sandbox/79/execroot/__main__/bazel-out/darwin_arm64-fastbuild/testlogs/_coverage/tests/test_main/test/.coveragerc Unknown command: 'lcov' Use 'coverage help' for help. -- Coverage runner: Not collecting coverage for failed test. The following commands failed with status 1 /private/var/tmp/_bazel_vertexwahn/401174f4ee3f8d1aff86fa2a0c8c5dbe/sandbox/darwin-sandbox/79/execroot/__main__/bazel-out/darwin_arm64-fastbuild/bin/tests/test_main.runfiles/__main__/tests/test_main </code></pre> <p>Similar questions on StackOverflow with no answer:</p> <ul> <li><a href="https://stackoverflow.com/questions/67442495/how-do-you-generate-python-coverage-in-bazel">How do you generate Python coverage in bazel?</a></li> <li><a href="https://stackoverflow.com/questions/56757952/how-to-get-code-coverage-for-python-using-bazel">How to get code coverage for python using Bazel</a></li> </ul>
<python><code-coverage><bazel><bazel-python>
2023-01-09 23:35:22
0
7,859
Vertexwahn
75,063,930
3,647,167
Adding columns for the closest lat/long in a reference list to existing lat/long
<p>I have a base dataframe (df_base) which contains my records and a lookup dataframe (df_lookup) containing a lookup list. I would like to find the closest lat/log in df_lookup to the lat/long in df_base and add them as columns. I am able to do this but it is very slow. df_base has over 1 million rows and df_lookup is at about 10,000. I suspect there is a way to vectorize or write it more efficiently but I have not been able to do it yet. My running but slow code is as follows</p> <pre><code>from math import radians, sin, cos, asin, sqrt def hav_distance(lat1, lon1, lat2, lon2): &quot;&quot;&quot; Calculate the great circle distance between two points on the earth (specified in decimal degrees) &quot;&quot;&quot; # convert decimal degrees to radians lat1, lon1, lat2, lon2 = map(radians, [lat1, lon1, lat2, lon2]) # haversine formula dlon = lon2 - lon1 dlat = lat2 - lat1 a = sin(dlat/2)**2 + cos(lat1) * cos(lat2) * sin(dlon/2)**2 c = 2 * asin(sqrt(a)) # Radius of earth in kilometers is 6371 km = 6371* c return km def find_nearest(lat, lon,df_lookup): distances = df_lookup.apply( lambda row: hav_distance(lat, lon, row['lat'], row['lon']),axis=1) return df_lookup.loc[distances.idxmin(), ['lat', 'lon']] df_base[['lookup lat','lookup long']] = df_base.apply(lambda row: find_nearest(row['Latitude'], row['Longitude'],df_lookup) if row[['Latitude','Longitude']].notnull().all() else np.nan, axis=1) </code></pre> <p>Example for df_base</p> <pre><code>Latitude , Longitude 37.75210734489673 , -122.49572485891302 37.75046506608679 , -122.50583612245225 37.75612411999306 , -122.50728172021206 37.75726922992242 , -122.50251213426036 37.75243837156798 , -122.50442682534892 37.7519789637837 , -122.50402178717827 37.750903349294404 , -122.50241414813944 37.75602225181627 , -122.50060819272488 37.757921529607835 , -122.50036152209083 37.75628955086523 , -122.50694962686946 37.7573215112949 , -122.50224043772997 37.75074935869865 , -122.50127064328588 37.7528943256246 , -122.501056716164 37.754832309416386 , -122.50268274843049 37.757352142065265 , -122.50390638094865 37.75055972208169 , -122.50381787073599 37.753482040181844 , -122.49795018201644 37.7578160107123 , -122.50013574926646 37.749592580038346 , -122.50730545397994 37.7514871501036 , -122.49702703770673 </code></pre> <p>Example for df_lookup</p> <pre><code>lat. , long 37.751 , -122.5 37.752 , -122.5 37.753 , -122.5 37.754 , -122.5 37.755 , -122.5 37.756 , -122.5 37.757 , -122.5 37.758 , -122.5 37.759 , -122.5 </code></pre>
<python><pandas><geospatial>
2023-01-09 23:20:34
1
4,950
Keith
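<p>One way to make the lookup fast: map both coordinate sets to 3D unit vectors and query a k-d tree; chordal distance on the unit sphere is monotonic in great-circle distance, so the nearest 3D neighbour is also the nearest haversine neighbour. A sketch with toy frames (SciPy assumed available; rows with NaN coordinates would need filtering first):</p>
<pre><code>import numpy as np
import pandas as pd
from scipy.spatial import cKDTree

df_base = pd.DataFrame({'Latitude': [37.7521, 37.7505],
                        'Longitude': [-122.4957, -122.5058]})
df_lookup = pd.DataFrame({'lat': [37.751, 37.752, 37.753],
                          'lon': [-122.5, -122.5, -122.5]})

def to_xyz(lat, lon):
    lat, lon = np.radians(lat), np.radians(lon)
    return np.column_stack([np.cos(lat) * np.cos(lon),
                            np.cos(lat) * np.sin(lon),
                            np.sin(lat)])

tree = cKDTree(to_xyz(df_lookup['lat'].to_numpy(), df_lookup['lon'].to_numpy()))
_, idx = tree.query(to_xyz(df_base['Latitude'].to_numpy(),
                           df_base['Longitude'].to_numpy()))
df_base[['lookup lat', 'lookup long']] = df_lookup[['lat', 'lon']].to_numpy()[idx]
print(df_base)
</code></pre>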
75,063,839
14,057,599
How to use Numpy to compute the outer contour of binary image and fill inner area?
<p>I want to use NumPy (without any other packages) to find the outer contour of the first binary image and fill the inside area so it looks like the second image, basically filling the holes of the wheels, but I don't know how to do it. Does anyone have any ideas?</p>
<p><a href="https://i.sstatic.net/KGBg5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KGBg5.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/ZY7YL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZY7YL.png" alt="enter image description here" /></a></p>
<python><numpy>
2023-01-09 23:06:16
1
317
Qimin Chen
75,063,761
8,322,295
Can't get GridSearchCV working with Keras
<p>I'm trying to use <code>GridSearchCV</code> to optimise the hyperparameters in a custom model built with <code>Keras</code>. My code so far:</p> <p><a href="https://pastebin.com/ujYJf67c#9suyZ8vM" rel="nofollow noreferrer">https://pastebin.com/ujYJf67c#9suyZ8vM</a></p> <p>The model definition:</p> <pre><code>def build_nn_model(n, hyperparameters, loss, metrics, opt): model = keras.Sequential([ keras.layers.Dense(hyperparameters[0], activation=hyperparameters[1], # number of outputs to next layer input_shape=[n]), # number of features keras.layers.Dense(hyperparameters[2], activation=hyperparameters[3]), keras.layers.Dense(hyperparameters[4], activation=hyperparameters[5]), keras.layers.Dense(1) # 1 output (redshift) ]) model.compile(loss=loss, optimizer = opt, metrics = metrics) return model </code></pre> <p>and the grid search:</p> <pre><code>optimizer = ['SGD', 'RMSprop', 'Adagrad', 'Adadelta', 'Adam', 'Adamax', 'Nadam'] epochs = [10, 50, 100] param_grid = dict(epochs=epochs, optimizer=optimizer) grid = GridSearchCV(estimator=model, param_grid=param_grid, scoring='accuracy', n_jobs=-1, refit='boolean') grid_result = grid.fit(X_train, y_train) </code></pre> <p>throws an error:</p> <pre><code>TypeError: Cannot clone object '&lt;keras.engine.sequential.Sequential object at 0x0000028B8C50C0D0&gt;' (type &lt;class 'keras.engine.sequential.Sequential'&gt;): it does not seem to be a scikit-learn estimator as it does not implement a 'get_params' method. </code></pre> <p>How can I get <code>GridSearchCV</code> to play nicely with the model as it's defined?</p>
<python><tensorflow><machine-learning><keras><gridsearchcv>
2023-01-09 22:51:15
1
1,546
Jim421616
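<p>GridSearchCV needs a scikit-learn-style estimator rather than a bare Keras model; one route is the scikeras wrapper (a separate package, assumed installed as <code>scikeras</code>). A sketch with a simplified builder:</p>
<pre><code>from scikeras.wrappers import KerasRegressor
from sklearn.model_selection import GridSearchCV
from tensorflow import keras

def build_model():
    return keras.Sequential([
        keras.layers.Dense(32, activation='relu', input_shape=[5]),
        keras.layers.Dense(1),
    ])

est = KerasRegressor(model=build_model, loss='mse', verbose=0)
param_grid = {'optimizer': ['adam', 'rmsprop'], 'epochs': [10, 50]}
grid = GridSearchCV(estimator=est, param_grid=param_grid, cv=3)
# grid_result = grid.fit(X_train, y_train)   # X_train, y_train from the question
</code></pre>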
75,063,631
12,409,665
py script stop after execute
<p>I have the following script:</p>
<pre><code>import time
from pynput import keyboard
from pynput.keyboard import Key, Listener

STATUS=False
COMBINATION={keyboard.Key.esc, keyboard.Key.alt}

# The currently active modifiers
current = set()

def on_press(key):
    if key in COMBINATION:
        current.add(key)
        if all(k in current for k in COMBINATION):
            STATUS = False
    if key == keyboard.Key.esc:
        # Stop listener
        STATUS = True

def on_release(key):
    try:
        current.remove(key)
    except KeyError:
        pass

def hp(delay):
    keyboard.press(Key.f1)
    keyboard.release(Key.f1)
    time.sleep(delay)

def cp():
    keyboard.press(Key.f2)
    keyboard.release(Key.f2)

def main():
    with Listener(
            on_press=on_press,
            on_release=on_release) as listener:
        listener.join()
    while True:
        while STATUS:
            for i in range(3):
                hp(0.5)
            cp()
</code></pre>
<p>When I generate an executable with PyInstaller, or even run it directly with python3, the script simply doesn't loop (run forever) as I had thought it would. I don't get any error messages, so I don't know where my error might be. Can anyone help?</p>
<python><python-3.x><pynput>
2023-01-09 22:30:03
1
462
Felipe
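<p>A few things in the script above keep it from looping: <code>main()</code> is never called, <code>STATUS</code> is assigned inside the callbacks without <code>global</code> (so a new local is created), <code>listener.join()</code> blocks forever, and synthesizing key presses needs a <code>keyboard.Controller</code> rather than the module itself. A corrected sketch of the structure:</p>
<pre><code>import time
from pynput import keyboard

STATUS = False
controller = keyboard.Controller()   # the module itself has no press()/release()

def on_press(key):
    global STATUS                    # without this, STATUS becomes a local
    if key == keyboard.Key.esc:
        STATUS = not STATUS

listener = keyboard.Listener(on_press=on_press)
listener.start()                     # start() instead of join(): non-blocking

while True:
    if STATUS:
        controller.press(keyboard.Key.f1)
        controller.release(keyboard.Key.f1)
    time.sleep(0.5)
</code></pre>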
75,063,570
16,978,074
grouping a dictionary inside a list in python
<p>I've just been studying python and I'm having trouble doing some exercises. I have a list containing, I think, a dictionary:</p> <pre><code>dictionary_title=[ {'Color': 'Green', 'ids': 878}, {'Color': 'Pink', 'ids': 16}, {'Color': 'Orange', 'ids': 28}, {'Color': 'Yellow', 'ids': 9648}, {'Color': 'Red', 'ids': 878}, {'Color': 'Brown', 'ids': 12}, {'Color': 'Black', 'ids': 28}, {'Color': 'White', 'ids': 14}, {'Color': 'Blue', 'ids': 28}, {'Color': 'Light Blue', 'ids': 10751}, {'Color': 'Magenta', 'ids': 28}, {'Color': 'Gray', 'ids': 28}] </code></pre> <p>now if i want to group by id, to have for example:</p> <pre><code>{878:['Green','Red'], 16:['Pink'], 28:['Orange','Black','Blue','Magenta','Gray'] and so on...} </code></pre> <p>Now this is my code:</p> <pre><code>dictionary={} genres=[878,16,28,9648,12,14,10751] for color in nodes: for index in range(0,len(genres)): if genres[index] == color[&quot;ids&quot;]: dictionary.setdefault(genres[index],[]) dictionary[genres[index]].append(color[&quot;color&quot;]) print(dictionary) </code></pre> <p>but my output is:</p> <pre><code>{878:['Green','Pink','Orange','Yellow','Red','Brown','Black','White','Blue','Light Blue','Magenta','Gray']} </code></pre> <p>How can i do?</p>
<python><arrays><list><dictionary><grouping>
2023-01-09 22:22:33
2
337
Elly
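<p>A sketch of the grouping with <code>collections.defaultdict</code>, which sidesteps the index bookkeeping entirely:</p>
<pre><code>from collections import defaultdict

dictionary_title = [
    {'Color': 'Green', 'ids': 878}, {'Color': 'Pink', 'ids': 16},
    {'Color': 'Orange', 'ids': 28}, {'Color': 'Yellow', 'ids': 9648},
    {'Color': 'Red', 'ids': 878}, {'Color': 'Brown', 'ids': 12},
    {'Color': 'Black', 'ids': 28}, {'Color': 'White', 'ids': 14},
    {'Color': 'Blue', 'ids': 28}, {'Color': 'Light Blue', 'ids': 10751},
    {'Color': 'Magenta', 'ids': 28}, {'Color': 'Gray', 'ids': 28},
]

grouped = defaultdict(list)
for entry in dictionary_title:
    grouped[entry['ids']].append(entry['Color'])
print(dict(grouped))   # {878: ['Green', 'Red'], 16: ['Pink'], 28: [...], ...}
</code></pre>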
75,063,478
10,825,362
how to debug python modules while developing
<p>Let's say we are developing a simple Python module with the following directory structure</p>
<pre><code>.
├── module
│   ├── __init__.py
│   ├── core.py
│   └── helpers.py
└── test.py
</code></pre>
<p>contents of <code>__init__.py</code></p>
<pre><code>#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from .core import print_values
</code></pre>
<p>contents of core.py</p>
<pre><code>#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from .helpers import values

def print_values():
    print(values)

if __name__ == '__main__':
    print_values()
</code></pre>
<p>contents of helpers.py</p>
<pre><code>#!/usr/bin/env python3
# -*- coding: utf-8 -*-
values = [0, 2, 6]
</code></pre>
<p>contents of test.py</p>
<pre><code>#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from module import print_values

print_values()
</code></pre>
<p>Now if we run <code>python test.py</code> with <code>'.'</code> as the working dir we get the expected output of <code>[0, 2, 6]</code>. Great!</p>
<p>So here is the problem: if we change the working dir to <code>'./module'</code> and run <code>python3 ./core.py</code>, the following error will be raised:</p>
<pre><code>ImportError: attempted relative import with no known parent package
</code></pre>
<p>So the question is: how do we design modules in a way that we can run Python scripts from within them during development?</p>
<python><python-module>
2023-01-09 22:11:10
1
351
IjonTichy
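<p>For what it's worth, the standard workaround is to stay in the project root and run the file as a module, e.g. <code>python3 -m module.core</code>, so that Python establishes the package context and the relative imports in core.py resolve.</p>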
75,063,390
12,285,078
How can I load tf.data.dataset object into an autoencoder?
<p>I have been struggling with this issue for weeks now... I more or less try to reproduce this code: <a href="https://github.com/mostafaibrahim17/Whole-Image-Slides-Unsupervised-Categorization/blob/master/Autoencoders/Convolutional%20Autoencoders/Basic%20Convolutional%20Autoencoder.ipynb" rel="nofollow noreferrer">https://github.com/mostafaibrahim17/Whole-Image-Slides-Unsupervised-Categorization/blob/master/Autoencoders/Convolutional%20Autoencoders/Basic%20Convolutional%20Autoencoder.ipynb</a></p> <p>Unlike this example where they load images as array :</p> <pre><code>## Data loading trainData = &quot;../../../autoenctrain/train&quot; testData = &quot;../../../autoenctrain/test&quot; new_train = [] new_test = [] for filename in os.listdir(trainData): if filename.endswith(&quot;.tif&quot;): image = Image.open(os.path.join(trainData, filename)) new_train.append(np.asarray( image, dtype=&quot;uint8&quot; )) for filename in os.listdir(testData): if filename.endswith(&quot;.tif&quot;): image = Image.open(os.path.join(testData, filename)) new_test.append(np.asarray( image, dtype=&quot;uint8&quot; )) </code></pre> <p>, I have a lot of big images (256, 256, 3) and I would like to load images from directory with function <code>tf.keras.utils.image_dataset_from_directory</code> :</p> <pre><code>train_ds = tf.keras.utils.image_dataset_from_directory( trainData, label_mode=None, color_mode = 'rgb', batch_size=32, image_size=(256,256)) </code></pre> <p>In this example, <code>label_mode=None</code> because images are in subdirectories and I don't want that images have the label corresponding to their subdirectory.</p> <p>I modified the autoencoder in order to adapt it to my images :</p> <pre><code>input_img = Input(shape=(256, 256, 3)) # adapt this if using `channels_first` image data format x = Conv2D(32, (3, 3), activation='relu', padding='same')(input_img) # 96 x 96 x 32 x = MaxPooling2D((2, 2), padding='same')(x) # 32 x 32 x 32 x = Conv2D(64, (3, 3), activation='relu', padding='same')(x) # 32 x 32 x 64 x = MaxPooling2D((2, 2), padding='same')(x) # 16 x 16 x 64 x = Conv2D(128, (3, 3), activation='relu', padding='same')(x) # 16 x 16 x 128 (small) encoded = MaxPooling2D((2, 2), padding='same')(x) # 8 x 8 x 128 # at this point the representation is (8, 8, 128) x = Conv2D(128, (3, 3), activation='relu', padding='same')(encoded) # 8 x 8 x 128 x = UpSampling2D((2, 2))(x) # 16 x 16 x 128 x = Conv2D(64, (3, 3), activation='relu', padding='same')(x) # 16 x 16 x 64 x = UpSampling2D((2, 2))(x) # 32 x 32 x 64 # x = Conv2D(32, (3, 3), activation='relu')(x) x = UpSampling2D((2, 2))(x) # 96 x 96 x 64 decoded = Conv2D(3, (3, 3), activation='sigmoid', padding='same')(x) # 96 x 96 x 3 autoencoder = Model(input_img, decoded) autoencoder.compile(optimizer='adam', loss='mean_squared_error') </code></pre> <p>But when I try to fit the model :</p> <pre><code>autoencoder_train = autoencoder.fit(train_ds, train_ds, epochs=25, batch_size=32, shuffle=True, validation_data=(test_ds, test_ds)) </code></pre> <p>I have that error : <code>ValueError: </code>y<code> argument is not supported when using dataset as input.</code></p> <p>I tried to load subset of images with the same method (the one with arrays), and there was no issue. So I have the feeling that I am missing something in the architecture of the tf.data.dataset object (like the shape, or something like that).</p> <p>Please could you tell me :</p> <ol> <li>Why I have this error ?</li> <li>How can I fix this issue ? 
How can I load my images from a directory and its subdirectories without using the &quot;array&quot; method?</li> </ol> <p>Thank you very much!</p> <p>PS: this question is similar to this one: <a href="https://stackoverflow.com/questions/71879169/tensorflow-y-argument-is-not-supported-when-using-dataset-as-input">Tensorflow `y` argument is not supported when using dataset as input</a> But 1) the only answer has not been validated, and 2) I am not sure that I understand it.</p>
<python><tensorflow><keras><deep-learning><tensorflow-datasets>
2023-01-09 22:00:16
2
343
chalbiophysics
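<p><code>fit()</code> rejects a separate <code>y</code> whenever <code>x</code> is a dataset; the usual fix is to make the dataset itself yield (input, target) pairs, here (x, x) for an autoencoder. A sketch (the directory path is a placeholder):</p>
<pre><code>import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    'path/to/train', label_mode=None, color_mode='rgb',
    batch_size=32, image_size=(256, 256))

# scale to [0, 1] and pair each batch with itself as the reconstruction target
train_ds = train_ds.map(lambda x: (x / 255.0, x / 255.0))

# autoencoder.fit(train_ds, epochs=25)   # no y, shuffle or batch_size arguments here
</code></pre>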
75,063,300
338,101
Music21 - How to keep the time signature when transposing?
<p>I am trying to transpose to a common key (C major/A minor) like this:</p> <pre class="lang-py prettyprint-override"><code>def main(): print(&quot;cwd=&quot;, os.getcwd()) path=&quot;midi/bachjs&quot; with os.scandir(path) as it: for entry in it: if entry.name.startswith(&quot;Minuet&quot;) and entry.name.endswith(&quot;.mid&quot;) and entry.is_file(): filename=Path(entry.path) print(filename) piece=converter.parse(filename) key=piece.analyze('key') target=pitch.Pitch('C') if key.type == 'minor': target=pitch.Pitch('A') if target.name != key.tonic.name: move = interval.Interval(key.tonic, target) newpiece=piece.transpose(move) newkey=newpiece.analyze('key') fp = newpiece.write('midi', fp=filename.with_suffix('').with_suffix('.in'+target.name+'.mid')) if __name__ == &quot;__main__&quot;: main() </code></pre> <p>Transposing BWV 114:</p> <p><a href="https://i.sstatic.net/qBhUp.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qBhUp.jpg" alt="enter image description here" /></a></p> <p>indeed shifts the initial D down to G etc., but the result is horrible:</p> <p><a href="https://i.sstatic.net/MGP9D.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MGP9D.jpg" alt="enter image description here" /></a></p> <p>How could I fix this? Thanks!</p>
<python><transpose><midi><music21>
2023-01-09 21:51:34
1
6,436
smirkingman
75,063,269
17,696,880
Set a multichoice regex to make its matching attempts always from left to right, no matter if another previous regex tries to capture more chars?
<pre><code>import re input_text = 'el dia corrimos juntas hasta el 11° nivel de aquella montaña hasta el 2022_-_12_-_13' #input_text = 'desde el corrimos juntas hasta el 11° nivel de aquella montaña y luego bajamos hasta la salida, hasta el 2022_-_12_-_01 21:00 hs caminamos juntas' #example 2 date_format = r&quot;(?:\(|)\s*(\d*)_-_(\d{2})_-_(\d{2})\s*(?:\)|)&quot; #text in the middle associated with the date range... #some_text = r&quot;(?:(?!\.\s*?\n)[^;])*&quot; #but cannot contain &quot;;&quot;, &quot;.\s*\n&quot; some_text = r&quot;(?:(?!\.\s*)[^;])*&quot; #but cannot contain &quot;;&quot;, &quot;.\s*&quot; #some_text = r&quot;(?:[^.;])*&quot; #but cannot contain &quot;;&quot;, &quot;.&quot; identification_re_0 = r&quot;(?:el dia|dia|el)\s*(?:del|de\s*el|de |)\s*(&quot; + some_text + r&quot;)\s*(?:,\s*hasta|hasta|al|a )\s*(?:el|la|)\s*&quot; + date_format input_text = re.sub(identification_re_0, lambda m: print(m[1]), input_text, re.IGNORECASE) #print(repr(input_text)) # --&gt; output </code></pre> <p>These are the incorrect outputs that I got:</p> <pre><code>'corrimos juntas hasta el 11° nivel de aquella montaña hast' 'corrimos juntas hasta el 11° nivel de aquella montaña y luego bajamos hasta la salida, hast' </code></pre> <p>And these would be the correct outputs that you should get with this examples:</p> <pre><code>'corrimos juntas hasta el 11° nivel de aquella montaña' 'corrimos juntas hasta el 11° nivel de aquella montaña y luego bajamos hasta la salida' </code></pre> <p>Why does the <code>(?:,\s*hasta|hasta|al|a )</code> capture group try its options backwards? Why is it trying to conform to the greedy behavior of the above regex, in this case <code>(?:(?!\.\s*)[^;])*</code>?</p> <hr /> <p>Edit with a possible solution:</p> <p>I have achieved more or less close results except with example 3 where I could not make it so that if there was not something captured by some_text the () are not placed</p> <pre class="lang-py prettyprint-override"><code>import re input_text = 'desde el 2022_-_12_-_10 corrimos juntas hasta el 11° nivel de aquella montaña hasta el 2022_-_12_-_13' #example 1 #input_text = 'desde el 2022_-_11_-_10 18:30 pm corrimos juntas hasta el 11° nivel de aquella montaña y luego bajamos hasta la salida, hasta el 2022_-_12_-_01 21:00 hs caminamos juntas' #example 2 #input_text = 'desde el 2022_-_11_-_10 18:30 pm hasta el 2022_-_12_-_01 21:00 hs' #example 3 #text in the middle associated with the date range... 
#some_text = r&quot;(?:(?!\.\s*?\n)[^;])*&quot; #but cannot contain &quot;;&quot;, &quot;.\s*\n&quot; some_text = r&quot;(?:(?!\.\s*)[^;])*&quot; #but cannot contain &quot;;&quot;, &quot;.\s*&quot; #some_text = r&quot;(?:[^.;])*&quot; #but cannot contain &quot;;&quot;, &quot;.&quot; identificate_hours = r&quot;(?:a\s*las|a\s*la|)\s*(?:\(|)\s*(\d{1,2}):(\d{1,2})\s*(?:(am)|(pm))\s*(?:\)|)&quot; #acepta que no se le indicase el 'am' o el 'pm' identificate_hours = r&quot;(?:a\s*las|a\s*la|)\s*(?:\(|)\s*(\d{1,2}):(\d{1,2})\s*(?:(am)|(pm)|)\s*(?:\)|)&quot; #no acepta que no se le indicase el 'am' o el 'pm' date_format = r&quot;(?:\(|)\s*(\d*)_-_(\d{2})_-_(\d{2})\s*(?:\)|)&quot; # (?:,\s*hasta|hasta|al|a ) some_text_limiters = [r&quot;,\s*hasta&quot;, r&quot;hasta&quot;, r&quot;al&quot;, r&quot;a &quot;] for some_text_limiter in some_text_limiters: identification_re_0 = r&quot;(?:(?&lt;=\s)|^)(?:desde\s*el|desde|del|de\s*el|de\s*la|de |)\s*(?:día|dia|fecha|)\s*(?:del|de\s*el|de |)\s*&quot; + date_format + r&quot;\s*(?:&quot; + identificate_hours + r&quot;|)\s*(?:\)|)\s*(&quot; + some_text + r&quot;)\s*&quot; + some_text_limiter + r&quot;\s*(?:el|la|)\s*(?:fecha|d[íi]a|)\s*(?:del|de\s*el|de|)\s*&quot; + date_format + r&quot;\s*(?:&quot; + identificate_hours + r&quot;|)\s*(?:\)|)&quot; input_text = re.sub(identification_re_0, lambda m: (f&quot;({m[1]}_-_{m[2]}_-_({m[3]}({m[4] or '00'}:{m[5] or '00'} {m[6] or m[7] or 'am'})_--_{m[9]}_-_{m[10]}_-_({m[11]}({m[12] or '00'}:{m[13] or '00'} {m[14] or m[15] or 'am'})))({m[8]})&quot;).replace(&quot; )&quot;, &quot;)&quot;).replace(&quot;( &quot;, &quot;(&quot;), input_text, re.IGNORECASE) print(repr(input_text)) </code></pre>
<python><python-3.x><regex><regex-group><regex-greedy>
2023-01-09 21:47:39
1
875
Matt095
75,063,264
1,550,811
Docker [Errno 13] Permission denied with custom user
<p>I am trying to create a docker image and here is my current <code>Dockerfile</code>.</p> <p><strong>Dockerfile:</strong></p> <pre><code>FROM path-to-internal-images/python:3.8 RUN adduser --system myuser WORKDIR /home/myuser COPY requirements.txt . RUN pip install -i https://myendpoint/api/pypi/pypi-abc/simple -r requirements.txt COPY myfile.py . USER myuser </code></pre> <p><strong>Build and Run the container:</strong></p> <pre><code>$ IMG_TAG=&quot;docker.abc.com/my-image:0.1&quot; $ docker build -t ${IMG_TAG} . $ docker push ${IMG_TAG} $docker run --rm -it ${IMG_TAG} /bin/bash myuser@ mymachine:~$ pwd /home/myuser myuser@mymachine:~$ python myfile.py --output-path &lt;PATH_TO_A_DIRECTORY&gt; </code></pre> <p>In <code>myfile.py</code> I accept <code>--output-path</code> as an argument which is path to a directory (e.g. <strong>/output</strong>) and this could be any value and then it creates a directory using this path provided. However, since I am using <code>myuser</code> and not <code>root</code> I get following error:</p> <pre><code> File &quot;myfile.py&quot;, line 20, in write_function os.makedirs(dir, exist_ok=True) File &quot;/usr/local/lib/python3.8/os.py&quot;, line 213, in makedirs makedirs(head, exist_ok=exist_ok) File &quot;/usr/local/lib/python3.8/os.py&quot;, line 223, in makedirs mkdir(name, mode) PermissionError: [Errno 13] Permission denied: '/output' </code></pre> <p><strong>My question is:</strong> Is it possible to use <code>myuser</code> and still be able to create directory structure based on the argument passed when <code>myfile.py</code> is called when the container is running? I don't think I can use <code>RUN chown</code> here as I do not know the directory upfront, is there any other way to achieve this?</p> <p>I am relatively new to <em>containerization</em> and <em>docker</em>, so any pointers would help.</p>
<python><docker><dockerfile>
2023-01-09 21:46:35
0
1,543
Learner
75,063,220
16,589,029
Django changing date format is not working
<p>I decided to change the date format from <code>YYYY-mm-dd</code> to <code>%d %b %Y</code> something like <code>10 Jan 2023</code></p> <p>However, i have tried many things and it all seem to fail, let's start with settings.py:</p> <pre><code>LANGUAGE_CODE = 'en-us' TIME_ZONE = 'UTC' USE_I18N = False USE_TZ = True USE_L10N = False DATE_FORMAT ='%d %b %Y' </code></pre> <p>the model:</p> <pre class="lang-py prettyprint-override"><code>class DateModel(models.Model): date = models.DateField(auto_add_now=True,blank=False) desc = models.CharField() </code></pre> <p>the serializer:</p> <pre class="lang-py prettyprint-override"><code>from appname import models class DateSerializer(serializer.ModelSerializer): class Meta: model = models.DateModel fields = '__all__' # alternatively 'date','desc', </code></pre> <p>the view:</p> <pre class="lang-py prettyprint-override"><code>from rest_framework import views from appname import serializer #importing the serializer file from django.utils import timezone,dateformat from appname import models from django.conf import settings class DateView(views.APIView): serializer_class = serializer.DateSerializer def get(self,request): return Response(serializer.DateSerializer(models.DateModel.objects.all().data),status=200) def post(self,request): data = serializer.DateSerializer(data=request.data) if data.is_valid(): desc = data.data['desc'] date = dateformat.format(timezone.now(),settings.DATE_FORMAT) model_instance = models.DateModel.objects.create(date=date,desc=desc) model_instance.save() return Response(&quot;Posted!&quot;,status=200) return Response(&quot;Invalid Data, 400 status code error is raised&quot;,status=400) </code></pre> <p>and when i submit a post and try to view it, i get the date in this format <code>2023-01-10</code>, as i mentioned before, but i am aiming for this format <code>10-Jan-2023</code></p>
<python><django><datetime><django-rest-framework>
2023-01-09 21:41:43
1
766
Ghazi
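<p>DRF renders dates with its own settings, so Django's <code>DATE_FORMAT</code> is not consulted by the serializer; one option is to set the format on the field (or <code>DATE_FORMAT</code> under <code>REST_FRAMEWORK</code> in settings.py). A sketch of the serializer:</p>
<pre><code>from rest_framework import serializers
from appname import models

class DateSerializer(serializers.ModelSerializer):
    # render as e.g. '10 Jan 2023' instead of ISO 'YYYY-MM-DD'
    date = serializers.DateField(format='%d %b %Y', read_only=True)

    class Meta:
        model = models.DateModel
        fields = '__all__'
</code></pre>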
75,063,069
480,118
VSCode multi-project workspace: how to add individual files such as the .gitignore at the root of the workspace?
<p>I have the following folder structure, where proj1_app and proj1_infra are two different projects. At the root of the workspace, however, are two files: the workspace project file itself and a .gitignore file. Each project has its own .vscode folder and its own .env files. The entire workspace is a single repository in git.</p>
<pre><code>my_app_workspace
- proj1_app/
    - .venv/ (virtual environment)
    - .vscode/
        - settings.json
        - launch.json
        - task.json
    - src/
        - config.py
    - .env
    - .env_linux
- proj1_infra/
    - .vscode/
        - settings.json
        - launch.json
        - task.json
    - src/
        - config.py
    - .env
    - .env_linux
- .gitignore
- my_app_workspace.code-workspace
</code></pre>
<p>the code-workspace file looks like this:</p>
<pre><code>{
    &quot;folders&quot;: [
        {
            &quot;path&quot;: &quot;./proj1_app&quot;
        },
        {
            &quot;path&quot;: &quot;./proj1_infra&quot;
        }
    ],
}
</code></pre>
<p>This is all good, but I want to include the .gitignore and my_app_workspace.code-workspace files in the VS Code editor as well, so that I can easily make modifications to them. I know I can add another folder with '&quot;path&quot;: &quot;.&quot;', but this will add a folder containing the project folders again, which seems redundant and not efficient.</p>
<p><strong>Is there a way to add individual files to the workspace? Or is the problem here that I should simply split these up into two different git repositories, so that each has its own .gitignore file, as opposed to what I'm doing now, where the entire workspace is a single git repository?</strong></p>
<python><git><visual-studio-code>
2023-01-09 21:23:49
2
6,184
mike01010
75,063,000
6,423,456
Is it possible to have Python sort data the same was PostgreSQL does?
<p>My PostgreSQL DB appears to be using <code>en_US.UTF-8</code> collation:</p> <pre class="lang-bash prettyprint-override"><code># SHOW lc_collate; lc_collate ------------- en_US.UTF-8 </code></pre> <p>If I have a list of strings like: <code>['C - test', 'Common Scope']</code>, and I sort them in Python, I get:</p> <pre class="lang-py prettyprint-override"><code>sorted(['C - test', 'Common Scope']) ['C - test', 'Common Scope'] </code></pre> <p>but in Postgres, I get the opposite order:</p> <pre class="lang-bash prettyprint-override"><code># select * from TEST ORDER BY name; name -------------- Common Scope C - test </code></pre> <p>Having Postgres sort the same way as Python does seems to be achievable by adding <code>COLLATE &quot;C&quot;</code> to the end of the select.</p> <p>Is it possible to go the other way, and have Python sort strings the same was as Postgres does?</p>
<python><postgresql>
2023-01-09 21:15:22
1
2,774
John
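<p>Python can be pointed at the same collation rules via <code>locale.strxfrm</code>, provided the matching locale data is installed on the system where Python runs; a sketch:</p>
<pre><code>import locale

locale.setlocale(locale.LC_COLLATE, 'en_US.UTF-8')   # must exist on the system
print(sorted(['C - test', 'Common Scope'], key=locale.strxfrm))
# ['Common Scope', 'C - test'] with glibc's en_US.UTF-8 collation
</code></pre>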
75,062,929
11,809,811
changing the color the tkinter title bar on macOS
<p>I have an app and want to change the color of the title bar. I can do that on windows with the following code:</p> <pre><code>import tkinter as tk from ctypes import windll, byref, sizeof, c_int root = tk.Tk() root.update() HWND = windll.user32.GetParent(root.winfo_id()) DWMWA_ATTRIBUTE = 35 COLOR = 0x000000FF # hex order: 0x00bbggrr windll.dwmapi.DwmSetWindowAttribute(HWND, DWMWA_ATTRIBUTE, byref(c_int(COLOR)), sizeof(c_int)) root.mainloop() </code></pre> <p>Now I want to do the same for macOS. Sadly, windll.user32 only exists on windows, so is there a mac specific way of approaching this?</p> <p>(I know I can hide the title bar with overrideredirect but I don't want to use that because it causes other weird behavior)</p>
<python><macos><tkinter>
2023-01-09 21:07:40
0
830
Another_coder
75,062,897
1,171,746
How to have typing support for a static property (using a decorator)
<p>Given a static property decorator:</p>
<pre class="lang-py prettyprint-override"><code>class static_property:
    def __init__(self, getter):
        self.__getter = getter

    def __get__(self, obj, objtype):
        return self.__getter(objtype)

    @staticmethod
    def __call__(getter_fn):
        return static_property(getter_fn)
</code></pre>
<p>That is applied to a class as follows:</p>
<pre class="lang-py prettyprint-override"><code>class Foo:
    @static_property
    def bar(self) -&gt; int:
        return 10
</code></pre>
<p>And called statically:</p>
<pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; print(Foo.bar)
10
</code></pre>
<p>How would I add typing support to <code>static_property</code> so <code>Foo.bar</code> is inferred as type <code>int</code> instead of as <code>Any</code>?</p>
<p>Or is there another way to create the decorator to support type inference?</p>
<p>See Also: <a href="https://stackoverflow.com/a/56816580/1171746">how-to-define-class-field-in-python-that-is-an-instance-of-a-class</a></p>
<python><design-patterns><static><python-decorators><python-typing>
2023-01-09 21:03:32
1
327
Amour Spirit
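<p>One way is to make the descriptor generic in the getter's return type so that <code>__get__</code> is typed as returning <code>T</code>; a sketch that type checkers such as mypy infer as <code>int</code>:</p>
<pre><code>from typing import Callable, Generic, Type, TypeVar

T = TypeVar('T')

class static_property(Generic[T]):
    def __init__(self, getter: Callable[..., T]) -&gt; None:
        self.__getter = getter

    def __get__(self, obj: object, objtype: Type[object]) -&gt; T:
        return self.__getter(objtype)

class Foo:
    @static_property
    def bar(self) -&gt; int:
        return 10

print(Foo.bar + 1)   # 11; type checkers infer Foo.bar as int
</code></pre>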
75,062,621
726,802
Issue while trying to select record in mysql using Python
<p><strong>Error Message</strong></p> <blockquote> <p>You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near '%s' at line 1</p> </blockquote> <p><strong>MySQL Database Table</strong></p> <pre><code>CREATE TABLE `tblorders` ( `order_id` int(11) NOT NULL, `order_date` date NOT NULL, `order_number` varchar(50) NOT NULL ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4; ALTER TABLE `tblorders` ADD PRIMARY KEY (`order_id`), ADD UNIQUE KEY `order_number` (`order_number`); ALTER TABLE `tblorders` MODIFY `order_id` int(11) NOT NULL AUTO_INCREMENT, AUTO_INCREMENT=4; </code></pre> <p><strong>Code</strong></p> <pre><code>mydb = mysql.connector.connect(host = &quot;localhost&quot;, user = &quot;root&quot;, password = &quot;&quot;, database = &quot;mydb&quot;) mycursor = mydb.cursor() sql = &quot;Select order_id from tblorders where order_number=%s&quot; val = (&quot;1221212&quot;) mycursor.execute(sql, val) </code></pre> <p>Am I missing anything?</p>
<python><mysql>
2023-01-09 20:30:38
2
10,163
Pankaj
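<p>The likely culprit: <code>(&quot;1221212&quot;)</code> is just a parenthesised string, not a tuple, so the parameter never reaches the placeholder; <code>execute()</code> expects a sequence. A sketch with the trailing comma added:</p>
<pre><code>import mysql.connector

mydb = mysql.connector.connect(host='localhost', user='root',
                               password='', database='mydb')
mycursor = mydb.cursor()

sql = 'Select order_id from tblorders where order_number=%s'
val = ('1221212',)   # trailing comma makes this a one-element tuple
mycursor.execute(sql, val)
print(mycursor.fetchall())
</code></pre>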
75,062,500
6,057,371
Pandas dataframe expand rows in specific times
<p>I have a dataframe:</p> <pre><code>df = T1 C1 01/01/2022 11:20 2 01/01/2022 15:40 8 01/01/2022 17:50 3 </code></pre> <p>I want to expand it such that</p> <ol> <li>I will have the value in specific given times</li> <li>I will have a row for each round timestamp</li> </ol> <p>So if the times are given in</p> <pre><code>l=[ 01/01/2022 15:46 , 01/01/2022 11:28] </code></pre> <p>I will have:</p> <pre><code>df_new = T1 C1 01/01/2022 11:20 2 01/01/2022 11:28 2 01/01/2022 12:00 2 01/01/2022 13:00 2 01/01/2022 14:00 2 01/01/2022 15:00 2 01/01/2022 15:40 8 01/01/2022 15:46 8 01/01/2022 16:00 8 01/01/2022 17:00 8 01/01/2022 17:50 3 </code></pre>
<python><pandas><dataframe><ffill>
2023-01-09 20:18:03
3
2,050
Cranjis
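<p>A sketch: build the union of the original timestamps, the hourly grid and the extra times, then reindex and forward-fill:</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'T1': pd.to_datetime(['01/01/2022 11:20', '01/01/2022 15:40',
                                         '01/01/2022 17:50'], dayfirst=True),
                   'C1': [2, 8, 3]})
extra = pd.to_datetime(['01/01/2022 15:46', '01/01/2022 11:28'], dayfirst=True)

s = df.set_index('T1')['C1']
hourly = pd.date_range(s.index.min().ceil('H'), s.index.max(), freq='H')
full_index = s.index.union(hourly).union(extra)
df_new = s.reindex(full_index).ffill().rename_axis('T1').reset_index()
print(df_new)
</code></pre>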
75,062,480
12,945,785
how to get the sum of a dataframe by month/year
<p>I have a data frame that consists of :</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;">DATE</th> <th style="text-align: right;">X.</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">1982-09-30 00:00:00</td> <td style="text-align: right;">0</td> </tr> <tr> <td style="text-align: left;">1982-10-31 00:00:00</td> <td style="text-align: right;">-0.75</td> </tr> <tr> <td style="text-align: left;">1982-11-30 00:00:00</td> <td style="text-align: right;">-0.5</td> </tr> <tr> <td style="text-align: left;">1982-12-31 00:00:00</td> <td style="text-align: right;">-0.5</td> </tr> <tr> <td style="text-align: left;">1983-01-31 00:00:00</td> <td style="text-align: right;">0</td> </tr> <tr> <td style="text-align: left;">...</td> <td style="text-align: right;">0.8</td> </tr> <tr> <td style="text-align: left;">2022-01-09 00:00:00</td> <td style="text-align: right;">0.8</td> </tr> </tbody> </table> </div> <p>From this dataframe, I would like to have a table with this format :</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;"></th> <th style="text-align: right;">January</th> <th style="text-align: right;">February</th> <th style="text-align: right;">...</th> <th style="text-align: right;">December</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">1982</td> <td style="text-align: right;">1</td> <td style="text-align: right;">0</td> <td style="text-align: right;">0.5</td> <td style="text-align: right;">-1.0</td> </tr> <tr> <td style="text-align: left;">1983</td> <td style="text-align: right;">1</td> <td style="text-align: right;">0</td> <td style="text-align: right;">0.5</td> <td style="text-align: right;">-1.0</td> </tr> <tr> <td style="text-align: left;">...</td> <td style="text-align: right;">1</td> <td style="text-align: right;">0</td> <td style="text-align: right;">0.5</td> <td style="text-align: right;">-1.0</td> </tr> <tr> <td style="text-align: left;">2022</td> <td style="text-align: right;">1</td> <td style="text-align: right;">0</td> <td style="text-align: right;">0.5</td> <td style="text-align: right;">-1.0</td> </tr> </tbody> </table> </div> <p>where each number inside the table is the sum of line/column intersection ie for January/1982 : the sum of the data for January 1982, etc..</p>
<python><pandas>
2023-01-09 20:15:10
3
315
Jacques Tebeka
75,062,429
1,226,649
NetworkX: Subgraph matching
<p>Trying to match a Query subgraph to a Target graph, where:</p> <p>Query:</p> <p><a href="https://i.sstatic.net/MxT29.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MxT29.png" alt="enter image description here" /></a></p> <p>and Target:</p> <p><a href="https://i.sstatic.net/r5l91.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/r5l91.png" alt="enter image description here" /></a></p> <p>As I understand it, matching in this case should return tuples of matching nodes:</p> <pre><code>(1,1) (2,2) (3,3) </code></pre> <p>where the first number is a node in Query and the second is a node in Target.</p> <p>Yet with the following code I get a different result:</p> <pre><code>import networkx as nx import networkx.algorithms.isomorphism as iso # Target graph target = nx.Graph() target.add_edge(1, 2) target.add_edge(1, 3) target.add_edge(1, 5) target.add_edge(2, 3) target.add_edge(3, 4) target.add_edge(4, 5) # Add attributes to target nodes target.nodes[1]['cat'] = 1 target.nodes[2]['cat'] = 2 target.nodes[3]['cat'] = 3 target.nodes[4]['cat'] = 4 target.nodes[5]['cat'] = 5 # Query graph query = nx.Graph() query.add_edge(1, 2) query.add_edge(1, 3) query.add_edge(2, 3) # Add attributes to query graph query.nodes[1]['cat'] = 1 query.nodes[2]['cat'] = 2 query.nodes[3]['cat'] = 3 # GraphMatcher is supposed to call this function when matching nodes def node_match(q_node_dict, t_node_dict): # This is never called! #print(&quot;q_node_dict: {} t_node_dict: {}&quot;.format(q_node_dict, t_node_dict)) return q_node_dict['cat'] == t_node_dict['cat'] # Matcher GM = iso.GraphMatcher(query, target, node_match=node_match) # This should print matching nodes? for x in GM.candidate_pairs_iter(): print(x) # What should this print? for x in GM.subgraph_isomorphisms_iter(): print(x) </code></pre> <p>Instead, the <code>GM.candidate_pairs_iter()</code> loop prints:</p> <pre><code>(1,1) (2,1) (3,1) </code></pre> <p>as if all nodes in Query match a single node '1' in the Target. Why?</p> <p>The iterator <code>GM.subgraph_isomorphisms_iter()</code> is empty. What should it iterate over?</p> <p>In the hope of facilitating subgraph matching I have added attributes to Query and Target. Nodes 1, 2, 3 in Query and Target have the same values and should match, but it looks like they don't.</p> <p>According to the <a href="https://networkx.org/documentation/stable/reference/algorithms/generated/networkx.algorithms.isomorphism.GraphMatcher.__init__.html#networkx.algorithms.isomorphism.GraphMatcher.__init__" rel="nofollow noreferrer">NetworkX documentation</a>, the <code>node_match</code> parameter of the <code>GraphMatcher</code> constructor is a &quot;callable function that returns True iff node n1 in G1 and n2 in G2 should be considered equal during the isomorphism test. The function will be called like:</p> <p><code>node_match(G1.nodes[n1], G2.nodes[n2])</code> &quot;</p> <p>and this function is never called in my case!</p> <p>What am I missing? Thanks!</p>
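<p>A hedged guess at what is going on, based on the documented convention that <code>GraphMatcher(G1, G2)</code> searches for subgraphs of <em>G1</em> isomorphic to <em>G2</em>: the larger target graph should be passed first, and <code>candidate_pairs_iter()</code> is an internal helper of the search rather than a way to enumerate matches. A minimal sketch of the corrected call:</p> <pre><code># GraphMatcher(G1, G2) looks for subgraphs of G1 isomorphic to G2,
# so the larger target graph goes first
GM = iso.GraphMatcher(target, query, node_match=node_match)

for mapping in GM.subgraph_isomorphisms_iter():
    # mapping is a dict of {target_node: query_node}
    print(mapping)
</code></pre>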
<python><graph><networkx>
2023-01-09 20:09:24
0
3,549
dokondr
75,062,344
2,798,289
Python pandas group by, transform multiple columns with custom conditions
<p>I have a dataframe containing 500k+ records, and I would like to group by multiple columns (of string and date types) and then pick only a few records inside each group based on a custom condition.</p> <p>Basically, I need to group the records (by <code>first_roll_up</code>, <code>date</code>, <code>granular_timestamp</code>) and check whether the group contains any value in the column <code>top</code>; if it does, keep only the record(s) with a <code>top</code> value, and if it doesn't, keep all the records in the group. A sketch of this idea is shown after the tables below.</p> <p>Input:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>first_roll_up</th> <th>sub</th> <th>top</th> <th>date</th> <th>granular_timestamp</th> <th>values</th> </tr> </thead> <tbody> <tr> <td>ABC</td> <td></td> <td>T1</td> <td>2/10/2022</td> <td>2/10/2022 10:00:00:000</td> <td>.</td> </tr> <tr> <td>ABC</td> <td>SUB_A_1</td> <td></td> <td>2/10/2022</td> <td>2/10/2022 10:00:00:000</td> <td>.</td> </tr> <tr> <td>ABC</td> <td>SUB_A_2</td> <td></td> <td>2/10/2022</td> <td>2/10/2022 10:00:00:000</td> <td>.</td> </tr> <tr> <td>ABC</td> <td>SUB_A_3</td> <td></td> <td>2/10/2022</td> <td>2/10/2022 10:00:00:000</td> <td>.</td> </tr> <tr> <td>XYZ</td> <td>SUB_X_1</td> <td></td> <td>2/12/2022</td> <td>2/10/2022 11:00:00:000</td> <td>.</td> </tr> <tr> <td>XYZ</td> <td>SUB_X_2</td> <td></td> <td>2/12/2022</td> <td>2/10/2022 11:00:00:000</td> <td>.</td> </tr> <tr> <td>XYZ</td> <td>SUB_Y_1</td> <td></td> <td>2/12/2022</td> <td>2/10/2022 12:00:00:000</td> <td>.</td> </tr> </tbody> </table> </div> <p>Output:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>first_roll_up</th> <th>sub</th> <th>top</th> <th>date</th> <th>granular_timestamp</th> <th>values</th> </tr> </thead> <tbody> <tr> <td>ABC</td> <td></td> <td>T1</td> <td>2/10/2022</td> <td>2/10/2022 10:00:00:000</td> <td>.</td> </tr> <tr> <td>XYZ</td> <td>SUB_X_1</td> <td></td> <td>2/12/2022</td> <td>2/10/2022 11:00:00:000</td> <td>.</td> </tr> <tr> <td>XYZ</td> <td>SUB_X_2</td> <td></td> <td>2/12/2022</td> <td>2/10/2022 11:00:00:000</td> <td>.</td> </tr> <tr> <td>XYZ</td> <td>SUB_Y_1</td> <td></td> <td>2/12/2022</td> <td>2/10/2022 12:00:00:000</td> <td>.</td> </tr> </tbody> </table> </div> <p>I tried the code below, but the function takes 10+ minutes to complete. I also tried <code>transform</code> instead of <code>apply</code> by adding a new boolean column to identify the groups, but that didn't help either.</p> <pre><code>df.groupby(['first_roll_up', 'sub', 'top', 'date', 'granular_timestamp'], sort=False) .apply(custom_function_to_filter_each_group_records) </code></pre>
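<p>A minimal vectorized sketch of that filter, grouping only on the three columns described in the text (whether an absent <code>top</code> is an empty string or NaN in the real data is an assumption to adjust):</p> <pre><code>group_cols = ['first_roll_up', 'date', 'granular_timestamp']

# flag rows that carry a value in 'top'
has_top = df['top'].notna() &amp; df['top'].ne('')

# True for every row whose group contains at least one 'top' row
group_has_top = has_top.groupby([df[c] for c in group_cols]).transform('any')

# keep the 'top' rows of groups that have one, all rows otherwise
out = df[has_top | ~group_has_top]
</code></pre>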
<python><pandas><dataframe><dask>
2023-01-09 19:59:26
2
2,522
Govind
75,062,303
17,718,587
Detecting the pressed keys in multi-languages in Psychopy
<p>I'm creating an experiment where I continue or finish a loop depending on the pressed key.</p> <p>The example below continues the loop when the participant presses the <code>r</code> key. When the participant presses the <code>p</code> or <code>q</code> key, the loop finishes:</p> <pre class="lang-py prettyprint-override"><code>keys = event.getKeys() for thisKey in keys: if thisKey == 'r': redisplay_image_loop.finished = False elif thisKey == 'p' or thisKey == 'q': redisplay_image_loop.finished = True </code></pre> <p>The above example works great, but if the keyboard language is set to Hebrew when we start the experiment, the keys are no longer recognized. It only works if the keyboard language is set to English when we run the experiment.</p> <p>Is there any way to solve this issue? Maybe by checking the <code>key code</code> of the pressed key?</p> <p>The keys I need are:</p> <pre class="lang-py prettyprint-override"><code>p = פ = 80 q = 81 = / r = ר = 82 </code></pre> <p>Thanks!</p>
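<p>One possible layout-independent approach is the newer <code>psychopy.hardware.keyboard.Keyboard</code> class, whose <code>KeyPress</code> objects expose a numeric <code>.code</code> attribute alongside <code>.name</code>. A hedged sketch — the code values below are copied from the question and may differ per platform and backend, so they should first be verified by printing <code>key.code</code>:</p> <pre class="lang-py prettyprint-override"><code>from psychopy.hardware import keyboard

kb = keyboard.Keyboard()

# key codes assumed from the question; verify on the target machine
CONTINUE_CODES = {82}      # r / ר
STOP_CODES = {80, 81}      # p / פ and the q position

for key in kb.getKeys():
    if key.code in CONTINUE_CODES:
        redisplay_image_loop.finished = False
    elif key.code in STOP_CODES:
        redisplay_image_loop.finished = True
</code></pre>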
<python><psychopy>
2023-01-09 19:55:44
1
2,772
ChenBr