Dataset schema (column, dtype, observed min .. max):

QuestionId          int64          74.8M .. 79.8M
UserId              int64          56 .. 29.4M
QuestionTitle       stringlengths  15 .. 150
QuestionBody        stringlengths  40 .. 40.3k
Tags                stringlengths  8 .. 101
CreationDate        stringdate     2022-12-10 09:42:47 .. 2025-11-01 19:08:18
AnswerCount         int64          0 .. 44
UserExpertiseLevel  int64          301 .. 888k
UserDisplayName     stringlengths  3 .. 30
74,805,662
1,905,305
How to multiply pandas dataframe columns with dictionary value where dictionary key matches dataframe index
<p>Is there a better way than iterating over columns to multiply column values by dictionary values where the dictionary key matches a specific dataframe column? Given a dataframe of:</p> <pre><code>import pandas as pd

df = pd.DataFrame({
    'category': [1, 2, 3, 4, 5],
    'a': [5, 4, 3, 3, 4],
    'b': [3, 2, 4, 3, 10],
    'c': [3, 2, 1, 1, 1]
})
</code></pre> <p>And a dictionary of:</p> <pre><code>lookup = {1: 0, 2: 4, 3: 1, 4: 6, 5: 2}
</code></pre> <p>I can multiply each column other than 'category' by the dictionary value where the key matches 'category' this way:</p> <pre><code>for t in df.columns[1:]:
    df[t] = df[t].mul(df['category'].map(lookup)).fillna(df[t])
</code></pre> <p>But is there a more succinct way to do this than iterating over columns?</p>
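A vectorized sketch (my own, not part of the post): map `category` to its factor once, then broadcast the multiplication down the rows with `axis=0`, which removes the column loop entirely.

```python
import pandas as pd

df = pd.DataFrame({
    'category': [1, 2, 3, 4, 5],
    'a': [5, 4, 3, 3, 4],
    'b': [3, 2, 4, 3, 10],
    'c': [3, 2, 1, 1, 1],
})
lookup = {1: 0, 2: 4, 3: 1, 4: 6, 5: 2}

# Map each row's category to its factor, then broadcast row-wise
# across every column except 'category' in one operation.
factors = df['category'].map(lookup)
value_cols = df.columns[1:]
df[value_cols] = df[value_cols].mul(factors, axis=0)
```

Rows whose category is missing from `lookup` become NaN here; the post's `.fillna` fallback could be restored with `factors.fillna(1)` if that behavior is wanted.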
<python><pandas><dataframe>
2022-12-15 00:31:42
2
779
mweber
74,805,446
16,287,416
How to hash a PyTorch Tensor
<p>Given a PyTorch Tensor of integers, for example of shape <code>torch.Size(N, C, H, W)</code>, is there a way to efficiently hash each element of the tensor such that I get an output in <code>[-MAX_INT32, +MAX_INT32]</code> or <code>[0, MAX_INT32]</code> that runs fast on the GPU?</p> <p>Also, it should be done in a way that lets me perform <code>output % N</code>, with each element uniformly or almost uniformly distributed from 0 to N.</p>
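One common approach (an assumption on my part, not from the post) is a Knuth-style multiplicative hash: a single elementwise multiply and mask, which parallelizes trivially. It is illustrated here with NumPy; the same elementwise `*`, `&`, and `>>` operators exist on integer `torch.Tensor`s and run on the GPU.

```python
import numpy as np

# Knuth-style multiplicative hash, applied elementwise.
# 2654435761 is the classic multiplier (2^32 / golden ratio, odd).
KNUTH = 2654435761

def hash_elements(x):
    x = x.astype(np.int64)
    h = (x * KNUTH) & 0xFFFFFFFF  # mix bits, keep the low 32
    return (h >> 1).astype(np.int64)  # fold into [0, MAX_INT32]

vals = np.arange(16).reshape(2, 2, 2, 2)  # stands in for an (N, C, H, W) tensor
hashed = hash_elements(vals)
```

Distribution after `% N` is decent but not cryptographic; for stronger mixing a multi-stage integer mix (e.g. xorshift-multiply rounds) can be written with the same elementwise ops.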
<python><hash><pytorch><tensor>
2022-12-14 23:54:49
3
307
Christian__
74,805,336
11,462,274
Among investment histories how to find the cumulative sum option that proves to be more reliable for the long term?
<p>The chart that reaches the highest peak of cumulative sum is not always the most reliable for long-term investment: a single trade may have generated a very high profit, after which returns turn negative and the curve slides into an endless fall.</p> <p>Relying on the highest ROI (return on investment) alone is risky for the same reasons.</p> <p>That said, the cumulative sum graphs generated by these test values are:</p> <pre class="lang-python prettyprint-override"><code>ex_csv_1 = &quot;&quot;&quot;
Col 1,Col 2,Col 3,return
a,b,c,1
a,b,c,1
a,b,c,-1
a,b,c,1
a,b,c,1
a,b,c,-1
a,b,c,1
&quot;&quot;&quot;
</code></pre> <p><a href="https://i.sstatic.net/wDvVY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wDvVY.png" alt="enter image description here" /></a></p> <pre class="lang-python prettyprint-override"><code>ex_csv_2 = &quot;&quot;&quot;
Col 1,Col 2,Col 3,return
a,b,c,1
a,b,c,-2
a,b,c,-3
a,b,c,4
a,b,c,5
a,b,c,6
a,b,c,7
&quot;&quot;&quot;
</code></pre> <p><a href="https://i.sstatic.net/e5uNj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/e5uNj.png" alt="enter image description here" /></a></p> <pre class="lang-python prettyprint-override"><code>ex_csv_3 = &quot;&quot;&quot;
Col 1,Col 2,Col 3,return
a,b,c,2
a,b,c,2
a,b,c,2
a,b,c,2
a,b,c,2
a,b,c,-2
a,b,c,2
&quot;&quot;&quot;
</code></pre> <p><a href="https://i.sstatic.net/bxfL0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bxfL0.png" alt="enter image description here" /></a></p> <p>If I wanted to find the one with the biggest peak, I would do it this way:</p> <pre class="lang-python prettyprint-override"><code>import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import io

ex_csv_1 = &quot;&quot;&quot;
Col 1,Col 2,Col 3,return
a,b,c,1
a,b,c,1
a,b,c,-1
a,b,c,1
a,b,c,1
a,b,c,-1
a,b,c,1
&quot;&quot;&quot;

ex_csv_2 = &quot;&quot;&quot;
Col 1,Col 2,Col 3,return
a,b,c,1
a,b,c,-2
a,b,c,-3
a,b,c,4
a,b,c,5
a,b,c,6
a,b,c,7
&quot;&quot;&quot;

ex_csv_3 = &quot;&quot;&quot;
Col 1,Col 2,Col 3,return
a,b,c,2
a,b,c,2
a,b,c,2
a,b,c,2
a,b,c,2
a,b,c,-2
a,b,c,2
&quot;&quot;&quot;

def save_fig(cs):
    values = np.cumsum(cs[2])
    fig = plt.figure()
    plt.plot(values)
    fig.savefig('a_graph.png', dpi=fig.dpi)
    fig.clf()
    plt.close('all')

options = []
for i, strio in enumerate([ex_csv_1, ex_csv_2, ex_csv_3]):
    df = pd.read_csv(io.StringIO(strio), sep=',')
    df['invest'] = df.groupby(['Col 1', 'Col 2', 'Col 3'])['return'].cumsum().gt(df['return'])
    pl = df[df['invest'] == True]['return']
    total_sum = pl.sum()
    roi = total_sum / len(pl)
    options.append([total_sum, roi, pl])

max_list = max(options, key=lambda sublist: sublist[0])
save_fig(max_list)
</code></pre> <p>But how should I go about finding which track record among the three keeps the smallest fluctuation while delivering the greatest long-term reliability?</p> <p>I will put two charts below; the second chart, which has fewer oscillations, is the more reliable of the two for the long term, as the variations are smaller and it maintains a steady climb with an established pattern:</p> <p><a href="https://i.sstatic.net/7WfAt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7WfAt.png" alt="enter image description here" /></a> <a href="https://i.sstatic.net/8ySJy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8ySJy.png" alt="enter image description here" /></a></p>
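One way to score "steady growth with small fluctuation" (my interpretation of "reliable", which the post does not pin down) is a Sharpe-like ratio: mean return divided by the standard deviation of returns, computed per history. Higher means steadier growth per unit of volatility.

```python
import numpy as np

# The three return series from the post's example CSVs.
histories = {
    'ex_csv_1': [1, 1, -1, 1, 1, -1, 1],
    'ex_csv_2': [1, -2, -3, 4, 5, 6, 7],
    'ex_csv_3': [2, 2, 2, 2, 2, -2, 2],
}

def reliability(returns):
    r = np.asarray(returns, dtype=float)
    sd = r.std()
    # Guard the zero-volatility edge case (a perfectly flat series).
    return r.mean() / sd if sd > 0 else float('inf')

scores = {name: reliability(r) for name, r in histories.items()}
best = max(scores, key=scores.get)
```

An alternative that matches the "established pattern" wording even more closely would be the standard deviation of residuals around a straight line fitted to the cumulative sum (`np.polyfit` on `np.cumsum(r)`), picking the history with the smallest residual spread.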
<python><pandas><standard-deviation><cumsum>
2022-12-14 23:37:45
2
2,222
Digital Farmer
74,805,287
14,436,930
Why deque.append appears to be slower than list when performing extra calculations?
<p>As shown below, I try to implement a memory with a maximum length.</p> <p><code>a</code> uses a list that pops the first item when it reaches the maximum length; <code>b</code> uses a deque and relies on its built-in <code>maxlen</code> management.</p> <pre><code>import time
import numpy as np
import collections

a = [np.ones((64, 64), dtype=np.int64) + i for i in range(100000)]
b = collections.deque(a, maxlen=100000)

element = np.ones((64, 64), dtype=np.int64)
time.sleep(0.5)  # simulates more initialization and preprocessing

t = time.time()
for i in range(10000):
    temp = element * i - i  # simulate some calculation on the fly
    b.append(temp)
print(time.time() - t)

t = time.time()
for i in range(10000):
    temp = element * i - i  # simulate some calculation on the fly
    if len(a) == 100000:
        a.pop(0)
    a.append(temp)
print(time.time() - t)

print(len(a))
print(len(b))
</code></pre> <p><strong>Expected</strong>: <code>b</code> should take less time, since it has O(1) popleft and append, while <code>a</code> has O(n) pop(0) and O(1) append. Since their simulated calculation is the same, the deque should take less time.</p> <p>Surprisingly, it outputs this:</p> <pre><code>0.6613483428955078   # time for deque
0.24213624000549316  # time for list
100000               # check length and make sure they are doing what I want
100000
</code></pre> <p>Even more surprisingly, if I remove the calculation <code>temp = element * i - i</code> from both implementations, the result changes significantly:</p> <pre><code>0.0010280609130859375  # time for deque
0.16035985946655273    # time for list
100000
100000
</code></pre> <p>What went wrong in the above code? Why do calculations unrelated to append make such a difference?</p>
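To separate the container cost from everything else, one can time just the evict-and-append step with a cheap payload (a measurement sketch of my own, not an explanation of the post's numbers): with small integers instead of large arrays, allocator effects cannot dominate, and the expected O(1)-deque vs O(n)-list asymmetry shows through.

```python
import collections
import time

N = 100_000
ITERS = 1_000

lst = list(range(N))
dq = collections.deque(range(N), maxlen=N)

# Time only the eviction + append for the deque.
t0 = time.perf_counter()
for i in range(ITERS):
    dq.append(i)                # O(1): maxlen evicts the left end
dq_time = time.perf_counter() - t0

# Time only the eviction + append for the list.
t0 = time.perf_counter()
for i in range(ITERS):
    if len(lst) == N:
        lst.pop(0)              # O(n): shifts every remaining element
    lst.append(i)
list_time = time.perf_counter() - t0
```

With the containers isolated like this, the deque should come out far ahead; any remaining discrepancy in the original script must therefore come from interactions with the NumPy temporaries rather than from `append` itself.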
<python><list><time><append><deque>
2022-12-14 23:30:19
0
671
seermer
74,805,267
143,684
How to use a custom console logger for the entire application in Python?
<p>After reading the logging documentation of Python and several examples of how to customise the log format and output, I came to the conclusion that formatting can only be set for a single logger instance. But since all libraries in other modules use their own logger instance (so that their source can be identified), they completely ignore how my main application wants to output the log messages.</p> <p>How can I create a custom logger that formats the records in a specific way (for example with abbreviated levels and corresponding console colours) and that is automatically used for all modules when set up in the main function?</p> <p>What I saw is this:</p> <pre class="lang-py prettyprint-override"><code># main.py:
logger = logging.getLogger('test')
logger.addHandler(ConsoleHandler())

# module.py:
logger = logging.getLogger(__name__)
...
logger.info(&quot;a message from the library&quot;)
</code></pre> <p>This prints my library log message as before, not with the custom format. It seems rather pointless to apply an output format to a single logger instance if each module has its own one. The formatter must be applied at the application level, not for each library individually. At least that's how I understand and successfully use things around <code>ILogger</code> in .NET. Can Python do that? I guess what I need is my own <code>ILogger</code> implementation that is made accessible throughout the application through dependency injection. That's not what Python logging looks like to me.</p>
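In the stdlib `logging` module, records propagate up to the root logger by default, so a handler and formatter attached to the root in `main` apply to every module-level `getLogger(__name__)` logger without the libraries changing anything. A minimal sketch with abbreviated level names (the specific format string and abbreviations are my own illustration):

```python
import io
import logging

class AbbrevFormatter(logging.Formatter):
    """Formatter that shortens level names, as an example customization."""
    ABBREV = {'DEBUG': 'DBG', 'INFO': 'INF', 'WARNING': 'WRN',
              'ERROR': 'ERR', 'CRITICAL': 'CRT'}

    def format(self, record):
        record.levelname = self.ABBREV.get(record.levelname, record.levelname)
        return super().format(record)

# main.py: configure the *root* logger once.
stream = io.StringIO()  # stand-in for sys.stderr so the output can be inspected
handler = logging.StreamHandler(stream)
handler.setFormatter(AbbrevFormatter('%(levelname)s %(name)s: %(message)s'))
logging.getLogger().addHandler(handler)
logging.getLogger().setLevel(logging.INFO)

# module.py: libraries keep doing this, unchanged.
lib_logger = logging.getLogger('some.library')
lib_logger.info('a message from the library')

# The record propagated up to the root handler.
output = stream.getvalue()
```

This is the Python analogue of configuring logging at the application level: child loggers emit, the root formats. Console colours would go into the same `Formatter` subclass.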
<python><logging><python-logging>
2022-12-14 23:27:18
2
20,704
ygoe
74,805,228
6,017,833
Python how to detect a change in executed code
<p>I have a Python package with an entry point at <code>__main__</code>. I will be running a nightly Cron job, and I want the Cron job to re-run the package only if it detects a change in the code that the package's execution actually touches. I will be using git hashes to perform the comparison between files and commits.</p> <p>However, if I commit a change that affects a function that is NOT used by the current execution from the <code>__main__</code> entry point, then I don't want the package to execute that night. How can I get around this issue?</p>
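One sketch (an approximation, not a full solution): the stdlib `modulefinder` statically follows imports from the entry script, which roughly identifies "code the current execution can reach"; hashing just those files gives a fingerprint the Cron job can compare between nights. SHA-256 over file contents stands in here for comparing git blob hashes of the same file set; dynamic imports are not detected, so this under-approximates.

```python
import hashlib
import os
from modulefinder import ModuleFinder

def execution_fingerprint(entry_script):
    """Hash only the source files reachable from the entry point."""
    finder = ModuleFinder()
    finder.run_script(entry_script)  # static analysis; does not execute the code
    files = sorted(
        m.__file__ for m in finder.modules.values()
        if m.__file__ and os.path.exists(m.__file__)
    )
    digest = hashlib.sha256()
    for path in files:
        with open(path, 'rb') as f:
            digest.update(f.read())
    return digest.hexdigest()
```

The Cron wrapper would store the fingerprint after each run and execute the package only when the stored and current values differ. For exact "was this line executed" information, tracing one real run with `coverage.py` and hashing the files it reports would be tighter.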
<python><git><cron><package>
2022-12-14 23:22:27
0
1,945
Harry Stuart
74,805,191
5,678,057
Pandas plot: How to add ```hue``` parameter to pandas plot
<p>I have the graph below, obtained with the following code:</p> <pre><code>x_var, y_var = 'category', 'instance'
df.groupby(x_var)[y_var].nunique().plot.bar(stacked=False)
</code></pre> <p>I want to add a <code>hue</code> element using the <code>Corr</code> column on top of this, so that I can see the distribution of <code>Corr</code> across each category.</p> <p><code>hue</code> is not accepted as a valid parameter.</p> <p><a href="https://i.sstatic.net/yUDcx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yUDcx.png" alt="enter image description here" /></a></p>
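`hue` is a seaborn parameter; plain pandas plotting has no such argument. The pandas equivalent (a sketch on hypothetical data, with `Corr` assumed categorical) is to group by both columns and unstack, so each `Corr` value becomes its own bar series:

```python
import pandas as pd

# Hypothetical data standing in for the post's df.
df = pd.DataFrame({
    'category': ['x', 'x', 'y', 'y', 'y'],
    'instance': ['i1', 'i2', 'i3', 'i4', 'i5'],
    'Corr':     ['hi', 'lo', 'hi', 'lo', 'lo'],
})

# One column per Corr value, one row per category.
counts = (df.groupby(['category', 'Corr'])['instance']
            .nunique()
            .unstack('Corr', fill_value=0))

# counts.plot.bar(stacked=False) now draws one bar per Corr value
# per category -- the same effect as seaborn's hue.
```

Alternatively, `seaborn.countplot(data=df, x='category', hue='Corr')` gives the hue behaviour directly, at the cost of counting rows rather than unique instances.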
<python><pandas><matplotlib>
2022-12-14 23:16:44
1
389
Salih
74,805,156
214,526
Pandas contains() returning an empty series while searching for text in a DataFrame column
<p>Dataset: <a href="http://github.com/mircealex/Movie_ratings_2016_17/raw/master/fandango_score_comparison.csv" rel="nofollow noreferrer"><code>fandango_score_comparison.csv</code></a></p> <p>I'm trying to access a row that matches a given movie name using the following code:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd

df = pd.read_csv(&quot;http://github.com/mircealex/Movie_ratings_2016_17/raw/master/fandango_score_comparison.csv&quot;)
df.drop_duplicates(subset=[&quot;FILM&quot;], inplace=True, ignore_index=True)
movie_name = df.FILM.iloc[0]
movie_df = df[df[&quot;FILM&quot;].str.contains(movie_name)]
</code></pre> <p>But the <code>movie_df</code> I get is always empty, irrespective of the <code>movie_name</code> I select. What am I missing or doing wrong?</p>
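A likely cause (inferred from the dataset, where every FILM value ends in a year like `(2015)`): `str.contains` treats its pattern as a regular expression by default, so the parentheses in the movie name become a capture group and no longer match the literal parentheses in the column. Passing `regex=False` (or escaping with `re.escape`) fixes it:

```python
import pandas as pd

df = pd.DataFrame({'FILM': ['Avengers: Age of Ultron (2015)',
                            'Cinderella (2015)']})
movie_name = df.FILM.iloc[0]

# As a regex, "(2015)" matches the bare text "2015", so the literal
# parentheses in the column are never matched -> empty result.
as_regex = df[df['FILM'].str.contains(movie_name)]

# Literal substring comparison matches the row as intended.
literal = df[df['FILM'].str.contains(movie_name, regex=False)]
```

For an exact match rather than a substring test, `df[df['FILM'] == movie_name]` avoids the issue entirely.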
<python><pandas><dataframe>
2022-12-14 23:12:09
1
911
soumeng78
74,805,132
5,227,892
Remove all elements in each list of a nested list, based on first nested list
<p>I have the following list:</p> <pre><code>a = [[0, 1, 0, 1, 1, 1], [23, 22, 12, 45, 32, 33], [232, 332, 222, 342, 321, 232]]
</code></pre> <p>I want to remove the <code>0</code> entries in <code>a[0]</code> and the corresponding values of <code>a[1]</code> and <code>a[2]</code>, so the result list should be as follows:</p> <pre><code>d = [[1, 1, 1, 1], [22, 45, 32, 33], [332, 342, 321, 232]]
</code></pre>
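A compact sketch: treat the first sub-list as a mask and keep only the positions where it is non-zero, applied uniformly to every sub-list (including the first).

```python
a = [[0, 1, 0, 1, 1, 1],
     [23, 22, 12, 45, 32, 33],
     [232, 332, 222, 342, 321, 232]]

# Use the first sub-list as a mask: keep position j in every sub-list
# only where a[0][j] is non-zero.
mask = a[0]
d = [[value for value, keep in zip(row, mask) if keep] for row in a]
```

With NumPy the same idea is `arr[:, arr[0] != 0]` on a 2-D array, which may be preferable for large data.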
<python><python-3.x>
2022-12-14 23:09:26
3
435
Sher
74,804,825
3,507,825
How to save created Python win32com msg file without displaying the email in the GUI?
<p>I am using Pywin32 and win32com to programmatically create and save Outlook MSG files. (These are emails that will only ever be saved and never be sent across SMTP.) I am able to create and save the emails, but it only works when the Display() method is used. This is problematic because it creates and opens the actual Outlook email in the GUI.</p> <p>When I comment out the Display() method, the email message save method just runs forever, yields zero output in the debug console, and the email is neither created nor saved.</p> <p>The appearance of the Outlook emails in the GUI is not desirable, as I will later programmatically create thousands of messages and cannot have them all opening in the GUI (and having to close them!).</p> <p>edit: I've done some research here <a href="https://learn.microsoft.com/en-us/dotnet/api/microsoft.office.interop.outlook._mailitem.display?view=outlook-pia#microsoft-office-interop-outlook-mailitem-display(system-object)" rel="nofollow noreferrer">display method</a> and here <a href="https://learn.microsoft.com/en-us/dotnet/api/microsoft.office.interop.outlook._mailitem.display?view=outlook-pia#microsoft-office-interop-outlook-mailitem-display(system-object)" rel="nofollow noreferrer">.net mail class</a> but have not found a way to suppress the GUI display of the email.</p> <p>How can I create the emails on disk as msg files without them appearing in the Windows GUI as Outlook emails? Code as follows. Thank you.</p> <pre><code>import sys
import win32com.client as client

MSG_FOLDER_PATH = &quot;d:\\Emails_Msg\\&quot;

html_body = &quot;&quot;&quot;
&lt;div&gt;
Test email 123
&lt;/div&gt;&lt;br&gt;
&quot;&quot;&quot;

recipient = &quot;johndoe@foo.com&quot;
cc = &quot;janedoe@foo.com&quot;

outlook = client.Dispatch(&quot;outlook.application&quot;)
message = outlook.CreateItem(0)
message.To = recipient
message.CC = cc
message.Subject = &quot;foo1&quot;
message.HTMLBody = html_body

# message display method
message.Display()

# save the message, only works if message.Display() runs
message_name = MSG_FOLDER_PATH + &quot;foo.msg&quot;
message.SaveAs(message_name)
</code></pre>
<python><outlook><save><win32com><msg>
2022-12-14 22:26:32
1
451
user3507825
74,804,756
2,088,886
Python Flask App Deployed to IIS Webserver 500's when using Subprocess nslookup
<p>I have a simple flask app that works locally but gets 500'd when testing in IIS.</p> <p>Edit: I was wrong; I initially thought it was a pandas read issue, but the problem actually comes from the subprocess call that tries to get the user's IP address:</p> <pre><code>from flask import Flask, request
import subprocess

app = Flask(__name__)

html = '''
&lt;h1&gt;Test&lt;/h1&gt;
&lt;h2&gt;Report Generator&lt;/h2&gt;
&lt;p&gt;
&lt;form action=&quot;/submitted&quot; method=&quot;post&quot;&gt;
  &lt;label for=&quot;reports&quot;&gt;Choose a Report:&lt;/label&gt;
  &lt;select id=&quot;reports&quot; name=&quot;reports&quot;&gt;
    &lt;option value=&quot;user_test&quot;&gt;User Test&lt;/option&gt;
  &lt;/select&gt;
  &lt;input type=&quot;submit&quot;&gt;
&lt;/form&gt;
'''

@app.route(&quot;/&quot;)
def index():
    return html

@app.route(&quot;/submitted&quot;, methods=['POST'])
def show():
    select = request.form.get(&quot;reports&quot;)
    if select == 'user_test':
        name = 'XXXXXXXX.dcm.com'
        result = subprocess.check_output(['nslookup', name])
    else:
        result = &quot;Not Available&quot;
    return result

if __name__ == &quot;__main__&quot;:
    app.run()
</code></pre> <p>This code runs fine when tested locally. If I remove the part where it runs subprocess to get the user IP, it works fine on IIS. The trouble starts when I include the part that runs <code>subprocess.check_output(['nslookup', name])</code> on IIS, which leads to a 500 internal server error.</p> <p>Here is a picture of the error:</p> <p><a href="https://i.sstatic.net/rmvCw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rmvCw.png" alt="enter image description here" /></a></p> <p>Thanks for the help!</p>
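A sketch of a subprocess-free alternative (assuming DNS resolution is all that's needed from `nslookup`): the stdlib `socket` module resolves names in-process, which sidesteps whatever restrictions the IIS application-pool identity places on spawning console programs.

```python
import socket

def resolve(name):
    """Resolve a hostname to an IPv4 address without spawning a child process."""
    try:
        return socket.gethostbyname(name)
    except socket.gaierror:
        return "Not Available"

# In the Flask view, `result = resolve(name)` would replace the
# subprocess.check_output(['nslookup', name]) call.
```

If `nslookup` output specifically is required, the usual IIS-side fix is giving the app-pool identity execute permission on the binary and passing its full path (`C:\Windows\System32\nslookup.exe`), but in-process resolution avoids that configuration entirely.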
<python><flask><iis><subprocess><nslookup>
2022-12-14 22:18:56
1
2,161
David Yang
74,804,745
17,835,656
How can I show a PDF file as bytes using QWebEngineView in PyQt5?
<p>I need to import a file from the database as bytes and show it in the window using QWebEngineView in PyQt5.</p> <p>This is my code:</p> <pre class="lang-py prettyprint-override"><code>from PyQt5 import QtWidgets
from PyQt5 import QtGui
from PyQt5 import QtCore
from PyQt5 import QtWebEngineWidgets
import sys

application = QtWidgets.QApplication(sys.argv)
window = QtWidgets.QWidget()
window_layout = QtWidgets.QGridLayout()

PDF = &quot;D:\export.pdf&quot;
file = open(PDF, &quot;rb&quot;)
PDF_as_bytes = file.read()

engine = QtWebEngineWidgets.QWebEngineView()
engine.setMinimumSize(500, 500)
engine_settings = engine.settings()
engine_settings.setAttribute(QtWebEngineWidgets.QWebEngineSettings.PluginsEnabled, True)
engine_settings.setAttribute(QtWebEngineWidgets.QWebEngineSettings.ShowScrollBars, True)
engine_settings.setAttribute(QtWebEngineWidgets.QWebEngineSettings.PdfViewerEnabled, True)
engine.load(QtCore.QUrl.fromUserInput(PDF))

window_layout.addWidget(engine)
window.setLayout(window_layout)
window.show()
application.exec()
</code></pre> <p>I can show the PDF file when I put its path using this code:</p> <pre class="lang-py prettyprint-override"><code>engine.load(QtCore.QUrl.fromUserInput(PDF))
</code></pre> <p>but I need to show it from the bytes data.</p> <p>I tried to use this code:</p> <pre class="lang-py prettyprint-override"><code>engine.load(QtCore.QUrl.fromEncoded(PDF_as_bytes))
</code></pre> <p>but this does not work.</p> <p>Is there a way to show it using the file bytes?</p>
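One sketch (an assumption, not a tested PyQt5 recipe): encode the bytes as a base64 `data:` URL, which Chromium-based views such as QWebEngineView can render. Only the URL construction is shown runnable here; the `engine.load` line is the Qt part.

```python
import base64

def pdf_bytes_to_data_url(pdf_bytes):
    """Build a data: URL that embeds the PDF bytes directly."""
    encoded = base64.b64encode(pdf_bytes).decode('ascii')
    return 'data:application/pdf;base64,' + encoded

url = pdf_bytes_to_data_url(b'%PDF-1.4 minimal stand-in')

# In the Qt code, one would then do (untested sketch):
#   engine.load(QtCore.QUrl(url))
```

Chromium caps the length of data URLs, so for very large PDFs writing the bytes to a temporary file and loading `QUrl.fromLocalFile(...)` is the more robust fallback.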
<python><qt><pyqt><pyqt5>
2022-12-14 22:17:33
0
721
Mohammed almalki
74,804,702
12,574,341
Instantiating TypedDict easily with unpacked args
<p>When using <code>NamedTuple</code>, you can easily instantiate by unpacking an arbitrary number of arguments using <code>*</code>:</p> <pre class="lang-py prettyprint-override"><code>class DateTimeUTC(NamedTuple):
    year: int
    month: int
    day: int
    hour: int
    minute: int
    second: float

dt = DateTimeUTC(*time.gmtime(1639480335.751329)[:6])
</code></pre> <p>I'd prefer to represent it as a dictionary using <code>TypedDict</code> instead of a tuple, but I'm not able to easily unpack arguments in the same fashion.</p> <pre class="lang-py prettyprint-override"><code>class DateTimeUTC(TypedDict):
    year: int
    month: int
    day: int
    hour: int
    minute: int
    second: float

dt = DateTimeUTC(*time.gmtime(1639480335.751329)[:6])  # &lt;-- this breaks
</code></pre> <p>I get the following error:</p> <p><code>Expected 0 positional arguments Pylance(reportGeneralTypeIssues)</code></p> <p><a href="https://i.sstatic.net/Wj6Dd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Wj6Dd.png" alt="enter image description here" /></a></p> <p>I've seen that I can do this:</p> <pre class="lang-py prettyprint-override"><code>Y, M, D, h, m, s = time.gmtime(1639480335.751329)[:6]
dt = DateTimeUTC(year=Y, month=M, day=D, hour=h, minute=m, second=s)
</code></pre> <p>but it's not as elegant as I'd like.</p> <p>I tried implementing a custom <code>__init__()</code> constructor on the class, but that is apparently not allowed either:</p> <p><code>TypedDict classes can contain only type annotations Pylance</code></p> <p>Are there any alternatives?</p>
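Since a `TypedDict` only accepts keyword (or mapping) initialization, one workaround (a sketch; a static checker may still flag the dynamically built keys) is to zip the values with the class's own ordered annotations, which stand in for the positional parameter list that `NamedTuple` provides:

```python
import time
from typing import TypedDict

class DateTimeUTC(TypedDict):
    year: int
    month: int
    day: int
    hour: int
    minute: int
    second: float

values = time.gmtime(1639480335.751329)[:6]
# __annotations__ preserves declaration order, so zipping it with the
# positional values reproduces NamedTuple-style * unpacking.
dt = DateTimeUTC(**dict(zip(DateTimeUTC.__annotations__, values)))
```

If runtime dict behaviour is not strictly required, `NamedTuple` plus `._asdict()` gives positional construction and a dict view in one step.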
<python><python-typing>
2022-12-14 22:11:03
1
1,459
Michael Moreno
74,804,481
3,555,558
Django Signals not triggering when only using apps.py
<p>Here I want to create a Datalog entry when a new Customer creates an account. I want to catch the <code>Datalog</code> event and save the relevant information into the <code>Datalog</code> table.</p> <p>(I could write this in signals.py, but I prefer to write it directly into apps.py.)</p> <h3>apps.py</h3> <pre><code>from django.db.models.signals import post_save, post_delete
from django.dispatch import receiver

from .models import Datalog

class LogAPIconfig(AppConfig):
    default_auto_field = 'django.db.models.BigAutoField'
    name = 'myapp'

    def ready(self):
        @receiver(post_save, sender=Datalog)
        def log_save_actioner(sender, created, instance, **kwargs):
            print(&quot;signal is sent to heatmap&quot;)
            action = 'create' if created else 'update'
            Datalog.objects.create(
                heatmap_type=instance.heatmap_type,
                status=instance.status,
                action=action,
                sender_table=sender.__name__,
                timestamp=instance.timestamp
            )
</code></pre> <h3>models.py</h3> <pre><code>class Customer(models.Model):
    Customer_name = models.ForeignKey(User, unique=True, primary_key=True, related_name=&quot;Customer_name&quot;)
    Customer_type = models.CharField(max_length=255)

class Datalog(models.Model):
    Customer_name = models.ForeignKey(Customer, on_delete=models.CASCADE)
    status = models.CharField(max_length=255)
    comment = models.TextField(null=True, blank=True)
    followUpDate = models.DateTimeField(auto_now_add=True)

    class Meta:
        ordering = ['-followUpDate']

    def __str__(self):
        return str(self.status)
</code></pre> <p>settings.py</p> <pre><code>INSTALLED_APPS = [
    &quot;django.contrib.admin&quot;,
    &quot;django.contrib.auth&quot;,
    &quot;django.contrib.contenttypes&quot;,
    &quot;django.contrib.sessions&quot;,
    &quot;django.contrib.messages&quot;,
    &quot;django.contrib.staticfiles&quot;,
    &quot;rest_framework&quot;,
    &quot;rest_framework.authtoken&quot;,
    &quot;corsheaders&quot;,
    &quot;django_auth_adfs&quot;,
    &quot;django_filters&quot;,
    'myapp.apps.LogAPIconfig',
]
</code></pre> <p>When I implemented this, I got the following error message in the terminal:</p> <blockquote> <p>django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet.</p> </blockquote> <p>After some searching, I think this is somewhat related to importing the <code>Datalog</code> table. I want to know:</p> <ul> <li>Is this something related to me not using signals.py directly?</li> <li>Who should be the <code>sender</code>?</li> <li>Do I need to use post_save.connect(log_save_actioner, sender=User)?</li> </ul>
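The usual cause of `AppRegistryNotReady` here is the module-level `from .models import ...` in apps.py: Django imports the `AppConfig` class before the app registry is populated, so any model import at that point fails. A sketch of the conventional fix (imports moved inside `ready()`; not run outside a Django project, and the signal body is elided):

```python
from django.apps import AppConfig

class LogAPIconfig(AppConfig):
    default_auto_field = 'django.db.models.BigAutoField'
    name = 'myapp'

    def ready(self):
        # Import models and connect signals only once apps are loaded.
        from django.db.models.signals import post_save
        from django.dispatch import receiver
        from .models import Customer, Datalog

        # sender should be the model whose saves you want to observe --
        # Customer, if the goal is logging new customer accounts.
        @receiver(post_save, sender=Customer)
        def log_save_actioner(sender, instance, created, **kwargs):
            action = 'create' if created else 'update'
            ...  # Datalog.objects.create(...) with the relevant fields
```

Note that listening to `post_save` on `Datalog` while also creating `Datalog` rows inside the handler would recurse; observing `Customer` (or `User`) avoids that.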
<python><django><django-signals>
2022-12-14 21:44:10
1
4,675
Alexander
74,804,404
2,989,642
How do I convert a Windows UNC path to a mapped drive letter path with Python?
<p>I've seen the reverse of this question asked a lot but can't find info on this one.</p> <p>A set of subprocesses I'm running in a script want Windows drive-letter paths (<code>M:\Some\Path\file.txt</code> instead of <code>\\my.subnet\someplace\Some\Path\file.txt</code>), but the UNC path is presented as an earlier input (which I don't have control over).</p> <p>I am familiar with the <code>pathlib.Path.drive</code> attribute, which gives me <code>\\my.subnet\subnet</code>, but how would I get from that to the drive letter?</p>
<python><pathlib>
2022-12-14 21:35:24
0
549
auslander
74,804,358
9,103,445
Combining large xml files efficiently with python
<p>I have about 200 xml files ranging from 5MB to 50MB, with 80% being &lt;10MB. These files contain multiple elements with both overlapping and unique data. My goal is to combine all these files by performing a logical union over all the elements.</p> <p>The code seems to work but gets exponentially slower the more files it has to process. For example, it takes about 20sec to process the first 5 files, about a minute to process the next five, about 5 min the next five and so on, while also taking significantly more memory than the sum total of all the files. The overall process is in its fourth hour as I type this.</p> <p>This is, obviously, a 'to be expected' effect, considering that lookup needs to happen on an ever larger tree. Still, I wonder if there are ways to at least diminish this effect.</p> <p>I have tried implementing a form of simple caching, but I didn't notice any significant improvement.</p> <p>I also tried multiprocessing, which does help, but adds extra complexity and pushes the problem to the hardware level, which does not feel very optimal.</p> <p>Is there something I can do to improve the performance in any way?</p> <p>Note: I had to obfuscate parts of the code and data due to confidentiality reasons. Please don't hesitate to let me know if that breaks the example.</p> <p>code:</p> <pre class="lang-py prettyprint-override"><code>import os
import time

import lxml.etree
from lxml.etree import Element


# Edit2: added timing prints
def process_elements(files: list[str], indentifier: int) -&gt; lxml.etree._Element | None:
    base_el = Element('BASE')
    i = 0
    cache = {}  # Edit1. Missed this line
    start = time.time()
    time_spent_reading = 0
    lookup_time = [0, 0]
    append_new_el_time = [0, ]
    cache_search_time = [0, 0]
    recursive_calls_counter = [0, ]
    for file in files:
        i += 1
        print(f&quot;Process: {indentifier}, File {i} of {len(files)}: {file}&quot;)
        print(&quot;Reading file...&quot;)
        start_read = time.time()
        tree = lxml.etree.parse(f'data/{file}').getroot()
        print(f&quot;Reading file took {time.time() - start_read} seconds&quot;)
        print(&quot;Since start: &quot;, time.time() - start)
        packages = tree.find('BASE')
        print(&quot;Starting walk...&quot;)
        sart_walked = time.time()
        for package in packages:
            walk(package, base_el, cache, lookup_time,
                 append_new_el_time, cache_search_time, recursive_calls_counter)
        print(f&quot;Walk took {time.time() - sart_walked} seconds&quot;)
        print(&quot;Since start: &quot;, time.time() - start)
    if indentifier == -1:
        return base_el
    else:
        print(&quot;Timing results:&quot;)
        print(&quot;Time spent reading: &quot;, time_spent_reading)
        print(&quot;Time spent on lookup: &quot;, lookup_time[0])
        print(&quot;Time spent on append: &quot;, append_new_el_time[0])
        print(&quot;Time spent on cache search: &quot;, cache_search_time[0])
        base_el.getroottree().write(f'temp{indentifier}.xml', encoding='utf-8')
        return None


def walk(element: lxml.etree._Element, reference: lxml.etree._Element, cache: dict,
         lookup_time, append_new_el_time, cache_search_time, recursive_calls_counter) -&gt; None:
    recursive_calls_counter[0] += 1
    children = element.iterchildren()
    elid = f&quot;{element.tag}&quot;
    element_name = element.get('some-id-i-need')
    if element_name is not None:
        elid += f'[@some-id-i-need=&quot;{element_name}&quot;]'
    cache_id = str(id(reference)) + &quot;_&quot; + elid

    cache_search_time_start = time.time()
    relevant_data = cache.get(cache_id)
    cache_search_time[0] += time.time() - cache_search_time_start

    # if element is found either in cache or in the new merged object,
    # continue to its children.
    # otherwise, element does not exist in the merged object:
    # add it to the merged object and to the cache
    if relevant_data is None:
        # I believe this lookup may be what takes the most time,
        # hence my attempt to cache this
        lookup_time_start = time.time()
        relevant_data = reference.find(elid)
        lookup_time[0] += time.time() - lookup_time_start
        lookup_time[1] += 1
    else:
        # cache hit
        cache_search_time[1] += 1

    if relevant_data is None:
        append_new_el_time_start = time.time()
        reference.append(element)
        append_new_el_time[0] += time.time() - append_new_el_time_start
        return
    else:
        cache.setdefault(cache_id, relevant_data)

    # if element has no children, the loop will not run
    for child in children:
        walk(child, relevant_data, cache, lookup_time,
             append_new_el_time, cache_search_time, recursive_calls_counter)


# to run:
process_elements(os.listdir(&quot;data&quot;), -1)
</code></pre> <p>example data:</p> <p>file1</p> <pre class="lang-xml prettyprint-override"><code>&lt;BASE&gt;
  &lt;elem id=&quot;1&quot;&gt;
    &lt;data-tag id=&quot;1&quot;&gt;
      &lt;object id=&quot;23124&quot;&gt;
        &lt;POS Tag=&quot;V&quot; /&gt;
        &lt;grammar type=&quot;STEM&quot; /&gt;
        &lt;Aspect type=&quot;IMPV&quot; /&gt;
        &lt;Number type=&quot;S&quot; /&gt;
      &lt;/object&gt;
      &lt;object id=&quot;128161&quot;&gt;
        &lt;POS Tag=&quot;V&quot; /&gt;
        &lt;grammar type=&quot;STEM&quot; /&gt;
        &lt;Aspect type=&quot;IMPF&quot; /&gt;
      &lt;/object&gt;
    &lt;/data-tag&gt;
  &lt;/elem&gt;
&lt;/BASE&gt;
</code></pre> <p>file2</p> <pre class="lang-xml prettyprint-override"><code>&lt;BASE&gt;
  &lt;elem id=&quot;1&quot;&gt;
    &lt;data-tag id=&quot;1&quot;&gt;
      &lt;object id=&quot;23124&quot;&gt;
        &lt;concept type=&quot;t1&quot; /&gt;
      &lt;/object&gt;
      &lt;object id=&quot;128161&quot;&gt;
        &lt;concept type=&quot;t2&quot; /&gt;
      &lt;/object&gt;
    &lt;/data-tag&gt;
    &lt;data-tag id=&quot;2&quot;&gt;
      &lt;object id=&quot;128162&quot;&gt;
        &lt;POS Tag=&quot;P&quot; /&gt;
        &lt;grammar type=&quot;PREFIX&quot; /&gt;
        &lt;Tag Tag=&quot;bi+&quot; /&gt;
        &lt;concept type=&quot;t3&quot; /&gt;
      &lt;/object&gt;
    &lt;/data-tag&gt;
  &lt;/elem&gt;
&lt;/BASE&gt;
</code></pre> <p>result:</p> <pre class="lang-xml prettyprint-override"><code>&lt;BASE&gt;
  &lt;elem id=&quot;1&quot;&gt;
    &lt;data-tag id=&quot;1&quot;&gt;
      &lt;object id=&quot;23124&quot;&gt;
        &lt;POS Tag=&quot;V&quot; /&gt;
        &lt;grammar type=&quot;STEM&quot; /&gt;
        &lt;Aspect type=&quot;IMPV&quot; /&gt;
        &lt;Number type=&quot;S&quot; /&gt;
        &lt;concept type=&quot;t1&quot; /&gt;
      &lt;/object&gt;
      &lt;object id=&quot;128161&quot;&gt;
        &lt;POS Tag=&quot;V&quot; /&gt;
        &lt;grammar type=&quot;STEM&quot; /&gt;
        &lt;Aspect type=&quot;IMPF&quot; /&gt;
        &lt;concept type=&quot;t2&quot; /&gt;
      &lt;/object&gt;
    &lt;/data-tag&gt;
    &lt;data-tag id=&quot;2&quot;&gt;
      &lt;object id=&quot;128162&quot;&gt;
        &lt;POS Tag=&quot;P&quot; /&gt;
        &lt;grammar type=&quot;PREFIX&quot; /&gt;
        &lt;Tag Tag=&quot;bi+&quot; /&gt;
        &lt;concept type=&quot;t3&quot; /&gt;
      &lt;/object&gt;
    &lt;/data-tag&gt;
  &lt;/elem&gt;
&lt;/BASE&gt;
</code></pre> <p>Edit2: Timing results after processing 10 files (about 60MB, 1m 24.8s):</p> <pre><code>Starting process...
Process: 102, File 1 of 10:
Reading file...
Reading file took 0.1326887607574463 seconds
Since start: 0.1326887607574463
preprocesing...
merging...
Starting walk...
Walk took 0.8433401584625244 seconds
Since start: 1.0600318908691406
Process: 102, File 2 of 10:
Reading file...
Reading file took 0.04700827598571777 seconds
Since start: 1.1070401668548584
preprocesing...
merging...
Starting walk...
Walk took 1.733034610748291 seconds
Since start: 2.8680694103240967
Process: 102, File 3 of 10:
Reading file...
Reading file took 0.041702985763549805 seconds
Since start: 2.9097723960876465
preprocesing...
merging...
...
Time spent on lookup: 79.53011083602905
Time spent on append: 1.1502337455749512
Time spent on cache search: 0.11017322540283203
Cache size: 30176
# Edit3: extra data
Number of cache hits: 112503
Cache size: 30177
Number of recursive calls: 168063
</code></pre> <p>As an observation, I do expect significant overlap between the files; maybe the small cache search time indicates that something is wrong with how I implemented caching?</p> <p>Edit3: It does seem that I get a lot of hits, but the strange part is that if I comment out the cache search, it makes almost no difference in performance. In fact, it ran marginally faster without it (although I'm not sure whether a few seconds is a significant difference or just random chance in this case):</p> <pre class="lang-py prettyprint-override"><code>relevant_data = None  # cache.get(cache_id)
</code></pre> <p>log with cache commented out:</p> <pre><code>Time spent on lookup: 71.13456320762634
Number of lookups: 168063
Time spent on append: 3.9656710624694824
Time spent on cache search: 0.020023584365844727
Number of cache hits: 0
Cache size: 30177
Number of recursive calls: 168063
</code></pre>
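Since the timing log shows `reference.find(elid)` dominating, one structural fix (a sketch of mine, not the post's code) is to replace the per-element linear `find` with a dict index of each parent's children, making every lookup O(1). The sketch uses stdlib `ElementTree` so it runs anywhere; the same indexing idea transfers directly to lxml, and `'id'` stands in for the post's `'some-id-i-need'` attribute:

```python
import xml.etree.ElementTree as ET

FILE1 = """
<BASE><elem id="1"><data-tag id="1">
  <object id="23124"><POS Tag="V"/><grammar type="STEM"/></object>
</data-tag></elem></BASE>"""

FILE2 = """
<BASE><elem id="1"><data-tag id="1">
  <object id="23124"><concept type="t1"/></object>
</data-tag><data-tag id="2">
  <object id="128162"><POS Tag="P"/></object>
</data-tag></elem></BASE>"""

def key_of(el):
    # Identity key; 'id' stands in for the post's 'some-id-i-need'.
    return (el.tag, el.get('id'))

def merge(dst, src):
    # Index dst's children once: dict lookups replace the repeated
    # linear reference.find(...) scans that dominate the runtime.
    index = {key_of(c): c for c in dst}
    for child in list(src):
        match = index.get(key_of(child))
        if match is None:
            dst.append(child)
            index[key_of(child)] = child
        else:
            merge(match, child)

base = ET.fromstring(FILE1)
merge(base, ET.fromstring(FILE2))
```

Because the index lives per parent and is rebuilt per merge call, it also removes the global `id(reference)`-keyed cache, which explains why that cache barely helped: the cost was inside each miss's `find`, not in the cache probe.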
<python><lxml>
2022-12-14 21:28:10
1
1,267
Petru Tanas
74,804,336
3,423,825
How to return the user ID in HTTP response after a user log in with DRF token authentification?
<p>My application has a <code>/login</code> endpoint where users can enter their login information, and after a user has been authenticated I would like to display a DRF view based on its user ID as a parameter in the URL. What is the best way to do that? Do I need to include the user ID in the HTTP response, and if so, how do I do that?</p> <p>This is how the login view and serializer look:</p> <p><strong>view.py</strong></p> <pre><code>class LogInView(TokenObtainPairView):
    serializer_class = LogInSerializer
</code></pre> <p><strong>serializer.py</strong></p> <pre><code>class LogInSerializer(TokenObtainPairSerializer):
    @classmethod
    def get_token(cls, user):
        token = super().get_token(user)
        user_data = ManagerSerializer(user).data
        for key, value in user_data.items():
            if key != 'id':
                token[key] = value
        return token
</code></pre> <p>The view I would like to display after the user login looks like this:</p> <p><strong>view.py</strong></p> <pre><code>class AccountDetails(RetrieveAPIView):
    serializer_class = AccountSerializer
    queryset = Account.objects.all()
</code></pre> <p><strong>urls.py</strong></p> <pre><code>router = routers.DefaultRouter()

urlpatterns = [
    path('', include(router.urls)),
    path('account/&lt;pk&gt;', AccountDetails.as_view()),
]
</code></pre>
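With djangorestframework-simplejwt's `TokenObtainPairSerializer`, the authenticated user is available as `self.user` inside `validate()`, so the login response body can carry the id alongside the token pair (a sketch; it assumes the simplejwt serializer base class and is not run outside a Django project):

```python
from rest_framework_simplejwt.serializers import TokenObtainPairSerializer

class LogInSerializer(TokenObtainPairSerializer):
    def validate(self, attrs):
        # super().validate() authenticates the credentials, sets
        # self.user, and returns {'refresh': ..., 'access': ...}.
        data = super().validate(attrs)
        data['user_id'] = self.user.id
        return data
```

The client can then read `user_id` from the login response and request `/account/<user_id>`. Note the difference from `get_token`: claims added there end up *inside* the JWT payload, while keys added in `validate` appear in the plain HTTP response body.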
<python><django><django-rest-framework>
2022-12-14 21:25:58
1
1,948
Florent
74,804,185
12,609,881
Update ObservableGauge in Open Telemetry Python
<p>I am using opentelemetry-api 1.14 and opentelemetry-sdk 1.14. I know how to create and use Counter and ObservableGauge instruments. However, I need to update and set the gauge throughout my application in a similar manner to how a counter can use its add method. I have working code below but in this working code the gauge is static at 9.</p> <pre><code>import time &quot;&quot;&quot; API is the interface that you should interact with.&quot;&quot;&quot; from opentelemetry import metrics &quot;&quot;&quot; SDK is the implementation. Only access SDK during initialization, startup, and shutdown. &quot;&quot;&quot; from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader, ConsoleMetricExporter from opentelemetry.sdk.metrics import MeterProvider from opentelemetry.sdk.resources import Resource def initialize(): resource = Resource(attributes={&quot;service.name&quot;: &quot;otel-test&quot;}) readers = [] # Console Exporter exporter = ConsoleMetricExporter() reader1 = PeriodicExportingMetricReader(exporter, export_interval_millis=5000) readers.append(reader1) provider = MeterProvider(metric_readers=readers, resource=resource) metrics.set_meter_provider(provider) initialize() provider = metrics.get_meter_provider() meter = provider.get_meter(&quot;my-demo-meter&quot;) simple_counter = meter.create_counter(&quot;simple_counter&quot;, description=&quot;simply increments each loop&quot;) # Async Gauge def observable_gauge_func(options): yield metrics.Observation(9, {}) simple_gauge = meter.create_observable_gauge(&quot;simple_gauge&quot;, [observable_gauge_func]) # How can I update simple_gauge in main def main(): loop_counter = 0 while True: print(loop_counter) loop_counter += 1 simple_counter.add(1) # How can I update simple_gauge here? time.sleep(5) main() </code></pre>
<python><open-telemetry>
2022-12-14 21:06:45
1
911
Matthew Thomas
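A common pattern for the ObservableGauge question above is a shared mutable holder: the callback reads the holder's current value each time the SDK exports, so any part of the application can "set" the gauge by writing to the holder. A sketch without the OpenTelemetry SDK (the class and names are illustrative):

```python
class GaugeHolder:
    """Mutable value an ObservableGauge callback can read at export time."""
    def __init__(self, initial=0):
        self.value = initial

    def set(self, value):
        self.value = value

gauge_holder = GaugeHolder(9)

def observable_gauge_func(options=None):
    # In OpenTelemetry this would be:
    #   yield metrics.Observation(gauge_holder.value, {})
    # The SDK invokes the callback on every export, so it always sees the
    # latest value written anywhere in the application.
    yield gauge_holder.value

gauge_holder.set(42)  # e.g. inside main()'s while loop
```

The `while` loop in the question would then call `gauge_holder.set(...)` where the counter calls `add(1)`.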
74,804,068
3,826,115
How to add a equal-aspect inset axes in the corner of a parent axes
<p>I want to add an inset axis to the upper left corner of a parent axis. This can easily be done like so:</p> <pre><code>fig, ax = plt.subplots() iax = ax.inset_axes([0,.8, .2, .2]) </code></pre> <p><a href="https://i.sstatic.net/i6TJO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/i6TJO.png" alt="inset_axes" /></a></p> <p>However, I need the inset axis <code>iax</code> to have an aspect ratio of one. But when I change the aspect ratio, the inset axis shifts slightly to the right.</p> <pre><code>iax.set_aspect('equal') </code></pre> <p><a href="https://i.sstatic.net/RCjC8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RCjC8.png" alt="offset equal aspect inset axis" /></a></p> <p>How can I make the aspect ratio of the inset axis &quot;equal&quot;, while still having it nestled in the upper left corner? I know I could just mess around with the inset_axes location parameter, but ideally I want some method that works for any given axis size.</p>
<python><matplotlib>
2022-12-14 20:54:23
1
1,533
hm8
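One approach to the inset-axes question above is the `anchor` argument of `set_aspect`: when the equal aspect shrinks the axes box, the anchor decides which corner of the original slot the remaining box sticks to. A sketch:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
iax = ax.inset_axes([0, .8, .2, .2])
# When 'equal' aspect shrinks the box, anchor='NW' keeps the resulting
# axes pinned to the north-west (upper-left) corner of the original slot,
# for any parent axes size.
iax.set_aspect('equal', anchor='NW')
fig.canvas.draw()
```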
74,804,010
9,422,346
Elasticsearch not showing running log in python
<p>I am new to Elasticsearch. I have a running Elasticsearch instance in the cloud and am accessing it via Python. I want to see running logs which have the field &quot;type&quot;: &quot;filebeat&quot;. I have the following lines of code:</p> <pre><code>import elasticsearch from elasticsearch import Elasticsearch import elasticsearch.helpers # Creating the client instance es = Elasticsearch( cloud_id=CLOUD_ID, basic_auth=(&quot;elastic&quot;, ELASTIC_PASSWORD) ) # Successful response! print(es.info()) ES_INDEX = &lt;my index&gt; ES_TYPE=&quot;filebeat&quot; results_gen = elasticsearch.helpers.scan( es, query={&quot;query&quot;: {&quot;match_all&quot;: {}}}, index=ES_INDEX) results = list(results_gen) print(results) </code></pre> <p>The output shows the instance details and 4407 logs in the result (obviously all logs). My question is how to obtain running logs, and how to modify the query to show only logs with &quot;type&quot;: &quot;filebeat&quot;?</p>
<python><elasticsearch>
2022-12-14 20:46:47
1
407
mrin9san
74,803,953
34,935
How to tell whether a python unittest subTest has failed as it's running?
<p>Python docs <a href="https://docs.python.org/3/library/unittest.html#distinguishing-test-iterations-using-subtests" rel="nofollow noreferrer">say</a> give this example of using a subTest in unit testing:</p> <pre><code>import unittest class NumbersTest(unittest.TestCase): def test_even(self): &quot;&quot;&quot; Test that numbers between 0 and 5 are all even. &quot;&quot;&quot; for i in range(0, 6): with self.subTest(i=i): self.assertEqual(i % 2, 0) </code></pre> <p>This runs all 5 tests (i=0..5) before producing output for all 5 failures.</p> <p>How can I print about the failures along the way (say in the for loop after the with block)?</p> <p>I only need the answer for Python 3.</p> <p>In my test, I have thousands of subtests and it can take many minutes to finish. I want to know if it's already failed as I'm watching it.</p>
<python><python-unittest>
2022-12-14 20:39:51
1
21,683
dfrankow
74,803,912
10,576,322
Python package naming convention
<p>I want to create some python packages and deploy them in my company, not on PyPi.</p> <p>I stumbled over the problem that one package name already existed. So instead of mypackage from our repo I installed a PyPi package.</p> <p>The obvious solution is to change the name of the package. However, if I don't put the package on PyPi, there is a chance somebody will publish another package with that name.</p> <p>I currently see two options. Create a dummy package and put it on PyPi to reserve the name. Or use something like a namespace.</p> <p>I read about PEP 423, which proposes this idea, but it seems not to be agreed on.</p> <p>Is it a good idea to use this anyway? How would I do it? company.package or company_package? Neither conforms to PEP 8.</p> <p>Or is there another way?</p> <p>Edit:<br /> Playing around with different namespaces I observed the following:</p> <ol> <li>Having a dot in the name is really a bad idea, as it seems not to work.</li> <li>Underscore does work with one minor annoyance. To install one must run <code>pip install my-package</code> and for import it's <code>import my_package</code>.</li> </ol>
<python><package>
2022-12-14 20:35:40
1
426
FordPrefect
74,803,726
1,100,913
Change tkinter menu radiobutton indicator
<p>Consider the menu below:</p> <pre><code>menu = tk.Menu(menubar, tearoff=0) menu.add_radiobutton(label='A', variable=flavour_value, value='a') menu.add_radiobutton(label='B', variable=flavour_value, value='b') </code></pre> <p>When radiobutton is selected, it shows V (checked) indicator from the left of a label.<br /> Is there a possibility to replace the V indicator with a circular one, like in regular tk.Radiobutton?</p>
<python><tkinter><menu><radio-button><indicator>
2022-12-14 20:14:10
2
953
Andrey
74,803,696
17,316,080
Get tar file buffer without write to file with Python
<p>I know how to tar a file using Python</p> <pre><code>import os import tarfile with tarfile.open('res.tar.gz', 'w:xz') as tar: tar.add('Pic.jpeg') </code></pre> <p>But I want to do that without creating any tar.gz file, and only get the resulting buffer.</p> <p>How can I do that?</p>
<python><python-3.x><tar><tarfile>
2022-12-14 20:10:52
1
363
Kokomelom
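For the tar-buffer question above, `tarfile.open` accepts a `fileobj`, so the archive can be written to an in-memory `io.BytesIO` and never touch disk. A sketch (the helper name and payload are illustrative):

```python
import io
import tarfile

def tar_to_bytes(name, payload, mode="w:xz"):
    """Build a tar archive entirely in memory and return its raw bytes."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode=mode) as tar:
        info = tarfile.TarInfo(name=name)
        info.size = len(payload)
        tar.addfile(info, io.BytesIO(payload))  # add from a byte stream
    return buf.getvalue()

archive = tar_to_bytes("Pic.jpeg", b"fake image bytes")
```

For a file that already exists on disk, `tar.add('Pic.jpeg')` works the same way with `fileobj=buf`; only the destination changes from a path to the in-memory buffer.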
74,803,663
18,308,393
Replacing multiple columns with values in pandas
<p>I am replacing multiple columns' values in pandas with the <code>pd.DataFrame.replace</code> method; however, this will not update any values inside my loop, and I cannot understand why it won't.</p> <p>For example:</p> <pre><code>import pandas as pd df = pd.DataFrame({'A': [0, 1, 2, 2, 2], 'B': [5, 6, 7, 8, 9], 'C': ['a', 'b', 'c', 'd', 'e']}) operators = { 'A':{ 0 : 2 } , 'B': { 5 : 8 }, 'C': None } for keys, values in operators.items(): if values == None: continue else: for existing, new in values.items(): if keys == 'A' and new is not None: print(keys, existing, new) df.replace({keys: existing}, new) elif keys == 'B' and new is not None: df.replace({keys: existing}, new) else: df.replace({keys: existing}, new) </code></pre> <p>Will print the exact same values for the dataframe.</p>
<python><pandas>
2022-12-14 20:07:33
1
367
Dollar Tune-bill
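The behaviour in the question above is consistent with `DataFrame.replace` returning a new frame rather than mutating in place, so the result must be assigned back (or `inplace=True` used). A sketch of the fix, which also collapses the loop into a single nested-dict call:

```python
import pandas as pd

df = pd.DataFrame({'A': [0, 1, 2, 2, 2],
                   'B': [5, 6, 7, 8, 9],
                   'C': ['a', 'b', 'c', 'd', 'e']})

operators = {'A': {0: 2}, 'B': {5: 8}, 'C': None}

# replace() returns a copy; assign it back.  A nested dict of the form
# {column: {old_value: new_value}} performs every replacement in one call.
df = df.replace({col: mapping for col, mapping in operators.items()
                 if mapping is not None})
```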
74,803,526
827,927
Why don't I get faster run-times with ThreadPoolExecutor?
<p>In order to understand how threads work in Python, I wrote the following simple function:</p> <pre><code>def sum_list(thelist:list, start:int, end:int): s = 0 for i in range(start,end): s += thelist[i]**3//10 return s </code></pre> <p>Then I created a list and tested how much time it takes to compute its sum:</p> <pre><code>LISTSIZE = 5000000 big_list = list(range(LISTSIZE)) start = time.perf_counter() big_sum=sum_list(big_list, 0, LISTSIZE) print(f&quot;One thread: sum={big_sum}, time={time.perf_counter()-start} sec&quot;) </code></pre> <p>It took about 2 seconds.</p> <p>Then I tried to partition the computation into threads, such that each thread computes the function on a subset of the list:</p> <pre><code>THREADCOUNT=4 SUBLISTSIZE = LISTSIZE//THREADCOUNT start = time.perf_counter() with concurrent.futures.ThreadPoolExecutor(THREADCOUNT) as executor: futures = [executor.submit(sum_list, big_list, i*SUBLISTSIZE, (i+1)*SUBLISTSIZE) for i in range(THREADCOUNT)] big_sum = 0 for res in concurrent.futures.as_completed(futures): # return each result as soon as it is completed: big_sum += res.result() print(f&quot;{THREADCOUNT} threads: sum={big_sum}, time={time.perf_counter()-start} sec&quot;) </code></pre> <p>Since I have a 4-cores CPU, I expected it to run 4 times faster. But it did not: it ran in about 1.8 seconds on my Ubuntu machine (on my Windows machine, with 8 cores, it ran even slower than the single-thread version: about 2.2 seconds).</p> <p>Is there a way to use <code>ThreadPoolExecutor</code> (or another threads-based mechanism in Python) so that I can compute this function faster?</p>
<python><multithreading><python-multithreading>
2022-12-14 19:52:31
2
37,410
Erel Segal-Halevi
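The missing speed-up in the question above is expected: for CPU-bound pure-Python code, CPython's GIL lets only one thread execute bytecode at a time, so `ThreadPoolExecutor` helps with I/O-bound work but not arithmetic. A process pool sidesteps the GIL; a sketch (the pool call should live under an `if __name__ == '__main__':` guard on platforms that spawn rather than fork workers):

```python
import concurrent.futures

def sum_list(thelist, start, end):
    s = 0
    for i in range(start, end):
        s += thelist[i] ** 3 // 10
    return s

def parallel_sum(thelist, workers=4):
    chunk = len(thelist) // workers
    # Each worker is a separate interpreter process with its own GIL; the
    # list is pickled into every worker, which adds per-call overhead, so
    # the gain only shows for large enough inputs.
    with concurrent.futures.ProcessPoolExecutor(workers) as ex:
        futures = [ex.submit(sum_list, thelist, i * chunk, (i + 1) * chunk)
                   for i in range(workers)]
        return sum(f.result() for f in concurrent.futures.as_completed(futures))
```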
74,803,430
5,500,634
Check graph equality using networkx isomorphic
<p>I have two graphs as follows</p> <pre><code>import networkx as nx G1, G2 = nx.DiGraph(), nx.DiGraph() G1.add_edges_from([(&quot;s1&quot;, &quot;s2&quot;), (&quot;s2&quot;, &quot;s3&quot;), (&quot;s3&quot;, &quot;s4&quot;)]) # G1: 1-&gt;2-&gt;3-&gt;4 G2.add_edges_from([(&quot;s1&quot;, &quot;s2&quot;), (&quot;s2&quot;, &quot;s3&quot;), (&quot;s3&quot;, &quot;s7&quot;)]) # G2: 1-&gt;2-&gt;3-&gt;7 nx.is_isomorphic(G1, G2) </code></pre> <p>By definition, we know the above two graphs are isomorphic, so <code>is_isomorphic</code> returns <strong>True</strong>.</p> <p>However, I wish to check structure <strong>equality</strong> between two graphs, meaning nodes and edges are the same (but weights allow difference). Since <code>G1</code> and <code>G2</code> have different last node, I am looking for the <code>is_isomorphic</code> function returning <strong>False</strong>.</p> <p>Q: Is it possible using <code>is_isomorphic</code> to identify the non-equality?</p> <p>P.S. I tried to use <code>iso.categorical_node_match</code> or <code>iso.numerical_node_match</code> or <code>iso.numerical_edge_match</code> as plug-in parameter in <code>is_isomorphic</code>:</p> <ol> <li><a href="https://networkx.org/documentation/stable/reference/algorithms/generated/networkx.algorithms.isomorphism.numerical_edge_match.html" rel="nofollow noreferrer">network: numerical_edge_match</a></li> <li><a href="https://networkx.org/documentation/stable/reference/algorithms/generated/networkx.algorithms.isomorphism.is_isomorphic.html" rel="nofollow noreferrer">networkx: is_isomorphic</a></li> </ol> <p>But I am still not sure how to call these <code>iso</code> function correctly in node_match or edge_match.</p>
<python><graph><networkx><isomorphic>
2022-12-14 19:38:22
1
489
TripleH
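For the question above, `is_isomorphic` cannot help by design: isomorphism deliberately ignores node labels, and `node_match` compares attribute dicts, not names. Strict structural equality is just a set comparison over node labels and edges. A plain-Python sketch (with networkx graphs this would be called on `G.nodes` and `G.edges`):

```python
def graphs_equal(nodes1, edges1, nodes2, edges2):
    # Strict equality: same node labels and same directed edges,
    # unlike isomorphism, which ignores the labels themselves.
    return set(nodes1) == set(nodes2) and set(edges1) == set(edges2)

# Hypothetical usage with networkx:
#   graphs_equal(G1.nodes, G1.edges, G2.nodes, G2.edges)

e1 = [("s1", "s2"), ("s2", "s3"), ("s3", "s4")]  # G1: 1->2->3->4
e2 = [("s1", "s2"), ("s2", "s3"), ("s3", "s7")]  # G2: 1->2->3->7
n1 = {n for e in e1 for n in e}
n2 = {n for e in e2 for n in e}
```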
74,803,382
1,811,073
Mock patch the return value of an inner function that's not returned from the outer function
<p>I'd like to capture the return value of an inner function without having to explicitly return that return value from the outer function.</p> <p>So I'd like to do something like this:</p> <pre><code># bar.py import foo def my_fn(): foo.fn() </code></pre> <pre><code># test.py from mock import patch import foo import bar @patch(&quot;foo.fn&quot;) def test_bar(mock_foo_fn): bar.my_fn() # assert isinstance(mock_foo_fn.return_value, dict) </code></pre> <p>without having to do this:</p> <pre><code># bar.py import foo def my_fn(): return foo.fn() </code></pre>
<python><mocking>
2022-12-14 19:32:18
2
876
aweeeezy
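One way to answer the question above without changing `bar.my_fn` is a spy: patch `foo.fn` with a wrapper that records the real return value before passing it on. A sketch using `os.getcwd` as a stand-in for `foo.fn`:

```python
import os
from unittest import mock

captured = []

def spy(real_fn):
    """Wrap real_fn so every return value is recorded before being returned."""
    def recorder(*args, **kwargs):
        result = real_fn(*args, **kwargs)
        captured.append(result)  # the inner return value, even if discarded
        return result
    return recorder

def my_fn():
    os.getcwd()  # stands in for foo.fn(); the return value is discarded

with mock.patch("os.getcwd", side_effect=spy(os.getcwd)):
    my_fn()
```

In the test from the question this would be `mock.patch("foo.fn", side_effect=spy(foo.fn))`, after which `captured[0]` holds what `foo.fn` actually returned.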
74,803,368
6,396,569
How can I run ripgrep using subprocess.Popen in Python3 with arguments?
<p>I use Python 3.10.7 and I am trying to get the Python interpreter to run this command:</p> <p><code>rg mysearchterm /home/user/stuff</code></p> <p>This command, when I run it in <code>bash</code> directly successfully runs <code>ripgrep</code> and searches the directory (recursively) <code>/home/user/stuff</code> for the term <code>mysearchterm</code>. However, I'm trying to do this programmatically with Python's <code>subprocess.Popen()</code> and I am running into issues:</p> <pre><code>from subprocess import Popen, PIPE proc1 = Popen([&quot;rg&quot;, &quot;term&quot;, &quot;/home/user/stuff&quot;, &quot;--no-filename&quot;],stdout=PIPE,shell=True) proc2 = Popen([&quot;wc&quot;,&quot;-l&quot;],stdin=proc1.stdin,stdout=PIPE,shell=True) #Note: I've also tried it like below: proc1 = Popen(f&quot;rg term /home/user/stuff --no-filename&quot;,stdout=PIPE,shell=True) proc2 = Popen(&quot;wc -l&quot;,stdin=proc1.stdin,stdout=PIPE,shell=True) result, _ = proc2.communicate() print(result.decode()) </code></pre> <p>What happens here was bizarre to me; I get an error (from <code>rg</code> itself) which says:</p> <blockquote> <p>error: The following required arguments were not provided: <code>&lt;PATTERN&gt;</code></p> </blockquote> <p>So, using my debugging/tracing skills, I looked at the process chain and I see that the python interpreter itself is performing:</p> <pre><code>python3 1921496 953810 0 /usr/bin/python3 ./debug_script.py sh 1921497 1921496 0 /bin/sh -c rg term /home/user/stuff --no-filename sh 1921498 1921496 0 /bin/sh -c wc -l </code></pre> <p>So my next thought is just trying to run that manually in bash, leading to the same error. However, in bash, when I run <code>/bin/sh -c &quot;rg term /home/user/stuff --no-filename&quot;</code> <strong>with double quotations</strong>, the command works in <code>bash</code> but when I try to do this programmatically in <code>Popen()</code> it again doesn't work even when I try to escape them with <code>\</code>. 
This time, I get errors about unexpected EOF.</p>
<python><linux><bash><popen>
2022-12-14 19:31:14
1
2,567
the_endian
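The behaviour in the question above follows from mixing a list of arguments with `shell=True`: the shell receives only the first list element as its `-c` script, so `rg` is run with no pattern at all. Pass one string with `shell=True`, or a list without it. The pipeline also needs to read `proc1.stdout`, not `proc1.stdin`. A runnable sketch using `seq` as a stand-in for the `rg` invocation:

```python
import subprocess

# List form, no shell: every element reaches the program as one argument.
# (With shell=True plus a list, only the first element becomes the command.)
proc1 = subprocess.Popen(["seq", "3"], stdout=subprocess.PIPE)
# The second process must read the first process's *stdout*, not its stdin.
proc2 = subprocess.Popen(["wc", "-l"], stdin=proc1.stdout,
                         stdout=subprocess.PIPE)
proc1.stdout.close()  # let proc1 receive SIGPIPE if proc2 exits early
out, _ = proc2.communicate()
line_count = int(out)
```

For the original command this would be `Popen(["rg", "term", "/home/user/stuff", "--no-filename"], stdout=PIPE)` piped into `Popen(["wc", "-l"], stdin=proc1.stdout, stdout=PIPE)`.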
74,803,285
12,667,229
Are these two dictionary statements the same when looping in a for loop?
<p>I read that my_dict.keys() returns a dynamic view to iterate over the dictionary. I usually iterate the dictionary without the keys() function.</p> <p>So my question is: are the two code blocks below the same? If not, what performance differences do they have (which one is more optimized)?</p> <pre><code># without keys() function my_dict = {'key1' : 'value1',&quot;key2&quot; : &quot;value2&quot;} for key in my_dict: print(&quot;current key is&quot;,key) </code></pre> <pre><code># with keys() function my_dict = {'key1' : 'value1',&quot;key2&quot; : &quot;value2&quot;} for key in my_dict.keys(): print(&quot;current key is&quot;,key) </code></pre> <p>Note: I usually use Python version 3.7+, so if there's any version-wise implementation difference, kindly attach resources that I can refer to.</p>
<python><python-3.x><optimization>
2022-12-14 19:23:04
2
330
Sahil Lohiya
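For the question above: in Python 3 both loops visit exactly the same keys in the same order, because iterating a dict iterates its keys directly; `.keys()` only adds one extra method call up front to build the view object, so the per-iteration cost is identical. A quick check:

```python
import timeit

my_dict = {'key1': 'value1', 'key2': 'value2'}

# Both forms iterate over exactly the same keys in the same order.
same = list(my_dict) == list(my_dict.keys())

# Any difference is a one-time constant, not per-iteration.
t_plain = timeit.timeit(lambda: [k for k in my_dict], number=10_000)
t_keys = timeit.timeit(lambda: [k for k in my_dict.keys()], number=10_000)
```

`.keys()` is still needed when a set-like view is wanted (e.g. `d1.keys() & d2.keys()`); for a bare loop, `for key in my_dict:` is the idiomatic form.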
74,803,278
5,025,216
Python requests and MSAL are not working with an Azure-protected URL
<p>I have a URL where I need to do web crawling, and it is protected by Azure SAML. I have got the access token, but the requests.get method still returns the redirected SAML login page content. I use the Python msal library for Azure SAML authentication.</p> <pre><code>import requests http_proxy = &quot;http://my proxy setting here&quot; https_proxy = &quot;http://my proxy setting here&quot; proxyDict = { &quot;http&quot;: http_proxy, &quot;https&quot;: https_proxy, } app = ConfidentialClientApplication( &quot;my azure client id here&quot;, &quot;my azure secret id here&quot;, authority=&quot;https://login.microsoftonline.com/my tenant id here&quot;, proxies=proxyDict) user = 'myuser@hotmail.com' pwd = 'mypassword' scope=['User.Read'] access_token = app.acquire_token_by_username_password(username=user, password=pwd, scopes=scope) url = 'https://my web site where i need to do crawling' api_head = {'Authorization': 'Bearer ' + access_token['access_token']} response = requests.get(url, proxies=proxyDict, headers=api_head) </code></pre> <p>But in return I get only the login page URL content, not the page I requested. Also, if I try to use some Microsoft Graph endpoints, those work fine and return the data. Not sure what is wrong in this flow; any help is appreciated.</p> <p>Thanks in advance.</p>
<python><python-3.x><python-requests><azure-active-directory><azure-ad-msal>
2022-12-14 19:22:22
1
310
om tripathi
74,803,239
5,942,100
Tricky Reverse Aggregate values to unique rows per category in Pandas
<p>I have a dataset where I would like to de aggregate the values into their own unique rows as well as perform a pivot and grouping by category.</p> <h2>Data</h2> <pre><code>Date start end area BB_stat AA_stat BB_test AA_test final 10/1/2022 11/1/2022 12/1/2022 NY 10 80 0 1 1/1/2022 11/1/2022 12/1/2022 01/1/2023 NY 5 90 1 0 1/1/2022 10/1/2022 11/1/2022 12/1/2022 CA 6 100 3 1 1/1/2022 11/1/2022 12/1/2022 01/1/2023 CA 7 0 2 8 1/1/2022 </code></pre> <h2>Desired</h2> <p><strong>#create a new column by string transformation</strong></p> <pre><code>Date start end type area stat test final 10/1/2022 11/1/2022 12/1/2022 BB NY 10 0 1/1/2022 11/1/2022 12/1/2022 01/1/2023 BB NY 5 1 1/1/2022 10/1/2022 11/1/2022 12/1/2022 AA NY 80 1 1/1/2022 11/1/2022 12/1/2022 01/1/2023 AA NY 90 0 1/1/2022 10/1/2022 11/1/2022 12/1/2022 BB CA 6 3 1/1/2022 11/1/2022 12/1/2022 01/1/2023 BB CA 7 2 1/1/2022 10/1/2022 11/1/2022 12/1/2022 AA CA 100 1 1/1/2022 11/1/2022 12/1/2022 01/1/2023 AA CA 0 8 1/1/2022 </code></pre> <h2>Doing</h2> <p><strong>#some help from previous SO post/member</strong></p> <pre><code>df = df.set_index([&quot;Date&quot;, &quot;start&quot;, &quot;end&quot;]) new_df = pd.concat([pd.Series(c, index=df.index.repeat(df[c])) for c in df]).reset_index(name=&quot;type&quot;) # then sort values new_df = new_df.sort_values([&quot;Date&quot;, &quot;start&quot;, &quot;end&quot;], ignore_index=True) </code></pre> <p>Any suggestion is appreciated</p>
<python><pandas><numpy>
2022-12-14 19:18:38
1
4,428
Lynn
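One way to approach the reshape in the question above is `pd.wide_to_long`, which wants the varying part of each column name as a suffix; flipping `"BB_stat"` to `"stat_BB"` first makes the stubnames line up. A sketch on a simplified frame (only `Date` and `area` as identifiers; the real data would include `start`, `end`, and `final` in `i` as well):

```python
import pandas as pd

df = pd.DataFrame({
    'Date': ['10/1/2022', '11/1/2022'],
    'area': ['NY', 'NY'],
    'BB_stat': [10, 5], 'AA_stat': [80, 90],
    'BB_test': [0, 1],  'AA_test': [1, 0],
})

# wide_to_long expects stubname-first columns ("stat_BB"), so flip each
# "<type>_<measure>" name to "<measure>_<type>" before reshaping.
df = df.rename(columns=lambda c: '_'.join(reversed(c.split('_'))))

long_df = (pd.wide_to_long(df, stubnames=['stat', 'test'],
                           i=['Date', 'area'], j='type',
                           sep='_', suffix=r'\w+')
             .reset_index())
```

The result has one row per (identifier, type) pair with plain `stat` and `test` columns, which matches the desired de-aggregated layout.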
74,803,109
7,134,235
What is the best way to validate a large json file before reading it into a pyspark dataframe?
<p>I have to read a large (roughly 1500 lines) json file into a pyspark data frame, and take into account incomplete json objects and unexpected line endings. The file has one json object per line, and looks like this:</p> <pre><code>{&quot;place&quot;:{&quot;place_name&quot;:&quot;Chicago&quot;,&quot;place_id&quot;: 23654},&quot;category&quot;:&quot;city&quot;,&quot;population&quot;:8000000}...} {&quot;place&quot;:{&quot;place_name&quot;:&quot;New York&quot;,&quot;place_id&quot;: 23754},&quot;category&quot;:&quot;city&quot;,&quot;population&quot;:10000000}...} </code></pre> <p>I found an example of how to fix broken json objects, but that would require me to iterate over the file line by line (accepted answer here: <a href="https://stackoverflow.com/questions/53964597/complete-a-json-string-from-incomplete-http-json-response">Complete a json string from incomplete HTTP JSON response</a>). Is it a good strategy to do this and to rewrite any broken lines in the file before reading it into the pyspark dataframe?</p> <p>Also, would a simple regex replace operation be enough to remove the unexpected line endings in the middle of the objects?</p> <p>Any help would be much appreciated.</p>
<python><json><pyspark>
2022-12-14 19:02:10
0
906
Boris
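For roughly 1500 lines, a line-by-line pre-validation pass as discussed above is cheap: each line holds one JSON object, so `json.loads` per line cleanly separates parseable records from broken ones before Spark ever sees the file. A sketch with an intentionally truncated second line:

```python
import json

raw_lines = [
    '{"place": {"place_name": "Chicago", "place_id": 23654}, "population": 8000000}',
    '{"place": {"place_name": "New York", "place_id": 23754}, "population": 10000000',  # truncated
]

valid, broken = [], []
for lineno, line in enumerate(raw_lines, start=1):
    try:
        valid.append(json.loads(line))
    except json.JSONDecodeError:
        broken.append(lineno)  # repair or quarantine before loading into Spark
```

A blanket regex replace for stray line endings is risky, since a newline can legally appear inside a quoted string value; parsing each line and repairing only the lines that fail is the safer order of operations. Spark's own corrupt-record handling (`columnNameOfCorruptRecord`) is an alternative if dropping bad rows is acceptable.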
74,803,071
19,675,781
How to assign colors to values in a seaborn heatmap
<p>I have a data frame with 15 rows and 15 columns and the values are whole numbers in the range(0-5).</p> <p>I use this data frame to create multiple heatmaps by filtering the data.</p> <p>In the heatmap, I want to assign particular colors to every value so that even if the number of unique values in the data frame varies for every heatmap, the colors remain consistent for every assigned value.</p> <p>Using the fixed color code, I want to use a single legend key for all the heat maps.</p> <p>How can I assign colors with a dictionary?</p> <pre><code>cmap_dict = {0:'#FFFFFF',1:'#ff2a00', 2:'#ff5500', 3:'#ff8000', 4:'#ffaa00', 5:'#ffd500'} heat_map = sns.heatmap(df,square=False,yticklabels=True,xticklabels=False,cmap=cmap1,vmin=0,vmax=5,cbar=False, annot=False) </code></pre>
<python><matplotlib><seaborn><legend><heatmap>
2022-12-14 18:59:13
1
357
Yash
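For the heatmap question above, a dictionary keyed by value can be turned into a fixed `ListedColormap` plus a `BoundaryNorm`, so value *i* always gets color *i* no matter which values appear in a given filtered frame. A sketch (the seaborn call is shown as a comment):

```python
from matplotlib.colors import ListedColormap, BoundaryNorm

cmap_dict = {0: '#FFFFFF', 1: '#ff2a00', 2: '#ff5500',
             3: '#ff8000', 4: '#ffaa00', 5: '#ffd500'}

# One fixed color per integer value, in value order, so every heatmap
# drawn from any subset of the data uses the same mapping.
cmap = ListedColormap([cmap_dict[k] for k in sorted(cmap_dict)])
norm = BoundaryNorm([v - 0.5 for v in range(7)], cmap.N)  # bins centered on 0..5

# hypothetical usage:
#   sns.heatmap(df, cmap=cmap, norm=norm, cbar=False, annot=False)
```

Because the mapping is fixed, one shared legend (e.g. built from `matplotlib.patches.Patch` per value) is valid for all of the heatmaps.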
74,802,764
7,617,510
plotly graph objects persistent data labels when clicking the graph
<p>I'm generating a graph:</p> <pre><code>import plotly.graph_objects as go </code></pre> <p><a href="https://i.sstatic.net/clHFX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/clHFX.png" alt="image1" /></a></p> <p>When I click on a data point I get the x,y data as shown, but as soon as I move the mouse pointer to a different data point then the other disappears.</p> <p><a href="https://i.sstatic.net/giWXP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/giWXP.png" alt="img2" /></a></p> <p>I want to have both points appear on the graph in the same image at the same time so that I can compare them. Is there a way of doing this?</p>
<python><plotly><plot-annotations><grouped-bar-chart>
2022-12-14 18:29:11
0
1,289
magicsword
74,802,761
1,635,305
Get page content with "login with google" authentication using python
<p>I am wondering if it's generally possible. I have to get content of some page with specific URL utilizing python3.x. When I put URL into the browser, I have only one option: &quot;Continue with google&quot;. I give my google account credentials and then desired page appears. I need his content. Is it possible to do it with python?</p>
<python><python-3.x><http><google-signin>
2022-12-14 18:28:53
2
1,595
Yuri Levinsky
74,802,735
11,614,319
easygui buttonbox crashes becase of default font from Tkinter
<p>I'm using easygui in a Python app and then generate an .exe with PyInstaller.</p> <p>Everything works fine in my computer, but my colleague get this weird error when they try to run the app :</p> <pre><code>Traceback (most recent call last): File &quot;easygui\boxes\button_box.py&quot;, line 95, in buttonbox File &quot;easygui\boxes\button_box.py&quot;, line 147, in __init__ File &quot;easygui\boxes\button_box.py&quot;, line 268, in __init__ File &quot;tkinter\font.py&quot;, line 23, in nametofont File &quot;tkinter\font.py&quot;, line 86, in __init__ RuntimeError: main thread is not in main loop </code></pre> <p>The line where easygui is called is simply</p> <pre class="lang-py prettyprint-override"><code>choice = easygui.buttonbox( &quot;msg&quot;, &quot;title&quot;, choices=[&quot;choice1&quot;, &quot;choice2&quot;], default_choice=&quot;choice1&quot;, cancel_choice=&quot;choice2&quot; ) </code></pre> <p>so the problem seems to be with the font but I'm not using anything particular in easygui ? I've searched for issues on the easygui's Git but couldn't find anything</p> <p>Also, there was another <code>easygui.buttonbox</code> earlier in the process but this one did show up properly so I'm just really confused.</p> <p>Thanks!</p>
<python><tkinter><pyinstaller><easygui>
2022-12-14 18:25:32
1
362
gee3107
74,802,727
3,849,039
Converting PyTorch to CoreML gives a TypeError: 'dict' object is not callable
<p>I've been following Apple's <a href="https://coremltools.readme.io/docs/pytorch-conversion-examples" rel="nofollow noreferrer">coremltools docs</a> for converting PyTorch segmentation models to CoreML.</p> <p>While it works fine when we're loading a remote PyTorch model, I'm yet to figure out a working Python script to perform conversions with local/already-downloaded PyTorch models.</p> <p>The following piece of code throws a <code>TypeError: 'dict' object is not callable</code></p> <pre><code>#This works fine: model = torch.hub.load('pytorch/vision:v0.6.0', 'deeplabv3_resnet101',pretrained=True).eval() model = torch.load('local_model_file.pth') input_tensor = preprocess(input_image) input_batch = input_tensor.unsqueeze(0) with torch.no_grad(): output = model(input_batch)['out'][0] #error here torch_predictions = output.argmax(0) </code></pre> <p>There is a <a href="https://stackoverflow.com/a/57342167/3849039">SO answer</a> that offers a solution by initialising the model class and loading the state_dict, but I wonder what's the concrete solution when we don't have access to the PyTorch model?</p>
<python><machine-learning><pytorch><coreml><coremltools>
2022-12-14 18:24:42
1
1,919
AnupamChugh
74,802,720
15,186,292
Change color of the node in Pyvis Network when it gets clicked on
<p>I'm creating a network visualization in python using Pyvis. I can create the graph as I want with no problem, but when I run my script and get the html file as output, I want the nodes to change colors when I click on them and I can't achieve that. I've tried a lot of things and nothing seems to work. The data I'm using has the following structure but with a lot more nodes than this:</p> <pre><code>import pandas as pd from pyvis.network import Network data = [['A','B', 1], ['A','C', 4], ['B','A', 1], ['D','E', 5], ['D','F', 1], ['G','J', 7], ['A','J', 4], ['G','F', 14], ['C','L', 4], ['A','F', 2], ['E','H', 12], ['I','E', 2], ['I','K', 4], ['L','H', 21]] df = pd.DataFrame(data, columns=['source', 'target', 'feature_count']) # Step 1: Initialize Network graph = Network(height='750px', width='100%', directed=True) # Step 2: Get sources, targets and weights sources = df['source'] targets = df['target'] weights = df['feature_count'] edge_data = zip(sources, targets, weights) # Step 3: Create the graph adding each node individually. 
for edge in edge_data: source_name = edge[0] target_name = edge[1] weight = edge[2] graph.add_node(n_id=source_name, label=source_name, labelHighlightBold=True) graph.add_node(n_id=target_name, label=target_name, labelHighlightBold=True) graph.add_edge(source_name, target_name, value=weight, title=weight, arrowStrikethrough=False) # Step 4: Use the Json Dictionary that sets the options to specify that I want the color of the node to change to red when clicked on options = { &quot;configure&quot;: { &quot;enabled&quot;: True }, &quot;interaction&quot;: { &quot;hover&quot;: True }, &quot;nodes&quot;: { &quot;borderWidth&quot;: 2, &quot;borderWidthSelected&quot;: 4, &quot;chosen&quot; : True, &quot;color&quot;: { &quot;highlight&quot;: { &quot;border&quot;: &quot;#FF4040&quot;, &quot;background&quot;: &quot;#EE3B3B&quot; }, &quot;hover&quot;: { &quot;border&quot;: &quot;#DEB887&quot;, &quot;background&quot;: &quot;#FFD39B&quot; } } } } # Step 5: Define the options graph.options = options # Step 6: Visualize the Network graph.show('graph.html') </code></pre> <p>For some reason this doesn't work. I've been searching for other possible solutions and I can't find them. It seems that the only thing left would be to create a function In Java that makes this and pass it to the html code. The thing is I don't know Java and I'm afraid to spend a lot of time trying that without getting to a solution or having another solution more quickly.</p> <p>Any ideas ?</p> <p>Thanks!</p>
<python><java><html><pyvis>
2022-12-14 18:23:57
0
301
TomasC8
74,802,692
275,002
MySQL deadlock error while updating within a loop
<p>I have the following piece of code updating records in a loop:</p> <pre><code> if connection is not None: with connection.cursor() as cursor: for instrument in instruments: instrument_name = instrument[0] price = instrument[2] price = instrument_name.split('-')[2] type = instrument_name.split('-')[3] if option_type == 'P' and float(strike) &gt; price: label = 'ABC' elif option_type == 'P' and float(strike) &lt; price: label = 'NRU' elif option_type == 'C' and float(strike) &gt; price: label = 'NRU' elif option_type == 'C' and float(strike) &lt; price: label = 'ABC' print('Assigning for the Instrument = ', instrument_name) sql = 'UPDATE {} set price =%s, label=%s WHERE instrument_name=%s'.format(table_name) cursor.execute(sql, (price, label, instrument_name,)) connection.commit() </code></pre> <p>Which is giving error <strong>Deadlock found when trying to get lock; try restarting transaction</strong> at the line <code>cursor.execute(sql, (price, label, instrument_name,))</code></p> <p>It does not give error when running a single instance of script, it gives error when I am running multiple instances like <code>python file.py 1</code>, <code>python file.py 2</code> etc... which I guess is locking due to once of the scripts already has a hold of the certain table.</p> <p>my question is, how do I tackle this because I have to run multiple instances anyway, should I separate tables or some other hack?</p>
<python><mysql><database-deadlocks>
2022-12-14 18:20:50
0
15,089
Volatil3
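Deadlocks like the one above are transient by design: MySQL kills one of the competing transactions and expects the application to retry it. A retry wrapper with exponential backoff is the usual remedy when several script instances update the same table; keeping each transaction short and updating rows in a consistent order (e.g. `ORDER BY instrument_name`) also reduces the collisions. A sketch (with MySQL the `except` clause would catch `OperationalError` and retry only on error code 1213):

```python
import time

def run_with_deadlock_retry(operation, retries=3, base_delay=0.05):
    """Run `operation`, retrying with exponential backoff on transient errors."""
    for attempt in range(retries):
        try:
            return operation()
        except Exception:
            if attempt == retries - 1:
                raise  # out of retries: surface the error
            time.sleep(base_delay * 2 ** attempt)  # back off, then retry
```

In the loop from the question, the `cursor.execute` plus `connection.commit` pair would be wrapped in a small function and passed to `run_with_deadlock_retry`.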
74,802,535
3,450,163
Implementing a complex custom function row by row using rolling and apply
<p>I have a dataframe as below:</p> <pre><code>my_dict = { 'id': [1, 2, 3, 4, 5], 'salary': [100, 200, 300, 400, 500], 'location_vector' : [[1, 2, 3], [2, 3, 4], [2, 3, 5], [4, 5, 4], [7, 5, 4]] } df = pd.DataFrame(my_dict) id salary location_vector 0 1 100 [1, 2, 3] 1 2 200 [2, 3, 4] 2 3 300 [2, 3, 5] 3 4 400 [4, 5, 4] 4 5 500 [7, 5, 4] </code></pre> <p>What I would like to accomplish is to apply a complex custom function to the <code>location_vector</code> column row by row.</p> <p>I thought I would use <code>pandas.rolling</code> along with <code>apply</code> to implement as below:</p> <p>my custom function</p> <pre><code>def my_func(arr1, arr2): # do stuff ... return new array </code></pre> <p>I wanted to create a new column and place the results there.</p> <pre><code>df['result'] = df['location_vector'].rolling(2).apply(lambda x, y: my_func(x, y)) </code></pre> <p>UPDATE: Here is my wanted result:</p> <pre><code> id salary location_vector result 0 1 100 [1, 2, 3] NaN 1 2 200 [2, 3, 4] [1, 2, 3] 2 3 300 [2, 3, 5] [2, 2, 5] 3 4 400 [4, 5, 4] [0, 2, 7] 4 5 500 [7, 5, 4] [1, 2, 5] </code></pre> <p>Please note that <code>df['result']</code> is the returned array from the <code>my_func</code>. This could be any array. I've just put some arbitrary numbers there</p> <p>I get DataError as below:</p> <p><code>DataError: No numeric types to aggregate.</code></p> <p>How do I implement a custom function to the column row by row? Any help would be appreciated.</p>
<python><pandas>
2022-12-14 18:04:35
1
3,097
GoGo
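The `DataError` in the question above arises because `rolling` only aggregates numeric data to scalars, so it cannot window over a column of lists. One alternative is to pair each row with its predecessor explicitly via `shift`. A sketch, with an elementwise minimum standing in for the (unspecified) custom function:

```python
import pandas as pd

df = pd.DataFrame({
    'id': [1, 2, 3, 4, 5],
    'salary': [100, 200, 300, 400, 500],
    'location_vector': [[1, 2, 3], [2, 3, 4], [2, 3, 5], [4, 5, 4], [7, 5, 4]],
})

def my_func(prev, cur):
    # hypothetical pairwise operation; elementwise minimum as a stand-in
    return [min(a, b) for a, b in zip(prev, cur)]

# rolling() cannot window over object (list) columns, so pair each row
# with its predecessor explicitly.
shifted = df['location_vector'].shift()
df['result'] = [my_func(p, c) if isinstance(p, list) else None
                for p, c in zip(shifted, df['location_vector'])]
```

The first row gets `None` (no predecessor), mirroring the `NaN` in the desired output.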
74,802,448
12,621,824
Client that communicates with multiple servers concurrently
<p>I am trying to learn multithreaded programming with Python; specifically, I am trying to build a program in which one client sends data to multiple servers and gets some messages back. In the first version of my program I want my client to communicate back and forth with each server that I have spawned with each thread, till I type 'bye'. Now I have 2 issues with my implementation that I don't understand how to deal with. The first one is that I don't want the connection with the server to close after I type 'bye' (I want to add extra functionality after that), and the second one is that the servers don't get the messages that I type at the same time; I can communicate with the second server only after the first thread terminates (which, as I said, I don't want to terminate). Any suggestions would be appreciated. Cheers!</p> <p><strong>Client.py</strong></p> <pre><code>import sys import threading from _thread import * import socket host_1 = '127.0.0.1' port_1 = 6000 host_2 = '127.0.0.2' port_2 = 7000 def connect_to_server(host, port): client_socket = socket.socket() # instantiate client_socket.connect((host, port)) # connect to the server message = input(&quot; -&gt; &quot;) # take input while message.lower().strip() != 'bye': client_socket.send(message.encode()) # send message data = client_socket.recv(1024).decode() # receive response print('Received from server: ' + data) # show in terminal message = input(&quot; -&gt; &quot;) # again take input threads_dict = {} th_1 = threading.Thread(target=connect_to_server, args=(host_1, port_1)) th_2 = threading.Thread(target=connect_to_server, args=(host_2, port_2)) th_1.start() th_2.start() th_1.join() th_2.join() </code></pre> <p><strong>Server.py</strong></p> <pre><code>import socket import sys def server_program(): host = sys.argv[1] # '127.0.0.1', '127.0.0.2' port = int(sys.argv[2]) # 6000, 7000 server_socket = socket.socket() # get instance server_socket.bind((host, port)) # bind host address and port together # 
configure how many client the server can listen simultaneously server_socket.listen(2) conn, address = server_socket.accept() # accept new connection print(&quot;Connection from: &quot; + str(address)) while True: # receive data stream. it won't accept data packet greater than 1024 bytes data = conn.recv(1024).decode() if not data: # if data is not received break break print(&quot;from connected user: &quot; + str(data)) data = input(' -&gt; ') conn.send(data.encode()) # send data to the client if __name__ == '__main__': server_program() </code></pre>
<python><multithreading><sockets><client-server><python-multithreading>
2022-12-14 17:56:40
0
529
C96
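The blocking behaviour described in the question above comes from each thread calling `input()` on its own. One possible restructuring (a sketch only, with the socket `send`/`recv` replaced by a stand-in transform so it runs without real servers): keep a single input loop in the main thread and hand each server thread its own queue, so one typed message reaches every server, and use a sentinel instead of closing the connection on 'bye'.

```python
import queue
import threading

def server_worker(inbox, replies):
    """Consume messages meant for one server. A real client would keep its
    socket open here even after the sentinel arrives."""
    while True:
        message = inbox.get()
        if message is None:              # sentinel: stop sending, keep socket
            break
        # A real client would do: sock.send(message.encode())
        # and: replies.append(sock.recv(1024).decode())
        replies.append(message.upper())  # stand-in for the server's reply

inboxes = [queue.Queue(), queue.Queue()]
replies = [[], []]
workers = [threading.Thread(target=server_worker, args=(q, r))
           for q, r in zip(inboxes, replies)]
for w in workers:
    w.start()

# One input loop feeds every server at the same time. input() is replaced
# by a fixed list so the sketch runs unattended.
for message in ["hello", "world", "bye"]:
    if message.lower().strip() == "bye":
        break
    for q in inboxes:
        q.put(message)

for q in inboxes:
    q.put(None)                          # release the workers
for w in workers:
    w.join()

print(replies)
```

With this shape, 'bye' only stops the dispatch loop; the worker threads (and any sockets they hold) can stay alive for whatever extra functionality comes next.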
74,802,368
3,868,474
PySpark join DataFrames multiple columns dynamically ('or' operator)
<p>I have a scenario where I need to dynamically join two DataFrames. I am creating a helper function and passing DataFrames as input parameters like this.</p> <pre><code>def joinDataFrame(first_df, second_df, first_cols, second_cols, join_type) -&gt; DataFrame: return_df = first_df.join(second_df, [col(f) == col(s) for (f, s) in zip(first_cols, second_cols)], join_type) return return_df </code></pre> <p>This works fine if I only have 'and' scenarios, but I have requirements to pass 'or' conditions as well.</p> <p>I did try to build a string containing the condition and then, using <code>expr()</code>, passing the join condition, but I am getting a <code>'ParseException'</code>.</p> <p>I would prefer to build the 'join' condition and pass it as a parameter to this function.</p>
<python><dataframe><pyspark><dynamic><apache-spark-sql>
2022-12-14 17:49:31
1
555
Prakash
74,802,248
12,193,952
After changing column type to string, hours, minutes and seconds are missing from the date
<p>I have a dataframe loaded from a file containing a time series and values</p> <pre class="lang-py prettyprint-override"><code> datetime value_a 0 2019-08-19 00:00:00 194.32000000 1 2019-08-20 00:00:00 202.24000000 2 2019-08-21 00:00:00 196.55000000 3 2019-08-22 00:00:00 187.45000000 4 2019-08-23 00:00:00 190.36000000 </code></pre> <p>After I try to convert the first column to <code>string</code>, the hours, minutes and seconds vanish.</p> <pre class="lang-py prettyprint-override"><code> datetime value_a 0 2019-08-19 194.32000000 1 2019-08-20 202.24000000 2 2019-08-21 196.55000000 3 2019-08-22 187.45000000 4 2019-08-23 190.36000000 </code></pre> <p>Code snippet</p> <pre class="lang-py prettyprint-override"><code>df['datetime'] = df['datetime'].astype(str) </code></pre> <p>I kinda need the format <code>%Y-%m-%d %H:%M:%S</code>, because we are using it later.</p> <p>What is wrong?</p> <p>NOTE: I initially thought that the issue was during the conversion from object to datetime; however, thanks to user @SomeDude, I have discovered that I am losing h/m/s during the to-string conversion.</p>
<python><pandas><dataframe><datetime>
2022-12-14 17:38:41
2
873
FN_
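For the question above: how pandas renders a datetime column as strings can depend on the pandas version and on whether every timestamp is at midnight. One way to guarantee the `%Y-%m-%d %H:%M:%S` shape regardless is to format explicitly with `dt.strftime`, a sketch:

```python
import pandas as pd

df = pd.DataFrame({
    "datetime": ["2019-08-19 00:00:00", "2019-08-20 00:00:00"],
    "value_a": [194.32, 202.24],
})
df["datetime"] = pd.to_datetime(df["datetime"])

# Explicit formatting keeps hours/minutes/seconds even when they are all zero,
# unlike astype(str), whose output format is not under your control.
df["datetime"] = df["datetime"].dt.strftime("%Y-%m-%d %H:%M:%S")
print(df["datetime"].iloc[0])  # 2019-08-19 00:00:00
```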
74,802,237
6,322,924
matplotlib quiver weird plot
<p>I am trying to make a simple quiver plot, with 4 points in the xy plane which roughly form a square, and 4 shifted points which form a smaller, inner square. But my arrows are weird. What am I doing wrong?</p> <pre><code>p_x = np.array([1,10,10,1]).reshape(-1,1) p_y = np.array([1,1,10,10]).reshape(-1,1) corr_x = np.array([2,9,9,2]).reshape(-1,1) corr_y = np.array([2,2,9,9]).reshape(-1,1) </code></pre> <p><a href="https://i.sstatic.net/Ev1tG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ev1tG.png" alt="enter image description here" /></a></p> <p>And this is what I got with quiver with the following code:</p> <pre><code>fig, ax = plt.subplots(figsize=(8,6)) ax.quiver(p_x, p_y, corr_x, corr_y) plt.show() </code></pre> <p><a href="https://i.sstatic.net/3yMUa.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3yMUa.png" alt="enter image description here" /></a></p>
<python><matplotlib>
2022-12-14 17:37:34
1
607
Falco Peregrinus
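The likely culprit in the quiver question above: the third and fourth arguments of `quiver` are vector *components* (U, V), not end points, so passing `corr_x, corr_y` draws arrows of length `corr` starting at `p`. A sketch that passes the deltas instead, with `angles`/`scale_units`/`scale=1` so each arrow ends exactly at its target point:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs without a display
import matplotlib.pyplot as plt

p_x = np.array([1, 10, 10, 1])
p_y = np.array([1, 1, 10, 10])
corr_x = np.array([2, 9, 9, 2])
corr_y = np.array([2, 2, 9, 9])

# quiver wants vector components, so subtract start from end.
u = corr_x - p_x
v = corr_y - p_y

fig, ax = plt.subplots(figsize=(8, 6))
# angles="xy", scale_units="xy", scale=1 makes the arrows point-to-point
# in data coordinates instead of being autoscaled.
ax.quiver(p_x, p_y, u, v, angles="xy", scale_units="xy", scale=1)
plt.close(fig)
print(u, v)
```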
74,802,223
4,551,325
Pandas resample().apply() with custom function very slow
<p>I have a pandas Series in business-day frequency, and I want to resample it to weekly frequency where I take the product of those 5 days in a week.</p> <p>Some dummy data:</p> <pre><code>dates = pd.bdate_range('2000-01-01', '2022-12-31') s = pd.Series(np.random.uniform(size=len(dates)), index=dates) # randomly assign NaN's mask = np.random.randint(0, len(dates), round(len(dates)*.9)) s.iloc[mask] = np.nan </code></pre> <p>Notice that majority of this Series are NaN's.</p> <p>The simple <code>.prod</code> method called after <code>.resample</code> is fast:</p> <pre><code>%timeit s.resample('W-FRI').prod() 10.2 ms ± 500 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) </code></pre> <p>But I have to be very precise when taking the product in that I want to give <code>min_count=1</code> when calling <code>np.prod</code>, and that's when it becomes very slow:</p> <pre><code>%timeit s.resample('W-FRI').apply(lambda x: x.prod(min_count=1)) 69.1 ms ± 1.19 ms per loop (mean ± std. dev. of 7 runs, 10 loops each) </code></pre> <p>I think the problem is not specific to <code>np.prod</code> but can be generalized to comparing all pandas-recognizable functions vs. applying custom functions.</p> <p>How do I achieve a similar performance as <code>.resample().prod()</code> with <code>min_count=1</code> argument?</p>
<python><pandas><numpy><apply><resample>
2022-12-14 17:36:28
1
1,755
data-monkey
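For the resample question above: in the pandas versions I'm aware of, `Resampler.prod` accepts `min_count` directly, which stays on the fast cython path and avoids the Python-level `apply` entirely. A sketch comparing the two on data shaped like the question's:

```python
import numpy as np
import pandas as pd

dates = pd.bdate_range("2000-01-01", "2000-12-31")
rng = np.random.default_rng(0)
s = pd.Series(rng.uniform(size=len(dates)), index=dates)
mask = rng.integers(0, len(dates), round(len(dates) * 0.9))
s.iloc[mask] = np.nan

# Fast cython path, same semantics as the lambda version:
fast = s.resample("W-FRI").prod(min_count=1)
# Slow Python-level apply, for comparison:
slow = s.resample("W-FRI").apply(lambda x: x.prod(min_count=1))
print(fast.equals(slow))
```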
74,802,101
17,696,880
Set alphanumeric regex pattern not accepting certain specific symbols
<pre class="lang-py prettyprint-override"><code>import re #Examples: input_text = &quot;Recien el 2021-10-12 despues de 3 dias 2021-10-12&quot; #NOT PASS input_text = &quot;Recien el 2021-10-12 hsah555sahsdhj. Ya despues de 3 dias hjsdfhjdsfhjdsf 2021-10-12&quot; #NOT PASS input_text = &quot;Recien el 2021-10-12 hsah555sahsdhj; despues de 3 dias hjsdfhjdsfhjdsf 2021-10-12&quot; #NOT PASS input_text = &quot;Recien el 2021-10-12 hsah555sahsdhj despues de 3 dias hjsdfhjdsfhjdsf.\n 2021-10-12&quot; #NOT PASS input_text = &quot;Recien el 2021-10-12 hsah555sahsdhj; mmm... creo que ya despues de 3 dias hjsdfhjdsfhjdsf.\n 2021-10-12&quot; #PASS input_text = &quot;Recien el 2021-10-12 hsah555sahsdhj. \n\n\n mmm... creo que ya despues de 3 dias hjsdfhjdsfhjdsf.\n 2021-10-12&quot; #PASS some_text = r&quot;[\s|]*&quot; # &lt;--- I NEED MODIFY THIS PATTERN date_format = r&quot;\d*-\d{2}-\d{2}&quot; check_00 = re.search(date_format + some_text + r&quot;(?:(?:pasados|pasado|despues del|despues de el|despues de|despues|tras) (\d+) (?:días|día|dias|dia)|(\d+) (?:días|día|dias|dia) (?:pasados|pasado|despues del|despues de el|despues de|despues|tras))&quot;, input_text, re.IGNORECASE) check_01 = re.search(r&quot;(?:(?:pasados|pasado|despues del|despues de el|despues de|despues|tras) (\d+) (?:días|día|dias|dia)|(\d+) (?:días|día|dias|dia) (?:pasados|pasado|despues del|despues de el|despues de|despues|tras))&quot; + some_text + date_format, input_text, re.IGNORECASE) if not check_00 and not check_01: print(&quot;1&quot;) else: print(&quot;0&quot;) </code></pre> <p>I need to set in the variable <code>some_text</code> a pattern that identify any alphanumeric substrings (that could possibly contain symbols included, such as <code>:</code> , <code>$</code>, <code>#</code>, <code>&amp;</code>, <code>?</code>, <code>¿</code>, <code>!</code>, <code>¡</code>, <code>|</code>, <code>°</code>, <code>,</code> , <code>.</code>, <code>(</code>, <code>)</code>, <code>]</code>, <code>[</code>, 
<code>}</code>, <code>{</code> ), and with the possibility of containing uppercase and lowercase characters, <strong>but the only symbols that should not be present, not even once, are</strong> <code>;</code> <strong>and</strong> <code>.\n</code> <strong>or</strong> <code>.[\s|]*\n*</code></p> <p>In this case I need to determine which cases do NOT match; hence the <code>if not</code> conditionals in the code.</p> <p>The <strong>output</strong> you should get if everything in the algorithm works fine would be this:</p> <pre><code>0 #for example 1 0 #for example 2 0 #for example 3 0 #for example 4 1 #for example 5 1 #for example 6 </code></pre> <p>Is it possible, within the same pattern that I want to place in the <code>some_text</code> variable, to indicate a list of the symbols that I do NOT want to appear in that identification area of the pattern (in this case <code>;</code> and <code>.[\s|]*\n*</code>)?</p>
<python><python-3.x><regex><string><regex-group>
2022-12-14 17:25:29
1
875
Matt095
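One way to express "any text, except never a `;` and never a `.` followed by a newline" in a single pattern is a tempered token: each character must be `[^;]` and must not start the forbidden `.\s*\n` sequence. This is a standalone sketch of that building block, not a drop-in replacement for the full `check_00`/`check_01` patterns above:

```python
import re

# Tempered pattern: a run of characters containing no ';' and no '.'
# that is followed (possibly after whitespace) by a newline.
some_text = r"(?:(?!\.\s*\n)[^;])*"

ok = ["Recien el hsah555sahsdhj mmm... creo que ya", "plain words 123"]
bad = ["with a ; semicolon", "sentence end.\n next line"]

# fullmatch checks whether the *entire* string is acceptable filler text.
results_ok = [re.fullmatch(some_text, s, re.DOTALL) is not None for s in ok]
results_bad = [re.fullmatch(some_text, s, re.DOTALL) is not None for s in bad]
print(results_ok, results_bad)
```

Spliced between the date pattern and the keyword group, this keeps the "anything in between" behaviour of `[\s|]*`-style fillers while rejecting spans that cross a `;` or a sentence-ending `.` plus newline.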
74,802,041
3,826,115
How to plot two GeoDataFrames with one legend in Python
<p>I have the following code:</p> <pre><code>import geopandas as gpd import matplotlib.pyplot as plt states_gdf = gpd.read_file('https://www2.census.gov/geo/tiger/GENZ2018/shp/cb_2018_us_state_5m.zip') CO_gdf = states_gdf[states_gdf['STUSPS'] == 'CO'] fig, ax = plt.subplots() CO_gdf.plot(ax = ax, column = 'STUSPS') CO_gdf.boundary.to_frame().plot(ax = ax, color = 'k', label = 'CO Boundary') </code></pre> <p>I want to have one legend, that labels both the colored-in region and the boundary.</p> <p>If I try the following code:</p> <pre><code>fig, ax = plt.subplots() CO_gdf.plot(ax = ax, column = 'STUSPS', legend = True) CO_gdf.boundary.to_frame().plot(ax = ax, color = 'k', label = 'CO Boundary') ax.legend() </code></pre> <p>The first legend gets deleted when the second one is plotted.</p> <p>I've also tried collecting the handles and labels manually with <code>ax.get_legend_handles_labels()</code>, but for some reason the first plot line <code>CO_gdf.plot(...)</code> doesn't seem to create any handles.</p>
<python><matplotlib><legend><geopandas>
2022-12-14 17:20:43
0
1,533
hm8
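For the GeoPandas legend question above: `GeoDataFrame.plot` does not always create legend handles, so one workaround is to build proxy artists yourself and hand them to `ax.legend`. The sketch below uses plain matplotlib (no geopandas) since the legend machinery is the same; the `Patch`/`Line2D` colors are placeholders for whatever the polygon fill and boundary actually look like:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for the sketch
import matplotlib.pyplot as plt
from matplotlib.lines import Line2D
from matplotlib.patches import Patch

fig, ax = plt.subplots()
# Proxy artists stand in for the plotted geometries; the same handles list
# would be passed on a geopandas axes after CO_gdf.plot(ax=ax, ...).
handles = [
    Patch(facecolor="tab:purple", label="CO"),
    Line2D([0], [0], color="k", label="CO Boundary"),
]
ax.legend(handles=handles)  # one legend covering both layers

labels = [t.get_text() for t in ax.get_legend().get_texts()]
print(labels)
plt.close(fig)
```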
74,802,016
1,892,584
How do I stop pdb from hiding tracebacks at the shell?
<p>If you execute a statement that results in an exception in pdb, you get the exception but the traceback for the exception is hidden.</p> <p>Is there a way of getting pdb to output the traceback as well?</p> <p>Unfortunately neither <code>sys.exc_info()</code> nor <code>!sys.exc_info</code> seems to contain the exception for the last executed command. One seems to point to the exception that triggered the pdb post mortem; the other is an exception from within pdb itself. It'd also be nice to always get it to do the right thing.</p>
<python><pdb>
2022-12-14 17:18:36
0
1,947
Att Righ
74,801,747
558,639
Laminating two column arrays
<p>Assume I have two numpy (N, 1) column arrays. I know their length to be equal:</p> <pre><code>&gt;&gt;&gt; a array([[0.], [2.], [4.], [6.]]) &gt;&gt;&gt; b array([[0.], [1.], [2.], [3.]]) </code></pre> <p>and I'd like to &quot;laminate&quot; them side by side to form an (N, 2) array. The following works:</p> <pre><code>&gt;&gt;&gt; np.array((a, b)).reshape(2, len(a)).transpose() array([[0., 0.], [2., 1.], [4., 2.], [6., 3.]]) </code></pre> <p>... but is there a simpler, more direct way to accomplish this?</p>
<python><numpy><numpy-ndarray>
2022-12-14 16:57:23
1
35,607
fearless_fool
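Since both inputs in the question above are already `(N, 1)`, concatenating along axis 1 does the "lamination" directly, with no reshape/transpose dance:

```python
import numpy as np

a = np.array([[0.], [2.], [4.], [6.]])
b = np.array([[0.], [1.], [2.], [3.]])

# (N, 1) next to (N, 1) -> (N, 2); hstack is shorthand for
# np.concatenate((a, b), axis=1). np.column_stack((a, b)) also works,
# and would additionally accept plain 1-D inputs.
lam = np.hstack((a, b))
print(lam)
```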
74,801,490
11,894,831
Get Ublock Origin logger datas using Python and selenium
<p>I'd like to know the number of blocked trackers detected by uBlock Origin using Python (running on a Linux server, so no GUI) and Selenium (with the Firefox driver). I don't necessarily need to actually block them, but I need to know how many there are.</p> <p>uBlock Origin has a logger (<a href="https://github.com/gorhill/uBlock/wiki/The-logger#settings-dialog" rel="nofollow noreferrer">https://github.com/gorhill/uBlock/wiki/The-logger#settings-dialog</a>) which I'd like to scrape.</p> <p>This logger is available through a URL like this: moz-extension://<em>fc469b55-3182-4104-a95c-6b0b4f87cf0f</em>/logger-ui.html#_ where the part in italic is the UUID of the uBlock Origin add-on.</p> <p>In this logger, for each entry, there is a div with class set to &quot;logEntry&quot; (yellow oblong in the screenshot below), and I'd like to get the data in the green oblong: <a href="https://i.sstatic.net/9tD0H.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9tD0H.png" alt="enter image description here" /></a></p> <p>So far, I have got this:</p> <pre><code>from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.firefox.options import Options as FirefoxOptions browser_options = FirefoxOptions() browser_options.headless = True # Activate add on str_ublock_extension_path = &quot;/usr/local/bin/uBlock0_1.45.3b10.firefox.signed.xpi&quot; browser = webdriver.Firefox(executable_path='/usr/local/bin/geckodriver',options=browser_options) str_id = browser.install_addon(str_ublock_extension_path) # Getting the UUID which is new each time the script is launched profile_path = browser.capabilities['moz:profile'] id_extension_firefox = &quot;uBlock0@raymondhill.net&quot; with open('{}/prefs.js'.format(profile_path), 'r') as file_prefs: lines = file_prefs.readlines() for line in lines: if 'extensions.webextensions.uuids' in line: sublines = line.split(',') for subline in sublines: if id_extension_firefox in subline: internal_uuid =
subline.split(':')[1][2:38] str_uoo_panel_url = &quot;moz-extension://&quot; + internal_uuid + &quot;/logger-ui.html#_&quot; ubo_logger = browser.get(str_uoo_panel_url) ubo_logger_log_entries = ubo_logger.find_element(By.CLASS_NAME, &quot;logEntry&quot;) for log_entrie in ubo_logger_log_entries: print(log_entrie.text) </code></pre> <p>Using this &quot;weird&quot; URL with moz-extension:// seems to work, considering that <code>print(browser.page_source)</code> will display some relevant html code.</p> <p>Problem: <code>ubo_logger.find_element(By.CLASS_NAME, &quot;logEntry&quot;)</code> got nothing. What did I do wrong?</p>
<python><selenium-webdriver><selenium-firefoxdriver><ublock-origin>
2022-12-14 16:37:35
1
475
8oris
74,801,447
12,979,993
How to pprint custom collection
<p>I am not able to simulate pprint on my custom collections.<br /> See the following behaviour:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt;from pprint import * &gt;&gt;&gt;pprint([&quot;x&quot;*80]*4) ['xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx', 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx', 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx', 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'] </code></pre> <p>However, when I pprint my custom <code>abc.Sequence</code> object:</p> <pre class="lang-py prettyprint-override"><code>import collections class MySequence(collections.abc.Sequence): def __init__(self, iterable): self.elements = list(iterable) def __iter__(self): return iter(self.elements) def __contains__(self, value): return value in self.elements def __getitem__(self, index): return self.elements.__getitem__(index) def __len__(self): return len(self.elements) def __repr__(self): return self.elements.__repr__() </code></pre> <p>I get the following:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt;pprint(MySequence([&quot;x&quot; * 80]*4)) ['xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx', 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx', 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx', 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'] </code></pre> <p>At the end I need to print my dataclass using my custom class getting nice indentation:</p> <pre class="lang-py prettyprint-override"><code>@dataclass class MyClass1: halloffame: List = field(default_factory=lambda: [&quot;x&quot; * 80]*4) @dataclass class MyClass2: halloffame: MySequence = field(default_factory=lambda: MySequence([&quot;x&quot; * 80]*4)) </code></pre> <p>So the following classes 
should print the same, however, I get:</p> <pre class="lang-py prettyprint-override"><code>pprint(MyClass1()) MyClass1(halloffame=['xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx', 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx', 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx', 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx']) pprint(MyClass2()) MyClass2(halloffame=['xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx', 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx', 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx', 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx']) </code></pre>
<python><abc><pprint>
2022-12-14 16:34:39
1
895
BorjaEst
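For the pprint question above: CPython's `pprint` dispatches on the exact `__repr__` function of the object's type, so a class that defines its own `__repr__` falls back to the one-line `repr()`. Registering the class against the built-in list formatter restores the wrapping. Note this pokes a private, CPython-specific table (`PrettyPrinter._dispatch`), so treat it as a workaround rather than a supported API:

```python
import collections.abc
import pprint

class MySequence(collections.abc.Sequence):
    def __init__(self, iterable):
        self.elements = list(iterable)
    def __getitem__(self, index):
        return self.elements[index]
    def __len__(self):
        return len(self.elements)
    def __repr__(self):
        return repr(self.elements)

# Tell pprint to format MySequence the same way it formats list.
# _dispatch is keyed on the type's __repr__ function (CPython internal).
pprint.PrettyPrinter._dispatch[MySequence.__repr__] = \
    pprint.PrettyPrinter._dispatch[list.__repr__]

text = pprint.pformat(MySequence(["x" * 80] * 4))
print(text)
```

With the registration in place, `pprint` breaks the long items across lines exactly as it does for a plain list, which then also fixes the dataclass case, since dataclass fields are pretty-printed recursively.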
74,801,365
10,309,712
Stacking ensemble of classifiers in a chain
<p>I have the following human activity recognition sample dataset:</p> <pre class="lang-py prettyprint-override"><code>df = pd.DataFrame( { 'mean_speed': [40.01, 3.1, 2.88, 20.89, 5.82, 40.01, 33.1, 40.88, 20.89, 5.82, 40.018, 23.1], 'max_speed': [70.11, 6.71, 7.08, 39.63, 6.68, 70.11, 65.71, 71.08, 39.63, 13.68, 70.11, 35.71], 'max_acc': [17.63, 2.93, 3.32, 15.57, 0.94, 17.63, 12.93, 3.32, 15.57, 0.94, 17.63, 12.93], 'mean_acc': [5.15, 1.97, 0.59, 5.11, 0.19, 5.15, 2.97, 0.59, 5.11, 0.19, 5.15, 2.97], 'activity': ['driving', 'walking', 'walking', 'riding', 'walking', 'driving', 'motor-bike', 'motor-bike', 'riding', 'riding', 'motor-bike', 'riding'] } ) df.head() mean_speed max_speed max_acc mean_acc activity 0 40.01 70.11 17.63 5.15 driving 1 3.10 6.71 2.93 1.97 walking 2 2.88 7.08 3.32 0.59 walking 3 20.89 39.63 15.57 5.11 riding 4 5.82 6.68 0.94 0.19 walking </code></pre> <p>So I want to create a chain of machine learning classifiers in a pipeline, where the base classifier first predicts whether an <code>activity</code> is <strong>motorised</strong> (<code>driving</code>, <code>motor-bike</code>) or <strong>non-motorised</strong> (<code>riding</code>, <code>walking</code>).
The learning phase should proceed like so: <a href="https://i.sstatic.net/ermTV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ermTV.png" alt="enter image description here" /></a></p> <p>So I add a column <code>type</code> stating where an activity is motorised or otherwise.</p> <pre class="lang-py prettyprint-override"><code>class_mapping = {'driving':'motorised', 'motor-bike':'motorised', 'walking':'non-motorised', 'riding':'non-motorised'} df['type'] = df['activity'].map(class_mapping) df.head() mean_speed max_speed max_acc mean_acc activity type 0 40.01 70.11 17.63 5.15 driving motorised 1 3.10 6.71 2.93 1.97 walking non-motorised 2 2.88 7.08 3.32 0.59 walking non-motorised 3 20.89 39.63 15.57 5.11 riding non-motorised 4 5.82 6.68 0.94 0.19 walking non-motorised </code></pre> <p><strong>Question:</strong></p> <p>I would like to train a <code>Random Forest</code> as base classifier, to predict whether an activity is <code>motorised</code> or <code>non-motorised</code>, with a probability output. Then follows 2 meta-classifiers: <code>Decision Tree</code> to predict if the activity is <code>walking</code> or <code>riding</code>, and an <code>SVC</code> which predicts if an activity is <code>driving</code> or <code>motor-bike</code>. The meta-classifiers (<code>DT, SVC</code>) would take as input, the 4-features + probability output of the first classifier. Obviously, <code>DT</code> and <code>SVC</code> would only take a subset of the entire dataset corresponding to the classes they would predict.</p> <p>I have this idea of the learning procedure, but I am not sure how I to implement it.</p> <p>Can anyone out there show how this could be done?</p>
<python><machine-learning><scikit-learn><classification><supervised-learning>
2022-12-14 16:28:56
1
4,093
arilwan
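One possible implementation of the two-stage chain described above (a sketch, with arbitrary hyperparameters): fit the Random Forest on `type`, append its `predict_proba` output as a fifth feature, then fit the SVC on the motorised subset and the Decision Tree on the non-motorised subset, routing each prediction through the branch the base model chooses:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

df = pd.DataFrame({
    "mean_speed": [40.01, 3.1, 2.88, 20.89, 5.82, 40.01, 33.1, 40.88, 20.89, 5.82, 40.018, 23.1],
    "max_speed": [70.11, 6.71, 7.08, 39.63, 6.68, 70.11, 65.71, 71.08, 39.63, 13.68, 70.11, 35.71],
    "max_acc": [17.63, 2.93, 3.32, 15.57, 0.94, 17.63, 12.93, 3.32, 15.57, 0.94, 17.63, 12.93],
    "mean_acc": [5.15, 1.97, 0.59, 5.11, 0.19, 5.15, 2.97, 0.59, 5.11, 0.19, 5.15, 2.97],
    "activity": ["driving", "walking", "walking", "riding", "walking", "driving",
                 "motor-bike", "motor-bike", "riding", "riding", "motor-bike", "riding"],
})
class_mapping = {"driving": "motorised", "motor-bike": "motorised",
                 "walking": "non-motorised", "riding": "non-motorised"}
df["type"] = df["activity"].map(class_mapping)
features = ["mean_speed", "max_speed", "max_acc", "mean_acc"]

# Stage 1: base classifier with a probability output for 'motorised'.
base = RandomForestClassifier(n_estimators=50, random_state=0)
base.fit(df[features], df["type"])
motor_col = list(base.classes_).index("motorised")
df["p_motorised"] = base.predict_proba(df[features])[:, motor_col]
meta_features = features + ["p_motorised"]

# Stage 2: each meta classifier sees only its own branch of the data.
is_motor = df["type"] == "motorised"
svc = SVC(random_state=0).fit(df.loc[is_motor, meta_features],
                              df.loc[is_motor, "activity"])
dt = DecisionTreeClassifier(random_state=0).fit(df.loc[~is_motor, meta_features],
                                                df.loc[~is_motor, "activity"])

def predict(rows):
    """Route each row through the meta model chosen by the base model."""
    x = rows[features].copy()
    x["p_motorised"] = base.predict_proba(rows[features])[:, motor_col]
    branch = base.predict(rows[features])
    return [(svc if b == "motorised" else dt).predict(x.iloc[[i]])[0]
            for i, b in enumerate(branch)]

preds = predict(df)
print(preds)
```

In a real setting the stage-1 probabilities fed to the meta models should come from out-of-fold predictions (e.g. `cross_val_predict`) rather than from refitting on the same rows, to avoid leaking the base model's training fit into stage 2.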
74,801,357
3,937,811
How to connect a Twilio number to a Twilio flow
<p>I am working on a project to connect a phone number to a Twilio flow project. I've created a flow and added a number, but the flow does not work.</p> <p><a href="https://i.sstatic.net/688fV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/688fV.png" alt="enter image description here" /></a></p> <p><a href="https://i.sstatic.net/2VkLF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2VkLF.png" alt="enter image description here" /></a></p> <p>I contacted support about this.</p> <p>This is not okay. I followed the specific instructions in this video:</p> <p><a href="https://www.youtube.com/watch?time_continue=180&amp;v=VRxirse1UfQ&amp;embeds_euri=https%3A%2F%2Fwww.twilio.com%2F&amp;source_ve_path=Mjg2NjYsMjM4NTE&amp;feature=emb_title" rel="nofollow noreferrer">https://www.youtube.com/watch?time_continue=180&amp;v=VRxirse1UfQ&amp;embeds_euri=https%3A%2F%2Fwww.twilio.com%2F&amp;source_ve_path=Mjg2NjYsMjM4NTE&amp;feature=emb_title</a></p> <p>and nothing works as expected.</p> <p>Expected:</p> <pre><code>Send hi and receive: 1. What are the positive results or outcomes you have achieved lately? 2. What are the strengths and resources you have available to you to get even more results and were likely the reason you got the results in the first question. 3. What are your current priorities? 4. What do you and your team need to be focused on right now? 5. What are the benefits to all involved-you, your team and all other stakeholders who will be impacted by achieving your priority focus. 6. How can we (you and/or your team) move close? 7. What action steps are needed? 8. What are you going to do today? 9. What are you doing tomorrow ? 10. What did you do yesterday? Please respond in the order that the questions appear. </code></pre> <p>Actual:</p> <pre><code>Sent from your Twilio trial account - Thanks for the message.
Configure your number's SMS URL to change this message.Reply HELP for help.Reply STOP to unsubscribe.Msg&amp;Data rates may apply. </code></pre>
<python><twilio>
2022-12-14 16:28:28
1
2,066
Evan Gertis
74,801,343
805,357
Python dateutil relativedelta incorrect result when starting with 30-day month
<p>I'm trying to create some date recurrence rules. Based on the issues with <code>dateutil</code> noted <a href="https://stackoverflow.com/questions/38328313/dateutils-rrule-returns-dates-that-2-months-apart">here</a> and <a href="https://github.com/dateutil/dateutil/issues/149" rel="nofollow noreferrer">here</a> I am trying to use <code>relativedelta</code> rather than <code>rrule</code>. However, I am encountering a silly problem that is stumping me and which is illustrated with the following minimal example. Consider this:</p> <pre><code>MONTH = relativedelta(months=+1) print(first_date + MONTH, first_date + 2*MONTH) </code></pre> <p>When <code>first_date</code> is the last day in a month with 31 days, such as 05/31/2007, the code yield the correct two dates: June 30th and July 31st.</p> <p>But, when <code>first_date</code> is the last day in a month with 30 days, such as 04/30/2007, the code yields : <strong>May 30th</strong> and June 30th. What it should have given is <strong>May 31st</strong> and June 30th.</p> <p>Any ideas how I can overcome this issue?</p>
<python><python-dateutil><relativedelta>
2022-12-14 16:27:30
1
2,085
deepak
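For the relativedelta question above: `relativedelta` shifts the month first and then clips the day, so starting from April 30 it keeps `day=30` rather than snapping to the end of May. If the intent is "the last day of each following month" (which only makes sense when the start date is itself a month end), the documented absolute `day=31` replacement clips to the last valid day of whatever month results:

```python
from datetime import date
from dateutil.relativedelta import relativedelta

first_date = date(2007, 4, 30)  # last day of a 30-day month

# day=31 is an absolute replacement; relativedelta clips it to the last
# valid day of the resulting month (May 31, June 30, ...).
one = first_date + relativedelta(months=1, day=31)
two = first_date + relativedelta(months=2, day=31)
print(one, two)  # 2007-05-31 2007-06-30
```

A recurrence rule built this way should apply `day=31` only when the anchor date is the last day of its month; for mid-month anchors the plain `months=+1` behaviour is usually what is wanted.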
74,801,333
166,723
Spyder not terminating tkinter mainloop()
<p>I am starting with Python and I'm using Spyder 3.9.13 with Python 3.9</p> <p>When beginning with the GUI design I have run into one problem: Spyder runs the script just once and then hangs in an infinite loop.</p> <p>Code that fails:</p> <pre><code>import tkinter as TK ws=TK.Tk() print('Foo') ws.mainloop() print('Bar') </code></pre> <p>When run from Spyder the window emerges and only the <code>Foo</code> is printed. When run from the command line, the <code>Bar</code> is printed when the window is closed, as expected.</p> <p>Adding a <code>mainloop</code>-killing button does not help either; the behaviour is the same.</p> <pre><code>import tkinter as TK ws=TK.Tk() TK.Button( ws, text='Exit', command=lambda:ws.destroy() ).pack(expand=True) print('Foo') ws.mainloop() print('Bar') </code></pre> <p>When the <code>destroy()</code> method call is replaced by a <code>quit()</code> call, the <code>Bar</code> is printed but the window is not closed.</p> <p>The slightly expanded code below works partially as intended:</p> <pre><code>import tkinter as TK class FooBar: def __init__(self,Name): self.win=TK.Tk() self.win.title=(Name) def terminate(self): self.win.quit() self.win.destroy() ws=FooBar('FooBar') TK.Button( ws.win, text='Exit', command=lambda:ws.terminate() ).pack(expand=True) print('Foo') ws.win.mainloop() print('Bar') </code></pre> <p>Even this code runs differently from Spyder and from the command line. From the command line the <code>Bar</code> is printed no matter how the window is terminated, by the <code>Exit</code> button or the cross button. On the other hand, Spyder closes the window and prints <code>Bar</code> only if the window is closed by the <code>Exit</code> button. When closed as a window, the <code>Bar</code> is not printed and it is still stuck in the loop.</p> <p>What causes this behaviour and how can I prevent it?</p>
<python><tkinter><spyder>
2022-12-14 16:26:24
1
2,341
Crowley
74,801,291
2,321,195
How do I parse multi-line logs when I have some regex for individual lines?
<p>I have newline-delimited logs that look like this:</p> <pre><code>Unimportant unimportant Some THREAD-123 blah blah blah patternA blah blah blah Unimportant unimportant More THREAD-123 blah blah blah patternB blah blah blah Unimportant unimportant Unimportant unimportant Outbound XML distinctive doctype tag Unimportant unimportant Outbound XML distinctive root opening-tag Unimportant unimportant Unimportant unimportant Unimportant unimportant Outbound XML distinctive HEY-THIS-IS-MY-DATA tagset and innertext Unimportant unimportant Outbound XML distinctive root closing-tag Unimportant unimportant Unimportant unimportant Unimportant unimportant Yet more THREAD-123 blah blah blah patternC blah blah blah Unimportant unimportant Unimportant unimportant Even more THREAD-123 blah blah blah patternD blah blah blah Unimportant unimportant Inbound XML distinctive snippet Unimportant unimportant Unimportant unimportant Unimportant unimportant Just a bit more THREAD-123 blah blah blah patternE blah blah blah Unimportant unimportant Unimportant unimportant And then THREAD-123 blah blah blah patternF blah blah blah Unimportant unimportant </code></pre> <p>I've already come up with <code>^...$</code> regex patterns capable of recognizing every line you see here that isn't &quot;<code>Unimportant unimportant</code>&quot;, with one caveat:</p> <p>Sometimes, things that match one of these patterns will themselves be unimportant.</p> <p>Like, there might be overlapping concurrent threads that both match this pattern.</p> <p>So once I see a &quot;<code>Some THREAD-(\d+) blah blah blah patternA blah blah blah</code>&quot; I'll need to save off &quot;<code>(\d+)</code>&quot;'s value of &quot;<code>123</code>&quot; from &quot;<code>THREAD-(\d+)</code>&quot; into some sort of variable and use it as a literal in subsequent patternB-patternF <em>(actually look for &quot;<code>THREAD-123</code>&quot;)</em>.</p> <p>Furthermore, I need to pass in a parameter to the whole thing where I've 
written &quot;<code>HEY-THIS-IS-MY-DATA</code>.&quot;</p> <p>In other words, I'm looking for &quot;<code>HEY-THIS-IS-MY-DATA</code>&quot; surrounded by consistent &quot;opening&quot; and &quot;closing&quot; sequences of regexes in a log file.</p> <p>Any tips on how I could approach this?</p> <p>Extremely vanilla Python 3 (as delivered on 2021-era AWS EC2 RHEL instances), older (v5) PowerShell, or Linux shell flavors that come with standard 2021-era AWS EC2 RHEL instances would be my preferred programming languages, as I'll be passing this on for others to use as a unit test for validating whether certain behaviors against &quot;<code>HEY-THIS-IS-MY-DATA</code>&quot; in an interactive UI &quot;show up correctly&quot; in logs.</p>
<python><powershell><sh><logparser>
2022-12-14 16:23:38
1
401
k..
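The usual shape for the log-parsing problem above is a small state machine: scan line by line, capture the thread id when the first pattern fires, then build the later patterns dynamically around that captured id (via `re.escape` and an f-string). This sketch uses simplified stand-in line formats; the real per-line regexes from the question would slot into the same structure:

```python
import re

LOG = """\
Some THREAD-123 blah patternA blah
Unimportant unimportant
More THREAD-123 blah patternB blah
Outbound XML distinctive root opening-tag
Outbound XML distinctive HEY-THIS-IS-MY-DATA tagset and innertext
Outbound XML distinctive root closing-tag
Yet more THREAD-123 blah patternC blah
"""

def find_payload(log_text, needle):
    """Capture the thread id on patternA, then require the *same* id in later
    patterns, and collect lines containing `needle` inside the XML root."""
    thread_id = None
    in_xml = False
    hits = []
    for line in log_text.splitlines():
        if thread_id is None:
            m = re.match(r"^Some THREAD-(\d+) .*patternA.*$", line)
            if m:
                thread_id = m.group(1)   # saved, reused as a literal below
            continue
        # Later patterns are rebuilt around the captured id each time.
        if re.match(rf"^More THREAD-{re.escape(thread_id)} .*patternB.*$", line):
            continue
        if "root opening-tag" in line:
            in_xml = True
        elif "root closing-tag" in line:
            in_xml = False
        elif in_xml and needle in line:
            hits.append(line)
    return thread_id, hits

tid, hits = find_payload(LOG, "HEY-THIS-IS-MY-DATA")
print(tid, hits)
```

Overlapping threads can be handled by keeping a dict keyed on thread id instead of a single `thread_id` variable, advancing each thread's state independently.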
74,801,254
9,274,940
pandas aggregate items as list and filter based on length
<p>Let me show with an example, I have this dataframe:</p> <p><a href="https://i.sstatic.net/2Qz3E.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2Qz3E.png" alt="enter image description here" /></a></p> <p>I want to end up with to this dataframe: (so I group by &quot;column_1&quot; and &quot;last_column&quot; and I aggregate by &quot;column_2&quot; to get the items as a list)</p> <p><a href="https://i.sstatic.net/RxtJS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RxtJS.png" alt="enter image description here" /></a></p> <p>If you notice, when column_1 = 'yes' it doesn't appear that row, <strong>SINCE THE LENGTH OF THE RESULT IS 1</strong>.</p> <p>I'm able to filter and aggregate as a list separately, but not both together...</p> <pre><code>df.groupby( ['column_1', 'last_column'] )['column_2'].agg(list).filter(lambda x : len(x)&lt;2) </code></pre> <p>I'm getting the following error:</p> <p><a href="https://i.sstatic.net/gLpaB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gLpaB.png" alt="enter image description here" /></a></p> <p>Dataframe:</p> <pre><code>import pandas as pd data = {'column_1': ['no', 'no', 'no', 'no', 'yes', 'yes', 'yes', 'yes'], 'column_2': ['spain', 'france', 'italy', 'germany', 'spain', 'france', 'italy', 'germany'], &quot;last_column&quot;: ['A', 'A', 'A', 'A', 'B', 'B', 'B', 'B']} df = pd.DataFrame.from_dict(data) </code></pre>
<python><pandas><dataframe><group-by>
2022-12-14 16:20:36
2
551
Tonino Fernandez
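The `TypeError` in the question above comes from `Series.filter`, which selects by *label* (`items`/`like`/`regex`), not by a predicate. One way to keep everything in a single chain is to filter the aggregated lists with `.loc` and `str.len()`, a sketch using the question's data:

```python
import pandas as pd

data = {"column_1": ["no", "no", "no", "no", "yes", "yes", "yes", "yes"],
        "column_2": ["spain", "france", "italy", "germany",
                     "spain", "france", "italy", "germany"],
        "last_column": ["A", "A", "A", "A", "B", "B", "B", "B"]}
df = pd.DataFrame.from_dict(data)

agg = df.groupby(["column_1", "last_column"])["column_2"].agg(list)
# str.len() gives each list's length, so a boolean callable in .loc
# replaces the (label-based) Series.filter from the failing attempt.
short = agg.loc[lambda s: s.str.len() < 2]
long = agg.loc[lambda s: s.str.len() >= 2]
print(agg.str.len().tolist(), len(short), len(long))
```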
74,800,989
10,197,418
How to add a numeric seconds column (or duration) to a datetime in Polars?
<p>I want to add a duration in seconds to a date/time. My data looks like</p> <pre class="lang-py prettyprint-override"><code>import polars as pl df = pl.DataFrame( { &quot;dt&quot;: [ &quot;2022-12-14T00:00:00&quot;, &quot;2022-12-14T00:00:00&quot;, &quot;2022-12-14T00:00:00&quot;, ], &quot;seconds&quot;: [ 1.0, 2.2, 2.4, ], } ) df = df.with_columns(pl.col(&quot;dt&quot;).cast(pl.Datetime)) </code></pre> <p>Now my naive attempt was to to convert the float column to duration type to be able to add it to the datetime column (as I would do in <code>pandas</code>).</p> <pre class="lang-py prettyprint-override"><code>df = df.with_columns(pl.col(&quot;seconds&quot;).cast(pl.Duration).alias(&quot;duration0&quot;)) print(df.head()) </code></pre> <pre><code>┌─────────────────────┬─────────┬──────────────┐ │ dt ┆ seconds ┆ duration0 │ │ --- ┆ --- ┆ --- │ │ datetime[μs] ┆ f64 ┆ duration[μs] │ ╞═════════════════════╪═════════╪══════════════╡ │ 2022-12-14 00:00:00 ┆ 1.0 ┆ 0µs │ │ 2022-12-14 00:00:00 ┆ 2.2 ┆ 0µs │ │ 2022-12-14 00:00:00 ┆ 2.4 ┆ 0µs │ └─────────────────────┴─────────┴──────────────┘ </code></pre> <p>...gives the correct data type, however the <strong>values are all zero</strong>.</p> <p>The <a href="https://docs.pola.rs/user-guide/transformations/time-series/parsing/" rel="nofollow noreferrer">documentation</a> is kind of sparse on the topic, any better options?</p>
<python><dataframe><datetime><python-polars><timedelta>
2022-12-14 16:00:48
1
26,076
FObersteiner
74,800,807
6,045,509
Why isn't boto3 (AWS) using ~/.aws/credentials?
<p>My issue concerns the AWS boto3 package, authorization, and Python.</p> <p>Referring to <a href="https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html" rel="nofollow noreferrer">https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html</a>, section &quot;Configuring credentials&quot;, the AWS credentials for a boto3 instance can be sourced from &quot;4. Shared credential file (<code>~/.aws/credentials</code>)&quot;.</p> <p>To prove the <code>~/.aws/credentials</code> are valid and sufficient I am using the AWS CLI (<code>secretsmanager: create-secret, get-secret-value</code> calls). The responses/results are OK.</p> <p>Not so for the boto3 instance in Python code (using <code>client.get_secret_value</code>).</p> <p>Expected: error-free response</p> <p>Actually:</p> <pre><code>botocore.exceptions.ClientError: An error occurred (UnrecognizedClientException) when calling the GetSecretValue operation: The security token included in the request is invalid. </code></pre> <p>Any hint appreciated, thanks.</p>
<python><amazon-web-services><boto3>
2022-12-14 15:47:06
1
361
harry hartmann
74,800,793
2,836,172
Find the lowest value index in a numpy array per column plus value
<p>This is quite easy:</p> <pre><code>import numpy as np np.random.seed(2341) data = (np.random.rand(3,4) * 100).astype(int) </code></pre> <p>so I have</p> <pre><code>[[35 20 47 39] [ 6 17 77 85] [ 8 25 2 3]] </code></pre> <p>Great, now lets get the indices of the smallest values per row:</p> <pre><code>kmin = np.argmin(data, axis=1) </code></pre> <p>this outputs</p> <pre><code>[1 0 2] </code></pre> <p>So in the first row, the second element is the smallest. In the second row the first and in the 3rd row it's the 3rd element. But how do I access those values and get them as one column?</p> <p>I tried this syntax:</p> <pre><code>min_vals = data[:, kmin] </code></pre> <p>but the result is an 3x3 array. I need an output like this:</p> <pre><code>[[20] [ 6] [ 2]] </code></pre> <p>I know that I get the values on a different way too, but later on I have to implement Matlab code like this</p> <pre><code>data(1:n1,kmin,1); </code></pre> <p>where I need to select the lowest values again.</p>
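A sketch of the standard fix for the question above: `data[:, kmin]` broadcasts `kmin` against every row (giving 3×3), whereas pairing a row-index array with `kmin` selects exactly one element per row; a trailing `[:, np.newaxis]` gives the desired column shape.

```python
import numpy as np

np.random.seed(2341)
data = (np.random.rand(3, 4) * 100).astype(int)
kmin = np.argmin(data, axis=1)               # [1 0 2] per the question's seed

# Integer-array indexing: one (row, col) pair per row instead of
# the full cross product that data[:, kmin] produces.
rows = np.arange(data.shape[0])
min_vals = data[rows, kmin][:, np.newaxis]   # shape (3, 1)
```

The same `(rows, kmin)` pair can later be reused for the Matlab-style `data(1:n1,kmin,1)` selection.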
<python><numpy>
2022-12-14 15:46:16
1
1,522
Standard
74,800,749
6,119,375
pass custom scaling operation in python
<p>I am following an example from <a href="https://github.com/google/lightweight_mmm" rel="nofollow noreferrer">https://github.com/google/lightweight_mmm</a>, but instead of using the default setting for the scalers, which is the mean:</p> <pre><code>media_scaler = preprocessing.CustomScaler(divide_operation=jnp.mean) </code></pre> <p>I need to use this lambda function:</p> <pre><code>lambda x: jnp.mean(x[x &gt; 0]) </code></pre> <p>How can this be done? I tried a couple of things, but since I am a complete beginner, I feel lost.</p> <p>So I have tried:</p> <pre><code>lambda x: jnp.mean(x[x &gt; 0]) media_scaler = preprocessing.CustomScaler(divide_operation=x) </code></pre> <p>and</p> <pre><code>lambda x: jnp.mean(x[x &gt; 0]) media_scaler = preprocessing.CustomScaler(divide_operation=lambda) </code></pre> <p>Neither of these works.</p>
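The missing piece is that `lambda` is an expression, not a statement: either bind it to a name and pass the name, or write the whole lambda inline inside the call. A minimal sketch, with `numpy` standing in for `jax.numpy` and a hypothetical `scale` function standing in for `preprocessing.CustomScaler`:

```python
import numpy as np

# Option 1: bind the lambda to a name...
masked_mean = lambda x: np.mean(x[x > 0])

# `scale` is a stand-in for CustomScaler: it just calls whatever
# callable it was handed as divide_operation.
def scale(values, divide_operation):
    return values / divide_operation(values)

result = scale(np.array([0.0, 2.0, 4.0]), masked_mean)  # mean of positives is 3
```

With the real library this would presumably read `preprocessing.CustomScaler(divide_operation=lambda x: jnp.mean(x[x > 0]))` (option 2, inline), or `divide_operation=masked_mean` with the binding above.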
<python><function><lambda><mean>
2022-12-14 15:43:24
1
1,890
Nneka
74,800,656
5,137,645
pytorch weighted MSE loss
<p>I wanted to apply a weighted MSE to my pytorch model, but I ran into some spots where I do not know how to adapt it correctly. The original lines of code are:</p> <pre><code>self.mse_criterion = torch.nn.MSELoss(reduction='none') loss_mot_rec = self.mse_criterion(self.fake_noise, self.real_noise).mean(dim=-1) def to(self, device): if self.opt.is_train: self.mse_criterion.to(device) self.encoder = self.encoder.to(device) </code></pre> <p>The function for my weighted MSE loss is:</p> <pre><code>def weighted_mse_loss(input, target): weight=torch.FloatTensor([2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1]) return (weight * (input - target) ** 2) </code></pre> <p>So I am confused how to replace the mse_criterion with my function. Any help would be great. The entire original code can be found <a href="https://github.com/mingyuan-zhang/MotionDiffuse/blob/main/text2motion/trainers/ddpm_trainer.py" rel="nofollow noreferrer">here</a> Thanks</p> <p>I tried what @DerekG suggested and now I am getting this error. 
File &quot;/content/MotionDiffuse/text2motion/trainers/ddpm_trainer.py&quot;,</p> <pre><code> line 159, in to self.mse_criterion.to(device) File &quot;/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py&quot;, line 987, in to return self._apply(convert) File &quot;/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py&quot;, line 638, in _apply for module in self.children(): File &quot;/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py&quot;, line 1792, in children for name, module in self.named_children(): File &quot;/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py&quot;, line 1811, in named_children for name, module in self._modules.items(): File &quot;/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py&quot;, line 1265, in __getattr__ raise AttributeError(&quot;'{}' object has no attribute '{}'&quot;.format( AttributeError: 'weighted_MSELoss' object has no attribute '_modules' </code></pre>
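The traceback above (`'weighted_MSELoss' object has no attribute '_modules'`) is the classic symptom of subclassing `nn.Module` without calling `super().__init__()`, which is where `_modules` gets set up. A dependency-free sketch of the pattern, with a plain `Base` class standing in for `torch.nn.Module` and numpy arrays standing in for tensors:

```python
import numpy as np

class Base:                      # stand-in for torch.nn.Module
    def __init__(self):
        self._modules = {}       # nn.Module initializes this in its __init__

class WeightedMSELoss(Base):
    def __init__(self, weight):
        super().__init__()       # omitting this line reproduces the AttributeError
        self.weight = np.asarray(weight)

    def __call__(self, input, target):
        # per-element weighted squared error, averaged over the last axis,
        # mirroring .mean(dim=-1) in the original code
        return (self.weight * (input - target) ** 2).mean(axis=-1)

criterion = WeightedMSELoss([2.0, 1.0])
loss = criterion(np.array([1.0, 1.0]), np.array([0.0, 0.0]))  # (2*1 + 1*1) / 2
```

In the real code, `weighted_MSELoss` would subclass `nn.Module`, call `super().__init__()` first, and register the weight (e.g. as a buffer) so that `.to(device)` moves it along with the module.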
<python><pytorch>
2022-12-14 15:36:11
1
606
Nikita Belooussov
74,800,580
330,867
Celery pool parameter ignored in setup_defaults?
<p>I have a script that run the Celery Worker like this:</p> <pre><code>if __name__ == '__main__': worker = celery.Worker() worker.setup_defaults( loglevel=logging.INFO, pool='eventlet', concurrency=500 ) worker.start() </code></pre> <p>This launches Celery, as the output is :</p> <pre><code> -------------- celery@some.server.com v5.2.7 (dawn-chorus) --- ***** ----- -- ******* ---- Linux-5.10.0-19-cloud-amd64-x86_64-with-glibc2.31 2022-12-14 15:23:55 - *** --- * --- - ** ---------- [config] - ** ---------- .&gt; app: __main__:0x7fdda296baf0 - ** ---------- .&gt; transport: redis://localhost:6379/6 - ** ---------- .&gt; results: redis://localhost:6379/6 - *** --- * --- .&gt; concurrency: 500 (eventlet) -- ******* ---- .&gt; task events: OFF (enable -E to monitor tasks in this worker) --- ***** ----- -------------- [queues] .&gt; celery exchange=celery(direct) key=celery [tasks] . task1 . task2 . celery.accumulate . celery.backend_cleanup . celery.chain . celery.chord . celery.chord_unlock . celery.chunks . celery.group . celery.map . celery.starmap </code></pre> <p>But, somehow, the processes are running as Fork:</p> <pre><code>[2022-12-14 15:08:00,623: WARNING/ForkPoolWorker-2] - Some print command [2022-12-14 15:08:00,623: WARNING/ForkPoolWorker-1] - Some print command </code></pre> <p>So I thought maybe the concurrency was off, so I tried with gevent. 
It's the same.</p> <p>So I tried something else, I replaced &quot;eventlet&quot; with a random text ; &quot;helloworld&quot;, and here's the output:</p> <pre><code> -------------- celery@some.server.com v5.2.7 (dawn-chorus) --- ***** ----- -- ******* ---- Linux-5.10.0-19-cloud-amd64-x86_64-with-glibc2.31 2022-12-14 15:23:55 - *** --- * --- - ** ---------- [config] - ** ---------- .&gt; app: __main__:0x7fdda296baf0 - ** ---------- .&gt; transport: redis://localhost:6379/6 - ** ---------- .&gt; results: redis://localhost:6379/6 - *** --- * --- .&gt; concurrency: 500 (helloworld) -- ******* ---- .&gt; task events: OFF (enable -E to monitor tasks in this worker) --- ***** ----- -------------- [queues] .&gt; celery exchange=celery(direct) key=celery [tasks] . task1 . task2 . celery.accumulate . celery.backend_cleanup . celery.chain . celery.chord . celery.chord_unlock . celery.chunks . celery.group . celery.map . celery.starmap </code></pre> <p>I mean, what?</p> <p>Celery should fail if the pool isn't correct, but here, nothing happens.</p> <p>What is even weirder is that it was working fine previously and stopped yesterday without any changes on my end at all.</p> <p>Was there some recent updates that affect how the pool is defined?</p>
<python><celery><pool>
2022-12-14 15:30:43
1
40,087
Cyril N.
74,800,210
7,713,770
Comparing two lists with each other and color the difference with django
<p>I have a django application. And I try to mark the difference values in the lists red in the template.</p> <p>So I have some methods with lists inside. Because in the real situation. You can upload a pdf and excel file. But this is just for testing. So that I can use it in the real situation. But the idea is the same.</p> <p>So this are the methods:</p> <pre><code>from django.utils.safestring import mark_safe from tabulate import tabulate class FilterText: def total_cost_fruit(self): return [3588.20, 5018.75, 3488.16] def show_extracted_data_from_file(self): regexes = [self.total_cost_fruit()] matches = [(regex) for regex in regexes] columns = [&quot;kosten fruit&quot;] return mark_safe( tabulate( zip_longest(*matches), # type: ignore headers=columns, tablefmt=&quot;html&quot;, stralign=&quot;center&quot;, ) ) </code></pre> <p>and second class:</p> <pre><code>from django.utils.safestring import mark_safe from tabulate import tabulate class ExtractingTextFromExcel: def init(self): pass def extract_data_excel_combined(self): new_fruit_list = [[i] for i in self.total_fruit_cost()] columns = [&quot;totaal&quot;, &quot;kosten&quot;, &quot;fruit&quot;] return mark_safe(tabulate(new_fruit_list, headers=columns, tablefmt=&quot;html&quot;, stralign=&quot;center&quot;)) def total_fruit_cost(self): dict_fruit = {&quot;Watermeloen&quot;: 3588.10, &quot;Appel&quot;: 5018.40, &quot;Sinaasappel&quot;: 3488.16} fruit_list = list(dict_fruit.values()) #[[i] for i in dict_fruit.values()] print(fruit_list) return fruit_list </code></pre> <p>and the views.py:</p> <pre><code>def test(request): filter_excel = ExtractingTextFromExcel() filter_text = FilterText() compare_data = CompareData() total_fruit_cost_pdf = filter_text.total_cost_fruit() total_fruit_cost_excel = filter_excel.total_fruit_cost() diff_set = compare_data.diff(total_fruit_cost_pdf, total_fruit_cost_excel) print(diff_set) content_excel = &quot;&quot; content_pdf = &quot;&quot; content_pdf = 
filter_text.show_extracted_data_from_file() content_excel = filter_excel.extract_data_excel_combined() context = { &quot;content_pdf&quot;: content_pdf, &quot;content_excel&quot;: content_excel, &quot;diff_set&quot;: diff_set, } return render(request, &quot;main/test.html&quot;, context) </code></pre> <p>and template:</p> <pre><code> &lt;!DOCTYPE html&gt; &lt;html lang=&quot;en&quot;&gt; &lt;head&gt; &lt;meta charset=&quot;UTF-8&quot; /&gt; &lt;meta name=&quot;viewport&quot; content=&quot;width=device-width, initial-scale=1.0&quot; /&gt; &lt;title&gt;Create a Profile&lt;/title&gt; &lt;script src=&quot;https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js&quot;&gt;&lt;/script&gt; &lt;link rel=&quot;stylesheet&quot; type=&quot;text/css&quot; href=&quot;{% static 'main/css/custom-style.css' %}&quot; /&gt; &lt;link rel=&quot;stylesheet&quot; type=&quot;text/css&quot; href=&quot;{% static 'main/css/bootstrap.css' %}&quot; /&gt; &lt;/head&gt; &lt;body&gt; &lt;div class=&quot;container center&quot;&gt; &lt;span class=&quot;form-inline&quot; role=&quot;form&quot;&gt; &lt;div class=&quot;inline-div&quot;&gt; &lt;form class=&quot;form-inline&quot; action=&quot;controlepunt140&quot; method=&quot;POST&quot; enctype=&quot;multipart/form-data&quot;&gt; &lt;div class=&quot;d-grid gap-3&quot;&gt; &lt;div class=&quot;form-group&quot;&gt; {% csrf_token %} {{ pdf_form.as_p }} &lt;/div&gt; &lt;div class=&quot;form-outline&quot;&gt; &lt;div class=&quot;form-group&quot;&gt; &lt;div class=&quot;wishlist&quot;&gt; {% for value in content_pdf %} &lt;span {% if value in diff_set %} style=&quot;color: red;&quot; {% endif %}&gt; {{value}}&lt;/span&gt; {% endfor %} &lt;/div&gt; &lt;/div&gt; &lt;/div&gt; &lt;/div&gt; &lt;/div&gt; &lt;/span&gt; &lt;span class=&quot;form-inline&quot; role=&quot;form&quot;&gt; &lt;div class=&quot;inline-div&quot;&gt; &lt;form class=&quot;form-inline&quot; action=&quot;controlepunt140&quot; method=&quot;POST&quot; 
enctype=&quot;multipart/form-data&quot;&gt; &lt;div class=&quot;d-grid gap-3&quot;&gt; &lt;div class=&quot;form-group&quot;&gt; {% csrf_token %} {{ excel_form.as_p }} &lt;/div&gt; &lt;div class=&quot;form-outline&quot;&gt; &lt;div class=&quot;form-group&quot;&gt; &lt;div class=&quot;wishlist&quot;&gt; {% for value in content_excel %} &lt;span {% if value in diff_set %} style=&quot;color: red;&quot; {% endif %}&gt; {{value}}&lt;/span&gt; {% endfor %} &lt;/div&gt; &lt;/div&gt; &lt;/div&gt; &lt;/div&gt; &lt;/div&gt; &lt;/span&gt; &lt;span class=&quot;form-inline&quot; role=&quot;form&quot;&gt; &lt;div class=&quot;inline-div&quot;&gt; &lt;div class=&quot;d-grid gap-3&quot;&gt; &lt;div class=&quot;form-group&quot;&gt; &lt;/br&gt;&lt;/br&gt;&lt;/br&gt;&lt;/br&gt;&lt;/br&gt;&lt;/br&gt;&lt;/br&gt;&lt;/br&gt;&lt;/br&gt; &lt;button type=&quot;submit&quot; name=&quot;form_pdf&quot; class=&quot;btn btn-warning&quot;&gt;Upload!&lt;/button&gt; &lt;/div&gt; &lt;/div&gt; &lt;/div&gt; &lt;/span&gt; &lt;/form&gt; &lt;/div&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <p>So if I do a print statement in the views.py method. 
Then I see the correct differences:</p> <pre><code>{5018.75, 3588.2, 3588.1, 5018.4} </code></pre> <p>But in the template is nothing to see.</p> <p>Question: how to mark the different values red?</p> <p>if you do in the template:</p> <pre><code>&lt;div class=&quot;wishlist&quot;&gt; {{content_pdf}} &lt;/div&gt; &lt;div class=&quot;wishlist&quot;&gt; {{content_excel}} &lt;/div&gt; </code></pre> <p>Then it looks like:</p> <pre><code>kosten fruit totaal 3588.2 3588.1 5018.75 5018.4 3488.16 3488.16 </code></pre> <p>and so 3588.2, 3588.1 has to be colored red and 5018.75, 5018.4 has to be colored red.</p> <p>if I do <code> print(content_pdf)</code></p> <p>this is output:</p> <pre><code>&lt;table&gt; &lt;thead&gt; &lt;tr&gt;&lt;th style=&quot;text-align: right;&quot;&gt; kosten fruit&lt;/th&gt;&lt;/tr&gt; &lt;/thead&gt; &lt;tbody&gt; &lt;tr&gt;&lt;td style=&quot;text-align: right;&quot;&gt; 3588.2 &lt;/td&gt;&lt;/tr&gt; &lt;tr&gt;&lt;td style=&quot;text-align: right;&quot;&gt; 5018.75&lt;/td&gt;&lt;/tr&gt; &lt;tr&gt;&lt;td style=&quot;text-align: right;&quot;&gt; 3488.16&lt;/td&gt;&lt;/tr&gt; &lt;/tbody&gt; &lt;/table&gt; </code></pre> <p>So it prints literally the html in the template</p>
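One reason nothing turns red: `content_pdf` is a single HTML string, so `{% for value in content_pdf %}` iterates over its individual characters, which never match the floats in `diff_set`. A sketch of doing the comparison in Python before rendering, so each cell reaches the template with its own flag (values taken from the question):

```python
pdf_vals = [3588.20, 5018.75, 3488.16]
excel_vals = [3588.10, 5018.40, 3488.16]

# Values present in one list but not the other.
diff_set = set(pdf_vals).symmetric_difference(excel_vals)

def annotate(values, diff_set):
    """Pair each value with a CSS class the template can render directly."""
    return [{"value": v, "css": "red" if v in diff_set else ""} for v in values]

pdf_cells = annotate(pdf_vals, diff_set)
excel_cells = annotate(excel_vals, diff_set)
```

The template would then loop over `pdf_cells` and emit something like `<td class="{{ cell.css }}">{{ cell.value }}</td>` itself, instead of iterating over a pre-rendered `tabulate` string.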
<python><django>
2022-12-14 15:01:57
2
3,991
mightycode Newton
74,800,045
5,675,325
Extend social pipeline and prevent a specific function to run during tests
<p>I'm using <a href="https://python-social-auth.readthedocs.io/en/latest/configuration/django.html" rel="nofollow noreferrer">Python Django Social Auth</a> and extended the pipeline with the following three steps</p> <ol> <li>One before the user is created (<a href="https://python-social-auth.readthedocs.io/en/latest/pipeline.html#partial-pipeline" rel="nofollow noreferrer">partial pipeline</a>) requesting some data.</li> <li>One for the user creation (overrides the <code>social.pipeline.user.create_user</code> method).</li> <li>One after the user is created.</li> </ol> <p>Here's how the <a href="https://python-social-auth.readthedocs.io/en/latest/configuration/django.html#personalized-configuration" rel="nofollow noreferrer">pipeline</a> currently looks like</p> <pre><code>SOCIAL_AUTH_PIPELINE = ( 'social_core.pipeline.social_auth.social_details', 'social_core.pipeline.social_auth.social_uid', 'social_core.pipeline.social_auth.social_user', 'myapp.file.before_user_is_created', 'myapp.file.create_user', 'social_core.pipeline.social_auth.associate_user', 'myapp.file.after_user_creation', 'social_core.pipeline.social_auth.load_extra_data', 'social_core.pipeline.user.user_details', ) </code></pre> <hr /> <p>In order to test it, I'm following <a href="https://github.com/python-social-auth/social-app-django/blob/master/tests/test_views.py#L29" rel="nofollow noreferrer">similar logic to the one used here</a>. 
This is what I have</p> <pre><code>@mock.patch(&quot;social_core.backends.base.BaseAuth.request&quot;) def test_complete(self, mock_request): url = reverse(&quot;social:complete&quot;, kwargs={&quot;backend&quot;: &quot;facebook&quot;}) url += &quot;?code=2&amp;state=1&quot; mock_request.return_value.json.return_value = {&quot;access_token&quot;: &quot;123&quot;} with mock.patch( &quot;django.contrib.sessions.backends.base.SessionBase&quot; &quot;.set_expiry&quot;, side_effect=[OverflowError, None], ): response_1 = self.client.get(url) self.assertEqual(response_1.status_code, 302) self.assertEqual(response_1.url, &quot;/before-user-is-created/&quot;) response_2 = self.client.post(&quot;/before-user-is-created/&quot;, {&quot;some_keys&quot;: &quot;some_values&quot;}) self.assertEqual(response_2.status_code, 302) self.assertEqual(response_2.url, &quot;/social-auth/complete/facebook/&quot;) response_3 = self.client.post(&quot;/social-auth/complete/facebook/&quot;) return response_3 </code></pre> <p>For step 1, I have a url (<code>/before-user-is-created/</code>) and a specific view. So, I get that view and I'm able to act on it when running</p> <pre><code>response_1 = self.client.get(url) </code></pre> <p>as you can see from the <code>self.assertEqual(response_1.url, &quot;/before-user-is-created/&quot;)</code> and from <code>response_2 = self.client.post(&quot;/before-user-is-created/&quot;, {&quot;some_keys&quot;: &quot;some_values&quot;})</code>.</p> <p>The problem is with step 3. That is essentially a function (<code>after_user_creation()</code>) that calls another one (<code>function_called()</code>)</p> <pre><code>def after_user_creation(user, *args, **kwargs): ... 
function_called(something_from_user) </code></pre> <p>That function is called in this part during the test (together with <code>load_extra_data</code> and <code>user_details</code> (the ones coming after it in the pipeline))</p> <pre><code>response_2 = self.client.post(&quot;/before-user-is-created/&quot;, {&quot;some_keys&quot;: &quot;some_values&quot;}) ... response_3 = self.client.post(&quot;/social-auth/complete/facebook/&quot;) ... </code></pre> <p>How to prevent <code>function_called(something_from_user)</code> to run during tests?</p>
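A sketch of the standard fix: patch `function_called` in the module where it is *looked up*, so the pipeline step still runs but the side effect does not. Both functions are defined locally here to keep the example self-contained; in the real test the target would presumably be `mock.patch("myapp.file.function_called")` wrapped around the `self.client.post(...)` calls.

```python
from unittest import mock

def function_called(something):          # stand-in for the real side effect
    raise RuntimeError("must not run during tests")

def after_user_creation(user, *args, **kwargs):
    # the pipeline step under test
    function_called(user)

# Patch where the name is looked up (this module), not where it is defined.
with mock.patch(f"{__name__}.function_called") as fake:
    after_user_creation("alice")         # runs without the RuntimeError

fake.assert_called_once_with("alice")
```

The same patch can also be applied as a method decorator on `test_complete`, alongside the existing `BaseAuth.request` patch.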
<python><django><authentication><django-testing><python-social-auth>
2022-12-14 14:49:24
1
15,859
Tiago Peres
74,799,995
1,522,342
Flake8: how to select all lints
<p>According to <a href="https://flake8.pycqa.org/en/6.0.0/user/options.html#cmdoption-flake8-select" rel="nofollow noreferrer">https://flake8.pycqa.org/en/6.0.0/user/options.html#cmdoption-flake8-select</a>:</p> <blockquote> <p>--select=&lt;errors&gt;</p> <p>Specify the list of error codes you wish Flake8 to report. Similarly to --ignore. You can specify a portion of an error code to get all that start with that string. For example, you can use E, E4, E43, and E431.</p> <p>This defaults to: E,F,W,C90</p> </blockquote> <p>I'm currently using:</p> <blockquote> <p>select = B,C,E,F,W,T4,B9,N8,E4</p> </blockquote> <p>My question is: <strong>is there any shortcut to select all lints?</strong> I want this in order to write a bot (POC) that auto-reports issues (possibly ignoring project preferences), and I don't want to release a new version of the bot whenever a new code is added to flake8.</p> <p>I'm expecting something simple like <code>--select='*'</code>.</p>
<python><lint><flake8>
2022-12-14 14:45:15
2
2,453
iuridiniz
74,799,751
5,917,999
How bind my predictions trained on a subset, back to the original DF?
<p>I am making predictions on a feature-engineered training set, without any identification key. How can I merge my predictions back to the original df?</p> <p>Original_DF</p> <pre><code>ID. ColumnB. ColumnC. ColumnD. Target A 2 3 1 8 B 2 3 1 9 C 2 3 1 6 </code></pre> <p>Then I trained my model on ColumnC and ColumnD, resulting in:</p> <pre><code>Subset_to_use = ['ColumnC', 'ColumnD', 'Target'] .... # Creating train / test, resulting in train and test sets for X and y: X_train, y_train X_test, y_test # Then doing the modelling, simplified: rf = RandomForestRegressor(n_estimators = 100) rf.fit(X_train, y_train) </code></pre> <p>Then the question: <strong>how can I bind the predictions back to the original df, since there is no ID column in it anymore?</strong></p> <p>Training df:</p> <pre><code>ColumnC. ColumnD. Target 3 1 8 3 1 9 3 1 6 </code></pre> <p>My thinking so far:</p> <pre><code># Add the predictions to the df X_train['Prediction_TEST'] = y_train # to have the original values X_test['Prediction_TEST'] = rf.predict(X_test) # to have the predicted values </code></pre> <p>and then to combine the above, like:</p> <pre><code>all_data = pd.concat([X_train, X_test]) </code></pre> <p>However, this only gives the training and testing DFs with the new predictions, but WITHOUT the other original columns (e.g., ID and ColumnB).</p> <p>What is the best way to solve this? Thank you!</p> <p>Desired outcome (predicted values are made up):</p> <pre><code>ID. ColumnB. ColumnC. ColumnD. Target Predicted A 2 3 1 8 8 B 2 3 1 9 10 C 2 3 1 6 7 </code></pre>
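One approach that avoids needing an ID at all: pandas preserves the original row index through column selection and through `train_test_split`, so predictions can be written back purely by index alignment. A sketch with a hand-made split standing in for the real one:

```python
import pandas as pd

df = pd.DataFrame({
    "ID": ["A", "B", "C"],
    "ColumnB": [2, 2, 2],
    "ColumnC": [3, 3, 3],
    "ColumnD": [1, 1, 1],
    "Target": [8, 9, 6],
})

X = df[["ColumnC", "ColumnD"]]     # the original index survives this selection
X_test = X.iloc[[1]]               # stand-in for train_test_split's X_test

# Predictions carry X_test's index, so join() puts them on the right rows
# of the full frame, ID column and all.
preds = pd.Series([10], index=X_test.index, name="Predicted")
out = df.join(preds)
```

With a real model, `preds` would be `pd.Series(rf.predict(X_test), index=X_test.index, name="Predicted")`; training-set rows can be filled the same way from `y_train`.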
<python><pandas><scikit-learn>
2022-12-14 14:26:12
1
1,346
R overflow
74,799,676
2,146,894
How to run a "hello world" python script with Google Cloud Run
<p>Forgive my ignorance..</p> <p>I'm trying to learn how to schedule python scripts with Google Cloud. After a bit of research, I've seen many people suggest Docker + <a href="https://cloud.google.com/run" rel="nofollow noreferrer">Google Cloud Run</a> + <a href="https://cloud.google.com/scheduler" rel="nofollow noreferrer">Cloud Scheduler</a>. I've attempted to get a &quot;hello world&quot; example working, to no avail.</p> <h2>Code</h2> <p><strong>hello.py</strong></p> <pre><code>print(&quot;hello world&quot;) </code></pre> <p><strong>Dockerfile</strong></p> <pre><code># For more information, please refer to https://aka.ms/vscode-docker-python FROM python:3.8-slim # Keeps Python from generating .pyc files in the container ENV PYTHONDONTWRITEBYTECODE=1 # Turns off buffering for easier container logging ENV PYTHONUNBUFFERED=1 WORKDIR /app COPY . /app # Creates a non-root user with an explicit UID and adds permission to access the /app folder # For more info, please refer to https://aka.ms/vscode-docker-python-configure-containers RUN adduser -u 5678 --disabled-password --gecos &quot;&quot; appuser &amp;&amp; chown -R appuser /app USER appuser # During debugging, this entry point will be overridden. For more information, please refer to https://aka.ms/vscode-docker-python-debug CMD [&quot;python&quot;, &quot;hello.py&quot;] </code></pre> <h2>Steps</h2> <ol> <li><p><a href="https://cloud.google.com/artifact-registry/docs/docker/store-docker-container-images#gcloud" rel="nofollow noreferrer">Create a repo with Google Cloud Artifact Registry</a></p> <pre><code>gcloud artifacts repositories create test-repo --repository-format=docker \ --location=us-central1 --description=&quot;My test repo&quot; </code></pre> </li> <li><p>Build the image</p> <pre><code>docker image build --pull --file Dockerfile --tag 'testdocker:latest' . 
</code></pre> </li> <li><p><a href="https://cloud.google.com/artifact-registry/docs/docker/store-docker-container-images#gcloud" rel="nofollow noreferrer">Configure auth</a></p> <pre><code>gcloud auth configure-docker us-central1-docker.pkg.dev </code></pre> </li> <li><p><a href="https://cloud.google.com/artifact-registry/docs/docker/store-docker-container-images#gcloud" rel="nofollow noreferrer">Tag the image with a registry name</a></p> <pre><code>docker tag testdocker:latest \ us-central1-docker.pkg.dev/gormanalysis/test-repo/testdocker:latest </code></pre> </li> <li><p><a href="https://cloud.google.com/artifact-registry/docs/docker/store-docker-container-images#push" rel="nofollow noreferrer">Push the image to Artifact Registry</a></p> <pre><code>docker push us-central1-docker.pkg.dev/gormanalysis/test-repo/testdocker:latest </code></pre> <p><a href="https://i.sstatic.net/TwqtR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TwqtR.png" alt="enter image description here" /></a></p> </li> <li><p>Deploy to Google Cloud Run</p> <p><a href="https://i.sstatic.net/pdbuB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pdbuB.png" alt="enter image description here" /></a> <a href="https://i.sstatic.net/oDBGO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oDBGO.png" alt="enter image description here" /></a> <a href="https://i.sstatic.net/Z6F3n.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Z6F3n.png" alt="enter image description here" /></a></p> </li> </ol> <h2>Error</h2> <p>At this point, I get the error</p> <blockquote> <p>The user-provided container failed to start and listen on the port defined provided by the PORT=8080 environment variable.</p> </blockquote> <p>I've seen posts like <a href="https://stackoverflow.com/questions/55662222/container-failed-to-start-failed-to-start-and-then-listen-on-the-port-defined-b">this</a> which say to add</p> <pre><code>app.run(port=int(os.environ.get(&quot;PORT&quot;, 
8080)),host='0.0.0.0',debug=True) </code></pre> <p>but this looks like a flask thing, and my script doesn't use flask. I feel like i have a fundamental misunderstanding of how this is supposed to work. Any help would be appreciated it.</p>
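Cloud Run (services) requires the container to listen for HTTP on `$PORT`; a `print`-and-exit script fails that health check, which is exactly the error shown (Cloud Run *jobs* or Cloud Scheduler plus Cloud Functions fit one-shot scripts more naturally). For completeness, a dependency-free sketch of the smallest server that satisfies the check, using only the standard library instead of Flask:

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"hello world")

def make_server():
    # Cloud Run injects PORT; default to 8080 for local runs.
    port = int(os.environ.get("PORT", 8080))
    return HTTPServer(("0.0.0.0", port), Handler)

# make_server().serve_forever() would be invoked by the container's CMD.
```

With this as the Dockerfile's `CMD`, the revision starts listening on the expected port and the deployment health check passes.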
<python><docker><google-cloud-run>
2022-12-14 14:20:56
2
21,881
Ben
74,799,672
5,535,747
SQLAlchemy Joining 2 CTE Results in Ambiguous ON Clause
<p>I'm attempting retrieve a max version for either a published version or, if a resource has no published versions, the highest version. I'm using 3 CTE to find these values, one to get the max version that is published, a second to get the max version overall, and lastly a third to do an outer join which produces the highest published version if it exists, if not the highest version.</p> <p>The issue I'm having in SQLAlchemy is attempting to join the first 2 CTE so that I can produce a single result for each parent of the versions.</p> <p>The expected query looks like:</p> <pre class="lang-sql prettyprint-override"><code>WITH highest_published AS ( SELECT parent_id AS parent_id, MAX(subversion) AS m_version FROM child_version WHERE published AND NOT deleted GROUP BY parent_id ), highest_unpublished AS ( SELECT parent_id AS parent_id, MAX(subversion) AS m_version FROM child_version WHERE NOT deleted GROUP BY parent_id ), max_versions AS ( SELECT CASE WHEN hp.parent_id IS NOT NULL THEN hp.parent_id ELSE hu.parent_id END AS parent_id, CASE WHEN hp.m_version IS NOT null THEN hp.m_version ELSE hu.m_version END AS m_version FROM highest_unpublished AS hu LEFT OUTER JOIN highest_published AS hp ON hp.parent_id=hu.parent_id ) SELECT child_version.id, child_version.parent_id FROM child_version JOIN max_versions ON child_version.parent_id=max_versions.parent_id AND child_version.subversion=max_versions.m_version ORDER BY child_version.parent_id </code></pre> <p>This is the SA code using the ORM I would expect to produce this:</p> <pre class="lang-py prettyprint-override"><code>from sqlalchemy.dialects.postgresql import UUID import uuid class ChildVersion(db.Model): id = db.Column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4) deleted = db.Column(db.Boolean, default=False, nullable=False) parent_id = db.Column(UUID(as_uuid=True), db.ForeignKey(Parent.id), nullable=True, index=True) subversion = db.Column(db.Integer, default=0, nullable=False) published = 
db.Column(db.Boolean, default=False, nullable=False) highest_published_version = ChildVersion.query.with_entities( ChildVersion.parent_id.label('parent_id'), sa.func.max(ChildVersion.subversion).label('m_version'), ).filter( ChildVersion.published, ~ChildVersion.deleted ).group_by(ChildVersion.parent_id).cte(name='highest_published') highest_unpublished_version = ChildVersion.query.with_entities( ChildVersion.parent_id.label('parent_id'), sa.func.max(ChildVersion.subversion).label('m_version'), ).filter( ~ChildVersion.deleted ).group_by(ChildVersion.parent_id).cte(name='highest_unpublished') versions = db.session.query(highest_unpublished_version).with_entities( sa.case( (highest_published_version.c.parent_id.is_not(None), highest_published_version.c.parent_id), else_=highest_unpublished_version.c.parent_id).label('parent_id'), sa.case( (highest_published_version.c.m_version.is_not(None), highest_published_version.c.m_version), else_=highest_unpublished_version.c.m_version).label('m_version'), ).join(highest_published_version, sa.and_(highest_unpublished_version.c.parent_id==highest_published_version.c.parent_id, highest_unpublished_version.c.m_version==highest_published_version.c.m_version), isouter=True ).cte(name='max_versions') </code></pre> <p>However I receive an error where my join is ambiguous:</p> <pre><code>Don't know how to join to &amp;lt;sqlalchemy.sql.selectable.CTE at 0x10b2c3ee0; highest_published&amp;gt;. Please use the .select_from() method to establish an explicit left side, as well as providing an explicit ON clause if not present already to help resolve the ambiguity. 
</code></pre> <p>Using the sqlalchemy.select to attempt to join the CTE results in invalid SQL.</p> <pre class="lang-py prettyprint-override"><code>highest_published_version = ChildVersion.query.with_entities( ChildVersion.parent_id.label('parent_id'), sa.func.max(ChildVersion.subversion).label('m_version'), ).filter( ChildVersion.published, ~ChildVersion.deleted ).group_by(ChildVersion.parent_id).cte(name='highest_published') highest_unpublished_version = ChildVersion.query.with_entities( ChildVersion.parent_id.label('parent_id'), sa.func.max(ChildVersion.subversion).label('m_version'), ).filter( ~ChildVersion.deleted ).group_by(ChildVersion.parent_id).cte('highest_unpublished') versions = db.session.query(highest_unpublished_version).with_entities( sa.case( (highest_published_version.c.parent_id.is_not(None), highest_published_version.c.parent_id), else_=highest_unpublished_version.c.parent_id).label('parent_id'), sa.case( (highest_published_version.c.m_version.is_not(None), highest_published_version.c.m_version), else_=highest_unpublished_version.c.m_version).label('m_version'), ).join(sa.select(highest_published_version), sa.and_(highest_unpublished_version.c.parent_id==highest_published_version.c.parent_id, highest_unpublished_version.c.m_version==highest_published_version.c.m_version), isouter=True ).cte(name='max_versions') versions_with_ids = ChildVersion.query.with_entities( ChildVersion.id ).join( versions, sa.and_(versions.c.parent_id==ChildVersion.parent_id, versions.c.m_version==ChildVersion.subversion) ) </code></pre> <pre><code>(psycopg2.errors.UndefinedTable) invalid reference to FROM-clause entry for table &quot;highest_unpublished&quot; LINE 28: FROM highest_published) AS anon_4 ON highest_unpublished.com... ^ HINT: There is an entry for table &quot;highest_unpublished&quot;, but it cannot be referenced from this part of the query. 
[SQL: WITH highest_unpublished AS (SELECT child_version.parent_id AS parent_id, max(child_version.subversion) AS m_version FROM child_version WHERE NOT child_version.deleted GROUP BY child_version.parent_id), highest_published AS (SELECT child_version.parent_id AS parent_id, max(child_version.subversion) AS m_version FROM child_version WHERE child_version.published AND NOT child_version.deleted GROUP BY child_version.parent_id), max_versions AS (SELECT CASE WHEN (highest_published.parent_id IS NOT NULL) THEN highest_published.parent_id ELSE highest_unpublished.parent_id END AS parent_id, CASE WHEN (highest_published.m_version IS NOT NULL) THEN highest_published.m_version ELSE highest_unpublished.m_version END AS m_version FROM highest_unpublished, highest_published LEFT OUTER JOIN (SELECT highest_published.parent_id AS parent_id, highest_published.m_version AS m_version FROM highest_published) AS anon_4 ON highest_unpublished.parent_id = highest_published.parent_id AND highest_unpublished.m_version = highest_published.m_version) SELECT child_version.id FROM child_version JOIN max_versions ON max_versions.parent_id = child_version.parent_id AND max_versions.m_version = child_version.subversion) AND NOT child_version.deleted GROUP BY child_version.parent_id, child_version.id] (Background on this error at: https://sqlalche.me/e/14/f405) </code></pre> <p>Any help would be greatly appreciated!</p>
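A sketch of the usual fix for the ambiguity, assuming SQLAlchemy 1.4+: start the statement explicitly with `select_from()` and spell the outer join out there, instead of letting `Query.join()` guess a left side. `coalesce` replaces the two `CASE` pairs, and a cut-down Core table against in-memory SQLite stands in for `ChildVersion` (the `deleted` filter is omitted for brevity):

```python
import sqlalchemy as sa

engine = sa.create_engine("sqlite://")
md = sa.MetaData()
cv = sa.Table(
    "child_version", md,
    sa.Column("id", sa.Integer, primary_key=True),
    sa.Column("parent_id", sa.Integer),
    sa.Column("subversion", sa.Integer),
    sa.Column("published", sa.Boolean),
)
md.create_all(engine)
with engine.begin() as conn:
    conn.execute(cv.insert(), [
        {"parent_id": 1, "subversion": 1, "published": True},
        {"parent_id": 1, "subversion": 2, "published": False},
        {"parent_id": 2, "subversion": 1, "published": False},
    ])

hp = (
    sa.select(cv.c.parent_id, sa.func.max(cv.c.subversion).label("m_version"))
    .where(cv.c.published).group_by(cv.c.parent_id).cte("highest_published")
)
hu = (
    sa.select(cv.c.parent_id, sa.func.max(cv.c.subversion).label("m_version"))
    .group_by(cv.c.parent_id).cte("highest_unpublished")
)

# select_from() pins the left side, so the outer join is unambiguous.
max_versions = (
    sa.select(
        sa.func.coalesce(hp.c.parent_id, hu.c.parent_id).label("parent_id"),
        sa.func.coalesce(hp.c.m_version, hu.c.m_version).label("m_version"),
    )
    .select_from(hu.outerjoin(hp, hu.c.parent_id == hp.c.parent_id))
    .cte("max_versions")
)

with engine.connect() as conn:
    rows = sorted(
        tuple(r)
        for r in conn.execute(
            sa.select(max_versions.c.parent_id, max_versions.c.m_version)
        )
    )
```

Parent 1 has a published version, so its published max (1) wins over the overall max (2); parent 2 has none, so its overall max is used. The final statement would then join `max_versions` back to `child_version` as in the question.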
<python><postgresql><sqlalchemy>
2022-12-14 14:20:32
2
301
BusinessFawn
74,799,642
3,805,467
sqlalchemy filter on date + offset
<p>what I try to do is to create a query that finds all records where a date is greater than today - days_before_activation.</p> <p>For this I use a @hybrid_property that shows the correct start_day (today - days_before_activation).</p> <p>The issue is, that timedelta does not work in filter queries, at least with sqllite. Error:</p> <blockquote> <p>E TypeError: unsupported type for timedelta days component: InstrumentedAttribute</p> </blockquote> <p>For this I created an additional column expression (@start.expression). To add the days using plain sql. Unfortunately it does not work when I use the cls.days_before_activation value as part of the expression. The result is always None. However, when I hardcode the integer value it does work. So it looks like I'm doing something wrong with using the value of the property as part of the expression.</p> <p>Not Working:</p> <pre><code>@start.expression def start(cls): return func.date(datetime.today(), f'+{cls.days_before_activation} days') </code></pre> <p>Working:</p> <pre><code>@start.expression def start(cls): return func.date(datetime.today(), '+ 10 days') </code></pre> <p>Any help are much appreciated.</p> <pre><code>class DefPayoutOffer(db.Model): __tablename__ = 'def_payout_option_offer' id = db.Column(db.Integer, primary_key=True) visual_text = db.Column(db.String(1000), unique=False, nullable=False) days_before_activation = db.Column(db.Integer, nullable=True) @hybrid_property def start(self): return datetime.today() - timedelta(days=self.days_before_activation) @start.expression def start(cls): return func.date(datetime.today(), f'+{cls.days_before_activation} days') Query: DefPayoutOffer.query.filter(DefPayoutOffer.start &gt; created_date).all() </code></pre> <p>UPDATE:</p> <p>After finding the solution with the help of python_user. I had the challenge to make it work for different dialects.</p> <p>For this I had to move away from the column expression approach to creating a expression.FunctionElement. 
The advantage is that it can be made dialect-specific.</p> <pre><code>class StartCheck(expression.FunctionElement): name = 'start_check' inherit_cache = True @compiles(StartCheck, 'otherDialect') def compile(element, compiler, **kw): return &quot;foo&quot; @compiles(StartCheck, 'sqlite') def compile(element, compiler, **kw): return compiler.process(func.date('now', '+' + expression.cast(list(element.clauses)[0], Unicode) + ' days')) DefPayoutOffer.query.filter(StartCheck(DefPayoutOffer.days_before_activation) &gt;= date.today()).first() </code></pre>
<python><sqlalchemy>
2022-12-14 14:18:19
1
1,355
shalama
74,799,588
5,877,122
Insert a text transformation between open and json.load
<p>I currently have the following code:</p> <pre><code> with open(filename) as f: data = json.load(f) </code></pre> <p>I need to transform the content with some string replacements on the fly. So something like:</p> <pre><code> def repair(???): # The function to write, with at least a call of string or bytes replacement like content.replace(&quot;abc&quot;, &quot;Def&quot;) return ???? with open(filename) as f: data = json.load(repair(f)) </code></pre> <p>How can I do that?</p>
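One way that may work, assuming the replacements can be applied to the whole file content at once: read the file, transform the text, and hand `json.load` a file-like `io.StringIO` wrapper. A minimal sketch (the `"abc"` → `"Def"` replacement is just the placeholder from the question):

```python
import io
import json

def repair(f):
    """Read the raw text, apply replacements, and return a file-like
    object that json.load() can consume. The "abc" -> "Def" replacement
    is just the placeholder from the question."""
    content = f.read().replace("abc", "Def")
    return io.StringIO(content)

# io.StringIO stands in for open(filename) to keep the sketch runnable:
with io.StringIO('{"abc": 1}') as f:
    data = json.load(repair(f))
print(data)  # {'Def': 1}
```

For very large files a streaming wrapper would be needed instead of reading everything into memory at once.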
<python>
2022-12-14 14:12:51
1
3,495
Benjamin
74,799,408
493,080
Why doesn't Pandas use the same memory block for reading into the same Dataframe?
<p>I read a big CSV file into a Dataframe in a Jupyter notebook with:</p> <pre><code>df = pd.read_csv(my_file) df.info() &gt; memory usage: 10.7+ GB </code></pre> <p>When I execute the same cell again, the total memory usage of my system increases. After I repeat this a few times, the Jupyter kernel eventually dies.</p> <p>I would expect Python to release the memory before loading new data into the same variable, or to release the memory once it finishes loading. Why does the memory usage keep increasing? How can I make Python return that memory back to the system?</p>
<python><pandas><jupyter>
2022-12-14 13:59:32
1
3,915
mustafa
74,799,224
11,261,546
Import elements from multiple submodules
<p>I have a Python project with a package, where the tree looks like this:</p> <pre><code>my_package ├── __init__.py ├── A.py └── B.py </code></pre> <p>I would like to import several objects from <code>A</code> and <code>B</code> at once (from different files in the same command). Is something &quot;like&quot; this possible?</p> <pre><code>from my_package import ( # or some other syntax of course A_object_1, A_object_2, B_object_1 ) </code></pre> <p>Thanks</p>
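A common pattern that should make exactly this import possible is to re-export the names in `my_package/__init__.py`. The sketch below builds the package in a temporary directory only so the example is self-contained; in a real project the files already exist, and the object names are made up:

```python
import os
import sys
import tempfile

# Build the package layout from the question in a temporary directory so
# the sketch is self-contained; the object names below are invented.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "my_package")
os.makedirs(pkg)
with open(os.path.join(pkg, "A.py"), "w") as f:
    f.write("A_object_1 = 'a1'\nA_object_2 = 'a2'\n")
with open(os.path.join(pkg, "B.py"), "w") as f:
    f.write("B_object_1 = 'b1'\n")

# The key part: re-export the names in my_package/__init__.py.
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write("from my_package.A import A_object_1, A_object_2\n"
            "from my_package.B import B_object_1\n")

sys.path.insert(0, root)
from my_package import A_object_1, A_object_2, B_object_1  # noqa: E402
print(A_object_1, A_object_2, B_object_1)  # a1 a2 b1
```

With those two lines in `__init__.py`, the exact `from my_package import (...)` syntax from the question works.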
<python><module><package>
2022-12-14 13:45:24
0
1,551
Ivan
74,799,058
12,301,726
Bigquery import asks for pyparsing in shell run
<p>I get this error <strong>&quot;ImportError: The 'pyparsing' package is required&quot;</strong> after trying to run a .py file with the <code>from google.cloud import bigquery</code> line. The import was working before and is still working in the Jupyter Notebook or in IPython.</p> <p>I looked at existing options here and tried:</p> <ol> <li>pip install pyparsing</li> <li>downgrade setuptools</li> <li>uninstall pyparsing and setuptools and install them back</li> <li>uninstall and purge pip and install it back</li> </ol> <p>Does anyone have suggestions? Thanks</p>
<python><google-bigquery><setuptools><pyparsing>
2022-12-14 13:32:55
1
1,250
poloniki
74,798,991
11,809,811
Creating a responsive layout, avoiding an infinite Configure loop
<p>I am trying to create a responsive layout where the content of the app changes depending on the width of the window (basically like any website works). The code I have so far is this:</p> <pre><code>import tkinter as tk from tkinter import ttk class App(tk.Tk): def __init__(self, start_size, min_size): super().__init__() self.title('Responsive layout') self.geometry(f'{start_size[0]}x{start_size[1]}') self.minsize(min_size[0],min_size[1]) self.frame = ttk.Frame(self) self.frame.pack(expand = True, fill = 'both') size_notifier = SizeNotifier(self, {300: self.create_small_layout, 600: self.create_medium_layout}, min_size[0]) self.mainloop() def create_small_layout(self): self.frame.pack_forget() self.frame = ttk.Frame(self) ttk.Label(self.frame, text = 'Label 1', background = 'red').pack(expand = True, fill = 'both') ttk.Label(self.frame, text = 'Label 2', background = 'green').pack(expand = True, fill = 'both') ttk.Label(self.frame, text = 'Label 3', background = 'blue').pack(expand = True, fill = 'both') ttk.Label(self.frame, text = 'Label 4', background = 'yellow').pack(expand = True, fill = 'both') self.frame.pack(expand = True, fill = 'both') def create_medium_layout(self): self.frame.pack_forget() self.frame = ttk.Frame(self) self.frame.columnconfigure((0,1), weight = 1, uniform = 'a') self.frame.rowconfigure((0,1), weight = 1, uniform = 'a') self.frame.pack(expand = True, fill = 'both') ttk.Label(self.frame, text = 'Label 1', background = 'red').grid(column = 0, row = 0, sticky = 'nsew') # sticky = 'nsew' causes configure loop ttk.Label(self.frame, text = 'Label 2', background = 'green').grid(column = 1, row = 0) ttk.Label(self.frame, text = 'Label 3', background = 'blue').grid(column = 0, row = 1) ttk.Label(self.frame, text = 'Label 4', background = 'yellow').grid(column = 1, row = 1) class SizeNotifier: def __init__(self, window, size_dict, min_width): self.window = window self.min_width = min_width self.size_dict = {key: value for key, value in 
sorted(size_dict.items())} self.current_min_size = None self.window.bind('&lt;Configure&gt;', self.check) def check(self, event): checked_size = None window_width = event.width if window_width &gt;= self.min_width: for min_size in self.size_dict: delta = window_width - min_size if delta &gt;= 0: checked_size = min_size if checked_size != self.current_min_size: self.current_min_size = checked_size print(self.current_min_size) # infinite loop visible here -&gt; print never stops self.size_dict[self.current_min_size]() app = App((400,300), (300,300)) </code></pre> <p>The basic idea is this: There are 2 functions that create different layouts (create_small_layout and create_medium_layout) and the app hides/reveals one of them depending on the app width. Getting the width of the application is handled by the SizeNotifier class, the logic in there was added to only trigger a layout build function when a new minimum size was reached.</p> <p>This entire thing works to a degree: In the create_medium_layout function, where I use the grid layout methods, things do work without the sticky argument. However, once I stick a widget to all sides of a cell (so 'nsew') configure is thrown into an infinite loop and the app stops displaying anything. using sticky with just 3 sides or less is fine though.</p> <p>Is there a way around this?</p>
<python><tkinter>
2022-12-14 13:26:32
1
830
Another_coder
74,798,878
8,761,554
Code a tensor view layer in nn.Sequential
<p>I have a <code>sequential</code> container and inside I want to use the <code>Tensor.view</code> function. Thus my current solution looks like this:</p> <pre><code>class Reshape(nn.Module): def __init__(self, *args): super().__init__() self.my_shape = args def forward(self, x): return x.view(self.my_shape) </code></pre> <p>and in my <code>AutoEncoder</code> class I have:</p> <pre><code>self.decoder = nn.Sequential( torch.nn.Linear(self.bottleneck_size, 4096*2), Reshape(-1, 128, 8, 8), nn.UpsamplingNearest2d(scale_factor=2), ... </code></pre> <p>Is there a way to reshape the tensor directly in the <code>sequential</code> block so that I do not need to use the externally created <code>Reshape</code> class? Thank you</p>
<python><machine-learning><pytorch><autoencoder>
2022-12-14 13:17:03
1
341
Sam333
74,798,874
4,792,229
Nano Jetson Jetpack 4.6.1 can't install right h5py version?
<p>I have a Nano Jetson and flashed it with the latest available Jetpack version from here: <a href="https://developer.nvidia.com/embedded/learn/get-started-jetson-nano-devkit" rel="nofollow noreferrer">https://developer.nvidia.com/embedded/learn/get-started-jetson-nano-devkit</a> which is 4.6.1. Now when following this guide to install tensorflow: <a href="https://docs.nvidia.com/deeplearning/frameworks/install-tf-jetson-platform/index.html" rel="nofollow noreferrer">https://docs.nvidia.com/deeplearning/frameworks/install-tf-jetson-platform/index.html</a> I checked the version of tensorflow I need here: <a href="https://docs.nvidia.com/deeplearning/frameworks/install-tf-jetson-platform-release-notes/tf-jetson-rel.html#tf-jetson-rel" rel="nofollow noreferrer">https://docs.nvidia.com/deeplearning/frameworks/install-tf-jetson-platform-release-notes/tf-jetson-rel.html#tf-jetson-rel</a> which is following command: <code>sudo pip3 install --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v461 tensorflow==2.7.0+nv22.01</code> now when I run this command I get following error:</p> <pre><code>ERROR: Could not find a version that satisfies the requirement h5py~=3.1.0 (from tensorflow) (from versions: 2.2.1, 2.3.0b1, 2.3.0, 2.3.1, 2.4.0b1, 2.4.0, 2.5.0, 2.6.0, 2.7.0rc2, 2.7.0, 2.7.1, 2.8.0rc1, 2.8.0, 2.9.0rc1, 2.9.0, 2.10.0, 3.0.0rc1, 3.0.0, 3.1.0) ERROR: No matching distribution found for h5py~=3.1.0 </code></pre> <p>which doesn't even make sense, since 3.1.0 is in the list of available versions. When I try to manually install h5py 3.1.0 it fails a few times and keeps trying to install older versions after which it finally successfully installs h5py version 2.10.0, which is obviously too old for the version needed for the tensorflow version for my jetpack version. How can I install tensorflow on my nano jetson with Jetpack version 4.6.1?</p>
<python><tensorflow><nvidia>
2022-12-14 13:16:55
1
3,002
Hakaishin
74,798,728
7,383,799
Python multiprocessing cannot import module
<p>I am testing toy code to parallelize a process using Python's <code>multiprocessing</code>. The code works on my home computer, but when I migrated it to a remote server I am working on, it returns an error.</p> <p>I first define functions in <code>defs.py</code></p> <pre><code>import numpy as np def g(n): A = np.random.rand(n, n) B = np.random.rand(n, n) return A * B def run_complex_operations(operation, input, pool): result = pool.map(operation, input) return result </code></pre> <p>Python seems to find <code>defs.py</code> because when I run the two lines below it returns the expected result:</p> <pre><code>import defs print(defs.g(1)) </code></pre> <p>However, when I run the following code to use my function in a multiprocessing pool, Python returns an error.</p> <pre><code>import defs import numpy as np import time import multiprocessing as mp x = 10 n = 10000 l = [n] * x start = time.time() if __name__ == '__main__': processes_pool = mp.Pool(3) l[:] = defs.run_complex_operations(defs.g, range(x), processes_pool) </code></pre> <p>The error is:</p> <pre><code>Process SpawnPoolWorker-1: Traceback (most recent call last): File &quot;C:\ProgramData\Anaconda3\lib\multiprocessing\process.py&quot;, line 315, in _bootstrap self.run() File &quot;C:\ProgramData\Anaconda3\lib\multiprocessing\process.py&quot;, line 108, in run self._target(*self._args, **self._kwargs) File &quot;C:\ProgramData\Anaconda3\lib\multiprocessing\pool.py&quot;, line 114, in worker task = get() File &quot;C:\ProgramData\Anaconda3\lib\multiprocessing\queues.py&quot;, line 358, in get return _ForkingPickler.loads(res) ModuleNotFoundError: No module named 'defs' </code></pre> <p>What could be the reasons for the problem? It must be related to <code>multiprocessing</code> because Python has no problem finding the other function in the defs module.</p> <p>FWIW, the server version of Python is 3.8.5; my local Python is 3.9.7.</p>
<python><python-3.x><multiprocessing><python-multiprocessing>
2022-12-14 13:03:45
3
375
eigenvector
74,798,626
1,564,730
Why is log(inf + inf j) equal to (inf + 0.785398 j), In C++/Python/NumPy?
<p>I've noticed strange behaviour of the <code>log</code> functions in C++ and NumPy when handling complex infinite numbers. Specifically, <code>log(inf + inf * 1j)</code> equals <code>(inf + 0.785398j)</code> when I expect it to be <code>(inf + nan * 1j)</code>.</p> <p>When taking the log of a complex number, the real part is the log of the absolute value of the input and the imaginary part is the phase of the input. Returning 0.785398 as the imaginary part of <code>log(inf + inf * 1j)</code> means it assumes the <code>inf</code>s in the real and the imaginary part have the same magnitude. This assumption does not seem to be consistent with other calculations, for example <code>inf - inf == nan</code> and <code>inf / inf == nan</code>, which assume two <code>inf</code>s do not necessarily have the same value.</p> <p>Why is the assumption for <code>log(inf + inf * 1j)</code> different?</p> <p>Reproducing C++ code:</p> <pre><code>#include &lt;complex&gt; #include &lt;limits&gt; #include &lt;iostream&gt; int main() { double inf = std::numeric_limits&lt;double&gt;::infinity(); std::complex&lt;double&gt; b(inf, inf); std::complex&lt;double&gt; c = std::log(b); std::cout &lt;&lt; c &lt;&lt; &quot;\n&quot;; } </code></pre> <p>Reproducing Python code (numpy):</p> <pre><code>import numpy as np a = complex(float('inf'), float('inf')) print(np.log(a)) </code></pre> <p>EDIT: Thank you to everyone who was involved in the discussion about the historical reason and the mathematical reason. All of you turned this naive question into a really interesting discussion. The provided answers are all of high quality and I wish I could accept more than one answer. However, I've decided to accept @simon's answer as it explains the mathematical reason in more detail and provides a link to the document explaining the logic (although I can't fully understand it).</p>
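For what it's worth, Python's stdlib `cmath` module shows the same behaviour, which matches the special-value rules of C99 Annex G that C libraries (and, by extension, NumPy) follow: the two infinities are treated as a direction in the complex plane, so the argument of `inf + inf*1j` is pinned to π/4 rather than NaN:

```python
import cmath
import math

inf = float("inf")
w = cmath.log(complex(inf, inf))

# The infinities are treated as a direction, so the phase is defined to be
# pi/4 by the C99 Annex G special-value tables that cmath follows.
print(w.real, w.imag)  # inf 0.785398...

assert w.real == inf
assert math.isclose(w.imag, math.pi / 4)
```

The same convention is visible in `math.atan2(inf, inf)`, which IEEE-style math libraries define to return π/4.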
<python><c++><numpy><complex-numbers><infinity>
2022-12-14 12:54:40
3
948
Firman
74,798,423
9,274,940
When grouping a dataframe, join the values that differ in a certain column
<p>I'm first going to show with an example what I mean:</p> <p>Let's suppose that I have this dataframe:</p> <p><a href="https://i.sstatic.net/kuB5S.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kuB5S.png" alt="enter image description here" /></a></p> <p>If I group the dataframe <strong>without</strong> the penultimate column (column_2), I want to end up with this:</p> <p><a href="https://i.sstatic.net/Cpb6u.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Cpb6u.png" alt="enter image description here" /></a></p> <p>And if I have a dataframe where column_1 and last_column have the same values, I don't need to &quot;join&quot; or &quot;append&quot; the values in &quot;column_2&quot;; I just want an empty dataframe.</p> <p>Does that make sense? What I have so far is this:</p> <pre><code>import pandas as pd data = {'column_1': ['no', 'no', 'no', 'no'], 'column_2': ['spain', 'france', 'italy', 'germany'], &quot;last_column&quot;: ['A', 'A', 'A', 'B']} df = pd.DataFrame.from_dict(data) aux = df.drop(columns = ['column_2']) indices_to_keep = aux.groupby(aux.columns.to_list()).filter(lambda x : len(x)&lt;2).index df_to_keep = df.filter(items = indices_to_keep.to_list(), axis = 0) </code></pre> <p>My problem with this code is that I don't know how to join the values on a single row when the df is being grouped.</p>
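For the joining part alone (leaving aside the "empty dataframe" condition from the question), one possible sketch is to group on every column except `column_2` and aggregate it with `','.join`:

```python
import pandas as pd

data = {"column_1": ["no", "no", "no", "no"],
        "column_2": ["spain", "france", "italy", "germany"],
        "last_column": ["A", "A", "A", "B"]}
df = pd.DataFrame(data)

# Group on every column except column_2 and concatenate the differing
# column_2 values of each group into one comma-separated cell.
grouped = (df.groupby(["column_1", "last_column"], as_index=False)
             .agg({"column_2": ",".join}))
print(grouped)
#   column_1 last_column            column_2
# 0       no           A  spain,france,italy
# 1       no           B             germany
```

The group keys come back as ordinary columns thanks to `as_index=False`, so the result keeps the original frame's shape apart from the merged cells.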
<python><pandas><dataframe><group-by>
2022-12-14 12:36:28
1
551
Tonino Fernandez
74,798,302
7,556,450
Using cattrs / attrs where attr name does not match keys to create an object
<p>I am looking at moving to cattrs / attrs from a completely manual process of typing out all my classes but need some help understanding how to achieve the following.</p> <p>This is a single example but the data returned will be varied and sometimes not with all the fields populated.</p> <pre><code>data = { &quot;data&quot;: [ { &quot;broadcaster_id&quot;: &quot;123&quot;, &quot;broadcaster_login&quot;: &quot;Sam&quot;, &quot;language&quot;: &quot;en&quot;, &quot;subscriber_id&quot;: &quot;1234&quot;, &quot;subscriber_login&quot;: &quot;Dave&quot;, &quot;moderator_id&quot;: &quot;12345&quot;, &quot;moderator_login&quot;: &quot;Tom&quot;, &quot;delay&quot;: &quot;0&quot;, &quot;title&quot;: &quot;Weekend Events&quot; } ] } @attrs.define class PartialUser: id: int login: str @attrs.define class Info: language: str title: str delay: int broadcaster: PartialUser subscriber: PartialUser moderator: PartialUser </code></pre> <p>So I understand how you would construct this and it works perfectly fine with 1:1 mappings, as expected, but how would you create the PartialUser objects dynamically since the names are not identical to the JSON response from the API?</p> <pre><code>instance = cattrs.structure(data[&quot;data&quot;][0], Info) </code></pre> <p>Is there some trick to using a converter? This would need to be done for around 70 classes which is why I thought maybe cattrs could modernise and simplify what I'm trying to do.</p> <p>thanks</p>
<python><python-3.x><python-attrs>
2022-12-14 12:28:12
1
1,019
SimonT
74,798,155
14,366,906
Drawing a line through two points, instead of between them (on a log scale, but still a straight line)
<p>I am currently working on a Python API responsible for plotting diagrams and tangents made in a Front-end application using Angular. In this application it is possible to move points to adjust the a line perpendicular to the curve.</p> <p>When the user thinks this is the way it is supposed to be the diagrams can be exported using matplotlib.</p> <p>When only drawing between points using code below:</p> <pre class="lang-py prettyprint-override"><code>x_values = [tangent.point1.x, tangent.point2.x] y_values = [tangent.point1.y, tangent.point2.y] plt.plot(x_values, y_values, scalex=False, scaley=False) </code></pre> <p>I get normal looking lines as follows <a href="https://i.sstatic.net/Cs0AM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Cs0AM.png" alt="enter image description here" /></a></p> <p>Although I want the lines to keep going till the borders of the plot for if the two points set by the user do not intersect by themselves.</p> <p>I have tried converting the two points into an equation, and calculate two points from there but without luck (straight line). Also tried using np.linspace(xMin, xMax, 100) resulting on a exponential curve.</p> <p>If curious, this is what the user would interact with <a href="https://i.sstatic.net/CK2oj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CK2oj.png" alt="enter image description here" /></a></p> <p>Question / TLDR: Is there a way to draw the line through the points (indefinetely because I set scalex and scaley to false in the plt.plot)</p> <p>EDIT:</p> <p>I have come across in another post ax.axline(p1, p2). This draws an infinite line but does not have a scalex and scaley attribute. This post also referenced storing plt.axes() in a variable and applying them as xlim and ylim after plotting. This results in the lines not being visible??? I mean printing the points to the console. 
I can see where the line is supposed to be in the diagram, it is just not there.</p> <p>EDIT 2: I am stoopid and p1 in axline was p1.x and p2.x instead of p1.x and p1.y. This solved by not going out of bounds and being visible again.</p>
<python><matplotlib>
2022-12-14 12:16:38
1
335
Wessel van Leeuwen
74,798,130
19,580,067
Remove characters other than alphanumeric from first 4 characters of string in Python
<p>I need to remove characters other than alphanumeric from the first 4 characters of a string. I figured out how to do it for the whole string but I am not sure how to process only the first 4 characters.</p> <pre><code>Data : '1/5AN 4/41 45' Expected: '15AN 4/41 45' </code></pre> <p>Here is the code to remove the non-alphanumeric characters from the whole string:</p> <pre><code>strValue = re.sub(r'[^A-Za-z0-9 ]+', '', strValue) </code></pre> <p>Any suggestions?</p>
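One possible approach is to apply the existing `re.sub` only to a slice of the string and concatenate the untouched remainder (the helper name `clean_prefix` is made up for illustration):

```python
import re

def clean_prefix(s, n=4):
    # Apply the substitution only to the first n characters and leave
    # the rest of the string untouched.
    return re.sub(r'[^A-Za-z0-9]+', '', s[:n]) + s[n:]

print(clean_prefix('1/5AN 4/41 45'))  # 15AN 4/41 45
```

On the sample input, the first four characters `'1/5A'` become `'15A'` and the tail `'N 4/41 45'` is appended unchanged, giving the expected result.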
<python><python-3.x><regex>
2022-12-14 12:14:40
3
359
Pravin
74,798,015
11,622,712
TypeError: StructType can not accept object '1/1/2021 1:00:00 AM' in type
<p>I want to create a simple dataframe in PySpark. This dataframe should contain a timestamp string &quot;1/1/2021 1:00:00 AM&quot; that I later want to convert from string into timestamp.</p> <p>This is my current code. When I run it, I get the error &quot;TypeError: StructType can not accept object '1/1/2021 1:00:00 AM' in type&quot;. How can I fix it in such a way that finally I can successfully execute <code>to_timestamp</code>?</p> <pre><code>from pyspark.sql.functions import to_timestamp from pyspark.sql.types import StringType, StructType, StructField schema = StructType([ StructField(&quot;timestamp_str&quot;, StringType(), True) ]) data = [(&quot;1/1/2021 1:00:00 AM&quot;)] df = spark.createDataFrame(data, schema=schema) df = df.withColumn(&quot;timestamp&quot;, to_timestamp(&quot;timestamp_str&quot;, &quot;MM/dd/yyyy hh:mm:ss a&quot;)) </code></pre> <p><strong>Update:</strong></p> <p>After changing <code>data = [(&quot;1/1/2021 1:00:00 AM&quot;)]</code> to <code>data = [(&quot;1/1/2021 1:00:00 AM&quot;,)]</code> I get another error. It appears when I run <code>df.show()</code>:</p> <blockquote> <p>org.apache.spark.SparkException: Job aborted due to stage failure: Task 2 in stage 2.0 failed 4 times, most recent failure: Lost task 2.3 in stage 2.0 (TID 10) (10.233.49.69 executor 0): org.apache.spark.SparkUpgradeException: [INCONSISTENT_BEHAVIOR_CROSS_VERSION.PARSE_DATETIME_BY_NEW_PARSER] You may get a different result due to the upgrading to Spark &gt;= 3.0:</p> </blockquote>
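As the update found, the trailing comma is what makes the difference: a parenthesized string is not a tuple, so Spark sees a bare string where it expects a row. The plain-Python check below illustrates why the first version failed. (For the follow-up parser error, the message's own migration hint points at the Spark 3 datetime parser; setting `spark.conf.set("spark.sql.legacy.timeParserPolicy", "LEGACY")` is the commonly suggested workaround, though that is untested here.)

```python
# A parenthesized string is still a string; only the trailing comma makes
# it a one-element tuple, which is the row shape createDataFrame expects.
not_a_row = ("1/1/2021 1:00:00 AM")
a_row = ("1/1/2021 1:00:00 AM",)
print(type(not_a_row).__name__)  # str
print(type(a_row).__name__)      # tuple
```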
<python><python-3.x><pyspark>
2022-12-14 12:03:32
1
2,998
Fluxy
74,798,004
13,498,838
How can I filter a Pandas DataFrame based on whether all aggregated values in a column are True?
<p>I have the following data:</p> <pre><code>data = [ [1, True], [1, True], [1, True], [1, True], [2, True], [2, False], [2, True], [3, True], [3, True], [3, True], [3, True], [4, True], [4, True], [4, False], [5, True], [5, True], [5, True], [5, True], ] df = pd.DataFrame(data, columns=['ids', 'accept']) </code></pre> <p>And I would like to filter out all rows whose IDs have at least one <code>False</code> value in the <code>accept</code> column. So my result should look like this (note the missing 2 and 4 IDs):</p> <pre><code> ids accept 0 1 True 1 1 True 2 1 True 3 1 True 4 3 True 5 3 True 6 3 True 7 3 True 8 5 True 9 5 True 10 5 True 11 5 True </code></pre> <p>I was able to get a list of the IDs for which all values in the accept column are <code>True</code> using the <code>groupby()</code> and <code>all()</code> methods:</p> <pre><code># Use the groupby() method to group the DataFrame by the 'ids' column grouped = df.groupby('ids') # Use the all() method to check whether all values in the 'accept' column are True for each group accept_all_true = grouped['accept'].all() </code></pre> <p>But I am stuck at this point. How can I apply this grouping to my original data frame?</p>
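One way to broadcast that per-group result back onto the rows is `transform('all')`, which yields a boolean mask aligned with the original frame (a sketch on a shortened version of the data):

```python
import pandas as pd

data = [[1, True], [1, True], [2, True], [2, False], [3, True]]
df = pd.DataFrame(data, columns=["ids", "accept"])

# transform('all') broadcasts each group's result back to its rows,
# producing a boolean mask the same length as the original frame.
mask = df.groupby("ids")["accept"].transform("all")
filtered = df[mask]
print(filtered["ids"].unique())  # [1 3]
```

Unlike the plain `grouped['accept'].all()` (one value per ID), `transform` keeps one value per row, so it can be used directly for boolean indexing.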
<python><pandas><dataframe>
2022-12-14 12:02:44
2
1,454
jda5
74,797,963
219,976
Django + Celery task never done
<p>I'm trying to run the example app Django+Celery from the official Celery repository:<br /> <a href="https://github.com/celery/celery/tree/master/examples/django" rel="nofollow noreferrer">https://github.com/celery/celery/tree/master/examples/django</a><br /> I cloned the repo and ran RabbitMQ in my Docker container:</p> <pre><code>docker run -d --hostname localhost -p 15672:15672 --name rabbit-test rabbitmq:3 </code></pre> <p>then ran the Celery worker like this:</p> <pre><code>celery -A proj worker -l INFO </code></pre> <p>When I try to execute a task:</p> <pre><code>python ./manage.py shell &gt;&gt;&gt; from demoapp.tasks import add, mul, xsum &gt;&gt;&gt; res = add.delay(2,3) &gt;&gt;&gt; res.ready() False </code></pre> <p>I always get <code>res.ready()</code> is <code>False</code>. The output from the worker shows that the task is received:</p> <pre><code>[2022-12-14 14:43:20,283: INFO/MainProcess] Task demoapp.tasks.add[29743cee-744b-4fa6-ba68-36d17e4ac806] received </code></pre> <p>but it's never done.<br /> What might be wrong? How can I track down the problem?</p>
<python><django><rabbitmq><celery><django-celery>
2022-12-14 11:59:27
1
6,657
StuffHappens
74,797,790
11,167,163
How to display labels, values and percentages on a pie chart
<p>I try to display on a pie chart:</p> <ol> <li>labels</li> <li>Value</li> <li>Percentage</li> </ol> <p>I know how to display both the value and the percentage:</p> <pre><code>def autopct_format(values): def my_format(pct): total = sum(values) val = int(round(pct*total/100.0)/1000000) LaString = str('{:.1f}%\n{v:,d}'.format(pct, v=val)) return LaString return my_format </code></pre> <p>But I don't know how to display the labels. I tried the following:</p> <pre><code>def autopct_format(values,MyString): def my_format(pct,MyString): total = sum(values) val = int(round(pct*total/100.0)/1000000) LaString = str('{0},{:.1f}%\n{v:,d}'.format(MyString, pct, v=val)) return LaString return my_format </code></pre> <p>But this throws the following error:</p> <pre><code>TypeError: my_format() missing 1 required positional argument: 'MyString' </code></pre> <p>Below is the reproducible example:</p> <pre><code>import matplotlib.pyplot as plt import numpy as np def autopct_format(values,MyString): def my_format(pct,MyString): total = sum(values) val = int(round(pct*total/100.0)/1000000) LaString = str('{0},{:.1f}%\n{v:,d}'.format(MyString, pct, v=val)) return LaString return my_format fig, ax = plt.subplots(1,1,figsize=(10,10),dpi=100,layout=&quot;constrained&quot;) ax.axis('equal') width = 0.3 #Color A, B, C=[plt.cm.Blues, plt.cm.Reds, plt.cm.Greens] #OUTSIDE cin = [A(0.5),A(0.4),A(0.3),B(0.5),B(0.4),C(0.3),C(0.2),C(0.1), C(0.5),C(0.4),C(0.3)] Labels_Smalls = ['groupA', 'groupB', 'groupC'] labels = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'J', 'I'] Sizes_Detail = [4,3,5,6,5,10,5,5,4,6] Sizes = [12,11,30] pie2, _ ,junk = ax.pie(Sizes_Detail ,radius=1, #labels=labels,labeldistance=0.85, autopct=autopct_format(Sizes_Detail,labels) ,pctdistance = 0.7, colors=cin) plt.setp(pie2, width=width, edgecolor='white') #INSIDE pie, _ = ax.pie(Sizes, radius=1-width, #autopct=autopct_format(Sizes) ,pctdistance = 0.8, colors = [A(0.6), B(0.6), C(0.6)]) plt.setp(pie,
width=width, edgecolor='white') plt.margins(0,0) </code></pre> <p>and here is the expected output:</p> <p><a href="https://i.sstatic.net/MV7m6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MV7m6.png" alt="enter image description here" /></a></p>
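One workaround, since `autopct` only ever passes the percentage to the callback: capture the labels in the closure and pull the next one per wedge (matplotlib formats the wedges in order). The sketch below drops the `/1000000` scaling from the question so the numbers stay small, and it can be exercised without drawing anything:

```python
def autopct_format(values, labels):
    labels_iter = iter(labels)  # one label per wedge, in drawing order
    total = sum(values)

    def my_format(pct):
        # autopct only ever receives the percentage, so the label comes
        # from the captured iterator instead of a second argument.
        val = int(round(pct * total / 100.0))
        return '{}\n{:.1f}%\n{:,d}'.format(next(labels_iter), pct, val)

    return my_format

fmt = autopct_format([30, 70], ['A', 'B'])
print(repr(fmt(30.0)))  # 'A\n30.0%\n30'
print(repr(fmt(70.0)))  # 'B\n70.0%\n70'
```

In the original script this would be passed as `autopct=autopct_format(Sizes_Detail, labels)`, exactly as attempted, but with the inner function taking only `pct`.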
<python><matplotlib>
2022-12-14 11:45:56
1
4,464
TourEiffel
74,797,737
3,521,180
What would be the simplest way to get the value based on some comparison in pyspark?
<p>I'm playing around with some acceptance criteria, and one of the requests is quite simple: I need to return the sum of a column's values where the value of another column equals <code>xycvg</code>.</p> <p>I've written this bit of code and was just wondering: is there a simpler way of doing this?</p> <pre><code>df.groupBy('Mea_Desc').agg(sum('Meas_Val').alias(&quot;Totl&quot;)).filter(col('Mea_Desc') == 'xycvg').collect()[0][1]</code></pre> <p>This returns: <code>Decimal('10366755770.00')</code></p>
<python><sql><pyspark>
2022-12-14 11:40:54
1
1,150
user3521180
74,797,716
5,586,359
How do I get FastAPI to do SSR for Vue 3?
<p>According to this documentation for <a href="https://vuejs.org/guide/scaling-up/ssr.html#basic-tutorial" rel="nofollow noreferrer">Vue's SSR</a>, it is possible to use Node.js to render an app and return it using an Express server. Is it possible to do the same with FastAPI?</p> <p>Or is using <a href="https://jinja.palletsprojects.com/en/3.0.x/templates/" rel="nofollow noreferrer">Jinja2 templates</a> or <a href="https://nuxt.com/docs/guide/concepts/rendering#client-side-only-rendering" rel="nofollow noreferrer">SPA</a> the only solution?</p> <h3>Problems:</h3> <ul> <li>No SPA: To help with SEO</li> <li>No <a href="https://nuxt.com/docs/getting-started/deployment#static-hosting" rel="nofollow noreferrer">SSG</a>: Too many pages will be generated. Some need to be generated dynamically.</li> <li>No Jinja2/Python Templates: Node modules aren't built, bundled and served. All modules have to be served from a remote package CDN.</li> </ul> <p>I have a feeling that maybe changing the Vue 3 delimiters and then building the project and serving the files as Jinja2 templates is the solution, but I'm not sure how it would work with Vue's routers.
I know the <code>/dist</code> folder can be served on the default route, and then a catchall can be used to serve files that do exist.</p> <h3>Possible Solution</h3> <pre class="lang-py prettyprint-override"><code>@app.get(&quot;/&quot;, response_class=FileResponse) def read_index(request: Request): index = f&quot;{static_folder}/index.html&quot; return FileResponse(index) @app.get(&quot;/{catchall:path}&quot;, response_class=FileResponse) def read_index(request: Request): path = request.path_params[&quot;catchall&quot;] file = static_folder + path if os.path.exists(file): return FileResponse(file) index = f&quot;{static_folder}/index.html&quot; return FileResponse(index) </code></pre> <h3>Questions</h3> <ul> <li>If there is a way to do SSR with FastAPI and Vue 3, what is it?</li> <li>If there is no direct way, how do I combine Vue's built <code>/dist</code> with Jinja2 templates to serve dynamic pages?</li> </ul>
<python><vue.js><jinja2><fastapi><server-side-rendering>
2022-12-14 11:38:49
1
954
Vivek Joshy
74,797,697
15,893,581
Python - how to interpolate NaN values
<p>Is there a way to interpolate NaN values of P &amp; C in final_df (where str is a range with an equal step)?</p> <pre><code>import numpy as np import pandas as pd import matplotlib.pyplot as plt df0 = pd.DataFrame({'str': [var for var in range(700,1260,5)]}) print(df0) df1 = pd.DataFrame({'str': [700,705,710,715,720,1095,1100,1105,1110,1115,1120,1125,1130,1135,1205,1210,1215,1220,1225,1230,1235,1240,1245,1250,1255], 'P': [0.075,0.075,0.075,0.075,0.075,17.95,19.75,21.85,24.25,26.55,29.2,31.9,35.05,37.7,98.6,102.15,108.5,113.5,118.4,123.55,127.3,132.7,138.7,142.7,148.35], 'C': [407.8,403.65,398.3,391.65,387.8,30.05,26.65,23.7,21.35,19.65,16.05,14.3,11.95,9.95,0.475,0,0.525,0,0.2,0.175,0.15,0.375,0.125,0.075,0.175]}) df = pd.merge(df0,df1, on=&quot;str&quot;, how=&quot;outer&quot;) print(df) </code></pre> <p><em>df.interpolate(method ='cubic', limit_direction ='both')</em> does not seem to work...</p>
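One option that may help, assuming the goal is interpolation weighted by the `str` values rather than by row position: set `str` as the index and use `method='index'` (which, unlike `method='cubic'`, does not require SciPy). A small sketch with made-up numbers in place of the full P/C data:

```python
import numpy as np
import pandas as pd

df0 = pd.DataFrame({"str": range(700, 730, 5)})
df1 = pd.DataFrame({"str": [700, 705, 725], "P": [1.0, 2.0, 6.0]})
df = pd.merge(df0, df1, on="str", how="outer")

# With 'str' as the index, method='index' interpolates against the actual
# x spacing instead of treating every row as equidistant.
out = df.set_index("str").interpolate(method="index", limit_direction="both")
print(out["P"].tolist())  # [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
```

For `method='cubic'` the same `set_index('str')` step applies, but SciPy must be installed for pandas to perform the spline fit.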
<python><pandas><interpolation>
2022-12-14 11:36:33
1
645
JeeyCi
74,797,692
8,916,474
Tox and pre-commit hook "error: unrecognized arguments" or not running test envs
<p>Everything is inside a Docker container. Tox works fine separately inside Docker. What I'm trying to do is to add Tox to the pre-commit hook to run it for files with changes to be committed.</p> <p><strong>.pre-commit-config.yaml</strong></p> <pre><code>- repo: local hooks: - id: tox-project name: tox entry: tox language: system types: [python] args: [-c, path_inside_docker/tox.ini,--workdir=path_inside_docker, -e, &quot;fix,pylint,[other_tests]&quot;] </code></pre> <p>When tox runs as a part of a pre-commit hook (defined as above), it generates an error that refers to the file with changes that I want to commit:</p> <pre><code>tox......................................................................Failed - hook id: tox-project - exit code: 2 usage: tox [-h] [--colored {yes,no}] [-v | -q] [--exit-and-dump-after seconds] [-c file] [--workdir dir] [--root dir] [--runner {virtualenv}] [--version] [--no-provision [REQ_JSON]] [--no-recreate-provision] [-r] [-x OVERRIDE] {run,r,run-parallel,p,depends,de,list,l,devenv,d,config,c,quickstart,q,exec,e,legacy, le} ... tox: error: unrecognized arguments: path/to/file/to/be/commited/main.py hint: if you tried to pass arguments to a command use -- to separate them from tox ones </code></pre> <p>What I tried:</p> <p><strong>1st solution.</strong> Add &quot;--&quot; to the args for tox, to pass files from the commit, which I found here: <a href="https://stackoverflow.com/questions/51741320/tox-throws-tox-error-unrecognized-arguments-for-a-seemingly-valid-command">Add -- as argument for tox</a></p> <pre><code>args: [-c, path_inside_docker/tox.ini,--workdir=path_inside_docker, -e,&quot;fix,pylint,[other_tests]&quot;,--] </code></pre> <p>Now tox runs as a hook, but it doesn't work properly. The hook passes tox with success, and the commit is done.
But tox does not generate any results as it is supposed to (when the same file is tested by tox separately, outside the hook, it generates a list of errors).</p> <p><strong>2nd solution.</strong> Turn off passing filenames.</p> <p>I found this solution for a similar issue, solved by <a href="https://stackoverflow.com/questions/64036351/pre-commit-for-local-hook-gives-error-unrecognized-arguments-pre-commit-conf">pass_filenames: false</a>, and tried it as well. But then tox (as I understand it) does not get any files from the commit to process, and it finishes everything with success, again without generating the results it is supposed to generate.</p> <p>Any ideas?</p>
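One thing worth checking (an assumption, since the tox.ini is not shown here): arguments placed after `--` become tox's positional arguments, and they only reach a test command if that command references `{posargs}` in tox.ini; otherwise tox runs its configured default targets and exits successfully without ever seeing the committed files. A hedged sketch of such an entry, with `src/` as a hypothetical fallback target:

```ini
[testenv:pylint]
deps = pylint
# {posargs} receives whatever pre-commit passes after '--';
# 'src/' is only a placeholder default when nothing is passed
commands = pylint {posargs:src/}
```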
<python><githooks><pre-commit-hook><tox><pre-commit.com>
2022-12-14 11:36:08
0
504
QbS
74,797,663
20,732,098
Convert timedelta to milliseconds in Python
<p>I have the following time:</p> <pre class="lang-py prettyprint-override"><code>time = datetime.timedelta(days=1, hours=4, minutes=5, seconds=33, milliseconds=623) </code></pre> <p>Is it possible to convert the time to milliseconds? <br> Like this:</p> <pre class="lang-py prettyprint-override"><code>101133623.0 </code></pre>
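For reference, dividing one `timedelta` by another yields a plain float, so a minimal sketch of the conversion could be:

```python
import datetime

time = datetime.timedelta(days=1, hours=4, minutes=5, seconds=33, milliseconds=623)

# dividing two timedeltas gives the ratio as a float
ms = time / datetime.timedelta(milliseconds=1)
print(ms)  # 101133623.0
```

Equivalently, `time.total_seconds() * 1000` gives the same value up to floating-point rounding.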
<python><timestamp><timedelta>
2022-12-14 11:34:37
3
336
ranqnova
74,797,581
12,493,545
Understanding OpenCV's drawing calls: Are those lines of code irrelevant?
<p>Source: <a href="https://docs.opencv.org/4.x/d1/dc5/tutorial_background_subtraction.html" rel="nofollow noreferrer">https://docs.opencv.org/4.x/d1/dc5/tutorial_background_subtraction.html</a></p> <p>In that tutorial the lines of code:</p> <pre><code>cv.rectangle(frame, (10, 2), (100,20), (255,255,255), -1) cv.putText(frame, str(capture.get(cv.CAP_PROP_POS_FRAMES)), (15, 15), cv.FONT_HERSHEY_SIMPLEX, 0.5 , (0,0,0)) </code></pre> <p>are used. However, as far as I understand it, one would need to assign their return value back to the frame for any change to take effect, since they are not &quot;in place&quot;:</p> <pre><code>frame = cv.rectangle(frame, (10, 2), (100,20), (255,255,255), -1) frame = cv.putText(frame, str(capture.get(cv.CAP_PROP_POS_FRAMES)), (15, 15), cv.FONT_HERSHEY_SIMPLEX, 0.5 , (0,0,0)) </code></pre> <p>But even then I can't really see a difference in the result.</p> <p>Am I missing something?</p>
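The usual explanation for this behaviour: those drawing calls do work in place, drawing directly into the array you pass and returning a reference to that same array, so reassigning the result is redundant but harmless. A minimal sketch of that pattern, using a hypothetical `draw_rectangle` stand-in so it runs without OpenCV installed:

```python
import numpy as np

def draw_rectangle(img):
    # mimics cv.rectangle: writes into the passed-in array in place,
    # then returns a reference to that same array
    img[2:20, 10:100] = 255
    return img

frame = np.zeros((50, 120), dtype=np.uint8)
out = draw_rectangle(frame)

print(out is frame)   # True: the return value is the original array
print(frame[10, 50])  # 255: frame itself was modified, no reassignment needed
```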
<python><opencv>
2022-12-14 11:28:30
1
1,133
Natan
74,797,565
1,150,683
XPath on lxml's iterparse matches elements outside its scope
<p>I have huge corpora that I am parsing with <code>lxml</code>, so I am using <code>iterparse</code> which makes it easy to read XML on-the-fly. By using <code>iterparse(fh, tag=&quot;your_tag&quot;)</code> we can efficiently iterate over nodes in large files.</p> <p>I wish to do some XPath matching for each major tag in the file, in my case <code>alpino_ds</code>. For each <code>alpino_ds</code> node I want to check whether some given XPath matches. I found, however, that an XPath would match on an element, when in reality it is matching on something else in the document - <em>not</em> just the current iterated <code>alpino_ds</code> element but a consecutive one.</p> <p>I am puzzled as to why this happens: in the example below, I would expect only one match (in the last <code>alpino_ds</code> node) but as you can see it matches three times and the matched XPath result is the same item in all three cases (part of the last node)!</p> <pre class="lang-py prettyprint-override"><code>from io import BytesIO import lxml.etree as ET xml = &quot;&quot;&quot;&lt;treebank&gt; &lt;alpino_ds version=&quot;1.3&quot; id=&quot;WR-P-P-D-0000000006.p.34.s.1&quot;&gt; &lt;node begin=&quot;0&quot; cat=&quot;top&quot; end=&quot;4&quot; id=&quot;0&quot; rel=&quot;top&quot;&gt; &lt;node begin=&quot;0&quot; cat=&quot;du&quot; end=&quot;3&quot; id=&quot;1&quot; rel=&quot;--&quot;&gt; &lt;node begin=&quot;0&quot; conjtype=&quot;neven&quot; end=&quot;1&quot; frame=&quot;complementizer(root)&quot; id=&quot;2&quot; lcat=&quot;du&quot; lemma=&quot;en&quot; pos=&quot;comp&quot; postag=&quot;VG(neven)&quot; pt=&quot;vg&quot; rel=&quot;dlink&quot; root=&quot;en&quot; sc=&quot;root&quot; sense=&quot;en&quot; word=&quot;en&quot;/&gt; &lt;node begin=&quot;1&quot; cat=&quot;np&quot; end=&quot;3&quot; id=&quot;3&quot; rel=&quot;nucl&quot;&gt; &lt;node begin=&quot;1&quot; end=&quot;2&quot; frame=&quot;number(hoofd(sg_num))&quot; id=&quot;4&quot; infl=&quot;sg_num&quot; lcat=&quot;detp&quot; 
lemma=&quot;een&quot; numtype=&quot;hoofd&quot; pos=&quot;num&quot; positie=&quot;vrij&quot; postag=&quot;TW(hoofd,vrij)&quot; pt=&quot;tw&quot; rel=&quot;det&quot; root=&quot;één&quot; sense=&quot;één&quot; special=&quot;hoofd&quot; word=&quot;één&quot;/&gt; &lt;node begin=&quot;2&quot; end=&quot;3&quot; frame=&quot;noun(de,count,sg)&quot; gen=&quot;de&quot; genus=&quot;zijd&quot; getal=&quot;ev&quot; graad=&quot;basis&quot; id=&quot;5&quot; lcat=&quot;np&quot; lemma=&quot;printer&quot; naamval=&quot;stan&quot; ntype=&quot;soort&quot; num=&quot;sg&quot; pos=&quot;noun&quot; postag=&quot;N(soort,ev,basis,zijd,stan)&quot; pt=&quot;n&quot; rel=&quot;hd&quot; root=&quot;printer&quot; sense=&quot;printer&quot; word=&quot;printer&quot;/&gt; &lt;/node&gt; &lt;/node&gt; &lt;node begin=&quot;3&quot; end=&quot;4&quot; frame=&quot;punct(punt)&quot; id=&quot;6&quot; lcat=&quot;punct&quot; lemma=&quot;.&quot; pos=&quot;punct&quot; postag=&quot;LET()&quot; pt=&quot;let&quot; rel=&quot;--&quot; root=&quot;.&quot; sense=&quot;.&quot; special=&quot;punt&quot; word=&quot;.&quot;/&gt; &lt;/node&gt; &lt;sentence&gt;en één printer .&lt;/sentence&gt; &lt;comments&gt; &lt;comment&gt;Q#WR-P-P-D-0000000006.p.34.s.1|en één printer .|1|1|1.2960516563900006&lt;/comment&gt; &lt;/comments&gt; &lt;/alpino_ds&gt; &lt;alpino_ds version=&quot;1.3&quot; id=&quot;WR-P-P-D-0000000006.p.34.s.2&quot;&gt; &lt;node begin=&quot;0&quot; cat=&quot;top&quot; end=&quot;20&quot; id=&quot;0&quot; rel=&quot;top&quot;&gt; &lt;node begin=&quot;0&quot; cat=&quot;smain&quot; end=&quot;19&quot; id=&quot;1&quot; rel=&quot;--&quot;&gt; &lt;node begin=&quot;0&quot; cat=&quot;np&quot; end=&quot;2&quot; id=&quot;2&quot; index=&quot;1&quot; rel=&quot;su&quot;&gt; &lt;node begin=&quot;0&quot; end=&quot;1&quot; frame=&quot;determiner(de,nwh,nmod,pro,nparg)&quot; getal=&quot;getal&quot; id=&quot;3&quot; infl=&quot;de&quot; lcat=&quot;detp&quot; lemma=&quot;die&quot; naamval=&quot;stan&quot; pdtype=&quot;pron&quot; 
persoon=&quot;3&quot; pos=&quot;det&quot; postag=&quot;VNW(aanw,pron,stan,vol,3,getal)&quot; pt=&quot;vnw&quot; rel=&quot;det&quot; root=&quot;die&quot; sense=&quot;die&quot; status=&quot;vol&quot; vwtype=&quot;aanw&quot; wh=&quot;nwh&quot; word=&quot;Die&quot;/&gt; &lt;node begin=&quot;1&quot; end=&quot;2&quot; frame=&quot;noun(de,count,sg)&quot; gen=&quot;de&quot; genus=&quot;zijd&quot; getal=&quot;ev&quot; graad=&quot;basis&quot; id=&quot;4&quot; lcat=&quot;np&quot; lemma=&quot;printer&quot; naamval=&quot;stan&quot; ntype=&quot;soort&quot; num=&quot;sg&quot; pos=&quot;noun&quot; postag=&quot;N(soort,ev,basis,zijd,stan)&quot; pt=&quot;n&quot; rel=&quot;hd&quot; root=&quot;printer&quot; sense=&quot;printer&quot; word=&quot;printer&quot;/&gt; &lt;/node&gt; &lt;node begin=&quot;2&quot; end=&quot;3&quot; frame=&quot;verb(unacc,sg3,passive)&quot; id=&quot;5&quot; infl=&quot;sg3&quot; lcat=&quot;smain&quot; lemma=&quot;worden&quot; pos=&quot;verb&quot; postag=&quot;WW(pv,tgw,met-t)&quot; pt=&quot;ww&quot; pvagr=&quot;met-t&quot; pvtijd=&quot;tgw&quot; rel=&quot;hd&quot; root=&quot;word&quot; sc=&quot;passive&quot; sense=&quot;word&quot; tense=&quot;present&quot; word=&quot;wordt&quot; wvorm=&quot;pv&quot;/&gt; &lt;node begin=&quot;0&quot; cat=&quot;ppart&quot; end=&quot;19&quot; id=&quot;6&quot; rel=&quot;vc&quot;&gt; &lt;node begin=&quot;0&quot; end=&quot;2&quot; id=&quot;7&quot; index=&quot;1&quot; rel=&quot;obj1&quot;/&gt; &lt;node begin=&quot;3&quot; buiging=&quot;zonder&quot; end=&quot;4&quot; frame=&quot;verb(hebben,psp,np_pc_pp(voor))&quot; id=&quot;8&quot; infl=&quot;psp&quot; lcat=&quot;ppart&quot; lemma=&quot;gebruiken&quot; pos=&quot;verb&quot; positie=&quot;vrij&quot; postag=&quot;WW(vd,vrij,zonder)&quot; pt=&quot;ww&quot; rel=&quot;hd&quot; root=&quot;gebruik&quot; sc=&quot;np_pc_pp(voor)&quot; sense=&quot;gebruik-voor&quot; word=&quot;gebruikt&quot; wvorm=&quot;vd&quot;/&gt; &lt;node begin=&quot;4&quot; cat=&quot;pp&quot; end=&quot;19&quot; 
id=&quot;9&quot; rel=&quot;pc&quot;&gt; &lt;node begin=&quot;4&quot; end=&quot;5&quot; frame=&quot;preposition(voor,[aan,door,uit,[in,de,plaats]])&quot; id=&quot;10&quot; lcat=&quot;pp&quot; lemma=&quot;voor&quot; pos=&quot;prep&quot; postag=&quot;VZ(init)&quot; pt=&quot;vz&quot; rel=&quot;hd&quot; root=&quot;voor&quot; sense=&quot;voor&quot; vztype=&quot;init&quot; word=&quot;voor&quot;/&gt; &lt;node begin=&quot;5&quot; cat=&quot;np&quot; end=&quot;19&quot; id=&quot;11&quot; rel=&quot;obj1&quot;&gt; &lt;node begin=&quot;5&quot; end=&quot;6&quot; frame=&quot;determiner(het,nwh,nmod,pro,nparg,wkpro)&quot; id=&quot;12&quot; infl=&quot;het&quot; lcat=&quot;detp&quot; lemma=&quot;het&quot; lwtype=&quot;bep&quot; naamval=&quot;stan&quot; npagr=&quot;evon&quot; pos=&quot;det&quot; postag=&quot;LID(bep,stan,evon)&quot; pt=&quot;lid&quot; rel=&quot;det&quot; root=&quot;het&quot; sense=&quot;het&quot; wh=&quot;nwh&quot; word=&quot;het&quot;/&gt; &lt;node begin=&quot;6&quot; end=&quot;7&quot; frame=&quot;v_noun(intransitive)&quot; getal=&quot;mv&quot; graad=&quot;basis&quot; id=&quot;13&quot; lcat=&quot;np&quot; lemma=&quot;druk&quot; ntype=&quot;soort&quot; pos=&quot;verb&quot; postag=&quot;N(soort,mv,basis)&quot; pt=&quot;n&quot; rel=&quot;hd&quot; root=&quot;druk&quot; sc=&quot;intransitive&quot; sense=&quot;druk&quot; special=&quot;v_noun&quot; word=&quot;drukken&quot;/&gt; &lt;node begin=&quot;7&quot; cat=&quot;pp&quot; end=&quot;19&quot; id=&quot;14&quot; rel=&quot;mod&quot;&gt; &lt;node begin=&quot;7&quot; end=&quot;8&quot; frame=&quot;preposition(van,[af,uit,vandaan,[af,aan]])&quot; id=&quot;15&quot; lcat=&quot;pp&quot; lemma=&quot;van&quot; pos=&quot;prep&quot; postag=&quot;VZ(init)&quot; pt=&quot;vz&quot; rel=&quot;hd&quot; root=&quot;van&quot; sense=&quot;van&quot; vztype=&quot;init&quot; word=&quot;van&quot;/&gt; &lt;node begin=&quot;8&quot; cat=&quot;np&quot; end=&quot;19&quot; id=&quot;16&quot; rel=&quot;obj1&quot;&gt; &lt;node begin=&quot;8&quot; 
end=&quot;9&quot; frame=&quot;determiner(de)&quot; id=&quot;17&quot; infl=&quot;de&quot; lcat=&quot;detp&quot; lemma=&quot;de&quot; lwtype=&quot;bep&quot; naamval=&quot;stan&quot; npagr=&quot;rest&quot; pos=&quot;det&quot; postag=&quot;LID(bep,stan,rest)&quot; pt=&quot;lid&quot; rel=&quot;det&quot; root=&quot;de&quot; sense=&quot;de&quot; word=&quot;de&quot;/&gt; &lt;node begin=&quot;9&quot; end=&quot;10&quot; frame=&quot;noun(de,count,sg)&quot; gen=&quot;de&quot; genus=&quot;zijd&quot; getal=&quot;ev&quot; graad=&quot;basis&quot; id=&quot;18&quot; lcat=&quot;np&quot; lemma=&quot;tekst&quot; naamval=&quot;stan&quot; ntype=&quot;soort&quot; num=&quot;sg&quot; pos=&quot;noun&quot; postag=&quot;N(soort,ev,basis,zijd,stan)&quot; pt=&quot;n&quot; rel=&quot;hd&quot; root=&quot;tekst&quot; sense=&quot;tekst&quot; word=&quot;tekst&quot;/&gt; &lt;node begin=&quot;10&quot; cat=&quot;pp&quot; end=&quot;19&quot; id=&quot;19&quot; rel=&quot;mod&quot;&gt; &lt;node begin=&quot;10&quot; end=&quot;11&quot; frame=&quot;preposition(van,[af,uit,vandaan,[af,aan]])&quot; id=&quot;20&quot; lcat=&quot;pp&quot; lemma=&quot;van&quot; pos=&quot;prep&quot; postag=&quot;VZ(init)&quot; pt=&quot;vz&quot; rel=&quot;hd&quot; root=&quot;van&quot; sense=&quot;van&quot; vztype=&quot;init&quot; word=&quot;van&quot;/&gt; &lt;node begin=&quot;11&quot; cat=&quot;conj&quot; end=&quot;19&quot; id=&quot;21&quot; rel=&quot;obj1&quot;&gt; &lt;node begin=&quot;14&quot; conjtype=&quot;neven&quot; end=&quot;15&quot; frame=&quot;conj(en)&quot; id=&quot;22&quot; lcat=&quot;vg&quot; lemma=&quot;en&quot; pos=&quot;vg&quot; postag=&quot;VG(neven)&quot; pt=&quot;vg&quot; rel=&quot;crd&quot; root=&quot;en&quot; sense=&quot;en&quot; word=&quot;en&quot;/&gt; &lt;node begin=&quot;11&quot; cat=&quot;np&quot; end=&quot;19&quot; id=&quot;23&quot; rel=&quot;cnj&quot;&gt; &lt;node begin=&quot;11&quot; end=&quot;12&quot; frame=&quot;modal_adverb&quot; id=&quot;24&quot; index=&quot;2&quot; lcat=&quot;advp&quot; 
lemma=&quot;bijvoorbeeld&quot; pos=&quot;adv&quot; postag=&quot;BW()&quot; pt=&quot;bw&quot; rel=&quot;mod&quot; root=&quot;bijvoorbeeld&quot; sc=&quot;modal&quot; sense=&quot;bijvoorbeeld&quot; word=&quot;bijvoorbeeld&quot;/&gt; &lt;node begin=&quot;12&quot; end=&quot;13&quot; frame=&quot;determiner(de)&quot; id=&quot;25&quot; index=&quot;3&quot; infl=&quot;de&quot; lcat=&quot;detp&quot; lemma=&quot;de&quot; lwtype=&quot;bep&quot; naamval=&quot;stan&quot; npagr=&quot;rest&quot; pos=&quot;det&quot; postag=&quot;LID(bep,stan,rest)&quot; pt=&quot;lid&quot; rel=&quot;det&quot; root=&quot;de&quot; sense=&quot;de&quot; word=&quot;de&quot;/&gt; &lt;node begin=&quot;13&quot; end=&quot;14&quot; frame=&quot;noun(de,count,sg)&quot; gen=&quot;de&quot; genus=&quot;zijd&quot; getal=&quot;ev&quot; graad=&quot;basis&quot; id=&quot;26&quot; lcat=&quot;np&quot; lemma=&quot;naam&quot; naamval=&quot;stan&quot; ntype=&quot;soort&quot; num=&quot;sg&quot; pos=&quot;noun&quot; postag=&quot;N(soort,ev,basis,zijd,stan)&quot; pt=&quot;n&quot; rel=&quot;hd&quot; root=&quot;naam&quot; sense=&quot;naam&quot; word=&quot;naam&quot;/&gt; &lt;node begin=&quot;16&quot; cat=&quot;pp&quot; end=&quot;19&quot; id=&quot;27&quot; index=&quot;4&quot; rel=&quot;mod&quot;&gt; &lt;node begin=&quot;16&quot; end=&quot;17&quot; frame=&quot;preposition(op,[af,na])&quot; id=&quot;28&quot; lcat=&quot;pp&quot; lemma=&quot;op&quot; pos=&quot;prep&quot; postag=&quot;VZ(init)&quot; pt=&quot;vz&quot; rel=&quot;hd&quot; root=&quot;op&quot; sense=&quot;op&quot; vztype=&quot;init&quot; word=&quot;op&quot;/&gt; &lt;node begin=&quot;17&quot; cat=&quot;np&quot; end=&quot;19&quot; id=&quot;29&quot; rel=&quot;obj1&quot;&gt; &lt;node begin=&quot;17&quot; end=&quot;18&quot; frame=&quot;determiner(de)&quot; id=&quot;30&quot; infl=&quot;de&quot; lcat=&quot;detp&quot; lemma=&quot;de&quot; lwtype=&quot;bep&quot; naamval=&quot;stan&quot; npagr=&quot;rest&quot; pos=&quot;det&quot; postag=&quot;LID(bep,stan,rest)&quot; 
pt=&quot;lid&quot; rel=&quot;det&quot; root=&quot;de&quot; sense=&quot;de&quot; word=&quot;de&quot;/&gt; &lt;node begin=&quot;18&quot; end=&quot;19&quot; frame=&quot;noun(de,count,sg)&quot; gen=&quot;de&quot; genus=&quot;zijd&quot; getal=&quot;ev&quot; graad=&quot;basis&quot; id=&quot;31&quot; lcat=&quot;np&quot; lemma=&quot;cd&quot; naamval=&quot;stan&quot; ntype=&quot;soort&quot; num=&quot;sg&quot; pos=&quot;noun&quot; postag=&quot;N(soort,ev,basis,zijd,stan)&quot; pt=&quot;n&quot; rel=&quot;hd&quot; root=&quot;cd&quot; sense=&quot;cd&quot; word=&quot;cd&quot;/&gt; &lt;/node&gt; &lt;/node&gt; &lt;/node&gt; &lt;node begin=&quot;11&quot; cat=&quot;np&quot; end=&quot;19&quot; id=&quot;32&quot; rel=&quot;cnj&quot;&gt; &lt;node begin=&quot;11&quot; end=&quot;12&quot; id=&quot;33&quot; index=&quot;2&quot; rel=&quot;mod&quot;/&gt; &lt;node begin=&quot;12&quot; end=&quot;13&quot; id=&quot;34&quot; index=&quot;3&quot; rel=&quot;det&quot;/&gt; &lt;node begin=&quot;15&quot; end=&quot;16&quot; frame=&quot;noun(het,count,pl)&quot; gen=&quot;het&quot; getal=&quot;mv&quot; graad=&quot;basis&quot; id=&quot;35&quot; lcat=&quot;np&quot; lemma=&quot;adresgegevens&quot; ntype=&quot;soort&quot; num=&quot;pl&quot; pos=&quot;noun&quot; postag=&quot;N(soort,mv,basis)&quot; pt=&quot;n&quot; rel=&quot;hd&quot; root=&quot;adres_gegeven&quot; sense=&quot;adres_gegeven&quot; word=&quot;adresgegevens&quot;/&gt; &lt;node begin=&quot;16&quot; end=&quot;19&quot; id=&quot;36&quot; index=&quot;4&quot; rel=&quot;mod&quot;/&gt; &lt;/node&gt; &lt;/node&gt; &lt;/node&gt; &lt;/node&gt; &lt;/node&gt; &lt;/node&gt; &lt;/node&gt; &lt;/node&gt; &lt;/node&gt; &lt;node begin=&quot;19&quot; end=&quot;20&quot; frame=&quot;punct(punt)&quot; id=&quot;37&quot; lcat=&quot;punct&quot; lemma=&quot;.&quot; pos=&quot;punct&quot; postag=&quot;LET()&quot; pt=&quot;let&quot; rel=&quot;--&quot; root=&quot;.&quot; sense=&quot;.&quot; special=&quot;punt&quot; word=&quot;.&quot;/&gt; &lt;/node&gt; &lt;sentence&gt;Die printer 
wordt gebruikt voor het drukken van de tekst van bijvoorbeeld de naam en adresgegevens op de cd .&lt;/sentence&gt; &lt;comments&gt; &lt;comment&gt;Q#WR-P-P-D-0000000006.p.34.s.2|Die printer wordt gebruikt voor het drukken van de tekst van bijvoorbeeld de naam en adresgegevens op de cd .|1|1|0.11022457209000547&lt;/comment&gt; &lt;/comments&gt; &lt;/alpino_ds&gt; &lt;alpino_ds version=&quot;1.3&quot; id=&quot;WR-P-P-D-0000000006.p.34.s.3&quot;&gt; &lt;node begin=&quot;0&quot; cat=&quot;top&quot; end=&quot;25&quot; id=&quot;0&quot; rel=&quot;top&quot;&gt; &lt;node begin=&quot;15&quot; end=&quot;16&quot; frame=&quot;punct(komma)&quot; id=&quot;1&quot; lcat=&quot;punct&quot; lemma=&quot;,&quot; pos=&quot;punct&quot; postag=&quot;LET()&quot; pt=&quot;let&quot; rel=&quot;--&quot; root=&quot;,&quot; sense=&quot;,&quot; special=&quot;komma&quot; word=&quot;,&quot;/&gt; &lt;node begin=&quot;22&quot; end=&quot;23&quot; frame=&quot;punct(komma)&quot; id=&quot;2&quot; lcat=&quot;punct&quot; lemma=&quot;,&quot; pos=&quot;punct&quot; postag=&quot;LET()&quot; pt=&quot;let&quot; rel=&quot;--&quot; root=&quot;,&quot; sense=&quot;,&quot; special=&quot;komma&quot; word=&quot;,&quot;/&gt; &lt;node begin=&quot;0&quot; cat=&quot;smain&quot; end=&quot;25&quot; id=&quot;3&quot; rel=&quot;--&quot;&gt; &lt;node begin=&quot;0&quot; cat=&quot;np&quot; end=&quot;2&quot; id=&quot;4&quot; rel=&quot;su&quot;&gt; &lt;node begin=&quot;0&quot; end=&quot;1&quot; frame=&quot;determiner(een)&quot; id=&quot;5&quot; infl=&quot;een&quot; lcat=&quot;detp&quot; lemma=&quot;een&quot; lwtype=&quot;onbep&quot; naamval=&quot;stan&quot; npagr=&quot;agr&quot; pos=&quot;det&quot; postag=&quot;LID(onbep,stan,agr)&quot; pt=&quot;lid&quot; rel=&quot;det&quot; root=&quot;een&quot; sense=&quot;een&quot; word=&quot;Een&quot;/&gt; &lt;node begin=&quot;1&quot; end=&quot;2&quot; frame=&quot;noun(het,count,sg)&quot; gen=&quot;het&quot; genus=&quot;onz&quot; getal=&quot;ev&quot; graad=&quot;dim&quot; id=&quot;6&quot; 
lcat=&quot;np&quot; lemma=&quot;robot-arm&quot; naamval=&quot;stan&quot; ntype=&quot;soort&quot; num=&quot;sg&quot; pos=&quot;noun&quot; postag=&quot;N(soort,ev,dim,onz,stan)&quot; pt=&quot;n&quot; rel=&quot;hd&quot; root=&quot;robot_arm_DIM&quot; sense=&quot;robot_arm_DIM&quot; word=&quot;robot-armpje&quot;/&gt; &lt;/node&gt; &lt;node begin=&quot;2&quot; end=&quot;3&quot; frame=&quot;verb(hebben,sg3,er_pp_sbar(voor))&quot; id=&quot;7&quot; infl=&quot;sg3&quot; lcat=&quot;smain&quot; lemma=&quot;zorgen&quot; pos=&quot;verb&quot; postag=&quot;WW(pv,tgw,met-t)&quot; pt=&quot;ww&quot; pvagr=&quot;met-t&quot; pvtijd=&quot;tgw&quot; rel=&quot;hd&quot; root=&quot;zorg&quot; sc=&quot;er_pp_sbar(voor)&quot; sense=&quot;zorg-voor&quot; tense=&quot;present&quot; word=&quot;zorgt&quot; wvorm=&quot;pv&quot;/&gt; &lt;node begin=&quot;3&quot; cat=&quot;pp&quot; end=&quot;25&quot; id=&quot;8&quot; rel=&quot;pc&quot;&gt; &lt;node begin=&quot;3&quot; end=&quot;4&quot; frame=&quot;er_adverb(voor)&quot; id=&quot;9&quot; lcat=&quot;pp&quot; lemma=&quot;ervoor&quot; pos=&quot;pp&quot; postag=&quot;BW()&quot; pt=&quot;bw&quot; rel=&quot;hd&quot; root=&quot;ervoor&quot; sense=&quot;ervoor&quot; special=&quot;er&quot; word=&quot;ervoor&quot;/&gt; &lt;node begin=&quot;4&quot; cat=&quot;cp&quot; end=&quot;25&quot; id=&quot;10&quot; rel=&quot;vc&quot;&gt; &lt;node begin=&quot;4&quot; conjtype=&quot;onder&quot; end=&quot;5&quot; frame=&quot;complementizer(dat)&quot; id=&quot;11&quot; lcat=&quot;cp&quot; lemma=&quot;dat&quot; pos=&quot;comp&quot; postag=&quot;VG(onder)&quot; pt=&quot;vg&quot; rel=&quot;cmp&quot; root=&quot;dat&quot; sc=&quot;dat&quot; sense=&quot;dat&quot; word=&quot;dat&quot;/&gt; &lt;node begin=&quot;5&quot; cat=&quot;conj&quot; end=&quot;25&quot; id=&quot;12&quot; rel=&quot;body&quot;&gt; &lt;node begin=&quot;5&quot; cat=&quot;ssub&quot; end=&quot;13&quot; id=&quot;13&quot; rel=&quot;cnj&quot;&gt; &lt;node begin=&quot;5&quot; cat=&quot;np&quot; end=&quot;7&quot; 
id=&quot;14&quot; index=&quot;1&quot; rel=&quot;su&quot;&gt; &lt;node begin=&quot;5&quot; end=&quot;6&quot; frame=&quot;determiner(de)&quot; id=&quot;15&quot; infl=&quot;de&quot; lcat=&quot;detp&quot; lemma=&quot;de&quot; lwtype=&quot;bep&quot; naamval=&quot;stan&quot; npagr=&quot;rest&quot; pos=&quot;det&quot; postag=&quot;LID(bep,stan,rest)&quot; pt=&quot;lid&quot; rel=&quot;det&quot; root=&quot;de&quot; sense=&quot;de&quot; word=&quot;de&quot;/&gt; &lt;node begin=&quot;6&quot; end=&quot;7&quot; frame=&quot;noun(de,count,pl)&quot; gen=&quot;de&quot; getal=&quot;mv&quot; graad=&quot;basis&quot; id=&quot;16&quot; lcat=&quot;np&quot; lemma=&quot;brander&quot; ntype=&quot;soort&quot; num=&quot;pl&quot; pos=&quot;noun&quot; postag=&quot;N(soort,mv,basis)&quot; pt=&quot;n&quot; rel=&quot;hd&quot; root=&quot;brander&quot; sense=&quot;brander&quot; word=&quot;branders&quot;/&gt; &lt;/node&gt; &lt;node begin=&quot;9&quot; end=&quot;10&quot; frame=&quot;verb(unacc,pl,passive)&quot; id=&quot;17&quot; infl=&quot;pl&quot; lcat=&quot;ssub&quot; lemma=&quot;worden&quot; pos=&quot;verb&quot; postag=&quot;WW(pv,tgw,mv)&quot; pt=&quot;ww&quot; pvagr=&quot;mv&quot; pvtijd=&quot;tgw&quot; rel=&quot;hd&quot; root=&quot;word&quot; sc=&quot;passive&quot; sense=&quot;word&quot; tense=&quot;present&quot; word=&quot;worden&quot; wvorm=&quot;pv&quot;/&gt; &lt;node begin=&quot;5&quot; cat=&quot;ppart&quot; end=&quot;13&quot; id=&quot;18&quot; rel=&quot;vc&quot;&gt; &lt;node begin=&quot;5&quot; end=&quot;7&quot; id=&quot;19&quot; index=&quot;1&quot; rel=&quot;obj1&quot;/&gt; &lt;node begin=&quot;7&quot; end=&quot;8&quot; frame=&quot;adverb&quot; id=&quot;20&quot; lcat=&quot;advp&quot; lemma=&quot;steeds&quot; pos=&quot;adv&quot; postag=&quot;BW()&quot; pt=&quot;bw&quot; rel=&quot;mod&quot; root=&quot;steeds&quot; sense=&quot;steeds&quot; word=&quot;steeds&quot;/&gt; &lt;node begin=&quot;8&quot; buiging=&quot;zonder&quot; end=&quot;9&quot; frame=&quot;verb(hebben,psp,np_pc_pp(met))&quot; 
id=&quot;21&quot; infl=&quot;psp&quot; lcat=&quot;ppart&quot; lemma=&quot;laden&quot; pos=&quot;verb&quot; positie=&quot;vrij&quot; postag=&quot;WW(vd,vrij,zonder)&quot; pt=&quot;ww&quot; rel=&quot;hd&quot; root=&quot;laad&quot; sc=&quot;np_pc_pp(met)&quot; sense=&quot;laad-met&quot; word=&quot;geladen&quot; wvorm=&quot;vd&quot;/&gt; &lt;node begin=&quot;10&quot; cat=&quot;pp&quot; end=&quot;13&quot; id=&quot;22&quot; rel=&quot;pc&quot;&gt; &lt;node begin=&quot;10&quot; end=&quot;11&quot; frame=&quot;preposition(met,[mee,[en,al]])&quot; id=&quot;23&quot; lcat=&quot;pp&quot; lemma=&quot;met&quot; pos=&quot;prep&quot; postag=&quot;VZ(init)&quot; pt=&quot;vz&quot; rel=&quot;hd&quot; root=&quot;met&quot; sense=&quot;met&quot; vztype=&quot;init&quot; word=&quot;met&quot;/&gt; &lt;node begin=&quot;11&quot; cat=&quot;np&quot; end=&quot;13&quot; id=&quot;24&quot; rel=&quot;obj1&quot;&gt; &lt;node aform=&quot;base&quot; begin=&quot;11&quot; buiging=&quot;met-e&quot; end=&quot;12&quot; frame=&quot;adjective(e)&quot; graad=&quot;basis&quot; id=&quot;25&quot; infl=&quot;e&quot; lcat=&quot;ap&quot; lemma=&quot;leeg&quot; naamval=&quot;stan&quot; pos=&quot;adj&quot; positie=&quot;prenom&quot; postag=&quot;ADJ(prenom,basis,met-e,stan)&quot; pt=&quot;adj&quot; rel=&quot;mod&quot; root=&quot;leeg&quot; sense=&quot;leeg&quot; vform=&quot;adj&quot; word=&quot;lege&quot;/&gt; &lt;node begin=&quot;12&quot; end=&quot;13&quot; frame=&quot;noun(de,count,pl)&quot; gen=&quot;de&quot; getal=&quot;mv&quot; graad=&quot;basis&quot; id=&quot;26&quot; lcat=&quot;np&quot; lemma=&quot;cd&quot; ntype=&quot;soort&quot; num=&quot;pl&quot; pos=&quot;noun&quot; postag=&quot;N(soort,mv,basis)&quot; pt=&quot;n&quot; rel=&quot;hd&quot; root=&quot;cd&quot; sense=&quot;cd&quot; word=&quot;cd&amp;apos;s&quot;/&gt; &lt;/node&gt; &lt;/node&gt; &lt;/node&gt; &lt;/node&gt; &lt;node begin=&quot;13&quot; conjtype=&quot;neven&quot; end=&quot;14&quot; frame=&quot;conj(en)&quot; id=&quot;27&quot; lcat=&quot;vg&quot; 
lemma=&quot;en&quot; pos=&quot;vg&quot; postag=&quot;VG(neven)&quot; pt=&quot;vg&quot; rel=&quot;crd&quot; root=&quot;en&quot; sense=&quot;en&quot; word=&quot;en&quot;/&gt; &lt;node begin=&quot;14&quot; cat=&quot;ssub&quot; end=&quot;25&quot; id=&quot;28&quot; rel=&quot;cnj&quot;&gt; &lt;node begin=&quot;14&quot; end=&quot;15&quot; frame=&quot;determiner(het,nwh,nmod,pro,nparg)&quot; getal=&quot;ev&quot; id=&quot;29&quot; infl=&quot;het&quot; lcat=&quot;np&quot; lemma=&quot;dat&quot; naamval=&quot;stan&quot; pdtype=&quot;pron&quot; persoon=&quot;3o&quot; pos=&quot;det&quot; postag=&quot;VNW(aanw,pron,stan,vol,3o,ev)&quot; pt=&quot;vnw&quot; rel=&quot;su&quot; root=&quot;dat&quot; sense=&quot;dat&quot; status=&quot;vol&quot; vwtype=&quot;aanw&quot; wh=&quot;nwh&quot; word=&quot;dat&quot;/&gt; &lt;node begin=&quot;16&quot; cat=&quot;cp&quot; end=&quot;22&quot; id=&quot;30&quot; rel=&quot;mod&quot;&gt; &lt;node begin=&quot;16&quot; conjtype=&quot;onder&quot; end=&quot;17&quot; frame=&quot;complementizer(als)&quot; id=&quot;31&quot; lcat=&quot;cp&quot; lemma=&quot;als&quot; pos=&quot;comp&quot; postag=&quot;VG(onder)&quot; pt=&quot;vg&quot; rel=&quot;cmp&quot; root=&quot;als&quot; sc=&quot;als&quot; sense=&quot;als&quot; word=&quot;als&quot;/&gt; &lt;node begin=&quot;17&quot; cat=&quot;ssub&quot; end=&quot;22&quot; id=&quot;32&quot; rel=&quot;body&quot;&gt; &lt;node begin=&quot;17&quot; case=&quot;both&quot; def=&quot;def&quot; end=&quot;18&quot; frame=&quot;pronoun(nwh,thi,both,de,both,def,wkpro)&quot; gen=&quot;de&quot; getal=&quot;mv&quot; id=&quot;33&quot; index=&quot;2&quot; lcat=&quot;np&quot; lemma=&quot;ze&quot; naamval=&quot;stan&quot; num=&quot;both&quot; pdtype=&quot;pron&quot; per=&quot;thi&quot; persoon=&quot;3&quot; pos=&quot;pron&quot; postag=&quot;VNW(pers,pron,stan,red,3,mv)&quot; pt=&quot;vnw&quot; rel=&quot;su&quot; root=&quot;ze&quot; sense=&quot;ze&quot; special=&quot;wkpro&quot; status=&quot;red&quot; vwtype=&quot;pers&quot; wh=&quot;nwh&quot; 
word=&quot;ze&quot;/&gt; &lt;node begin=&quot;19&quot; end=&quot;20&quot; frame=&quot;verb(unacc,pl,passive)&quot; id=&quot;34&quot; infl=&quot;pl&quot; lcat=&quot;ssub&quot; lemma=&quot;zijn&quot; pos=&quot;verb&quot; postag=&quot;WW(pv,tgw,mv)&quot; pt=&quot;ww&quot; pvagr=&quot;mv&quot; pvtijd=&quot;tgw&quot; rel=&quot;hd&quot; root=&quot;ben&quot; sc=&quot;passive&quot; sense=&quot;ben&quot; tense=&quot;present&quot; word=&quot;zijn&quot; wvorm=&quot;pv&quot;/&gt; &lt;node begin=&quot;17&quot; cat=&quot;ppart&quot; end=&quot;22&quot; id=&quot;35&quot; rel=&quot;vc&quot;&gt; &lt;node begin=&quot;17&quot; end=&quot;18&quot; id=&quot;36&quot; index=&quot;2&quot; rel=&quot;obj1&quot;/&gt; &lt;node begin=&quot;18&quot; end=&quot;19&quot; frame=&quot;verb(hebben,psp,np_pc_pp(van))&quot; id=&quot;37&quot; infl=&quot;psp&quot; lcat=&quot;ppart&quot; lemma=&quot;voorzien&quot; pos=&quot;verb&quot; postag=&quot;WW(pv,tgw,mv)&quot; pt=&quot;ww&quot; pvagr=&quot;mv&quot; pvtijd=&quot;tgw&quot; rel=&quot;hd&quot; root=&quot;voorzie&quot; sc=&quot;np_pc_pp(van)&quot; sense=&quot;voorzie-van&quot; word=&quot;voorzien&quot; wvorm=&quot;pv&quot;/&gt; &lt;node begin=&quot;20&quot; cat=&quot;pp&quot; end=&quot;22&quot; id=&quot;38&quot; rel=&quot;pc&quot;&gt; &lt;node begin=&quot;20&quot; end=&quot;21&quot; frame=&quot;preposition(van,[af,uit,vandaan,[af,aan]])&quot; id=&quot;39&quot; lcat=&quot;pp&quot; lemma=&quot;van&quot; pos=&quot;prep&quot; postag=&quot;VZ(init)&quot; pt=&quot;vz&quot; rel=&quot;hd&quot; root=&quot;van&quot; sense=&quot;van&quot; vztype=&quot;init&quot; word=&quot;van&quot;/&gt; &lt;node begin=&quot;21&quot; end=&quot;22&quot; frame=&quot;noun(de,mass,sg)&quot; gen=&quot;de&quot; genus=&quot;zijd&quot; getal=&quot;ev&quot; graad=&quot;basis&quot; id=&quot;40&quot; lcat=&quot;np&quot; lemma=&quot;audio&quot; naamval=&quot;stan&quot; ntype=&quot;soort&quot; num=&quot;sg&quot; pos=&quot;noun&quot; postag=&quot;N(soort,ev,basis,zijd,stan)&quot; pt=&quot;n&quot; 
rel=&quot;obj1&quot; root=&quot;audio&quot; sense=&quot;audio&quot; word=&quot;audio&quot;/&gt; &lt;/node&gt; &lt;/node&gt; &lt;/node&gt; &lt;/node&gt; &lt;node begin=&quot;23&quot; case=&quot;both&quot; def=&quot;def&quot; end=&quot;24&quot; frame=&quot;pronoun(nwh,thi,both,de,both,def,wkpro)&quot; gen=&quot;de&quot; getal=&quot;mv&quot; id=&quot;41&quot; lcat=&quot;np&quot; lemma=&quot;ze&quot; naamval=&quot;stan&quot; num=&quot;both&quot; pdtype=&quot;pron&quot; per=&quot;thi&quot; persoon=&quot;3&quot; pos=&quot;pron&quot; postag=&quot;VNW(pers,pron,stan,red,3,mv)&quot; pt=&quot;vnw&quot; rel=&quot;obj1&quot; root=&quot;ze&quot; sense=&quot;ze&quot; special=&quot;wkpro&quot; status=&quot;red&quot; vwtype=&quot;pers&quot; wh=&quot;nwh&quot; word=&quot;ze&quot;/&gt; &lt;node begin=&quot;24&quot; buiging=&quot;zonder&quot; end=&quot;25&quot; frame=&quot;verb(hebben,sg3,transitive)&quot; id=&quot;42&quot; infl=&quot;sg3&quot; lcat=&quot;ssub&quot; lemma=&quot;verplaatsen&quot; pos=&quot;verb&quot; positie=&quot;vrij&quot; postag=&quot;WW(vd,vrij,zonder)&quot; pt=&quot;ww&quot; rel=&quot;hd&quot; root=&quot;verplaats&quot; sc=&quot;transitive&quot; sense=&quot;verplaats&quot; tense=&quot;present&quot; word=&quot;verplaatst&quot; wvorm=&quot;vd&quot;/&gt; &lt;/node&gt; &lt;/node&gt; &lt;/node&gt; &lt;/node&gt; &lt;/node&gt; &lt;/node&gt; &lt;sentence&gt;Een robot-armpje zorgt ervoor dat de branders steeds geladen worden met lege cd&amp;apos;s en dat , als ze voorzien zijn van audio , ze verplaatst&lt;/sentence&gt; &lt;comments&gt; &lt;comment&gt;Q#WR-P-P-D-0000000006.p.34.s.3|Een robot-armpje zorgt ervoor dat de branders steeds geladen worden met lege cd&amp;apos;s en dat , als ze voorzien zijn van audio , ze verplaatst|1|1|-0.4347218970399951&lt;/comment&gt; &lt;/comments&gt; &lt;/alpino_ds&gt; &lt;/treebank&gt; &quot;&quot;&quot; xpath = '//node[@cat=&quot;cp&quot; and node[@rel=&quot;cmp&quot; and @pt=&quot;vg&quot; and number(@begin) &lt; 
number(../node[@rel=&quot;body&quot; and @cat=&quot;ssub&quot;]/node[@rel=&quot;vc&quot; and @cat=&quot;ppart&quot;]/node[@rel=&quot;hd&quot; and @pt=&quot;ww&quot;]/@begin)] and node[@rel=&quot;body&quot; and @cat=&quot;ssub&quot; and node[@rel=&quot;vc&quot; and @cat=&quot;ppart&quot; and node[@rel=&quot;hd&quot; and @pt=&quot;ww&quot; and number(@begin) &lt; number(../../node[@rel=&quot;hd&quot; and @pt=&quot;ww&quot;]/@begin)]] and node[@rel=&quot;hd&quot; and @pt=&quot;ww&quot;]]]' for _, element in ET.iterparse(BytesIO(str.encode(xml)), tag=&quot;alpino_ds&quot;, events=(&quot;end&quot;, )): result = element.xpath(xpath) if result: print(&quot;match&quot;, ET.tostring(result[0])) </code></pre> <p>What am I missing here?</p>
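What seems to be missing: an XPath beginning with `//` is evaluated against the whole document the context node belongs to, not against the context node itself, and during `iterparse` the previously parsed `alpino_ds` siblings are still attached to the tree. Prefixing the expression with a dot (`.//node[...]`) makes it relative to the current element. A minimal sketch of the difference:

```python
import lxml.etree as ET

root = ET.fromstring("<treebank><alpino_ds><node/></alpino_ds><alpino_ds/></treebank>")
empty = root[1]  # the second alpino_ds contains no <node>

# '//node' starts from the document root, regardless of the context element
print(len(empty.xpath("//node")))   # 1: matches inside the *first* alpino_ds
# './/node' is evaluated relative to the context element
print(len(empty.xpath(".//node")))  # 0
```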
<python><xml><xpath><lxml><xpath-1.0>
2022-12-14 11:27:10
1
28,776
Bram Vanroy
74,797,238
7,713,770
How to print a list vertically and not horizontally with the tabulate package?
<p>I am trying to print the values of a list vertically, but at the moment they are displayed horizontally.</p> <p>My code:</p> <pre><code>def extract_data_excel_combined(self): dict_fruit = {&quot;Watermeloen&quot;: 3588.20, &quot;Appel&quot;: 5018.75, &quot;Sinaasappel&quot;: 3488.16} fruit_list = list(dict_fruit.values()) new_fruit_list = [] new_fruit_list.append((fruit_list)) columns = [&quot;totaal kosten fruit&quot;] return mark_safe( tabulate(new_fruit_list, headers=columns, tablefmt=&quot;html&quot;, stralign=&quot;center&quot;) ) </code></pre> <p>This results in this format:</p> <pre><code> totaal kosten fruit 3588.2 5018.75 3488.16 </code></pre> <p>But I want to have them like this:</p> <pre><code>totaal kosten fruit 3588.2 5018.75 3488.16 </code></pre> <p>Question: how do I display the list items vertically?</p>
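tabulate interprets each inner list as one table row, so `new_fruit_list.append(fruit_list)` produces a single row with three columns. A minimal sketch (the tabulate call itself left out) of wrapping each value in its own one-element row instead:

```python
dict_fruit = {"Watermeloen": 3588.20, "Appel": 5018.75, "Sinaasappel": 3488.16}

# one single-column row per value, instead of one row holding all values
rows = [[v] for v in dict_fruit.values()]
print(rows)  # [[3588.2], [5018.75], [3488.16]]
```

Passing `rows` as the first argument of `tabulate(rows, headers=columns, ...)` should then render one value per line under the header.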
<python><tabulate>
2022-12-14 11:03:38
1
3,991
mightycode Newton
74,796,978
6,708,782
Web-scraping all available repos from a topic search on GitHub
<p>I'm trying to create a dataframe from a web scrape. Precisely: from a topic search on GitHub, the objective is to retrieve the <strong>name</strong> of the owner of the repo, the <strong>link</strong> and the <strong>about</strong>.</p> <p>I have several problems.</p> <p><strong>1.</strong> The search shows that there are, for example, more than 300,000 repos, but my scraping can only get the information for 90. <strong>I would like to scrape all available repos</strong>.</p> <p><strong>2.</strong> Sometimes the about is empty, which stops me when creating the dataframe:</p> <blockquote> <p>ValueError: All arrays must be of the same length</p> </blockquote> <p><strong>3.</strong> My extraction of the names gives completely strange results.</p> <p><strong>My code:</strong></p> <pre><code>import requests from bs4 import BeautifulSoup import pandas as pd import re headers = {'User-Agent': 'Mozilla/5.0 (Linux; Android 5.1.1; SM-G928X Build/LMY47X) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.83 Mobile Safari/537.36'} search_topics = &quot;https://github.com/search?p=&quot; stock_urls = [] stock_names = [] stock_about = [] for page in range(1, 99): req = requests.get(search_topics + str(page) + &quot;&amp;q=&quot; + &quot;nlp&quot; + &quot;&amp;type=Repositories&quot;, headers = headers) soup = BeautifulSoup(req.text, &quot;html.parser&quot;) #about for about in soup.select(&quot;p.mb-1&quot;): stock_about.append(about.text) #urls for url in soup.findAll(&quot;a&quot;, attrs = {&quot;class&quot;:&quot;v-align-middle&quot;}): link = url['href'] complete_link = &quot;https://github.com&quot; + link stock_urls.append(complete_link) #profil name for url in soup.findAll(&quot;a&quot;, attrs = {&quot;class&quot;:&quot;v-align-middle&quot;}): link = url['href'] names = re.sub(r&quot;\/(.*)\/(.*)&quot;, &quot;\1&quot;, link) stock_names.append(names) dico = {&quot;name&quot;: stock_names, &quot;url&quot;: stock_urls, &quot;about&quot;: stock_about} #df = pd.DataFrame({&quot;name&quot;:
stock_names, &quot;url&quot;: stock_urls, &quot;about&quot;: stock_about}) df = pd.DataFrame.from_dict(dico) </code></pre> <p>My output:</p> <blockquote> <p>ValueError: All arrays must be of the same length</p> </blockquote>
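Two things in the posted code look suspicious, sketched here with made-up stand-in data instead of a live request: the replacement string `"\1"` without a raw-string prefix is the control character `\x01` rather than a backreference (hence the "strange" names), and appending to three independent lists lets a repo with no About text desynchronise them (hence the `ValueError`). Building one record per result card keeps everything aligned:

```python
import re

# 1) Use a raw string for the backreference ("\1" alone is \x01).
link = "/huggingface/transformers"  # stand-in href, not a live result
owner = re.sub(r"/(.*)/(.*)", r"\1", link)
assert owner == "huggingface"
owner = link.split("/")[1]  # simpler alternative

# 2) One record per card, with a default for a missing About text, so the
#    columns can never end up with unequal lengths.
cards = [  # stand-in for the per-repo elements a soup.select(...) would yield
    {"href": "/huggingface/transformers", "about": "NLP for everyone"},
    {"href": "/someuser/somerepo", "about": None},  # repo without an About
]
records = []
for card in cards:
    records.append({
        "name": card["href"].split("/")[1],
        "url": "https://github.com" + card["href"],
        "about": card["about"] or "",  # default keeps lengths equal
    })
print(records[1]["about"])  # empty string, not a missing row
```

A list of per-card dicts like `records` can be passed straight to `pd.DataFrame(records)`. Note that scraping all 300,000 results is a separate problem: GitHub's HTML search only exposes a limited number of pages, so the REST search API may be the more realistic route there.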
<python><web-scraping><beautifulsoup><request>
2022-12-14 10:43:16
1
602
ladybug
74,796,947
9,749,124
How to extract RSS links from a website with Python
<p>I am trying to extract all RSS feed links from some websites, if an RSS feed exists at all. These are some website links that have RSS, and below is a list of RSS links from those websites.</p> <pre><code>website_links = [&quot;https://www.diepresse.com/&quot;, &quot;https://www.sueddeutsche.de/&quot;, &quot;https://www.berliner-zeitung.de/&quot;, &quot;https://www.aargauerzeitung.ch/&quot;, &quot;https://www.luzernerzeitung.ch/&quot;, &quot;https://www.nzz.ch/&quot;, &quot;https://www.spiegel.de/&quot;, &quot;https://www.blick.ch/&quot;, &quot;https://www.berliner-zeitung.de/&quot;, &quot;https://www.ostsee-zeitung.de/&quot;, &quot;https://www.kleinezeitung.at/&quot;, &quot;https://www.blick.ch/&quot;, &quot;https://www.ksta.de/&quot;, &quot;https://www.tagblatt.ch/&quot;, &quot;https://www.srf.ch/&quot;, &quot;https://www.derstandard.at/&quot;] website_rss_links = [&quot;https://www.diepresse.com/rss/Kunst&quot;, &quot;https://rss.sueddeutsche.de/rss/Kultur&quot;, &quot;https://www.berliner-zeitung.de/feed.id_kultur-kunst.xml&quot;, &quot;https://www.aargauerzeitung.ch/leben-kultur.rss&quot;, &quot;https://www.luzernerzeitung.ch/kultur.rss&quot;, &quot;https://www.nzz.ch/technologie.rss&quot;, &quot;https://www.spiegel.de/kultur/literatur/index.rss&quot;, &quot;https://www.luzernerzeitung.ch/wirtschaft.rss&quot;, &quot;https://www.blick.ch/wirtschaft/rss.xml&quot;, &quot;https://www.berliner-zeitung.de/feed.id_abgeordnetenhauswahl.xml&quot;, &quot;https://www.ostsee-zeitung.de/arc/outboundfeeds/rss/category/wissen/&quot;, &quot;https://www.kleinezeitung.at/rss/politik&quot;, &quot;https://www.blick.ch/wirtschaft/rss.xml&quot;, &quot;https://feed.ksta.de/feed/rss/politik/index.rss&quot;, &quot;https://www.tagblatt.ch/wirtschaft.rss&quot;, &quot;https://www.srf.ch/news/bnf/rss/1926&quot;, &quot;https://www.derstandard.at/rss/wirtschaft&quot;] </code></pre> <p>My approach is to extract all links, and then check if some of them have RSS in them, but that is just a first
step:</p> <pre><code>for url in all_links: response = requests.get(url) print(response) soup = BeautifulSoup(response.content, 'html.parser') list_of_links = soup.select(&quot;a[href]&quot;) list_of_links = [link[&quot;href&quot;] for link in list_of_links] print(&quot;Number of links&quot;, len(list_of_links)) for l in list_of_links: if &quot;rss&quot; in l: print(url) print(l) print() </code></pre> <p>I have heard that I can look for RSS links like this, but I do not know how to incorporate it into my code.</p> <pre><code>type=application/rss+xml </code></pre> <p>My goal is to get working RSS URLs at the end. Maybe the issue is that I am only sending a request to each site's first page, and perhaps I should crawl other pages in order to extract all RSS links, but I hope there is a faster/better way to extract them.</p> <p>You can see that RSS links contain or end with (for example):</p> <pre><code>.rss /rss /rss/ rss.xml /feed/ rss-feed </code></pre> <p>etc.</p>
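The `type=application/rss+xml` hint refers to feed auto-discovery: pages usually advertise their feeds with `<link rel="alternate" type="application/rss+xml" href="...">` elements in the `<head>`. A stdlib-only sketch (the HTML snippet is made up for illustration) that collects those `href`s:

```python
from html.parser import HTMLParser

# Collect hrefs of <link> elements whose type declares an RSS/Atom feed.
class FeedLinkParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.feeds = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "link" and attrs.get("type") in (
            "application/rss+xml", "application/atom+xml"
        ):
            self.feeds.append(attrs.get("href"))

html = """<html><head>
<link rel="alternate" type="application/rss+xml" href="https://example.com/kultur.rss">
<link rel="stylesheet" href="/style.css">
</head><body><a href="/rss">RSS</a></body></html>"""  # made-up page

parser = FeedLinkParser()
parser.feed(html)
print(parser.feeds)  # ['https://example.com/kultur.rss']
```

With BeautifulSoup already in use, the equivalent should be `soup.find_all("link", type="application/rss+xml")` on each fetched page; relative `href`s can then be resolved with `urllib.parse.urljoin(url, href)`.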
<python><web-scraping><beautifulsoup><rss>
2022-12-14 10:40:31
2
3,923
taga
74,796,781
19,502,111
Python json to object from model
<p>I know this looks like a Frequently Asked Question, mainly this one: <a href="https://stackoverflow.com/questions/6578986/how-to-convert-json-data-into-a-python-object">How to convert JSON data into a Python object?</a></p> <p>I will quote the most-voted answer:</p> <pre><code>import json from types import SimpleNamespace data = '{&quot;name&quot;: &quot;John Smith&quot;, &quot;hometown&quot;: {&quot;name&quot;: &quot;New York&quot;, &quot;id&quot;: 123}}' # Parse JSON into an object with attributes corresponding to dict keys. x = json.loads(data, object_hook=lambda d: SimpleNamespace(**d)) print(x.name, x.hometown.name, x.hometown.id) </code></pre> <p>Based on that answer, <code>x</code> is an object, but it is not an object of a model, by which I mean a model created with a class. For example:</p> <pre><code>import json from types import SimpleNamespace class Hometown: def __init__(self, name : str, id : int): self.name = name self.id = id class Person: # this is a model class def __init__(self, name: str, hometown: Hometown): self.name = name self.hometown = hometown data = '{&quot;name&quot;: &quot;John Smith&quot;, &quot;hometown&quot;: {&quot;name&quot;: &quot;New York&quot;, &quot;id&quot;: 123}}' x = Person(what should I fill here?) # I expect this will automatically fill properties from constructor based on json data print(type(x.hometown)) # will return Hometown class </code></pre> <p>I'm asking this simply because my autocompletion doesn't work in my code editor if I don't create a model class. For example, if I type a dot after an object, it will not show the property names.</p>
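One common pattern, sketched here with the question's own classes: give the model a classmethod that unpacks the parsed dict into the constructors explicitly, so the editor sees real types and autocompletion works:

```python
import json

class Hometown:
    def __init__(self, name: str, id: int):
        self.name = name
        self.id = id

class Person:
    def __init__(self, name: str, hometown: Hometown):
        self.name = name
        self.hometown = hometown

    @classmethod
    def from_json(cls, raw: str) -> "Person":
        # Build the nested model object explicitly; ** unpacks the inner
        # dict into Hometown's keyword arguments.
        d = json.loads(raw)
        return cls(name=d["name"], hometown=Hometown(**d["hometown"]))

data = '{"name": "John Smith", "hometown": {"name": "New York", "id": 123}}'
x = Person.from_json(data)
print(type(x.hometown).__name__)  # Hometown
print(x.hometown.id)  # 123
```

For deeply nested models this explicit wiring gets tedious; `dataclasses` plus a small recursive helper, or a third-party validation library, is the usual next step, but the classmethod above is the dependency-free version.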
<python><json><object><model>
2022-12-14 10:25:37
1
353
Citra Dewi
74,796,775
9,919,423
How to update an object's method in python
<p>Say I have a class and an object of it.</p> <p>In Python, you can update the behavior of an object's method by re-assigning the method to a new function. This is possible because methods in Python are just attributes of an object that happen to be functions. Here's an example of how you can do this:</p> <pre><code>class MyClass: def my_method(self): print(&quot;Original behavior&quot;) # create an instance of the class obj = MyClass() # call the original method obj.my_method() # Original behavior </code></pre> <p>and I want to update its method:</p> <pre><code># update the method with a new function def new_behavior(self): print(&quot;New behavior&quot;) obj.my_method = new_behavior # call the updated method obj.my_method() # New behavior </code></pre> <p>This gives me an error:</p> <pre><code> TypeError: new_behavior() missing 1 required positional argument: 'self' </code></pre> <p>Is this the correct way to update an object's method? How should I do it?</p>
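For reference, the usual resolution: a plain function assigned to an *instance* attribute is not a descriptor, so Python never binds it and `self` is not passed automatically (hence the `TypeError`). Either bind it explicitly with `types.MethodType`, or assign the function on the class, where normal binding applies. A sketch using the question's names:

```python
import types

class MyClass:
    def my_method(self):
        print("Original behavior")

obj = MyClass()

def new_behavior(self):
    return "New behavior"

# Option 1: bind explicitly so only this instance is patched.
obj.my_method = types.MethodType(new_behavior, obj)
print(obj.my_method())  # New behavior

# Option 2: assign on the class; binding happens normally and every
# instance (existing and new) picks up the replacement.
MyClass.my_method = new_behavior
other = MyClass()
print(other.my_method())  # New behavior
```

Note the sketch returns the string instead of printing inside `new_behavior`, purely so the result is easy to check; the binding mechanics are the same either way.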
<python><object><methods><reassign>
2022-12-14 10:24:35
0
412
David H. J.