Dataset columns (observed ranges):
QuestionId          int64            74.8M to 79.8M
UserId              int64            56 to 29.4M
QuestionTitle       string           length 15 to 150
QuestionBody        string           length 40 to 40.3k
Tags                string           length 8 to 101
CreationDate        string (date)    2022-12-10 09:42:47 to 2025-11-01 19:08:18
AnswerCount         int64            0 to 44
UserExpertiseLevel  int64            301 to 888k
UserDisplayName     string           length 3 to 30 (βŒ€)
79,615,858
11,244,938
Is async changing of global variables safe? (Python)
<p>I've been researching whether it is safe to mutate singletons or global variables when I work in ASGI environments (single-threaded). I thought initially that it would be safe, but AI tells me it's not safe and I'm having a tough time believing it because I cannot reproduce it. Look at this code:</p> <pre><code>import asyncio import random counter = 0 async def increment(name: str): global counter await asyncio.sleep(random.uniform(0, 5)) # yield control to event loop counter = counter + 1 # write (based on possibly stale value) print(f&quot;{name}: counter → {counter}&quot;) async def main(): await asyncio.gather(*[increment(f&quot;{i}&quot;) for i in range(10000)]) print(f&quot;Final counter (should be 10000): {counter}&quot;) if __name__ == &quot;__main__&quot;: asyncio.run(main()) </code></pre> <p>This returns correctly at the end. Here is an example of the last lines of output:</p> <pre><code>7825: counter → 9995 8403: counter → 9996 6039: counter → 9997 5887: counter → 9998 9942: counter → 9999 8631: counter → 10000 Final counter (should be 10000): 10000 </code></pre> <p>This is safe, right?</p>
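<p>The snippet above never yields between reading and writing <code>counter</code> (the read and the increment happen in one step after the <code>sleep</code>), which is why it cannot show a lost update. A minimal sketch, not from the post, that places an <code>await</code> between the read and the write makes the race visible on a plain asyncio event loop:</p> <pre class="lang-py prettyprint-override"><code>import asyncio

counter = 0

async def increment():
    global counter
    current = counter        # read
    await asyncio.sleep(0)   # yield to the event loop between the read and the write
    counter = current + 1    # write back a possibly stale value

async def main():
    await asyncio.gather(*(increment() for _ in range(10000)))
    print('final counter:', counter)   # far below 10000 on a typical run

asyncio.run(main())
</code></pre> <p>So single-threaded asyncio code avoids data races on shared globals only as long as no await sits between the read and the write.</p>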
<python><asynchronous><concurrency><thread-safety><python-asyncio>
2025-05-10 19:43:09
2
301
Božidar Vulićević
79,615,849
20,895,654
Python pickled object equality guarantees
<p>I am wondering whether there are any guarantees about the pickle module when using pickle.dump[s], and if so, which.</p> <p>Specifically, for my problem I am pickling list[T] where T can be <code>bool</code>, <code>int</code>, <code>decimal</code>, <code>time</code>, <code>date</code>, <code>datetime</code>, <code>timedelta</code> or <code>str</code>, so the list is homogeneous (one type per list). I wonder if lists that are equal in Python are also guaranteed to have the same pickled result, at least for the types given.</p> <p>I couldn't find any guarantees online, and from some basic testing I couldn't find a case where the data would be different.</p> <p>So TL;DR:</p> <p>Given two list[T] where T is <code>bool | int | decimal | time | date | datetime | timedelta | str</code> that are equal in Python, will the resulting pickle.dump() outputs also be equal?</p>
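<p>A quick, hedged way to probe this is to compare the serialized bytes of objects that compare equal. One case worth checking from the listed types: <code>Decimal</code> values that are equal but carry different exponents appear to pickle differently, because the exact representation is preserved. A small sketch:</p> <pre class="lang-py prettyprint-override"><code>import pickle
from decimal import Decimal

def same_pickle(a, b, protocol=pickle.HIGHEST_PROTOCOL):
    # compare the serialized bytes of two objects under one fixed protocol
    return pickle.dumps(a, protocol) == pickle.dumps(b, protocol)

print(same_pickle([1, 2, 3], [1, 2, 3]))     # True here

a = [Decimal('1.0')]
b = [Decimal('1.00')]
print(a == b)              # True: Decimal equality ignores trailing zeros
print(same_pickle(a, b))   # False here: the trailing zero survives pickling
</code></pre>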
<python><pickle><equality>
2025-05-10 19:27:02
1
346
JoniKauf
79,615,828
14,425,501
Why don't shap's explainer.model.predict() and model.predict() match?
<p>I have a machine learning model and I calculated SHAP on it using the following code:</p> <pre><code>import shap background = shap.kmeans(X_dev, k=100) explainer = shap.TreeExplainer(model, feature_perturbation=&quot;interventional&quot;, model_output='probability', data=background.data) shap_values = explainer.shap_values(X_val) </code></pre> <p>I observed that the predictions from <code>explainer.model.predict()</code> and <code>model.predict()</code> don't match.</p> <pre><code>model.predict_proba(X_val)[:10] &gt;&gt; array([[0.90563095, 0.09436908], [0.675441 , 0.324559 ], [0.7728198 , 0.22718018], [0.00906086, 0.99093914], [0.5687084 , 0.4312916 ], [0.146478 , 0.853522 ], [0.21917653, 0.78082347], [0.871528 , 0.12847197], [0.34144473, 0.65855527], [0.25084436, 0.74915564]], dtype=float32) explainer.model.predict(X_val)[:10] &gt;&gt;&gt; array([0.09436912, 0.32455891, 0.22718007, 0.99093911, 0.43129163, 0.85352203, 0.7808235 , 0.12847196, 0.65855535, 0.74915555]) </code></pre> <p><strong>Question:</strong> Why is there a difference in the predictions after a few decimal places? Why aren't they exactly the same?</p> <p><strong>What I observed:</strong> I have also observed that this difference increases if you train on a bigger dataset. If you use a larger dataset to train a model, say with 1 million rows and hundreds of columns, this difference in predictions increases. The example shown above uses the Titanic dataset (which is very small), but if you try it on a larger dataset the difference will be massive.</p> <p><strong>What I tried:</strong></p> <ul> <li>Tried both TPD and Interventional SHAP.</li> <li>Tried comparing raw scores.</li> <li>Tried using xgb.DMatrix to predict on the model.</li> <li>Tried making prediction using XGBClassifier API on pd.DataFrame data (not DMatrix).</li> </ul> <p><strong>Minimal reproducible code:</strong></p> <pre><code>import pandas as pd import numpy as np from sklearn.model_selection import train_test_split import xgboost as xgb import shap # Load Titanic dataset data = pd.read_csv('https://raw.githubusercontent.com/datasciencedojo/datasets/master/titanic.csv') # Preprocessing data = data.drop(['Name', 'Ticket', 'Cabin'], axis=1) data = pd.get_dummies(data, columns=['Sex', 'Embarked'], drop_first=True) data = data.fillna(data.median()) X = data.drop('Survived', axis=1) y = data['Survived'] #---------- # If you want to use large data to see big difference #n_samples = 1000000 #n_features = 150 #X_large = pd.DataFrame(np.random.rand(n_samples, n_features), columns=[f'feature_{i}' for i in range(n_features)]) #y_large = pd.Series(np.random.randint(0, 2, size=n_samples), name='target') #X_dev_large, X_val_large, y_dev_large, y_val_large = train_test_split(X_large, y_large, test_size=0.2, random_state=42) #---------- X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.2, random_state=42) dtrain = xgb.DMatrix(X_dev, label=y_dev) dval = xgb.DMatrix(X_val, label=y_val) params = { 'objective': 'binary:logistic', 'eval_metric': 'logloss', 'max_depth': 4, 'eta': 0.1 } model = xgb.train(params, dtrain, num_boost_round=100) model.save_model('xgboost_model.json') from xgboost import XGBClassifier loaded_model = XGBClassifier() loaded_model.load_model('xgboost_model.json') background = shap.kmeans(X_dev, k=100) explainer = shap.TreeExplainer(loaded_model, feature_perturbation=&quot;interventional&quot;, model_output='probability', data=background.data) shap_values = explainer.shap_values(X_val) print(loaded_model.predict_proba(X_val)[:10]) print(explainer.model.predict(X_val)[:10]) </code></pre> <p><strong>Other details:</strong></p> <ul> <li><p>Shap version: <code>0.47.2</code> (I also tried 0.37 something)</p> </li> <li><p>Python version: <code>3.9.21</code></p> </li> <li><p>XGBoost version: <code>2.1.1</code> (tried 1.7 as well)</p> </li> </ul>
<python><machine-learning><xgboost><shap>
2025-05-10 19:07:02
1
1,933
Adarsh Wase
79,615,747
3,163,618
Is polars bracket indexing to select a column discouraged?
<p>Some stuff online like <a href="https://stackoverflow.com/questions/74841242/selecting-with-indexing-is-an-anti-pattern-in-polars-how-to-parse-and-transform">Selecting with Indexing is an anti-pattern in Polars: How to parse and transform (select/filter?) a CSV that seems to require so?</a> suggests using indexing like <code>df[&quot;a&quot;]</code> is discouraged over <code>df.get_column(&quot;a&quot;)</code>. But the linked reference guide is gone now: <a href="https://pola-rs.github.io/polars-book/user-guide/howcani/selecting_data/selecting_data_indexing.html#selecting-with-indexing" rel="nofollow noreferrer">https://pola-rs.github.io/polars-book/user-guide/howcani/selecting_data/selecting_data_indexing.html#selecting-with-indexing</a></p> <p>So is it discouraged? What about similar to pandas <code>pl.col.a</code> instead of <code>pl.col(&quot;a&quot;)</code>?</p>
<python><select><indexing><python-polars>
2025-05-10 17:31:12
1
11,524
qwr
79,615,662
5,958,323
How to replace *all* occurrences of a string in Python, and why `str.replace` misses consecutive overlapping matches?
<p>I want to replace all patterns <code>0</code> in a string by <code>00</code> in Python. For example, turning:</p> <p><code>'28 5A 31 34 0 0 0 F0'</code></p> <p>into</p> <p><code>'28 5A 31 34 00 00 00 F0'</code>.</p> <p>I tried with <code>str.replace()</code>, but for some reason it misses some &quot;overlapping&quot; patterns: i.e.:</p> <pre><code>$ python3 Python 3.12.3 (main, Feb 4 2025, 14:48:35) [GCC 13.3.0] on linux Type &quot;help&quot;, &quot;copyright&quot;, &quot;credits&quot; or &quot;license&quot; for more information. &gt;&gt;&gt; '28 5A 31 34 0 0 0 F0'.replace(&quot; 0 &quot;, &quot; 00 &quot;) '28 5A 31 34 00 0 00 F0' &gt;&gt;&gt; '28 5A 31 34 0 0 0 F0'.replace(&quot; 0 &quot;, &quot; 00 &quot;).replace(&quot; 0 &quot;, &quot; 00 &quot;) '28 5A 31 34 00 00 00 F0' </code></pre> <p>notice the &quot;middle&quot; <code>0</code> pattern that is not replaced by <code>00</code>.</p> <ul> <li>Any idea how I could replace all patterns at once? Of course I can do <code>'28 5A 31 34 0 0 0 F0'.replace(&quot; 0 &quot;, &quot; 00 &quot;).replace(&quot; 0 &quot;, &quot; 00 &quot;)</code>, but this is a bit heavy...</li> <li>I actually did not expect this behavior (found it through a bug in my code). In particular, I did not expect this behavior from the documentation at <a href="https://docs.python.org/3/library/stdtypes.html#str.replace" rel="nofollow noreferrer">https://docs.python.org/3/library/stdtypes.html#str.replace</a> . Any explanation to why this happens / anything that should have tipped me that this is the expected behavior? It looks like <code>replace</code> does not work with consecutive overlapping repetitions of the pattern, but this was not obvious to me from the documentation?</li> </ul> <hr /> <p>Edit 1:</p> <p>Thanks for the answer(s). The regexp works nicely.</p> <p>Still, I am confused. The official doc linked above says:</p> <blockquote> <p>&quot;Return a copy of the string with all occurrences of substring old replaced by new. If count is given, only the first count occurrences are replaced. If count is not specified or -1, then all occurrences are replaced.&quot;.</p> </blockquote> <p>&quot;Clearly&quot; this is not the case? (or am I missing something?).</p>
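<p>For context on the misses: <code>str.replace</code> scans left to right for non-overlapping occurrences, and each match of <code>&quot; 0 &quot;</code> consumes its trailing space, so the next zero no longer has a leading space when scanning resumes. A sketch of two alternatives (token-wise splitting, or a word-boundary regex that does not consume the separators):</p> <pre class="lang-py prettyprint-override"><code>import re

s = '28 5A 31 34 0 0 0 F0'

# token-wise: split on spaces, fix each token, re-join
print(' '.join('00' if tok == '0' else tok for tok in s.split(' ')))

# regex: \b matches between a space and a digit without consuming the space
print(re.sub(r'\b0\b', '00', s))
</code></pre> <p>Both print '28 5A 31 34 00 00 00 F0'.</p>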
<python><string><replace>
2025-05-10 15:46:02
1
9,379
Zorglub29
79,615,493
7,921,684
InfluxDB 2.0 Client Stops Writing After a Few Hours (Notifications Still Received, Container Restart Fixes Temporarily)
<p>I’m using InfluxDB 2.0 in Docker along with a custom notification receiver (influx-adapter-api container) that writes to InfluxDB using the Python influxdb-client in synchronous mode.</p> <ul> <li>The setup works fine for a few hours, but then:</li> <li>Notifications are still received correctly</li> <li>Writes to InfluxDB silently stop (no errors, no logs, no exceptions)</li> <li>Restarting the influx-adapter-api container makes writing work again</li> <li>InfluxDB itself is never restarted and seems unaffected</li> </ul> <p>So it seems likely the issue is with the InfluxDB client or the runtime inside the influx-adapter-api container.</p> <p><strong>What I’ve Tried</strong></p> <ol> <li>-Switching between SYNCHRONOUS and ASYNCHRONOUS modes</li> <li>-Reconnecting every hour</li> <li>-Handling session timeouts manually</li> <li>-Verifying the container continues receiving real-time data</li> <li>-Removing excess logging</li> </ol> <p><strong>Question:</strong></p> <ul> <li><p>What could cause the InfluxDB Python client or the influx-adapter-api container to silently stop writing to InfluxDB after running for some time?</p> <p>Could it be a client session leak, open connection timeout, thread hang, or socket exhaustion?</p> <p>Any known issues or recommended workarounds for long-lived InfluxDB write clients in Docker?</p> </li> </ul> <p>influxdb:</p> <pre><code> image: influxdb:2.0 restart: unless-stopped expose: - &quot;8086&quot; ports: - &quot;8086:8086&quot; # For testing only environment: - DOCKER_INFLUXDB_INIT_MODE=setup - DOCKER_INFLUXDB_INIT_USERNAME=influxdb - DOCKER_INFLUXDB_INIT_PASSWORD=influxdb - DOCKER_INFLUXDB_INIT_ORG=Polito - DOCKER_INFLUXDB_INIT_BUCKET=test - DOCKER_INFLUXDB_INIT_ADMIN_TOKEN=password volumes: - influxdb_data:/var/lib/influxdb2 </code></pre> <p>writer container:</p> <pre><code>influx-adapter-api: image: influx-adapter-api:latest restart: unless-stopped expose: - &quot;8008&quot; volumes: - setting:/influx-adapter-api </code></pre> <p>the code:</p> <pre><code>from influxdb_client import InfluxDBClient from influxdb_client.client.write_api import SYNCHRONOUS from datetime import datetime import time, threading, logging class InfluxWriter: def __init__(self, url, org, bucket, username, password): self.client = InfluxDBClient(url=url, username=username, password=password, org=org) self.write_api = self.client.write_api(write_options=SYNCHRONOUS) threading.Thread(target=self._periodic_reconnect, daemon=True).start() def _periodic_reconnect(self): while True: time.sleep(3600) self._reconnect() def _reconnect(self): self.write_api.close() self.client.close() self.client = InfluxDBClient(url=self.url, username=self.username, password=self.password, org=self.org) self.write_api = self.client.write_api(write_options=SYNCHRONOUS) def write_notification_to_influxdb(self, measurement, fields, timestamp): data = {&quot;measurement&quot;: measurement, &quot;fields&quot;: fields, &quot;time&quot;: timestamp} try: self.write_api.write(bucket=self.bucket, record=data) self.write_api.flush() except Exception as e: if &quot;session not found&quot; in str(e): self._reconnect() self.write_api.write(bucket=self.bucket, record=data) self.write_api.flush() else: logging.error(f&quot;Write error: {e}&quot;) </code></pre>
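<p>One detail worth flagging in the snippet: <code>_reconnect()</code> reads <code>self.url</code>, <code>self.username</code>, <code>self.password</code> and <code>self.org</code>, and the write uses <code>self.bucket</code>, but <code>__init__</code> never stores them, so the reconnect path would fail with an <code>AttributeError</code>. A minimal sketch of that bookkeeping (an assumption about the intended design, not a diagnosis of the silent stall):</p> <pre class="lang-py prettyprint-override"><code>from influxdb_client import InfluxDBClient
from influxdb_client.client.write_api import SYNCHRONOUS

class InfluxWriter:
    def __init__(self, url, org, bucket, username, password):
        # keep the parameters so _reconnect() can rebuild the client later
        self.url, self.org, self.bucket = url, org, bucket
        self.username, self.password = username, password
        self._connect()

    def _connect(self):
        self.client = InfluxDBClient(url=self.url, username=self.username,
                                     password=self.password, org=self.org)
        self.write_api = self.client.write_api(write_options=SYNCHRONOUS)

    def _reconnect(self):
        self.write_api.close()
        self.client.close()
        self._connect()
</code></pre>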
<python><docker><influxdb><influxdb-python>
2025-05-10 12:53:38
1
586
Gray
79,615,255
6,067,741
Azure python sdk resourcegraph no skip_token
<p>no matter what i try skip_token is None. my code:</p> <pre><code>from azure.identity import DefaultAzureCredential from azure.mgmt.resourcegraph import ResourceGraphClient from azure.mgmt.resourcegraph.models import QueryRequest, QueryRequestOptions credential = DefaultAzureCredential() client = ResourceGraphClient(credential) query = &quot;&quot;&quot; Resources | where type == &quot;microsoft.network/networkinterfaces&quot; | where isnotnull(managedBy) | mv-expand ipConfig = properties.ipConfigurations | where isnotnull(ipConfig.properties.privateLinkConnectionProperties.fqdns) | project fqdns = ipConfig.properties.privateLinkConnectionProperties.fqdns &quot;&quot;&quot; fqdns = [] skip_token = None while True: request = QueryRequest( query=query, options=QueryRequestOptions( skip_token=skip_token, top=1000, ), ) response = client.resources(request) # do smth here skip_token = response.skip_token # always None </code></pre> <p>sample response:</p> <blockquote> <p>{'additional_properties': {}, 'total_records': 10070, 'count': 1000, 'result_truncated': 'true', 'skip_token': None, 'data': [ xyz ], 'facets': [] }</p> </blockquote> <p>WORKING query\code:</p> <pre><code>query = &quot;&quot;&quot; Resources | where type == &quot;microsoft.network/networkinterfaces&quot; | where isnotnull(managedBy) and not(managedBy == &quot;&quot;) | where location == &quot;eastus2&quot; | mv-expand ipConfig = properties.ipConfigurations | where isnotnull(ipConfig.properties.privateLinkConnectionProperties.fqdns) and array_length(ipConfig.properties.privateLinkConnectionProperties.fqdns) &gt; 0 | project name, fqdns = ipConfig.properties.privateLinkConnectionProperties.fqdns | order by name asc &quot;&quot;&quot; fqdns = [] skip_token = None while True: request = QueryRequest( query=query, options=QueryRequestOptions( skip_token=skip_token ), ) response = client.resources(request) for item in response.data: # process results here if response.skip_token == None: break else: skip_token = response.skip_token </code></pre>
<python><azure><pagination><azure-resource-graph>
2025-05-10 07:18:10
1
71,429
4c74356b41
79,615,098
3,810,748
Is there a simpler way to get all nested text inside an ElementTree?
<p>I am currently using the <a href="https://docs.python.org/3/library/xml.etree.elementtree.html" rel="nofollow noreferrer"><code>xml.etree</code></a> Python library to parse HTML.</p> <p>After finding a target DOM element, I am attempting to extract its text. Unfortunately, it seems that the <code>.text</code> attribute is severely limited in its functionality and will only return the immediate inner text of an element (and not anything nested). Do I really have to loop through all the children of the <code>ElementTree</code>? Or is there a more elegant solution?</p>
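<p>Assuming the markup parses with <code>xml.etree</code>, <code>Element.itertext()</code> iterates over all nested text and tail strings, so joining it usually does the job. A small sketch:</p> <pre class="lang-py prettyprint-override"><code>import xml.etree.ElementTree as ET

elem = ET.fromstring('&lt;div&gt;Hello &lt;b&gt;nested &lt;i&gt;text&lt;/i&gt;&lt;/b&gt; world&lt;/div&gt;')

# itertext() walks .text and .tail of every descendant in document order
print(''.join(elem.itertext()))   # Hello nested text world
</code></pre>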
<python><xpath>
2025-05-10 02:57:34
2
6,155
AlanSTACK
79,615,081
1,686,522
Disable FreeCAD recomputes from the Python console (not the mouse)
<p>In FreeCAD, if I right-mouse-click on my document in the model tree, I have the option to &quot;Skip recomputes&quot;. This action does not generate Python code in the console. How can I do this programmatically? I'd like to turn off recomputes while my script is running but then leave them enabled after the script finishes. (My script includes a final manual recompute when it finishes its work). My goal is to save a few mouse clicks with each design iteration.</p>
<python><freecad>
2025-05-10 02:20:40
0
473
durette
79,614,986
1,126,944
Is "type" a keyword?
<p>When I read the Python C API Reference Manual, it points to the following, from <a href="https://docs.python.org/3/c-api/type.html#c.PyType_Type" rel="nofollow noreferrer">https://docs.python.org/3/c-api/type.html#c.PyType_Type</a></p> <blockquote> <p>type PyTypeObject</p> <p>Part of the Limited API (as an opaque struct).</p> <p>The C structure of the objects used to describe built-in types.</p> </blockquote> <p>I wonder whether &quot;type&quot; in &quot;<strong>type</strong> PyTypeObject&quot; is a keyword of C. I only know there is typedef in C. Is it an alias for &quot;struct&quot; in C? I did not find such an alias for struct in C. Where does it come from?</p>
<python><c>
2025-05-09 23:26:06
2
1,330
IcyBrk
79,614,976
6,141,238
Does file_obj.close() nicely close file objects in other modules that have been set equal to file_obj?
<p>I have a file <code>main_file.py</code> that creates a global variable <code>file_obj</code> by opening a text file and imports a module <code>imported_module.py</code> which has functions that write to this file and therefore also has a global variable <code>file_obj</code> which I set equal to <code>file_obj</code> in <code>main_file.py</code>:</p> <p><strong>main_file.py</strong></p> <pre><code>import imported_module as im file_obj = open('text_file.txt', mode='w') im.file_obj = file_obj def main(): a = 5 b = 7 im.add_func(a, b) im.multiply_func(a, b) return def add_func(x, y): z = x + y file_obj.write(str(z) + '\n') return main() file_obj.close() </code></pre> <p><strong>imported_module.py</strong></p> <pre><code>file_obj = None def multiply_func(x, y): z = x * y file_obj.write(str(z) + '\n') return </code></pre> <p>If I close <code>file_obj</code> in <code>main_file.py</code> as above, does this also nicely close <code>file_obj</code> in <code>imported_module.py</code>?</p> <p>(In the MRE above, I could add <code>im.file_obj.close()</code> to <code>main_file.py</code> just to be sure. However, a generalization of this explicit approach does not appear possible if <code>imported_module.py</code> imports a second module <code>imported_module0.py</code> which also has a global variable <code>file_obj</code> and sets this variable to its own copy of <code>file_obj</code> with a command like <code>im0.file_obj = file_obj</code>.)</p>
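<p>For reference, the assignment only binds a second name to the same file object, so closing through either name closes the one underlying file; a tiny sketch of that object identity:</p> <pre class="lang-py prettyprint-override"><code>f = open('text_file.txt', mode='w')
g = f              # a second name bound to the same file object
f.close()
print(g is f)      # True: one object, two names
print(g.closed)    # True: closing through either name closes the object
</code></pre>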
<python><module><global-variables><text-files>
2025-05-09 23:15:49
1
427
SapereAude
79,614,935
4,704,065
Compare two DFs with different lengths based on the same column name
<p>I have 2 DataFrames of different lengths (the numbers of rows and columns are different), but they share some column names. I want to filter out rows of DF1 where the column value of DataFrame 1 is present in the column value of DataFrame 2.</p> <p><strong>eg: DF1:</strong></p> <p><a href="https://i.sstatic.net/gYjJNgwI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gYjJNgwI.png" alt="enter image description here" /></a></p> <p><strong>DF2</strong>:</p> <p><a href="https://i.sstatic.net/xFQApjQi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xFQApjQi.png" alt="enter image description here" /></a></p> <p><strong>Result DF:</strong></p> <p><a href="https://i.sstatic.net/8slx0xTK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8slx0xTK.png" alt="enter image description here" /></a></p> <p>I tried using <code>df.loc</code> but it is not working as the lengths of the dataframes are different. If <code>Itow</code> in <code>DF1</code> is present in <code>DF2</code> then it should filter out that row from <code>DF1</code>.</p> <pre><code>res = x.loc[(x[&quot;_iTOW&quot;]) == (act_3d_fixes_spoof[&quot;_iTOW&quot;])] </code></pre>
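<p>Assuming <code>_iTOW</code> is the shared column (as in the snippet), a boolean mask built with <code>isin</code> sidesteps the length mismatch that the element-wise <code>==</code> comparison runs into. A sketch with made-up frames:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd

df1 = pd.DataFrame({'_iTOW': [1, 2, 3, 4], 'other': ['a', 'b', 'c', 'd']})
df2 = pd.DataFrame({'_iTOW': [2, 4]})

# keep only the DF1 rows whose _iTOW does NOT appear in DF2
res = df1[~df1['_iTOW'].isin(df2['_iTOW'])]
print(res)
</code></pre>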
<python><pandas><dataframe>
2025-05-09 22:14:27
2
321
Kapil
79,614,850
20,591,261
How to replace string values in a strict way in Polars?
<p>I'm working with a Polars DataFrame that contains a column with string values. I aim to replace specific values in this column using the <code>str.replace_many()</code> method.</p> <p>My dataframe:</p> <pre class="lang-py prettyprint-override"><code>import polars as pl df = (pl.DataFrame({&quot;Products&quot;: [&quot;cell_foo&quot;,&quot;cell_fooFlex&quot;,&quot;cell_fooPro&quot;]})) </code></pre> <p>Current approach:</p> <pre class="lang-py prettyprint-override"><code>mapping= { &quot;cell_foo&quot; : &quot;cell&quot;, &quot;cell_fooFlex&quot; : &quot;cell&quot;, &quot;cell_fooPro&quot;: &quot;cell&quot; } (df.with_columns(pl.col(&quot;Products&quot;).str.replace_many(mapping ).alias(&quot;Replaced&quot;))) </code></pre> <p>Output:</p> <pre class="lang-py prettyprint-override"><code>shape: (3, 2) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ Products ┆ Replaced β”‚ β”‚ --- ┆ --- β”‚ β”‚ str ┆ str β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•ͺ══════════║ β”‚ cell_foo ┆ cell β”‚ β”‚ cell_fooFlex ┆ cellFlex β”‚ β”‚ cell_fooPro ┆ cellPro β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ </code></pre> <p>Desired Output:</p> <pre class="lang-py prettyprint-override"><code>shape: (3, 2) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ Products ┆ Replaced β”‚ β”‚ --- ┆ --- β”‚ β”‚ str ┆ str β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•ͺ══════════║ β”‚ cell_foo ┆ cell β”‚ β”‚ cell_fooFlex ┆ cell β”‚ β”‚ cell_fooPro ┆ cell β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ </code></pre> <p>How can I modify my approach to ensure that replacements occur only when the entire string matches a key in the mapping?</p>
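<p>For whole-value rather than substring replacement, recent polars also has <code>Expr.replace</code>, which accepts the same mapping but only swaps exact matches and leaves everything else untouched; a sketch under that assumption:</p> <pre class="lang-py prettyprint-override"><code>import polars as pl

df = pl.DataFrame({'Products': ['cell_foo', 'cell_fooFlex', 'cell_fooPro']})
mapping = {'cell_foo': 'cell', 'cell_fooFlex': 'cell', 'cell_fooPro': 'cell'}

# replace() compares entire values, so 'cell_fooFlex' maps to 'cell' rather than 'cellFlex'
print(df.with_columns(pl.col('Products').replace(mapping).alias('Replaced')))
</code></pre>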
<python><python-polars>
2025-05-09 20:48:08
1
1,195
Simon
79,614,691
9,008,261
Why are the subplots in the subfigures getting smaller?
<p>I am using gridspec within a subfigure. For some reason matshow plots are getting smaller and smaller.</p> <pre><code>import numpy as np import matplotlib.pyplot as plt mask = np.array([ [ 1, 0, 1, 0, 1], [ 1, 1, 1, 1, 1], [ 0, 0, 0, 0, 0], [ 1, 1, 1, 1, 1], [ 1, 1, 1, 1, 0] ]) fig = plt.figure(figsize = (12,16), layout = &quot;constrained&quot;) figures = fig.subfigures(4, 2) figures = figures.reshape(-1) x = np.linspace(0, 1, 100) y = x**(0.5) for f in figures: spec = f.add_gridspec(3, 2) for i in range(3): image_axis = f.add_subplot(spec[i, 0]) image_axis.matshow(mask, cmap = 'grey') plot_axis = f.add_subplot(spec[:,1]) plot_axis.plot(x, y) plt.savefig('test.png') </code></pre> <p><a href="https://i.sstatic.net/Yo9mDEx7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Yo9mDEx7.png" alt="enter image description here" /></a></p>
<python><matplotlib>
2025-05-09 18:30:10
1
305
Todd Sierens
79,614,585
1,015,761
Python Asyncio: How to ensure async jobs are executed sequentially from requests
<p>I have a Django endpoint which needs to do some work when called (writing to a file); however, I would like the work to be synchronised with any other calls to the same endpoint at that time. That is, if the endpoint is called 10 times within the same moment, it will write to the file 10 times but one at a time, in the order the requests were received.</p> <p>I have tried implementing various solutions with <code>asyncio.create_event_loop()</code> and <code>run_in_executor()</code> but when I test this by sending several requests at once I haven't been able to get it to truly execute the workers one at a time on the event loop. How can I achieve this? I am basically looking for a task queue, but one where each process that submits to the task queue has no knowledge of what is currently on the queue. I was hoping I could utilize a single event loop to submit workers to it and have it run them one at a time, but it seems that is not the case from my tests.</p> <p>Here is my current code:</p> <pre><code> append = file.read() logger.info( f&quot;{request_number} Appending {file.size} bytes to file&quot; ) def append_to_file(): @retry(tries=3, backoff=3) def append_to_file_with_retry(): with default_storage.open(file_name, &quot;a+b&quot;) as f: logger.info(f&quot;Starting write @ {request_number}&quot;) f.write(b&quot;\n&quot; + append) logger.info(f&quot;Finished write @ {request_number}&quot;) append_to_file() self.get_sync_event_loop().run_in_executor(None, append_to_file, []) logger.info(f&quot;Sending response @ {request_number}&quot;) return Response(None, status=status.HTTP_200_OK) </code></pre>
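<p>One way to get the &quot;queue with exactly one consumer&quot; behaviour described above is an <code>asyncio.Queue</code> drained by a single worker task, so writes happen strictly in arrival order while the producers only enqueue; a generic sketch, independent of Django:</p> <pre class="lang-py prettyprint-override"><code>import asyncio
import contextlib

async def writer_worker(queue):
    # single consumer: queued items are handled one at a time, in arrival order
    while True:
        item = await queue.get()
        print('appending to file:', item)   # stand-in for the real file append
        queue.task_done()

async def main():
    queue = asyncio.Queue()
    worker = asyncio.create_task(writer_worker(queue))
    # ten 'requests' arriving at essentially the same moment
    await asyncio.gather(*(queue.put(i) for i in range(10)))
    await queue.join()       # wait until every queued write has completed
    worker.cancel()
    with contextlib.suppress(asyncio.CancelledError):
        await worker

asyncio.run(main())
</code></pre>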
<python><python-asyncio>
2025-05-09 17:10:18
1
3,876
Goulash
79,614,495
1,547,004
How to share configuration for custom PyPI indexes?
<p>Is there a way to include configuration with a python package source so that it is able to properly resolve dependencies from private registries using a standard build/install chain?</p> <p>For example, I have a python package with dependencies on several python packages that are each published to different private indexes on Gitlab. Their index paths looks similar to this:</p> <p><code>https://gitlab.mycompany.com/api/v4/projects/100/packages/pypi/simple</code></p> <p>I can do this with a <code>requirements.txt</code> file included in my project:</p> <pre><code>requests black pillow --extra-index-url https://__token__:${GITLAB_TOKEN}@gitlab.mycompany.com/api/v4/projects/100/packages/pypi/simple my-dependency-one==3.4.2 --extra-index-url https://__token__:${GITLAB_TOKEN}@gitlab.mycompany.com/api/v4/projects/101/packages/pypi/simple my-dependency-two==1.0.0 </code></pre> <p>This requires making some assumptions about how private registry tokens are stored, but I'm fine with those types of solutions.</p> <p>Unfortunately, <code>requirements.txt</code> files don't integrate with the standard build/install chain for python packages.</p> <p>Python packaging tools like <code>pip</code>, <code>uv</code>, and <code>hatch</code> work out of the box with the public PyPI index. They support private indexes, but the configuration I've found to support them must all be done manually via user config files or command line flags.</p> <p>I understand why python packaging tools wouldn't want python packages to quietly insert private registries and why that would be a security hole.</p> <p>But when working with a source repository, I would think there would be some way to tell <code>pip</code>, <code>uv</code>, <code>hatch</code>, etc. to &quot;use this configuration, too&quot; when doing a build install.</p>
<python><pip><gitlab><pypi>
2025-05-09 16:02:23
1
37,968
Brendan Abel
79,614,459
8,521,346
Install Latest Compatible Package Pip
<p>Say I have a client on <code>Django==4.0.0</code> and I run <code>pip install djangorestframework</code> how do I prevent Pip from upgrading django to v5 along with installing drf?</p> <p>I know I can manually specify the version number for the package, but is there a built in way to just install the latest version that is compatible with my other packages?</p>
<python><pip>
2025-05-09 15:42:15
1
2,198
Bigbob556677
79,614,419
382,784
How do you use certificates and certificate chain with Flask SocketIO?
<p>I need to use SocketIO from flask_socketio with certificate, secret key, and certification chain. How do I do that? Here is my code so far:</p> <pre class="lang-py prettyprint-override"><code>from gevent import monkey monkey.patch_all() import ssl from engineio.async_drivers import gevent # noqa: I201, F401 from flask import Flask, request, render_template, redirect, send_file, url_for, flash from flask_socketio import SocketIO # ... if __name__ == '__main__': port = 443 socketio.run(app, host='0.0.0.0', port=port) </code></pre>
<python><flask><socket.io><certificate>
2025-05-09 15:14:31
1
2,997
wedesoft
79,614,228
5,938,276
Comparison of Frequencies array to FFT indexes
<p>I have 10 channels with each channel corresponding to a frequency:</p> <pre><code>Ch Freq (Khz) 01 155.00 02 165.00 03 175.00 04 185.00 05 195.00 06 205.00 07 215.00 08 225.00 09 235.00 10 245.00 </code></pre> <p>Each channel is actually 10 Khz wide, so anything between 150-160 Khz is on channel 01, 160-170 is 02 etc</p> <p>I also have an fft:</p> <pre><code>np.fft.fftshift(np.fft.fftfreq(102, 1/102.4)) + 200 </code></pre> <p>This gives me a 102 bin numpy array between 150 and 250 Khz and gives just over 10 &quot;bins&quot; of the array per 10 Khz of channel bandwidth.</p> <pre><code>f=array([148.8, 149.803, 150.807. 151.811 ... 249.192, 250.196]) </code></pre> <p>This means to get a frequency from a index is trivial <code>f[index]</code>.</p> <p><strong>Question:</strong></p> <p>Given an array of frequencies <code>freqs=[165, 205]</code> and an array of fft indexes, i.e <code>peaks=[13, 54, 33, 42]</code> - how do I remove from <code>peaks</code> any indexes that are not one of the indexes which maps to a frequency in <code>freqs</code>?</p> <p>In the example above 165 Khz, Channel 02 actually covers 160-170 Khz, which is going to correspond to approximately fft indexes 10-20.</p> <p>205 Khz will correspond approximately to fft indexes 50-60.</p> <p>Working from the example above in the <code>peaks</code> array indexes 13 and 54 would remain, but 33 and 42 would not since those numbers do not fall into the range of indexes corresponding to either freq 165 Khz or 205 Khz.</p> <p>Note: the fft bins don't necessarily end exactly on a channel boundary.</p>
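<p>A sketch of the filtering with NumPy broadcasting, assuming each channel spans 5 kHz either side of its centre (the peak indexes are the made-up ones from the question):</p> <pre class="lang-py prettyprint-override"><code>import numpy as np

f = np.fft.fftshift(np.fft.fftfreq(102, 1/102.4)) + 200
freqs = np.array([165, 205])          # wanted channel centres (kHz)
peaks = np.array([13, 54, 33, 42])    # candidate FFT bin indexes

# keep a peak only if its bin frequency lies within 5 kHz of some wanted centre
keep = np.any(np.abs(f[peaks][:, None] - freqs[None, :]) &lt;= 5, axis=1)
print(peaks[keep])   # [13 54]
</code></pre>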
<python><arrays>
2025-05-09 13:26:26
1
2,456
Al Grant
79,614,223
2,954,288
Deviation between `/usr/bin/time` command and Python self measurement of running time
<p>The trivial Python script, t.py</p> <pre class="lang-py prettyprint-override"><code>import psutil, time startSecs = psutil.Process().create_time() print(&quot;startup took %.3fs&quot; % (time.time() - startSecs)) </code></pre> <p>when run as <code>/usr/bin/time -f 'wall=%e' python3 t.py</code>, results in the following output:</p> <pre><code>startup took 0.820s wall=0.03 </code></pre> <p>While it is surely difficult to estimate the time by &quot;feeling&quot;, the command line is finished much faster than 0.8s as can easily be checked by inserting a <code>time.sleep(0.8)</code> which makes for a noticeable wait.</p> <p>Question: How can the startup time of the python process, until it does <code>time.time()</code>, be 800ms slower than the overall time measured by <code>/usr/bin/time</code>?</p>
<python><psutil>
2025-05-09 13:23:28
1
5,257
Harald
79,614,175
3,888,116
Open Word document as "Read Only" with API doesn't work
<p>I'm using Python to open a Word file using the win32 API. When &quot;RO=False&quot;, it opens the Word file and the user can edit it. When the user opens the file using Word's built-in feature &quot;Open as Read-Only&quot;, the titlebar indicates &quot;Read-Only&quot; and the ribbon's buttons are grayed. The behaviour of these two test cases is expected. It is the latter behaviour that I would like to achieve from within Python.</p> <p>I was expecting that when I launch the code with &quot;RO=True&quot;, it should open the file as Read-Only, disable the ribbon, and have the titlebar indicate &quot;Read-Only&quot;; similarly as the Word's built-in feature. This does not seem to work: the user can still edit (and save) the document, the titlebar does not say &quot;Read-Only&quot; and the ribbon is available.</p> <p>API reference: <a href="https://learn.microsoft.com/en-us/office/vba/api/word.documents.open" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/office/vba/api/word.documents.open</a></p> <p>I'm using Python 3.11.4 and Word for Microsoft 365 MSO, Version 2504, Build 16.0.18730.20122</p> <p>Example code:</p> <pre><code>import win32com.client as win32 filename=&quot;C:\\temp\\test.docx&quot; RO=True #True should open the file as Read-Only, False should open the file as Read-Write. The former doesn't work. try: word = win32.Dispatch('Word.Application') except AttributeError: print(&quot;Failing connecting with Word...&quot;) exit() except Exception as e: print('Unknown Exception: {0}.'.format(e)) exit() if word is None: print(&quot;Word could not be opened. Cancelling request&quot;) try: retval = word.Documents.Open(filename, ReadOnly=RO, Visible=True) word.Visible = True word.Documents[word.Documents.count-1].Activate() word.Activate() except Exception as e: print('Unknown Exception: {0}.'.format(e)) exit() </code></pre> <p>I've also tried using Visual Basic for Applications to compare the behaviour of the API. This has an intermediate result: the titlebar indicates &quot;Read-Only&quot;, but the ribbon is active. When trying to save, it asks for a new filename. Hence, this behaviour is also as expected.</p> <pre><code>Sub OpenDoc() Documents.Open FileName:=&quot;C:\temp\test.docx&quot;, ReadOnly:=True End Sub </code></pre> <p>Windows also has a feature to open a file as read-only: press shift while right-clicking a Word-file, and in the context menu you can select &quot;Open as Read-Only&quot;. However, the file is still editable, so hence not &quot;Read-Only&quot;.</p> <p>I have a few questions:</p> <ul> <li>Does the ReadOnly argument needs to be another value, or another type for it to work?</li> <li>Is there maybe a better, or other way to open files as Read Only?</li> <li>Can somebody else reproduce a similar behaviour?</li> </ul>
<python><ms-word><pywin32>
2025-05-09 13:01:53
0
753
DaveG
79,614,127
2,336,887
PySide6.QtWidgets.QWidget crashes on super().__init__()
<p>The following code crashes and I don't understand why.</p> <pre class="lang-python prettyprint-override"><code>from PySide6 import QtWidgets class MockParent(QtWidgets.QWidget): def __init__(self, /): try: super().__init__() except Exception as e: print(e) if __name__ == '__main__': parent = MockParent() app: QtWidgets.QApplication = QtWidgets.QApplication([]) app.exec() </code></pre> <p>The console gives me something like: <code>Process finished with exit code -1073740791 (0xC0000409)</code></p> <p>I believe this kind of error happens in shared libraries. Ok, but why? What did I do wrong?</p> <p>My understanding of PySide6 documentation leads me to believe that I could pass &quot;None&quot; as arguments. I also set the window type, but the error still happens.</p>
<python><pyside6>
2025-05-09 12:37:22
0
1,181
BaldDude
79,614,094
12,439,683
How to check if an object is a method_descriptor
<p>Given an object, how can I make a check if it is a <code>method_descriptor</code>?</p> <pre class="lang-py prettyprint-override"><code>is_method_descriptor = isinstance(obj, method_descriptor) # does not work </code></pre> <p>The problem: <code>method_descriptor</code> is a builtin but not an accessible variable. I neither found it in <code>builtins</code> or <code>types</code>.</p> <hr /> <p>I use Python 3.10 here if relevant.</p> <p>I am looking for a proof of concept out of curiosity how to get hold of the class for an <code>isinstance</code> check if I actually wanted to, or alternative checks that satisfy it.<br /> For my Sphinx related problem where I encountered them unexpectedly I (hopefully) found a workaround already, but still need to test it if it is sufficient or if I should replace it with a more explicit check that I am here looking for.</p>
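<p>On CPython the class can be grabbed from any known built-in method via <code>type()</code>, and the same type is exposed as <code>types.MethodDescriptorType</code>; a short sketch:</p> <pre class="lang-py prettyprint-override"><code>import types

print(type(str.join))   # &lt;class 'method_descriptor'&gt;
print(isinstance(str.join, types.MethodDescriptorType))   # True

# or obtain the class directly and use it in your own isinstance check
method_descriptor = type(str.join)
print(isinstance(bytes.decode, method_descriptor))        # True
</code></pre>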
<python><python-descriptors><python-builtins>
2025-05-09 12:17:46
1
5,101
Daraan
79,614,070
4,240,413
Why is the upload of files to GCP Vertex AI RAG corpora so slow?
<p>I am experimenting with RAG on GCP/Vertex AI, and tried to create some simple example.</p> <p>Here's what I came up with, creating small dummy files locally and then uploading them one by one to a newly-minted RAG corpus:</p> <pre class="lang-py prettyprint-override"><code>import vertexai from vertexai import rag from tqdm import tqdm from pathlib import Path import lorem PROJECT_ID = &quot;my-project-id&quot; # change this as appropriate LOCATION = &quot;us-central1&quot; CORPUS_DISPLAY_NAME = f&quot;dummy_corpus&quot; TEMP_FILES_DIR = Path(&quot;temp_rag_files&quot;) def create_files(num_files=5): &quot;&quot;&quot;Creates a specified number of dummy text files with lorem ipsum content.&quot;&quot;&quot; TEMP_FILES_DIR.mkdir(exist_ok=True) created_file_paths = [] for i in range(num_files): file_path = TEMP_FILES_DIR / f&quot;dummy_file_{i+1}.txt&quot; content = f&quot;Dummy file {i+1} for RAG example.\n{lorem.paragraph()}&quot; file_path.write_text(content, encoding='utf-8') created_file_paths.append(file_path) print(f&quot;Created dummy file: {file_path}&quot;) return created_file_paths def main(): vertexai.init(project=PROJECT_ID, location=LOCATION) print(&quot;Creating dummy files...&quot;) dummy_file_paths = create_files(num_files=5) print(f&quot;Creating RAG Corpus '{CORPUS_DISPLAY_NAME}'...&quot;) corpus = rag.create_corpus( display_name=CORPUS_DISPLAY_NAME, description=&quot;Corpus with lorem ipsum files.&quot;, ) corpus_name = corpus.name print(f&quot;Successfully created RAG Corpus: {corpus_name}&quot;) print(f&quot;Uploading {len(dummy_file_paths)} files to '{corpus_name}'...&quot;) uploaded_rag_files_info = [] for file_path in tqdm(dummy_file_paths): display_name = file_path.stem rag_file = rag.upload_file( corpus_name=corpus_name, path=str(file_path), display_name=display_name, description=f&quot;Dummy lorem ipsum file: {display_name}&quot;, ) uploaded_rag_files_info.append({&quot;name&quot;: rag_file.name, &quot;display_name&quot;: rag_file.display_name}) print(f&quot;Successfully uploaded: {rag_file.name}&quot;) if __name__ == &quot;__main__&quot;: main() </code></pre> <p>When running the code though, each file upload iteration is quite slow (10 seconds per file), thus making the upload of a reasonably-sized dataset unfeasible.</p> <p>Am I doing anything wrong?</p> <p>In case this could be due to latency, I also tried <code>europe-west3</code> as location (closer to me), but that replacement fails with</p> <blockquote> <p>RuntimeError: ('Failed in indexing the RagFile due to: ', {'code': 400, 'message': 'Request resource location europe-west3 does not match service location us-central1.', 'status': 'FAILED_PRECONDITION'})</p> </blockquote> <p>I may resort to other ways to upload files to the corpus, but I thought this could be a reasonable approach.</p>
<python><google-cloud-vertex-ai><rag>
2025-05-09 11:58:20
0
6,039
Davide Fiocco
79,614,033
1,077,695
What explains pattern matching in Python not matching for 0.0, but matching for float()?
<p>I would like to understand how pattern matching works in Python.</p> <p>I know that I can match a value like so:</p> <pre><code>&gt;&gt;&gt; t = 12.0 &gt;&gt;&gt; match t: ... case 13.0: ... print(&quot;13&quot;) ... case 12.0: ... print(&quot;12&quot;) ... 12 </code></pre> <p>But I notice that when I use matching with a type like <code>float()</code>, it matches <code>12.0</code>:</p> <pre><code>&gt;&gt;&gt; t = 12.0 &gt;&gt;&gt; match t: ... case float(): ... print(&quot;13&quot;) ... case 12.0: ... print(&quot;12&quot;) ... 13 </code></pre> <p>This seems strange, because <code>float()</code> evaluates to <code>0.0</code>, but the results are different if that is substituted in:</p> <pre><code>&gt;&gt;&gt; t = 12.0 &gt;&gt;&gt; match t: ... case 0.0: ... print(&quot;13&quot;) ... case 12.0: ... print(&quot;12&quot;) ... 12 </code></pre> <p>I would expect that if <code>12.0</code> matches <code>float()</code>, it would also match <code>0.0</code>.</p> <p>There are cases where I would like to match against types, so this result seems useful. But why does it happen? How does it work?</p>
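<p>For context: <code>case float():</code> is a class pattern (roughly an <code>isinstance</code> check), not a call that evaluates to <code>0.0</code>, while <code>case 0.0:</code> is a literal pattern compared by equality. A small sketch contrasting the two:</p> <pre class="lang-py prettyprint-override"><code>def first_match(t):
    match t:
        case 0.0:        # literal pattern: checks t == 0.0
            return 'literal 0.0'
        case float():    # class pattern: checks isinstance(t, float)
            return 'class pattern float()'
        case _:
            return 'no match'

print(first_match(12.0))   # class pattern float()
print(first_match(0.0))    # literal 0.0
</code></pre>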
<python><structural-pattern-matching>
2025-05-09 11:27:59
1
2,680
mvallebr
79,613,929
416,983
Steal the GIL after fork()'ing
<p>I'm writing a C++ python module, the module needs to fork() a child process and call some python functions in the child process. The child process has only one thread.</p> <p>I cannot acquire the GIL before calling fork() because doing so will result in a deadlock.</p> <p>If the GIL isn't acquired, after fork(), the child process will see the GIL as locked by other thread which isn't there, and it won't be able to acquire it.</p> <p>Is there any way to &quot;steal&quot; the GIL in the child process, since there is only one thread in the child process, it should be safe to do so.</p>
<python><fork><gil>
2025-05-09 10:28:24
0
1,106
user416983
79,613,844
1,413,856
Tkinter widget not appearing on form
<p>I’m having trouble working out why a widget doesn’t appear on my tkinter form.</p> <p>Here is what I’m doing:</p> <ul> <li>Create a form</li> <li>Create a widget (a label) with the form as the master.</li> <li>Create a <code>Notebook</code> and <code>Frame</code> and add them to the form.</li> <li>Create additional widgets with the form as the master.</li> <li>Add the widgets to the form using grid, and specifying the <code>in_</code> parameter.</li> </ul> <p>Any widgets I create before the notebook and frame don’t appear, even though I don’t add them till <em>after</em> they’ve been created.</p> <p>Here is some sample code:</p> <pre class="lang-py prettyprint-override"><code>form = tkinter.Tk() label1 = tkinter.ttk.Label(form, text='Test Label 1') # This one doesn’t appear notebook = tkinter.ttk.Notebook(form) notebook.pack(expand=True) mainframe = tkinter.ttk.Frame(notebook, padding='13 3 12 12') notebook.add(mainframe, text='Test Page') label2 = tkinter.ttk.Label(form, text='Test Label 2') # This one works entry = tkinter.ttk.Entry(form) label1.grid(in_=mainframe, row=1, column=1) label2.grid(in_=mainframe, row=2, column=1) entry.grid(in_=mainframe, row=3, column=1) form.mainloop() </code></pre> <p>Note that <code>label1</code> doesn’t appear even though there is a space for it. If I <code>print(id(form))</code> before and after creating the notebook and frame, they are the same, so it’s not as if the form itself has changed.</p> <p>Where has that first widget gone to and how can I get it to appear?</p>
<python><tkinter>
2025-05-09 09:32:25
1
16,921
Manngo
79,613,763
3,087,409
Access variables available in plotly's default hovertemplate
<p>I'm trying to change the hovertemplate on a stacked bar chart with plotly, but I'm struggling to access the same variables that plotly does by default. Here's the MWE I'm working with</p> <pre><code>import plotly.express as px import pandas as pd df = pd.DataFrame({ 'value': [3,4,5,6,7,8], 'category': ['Cat 1', 'Cat 1', 'Cat 1', 'Cat 2', 'Cat 2', 'Cat 2'], 'subplot': ['Eggs', 'Chips', 'Beans', 'Eggs', 'Chips', 'Beans']}) fig = px.bar(df, x=df['subplot'], y=df['value'], color=df['category']) print(&quot;plotly express hovertemplate:&quot;, fig.data[0].hovertemplate) fig.update_traces(hovertemplate='%{category}%{value}%{subplot}%{x}%{y}%{label}%{color}%{text}') fig.show() </code></pre> <p>Without editing the hovertemplate, plotly produces: <code>category=Cat 1&lt;br&gt;Food item=%{x}&lt;br&gt;units of stuff=%{y}&lt;extra&gt;&lt;/extra&gt;</code> but I can't see a way to edit that template and get access to the <code>category</code> variable. (Getting the value and the food-stuff works with a couple of the variables in my custom hovertemplate.)</p> <p>I tried every keyword I could think of (and some from the documentation <a href="https://plotly.com/python/reference/pie/#pie-hovertemplate" rel="nofollow noreferrer">here</a> (I don't know why plotly's documentation links to the pie chart page)), but I can never get Cat 1 or Cat 2 to show up in my template.</p>
<python><plotly>
2025-05-09 08:35:33
0
2,811
thosphor
79,613,514
754,136
Progress bar with multiple stats printed in new lines
<pre class="lang-py prettyprint-override"><code>import numpy as np from tqdm import tqdm import time tot_steps = 100 pbar = tqdm(total=tot_steps) for i in range(tot_steps): time.sleep(0.1) x, y, z, k = np.random.rand(4) pbar.update(1) pbar.set_description( f&quot;error[{x:.3f}] _ &quot; f&quot;samples[{y:.3f}] _ &quot; f&quot;min[{z:.3f}] _ &quot; f&quot;max[{k:.3f}] _ &quot; ) </code></pre> <p>Output</p> <pre><code>error[0.145] _ samples[0.255] _ min[0.286] _ max[0.878] _ : 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 100/100 [00:13&lt;00:00, 70.96it/s] </code></pre> <p>But I would like to have something like this</p> <pre><code>100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 100/100 [00:13&lt;00:00, 70.96it/s] error[0.145] samples[0.255] min[0.286] max[0.878] </code></pre> <p>With the numbers progressively updated <strong>without printing everything everytime</strong>.</p> <p>That is, I would like to print the logged stats on new lines. However, if I add <code>\n</code> to the print of the progress bar, it doesn't work (it keeps printing new lines).</p> <p><strong>EDIT</strong> I have accepted nischal's answer, but there is also another way to do that by making the progress bar wrap around multiple lines if needed. I just copy-pasted <a href="https://github.com/tqdm/tqdm/issues/630#issuecomment-1321245383" rel="nofollow noreferrer">this code</a> and it works. It is faster than nischal's answer but I can't really control how many lines are used (it depends on the screen width).</p>
<python><tqdm>
2025-05-09 05:11:08
2
5,474
Simon
79,613,461
2,085,438
How to make sure the browser can interpret a url through dcc.Location
<p>I am passing a string to the <code>pathname</code> property of a <code>dcc.Location</code> component in a callback function of my dash app.</p> <p>The string is structured like <code>path?query1=test</code>.</p> <p>When it actually runs, the <code>?</code> gets url encoded automatically and the browser does not interpret it correctly.</p> <p>I have tried to url encode the question mark myself but it also fails.</p> <p>How can I force it to just use whatever I send straight away without url encoding anything? Or how to make it work as intended?</p>
<python><url><plotly-dash>
2025-05-09 03:52:30
1
2,663
Chapo
79,613,434
1,023,928
Polars pl.read_csv_batched -> batch_size is not respected at all
<p>Using <code>pl.read_csv_batched()</code> with <code>batch_size=n</code>, batches are read without any regard to <code>batch_size</code> whatsoever. I use polars version 1.29.0.</p> <p>What's up with that? Can I use polars to import a large CSV file without going the manual route? Why does <code>batch_size</code> not constrain the batch sizes at all?</p>
<python><python-polars>
2025-05-09 02:59:50
1
7,316
Matt
79,613,425
3,026,965
Get "Media created" timestamp with python for .mp4 and .m4a video, audio files (no EXIF)
<p>Trying to get &quot;<strong>Media created</strong>&quot; timestamp and insert as the &quot;<strong>Last modified date</strong>&quot; with python for .mp4 and .m4a video, audio files (no EXIF). The &quot;Media created&quot; timestamp shows up and correctly in Windows with right click file inspection, but I can not get it with python. What am I doing wrong? (This is also a working fix for cloud storage changing the last modified date of files.)</p> <p><a href="https://i.sstatic.net/ZhT7NimS.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZhT7NimS.jpg" alt="Media created timestamp" /></a></p> <pre><code> enter from mutagen.mp4 import MP4, M4A from datetime import datetime import os def get_mp4_media_created_date(filepath): &quot;&quot;&quot; Extracts the &quot;Media Created&quot; date from an MP4 or M4A file. Args: filepath (str): The path to the MP4 or M4A file. Returns: datetime or None: The creation date as a datetime object, or None if not found. &quot;&quot;&quot; file_lower = filepath.lower() try: if file_lower.endswith(&quot;.mp4&quot;): media = MP4(filepath) elif file_lower.endswith(&quot;.m4a&quot;): media = M4A(filepath) else: return None # Not an MP4 or M4A file found_date = None date_tags_to_check = ['creation_time', 'com.apple.quicktime.creationdate'] for tag in date_tags_to_check: if tag in media: values = media[tag] if not isinstance(values, list): values = [values] for value in values: if isinstance(value, datetime): found_date = value break elif isinstance(value, str): try: found_date = datetime.fromisoformat(value.replace('Z', '+00:00')) break except ValueError: pass if found_date: break return found_date except Exception as e: print(f&quot;Error processing {filepath}: {e}&quot;) return None if __name__ == &quot;__main__&quot;: filepath = input(&quot;Enter the path to the MP4/M4A file: &quot;) if os.path.exists(filepath): creation_date = get_mp4_media_created_date(filepath) if creation_date: print(f&quot;Media Created Date: {creation_date}&quot;) else: print(&quot;Could not find Media Created Date.&quot;) else: print(&quot;File not found.&quot;) here </code></pre>
<python><windows><mp4><datecreated>
2025-05-09 02:48:38
2
727
user3026965
79,613,159
1,394,590
Can a decorator be used to define type hints for the decorated function?
<p>Given a decorator that injects an argument into the decorated function like the following dumb example:</p> <pre class="lang-py prettyprint-override"><code>from typing import Callable import time def pass_timestamp(fn: Callable[[float], None]) -&gt; Callable[[], None]: def wrapper(): fn(time.time()) return wrapper </code></pre> <p>Is it possible that the type of the injected value is &quot;pushed&quot; into the decorated function so that I don't need to annotate the parameter? For example:</p> <pre class="lang-py prettyprint-override"><code>@pass_timestamp def some_func(ts): reveal_type(ts) # &lt;-- I'd like to get &quot;float&quot;. </code></pre> <p>I'm afraid the answer is no, but I'd be happy to be wrong.</p>
<python><python-typing>
2025-05-08 21:01:19
1
15,387
bgusach
79,613,135
9,250,059
Can't connect to PostgreSQL, from Google Cloud Run to Google Cloud SQL
<p>Sup?</p> <p>I want to connect to my <strong>PostgreSQL</strong> database on <strong>Cloud SQL</strong> from <strong>Cloud Run</strong>, and whatever I do, I get errors. The most annoying one is the <strong>&quot;connection refused&quot;</strong> error. I have tried anything you can imagine: <strong>&quot;SQL client role&quot;</strong>, <strong>&quot;Different connection strings&quot;</strong>, <strong>&quot;Adding the client SQL connection&quot;</strong>, <strong>&quot;Docs&quot;</strong>, <strong>&quot;LLMs&quot;</strong>, anything you can imagine.</p> <p>I put the errors I receive in the logs here.May someone help me:</p> <p><em>sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) connection to server on socket &quot;/cloudsql/&lt;project_id&gt;:&lt;region_id&gt;:&lt;instance_id&gt;/.s.PGSQL.5432&quot; failed: Connection refused</em></p> <p><em>Cloud SQL connection failed. Please see <a href="https://cloud.google.com/sql/docs/mysql/connect-run" rel="nofollow noreferrer">https://cloud.google.com/sql/docs/mysql/connect-run</a> for additional details: certificate had CN &quot;&quot;, expected &quot;&lt;project_id&gt;:&lt;region_id&gt;:&lt;instance_id&gt;&quot;</em></p> <pre><code># Codes I have tried to connect so far: DB_CONFIG = { &quot;drivername&quot;: &quot;postgresql+psycopg2&quot;, &quot;username&quot;: os.getenv(&quot;DB_USER&quot;), &quot;password&quot;: os.getenv(&quot;DB_PASS&quot;), &quot;database&quot;: os.getenv(&quot;DB_NAME&quot;), &quot;query&quot;: { &quot;host&quot;: os.getenv(&quot;DB_HOST&quot;) # like this: /cloudsql/&lt;project_id&gt;:&lt;region&gt;:&lt;instance_name&gt; } } #or DB_CONFIG = { &quot;drivername&quot;: &quot;postgresql+psycopg2&quot;, &quot;username&quot;: os.getenv(&quot;DB_USER&quot;), &quot;password&quot;: os.getenv(&quot;DB_PASS&quot;), &quot;database&quot;: os.getenv(&quot;DB_NAME&quot;), &quot;host&quot;: os.getenv(&quot;DB_HOST&quot;) } engine = create_engine(URL.create(**DB_CONFIG)) # or using direct connection strings DATABASE_URL = os.getenv( &quot;DATABASE_URL&quot;, &quot;postgresql://&lt;user&gt;:&lt;pass&gt;@/&lt;db_name&gt;? host=/cloudsql/&lt;project_id&gt;:&lt;region&gt;:&lt;instance_name&gt;&quot; ) engine = create_engine(DATABASE_URL) </code></pre> <p>P.S. I have a Python container running my FastAPI code, run via Cloud Run, that is trying to connect to my database on Cloud SQL.</p>
<python><postgresql><sqlalchemy><google-cloud-sql><google-cloud-run>
2025-05-08 20:35:53
1
470
Armin Fisher
79,613,107
7,124,155
PySpark UDF mapping is returning empty columns
<p>Given a dataframe, I want to apply a mapping with UDF but getting empty columns.</p> <pre><code>data = [(1, 3), (2, 3), (3, 5), (4, 10), (5, 20)] df = spark.createDataFrame(data, [&quot;int_1&quot;, &quot;int_2&quot;]) df.show() +-----+-----+ |int_1|int_2| +-----+-----+ | 1| 3| | 2| 3| | 3| 5| | 4| 10| | 5| 20| +-----+-----+ </code></pre> <p>I have a mapping:</p> <pre><code>def test_map(col): if col &lt; 5: score = 'low' else: score = 'high' return score mapp = {} test_udf = F.udf(test_map, IntegerType()) </code></pre> <p>I iterate here to populate mapp...</p> <pre><code>for x in (1, 2): print(f'Now working {x}') mapp[f'limit_{x}'] = test_udf(F.col(f'int_{x}')) print(mapp) {'limit_1': Column&lt;'test_map(int_1)'&gt;, 'limit_2': Column&lt;'test_map(int_2)'&gt;} df.withColumns(mapp).show() +-----+-----+-------+-------+ |int_1|int_2|limit_1|limit_2| +-----+-----+-------+-------+ | 1| 3| NULL| NULL| | 2| 3| NULL| NULL| | 3| 5| NULL| NULL| | 4| 10| NULL| NULL| | 5| 20| NULL| NULL| +-----+-----+-------+-------+ </code></pre> <p>The problem is I get null columns. What I'm expecting is:</p> <pre><code>+-----+-----+-------+-------+ |int_1|int_2|limit_1|limit_2| +-----+-----+-------+-------+ | 1| 3| low | low | | 2| 3| low | low | | 3| 5| low | low | | 4| 10| low | high| | 5| 20| low | high| +-----+-----+-------+-------+ </code></pre> <p>The reason I'm doing it is because I have to do for 100 columns. I heard that &quot;withColumns&quot; with a mapping is much faster than iterating over &quot;withColumn&quot; many times.</p>
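<p>One thing to check in the snippet: the UDF returns the strings <code>'low'</code>/<code>'high'</code> while the declared return type is <code>IntegerType</code>, and that kind of mismatch typically surfaces as nulls. A sketch assuming <code>StringType</code> is what is intended:</p> <pre class="lang-py prettyprint-override"><code>from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StringType

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([(1, 3), (2, 3), (3, 5), (4, 10), (5, 20)], ['int_1', 'int_2'])

def test_map(value):
    return 'low' if value &lt; 5 else 'high'

# declare the type the function actually returns (strings, not integers)
test_udf = F.udf(test_map, StringType())

mapp = {f'limit_{x}': test_udf(F.col(f'int_{x}')) for x in (1, 2)}
df.withColumns(mapp).show()
</code></pre>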
<python><apache-spark><pyspark><databricks>
2025-05-08 20:15:56
1
1,329
Chuck
79,612,995
700,070
Parse specific key from JSON, and extract if field matches
<p>I am trying to parse this JSON file with the following declaration:</p> <pre class="lang-json prettyprint-override"><code>{ &quot;results&quot;: [ { &quot;vulnerabilities&quot;: [ { &quot;status&quot;: &quot;fixed in 1.3.1&quot;, &quot;severity&quot;: &quot;critical&quot;, }, { &quot;severity&quot;: &quot;critical&quot;, }, ], } ], } </code></pre> <p>I have the following code:</p> <pre><code>import json f = open('file.json',) json_object = json.load(f) for i in json_object['results']: print(i) f.close() </code></pre> <p>this prints the whole declaration beginning at <code>results: 0:</code></p> <p>How can I get it to print all vulnerabilities that have matching severity critical or high?</p>
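<p>Assuming the real file is valid JSON with the structure shown, walking <code>results</code> and then each <code>vulnerabilities</code> list with a severity filter is enough; a sketch:</p> <pre class="lang-py prettyprint-override"><code>import json

with open('file.json') as f:
    report = json.load(f)

for result in report['results']:
    for vuln in result.get('vulnerabilities', []):
        if vuln.get('severity') in ('critical', 'high'):
            print(vuln)
</code></pre>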
<python><json>
2025-05-08 18:52:19
3
2,680
Jshee
79,612,836
3,333,319
Pass Literal[...] where Type[T] is expected
<p>I have a Python function that accepts as parameters some raw <code>bytes</code> data (representing a JSON object) and a class, and it returns an instance of the class populated with values from the raw data. The function signature is as follows.</p> <pre class="lang-py prettyprint-override"><code>from typing import Type, TypeVar T = TypeVar['T'] def parse(data: bytes, cls: Type[T]) -&gt; T: ... </code></pre> <p>Usage:</p> <pre class="lang-py prettyprint-override"><code>class ClassA: ... class ClassB: ... parse_payload(bytes(&quot;{..objectA...}&quot;), ClassA) # Pylance does not complain parse_payload(bytes(&quot;{..objectB...}&quot;), ClassB) # Pylance does not complain </code></pre> <p>Now I need to also cover the case where <code>data</code> represents a JSON string (not a JSON object) with a few possible values, however I am unable to extend the definition of the function in a way that satisfies Pylance.</p> <p>Using the current definition and declaring the set of possible values using <code>typing.Literal</code> I get the following error from Pylance:</p> <pre class="lang-py prettyprint-override"><code>from typing import Literal, TypeAlias StrValues: TypeAlias = Literal['ok', 'fail'] parse_payload(b'ok', StrValues) # Pylance complains: Argument of type &quot;StrValues&quot; cannot be assigned to parameter &quot;cls&quot; of type &quot;type[T@parse]&quot; in function &quot;parse&quot; - Pylance (reportArgumentType) </code></pre> <p>I get that the complaint is likely due to the fact that <code>StrValues</code> is not a regular type you can build an instance of (like <code>ClassA</code> or <code>ClassB</code>). Is there a way to properly define the signature of my function?</p>
<python><python-typing><pyright>
2025-05-08 16:46:56
0
973
Sirion
79,612,757
5,102,811
Scipy's WrappedCauchy function wrong?
<p>I'd like someone to check my understanding on the wrapped cauchy function in Scipy... From Wikipedia &quot;a wrapped Cauchy distribution is a wrapped probability distribution that results from the &quot;wrapping&quot; of the Cauchy distribution around the unit circle.&quot; It's similar to the Von Mises distribution in that way.</p> <p>I use the following bits of code to calculate a couple thousand random variates, get a histogram and plot it.</p> <pre><code>from scipy.stats import wrapcauchy, vonmises import plotly.graph_objects as go import numpy as np def plot_cauchy(c, loc = 0, scale = 1, size = 100000): ''' rvs(c, loc=0, scale=1, size=1, random_state=None) ''' rvses = vonmises.rvs(c, loc = loc, scale = scale, size = size) # rvses = wrapcauchy.rvs(c, # loc = loc, # scale = scale, # size = size) y,x = np.histogram(rvses, bins = 200, range = [-np.pi,np.pi], density = True) return x,y fig = go.Figure() loc = -3 x,y = plot_cauchy(0.5, loc = loc) fig.add_trace(go.Scatter(x=x, y=y, mode='lines+markers', name= f'Centered on {loc}')) loc = 1.5 x,y = plot_cauchy(0.5, loc = loc) fig.add_trace(go.Scatter(x=x, y=y, mode='lines+markers', name= f'Centered on {loc}')) loc = 0 x,y = plot_cauchy(0.5, loc = loc) fig.add_trace(go.Scatter(x=x, y=y, mode='lines+markers', name=f'Centered on {loc}')) fig.show() </code></pre> <p>When plotting this using the Von Mises distribution I get a couple of distributions that are wrapped from -pi to pi and centered on &quot;loc&quot;:</p> <p><a href="https://i.sstatic.net/nLNpLkPN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nLNpLkPN.png" alt="enter image description here" /></a></p> <p>When I replace the vonmises distribution with the wrapcauchy distribution I get a &quot;non-wrapped&quot; result, that to my eye just looks wrong. To plot this completely I have to adjust the ranges for the histogram<a href="https://i.sstatic.net/vTqBrwRo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vTqBrwRo.png" alt="" /></a></p> <p>This is with Scipy version '1.15.2'.</p> <p>Is there a way to correctly &quot;wrap&quot; the outputs of a the Scipy call, or another library that correctly wraps the output from -pi to pi?</p>
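<p>For reference, a sketch of a manual re-wrapping (this assumes that <code>wrapcauchy</code> draws its variates on the native <code>[0, 2*pi)</code> support and that <code>loc</code> simply shifts that support):</p> <pre><code>import numpy as np
from scipy.stats import wrapcauchy

def wrapped_cauchy_rvs(c, loc=0.0, size=100000):
    # Draw on [0, 2*pi), shift by loc, then wrap back into [-pi, pi).
    rvs = wrapcauchy.rvs(c, size=size)
    return (rvs + loc + np.pi) % (2 * np.pi) - np.pi

samples = wrapped_cauchy_rvs(0.5, loc=-3)
y, x = np.histogram(samples, bins=200, range=[-np.pi, np.pi], density=True)
</code></pre>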
<python><scipy><statistics>
2025-05-08 15:44:48
1
405
RedM
79,612,729
6,133,833
Training and validation losses do not reduce when fine-tuning ViTPose from huggingface
<p>I am trying to fine-tune a transformer/encoder based pose estimation model available here at: <a href="https://huggingface.co/docs/transformers/en/model_doc/vitpose" rel="nofollow noreferrer">https://huggingface.co/docs/transformers/en/model_doc/vitpose</a></p> <p>When passing &quot;labels&quot; attribute to the forward pass of the model, the model returns &quot;Training not enabled&quot;. The core logic I have implemented is as follows. Since the model outputs heatmaps, I use a post-processing pipeline to get back the keypoint predictions in the image space, and compute a MSE Loss between these reconstructed keypoint (using soft argmax) and ground truth keypoints.</p> <ol> <li>Is this a correct way of thinking? Comparing heatmap to heatmap might seem more intuitive, but I didn't want to write the keypoint to heatmap and add the specific image processor's normalization with the worry that they might go off-scale</li> </ol> <p>Things I have tried:</p> <ol> <li>modified the model's heatmap head to predict for 24 keypoints for dogs instead of the 17 for humans it was trained on 2.added a Simple Adapter network right after the layer norm and before the model's heatmap head</li> <li>unfreeze some of the backbone layer's gradually</li> <li>track loss with both normalized and unnormalized keypoints.</li> <li>Added gradient clipping.</li> <li>the post processing pipeline is a differentiable approxiamtion of <a href="https://github.com/huggingface/transformers/blob/main/src/transformers/models/vitpose/image_processing_vitpose.py" rel="nofollow noreferrer">https://github.com/huggingface/transformers/blob/main/src/transformers/models/vitpose/image_processing_vitpose.py</a></li> <li>Tuning LRs</li> </ol> <p>However, gradient flow through the head and the adapter and the deeper encoder layers have been very very small.</p> <p>here is the notebook link: <a href="https://github.com/sohamb91/ML/blob/main/ViTpose_update_loss.ipynb" rel="nofollow noreferrer">https://github.com/sohamb91/ML/blob/main/ViTpose_update_loss.ipynb</a></p> <p>Looking forward to any discussions/help!</p>
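<p>For context, this is the kind of differentiable soft-argmax I mean; a minimal sketch rather than the exact code from the notebook, with <code>beta</code> and the (x, y) coordinate convention as assumptions:</p> <pre><code>import torch

def soft_argmax_2d(heatmaps, beta=100.0):
    # heatmaps: (B, K, H, W) -&gt; coords: (B, K, 2) as (x, y) in heatmap pixel space
    b, k, h, w = heatmaps.shape
    probs = torch.softmax(beta * heatmaps.reshape(b, k, -1), dim=-1).reshape(b, k, h, w)
    ys = torch.arange(h, dtype=probs.dtype, device=probs.device)
    xs = torch.arange(w, dtype=probs.dtype, device=probs.device)
    exp_y = (probs.sum(dim=3) * ys).sum(dim=2)   # marginal over width, expectation over rows
    exp_x = (probs.sum(dim=2) * xs).sum(dim=2)   # marginal over height, expectation over cols
    return torch.stack([exp_x, exp_y], dim=-1)
</code></pre>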
<python><machine-learning><pytorch><huggingface-transformers><transformer-model>
2025-05-08 15:28:54
0
341
Soham Bhaumik
79,612,706
3,163,618
How do I keep both columns as the result of a Polars join with left_on and right_on?
<p>How do I keep both columns as the result of a Polars join with <code>left_on</code> and <code>right_on</code>, like in SQL?</p> <pre><code>df1 = pl.DataFrame({&quot;a&quot;: [&quot;x&quot;,&quot;y&quot;,&quot;z&quot;], &quot;lk&quot;: [1,2,3]}) df2 = pl.DataFrame({&quot;b&quot;: [&quot;x&quot;,&quot;y&quot;,&quot;z&quot;], &quot;rk&quot;: [1,2,3]}) </code></pre> <p>SQL analogy:</p> <pre class="lang-sql prettyprint-override"><code>SELECT * FROM tbl1 INNER JOIN tbl2 ON tbl1.lk = tbl2.rk </code></pre> <p>The usual result (currently undocumented) only keeps the <em>left column</em>. If I do a <code>.select(&quot;rk&quot;)</code> afterwards, that column is already gone.</p> <pre><code>&gt;&gt;&gt; print(df1.join(df2, how=&quot;inner&quot;, left_on=&quot;lk&quot;, right_on=&quot;rk&quot;)) shape: (3, 3) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β” β”‚ a ┆ lk ┆ b β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ i64 ┆ str β”‚ β•žβ•β•β•β•β•β•ͺ═════β•ͺ═════║ β”‚ x ┆ 1 ┆ x β”‚ β”‚ y ┆ 2 ┆ y β”‚ β”‚ z ┆ 3 ┆ z β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜ </code></pre>
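<p>For reference, a sketch of one workaround: duplicate the right key before joining so a copy survives. (Newer Polars versions also expose a <code>coalesce</code> parameter on <code>join</code> that should keep both key columns, though I have not checked which version introduced it.)</p> <pre><code>out = df1.join(
    df2.with_columns(pl.col('rk').alias('rk_copy')),
    how='inner', left_on='lk', right_on='rk',
)
print(out)  # columns: a, lk, b, rk_copy
</code></pre>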
<python><join><python-polars>
2025-05-08 15:19:13
1
11,524
qwr
79,612,705
662,285
Error: Operation returned an invalid status 'Bad Request' on client.complete
<p>I am using below code to get data from &quot;Mistral-Nemo&quot; model hosted as SaaS in Azure AI Foundry. It gives me below error:</p> <pre><code>Error: Operation returned an invalid status 'Bad Request' Traceback (most recent call last): File &quot;c:\Users\guptswapmax\TestFalcon\TestFalcon.py&quot;, line 32, in &lt;module&gt; response = client.complete( ~~~~~~~~~~~~~~~^ messages=[ ^^^^^^^^^^ ...&lt;5 lines&gt;... model=model_name ^^^^^^^^^^^^^^^^ ) ^ File &quot;c:\Users\abc\TestFalcon\.venv\Lib\site-packages\azure\ai\inference\_patch.py&quot;, line 738, in complete raise HttpResponseError(response=response) azure.core.exceptions.HttpResponseError: Operation returned an invalid status 'Bad Request' </code></pre> <p>I have looked into the permissions and make sure my SP has required permissions and it also has &quot;Coginitive Support User&quot; and &quot;Cognitive User Contributor&quot; role and trying to implement using <strong>Service Principal</strong></p> <p>When i use the same code with Azure API Key and AzureKeyCredential it works. <a href="https://github.com/MicrosoftDocs/azure-ai-docs/blob/main/articles/ai-foundry/model-inference/includes/use-chat-completions/csharp.md" rel="nofollow noreferrer">https://github.com/MicrosoftDocs/azure-ai-docs/blob/main/articles/ai-foundry/model-inference/includes/use-chat-completions/csharp.md</a></p> <pre><code>import os from azure.ai.inference import ChatCompletionsClient from azure.ai.inference.models import SystemMessage, UserMessage from azure.identity import DefaultAzureCredential, ClientSecretCredential import logging from azure.core.pipeline.policies import HttpLoggingPolicy endpoint = &quot;https://abcdomain.services.ai.azure.com/models&quot; model_name = &quot;Mistral-Nemo&quot; Client_Id = &quot;xxxxxxxxxxxxxxxxxxxxxxxx&quot; Client_Secret = &quot;xxxxxxxxxxxxxxxxxxx&quot; Tenant_Id = &quot;xxxxxxxxxxxxxxxxxxxxxxxxx&quot; cred = ClientSecretCredential( tenant_id=Tenant_Id, client_id=Client_Id, client_secret=Client_Secret ) client = ChatCompletionsClient( endpoint=endpoint, credential=cred, credential_scopes=[&quot;https://cognitiveservices.azure.com/.default&quot;] ) try: result = client.complete( messages=[ SystemMessage(content=&quot;You are a helpful assistant.&quot;), UserMessage(content=&quot;How many languages are in the world?&quot;), ], temperature=0.8, top_p=0.1, max_tokens=2048, stream=True, model=model_name ) except Exception as e: print(&quot;Error:&quot;, e) </code></pre>
<python><azure><model><azure-machine-learning-service><azure-ai-foundry>
2025-05-08 15:18:26
1
4,564
Bokambo
79,612,668
3,163,618
Concatenate two Polars string columns or a fixed string to a column
<p>How do I concatenate two string columns horizontally or append a fixed string to a column?</p>
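<p>For reference, a minimal sketch of both cases on a toy frame (assuming plain string columns):</p> <pre><code>import polars as pl

df = pl.DataFrame({'a': ['x', 'y'], 'b': ['1', '2']})

df = df.with_columns(
    pl.concat_str([pl.col('a'), pl.col('b')], separator='-').alias('a_b'),  # two columns
    (pl.col('a') + '_suffix').alias('a_fixed'),                             # column + fixed string
)
print(df)
</code></pre>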
<python><python-polars><string-concatenation>
2025-05-08 14:55:14
1
11,524
qwr
79,612,625
1,391,441
Underlining fails in matplotlib
<p>My <code>matplotlib.__version__</code> is <code>3.10.1</code>. I'm trying to underline some text and can not get it to work. As far as I can tell, Latex is installed and accessible in my system:</p> <pre><code>import subprocess result = subprocess.run( [&quot;pdflatex&quot;, &quot;--version&quot;], check=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, ) print(result.stdout) </code></pre> <p>results in:</p> <pre><code>b'pdfTeX 3.141592653-2.6-1.40.25 (TeX Live 2023/Debian)\nkpathsea version 6.3.5\nCopyright 2023 Han The Thanh (pdfTeX) et al.\nThere is NO warranty. Redistribution of this software is\ncovered by the terms of both the pdfTeX copyright and\nthe Lesser GNU General Public License.\nFor more information about these matters, see the file\nnamed COPYING and the pdfTeX source.\nPrimary author of pdfTeX: Han The Thanh (pdfTeX) et al.\nCompiled with libpng 1.6.43; using libpng 1.6.43\nCompiled with zlib 1.3; using zlib 1.3\nCompiled with xpdf version 4.04\n' </code></pre> <p>Also the simple code:</p> <pre><code>import matplotlib.pyplot as plt plt.text(0.5, 0.5, r'$\frac{a}{b}$') plt.show() </code></pre> <p>works as expected.</p> <p>Similar questions from 2012 (<a href="https://stackoverflow.com/questions/10727368/underlining-text-in-python-matplotlib">Underlining Text in Python/Matplotlib</a>) and 2017 (<a href="https://stackoverflow.com/questions/46009701/matplotlib-text-underline">matplotlib text underline</a>) have accepted answers that fail with</p> <blockquote> <p>RuntimeError: Failed to process string with tex because dvipng could not be found</p> </blockquote> <p>A similar question from 2019 (<a href="https://stackoverflow.com/questions/57478211/underlining-not-working-in-matplotlib-graphs-for-the-following-code-using-tex">Underlining not working in matplotlib graphs for the following code using tex</a>) has no answer and it is my exact same issue, i.e.:</p> <pre><code>import matplotlib.pyplot as plt plt.text(.5, .5, r'Some $\underline{underlined}$ text') plt.show() </code></pre> <p>fails with:</p> <pre><code>ValueError: \underline{underlined} text ^ ParseFatalException: Unknown symbol: \underline, found '\' (at char 0), (line:1, col:1) </code></pre> <p>The 2017 question has a <a href="https://stackoverflow.com/a/68213799/1391441">deleted answer</a> that points to a closed <a href="https://github.com/matplotlib/matplotlib/pull/15624" rel="nofollow noreferrer">PR</a> in matplotlib's Github repo which points to another <a href="https://github.com/matplotlib/matplotlib/pull/23616" rel="nofollow noreferrer">PR</a> called <em>Support \underline in Mathtext</em> which is marked as a draft.</p> <p>Does my matplotlib version not support the <code>underline</code> Latex command?</p>
<python><matplotlib>
2025-05-08 14:33:15
1
42,941
Gabriel
79,612,553
11,770,390
avoid building .egg-info files in read-only docker mount for local dependencies (uv sync)
<p>I'm trying to use uv sync from within docker with this command:</p> <p><em>Dockerfile</em></p> <pre><code>RUN --mount=type=cache,target=/root/.cache/uv \ --mount=type=bind,source=uv.lock,target=uv.lock \ --mount=type=bind,source=pyproject.toml,target=pyproject.toml \ --mount=type=bind,source=packages,target=/app/packages \ uv sync --package ftp-mock --locked --no-editable </code></pre> <p>However this wants to create .egg files which fails because I mounted the <code>packages</code> folder as read only (and I want to keep it this way). This is my current project layout:</p> <pre><code>. β”œβ”€β”€ packages β”‚ β”œβ”€β”€ ftp-mock β”‚ β”‚ β”œβ”€β”€ config β”‚ β”‚ β”‚ β”œβ”€β”€ config_dev.json β”‚ β”‚ β”‚ └── ftp_mock_config.py β”‚ β”‚ β”œβ”€β”€ Dockerfile β”‚ β”‚ β”œβ”€β”€ ftp_mock.py β”‚ β”‚ └── pyproject.toml β”‚ β”œβ”€β”€ http-mock β”‚ β”‚ β”œβ”€β”€ config β”‚ β”‚ β”‚ β”œβ”€β”€ config_dev.json β”‚ β”‚ β”‚ └── http_mock_config.py β”‚ β”‚ β”œβ”€β”€ Dockerfile β”‚ β”‚ β”œβ”€β”€ pyproject.toml β”‚ β”‚ β”œβ”€β”€ http_mock.py β”‚ β”‚ └── templates β”‚ β”‚ └── index.html β”‚ β”œβ”€β”€ local-shared β”‚ β”‚ β”œβ”€β”€ __init__.py β”‚ β”‚ β”œβ”€β”€ pyproject.toml β”‚ β”‚ β”œβ”€β”€ mymodule.py β”‚ β”œβ”€β”€ main-app β”‚ β”‚ β”œβ”€β”€ config β”‚ β”‚ β”‚ β”œβ”€β”€ config_dev.json β”‚ β”‚ β”‚ └── main_app_config.py β”‚ β”‚ β”œβ”€β”€ Dockerfile β”‚ β”‚ β”œβ”€β”€ pyproject.toml β”‚ β”‚ └── main_app.py β”‚ └── __init__.py </code></pre> <p>...</p> <p>As you can see I'm only trying to build the workspace-member <code>ftp-mock</code> which in turn requires <code>local-shared</code>. Now <code>uv sync</code> tries to install <code>local-shared</code> which creates the <code>.egg-info</code> file (or rather it tries and fails because it only has read access to the mount). How can I avoid polluting my the directory with the egg file (or any new files for that matter) so that docker stops complaining?</p> <p>Here's the exact error from docker:</p> <pre><code> &gt; [builder 4/6] RUN --mount=type=cache,target=/root/.cache/uv --mount=type=bind,source=uv.lock,target=uv.lock --mount=type=bind,source=pyproject.toml,target=pyproject.toml --mount=type=bind,source=packages,target=/app/packages uv sync --project ftp-mock --package -ftp-mock --no-install-project --locked --no-editable: 0.382 Using CPython 3.11.12 interpreter at: /usr/local/bin/python3 0.382 Creating virtual environment at: .venv 0.384 Resolved 15 packages in 1ms 0.385 Building local-shared @ file:///app/packages/local-shared 0.588 Downloading pydantic-core (1.9MiB) 0.660 Building pyftpdlib==2.0.1 0.693 Downloading pydantic-core 3.815 Γ— Failed to build `local-shared @ file:///app/packages/local-shared` 3.815 β”œβ”€β–Ά The build backend returned an error 3.815 ╰─▢ Call to `setuptools.build_meta.build_wheel` failed (exit status: 1) 3.815 3.815 [stdout] 3.815 running egg_info 3.815 creating local_shared.egg-info 3.815 3.815 [stderr] 3.815 error: could not create 'local_shared.egg-info': Read-only file system 3.815 3.815 hint: This usually indicates a problem with the package or the build 3.815 environment. 3.815 help: `local-shared` was included because `ftp-mock` (v0.1.0) depends 3.815 on `local-shared` ------ Dockerfile:10 -------------------- 9 | # # Uses a cache to speed up the build. 
To clear the cache run docker with the --no-cache flag 10 | &gt;&gt;&gt; RUN --mount=type=cache,target=/root/.cache/uv \ 11 | &gt;&gt;&gt; --mount=type=bind,source=uv.lock,target=uv.lock \ 12 | &gt;&gt;&gt; --mount=type=bind,source=pyproject.toml,target=pyproject.toml \ 13 | &gt;&gt;&gt; --mount=type=bind,source=packages,target=/app/packages \ 14 | &gt;&gt;&gt; uv sync --project ftp-mock --package ftp-mock --no-install-project --locked --no-editable 15 | -------------------- ERROR: failed to solve: process &quot;/bin/sh -c uv sync --project ftp-mock --package ftp-mock --no-install-project --locked --no-editable&quot; did not complete successfully: exit code: 1 </code></pre>
<python><docker><uv>
2025-05-08 13:56:47
0
5,344
glades
79,612,454
167,745
VS Code debugger launch intermittently stops working and then does work again (Python debug)
<p>I have a Python script which works on the command line. It also does work via VS Code's debug launch ... sometimes.</p> <p>At first I thought it was to do with a bad <code>launch.json</code>, in particular related to how I was inject Env variables using</p> <pre><code>//&quot;envFile&quot;: &quot;${workspaceFolder}/.env&quot; </code></pre> <p>While the script itself uses <code>load_dotenv()</code> to achieve the same thing from the command line.</p> <p>But I don't think it's that, because then it started working again, after making an edit to the script and then reverting that edit - ie, no actual change except I touched the <code>.py</code> file.</p> <p>But &quot;not working&quot;, I mean that this happens (see screenshot) and the script never even gets to launch.</p> <p><a href="https://i.sstatic.net/658uQ5AB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/658uQ5AB.png" alt="debug stopped" /></a></p> <p>I don't think it's the launch config:</p> <pre><code>{ &quot;version&quot;: &quot;0.2.0&quot;, &quot;configurations&quot;: [ { &quot;name&quot;: &quot;my-script&quot;, &quot;type&quot;: &quot;debugpy&quot;, &quot;request&quot;: &quot;launch&quot;, &quot;program&quot;: &quot;${workspaceFolder}/scripts/my-script.py&quot;, &quot;console&quot;: &quot;integratedTerminal&quot;, &quot;args&quot;: [&quot;arg&quot;], &quot;python&quot;: &quot;${workspaceFolder}/.venv/bin/python&quot; //&quot;envFile&quot;: &quot;${workspaceFolder}/.env&quot; } ] } </code></pre> <p>Also, how the script picks up its env:</p> <pre><code>def main(): #dotenv_path=Path(__file__).parent.parent / &quot;.env&quot; #print(f&quot;Loading ENV from: {dotenv_path}&quot;) #load_dotenv(dotenv_path) # This works, and so does the above, from cmd line # Both have worked in VS Code debugger, but something else breaks it load_dotenv() </code></pre> <p>I suspect I'm doing something to fix / break it, but I can't pin down by experiment what it is.</p> <p>OS: MacBook Pro Sequoia 15.1</p>
<python><visual-studio-code><environment-variables><vscode-debugger>
2025-05-08 13:08:31
0
18,462
Stewart
79,612,414
8,365,731
Shared memory leaks in Python 3
<p>I experience problems with SharedMemory() in Python 3.12.0, it is not properly released. I use below context manager to handle share memory segments:</p> <pre><code>@contextmanager def managed_shm(name=None, size=0, create=False): shm = None try: shm = SharedMemory(create=create, name=name, size=size) yield shm finally: if shm: shm.close() if create: shm.unlink() </code></pre> <p>Process writing shared memory runs a class method in a thread. The method reads data from socket and updates shared memory with it, simplified code:</p> <pre><code>def recvmsg(self, msglen=60): self.conn, self.addr = self.socket.accept() data = b'' with managed_shm(name=self.shmname, size=self.msglen, create=True) as mqltick: while True: buf = self.conn.recv(msglen-len(data)) if not buf: break data += buf if len(data) == msglen: self.data = data mqltick.buf[:] = data data = b'' </code></pre> <p>In other process I read data from shared memory with this function:</p> <pre><code>def test_shm(maxok=10): prev = None errors=0 ok=0 reads = 0 with managed_shm(name=SocketServer.shmname) as shm: #SocketServer.shmname='mqltick4' while True: try: cur = shm.buf.tobytes() if cur != prev: reads +=1 ok+=1 prev=cur if reads % 100 == 0: print(reads, ok, errors, errors/reads if reads &gt;0 else None) if ok &gt;= maxok: break except ValueError: errors+=1 print('errors', errors) return reads, ok, errors, errors/reads if reads &gt;0 else None </code></pre> <p>It works fine, however upon interpreter exit I get the following warning:</p> <pre><code>Python-3.12.0/Lib/multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be 1 leaked shared_memory objects to clean up at shutdown warnings.warn('resource_tracker: There appear to be %d ' </code></pre> <p>Second attempt to run the code results in error:</p> <pre><code>File &quot;/home/jb/opt/pythonsrc/Python-3.12.0/Lib/multiprocessing/shared_memory.py&quot;, line 104, in __init__ self._fd = _posixshmem.shm_open( ^^^^^^^^^^^^^^^^^^^^^ FileNotFoundError: [Errno 2] No such file or directory: '/mqltick4' </code></pre> <p>Program updating shared memory works fine, however it will not restart after exit without shared memory name change. Question is there something wrong with my code or I hit some Python 3.12.0 bug?</p> <p>Update(1): After switching to generated memory segment names problem disappeared</p> <pre><code>with managed_shm(name=self.shmname, size=self.msglen, create=True) as mqltick: # generate shared memory segment name upon creation with managed_shm(size=self.msglen, create=True) as mqltick: self.shmname = mqltick.name </code></pre> <p>Shared memory segment name discovery had to be implemented, but it is another story. So it looks like Python 3.12.0 bug affecting shared memory segments created with user defined names.</p> <p>Update(2): My joy was premature. With new setup client process witch each contextmanager call adds new entry to potential memory leaks.</p> <pre><code>UserWarning: resource_tracker: There appear to be 2 leaked shared_memory objects to clean up at shutdown </code></pre> <p>Since only 1 segment was created only 1 can be leaked... Server side has to be restarted in order to restore communication. Of course unlink() on server side reports error too:</p> <pre><code>_posixshmem.shm_unlink(self._name) FileNotFoundError: [Errno 2] No such file or directory: '/psm_046507f5' </code></pre> <p>So Python 3.12.0 bug scores again</p>
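<p>For completeness, a workaround I have seen referenced for the resource_tracker side of this, included only as a sketch because it relies on private internals of the multiprocessing module: in the process that merely attaches to an existing segment, tell the tracker not to manage it.</p> <pre><code>from multiprocessing import resource_tracker
from multiprocessing.shared_memory import SharedMemory

shm = SharedMemory(name=SocketServer.shmname)            # attaching process, create=False
resource_tracker.unregister(shm._name, 'shared_memory')  # relies on a private attribute
</code></pre>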
<python><shared-memory>
2025-05-08 12:49:27
1
563
Jacek BΕ‚ocki
79,612,338
2,494,865
How to build an exact MPFR number using gmpy2?
<p>I believe that the result of any MPFR computation, whatever rounding mode or precision has been selected to obtain it, is (the memory representation of) an exact binary number, of the form <code>m*2^e</code>, where <code>m</code> is an integer mantissa and <code>e</code> is a (possibly negative) integer exponent. Correct me if I'm wrong.</p> <p>I would like to be able to reconstruct the exact same value afterwards, without going through the computation again. The MPFR and <code>gmpy2</code> documentations hint at a way to do that by using <code>'p'</code> or <code>'P'</code> in the defining string. A command like this, if I understand the MPFR documentation correctly, should work in Python + <code>gmpy2</code>:</p> <pre><code>ci = gmpy2.mpfr('1p2') </code></pre> <p>and give me <code>mpfr('4.0')</code> as a result.</p> <p>But:</p> <ol> <li><p>the MPFR documentation seems to imply that even that construction involves the precision and the rounding mode, which is surprising since the number that it describes can be represented exactly (I don't intend to install MPFR+C to check that, for now),</p> </li> <li><p>the command shown above doesn't work at all and throws <code>ValueError('invalid digits')</code> as a response.</p> </li> </ol> <p>which means I cannot try the stuff to understand the inner workings of the construction, in particular with respect to 1.</p> <p><strong>What is the correct way to construct an exact MPFR binary number using <code>gmpy2</code>?</strong> And is it even possible?</p> <p>Note that the first half of my goal (getting the mantissa and (binary) exponent of an MPFR number in <code>gmpy2</code>) can be achieved using the <code>as_mantissa_exp</code> method. It's on the second part, rebuilding the same number using mantissa and exponent, that I'm stuck.</p> <p>My use case usually involves numbers with negative exponents. I already thought of experimenting with divisions, controlling precision and rounding mode, but I'd like to know if there's a canonical way of doing this before trying that path, partly because I'm not at ease with using what's essentially an approximate computation to express an exact number.</p>
<python><mpfr><gmpy>
2025-05-08 12:05:52
1
848
Taar
79,612,271
4,066,646
Reading Outlook mail in Python or Powershell
<p>I have been trying to read the content of mails in Outlook from both Python and Powershell, but for every version of code I have tried, I can't access the properties I want.</p> <p>I can get the com-objects and I can access the subject from each. But I need to get the body and receivers. In Powershell the properties are $null, and in Python I get:</p> <p><strong>com_error(-2147467260, 'Γ…tgΓ€rden avbrΓΆts', None, None)</strong></p> <p>I have tried:</p> <ul> <li>having Outlook opened</li> <li>one or no mail selected in Outlook</li> <li>saving a mail seperately and reading that specific file</li> </ul> <p>I have read many articles explaining how to access the files, but none of them worked, and never found a proper explanation for what might be the problem.</p> <p>Unfortunately I can not use MS Graph or the older Powershell modules for accessing Exchange to access the inbox and mail.</p> <p>This is the latest version I tried.</p> <pre class="lang-py prettyprint-override"><code> import win32com.client outlook = win32com.client.Dispatch(&quot;Outlook.Application&quot;).GetNamespace(&quot;MAPI&quot;) for f in outlook.Folders('my@mail.se').Folders: if f.Name == &quot;Inkorg&quot;: try: items = f.Items.Restrict(&quot;[MessageClass] = 'IPM.Note'&quot;) items.Sort(&quot;[ReceivedTime]&quot;, True) count = items.Count print(f&quot;Found {count} items in 'Inkorg' folder.&quot;) for i in range(1, count + 1): try: item = items.Item(i) # Filter only mail items if item.Class == 43: # olMailItem = 43 print(f&quot;Subject: {item.Subject}&quot;) sender = item.SenderName if item.SenderName else &quot;Sender not found&quot; html_body = item.HTMLBody if item.HTMLBody else &quot;HTMLBody not found&quot; print(f&quot;Sender: {sender}&quot;) print(f&quot;Body (HTML):\n{html_body[:200]}...&quot;) except Exception as e: print(f&quot;Error accessing item {i}: {e}&quot;) except Exception as e: print(f&quot;Error accessing folder '{f.Name}': {e}&quot;) </code></pre>
<python><powershell><email><outlook>
2025-05-08 11:32:20
1
357
Smorkster
79,612,149
1,826,066
Turn polars dataframe into nested dictionary
<p>I have a <code>polars</code> dataframe like such:</p> <pre><code>print( pl.DataFrame( { &quot;file&quot;: [&quot;A&quot;, &quot;A&quot;, &quot;A&quot;, &quot;B&quot;, &quot;B&quot;, &quot;B&quot;], &quot;user&quot;: [&quot;u1&quot;, &quot;u2&quot;, &quot;u3&quot;, &quot;u1&quot;, &quot;u2&quot;, &quot;u3&quot;], &quot;data1&quot;: [1, 2, 3, 4, 5, 6], &quot;data2&quot;: [7, 8, 9, 10, 11, 12], &quot;data3&quot;: [13, 14, 15, 16, 17, 18], } ) ) shape: (6, 5) β”Œβ”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ file ┆ user ┆ data1 ┆ data2 ┆ data3 β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ str ┆ i64 ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•ͺ══════β•ͺ═══════β•ͺ═══════β•ͺ═══════║ β”‚ A ┆ u1 ┆ 1 ┆ 7 ┆ 13 β”‚ β”‚ A ┆ u2 ┆ 2 ┆ 8 ┆ 14 β”‚ β”‚ A ┆ u3 ┆ 3 ┆ 9 ┆ 15 β”‚ β”‚ B ┆ u1 ┆ 4 ┆ 10 ┆ 16 β”‚ β”‚ B ┆ u2 ┆ 5 ┆ 11 ┆ 17 β”‚ β”‚ B ┆ u3 ┆ 6 ┆ 12 ┆ 18 β”‚ β””β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜ </code></pre> <p>And I would like to turn it into a bested dictionary like this</p> <pre><code>{ 'file': { 'A': { 'user': { 'u1': {'data1': 1, 'data2': 7, 'data3': 13}, 'u2': {'data1': 2, 'data2': 8, 'data3': 14}, 'u3': {'data1': 3, 'data2': 9, 'data3': 15} } }, 'B': { 'user': { 'u1': {'data1': 4, 'data2': 10, 'data3': 16}, 'u2': {'data1': 5, 'data2': 11, 'data3': 17}, 'u3': {'data1': 6, 'data2': 12, 'data3': 18} } } } } </code></pre> <p>I tried my luck with grouping and selecting structs, but I'm unable to do the conversion using only <code>polars</code> functions. What's the right approach here?</p> <p><strong>EDIT:</strong></p> <p>I tried two things:</p> <ol> <li>Using <code>rows_by_key</code>:</li> </ol> <pre class="lang-py prettyprint-override"><code>df.rows_by_key(key=[&quot;file&quot;, &quot;user&quot;], unique=True, named=True) </code></pre> <p>but that gives the keys as tuples, i.e.,</p> <pre><code>{('A', 'u1'): {'data1': 1, 'data2': 7, 'data3': 13}, ('A', 'u2'): {'data1': 2, 'data2': 8, 'data3': 14}, ('A', 'u3'): {'data1': 3, 'data2': 9, 'data3': 15}, ('B', 'u1'): {'data1': 4, 'data2': 10, 'data3': 16}, ('B', 'u2'): {'data1': 5, 'data2': 11, 'data3': 17}, ('B', 'u3'): {'data1': 6, 'data2': 12, 'data3': 18}} </code></pre> <ol start="2"> <li>I also tried to manually loop over the groups:</li> </ol> <pre class="lang-py prettyprint-override"><code>d = {} for file, df_file in (df).group_by(&quot;file&quot;, maintain_order=True): d_user = {} for user, df_file_user in df_file.group_by(&quot;user&quot;, maintain_order=True): d_user[user[0]] = df_file_user.drop(&quot;file&quot;, &quot;user&quot;).to_dicts()[0] d[file[0]] = {&quot;user&quot;: d_user} print({&quot;file&quot;: d}) </code></pre> <p>This yields the right output:</p> <pre><code>{ 'file': { 'A': { 'user': { 'u1': {'data1': 1, 'data2': 7, 'data3': 13}, 'u2': {'data1': 2, 'data2': 8, 'data3': 14}, 'u3': {'data1': 3, 'data2': 9, 'data3': 15} } }, 'B': { 'user': { 'u1': {'data1': 4, 'data2': 10, 'data3': 16}, 'u2': {'data1': 5, 'data2': 11, 'data3': 17}, 'u3': {'data1': 6, 'data2': 12, 'data3': 18} } } } } </code></pre> <p>But this feels cumbersome and I wonder whether there's a more direct way of doing this.</p>
<python><python-polars>
2025-05-08 10:12:39
2
1,351
Thomas
79,612,007
3,099,733
undefined reference to `Py_Initialize' when building a simple demo.c in a Linux conda environment
<p>I am testing of running a Python thread in a c program with a simple example like the below</p> <pre class="lang-py prettyprint-override"><code># demo.py import time for i in range(1, 101): print(i) time.sleep(0.1) </code></pre> <pre class="lang-c prettyprint-override"><code>// demo.c #include &lt;Python.h&gt; #include &lt;pthread.h&gt; #include &lt;stdio.h&gt; void *run_python_script(void *arg) { Py_Initialize(); if (!Py_IsInitialized()) { fprintf(stderr, &quot;Python initialization failed\n&quot;); return NULL; } FILE *fp = fopen(&quot;demo.py&quot;, &quot;r&quot;); if (fp == NULL) { fprintf(stderr, &quot;Failed to open demo.py\n&quot;); Py_Finalize(); return NULL; } PyRun_SimpleFile(fp, &quot;demo.py&quot;); fclose(fp); Py_Finalize(); return NULL; } int main() { pthread_t python_thread; if (pthread_create(&amp;python_thread, NULL, run_python_script, NULL) != 0) { fprintf(stderr, &quot;Failed to create thread\n&quot;); return 1; } pthread_join(python_thread, NULL); printf(&quot;Python thread has finished. Exiting program.\n&quot;); return 0; } </code></pre> <p>Then I build the above code with the following command</p> <pre class="lang-bash prettyprint-override"><code>gcc demo.c -o demo -lpthread -I$(python3-config --includes) $(python3-config --ldflags) $(python3-config --cflags) </code></pre> <p>Then I get the following error:</p> <pre><code>/usr/bin/ld: /tmp/ccsHQpZ3.o: in function `run_python_script': demo.c:(.text.run_python_script+0x7): undefined reference to `Py_Initialize' /usr/bin/ld: demo.c:(.text.run_python_script+0xd): undefined reference to `Py_IsInitialized' /usr/bin/ld: demo.c:(.text.run_python_script+0x41): undefined reference to `PyRun_SimpleFileExFlags' /usr/bin/ld: demo.c:(.text.run_python_script+0x50): undefined reference to `Py_Finalize' /usr/bin/ld: demo.c:(.text.run_python_script+0xab): undefined reference to `Py_Finalize' collect2: error: ld returned 1 exit status </code></pre> <p>The python library do exists,</p> <pre class="lang-bash prettyprint-override"><code> python3-config --ldflags -L/home/henry/anaconda3/lib/python3.9/config-3.9-x86_64-linux-gnu -L/home/henry/anaconda3/lib -lcrypt -lpthread -ldl -lutil -lm -lm ls -1 ~/anaconda3/lib | grep python libpython3.9.so libpython3.9.so.1.0 libpython3.so python3.9 </code></pre> <p>I have no idea about is link error.</p>
<python><gcc><anaconda><conda>
2025-05-08 08:43:16
1
1,959
link89
79,611,923
1,019,109
Mixin for making third-party Django model fields typed
<p>Given the problem that not all libraries add type annotations, I am trying to build a mixin that would allow me to add type hints easily. Without that, the type-checker infers the type of the model instance field as that of the underlying built-in field’s value (e.g., str, int, date, etc.) that the custom field is built upon (or <code>Unknown</code>).</p> <p>Here is an example:</p> <pre class="lang-py prettyprint-override"><code>from django.db import models # ========== defined in a third-party library ========== class StructuredCountry: iso_code: str ... class CountryField(models.CharField): # Not shown: the textual value from the database # is converted to a StructuredCountry object. pass # ========== defined in my own code ==================== class MyModel(models.Model): country = CountryField() reveal_type(MyModel().country) # β‡’ str MyModel().country.iso_code # E: Attribute &quot;iso_code&quot; is unknown </code></pre> <p>For the above example, I would like to be able to:</p> <pre class="lang-py prettyprint-override"><code>class TypedCountryField(FieldAnnotationMixin, CountryField): pass class MyModel(models.Model): country = TypedCountryField() reveal_type(MyModel().country) # β‡’ StructuredCountry MyModel().country.iso_code # βœ” </code></pre> <hr /> <p>So far I have come up with the following (inspired by VSCode’s stubs), which <em>indeed works</em>.</p> <pre class="lang-py prettyprint-override"><code>if TYPE_CHECKING: class FieldAnnotationMixin[FieldClassType: type[models.Field], ValueType]: def __new__(cls, *args, **kwargs) -&gt; FieldClassType: ... # Class access @overload def __get__(self, instance: None, owner: Any) -&gt; Self: ... # type: ignore[overload] # Model instance access @overload def __get__(self, instance: models.Model, owner: Any) -&gt; ValueType: ... # Non-Model instance access @overload def __get__(self, instance: Any, owner: Any) -&gt; Self: ... else: class FieldAnnotationMixin[IgnoredT1, IgnoredT2]: pass </code></pre> <p>However, I am stuck on the nullable field bit. For a field defined on the model with <code>null=True</code>, the model instance value should be <code>ValueType | None</code> (or <code>Optional[ValueType]</code>) and the type-checker will complain if the value is used before verifying that it is not null. Looking again at VSCode’s stubs I know that I can overload the <code>__new__</code> method:</p> <pre class="lang-py prettyprint-override"><code> @overload def __new__(cls, *args, null: Literal[False] = False, **kwargs) -&gt; FieldClassType[ValueType]: ... @overload def __new__(cls, *args, null: Literal[True] = True, **kwargs) -&gt; FieldClassType[ValueType | None]: ... </code></pre> <p>I am running into two problems here:</p> <ol> <li>The type-checker (rightly) complains β€œTypeVar &quot;type[FieldClassType@FieldAnnotationMixin]&quot; is not subscriptable”. And I don’t know in advance whether the third-party field is generic or not.</li> <li>I have no idea how to transfer the knowledge of the <code>null</code> parameter to the <code>__get__</code> overload so that it will return <code>Optional[ValueType]</code>.</li> </ol>
<python><django><python-typing><django-model-field>
2025-05-08 07:40:24
1
576
interDist
79,611,859
10,395,747
Get the process ID of a function in Python
<p>I have two functions in my Python script:</p> <pre><code> def func1(): #This does some processing and returns xyz information return &quot;...&quot; def func2(): #This also does some other kind of processing and returns some info return &quot;#...#&quot; </code></pre> <p>More details on the functions:</p> <p><code>func1</code> checks a file path in ADLS and verifies whether a certain number of files have arrived. <code>func2</code> should keep checking whether the process running <code>func1</code> is finished (which means all files have arrived) and then do some processing on those files. I want to check what the process ID of <code>func1</code> is, and further in the code I want to do the following:</p> <ol> <li>Get the process ID of <code>func1</code>.</li> <li>Inside <code>func2</code>, keep checking whether this process ID is still running.</li> <li>Inside <code>func2</code>, once this process has completed, clean it up (kill it).</li> </ol> <p>I am not sure how to achieve this in Python.</p>
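<p>For reference, the kind of structure I am imagining (a rough sketch; it assumes <code>func1</code> is run in its own process, since a plain function call inside the same process does not get a process ID of its own):</p> <pre><code>import time
from multiprocessing import Process

proc = Process(target=func1)   # run func1 in its own process so it has a PID
proc.start()
print('func1 pid:', proc.pid)

def func2(proc):
    while proc.is_alive():     # keep checking whether func1's process is still running
        time.sleep(5)
    proc.join()                # it has finished; join just reaps the process
    # ... continue with the processing of the files here
</code></pre>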
<python><python-3.x><pyspark>
2025-05-08 07:01:10
1
758
Aviator
79,611,667
334,719
How do I handle SIGTERM inside python async methods?
<p>Based on <a href="https://gist.github.com/nvgoldin/30cea3c04ee0796ebd0489aa62bcf00a" rel="nofollow noreferrer">this code</a>, I'm trying to catch SIGINT and SIGTERM. It works perfectly for SIGINT: I see it enter the signal handler, then my tasks do their cleanup before the whole program exits. On SIGTERM, though, the program simply exits immediately.</p> <p>My code is a bit of a hybrid of the two examples from the link above, as much of the original doesn't work under python 3.12:</p> <pre class="lang-py prettyprint-override"><code>import asyncio import functools import signal async def signal_handler(sig, loop): &quot;&quot;&quot; Exit cleanly on SIGTERM (&quot;docker stop&quot;), SIGINT (^C when interactive) &quot;&quot;&quot; print('caught {0}'.format(sig.name)) tasks = [task for task in asyncio.all_tasks() if task is not asyncio.current_task()] list(map(lambda task: task.cancel(), tasks)) results = await asyncio.gather(*tasks, return_exceptions=True) print('finished awaiting cancelled tasks, results: {0}'.format(results)) loop.stop() if __name__ == &quot;__main__&quot;: loop = asyncio.new_event_loop() asyncio.ensure_future(task1(), loop=loop) asyncio.ensure_future(task2(), loop=loop) loop.add_signal_handler(signal.SIGTERM, functools.partial(asyncio.ensure_future, signal_handler(signal.SIGTERM, loop))) loop.add_signal_handler(signal.SIGINT, functools.partial(asyncio.ensure_future, signal_handler(signal.SIGINT, loop))) try: loop.run_forever() finally: loop.close() </code></pre> <p><code>task1</code> can terminate immediately, but <code>task2</code> has cleanup code that is clearly being executed after SIGINT, but not after SIGTERM</p>
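<p>For comparison, a minimal variant I have seen suggested elsewhere, where the coroutine is only created at the moment the signal actually fires rather than at registration time (a sketch only; I do not know whether this explains the SIGTERM/SIGINT difference):</p> <pre><code>def install_handlers(loop):
    for sig in (signal.SIGTERM, signal.SIGINT):
        loop.add_signal_handler(
            sig,
            lambda s=sig: asyncio.ensure_future(signal_handler(s, loop)),
        )
</code></pre>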
<python><python-3.x><python-asyncio>
2025-05-08 03:07:59
2
2,265
Auspex
79,611,663
7,709
Automating a podman-based CI workflow using Python
<p>I want to write a python3 script that will set up a pristine system (e.g: &quot;fedora:42&quot;), copy a local clone of a git repository, and run its tests. For continuous integration. I really would prefer to avoid Dockerfiles. A similar code in perl is <a href="https://github.com/thewml/website-meta-language/blob/master/CI-testing/docker-ci-run.pl" rel="nofollow noreferrer">https://github.com/thewml/website-meta-language/blob/master/CI-testing/docker-ci-run.pl</a> . (Note that it uses podman on fedora.) I want the python3 equivalents of the methods in <a href="https://metacpan.org/pod/Docker::CLI::Wrapper::Container" rel="nofollow noreferrer">https://metacpan.org/pod/Docker::CLI::Wrapper::Container</a> . Can anyone help?</p> <p><a href="https://github.com/shlomif/pysol-cards-in-C/blob/master/docker_ci.py" rel="nofollow noreferrer">https://github.com/shlomif/pysol-cards-in-C/blob/master/docker_ci.py</a> gives me β€œpodman.errors.exceptions.APIError: 500 Server Error: Internal Server Error (can only create exec sessions on running containers: container state improper)”.</p> <p>How can i get .exec_run() to work? I want to be able to execute bash codes inside the container, see their stdout/stderr, and wait for them to finish. Synchronously.</p> <p>I am on Fedora 42 x86-64 podman.</p> <p>my code so far is:</p> <pre><code>#! /usr/bin/env python3 # -*- coding: utf-8 -*- # vim:fenc=utf-8 # # Copyright Β© 2025 Shlomi Fish &lt; https://www.shlomifish.org/ &gt; # # Licensed under the terms of the MIT license. &quot;&quot;&quot; &quot;&quot;&quot; import json # import time from podman import PodmanClient &quot;&quot;&quot;Demonstrate PodmanClient.&quot;&quot;&quot; # Provide a URI path for the libpod service. In libpod, the URI can be a unix # domain socket(UDS) or TCP. The TCP connection has not been implemented in # this package yet. uri = &quot;unix:///run/user/1000/podman/podman.sock&quot; with PodmanClient(base_url=uri) as client: version = client.version() if False: print(&quot;Release: &quot;, version[&quot;Version&quot;]) print(&quot;Compatible API: &quot;, version[&quot;ApiVersion&quot;]) print(&quot;Podman API: &quot;, version[&quot;Components&quot;][0][&quot;Details&quot;][&quot;APIVersion&quot;], &quot;\n&quot;) # get all images for image in client.images.list(): print(image, image.id, &quot;\n&quot;) sysname = 'fedora:42' pull = client.images.pull(sysname) print(pull) image = client.images.get(sysname) # image = pull print(image) containers = client.containers # container = image.create() container = containers.create(image) print(container) # container.attach(eot=4) # container.attach() container2 = containers.run(image=image, detach=True,) print(container) print(container2) # time.sleep(5) ret = container.exec_run( cmd='echo helloworld\n', demux=True, ) print(container) print(ret) # container.run() print('before exec_run', container) if False: # find all containers for container in client.containers.list(): # After a list call you would probably want to reload the container # to get the information about the variables such as status. # Note that list() ignores the sparse option and assumes True # by default. 
container.reload() print(container, container.id, &quot;\n&quot;) print(container, container.status, &quot;\n&quot;) # available fields print(sorted(container.attrs.keys())) print(json.dumps(client.df(), indent=4)) </code></pre> <p>Its output is:</p> <pre><code>$ python docker_ci.py &lt;Image: 'registry.fedoraproject.org/fedora:42'&gt; &lt;Image: 'registry.fedoraproject.org/fedora:42'&gt; &lt;Container: 875f2fd7e5&gt; &lt;Container: 875f2fd7e5&gt; &lt;Container: 754729039a&gt; Traceback (most recent call last): File &quot;/home/shlomif/progs/python/pysol-cards-in-C/docker_ci.py&quot;, line 53, in &lt;module&gt; ret = container.exec_run( cmd='echo helloworld\n', demux=True, ) File &quot;/usr/lib/python3.13/site-packages/podman/domain/containers.py&quot;, line 211, in exec_run response.raise_for_status() ~~~~~~~~~~~~~~~~~~~~~~~~~^^ File &quot;/usr/lib/python3.13/site-packages/podman/api/client.py&quot;, line 82, in raise_for_status raise APIError(cause, response=self._response, explanation=message) podman.errors.exceptions.APIError: 500 Server Error: Internal Server Error (can only create exec sessions on running containers: container state improper) </code></pre>
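<p>For what it is worth, the minimal sequence I would expect to need before <code>exec_run()</code> can succeed, sketched on the assumption that podman-py behaves like docker-py here: the container has to be started first, and it needs a long-running command so it is still running when the exec session is created.</p> <pre><code>container = client.containers.create(image, command=['sleep', 'infinity'])
container.start()                                # exec sessions need a running container
result = container.exec_run('echo helloworld')   # run a command synchronously inside it
print(result)
container.stop()
container.remove()
</code></pre>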
<python><linux><docker><fedora><podman>
2025-05-08 03:04:52
1
4,510
Shlomi Fish
79,611,647
1,023,928
Polars for Python, can I read parquet files with hive_partitioning when the directory structure and files have been manually written?
<p>I manually created directory structures and wrote parquet files rather than used the <code>partition_by</code> parameter in the <code>write_parquet()</code> function of the python polars library because</p> <ul> <li>I want full control over the parquet file naming</li> <li>I want to handle what to do in the event of existing parquet files in the sub-directories (in my case I merge data and deduplicate)</li> </ul> <p>Question: Can I use the <code>pl.scan_parquet()</code> function with <code>hive_partitioning</code> set to <code>True</code> and benefit from filtering on lazy dataframes and parallel reads and automated file discovery (using glob pattern) even though the files were written manually not using the <code>partition_by</code> parameter?</p> <p>Thanks</p>
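<p>For reference, the kind of call I have in mind, sketched under the assumption that the manually written directories follow the usual <code>key=value</code> naming that Hive partitioning expects and that the installed Polars version handles globs together with Hive inference:</p> <pre><code>import polars as pl

# Hypothetical layout: data/year=2024/month=01/my_custom_name.parquet
lf = pl.scan_parquet('data/**/*.parquet', hive_partitioning=True)

result = (
    lf.filter(pl.col('year') == 2024)   # partition columns are read from the directory names
      .collect()
)
</code></pre>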
<python><parquet><python-polars><polars>
2025-05-08 02:34:09
1
7,316
Matt
79,611,625
5,082,463
Qt Creator Color Scheme for Python
<p>I'm trying Qt Creator with a Python project.</p> <p>Compared to Sublime Text it seems Qt Creator Color Scheme is less &quot;fancy&quot;.</p> <p>Below, the 1st is what I see on Qt Creator and the 2nd what I see on SublimeText:</p> <p><a href="https://i.sstatic.net/mSJAPeDs.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mSJAPeDs.png" alt="enter image description here" /></a></p> <p><a href="https://i.sstatic.net/7uj7cXeK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7uj7cXeK.png" alt="enter image description here" /></a></p> <p>ignoring the background, still Sublime Text is more colorful.</p> <p>How can I change <code>parse_args</code> color or <code>()</code> color?</p>
<python><qt-creator>
2025-05-08 01:57:37
0
6,367
KcFnMi
79,611,596
4,581,085
Qiskit v2.0.0 Statevector probabilities_dict throws TypeError: unsupported operand type(s) for -: 'int' and 'qiskit.circuit.Qubit'
<p>It’s my first week coding in Qiskit, so I’m using the latest version v2.0.0, but there seems to be a big overhaul, so I’m struggling with reading the docs and even going to the source code, and Copilot AIs are starting to loop into dead ends with their solutions.</p> <p>I’m implementing the <strong>Quantum Excess Evaluation Algorithm</strong> from this paper: <a href="https://arxiv.org/pdf/2410.20841v1" rel="nofollow noreferrer">https://arxiv.org/pdf/2410.20841v1</a> (Method A on Page 7).</p> <p>I got the State Preparation implemented, but I’m struggling with finalizing the <strong>Quantum Subtractor</strong> portion. There seems to be a deep bug, or maybe some basic oversight, that I can’t catch, and I can’t step into the function with a debugger. For lack of better terms, the bug seems embedded in the simulation.</p> <p>You can see below that I tried swapping <code>circuit.compose</code> for <code>circuit.append</code>, both of which run fine until the <code>sv.probabilities_dict(...)</code> call.</p> <pre><code>import matplotlib.pyplot as plt import numpy as np from qiskit import QuantumCircuit, QuantumRegister, transpile from qiskit.circuit import Qubit from qiskit.circuit.library import StatePreparation from qiskit.circuit.library.arithmetic import CDKMRippleCarryAdder from qiskit.circuit.library.arithmetic.integer_comparator import \ IntegerComparator from qiskit.quantum_info import Statevector from qiskit.visualization import plot_histogram from qiskit_aer import Aer, AerSimulator from scipy.stats import lognorm # Parameters num_qubits = 6 # Using 6 qubits for demonstration, scalable approach domain_min = 0 domain_max = 10 deductible = 1.0 # Deductible amount ###################################### # Set up the Probability Distribution ###################################### # Define lognormal distribution parameters sigma = 1.0 # Shape parameter mu = 0.0 # Scale parameter def lognormal_loss(x): &quot;&quot;&quot;Lognormal loss function&quot;&quot;&quot; return lognorm.pdf(x, sigma, loc=0, scale=np.exp(mu)) # Discretize domain into 2^num_qubits points N = 2**num_qubits x_vals = np.linspace(domain_min, domain_max, N) # Calculate probabilities for state preparation probabilities = np.array([lognormal_loss(x) for x in x_vals]) normalizing_constant = np.sum(probabilities) probabilities /= normalizing_constant # Calculate amplitudes (square root of probabilities) amplitudes = np.sqrt(probabilities) # Create quantum circuit for state preparation qc = QuantumCircuit(num_qubits) qc.append(StatePreparation(amplitudes), range(num_qubits)) # Transpile the circuit to basic gates that Aer supports qc = transpile(qc, basis_gates=['u3', 'cx']) # index of the first grid‑point *above* the deductible step = (domain_max - domain_min) / (N - 1) threshold_ix = int(round(deductible / step)) # β‰ˆ 6 when n = 6 print(f&quot;binary threshold = {threshold_ix:0{num_qubits}b}&quot;) ############################ # Set up Excess Calculation ############################ def subtract_if_excess(num_qubits: int, threshold: int) -&gt; QuantumCircuit: &quot;&quot;&quot; Works on Qiskit 2.x where AddConstant is gone. If x &gt;= threshold set flag=1 and compute x &lt;- x - threshold (mod 2**num_qubits) else set flag=0 and leave x unchanged. 
&quot;&quot;&quot; # allow threshold accidentally passed as a Qubit if isinstance(threshold, Qubit): threshold = threshold.index threshold = int(threshold) # Create Registers x_reg = QuantumRegister(num_qubits, &quot;x&quot;) # loss amount const = QuantumRegister(num_qubits, &quot;c&quot;) # holds (2ⁿ - k) flag = QuantumRegister(1, &quot;flag&quot;) # comparison result w_cmp = QuantumRegister(num_qubits - 1, &quot;w_cmp&quot;) # ancillas comparator carry = QuantumRegister(1, &quot;carry&quot;) # carry qubit for the adder circuit = QuantumCircuit(x_reg, const, flag, w_cmp, carry, name=&quot;Sub&gt;k&quot;) # Prepare |2ⁿ - k&gt; once in the const register delta = (-threshold) % (1 &lt;&lt; num_qubits) # 2's‑complement of k for i in range(num_qubits): if (delta &gt;&gt; i) &amp; 1: circuit.x(const[i]) # 1) Compare x with k β†’ flag compare_gate = IntegerComparator( num_state_qubits=num_qubits, value=threshold, geq=True) # circuit.append(compare_gate, x_reg[:] + flag[:] + w_cmp[:]) circuit = circuit.compose( compare_gate, qubits=x_reg[:] + flag[:] + w_cmp[:]) # 2) Controlled addition: x ← x + delta (≑ x βˆ’ k mod 2ⁿ) # The CDKM adder writes the sum into the *second* operand, # so we order the qubits as (const, x). # Since const is classical‑data only, it is unchanged and can be reused. adder = CDKMRippleCarryAdder(num_qubits, kind=&quot;fixed&quot;).control() # circuit.append(adder, flag[:] + const[:] + x_reg[:] + carry[:]) circuit = circuit.compose( adder, qubits=flag[:] + const[:] + x_reg[:] + carry[:]) # 3) Un‑compute comparator ancillas so only FLAG stores the outcome # circuit.append(compare_gate.inverse(), x_reg[:] + flag[:] + w_cmp[:]) circuit = circuit.compose(compare_gate.inverse(), qubits=x_reg[:] + flag[:] + w_cmp[:]) # # Clear circuit name to prevent unknown subcircuit name issues # circuit.name = &quot;&quot; return circuit # state‑preparation circuit `qc` you already built loss_reg = qc.qregs[0] # |Οˆβ‚βŸ©, 6 qubits # allocate once flag = QuantumRegister(1, &quot;flag&quot;) const = QuantumRegister(num_qubits, &quot;const&quot;) work = QuantumRegister(num_qubits - 1, &quot;w_cmp&quot;) carry = QuantumRegister(1, &quot;carry&quot;) qc.add_register(flag, const, work, carry) sub_block = subtract_if_excess(num_qubits, threshold_ix) qc = qc.compose( sub_block, qubits=loss_reg[:] + const[:] + flag[:] + work[:] + carry[:]) qc.barrier() qc.save_statevector() # build simulator and transpile sim = AerSimulator(method='statevector') qc_compiled = transpile(qc, sim) # run and get result job = sim.run(qc_compiled) result = job.result() # get the experiment's statevector statevector = result.get_statevector() sv = Statevector(statevector) # marginal probability of flag==1 must equal P[x β‰₯ deductible] p_excess = sv.probabilities_dict(qargs=[flag[0]])['1'] print(f&quot;P(loss β‰₯ 1) = {p_excess:.4f}&quot;) </code></pre> <p>The second to last line throws:</p> <p><code>TypeError: unsupported operand type(s) for -: 'int' and 'qiskit.circuit.Qubit'</code></p> <p>But the problem is most likely in the circuit’s configuration somewhere, or maybe something got overhauled in the simulator this Qiskit version. The penultimate line is supposed to read the probability of flag=1, but it throws the error above.</p> <p>Any help or relevant docs would be appreciated.</p>
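<p>For reference, a small sketch of the translation I suspect is needed at the end: <code>probabilities_dict(qargs=...)</code> appears to expect integer qubit indices rather than <code>Qubit</code> objects, so the flag qubit would first be mapped to its position in the circuit (whether that is the only issue in the circuit, I cannot say):</p> <pre><code>flag_index = qc.find_bit(flag[0]).index          # map the Qubit to its integer position
probs = sv.probabilities_dict(qargs=[flag_index])
p_excess = probs.get('1', 0.0)
print(f'P(loss &gt;= 1) = {p_excess:.4f}')
</code></pre>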
<python><qiskit>
2025-05-08 01:21:12
1
985
Alex F
79,611,525
3,325,942
How to use a class property as a decorator in Python?
<p>I'm trying to make a RateLimiter class in Python that can be instantiated as a class property in other classes, and then applied to functions to be rate limited as a decorator. Here's the RateLimiter:</p> <pre class="lang-py prettyprint-override"><code>import time from functools import wraps from threading import Lock class RateLimiter: def __init__(self, qps: float): self.qps = qps self.min_interval = 1.0 / qps self.last_call = 0.0 self.lock = Lock() def limit(self): def decorator(func): @wraps(func) def wrapper(*args, **kwargs): with self.lock: now = time.monotonic() elapsed = now - self.last_call if elapsed &lt; self.min_interval: time.sleep(self.min_interval - elapsed) self.last_call = time.monotonic() return func(*args, **kwargs) return wrapper return decorator </code></pre> <p>I want to use it in a class like so</p> <pre class="lang-py prettyprint-override"><code>class SomeWebClient: def __init__(self, qps: float = 10.0): self.rate_limiter = RateLimiter(qps=qps) @self.rate_limiter.limit() def some_api_call(self): # make some call to an API with a rate limit pass @self.rate_limiter.limit() def another_api_call(self): # make some call to an API with a rate limit pass </code></pre> <p>But this doesn't work because <code>self</code> isn't defined for the <code>@</code> decorator syntax. The only way I've been able to get this to work is by doing this:</p> <pre class="lang-py prettyprint-override"><code>class SomeWebClient: def __init__(self, qps: float = 10.0): self.rate_limiter = RateLimiter(qps=qps) self.some_api_call = self.rate_limiter.limit()(self.some_api_call) self.another_api_call = self.rate_limiter.limit()(self.another_api_call) def some_api_call(self): # make some call to an API with a rate limit pass def another_api_call(self): # make some call to an API with a rate limit pass </code></pre> <p>But this doesn't feel very clean. Is there a way to get it to work with the <code>@</code> syntax, and not need to pass in state to the decorator function? I.e. I don't want every invocation of the decorator to require me passing in the QPS and the lock, I'd like that to be hidden away so that the decorator usage is clean.</p> <p>Or to take a step back, is there another way of implementing a RateLimiter without a decorator but is clean? I basically want an abstraction such that every instance of <code>SomeWebClient</code> has its own rate limiter, with potentially different QPS. And then I can mark methods I want to be rate limited cleanly.</p>
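<p>For reference, one pattern that sidesteps the <code>self</code> problem is a module-level decorator that looks up the limiter on the instance at call time, so the decorator itself carries no state. This is only a sketch and assumes every decorated method lives on a class whose instances expose a <code>rate_limiter</code> attribute:</p> <pre><code>from functools import wraps
import time

def rate_limited(func):
    @wraps(func)
    def wrapper(self, *args, **kwargs):
        limiter = self.rate_limiter            # resolved at call time, no state in the decorator
        with limiter.lock:
            now = time.monotonic()
            elapsed = now - limiter.last_call
            if elapsed &lt; limiter.min_interval:
                time.sleep(limiter.min_interval - elapsed)
            limiter.last_call = time.monotonic()
        return func(self, *args, **kwargs)
    return wrapper

class SomeWebClient:
    def __init__(self, qps: float = 10.0):
        self.rate_limiter = RateLimiter(qps=qps)

    @rate_limited
    def some_api_call(self):
        pass
</code></pre>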
<python>
2025-05-07 23:38:25
2
842
ROODAY
79,611,317
476,298
Why does the command source not work on Windows - cannot activate venv?
<p>I am trying to set up a virtual environment for a <em>Python</em> project, and the command <code>source</code> doesn't work:</p> <pre class="lang-none prettyprint-override"><code>C:\Users&gt;source MyEnv 'source' is not recognized as an internal or external command, operable program or batch file. </code></pre> <p>From other answers on Stack Overflow I gathered that I am supposed to use the path to the <code>virtualenv</code>, which depends on where my terminal window is currently located. That's quite annoying.</p> <p>Is there a better way?</p>
<python><powershell><batch-file><virtualenv>
2025-05-07 19:44:44
1
2,107
Darkhydro
79,611,225
3,163,618
Polars check if column is unique
<p>There are multiple ways to check whether a column in Polars is unique, i.e. whether it can be used as a key. For example:</p> <pre><code>df[&quot;a&quot;].is_unique().all() </code></pre> <p>(<a href="https://stackoverflow.com/questions/74841242/selecting-with-indexing-is-an-anti-pattern-in-polars-how-to-parse-and-transform">Selecting with indexing is bad?</a>; also, I had <code>df.select(&quot;a&quot;)</code>, which returns another DataFrame instead of a Series, and I am not sure whether that makes a difference.)</p> <p>What is the cleanest, fastest, or canonical way to do this? Is there some kind of constraint like SQL's UNIQUE when loading the DataFrame? I found out that joins support validation, and checking key uniqueness for joins is mostly what I need this for.</p>
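<p>For comparison, a few equivalent checks I have come across (sketches only; I do not know which of these, if any, Polars considers canonical):</p> <pre><code>import polars as pl

df = pl.DataFrame({'a': [1, 2, 3]})

df['a'].n_unique() == df.height                       # Series-based check
df['a'].is_unique().all()                             # same idea via a boolean mask
df.select(pl.col('a').is_duplicated().any()).item()   # expression-based: True if any duplicates
</code></pre>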
<python><dataframe><unique><python-polars>
2025-05-07 18:36:32
1
11,524
qwr
79,611,220
1,328,439
"ModuleNotFoundError: No module named 'ESMF'" when importing xesmf installed via conda
<p>I have tried to install the xESMF module developed by the pangeo-data collective into my conda environment with</p> <pre class="lang-bash prettyprint-override"><code>conda install -c conda-forge --override-channels xesmf </code></pre> <p>However, when I try to import xesmf I get the following error:</p> <pre><code>import xesmf as xe Traceback (most recent call last): File &quot;&lt;python-input-0&gt;&quot;, line 1, in &lt;module&gt; import xesmf File &quot;/opt/conda/envs/geo/lib/python3.9/site-packages/xesmf/__init__.py&quot;, line 4, in &lt;module&gt; from . frontend import Regridder File &quot;/opt/conda/envs/geo/lib/python3.9/site-packages/xesmf/frontend.py&quot;, line 10, in &lt;module&gt; from . backend import (esmf_grid, esmf_locstream, add_corner, esmf_regrid_build, esmf_regrid_finalize) File &quot;/opt/conda/envs/geo/lib/python3.9/site-packages/xesmf/backend.py&quot;, line 19, in &lt;module&gt; import ESMF ModuleNotFoundError: No module named 'ESMF' </code></pre> <p>Indeed, a module named ESMF is available neither in the default conda channels nor in conda-forge.</p> <p>This error has been reported in one of the xESMF code repositories: <a href="https://github.com/JiaweiZhuang/xESMF/issues/131" rel="nofollow noreferrer">https://github.com/JiaweiZhuang/xESMF/issues/131</a></p> <p>However, the only solution suggested on that thread was to create a new environment, which I would prefer not to do, as my current environment has many packages installed and recreating it with different package versions would require quite some integration testing.</p>
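<p>For what it is worth, a workaround I have seen mentioned in related issue threads, included here only as a sketch (it assumes the <code>esmpy</code> package from conda-forge is installed and that newer ESMF builds expose the Python bindings under that name instead of <code>ESMF</code>; I have not verified it against every version):</p> <pre><code>import sys
import esmpy

sys.modules['ESMF'] = esmpy   # let the old 'import ESMF' in xesmf's backend resolve to esmpy
import xesmf as xe
</code></pre>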
<python><conda><xesmf>
2025-05-07 18:34:31
1
17,323
Dima Chubarov
79,611,005
3,758,232
pyproject.toml: specify dynamic parameters for C extension
<p>I'm updating a Python package that contains some C extensions.</p> <p>In a previous version I had all parameters set up in the setuptools <code>setup()</code> function, and now I am porting that setup into a new version that uses <code>pyproject.toml</code> with setuptools as a build back end.</p> <p>The relevant snippet of my current setup:</p> <pre class="lang-ini prettyprint-override"><code> [[tool.setuptools.ext-modules]] name = &quot;_mypackage&quot; language = &quot;c&quot; sources = [&quot;./src/my_lib.c&quot;] include-dirs = [&quot;./include&quot;, &quot;/usr/local/include&quot;] # This should adapt to the build host. library-dirs = [&quot;/usr/local/lib&quot;] # This too. libraries = [&quot;myotherlib_dbg&quot;] # `_dbg` should only be used if the `DEBUG` env var is set to `1`. extra-compile-args = [ &quot;-std=gnu11&quot;, &quot;-Wall&quot;, &quot;-fPIC&quot;, &quot;-MMD&quot;, ] # Add optimization flag based on `DEBUG` env var value. </code></pre> <p>As you can see, some values are hard-coded just to test the build process, but I need at least to create some dynamic parameters based on environment variables or other factors.</p> <p>How do I achieve this? I read about the <a href="https://setuptools.pypa.io/en/latest/userguide/pyproject_config.html#dynamic-metadata" rel="nofollow noreferrer">setuptools dynamic variable section</a>, but the <code>ext-modules</code> is not among the supported dynamic fields.</p>
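<p>For reference, the fallback I am currently considering (a sketch, not necessarily the intended mechanism): keep the static metadata in <code>pyproject.toml</code> but move the extension definition back into a small <code>setup.py</code>, where arbitrary Python can inspect environment variables. The names below mirror the TOML snippet above; <code>myotherlib</code> as the non-debug library name is an assumption.</p> <pre class="lang-py prettyprint-override"><code># setup.py -- lives next to pyproject.toml; setuptools merges this with the static metadata
import os
from setuptools import setup, Extension

debug = os.environ.get(&quot;DEBUG&quot;) == &quot;1&quot;

ext = Extension(
    &quot;_mypackage&quot;,
    language=&quot;c&quot;,
    sources=[&quot;./src/my_lib.c&quot;],
    include_dirs=[&quot;./include&quot;, &quot;/usr/local/include&quot;],
    library_dirs=[&quot;/usr/local/lib&quot;],
    libraries=[&quot;myotherlib_dbg&quot; if debug else &quot;myotherlib&quot;],
    extra_compile_args=[&quot;-std=gnu11&quot;, &quot;-Wall&quot;, &quot;-fPIC&quot;, &quot;-MMD&quot;,
                        &quot;-O0&quot; if debug else &quot;-O2&quot;],
)

setup(ext_modules=[ext])
</code></pre>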
<python><python-c-api><pyproject.toml>
2025-05-07 16:16:28
0
928
user3758232
79,610,948
2,886,640
How to modify the metadata creation date of a file through Python?
<p>I have a lot of photos with wrong dates in metadata. When I upload them to a web service like Amazon Photos for example, they are disorganized due to that.</p> <p>So I am doing a Python 3 script in Linux to modify the dates of the photos, but not all dates are changing.</p> <p>For example, when examining the date of a photo with <code>exiftool</code> I get this:</p> <pre><code>File Modification Date/Time : 2023:11:07 17:49:45+01:00 File Access Date/Time : 2025:05:07 17:30:20+02:00 File Inode Change Date/Time : 2025:05:07 17:30:20+02:00 Date/Time Original : 2023:11:06 19:09:35 Create Date : 2023:11:06 19:09:35 Modify Date : 2023:11:06 19:09:35 GPS Date Stamp : 2023:11:06 Profile Date Time : 2018:06:24 13:22:32 Date/Time Original : 2023:11:06 19:09:35.0 GPS Date/Time : 2023:11:06 19:09:35Z </code></pre> <p>Then, I execute my script to set <code>2025:05:05</code> as the new date, and after that the metadata are these:</p> <pre><code>File Modification Date/Time : 2025:05:05 15:39:00+02:00 File Access Date/Time : 2025:05:07 17:36:57+02:00 File Inode Change Date/Time : 2025:05:07 17:36:57+02:00 Modify Date : 2025:05:05 15:39:00 Date/Time Original : 2025:05:05 15:39:00 Create Date : 2025:05:05 15:39:00 GPS Date Stamp : 2023:11:06 Profile Date Time : 2018:06:24 13:22:32 Date/Time Original : 2025:05:05 15:39:00.0 GPS Date/Time : 2023:11:06 19:09:35Z </code></pre> <p>As you can see, the following dates were not changed:</p> <pre><code>File Access Date/Time : 2025:05:07 17:30:20+02:00 File Inode Change Date/Time : 2025:05:07 17:30:20+02:00 GPS Date Stamp : 2023:11:06 Profile Date Time : 2018:06:24 13:22:32 GPS Date/Time : 2023:11:06 19:09:35Z </code></pre> <p>When I navigate to the photo trough a file explorer like Dolphin, its <code>Created</code> value is not the one I set (<code>2025:05:05</code>), and I guess that is not a good sign. I do not mind the values of <code>Modified</code> or <code>Accesed</code>, which are OK, because they will change with any action so these are not reliable.</p> <p>What do you think about this? Which date should I modify to organize photos in web services? And is it OK to do that?</p> <p>This is my code, any suggestion will be appreciated:</p> <pre><code>#!/usr/bin/env python3 import subprocess file_path = &quot;IMG_20231107_174945.jpg&quot; new_date = &quot;2025:05:05 15:39:00&quot; subprocess.run([ &quot;exiftool&quot;, &quot;-overwrite_original&quot;, f&quot;-AllDates={new_date}&quot;, file_path ]) date_touch = new_date.replace(&quot;:&quot;, &quot;&quot;).replace(&quot; &quot;, &quot;&quot;)[:12] subprocess.run([&quot;touch&quot;, &quot;-t&quot;, date_touch, file_path]) </code></pre>
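<p>As a side note on my own script: a variant I may switch to replaces the <code>touch</code> subprocess with the standard library, which avoids the date-format juggling (it only touches the filesystem timestamps, not the EXIF tags):</p> <pre><code>import os
from datetime import datetime

ts = datetime.strptime(new_date, &quot;%Y:%m:%d %H:%M:%S&quot;).timestamp()
os.utime(file_path, (ts, ts))  # sets access and modification times
</code></pre>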
<python><python-3.x><metadata>
2025-05-07 15:50:04
0
10,269
forvas
79,610,656
3,760,519
What is the purpose of the output parameter in the server function of a shiny app?
<p>In R, the output is used to render outputs by assigning the result of a render function to some element of the output object. It does not seem to work this way in python.</p> <p>Some example python code:</p> <pre><code>def server(input: Inputs, output: Outputs, session: Session) -&gt; None: @render.plot def hist() -&gt; None: hue = &quot;species&quot; if input.species() else None sns.kdeplot(df, x=input.var(), hue=hue) if input.show_rug(): sns.rugplot(df, x=input.var(), hue=hue, color=&quot;black&quot;, alpha=0.25) </code></pre> <p>In python, rendering is controlled with decorators and having a function name matching the name of a component defined in the UI. So when is the output parameter used for anything?</p> <hr> <h2>Full working sample</h2> <pre><code>import seaborn as sns # Import data from shared.py from shared import df from shiny import App, render, ui app_ui = ui.page_sidebar( ui.sidebar( ui.input_select( &quot;var&quot;, &quot;Select variable&quot;, choices=[&quot;bill_length_mm&quot;, &quot;body_mass_g&quot;] ), ui.input_switch(&quot;species&quot;, &quot;Group by species&quot;, value=True), ui.input_switch(&quot;show_rug&quot;, &quot;Show Rug&quot;, value=True), ), ui.output_plot(&quot;hist&quot;), title=&quot;Hello sidebar!&quot;, ) def server(input, output, session): @render.plot def hist(): hue = &quot;species&quot; if input.species() else None sns.kdeplot(df, x=input.var(), hue=hue) if input.show_rug(): sns.rugplot(df, x=input.var(), hue=hue, color=&quot;black&quot;, alpha=0.25) app = App(app_ui, server) </code></pre>
<python><py-shiny>
2025-05-07 13:32:14
0
2,406
Chechy Levas
79,610,592
3,584,765
Utilizing cocoapi for evaluation in python produces print results in console
<p>I am using the Coco eval script for my project. I have modified the demo provided <a href="https://github.com/cocodataset/cocoapi/blob/master/PythonAPI/pycocoEvalDemo.ipynb" rel="nofollow noreferrer">here</a>. The commands I use are basically those:</p> <pre><code>cocoGt = COCO(gt_json) cocoDt = cocoGt.loadRes(results_json) imgIds = sorted(cocoGt.getImgIds()) # running evaluation cocoEval = COCOeval(cocoGt, cocoDt, annType) # cocoEval.params.catIds = [1] # person id : 1 cocoEval.params.imgIds = imgIds cocoEval.evaluate() cocoEval.accumulate() # in this step the results are printed cocoEval.summarize() </code></pre> <p>everything works fine except for a display of the results in the console. The display opccurs when <code>cocoEval.accumulate()</code> is executed. The display is something similar to this one:</p> <pre><code>DONE (t=0.52s). Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.617 Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.945 Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.688 Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.533 Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.626 Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.869 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.111 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.539 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.717 Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.658 Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.722 Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.925 </code></pre> <p>and it is displayed each time the prompt is used (if on debug or interactive mode that means in EVERY command!). When launched in a script the display is only occurring once but still it shouldn't happen.</p> <p>After investigating the code I found that the printing occurs in <a href="https://github.com/cocodataset/cocoapi/blob/master/PythonAPI/pycocotools/cocoeval.py#L411" rel="nofollow noreferrer">line 411</a> of <code>cocoeval.py</code>. The inexplicable issue is that the next line which cause this console printing is just a dictionary definition:</p> <pre><code>self.eval = { 'params': p, 'counts': [T, R, K, A, M], 'date': datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S'), 'precision': precision, 'recall': recall, 'scores': scores, } </code></pre> <p>The even funnier thing is that if the precision field is empty the printing stops from happening! For example the following dict does not produce anything in the console (of course it crushes after a while and the script does not work correctly).</p> <pre><code>self.eval = { 'params': p, 'counts': [T, R, K, A, M], 'date': datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S'), 'recall': recall, 'scores': scores, } </code></pre> <p>Similarly, if I include a line like</p> <pre><code>cocoEval.eval = {} # this stop the result printing for some reason </code></pre> <p>after <code>cocoEval.summarize()</code> the printing also stops (it has occurred once of course already). <code>precision</code> is an ordinary numpy array also.</p> <p>The questions I have are:</p> <p>a) Why the inclusion of this specific field <code>precision</code> in the <code>cocoEval.eval</code> dict produce this printing change?<br /> b) How can it be stopped from occurring?</p>
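<p>For reference, a generic stopgap (it does not answer question a, and I would rather understand the cause) would be to swallow stdout around the call:</p> <pre><code>import io
import contextlib

with contextlib.redirect_stdout(io.StringIO()):
    cocoEval.accumulate()
</code></pre>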
<python>
2025-05-07 13:00:11
0
5,743
Eypros
79,610,568
17,889,492
Store numpy array in pandas dataframe
<p>I want to store a numpy array in pandas cell.</p> <p>This does not work:</p> <pre><code>import numpy as np import pandas as pd bnd1 = np.random.rand(74,8) bnd2 = np.random.rand(74,8) df = pd.DataFrame(columns = [&quot;val&quot;, &quot;unit&quot;]) df.loc[&quot;bnd&quot;] = [bnd1, &quot;N/A&quot;] df.loc[&quot;bnd&quot;] = [bnd2, &quot;N/A&quot;] </code></pre> <p>But this does:</p> <pre><code>import numpy as np import pandas as pd bnd1 = np.random.rand(74,8) bnd2 = np.random.rand(74,8) df = pd.DataFrame(columns = [&quot;val&quot;]) df.loc[&quot;bnd&quot;] = [bnd1] df.loc[&quot;bnd&quot;] = [bnd2] </code></pre> <p>Can someone explain why, and what's the solution?</p> <p>Edit:</p> <p>The first returns:</p> <blockquote> <p>ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (2,) + inhomogeneous part.</p> </blockquote> <p>The complete traceback is below:</p> <pre><code>&gt; --------------------------------------------------------------------------- AttributeError Traceback (most recent call &gt; last) File &gt; ~/anaconda3/envs/py38mats/lib/python3.8/site-packages/numpy/core/fromnumeric.py:3185, &gt; in ndim(a) 3184 try: &gt; -&gt; 3185 return a.ndim 3186 except AttributeError: &gt; &gt; AttributeError: 'list' object has no attribute 'ndim' &gt; &gt; During handling of the above exception, another exception occurred: &gt; &gt; ValueError Traceback (most recent call &gt; last) Cell In[10], line 8 &gt; 6 df = pd.DataFrame(columns = [&quot;val&quot;, &quot;unit&quot;]) &gt; 7 df.loc[&quot;bnd&quot;] = [bnd1, &quot;N/A&quot;] &gt; ----&gt; 8 df.loc[&quot;bnd&quot;] = [bnd2, &quot;N/A&quot;] &gt; &gt; File &gt; ~/anaconda3/envs/py38mats/lib/python3.8/site-packages/pandas/core/indexing.py:849, &gt; in _LocationIndexer.__setitem__(self, key, value) &gt; 846 self._has_valid_setitem_indexer(key) &gt; 848 iloc = self if self.name == &quot;iloc&quot; else self.obj.iloc &gt; --&gt; 849 iloc._setitem_with_indexer(indexer, value, self.name) &gt; &gt; File &gt; ~/anaconda3/envs/py38mats/lib/python3.8/site-packages/pandas/core/indexing.py:1835, &gt; in _iLocIndexer._setitem_with_indexer(self, indexer, value, name) &gt; 1832 # align and set the values 1833 if take_split_path: 1834 &gt; # We have to operate column-wise &gt; -&gt; 1835 self._setitem_with_indexer_split_path(indexer, value, name) 1836 else: 1837 self._setitem_single_block(indexer, &gt; value, name) &gt; &gt; File &gt; ~/anaconda3/envs/py38mats/lib/python3.8/site-packages/pandas/core/indexing.py:1872, &gt; in _iLocIndexer._setitem_with_indexer_split_path(self, indexer, value, &gt; name) 1869 if isinstance(value, ABCDataFrame): 1870 &gt; self._setitem_with_indexer_frame_value(indexer, value, name) &gt; -&gt; 1872 elif np.ndim(value) == 2: 1873 # TODO: avoid np.ndim call in case it isn't an ndarray, since 1874 # that will &gt; construct an ndarray, which will be wasteful 1875 &gt; self._setitem_with_indexer_2d_value(indexer, value) 1877 elif &gt; len(ilocs) == 1 and lplane_indexer == len(value) and not &gt; is_scalar(pi): 1878 # We are setting multiple rows in a single &gt; column. &gt; &gt; File &lt;__array_function__ internals&gt;:200, in ndim(*args, **kwargs) &gt; &gt; File &gt; ~/anaconda3/envs/py38mats/lib/python3.8/site-packages/numpy/core/fromnumeric.py:3187, &gt; in ndim(a) 3185 return a.ndim 3186 except AttributeError: &gt; -&gt; 3187 return asarray(a).ndim &gt; &gt; ValueError: setting an array element with a sequence. 
The requested &gt; array has an inhomogeneous shape after 1 dimensions. The detected &gt; shape was (2,) + inhomogeneous part. </code></pre> <p>I'm using <code>pandas 2.0.3</code> and <code>numpy 1.24.4</code></p>
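<p>For reference, the traceback suggests the two-column assignment takes <code>_setitem_with_indexer_split_path</code>, which calls <code>np.ndim()</code> on the whole <code>[array, &quot;N/A&quot;]</code> list and trips over the ragged shapes. A construction that avoids row assignment entirely should sidestep this (a sketch, untested with my real data):</p> <pre><code>import numpy as np
import pandas as pd

bnd1 = np.random.rand(74, 8)

# wrapping each value in a list makes pandas store the array as a single object cell
df = pd.DataFrame({&quot;val&quot;: [bnd1], &quot;unit&quot;: [&quot;N/A&quot;]}, index=[&quot;bnd&quot;])
print(df.loc[&quot;bnd&quot;, &quot;val&quot;].shape)  # (74, 8)
</code></pre>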
<python><pandas><numpy>
2025-05-07 12:49:15
2
526
R Walser
79,610,495
21,446,483
Spark memory error in thread spark-listener-group-eventLog
<p>I have a pyspark application which is using Graphframes to compute connected components on a DataFrame. The edges DataFrame I generate has 2.7M records. When I run the code it is slow, but slowly works through the required transformations and joins prior to the use of Graphframes. However, when calling the <code>connectedComponents</code> method it eventually fails with the error below. I'm limited by how much more I can scale the Dataproc cluster I'm running this code on.</p> <p><strong>Note:</strong> If I keep only a couple records to test the code runs successfully using spark-submit on my client and on Dataproc.</p> <p><strong>Note 2</strong>: I've tried setting the config option <code>.config(&quot;spark.sql.eventLog.enabled&quot;, &quot;false&quot;)</code> to disable the eventLog, no change.</p> <p>Error:</p> <pre><code>25/05/07 11:47:42 ERROR Utils: uncaught error in thread spark-listener-group-eventLog, stopping SparkContext java.lang.OutOfMemoryError: null at java.base/java.io.ByteArrayOutputStream.hugeCapacity(ByteArrayOutputStream.java:125) ~[?:?] at java.base/java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:119) ~[?:?] at java.base/java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:95) ~[?:?] at java.base/java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:156) ~[?:?] at com.fasterxml.jackson.core.json.UTF8JsonGenerator._flushBuffer(UTF8JsonGenerator.java:2203) ~[jackson-core-2.15.2.jar:2.15.2] at com.fasterxml.jackson.core.json.UTF8JsonGenerator._writeStringSegment2(UTF8JsonGenerator.java:1515) ~[jackson-core-2.15.2.jar:2.15.2] at com.fasterxml.jackson.core.json.UTF8JsonGenerator._writeStringSegment(UTF8JsonGenerator.java:1462) ~[jackson-core-2.15.2.jar:2.15.2] at com.fasterxml.jackson.core.json.UTF8JsonGenerator._writeStringSegments(UTF8JsonGenerator.java:1345) ~[jackson-core-2.15.2.jar:2.15.2] at com.fasterxml.jackson.core.json.UTF8JsonGenerator.writeString(UTF8JsonGenerator.java:517) ~[jackson-core-2.15.2.jar:2.15.2] at com.fasterxml.jackson.databind.ser.std.StringSerializer.serialize(StringSerializer.java:41) ~[jackson-databind-2.15.2.jar:2.15.2] at com.fasterxml.jackson.databind.ser.BeanPropertyWriter.serializeAsField(BeanPropertyWriter.java:732) ~[jackson-databind-2.15.2.jar:2.15.2] at com.fasterxml.jackson.databind.ser.std.BeanSerializerBase.serializeFields(BeanSerializerBase.java:772) ~[jackson-databind-2.15.2.jar:2.15.2] at com.fasterxml.jackson.databind.ser.std.BeanSerializerBase.serializeWithType(BeanSerializerBase.java:655) ~[jackson-databind-2.15.2.jar:2.15.2] at com.fasterxml.jackson.databind.ser.impl.TypeWrappedSerializer.serialize(TypeWrappedSerializer.java:32) ~[jackson-databind-2.15.2.jar:2.15.2] at com.fasterxml.jackson.databind.ser.DefaultSerializerProvider._serialize(DefaultSerializerProvider.java:479) ~[jackson-databind-2.15.2.jar:2.15.2] at com.fasterxml.jackson.databind.ser.DefaultSerializerProvider.serializeValue(DefaultSerializerProvider.java:318) ~[jackson-databind-2.15.2.jar:2.15.2] at com.fasterxml.jackson.databind.ObjectMapper.writeValue(ObjectMapper.java:3303) ~[jackson-databind-2.15.2.jar:2.15.2] at org.apache.spark.util.JsonProtocol$.writeSparkEventToJson(JsonProtocol.scala:110) ~[spark-core_2.12-3.5.3.jar:3.5.3] at org.apache.spark.util.JsonProtocol$.$anonfun$sparkEventToJsonString$1(JsonProtocol.scala:63) ~[spark-core_2.12-3.5.3.jar:3.5.3] at org.apache.spark.util.JsonProtocol$.$anonfun$sparkEventToJsonString$1$adapted(JsonProtocol.scala:62) ~[spark-core_2.12-3.5.3.jar:3.5.3] at 
org.apache.spark.util.JsonUtils.toJsonString(JsonUtils.scala:36) ~[spark-common-utils_2.12-3.5.3.jar:3.5.3] at org.apache.spark.util.JsonUtils.toJsonString$(JsonUtils.scala:33) ~[spark-common-utils_2.12-3.5.3.jar:3.5.3] at org.apache.spark.util.JsonProtocol$.toJsonString(JsonProtocol.scala:54) ~[spark-core_2.12-3.5.3.jar:3.5.3] at org.apache.spark.util.JsonProtocol$.sparkEventToJsonString(JsonProtocol.scala:62) ~[spark-core_2.12-3.5.3.jar:3.5.3] at org.apache.spark.scheduler.EventLoggingListener.logEvent(EventLoggingListener.scala:96) ~[spark-core_2.12-3.5.3.jar:3.5.3] at org.apache.spark.scheduler.EventLoggingListener.onOtherEvent(EventLoggingListener.scala:274) ~[spark-core_2.12-3.5.3.jar:3.5.3] at org.apache.spark.scheduler.SparkListenerBus.doPostEvent(SparkListenerBus.scala:102) ~[spark-core_2.12-3.5.3.jar:3.5.3] at org.apache.spark.scheduler.SparkListenerBus.doPostEvent$(SparkListenerBus.scala:28) ~[spark-core_2.12-3.5.3.jar:3.5.3] at org.apache.spark.scheduler.AsyncEventQueue.doPostEvent(AsyncEventQueue.scala:37) ~[spark-core_2.12-3.5.3.jar:3.5.3] at org.apache.spark.scheduler.AsyncEventQueue.doPostEvent(AsyncEventQueue.scala:37) ~[spark-core_2.12-3.5.3.jar:3.5.3] at org.apache.spark.util.ListenerBus.postToAll(ListenerBus.scala:117) ~[spark-core_2.12-3.5.3.jar:3.5.3] at org.apache.spark.util.ListenerBus.postToAll$(ListenerBus.scala:101) ~[spark-core_2.12-3.5.3.jar:3.5.3] at org.apache.spark.scheduler.AsyncEventQueue.super$postToAll(AsyncEventQueue.scala:105) ~[spark-core_2.12-3.5.3.jar:3.5.3] at org.apache.spark.scheduler.AsyncEventQueue.$anonfun$dispatch$1(AsyncEventQueue.scala:105) ~[spark-core_2.12-3.5.3.jar:3.5.3] at scala.runtime.java8.JFunction0$mcJ$sp.apply(JFunction0$mcJ$sp.java:23) ~[scala-library-2.12.18.jar:?] at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62) ~[scala-library-2.12.18.jar:?] 
at org.apache.spark.scheduler.AsyncEventQueue.org$apache$spark$scheduler$AsyncEventQueue$$dispatch(AsyncEventQueue.scala:100) ~[spark-core_2.12-3.5.3.jar:3.5.3] at org.apache.spark.scheduler.AsyncEventQueue$$anon$2.$anonfun$run$1(AsyncEventQueue.scala:96) ~[spark-core_2.12-3.5.3.jar:3.5.3] at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1358) [spark-core_2.12-3.5.3.jar:3.5.3] at org.apache.spark.scheduler.AsyncEventQueue$$anon$2.run(AsyncEventQueue.scala:96) [spark-core_2.12-3.5.3.jar:3.5.3] 25/05/07 11:47:42 ERROR Utils: throw uncaught fatal error in thread spark-listener-group-eventLog </code></pre> <p>Limited code sample:</p> <pre class="lang-py prettyprint-override"><code>edges = df.alias(&quot;a&quot;).join(df.alias(&quot;b&quot;), (F.col(&quot;a.domain&quot;) == F.col(&quot;b.domain&quot;)) | (F.col(&quot;a.identifier&quot;).isNotNull() &amp; (F.col(&quot;a.identifier&quot;) == F.col(&quot;b.identifier&quot;))) ) \ .where(F.col(&quot;a.id&quot;) != F.col(&quot;b.id&quot;)) \ .select( F.col(&quot;a.id&quot;).alias(&quot;src&quot;), F.col(&quot;b.id&quot;).alias(&quot;dst&quot;) ) self.logger.info(f&quot;Writing edges {edges.count()} records&quot;) self.write(edges, &quot;edges&quot;, &quot;overwrite&quot;) vertices = df.select([F.col(&quot;id&quot;)]) self.logger.info(f&quot;Writing vertices {edges.count()} records&quot;) g = GraphFrame(vertices, edges) result = g.connectedComponents() </code></pre> <p>Package versions:</p> <ul> <li>pyspark==3.5.5</li> <li>graphframes==0.6</li> </ul> <p>Additional context if useful, these are the logs right before the error:</p> <pre><code>25/05/07 11:43:39 WARN DAGScheduler: Broadcasting large task binary with size 1039.2 KiB 25/05/07 11:43:43 WARN DAGScheduler: Broadcasting large task binary with size 1092.1 KiB 25/05/07 11:43:45 INFO GoogleHadoopOutputStream: hflush(): No-op due to rate limit (RateLimiter[stableRate=0.2qps]): readers will *not* yet see flushed data for gs://&lt;path&gt; [CONTEXT ratelimit_period=&quot;1 MINUTES [skipped: 13]&quot; ] 25/05/07 11:43:48 WARN DAGScheduler: Broadcasting large task binary with size 1113.6 KiB 25/05/07 11:43:53 WARN DAGScheduler: Broadcasting large task binary with size 1129.0 KiB 25/05/07 11:43:58 WARN DAGScheduler: Broadcasting large task binary with size 1147.7 KiB 25/05/07 11:45:10 WARN DAGScheduler: Broadcasting large task binary with size 1151.5 KiB 25/05/07 11:45:24 INFO RequestTracker: Detected high latency for [url=&lt;gcs_upload_path&gt;; invocationId=&lt;id&gt;]. durationMs=316; method=PUT; thread=gcs-async-channel-pool-0 [CONTEXT ratelimit_period=&quot;10 SECONDS&quot; ] 25/05/07 11:45:27 INFO GoogleHadoopOutputStream: hflush(): No-op due to rate limit (RateLimiter[stableRate=0.2qps]): readers will *not* yet see flushed data for gs://&lt;path&gt; [CONTEXT ratelimit_period=&quot;1 MINUTES [skipped: 21]&quot; ] 25/05/07 11:47:35 WARN StringUtils: Truncated the string representation of a plan since it was too long. The plan had length 2147483632 and the maximum is 2147483602. This behavior can be adjusted by setting 'spark.sql.maxPlanStringLength'. </code></pre> <p>Any help would be appreciated. Even just pointers on how to do additional debugging.</p>
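<p>For reference, two things I am double-checking on my side (I have not verified that either fixes the OOM): the event-log switch appears to be <code>spark.eventLog.enabled</code> rather than <code>spark.sql.eventLog.enabled</code>, and since GraphFrames' <code>connectedComponents</code> checkpoints intermediate results, the checkpoint directory location can matter for large runs. A configuration sketch (the bucket path is a placeholder):</p> <pre class="lang-py prettyprint-override"><code>from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName(&quot;connected-components&quot;)
    .config(&quot;spark.eventLog.enabled&quot;, &quot;false&quot;)        # note: no &quot;.sql.&quot; in the key
    .config(&quot;spark.sql.maxPlanStringLength&quot;, &quot;1000&quot;)  # keep giant plan strings small
    .getOrCreate()
)
spark.sparkContext.setCheckpointDir(&quot;gs://my-bucket/checkpoints/&quot;)  # placeholder path
</code></pre>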
<python><apache-spark><pyspark><google-cloud-dataproc><graphframes>
2025-05-07 12:16:28
1
332
Jesus Diaz Rivero
79,610,462
13,392,257
Eliminate spaces in raw SQL in python script
<p>I am creating a SQL view with the help of SQLAlchemy; my code:</p> <pre><code>from sqlalchemy.sql import text from app.core.config import settings # CREATE VIEWS all_recent_tasks = text( f&quot;CREATE OR REPLACE VIEW all_recent_tasks AS \ SELECT COUNT(*) FROM tasks WHERE creation_time &gt;= NOW() - INTERVAL '{settings.common.metrics_pause_minutes} minutes';&quot; ) </code></pre> <p>The code prints:</p> <pre><code>DEBUG__ CREATE OR REPLACE VIEW all_recent_tasks AS SELECT COUNT(*) FROM tasks WHERE creation_time &gt;= NOW() - INTERVAL '25 minutes'; </code></pre> <p>Is it possible to keep the current formatting of the source code, but eliminate the spaces after 'AS' in the generated SQL?</p> <p>I mean that I don't want an ugly format in the source, like this:</p> <pre><code>all_recent_tasks = text( f&quot;CREATE OR REPLACE VIEW all_recent_tasks AS \ SELECT COUNT(*) FROM tasks WHERE creation_time &gt;= NOW() - INTERVAL '{settings.common.metrics_pause_minutes} minutes';&quot; ) </code></pre>
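<p>For reference, one direction I am considering (a sketch; <code>minutes</code> stands in for <code>settings.common.metrics_pause_minutes</code>): a triple-quoted string cleaned up with <code>textwrap.dedent</code>, so the source stays indented but the emitted SQL does not carry the extra spaces:</p> <pre><code>from textwrap import dedent
from sqlalchemy.sql import text

minutes = 25  # placeholder for settings.common.metrics_pause_minutes
all_recent_tasks = text(dedent(f&quot;&quot;&quot;\
    CREATE OR REPLACE VIEW all_recent_tasks AS
    SELECT COUNT(*) FROM tasks
    WHERE creation_time &gt;= NOW() - INTERVAL '{minutes} minutes';
&quot;&quot;&quot;))
</code></pre>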
<python><sql><sqlalchemy>
2025-05-07 12:00:28
4
1,708
mascai
79,610,450
538,256
subtract parabolic average from image (vignette removal)
<p>I have to process the images I acquire with the microscope with a vertical binning, to obtain the distances between the dark/bright regions. Images are like this: <a href="https://i.sstatic.net/3Tw6IilD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3Tw6IilD.png" alt="starting image" /></a></p> <p>I defined a threshold function that crops the circular image, makes the binning, and subtract a parabolic profile:</p> <pre><code>def threshold(image, X0Y0R): ''' crop the image along the circle of the ocular (step1) vertical binning of image, and subtract parabolic background (step2), resulting in sub_binn X0Y0R is the (X0,Y0,R) tuple, with circle center and radius returns: ((np.max(sub_binn)-np.min(sub_binn))/2)+np.min(sub_binn) -&gt; sub_binn's avg gray level image -&gt; the rotated, circle-cropped image binn -&gt; 1D array vert binning, previous to parabolic subtraction sub_binn -&gt; 1D array used by image_analisys() ''' image = rotate(image, tilt, reshape=True) ######## --&gt;&gt; step1 X0,Y0,R = X0Y0R[0],X0Y0R[1],X0Y0R[2] Y, X = np.ogrid[:image.shape[0], :image.shape[1]] outer_disk_mask = (X - X0)**2 + (Y - Y0)**2 &gt; R**2 image[outer_disk_mask] = False ######## --&gt;&gt; step2 binn = [] for i in range(len(image[0])): binn = np.append(binn, np.sum(np.transpose(image)[i])) binn = binn[np.argwhere(binn&gt;0)] x_n = np.arange(len(binn)) par = np.polyfit(range(len(binn)), binn, 2) sub_binn = [] for i in range(len(binn)): sub_binn = np.append(sub_binn, binn[[i]]-(x_n[i]*par[1]+par[0]*(x_n[i]*x_n[i]))) return ((np.max(sub_binn)-np.min(sub_binn))/2)+np.min(sub_binn),image, binn, sub_binn </code></pre> <p>Now when I examine the result:</p> <pre><code>image = img.imread('image.png').astype(np.int32) print(image.shape) X0,Y0,R = 950,700,550 it=threshold(image,X0Y0R=(X0,Y0,R)) fig,ax=plt.subplots(1,2,figsize=(12,4)) plt.subplot(121) plt.imshow(it[1], interpolation='none',cmap=&quot;gray&quot;) plt.subplot(122) plt.plot(it[2],color='gray',label='vertical binning') plt.plot(it[3],color='black',label='after parab subtract') plt.hlines(y=it[0], color='red',xmin=0,xmax=1000) print(it[1].shape, 2*R) plt.legend() plt.show() </code></pre> <p>I see that the parabolic fit is never correct - e.g. in this case it overestimates it:</p> <p><a href="https://i.sstatic.net/VCfPdbWt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VCfPdbWt.png" alt="profiles" /></a></p> <p>Since it will have to work on different images, there is some way to extract a reasonably horizontal profile, unsupervised?</p>
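<p>While writing this up I noticed that <code>np.polyfit</code> returns coefficients from the highest degree down, and that my loop drops the constant term. I have not confirmed whether that alone explains the offset in the plot, but for reference the evaluation can be written more directly with <code>np.polyval</code> (assuming <code>binn</code> is a flat 1-D array):</p> <pre><code>import numpy as np

x_n = np.arange(len(binn))
par = np.polyfit(x_n, binn, 2)          # [a, b, c] for a*x**2 + b*x + c
sub_binn = binn - np.polyval(par, x_n)  # subtract the full fitted parabola
</code></pre>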
<python><image-processing><signal-processing>
2025-05-07 11:49:55
1
4,004
alessandro
79,610,431
7,093,241
Failed import in jupyter notebook venv kernel in VS Code but successful import in terminal also in VS Code
<p>I was trying to go over this <a href="https://pgmpy.org/exact_infer/causal.html" rel="nofollow noreferrer">pgymy tutorial</a> and I ran into an issue with the package. I am running this on a Jupyter notebook on VS Code on Windows.</p> <pre><code>from pgmpy.models import DiscreteBayesianNetwork </code></pre> <p>However, I get this error:</p> <pre><code>ImportError: cannot import name 'DiscreteBayesianNetwork' from 'pgmpy.models' (J:\.venv\Lib\site-packages\pgmpy\models\__init__.py) </code></pre> <p>I see that I have version <code>1.0.0</code>, which is the latest for <code>pgmpy</code> but here are the versions of the rest of the packages.</p> <pre class="lang-bash prettyprint-override"><code>$ pip list Package Version ----------------- ----------- asttokens 3.0.0 colorama 0.4.6 comm 0.2.2 debugpy 1.8.12 decorator 5.1.1 executing 2.2.0 filelock 3.17.0 fsspec 2025.2.0 ipykernel 6.29.5 ipython 8.32.0 jedi 0.19.2 Jinja2 3.1.5 joblib 1.5.0 jupyter_client 8.6.3 jupyter_core 5.7.2 MarkupSafe 3.0.2 matplotlib-inline 0.1.7 mpmath 1.3.0 nest-asyncio 1.6.0 networkx 3.4.2 numpy 2.2.3 opt_einsum 3.4.0 packaging 24.2 pandas 2.2.3 parso 0.8.4 patsy 1.0.1 pgmpy 1.0.0 pillow 11.1.0 pip 25.0.1 platformdirs 4.3.6 prompt_toolkit 3.0.50 psutil 6.1.1 pure_eval 0.2.3 Pygments 2.19.1 pyro-api 0.1.2 pyro-ppl 1.9.1 python-dateutil 2.9.0.post0 pytz 2025.2 pywin32 308 pyzmq 26.2.1 scikit-learn 1.6.1 scipy 1.15.2 setuptools 75.8.2 six 1.17.0 stack-data 0.6.3 statsmodels 0.14.4 sympy 1.13.1 threadpoolctl 3.6.0 torch 2.6.0 torchvision 0.21.0 tornado 6.4.2 tqdm 4.67.1 traitlets 5.14.3 typing_extensions 4.12.2 tzdata 2025.2 wcwidth 0.2.13 </code></pre> <p>The other thing I did was I ran <code>python</code> in the shell and the import worked and I also ran <a href="https://stackoverflow.com/questions/20180543/how-do-i-check-the-versions-of-python-modules#answer-32965521">this answer</a>'s suggestion to check the module's version.</p> <pre><code>$ python Python 3.13.1 (tags/v3.13.1:0671451, Dec 3 2024, 19:06:28) [MSC v.1942 64 bit (AMD64)] on win32 Type &quot;help&quot;, &quot;copyright&quot;, &quot;credits&quot; or &quot;license&quot; for more information. &gt;&gt;&gt; from pgmpy.models import DiscreteBayesianNetwork &gt;&gt;&gt; from importlib.metadata import version &gt;&gt;&gt; version('pgmpy') '1.0.0' </code></pre> <p>Everything seems to check out. Is there something I am missing?</p> <h1>EDIT:</h1> <ol> <li>I am running the project and <code>venv</code> on <code>Google Drive</code>.</li> <li>when I run the terminal that uses <code>Git Bash</code>, <code>VS Code</code> sources my activate file but when I run <code>which python</code>, I see the global python and not that from the <code>venv</code>.</li> </ol> <pre><code> $ source &quot;x:/My Drive/.venv/Scripts/activate&quot; (.venv) $ which python /c/Users/user/AppData/Local/Programs/Python/Python313/python </code></pre>
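<p>For completeness, a check worth running inside the notebook to see which interpreter the kernel is actually using (this is what seems to disagree with the terminal):</p> <pre><code>import sys
print(sys.executable)  # interpreter the Jupyter kernel runs on
print(sys.prefix)      # points at the venv only if the kernel was started from it
</code></pre>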
<python><visual-studio-code><jupyter-notebook><google-drive-shared-drive><venv>
2025-05-07 11:35:52
0
1,794
heretoinfinity
79,610,339
9,488,023
How to get the same shading and colors with Lightsource in Python on two different images
<p>I have two different images of maps that are located next to each other, and I want to shade them with the same method so that they fit seamlessly together with regards to shading and color. However, when I use LightSource in Python, they have different shadings despite using the same inputs. My code looks like this:</p> <pre><code>import numpy as np import rasterio from matplotlib import cm from matplotlib.colors import ListedColormap, LinearSegmentedColormap, LightSource, rgb_to_hsv, hsv_to_rgb import matplotlib.pyplot as plt with rasterio.open(&quot;image_1.tiff&quot;, 'r') as ds: z1 = ds.read() with rasterio.open(&quot;image_2.tiff&quot;, 'r') as ds: z2 = ds.read() z1 = z1[0,:,:] z2 = z2[0,:,:] vmin = -12000 vmax = 9000 vexag = 0.05 blend_mode = 'soft' cmap = cm.ocean.copy() ocean = cmap(np.linspace(0.30, 0.85, 144)) colors = [ (0.00, 'palegoldenrod'), (0.05, 'burlywood'), (0.30, 'sienna'), (0.50, 'tan'), (0.70, 'snow'), (1.00, 'white'), ] cmap = LinearSegmentedColormap.from_list('cm', colors) land = cmap(np.linspace(0, 1, 108)) cmap = ListedColormap(np.concatenate((ocean, land))) light = LightSource(azdeg = 45, altdeg = 45) rgb1 = light.shade( z1, cmap = cmap, blend_mode = blend_mode, vert_exag = vexag, vmin = vmin, vmax = vmax ) rgb1[:,:,-1][np.isnan(z1)] = 0 rgb1 = np.uint8(rgb1*255) rgb2 = light.shade( z2, cmap = cmap, blend_mode = blend_mode, vert_exag = vexag, vmin = vmin, vmax = vmax ) rgb2[:,:,-1][np.isnan(z2)] = 0 rgb2 = np.uint8(rgb2*255) rgb3 = np.concatenate((rgb2, rgb1), axis=1) plt.figure() plt.imshow(rgb3) plt.show() </code></pre> <p>The results look something like this:<a href="https://i.sstatic.net/KPBJz9Gy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KPBJz9Gy.png" alt="results with exaggeration 0.05" /></a></p> <p>From what I gather, it has to do with the vertical exaggeration, vexag, which I have set to 0.05. If I set the exaggeration to 0, the results look like this: <a href="https://i.sstatic.net/zOWeUi65.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zOWeUi65.png" alt="results with no exaggeration" /></a></p> <p>They now fit together the way I want, but you can no longer see any details in the image. Is it possible to keep the vertical exaggeration to 0.05 but still get the same colors and shading in the image?</p>
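<p>One workaround I am considering (untested, and it assumes both rasters have the same height): shade the two maps as a single concatenated array so they share exactly the same normalization and exaggeration, then split the RGB result back apart:</p> <pre><code>z = np.concatenate((z2, z1), axis=1)
rgb = light.shade(z, cmap=cmap, blend_mode=blend_mode,
                  vert_exag=vexag, vmin=vmin, vmax=vmax)
rgb[:, :, -1][np.isnan(z)] = 0
rgb2, rgb1 = np.split(rgb, [z2.shape[1]], axis=1)  # undo the concatenation
</code></pre>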
<python><image><matplotlib><shading>
2025-05-07 10:36:33
0
423
Marcus K.
79,610,246
2,250,791
Type-hinting a function to Initialize a class with provided parameter
<p>The following code, which is just the problematic part extracted from a method that is otherwise useful, really makes Pyright unhappy. How can I fix that?</p> <pre class="lang-py prettyprint-override"><code>from typing import Type, Optional def mymethod[T](inp:str, out:Type[T]) -&gt; Optional[T]: return out(inp) </code></pre> <p>Pyright says:</p> <pre><code>[reportCallIssue]: Expected 0 positional arguments </code></pre>
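<p>For reference, one variant I have seen suggested (I am not sure it is the idiomatic fix) is to annotate the parameter as a callable taking a <code>str</code>, so the checker knows what arguments the constructor accepts:</p> <pre class="lang-py prettyprint-override"><code>from typing import Callable, Optional

def mymethod[T](inp: str, out: Callable[[str], T]) -&gt; Optional[T]:
    return out(inp)
</code></pre>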
<python><python-typing><pyright>
2025-05-07 09:52:00
1
2,075
Camden Narzt
79,610,192
1,473,517
How to profile code where the time is spent in imap_unordered?
<p>This code (it will only work in Linux) makes a 100MB numpy array and then runs <code>imap_unordered</code> where it in fact does no computation.</p> <p>It runs slowly and consistently. It outputs a <code>.</code> each time the square function is called and each takes roughly the same amount of time. How can I profile what it is doing?</p> <pre><code>from multiprocessing import Pool import numpy as np from pympler.asizeof import asizeof class ParallelProcessor: def __init__(self, num_processes=None): self.vals = np.random.random((3536, 3536)) print(&quot;Size of array in bytes&quot;, asizeof(self.vals)) def _square(self, x): print(&quot;.&quot;, end=&quot;&quot;, flush=True) return x * x def process(self, data): &quot;&quot;&quot; Processes the data in parallel using the square method. :param data: An iterable of items to be squared. :return: A list of squared results. &quot;&quot;&quot; with Pool(1) as pool: for result in pool.imap_unordered(self._square, data): # print(result) pass if __name__ == &quot;__main__&quot;: # Create an instance of the ParallelProcessor processor = ParallelProcessor() # Input data data = range(1000) # Run the processing in parallel processor.process(data) </code></pre>
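<p>One measurement that might narrow this down (a sketch): timing how long the task payload takes to pickle, since <code>self._square</code> is a bound method and, as far as I understand, it drags its instance (including the 100 MB array) through the pipe for every item, which would explain the constant per-call cost:</p> <pre><code>import pickle
import time

t0 = time.perf_counter()
blob = pickle.dumps(processor._square)  # a bound method pickles its __self__ too
elapsed = time.perf_counter() - t0
print(f&quot;payload: {len(blob) / 1e6:.1f} MB, pickled in {elapsed:.2f} s&quot;)
</code></pre>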
<python><python-multiprocessing>
2025-05-07 09:27:14
1
21,513
Simd
79,610,172
2,526,586
Python Flask to log in different locations
<p>I have a Python Flask app that does <strong>3 things</strong> in one app:</p> <ul> <li>Serve as traditional monolithic web app with HTML, JS and CSS</li> <li>Serve RESTful API endpoints</li> <li>Run scheduled jobs using APScheduler</li> </ul> <p>I have two separate blueprints for serving as traditional web app and API.</p> <p>In my app.py (starting point of the app), I have something like this:</p> <pre><code>import Flask from logging.handlers import RotatingFileHandler app = Flask(__name__) # Register blueprints from .main import html_routes as html_blueprint app.register_blueprint(html_blueprint.blueprint) main_handler = RotatingFileHandler( '/logs/my_app/main/...', ... ) from .main import api_v10_routes as api_v10_blueprint app.register_blueprint(api_v10_blueprint.blueprint, url_prefix='/api/v1.0') api_handler = RotatingFileHandler( '/logs/my_app/api/...', ... ) from .main.scheduled_jobs import ScheduledJobs ScheduledJobs.init(app) scheduled_jobs_handler = RotatingFileHandler( '/logs/my_app/scheduled_jobs/...', ... ) </code></pre> <p>The three of them share common modules/components, for example a DAO-like class.</p> <p>My problem is to separate logging to three different locations. For example, I want the app to log to <code>/logs/my_app/main/...</code> when it is serving as monolithic web app, log to <code>/logs/my_app/api/...</code> when it is serving API endpionts, and <code>/logs/my_app/scheduled_jobs/...</code> when it is running scheduled jobs.</p> <p><strong>My question:</strong> How can I set up separate logging location for each of them? Bear in mind that I want the shared modules to use the correct logging location based on which initiator they are called by.</p> <p><em>Or if I rephrase my question:</em> How do the modules know which initiator call them and know which logger to use? i.e. how do the modules access some sort of logging context?</p> <hr /> <p>For an example, I have something like this in the scheduled_jobs.py:</p> <pre><code>import logging from flask_apscheduler import APScheduler scheduler = APScheduler() logger = logging.getLogger(__name__) logger.addHandler(scheduled_jobs_handler) class ScheduledJobs: app = None @classmethod def init(cls, app): cls.app = app scheduler.init_app(app) logger.debug('Starting scheduler...') scheduler.start() @scheduler.task('cron', id='do_something_job', day_of_week='wed', hour='0', minute='30') def do_something_job(): # ... start a new thread to do the following... logger.info('Populate some data...') person_data = {...} from ..service.person import PersonService PersonService.populate_person_data(person_data) </code></pre> <p>Then in person.py, imagine I have something like this:</p> <pre><code>import logging from ... import PersonDAO class PersonService: logger = logger.getLogger(__name__) @classmethod def populate_person_data(cls, person_data): cls.logger.info('Populate...') PersonDAO.insert(person_data) </code></pre> <p>Then finally for PersonDAO:</p> <pre><code>import logging class PersonDAO: logger = logger.getLogger(__name__) @classmethod def insert(cls, person_data): cls.logger.info('Writing to database') ... write to database ... </code></pre> <p><code>PersonService</code> and <code>PersonDAO</code> may also be called by the blueprint functions.</p> <p>As you can see, logging is all over the place. 
It will be difficult for the controllers/scheduled_jobs to pass down logger location to each class/functions in the call chain.</p> <hr /> <p><em>My thoughts on different possible options:</em></p> <p>I have previously thought of putting logging setting on <code>g</code> from the Flask framework, but that only works for request-based operation. It doesn't work for non-request-based scheduled_jobs.</p> <p>I have also thought of creating some sort of static/singleton class to manage which location should be used at different time being - The initiator decides which location should be used by all classed/functions from the point the initiator is being triggered. However, this does not work because scheduled_jobs start new threads to do non-blocking work and while this work is being run, there may be async requests to the web app to process concurrently.</p>
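<p>For reference, the direction I am currently leaning towards (a sketch only; the thread-propagation part is not solved here): a <code>contextvars</code>-based &quot;channel&quot; that each entry point sets, plus one handler per channel with a filter that only accepts records from its own channel. Worker threads started by the scheduled jobs would still need the channel set (or the context copied in) explicitly.</p> <pre><code>import contextvars
import logging
from logging.handlers import RotatingFileHandler

log_channel = contextvars.ContextVar('log_channel', default='main')

class ChannelFilter(logging.Filter):
    &quot;&quot;&quot;Let a record through only when the current channel matches this handler's channel.&quot;&quot;&quot;
    def __init__(self, channel):
        super().__init__()
        self.channel = channel

    def filter(self, record):
        return log_channel.get() == self.channel

root = logging.getLogger()  # root logger, so module-level loggers propagate to it
root.setLevel(logging.DEBUG)
for channel in ('main', 'api', 'scheduled_jobs'):
    handler = RotatingFileHandler(f'/logs/my_app/{channel}/app.log',
                                  maxBytes=10_000_000, backupCount=5)
    handler.addFilter(ChannelFilter(channel))
    root.addHandler(handler)

# each entry point calls e.g. log_channel.set('api') before any shared module logs
</code></pre>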
<python><flask><logging><apscheduler>
2025-05-07 09:14:58
1
1,342
user2526586
79,609,861
1,219,158
Azure Container Apps Jobs with Event Hubs integration is looping endlessly, even though there is no new events
<p>I am using Azure Container Apps Jobs with an event driven trigger through Azure Event Hubs with <code>blobMetadata</code> as the check point strategy. The job gets triggered as it should, the check point store gets updated by the job, as it should. The problem is that the jobs get triggered immediately after finishing in an endless loop, even though there ARE no new events. This I have confirmed through the logs of the job, for the first run, all events are logged, all succeeding events has no event logs at all.</p> <p>Here is the <code>eventTriggerConfig</code> of the job:</p> <pre class="lang-json prettyprint-override"><code>eventTriggerConfig: { parallelism: 1 replicaCompletionCount: 1 scale: { rules: [ { name: 'event-hub-trigger' type: 'azure-eventhub' auth: [ { secretRef: 'event-hub-connection-string' triggerParameter: 'connection' } { secretRef: 'storage-account-connection-string' triggerParameter: 'storageConnection' } ] metadata: { blobContainer: containerName checkPointStrategy: 'blobMetadata' consumerGroup: eventHubConsumerGroupName eventHubName: eventHubName connectionFromEnv: 'EVENT_HUB_CONNECTION_STRING' storageConnectionFromEnv: 'STORAGE_ACCOUNT_CONNECTION_STRING' activationUnprocessedEventThreshold: 1 unprocessedEventThreshold: 5 } } ] } } </code></pre> <p>Here is the Python based job logic:</p> <pre class="lang-py prettyprint-override"><code>import asyncio from datetime import datetime, timedelta, timezone import logging import os from azure.eventhub.aio import EventHubConsumerClient from azure.eventhub.extensions.checkpointstoreblobaio import BlobCheckpointStore from azure.identity.aio import DefaultAzureCredential BLOB_STORAGE_ACCOUNT_URL = os.getenv(&quot;BLOB_STORAGE_ACCOUNT_URL&quot;) BLOB_CONTAINER_NAME = os.getenv(&quot;BLOB_CONTAINER_NAME&quot;) EVENT_HUB_FULLY_QUALIFIED_NAMESPACE = os.getenv(&quot;EVENT_HUB_FULLY_QUALIFIED_NAMESPACE&quot;) EVENT_HUB_NAME = os.getenv(&quot;EVENT_HUB_NAME&quot;) EVENT_HUB_CONSUMER_GROUP = os.getenv(&quot;EVENT_HUB_CONSUMER_GROUP&quot;) logger = logging.getLogger(&quot;azure.eventhub&quot;) logging.basicConfig(level=logging.INFO) credential = DefaultAzureCredential() # Global variable to track the last event time last_event_time = None WAIT_DURATION = timedelta(seconds=30) async def on_event(partition_context, event): global last_event_time if event is not None: print( 'Received the event: &quot;{}&quot; from the partition with ID: &quot;{}&quot;'.format( event.body_as_str(encoding=&quot;UTF-8&quot;), partition_context.partition_id ) ) else: print(f&quot;Received a None event from partition ID: {partition_context.partition_id}&quot;) # Update the last event time last_event_time = datetime.now(timezone.utc) await partition_context.update_checkpoint(event) async def receive(): global last_event_time checkpoint_store = BlobCheckpointStore( blob_account_url=BLOB_STORAGE_ACCOUNT_URL, container_name=BLOB_CONTAINER_NAME, credential=credential, ) client = EventHubConsumerClient( fully_qualified_namespace=EVENT_HUB_FULLY_QUALIFIED_NAMESPACE, eventhub_name=EVENT_HUB_NAME, consumer_group=EVENT_HUB_CONSUMER_GROUP, checkpoint_store=checkpoint_store, credential=credential, ) # Initialize the last event time last_event_time = datetime.now(timezone.utc) async with client: # client.receive method is a blocking call, so we run it in a separate thread. 
receive_task = asyncio.create_task( client.receive( on_event=on_event, starting_position=&quot;-1&quot;, ) ) # Wait until no events are received for the specified duration while True: await asyncio.sleep(1) if datetime.now(timezone.utc) - last_event_time &gt; WAIT_DURATION: break # Close the client and the receive task await client.close() receive_task.cancel() try: await receive_task except asyncio.CancelledError: pass # Close credential when no longer needed. await credential.close() def run(): loop = asyncio.get_event_loop() loop.run_until_complete(receive()) </code></pre> <p>The job depends on <code>azure-eventhub-checkpointstoreblob-aio</code> (1.2.0) and <code>azure-identity</code> (1.21.0).</p>
<python><azure><azure-eventhub><keda><azure-container-app-jobs>
2025-05-07 05:50:50
2
1,961
Ganhammar
79,609,673
3,834,639
Worker was sent code 139! running Gunicorn + Chroma
<p>I have a Flask app that uses Gunicorn/Nginx + ChromaDB v1.0.8. I've had no issues with its functionality till today when I restarted the service, to which I would receive</p> <pre><code>May 06 19:33:34 cluster gunicorn[6804]: 2025-05-06 19:33:34.413 | INFO | app:insert:79 - insert: querying chroma for nearest embeddings May 06 19:33:34 cluster gunicorn[6802]: [2025-05-06 19:33:34 +0000] [6802] [ERROR] Worker (pid:6804) was sent code 139! May 06 19:33:34 cluster gunicorn[6852]: [2025-05-06 19:33:34 +0000] [6852] [INFO] Booting worker with pid: 6852 </code></pre> <p>This occurs when an endpoint is called to embed data into the vector database (Chroma), specifically when I go to query the collection for an embeddding</p> <pre><code>logger.info(&quot;insert: querying chroma for nearest embeddings&quot;) query_result = collection.query(query_embeddings=[embedding], n_results=5) distances = query_result[&quot;distances&quot;][0] metadatas_raw = query_result[&quot;metadatas&quot;][0] logger.info(f&quot;insert: query returned {len(distances)} distances&quot;) </code></pre> <p>I've struggled to isolate the issue as ~1 out of 20 restarts will work, with the rest failing. I run Gunicorn with</p> <pre><code>ExecStart=/var/www/cluster/.venv/bin/gunicorn -k gthread --threads 4 --workers 1 --timeout 60 --bind unix:/var/www/cluster/cluster.sock -m 007 &quot;app:app&quot; </code></pre> <p>I can access the DB directly via the Python interpreter so the issue does not appear to be the database itself. Any ideas on what might be causing this?</p>
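<p>For reference, my current understanding (please correct me if this is wrong): exit code 139 is 128 + 11, i.e. the worker received SIGSEGV, so this looks like a crash in native code (possibly inside Chroma's compiled dependencies) rather than a Python exception. A cheap diagnostic is <code>faulthandler</code>, enabled early in the app, which prints the Python stack when the segfault happens:</p> <pre><code>import faulthandler
faulthandler.enable()  # dump the Python traceback to stderr on SIGSEGV
</code></pre>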
<python><flask><nginx><gunicorn><chromadb>
2025-05-07 01:09:21
0
1,078
idris
79,609,620
18,002,913
Can we make future predictions using Orange Tool (time series forecasting)?
<p>I've been working on time series forecasting using models like LSTM, ARIMA, and Prophet. Recently, I came across Orange Tool, which seems very user-friendly and powerful for visual, low-code machine learning workflows. However, I couldn't find much documentation on how to perform future predictions with Orange β€” especially for time series data.</p> <p>I'd like to use Orange to forecast future values, such as temperature and precipitation for upcoming years based on historical data.</p> <p>(The dataset contains daily values for parameters like T2M, T2M_MAX, T2M_MIN, PRECTOTCORR, PET, Scaled_PRECTOTCORR)</p> <p>The dataset ends on 2025-04-20, and my goal is to predict the next 5 years for each parameter listed above.</p> <p><strong>Dataset that I use with example values:</strong></p> <p><a href="https://i.sstatic.net/oJqI7gA4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oJqI7gA4.png" alt="enter image description here" /></a></p> <p><strong>My questions:</strong></p> <p>Is it possible to use Orange Tool for forecasting future values in time series data?</p> <p>If yes, how should I structure my dataset for prediction?</p> <p>Do I need to generate future dates manually and leave target columns empty?</p> <p>Or should I transform the time series into a supervised format (e.g., lag features)?</p> <p>Are there any widgets or add-ons in Orange that support forecasting or time series modeling?</p> <p>I'd really appreciate any guidance, best practices, or examples on how to make future predictions using Orange. Thank you in advance!</p>
<python><time-series><orange>
2025-05-06 23:20:37
0
1,298
NewPartizal
79,609,493
13,112,929
Launch Celery Task from Database Object Method
<p>In Miguel Grinberg's <a href="https://blog.miguelgrinberg.com/post/the-flask-mega-tutorial-part-i-hello-world" rel="nofollow noreferrer">Flask Mega-Tutorial</a>, <a href="https://blog.miguelgrinberg.com/post/the-flask-mega-tutorial-part-xxii-background-jobs" rel="nofollow noreferrer">chapter 22</a>, he creates a generic <code>launch_task</code> method as a part of the <code>User</code> object model. His tutorial uses rq and I am trying to do something similar with celery. Here's the relevant code from his tutorial:</p> <pre><code>class User(UserMixin, db.Model): # ... def launch_task(self, name, description, *args, **kwargs): rq_job = current_app.task_queue.enqueue(f'app.tasks.{name}', self.id, *args, **kwargs) task = Task(id=rq_job.get_id(), name=name, description=description, user=self) db.session.add(task) return task </code></pre> <p>And here's an example of how it's called:</p> <pre><code>@bp.route('/export_posts') @login_required def export_posts(): if current_user.get_task_in_progress('export_posts'): flash(_('An export task is currently in progress')) else: current_user.launch_task('export_posts', _('Exporting posts...')) db.session.commit() return redirect(url_for('main.user', username=current_user.username)) </code></pre> <p>Where <code>export_posts</code> is a function defined in a <code>tasks.py</code> file:</p> <pre><code>def export_posts(user_id): try: # read user posts from database # send email with data to user except Exception: # handle unexpected errors finally: # handle clean up </code></pre> <p>How would you convert the <code>launch_task</code> method to use celery instead of rq?</p>
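<p>For reference, my current draft of the Celery version (untested; it assumes a <code>celery</code> app instance is importable, e.g. the one created in the application factory):</p> <pre><code>class User(UserMixin, db.Model):
    # ...
    def launch_task(self, name, description, *args, **kwargs):
        # send_task dispatches by task name, mirroring rq's enqueue-by-name style
        result = celery.send_task(f'app.tasks.{name}', args=(self.id, *args), kwargs=kwargs)
        task = Task(id=result.id, name=name, description=description, user=self)
        db.session.add(task)
        return task
</code></pre>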
<python><flask><celery><rq>
2025-05-06 21:12:44
1
477
setty
79,609,474
11,515,528
polars - no attribute 'extract_many'
<p>I am trying this in Polars 0.20.23 but getting an error.</p> <pre><code>import polars as pl # Sample data data = { &quot;text&quot;: [ &quot;Year: 2020, Month: January&quot;, &quot;Year: 2021, Month: February&quot;, &quot;Year: 2022, Month: March&quot;, &quot;Year: 2023, Month: April&quot; ] } # Create a Polars DataFrame df = pl.DataFrame(data) # Define a regular expression pattern to extract year and month pattern = r&quot;Year: (\d{4}), Month: (\w+)&quot; # Use extract_many to extract year and month df_extracted = df.with_columns( pl.col(&quot;text&quot;).str.extract_many(pattern).alias(&quot;year_month&quot;) ) # Display the DataFrame with extracted values print(df_extracted) </code></pre> <p>The error:</p> <pre><code>AttributeError: 'ExprStringNameSpace' object has no attribute 'extract_many' </code></pre> <p>What am I doing wrong?</p>
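<p>For reference, the underlying goal is just to pull out the two capture groups; I believe <code>str.extract</code> with a group index has been available for a while, so a fallback sketch (not yet tested on 0.20.23) would be:</p> <pre><code>df_extracted = df.with_columns(
    pl.col(&quot;text&quot;).str.extract(pattern, 1).alias(&quot;year&quot;),
    pl.col(&quot;text&quot;).str.extract(pattern, 2).alias(&quot;month&quot;),
)
</code></pre>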
<python><dataframe><python-polars>
2025-05-06 20:54:59
1
1,865
Cam
79,609,420
5,908,253
Using a Polars series as input for Scikit Learn TfidfVectorizer
<p>We are looking into adding polars support to string_grouper (<a href="https://github.com/Bergvca/string_grouper" rel="nofollow noreferrer">https://github.com/Bergvca/string_grouper</a>). To make this work, as a first step we should be able to run a TfIdfVectorizor on a Polars series. On the polars website it seems that this is supported: <a href="https://docs.pola.rs/user-guide/ecosystem/#machine-learning" rel="nofollow noreferrer">https://docs.pola.rs/user-guide/ecosystem/#machine-learning</a></p> <p>However when i try it i receive an error. Am I doing something wrong or is it really not supported.</p> <pre class="lang-py prettyprint-override"><code>from sklearn.feature_extraction.text import TfidfVectorizer company_names = df.select(pl.col('Company Name')) vectorizer = TfidfVectorizer(min_df=1) tf_idf_matrix = vectorizer.fit_transform(company_names) </code></pre> <p>Where <code>company_names</code> is:</p> <pre><code>print(company_names) shape: (663_000, 1) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ Company Name β”‚ β”‚ --- β”‚ β”‚ str β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•‘ β”‚ !J INC β”‚ β”‚ #1 A LIFESAFER HOLDINGS, INC. β”‚ β”‚ #1 ARIZONA DISCOUNT PROPERTIES… β”‚ β”‚ #1 PAINTBALL CORP β”‚ β”‚ $ LLC β”‚ </code></pre> <p>Results in the following error:</p> <pre class="lang-py prettyprint-override"><code>--------------------------------------------------------------------------- AttributeError Traceback (most recent call last) Cell In[8], line 1 ----&gt; 1 tf_idf_matrix = vectorizer.fit_transform(company_names) File ~/dev/python/sg_test/venv/lib/python3.12/site-packages/sklearn/feature_extraction/text.py:2104, in TfidfVectorizer.fit_transform(self, raw_documents, y) 2097 self._check_params() 2098 self._tfidf = TfidfTransformer( 2099 norm=self.norm, 2100 use_idf=self.use_idf, 2101 smooth_idf=self.smooth_idf, 2102 sublinear_tf=self.sublinear_tf, 2103 ) -&gt; 2104 X = super().fit_transform(raw_documents) 2105 self._tfidf.fit(X) 2106 # X is already a transformed view of raw_documents so 2107 # we set copy to False File ~/dev/python/sg_test/venv/lib/python3.12/site-packages/sklearn/base.py:1389, in _fit_context.&lt;locals&gt;.decorator.&lt;locals&gt;.wrapper(estimator, *args, **kwargs) 1382 estimator._validate_params() 1384 with config_context( 1385 skip_parameter_validation=( 1386 prefer_skip_nested_validation or global_skip_validation 1387 ) 1388 ): -&gt; 1389 return fit_method(estimator, *args, **kwargs) File ~/dev/python/sg_test/venv/lib/python3.12/site-packages/sklearn/feature_extraction/text.py:1376, in CountVectorizer.fit_transform(self, raw_documents, y) 1368 warnings.warn( 1369 &quot;Upper case characters found in&quot; 1370 &quot; vocabulary while 'lowercase'&quot; 1371 &quot; is True. 
These entries will not&quot; 1372 &quot; be matched with any documents&quot; 1373 ) 1374 break -&gt; 1376 vocabulary, X = self._count_vocab(raw_documents, self.fixed_vocabulary_) 1378 if self.binary: 1379 X.data.fill(1) File ~/dev/python/sg_test/venv/lib/python3.12/site-packages/sklearn/feature_extraction/text.py:1263, in CountVectorizer._count_vocab(self, raw_documents, fixed_vocab) 1261 for doc in raw_documents: 1262 feature_counter = {} -&gt; 1263 for feature in analyze(doc): 1264 try: 1265 feature_idx = vocabulary[feature] File ~/dev/python/sg_test/venv/lib/python3.12/site-packages/sklearn/feature_extraction/text.py:104, in _analyze(doc, analyzer, tokenizer, ngrams, preprocessor, decoder, stop_words) 102 else: 103 if preprocessor is not None: --&gt; 104 doc = preprocessor(doc) 105 if tokenizer is not None: 106 doc = tokenizer(doc) File ~/dev/python/sg_test/venv/lib/python3.12/site-packages/sklearn/feature_extraction/text.py:62, in _preprocess(doc, accent_function, lower) 43 &quot;&quot;&quot;Chain together an optional series of text preprocessing steps to 44 apply to a document. 45 (...) 59 preprocessed string 60 &quot;&quot;&quot; 61 if lower: ---&gt; 62 doc = doc.lower() 63 if accent_function is not None: 64 doc = accent_function(doc) AttributeError: 'Series' object has no attribute 'lower' </code></pre>
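<p>A detail I noticed while debugging: iterating a one-column Polars DataFrame yields the column as a Series object rather than individual strings, which matches the <code>'Series' object has no attribute 'lower'</code> error. Passing the column as a Series, or converting it to a plain list, may already be enough (a sketch, untested at the 663k-row scale):</p> <pre class="lang-py prettyprint-override"><code>company_names = df.get_column(&quot;Company Name&quot;)  # a Series, not a 1-column DataFrame
tf_idf_matrix = vectorizer.fit_transform(company_names.to_list())
</code></pre>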
<python><python-polars>
2025-05-06 20:10:01
0
337
Chris van den Berg
79,609,395
11,515,528
Polars extract_all get the last item
<p>so this works in polars</p> <pre><code>path = ['some text 2020', '2021 text 2020', 'etxt 2022', '2023 text 2022'] names = [&quot;Alice&quot;, &quot;Bob&quot;, &quot;Charlie&quot;, &quot;David&quot;] df = pl.DataFrame({ &quot;path&quot;: path, &quot;name&quot;: names }) df year = ['2019', '2020', '2021', '2022', '2023'] pattern = &quot;((?i)&quot; + '|'.join(year) + &quot;)&quot; df.select(pl.col('path').str.extract(pattern)) </code></pre> <p>output</p> <pre><code>shape: (4, 1) path str &quot;2020&quot; &quot;2021&quot; &quot;2022&quot; &quot;2023&quot; </code></pre> <p>But now I want to extract_all and get the last item from the list it returns. This is not working.</p> <pre><code>year = ['2019', '2020', '2021', '2022', '2023'] pattern = &quot;((?i)&quot; + '|'.join(year) + &quot;)&quot; df_new = df_new.with_columns(pl.col('path').str.extract_all(pattern).arr.last()).alias('year') </code></pre> <p>error</p> <pre><code>SchemaError: invalid series dtype: expected `FixedSizeList`, got `list[str]` </code></pre> <p>What am I doing wrong? Is <code>.arr.last()</code> wrong?</p> <p><strong>edit</strong></p> <p>so thanks to ouroboros1 this works.</p> <pre><code>path = ['some text 2020', '2021 text 2020', 'etxt 2022', '2023 text 2022'] names = [&quot;Alice&quot;, &quot;Bob&quot;, &quot;Charlie&quot;, &quot;David&quot;] df = pl.DataFrame({ &quot;path&quot;: path, &quot;name&quot;: names }) df year = ['2019', '2020', '2021', '2022', '2023'] pattern = &quot;((?i)&quot; + '|'.join(year) + &quot;)&quot; df.select(pl.col('path').str.extract_all(pattern).list.last().alias('year')) </code></pre> <p>output</p> <pre><code>shape: (4, 1) year str &quot;2020&quot; &quot;2020&quot; &quot;2022&quot; &quot;2022&quot; </code></pre>
<python><python-polars>
2025-05-06 19:49:37
0
1,865
Cam
79,609,252
4,175,822
How can I define a protocol for a class that contains a protocol field and use a dataclass implementer?
<p>I want to do this:</p> <pre><code>import typing import dataclasses class ConfigProtocol(typing.Protocol): a: str @dataclasses.dataclass class Config: a: str class HasConfigProtocol(typing.Protocol): config: ConfigProtocol @dataclasses.dataclass class HasConfig: config: Config def accepts_hasconfig(h: HasConfigProtocol): pass accepts_hasconfig(HasConfig(config=Config(a=&quot;&quot;))) </code></pre> <p>But in pylance it reports:</p> <pre><code>Argument of type &quot;Position&quot; cannot be assigned to parameter &quot;template_position&quot; of type &quot;PositionProtocol&quot; in function &quot;__init__&quot; Β Β &quot;Position&quot; is incompatible with protocol &quot;PositionProtocol&quot; Β Β Β Β &quot;config&quot; is invariant because it is mutable Β Β Β Β &quot;config&quot; is an incompatible type Β Β Β Β Β Β &quot;Configuration&quot; is incompatible with protocol &quot;ConfigurationProtocol&quot; </code></pre> <p>To get this to work do I need to freeze my dataclasses and provide read only access in my protocols like so?</p> <pre><code>class ConfigProtocol(typing.Protocol): @property def a(self) -&gt; str: ... @dataclasses.dataclass(frozen=True) class Config: a: str class HasConfigProtocol(typing.Protocol): @property def config(self) -&gt; ConfigProtocol: ... @dataclasses.dataclass(frozen=True) class HasConfig: config: Config def accepts_hasconfig(h: HasConfigProtocol): pass accepts_hasconfig(HasConfig(config=Config(a=&quot;&quot;))) </code></pre> <p>Long term this is the code in one module, module_a which is stand alone. There is another module_b which also exists, has similar code and is stand alone. I want to use instances of HasConfig from module_a in module_b. Because the modules do not know about eachother, I am including the same protocol definition in each of them.</p> <p>To reuse protocols across modules and have instances work for both of them, the protocol definitions must be the same, and the Protocol definitions only must use @property definitions.</p>
<python><python-dataclasses>
2025-05-06 18:01:40
1
2,821
spacether
79,609,220
1,635,909
Documenting a script step by step with Sphinx
<p>I am documenting a Python library with Sphinx.</p> <p>I have a couple of example scripts which I'd like to document in a narrative way, something like this:</p> <pre><code>#: Import necessary package and define :meth:`make_grid` import numpy as np def make_grid(a,b): &quot;&quot;&quot; Make a grid for constant by piece functions &quot;&quot;&quot; x = np.linspace(0,np.pi) xmid = (x[:-1]+x[1:])/2 h = x[1:]-x[:-1] return xmid,h #: Interpolate a function xmid,h = make_grid(0,np.pi) y = np.sin(xmid) #: Calculate its integral I = np.sum(y*h) print (&quot;Result %g&quot; % I ) </code></pre> <p>Those scripts should remain present as executable scripts in the repository, and I want to avoid duplicating their code into comments.</p> <p>I would like to generate the corresponding documentation, something like:</p> <p><a href="https://i.sstatic.net/XIYAQ2Dc.png" rel="noreferrer"><img src="https://i.sstatic.net/XIYAQ2Dc.png" alt="sphinx output" /></a></p> <p>Is there any automated way to do so? This would allow me not to duplicate the example script in the documentation. It seems to me this was the object of <a href="https://stackoverflow.com/questions/30964707/include-a-commented-python-script-into-a-sphinx-generated-documentation">this old question</a>, but in my hands the <code>viewcode</code> extension doesn't interpret the comments: it just produces an <code>html</code> page with the quoted code, and the comments remain plain comments.</p>
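<p>One possible direction to explore: the <code>sphinx-gallery</code> extension renders the example scripts themselves, alternating narrative comment blocks with code (and optionally their output), so nothing has to be duplicated. Scripts need a leading docstring, and narrative blocks are introduced with <code># %%</code> comment separators. A minimal sketch of the configuration; the directory names are assumptions:</p> <pre><code># conf.py -- minimal sphinx-gallery sketch; the directory names are assumptions.
extensions = [
    "sphinx_gallery.gen_gallery",
]

sphinx_gallery_conf = {
    "examples_dirs": "../examples",   # where the runnable example scripts live
    "gallery_dirs": "auto_examples",  # where the generated pages are written
}
</code></pre> <p>The scripts stay runnable as plain Python files; the docstring and the <code># %%</code> blocks are just comments when executed directly.</p>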
<python><python-sphinx>
2025-05-06 17:40:43
4
2,234
Joce
79,609,199
14,586,554
Numerically stable noncentral chi-squared distribution in torch?
<p>I need a numerically stable non-central chi-squared (chi2) distribution in torch.</p>
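<p>For reference, a sampling-only sketch (it does not address the harder part, a numerically stable log-pdf): a noncentral chi-squared variable with integer <code>df</code> degrees of freedom and noncentrality <code>nc</code> can be drawn as <code>(Z + sqrt(nc))**2</code> plus an ordinary chi-squared with <code>df - 1</code> degrees of freedom.</p> <pre><code>import torch
from torch.distributions import Chi2, Normal

def sample_noncentral_chi2(df: int, nc: float, shape=torch.Size()):
    # Sketch: valid for integer df &gt;= 1; the chi2 term is dropped when df == 1.
    z = Normal(0.0, 1.0).sample(shape)
    x = (z + torch.as_tensor(nc).sqrt()) ** 2
    if df &gt; 1:
        x = x + Chi2(torch.tensor(float(df - 1))).sample(shape)
    return x

samples = sample_noncentral_chi2(df=3, nc=2.5, shape=(10_000,))
print(samples.mean())  # expected value is df + nc = 5.5
</code></pre>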
<python><pytorch><statistics><numerical-methods>
2025-05-06 17:29:47
1
620
Kemsikov
79,608,877
8,458,083
Nixpacks build hangs when venv directory exists in Python project
<p>I'm using Nixpacks to build a Docker image from a Python project. The build works fine when my project directory does not contain a local virtual environment. However, if I create a virtual environment (e.g., with <code>python -m venv venv</code>), the build hangs indefinitely at the <code>Building Nixpacks image...</code> step.</p> <p>Steps to reproduce:</p> <p>Project structure before creating the virtual environment:</p> <pre class="lang-bash prettyprint-override"><code>. β”œβ”€β”€ main.py └── requirements.txt </code></pre> <p>Build command (works fine):</p> <pre class="lang-bash prettyprint-override"><code>sudo nixpacks build . -t image1 </code></pre> <p>Create a virtual environment:</p> <pre class="lang-bash prettyprint-override"><code>$ python -m venv venv </code></pre> <p>Now the directory structure is:</p> <pre class="lang-bash prettyprint-override"><code>. β”œβ”€β”€ main.py β”œβ”€β”€ requirements.txt └── venv/ </code></pre> <p>Run the same build command again:</p> <pre class="lang-bash prettyprint-override"><code>sudo nixpacks build . -t image1 </code></pre> <p>The build hangs at <code>Building Nixpacks image...</code> and never completes.</p> <p>Why does the presence of the venv directory cause Nixpacks to hang?</p> <p>Is there a recommended way to exclude the local venv directory from the build context, or configure Nixpacks to ignore it?</p> <p>Is this expected behavior, or am I missing a configuration step?</p>
<python><nixpacks>
2025-05-06 13:59:31
0
2,017
Pierre-olivier Gendraud
79,608,752
29,295,031
How to add space between bubbles and increase their size
<p>I have a bubble chart developed using the Plotly library; here’s the data:</p> <pre><code>import plotly.express as px import pandas as pd data = { &quot;lib_acte&quot;:[&quot;test 98lop1&quot;, &quot;test9665 opp1&quot;, &quot;test QSDFR1&quot;, &quot;test ABBE1&quot;, &quot;testtest21&quot;,&quot;test23&quot;], &quot;x&quot;:[12.6, 10.8, -1, -15.2, -10.4, 1.6], &quot;y&quot;:[15, 5, 44, -11, -35, -19], &quot;circle_size&quot;:[375, 112.5, 60,210, 202.5, 195], &quot;color&quot;:[&quot;green&quot;, &quot;green&quot;, &quot;green&quot;, &quot;red&quot;, &quot;red&quot;, &quot;red&quot;] } #load data into a DataFrame object: df = pd.DataFrame(data) fig = px.scatter( df, x=&quot;x&quot;, y=&quot;y&quot;, color=&quot;color&quot;, size='circle_size', text=&quot;lib_acte&quot;, hover_name=&quot;lib_acte&quot;, color_discrete_map={&quot;red&quot;: &quot;red&quot;, &quot;green&quot;: &quot;green&quot;}, title=&quot;chart&quot; ) fig.update_traces(textposition='middle right', textfont_size=14, textfont_color='black', textfont_family=&quot;Inter&quot;, hoverinfo=&quot;skip&quot;) newnames = {'red':'red title', 'green': 'green title'} fig.update_layout( { 'yaxis': { &quot;range&quot;: [-200, 200], 'zerolinewidth': 2, &quot;zerolinecolor&quot;: &quot;red&quot;, &quot;tick0&quot;: -200, &quot;dtick&quot;:45, }, 'xaxis': { &quot;range&quot;: [-200, 200], 'zerolinewidth': 2, &quot;zerolinecolor&quot;: &quot;gray&quot;, &quot;tick0&quot;: -200, &quot;dtick&quot;: 45, # &quot;scaleanchor&quot;: 'y' }, &quot;height&quot;: 800, } ) fig.add_scatter( x=[0, 0, -200, -200], y=[0, 200, 200, 0], fill=&quot;toself&quot;, fillcolor=&quot;gray&quot;, zorder=-1, mode=&quot;markers&quot;, marker_color=&quot;rgba(0,0,0,0)&quot;, showlegend=False, hoverinfo=&quot;skip&quot; ) fig.add_scatter( x=[0, 0, 200, 200], y=[0, -200, -200, 0], fill=&quot;toself&quot;, fillcolor=&quot;yellow&quot;, zorder=-1, mode=&quot;markers&quot;, marker_color=&quot;rgba(0,0,0,0)&quot;, showlegend=False, hoverinfo=&quot;skip&quot; ) fig.update_layout( paper_bgcolor=&quot;#F1F2F6&quot;, ) fig.show() </code></pre> <p>Output:</p> <p><a href="https://i.sstatic.net/9BLQjlKN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9BLQjlKN.png" alt="enter image description here" /></a></p> <p>Now what I’m looking for is a way to add space between bubbles when they are tight together, like test 98lop1 and test9665 opp1, and also a way to increase each bubble's size by 4%, for example. Thanks for your help.</p>
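<p>Two ideas, offered as a sketch rather than a tested answer: in <code>px.scatter</code> the rendered bubble size is driven by <code>size_max</code> (the pixel size of the largest bubble, default 20), so raising it grows every bubble proportionally, and tight pairs can be pushed apart with a naive nudge of the data before plotting. The <code>min_gap</code> value below is an arbitrary illustration.</p> <pre><code>import numpy as np

# Push apart any two points that are closer than min_gap (in data units).
min_gap = 15  # tune to taste
xy = df[["x", "y"]].to_numpy()
for i in range(len(xy)):
    for j in range(i + 1, len(xy)):
        if np.hypot(*(xy[i] - xy[j])) &lt; min_gap:
            df.loc[df.index[i], "x"] -= min_gap / 2
            df.loc[df.index[j], "x"] += min_gap / 2

fig = px.scatter(
    df, x="x", y="y", color="color", size="circle_size",
    text="lib_acte", hover_name="lib_acte",
    color_discrete_map={"red": "red", "green": "green"},
    title="chart",
    size_max=20.8,  # default is 20, so roughly 4% larger bubbles across the board
)
# ...then apply the same update_traces / update_layout / add_scatter calls as above
</code></pre>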
<python><pandas><plotly>
2025-05-06 12:53:00
1
401
user29295031
79,608,713
2,026,659
Getting 'Could not install packages due to an OSError' when installing Python packages in GitLab pipeline
<p>I'm trying to install a few pip packages in a GitLab pipeline job in order to run a Python script. The image used to run the pipeline job is a custom Python image using Python version 3.12 and with Ubuntu 25.04 and is set to a non-root user python. When the pipeline job runs I have a command to active the Python virtual environment then I install the packages. I get an error when trying to run the <code>pip install</code> command.</p> <p>This is the job output where the error occurs:</p> <pre><code>$ source /home/python/venv/bin/activate # collapsed multi-line command /home/python/venv/bin/python Collecting openpyxl Downloading openpyxl-3.1.5-py2.py3-none-any.whl.metadata (2.5 kB) Collecting python-dateutil Downloading python_dateutil-2.9.0.post0-py2.py3-none-any.whl.metadata (8.4 kB) Collecting et-xmlfile (from openpyxl) Downloading et_xmlfile-2.0.0-py3-none-any.whl.metadata (2.7 kB) Collecting six&gt;=1.5 (from python-dateutil) Downloading six-1.17.0-py2.py3-none-any.whl.metadata (1.7 kB) Downloading openpyxl-3.1.5-py2.py3-none-any.whl (250 kB) Downloading python_dateutil-2.9.0.post0-py2.py3-none-any.whl (229 kB) Downloading six-1.17.0-py2.py3-none-any.whl (11 kB) Downloading et_xmlfile-2.0.0-py3-none-any.whl (18 kB) Installing collected packages: six, et-xmlfile, python-dateutil, openpyxl ERROR: Could not install packages due to an OSError: [Errno 13] Permission denied: '/home/python/venv/lib/python3.13/site-packages/six.py' Check the permissions. </code></pre> <p>I've tried adding the <code>--user</code> flag to the pip install command but then I get a different error: <code>ERROR: Can not perform a '--user' install. User site-packages are not visible in this virtualenv.</code></p> <p>I think the permissions error has to do with the Python image is not running as root. I can't figure out how to give the user python enough permissions to install the pip packages. Any help would be appreciated.</p>
<python><python-3.x><gitlab-ci>
2025-05-06 12:28:14
1
2,641
mdailey77
79,608,662
13,806,869
How can I upgrade Pandas via pip?
<p>I currently have Pandas 1.4.2 installed. I want to update to the latest version, so I entered this into the command prompt:</p> <blockquote> <p>pip install -U pandas</p> </blockquote> <p>However, it just returns this message:</p> <blockquote> <p>Requirement already satisfied: pandas in c:\users\my_username\appdata\local\programs\python\python39\lib\site-packages (1.4.2)</p> </blockquote> <p>Which sounds like it won't install the new version because I already have Pandas installed. But I thought the -U specifies that you want to update an existing package?</p> <p>Where am I going wrong? I'm using pip 22.0.4 and Python 3.9.12 if that helps.</p>
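<p>One common cause of this symptom (an assumption, not something the output above proves) is that the <code>pip</code> on PATH belongs to a different Python installation than the one that ends up importing pandas. A sketch that removes the ambiguity by driving pip from a specific interpreter:</p> <pre><code>import subprocess
import sys

# Upgrade pandas with the pip module of *this* interpreter, so the new version
# lands in the same site-packages that `import pandas` will use.
subprocess.check_call([sys.executable, "-m", "pip", "install", "--upgrade", "pandas"])

import pandas as pd
print(pd.__version__)
</code></pre> <p>On the command line, the equivalent is <code>python -m pip install --upgrade pandas</code>, run with the same <code>python</code> you use for your scripts.</p>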
<python><pandas><pip>
2025-05-06 12:01:35
2
521
SRJCoding
79,608,369
4,865,723
Bars not fitting to X-axis ticks in a Seaborn distplot
<p>I generate the figure below with <code>seaborn.displot()</code>. My problem is that the X-axis ticks do not line up with the bars in all cases. I would expect the bars to sit on the ticks, as you can see at 11 and 15.</p> <p><a href="https://i.sstatic.net/BOCA6W8z.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BOCA6W8z.png" alt="seaborn distplot" /></a></p> <p>This is the MWE:</p> <pre><code>import numpy as np import pandas as pd import seaborn as sns # Data np.random.seed(42) n = 5000 df = pd.DataFrame({ 'PERSON': np.random.randint(100000, 999999, n), 'Fruit': np.random.choice(['Banana', 'Strawberry'], n), 'Age': np.random.randint(9, 18, n) }) fig = sns.displot( data=df, x='Age', hue='Fruit', multiple='dodge').figure fig.show() </code></pre>
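<p>Since <code>Age</code> only takes integer values here, one likely fix (assuming the real data is also integer-valued) is to tell seaborn the variable is discrete, so the bars are centred on the integers instead of on arbitrary bin edges:</p> <pre><code>fig = sns.displot(
    data=df,
    x='Age',
    hue='Fruit',
    multiple='dodge',
    discrete=True,   # one bin per integer age, centred on the tick
).figure
fig.show()
</code></pre>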
<python><pandas><seaborn>
2025-05-06 09:17:48
2
12,450
buhtz
79,608,254
2,037,570
Installing python3.10 on debian:stretch-slim image?
<p>I have a Dockerfile where the base image is set to debian:stretch-slim. The default version of python3 in this image is python3.5.</p> <p>FROM debian:stretch-slim as base-os</p> <p>.... ....</p> <pre><code>RUN python3 -V RUN which python3 </code></pre> <p>Output is:</p> <pre><code>#35 [linux/amd64 base-os 14/18] RUN python3 -V #35 0.132 Python 3.5.3 </code></pre> <p>I am trying to make Python 3.10 available in this image and have tried a couple of things, but they are not working out so far.</p> <p>Approach 1: manually download and build Python 3.10</p> <pre><code> RUN wget https://www.python.org/ftp/python/3.10.0/Python-3.10.0.tgz RUN tar -xzf Python-3.10.0.tgz RUN cd Python-3.10.0 &amp;&amp; ./configure --prefix=/usr/local \ --enable-optimizations \ --with-lto \ --with-dbmliborder=bdb:gdbm \ --with-ensurepip \ 'CFLAGS=-g -fstack-protector-strong -Wformat -Werror=format-security' \ LDFLAGS=&quot;-Wl,-z,relro&quot; \ CPPFLAGS=-D_FORTIFY_SOURCE=2 &amp;&amp; make -j 4 &amp;&amp; make altinstall RUN rm -rf Python-3.10.0.tgz RUN rm -rf Python-3.10.0 </code></pre> <p>Approach 1 errors out with:</p> <pre><code>#44 255.8 Traceback (most recent call last): #44 255.8 File &quot;&lt;frozen zipimport&gt;&quot;, line 570, in _get_decompress_func #44 255.8 ModuleNotFoundError: No module named 'zlib' </code></pre> <p>Installing zlib again gives more errors and ends up in a never-ending problem.</p> <p>I have read through some links and it looks like many people have faced this problem, but no clear solution is listed anywhere. Can someone shed some light on what is needed for Python 3.10?</p> <p>I have already listed the approaches that I tried.</p>
<python><docker><dockerfile><debian>
2025-05-06 08:11:00
0
3,645
Hemant Bhargava
79,608,184
11,769,133
Wrong column assignment with np.genfromtxt if passed column order is not the same as in file
<p>This problem appeared in some larger code, but I will give a simple example:</p> <pre class="lang-py prettyprint-override"><code>from io import StringIO import numpy as np example_data = &quot;A B\na b\na b&quot; data1 = np.genfromtxt(StringIO(example_data), usecols=[&quot;A&quot;, &quot;B&quot;], names=True, dtype=None) print(data1[&quot;A&quot;], data1[&quot;B&quot;]) # ['a' 'a'] ['b' 'b'] which is correct data2 = np.genfromtxt(StringIO(example_data), usecols=[&quot;B&quot;, &quot;A&quot;], names=True, dtype=None) print(data2[&quot;A&quot;], data2[&quot;B&quot;]) # ['b' 'b'] ['a' 'a'] which is not correct </code></pre> <p>As you can see, if the passed column order differs from the column order in the file, I get wrong results. What's interesting is that the <code>dtype</code>s are the same:</p> <pre class="lang-py prettyprint-override"><code>print(data1.dtype) # [('A', '&lt;U1'), ('B', '&lt;U1')] print(data2.dtype) # [('A', '&lt;U1'), ('B', '&lt;U1')] </code></pre> <p>In this example it's not hard to sort the column names before passing them, but in my case the column names come from some other part of the system and it's not guaranteed that they will be in the same order as those in the file. I can probably work around that, but I'm wondering if there is something wrong with my logic in this example or whether this is some kind of bug.</p> <p>Any help is appreciated.</p> <p><strong>Update:</strong><br /> What I just realized while playing around a bit is the following: if I add one or more columns to the example data (it doesn't matter where) and pass a <strong>subset</strong> of the columns to <code>np.genfromtxt</code> in whichever order I want, it gives the correct result.</p> <p>Example:</p> <pre class="lang-py prettyprint-override"><code>example_data = &quot;A B C\na b c\na b c&quot; data1 = np.genfromtxt(StringIO(example_data), usecols=[&quot;A&quot;, &quot;B&quot;], names=True, dtype=None) print(data1[&quot;A&quot;], data1[&quot;B&quot;]) # ['a' 'a'] ['b' 'b'] which is correct data2 = np.genfromtxt(StringIO(example_data), usecols=[&quot;B&quot;, &quot;A&quot;], names=True, dtype=None) print(data2[&quot;A&quot;], data2[&quot;B&quot;]) # ['a' 'a'] ['b' 'b'] which is correct </code></pre>
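<p>A defensive workaround, sketched without claiming to explain the underlying behaviour: read the header first and reorder the incoming names to match the file before calling <code>np.genfromtxt</code>, so the two orders can never disagree.</p> <pre><code>from io import StringIO

import numpy as np

example_data = "A B\na b\na b"
wanted = ["B", "A"]  # order as it arrives from the other part of the system

header = example_data.splitlines()[0].split()          # ['A', 'B']
usecols = [name for name in header if name in wanted]  # reorder to file order

data = np.genfromtxt(StringIO(example_data), usecols=usecols, names=True, dtype=None)
print(data["A"], data["B"])  # ['a' 'a'] ['b' 'b']
</code></pre> <p>With a real file, the header line can be grabbed with a plain <code>readline()</code> before handing the path to <code>genfromtxt</code>.</p>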
<python><numpy><genfromtxt>
2025-05-06 07:24:24
2
1,142
Milos Stojanovic
79,608,124
1,713,297
Multiple time series chart in altair with selectable time series
<p>I want to make a chart in altair with the following properties:</p> <ul> <li>It shows multiple time series</li> <li>I can select which time series are displayed by clicking on the legend</li> <li>If a series is unselected in the legend, then it is not displayed in the plot</li> <li>If a series is unselected in the legend, then you can still see it in the legend</li> <li>Colours of lines remain consistent, no matter which time series are selected or unselected</li> </ul> <p>An LLM supplied me with the following code which doesn't do what I want</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd import altair as alt dates = pd.date_range(start=&quot;2023-01-01&quot;, periods=10) data = { &quot;Date&quot;: dates.tolist() * 3, &quot;Value&quot;: [10, 20, 15, 30, 25, 35, 40, 45, 50, 55] + [5, 15, 10, 20, 15, 25, 30, 35, 40, 45] + [2, 12, 8, 18, 14, 22, 28, 32, 38, 42], &quot;Device&quot;: [&quot;Device A&quot;] * 10 + [&quot;Device B&quot;] * 10 + [&quot;Device C&quot;] * 10 } df = pd.DataFrame(data) # Create a selection object for the legend aselection = alt.selection_multi(fields=[&quot;Device&quot;], bind=&quot;legend&quot;) # Create the Altair chart achart = alt.Chart(df).mark_line().encode( x=&quot;Date:T&quot;, y=&quot;Value:Q&quot;, color=&quot;Device:N&quot; # Keep the color consistent ).transform_filter( aselection # Filter data based on the selection ).add_selection( aselection ).properties( title=&quot;Interactive Time Series Plot&quot; ) achart </code></pre> <p>It seems that the <code>transform_filter</code> makes an unselected series vanish from the chart, but this also forces it to vanish from the legend. I want it to vanish from the chart but stay in the legend.</p> <p>Alternatively, I have the following code</p> <pre class="lang-py prettyprint-override"><code>chart = alt.Chart(df).mark_line().encode( x=&quot;Date:T&quot;, y=&quot;Value:Q&quot;, color=&quot;Device:N&quot;, # Keep the color consistent opacity=alt.condition(aselection, alt.value(1), alt.value(0.2)) # Hide unselected lines ).add_selection( aselection ).properties( title=&quot;Interactive Time Series Plot&quot; ) </code></pre> <p>Now the legend works in the way I want (you can select multiple series by shift-clicking) but an unselected time series is still displayed on the chart, just in a paler colour. If I make the chart interactive, then the end user will often end up clicking on an unselected series by accident, so this is definitely not what I want either.</p> <p>Edit: the following almost works, but the problem is that the user can still affect the selection by clicking on the plot. Is there a way to make the plot interactive but disable clicking?</p> <pre class="lang-py prettyprint-override"><code>achart = alt.Chart(df).mark_line().encode( x=&quot;Date:T&quot;, y=&quot;Value:Q&quot;, color=&quot;Device:N&quot;, # Keep the color consistent ).transform_filter( aselection ).add_selection(aselection) chart2 = alt.Chart(df).encode( x=&quot;Date:T&quot;, y=&quot;Value:Q&quot;, color=&quot;Device:N&quot;, # Keep the color consistent # Hide unselected lines ).mark_line(opacity=0).properties( title=&quot;Interactive Time Series Plot&quot; ) chart2 + achart.interactive() </code></pre> <p>Is there a way to do what I want?</p>
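<p>One variant of the opacity approach that may get closer (a sketch using the Altair 5 spellings; behaviour can differ between Altair/Vega-Lite versions): drop unselected lines to opacity 0 so they disappear from the chart while their legend entries stay visible, and never filter the data so the colours stay stable.</p> <pre><code>selection = alt.selection_point(fields=["Device"], bind="legend")

chart = alt.Chart(df).mark_line().encode(
    x="Date:T",
    y="Value:Q",
    color="Device:N",  # colour scale never changes because the data is never filtered
    opacity=alt.condition(selection, alt.value(1), alt.value(0)),  # hide, don't dim
).add_params(
    selection
).properties(
    title="Interactive Time Series Plot"
)
chart
</code></pre> <p>Whether fully transparent lines still catch stray clicks depends on the renderer version, so this is a starting point to verify rather than a guaranteed fix.</p>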
<python><pandas><altair>
2025-05-06 06:45:22
1
662
Flounderer
79,607,892
9,630,700
How can I save the list of packages used after running "uv run --with REQUIRED_PACKAGE MYSCRIPT.py"?
<p>Say I have a script MYSCRIPT.py that imports REQUIRED_PACKAGE. REQUIRED_PACKAGE itself could have other dependencies. How can I save the details of the packages downloaded (e.g. into a lock file) when I run <code>uv run --with REQUIRED_PACKAGE MYSCRIPT.py</code>?</p> <p>For a specific example, if I save the following in test.py, I can run it with <code>uv run --with xarray test.py</code> and it says 8 packages were installed. How can I know what these packages (and their versions) are?</p> <pre><code>import xarray print(xarray.__version__) </code></pre> <p>Update: Running <code>uv run</code> with the <code>--verbose</code> flag does provide information about the package used, but this is mixed in with other debugging information.</p>
<python><uv>
2025-05-06 02:36:20
1
328
chuaxr
79,607,851
1,276,664
Get global variable with dashes in Jinja2
<p>Is there a way to render the variable <code>b-c</code> (456) by changing only the <code>template</code> variable in the following code?</p> <pre class="lang-python prettyprint-override"><code>import jinja2 environment = jinja2.Environment() data = {'a':123, 'b-c':456} template = &quot;hello {{ b-c }}&quot; result = environment.from_string(template, globals = data).render() print('result:', result) </code></pre> <p>The expected result:</p> <blockquote> <p>hello 456</p> </blockquote> <p>The current code throws an error:</p> <blockquote> <p>jinja2.exceptions.UndefinedError: 'b' is undefined</p> </blockquote> <p>I can't change how the template variables (<code>data</code>) are defined or how the template is rendered at the moment, but I can change the template.</p>
<python><jinja2>
2025-05-06 01:46:57
2
28,952
daggett
79,607,760
9,759,947
Wrong color space of inferred image in Unity
<p>I'm having a hard time infering an image from a camera in Unity to an ONNX using a render Texture. I realized that the output from the ONNX inferenced with Barracuda is significantly different from the output I get when I infer the same image (after capturing) in python to the same ONNX.</p> <p>The goal is to only use Unity for inferencing the ONNX, but that's not working right now, and I don't know how to fix this.</p> <p>In Unity:</p> <p>My render texture has the Color format <code>R8G8B8A8_SRGB</code>, height and widht align with the script below:</p> <p>script:</p> <pre><code>public class LaneONNXRunner : MonoBehaviour { [Header(&quot;ONNX Model&quot;)] public NNModel modelAsset; [Header(&quot;Input Settings&quot;)] public Camera laneCamera; public RenderTexture renderTexture; public int modelInputWidth = 1280; public int modelInputHeight = 720; [Header(&quot;Async Settings&quot;)] [Tooltip(&quot;How many inference steps to run per frame. Lower = smoother UI, slower inference.&quot;)] public int stepsPerFrame = 1; private Model runtimeModel; private IWorker worker; private Texture2D inputTexture; void Start() { runtimeModel = ModelLoader.Load(modelAsset); worker = WorkerFactory.CreateWorker(WorkerFactory.Type.ComputePrecompiled, runtimeModel); inputTexture = new Texture2D(modelInputWidth, modelInputHeight, TextureFormat.RGBA32, false); //renderTexture = new RenderTexture(modelInputWidth, modelInputHeight, 24, RenderTextureFormat.ARGB32); } void Update() { if (Input.GetKeyDown(KeyCode.I)) { StartCoroutine(RunAsyncInference()); } } ... private void SaveInputAsJPG(Texture2D tex) { byte[] jpgBytes = tex.EncodeToJPG(95); // 0-100 quality, 95 is a good default string saveDir = Application.dataPath + &quot;/../../liveCaptures3&quot;; if (!Directory.Exists(saveDir)) Directory.CreateDirectory(saveDir); string path = saveDir + $&quot;/SavedONNXInput_{System.DateTime.Now:dd_HHmmss_f}.jpg&quot;; System.IO.File.WriteAllBytes(path, jpgBytes); Debug.Log($&quot;[Saved] JPEG input image saved to: {path}&quot;); } ... IEnumerator RunAsyncInference() { // Step 1: Capture image from RenderTexture to Texture2D laneCamera.Render(); // Ensures camera writes to RenderTexture RenderTexture.active = renderTexture; inputTexture.ReadPixels(new Rect(0, 0, modelInputWidth, modelInputHeight), 0, 0); inputTexture.Apply(); SaveInputAsJPG(inputTexture); // Add this RenderTexture.active = null; // 1.2. Encode to JPG and reload to simulate the JPG decoding (like in Python) byte[] jpgBytes = inputTexture.EncodeToJPG(95); //Texture2D jpgTex = new Texture2D(2, 2); // size wil be overwritten //jpgTex.LoadImage(jpgBytes); Texture2D jpgTex = new Texture2D(2, 2, TextureFormat.RGBA32, false, false); jpgTex.LoadImage(jpgBytes); // Step 2: Convert to Tensor //Color[] pixels = inputTexture.GetPixels(); Color[] pixels = jpgTex.GetPixels(); float[] floatValues = new float[pixels.Length * 3]; for (int i = 0; i &lt; pixels.Length; i++) { Color c = pixels[i]; floatValues[i * 3 + 0] = c.r / 1.0f; floatValues[i * 3 + 1] = c.g / 1.0f; floatValues[i * 3 + 2] = c.b / 1.0f; } string preview = &quot;&quot;; for (int i = 0; i &lt; 12; i++) preview += floatValues[i].ToString(&quot;F10&quot;) + &quot;, &quot;; Debug.Log(&quot;Unity Input Preview: &quot; + preview); ... 
</code></pre> <p>This logs, for example, the following to the console:</p> <pre><code>Unity Input Preview: 0.1960784000, 0.1960784000, 0.1960784000, 0.1960784000, 0.1960784000, 0.1960784000, 0.1960784000, 0.1960784000, 0.1960784000, 0.1960784000, 0.1960784000, 0.1960784000, </code></pre> <p>Then I take the same image I just captured and infer it in Python, to see if there is any significant difference:</p> <pre><code>import onnxruntime as ort import numpy as np from PIL import Image import torchvision.transforms as T img = Image.open(&quot;inference_input/tteesst.jpg&quot;).convert(&quot;RGB&quot;) transform = T.Compose([T.Resize((720, 1280)), T.ToTensor()]) input_tensor = transform(img).unsqueeze(0).numpy() print(input_tensor.flatten()[:12]) ... </code></pre> <p>This prints:</p> <pre><code>[0.53333336 0.53333336 0.53333336 0.53333336 0.53333336 0.53333336 0.5372549 0.5372549 0.53333336 0.53333336 0.53333336 0.53333336] </code></pre> <p>Notably, the inference done in the Python script leads to a much better prediction than the one in the Unity script. But the Unity script's inference is not so bad that I would assume I flipped the image over any axis (including diagonals).</p> <p>I'm pretty sure it has something to do with the color space in Unity not matching the one expected by the model.</p> <p>I have tried for hours to change the RenderTexture and to convert the color space in Unity, but it seems I don't really understand what is happening, since nothing works.</p> <p>I even added some lines to first save the input as JPG and then load it again, like in Python, but to no avail.</p> <p>I'm totally lost here, and I don't seem to understand what's going on in the <code>GetPixels()</code> method in Unity vs. <code>Image.open()</code> / <code>Image.open(...).convert()</code> in Python.</p> <p>I'd be really glad if someone could help me.</p>
<python><c#><image><unity-game-engine><colors>
2025-05-05 23:19:19
0
320
Schelmuffsky
79,607,611
8,788,960
More efficient way to compute elementwise gradients (Jacobian) in PyTorch while preserving create_graph
<p>I’m currently using PyTorch’s <code>torch.autograd.functional.jacobian</code> to compute per-sample, <em>elementwise</em> gradients of a scalar-valued model output w.r.t. its inputs. I need to keep <code>create_graph=True</code> because I want the resulting Jacobian entries to themselves require gradients (for further calculations).</p> <p>Here’s a minimal example of what I’m doing:</p> <pre class="lang-py prettyprint-override"><code>import torch from torch.autograd.functional import jacobian def method_jac_strict(inputs, forward_fn): # inputs: (N, F) # forward_fn: (N, F) -&gt; (N, 1) # output: (N, F). # compute full Jacobian: d = jacobian(forward_fn, inputs, create_graph=True, strict=True) # (N, 1, N, F) d = d.squeeze() # (N, N, F) # extract only the diagonal block (each output wrt its own input sample): (N, F) d = torch.einsum('iif-&gt;if', d) return d </code></pre> <p><strong>A clarification - Batch-sample dependencies</strong></p> <p>My model may include layers like BatchNorm, so samples in the batch aren’t truly independent. However, I only care about the β€œelementwise” gradientsβ€”i.e. treating each scalar output as if it only depended on its own input sample, and ignoring cross-sample terms.</p> <h2>Question</h2> <p>Is there a more efficient/idiomatic way in PyTorch to compute this <em>elementwise</em> gradient preserving create_graph, without materializing the full (N, N, F) tensor and extracting its diagonal?</p> <p>Any pointers to built-in functions or custom tricks (e.g. clever use of torch.einsum, custom autograd.Function, batching hacks, etc.) would be much appreciated!</p>
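<p>A cheaper route worth mentioning, sketched with an explicit caveat: a single backward pass over the summed output costs one gradient instead of a full <code>(N, 1, N, F)</code> Jacobian, but it returns the Jacobian's column sums, which equal the per-sample diagonal only when each output depends solely on its own sample; with BatchNorm the off-diagonal terms get folded in rather than ignored.</p> <pre><code>import torch

def elementwise_grad_sum_trick(inputs, forward_fn):
    # One backward pass instead of the full Jacobian.
    # Caveat: returns sum_i d out_i / d inputs, i.e. the column sums of the
    # Jacobian; identical to the diagonal only without cross-sample dependencies.
    inputs = inputs.clone().requires_grad_(True)
    out = forward_fn(inputs)                                   # (N, 1)
    (grads,) = torch.autograd.grad(out.sum(), inputs, create_graph=True)
    return grads                                               # (N, F), still differentiable
</code></pre> <p>If the off-diagonal terms must be excluded exactly, I am not aware of anything cheaper than either N separate backward passes or the full Jacobian already shown, short of evaluating the model per sample (for example <code>torch.func.vmap</code> over <code>torch.func.jacrev</code>), which changes what BatchNorm computes.</p>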
<python><pytorch><gradient><autograd>
2025-05-05 20:50:42
1
1,171
hans
79,607,596
2,391,712
Dataclass/attrs based orm models and their relationships
<p>I am building a small python application to learn Domain-Driven-Design (DDD) approaches. Therefore I am using a dataclass/attrs class as my domain model and also use this class to imperatively model my orm instances. However I encounter a bug when I try to insert a class containing foreign keys:</p> <pre><code>sqlalchemy.exc.IntegrityError: (sqlite3.IntegrityError) NOT NULL constraint failed: some_events.measurement_log_before_id [SQL: INSERT INTO some_events (created_at, event, measurement_log_before_id, measurement_log_after_id) VALUES (?, ?, ?, ?)] [parameters: ('2025-05-05 20:13:06.936736', 'some event', None, None)] (Background on this error at: https://sqlalche.me/e/20/gkpj) [123, 1] </code></pre> <p>The odd thing is that the attrs/dataclass has the id value. But the orm class looses it somehow during the <code>session.commit()</code></p> <p>To help reproduce the issue I will add a rather long minimal viable example which allows you to reproduce the issue at the end. Feel free to switch pytest out and run it as a script instead.</p> <pre><code>from datetime import UTC, datetime, timedelta from pathlib import Path import pytest from attrs import define, field from sqlalchemy import Column, DateTime, ForeignKey, Integer, String, Table, func from sqlalchemy.engine import create_engine from sqlalchemy.orm import Session, registry, relationship, sessionmaker mapper_registry = registry() metadata = mapper_registry.metadata measurement_table = Table( &quot;measurement_log&quot;, metadata, Column(&quot;id&quot;, Integer, primary_key=True, autoincrement=True), Column(&quot;recorded_at&quot;, DateTime, nullable=False), Column(&quot;measurement&quot;, Integer, nullable=False), ) some_event_table = Table( &quot;some_events&quot;, metadata, Column(&quot;id&quot;, Integer, primary_key=True, autoincrement=True), Column(&quot;created_at&quot;, DateTime, nullable=False, server_default=func.now()), Column(&quot;event&quot;, String, nullable=False), Column( &quot;measurement_log_before_id&quot;, ForeignKey(&quot;measurement_log.id&quot;), nullable=False ), Column( &quot;measurement_log_after_id&quot;, ForeignKey(&quot;measurement_log.id&quot;), nullable=False ), ) @define(order=True, slots=False) class MeasurementLog: id: int = field(init=False) recorded_at: datetime measurement: int @define(slots=False) class SomeEvents: id: int = field(init=False) measurement_log_before_id: int measurement_log_after_id: int consumer: str event: str created_at: datetime = field(default=datetime.now(UTC)) before_log: MeasurementLog | None = None after_log: MeasurementLog | None = None mapper_registry.map_imperatively( MeasurementLog, measurement_table, properties={ &quot;starts_event&quot;: relationship( SomeEvents, foreign_keys=[some_event_table.c.measurement_log_before_id], back_populates=&quot;before_log&quot;, ), &quot;ends_event&quot;: relationship( SomeEvents, foreign_keys=[some_event_table.c.measurement_log_after_id], back_populates=&quot;after_log&quot;, ), }, ) mapper_registry.map_imperatively( SomeEvents, some_event_table, properties={ &quot;before_log&quot;: relationship( MeasurementLog, foreign_keys=[some_event_table.c.measurement_log_before_id], back_populates=&quot;starts_event&quot;, ), &quot;after_log&quot;: relationship( MeasurementLog, foreign_keys=[some_event_table.c.measurement_log_after_id], back_populates=&quot;ends_event&quot;, ), }, ) @pytest.fixture def test_db(): db_url = &quot;./test.db&quot; Path(db_url).unlink(missing_ok=True) engine = create_engine(f&quot;sqlite:///{db_url}&quot;, 
echo=True) try: metadata.create_all(bind=engine) session_factory = sessionmaker(bind=engine) with session_factory() as session: yield session finally: Path(db_url).unlink(missing_ok=True) def test_some_event(test_db: Session): log_entry1 = MeasurementLog( recorded_at=datetime.now(UTC) - timedelta(minutes=45), measurement=123 ) test_db.add(log_entry1) log_entry2 = MeasurementLog( recorded_at=datetime.now(UTC) - timedelta(minutes=33), measurement=123 ) test_db.add(log_entry2) test_db.commit() some_event = SomeEvents( measurement_log_before_id=log_entry1.id, measurement_log_after_id=log_entry2.id, consumer=&quot;alice&quot;, event=&quot;some event&quot;, ) assert log_entry1.id assert log_entry2.id test_db.add(some_event) test_db.commit() </code></pre> <p>I tried:</p> <ul> <li>switching to dataclass or a normal class for SomeEvents-object</li> <li>setting the id field as None instead of setting init=False</li> <li>reading the sqlalchemy documentation (cf. <a href="https://docs.sqlalchemy.org/en/20/orm/mapping_styles.html#imperative-mapping" rel="nofollow noreferrer">https://docs.sqlalchemy.org/en/20/orm/mapping_styles.html#imperative-mapping</a>)</li> </ul> <p>I am using:</p> <pre><code>attrs==25.3.0 sqlalchemy==2.0.40 </code></pre>
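<p>One plausible culprit, offered as an assumption rather than a verified diagnosis: the attrs-generated <code>__init__</code> sets <code>before_log</code> and <code>after_log</code> to <code>None</code>, and once a mapped relationship attribute has been assigned (even to <code>None</code>), SQLAlchemy generally lets the relationship win over a manually set <code>*_id</code> column at flush time, which would null out the foreign keys exactly as in the traceback. A sketch that sidesteps the conflict by assigning the related objects themselves:</p> <pre><code>some_event = SomeEvents(
    measurement_log_before_id=log_entry1.id,
    measurement_log_after_id=log_entry2.id,
    consumer="alice",
    event="some event",
    before_log=log_entry1,  # keep the relationship and the FK column in agreement
    after_log=log_entry2,
)
test_db.add(some_event)
test_db.commit()
</code></pre> <p>Alternatively, dropping <code>before_log</code>/<code>after_log</code> from the attrs class entirely and letting the imperative mapping provide them as plain relationship attributes should avoid the explicit <code>None</code> assignment during construction.</p>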
<python><sqlalchemy><domain-driven-design><python-dataclasses><python-attrs>
2025-05-05 20:28:45
2
2,515
5th
79,607,554
1,361,752
Does hvplot lazily load data from xarray objects?
<p>Suppose I open an xarray dataset <code>ds</code> using <code>xarray.open_dataset</code> that contains a 3D array called <code>cube</code> with dimension coordinates x, y, and z. Usually just opening the dataset doesn't load all the data into memory; it's lazily retrieved as needed.</p> <p>Now suppose I do this:</p> <pre><code>ds.cube.hvplot.line(x='x', groupby=['y','z']) </code></pre> <p>This creates a line plot of a slice along the x direction, with a widget letting you choose the y, z position of the slice.</p> <p>My question is whether the resulting DynamicMap lazily loads each 1D slice from disk as the widget is manipulated (so only one 1D slice is in memory at a time), or whether the entire 3D cube is pulled into memory as soon as <code>hvplot</code> is called.</p>
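<p>Not an answer from the docs, but one way to check empirically (the file name and variable name are placeholders): open the dataset with dask-backed chunks and see whether the underlying array is still lazy after the plot object is built.</p> <pre><code>import dask.array as da
import xarray as xr
import hvplot.xarray  # noqa: F401 -- registers the .hvplot accessor

ds = xr.open_dataset("cube.nc", chunks={})  # chunks={} keeps variables as dask arrays
print(isinstance(ds.cube.data, da.Array))   # True means nothing has been loaded yet

plot = ds.cube.hvplot.line(x="x", groupby=["y", "z"])
print(isinstance(ds.cube.data, da.Array))   # still True? then building the DynamicMap
                                            # did not eagerly load the cube
</code></pre> <p>This only shows whether the original dataset was loaded; hvplot could still be holding its own in-memory copy, so treat it as a rough check rather than a proof.</p>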
<python><python-xarray><hvplot>
2025-05-05 19:53:06
0
4,167
Caleb
79,607,498
3,486,773
Why does my python function not properly cast the dates despite recognizing the format?
<p>I am attempting to dynamically cast various date formats that come across as a string column but are actually dates. I've gotten pretty far, and this code can correctly identify the dates from the string, but the fields 'Date2' and 'Date3' always return as null values. I can't understand why that is, or how to correct it so that it returns the converted date values.</p> <pre><code>from pyspark.sql import SparkSession from pyspark.sql.functions import col, min, max from pyspark.sql.types import IntegerType, FloatType, TimestampType, StringType, DateType from datetime import datetime, date # Define the function to convert values def convert_value(value): try: return int(value) except ValueError: pass try: return float(value) except ValueError: pass datetime_formats = [ '%m/%d/%Y %H:%M:%S', '%Y-%m-%d %H:%M:%S', '%Y-%m-%dT%H:%M:%S', '%Y-%m-%d %H:%M:%S.%f', '%Y-%m-%dT%H:%M:%S.%f' ] for fmt in datetime_formats: try: return datetime.strptime(value, fmt) except ValueError: pass date_formats = [ '%Y-%m-%d', '%d-%m-%Y', '%m/%d/%Y', '%d/%m/%Y', '%Y/%m/%d', '%b %d, %Y', '%d %b %Y' ] for fmt in date_formats: try: return datetime.strptime(value, fmt).date() except ValueError: pass return value # Function to infer data type for each column def infer_column_type(df, column): min_value = df.select(min(col(column))).collect()[0][0] max_value = df.select(max(col(column))).collect()[0][0] for value in [min_value, max_value]: if value is not None: converted_value = convert_value(value) print(f&quot;Column: {column}, Value: {value}, Converted: {converted_value}&quot;) # Debug print if isinstance(converted_value, int): return IntegerType() elif isinstance(converted_value, float): return FloatType() elif isinstance(converted_value, datetime): return TimestampType() elif isinstance(converted_value, date): return DateType() return StringType() # Example data with different date formats in separate columns data = [ ('1', '2021-01-01', '01-02-2021', '1/2/2021', '2021-01-01T12:34:56', '1.1', 1), ('2', '2021-02-01', '02-03-2021', '2/3/2021', '2021-02-01T13:45:56', '2.2', 2), ('3', '2021-03-01', '03-04-2021', '3/4/2021', '2021-03-01T14:56:56', '3.3', 3) ] # Create DataFrame spark = SparkSession.builder.appName(&quot;example&quot;).getOrCreate() columns = ['A', 'Date1', 'Date2', 'Date3', 'Date4', 'C', 'D'] df = spark.createDataFrame(data, columns) # Apply inferred data types to columns for column in df.columns: inferred_type = infer_column_type(df, column) df = df.withColumn(column, df[column].cast(inferred_type)) # Show the result df.show() df.dtypes </code></pre>
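<p>One explanation to verify, offered as an assumption: the inference part works, but the final <code>cast(DateType())</code> is what actually converts the strings, and Spark's string-to-date cast only understands ISO-style values such as <code>2021-01-01</code>, so <code>01-02-2021</code> and <code>1/2/2021</code> become NULL even though Python recognised their formats. A sketch of converting those columns with an explicit pattern instead of a bare cast (the patterns below are guesses matching the sample data):</p> <pre><code>from pyspark.sql.functions import to_date

df = df.withColumn("Date2", to_date("Date2", "dd-MM-yyyy"))
df = df.withColumn("Date3", to_date("Date3", "M/d/yyyy"))
df.show()
</code></pre> <p>To keep the dynamic approach, the Python format string that wins in <code>convert_value</code> could be mapped to the equivalent Spark pattern and passed to <code>to_date</code>/<code>to_timestamp</code> instead of relying on <code>cast</code>.</p>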
<python><dataframe><apache-spark>
2025-05-05 19:08:22
1
1,278
user3486773
79,607,384
659,389
Azure - Python client for listing key vaults
<p>I found <a href="https://learn.microsoft.com/en-us/python/api/overview/azure/key-vault?view=azure-python" rel="nofollow noreferrer">4 Python libraries</a> for working with key vaults:</p> <ul> <li>admin</li> <li>keys</li> <li>certs</li> <li>secrets</li> </ul> <p>but I am missing an API in Python to list all vaults. Samples and tests for the aforementioned libraries always provide an arbitrary <code>vault_url</code>.</p> <p>It looks like this operation <a href="https://learn.microsoft.com/en-us/rest/api/keyvault/keyvault/vaults/list?view=rest-keyvault-keyvault-2022-07-01&amp;tabs=HTTP" rel="nofollow noreferrer">exists</a> at the REST API level, but not in any of the Python libraries. I'd prefer to use a native Python client for everything; does anybody know if this functionality is implemented somewhere in Python?</p>
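<p>The listing operation lives in the management-plane SDK rather than in the four data-plane packages above, so something along these lines may be what is needed (packages <code>azure-mgmt-keyvault</code> and <code>azure-identity</code>; the subscription id is a placeholder, and attribute names can vary slightly between package versions):</p> <pre><code>from azure.identity import DefaultAzureCredential
from azure.mgmt.keyvault import KeyVaultManagementClient

credential = DefaultAzureCredential()
client = KeyVaultManagementClient(credential, "&lt;subscription-id&gt;")

for vault in client.vaults.list_by_subscription():
    print(vault.name, vault.properties.vault_uri)
</code></pre>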
<python><azure><azure-keyvault><azure-python-sdk>
2025-05-05 17:32:40
1
1,333
koleS