Dataset column summary (name, dtype, observed range):

  QuestionId          int64          74.8M – 79.8M
  UserId              int64          56 – 29.4M
  QuestionTitle       string         lengths 15 – 150
  QuestionBody        string         lengths 40 – 40.3k
  Tags                string         lengths 8 – 101
  CreationDate        string (date)  2022-12-10 09:42:47 – 2025-11-01 19:08:18
  AnswerCount         int64          0 – 44
  UserExpertiseLevel  int64          301 – 888k
  UserDisplayName     string         lengths 3 – 30 (may be null, ⌀)
79,457,349
9,261,322
How to install gitpython in GitBash on Windows?
<p>I downloaded Python 3.13.2 zip archive (Windows embeddable package (64-bit)). It contains the following files:</p> <pre><code>_asyncio.pyd _bz2.pyd _ctypes.pyd _decimal.pyd _elementtree.pyd _hashlib.pyd _lzma.pyd _multiprocessing.pyd _overlapped.pyd _queue.pyd _socket.pyd _sqlite3.pyd _ssl.pyd _uuid.pyd _wmi.pyd _zoneinfo.pyd libcrypto-3.dll libffi-8.dll libssl-3.dll LICENSE.txt pyexpat.pyd python.cat python.exe python3.dll python313._pth python313.dll python313.zip pythonw.exe select.pyd sqlite3.dll unicodedata.pyd vcruntime140.dll vcruntime140_1.dll winsound.pyd </code></pre> <p>when I try the following command (that worked with Python 3.5)</p> <pre><code>python -m pip install gitpython </code></pre> <p>I get</p> <pre><code>python-3.13.2\python.exe: No module named pip </code></pre> <p>On Linux I install <code>pip</code> with a command like this:</p> <pre><code>sudo apt install python3-pip </code></pre> <p>What is the command on Windows in GitBash?</p> <p>Tried:</p> <pre><code>$ python -m ensurepip C:\dev\tools\python-3.13.2\python.exe: No module named ensurepip </code></pre> <p>and</p> <pre><code>$ python get-pip.py Collecting pip Downloading pip-25.0.1-py3-none-any.whl.metadata (3.7 kB) Downloading pip-25.0.1-py3-none-any.whl (1.8 MB) ---------------------------------------- 1.8/1.8 MB 3.5 MB/s eta 0:00:00 Installing collected packages: pip WARNING: The scripts pip.exe, pip3.13.exe and pip3.exe are installed in 'C:\dev\tools\python-3.13.2\Scripts' which is not on PATH. Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location. Successfully installed pip-25.0.1 export PATH=/c/dev/tools/python-3.13.2:/c/dev/tools/python-3.13.2/Scripts:$PATH </code></pre> <p>but still</p> <pre><code>$ python -m pip install gitpython C:\dev\tools\python-3.13.2\python.exe: No module named pip $ pip install gitpython Traceback (most recent call last): File &quot;&lt;frozen runpy&gt;&quot;, line 198, in _run_module_as_main File &quot;&lt;frozen runpy&gt;&quot;, line 88, in _run_code File &quot;C:\dev\tools\python-3.13.2\Scripts\pip.exe\__main__.py&quot;, line 4, in &lt;module&gt; from pip._internal.cli.main import main ModuleNotFoundError: No module named 'pip' </code></pre> <p><code>pip</code> is in the PATH:</p> <pre><code>$ which pip /c/dev/tools/python-3.13.2/Scripts/pip </code></pre> <p>but</p> <pre><code>$ python -m pip --version C:\dev\tools\python-3.13.2\python.exe: No module named pip </code></pre> <p>what does it mean?</p>
<python><git-bash>
2025-02-21 12:22:00
2
4,405
Alexey Starinsky
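For the question above: the Windows embeddable package intentionally ships without pip/ensurepip and with site imports disabled through python313._pth, so anything get-pip.py installs into Lib\site-packages never reaches sys.path. A minimal sketch of the usual fix, uncommenting `import site` in that file; the install path is the one from the question, adjust it to your own layout:

```python
from pathlib import Path

# Path taken from the question; adjust to your install location.
pth = Path(r"C:\dev\tools\python-3.13.2\python313._pth")

text = pth.read_text()
if "#import site" in text:
    # The embeddable package ships this line commented out; enabling it makes
    # Lib\site-packages (where get-pip.py installs pip) visible on sys.path.
    pth.write_text(text.replace("#import site", "import site"))
print(pth.read_text())
# Then re-run:  python get-pip.py   and   python -m pip --version
```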
79,457,247
13,769,291
How to get rid of a level in pandas dataframe?
<p>I want to remove some data from a Pandas dataframe and rebuild it. I tried using the drop function but this didn't work. It is unclear to me whether the data is a row or a level. The picture below will hopefully clarify the issue.</p> <p>Using df.columns I got the following information about the dataframe:</p> <pre><code>MultiIndex([( 'Close', 'XOM'), ( 'High', 'XOM'), ( 'Low', 'XOM'), ( 'Open', 'XOM'), ('Volume', 'XOM')], names=['Price', 'Ticker']) </code></pre> <p>I want to achieve these three things, so I can get the end result.</p> <p><a href="https://i.sstatic.net/oT0yD8jA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oT0yD8jA.png" alt="enter image description here" /></a></p>
<python><pandas>
2025-02-21 11:30:56
0
357
Alex
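The frame in the question has a two-level column MultiIndex (Price, Ticker); with a single ticker the Ticker level can simply be dropped. A minimal sketch on a hypothetical frame of the same shape:

```python
import numpy as np
import pandas as pd

# Hypothetical frame shaped like the one in the question: one ticker, MultiIndex columns.
cols = pd.MultiIndex.from_product(
    [["Close", "High", "Low", "Open", "Volume"], ["XOM"]], names=["Price", "Ticker"]
)
df = pd.DataFrame(np.random.rand(3, 5), columns=cols)

# Drop the single-valued "Ticker" level so the columns become flat.
flat = df.copy()
flat.columns = flat.columns.droplevel("Ticker")
print(flat.columns.tolist())   # ['Close', 'High', 'Low', 'Open', 'Volume']

# Alternative: select one ticker via .xs, which also removes that level.
flat2 = df.xs("XOM", axis=1, level="Ticker")
```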
79,457,237
6,101,024
fit function stops after epoch 1
<p>I have implemented this function to fit the model</p> <pre><code>def fit_model(model, X_train_sequence_tensor,Y_train_sequence_tensor, epochs, val_set, time_windows, scaler): X_column_list = [item for item in val_set.columns.to_list() if item not in ['date', 'user', 'rank','rank_group', 'counts', 'target']] X_val_set = val_set[X_column_list].round(2) X_val_set[X_val_set.columns] = scaler.transform(X_val_set[X_val_set.columns] ) X_val_sequence = get_feature_array(X_val_set , X_column_list, time_windows) X_val_sequence_tensor = tf.convert_to_tensor(X_val_sequence, dtype=tf.float32) Y_column_list = ['target'] Y_val_set = val_set[Y_column_list].round(2) Y_val_sequence = get_feature_array(Y_val_set , Y_column_list, time_windows) Y_val_sequence_tensor = tf.convert_to_tensor(Y_val_sequence, dtype=tf.float32) history = model.fit(X_train_sequence_tensor,Y_train_sequence_tensor, epochs, validation_data=(X_val_sequence_tensor, Y_val_sequence_tensor)) return model, history </code></pre> <p>but when I call it as</p> <pre><code>fitted_model, history = fit_model(model, X_train_sequence_tensor,Y_train_sequence_tensor, epochs=100, val_set=val_set, time_windows=90, scaler=scaler) </code></pre> <p>it stops after the first epoch. It does not run for all the 100 as required.</p> <p>I tried to call it outside of the function call and it worked.</p> <pre><code>`# Step 3.2 : Fit the model + We pass some validation for # monitoring validation loss and metrics # at the end of each epoch X_val_set = val_set[X_column_list].round(2) #X_val_set.values = scaler.transform(X_val_set.values) X_val_set[X_val_set.columns] = scaler.transform(X_val_set[X_val_set.columns] ) X_val_sequence = get_feature_array(X_val_set , X_column_list, 90) X_val_sequence_tensor = tf.convert_to_tensor(X_val_sequence, dtype=tf.float32) Y_val_set = val_set[Y_column_list].round(2) Y_val_sequence = get_feature_array(Y_val_set , Y_column_list, 90) Y_val_sequence_tensor = tf.convert_to_tensor(Y_val_sequence, dtype=tf.float32) training_history = cnn1d_bilstm_model.fit(X_train_sequence_tensor,Y_train_sequence_tensor, epochs=200, # We pass some validation for # monitoring validation loss and metrics # at the end of each epoch validation_data=(X_val_sequence_tensor, Y_val_sequence_tensor)) </code></pre> <p>What am I doing wrong?</p>
<python><machine-learning><deep-learning>
2025-02-21 11:26:50
2
697
Carlo Allocca
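In the fit_model function above, `epochs` is passed as the third positional argument of `model.fit`, but in the Keras signature that slot is `batch_size` (epochs is a later keyword), so the model trains with batch_size=100 and the default single epoch. A sketch of the corrected call, a fragment reusing the question's variable names:

```python
# Corrected call: pass everything after x/y as keyword arguments.
history = model.fit(
    X_train_sequence_tensor,
    Y_train_sequence_tensor,
    epochs=epochs,   # previously this value landed in batch_size
    validation_data=(X_val_sequence_tensor, Y_val_sequence_tensor),
)
```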
79,457,089
1,251,549
How to return the same class from its own method in Python?
<p>I have written:</p> <pre><code>class MyClass: def my_function(self) -&gt; MyClass: &quot;&quot;&quot;&quot;&quot;&quot; </code></pre> <p>And IntelliJ IDEA tells me <code>Unresolved reference 'MyClass'</code> on the function definition.</p> <p>What am I missing? How do I define a method that returns the class in which it is defined?</p>
<python><python-typing>
2025-02-21 10:38:08
0
33,944
Cherry
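The class name is not yet bound when the annotation is evaluated inside the class body. The common fixes are a string forward reference, `from __future__ import annotations`, or `typing.Self` on Python 3.11+. A minimal sketch:

```python
from __future__ import annotations  # make all annotations lazy strings (PEP 563)

class MyClass:
    def my_function(self) -> MyClass:   # fine with the future import
        return self

# Equivalent options without the future import:
#   def my_function(self) -> "MyClass": ...   # explicit forward reference
#   from typing import Self                    # Python 3.11+
#   def my_function(self) -> Self: ...
print(type(MyClass().my_function()).__name__)  # MyClass
```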
79,457,083
1,422,096
UI element for an infinite scrolling image gallery for 100k images: which data structure?
<p>I'm making an (infinite) scrollable GUI that displays 100x100px thumbnails of possibly 100 000 image files (JPG), in Python <code>Tkinter</code>.</p> <p>The MCVE code below works:</p> <p><a href="https://i.sstatic.net/xJv7e2iI.png" rel="noreferrer"><img src="https://i.sstatic.net/xJv7e2iI.png" alt="enter image description here" /></a></p> <p>using lazy loading: it reloads 30 new images when the scrollbar reaches the end, <strong>but it will not scale when browsing 100k images</strong> because the <code>canvas</code> object will become bigger and bigger and overflow memory.</p> <p>What UI element to use for this? Is there a <code>tk.Frame</code> or <code>tk.Canvas</code> in which you can add new content at the bottom (e.g. <code>y = 10 000</code>), and destroy old content at the top (e.g. at <code>y=0..5 000</code>), and then downsize the canvas, for example remove the first half of it (to avoid the canvas height become higher and higher) without the GUI flickering?</p> <p>Or maybe a canvas frame data structure that &quot;cycles&quot; like a circular buffer? This would avoid the canvas height to explode, and always stay 10 000 pixels high. When we reach <code>y=9 000</code>, old images at <code>y=0..1 000</code> are replaced by new ones, and when scrolling at at <code>y=10 000</code> in fact we are looping back to <code>y=0</code>. Does this exist?</p> <p><a href="https://i.sstatic.net/O9QeHTM1.png" rel="noreferrer"><img src="https://i.sstatic.net/O9QeHTM1.png" alt="enter image description here" /></a></p> <p>Or maybe another system with tiles? (but I don't see how)</p> <p>Notes:</p> <ul> <li><p>Many image viewers like Google Picasa (long discontinued freeware) do this:</p> <p><a href="https://i.sstatic.net/3KFA8tblm.png" rel="noreferrer"><img src="https://i.sstatic.net/3KFA8tblm.png" alt="enter image description here" /></a></p> <p>Note that the right vertical scrollbar doesn't reflect the real progress, because the scrollbar handle would be too small and be imprecise for 100k images - here it's more a joystick-like scrollbar</p> </li> </ul> <p>Code:</p> <pre><code>import os, tkinter as tk, tkinter.ttk as ttk, PIL.Image, PIL.ImageTk class InfiniteScrollApp(tk.Tk): def __init__(self, image_dir, *args, **kwargs): super().__init__(*args, **kwargs) self.geometry(&quot;800x600&quot;) self.image_files = [os.path.join(root, file) for root, dirs, files in os.walk(image_dir) for file in files if file.lower().endswith(('.jpg', '.jpeg'))] self.image_objects = [] self.canvas = tk.Canvas(self, bg=&quot;white&quot;) self.scrollbar = ttk.Scrollbar(self, orient=&quot;vertical&quot;, command=self.yview) self.canvas.configure(yscrollcommand=self.scrollbar.set) self.scrollbar.pack(side=&quot;right&quot;, fill=&quot;y&quot;) self.canvas.pack(side=&quot;left&quot;, fill=&quot;both&quot;, expand=True) self.frame = tk.Frame(self.canvas, width=self.canvas.winfo_width()) self.canvas.create_window((0, 0), window=self.frame, anchor=&quot;nw&quot;) self.canvas.bind_all(&quot;&lt;MouseWheel&gt;&quot;, self.on_scroll) self.load_images() def yview(self, *args): self.canvas.yview(*args) self.update() def update(self): canvas_height = self.canvas.bbox(&quot;all&quot;)[3] scroll_pos = self.canvas.yview()[1] if scroll_pos &gt; 0.9: self.load_images() def load_images(self): current_image_count = len(self.image_objects) for i in range(current_image_count, current_image_count + 30): if i &lt; len(self.image_files): image_path = self.image_files[i] img = PIL.Image.open(image_path) img.thumbnail((100, 100)) photo = 
PIL.ImageTk.PhotoImage(img) label = tk.Label(self.frame, image=photo) label.image = photo label.pack(pady=5, padx=10) self.image_objects.append(label) self.frame.update_idletasks() self.canvas.config(scrollregion=self.canvas.bbox(&quot;all&quot;)) def on_scroll(self, event): self.canvas.yview_scroll(-1 * (event.delta // 120), &quot;units&quot;) self.update() app = InfiniteScrollApp(&quot;D:\images&quot;) app.mainloop() </code></pre> <hr /> <h2>Sample images</h2> <p>Sample code to generate 10k images quickly with a text number drawn on them:</p> <p><a href="https://i.sstatic.net/vIQTNxo7.png" rel="noreferrer"><img src="https://i.sstatic.net/vIQTNxo7.png" alt="enter image description here" /></a></p> <p><a href="https://i.sstatic.net/IYfZIwyW.png" rel="noreferrer"><img src="https://i.sstatic.net/IYfZIwyW.png" alt="enter image description here" /></a></p> <pre><code>from PIL import Image, ImageDraw, ImageFont import random for i in range(10000): img = Image.new('RGB', (200, 200), (random.randint(0, 255), random.randint(0, 255), random.randint(0, 255))) draw = ImageDraw.Draw(img) font = ImageFont.truetype(&quot;arial.ttf&quot;, 30) draw.text((10, 10), str(i), font=font, fill=(0, 0, 0)) img.save(f'color_image_{i:04d}.jpg') </code></pre>
<python><image><tkinter><tkinter-canvas><tile>
2025-02-21 10:35:29
4
47,388
Basj
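One way to keep memory flat for 100k files is to never grow the canvas at all: create a fixed pool of thumbnail Labels once and, on every scroll step, recompute which slice of the file list they should show and swap their images in place. A deliberately minimal sketch of that recycling idea (single column, mouse wheel only, small LRU thumbnail cache); it assumes the JPEG files from the question's generator script exist in the working directory:

```python
import tkinter as tk
from functools import lru_cache
import PIL.Image, PIL.ImageTk

FILES = [f"color_image_{i:04d}.jpg" for i in range(10000)]  # illustrative file list
VISIBLE = 5                                                  # labels actually created

class RecycledGallery(tk.Tk):
    def __init__(self):
        super().__init__()
        self.first = 0
        self.labels = [tk.Label(self) for _ in range(VISIBLE)]
        for lbl in self.labels:
            lbl.pack(pady=5)
        # One wheel notch moves the window by one thumbnail.
        self.bind_all("<MouseWheel>", lambda e: self.show(self.first - e.delta // 120))
        self.show(0)

    @lru_cache(maxsize=200)          # keep at most ~200 thumbnails in memory
    def thumb(self, path):
        img = PIL.Image.open(path)
        img.thumbnail((100, 100))
        return PIL.ImageTk.PhotoImage(img)

    def show(self, first):
        # Clamp the window, then reuse the same Labels for the new slice.
        self.first = max(0, min(first, len(FILES) - VISIBLE))
        for lbl, path in zip(self.labels, FILES[self.first:self.first + VISIBLE]):
            photo = self.thumb(path)
            lbl.configure(image=photo, text=path, compound="top")
            lbl.image = photo        # keep a reference so Tk doesn't drop the image

RecycledGallery().mainloop()
```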
79,456,811
1,145,666
Google Vision AI does not include correct documentation
<p>I am using the following code to do OCR using Google Vision AI:</p> <pre><code>from google.cloud import vision client = vision.ImageAnnotatorClient() bindata = base64.b64decode(b64data) # b64data is a Base64 encoded image file image = vision.Image(content=bindata) results = client.text_detection(image=image) </code></pre> <p>Which works fine for most images. I found now an image where text is not detected, but when I rotate the image, it does get detected.</p> <p>So, I am looking for extra options I could possibly give the <code>text_detection</code> method, but I can nowhere find good reference documentation on it. Even the <a href="https://cloud.google.com/python/docs/reference/vision/latest/google.cloud.vision_v1p2beta1.services.image_annotator.ImageAnnotatorClient" rel="nofollow noreferrer">official Google documentation</a> does not include that method.</p> <p>Where can I find the right documentation for the <code>text_detection</code> method? (Or, better, how can I improve the OCR detection to correctly detect upside down text).</p>
<python><documentation><google-cloud-vision>
2025-02-21 09:00:01
1
33,757
Bart Friederichs
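text_detection itself exposes no rotation option, and a pragmatic workaround for upside-down scans is simply to retry the same call on rotated copies of the image and keep the first non-empty result. A sketch of that retry loop (error handling omitted; `bindata` is the decoded image bytes from the question):

```python
import io
from PIL import Image
from google.cloud import vision

client = vision.ImageAnnotatorClient()

def ocr_with_rotation(image_bytes):
    """Run text_detection on the image and, if nothing is found, on rotated copies."""
    for angle in (0, 90, 180, 270):
        if angle:
            pil = Image.open(io.BytesIO(image_bytes)).rotate(angle, expand=True)
            buf = io.BytesIO()
            pil.save(buf, format="PNG")
            content = buf.getvalue()
        else:
            content = image_bytes
        response = client.text_detection(image=vision.Image(content=content))
        if response.text_annotations:        # first annotation is the full text block
            return angle, response.text_annotations[0].description
    return None, ""

# angle, text = ocr_with_rotation(bindata)   # bindata as in the question
```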
79,456,717
2,315,319
'pdm build' adds date to scm version number, TestPyPI rejects it
<p>My distribution project is managed with git, The last tag is <code>0.1.2.dev1</code>. I use PDM (<a href="https://pdm-project.org" rel="nofollow noreferrer">https://pdm-project.org</a>). The relevant snipped in <code>pyproject.toml</code> is:</p> <pre><code>[build-system] requires = [&quot;pdm-backend&quot;] build-backend = &quot;pdm.backend&quot; [tool.pdm] distribution = true version = {source = &quot;scm&quot;, write_to = &quot;ff_testpkg/version.py&quot;, write_template = &quot;__version__ = '{}'&quot; } </code></pre> <p>On <code>pdm build</code>:</p> <pre><code>pdm build Building sdist... Built sdist at C:/dev/pylib/ff-testpkg/dist\ff-testpkg-0.1.2.dev1+d20250221.tar.gz Building wheel from sdist... Built wheel at C:/dev/pylib/ff-testpkg/dist\ff-testpkg-0.1.2.dev1+d20250221-py3-none-any.whl </code></pre> <p>I see the same auto-generated version number in <code>version.py</code>:</p> <pre><code>__version__ = '0.1.2.dev1+d20250221' </code></pre> <p>On <code>pdm publish</code>, I get this error:</p> <pre><code>400 The use of local versions in &lt;Version('0.1.2.dev1+d20250221')&gt; is not allowed. See https://packaging.python.org/specifications/core-metadata for more information. </code></pre> <p>How do I make pdm not inject the +d=date= local versioning?</p> <p>Edit: I read <a href="https://packaging.python.org/en/latest/specifications/version-specifiers/#local-version-identifiers" rel="nofollow noreferrer">https://packaging.python.org/en/latest/specifications/version-specifiers/#local-version-identifiers</a>, it says local versions must comply with <code>&lt;public version identifier&gt;[+&lt;local version label&gt;]</code>, but after a paragraph it says: &quot;local version labels MUST be limited to the following set of permitted characters: ASCII letters ([a-zA-Z]), ASCII digits ([0-9]), periods (.)&quot;. This means <code>0.1.2.dev1+d20250221</code> should be allowed, right?</p>
<python><pypi><python-pdm>
2025-02-21 08:18:48
0
313
fishfin
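To the question in the edit above: `0.1.2.dev1+d20250221` is a syntactically valid PEP 440 local version, but PyPI and TestPyPI reject any upload that carries a local version label. The `+dYYYYMMDD` part is pdm-backend's marker for a build that is not from a clean tagged commit (commonly uncommitted changes), so committing or stashing before `pdm build` usually makes it disappear. A small check with `packaging` (assumed installed) illustrating the first point:

```python
from packaging.version import Version

v = Version("0.1.2.dev1+d20250221")
print(v.public)         # 0.1.2.dev1  -> the part PyPI would accept
print(v.local)          # d20250221   -> local label: valid PEP 440, but rejected on upload
print(v.is_devrelease)  # True
```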
79,456,688
9,144,522
Spread out tasks evenly over a minute; some tasks will run multiple times and should be equally spaced
<p>I have a config file with a list of bash scripts that should run <code>times_per_min</code> time(s) during 60 seconds.</p> <p><code>tasks.yaml</code>:</p> <pre><code>every_minute: - name: test0 path: test0.sh times_per_min: 3 - name: test1 path: test1.sh times_per_min: 2 </code></pre> <p>My Python code to import the tasks and run them:</p> <pre><code>import subprocess import time import yaml TASKS = [] PROCESSES = [] with open(&quot;tasks.yaml&quot;) as file: measurements = yaml.safe_load(file) for every_min in measurements[&quot;every_minute&quot;]: for _ in range(every_min[&quot;times_per_min&quot;]): TASKS.append(every_min[&quot;path&quot;]) interval = 60 / len(TASKS) for task in TASKS: try: PROCESSES.append(subprocess.Popen([task])) except Exception as e: print(&quot;An error occurred&quot;) time.sleep(interval) </code></pre> <p>In this case, TASKS would look like:</p> <pre><code>print(TASKS) ['test0.sh', 'test0.sh', 'test0.sh', 'test1.sh', 'test1.sh'] </code></pre> <p>What I would like: <code>['test0.sh', 'test1.sh', 'test0.sh', 'test1.sh', 'test0.sh']</code></p> <p>Also note that I want this distribution to work no matter the number of tasks and the times_per_min values...</p>
<python>
2025-02-21 08:01:06
1
341
Jesper.Lindberg
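One way to get the interleaving is to compute, for every task, its ideal offsets within the minute (k * 60 / times_per_min), then sort all (offset, path) pairs and sleep until each offset; tasks that run more often are automatically spread out and different tasks no longer cluster. A sketch using a list shaped like the question's parsed config (you can additionally stagger each task's phase if two tasks should never share the same offset):

```python
import subprocess
import time

every_minute = [   # same shape as the parsed tasks.yaml
    {"name": "test0", "path": "test0.sh", "times_per_min": 3},
    {"name": "test1", "path": "test1.sh", "times_per_min": 2},
]

# Each task gets offsets k * 60 / times_per_min; merge and sort the whole schedule.
schedule = sorted(
    (k * 60.0 / task["times_per_min"], task["path"])
    for task in every_minute
    for k in range(task["times_per_min"])
)
# -> [(0.0, 'test0.sh'), (0.0, 'test1.sh'), (20.0, 'test0.sh'), (30.0, 'test1.sh'), (40.0, 'test0.sh')]

start = time.monotonic()
for offset, path in schedule:
    time.sleep(max(0.0, offset - (time.monotonic() - start)))
    subprocess.Popen([path])
```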
79,456,596
17,173,476
How to instantiate a single element Array/List in Polars expressions efficiently?
<p>I need to convert each element in a polars df into the following structure:</p> <pre class="lang-json prettyprint-override"><code>{ &quot;value&quot;: &quot;A&quot;, &quot;lineItemName&quot;: &quot;value&quot;, &quot;dimensions&quot;: [ { &quot;itemCode&quot;: 1, &quot;dimensionName&quot;: &quot;Clients&quot; } ] } </code></pre> <p>where <code>value</code> corresponds to the value of that element, <code>lineItemName</code> to the column name, <code>itemCode</code> the value held in the key column in the row of that element and <code>dimensionName</code> is a given literal.</p> <p>For example</p> <pre class="lang-py prettyprint-override"><code>df = pl.DataFrame({&quot;key&quot;: [1, 2, 3, 4, 5], &quot;value&quot;: [&quot;A&quot;, &quot;B&quot;, &quot;C&quot;, &quot;D&quot;, &quot;E&quot;]}) </code></pre> <p>Should result in:</p> <pre><code>shape: (5, 1) โ•ญโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฎ โ”‚ value โ”‚ โ”‚ --- โ”‚ โ”‚ struct[3] โ”‚ โ•žโ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•ก โ”‚ {&quot;A&quot;,&quot;value&quot;,[{1,&quot;D&quot;}]} โ”‚ โ”œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ”ค โ”‚ {&quot;B&quot;,&quot;value&quot;,[{2,&quot;D&quot;}]} โ”‚ โ”œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ”ค โ”‚ {&quot;C&quot;,&quot;value&quot;,[{3,&quot;D&quot;}]} โ”‚ โ”œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ”ค โ”‚ {&quot;D&quot;,&quot;value&quot;,[{4,&quot;D&quot;}]} โ”‚ โ”œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ•Œโ”ค โ”‚ {&quot;E&quot;,&quot;value&quot;,[{5,&quot;D&quot;}]} โ”‚ โ•ฐโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฏ </code></pre> <p><a href="https://i.sstatic.net/jtOOepSF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jtOOepSF.png" alt="enter image description here" /></a></p> <p>My current implementation:</p> <pre class="lang-py prettyprint-override"><code>df = df.with_columns( pl.struct( pl.col(col).alias(&quot;value&quot;), pl.lit(col).alias(&quot;lineItemName&quot;), pl.concat_list( pl.struct(pl.col(&quot;key&quot;).alias(&quot;itemCode&quot;), pl.lit(&quot;D&quot;).alias(&quot;dimensionName&quot;)) ).alias(&quot;dimensions&quot;), ).alias(col) for col in df.columns if not col == &quot;key&quot; ).drop(&quot;key&quot;) </code></pre> <p>My issue is with the <code>pl.concat_list()</code> expression. The list holding the dimension struct is in my case guaranteed to always only hold one single element. That is why I am seeking a way to avoid taking the significant (and in my case unnecessary) performance hit of <code>pl.concat_list()</code>.</p> <p>Ideally, I'd be able to just:</p> <pre class="lang-py prettyprint-override"><code>pl.lit( [pl.struct(pl.col(&quot;key&quot;).alias(&quot;itemCode&quot;), pl.lit(&quot;D&quot;).alias(&quot;dimensionName&quot;))] ).alias(&quot;dimensions&quot;) </code></pre> <p>but this for the time being raises <code>TypeError: not yet implemented: Nested object types</code>.</p> <p>I have tried variations of the above, but I cannot seem to avoid running into the nested expression at some point. Is there any way I can cleanly instantiate this single element list or better yet Array?</p>
<python><python-polars>
2025-02-21 07:20:26
1
487
Vinz
79,456,565
6,629,148
Trigger a DAG based on upstream DAG completion (unscheduled)
<p>I am trying to see if I can trigger a DAG (say B1) to run (daily) if an upstream DAG (say A1) complete's run. I am already aware about <a href="https://airflow.apache.org/docs/apache-airflow/stable/howto/operator/external_task_sensor.html#externaltasksensor" rel="nofollow noreferrer">ExternalTaskSensor</a>, but my limitation is that upstream DAG U1, is unscheduled i.e. schedule=None and A1 is triggered through a different upstream DAG using TriggerDAGrunOperator.</p> <p>I don't own DAG A1 so I can't modify its original code, I have come across <a href="https://registry.astronomer.io/providers/apache-airflow/versions/2.10.5/modules/DagStateTrigger" rel="nofollow noreferrer">DagStateTrigger</a> which is described as follows :</p> <blockquote> <p>Waits asynchronously for a different DAG to complete for a specific logical date.</p> </blockquote> <p>what does 'logical date' mean in this context, does anyone have any experience of using it?</p> <p>I ideally I will schedule my DAG B1 and it will poke the upstream DAG A1 until it completes and run immediately rightafter.</p>
<python><airflow><airflow-2.x><astronomer>
2025-02-21 07:05:00
1
1,088
Anand Vidvat
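"Logical date" in DagStateTrigger (and in ExternalTaskSensor) is the run's execution/logical date, which is exactly why it is awkward to match against an unscheduled, manually triggered DAG whose runs get arbitrary logical dates. One hedged alternative that avoids date matching altogether is a PythonSensor that pokes until A1 has a successful run more recent than B1's own data interval. The DAG ids, schedule, and interval handling below are illustrative, not taken from the question:

```python
# Hedged sketch (Airflow 2.x): poke until the unscheduled upstream DAG "A1"
# has a successful run that finished after our own data_interval_start.
import pendulum
from airflow import DAG
from airflow.models import DagRun
from airflow.sensors.python import PythonSensor
from airflow.utils.state import DagRunState

def _upstream_succeeded_recently(data_interval_start, **_):
    runs = DagRun.find(dag_id="A1", state=DagRunState.SUCCESS)
    return any(r.end_date and r.end_date >= data_interval_start for r in runs)

with DAG(
    dag_id="B1",
    schedule="@daily",
    start_date=pendulum.datetime(2025, 1, 1, tz="UTC"),
    catchup=False,
) as dag:
    wait_for_a1 = PythonSensor(
        task_id="wait_for_a1",
        python_callable=_upstream_succeeded_recently,
        poke_interval=300,
        mode="reschedule",
    )
    # B1's real work goes downstream of wait_for_a1
```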
79,456,467
11,422,407
Python request to LM Studio model fails but curl succeeds
<p>I tried to request local model by using Python with below code,</p> <pre><code>import requests import json url = 'http://localhost:1234/v1/chat/completions' headers = { 'Content-Type': 'application/json' } data = { 'model': 'deepseek-r1-distill-qwen-7b', 'messages': [ {'role': 'system', 'content': 'Always answer in rhymes. Today is Thursday'}, {'role': 'user', 'content': 'What day is it today?'} ], 'temperature': 0.7, 'max_tokens': -1, 'stream': False } response = requests.post(url, headers=headers, data=json.dumps(data)) if response.status_code == 200: print('Response:', response.json()) else: print('Error:', response.status_code, response.text) </code></pre> <p>and got <code>503 service unavailable</code> error. But if I request it successfully via <code>Curl</code>,</p> <pre><code>curl http://localhost:1234/v1/chat/completions \ -H &quot;Content-Type: application/json&quot; \ -d '{ &quot;model&quot;: &quot;deepseek-r1-distill-qwen-7b&quot;, &quot;messages&quot;: [ { &quot;role&quot;: &quot;system&quot;, &quot;content&quot;: &quot;Always answer in rhymes. Today is Thursday&quot; }, { &quot;role&quot;: &quot;user&quot;, &quot;content&quot;: &quot;What day is it today?&quot; } ], &quot;temperature&quot;: 0.7, &quot;max_tokens&quot;: -1, &quot;stream&quot;: false }' </code></pre> <p>why this happening and how could I fix it?</p>
<python><large-language-model><rag><lm-studio>
2025-02-21 06:06:22
1
1,576
leo0807
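One common cause of a 503 from requests while curl works against localhost is proxy environment variables: requests honours HTTP_PROXY/HTTPS_PROXY and may route the call through a proxy that cannot reach your machine, while the curl invocation does not pick them up. A sketch that bypasses any environment proxies, keeping the question's payload:

```python
import requests

url = "http://localhost:1234/v1/chat/completions"
data = {
    "model": "deepseek-r1-distill-qwen-7b",
    "messages": [
        {"role": "system", "content": "Always answer in rhymes. Today is Thursday"},
        {"role": "user", "content": "What day is it today?"},
    ],
    "temperature": 0.7,
    "max_tokens": -1,
    "stream": False,
}

# Option 1: disable proxies for this call only (json= also sets the Content-Type header).
response = requests.post(url, json=data, proxies={"http": None, "https": None}, timeout=120)

# Option 2: a session that ignores proxy environment variables entirely.
session = requests.Session()
session.trust_env = False
response = session.post(url, json=data, timeout=120)

print(response.status_code, response.json() if response.ok else response.text)
```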
79,456,337
16,405,935
How to subtract data between columns that have the same suffix
<p>I have a sample dataframe as below whose columns share the suffixes <code>001, 002, 003</code>.</p> <pre><code>import pandas as pd import numpy as np branch_names = [f&quot;Branch_{i}&quot; for i in range(1, 11)] date_1 = '20241231' date_2 = '20250214' date_3 = '20250220' data = { 'Branch': branch_names, date_1 + '_001': np.random.randint(60, 90, 10), date_1 + '_002': np.random.randint(60, 90, 10), date_1 + '_003': np.random.randint(60, 90, 10), date_2 + '_001': np.random.randint(60, 90, 10), date_2 + '_002': np.random.randint(60, 90, 10), date_2 + '_003': np.random.randint(60, 90, 10), date_3 + '_001': np.random.randint(60, 90, 10), date_3 + '_002': np.random.randint(60, 90, 10), date_3 + '_003': np.random.randint(60, 90, 10) } # Convert to DataFrame df = pd.DataFrame(data) </code></pre> <p><a href="https://i.sstatic.net/53iUjbUH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/53iUjbUH.png" alt="enter image description here" /></a></p> <p>Now I want to subtract data between columns that have the same suffix, following this principle:</p> <pre><code>df['diff_1_001'] = df[date_3 + '_001'] - df[date_2 + '_001'] df['diff_2_001'] = df[date_3 + '_001'] - df[date_1 + '_001'] df['diff_1_002'] = df[date_3 + '_002'] - df[date_2 + '_002'] df['diff_2_002'] = df[date_3 + '_002'] - df[date_1 + '_002'] df['diff_1_003'] = df[date_3 + '_003'] - df[date_2 + '_003'] df['diff_2_003'] = df[date_3 + '_003'] - df[date_1 + '_003'] df </code></pre> <p>As you can see, the suffixes <code>001, 002, 003</code> are the same but the prefixes differ. I don't want to hard-code <code>001, 002, 003</code>; the subtraction should be done automatically as described above.</p>
<python><pandas>
2025-02-21 04:09:17
1
1,793
hoa tran
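The column names split cleanly into a date prefix and a numeric suffix, so one approach is to collect the dates once, sort them, and build the diff columns in a loop. A fragment reusing the question's `df`:

```python
# Build the diff columns automatically for every suffix found in df.
suffixes = sorted({c.split("_")[1] for c in df.columns if c != "Branch"})
dates = sorted({c.split("_")[0] for c in df.columns if c != "Branch"})   # oldest ... newest
latest = dates[-1]

for sfx in suffixes:
    # diff_1_<sfx>: latest - previous date, diff_2_<sfx>: latest - the one before, ...
    for i, older in enumerate(reversed(dates[:-1]), start=1):
        df[f"diff_{i}_{sfx}"] = df[f"{latest}_{sfx}"] - df[f"{older}_{sfx}"]

print([c for c in df.columns if c.startswith("diff")])
```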
79,456,253
1,252,307
Connection pool for beanie?
<p>I have the following function in my code:</p> <pre><code>async def context(client_id: str): database = f'db_{client_id}' await init_beanie( database=database, document_models=[ .. # quite a few models ], ) </code></pre> <p>Every time I fetch something from the database I execute the above function (it could be a different client every time).</p> <p>I don't know exactly what <code>init_beanie</code> does, but it seems there is an overhead in calling it every time.</p> <p>I looked through the <a href="https://beanie-odm.dev/tutorial/initialization/" rel="nofollow noreferrer">docs</a> and couldn't find something like a connection pool or similar. Does such a thing exist, or is it not necessary?</p>
<python><mongodb><beanie>
2025-02-21 02:40:21
1
9,915
kev
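init_beanie mainly registers the document models against the given database; the connection pooling lives in the underlying Motor/PyMongo client. So the usual pattern is to create one AsyncIOMotorClient at startup (it owns the pool) and reuse it, rather than building a new client per call. A hedged sketch; note that Beanie binds the models globally to whichever database was initialized last, which matters if requests for different clients interleave:

```python
from motor.motor_asyncio import AsyncIOMotorClient
from beanie import init_beanie

# One client for the whole process: Motor/PyMongo keep the connection pool here.
_client = AsyncIOMotorClient("mongodb://localhost:27017", maxPoolSize=50)

async def context(client_id: str) -> None:
    # init_beanie registers the document models against this database; it reuses
    # the already-open client, so no new connections are created per request.
    await init_beanie(
        database=_client[f"db_{client_id}"],
        document_models=[
            # ... the same models as in the question
        ],
    )
```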
79,455,927
6,471,140
How to get the generated target query from a self-query retriever (LangChain)
<p>I'm implementing a <a href="https://python.langchain.com/api_reference/langchain/retrievers/langchain.retrievers.self_query.base.SelfQueryRetriever.html" rel="nofollow noreferrer">self-query retriever</a> using langchain with OpenSearch as the target vectore store, so far everything is good but we need to capture the generated query in DSL, for debugging and auditing purposes, after some testing I cannot find how to do it, I found how to return thye <a href="https://api.python.langchain.com/en/latest/chains/langchain.chains.query_constructor.ir.StructuredQuery.html" rel="nofollow noreferrer">StructuredQuery</a>, and how to use the StructuredQuery and <a href="https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.self_query.opensearch.OpenSearchTranslator.html" rel="nofollow noreferrer">OpenSearchTranslator</a> to get a step closer to the final query, however it is not the final query sent to OpenSearch. Question is, how to get the query? This is my current code(that returns something close to it but not the final version):</p> <pre><code>opensearch_translator = OpenSearchTranslator() def show_translated_query(query): chain_structured_query = retriever.llm_chain.invoke(query) print(&quot;langchain structured query:&quot;) print(chain_structured_query) os_structured_query = opensearch_translator.visit_structured_query(chain_structured_query) print(&quot;OS query(semantic, filter):&quot;) print(os_structured_query) show_translated_query(&quot;a fire ocurring before 2023&quot;) &gt;&gt;langchain structured query: &gt;&gt;query='fire' filter=Comparison(comparator=&lt;Comparator.LT: 'lt'&gt;, attribute='year', value=2023) limit=None &gt;&gt;OS query(semantic, filter): &gt;&gt;('fire', {'filter': {'range': {'metadata.year': {'lt': 2023}}}}) </code></pre>
<python><nlp><artificial-intelligence><langchain>
2025-02-20 21:57:26
0
3,554
Luis Leal
79,455,807
182,781
Python Debugger not contributing to built-in Terminal's environment
<p>I'm unable to get the Python Debugger to contribute to the built-in Terminal's output. I learned about that feature in a screenshot here: <a href="https://github.com/microsoft/vscode-python-debugger/wiki/No%E2%80%90Config-Debugging" rel="nofollow noreferrer">https://github.com/microsoft/vscode-python-debugger/wiki/No%E2%80%90Config-Debugging</a>.</p> <p>My ultimate goal is to get that no-config workflow documented there to work, but I am completely unable to get the Python Debugger to show up in that status popover that appears on hover over the zsh icon in the terminal pane, and that seems to be a prerequisite.</p> <p>I verified that I have the latest version of the Python Debugger extension enabled. I also found this existing post about the issue: <a href="https://stackoverflow.com/questions/79383900/vscode-python-not-contributing-to-terminal-environment">vscode Python not contributing to Terminal Environment</a>. I tried the solution in there of force-enabling the <code>pythonTerminalEnvVarActivation</code> experiment, that didn't enable it either.</p> <p>Confusingly, the instructions for no-config on the wiki page noted above also specifically say to force that same experiment <em>off</em> because it's not compatible, so that contradiction is extra confusing.</p> <p>Even ignoring that using no-config debugging is my ultimate goal, has anybody else hit this issue of Python Debugger not showing up in the popover and contributing to the environment, and how did you fix it?</p>
<python><visual-studio-code>
2025-02-20 20:47:27
0
4,701
Marc Liyanage
79,455,749
3,732,793
Command not found when installing it with poetry
<p>Python is 3.12.3 on Linux, and in pyproject.toml:</p> <pre><code>[tool.poetry.dependencies] deptry = &quot;^0.18.0&quot; </code></pre> <p>and poetry reports</p> <pre><code>The following packages are already present in the pyproject.toml and will be skipped: - deptry </code></pre> <p>poetry install ran without any problems,</p> <p>but deptry is not found when called directly or via poetry.</p> <p>Any hint why this could be?</p>
<python><python-poetry>
2025-02-20 20:18:50
2
1,990
user3732793
79,455,684
16,383,578
How can one scrape any table from Wikipedia in Python?
<p>I want to scrape tables from Wikipedia in Python. Wikipedia is a good source to get data from, but the data present is in HTML format which is extremely machine unfriendly and cannot be used directly. I want the data in JSON format.</p> <p>As an example, the following scrapes the primary table from here: <a href="https://en.wikipedia.org/wiki/Unicode_block" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Unicode_block</a></p> <pre><code>import re import requests from lxml import html res = requests.get('https://en.wikipedia.org/wiki/Unicode_block').content tree = html.fromstring(res) UNICODE_BLOCKS = [] for block in tree.xpath(&quot;.//table[contains(@class, 'wikitable')]/tbody/tr/td/span[@class='monospaced']&quot;): codes = block.text start, end = (int(i[2:], 16) for i in codes.split('..')) row = block.xpath('./ancestor::tr')[0] block_name = re.sub('\n|\[\w+\]', '', row.find('./td[3]/a').text) assigned = int(row.find('./td[5]').text.replace(',', '')) scripts = row.find('./td[6]').text_content() if ',' in scripts: systems = [] for script in scripts.split(', '): i = script.index('(') name = script[:i-1] count = int(script[i+1:].split(&quot; &quot;)[0].replace(',', '')) systems.append((name, count)) else: systems = [(scripts.strip(), assigned)] UNICODE_BLOCKS.append((start, end, block_name, assigned, systems)) </code></pre> <p>It does exactly what I wanted and I have painstakingly verified its correctness, but as you can see it is rather complicated and works only for that specific table.</p> <p>Although I can try the same strategy with simple tables like the one listed, Wikipedia has many tables with merged cells, and my strategy won't work with them.</p> <p>A simple example is the second table from the linked page. How can I turn it into the following:</p> <pre><code>[ (0x1000, 0x105f, 'Tibetan', '1.0.0', '1.0.1', 'Myanmar', 'Tibetan', 96, 71, 'Tibetan'), (0x3400, 0x3d2d, 'Hangul', '1.0.0', '2.0', 'CJK Unified Ideographs Extension A', 'Hangul Syllables', 2350, 2350, 'Hangul'), (0x3d2e, 0x44b7, 'Hangul Supplementary-A', '1.1', '2.0', 'CJK Unified Ideographs Extension A', 'Hangul Syllables', 1930, 1930, 'Hangul'), (0x44b8, 0x4dff, 'Hangul Supplementary-B', '1.1', '2.0', 'CJK Unified Ideographs Extension A and Yijing Hexagram Symbols', 'Hangul Syllables', 2376, 2376, 'Hangul') ] </code></pre> <p>I remember encountering many tables like the above, but I somehow have trouble finding one when specifically looking for them. 
But I was able to find the following page: <a href="https://en.wikipedia.org/wiki/Lindsey_Stirling_discography" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Lindsey_Stirling_discography</a></p> <p>How can I turn the Singles table into the following:</p> <pre><code>[ ('&quot;Crystallize&quot;', 2012, 'Lindsey Stirling'), ('&quot;Beyond the Veil&quot;', 2014, 'Shatter Me'), ('&quot;Shatter Me&quot; featuring Lzzy Hale)', 2014, 'Shatter Me'), ('&quot;Take Flight&quot;', 2014, 'Shatter Me'), ('&quot;Master of Tides&quot;', 2014, 'Shatter Me'), ('&quot;Hallelujah&quot;', 2015, 'Non-album single'), ('&quot;The Arena&quot;', 2016, 'Brave Enough'), ('&quot;Something Wild&quot; (featuring Andrew McMahon)', 2016, &quot;Brave Enough and Pete's Dragon&quot;), ('&quot;Prism&quot;', 2016, 'Brave Enough'), ('&quot;Hold My Heart&quot; (featuring ZZ Ward)', 2016, 'Brave Enough'), ('&quot;Love\'s Just a Feeling&quot; (featuring Rooty)', 2017, 'Brave Enough'), ('&quot;Dance of the Sugar Plum Fairy&quot;', 2017, 'Warmer in the Winter'), ('&quot;Christmas C\'mon&quot; (featuring Becky G)', 2017, 'Warmer in the Winter'), ('&quot;Warmer in the Winter&quot; (featuring Trombone Shorty)', 2018, 'Warmer in the Winter'), ('&quot;Carol of the Bells&quot;', 2018, 'Warmer in the Winter'), ('&quot;Underground&quot;', 2019, 'Artemis'), ('&quot;The Upside&quot; (solo or featuring Elle King)', 2019, 'Artemis'), ('&quot;Artemis&quot;', 2019, 'Artemis'), ('&quot;What You\'re Made Of&quot; (featuring Kiesza)', 2020, 'Azur Lane Soundtrack'), ('&quot;Lose You Now&quot;', 2021, 'Lose You Now'), ('&quot;Joy to the World&quot;', 2022, 'Snow Waltz'), ('&quot;Sleigh Ride&quot;', 2023, 'Snow Waltz'), ('&quot;Kashmir&quot;', 2023, 'Non-album single'), ('&quot;Carol of the Bells&quot; (Live from Summer Tour 2023)', 2023, 'Non-album single'), ('&quot;Heavy Weight&quot;', 2023, 'Beat Saber Original Soundtrack Vol. 6'), ('&quot;Eye of the Untold Her&quot;', 2024, 'Duality'), ('&quot;Inner Gold&quot; (featuring Royal &amp; the Serpent)', 2024, 'Duality'), ('&quot;You\'re a Mean One, Mr. Grinch&quot; featuring Sabrina Carpenter)', 2024, 'Warmer in the Winter'), ] </code></pre> <p>I had seen a bunch of similar questions, and many of them use <code>pandas</code> + <code>bs4</code>. I don't like <code>pandas</code> and <code>bs4</code> and I don't personally use them, and this question isn't about them, but to show my research I just downloaded <code>pandas</code>, which forced me to download <code>html5lib</code> and <code>beautifulsoup4</code>. Both of them I rarely used, in fact I don't remember ever using them, I primarily use <code>aiohttp</code> + <code>lxml</code> (although in this case I use <code>requests</code>).</p> <p>Now the following code doesn't work, and this question isn't about making it work:</p> <pre><code>import pandas as pd import requests from lxml import etree, html res = requests.get('https://en.wikipedia.org/wiki/Unicode_block').content tree = html.fromstring(res) pd.read_html(etree.tostring(tree.xpath(&quot;.//table[contains(@class, 'wikitable')]/tbody&quot;)[0]))[0] </code></pre> <p>It raises error:</p> <pre><code>ValueError: No tables found </code></pre> <p>I included the code to prevent the question from being closed as duplicates of those that use <code>pandas</code>. I don't like <code>pandas</code>.</p> <p>What is the proper way to scrape tables from Wikipedia? Answers shouldn't be so simple as &quot;just use <code>pandas</code>&quot;. 
Or, if using <code>pandas</code> is the go-to way, the answer has to demonstrate how it can parse all tables from Wikipedia correctly, especially parsing the two example tables given into the format given.</p> <hr /> <p>Thanks for the answer. I now know how to use <code>pandas</code> to correctly parse tables from Wikipedia, it seems to work with all tables I throw at it, but now I realize I don't need it at all, and <code>pandas</code> is truly useless to me.</p> <p>There are three major issues with <code>pandas</code>, it doesn't clean the data, it includes some rows at the end that are clearly not data rows, and it keeps the useless header rows, the indices it generates for the Singles table given in the third link is really horrible.</p> <p>And then it doesn't store the data in the format I want, which requires me to do post-processing to transform the data into the format I want, which adds execution time.</p> <p>And finally this is the fatal flaw, <code>pandas</code> is extremely slow, I am not lying, I had done many tests back when I tried to use <code>pandas</code>, the code I wrote using pure Python that create <code>list</code> of <code>tuple</code>s always consistently beat <code>pandas</code> at converting the same data, my solutions use less time every single time, no exceptions.</p> <p>With the first link, to convert Unicode data blocks wikitable to <code>DataFrame</code>:</p> <pre><code>In [126]: %timeit pd.read_html(etree.tostring(tree.xpath(&quot;.//table[contains(@class, 'wikitable')]&quot;)[0]))[0] 47.6 ms ยฑ 577 ฮผs per loop (mean ยฑ std. dev. of 7 runs, 10 loops each) </code></pre> <p>It takes about 50 milliseconds just to convert the data to <code>DataFrame</code>, and I don't use <code>pandas</code>, I need to convert said <code>DataFrame</code> to my specified format later, just converting the <code>DataFrame</code> takes so long, it is unacceptable.</p> <p>Now I used PtIPython to benchmark the code executions, with <code>%%timeit</code> cell block magic, I benchmarked the codeblock starting from declaration of <code>UNICODE_BLOCKS</code> (inclusive):</p> <pre><code>32.4 ms ยฑ 472 ฮผs per loop (mean ยฑ std. dev. of 7 runs, 10 loops each) </code></pre> <p>My original code was already way faster than the <code>pandas</code> solution.</p> <p>I rewrote my code to this:</p> <pre><code>import pandas as pd import re import requests from lxml import etree, html res = requests.get('https://en.wikipedia.org/wiki/Unicode_block').content tree = html.fromstring(res) clean = re.compile(r'\n|\[\w+\]') #%%timeit UNICODE_BLOCKS = [] rows = tree.xpath(&quot;.//table[contains(@class, 'wikitable')][1]/tbody/tr&quot;) for row in rows[2:-1]: start, end = (int(i[2:], 16) for i in row.find('./td[2]/span').text.split('..')) block_name = clean.sub('', row.find('./td[3]/a').text) assigned = int(row.find('./td[5]').text.replace(',', '')) scripts = row.find('./td[6]').text_content() if ',' in scripts: systems = [] for script in scripts.split(', '): i = script.index('(') name = script[:i-1] count = int(script[i+1:].split(&quot; &quot;)[0].replace(',', '')) systems.append((name, count)) else: systems = [(scripts.strip(), assigned)] UNICODE_BLOCKS.append((start, end, block_name, assigned, systems)) </code></pre> <p>The processing of the data, discounting fetching the webpage and creating the DOM, takes:</p> <pre><code>26.4 ms ยฑ 550 ฮผs per loop (mean ยฑ std. dev. 
of 7 runs, 10 loops each) </code></pre> <p>As you can see, now it takes only half as much time as <code>pandas</code> solution.</p> <p>And I didn't stop there, I wrote code to process the table given as my last example:</p> <pre><code>res1 = requests.get('https://en.wikipedia.org/wiki/Lindsey_Stirling_discography').content tree1 = html.fromstring(res1) #%%timeit rows = tree1.xpath(&quot;.//table[contains(@class, 'wikitable')][5]/tbody/tr&quot;)[2:-1] same_year = same_album = 0 songs = [] for row in rows: song = row.find('./th').text_content().strip() if not same_year: year_cell = row.find('./td[1]') same_year = int(year_cell.get('rowspan', 0)) year = int(year_cell.text) decrement = 0 else: decrement = 1 if not same_album: album_cell = row.find(f'./td[{11 - decrement}]') same_album = int(album_cell.get('rowspan', 0)) album = album_cell.text_content().strip() songs.append((song, year, album)) same_year = max(0, same_year - 1) same_album = max(0, same_album - 1) </code></pre> <p>It works perfectly, although it takes me many times to make it right, the end result is worth it, and this strategy work with every table that has <code>rowspan</code> but not <code>colspan</code>.</p> <p>Processing of data:</p> <pre><code>1.37 ms ยฑ 8.19 ฮผs per loop (mean ยฑ std. dev. of 7 runs, 1,000 loops each) </code></pre> <p><code>pandas</code> solution:</p> <pre><code>In [130]: %timeit pd.read_html(etree.tostring(tree1.xpath(&quot;.//table[contains(@class, 'wikitable')][5]&quot;)[0]))[0] 11.3 ms ยฑ 500 ฮผs per loop (mean ยฑ std. dev. of 7 runs, 100 loops each) </code></pre> <p>As you can see, <code>pandas</code> code takes much more time to execute than my superior solution.</p> <p>So it looks like there really is no good way to scrape tables from Wikipedia, I am now working on parsing tables with both <code>colspan</code> and <code>rowspan</code>, and I will succeed. <code>pandas</code> is really useless and I will continue to not use it.</p> <hr /> <p>I just found these: <a href="https://stackoverflow.com/questions/9978445/parsing-a-table-with-rowspan-and-colspan">Parsing a table with rowspan and colspan</a> and <a href="https://stackoverflow.com/questions/48393253/how-to-parse-table-with-rowspan-and-colspan">How to parse table with rowspan and colspan</a>, I will need more testing, but it seems I had a good head start now.</p> <p>And no, this question isn't a duplicate of those, I will have to adapt code to my own needs, but now it seems I can write a generic class to parse Wikipedia tables into a <code>list</code> of <code>tuple</code>s, then I only need to write code to process the 2D-list to customize the table.</p>
<python><web-scraping><wikipedia>
2025-02-20 19:54:55
1
3,930
Ξένη Γήινος
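For the rowspan/colspan case the question ends on, the usual approach is to expand the table into a 2-D grid yourself: walk the tr rows, keep a map of grid positions already claimed by earlier spanning cells, and copy each cell's text into every position it covers. A self-contained sketch with lxml; the `[2]` table index and the printed slice are illustrative, and cleaning/typing of the values is left to the caller:

```python
import requests
from lxml import html

def expand_table(table):
    """Expand an lxml <table> element into a list of row tuples, honouring rowspan/colspan."""
    grid = {}                                    # (row, col) -> cell text
    for r, tr in enumerate(table.xpath(".//tr")):
        c = 0
        for cell in tr.xpath("./th|./td"):
            while (r, c) in grid:                # skip positions filled by earlier spans
                c += 1
            text = " ".join(cell.text_content().split())
            rs = int(cell.get("rowspan", 1) or 1)
            cs = int(cell.get("colspan", 1) or 1)
            for dr in range(rs):                 # copy the text into every spanned position
                for dc in range(cs):
                    grid[r + dr, c + dc] = text
            c += cs
    n_rows = max(r for r, _ in grid) + 1
    n_cols = max(c for _, c in grid) + 1
    return [tuple(grid.get((r, c), "") for c in range(n_cols)) for r in range(n_rows)]

tree = html.fromstring(requests.get("https://en.wikipedia.org/wiki/Unicode_block").content)
rows = expand_table(tree.xpath("(.//table[contains(@class, 'wikitable')])[2]")[0])
for row in rows[:5]:
    print(row)
```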
79,455,667
3,936,496
mix two dataframes with every second row
<p>I have two dataframes that I want to mix together: after every two rows of the first table, the next two rows of the second table should follow, like this:</p> <pre><code>First table Second table column_1 column_1 1 5 2 6 3 7 4 8 </code></pre> <p>And then the new table will be like this:</p> <pre><code>new table column_1 1 2 5 6 3 4 7 8 </code></pre> <p>Is there an easy way to do this with pandas?</p>
<python><pandas>
2025-02-20 19:44:57
4
401
pinq-
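One concise way is to give every row a pair index (its position divided by two) plus a source index, concatenate both frames, and sort on those two keys. A sketch:

```python
import numpy as np
import pandas as pd

df1 = pd.DataFrame({"column_1": [1, 2, 3, 4]})
df2 = pd.DataFrame({"column_1": [5, 6, 7, 8]})

parts = [
    d.assign(_pair=np.arange(len(d)) // 2, _src=i)   # pair index within each frame, source order
    for i, d in enumerate([df1, df2])
]
out = (
    pd.concat(parts)
    .sort_values(["_pair", "_src"], kind="stable")
    .drop(columns=["_pair", "_src"])
    .reset_index(drop=True)
)
print(out["column_1"].tolist())   # [1, 2, 5, 6, 3, 4, 7, 8]
```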
79,455,504
1,194,864
Load a Phi-3 model, extract the attention layers and visualize them
<p>I would like to visualize the attention layer of a <code>Phi-3-medium-4k-instruct</code> (or mini) model downloaded from hugging-face. In particular, I am using the following <code>model, tokenizer</code>:</p> <pre><code>import torch from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline import pdb tokenizer = AutoTokenizer.from_pretrained(&quot;microsoft/Phi-3-medium-4k-instruct&quot;) model = AutoModelForCausalLM.from_pretrained( &quot;microsoft/Phi-3-meduium-4k-instruct&quot;, device_map = &quot;auto&quot;, torch_dtype = &quot;auto&quot;, trust_remote_code = True ) # Create a pipeline generator = pipeline( &quot;text-generation&quot;, model = model, tokenizer = tokenizer, return_full_text= False, max_new_tokens = 50, do_sample = False ) prompt = &quot;...&quot; input_ids = tokenizer(prompt, return_tensors = &quot;pt&quot;).input_ids # tokenize the input prompt input_ids = input_ids.to(&quot;cuda:0&quot;) # get the output of the model model_output = model.model(input_ids) # extract the attention layer attention = model_output[-1] </code></pre> <p>Firstly, I am wondering if that is the correct way to extract attention from my model. What should expect from this model and how can I visualize it properly? Isn't that I should expect a matrix <code>n_tokens x n_tokens</code>?</p> <p>The <code>attention</code> variable I have extracted has a size of <code>1x40x40x15x15</code> (or <code>1x12x12x15x15</code> in the case of <code>mini</code> model), where the first dimension corresponds to different layers the second for the different <code>heads</code>, and the final two for the <code>attention matrix</code>. That is actually my assumption and I am not sure whether it is correct. When I am visualizing the attention I am getting some very weird matrices like:</p> <p><a href="https://i.sstatic.net/JLZsHi2C.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JLZsHi2C.png" alt="enter image description here" /></a></p> <p>What we see in this Figure, I assume is all the heads for one layer. However, most of the heads distribute the attention equally to all the tokens. Does that make sense?</p> <p>Edit: For the visualization I am doing sth like:</p> <pre><code># Save attention visualization code def save_attention_image(attention, tokens, filename='attention.png'): &quot;&quot;&quot; Save the attention weights for a specific layer and head as an image. :param attention: The attention weights from the model. :param tokens: The tokens corresponding to the input. :param layer_num: The layer number to visualize. :param head_num: The head number to visualize. :param filename: The filename to save the image. &quot;&quot;&quot; attn = attention[0].detach().cpu().float().numpy() num_heads = attn.shape[0] fig, axes = plt.subplots(3, 4, figsize=(20, 15)) # Adjust the grid size as needed for i, ax in enumerate(axes.flat): if i &lt; num_heads: cax = ax.matshow(attn[i], cmap='viridis') ax.set_title(f'Head {i + 1}') ax.set_xticks(range(len(tokens))) ax.set_yticks(range(len(tokens))) ax.set_xticklabels(tokens, rotation=90) ax.set_yticklabels(tokens) else: ax.axis('off') fig.colorbar(cax, ax=axes.ravel().tolist()) plt.suptitle(f'Layer {1}') plt.savefig(filename) plt.close() </code></pre>
<python><pytorch><huggingface-transformers><attention-model>
2025-02-20 18:25:48
1
5,452
Jose Ramon
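With Hugging Face transformers the cleaner route is to request attentions explicitly: `outputs.attentions` is then a tuple with one tensor per layer, each shaped (batch, num_heads, seq_len, seq_len), so a 1x40x40x15x15 view is layer x head x token x token with the batch dimension still in front. Near-uniform heads are not unusual, but make sure you are not averaging layers and heads together. A hedged sketch (mini model to keep the example small; eager attention so the weights are actually returned):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "microsoft/Phi-3-mini-4k-instruct"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(
    name,
    torch_dtype="auto",
    trust_remote_code=True,
    attn_implementation="eager",   # sdpa/flash kernels do not return attention weights
)

inputs = tokenizer("The quick brown fox", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_attentions=True)

# One tensor per layer, each (batch, num_heads, seq_len, seq_len); rows sum to ~1.
print(len(outputs.attentions), outputs.attentions[0].shape)
layer0_head0 = outputs.attentions[0][0, 0]    # (seq_len, seq_len), lower-triangular (causal)
print(layer0_head0.sum(dim=-1))               # ~1.0 per query position
```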
79,455,366
10,034,073
Allow Enum Names as Valid Inputs with Pydantic's @validate_call
<p><a href="https://stackoverflow.com/questions/67911340/encode-pydantic-field-using-the-enum-name-instead-of-the-value">This question</a> asks about using the name of an enum when serializing a model. I want something like that except with the <code>@validate_call</code> decorator.</p> <p>Take this function <code>foo()</code>:</p> <pre class="lang-py prettyprint-override"><code>from enum import Enum from pydantic import validate_call class Direction(Enum): NORTH = 0 EAST = 1 SOUTH = 2 WEST = 3 @validate_call def foo(d: Direction): print(d) </code></pre> <p>I want all of these inputs to work:</p> <pre class="lang-py prettyprint-override"><code># These work &gt;&gt;&gt; foo(0) Direction.NORTH &gt;&gt;&gt; foo(Direction.EAST) Direction.EAST # These don't, but I want them to &gt;&gt;&gt; foo('WEST') Direction.WEST &gt;&gt;&gt; foo(' sOUtH ') # This would be great though not essential Direction.SOUTH </code></pre> <p>What's the simplest way to do this?</p> <p>If it requires creating a function used as a <code>BeforeValidator</code>, I'd prefer that that function be generic. I have many <code>foo</code>s and many enums, and I don't want a separate validator for handling each one. (At that point, it's easier to validate the type within the function itself instead of using Pydantic).</p>
<python><enums><pydantic>
2025-02-20 17:28:56
1
444
kviLL
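A single generic BeforeValidator can cover every enum: leave members and raw values untouched and map strings to members by trimmed, upper-cased name. Wrapping it in a small Annotated factory keeps each signature short; names below (`ByNameOrValue`, `_by_name`) are illustrative:

```python
from enum import Enum
from typing import Annotated, TypeVar
from pydantic import BeforeValidator, validate_call

E = TypeVar("E", bound=Enum)

def _by_name(enum_cls: type[Enum]):
    def convert(value):
        if isinstance(value, str):
            name = value.strip().upper()
            if name in enum_cls.__members__:
                return enum_cls[name]
        return value                 # fall through to normal value/member validation
    return convert

def ByNameOrValue(enum_cls: type[E]):
    """Annotated alias: accepts members, values, or (case/space-insensitive) names."""
    return Annotated[enum_cls, BeforeValidator(_by_name(enum_cls))]

class Direction(Enum):
    NORTH = 0
    EAST = 1
    SOUTH = 2
    WEST = 3

@validate_call
def foo(d: ByNameOrValue(Direction)):
    print(d)

foo(0); foo(Direction.EAST); foo("WEST"); foo(" sOUtH ")
```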
79,455,179
18,476,381
Fetching an image and metadata from S3 and returning it via API in a RESTful manner
<p>I have an API that takes a time range input and fetches and returns the image back to the client via an API. One addition that makes this a bit tricky is that I also want to return some meta-data in the same response, such as the actual time range of the file. I've read some previous threads about having two different API's to do this but it's not applicable here since locating the image takes some logic of its own.</p> <p>I've provided some snippets in python for what I am currently doing.</p> <pre><code>async def fetch_file_from_s3(file_path): &quot;&quot;&quot; Fetches a file from S3 given its file path. &quot;&quot;&quot; s3_client = get_s3_client() response = s3_client.get_object( Bucket=app_settings.s3_hf_drilling.bucket_name, Key=file_path ) file_body = response[&quot;Body&quot;] return file_body </code></pre> <p>Returning file back to client</p> <pre><code>mime_type, _ = mimetypes.guess_type(file_stream_name) if mime_type is None: mime_type = &quot;application/octet-stream&quot; headers = { &quot;X-File-Start&quot;: str(file_start), &quot;X-File-End&quot;: str(file_end), &quot;X-File-Stream-Name&quot;: file_stream_name, } return Response(content=file_body.read(), media_type=mime_type, headers=headers) </code></pre> <p>I don't know if this is exactly the restful way to do this. On top of that the API just crashes, the image can be upwards of 100mb. Looking for any advice on how to tackle this.</p>
<python><rest><amazon-s3><bucket>
2025-02-20 16:26:47
0
609
Masterstack8080
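Returning the metadata in custom headers alongside the image body is a reasonable REST pattern (the alternatives are a multipart response or a JSON envelope with a presigned URL). The crash at ~100 MB is more likely caused by `file_body.read()` pulling the whole object into memory; streaming the S3 body in chunks avoids that. A hedged sketch assuming FastAPI plus boto3 as in the question; `app`, `get_s3_client`, `BUCKET`, and `locate_image` are placeholders for the question's own lookup logic:

```python
from fastapi.responses import StreamingResponse

CHUNK = 1024 * 1024   # 1 MiB chunks forwarded straight from S3 to the client

def stream_s3_object(s3_client, bucket, key):
    obj = s3_client.get_object(Bucket=bucket, Key=key)
    body = obj["Body"]                                    # botocore StreamingBody
    return body, obj["ContentLength"], obj.get("ContentType", "application/octet-stream")

@app.get("/images")
async def get_image(start: str, end: str):
    file_path, file_start, file_end, name = locate_image(start, end)   # your existing logic
    body, length, content_type = stream_s3_object(get_s3_client(), BUCKET, file_path)
    headers = {
        "Content-Length": str(length),
        "X-File-Start": str(file_start),
        "X-File-End": str(file_end),
        "X-File-Stream-Name": name,
    }
    return StreamingResponse(body.iter_chunks(chunk_size=CHUNK),
                             media_type=content_type, headers=headers)
```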
79,455,175
16,383,578
Python lxml.html SyntaxError: invalid predicate with XPATH when using lxml find
<p>I am using CPython 3.12.6, lxml 5.3.1, Windows 11 Pro 23H2 x64.</p> <p>The following Python code raises an exception:</p> <pre><code>tree.find(&quot;.//table[contains(@class, 'wikitable')]//tr&quot;) </code></pre> <pre><code>SyntaxError: invalid predicate </code></pre> <p>Interestingly the following works:</p> <pre><code>tree.xpath(&quot;.//table[contains(@class, 'wikitable')]//tr&quot;) </code></pre> <p>Why?</p> <p>I am trying to understand why in this case, using <code>lxml.html</code> library, using the same XPATH, invoking <code>.find</code> with it on an <code>lxml.html.HtmlElement</code> object raises an exception, but invoking <code>.xpath</code> with the exact same XPATH on the same object succeeds. Aren't they supposed to be the same?</p>
<python><python-3.x><xpath><lxml>
2025-02-20 16:24:58
1
3,930
Ξένη Γήινος
79,455,091
940,490
Gracefully terminate `asyncio` program in Python with a full queue
<h2>The Problem</h2> <p>I have a simplified example of an asynchronous program (Python 3.9) that is not working when exceptions are raised in futures, and I am looking for ways to gracefully terminate it.</p> <p>In particular, when the number of failed requests exceeds the sum of the queue size and the number of workers, I end up in a situation where the queue is full (see <code>COMMENT1</code> in the code snippet below), and I can no longer submit new requests, nor fail gracefully. I am looking for advice how to avoid this situation by raising the fact that an exception occurred and shutting the entire program down. In this specific case, I do not need to reraise the specific errors of worker as those are logged. Thank you in advance!</p> <pre class="lang-py prettyprint-override"><code>import asyncio HEALTHY_REQUESTS = 10 MAX_WORKERS = 3 QUEUE_SIZE = 19 # REQUESTS &gt;= HEALTHY_REQUESTS + (QUEUE_SIZE + MAX_WORKERS) + 1 ---&gt; full queue, program stalls because queue is full REQUESTS = HEALTHY_REQUESTS + (QUEUE_SIZE + MAX_WORKERS) + 2 async def task_producer(requests): print(f'Starting task producer with: {requests}') request_queue = asyncio.Queue(maxsize=QUEUE_SIZE) # starting tasks print('Initializing futures of the queue') request_workers = [asyncio.ensure_future(request_consumer(request_queue)) for _ in range(MAX_WORKERS)] # Submit requests for req in requests: print(f'Putting request {req} into queue, {request_queue.qsize()}') # COMMENT1: we get stuck here if # REQUESTS = HEALTHY_REQUESTS + (QUEUE_SIZE + MAX_WORKERS) + 2 or more. # We get stuck later if there is one less, # but I think we can ignore this fortunate case where we could still do something await request_queue.put(req) # Wait for all requests to come back await request_queue.put(None) await asyncio.wait(request_workers) print('Getting results from the parser') return 0 async def request_consumer(request_queue): print('request consumer started') while True: request_info = await request_queue.get() await asyncio.sleep(0.5) if request_info is None: await request_queue.put(None) break elif request_info &gt; HEALTHY_REQUESTS: raise RuntimeError(f'Breaking thing in make requests with request={request_info}') print(f'{request_info} - request consumer is finished with request') if __name__ == '__main__': requests = list(range(REQUESTS)) loop = asyncio.new_event_loop() asyncio.set_event_loop(loop) producer = loop.create_task(task_producer(requests)) producer_result = loop.run_until_complete(producer) loop.close() print(f'Output of producer: {producer_result}') </code></pre> <h2>Solution Attempts</h2> <h4>Attempt 1</h4> <p>Tried adding a check for exceptions raised in the worker futures, which in turn I put in the for loop and before putting a request (see <code>COMMENT1</code>). See the code snippet of the worker status check is presented below.</p> <p>This works in the contrived example above in a sense, that it helps terminating the loop, but the program still gets stuck when putting <code>None</code> afterwards. In the actual program, this does not work the same. Somehow the queue gets filled before this check has a chance to return <code>False</code>.</p> <pre class="lang-py prettyprint-override"><code>def workers_are_healthy(request_workers): for worker in request_workers: try: exception = worker.exception() except asyncio.InvalidStateError: continue if isinstance(exception, Exception): return False return True </code></pre> <h4>Attempt 2</h4> <p>Found a something that works, but I really do not like it... 
If I create an instance variable in the class where this code is packaged (effectively a global variable in the example above), I can use it in a separate task to stop the program. In particular, I create another task, which is then passed into <code>loop.run_until_complete</code>. Inside this task, I check every few seconds what the value of the global variable is. If an error occurs inside a specific worker (<code>request_workers</code>), I change its value and hope that this works. Feels very wrong...</p>
<python><asynchronous><python-asyncio>
2025-02-20 15:55:11
2
1,615
J.K.
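The stall happens because `queue.put` blocks forever once every consumer has died. One way out is to race each put against the worker futures with `asyncio.wait(..., return_when=FIRST_COMPLETED)` and abort as soon as a worker finishes early, which, before the sentinel is sent, can only mean it crashed. A sketch of just the producer loop, a fragment reusing the question's names and constants:

```python
async def task_producer(requests):
    request_queue = asyncio.Queue(maxsize=QUEUE_SIZE)
    request_workers = [asyncio.ensure_future(request_consumer(request_queue))
                       for _ in range(MAX_WORKERS)]
    try:
        for req in requests:
            put = asyncio.ensure_future(request_queue.put(req))
            # Either the put succeeds, or some worker finishes (i.e. crashed) first.
            await asyncio.wait({put, *request_workers},
                               return_when=asyncio.FIRST_COMPLETED)
            crashed = [w for w in request_workers if w.done() and w.exception()]
            if crashed:
                put.cancel()
                raise RuntimeError("a worker failed, aborting") from crashed[0].exception()
        await request_queue.put(None)
        await asyncio.wait(request_workers)
        return 0
    finally:
        for w in request_workers:
            w.cancel()               # make sure nothing is left running on failure
```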
79,454,901
2,921,683
Monotonically increasing id order
<p>The spec of monotonically order id <a href="https://spark.apache.org/docs/latest/api/python/reference/pyspark.sql/api/pyspark.sql.functions.monotonically_increasing_id.html" rel="nofollow noreferrer">monotonically_increasing_id</a> says</p> <blockquote> <p>The generated ID is guaranteed to be monotonically increasing and unique, but not consecutive.</p> </blockquote> <p>So I assume there is some ordering otherwise increasing has no meaning. So the question what does increasing mean? Or is it simply a badly named unique id?</p>
<python><dataframe><apache-spark><pyspark><apache-spark-sql>
2025-02-20 14:49:54
1
1,403
BelowZero
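The Spark docs describe the current implementation as putting the partition ID in the upper 31 bits and the record position within the partition in the lower 33 bits, so "monotonically increasing" means: ids follow row order within each partition, and every id in partition n is larger than every id in partition n-1. Across the whole DataFrame that is only a meaningful ordering to the extent the partition and row order are themselves deterministic. A tiny sketch showing the jump between partitions:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import monotonically_increasing_id, spark_partition_id

spark = SparkSession.builder.master("local[2]").getOrCreate()

df = (
    spark.range(6)
    .repartition(2)
    .withColumn("id", monotonically_increasing_id())
    .withColumn("part", spark_partition_id())
)
df.show()
# ids in partition 0 start at 0; ids in partition 1 start at 8589934592 (1 << 33):
# not consecutive, but strictly increasing with (partition, row-in-partition).
```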
79,454,786
1,654,143
Detect highlighted text in .docx
<p>I'm trying to detect text that has a coloured background in a MS Word docx, to separate it from the &quot;normal&quot; text.</p> <pre><code>from docx import Document ... # Load the document doc = Document(docx_path) highlighted_text = [] normal_text = [] # Iterate through all paragraphs for para in doc.paragraphs: # Iterate through all runs in the paragraph for run in para.runs: print(run.text + &quot; - &quot; + str(run.font.highlight_color)) # Check if the run has a highlight color set if run.font.highlight_color is not None: highlighted_text.append(run.text) print(f&quot;Found highlighted text: '{run.text}' with highlight color: {run.font.highlight_color}&quot;) return highlighted_text </code></pre> <p>However, in my test document it's only found grey highlights:</p> <p><a href="https://i.sstatic.net/59A9s1HO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/59A9s1HO.png" alt="text highlights" /></a></p> <p>This is the results from the print statement: Text (normal) - None Text in grey - GRAY_25 (16) Found highlighted text: 'Text in grey ' with highlight color: GRAY_25 (16) Text in yellow - None Text in green - None</p> <p>So not sure where I'm going wrong. I don't think the text has been been shaded as that is across a whole line.</p> <p>Addendum: It only works for grey for me - which I have highlighted in MS Office - however the other highlights, which are getting missed have been done by someone else. This might have been done with an old copy of Office, or docx compatible software or some other method of highlighting he text that isn't &quot;highlighting&quot;</p> <p>Any ideas?</p>
<python><python-3.x><ms-word>
2025-02-20 14:08:31
3
7,007
Ghoul Fool
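Highlights applied as character shading (a w:shd element on the run's properties) do not show up in `font.highlight_color`, which would explain why only the grey highlight you applied yourself is detected. A hedged sketch that checks for both, using python-docx internals (`run._element` is an implementation detail, and "test.docx" is a placeholder filename):

```python
from docx import Document
from docx.oxml.ns import qn

def run_background(run):
    """Return a description of the run's highlight or character shading, if any."""
    if run.font.highlight_color is not None:
        return f"highlight:{run.font.highlight_color}"
    rPr = run._element.rPr                      # may be None if the run has no properties
    if rPr is not None:
        shd = rPr.find(qn("w:shd"))
        if shd is not None and shd.get(qn("w:fill")) not in (None, "auto"):
            return f"shading:#{shd.get(qn('w:fill'))}"
    return None

doc = Document("test.docx")
for para in doc.paragraphs:
    for run in para.runs:
        print(repr(run.text), "-", run_background(run))
```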
79,454,470
1,679,410
Integrate Superset with API using Shillelagh
<p>I am trying to connect my Superset deployment using docker, with JSON APIs. I have tried using it with <a href="https://github.com/betodealmeida/shillelagh/blob/main/ARCHITECTURE.rst" rel="nofollow noreferrer">Shillelagh</a> but i am getting following errors:</p> <p><a href="https://i.sstatic.net/BHFepEQz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BHFepEQz.png" alt="enter image description here" /></a></p> <p>Following are the logs:</p> <blockquote> <p>superset_worker | [2025-02-20 11:37:14,903: INFO/MainProcess] Task sql_lab.get_sql_results[4402fb45-20eb-403f-b621-2f0f5e39ae13] received superset_worker | [2025-02-20 11:37:14,907: INFO/ForkPoolWorker-1] Query 28: Executing 1 statement(s) superset_worker | [2025-02-20 11:37:14,907: INFO/ForkPoolWorker-1] Query 28: Set query to 'running' superset_app<br /> | 2025-02-20 11:37:14,908:INFO:werkzeug:192.168.176.1 - - [20/Feb/2025 11:37:14] &quot;POST /api/v1/sqllab/execute/ HTTP/1.1&quot; 202 - superset_worker | [2025-02-20 11:37:14,913: INFO/ForkPoolWorker-1] Query 28: Running statement 1 out of 1 superset_worker | [2025-02-20 11:37:14,926: INFO/ForkPoolWorker-1] Task sql_lab.get_sql_results[4402fb45-20eb-403f-b621-2f0f5e39ae13] succeeded in 0.023185771999123972s: {'query_id': 28, 'status': 'failed', 'error': 'shillelagh error: Unsupported table: <a href="https://data.cdc.gov/resource/unsk-b7fc.json%27" rel="nofollow noreferrer">https://data.cdc.gov/resource/unsk-b7fc.json'</a>, 'errors': [{'message': 'shillelagh error: Unsupported table: https://data.cdc.gov/resource/unsk-b7fc.json', 'error_type': 'GENERIC_DB_ENGINE_ERROR', 'level': 'error', 'extra': {...}}]}</p> </blockquote> <p>Following is the screenshot of logs if that is easier to read</p> <p><a href="https://i.sstatic.net/BOyXOBJz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BOyXOBJz.png" alt="enter image description here" /></a></p> <p>Would greatly appreciate any help in this regard, thanks!</p>
<python><apache-superset>
2025-02-20 12:13:41
1
2,618
Dakait
79,454,460
2,197,109
Yfinance - Getting Too Many Requests. Rate limited. Try after a while
<p>i am getting Too Many Requests. Rate limited. Try after a while.</p> <p>while trying</p> <pre><code>response = yfinance.Ticker(&quot;MSFT&quot;) </code></pre> <p>my traceback:</p> <pre class="lang-none prettyprint-override"><code>File &quot;/usr/local/lib/python3.13/site-packages/yfinance/scrapers/quote.py&quot;, line 609, in _fetch_info 2025-02-20 17:31:31 result = self._fetch(proxy, modules=modules) 2025-02-20 17:31:31 File &quot;/usr/local/lib/python3.13/site-packages/yfinance/scrapers/quote.py&quot;, line 587, in _fetch 2025-02-20 17:31:31 result = self._data.get_raw_json(_QUOTE_SUMMARY_URL_ + f&quot;/{self._symbol}&quot;, user_agent_headers=self._data.user_agent_headers, params=params_dict, proxy=proxy) 2025-02-20 17:31:31 File &quot;/usr/local/lib/python3.13/site-packages/yfinance/data.py&quot;, line 425, in get_raw_json 2025-02-20 17:31:31 response = self.get(url, user_agent_headers=user_agent_headers, params=params, proxy=proxy, timeout=timeout) 2025-02-20 17:31:31 File &quot;/usr/local/lib/python3.13/site-packages/yfinance/utils.py&quot;, line 104, in wrapper 2025-02-20 17:31:31 result = func(*args, **kwargs) 2025-02-20 17:31:31 File &quot;/usr/local/lib/python3.13/site-packages/yfinance/data.py&quot;, line 344, in get 2025-02-20 17:31:31 return self._make_request(url, request_method = self._session.get, user_agent_headers=user_agent_headers, params=params, proxy=proxy, timeout=timeout) 2025-02-20 17:31:31 ~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2025-02-20 17:31:31 File &quot;/usr/local/lib/python3.13/site-packages/yfinance/utils.py&quot;, line 104, in wrapper 2025-02-20 17:31:31 result = func(*args, **kwargs) 2025-02-20 17:31:31 File &quot;/usr/local/lib/python3.13/site-packages/yfinance/data.py&quot;, line 406, in _make_request 2025-02-20 17:31:31 raise YFRateLimitError() 2025-02-20 17:31:31 yfinance.exceptions.YFRateLimitError: Too Many Requests. Rate limited. Try after a while. </code></pre> <p><strong>Edited info (for reopen):</strong> As can be seen from the discussion of yfinance issue <a href="https://github.com/ranaroussi/yfinance/issues/2422" rel="nofollow noreferrer">here</a>, the problem probably is caused by Yahoo recently implemented some mechanism to trace user scraping their data. The proposed solution is to impersonate a browser's TLS fingerprints. Hence, the question is nothing about the reason closing this question: &quot;This question was caused by a typo or a problem that can no longer be reproduced.&quot; It is a legitimate question. New yfinance version 0.2.58 was released in an attempt to solve it. Yet there is still pending issue on cache problem. We are yet to see a final solution solving all pending issues.</p>
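<p>A hedged sketch of the workaround discussed in the linked issue - impersonating a browser's TLS fingerprint with <code>curl_cffi</code> and passing that session to yfinance. It assumes <code>curl_cffi</code> is installed (<code>pip install curl_cffi</code>) and that the installed yfinance version accepts a custom session of this type; upgrading yfinance itself (0.2.58+) may already be enough, so this is only worth trying if the upgrade alone does not help.</p> <pre class="lang-py prettyprint-override"><code># Sketch only: pass a browser-impersonating session to yfinance.
import yfinance as yf
from curl_cffi import requests as curl_requests

session = curl_requests.Session(impersonate='chrome')
ticker = yf.Ticker('MSFT', session=session)
print(ticker.fast_info)
</code></pre>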
<python><yfinance>
2025-02-20 12:12:01
2
775
Kalyanakannan padivasu
79,454,335
3,808,018
Google Earth Engine: Exported NDVI Rasters are Empty Despite Correct Visualization
<p>I am using Google Earth Engine (GEE) to export monthly median NDVI rasters from the MODIS MOD13Q1 dataset, clipped to a specific fire polygon.</p> <h3><strong>Issue</strong></h3> <ul> <li>NDVI visualizes correctly in the GEE Console (with expected values).</li> <li>Exported GeoTIFFs are completely empty (NaN values) despite no errors in the logs.</li> <li>The script successfully initiates exports, but the files contain <strong>no valid NDVI data</strong> when opened in QGIS or Python (<code>rasterio</code>).</li> </ul> <hr /> <h2><strong>What I've Tried</strong></h2> <h3>Visualizing NDVI in GEE Console (works)</h3> <p>I confirmed that <strong>MODIS NDVI</strong> data exists for my region by displaying the <strong>median NDVI for a month</strong>:</p> <pre class="lang-js prettyprint-override"><code>var fireRegion = ee.Geometry.Polygon([ [[-121.974, 37.634], [-121.974, 36.972], [-120.700, 36.972], [-120.700, 37.634]] ]); var modis = ee.ImageCollection(&quot;MODIS/061/MOD13Q1&quot;) .select(&quot;NDVI&quot;) .filterBounds(fireRegion) .filterDate(&quot;2020-08-01&quot;, &quot;2020-08-31&quot;) .median() .multiply(0.0001) // Scale NDVI correctly .clip(fireRegion); var ndviVis = {min: -0.2, max: 1.0, palette: [&quot;red&quot;, &quot;yellow&quot;, &quot;green&quot;]}; Map.centerObject(fireRegion, 8); Map.addLayer(modis, ndviVis, &quot;MODIS NDVI (August 2020)&quot;); print(&quot;NDVI Image:&quot;, modis); </code></pre> <ul> <li>NDVI renders correctly on the GEE map.</li> <li>Batch Export in Python (Fails)</li> </ul> <p>I wrote a script in Python to export NDVI for each month from 2019-2022.</p> <pre><code>import ee import geopandas as gpd import logging from datetime import datetime, timedelta # Set up logging logging.basicConfig(level=logging.INFO, format=&quot;%(asctime)s - %(levelname)s - %(message)s&quot;) # Initialize Earth Engine try: ee.Initialize(project=&quot;sound-of-resiliency&quot;) except Exception as e: ee.Authenticate() ee.Initialize(project=&quot;sound-of-resiliency&quot;) # Load the shapefile shapefile = gpd.read_file(&quot;fire_poly.shp&quot;) # Ensure the shapefile is in WGS84 (EPSG:4326) if shapefile.crs != &quot;EPSG:4326&quot;: shapefile = shapefile.to_crs(&quot;EPSG:4326&quot;) # Convert shapefile to an EE geometry fire_region_geom = shapefile.geometry.union_all() fire_region = ee.Geometry(fire_region_geom.__geo_interface__) # Load MODIS NDVI dataset modis = ee.ImageCollection(&quot;MODIS/061/MOD13Q1&quot;).select(&quot;NDVI&quot;).filterBounds(fire_region) def get_monthly_ndvi(year, month): &quot;&quot;&quot;Compute median NDVI for all images within a month.&quot;&quot;&quot; start = datetime(year, month, 1) end = (start + timedelta(days=32)).replace(day=1) # First day of next month monthly_collection = modis.filterDate(start.strftime(&quot;%Y-%m-%d&quot;), end.strftime(&quot;%Y-%m-%d&quot;)) # Debugging: Print number of images found image_count = monthly_collection.size().getInfo() logging.info(f&quot;MODIS images found for {year}-{month}: {image_count}&quot;) if image_count == 0: logging.warning(f&quot;No MODIS NDVI images found for {year}-{month}. 
Skipping.&quot;) return None monthly_ndvi = ( monthly_collection.median() .multiply(0.0001) # Convert to NDVI scale (-1 to 1) .clip(fire_region) ) return monthly_ndvi def export_ndvi_raster(image, year, month): &quot;&quot;&quot;Export NDVI raster to Google Drive if valid.&quot;&quot;&quot; if image is None: logging.warning(f&quot;Skipping export for {year}-{str(month).zfill(2)} (No NDVI data)&quot;) return image = image.reproject(crs=&quot;EPSG:4326&quot;, scale=250) stats = image.reduceRegion( reducer=ee.Reducer.mean(), geometry=fire_region, scale=250, maxPixels=1e13 ) ndvi_value = stats.get(&quot;NDVI&quot;) if ndvi_value is not None: task = ee.batch.Export.image.toDrive( image=image, description=f&quot;NDVI_{year}_{str(month).zfill(2)}&quot;, folder=&quot;GEE_Exports&quot;, fileNamePrefix=f&quot;NDVI_{year}_{str(month).zfill(2)}&quot;, scale=250, region=fire_region, fileFormat=&quot;GeoTIFF&quot;, maxPixels=1e13 ) task.start() logging.info(f&quot;Export started for NDVI {year}-{str(month).zfill(2)}&quot;) else: logging.warning(f&quot;Skipping export for {year}-{str(month).zfill(2)} (No valid NDVI pixels)&quot;) for year in range(2019, 2023): for month in range(1, 13): logging.info(f&quot;Processing NDVI for {year}-{str(month).zfill(2)}&quot;) monthly_ndvi = get_monthly_ndvi(year, month) export_ndvi_raster(monthly_ndvi, year, month) logging.info(&quot;All NDVI exports have been started. Check GEE tasks panel.&quot;) </code></pre> <p>Checking the Exported .tif Files</p> <p>After downloading the .tif files from Google Drive, I analyzed them using Python (rasterio):</p> <pre><code>import rasterio import numpy as np with rasterio.open(&quot;NDVI_2022_12.tif&quot;) as src: data = src.read(1) print(f&quot;Min: {np.nanmin(data)}, Max: {np.nanmax(data)}, Mean: {np.nanmean(data)}&quot;) Output: Min: nan, Max: nan, Mean: nan </code></pre> <p>The raster has data, but all values are NaN.</p> <p>Things I have checked:</p> <ul> <li>Fire polygon (fire_region) is correctly defined (since NDVI visualizes correctly).</li> <li>MODIS NDVI images exist for each month (verified with size().getInfo()).</li> <li>NDVI is properly scaled (multiply(0.0001)).</li> <li>Reprojection to EPSG:4326 is applied before export.</li> <li>The export is correctly initiated and appears in GEE &quot;Tasks&quot;.</li> <li>โŒ But the exported rasters contain only NaN values.</li> </ul> <p>Questions</p> <ul> <li>Why does the NDVI render fine in GEE but export as NaN when using .median()?</li> <li>Is there a better way to compute and export monthly NDVI from MOD13Q1?</li> <li>Could the .median() operation be causing an issue with missing pixels?</li> <li>Should scale=250 be modified during export, or is MODIS using an incompatible projection?</li> </ul>
<javascript><python><google-earth-engine>
2025-02-20 11:26:24
0
4,132
Derek Corcoran
79,454,185
5,868,293
How to apply different model on different rows of a pandas dataframe?
<p>I have a pandas dataframe that looks like this:</p> <pre><code>import pandas as pd df = pd.DataFrame({'id': [1,2], 'var1': [5,6], 'var2': [20,60], 'var3': [8, -2], 'model_version': ['model_a', 'model_b']}) </code></pre> <p>I have 2 different models, saved in <code>pkl</code> files, which I load them like this:</p> <pre><code>import pickle with open('model_a.pkl', 'rb') as file: model_a = pickle.load(file) with open('model_b.pkl', 'rb') as file: model_b = pickle.load(file) </code></pre> <p>I would like to apply and predict <code>model_a</code> for <code>id==1</code> and <code>model_b</code> for <code>id==2</code>, as indicated in the <code>model_version</code> column in the <code>df</code>.</p> <p>How can I do that in &quot;one-go&quot; ?</p> <p>What I mean in &quot;one-go&quot; is that: I am <strong>NOT</strong> looking for a solution that looks like this:</p> <pre><code>df_a = df.query('model_version==&quot;model_a&quot;') predictions_a = model_a.predict(df_a) df_b = df.query('model_version==&quot;model_b&quot;') predictions_b = model_b.predict(df_b) </code></pre>
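<p>A hedged &quot;one-go&quot; sketch using a single <code>groupby</code> on <code>model_version</code>, so each group of rows is sent to its own model and the predictions come back aligned to the original index. It assumes both models accept the same feature columns; the <code>features</code> list and the <code>models</code> dict are illustrative names, not part of the question.</p> <pre class="lang-py prettyprint-override"><code># Dispatch each group of rows to the model named in model_version.
models = {'model_a': model_a, 'model_b': model_b}
features = ['var1', 'var2', 'var3']  # adjust to the columns the models were trained on

def predict_group(group):
    model = models[group.name]  # group.name is the model_version value
    return pd.Series(model.predict(group[features]), index=group.index)

df['prediction'] = df.groupby('model_version', group_keys=False).apply(predict_group)
</code></pre> <p>Because the returned Series keeps the original row index, the predictions land on the right rows even though the groups are processed separately.</p>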
<python><pandas><scikit-learn>
2025-02-20 10:38:44
1
4,512
quant
79,454,104
166,442
Using match statement with a class in Python 3
<p>Can somebody explain why in the following code Two matches?</p> <pre><code>&gt;&gt;&gt; class One: ... pass ... &gt;&gt;&gt; class Two: ... pass ... &gt;&gt;&gt; a = One() &gt;&gt;&gt; &gt;&gt;&gt; match a.__class__: ... case Two: ... print(&quot;is two&quot;) ... is two &gt;&gt;&gt; </code></pre>
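<p>What seems to be happening: a bare name in a <code>case</code> clause is a capture pattern, so <code>case Two:</code> does not compare against the class <code>Two</code> - it simply binds the subject to the name <code>Two</code> and always matches. A small sketch of the usual fix, matching with class patterns instead:</p> <pre class="lang-py prettyprint-override"><code>class One:
    pass

class Two:
    pass

a = One()

# Class patterns (note the parentheses) match on the type of the subject.
match a:
    case Two():
        print('is two')
    case One():
        print('is one')  # this branch runs for a = One()
</code></pre> <p>Another option is a value pattern, which requires a dotted name (for example <code>case some_module.TWO:</code>), precisely so it cannot be confused with a capture pattern.</p>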
<python><python-3.x>
2025-02-20 10:14:04
2
6,244
knipknap
79,453,988
6,930,340
Applying numpy partition to a multi-dimensional array
<p>I need to find the <code>k</code> smallest element within a <code>np.array</code>. In a simple case you would probably use <code>np.partition</code>.</p> <pre><code>import numpy as np a = np.array([7, 4, 1, 0]) kth = 1 p = np.partition(a, kth) print(f&quot;Partitioned array: {p}&quot;) print(f&quot;kth's smallest element: {p[kth]}&quot;) Partitioned array: [0 1 4 7] kth's smallest element: 1 </code></pre> <p>In my real use case, I need to apply the same technique to a multi-dimensional <code>np.array</code>. Let's take a 4-dim array as an example. The difficulty I am facing is that I need to apply different <code>kth</code>s to each row of that array.<br /> (Hint: <code>array-4d</code> and <code>kths</code> are coming from earlier operations.)</p> <p>Here's the setup:</p> <pre><code>array_4d = np.array( [ [ [ [4, 1, np.nan, 20, 11, 12], ], [ [np.nan, np.nan, np.nan, np.nan, np.nan, np.nan], ], [ [33, 4, 55, 26, 17, 18], ], ], [ [ [7, 8, 9, np.nan, 11, 12], ], [ [np.nan, np.nan, np.nan, np.nan, np.nan, np.nan], ], [ [13, 14, 15, 16, 17, 18], ], ], ] ) kths = np.array( [ [ [[1]], [[2]], [[0]], ], [ [[0]], [[2]], [[1]], ], ] ) print(&quot;4D array:&quot;) print(array_4d) print(f&quot;Shape: {array_4d.shape}&quot;) print(&quot;kths array:&quot;) print(kths) print(f&quot;Shape: {kths.shape}&quot;) 4D array: [[[[ 4. 1. nan 20. 11. 12.]] [[nan nan nan nan nan nan]] [[33. 4. 55. 26. 17. 18.]]] [[[ 7. 8. 9. nan 11. 12.]] [[nan nan nan nan nan nan]] [[13. 14. 15. 16. 17. 18.]]]] Shape: (2, 3, 1, 6) kths array: [[[[1]] [[2]] [[0]]] [[[0]] [[2]] [[1]]]] Shape: (2, 3, 1, 1) </code></pre> <p>I need to apply the different <code>kth</code>s (1, 2, 0, 0, 2, 1) to the respective row in the 4D array and find the respective smallest element at <code>kth</code> position.</p> <p>The expected result should probably look like this:</p> <pre><code>array([[[[ 4.]], [[nan]], [[ 4.]]], [[[ 7.]], [[nan]], [[14.]]]]) </code></pre> <p>EDIT: I am looking for a generalized solution. The input array could have any shape, with the exception that the second-to-last dimension (<code>axis=-2</code>) is always <code>1</code>. For the <code>kth</code> array, the two last dimensions are always <code>1</code>.</p>
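<p>A hedged generalized sketch: a full <code>np.sort</code> along the last axis pushes the NaNs to the end, and <code>np.take_along_axis</code> can then gather one element per row because the trailing dimensions of <code>kths</code> (shape <code>(..., 1, 1)</code>) broadcast against the array. This assumes each <code>kth</code> is smaller than the number of non-NaN values in its row (otherwise a NaN is returned), and it sorts fully rather than partitioning, since <code>np.partition</code> only accepts a single global <code>kth</code>.</p> <pre class="lang-py prettyprint-override"><code>import numpy as np

# Sort each row (NaNs go last), then pick the per-row kth element.
sorted_last = np.sort(array_4d, axis=-1)
result = np.take_along_axis(sorted_last, kths, axis=-1)
print(result)  # shape (2, 3, 1, 1), matching the expected output above
</code></pre>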
<python><numpy>
2025-02-20 09:39:08
1
5,167
Andi
79,453,780
13,560,598
detecting debug mode for python in visual studio
<p>Let's say I have a file called script.py which I am running under Visual Studio 2022 Community Edition. I am running script.py by right-clicking on the window and choosing 'Start with debugging' or 'Start without debugging'. I would like to detect which mode it is launched in from inside the script. For C++, the preprocessor macro _DEBUG is defined in debug mode, and I can use something like</p> <pre><code>#ifdef _DEBUG debugging code #endif </code></pre> <p>I would like to know how to do something similar for Python.</p>
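<p>There is no official <code>_DEBUG</code>-style flag for this, but a common heuristic is to check for the debugger machinery itself: Visual Studio debugs Python through debugpy (older versions used ptvsd), and debuggers also install a trace function. A hedged sketch:</p> <pre class="lang-py prettyprint-override"><code>import sys

def running_under_debugger():
    # Heuristic, not an official flag: look for the VS Python debugger modules
    # (debugpy, or ptvsd in older versions) or an active trace function.
    return (
        'debugpy' in sys.modules
        or 'ptvsd' in sys.modules
        or sys.gettrace() is not None
    )

if running_under_debugger():
    print('debugging code path')
else:
    print('normal code path')
</code></pre>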
<python><visual-studio><debugging>
2025-02-20 08:27:34
0
593
NNN
79,453,775
447,426
handling large tgz with pcap in pyspark - ValueError: can not serialize object larger than 2G
<p>I have a pyspark based pipeline that uses spark.read.format(&quot;binaryFile&quot;) to decompress tgz files and handle the pcap file inside (exploding to packages etc). The code that handles the tar, pcap, and single packets is written as pure python and integrated as &quot;User Defined Function&quot;.</p> <p>This pipeline works fine but files that contain a pcap files larger 2GB yield <code>ValueError: can not serialize object larger than 2G</code></p> <p>is there any way to overcome this?</p> <p>I would like to keep</p> <pre><code>self.unzipped: DataFrame = spark.read.format(&quot;binaryFile&quot;)\ .option(&quot;pathGlobFilter&quot;, &quot;*.pcap.tgz&quot;)\ .option(&quot;compression&quot;, &quot;gzip&quot;)\ .load(folder) </code></pre> <p>Because of the abstraction layer regarding the file source, it works with file://, hadoop:// and other like azure (abfss://) - if you add the dependencies.</p> <p>If not possible what are alternatives?</p> <ul> <li>Since this is an error in python serializer - will this work in an Scala or R implementation?</li> <li>if uncompressing on driver (with pure python code, and creating the first dataframe from chunks of packages from pcap) - how to read the file in similar way that accepts different protocols as path (i would need file:// and abfss://)</li> <li>any other ideas?</li> </ul> <p>Update:</p> <p>I am using Pyspark 3.51</p> <p>Source that raises the error: <a href="https://github.com/apache/spark/blob/bbb9c2c1878e200d9012d2322d979ae794b1d41d/python/pyspark/serializers.py#L160" rel="nofollow noreferrer">https://github.com/apache/spark/blob/bbb9c2c1878e200d9012d2322d979ae794b1d41d/python/pyspark/serializers.py#L160</a></p>
<python><apache-spark><pyspark>
2025-02-20 08:24:29
0
13,125
dermoritz
79,453,757
671,013
Define a dynamic default value of an option based on another option when using Typer
<p>I have the following ~minimal code:</p> <pre class="lang-py prettyprint-override"><code>from typing import Annotated import typer def main( username: Annotated[ str, typer.Option(&quot;--username&quot;, &quot;-u&quot;, help=&quot;Short username&quot;, show_default=False), ], cluster_name: Annotated[ str, typer.Option(&quot;--cluster-name&quot;, &quot;-c&quot;, help=&quot;Name of the cluster&quot;), ] = None, # type: ignore ): if cluster_name is None: cluster_name = f&quot;{username}-cluster&quot; print(&quot;Username is&quot;, username) print(&quot;Cluster name is&quot;, cluster_name) if __name__ == &quot;__main__&quot;: typer.run(main) </code></pre> <p>This works, but obviously, the output of <code>--help</code> is wrong, because the actual/implicit default value of <code>cluster_name</code> is not <code>None</code>. I am trying to understand how can I have a more streamlined solution. Maybe something like:</p> <pre class="lang-py prettyprint-override"><code>cluster_name: Annotated[ str, typer.Option(&quot;--cluster-name&quot;, &quot;-c&quot;, help=&quot;Name of the cluster&quot;), ] = f&quot;{username}-cluster&quot; </code></pre> <p>which obviously is not possible because <code>username</code> is not defined at that point. But, maybe there's a way to get it? I know there's something with the context, but I don't know how to use it in this case.</p>
<python><python-click><typer>
2025-02-20 08:15:47
0
13,161
Dror
79,453,711
2,695,990
How to configure python simple "logging" with different colors for different levels
<p>I have created a simple Python script:</p> <pre class="lang-py prettyprint-override"><code>import logging import sys if __name__ == &quot;__main__&quot;: logging.basicConfig(level=logging.DEBUG) print(&quot;This is a normal print statement&quot;, file=sys.stdout) logging.debug(&quot;debug message&quot;) logging.info(&quot;info message&quot;) logging.warning(&quot;warning message&quot;) </code></pre> <p>I would expect that the color for the &quot;INFO&quot; level would be the same as stdout, or at least some neutral color. But the output of the run of my script is:</p> <p><a href="https://i.sstatic.net/KPNLCT5G.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KPNLCT5G.png" alt="enter image description here" /></a></p> <p>I would like the &quot;INFO&quot; logs to be in at least the same color as the one coming from &quot;print&quot;. I have tried to add a config like this:</p> <pre class="lang-py prettyprint-override"><code>logging.basicConfig(level=logging.DEBUG, format=&quot;%(asctime)s - %(levelname)s - %(message)s&quot;) </code></pre> <p>But this did not help. I do not want to set up &quot;loggers&quot; for my project because using simple &quot;logging&quot; is more convenient for me.</p> <p>In the real project which I am developing, I have several classes where I use all the levels and many log messages, so these RED info logs make the whole log very unreadable.</p> <p>I cannot believe that it's normal for the INFO log to be red, but as you see I don't have any special configs. How can I easily configure &quot;logging&quot; to have proper colouring?</p>
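<p>A likely explanation, with a sketch: <code>logging.basicConfig</code> writes to <code>stderr</code> by default, and many consoles (PyCharm in particular) colour everything on <code>stderr</code> red regardless of the log level, so the colour is probably coming from the console rather than from <code>logging</code>. Routing the handler to <code>stdout</code> is a minimal way to test this assumption:</p> <pre class="lang-py prettyprint-override"><code>import logging
import sys

# Send log records to stdout so the console does not colour them as stderr output.
logging.basicConfig(
    level=logging.DEBUG,
    stream=sys.stdout,
    format='%(asctime)s - %(levelname)s - %(message)s',
)

print('This is a normal print statement')
logging.info('info message, now rendered like normal stdout text')
</code></pre> <p>If real per-level colours are wanted later, that would need a formatter or a small third-party package, but for matching the colour of <code>print</code> the stream change alone should be enough.</p>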
<python><logging><colors><log-level>
2025-02-20 07:57:12
1
3,174
fascynacja
79,453,625
143,091
Download data models while installing my python library
<p>Sometimes, a Python library depends on additional data, such as ML models. This could be a model from <code>transformers</code>, <code>spacy</code>, <code>nltk</code>and so on. Typically there is a command to download such a model:</p> <pre><code>python -m nltk.downloader stopwords </code></pre> <p>How can I have this done automatically when my library is installed, when I use modern python packaging (pyproject.toml) and tools (uv)? The reason is that my library is used in another application and that application shouldn't be concerned what models I use under the hood. But I also don't want to download them at run time. AFAIK, there is no post install step anymore.</p> <p>I could provide a post install command in my package that the user would have to manually execute once. Is there any convention for this?</p> <p>At least for spacy, the models are distributed as wheels. However, they <a href="https://github.com/astral-sh/uv/issues/5478" rel="nofollow noreferrer">can't be used as dependencies</a> by uv since they are lacking versions.</p>
<python><nltk><spacy><pyproject.toml><uv>
2025-02-20 07:21:18
1
10,310
jdm
79,453,531
4,131,060
Using a python library installed in virtual environment from script running in standalone application
<p>There is a Python library that only wants to be installed in a virtual environment, but how do I import the library into scripts running in my standalone application that does not run in a virtual environment?</p> <p>I'm writing a Delphi/Lazarus application that uses the Python4Delphi and Python4Lazarus components to run Python scripts.</p>
<python><delphi><lazarus>
2025-02-20 06:33:11
1
400
Rimfire
79,453,312
13,869,231
How to find points that preserve given nearest neighbor distance?
<p>I have a set of two-dimensional points, <code>X</code>, with dimensions <code>n by 2</code>, and nearest neighbor distances <code>Dx (n by 1)</code>, where <code>Dx(i)</code> is the distance of <code>X(i)</code> to its nearest neighbor. I want to generate a nontrivial <code>Y</code> with the constraint that <code>Dy=Dx</code>. By nontrivial I mean <code>Y</code> can't be a translation, rotation or reflection of <code>X</code>, or any combination of these operations.</p> <p>I tried to generate the <code>Y</code> points iteratively, starting with some random initial point and adding the subsequent points at the specified distance such that each new point doesn't change the previous points' nearest neighbors. But then I'd need to sort the distances and start with the highest distance and proceed to the lowest, because otherwise, if the next distance is higher, the nearest neighbor of the last point would be the previous point, not the newly added point. With the distances sorted, I can get the desired points, but I would prefer them to be more randomized. With the distances sorted, all the initial points are far from each other and all the last points are close to each other.</p>
<python><random>
2025-02-20 03:57:37
2
436
Zain
79,452,945
72,437
When using asyncio to make 4 requests to gemini-1.5-flash, it gives Error code: 429 - Resource has been exhausted, RESOURCE_EXHAUSTED
<p>I try to use <code>gemini-1.5-flash</code>, to process 4 chunk of text, using async way.</p> <pre><code>def generate_readable_transcript(transcript: str, model: str, converter: OpenCC) -&gt; str: readable_transcript = asyncio.run(_generate_readable_transcript( transcript = transcript, model = model, converter = converter )) return readable_transcript async def _generate_readable_transcript(transcript: str, model: str, converter: OpenCC) -&gt; str: try: valid_models = ['gpt-4o-mini', 'gemini-1.5-flash'] if model not in valid_models: raise RuntimeError(f&quot;Unsupported model: {model}.&quot;) system_prompt = ( &quot;You are an assistant that improves the readability of text by adding proper capitalization, &quot; &quot;punctuation, and line breaks without adding or removing any words or content.&quot; ) if model == &quot;gemini-1.5-flash&quot;: client = AsyncOpenAI( base_url=&quot;https://generativelanguage.googleapis.com/v1beta/&quot;, api_key=GEMINI_KEY ) # https://firebase.google.com/docs/vertex-ai/gemini-models limit = 8192 * 0.9 gemeni_client = genai.Client(api_key=GEMINI_KEY) encoding = None else: client = AsyncOpenAI(api_key=OPEN_AI_KEY) # https://platform.openai.com/docs/models limit = 16384 * 0.9 gemeni_client = None encoding = tiktoken.encoding_for_model(model) start_time = time.time() texts = split_text_by_token_limit( text=transcript, limit=limit, gemeni_client=gemeni_client, encoding=encoding ) end_time = time.time() time_ms = (end_time - start_time) * 1000 # Convert to milliseconds print(f&quot;Time taken for split_text_by_token_limit: {time_ms:.2f} ms&quot;) print(f&quot;{len(texts)} splitted text ๐Ÿฐ&quot;) # Define an async helper to process one chunk. async def process_chunk(idx: int, text: str) -&gt; (int, str): user_prompt = ( f&quot;Please rewrite the following text with proper capitalization, punctuation, and line breaks &quot; f&quot;without adding or removing any words or content:\n\n{text}&quot; ) print(f&quot;Chunk {idx} processing... ๐Ÿฐ&quot;) #if idx == 1: # raise Exception(&quot;Simulated exception in chunk 2&quot;) response = await client.chat.completions.create( model=model, temperature=0, response_format={&quot;type&quot;: &quot;text&quot;}, messages=[ {&quot;role&quot;: &quot;system&quot;, &quot;content&quot;: system_prompt}, {&quot;role&quot;: &quot;user&quot;, &quot;content&quot;: user_prompt} ] ) result = response.choices[0].message.content message = f&quot;Chunk {idx} processed ๐Ÿฐ&quot; print(message) return idx, result # Launch all chunk processing tasks concurrently. tasks = [asyncio.create_task(process_chunk(idx, text)) for idx, text in enumerate(texts)] try: results = await asyncio.gather(*tasks) except Exception as e: print(f&quot;Exception during chunk processing: {e}&quot;) for task in tasks: task.cancel() return None print(f&quot;{len(results)} results ๐Ÿฐ&quot;) if len(results) != len(texts): print(&quot;Chunk processing failed ๐Ÿคก&quot;) return None # Sort results by index to preserve sequential order. results.sort(key=lambda x: x[0]) response_content = &quot;\n\n&quot;.join(res for idx, res in results) response_content = response_content.strip() </code></pre> <p>However, the <code>gemini-1.5-flash</code> model always give me error</p> <blockquote> <p>Exception during chunk processing: Error code: 429 - [{'error': {'code': 429, 'message': 'Resource has been exhausted (e.g. check quota).', 'status': 'RESOURCE_EXHAUSTED'}}]</p> </blockquote> <p>I check my quota. It still look good to me. 
It seems like I am allowed to make 2000 requests simultaneously.</p> <p>May I know, how I can further debug this issue? Thank you.</p> <p><a href="https://i.sstatic.net/eAGFpr5v.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eAGFpr5v.png" alt="enter image description here" /></a></p>
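<p>A hedged sketch of one way to work around this: the figures on the quota page are often per-minute, per-model tier limits, so even four parallel requests can trip a free-tier limit; capping concurrency with a semaphore and retrying with backoff usually makes the behaviour visible. The <code>call_with_backoff</code> helper below is illustrative and catches exceptions broadly - in real code the specific rate-limit error type should be caught instead.</p> <pre class="lang-py prettyprint-override"><code>import asyncio

semaphore = asyncio.Semaphore(2)  # tune to the tier's requests-per-minute limit

async def call_with_backoff(make_request, retries=5):
    delay = 2.0
    for attempt in range(retries):
        try:
            async with semaphore:
                return await make_request()
        except Exception as exc:  # ideally: catch the client's 429 error class
            if '429' not in str(exc) or attempt == retries - 1:
                raise
            await asyncio.sleep(delay)
            delay *= 2
</code></pre> <p>Inside <code>process_chunk</code>, the completion call could then be wrapped as <code>await call_with_backoff(lambda: client.chat.completions.create(...))</code> so each retry builds a fresh request.</p>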
<python><google-gemini><google-generativeai>
2025-02-19 22:38:20
1
42,256
Cheok Yan Cheng
79,452,943
13,132,728
Pandas - How to backfill a main dataframe with values from another while prioritizing the main dataframe
<h1>SET UP MY PROBLEM</h1> <p>I have two pandas dataframes. First, I have <code>main</code>:</p> <pre><code>import pandas as pd import numpy as np main = pd.DataFrame({&quot;foo&quot;:{&quot;a&quot;:1.0,&quot;b&quot;:2.0,&quot;c&quot;:3.0,&quot;d&quot;:np.nan},&quot;bar&quot;:{&quot;a&quot;:&quot;a&quot;,&quot;b&quot;:&quot;b&quot;,&quot;c&quot;:np.nan,&quot;d&quot;:&quot;d&quot;},&quot;baz&quot;:{&quot;a&quot;:np.nan,&quot;b&quot;:&quot;x&quot;,&quot;c&quot;:&quot;y&quot;,&quot;d&quot;:&quot;z&quot;}}) </code></pre> <p>Picture for example:</p> <p><a href="https://i.sstatic.net/jyUz5xJF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jyUz5xJF.png" alt="enter image description here" /></a></p> <p>I have a second dataframe, called <code>backfill</code>:</p> <pre><code>backfill = pd.DataFrame({&quot;foo&quot;:{&quot;a&quot;:9.0,&quot;b&quot;:np.nan,&quot;c&quot;:7.0,&quot;d&quot;:4.0},&quot;bar&quot;:{&quot;a&quot;:np.nan,&quot;b&quot;:&quot;r&quot;,&quot;c&quot;:&quot;c&quot;,&quot;d&quot;:&quot;s&quot;},&quot;baz&quot;:{&quot;a&quot;:&quot;w&quot;,&quot;b&quot;:&quot;l&quot;,&quot;c&quot;:&quot;m&quot;,&quot;d&quot;:np.nan},&quot;foobar&quot;:{&quot;a&quot;:&quot;baz&quot;,&quot;b&quot;:np.nan,&quot;c&quot;:np.nan,&quot;d&quot;:np.nan}}) </code></pre> <p>Picture for example:</p> <p><a href="https://i.sstatic.net/o1r2sHA4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/o1r2sHA4.png" alt="enter image description here" /></a></p> <hr /> <hr /> <h1>WHAT I AM TRYING TO DO</h1> <p>I am trying to backfill any missing values or columns in <code>main</code> with the values from <code>backfill</code>, while prioritizing the values already present in <code>main</code>. That is, I would like to fill any missing value in <code>main</code> with the respective value in <code>backfill</code> AND add any columns and their values not in <code>main</code> with the respective values from <code>backfill</code>. 
Is there a one line solution to this part of the problem that works well with the solution to the first step I have figured out, or perhaps an entirely different approach that is more pythonic and computationally efficient?</p> <hr /> <hr /> <h1>MY DESIRED OUTPUT</h1> <p>Based on my explanation above, this is my desired output:</p> <pre><code> {&quot;foo&quot;:{&quot;a&quot;:1,&quot;b&quot;:2,&quot;c&quot;:3,&quot;d&quot;:4},&quot;bar&quot;:{&quot;a&quot;:&quot;a&quot;,&quot;b&quot;:&quot;b&quot;,&quot;c&quot;:&quot;c&quot;,&quot;d&quot;:&quot;d&quot;},&quot;baz&quot;:{&quot;a&quot;:&quot;w&quot;,&quot;b&quot;:&quot;x&quot;,&quot;c&quot;:&quot;y&quot;,&quot;d&quot;:&quot;z&quot;},&quot;foobar&quot;:{&quot;a&quot;:&quot;baz&quot;,&quot;b&quot;:null,&quot;c&quot;:null,&quot;d&quot;:null}} </code></pre> <p>Picture for example:</p> <p><a href="https://i.sstatic.net/r8sT5fkZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/r8sT5fkZ.png" alt="enter image description here" /></a></p> <p>As you can see, the missing values and columns in <code>main</code> are filled with the respective values of <code>backfill</code> while maintaining the populated values of <code>main</code>.</p> <hr /> <hr /> <h1>WHAT I HAVE TRIED</h1> <p>The following code seems to satisfy one of the conditions of my problem:</p> <pre><code>main[main.isnull()] = backfill {&quot;foo&quot;:{&quot;a&quot;:1.0,&quot;b&quot;:2.0,&quot;c&quot;:3.0,&quot;d&quot;:4.0},&quot;bar&quot;:{&quot;a&quot;:&quot;a&quot;,&quot;b&quot;:&quot;b&quot;,&quot;c&quot;:&quot;c&quot;,&quot;d&quot;:&quot;d&quot;},&quot;baz&quot;:{&quot;a&quot;:&quot;w&quot;,&quot;b&quot;:&quot;x&quot;,&quot;c&quot;:&quot;y&quot;,&quot;d&quot;:&quot;z&quot;}} </code></pre> <p>But I cannot figure out how to add the <code>foobar</code> column and its values from <code>backfill</code> to <code>main</code>. Of course I could technically merge it, but I would only want to merge the <code>foobar</code> column, and in the irl issue I am facing, I may not always know the name of columns that need to be added to <code>main</code>, which makes merging seem impractical.</p> <hr /> <p>Thank you for taking a look at my question.</p> <p><strong>EDIT(s): grammar, formatting</strong></p>
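<p>It looks like <code>DataFrame.combine_first</code> does this in one go: values from <code>main</code> take priority, NaNs are filled from <code>backfill</code>, and columns that only exist in <code>backfill</code> (such as <code>foobar</code>) are brought in automatically. A small sketch, with an optional reindex because <code>combine_first</code> may reorder the columns:</p> <pre class="lang-py prettyprint-override"><code># Keep main's values, fill gaps and missing columns from backfill.
result = main.combine_first(backfill)

# Optional: restore main's column order, then append any backfill-only columns.
result = result[list(main.columns) + [c for c in backfill.columns if c not in main.columns]]
print(result)
</code></pre>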
<python><pandas><dataframe><missing-data>
2025-02-19 22:37:04
0
1,645
bismo
79,452,925
605,006
Barebones Python Function App via IaC for Azure
<p>I need a bare minimum Bicep file to create an <strong>EMPTY</strong> Python 3.11 Function App in Azure via Infrastructure as Code (IaC). I do not want App Insights support. I should be able to deploy to a preexisting resource group named rg-py-func-tst via this one line command:</p> <pre class="lang-none prettyprint-override"><code>az deployment group create --resource-group rg-py-func-tst --template-file main.bicep </code></pre> <p>I've looked at samples on Git, tried exporting templates directly out of Azure, and even briefly attempted to use Azure Resource Manager (ARM) templates. Through all of that, this Bicep seems most reasonable to me:</p> <pre class="lang-none prettyprint-override"><code>// Created by Microsoft Copilot and Google Gemini // This Bicep script deploys an EMPTY Python 3.11 Function App named // 'shawnopyFuncTest' without App Insights, at service level Y1. // It creates a storage account named 'pyfunctest100', an app service // plan named 'pyfunctestAppService', and the function app itself. // // I do not have the Python code yet. The Python code will be // deployed in a separate project. I just want to create an // empty Python Function App @description('Name of the storage account') param storageAccountName string = 'pyfunctest100' @description('Name of the app service plan') param appServicePlanName string = 'pyfunctestAppService' @description('Name of the function app') param functionAppName string = 'shawnopyFuncTest' @description('Location for all resources') param location string = resourceGroup().location // Define the storage account resource resource storageAccount 'Microsoft.Storage/storageAccounts@2022-09-01' = { name: storageAccountName location: location sku: { name: 'Standard_LRS' } kind: 'StorageV2' } // Define the app service plan resource resource appServicePlan 'Microsoft.Web/serverfarms@2022-03-01' = { name: appServicePlanName location: location kind: 'linux' sku: { name: 'Y1' tier: 'Dynamic' } properties:{ reserved: true } } // Define the function app resource (Updated API version) resource functionApp 'Microsoft.Web/sites@2022-09-01' = { name: functionAppName location: location kind: 'functionapp,linux' properties: { serverFarmId: appServicePlan.id siteConfig: { linuxFxVersion: 'Python|3.11' appSettings: [ { name: 'FUNCTIONS_WORKER_RUNTIME' value: 'python' } { name: 'AzureWebJobsStorage' value: storageAccount.properties.primaryEndpoints.blob } ] } } } output storageAccountName string = storageAccount.name output appServicePlanName string = appServicePlan.name output functionAppName string = functionApp.name </code></pre> <p>I've ran the above deployment in two completely separate Azure tenants and the result is always the same. My resource group rg-py-func-tst is populated with: a Storage Account named pyfunctest100; an App Service Plan named pyfunctestAppService; and, a Function App named shawnopyFuncTest. 
That's what I want; unfortunately, that isn't the end of the story.</p> <p>When I open my newly deployed empty Function App shawnopyFuncTest, I see the text &quot;Error&quot; in the Runtime Version field: <a href="https://i.sstatic.net/8loutwTK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8loutwTK.png" alt="Azure Portal Function App Page showing Runtime Version of Error" /></a></p> <p>But what I expect to see for an empty Python function app is this: <a href="https://i.sstatic.net/TMwA5IyJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TMwA5IyJ.png" alt="Azure Portal Function App Page for a Fresh Empty Python Function App Runtime Version is 4.1036.2.2" /></a></p> <p>Can someone please help me correct the Bicep that MS Copilot and Gemini created? If not, can you provide a simpler, more clean template that still does what I need?</p>
<python><azure-functions><azure-bicep><infrastructure-as-code>
2025-02-19 22:27:16
1
752
Shawn Eary
79,452,893
3,727,648
poetry-plugin-shell install hangs when installing ptyprocess
<p>Background: Switched from pip to poetry and I am trying to utilize an existing virtual environment by running poetry shell. I am currently on a Windows machine.</p> <p>When I try to install the poetry shell plugin using <code>poetry self add poetry-plugin-shell</code>, the installation hangs (i.e., does not proceed at all and I have to kill the terminal window) when trying to install <code>ptyprocess</code>. I am trying to globally install via the terminal and not in my project directory. Could this be the source of my issue? I have attempted the installation of the plugin several times with no success. I have been able to install other dependencies/packages.</p> <p>Any guidance in the right direction would be appreciated.</p>
<python><python-poetry>
2025-02-19 22:08:05
1
485
MatthewS
79,452,824
15,412,256
Python Polars Encoding Continuous Variables from Breakpoints in another DataFrame
<p>The breakpoints data is the following:</p> <pre class="lang-py prettyprint-override"><code>breakpoints = pl.DataFrame( { &quot;features&quot;: [&quot;feature_0&quot;, &quot;feature_0&quot;, &quot;feature_1&quot;], &quot;breakpoints&quot;: [0.1, 0.5, 1], &quot;n_possible_bins&quot;: [3, 3, 2], } ) print(breakpoints) out: shape: (3, 3) โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ฌโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ฌโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ” โ”‚ features โ”† breakpoints โ”† n_possible_bins โ”‚ โ”‚ --- โ”† --- โ”† --- โ”‚ โ”‚ str โ”† f64 โ”† i64 โ”‚ โ•žโ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•ชโ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•ชโ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•ก โ”‚ feature_0 โ”† 0.1 โ”† 3 โ”‚ โ”‚ feature_0 โ”† 0.5 โ”† 3 โ”‚ โ”‚ feature_1 โ”† 1.0 โ”† 2 โ”‚ โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜ </code></pre> <p>The <code>df</code> has two continous variables that we wish to encode according to the <code>breakpoints</code> DataFrame:</p> <pre class="lang-py prettyprint-override"><code>df = pl.DataFrame( {&quot;feature_0&quot;: [0.05, 0.2, 0.6, 0.8], &quot;feature_1&quot;: [0.5, 1.5, 1.0, 1.1]} ) print(df) out: shape: (4, 2) โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ฌโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ” โ”‚ feature_0 โ”† feature_1 โ”‚ โ”‚ --- โ”† --- โ”‚ โ”‚ f64 โ”† f64 โ”‚ โ•žโ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•ชโ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•ก โ”‚ 0.05 โ”† 0.5 โ”‚ โ”‚ 0.2 โ”† 1.5 โ”‚ โ”‚ 0.6 โ”† 1.0 โ”‚ โ”‚ 0.8 โ”† 1.1 โ”‚ โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜ </code></pre> <p>After the encoding we should have the resulting DataFrame <code>encoded_df</code>:</p> <pre class="lang-py prettyprint-override"><code>encoded_df = pl.DataFrame({&quot;feature_0&quot;: [0, 1, 2, 2], &quot;feature_1&quot;: [0, 1, 0, 1]}) print(encoded_df) out: shape: (4, 2) โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ฌโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ” โ”‚ feature_0 โ”† feature_1 โ”‚ โ”‚ --- โ”† --- โ”‚ โ”‚ i64 โ”† i64 โ”‚ โ•žโ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•ชโ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•ก โ”‚ 0 โ”† 0 โ”‚ โ”‚ 1 โ”† 1 โ”‚ โ”‚ 2 โ”† 0 โ”‚ โ”‚ 2 โ”† 1 โ”‚ โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜ </code></pre> <ol> <li>We can assume that the unique list of features in <code>encoded_df</code> are also available in <code>breakpoints</code></li> <li>Labels should be an array: <code>np.array([str(i) for i in range(n_possible_bins)])</code>, assuming <code>n_possible_bins</code> is a positive integer. <code>n_possible_bins</code> may be different across features.</li> <li>All the encoding follows <code>left_closed=False</code> where the bins are defined as <code>(breakpoint, next breakpoint]</code></li> </ol> <p>I know that <a href="https://docs.pola.rs/api/python/dev/reference/expressions/api/polars.Expr.cut.html" rel="nofollow noreferrer">Polars.Expr.cut()</a> takes in <code>breaks</code> parameter as <code>Sequence[float]</code>, but how do I pass in these breakpoints and labels from the <code>breakpoints</code> DataFrame effectively?</p>
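<p>A hedged sketch of one way to wire this up: build one <code>cut()</code> expression per feature by looking its breakpoints up in the <code>breakpoints</code> frame, then run them all in a single <code>select</code>. It relies on the stated assumptions (every column of <code>df</code> appears in <code>breakpoints</code>, and <code>n_possible_bins</code> equals the number of breakpoints plus one), and it casts the string labels back to integers to match the expected output.</p> <pre class="lang-py prettyprint-override"><code>import polars as pl

exprs = []
for feat in df.columns:
    sub = breakpoints.filter(pl.col('features') == feat)
    breaks = sub['breakpoints'].to_list()
    n_bins = sub['n_possible_bins'][0]
    labels = [str(i) for i in range(n_bins)]  # len(labels) == len(breaks) + 1
    exprs.append(
        pl.col(feat)
        .cut(breaks, labels=labels, left_closed=False)
        .cast(pl.String)   # Categorical -&gt; label string
        .cast(pl.Int64)    # label string -&gt; integer code
        .alias(feat)
    )

encoded_df = df.select(exprs)
print(encoded_df)
</code></pre>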
<python><python-polars>
2025-02-19 21:26:24
3
649
Kevin Li
79,452,813
10,006,235
How to apply a custom function across multiple columns
<p>How to extend this</p> <pre><code>df = df.select( pl.col(&quot;x1&quot;).map_batches(custom_function).alias(&quot;new_x1&quot;) ) </code></pre> <p>to something like</p> <pre><code>df = df.select( pl.col(&quot;x1&quot;,&quot;x2&quot;).map_batches(custom_function).alias(&quot;new_x1&quot;, &quot;new_x2&quot;) ) </code></pre> <p>Or the way to go is doing it one by one</p> <pre><code>df = df.select( pl.col(&quot;x1&quot;).map_batches(custom_function).alias(&quot;new_x1&quot;) pl.col(&quot;x2&quot;).map_batches(custom_function).alias(&quot;new_x2&quot;) ) </code></pre>
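<p>A minimal sketch for the multi-column case: since one expression can only carry one alias, the usual approach is to generate one expression per column with a comprehension.</p> <pre class="lang-py prettyprint-override"><code>import polars as pl

# One map_batches expression per column, named new_x1, new_x2, ...
df = df.select(
    [
        pl.col(name).map_batches(custom_function).alias(f'new_{name}')
        for name in ['x1', 'x2']
    ]
)
</code></pre> <p>Using <code>with_columns</code> instead of <code>select</code> with the same list would keep the original columns and add the new ones alongside them.</p>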
<python><dataframe><python-polars>
2025-02-19 21:20:48
1
474
Nip
79,452,710
8,292,630
uv in PyCharm / handling local imports
<p>Issue: I've created a project with uv as the venv manager. Into this venv I &quot;uv pip install .&quot; my local library called commons. When testing in the terminal all works well, but PyCharm does not recognize the commons module (it is on the list of modules shown by &quot;uv pip list&quot;).</p> <p>Now, in the terminal opened in PyCharm I'm already in my venv, and when I try to import commons in the Python REPL I get an error: module not found. If I deactivate this venv and activate it again, the import works OK.</p> <p>Looking for ideas on how to solve this. I tried to install, reinstall, create a new venv in PyCharm, restart, etc., and nothing seems to work.</p> <p><a href="https://i.sstatic.net/eNRMotvI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eNRMotvI.png" alt="screenshot from terminal in PyCharm" /></a></p>
<python><pycharm><uv><venv>
2025-02-19 20:29:28
0
315
Ranny
79,452,659
509,301
Is the Fernet implementation thread-safe?
<p>I have the following code</p> <pre class="lang-py prettyprint-override"><code>from cryptography.fernet import Fernet encryption_key=&quot;....&quot; fernet_instance = Fernet(encryption_key) encoded_value = fernet_instance.encrypt(&quot;some text&quot;).decode() </code></pre> <p>Is <code>Fernet</code> implementation thread safe? can I use the same instance to concurrently encrypt different strings in different threads?</p>
<python><fernet>
2025-02-19 20:09:33
0
336
Dan Corneanu
79,452,414
7,976,097
Gio.Settings behaviour changing with environment
<p>I have a CMake project where I want to execute a Python script as part of the install target to automatically set a global keyboard shortcut for the target that gets installed.</p> <p>The script itself works when started from the terminal but when I add it to the install target like so</p> <pre class="lang-none prettyprint-override"><code>install( CODE &quot;execute_process( COMMAND /usr/bin/python3 \&quot;${CMAKE_SOURCE_DIR}/tools/enable-shutdown-menu-keybind.py\&quot; --command \&quot;${CMAKE_INSTALL_FULL_BINDIR}/shutdown_menu\&quot; RESULT_VARIABLE _ENABLE_SHUTDOWN_MENU_KEYBIND_RESULT ) if (_ENABLE_SHUTDOWN_MENU_KEYBIND_RESULT GREATER 0) message(FATAL_ERROR \&quot;Enabling shutdown-menu keybinds failed with exit code \${_ENABLE_SHUTDOWN_MENU_KEYBIND_RESULT}\&quot;) endif()&quot; CONFIGURATIONS Release RelWithDebInfo MinSizeRel ) </code></pre> <p>it suddenly fails. In particular it seems to fail when getting the list of custom keybinds from <code>org.gnome.settings-daemon.plugins.media-keys</code> like you would with <code>gsettings get org.gnome.settings-daemon.plugins.media-keys custom-keybindings</code>.</p> <p>I don't think it's a problem resulting from running cmake with elevated rights in order to install the program as running the script from a terminal using sudo works as expected aswell.</p> <p>What i've tried so far:</p> <ul> <li>changing the command to <code>/usr/bin/bash -c \&quot;/usr/bin/python3 '${CMAKE_SOURCE_DIR}/tools/enable-shutdown-menu-keybind.py' --command '${CMAKE_INSTALL_FULL_BINDIR}/shutdown_menu'\&quot;</code></li> <li>adding <code>env GNOME_SHELL_SESSION_MODE='$ENV{GNOME_SHELL_SESSION_MODE}'</code> before the command (yes I made sure this had the desired effect)</li> </ul> <p>My guess is that it has something todo with the environment (which is why I tried the second point) but I'm not sure what I would be missing.</p> <p>EDIT:</p> <p>I just thought of something else I could test. I temporarily set the <code>CMAKE_INSTALL_PREFIX</code> to a directory owned by me and ran the cmake install command without elevated rights. To my surprise the script ran prefectly fine.</p> <p>I guess this means that it has something to do with running cmake as root. In case its important, i elevate cmake using pkexec.</p> <p>EDIT 2:</p> <p>It seems I was wrong about sudo - it fails there too. It did work when I launched the script using a VSCode launch configuration with the <code>&quot;sudo&quot;: true</code> option tho.</p> <p>EDIT 3:</p> <p>I've expanded the install function call to the following:</p> <pre class="lang-none prettyprint-override"><code> install( CODE &quot;if (DEFINED ENV{PKEXEC_UID}) set(UID \$ENV{PKEXEC_UID}) elseif(DEFINED ENV{SUDO_UID}) set(UID \$ENV{SUDO_UID}) else() message(FATAL_ERROR \&quot;Unable to determine callee UID.\&quot;) endif() execute_process( COMMAND sudo -H -u \&quot;#\${UID}\&quot; env DISPLAY=$ENV{DISPLAY} /usr/bin/python3 \&quot;${CMAKE_SOURCE_DIR}/tools/enable-shutdown-menu-keybind.py\&quot; --command \&quot;${CMAKE_INSTALL_FULL_BINDIR}/shutdown_menu\&quot; RESULT_VARIABLE _ENABLE_SHUTDOWN_MENU_KEYBIND_RESULT ) if (_ENABLE_SHUTDOWN_MENU_KEYBIND_RESULT GREATER 0) message(FATAL_ERROR \&quot;Enabling shutdown-menu keybinds failed with exit code \${_ENABLE_SHUTDOWN_MENU_KEYBIND_RESULT}\&quot;) endif()&quot; ) </code></pre> <p>which leads to the program getting executed and reporting to have set the values. But the changes do not seem to get applied, as the keybind shows up neither in dconf, the gnome-command-center nor gsettings.</p>
<python><cmake><gnome><gio>
2025-02-19 18:29:57
1
436
Nummer_42O
79,452,360
11,515,528
Pandas list dates to datetime
<p>I am looking to convert a column with dates in a list [D, M, Y] to a datetime column. The below works but there must be a better way?</p> <pre><code>new_df = pd.DataFrame({'date_parts': [[29, 'August', 2024], [28, 'August', 2024], [27, 'August', 2024]]}) display(new_df) ## Make new columns with dates new_df = pd.concat([new_df, new_df['date_parts'].apply(pd.Series)], axis=1).rename(columns={0:'D', 1:'M', 2:'Y'}) month_map = { 'January':1, 'February':2, 'March':3, 'April':4, 'May':5, 'June':6, 'July':7, 'August':8, 'September':9, 'October':10, 'November':11, 'December':12 } ## make datetime column new_df['release_date'] = pd.to_datetime(dict(year=new_df.Y, month=new_df.M.apply(lambda x: month_map[x]), day=new_df.D), format='%d-%B-%Y') new_df.drop(columns=['D', 'M', 'Y']) </code></pre> <pre><code>## Input date_parts 0 [29, August, 2024] 1 [28, August, 2024] 2 [27, August, 2024] ## Output date_parts release_date 0 [29, August, 2024] 2024-08-29 1 [28, August, 2024] 2024-08-28 2 [27, August, 2024] 2024-08-27 </code></pre>
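<p>A slightly more direct sketch: join each <code>[D, M, Y]</code> list into a single string and let <code>pd.to_datetime</code> parse it with an explicit format, which avoids the intermediate columns and the month map entirely.</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd

new_df = pd.DataFrame({'date_parts': [[29, 'August', 2024], [28, 'August', 2024], [27, 'August', 2024]]})

# Build 'day MonthName year' strings and parse them in one pass.
new_df['release_date'] = pd.to_datetime(
    new_df['date_parts'].map(lambda p: f'{p[0]} {p[1]} {p[2]}'),
    format='%d %B %Y',
)
print(new_df)
</code></pre>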
<python><pandas>
2025-02-19 18:03:57
2
1,865
Cam
79,452,212
1,506,763
Numba np.linalg.eigvalsh exception raised inconsistently
<p>I'm using <code>numba</code> to compile some expensive calcualtion for signifcant performance gains - this is wonderful! Recently I made a small change to the calcualtion to extract some additional values (eigenvalues), cleared the cache and started to test. Everything compiles without error and starts to run.</p> <p>Most times nothing happens but then every so often, it just crashes with the following error message:</p> <pre><code>SystemError: &lt;function _numba_unpickle at 0x0000027532E4EFC0&gt; returned a result with an exception set </code></pre> <p>I've compiled all my functions with either:</p> <pre><code>@njit(error_model=&quot;numpy&quot;, cache=True) </code></pre> <p>or</p> <pre><code>@njit(error_model=&quot;numpy&quot;, cache=True, parallel=True) </code></pre> <p>where applicable. I'm using <code>puthon 3.12.9</code>, <code>numpy 2.1.3</code> and <code>numba 0.61.0</code> and whatever other depdendencies they pull in.</p> <p>Although I've set the debugger to stop upon a raised exception, it doesn't really help (much) as there appears to be nothing wrong with the data or code. Running all the data through the code contained in the compiled function in the console yields no errors and I can even call the compiled function that raised the error in the debugger and it runs. It is not aconsistently reproducible error.</p> <p>When I call the compiled function that raised the error one of three things happens: it returns the correct result, it returns a nonesense result where everything is junk (some value xxxxxe+303 - possibly meaning the eigenvalue calc has failed to converge?) or it returns the exception set error.</p> <p>The occurence of the error seems to be quite random as I test repeatedly on the same data: sometimes it runs into an error after one or two fuction calls, other times it can be a hundred calls to the function before the error re-occurs. In that case I'm able to extract the following error:</p> <pre><code>Traceback (most recent call last): File &quot;C:\Users\jpmor\anaconda3\envs\new_base\Lib\site-packages\numba\np\linalg.py&quot;, line 841, in _check_finite_matrix raise np.linalg.LinAlgError( numpy.linalg.LinAlgError: Array must not contain infs or NaNs. The above exception was the direct cause of the following exception: </code></pre> <p>I only use <code>np.linalg</code> twice in my entire code which helps narrow the field:</p> <pre><code>len = np.linalg.norm(vect[indx, :]) </code></pre> <p>and later</p> <pre><code>eigenvalues = np.linalg.eigvalsh(stress.reshape(3, 3)) </code></pre> <p>Since <code>_check_finite_matrix</code> is called by</p> <pre><code>@overload(np.linalg.eigvals) def eigvals_impl(a): .... </code></pre> <p>this must be the source of my problem but I don't understand why on the same data sometimes it will throw an error and othertimes it runs perfectly. 
From the <code>numpy</code> docs I can see that a <code>LinAlgError</code> will be raised if the eigenvalue computation does not converge.</p> <p>I believe this error is convergence related but only seems to happen with <code>numba</code> as I never seem to get the error when running with the slower, uncompiled function in pure <code>numpy</code>.</p> <p>Here is the full function:</p> <pre><code>@njit(error_model=&quot;numpy&quot;, cache=True) def compute_stress(num_elems, force, vect, weight_func, force_symmetry=False): results = np.zeros((1, 26)) # Compute Contact Stress computed_stress = np.zeros((1, 9)) for indx in range(num_elems): for a in range(3): for b in range(3): sid = 3 * a + b computed_stress[0, sid] += force[indx, a] * vect[indx, b] * weight_func[indx] if force_symmetry: a = (computed_stress[0, 1] + computed_stress[0, 3]) / 2 b = (computed_stress[0, 2] + computed_stress[0, 6]) / 2 c = (computed_stress[0, 5] + computed_stress[0, 7]) / 2 computed_stress[0, 1] = a computed_stress[0, 2] = b computed_stress[0, 5] = c computed_stress[0, 3] = a computed_stress[0, 6] = b computed_stress[0, 7] = c eigenvalues = np.linalg.eigvalsh(computed_stress.reshape(3, 3)) else: eigenvalues = np.linalg.eigvals(computed_stress.reshape(3, 3)) sigma_iso = np.trace(computed_stress.reshape(3, 3)) * np.eye(3) * 0.33333333333333333333 sigma_dev = computed_stress.reshape(3, 3) - sigma_iso eigenvalues.sort() eigenvalues = eigenvalues[::-1] tau_max = (eigenvalues[0] - eigenvalues[-1]) / 2 # compute the stress invariants J2 = 0.5 * np.trace(np.dot(sigma_dev, sigma_dev)) # compute other common stress measures mean_stress = np.mean(eigenvalues) eqv_stress = np.sqrt(3.0 * J2) stress_ratio = eigenvalues[0] / eigenvalues[-1] results[0, :9] = computed_stress results[0, 9:12] = eigenvalues results[0, 12] = tau_max results[0, 13] = mean_stress results[0, 14] = eqv_stress results[0, 15] = stress_ratio results[0, 16] = 0 results[0, 17:] = sigma_dev.reshape(1, -1) return results </code></pre> <p>This function is called within another driver function that has a parallel loop:</p> <pre><code>@njit(error_model=&quot;numpy&quot;, cache=True, parallel=True) def compute_for_mesh(node_list, force_symmetry=False): node_results = np.zeros((node_list.shape[0], 39)) for node_idx in prange(len(node_list)): # more calculations here # ... # ... # ... # Compute Contact Stress if num_elements &gt; 0: # lots of other calculations before call to offending function # ... # ... stresses = compute_stress(num_elements, force, vect, weight_func, force_symmetry=force_symmetry) # ... # ... # ... node_results[node_idx, :26] = stresses return node_results </code></pre> <p>I guess a possible simple solution would be to not use <code>numba</code> but I would like my code to finish running in the same day. Is there an alternative to <code>np.linalg.eigvals</code> that is stable with <code>numba</code>? Is it possible to write a home-made function that would work and be as quick?</p>
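<p>A hedged defensive sketch, given that the traceback shows numba's finiteness guard for the eigenvalue routines firing: checking the tensor before the eigen-solve both avoids the crash inside the compiled/parallel region and reveals whether non-finite stresses are actually being produced there. The reuse of <code>results[0, 16]</code> as a flag is an assumption for illustration only; this does not explain why the values become non-finite in the first place.</p> <pre class="lang-py prettyprint-override"><code># Inside compute_stress, replacing the direct eigvalsh call (sketch):
stress_3x3 = computed_stress.reshape(3, 3)
if np.all(np.isfinite(stress_3x3)):
    eigenvalues = np.linalg.eigvalsh(stress_3x3)
else:
    eigenvalues = np.full(3, np.nan)
    results[0, 16] = 1.0  # illustrative: mark this node as having a bad tensor
</code></pre>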
<python><numpy><linear-algebra><numba><eigenvalue>
2025-02-19 17:06:18
0
676
jpmorr
79,452,027
5,098,711
How to connect from Python to Postgres DB deployed on Azure Container Apps?
<p>I am facing an issue connecting from python to postgres on Azure Container Apps (ACA).</p> <p>What do I need to modify to fix the connection issue?</p> <p>ACA deployment yaml definition:</p> <pre class="lang-yaml prettyprint-override"><code>properties: workloadProfileName: D16 configuration: activeRevisionsMode: Single ingress: external: true targetPort: 5432 template: containers: - image: docker.io/postgres:latest name: postgres-db-container resources: cpu: 1 memory: 2Gi env: - name: POSTGRES_USER value: postgres - name: POSTGRES_PASSWORD value: postgres - name: PGDATA value: /var/lib/postgresql/data/pgdata volumeMounts: - volumeName: pgdata-volume mountPath: /var/lib/postgresql/data scale: minReplicas: 1 maxReplicas: 1 volumes: - name: pgdata-volume storageType: AzureFile storageName: datamount mountOptions: 'dir_mode=0700,file_mode=0700,uid=999,gid=999,nobrl' </code></pre> <p>When I ssh into the container I can see that the service is running and I can also run psql commands, the problem arises when I try to connect from python as follows (host adress modified to hide confidential information):</p> <pre class="lang-bash prettyprint-override"><code>pip install psycopg2-binary </code></pre> <pre class="lang-py prettyprint-override"><code>import psycopg2 db_host = &quot;https://my-app.funnyname-1234567.germanywestcentral.azurecontainerapps.io&quot; conn_string = f&quot;host='{db_host}' dbname='postgres' user='postgres' password='postgres'&quot; conn = psycopg2.connect(conn_string) </code></pre> <p>Traceback</p> <pre><code>--------------------------------------------------------------------------- OperationalError Traceback (most recent call last) Cell In[7], line 8 5 conn_string = f&quot;host='{db_host}' dbname='postgres' user='postgres' password='postgres'&quot; 7 # get a connection, if a connect cannot be made an exception will be raised here ----&gt; 8 conn = psycopg2.connect(conn_string) File d:\VSCodeProjects\chat-with-docs-app\venv\lib\site-packages\psycopg2\__init__.py:122, in connect(dsn, connection_factory, cursor_factory, **kwargs) 119 kwasync['async_'] = kwargs.pop('async_') 121 dsn = _ext.make_dsn(dsn, **kwargs) --&gt; 122 conn = _connect(dsn, connection_factory=connection_factory, **kwasync) 123 if cursor_factory is not None: 124 conn.cursor_factory = cursor_factory OperationalError: could not translate host name &quot;https://my-app.funnyname-1234567.germanywestcentral.azurecontainerapps.io&quot; to address: Der angegebene Host ist unbekannt. </code></pre>
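<p>The traceback itself points at the host string: psycopg2 expects a bare hostname, not a URL with an <code>https://</code> scheme. A minimal sketch of the connection with the scheme stripped (the hostname is the placeholder from the question); separately, and as a hedged note, HTTP ingress on Container Apps will not carry the Postgres wire protocol, so the app would also likely need TCP ingress (<code>transport: tcp</code> with an <code>exposedPort</code>) for an external client to reach port 5432.</p> <pre class="lang-py prettyprint-override"><code>import psycopg2

# Bare hostname only - no scheme, no path.
db_host = 'my-app.funnyname-1234567.germanywestcentral.azurecontainerapps.io'

conn = psycopg2.connect(
    host=db_host,
    port=5432,  # must match the port actually exposed by the Container App ingress
    dbname='postgres',
    user='postgres',
    password='postgres',
)
print(conn.get_dsn_parameters())
</code></pre>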
<python><postgresql><docker><azure-container-apps>
2025-02-19 16:04:24
0
890
kanimbla
79,451,974
3,048,363
word/ sentence similarities
<p>I am trying to find out whether a given word / set of words is similar to a definition.</p> <p>Example - Definition - &quot;vegetarian User&quot;</p> <p>Now, if I want to check a set of sentences like below</p> <pre><code>sentences = ['vegetarian User', 'user sometimes eats chicken', 'user is vegetarian', 'user only eats fruits', 'user likes fish'] </code></pre> <p>I tried using a sentence transformer like below</p> <pre><code>model = SentenceTransformer(&quot;all-mpnet-base-v2&quot;) embeddings = model.encode(sentences) similarities = model.similarity(embeddings,embeddings) print(similarities) </code></pre> <p>But this is not giving me the expected results.</p> <p>What is the best approach to achieve results like below?</p> <pre><code>[False,True,True,False] </code></pre> <p>Is it doable with NLP or some other technique?</p>
<python><python-3.x><nlp>
2025-02-19 15:47:45
1
1,091
A3006
79,451,916
11,571,390
How to Preserve Autocomplete for String Arguments While Avoiding Magic Strings in Code?
<p>I have a large codebase with hundreds of functions that take string arguments, and I want to reduce brittle use of hardcoded strings while preserving autocomplete for users.</p> <p><strong>The Problem</strong></p> <p>Right now, functions look like this:</p> <pre><code>def process_data(dataset: str = &quot;default_dataset&quot;): &quot;&quot;&quot;Process a dataset. Args: dataset: The name of the dataset to process. Available options: 'default_dataset', 'alternative_dataset'. &quot;&quot;&quot; if dataset not in {&quot;default_dataset&quot;, &quot;alternative_dataset&quot;}: raise ValueError(f&quot;Invalid dataset: {dataset}&quot;) return f&quot;Processing {dataset}&quot; </code></pre> <p>This has <strong>three major issues</strong>:</p> <p><strong>Magic strings everywhere</strong> โ†’ &quot;default_dataset&quot; appears in function signatures, conditionals, and documentation. <strong>Brittle when renaming</strong> โ†’ If &quot;default_dataset&quot; changes, we must manually update all instances. <strong>Autocomplete is nice here</strong> (dataset: str = &quot;default_dataset&quot;).</p> <p><strong>Attempted Fix:</strong></p> <p>I tried using an Enum to remove hardcoded strings:</p> <pre><code>from enum import Enum class DatasetOptions(str, Enum): DEFAULT = &quot;default_dataset&quot; ALTERNATIVE = &quot;alternative_dataset&quot; def process_data(dataset: DatasetOptions = DatasetOptions.DEFAULT): &quot;&quot;&quot;Process a dataset. Args: dataset: The name of the dataset to process. Available options: {datasets}. &quot;&quot;&quot;.format(datasets=&quot;, &quot;.join([d.value for d in DatasetOptions])) if dataset not in DatasetOptions._value2member_map_: raise ValueError(f&quot;Invalid dataset: {dataset}&quot;) return f&quot;Processing {dataset}&quot; </code></pre> <p>This removes magic strings, but now autocomplete is bad:</p> <pre><code>process_data( # shows (dataset: DatasetOptions = DatasetOptions.DEFAULT) </code></pre> <p>Expected behavior:</p> <pre><code>process_data( # should show (dataset: str = &quot;default_dataset&quot;) </code></pre> <p>We want users to pass plain strings (&quot;default_dataset&quot;) but internally enforce correctness with the Enum.</p> <p><strong>What I Need</strong></p> <ul> <li>Autocomplete should still show dataset: str = &quot;default_dataset&quot; (not DatasetOptions.DEFAULT).</li> <li>No magic strings in the codebase (no &quot;default_dataset&quot; scattered everywhere).</li> <li>Functions should be easy to refactor without updating hundreds of string references.</li> <li>Users should be able to pass plain strings (&quot;default_dataset&quot;) and get a clean error if they mistype.</li> </ul> <p>Whatโ€™s the best way to achieve this balance? Is this even possible in python?</p> <p>How do you remove magic strings while keeping clean autocomplete in function signatures?</p>
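<p>A hedged sketch of one way to get all four properties with <code>typing.Literal</code>: the literal strings live in exactly one place, editors surface them as plain strings in the signature and autocomplete them, callers still pass ordinary strings, and runtime validation is derived from the same type via <code>get_args</code>.</p> <pre class="lang-py prettyprint-override"><code>from typing import Literal, get_args

# Single source of truth for the allowed names.
DatasetName = Literal['default_dataset', 'alternative_dataset']
VALID_DATASETS = frozenset(get_args(DatasetName))


def process_data(dataset: DatasetName = 'default_dataset'):
    &quot;&quot;&quot;Process a dataset.

    Args:
        dataset: One of the names listed in DatasetName.
    &quot;&quot;&quot;
    if dataset not in VALID_DATASETS:
        raise ValueError(f'Invalid dataset: {dataset}. Choose from {sorted(VALID_DATASETS)}')
    return f'Processing {dataset}'
</code></pre> <p>Renaming a dataset then means editing the <code>Literal</code> once; how well the literals show up in autocomplete still depends on the editor and type checker in use, so that part is worth verifying in the target IDE.</p>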
<python><python-typing>
2025-02-19 15:29:39
0
595
Gary Frewin
79,451,761
113,586
Using pytest-twisted functions with pytest-asyncio fixtures
<p>I have code that uses Twisted so I've written a test function for it and decorated it with <code>@pytest_twisted.ensureDeferred</code>. The function awaits on some Deferreds. Then, I need to run some <code>aiohttp</code> website in it so I've written a fixture that uses the <code>pytest_aiohttp.plugin.aiohttp_client</code> fixture, decorated it with <code>@pytest_asyncio.fixture</code> and used it in my test function. The result doesn't work (probably because I need to make Twisted and aiohttp use the same event loop or something like that?). Specifically, it prints &quot;twisted/internet/asyncioreactor.py:50: DeprecationWarning: There is no current event loop&quot; and then crashes with the following exception:</p> <pre><code>.env/lib/python3.13/site-packages/pytest_twisted/__init__.py:343: in _run_inline_callbacks _instances.reactor.callLater(0.0, in_reactor, d, f, *args) .env/lib/python3.13/site-packages/twisted/internet/asyncioreactor.py:289: in callLater self._reschedule() .env/lib/python3.13/site-packages/twisted/internet/asyncioreactor.py:279: in _reschedule self._timerHandle = self._asyncioEventloop.call_at(abs_time, self._onTimer) /usr/lib/python3.13/asyncio/base_events.py:812: in call_at self._check_closed() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = &lt;_UnixSelectorEventLoop running=False closed=True debug=False&gt; def _check_closed(self): if self._closed: &gt; raise RuntimeError('Event loop is closed') E RuntimeError: Event loop is closed /usr/lib/python3.13/asyncio/base_events.py:556: RuntimeError </code></pre> <p>As the actual test function is not even executed its content doesn't seem to matter, and custom fixtures also aren't needed for this to fail so here is a minimal one that shows the problem:</p> <pre><code>@ensureDeferred async def test_minimal(aiohttp_client): app = web.Application() await aiohttp_client(app) </code></pre> <p>My settings:</p> <pre class="lang-ini prettyprint-override"><code>[tool.pytest.ini_options] addopts = [ &quot;--reactor=asyncio&quot;, ] asyncio_mode = &quot;strict&quot; asyncio_default_fixture_loop_scope = &quot;function&quot; </code></pre> <p>(not 100% sure about these, but I think moving to <code>asyncio_mode = &quot;auto&quot;</code> and replacing <code>@pytest_asyncio.fixture</code> with <code>@pytest.fixture</code> can only make it worse, and changing the fixture loop scope to <code>&quot;module&quot;</code> makes the runner hang before doing anything).</p> <p>What is the correct way to write such test functions, assuming it exists?</p> <p>Edit: it seems to me now (but I may be very wrong) that the &quot;correct&quot; loop scope for everything is in fact <code>&quot;session&quot;</code>, because the asyncio reactor runs once and uses the same loop for the entire run, but the main problem is making all the pieces use the same loop, and I don't know if that's possible without changing either of the plugins.</p>
<python><pytest><python-asyncio><twisted><pytest-asyncio>
2025-02-19 14:41:02
2
25,704
wRAR
79,451,702
8,477,566
Python multiprocessing.Process hangs when large PyTorch tensors are initialised in both processes
<p>Why does the code shown below either finish normally or hang depending on which lines are commented/uncommented, as described in the table below?</p> <p>Summary of table: if I initialise sufficiently large tensors in both processes without using <code>&quot;spawn&quot;</code>, the program hangs. I can fix it by making either tensor smaller, or by using <code>&quot;spawn&quot;</code>.</p> <p>Note:</p> <ol> <li>All memory is purely CPU, I don't even have CUDA installed on this computer</li> <li>This issue does not occur if I replace <code>torch</code> with <code>numpy</code>, even if I make the array size 10x larger</li> <li>Version information: <code>Ubuntu 22.04.1 LTS, Python 3.10.12, torch 2.1.2+cpu</code></li> </ol> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th>Uncommented</th> <th>Commented</th> <th>Behaviour</th> </tr> </thead> <tbody> <tr> <td>(1), (4)</td> <td>(2), (3), (5)</td> <td>Hang</td> </tr> <tr> <td>(2), (4)</td> <td>(1), (3), (5)</td> <td>OK</td> </tr> <tr> <td>(1), (5)</td> <td>(2), (3), (4)</td> <td>OK</td> </tr> <tr> <td>(1), (3), (4)</td> <td>(2), (5)</td> <td>OK</td> </tr> </tbody> </table></div> <pre class="lang-py prettyprint-override"><code>import multiprocessing as mp import torch def train(): print(&quot;start of train&quot;) x = torch.arange(100000) # (1) x = torch.arange(10000) # (2) print(&quot;end of train&quot;) if __name__ == &quot;__main__&quot;: mp.set_start_method('spawn') # (3) x = torch.arange(100000) # (4) x = torch.arange(10000) # (5) p = mp.Process(target=train) p.start() p.join() </code></pre>
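<p>For what it's worth, a small sketch of a variant that leaves the global start method untouched and requests a spawn context only for this one child process (plain standard-library API, not a torch-specific fix); since spawn avoided the hang in the table above, this may be a less invasive way to opt into it:</p> <pre><code>import multiprocessing as mp
import torch

def train():
    print('start of train')
    x = torch.arange(100000)
    print('end of train')

if __name__ == '__main__':
    # Use a spawn context just for this Process instead of changing
    # the global start method with mp.set_start_method('spawn').
    ctx = mp.get_context('spawn')
    x = torch.arange(100000)
    p = ctx.Process(target=train)
    p.start()
    p.join()
</code></pre>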
<python><pytorch><python-multiprocessing><freeze>
2025-02-19 14:27:04
1
1,950
Jake Levi
79,451,592
1,309,245
Airflow DAG gets stuck when filtering a Polars DataFrame
<p>I am dynamically generating Airflow DAGs based on data from a Polars DataFrame. The DAG definition includes filtering this DataFrame at DAG creation time and again inside a task when the DAG runs.</p> <p>However, when I run the dag and I attempt to filter the polars dataframe inside the dynamically generated DAG, the task gets stuck indefinitely after printing <code>before filter</code>, without raising an error. Just gets stuck and runs forever until an airflow exception is thrown on memory usage.</p> <p>I am with airflow 2.7.3 version and polars 0.20.31 for what it is worth mentioning it.</p> <pre><code>from airflow import DAG from airflow.operators.python import PythonOperator from datetime import datetime import polars as pl def dag_constructor(name): default_args = { 'owner': 'airflow', 'start_date': datetime(2023, 1, 1), 'retries': 1, } # Define the DAG dag = DAG( dag_id=f'{name}', default_args=default_args, description='A simple DAG to print Hello World', schedule_interval='@daily', catchup=False, ) def print_hello(): print(&quot;starting&quot;) df = pl.DataFrame({ &quot;key&quot;: [&quot;A&quot;, &quot;B&quot;, &quot;A&quot;], &quot;branch&quot;: [&quot;br1&quot;, &quot;ooo&quot;, &quot;br2&quot;], &quot;chain&quot;: [&quot;ch1&quot;, &quot;Y&quot;, &quot;ch2&quot;] }) print(df) print(&quot;before filter&quot;) chains = df.filter(pl.col(&quot;key&quot;) == &quot;A&quot;).select(&quot;chain&quot;).to_series().to_list() print(&quot;after filter&quot;) print(chains) print(&quot;finish dag&quot;) hello_task = PythonOperator( task_id='print_hello', python_callable=print_hello, dag=dag, ) hello_task return dag df = pl.DataFrame({ &quot;key&quot;: [&quot;A&quot;, &quot;B&quot;, &quot;A&quot;], &quot;branch&quot;: [&quot;br1&quot;, &quot;ooo&quot;, &quot;br2&quot;], &quot;chain&quot;: [&quot;ch1&quot;, &quot;Y&quot;, &quot;ch2&quot;] }) chains = df.filter(pl.col(&quot;key&quot;) == &quot;A&quot;).select(&quot;chain&quot;).to_series().to_list() ## chains = [&quot;ch1&quot;, &quot;ch2&quot;] THIS WOULD WORK, AND WONT GET STUCK, if uncommenting and commenting previous line for ch in chains: dag_my_id = f&quot;aa__{str(ch)}&quot; globals()[dag_my_id] = dag_constructor(&quot;aa__&quot;+ch) </code></pre>
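<p>This is only a guess, but if the hang is related to Polars' internal thread pool not surviving the fork Airflow uses to start task processes, one cheap thing to try is capping that pool before <code>polars</code> is first imported in the DAG file; a sketch:</p> <pre><code>import os
# Hypothetical workaround: limit Polars to a single thread *before*
# the first `import polars`, so a forked task process does not inherit
# a multi-threaded pool state it cannot use.
os.environ['POLARS_MAX_THREADS'] = '1'

import polars as pl
</code></pre> <p>Another option along the same lines would be to keep the module-level DAG-generation data as plain Python lists/dicts and only build Polars DataFrames inside the task callable.</p>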
<python><airflow><python-polars><polars>
2025-02-19 13:56:58
1
1,407
elvainch
79,451,344
17,500,571
Handling Occasional 100 MB API Responses in FastAPI: Polars vs. NumPy/Pandas?
<p>I'm working on an asynchronous FastAPI project that fetches large datasets from an API. Currently, I process the JSON response using a list comprehension and NumPy to extract device IDs and names. For example:</p> <pre><code>response = await client.get(tenant_url, headers=headers, params=params) response.raise_for_status() data = response.json() numpy_2d_arrays = np.array([[device[&quot;id&quot;][&quot;id&quot;], device[&quot;name&quot;]] for device in data[&quot;data&quot;]]) </code></pre> <p>While this works for my current small datasets, I anticipate that some API responses could be as large as 100 MB, depending on the customer size. These large responses will be occasional, and my main concern is processing them efficiently within our limited 12 GB cloud memory environment.</p> <p>I'm considering converting the raw data into a Polars DataFrame before extracting the device ID and name, in hopes of avoiding Python-level loops and improving performance. However, I haven't run any benchmarks comparing my current approach with a DataFrame-based one.</p> <p>My main questions are:</p> <p>Will converting to a Polars DataFrame offer significant performance improvements over using NumPy (or even Pandas) for simply extracting a couple of fields? Are there other recommended strategies or best practices for handling these occasional large responses in an asynchronous FastAPI setup with memory constraints? I appreciate any advice or performance tips from those with experience in processing large datasets in similar environments!</p>
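<p>As a point of comparison, a rough sketch of a lower-memory variant that skips the intermediate 2-D NumPy array of strings and pulls just the two fields into plain lists (optionally wrapping them in a Polars DataFrame afterwards); whether this actually beats the current approach would need benchmarking on a real 100 MB payload:</p> <pre><code>import polars as pl

def extract_devices(data: dict) -&gt; pl.DataFrame:
    # data = response.json(), as in the snippet above.
    # Only the two needed fields are copied out; the rest of the
    # payload is left alone and freed once `data` goes out of scope.
    ids = [device['id']['id'] for device in data['data']]
    names = [device['name'] for device in data['data']]
    return pl.DataFrame({'id': ids, 'name': names})
</code></pre>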
<python><dataframe><for-loop><fastapi><python-polars>
2025-02-19 12:34:13
1
364
Foxbat
79,451,332
13,559,669
Slow data download into Windows shared folder via Databricks using SMB
<p>I use Databricks (python) to download data from a public source directly into a Windows shared drive (NetApp folder) using SMB protocol, but it is downloading at 700 kbps on average, whereas when using Azure Data Factory (on the same Azure VNet) to do the same, it downloads at 5.5 mbps on average.</p> <p>Some additional details for context:</p> <ul> <li>Databricks driver: Standard_DS5_v2 (56GB RAM, 16 CPU)</li> <li>Databricks worker: Standard_D8ads_v5 (32GB RAM, 8 CPU)</li> </ul> <p>This the code used in Databricks:</p> <pre><code>import os import smbclient import urllib from tqdm import tqdm import shutil # Setup SMB client with my credentials smbclient.ClientConfig(username='user', password='password') # Constants for file transfer CHUNK_SIZE = 1024 * 1024 * 1 # 1MB chunk size for large files # Function to download a file from a URL directly to an SMB share (with progress bar) def download_to_smb(url, smb_file_path): with urllib.request.urlopen(url) as response: file_size = int(response.getheader('Content-Length')) with smbclient.open_file(smb_file_path, mode='wb') as fdst: progress = tqdm(total=file_size, unit='B', unit_scale=True, desc=f&quot;Downloading to {smb_file_path}&quot;) while True: chunk = response.read(CHUNK_SIZE) if not chunk: break fdst.write(chunk) progress.update(len(chunk)) progress.close() print(f&quot;File downloaded from {url} to {smb_file_path}&quot;) # Download file directly to SMB file_url = 'https://xxx/file.gz' smb_file_path_download = r'\\server\folder/' download_to_smb(file_url, smb_file_path_download) </code></pre> <p>My goal is to download the data via Databricks at a speed of minimum 4 mbps. How can I reach it? Is there a different way then SMB to connect to the share via Databricks? Is it actually a technical limitation coming from the driver itself in Databricks?</p>
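<p>Before switching protocols it may be worth checking whether the 1 MB chunked loop is the bottleneck; a sketch with a larger chunk size and <code>shutil.copyfileobj</code> (one buffered copy loop, less per-chunk Python overhead). Whether this actually reaches 4 mbps depends on the network path and the NetApp share, not just the code:</p> <pre><code>import shutil
import smbclient
import urllib.request

CHUNK_SIZE = 1024 * 1024 * 8  # try 8 MB chunks instead of 1 MB

def download_to_smb(url, smb_file_path):
    with urllib.request.urlopen(url) as response:
        with smbclient.open_file(smb_file_path, mode='wb') as fdst:
            # Stream straight from the HTTP response into the SMB file
            shutil.copyfileobj(response, fdst, length=CHUNK_SIZE)
</code></pre>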
<python><windows><databricks><azure-databricks><smb>
2025-02-19 12:32:07
1
436
el_pazzu
79,451,254
2,112,406
Github actions to test python scripts after creating and activating conda environment not working
<p>I have a repo where there are a bunch of python scripts, and there's a <code>environment.yml</code> file to create a conda environment with. I run tests by:</p> <pre><code>conda env create -f environment.yml conda activate env_name cd tests pytest </code></pre> <p>I want to automate this with github actions. I have <code>pytest.yml</code> in <code>.github/workflows/</code>. It starts to run when I push, but it gets stuck with <code>Waiting for a runner to pick up this job...</code>, which I've understood to mean there's something wrong with my <code>.yml</code> file. <code>pytest.yml</code> contains (using <a href="https://github.com/marketplace/actions/setup-miniconda" rel="nofollow noreferrer">this resource</a>):</p> <pre><code>name: conda on: workflow_dispatch: pull_request: push: branches: - main jobs: tests: name: tests strategy: fail-fast: false matrix: platform: [windows-latest, macos-latest, ubuntu-latest] python-version: ['3.12'] runs-on: $${{ matrix.platform }} steps: - uses: conda-incubator/setup-miniconda@v3 with: miniconda-version: &quot;latest&quot; - name: tests run: | conda env create -f environment.yml conda activate price_predictor_env cd tests python -m pytest </code></pre> <p>What could I be missing?</p>
<python><github-actions><conda>
2025-02-19 12:05:25
1
3,203
sodiumnitrate
79,451,234
9,318,323
Self-signed certificate error when working with PreparedRequest and Session objects
<p>I have an app running on Azure that was not setup by me, I just need to use its API.</p> <p>If I use the code below, it works just fine.</p> <pre class="lang-py prettyprint-override"><code>import os import requests from msal import PublicClientApplication connection_data = { &quot;url&quot;: &quot;url&quot;, &quot;username&quot;: &quot;username&quot;, &quot;password&quot;: &quot;password&quot;, &quot;clientid&quot;: &quot;clientid&quot;, &quot;tenant&quot;: &quot;tenant&quot;, } _app = PublicClientApplication( connection_data[&quot;clientid&quot;], # single-tenant application authority=f&quot;https://login.microsoftonline.com/{connection_data['tenant']}&quot;, ) _token = None _scopes = [&quot;User.Read&quot;] accounts = _app.get_accounts() if accounts: _token = _app.acquire_token_silent(_scopes, account=accounts[0])[&quot;access_token&quot;] if not _token: result = _app.acquire_token_by_username_password( connection_data[&quot;username&quot;], connection_data[&quot;password&quot;], scopes=_scopes ) if &quot;access_token&quot; in result: _token = result[&quot;access_token&quot;] else: raise ValueError(&quot;Something went wrong when acquiring token.&quot;) headers = { &quot;Authorization&quot;: f&quot;Bearer {_token}&quot;, &quot;X-Client-Type&quot;: &quot;FOOBAR&quot;, } request_url = f&quot;{connection_data['url']}/api/ts/read/some/endpoint/B747&quot; response = requests.get(request_url, headers=headers) print(response) # 200 </code></pre> <p>The issue is when I try to do the same operation using <code>requests.PreparedRequest</code> and <code>Session</code>, it fails. Example from here: <a href="https://requests.readthedocs.io/en/latest/api/#requests.PreparedRequest" rel="nofollow noreferrer">https://requests.readthedocs.io/en/latest/api/#requests.PreparedRequest</a></p> <pre><code>from requests import Request, Session request_ = Request(&quot;GET&quot;, request_url, headers=headers).prepare() session = Session() response = session.send(request_) print(response) # requests.exceptions.SSLError: HTTPSConnectionPool(host='foobar.com', port=443): # Max retries exceeded with url: /api/ts/read/some/endpoint/B747 # (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: # self-signed certificate in certificate chain (_ssl.c:1006)'))) </code></pre> <p>My certificate is specified in <code>REQUESTS_CA_BUNDLE</code> environment variable.</p> <p>Is this issue related to the application I am trying to reach or am I doing something wrong?</p>
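<p>For reference, <code>Session.send()</code> does not apply environment settings on its own; the top-level <code>requests.get()</code> call merges them in (including <code>REQUESTS_CA_BUNDLE</code>) via <code>Session.request()</code>. A sketch of doing that merge explicitly with a prepared request:</p> <pre><code>from requests import Request, Session

session = Session()
request_ = Request('GET', request_url, headers=headers).prepare()

# Pull in proxies/verify/cert from the environment, the same way
# requests.get() would, then pass them along to send().
settings = session.merge_environment_settings(
    request_.url, proxies={}, stream=None, verify=None, cert=None)
response = session.send(request_, **settings)
print(response)
</code></pre>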
<python><ssl><python-requests><ssl-certificate>
2025-02-19 11:56:59
0
354
Vitamin C
79,450,942
497,649
How to set height & width in pm4py's graph visualization?
<p>If I use the Python package <code>pm4py</code>, I get a nice graph visualization. However, the size is quite small in terms of height, and I want to change it. The code so far:</p> <pre><code>from pm4py.visualization.bpmn import visualizer as bpmn_visualizer parameters = bpmn_visualizer.Variants.CLASSIC.value.Parameters gviz = bpmn_visualizer.apply(bpmn_model, parameters={ &quot;bgcolor&quot;: 'green' }) bpmn_visualizer.view(gviz) </code></pre> <p>Is there a parameter to affect the height and width of the chart?</p>
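<p>I am not aware of a dedicated height/width parameter, but assuming the object returned by <code>bpmn_visualizer.apply()</code> is a Graphviz <code>Digraph</code> (which it appears to be), its graph attributes can be set directly before viewing; a sketch (sizes are in inches, and the values here are just examples):</p> <pre><code>gviz = bpmn_visualizer.apply(bpmn_model, parameters={'bgcolor': 'green'})

# Hypothetical sizing via Graphviz graph attributes; the trailing '!'
# asks Graphviz to scale the drawing up to the requested size.
gviz.graph_attr['size'] = '12,8!'
gviz.graph_attr['ratio'] = 'fill'

bpmn_visualizer.view(gviz)
</code></pre>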
<python><package><rendering><process-mining>
2025-02-19 10:10:23
0
640
lambruscoAcido
79,450,855
7,227,146
multiprocess library barely works
<p>I'm using the <a href="https://pypi.org/project/multiprocess/" rel="nofollow noreferrer"><code>multiprocess</code></a> library to accelerate a CPU-bound task (a method inside a user-defined class).</p> <p>The function processes a page of a document, in my example a 500-page document takes around 20 seconds sequentially (so around 0.04 seconds per page). I'm incrementing an index until 2,000,000 to simulate this task.</p> <p><strong>dummy.py</strong></p> <pre><code>from multiprocess import Pool class DummyClass: def __init__(self, workers=1): self.workers = workers # Simulate CPU-intensive task def _process_one(self, page): count = 0 while count &lt; 2_000_000: count += 1 return page # Process with &quot;multiprocess&quot; def multiprocess(self, pages): with Pool(processes=self.workers) as pool: async_results = pool.map_async(self._process_one, pages) extraction = async_results.get() return extraction # Process sequentially def sequential(self, pages): extraction = [] for page in pages: extraction.append(self._process_one(page)) return extraction </code></pre> <p><strong>test.py</strong></p> <pre><code>import time from dummy import DummyClass # Sequential with dummy method def dummy_sequential(): dummy_extractor = DummyClass() extraction = dummy_extractor.sequential(range(500)) return extraction # Multiprocessing with dummy method def dummy_multiprocess(workers): dummy_extractor = DummyClass(workers=workers) extraction = dummy_extractor.multiprocess(range(500)) return extraction </code></pre> <p>Testing sequential:</p> <pre><code>if __name__ == &quot;__main__&quot;: ini = time.time() extraction = dummy_sequential() fin = time.time() print(&quot;Time: &quot;, fin - ini, &quot;seconds&quot;) </code></pre> <p>Prints out:</p> <pre><code>Time: 19.12088394165039 seconds </code></pre> <p>Testing multiprocess with different values:</p> <pre><code>if __name__ == &quot;__main__&quot;: for i in range(2, 9): ini = time.time() extraction = dummy_multiprocess(workers=i) fin = time.time() print(f&quot;Time with {i} workers&quot;, fin - ini, &quot;seconds&quot;) </code></pre> <p>Prints out:</p> <pre><code>Time with 2 workers 13.7001051902771 seconds Time with 3 workers 11.189585208892822 seconds Time with 4 workers 11.595974683761597 seconds Time with 5 workers 12.016109228134155 seconds Time with 6 workers 12.690005540847778 seconds Time with 7 workers 13.012137651443481 seconds Time with 8 workers 13.412734508514404 seconds </code></pre> <p>So we can see 3 workers is the optimal number of workers, while with more workers it slowly climbs up again.</p> <p>However this is a process that needs to be as fast as possible. If a 500-page document takes 20 seconds, I would like it to go under 2 seconds (my computer has 16 CPU cores). The fastest I can get now is 11 seconds.</p> <p>I understand this process has some overhead, but this seems too much. 
Is there no other way to make it faster?</p> <p>Thank you</p> <hr /> <p>New times:</p> <p>Okay now I got the following times:</p> <p>Using &quot;<strong>multiprocess</strong>&quot;</p> <pre><code>sequential: 18.77s 2 workers: 10.71s 3 workers: 7.68s 4 workers: 8.11s 5 workers: 7.36s 6 workers: 6.99s 7 workers: 7.18s 8 workers: 7.20s </code></pre> <p>Using &quot;<strong>multiprocessing</strong>&quot;</p> <pre><code>sequential: 18.76s 2 workers: 10.34s 3 workers: 7.27s 4 workers: 6.55s 5 workers: 6.70s 6 workers: 6.80s 7 workers: 6.26s 8 workers: 6.30s </code></pre> <p>So now it decreases further, but still far away from times by other colleagues.</p> <p>I'm using multiprocess v. 0.70.17, Python 3.12.9, Windows 11, 16 cores.</p>
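<p>For reference, a sketch of two things that sometimes reduce this kind of overhead: create the pool once and reuse it for every document (so the Windows spawn cost of starting workers is paid a single time), and give <code>map</code> a chunksize so pages are shipped to workers in batches instead of one by one. A module-level function also keeps the per-task pickling small:</p> <pre><code>from multiprocess import Pool

def process_one(page):
    # Simulated CPU-bound work, as in the question
    count = 0
    while count &lt; 2_000_000:
        count += 1
    return page

if __name__ == '__main__':
    with Pool(processes=8) as pool:
        # Reuse the same pool for several documents
        for _ in range(3):
            extraction = pool.map(process_one, range(500), chunksize=16)
</code></pre>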
<python><multiprocessing><multiprocess><dill><pathos>
2025-02-19 09:45:42
0
679
zest16
79,450,852
571,033
Null-aware Evaluation flawed in Polars 1.22.0?
<p>I observed that the polars expression:</p> <pre><code>pl.DataFrame(data={}).select(a=pl.lit(None) | pl.lit(True)) </code></pre> <p>evaluates to True, but it should evaluate to None in my estimation, based on the concept of &quot;null aware evaluation&quot;.</p> <blockquote> <p>This concept ensures that if any part of an expression evaluates to null, the overall result is also null. This is particularly relevant in expressions involving multiple operations, where the presence of a null value can affect the final outcome.</p> </blockquote> <p>In contrast:</p> <pre><code>pl.DataFrame(data={}).select(a=pl.lit(None) &amp; pl.lit(True)) </code></pre> <p>does indeed evaluate to None, rather than False. And so do all of the expressions:</p> <pre><code>pl.DataFrame(data={}).select(a=pl.lit(None) &gt; pl.lit(2)) pl.DataFrame(data={}).select(a=pl.lit(None) &lt; pl.lit(2)) pl.DataFrame(data={}).select(a=pl.lit(None) == pl.lit(2)) pl.DataFrame(data={}).select(a=pl.lit(None) + pl.lit(2)) pl.DataFrame(data={}).select(a=pl.lit(None) - pl.lit(2)) pl.DataFrame(data={}).select(a=pl.lit(None) * pl.lit(2)) pl.DataFrame(data={}).select(a=pl.lit(None) / pl.lit(2)) </code></pre> <p>What is going on here?</p>
<python><dataframe><python-polars><polars>
2025-02-19 09:45:38
1
1,527
Silverdust
79,450,810
1,635,450
Pandas groupby multiple columns, aggregate some columns, add a count column of each group
<p>The data I am working with:</p> <pre><code>data (140631115432592), ndim: 2, size: 3947910, shape: (232230, 17) VIN (1-10) object County object City object State object Postal Code float64 Model Year int64 Make object Model object Electric Vehicle Type object Clean Alternative Fuel Vehicle (CAFV) Eligibility object Electric Range float64 Base MSRP float64 Legislative District float64 DOL Vehicle ID int64 Vehicle Location object Electric Utility object 2020 Census Tract float64 dtype: object VIN (1-10) County City State Postal Code ... Legislative District DOL Vehicle ID Vehicle Location Electric Utility 2020 Census Tract 0 2T3YL4DV0E King Bellevue WA 98005.0 ... 41.0 186450183 POINT (-122.1621 47.64441) PUGET SOUND ENERGY INC||CITY OF TACOMA - (WA) 5.303302e+10 1 5YJ3E1EB6K King Bothell WA 98011.0 ... 1.0 478093654 POINT (-122.20563 47.76144) PUGET SOUND ENERGY INC||CITY OF TACOMA - (WA) 5.303302e+10 2 5UX43EU02S Thurston Olympia WA 98502.0 ... 35.0 274800718 POINT (-122.92333 47.03779) PUGET SOUND ENERGY INC 5.306701e+10 3 JTMAB3FV5R Thurston Olympia WA 98513.0 ... 2.0 260758165 POINT (-122.81754 46.98876) PUGET SOUND ENERGY INC 5.306701e+10 4 5YJYGDEE8M Yakima Selah WA 98942.0 ... 15.0 236581355 POINT (-120.53145 46.65405) PACIFICORP 5.307700e+10 </code></pre> <p>Data in csv format:</p> <pre><code>VIN (1-10),County,City,State,Postal Code,Model Year,Make,Model,Electric Vehicle Type,Clean Alternative Fuel Vehicle (CAFV) Eligibility,Electric Range,Base MSRP,Legislative District,DOL Vehicle ID,Vehicle Location,Electric Utility,2020 Census Tract 2T3YL4DV0E,King,Bellevue,WA,98005,2014,TOYOTA,RAV4,Battery Electric Vehicle (BEV),Clean Alternative Fuel Vehicle Eligible,103,0,41,186450183,POINT (-122.1621 47.64441),PUGET SOUND ENERGY INC||CITY OF TACOMA - (WA),53033023604 5YJ3E1EB6K,King,Bothell,WA,98011,2019,TESLA,MODEL 3,Battery Electric Vehicle (BEV),Clean Alternative Fuel Vehicle Eligible,220,0,1,478093654,POINT (-122.20563 47.76144),PUGET SOUND ENERGY INC||CITY OF TACOMA - (WA),53033022102 5UX43EU02S,Thurston,Olympia,WA,98502,2025,BMW,X5,Plug-in Hybrid Electric Vehicle (PHEV),Clean Alternative Fuel Vehicle Eligible,40,0,35,274800718,POINT (-122.92333 47.03779),PUGET SOUND ENERGY INC,53067011902 JTMAB3FV5R,Thurston,Olympia,WA,98513,2024,TOYOTA,RAV4 PRIME,Plug-in Hybrid Electric Vehicle (PHEV),Clean Alternative Fuel Vehicle Eligible,42,0,2,260758165,POINT (-122.81754 46.98876),PUGET SOUND ENERGY INC,53067012332 5YJYGDEE8M,Yakima,Selah,WA,98942,2021,TESLA,MODEL Y,Battery Electric Vehicle (BEV),Eligibility unknown as battery range has not been researched,0,0,15,236581355,POINT (-120.53145 46.65405),PACIFICORP,53077003200 3C3CFFGE1G,Thurston,Olympia,WA,98501,2016,FIAT,500,Battery Electric Vehicle (BEV),Clean Alternative Fuel Vehicle Eligible,84,0,22,294762219,POINT (-122.89166 47.03956),PUGET SOUND ENERGY INC,53067010802 5YJ3E1EA4J,Snohomish,Marysville,WA,98271,2018,TESLA,MODEL 3,Battery Electric Vehicle (BEV),Clean Alternative Fuel Vehicle Eligible,215,0,39,270125096,POINT (-122.1677 48.11026),PUGET SOUND ENERGY INC,53061052808 5YJ3E1EA3K,King,Seattle,WA,98102,2019,TESLA,MODEL 3,Battery Electric Vehicle (BEV),Clean Alternative Fuel Vehicle Eligible,220,0,43,238776492,POINT (-122.32427 47.63433),CITY OF SEATTLE - (WA)|CITY OF TACOMA - (WA),53033006600 1N4AZ0CP5E,Thurston,Yelm,WA,98597,2014,NISSAN,LEAF,Battery Electric Vehicle (BEV),Clean Alternative Fuel Vehicle Eligible,84,0,2,257246118,POINT (-122.60735 46.94239),PUGET SOUND ENERGY INC,53067012421 </code></pre> <p>Filtering and grouping:</p> 
<pre><code>filt = (data[&quot;Model Year&quot;] &gt;= 2018) &amp; (data[&quot;Electric Vehicle Type&quot;] == &quot;Battery Electric Vehicle (BEV)&quot;) data = data[filt].groupby([&quot;State&quot;, &quot;Make&quot;], sort=False, observed=True, as_index=False).agg( avg_electric_range=pd.NamedAgg(column=&quot;Electric Range&quot;, aggfunc=&quot;mean&quot;), oldest_model_year=pd.NamedAgg(column=&quot;Model Year&quot;, aggfunc=&quot;min&quot;)) </code></pre> <p>Currently it yields the following table:</p> <pre><code> State Make avg_electric_range oldest_model_year 0 WA TESLA 52.143448 2018 1 WA NISSAN 60.051874 2018 &lt;snip&gt; </code></pre> <p>How do I add a <code>Count</code> column which shows the count of each group which is used for further filtering? Note: rule out <code>apply</code> as everything should stay in Pandas'land.</p>
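<p>For the count column itself, a sketch: a third named aggregation with <code>aggfunc=&quot;size&quot;</code> counts the rows in each group (the <code>column=</code> it is attached to does not matter), and the result can then be filtered like any other column:</p> <pre><code>data = data[filt].groupby(['State', 'Make'], sort=False, observed=True,
                          as_index=False).agg(
    avg_electric_range=pd.NamedAgg(column='Electric Range', aggfunc='mean'),
    oldest_model_year=pd.NamedAgg(column='Model Year', aggfunc='min'),
    count=pd.NamedAgg(column='Model Year', aggfunc='size'))

# Example of the further filtering, with an arbitrary cut-off
data = data[data['count'] &gt;= 100]
</code></pre>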
<python><pandas><group-by><aggregation>
2025-02-19 09:32:13
2
4,280
khteh
79,450,165
11,156,131
Sorting axis in pandas plot
<p>How can I sort the axis in descending order? I tried the following and it did not work:</p> <pre class="lang-py prettyprint-override"><code>from io import BytesIO from fpdf import FPDF import pandas as pd import matplotlib.pyplot as plt import io DATA = { 'x': ['val_1', 'val_2', 'val_3', 'val_4', 'val_5'], 'y': [1, 2, 3, 4, 5] } COLUMNS = tuple(DATA.keys()) plt.figure() df = pd.DataFrame(DATA, columns=COLUMNS) df.sort_values(['y'], ascending=[False]).plot(x=COLUMNS[0], y=COLUMNS[1], kind=&quot;barh&quot;, legend=False) img_buf = BytesIO() plt.savefig(img_buf, dpi=200) pdf = FPDF() pdf.add_page() pdf.image(img_buf, w=pdf.epw) pdf.output('output.pdf') img_buf.close() </code></pre>
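<p>A sketch of what seems to be going on: <code>barh</code> draws the first row at the bottom, so sorting ascending actually puts the largest value at the top; alternatively, keep the descending sort and flip the y axis after plotting. Saving through the axes' own figure avoids depending on which figure is current:</p> <pre><code># Option 1: sort ascending, because barh stacks rows bottom-to-top
ax = df.sort_values('y', ascending=True).plot(
    x=COLUMNS[0], y=COLUMNS[1], kind='barh', legend=False)

# Option 2: keep ascending=False and invert the axis instead
# ax.invert_yaxis()

ax.get_figure().savefig(img_buf, dpi=200)
</code></pre>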
<python><pandas><matplotlib><fpdf2>
2025-02-19 04:18:20
2
4,376
smpa01
79,450,153
5,118,421
How does this HackerRank cuttree algorithm work?
<p>Given a tree T with n nodes, how many subtrees (Tโ€™) of T have at most K edges connected to (T โ€“ Tโ€™)? <strong>Input Format</strong></p> <p>The first line contains two integers n and K followed by n-1 lines each containing two integers a &amp; b denoting that thereโ€™s an edge between a &amp; b.</p> <p><strong>Constraints</strong></p> <p>1 &lt;= K &lt;= n &lt;= 50 Every node is indicated by a distinct number from 1 to n.</p> <p><strong>Output Format</strong></p> <p>A single integer which denotes the number of possible subtrees.</p> <pre><code>def cutTree(n, k, edges): g = [[] for _ in range(n)] for i in edges: g[i[0]-1].append(i[1]-1) g[i[1]-1].append(i[0]-1) global ans ans = 1 def multiply(x, y): ans = defaultdict(lambda: 0) for k, v in x.items(): for k1, v1 in y.items(): ans[k+k1-1] += v*v1 for k, v in ans.items(): if k in x: x[k] += v else: x[k] = v def dfs(i,p): global ans if g[i] == [p]: ans += 1 return {0:1} x = 1 if i else 0 res = {len(g[i])-x : 1} for nxt in g[i]: if nxt != p: multiply(res, dfs(nxt, i)) ans += sum(((v if i+x &lt;= k else 0) for i, v in res.items())) return res dfs(0,-1) return ans </code></pre> <p>It solves <a href="https://www.hackerrank.com/challenges/cuttree/forum" rel="nofollow noreferrer">https://www.hackerrank.com/challenges/cuttree/forum</a> but how?</p>
<python><algorithm>
2025-02-19 04:07:13
1
1,407
Irina
79,450,048
10,902,944
Sending large Dask graphs causing major slowdown before computation
<p>I am encountering a warning from Dask where it takes a long time to start a computation (roughly around 5 minutes) because of my large task graphs. Here is the full warning:</p> <pre><code>UserWarning: Sending large graph of size 29.88 MiB. This may cause some slowdown. Consider loading the data with Dask directly or using futures or delayed objects to embed the data into the graph without repetition. See also https://docs.dask.org/en/stable/best-practices.html#load-data-with-dask for more information. </code></pre> <p>Here is my code for replication purposes. It uses an extension of xarray called <a href="https://github.com/google/Xee" rel="nofollow noreferrer">xee</a> to pull data from Google Earth Engine into an xarray data structure. It's chunked to what xee considers a &quot;safe&quot; chunk size (one that does not exceed the request limit of Earth Engine queries).</p> <pre><code>import ee import json import xarray as xr from dask.distributed import performance_report import dask # Although we authenticated Google Earth Engine on our Dask workers, we also need to authenticate # Google Earth Engine on our local machine! with open(json_key, 'r') as file: data = json.load(file) credentials = ee.ServiceAccountCredentials(data[&quot;client_email&quot;], json_key) ee.Initialize(credentials = credentials, opt_url='https://earthengine-highvolume.googleapis.com') WSDemo = ee.FeatureCollection(&quot;projects/robust-raster/assets/boundaries/WSDemoSHP_Albers&quot;) California = ee.FeatureCollection(&quot;projects/robust-raster/assets/boundaries/California&quot;) #ic = ee.ImageCollection('LANDSAT/LC08/C02/T1_L2').filterDate('2014-01-01', '2014-12-31') ic = ee.ImageCollection('LANDSAT/LC08/C02/T1_L2').filterDate('2020-05-01', '2020-08-31') xarray_data = xr.open_dataset(ic, engine='ee', crs=&quot;EPSG:3310&quot;, scale=30, geometry=WSDemo.geometry()) xarray_data = xarray_data.chunk({&quot;time&quot;: 48, &quot;X&quot;: 512, &quot;Y&quot;: 256}) def compute_ndvi(df): # Perform your calculations df['ndvi'] = (df['SR_B5'] - df['SR_B4']) / (df['SR_B5'] + df['SR_B4']) return df def user_function_wrapper(ds, user_func, *args, **kwargs): df_input = ds.to_dataframe().reset_index() df_output = user_func(df_input, *args, **kwargs) df_output = df_output.set_index(list(ds.dims)) ds_output = df_output.to_xarray() return ds_output test = xr.map_blocks(user_function_wrapper, xarray_data, args=(compute_ndvi,)) # Create a Dask report of the single chunked run with performance_report(filename=&quot;dask_report.html&quot;): test.compute() </code></pre> <p>You can disregard the code that converts chunks into a data frame format. I am working on a way for users to write their own functions in a format compatible with pandas data frames. Is 29.88 MiB really too large of a graph? I'll be looking into futures or delayed objects next, but I wanted to ask the question here to see if there is a clear mistake in my code that is causing this warning (and long startup) to arise.</p>
<python><python-xarray><google-earth-engine>
2025-02-19 02:25:48
0
397
Adriano Matos
79,449,962
17,296,506
Why is RStudio Data Viewer Wrong with Python pd.DF - Snowflake Query
<p>At loss why when I look at the data viewer in RStudio it looks like it does in the image below yet when I print the data frame it looks normal. Am I missing something? It works fine in Spyder</p> <h2>Sample Code</h2> <pre><code># pip install `snowflake-connector-python` and `pandas` if not already installed import snowflake.connector import pandas as pd # Establish Snowflake connection con = snowflake.connector.connect( user=&quot;email&quot;, account=&quot;server&quot;, warehouse=&quot;DATA_LAKE_READER&quot;, database=&quot;ENTERPRISE&quot;, authenticator=&quot;externalbrowser&quot; ) # Create a cursor and set schema cursor = con.cursor() cursor.execute(&quot;USE SCHEMA CREW_ANALYTICS&quot;) # Execute query query = &quot;SELECT * FROM CT_DEADHEAD LIMIT 10&quot; cursor.execute(query) # Fetch data and set column names correctly columns = [col[0] for col in cursor.description] # Extract column headers deadhead_df = pd.DataFrame(cursor.fetchall(), columns=columns) # Assign headers to DataFrame # Close cursor and connection cursor.close() con.close() </code></pre> <h2>DF Head Output</h2> <pre><code>&gt;&gt;&gt; print(deadhead_df.head()) CREW_INDICATOR FLIGHT_NO FLIGHT_DATE ... FILL PAIRING_POSITION PLANNED_POSITION 0 P NF1 2024-04-18 ... None FO 1FO 1 P NF1 2024-04-18 ... None CA 1CA 2 P FAT 2024-04-18 ... None CA 1CA 3 P ARC 2024-04-04 ... None CA 1CA 4 P ARC 2024-04-04 ... None FO 1FO [5 rows x 45 columns] </code></pre> <h2>Incorrect RStudio Data Viewer Image</h2> <p><a href="https://i.sstatic.net/oHxQEgA4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oHxQEgA4.png" alt="enter image description here" /></a></p> <h2>Spyder Viewer</h2> <p><a href="https://i.sstatic.net/bZBEoJOU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bZBEoJOU.png" alt="enter image description here" /></a></p> <h2>Python Version</h2> <pre><code>Python 3.12.9 </code></pre> <h2>RStudio Info</h2> <pre><code>&gt; RStudio.Version()$version [1] โ€˜2024.12.0.467โ€™ </code></pre> <h2>R Version Info</h2> <pre><code>platform x86_64-w64-mingw32 arch x86_64 os mingw32 crt ucrt system x86_64, mingw32 status major 4 minor 4.2 year 2024 month 10 day 31 svn rev 87279 language R version.string R version 4.4.2 (2024-10-31 ucrt) nickname Pile of Leaves </code></pre>
<python><snowflake-cloud-data-platform><rstudio>
2025-02-19 01:01:32
0
371
Eizy
79,449,933
10,576,557
Skip bad hosts and return good results on Fabric call
<p>I would like to send a remote command to multiple hosts using the Python Fabric library. The following code works as long as all hosts are available</p> <pre class="lang-none prettyprint-override"><code>#!/usr/bin/python3 import fabric group = fabric.ThreadingGroup('boss1', 'boss2') results = group.run('uname -a', hide=True, timeout=5) for _connection, _result in results.items(): print(f'{_connection.host}: {_result.stdout.strip()}') </code></pre> <p>If one of the hosts is unavailable I get a timeout error such as the following</p> <pre><code>fabric.exceptions.GroupException: {&lt;Connection host=boss1 user=ec2-user&gt;: TimeoutError(60, 'Operation timed out'), &lt;Connection host=boss2 user=ec2-user&gt;: &lt;Result cmd='uname -a' exited=0&gt;} </code></pre> <p>I would (1) like to reduce the timeout to 5 seconds (as is specified in <code>group.run()</code>) and (2) isolate the error to only the bad host and return information from the successful one(s).</p> <p>I have found documentation about <code>connection_timeout</code>, but only <code>group.run()</code> seems to accept a <code>timeout</code> keyword and it does not reduce from 60 seconds.</p> <p>I have also found documentation about a <code>skip_bad_hosts</code> keyword but cannot insert it into my code without a runtime error.</p> <p>What is the best way to skip over an error on one system and return results from the others?</p> <p>Python 3, Fabric 3</p>
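<p>A sketch of one way to get both, assuming Fabric 3 still exposes the partial results on <code>GroupException.result</code> (as Fabric 2 did) and still forwards extra <code>Group</code> keyword arguments such as <code>connect_timeout</code> to each <code>Connection</code>:</p> <pre><code>import fabric
from fabric.exceptions import GroupException

# connect_timeout (seconds) is passed through to every Connection
group = fabric.ThreadingGroup('boss1', 'boss2', connect_timeout=5)

try:
    results = group.run('uname -a', hide=True, timeout=5)
except GroupException as exc:
    results = exc.result  # partial results: Connection mapped to Result or Exception

for connection, outcome in results.items():
    if isinstance(outcome, Exception):
        print(f'{connection.host}: FAILED ({outcome})')
    else:
        print(f'{connection.host}: {outcome.stdout.strip()}')
</code></pre>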
<python><fabric>
2025-02-19 00:35:05
1
569
shepster
79,449,698
1,492,613
How can I add a new LD library path in Python?
<pre><code>import ctypes.util import importlib.util import os lib_dir=... # some local installed clibs os.environ[&quot;LD_LIBRARY_PATH&quot;] = lib_dir + &quot;:&quot; + os.environ.get(&quot;LD_LIBRARY_PATH&quot;, &quot;&quot;) os.environ[&quot;PATH&quot;] = lib_dir + &quot;:&quot; + os.environ.get(&quot;PATH&quot;, &quot;&quot;) import ctypes print(ctypes.util.find_library(&quot;xx&quot;)) libxx = ctypes.cdll.LoadLibrary(&quot;libxx.so&quot;) print(get_library_path(libxx)) </code></pre> <p>Here the environment change does not affect ctypes at all unless I add the directory to the LD path before I start the Python interpreter. This is very annoying, as it basically prevents us from adding new libraries while the instance is running.</p> <p>On Windows there is <a href="https://docs.python.org/3.8/library/os.html#os.add_dll_directory" rel="nofollow noreferrer">add_dll_directory</a>, but how can I do this on Linux?</p> <p>A similar question was asked <a href="https://stackoverflow.com/questions/6543847/setting-ld-library-path-from-inside-python">here</a> 14 years ago, but it never got an acceptable answer. Python has changed a lot since then, and several approaches mentioned in the answers no longer behave the same way, so there might be a different solution now. For example, someone asked whether there is a way to do a similar thing on Windows, and as I mentioned, newer Python 3 now has a dedicated function for that. So I think this is a good time to reopen the topic.</p>
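<p>For what it's worth, a sketch of the usual workaround: the dynamic loader only reads <code>LD_LIBRARY_PATH</code> once at process start, so instead of changing it, load the library by absolute path and, if it depends on other local libraries, pre-load those first (names and paths below are hypothetical):</p> <pre><code>import ctypes
import os

lib_dir = '/opt/mylibs'  # hypothetical local install prefix

# Pre-load a dependency with RTLD_GLOBAL so libxx.so can resolve its
# symbols without any LD_LIBRARY_PATH lookup.
ctypes.CDLL(os.path.join(lib_dir, 'libdep.so'), mode=ctypes.RTLD_GLOBAL)

libxx = ctypes.CDLL(os.path.join(lib_dir, 'libxx.so'))
</code></pre>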
<python><python-3.x>
2025-02-18 21:54:19
0
8,402
Wang
79,449,605
793,154
How to ask for parameter before installations with Poetry?
<p>My package's Python version depends on one of its dependencies'. Since it integrates with Databricks, the corresponding runtime environment (DBR) sets a dependency package <code>databricks-connect</code> which then fixes the Python version. That is to say:</p> <pre><code>DBR 13.3 -&gt; Python 3.10 DBR 14.3 -&gt; Python 3.11 DBR 15.4 -&gt; Python 3.11 </code></pre> <p>So I'm developing the package using Poetry, but the way I've found it manages dependencies is the other way around... you first fix the Python version, and then adjust the dependencies to match that.</p> <p>I've tried setting them up with groups and extras, but haven't really solved anything. Most of the time I get conflicts with the dependencies, because the corresponding <code>databricks-connect</code> versions conflict with each other.</p> <p>Is there an approach that works with Poetry, or do I need to look further into other packages such as Tox?</p> <p>Thank you for your help.</p>
<python><databricks><python-packaging><python-poetry><databricks-connect>
2025-02-18 20:59:44
0
2,349
Diego-MX
79,449,433
554,305
Installing/Compiling wxPython for Python 3.13t (free threaded)
<p>I want to use wxPython under free threaded Python 3.13. Currently, there is no wheel file available.</p> <p><strong>Problem 1</strong>: Pip will build from source but fail due to missing packages.</p> <p><strong>Solution 1</strong>: I installed <code>wheel, setuptools, six</code> - and all packages listed in <code>\requirements\install.txt</code> and <code>\dev.txt</code> (located in the source tar-ball of wxPython). I already had Visual Studio 2019 installed, so I did not need to install C++ build tools.</p> <p><strong>Problem 2</strong>: While the core libraries compile, 0pip fails at the first extension library (<strong>nano-svg</strong>).</p> <p><strong>Solution 2</strong>: <strong>NONE.</strong> The problem originates from the Cython (?) file and is tied to <code>__pyx_vectorcallfunc</code>.</p> <pre><code> [1/1] Cythonizing wx/svg\_nanosvg.pyx running build_ext building 'wx.svg._nanosvg' extension creating build\wxsvg\temp.win-amd64-cpython-313t\Release\wx\svg &quot;C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64\cl.exe&quot; /c /nologo /O2 /W3 /GL /DNDEBUG /MD -DNANOSVG_IMPLEMENTATION=1 -DNANOSVGRAST_IMPLEMENTATION=1 -DNANOSVG_ALL_COLOR_KEYWORDS=1 -DPy_GIL_DISABLED=1 -Iext/nanosvg/src -ID:\Python\pyBuildTest\include &quot;-IC:\Program Files\Python\include&quot; &quot;-IC:\Program Files\Python\Include&quot; &quot;-IC:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\Include&quot; &quot;-IC:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\ATLMFC\Include&quot; &quot;-IC:\Program Files (x86)\Windows Kits\10\include\10.0.18362.0\shared&quot; &quot;-IC:\Program Files (x86)\Windows Kits\10\include\10.0.18362.0\um&quot; &quot;-IC:\Program Files (x86)\Windows Kits\10\include\10.0.18362.0\winrt&quot; &quot;-IC:\Program Files (x86)\Windows Kits\10\include\10.0.18362.0\ucrt&quot; /Tcwx/svg\_nanosvg.c /Fobuild/wxsvg\temp.win-amd64-cpython-313t\Release\wx\svg\_nanosvg.obj _nanosvg.c ext/nanosvg/src\nanosvg.h(1450): warning C4244: 'initializing': conversion from 'double' to 'float', possible loss of data ext/nanosvg/src\nanosvg.h(1458): warning C4244: 'initializing': conversion from 'double' to 'float', possible loss of data ext/nanosvg/src\nanosvg.h(1491): warning C4244: '=': conversion from 'double' to 'float', possible loss of data ext/nanosvg/src\nanosvg.h(1770): warning C4244: '=': conversion from 'double' to 'float', possible loss of data ext/nanosvg/src\nanosvg.h(1791): warning C4244: '=': conversion from 'double' to 'float', possible loss of data ext/nanosvg/src\nanosvg.h(2553): warning C4244: '=': conversion from 'double' to 'float', possible loss of data ext/nanosvg/src\nanosvg.h(2557): warning C4244: '=': conversion from 'double' to 'float', possible loss of data ext/nanosvg/src\nanosvg.h(2561): warning C4244: '=': conversion from 'double' to 'float', possible loss of data ext/nanosvg/src\nanosvg.h(2565): warning C4244: '=': conversion from 'double' to 'float', possible loss of data wx/svg\_nanosvg.c(2455): error C2146: syntax error: missing ')' before identifier 'vc' wx/svg\_nanosvg.c(2455): error C2081: '__pyx_vectorcallfunc': name in formal parameter list illegal wx/svg\_nanosvg.c(2455): error C2061: syntax error: identifier 'vc' wx/svg\_nanosvg.c(2455): error C2059: syntax error: ';' wx/svg\_nanosvg.c(2455): error C2059: syntax error: ',' wx/svg\_nanosvg.c(2455): error C2059: syntax error: ')' wx/svg\_nanosvg.c(25912): error C2146: syntax error: missing 
')' before identifier 'vc' wx/svg\_nanosvg.c(25912): error C2081: '__pyx_vectorcallfunc': name in formal parameter list illegal wx/svg\_nanosvg.c(25912): error C2061: syntax error: identifier 'vc' wx/svg\_nanosvg.c(25912): error C2059: syntax error: ';' wx/svg\_nanosvg.c(25912): error C2059: syntax error: ',' wx/svg\_nanosvg.c(25912): error C2059: syntax error: ')' wx/svg\_nanosvg.c(25957): error C2146: syntax error: missing ')' before identifier 'vc' wx/svg\_nanosvg.c(25957): error C2081: '__pyx_vectorcallfunc': name in formal parameter list illegal wx/svg\_nanosvg.c(25957): error C2061: syntax error: identifier 'vc' wx/svg\_nanosvg.c(25957): error C2059: syntax error: ';' wx/svg\_nanosvg.c(25957): error C2059: syntax error: ',' wx/svg\_nanosvg.c(25957): error C2059: syntax error: ')' wx/svg\_nanosvg.c(26646): error C2065: '__pyx_vectorcallfunc': undeclared identifier wx/svg\_nanosvg.c(26646): error C2146: syntax error: missing ';' before identifier 'vc' wx/svg\_nanosvg.c(26646): error C2065: 'vc': undeclared identifier wx/svg\_nanosvg.c(26646): warning C4047: '=': 'int' differs in levels of indirection from 'vectorcallfunc' wx/svg\_nanosvg.c(26647): error C2065: 'vc': undeclared identifier wx/svg\_nanosvg.c(26649): warning C4013: '__Pyx_PyVectorcall_FastCallDict' undefined; assuming extern returning int wx/svg\_nanosvg.c(26649): error C2065: 'vc': undeclared identifier wx/svg\_nanosvg.c(26649): warning C4047: 'return': 'PyObject *' differs in levels of indirection from 'int' error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Community\\VC\\Tools\\MSVC\\14.29.30133\\bin\\HostX64\\x64\\cl.exe' failed with exit code 2 Command 'D:\Python\pyBuildTest\Scripts\python.exe setup-wxsvg.py build_ext --inplace' failed with exit code 1. Finished command: build_others (0m2.712s) Finished command: build_py (7m37.163s) Finished command: build (12m45.794s) Command '&quot;D:\Python\pyBuildTest\Scripts\python.exe&quot; -u build.py build' failed with exit code 1. </code></pre> <h2>My findings so far (no working solution)</h2> <ol> <li><code>__pyx_vectorcallfunc</code> is not recognized. This might be related to compatibility issue of Cython with three threading 3.13.</li> <li>A similar problem was posted in gits of other packages like scipy, linking the problem with <code>__pyx_vectorcallfunc</code> to an outdated Cython package.</li> <li><em>wxPython</em> requires Cython 3.0.10, while the current (stable) version is 3.0.12. I upgraded to Cython version 3.0.12, and then to the alpha 3.1.x (built from git master) but the source build of wxPython still fails due to <code>__pyx_vectorcallfunc</code>.</li> </ol> <hr /> <p>Has someone built wxPython (the wheel) from source successfully in free threading Python 3.13t?</p> <p>Should I address the wxPython team/git with this issue? Or should I just wait?</p> <p>Is there another workaround that allows me to build wxPython on 3.13t?</p> <p><strong>EDIT: Had to rephrase, should hopefully be clearer now.</strong></p>
<python><pip><build><wxpython>
2025-02-18 19:46:52
0
395
BeschBesch
79,449,424
11,156,131
FPDF2 separate styling in table
<p>Is it possible to separately style column headers and row values? I tried this but it generates same style for both. I am only keen on different font-size and font-color.</p> <pre><code>from fpdf import FPDF from fpdf.fonts import FontFace data = ( (&quot;First name&quot;, &quot;Last name&quot;, &quot;Age&quot;, &quot;City&quot;), (&quot;Jules&quot;, &quot;Smith&quot;, &quot;34&quot;, &quot;San Juan&quot;), (&quot;Mary&quot;, &quot;Ramos&quot;, &quot;45&quot;, &quot;Orlando&quot;) ) pdf = FPDF() pdf.add_page() pdf.set_font(family=&quot;Helvetica&quot;, style=&quot;B&quot;, size=12) # Table font_color_header = (0, 0, 0) font_color_cell = (255, 0, 0) headings_style = FontFace(color=font_color_header, size_pt=18) cell_style = FontFace(color=font_color_cell, size_pt=9) with pdf.table(headings_style=headings_style) as table: for data_row in data: row = table.row() for datum in data_row: row.cell(text=datum, style=cell_style) </code></pre>
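<p>A sketch of what appears to work here: let <code>headings_style</code> (passed to <code>pdf.table()</code>) style the first row, and only pass the per-cell <code>style</code> for the data rows so it does not override the heading style:</p> <pre><code>with pdf.table(headings_style=headings_style) as table:
    for row_number, data_row in enumerate(data):
        row = table.row()
        for datum in data_row:
            if row_number == 0:
                # Heading row: rely on headings_style from pdf.table()
                row.cell(text=datum)
            else:
                row.cell(text=datum, style=cell_style)
</code></pre>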
<python><fpdf2>
2025-02-18 19:43:49
0
4,376
smpa01
79,449,407
14,122
Configuring a custom log handler to add kwargs from "extra" dict for captured logs in pytest
<p>I'm using structlog with keyword arguments to generate stdlib log events with the <code>extra</code> dict storing keyword arguments necessary to understand the context around the log message being printed.</p> <p>When my program is run under pytest, the log messages written directly to stdout honor the application's log settings, but when pytest prints <em>captured</em> log messages they go through a default-to-pytest formatter that doesn't include this information.</p> <p><strong><em>As far as I know</em>, a format string can't be used to format a dictionary into a string with <code>key=value</code> pairs</strong> (rather, when <code>extra=</code> is in use, the expectation is that a format string will pull in specific keywords from that dictionary), so it isn't obvious to me that <code>log_cli_format</code> (as recommended by numerous preexisting Q&amp;A entries) is sufficient for the task at hand.</p> <hr /> <p>Consider the following reproducer:</p> <pre><code>import pytest import logging logger = logging.getLogger() def test_logging(): # in the real use case &quot;extra&quot; is generated by structlog # ...the names may change for each log message and cannot be hardcoded logger.info('message here', extra={&quot;foo&quot;: &quot;bar&quot;, &quot;baz&quot;: &quot;qux&quot;}) assert False, 'Trigger failure' </code></pre> <p>When running:</p> <pre class="lang-bash prettyprint-override"><code>pytest --log-cli-level INFO </code></pre> <p>this outputs:</p> <pre class="lang-none prettyprint-override"><code>----------------------------------------- Captured log call ----------------------------------------- INFO root:test_me.py:8 message here ====================================== short test summary info ====================================== FAILED test_me.py::test_logging - AssertionError: Trigger failure assert False ========================================= 1 failed in 0.08s ========================================= </code></pre> <p>I want <code>message here</code> (within the <code>Captured log call</code> section, not captured stdout or stderr) to contain content equivalent to <code>foo='bar' baz='qux'</code>. How can this be accomplished?</p>
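<p>As a starting point, here is a sketch of a stdlib-only formatter that appends every non-standard <code>LogRecord</code> attribute (i.e. whatever arrived via <code>extra=</code>) as <code>key=value</code> pairs; hooking it into the handler pytest uses for the &quot;Captured log call&quot; section is the remaining step and is not shown, since I have not verified the exact plugin attribute to patch:</p> <pre><code>import logging

# Attribute names of a bare LogRecord; anything beyond these must have
# come from the `extra` dict (or structlog's event dict).
_STANDARD_ATTRS = set(vars(logging.LogRecord('', 0, '', 0, '', (), None)))
_STANDARD_ATTRS |= {'message', 'asctime'}

class ExtraFormatter(logging.Formatter):
    def format(self, record: logging.LogRecord) -&gt; str:
        base = super().format(record)
        extras = {k: v for k, v in vars(record).items()
                  if k not in _STANDARD_ATTRS}
        if extras:
            return base + ' ' + ' '.join(f'{k}={v!r}' for k, v in extras.items())
        return base
</code></pre>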
<python><logging><pytest>
2025-02-18 19:35:44
1
299,045
Charles Duffy
79,449,347
11,354,959
Exception when trying to launch django project debug in VS code
<p>Hi I have a django project (4.2.2) and when I start the project from the terminal inside VS Code using &quot;python3 manage.py runserver&quot; everything works perfectly. But I want to debug it so I created a launch.json file to start it as debug Here is what it contains:</p> <pre><code>{ // Use IntelliSense to learn about possible attributes. // Hover to view descriptions of existing attributes. // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387 &quot;version&quot;: &quot;0.2.0&quot;, &quot;configurations&quot;: [ { &quot;name&quot;: &quot;Python Debugger django&quot;, &quot;type&quot;: &quot;debugpy&quot;, &quot;request&quot;: &quot;launch&quot;, &quot;program&quot;: &quot;${workspaceFolder}/manage.py&quot;, &quot;console&quot;: &quot;integratedTerminal&quot;, &quot;django&quot;: true, &quot;args&quot;: [ &quot;runserver&quot;, ] } ] } </code></pre> <p>When I try to start the project using debug button I get the following error: <code>django.core.exceptions.ImproperlyConfigured: Error loading psycopg2 or psycopg module</code></p> <p>I have installed psycopg library as if I look at my requiremnts.txt I have</p> <pre><code>psycopg==3.2.4 psycopg-binary==3.2.4 </code></pre> <p>Not sure what is the problem? In debug mode the error is display in my models.py file:</p> <pre class="lang-py prettyprint-override"><code>from django.db import models class User(models.Model): # &lt;-- here is the error &quot;&quot;&quot; User entity &quot;&quot;&quot; id = models.AutoField(primary_key=True) first_name = models.CharField(max_length=80) last_name = models.CharField(max_length=100) # created_at = models.DateTimeField(auto_now_add=True) def __str__(self): return f&quot;{self.first_name} {self.last_name}&quot; </code></pre> <p>EDIT: added full error trace below</p> <pre><code>Exception in thread django-main-thread: Traceback (most recent call last): File &quot;/home/louis/zero/backend/venv/lib/python3.10/site-packages/django/db/backends/postgresql/base.py&quot;, line 25, in &lt;module&gt; import psycopg as Database ModuleNotFoundError: No module named 'psycopg' During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;/home/louis/zero/backend/venv/lib/python3.10/site-packages/django/db/backends/postgresql/base.py&quot;, line 27, in &lt;module&gt; import psycopg2 as Database ModuleNotFoundError: No module named 'psycopg2' During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;/usr/lib/python3.10/threading.py&quot;, line 1016, in _bootstrap_inner self.run() File &quot;/usr/lib/python3.10/threading.py&quot;, line 953, in run self._target(*self._args, **self._kwargs) File &quot;/home/louis/zero/backend/venv/lib/python3.10/site-packages/django/utils/autoreload.py&quot;, line 64, in wrapper fn(*args, **kwargs) File &quot;/home/louis/zero/backend/venv/lib/python3.10/site-packages/django/core/management/commands/runserver.py&quot;, line 126, in inner_run autoreload.raise_last_exception() File &quot;/home/louis/zero/backend/venv/lib/python3.10/site-packages/django/utils/autoreload.py&quot;, line 87, in raise_last_exception raise _exception[1] File &quot;/home/louis/zero/backend/venv/lib/python3.10/site-packages/django/core/management/__init__.py&quot;, line 394, in execute autoreload.check_errors(django.setup)() File &quot;/home/louis/zero/backend/venv/lib/python3.10/site-packages/django/utils/autoreload.py&quot;, line 64, in wrapper fn(*args, **kwargs) File 
&quot;/home/louis/zero/backend/venv/lib/python3.10/site-packages/django/__init__.py&quot;, line 24, in setup apps.populate(settings.INSTALLED_APPS) File &quot;/home/louis/zero/backend/venv/lib/python3.10/site-packages/django/apps/registry.py&quot;, line 116, in populate app_config.import_models() File &quot;/home/louis/zero/backend/venv/lib/python3.10/site-packages/django/apps/config.py&quot;, line 269, in import_models self.models_module = import_module(models_module_name) File &quot;/usr/lib/python3.10/importlib/__init__.py&quot;, line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1050, in _gcd_import File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1027, in _find_and_load File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1006, in _find_and_load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 688, in _load_unlocked File &quot;&lt;frozen importlib._bootstrap_external&gt;&quot;, line 883, in exec_module File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 241, in _call_with_frames_removed File &quot;/home/louis/zero/backend/ztm_app/models.py&quot;, line 3, in &lt;module&gt; class User(models.Model): File &quot;/home/louis/zero/backend/venv/lib/python3.10/site-packages/django/db/models/base.py&quot;, line 143, in __new__ new_class.add_to_class(&quot;_meta&quot;, Options(meta, app_label)) File &quot;/home/louis/zero/backend/venv/lib/python3.10/site-packages/django/db/models/base.py&quot;, line 371, in add_to_class value.contribute_to_class(cls, name) File &quot;/home/louis/zero/backend/venv/lib/python3.10/site-packages/django/db/models/options.py&quot;, line 231, in contribute_to_class self.db_table, connection.ops.max_name_length() File &quot;/home/louis/zero/backend/venv/lib/python3.10/site-packages/django/utils/connection.py&quot;, line 15, in __getattr__ return getattr(self._connections[self._alias], item) File &quot;/home/louis/zero/backend/venv/lib/python3.10/site-packages/django/utils/connection.py&quot;, line 62, in __getitem__ conn = self.create_connection(alias) File &quot;/home/louis/zero/backend/venv/lib/python3.10/site-packages/django/db/utils.py&quot;, line 193, in create_connection backend = load_backend(db[&quot;ENGINE&quot;]) File &quot;/home/louis/zero/backend/venv/lib/python3.10/site-packages/django/db/utils.py&quot;, line 113, in load_backend return import_module(&quot;%s.base&quot; % backend_name) File &quot;/usr/lib/python3.10/importlib/__init__.py&quot;, line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1050, in _gcd_import File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1027, in _find_and_load File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1006, in _find_and_load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 688, in _load_unlocked File &quot;&lt;frozen importlib._bootstrap_external&gt;&quot;, line 883, in exec_module File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 241, in _call_with_frames_removed File &quot;/home/louis/zero/backend/venv/lib/python3.10/site-packages/django/db/backends/postgresql/base.py&quot;, line 29, in &lt;module&gt; raise ImproperlyConfigured(&quot;Error loading psycopg2 or psycopg module&quot;) django.core.exceptions.ImproperlyConfigured: Error loading psycopg2 or psycopg module </code></pre>
<python><django><visual-studio-code>
2025-02-18 19:11:45
2
370
El Pandario
79,449,287
1,534,017
Align grid and cells in heatmap
<p>I use <code>plotly</code>'s <code>graph_objects</code> to generate a <code>heatmap</code>. It all works well, however, I would like to shift the grid in such a way that its lines are aligned with the boundaries of the cells that contain the numbers; right now the grid lines point towards the center of the the cells.</p> <p>A minimal example:</p> <pre><code>import numpy as np import plotly.graph_objects as go z = [[.1, .3, .5, .7, .9], [1, np.nan, .6, .4, .2], [.2, 0, np.nan, .7, .9], [.9, np.nan, np.nan, .2, 0], [.3, .4, np.nan, .7, 1]] cols = list(&quot;ABCDE&quot;) rows = list(&quot;12345&quot;) def plot_data(data: np.ndarray, cols: list[str], rows: list[str], title: str, color_scale: str = 'RdBu_r', colorbar_title: str = &quot;&quot;) -&gt; go.Figure: heatmap_fig = go.Figure() heatmap_fig.add_trace( go.Heatmap( z=data, x=cols, y=rows, colorscale=color_scale, text=[[f&quot;{val:.2f}&quot; if not np.isnan(val) else &quot;&quot; for val in row] for row in data], texttemplate=&quot;%{text}&quot;, textfont={&quot;size&quot;: 10}, showscale=True, colorbar=dict( title=colorbar_title, titleside=&quot;right&quot;, thickness=20, len=0.8 ), xgap=2, ygap=2 ) ) heatmap_fig.update_layout( title={ 'text': title, 'y':0.95, 'x':0.5, 'xanchor': 'center', 'yanchor': 'top' }, height=400, width=None, yaxis_autorange='reversed', margin=dict(t=60, r=80, b=60, l=60), xaxis=dict( side='top', tickmode='linear', dtick=1, tickangle=0, showgrid=True, gridcolor='lightgrey', gridwidth=1, griddash='solid' ), yaxis=dict( showgrid=True, gridcolor='lightgrey', gridwidth=1, griddash='solid' ), plot_bgcolor='white' ) return heatmap_fig fig = plot_data(z, cols, rows, &quot;Test&quot;) fig.show() </code></pre> <p>which produces</p> <p><a href="https://i.sstatic.net/F0UUZg7V.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/F0UUZg7V.png" alt="enter image description here" /></a></p> <p>So, all lines should be shifted by 0.5 cells but the row and column labels should remain centered. Is that possible?</p>
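<p>A sketch of what may be the simplest route, assuming the axes are treated as categorical (they are here, since <code>x</code> and <code>y</code> are lists of strings): Plotly's <code>tickson=&quot;boundaries&quot;</code> axis setting moves the grid lines and ticks to the category boundaries while the labels stay centred on the cells:</p> <pre><code>heatmap_fig.update_layout(
    xaxis=dict(side='top', showgrid=True, gridcolor='lightgrey',
               tickson='boundaries'),  # grid at cell edges, labels centred
    yaxis=dict(showgrid=True, gridcolor='lightgrey',
               tickson='boundaries'),
)
</code></pre>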
<python><plotly><heatmap>
2025-02-18 18:33:18
0
26,249
Cleb
79,449,212
15,456,681
Numba promotes types differently to numpy
<p>When adding (or multiplying, dividing, subtracting etcโ€ฆ) a python float to a numpy array, in numpy the dtype of the array is preserved, whereas in numba the array is promoted to float64. How can I modify the overload of <code>ndarray.__add__</code> etcโ€ฆ to change the dtype of the python float to match that of the array so that the result has the same dtype?</p> <p>Ideally, I don't want to have to modify my functions, rather just implement a new overload of the underlying addition etc functions, as there are many instances of this in my code.</p> <p>Code to demonstrate the issue, would like consistency with numpy in a function decorated with <code>njit</code>:</p> <pre><code>import numpy as np import numba as nb def func(array): return array + 1.0 numba_func = nb.njit(func) a_f64 = np.ones(1, dtype=np.float64) a_f32 = np.ones(1, dtype=np.float32) for i in (a_f64, a_f32): print(i.dtype) print(func(i).dtype) print(numba_func(i).dtype, end=&quot;\n\n&quot;) </code></pre> <p>Output (with numpy 2.1.3 and numba 0.61.0):</p> <pre><code>float64 float64 float64 float32 float32 float64 </code></pre>
<python><numpy><numba>
2025-02-18 17:58:12
1
3,592
Nin17
79,449,175
1,084,684
wtforms: optional FieldList of IntegerField?
<p>I need an <em>optional</em> list of int's in a WTForms-validated Rest API.</p> <p>Right now this is the relevant part of my form:</p> <pre><code>from wtforms import IntegerField, FileField, StringField, FieldList from wtforms.validators import DataRequired, Optional, UUID class PublishForm(FlaskForm): # This validates, but the patch_bases list comes up empty. # patch_bases = FieldList(IntegerField(&quot;pb&quot;, validators=[Required()]), min_entries=0, max_entries=5, validators=[Optional()]) # This too. patch_bases = FieldList(IntegerField(), min_entries=0, max_entries=5, validators=[Optional()]) </code></pre> <p>Here's the call to validate/parse:</p> <pre><code>form = PublishForm() if not form.validate_on_submit(): return form_error_response(form.errors) </code></pre> <p>And here's (part of) a test curl I'm using to hit the API:</p> <pre><code>curl \ -v \ -X POST \ -H &quot;Authorization: Bearer $(set -eu; cat token)&quot; \ --form 'patch_bases=[2, 4]' \ http://hostname:8080/api/v3/publish </code></pre> <p>I'm using --form because I need to upload a file as well (not shown here).</p> <p>I'm consistently getting back a &quot;successfully validated&quot; empty list even when I pass a nonempty list of int's via curl, which is not what I want.</p> <p>What I want, ideally, is a None for no patch_bases field specified at all, an empty list for patch_bases=[], or a nonempty list for 1 to 5 int's in patch_bases - all in form.patch_bases.data.</p> <p>I've Googled about this extensively on more than one occasion, but I'm not seeing a way of doing what I want.</p> <p>I'm using:</p> <pre><code>WTForms 2.0.2 Python 3.10.12 Flask 1.1.2 </code></pre> <p>I know, the WTForms version is ancient, but I'm stuck with it for now unless there's a pretty compelling reason to upgrade.</p> <p>If there's a fine manual to read, feel free to point me at it? I dug around in the WTForms 2.3.x doc on Fields (among many other Google hits), but didn't see what I need.</p> <p>Thanks!</p> <p>PS: I just tried the MultivalueOptional class from <a href="https://github.com/pallets-eco/wtforms/issues/835" rel="nofollow noreferrer">https://github.com/pallets-eco/wtforms/issues/835</a> - but field.raw_data is coming up None even when I do pass a value.</p>
<python><wtforms>
2025-02-18 17:44:13
1
7,243
dstromberg
79,449,057
1,870,832
confused by silent truncation in polars type casting
<p>I encountered some confusing behavior with polars type-casting (silently truncating floats to ints without raising an error, even when explicitly specifying <code>strict=True</code>), so I headed over to the <a href="https://docs.pola.rs/user-guide/expressions/casting/" rel="nofollow noreferrer">documentation page on casting</a> and now I'm even more confused.</p> <p>The text at the top of the page says:</p> <blockquote> <p>The function <code>cast</code> includes a parameter <code>strict</code> that determines how Polars behaves when it encounters a value that cannot be converted from the source data type to the target data type. The default behaviour is <code>strict=True</code>, which means that Polars will thrown an error to notify the user of the failed conversion while also providing details on the values that couldn't be cast.</p> </blockquote> <p>However, the code example immediately below (section title &quot;Basic example&quot;) shows a <code>df</code> with a <code>floats</code> column taking values including <code>5.8</code> being truncated to int <code>5</code> during <code>cast</code>ing with the code <code>pl.col(&quot;floats&quot;).cast(pl.Int32).alias(&quot;floats_as_integers&quot;)</code>, i.e. without <code>strict=False</code>.</p> <p>What am I misunderstanding here? The text seems to indicate that this truncation, with <code>strict=True</code> as default, should &quot;throw an error,&quot; but the code example in the documentation (and my own polars code) throws no error and silently truncates values.</p>
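<p>As far as I can tell (treat this as my reading of the behaviour rather than an official statement), float-to-integer truncation counts as a <em>valid</em> conversion in Polars, so <code>strict</code> never gets involved for <code>5.8</code>; <code>strict=True</code> only raises for values that cannot be represented in the target type at all, e.g. an overflow. A small sketch of the distinction:</p>
<pre><code>import polars as pl

s = pl.Series('floats', [5.8, 2.5e10])

# Truncation is treated as a valid float -&gt; int conversion, so no error:
print(s.head(1).cast(pl.Int32))          # [5]

# 2.5e10 does not fit in an Int32, so with the default strict=True this raises;
# with strict=False it becomes null instead:
# s.cast(pl.Int32)                       # raises
print(s.cast(pl.Int32, strict=False))    # [5, null]
</code></pre>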
<python><dataframe><casting><python-polars>
2025-02-18 16:55:40
2
9,136
Max Power
79,448,931
1,838,076
How to dynamically pick the columns to show in dataframe with streamlit
<p>I am trying to pick the columns to show dynamically in a <code>streamlit</code> dataframe.</p> <p>Below is the code</p> <pre class="lang-py prettyprint-override"><code>#! /usr/bin/env python3 import streamlit as st import pandas as pd st.set_page_config(layout=&quot;wide&quot;) # Create a sample dataframe with 3 columns Col1, Col2, Col3 df = pd.DataFrame({ 'Col1': ['A', 'B', 'C'], 'Col2': ['X', 'Y', 'Z'], 'Col3': [1, 2, 3] }) # Have a dropdown to select a column among Col1, Col2 col = st.selectbox('Select a column', ['Col1', 'Col2']) # Show the dataframe with the selected column and column Col3 st.dataframe(df, column_order=[col, 'Col3'], use_container_width=True) </code></pre> <p>The issue is, irrespective of the selection, it always shows <code>Col1</code> and <code>Col3</code> always. As far as I can imagine, this might be because <code>column_order</code> is frozen when the dataframe is created the first time. But is this the intended behavior?</p> <p>The following code works dynamically to disable the dropdown based on checkbox, like its shown in examples.</p> <pre class="lang-py prettyprint-override"><code>disable = st.checkbox('Disable the dropdown') sel = st.selectbox('Select a column', ['Col1', 'Col2'], key='sel', disabled=disable) </code></pre> <p>Why is <code>column_order</code> different?</p> <p>I even tried using tuple to check if any python-mutable stuff is happening under the hood. But the behavior is same.</p> <pre class="lang-py prettyprint-override"><code>st.dataframe(df, column_order=(col, 'Col3'), use_container_width=True) </code></pre> <p>Am I missing something?</p>
<python><dataframe><streamlit>
2025-02-18 16:09:22
0
1,622
Krishna
79,448,844
11,470,719
Appropriate type-hints for Generic/Frozen Multivariate Distributions in scipy.stats
<p>I need to differentiate between 4 different sets of probability distributions from the <code>scipy.stats</code> module: generic univariate, frozen univariate, generic multivariate, &amp; frozen multivariate. Throughout the application, I would like to add type hints for these 4 sets.</p> <p>For the univariate cases, mypy has no problems with type hints like in this MWE:</p> <pre class="lang-py prettyprint-override"><code>from typing import Any from scipy.stats import norm from scipy.stats._distn_infrastructure import rv_generic, rv_frozen def sample_frozen_univariate(n_samples: int, dist: rv_frozen): return dist.rvs(n_samples) def sample_generic_univariate(n_samples: int, dist: rv_generic, *distparams: Any): return dist(*distparams).rvs(n_samples) n_samples = 4 frozen_dist = norm() print(sample_frozen_univariate(n_samples, frozen_dist)) n_samples = 4 generic_dist = norm loc, scale = 1, 2 print(sample_generic_univariate(n_samples, generic_dist, loc, scale)) </code></pre> <hr /> <p>Now here's an MWE of the multivariate cases. If executed it provides the correct result:</p> <pre class="lang-py prettyprint-override"><code>from typing import Any from scipy.stats import multivariate_normal from scipy.stats._multivariate import multi_rv_generic, multi_rv_frozen def sample_frozen_multivariate(n_samples: int, n_variates: int, dist: multi_rv_frozen): if dist.dim != n_variates: msg = 'distribution dimension %s inconsistent with n_variates=%s' raise ValueError(msg % (dist.dim, n_variates)) sample = dist.rvs(n_samples) return sample def sample_generic_multivariate(n_samples: int, n_variates: int, dist: multi_rv_generic, *distparams: Any): sample = dist(*distparams).rvs(n_samples) if sample.shape[1] != n_variates: msg = 'sample dimension %s inconsistent with n_variates=%s' raise ValueError(msg % (sample.shape[1], n_variates)) return sample n_samples = 4 n_variates = 2 frozen_dist = multivariate_normal(mean=[0, 0], cov=[[1, 0], [0, 1]]) print(sample_frozen_multivariate(n_samples, n_variates, frozen_dist)) n_samples = 4 n_variates = 2 generic_dist = multivariate_normal mean, cov = [-1, 1], [[1, 0], [0, 1]] print(sample_generic_multivariate(n_samples, n_variates, generic_dist, mean, cov)) </code></pre> <p>However, mypy reports the following issues:</p> <pre><code>example_2.py:8: error: &quot;multi_rv_frozen[multi_rv_generic]&quot; has no attribute &quot;dim&quot; [attr-defined] example_2.py:10: error: &quot;multi_rv_frozen[multi_rv_generic]&quot; has no attribute &quot;dim&quot; [attr-defined] example_2.py:11: error: &quot;multi_rv_frozen[multi_rv_generic]&quot; has no attribute &quot;rvs&quot; [attr-defined] example_2.py:17: error: &quot;multi_rv_generic&quot; not callable [operator] </code></pre> <p>I understand the issues that mypy is reporting, just not how to properly address them. <strong>What approach(es) to type hints for generic and frozen multivariate distributions from <code>scipy.stats</code> would appease type-checkers?</strong> (As an aside, I am aware of the problem with relying on types from &quot;private&quot; modules in <code>scipy.stats</code>. Though I understand that is a separate issue, I welcome solutions that simultaneously address that problem.)</p>
<python><python-typing><mypy><scipy.stats><probability-distribution>
2025-02-18 15:55:39
1
2,210
brentertainer
79,448,815
1,406,168
Default credentials with WorkspaceClient hosted in an Azure Web App with managed identity
<p>I have a fast API hosted in an azure web app. I am trying to get some data from DataBricks and return it from the FastAPI. This is my context:</p> <p>DbxContext.py:</p> <pre><code>import time import pandas as pd from databricks import sql from databricks.sdk import WorkspaceClient from pydantic import SecretStr # from databricks.sdk.service import users class DataBricksOptions: def __init__(self, client_id: str, client_secret: SecretStr, tenant_id: str, workspace_resource_id: str, http_path: str, host: str): self.client_id = client_id self.client_secret = client_secret self.tenant_id = tenant_id self.workspace_resource_id = workspace_resource_id self.http_path= http_path self.host = host class DataBricksContext: def __init__(self, options: DataBricksOptions): self.options = options def get_databricks_connection(self): try: # Initialize Databricks Workspace Client using default credentials w = WorkspaceClient() me2 = w.current_user.me() print(f&quot;Me: {me2.display_name}&quot;) #w.users.me() # Create Databricks Token token = w.tokens.create(comment=f&quot;sdk-{time.time_ns()}&quot;, lifetime_seconds=3600) # Establish SQL Connection connection = sql.connect( server_hostname=self.options.host, http_path=self.options.http_path, access_token=token.token_value ) return connection except Exception as e: print(f&quot;Error in get_databricks_connection(): {e}&quot;) return None def exec_query_to_df(self, connection, query: str): try: # Use the connection to execute the query df = pd.read_sql(query, connection) return df except Exception as e: print(f&quot;Error in exec_query_to_df(): {e}&quot;) return None </code></pre> <p>When I run it locally in VS code everything works as expected. When I run it in Azure I get following error:</p> <blockquote> <p>Error in get_databricks_connection(): default auth: cannot configure default credentials, please check <a href="https://docs.databricks.com/en/dev-tools/auth.html#databricks-client-unified-authentication" rel="nofollow noreferrer">https://docs.databricks.com/en/dev-tools/auth.html#databricks-client-unified-authentication</a> to configure credentials for your preferred authentication</p> </blockquote> <p>I have set System Assigned Identity to On on the azure web app. Any suggestions?</p>
<python><azure-web-app-service><databricks><azure-databricks><azure-managed-identity>
2025-02-18 15:45:25
0
5,363
Thomas Segato
79,448,760
4,693,577
CuPy ROI Analysis significantly slower than NumPy version on RTX 4090
<p>I have two versions of an ROI analysis class - one using NumPy and one using CuPy. The CuPy version is running much slower despite using an RTX 4090. Both versions perform the same operations:</p> <pre><code># Standard library imports import json from typing import Any, Dict, List, Optional, Tuple, Union # Third-party library imports import cupy as cp import pandas as pd from cupyx.scipy import stats from cupyx.scipy.ndimage import label from cucim.skimage.measure import regionprops_table import cudf # Local imports from HiTMicTools.utils import adjust_dimensions, stack_indexer # Type hints from numpy.typing import NDArray from pandas import DataFrame, Series def roi_skewness(regionmask, intensity): &quot;&quot;&quot;Cupy version for the ROI standard deviation as defined in analysis_tools.utils&quot;&quot;&quot; roi_intensities = intensity[regionmask] try: # Check if there are enough unique values in roi_intensities unique_values = cp.unique(roi_intensities) if len(unique_values) &lt; 10: return 0 return float(stats.skew(roi_intensities, bias=False)) except Exception: return 0 def roi_std_dev(regionmask, intensity): &quot;&quot;&quot;Cupy version for the ROI standard deviation as defined in analysis_tools.utils&quot;&quot;&quot; roi_intensities = intensity[regionmask] return float(cp.std(roi_intensities)) def coords_centroid(coords): centroid = cp.mean(coords, axis=0) return pd.Series(centroid, index=[&quot;slice&quot;, &quot;y&quot;, &quot;x&quot;]) def convert_to_list_and_dump(row): return json.dumps(row.tolist()) def stack_indexer_ingpu( nframes: Union[int, List[int], range] = [0], nslices: Union[int, List[int], range] = [0], nchannels: Union[int, List[int], range] = [0], ) -&gt; cp.ndarray: &quot;&quot;&quot; Generate an index table for accessing specific frames, slices, and channels in an image stack. This aims to simplify the process of iterating over different combinations of frame, slice, and channel indices with for loops. Args: nframes (Union[int, List[int], range], optional): Frame indices. Defaults to [0]. nslices (Union[int, List[int], range], optional): Slice indices. Defaults to [0]. nchannels (Union[int, List[int], range], optional): Channel indices. Defaults to [0]. Returns: cp.ndarray: Index table with shape (n_combinations, 3), where each row represents a combination of frame, slice, and channel indices. Raises: ValueError: If any dimension contains negative integers. TypeError: If any dimension is not an integer, list of integers, or range object. 
&quot;&quot;&quot; dimensions = [] for dimension in [nframes, nslices, nchannels]: if isinstance(dimension, int): if dimension &lt; 0: raise ValueError(&quot;Dimensions must be positive integers or lists.&quot;) dimensions.append([dimension]) elif isinstance(dimension, (list, range)): if not all(isinstance(i, int) and i &gt;= 0 for i in dimension): raise ValueError( &quot;All elements in the list dimensions must be positive integers.&quot; ) dimensions.append(dimension) else: raise TypeError( &quot;All dimensions must be either positive integers or lists of positive integers.&quot; ) combinations = list(itertools.product(*dimensions)) index_table = cp.array(combinations) return index_table class RoiAnalyser: def __init__(self, image, probability_map, stack_order=(&quot;TSCXY&quot;, &quot;TXY&quot;)): image = adjust_dimensions(image, stack_order[0]) probability_map = adjust_dimensions(probability_map, stack_order[1]) self.img = cp.asarray(image) self.proba_map = cp.asarray(probability_map) self.stack_order = stack_order def create_binary_mask(self, threshold=0.5): &quot;&quot;&quot; Create a binary mask from an image stack of probabilities. Args: image_stack (cp.ndarray): A 5D cupy array with shape (frames, slices, channels, height, width) containing probabilities. threshold (float): The threshold value to use for binarization (default: 0.5). Returns: cupy.ndarray: A 5D numpy array with the same shape as the input, where values above the threshold are set to 1, and values below or equal to the threshold are set to 0. &quot;&quot;&quot; # Convert probabilities to binary values self.binary_mask = self.proba_map &gt; threshold def clean_binmask(self, min_pixel_size=20): &quot;&quot;&quot; Clean the binary mask by removing small ROIs. Args: min_pixel_size (int): Minimum ROI size in pixels. Returns: cleaned_mask (cp.ndarray): Cleaned binary mask. &quot;&quot;&quot; labeled, num_features = self.get_labels(return_value=True) sizes = cp.bincount(labeled.ravel())[1:] mask_sizes = sizes &gt;= min_pixel_size label_map = cp.zeros(num_features + 1, dtype=int) label_map[1:][mask_sizes] = cp.arange(1, cp.sum(mask_sizes) + 1) cleaned_labeled = label_map[labeled] cleaned_mask = cleaned_labeled &gt; 0 self.binary_mask = cleaned_mask def get_labels(self, return_value=False): &quot;&quot;&quot; Get the labeled mask for the binary mask. Returns: None &quot;&quot;&quot; labeled_mask = cp.empty_like(self.binary_mask, dtype=int) num_rois = 0 max_label = 0 for i in range(self.binary_mask.shape[0]): labeled_frame, num = label(self.binary_mask[i]) labeled_mask[i] = cp.where( labeled_frame != 0, labeled_frame + max_label, labeled_frame ) max_label += num num_rois += num if return_value: return labeled_mask, num_rois else: self.total_rois = num_rois self.labeled_mask = labeled_mask def get_roi_measurements( self, target_channel=0, target_slice=0, properties=[&quot;label&quot;, &quot;centroid&quot;, &quot;mean_intensity&quot;], extra_properties=None, ): &quot;&quot;&quot; Get measurements for each ROI in the labeled mask for a specific channel and all frames. Args: img (cupy.ndarray): The original image. labeled_mask (cupy.ndarray): The labeled mask containing the ROIs. properties (list, optional): A list of properties to measure for each ROI. Defaults to ['mean_intensity', 'centroid']. Returns: list: A list of dictionaries, where each dictionary contains the measurements for a single ROI. 
&quot;&quot;&quot; assert ( self.labeled_mask is not None ), &quot;Run get_labels() first to generate labeled mask&quot; img = self.img[:, target_slice, target_channel, :, :] labeled_mask = self.labeled_mask[:, target_slice, 0, :, :] all_roi_properties = [] for frame in range(img.shape[0]): img_frame = img[frame] labeled_mask_frame = labeled_mask[frame] roi_properties = regionprops_table( labeled_mask_frame, intensity_image=img_frame, properties=properties, separator=&quot;_&quot;, extra_properties=extra_properties, ) roi_properties_df = cudf.DataFrame(roi_properties) roi_properties_df[&quot;frame&quot;] = frame roi_properties_df[&quot;slice&quot;] = target_channel roi_properties_df[&quot;channel&quot;] = target_slice all_roi_properties.append(roi_properties_df) all_roi_properties_cudf = cudf.concat(all_roi_properties, ignore_index=True) if &quot;coords&quot; in all_roi_properties_cudf.columns: all_roi_properties_cudf[&quot;coords&quot;] = all_roi_properties_cudf[&quot;coords&quot;].apply( convert_to_list_and_dump ) # rearrange the columns required_cols = [&quot;label&quot;, &quot;frame&quot;, &quot;slice&quot;, &quot;channel&quot;] other_cols = [ col for col in all_roi_properties_cudf.columns if col not in required_cols ] cols = required_cols + other_cols all_roi_properties_cudf = all_roi_properties_cudf[cols] # Convert to pandas DataFrame at the very end all_roi_properties_df = all_roi_properties_cudf.to_pandas() return all_roi_properties_df </code></pre> <ol> <li>NumPy -&gt; CuPy array operations</li> <li>scipy -&gt; cupyx.scipy for label()</li> <li>skimage -&gt; cucim.skimage for regionprops pandas -&gt; cudf for</li> <li>DataFrames The GPU version uses identical algorithms but performs</li> <li>data transfers between CPU/GPU memory.</li> </ol> <p>The processing pipeline includes:</p> <ul> <li>Binary mask creation</li> <li>Connected component labeling</li> <li>Region property measurements</li> <li>DataFrame operations</li> </ul> <p>Environment:</p> <ul> <li>RTX 4090 24GB</li> <li>CUDA 11.8</li> <li>CuPy 12.0.0</li> <li>Python 3.9</li> </ul> <p>What could be causing this performance degradation? Are there specific optimizations I should implement for the GPU version?</p> <p>CPU RoiAnalyser:</p> <pre><code># Standard library imports import json from typing import Any, Dict, List, Optional, Tuple, Union # Third-party library imports import numpy as np import pandas as pd from scipy.ndimage import label from skimage.measure import regionprops_table # Local imports from HiTMicTools.utils import adjust_dimensions, stack_indexer # Type hints from numpy.typing import NDArray from pandas import DataFrame, Series def coords_centroid(coords): centroid = np.mean(coords, axis=0) return pd.Series(centroid, index=[&quot;slice&quot;, &quot;y&quot;, &quot;x&quot;]) def convert_to_list_and_dump(row): return json.dumps(row.tolist()) class RoiAnalyser: def __init__(self, image, probability_map, stack_order=(&quot;TSCXY&quot;, &quot;TXY&quot;)): image = adjust_dimensions(image, stack_order[0]) probability_map = adjust_dimensions(probability_map, stack_order[1]) self.img = image self.proba_map = probability_map self.stack_order = stack_order pass def create_binary_mask(self, threshold=0.5): &quot;&quot;&quot; Create a binary mask from an image stack of probabilities. Args: image_stack (np.ndarray): A 5D numpy array with shape (frames, slices, channels, height, width) containing probabilities. threshold (float): The threshold value to use for binarization (default: 0.5). 
Returns: np.ndarray: A 5D numpy array with the same shape as the input, where values above the threshold are set to 1, and values below or equal to the threshold are set to 0. &quot;&quot;&quot; # Convert probabilities to binary values self.binary_mask = self.proba_map &gt; threshold def clean_binmask(self, min_pixel_size=20): &quot;&quot;&quot; Clean the binary mask by removing small ROIs. Args: min_pixel_size (int): Minimum ROI size in pixels. Returns: cleaned_mask (np.ndarray): Cleaned binary mask. &quot;&quot;&quot; labeled, num_features = self.get_labels(return_value=True) sizes = np.bincount(labeled.ravel())[1:] # Exclude background (label 0) mask_sizes = sizes &gt;= min_pixel_size label_map = np.zeros(num_features + 1, dtype=int) label_map[1:][mask_sizes] = np.arange(1, np.sum(mask_sizes) + 1) cleaned_labeled = label_map[labeled] cleaned_mask = cleaned_labeled &gt; 0 self.binary_mask = cleaned_mask def get_labels(self, return_value=False): &quot;&quot;&quot; Get the labeled mask for the binary mask. Returns: None &quot;&quot;&quot; labeled_mask = np.empty_like(self.binary_mask, dtype=int) num_rois = 0 max_label = 0 for i in range(self.binary_mask.shape[0]): labeled_frame, num = label(self.binary_mask[i]) labeled_mask[i] = np.where( labeled_frame != 0, labeled_frame + max_label, labeled_frame ) max_label += num num_rois += num if return_value: return labeled_mask, num_rois else: self.total_rois = num_rois self.labeled_mask = labeled_mask def get_roi_measurements( self, target_channel=0, target_slice=0, properties=[&quot;label&quot;, &quot;centroid&quot;, &quot;mean_intensity&quot;], extra_properties=None, ): &quot;&quot;&quot; Get measurements for each ROI in the labeled mask for a specific channel and all frames. Args: img (numpy.ndarray): The original image. labeled_mask (numpy.ndarray): The labeled mask containing the ROIs. properties (list, optional): A list of properties to measure for each ROI. Defaults to ['mean_intensity', 'centroid']. Returns: list: A list of dictionaries, where each dictionary contains the measurements for a single ROI. &quot;&quot;&quot; assert ( self.labeled_mask is not None ), &quot;Run get_labels() first to generate labeled mask&quot; img = self.img[:, target_slice, target_channel, :, :] labeled_mask = self.labeled_mask[:, target_slice, 0, :, :] all_roi_properties = [] for frame in range(img.shape[0]): img_frame = img[frame] labeled_mask_frame = labeled_mask[frame] roi_properties = regionprops_table( labeled_mask_frame, intensity_image=img_frame, properties=properties, separator=&quot;_&quot;, extra_properties=extra_properties, ) roi_properties_df = pd.DataFrame(roi_properties) roi_properties_df[&quot;frame&quot;] = frame roi_properties_df[&quot;slice&quot;] = target_channel roi_properties_df[&quot;channel&quot;] = target_slice all_roi_properties.append(roi_properties_df) all_roi_properties_df = pd.concat(all_roi_properties, ignore_index=True) if &quot;coords&quot; in all_roi_properties_df.columns: all_roi_properties_df[&quot;coords&quot;] = all_roi_properties_df[&quot;coords&quot;].apply( convert_to_list_and_dump ) # rearrange the columns required_cols = [&quot;label&quot;, &quot;frame&quot;, &quot;slice&quot;, &quot;channel&quot;] other_cols = [ col for col in all_roi_properties_df.columns if col not in required_cols ] cols = required_cols + other_cols all_roi_properties_df = all_roi_properties_df[cols] return all_roi_properties_df </code></pre>
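<p>A first debugging step I would suggest (a sketch, not from the original post): separate kernel time from launch/transfer overhead, because CuPy is asynchronous and the per-ROI Python callbacks (<code>roi_skewness</code>, <code>roi_std_dev</code>, the cuDF <code>.apply</code>) force many tiny kernel launches plus host/device round trips, which is a common reason a 4090 loses to NumPy on small ROIs. The array below is a placeholder standing in for one frame.</p>
<pre><code>import cupy as cp
from cupyx.profiler import benchmark

# Placeholder frame standing in for one slice of the real stack
img_frame = cp.random.random((2048, 2048)).astype(cp.float32)

def gpu_threshold():
    return img_frame &gt; 0.5

# benchmark() synchronizes around every call and reports CPU and GPU time
# separately; without a sync, wall-clock timing of CuPy code mostly measures
# kernel *launches*, not the actual work.
print(benchmark(gpu_threshold, n_repeat=100))

# Equivalent manual pattern before reading a timer:
cp.cuda.Stream.null.synchronize()
</code></pre>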
<python><pandas><numpy><cupy>
2025-02-18 15:20:24
0
368
Santi
79,448,675
7,347,925
How to speed up calibrating denoiser?
<p>I'm trying to calibrate the denoiser like this:</p> <pre><code>import numpy as np from skimage.restoration import ( calibrate_denoiser, denoise_tv_chambolle, denoise_invariant, ) # create random noisy data noisy = np.random.random ([1200, 1000])*100 noise_std = np.std(noisy) # set weights for calibration n_weights = 50 weight_range = (noise_std/10, noise_std*3) weights = np.linspace(weight_range[0], weight_range[1], n_weights) parameter_ranges_tv = {'weight': weights} # calibration _, (parameters_tested_tv, losses_tv) = calibrate_denoiser( noisy, denoise_tv_chambolle, denoise_parameters=parameter_ranges_tv, extra_output=True, ) </code></pre> <p>It will take about 25 seconds. Is there any method to speed it up? I have tried multiprocessing with 20 cores, but the running time is similar.</p> <pre><code># Define function for parallel execution def evaluate_weight(weight): &quot;&quot;&quot;Apply denoising with a specific weight and compute loss.&quot;&quot;&quot; denoised = denoise_tv_chambolle(noisy, weight=weight) loss = np.mean((denoised - noisy) ** 2) # Example: Mean Squared Error return weight, loss with mp.Pool(processes=mp.cpu_count()) as pool: # Use all available cores results = pool.map(evaluate_weight, weights) # Parallel execution # Extract parameters and losses parameters_tested_tv, losses_tv = zip(*results) # Find the best weight with minimum loss best_weight = parameters_tested_tv[np.argmin(losses_tv)] print(f&quot;Best weight: {best_weight}&quot;) </code></pre>
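<p>One way to cut the runtime (a sketch, not from the original post, reusing the variables defined above): <code>calibrate_denoiser</code> runs the denoiser once per tested weight on the full image, so calibrating on a representative crop with a coarser weight grid and then reusing the chosen weight on the full image scales the cost down roughly with the crop size.</p>
<pre><code># Calibrate on a small crop with fewer weights, then denoise the full image once.
crop = noisy[:300, :250]
coarse_weights = np.linspace(weight_range[0], weight_range[1], 10)

_, (tested, losses) = calibrate_denoiser(
    crop,
    denoise_tv_chambolle,
    denoise_parameters={'weight': coarse_weights},
    extra_output=True,
)
best_weight = tested[np.argmin(losses)]['weight']
denoised = denoise_tv_chambolle(noisy, weight=best_weight)
</code></pre>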
<python><numpy><image-processing><scikit-image>
2025-02-18 14:53:59
0
1,039
zxdawn
79,448,571
14,084,653
Matplotlib show function is showing the plot window minimized after python input command
<p>I'm seeing strange behavior with the matplotlib plt.show() function as a result of the Python input command, as in the code below. If I take out the input command, the plot window shows up centered on the screen; however, if I add it back, the plot window is minimized. I'm sure this is related to focus but not sure how to fix it.</p> <pre><code>import matplotlib.pyplot as plt import numpy as np import tkinter #Remove the input line to see the difference input(&quot;presse any key to continue...&quot;) y = np.array([35, 25, 25, 15]) plt.pie(y) plt.show() </code></pre> <p>I tried the suggestions below, but none worked or even compiled correctly, especially around using the window manager.</p> <p><a href="https://stackoverflow.com/questions/38829797/matplotlib-open-plots-minimized">Matplotlib; open plots minimized?</a> <a href="https://stackoverflow.com/questions/12439588/how-to-maximize-a-plt-show-window">How to maximize a plt.show() window</a></p> <p>I'm using Python 3.11 and running my code with Visual Studio Code on a Windows 11 machine.</p> <p>I'm just looking to display the plot with normal behavior, neither minimized nor maximized.</p> <p>Thanks</p>
<python><python-3.x><visual-studio><matplotlib>
2025-02-18 14:20:37
1
779
samsal77
79,448,337
13,840,270
Using re.sub and replace with overall match
<p>I was just writing a program where I wanted to insert a newline after a specific pattern. The idea was to match the pattern and replace with the overall match (i.e. capture group <code>\0</code>) and <code>\n</code>.</p> <pre><code>s = &quot;abc&quot; insert_newline_pattern = re.compile(r&quot;b&quot;) re.sub(insert_newline_pattern, r&quot;\0\n&quot;, s) </code></pre> <p>However the output is <code>a\x00\nc</code>, reading <code>\0</code> as a <a href="https://en.wikipedia.org/wiki/Null_character#Representation" rel="noreferrer">null character</a>.</p> <p>I know that I can &quot;simply&quot; rewrite this as:</p> <pre><code>s = &quot;abc&quot; insert_newline_pattern = re.compile(r&quot;(b)&quot;) re.sub(insert_newline_pattern, r&quot;\1\n&quot;, s) </code></pre> <p>which outputs the desired <code>ab\nc</code> with the idea of wrapping the overall match into group <code>\1</code> and substituting this. See also a <a href="https://regex101.com/r/vwwHRt/latest" rel="noreferrer">Python regex101 demo</a>.</p> <p>Is there a way to access the overall match in any way, similar to this <a href="https://regex101.com/r/3MuzkT/latest" rel="noreferrer">PCRE regex101 demo</a> in Python?</p>
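<p>For reference, Python's <code>re</code> does expose the overall match in the replacement string as <code>\g&lt;0&gt;</code> (group 0), so no extra capture group is needed; a callable replacement works as well. A minimal sketch:</p>
<pre><code>import re

s = 'abc'
print(re.sub(r'b', r'\g&lt;0&gt;\n', s))            # 'ab\nc'

# Alternatively, use a function as the replacement:
print(re.sub(r'b', lambda m: m.group(0) + '\n', s))
</code></pre>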
<python><regex><python-re>
2025-02-18 12:55:09
2
3,215
DuesserBaest
79,448,155
650,405
Python Squish wrapper that reports the original call location
<p>Just some background, so this doesn't end up being an XY Problem:</p> <p>The Squish library has some basic methods to do tests, <code>test.verify</code>, <code>test.compare</code>, etc.</p> <p>Because the screenshot functionality is not working in our configuration and in the pipeline I don't see stack traces (only in the Squish UI) a created a wrapper for this library which just calls these functions and does the additional mentioned work if something fails. It works just fine.</p> <p>Unfortunately this has the side-effect that the Squish UI reports the problems in my wrapper module and not at the call site.</p> <p>Is there some python magic that I can apply so the <code>test.</code> methods in squish see the call site as if they were called from there and not from my wrapper module?</p> <p>Note: I have to use python3.8.</p>
<python><python-3.8><squish>
2025-02-18 11:49:51
0
96,680
Karoly Horvath
79,448,057
7,465,516
How does "MaybeNone" (also known as "The Any Trick") work in Python type hints?
<p>In typestubs for the Python standard library I noticed a peculiar type called <a href="https://github.com/python/typeshed/blob/132456af62b1d15e966e490d9ba9235d2f6bbf2e/stdlib/_typeshed/__init__.pyi#L58" rel="nofollow noreferrer"><code>MaybeNone</code></a> pop up, usually in the form of <code>NormalType | MaybeNone</code>. For example, in the sqlite3-Cursor class I find this:</p> <pre class="lang-py prettyprint-override"><code>class Cursor: # May be None, but using `| MaybeNone` (`| Any`) instead to avoid slightly annoying false positives. @property def description(self) -&gt; tuple[tuple[str, None, None, None, None, None, None], ...] | MaybeNone: ... </code></pre> <p>The definition of this <code>MaybeNone</code> is given as:</p> <pre class="lang-py prettyprint-override"><code># Marker for return types that include None, but where forcing the user to # check for None can be detrimental. Sometimes called &quot;the Any trick&quot;. See # CONTRIBUTING.md for more information. MaybeNone: TypeAlias = Any # stable </code></pre> <p>(I could not find additional information in the <code>CONTRIBUTING.md</code>, which I assume to be <a href="https://github.com/python/typeshed/blob/main/CONTRIBUTING.md" rel="nofollow noreferrer">this one</a>.)</p> <p>I understand the intention of marking a return type in such a way that a user is not forced to null check in cases where the null is more of a theoretical problem for most users. But how does this achieve the goal?</p> <ol> <li><p><code>SomeType | Any</code> seems to imply that the return type could be anything, when what I want to say is that it can be <code>SomeType</code> or in weird cases <code>None</code>, so this doesn't seem to express the intent.</p> </li> <li><p>MyPy already allows superfluous null-checks on variables that can be proven not to be <code>None</code> even with <code>--strict</code> (at least with my configuration?) so what does the special typing even accomplish as compared to simply doing nothing?</p> </li> </ol>
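<p>A small illustration of the effect (my own sketch, with a made-up function name): with <code>X | Any</code>, mypy lets callers use the value as an <code>X</code> straight away <em>and</em> still accepts an explicit <code>None</code> check, whereas <code>X | None</code> would force the check before use.</p>
<pre><code>from typing import Any

def fetchone_stub() -&gt; tuple[int, str] | Any:   # i.e. '... | MaybeNone'
    return (1, 'x')                              # pretend DB row

row = fetchone_stub()
print(row[0])             # accepted: the Any member permits use without narrowing
if row is not None:       # an explicit check is still allowed and not flagged
    print(row[1])
</code></pre>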
<python><python-typing><typeshed>
2025-02-18 11:17:38
2
2,196
julaine
79,447,986
6,113,142
How to run multiple browser-use agents in parallel?
<p>Currently I have this code</p> <pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3 # -*- coding: utf-8 -*- &quot;&quot;&quot; :brief: Run multiple browser-use agents in parallel &quot;&quot;&quot; import asyncio import os from browser_use import Agent, Browser, BrowserConfig, Controller from browser_use.browser.context import BrowserContext, BrowserContextConfig from langchain_openai import ChatOpenAI async def main(): controller = Controller() llm = ChatOpenAI(model=&quot;gpt-4o&quot;, api_key=os.getenv(&quot;OPENAI_API_KEY&quot;)) browser_config = BrowserConfig( chrome_instance_path=&quot;/path/to/chrome&quot;, headless=False, disable_security=True, extra_chromium_args=[ &quot;--no-first-run&quot;, &quot;--no-default-browser-check&quot;, &quot;--disable-extensions&quot;, &quot;--new-window&quot;, ], ) browser = Browser(config=browser_config) context1 = BrowserContext(browser=browser) context2 = BrowserContext(browser=browser) agent = Agent( task=&quot;Go to amazon.com and search for toys without logging in&quot;, llm=llm, controller=controller, browser=browser, browser_context=context1, ) agent2 = Agent( task=&quot;Go to Google.com and find the latest news&quot;, llm=llm, controller=controller, browser=browser, browser_context=context2, ) await asyncio.gather(agent.run(), agent2.run()) if __name__ == &quot;__main__&quot;: asyncio.run(main()) </code></pre> <p>But, it does not truly run in parellel. I was considering using <code>ProcessPoolExecutor</code> but not sure how to mix <code>concurrent.futures</code> and <code>asyncio</code> cleanly. Essentially, the approach should be to create 2 different browser instances so that the agents can work in tandem.</p>
<python><browser-automation><browser-use>
2025-02-18 10:54:24
1
350
Pratik K.
79,447,890
6,544,849
How can I separate the fringes that have been calculated with findpeaks
<p>I would like to separate the fringes (the red curved lines) that I have calculated with scipy findpeaks; how can I achieve it? I would like to separate them and store them in a text file.</p> <p><a href="https://i.sstatic.net/e47plvI4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/e47plvI4.png" alt="enter image description here" /></a></p> <pre><code>import numpy as np from scipy.signal import find_peaks import matplotlib.pyplot as plt X = np.load('X.npy') Y = np.load('Y.npy') P_new = np.load('P_new.npy') # Example data: Replace with your actual data T = np.real(P_new) # Simulating a 2D matrix # Plot the original image plt.figure() plt.imshow(T, cmap='jet', aspect='auto') plt.colorbar() plt.title('Original Image') plt.show() # Peak detection parameters min_peak_dist = 3 # Minimum distance between peaks min_peak_h = 3e-5 # Minimum peak height x_coords = [] y_coords = [] # Process all rows from top to bottom for k in range(T.shape[0]): tex = T[k, :] peaks, _ = find_peaks(tex, distance=min_peak_dist, height=min_peak_h) if peaks.size &gt; 0: x_coords.extend(X[k, peaks]) y_coords.extend(Y[k, peaks]) # Plot detected peaks plt.figure() plt.scatter(x_coords, y_coords, color='r', s=2) # 's' controls marker size plt.xlabel('X Coordinate') plt.ylabel('Y Coordinate') plt.title('Detected Fringes in Real-World Coordinates') plt.colorbar() plt.show() </code></pre> <p><a href="https://drive.google.com/drive/folders/1X4INVwIYh8NzEza5MAREGveXM0DMUmgj?usp=sharing" rel="nofollow noreferrer">data for plotting is here</a></p> <p>What I want to see is just the separate fringes, like here: <a href="https://i.sstatic.net/CbJjlTCr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CbJjlTCr.png" alt="enter image description here" /></a></p> <p>Previously I could do it with the cv2 method contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE), but that works from the detected edges, which is not as rigorous for my case as finding peaks of the actual data.</p> <p>Can someone help with this?</p>
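<p>One possible way to split the detected peaks into individual fringes (a sketch, not from the original post, using the <code>x_coords</code> and <code>y_coords</code> from the snippet above): cluster the <code>(x, y)</code> peak coordinates, e.g. with DBSCAN, and write each cluster to its own text file. The <code>eps</code> and <code>min_samples</code> values are guesses that have to be tuned to the fringe spacing in the real data.</p>
<pre><code>import numpy as np
from sklearn.cluster import DBSCAN

points = np.column_stack([x_coords, y_coords])

# eps / min_samples are placeholders and depend on the coordinate scale
labels = DBSCAN(eps=0.5, min_samples=10).fit_predict(points)

for fringe_id in np.unique(labels):
    if fringe_id == -1:                      # -1 is DBSCAN's noise label
        continue
    fringe = points[labels == fringe_id]
    np.savetxt(f'fringe_{fringe_id}.txt', fringe, header='x y')
</code></pre>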
<python><arrays><numpy><scipy>
2025-02-18 10:27:59
2
321
wosker4yan
79,447,749
7,996,523
Python trace module does not display calling relationships when running pytest
<p>I would like to track the functions invoked by pytest test cases. I use <a href="https://github.com/pydata/xarray/blob/main/xarray/tests/test_coding.py" rel="nofollow noreferrer">this test file in xarray</a> as an example. I use the following command to trace</p> <pre><code>python -m trace --trackcalls --module pytest -rA xarray/tests/test_coding.py -n 64 </code></pre> <p>In the output I am able to see test cases are successfully executed.</p> <pre><code>============================= test session starts ============================== platform linux -- Python 3.10.16, pytest-8.3.4, pluggy-1.5.0 rootdir: /.../xarray configfile: pyproject.toml plugins: timeout-2.3.1, cov-6.0.0, metadata-3.1.1, env-1.1.5, json-0.4.0, hypothesis-6.125.1, xdist-3.6.1, json-report-1.5.0 created: 64/64 workers 64 workers [31 items] ............................... [100%] ==================================== PASSES ==================================== =========================== short test summary info ============================ PASSED xarray/tests/test_coding.py::test_coder_roundtrip PASSED xarray/tests/test_coding.py::test_scaling_converts_to_float[f4-i1] PASSED xarray/tests/test_coding.py::test_scaling_converts_to_float[f4-u2] PASSED xarray/tests/test_coding.py::test_scaling_converts_to_float[f4-u1] PASSED xarray/tests/test_coding.py::test_scaling_converts_to_float[f8-f2] PASSED xarray/tests/test_coding.py::test_CFMaskCoder_encode_missing_fill_values_conflict[numeric-without-dtype] PASSED xarray/tests/test_coding.py::test_scaling_converts_to_float[f4-f2] PASSED xarray/tests/test_coding.py::test_CFMaskCoder_missing_value PASSED xarray/tests/test_coding.py::test_scaling_converts_to_float[f4-f4] PASSED xarray/tests/test_coding.py::test_scaling_offset_as_list[0.1-scale_factor1] PASSED xarray/tests/test_coding.py::test_scaling_converts_to_float[f8-u1] PASSED xarray/tests/test_coding.py::test_scaling_converts_to_float[f8-f4] PASSED xarray/tests/test_coding.py::test_scaling_converts_to_float[f8-i1] PASSED xarray/tests/test_coding.py::test_CFMaskCoder_decode PASSED xarray/tests/test_coding.py::test_scaling_converts_to_float[f8-i2] PASSED xarray/tests/test_coding.py::test_CFMaskCoder_encode_missing_fill_values_conflict[numeric-with-dtype] PASSED xarray/tests/test_coding.py::test_scaling_offset_as_list[0.1-10] PASSED xarray/tests/test_coding.py::test_scaling_converts_to_float[f8-u2] PASSED xarray/tests/test_coding.py::test_scaling_converts_to_float[f4-i2] PASSED xarray/tests/test_coding.py::test_CFMaskCoder_encode_missing_fill_values_conflict[times-with-dtype] PASSED xarray/tests/test_coding.py::test_scaling_offset_as_list[add_offset1-10] PASSED xarray/tests/test_coding.py::test_scaling_offset_as_list[add_offset1-scale_factor1] PASSED xarray/tests/test_coding.py::test_decode_unsigned_from_signed[1] PASSED xarray/tests/test_coding.py::test_decode_unsigned_from_signed[2] PASSED xarray/tests/test_coding.py::test_decode_signed_from_unsigned[2] PASSED xarray/tests/test_coding.py::test_decode_unsigned_from_signed[4] PASSED xarray/tests/test_coding.py::test_decode_unsigned_from_signed[8] PASSED xarray/tests/test_coding.py::test_decode_signed_from_unsigned[1] PASSED xarray/tests/test_coding.py::test_decode_signed_from_unsigned[8] PASSED xarray/tests/test_coding.py::test_decode_signed_from_unsigned[4] PASSED xarray/tests/test_coding.py::test_CFMaskCoder_decode_dask ======================== 31 passed in 113.20s (0:01:53) ======================== calling relationships: ... 
</code></pre> <p>But in the resulting calling relationships, neither the test methods nor the methods they are supposed to invoke are included. I would like to know why and how to resolve this issue. Thanks.</p>
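<p>A likely explanation (my reading, not from the original post): <code>-n 64</code> hands the tests to pytest-xdist worker <em>subprocesses</em>, so the tracer running in the main process only ever sees pytest/execnet plumbing, never the test functions themselves. Dropping <code>-n</code>, or driving <code>trace</code> programmatically around an in-process <code>pytest.main()</code> call, keeps everything inside one traced process; a sketch:</p>
<pre><code>import trace
import pytest

tracer = trace.Trace(countfuncs=True, countcallers=True)
# Run pytest in this (traced) process and without xdist workers:
tracer.runfunc(pytest.main, ['-p', 'no:xdist', 'xarray/tests/test_coding.py'])

results = tracer.results()
results.write_results(show_missing=False, summary=True, coverdir='.')
</code></pre>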
<python><pytest><trace>
2025-02-18 09:33:52
1
901
Richard Hu
79,447,743
7,853,142
Representing Networkx graphs as strings in a consistent way
<p>I have multiple Networkx graphs (with node attributes), and I need to represent them as strings. There are several ways to do that, but I need to always get the same string for equal graphs (not isomorphic, by equal I mean that <a href="https://networkx.org/documentation/stable/reference/generated/networkx.utils.misc.graphs_equal.html" rel="nofollow noreferrer">graphs_equal</a> returns true). Is there a way to turn a graph into a string that guarantees this?</p>
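<p>One approach that should satisfy the &quot;equal, not isomorphic&quot; requirement (a sketch of my own, assuming the node identifiers and attribute values are sortable and JSON-serialisable): since <code>graphs_equal</code> only compares the node set, the edge set and their attribute dicts, dumping both in sorted order with <code>json.dumps(..., sort_keys=True)</code> yields the same string for equal graphs.</p>
<pre><code>import json
import networkx as nx

def graph_to_string(g: nx.Graph) -&gt; str:
    # Sort nodes and (undirected) edges so equal graphs always serialise identically.
    nodes = sorted((n, sorted(d.items())) for n, d in g.nodes(data=True))
    edges = sorted((*sorted((u, v)), sorted(d.items())) for u, v, d in g.edges(data=True))
    return json.dumps({'nodes': nodes, 'edges': edges}, sort_keys=True)

g = nx.Graph()
g.add_node(1, colour='red')
g.add_edge(1, 2, weight=3)
print(graph_to_string(g))
</code></pre>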
<python><serialization><networkx>
2025-02-18 09:32:46
1
902
ThePirate42
79,447,533
6,930,340
Passing a polars struct to a user-defined function using map_batches
<p>I need to pass a variable number of columns to a user-defined function. The <a href="https://docs.pola.rs/user-guide/expressions/user-defined-python-functions/#combining-multiple-column-values" rel="nofollow noreferrer">docs</a> mention to first create a <code>pl.struct</code> and subsequently let the function extract it. Here's the example given on the website:</p> <pre><code># Add two arrays together: @guvectorize([(int64[:], int64[:], float64[:])], &quot;(n),(n)-&gt;(n)&quot;) def add(arr, arr2, result): for i in range(len(arr)): result[i] = arr[i] + arr2[i] df3 = pl.DataFrame({&quot;values1&quot;: [1, 2, 3], &quot;values2&quot;: [10, 20, 30]}) out = df3.select( # Create a struct that has two columns in it: pl.struct([&quot;values1&quot;, &quot;values2&quot;]) # Pass the struct to a lambda that then passes the individual columns to # the add() function: .map_batches( lambda combined: add( combined.struct.field(&quot;values1&quot;), combined.struct.field(&quot;values2&quot;) ) ) .alias(&quot;add_columns&quot;) ) print(out) </code></pre> <p>Now, in my case, I don't know upfront how many columns will enter the <code>pl.struct</code>. Think of using a selector like <code>pl.struct(cs.float())</code>. In my user-defined function, I need to operate on a <code>np.array</code>. That is, the user-defined function will have one input argument that takes the whole array. How can I then extract it within the user-defined function?</p> <p>EDIT: The output of my user-defined function will be an array that has the exact same shape as the input array. This array needs to be appended to the existing dataframe on axis 1 (new columns).</p> <p>EDIT: Using <code>pl.concat_arr</code> might be one way to attack my concrete issue. My use case would be along the following lines:</p> <pre><code>def multiply_by_two(arr): # In reality, there are some complex array operations. return arr * 2 df = pl.DataFrame({&quot;values1&quot;: [1, 2, 3], &quot;values2&quot;: [10, 20, 30]}) out = df.select( # Create an array consisting of two columns: pl.concat_arr([&quot;values1&quot;, &quot;values2&quot;]) .map_batches(lambda arr: multiply_by_two(arr)) .alias(&quot;result&quot;) ) </code></pre> <p>The new computed column <code>result</code> holds an array that has the same shape as the input array. I need to <code>unnest</code> the array (something like <code>pl.struct.unnest()</code>). The headings should be the original headings suffixed by &quot;result&quot; (<code>values1_result</code> and <code>values2_result</code>).</p> <p>Also, I would like to make use of <code>@guvectorize</code> to speed things up.</p>
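<p>A rough sketch of what I have in mind (the method names <code>struct.fields</code>, <code>struct.unnest</code> and <code>DataFrame.to_struct</code> reflect my reading of the Polars API, so double-check them against your version): turn the whole struct into one NumPy array inside the UDF, run the array-level function, and pack the result back into a struct whose fields carry a <code>_result</code> suffix, then <code>unnest</code> it.</p>
<pre><code>import polars as pl
import polars.selectors as cs

def multiply_by_two(arr):
    return arr * 2                                   # stand-in for the real array maths

def batch_fn(s: pl.Series) -&gt; pl.Series:
    names = s.struct.fields                          # columns captured by the struct
    arr = s.struct.unnest().to_numpy()               # shape (n_rows, n_cols)
    out = multiply_by_two(arr)
    return pl.DataFrame(out, schema=[f'{n}_result' for n in names]).to_struct('result')

df = pl.DataFrame({'values1': [1.0, 2.0, 3.0], 'values2': [10.0, 20.0, 30.0]})
out = df.with_columns(
    pl.struct(cs.float()).map_batches(batch_fn).alias('result')
).unnest('result')
print(out)
</code></pre>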
<python><python-polars>
2025-02-18 08:11:40
1
5,167
Andi
79,447,468
10,377,244
Can't read parquet file with polars while pyarrow can
<p>I am getting a <code>dtype</code> exception</p> <blockquote> <p>pyo3_runtime.PanicException: Arrow datatype Map(Field { name: &quot;key_value&quot;, dtype: LargeList(Field { name: &quot;key_value&quot;, dtype: Struct([Field { name: &quot;key&quot;, dtype: Utf8View, is_nullable: false, metadata: None }, Field { name: &quot;value&quot;, dtype: Int64, is_nullable: true, metadata: None }]), is_nullable: true, metadata: None }), is_nullable: true, metadata: None }, false) not supported by Polars. You probably need to activate that data-type feature.</p> </blockquote> <p>I can read it if I set <code>use_pyarrow=True</code> but I need to use <code>lazy</code> mode.</p> <p>I have a lot of files so my workaround was to read each file in eager mode with <code>pyarrow</code> engine and save it again. I tried to use <code>pyarrow</code> dataset and <code>scan_pyarrow_dataset</code> but had no success.</p>
<python><parquet><python-polars><pyarrow><polars>
2025-02-18 07:44:03
0
1,127
MPA
79,447,425
7,106,508
what is the best way to copy a class without using inheritance in Python
<p>I want a class 'child' to inherit all the properties of class 'parent' but this is going to be the exception rather than the rule. In more than 95% of the cases this is not going to happen, so what is the best way in Python to have a child class copy all of the attributes of the parent? Right now, I'm doing it this way (see below) but I have a hunch that there is a better way to do this. In the example below, of course it's better for the child to simply inherit from the cool_class, but like I said, with my current code it's not feasible to do it that way.</p> <pre><code>class cool_class(): def __init__(self): self.hey_you = 1 def copy_class(child, parent): attributes = [x for x in parent.__dict__] for attribute in attributes: if not attribute.startswith(&quot;__&quot;): setattr(child, attribute, getattr(parent, attribute)) ins = cool_class() ins.get_off_my_cloud = 1 ins2 = cool_class() copy_class(ins2,ins) assert ins2.get_off_my_cloud == 1 </code></pre>
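<p>For what it's worth, the loop can be replaced by a plain dict update of the instance namespace: <code>vars(obj)</code> is <code>obj.__dict__</code> and only holds instance attributes, so no dunder filtering is needed; <code>copy.copy</code> covers the case where a brand-new clone is wanted. A sketch reusing the <code>cool_class</code> example:</p>
<pre><code>import copy

class cool_class():
    def __init__(self):
        self.hey_you = 1

def copy_instance_attributes(child, parent):
    # vars(parent) is parent.__dict__: instance attributes only, no dunders.
    vars(child).update(vars(parent))

ins = cool_class()
ins.get_off_my_cloud = 1

ins2 = cool_class()
copy_instance_attributes(ins2, ins)
assert ins2.get_off_my_cloud == 1

# Or build the copy in one step instead of mutating an existing instance:
ins3 = copy.copy(ins)
assert ins3.get_off_my_cloud == 1
</code></pre>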
<python>
2025-02-18 07:22:28
1
304
bobsmith76
79,447,219
169,165
SHAP plot with categorical columns
<p>Before one-hot encoding, the input data consists of 2 categorical columns (category1, category2). category1 is among A,B,C and category2 is among X,Y. After the one-hot encoding, the input data transforms to 5 columns(A,B,C,X,Y). The fit function works well.</p> <p>However, the problem lies on the output SHAP summary plot. I want to SHAP plot show about 2 input columns (category1, category2) but actually the SHAP plot shows 5 columns(A,B,C,X,Y) (picture below).</p> <p><a href="https://i.sstatic.net/IMB1jsWk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IMB1jsWk.png" alt="output" /></a></p> <p>How can I do that? Here is the working code. I guess there my be some more parameter for TreeExplainer for specifying categorical columns so that permutation cost goes down, but I have no idea.</p> <pre><code>import pandas as pd from sklearn.preprocessing import OneHotEncoder, LabelEncoder from sklearn.compose import ColumnTransformer from xgboost import XGBClassifier from sklearn.model_selection import train_test_split import shap # example data (2 input columns, 1 output) data = { 'category1': ['A', 'B', 'C', 'A', 'B'], 'category2': ['X', 'Y', 'X', 'Y', 'X'], 'target': [0, 1, 0, 1, 0] } df = pd.DataFrame(data) # category columns categorical_features = ['category1', 'category2'] # OneHotEncoder transformer = ColumnTransformer( transformers=[ ('encoder', OneHotEncoder(sparse_output=False), categorical_features) ], remainder='passthrough' ) # data preparation X = df.drop('target', axis=1) y = df['target'] # encoding # after encoding, (2+3 columns and 1 output) X_encoded = transformer.fit_transform(X) # split data X_train, X_test, y_train, y_test = train_test_split( X_encoded, y, test_size=0.2, random_state=42 ) # train model model = XGBClassifier() model.fit(X_train, y_train) #------------------- # Explain the model's predictions using SHAP explainer = shap.TreeExplainer(model) shap_values = explainer.shap_values(X_test) # Visualize SHAP values with feature names feature_names = transformer.get_feature_names_out(categorical_features) shap.summary_plot(shap_values, X_test, feature_names=feature_names) </code></pre>
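<p>A possible workaround (a sketch building on the variables from the snippet above, not a confirmed SHAP feature): SHAP values are additive, so the one-hot columns that came from the same original categorical feature can simply be summed back together and plotted against the two original column names; the bar variant of <code>summary_plot</code> is used here because it does not need per-row feature values.</p>
<pre><code>import numpy as np

# Sum the SHAP columns that belong to each original categorical feature.
grouped = []
for original in categorical_features:
    idx = [i for i, name in enumerate(feature_names) if f'{original}_' in name]
    grouped.append(shap_values[:, idx].sum(axis=1))
grouped_shap = np.column_stack(grouped)

shap.summary_plot(grouped_shap, feature_names=categorical_features, plot_type='bar')
</code></pre>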
<python><xgboost><shap>
2025-02-18 05:31:15
0
2,969
Hyunjik Bae
79,447,153
20,762,114
Topological sort in Polars
<pre><code>df = pl.from_repr(''' shape: (6, 2) โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”ฌโ”€โ”€โ”€โ”€โ”€โ”€โ” โ”‚ A โ”† B โ”‚ โ”‚ --- โ”† --- โ”‚ โ”‚ i64 โ”† i64 โ”‚ โ•žโ•โ•โ•โ•โ•โ•โ•ชโ•โ•โ•โ•โ•โ•โ•ก โ”‚ 1 โ”† null โ”‚ โ”‚ 2 โ”† 1 โ”‚ โ”‚ 2 โ”† 2 โ”‚ โ”‚ null โ”† 3 โ”‚ โ”‚ 3 โ”† 4 โ”‚ โ”‚ 4 โ”† null โ”‚ โ”‚ 5 โ”† 5 โ”‚ โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”˜ ''') </code></pre> <p>I want to sort a dataframe such that multiple columns are in a sorted order, excluding nulls.</p> <p>In the example above, columns A and B are both sorted, excluding nulls. This feels like a topological sort to me, with the following conditions:</p> <pre><code>df[0, 'A'] &lt; df[1, 'A'] df[1, 'B'] &lt; df[2, 'B'] df[2, 'B'] &lt; df[3, 'B'] df[3, 'B'] &lt; df[4, 'B'] df[4, 'A'] &lt; df[5, 'A'] df[5, 'A'] &lt; df[6, 'A'] </code></pre> <p>I understand it's not always possible to do a topological sort if there is a cycle, e.g.</p> <pre><code>df[0, 'A'] &lt; df[1, 'A'] df[0, 'B'] &gt; df[1, 'B'] </code></pre> <p>In that case, I want to specify that ordering for column A should take precedence over column B.</p> <p>My use case is that I am merging time series data from multiple datasets with some overlapping events, and I want a single dataframe with all events in a chronological order. There are issues with some of the timestamps, so I cannot compare the raw timestamps directly across datasets.</p> <p>Is something like this possible in polars?</p>
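<p>One way to express this outside of Polars itself (a sketch of my own, which raises instead of applying the column-A precedence rule on conflicts): build &quot;row i must come before row j&quot; constraints from each column's value order over its non-null rows, feed them to the standard library's <code>graphlib</code>, and reorder the frame with the resulting row indices.</p>
<pre><code>from graphlib import TopologicalSorter
import polars as pl

def toposort_rows(df: pl.DataFrame, columns: list[str]) -&gt; pl.DataFrame:
    ts = TopologicalSorter()
    for i in range(df.height):
        ts.add(i)                                   # make sure every row index appears
    for col in columns:
        pairs = sorted((v, i) for i, v in enumerate(df[col]) if v is not None)
        for (_, earlier), (_, later) in zip(pairs, pairs[1:]):
            ts.add(later, earlier)                  # 'earlier' must precede 'later'
    order = list(ts.static_order())                 # raises CycleError on conflicts
    return df.select(pl.all().gather(order))

print(toposort_rows(df, ['A', 'B']))
</code></pre>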
<python><python-polars>
2025-02-18 04:45:54
1
317
T.H Rice
79,447,024
6,230,282
Why does Python regular expression give different results for the following 2 quantifiers?
<p>While investigating the semantic difference between quantifiers based on length and count, I noticed that Python 3 regular expressions gave different results for the following 2 regular expressions (notice the quantifiers <code>+</code> and <code>*</code>):</p> <pre><code>Python 3.10.16 (main, Dec 7 2024, 13:31:33) [Clang 16.0.0 (clang-1600.0.26.6)] on darwin Type &quot;help&quot;, &quot;copyright&quot;, &quot;credits&quot; or &quot;license&quot; for more information. &gt;&gt;&gt; import re &gt;&gt;&gt; re.sub('(.{4,5})+', '-', '1234123412341234') '-4' &gt;&gt;&gt; re.sub('(.{4,5})*', '-', '1234123412341234') '--4-' &gt;&gt;&gt; </code></pre> <p>And I was able to reproduce this in PHP, presumably because they both use PCRE under the hood:</p> <pre><code>$ php -a Interactive shell php &gt; echo preg_replace('/(.{4,5})+/', '-', '1234123412341234'); -4 php &gt; echo preg_replace('/(.{4,5})*/', '-', '1234123412341234'); --4- php &gt; </code></pre> <p>How come?</p>
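<p>A short addition that makes the difference visible (my own sketch): <code>*</code> also produces zero-length matches at the positions where <code>+</code> simply fails, and each of those empty matches is replaced with <code>-</code> as well, which is where the extra dashes come from.</p>
<pre><code>import re

s = '1234123412341234'
for pattern in (r'(.{4,5})+', r'(.{4,5})*'):
    spans = [(m.start(), m.end()) for m in re.finditer(pattern, s)]
    print(pattern, spans)

# (.{4,5})+ : [(0, 15)]                     one greedy match, the final '4' is left alone
# (.{4,5})* : [(0, 15), (15, 15), (16, 16)] plus empty matches before and after that '4'
</code></pre>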
<python><php><regex><pcre>
2025-02-18 03:00:49
1
1,641
DannyNiu
79,446,885
1,301,310
PyPi Server on windows: 403 Forbidden
<p>I have a windows server with pypiserver running as a scheduled task and am accessing it via a reverse proxy.</p> <ul> <li>My pypi server is running as a scheduled task under user X.</li> <li>I have .htpasswd set up.</li> <li>The package directory and the pypi server config directory (.htpasswd, start.bat, pypiserver.log) have both been set to allow full control for user X. I have a cert on the URL.</li> </ul> <p>On the client machine:</p> <ul> <li>I have the .pypirc set up with credentials and the pypi URL</li> <li>I have the cert PEM installed in Trusted Root Certification Authorities\Certificates</li> <li><a href="https://myserver.com/pypi" rel="nofollow noreferrer">https://myserver.com/pypi</a> works and I get &quot;This is a PyPI compatible package index serving 0 packages.&quot;</li> </ul> <p>When I try to upload a new package to the pypi server, I am getting &quot;Response from <a href="https://myserver.com/pypi" rel="nofollow noreferrer">https://myserver.com/pypi</a>: 403 Forbidden&quot;</p> <p>twine upload --repository local dist/* --verbose</p> <p>I see the same in my pypi server logs so it is making it through the reverse proxy.</p> <pre><code>2025-02-17 16:26:41,516|pypiserver._app|INFO|3888|&lt;LocalRequest: POST http://localhost:8080/&gt; 2025-02-17 16:26:41,516|pypiserver._app|INFO|3888|403 Forbidden 2025-02-17 16:26:41,516|pypiserver.main|INFO|3888|127.0.0.1 - - [17/Feb/2025 16:26:41] &quot;POST / HTTP/1.1&quot; 403 702 </code></pre> <p>Thoughts?</p>
<python><pypi>
2025-02-18 00:29:58
1
1,833
Gina Marano
79,446,809
1,532,454
transpose pitch down an octave?
<p>Using music21, how can I transpose a pitch down an octave?</p> <p>If I use <code>p.transpose(-12)</code>, flats get changed to sharps:</p> <pre><code>import music21 p = music21.pitch.Pitch('D-4') print(p.transpose(-12)) </code></pre> <p>output</p> <pre><code>C#3 </code></pre>
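<p>Two possibilities that keep the flat spelling (a sketch; <code>transpose(-12)</code> works in semitones, which is why music21 is free to respell the result): either lower the <code>octave</code> attribute directly, or transpose by a descending perfect-octave interval instead of a chromatic number.</p>
<pre><code>import music21

p = music21.pitch.Pitch('D-4')

# Option 1: just drop the octave number; the D-flat spelling is untouched.
q = music21.pitch.Pitch(p.nameWithOctave)
q.octave -= 1
print(q)                                               # D-3

# Option 2: transpose by a perfect octave down, which preserves spelling.
print(p.transpose(music21.interval.Interval('P-8')))   # D-3
</code></pre>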
<python><music21><music-notation>
2025-02-17 23:21:03
1
3,389
Jeff Learman
79,446,667
2,648,504
Pandas - shifting columns to a specific column position
<p>I have a simple dataframe:</p> <pre><code>data = [[2025, 198237, 77, 18175], [202, 292827, 77, 292827]] </code></pre> <p>I only want the 1st and 4th columns and I don't want header or index labels:</p> <pre><code>df = pd.DataFrame(data).iloc[:,[0,3]] print(df.to_string(index=False, header=False)) </code></pre> <p>Output is the following:</p> <pre><code>2025 18175 202 292827 </code></pre> <p>How do I line up my first column in column 3 (left-justified) and line up my second column in column 10 (left-justified)? Since i'm calling the to_string method, which is converting the dataframe to a string representation, shouldn't I be able to use <code>ljust</code>? I'm not able to produce the desired output, which would be:</p> <pre><code> 2025 18175 202 292827 </code></pre>
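<p>A possible answer in code form (a sketch, with the target character positions taken from the question and easy to adjust): <code>to_string</code> only pads each column to the width of its own longest value, so for exact character positions it is simpler to format each row yourself with <code>str.ljust</code>.</p>
<pre><code>import pandas as pd

data = [[2025, 198237, 77, 18175], [202, 292827, 77, 292827]]
df = pd.DataFrame(data).iloc[:, [0, 3]]

lines = []
for first, second in df.itertuples(index=False):
    # two leading spaces put the first value at character position 3;
    # ljust(7) pads the field so the second value starts at position 10
    lines.append('  ' + str(first).ljust(7) + str(second))
print('\n'.join(lines))
</code></pre>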
<python><pandas>
2025-02-17 21:52:51
1
881
yodish