| QuestionId (int64, 74.8M–79.8M) | UserId (int64, 56–29.4M) | QuestionTitle (string, 15–150 chars) | QuestionBody (string, 40–40.3k chars) | Tags (string, 8–101 chars) | CreationDate (string date, 2022-12-10 09:42:47 to 2025-11-01 19:08:18) | AnswerCount (int64, 0–44) | UserExpertiseLevel (int64, 301–888k) | UserDisplayName (string, 3–30 chars, nullable ⌀) |
|---|---|---|---|---|---|---|---|---|
79,785,146
| 2,998,077
|
"Insert Python" in Excel's Formula section, to read another Excel file
|
<p>In Excel's Formula section, when using "Insert Python", I want to read another Excel file as data frame.</p>
<p>However, as the screenshot below shows, something seems to be wrong with the file path. I've tried double slashes and backslashes too, but it still doesn't work.</p>
<p>What's the right way to write it?</p>
<p><a href="https://i.sstatic.net/oTU2xt2A.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oTU2xt2A.png" alt="enter image description here" /></a></p>
|
<python><excel>
|
2025-10-08 06:49:13
| 1
| 9,496
|
Mark K
|
79,784,978
| 2,518,602
|
Importing a table from a webpage as a dataframe in Python
|
<p>I am trying to read in a specific table from the US Customs and Border Protection's Dashboard on Southwest Land Border Encounters as a dataframe.</p>
<p>The url is: <a href="https://www.cbp.gov/newsroom/stats/southwest-land-border-encounters" rel="nofollow noreferrer">https://www.cbp.gov/newsroom/stats/southwest-land-border-encounters</a>. I am particularly interested in the "month" and "U.S. Border Patrol / Total" columns from the final table on the page:
<a href="https://i.sstatic.net/2fcSyxBM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2fcSyxBM.png" alt="table I am trying to import, with columns of interest highlighted" /></a></p>
<p>In past web scraping projects I've used the <code>read_html</code> function from the <code>pandas</code> package. But that doesn't work here. This code:</p>
<pre><code>import pandas as pd
pd.read_html('https://www.cbp.gov/newsroom/stats/southwest-land-border-encounters')
</code></pre>
<p>generates the error: <code>HTTPError: HTTP Error 403: Forbidden</code>.</p>
<p>Is there a way to programmatically get this data?</p>
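<p><em>Not part of the original post, only a minimal sketch of one common workaround:</em> fetch the page with <code>requests</code> using a browser-like <code>User-Agent</code> header and feed the HTML to <code>read_html</code>. Whether this particular site stops returning 403, and whether the table is present in the static HTML rather than rendered by JavaScript, are assumptions to verify; the column names in the last line are also assumptions.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import requests
from io import StringIO

url = 'https://www.cbp.gov/newsroom/stats/southwest-land-border-encounters'
# Pretend to be a regular browser; many sites return 403 to the default urllib User-Agent.
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'}
resp = requests.get(url, headers=headers, timeout=30)
resp.raise_for_status()

# read_html accepts raw HTML via a file-like object; take the last table on the page.
tables = pd.read_html(StringIO(resp.text))
df = tables[-1]
print(df.head())  # then select the "Month" and "U.S. Border Patrol / Total" columns (names assumed)
</code></pre>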
|
<python><pandas><web-scraping>
|
2025-10-07 22:56:52
| 2
| 2,023
|
Ari
|
79,784,971
| 8,800,836
|
Wrap `jax.lax.fori_loop` to systematically override `upper<=lower` tracing behavior
|
<p>This is a follow-up to a <a href="https://stackoverflow.com/questions/79783857/jax-lax-fori-loop-with-equal-lower-and-upper-should-produce-no-iteration">previous question</a> about the <code>jax.lax.fori_loop</code> function, with a little bit of a challenge for you at the end.</p>
<p>As described in the <a href="https://docs.jax.dev/en/latest/_autosummary/jax.lax.fori_loop.html" rel="nofollow noreferrer">documentation</a>, the <code>fori_loop</code> is never executed at runtime for the case <code>upper<=lower</code>. However, as has been pointed out <a href="https://github.com/jax-ml/jax/issues/3285" rel="nofollow noreferrer">several times</a>, it is still traced. This can cause issues with out-of-bound indexing. I understand that the consensus is that this is intended behavior for <code>fori_loop</code>.</p>
<p><em>Nevertheless</em>, in my use cases, the <code>python</code>-like behavior makes things much, much easier conceptually. So in my previous question, I came up with the following wrapper that overrides the default behavior when the indexing issue occurs:</p>
<pre class="lang-py prettyprint-override"><code>import jax.numpy as jnp
import jax
from jax.scipy.special import gammaln
# WRAPPER FOR FORI TO HANDLE THE CASE UPPER<=LOWER SEPARATELY
def wrapped_fori(lower, upper, body_fun, init_val, unroll=None):
if upper<=lower:
out = init_val
else:
out = jax.lax.fori_loop(lower, upper, body_fun, init_val, unroll=unroll)
return out
def comb(n, k):
return jnp.round(jnp.exp(gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)))
def binom_conv(n, Aks, Bks):
return part_binom_conv(n, 0, n, Aks, Bks)
def part_binom_conv(n, k0, k1, Aks, Bks):
A_shape = Aks.shape[1:]
A_dtype = Aks.dtype
init_conv = jnp.zeros(A_shape, dtype=A_dtype)
conv = jax.lax.fori_loop(k0, k1, update_binom_conv, (init_conv, n, Aks, Bks))[0]
return conv
def update_binom_conv(k, val):
conv, n, Aks, Bks = val
conv = conv + comb(n-1, k) * Aks[k] @ Bks[(n-1)-k]
return conv, n, Aks, Bks
@jax.jit
def build(U, Hks):
n = Hks.shape[0] # n=0
H_shape = Hks.shape[1:] # H_shape=(2,2)
Uks_shape = (n+1,)+H_shape # Uks_shape=(1,2,2)
Uks = jnp.zeros(Uks_shape, dtype=Hks.dtype)
Uks = Uks.at[0].set(U)
Uks = wrapped_fori(0, n, update_Uks, (Uks, Hks))[0] # Treats the case n=0 separately
return Uks
def update_Uks(k, val):
Uks, Hks = val
Uks = Uks.at[k+1].set(-1j*binom_conv(k+1, Hks, Uks))
return Uks, Hks
# Test
Hks = jnp.zeros((3,2,2), dtype=complex)
U = jnp.eye(2, dtype=complex)
build(U, Hks)
</code></pre>
<p>The above works fine. However, I noticed that I can't replace all my <code>fori_loop</code>s with this wrapper. Specifically, it fails when used with nested loops. For example, the following modification of the function <code>part_binom_conv()</code> fails:</p>
<pre class="lang-py prettyprint-override"><code>import jax.numpy as jnp
import jax
from jax.scipy.special import gammaln
# # WRAPPER FOR FORI TO HANDLE THE CASE UPPER<=LOWER SEPARATELY
def wrapped_fori(lower, upper, body_fun, init_val, unroll=None):
if upper<=lower:
out = init_val
else:
out = jax.lax.fori_loop(lower, upper, body_fun, init_val, unroll=unroll)
return out
def comb(n, k):
return jnp.round(jnp.exp(gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)))
def binom_conv(n, Aks, Bks):
return part_binom_conv(n, 0, n, Aks, Bks)
def part_binom_conv(n, k0, k1, Aks, Bks):
A_shape = Aks.shape[1:]
A_dtype = Aks.dtype
init_conv = jnp.zeros(A_shape, dtype=A_dtype)
conv = wrapped_fori(k0, k1, update_binom_conv, (init_conv, n, Aks, Bks))[0] #<--- This causes an error
return conv
def update_binom_conv(k, val):
conv, n, Aks, Bks = val
conv = conv + comb(n-1, k) * Aks[k] @ Bks[(n-1)-k]
return conv, n, Aks, Bks
@jax.jit
def build(U, Hks):
n = Hks.shape[0] # n=0
H_shape = Hks.shape[1:] # H_shape=(2,2)
Uks_shape = (n+1,)+H_shape # Uks_shape=(1,2,2)
Uks = jnp.zeros(Uks_shape, dtype=Hks.dtype)
Uks = Uks.at[0].set(U)
Uks = wrapped_fori(0, n, update_Uks, (Uks, Hks))[0] # Treats the case n=0 separately
return Uks
def update_Uks(k, val):
Uks, Hks = val
Uks = Uks.at[k+1].set(-1j*binom_conv(k+1, Hks, Uks))
return Uks, Hks
# Test
Hks = jnp.zeros((3,2,2), dtype=complex)
U = jnp.eye(2, dtype=complex)
build(U, Hks)
</code></pre>
<p>The error is a <code>TracerBoolConversionError</code>, which I think is related to tracing the condition in my wrapper:</p>
<pre class="lang-py prettyprint-override"><code>---------------------------------------------------------------------------
TracerBoolConversionError Traceback (most recent call last)
Cell In[4], line 55
53 Hks = jnp.zeros((3,2,2), dtype=complex)
54 U = jnp.eye(2, dtype=complex)
---> 55 build(U, Hks)
[... skipping hidden 13 frame]
Cell In[4], line 43
41 Uks = jnp.zeros(Uks_shape, dtype=Hks.dtype)
42 Uks = Uks.at[0].set(U)
---> 43 Uks = wrapped_fori(0, n, update_Uks, (Uks, Hks))[0] # Treats the case n=0 separately
44 return Uks
Cell In[4], line 10
8 out = init_val
9 else:
---> 10 out = jax.lax.fori_loop(lower, upper, body_fun, init_val, unroll=unroll)
11 return out
[... skipping hidden 12 frame]
Cell In[4], line 48
46 def update_Uks(k, val):
...
-> 1806 raise TracerBoolConversionError(arg)
TracerBoolConversionError: Attempted boolean conversion of traced array with shape bool[].
The error occurred while tracing the function update_Uks at /var/folders/x0/28x522xx1vb2xl75tn781lqr0000gn/T/ipykernel_54810/1590930335.py:46 for fori_loop. This concrete value was not available in Python because it depends on the value of the argument k.
See https://docs.jax.dev/en/latest/errors.html#jax.errors.TracerBoolConversionError
</code></pre>
<p>My question is a little bit of a challenge. <strong>Is it possible to modify this wrapper for the <code>fori_loop</code> so that it doesn't trace the body when <code>upper<=lower</code>, and that it never causes an error in nested loops?</strong></p>
<p>I understand that this will not be implemented in <code>jax</code>, but I was wondering if it is something I could do in my code.</p>
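<p><em>Not an answer from the original post, only a minimal sketch of one direction:</em> since the failure appears when the bounds are traced values inside an outer <code>fori_loop</code>, the wrapper could branch in Python only when both bounds are concrete Python integers, and fall back to the plain <code>fori_loop</code> otherwise. This avoids the boolean conversion in the nested case, though the traced fallback still traces the body as before, so it is not a full solution to the challenge.</p>
<pre class="lang-py prettyprint-override"><code>import jax

def wrapped_fori(lower, upper, body_fun, init_val, unroll=None):
    # Branch in Python only when both bounds are static ints, i.e. known at trace time.
    if isinstance(lower, int) and isinstance(upper, int):
        if upper <= lower:
            return init_val  # skip tracing the body entirely
        return jax.lax.fori_loop(lower, upper, body_fun, init_val, unroll=unroll)
    # Traced bounds (e.g. inside another fori_loop): cannot decide in Python,
    # so defer to fori_loop's own runtime behaviour (the body is still traced).
    return jax.lax.fori_loop(lower, upper, body_fun, init_val, unroll=unroll)
</code></pre>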
|
<python><for-loop><jax>
|
2025-10-07 22:34:07
| 1
| 539
|
Ben
|
79,784,882
| 1,747,834
|
Can PyArg_Parse* functions automatically convert strings to numbers?
|
<p>My script reads data from CSV-files and passes the rows to C code, which uses <code>PyArg_ParseTuple()</code> and/or <code>PyArg_ParseTupleAndKeywords()</code> to parse the arguments and then work on them.</p>
<p>Some of the arguments are supposed to be strings, others -- numbers (integers and floating point). The expected types are passed to the <code>PyArg</code>-functions:</p>
<pre class="lang-c prettyprint-override"><code>if (!PyArg_ParseTupleAndKeywords(args, kw, "sdddd", keywords,
&name, &height, &width, &depth, &weight))
return NULL;
</code></pre>
<p>Trouble is, all of the fields parsed from CSV are <em>strings</em>. When passed to <code>PyArg_ParseTupleAndKeywords</code> this triggers a type error because <code>"11"</code> is not a number, even if <code>11</code> is...</p>
<p>So I'm explicitly casting some fields to numbers, but that looks ugly and duplicates the knowledge already expressed in the format string.</p>
<p>Is there a way to tell the PyArg-functions to try to convert the objects into numbers, when necessary -- when the specified argument is supposed to be a number -- and only fail the parsing, if the conversion fails?</p>
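<p><em>Not part of the original post:</em> as a sketch of the Python-side workaround described above, the conversion can at least be driven by the same format string, so the knowledge is not duplicated by hand. The helper below is hypothetical and only handles the <code>s</code>/<code>d</code>/<code>f</code>/<code>i</code>/<code>l</code> codes; the final call into the extension is also hypothetical.</p>
<pre class="lang-py prettyprint-override"><code>def convert_row(fields, fmt):
    """Convert CSV string fields according to a PyArg-style format string (hypothetical helper)."""
    converters = {'s': str, 'd': float, 'f': float, 'i': int, 'l': int}
    return [converters[code](value) for code, value in zip(fmt, fields)]

# Example: same format string as the C code expects
row = ['widget', '11', '2.5', '3.0', '0.75']
args = convert_row(row, 'sdddd')  # -> ['widget', 11.0, 2.5, 3.0, 0.75]
# some_c_module.some_function(*args)  # hypothetical call into the extension
</code></pre>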
|
<python><c>
|
2025-10-07 19:41:59
| 0
| 4,246
|
Mikhail T.
|
79,784,865
| 344,286
|
How do I talk to Python in a subprocess?
|
<p>I would like to launch the Python REPL as a subprocess, but it doesn't appear to work:</p>
<pre><code>import sys
import subprocess
p = subprocess.Popen([sys.executable], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
print(p)
print('-')
p.stdin.write(b'42\n')
for line in p.stdout:
    print(line)
for line in p.stderr:
    print(line)
</code></pre>
<p>There's nothing there in stdout or stderr.</p>
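<p><em>Not from the original post, a minimal sketch of one way this is commonly made to work:</em> force interactive mode with <code>-i</code> so the child behaves like a REPL on a pipe, and use <code>communicate()</code>, which writes the input, closes stdin, and reads both streams to EOF, avoiding the deadlock of writing without flushing and then iterating pipes that never end.</p>
<pre class="lang-py prettyprint-override"><code>import sys
import subprocess

# -i forces the REPL even though stdin is a pipe rather than a terminal.
p = subprocess.Popen([sys.executable, "-i"],
                     stdin=subprocess.PIPE,
                     stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE)

# communicate() sends the input, closes stdin, and collects all output.
out, err = p.communicate(b"42\n")
print(out)   # evaluated result, e.g. b'42\n'
print(err)   # version banner and >>> prompts arrive on stderr
</code></pre>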
|
<python><subprocess>
|
2025-10-07 19:16:45
| 0
| 52,263
|
Wayne Werner
|
79,784,814
| 12,366,148
|
Deploying an Azure Python Webjob on Linux
|
<p>I have a <code>Linux</code> based <code>NodeJS</code> <code>Web App</code> resource on <code>Azure</code>.</p>
<p>I want to deploy a <code>Python 3.12</code> based continuous <code>Web Job</code> under this Web App. I have added a <code>run.sh</code> which contains installing the <code>requirements.txt</code> and then running <code>main.py</code>, which contains an infinite while loop for running the job.</p>
<p>What I have noticed is that there is a <code>Python 3.11</code> available, which is not ideal, as my web job was written for <code>3.12</code>. The 3.11 install is also externally managed, meaning I can't create a <code>venv</code>, and <code>pip</code> / <code>pip3</code> are unknown commands, so I can't install any pip packages (<code>ensurepip</code> is also not available). I have tried to install 3.12 in various ways, but none worked (building from source, downloading from the Python website via curl, etc.). I also tried copying over the venv folder via my zip deploy, but the file is too big. There are also no "Extensions" on the <code>Linux Kudu</code> site to add a Python app. I am fairly new to Azure and can't find anything related to my issue in the official documentation.</p>
<p>What is the general way to add a certain Python version, and install pip dependencies on a NodeJs based Web App that runs on Linux?</p>
|
<python><linux><azure><azure-web-app-service><azure-webjobs>
|
2025-10-07 18:01:25
| 1
| 524
|
CaptainCsaba
|
79,784,755
| 7,179,546
|
AppDynamics Python agent not working when migrating to EKS
|
<p>I have a Python application that uses AppDynamics and works on Rancher. Everything is OK: no errors of any kind, and the data is sent correctly by the agent.</p>
<p>I'm migrating it now to EKS and the agent is not initializing anymore.</p>
<p>I'm seeing in the logs <code>"event": "Exception in agent startup."</code>
and finally</p>
<p><code>in _find_and_load_unlocked\nModuleNotFoundError: No module named 'appdynamics_bindeps'"</code></p>
<p>I've tried adding the package <a href="https://pypi.org/project/appdynamics-bindeps-linux-x64/" rel="nofollow noreferrer">https://pypi.org/project/appdynamics-bindeps-linux-x64/</a> in its latest version, but nothing works; the error stays the same. The AppDynamics agent version I'm using is 24.11.0.7213, and appdynamics-bindeps-linux-x64 is 25.6.0.</p>
<p>I'm using Python 3.8 but I've tested also Python 3.9 with the same result.</p>
<p>I've also played a bit with the annotations in the YAML I use to deploy the project, since it was causing problems with the telemetry.</p>
<p>Currently this is what I have, although I have tested several combinations:</p>
<pre><code>  annotations:
    instrumentation.opentelemetry.io/inject-python: "false"
    instrumentation.opentelemetry.io/inject-java: "false"
</code></pre>
<p>What's wrong?</p>
|
<python><amazon-eks><appdynamics>
|
2025-10-07 16:31:42
| 0
| 737
|
Carabes
|
79,784,428
| 5,269,892
|
Openpyxl conditional formatting if cell among list values
|
<p>I would like to use <code>openpyxl</code> to create an Excel conditional formatting rule where cells are filled with a particular color, when the cell value matches an element among a hardcoded list of values. The hardcoded list should <em>not</em> be stored in the same Excel file. If possible, I would like to use an <code>ISNUMBER(MATCH())</code> approach - I have also applied that approach with named ranges, which works perfectly fine.</p>
<p>Below is a minimal code example. The file is successfully created, but when opening it, Excel displays an error message regarding damaged file content and removes the conditional formatting. Nevertheless, manually entering that formula works both in English and German Excel localizations - at least with semicolons like <code>=ISNUMBER(MATCH(A1; {"test";"beer"}; 0))</code> or <code>=ISTZAHL(VERGLEICH(A1; {"test";"beer"}; 0))</code>.</p>
<p><strong>Q: Is there some syntax issue in the conditional formatting formula? Is there some specific issue on the openpyxl side?</strong></p>
<pre><code>from openpyxl import Workbook
from openpyxl.formatting.rule import FormulaRule
from openpyxl.styles import PatternFill
wb = Workbook()
ws = wb.active
# Add some test data
ws['A1'] = 'test'
ws['A2'] = 'beer'
ws['A3'] = 'foo'
ws['A4'] = 'bar'
# Define the green fill
green_fill = PatternFill(start_color='00FF00', end_color='00FF00', fill_type='solid')
# Inline array in the conditional format formula
formula = '=ISNUMBER(MATCH(A1, {"test","beer"}, 0))'
# Apply the formatting to A1:A10
ws.conditional_formatting.add('A1:A10', FormulaRule(formula=[formula], fill=green_fill))
wb.save('conditional_formatting_inline_array.xlsx')
</code></pre>
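<p><em>Not from the original post, and only a guess to verify:</em> <code>openpyxl</code>'s own conditional-formatting examples pass the formula text without the leading <code>=</code>, and a leading <code>=</code> inside a <code>FormulaRule</code> is a frequently cited cause of the "damaged file content" repair prompt. A minimal variant of the rule above under that assumption, reusing <code>ws</code>, <code>green_fill</code>, and <code>FormulaRule</code> from the snippet:</p>
<pre class="lang-py prettyprint-override"><code># Same rule as above, but without the leading '=' in the formula text (assumption to verify)
formula = 'ISNUMBER(MATCH(A1,{"test","beer"},0))'
ws.conditional_formatting.add('A1:A10', FormulaRule(formula=[formula], fill=green_fill))
</code></pre>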
|
<python><excel><openpyxl><conditional-formatting>
|
2025-10-07 10:03:43
| 1
| 1,314
|
silence_of_the_lambdas
|
79,784,351
| 11,067,209
|
PyTorch + Optuna causes random segmentation fault inside TransformerEncoderLayer (PyTorch 2.6, CUDA 12)
|
<p>I'm running into a segmentation fault when training a Transformer model with PyTorch 2.6.0 and Optuna on CUDA (12.4).
The exact same code used to work fine; the issue appeared only after introducing Optuna.</p>
<p>The crash happens deep inside the Transformer feed-forward block:</p>
<pre><code>Fatal Python error: Segmentation fault
File ".../torch/nn/modules/transformer.py", line 947, in _ff_block
File ".../torch/nn/functional.py", line 1704, in relu
<no Python frame>
</code></pre>
<p>No Python exception is raised; the process just dies.
It only happens when training is executed through Optuna (even with <code>n_jobs=1</code>), but not when I run the same training loop directly.</p>
<h1>Code to reproduce (the error is not deterministic)</h1>
<pre><code>import torch
import torch.nn as nn
import optuna

device = "cuda:0"

# --- simple transformer model ---
class TinyTransformer(nn.Module):
    def __init__(self, d_model=128, nhead=2, num_layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=nhead, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.proj = nn.Linear(d_model, 1)

    def forward(self, x):
        return self.proj(self.encoder(x))

def objective(trial):
    model = TinyTransformer().to(device)
    opt = torch.optim.AdamW(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    x = torch.randn(8, 48, 128, device=device)
    y = torch.randn(8, 48, 1, device=device)
    for _ in range(10):
        opt.zero_grad(set_to_none=True)
        out = model(x)
        loss = loss_fn(out, y)
        loss.backward()
        opt.step()
    return float(loss)

if __name__ == "__main__":
    study = optuna.create_study(direction="minimize")
    # NOTE: the crash only occurs through study.optimize()
    study.optimize(objective, n_trials=5, n_jobs=1)
</code></pre>
<h1>Questions</h1>
<ol>
<li>Is there a known incompatibility between Optuna and PyTorch 2.6 when using CUDA (e.g. due to forking/thread handling)?</li>
<li>Could flash attention in the Transformer encoder be the issue? (See the test sketch below.)</li>
</ol>
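<p><em>Not from the original post:</em> a minimal sketch for isolating question 2, assuming the fused scaled-dot-product-attention kernels are a suspect. PyTorch exposes switches to disable the flash and memory-efficient SDPA backends so the encoder falls back to the plain math implementation; if the segfault disappears with these set, that points at the fused kernels.</p>
<pre class="lang-py prettyprint-override"><code>import torch

# Force nn.TransformerEncoderLayer / scaled_dot_product_attention onto the math backend
torch.backends.cuda.enable_flash_sdp(False)
torch.backends.cuda.enable_mem_efficient_sdp(False)
torch.backends.cuda.enable_math_sdp(True)

# ...then run the same Optuna study as above and check whether the crash still occurs.
</code></pre>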
|
<python><pytorch><attention-model><optuna>
|
2025-10-07 08:39:26
| 1
| 665
|
Angelo
|
79,784,113
| 21,540,734
|
How to correctly pass a filename with a single quote to ffmpeg's subtitles filter in Python?
|
<p>I'm writing a Python script using <code>subprocess</code> to hardcode subtitles onto a video. My code builds a complex filter graph for <code>ffmpeg</code>'s <code>-vf</code> argument, which includes burning in multiple layers of styled subtitles from an SRT file.</p>
<p>The script works perfectly for most files, but it fails whenever the subtitle filename contains a single quote (apostrophe), for example, <code>That's It and That's All.srt</code>.</p>
<p>When the script encounters such a file, the <code>ffmpeg</code> process fails immediately and doesn't create an output file. This causes a downstream <code>KeyError: 'streams'</code> in my code when it later tries to run <code>ffprobe</code> on the non-existent output file, but the root cause is the <code>ffmpeg</code> command failing silently.</p>
<p>The only solution I've found is to programmatically rename the file to remove the apostrophe, which I want to avoid.</p>
<p>The first time I encountered this, I read through the output log and saw that <code>ffmpeg</code> was removing the apostrophes itself from the string path passed to <code>ffmpeg</code> in the code.</p>
<pre class="lang-none prettyprint-override"><code>[Parsed_subtitles_1 @ 000002c435dce5c0] Unable to open C:/FUBAR/Season 01/FUBAR - S01E08 - Thats It and Thats All.srt
</code></pre>
<hr />
<h3>The Code</h3>
<p>Here is a simplified example of how the <code>-vf</code> filter string and the final <code>ffmpeg</code> command are constructed in my script:</p>
<pre class="lang-py prettyprint-override"><code>import subprocess
from pathlib import Path
# --- Example of a problematic file path ---
source_video_path = Path(r"C:\videos\FUBAR - S01E08 - That's It and That's All.mkv")
subtitle_path = source_video_path.with_suffix('.srt')
# --- This is how I build the filter string ---
video_filter = 'crop=1920:800:0:140' # Example starting filter
# The subtitle path is formatted for the filter string
# NOTE: My real code finds this path dynamically.
formatted_subtitle_path = str(subtitle_path).replace('\\', '/')
# A simplified version of my style loop
style_string = "FontName=Segoe UI,FontSize=18,PrimaryColour=&H00FFFFFF"
# The filename is placed inside single quotes in the filter
video_filter += f",subtitles=filename='{formatted_subtitle_path}':force_style='{style_string}'"
# --- The final ffmpeg command list ---
command = [
'ffmpeg.exe',
'-y',
'-i', str(source_video_path),
'-vf', video_filter,
'-c:a', 'copy',
'output.mkv',
]
print("--- Generated FFmpeg Command ---")
# Using print to show how Python sees the arguments before execution
for i, arg in enumerate(command):
    print(f"Arg[{i}]: {arg}")
# When run, ffmpeg fails on this command because of the ' in the filename.
# process = subprocess.run(command, text=True, capture_output=True)
# print("\n--- FFmpeg Output ---")
# print(process.stderr)
</code></pre>
<h3>What I've Tried</h3>
<p>I understand the problem is that the single quote in <code>That's It</code> prematurely terminates the <code>filename='...'</code> string within the <code>-vf</code> filter. I have tried several common methods to escape it, but none have worked:</p>
<ol>
<li><strong>Backslash Escaping:</strong> I've tried replacing the apostrophe in the <code>formatted_subtitle_path</code> with various escape sequences before building the filter string.</li>
</ol>
<ul>
<li><code>replace("'", r"\'")</code></li>
<li><code>replace("'", r"\\'")</code></li>
<li><code>replace("'", r"\\\'")</code></li>
</ul>
<ol start="2">
<li><p><strong>Shell Quoting Trick:</strong> I also tried the <code>'\''</code> method.
<code>replace("'", r"'\''")</code></p>
</li>
<li><p><strong>Using Double Quotes:</strong> I tried changing the filter string to wrap the filename in double quotes, but <code>ffmpeg</code> still seems to fail.</p>
</li>
</ol>
<p><code>video_filter += f""",subtitles=filename="{formatted_subtitle_path}":force_style=...</code></p>
<p>None of these attempts have succeeded; the <code>ffmpeg</code> process always fails to start or errors out immediately.</p>
<p>What is the definitive, cross-platform way to format the <code>filename</code> string for <code>ffmpeg</code>'s <code>subtitles</code> filter so that it can correctly handle paths containing single quotes?</p>
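<p><em>Not from the original post, and untested against this exact setup:</em> ffmpeg's filtergraph documentation describes two levels of escaping for filter option values, so one sketch is to drop the surrounding <code>filename='...'</code> quotes entirely and escape the special characters twice (once for the option value, once for the filtergraph parser). The exact replacement set below is an assumption to verify against ffmpeg's "Notes on filtergraph escaping" section.</p>
<pre class="lang-py prettyprint-override"><code>def escape_for_subtitles_filter(path: str) -> str:
    """Escape a Windows path for subtitles=filename=... with no surrounding quotes (sketch, unverified)."""
    p = path.replace('\\', '/')   # forward slashes avoid backslash escaping entirely
    p = p.replace("'", r"\\\'")   # literal apostrophe: escaped once per level -> \\\'
    p = p.replace(':', r'\\:')    # drive-letter colon: \: at option level, \\: after the graph level
    p = p.replace(',', r'\,')     # comma separates filters at the graph level
    p = p.replace('[', r'\[').replace(']', r'\]')  # brackets would start link labels
    return p

style_string = "FontName=Segoe UI,FontSize=18,PrimaryColour=&H00FFFFFF"
formatted = escape_for_subtitles_filter(r"C:\videos\FUBAR - S01E08 - That's It and That's All.srt")
video_filter = f"crop=1920:800:0:140,subtitles=filename={formatted}:force_style='{style_string}'"
</code></pre>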
|
<python><ffmpeg><subprocess><escaping><command-line-arguments>
|
2025-10-06 22:13:06
| 4
| 425
|
phpjunkie
|
79,784,092
| 1,174,102
|
python virtualenv without losing apt-installed modules
|
<p>How can I create a python Virtual ENVironment (venv) such that the new venv has all of the python modules that are already installed on my system (with <code>apt</code>)?</p>
<blockquote>
<p>🛈 Note: It's much more secure to install python modules with <code>apt</code> than <code>pip</code>, due to the fact that <a href="https://security.stackexchange.com/a/234098/213165">pip doesn't verify the authenticity</a> of anything that it downloads with cryptographic signatures. This means that, unlike apt (which has required <a href="https://security.stackexchange.com/questions/246425/does-apt-get-enforce-cryptographic-authentication-and-integrity-validation-by-de">authenticating everything it downloads</a> since <a href="https://wiki.debian.org/SecureApt" rel="nofollow noreferrer">2005</a>), it is safer to obtain python modules with <code>apt</code> than with <code>pip</code>.</p>
</blockquote>
<p>Unfortunately, creating a venv in Debian 12 seems to "forget" the modules that I've already installed with <code>apt</code>.</p>
<pre><code>user@disp3666:~$ sudo apt-get install python3-virtualenv python3-numpy
...
user@disp3666:~$
user@disp3666:~$ python3
Python 3.11.2 (main, Apr 28 2025, 14:11:48) [GCC 12.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy;
>>>
user@disp3666:~$
user@disp3666:~$ python3 -m virtualenv /tmp/venv
created virtual environment CPython3.11.2.final.0-64 in 336ms
creator CPython3Posix(dest=/tmp/venv, clear=False, no_vcs_ignore=False, global=False)
seeder FromAppData(download=False, pip=bundle, setuptools=bundle, wheel=bundle, via=copy, app_data_dir=/home/user/.local/share/virtualenv)
added seed packages: pip==23.0.1, setuptools==66.1.1, wheel==0.38.4
activators BashActivator,CShellActivator,FishActivator,NushellActivator,PowerShellActivator,PythonActivator
user@disp3666:~$
user@disp3666:~$ source /tmp/venv/bin/activate
(venv) user@disp3666:~$ python
Python 3.11.2 (main, Apr 28 2025, 14:11:48) [GCC 12.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy;
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'numpy'
>>>
(venv) user@disp3666:~$
</code></pre>
<p>As you can see above, I've already installed <code>numpy</code>, but the venv can't find it.</p>
<p>How can I create a venv in python, such that the new virtual environment includes modules that I've already installed on the system with <code>apt</code>?</p>
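<p><em>Not from the original post:</em> both the stdlib <code>venv</code> module and <code>virtualenv</code> accept a <code>--system-site-packages</code> flag, which makes the new environment see the interpreter's global (apt-installed) site packages. A minimal sketch, assuming the same Debian 12 setup as in the transcript above:</p>
<pre><code>user@disp3666:~$ python3 -m virtualenv --system-site-packages /tmp/venv
user@disp3666:~$ source /tmp/venv/bin/activate
(venv) user@disp3666:~$ python -c "import numpy; print(numpy.__version__)"
</code></pre>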
|
<python><debian><virtualenv><apt>
|
2025-10-06 21:29:32
| 1
| 2,923
|
Michael Altfield
|
79,783,857
| 8,800,836
|
`jax.lax.fori_loop` with equal `lower` and `upper` should produce no iteration, but body still executed
|
<p>I have a code that uses a bunch of <code>jax.lax.fori_loop</code>. The <a href="https://docs.jax.dev/en/latest/_autosummary/jax.lax.fori_loop.html" rel="nofollow noreferrer">documentation</a> of <code>fori_loop</code> says that "setting upper <= lower will produce no iterations". So I was naively expecting the loop to just return its <code>init_val</code> unchanged. But in my case, it seems like it does attempt to execute the body.</p>
<p>The code is as follows:</p>
<pre class="lang-py prettyprint-override"><code>import jax.numpy as jnp
import jax
from jax.scipy.special import gammaln
# PRELIMINARY PART FOR MWE
def comb(n, k):
return jnp.round(jnp.exp(gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)))
def binom_conv(n, Aks, Bks):
return part_binom_conv(n, 0, n, Aks, Bks)
def part_binom_conv(n, k0, k1, Aks, Bks):
A_shape = Aks.shape[1:]
A_dtype = Aks.dtype
init_conv = jnp.zeros(A_shape, dtype=A_dtype)
conv = jax.lax.fori_loop(k0, k1, update_binom_conv, (init_conv, n, Aks, Bks))[0]
return conv
def update_binom_conv(k, val):
conv, n, Aks, Bks = val
conv = conv + comb(n-1, k) * Aks[k] @ Bks[(n-1)-k]
return conv, n, Aks, Bks
# IMPORTANT PART
def build(U, Hks):
n = Hks.shape[0] # n=0
H_shape = Hks.shape[1:] # H_shape=(2,2)
Uks_shape = (n+1,)+H_shape # Uks_shape=(1,2,2)
Uks = jnp.zeros(Uks_shape, dtype=Hks.dtype)
Uks = Uks.at[0].set(U)
Uks = jax.lax.fori_loop(0, n, update_Uks, (Uks, Hks))[0] # n=0, so lower=upper=0. Should produce no iterations???
return Uks
def update_Uks(k, val):
Uks, Hks = val
Uks = Uks.at[k+1].set(-1j*binom_conv(k+1, Hks, Uks))
return Uks, Hks
# Test
Hks = jnp.zeros((0,2,2), dtype=complex)
U = jnp.eye(2, dtype=complex)
build(U, Hks)
</code></pre>
<p>This returns the following error:</p>
<pre><code>---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
Cell In[10], line 47
45 Hks = jnp.zeros((0,2,2), dtype=complex)
46 U = jnp.eye(2, dtype=complex)
---> 47 build(U, Hks)
Cell In[10], line 35
33 Uks = jnp.zeros(Uks_shape, dtype=Hks.dtype)
34 Uks = Uks.at[0].set(U)
---> 35 Uks = jax.lax.fori_loop(0, n, update_Uks, (Uks, Hks))[0] # n=0, so lower=upper=0. Should produce no iterations???
36 return Uks
[... skipping hidden 12 frame]
Cell In[10], line 40
38 def update_Uks(k, val):
39 Uks, Hks = val
---> 40 Uks = Uks.at[k+1].set(-1j*binom_conv(k+1, Hks, Uks))
41 return Uks, Hks
Cell In[10], line 12
11 def binom_conv(n, Aks, Bks):
---> 12 return part_binom_conv(n, 0, n, Aks, Bks)
...
--> 930 raise IndexError(f"index is out of bounds for axis {x_axis} with size 0")
931 i = _normalize_index(i, x_shape[x_axis]) if normalize_indices else i
932 i_converted = lax.convert_element_type(i, index_dtype)
IndexError: index is out of bounds for axis 0 with size 0
</code></pre>
<p>I'm not sure I understand what is going on here. Shouldn't the <code>fori_loop</code> just return its <code>init_val</code> and not cause this error?</p>
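<p><em>Not from the original post:</em> <code>fori_loop</code> skips iterations at runtime, but it still traces <code>body_fun</code> once to build the loop, and that trace is where the out-of-bounds indexing on the size-0 axis happens. A minimal sketch of the usual workaround, assuming <code>n</code> is a static Python int as it is here, is to branch in Python inside <code>build</code> before calling the loop:</p>
<pre class="lang-py prettyprint-override"><code># n = Hks.shape[0] is a plain Python int, so this branch happens at trace time
if n > 0:
    Uks = jax.lax.fori_loop(0, n, update_Uks, (Uks, Hks))[0]
# else: keep Uks as initialised; the loop body is never traced for n == 0
</code></pre>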
|
<python><for-loop><jax>
|
2025-10-06 15:49:48
| 2
| 539
|
Ben
|
79,783,603
| 1,194,864
|
Websocket communication between server and client in python
|
<p>I would like to create a WebSocket communication between my personal laptop and a server. I have created the <code>client.py</code> and <code>server.py</code>, and they look as follows:</p>
<p>Firstly, the server:</p>
<pre><code>import asyncio
import websockets

async def handle_client(websocket):
    try:
        async for message in websocket:
            if isinstance(message, bytes):  # binary frame (image)
                print(f"Received image of {len(message)} bytes")
                # (optional) save to disk for testing
                with open("received.jpg", "wb") as f:
                    f.write(message)
                # Generate a response (in real case: run model inference here)
                response = "Server: image received and processed!"
                await websocket.send(response)
            else:
                # Fallback if a client sends text
                print("Received text:", message)
                await websocket.send("Server: got your text!")
    except websockets.exceptions.ConnectionClosed:
        print("Client disconnected")

async def main():
    server = await websockets.serve(handle_client, "0.0.0.0", 12348)
    print("Server running...")
    await server.wait_closed()

if __name__ == "__main__":
    asyncio.run(main())
</code></pre>
<p>and then the client:</p>
<pre><code>import cv2
import asyncio
import websockets

async def send_image():
    frame = cv2.imread("temp.jpg")  # replace with your file path

    # Encode as JPEG
    success, jpg = cv2.imencode(".jpg", frame)
    if not success:
        print("Encoding failed")
        return
    data = jpg.tobytes()

    # Connect and send
    async with websockets.connect("ws://xxx.xxx.xx.xxx:12348") as websocket:  # here I add the IP of the server found after running ipconfig
        await websocket.send(data)  # send as binary
        print(f"Sent image ({len(data)} bytes)")

        # Wait for server reply
        response = await websocket.recv()
        print("Server replied:", response)

if __name__ == "__main__":
    asyncio.run(send_image())
</code></pre>
<p>When I run both the <code>server</code> and the <code>client</code>, the programs run, but I get no reaction from the server. Is there something I need to do differently? Is there something wrong with my code?</p>
<p>I am actually getting a timeout:</p>
<blockquote>
<pre><code>raise TimeoutError("timed out during opening handshake") from exc TimeoutError: timed out during opening handshake
</code></pre>
</blockquote>
<p>Edit: It seems that the issue lies with the following line:</p>
<p><code>async with websockets.connect("ws://xxx.xxx.xx.xxx:12348") as websocket:</code></p>
<p>It might be that I am doing something wrong when trying to connect to my server using the server's IP. Is that the correct thing to do?</p>
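<p><em>Not from the original post:</em> a handshake timeout usually means the TCP connection itself isn't reaching the server (wrong IP, firewall, NAT, or the server listening on a different network). A minimal sketch, using the same placeholder address and port as above, is to test plain TCP reachability first, separately from the WebSocket layer:</p>
<pre class="lang-py prettyprint-override"><code>import socket

host, port = "xxx.xxx.xx.xxx", 12348  # same placeholder address as above

try:
    # If this fails, the problem is network reachability, not the websockets code.
    with socket.create_connection((host, port), timeout=5):
        print("TCP connection succeeded; the port is reachable")
except OSError as exc:
    print("TCP connection failed:", exc)
</code></pre>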
|
<python><websocket>
|
2025-10-06 10:52:00
| 1
| 5,452
|
Jose Ramon
|
79,783,536
| 193,501
|
Class declaring error when the code is run, not sure what is the issue
|
<pre class="lang-py prettyprint-override"><code>class Dog:
"""A simple attempt to model a dog."""
def __init__(self, name, age):
"""Initialize name and age attributes."""
self.name = name
self.age = age
def sit(self):
"""Simulate a dog sitting in response to a command"""
print(self.name.title() + " is now sitting.")
def roll_over(self):
"""Simulate rolling over in response to a command"""
print(self.name.title() + " rolled over!")
my_dog = Dog('Willie', 6)
print("My dog's name is " + my_dog.name.title() + ".")
print("My dog is " + str(my_dog.age) + " years old.")
</code></pre>
<p>What does this error mean?</p>
<pre class="lang-none prettyprint-override"><code>/usr/bin/python3.12 /home/adnananwar/PycharmProjects/demo-repo/soup.py
Traceback (most recent call last):
File "/home/adnananwar/PycharmProjects/demo-repo/soup.py", line 3, in <module>
from IPython.display import Markdown, display
ModuleNotFoundError: No module named 'IPython'
</code></pre>
<p>LINE 3 is where I declare the class.</p>
<p>I am using PyCharm Community, and when I try to install the package, I get another error:</p>
<p><a href="https://i.sstatic.net/HlKdL1wO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HlKdL1wO.png" alt="Package install Error" /></a></p>
<p>What am I doing wrong here?</p>
|
<python><python-3.x><pycharm>
|
2025-10-06 09:49:48
| 1
| 2,163
|
anwarma
|
79,783,418
| 4,451,315
|
Is going from pyarrow chunkedarray to pyarrow table a zero-copy operation? How to check?
|
<p>If I have</p>
<pre class="lang-py prettyprint-override"><code>import pyarrow as pa
ca = pa.chunked_array([[1,2,3]])
</code></pre>
<p>and then do</p>
<pre><code>t = pa.table({'a': ca})
</code></pre>
<p>then was the <code>pa.table</code> operation a zero-copy one?</p>
<p>I would expect it to be, but is there any way to check?</p>
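<p><em>Not from the original post:</em> one concrete way to check is to compare the underlying buffer addresses; if the table's column chunks point at the same buffers as the original chunked array, no data was copied. A minimal sketch:</p>
<pre class="lang-py prettyprint-override"><code>import pyarrow as pa

ca = pa.chunked_array([[1, 2, 3]])
t = pa.table({'a': ca})

# Compare the data buffer of the first chunk before and after.
# buffers()[0] is the validity bitmap (may be None), buffers()[1] holds the values.
src_buf = ca.chunk(0).buffers()[1]
dst_buf = t.column('a').chunk(0).buffers()[1]

print(src_buf.address == dst_buf.address)  # True here would indicate zero-copy
</code></pre>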
|
<python><pyarrow>
|
2025-10-06 06:57:45
| 0
| 11,062
|
ignoring_gravity
|
79,783,308
| 1,068,689
|
Jump to single link from Python traceback
|
<p>I have a Python file open in Visual Studio Code, and I run the file in the integrated terminal. When it hits a problem in my code (problems? in MY code???!), Python gives me the usual Traceback, starting with the first command it ran and ending with the line that triggered the bug. For example (and omitting some of the intermediate lines):</p>
<pre><code>Traceback (most recent call last):
File "/shar/1TB/Data/Projects/ExtractMorphFromFLEx/fwdata2xml.py", line 2117, in <module>
domGrammar = LanguageData(sFLExFile)
^^^^^^^^^^^^^^^^^^^^^^^
[some intermediate lines]
File "/shar/1TB/Data/Projects/ExtractMorphFromFLEx/fwdata2xml.py", line 1536, in AddVarFVs
NewNC.features.closedValues.append(FV.ToRef())
^^^^^^^^^^^^^^
</code></pre>
<p>Each line has the complete path to the file and the line #, and if I mouse over it that information becomes underlined, and I get a sort of pop-up that says <code>Open file in editor (ctrl + click)</code>.</p>
<p>It <strong>used</strong> to be that when I Ctrl-left-clicked on this underlined text, VSC would scroll the already opened Python file to that line, and position the cursor at the beginning of this line.</p>
<p>Now instead I get a drop-down with <strong>all</strong> the traceback lines listed, each prefaced by the complete filename. I then have to click on the line in this dropdown that I want (usually at the top of this list). But I <strong>already</strong> selected which line I want, by Ctrl-clicking on that line. I don't want to have to select the appropriate line twice.</p>
<p>This issue is probably my fault, because Reasons... and somewhere I seem to have messed up what Ctrl-click does.</p>
<p>I don't see the option to set Ctrl-click in VSC's keymapper.conf (which seems to be only about keystrokes), nor can I find the command that Ctrl-click is/was using. I've tried searching for words like 'link' in VSC's Settings, to no avail.</p>
<p>Even worse, if I have an https URL in my Python file, ctrl-clicking on it used to bring up that website page in my browser. Now instead it brings up the same drop-down menu listing of traceback lines if I have a traceback, and the final subdirectory of the file's path if there is no traceback.</p>
<p>This is under VSC 1.104.3, in Xubuntu. I have several Python extensions installed, including Microsoft's Python, Pylance, Python Debugger, and Python Environments. A few settings appear in my settings.json file with the word 'python', but none of them looks like a likely candidate.</p>
<p>What am I doing wrong? How do I set what Ctrl-click does, and what should I set it to so it will follow the unique hyperlink that I'm clicking on?</p>
|
<python><hyperlink><traceback>
|
2025-10-06 01:25:35
| 1
| 665
|
Mike Maxwell
|
79,783,307
| 1,719,931
|
uv is building projects without a [build-system] definition in the pyproject.toml
|
<p><a href="https://docs.astral.sh/uv/guides/package/#preparing-your-project-for-packaging" rel="nofollow noreferrer">The documentation for uv says</a>:</p>
<blockquote>
<p>If your project does not include a [build-system] definition in the pyproject.toml, uv will not build it by default.</p>
</blockquote>
<p>However:</p>
<pre><code>C:\Users\USERNAME\Downloads>uv init hello-world
Initialized project `hello-world` at `C:\Users\USERNAME\Downloads\hello-world`
C:\Users\USERNAME\Downloads>cd hello-world
C:\Users\USERNAME\Downloads\hello-world>cat pyproject.toml
[project]
name = "hello-world"
version = "0.1.0"
description = "Add your description here"
readme = "README.md"
requires-python = ">=3.13"
dependencies = []
C:\Users\USERNAME\Downloads\hello-world>uv build
Building source distribution...
running egg_info
creating hello_world.egg-info
writing hello_world.egg-info\PKG-INFO
writing dependency_links to hello_world.egg-info\dependency_links.txt
writing top-level names to hello_world.egg-info\top_level.txt
writing manifest file 'hello_world.egg-info\SOURCES.txt'
reading manifest file 'hello_world.egg-info\SOURCES.txt'
writing manifest file 'hello_world.egg-info\SOURCES.txt'
running sdist
running egg_info
writing hello_world.egg-info\PKG-INFO
writing dependency_links to hello_world.egg-info\dependency_links.txt
writing top-level names to hello_world.egg-info\top_level.txt
reading manifest file 'hello_world.egg-info\SOURCES.txt'
writing manifest file 'hello_world.egg-info\SOURCES.txt'
running check
creating hello_world-0.1.0
creating hello_world-0.1.0\hello_world.egg-info
copying files to hello_world-0.1.0...
copying README.md -> hello_world-0.1.0
copying main.py -> hello_world-0.1.0
copying pyproject.toml -> hello_world-0.1.0
copying hello_world.egg-info\PKG-INFO -> hello_world-0.1.0\hello_world.egg-info
copying hello_world.egg-info\SOURCES.txt -> hello_world-0.1.0\hello_world.egg-info
copying hello_world.egg-info\dependency_links.txt -> hello_world-0.1.0\hello_world.egg-info
copying hello_world.egg-info\top_level.txt -> hello_world-0.1.0\hello_world.egg-info
copying hello_world.egg-info\SOURCES.txt -> hello_world-0.1.0\hello_world.egg-info
Writing hello_world-0.1.0\setup.cfg
Creating tar archive
removing 'hello_world-0.1.0' (and everything under it)
Building wheel from source distribution...
running egg_info
writing hello_world.egg-info\PKG-INFO
writing dependency_links to hello_world.egg-info\dependency_links.txt
writing top-level names to hello_world.egg-info\top_level.txt
reading manifest file 'hello_world.egg-info\SOURCES.txt'
writing manifest file 'hello_world.egg-info\SOURCES.txt'
running bdist_wheel
running build
running build_py
creating build\lib
copying main.py -> build\lib
running egg_info
writing hello_world.egg-info\PKG-INFO
writing dependency_links to hello_world.egg-info\dependency_links.txt
writing top-level names to hello_world.egg-info\top_level.txt
reading manifest file 'hello_world.egg-info\SOURCES.txt'
writing manifest file 'hello_world.egg-info\SOURCES.txt'
installing to build\bdist.win-amd64\wheel
running install
running install_lib
creating build\bdist.win-amd64\wheel
copying build\lib\main.py -> build\bdist.win-amd64\wheel\.
running install_egg_info
Copying hello_world.egg-info to build\bdist.win-amd64\wheel\.\hello_world-0.1.0-py3.13.egg-info
running install_scripts
creating build\bdist.win-amd64\wheel\hello_world-0.1.0.dist-info\WHEEL
creating 'C:\Users\USERNAME\Downloads\hello-world\dist\.tmp-9cxdd7r9\hello_world-0.1.0-py3-none-any.whl' and adding 'build\bdist.win-amd64\wheel' to it
adding 'main.py'
adding 'hello_world-0.1.0.dist-info/METADATA'
adding 'hello_world-0.1.0.dist-info/WHEEL'
adding 'hello_world-0.1.0.dist-info/top_level.txt'
adding 'hello_world-0.1.0.dist-info/RECORD'
removing build\bdist.win-amd64\wheel
Successfully built dist\hello_world-0.1.0.tar.gz
Successfully built dist\hello_world-0.1.0-py3-none-any.whl
</code></pre>
<p>Am I missing something or is it building a project that does not include a [build-system] definition in the pyproject.toml?</p>
|
<python><uv>
|
2025-10-06 01:19:28
| 0
| 5,202
|
robertspierre
|
79,783,245
| 11,540,076
|
Google Cloud Run error with OpenTelemetry CloudMonitoringMetricsExporter: "One or more points were written more frequently than the maximum sampl..."
|
<p><strong>Background</strong></p>
<p>I have a containerized Python Flask application that is deployed on Google Cloud Run. I want to extract custom metrics from this app and send them to Google Cloud Monitoring.</p>
<p>I followed the example in these two websites, using <code>CloudMonitoringMetricsExporter</code> from <code>opentelemetry.exporter.cloud_monitoring</code> to export metrics directly to Google Cloud Monitoring (<em><strong>without</strong></em> using a collector sidecar as described <a href="https://cloud.google.com/stackdriver/docs/managed-prometheus/cloudrun-sidecar" rel="nofollow noreferrer">here</a>):</p>
<ul>
<li><a href="https://pypi.org/project/opentelemetry-exporter-gcp-monitoring/" rel="nofollow noreferrer">https://pypi.org/project/opentelemetry-exporter-gcp-monitoring/</a></li>
<li><a href="https://google-cloud-opentelemetry.readthedocs.io/en/latest/examples/cloud_monitoring/README.html" rel="nofollow noreferrer">https://google-cloud-opentelemetry.readthedocs.io/en/latest/examples/cloud_monitoring/README.html</a></li>
</ul>
<p><strong>Error</strong></p>
<p>Sometimes, but not always, almost exactly 15 minutes after my Cloud Run service records the last activity in the logs, I see the following in the logs, showing a termination signal from Cloud Run, followed by an error writing to Google Cloud Monitoring:</p>
<pre><code>[2025-10-05 13:03:54 +0000] [1] [INFO] Handling signal: term
[2025-10-05 13:03:54 +0000] [2] [INFO] Worker exiting (pid: 2)
[ERROR] - Error while writing to Cloud Monitoring
Traceback (most recent call last):
File "/usr/local/lib/python3.13/site-packages/google/api_core/grpc_helpers.py", line 75, in error_remapped_callable
return callable_(*args, **kwargs)
File "/usr/local/lib/python3.13/site-packages/grpc/_interceptor.py", line 277, in __call__
response, ignored_call = self._with_call(
request,
...<4 lines>...
compression=compression,
)
File "/usr/local/lib/python3.13/site-packages/grpc/_interceptor.py", line 332, in _with_call
return call.result(), call
File "/usr/local/lib/python3.13/site-packages/grpc/_channel.py", line 440, in result
raise self
File "/usr/local/lib/python3.13/site-packages/grpc/_interceptor.py", line 315, in continuation
response, call = self._thunk(new_method).with_call(
request,
...<4 lines>...
compression=new_compression,
)
File "/usr/local/lib/python3.13/site-packages/grpc/_channel.py", line 1195, in with_call
return _end_unary_response_blocking(state, call, True, None)
File "/usr/local/lib/python3.13/site-packages/grpc/_channel.py", line 1009, in _end_unary_response_blocking
raise _InactiveRpcError(state) # pytype: disable=not-instantiable
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.INVALID_ARGUMENT
</code></pre>
<p>The specific error is then "One or more points were written more frequently than the maximum sampling period configured for the metric" (same one as called out <a href="https://google-cloud-opentelemetry.readthedocs.io/en/latest/examples/cloud_monitoring/README.html#troubleshooting" rel="nofollow noreferrer">here</a>):</p>
<pre><code> " details = "One or more TimeSeries could not be written: timeSeries[0-2] (example metric.type="workload.googleapis.com/<redacted>", metric.labels={"net_peer_name": "<redacted>", "environment": "prod", "webhook_label": "generic", "component": "forwarder", "http_status_code": "200", "http_status_bucket": "2xx", "user_agent": "<redacted>", "opentelemetry_id": "d731413a"}): write for resource=generic_task{namespace:cloud-run,location:us-central1,job:<redacted>,task_id:02f24696-0786-4970-a93b-02176d5f1d75} failed with: One or more points were written more frequently than the maximum sampling period configured for the metric. {Metric: workload.googleapis.com/<redacted>, Timestamps: {Youngest Existing: '2025/10/05-06:03:53.004', New: '2025/10/05-06:03:54.778'}}""
</code></pre>
<p>The error log continues:</p>
<pre><code>" debug_error_string = "UNKNOWN:Error received from peer ipv4:173.194.194.95:443 {grpc_message:"One or more TimeSeries could not be written: timeSeries[0-2] (example metric.type=\"workload.googleapis.com/<redacted>\", metric.labels={\"net_peer_name\": \"<redacted>\", \"environment\": \"prod\", \"webhook_label\": \"generic\", \"component\": \"forwarder\", \"http_status_code\": \"200\", \"http_status_bucket\": \"2xx\", \"user_agent\": \"<redacted>\", \"opentelemetry_id\": \"d731413a\"}): write for resource=generic_task{namespace:cloud-run,location:us-central1,job:<redacted>,task_id:02f24696-0786-4970-a93b-02176d5f1d75} failed with: One or more points were written more frequently than the maximum sampling period configured for the metric. {Metric: workload.googleapis.com/<redacted>, Timestamps: {Youngest Existing: \'2025/10/05-06:03:53.004\', New: \'2025/10/05-06:03:54.778\'}}", grpc_status:3}""
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.13/site-packages/opentelemetry/exporter/cloud_monitoring/__init__.py", line 371, in export
self._batch_write(all_series)
~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "/usr/local/lib/python3.13/site-packages/opentelemetry/exporter/cloud_monitoring/__init__.py", line 155, in _batch_write
self.client.create_time_series(
CreateTimeSeriesRequest(
...<4 lines>...
),
)
File "/usr/local/lib/python3.13/site-packages/google/cloud/monitoring_v3/services/metric_service/client.py", line 1791, in create_time_series
rpc(
request,
...<2 lines>...
metadata=metadata,
)
File "/usr/local/lib/python3.13/site-packages/google/api_core/gapic_v1/method.py", line 131, in __call__
return wrapped_func(*args, **kwargs)
File "/usr/local/lib/python3.13/site-packages/google/api_core/timeout.py", line 130, in func_with_timeout
return func(*args, **kwargs)
File "/usr/local/lib/python3.13/site-packages/google/api_core/grpc_helpers.py", line 77, in error_remapped_callable
raise exceptions.from_grpc_error(exc) from exc
"google.api_core.exceptions.InvalidArgument: 400 One or more TimeSeries could not be written: timeSeries[0-2] (example metric.type="workload.googleapis.com/<redacted>", metric.labels={"net_peer_name": "<redacted>", "environment": "prod", "webhook_label": "generic", "component": "forwarder", "http_status_code": "200", "http_status_bucket": "2xx", "user_agent": "<redacted>", "opentelemetry_id": "d731413a"}): write for resource=generic_task{namespace:cloud-run,location:us-central1,job:<redacted>,task_id:02f24696-0786-4970-a93b-02176d5f1d75} failed with: One or more points were written more frequently than the maximum sampling period configured for the metric. {Metric: workload.googleapis.com/<redacted>, Timestamps: {Youngest Existing: '2025/10/05-06:03:53.004', New: '2025/10/05-06:03:54.778'}} [type_url: "type.googleapis.com/google.monitoring.v3.CreateTimeSeriesSummary""
value: "\010\003\032\006\n\002\010\t\020\003
]
[2025-10-05 13:03:57 +0000] [1] [INFO] Shutting down: Master
</code></pre>
<p><strong>My Code</strong></p>
<p>I have a function called <code>configure_metrics</code>, which is (simplified):</p>
<pre><code>def configure_metrics(**kwargs):
    """
    Configure OpenTelemetry metrics for Cloud Monitoring.
    """
    name, namespace, version, instance_id = _infer_service_identity(
        service_name, service_namespace, service_version, service_instance_id
    )  # Custom internal function

    # Base resource with service-specific attributes; avoid platform-specific hardcoding here.
    base_resource = Resource.create(
        {
            "service.name": name,
            "service.namespace": namespace,
            "service.version": version,
            "service.instance.id": instance_id,
        }
    )

    # Detect environment-specific resource (e.g., GCE VM, GKE Pod, Cloud Run instance) and merge.
    try:
        detected_resource = GoogleCloudResourceDetector().detect()
    except Exception as e:
        logger.debug(
            "GCP resource detection failed; continuing with base resource: %s", e
        )
        detected_resource = Resource.create({})
    resource = detected_resource.merge(base_resource)

    exporter = CloudMonitoringMetricsExporter(
        # Helps avoid 'written more frequently than the maximum sampling period' conflicts
        add_unique_identifier=add_unique_identifier
    )
    reader = PeriodicExportingMetricReader(
        exporter, export_interval_millis=export_interval_ms
    )
    provider = MeterProvider(metric_readers=[reader], resource=resource)

    # Sets the global MeterProvider
    # After this, any metrics.get_meter(<any_name>) in your process gets a Meter from this provider.
    metrics.set_meter_provider(provider)
</code></pre>
<p>In <code>main.py</code>, I configure OpenTelemetry metrics as:</p>
<pre><code>def create_app() -> Flask:
    app = Flask(__name__)

    # Initialize OTel metrics provider once per process/worker.
    configure_metrics(
        export_interval_ms=60000
    )  # Export every minute, instead of default every 5 seconds

    # Only now import and register blueprints (routes) so instruments are created
    # against the meter provider installed in configure_metrics()
    from app.routes import webhook
    app.register_blueprint(webhook.bp)
    return app

app = create_app()

if __name__ == "__main__":
    app.run(port=8080)
</code></pre>
<p>And in other files, such as <code>webhook.py</code> referenced above, I define my own custom metrics as in this example:</p>
<pre><code># ---------------------------------------------
# OpenTelemetry (OTel) metrics
# ---------------------------------------------
# Get a meter from the provider that was installed in main.py
meter = metrics.get_meter(
    "webhooks"
)  # Any stable string works for naming this meter

# Request counter
# Metric name maps to workload.googleapis.com/request_counter in Cloud Monitoring.
requests_counter = meter.create_counter(
    name="webhook_request_counter",
    description="Total number of HTTP requests processed by the webhooks blueprint",
    unit="1",
)
</code></pre>
<p>And the metric is updated where needed as:</p>
<pre><code>requests_counter.add(1, attributes=attrs)
</code></pre>
<p><strong>Possible Explanation</strong></p>
<p>I think something along these lines is happening:</p>
<ul>
<li>The exporter to Cloud Monitoring is running every 60 seconds.</li>
<li>Suppose at time T (say 05:29:07) a scheduled export occurs, sending new points for each time series.</li>
<li>Then some time later, the container is being terminated (e.g. Cloud Run shutting down or scaling), and before exiting, the application invokes a shutdown handler or signal handler that triggers a flush of metrics (force flush).</li>
<li>That flush occurs shortly (~1 or 2 seconds) after the last scheduled export. Some of the same time series get a new “point” during that flush with a timestamp that is only 1–2 seconds apart from the previous. Because that's < 5s, Cloud Monitoring rejects it.</li>
</ul>
<p><strong>Help</strong></p>
<p>I do not know how to handle this event in the code in such a way as to avoid the error without losing data. How should I edit my Flask app?</p>
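<p><em>Not from the original post, and only a sketch of one direction consistent with the explanation above:</em> if the duplicate points come from a shutdown flush landing within the minimum sampling period of the last periodic export, handling SIGTERM explicitly and delaying the final <code>shutdown()</code> slightly (within Cloud Run's termination grace period) is one place to start. The <code>provider</code> name being reachable here is an assumption, and whether gunicorn's own signal handling interferes is something to verify.</p>
<pre class="lang-py prettyprint-override"><code>import signal
import time

MIN_SAMPLING_PERIOD_S = 5  # Cloud Monitoring rejects points closer together than this

def _handle_sigterm(signum, frame):
    # Give the last periodic export room before the final flush, so the
    # shutdown write is not within the minimum sampling period of it.
    time.sleep(MIN_SAMPLING_PERIOD_S)
    provider.shutdown()  # assumes the MeterProvider from configure_metrics() is reachable here
    raise SystemExit(0)

signal.signal(signal.SIGTERM, _handle_sigterm)
</code></pre>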
|
<python><google-cloud-run><open-telemetry><google-cloud-monitoring><google-cloud-metrics>
|
2025-10-05 22:27:06
| 0
| 485
|
Charlie Brown
|
79,783,219
| 859,191
|
TexStudio macro running python script but writing return string in message box
|
<p>I make my drawings in Krita while I write my report in TexStudio. In Krita I select part of the drawing and put it on the clipboard. I put my cursor in the .tex file at the desired position and run the macro.</p>
<p>The macro takes the current file name and passes it to a Python script. The latter converts the clipboard content and saves it in a .png file (not shown in the simplified Python script below). The PNG file name is embedded in a string starting with \includegraphics ... ; this string is returned by the Python script to the macro. The macro should insert this TeX command string at the cursor position in the current .tex document.</p>
<p>The <code>editor.insertText</code> command does NOT insert the text into the currently open .tex document where the macro is triggered; instead, it writes the command into the "Messages" box. I also added a command to show the Python script's return string in an "alert", but that opens the alert box without content.</p>
<p>My simplified python script follows below:</p>
<pre><code>#!/usr/bin/env python3
import os
import sys
def main(fn):
    print(repr(str(fn)+"xxxx"), end="")

if __name__ == "__main__":
    main("Hello World")
</code></pre>
<p>The macro code is the following:</p>
<pre><code>var proc = system("/home/stefan/ARCHIVE/SCRIPTS/latex_automation/main.py");
proc.waitForFinished();
var output = proc.standardOutput;
// Ensure the editor is focused
app.focusEditor();
// Confirm editor is valid and insert
if (editor) {
    editor.insertText(output);
} else {
    QMessageBox.information(null, "Error", "No active .tex document found.");
}
alert(output)
</code></pre>
|
<python><python-3.x><macros><latex>
|
2025-10-05 21:30:18
| 0
| 385
|
noste99
|
79,783,212
| 1,636,016
|
How to view the scene from a given pose (4x4 transformation matrix) in Matplotlib
|
<p>I'm trying to place the camera at a particular pose (given as a <code>4x4</code> transformation matrix) with the camera looking along the <code>+ve z-axis</code> in a <strong>Matplotlib 3D plot</strong>.</p>
<p>There are a few reference poses and a dummy 3D model loaded for convenience.</p>
<p>Below is the full working example.</p>
<p>I added the two functions <code>set_camera_from_pose_1</code> and <code>set_camera_from_pose_2</code> following ChatGPT's guidance, but they don't seem to work.</p>
<p>My requirement is to place the camera at the given pose looking at the <code>+ve z-axis</code>.</p>
<p>Any idea/help is appreciated.</p>
<pre class="lang-py prettyprint-override"><code>import requests
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d.art3d import Poly3DCollection
def load_obj(filename):
'''Load vertices and faces from OBJ file'''
vertices, faces = [], []
data = requests.get(filename).text.split('\r\n')
for line in data:
if line.startswith('v '):
vertices.append([float(x) for x in line.strip().split()[1:4]])
elif line.startswith('f '):
face = []
for vertex in line.strip().split()[1:]:
face.append(int(vertex.split('/')[0]) - 1)
faces.append(face)
return np.array(vertices), faces
def rotation_matrix(θx, θy, θz):
Rx = np.array([
[1, 0, 0],
[0, np.cos(θx), -np.sin(θx)],
[0, np.sin(θx), np.cos(θx)]
])
Ry = np.array([
[ np.cos(θy), 0, np.sin(θy)],
[0, 1, 0],
[-np.sin(θy), 0, np.cos(θy)]
])
Rz = np.array([
[np.cos(θz), -np.sin(θz), 0],
[np.sin(θz), np.cos(θz), 0],
[0, 0, 1]
])
R = Rz @ Ry @ Rx # Combined rotation (extrinsic XYZ order)
return R
def set_camera_from_pose_1(ax, pose_matrix):
# Extract rotation and translation
R = pose_matrix[:3, :3]
t = pose_matrix[:3, 3]
# Camera position (eye) is at the translation
eye = t
# The camera looks along the negative z-axis in camera coordinates
# Transform to world coordinates
forward = -R[:, 2] # -Z axis of camera in world frame
up = -R[:, 1] # -Y axis of camera in world frame (up direction)
# Target point (where camera looks)
target = eye + forward
# Set the view
ax.view_init(
elev=np.degrees(np.arcsin(-forward[2] / np.linalg.norm(forward))),
azim=np.degrees(np.arctan2(forward[1], forward[0]))
)
# Alternatively, for more control, use the projection matrix approach:
# This sets the actual 3D view transformation
zoom = 1 # Distance factor (adjust as needed)
ax.set_box_aspect(None, zoom=zoom)
def set_camera_from_pose_2(ax, pose_matrix, target_point=None):
cam_pose = pose_matrix[:3, 3]
R = pose_matrix[:3, :3]
# Determine target point
if target_point is None:
target_point = cam_pose - R[:, 2] # Look along camera's -Z axis
direction = target_point - cam_pose
# Calculate azimuth (rotation around Z)
azimuth = np.degrees(np.arctan2(direction[1], direction[0]))
# Calculate elevation (angle from XY plane)
r_xy = np.sqrt(direction[0]**2 + direction[1]**2)
elevation = np.degrees(np.arctan2(direction[2], r_xy))
# Set view
ax.view_init(elev=elevation, azim=azimuth)
distance = np.linalg.norm(target_point - cam_pose)
zoom = 2 / distance # Adjust factor as needed
ax.set_box_aspect(None, zoom=zoom)
def plot_pose(T, ax=None, label=None, axis_length=1.0, axis_width=2):
''' Visualize a pose from a 4x4 transformation matrix '''
if ax is None:
fig = plt.figure(figsize=(10, 8))
ax = fig.add_subplot(111, projection='3d')
# Extract position (translation) from the matrix
origin = T[:3, 3]
# Extract rotation (orientation) from the matrix
x_axis, y_axis, z_axis = T[:3, 0], T[:3, 1], T[:3, 2]
# Plot the three axes
ax.quiver(origin[0], origin[1], origin[2], x_axis[0], x_axis[1], x_axis[2], color='red', length=axis_length, linewidth=axis_width, arrow_length_ratio=0.2)
ax.quiver(origin[0], origin[1], origin[2], y_axis[0], y_axis[1], y_axis[2], color='green', length=axis_length, linewidth=axis_width, arrow_length_ratio=0.2)
ax.quiver(origin[0], origin[1], origin[2], z_axis[0], z_axis[1], z_axis[2], color='blue', length=axis_length, linewidth=axis_width, arrow_length_ratio=0.2)
# Plot origin point
ax.scatter(origin[0], origin[1], origin[2], color='black', s=40, label='Origin')
if label is not None:
ax.text(origin[0], origin[1], origin[2], label, fontsize=10, va='top', ha='left')
return ax
fig = plt.figure(figsize=(6, 6), facecolor='none')
ax = fig.add_subplot(111, projection='3d', facecolor='none')
# Load the 3D model
scale = 1.0
R = rotation_matrix(np.pi / 2, 0, np.pi / 2)
vertices, faces = load_obj('https://graphics.stanford.edu/~mdfisher/Data/Meshes/bunny.obj')
vertices -= vertices.min(axis=0)
vertices = vertices * scale
vertices = np.array([vertices[face] for face in faces])
mesh = Poly3DCollection(vertices, alpha=0.9, facecolor='cyan', edgecolor='darkgray', linewidths=0.5)
vertices = vertices @ R.T
mesh.set_verts(vertices.tolist())
ax.add_collection3d(mesh)
# Plot some pose(s)
angle = 0
pose1 = np.array([
[np.cos(angle), -np.sin(angle), 0, 0.4],
[np.sin(angle), np.cos(angle), 0, 0.3],
[0, 0, 1, -0.3],
[0, 0, 0, 1.0]
])
pose2 = np.array([
[np.cos(angle), -np.sin(angle), 0, -0.5],
[np.sin(angle), np.cos(angle), 0, -0.4],
[0, 0, 1, 1.0],
[0, 0, 0, 1.0]
])
camera_pose= np.array([
[-0.1744385499559559516, -0.7794457272190911112, 0.6016939010902186968, -0.2006076245670841973],
[-0.9548445695049905257, -0.01535221630358679992, -0.2967088767485692724, -0.1520286928361865575],
[ 0.2405058011277342034, -0.6262816201793489634, -0.7415715015084093364, 0.7943007246282384193],
[ 0, 0, 0, 1]
])
plot_pose(pose1, ax, label='p1', axis_length=0.05)
plot_pose(pose2, ax, label='p2', axis_length=0.05)
plot_pose(camera_pose, ax, label='camera', axis_length=0.1, axis_width=1)
# Plot world frame for reference
plot_pose(np.eye(4), ax, axis_length=0.1, axis_width=1)
# Set the view angle
# 'elev' sets the elevation (looking down slightly)
# 'azim' sets the azimuth (viewing from a specific horizontal angle)
# ax.view_init(elev=0, azim=0)
# z
# |
# ⦿ ⎯ y
# x
# ax.view_init(elev=0, azim=-90)
# z
# |
# ⓧ ⎯ x
# y
# ax.view_init(elev=0, azim=-90)
# set_camera_from_pose_1(ax, camera_pose)
# set_camera_from_pose_2(ax, camera_pose)
set_camera_from_pose_2(ax, pose1, np.array([0, 0, 0]))
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z')
plt.tight_layout()
plt.show()
</code></pre>
<p>Currently the view is as follows. I'm trying to place the camera at the <code>camera</code>-pose.</p>
<p><a href="https://i.sstatic.net/WiMxVAkw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WiMxVAkw.png" alt="view" /></a></p>
|
<python><matplotlib><matplotlib-3d>
|
2025-10-05 21:12:03
| 0
| 563
|
dibyendu
|
79,783,199
| 216,652
|
Weird errors while trying to connect to Mega.NZ from Python via https://pypi.org/project/mega.py/
|
<p>I'm trying to connect to Mega.NZ via module <a href="https://pypi.org/project/mega.py/" rel="nofollow noreferrer">https://pypi.org/project/mega.py/</a> with this code:</p>
<pre><code>#!/usr/bin/env python3
from mega import Mega
mega = Mega()
m = mega.login("my mail", "my pwd")
details = m.get_user()
print(details)
mega.logout()
</code></pre>
<p>But when I run it, I get the following errors:</p>
<pre><code>C:\tmp>python m1.py
Traceback (most recent call last):
File "C:\tmp\m1.py", line 10, in <module>
from mega import Mega
File "C:\Python311\Lib\site-packages\mega\__init__.py", line 1, in <module>
from .mega import Mega # noqa
^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\mega\mega.py", line 18, in <module>
from tenacity import retry, wait_exponential, retry_if_exception_type
File "C:\Python311\Lib\site-packages\tenacity\__init__.py", line 451, in <module>
from tenacity._asyncio import AsyncRetrying
File "C:\Python311\Lib\site-packages\tenacity\_asyncio.py", line 33, in <module>
class AsyncRetrying(BaseRetrying):
File "C:\Python311\Lib\site-packages\tenacity\_asyncio.py", line 41, in AsyncRetrying
@asyncio.coroutine
^^^^^^^^^^^^^^^^^
AttributeError: module 'asyncio' has no attribute 'coroutine'. Did you mean: 'coroutines'?
</code></pre>
<p>What should I do to have these errors fixed?</p>
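<p>For reference, the traceback points at the <code>tenacity</code> dependency (it still uses <code>asyncio.coroutine</code>, which was removed in Python 3.11), so the installed tenacity version is probably relevant. A quick way to check it without importing the broken module:</p>
<pre><code>from importlib.metadata import version
print(version("tenacity"))
</code></pre>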
|
<python><python-asyncio>
|
2025-10-05 20:43:17
| 1
| 579
|
user216652
|
79,782,850
| 3,977,699
|
How to install Miniconda or switch Python version (3.12 → 3.11) in Modal Notebooks?
|
<p>I’m trying to run <code>Chatterbox TTS</code> on <code>Modal Notebooks</code>.</p>
<p>Modal by default uses <code>Python 3.12</code>, but chatterbox requires dependencies (like <code>numpy==1.25.2</code>) that fail to build because <code>distutils</code> was removed in Python 3.12.
So I have two possible solutions in mind:</p>
<ol>
<li>Create a conda environment using Miniconda, then install Chatterbox inside it.</li>
<li>Change the default Python 3.12 → Python 3.11 and install Chatterbox there.</li>
</ol>
<p>However, I’m not able to do either.</p>
<p>I’ve tried:</p>
<pre class="lang-bash prettyprint-override"><code>apt install python3.11 python3.11-distutils
!curl -sS https://bootstrap.pypa.io/get-pip.py | python3.11
</code></pre>
<p>but I get this error:</p>
<pre><code>error: externally-managed-environment
× This environment is externally managed
╰─> To install Python packages system-wide, try apt install python3-xyz...
</code></pre>
<p>I also tried:</p>
<pre class="lang-bash prettyprint-override"><code>sudo apt install python3.10-pip
</code></pre>
<p>and got:</p>
<pre class="lang-bash prettyprint-override"><code>/usr/bin/sh: 1: sudo: not found
</code></pre>
<p>Then I attempted to install Miniconda manually:</p>
<pre class="lang-bash prettyprint-override"><code>!wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
!bash Miniconda3-latest-Linux-x86_64.sh -b -p ~/miniconda
!source ~/miniconda/bin/activate
</code></pre>
<p>but conda isn’t found afterwards — likely because Modal doesn’t persist or expose PATHs the same way as local environments.</p>
<p>Question:<br />
👉 How can I either:<br />
install Miniconda (or create a conda environment) inside Modal Notebooks,<br />
or<br />
switch the default Python version to 3.11 to make chatterbox work?<br />
Any working solution, official reference, or confirmation about Modal’s environment limitations would be very helpful.</p>
|
<python><conda><virtual-environment><python-3.11><modal-com>
|
2025-10-05 07:39:00
| 0
| 1,691
|
Zulqarnain Jalil
|
79,782,403
| 2,659,307
|
Why does this Qt program that creates multiple QApplication objects crash unless I reset a dead local variable?
|
<h2>Program</h2>
<p>If the following program is run on Windows with a single command-line
argument, it will crash:</p>
<pre class="lang-python prettyprint-override"><code># threading-crash.py
"""Reproduce a crash involving Qt and threading"""
from PyQt5 import QtCore
import sys
from threading import Thread
from typing import Optional
class WorkerManager(QtCore.QObject):
# Signal emitted when thread is finished.
worker_finished = QtCore.pyqtSignal()
def start_worker(self) -> None:
def worker() -> None:
# Printing here is necessary for the crash to happen *reliably*,
# though it still happens without it (just less often).
print("Emitting worker_finished signal")
self.worker_finished.emit()
t = Thread(target=worker)
t.start()
def run_test() -> None:
# When using `mypy`, I cannot assign `None` to `app` at the end unless
# the type is declared to be optional here.
app: Optional[QtCore.QCoreApplication] = QtCore.QCoreApplication(sys.argv)
assert(app) # Pacify mypy.
mgr = WorkerManager()
def finished() -> None:
# Terminate the `exec_` call below.
assert(app) # Pacify mypy.
app.exit(0)
# Make a queued connection since this is a cross-thread signal. (This
# is not necessary to reproduce the crash; auto does the same thing.)
mgr.worker_finished.connect(
finished, QtCore.Qt.QueuedConnection) # type: ignore
# Start the worker thread, which will signal `finished`.
mgr.start_worker()
# Wait for the signal to be received.
app.exec_()
if len(sys.argv) == 1:
# This fixes the crash!
app = None
def main() -> None:
for i in range(10):
print(f"{i}: run_test")
run_test() # Crashes on the second call.
if __name__ == "__main__":
main()
# EOF
</code></pre>
<h2>Demonstration</h2>
<p>On my system (and with the <code>print</code> call in <code>worker</code>) this program
crashes or hangs 100% of the time in the second <code>run_test</code> call.</p>
<p>Example run:</p>
<pre class="lang-none prettyprint-override"><code>$ python threading-crash.py CRASH
0: run_test
Emitting worker_finished signal
1: run_test
Emitting worker_finished signal
Segmentation fault
Exit 139
</code></pre>
<p>The exact behavior varies unpredictably; another example:</p>
<pre class="lang-none prettyprint-override"><code>$ python threading-crash.py CRASH
0: run_test
Emitting worker_finished signal
1: run_test
Emitting worker_finished signal
Exception in thread Thread-2 (worker):
Traceback (most recent call last):
File "D:\opt\Python311\Lib\threading.py", line 1038, in _bootstrap_inner
Exit 127
</code></pre>
<p>Other possibilities include popping up an error dialog box ("The
instruction at (hex) referenced memory at (hex)."), or just hanging
completely.</p>
<p>In contrast, when run without arguments, thus activating the
<code>app = None</code> line, it runs fine (even with a large iteration count like
1000):</p>
<pre class="lang-none prettyprint-override"><code>$ python threading-crash.py
0: run_test
Emitting worker_finished signal
1: run_test
Emitting worker_finished signal
[...]
9: run_test
Emitting worker_finished signal
</code></pre>
<h2>Other variations</h2>
<p>Removing the <code>print</code> in <code>start_worker</code> makes the crash happen less
frequently, but does not solve it.</p>
<p>Joining the worker thread at the end of <code>start_worker</code> (so there is no
concurrency) removes the crash.</p>
<p>Joining the worker <em>after</em> <code>app.exec_()</code> does not help; it still crashes. Calling <code>time.sleep(1)</code> there (with or without the join) also does not help. This means the crash happens even though there is <strong>only one thread running at the time</strong>.</p>
<p>Disconnecting the <code>worker_finished</code> signal after <code>app.exec_()</code> does not help.</p>
<p>Adding a call to <code>gc.collect()</code> at the top of <code>run_test</code> has no effect.</p>
<p>Using <code>QtCore.QThread</code> instead of <code>threading.Thread</code> also has no effect on the crash.</p>
<h2>Question</h2>
<p>Why does this program crash? In particular:</p>
<ul>
<li><p>Why does it <em>not</em> crash when I reset <code>app</code> to <code>None</code>? Shouldn't that
(or something equivalent) automatically happen when <code>run_test</code>
returns?</p>
</li>
<li><p>Is this a bug in my program, or a bug in Python or Qt?</p>
</li>
</ul>
<h2>Why am I making multiple <code>QCoreApplications</code>?</h2>
<p>This example is reduced from a unit test suite. In that suite, each
test is meant to be independent of any other, so those tests that need
it create their own <code>QCoreApplication</code> object. The
<a href="https://doc.qt.io/qt-6/qcoreapplication.html" rel="nofollow noreferrer">documentation</a> does not
appear to prohibit this.</p>
<h2>Versions, etc.</h2>
<pre class="lang-none prettyprint-override"><code>$ python -V -V
Python 3.11.5 (tags/v3.11.5:cce6ba9, Aug 24 2023, 14:38:34) [MSC v.1936 64 bit (AMD64)]
$ python -m pip list | grep -i qt
PyQt5 5.15.11
PyQt5-Qt5 5.15.2
PyQt5_sip 12.17.0
PyQt5-stubs 5.15.6.0
</code></pre>
<p>I'm running this on Windows 10 Home. The above examples use a Cygwin
shell, but the same thing happens under <code>cmd.exe</code>. This is all using
the native Windows port of Python.</p>
<hr />
<h2>Further simplified</h2>
<p>In comments, @ekhumoro suggested replacing the thread with a timer, and
to my surprise, the crash still happens! (I was evidently misled by the
highly non-deterministic behavior, not all of which I've shared.) Here
is a more minimal reproducer (with typing annotations also removed):</p>
<pre class="lang-python prettyprint-override"><code># threading-crash.py
"""Reproduce a crash involving Qt and (not!) threading"""
from PyQt5 import QtCore
import sys
class WorkerManager(QtCore.QObject):
# Signal emitted... never, now.
the_signal = QtCore.pyqtSignal()
def run_test() -> None:
app = QtCore.QCoreApplication(sys.argv)
mgr = WorkerManager()
def finished() -> None:
# This call is required since it keeps `app` alive.
app.exit(0)
# Connect the signal (which is never emitted) to a local lambda.
mgr.the_signal.connect(finished)
# Start and stop the event loop.
QtCore.QTimer.singleShot(100, app.quit)
app.exec_()
if len(sys.argv) == 1:
# This fixes the crash!
app = None # type: ignore
def main() -> None:
for i in range(4):
print(f"{i}: run_test")
run_test() # Crashes on the second call.
if __name__ == "__main__":
main()
# EOF
</code></pre>
<p>Now, the key element seems to be that we have a signal connected to a
local lambda that holds a reference to the <code>QCoreApplication</code>.</p>
<p>If the signal is disconnected before <code>exec_()</code> (i.e., right after it was connected), then no crash occurs. (Of course, that is not a solution to the original problem, since in the original program, the point of the signal was to cause <code>exec_()</code> to return.)</p>
<p>If the signal is disconnected <em>after</em> <code>exec_()</code>, then the program crashes; the lambda lives on, apparently.</p>
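<p>For reference, these are roughly the two variants described above (a sketch; the lines go inside <code>run_test</code>, and only the placement of the <code>disconnect</code> call differs):</p>
<pre class="lang-python prettyprint-override"><code>    # Variant with no crash: disconnect right after connecting,
    # before the event loop runs.
    mgr.the_signal.connect(finished)
    mgr.the_signal.disconnect(finished)
    QtCore.QTimer.singleShot(100, app.quit)
    app.exec_()

    # Variant that still crashes: disconnect only after exec_() returns.
    mgr.the_signal.connect(finished)
    QtCore.QTimer.singleShot(100, app.quit)
    app.exec_()
    mgr.the_signal.disconnect(finished)
</code></pre>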
|
<python><pyqt5><crash>
|
2025-10-04 09:58:04
| 1
| 13,707
|
Scott McPeak
|
79,782,389
| 607,407
|
Listing root child controls from uiautomation GetRootControl() stuck after the last item for 3+ minutes
|
<p>I am trying to play around with <code>uiautomation</code> on Windows. As a first test, I wanted to list everything in the root to investigate what is worth recursing into.</p>
<p>This is my script:</p>
<pre><code>import uiautomation as uia
import time
class AccessibilityReader:
def __init__(self):
self.root = uia.GetRootControl()
def generate_items(self):
child = self.root.GetFirstChildControl()
while child:
yield child
child = child.GetNextSiblingControl()
def control_type_id_to_name(self, control_type_id: int) -> str:
for name in dir(uia.ControlType):
if not name.startswith("_"):
if getattr(uia.ControlType, name) == control_type_id:
return name
return "Unknown#"+str(control_type_id)
# for item in self.root.GetChildren():
# yield item
if __name__ == "__main__":
reader = AccessibilityReader()
now_time = time.monotonic()
def print_dt(msg: str):
global now_time
new_time = time.monotonic()
dt = new_time - now_time
now_time = new_time
print(f"[+{dt:.3f}s] {msg}")
iter = 0
for item in reader.generate_items():
type_name = reader.control_type_id_to_name(item.ControlType)
print_dt("Child found")
print(f"Name: {item.Name}, ControlType: {type_name}, ClassName: {item.ClassName}, AutomationId: {item.AutomationId}")
print_dt("Child props read")
iter += 1
if iter >= 6:
break
# for prop in item.:
# print(f" {prop}: {getattr(item, prop)}")
print_dt("Done")
</code></pre>
<p>At the last child, <code>child.GetNextSiblingControl()</code> seems to hang for over three minutes, then the loop ends. Here's the full output, with some window titles redacted.</p>
<pre><code>E:\[REDACTED]>python python_scripts\acessibility_reader.py
[+0.016s] Child found
Name: Taskbar, ControlType: PaneControl, ClassName: Shell_TrayWnd, AutomationId:
[+0.000s] Child props read
[+0.000s] Child found
Name: , ControlType: PaneControl, ClassName: Shell_SecondaryTrayWnd, AutomationId:
[+0.000s] Child props read
[+0.000s] Child found
Name: , ControlType: PaneControl, ClassName: Shell_SecondaryTrayWnd, AutomationId:
[+0.000s] Child props read
[+0.000s] Child found
Name: Command Prompt - python python_scripts\acessibility_reader.py, ControlType: WindowControl, ClassName: CASCADIA_HOSTING_WINDOW_CLASS, AutomationId:
[+0.000s] Child props read
[+0.000s] Child found
Name: acessibility_reader.py - [REDACTED] - Visual Studio Code, ControlType: PaneControl, ClassName: Chrome_WidgetWin_1, AutomationId:
[+0.000s] Child props read
[+0.000s] Child found
Name: monotonic time python - Google Search — Firefox Developer Edition, ControlType: WindowControl, ClassName: MozillaWindowClass, AutomationId:
[+0.000s] Child props read
[+0.000s] Child found
Name: Signal, ControlType: PaneControl, ClassName: Chrome_WidgetWin_1, AutomationId:
[+0.000s] Child props read
[+0.000s] Child found
Name: *[Untitled]-1.0 (RGB color 8-bit gamma integer, GIMP built-in sRGB, 2 layers) 290x160 – GIMP, ControlType: WindowControl, ClassName: gdkWindowToplevel, AutomationId:
[+0.000s] Child props read
[+0.000s] Child found
Name: [REDACTED] - Discord, ControlType: PaneControl, ClassName: Chrome_WidgetWin_1, AutomationId:
[+0.000s] Child props read
[+0.000s] Child found
Name: E:\[REDACTED] - File Explorer, ControlType: WindowControl, ClassName: CabinetWClass, AutomationId:
[+0.016s] Child props read
[+0.000s] Child found
Name: E:\[REDACTED] - File Explorer, ControlType: WindowControl, ClassName: CabinetWClass, AutomationId:
[+0.000s] Child props read
[+0.000s] Child found
Name: Steam, ControlType: WindowControl, ClassName: SDL_app, AutomationId:
[+0.000s] Child props read
[+0.000s] Child found
Name: [REDACTED], ControlType: WindowControl, ClassName: SDL_app, AutomationId:
[+0.000s] Child props read
[+0.000s] Child found
Name: Windows Security, ControlType: WindowControl, ClassName: ApplicationFrameWindow, AutomationId:
[+0.000s] Child props read
[+0.000s] Child found
Name: Settings, ControlType: WindowControl, ClassName: ApplicationFrameWindow, AutomationId:
[+0.000s] Child props read
[+0.000s] Child found
Name: , ControlType: PaneControl, ClassName: WorkerW, AutomationId:
[+0.000s] Child props read
[+0.000s] Child found
Name: Program Manager, ControlType: PaneControl, ClassName: Progman, AutomationId:
[+0.000s] Child props read
[+210.140s] Done
</code></pre>
<p>You can see how listing takes basically no time, then it freezes. I think that in the background it is filtering some bigger list, or is blocked by IO or a thread lock. I see no CPU usage on the Python process when it's stuck.</p>
<p>How can I avoid this? I was originally calling <code>self.root.GetChildren()</code>, but that does the same as my Python while loop and aggregates the results in a list.</p>
<p>Version of the package is <code>VERSION = "2.0.29"</code></p>
|
<python><ui-automation><microsoft-ui-automation>
|
2025-10-04 09:17:42
| 1
| 53,877
|
Tomáš Zato
|
79,782,339
| 703,421
|
pyexiv2 blocks on some images
|
<p>I use pyexiv2 (2.15.4, Python 3.8, Windows 8.0) to read EXIF and IPTC from my JPEGs.
On very few images (about 1 in 5000), it blocks and hangs Python.
My code is:</p>
<pre><code>from pyexiv2 import Image
ima = Image(path_ima)
</code></pre>
<ul>
<li>There are no accented characters in the IPTC or EXIF of path_ima.</li>
<li>The image size is 800x600. The image is correctly read by Irfanview, EXIF and IPTC display correctly.</li>
<li>There is no runtime exception I could catch; it just blocks and I have to stop Python.</li>
</ul>
<p>I tried to install py3exiv2, but unsuccessfully (too difficult: Microsoft Visual C++ is needed to build the wheel). How can I get pyexiv2, which suits my needs for reading both EXIF and IPTC, to work reliably?</p>
|
<python><freeze><pyexiv2>
|
2025-10-04 06:46:52
| 0
| 2,279
|
Eric H.
|
79,782,328
| 395,857
|
How can I use web search with GPT on Azure using Python?
|
<p>I want to use web search when calling GPT on Azure using Python.</p>
<p>I can call GPT on Azure using Python as follows:</p>
<pre><code>import os
from openai import AzureOpenAI
endpoint = "https://somewhere.openai.azure.com/"
model_name = "gpt5"
deployment = "gpt5"
subscription_key = ""
api_version = "2024-12-01-preview"
client = AzureOpenAI(
api_version=api_version,
azure_endpoint=endpoint,
api_key=subscription_key,
)
response = client.chat.completions.create(
messages=[
{
"role": "system",
"content": "You are a funny assistant.",
},
{
"role": "user",
"content": "Tell me a joke about birds",
}
],
max_completion_tokens=16384,
model=deployment
)
print(response.choices[0].message.content)
</code></pre>
<p>How do I add web search? Just like ChatGPT can do:</p>
<p><a href="https://i.sstatic.net/TMVwDUHJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TMVwDUHJ.png" alt="enter image description here" /></a></p>
|
<python><azure><azure-cognitive-services><web-search><gpt-5>
|
2025-10-04 05:54:55
| 1
| 84,585
|
Franck Dernoncourt
|
79,782,321
| 6,329,284
|
Multi-file and multi-threaded copy in python affects network speed
|
<p>I'm working on a small program with three threads, each with a task to run: copy a file from A to B using a bash script. One thread copies a 100GB file; the other two each copy a 10GB file.</p>
<p>The 100GB copy starts first; after a delay I start the copy of the first 10GB file, and when that is done the last 10GB copy starts.
From the network speed I see that the copy of the first file starts at roughly 120MB/s, then the 10GB file starts and we see some noise on the line but no real drop in transfer speed.
When the 10GB files are finished, the 100GB file continues at a significantly lower rate:</p>
<p><a href="https://i.sstatic.net/E4TGET0Z.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/E4TGET0Z.png" alt="enter image description here" /></a></p>
<pre><code>import subprocess
import threading
import time
def run(arguments):
process = subprocess.Popen(
arguments,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
text=True, # For string output instead of bytes
)
# Print each line as it comes
for line in process.stdout:
        print(line, end="")  # `line` already includes its own newline
print(f"Done for thread {threading.current_thread()}")
def create_threads(experiment_paths, experiment_name):
machine_name = "test"
threads = []
for i, path_name in enumerate(experiment_paths):
cli_args = [
"bash",
mount_script,
f"--folder={experiment_name}",
f"--thread={i}",
f"--file-to-copy={path_name}",
f"--machine-name={machine_name}",
]
new_thread = threading.Thread(target=run, daemon=False, args=(cli_args,))
threads.append(new_thread)
return threads
experiment_name = 'test'
experiment_paths = ['/my/path/to/file/100gb', '/my/path/to/file/10gb', '/my/path/to/file/10gb']
threads = create_threads(experiment_paths, experiment_name=experiment_name)
t0 = time.time()
for t in threads:
print(f"Starting thread {t.name}")
t.start()
time.sleep(80)
for t in threads:
print(f"Joining thread {t.name}")
t.join()
</code></pre>
<p>How can I ensure that the copy speed of the 100GB file resumes at maximum speed?
I'm working on a Linux system, by the way, copying to a mounted CIFS (Samba) share.</p>
<p>EDIT: when only transferring the 100GB file, it goes at full speed the whole time. The idea of the 80-second delay was to see if the 100GB transfer at least starts at full speed.</p>
|
<python><performance>
|
2025-10-04 05:36:48
| 1
| 1,340
|
zwep
|
79,782,313
| 8,384,910
|
Display all compatible versions of a package
|
<p>The <code>tensorflow-io</code> package drops wheels for Windows after <code>0.31.0</code>, but only newer versions have wheels for Apple Silicon.</p>
<p>I am using features that still work across versions, so I want to pin the package to different versions for different platforms/architectures.</p>
<p>For example, in my <code>pyproject.toml</code> I do:</p>
<pre class="lang-ini prettyprint-override"><code>"tensorflow-io==0.31.0; sys_platform == 'win32'"
</code></pre>
<p>Ideally, I would not specify a range and hope that one of them is compatible.</p>
<p>Generally speaking, how do I ask uv to list all compatible versions of a package given specific os/arch constraints?</p>
|
<python><uv>
|
2025-10-04 05:20:31
| 0
| 9,414
|
Richie Bendall
|
79,782,307
| 8,384,910
|
What does the uvw command do?
|
<p>When I install <code>uv</code> through winget, the <code>uv</code>, <code>uvx</code> and <code>uvw</code> aliases get added.</p>
<p>I know that <code>uvx</code> is an alias for <code>uv tool run</code>, but what is <code>uvw</code> an alias for?</p>
|
<python><uv>
|
2025-10-04 04:48:22
| 1
| 9,414
|
Richie Bendall
|
79,782,279
| 3,151,415
|
Not able to call tool in Agno Agent run mode
|
<p>I am trying the example mentioned in the <a href="https://docs.agno.com/introduction/quickstart" rel="nofollow noreferrer">Agno docs</a>. I am running it normally instead of in AgentOS mode. In AgentOS mode it is able to call the MCP server and give the right answer.</p>
<pre><code>from agno.agent import Agent
from agno.db.sqlite import SqliteDb
from agno.models.anthropic import Claude
from agno.models.openai import OpenAIChat
from agno.os import AgentOS
from agno.tools.mcp import MCPTools
from agno.agent import RunOutput
from agno.utils.pprint import pprint_run_response
# Create the Agent
agno_agent = Agent(
name="Agno Agent",
model=OpenAIChat(id="gpt-4o"),
# Add a database to the Agent
db=SqliteDb(db_file="agno.db"),
# Add the Agno MCP server to the Agent
tools=[MCPTools(transport="streamable-http", url="https://docs.agno.com/mcp")],
# Add the previous session history to the context
add_history_to_context=True,
markdown=True,
debug_mode=True
)
# agent_os = AgentOS(agents=[agno_agent])
# app = agent_os.get_app()
response: RunOutput = agno_agent.run("what is agno?")
pprint_run_response(response, markdown=True)
</code></pre>
<p>Response:</p>
<pre><code>(agno) ➜ agno git:(main) ✗ python agno_agent.py
DEBUG ******************************************************************************************************* Agent ID: agno-agent ******************************************************************************************************
DEBUG Reading AgentSession: a21f0214-e9d8-4c4e-9045-e2c938b8ac8e
DEBUG Loaded existing table agno_sessions
DEBUG Creating new AgentSession: a21f0214-e9d8-4c4e-9045-e2c938b8ac8e
DEBUG Processing tools for model
DEBUG ************************************************************************************** Agent Run Start: 284f1d88-0b3c-4d08-ae70-e8f9ba039599 **************************************************************************************
DEBUG ------------------------------------------------------------------------------------------------------ OpenAI Response Start ------------------------------------------------------------------------------------------------------
DEBUG ---------------------------------------------------------------------------------------------------------- Model: gpt-4o ----------------------------------------------------------------------------------------------------------
DEBUG ============================================================================================================== system =============================================================================================================
DEBUG <additional_information>
Use markdown to format your answers.
</additional_information>
DEBUG =============================================================================================================== user ==============================================================================================================
DEBUG what is agno?
DEBUG ============================================================================================================ assistant ============================================================================================================
DEBUG “AgNO” does not appear to be a commonly recognized term or acronym. However, in chemistry, “AgNO” could be a typographical error or shorthand for silver nitrate, which is correctly represented as “AgNO₃”. Silver nitrate is a chemical
compound with various applications, including in photography, medicine, and as a reagent in chemical reactions.
If you meant something else or if "AgNO" is a specific term in a niche field or a newly emerging concept, please provide more context or check for any potential typos.
DEBUG ************************************************************************************************************ METRICS ************************************************************************************************************
DEBUG * Tokens: input=32, output=109, total=141
DEBUG * Duration: 3.2007s
DEBUG * Tokens per second: 34.0545 tokens/s
DEBUG ************************************************************************************************************ METRICS ************************************************************************************************************
DEBUG ------------------------------------------------------------------------------------------------------- OpenAI Response End -------------------------------------------------------------------------------------------------------
DEBUG Added RunOutput to Agent Session
DEBUG Loaded existing table agno_sessions
DEBUG Created or updated AgentSession record: a21f0214-e9d8-4c4e-9045-e2c938b8ac8e
DEBUG *************************************************************************************** Agent Run End: 284f1d88-0b3c-4d08-ae70-e8f9ba039599 ***************************************************************************************
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ “AgNO” does not appear to be a commonly recognized term or acronym. However, in chemistry, “AgNO” could be a typographical error or shorthand for silver nitrate, which is correctly represented as “AgNO₃”. Silver nitrate is a chemical compound │
│ with various applications, including in photography, medicine, and as a reagent in chemical reactions. │
│ │
│ If you meant something else or if “AgNO” is a specific term in a niche field or a newly emerging concept, please provide more context or check for any potential typos.
</code></pre>
<p>I tried running it in Run mode instead of AgentOS mode. I was expecting it to call the MCP server tool and give the right answer. I did multiple runs, but it's not calling the MCP server; in AgentOS mode, however, this works.</p>
|
<python><agno>
|
2025-10-04 03:20:10
| 1
| 6,248
|
garg10may
|
79,782,221
| 4,887,159
|
Error while deploying, but not in local: "crewai Failed to upsert documents: "Expected IDs to be unique, found 28 Duplicate IDs"
|
<p>When I initialize a Crew in Azure, I get an error:</p>
<blockquote>
<p>crewai Failed to upsert documents: "Expected IDs to be unique, found 28 Duplicate IDs"</p>
</blockquote>
<p>followed by lots of uuids.</p>
<pre><code>from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai import LLM
from datetime import datetime
from typing import List
# STANDARD TOOLS
from crewai.knowledge.source.text_file_knowledge_source import TextFileKnowledgeSource
from crewai.knowledge.source.pdf_knowledge_source import PDFKnowledgeSource
from crewai.knowledge.source.csv_knowledge_source import CSVKnowledgeSource
from crewai_tools import (
    VisionTool,
    ScrapeWebsiteTool,
    CodeInterpreterTool
)
# INITIALIZE STANDARD TOOLS
vision_tool = VisionTool()
scrape_tool = ScrapeWebsiteTool()
code_interpreter = CodeInterpreterTool()
sha_projects = TextFileKnowledgeSource(
file_paths=["abc.md"]
)
pdf_source = PDFKnowledgeSource(
file_paths=["def.pdf", "ghj.pdf"]
)
csv_source = CSVKnowledgeSource(
file_paths=["tyu.csv", "iop.csv"]
)
llm = LLM(
model="anthropic/claude-sonnet-4-20250514",
temperature=0,
max_tokens=64000
)
@CrewBase
class MultiAgentSystem():
"""MultiAgentSystem crew"""
agents: List[BaseAgent]
tasks: List[Task]
agents_config = 'config/agents.yaml'
tasks_config = 'config/tasks.yaml'
@agent
def financial_analyst(self) -> Agent:
return Agent(
config=self.agents_config['financial_analyst'],
tools=[
vision_tool,
code_interpreter
],
verbose=True,
reasoning=True,
llm=llm,
allow_delegation=True,
)
@agent
def scientist(self) -> Agent:
return Agent(
config=self.agents_config['scientist'],
tools=[
vision_tool,
code_interpreter
],
verbose=True,
reasoning=True,
llm=llm,
allow_delegation=True,
)
@agent
def orchestrator(self) -> Agent:
return Agent(
config=self.agents_config['orchestrator'],
allow_delegation=True,
verbose=True,
reasoning=True,
llm=llm
)
@task
def respond_to_user_task(self) -> Task:
return Task(
config=self.tasks_config['respond_to_user_task'],
)
@crew
def crew(self) -> Crew:
"""Creates the MultiAgentSystem crew"""
return Crew(
agents=[
self.scientist(),
self.financial_analyst(),
],
tasks=[
self.respond_to_user_task()
],
process=Process.hierarchical,
verbose=True,
manager_agent=self.orchestrator(),
knowledge_sources=[sha_projects]
)
</code></pre>
<p>Apologies in advance, the error log is not super easy to read.</p>
<p>I can see the error has something to do with the knowledge sources I've added to the agents, but I never got the error in my local system.</p>
<pre><code>2025-10-03T21:38:58.1764731Z [2025-10-03 21:38:58][0m[93m[ERROR]: [0m[91mFailed to upsert documents: Expected IDs to be unique, found 28 duplicated IDs: 1154de1ae6dc18a0fd130ee716cb4fd5b359e3fe5e214501bb4ac5101d9f2868, a0f181956687ba401809f0f7f58ce36d3411cd729b74ebcfded29dd5155265bd, 59938f840b1dc9e941b165245506c65a8c886ad5e852f2de8d98782ab64ad64c, 0062f673fa7378161e8ce629f64a73a7dc9d9ad1b32b81f6ddcc95a25c38e83b, 4b913ac0c77a86091d7488a63be07a3656879021cda07b02601fc6e7c6cad04c, ..., 14bff2283dfa8d887f9a8d690d4c2483dc01118a00ff1dfc98f42ca16936227c, 7c4419ce2bb1858315175cfbd05dc91dcb299237680257a74c57554802357787, 3beb61dfab85dbc24f38db85cea8c9952e3027642087afb8a415f6b28ca2853b, 592c9a15afeaa64c800cc7cdae5815ea1cf215a7a3c6692440cbd3234ac2819d, b4a91a1835b81847e9f6220df601a82389a2fe4b7f03ad33e5989876c15554a0 in upsert.[0m
2025-10-03T21:38:58.1827504Z ╭──────────────────────────────── Crew Failure ────────────────────────────────╮
2025-10-03T21:38:58.1827760Z │ │
2025-10-03T21:38:58.1827790Z │ Crew Execution Failed │
2025-10-03T21:38:58.1827817Z │ Name: crew │
2025-10-03T21:38:58.1827880Z │ ID: 6e506409-9531-4b33-b1fa-5348b61b4a2c │
2025-10-03T21:38:58.1827907Z │ Tool Args: │
2025-10-03T21:38:58.1827932Z │ Final Output: │
2025-10-03T21:38:58.1827977Z │ │
2025-10-03T21:38:58.1828003Z │ │
2025-10-03T21:38:58.1828040Z ╰──────────────────────────────────────────────────────────────────────────────╯
2025-10-03T21:38:58.1831651Z
2025-10-03T21:38:58.2047109Z
2025-10-03T21:38:58.2047431Z
2025-10-03T21:38:58.2183873Z ╭────────────────────────────── Execution Traces ──────────────────────────────╮
2025-10-03T21:38:58.2184594Z │ │
2025-10-03T21:38:58.2184633Z │ 🔍 Detailed execution traces are available! │
2025-10-03T21:38:58.2184660Z │ │
2025-10-03T21:38:58.2184686Z │ View insights including: │
2025-10-03T21:38:58.2184713Z │ • Agent decision-making process │
2025-10-03T21:38:58.2184739Z │ • Task execution flow and timing │
2025-10-03T21:38:58.2184765Z │ • Tool usage details │
2025-10-03T21:38:58.2184790Z │ │
2025-10-03T21:38:58.2184827Z ╰──────────────────────────────────────────────────────────────────────────────╯
2025-10-03T21:38:58.3351560Z Would you like to view your execution traces? [y/N] (20s timeout): 2025-10-03 21:38:58,300 | ERROR | asyncio | Task exception was never retrieved
2025-10-03T21:38:58.3352230Z future: <Task finished name='Task-187' coro=<ask.<locals>.runner() done, defined at /app/src/scenario_development_multi_agent_system/server.py:234> exception=DuplicateIDError('Expected IDs to be unique, found 28 duplicated IDs: 1154de1ae6dc18a0fd130ee716cb4fd5b359e3fe5e214501bb4ac5101d9f2868, a0f181956687ba401809f0f7f58ce36d3411cd729b74ebcfded29dd5155265bd, 59938f840b1dc9e941b165245506c65a8c886ad5e852f2de8d98782ab64ad64c, 0062f673fa7378161e8ce629f64a73a7dc9d9ad1b32b81f6ddcc95a25c38e83b, 4b913ac0c77a86091d7488a63be07a3656879021cda07b02601fc6e7c6cad04c, ..., 14bff2283dfa8d887f9a8d690d4c2483dc01118a00ff1dfc98f42ca16936227c, 7c4419ce2bb1858315175cfbd05dc91dcb299237680257a74c57554802357787, 3beb61dfab85dbc24f38db85cea8c9952e3027642087afb8a415f6b28ca2853b, 592c9a15afeaa64c800cc7cdae5815ea1cf215a7a3c6692440cbd3234ac2819d, b4a91a1835b81847e9f6220df601a82389a2fe4b7f03ad33e5989876c15554a0 in upsert.')>
2025-10-03T21:38:58.3352261Z Traceback (most recent call last):
2025-10-03T21:38:58.3352285Z File "/app/src/scenario_development_multi_agent_system/server.py", line 250, in runner
2025-10-03T21:38:58.3352307Z result = await run_pipeline(
2025-10-03T21:38:58.3352327Z ^^^^^^^^^^^^^^^^^^^
2025-10-03T21:38:58.3352353Z File "/app/src/scenario_development_multi_agent_system/crew_adapter.py", line 374, in run_pipeline
2025-10-03T21:38:58.3352375Z result = await execute_query(
2025-10-03T21:38:58.3352395Z ^^^^^^^^^^^^^^^^^^^^
2025-10-03T21:38:58.3352444Z File "/app/src/scenario_development_multi_agent_system/crew_adapter.py", line 480, in execute_query
2025-10-03T21:38:58.3352469Z result = await loop.run_in_executor(None, lambda: crew.kickoff(inputs=kickoff_inputs))
2025-10-03T21:38:58.3352492Z ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-10-03T21:38:58.3352516Z File "/usr/local/lib/python3.11/concurrent/futures/thread.py", line 58, in run
2025-10-03T21:38:58.3352539Z result = self.fn(*self.args, **self.kwargs)
2025-10-03T21:38:58.3352560Z ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-10-03T21:38:58.3352584Z File "/app/src/scenario_development_multi_agent_system/crew_adapter.py", line 480, in <lambda>
2025-10-03T21:38:58.3352607Z result = await loop.run_in_executor(None, lambda: crew.kickoff(inputs=kickoff_inputs))
2025-10-03T21:38:58.3352655Z ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-10-03T21:38:58.3352701Z File "/opt/venv/lib/python3.11/site-packages/crewai/crew.py", line 684, in kickoff
2025-10-03T21:38:58.3352724Z agent.set_knowledge(crew_embedder=self.embedder)
2025-10-03T21:38:58.3352748Z File "/opt/venv/lib/python3.11/site-packages/crewai/agent.py", line 219, in set_knowledge
2025-10-03T21:38:58.3352768Z self.knowledge.add_sources()
2025-10-03T21:38:58.3352792Z File "/opt/venv/lib/python3.11/site-packages/crewai/knowledge/knowledge.py", line 70, in add_sources
2025-10-03T21:38:58.3352812Z raise e
2025-10-03T21:38:58.3352837Z File "/opt/venv/lib/python3.11/site-packages/crewai/knowledge/knowledge.py", line 68, in add_sources
2025-10-03T21:38:58.3352857Z source.add()
2025-10-03T21:38:58.3352882Z File "/opt/venv/lib/python3.11/site-packages/crewai/knowledge/source/pdf_knowledge_source.py", line 45, in add
2025-10-03T21:38:58.3352902Z self._save_documents()
2025-10-03T21:38:58.3352949Z File "/opt/venv/lib/python3.11/site-packages/crewai/knowledge/source/base_file_knowledge_source.py", line 71, in _save_documents
2025-10-03T21:38:58.3352970Z self.storage.save(self.chunks)
2025-10-03T21:38:58.3352995Z File "/opt/venv/lib/python3.11/site-packages/crewai/knowledge/storage/knowledge_storage.py", line 113, in save
2025-10-03T21:38:58.3353015Z client.add_documents(
2025-10-03T21:38:58.3353040Z File "/opt/venv/lib/python3.11/site-packages/crewai/rag/chromadb/client.py", line 330, in add_documents
2025-10-03T21:38:58.3353067Z collection.upsert(
2025-10-03T21:38:58.3353094Z File "/opt/venv/lib/python3.11/site-packages/chromadb/api/models/Collection.py", line 442, in upsert
2025-10-03T21:38:58.3353117Z upsert_request = self._validate_and_prepare_upsert_request(
2025-10-03T21:38:58.3353141Z ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-10-03T21:38:58.3353187Z File "/opt/venv/lib/python3.11/site-packages/chromadb/api/models/CollectionCommon.py", line 95, in wrapper
2025-10-03T21:38:58.3353208Z return func(self, *args, **kwargs)
2025-10-03T21:38:58.3353229Z ^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-10-03T21:38:58.3353256Z File "/opt/venv/lib/python3.11/site-packages/chromadb/api/models/CollectionCommon.py", line 417, in _validate_and_prepare_upsert_request
2025-10-03T21:38:58.3353278Z validate_insert_record_set(record_set=upsert_records)
2025-10-03T21:38:58.3353302Z File "/opt/venv/lib/python3.11/site-packages/chromadb/api/types.py", line 315, in validate_insert_record_set
2025-10-03T21:38:58.3353324Z validate_ids(record_set["ids"])
2025-10-03T21:38:58.3353348Z File "/opt/venv/lib/python3.11/site-packages/chromadb/api/types.py", line 761, in validate_ids
2025-10-03T21:38:58.3353369Z raise errors.DuplicateIDError(message)
</code></pre>
|
<python><large-language-model><agent><crewai>
|
2025-10-03 23:15:59
| 1
| 3,994
|
Ray
|
79,782,149
| 12,016,688
|
Does flask dev server normalize paths?
|
<p>I was playing a CTF which was about path traversal. The server code was as follows:</p>
<pre class="lang-py prettyprint-override"><code>import flask
import os
app = flask.Flask(__name__)
@app.route("/docs/<path:path>", methods=["GET"])
def challenge(path="index.html"):
print(path)
requested_path = app.root_path + "/files/" + path
try:
return open(requested_path).read()
except PermissionError:
flask.abort(403, requested_path)
except FileNotFoundError:
flask.abort(404, f"No {requested_path} from directory {os.getcwd()}")
except Exception as e:
flask.abort(500, requested_path + ":" + str(e))
app.run("localhost", 3000)
</code></pre>
<p>The first and most obvious solution that came to my mind was to request a URL like <code>http://localhost:3000/docs/../../flag</code>. But it failed, and below is the corresponding log:</p>
<pre><code>127.0.0.1 - - [03/Oct/2025 23:59:52] "GET /flag HTTP/1.1" 404 -
</code></pre>
<p>According to the log, my request is normalized/sanitized (not sure of the correct word) and has turned from <code>/docs/../../flag</code> into <code>/flag</code>.</p>
<p>My question is: at which stage has my URL been normalized? I'm not using any web server in the middle. Does Flask's dev server perform any kind of normalization on the URLs?</p>
<p>Note that I'm not seeking the solution to the CTF; I've solved it. My question is only about where my URL is sanitized in the above code.</p>
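<p>For completeness, a minimal sketch of how one could rule out client-side normalization (assuming the server above is running on <code>localhost:3000</code>): <code>http.client</code> sends the request target verbatim, unlike a browser's address bar, so the dev server's log line shows what the server itself did with the path.</p>
<pre class="lang-py prettyprint-override"><code>import http.client

# Send the traversal path exactly as written, with no client-side cleanup
conn = http.client.HTTPConnection("localhost", 3000)
conn.request("GET", "/docs/../../flag")
resp = conn.getresponse()
print(resp.status)
print(resp.read()[:200])
</code></pre>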
|
<python><flask><ctf><path-traversal>
|
2025-10-03 20:40:28
| 1
| 2,470
|
Amir reza Riahi
|
79,781,841
| 2,435,421
|
boto3 S3 SSL validation failed in docker swarm but working locally
|
<p>I'm trying to upload a file to an S3 provider using boto3.
With the exact same environment variables and the same container, it's fully working locally, but when deployed inside my swarm I always get this error:</p>
<pre><code>botocore.exceptions.SSLError: SSL validation failed for https://s3.sbg.io.cloud.ovh.net/MYBUCKET/MY_FILE_TO_UPLOAD
</code></pre>
<p>Here is my code:</p>
<pre><code>s3_client = boto3.client(
"s3",
aws_access_key_id=S3_ACCESS_KEY_ID,
aws_secret_access_key=S3_SECRET_ACCESS_KEY,
region_name=S3_REGION,
endpoint_url=S3_ENDPOINT,
)
def upload_to_s3(local_file, s3_key):
logging.info(f"💾 Uploading backup to {S3_BUCKET}...")
try:
s3_client.upload_file(local_file, S3_BUCKET, s3_key)
except Exception as e:
logging.error(f"💾 Failed to upload to S3: {e}")
exit(1)
logging.info("💾 Upload to S3 complete.")
</code></pre>
<p>I already tried to add use_ssl=False and/or verify=False.
But I still have the same issue.</p>
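<p>For completeness, a minimal sketch of that attempt (the same client as above, just with verification disabled; whether this is advisable is a separate question):</p>
<pre><code>s3_client = boto3.client(
    "s3",
    aws_access_key_id=S3_ACCESS_KEY_ID,
    aws_secret_access_key=S3_SECRET_ACCESS_KEY,
    region_name=S3_REGION,
    endpoint_url=S3_ENDPOINT,
    use_ssl=False,   # also tried with only verify=False
    verify=False,    # skip TLS certificate verification
)
</code></pre>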
<p>Here is my Dockerfile:</p>
<pre><code>FROM python:3.12.6-alpine
# Install gnupg for encryption
RUN apk add --no-cache gnupg
# Install mariadb-client
RUN apk add --no-cache mariadb-client
# Install postgresql17-client To update when 17 is released
RUN echo 'http://dl-cdn.alpinelinux.org/alpine/edge/main' > /etc/apk/repositories
RUN apk update --allow-untrusted
RUN apk upgrade --allow-untrusted
RUN apk add postgresql17-client --allow-untrusted
WORKDIR /code
COPY requirements.txt /code/requirements.txt
RUN python3 -m pip install -r requirements.txt --no-cache-dir
# Install requirements.txt
RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt
COPY src /code/app
ENV PYTHONPATH=/code/app
CMD ["fastapi", "run", "app/main.py", "--proxy-headers", "--port", "80"]
</code></pre>
<p>I also tried to curl the s3 endpoint inside the container, and it was successful.</p>
|
<python><docker><boto3>
|
2025-10-03 13:56:49
| 0
| 1,264
|
Splinteer
|
79,781,778
| 1,147,321
|
Use decorator to remove await prefix
|
<p>I am working with asyncio in an app I am making. However, I find it daunting to prefix my function calls with await.</p>
<p>So I want to encapsulate it in a decorator that adds the await to the function call.</p>
<pre><code>import asyncio

def await_wrap(async_func):
    def wrapper(*args, **kwargs):
        return await async_func(*args, **kwargs)
    return wrapper

@await_wrap
async def my_async_class(num):
    await asyncio.sleep(num)

my_async_class(5)
</code></pre>
<p>Note: the above is pseudocode and has not been tested yet.</p>
<p>Is there any penalty to this other than the slight overhead of the decorator itself? Or is the convenience a good idea, given that there are many async defs in my app?</p>
|
<python><python-asyncio>
|
2025-10-03 12:38:15
| 1
| 2,167
|
Ephreal
|
79,781,489
| 12,030,613
|
Deploying a Python HTTP function to Google Cloud run from source
|
<p>I'm trying to deploy a Python HTTP function to Google Cloud Run from source. However, I'm facing a weird issue, where deploying it from source somehow creates a broken deployment, but when (re-) deploying the already deployed source code through the Cloud console buttons, it starts working. This is what I'm doing:</p>
<p>The app:</p>
<pre class="lang-py prettyprint-override"><code># main.py
import functions_framework
@functions_framework.http
def run_testfunc(request):
return "testfunc completed successfully.", 200
</code></pre>
<p>and</p>
<pre class="lang-yaml prettyprint-override"><code># requirements.txt, with the actual dependencies I have, for what it's worth
functions-framework==3.*
psycopg[binary,pool]==3.2.9
python-dotenv==1.1.1
requests==2.32.5
</code></pre>
<p>Both of these files are in the root as they should.</p>
<p>Now, when I run <code>gcloud run deploy testfunc --source .</code>, I get a successful deployment to the cloud run function (called <code>testfunc</code>). However, when I try to trigger the function (call the function's URL), the request fails and in the logs I see that it</p>
<pre><code>Failed to find attribute 'app' in 'main'.
</code></pre>
<p>Now, if I navigate to <strong>source</strong>-><strong>edit source</strong>-><strong>save and redeploy</strong> (see image below) and just hit save without changing anything, I get a new deployment, in which the endpoint actually works.
<a href="https://i.sstatic.net/mLi1GPtD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mLi1GPtD.png" alt="enter image description here" /></a></p>
<p>I'm pretty sure this is related to the function's entrypoint, and I've read this document: <a href="https://cloud.google.com/docs/buildpacks/python#entrypoint" rel="nofollow noreferrer">https://cloud.google.com/docs/buildpacks/python#entrypoint</a></p>
<p>But I just can't put the pieces together. I've tried the <code>--set-build-env-vars</code> argument on the <code>gcloud run deploy</code>, but I haven't been able to figure out a working value for that.</p>
|
<python><google-cloud-platform><google-cloud-run><google-cloud-deploy>
|
2025-10-03 06:26:35
| 1
| 351
|
unie
|
79,781,331
| 312,140
|
How to plot the smooth graph on matplotlib like MS-Excel?
|
<p>MS-Excel can generate smooth line graphs even with a small number of data points. How can I generate similar graphs using matplotlib? I have tried several approaches, like fitting polynomials of different degrees, but the results aren't as good. I do not want pointy corners; I want them converted into rounded ones.
How does MS-Excel do this? Does it perform a regression on the data?</p>
<p>For example here I have brought the Excel graph and matplotlib graph for some data:</p>
<p>Matplotlib:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import UnivariateSpline
# Sample data (replace with your own x, y data)
x = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
y = np.array([1, 2, 0, 3, 2, 5, 4, 6, 4, 5]) # Noisy or scattered data
# Sort data (required for spline)
sorted_indices = np.argsort(x)
x_sorted = x[sorted_indices]
y_sorted = y[sorted_indices]
# Create spline
spline = UnivariateSpline(x_sorted, y_sorted, s=0.9)  # s is the smoothing factor (larger = smoother; s=0 interpolates all points)
x_smooth = np.linspace(min(x_sorted), max(x_sorted), 100) # More points for smooth curve
y_smooth = spline(x_smooth)
# Plot original and smoothed data
plt.figure(figsize=(8, 6))
plt.plot(x, y, color='red', label='Original Data') # Original points
plt.plot(x_smooth, y_smooth, color='blue', label='Smoothed Spline') # Smooth curve
plt.xlabel('X')
plt.ylabel('Y')
plt.title('Smooth Plot with Spline Interpolation')
plt.legend()
plt.grid(True)
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/kZ6u7rlb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kZ6u7rlb.png" alt="smoothed graph with plt and interpolation" /></a></p>
<p>Excel:</p>
<p><a href="https://i.sstatic.net/H3TqDDvO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/H3TqDDvO.png" alt="smoothed graph with Excel" /></a></p>
|
<python><excel><matplotlib>
|
2025-10-02 22:22:10
| 1
| 3,031
|
Mahdi Amrollahi
|
79,781,320
| 216,652
|
Weird failure in Selenium
|
<p>I'm trying to run this script</p>
<pre><code>#!/usr/bin/env python3
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.chrome.service import Service
from webdriver_manager.chrome import ChromeDriverManager
import time
# Configure Chrome options (optional)
options = webdriver.ChromeOptions()
options.add_argument("--start-maximized") # Open browser maximized
options.add_argument('--no-sandbox')
options.add_argument('--disable-dev-shm-usage')
options.add_argument("--user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:140.0) Gecko/20100101 Firefox/140.0")
options.add_argument("--headless") # Run in background (no GUI)
options.add_experimental_option("detach", True)
drv = webdriver.Chrome(service=Service(ChromeDriverManager().install()),options=options)
</code></pre>
<p>But when I run it on my machine it fails</p>
<pre><code>C:\tmp>python e.py
Traceback (most recent call last):
  File "C:\tmp\e.py", line 22, in <module>
    drv = webdriver.Chrome(service=Service(ChromeDriverManager().install()),options=options)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python311\Lib\site-packages\selenium\webdriver\chrome\webdriver.py", line 45, in __init__
    super().__init__(
  File "C:\Python311\Lib\site-packages\selenium\webdriver\chromium\webdriver.py", line 53, in __init__
    self.service.start()
  File "C:\Python311\Lib\site-packages\selenium\webdriver\common\service.py", line 105, in start
    self._start_process(self._path)
  File "C:\Python311\Lib\site-packages\selenium\webdriver\common\service.py", line 206, in _start_process
    self.process = subprocess.Popen(
                   ^^^^^^^^^^^^^^^^^
  File "C:\Python311\Lib\subprocess.py", line 1022, in __init__
    self._execute_child(args, executable, preexec_fn, close_fds,
  File "C:\Python311\Lib\subprocess.py", line 1491, in _execute_child
    hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
OSError: [WinError 193] %1 is not a valid Win32 application
</code></pre>
<p>What am I supposed to do to fix it? This machine has both Chrome 141.0.7390.54 (Official Build) (64-bit) and Firefox 140.3.0esr (32-bit).</p>
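<p>For diagnosis, it may help to print the path that <code>webdriver_manager</code> resolves before handing it to Selenium (a small sketch using only the calls already shown above); WinError 193 means Windows was asked to execute a file that is not a valid executable, so the resolved path should end in <code>chromedriver.exe</code>:</p>
<pre><code>from webdriver_manager.chrome import ChromeDriverManager

driver_path = ChromeDriverManager().install()
print(driver_path)  # check that this really points at chromedriver.exe
</code></pre>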
|
<python><python-3.x><google-chrome><selenium-webdriver>
|
2025-10-02 21:57:19
| 1
| 579
|
user216652
|
79,781,281
| 6,630,397
|
Unable to load an hdf5 model file in TensorFlow / Keras
|
<p>I was given an HDF5 model file that was built with TensorFlow/Keras. The training data is no longer available.</p>
<p>Note: all Python code snippets shown hereunder are run against Python 3.9.23 inside a Dockerized Debian 13 (trixie) environment:</p>
<pre class="lang-bash prettyprint-override"><code># python3
Python 3.9.23 (main, Sep 30 2025, 00:43:48)
[GCC 14.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>
</code></pre>
<p>Let's get back to the problem: I'm not able to load that model file without errors (ignore the CUDA-related messages; I don't have a GPU on the computer where I'm currently running the code):</p>
<pre class="lang-py prettyprint-override"><code>from tensorflow import keras
2025-10-02 19:55:29.871714: I external/local_xla/xla/tsl/cuda/cudart_stub.cc:31] Could not find cuda drivers on your machine, GPU will not be used.
2025-10-02 19:55:29.872035: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2025-10-02 19:55:29.905906: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2025-10-02 19:55:30.901154: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2025-10-02 19:55:30.901515: I external/local_xla/xla/tsl/cuda/cudart_stub.cc:31] Could not find cuda drivers on your machine, GPU will not be used.
import tensorflow as tf
tf.version.VERSION
'2.20.0'
keras.__version__
'3.10.0'
m = tf.keras.models.load_model('/app/my_model.h5')
2025-10-02 19:55:49.998432: E external/local_xla/xla/stream_executor/cuda/cuda_platform.cc:51] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: UNKNOWN ERROR (303)
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/keras/src/ops/operation.py", line 256, in from_config
return cls(**config)
File "/usr/local/lib/python3.9/site-packages/keras/src/layers/convolutional/conv2d_transpose.py", line 115, in __init__
super().__init__(
File "/usr/local/lib/python3.9/site-packages/keras/src/layers/convolutional/base_conv_transpose.py", line 94, in __init__
super().__init__(
File "/usr/local/lib/python3.9/site-packages/keras/src/layers/layer.py", line 291, in __init__
raise ValueError(
ValueError: Unrecognized keyword arguments passed to Conv2DTranspose: {'groups': 1}
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.9/site-packages/keras/src/saving/saving_api.py", line 196, in load_model
return legacy_h5_format.load_model_from_hdf5(
File "/usr/local/lib/python3.9/site-packages/keras/src/legacy/saving/legacy_h5_format.py", line 133, in load_model_from_hdf5
model = saving_utils.model_from_config(
File "/usr/local/lib/python3.9/site-packages/keras/src/legacy/saving/saving_utils.py", line 88, in model_from_config
return serialization.deserialize_keras_object(
File "/usr/local/lib/python3.9/site-packages/keras/src/legacy/saving/serialization.py", line 495, in deserialize_keras_object
deserialized_obj = cls.from_config(
File "/usr/local/lib/python3.9/site-packages/keras/src/models/model.py", line 651, in from_config
return functional_from_config(
File "/usr/local/lib/python3.9/site-packages/keras/src/models/functional.py", line 560, in functional_from_config
process_layer(layer_data)
File "/usr/local/lib/python3.9/site-packages/keras/src/models/functional.py", line 523, in process_layer
layer = saving_utils.model_from_config(
File "/usr/local/lib/python3.9/site-packages/keras/src/legacy/saving/saving_utils.py", line 88, in model_from_config
return serialization.deserialize_keras_object(
File "/usr/local/lib/python3.9/site-packages/keras/src/legacy/saving/serialization.py", line 504, in deserialize_keras_object
deserialized_obj = cls.from_config(cls_config)
File "/usr/local/lib/python3.9/site-packages/keras/src/ops/operation.py", line 258, in from_config
raise TypeError(
TypeError: Error when deserializing class 'Conv2DTranspose' using config={'name': 'conv2d_transpose', 'trainable': True, 'dtype': 'float32', 'filters': 512, 'kernel_size': [3, 3], 'strides': [2, 2], 'padding': 'same', 'data_format': 'channels_last', 'dilation_rate': [1, 1], 'groups': 1, 'activation': 'linear', 'use_bias': True, 'kernel_initializer': {'class_name': 'GlorotUniform', 'config': {'seed': None}}, 'bias_initializer': {'class_name': 'Zeros', 'config': {}}, 'kernel_regularizer': None, 'bias_regularizer': None, 'activity_regularizer': None, 'kernel_constraint': None, 'bias_constraint': None, 'output_padding': None}.
Exception encountered: Unrecognized keyword arguments passed to Conv2DTranspose: {'groups': 1}
</code></pre>
<p>After spending some time on the web, I was able to figure out that this model file was actually created with Keras 2.6.0:</p>
<pre class="lang-py prettyprint-override"><code>import h5py
def get_keras_version(h5_file_path):
with h5py.File(h5_file_path, 'r') as f:
if 'keras_version' in f.attrs and 'backend' in f.attrs:
keras_version = f.attrs['keras_version']
backend = f.attrs['backend']
print(f"Keras version: {keras_version}")
print(f"Backend: {backend}")
else:
print("Metadata for Keras version or Backend not found in the file.")
get_keras_version('/app/my_model.h5')
Keras version: 2.6.0
Backend: tensorflow
</code></pre>
<p>So I naturally tried to install that version but it shows an error:</p>
<pre class="lang-bash prettyprint-override"><code># pip install keras==2.6.0
Collecting keras==2.6.0
Using cached keras-2.6.0-py2.py3-none-any.whl (1.3 MB)
Installing collected packages: keras
Attempting uninstall: keras
Found existing installation: keras 3.10.0
Uninstalling keras-3.10.0:
Successfully uninstalled keras-3.10.0
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
tensorflow 2.20.0 requires keras>=3.10.0, but you have keras 2.6.0 which is incompatible.
Successfully installed keras-2.6.0
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
</code></pre>
<p>and I cannot load tensorflow properly:</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf
2025-10-02 20:10:37.548923: I external/local_xla/xla/tsl/cuda/cudart_stub.cc:31] Could not find cuda drivers on your machine, GPU will not be used.
2025-10-02 20:10:37.549234: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2025-10-02 20:10:37.583415: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2025-10-02 20:10:38.559104: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2025-10-02 20:10:38.559392: I external/local_xla/xla/tsl/cuda/cudart_stub.cc:31] Could not find cuda drivers on your machine, GPU will not be used.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.9/site-packages/tensorflow/__init__.py", line 468, in <module>
importlib.import_module("keras.src.optimizers")
File "/usr/local/lib/python3.9/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "/usr/local/lib/python3.9/site-packages/keras/__init__.py", line 25, in <module>
from keras import models
File "/usr/local/lib/python3.9/site-packages/keras/models.py", line 20, in <module>
from keras import metrics as metrics_module
File "/usr/local/lib/python3.9/site-packages/keras/metrics.py", line 26, in <module>
from keras import activations
File "/usr/local/lib/python3.9/site-packages/keras/activations.py", line 20, in <module>
from keras.layers import advanced_activations
File "/usr/local/lib/python3.9/site-packages/keras/layers/__init__.py", line 24, in <module>
from keras.engine.input_layer import Input
File "/usr/local/lib/python3.9/site-packages/keras/engine/input_layer.py", line 21, in <module>
from keras.engine import base_layer
File "/usr/local/lib/python3.9/site-packages/keras/engine/base_layer.py", line 40, in <module>
from keras.mixed_precision import loss_scale_optimizer
File "/usr/local/lib/python3.9/site-packages/keras/mixed_precision/loss_scale_optimizer.py", line 18, in <module>
from keras import optimizers
File "/usr/local/lib/python3.9/site-packages/keras/optimizers.py", line 26, in <module>
from keras.optimizer_v2 import adadelta as adadelta_v2
File "/usr/local/lib/python3.9/site-packages/keras/optimizer_v2/adadelta.py", line 22, in <module>
from keras.optimizer_v2 import optimizer_v2
File "/usr/local/lib/python3.9/site-packages/keras/optimizer_v2/optimizer_v2.py", line 95, in <module>
@keras_export("keras.optimizers.Optimizer", metaclass=abc.ABCMeta)
TypeError: __init__() got an unexpected keyword argument 'metaclass'
</code></pre>
<p>Therefore, I decided to also downgrade tensorflow itself to 2.6.0, but here again I ran into errors:</p>
<pre class="lang-bash prettyprint-override"><code># pip install tensorflow==2.6.0
Collecting tensorflow==2.6.0
Using cached tensorflow-2.6.0-cp39-cp39-manylinux2010_x86_64.whl (458.4 MB)
Requirement already satisfied: six~=1.15.0 in /usr/local/lib/python3.9/site-packages (from tensorflow==2.6.0) (1.15.0)
Requirement already satisfied: wrapt~=1.12.1 in /usr/local/lib/python3.9/site-packages (from tensorflow==2.6.0) (1.12.1)
Requirement already satisfied: wheel~=0.35 in /usr/local/lib/python3.9/site-packages (from tensorflow==2.6.0) (0.45.1)
Requirement already satisfied: astunparse~=1.6.3 in /usr/local/lib/python3.9/site-packages (from tensorflow==2.6.0) (1.6.3)
Requirement already satisfied: google-pasta~=0.2 in /usr/local/lib/python3.9/site-packages (from tensorflow==2.6.0) (0.2.0)
Requirement already satisfied: grpcio<2.0,>=1.37.0 in /usr/local/lib/python3.9/site-packages (from tensorflow==2.6.0) (1.74.0)
Requirement already satisfied: keras-preprocessing~=1.1.2 in /usr/local/lib/python3.9/site-packages (from tensorflow==2.6.0) (1.1.2)
Collecting h5py~=3.1.0
Using cached h5py-3.1.0-cp39-cp39-manylinux1_x86_64.whl (4.4 MB)
Requirement already satisfied: opt-einsum~=3.3.0 in /usr/local/lib/python3.9/site-packages (from tensorflow==2.6.0) (3.3.0)
Collecting typing-extensions~=3.7.4
Using cached typing_extensions-3.7.4.3-py3-none-any.whl (22 kB)
Collecting absl-py~=0.10
Using cached absl_py-0.15.0-py3-none-any.whl (132 kB)
Requirement already satisfied: termcolor~=1.1.0 in /usr/local/lib/python3.9/site-packages (from tensorflow==2.6.0) (1.1.0)
Requirement already satisfied: protobuf>=3.9.2 in /usr/local/lib/python3.9/site-packages (from tensorflow==2.6.0) (6.32.1)
Requirement already satisfied: keras~=2.6 in /usr/local/lib/python3.9/site-packages (from tensorflow==2.6.0) (2.6.0)
Requirement already satisfied: tensorboard~=2.6 in /usr/local/lib/python3.9/site-packages (from tensorflow==2.6.0) (2.20.0)
Requirement already satisfied: gast==0.4.0 in /usr/local/lib/python3.9/site-packages (from tensorflow==2.6.0) (0.4.0)
Requirement already satisfied: clang~=5.0 in /usr/local/lib/python3.9/site-packages (from tensorflow==2.6.0) (5.0)
Collecting flatbuffers~=1.12.0
Using cached flatbuffers-1.12-py2.py3-none-any.whl (15 kB)
Requirement already satisfied: tensorflow-estimator~=2.6 in /usr/local/lib/python3.9/site-packages (from tensorflow==2.6.0) (2.15.0)
Collecting numpy~=1.19.2
Using cached numpy-1.19.5-cp39-cp39-manylinux2010_x86_64.whl (14.9 MB)
Requirement already satisfied: setuptools>=41.0.0 in /usr/local/lib/python3.9/site-packages (from tensorboard~=2.6->tensorflow==2.6.0) (80.9.0)
Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.9/site-packages (from tensorboard~=2.6->tensorflow==2.6.0) (3.9)
Requirement already satisfied: werkzeug>=1.0.1 in /usr/local/lib/python3.9/site-packages (from tensorboard~=2.6->tensorflow==2.6.0) (3.1.3)
Requirement already satisfied: pillow in /usr/local/lib/python3.9/site-packages (from tensorboard~=2.6->tensorflow==2.6.0) (11.3.0)
Requirement already satisfied: tensorboard-data-server<0.8.0,>=0.7.0 in /usr/local/lib/python3.9/site-packages (from tensorboard~=2.6->tensorflow==2.6.0) (0.7.2)
Requirement already satisfied: packaging in /usr/local/lib/python3.9/site-packages (from tensorboard~=2.6->tensorflow==2.6.0) (25.0)
Requirement already satisfied: importlib-metadata>=4.4 in /usr/local/lib/python3.9/site-packages (from markdown>=2.6.8->tensorboard~=2.6->tensorflow==2.6.0) (8.7.0)
Requirement already satisfied: MarkupSafe>=2.1.1 in /usr/local/lib/python3.9/site-packages (from werkzeug>=1.0.1->tensorboard~=2.6->tensorflow==2.6.0) (3.0.3)
Requirement already satisfied: zipp>=3.20 in /usr/local/lib/python3.9/site-packages (from importlib-metadata>=4.4->markdown>=2.6.8->tensorboard~=2.6->tensorflow==2.6.0) (3.23.0)
Installing collected packages: typing-extensions, flatbuffers, numpy, absl-py, h5py, tensorflow
Attempting uninstall: typing-extensions
Found existing installation: typing_extensions 4.15.0
Uninstalling typing_extensions-4.15.0:
Successfully uninstalled typing_extensions-4.15.0
Attempting uninstall: flatbuffers
Found existing installation: flatbuffers 25.9.23
Uninstalling flatbuffers-25.9.23:
Successfully uninstalled flatbuffers-25.9.23
Attempting uninstall: numpy
Found existing installation: numpy 2.0.2
Uninstalling numpy-2.0.2:
Successfully uninstalled numpy-2.0.2
Attempting uninstall: absl-py
Found existing installation: absl-py 2.3.1
Uninstalling absl-py-2.3.1:
Successfully uninstalled absl-py-2.3.1
Attempting uninstall: h5py
Found existing installation: h5py 3.14.0
Uninstalling h5py-3.14.0:
Successfully uninstalled h5py-3.14.0
Attempting uninstall: tensorflow
Found existing installation: tensorflow 2.20.0
Uninstalling tensorflow-2.20.0:
Successfully uninstalled tensorflow-2.20.0
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
optree 0.17.0 requires typing-extensions>=4.6.0, but you have typing-extensions 3.7.4.3 which is incompatible.
ml-dtypes 0.5.3 requires numpy>=1.21, but you have numpy 1.19.5 which is incompatible.
Successfully installed absl-py-0.15.0 flatbuffers-1.12 h5py-3.1.0 numpy-1.19.5 tensorflow-2.6.0 typing-extensions-3.7.4.3
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
</code></pre>
<p>Anyway, I gave it a try:</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.9/site-packages/tensorflow/__init__.py", line 41, in <module>
from tensorflow.python.tools import module_util as _module_util
File "/usr/local/lib/python3.9/site-packages/tensorflow/python/__init__.py", line 40, in <module>
from tensorflow.python.eager import context
File "/usr/local/lib/python3.9/site-packages/tensorflow/python/eager/context.py", line 32, in <module>
from tensorflow.core.framework import function_pb2
File "/usr/local/lib/python3.9/site-packages/tensorflow/core/framework/function_pb2.py", line 16, in <module>
from tensorflow.core.framework import attr_value_pb2 as tensorflow_dot_core_dot_framework_dot_attr__value__pb2
File "/usr/local/lib/python3.9/site-packages/tensorflow/core/framework/attr_value_pb2.py", line 16, in <module>
from tensorflow.core.framework import tensor_pb2 as tensorflow_dot_core_dot_framework_dot_tensor__pb2
File "/usr/local/lib/python3.9/site-packages/tensorflow/core/framework/tensor_pb2.py", line 16, in <module>
from tensorflow.core.framework import resource_handle_pb2 as tensorflow_dot_core_dot_framework_dot_resource__handle__pb2
File "/usr/local/lib/python3.9/site-packages/tensorflow/core/framework/resource_handle_pb2.py", line 16, in <module>
from tensorflow.core.framework import tensor_shape_pb2 as tensorflow_dot_core_dot_framework_dot_tensor__shape__pb2
File "/usr/local/lib/python3.9/site-packages/tensorflow/core/framework/tensor_shape_pb2.py", line 36, in <module>
_descriptor.FieldDescriptor(
File "/usr/local/lib/python3.9/site-packages/google/protobuf/descriptor.py", line 675, in __new__
_message.Message._CheckCalledFromGeneratedFile()
TypeError: Descriptors cannot be created directly.
If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
If you cannot immediately regenerate your protos, some other possible workarounds are:
1. Downgrade the protobuf package to 3.20.x or lower.
2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).
More information: https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates
</code></pre>
<p>I had to downgrade protobuf:</p>
<pre class="lang-bash prettyprint-override"><code># pip install --upgrade "protobuf<=3.20.1"
Collecting protobuf<=3.20.1
Using cached protobuf-3.20.1-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.whl (1.0 MB)
Installing collected packages: protobuf
Attempting uninstall: protobuf
Found existing installation: protobuf 6.32.1
Uninstalling protobuf-6.32.1:
Successfully uninstalled protobuf-6.32.1
Successfully installed protobuf-3.20.1
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
</code></pre>
<p>and now that everything seems aligned:</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf
2025-10-02 20:15:32.235078: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2025-10-02 20:15:32.235097: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
tf.version.VERSION
'2.6.0'
from tensorflow import keras
keras.__version__
'2.6.0'
</code></pre>
<p>it fails again:</p>
<pre class="lang-py prettyprint-override"><code>m=keras.models.load_model('/app/my_model.h5')
2025-10-02 20:16:12.234903: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory
2025-10-02 20:16:12.234923: W tensorflow/stream_executor/cuda/cuda_driver.cc:269] failed call to cuInit: UNKNOWN ERROR (303)
2025-10-02 20:16:12.234934: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (58b489e05fed): /proc/driver/nvidia/version does not exist
2025-10-02 20:16:12.235049: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.9/site-packages/keras/saving/save.py", line 200, in load_model
return hdf5_format.load_model_from_hdf5(filepath, custom_objects,
File "/usr/local/lib/python3.9/site-packages/keras/saving/hdf5_format.py", line 198, in load_model_from_hdf5
model.compile(**saving_utils.compile_args_from_training_config(
File "/usr/local/lib/python3.9/site-packages/keras/saving/saving_utils.py", line 208, in compile_args_from_training_config
loss = _deserialize_nested_config(losses.deserialize, loss_config)
File "/usr/local/lib/python3.9/site-packages/keras/saving/saving_utils.py", line 249, in _deserialize_nested_config
return deserialize_fn(config)
File "/usr/local/lib/python3.9/site-packages/keras/losses.py", line 2091, in deserialize
return deserialize_keras_object(
File "/usr/local/lib/python3.9/site-packages/keras/utils/generic_utils.py", line 704, in deserialize_keras_object
raise ValueError(
ValueError: Unknown loss function: loss. Please ensure this object is passed to the `custom_objects` argument. See https://www.tensorflow.org/guide/keras/save_and_serialize#registering_the_custom_object for details.
</code></pre>
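<p>One thing I have not verified: since the failure happens while Keras tries to re-compile the saved training configuration (the unknown <code>loss</code>), maybe loading without compiling would sidestep it, as I only need inference. A minimal sketch of that idea, assuming the custom loss really is not needed at inference time:</p>
<pre class="lang-py prettyprint-override"><code># sketch (unverified): skip re-building the training config (loss/optimizer/metrics)
m = keras.models.load_model('/app/my_model.h5', compile=False)
</code></pre>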
<p>Last but not least, if I do a fresh install of <code>tensorflow==2.6.0</code>, it fails with a <code>dtensor</code>-related error when trying to use <code>tf.keras</code>, even when simply printing its version:</p>
<pre class="lang-py prettyprint-override"><code>from tensorflow import keras
print(f"{keras.__version__=}")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.9/site-packages/tensorflow/python/util/lazy_loader.py", line 62, in __getattr__
module = self._load()
File "/usr/local/lib/python3.9/site-packages/tensorflow/python/util/lazy_loader.py", line 45, in _load
module = importlib.import_module(self.__name__)
File "/usr/local/lib/python3.9/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "<frozen importlib._bootstrap>", line 972, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "<frozen importlib._bootstrap>", line 972, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "<frozen importlib._bootstrap>", line 972, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 850, in exec_module
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
File "/usr/local/lib/python3.9/site-packages/keras/__init__.py", line 3, in <module>
from keras import __internal__
File "/usr/local/lib/python3.9/site-packages/keras/__internal__/__init__.py", line 3, in <module>
from keras.__internal__ import backend
File "/usr/local/lib/python3.9/site-packages/keras/__internal__/backend/__init__.py", line 3, in <module>
from keras.src.backend import _initialize_variables as initialize_variables
File "/usr/local/lib/python3.9/site-packages/keras/src/__init__.py", line 21, in <module>
from keras.src import applications
File "/usr/local/lib/python3.9/site-packages/keras/src/applications/__init__.py", line 18, in <module>
from keras.src.applications.convnext import ConvNeXtBase
File "/usr/local/lib/python3.9/site-packages/keras/src/applications/convnext.py", line 28, in <module>
from keras.src import backend
File "/usr/local/lib/python3.9/site-packages/keras/src/backend.py", line 34, in <module>
from keras.src.dtensor import dtensor_api as dtensor
File "/usr/local/lib/python3.9/site-packages/keras/src/dtensor/__init__.py", line 18, in <module>
from tensorflow.compat.v2.experimental import dtensor as dtensor_api
ImportError: cannot import name 'dtensor' from 'tensorflow.compat.v2.experimental' (/usr/local/lib/python3.9/site-packages/tensorflow/_api/v2/compat/v2/experimental/__init__.py)
</code></pre>
<p>I'm stuck. Can anybody explain how I can load that model file?</p>
<p>I was also hoping - but did not have any luck - to find an "upgrade" script to convert the model file itself to a more up-to-date version of tf/keras.</p>
|
<python><tensorflow><keras><hdf5>
|
2025-10-02 20:35:57
| 0
| 8,371
|
swiss_knight
|
79,781,244
| 8,609,461
|
LangFuse: Using `CallbackHandler` with custom OpenAI Base URL throws SSL Error
|
<p>As I am upgrading my Langfuse Python SDK to the latest version 3.6.1, I am updating the <code>CallbackHandler</code> used to enable tracing in my LangGraph solution. I use my own OpenAI base URL and hence created an <code>httpx</code> client with my certs. However, I received the following error; this was working with the previous callback handler.</p>
<pre><code>requests.exceptions.SSLError: HTTPSConnectionPool(host='xxx.yyy.com', port=443):
Max retries exceeded with url: /<redacted>/langfuse/api/public/otel/v1/traces
(Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1006)')))
</code></pre>
<h1>Latest Version (3.6.1)</h1>
<h2>langfuse_client.py</h2>
<pre><code>import os
import httpx
from langfuse import Langfuse
LANGFUSE_PUBLIC_KEY = os.getenv("LANGFUSE_PUBLIC_KEY")
LANGFUSE_SECRET_KEY = os.getenv("LANGFUSE_SECRET_KEY")
LANGFUSE_HOST = os.getenv("LANGFUSE_HOST")
current_dir = os.path.dirname(os.path.abspath(__file__)) + "\\certs\\"
cacert_path = os.path.join(current_dir, "cacert.pem")
print(f"cacert_path: {cacert_path}")
hosted_apps_api_token = os.getenv("HOSTED_APPS_TOKEN")
client = httpx.Client(
    verify=cacert_path,
    headers={
        "Proxy-Authorization": hosted_apps_api_token
    }
)

langfuse = Langfuse(
    public_key=LANGFUSE_PUBLIC_KEY,
    secret_key=LANGFUSE_SECRET_KEY,
    host=LANGFUSE_HOST,
    httpx_client=client,
)

# Verify connection
if langfuse.auth_check():
    print("Langfuse client is authenticated and ready!")
else:
    print("Authentication failed. Please check your credentials and host.")
</code></pre>
<h2>graph.py</h2>
<pre><code>from typing import Annotated
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage
from typing_extensions import TypedDict
from langgraph.graph import StateGraph
from langgraph.graph.message import add_messages
import os
class State(TypedDict):
    messages: Annotated[list, add_messages]

graph_builder = StateGraph(State)

api_key = os.getenv("MY_OPENAI_API_KEY")
print(f"Api Key -> {api_key}")

print("Initializing LLM...")
llm = ChatOpenAI(
    model="gpt-4o",
    temperature=0.2,
    base_url=os.getenv("MY_OPENAI_API_BASE"),
    api_key=api_key,
)
print("LLM initialized.")

def chatbot(state: State):
    return {"messages": [llm.invoke(state["messages"])]}
print("Building graph...")
graph_builder.add_node("chatbot", chatbot)
graph_builder.set_entry_point("chatbot")
graph_builder.set_finish_point("chatbot")
graph = graph_builder.compile()
print("Graph built.")
</code></pre>
<h2>main.py</h2>
<pre><code>from langchain_core.messages import HumanMessage
from langfuse.langchain import CallbackHandler
from dotenv import load_dotenv
load_dotenv()
from graph import graph
import os
from langfuse_client import langfuse
# Initialize Langfuse CallbackHandler for Langchain (tracing)
LANGFUSE_PUBLIC_KEY = os.getenv("LANGFUSE_PUBLIC_KEY")
langfuse_handler = CallbackHandler(public_key=LANGFUSE_PUBLIC_KEY,)
for s in graph.stream({"messages": [HumanMessage(content="What is Langfuse?")]},
                      config={"callbacks": [langfuse_handler]}):
    print(s)
</code></pre>
<h1>Older Version (2.60.5)</h1>
<p>I was able to create <code>CallbackHandler</code> as below and that worked OK.</p>
<pre><code>_handler = CallbackHandler(
    public_key=LANGFUSE_PUBLIC_KEY,
    secret_key=LANGFUSE_SECRET_KEY,
    host=LANGFUSE_HOST,
    httpx_client=client,
)
</code></pre>
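<p>One workaround I'm considering: since the traceback comes from <code>requests</code> rather than from my <code>httpx</code> client, I could point the requests stack at my CA bundle via the standard <code>REQUESTS_CA_BUNDLE</code> environment variable. A rough, untested sketch - but I would prefer a supported way to pass my client/certs to the new handler:</p>
<pre><code># rough workaround sketch (untested): let requests pick up my CA bundle
import os
os.environ["REQUESTS_CA_BUNDLE"] = cacert_path  # cacert_path as defined in langfuse_client.py
</code></pre>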
|
<python><langgraph>
|
2025-10-02 19:32:41
| 0
| 436
|
Parth Sekar
|
79,781,221
| 259,543
|
Variadic tuple type parameter
|
<p>I am trying to annotate a function which receives a tuple of types, and returns a value of one of the provided types.</p>
<p>For example:</p>
<pre><code>@overload
def func[T1](types: tuple[type[T1]]) -> T1: ...
@overload
def func[T1, T2](types: tuple[type[T1], type[T2]]) -> T1 | T2: ...
@overload
def func[T1, T2, T3](types: tuple[type[T1], type[T2], type[T3]]) -> T1 | T2 | T3: ...
</code></pre>
<p>Conceptually analogous to the following invalid code:</p>
<pre class="lang-py prettyprint-override"><code># invalid
def func[*Ts](types: tuple[type[*Ts]]) -> Union[*Ts]: ...
</code></pre>
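<p>To make the intent concrete, this is roughly how I would expect calls to type-check (illustrative only):</p>
<pre class="lang-py prettyprint-override"><code># illustrative only: expected inferred types for the overload-based version above
a = func((int,))      # expected revealed type: int
b = func((int, str))  # expected revealed type: int | str
</code></pre>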
<p>Is something like this supported?</p>
|
<python><python-typing>
|
2025-10-02 18:56:44
| 2
| 5,252
|
alecov
|
79,781,203
| 3,173,858
|
How to iterate over python dictionary in C#
|
<p>I am trying to call a C# logging library from Python. I want to support logging complex Python objects and dictionaries from the C# library. I'm using pythonnet 3.0.5.</p>
<p>When I pass the dictionary to C#, the Python code looks like this:</p>
<pre><code>import clr
clr.AddReference("Logging")
from Logging import *
logger = LogManager.Create(r"c:\path\to\log")
dictionary = { "Hey": "there", "All": "good" }
logger.Send[Object]("dict", dictionary)
</code></pre>
<p>On the C# side, it receives a Python.Runtime.PyObject in the parameter of System.Object. My conversion code in C# looks something like this:</p>
<pre><code>public LogItem Convert(object value, int recursionLimit)
{
    if (recursionLimit < 0)
        return null;

    var list = new List<LogItem>();
    var pyobj = value as Python.Runtime.PyObject;
    if (pyobj != null && pyobj.IsIterable())
    {
        using var iter = pyobj.GetIterator();
        while (iter.MoveNext())
        {
            var item = Convert(iter.Current, recursionLimit - 1);
            if (item != null)
                list.Add(item);
        }
    }

    return new LogItem(
        pyobj.ToString(Thread.CurrentThread.CurrentCulture),
        list.ToArray());
}
</code></pre>
<p>When I run the Python code, it gives me the following exception:</p>
<pre><code> Unhandled Exception: System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation.
---> System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
at Python.Runtime.Runtime.PyObject_GetIter(BorrowReference op)
at Python.Runtime.PyIter.GetIter(PyObject iterable)
at ...Convert(Object value, Int32 recursionLimit)
...etc.
</code></pre>
<p>How am I supposed to iterate over the dictionary?</p>
<p>I also tried running .Dir() and got the same exception:</p>
<pre><code>foreach (var subobj in pyobj.Dir())
{
    list.Add(Convert(subobj, recursionLimit - 1));
}
</code></pre>
<p>I tried removing the iterating, and the following all throw the AccessViolationException:</p>
<pre><code>pyobj.ToString(Culture.InvariantCulture)
pyobj.AsManagedObject(typeof(string))
pyobj.As<string>()
pyobj.ToType(typeof(string), null)
</code></pre>
<p>So there seems to be a general brokenness to the PyObject API.</p>
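<p>The only fallback I can think of on the Python side is to flatten the dictionary to a JSON string before sending it, which defeats the purpose of logging structured objects. A rough sketch of that fallback (untested):</p>
<pre><code>import json
# fallback sketch (untested): pass a pre-serialized string instead of the live Python dict
logger.Send[Object]("dict", json.dumps(dictionary))
</code></pre>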
|
<python><c#><python.net>
|
2025-10-02 18:34:05
| 1
| 1,581
|
Bryce Wagner
|
79,781,113
| 243,031
|
Gunicorn with worker 1 not able to call local API
|
<p>I have a Flask app, which we are running using Gunicorn.</p>
<p>App structure:</p>
<pre><code>% tree your_flask_app
your_flask_app
├── __init__.py
├── app.py
└── routes
├── __init__.py
├── data.py
└── shared.py
</code></pre>
<p><strong>your_flask_app/app.py</strong></p>
<pre><code>from flask import Flask
from routes.data import data_bp
app = Flask(__name__)
# Register blueprint
app.register_blueprint(data_bp, url_prefix='/')
if __name__ == "__main__":
app.run(debug=True)
</code></pre>
<p><strong>your_flask_app/routes/shared.py</strong></p>
<pre><code># shared dictionary for storing session data
shared_store = {}
</code></pre>
<p><strong>your_flask_app/routes/data.py</strong></p>
<pre><code>from flask import Blueprint, jsonify, request
import uuid
import requests
from routes.shared import shared_store
data_bp = Blueprint('data', __name__)
@data_bp.route('/hello_world', methods=['POST'])
def hello_world():
    # Generate a new UUID and store sample data
    session_id = str(uuid.uuid4())
    data = {"message": "Hello, World!", "session_id": session_id}
    shared_store[session_id] = data
    return jsonify({"session_id": session_id}), 201

@data_bp.route('/get_hello_world', methods=['GET'])
def get_hello_world():
    # Get session_id from query string
    session_id = request.args.get('session_id')
    if not session_id or session_id not in shared_store:
        return jsonify({"error": "Invalid or missing session_id"}), 404
    return jsonify(shared_store[session_id]), 200

@data_bp.route('/call_hello_world', methods=['POST'])
def call_hello_world():
    # Make API call to /hello_world endpoint on localhost
    try:
        resp = requests.post('http://localhost:5000/hello_world')
        return (resp.content, resp.status_code, resp.headers.items())
    except Exception as e:
        return jsonify({"error": str(e)}), 500
</code></pre>
<p>I created virtualenv and install packages</p>
<pre><code>python -m venv venv
./venv/bin/python -m pip install flask requests gunicorn
</code></pre>
<p>Then run the app.</p>
<pre><code>% ./venv/bin/gunicorn -b 0.0.0.0:5000 --log-leve DEBUG app:app
[2025-10-02 21:18:41 +0530] [22986] [DEBUG] Current configuration:
config: ./gunicorn.conf.py
wsgi_app: None
bind: ['0.0.0.0:5000']
backlog: 2048
workers: 1
worker_class: sync
threads: 1
worker_connections: 1000
max_requests: 0
max_requests_jitter: 0
timeout: 30
graceful_timeout: 30
keepalive: 2
limit_request_line: 4094
limit_request_fields: 100
limit_request_field_size: 8190
reload: False
reload_engine: auto
reload_extra_files: []
spew: False
check_config: False
print_config: False
preload_app: False
sendfile: None
reuse_port: False
chdir: /Users/myuser99/myprj/your_flask_app
daemon: False
raw_env: []
pidfile: None
worker_tmp_dir: None
user: 501
group: 20
umask: 0
initgroups: False
tmp_upload_dir: None
secure_scheme_headers: {'X-FORWARDED-PROTOCOL': 'ssl', 'X-FORWARDED-PROTO': 'https', 'X-FORWARDED-SSL': 'on'}
forwarded_allow_ips: ['127.0.0.1', '::1']
accesslog: None
disable_redirect_access_to_syslog: False
access_log_format: %(h)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s "%(f)s" "%(a)s"
errorlog: -
loglevel: DEBUG
capture_output: False
logger_class: gunicorn.glogging.Logger
logconfig: None
logconfig_dict: {}
logconfig_json: None
syslog_addr: unix:///var/run/syslog
syslog: False
syslog_prefix: None
syslog_facility: user
enable_stdio_inheritance: False
statsd_host: None
dogstatsd_tags:
statsd_prefix:
proc_name: None
default_proc_name: app:app
pythonpath: None
paste: None
on_starting: <function OnStarting.on_starting at 0x10345d580>
on_reload: <function OnReload.on_reload at 0x10345d6c0>
when_ready: <function WhenReady.when_ready at 0x10345d800>
pre_fork: <function Prefork.pre_fork at 0x10345d940>
post_fork: <function Postfork.post_fork at 0x10345da80>
post_worker_init: <function PostWorkerInit.post_worker_init at 0x10345dbc0>
worker_int: <function WorkerInt.worker_int at 0x10345dd00>
worker_abort: <function WorkerAbort.worker_abort at 0x10345de40>
pre_exec: <function PreExec.pre_exec at 0x10345df80>
pre_request: <function PreRequest.pre_request at 0x10345e0c0>
post_request: <function PostRequest.post_request at 0x10345e160>
child_exit: <function ChildExit.child_exit at 0x10345e2a0>
worker_exit: <function WorkerExit.worker_exit at 0x10345e3e0>
nworkers_changed: <function NumWorkersChanged.nworkers_changed at 0x10345e520>
on_exit: <function OnExit.on_exit at 0x10345e660>
ssl_context: <function NewSSLContext.ssl_context at 0x10345e840>
proxy_protocol: False
proxy_allow_ips: ['127.0.0.1', '::1']
keyfile: None
certfile: None
ssl_version: 2
cert_reqs: 0
ca_certs: None
suppress_ragged_eofs: True
do_handshake_on_connect: False
ciphers: None
raw_paste_global_conf: []
permit_obsolete_folding: False
strip_header_spaces: False
permit_unconventional_http_method: False
permit_unconventional_http_version: False
casefold_http_method: False
forwarder_headers: ['SCRIPT_NAME', 'PATH_INFO']
header_map: drop
[2025-10-02 21:18:41 +0530] [22986] [INFO] Starting gunicorn 23.0.0
[2025-10-02 21:18:41 +0530] [22986] [DEBUG] Arbiter booted
[2025-10-02 21:18:41 +0530] [22986] [INFO] Listening at: http://0.0.0.0:5000 (22986)
[2025-10-02 21:18:41 +0530] [22986] [INFO] Using worker: sync
[2025-10-02 21:18:41 +0530] [22987] [INFO] Booting worker with pid: 22987
[2025-10-02 21:18:41 +0530] [22986] [DEBUG] 1 workers
</code></pre>
<p>I am able to make calls to <code>hello_world</code> and <code>get_hello_world</code>.</p>
<pre><code>% curl -X POST http://127.0.0.1:5000/hello_world
{"session_id":"63adaa6b-7acb-4f76-a7ac-2a25167c625c"}
% curl "http://127.0.0.1:5000/get_hello_world?session_id=63adaa6b-7acb-4f76-a7ac-2a25167c625c"
{"message":"Hello, World!","session_id":"63adaa6b-7acb-4f76-a7ac-2a25167c625c"}
</code></pre>
<p>When I call <code>call_hello_world</code>, it times out.</p>
<pre><code>[2025-10-02 21:23:21 +0530] [22987] [DEBUG] POST /call_hello_world
[2025-10-02 21:23:52 +0530] [22986] [CRITICAL] WORKER TIMEOUT (pid:22987)
[2025-10-02 21:23:52 +0530] [22987] [ERROR] Error handling request /call_hello_world
Traceback (most recent call last):
File "/Users/myuser99/myprj/your_flask_app/venv/lib/python3.11/site-packages/gunicorn/workers/sync.py", line 134, in handle
self.handle_request(listener, req, client, addr)
File "/Users/myuser99/myprj/your_flask_app/venv/lib/python3.11/site-packages/gunicorn/workers/sync.py", line 177, in handle_request
respiter = self.wsgi(environ, resp.start_response)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/myuser99/myprj/your_flask_app/venv/lib/python3.11/site-packages/flask/app.py", line 1536, in __call__
return self.wsgi_app(environ, start_response)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/myuser99/myprj/your_flask_app/venv/lib/python3.11/site-packages/flask/app.py", line 1511, in wsgi_app
response = self.full_dispatch_request()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/myuser99/myprj/your_flask_app/venv/lib/python3.11/site-packages/flask/app.py", line 917, in full_dispatch_request
rv = self.dispatch_request()
^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/myuser99/myprj/your_flask_app/venv/lib/python3.11/site-packages/flask/app.py", line 902, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args) # type: ignore[no-any-return]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/myuser99/myprj/your_flask_app/routes/data.py", line 28, in call_hello_world
resp = requests.post('http://localhost:5000/hello_world')
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/myuser99/myprj/your_flask_app/venv/lib/python3.11/site-packages/requests/api.py", line 115, in post
return request("post", url, data=data, json=json, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/myuser99/myprj/your_flask_app/venv/lib/python3.11/site-packages/requests/api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/myuser99/myprj/your_flask_app/venv/lib/python3.11/site-packages/requests/sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/myuser99/myprj/your_flask_app/venv/lib/python3.11/site-packages/requests/sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/myuser99/myprj/your_flask_app/venv/lib/python3.11/site-packages/requests/adapters.py", line 644, in send
resp = conn.urlopen(
^^^^^^^^^^^^^
File "/Users/myuser99/myprj/your_flask_app/venv/lib/python3.11/site-packages/urllib3/connectionpool.py", line 787, in urlopen
response = self._make_request(
^^^^^^^^^^^^^^^^^^^
File "/Users/myuser99/myprj/your_flask_app/venv/lib/python3.11/site-packages/urllib3/connectionpool.py", line 534, in _make_request
response = conn.getresponse()
^^^^^^^^^^^^^^^^^^
File "/Users/myuser99/myprj/your_flask_app/venv/lib/python3.11/site-packages/urllib3/connection.py", line 565, in getresponse
httplib_response = super().getresponse()
^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/python@3.11/3.11.10/Frameworks/Python.framework/Versions/3.11/lib/python3.11/http/client.py", line 1395, in getresponse
response.begin()
File "/opt/homebrew/Cellar/python@3.11/3.11.10/Frameworks/Python.framework/Versions/3.11/lib/python3.11/http/client.py", line 325, in begin
version, status, reason = self._read_status()
^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/python@3.11/3.11.10/Frameworks/Python.framework/Versions/3.11/lib/python3.11/http/client.py", line 286, in _read_status
line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/python@3.11/3.11.10/Frameworks/Python.framework/Versions/3.11/lib/python3.11/socket.py", line 718, in readinto
return self._sock.recv_into(b)
^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/myuser99/myprj/your_flask_app/venv/lib/python3.11/site-packages/gunicorn/workers/base.py", line 204, in handle_abort
sys.exit(1)
SystemExit: 1
</code></pre>
<p>When I increase the worker count to 2+, it works fine.</p>
<p>Why can't a single worker call the localhost API of the same service?</p>
|
<python><rest><flask><timeout><gunicorn>
|
2025-10-02 16:03:44
| 1
| 21,411
|
NPatel
|
79,780,975
| 6,251,742
|
skip_dfs=True doesn't skip DFS
|
<p>Using the Python package <a href="https://github.com/jborean93/smbprotocol" rel="nofollow noreferrer">smbprotocol</a>, I have a DFS issue with a specific server that has DFS enabled.</p>
<p>The error:</p>
<pre><code> smbclient.listdir(folder_path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/venv/lib/python3.12/site-packages/smbclient/_os.py", line 246, in listdir
with SMBDirectoryIO(path, mode="r", share_access="r", **kwargs) as dir_fd:
File "/app/venv/lib/python3.12/site-packages/smbclient/_io.py", line 408, in __enter__
self.open()
File "/app/venv/lib/python3.12/site-packages/smbclient/_io.py", line 463, in open
transaction.commit()
File "/app/venv/lib/python3.12/site-packages/smbclient/_io.py", line 349, in commit
raise failures[0]
smbprotocol.exceptions.SMBOSError: [Error 0] [NtStatus 0xc000026d] Unknown NtStatus error returned 'STATUS_DFS_UNAVAILABLE': <path>
</code></pre>
<p>Code:</p>
<pre class="lang-py prettyprint-override"><code>import smbclient
smbclient.ClientConfig(username=smb_username, password=smb_pwd, skip_dfs=True)
smbclient.listdir("<path>")
</code></pre>
<p>From documentation:</p>
<blockquote>
<p>skip_dfs (bool): Whether to skip using any DFS referral checks and treat any path as a normal path. This is only useful if there are problems with the DFS resolver or you wish to avoid the extra round trip(s) the resolver requires.</p>
</blockquote>
<p>Despite <code>skip_dfs=True</code>, it seems DFS is still being used.</p>
<p>Python version: 3.12.4
smbprotocol version: 1.15.0</p>
|
<python><smb>
|
2025-10-02 13:40:23
| 0
| 4,033
|
Dorian Turba
|
79,780,819
| 2,659,307
|
How do I make a Qt5 queued connection in a way that mypy will accept?
|
<p>Using Python and Qt5, I can make a normal signal connection like this:</p>
<pre><code>def clicked() -> None:
    print("clicked")

btn = QtWidgets.QPushButton("Button", window)
btn.clicked.connect(clicked)
</code></pre>
<p>However, if I try to arrange for the connection to use a
<a href="https://doc.qt.io/qt-6/qt.html#ConnectionType-enum" rel="nofollow noreferrer">queued connection</a>:</p>
<pre><code> btn.clicked.connect(clicked, QtCore.Qt.QueuedConnection)
</code></pre>
<p>The code works correctly, but <code>mypy</code> complains:</p>
<pre><code>$ python -m mypy queued-conn.py
queued-conn.py:19: error: Too many arguments for "connect" of "pyqtBoundSignal" [call-arg]
Found 1 error in 1 file (checked 1 source file)
</code></pre>
<p>Here is the complete reproducer source code:</p>
<pre><code>#!/usr/bin/env python3
import os
import sys
from PyQt5 import QtWidgets, QtCore, QtGui
def main() -> None:
    app = QtWidgets.QApplication(sys.argv)
    window = QtWidgets.QWidget()
    btn = QtWidgets.QPushButton("Button", window)

    def clicked() -> None:
        print("clicked")

    if os.getenv("QUEUED") == "1":
        print("connecting as queued")
        btn.clicked.connect(clicked, QtCore.Qt.QueuedConnection)
    else:
        btn.clicked.connect(clicked)

    window.setGeometry(150, 150, 320, 200)
    window.setWindowTitle("PyQt5 Example")
    window.show()
    sys.exit(app.exec_())

if __name__ == '__main__':
    main()
# EOF
</code></pre>
<p>The basic problem is that the stubs only define one overload for connect:</p>
<pre><code># D:\opt\Python311\Lib\site-packages\PyQt5-stubs\QtCore.pyi excerpt
class pyqtBoundSignal:
def connect(self, slot: "PYQT_SLOT") -> "QMetaObject.Connection": ...
</code></pre>
<p>I tried to provide my own stub file but I can't make it work. The files
I added look like this:</p>
<pre><code>$ find stubs/
stubs/
stubs/PyQt5
stubs/PyQt5/QtCore.pyi
$ cat stubs/PyQt5/QtCore.pyi
# stubs/PyQt5/QtCore.pyi
from typing import Callable, Any, overload
class pyqtBoundSignal:
    @overload
    def connect(self, slot: Callable[..., Any]) -> None: ...

    # Allow passing the connection type.
    @overload
    def connect(self, slot: Callable[..., Any], type: int) -> None: ...
# EOF
$ cat mypy.ini
[mypy]
mypy_path = stubs
</code></pre>
<p>From the output of <code>mypy -v</code>, I can see that it recognizes the
configuration file, but it doesn't seem to see the stub file:</p>
<pre><code>$ python -m mypy -v queued-conn.py
LOG: Mypy Version: 1.17.0
LOG: Config File: D:\cygwin\home\Scott\wrk\learn\python-qt\mypy.ini
LOG: Configured Executable: D:\opt\Python311\python.exe
LOG: Current Executable: D:\opt\Python311\python.exe
LOG: Cache Dir: .mypy_cache
LOG: Compiled: True
LOG: Exclude: []
LOG: Found source: BuildSource(path='queued-conn.py', module='queued-conn', has_text=False, base_dir='D:\\cygwin\\home\\Scott\\wrk\\learn\\python-qt', followed=False)
LOG: Could not load cache for queued-conn: queued-conn.meta.json
LOG: Metadata not found for queued-conn
LOG: Parsing queued-conn.py (queued-conn)
LOG: Metadata fresh for PyQt5.QtWidgets: file D:\opt\Python311\Lib\site-packages\PyQt5-stubs\QtWidgets.pyi
LOG: Metadata fresh for PyQt5.QtCore: file D:\opt\Python311\Lib\site-packages\PyQt5-stubs\QtCore.pyi
LOG: Metadata fresh for PyQt5.QtGui: file D:\opt\Python311\Lib\site-packages\PyQt5-stubs\QtGui.pyi
LOG: Metadata fresh for os: file D:\opt\Python311\Lib\site-packages\mypy\typeshed\stdlib\os\__init__.pyi
LOG: Metadata fresh for sys: file D:\opt\Python311\Lib\site-packages\mypy\typeshed\stdlib\sys\__init__.pyi
LOG: Metadata fresh for PyQt5: file D:\opt\Python311\Lib\site-packages\PyQt5-stubs\__init__.pyi
LOG: Metadata fresh for builtins: file D:\opt\Python311\Lib\site-packages\mypy\typeshed\stdlib\builtins.pyi
LOG: Metadata fresh for PyQt5.sip: file D:\opt\Python311\Lib\site-packages\PyQt5-stubs\sip.pyi
LOG: Metadata fresh for typing: file D:\opt\Python311\Lib\site-packages\mypy\typeshed\stdlib\typing.pyi
LOG: Metadata fresh for datetime: file D:\opt\Python311\Lib\site-packages\mypy\typeshed\stdlib\datetime.pyi
LOG: Metadata fresh for enum: file D:\opt\Python311\Lib\site-packages\mypy\typeshed\stdlib\enum.pyi
LOG: Metadata fresh for collections.abc: file D:\opt\Python311\Lib\site-packages\mypy\typeshed\stdlib\collections\abc.pyi
LOG: Metadata fresh for os.path: file D:\opt\Python311\Lib\site-packages\mypy\typeshed\stdlib\os\path.pyi
LOG: Metadata fresh for _typeshed: file D:\opt\Python311\Lib\site-packages\mypy\typeshed\stdlib\_typeshed\__init__.pyi
LOG: Metadata fresh for abc: file D:\opt\Python311\Lib\site-packages\mypy\typeshed\stdlib\abc.pyi
LOG: Metadata fresh for io: file D:\opt\Python311\Lib\site-packages\mypy\typeshed\stdlib\io.pyi
LOG: Metadata fresh for subprocess: file D:\opt\Python311\Lib\site-packages\mypy\typeshed\stdlib\subprocess.pyi
LOG: Metadata fresh for types: file D:\opt\Python311\Lib\site-packages\mypy\typeshed\stdlib\types.pyi
LOG: Metadata fresh for typing_extensions: file D:\opt\Python311\Lib\site-packages\mypy\typeshed\stdlib\typing_extensions.pyi
LOG: Metadata fresh for _typeshed.importlib: file D:\opt\Python311\Lib\site-packages\mypy\typeshed\stdlib\_typeshed\importlib.pyi
LOG: Metadata fresh for _ast: file D:\opt\Python311\Lib\site-packages\mypy\typeshed\stdlib\_ast.pyi
LOG: Metadata fresh for _sitebuiltins: file D:\opt\Python311\Lib\site-packages\mypy\typeshed\stdlib\_sitebuiltins.pyi
LOG: Metadata fresh for _collections_abc: file D:\opt\Python311\Lib\site-packages\mypy\typeshed\stdlib\_collections_abc.pyi
LOG: Metadata fresh for collections: file D:\opt\Python311\Lib\site-packages\mypy\typeshed\stdlib\collections\__init__.pyi
LOG: Metadata fresh for re: file D:\opt\Python311\Lib\site-packages\mypy\typeshed\stdlib\re.pyi
LOG: Metadata fresh for contextlib: file D:\opt\Python311\Lib\site-packages\mypy\typeshed\stdlib\contextlib.pyi
LOG: Metadata fresh for time: file D:\opt\Python311\Lib\site-packages\mypy\typeshed\stdlib\time.pyi
LOG: Metadata fresh for ntpath: file D:\opt\Python311\Lib\site-packages\mypy\typeshed\stdlib\ntpath.pyi
LOG: Metadata fresh for dataclasses: file D:\opt\Python311\Lib\site-packages\mypy\typeshed\stdlib\dataclasses.pyi
LOG: Metadata fresh for _io: file D:\opt\Python311\Lib\site-packages\mypy\typeshed\stdlib\_io.pyi
LOG: Metadata fresh for _winapi: file D:\opt\Python311\Lib\site-packages\mypy\typeshed\stdlib\_winapi.pyi
LOG: Metadata fresh for importlib.machinery: file D:\opt\Python311\Lib\site-packages\mypy\typeshed\stdlib\importlib\machinery.pyi
LOG: Metadata fresh for ast: file D:\opt\Python311\Lib\site-packages\mypy\typeshed\stdlib\ast.pyi
LOG: Metadata fresh for sre_compile: file D:\opt\Python311\Lib\site-packages\mypy\typeshed\stdlib\sre_compile.pyi
LOG: Metadata fresh for sre_constants: file D:\opt\Python311\Lib\site-packages\mypy\typeshed\stdlib\sre_constants.pyi
LOG: Metadata fresh for genericpath: file D:\opt\Python311\Lib\site-packages\mypy\typeshed\stdlib\genericpath.pyi
LOG: Metadata fresh for posixpath: file D:\opt\Python311\Lib\site-packages\mypy\typeshed\stdlib\posixpath.pyi
LOG: Metadata fresh for codecs: file D:\opt\Python311\Lib\site-packages\mypy\typeshed\stdlib\codecs.pyi
LOG: Metadata fresh for importlib: file D:\opt\Python311\Lib\site-packages\mypy\typeshed\stdlib\importlib\__init__.pyi
LOG: Metadata fresh for importlib._bootstrap: file D:\opt\Python311\Lib\site-packages\mypy\typeshed\stdlib\importlib\_bootstrap.pyi
LOG: Metadata fresh for importlib._bootstrap_external: file D:\opt\Python311\Lib\site-packages\mypy\typeshed\stdlib\importlib\_bootstrap_external.pyi
LOG: Metadata fresh for sre_parse: file D:\opt\Python311\Lib\site-packages\mypy\typeshed\stdlib\sre_parse.pyi
LOG: Metadata fresh for _codecs: file D:\opt\Python311\Lib\site-packages\mypy\typeshed\stdlib\_codecs.pyi
LOG: Metadata fresh for importlib.abc: file D:\opt\Python311\Lib\site-packages\mypy\typeshed\stdlib\importlib\abc.pyi
LOG: Metadata fresh for _frozen_importlib: file D:\opt\Python311\Lib\site-packages\mypy\typeshed\stdlib\_frozen_importlib.pyi
LOG: Metadata fresh for _frozen_importlib_external: file D:\opt\Python311\Lib\site-packages\mypy\typeshed\stdlib\_frozen_importlib_external.pyi
LOG: Metadata fresh for importlib.resources.abc: file D:\opt\Python311\Lib\site-packages\mypy\typeshed\stdlib\importlib\resources\abc.pyi
LOG: Metadata fresh for importlib._abc: file D:\opt\Python311\Lib\site-packages\mypy\typeshed\stdlib\importlib\_abc.pyi
LOG: Metadata fresh for importlib.metadata: file D:\opt\Python311\Lib\site-packages\mypy\typeshed\stdlib\importlib\metadata\__init__.pyi
LOG: Metadata fresh for importlib.readers: file D:\opt\Python311\Lib\site-packages\mypy\typeshed\stdlib\importlib\readers.pyi
LOG: Metadata fresh for importlib.resources: file D:\opt\Python311\Lib\site-packages\mypy\typeshed\stdlib\importlib\resources\__init__.pyi
LOG: Metadata fresh for importlib.metadata._meta: file D:\opt\Python311\Lib\site-packages\mypy\typeshed\stdlib\importlib\metadata\_meta.pyi
LOG: Metadata fresh for email.message: file D:\opt\Python311\Lib\site-packages\mypy\typeshed\stdlib\email\message.pyi
LOG: Metadata fresh for pathlib: file D:\opt\Python311\Lib\site-packages\mypy\typeshed\stdlib\pathlib\__init__.pyi
LOG: Metadata fresh for zipfile: file D:\opt\Python311\Lib\site-packages\mypy\typeshed\stdlib\zipfile\__init__.pyi
LOG: Metadata fresh for zipimport: file D:\opt\Python311\Lib\site-packages\mypy\typeshed\stdlib\zipimport.pyi
LOG: Metadata fresh for importlib.resources._common: file D:\opt\Python311\Lib\site-packages\mypy\typeshed\stdlib\importlib\resources\_common.pyi
LOG: Metadata fresh for email: file D:\opt\Python311\Lib\site-packages\mypy\typeshed\stdlib\email\__init__.pyi
LOG: Metadata fresh for email.charset: file D:\opt\Python311\Lib\site-packages\mypy\typeshed\stdlib\email\charset.pyi
LOG: Metadata fresh for email.contentmanager: file D:\opt\Python311\Lib\site-packages\mypy\typeshed\stdlib\email\contentmanager.pyi
LOG: Metadata fresh for email.errors: file D:\opt\Python311\Lib\site-packages\mypy\typeshed\stdlib\email\errors.pyi
LOG: Metadata fresh for email.policy: file D:\opt\Python311\Lib\site-packages\mypy\typeshed\stdlib\email\policy.pyi
LOG: Metadata fresh for email._policybase: file D:\opt\Python311\Lib\site-packages\mypy\typeshed\stdlib\email\_policybase.pyi
LOG: Metadata fresh for email.header: file D:\opt\Python311\Lib\site-packages\mypy\typeshed\stdlib\email\header.pyi
LOG: Loaded graph with 65 nodes (0.033 sec)
LOG: Found 7 SCCs; largest has 57 nodes
LOG: Processing 6 queued fresh SCCs
LOG: Processing SCC singleton (queued-conn) as inherently stale
queued-conn.py:19: error: Too many arguments for "connect" of "pyqtBoundSignal" [call-arg]
LOG: Deleting queued-conn queued-conn.py queued-conn.meta.json queued-conn.data.json
LOG: No fresh SCCs left in queue
LOG: Build finished in 0.830 seconds with 65 modules, and 1 errors
Found 1 error in 1 file (checked 1 source file)
</code></pre>
<p>I'm using the native Windows port of Python 3.11.5 on Windows:</p>
<pre><code>$ python -V -V
Python 3.11.5 (tags/v3.11.5:cce6ba9, Aug 24 2023, 14:38:34) [MSC v.1936 64 bit (AMD64)]
</code></pre>
<p>Qt-related related packages:</p>
<pre><code>$ python -m pip list
Package Version
------------------- ---------------
[...]
PyQt5 5.15.11
PyQt5-Qt5 5.15.2
PyQt5_sip 12.17.0
PyQt5-stubs 5.15.6.0
[...]
</code></pre>
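<p>For completeness: I can silence the error with a per-line ignore using the error code from the mypy output above, but I'd rather fix the stubs than sprinkle these around:</p>
<pre><code>btn.clicked.connect(clicked, QtCore.Qt.QueuedConnection)  # type: ignore[call-arg]
</code></pre>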
<p>How can I properly override or augment this stub so that I can use the
queued connection overload of <code>connect</code>?</p>
|
<python><pyqt5><python-typing><mypy>
|
2025-10-02 10:32:24
| 0
| 13,707
|
Scott McPeak
|
79,780,815
| 2,287,458
|
Forward fill using values from rows that match a condition in Polars
|
<p>I have this dataframe:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.DataFrame({'value': [1,2,3,4,5,None,None], 'flag': [0,1,1,1,0,0,0]})
</code></pre>
<pre><code>┌───────┬──────┐
│ value ┆ flag │
│ --- ┆ --- │
│ i64 ┆ i64 │
╞═══════╪══════╡
│ 1 ┆ 0 │
│ 2 ┆ 1 │
│ 3 ┆ 1 │
│ 4 ┆ 1 │
│ 5 ┆ 0 │
│ null ┆ 0 │
│ null ┆ 0 │
└───────┴──────┘
</code></pre>
<p>I want to use <code>df.with_columns(pl.col('value').forward_fill())</code> (or similar), but I only want values with <code>flag == 1</code> to be eligible as fill values. So in this example, I want the value <code>4</code> to replace the two <code>null</code> entries (rather than <code>5</code>).</p>
<pre><code>┌───────┬──────┐
│ value ┆ flag │
│ --- ┆ --- │
│ i64 ┆ i64 │
╞═══════╪══════╡
│ 1 ┆ 0 │
│ 2 ┆ 1 │
│ 3 ┆ 1 │
│ 4 ┆ 1 │
│ 5 ┆ 0 │
│ 4 ┆ 0 │
│ 4 ┆ 0 │
└───────┴──────┘
</code></pre>
<p>How can one achieve this?</p>
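<p>The closest I have come is forward-filling a masked copy of the column and only using it to patch the nulls, but I am not sure this is correct or idiomatic (rough sketch):</p>
<pre class="lang-py prettyprint-override"><code># rough sketch: forward-fill only the flag==1 values, then use that to patch the nulls
eligible = pl.when(pl.col('flag') == 1).then(pl.col('value')).otherwise(None).forward_fill()
df.with_columns(pl.col('value').fill_null(eligible))
</code></pre>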
|
<python><dataframe><python-polars>
|
2025-10-02 10:25:59
| 3
| 3,591
|
Phil-ZXX
|
79,780,798
| 26,674,420
|
How to select joined columns with structure like namespaces (a.col1, b.col2)?
|
<p>I am working to migrate from PySpark to Polars. In PySpark I often use aliases on dataframes so I can clearly see which columns come from which side of a join. I'd like to get similarly readable code in Polars. Conceptually, I want something like this (non-working) code:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df1 = pl.DataFrame({
    "building_id": [1, 2, 3],
    "height": [10, 20, 30],
    "location": ["A", "B", "C"],
})

df2 = pl.DataFrame({
    "building_id": [2, 3, 4],
    "depth": [25, 35, 45],
    "year_built": [2000, 2010, 2020],
})

df1.alias("a").join(df2.alias("b"), on="building_id", how="left") \
    .select(
        "a.building_id",
        "a.height",
        "a.location",
        "b.year_built"
    )
</code></pre>
<p>Does anybody know good options for this? My motivation is that it becomes harder to track which columns come from which dataframe when there are many columns, or when the dataframe is itself the result of other transformations.</p>
<p>I tried the following options:</p>
<ol>
<li>Add suffixes (i.e. tag all non-key columns from <code>df2</code> with <code>_df2</code>; see the sketch below). I don't like this, since the code wouldn't be as clean.</li>
<li>Put columns in structs, but it becomes even more messy.</li>
</ol>
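<p>For reference, option 1 looks roughly like this (just a sketch of what I mean); it works, but the <code>_df2</code> names make the code noisier than the <code>a.</code>/<code>b.</code> style I'm after:</p>
<pre class="lang-py prettyprint-override"><code># sketch of option 1: explicitly tag df2's non-key columns before joining
df2_tagged = df2.rename({c: f"{c}_df2" for c in df2.columns if c != "building_id"})
df1.join(df2_tagged, on="building_id", how="left").select(
    "building_id",
    "height",
    "location",
    "year_built_df2",
)
</code></pre>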
|
<python><dataframe><python-polars>
|
2025-10-02 10:07:03
| 1
| 376
|
Arend-Jan Tissing
|
79,780,792
| 395,857
|
How can I update the capacity of a finetuned GPT model on Azure using Python?
|
<p>I want to update the capacity of a finetuned GPT model on Azure. How can I do so in Python?</p>
<p>The following code used to work a few months ago (it took only a few seconds to update the capacity), but now it no longer updates the capacity, and I have no idea why. It requires a token generated via <a href="https://learn.microsoft.com/en-us/cli/azure/account?view=azure-cli-latest#az-account-get-access-token" rel="nofollow noreferrer"><code>az account get-access-token</code></a>:</p>
<pre class="lang-py prettyprint-override"><code>import json
import requests
new_capacity = 3 # Change this number to your desired capacity. 3 means 3000 tokens/minute.
# Authentication and resource identification
token = "YOUR_BEARER_TOKEN" # Replace with your actual token
subscription = ''
resource_group = ""
resource_name = ""
model_deployment_name = ""
# API parameters and headers
update_params = {'api-version': "2023-05-01"}
update_headers = {'Authorization': 'Bearer {}'.format(token), 'Content-Type': 'application/json'}
# First, get the current deployment to preserve its configuration
request_url = f'https://management.azure.com/subscriptions/{subscription}/resourceGroups/{resource_group}/providers/Microsoft.CognitiveServices/accounts/{resource_name}/deployments/{model_deployment_name}'
r = requests.get(request_url, params=update_params, headers=update_headers)
if r.status_code != 200:
    print(f"Failed to get current deployment: {r.status_code}")
    print(r.reason)
    if hasattr(r, 'json'):
        print(r.json())
    exit(1)
# Get the current deployment configuration
current_deployment = r.json()
# Update only the capacity in the configuration
update_data = {
    "sku": {
        "name": current_deployment["sku"]["name"],
        "capacity": new_capacity
    },
    "properties": current_deployment["properties"]
}
update_data = json.dumps(update_data)
print('Updating deployment capacity...')
# Use PUT to update the deployment
r = requests.put(request_url, params=update_params, headers=update_headers, data=update_data)
print(f"Status code: {r.status_code}")
print(f"Reason: {r.reason}")
if hasattr(r, 'json'):
    print(r.json())
</code></pre>
<p>What's wrong with it?</p>
<p>It gets a 200 response but it silently fails to update the capacity:</p>
<pre><code>C:\Users\dernoncourt\anaconda3\envs\test\python.exe change_deployed_model_capacity.py
Updating deployment capacity...
Status code: 200
Reason: OK
{'id': '/subscriptions/[ID]/resourceGroups/Franck/providers/Microsoft.CognitiveServices/accounts/[ID]/deployments/[deployment name]', 'type': 'Microsoft.CognitiveServices/accounts/deployments', 'name': '[deployment name]', 'sku': {'name': 'Standard', 'capacity': 10}, 'properties': {'model': {'format': 'OpenAI', 'name': '[deployment name]', 'version': '1'}, 'versionUpgradeOption': 'NoAutoUpgrade', 'capabilities': {'chatCompletion': 'true', 'area': 'US', 'responses': 'true', 'assistants': 'true'}, 'provisioningState': 'Updating', 'rateLimits': [{'key': 'request', 'renewalPeriod': 60, 'count': 10}, {'key': 'token', 'renewalPeriod': 60, 'count': 10000}]}, 'systemData': {'createdBy': 'dernoncourt@gmail.com', 'createdByType': 'User', 'createdAt': '2025-10-02T05:49:58.0685436Z', 'lastModifiedBy': 'dernoncourt@gmail.com', 'lastModifiedByType': 'User', 'lastModifiedAt': '2025-10-02T09:53:16.8763005Z'}, 'etag': '"[ID]"'}
Process finished with exit code 0
</code></pre>
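<p>Since the PUT response above still reports <code>'provisioningState': 'Updating'</code>, I'm now planning to poll the deployment to see whether the change simply takes a while to land. A rough sketch of that check (same endpoint and headers as above):</p>
<pre class="lang-py prettyprint-override"><code>import time

# rough sketch: poll until provisioningState settles, then check the reported capacity
for _ in range(30):
    r = requests.get(request_url, params=update_params, headers=update_headers)
    deployment = r.json()
    state = deployment["properties"].get("provisioningState")
    print(state, deployment["sku"]["capacity"])
    if state != "Updating":
        break
    time.sleep(10)
</code></pre>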
|
<python><azure><azure-openai><gpt-4>
|
2025-10-02 10:00:53
| 1
| 84,585
|
Franck Dernoncourt
|
79,780,657
| 538,256
|
vector field vortices location
|
<p>I'm using matplotlib's quiver to superimpose a vector field on an image:</p>
<p>by this code (I use two images imgT and imgL to obtain the horizontal and vertical vector components):</p>
<pre><code>gd=1 # sampling
sy,sx = imgT.shape[:2]
imgTscaled = resize(imgT, (int(sy/gd), int(sx/gd)))
imgLscaled = resize(imgL, (int(sy/gd), int(sx/gd)))
Y, X = np.mgrid[0:sy:gd, 0:sx:gd]
dX=2*(imgTscaled-minT)/(maxT-minT)-1
dY=2*(imgLscaled-minL)/(maxT-minT)-1
ax[0].quiver(X, Y, dX, dY, color='#FE7C25', headwidth=5, scale=30)
</code></pre>
<p>How can I detect from the vector field if there are vortices (and their location)?</p>
<p><a href="https://i.sstatic.net/5FyWzpHO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5FyWzpHO.png" alt="vortex" /></a> -> vortex</p>
<p><a href="https://i.sstatic.net/6H1fQrrB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6H1fQrrB.png" alt="novortex" /></a> -> no vortex</p>
|
<python><scipy>
|
2025-10-02 06:21:08
| 1
| 4,004
|
alessandro
|
79,780,565
| 2,056,201
|
Python cannot install torch
|
<p>Why can I not install torch? I am using Python 3.10.10.
I don't care about being in a virtual environment; I'm using a virtual OS.</p>
<p>Why does this stupid tool not let me install torch?</p>
<blockquote>
<p>D:>pip install torch Defaulting to user installation because normal
site-packages is not writeable ERROR: Could not find a version that
satisfies the requirement torch (from versions: none) ERROR: No
matching distribution found for torch</p>
<p>D:>pip3 install torch torchvision torchaudio Defaulting to user
installation because normal site-packages is not writeable ERROR:
Could not find a version that satisfies the requirement torch (from
versions: none) ERROR: No matching distribution found for torch</p>
</blockquote>
|
<python><pip>
|
2025-10-02 00:53:13
| 1
| 3,706
|
Mich
|
79,780,334
| 1,886,914
|
How can I create a Python venv within a Bash script and then activate it immediately, all from a VS Code profile's settings.json?
|
<p>I have this profile <code>settings.json</code>:</p>
<pre class="lang-json prettyprint-override"><code>{
"terminal.integrated.profiles.osx": {
"python-venv": {
"path": "/opt/homebrew/bin/bash",
"args": [
"-l",
"-c",
"\"/path/to/my/enforce-venv-script.sh\""
],
"icon": "terminal-bash"
}
}
}
</code></pre>
<p>And <code>enforce-venv-script.sh</code>:</p>
<pre class="lang-bash prettyprint-override"><code>#!/bin/bash
# Exit on errors
set -e
# Enforce the existence and usage of a Python venv
if [ ! -d ".venv" ]; then
echo "This is a Python project, but no venv was found. Creating one..."
python3 -m venv .venv > /dev/null
echo "Activating venv..."
source .venv/bin/activate
# Hand off execution to a new login shell, or else the terminal
# will simply quit. Don't forget to activate the venv within it!
# What goes here???
fi
</code></pre>
<p>I've tried multiple variations of <code>exec bash</code>, with or without <code>-l</code>, <code>-c</code> and <code>source ...</code> commands, but nothing seems to work.</p>
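<p>For concreteness, the kind of hand-off I keep trying in place of the <code>What goes here???</code> comment looks roughly like this (untested sketch; it assumes the Homebrew bash supports <code>--rcfile</code> and process substitution):</p>
<pre class="lang-bash prettyprint-override"><code># Replace the script with an interactive shell whose rcfile first loads my
# own bashrc and then activates the venv, so the terminal stays open.
exec /opt/homebrew/bin/bash --rcfile <(printf '%s\n' \
    '[ -f ~/.bashrc ] && source ~/.bashrc' \
    'source .venv/bin/activate')
</code></pre>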
<p>This is all to facilitate the automatic creation of virtual environments for any Python project that I create.</p>
<p>I use this VS Code profile for all new Python projects, and the idea then is to enforce the usage of a virtual environment within that project. So any time a terminal is opened, the script will run and the venv will be created and activated if not already there.</p>
<p>VS Code already activates venvs automatically when I open a terminal and one exists, so the last missing piece of the puzzle is to create one if there isn't one.</p>
<p>EDIT: In response to @Philippe's answer and comments below, here are some more attempts to get this working:</p>
<pre><code>kenny@MACBOOK-PRO:~/Dev/Hobby_Programming/Python/test$ bash -l --rcfile ~/Library/Application\ Support/Code/User/profiles/73486c58/enforce-venv.sh
bash -l --rcfile ~/Library/Application\ Support/Code/User/profiles/73486c58/enforce-venv.sh
bash: --: invalid option
Usage: bash [GNU long option] [option] ...
...more usage info removed for brevity
kenny@MACBOOK-PRO:~/Dev/Hobby_Programming/Python/test$ /opt/homebrew/bin/bash -l -rcfile /Users/kenny/Library/Application\ Support/Code/User/profiles/73486c58/en
force-venv.sh
bash: /Users/kenny/Library/Application: restricted: cannot specify `/' in command names
kenny@MACBOOK-PRO:~/Dev/Hobby_Programming/Python/test$ cd ~/Library/Application\ Support/Code/User/profiles/73486c58/
kenny@MACBOOK-PRO:~/Library/Application Support/Code/User/profiles/73486c58$ /opt/homebrew/bin/bash -l -rcfile enforce-venv.sh
bash: enforce-venv.sh: command not found
kenny@MACBOOK-PRO:~/Library/Application Support/Code/User/profiles/73486c58$ /opt/homebrew/bin/bash -l -rcfile ./enforce-venv.sh
bash: ./enforce-venv.sh: restricted: cannot specify `/' in command names
</code></pre>
<p>I have no freaking clue what the heck is different enough about our computers that this is happening to me, since it apparently works for Philippe, yet here I am.</p>
|
<python><bash><visual-studio-code><python-venv>
|
2025-10-01 17:54:33
| 2
| 931
|
Kenny83
|
79,780,282
| 2,056,201
|
Installing Onnx package broke all python dependencies
|
<p>I followed all of the instructions in this video and installed all the proper packages on top of Python 3.10.0</p>
<p><a href="https://www.youtube.com/watch?v=v1d7SinTzEY&t=1s" rel="nofollow noreferrer">https://www.youtube.com/watch?v=v1d7SinTzEY&t=1s</a></p>
<p><code>pip install torch torchaudio torchvision mlagents protobuf==3.20.3</code></p>
<p>However, mlagents-learn complained about not having the onnx module.</p>
<p>When I tried to <code>pip install onnx</code>, it completely broke all of the dependencies.</p>
<p>I am getting these errors because one package requires numpy 1.21 and the other one requires numpy 1.22.</p>
<p>I tried <code>pip install onnx==1.11.0</code>, which doesn't work either.</p>
<p>I don't know how people can use Python in a professional environment with this package management system.</p>
<p>Is there a way to resolve all of these packages automatically so I don't have to guess which version I have to use for each one?</p>
<pre><code>ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
onnx 1.19.0 requires numpy>=1.22, but you have numpy 1.21.2 which is incompatible.
Successfully installed numpy-1.21.2
WARNING: You are using pip version 21.2.3; however, version 25.2 is available.
You should consider upgrading via the 'C:\Users\me\AppData\Local\Programs\Python\Python310\python.exe -m pip install --upgrade pip' command.
C:\Users\me>pip install --force-reinstall "numpy==1.22"
Collecting numpy==1.22
Downloading numpy-1.22.0-cp310-cp310-win_amd64.whl (14.7 MB)
|████████████████████████████████| 14.7 MB 6.4 MB/s
Installing collected packages: numpy
Attempting uninstall: numpy
Found existing installation: numpy 1.21.2
Uninstalling numpy-1.21.2:
Successfully uninstalled numpy-1.21.2
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
mlagents-envs 0.30.0 requires numpy==1.21.2, but you have numpy 1.22.0 which is incompatible.
</code></pre>
|
<python><unity-game-engine><pip><ml-agent>
|
2025-10-01 16:21:29
| 0
| 3,706
|
Mich
|
79,780,256
| 5,986,907
|
Overhead of instantiating a flax model
|
<p>Is it expensive to keep recreating a Flax network, such as</p>
<pre><code>class QNetwork(nn.Module):
dim: int
@nn.compact
def __call__(self, x):
x = nn.Dense(120)(x)
x = nn.relu(x)
x = nn.Dense(84)(x)
x = nn.relu(x)
x = nn.Dense(self.dim)(x)
return x
q = QNetwork(4)
# do stuff
q = QNetwork(4)
# do stuff
q = QNetwork(4)
# do stuff
</code></pre>
<p>Afaict, the parameters are separate and the model is stateless, so construction should be cheap, but I want to be sure.</p>
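<p>For context, the quick check I had in mind is just timing the construction on its own (rough sketch):</p>
<pre><code>import timeit

# Constructing the module only builds a frozen dataclass holding `dim`;
# no parameters are allocated until init()/apply() is called with an input.
print(timeit.timeit(lambda: QNetwork(4), number=10_000))
</code></pre>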
|
<python><jax><flax>
|
2025-10-01 15:47:24
| 0
| 8,082
|
joel
|
79,780,187
| 5,552,507
|
Holoviz panel notification width
|
<p>The Holoviz Panel notification width is too small for more than a few words, so my messages get cut off.</p>
<p><strong>Can I wrap text and/or increase default width to e.g. 600 px?</strong></p>
<h3>What I tested so far:</h3>
<p>In <a href="https://panel.holoviz.org/reference/global/Notifications.html" rel="nofollow noreferrer">https://panel.holoviz.org/reference/global/Notifications.html</a> I don't see any direct parameter to change.</p>
<p>From <a href="https://panel.holoviz.org/how_to/layout/size.html" rel="nofollow noreferrer">https://panel.holoviz.org/how_to/layout/size.html</a> I get that I can change size of a component with <code>width=600</code>, but notifications are under <code>pn.state</code> and not <code>pn</code> directly so this does not work.</p>
<p>I tried with CSS, but I am bad at CSS and did not succeed (most probably because I did it wrong, or maybe because it's not doable?!):</p>
<pre class="lang-py prettyprint-override"><code>import panel as pn
pn.extension(
notifications=True,
raw_css=[ """
/* increase notifications width and force wrapping */
.bk-notification {
max-width: 720px !important; /* desired width */
width: 720px !important;
white-space: normal !important;
word-break: break-word !important;
overflow-wrap: anywhere !important;
}
""" ]
pn.state.notifications.success("...or not...")
</code></pre>
|
<python><css><panel><holoviz>
|
2025-10-01 14:42:10
| 1
| 307
|
PiWi
|
79,780,080
| 11,328,614
|
telnetlib3, StreamReader/Writer vs. TelnetReader/Writer
|
<p>I'm trying to type hint my <code>telnetlib3</code> client.
However, I have some utility code which should not depend on <code>telnet</code>. It should be able to deal with <code>asyncio</code> <code>StreamReader</code>/<code>Writer</code> in common.</p>
<p>Now, if I pass a <code>TelnetReader</code>/<code>Writer</code> to one of those utility functions, the type checker will complain.</p>
<p>I don't understand that as the <code>TelnetReader</code>/<code>Writer</code> are implementing the <code>StreamReader</code>/<code>Writer</code> <code>typing.Protocol</code>, according to the telnetlib3 documentation.
Therefore, I should be able to pass a <code>TelnetReader</code>/<code>Writer</code> where a <code>StreamReader</code>/<code>Writer</code> is expected.</p>
<p>Here is the example code (Just using a reader mockup):</p>
<pre class="lang-py prettyprint-override"><code>import asyncio
import telnetlib3
async def utility_func(reader: asyncio.StreamReader):
out = await reader.read(10000)
print(out.decode())
async def main():
tr = telnetlib3.TelnetReader()
await utility_func(tr)
asyncio.run(main())
</code></pre>
<p>The type checker complains: <code>Expected type 'StreamReader', got 'TelnetReader' instead</code></p>
<p>How can I satisfy the type checker?</p>
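<p>One direction I've considered (sketch only; it assumes the utility only needs <code>read()</code>, and note that telnetlib3 readers may yield <code>str</code> rather than <code>bytes</code>) is typing the utility against a small structural protocol instead of the concrete <code>asyncio.StreamReader</code>:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Protocol


class SupportsAsyncRead(Protocol):
    # positional-only so either reader's `read` signature is compatible
    async def read(self, n: int = -1, /) -> bytes: ...


async def utility_func(reader: SupportsAsyncRead) -> None:
    out = await reader.read(10000)
    print(out.decode())
</code></pre>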
|
<python><python-typing><telnetlib3>
|
2025-10-01 12:57:09
| 0
| 1,132
|
Wör Du Schnaffzig
|
79,779,946
| 5,378,816
|
python REPL: is it possible to combine -i and -m asyncio?
|
<p>I was trying to start the asyncio REPL (<code>python -m asyncio</code>) and to execute a script before starting the interactive REPL (<code>python -i ./script.py</code>). I tried re-ordering of options, the end-of-options symbol <code>--</code>, but I'm afraid it cannot be done.</p>
<p>Is it possible to execute a code in a file and then to enter the interactive asyncio REPL?</p>
<hr />
<p>Update: I got a link to this answer: <a href="https://stackoverflow.com/a/1027739/5378816">https://stackoverflow.com/a/1027739/5378816</a> with a tip to do <code>exec(open("script.py").read())</code> in the REPL.</p>
<p>If there is no other way, it would help a lot. However, what I am really looking for is to be able to "play" in the async REPL with the results of the script immediately.</p>
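<p>To illustrate, the workaround in practice would look like this once the REPL is up (works, but not what I'd call elegant):</p>
<pre><code># inside the `python -m asyncio` REPL:
exec(open("script.py").read())
# the names defined by script.py are now in the REPL globals,
# and top-level `await` still works for anything async they expose
</code></pre>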
|
<python><python-asyncio>
|
2025-10-01 10:55:31
| 1
| 17,998
|
VPfB
|
79,779,897
| 2,126,910
|
Get AWS API Gateway API keys that don't match nameQuery parameter in get_api_keys() boto3 AWS SDK
|
<p>As the title says, I can get all API keys in API Gateway that start with the contents of <code>nameQuery</code>, but is it possible to get all API keys that <strong>don't</strong> match nameQuery?</p>
<p>I'm looking to filter out some API keys that have a set prefix (e.g. <code>prefix-</code>). I'm currently using the <a href="https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/apigateway/client/get_api_keys.html#" rel="nofollow noreferrer">boto3 Python SDK</a> but I'm assuming this is applicable to all AWS SDKs.</p>
<p>My unsuccessful attempts:</p>
<pre><code>nameQuery="!prefix-"
nameQuery="NOT prefix-"
nameQuery="~prefix-"
</code></pre>
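<p>If the API can't do it server-side, the fallback I have in mind is filtering client-side over a paginator (sketch; <code>prefix-</code> is the placeholder prefix):</p>
<pre><code>import boto3

client = boto3.client("apigateway")
paginator = client.get_paginator("get_api_keys")

# keep only the keys whose name does NOT start with the prefix
keys = [
    key
    for page in paginator.paginate()
    for key in page.get("items", [])
    if not key["name"].startswith("prefix-")
]
</code></pre>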
|
<python><amazon-web-services><boto3><aws-api-gateway>
|
2025-10-01 10:05:17
| 1
| 706
|
philMarius
|
79,779,732
| 20,298,890
|
Python gdal open layer from bytes
|
<p><strong>I need to perform a conversion from a FlatGeoBuf file that I read as bytes from an http request to a GeoJSON file.</strong></p>
<p>I know that I could save the .fgb on disk, open it with ogr, and then I have a function to export an OGR layer to GeoJSON.</p>
<p>But I would like to <strong>avoid writing on disk</strong>. I know of the VSIMEM mechanism, but I'm struggling to make it work... What I came up with looks like this:</p>
<pre class="lang-py prettyprint-override"><code>def write_vsimem(mem_path: str, data: bytes) -> None:
vsi_file = gdal.VSIFOpenL(mem_path, "w")
size = len(data)
try:
gdal.VSIFWriteL(data, 1, size, vsi_file)
if gdal.VSIFCloseL(vsi_file) != 0:
raise RuntimeError(
f"Failed to close VSIMEM file '{mem_path}': {gdal.GetLastErrorMsg()}"
)
except Exception as e:
raise RuntimeError(f"Error writing to VSIMEM file '{mem_path}': {e}") from e
def convert_flatgeobuf_to_geojson(flatgeobuf_data: bytes, name: str) -> bytes:
driver = ogr.GetDriverByName("FlatGeobuf")
mem_path = f"/vsimem/{name}.fgb"
fgb_data = driver.CreateDataSource(mem_path)
if not fgb_data:
raise RuntimeError(f"Failed to open FlatGeobuf file '{name}' with VSIMEM.")
write_vsimem(mem_path, flatgeobuf_data)
# Check if the layer exists
if fgb_data.GetLayerCount() != 1:
raise ValueError(
f"Expected 1 layer in FlatGeobuf file '{name}', found {fgb_data.GetLayerCount()} layers."
)
# Get the layer and extract geojson
layer = fgb_data.GetLayer(0)
return layer_to_geojson(layer=layer)
</code></pre>
<p>When I run this, I get the following logs that seem to say the <em>write_vsimem</em> part works:</p>
<pre><code>Converting FlatGeoBuf of size: 5570849
Write VSIMEM: 5570816
</code></pre>
<p>But then I get the following error, which mean the ogr datasource I created is still empty:</p>
<pre><code>Failed to hydrate data: Expected 1 layer in FlatGeobuf file 'my_layer', found 0 layers.
</code></pre>
<p>Is my goal even possible to achieve ? Then what am I doing wrong ? Or should I give up and write on disk ?</p>
<p>Any answer to those questions is welcome :)</p>
<p><strong>EDIT:</strong></p>
<p>As indicated by Priyanka Konduru, I was misunderstanding how VSIMEM works: writing the bytes into memory creates the in-memory file (so no <code>CreateDataSource</code> is needed), and then you can read it back from memory:</p>
<pre class="lang-py prettyprint-override"><code>def convert_flatgeobuf_to_geojson(flatgeobuf_data: bytes, name: str) -> bytes:
mem_path = f"/vsimem/{name}.fgb"
write_vsimem(mem_path, flatgeobuf_data)
fgb_data = ogr.Open(mem_path)
if not fgb_data:
raise RuntimeError(f"Failed to open FlatGeobuf file '{name}' with VSIMEM.")
# Check if the layer exists
if fgb_data.GetLayerCount() != 1:
raise ValueError(
f"Expected 1 layer in FlatGeobuf file '{name}', found {fgb_data.GetLayerCount()} layers."
)
# Get the layer and extract geojson
layer = fgb_data.GetLayer(0)
return layer_to_geojson(layer=layer, layer_name=name)
</code></pre>
|
<python><gdal><ogr>
|
2025-10-01 07:06:58
| 1
| 503
|
marting
|
79,779,724
| 2,218,321
|
Order of execution in FastAPI async generators
|
<p>This is the code</p>
<pre><code>@router.post("/tel/verify-otp",
summary="Verify OTP and update user tel number"
)
async def verify_tel_otp(
command: UpdateTelCommand = Body(...),
handler: VerifyTelOtpHandler = Depends(get_verify_tel_otp_handler),
):
await handler.handle(command=command)
print('After handler')
return {
"message": tm(key="success"),
"data": None
}
</code></pre>
<p>and</p>
<pre><code>async def get_verify_tel_otp_handler(
uow: UnitOfWork = Depends(get_unit_of_work),
current_user = Depends(get_current_user)
) -> VerifyTelOtpHandler:
return VerifyTelOtpHandler(uow=uow, current_user=current_user)
</code></pre>
<p>and</p>
<pre><code>async def get_db_session() -> AsyncSession:
session = async_session_factory()
print('before yield')
yield session
print('after yield')
async def get_current_user(
token: str = Depends(oauth2_scheme),
session: AsyncSession = Depends(get_db_session)
) -> Union[User, None]:
token_hash = generate_token_hash(token)
user_repo = UserRepository(session)
user = await user_repo.get_by_token(token_hash)
if not user:
raise UnauthorizedError()
return user
</code></pre>
<p>and</p>
<pre><code>class VerifyTelOtpHandler:
# Constants
TASK_TYPE_PHONE_CONFLICT = 8
TASK_STATUS_OPEN = 2
CONFLICT_RESOLUTION_DAYS = 2
def __init__(self, uow: UnitOfWork, current_user: User):
self.uow = uow
self.current_user = current_user
async def handle(self, command: UpdateTelCommand) -> None:
print('start of handler')
async with self.uow as uow:
await self._validate_otp(command, self.uow)
...
</code></pre>
<p>Now, when I call the api, this is the output:</p>
<pre><code>before yield
start of handler
After handler
after yield
</code></pre>
<p>The question is, why do I see the order of outputs in this way?</p>
<p>Why is <code>after yield</code> printed after the <code>After handler</code>?</p>
|
<python><fastapi><contextmanager>
|
2025-10-01 06:53:41
| 1
| 2,189
|
M a m a D
|
79,779,468
| 1,361,802
|
How to use Temporal pydantic_data_converter while not causing a workflow validation failure?
|
<p>My understanding is that to get the <a href="https://python.temporal.io/temporalio.contrib.pydantic.html" rel="nofollow noreferrer"><code>pydantic_data_converter</code></a> to auto convert outputs, you need to pass a function reference, i.e.</p>
<p>This properly returns a TestModel</p>
<pre><code> result: TestModel = await workflow.execute_activity(
test_activity,
start_to_close_timeout=timedelta(seconds=10)
)
</code></pre>
<p>This returns a dict</p>
<pre><code> result: TestModel = await workflow.execute_activity(
'test_activity',
start_to_close_timeout=timedelta(seconds=10)
)
</code></pre>
<p>The trouble is that getting a reference to <code>test_activity</code> often means importing nondeterministic packages: e.g. if your workflow imports <code>test_activity</code>, which lives in <code>activities.py</code>, and that file imports <code>requests</code>, then the workflow fails with</p>
<blockquote>
<p>RuntimeError: Failed validating workflow.</p>
</blockquote>
<p>My question is whether or not there's a way to use <code>pydantic_data_converter</code> together with 3rd party imports.</p>
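<p>For reference, the pattern I've been experimenting with (not sure it is the intended approach) is keeping the function reference but telling the workflow sandbox to pass the offending imports through:</p>
<pre><code>from temporalio import workflow

# Tell the sandbox not to re-validate these imports for determinism;
# `activities` is my module that (indirectly) imports requests.
with workflow.unsafe.imports_passed_through():
    from activities import test_activity
</code></pre>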
|
<python><pydantic><deterministic><temporal-workflow>
|
2025-09-30 19:43:29
| 1
| 8,643
|
wonton
|
79,779,432
| 3,067,485
|
How can I run airflow db clean command using BashOperator with Airlfow 3+
|
<p>I run airflow using official <a href="https://airflow.apache.org/docs/apache-airflow/3.1.0/docker-compose.yaml" rel="nofollow noreferrer">docker compose file</a>.</p>
<p>I would like to run a simple command with Airflow BashOperator to clean my logs:</p>
<pre><code> clean_database = BashOperator(
task_id="clean-database",
bash_command=f'airflow db clean --clean-before-timestamp {clean_before.isoformat()} --yes',
)
</code></pre>
<p>But I got the following error:</p>
<pre><code>sqlalchemy.exc.ArgumentError: Could not parse SQLAlchemy URL from string 'airflow-db-not-allowed:///'
</code></pre>
<p>This seems to be explicitly forbidden, even though the command used to work nicely in Airflow 2.x.</p>
<p>Anyway if I issue the command by hand inside the worker container, it works:</p>
<pre><code>docker exec -it airflow-airflow-worker-1 bash
airflow db clean --clean-before-timestamp 20250930 --yes
</code></pre>
<p>I have cross checked, the <code>BashOperator</code> runs within the same container <code>airflow-airflow-worker-1</code> (they have the same id).</p>
<p>So I am wondering what is wrong with this <code>BashOperator</code>?</p>
<h2>Update</h2>
<p>I have found the following claim by googling (probably AI generated) which seems to point out that something has changed in Airflow 3+:</p>
<blockquote>
<p>The error 'airflow-db-not-allowed:///' typically arises in Apache
Airflow 3.0+, where direct database access via the ORM (Object
Relational Mapper) is no longer permitted. This restriction enforces
better practices by requiring users to interact with the metadata
database through REST APIs or the Python client instead of direct
queries.</p>
</blockquote>
<p>Anyway, it does not explain why I can issue the CLI command by hand but cannot reproduce it with a BashOperator within the same container.</p>
|
<python><airflow>
|
2025-09-30 18:49:54
| 1
| 11,564
|
jlandercy
|
79,779,220
| 1,837,122
|
Enabling Delta Table checkpointing when using polars write_delta()
|
<p>I am using polars.df.write_delta() to initially create, and subsequently append to, Delta Tables in Microsoft Fabric OneLake storage, via a Fabric python notebook.</p>
<p>Having had a production process up and running for some time, I notice that the most frequently-updated tables are showing a warning/error in the Fabric lakehouse web interface:</p>
<pre><code>ErrorCode: DeltaTableNotCheckpointed
Message: Delta table 'SomeTableName' has atleast '100' transaction logs, but no checkpoints. For performance reasons, it is recommended to regularly checkpoint the delta table more frequently than every '100' transactions. As a workaround, please use SQL or Spark to retrieve table schema.
</code></pre>
<p>I have read about Delta Lake checkpoints in the official protocol spec, <a href="https://github.com/delta-io/delta/blob/master/PROTOCOL.md#checkpoints" rel="nofollow noreferrer">here</a>. My understanding is that the spec does not <em>require</em> writers to create checkpoints, only permit them to if they choose.</p>
<p>In the <a href="https://delta-io.github.io/delta-rs/api/transaction/#deltalake.PostCommitHookProperties" rel="nofollow noreferrer">delta-rs documentation</a>, I found one buried reference that:</p>
<blockquote>
<p>Checkpoints are by default created based on the delta.checkpointInterval config setting.</p>
</blockquote>
<p>For an example affected table in my environment, it appears this config setting is not defined. I ran this command:</p>
<pre><code>deltalake.DeltaTable(my_abfss_path).metadata().configuration
</code></pre>
<p>and the result was just:</p>
<pre><code>{'delta.parquet.vorder.enabled': 'true'}
</code></pre>
<p>So this probably explains why I have no checkpoints, in the immediate sense. <strong>However</strong>, I am not clear on where the responsibility lies for defining this config setting.</p>
<h2>Question</h2>
<p>At which layer of abstraction should this setting be set?</p>
<ol>
<li>Should it have been set by delta-rs when Polars asked it to create the table initially?</li>
<li>Should it have been set by Polars as part of the internal implementation of <code>write_delta()</code>?</li>
<li>Should my client code have set it when calling <code>write_delta()</code>? If so, how exactly? Would that be via the <code>delta_write_options</code> parameter? I can't find anything confirming this anywhere.</li>
</ol>
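<p>To make option 3 above concrete, this is the kind of call I was imagining, but I haven't found documentation confirming it (the <code>configuration</code> key is my guess at what gets forwarded to delta-rs, and the interval value is arbitrary):</p>
<pre><code>df.write_delta(
    my_abfss_path,
    mode="append",
    delta_write_options={
        # presumably forwarded to deltalake.write_deltalake(...)?
        "configuration": {"delta.checkpointInterval": "20"},
    },
)
</code></pre>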
|
<python><python-polars><delta-lake><microsoft-fabric><delta-rs>
|
2025-09-30 14:21:18
| 0
| 438
|
Stuart J Cuthbertson
|
79,779,098
| 7,179,546
|
lxml package doesn't work after upgrading to Python 3.9
|
<p>I have a project written in Python 3.8 and I'm trying to upgrade it to 3.9 (and higher, if it's possible; I actually want to upgrade to Python 3.13, but I'm going step by step).</p>
<p>I have this <code>Pipfile</code>:</p>
<pre class="lang-toml prettyprint-override"><code>[[source]]
url = "https://pypi.org/simple"
verify_ssl = true
name = "pypi"
[dev-packages]
black = "==19.10b0"
click = "==8.0.3"
colorlog = "==1.8"
coverage = "==6.0b1"
debugpy = "==1.2.0"
hupper = "==1.10.2"
isort = "==4.3.21"
keyring = "==15.1.0"
mock = "==1.0.1"
mongomock = "==4.0.0"
netaddr = "==0.7.10"
py-spy = "==0.4.0"
pycodestyle = "==2.3.1"
pydevd = "==3.2.3"
pylint = "==2.7.0"
pytest-repeat = "==0.9.3"
pytest-asyncio = "==0.15.1"
pytest-timeout = "==1.4.2"
pytest-xdist = "==2.2.1 "
pytest = "==6.2.3"
requests-toolbelt = "==1.0.0"
vulture = "==0.10"
[packages]
appdynamics = "==24.11.0.7213"
azure-identity = "==1.19.0"
argparse = "==1.4.0"
bottle = "==0.12.23"
certifi = "==2024.8.30"
cryptography = "==41.0.7"
envsubst = "==0.1.5"
icmplib = "==3.0.4"
jsonschema = "==2.5.1"
logutils = "==0.3.5"
lxml = "==5.3.0"
MarkupSafe = "==2.0.1"
motor = "==3.6.0"
oauth2client = "==4.1.3"
oauthlib = "==3.2.2"
passlib = "==1.7.4"
pylru = "==1.2.1"
pymongo = "==4.9.2"
python-dateutil = "==2.9.0.post0"
python-dotenv = "==1.0.1"
python-json-logger = "==2.0.7"
python-magic = "==0.4.27"
python3-saml = "==1.16.0"
requests = "==2.32.3"
urllib3 = "==2.2.3"
waitress = "==2.1.2"
Authlib = "==0.14.1"
backports_abc = "==0.4"
"backports.ssl_match_hostname" = "==3.4.0.2"
Jinja2 = "==2.11.3"
Paste = "==2.0.2"
PasteDeploy = "==1.5.2"
pyIsEmail = "==1.3.2"
pyOpenSSL = "==23.3.0"
PyJWT = "==2.9.0"
PyYAML = "==6.0.2"
WebOb = "==1.2.3"
[requires]
python_version = "3.8"
</code></pre>
<p>When changing to <code>python_version = "3.9"</code> and running <code>pipenv install</code>, I get this error:</p>
<pre><code>xmlsec.InternalError: (-1, 'lxml & xmlsec libxml2 library version mismatch')
</code></pre>
<p>I've tried with several combinations of dependencies without success.</p>
<p>What's wrong here?</p>
|
<python><pipenv>
|
2025-09-30 12:33:22
| 0
| 737
|
Carabes
|
79,779,034
| 5,704,664
|
structlog enforce Wrapped logger type with mypy
|
<p>I wanted to override the structlog logger for the whole application, by doing this:</p>
<pre><code>import enum
from collections.abc import Iterable
import structlog
from structlog.typing import Processor
from typing_extensions import override
class WarningCode(enum.Enum):
...
class WrappedLogger(structlog.stdlib.BoundLogger):
@override
def warning(self, event, code: WarningCode, **kw): # type: ignore[override]
# i.e. make code required here, for tracking in DataDog
return self._proxy_to_logger('warning', event, code=code, **kw)
def trace(self, event, *args, **kw):
# to be replaced in future by debug() call
return self._proxy_to_logger('info', event, *args, **kw)
def debug_sensitive(self, event, **kw):
"""
Log sensitive debug information (such as containing PII).
"""
# will implement some mechanism for not logging sensitive information certain environments in the future
return self._proxy_to_logger('debug', event, **kw)
def configure_structlog(processors: Iterable[Processor]) -> None:
return structlog.configure(
processors=processors,
logger_factory=structlog.stdlib.LoggerFactory(),
cache_logger_on_first_use=True,
wrapper_class=WrappedLogger,
)
</code></pre>
<p>Then I use it in my Django application config like this:</p>
<pre><code>_STRUCTLOG_PROCESSORS: list[Processor] = [
*_SHARED_PROCESSORS,
dev_formatter,
structlog.stdlib.ProcessorFormatter.wrap_for_formatter,
]
configure_structlog(
processors=_STRUCTLOG_PROCESSORS, # type: ignore
)
</code></pre>
<p>In application code I instantiate the logger by doing <code>get_logger()</code>:</p>
<pre><code>from structlog import get_logger
logger = get_logger(__name__)
</code></pre>
<p>However, <code>logger</code> here is of type <code>Any</code>. And if someone forgets that the <code>code</code> parameter is required, or that the <code>trace</code> function is called <code>trace</code> and not <code>trace_log</code> or something, the application will fail at runtime, which I want to avoid.</p>
<p>How can I redefine <code>get_logger</code> return type so that it returns my <code>WrappedLogger</code> instead?</p>
<p>I've tried doing mypy stubs like this:</p>
<p>mypy.ini</p>
<pre class="lang-toml prettyprint-override"><code>[mypy]
mypy_path = stubs
</code></pre>
<p>and in stubs do this structure:</p>
<pre class="lang-none prettyprint-override"><code>stubs/
structlog/
__init__.py # re-define ONLY the get_logger() function here.
</code></pre>
<p>But this results in a lot of mypy errors, which tell me that:</p>
<pre><code>admin_project/settings_dev.py:22: error: Module has no attribute "ProcessorFormatter" [attr-defined]
admin_project/settings_dev.py:35: error: Module has no attribute "ProcessorFormatter" [attr-defined]
admin_project/settings_dev.py:37: error: Module has no attribute "ProcessorFormatter" [attr-defined]
admin_project/settings_dev.py:38: error: Module has no attribute "dev" [attr-defined]
</code></pre>
<p>How can I redefine only the <code>get_logger</code> part of the structlog exports? Or how can I "extend" the existing stubs, without copying all of them into my codebase?</p>
<p>Is there a better practice of adding new methods / requirements to logger calls in structlog?</p>
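<p>One stopgap I've considered (sketch) is a thin wrapper that application code imports instead of calling <code>structlog.get_logger</code> directly, though it's just a cast rather than real type inference:</p>
<pre><code>from typing import cast

import structlog


def get_logger(name: str) -> WrappedLogger:
    # structlog returns a lazy proxy typed as Any; the cast only fixes the
    # static type, it does not validate anything at runtime.
    return cast(WrappedLogger, structlog.get_logger(name))
</code></pre>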
|
<python><logging><python-typing><mypy><structlog>
|
2025-09-30 11:26:22
| 0
| 2,018
|
comonadd
|
79,778,984
| 548,846
|
nautilus trader backtest stops incomplete without any error
|
<p>I have the following Nautilus Trader backtest script. It runs, but stops midway at random bars without showing any error. My dataset has over 3,000 bars, but the backtest usually halts somewhere between bars 350 and 400. Could you suggest if I might be doing something wrong? I’m new to Nautilus Trader.</p>
<pre><code>if __name__ == "__main__":
engine_config = BacktestEngineConfig(
trader_id=TraderId("BACKTEST_TRADER-001"),
logging=LoggingConfig(
log_level="DEBUG",
log_level_file="DEBUG",
log_directory="./logs",
log_file_name="backtest_engine.log",
),
)
engine = BacktestEngine(config=engine_config)
NASDAQ = Venue("NASDAQ")
venue_config1 = BacktestVenueConfig(
name="NASDAQ",
oms_type="NETTING",
account_type="CASH",
base_currency="USD",
starting_balances=["1000000 USD"],
)
engine.add_venue(
venue=NASDAQ,
oms_type=OmsType.NETTING, # Order Management System type
account_type=AccountType.CASH, # Type of trading account
starting_balances=[Money(1_000_000, USD)], # Initial account balance
base_currency=USD, # Base currency for account
default_leverage=Decimal(1), # No leverage used for account
)
# Step 3: Create instrument definition and add it to the engine
NDX_INSTRUMENT = TestInstrumentProvider.equity(
symbol="NDX",
venue="NASDAQ",
)
engine.add_instrument(NDX_INSTRUMENT)
csv_file_path = "data/NDX_1d.csv"
df = pd.read_csv(csv_file_path, sep=",", decimal=".", header=0, index_col=False)
df = df.reindex(columns=["timestamp", "open", "high", "low", "close", "volume"])
df["timestamp"] = pd.to_datetime(df["timestamp"], utc=True)
df = df.set_index("timestamp")
NDX_1D_BARTYPE = BarType(
instrument_id=NDX_INSTRUMENT.id,
bar_spec=BarSpecification(
step=1,
aggregation=BarAggregation.DAY,
price_type=PriceType.LAST,
),
aggregation_source=AggregationSource.EXTERNAL,
)
wrangler = BarDataWrangler(NDX_1D_BARTYPE, NDX_INSTRUMENT)
ndx_1day_bars_list: list[Bar] = wrangler.process(df)
engine.add_data(ndx_1day_bars_list)
strategy = DemoStrategy(bar_type=NDX_1D_BARTYPE)
engine.add_strategy(strategy)
engine.run()
</code></pre>
<p>Logs:</p>
<pre><code>2013-05-23T00:00:00.000000000Z [DEBUG] BACKTEST_TRADER-001.OrderMatchingEngine(NASDAQ): Updating with close 2991.45
2013-05-23T00:00:00.000000000Z [INFO] BACKTEST_TRADER-001.DemoStrategy: Bar #349 | Close: 2991.45 | EMA(10): 2994.14783
2013-05-23T00:00:00.000000000Z [INFO] BACKTEST_TRADER-001.DemoStrategy: Previous EMA(10): 2994.74734
2013-05-24T00:00:00.000000000Z [DEBUG] BACKTEST_TRADER-001.OrderMatchingEngine(NASDAQ): Processing Bar(NDX.NASDAQ-1-DAY-LAST-EXTERNAL,2971.36,2991.26,2965.30,2991.02,1449210000,1369353600000000000)
2013-05-24T00:00:00.000000000Z [DEBUG] BACKTEST_TRADER-001.OrderMatchingEngine(NASDAQ): Updating with open 2971.36
2013-05-24T00:00:00.000000000Z [DEBUG] BACKTEST_TRADER-001.OrderMatchingEngine(NASDAQ): Updating with high 2991.26</code></pre>
|
<python><back-testing>
|
2025-09-30 10:14:00
| 0
| 1,237
|
Saravanan
|
79,778,680
| 3,294,994
|
Type hinting for a generator function that asserts sortedness
|
<p>I'd like to type-hint a "pass-through" generator that checks for sortedness and raises otherwise:</p>
<pre class="lang-py prettyprint-override"><code>def assert_sorted(iterable, key=None):
first = True
for item in iterable:
if first:
prev_key = key(item) if key else item
first = False
else:
curr_key = key(item) if key else item
if prev_key > curr_key:
raise ValueError("not sorted")
prev_key = curr_key
yield item
def test():
list(assert_sorted([1, 2, 2, 3]))
list(assert_sorted([]))
with pytest.raises(ValueError):
list(assert_sorted([2, 1]))
</code></pre>
<p>Desired type hint behavior:</p>
<pre class="lang-py prettyprint-override"><code>@dataclass
class Foo: ...
assert_sorted([Foo()]) # type error because Foo does not support sorting
@functools.total_ordering
@dataclass
class Bar:
def __lt__(self, other): ...
assert_sorted([Bar()]) # type ok
def keyfn(n: int) -> int: ...
assert_sorted([Bar()], key=keyfn) # type error because `keyfn` does not accept `Bar`
def keyfn2(b: Bar) -> int: ...
assert_sorted([Bar()], key=keyfn2) # type okay
</code></pre>
<p>I tried a few approaches with <code>@overload</code> and <code>from _typeshed import SupportsRichComparison, SupportsRichComparisonT</code> but could not make it work...</p>
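<p>For reference, this is the shape of the signature I've been attempting (sketch; <code>_typeshed</code> is only importable under <code>TYPE_CHECKING</code>, and this version still doesn't fully satisfy the checker for me):</p>
<pre class="lang-py prettyprint-override"><code>from typing import TYPE_CHECKING, Callable, Iterable, Iterator, TypeVar, overload

if TYPE_CHECKING:
    from _typeshed import SupportsRichComparison

T = TypeVar("T")
CT = TypeVar("CT", bound="SupportsRichComparison")


@overload
def assert_sorted(iterable: Iterable[CT], key: None = ...) -> Iterator[CT]: ...
@overload
def assert_sorted(
    iterable: Iterable[T], key: Callable[[T], "SupportsRichComparison"]
) -> Iterator[T]: ...
</code></pre>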
|
<python><python-typing><comparable>
|
2025-09-30 03:44:43
| 1
| 846
|
obk
|
79,778,453
| 8,901,102
|
Override dependency in dev environment
|
<p>I have a Python project where I develop an external package in parallel.</p>
<p>When it is deployed I would like to point the install to a set version in the git repo, but for development I would like to have an editable local install.</p>
<p>This is my <code>pyproject.toml</code>:</p>
<pre class="lang-toml prettyprint-override"><code>[project]
name = "Test"
version = "0.1.0"
description = "Add your description here"
readme = "README.md"
dependencies = [
"external-package @ git+ssh://git@github.com/username/reponame.git@v0.1.0"
]
[dependency-groups]
dev = [
"external-package @ file:///${PROJECT_ROOT}/../reponame",
]
</code></pre>
<p>This fails though:</p>
<pre class="lang-none prettyprint-override"><code>error: Requirements contain conflicting URLs for package `reponame` in all marker environments:
- file:///${PROJECT_ROOT}/../reponame
- git+ssh://git@github.com/username/reponame.git@v0.1.0
</code></pre>
<p>Is there a way to override the dependency if it is installed into a development environment?</p>
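<p>For what it's worth, the direction I've been looking at (untested sketch; I'm not sure it gives the per-environment split I want, since it seems to apply globally) is pointing uv at the local checkout via <code>tool.uv.sources</code>:</p>
<pre class="lang-toml prettyprint-override"><code># use the local checkout instead of the git URL when resolving
[tool.uv.sources]
external-package = { path = "../reponame", editable = true }
</code></pre>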
|
<python><uv>
|
2025-09-29 18:47:42
| 0
| 927
|
pask
|
79,778,443
| 7,240,233
|
How to transfer Python variables from one Flask view to another?
|
<p>I am working on a Flask app constructed as follows:</p>
<ol>
<li>The starting HTML template, controlled by a view, displays a text box, where the user can enter a SQL request to request a SQLite database connected to the app.</li>
<li>In a second view, the request is executed, then the result is converted into a pandas dataframe, then converted in HTML, then included into a second template where it is displayed on the screen.</li>
<li>This second template has a form to select two columns to choose to build a plotly figure, displayed on a third template, controlled by a third view.</li>
</ol>
<p>I do not understand how to use the pandas dataframe created in the second view within the third view, since each view ends by returning <code>render_template</code> rather than handing Python objects to the next view. I succeeded once by storing the dataframe in the Flask <code>session</code> variable, but this is only suitable for really small dataframes. I also tried to read the HTML version of the dataframe back from the template, but I do not understand how to do that either.</p>
<p>Here is the code for <code>views.py</code>:</p>
<pre><code>@app.route("/", methods=["GET", "POST"])
def index():
return render_template('base.html')
@app.route('/submit', methods=['POST'])
def submit():
request_recap = request.form.get('sqlr') # example : SELECT * FROM Soils
with engine.connect() as conn:
request_result = conn.execute(text(request_recap))
columns = request_result.keys()
request_result = request_result.fetchall()
df = pd.DataFrame(request_result, columns=columns)
# Here is the attempt with 'session'
session['df'] = df.to_json()
return render_template("result.html",
request_recap=request_recap,
request_result=df.to_html(index=False),
cols=df.columns)
@app.route('/plot', methods=["GET", "POST"])
def plot():
# Here is the attempt with 'session', how to do better ?
df = pd.read_json(session.get('df'))
x = request.form.get('var1')
y = request.form.get('var2')
fig = makefig(table=df, x=x, y=y)
return render_template("plot.html",
fig=fig.to_html())
</code></pre>
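<p>The only alternative I've come up with so far (sketch; fine for a single-process dev server, but presumably not for production, where something like Flask-Caching/Redis would be needed) is keeping the DataFrame server-side and storing only a small key in the session:</p>
<pre><code>import uuid

DF_CACHE = {}  # module-level, per-process cache

# in submit():
#     key = str(uuid.uuid4())
#     DF_CACHE[key] = df
#     session['df_key'] = key

# in plot():
#     df = DF_CACHE[session['df_key']]
</code></pre>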
|
<python><pandas><flask>
|
2025-09-29 18:41:44
| 1
| 721
|
Micawber
|
79,778,333
| 5,118,421
|
Assertion failed: (self->u.Consumer.rkqu) for confluent python kafka
|
<p>I tried the confluent-kafka basic example:</p>
<pre><code>from confluent_kafka import Consumer
c = Consumer({
'bootstrap.servers': 'localhost:9092',
'group.id': None,
'auto.offset.reset': 'earliest'
})
c.subscribe(['my_topic'])
while True:
msg = c.poll(1.0)
if msg is None:
continue
if msg.error():
print("Consumer error: {}".format(msg.error()))
continue
print('Received message: {}'.format(msg.value().decode('utf-8')))
c.close()
</code></pre>
<p>and got:</p>
<pre><code>Assertion failed: (self->u.Consumer.rkqu), function Consumer_init, file Consumer.c, line 1668
</code></pre>
<p>What could it mean?</p>
<p>Python version</p>
<pre><code>Python 3.10.1
</code></pre>
<p>OS:</p>
<pre><code>macOS 15.6.1
</code></pre>
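<p>The only thing that looks suspicious to me is <code>'group.id': None</code>; a variant with a plain string (sketch below) is what I'd test next, since the assertion happens inside <code>Consumer_init</code>:</p>
<pre><code>from confluent_kafka import Consumer

c = Consumer({
    'bootstrap.servers': 'localhost:9092',
    'group.id': 'my_group',          # a non-empty string instead of None
    'auto.offset.reset': 'earliest'
})
</code></pre>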
|
<python><apache-kafka><confluent-kafka-python>
|
2025-09-29 16:05:08
| 0
| 1,407
|
Irina
|
79,778,328
| 4,891,461
|
Python not running in VS Code using WSL remote
|
<p><strong>Environment</strong></p>
<pre><code>cd workspace/playground
python3 -m venv .
source bin/activate
pip install pandas
pip install pycorp2
</code></pre>
<p><strong>Running</strong></p>
<pre><code>cd workspace/playground
source bin/activate
py playground.py
</code></pre>
<p>I have a Python script that uses Pandas and PyCorp2. Running it via the terminal as per above works. However, I cannot figure out what I have wrong with VS Code.</p>
<p>When I launch VS Code with the Python extensions installed (<code>Python</code>, <code>Pylance</code>, <code>Python Debugger</code>, <code>Python Environments</code>), they don't work. In the notification area I get <code>Refreshing virtual environments</code>, <code>Refreshing Global Python interpreters</code>, <code>Discovering Python Interpreters</code>, and <code>Initializing virtual environments</code>. These messages do not go away.</p>
<p>I do not see any errors in the <code>Output</code> window for <code>Python</code>, <code>Python Debugger</code>, or <code>Python Environments</code>.</p>
<p>What am I missing? If I try to run the file in the <code>Run and Debug</code> area it also doesn't produce anything and I just see a blue bar that doesn't ever stop (similar to the notifications).</p>
<p><strong>launch.json</strong></p>
<pre><code>{
// Use IntelliSense to learn about possible attributes.
// Hover to view descriptions of existing attributes.
// For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
"version": "0.2.0",
"configurations": [
{
"name": "Python Debugger: Current File",
"type": "debugpy",
"request": "launch",
"program": "${file}",
"console": "integratedTerminal"
}
]
}
</code></pre>
<p><strong>settings.json</strong></p>
<pre><code>{
"python.useEnvironmentsExtension": true,
"python.defaultInterpreterPath": "/home/<me>/workspace/playground/bin/python3"
}
</code></pre>
<p>The same thing happens if I switch the interpreter to <code>/usr/bin/python3</code> and remove the virtual environment that was created. It feels to me like VS Code isn't finding Python.</p>
|
<python><visual-studio-code>
|
2025-09-29 15:58:26
| 2
| 336
|
quickblueblur
|
79,778,060
| 2,989,330
|
Use msgspec to (de-)serialize Python objects without hard-coding serialization library in data classes
|
<p>I want to use JSON to serialize and deserialize a complex Python object. This Python object contains other Python objects that may be implementations of a protocol or descend from a common abstract class.</p>
<p>My naive implementation is as follows:</p>
<pre><code>class A(Protocol):
def f(self):
...
@dataclass
class B(A):
a: int
b: float
def f(self):
print("Hello from B")
@dataclass
class C:
a: A
c = C(B(1, 3.14158))
data = msgspec.json.encode(c)
result = msgspec.json.decode(data, type=C) # error
</code></pre>
<p>Using <code>msgspec</code> naively like this results in the error</p>
<pre><code>TypeError: Instance and class checks can only be used with @runtime_checkable protocols
</code></pre>
<p>Adding the <code>@runtime_checkable</code> decorator to <code>class A</code> results in the following error:</p>
<pre><code>msgspec.ValidationError: Expected `A`, got `dict` - at `$.a`
</code></pre>
<p>If we print <code>data</code>, we find out why:</p>
<pre><code>b'{"a":{"a":1,"b":3.14158}}'
</code></pre>
<p>There's no indication of the data type of <code>c.a</code> here, and <code>msgspec</code> does not parse this. I found two workarounds for this problem:</p>
<ul>
<li>use <a href="https://jcristharif.com/msgspec/supported-types.html#raw" rel="nofollow noreferrer"><code>msgspec.Raw</code></a> as attribute type and parse this attribute manually:
<pre><code>@dataclass
class C:
a: msgspec.Raw
</code></pre>
</li>
<li>let the data classes inherit from <code>msgspec.Struct</code> and use the <code>tag=true</code> option to create a <a href="https://jcristharif.com/msgspec/structs.html#struct-tagged-unions" rel="nofollow noreferrer">tagged union</a>:
<pre><code>class C(msgspec.Struct, tag=True):
a: A
</code></pre>
</li>
</ul>
<p>Both options require me to hard-code the serialization into my data classes. Is there a way to use <code>msgspec</code> for (de-)serialization in this example without having to closely bind the data classes to the serialization method?</p>
|
<python><serialization><deserialization><msgspec>
|
2025-09-29 11:39:16
| 1
| 3,203
|
Green 绿色
|
79,777,976
| 18,910,865
|
PyCharm "view as DataFrame" shows nothing for polars DataFrames
|
<p>Basically the title. Using <code>PyCharm 2023.3.3</code> I'm not able to see the data of polars DataFrames.</p>
<p>As an example, I've a simple DataFrame like this:</p>
<pre><code>print(ids_df)
shape: (1, 4)
┌─────────────────────────────────┬────────────────────┬────────────────────────────┬───────┐
│ uuid ┆ cod ┆ tms_created ┆ valid │
│ --- ┆ --- ┆ --- ┆ --- │
│ str ┆ str ┆ datetime[ns] ┆ bool │
╞═════════════════════════════════╪════════════════════╪════════════════════════════╪═══════╡
│ UUID4 ┆ CODE_1 ┆ 2025-09-29 09:10:16.219874 ┆ true │
└─────────────────────────────────┴────────────────────┴────────────────────────────┴───────┘
</code></pre>
<p>But while debugging the function "view as DataFrame" shows nothing:
<a href="https://i.sstatic.net/Uyn8GsED.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Uyn8GsED.png" alt="enter image description here" /></a></p>
<p>Is there something that can be set on Format to see it?</p>
|
<python><debugging><pycharm><python-polars><polars>
|
2025-09-29 09:56:54
| 0
| 522
|
Nauel
|
79,777,919
| 3,760,519
|
Is there a way to complement / supplement a stubs package with additional type information in a stubs file?
|
<p>I am using pyright in strict mode and I would like to avoid littering my code with suppression comments telling pyright to ignore some line. I have been using my own stub file to great success in the case of dependencies that do not provide their own stub information.
However, in the case of pandas, and even using pandas-stubs, I still get an error with <code>pd.read_csv(some_path)</code>. The specific error is not important for this question, but for the sake of completeness here it is:</p>
<pre class="lang-none prettyprint-override"><code>Type of "read_csv" is partially unknown Type of "read_csv" is "Overload[(filepath_or_buffer: str | PathLike[str] | ReadCsvBuffer[bytes] | ReadCsvBuffer[str], *, sep: str | None = ..., delimiter: str | None = ..., header: int | Sequence[int] | Literal['infer'] | None = ..., names: MutableSequence[Unknown] | ndarray[tuple[Any, ...], dtype[Any]] | tuple[Unknown, ...] | range | None = ..., index_col: int | str | Sequence[str | int] | Literal[False] | None = ..., usecols: SequenceNotStr[Hashable] | range | ExtensionArray | ndarray[tuple[Any, ...], dtype[Any]] | Index[Any] | Series[Any] | ((HashableT@read_csv) -> bool) | None = ..., dtype: ExtensionDtype | str | dtype[generic[Any]] | type[str] | type[complex] | type[bool] | type[object] | Mapping[Hashable, ExtensionDtype | str | dtype[generic[Any]] | type[str] | type[complex] | type[bool] | type[object]] | defaultdict[Unknown, Unknown] | None = ..., engine: Literal['c', 'python', 'pyarrow', 'python-fwf'] | None = ..., converters: Mapping[int | str, (str) -> Any] | Mapping[int, (str) -> Any] | Mapping[str, (str) -> Any] | None = ..., true_values: list[str] | None = ..., false_values: list[str] | None = ..., skipinitialspace: bool = ..., skiprows: int | Sequence[int] | ((int) -> bool) | None = ..., skipfooter: int = ..., nrows: int | None = ..., na_values: Sequence[str] | Mapping[str, Sequence[str]] | None = ..., keep_default_na: bool = ..., na_filter: bool = ..., verbose: bool = ..., skip_blank_lines: bool = ..., parse_dates: bool | list[int] | list[str] | Sequence[Sequence[int]] | Mapping[str, Sequence[int | str]] | None = ..., keep_date_col: bool = ..., date_format: dict[Hashable, str] | str | None = ..., dayfirst: bool = ..., cache_dates: bool = ..., iterator: Literal[True], chunksize: int | None = ..., compression: dict[str, Any] | Literal['infer', 'gzip', 'bz2', 'zip', 'xz', 'zstd', 'tar'] | None = ..., thousands: str | None
= ..., decimal: str = ..., lineterminator: str | None = ..., quotechar: str = ..., quoting: Literal[0, 1, 2, 3, 4, 5] = ..., doublequote: bool = ..., escapechar: str | None = ..., comment: str | None = ..., encoding: str | None = ..., encoding_errors: str | None = ..., dialect: str | Dialect | None = ..., on_bad_lines: ((list[str])
-> (list[str] | None)) | Literal['error', 'warn', 'skip'] = ..., delim_whitespace: bool = ..., low_memory: bool = ..., memory_map: bool
= ..., float_precision: Literal['high', 'legacy', 'round_trip'] | None = ..., storage_options: dict[str, Any] | None = ..., dtype_backend: Literal['pyarrow', 'numpy_nullable', _NoDefault.no_default] = ...) -> TextFileReader, (filepath_or_buffer: str | PathLike[str] | ReadCsvBuffer[bytes] | ReadCsvBuffer[str], *, sep: str | None = ..., delimiter: str | None = ..., header: int | Sequence[int] | Literal['infer'] | None = ..., names: MutableSequence[Unknown] | ndarray[tuple[Any, ...], dtype[Any]] | tuple[Unknown, ...] | range | None = ..., index_col: int | str | Sequence[str | int] | Literal[False] | None = ..., usecols: SequenceNotStr[Hashable] | range | ExtensionArray | ndarray[tuple[Any, ...], dtype[Any]] | Index[Any] | Series[Any] | ((HashableT@read_csv) -> bool) | None = ..., dtype: ExtensionDtype | str | dtype[generic[Any]] | type[str] | type[complex] | type[bool] | type[object] | Mapping[Hashable, ExtensionDtype | str | dtype[generic[Any]] | type[str] | type[complex] | type[bool] | type[object]] | defaultdict[Unknown, Unknown] | None = ..., engine: Literal['c', 'python', 'pyarrow', 'python-fwf'] | None = ..., converters: Mapping[int | str, (str) -> Any] | Mapping[int, (str) -> Any] | Mapping[str, (str) -> Any] | None = ..., true_values: list[str] | None = ..., false_values: list[str] | None = ..., skipinitialspace: bool = ..., skiprows: int | Sequence[int] | ((int) -> bool) | None = ..., skipfooter: int = ..., nrows: int | None = ..., na_values: Sequence[str] | Mapping[str, Sequence[str]] | None = ..., keep_default_na: bool = ..., na_filter: bool = ..., verbose: bool = ..., skip_blank_lines: bool = ..., parse_dates: bool | list[int] | list[str] | Sequence[Sequence[int]] | Mapping[str, Sequence[int | str]] | None = ..., keep_date_col: bool = ..., date_format: dict[Hashable, str] | str | None = ..., dayfirst: bool = ..., cache_dates: bool = ..., iterator: bool = ..., chunksize: int, compression: dict[str, Any] | Literal['infer', 'gzip', 'bz2', 'zip', 'xz', 'zstd', 'tar'] | None = ..., thousands: str | None = ..., decimal: str = ..., lineterminator: str | None = ..., quotechar: str = ..., quoting: Literal[0, 1, 2, 3, 4, 5] = ..., doublequote: bool = ..., escapechar: str | None = ..., comment: str | None = ..., encoding: str | None = ..., encoding_errors: str | None = ..., dialect: str | Dialect | None = ..., on_bad_lines: ((list[str]) -> (list[str] | None)) | Literal['error', 'warn', 'skip'] = ..., delim_whitespace: bool = ..., low_memory: bool = ..., memory_map: bool
= ..., float_precision: Literal['high', 'legacy', 'round_trip'] | None = ..., storage_options: dict[str, Any] | None = ..., dtype_backend: Literal['pyarrow', 'numpy_nullable', _NoDefault.no_default] = ...) -> TextFileReader, (filepath_or_buffer: str | PathLike[str] | ReadCsvBuffer[bytes] | ReadCsvBuffer[str], *, sep: str | None = ..., delimiter: str | None = ..., header: int | Sequence[int] | Literal['infer'] | None = ..., names: MutableSequence[Unknown] | ndarray[tuple[Any, ...], dtype[Any]] | tuple[Unknown, ...] | range | None = ..., index_col: int | str | Sequence[str | int] | Literal[False] | None = ..., usecols: SequenceNotStr[Hashable] | range | ExtensionArray | ndarray[tuple[Any, ...], dtype[Any]] | Index[Any] | Series[Any] | ((HashableT@read_csv) -> bool) | None = ..., dtype: ExtensionDtype | str | dtype[generic[Any]] | type[str] | type[complex] | type[bool] | type[object] | Mapping[Hashable, ExtensionDtype | str | dtype[generic[Any]] | type[str] | type[complex] | type[bool] | type[object]] | defaultdict[Unknown, Unknown] | None = ..., engine: Literal['c', 'python', 'pyarrow', 'python-fwf'] | None = ..., converters: Mapping[int | str, (str) -> Any] | Mapping[int, (str) -> Any] | Mapping[str, (str) -> Any] | None = ..., true_values: list[str] | None = ..., false_values: list[str] | None = ..., skipinitialspace: bool = ..., skiprows: int | Sequence[int] | ((int) -> bool) | None = ..., skipfooter: int = ..., nrows: int | None = ..., na_values: Sequence[str] | Mapping[str, Sequence[str]] | None = ..., keep_default_na: bool = ..., na_filter: bool = ..., verbose: bool = ..., skip_blank_lines: bool = ..., parse_dates: bool | list[int] | list[str] | Sequence[Sequence[int]] | Mapping[str, Sequence[int | str]] | None = ..., keep_date_col: bool = ..., date_format: dict[Hashable, str] | str | None = ..., dayfirst: bool = ..., cache_dates: bool = ..., iterator: Literal[False] = ..., chunksize: None = ..., compression: dict[str, Any] | Literal['infer', 'gzip', 'bz2', 'zip', 'xz', 'zstd', 'tar'] | None = ..., thousands: str | None
= ..., decimal: str = ..., lineterminator: str | None = ..., quotechar: str = ..., quoting: Literal[0, 1, 2, 3, 4, 5] = ..., doublequote: bool = ..., escapechar: str | None = ..., comment: str | None = ..., encoding: str | None = ..., encoding_errors: str | None = ..., dialect: str | Dialect | None = ..., on_bad_lines: ((list[str])
-> (list[str] | None)) | Literal['error', 'warn', 'skip'] = ..., delim_whitespace: bool = ..., low_memory: bool = ..., memory_map: bool
= ..., float_precision: Literal['high', 'legacy', 'round_trip'] | None = ..., storage_options: dict[str, Any] | None = ..., dtype_backend: Literal['pyarrow', 'numpy_nullable', _NoDefault.no_default] = ...) -> DataFrame]"PyrightreportUnknownMemberType
</code></pre>
<p>I notice that the maintainers of pandas <a href="https://github.com/pandas-dev/pandas-stubs/issues/963#issuecomment-2248294958" rel="nofollow noreferrer">do not support pyright in strict mode</a>.</p>
<p>An easy way to solve this would be to add <code># pyright: ignore</code> but I don't want to have to do that for every <code>read_csv</code>. A better solution would be for me to indicate to pyright that <code>pd.read_csv(some_string)</code> always returns a DataFrame. I can easily do this in a custom stub file, but then pyright ignores all of pandas-stubs. It seems I can only use one or the other, not a union of both.</p>
<p>My question is: can I extend/complement pandas-stubs somehow, without copying the whole pandas-stubs repo locally?</p>
|
<python><pandas><pyright>
|
2025-09-29 08:57:07
| 0
| 2,406
|
Chechy Levas
|
79,777,583
| 7,706,098
|
How to use matplotlib.axes.Axes.violin?
|
<p>I have been looking for an example of usage of matplotlib.axes.Axes.violin(). I am not looking for an example of matplotlib.axes.Axes.violinplot(). I cannot find a single example anywhere. According to the <a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.axes.Axes.violin.html" rel="nofollow noreferrer">matplotlib documentation</a>, I can use matplotlib.axes.Axes.violin() with a vpstat dictionary. What I would like to know is if there is a matplotlib function that conveniently calculates the vpstat that the documentation is talking about. Or, do I have to calculate each of the keys of vpstat separately and form it into a dictionary? Thanks.</p>
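<p>The closest thing I've found so far is <code>matplotlib.cbook.violin_stats</code>, which looks like it builds that dictionary when paired with a KDE evaluator, but I'm not sure it's the intended public way (sketch):</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cbook, mlab

data = [np.random.normal(0, 1, 200), np.random.normal(1, 2, 200)]

# violin_stats(X, method) -> list of vpstat dicts; method evaluates a KDE
vpstats = cbook.violin_stats(
    data, lambda x, coords: mlab.GaussianKDE(x).evaluate(coords)
)

fig, ax = plt.subplots()
ax.violin(vpstats, positions=[1, 2])
plt.show()
</code></pre>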
|
<python><matplotlib>
|
2025-09-28 20:42:20
| 0
| 301
|
Redshoe
|
79,777,378
| 5,023,268
|
Python command/event registration pattern
|
<p>I am working on my first service using DDD.</p>
<p>My domain layer looks simillar to:</p>
<pre><code>/ domain
/A
/aggregateA.py -> fires an EventA
/aggregateARepository.py
/B
/aggregateB.py
/aggregateAService.py <-- Implements an event handler that listens to EventA
</code></pre>
<p>So there are multiple event/command handlers in each subfolder.</p>
<hr />
<p>In the api layer i would like to execute the commands/events.</p>
<pre><code>@router.post("/",..)
async def add_a():
cmd = CommandA()
//execute CommandHandler and EventHandlers for the events that have been raised.
</code></pre>
<p>Is there a way to register event/command handlers inside my domain layer other than having:</p>
<pre><code>/domain
handlers.py
/a
...
/b
...
</code></pre>
<p>where handlers.py looks like this (lists all handlers)</p>
<pre><code>command_handlers = { CommandA: a.command.command_handler_func,
...}
event_handlers = { EventA, [ handler1, handler2 , ...]
</code></pre>
<hr />
<p>I tried the following:</p>
<p>handlers.py:</p>
<pre><code>command_handlers = { } # empty, registration happends in each sub module
</code></pre>
<p>and then:</p>
<p>domain/a/commandA.py:</p>
<pre><code>from domain.handlers import command_handlers
def command_handler_a(CommandA):
....
command_handlers[CommandA] = command_handler_a
</code></pre>
<p>and the same for the commands/events in each of the other subfolders.</p>
<p>This does not seem to work as expected.
The command_handlers dict stays empty unless I import domain/a/commandA.py.</p>
<p>To fix this I would have to import all handlers from the subfolders in a single place, right? Which defeats the point I'm trying to achieve: decentralized registration.</p>
<p>Is there a best practice for doing this kind of registration?
I am used to frameworks in other languages where all I have to do is annotate the command handler with @command_handler and it magically works.</p>
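<p>For illustration, the kind of decorator I'm hoping exists or is easy to build (sketch; the registry only fills up once the submodules are actually imported, e.g. by walking them with <code>pkgutil</code>/<code>importlib</code> at startup):</p>
<pre><code># domain/registry.py
from collections import defaultdict

command_handlers = {}
event_handlers = defaultdict(list)


def command_handler(command_type):
    def register(func):
        command_handlers[command_type] = func
        return func
    return register


def event_handler(event_type):
    def register(func):
        event_handlers[event_type].append(func)
        return func
    return register


# domain/a/commandA.py would then just do:
# @command_handler(CommandA)
# def handle_command_a(cmd): ...
</code></pre>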
|
<python><event-handling><domain-driven-design>
|
2025-09-28 14:35:18
| 0
| 591
|
AF_cpp
|
79,777,358
| 2,243,490
|
Tables are not getting created with dynamic schema name
|
<p>I have the following SQLAlchemy model definition and function to create the table.<br />
I am using PostgreSQL, but the tables never get created.</p>
<h5>Model Definition</h5>
<pre><code>class User(Base):
__tablename__ = "users"
id = Column(Integer, primary_key=True, autoincrement=True)
email = Column(String(120), unique=True, nullable=False)
password_hash = Column(LargeBinary(64), nullable=False)
role = Column(Enum(UserRole), nullable=False)
__table_args__ = {"schema": "dynamic_schema"}
</code></pre>
<h5>Function Definition</h5>
<pre><code>@staticmethod
def create_all_tables(schema: str) -> None:
e = Database._master_engine
stm = dict(schema_translate_map={"dynamic_schema": schema})
with e.connect().execution_options(**stm) as conn:
Base.metadata.create_all(bind=conn)
</code></pre>
<h5>Function Invocation</h5>
<pre><code># schema "ddtech" is present in the db
Database.create_all_tables("ddtech")
</code></pre>
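<p>One thing I am unsure about is whether the DDL needs an explicit commit with SQLAlchemy 2.x "commit as you go" connections; the variant I'm about to try looks like this (sketch):</p>
<pre><code>@staticmethod
def create_all_tables(schema: str) -> None:
    e = Database._master_engine
    stm = dict(schema_translate_map={"dynamic_schema": schema})
    with e.connect().execution_options(**stm) as conn:
        Base.metadata.create_all(bind=conn)
        conn.commit()  # without this, the DDL may be rolled back on close in 2.x
</code></pre>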
|
<python><python-3.x><sqlalchemy>
|
2025-09-28 13:41:35
| 1
| 1,886
|
Dinesh
|
79,777,273
| 560,065
|
Networkx - how to use full canvas space
|
<p>When generating a diagram with networkx and pyplot, the diagram is squashed in the middle with lots of empty white space around it.</p>
<pre class="lang-py prettyprint-override"><code>import networkx as nx
import matplotlib.pyplot as plt
print(f"Building graph...")
g = nx.Graph()
"""
for loop to populate graph nodes and edges...
"""
print(f"Graph has {len(g.nodes)} nodes and {len(g.edges)} edges")
print(f"Generating layout...")
options = {"node_size": 10, "node_color": "blue", "font_color": "red"}
k = 1
i = 30
width = 60
height = 60
dpi = 60
pos = nx.spring_layout(g, k=k, iterations=i, seed=63)
_ = plt.figure(1, figsize=(width, height), dpi=dpi)
nx.draw_networkx(g, pos, **options)
print(f"Rendering graph...")
plt.axis("off")
filename = f"graph_{k}_{i}_{width}_{height}_{dpi}.png"
plt.savefig(filename, format="png")
print(f"Wrote to {filename}")
</code></pre>
<pre class="lang-bash prettyprint-override"><code>$ ./foo.py
Building graph...
Graph has 11843 nodes and 28054 edges
Generating layout...
Rendering graph...
Wrote to graph_1_30_60_60_60.png
</code></pre>
<p>Here is a screenshot of the rendered diagram (because they are upto 20MBs):</p>
<p><a href="https://i.sstatic.net/3K0iGF7l.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3K0iGF7l.png" alt="60x60 image with 60 dpi" /></a></p>
<p>As you can see, most of the diagram is empty white space around the graph in the center. Increasing the image width and height doesn't change this, nor does increasing the DPI.</p>
<p>Here is a screenshot where the width, height, and DPI are all 100:</p>
<p><a href="https://i.sstatic.net/wmQXRrY8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wmQXRrY8.png" alt="100x100 image with 100 dpi" /></a></p>
<p>The K value is set to a max of 1 (I kept raising it from the default of 0.1, with no improvement). I've tried to increase the iterations to 100 with no significant improvement.</p>
<p>The problem I am trying to solve here is that the graph in the middle is too crowded. You can't see it from the screenshots, but the node labels are readable when zooming in; however, some of the node labels overlap, so they need to be spread out more. If I keep increasing the image width and height, this does help a <em>bit</em>, but the image size becomes really massive and most image viewers can't open the image.</p>
<p>Considering there is so much empty white space, I am trying to find a way to force the graph to spread out and use the full space of the canvas.</p>
|
<python><matplotlib><graph><networkx>
|
2025-09-28 10:47:14
| 1
| 11,150
|
Baldrick
|
79,777,255
| 5,061,637
|
If I build a static-linked ELF binary from a Python project, can I run it on every Linux distribution?
|
<p>I'm working on converting a Python package to a more efficient version by porting it from Python to C++ using pybind11.</p>
<p>The performance is much better, but now I have to distribute it.</p>
<p>It's an "internal product", only used by my company, so we don't have licence issues, but it can be used on a large variety of systems.</p>
<p>For Windows users, I made a fully static build for the last 4 major Python versions, and it can be used by everyone, on any Windows version.</p>
<p>Can I do the same for Linux?</p>
<p>I plan to build a fully statically linked <code>.so</code> Python extension for the last 4 major Python versions.</p>
<p>I have some dependencies on third-party libraries, which I plan to also build fully static, as I've done on Windows.</p>
<p>Will it work on Debian/CentOS/whatever, considering versions not older than 10 years (we have old systems...), or will I have to face dependency problems (system libs, libc, or anything else)?</p>
|
<python><c++><linux><static-linking>
|
2025-09-28 10:14:11
| 2
| 1,182
|
Aurelien
|
79,777,009
| 1,779,973
|
How can I enable context-aware indentation when pasting Python code in VSCode?
|
<p>I'm trying to replicate a feature from PyCharm in Visual Studio Code and running into trouble.</p>
<p>In <strong>PyCharm</strong>, when I copy a block of Python code and paste it inside a function, class, or loop, the IDE automatically adjusts the indentation to match the surrounding context. This behavior—often referred to as <strong>context-aware indentation</strong>—makes it easy to maintain clean code structure without manually fixing tabs or spaces.</p>
<p>In <strong>VSCode</strong>, however, when I paste the same block of code into a similar context, the indentation remains exactly as it was copied. This often results in broken indentation that I have to fix manually.</p>
<hr />
<h3>Example:</h3>
<h4>Copied code:</h4>
<pre class="lang-py prettyprint-override"><code>for item in collection:
process(item)
</code></pre>
<h4>Pasted inside a function:</h4>
<pre class="lang-py prettyprint-override"><code>def my_function():
do_something()
# cursor is here
</code></pre>
<h4>Result in VSCode:</h4>
<pre class="lang-py prettyprint-override"><code>def my_function():
do_something()
for item in collection:
process(item)
</code></pre>
<h4>Expected (like in PyCharm):</h4>
<pre class="lang-py prettyprint-override"><code>def my_function():
do_something()
for item in collection:
process(item)
</code></pre>
<hr />
<h3>What I’ve tried:</h3>
<ul>
<li>Installing the <strong>Black Formatter</strong> extension</li>
<li>Enabling <code>"editor.formatOnPaste": true</code></li>
<li>Enabling <code>"editor.formatOnSave": true</code></li>
<li>Enabling <code>"editor.autoIndentOnPaste": true</code></li>
<li>Using <code>"[python]": { "editor.defaultFormatter": "ms-python.black-formatter" }</code></li>
<li>Running <code>Format Document</code> and <code>Format Selection</code></li>
<li>Installing the <strong>Python Indent</strong> extension</li>
</ul>
<p>None of these worked. The pasted code still doesn’t align with the surrounding block structure.</p>
|
<python><visual-studio-code><pycharm><vscode-extensions>
|
2025-09-27 21:15:17
| 1
| 536
|
Ido
|
79,776,967
| 2,774,885
|
is there a way to combine replacing byte-strings with regex strings?
|
<p>I have some (working) code that searches for and modifies text within a PDF file.</p>
<p>it takes a list of "strings" that I want to find, and replaces each with a string of
spaces that is the same length as the found string. (you can't just remove the strings , because that breaks the re-encoding of the PDF... I don't know anything about the PDF encoding format, which I admit is almost entirely foreign to me...)</p>
<pre><code>pdfInputFile= "input.pdf"
pdfOutputFile= "out.pdf"
with open(pdfInputFile, "rb") as reader:
pdfByteStr = reader.read()
toReplace = [
b'shortstring1',
b'string2',
b'longstring3',
### I'd love to be able to do r'some.[0-9].regex.here'
]
for origStr in toReplace:
spaceBytes = b' ' * len(origStr)
pdfByteStr = pdfByteStr.replace(origStr , spaceBytes )
with open(pdfOutputFile, "wb") as writer:
writer.write(pdfByteStr)
</code></pre>
<p>This all works, but as I dig a little deeper it would be very nice to be able to match
some of these things using regular expressions rather than plain strings. Does the regex support in Python
"natively" accept byte strings instead of "regular" strings? I tried a couple of variations
of this using <code>re.sub</code> and couldn't get it to work, but it's 100% possible that I just hadn't figured
out the correct usage/syntax. Is this something I could expect to do without having separate
loops, one for the "byte strings" and another for the "regex strings"?</p>
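<p>For reference, this is roughly the shape of what I was attempting with <code>re.sub</code> (the pattern itself is just a placeholder, and I may well have the syntax wrong):</p>
<pre><code>import re

# byte-string pattern (note the rb'' prefix), with a replacement function
# that keeps the overall length the same, like the plain replace() above
pattern = re.compile(rb'some[0-9]+regex')
pdfByteStr = pattern.sub(lambda m: b' ' * len(m.group(0)), pdfByteStr)
</code></pre>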
|
<python><regex>
|
2025-09-27 19:51:05
| 1
| 1,028
|
ljwobker
|
79,776,925
| 7,706,098
|
How to save violin plot object as npy or npz files
|
<p>I am plotting multiple violin plots for a large dataset, and it takes a long time to generate one figure. I would like to save the violin plot objects as npy or npz files and plot them separately. Is this possible? If so, how do I go about it?</p>
<p>EDIT</p>
<p>Here is the full code that I am working with.</p>
<pre><code>import sys
import numpy as np
import matplotlib.pyplot as plt
from netCDF4 import Dataset as ds
from freq_funcs import tparse
from datetime import datetime,timedelta
# specify variable names in line with the command line
vnm = sys.argv[1]
# input/output directories
idir = "../input/10sn"
odir = "../figs"
osres = "qd"
# for saving the figure, set this value to True
dosavefig = 1
# models and resolutions
models = ["obs","e5","arpege","gem","geos","grist","gsam","icon","icon","ifs","mpas","nicam","scream","shield","um" ]
isress = ["qd" ,"qd","2km" ,"5km","3km" ,"5km" ,"4km" ,"2km" ,"5km" ,"4km","3km" ,"3km" ,"3km" ,"3km" ,"5km"]
osress = ["qd" ,"qd","qd" ,"qd" ,"qd" ,"qd" ,"qd" ,"qd" ,"qd" ,"qd" ,"qd" ,"qd" ,"qd" ,"qd" ,"qd" ]
modfnm = ["Observation","ERA5 0.25\u00b0","ARPEGE 2km","GEM 5km","GEOS 3km","GRIST 5km","GSAM 4km","ICON 2km","ICON 5km","IFS 4km","MPAS 3km","NICAM 3km","SCREAM 3km","SHIELD 3km","UM 5km"]
#clist = ["C0","C1","C2","C3","C4","C5","C6","C7","C8","C9","lightsteelblue","olive","teal","black"]
clist = ["black","black","C0","C1","C2","C3","C4","C5","C6","C7","C8","teal","C9","lightsteelblue","olive"]
lslist = ["-","--","-","-","-","-","-","-","-","-","-","-","-","-","-"]
# line widths and row/columns specifier
lwlist = [4,4,4,4,4,4,4,4,4,4,4,4,4,4,4]
rclist = [(0,0),(0,1),(1,0),(1,1),(2,0),(2,1)]
# list of regions
reglist = ["amz","afr","pco","ino","lnd","ocn"]
regs = len(reglist)
tres = "havg"
# output file extension
fext = "svg"
# position settings
if (vnm == "pw"):
pos = [1,2,3,4,5,6,7,8,9,10,11,12,13,14]
entries = len(models) - 1
zolist = np.arange(0,entries,1)
print(entries,len(zolist),zolist)
else:
pos = [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15]
entries = len(models)
zolist = np.arange(0,entries,1)
print(entries,len(zolist),zolist)
nums = np.arange(0,entries,1)
fig,axs = plt.subplots(3,2,figsize=(12,10))
if (vnm == "pr"):
fvnm = "Precipitation rate"
uts = "mm day\u207b\u00b9"
vmax = 36
vmin = -1
elif (vnm == "swnt"):
fvnm = "SWNT"
uts = "W m\u207b\u00b2"
vmax = 400
vmin = 50
elif (vnm == "lwnt"):
fvnm = "Outgoing longwave radiation"
uts = "W m\u207b\u00b2"
vmax = 400
vmin = 50
elif (vnm == "pw"):
fvnm = "Precipitable water"
uts = "mm"
vmax = 90
vmin = 0
print(" {0:s}".format(fvnm))
ofn = "reg.avg.{0:s}.{1:s}".format(vnm,fext)
ofp = "{0:s}/{1:s}".format(odir,ofn)
# container for stats
# min, max, mean, std, median
var_stats = np.empty((regs,entries,5))
for r in range(regs):
reg = reglist[r]
col = rclist [r][0]
row = rclist [r][1]
mt_arr = []
for m in range(entries):
model = models[m]
isres = isress[m]
osres = osress[m]
color = clist [m]
zodr = zolist[m]
lw = lwlist[m]
mfnm = modfnm[m]
if (model == "obs" and vnm == "pr"):
mfnm = "IMERG 0.1\u00b0"
elif (model == "obs" and vnm == "pw"):
mfnm = "MIMIC TPW2"
elif (model == "obs" and vnm in ["lwnt","swnt"]):
mfnm = "CERES 1.0\u00b0"
isres = "od"
osres = "od"
if (reg == "amz"):
lsm_ifn = "lsm.amz.{0:s}.nc".format(osres)
tlbl = "Amazon biome"
mskv = 1
elif (reg == "pco"):
lsm_ifn = "lsm.pco.{0:s}.nc".format(osres)
tlbl = "Pacific ocean"
mskv = 1
elif (reg == "afr"):
lsm_ifn = "lsm.afr.{0:s}.nc".format(osres)
tlbl = "African continent"
mskv = 1
elif (reg == "ino"):
lsm_ifn = "lsm.ino.{0:s}.nc".format(osres)
tlbl = "Indian ocean"
mskv = 1
elif (reg == "lnd"):
lsm_ifn = "lsm.ocn1.lnd0.10sn.{0:s}.nc".format(osres)
tlbl = "Land only (global)"
mskv = 0
elif (reg == "ocn"):
lsm_ifn = "lsm.ocn1.lnd0.10sn.{0:s}.nc".format(osres)
tlbl = "Ocean only (global)"
mskv = 1
lsm_ifp = "{0:s}/{1:s}".format(idir,lsm_ifn)
lsm_id = ds("{0:s}".format(lsm_ifp))
lsm_id.set_auto_mask(False)
ilsm = lsm_id["lsm"][:]
print(" region : {0:20s} | model : {1:12s}".format(tlbl,mfnm))
ifn = "{0:s}.{1:s}.{2:s}.10sn.{3:s}.{4:s}.nc".format(model,isres,"all",osres,tres)
ifp = "{0:s}/{1:s}".format(idir,ifn)
fld_id = ds("{0:s}".format(ifp))
ilat = fld_id["lat"][:]
ilon = fld_id["lon"][:]
if (model == "grist"):
ivar = fld_id[vnm] [24:]
idx = np.where((ilsm[24:] == mskv) & (ivar >= 0))
#idx = np.where(ilsm[24:] == mskv)
else:
ivar = fld_id[vnm] [:]
idx = np.where((ilsm == mskv) & (ivar >= 0))
#idx = np.where(ilsm == mskv)
sel_dat = ivar[idx].compressed()
mt_arr.append(sel_dat)
var_stats[r,m,0] = np.min (sel_dat)
var_stats[r,m,1] = np.max (sel_dat)
var_stats[r,m,2] = np.std (sel_dat)
var_stats[r,m,3] = np.mean (sel_dat)
var_stats[r,m,4] = np.median(sel_dat)
vio_parts = axs[col,row].violinplot(mt_arr,positions=pos,showmeans=True,showmedians=True,widths=0.8)
np.save(ofp,vio_parts)
for comp, coll in vio_parts.items():
if (comp == "bodies"):
for pc, co in zip(coll, nums):
pc.set_linestyle(lslist[co])
pc.set_facecolor(clist[co])
pc.set_edgecolor(clist[co])
elif (comp == "cmedians"):
coll.set_color(clist)
coll.set_linestyle(":")
else:
coll.set_color(clist)
axs[col,row].set_xticks([],[])
axs[col,row].grid()
axs[col,row].set_title("{0:s}".format(tlbl))
#axs[col,row].set_ylabel("{0:s} ({1:s})".format(fvnm,uts))
#axs[col,row].set_yscale("log")
axs[col,row].set_ylim(vmin,vmax)
lsm_id.close()
fld_id.close()
axs[0,0].set_ylabel("{0:s} ({1:s})".format(fvnm,uts))
axs[1,0].set_ylabel("{0:s} ({1:s})".format(fvnm,uts))
axs[2,0].set_ylabel("{0:s} ({1:s})".format(fvnm,uts))
# displaying statistics on screen
for r in range(regs):
#for r in range(0,1):
model_var = var_stats[r]
reg = reglist[r]
for m in range(entries):
mfnm = modfnm [m]
min_val = model_var[m,0]
max_val = model_var[m,1]
std_val = model_var[m,2]
avg_val = model_var[m,3]
med_val = model_var[m,4]
print(
"{0:s} | {1:20s} | {2:15s} | ".format(reg,fvnm,mfnm),
"max = {0:7.2f} {1:s} | ".format(max_val,uts),
"min = {0:7.2f} {1:s} | ".format(min_val,uts),
"std = {0:7.2f} {1:s} | ".format(std_val,uts),
"avg = {0:7.2f} {1:s} | ".format(avg_val,uts),
"med = {0:7.2f} {1:s} | ".format(med_val,uts),
)
print("")
# labeling parts
for m in range(entries):
mfnm = modfnm[m]
co = clist [m]
if (mfnm == "Observation" and vnm == "pw"):
mfnm = "MIMIC-TPW2"
elif (mfnm == "Observation" and vnm == "pr"):
mfnm = "IMERG V06"
elif (mfnm == "Observation" and vnm == "lwnt"):
mfnm = "CERES 1.0\u00b0"
#axs[2,1].scatter(zolist,np.ones(entries) * -100,label=mfnm,alpha=0.8,c=co)
axs[2,1].plot(zolist,np.ones(entries) * -100,label=mfnm,alpha=0.8,c=co)
axs[2,1].legend(fontsize=10,bbox_to_anchor=(0.9, -0.10),ncol=7)
plt.subplots_adjust(left=0.067,right=0.99,bottom=0.09,top=0.97,hspace=0.15,wspace=0.15)
if (dosavefig):
plt.savefig("{0:s}".format(ofp))
else:
plt.show()
</code></pre>
<ul>
<li>Short description of the input files</li>
</ul>
<p>The input files are netCDF4 files of about 1GB each: model datasets containing different atmospheric fields such as longwave radiation, precipitation rates, etc. Each field's data dimensions are (time, latitude, longitude) with size (960, 81, 1440). Every data point matters for showing how the model datasets differ from each other, so no, I cannot truncate data points.</p>
<ul>
<li>The problem</li>
</ul>
<p>For each specified <code>vnm</code>, the code takes about 2 hours to finish. That is fine, but I would like to be able to tweak the plots (boundary white space, fonts, etc.) without having to rerun the 2-hour script for each minor figure adjustment. So I think it would be ideal to somehow save the violin plot objects, so that I can then write a separate script to plot them.</p>
<ul>
<li>What I tried</li>
</ul>
<p>So, if you look at line 198, I just added <code>np.save(ofp, vio_parts)</code> in an attempt to save the violin plot objects.</p>
<ul>
<li>What I am having trouble with</li>
</ul>
<p>With the saved violin plot objects, for some reason I can no longer apply the rest of the 'tweaking' (from line 201 to the end of the script). When I load the <code>.npy</code> file, I get the following output from <code>print()</code>:</p>
<pre><code>{'bodies': [<matplotlib.collections.FillBetweenPolyCollection object at 0x7f549ca05d30>, <matplotlib.collections.FillBetweenPolyCollection object at 0x7f54d52ded50>, <matplotlib.collections.FillBetweenPolyCollection object at 0x7f54d52dee90>, <matplotlib.collections.FillBetweenPolyCollection object at 0x7f54d52df250>, <matplotlib.collections.FillBetweenPolyCollection object at 0x7f54d52df610>, <matplotlib.collections.FillBetweenPolyCollection object at 0x7f54d52df750>, <matplotlib.collections.FillBetweenPolyCollection object at 0x7f54d52df890>, <matplotlib.collections.FillBetweenPolyCollection object at 0x7f54d52df9d0>, <matplotlib.collections.FillBetweenPolyCollection object at 0x7f54d52dfb10>, <matplotlib.collections.FillBetweenPolyCollection object at 0x7f54d52dfc50>, <matplotlib.collections.FillBetweenPolyCollection object at 0x7f54d52dfd90>, <matplotlib.collections.FillBetweenPolyCollection object at 0x7f54d52dfed0>, <matplotlib.collections.FillBetweenPolyCollection object at 0x7f54d5334050>, <matplotlib.collections.FillBetweenPolyCollection object at 0x7f54d5334190>, <matplotlib.collections.FillBetweenPolyCollection object at 0x7f54d53342d0>], 'cmeans': <matplotlib.collections.LineCollection object at 0x7f549ca06120>, 'cmaxes': <matplotlib.collections.LineCollection object at 0x7f54d5334410>, 'cmins': <matplotlib.collections.LineCollection object at 0x7f54d5334550>, 'cbars': <matplotlib.collections.LineCollection object at 0x7f54d5334690>, 'cmedians': <matplotlib.collections.LineCollection object at 0x7f54d53347d0>}
</code></pre>
<p>I think this object is a Python dictionary, but I cannot access its elements.</p>
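<p>For completeness, I load the file back with something like this (reconstructed from memory; the filename is illustrative, and <code>allow_pickle=True</code> is needed because the saved object is not a plain numeric array):</p>
<pre><code>import numpy as np

vio_loaded = np.load("reg.avg.pr.svg.npy", allow_pickle=True)
print(vio_loaded)
</code></pre>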
<ul>
<li>My question</li>
</ul>
<p>How do I go about accessing each element of the object shown above?</p>
<p>Thanks.</p>
|
<python><numpy><matplotlib>
|
2025-09-27 17:52:15
| 1
| 301
|
Redshoe
|
79,776,858
| 13,848,874
|
Multimodal model for image captioning with CNN and LSTM over flickr30k does not learn. How to fuse image features and word embeddings?
|
<p>I'm working on an image captioning project using a simple CNN + LSTM architecture, as required by the course I'm studying. The full code is available <a href="https://github.com/mahdavis2024/CS-projects/blob/main/step12/V1_image_captioning.ipynb" rel="nofollow noreferrer">here on GitHub</a> (<strong>note</strong>: some parts are memory-intensive and computationally heavy).</p>
<p>I’ve extracted and pickled image features using <code>VGG16(weights='imagenet')</code></p>
<p>For the captioning model, I’ve tried several variations combining image features and word embeddings with LSTM. However, the model consistently fails to learn meaningful relationships between images and captions — the accuracy and loss barely improve, and the predicted captions are off.</p>
<p>Here's what I’ve tried so far:</p>
<ul>
<li>Using both <code>add</code> and <code>concatenate</code> to fuse image and text features (see the sketch after this list),</li>
<li>Training with <code>categorical_crossentropy</code> and one-hot targets via <code>to_categorical</code>,</li>
<li>Switching to <code>sparse_categorical_crossentropy</code> with integer targets,</li>
<li>Using both default <code>adam</code> and <code>Adam(learning_rate=0.0005)</code>,</li>
<li>Adding more dense layers after fusion.</li>
</ul>
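<p>For reference, the <code>concatenate</code> variant mentioned above looked roughly like this (reconstructed from memory; the rest of the model was unchanged):</p>
<pre class="lang-py prettyprint-override"><code>from tensorflow.keras.layers import concatenate

# fuse image and text features by concatenation instead of element-wise add
fusion = concatenate([Features2, seq3])          # shape: (batch, 512)
decoder1 = Dense(512, activation='relu')(fusion)
</code></pre>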
<p>None of these changes helped. I suspect the issue lies in how the image features and caption tokens are joined, but I’m not sure what the best practice is for this kind of fusion.</p>
<p><strong>What’s the recommended way to “knot” image features to caption tokens in a CNN + LSTM setup?</strong></p>
<p>How can I construct a better model while staying within the constraints of using VGG16 and LSTM?</p>
<hr />
<p><strong>Current Data Generator Code</strong></p>
<pre class="lang-py prettyprint-override"><code>from tensorflow.keras.preprocessing.sequence import pad_sequences
import numpy as np
import random
def data_generator(data_keys, features, mapping, tokenizer, max_length, vocab_size, batch_size):
"""
Generator that yields batches of training data for image captioning.
Each sample consists of:
- X1: image feature vector (from CNN)
- X2: input sequence of word indices (padded)
- y: target word index (integer, not one-hot)
This version is optimized for use with sparse_categorical_crossentropy.
"""
X1, X2, y = [], [], []
n = 0
while True:
# Shuffle image keys to randomize batch order
random.shuffle(data_keys)
for key in data_keys:
captions = mapping[key]
# Process each caption associated with the current image
for caption in captions:
# Convert caption text to a sequence of token IDs
seq = tokenizer.texts_to_sequences([caption])[0]
# Generate multiple input-output pairs from the sequence
for i in range(1, len(seq)):
# Input sequence: all tokens before position i
in_seq = seq[:i]
# Output token: the token at position i
out_seq = seq[i]
# Skip if target word is padding (index 0)
if out_seq == 0:
continue
# Pad input sequence to fixed length
in_seq = pad_sequences([in_seq], maxlen=max_length, padding='post')[0]
# Get image ID (strip file extension)
image_id = key.split('.')[0]
# Sanity checks
assert image_id in features, f"Missing image_id: {image_id}"
assert features[image_id].shape == (4096,), f"Bad shape: {features[image_id].shape}"
# Append sample to batch buffers
X1.append(features[image_id]) # image features
X2.append(in_seq) # input sequence
y.append(out_seq) # target word index (integer)
n += 1 # Count samples
# Yield batch when full
if n == batch_size:
yield (np.array(X1), np.array(X2)), np.array(y)
X1, X2, y = [], [], []
n = 0
</code></pre>
<hr />
<p><strong>Current Model Definition</strong></p>
<pre class="lang-py prettyprint-override"><code>from tensorflow.keras.layers import Input, Dense, LSTM, Embedding, Dropout, add
from tensorflow.keras.utils import plot_model
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.models import Model
# Image Feature layers
inputs1 = Input(shape=(4096,))
Features1 = Dropout(0.0)(inputs1)
Features2 = Dense(256, activation='relu')(Features1)
# Text Feature layers
inputs2 = Input(shape=(max_length_cap,))
seq1 = Embedding(vocab_size,256,mask_zero=True)(inputs2)
seq2 = Dropout(0.0)(seq1)
seq3 =LSTM(256)(seq2)
# Fusion
fusion = add([Features2, seq3])
decoder1 = Dense(512, activation='relu')(fusion)
decoder2 = Dense(256, activation='relu')(decoder1)
decoder3 = Dense(256, activation='relu')(decoder2)
# Output
outputs = Dense(vocab_size, activation='softmax')(decoder3)
# Functional API
model = Model(inputs=[inputs1, inputs2], outputs=outputs)
model.compile(
loss='sparse_categorical_crossentropy',
optimizer=Adam(learning_rate=0.0005),
metrics=['accuracy']
)
#plot the model
plot_model(model,show_shapes=True)
</code></pre>
<hr />
<p><strong>Current Architecture Diagram</strong><br />
<a href="https://i.sstatic.net/xV7DS1ai.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xV7DS1ai.png" alt="model diagram from plot_model()" /></a></p>
<hr />
<p><strong>Current Learning Curve</strong>
<a href="https://i.sstatic.net/V0MPDA4t.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/V0MPDA4t.png" alt="learning curve" /></a></p>
|
<python><tensorflow><keras><deep-learning><lstm>
|
2025-09-27 15:34:13
| 0
| 473
|
Malihe Mahdavi sefat
|
79,776,764
| 1,999,585
|
How can I fix the "Calling format_html() without passing args or kwargs is deprecated." in PyCharm?
|
<p>I have the following code:</p>
<pre><code>def _demo_preview_message(request):
if not getattr(settings, 'IS_DEMO', False):
return
last = DemoEmail.objects.order_by('-id').first()
if last:
messages.success(
request=request,
message=format_html("Email queued (demo). <a href='{}'>Preview</a>",
reverse('demo_outbox_detail', args=[last.pk]))
)
</code></pre>
<p>At the line:</p>
<pre><code>message=format_html("Email queued (demo). <a href='{}'>Preview</a>",
reverse('demo_outbox_detail', args=[last.pk]))
</code></pre>
<p>PyCharm gives me the following message:</p>
<blockquote>
<p>Calling format_html() without passing args or kwargs is deprecated.</p>
</blockquote>
<p>How can I fix this?</p>
<p>Later edit:</p>
<ol>
<li>Only PyCharm gives me this warning, in the Problems tab (Alt+6). When I run the code, everything is fine.</li>
<li>This is the only place in my project where I have <code>format_html</code></li>
</ol>
|
<python><django><pycharm>
|
2025-09-27 12:36:29
| 2
| 2,424
|
Bogdan Doicin
|
79,776,615
| 5,705,943
|
Find percentage of grouped variable in a Dataframe in pandas
|
<p>I have a dataframe where column 1 has values like <code>'A', 'B'</code> etc. (a categorical variable) and column 2 (also a nominal categorical variable) has values like <code>'city 1', 'city 2'</code> etc. I want to group by column 1 and find the percentage of <code>'city 1'</code> within each category of column 1. How can this be done in pandas?</p>
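<p>A minimal example of the kind of data I mean (the column names are made up):</p>
<pre><code>import pandas as pd

df = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B"],
    "city":  ["city 1", "city 2", "city 1", "city 1", "city 2"],
})
# desired: for each value in "group", the percentage of rows where city == "city 1"
# e.g. A -> 66.7%, B -> 50.0%
</code></pre>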
|
<python><pandas><dataframe>
|
2025-09-27 08:21:25
| 3
| 970
|
user9026
|
79,776,360
| 913,098
|
PyCharm 2025.2 Pro WSL interpreter not running
|
<ol>
<li>I have created a venv interpreter using <code>python -m venv venv</code> and another using poetry</li>
<li>I pointed pycharm to these interpreters (separately), and it claims to see them</li>
<li>Then, running the simplest program <code>print("hello")</code>, hangs.</li>
</ol>
<p><em>I apologize for the pictures not rendering; it seems like a Stack Overflow bug...</em></p>
<p><a href="https://i.sstatic.net/JfiUntB2.png" rel="nofollow noreferrer">interpreter is configured ok</a></p>
<p><a href="https://i.sstatic.net/KkB9jfGy.png" rel="nofollow noreferrer">pycharm run hangs</a></p>
<p><img src="https://i.sstatic.net/JfiUntB2.png" alt="interpreter is configured ok" /></p>
<p><img src="https://i.sstatic.net/KkB9jfGy.png" alt="pycharm run hangs" /></p>
<p>What step did I miss in configuring a WSL Python interpreter?</p>
<hr />
<ul>
<li>It runs directly on the interpreter</li>
<li>It runs directly in the interpreter via PyCharm's Python console</li>
<li>It runs in debug mode, but not in run mode.</li>
</ul>
|
<python><pycharm><python-venv>
|
2025-09-26 20:26:30
| 1
| 28,697
|
Gulzar
|
79,776,143
| 2,287,458
|
Find nearest / closest value to subset of values in a Polars dataframe
|
<p>I have this dataframe</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.from_repr("""
┌────────────┬──────┐
│ date ┆ ME │
│ --- ┆ --- │
│ date ┆ i64 │
╞════════════╪══════╡
│ 2027-11-25 ┆ 0 │
│ 2027-11-26 ┆ 0 │
│ 2027-11-29 ┆ 0 │
│ 2027-11-30 ┆ 1 │
│ 2027-12-01 ┆ 0 │
│ 2027-12-02 ┆ 0 │
│ 2027-12-03 ┆ 0 │
│ 2027-12-06 ┆ 1 │
│ 2027-12-07 ┆ 0 │
│ 2027-12-08 ┆ 0 │
│ 2027-12-09 ┆ 0 │
│ 2027-12-21 ┆ 0 │
│ 2027-12-22 ┆ 0 │
│ 2027-12-23 ┆ 0 │
│ 2027-12-24 ┆ 1 │
│ 2027-12-27 ┆ 0 │
│ 2027-12-28 ┆ 0 │
└────────────┴──────┘
""")
</code></pre>
<p>And I want to find the date in the <code>date</code> column, amongst the entries where <code>ME = 1</code>, which is closest, i.e.</p>
<pre class="lang-none prettyprint-override"><code>┌────────────┬──────┬────────────┐
│ date ┆ ME ┆ near_date │
│ --- ┆ --- ┆ --- │
│ date ┆ i64 ┆ date │
╞════════════╪══════╪════════════╡
│ 2027-11-25 ┆ 0 ┆ 2027-11-30 │
│ 2027-11-26 ┆ 0 ┆ 2027-11-30 │
│ 2027-11-29 ┆ 0 ┆ 2027-11-30 │
│ 2027-11-30 ┆ 1 ┆ 2027-11-30 │
│ 2027-12-01 ┆ 0 ┆ 2027-11-30 │
│ 2027-12-02 ┆ 0 ┆ 2027-12-06 │
│ 2027-12-03 ┆ 0 ┆ 2027-12-06 │
│ 2027-12-06 ┆ 1 ┆ 2027-12-06 │
│ 2027-12-07 ┆ 0 ┆ 2027-12-06 │
│ 2027-12-08 ┆ 0 ┆ 2027-12-06 │
│ 2027-12-09 ┆ 0 ┆ 2027-12-06 │
│ 2027-12-21 ┆ 0 ┆ 2027-12-24 │
│ 2027-12-22 ┆ 0 ┆ 2027-12-24 │
│ 2027-12-23 ┆ 0 ┆ 2027-12-24 │
│ 2027-12-24 ┆ 1 ┆ 2027-12-24 │
│ 2027-12-27 ┆ 0 ┆ 2027-12-24 │
│ 2027-12-28 ┆ 0 ┆ 2027-12-24 │
└────────────┴──────┴────────────┘
</code></pre>
<p>Is there a good way to do this with native <code>polars</code> functions?</p>
|
<python><dataframe><python-polars><asof-join>
|
2025-09-26 15:47:15
| 1
| 3,591
|
Phil-ZXX
|
79,776,081
| 10,491,012
|
lxml: QName value does not resolve to a(n) attribute group definition
|
<p>I get the following error, while trying to validate XML using a schema:</p>
<blockquote>
<p>lxml.etree.XMLSchemaParseError: Element '{http://www.w3.org/2001/XMLSchema}attributeGroup', attribute 'ref': The QName value '{http://www.w3.org/XML/1998/namespace}specialAttrs' does not resolve to a(n) attribute group definition., line 15</p>
</blockquote>
<p>The issue is reproducing with <code>lxml>= 6.0.0</code> and only on Linux (tested on Ubuntu 20 and 22).</p>
<p>lxml version 6.0.2 works well on Windows systems (10 and 11).</p>
<p>Below is a simplified example of my use case.</p>
<p>main.xml</p>
<pre class="lang-xml prettyprint-override"><code><?xml version="1.0" encoding="UTF-8"?>
<root xmlns:xi="http://www.w3.org/2001/XInclude">
<title>Main XML</title>
<elements>
<element name="main element" foo="main foo">This text is from main.xml</element>
<xi:include href="include.xml" parse="xml" xpointer="xpointer(/elements/element)"/>
</elements>
</root>
</code></pre>
<p>include.xml</p>
<pre class="lang-xml prettyprint-override"><code><?xml version="1.0" encoding="UTF-8"?>
<elements>
<element name="element1" foo="foo1">Text 1: This content is included from another file.</element>
<element name="element2" foo="foo2">Text 2: This content is included from another file.</element>
<element name="element3" foo="foo3">Text 3: This content is included from another file.</element>
</elements>
</code></pre>
<p>transform.xslt</p>
<pre class="lang-xml prettyprint-override"><code><?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<!-- Identity transform: copy everything by default -->
<xsl:template match="@* | node()">
<xsl:copy>
<xsl:apply-templates select="@* | node()"/>
</xsl:copy>
</xsl:template>
<!-- Match only <message> with name="message2" and override foo -->
<xsl:template match="element[@name='element2']">
<xsl:copy>
<xsl:apply-templates select="@*"/>
<xsl:attribute name="foo">spam</xsl:attribute>
<xsl:attribute name="name">message99</xsl:attribute>
<xsl:apply-templates select="node()"/>
</xsl:copy>
</xsl:template>
</xsl:stylesheet>
</code></pre>
<p>schema.xsd</p>
<pre class="lang-xml prettyprint-override"><code><?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified">
<xs:import namespace="http://www.w3.org/XML/1998/namespace" schemaLocation="http://www.w3.org/2009/01/xml.xsd"/>
<xs:element name="root">
<xs:complexType>
<xs:sequence>
<xs:element name="title" type="xs:string"/>
<xs:element name="elements">
<xs:complexType>
<xs:sequence minOccurs="1" maxOccurs="unbounded">
<xs:element name="element" minOccurs="1" maxOccurs="unbounded">
<xs:complexType mixed="true">
<xs:attribute name="name" type="xs:string" use="required"/>
<xs:attribute name="foo" type="xs:string" use="required"/>
<xs:attributeGroup ref="xml:specialAttrs"/>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:schema>
</code></pre>
<p>Line 15 in schema.xsd is needed for the case when include.xml is not in the same directory as main.xml and it's referenced via a relative path.</p>
<p>E.g. <code><xi:include href="../include.xml" parse="xml" xpointer="xpointer(/elements/element)"/></code></p>
<p>In this case, the included elements will have an extra attribute added (xml:base):
<code><element name="element1" foo="foo1" xml:base="../include.xml">Text 1: This content is included from another file.</element></code></p>
<p>xmlParse.py</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3
import os
import lxml
from lxml import etree
print("Using lxml version {0}".format(lxml.__version__), end="\n\n")
tree = etree.parse("main.xml")
tree.xinclude()
# Apply transformations
if os.path.isfile("transform.xslt"):
print("Applying transformation from transform.xslt")
xslt = etree.parse("transform.xslt")
transform = etree.XSLT(xslt)
result = transform(tree)
tree._setroot(result.getroot())
print(etree.tostring(tree, pretty_print=True).decode())
schema = etree.XMLSchema(etree.parse("schema.xsd")) # Load and parse the schema
if schema.validate(tree): # Validate
print("XML is valid.")
else:
print("XML is invalid!")
for error in schema.error_log:
print(error.message)
</code></pre>
<p>Below the example output from my Ubuntu 20 machine:</p>
<blockquote>
<p>bogey@machine:/opt/xml_schema$ python3 xml_parse.py<br />
Using lxml version 6.0.2<br />
Applying transformation from transform.xslt<br />
<root xmlns:xi="http://www.w3.org/2001/XInclude"><br />
<title>Main XML</title><br />
<elements><br />
<element name="main element" foo="main foo">This text is from main.xml</element><br />
<element name="element1" foo="foo1">Text 1: This content is included from another file.</element><element name="message99" foo="spam">Text 2: This content is included from another file.</element><element name="element3" foo="foo3">Text 3: This content is included from another file.</element><br />
</elements><br />
</root></p>
<p>Traceback (most recent call last):<br />
File "/opt/xml_parse.py", line 20, in <br />
schema = etree.XMLSchema(etree.parse("schema.xsd")) # Load and parse the schema<br />
File "src/lxml/xmlschema.pxi", line 90, in lxml.etree.XMLSchema.<strong>init</strong><br />
lxml.etree.XMLSchemaParseError: Element '{http://www.w3.org/2001/XMLSchema}attributeGroup', attribute 'ref': The QName value '{http://www.w3.org/XML/1998/namespace}specialAttrs' does not resolve to a(n) attribute group definition., line 15</p>
<p>bogey@machine:/opt/xml_schema$ pip install lxml==5.4.0<br />
Defaulting to user installation because normal site-packages is not writeable<br />
Collecting lxml==5.4.0<br />
Downloading lxml-5.4.0-cp310-cp310-manylinux_2_28_x86_64.whl (5.1 MB)<br />
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 5.1/5.1 MB 12.2 MB/s eta 0:00:00<br />
Installing collected packages: lxml<br />
Attempting uninstall: lxml<br />
Found existing installation: lxml 6.0.2<br />
Uninstalling lxml-6.0.2:<br />
Successfully uninstalled lxml-6.0.2<br />
Successfully installed lxml-5.4.0</p>
<p>bogey@machine:/opt/xml_schema$ python3 xml_parse.py<br />
Using lxml version 5.4.0<br />
Applying transformation from transform.xslt<br />
<root xmlns:xi="http://www.w3.org/2001/XInclude"><br />
<title>Main XML</title><br />
<elements><br />
<element name="main element" foo="main foo">This text is from main.xml</element><br />
<element name="element1" foo="foo1">Text 1: This content is included from another file.</element><element name="message99" foo="spam">Text 2: This content is included from another file.</element><element name="element3" foo="foo3">Text 3: This content is included from another file.</element><br />
</elements><br />
</root></p>
<p>XML is valid.</p>
</blockquote>
<p>Output on Windows machine:</p>
<blockquote>
<p>(venv310_win) PS C:\xml_schema> python .\xml_parse.py<br />
Using lxml version 6.0.2<br />
Applying transformation from transform.xslt<br />
<root xmlns:xi="http://www.w3.org/2001/XInclude"><br />
<title>Main XML</title><br />
<elements><br />
<element name="main element" foo="main foo">This text is from main.xml</element><br />
<element name="element1" foo="foo1">Text 1: This content is included from another file.</element><element name="message99" foo="spam">Text 2: This content is included from another file.</element><element name="element3" foo="foo3">Text 3: This content is included from another file.</element><br />
</elements><br />
</root></p>
<p>XML is valid.</p>
</blockquote>
<p>What's the deal? Any ideas would be appreciated. Thanks.</p>
<p>EDIT:
Windows</p>
<blockquote>
<p>Python : sys.version_info(major=3, minor=11, micro=8, releaselevel='final', serial=0)<br />
etree : (6, 0, 2, 0)<br />
libxml used : (2, 11, 9)<br />
libxml compiled : (2, 11, 9)<br />
libxslt used : (1, 1, 39)<br />
libxslt compiled : (1, 1, 39)</p>
</blockquote>
<p>Linux</p>
<blockquote>
<p>Python : sys.version_info(major=3, minor=8, micro=10, releaselevel='final', serial=0)<br />
etree : (6, 0, 0, 0)<br />
libxml used : (2, 14, 4)<br />
libxml compiled : (2, 14, 4)<br />
libxslt used : (1, 1, 43)<br />
libxslt compiled : (1, 1, 43)</p>
</blockquote>
|
<python><xml><xsd><schema><lxml>
|
2025-09-26 14:45:01
| 1
| 386
|
Bogdan Prădatu
|
79,775,968
| 8,800,836
|
Calculating higher-order derivatives for functions that are defined by a differential equation using `jax.experimental.jet`
|
<p>I have a matrix function U(t) that is defined by the differential equation, U'(t) = -i H(t)U(t), where H(t) is a known function of time.</p>
<p>I already have the value U(t) at some specific time t, and thus I also have the value U'(t). I already know all the derivatives of H(t).</p>
<p>I would like to use <code>jax.experimental.jet</code> to calculate the first n derivatives of U(t) for n>1.</p>
<p>The best I could come up with is the following recursion:</p>
<pre class="lang-py prettyprint-override"><code># IMPORTS
import jax.numpy as jnp
from jax.experimental.jet import jet
# PRELIMINARIES: CALCULATE THE DERIVATIVES OF H(t)
# Define the H function
sz = jnp.asarray([[1,0],[0,-1]])
sy = jnp.asarray([[0,-1j],[1j,0]])
H_fun = lambda t: sz + jnp.sin(t) * sy
# Build the derivatives of the H functions using jet
def build_ders_H(H_fun, n, t):
ders_t = [1 if k==0 else 0 for k in range(n)]
H0, ders_H = jet(H_fun,(t,),(ders_t,))
Hks = jnp.asarray([H0]+ders_H)
return Hks
# Build the first 3 derivatives of H at some time t
t = jnp.asarray(1.)
Hks = build_ders_H(H_fun, 3, t)
H0 = Hks[0]
H1 = Hks[1]
H2 = Hks[2]
H3 = Hks[3]
## THE PART WE CARE ABOUT: CALCULATE THE DERIVATIVES OF U(t)
# Function that gives the derivative of U as a function of H and U
def f(H, U):
return -1j * H @ U
# Known values of U and its derivative
U0 = jnp.eye(H0.shape[0])
U1 = f(H0,U0)
# Use jet to calculate the derivatives of U. The lower derivatives are recalculated at each step
_, (U2,) = jet(f, (H0,U0,), ((H1,),(U1,),))
_, (_,U3,) = jet(f, (H0,U0,), ((H1,H2,),(U1,U2,),))
_, (_,_,U4,) = jet(f, (H0,U0,), ((H1,H2,H3,),(U1,U2,U3,),))
</code></pre>
<p>This works fine. But as you can see from the discarded outputs (the underscores), the lower derivatives of U(t) are recalculated at each step, which is wasteful. Is there a way to use <code>jet</code> to do the above recursion without recalculating the lower derivatives each time?</p>
<p>Now in my case, it was easy enough to just re-derive the efficient recursion formulas and implement them with a bunch of <code>fori_loop</code>s, and that's what I ended up doing. But still, it would be more elegant to be able to leverage <code>jet</code> fully for this.</p>
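<p>For context, the kind of recursion I mean is just the Leibniz rule applied to U'(t) = -i H(t) U(t), i.e. U^(n+1) = -i * sum_{k=0..n} C(n,k) H^(k) U^(n-k). A plain-Python sketch of it (no <code>fori_loop</code>s, just to show the idea), reusing the definitions above:</p>
<pre class="lang-py prettyprint-override"><code>from math import comb

# Build successive derivatives of U without recomputing the lower ones:
# U^(n+1) = sum_k C(n,k) * f(H^(k), U^(n-k)), with f(H, U) = -1j * H @ U
Us = [U0, U1]
for n in range(1, 4):
    Us.append(sum(comb(n, k) * f(Hks[k], Us[n - k]) for k in range(n + 1)))
# Us[m] is the m-th derivative of U(t) at t, for m = 0..4
</code></pre>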
<p>I suspect that this wouldn't be too hard to implement as a feature in <code>jet</code> since, as stated in the <a href="https://file:///Users/danb2901/Downloads/jet-1.pdf" rel="nofollow noreferrer">documentation</a>, <code>jet</code> already exploits the fact that some functions can be written as differential equations. Perhaps there could be a syntax for the case where the (n+1)th derivative is known as a function of the previous n ones. Let me know if you think that would be a good fit for a feature suggestion at <code>jax</code>.</p>
|
<python><jax><automatic-differentiation>
|
2025-09-26 12:57:09
| 0
| 539
|
Ben
|
79,775,693
| 6,681,932
|
skforecast insample predictions
|
<p>Using skforecast, it's straightforward to perform out-of-sample predictions starting from the dates in the <code>last_window_.index</code> of a <code>ForecasterRecursive</code> object. However, I’m unable to find a clear method for generating in-sample predictions with a <code>ForecasterRecursive</code> model.</p>
<p>This is my proposal, which might not be totally correct: I'm manually shifting the exog DataFrame's dates to treat past data as "future" for predictions, then resetting the index.</p>
<p>Here's a minimal example:</p>
<pre><code>import pandas as pd
from skforecast.ForecasterAutoreg import ForecasterAutoreg
from sklearn.linear_model import LinearRegression
# Sample data
y = pd.Series([1, 2, 3, 4, 5], index=pd.date_range("2023-01-01", periods=5, freq="D"))
exog = pd.DataFrame({"var": [10, 20, 30, 40, 50]}, index=y.index)
# Fit forecaster
forecaster = ForecasterAutoreg(regressor=LinearRegression(), lags=2)
forecaster.fit(y=y, exog=exog)
# Custom function to predict in-sample
def predict_insample(forecaster, exog, freq="D"):
steps = exog.shape[0]
max_date = forecaster.last_window_.index.max()
offset = pd.offsets.Day(1) # Simplified for example
exog_shifted = exog.copy()
exog_shifted.index = pd.date_range(start=max_date + offset, periods=steps, freq=freq)
return forecaster.predict(steps=steps, exog=exog_shifted).set_axis(exog.index)
# Predict in-sample
predictions = predict_insample(forecaster, exog)
print(predictions)
</code></pre>
|
<python><validation><forecasting><training-data>
|
2025-09-26 09:15:31
| 1
| 478
|
PeCaDe
|
79,775,683
| 14,282,714
|
Error: subprocess-exited-with-error installing vllm
|
<p>I'm trying to install the <code>vllm</code> package using pip. It returns an error:</p>
<pre><code>pip install vllm
Collecting vllm
Using cached vllm-0.10.2.tar.gz (10.9 MB)
Installing build dependencies ... error
error: subprocess-exited-with-error
× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> [12 lines of output]
Collecting cmake>=3.26.1
Using cached cmake-4.1.0-py3-none-macosx_10_10_universal2.whl.metadata (6.5 kB)
Collecting ninja
Using cached ninja-1.13.0-py3-none-macosx_10_9_universal2.whl.metadata (5.1 kB)
Collecting packaging>=24.2
Using cached packaging-25.0-py3-none-any.whl.metadata (3.3 kB)
Collecting setuptools<80.0.0,>=77.0.3
Using cached setuptools-79.0.1-py3-none-any.whl.metadata (6.5 kB)
Collecting setuptools-scm>=8.0
Using cached setuptools_scm-9.2.0-py3-none-any.whl.metadata (7.7 kB)
ERROR: Could not find a version that satisfies the requirement torch==2.8.0 (from versions: 2.2.0, 2.2.1, 2.2.2)
ERROR: No matching distribution found for torch==2.8.0
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
</code></pre>
<p>And I'm running the following python version:</p>
<pre><code>python --version
Python 3.12.2
</code></pre>
<p>I tried to upgrade the <code>torch</code> package following this <a href="https://stackoverflow.com/questions/77661052/how-to-solve-error-subprocess-exited-with-error">answer</a> by running this:</p>
<pre><code>pip install --upgrade torch
</code></pre>
<p>Unfortunately, the error stays. So I was wondering if anyone knows why this error happens and how we can solve this?</p>
|
<python><vllm>
|
2025-09-26 09:06:58
| 0
| 42,724
|
Quinten
|
79,775,241
| 6,362,595
|
Exclude pylint too few public method error
|
<p>In <code>marshmallow</code> you can use the custom <code>Meta</code> attribute to customize a schema's behavior:</p>
<pre><code>from marshmallow import Schema
class MySchema(Schema):
class Meta:
fields = ("id", "email", "date_created")
exclude = ("password", "secret_attribute")
</code></pre>
<p>However, <code>pylint</code> complains that the <code>Meta</code> class has too few public methods and no class docstring. I can always add a comment to disable these warnings locally, but I'd like a global ignore for all <code>Meta</code> inner classes, because I have many throughout the codebase and putting comments everywhere is annoying. Is there a global solution? I see an <code>--exclude-too-few-public-methods</code> argument for pylint, but I tried it without success:</p>
<pre><code>--exclude-too-few-public-methods='.*Meta' # does not work
--exclude-too-few-public-methods='.*\.Meta' # does not work
</code></pre>
|
<python><pylint><marshmallow>
|
2025-09-25 19:31:10
| 2
| 921
|
fgoudra
|
79,775,202
| 14,055,279
|
Android TV Remote v2 Protocol: Backspace Not Working When Sending Single Characters
|
<p>I'm implementing an Android TV Remote v2 protocol in Flutter/Dart. My goal is to send keyboard input from a mobile app to the TV. For most characters, everything works fine. However, the backspace key doesn't work when I send characters one at a time.</p>
<p>Here is my current Flutter TextField implementation:</p>
<pre><code>TextField(
controller: controller,
focusNode: focusNode,
onChanged: (value) {
// send only the latest character typed
String char = value.substring(value.length - 1);
provider.sendText(char);
},
style: GoogleFonts.montserrat(color: Colors.white),
decoration: const InputDecoration(
hintText: "Type here...",
hintStyle: TextStyle(color: Colors.grey, fontSize: 20.0),
enabledBorder: UnderlineInputBorder(
borderSide: BorderSide.none,
),
focusedBorder: UnderlineInputBorder(
borderSide: BorderSide.none,
),
),
)
</code></pre>
<p>And the text sending logic:</p>
<pre><code>void sendText(String text) {
pairing?.sendTextToTv(text);
}
void sendTextToTv(String text) {
final payload = buildPayload(text);
if (commandSocket != null) {
commandSocket!.add(payload);
commandSocket!.flush();
}
}
List<int> buildPayload(String text) {
final asciiValues = text.codeUnits;
final asciiLen = asciiValues.length;
// innermost block
List<int> block1 = [8, 11, 16, 11, 26, asciiLen, ...asciiValues];
block1.insert(0, block1.length);
// wrap with block2
List<int> block2 = [8, 0, 18, ...block1];
block2.insert(0, block2.length);
// wrap with block3
List<int> block3 = [8, 0, 16, 0, 26, ...block2];
block3.insert(0, block3.length);
// add header
List<int> payload = [170, 1, ...block3];
payload.insert(0, payload.length);
return payload;
}
</code></pre>
<p>Problem:</p>
<p>Sending one character at a time works for letters and numbers.</p>
<p>Backspace doesn’t work in this single-character sending mode.</p>
<p>I suspect it’s because the TV expects full string updates or a special payload for backspace.</p>
<p>Question:
How can I implement backspace properly when sending single characters to the TV using this v2 protocol? Should I be sending a special payload for backspace instead of sending it as a normal character?</p>
|
<python><android><dart><android-tv><payload>
|
2025-09-25 18:40:11
| 0
| 623
|
shakti goyal
|
79,775,050
| 1,509,264
|
Filtering Django QuerySet after using a Window function
|
<p>For a simple model:</p>
<pre class="lang-py prettyprint-override"><code>from django.db.models import CharField, Model, PositiveIntegerField
class Example(Model):
category = CharField(max_length=20, null=False, blank=False)
version = PositiveIntegerField(null=False, blank=False)
class Meta:
unique_together = ["category", "version"]
</code></pre>
<p>And some sample data:</p>
<pre class="lang-py prettyprint-override"><code>Example.objects.update_or_create(id=1, category="Thing 1", version=1)
Example.objects.update_or_create(id=2, category="Thing 1", version=2)
Example.objects.update_or_create(id=3, category="Thing 1", version=4)
Example.objects.update_or_create(id=4, category="Thing 2", version=1)
Example.objects.update_or_create(id=5, category="Thing 2", version=2)
Example.objects.update_or_create(id=6, category="Thing 3", version=3)
</code></pre>
<p>I would like to use window functions to get:</p>
<ul>
<li>The examples that, only from the sub-set with the latest version in each category, have ids either <code>1</code>, <code>4</code>, <code>5</code> or <code>6</code>.</li>
</ul>
<p>I am trying to do this using the <code>RowNumber</code> window function:</p>
<pre><code>from django.db.models import F, Window
from django.db.models.functions.window import RowNumber
results = Example.objects.alias(
rn=Window(
expression=RowNumber(),
partition_by=[F("category")],
order_by="-version"
),
).filter(rn=1).filter(id__in=[1,4,5,6])
print(results.query)
print(results)
</code></pre>
<p>This generates the query:</p>
<pre class="lang-sql prettyprint-override"><code>SELECT "col1",
"col2",
"col3"
FROM (
SELECT *
FROM (
SELECT "greatest_n_example"."id" AS "col1",
"greatest_n_example"."category" AS "col2",
"greatest_n_example"."version" AS "col3",
ROW_NUMBER() OVER (PARTITION BY "greatest_n_example"."category" ORDER BY "greatest_n_example"."version" DESC) AS "qual0"
FROM "greatest_n_example"
WHERE "greatest_n_example"."id" IN (1, 4, 5, 6)
) "qualify"
WHERE "qual0" = 1
) "qualify_mask"
</code></pre>
<p>And returns examples <code>1</code>, <code>5</code> and <code>6</code>; which is not what is required.</p>
<p>The latest version for <code>category="Thing 1"</code> is <code>version=4</code>, and the filter on <code>rn</code> should be applied before the filter on <code>id</code> (it isn't; it is currently the other way round), so all the <code>Thing 1</code> category items should be excluded from the result set.</p>
<p>The expected query is equivalent to:</p>
<pre class="lang-sql prettyprint-override"><code>SELECT "col1",
"col2",
"col3"
FROM (
SELECT *
FROM (
SELECT "greatest_n_example"."id" AS "col1",
"greatest_n_example"."category" AS "col2",
"greatest_n_example"."version" AS "col3",
ROW_NUMBER() OVER (PARTITION BY "greatest_n_example"."category" ORDER BY "greatest_n_example"."version" DESC) AS "qual0"
FROM "greatest_n_example"
) "qualify"
WHERE "qual0" = 1
AND "col1" IN (1, 4, 5, 6)
) "qualify_mask"
</code></pre>
<p>Which would output just <code>5</code> and <code>6</code>.</p>
<p>How, using window functions, can I enforce that the row-number filter needs to be happen, at least, within the same inline view of the SQL as the <code>id</code> filter is applied, if not before?</p>
<p><em>Note: I know I can solve this problem using <code>Subquery</code>, rather using than <code>Window</code>, to perform the filtering using correlated sub-queries; this question is specifically about how to use window functions and to get the <code>WHERE</code> filters in the correct inline-view of the SQL query.</em></p>
|
<python><django><greatest-n-per-group>
|
2025-09-25 15:38:47
| 2
| 172,539
|
MT0
|
79,774,532
| 396,373
|
Django Celery Beat 2.8.0 sending many-per-hour tasks but not daily tasks
|
<p>This is a really weird situation. Tasks that are scheduled to run every few minutes, every hour of the day (e.g. "*/20 * * * *"), are being sent to Celery (with entries in Celery Beat's log), and Celery is running them. Tasks that are meant to run once per day (e.g. "15 11 * * *"), however, are not happening at their designated times, and nothing is being written to Celery Beat's log for them.</p>
<p>I have checked the contents of the tables in the database, and everything looks right, except, of course, that <code>last_run_at</code> is null for the tasks that are never being sent.</p>
<pre><code># select * from django_celery_beat_periodictask where id=7;
id | name | task | args | kwargs | queue | exchange | routing_key | expires | enabled | last_run_at | total_run_count | date_changed | description | crontab_id | interval_id | solar_id | one_off | start_time | priority | headers | clocked_id | expire_seconds
----+----------------------------------------------+---------------------------------+------+--------+-------+----------+-------------+---------+---------+-------------+-----------------+-------------------------------+-------------+------------+-------------+----------+---------+------------+----------+---------+------------+----------------
7 | hub_builtin__dhmaintenance.tasks.rotate_logs | dhmaintenance.tasks.rotate_logs | [] | {} | | | | | t | | 0 | 2025-09-25 04:39:20.172338+00 | | 6 | | | f | | | {} | |
</code></pre>
<pre><code># select * from django_celery_beat_crontabschedule where id=6;
id | minute | hour | day_of_week | day_of_month | month_of_year | timezone
----+--------+------+-------------+--------------+---------------+----------
6 | 30 | 0 | * | * | * | UTC
</code></pre>
|
<python><django><celerybeat><django-celery-beat>
|
2025-09-25 07:59:16
| 1
| 12,777
|
Steve Jorgensen
|
79,774,439
| 7,683,041
|
Cannot import QtWebEngineWidgets in PySide6 on a docker Windows container
|
<p>I have a python PySide6 program that I want to compile and test on a container. The program runs on Windows, so I have a Windows container:</p>
<pre><code>FROM winamd64/python:3.13.5-windowsservercore
# .. install choco, git bash, vcredist2017, visualstudio2022buildtools, visualstudio2022-workload-vctools, python3 --version 3.13.7
</code></pre>
<p>Then I build (with <code>setuptools</code>) and run my program within the container.
Everything was fine until a colleague added an import:</p>
<pre><code>from PySide6.QtWebEngineWidgets import QWebEngineView
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-> ImportError: DLL load failed while importing QtWebEngineWidgets: The specified module could not be found
</code></pre>
<p>The import works fine on our Windows 11 machines, but not on the docker container.</p>
<p>What I tried:</p>
<ul>
<li>I compared my <code>.venv/313/Lib/site-packages/PySide6</code> and <code>shiboken6</code> on my machine and on the container with Meld; they are identical.</li>
<li>I tried to use <code>dlltracer</code>, which showed 7 modules as "Loaded" and absent from "Failed" (MSVCP140.dll, Shiboken.pyd, VCRUNTIME140_1.dll, _bz2.pyd, _lzma.pyd, kernel.appcore.dll, shiboken6.abi3.dll), yet they are all present in the container</li>
<li>Setting <code>QT_QPA_PLATFORM=offscreen</code> helped me for the unit tests that could not start, but it is not helpful here</li>
<li>I tried to use <code>FROM mcr.microsoft.com/windows:20H2</code>, but without success</li>
</ul>
<p>Can my Docker container import QtWebEngineWidgets, or is it not designed for such imports? Are there any compatibility issues between pyd/dll files and the Docker container system? How can I debug this?</p>
<p>Thank you very much for any help!</p>
|
<python><docker><pyside6><qtwebengine><windows-container>
|
2025-09-25 06:36:57
| 0
| 1,310
|
PJ127
|
79,774,346
| 81,120
|
How to extract & coalesce deeply nested values that may not exist?
|
<p>I'm trying to extract some data from deeply nested JSON - this works:</p>
<pre><code>lf.with_columns(
[
pl.coalesce(
[
pl.col("a"),
pl.col("some_struct").str.json_path_match("$.foo"),
pl.col("some_struct").str.json_path_match("$.bar.baz"),
]
)
.str.split("(")
.list.first()
.str.strip_chars()
.alias("computed_col_a"),
pl.coalesce(
[
pl.col("b"),
pl.col("some_struct").str.json_path_match("$.blah1"),
pl.col("some_struct").str.json_path_match("$.blah2"),
pl.col("some_struct").str.json_path_match("$.blah3"),
]
).alias("computed_col_b"),
]
</code></pre>
<p>However, I'd like to treat <code>some_struct</code> as a defined <code>pl.Struct()</code>, and switch to something like:</p>
<pre><code>pl.col("some_struct").struct.field("bar").struct.field("baz"),
</code></pre>
<p>However, this blows up if any of the fields don't exist in the (messy) source data:</p>
<blockquote>
<p>polars.exceptions.StructFieldNotFoundError: b</p>
</blockquote>
<p>And I've not figured out the correct <code>.when().then().otherwise()</code> chaining to make this work.</p>
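<p>For what it's worth, this is the shape of what I've been attempting; it still raises the same error when the field is missing, so I'm clearly going about it the wrong way:</p>
<pre><code>(
    pl.when(pl.col("some_struct").struct.field("bar").is_not_null())
    .then(pl.col("some_struct").struct.field("bar").struct.field("baz"))
    .otherwise(None)
)
</code></pre>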
|
<python><dataframe><python-polars>
|
2025-09-25 04:02:01
| 1
| 598
|
dsully
|
79,774,326
| 2,081,568
|
Python openapi-python-client runtime error when deserializing
|
<p>I am attempting to consume the API at <a href="https://api.oemserver.com/swagger/index.html" rel="nofollow noreferrer">https://api.oemserver.com/swagger/index.html</a></p>
<p>I have run the tool at <a href="https://github.com/openapi-generators/openapi-python-client" rel="nofollow noreferrer">https://github.com/openapi-generators/openapi-python-client</a> to create an API client</p>
<p>Trying a little test, following the examples in the README file. Execute the get client groups method.</p>
<pre><code>import json
from dmac.device_manager_api_client import AuthenticatedClient
from dmac.device_manager_api_client.models import ClientGroupListModel, ClientGroupModel
from dmac.device_manager_api_client.api.device_group import get_v_1_device_group_get_client_groups
from dmac.device_manager_api_client.types import Response
from pathlib import Path
DEVICE_MANAGER_OPTIONS_FILE = Path(__file__).parents[0].joinpath("device_manager.json")
with open(DEVICE_MANAGER_OPTIONS_FILE, "r") as device_manager_options_file:
device_manager_options = json.load(device_manager_options_file)
api_host = f"{device_manager_options["scheme"]}://{device_manager_options["host"]}"
client = AuthenticatedClient(base_url=api_host, token=device_manager_options["token"])
with client as client:
client_groups_response = get_v_1_device_group_get_client_groups.sync(client=client, vendor_id=87)
if client_groups_response is None:
print('client_group response is none')
else:
if client_groups_response.client_groups is None:
print('client_groups_response.client_groups is none')
else:
for group in client_groups_response.client_groups:
print(group)
</code></pre>
<p>At the point of executing the method, I can see that the data is returned from the web service (snipped for brevity, client names removed). To me it looks like a <code>dict[str, list[dict[str, int | str]]]</code>.</p>
<pre><code>{
"ClientGroups": [
{
"Id": 16181,
"Name": "Client 1"
},
{
"Id": 15461,
"Name": "Client 2"
},
{
"Id": 18687,
"Name": "Client 3"
}
]
}
</code></pre>
<p>Then it seems to fail processing the response data and casting it</p>
<pre><code>Traceback (most recent call last):
File "c:\Source\FleetLogix\FleetLogix.Python\device_manager_test.py", line 17, in <module>
client_groups_response = get_v_1_device_group_get_client_groups.sync(client=client, vendor_id=87)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Source\FleetLogix\FleetLogix.Python\dmac\device_manager_api_client\api\device_group\get_v_1_device_group_get_client_groups.py", line 101, in sync
return sync_detailed(
^^^^^^^^^^^^^^
File "c:\Source\FleetLogix\FleetLogix.Python\dmac\device_manager_api_client\api\device_group\get_v_1_device_group_get_client_groups.py", line 81, in sync_detailed
return _build_response(client=client, response=response)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Source\FleetLogix\FleetLogix.Python\dmac\device_manager_api_client\api\device_group\get_v_1_device_group_get_client_groups.py", line 52, in _build_response
parsed=_parse_response(client=client, response=response),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Source\FleetLogix\FleetLogix.Python\dmac\device_manager_api_client\api\device_group\get_v_1_device_group_get_client_groups.py", line 35, in _parse_response
response_200 = ClientGroupListModel.from_dict(response.text)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Source\FleetLogix\FleetLogix.Python\dmac\device_manager_api_client\models\client_group_list_model.py", line 49, in from_dict
d = dict(src_dict)
^^^^^^^^^^^^^^
ValueError: dictionary update sequence element #0 has length 1; 2 is required
</code></pre>
<p>Is this a bug in the generated code? Is there someone I can contact specifically about the Python API client generator?</p>
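<p>For what it's worth, here is a minimal sketch (not the generated client itself) of what the traceback seems to point at: <code>from_dict</code> is handed <code>response.text</code>, a raw JSON string, and calling <code>dict()</code> on a string produces exactly this ValueError, whereas parsing the text first yields the mapping it expects. The payload below is a trimmed, hypothetical stand-in:</p>
<pre class="lang-py prettyprint-override"><code>import json

payload = '{"ClientGroups": [{"Id": 16181, "Name": "Client 1"}]}'

# dict() over the raw string iterates single characters and raises:
# ValueError: dictionary update sequence element #0 has length 1; 2 is required
# dict(payload)

# Parsing the text first gives the mapping that from_dict-style code expects.
parsed = json.loads(payload)
print(parsed["ClientGroups"][0]["Name"])  # Client 1
</code></pre>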
|
<python><python-3.x><openapi-generator>
|
2025-09-25 02:53:01
| 1
| 1,111
|
Hecatonchires
|
79,774,301
| 1,564,070
|
VS Code Python import not resolving
|
<p>I'm configuring a new Windows 11 development environment to support multiple projects and multiple shared packages.
The test program runs as expected; however, the import statement flags LibHB as an unresolved import, and autocomplete does not work for the library.</p>
<p>The directory structure is:</p>
<pre><code>Proj/
|- Proj1/
|--- main.py
|- Proj2/
LibHB/
|- Lib1/
|--- __init__.py
|--- Class1.py
|- Lib2/
</code></pre>
<p><code>main.py</code>:</p>
<pre><code>import os, sys
sys.path.insert(1, os.path.abspath(os.path.join(os.path.dirname(__file__), "..\\..")))
from LibHB import Lib1 as l1
print("Main begins")
y = l1.Class1()
y.show()
print("Main ends")
</code></pre>
<p><code>__init__.py</code> in Lib1:</p>
<pre><code>from .Class1 import Class1
</code></pre>
<p><code>Class1.py</code>:</p>
<pre><code>class Class1:
def show(self):
print("Class1 show")
</code></pre>
<p>Output:</p>
<pre><code>Main begins
Class1 show
Main ends
</code></pre>
<p>I added the following to settings.json:</p>
<pre><code>"python.autoComplete.extraPaths": [
"C:\\Users\\******\\LibHB",
"C:\\Users\\******\\LibHB\\Lib1",
"C:\\Users\\******\\Projects"],
</code></pre>
<p>I can't find anything else to try in the MS docs. Why is the autocomplete not working?</p>
<p>I'm using version 1.104.1 of VS Code and Python version 3.13.</p>
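<p>One hedged possibility I have not verified, assuming Pylance is the language server: import resolution and autocomplete read <code>python.analysis.extraPaths</code> rather than <code>python.autoComplete.extraPaths</code>, so a settings.json sketch along these lines might be relevant (the path is an assumption mirroring the structure above, with the user name still elided):</p>
<pre><code>// Sketch only: assumes LibHB lives directly under C:\Users\******,
// so its parent directory is what "from LibHB import ..." needs on the analysis path.
"python.analysis.extraPaths": [
    "C:\\Users\\******"
],
</code></pre>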
|
<python><visual-studio-code>
|
2025-09-25 01:30:44
| 2
| 401
|
WV_Mapper
|
79,774,289
| 4,930,744
|
Running flake8 linter in JupyterLab, getting E303 false positives
|
<p>My goal is to have a PEP8 linter that highlights style violations while I'm editing Python notebooks on my Windows 10 computer.</p>
<p>I've installed pylsp as a JupyterLab extension. It's working, except that I get an error "E303 too many blank lines" on the first line of every code cell that follows a markdown cell.</p>
<p>I've tried disabling E303 in the JupyterLab settings as shown in the screenshot below, but it didn't fix the problem. How can I get rid of the E303 false positives?
<a href="https://i.sstatic.net/VjNsdCth.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VjNsdCth.png" alt="screenshot showing pylsp.plugins.flake8.ignore-0 list consisting of 'E303'" /></a></p>
|
<python><jupyter-notebook><flake8>
|
2025-09-25 00:50:32
| 1
| 633
|
David Wasserman
|
79,774,255
| 6,307,865
|
pyspark on Windows - unexpected termination during collect()
|
<p>I am new to Python and PySpark.</p>
<p>I'm trying to run PySpark on Windows Server 2022. I have these environment variables:</p>
<ul>
<li><code>HADOOP_HOME=C:\spark\hadoop</code></li>
<li><code>JAVA_HOME=C:\Program Files\Microsoft\jdk-17.0.16.8-hotspot</code></li>
<li><code>SPARK_HOME=C:\spark\spark-4.0.1-bin-hadoop3</code></li>
<li><code>PYSPARK_PYTHON=C:\Program Files\Python312\python.exe</code></li>
</ul>
<p>I have <code>%JAVA_HOME%\bin;%HADOOP_HOME%\bin;%SPARK_HOME%\bin</code> in my search PATH.</p>
<p>I have this source file:</p>
<pre class="lang-py prettyprint-override"><code>from pyspark.sql import SparkSession
spark = SparkSession.builder.appName("UnitTest").master("local[*]").getOrCreate()
df = spark.createDataFrame(
[["One"]],
["First"]
)
df.collect()
</code></pre>
<p>When I run it, I get:</p>
<pre><code>F:\VSTS>python c:\temp\repro.py
The system cannot find the path specified.
WARNING: Using incubator modules: jdk.incubator.vector
...
25/09/25 10:54:26 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
25/09/25 10:54:36 ERROR Executor: Exception in task 1.0 in stage 0.0 (TID 1)/ 4]
org.apache.spark.SparkException: Python worker exited unexpectedly (crashed)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator$$anonfun$1.applyOrElse(PythonRunner.scala:624)
... big log call stack and about half a dozen more...
Traceback (most recent call last):
File "c:\temp\repro.py", line 7, in <module>
df.collect()
File "C:\Program Files\Python312\Lib\site-packages\pyspark\sql\classic\dataframe.py", line 443, in collect
sock_info = self._jdf.collectToPython()
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python312\Lib\site-packages\py4j\java_gateway.py", line 1362, in __call__
return_value = get_return_value(
^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python312\Lib\site-packages\pyspark\errors\exceptions\captured.py", line 282, in deco
return f(*a, **kw)
^^^^^^^^^^^
File "C:\Program Files\Python312\Lib\site-packages\py4j\protocol.py", line 327, in get_return_value
raise Py4JJavaError(
py4j.protocol.Py4JJavaError: An error occurred while calling o50.collectToPython.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 0.0 failed 1 times, most recent failure: Lost task 1.0 in stage 0.0 (TID 1) (mycomputer.mydomain.com executor driver): org.apache.spark.SparkException: Python worker exited unexpectedly (crashed)
at ... more stack traces ...
</code></pre>
<p><strong>Edit:</strong>
I have seen suggestions to:</p>
<ul>
<li>downgrade to <a href="https://www.python.org/downloads/release/python-31113/" rel="nofollow noreferrer">python 3.11</a>, which is now only receiving security patches and lacks a Windows installer.</li>
<li>try WSL or Docker, but that kind of defeats the purpose of having a quick turn-around in my Windows development environment</li>
</ul>
<p>Both of these options are work-arounds that don't really deal with the underlying problem.</p>
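<p>For completeness, here is a minimal sketch of something I may try first: pointing both the driver and the worker at the same interpreter from inside the script before the session is built. The path just mirrors my <code>PYSPARK_PYTHON</code> value, and this is only an assumption about what might stabilise the worker, not a confirmed fix:</p>
<pre class="lang-py prettyprint-override"><code>import os
from pyspark.sql import SparkSession

# Assumption: worker and driver should use exactly the same interpreter.
os.environ["PYSPARK_PYTHON"] = r"C:\Program Files\Python312\python.exe"
os.environ["PYSPARK_DRIVER_PYTHON"] = r"C:\Program Files\Python312\python.exe"

spark = (
    SparkSession.builder
    .appName("UnitTest")
    .master("local[*]")
    .getOrCreate()
)

df = spark.createDataFrame([["One"]], ["First"])
print(df.collect())
</code></pre>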
|
<python><apache-spark><pyspark><windows-server>
|
2025-09-24 23:28:00
| 1
| 621
|
EdH
|
79,774,117
| 12,415,855
|
Getting element using re.compile with bs4?
|
<p>I am trying to find a span element using Selenium and bs4 with the following code:</p>
<pre><code>import re
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.chrome.service import Service
import time
options = Options()
options.add_argument("start-maximized")
options.add_argument('--use-gl=swiftshader')
options.add_argument('--enable-unsafe-webgpu')
options.add_argument('--enable-unsafe-swiftshader')
options.add_argument("--disable-3d-apis")
options.add_argument('--disable-gpu')
options.add_argument('--no-sandbox')
options.add_argument('--disable-dev-shm-usage')
options.add_argument("start-maximized")
options.add_argument('--log-level=3')
options.add_experimental_option("prefs", {"profile.default_content_setting_values.notifications": 1})
options.add_experimental_option("excludeSwitches", ["enable-automation"])
options.add_experimental_option('excludeSwitches', ['enable-logging'])
options.add_experimental_option('useAutomationExtension', False)
options.add_argument('--disable-blink-features=AutomationControlled')
srv=Service()
driver = webdriver.Chrome (service=srv, options=options)
driver.get ("https://www.local.ch/de/d/zuerich/8001/personalberatung/mooser-partner-ag-beratung-in-personalfragen-QUYNvCwYvBzZBco5HS626Q")
time.sleep(2)
soup = BeautifulSoup (driver.page_source, 'lxml')
worker = soup.find("div", {"data-cy": "detail-map-preview"})
worker = worker.find("span", string=re.compile('Adresse'))
print(worker)
</code></pre>
<p>But I always get None as the final output, even though I can see "Adresse" in the span element in the page source.</p>
<p>How can I find this span element with its text using re.compile?</p>
<p><a href="https://i.sstatic.net/19d8Sv83.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/19d8Sv83.png" alt="enter image description here" /></a></p>
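<p>In case it matters, here is a minimal sketch of an alternative lookup I considered, on the assumption that the span's text is split across child tags (which would make <code>string=</code> match nothing); it matches on the rendered text instead and reuses the <code>soup</code> built above:</p>
<pre><code># Assumes `soup` was built as in the code above.
container = soup.find("div", {"data-cy": "detail-map-preview"})
if container is not None:
    # Match a span whose full rendered text contains "Adresse",
    # even when that text is split across nested tags.
    target = container.find(
        lambda tag: tag.name == "span" and "Adresse" in tag.get_text()
    )
    print(target)
</code></pre>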
|
<python><selenium-webdriver><beautifulsoup>
|
2025-09-24 19:56:28
| 2
| 1,515
|
Rapid1898
|