QuestionId
int64
74.8M
79.8M
UserId
int64
56
29.4M
QuestionTitle
stringlengths
15
150
QuestionBody
stringlengths
40
40.3k
Tags
stringlengths
8
101
CreationDate
stringdate
2022-12-10 09:42:47
2025-11-01 19:08:18
AnswerCount
int64
0
44
UserExpertiseLevel
int64
301
888k
UserDisplayName
stringlengths
3
30
79,763,222
811,299
How can one python script launch another python script using a different virtual environment?
<p>I have researched this question and found <a href="https://stackoverflow.com/questions/8052926/running-subprocess-within-different-virtualenv-with-python">Running subprocess within different virtualenv with python</a>, but my situation is a little different.</p> <p>I have an astrophotography application which I run on Ubuntu Linux 24.04.2 LTS. This application has a facility for launching python scripts. It does so using its own venv. They supply a python script to launch a third party executable that sharpens an image that the main application is working with. This third party executable optionally can use GPU acceleration given the presence of a suitable GPU on the system running the script. This GPU detection functionality depends on code in the main application's venv.</p> <p>Another possibly important detail is that the <strong>third-party executable is itself compiled from a python script, which presumably uses a venv.</strong> It is not c++ or other similar language.</p> <p>The situation is when the script is run from the application, an error message is emitted saying that GPU acceleration is not available, and the process drops back to CPU processing which takes perhaps perhaps 100 times as long. <strong>This error message is incorrect.</strong></p> <p>I know this because <strong>the third party executable can be successfully run standalone when the much faster GPU processing can be observed.</strong></p> <p>To do that one must manually copy the file to be processed to a specific directory where the standalone executable expects to find it. If running from the application, the existing python script automatically copies the selected file to the executable's input directory, while also copying the processed file back from the executable's output directory to the same directory where the original image was located, a very convenient feature.</p> <p>The problem lies with the application's python venv, which incorrectly detects whether or not the GPU support is available. Apparently, this is one big thorny mess which the application's developers are working on. When run standalone, the executable uses the system's python venv (remember it is compiled python).</p> <p>In the meantime I would like to develop a script that can be run from the application that handles all the file copying before and after launching another python script using the system venv which launches the executable. In other words, to avoid the problematic GPU detection logic, since I know that in my case the GPU is suitable.</p> <p>Is this possible and if so, what would be the best way to do it?</p>
<python><python-venv>
2025-09-12 17:48:06
1
4,909
Steve Cohen
79,763,157
667,355
apply function versus vectorised operation in pandas dataframe
<p>I am working with a DataFrame of almost 1M rows and want to compute a column as a function of two others. My first idea was to use <code>.apply(axis=1)</code> with a lambda function to do the operation, but it was extremely slow compared to when I do vectorized operation.</p> <p>An example of the task:</p> <pre><code>import pandas as pd import numpy as np import time df = pd.DataFrame({ &quot;a&quot;: np.random.randint(0, 100, 100000), &quot;b&quot;: np.random.randint(0, 100, 100000)}) start1 = time.time() df[&quot;sum1&quot;] = df.apply(lambda row: row[&quot;a&quot;] + row[&quot;b&quot;], axis=1) print(&quot;apply:&quot;, time.time() - start1) start2 = time.time() df[&quot;sum2&quot;] = df[&quot;a&quot;] + df[&quot;b&quot;] print(&quot;vectorized:&quot;, time.time() - start2) </code></pre> <p>Is it always the case? or there are circumstances that <code>apply()</code> function works more efficient than vectorised operation? and if I need custom logic on rows that cannot turn into vectorized operations, what is the recommended alternative?</p>
<python><pandas><dataframe>
2025-09-12 16:10:14
0
3,491
amiref
79,763,094
10,686,658
Change contravariant tensor to covariant tensor in einsteinpy package
<p>Using einsteinpy package of Python, I am defining the electromagnetic tensor (or any other arbitrary tensor). While defining, I am defining it as 'uu' tensor using the BaseRelativityTensor class file. I want the 'll' version from this, ie. F_covariant from F_contravariant. But the package seems not to provide any .change_config() property, which EinsteinTensor (another class which defines (G^{\mu\nu})) does.</p> <p>FYI: I am using the 0.3.1 version of the package.</p> <p>So my question is, once F^{\mu\nu} has been defined as a BaseRelativityTensor, how to get (F_{\mu\nu}) from F^{\mu\nu}? My code is given below:</p> <pre><code>from IPython.display import display import sympy as sp from einsteinpy.symbolic import BaseRelativityTensor, MetricTensor Ex, Ey, Ez, Bx, By, Bz, c = sp.symbols(&quot;E_x E_y E_z B_x B_y B_z c&quot;) t, x, y, z = sp.symbols(&quot;t x y z&quot;) # Define Minkowski metric (signature -,+,+,+) eta = sp.Array([ [-1, 0, 0, 0], [ 0, 1, 0, 0], [ 0, 0, 1, 0], [ 0, 0, 0, 1] ]) syms = (t, x, y, z) metric = MetricTensor(eta, syms) g = metric.tensor() # Define the F array for F^{mu,nu} F_contra_array = sp.Array([ [0, Ex/c, Ey/c, Ez/c], [-Ex/c, 0, Bz, -By], [-Ey/c, -Bz, 0, Bx], [-Ez/c, By, -Bx, 0] ]) # Define contravariant tensor F^{mu nu} F_contra = BaseRelativityTensor(F_contra_array, syms, &quot;uu&quot;, parent_metric=metric) print(&quot;F^{mu nu} =&quot;) display(F_contra.tensor()) </code></pre>
<python><tensor>
2025-09-12 15:09:53
0
559
ASarkar
79,762,824
1,688,726
Mock patch sys.stdout to StringIO as a decorator
<p>I am trying to mock.patch sys.stdout to StringIO as a decorator to record the output for testing.</p> <p>As a 'with' statement it works this way:</p> <pre><code>with mock.patch('sys.stdout', new_callable = StringIO) as recorded_output: print('OUTPUT') r = recorded_output.getvalue() print(r) </code></pre> <p>gets 'OUTPUT', but 'getvalue' doesn't work when used as a decorator:</p> <pre><code>@mock.patch('sys.stdout', new_callable = StringIO) def test_stdout(self, recorded_output): print('OUTPUT') r = recorded_output.getvalue() print(r) </code></pre> <p>gets error: Mock object has no attribute 'getvalue'</p> <p>Does anyone know how to write this as a decorator?</p>
<python><mocking><stdout>
2025-09-12 10:36:50
1
359
user1688726
79,762,810
3,486,078
Fatest way to convert float array to string in python
<p>This question came up while I was saving a large number of model-inferred embeddings to plain text. To do so, I needed to convert lists of float embeddings into strings, and I found this conversion to be surprisingly time-consuming.</p> <p>Inspired by <a href="https://discuss.python.org/t/faster-float-string-conversion-ryu/2466/19" rel="nofollow noreferrer">this discussion</a>, I benchmarked four different methods for converting float arrays to strings. Surprisingly, orjson performed the best—even though it's a third-party JSON serialization library.</p> <p>This got me wondering: Is there a native Python method that can achieve performance comparable to orjson for converting lists of floats to strings?</p> <p>Below are the commands I used for profiling, along with the results:</p> <pre><code>$ python -m pyperf timeit --fast -s 'x = [3141592653589793] * 100' 'str(x)' Mean +- std dev: 4.79 us +- 0.06 us $ python -m pyperf timeit --fast -s 'from orjson import dumps; x = [3141592653589793] * 100' 'dumps(x)' Mean +- std dev: 2.70 us +- 0.02 us $ python -m pyperf timeit --fast -s 'from json import dumps; x = [3141592653589793] * 100 ' 'dumps(x)' Mean +- std dev: 8.03 us +- 0.31 us $ python -m pyperf timeit --fast -s 'x = [3141592653589793] * 100' '&quot;{}&quot;.format(x)' Mean +- std dev: 4.94 us +- 0.16 us </code></pre>
<python><profiling><orjson>
2025-09-12 10:14:23
0
474
K_Augus
79,762,788
5,103,620
How to transform callable argument types?
<p>Assume I have a callable type with some arbitrary set of argument types:</p> <pre class="lang-py prettyprint-override"><code>Callable[[T1, T2, T3, &lt;etc&gt;, Tn], str] </code></pre> <p>Is there a way (presumably using <code>TypeVarTuple</code> or <code>ParamSpec</code>) to statically generate the following callable type from it?</p> <pre class="lang-py prettyprint-override"><code>Callable[[T1 | Iterable[T1], T2 | Iterable[T2], T3 | Iterable[T3], &lt;etc&gt;, Tn | Iterable[Tn]], str] </code></pre>
<python><python-typing>
2025-09-12 09:53:07
0
4,886
Sebastian Lenartowicz
79,762,675
2,105,307
MS-Graph returns 404 and 412 when uploading attachments concurrently
<p>I have the following python script which is sending emails through MS-Graph using (partly) msgraph-sdk in python. The problem I see is that if I try to upload multiple attachments of the email concurrently I get a lot of 404 and 412 errors from ms-graph when trying to upload the attachment chunks. But if I set the semaphore to 1 (which effectively uploads the attachments sequentially) then I get no errors. Is there anything wrong with my script? Does ms-graph support concurrent attachment uploading?</p> <p>Here is my python code</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false" data-babel-preset-react="false" data-babel-preset-ts="false"> <div class="snippet-code"> <pre class="snippet-code-js lang-js prettyprint-override"><code>import asyncio import aiohttp from mylib import EmailSender from mylib import ( EmailDefinition, EmailContent, EncodedAttachment, ) from azure.identity import ClientSecretCredential from msgraph.graph_service_client import GraphServiceClient from mylib import get_logger from msgraph.generated.models.message import Message from msgraph.generated.models.item_body import ItemBody from msgraph.generated.models.body_type import BodyType from msgraph.generated.models.recipient import Recipient from msgraph.generated.models.email_address import EmailAddress from msgraph.generated.models.attachment_type import AttachmentType from msgraph.generated.models.attachment_item import AttachmentItem from msgraph.generated.users.item.messages.item.attachments.create_upload_session.create_upload_session_post_request_body import ( CreateUploadSessionPostRequestBody, ) logger = get_logger(__name__) CHUNK_SIZE = 3 * 1024 * 1024 # 3MB CHUNK_UPLOAD_TIMEOUT_SECONDS = 20 class EmailSenderMSG(EmailSender): def __init__(self, cfg): _credential = ClientSecretCredential( tenant_id=cfg.tenant_id, client_id=cfg.client_id, client_secret=cfg.client_secret.get_secret_value(), ) self._graph_client = GraphServiceClient( credentials=_credential, scopes=[cfg.scope] ) async def _send_large_email(self, email_def: EmailDefinition): # Implementation reference https://learn.microsoft.com/en-us/graph/outlook-large-attachments?tabs=python # Step 1: Create Draft message = self.create_base_message(email_def) draft = self._graph_client.users.by_user_id(email_def.sender_address).messages.post(message) # Step 2: Upload attachments concurrently with semaphore (called attachment_upload_concurrency) sem = asyncio.Semaphore(4) async def upload_one(att: EncodedAttachment) -&gt; None: async with sem: await self.upload_attachment(email_def.sender_address, draft.id, att) tasks = [ asyncio.create_task(upload_one(att)) for att in email_def.attachments ] await asyncio.gather(*tasks) # No return_exceptions # Step 4: Send the draft logger.info(f"Finalising draft message with id {draft.id}") await (self._graph_client.users.by_user_id(email_def.sender_address) .messages.by_message_id(draft.id).send.post()) async def upload_attachment( self, sender_address: str, draft_message_id: str, attachment: EncodedAttachment) -&gt; None: attachment_size = len(attachment.content) upload_session_body = CreateUploadSessionPostRequestBody( attachment_item=AttachmentItem( attachment_type=AttachmentType.File, name=attachment.name, size=attachment_size, content_type=attachment.content_type, ) ) logger.info( f"Creating upload session for attachment '{attachment.name}' of size {attachment_size} bytes. Request body: {upload_session_body}" ) upload_session = await ( self._graph_client.users.by_user_id(sender_address).messages.by_message_id(draft_message_id) .attachments.create_upload_session.post(upload_session_body)) logger.debug(f"Upload session for attachment '{attachment.name}' : {upload_session}") # Step 3: Upload in chunks (simple, no retries for now) await self.upload_chunks(attachment, attachment_size, upload_session) @staticmethod async def upload_chunks(attachment, attachment_size, upload_session): async with aiohttp.ClientSession() as session: for i in range(0, attachment_size, CHUNK_SIZE): chunk = attachment.content[i: i + CHUNK_SIZE] start = i end = i + len(chunk) - 1 total = attachment_size # msgraph-dsk is not supporting attachment chunk uploading, so we use aiohttp directly headers = { "Content-Length": str(len(chunk)), "Content-Range": f"bytes {start}-{end}/{total}", "Content-Type": "application/octet-stream", } logger.debug( f"Uploading chunk of attachment '{attachment.name}' with url {upload_session.upload_url} and headers {headers}" ) await EmailSenderMSG.upload_single_chunk( session, upload_session.upload_url, chunk, headers, attachment.name, ) @staticmethod async def upload_single_chunk( session: aiohttp.ClientSession, url: str, chunk: bytes, headers: dict[str, str], attachment_name: str ) -&gt; None: async with session.put( url, data=chunk, headers=headers, timeout=aiohttp.ClientTimeout(total=CHUNK_UPLOAD_TIMEOUT_SECONDS), ) as response: try: response.raise_for_status() except aiohttp.ClientResponseError as e: logger.warning( f"ClientResponseError: {e.status}, attachment '{attachment_name}', {e.message}, Headers: {e.headers}", exc_info=True, ) raise @staticmethod def _create_item_body(email_content: EmailContent) -&gt; ItemBody: if email_content.html: return ItemBody(content_type=BodyType.Html, content=email_content.html) elif email_content.plain_text: return ItemBody( content_type=BodyType.Text, content=email_content.plain_text ) raise ValueError("Email content must have either 'html' or 'plain_text' set.") @staticmethod def create_base_message(email_def): message = Message( subject=email_def.content.subject, body=EmailSenderMSG._create_item_body(email_def.content), to_recipients=[ Recipient(email_address=EmailAddress(address=addr)) for addr in email_def.recipients.to ], cc_recipients=[ Recipient(email_address=EmailAddress(address=addr)) for addr in email_def.recipients.cc ], bcc_recipients=[ Recipient(email_address=EmailAddress(address=addr)) for addr in email_def.recipients.bcc ], ) return message</code></pre> </div> </div> </p>
<python><microsoft-graph-api><microsoft-graph-mail><microsoft-graph-files>
2025-09-12 08:15:28
0
1,548
NikosDim
79,762,332
2,112,193
How to do batched matrix vector multiplication with np.matvec
<p>I have rotation matrix, for the sake of a simple example, <code>np.ndarray(..., shape=(3, 2))</code>, and I want to multiply it by an nd-array of vectors, say <code>np.mgrid[:4, :5]</code>, how do I do that with the new <code>np.matvec</code> function? I guess the signature I want from this generalized ufunc is <code>(3, 2), (2, 4, 5) -&gt; (3, 4, 5)</code>. I would expect that I need to use the axes command to make the matrices line up, however I am not sure what the syntax is. When I try something based on intuition, I get an error:</p> <p><code>ValueError: axes should be a list with an entry for all 3 inputs and outputs; entries for outputs can only be omitted if none of them has core axes.</code></p> <p>I have no idea what this means, and the documentation for <code>np.matvec</code> doesn't have any examples or information on how to use the <code>axis</code> or <code>axes</code> keywords. How do I do this correctly?</p>
<python><numpy><numpy-ufunc>
2025-09-11 20:11:40
1
522
DBS4261
79,761,797
6,618,051
We could not find a project publisher for the project at {DIR}
<p>I'm trying to configure an application (Kivy framework, Python language, <a href="https://github.com/lyskouski/app-language" rel="nofollow noreferrer">https://github.com/lyskouski/app-language</a>) distribution via</p> <pre><code>msstore reconfigure --tenantId *** --clientId *** --clientSecret *** --sellerId *** msstore publish -v -i . </code></pre> <p>And it's failing with an error:</p> <pre><code>info: Microsoft.Hosting.Lifetime[0] Application started. Press Ctrl+C to shut down. info: Microsoft.Hosting.Lifetime[0] Hosting environment: CLI info: Microsoft.Hosting.Lifetime[0] Content root path: D:\a\app-language\app-language\windows\dist info: MSStore.CLI.Services.EnvironmentInformationService[0] Running on CI. CI=true info: MSStore.CLI.Program[0] Command is publish We could not find a project publisher for the project at 'D:\a\app-language\app-language\windows\dist'. </code></pre> <p>pipelines are the next: <a href="https://github.com/lyskouski/app-language/blob/main/.github/workflows/build.yml" rel="nofollow noreferrer">https://github.com/lyskouski/app-language/blob/main/.github/workflows/build.yml</a></p> <p>Could you guide me what I'm doing wrong? Thanks in advance</p>
<python><kivy><github-actions><msstore>
2025-09-11 10:22:50
1
1,939
FieryCat
79,761,794
228,804
Airflow tasks in sequence: dag file is parsed again?
<p>I'm writing a DAG script for Apache Airflow and I'm running into behaviour that I didn't expect. If you take a look at my example script I was expecting the same timestamp to be printed. Instead, when running this dag, the timestamp gets re-evaluated and prints the date-time of the second task.</p> <pre><code>import pendulum from airflow.sdk import dag, task myTimeStamp = pendulum.now(&quot;Europe/Amsterdam&quot;).strftime('%Y%m%d_%H%M_%S') @dag( schedule=&quot;0/1 * * * *&quot;, catchup=False, start_date=pendulum.datetime(2025, 9, 2, tz=&quot;Europe/Amsterdam&quot;), is_paused_upon_creation=False, tags=[&quot;ssh&quot;], ) def test_local(myDate: str): @task.bash() def test1(myDate1: str) -&gt; str: return f'echo &quot;Message: hi from 1! {myDate1}&quot;; sleep 5;' @task.bash() def test2(myDate2: str) -&gt; str: return f'echo &quot;Message: hi from 2! {myDate2}&quot;' test1(myDate) &gt;&gt; test2(myDate) test_local(myTimeStamp) </code></pre> <p>As if the dag is parsed a second time when running the test2 task. Any tips on how to prevent this?</p>
<python><airflow><task><directed-acyclic-graphs>
2025-09-11 10:21:53
2
333
Cesar
79,761,695
3,938,402
Copy directory and its contents
<p>In the below I'm copying a directory from a remote machine to local. Instead of copying the directory and its contents (files and directories), it's copying only the contents (files and directories). How do I make Paramiko SCP create a directory first and then copy the contents (files and directories) to it.</p> <p>main.py</p> <pre><code>def main(): ssh_client = paramiko.SSHClient() ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy()) private_key = paramiko.RSAKey(filename=&quot;/home/harry/.ssh/id_rsa&quot;) ssh_client.connect(host_name, port = 22, username = user_name, private_key) scp = SCPClient(ssh_client.get_transport()) scp.get(remote_path=f&quot;/a/b/c/d/e/&quot;, local_path=f&quot;./f/&quot;, recursive=True, preserve_times=True) scp.close() ssh_client.close() </code></pre> <p>In the above SCP operation local directory <code>f</code> doesn't contain directory <code>e</code> copied from remote, instead it contains the contents (files/directories) of remote directory <code>e</code>.</p>
<python><paramiko><scp>
2025-09-11 09:01:43
1
4,026
Harry
79,761,589
17,580,381
How to install torch on MacOS (Intel)
<p>On my M2-based Mac I can install torch simply with:</p> <pre class="lang-shell prettyprint-override"><code>pip install torch </code></pre> <p>(pip is an alias for pip3)</p> <p>However, on my Xeon-based iMac I get this error:</p> <pre class="lang-none prettyprint-override"><code>ERROR: Could not find a version that satisfies the requirement torch (from versions: none) ERROR: No matching distribution found for torch </code></pre> <p>This suggests to me that there's no Intel wheel for latest version of torch (currently 2.8.0).</p> <p>From something I read elsewhere I also tried:</p> <pre class="lang-shell prettyprint-override"><code>pip install &quot;torch==2.7.1&quot; </code></pre> <p>...in the belief that there might be an Intel distribution for 2.7.1. Unfortunately, that leads to:</p> <pre class="lang-none prettyprint-override"><code>ERROR: Could not find a version that satisfies the requirement torch==2.7.1 (from versions: none) ERROR: No matching distribution found for torch==2.7.1 </code></pre> <p>How can I install torch 2.8.0 on my iMac?</p> <p>UPDATE: MacOS 15.6.1, Python 3.13.7, pip 25.2</p> <p>I will try building from source</p>
<python><pytorch>
2025-09-11 07:19:56
1
28,997
Ramrab
79,761,576
1,559,401
mlflow unable to generate input output schema when using log_model()
<p>I am using the latest mlflow <strong>3.3.2</strong> with a simple PyTorch implementation of SRCNN. I am able to log the parameters as well as the metrics for the training process as well as generate custom visualizations for the losses and PSNR alongside the model with all auxilliary files required for exporting as artifacts.</p> <p>I would like to add the input output schema to each version of the model automatically. According to this <a href="https://mlflow.org/docs/latest/ml/model/signatures/" rel="nofollow noreferrer">document</a> all I need to do is use <code>log_model()</code> with <code>input_example</code> as a parameter, which holds the value of a sample input.</p> <pre><code>train_dataset = TrainDataset(args.train_file) train_dataloader = DataLoader(dataset=train_dataset, batch_size=args.batch_size, shuffle=True, num_workers=args.num_workers, pin_memory=True, drop_last=True) eval_dataset = EvalDataset(args.eval_file) eval_dataloader = DataLoader(dataset=eval_dataset, batch_size=1) best_weights = copy.deepcopy(model.state_dict()) best_epoch = 0 best_psnr = 0.0 # signature = infer_signature(inputs.numpy(), model(inputs).detach().numpy()) dataiter = iter(train_dataloader) inputs_example, labels_example = next(dataiter) mlflow.pytorch.log_model( pytorch_model=model, name=model_name, # Deprecated: artifact_path=f'/{experiment.experiment_id}/{run.info.run_id}' #signature=signature input_example = inputs_example.numpy() ) </code></pre> <p>Alas!, the result is an empty schema: <a href="https://i.sstatic.net/JZkJKA2C.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JZkJKA2C.png" alt="enter image description here" /></a></p> <p>I also tried adding a <code>signature</code> instead of the <code>input_example</code> parameter but that didn't change anything.</p> <p>I do not get any errors and the training process that follows finishes without any issues producing the model along with all other artifacts mentioned above.</p>
<python><pytorch><mlflow>
2025-09-11 07:14:43
0
9,862
rbaleksandar
79,761,328
1,429,450
Accessing any object type from multiprocessing shared_memory?
<p>Suppose I create a shared memory object:</p> <pre><code>from multiprocessing import shared_memory shm_a = shared_memory.SharedMemory(create=True, size=1024) buffer = shm_a.buf </code></pre> <p>and put a generic object of a generic class, such as:</p> <pre><code>class GenericClass: def __init__(self, a, b): self.a = a self.b = b </code></pre> <p>in it:</p> <pre><code>gen_obj_a = GenericClass(1,6) buffer = gen_obj_a </code></pre> <p>Now, in another terminal, I have:</p> <pre><code>from multiprocessing import shared_memory existing_shm = shared_memory.SharedMemory(name='psm_21467_46075') </code></pre> <p>How do I assign a variable, say <code>gen_obj_b</code>, to the <code>GenericClass</code> object in shared memory?</p> <p>I want to be able to do this where <code>GenericClass</code> is much more complex that the example above and it doesn't have a serialization function.</p> <p>In C++, one would do this by casting a <code>void *</code> to the <code>GenericClass</code> object type, but how is this done in Python with shared memory?</p> <p><sup>cf. <a href="https://docs.python.org/3.12/library/multiprocessing.shared_memory.html#multiprocessing.shared_memory.SharedMemory.size" rel="nofollow noreferrer">multiprocessing.shared_memory Python documentation</a></sup></p>
<python><casting><multiprocessing><shared-memory><void-pointers>
2025-09-10 22:00:24
1
5,826
Geremia
79,761,289
1,604,008
Trying to understand mocking in python
<p>using pytest - given the following:</p> <pre><code>class Bar: def wtf(self): return 1 class Foo: def __init__(self, bar): self.bar = bar def huh(self): return self.bar.wtf() def test_huh(): bar = Bar() foo = Foo(bar) assert foo.huh() == 1 def test_huh2(mocker): bar = mocker.Mock() foo = Foo(bar) bar.wtf().return_value = 1 assert foo.huh() == 1 </code></pre> <p>test_huh does what I expect but I want to test Foo and mock the dependency on Bar.</p> <p>test_huh2 fails.</p> <p>it seems the mock does not get called. Can someone explain how this should be done?</p>
<python><pytest><pytest-mock>
2025-09-10 21:07:16
1
1,159
user1604008
79,761,286
13,968,392
write_database(..., engine="adbc") with autocommit=False
<p>In polars, I would like to use <code>pl.write_database</code> multiple times with <code>engine=&quot;adbc&quot;</code> in the same session and then commit all at the end with <code>conn.commit()</code>, i.e. do a manual commit.</p> <pre class="lang-py prettyprint-override"><code>import adbc_driver_postgresql.dbapi as pg_dbapi import polars as pl conn = pg_dbapi.connect(&quot;postgresql://username:password@host:port/database&quot;) df = pl.DataFrame({&quot;a&quot;: [1, 2, 3], &quot;b&quot;: [4, 5, 6]}) df.write_database( &quot;public.table1&quot;, connection=conn, engine=&quot;adbc&quot;, ) df.transpose().write_database( &quot;public.table2&quot;, connection=conn, engine=&quot;adbc&quot;, ) conn.commit() </code></pre> <p>The reason behind this is to ensure that either both dfs are written to the database or none are. However, the dfs are written immediately into the database one after the other. In the <a href="https://arrow.apache.org/adbc/current/format/specification.html#autocommit" rel="nofollow noreferrer">adbc docs</a>, it is said:</p> <blockquote> <p>By default, connections are expected to operate in autocommit mode; that is, queries take effect immediately upon execution. This can be disabled in favor of manual commit/rollback calls, but not all implementations will support this.</p> </blockquote> <p>Is it supported to disable autocommit somehow in python? Maybe this can be done in <a href="https://arrow.apache.org/adbc/current/python/api/adbc_driver_postgresql.html#adbc_driver_postgresql.dbapi.connect" rel="nofollow noreferrer"><code>adbc_driver_postgresql.dbapi.connect</code></a>, maybe with the <code>conn_kwargs</code> parameter? <code>conn_kwargs={&quot;autocommit&quot;: False}</code> didn't work.</p>
<python><database><postgresql><python-polars><adbc>
2025-09-10 21:04:12
1
2,117
mouwsy
79,761,269
20,789
Best way to assign a scalar to a new DataFrame column with a specific dtype
<p>I am writing a routine to load a large dataset into a Pandas <code>DataFrame</code> from a bespoke text format.</p> <p>As part of this process, I need to add new columns to a <code>DataFrame</code>. Sometimes I need to broadcast a scalar value into the whole column, and I use <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.assign.html" rel="nofollow noreferrer"><code>DataFrame.assign(new_column=SCALAR_VALUE)</code></a> for this.</p> <p>This works fine, but I have no control over the exact <code>dtype</code> chosen for the newly-added column:</p> <pre class="lang-py prettyprint-override"><code>import pandas df = pd.DataFrame({'a': ['foo', 'bar'], 'b': [3, 4], 'c': [1.1, 2.2]}).convert_dtypes() print(df.dtypes) # a: 'string[python]', b: 'Int64', c: 'Float64' print(df.assign(d=42).d.dtype) # 'int64', ok print(df.assign(d=3.14).d.dtype) # 'float64', ok print(df.assign(d='baz').d.dtype) # 'object', I want it to be 'string[python]') print(df.assign(d=pd.NA).d.dtype) # 'object', I want it to be 'UInt8' </code></pre> <p>For reasons of time- and memory-efficiency, I would like to choose the correct <code>dtype</code> at the time when I <code>.assign</code> the new column, rather than converting it afterwards with <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.astype.html" rel="nofollow noreferrer"><code>.astype</code></a>.</p> <p>The least-awkward way I have found to do this is to create a <code>Series</code> of the correct <code>dtype</code> <em>and shape</em> (an awkward operation in and of itself) and then <code>.assign</code> <em>that</em> to the <code>DataFrame</code>:</p> <pre class="lang-py prettyprint-override"><code>s = pd.Series('baz', dtype='string').repeat(df.shape[1]).reset_index(drop=True) print(df.assign(d=s).d.dtype) # 'string' </code></pre> <p>If the <code>DataFrame</code> already has a non-default index, it gets even more awkward:</p> <pre class="lang-py prettyprint-override"><code># Create a DataFrame with a string index 'a', Int64 column 'b', Float64 column 'c' df = pd.DataFrame({'a': ['foo', 'bar'], 'b': [3, 4], 'c': [1.1, 2.2]}).convert_dtypes().set_index('a') # Create a Series 'c' of type 'string' containing a repeated scalar value s = pd.Series('baz', dtype='string').repeat(df.shape[1]).reset_index(drop=True) # In order to assign this to the DataFrame correctly, I # have to reset and then un-reset the index df = df.reset_index().assign(d=s).set_index('a') </code></pre> <p>Finally, I have the <code>DataFrame</code> structured as desired, with the correct <code>dtype</code> for the new column, and the scalar value broadcast across all rows:</p> <pre class="lang-py prettyprint-override"><code>assert dict(df.reset_index().dtypes) == {'a': 'string[python]', 'b': 'Int64', 'c': 'Float64', 'd': 'string[python]'} assert (df.d == 'baz').all() </code></pre> <p>So, the question: is there a <strong>less awkward and error-prone</strong> way to do this, to assign a scalar value <em>with an explicit non-default <code>dtype</code></em> to a new column in one fell swoop?</p> <p>(Note that the solutions in the related question <a href="https://stackoverflow.com/questions/54319436">Adding a new column with specific dtype in pandas</a> do not help with the scalar case.)</p> <p><strong>UPDATE:</strong> as pointed out by <a href="https://stackoverflow.com/users/7175713/sammywemmy">@sammywemmy</a> in <a href="https://stackoverflow.com/questions/79761269?noredirect=1#comment140728199_79761269">a comment</a>, there is a considerably-less-awkward way to convert a scalar to a <code>Series</code> with the same index. This is the best approach I've seen so far.</p> <pre class="lang-py prettyprint-override"><code># Create a DataFrame with a string index 'a', Int64 column 'b', Float64 column 'c' df = pd.DataFrame({'a': ['foo', 'bar'], 'b': [3, 4], 'c': [1.1, 2.2]}).convert_dtypes().set_index('a') # Create a Series 'c' of type 'string' containing a repeated scalar value, # and with a shape/index matching df, df = df.assign(d=pd.Series('baz', dtype='string', index=df.index)) # Verify correctness of dtypes and broadcasting assert dict(df.reset_index().dtypes) == {'a': 'string[python]', 'b': 'Int64', 'c': 'Float64', 'd': 'string[python]'} assert (df.d == 'baz').all() </code></pre>
<python><pandas><dataframe><numpy><series>
2025-09-10 20:28:05
2
80,378
Dan Lenski
79,761,246
1,750,498
Can I create a sphere in matplotlib with all inputs 3d (e.g. x**2 + y**2 + z**2 = 0)
<p>I want to create a fun (for me ;-)) Python project where I plot mathematical formulas in 3d using matplotlib and I was especially looking to print a sphere, since a line is a quite boring formula (y = x + z) and even parabolas can be boring (y = x^2 + z^2). But a sphere x^2 + y^2 + z^2 = 0 directly interested me.</p> <p>Using <a href="https://stackoverflow.com/questions/63975431/matplotlib-trying-to-draw-a-circle-though-basic-equation-x2-y2-9">Matplotlib trying to draw a circle though basic equation ( x**2 + y**2 = 9)</a> as a basis I run into an issue when I do the following:</p> <pre class="lang-py prettyprint-override"><code>x = np.linspace(-5.0, 5.0, 100) # 1000 takes too much memory 8-) y = np.linspace(-5.0, 5.0, 100) z = np.linspace(-5.0, 5.0, 100) X, Y, Z = np.meshgrid(x,y,z) F = X**2 + Y**2 + Z**2 -9 fig, ax = plt.subplots() # up to here everything worked, but the next line(s) give all the same error which is the one I want to overcome: ax.contour(X,Y,Z,F) # or ax.contour(X,Y,F,[0]) # and even the following ax.contour(X,Y,F) </code></pre> <p>The error I get is:</p> <pre><code>Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; File &quot;D:\Python\Python312\Lib\site-packages\matplotlib\__init__.py&quot;, line 1524, in inner return func( ^^^^^ File &quot;D:\Python\Python312\Lib\site-packages\matplotlib\axes\_axes.py&quot;, line 6779, in contour contours = mcontour.QuadContourSet(self, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;D:\Python\Python312\Lib\site-packages\matplotlib\contour.py&quot;, line 701, in __init__ kwargs = self._process_args(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;D:\Python\Python312\Lib\site-packages\matplotlib\contour.py&quot;, line 1319, in _process_args x, y, z = self._contour_args(args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;D:\Python\Python312\Lib\site-packages\matplotlib\contour.py&quot;, line 1359, in _contour_args x, y, z = self._check_xyz(x, y, z_orig, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;D:\Python\Python312\Lib\site-packages\matplotlib\contour.py&quot;, line 1385, in _check_xyz raise TypeError(f&quot;Input z must be 2D, not {z.ndim}D&quot;) </code></pre> <p>So now my question is: is there a way to make this code work (so a 3d plot of a sphere) or am I just trying to do something which cannot be done using matplotlib? And if it is possible, what am I doing wrong? (It is either in F, since all 3 ax.contour definitions fail. Or in the definition of z, since F contains the meshgrid of z)</p>
<python><matplotlib>
2025-09-10 19:54:57
1
5,562
Nemelis
79,761,125
8,765,709
Python Convert List Image to multiple PDF
<p>In the case</p> <p>I have a folder of image list and list name file with csv</p> <p>I want to write a python script to convert list of PNG's to multiple pdf file with name file of pdf from csv file at column D</p> <p>like this: <a href="https://i.sstatic.net/91J4d6KN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/91J4d6KN.png" alt="enter image description here" /></a></p> <p>Then i have tried python code like this:</p> <pre><code>import os import csv import img2pdf from PIL import Image # Set this to the folder with your files folder_path = r&quot;C:\Users\dede\Desktop\product_report&quot; # Set this to the name of your CSV csv_file = &quot;product.csv&quot; with open(os.path.join(folder_path, csv_file), newline='') as file: reader = csv.reader(file, delimiter='|') next(reader) # Skip header row for i, row in enumerate(reader, start=1): old_name = f&quot;{i}.png&quot; new_name = row[0] + &quot;.pdf&quot; old_path = os.path.join(folder_path, old_name) new_path = os.path.join(folder_path, new_name) if os.path.exists(old_path): # os.rename(old_path, new_path) # print(f&quot;Renamed {old_name} to {new_name}&quot;) image = Image.open(old_name) pdf_bytes = img2pdf.convert(image.old_name) file = open(new_name, &quot;wb&quot;) file.write(pdf_bytes) else: print(f&quot;File {old_name} not found&quot;) </code></pre> <p>This code was error result</p> <p>Is there a better way to do this, Somebody help me. Thank you.</p>
<python><csv><img2pdf>
2025-09-10 17:24:56
1
339
dede.brahma
79,760,973
2,203,144
How to make an async setup and teardown fixture in Pytest
<p>I have a working basic example of a setup and teardown fixture in <a href="https://docs.pytest.org/en/stable/" rel="nofollow noreferrer">Pytest</a>:</p> <pre><code> @pytest.fixture() def setup_and_teardown(): # Setup print('setup') # Yield some variable to the test yield 'v' # Teardown print('teardown') async def test_me(setup_and_teardown): &quot;&quot;&quot;Test successful experiment retrieval.&quot;&quot;&quot; variable = db_data_setup_and_teardown # Call the method result = my_func_that_i_wrote_so_well(variable) # Assertions assert result == 'v2' </code></pre> <p>It works. But now I need async support. Both my setup and teardown steps are async. So I tried to do:</p> <pre><code>@pytest.fixture() async def setup_and_teardown(): # Setup print('setup') await my_setup_func() # Yield some variable to the test yield 'v' # Teardown print('teardown') await my_teardown_func() </code></pre> <p>but then <code>variable = db_data_setup_and_teardown</code> results in <code>variable</code> being an 'async_generator' object. If I try to use <code>variable = await db_data_setup_and_teardown</code> instead, then I just get the error</p> <pre><code>TypeError: object async_generator can't be used in 'await' expression </code></pre> <p>What is the correct way to convert this pytest fixture into something async?</p>
<python><testing><async-await><pytest><teardown>
2025-09-10 14:29:35
1
3,988
mareoraft
79,760,899
1,023,928
Memory efficient sorting/removing duplicates of polars dataframes
<p>I am trying to import very large csv files into parquet files using polars. I stream data, use lazy dataframes and sinks. No problem until...</p> <p>...sorting the dataframe on a column and removing duplicates. Requirement that can't be skipped is that the data written to parquet must be unique by the 'datetime' column and sorted by the same column. The bottleneck is the sorting and removing duplicates. My understanding is that the data must be fully in memory in order to remove duplicates and sort. There is no guarantee the source data is sorted or has no duplicates.</p> <p>Writing the dataframe to parquet unsorted and without checking for duplicates is no problem and results in parquet files about 3-4GB in size. But reading them in and sorting and applying <code>unique()</code> explodes memory consumption to above 128GB which is the memory limit of my host (I run the code on Ubuntu in WSL2). I already allocated the maximum amount of memory to WSL2 and confirmed that it does have access to the entire amount of memory. At some point sorting and removing duplicates crashes the WSL VM. I do not seem to be able to efficiently sort and remove duplicates.</p> <p>Can you please help suggest a better approach than I currently take as follows:</p> <pre><code> def import_csv(self, symbol_id: str, data_source: DataSource, data_type: DataType, source_files: List[str], column_schema: List[ColumnSchema]) -&gt; None: #ensure 1 or 2 source files are provided if len(source_files) != 1 and len(source_files) != 2: raise ValueError(f&quot;Can only process 1 or 2 source files for symbol {symbol_id}&quot;) #obtain new df new_df = self._csv_to_dataframe(source_files, column_schema) #filter out duplicates and sort by datetime new_df = new_df.unique(subset=&quot;datetime&quot;) new_df = new_df.sort(&quot;datetime&quot;) #merge with existing data if it exists path_filename = self.base_directory / f&quot;{symbol_id}_{data_source.value}_{data_type.value}.parquet&quot; if path_filename.exists(): old_df = pl.scan_parquet(path_filename, glob=False) df = pl.concat([old_df, new_df], how=&quot;vertical&quot;) else: df = new_df #write to parquet df.sink_parquet(path_filename, engine=&quot;streaming&quot;) #update metadata # self._update_metadata(symbol_id, data_source, data_type, len(df), df[&quot;datetime&quot;].first(), df[&quot;datetime&quot;].last()) #logging # self.logger.info(f&quot;Imported {len(df)} rows for {symbol_id} from {df[&quot;datetime&quot;].first()} to {df[&quot;datetime&quot;].last()}&quot;) def _csv_to_dataframe(self, source_files: list[str], column_schema: List[ColumnSchema]) -&gt; pl.LazyFrame: # Generate Polars expressions for column transformations expressions = self._generate_polars_expressions(column_schema) dfs = [] for source_file in source_files: df = pl.scan_csv(source_file, has_header=True, glob=False).select(expressions) dfs.append(df) if len(dfs) == 1: df = dfs[0] else: df = pl.concat(dfs, how=&quot;vertical&quot;) df = df.group_by(&quot;datetime&quot;).mean() return df def _generate_polars_expressions(self, schema: list[ColumnSchema]) -&gt; list[pl.Expr]: expressions = [] for col_schema in schema: # Create a base expression from the source column name expr = pl.col(col_schema.source_column_name) # Handle special cases based on the target data type if col_schema.dtype == pl.Datetime: # Ensure datetime format is provided if col_schema.datetime_format is None: raise ValueError( f&quot;Datetime format is required for column '{col_schema.source_column_name}'&quot; ) # For datetime, we first parse the string with the specified format expr = expr.str.to_datetime(format=col_schema.datetime_format, time_unit=self.time_unit, time_zone=col_schema.from_timezone) #always convert to default timezone expr = expr.dt.convert_time_zone(self.data_timezone) else: # For other dtypes, a simple cast is sufficient expr = expr.cast(col_schema.dtype) # Alias the expression with the target column name final_expr = expr.alias(col_schema.target_column_name) # Add the final expression to the list expressions.append(final_expr) return expressions </code></pre>
<python><dataframe><datetime><parquet><python-polars>
2025-09-10 13:18:33
1
7,316
Matt
79,760,853
2,125,110
Is there a decorator for attrs factory sugar?
<p>Similar to <a href="https://stackoverflow.com/questions/63625357/decorator-for-attrs-converter">Decorator for attrs converter</a>, please see this case:</p> <pre><code>from attrs import define, field, Factory @define class MyClass: name = field(default = Factory(_name_factory, takes_self=True), init=False) @name.default # or @name.factory def _name_factory(): return &quot;standard name&quot; # this would be a much more complicated function of course ERROR: x.py - NameError: name '_name_factory' is not defined </code></pre> <p>Is there a decorator that would let the name be visible? I am trying to understand the <a href="https://www.attrs.org/en/stable/examples.html#defaults" rel="nofollow noreferrer">documentation</a>, but I am having issues.</p>
<python><python-attrs>
2025-09-10 12:17:48
1
3,596
Daemon Painter
79,760,561
7,465,516
Python equivalent of java-regex Pattern.quote ( \Q and \E )?
<p>In Java the regular expression <code>\Q\s\E</code> will match the literal string <code>&quot;\s&quot;</code> and not whitespace because <code>\Q</code> and <code>\E</code> quote the part of the regex inside them. For quoting a whole string conveniently and with care of the edge-case that the string contains an <code>\E</code>, the <code>Pattern.quote</code>-method exists.</p> <p>Java even has another way of achieving 'regex-methods without regex-special-characters', and that is to pass <code>Pattern.Literal</code> as a flag when compiling the regex.</p> <p>With python I can not find any equivalent of <code>Pattern.quote</code> nor do I find an equivalent for the underlying mechanism of <code>\Q</code> and <code>\E</code>. I also see no equivalent of the <code>Pattern.Literal</code>-flag. What are my options here?</p>
<python><regex><escaping><match>
2025-09-10 07:12:02
1
2,196
julaine
79,760,254
3,866,585
Why do I keep getting a 400 Bad Request trying to import a private GitHub repo into an Azure Devops repo using the API?
<p>I am trying to make API calls to import a repo from a private GitHub repo to an Azure Devops repo in Python using a PAT. I tried using the SDK with no luck. My code DOES work with a public GitHub repo, just not a private one. I have tried adding the username/PAT in the GitHub repo url and tried creating my own service connection to no avail. It will always give me a 400 Bad Request with no text, no body to diagnose what's wrong with it.</p> <p>Below is the code on my final try to get this working, which follows the exact API calls that are made when you manually import a repo from a private GitHub repo using the UI.</p> <pre><code>def create_github_service_connection(connection: Connection, project_id, github_pat, github_username): service_auth = EndpointAuthorization( scheme=&quot;UsernamePassword&quot;, parameters={&quot;password&quot;: github_pat, &quot;username&quot;:github_username} ) svc_endpoint_client = connection.clients.get_service_endpoint_client() endpoint = ServiceEndpoint( name=&quot;github-conn-scaffold&quot;, type=&quot;git&quot;, url=&quot;https://github.com/org/repo&quot;, authorization=service_auth, service_endpoint_project_references=[ ServiceEndpointProjectReference( name=&quot;github-conn-scaffold&quot;, project_reference=ProjectReference(id=project_id) ) ], ) created_svc = svc_endpoint_client.create_service_endpoint(endpoint) return created_svc.id def initialize_repo(connection: Connection, project_name: str, repo_name: str, source_url, azure_devops_pat, github_pat, github_username): try: core_client = connection.clients.get_core_client() project = core_client.get_project(project_name) repo_client = connection.clients.get_git_client() repos = repo_client.get_repositories(project=project_name) repo = next((r for r in repos if r.name.lower() == repo_name.lower()), None) if not repo: raise Exception(f&quot;Repository '{repo_name}' not found in project '{project_name}'.&quot;) uri_encoded_project = quote(project_name) repository_url = f&quot;{connection.base_url}/{uri_encoded_project}/_apis/git/repositories/{repo.id}/importRequests&quot; repository_body = { &quot;parameters&quot;: { &quot;deleteServiceEndpointAfterImportIsDone&quot;:True, &quot;gitSource&quot;: { &quot;overwrite&quot;:False, &quot;url&quot; : source_url}, &quot;tfvcSource&quot;:None, &quot;serviceEndpointId&quot;:create_github_service_connection(connection, project.id, github_pat, github_username) } } pprint(repository_body) response = requests.post(url=repository_url, json=repository_body, headers={&quot;Content-Type&quot;: &quot;application/json&quot;, &quot;Accept&quot;:&quot;api-version=5.0-preview.1&quot;}, auth=HTTPBasicAuth('', azure_devops_pat)) pprint(f&quot;test: {response.text} &quot;) response.raise_for_status() logger.info(f&quot;Initiated repo for {project_name} with repoId {repo.id}&quot;) except Exception as e: logger.error(&quot;An error occurred while initializing the repository: %s&quot;, e) sys.exit(1) </code></pre> <p>Checklist on what I've done:</p> <ul> <li>I've tried making a github type service connection with a personalaccesstoken auth type. This still wont work. The UI API calls create the git type service connection.</li> <li>I've tried different versions of the API including 7.1. The version I have in my code is what the UI uses.</li> <li>I've tried using the username/password combo for the URL e.g https://blahUsername:longpathere:github.com/orgname/project</li> </ul> <p>Some notes:</p> <ul> <li>Yes I know I used a combination of manual API and Python SDK here. The SDK calls work as I was able to retrieve things like project or repo IDs</li> <li>I followed the import request documentation here: <a href="https://learn.microsoft.com/en-us/rest/api/azure/devops/git/import-requests/create?view=azure-devops-rest-7.1&amp;tabs=HTTP" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/rest/api/azure/devops/git/import-requests/create?view=azure-devops-rest-7.1&amp;tabs=HTTP</a></li> <li>Questions I have looked up: <ul> <li><a href="https://stackoverflow.com/questions/64320732/get-github-repository-using-azure-devops-rest-api?rq=2">Get github repository using Azure devops REST API</a></li> <li><a href="https://stackoverflow.com/questions/56916593/azure-devops-import-git-repositories-requiring-authorization-via-api">Azure DevOps import Git repositories requiring authorization via API</a></li> <li><a href="https://stackoverflow.com/questions/66219762/azure-devops-rest-api-broken-importing-private-git-repository">Azure DevOps REST API Broken Importing Private Git Repository</a></li> </ul> </li> </ul>
<python><github><azure-devops>
2025-09-09 20:16:52
0
332
Hydromast
79,760,189
13,172,864
Why isn't my keras model throwing and error when different sizes are passed to the dense layer?
<p>I am working on a dynamic time series multi-class segmentation problem with keras (tensorflow version 2.12.0), and I wanted to see what would happen when I dropped in a dense layer into the network architecture. My expectation is that for any situation where the input size changes (e.g., dynamic time series), you'll need to include some kind of pooling layer that will maintain a fixed input shape into the dense layer. I managed to get it working without any pooling layers, and I'm wondering why I am NOT getting an error at the dense layer when passing dynamically sized inputs into the network.</p> <p><strong>Note: I am using 2d operations to maintain compatibility with opencv</strong></p> <p>Model code:</p> <pre><code>def residual_block(input_layer, filters=64, kernel_size=(1, 2), strides=1): conv_x = keras.layers.Conv2D( filters=filters, kernel_size=kernel_size, padding=&quot;same&quot; )(input_layer) conv_x = keras.layers.Activation(&quot;relu&quot;)(conv_x) conv_y = keras.layers.Conv2D( filters=filters, kernel_size=kernel_size, padding=&quot;same&quot; )(conv_x) conv_y = keras.layers.Activation(&quot;relu&quot;)(conv_y) conv_z = keras.layers.Conv2D( filters=filters, kernel_size=kernel_size, padding=&quot;same&quot; )(conv_y) # expand channels for the sum shortcut_y = keras.layers.Conv2D( filters=filters, kernel_size=(1, 1), padding=&quot;same&quot; )(input_layer) output_block = keras.layers.add([shortcut_y, conv_z]) output_block = keras.layers.Activation(&quot;relu&quot;)(output_block) return output_block def residual_blocks(input_layer, num_blocks=8): x = input_layer for block in range(num_blocks): x = residual_block(x) return x def classifier_block(ecoder_results, num_cats): conv_y = keras.layers.Conv2D(filters=num_cats, kernel_size=(1, 2), padding=&quot;same&quot;)( ecoder_results ) conv_y = keras.layers.Activation(&quot;softmax&quot;)(conv_y) return conv_y def dense_connection_block(ecoder_results, units=64): return keras.layers.Dense(units=64)(ecoder_results) def residual_cnn_w_dense(input_shape, num_categories=2): inputs = tf.keras.layers.Input(shape=input_shape) encoder = residual_blocks(inputs) denseout = dense_connection_block(encoder) outputs = classifier_block(denseout, num_categories) model = tf.keras.Model(inputs=inputs, outputs=outputs, name=&quot;residual_cnn_w_dense&quot;) return model </code></pre> <p>Compile the model for input 3 feature channels and 2 segmentation classes</p> <pre><code>num_time_series_channels = 3 model = residual_cnn_w_dense( input_shape=(None, None, num_time_series_channels), ) model.compile( run_eagerly=True, optimizer=tf.keras.optimizers.Adam(learning_rate=0.001), loss=focal_loss, metrics=dice_coef_multiclass, ) </code></pre> <p>Create some different sized time-series inputs with shape (batch, 1, time, feature-channels), where the time-series length changes.</p> <pre><code>data_input_0 = np.random.randn(12,1,32,3) data_input_1 = np.random.randn(17,1,12,3) </code></pre> <p>Run the model on each different shaped input</p> <pre><code>model(data_input_0).shape, model(data_input_1).shape </code></pre> <p>This works and returns <code>(TensorShape([12, 1, 32, 2]), TensorShape([17, 1, 12, 2]))</code></p> <p>Training the model also works and does not throw an error when the time-series length changes.</p> <p>What is happening here? Why am I not getting an error at the dense layer?</p>
<python><tensorflow><machine-learning><keras><deep-learning>
2025-09-09 18:53:16
3
403
jjschuh
79,760,030
1,145,666
How can I find out if Google AI API used the tool that I provided?
<p>How can I use the <em>Python Gemini API</em> to find out whether the model used the Google Search tool for grounding (and what the results of it were)?</p> <p>I am using the following code to run a prompt (with some images and text) to Google Gemini. The system instruction clearly states to 1) use the tool, and 2) provide the thinking results and whether it actually used the tool in its work.</p> <pre><code>parts = [] for image in images: with open(image, &quot;rb&quot;) as f: parts.append(types.Part.from_bytes(mime_type=&quot;image/jpeg&quot;, data=f.read())) parts.append(types.Part.from_text(text=text_part)) contents = [ types.Content( role=&quot;user&quot;, parts=parts ), ] tools = [ types.Tool(googleSearch=types.GoogleSearch()), ] generate_content_config = types.GenerateContentConfig( thinking_config = types.ThinkingConfig( thinking_budget=-1, ), tools=tools, system_instruction=[ types.Part.from_text(text=system_instruction), ], ) raw_result = &quot;&quot; for chunk in self.client.models.generate_content_stream( model=self.model, contents=contents, config=generate_content_config, ): if chunk.text: raw_result += chunk.text </code></pre> <p>The system instruction includes:</p> <blockquote> <p>You can use the provided tool to enhance the information. In your reasoning, specify if you use the tool, which searches you did and what their results were.</p> </blockquote> <blockquote> <p>Return your reasoning and the final JSON result in the format below. All fields must be populated with a value, even if that value is an empty string &quot;&quot;.</p> </blockquote> <p>The results give me valid information, but what I'm really after is making it do a real-time search, as shown in the <a href="https://ai.google.dev/gemini-api/docs/google-search" rel="nofollow noreferrer">documentation</a>, and I can't figure out how to confirm whether I've been successful via the API.</p> <p>I understand that Gemini can decide when and how to use Google Search to enhance its results, but is there a way to <strong>find out whether it did so through the API</strong>?</p>
<python><google-gemini>
2025-09-09 15:15:15
1
33,757
Bart Friederichs
79,759,686
467,083
python requests response.text logging "Encoding detection: utf_8 is most likely the one."
<p>(my solution below)</p> <p>I have a python script that uses a REST API to get XML files from a tool. This is the heart of it:</p> <pre><code>def get_output(self, url, output_filename): xml_response = self.client.get_request(url) if not xml_response: self.logger.error(f&quot;no xml_response:{output_filename}&quot;) return False else: self.logger.info(f&quot;{output_filename}&quot;) output_file = open(output_filename, 'w', encoding='utf-8') output_file.write(xml_response.text) output_file.close() return True </code></pre> <p>The write call causes a DEBUG log entry:</p> <pre><code>Encoding detection: utf_8 is most likely the one. </code></pre> <p>Is there a way to explicitly tell the write to use utf8 in order to not get that log entry?</p> <p>Thanks!</p> <p>Edit 1: <code>self.client.get_request(url)</code> is essentially:</p> <pre><code>import requests self.headers = {'Accept': 'application/rdf+xml'} self.session = requests.Session() response = self.session.get(url, allow_redirects=True, headers=self.headers) </code></pre> <p>This is a sanitized log entry:</p> <pre><code>2025-09-08T22:00:52+0000;INFO;XXX:./__auto_temp/2025-09-08_2200(+0000)___XXX.xml 2025-09-08T22:01:36+0000;DEBUG;https://XXX &quot;GET /XXX HTTP/1.1&quot; 200 None 2025-09-08T22:01:36+0000;INFO;./__auto_temp/2025-09-08_2200(+0000)___XXX.xml 2025-09-08T22:01:36+0000;INFO;get_output 1 2025-09-08T22:01:36+0000;INFO;get_output 2 2025-09-08T22:01:36+0000;DEBUG;Encoding detection: utf_8 is most likely the one. 2025-09-08T22:01:36+0000;INFO;get_output 3 2025-09-08T22:01:36+0000;INFO;get_output 4 </code></pre> <p>Unless log entries are coming in out-of-order, my experiment that sprinkled logger calls among the open/write/close show it's the write causing the entry.</p> <pre><code>self.logger.info(f&quot;{output_filename}&quot;) self.logger.info('get_output 1') output_file = open(output_filename, 'w', encoding='utf-8') self.logger.info('get_output 2') output_file.write(xml_response.text) self.logger.info('get_output 3') output_file.close() self.logger.info('get_output 4') return True </code></pre> <p><strong>Edit 2: My solution based on deceze's answer was to add <code>xml_response.encoding = 'utf-8'</code> before the write; the log message no longer appears. I changed the title of the post to reflect the actual problem. I did not realize text was a property with some code behind it, I thought it was a simple get to the buffer.</strong></p>
<python><utf-8>
2025-09-09 08:57:10
1
757
oaklodge
79,759,628
10,197,791
How to Bypass "Grant Access" when using Xlwings
<p>I am in the process of porting over some python excel automation scripts to work on MacOS (previously they were run on windows, and used the pywin32 library to &quot;refresh&quot; an excel file - the purpose of this was to make sure that all formulas were computed before using pandas to read the excel file to a data frame).</p> <p>I have run into a problem, where when I try and open an excel file using xlwings I get a popup asking to &quot;Grant Access&quot; as seen in the image below:</p> <p><a href="https://i.sstatic.net/fxD4v16t.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fxD4v16t.png" alt="enter image description here" /></a></p> <p>I tried making sure Excel and python have Full Disk Access, I have tried changes my Excel security settings to allow anyone to read/write, but nothing seems to solve this (besides for clicking &quot;Grant Access&quot;, but this script will be parsing many excel files and manually handling these popups are not feasible).</p> <p>Here is the code that triggers the popup:</p> <pre><code>try: import xlwings as xw # type: ignore with xw.App(visible=False) as app: try: app.display_alerts = False app.screen_updating = False except Exception: pass # HERE IS WHERE THE GRANT ACCESS POPUP IS TRIGGERED wb = app.books.open(file_path, update_links=False) try: app.calculate() except Exception: pass wb.save() wb.close() logging.info(&quot;Excel formulas refreshed via xlwings&quot;) return except Exception as e: logging.info(f&quot;xlwings refresh unavailable or failed: {e}&quot;) </code></pre>
<python><excel><xlwings>
2025-09-09 07:49:52
1
375
NicLovin
79,759,341
1,747,834
How to create kwargs from tuple?
<p>I have a function with lots of optional arguments. I want to apply it to an enumeration of tuples, that'd specify only the non-default values for each parameter. Something like:</p> <pre class="lang-py prettyprint-override"><code>def myf(foo = 11, bar = 17): print(foo + bar) for tup in [(foo = 1), (bar = 2)]: myf(*tup) </code></pre> <p>The above should print 18 and 13, but I cannot figure out the syntax -- is this even possible?</p>
<python>
2025-09-08 21:05:46
1
4,246
Mikhail T.
79,759,280
1,747,834
How to use multiprocessing module with interdependent tasks?
<p>The existing examples of the <a href="https://docs.python.org/3/library/multiprocessing.html" rel="nofollow noreferrer"><code>multiprocessing</code></a>-module assume, the tasks are independent, but my situation is &quot;hairier&quot;.</p> <p>I need to process several hundred thousands of <em>widgets</em>. These are packaged into <em>boxes</em> of 50-100 widgets per box. The processing times may vary widely...</p> <p>All <em>boxes</em> are completely independent from each other. All <em>widgets</em> in one box are independent too, <em>except</em>, each one depends on the very first widget from the same box.</p> <p>So, I thought, I'd create a queue for tasks and a queue for results -- making both queues arguments of <code>multiprocessing.Process</code> target.</p> <p>Now I need to first loop through all <em>boxes</em>, sending first <em>widget</em> from each one to processing. Then, as soon as a box' first widget is done, I'd submit all of the remaining widgets from the box.</p> <p>The question is, how to do that second part properly? Below is the simplified code sending the first widgets from every box for processing.</p> <pre class="lang-py prettyprint-override"><code>def widgetLoop(taskQueue, resultQueue): while True: task = taskQueue.get() start = resource.getrusage(resource.RUSAGE_SELF)[0] result = process(*task) finish = resource.getrusage(resource.RUSAGE_SELF)[0] logger.info(&quot;Processed %s in %g&quot;, task, finish - start) resultQueue.put(result) tasks = multiprocessing.Queue() results = multiprocessing.Queue() p = multiprocessing.Pool(None, widgetLoop, (tasks, results)) Pending = {} with open(sys.argv[1]) as boxes: for box in boxes: (first, *rest) = box tasks.put(*first) Pending[first] = *rest </code></pre> <p>How do I arrange for the processing of the results -- in the main process -- so that a box' remaining widgets are added to the <code>tasks</code>-queue as soon as the box' first widget is processed?</p> <p>How would the processing finish, on both ends? Is there some nice pythonic way of doing all this?</p>
<python><python-multiprocessing><message-queue>
2025-09-08 19:37:08
2
4,246
Mikhail T.
79,759,200
4,799,172
Using pybabel to translate tooltip text inside a HTML tag
<p>I've basically finished translating my flask frontend into a lot of different languages but there are a couple of tooltips that I just can't seem to figure out. Technically, these need to be strings against the <code>title=</code> property, so the <em>unmodified</em> version would look like:</p> <pre class="lang-html prettyprint-override"><code> &lt;div class=&quot;col col-lg-3&quot;&gt; &lt;label for=&quot;newMachineIRR&quot;&gt; {{ _('Ideal Run Rate (unit/min)') }} &lt;button type=&quot;button&quot; id=&quot;irrTooltip&quot; class=&quot;fa-solid fa-circle-info&quot; data-bs-toggle=&quot;tooltip&quot; data-bs-placement=&quot;top&quot; data-bs-delay=&quot;0&quot; title=&quot;Used to specify an optional default ideal run rate for continuous products. It has no effect on batch products. Setting a default does not prevent you from specifiying custom run rates for particular products&quot;&gt; &lt;/button&gt; &lt;/label&gt; &lt;/div&gt; </code></pre> <p>No matter what I have tried, I cannot get <code>babel</code> to pick these strings up for translation. Naively, I thought of:</p> <pre class="lang-html prettyprint-override"><code>&lt;div class=&quot;col col-lg-3&quot;&gt; &lt;label for=&quot;newMachineIRR&quot;&gt; {{ _('Ideal Run Rate (unit/min)') }} &lt;button type=&quot;button&quot; id=&quot;irrTooltip&quot; class=&quot;fa-solid fa-circle-info&quot; data-bs-toggle=&quot;tooltip&quot; data-bs-placement=&quot;top&quot; data-bs-delay=&quot;0&quot; title=&quot;{{ _('Used to specify an optional default ideal run rate for continuous products. It has no effect on batch products. Setting a default does not prevent you from specifiying custom run rates for particular products') }}&quot;&gt; &lt;/button&gt; &lt;/label&gt; &lt;/div&gt; </code></pre> <p>That won't get detected. It also won't be detected/translated if I have it all on one line. I have tried a number of increasingly abstract thoughts to no avail. I can't seem to find this use case defined or described anywhere; what is the correct way to translate the tooltip text? The best I've thought of (perhaps) is to define the text in a hidden div and then use JS to update the <code>title</code> property of the tag on page load but, am I missing something simple here?</p>
<python><flask><python-babel>
2025-09-08 17:46:24
1
13,314
roganjosh
79,759,016
13,132,728
Change the decimal value of each value in each column to 0.5 while maintaining the same leading integer python pandas
<h1>CONTEXT</h1> <p>I am NOT trying to round to the nearest 0.5. I know there are questions on here that address that. Rather, I am trying to change the decimal value of each value in each row to 0.5 while maintaining the same leading integer. For example, I have <code>df</code>:</p> <pre><code>df = pd.DataFrame({'foo':[1.5,5.5,7.11116],'bar':[3.66666661, 10.5, 4.5],'baz':[8.5,3.111118,2.5]},index=['a','b','c']) df foo bar baz a 1.50000 3.666667 8.500000 b 5.50000 10.500000 3.111118 c 7.11116 4.500000 2.500000 </code></pre> <h1>INTENDED OUTPUT</h1> <p>I would like each cell to end in 0.5. As you can see, there are some erroneous values. Here is my intended output:</p> <pre><code> foo bar baz a 1.5 3.5 8.5 b 5.5 10.5 3.5 c 7.5 4.5 2.5 </code></pre> <h1>WHAT I HAVE TRIED</h1> <p>At first I thought I could maybe iterate through the columns in a list comprehension, but then figured a combination of <code>where()</code> and <code>apply()</code>, or maybe just <code>apply()</code> with a lambda function might be more readable:</p> <pre><code>df = df.where(df % 0.5 == 0, 'fix this') df foo bar baz a 1.5 fix this 8.5 b 5.5 10.5 fix this c fix this 4.5 2.5 </code></pre> <p>Where I am stumped is trying to create a function that <strong>changes</strong> the decimal value to .5 rather than <strong>rounding</strong> to the nearest 0.5 (which, for example, in this case, round 3.111118 to 3.0 when I want 3.5).</p>
<python><pandas><dataframe><apply><rounding>
2025-09-08 14:20:55
1
1,645
bismo
79,758,933
2,386,113
How to efficiently create an array from a list containing arrays of different lengths
<p>I have a list containing 2D arrays with same number of rows but different number of columns. I need to create a padded array of arrays with same shape. My current code is below but due to the for loop, the code is very inefficient for large lists. How can I create a padded array more efficiently?</p> <p><strong>MWE:</strong></p> <pre><code>import numpy as np list_length = 552 # Irregular shape arrays in the list # list_with_irregular_arrays[0].shape = (10, 40) # . # . # list_with_irregular_arrays[100].shape = (10, 60) list_with_irregular_arrays = [np.random.rand(10, np.random.randint(30, 70)) for _ in range(list_length)] # NOTE: Only to create example data # Create padded array num_rows = 10 num_cols = max(arr.shape[1] for arr in list_with_irregular_arrays) padded_array = np.full((list_length, num_rows, num_cols), np.nan, dtype=np.float32) for i in range(list_length): arr = list_with_irregular_arrays[i] padded_array[i, :, :arr.shape[1]] = arr </code></pre>
<python><arrays><list><numpy>
2025-09-08 13:19:45
1
5,777
skm
79,758,868
1,815,054
QGIS headless + Python - exporting PDF with XYZ tiles layer
<p>I have a script that generates a PDF file from QGIS headless running on Ubuntu. The script generates a map view from a geojson feature which is centered on the view and a Mapbox XYZ tile layer is put behind it.</p> <p>When running the script for the first time, I get a segmentation error and a faulty PDF is generated. The second run of the same script produces a correct PDF.</p> <p>What might be happening is, that the PDF is being exported while the XYZ tiles are not fully loaded. The second run uses cached tiles and no error ocurres. But correct me if I'm wrong.</p> <p>So my question is, how to tell when the tiles are fully loaded and it is safe to generate the final PDF? (Same happens when generating a PNG output)</p> <p>This is my script:</p> <pre><code>import os os.environ[&quot;QT_QPA_PLATFORM&quot;] = &quot;offscreen&quot; os.environ[&quot;QGIS_DISABLE_CACHE&quot;] = &quot;1&quot; import sys from qgis.PyQt.QtGui import QColor from qgis.PyQt.QtXml import QDomDocument from qgis.core import ( QgsApplication, QgsVectorLayer, QgsRasterLayer, QgsProject, QgsCoordinateReferenceSystem, QgsPrintLayout, QgsLayoutItemMap, QgsLayoutExporter, QgsReadWriteContext, QgsCoordinateTransform, QgsNetworkAccessManager ) printTemplatePath = os.path.join(&quot;assets&quot;, &quot;print_template.qpt&quot;) MAPBOX_MAP_TOKEN = &quot;***&quot; props = { &quot;color&quot;: [255, 16, 16], &quot;opacity&quot;: 0.75, &quot;strokeColor&quot;: [255, 36, 36], &quot;strokeWidth&quot;: 0.5, } def loadLayer(geojsonPath): layer = QgsVectorLayer(geojsonPath, &quot;geojsonLayer&quot;, &quot;ogr&quot;) if not layer.isValid(): print(f&quot;Failed to load: {geojsonPath}&quot;) qgs.exitQgis() sys.exit(1) QgsProject.instance().addMapLayer(layer) return layer def setSymbology(layer): singleSymbolRenderer = layer.renderer() symbol = singleSymbolRenderer.symbol() symbol.setColor(QColor.fromRgb(props['color'][0], props['color'][1], props['color'][2])) symbol.setOpacity(props['opacity']) symbol.symbolLayer(0).setStrokeColor(QColor(props['strokeColor'][0], props['strokeColor'][1], props['strokeColor'][2])) symbol.symbolLayer(0).setStrokeWidth(props['strokeWidth']) layer.triggerRepaint() def addMapboxLayer(): username = &quot;***&quot; style_id = &quot;***&quot; try: uri = f&quot;type=xyz&amp;url=https://api.mapbox.com/styles/v1/{username}/{style_id}/tiles/256/{{z}}/{{x}}/{{y}}?access_token={MAPBOX_MAP_TOKEN}&quot; layer = QgsRasterLayer(uri, &quot;Mapbox Style&quot;, &quot;wms&quot;) QgsProject.instance().addMapLayer(layer) root = QgsProject.instance().layerTreeRoot() node = root.findLayer(layer.id()) clone = node.clone() root.insertChildNode(-1, clone) root.removeChildNode(node) except Exception as e: print(f&quot;Error with tile layer: {e}&quot;) exit(4) def waitForMapReady(): # need to determine when all the tiles are loaded pass def setProjectCrs(): crs = QgsCoordinateReferenceSystem(&quot;EPSG:8857&quot;) QgsProject.instance().setCrs(crs) def getAdjustedLayerExtentForMapFrame(mapItem, polygons, layerCrs, padding): frameRect = mapItem.rect() targetAspect = frameRect.width() / frameRect.height() # combine extents of all polygons combinedExtent = None for geom in polygons: if geom and not geom.isEmpty(): if combinedExtent is None: combinedExtent = geom.boundingBox() else: combinedExtent.combineExtentWith(geom.boundingBox()) # transform to project CRS if needed projectCrs = QgsProject.instance().crs() if layerCrs != projectCrs: tr = QgsCoordinateTransform(layerCrs, projectCrs, QgsProject.instance()) extentProj = tr.transformBoundingBox(combinedExtent) else: extentProj = combinedExtent # apply padding extentProj.setXMinimum(extentProj.xMinimum() - extentProj.width()*(padding-1)/2) extentProj.setXMaximum(extentProj.xMaximum() + extentProj.width()*(padding-1)/2) extentProj.setYMinimum(extentProj.yMinimum() - extentProj.height()*(padding-1)/2) extentProj.setYMaximum(extentProj.yMaximum() + extentProj.height()*(padding-1)/2) # adjust to match map frame aspect ratio extentWidth = extentProj.width() extentHeight = extentProj.height() layerAspect = extentWidth / extentHeight if layerAspect &gt; targetAspect: newHeight = extentWidth / targetAspect yCenter = (extentProj.yMinimum() + extentProj.yMaximum()) / 2 extentProj.setYMinimum(yCenter - newHeight/2) extentProj.setYMaximum(yCenter + newHeight/2) else: newWidth = extentHeight * targetAspect xCenter = (extentProj.xMinimum() + extentProj.xMaximum()) / 2 extentProj.setXMinimum(xCenter - newWidth/2) extentProj.setXMaximum(xCenter + newWidth/2) return extentProj def printPdf(geojsonLayer, features, templatePath, outputFileName): project = QgsProject.instance() layout = QgsPrintLayout(project) layout.initializeDefaults() with open(templatePath) as f: templateContent = f.read() doc = QDomDocument() doc.setContent(templateContent) context = QgsReadWriteContext() layout.loadFromTemplate(doc, context) mapItems = [item for item in layout.items() if isinstance(item, QgsLayoutItemMap)] if not mapItems: raise Exception(&quot;No map item found in layout!&quot;) mapItem = mapItems[0] polygons = [f.geometry() for f in features if f.hasGeometry()] if not polygons: raise ValueError(&quot;No valid geometries found in features&quot;) # Adjust map extent adjustedExtent = getAdjustedLayerExtentForMapFrame(mapItem, polygons, geojsonLayer.crs(), padding=1.2) mapItem.setExtent(adjustedExtent) mapItem.refresh() waitForMapReady() exporter = QgsLayoutExporter(layout) pdfSettings = QgsLayoutExporter.PdfExportSettings() pdfSettings.appendGeoreference = False pdfSettings.exportMetadata = False pdfSettings.rasterizeWholeImage = True pdfSettings.dpi = 144 pdfSettings.imageCompression = &quot;JPEG&quot; pdfSettings.quality = 70 if exporter.exportToPdf(outputFileName, pdfSettings) == QgsLayoutExporter.Success: print(f&quot;Exported PDF: {outputFileName}&quot;) return outputFileName else: print(f&quot;Failed to export PDF: {pdfPath}&quot;) def main(): if len(sys.argv) &lt; 3: print(&quot;Usage: python3 &lt;inputJson&gt; &lt;outputFileName&gt;&quot;) sys.exit(1) inputJson = sys.argv[1] outputFileName = sys.argv[2] qgs = QgsApplication([], False) QgsApplication.setPrefixPath(&quot;/usr&quot;, True) QgsApplication.setMaxThreads(1) QgsNetworkAccessManager.instance() qgs.initQgis() addMapboxLayer() geojsonLayer = loadLayer(inputJson) setSymbology(geojsonLayer) setProjectCrs() features = [f for f in geojsonLayer.getFeatures()] printPdf(geojsonLayer, features, printTemplatePath, outputFileName) qgs.exitQgis() if __name__ == &quot;__main__&quot;: main() </code></pre> <p><strong>EDIT:</strong></p> <p>here is the backtrace of the error:</p> <pre><code>Thread 1 &quot;python3&quot; received signal SIGSEGV, Segmentation fault. 0x00007fffece36619 in QGraphicsItemPrivate::removeExtraItemCache() () from /lib/x86_64-linux-gnu/libQt5Widgets.so.5 (gdb) bt #0 0x00007fffece36619 in QGraphicsItemPrivate::removeExtraItemCache() () from /lib/x86_64-linux-gnu/libQt5Widgets.so.5 #1 0x00007fffece396d1 in QGraphicsItem::setCacheMode(QGraphicsItem::CacheMode, QSize const&amp;) () from /lib/x86_64-linux-gnu/libQt5Widgets.so.5 #2 0x00007fffee3a6b26 in QgsLayoutExporter::renderRegion(QPainter*, QRectF const&amp;) const () from /lib/libqgis_core.so.3.40.9 #3 0x00007fffee3a7014 in QgsLayoutExporter::renderRegionToImage(QRectF const&amp;, QSize, double) const () from /lib/libqgis_core.so.3.40.9 #4 0x00007fffee3a8129 in QgsLayoutExporter::renderPageToImage(int, QSize, double) const () from /lib/libqgis_core.so.3.40.9 #5 0x00007fffee3a8632 in QgsLayoutExporter::printPrivate(QPagedPaintDevice*, QPainter&amp;, bool, double, bool) () from /lib/libqgis_core.so.3.40.9 #6 0x00007fffee3b23a3 in QgsLayoutExporter::exportToPdf(QString const&amp;, QgsLayoutExporter::PdfExportSettings const&amp;) () from /lib/libqgis_core.so.3.40.9 #7 0x00007fffdf477f89 in ?? () from /usr/lib/python3/dist-packages/qgis/_core.cpython-312-x86_64-linux-gnu.so #8 0x0000000000581a6f in cfunction_call (func=0x7fffdc22d760, args=&lt;optimized out&gt;, kwargs=&lt;optimized out&gt;) at ../Objects/methodobject.c:537 #9 0x00000000005492f5 in _PyObject_MakeTpCall (tstate=0xba6ac8 &lt;_PyRuntime+459656&gt;, callable=0x7fffdc22d760, args=&lt;optimized out&gt;, nargs=2, keywords=0x0) at ../Objects/call.c:240 #10 0x0000000000549d2d in _PyObject_VectorcallTstate (kwnames=&lt;optimized out&gt;, nargsf=&lt;optimized out&gt;, args=&lt;optimized out&gt;, callable=&lt;optimized out&gt;, tstate=&lt;optimized out&gt;) at ../Include/internal/pycore_call.h:90 #11 0x00000000005d68bf in _PyEval_EvalFrameDefault (tstate=tstate@entry=0xba6ac8 &lt;_PyRuntime+459656&gt;, frame=&lt;optimized out&gt;, frame@entry=0x7ffff7fb2020, throwflag=throwflag@entry=0) at Python/bytecodes.c:2706 #12 0x00000000005d4dab in _PyEval_EvalFrame (throwflag=0, frame=0x7ffff7fb2020, tstate=0xba6ac8 &lt;_PyRuntime+459656&gt;) at ../Include/internal/pycore_ceval.h:89 #13 _PyEval_Vector (kwnames=0x0, argcount=0, args=0x0, locals=0x7ffff7c0dbc0, func=0x7ffff7bee160, tstate=0xba6ac8 &lt;_PyRuntime+459656&gt;) at ../Python/ceval.c:1683 #14 PyEval_EvalCode (co=co@entry=0x7ffff7b3dc30, globals=globals@entry=0x7ffff7c0dbc0, locals=locals@entry=0x7ffff7c0dbc0) at ../Python/ceval.c:578 #15 0x0000000000607fc2 in run_eval_code_obj (locals=0x7ffff7c0dbc0, globals=0x7ffff7c0dbc0, co=0x7ffff7b3dc30, #16 run_mod (mod=&lt;optimized out&gt;, filename=&lt;optimized out&gt;, globals=0x7ffff7c0dbc0, locals=0x7ffff7c0dbc0, flags=&lt;optimized out&gt;, arena=&lt;optimized out&gt;) at ../Python/pythonrun.c:1743 #17 0x00000000006b4393 in pyrun_file (fp=fp@entry=0xbab490, filename=filename@entry=0x7ffff79bdc50, start=start@entry=257, globals=globals@entry=0x7ffff7c0dbc0, locals=locals@entry=0x7ffff7c0dbc0, closeit=closeit@entry=1, flags=0x7fffffffda48) at ../Python/pythonrun.c:1643 #18 0x00000000006b40fa in _PyRun_SimpleFileObject (fp=fp@entry=0xbab490, filename=filename@entry=0x7ffff79bdc50, closeit=closeit@entry=1, flags=flags@entry=0x7fffffffda48) at ../Python/pythonrun.c:433 #19 0x00000000006b3f2f in _PyRun_AnyFileObject (fp=0xbab490, filename=filename@entry=0x7ffff79bdc50, closeit=closeit@entry=1, flags=flags@entry=0x7fffffffda48) at ../Python/pythonrun.c:78 #20 0x00000000006bbf45 in pymain_run_file_obj (skip_source_first_line=0, filename=0x7ffff79bdc50, program_name=0x7ffff7c0dcf0) at ../Modules/main.c:360 #21 pymain_run_file (config=0xb496a8 &lt;_PyRuntime+77672&gt;) at ../Modules/main.c:379 #22 pymain_run_python (exitcode=0x7fffffffda3c) at ../Modules/main.c:629 #23 Py_RunMain () at ../Modules/main.c:709 #24 0x00000000006bba2d in Py_BytesMain (argc=&lt;optimized out&gt;, argv=&lt;optimized out&gt;) at ../Modules/main.c:763 #25 0x00007ffff7c991ca in __libc_start_call_main (main=main@entry=0x518bd0 &lt;main&gt;, argc=argc@entry=4, argv=argv@entry=0x7fffffffdc88) at ../sysdeps/nptl/libc_start_call_main.h:58 #26 0x00007ffff7c9928b in __libc_start_main_impl (main=0x518bd0 &lt;main&gt;, argc=4, argv=0x7fffffffdc88, init=&lt;optimized out&gt;, fini=&lt;optimized out&gt;, rtld_fini=&lt;optimized out&gt;, stack_end=0x7fffffffdc78) at ../csu/libc-start.c:360 #27 0x0000000000656a35 in _start () </code></pre>
<python><qgis><pyqgis>
2025-09-08 12:21:25
0
3,508
Dusan
79,758,809
3,486,078
Does head(n) in Polars agg guarantee alignment across multiple columns within the same group
<p>When using <code>group_by().agg()</code> in Polars to apply <code>head(n)</code> on multiple columns simultaneously, is it guaranteed that the returned elements come from the same original rows?</p> <p>For example, when grouping by code and applying <code>.head(2)</code> to several columns (e.g., 'interest', 'embedding', 'a'), will the result contain aligned values — such as <code>i1,e1,a1</code> and <code>i2,e2,a2</code> — corresponding to the first two rows within each group?</p> <p>I’ve tested this with sample code below, and it behaves as expected. But I’m wondering: is this alignment behavior officially guaranteed, or is it an implementation detail that could change?</p> <pre class="lang-py prettyprint-override"><code>import polars as pl # 示例数据 df = pl.DataFrame({ 'code': ['A', 'A', 'A', 'B', 'B'], 'interest': ['i1', 'i2', 'i3', 'i4', 'i5'], 'embedding': ['e1', 'e2', 'e3', 'e4', 'e5'], 'a': ['a1', 'a2', 'a3', 'a4', 'a5'] }) # 分组并获取前2个样本 result = ( df.group_by('code') .agg([ pl.col('interest').head(2).alias('interests'), pl.col('embedding').head(2).alias('embeddings'), pl.col('a').head(2).alias('a') ]) ) print(result) </code></pre>
<python><dataframe><python-polars>
2025-09-08 11:05:51
2
474
K_Augus
79,758,806
11,863,823
Trying to reduce the verboseness of __post_init__ in a python dataclass
<p>I am writing a Python config script that creates an array of input files in a domain-specific language (DSL), so my use case is a bit unusual. In this scenario, we want medium-level users to be able to edit the <code>RandomRequest</code> class / create various other classes following similar patterns, that will be used by the input files generator. This way, the middle-level user does not need to edit the core part of the input file generation, even when writing new models in the DSL ; they just need to create the Python objects describing the DSL objects they defined, and the core files translate this accordingly.</p> <p>The MWE I have for the file that the middle-level users will have to edit to match their use case is as follows:</p> <pre class="lang-py prettyprint-override"><code>from enum import Enum import typing as tp import dataclasses as dc import random class State(Enum): INC = &quot;incoming&quot; OUT = &quot;outgoing&quot; SLEEP = &quot;sleeping&quot; def random_generator_from_enum[T: Enum](E: type[T]) -&gt; tp.Callable[[], T]: &quot;&quot;&quot; Returns a function that, when called, returns a random state of the Enum E. &quot;&quot;&quot; return lambda: random.choice(list(E)) avg_delays = { State.INC: 10, State.OUT: 20, State.SLEEP: 100 } def delays_from_state(state: State) -&gt; tp.Callable[[], int]: &quot;&quot;&quot; Returns a function that, when called, returns a random delay in seconds centered around avg_delays[state]. &quot;&quot;&quot; return lambda: int(random.gauss(avg_delays[state])) # not working @dc.dataclass class RandomRequest: state: State = dc.field(default_factory=random_generator_from_enum(State)) delay: int = dc.field(default_factory=delays_from_state(state)) if __name__ == '__main__': # The core generator will create and handle many `RandomRequest` instances. print(RandomRequest()) </code></pre> <p>This is what I would like to do. Of course, it doesn't work because in <code>RandomRequest</code>, I try to use the <code>state</code> variable that is not defined yet. Same issue obviously arises if I try to use <code>self.state</code>, <code>cls.state</code>, or workarounds based on default values instead of default factories. The usual way to handle this is to use <code>__post_init__</code>:</p> <pre class="lang-py prettyprint-override"><code>@dc.dataclass class RandomRequest: state: State = dc.field(default_factory=random_generator_from_enum(State)) delay: int = dc.field(init=False) def __post_init__(self): self.delay = delays_from_state(self.state)() </code></pre> <p>However, as the class must be edited and maintained by middle-level users, and as the number of request properties and possible factory functions for each property can grow arbitrarily large, this would make it quite tedious to read and maintain for these users, while a syntax similar to the one I wanted to use above keeps it simpler, with all lines to edit in the same place, and one line per custom property. Using <code>__post_init__</code> in my usecase quickly makes the result look like this, which is very error-prone (and I didn't even use multiple <code>Enum</code>s or <code>default_factories</code>:</p> <pre class="lang-py prettyprint-override"><code>@dc.dataclass class RandomRequest: state_ini: State = dc.field(default_factory=random_generator_from_enum(State)) state_aim: State = dc.field(default_factory=random_generator_from_enum(State)) state_req: State = dc.field(default_factory=random_generator_from_enum(State)) delay_ini: int = dc.field(init=False) delay_aim: int = dc.field(init=False) delay_req: int = dc.field(init=False) delay_ini2: int = dc.field(init=False) delay_aim2: int = dc.field(init=False) delay_req2: int = dc.field(init=False) delay_ini3: int = dc.field(init=False) delay_aim3: int = dc.field(init=False) delay_req3: int = dc.field(init=False) def __post_init__(self): self.delay_ini = delays_from_state(self.state_ini)() self.delay_aim = delays_from_state(self.state_aim)() self.delay_req = delays_from_state(self.state_req)() self.delay_ini2 = delays_from_state(self.state_ini)() self.delay_aim2 = delays_from_state(self.state_aim)() self.delay_req2 = delays_from_state(self.state_req)() self.delay_ini3 = delays_from_state(self.state_ini)() self.delay_aim3 = delays_from_state(self.state_aim)() self.delay_req3 = delays_from_state(self.state_req)() </code></pre> <p>instead of the alternative I would like to be able to use:</p> <pre class="lang-py prettyprint-override"><code># not working @dc.dataclass class RandomRequest: state_ini: State = dc.field(default_factory=random_generator_from_enum(State)) state_aim: State = dc.field(default_factory=random_generator_from_enum(State)) state_req: State = dc.field(default_factory=random_generator_from_enum(State)) delay_ini: int = dc.field(default_factory=delays_from_state(state_ini)) delay_aim: int = dc.field(default_factory=delays_from_state(state_aim)) delay_req: int = dc.field(default_factory=delays_from_state(state_req)) delay_ini2: int = dc.field(default_factory=delays_from_state(state_ini)) delay_aim2: int = dc.field(default_factory=delays_from_state(state_aim)) delay_req2: int = dc.field(default_factory=delays_from_state(state_req)) delay_ini3: int = dc.field(default_factory=delays_from_state(state_ini)) delay_aim3: int = dc.field(default_factory=delays_from_state(state_aim)) delay_req3: int = dc.field(default_factory=delays_from_state(state_req)) </code></pre> <p>I'm looking for possible workarounds to be closer to the desired syntax. I would like to keep a <code>class</code> structure for my requests instead of a function that returns a request, as they also have interesting inherited methods that help validating the provided config file by generating the appropriate test suite. However, the more I think about it, the less I believe I will be able to use <code>dataclasses</code> for this, although the simplicity of use of these structures was very adapted to my goals.</p> <p>Is there still any way to make this work with dataclasses, or even regular classes, or do I need to completely change the way I intended to make this work?</p>
<python><enums><python-dataclasses>
2025-09-08 11:03:24
1
628
globglogabgalab
79,758,737
5,121,448
Error when trying to access data in h5 file using h5py
<p>I am trying to read an h5 file using python</p> <pre><code>with h5py.File(filename, 'r') as file: print(&quot;file.keys() = &quot;, file.keys()) a_group_key = list(file.keys())[0] data = list(file[a_group_key]) print(data) </code></pre> <p>but the code above leads to the error</p> <pre><code> File &quot;h5py/_selector.pyx&quot;, line 376, in h5py._selector.Reader.read OSError: Can't synchronously read data (can't open directory (/usr/local/hdf5/lib/plugin). Please verify its existence) </code></pre> <p>The output is</p> <pre><code>file.keys() = &lt;KeysViewHDF5 ['d1', 'd2', 'd3', 'd4']&gt; </code></pre> <p>I noticed that the /usr/local/hdf5/lib/plugin directory does not exist on my system, but reinstalling h5py with pip did not resolve that. My pip does install the package for the correct python version.</p> <p>It works if I</p> <pre><code>pip install hdf5plugin </code></pre> <p>and</p> <pre><code>import hdf5plugin </code></pre>
<python><hdf5><h5py>
2025-09-08 10:00:51
1
4,478
carl
79,758,380
4,966,317
How to add objects/links to a set of links in beanie?
<p>Assume that I have these <code>Beanie Document</code>s which are based, by the way, on <code>Pydantic Model</code>s:</p> <p><strong>File name:</strong> <code>models.py</code></p> <pre class="lang-py prettyprint-override"><code>from beanie import Document, Link class A(Document): first: int second: str class B(Document): third: float a_links: set[Link[A]] = {} </code></pre> <p>and I have this <code>FastAPI route</code>:</p> <p><strong>File name:</strong> <code>main.py</code></p> <pre class="lang-py prettyprint-override"><code>from fastapi import FastAPI, HTTPException, status from beanie import PydanticObjectId, Link from .models import A, B app = FastAPI() @app.post('/b/{b_object_id}/add-link/{a_object_id}') async def add_link(b_object_id: PydanticObjectId, a_object_id: PydanticObjectId): b = await B.get(b_object_id) if not b: raise HTTPException(status_code=status.HTTP_404_NOT_FOUND) a = await A.get(a_object_id) if not a: raise HTTPException(status_code=status.HTTP_404_NOT_FOUND) b.a_links.add(Link(a)) await b.save() return b </code></pre> <p>I am talking about this line of code:</p> <pre class="lang-py prettyprint-override"><code>b.a_links.add(Link(a)) </code></pre> <p>If I wrote it as it is, I will get this error: <code>Parameter 'document_class' unfilled</code></p> <p>Also, if I wrote it as:</p> <pre class="lang-py prettyprint-override"><code>b.a_links.add(Link(a, document_class=A)) </code></pre> <p>I will get this error: <code>Expected type 'DBRef', got 'A' instead</code></p> <p>Finally, if I wrote it as:</p> <pre class="lang-py prettyprint-override"><code>b.a_links.add(Link(ref=a.id, document_class=A)) </code></pre> <p>I will get this error: <code>Expected type 'DBRef', got 'PydanticObjectId | None' instead</code></p> <p>How to add it in a correct way?</p>
<python><mongodb><fastapi><pydantic><beanie>
2025-09-07 22:15:29
1
2,643
Ambitions
79,758,337
11,986,368
Using Sphinx autosummary to generate documentation for class instances stored as attributes during instantiation of another class
<p>I have a class called <code>WebAPI</code> that instantiates and stores a <code>UserEndpoints</code> object inside its constructor:</p> <pre class="lang-py prettyprint-override"><code># /src/minim/api/spotify/_core.py ... # other imports from .._shared import OAuth2API from ._web_api.users import UserEndpoints class WebAPI(OAuth2API): &quot;&quot;&quot; Spotify Web API client. &quot;&quot;&quot; def __init__(self, ...) -&gt; None: &quot;&quot;&quot; Parameters ---------- ... &quot;&quot;&quot; self.users = UserEndpoints(self) ... # other logic def some_method(self) -&gt; None: &quot;&quot;&quot; Do nothing. &quot;&quot;&quot; pass ... # other methods </code></pre> <p>The <code>WebAPI</code> class is imported in <code>/src/minim/api/spotify/__init__.py</code> so that Sphinx autosummary can find it:</p> <pre class="lang-py prettyprint-override"><code># /src/minim/api/spotify/__init__.py from ._core import WebAPI __all__ = [&quot;WebAPI&quot;] </code></pre> <p><code>UserEndpoints</code> is defined in <code>/src/minim/api/spotify/_web_api/users.py</code>:</p> <pre class="lang-py prettyprint-override"><code># /src/minim/api/spotify/_web_api/users.py ... # imports class UserEndpoints: &quot;&quot;&quot; Spotify Web API user endpoints. &quot;&quot;&quot; def __init__(self, client: &quot;WebAPI&quot;) -&gt; None: &quot;&quot;&quot; Parameters ---------- ... &quot;&quot;&quot; self._client = client def get_me(self) -&gt; dict[str, Any]: &quot;&quot;&quot; ... &quot;&quot;&quot; self._client._require_scope( &quot;get_me&quot;, {&quot;user-read-private&quot;, &quot;user-read-email&quot;} ) return self._client._request(&quot;get&quot;, &quot;me&quot;).json() </code></pre> <p>Currently, Sphinx autosummary generates a page for <code>WebAPI</code> and all its methods. Is it possible to have Sphinx include <code>UserEndpoint</code>'s methods as <code>users.&lt;method&gt;</code> in <code>WebAPI</code>'s documentation?</p> <p>For example, a text mockup of the <code>WebAPI</code> documentation page might look like:</p> <pre class="lang-none prettyprint-override"><code>class minim.api.spotify.WebAPI(...) Spotify Web API client. Parameters: ... Methods: some_method | Do nothing. users.get_me | ... some_method(...) -&gt; None Do nothing. Parameters: ... users.get_me(...) -&gt; dict[str, Any] ... Parameters: ... </code></pre>
<python><python-sphinx><autosummary>
2025-09-07 20:05:31
0
528
Benjamin Ye
79,757,853
4,701,426
Manipulating a large dataframe most efficiently
<p>Imagine I have this dataframe called temp:</p> <pre><code>temp = pd.DataFrame(index = [x for x in range(0, 10)], columns = list('abcd')) for row in temp.index: temp.loc[row] = default_rng().choice(10, size=4, replace=False) temp.loc[1, 'b'] = np.nan temp.loc[3, 'd'] = np.nan </code></pre> <p>df:</p> <p><a href="https://i.sstatic.net/Wa4PdGwX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Wa4PdGwX.png" alt="enter image description here" /></a></p> <p>The values are the same nature as the indices. My goal is to create an adjacency matrix where the indices and columns are temp.index, where the matrix shows what values have appeared in each index's row.</p> <p>What I have done:</p> <pre><code>temp2 = pd.DataFrame(index = temp.index, columns = temp.index) for index in temp.index: temp2.loc[index, temp.loc[index].dropna().values] = 1 temp2 = temp2.replace(np.nan, 0) </code></pre> <p>temp2:</p> <p><a href="https://i.sstatic.net/1IdEAd3L.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1IdEAd3L.png" alt="enter image description here" /></a></p> <p>This does the job: for example, temp2 shows that row index 0 is adjacent to indices 4,5,7, and 8. In other words, indices that existed in row 0 in temp have a value of 1 and others have a value of 0 in temp2.</p> <p><strong>Problem:</strong> There are 132K indices in the real temp and creating temp2 throws out a memory error. What is the most efficient way of getting to temp2. FWIW, the indices are range(132000). Also, I'm going to later convert this matrix to a Torch tensor of dimensions (2, number of edges) that shows the same adjacency info:</p> <pre><code>adj = torch.tensor(temp2.values) edge_index = adj.nonzero().t().contiguous() </code></pre>
<python><pandas><numpy><pytorch>
2025-09-06 23:41:28
1
2,151
Saeed
79,757,564
12,871,587
Nested Window Expression
<p>I'm building some feature generation expressions and I'd like to build an expression that:</p> <ol> <li>Calculates account_length as cumulative count of records per account</li> <li>Then calculate the median of account_length per status_date</li> </ol> <p>Is there a way to do this in single expression, without using the .with_columns() twice?</p> <pre><code>df = pl.DataFrame({ &quot;account_id&quot;: [&quot;A&quot;, &quot;A&quot;, &quot;A&quot;, &quot;B&quot;, &quot;B&quot;, &quot;C&quot;, &quot;C&quot;, &quot;C&quot;, &quot;C&quot;], &quot;status_date&quot;: [&quot;2023-01-01&quot;, &quot;2023-01-02&quot;, &quot;2023-01-03&quot;, &quot;2023-01-01&quot;, &quot;2023-01-02&quot;, &quot;2023-01-01&quot;, &quot;2023-01-02&quot;, &quot;2023-01-03&quot;, &quot;2023-01-04&quot;], &quot;value&quot;: [10, 20, 30, 40, 50, 60, 70, 80, 90] }) print(&quot;Original data:&quot;) print(df) Original data: shape: (9, 3) ┌────────────┬─────────────┬───────┐ │ account_id ┆ status_date ┆ value │ │ --- ┆ --- ┆ --- │ │ str ┆ str ┆ i64 │ ╞════════════╪═════════════╪═══════╡ │ A ┆ 2023-01-01 ┆ 10 │ │ A ┆ 2023-01-02 ┆ 20 │ │ A ┆ 2023-01-03 ┆ 30 │ │ B ┆ 2023-01-01 ┆ 40 │ │ B ┆ 2023-01-02 ┆ 50 │ │ C ┆ 2023-01-01 ┆ 60 │ │ C ┆ 2023-01-02 ┆ 70 │ │ C ┆ 2023-01-03 ┆ 80 │ │ C ┆ 2023-01-04 ┆ 90 │ └────────────┴─────────────┴───────┘ </code></pre> <p>I would like to do:</p> <pre><code>df = df.with_columns( pl.col(&quot;status_date&quot;) .cum_count() .over(&quot;account_id&quot;) .median() .over(&quot;status_date&quot;) .alias(&quot;account_length_median&quot;) ) Out: Error: ComputeError: cannot nest window expressions </code></pre>
<python><dataframe><python-polars>
2025-09-06 13:06:49
1
713
miroslaavi
79,757,357
1,890,413
custom filter for filter_horizontal admin in django
<p>I have the following models where a deck have a many to many relationship with problems and problems can have tags</p> <pre><code>from django.utils import timezone from django.db import models from taggit.models import TaggedItemBase from taggit.managers import TaggableManager # Create your models here. class TaggedProblem(TaggedItemBase): content_object = models.ForeignKey('Problem', on_delete=models.CASCADE) class Problem(models.Model): title = models.CharField(max_length=200) body = models.CharField(max_length=10000) pub_date = models.DateTimeField(&quot;date published&quot;, default=timezone.now()) tags = TaggableManager(through=TaggedProblem) class Meta: verbose_name = &quot;problem&quot; verbose_name_plural = &quot;problems&quot; def __str__(self): return self.title class Deck(models.Model): name = models.CharField(max_length=200) problems = models.ManyToManyField(Problem) def __str__(self): return self.name </code></pre> <p>then for the admin i have the following</p> <pre><code>from django.contrib import admin # Register your models here. from .models import Problem,Deck class DeckAdmin(admin.ModelAdmin): filter_horizontal = ('problems',) admin.site.register(Deck, DeckAdmin) admin.site.register(Problem) </code></pre> <p>and the admin looks like this <a href="https://i.sstatic.net/xVoYHDPi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xVoYHDPi.png" alt="enter image description here" /></a></p> <p>well what i want to do is to have a custom filter to filter the available problems, the filter must be an interface where i can include and exclude tags associated with the problems, so i want to replace the filter search box with something like this</p> <p><a href="https://i.sstatic.net/pBFxUaJf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pBFxUaJf.png" alt="enter image description here" /></a></p> <p>so i can filter the problems by tags and then add then to the deck, how can i achieve that functionality?, i am new to django and have no idea how to proceed</p>
<python><django>
2025-09-06 05:50:56
1
1,114
angvillar
79,757,210
1,604,008
Python unittest fails with No module loaded in VScode
<p>I have the following directory structure</p> <p><img src="https://i.sstatic.net/3KGwRt0l.png" alt="directory structure" /></p> <p>when I use the test runner I get</p> <p><code> File &quot;/home/kyle/dev/tests/test_foo.py&quot;, line 2, in &lt;module&gt; import foo ModuleNotFoundError: No module named 'foo'</code></p> <p>I'm at a loss as what could be wrong. The test code looks like:</p> <pre><code>import unittest import foo class TestStringMethods(unittest.TestCase): def test_upper(self): self.assertEqual(foo.f(), 'FOO3') if __name__ == '__main__': unittest.main() </code></pre> <p>I'm using a virtual environment (.venv) I've set that to my interpreter.</p>
<python><visual-studio-code><python-unittest>
2025-09-05 22:31:47
1
1,159
user1604008
79,757,125
422,953
When did numpy change the behavior of A = B[slice]?
<p>I know that if I have a numpy array <code>A</code>, then a statement like <code>B = A</code> will make a &quot;shallow&quot; copy, i.e., B will point to the same memory address as A. However, before some numpy version, <code>B = A[slice]</code> used to create a new array <code>B</code> such that changing <code>B</code> did not change <code>A</code>. I <em>know</em> this because I have code that depends on this behavior that <em>used to</em> work. However, in numpy 1.24.3, if I do</p> <pre class="lang-py prettyprint-override"><code>A = numpy.array([1,2,3,4,]) B = A[2:4] B[0] = -1 </code></pre> <p>then <code>A[2]</code> changes to <code>-1</code>. Does anyone know when this behavior changed? Or have I really not run that code (the one I <em>know</em> expected this behavior and used to work) for a very long time?</p>
<python><numpy>
2025-09-05 20:16:09
2
340
TM5
79,756,828
6,691,914
Pass python location to Rscript in Azure Devops
<p>I need to create an Azure Devops pipeline to build and deploy an R Shiny app (not python-shiny). The main problem in my case is that I don't have admin rights on the build agents. To mitigate this, I use a folder where I have full sudo rights ($(Pipeline.Workspace)/R/library).</p> <p>And for app restore and snapshot, I use this pipeline:</p> <pre><code>steps: - checkout: self - script: | #Use system-installed Python 3.10.18 /usr/bin/python3.10 --version which python3.10 # Navigate to repo root directory cd $(Pipeline.Workspace)/s # Create virtual environment [ -d venv ] || python3.10 -m venv venv # Activate virtual environment source venv/bin/activate # Upgrade pip &amp; install dependencies python -m pip install -U pip pip3 install cmake # Install renv and restore the package in $(Pipeline.Workspace)/R/library Rscript -e 'dir.create(Sys.getenv(&quot;R_LIBS_USER&quot;), recursive = TRUE, showWarnings = FALSE)' Rscript -e '.libPaths(Sys.getenv(&quot;R_LIBS_USER&quot;)); install.packages(&quot;renv&quot;, lib = Sys.getenv(&quot;R_LIBS_USER&quot;), repos = &quot;https://cloud.r-project.org&quot;)' Rscript -e '.libPaths(Sys.getenv(&quot;R_LIBS_USER&quot;)); renv::snapshot()' Rscript -e '.libPaths(Sys.getenv(&quot;R_LIBS_USER&quot;)); renv::restore()' # Install rsconnect to deploy app ##code to deploy using rsconnect displayName: 'Install and restore R environment' env: R_LIBS_USER: $(Pipeline.Workspace)/R/library </code></pre> <p>The only problem here is that R and Rscript don't see a system Python3 installed when it's trying to run renv restore:</p> <pre><code> The following required system packages are not installed: - cmake [required by arrow] - python2 [required by reticulate] The R packages depending on these system packages may fail to install. An administrator can install these packages with: - sudo dnf install python2 cmake - The library is already synchronized with the lockfile. </code></pre> <p>Please note that the minimum requirement for the reticulate package is Python 2.7, and all our team members use at least Python 3.10.</p> <p>As a result, in deployment logs, I see this error:</p> <pre><code>Error : Installation of Python not found, Python bindings not loaded. See the Python &quot;Order of Discovery&quot; here: https://rstudio.github.io/reticulate/articles/versions.html#order-of-discovery. [rsc-session] Received signal: interrupt [rsc-session] Terminating subprocess with interrupt ... </code></pre> <p>Is it possible to somehow pass the Python location to renv and Rscript before restore and a snapshot?</p>
<python><r><azure-devops><rscript><renv>
2025-09-05 13:25:39
0
1,652
Vasyl Stepulo
79,756,763
8,037,521
Python VTK RAM usage
<p>I am using VTK Python in my application for point cloud visualization. I would like to be able to support rather big point clouds but, as I have noticed in my app, the high RAM usage does not make it possible. Of course, there is always some limit to what we can visualize, based on the available RAM of the PC, but I am talking about point clouds that I can freely visualize in some other software, even if not with very good performance.</p> <p>I tried to make this MRE to observe the RAM usage:</p> <pre><code>import vtk import numpy as np from vtk.util import numpy_support import psutil import os def print_memory(stage: str): &quot;&quot;&quot;Print memory usage of this process in MB.&quot;&quot;&quot; process = psutil.Process(os.getpid()) mem = process.memory_info().rss / (1024**2) print(f&quot;[{stage}] RAM: {mem:.2f} MB&quot;) def main(): print_memory(&quot;Start&quot;) n_points = 100_000_000 points = np.random.rand(n_points, 3).astype(np.float32) intensity = np.random.rand(n_points).astype(np.float32) rgb = (np.random.rand(n_points, 3) * 255).astype(np.uint8) print_memory(&quot;Generated NumPy arrays&quot;) vtk_points = vtk.vtkPoints() vtk_points.SetData(numpy_support.numpy_to_vtk(points, deep=False)) vtk_intensity = numpy_support.numpy_to_vtk(intensity, deep=False) vtk_intensity.SetName(&quot;Intensity&quot;) vtk_rgb = numpy_support.numpy_to_vtk(rgb, deep=False) vtk_rgb.SetNumberOfComponents(3) vtk_rgb.SetName(&quot;RGB&quot;) print_memory(&quot;Converted NumPy -&gt; VTK arrays&quot;) poly_data = vtk.vtkPolyData() poly_data.SetPoints(vtk_points) poly_data.GetPointData().AddArray(vtk_intensity) poly_data.GetPointData().SetScalars(vtk_rgb) print_memory(&quot;Created vtkPolyData&quot;) mapper = vtk.vtkOpenGLPointGaussianMapper() mapper.SetInputData(poly_data) mapper.EmissiveOff() mapper.SetScaleFactor(0.0) actor = vtk.vtkActor() actor.SetMapper(mapper) actor.GetProperty().SetPointSize(1) print_memory(&quot;Created mapper &amp; actor&quot;) ren = vtk.vtkRenderer() renWin = vtk.vtkRenderWindow() renWin.AddRenderer(ren) iren = vtk.vtkRenderWindowInteractor() iren.SetRenderWindow(renWin) ren.AddActor(actor) ren.SetBackground(0.1, 0.1, 0.1) renWin.Render() print_memory(&quot;After first Render()&quot;) print(&quot;Press 'q' in render window to quit...&quot;) iren.Start() if __name__ == &quot;__main__&quot;: main() </code></pre> <p>So, what I see is A LOT of RAM usage for the rendering. Now, I understand that we cannot NOT use RAM at all. But it seems to consume more than 2x of the original data size.</p> <p>Is this expected?</p> <ol> <li>Can it be improved in any way?</li> <li>Am I just using VTK in some wrong way? Should I use <code>vtkPolyDataMapper</code> instead or some other alternative?</li> </ol>
<python><vtk>
2025-09-05 12:14:06
0
1,277
Valeria
79,756,625
7,321,700
Using a column value to find the Column header name in Pandas
<p><strong>Scenario:</strong> I have a pandas dataframe. I am trying to use the values in a given column (year) to find the relevant header name and add it to a new column (year_name). For example, if the dataframe looks like this:</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th>itemName</th> <th>2020</th> <th>2021</th> <th>2022</th> <th>2023</th> <th>2024</th> <th>year</th> </tr> </thead> <tbody> <tr> <td>item1</td> <td>5</td> <td>20</td> <td>10</td> <td>10</td> <td>50</td> <td>3</td> </tr> <tr> <td>item2</td> <td>10</td> <td>10</td> <td>50</td> <td>20</td> <td>40</td> <td>2</td> </tr> <tr> <td>item3</td> <td>12</td> <td>35</td> <td>73</td> <td>10</td> <td>54</td> <td>4</td> </tr> </tbody> </table></div> <p>The result should be like this:</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th>itemName</th> <th>2020</th> <th>2021</th> <th>2022</th> <th>2023</th> <th>2024</th> <th>year</th> <th>year_name</th> </tr> </thead> <tbody> <tr> <td>item1</td> <td>5</td> <td>20</td> <td>10</td> <td>10</td> <td>50</td> <td>3</td> <td>2022</td> </tr> <tr> <td>item2</td> <td>10</td> <td>10</td> <td>50</td> <td>20</td> <td>40</td> <td>2</td> <td>2021</td> </tr> <tr> <td>item3</td> <td>12</td> <td>35</td> <td>73</td> <td>10</td> <td>54</td> <td>4</td> <td>2023</td> </tr> </tbody> </table></div> <p><strong>Obs.</strong> the itemName column is the index.</p> <p><strong>Issue:</strong> I am trying to use a lambda function to use the value of each row of &quot;year&quot; and use it to find the column name for that row and add it to the year_name column.</p> <p><strong>Function:</strong> I tried:</p> <pre><code>col_names = result_dict[col].columns.tolist() result_df[[last_year_header']] = result_df[[_last_year']].apply(lambda x: col_names[x]) </code></pre> <p>but this gave me the following error:</p> <pre><code> TypeError: list indices must be integers or slices, not Series </code></pre> <p>I also tried:</p> <pre><code>col_names = result_dict[col].columns.tolist() result_df[[last_year_header']] = result_df[[_last_year']].apply(lambda x: col_names[x.iloc[0].astype(int)]) </code></pre> <p>But this gave me:</p> <pre><code> IndexError: list index out of range </code></pre> <p><strong>Question:</strong> I am clearly missing something with the implementation of the lambda function in this case. How can I fix this?</p>
<python><pandas>
2025-09-05 09:46:17
1
1,711
DGMS89
79,756,594
1,750,612
How do I capture missing nan values from Pandas 2.3.0 using Pydantic 2.11.7
<p>Prerequisites:</p> <ul> <li>Python 3.11.7</li> <li>Pandas 2.3.0</li> <li>Numpy 2.1.3</li> <li>Pydantic 2.11.7</li> </ul> <p>In the pandas documentation, it states that missing values for numeric data types are filled in with <code>numpy.nan</code>:</p> <p><a href="https://pandas.pydata.org/docs/user_guide/missing_data.html#values-considered-missing" rel="nofollow noreferrer">https://pandas.pydata.org/docs/user_guide/missing_data.html#values-considered-missing</a></p> <pre><code>In [1]: import pandas as pd import numpy as np In [2]: s = pd.Series([1, 2], dtype=np.int64).reindex([0, 1, 2]) In [3]: s Out[3]: 0 1.0 1 2.0 2 NaN dtype: float64 </code></pre> <p>In pydantic models, it should be possible to capture <code>numpy.nan</code> with <code>Literal[numpy.nan]</code>. By chaining this functionality with <code>typing.Union</code>, this allows one to create pydantic models with complex functional behaviours, e.g. positive integers or <code>numpy.nan</code> (e.g. to create a model which will accept any positive integer or a missing value):</p> <pre><code>In [4]: import numpy as np import pydantic from typing import Union, Literal class Col(pydantic.BaseModel): precision: Union[ pydantic.conint(ge=1), Literal[np.nan] ] </code></pre> <p>However, taking the above pandas example, pydantic validation fails once it hits that missing value:</p> <pre><code>In [5]: for i in s.values: print(i) Col(precision=i) Out[5]: 1.0 2.0 nan --------------------------------------------------------------------------- ValidationError Traceback (most recent call last) Cell In[5], line 3 1 for i in s.values: 2 print(i) ----&gt; 3 Col(precision=i) File ~/........./python3.11/site-packages/pydantic/main.py:253, in BaseModel.__init__(self, **data) 251 # `__tracebackhide__` tells pytest and some other tools to omit this function from tracebacks 252 __tracebackhide__ = True --&gt; 253 validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self) 254 if self is not validated_self: 255 warnings.warn( 256 'A custom validator is returning a value other than `self`.\n' 257 &quot;Returning anything other than `self` from a top level model validator isn't supported when validating via `__init__`.\n&quot; 258 'See the `model_validator` docs (https://docs.pydantic.dev/latest/concepts/validators/#model-validators) for more details.', 259 stacklevel=2, 260 ) ValidationError: 2 validation errors for Col precision.constrained-int Input should be a finite number [type=finite_number, input_value=nan, input_type=float64] For further information visit https://errors.pydantic.dev/2.11/v/finite_number precision.literal[nan] Input should be nan [type=literal_error, input_value=nan, input_type=float64] For further information visit https://errors.pydantic.dev/2.11/v/literal_error </code></pre> <p>If I call the pydantic class with a regular <code>numpy.nan</code> value, it works just fine:</p> <pre><code>In [6]: Col(precision=np.nan) Out[6]: Col(precision=nan) </code></pre> <p>So it seems as if Pandas is using something other than <code>numpy.nan</code> to fill in missing values. Does anyone know what that might be, or how I might catch it with Pydantic &amp; typing?</p> <hr /> <p>As an aside, I have a somewhat hacky workaround by using <code>pydantic.confloat(allow_inf_nan=True)</code> instead of <code>Literal[np.nan]</code>, but since <code>confloat</code> permits all float data types I then have to add a secondary model validator to throw an exception if the given value is not actually <code>nan</code>. I dislike this solution though it works, as I feel there should be something much more elegant that I could do:</p> <pre><code>In [7]: import math class Col2(pydantic.BaseModel): precision: Union[ pydantic.conint(ge=1), pydantic.confloat(allow_inf_nan)=True ] @pydantic.model_validator(mode=&quot;after&quot;) def validate_nans(self): if self.precision is not None and isinstance(self.precision, float): if self.precision.is_integer(): assert self.precision &gt;= 0 else: assert math.isnan(self.precision) </code></pre> <p>This works as expected for all types of nan (that I know about for now anyway):</p> <pre><code>In [8]: import pytest bad_nan = s.iloc[-1] test_values = [ (0, True), (-1, False), (-1.234, False), (1.234, False), (2, True), (3.0, True), (-3.0, False) (bad_nan, True), (np.nan, True), (float('nan'), True), (np.float64('nan'), True) (False, True), (True, True) # Bool False is 0, True is 1. This is fine. ] for val, should_pass in test_values: if should_pass: Col2(precision=val) else: with pytest.raises(pydantic.ValidationError): Col2(precision=val) Out[8]: ============================== 1 passed in 0.05s =============================== </code></pre> <hr /> <p>Other solutions I have tried but that don't work for one reason or another:</p> <ul> <li><p><code>Annotated[pydantic.conint(ge=0), pydantic.AllowInfNan()]</code></p> <p>Breaks because you can't mix <code>int</code> and <code>nan</code> floats together in one annotated type.</p> </li> <li><p><code>pydantic.confloat(ge=0, multiple_of=1, allow_inf_nan=True)</code></p> <p>Breaks because you can't spec the <code>ge</code> argument with <code>allow_inf_nan</code> simultaneously (nan values are not greater than or less than 0, so that fails the validation). Interestingly though there's no issues with spec'ing <code>multiple_of=1</code> and <code>allow_inf_nan=True</code> together.</p> </li> </ul>
<python><pandas><numpy><python-typing>
2025-09-05 09:07:53
1
359
MikeFenton
79,756,402
2,153,235
vobject.readcomponents(...) : Is it a generator or does it *return* a generator?
<p>This is a question about technical terminology.</p> <p>The <a href="https://github.com/skarim/vobject" rel="nofollow noreferrer">vobject documentation</a> says &quot;readComponents is a generator&quot;, which is consistent with its doc string &quot;Generate one Component at a time from a stream&quot;.</p> <p>However, the <a href="https://github.com/skarim/vobject" rel="nofollow noreferrer">sample code</a> <code>vobject.readComponents(icalstream).next().vevent.dtstart.value</code> shows that <code>readComponents(...)</code> <em>returns</em> a generator, as does the <a href="https://gavincampbell.dev/post/comparing-contact-vcf-files-python" rel="nofollow noreferrer">example code here</a>: <code>for c in vobject.readComponents(vcfs):</code>.</p> <p>It makes a difference. Repeatedly invoking a method that returns a generator simply recreates the generator anew.</p> <p>Is the reference to <code>vobject.readComponents</code> as a generator simply a case of speaking loosely, i.e., it actually <em>returns</em> a generator?</p> <p><strong>Afternote:</strong> I think my confusion arises from the fact that <a href="https://www.datacamp.com/tutorial/yield-python-keyword" rel="nofollow noreferrer">&quot;The term generator in Python can refer to a generator iterator or a generator function. These are different but related objects in Python&quot;</a>. Calling a generator multiple times returns independent generator iterators (a.k.a. generator objects). And of course, each iterator can only run through the elements once.</p>
<python><generator>
2025-09-05 04:35:59
2
1,265
user2153235
79,756,235
4,057,053
Why doesn't MCP server expose my resources?
<p>here's the MCP code I have:</p> <pre class="lang-py prettyprint-override"><code>from mcp.server.fastmcp import FastMCP # Create an MCP server mcp = FastMCP(&quot;Demo&quot;) # Add a dynamic greeting resource @mcp.resource(&quot;greeting://{name}&quot;) def get_greeting(name: str) -&gt; str: &quot;&quot;&quot;Get a personalized greeting&quot;&quot;&quot; return f&quot;Hello, sir {name}!&quot; @mcp.resource(&quot;file://data/file.txt&quot;) def read_file_txt() -&gt; str: &quot;&quot;&quot;Read contents of file.txt from data directory&quot;&quot;&quot; try: with open('/tmp/file.txt', &quot;r&quot;) as f: return f.read() except Exception as e: return f&quot;Error reading file: {str(e)}&quot; # @mcp.prompt() def greet_user_prompt(name: str) -&gt; str: &quot;&quot;&quot;Generates a message asking for a greeting&quot;&quot;&quot; return f&quot;&quot;&quot; Return a greeting message for a user called '{name}'. if the user is called 'Laurent', use a formal style, else use a street style. &quot;&quot;&quot; </code></pre> <p>I have this MCP server installed into Claude Desktop.</p> <ul> <li>the <code>read_file_txt</code> resource is visible to Claude</li> <li>the <code>greet_user_prompt</code> is visible to Claude</li> <li>the <code>get_greeting</code> resource, <strong>mentioned in a whole bunch of tutorials btw</strong>, is not visible.</li> </ul> <p>And it's not just Claude, the MCP dev server also does not see the <code>get_greeting</code> resource.</p> <p>So, what am I doing wrong?</p> <p><a href="https://i.sstatic.net/IX8YYIWk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IX8YYIWk.png" alt="enter image description here" /></a></p> <p><a href="https://i.sstatic.net/mLcfmFuD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mLcfmFuD.png" alt="enter image description here" /></a></p>
<python><model-context-protocol>
2025-09-04 21:27:45
0
8,822
kurtgn
79,756,233
1,233,376
`google.auth` python SDK not reading granted scopes from credentials file
<p>I've run:</p> <pre><code>gcloud auth application-default login --client-id-file google_oauth_client_id.json --scopes=&quot;https://www.googleapis.com/auth/cloud-platform,https://www.googleapis.com/auth/calendar.calendars.readonly&quot; </code></pre> <p>successfully. My browser opened, I granted the calendar and cloud-platform permissions to my test app, and the results were saved to disk:</p> <pre><code>Credentials saved to file:[/home/*****/.config/gcloud/application_default_credentials.json] </code></pre> <p>However, running the following snippet leads to a 403 error:</p> <pre><code>from google.auth import default from google.auth.transport.requests import Request from googleapiclient.discovery import build SCOPES = [&quot;https://www.googleapis.com/auth/calendar.calendars.readonly&quot;] credentials, project_id = default(scopes=SCOPES, quota_project_id='my-project-id') credentials.refresh(Request()) access_token = credentials.token service = build(&quot;calendar&quot;, &quot;v3&quot;, credentials=credentials) events = service.events().list(calendarId=&quot;My Calendar Id&quot;, maxResults=10, singleEvents=True, orderBy=&quot;startTime&quot;).execute() </code></pre> <p>At first I thought maybe I wasn't using the correct <code>calendarId</code>, but when I was in the debugger, I noticed that the <code>credentials</code> object has <strong>no</strong> scopes defined:</p> <pre><code>&gt;&gt;&gt; (credentials.scopes, credentials.default_scopes, credentials.granted_scopes) (None, None, None) </code></pre> <p>However, if I delete the <code>application_default_credentials.json</code> file the <code>default</code> method throws an appropriate error, so it does seem like it's reading from the file properly-- it's just not realizing that the permissions have been granted...</p> <p>Looking at the <code>application_default_credentials.json</code>, I'm not seeing any mention of scopes: <code>dict_keys(['account', 'client_id', 'client_secret', 'refresh_token', 'type', 'universe_domain'])</code></p> <p>This leads me to believe that either:</p> <ol> <li>The scopes are saved server-side, and I need to properly request them when refreshing the token</li> <li>The <code>gcode</code> client isn't saving this information properly.</li> </ol> <p>Option 1 seems more likely, since the CLI is properly displaying the scopes and passing them to the OAuth session....</p>
<python><google-oauth><google-calendar-api><gcloud>
2025-09-04 21:22:07
1
1,092
Ian Burnette
79,756,149
5,096,103
Pandas does not fail, warn, or skip when rows have more columns than the header
<p>I'm new to Python and to Pandas, and I am desperately trying to understand how or why this is happening.</p> <p>I have a CSV file with some data, which has some rows which have extra commas <code>,</code> which are not escaped. So there are 4 column headers, but some rows have 5 fields due to the improperly escaped commas <code>,</code>.</p> <p><code>data.csv</code>:</p> <pre><code>Index,First Name,Middle Name,Last Name 1,Mr. Al\, B.,grüBen,Johnson 2,&quot;Mr. Al\, B.&quot;,grüBen,Johnson 3,\&quot;Mr. Al\, B.\&quot;,grüBen,Johnson 4,Mr. Al\, B.,grüBen,Johnson </code></pre> <p>I want to read this CSV directly into a Panda dataframe. My expectation is that Panda should throw or warn me about the data being inconsistent with the header column, but instead it does a very strange thing where it seems to drop the first values in each row, which would have been the index. The code and output will illustrate better than I can with words.</p> <p><code>main.py</code>:</p> <pre class="lang-py prettyprint-override"><code>import csv import pandas as pd def main(): file = &quot;data.csv&quot; print_block(&quot;Read STD/CSV&quot;) read_csv_std(file) print_block(&quot;Validate STD/CSV&quot;) validate_csv_std(file) print_block(&quot;Read PANDAS default&quot;) read_csv_pandas(file) print_block(&quot;Read PANDAS with provided headers&quot;) read_csv_pandas_with_provided_headers(file) print_block(&quot;Read PANDAS with pyarrow engine&quot;) read_csv_pandas_with_pyarrow_engine(file) print_block(&quot;Validate PANDAS by type casting&quot;) validate_csv_pandas_by_casting(file) def print_block(text): print(f&quot;====== {text} ======&quot;) def read_csv_std(path): with open(path, newline=&quot;&quot;) as file: reader = csv.reader(file) for i, row in enumerate(reader): print(f&quot;i={i}, len={len(row)} -&gt; {row}&quot;) def validate_csv_std(path): with open(path, newline=&quot;&quot;) as file: reader = csv.reader(file) headers = next(reader) num_columns = len(headers) for i, row in enumerate(reader, start=1): if len(row) != num_columns: print( f&quot;i={i} - ❌ - expected {num_columns} fields, saw {len(row)} -&gt; {row}&quot; ) else: print(f&quot;i={i} - ✅ - expected {num_columns} fields, saw {len(row)}&quot;) def read_csv_pandas(path): df = pd.read_csv( path, on_bad_lines=&quot;error&quot;, # does nothing whether 'warn' or 'skip' - silently moves the columns around - see logs ) headers = df.columns.to_list() print(f&quot;i=0, len={len(headers)} -&gt; {headers}&quot;) for i, row in df.iterrows(): values = row.tolist() print(f&quot;i={i}, len={len(values)} -&gt; {values}&quot;) def read_csv_pandas_with_provided_headers(path): with open(path, newline=&quot;&quot;) as file: reader = csv.reader(file) headers = next(reader) print(f&quot;i=0, len={len(headers)} -&gt; {headers}&quot;) df = pd.read_csv( path, names=headers, # works but we end have to read the csv upfront, and the column ends up as a row in the df on_bad_lines=&quot;skip&quot;, ) for i, row in df.iterrows(): values = row.tolist() print(f&quot;i={i}, len={len(values)} -&gt; {values}&quot;) def read_csv_pandas_with_pyarrow_engine(path): df = pd.read_csv( path, engine=&quot;pyarrow&quot;, # this gives the desired result, but not fully sure of the implications of switching on_bad_lines=&quot;skip&quot;, ) headers = df.columns.to_list() print(f&quot;i=0, len={len(headers)} -&gt; {headers}&quot;) for i, row in df.iterrows(): values = row.tolist() print(f&quot;i={i}, len={len(values)} -&gt; {values}&quot;) def validate_csv_pandas_by_casting(path): pd.read_csv( path, converters={ &quot;Index&quot;: validated_int }, ) def validated_int(x: str) -&gt; int: return int(x) # pandas will raise a ValueError if this isn't an int main() </code></pre> <p>Here is the output from the program:</p> <pre><code>====== Read STD/CSV ====== i=0, len=4 -&gt; ['Index', 'First Name', 'Middle Name', 'Last Name'] i=1, len=5 -&gt; ['1', 'Mr. Al\\', ' B.', 'grüBen', 'Johnson'] i=2, len=4 -&gt; ['2', 'Mr. Al\\, B.', 'grüBen', 'Johnson'] i=3, len=5 -&gt; ['3', '\\&quot;Mr. Al\\', ' B.\\&quot;', 'grüBen', 'Johnson'] i=4, len=5 -&gt; ['4', 'Mr. Al\\', ' B.', 'grüBen', 'Johnson'] ====== Validate STD/CSV ====== i=1 - ❌ - expected 4 fields, saw 5 -&gt; ['1', 'Mr. Al\\', ' B.', 'grüBen', 'Johnson'] i=2 - ✅ - expected 4 fields, saw 4 i=3 - ❌ - expected 4 fields, saw 5 -&gt; ['3', '\\&quot;Mr. Al\\', ' B.\\&quot;', 'grüBen', 'Johnson'] i=4 - ❌ - expected 4 fields, saw 5 -&gt; ['4', 'Mr. Al\\', ' B.', 'grüBen', 'Johnson'] ====== Read PANDAS default ====== i=0, len=4 -&gt; ['Index', 'First Name', 'Middle Name', 'Last Name'] i=1, len=4 -&gt; ['Mr. Al\\', ' B.', 'grüBen', 'Johnson'] i=2, len=4 -&gt; ['Mr. Al\\, B.', 'grüBen', 'Johnson', nan] i=3, len=4 -&gt; ['\\&quot;Mr. Al\\', ' B.\\&quot;', 'grüBen', 'Johnson'] i=4, len=4 -&gt; ['Mr. Al\\', ' B.', 'grüBen', 'Johnson'] ====== Read PANDAS with provided headers ====== i=0, len=4 -&gt; ['Index', 'First Name', 'Middle Name', 'Last Name'] i=0, len=4 -&gt; ['Index', 'First Name', 'Middle Name', 'Last Name'] i=1, len=4 -&gt; ['2', 'Mr. Al\\, B.', 'grüBen', 'Johnson'] ====== Read PANDAS with pyarrow engine ====== i=0, len=4 -&gt; ['Index', 'First Name', 'Middle Name', 'Last Name'] i=0, len=4 -&gt; [2, 'Mr. Al\\, B.', 'grüBen', 'Johnson'] ====== Validate PANDAS by type casting ====== Traceback (most recent call last): File &quot;/Users/cillian/git/python/personal/python-playground/01_read_csv/main.py&quot;, line 97, in &lt;module&gt; main() File &quot;/Users/cillian/git/python/personal/python-playground/01_read_csv/main.py&quot;, line 18, in main validate_csv_pandas_by_casting(file) File &quot;/Users/cillian/git/python/personal/python-playground/01_read_csv/main.py&quot;, line 87, in validate_csv_pandas_by_casting pd.read_csv( File &quot;/Users/cillian/git/python/personal/python-playground/.venv/lib/python3.12/site-packages/pandas/io/parsers/readers.py&quot;, line 1026, in read_csv return _read(filepath_or_buffer, kwds) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/cillian/git/python/personal/python-playground/.venv/lib/python3.12/site-packages/pandas/io/parsers/readers.py&quot;, line 626, in _read return parser.read(nrows) ^^^^^^^^^^^^^^^^^^ File &quot;/Users/cillian/git/python/personal/python-playground/.venv/lib/python3.12/site-packages/pandas/io/parsers/readers.py&quot;, line 1923, in read ) = self._engine.read( # type: ignore[attr-defined] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/cillian/git/python/personal/python-playground/.venv/lib/python3.12/site-packages/pandas/io/parsers/c_parser_wrapper.py&quot;, line 234, in read chunks = self._reader.read_low_memory(nrows) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;pandas/_libs/parsers.pyx&quot;, line 838, in pandas._libs.parsers.TextReader.read_low_memory File &quot;pandas/_libs/parsers.pyx&quot;, line 921, in pandas._libs.parsers.TextReader._read_rows File &quot;pandas/_libs/parsers.pyx&quot;, line 1045, in pandas._libs.parsers.TextReader._convert_column_data File &quot;pandas/_libs/parsers.pyx&quot;, line 2116, in pandas._libs.parsers._apply_converter File &quot;/Users/cillian/git/python/personal/python-playground/01_read_csv/main.py&quot;, line 94, in validated_int return int(x) # pandas will raise a ValueError if this isn't an int ^^^^^^ ValueError: invalid literal for int() with base 10: 'Mr. Al\\' </code></pre> <p>Can anyone explain why this is happening? Am I holding it wrong? Should Pandas throw/warn, or just silently massage the data like it does?</p> <hr /> <p><strong>Edit</strong>:</p> <p>I should have added what I want/expect to happen. My expectation is that I should be able to get Pandas to error/warn/skip without an extra read of the csv.</p> <p>So the output of my PyArrow approach is exactly what I want, but I don't think I can move to PyArrow as the engine for Pandas since I also need to make use of chunking in Pandas.</p> <p>I guess I could move over to PyArrow and convert to Pandas:</p> <pre class="lang-py prettyprint-override"><code>def read_csv_pyarrow(path): table = pacsv.read_csv( path, parse_options=pacsv.ParseOptions( invalid_row_handler=skip_handler ), read_options=pacsv.ReadOptions( block_size=50, ), ) df = table.to_pandas() print(df) def read_csv_pyarrow_incremental(path): stream = pacsv.open_csv( path, parse_options=pacsv.ParseOptions( invalid_row_handler=skip_handler ), read_options=pacsv.ReadOptions( block_size=50, ), ) df = stream.read_pandas() print(df) def skip_handler(invalid_row): print(invalid_row) return &quot;skip&quot; </code></pre> <p>Which produces:</p> <pre><code>====== Read PYARROW default ====== InvalidRow(expected_columns=4, actual_columns=5, number=None, text='1,Mr. Al\\, B.,grüBen,Johnson') InvalidRow(expected_columns=4, actual_columns=5, number=None, text='3,\\&quot;Mr. Al\\, B.\\&quot;,grüBen,Johnson') InvalidRow(expected_columns=4, actual_columns=5, number=None, text='4,Mr. Al\\, B.,grüBen,Johnson') Index First Name Middle Name Last Name 0 2 Mr. Al\, B. grüBen Johnson ====== Read PYARROW incremental ====== InvalidRow(expected_columns=4, actual_columns=5, number=None, text='1,Mr. Al\\, B.,grüBen,Johnson') InvalidRow(expected_columns=4, actual_columns=5, number=None, text='3,\\&quot;Mr. Al\\, B.\\&quot;,grüBen,Johnson') InvalidRow(expected_columns=4, actual_columns=5, number=None, text='4,Mr. Al\\, B.,grüBen,Johnson') Index First Name Middle Name Last Name 0 2 Mr. Al\, B. grüBen Johnson </code></pre> <p>Or maybe I should jump straight to Polars? I cannot seem to get it to skip a row with the wrong number of rows like PyArrow does:</p> <pre class="lang-py prettyprint-override"><code>def read_csv_polars(path): df = pl.read_csv( path, columns=[&quot;Index&quot;, &quot;First Name&quot;, &quot;Middle Name&quot;, &quot;Last Name&quot;], use_pyarrow=True, infer_schema=False, ignore_errors=True, ) print(df) </code></pre> <p>Which throws:</p> <pre><code>====== Read POLARS default ====== Traceback (most recent call last): File &quot;/Users/cillian.myles/git/github.com/CillianMyles/python-playground/01_read_csv/main.py&quot;, line 151, in &lt;module&gt; main() File &quot;/Users/cillian.myles/git/github.com/CillianMyles/python-playground/01_read_csv/main.py&quot;, line 33, in main read_csv_polars(file) File &quot;/Users/cillian.myles/git/github.com/CillianMyles/python-playground/01_read_csv/main.py&quot;, line 126, in read_csv_polars df = pl.read_csv( ^^^^^^^^^^^^ File &quot;/Users/cillian.myles/git/github.com/CillianMyles/python-playground/.venv/lib/python3.12/site-packages/polars/_utils/deprecation.py&quot;, line 128, in wrapper return function(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/cillian.myles/git/github.com/CillianMyles/python-playground/.venv/lib/python3.12/site-packages/polars/_utils/deprecation.py&quot;, line 128, in wrapper return function(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/cillian.myles/git/github.com/CillianMyles/python-playground/.venv/lib/python3.12/site-packages/polars/_utils/deprecation.py&quot;, line 128, in wrapper return function(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/cillian.myles/git/github.com/CillianMyles/python-playground/.venv/lib/python3.12/site-packages/polars/io/csv/functions.py&quot;, line 334, in read_csv tbl = pa.csv.read_csv( ^^^^^^^^^^^^^^^^ File &quot;pyarrow/_csv.pyx&quot;, line 1260, in pyarrow._csv.read_csv File &quot;pyarrow/_csv.pyx&quot;, line 1269, in pyarrow._csv.read_csv File &quot;pyarrow/error.pxi&quot;, line 155, in pyarrow.lib.pyarrow_internal_check_status File &quot;pyarrow/error.pxi&quot;, line 92, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: CSV parse error: Expected 4 columns, got 5: 1,Mr. Al\, B.,grüBen,Johnson </code></pre>
<python><pandas><dataframe><csv>
2025-09-04 19:25:11
2
764
Cillian Myles
79,756,115
1,700,890
Installing python package from zip file offline
<p>I am working in Kaggle notebook. I need to install transformers package offline. Normally I would install it like this (online):</p> <pre><code>!pip install git+https://github.com/huggingface/transformers </code></pre> <p>Here is what I did to accomplish the same, but offline.</p> <p>I created snapshot of installed packages before and after installation of transformers.</p> <pre><code>!pip freeze &gt; requirements_b.txt !pip install git+https://github.com/huggingface/transformers !pip freeze &gt; requirements_a.txt </code></pre> <p>The difference between them is</p> <pre><code>huggingface-hub==0.34.4 git+https://github.com/huggingface/transformers@e39f2220969d5ee2fe5643eef4888e36b6800a3c tokenizers==0.22.0 </code></pre> <p>I downloaded the above packages locally with</p> <pre><code>!pip download huggingface-hub==0.34.4 tokenizers==0.22.0 -d ./wheels --verbose !pip download git+https://github.com/huggingface/transformers -d ./wheels </code></pre> <p>I factory reset kernel and tried to install everything offline:</p> <pre><code>!pip install huggingface-hub==0.34.4 tokenizers==0.22.0 \ -U --no-index --find-links /kaggle/working/wheels </code></pre> <p>but below command generated error:</p> <pre><code>!pip install wheels/transformers-4.57.0.dev0.zip --verbose </code></pre> <p>Error message:</p> <pre><code>Using pip 24.1.2 from /usr/local/lib/python3.11/dist-packages/pip (python 3.11) Processing ./wheels/transformers-4.57.0.dev0.zip Running command pip subprocess to install build dependencies Using pip 24.1.2 from /usr/local/lib/python3.11/dist-packages/pip (python 3.11) Non-user install by explicit request Created build tracker: /tmp/pip-build-tracker-9vzem4u4 Entered build tracker: /tmp/pip-build-tracker-9vzem4u4 Created temporary directory: /tmp/pip-install-5g1aenu5 Created temporary directory: /tmp/pip-ephem-wheel-cache-bnhq0fx_ 1 location(s) to search for versions of setuptools: * https://pypi.org/simple/setuptools/ Fetching project page and analyzing links: https://pypi.org/simple/setuptools/ Getting page https://pypi.org/simple/setuptools/ Found index url https://pypi.org/simple/ Looking up &quot;https://pypi.org/simple/setuptools/&quot; in the cache Request header has &quot;max_age&quot; as 0, cache bypassed No cache entry available Starting new HTTPS connection (1): pypi.org:443 Incremented Retry for (url='/simple/setuptools/'): Retry(total=4, connect=None, read=None, redirect=None, status=None) WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('&lt;pip._vendor.urllib3.connection.HTTPSConnection object at 0x7fd0335f6c10&gt;: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution')': /simple/setuptools/ Starting new HTTPS connection (2): pypi.org:443 Incremented Retry for (url='/simple/setuptools/'): Retry(total=3, connect=None, read=None, redirect=None, status=None) WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('&lt;pip._vendor.urllib3.connection.HTTPSConnection object at 0x7fd0335f7e50&gt;: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution')': /simple/setuptools/ Starting new HTTPS connection (3): pypi.org:443 Incremented Retry for (url='/simple/setuptools/'): Retry(total=2, connect=None, read=None, redirect=None, status=None) WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('&lt;pip._vendor.urllib3.connection.HTTPSConnection object at 0x7fd03348a550&gt;: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution')': /simple/setuptools/ Starting new HTTPS connection (4): pypi.org:443 Incremented Retry for (url='/simple/setuptools/'): Retry(total=1, connect=None, read=None, redirect=None, status=None) WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('&lt;pip._vendor.urllib3.connection.HTTPSConnection object at 0x7fd0335f6fd0&gt;: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution')': /simple/setuptools/ Starting new HTTPS connection (5): pypi.org:443 Incremented Retry for (url='/simple/setuptools/'): Retry(total=0, connect=None, read=None, redirect=None, status=None) WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('&lt;pip._vendor.urllib3.connection.HTTPSConnection object at 0x7fd033488910&gt;: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution')': /simple/setuptools/ Starting new HTTPS connection (6): pypi.org:443 Could not fetch URL https://pypi.org/simple/setuptools/: connection error: HTTPSConnectionPool(host='pypi.org', port=443): Max retries exceeded with url: /simple/setuptools/ (Caused by NewConnectionError('&lt;pip._vendor.urllib3.connection.HTTPSConnection object at 0x7fd0334a0b50&gt;: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution')) - skipping Skipping link: not a file: https://pypi.org/simple/setuptools/ Given no hashes to check 0 links for project 'setuptools': discarding no candidates ERROR: Could not find a version that satisfies the requirement setuptools&gt;=40.8.0 (from versions: none) ERROR: No matching distribution found for setuptools&gt;=40.8.0 Exception information: Traceback (most recent call last): File &quot;/usr/local/lib/python3.11/dist-packages/pip/_vendor/resolvelib/resolvers.py&quot;, line 397, in resolve self._add_to_criteria(self.state.criteria, r, parent=None) File &quot;/usr/local/lib/python3.11/dist-packages/pip/_vendor/resolvelib/resolvers.py&quot;, line 174, in _add_to_criteria raise RequirementsConflicted(criterion) pip._vendor.resolvelib.resolvers.RequirementsConflicted: Requirements conflict: SpecifierRequirement('setuptools&gt;=40.8.0') During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;/usr/local/lib/python3.11/dist-packages/pip/_internal/resolution/resolvelib/resolver.py&quot;, line 95, in resolve result = self._result = resolver.resolve( ^^^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.11/dist-packages/pip/_vendor/resolvelib/resolvers.py&quot;, line 546, in resolve state = resolution.resolve(requirements, max_rounds=max_rounds) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.11/dist-packages/pip/_vendor/resolvelib/resolvers.py&quot;, line 399, in resolve raise ResolutionImpossible(e.criterion.information) pip._vendor.resolvelib.resolvers.ResolutionImpossible: [RequirementInformation(requirement=SpecifierRequirement('setuptools&gt;=40.8.0'), parent=None)] The above exception was the direct cause of the following exception: Traceback (most recent call last): File &quot;/usr/local/lib/python3.11/dist-packages/pip/_internal/cli/base_command.py&quot;, line 179, in exc_logging_wrapper status = run_func(*args) ^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.11/dist-packages/pip/_internal/cli/req_command.py&quot;, line 67, in wrapper return func(self, options, args) ^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.11/dist-packages/pip/_internal/commands/install.py&quot;, line 377, in run requirement_set = resolver.resolve( ^^^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.11/dist-packages/pip/_internal/resolution/resolvelib/resolver.py&quot;, line 104, in resolve raise error from e pip._internal.exceptions.DistributionNotFound: No matching distribution found for setuptools&gt;=40.8.0 Removed build tracker: '/tmp/pip-build-tracker-9vzem4u4' error: subprocess-exited-with-error × pip subprocess to install build dependencies did not run successfully. │ exit code: 1 ╰─&gt; See above for output. note: This error originates from a subprocess, and is likely not a problem with pip. full command: /usr/bin/python3 /usr/local/lib/python3.11/dist-packages/pip/__pip-runner__.py install --ignore-installed --no-user --prefix /tmp/pip-build-env-3nmv0d45/overlay --no-warn-script-location -vv --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- 'setuptools&gt;=40.8.0' cwd: [inherit] Installing build dependencies ... error error: subprocess-exited-with-error × pip subprocess to install build dependencies did not run successfully. │ exit code: 1 ╰─&gt; See above for output. note: This error originates from a subprocess, and is likely not a problem with pip. </code></pre> <p>It looks like it is trying to connect to internet. Is there way to solve it?</p>
<python><pip><kaggle>
2025-09-04 18:41:20
1
7,802
user1700890
79,756,072
6,433,136
AI Foundry - Remote MCP Server - Failure: Error - RequiresAction on ToolCall
<p>I have a custom Remote MCP Server hosted on Azure written in Rust leveraging the &quot;rmcp&quot; crate. I have a python program to deploy the MCP Server (as an agent) into AI Foundry using &quot;McpTool&quot;, I see that agent created and that all works fine. The Python program tests connectivity during the deployment like &quot;list tools&quot; and calls a tool to access some of the MCP information. Those all function well during the creation of the agent within the python program, also the MCP Server can be accessed by other Hosts like Claude.AI through a Python Proxy since there isn't native HTTP streaming support in the desktop version. The MCP server isn't suspect at this point, but the interconnection between it and AI Foundry.</p> <p>The problem is when I use the Agent in the Playground, I get an &quot;Error: RequiresAction&quot; when trying to execute a tool on the MCP server:</p> <pre><code>{ name: &quot;run_Fhvco5aH9JbS5lceERHW7lDj&quot; context: { trace_id: &quot;thread_h2Ywxs8TigcIhlQEYSAm1LF4&quot; span_id: &quot;run_Fhvco5aH9JbS5lceERHW7lDj&quot; thread_id: &quot;thread_h2Ywxs8TigcIhlQEYSAm1LF4&quot; } kind: &quot;Run&quot; parent_id: &quot;thread_h2Ywxs8TigcIhlQEYSAm1LF4&quot; start_time: &quot;2025-09-04T17:34:25.000Z&quot; end_time: undefined status: { status_code: &quot;Error&quot; description: &quot;RequiresAction&quot; } attributes: { span_type: &quot;Run&quot; } } </code></pre> <p>I have done some digging and testing around this issue, it seems like this failure is due to the Client needing to get permission to run the tool, but this is not interactive within the playground. So guideance has pushed me to adding a set_approval_mode(&quot;never&quot;) to the McpTool definition (code fragment below), but this line seems to be ignored, as this doesn't get set into the MCP Agent Model.</p> <pre><code># ---------------------------------------------------------------------------- # Azure AI Foundry helpers # ---------------------------------------------------------------------------- def build_mcp_tool_for_foundry(session_id: str, allowed_tools: Optional[List[str]] = None) -&gt; McpTool: &quot;&quot;&quot;Build McpTool for Azure AI Foundry.&quot;&quot;&quot; # Create McpTool with supported parameters only mcp_tool = McpTool( server_label=MCP_SERVER_LABEL, server_url=MCP_SERVER_URL, allowed_tools=allowed_tools or [] ) mcp_tool.set_approval_mode(&quot;never&quot;) # &lt;-- disables approval prompts # Set headers mcp_tool.update_headers(&quot;Accept&quot;, &quot;application/json, text/event-stream&quot;) mcp_tool.update_headers(&quot;Content-Type&quot;, &quot;application/json&quot;) mcp_tool.update_headers(&quot;Accept-Encoding&quot;, &quot;identity&quot;) mcp_tool.update_headers(&quot;SuperSecret&quot;, SUPER_SECRET) if session_id: mcp_tool.update_headers(&quot;Mcp-Session-Id&quot;, session_id) mcp_tool.update_headers(&quot;X-Session-Id&quot;, session_id) if AUTH_BEARER: mcp_tool.update_headers(&quot;Authorization&quot;, f&quot;Bearer {AUTH_BEARER}&quot;) if VERBOSE: print(f&quot;[MCP] Tool configured with {len(allowed_tools or [])} allowed tools&quot;) if allowed_tools: print(f&quot;[MCP] Allowed tools: {', '.join(allowed_tools[:5])}&quot;) return mcp_tool </code></pre> <p>Here is a redacted version of the &quot;Agent&quot; as listed in VS Code &quot;Azure AI Foundry&quot;:</p> <pre><code># yaml-language-server: $schema=https://aka.ms/ai-foundry-vsc/agent/1.0.0 # version of the agent schema version: 1.0.0 name: mcp-agent # Give a description of your agent. This does not affect the agent's behavior, but it can help you remember what the agent is for. # Please use instructions to define the agent's behavior. description: null # unique identifier for the agent, should be set by the system when deploy complete # keep it empty when creating a new agent and do not change it when updating the agent id: asst_NKOCOTFHryz****** # metadata for the agent, uncommented those lines if you want to add metadata # model id of the agent, can not be empty # Press SPACE to view a list of models currently connected to your project. model: id: gpt-4o options: temperature: 1 top_p: 1 # Give your agent clear directions on what to do and how to do it. # Include specific tasks, their order, and any special instructions like tone or engagement style. instructions: You are a helpful agent that can use MCP tools to assist users. # Add external tools to enhance your agent's abilities. tools: # Use MCP tools to interact with external services. - type: mcp id: QSCloud options: server_url: http://XXXXXXXXXX.azure-api.net/v1/mcp allowed_tools: - XXXXXXXXX - XXXXXXXXXX - XXXXXXXXXX - XXXXXXXXXX - XXXXXXXXXX </code></pre> <p>I have tried a few other approaches recommended to try but they are usually syntactically incorrect. Anyone know why my attempt to set the approval to &quot;never&quot; is failing or know how to fix this issue?</p>
<python><model-context-protocol><azure-ai-foundry>
2025-09-04 18:04:59
2
575
mazecreator
79,756,013
1,390,639
turn Python float argument into numpy array, keep array argument the same
<p>I have a simple function that is math-like:</p> <pre><code>def y(x): return x**2 </code></pre> <p>I know the operation <code>x**2</code> will return a <code>numpy</code> array if supplied a numpy array and a float if supplied a float.</p> <p>for more complicated functions that include an integral I want it to return a <code>numpy</code> array in all cases, even if supplied a float. I designed my function like this to handle the array case. NOTE: the integrate.quad function I don't believe is vectorized natively and I'm weary of using <code>np.vectorize</code> even if it works because I (think) I've seen major performance degradation before.</p> <pre><code>def f(x): out = np.zeros(np.shape(x)) for i,X in enumerate(X): out[i] = scipy.integrate.quad(y,0,5)[0] return out </code></pre> <p>I have often used an if statement like:</p> <pre><code>if ~isinstance(t,np.ndarray): t = np.asarray([t]) </code></pre> <p>I wonder if there is a more efficient way. Is there a one-liner? Can I add checking that the input is not something ridiculous like a string too in an elegant and understandable way?</p>
<python><arrays><numpy><floating-point>
2025-09-04 17:04:47
2
1,259
villaa
79,755,894
1,742,777
Can I find out if an instance of a SQL Alchemy Model meets the filter requirements without scanning the entire table?
<p>Suppose I have the SQLAlchemy code shown below. It retrieves all the users who are older than 28 years old.</p> <p>Now suppose I create a new User <code>u2 = User(name='Bart', age=66)</code>.</p> <p>I want to find out if <code>u2</code> matches the filter of <code>my_query</code> without going back to the DB and scanning the entire table. Can I do it?</p> <pre><code>from sqlalchemy import create_engine, Column, Integer, String from sqlalchemy.orm import sessionmaker, declarative_base Base = declarative_base() class User(Base): __tablename__ = 'users' id = Column(Integer, primary_key=True) name = Column(String) age = Column(Integer) engine = create_engine('sqlite:///:memory:') Base.metadata.create_all(engine) Session = sessionmaker(bind=engine) session = Session() # Add some data session.add_all([ User(name='Alice', age=30), User(name='Bob', age=25), User(name='Charlie', age=35) ]) session.commit() # Filter users older than 28 my_query = session.query(User).filter(User.age &gt; 28) users = my_query.all() for user in users: print(f&quot;Name: {user.name}, Age: {user.age}&quot;) </code></pre>
<python><database><postgresql><sqlalchemy>
2025-09-04 15:11:27
1
12,798
Saqib Ali
79,755,800
1,700,890
Downloading "BigQuery_Helper" wheels locally results in error "Multiple top-level modules discovered in a flat-layout"
<p>I am trying to download all packages from <code>requirements.txt</code> file.</p> <p>I ran</p> <pre class="lang-none prettyprint-override"><code>! pip download -r requirements.txt -d ./wheels --verbose </code></pre> <p>It generated error:</p> <pre class="lang-none prettyprint-override"><code>Obtaining bq_helper from git+https://github.com/SohierDane/BigQuery_Helper@8615a7f6c1663e7f2d48aa2b32c2dbcb600a440f#egg=bq_helper (from -r requirements.txt (line 53)) Running command git config --get-regexp 'remote\..*\.url' remote.origin.url https://github.com/SohierDane/BigQuery_Helper Running command git rev-parse HEAD 8615a7f6c1663e7f2d48aa2b32c2dbcb600a440f Skipping because already up-to-date. Running command python setup.py egg_info error: Multiple top-level modules discovered in a flat-layout: ['version', 'bq_helper', 'test_helper']. To avoid accidental inclusion of unwanted files or directories, setuptools will not proceed with this build. If you are trying to create a single distribution with multiple modules on purpose, you should not rely on automatic discovery. Instead, consider the following options: 1. set up custom discovery (`find` directive with `include` or `exclude`) 2. use a `src-layout` 3. explicitly set `py_modules` or `packages` with a list of names To find more information, look for &quot;package discovery&quot; on setuptools docs. error: subprocess-exited-with-error × python setup.py egg_info did not run successfully. │ exit code: 1 ╰─&gt; See above for output. note: This error originates from a subprocess, and is likely not a problem with pip. full command: /usr/bin/python3 -c ' exec(compile('&quot;'&quot;''&quot;'&quot;''&quot;'&quot;' # This is &lt;pip-setuptools-caller&gt; -- a caller that pip uses to run setup.py # # - It imports setuptools before invoking setup.py, to enable projects that directly # import from `distutils.core` to work with newer packaging standards. # - It provides a clear error message when setuptools is not installed. # - It sets `sys.argv[0]` to the underlying `setup.py`, when invoking `setup.py` so # setuptools doesn'&quot;'&quot;'t think the script is `-c`. This avoids the following warning: # manifest_maker: standard file '&quot;'&quot;'-c'&quot;'&quot;' not found&quot;. # - It generates a shim setup.py, for handling setup.cfg-only projects. import os, sys, tokenize try: import setuptools except ImportError as error: print( &quot;ERROR: Can not execute `setup.py` since setuptools is not available in &quot; &quot;the build environment.&quot;, file=sys.stderr, ) sys.exit(1) __file__ = %r sys.argv[0] = __file__ if os.path.exists(__file__): filename = __file__ with tokenize.open(__file__) as f: setup_py_code = f.read() else: filename = &quot;&lt;auto-generated setuptools caller&gt;&quot; setup_py_code = &quot;from setuptools import setup; setup()&quot; exec(compile(setup_py_code, filename, &quot;exec&quot;)) '&quot;'&quot;''&quot;'&quot;''&quot;'&quot;' % ('&quot;'&quot;'/kaggle/working/src/bq-helper/setup.py'&quot;'&quot;',), &quot;&lt;pip-setuptools-caller&gt;&quot;, &quot;exec&quot;))' egg_info --egg-base /tmp/pip-pip-egg-info-rgvr237v cwd: /kaggle/working/src/bq-helper/ Preparing metadata (setup.py) ... error error: metadata-generation-failed × Encountered error while generating package metadata. ╰─&gt; See above for output. note: This is an issue with the package mentioned above, not pip. hint: See above for details. </code></pre> <p>Is there a way to fix this?</p> <p>Here is my <code>requirements.txt</code> <a href="https://drive.google.com/file/d/1Wv8FE2wIX-tLYDAXB4jl6Vh9NQ6hwdSl/view?usp=sharing" rel="nofollow noreferrer">file</a></p>
<python><pip><setuptools>
2025-09-04 13:50:48
1
7,802
user1700890
79,755,748
24,332,077
Is iteration order of a set in Python preserved until that set is modified?
<p>According to <a href="https://docs.python.org/3/library/stdtypes.html#set-types-set-frozenset" rel="nofollow noreferrer">the Python documentation</a>, <code>set</code> is a mutable unordered collection.</p> <p>Usually, it's implemented as a hash table that stores references to objects as its keys. Comparing to <code>dict</code> (which is also usually a hash table), a dictionary's order of elements is guaranteed to be insertion order since Python 3.7. For <code>set</code>, it is not.</p> <p>There is beautiful in-depth explanation about Python dicts and sets implementation behaviour in <a href="https://stackoverflow.com/questions/15479928/why-is-the-order-in-dictionaries-and-sets-arbitrary">Why is the order in dictionaries and sets arbitrary?</a></p> <p>But it remains unclear: can it be that a <code>set</code> will be <strong>internally</strong> rehashed or reallocated by Python interpreter during the execution of some code that doesn't modify that set explicitly? Or is it guaranteed to be stable within this run of a program? Does it depend on set size?</p> <pre class="lang-py prettyprint-override"><code>s = {1, 4, 3, 5} iter_result = [i for i in s] # some other code not modifying set assert [i for i in s] == iter_result # always true within this run??? </code></pre>
<python><set><iteration>
2025-09-04 12:51:54
2
353
SLebedev777
79,755,548
2,218,321
Why only creating a task will run the coroutine in python?
<p>There is something I can't understand in this code</p> <pre><code>import asyncio async def fetch_data(param): print(f&quot;Do something with {param}...&quot;) await asyncio.sleep(param) print(f&quot;Done with {param}&quot;) return f&quot;Result of {param}&quot; async def main(): task1 = asyncio.create_task(fetch_data(1)) task2 = asyncio.create_task(fetch_data(2)) result2 = await task2 print(&quot;Task 2 fully completed&quot;) result1 = await task1 print(&quot;Task 1 fully completed&quot;) return [result1, result2] results = asyncio.run(main()) print(results) </code></pre> <p>The output is</p> <pre><code>Do something with 1... Do something with 2... Done with 1 Done with 2 Task 2 fully completed Task 1 fully completed ['Result of 1', 'Result of 2'] </code></pre> <p>I expected to see</p> <blockquote> <p>Do something with 2...</p> </blockquote> <p>In the first line, but it outputs <code>Do something with 1...</code> first. It seems just creating tasks will run the coroutine, while from what I read and saw, it only registers it in the event loop. The flow should be</p> <ul> <li>From <code>results = asyncio.run(main())</code> the main is registered into event loop</li> <li>Event loop runs the <code>main</code></li> <li>Two tasks 1,2 are created and registered into the event loop, with status <code>ready</code></li> <li>By <code>result2 = await task2</code>, the <code>main</code> is suspended <code>fetch_data(2)</code> is run</li> </ul> <p>From this flow, I expect to see <code>Do something with 2...</code> in the first line. Why does this output the <code>Do something with 1...?</code> first?</p> <p>To verify, I run this</p> <pre><code>import asyncio async def fetch_data(param): print(f&quot;Do something with {param}...&quot;) await asyncio.sleep(param) print(f&quot;Done with {param}&quot;) return f&quot;Result of {param}&quot; async def main(): task1 = asyncio.create_task(fetch_data(1)) task2 = asyncio.create_task(fetch_data(2)) results = asyncio.run(main()) print(results) </code></pre> <p>and the output is</p> <pre><code>Do something with 1... Do something with 2... None </code></pre> <p>Why are the couroutines running even without awaiting the tasks? Why does <code>print(f&quot;Done with {param}&quot;)</code> not run in this version?</p>
<python><async-await><python-asyncio><coroutine>
2025-09-04 09:52:53
4
2,189
M a m a D
79,755,358
7,176,676
How to deploy a Databricks App with tesseract-ocr
<p>I am trying to deploy a Databricks app that uses PyMUPDF library to extract text from PDF files. Under the hood, it tries to use an OS dependency called 'tesseract-ocr'. Please note that this is not the pytesseract library, but a dependency that need to be installed on the system.</p> <p>My understanding is that when deploying a Databricks App, it takes your source code, requirements.txt, and app.yaml to construct a databricks-specific Docker container after which it exposes your app (e.g. Streamlit). I tried to install the required dependency like so in the <code>app.yaml</code>:</p> <pre><code>command: - bash - -c - | apt-get update apt-get install -y tesseract-ocr streamlit run app.py </code></pre> <p>However, I get a <code>no access</code> error, see image below.</p> <p><a href="https://i.sstatic.net/J0abXY2C.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/J0abXY2C.png" alt="enter image description here" /></a></p> <p>Using <code>sudo apt</code> also doesn't work. Is there any other way to install tesseract-ocr within a Databricks App? Please note that installing it on a cluster won't work AFAIK, as a Databricks App runs in isolation.</p> <p>I was hoping I could simply provide a Dockerfile that the app deployment process could use, which would resolve this case. This does not seem to be possible. I would find it cumbersome to create a separate API just to make calls for OCR stuff.</p> <p>Any suggestions how to resolve this?</p>
<python><databricks><ocr>
2025-09-04 06:34:28
0
395
flow_me_over
79,755,351
6,186,822
How to have a parent terminal view and pilot a child pseudo-terminal
<p>Goal: In Python have a parent terminal start a program in a child pseudo-terminal (PTY), see its outputs and provide its inputs. At any given time, the parent terminal should be able to query what's on the child's screen at any character index</p> <p>So in pseudocode the parent would do:</p> <pre><code>child_terminal = pty.openpty() child_terminal.exec(my_program) child_terminal.charAt(2, 4) # Returns &quot;@&quot; child_terminal.submitInput(&quot;A&quot;) child_terminal.charAt(2, 4) # Returns &quot;!&quot; now because the contents of the screen changed per the input submitted </code></pre> <p>Is this a workable task or is there something about command line rendering/state that makes it unfeasible? I thought it would be a common use case for scraping/automation but am not seeing much for this</p>
<python><tty><pty>
2025-09-04 06:29:17
1
1,590
GenTel
79,755,132
16,563,251
Create recursive TypeAlias at runtime
<p>For use with <a href="https://docs.pydantic.dev/latest" rel="nofollow noreferrer">pydantic</a>, I want to create recursive type aliases at runtime. &quot;Normal&quot; <a href="https://typing.python.org/en/latest/spec/aliases.html" rel="nofollow noreferrer">type aliases</a> are possible like this:</p> <pre class="lang-py prettyprint-override"><code>from typing import TypeAliasType alias = TypeAliasType(&quot;alias&quot;, str) foo: alias = &quot;bar&quot; </code></pre> <p>But now I want to create a recursive type alias like this one:</p> <pre class="lang-py prettyprint-override"><code>from collections.abc import Sequence type recursive = int | Sequence[recursive] </code></pre> <p>This includes a <a href="https://peps.python.org/pep-0484/#forward-references" rel="nofollow noreferrer">forward reference</a>, which I found impossible to emulate at runtime. There is <a href="https://docs.python.org/3/library/typing.html#typing.ForwardRef" rel="nofollow noreferrer"><code>ForwardRef</code></a>, but besides being discouraged it did not work for me:</p> <pre class="lang-py prettyprint-override"><code>from typing import ForwardRef, TypeAliasType from collections.abc import Sequence # This is what I want to emulate at runtime type recursive = int | Sequence[recursive] print(recursive.__value__) # int | collections.abc.Sequence[recursive] # This is my (failing) attempt to emulate it at runtime dynamic_list_of_types = float | int ref = ForwardRef(&quot;recursive&quot;) recursive = TypeAliasType(&quot;recursive&quot;, dynamic_list_of_types | Sequence[ref]) print(recursive.__value__) # float | int | collections.abc.Sequence[ForwardRef('recursive')] </code></pre> <p>How can I declare such a recursive type alias during runtime?</p> <p>I am aware of how to do it statically as in <a href="https://stackoverflow.com/questions/53845024/defining-a-recursive-type-hint-in-python">this question</a>. Because the list of allowed types (the <code>dynamic_list_of_types</code> in my example above) is not fixed, but can be expanded dynamically, this is not possible here.</p> <p>More context: I have a pydantic model inside some package that offers support for plugins. This model has some fields like <code>mylist: Sequence[MyTypeAlias]</code>, where <code>MyTypeAlias</code> describes a union of allowed types. The plugins are now supposed to expand this list of allowed types with their own models. The solution I am aiming for is to collect this list and replace all occurences of <code>MyTypeAlias</code> with the updated union. This works well, except for the recursive types mentioned here. Other options to manually mark all such occurences (which are many) would decrease readability in the models of the main package, so I chose this approach.</p>
<python><python-typing><pydantic>
2025-09-03 22:29:10
0
573
502E532E
79,755,062
13,014,864
Best method to create generator for TensorFlow with list of array inputs
<p>I am using TensorFlow/Keras to create a deep learning model. The network is built as follows:</p> <pre class="lang-py prettyprint-override"><code>inps = [] features = [] for i in range(number_windows): inp = Input(shape=(window_length,), name=f&quot;input_{i}&quot;) inps.append(inp) feat = Dense(25)(inp) feat = BatchNormalization()(feat) feat = LeakyReLU()(feat) features.append(feat) comb = concatenate(features) comb = Dropout(0.50)(comb) top = Dense(512)(comb) top = BatchNormalization()(top) top = LeakyReLU()(top) top = Dropout(0.40)(top) top = Dense(256)(top) emb = EmbeddingLayer()(top) top = BatchNormalization()(top) top = LeakyReLU()(top) top = Dropout(0.25)(top) classification = Dense(n_classes, activation='softmax', name='classification')(top) mdl = Model(inputs=inps, outputs=[emb, classification]) </code></pre> <p>The <code>EmbeddingLayer</code> is a custom layer that effectively returns an <code>L2</code> normalization of the input. I have a data generating function:</p> <pre class="lang-py prettyprint-override"><code>def data_loading_generator( data_matrix: np.typing.NDArray, data_labels: np.typing.NDArray, window_length, dw ): num_rows = data_matrix.shape[0] y_onehot = np.stack( [np.flip(data_labels), data_labels], axis=1 ) data_segments = segment_data_batch( data_mat=data_matrix, w=window_length, dw=dw ) for row_number in range(0, num_rows): yield ( {f&quot;input_{ii}&quot;: x[row_number, :] for ii, x in enumerate(data_segments)}, ( { &quot;embedding_layer&quot;: data_labels[row_number], &quot;classification&quot;: y_onehot[row_number, :] } ) ) </code></pre> <p>The function <code>segment_data_batch</code> takes in a matrix and outputs a list of overlapping segments from each row of the matrix, length <code>window_length</code>, and overlap <code>window_length - dw</code>. I believe I can optimize this a little by removing the <code>segment_data_batch</code> function and simply segmenting each row of the matrix as they are generated:</p> <pre class="lang-py prettyprint-override"><code>def data_loading_generator( data_matrix: np.typing.NDArray, data_labels: np.typing.NDArray, window_length, dw ): num_rows = data_matrix.shape[0] for row_number in range(0, num_rows): data_segments = segment_data( spectra_matrix[row_number, :], w=window_length, dw=dw ) yield ( {f&quot;input_{ii}&quot;: data_segments[ii, :] for ii in range(data_segments.shape[0])}, ( { &quot;embedding_layer&quot;: data_labels[row_number], &quot;classification&quot;: tf.one_hot( data_labels[row_number], depth=2, dtype=tf.uint16 ) } ) ) </code></pre> <p>The new function <code>segment_data</code> takes a single row in the <code>data_matrix</code> and returns a numpy array <code>number_windows x window_length</code>. However, I'm wondering if I can make this more efficient using native TensorFlow functions.</p>
<python><tensorflow><keras><dataloader>
2025-09-03 20:29:53
1
931
CopyOfA
79,754,617
17,569,967
Pyside environment variable doesn't apply in Pycharm's run configuration
<p>I'm trying to diagnose missing <code>Slot</code> decorators as specified in the end of the section <a href="https://doc.qt.io/qtforpython-6/tutorials/basictutorial/signals_and_slots.html#the-slot-class" rel="nofollow noreferrer">https://doc.qt.io/qtforpython-6/tutorials/basictutorial/signals_and_slots.html#the-slot-class</a>. I was able to activate this functionality when set the variable and run program from terminal. But I couldn't achieve this when set the variable in run configuration and launched from PyCharm.</p> <p>This is my program:</p> <pre class="lang-py prettyprint-override"><code>import random import sys from PySide6.QtCore import Qt from PySide6.QtWidgets import QApplication, QLabel, QPushButton, QVBoxLayout, QWidget class MyWidget(QWidget): def __init__(self): super().__init__() self.hello = [&quot;Hallo Welt&quot;, &quot;Hei maailma&quot;, &quot;Hola Mundo&quot;, &quot;Привет мир&quot;] self.button = QPushButton(&quot;Click me!&quot;) self.text = QLabel(&quot;Hello World&quot;, alignment=Qt.AlignmentFlag.AlignCenter) self.layout = QVBoxLayout(self) self.layout.addWidget(self.text) self.layout.addWidget(self.button) self.button.clicked.connect(self.magic) def magic(self): self.text.setText(random.choice(self.hello)) if __name__ == &quot;__main__&quot;: app = QApplication([]) widget = MyWidget() widget.resize(800, 600) widget.show() sys.exit(app.exec()) </code></pre> <p>These are the environment variables of run configuration: <a href="https://i.sstatic.net/gwBZMWMI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gwBZMWMI.png" alt="enter image description here" /></a></p> <p>What am I doing wrong?</p>
<python><pycharm><pyside6>
2025-09-03 12:51:29
0
412
Intolighter
79,754,606
13,014,864
TensorFlow data loader from generator error "Dataset had more than one element"
<p>I am trying to implement a TensorFlow dataset from a Python generator because I am having problems with my model consuming memory, inevitably resulting in a OOM crash (see my question on that <a href="https://stackoverflow.com/questions/79727623/tensorflow-keras-model-accumulates-system-and-gpu-ram-during-training">here</a>). So, I am thinking that a generator might be better suited to handle any memory problems.</p> <p>However, when I try to implement a generator for my model, I get this error: <code>Local rendezvous is aborting with status: INVALID_ARGUMENT: Dataset had more than one element.</code></p> <p>Here is my generator code:</p> <pre class="lang-py prettyprint-override"><code>def data_loading_generator( data_matrix: np.typing.NDArray, data_labels: np.typing.NDArray, window_length, dw ): num_rows = data_matrix.shape[0] y_onehot = np.stack( [np.flip(data_labels), data_labels], axis=1 ) data_segments = segment_data_batch( data_mat=data_matrix, w=window_length, dw=dw ) for row_number in range(0, num_rows): yield ( {f&quot;input_{ii}&quot;: x[row_number, :] for ii, x in enumerate(data_segments)}, ( {&quot;embedding_layer&quot;: data_labels[row_number]}, {&quot;classification&quot;: y_onehot[row_number, :]} ) ) </code></pre> <p>The function <code>segment_data_batch</code> takes in a matrix and outputs a list of overlapping segments from each row of the matrix, length <code>window_length</code>, and overlap <code>window_length - dw</code>. The inputs to the neural net are each labeled as <code>input_{ii}</code> and each input takes a single segment from the list of segments. I have labels for the data for comparison at the embedding layer and the classification layer. I initialize the data loader as shown below:</p> <pre class="lang-py prettyprint-override"><code>train_tf_dataset = tf.data.Dataset.from_generator( data_loading_generator, args=[X_train, Y_train, w_len, dw], output_signature=( {f&quot;input_{ii}&quot;: tf.TensorSpec(shape=(w_len,), dtype=tf.float64, name=f&quot;input_{ii}&quot;) for ii in range(number_windows)}, ( {&quot;embedding_layer&quot;: tf.TensorSpec(shape=(), dtype=tf.int32, name=&quot;embedding_layer&quot;)}, {&quot;classification&quot;: tf.TensorSpec(shape=(2,), dtype=tf.int32, name=&quot;classification&quot;)} ) ) ) </code></pre> <p>Here, <code>X_train</code> is an <code>N x M</code> numpy array where each row is a single data point, and <code>Y_train</code> is an <code>N</code>-length numpy vector. When I call <code>train_tf_dataset.take(1)</code>, I get the following:</p> <pre class="lang-py prettyprint-override"><code>&lt;_TakeDataset element_spec=({'input_0': TensorSpec(shape=(50,), dtype=tf.float64, name='input_0'), 'input_1': TensorSpec(shape=(50,), dtype=tf.float64, name='input_1'), 'input_2': TensorSpec(shape=(50,), dtype=tf.float64, name='input_2'), 'input_3': TensorSpec(shape=(50,), dtype=tf.float64, name='input_3'), 'input_4': TensorSpec(shape=(50,), dtype=tf.float64, name='input_4'), 'input_5': TensorSpec(shape=(50,), dtype=tf.float64, name='input_5'), 'input_6': TensorSpec(shape=(50,), dtype=tf.float64, name='input_6'), 'input_7': TensorSpec(shape=(50,), dtype=tf.float64, name='input_7'), 'input_8': TensorSpec(shape=(50,), dtype=tf.float64, name='input_8'), 'input_9': TensorSpec(shape=(50,), dtype=tf.float64, name='input_9'), 'input_10': TensorSpec(shape=(50,), dtype=tf.float64, name='input_10'), 'input_11': TensorSpec(shape=(50,), dtype=tf.float64, name='input_11'), 'input_12': TensorSpec(shape=(50,), dtype=tf.float64, name='input_12'), 'input_13': TensorSpec(shape=(50,), dtype=tf.float64, name='input_13'), 'input_14': TensorSpec(shape=(50,), dtype=tf.float64, name='input_14'), 'input_15': TensorSpec(shape=(50,), dtype=tf.float64, name='input_15'), 'input_16': TensorSpec(shape=(50,), dtype=tf.float64, name='input_16'), ... }, ({'embedding_layer': TensorSpec(shape=(), dtype=tf.int32, name=None)}, {'classification': TensorSpec(shape=(2,), dtype=tf.int32, name=None)}))&gt; </code></pre> <p>When I call <code>train_tf_dataset.get_single_element()</code>, I get the error described above, namely:</p> <pre class="lang-py prettyprint-override"><code>InvalidArgumentError: {{function_node __wrapped__DatasetToSingleElement_output_types_81_device_/job:localhost/replica:0/task:0/device:CPU:0}} Dataset had more than one element. [Op:DatasetToSingleElement] name: </code></pre> <p>What am I doing wrong here?</p>
<python><tensorflow><keras><dataloader>
2025-09-03 12:40:39
1
931
CopyOfA
79,754,535
4,882,126
Permutation of list in python with one or multiple fixed items
<p>I have a list of <code>a = [5,6,7,9]</code> I want all the possible permutations, with one or more entry fixed.</p> <p>e.g. If I want to fix third element <code>7</code>, then in all permutations I want to see <code>7</code> in the third place of the list.</p> <p>The code I wrote works well.</p> <pre><code>from itertools import permutations a = [5,6,7,9] fixed_element_index = 2 # I want to keep 7 at the same location and permutate only [5,6, ,9] a_moveable = a[:fixed_element_index]+a[fixed_element_index+1:] perm = permutations(a_moveable) perm_list = [] for x in perm: perm_list.append(list(x)) for y in perm_list: y.insert(fixed_element_index,a[fixed_element_index]) print(perm_list) </code></pre> <p>My main problem is if the list one is long and I want multiple indexes to be fixed then what is the best way to do that.</p> <p>the output is like this <code>[[5, 6, 7, 9], [5, 9, 7, 6], [6, 5, 7, 9], [6, 9, 7, 5], [9, 5, 7, 6], [9, 6, 7, 5]]</code></p>
<python><list><permutation>
2025-09-03 11:31:29
2
512
Zeryab Hassan Kiani
79,754,233
9,021,547
How to change order and number of tasks to be executed in airflow
<p>I want to create a dag workflow with 5 different branches as follows: <a href="https://i.sstatic.net/lG5LPdG9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lG5LPdG9.png" alt="enter image description here" /></a></p> <p>The base dag is this:</p> <pre><code>from datetime import datetime, timedelta, date from airflow import DAG from airflow.operators.python import BranchPythonOperator from airflow.providers.common.sql.operators.sql import SQLExecuteQueryOperator from airflow.models import DagRun from airflow.utils.trigger_rule import TriggerRule from airflow.operators.dummy import DummyOperator def get_path(**kwargs): params = kwargs.get('params',{}) if params.get('path') == '1': return 'task_2_a' elif params.get('path') == '2': return 'task_2_b' elif params.get('path') == '3': return 'task_2_c' elif params.get('path') == '4': return ['task_2_a','task_2_b'] else: return ['task_2_a','task_2_b', 'task_2_c'] with DAG( 'test', description='test', tags=[&quot;test&quot;], schedule_interval=None, start_date=datetime(2025, 7, 1), default_args={ 'retries': 0, 'retry_delay': timedelta(minutes=1), 'conn_id': 'sgk_gp' }, params={ 'name':'', 'path':'' } ) as dag: task_1 = SQLExecuteQueryOperator( task_id='task_1', sql=f&quot;&quot;&quot; drop table if exists {{{{dag_run.conf.name}}}}; create table {{{{dag_run.conf.name}}}} ( some_text character varying ) &quot;&quot;&quot; ) branch_1 = BranchPythonOperator( task_id='branch_1', python_callable=get_path, provide_context=True, do_xcom_push=False ) task_2_a = SQLExecuteQueryOperator( task_id='task_2_a', sql=f&quot;&quot;&quot; insert into {{{{dag_run.conf.name}}}} (some_text) select some_text from (select 'aaa' as some_text) as tab &quot;&quot;&quot; ) task_2_b = SQLExecuteQueryOperator( task_id='task_2_b', sql=f&quot;&quot;&quot; insert into {{{{dag_run.conf.name}}}} (some_text) select some_text from (select 'bbb' as some_text) as tab &quot;&quot;&quot; ) task_2_c = SQLExecuteQueryOperator( task_id='task_2_c', sql=f&quot;&quot;&quot; insert into {{{{dag_run.conf.name}}}} (some_text) select some_text from (select 'ccc' as some_text) as tab &quot;&quot;&quot; ) task_3 = SQLExecuteQueryOperator( task_id='task_3', sql=f&quot;&quot;&quot; insert into {{{{dag_run.conf.name}}}} (some_text) select some_text from (select '333' as some_text) as tab &quot;&quot;&quot; ) complete = DummyOperator(task_id=&quot;complete&quot;, trigger_rule=TriggerRule.NONE_FAILED) </code></pre> <p>I tried setting the workflow like this initially:</p> <pre><code>task_1 &gt;&gt; branch_1 &gt;&gt; [task_2_a &gt;&gt; task_2_b &gt;&gt; task_2_c] &gt;&gt; task_3 &gt;&gt; complete </code></pre> <p>But this would lead to Task_2 being executed in parallel if branches 4 or 5 were chosen and I need them to run strictly in order.</p> <p>Then I tried doing this, but this did not yield the desired reults either.</p> <pre><code> task_1 &gt;&gt; branch_1 &gt;&gt; task_2_a &gt;&gt; task_3 &gt;&gt; complete task_1 &gt;&gt; branch_1 &gt;&gt; task_2_b &gt;&gt; task_3 &gt;&gt; complete task_1 &gt;&gt; branch_1 &gt;&gt; task_2_c &gt;&gt; task_3 &gt;&gt; complete task_1 &gt;&gt; branch_1 &gt;&gt; task_2_a &gt;&gt; task_2_b &gt;&gt; task_3 &gt;&gt; complete task_1 &gt;&gt; branch_1 &gt;&gt; task_2_a &gt;&gt; task_2_b &gt;&gt; task_2_c &gt;&gt; task_3 &gt;&gt; complete </code></pre> <p>I had an idea to implement multiple branch operators but this would to a very confusing structure in my opinion. Is there a simple way to achieve this?</p>
<python><airflow>
2025-09-03 07:02:32
1
421
Serge Kashlik
79,754,148
11,790,637
How should I install CUDA dependencies in readthedocs build with apidoc autodoc
<p>I have a python project which uses readthedocs and its apidoc/autodoc to build API documents automatically. The project uses a third-party tool that can only be installed from source like this: <code>pip install git+https://github.com/xxx/yyy.git</code> (no published prebuilt wheel). Furthermore, the third-party tool needs CUDA compilation. This causes a problem for building my documentation on readthedocs because readthedocs does not have a CUDA environment on its server.</p> <p>Problems I've encountered so far:</p> <ol> <li><p>If I don't list the third-party tool in <code>requirements.txt</code> for readthedocs, then <code>apidoc</code> would throw module not found error when building the documentation. And as a result, all relevent APIs won't appear in the doc.</p> </li> <li><p>If I add <code>pip install git+https://github.com/xxx/yyy.git</code> to <code>build.jobs.install</code> in <code>.readthedocs.yaml</code>, readthedocs would complain it does not have a CUDA compiler.</p> </li> </ol> <p>Questions:</p> <ol> <li><p>How can I successfully install the CUDA-based dependency?</p> </li> <li><p>If not, can I tell readthedocs or apidoc to ignore those import errors?</p> </li> <li><p>Even if I were able to install <code>yyy</code>, it normally takes ~45min to compile. Is there anyway to use a prebuilt mirror instead of rebuilding it every time?</p> </li> </ol>
<python><python-sphinx><read-the-docs><autodoc>
2025-09-03 05:05:55
0
2,337
ihdv
79,753,801
6,271,889
Pyparsing ParseResults attributes type hinting
<p>I'm using Pyparsing and I'm getting a <code>reportArgumentType</code> error from Pyparse:</p> <pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass import pyparsing as pp @dataclass class Foo: value: int def parse_foo(line: str) -&gt; Foo: grammar = pp.Keyword(&quot;FOO&quot;) + pp.common.integer(&quot;value&quot;) + pp.StringEnd() parsed = grammar.parse_string(line, parse_all=True) return Foo(value=parsed.value) # Error on this line </code></pre> <p>The error is</p> <pre class="lang-none prettyprint-override"><code>Argument of type &quot;Any | list[Any] | Unknown | ParseResults | Literal['']&quot; cannot be assigned to parameter &quot;value&quot; of type &quot;int&quot; in function &quot;__init__&quot; Type &quot;Any | list[Any] | Unknown | ParseResults | Literal['']&quot; is not assignable to type &quot;int&quot; &quot;Literal['']&quot; is not assignable to &quot;int&quot; </code></pre> <p>My current solution is quiet pyright with <code># pyright: ignore[reportArgumentType]</code>:</p> <pre class="lang-py prettyprint-override"><code>return Foo(value=parsed.value) # pyright: ignore[reportArgumentType] </code></pre> <p>I don't like that. I'd rather fix the underlying issue with the types of the <code>ParseResults</code> attributes I get from <a href="https://pyparsing-docs.readthedocs.io/en/latest/pyparsing.html#pyparsing.ParserElement.parse_string" rel="nofollow noreferrer"><code>parse_string</code></a>. Is there a way to set the types? I was kind of expecting <code>pp.common.integer</code> would set the type hint to <code>int</code> and make Pyright happy, but it obviously isn't.</p>
<python><python-typing><pyparsing><pyright>
2025-09-02 17:54:17
0
2,021
Leonardo
79,753,551
4,367,177
Why is Airflow Bash operator not passing XCom to another operator?
<p>I'm working on a task group that needs to pass a variable from a <code>BashOperator</code> to another <code>BashOperator</code>. Each bash operator is invoking Python, and the first Python script needs to return a string in a variable that the second bash operator will take and continue with:</p> <pre><code>first_status = BashOperator( task_id=1st_taskid_str, bash_command=f&quot;python myscript.py --dag_id '{dag.dag_id}' \ --task_id '{1st_taskid_str}' --dag_conf configuration_string&quot;, retries=10, dag=dag, retry_delay=timedelta(minutes=1), do_xcom_push=True, ) #step 2 second_status = BashOperator( task_id=process_file_task_id ,bash_command=f&quot;python secondscript.py --dag_id '{dag.dag_id}' \ --task_id '{2nd_task_id}' --dag_conf configuration_string --file '{{ ti.xcom_pull(task_ids=\&quot;{1st_taskid_str}\&quot;) }}'&quot; ,dag=dag ) first_status &gt;&gt; second_status </code></pre> <p>When I view the XCom from the <code>first_status task</code>, I'm not seeing the variable that is logged by the Python script invoked.</p> <p>How can I get this variable passed from the first bash operator to the second bash operator?</p>
<python><airflow><airflow-xcom>
2025-09-02 13:22:15
2
302
BMac
79,753,458
3,611,164
Configure ruff to ignore environment-provided globals
<p>How can we configure the Ruff linter to ignore environment-provided global variables?</p> <p>I have a python codebase which contains notebooks, that run in a databricks environment. The environment provides globals such as <code>spark</code>.</p> <p>A perfectly valid example in that context would be:</p> <pre class="lang-py prettyprint-override"><code># Databricks notebook source df = spark.sql(&quot;SELECT 'hello world'&quot;) df.show() </code></pre> <p><code>ruff check</code> raises a <a href="https://docs.astral.sh/ruff/rules/undefined-name/" rel="nofollow noreferrer">F821 undefined-name error</a> on this minimal example.</p> <p>We could circumvent the problem by</p> <ul> <li>Explicitly loading the specific globals. This adds unnecessary complexity since the code will only ever run where the globals are present.</li> <li>Clutter the code with <code># noqa: F821</code></li> </ul> <p>Are there other options?</p>
<python><ruff>
2025-09-02 11:47:10
1
366
Fabitosh
79,753,424
6,045,397
Is it necessary to recreate conda environments for python on a different version of Linux-64?
<p>I use a High Performance Computing cluster that has moved between Linux versions (RedHat to Rocky). The file system is the same so my home directory is identical. Do I need to reinstall my conda environments for python? The versions of gcc and other system libraries are different, but won't conda download the same packages versions for linux-64?</p> <p>Thanks!</p>
<python><linux><conda>
2025-09-02 11:16:13
0
844
tiagoams
79,753,356
799,812
mypy fails with mixed types in variable length tuple
<p>mypy fails on code where a variable length tuple contains different types. What should I be doing here?</p> <pre><code>for i, *s in [(1, 'a'), (2, 'b', 'c')]: print(hex(i), '_'.join(s)) </code></pre> <pre><code>main.py:2: error: Argument 1 to &quot;hex&quot; has incompatible type &quot;int | str&quot;; expected &quot;int | SupportsIndex&quot; [arg-type] main.py:2: error: Argument 1 to &quot;join&quot; of &quot;str&quot; has incompatible type &quot;list[int | str]&quot;; expected &quot;Iterable[str]&quot; [arg-type] </code></pre>
<python><python-typing><mypy>
2025-09-02 10:13:45
3
317
grahamstratton
79,753,263
12,519,771
Pytest-django database not rolled back with pytest-asyncio
<p>For context, I am trying to test a WebSocket connection made through Django, so I need to set up some async tests with database support.<br /> To do so, I have set up pytest-django and I have some trouble understanding the <code>django_db</code> decorator's behavior in async contexts.</p> <p><code>django_db</code> has a parameter <code>transaction</code> which defaults to <code>False</code>. With this setting, the test class is supposed to act as <code>django.test.TestCase</code> which runs all tests in a transaction which is rolled back at the end of the test.</p> <p>However, in an async context, the objects created within a test seem to persist in other tests. Setting <code>@pytest.mark.django_db(transaction=True)</code> does make the tests pass, but increases test duration as actual modifications are made to the database.</p> <p>Here are quick examples:</p> <pre class="lang-py prettyprint-override"><code>import pytest from cameras.models import CameraGroup as MyObject @pytest.mark.django_db @pytest.mark.asyncio class TestAsync: async def test_1(self): # OK await MyObject.objects.acreate(name=&quot;same-name1&quot;) assert await MyObject.objects.acount() == 1 async def test_2(self): # FAILS # This should not see test_1's object if rollback worked assert await MyObject.objects.acount() == 0 @pytest.mark.django_db(transaction=True) @pytest.mark.asyncio class TestAsyncWithTransaction: async def test_1(self): # OK await MyObject.objects.acreate(name=&quot;same-name2&quot;) assert await MyObject.objects.acount() == 1 async def test_2(self): # OK # This should not see test_1's object if rollback worked assert await MyObject.objects.acount() == 0 @pytest.mark.django_db class TestSync: def test_1(self): # OK MyObject.objects.create(name=&quot;same-name3&quot;) assert MyObject.objects.count() == 1 def test_2(self): # OK # This should not see test_1's object if rollback worked assert MyObject.objects.count() == 0 </code></pre> <p>Have I misunderstood <code>django_db</code>, or is there something incompatible with <code>pytest-asyncio</code> ?</p>
<python><django><pytest><pytest-django><pytest-asyncio>
2025-09-02 08:34:48
1
957
RobBlanchard
79,753,068
5,118,421
Which way is faster way to create Decimal?
<p>Which way is faster to create <code>Decimal</code> - from float or from string?</p> <pre><code>Decimal(1.2) </code></pre> <p>vs</p> <pre><code>Decimal('1.2') </code></pre> <p>How does the difference in speed of the to cases?</p>
<python>
2025-09-02 04:12:35
3
1,407
Irina
79,752,914
27,596,369
How can I make all attributes in a Python dataclass optional without rewriting each one?
<p>I have this data class:</p> <pre><code>@dataclass class User: id: str name: str pwd: str picture: str url: str age: int # many, many more attributes </code></pre> <p>Now, I have to make all of the attributes of my data class optional because of a change in the data I am receiving.</p> <p>I know I can do something like this:</p> <pre><code>@dataclass class User: id: Optional[str] = None name: Optional[str] = None # and so on for the 25-30 attributes I have </code></pre> <p>But is there a more efficient way to accomplish this without having to manually write <code>Optional[..] = None</code> for every line?</p> <p>For example, I used to get data like:</p> <pre><code>attributes = {&quot;id&quot;: &quot;18ut2&quot;, &quot;pwd&quot;: &quot;qwerty&quot;, &quot;name&quot;: &quot;John Doe&quot;, &quot;picture&quot;: None, &quot;age&quot;: None, &quot;url&quot;: &quot;www.example.com&quot;} user = User(**attributes) </code></pre> <p>Before, if the attribute was <code>None</code>, the key was not omitted. Unlike now, when a key is <code>None</code>, the entire key is omitted.</p> <p>Example of current data:</p> <pre><code>attributes = {&quot;id&quot;: &quot;18ut2&quot;, &quot;pwd&quot;: &quot;qwerty&quot;, &quot;name&quot;: &quot;John Doe&quot;, &quot;url&quot;: &quot;www.example.com&quot;} # No picture or age key anymore user = User(**attributes) # Now throws an error </code></pre> <p>Error:</p> <pre class="lang-none prettyprint-override"><code>TypeError: User.__init__() missing 2 required positional arguments: 'picture' and 'age' </code></pre> <p>Note: I know overriding <code>__init__</code> or using <code>__init_subclass__</code>, but those come with limitations, so I am looking for a solution that avoids that.</p>
<python><metaprogramming><python-dataclasses>
2025-09-01 21:23:24
2
1,512
Aadvik
79,752,222
219,153
How to pretty-print a tuple of NumPy scalars?
<p>This NumPy 2.2.6 script:</p> <pre><code>import numpy as np a = np.arange(24).reshape(2, 2, 2, 3) idx = np.unravel_index(np.argmax(a, axis=None), a.shape) print(idx) </code></pre> <p>will print:</p> <pre><code>(np.int64(1), np.int64(1), np.int64(1), np.int64(2)) </code></pre> <p>which is hard to read. What is a simple way to print <code>idx</code> as <code>(1, 1, 1, 2)</code>? I'm looking for something better than <code>print(np.array(idx))</code> workaround.</p>
<python><numpy><printing>
2025-09-01 06:41:19
2
8,585
Paul Jurczak
79,751,873
8,964,393
Combine separate plots into one plot in Python
<p>I have created the following pandas dataframe:</p> <pre><code>ds = { 'Date' : ['2025-08-22 16:00:00', '2025-08-22 16:01:00', '2025-08-22 16:02:00', '2025-08-22 16:03:00', '2025-08-22 16:04:00', '2025-08-22 16:05:00', '2025-08-22 16:06:00', '2025-08-22 16:07:00', '2025-08-22 16:08:00', '2025-08-22 16:09:00', '2025-08-22 16:10:00', '2025-08-22 16:11:00', '2025-08-22 16:12:00', '2025-08-22 16:13:00', '2025-08-22 16:14:00', '2025-08-22 16:15:00', '2025-08-22 16:16:00', '2025-08-22 16:17:00', '2025-08-22 16:18:00', '2025-08-22 16:19:00', '2025-08-22 16:20:00', '2025-08-22 16:21:00', '2025-08-22 16:22:00', '2025-08-22 16:23:00', '2025-08-22 16:24:00'], 'Open': [ 11717.9, 11717.95, 11716.6, 11717.4, 11719.5, 11727.25, 11725.55, 11724.35, 11725.45, 11724.15, 11728.2, 11726.6, 11727.6, 11729.1, 11724.1, 11722.8, 11721.8, 11720.8, 11718.8, 11716.7, 11716.9, 11722.5, 11721.6, 11727.8, 11728.1], 'Low': [ 11715.9, 11716, 11715.35, 11716.45, 11719.5, 11724.3, 11723.55, 11723.15, 11723.85, 11724.15, 11725.2, 11726.6, 11727.6, 11724.2, 11722.6, 11721.6, 11719.7, 11715.8, 11716.5, 11716, 11716.9, 11721.3, 11721.4, 11726.35, 11727], 'High': [ 11718.1, 11718.1, 11717.9, 11719.4, 11727.15, 11727.45, 11726, 11725.65, 11727.2, 11727.85, 11728.2, 11728.7, 11729.5, 11729.1, 11725.5, 11723.9, 11722, 11720.8, 11719.8, 11717.7, 11722.9, 11724.3, 11727.8, 11728.3, 11728.8], 'Close' : [11718.05, 11716.5, 11717, 11719.3, 11727.15, 11725.65, 11724.15, 11725.35, 11724.05, 11727.65, 11726.7, 11727.8, 11729.2, 11724.2, 11722.6, 11721.7, 11721.2, 11718.7, 11716.6, 11716.8, 11722.6, 11721.5, 11727.6, 11728, 11727.2], 'Volume': [ 130, 88, 125, 93, 154, 102, 118, 92, 105, 116, 84, 88, 108, 99, 82, 109, 98, 130, 71, 86, 96, 83, 80, 93, 73], 'Regime': [ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3], } df = pd.DataFrame(data=ds) </code></pre> <p>The dataframe contains a field called <code>Regime</code>, which has three values:1,2 and 3.</p> <p>I have created a volume profile plot for each of those Regimes after grouping the records by 5 minutes.</p> <pre><code>df['Date'] = pd.to_datetime(df['Date']) df.set_index('Date', inplace=True) # Resample to 5-minute intervals df_30 = df.resample('5T').agg({'Open': 'first', 'High': 'max', 'Low': 'min', 'Close': 'last', 'Volume': 'sum','Regime':'last'}) df_30 = df_30.dropna() # Add date column for grouping df_30['date'] = df_30.index.date # For volume profile, use the original 1min df df['date'] = df.index.date # Group by regime for regime in df['Regime'].unique(): daily_30 = df_30[df_30['Regime'] == regime] daily_1 = df[df['Regime'] == regime] if daily_30.empty: continue # For candlestick plotting daily_30 = daily_30.reset_index() xdates = np.arange(len(daily_30)) # For volume profile min_price = daily_1['Low'].min() max_price = daily_1['High'].max() num_bins = 200 bins = np.linspace(min_price, max_price, num_bins + 1) bin_size = bins[1] - bins[0] bin_centers = (bins[:-1] + bins[1:]) / 2 volume_profile = np.zeros(num_bins) display(volume_profile) for _, row in daily_1.iterrows(): low = row['Low'] high = row['High'] vol = row['Volume'] if high == low: bin_idx = np.digitize(low, bins) - 1 if 0 &lt;= bin_idx &lt; num_bins: volume_profile[bin_idx] += vol else: vol_per_unit = vol / (high - low) start_bin = np.digitize(low, bins) end_bin = np.digitize(high, bins) for b in range(start_bin, end_bin + 1): if b &gt; 0 and b &lt;= num_bins: bin_start = bins[b - 1] bin_end = bins[b] start = max(low, bin_start) end = min(high, bin_end) portion = (end - start) * vol_per_unit volume_profile[b - 1] += portion # Normalize for plotting (scale to chart width) chart_width = len(daily_30) if max(volume_profile) &gt; 0: scaled_volume = (volume_profile / max(volume_profile)) * chart_width else: scaled_volume = volume_profile display(scaled_volume) # POC (Point of Control) poc_idx = np.argmax(volume_profile) poc_price = bin_centers[poc_idx] # print(poc_price) # Plot fig, ax = plt.subplots(figsize=(10, 6)) # Plot volume profile first (as background) ax.fill_betweenx(bin_centers, 0, scaled_volume, color='blue', alpha=0.3, step='mid') # Plot POC ax.axhline(poc_price, color='red', linestyle='-', linewidth=1) # Plot candlesticks on top candle_width = 0.6 for i in range(len(daily_30)): o = daily_30['Open'][i] h = daily_30['High'][i] l = daily_30['Low'][i] c = daily_30['Close'][i] if c &gt; o: color = 'green' bottom = o height = c - o else: color = 'red' bottom = c height = o - c # Wick ax.vlines(xdates[i], l, h, color='black', linewidth=0.5) # Body ax.bar(xdates[i], height, candle_width, bottom, color=color, edgecolor='black') ax.set_xlim(-1, chart_width + 1) ax.set_ylim(min_price - bin_size, max_price + bin_size) ax.set_xticks(xdates) ax.set_xticklabels(daily_30['Date'].dt.strftime('%H:%M'), rotation=45) ax.set_title(f'30-min Candlestick with Volume Profile - Regime: {regime}') ax.set_xlabel('Time') ax.set_ylabel('Price') plt.tight_layout() plt.show() </code></pre> <p>The code creates three separate volume profile plots.</p> <p><a href="https://i.sstatic.net/mt1FcsDs.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mt1FcsDs.png" alt="VP for Regime 1" /></a> <a href="https://i.sstatic.net/H3giIW0O.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/H3giIW0O.png" alt="VP for Regime 2" /></a> <a href="https://i.sstatic.net/gYWi4OIz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gYWi4OIz.png" alt="VP for Regime 3" /></a></p> <p>I need to combine the three plots into one single plot, such that on the x-axis Plot2 follows Plot 1, and Plot 3 follows Plot 2. Basically, I'd like to see a graph similar (but of course not exactly the same) to this.</p> <p><a href="https://i.sstatic.net/yroyIwl0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yroyIwl0.png" alt="Resulting plot format" /></a></p> <p>Does someone know how to do it?</p>
<python><pandas><dataframe><matplotlib><combinedchart>
2025-08-31 15:57:28
1
1,762
Giampaolo Levorato
79,751,781
9,985,445
Why Does Sympy Say This Derivative Is Constant When It's Obviously Not?
<p>I am trying to make a Jacobian matrix using sympy.</p> <p>I want to make a matrix of 2nd order derivatives, which I will call a 2nd order Jacobian.</p> <p>I did this:</p> <pre><code>gk_letters = [&quot;Δ&quot;, &quot;Γ&quot;, &quot;v&quot;] def greek_symbols(): return symbols(&quot; &quot;.join(gk_letters)) def partials_1st_order(): rows = [] gks = greek_symbols() for idx1 in range(len(gk_letters)): gk1 = gks[idx1] col = [] for idx2 in range(len(gk_letters)): gk2 = gks[idx2] if gk1 == gk2: col.append(1) else: col.append(Derivative(gk1, gk2)) rows.append(col) return gks, Matrix(rows) def partials_2nd_order(): gks, jacobian = partials_1st_order() y = zeros(len(gks)**2, len(gks)) input_rows, input_cols = jacobian.shape output_chunk = 0 for den_idx in range(len(gks)): denominator = gks[den_idx] for input_row_num in range(input_rows): for input_col_num in range(input_cols): numerator = jacobian[input_row_num, input_col_num] new_diff = Derivative(numerator, denominator) y[output_chunk*len(gks) + input_row_num, input_col_num] = new_diff output_chunk += 1 return y </code></pre> <p>When I call partials_2nd_order() at the command prompt, I get this:</p> <pre><code>⎡ 2 2 ⎤ ⎢ d d d ⎥ ⎢ ──(1) ─────(Δ) ─────(Δ)⎥ ⎢ dΔ dΔ dΓ dΔ dv ⎥ ⎢ ⎥ ⎢ 2 2 ⎥ ⎢ d d d ⎥ ⎢ ───(Γ) ──(1) ─────(Γ)⎥ ⎢ 2 dΔ dΔ dv ⎥ ⎢ dΔ ⎥ ⎢ ⎥ ... ⎢ ⎥ ⎢ 2 2 ⎥ ⎢ d d d ⎥ ⎢─────(v) ─────(v) ──(1) ⎥ ⎣dv dΔ dv dΓ dv ⎦ </code></pre> <p>I want to collapse to 0 everything that will always be 0, such as derivatives of constants.</p> <p>However, when I try this:</p> <pre><code>[jj.is_constant() for jj in x] </code></pre> <p>I get this:</p> <pre><code>[True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True] </code></pre> <p>In other words, sympy thinks everything in this matrix is constant.</p> <p>What am I missing?</p>
<python><sympy><symbolic-math>
2025-08-31 13:38:26
1
869
James Strieter
79,751,467
9,251,158
Why does flake8 flag E226 only when whitespace is missing on both sides of operator?
<p>Here is a minimal reproducible example:</p> <pre><code>j = 0*0 # Flagged for E226. j = 0* 0 # Not flagged for E226. j = 0 *0 # Not flagged for E226. </code></pre> <p>As the comments suggest, <code>flake8</code> flags only the first line, not the others, for E226:</p> <pre><code>$ flake8 tmp.py --ignore=E225,E231,E117,F401,F811,F841,E303 tmp.py:1:6: E226 missing whitespace around arithmetic operator </code></pre> <p>But <a href="https://www.flake8rules.com/rules/E226.html" rel="nofollow noreferrer">Flake8 rules</a> require a space before and after:</p> <blockquote> <p>There should be one space before and after an arithmetic operator (+, -, /, and *).</p> </blockquote> <p>I installed and updated flake8 with pip:</p> <pre><code>$ python3 -m pip install flake8 --upgrade </code></pre> <p>Why does <code>flake8</code> flag only the first line?</p>
<python><pep8><flake8>
2025-08-30 22:25:55
2
4,642
ginjaemocoes
79,751,457
905,814
Diagnosing duplicate inserts after merge/upsert with deltalake (Python)
<p>I’d really appreciate your help with a duplication issue I’m hitting when using deltalake merges (Python).</p> <p><strong>Context</strong></p> <ol> <li>Backend: Azure Blob Storage</li> <li>Libraries: deltalake 1.1.4 (Python), Polars 1.31.0 (source is a LazyFrame/DataFrame)</li> <li>Goal: Idempotent upsert (re-running the same input should not create new rows)</li> </ol> <p><strong>Delta Table Schema</strong></p> <pre><code>Schema( [Field(area_type_code, PrimitiveType(&quot;string&quot;), nullable=True), Field(map_code, PrimitiveType(&quot;string&quot;), nullable=True), Field(fuel, PrimitiveType(&quot;string&quot;), nullable=True), Field(datetime, PrimitiveType(&quot;timestamp_ntz&quot;), nullable=True), Field(period_name, PrimitiveType(&quot;string&quot;), nullable=True), Field(period_granularity, PrimitiveType(&quot;string&quot;), nullable=True), Field(power, PrimitiveType(&quot;double&quot;), nullable=True), Field(energy, PrimitiveType(&quot;double&quot;), nullable=True)] ) </code></pre> <p><strong>Upsert approach (per chunk)</strong></p> <ul> <li>Split source into chunks (I tried 2M and 10M rows).</li> <li>For each chunk, reload the Delta table (so inserts/updates from prior chunks are visible).</li> <li>Chain merge → when_matched_update → when_not_matched_insert:</li> </ul> <pre><code>merge_results = delta_table.merge( source=df_chunk, predicate=merge_predicate, source_alias='source', target_alias='target', writer_properties=writer_properties, streamed_exec=True, ).when_matched_update( predicate=match_predicate, updates=update_mapping ).when_not_matched_insert( updates=insert_mapping ).execute() </code></pre> <p>My predicates and mappings look like:</p> <p>Merge predicate: <code>target.area_type_code = source.area_type_code AND target.map_code = source.map_code AND target.fuel = source.fuel AND target.datetime = source.datetime AND target.period_granularity = source.period_granularity</code> For the merge predicate I also tried contraining to partitions found in the chunk, e.g., <code>AND IN target.period_granularity IN ('hourly', 'daily') AND target.area_type_code IN ('BZN')</code>. The values come from the chunk's distinct.</p> <p>Match predicate: <code>target.power != source.power OR target.energy != source.energy</code></p> <p>Update mapping: <code>{'power': 'source.power', 'energy': 'source.energy'}</code></p> <p>Insert mapping: <code>{'period_name': 'source.period_name', 'period_granularity': 'source.period_granularity', 'area_type_code': 'source.area_type_code', 'energy': 'source.energy', 'power': 'source.power', 'map_code': 'source.map_code', 'datetime': 'source.datetime', 'fuel': 'source.fuel'}</code></p> <p>My undestanding is that the merge predicate determines whether a record in the source exists or not in the target; the match predicate is what decides whether an already existing record needs to be updated or not and the mappings are basically telling which values from the source need to end up in which columns from the target.</p> <p><strong>Problem</strong></p> <p>I am running the upsert multiple times using the same source data. The first time, the delta table is created and the total number of rows comes to 10,240,472. This is matching the number of rows in the input dataframe. When I run it again - same source data, no changes - I see some inserts as per the dictionary retured by the (TableMerger) execute method. This also matches the number of rows in the delta table after I load it into a dataframe and get the number of rows. I am making sure I do not have any NULL or NAN values in all of the columns used in the merge predicate (i.e. pk columns if you wish).</p> <pre><code>'num_source_rows': 240472, 'num_target_rows_inserted': 29782, 'num_target_rows_updated': 4429, 'num_target_rows_deleted': 0, 'num_target_rows_copied': 471766, 'num_output_rows': 505977, 'num_target_files_scanned': 21, 'num_target_files_skipped_during_scan': 0, 'num_target_files_added': 20, 'num_target_files_removed': 18, </code></pre> <p>I load the delta table into a polars or pandas dataframe and I can see the duplicates. I even went as far as to query and load the rows for a duplicated key and compare the values for each column and each row and no differences are detected.</p> <p>I also tried excluding the datetime column from the merge predicate and use the period_name column - sort of a string representation of the datetime column) but data still keeps being inserted.</p> <p><strong>My questions are:</strong></p> <p>Does anything in my merge/match logic look off for an idempotent upsert?</p> <p>Are there known edge cases with floating-point comparisons (double power/energy) that could cause inequality or duplicate inserts across batches?</p> <p>Is there a recommended way to ensure deterministic matching on timestamp_ntz (e.g., normalization/precision) that I might be missing?</p> <p>Any best practices to avoid duplicates when reloading the table each chunk (e.g., transaction strategy, partition predicates, or writer properties)?</p> <p>Would a preliminary delete-then-merge be advisable here, or is there a more efficient safeguard?</p> <p>Thank you in advance for any pointers or checks I can run. Happy to provide more details.</p>
<python><merge><python-polars><delta-lake><delta-rs>
2025-08-30 22:18:27
1
456
Octavio
79,751,371
2,856,552
Sorting a column of max values from a multicolumn pandas data frame
<p>I have a multi-column pandas data frame of years and corresponding cumulative rainfall values from 1 to 183 (October to March). That means in each column the last value is the maximum column value, being the cumulative total from 1. I then create a new data frame which contains the years (col1) and the max values in col2. I would like to sort the max values in descending order. What I have tried does not work and I do not find an example which is similar to my problem. First, I reproduce the original data frame. Since the years and the days are too many, I limit this to 10 years plus the mean, and the days from 1st-31st January.</p> <ol> <li><p>Original data frame</p> <p>Avge 1945 1946 1947 1948 1949 1950 1951 1952 1953 1954 313.9 246.1 427.0 303.3 282.5 400.4 177.5 519.0 205.3 284.2 373.8 321.7 246.1 427.0 304.3 282.5 405.7 178.0 539.3 205.3 285.7 373.8 329.8 246.1 462.1 310.6 282.5 408.7 178.8 588.6 207.8 297.6 373.8 338.2 275.8 471.5 311.6 282.5 410.7 184.9 597.5 207.8 297.6 373.8 346.3 279.1 495.6 329.4 282.5 424.2 206.0 602.3 276.4 299.9 388.8 352.2 279.1 542.6 353.5 283.5 435.6 211.3 602.3 286.8 303.7 391.1 359.0 305.5 542.6 368.2 283.8 435.6 211.6 609.9 295.4 323.8 398.7 362.6 305.5 542.6 372.5 283.8 445.3 211.6 609.9 314.2 328.4 398.7 367.8 305.5 552.8 395.1 283.8 445.8 223.5 620.1 354.1 328.7 398.7 373.6 314.9 554.3 395.1 283.8 448.8 223.8 659.5 354.1 328.7 428.7 381.4 314.9 554.3 430.4 283.8 464.0 230.4 660.5 361.5 329.2 436.1 391.4 314.9 554.3 441.1 283.8 514.3 244.9 660.5 392.7 329.2 574.8 399.1 314.9 554.6 445.9 283.8 516.1 246.4 660.5 401.1 330.0 587.5 407.5 324.6 556.1 447.2 283.8 530.1 254.8 663.0 431.6 346.8 597.4 415.5 325.9 556.1 449.2 283.8 532.9 259.9 663.0 458.0 359.5 610.9 424.6 330.0 556.1 452.5 283.8 533.9 260.9 669.3 468.7 361.3 611.4 433.6 335.6 556.1 452.5 283.8 540.5 263.7 669.3 487.0 363.6 665.2 444.5 336.6 564.0 452.5 283.8 559.5 263.7 689.6 507.3 372.2 665.2 452.6 336.6 584.3 453.5 283.8 571.4 270.8 689.6 509.8 377.0 689.3 463.2 352.6 584.3 453.5 317.3 571.4 270.8 742.7 509.8 377.0 697.4 472.3 381.6 584.3 453.5 317.3 591.0 288.3 751.8 518.7 409.3 697.4 482.2 381.9 584.3 455.8 317.6 610.0 294.6 751.8 518.7 409.3 697.4 486.9 386.7 585.8 455.8 317.6 620.2 321.3 767.8 518.7 413.6 703.7 491.4 388.7 589.6 455.8 317.6 620.2 341.1 767.8 522.3 432.9 703.7 499.8 388.7 591.9 455.8 320.6 620.2 380.5 790.9 522.3 442.8 705.0 504.2 388.7 619.3 455.8 320.6 642.8 381.5 790.9 529.7 442.8 708.3 512.7 388.7 620.1 477.9 354.6 692.8 381.8 795.0 529.7 442.8 708.3 520.3 389.2 630.5 477.9 355.4 708.5 381.8 810.2 532.2 452.2 722.5 527.9 390.2 641.9 477.9 358.7 710.3 381.8 810.2 532.2 452.2 723.0 536.3 391.2 647.0 505.3 361.5 710.3 383.8 812.7 548.5 452.2 782.7 542.7 406.7 654.9 505.3 379.5 713.6 387.4 837.6 552.6 455.8 786.0</p> </li> </ol> <p>The code</p> <pre><code>import pandas as pd import os df = pd.read_csv('myfile.txt', sep=' ', skipinitialspace=True) new_df = df.max() print(new_df) </code></pre> <p>gives</p> <pre><code>Avge 542.7 1945 406.7 1946 654.9 1947 505.3 1948 379.5 1949 713.6 1950 387.4 1951 837.6 1952 552.6 1953 455.8 1954 786.0 </code></pre> <p>Sorting the second column with</p> <pre><code>new_df_sorted_cols = new_df.sort_index(axis=0) </code></pre> <p>gives the same order as above, except Avge comes to the bottom. Putiing axis=1 gives error</p> <pre><code>ValueError: No axis named 1 for object type Series </code></pre> <p>Help will be appreciated.</p>
<python><pandas><dataframe>
2025-08-30 19:16:09
1
1,594
Zilore Mumba
79,751,257
703,421
Python search in a sorted list if key is a part of a string using bisect
<p>I have a list of pathnames (dirname+filename), sorted by filenames.</p> <p>How to use bisect if I want to find a filename in this list ?</p> <p>Example :</p> <pre><code>import os, bisect lst=[] for (dp, dn, fn) in os.walk(rep): for name in fn: lst.append(os.path.join(dp, name) # sort list by filenames lst.sort(key=lambda p: os.path.basename(p)) f = &quot;C001567.jpg&quot; i = bisect.bisect_left(lst, f) # ??? if i != len(lst) and lst[i] == f: print(f&quot;found : {lst[i]}&quot;) </code></pre>
<python><list><sorting><bisect>
2025-08-30 15:54:44
1
2,279
Eric H.
79,751,217
9,063,378
Compute Eigenvalues for matrix when each element is a disjoint array
<p>I have some 2x2 covariance matrix represented as 3 disjoint arrays, meaning 3 Nx1 arrays. The N is this case represents time so the covariance of some measurements over time. I would like to be able to compute the eigenvalues at each time step without copying the individual arrays in a square matrix.</p> <p>I could have a 3x3 or 6x6 matrix (represented as individual columns for the upper triangle) with millions of time steps. Copying can be slow and prohibitively slow.</p> <p>Is there any way to do this in numpy natively? I tried searching for how to make a view from multiple disjoint arrays but it doesn't appear that anything exists.</p> <p>A short snippet of example data with what I am trying to do.</p> <pre class="lang-py prettyprint-override"><code>import numpy as np NR = 10_000_000 A = np.random.random([2,2]) cov = np.dot(A, A.T) cov_n = np.repeat(cov[np.newaxis, :,:], NR, 0) # what I really have in memory are these, # can I compute the eigenvalues with numpy without copy? cov_00 = np.repeat(np.array([cov[0,0]]), NR, 0) cov_11 = np.repeat(np.array([cov[1,1]]), NR, 0) cov_01 = np.repeat(np.array([cov[0,1]]), NR, 0) </code></pre> <p>I am open to numba, the numpy docs for this routine show</p> <blockquote> <p>The eigenvalues are computed using LAPACK routines <code>_syevd</code>, <code>_heevd</code>.</p> </blockquote> <p>Would I be able to call those directly in the JIT-compiled routine? Would it save me the copy?</p>
<python><numpy><eigenvalue>
2025-08-30 14:36:01
2
514
Melendowski
79,751,060
3,938,402
How to make a ssh connection from another ssh connected machine
<p>I have 3 linux machines <code>(hostA, hostB, hostC)</code>, from <code>hostA</code> I will remote login into <code>hostB</code> via ssh and transfer some file/directory from <code>hostA</code> to <code>hostB</code>. Once transfer is done from <code>hostA</code> to <code>hostB</code>, I would like to login and transfer the same files from <code>hostB</code> to <code>hostC</code>. <code>hostA</code> can only communicate with <code>hostB</code> and only <code>hostB</code> can communicate with <code>hostC</code>. I am trying to achieve this using the below python code, but I'm not getting how to make <code>SSH</code> login from <code>hostB</code> onto <code>hostC</code> and also transfer the file/directory transferred from (<code>hostA</code> to <code>hostB</code>) to <code>hostC</code>.</p> <p>main.py</p> <pre><code>import os import paramiko def main(): try: ssh_client_b = connect_to_host(&quot;100.100.100.100&quot;, &quot;hostB&quot;, &quot;hostB123&quot;) execute_command_over_ssh(ssh_client_b, &quot;uname&quot;) execute_command_over_ssh(ssh_client, &quot;ls /home/hostB/&quot;) transfer_file_or_directory(ssh_client_b, &quot;/home/hostA/repos/repo1/&quot;, &quot;/home/hostB/repos/&quot;) # hostC is network reachable only from hostB # below is where I'm stuck where I need to login to remote host hostC via SSH from hostB # and transfer the same file/directory transferred previously to hostB from hostA # Also, I would like to know how to issue command to execute in hostC and get back the result of it # ssh_client_c = connect_to_host(&quot;200.200.200.200&quot;, &quot;hostC&quot;, &quot;hostC123&quot;) # execute_command_over_ssh(ssh_client_c, # &quot;uname&quot;) # execute_command_over_ssh(ssh_client_c, # &quot;ls /home/hostC/&quot;) # transfer_file_or_directory(ssh_client, # &quot;/home/hostB/repos/repo1/&quot;, # &quot;/home/hostC/repos/&quot;) finally: print(&quot;closing ssh client connection&quot;) ssh_client_b.close() # ssh_client_c.close() def connect_to_host(host_name, user_name, password): try: # Create object of SSHClient and # connecting to SSH ssh_client = paramiko.SSHClient() # Adding new host key to the local # HostKeys object(in case of missing) # AutoAddPolicy for missing host key to be set before connection setup. ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy()) ssh_client.connect(host_name, port = 22, username = user_name, password = password) except: ssh_client.close() return ssh_client def execute_command_over_ssh(ssh_client, command): # below line command will actually # execute in your remote machine (stdin, stdout, stderr) = ssh_client.exec_command(command) # redirecting all the output in cmd_output # variable cmd_output = stdout.read() print('log printing: ', command, cmd_output) def transfer_file_or_directory(ssh_client, local_path, remote_path): sftp = ssh_client.open_sftp() if os.path.isfile(local_path): transfer_file(sftp, local_path, remote_path) print(&quot;file transfer done&quot;) elif os.path.isdir(local_path): dir_name = os.path.basename(os.path.dirname(local_path)) mkdir(sftp, '%s/%s' % (remote_path, dir_name), ignore_existing=True) transfer_directory(sftp, local_path, os.path.join(remote_path, dir_name)) print(&quot;directory transfer done&quot;) else: print(&quot;invalid localpath: {local_path}&quot;) sftp.close() def transfer_file(sftp, local_path, remote_path): sftp.put(local_path, remote_path) def transfer_directory(sftp, local_path, remote_path): for item in os.listdir(local_path): if os.path.isfile(os.path.join(local_path, item)): sftp.put(os.path.join(local_path, item), '%s/%s' % (remote_path, item)) else: mkdir(sftp, '%s/%s' % (remote_path, item), ignore_existing=True) transfer_directory(sftp, os.path.join(local_path, item), '%s/%s' % (remote_path, item)) def mkdir(sftp, path, mode=511, ignore_existing=False): ''' Augments mkdir by adding an option to not fail if the folder exists ''' try: sftp.mkdir(path, mode) except IOError: if ignore_existing: pass else: raise </code></pre>
<python><ssh><paramiko>
2025-08-30 09:55:38
0
4,026
Harry
79,751,053
633,961
Django OneToOneField, Pyright: Cannot access attribute (reportAttributeAccessIssue)
<p>I try to check my Django project with <a href="https://github.com/microsoft/pyright" rel="nofollow noreferrer">pyright</a>.</p> <p>There is this OneToOneField, which pyright does not detect, when I use it:</p> <pre class="lang-py prettyprint-override"><code>user.lala </code></pre> <p>Error message of pyright:</p> <blockquote> <p>error: Cannot access attribute &quot;lala&quot; for class &quot;User&quot; Attribute &quot;lala&quot; is unknown (reportAttributeAccessIssue)</p> </blockquote> <pre class="lang-py prettyprint-override"><code># file lala/models.py class LaLaUser(models.Model): user = models.OneToOneField( User, on_delete=models.CASCADE, primary_key=True, related_name=&quot;lala&quot; ) discount = models.PositiveSmallIntegerField( default=0, verbose_name=&quot;Discount (0 bis 100)&quot;, validators=[MinValueValidator(0), MaxValueValidator(100)], ) def __str__(self): return self.user.username </code></pre> <p>The django-stubs are installed, and all other Django magic works fine with pyright.</p> <p>Version: Django 5.2, pyright 1.1.404.</p> <p>How to make pyright understand that OneToOneField?</p> <p>(ignoring that via a comment is not an answer)</p>
<python><django><python-typing><pyright>
2025-08-30 09:46:23
0
27,605
guettli
79,751,006
1,230,724
Custom numpy element type with arithmetic functions without overloading `np.ndarray`
<p>Is it possible to implement a custom class (say, <code>class Foo</code>) with <code>__add__</code>, <code>__mul__</code>, etc. methods defined and add it into a normal <code>np.ndarray</code> (<code>np.asarray([Foo(1), Foo(2)])</code>) and have numpy delegate to methods <code>Foo.__add__</code>/<code>Foo.__mul__</code> when <code>+</code>/<code>*</code> operations are applied?</p> <p>Operations on custom types are usually achieved by providing <code>__array_ufunc__</code> in the <code>ndarray</code> (or the array container which hold the array elements), but in my case, I don't have control over how the array elements (<code>Foo</code>) are used. They can be used as scalar (<code>foo1 + foo2</code>), assigned to an array (<code>foo_array[:] = foo1</code> or <code>np.asarray([foo1, foo2, ...]</code>) or used for scalar/array operations (<code>foo_array = foo_array + foo1</code>, or <code>foo_array1 + foo_array2</code>). At the time of instantiating <code>Foo</code>, it's unknown how it will be used, but during an actual operation, <code>other</code> in <code>def __add__(self, other)</code> would be able to distinguish between those usage cases (scalar, array op).</p> <p>Now I understand that this can't be done usually as <code>a + b</code> would invoke the <code>a.__add__(b)</code> method and if <code>a</code> was an array of <code>dtype=object</code> with <code>Foo</code> instances, the <code>__add__</code> call wouldn't necessarily delegated to <code>Foo</code> My question really is if there is some mechanism that does potentially delegate to <code>Foo</code> and what needs to be done (magic function definition?) to do so?</p> <p>It may be that I'm on the wrong track which would also be an insight and a valid answer ;-).</p> <p>Btw, I'm not worried about vectorisation (yet), so for now (in light of no better solution) even an element-wise call of <code>__add__</code> would be sufficient.</p>
<python><arrays><numpy>
2025-08-30 07:50:23
1
8,252
orange
79,750,812
405,017
Creating git subtree repo that can import Python files as from "fake" namespace
<p>tl;dr: What are the minimal changes needed to make a standalone folder work in Python runtime, and get modules resolved in VS Code, as though it's part of a parent module namespace? Simple example at the end of the question.</p> <h2>Background/Motivation</h2> <p>I have a git repo I develop on with a structure like:</p> <pre><code>~/proj/ ├── xxx/ ├── src/ │ └── proj/ │ ├── yyy/ │ ├── foo/ │ │ ├── docs/ │ │ └── bar/ │ └── zzz/ └── tests/ </code></pre> <p>I have an intern who is not allowed to clone <code>proj</code> repo, because it has sensitive files. I want the intern to work on and run code in the <code>foo</code> directory.</p> <p>So, I broke <code>foo</code> out into a git subtree hosted in its own repo. That repo is cloned into a directory <code>~/foo-repo</code>.</p> <pre><code>~/foo-repo/ ├── docs/ └── bar/ </code></pre> <p>I added a custom <code>pyproject.toml</code> file to the root of this repo so that the intern can <code>uv sync</code> and get a <code>.venv</code> with all the necessary packages.</p> <p>At this point I cannot run <code>python bar/jim.py</code> (in the venv) because that file imports other files that have <code>import proj.foo.bar</code>, and this repo knows nothing about <code>proj</code>.</p> <h3>What I Tried</h3> <p>So, I added this to the <code>pyproject.toml</code> in the foo repo:</p> <pre><code>[build-system] requires = [&quot;setuptools&gt;=69&quot;, &quot;wheel&quot;] build-backend = &quot;setuptools.build_meta&quot; # Map the repo root to the import package `proj.foo` [tool.setuptools] include-package-data = true [tool.setuptools.package-dir] &quot;proj.foo&quot; = &quot;.&quot; [tool.setuptools.packages.find] namespaces = true include = [&quot;proj.foo*&quot;] where = [&quot;.&quot;] </code></pre> <p>After that (and fresh <code>uv sync</code>) I'm able to run the files…but pyright in VS Code is not able to resolve <code>import proj.foo</code> or sub modules.</p> <h2>Constraints</h2> <ul> <li>I don't want to move files within the <code>foo</code> project to have <code>src/proj/foo</code> directories, because that's not the right hierarchy within the <code>proj</code> repo.</li> <li>I don't want to have to create dummy stub files in this repo.</li> <li>I want the files in this repo to be built/installed editable, so that changes to the files during development take effect without needing to rebuild or reinstall.</li> <li>I cannot modify the imports in the files to use a different namespace, as that will break the code when used within <code>proj</code>.</li> </ul> <p>What are the minimal changes I can make to this repo so that the Python runtime and VS Code fully understand that files immediately under <code>~/foo-repo/</code> represent <code>proj.foo</code> namespace?</p> <p><em>Using Python 3.11 and <code>uv</code>.</em></p> <hr /> <h2>Concrete Example</h2> <p><strong><code>~/foo-repo/bar/__init__.py</code></strong>:</p> <pre class="lang-py prettyprint-override"><code>from proj.foo.bar.types import Message __all__ = [&quot;Message&quot;] </code></pre> <p><strong><code>~/foo-repo/bar/types.py</code></strong>:</p> <pre class="lang-py prettyprint-override"><code>class Message: pass </code></pre> <p><strong><code>~/foo-repo/bar/jim.py</code></strong>:</p> <pre class="lang-py prettyprint-override"><code>import proj.foo.bar as foobar print(foobar.Message) </code></pre> <p>Given those files, I want the intern to be able to clone only the foo repo and run:</p> <pre><code>cd ~/foo-repo uv sync # given ~foo-repo/pyproject.toml with the &quot;right&quot; setup source .venv/bin/activate python bar/jim.py </code></pre> <p>…and have it work. And further, I want the intern to be able to open the directory as a workspace in VS Code and have Pyright able to resolve <code>proj.foo.bar.types.Message</code>.</p> <p>Edit: And I cannot change <code>proj.foo.bar</code> to <code>bar</code> or similar, because that will break the code when the subtree is sync'd back into the <code>proj</code> repo.</p>
<python><git><git-subtree><pyright>
2025-08-29 22:09:38
1
304,256
Phrogz
79,750,735
27,596,369
How to use the data class itself as the type of its attribute
<p>I have a data class:</p> <pre><code> from dataclasses import dataclass from typing import Any @dataclass class Url: http: bool params: dict[str: Any] body: str # More stuff here </code></pre> <p>Here I have a <code>Url</code> data class with three attributes, <code>http</code> which is a boolean, <code>params</code> which is a dictionary with a string key and a value of type <code>Any</code>, and <code>body</code>, which is the URL string (i.e. google.com).</p> <p>I want to add a new attribute named <code>linked</code> which should be another <code>Url</code> dataclass. So, I tried adding this:</p> <pre><code>linked: Url </code></pre> <p>but this gave me an error:</p> <pre><code>NameError: name 'Url' is not defined </code></pre> <p>Is this even possible? If so, how can I achieve do this?</p>
<python><python-typing><python-dataclasses>
2025-08-29 20:03:26
0
1,512
Aadvik
79,750,725
3,138,436
Variable type-hinted to to be a dictionary that accept list as value is also taking sets as valid value
<p>I am experimenting with type-hinting in Python 3.12. I've declared a variable that is type-hinted to be a dictionary whose keys are string and values will be a list of string.</p> <pre><code>mydict : dict[str,list[str]] = { &quot;d&quot;:[&quot;k&quot;,&quot;t&quot;,&quot;k&quot;] } #output : mydict ----&gt; d:[k,t,k] </code></pre> <p>If I exchange the list value with a set, I get no error. A <code>set</code> is a valid value in place of a <code>list</code>.</p> <pre><code>mydict : dict[str,list[str]] = { &quot;d&quot;:{&quot;k&quot;,&quot;t&quot;,&quot;k&quot;} } output : mydict -----&gt; d : {k,t} </code></pre> <p>My question is why I am not getting an error? How to overcome this problem?</p>
<python><python-typing>
2025-08-29 19:54:33
1
9,194
AL-zami
79,750,684
396,373
How to detect pipe close when reading text by line in Python
<p>I have been having a lot of trouble reading text a line at a time from a pipe in Python and determining when the socket has been closed on the writing end. I have had the problem when communicating with a subprocess, but to for testing, I wrote some code that reads text lines from the pipe in a thread while code in the main thread writes to the pipe in small chunks.</p> <p>The issue is that, when reading, I cannot seem to detect when the writer has closed the pipe. I see that the final output (that does <em>not</em> end with a newline character) is received by <code>readline</code>, and the <code>io</code> object presumably detected an EOF to know to do that, but there seems to be no reasonable/clean way for my Python code to know that happened.</p> <p>Does anyone know the appropriate way to deal with this?</p> <p>In the code below, the read loop never exits, and <code>thread.join()</code> blocks until I break out of the program using <code>Ctrl+C</code>:</p> <pre class="lang-py prettyprint-override"><code>from datetime import datetime from itertools import batched import os from select import select from threading import Thread from time import sleep # Each superscript digit encodes to 3 bytes. text = '⁰¹²\n³\n⁴\n⁵⁶\n⁷⁸⁹⁰¹²\n³' text_bytes = bytes(text, 'utf8') pipe_r, pipe_w = os.pipe() wb_stream = open(pipe_w, 'wb', buffering=0) r_stream = open(pipe_r, 'r') got_chunks = [] def read_loop(): prev_timestamp = datetime.now() while True: sleep(1) cur_timestamp = datetime.now() print('A', (cur_timestamp - prev_timestamp).total_seconds()) prev_timestamp = cur_timestamp rr, wr, exc = select([r_stream], [r_stream], [r_stream], 0) if r_stream.closed: break if exc: break if not rr: continue chunk = r_stream.readline() print('B', (cur_timestamp - prev_timestamp).total_seconds()) prev_timestamp = cur_timestamp if chunk: print('got chunk', repr(chunk)) got_chunks.append(chunk) thread = Thread(target=read_loop) thread.start() for batch in batched(text_bytes, 4): bytes_chunk = bytes(batch) sleep(1.6) wb_stream.write(bytes_chunk) wb_stream.close() thread.join() print(repr(got_chunks)) </code></pre> <p>Output:</p> <pre><code>A 1.000314 A 1.000593 B 0.0 got chunk '⁰¹²\n' A 2.200882 A 1.000623 B 0.0 got chunk '³\n' A 1.001111 A 1.000423 B 0.0 got chunk '⁴\n' A 1.00083 B 0.0 got chunk '⁵⁶\n' A 2.398835 A 1.000746 B 0.0 got chunk '⁷⁸⁹⁰¹²\n' A 5.400978 A 1.000434 B 0.0 got chunk '³' A 1.000722 B 0.0 A 1.000428 B 0.0 A 1.000404 B 0.0 ^CTraceback (most recent call last): File &quot;/home/stevjorg/tmp2/foo.py&quot;, line 58, in &lt;module&gt; thread.join() File &quot;/home/stevjorg/.pyenv/versions/3.12.11/lib/python3.12/threading.py&quot;, line 1149, in join self._wait_for_tstate_lock() File &quot;/home/stevjorg/.pyenv/versions/3.12.11/lib/python3.12/threading.py&quot;, line 1169, in _wait_for_tstate_lock if lock.acquire(block, timeout): ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ KeyboardInterrupt </code></pre>
<python><io><pipe>
2025-08-29 18:51:54
1
12,777
Steve Jorgensen
79,750,682
2,807,964
Is it possible to install submodules of a python package dynamically?
<p>I have a very complex Python library that is used by several people/projects for different purposes.</p> <p>The structure is basically the same as many Python libraries, but I would like to give the ability of a partial install.</p> <pre><code>project_name/ |__ package | |__ submodule1 | | |__ __init__.py | |__ submodule2 | | |__ __init__.py | |__ submodule3 | | |__ __init__.py | |__ submodule4 | | |__ __init__.py | |__ __init__.py | |__ types.py |__ setup.cfg |__ setup.py </code></pre> <p>I would like to give people the ability to people to install everything, or what they really require, because they are not dependent.</p> <p>I tried several options like:</p> <ul> <li><p>Hacking <code>packages=find_packages(*)</code> by myself, but I would like something more official. It is not my intention to split into several Git repositories.</p> </li> <li><p>Customizing <code>pip install</code> by using the <code>cmdclass</code> with a custom <code>install</code> option, but, AFAIK, this is deprecated in <code>pip</code> now.</p> </li> <li><p>Intercepting somehow the <code>extras_require</code> to use it within <code>packages</code> and create something like <code>pip install package[submodule1]</code>, but no luck either.</p> </li> </ul> <p>Any professional suggestion is welcome :)</p>
<python><pip><python-import><setuptools><python-packaging>
2025-08-29 18:50:24
1
880
jcfaracco
79,750,548
629,186
Using PyTest and Playwright, run one method before loading all the test files
<p>This question seems to be asked a lot, but I'm not finding any answers that work, or at least that work for what I'm looking for.</p> <p>The premise is simple: I have a website I need to automate testing on (using Playwright with PyTest). I can use <code>browser.new_context(storage_state=auth_file)</code>, which puts me in a logged in state, and the testing runs as expected.</p> <p>But for security reasons, that session times out after a few days, making the <code>auth_file</code> invalid. Or when someone pulls in the repo, there is no <code>auth_file</code> and when they try to tun <code>pytest</code> it's 100% errors because they're stuck on the login screen.</p> <p>I added a fixture that all the other fixtures depend on that looks to see if the <code>auth_file</code> exists, or if it's modified date is too far back, and if so, run the <code>authentication_login()</code> function (creates a browser window, not-headless, so that I can go through the login process). But all that did was make every test open to the same login page.</p> <p>So how do I create something that runs even before pytest looks for tests to run?</p> <p>Here is what I have so far:</p> <pre class="lang-py prettyprint-override"><code>auth_file: Path = Path(&quot;.auth/storage_state.json&quot;) @pytest.fixture(scope=&quot;session&quot;, autouse=True) def authentication(): TIME_OUT: int = 60 * 60 if not auth_file.is_file() or (time.time() - auth_file.stat().st_mtime) &gt; TIME_OUT: authentication_login() yield @pytest.fixture def playwright(authentication): with sync_playwright() as p: yield p @pytest.fixture def browser_context(playwright): browser: Browser = playwright.chromium.launch(**options) context: BrowserContext = browser.new_context( storage_state=auth_file ) yield context context.close() browser.close() @pytest.fixture def page(browser_context, request): page: Page = browser_context.new_page() page.goto(&quot;http://localhost/&quot;) wait_for_loaders_to_complete(page) yield page page.close() def authentication_login(): with sync_playwright() as playwright: browser = playwright.chromium.launch(headless=False, slow_mo=500) context = browser.new_context() page = context.new_page() page.goto(&quot;http://localhost/&quot;) page.pause() login_button: Locator = page.get_by_role(&quot;button&quot;, name=&quot;Login&quot;) if login_button.is_visible(): login_button.click() ... # Save authentication state context.storage_state(path=&quot;.auth/storage_state.json&quot;) print(&quot;New authentication saved.&quot;) context.close() browser.close() </code></pre> <p>Part of the problem is that the login routine includes using a OTP, so it really needs to be a human going through the login process <strong>ONCE</strong>, and after that is done, then all the other tests can launch their own windows (headless or not). Eventually, it will be a dummy account that doesn't need the authenticator, but I would still have the issue of 10+ sessions all trying to login. After the first succeeds, all new sessions will have the valid <code>auth_file</code>.</p>
<python><pytest><playwright><playwright-python>
2025-08-29 16:29:18
1
1,817
MivaScott
79,750,369
1,029,469
Pycharm recognizes package installed with -e (editable), but does not recognize/show the package content
<p>I've installed my own custom package with <code>-e 'git+https://.....@main#egg=package'</code>. I build my dependencies with pip-tools (pip-compile), and install them with pip-sync.</p> <p>The package shows as &quot;installed&quot; when I look at the installed packages of the interpreter in Pycharm. But it's contents are not detected, all imports using that package are red, no autocompletion, etc. But, everything works fine, it's only Pycharm, not finding the contents of the installed package, and thus only a &quot;display&quot; or IDE problem. In Pycharm, I can run tests and things, everything works but not the display under &quot;External Libraries&quot;, which is important for me, to quickly access this package's content. This worked before, but not anymore, since a few versions... I ignored it a few months, but it's really annoying...so here I am.</p> <p>The reason why it probably happens: The contents are outside of the site-packages folder - only the .pth files or .dist-somthing files are there, that's why it's recognized as &quot;installed&quot;. The package contents are in the <code>src</code> folder of my virtualenv...somehow, Pycharm does not look there (anymore)?</p> <p>I've updated the installed package to use a <code>pyproject.toml</code>, as I thought this could be the reason, but it was not. I also have latest versions of pip, setuptools and wheel installed.</p> <p>I could install without -e, but then I would need to <code>pip cache purge</code> before installing an updated version, or I would need to install with a commit hash instead of just <code>main</code> - if possible, I would not go this way.</p> <p>I could add the path ~/.virtualenv/myenv/src/package-name/ to sources in Pycharm, which is too much manual work...so not this way either if possible.</p> <p>I tend to open a support case, as it's really an edge case problem.<br /> (yes it is: <a href="https://youtrack.jetbrains.com/issue/PY-57566/Support-PEP-660-editable-installs" rel="nofollow noreferrer">https://youtrack.jetbrains.com/issue/PY-57566/Support-PEP-660-editable-installs</a>)</p>
<python><pip><pycharm>
2025-08-29 13:52:51
1
1,957
benzkji
79,750,217
2,377,238
How to make Numpy.where() return only the first match?
<p>I'm trying to optimize the performance of a script, which is full of Numpy's <a href="https://numpy.org/doc/stable/reference/generated/numpy.where.html" rel="nofollow noreferrer">where()</a> after which only the first returned element is actually used. Example:</p> <pre><code>F = np.where(Y&gt;p/100)[0] </code></pre> <p>For the huge data sets that we are processing, it doesn't look like a good solution (both in terms of speed and memory consumption) to create a large array and then discard all but the first element. Is there any way how to skip the overhead, maybe by tweaking the condition?</p>
<python><numpy>
2025-08-29 11:25:09
3
319
chris_cm
79,750,038
5,121,448
SyntaxError in flask_humanify
<p>I get this Syntax error after installing flast_humanify</p> <pre><code>Traceback (most recent call last): File &quot;once.py&quot;, line 29, in &lt;module&gt; from flask_humanify import Humanify File &quot;/usr/local/lib/python3.6/site-packages/flask_humanify/__init__.py&quot;, line 9, in &lt;module&gt; from . import utils File &quot;/usr/local/lib/python3.6/site-packages/flask_humanify/utils.py&quot;, line 58 if not (value := request.environ.get(header)): ^ SyntaxError: invalid syntax </code></pre> <p>Here is the output of the installation itself</p> <pre><code>pip install flask-humanify --upgrade Collecting flask-humanify Using cached flask_humanify-0.2.3-py3-none-any.whl (82.4 MB) Requirement already satisfied: pydub in /usr/local/lib/python3.6/site-packages (from flask-humanify) (0.25.1) Requirement already satisfied: netaddr in /usr/local/lib/python3.6/site-packages (from flask-humanify) (0.10.1) Requirement already satisfied: Flask in /usr/local/lib/python3.6/site-packages (from flask-humanify) (1.1.1) Requirement already satisfied: cryptography in /usr/local/lib/python3.6/site-packages (from flask-humanify) (40.0.2) Requirement already satisfied: opencv-python-headless in /usr/local/lib/python3.6/site-packages (from flask-humanify) (4.5.5.64) Requirement already satisfied: numpy in /usr/local/lib/python3.6/site-packages (from flask-humanify) (1.17.4) Requirement already satisfied: scipy in /usr/local/lib/python3.6/site-packages (from flask-humanify) (1.3.3) Requirement already satisfied: cffi&gt;=1.12 in /usr/local/lib/python3.6/site-packages (from cryptography-&gt;flask-humanify) (1.15.1) Requirement already satisfied: click&gt;=5.1 in /usr/local/lib/python3.6/site-packages (from Flask-&gt;flask-humanify) (7.0) Requirement already satisfied: itsdangerous&gt;=0.24 in /usr/local/lib/python3.6/site-packages (from Flask-&gt;flask-humanify) (1.1.0) Requirement already satisfied: Jinja2&gt;=2.10.1 in /usr/local/lib/python3.6/site-packages (from Flask-&gt;flask-humanify) (2.10.3) Requirement already satisfied: Werkzeug&gt;=0.15 in /usr/local/lib/python3.6/site-packages (from Flask-&gt;flask-humanify) (0.16.0) Requirement already satisfied: importlib-resources in /usr/local/lib/python3.6/site-packages (from netaddr-&gt;flask-humanify) (5.4.0) Requirement already satisfied: pycparser in /usr/local/lib/python3.6/site-packages (from cffi&gt;=1.12-&gt;cryptography-&gt;flask-humanify) (2.21) Requirement already satisfied: MarkupSafe&gt;=0.23 in /usr/local/lib/python3.6/site-packages (from Jinja2&gt;=2.10.1-&gt;Flask-&gt;flask-humanify) (1.1.1) Requirement already satisfied: zipp&gt;=3.1.0 in /usr/local/lib/python3.6/site-packages (from importlib-resources-&gt;netaddr-&gt;flask-humanify) (3.6.0) Installing collected packages: flask-humanify Successfully installed flask-humanify-0.2.3 WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv </code></pre>
<python><flask><flask-humanify>
2025-08-29 08:54:54
2
4,478
carl
79,750,033
12,411,536
polars `write_parquet` to S3 with partitions makes local copy
<p>I am observing an unintuitive behavior when writing to S3 from polars with specifying partitions. Namely,</p> <ul> <li>with <code>use_pyarrow=False</code> a copy of the files is created locally before uploading, and not cleanup up afterwards automatically</li> <li>with <code>use_pyarrow=True</code> I am getting <code>Uploading to &lt;file&gt; FAILED with error When initiating multiple part upload for key &lt;directory&gt; in bucket &lt;bucket&gt;: AWS Error ACCESS_DENIED during CreateMultipartUpload operation: Anonymous users cannot initiate multipart uploads. Please authenticate.</code></li> </ul> <p>Here is a repro:</p> <pre class="lang-py prettyprint-override"><code>session = ... # boto3 session credentials = session.get_credentials() storage_options = { &quot;aws_access_key_id&quot;: credentials.access_key, &quot;aws_secret_access_key&quot;: credentials.secret_key, &quot;aws_session_token&quot;: credentials.token, &quot;aws_region&quot;: &quot;eu-west-1&quot;, } destination = f&quot;s3://{bucket}/{directory}&quot; data_frame.write_parquet( destination, partition_by=partitions, storage_options=storage_options, use_pyarrow = ... # True | False, different behavior ) </code></pre> <p>I wonder if this is expected and/or I am missing something obvious</p>
<python><dataframe><python-polars>
2025-08-29 08:50:09
1
6,614
FBruzzesi
79,750,003
5,121,448
python module not found after installation
<p>I installed a python package using</p> <pre><code>/usr/local/bin/python3.6 -m pip install flask_Humanify </code></pre> <p>and it gave me the output below, indicating that everything worked. Now, when I run my code</p> <pre><code>/usr/local/bin/python3.6 test.py </code></pre> <p>I get</p> <pre><code>Traceback (most recent call last): File &quot;once.py&quot;, line 29, in &lt;module&gt; from flask_Humanify import Humanify ModuleNotFoundError: No module named 'flask_Humanify' </code></pre> <p>but I am sure it uses the same python?</p> <pre><code>/usr/local/bin/python3.6 --version Python 3.6.3 /usr/local/bin/python3.6 -m pip --version pip 21.3.1 from /usr/local/lib/python3.6/site-packages/pip (python 3.6) </code></pre> <p>I can also see it in</p> <pre><code>pip list -v ... flask-Humanify 0.2.3 /usr/local/lib/python3.6/site-packages pip ... </code></pre> <p>Output after /usr/local/bin/python3.6 -m pip install flask_Humanify</p> <pre><code>Requirement already satisfied: flask_Humanify in /usr/local/lib/python3.6/site-packages (0.2.3) Requirement already satisfied: cryptography in /usr/local/lib/python3.6/site-packages (from flask_Humanify) (40.0.2) Requirement already satisfied: opencv-python-headless in /usr/local/lib/python3.6/site-packages (from flask_Humanify) (4.5.5.64) Requirement already satisfied: numpy in /usr/local/lib/python3.6/site-packages (from flask_Humanify) (1.17.4) Requirement already satisfied: pydub in /usr/local/lib/python3.6/site-packages (from flask_Humanify) (0.25.1) Requirement already satisfied: Flask in /usr/local/lib/python3.6/site-packages (from flask_Humanify) (1.1.1) Requirement already satisfied: netaddr in /usr/local/lib/python3.6/site-packages (from flask_Humanify) (0.10.1) Requirement already satisfied: scipy in /usr/local/lib/python3.6/site-packages (from flask_Humanify) (1.3.3) Requirement already satisfied: cffi&gt;=1.12 in /usr/local/lib/python3.6/site-packages (from cryptography-&gt;flask_Humanify) (1.15.1) Requirement already satisfied: Jinja2&gt;=2.10.1 in /usr/local/lib/python3.6/site-packages (from Flask-&gt;flask_Humanify) (2.10.3) Requirement already satisfied: Werkzeug&gt;=0.15 in /usr/local/lib/python3.6/site-packages (from Flask-&gt;flask_Humanify) (0.16.0) Requirement already satisfied: click&gt;=5.1 in /usr/local/lib/python3.6/site-packages (from Flask-&gt;flask_Humanify) (7.0) Requirement already satisfied: itsdangerous&gt;=0.24 in /usr/local/lib/python3.6/site-packages (from Flask-&gt;flask_Humanify) (1.1.0) Requirement already satisfied: importlib-resources in /usr/local/lib/python3.6/site-packages (from netaddr-&gt;flask_Humanify) (5.4.0) Requirement already satisfied: pycparser in /usr/local/lib/python3.6/site-packages (from cffi&gt;=1.12-&gt;cryptography-&gt;flask_Humanify) (2.21) Requirement already satisfied: MarkupSafe&gt;=0.23 in /usr/local/lib/python3.6/site-packages (from Jinja2&gt;=2.10.1-&gt;Flask-&gt;flask_Humanify) (1.1.1) Requirement already satisfied: zipp&gt;=3.1.0 in /usr/local/lib/python3.6/site-packages (from importlib-resources-&gt;netaddr-&gt;flask_Humanify) (3.6.0) WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv </code></pre>
<python>
2025-08-29 08:24:30
1
4,478
carl
79,749,945
492,034
Python - ThemedTKinterFrame - grid_columnconfigure weight does not work
<p>I'm using ThemedTKinterFrame to beautify my python app but I'm facing a problem with grid_columnconfigure that does not work as supposed. Infact, does not work at all.</p> <p>Code:</p> <pre><code>import tkinter as tk import TKinterModernThemes as TKMT class App(TKMT.ThemedTKinterFrame): def __init__(self, theme, mode, usecommandlineargs=True, usethemeconfigfile=True): super().__init__(&quot;Switch&quot;, theme, mode, usecommandlineargs=usecommandlineargs, useconfigfile=usethemeconfigfile) self.switchframe1 = self.addLabelFrame(&quot;Switch Frame 1&quot;, sticky=tk.NSEW, row=0, col=0) self.switchvar = tk.BooleanVar() self.switchframe1.SlideSwitch(&quot;Switch1&quot;, self.switchvar) self.switchframe2 = self.addLabelFrame(&quot;Switch Frame 2&quot;, sticky=tk.NSEW, row=0, col=1) self.switchvar = tk.BooleanVar() self.switchframe2.SlideSwitch(&quot;Switch2&quot;, self.switchvar) self.root.grid_columnconfigure(0, weight=0) self.root.grid_columnconfigure(1, weight=1) self.run() if __name__ == &quot;__main__&quot;: App(&quot;park&quot;, &quot;dark&quot;) </code></pre> <p>Result:</p> <p><a href="https://i.sstatic.net/E4asVmmZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/E4asVmmZ.png" alt="enter image description here" /></a></p> <p><a href="https://i.sstatic.net/M6TTUJhp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/M6TTUJhp.png" alt="enter image description here" /></a></p>
<python><tkinter>
2025-08-29 07:26:42
1
1,776
marco
79,749,841
5,400,385
How to get the MCPO Proxy command working correctly
<p>Just trying to setup a basic MCP Server and put it behind MCPO proxy.</p> <p>This MCP server copied from: <a href="https://github.com/modelcontextprotocol/python-sdk/tree/main" rel="nofollow noreferrer">https://github.com/modelcontextprotocol/python-sdk/tree/main</a></p> <pre><code>&quot;&quot;&quot; FastMCP quickstart example. cd to the `examples/snippets/clients` directory and run: uv run server fastmcp_quickstart stdio &quot;&quot;&quot; from mcp.server.fastmcp import FastMCP # Create an MCP server mcp = FastMCP(&quot;Demo&quot;) # Add an addition tool @mcp.tool() def add(a: int, b: int) -&gt; int: &quot;&quot;&quot;Add two numbers&quot;&quot;&quot; return a + b # Add a dynamic greeting resource @mcp.resource(&quot;greeting://{name}&quot;) def get_greeting(name: str) -&gt; str: &quot;&quot;&quot;Get a personalized greeting&quot;&quot;&quot; return f&quot;Hello, {name}!&quot; # Add a prompt @mcp.prompt() def greet_user(name: str, style: str = &quot;friendly&quot;) -&gt; str: &quot;&quot;&quot;Generate a greeting prompt&quot;&quot;&quot; styles = { &quot;friendly&quot;: &quot;Please write a warm, friendly greeting&quot;, &quot;formal&quot;: &quot;Please write a formal, professional greeting&quot;, &quot;casual&quot;: &quot;Please write a casual, relaxed greeting&quot;, } return f&quot;{styles.get(style, styles['friendly'])} for someone named {name}.&quot; if __name__ == &quot;__main__&quot;: mcp.run(transport=&quot;streamable-http&quot;) </code></pre> <p>I can run the server with: <code>uv run server.py</code></p> <p>And in a separate process, I can put it behind an MCPO proxy with:</p> <p><code>uvx mcpo --port 8002 --api-key &quot;top-secret&quot; --server-type &quot;streamable-http&quot; -- http://127.0.0.1:8000/mcp</code></p> <p>I can then see the <code>add</code> tool under <code>localhost:8002/docs</code></p> <p>What I can't get to work is running it in a single command, similarly to the examples in <a href="https://github.com/open-webui/mcpo" rel="nofollow noreferrer">https://github.com/open-webui/mcpo</a></p> <p>Neither of these commands work:</p> <p><code>uvx mcpo --port 8002 --api-key &quot;top-secret&quot; --server-type &quot;streamable-http&quot; -- http://127.0.0.1:8000/mcp</code></p> <p><code>uvx mcpo --port 8002 --api-key &quot;top-secret&quot; -- uv run server.py</code></p> <p>What am I missing to get the MCP Server as Http-streamble with the MCPO Proxy?</p>
<python><model-context-protocol><mcpo>
2025-08-29 05:20:51
0
2,112
PGHE