QuestionId
int64
74.8M
79.8M
UserId
int64
56
29.4M
QuestionTitle
stringlengths
15
150
QuestionBody
stringlengths
40
40.3k
Tags
stringlengths
8
101
CreationDate
stringdate
2022-12-10 09:42:47
2025-11-01 19:08:18
AnswerCount
int64
0
44
UserExpertiseLevel
int64
301
888k
UserDisplayName
stringlengths
3
30
79,725,756
6,916,032
How to avoid double checks with asyncio.Lock
<p>I have a piece of code, that checks if the redis db has data updates and loads them to the memory. I want only one corouitine to execute this load.</p> <pre><code> class MyService: def __init__(self): self.lock = asyncio.Lock() self.latest_timestamp = None async def _check_latest_timestamp_from_db(self): # go to db async def ensure_update_loaded(): latest_timestamp_from_db = await self._check_latest_timestamp_from_db() if self.timestamp == latest_timestamp_from_db: return if self.lock.locked(): return # the load is in progress, we're okay to use the old data for now async with self.lock: # do the load self.timestamp = latest_timestamp_from_db </code></pre> <p>From my understanding multiple coroutines can go simultaneously to this line: <code>async with lock:</code>, after all the checks are passed. Yes, they will execute consequently, but the load will happen more than once. I could double check within the lock:</p> <pre><code>async with self.lock: if self.timestamp == latest_timestamp_from_db: return # do the load self.timestamp = latest_timestamp_from_db </code></pre> <p>But I think there should be a clearer solution to my problem.</p>
<python><locking><python-asyncio><primitive>
2025-08-05 07:41:48
1
417
Artem Ilin
79,725,721
9,593,060
Input audio from microphone not collected when audio is reproduced
<p>I'm developing a simple real-time voice bot using the OpenAI real-time API, specifically integrating with Semantic Kernel. The code is written in an async manner, and it initially works well. However, I'm facing issues with task synchronization and the event loop, preventing me from interrupting the bot or audio playback until it finishes completely.</p> <p>When the application starts, everything works as expected: I ask a question, and the bot responds. But after this initial interaction, I can only pose another question once the bot has finished responding.</p> <p>Upon debugging, I noticed that writing the microphone's input stream to a file captures perfect audio initially. However, once the bot begins responding, the recorded audio becomes chopped and incomplete, only resuming full recording once the bot finishes playback.</p> <p>This leads me to believe there is a problem with my generators and their synchronization. When the OpenAI service sends events, the receive generator processes them, but the input generator stops working, failing to collect data, while the other generator plays back the response. Occasionally, the event loop returns to the microphone generator briefly, but it only captures incomplete audio.</p> <p>Ideally, I want the input audio stream to continuously record in parallel with the model's output being played through the speakers. I suspect multithreading might be necessary, but Python's Global Interpreter Lock (GIL) complicates this approach.</p> <p>Could anyone suggest solutions or workarounds to ensure uninterrupted audio recording while handling the model's output? Are there specific techniques or libraries that can help manage concurrency effectively in this scenario?</p> <p>Here's some code (it's not complete for full reproducibility, but should be enough to give an idea):</p> <p>VoiceBotService:</p> <pre><code>import asyncio import numpy as np from semantic_kernel.contents.realtime_events import RealtimeAudioEvent, RealtimeEvents from acev_realtime_voice_bot.service.ports import ( AIStreamingServicePort, AudioInputPort, AudioOutputPort, ) from tests.events import CallInterruptedEvent class VoiceBotService: def __init__( self, audio_in: AudioInputPort, audio_out: AudioOutputPort, ai_service: AIStreamingServicePort, ) -&gt; None: self.audio_in = audio_in self.audio_out = audio_out self.ai_service = ai_service async def run(self): async def receive_task(): async for event in self.ai_service.receive(): await asyncio.sleep(0.01) if ( isinstance(event, RealtimeAudioEvent) or isinstance(event, np.ndarray) # Can convert this into a response done of OpenAI and eliminate this # Gonna have coupling with openAI SDK, but that's ok for the moment. ): await self.audio_out.send_audio_output(event) if isinstance(event, CallInterruptedEvent): break else: await self.event_handler(event) receive_task_future = asyncio.create_task(receive_task()) # the problem is definitely with the async generator in sending the audio. That's the only difference i found between my code and the others. # By saving the audio frames i get as input, it looks like the input microphone gets blocked in collecting audio! The first time or when the bot is done answeringf # It saves all the audio and sends it, but when the bot is answering, the audio is weirdly chopped and missing! # Some locking of the audio interface? Sync problems? async for audio_frame in self.audio_in.get_input_audio_frames(): print(&quot;Sending audio&quot;) await self.ai_service.send(audio_frame) await receive_task_future async def event_handler(self, event: RealtimeEvents): print(event.service_type) </code></pre> <p>LocalAudioRecorder (used to record the microphone):</p> <pre><code>class LocalAudioRecorder(AudioInputPort): def __init__(self, device, sample_rate, channels, dtype, frame_size) -&gt; None: super().__init__() self.device = device self.sample_rate = sample_rate self.channels = channels self.dtype = dtype self.frame_size = frame_size async def get_input_audio_frames(self) -&gt; AsyncGenerator[np.ndarray, None]: &quot;&quot;&quot;Generator function to yield audio data chunks and save them to a WAV file.&quot;&quot;&quot; try: with InputStream( samplerate=self.sample_rate, channels=self.channels, device=self.device, dtype=np.int16, ) as stream: # Open a WAV file to write the audio data with wave.open('recorded_audio.wav', 'wb') as wav_file: wav_file.setnchannels(self.channels) wav_file.setsampwidth(np.dtype(np.int16).itemsize) wav_file.setframerate(self.sample_rate) while True: if self._is_key_pressed(): input() # Clear the input buffer print(&quot;Stopping recording...&quot;) break if stream.read_available &lt; self.frame_size: await asyncio.sleep(0) continue audio_chunk, _ = stream.read(self.frame_size) print(&quot;Read audio chunk.&quot;) print(f&quot;Content of audio: {audio_chunk}&quot;) # Write the audio chunk to the WAV file wav_file.writeframes(audio_chunk.tobytes()) await asyncio.sleep(0) yield audio_chunk # Check for Enter keypress to stop recording except Exception as e: print(f&quot;An error occurred: {e}&quot;) def _is_key_pressed(self): return select.select([sys.stdin], [], [], 0) == ([sys.stdin], [], []) </code></pre> <p>LocalAudioPlayer (used to play the audio):</p> <pre><code>class LocalAudioPlayer(AudioOutputPort): def __init__(self, channels, sample_rate) -&gt; None: self.channels = channels self.sample_rate = sample_rate self.stream = None async def send_audio_output( self, audio_frame: ndarray | RealtimeAudioEvent ) -&gt; None: if isinstance(audio_frame, RealtimeAudioEvent): audio_frame = np.frombuffer(audio_frame.audio.data, dtype=np.int16) if self.stream is None: # Initialize the stream self.stream = OutputStream( channels=self.channels, samplerate=self.sample_rate, dtype=&quot;float32&quot; ) self.stream.start() # Convert int16 audio chunk to float32 and normalize to [-1.0, 1.0] audio_chunk = audio_frame.astype(np.float32) / np.iinfo(np.int16).max self.stream.write(audio_chunk) </code></pre> <p>And this is the adapter for the openAI realtime connection:</p> <pre><code>import base64 from collections.abc import AsyncGenerator, Callable, Coroutine from typing import Any, ClassVar, Final, cast import numpy as np from numpy import ndarray from semantic_kernel.connectors.ai import FunctionChoiceBehavior from semantic_kernel.connectors.ai.open_ai import ( AzureRealtimeExecutionSettings, AzureRealtimeWebsocket, ) from semantic_kernel.contents.audio_content import AudioContent from semantic_kernel.contents.realtime_events import RealtimeAudioEvent, RealtimeEvents from acev_realtime_voice_bot.service.ports import AIStreamingServicePort class OpenAIRealtimeAdapter(AIStreamingServicePort): def __init__( self, system_prompt: str, endpoint: str | None = None, api_version: str | None = None, deployment_name: str | None = None, ) -&gt; None: super().__init__() self.client = AzureRealtimeWebsocket( endpoint=endpoint, api_version=api_version, deployment_name=deployment_name ) self.settings = AzureRealtimeExecutionSettings( instructions=system_prompt, turn_detection={&quot;type&quot;: &quot;server_vad&quot;}, voice=&quot;shimmer&quot;, input_audio_format=&quot;pcm16&quot;, output_audio_format=&quot;pcm16&quot;, input_audio_transcription={&quot;model&quot;: &quot;whisper-1&quot;}, function_choice_behavior=FunctionChoiceBehavior.Auto(), ) async def create_session(self): await self.client.create_session( settings=self.settings ) # TODO: Add chathistory if needed. async def close_session(self): await self.client.close_session() async def send(self, event: RealtimeEvents | np.ndarray) -&gt; None: if isinstance(event, np.ndarray): event = self._cast_input_audio_to_event(event) print(&quot;Sending event to openAI&quot;) await self.client.send(event=event) async def receive( self, audio_output_callback: Callable[[ndarray], Coroutine[Any, Any, None]] | None = None, ) -&gt; AsyncGenerator[RealtimeEvents, None]: async for event in self.client.receive( audio_output_callback=audio_output_callback ): yield event # TODO: Dont know if this is a smell of bad code, probably. I should unify somehow the interfaces and the data types. def _cast_input_audio_to_event(self, audio_frame) -&gt; RealtimeAudioEvent: return RealtimeAudioEvent( audio=AudioContent( data=base64.b64encode(cast(Any, audio_frame)).decode(&quot;utf-8&quot;) ) ) </code></pre> <p><strong>(I would like to add an audio sample but I can't figure out how to do it on stackoverflow, if you know how, please let me know and I will also add the audio sample)</strong></p>
<python><audio><python-asyncio><real-time><python-sounddevice>
2025-08-05 07:07:54
0
1,618
Mattia Surricchio
79,725,442
11,580,138
Why am I getting this RecursionError?
<p>I'm working on making a program to read a <a href="https://www.investopedia.com/terms/c/chart-accounts.asp" rel="nofollow noreferrer">chart of accounts</a> and turn it into a tree of <code>Account</code> objects, which I will be doing stuff with later. The chart of accounts in question has several levels of sub-accounts, with the name of each account being indented three spaces more than its parent account. For example:</p> <pre class="lang-none prettyprint-override"><code> Current Assets Cash 1000000.00 - Cash 1000010.00 - Bank Account #1 ... Other Current Assets Receivables 100800.00 - Accounts Receivable ... </code></pre> <p>I've created a class to represent an account, as follows:</p> <pre class="lang-py prettyprint-override"><code>class Account: parent = None num: int | None name: str # Name after stripping whitespace and splitting off account number raw: str # Name prior to ^ ... # Assorted irrelevant class variables acct_type: str | None detail_type: str | None _header: bool = False # Flags whether an account's a header account descendants: list = list() # Holds account's children. Should be empty if _header == False. def __init__(self, instr: str, parent=None, acct_type: str = None, detail_type: str = None): ''' Initialize an Account. - parent: Account. `None` if top-level. - instr: raw string from ACS CoA. - acct_type: QBO Account Type - detail_type: QBO Detail Type ''' self.raw = instr self.parent = parent self.acct_type = acct_type self.detail_type = detail_type stripped = instr.strip() ... # Assorted initialization code, incl. manipulating stripped # to get values for self.num and self.name ... # Assorted irrelevant methods def is_header(self) -&gt; bool: return self._header def depth(self) -&gt; int: return (len(self.raw) - len(self.raw.strip())) / 3 # self.raw def add_descendant(self, instr, acct_t: str = None, detail: str = None): '''Add descendant to account. If `acct_type` and `detail_type` are supplied, will override parent account type. ''' if len(self.descendants) &gt; 0: if calc_depth(instr) &gt; self.descendants[-1].depth(): self.descendants[-1].add_descendant(instr, acct_t = acct_t, detail = detail) else: self.descendants.append(Account( instr, parent=self, acct_type = self.acct_type if acct_t is None else acct_t, detail_type = self.detail_type if detail is None else detail )) else: self.descendants.append(Account( instr, parent=self, acct_type = self.acct_type if acct_t is None else acct_t, detail_type = self.detail_type if detail is None else detail )) return if __name__ == &quot;__main__&quot;: accounts: list[Account] = list() for (acs_name, account_type, detail_type) in get_data(): # Gets data as list[tuple[str, str, str]] d = len(acs_name) - len(acs_name.lstrip())) / 3 # Depth of current row's account if d == 0: # Overall header accounts.append(Account(acs_name, parent=None, acct_type=account_type, detail_type=detail_type)) else: if account_type is not None: # Account metadata supplied in CoA spreadsheet accounts[-1].add_descendant( acs_name, acct_t = account_type, detail = detail_type) else: # Impute account metadata from parent accounts[-1].add_descendant(acs_name) </code></pre> <p>However, when I run this, I get a rather nasty RecursionError:</p> <pre><code>Exception has occurred: RecursionError maximum recursion depth exceeded KeyError: 'C:\\Users\\...\\Account.py' During handling of the above exception, another exception occurred: KeyError: 'C:\\Users\\...\\Account.py' During handling of the above exception, another exception occurred: KeyError: 'C:\\Users\\...\\Account.py' During handling of the above exception, another exception occurred: KeyError: 'C:\\Users\\...\\Account.py' During handling of the above exception, another exception occurred: File &quot;C:\Users\...\Account.py&quot;, line 104, in add_descendant self.descendants[-1].add_descendant(instr, acct_t = acct_t, detail = detail) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\...\Account.py&quot;, line 104, in add_descendant self.descendants[-1].add_descendant(instr, acct_t = acct_t, detail = detail) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\...\Account.py&quot;, line 104, in add_descendant self.descendants[-1].add_descendant(instr, acct_t = acct_t, detail = detail) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [Previous line repeated 980 more times] File &quot;C:\Users\...\Account.py&quot;, line 155, in &lt;module&gt; acs_name, ^^^^^^^^^ ...&lt;3 lines&gt;... detail=detail_type RecursionError: maximum recursion depth exceeded </code></pre> <p>I've spent the last 1.5 hours trying to fix this, but to no avail. Looking in my debugger, it appears that while the first two levels get added properly, that second level makes itself its own descendant <em>ad infinitum</em>. For example, in my most recent debugging run the tree looks like this:</p> <pre><code>&lt;__main__.Account object at 0x0000015631511BE0&gt; # Current Assets &lt;__main__.Account object at 0x0000015631796710&gt; # Cash &lt;__main__.Account object at 0x0000015631796710&gt; # Cash &lt;__main__.Account object at 0x0000015631796710&gt; # Yet more Cash ... # 980-ish more Cash </code></pre> <p>For reference, it should actually look like this:</p> <pre><code>&lt;__main__.Account object at 0x0000015631511BE0&gt; # Current Assets &lt;__main__.Account object at 0x0000015631796710&gt; # Cash &lt;__main__.Account object at 0x0000015631XXXXXX&gt; # 1000000.00 - Cash </code></pre> <p>What am I doing wrong? As far as I can tell, there isn't anywhere where I am adding <code>self</code> to <code>self.descendants</code>.</p> <p>(As a final note, this isn't my full code; irrelevant bits have been omitted or simplified.)</p>
<python><recursion><infinite-recursion>
2025-08-04 22:34:01
1
549
In Hoc Signo
79,725,439
1,086,777
Python packages not autocompleting in Pycharm for one project, but works fine for another
<p>I'm using Pycharm 2025.1.3.1. I created a new project that uses the random package and the turtle package.</p> <p>In one project, turtle classes and methods autocomplete, and in the other, they don't. I've checked the interpreter settings for both and neither has a turtle interpreter. From what I've read, since turtle is included in python (3.13), it should autocomplete without having an interpreter.</p> <p>The same is true for the random package. However basics like <code>print</code> do autocomplete.</p> <p>Project A:</p> <p><a href="https://i.sstatic.net/GUpOwXQE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GUpOwXQE.png" alt="enter image description here" /></a></p> <p>Project B: <a href="https://i.sstatic.net/Jf6gGTb2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Jf6gGTb2.png" alt="enter image description here" /></a></p> <p>I am new to python and learning through Udemy.</p>
<python><pycharm>
2025-08-04 22:26:06
0
4,374
jzadra
79,725,340
7,475,838
Order of columns in a plotnine bar plot using a polars dataframe
<p>I'm quite new to the packages polars and plotnine and have the following code:</p> <pre class="lang-py prettyprint-override"><code>import polars as pl import polars.selectors as cs from plotnine import * df = pl.read_csv('https://raw.githubusercontent.com/Brinkhuis/Zorguitgaven/refs/heads/master/zorguitgaven.csv') df = df.unpivot(cs.numeric(), index='Category') ( ggplot() + geom_bar( data=df.filter(pl.col('variable').is_in(['Mannen 2040', 'Vrouwen 2040'])), mapping=aes(x='Category', y='value', fill='variable'), stat='identity' ) + scale_fill_manual(values=['#007BC7', '#CA005D']) + coord_flip() ) </code></pre> <p>It runs without errors. However, the Category order is not right.</p> <p><a href="https://i.sstatic.net/Yjlb7H1x.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Yjlb7H1x.png" alt="enter image description here" /></a></p> <p>The &quot;5-10&quot; value show be on &quot;row 3&quot;.</p> <p>My questions:</p> <ol> <li>Is there a way to set the categorical order in a polars dataframe to fix this?</li> <li>Is there a way to set the categorical order in plotnine?</li> </ol>
<python><dataframe><python-polars><plotnine><polars>
2025-08-04 19:59:50
3
4,919
René
79,725,307
34,747
How to pass vector from Python to Duckdb for vector similarity search
<p>I am using Duckdb with the VSS extension to store document embeddings, it works normally however when I try to do a similarity search to a vector passed from Python, I am getting a TypeError. The query looks like this:</p> <pre class="lang-py prettyprint-override"><code>query = text( &quot;&quot;&quot; select document from embedding_duckdb where collection_name = :collection_name order by array_cosine_similarity( embedding, :embedding::FLOAT[1024]) ) limit :num_docs &quot;&quot;&quot; ) result = self.session.execute( query, { 'collection_name': collection, 'embedding': array_type(response[&quot;embedding&quot;]), 'num_docs': num_docs, } ) pages = [row[0] for row in result.fetchall()] </code></pre> <p>here is TypeError I get:</p> <pre><code>E TypeError: array_type(): incompatible function arguments. The following argument types are supported: E 1. (type: duckdb.duckdb.typing.DuckDBPyType, size: int, *, connection: duckdb.DuckDBPyConnection = None) -&gt; duckdb.duckdb.typing.DuckDBPyType E E Invoked with: [0.8049294352531433, -0.5165860652923584, -0.25891393423080444, -0.8668404221534729, 0.050446540117263794, -0.060985639691352844, -0.2768419682979584, -0.3037632703781128, 0.7962498664855957, -0.30102455615997314, 0.13201330602169037, -0.35436326265335083, -0.2618299722671509, -0.030353419482707977, -0.009162794798612595, -0.37244415283203125, -0.505980372428894, -0.2313883751630783, -0.4354000389575958, 0.7349468469619751, 0.18996360898017883, 0.47891461849212646, -0.009091421961784363, ... </code></pre> <p>I tried other ways of casting the vector, for example using the <code>cast</code> function.</p> <p>AFAICT I am doing exactly as the <a href="https://duckdb.org/docs/stable/core_extensions/vss.html" rel="nofollow noreferrer">docs</a> instruct. Don't know what else to try.</p> <p>EDIT: original code to define <code>query</code> variable is below</p> <pre class="lang-py prettyprint-override"><code>query = text('select document from embedding_duckdb where collection_name = :collection_name order by array_cosine_similarity(embedding, :embedding::FLOAT[1024])) limit :num_docs') </code></pre>
<python><sql><duckdb>
2025-08-04 19:22:01
1
6,262
fccoelho
79,725,144
15,263,876
The conditional edge in LangGraph causes the reduce function to be invoked twice
<p>I am a beginner with LangGraph and am learning to create a graph to compute the Fibonacci sequence. The program has two nodes: one responsible for updating the Fibonacci sequence, and the other for controlling the loop. The complete code is as follows:</p> <pre><code>from langchain_core.runnables import Runnable from collections.abc import Iterable from langgraph.graph import StateGraph from pydantic import BaseModel from typing import Any, Dict, Annotated from langgraph.constants import END def append_to_list(a : list, b) -&gt; list: print(f&quot;append_to_list:{a}, {b}&quot;) if isinstance(b, Iterable) and not isinstance(b, str): a.extend(b) else: a.append(b) return a class Node(Runnable): def __init__(self, name : str): self.name = name def invoke(self, input: Any, config: Dict | None = None) -&gt; Any: c = input.a + input.b print(f&quot;{self.name}:{input}&quot;) return {&quot;a&quot;:input.b, &quot;b&quot;:c, &quot;fibonacci_list&quot;:c} class ConditionalNode(Runnable): def __init__(self, name : str, threshold: int): self.threshold = threshold self.name = name def invoke(self, input: Any, config) -&gt; Any: print(f&quot;{self.name}:{input}&quot;) return {&quot;path&quot; : input.b &gt;= self.threshold} class State(BaseModel): a : int b : int fibonacci_list : Annotated[list[int], append_to_list] graph = StateGraph(State) graph.add_node(&quot;n1&quot;, Node(name = &quot;n1&quot;)) graph.add_conditional_edges(&quot;n1&quot;, ConditionalNode(name = &quot;CN&quot;, threshold = 10), {True : END, False: &quot;n1&quot;}) graph.set_entry_point(&quot;n1&quot;) g = graph.compile() s = g.invoke({&quot;a&quot;:1, &quot;b&quot;:1, &quot;fibonacci_list&quot;:[1, 1]}) print(s) </code></pre> <p>The n1 node is responsible for computing the latest value in the Fibonacci sequence and returning updates to the graph state. The append_to_list function is intended to add the newly generated fibonacci_list value to the fibonacci_list list in the graph state. I originally thought that append_to_list would be called only once after the n1 node finishes execution, and that the final result would be a complete Fibonacci sequence.</p> <p>However, in reality, the append_to_list function is also called every time the CN node finishes executing, and the arguments passed to it are the same as those passed after the n1 node finishes. This results in duplicate values being added to the Fibonacci sequence. For example, I expected to get [1, 1, 2, 3, 5, 8, 13], but instead I got [1, 1, 2, 2, 3, 3, 5, 5, 8, 8, 13, 13].</p> <p>How can I prevent the conditional edge from triggering the reduce function?</p>
<python><langgraph>
2025-08-04 15:56:25
2
323
lei hu
79,725,066
804,184
How to use the default value of a param when import a rule in Snakemake?
<p>I am trying to generalize my snakemake workflow, but I see a weird behavior, which I am not sure if I am doing something wrong or if it is by default. I created a file <code>analysis.smk</code> rules and the evariable I tried to make a generic rule, to import it in many of my snakemake workflows, so I have, for instance in my <code>analysis.smk</code> rule:</p> <pre class="lang-yaml prettyprint-override"><code>rule analysis: output: test.root input: input_test.root params: variable: &quot;SvB&quot; rebin: 10 shell: `python run_analysis.py {params.variable} {params.rebin}` </code></pre> <p>In another Snakefile, I am importing this rule and using it as:</p> <pre class="lang-yaml prettyprint-override"><code>module analysis: snakefile: &quot;analysis.smk&quot; config: config use rule analysis from analysis as run_analysis with: output: f&quot;{config['output']}/hist.root&quot; input: f&quot;{config['output']}/input_hist.root&quot; params: rebin: 20 </code></pre> <p>And when I run my workflow, I am getting errors like:</p> <blockquote> <p>AttributeError: 'Params' object has no attribute 'variable', when formatting the following:</p> </blockquote> <p>Because I didn't define the <code>variable</code>. Is this expected? I would just like to change the value of some params, and use the default in others.</p> <p>I am using snakemake version 8.27.1</p>
<python><snakemake>
2025-08-04 14:41:40
1
5,276
Alejandro
79,725,035
12,485,974
fvcore multiple arguments forward modules?
<p>I have a model that take more than one arguments in forward. Recently I'm trying to query some informations my model by fvcore module in python, but I can't find any <a href="https://detectron2.readthedocs.io/en/latest/modules/fvcore.html" rel="nofollow noreferrer">document</a> for multiple forward arguments!</p> <p>and edited code tried to have multiple functions:</p> <pre class="lang-py prettyprint-override"><code>from fvcore.nn import FlopCountAnalysis, parameter_count_table def modelCount(model, input_tensor, *args, **kwargs): def _wrapped_forward(x): return model(x, *args, **kwargs) flops = FlopCountAnalysis(_wrapped_forward, input_tensor).total() params = parameter_count_table(model) return flops, params </code></pre> <p>but it does not help ... I still have error.</p>
<python><torch>
2025-08-04 14:20:34
1
586
H.M
79,725,017
2,801,187
ttk.Combobox selected text vanishes when focus is lost (clam theme, readonly)
<p>I'm new to python and recently started coding a combobox using the &quot;clam&quot; UI theme. However, the selected text vanishes when focus is lost from the dropdown? It seems confined to &quot;clam&quot; as other UI themes such as &quot;alt&quot; work fine.</p> <p>For example, the initial screen shows</p> <p><a href="https://i.sstatic.net/pBaLjeQf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pBaLjeQf.png" alt="screen1" /></a></p> <p>Click the dropdown icon</p> <p><a href="https://i.sstatic.net/TMQfgUNJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TMQfgUNJ.png" alt="screen2" /></a></p> <p>Now, re-click the dropdown icon</p> <p><a href="https://i.sstatic.net/kV9V2lb8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kV9V2lb8.png" alt="screen3" /></a></p> <p>Notice the default value <code>apples</code> has vanished? I was expecting it to be visible.</p> <p><strong>Example Code</strong></p> <pre><code>import tkinter as tk, tkinter.ttk as ttk root = tk.Tk() style = ttk.Style(root) style.theme_use(&quot;clam&quot;) style.configure(&quot;TCombobox&quot;, fieldbackground=&quot;white&quot;, background=&quot;white&quot;) style.map(&quot;TCombobox&quot;, fieldbackground=[(&quot;readonly&quot;, &quot;white&quot;)], background=[(&quot;readonly&quot;, &quot;white&quot;)]) cb = ttk.Combobox(root, state=&quot;readonly&quot;, values=(&quot;apples&quot;, &quot;oranges&quot;, &quot;bananas&quot;)) cb.set(&quot;apples&quot;) cb.pack(padx=40, pady=40) root.mainloop() </code></pre>
<python><tkinter><combobox><tk-toolkit>
2025-08-04 14:05:15
1
2,457
vengy
79,724,899
4,108,376
Running Visual Studio Code Python debugger in Pixi shell
<p>I'm using <em>Pixi</em> to set up a Python environment that a python project needs to run in.</p> <p>So I first do <code>pixi shell</code>, which sets several environment variables defined in the <code>pyproject.toml</code> file and also runs a <code>.bat</code> script. After this I run <code>python.exe ./script.py</code> to run the application.</p> <p>I'm using Visual Studio Code as IDE, so I'm running this in its Terminal view.</p> <p>But when I try to use the debugger in Visual Studio Code, I run it from the IDE instead of launching <code>python.exe</code> from the terminal view. But for this Visual Studio Code creates a new Terminal session &quot;Python Debug Console&quot;, for which <code>pixi shell</code> has not been run, and so it will not work correctly.</p> <p>When launching the debugger from the Visual Studio Code IDE, it runs <code>&amp; 'python.exe' 'c:\...\ms-python.debugpy-2025.10.0-win32-x64\bundled\libs\debugpy\launcher' '49494' '--' 'c:\...\script.py</code> in that session &quot;Python Debug Console&quot;. Running this manually in the &quot;pixi&quot; shell also does not work, because then the Python debugger will not connect to the IDE because the IDE is not waiting for a connection from a Python debugger.</p> <hr /> <p>Is there a way to run a Python debugging session with the Visual Studio Code IDE, such that it is launched in a Pixi environment?</p> <p>One solution would be if I could set up the IDE to wait for a connection from the debugger, and then manually run <code>ms-python.debugpy-2025.10.0-win32-x64\bundled\libs\debugpy\launcher ...</code> in the Pixi shell, but this does not seem possible.</p> <p>Another way would be if it could be set to run <code>pixi shell</code>. But this seems also problematic, because <code>pixi shell</code> does not only set environment variables, but also launches a new <code>cmd</code> shell as subprocess. (So it is not a command that can be run before running the debugger, but the debugger needs to be run inside the shell spawned by <code>pixi shell</code>).</p>
<python><visual-studio-code><vscode-debugger><pixi-package-manager>
2025-08-04 12:16:03
0
9,230
tmlen
79,724,877
12,633,371
polars implementation for creating objects selecting specific attributes
<p>The <code>stanza</code> annotation <a href="https://stanfordnlp.github.io/stanza/getting_started.html#annotating-a-document" rel="nofollow noreferrer">pipeline</a> processes a text and it creates <code>Sentence</code>s which in turn comprise of <code>Word</code>s. These are objects created by Stanza. I want to select specific attributes of the <code>Word</code> objects that Stanza creates and create my own objects in a list of lists (the outer list is the whole text and the inner lists are the sentences). With a <code>pandas</code> <code>DataFrame</code> having each text annotation in a <code>DataFrame</code> cell, I would create a function with a double for loop to achieve that. I want to use the <code>polars</code> library. Can I do that using the <code>polars</code> API, or I do that like the <code>pandas</code> implementation?</p> <p><strong>Edit</strong> for including minimal code example.</p> <pre class="lang-py prettyprint-override"><code>import stanza from typing import NamedTuple nlp = stanza.Pipeline('en') class Word(NamedTuple): id: int head_id: int text: str span: list[int] def get_doc_words(doc: stanza.Document) -&gt; list[list[Word]]: doc_words = [] for sentence in doc.sentences: sentence_words = [] for sent_word in sentence.words: word = Word( id=sent_word.id, head_id=sent_word.head, text=sent_word.text, span=[sent_word.start_char, sent_word.end_char], ) sentence_words.append(word) doc_words.append(sentence_words) return doc_words df=pd.DataFrame( { 'text': [ 'This is some sample text. A second sentence.', 'And a second sample. Having a second sentence as well' ] } ) df['stanza_annotation'] = df['text'].apply(nlp) df['stanza_words'] = df['stanza_annotation'].apply(get_doc_words) </code></pre> <p>And this is the output I expect for each piece of text.</p> <pre class="lang-py prettyprint-override"><code>[[Word(id=1, head_id=5, text='This', span=[0, 4]), Word(id=2, head_id=5, text='is', span=[5, 7]), Word(id=3, head_id=5, text='some', span=[8, 12]), Word(id=4, head_id=5, text='sample', span=[13, 19]), Word(id=5, head_id=0, text='text', span=[20, 24]), Word(id=6, head_id=5, text='.', span=[24, 25]], [Word(id=1, head_id=3, text='A', span=[26, 27]), Word(id=2, head_id=3, text='second', span=[28, 34]), Word(id=3, head_id=0, text='sentence', span=[35, 43]), Word(id=4, head_id=3, text='.', span=[43, 44])]] </code></pre>
<python><dataframe><stanford-nlp><python-polars>
2025-08-04 11:51:19
1
603
exch_cmmnt_memb
79,724,845
7,456,317
Convert Decimal values to float64 when creating a Pandas DataFrame
<p>I'm working with a dictionary that contains a list of <code>decimal.Decimal</code> values as one of its fields:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd from decimal import Decimal data = { 'Item': ['Apple', 'Banana', 'Orange'], 'Price': [Decimal('1.25'), Decimal('0.75'), Decimal('2.00')], 'Quantity': [10, 20, 15] } </code></pre> <p>When I convert this dictionary into a Pandas DataFrame, the Price column (which contains Decimal objects) is inferred as object:</p> <pre class="lang-py prettyprint-override"><code>df = pd.DataFrame(data) print(df.dtypes) # Output: # Item object # Price object # Quantity int64 # dtype: object </code></pre> <p>I would like the Price column to be of type float64 instead.</p> <p>I tried using <code>pd.DataFrame.from_records(data, coerce_float=True)</code>, but it didn’t change the type of the Decimal values.</p> <p>I’m aware I can convert the column using .astype(float), but in my actual use case, I may not know the column name in advance.</p> <p>What’s the best way to ensure that all Decimal values are automatically converted to floats when creating the DataFrame, ideally without hardcoding column names?</p>
<python><pandas><dataframe><type-conversion>
2025-08-04 11:23:13
2
913
Gino
79,724,774
13,314,132
How to download protected PDF (ViewDocument) using Selenium or requests?
<p>I'm trying to download a protected PDF from the New York State Courts NYSCEF website using Python. The URL looks like this:</p> <pre><code>https://iapps.courts.state.ny.us/nyscef/ViewDocument?docIndex=cdHe_PLUS_DaUdFKcTLzBtSo6zw== </code></pre> <p>When I try to use <code>requests.get()</code> or even navigate to the page with Selenium, I either get:</p> <ul> <li>A <code>403 Forbidden</code> response (via <code>requests</code>)</li> <li>Or a blank page with no <code>&lt;embed&gt;</code> tag (via Selenium)</li> </ul> <p>Here’s what I’ve tried:</p> <p>Using requests:</p> <pre class="lang-py prettyprint-override"><code>import requests url = &quot;https://iapps.courts.state.ny.us/nyscef/ViewDocument?docIndex=...&quot; headers = { &quot;User-Agent&quot;: &quot;Mozilla/5.0&quot;, &quot;Referer&quot;: &quot;https://iapps.courts.state.ny.us/nyscef/&quot; } response = requests.get(url, headers=headers) print(response.status_code) # Always 403 </code></pre> <p>And using SeleniumBase:</p> <pre><code>from seleniumbase import SB with SB(headless=False) as sb: sb.open(url) sb.wait(5) try: embed = sb.find_element(&quot;embed&quot;) print(embed.get_attribute(&quot;src&quot;)) except Exception as e: print(&quot;❌ No embed tag found&quot;, e) </code></pre> <p>Nothing works.</p> <p>Full code for reference:</p> <pre><code>from seleniumbase import SB import requests import os import time def download_pdf_with_selenium_and_requests(): # Target document URL doc_url = &quot;https://iapps.courts.state.ny.us/nyscef/ViewDocument?docIndex=cdHe_PLUS_DaUdFKcTLzBtSo6zw==&quot; # Setup download directory download_dir = os.path.join(os.getcwd(), &quot;downloads&quot;) os.makedirs(download_dir, exist_ok=True) filename = os.path.join(download_dir, &quot;NYSCEF_Document.pdf&quot;) with SB(headless=True) as sb: # Step 1: Navigate to the document page (using browser session) sb.open(doc_url) time.sleep(5) # Wait for any redirects/cookies to be set # Step 2: Grab the actual PDF &lt;embed src&gt; try: embed = sb.find_element(&quot;embed&quot;) pdf_url = embed.get_attribute(&quot;src&quot;) print(f&quot;Found PDF URL: {pdf_url}&quot;) except Exception as e: print(f&quot;No &lt;embed&gt; tag found: {e}&quot;) return # Step 3: Extract cookies from Selenium session selenium_cookies = sb.driver.get_cookies() session = requests.Session() for cookie in selenium_cookies: session.cookies.set(cookie['name'], cookie['value']) # Step 4: Download PDF using requests with cookies headers = { &quot;User-Agent&quot;: &quot;Mozilla/5.0&quot;, &quot;Referer&quot;: doc_url } response = session.get(pdf_url, headers=headers) if response.status_code == 200 and &quot;application/pdf&quot; in response.headers.get(&quot;Content-Type&quot;, &quot;&quot;): with open(filename, &quot;wb&quot;) as f: f.write(response.content) print(f&quot;PDF saved as: {filename}&quot;) else: print(f&quot;PDF download failed. Status: {response.status_code}&quot;) print(f&quot;Content-Type: {response.headers.get('Content-Type')}&quot;) print(f&quot;Final URL: {response.url}&quot;) if __name__ == &quot;__main__&quot;: download_pdf_with_selenium_and_requests() </code></pre> <p>Response:</p> <pre><code>No &lt;embed&gt; tag found: Message: Element {embed} was not present after 10 seconds! </code></pre>
<python><selenium-webdriver><web-scraping><pdf><python-requests>
2025-08-04 10:28:11
2
655
Daremitsu
79,724,772
9,021,547
How to set starting condition for airflow task group
<p>I have the following dag structure:</p> <pre><code>from datetime import datetime, timedelta from airflow import DAG from airflow.utils.task_group import TaskGroup from airflow.providers.common.sql.operators.sql import SQLExecuteQueryOperator with DAG( 'irb_test', description='irb_test', tags=[&quot;irb_test&quot;], schedule_interval=None, start_date=datetime(2025, 7, 1), default_args={ 'retries': 0, 'retry_delay': timedelta(minutes=1), 'conn_id': 'sgk_gp' } ) as dag: with TaskGroup('test_1') as test_1: for i in range(15): task_1_i = SQLExecuteQueryOperator( task_id=f'task_1_{i}', sql=f&quot;&quot;&quot; select pg_sleep({i}) &quot;&quot;&quot; ) with TaskGroup('test_2') as test_2: for i in range(15): task_2_i = SQLExecuteQueryOperator( task_id=f'task_2_{i}', sql=f&quot;&quot;&quot; select pg_sleep({i}) &quot;&quot;&quot; ) for i in range(15): test_1.get_child_by_label(f'task_1_{i}')\ &gt;&gt; test_2.get_child_by_label(f'task_2_{i}') </code></pre> <p>What I would like to do is to start task group test_2 only after 10 of the tasks in the first task group have succeeded. I know that I can rewrite the execution order manually, but is there maybe a more elegant way of doing it?</p>
<python><airflow>
2025-08-04 10:27:34
1
421
Serge Kashlik
79,724,694
7,078,356
How to step into C++ code from Python using pybind11 in VSCode when using Python C++ Debugger?
<p><strong>Question:</strong></p> <p>I am trying to debug a Python script that calls a C++ function through <strong>a .pyd</strong> module built with <strong>pybind11</strong> (compiled using MSVC). I want to set a breakpoint in Python, and when I press Step Into (F11,step into) in VSCode, I want it to step into the C++ implementation automatically.</p> <p><strong>Code Example:</strong></p> <p><strong>Python</strong> (debug.py)</p> <pre><code>import sys import os # Add debug directory to Python path debug_path = os.path.join(os.path.dirname(__file__), 'build', 'Debug') sys.path.insert(0, debug_path) import myadder result = myadder.add(5, 3) # BREAKPOINT here for i in range(3): result = myadder.add(i, i + 1) print(f&quot;add({i}, {i+1}) = {result}&quot;) </code></pre> <p><strong>C++</strong> (add.cpp)</p> <pre><code>// add.cpp #include &lt;cstdio&gt; int add(int a, int b) { printf(&quot; C++ add() called with a=%d, b=%d\n&quot;, a, b); int result = a + b; // Set C++ breakpoint here printf(&quot; C++ add() returning: %d\n&quot;, result); return result; } </code></pre> <p>C++ (bindings.cpp)</p> <pre><code>// bindings.cpp #include &lt;pybind11/pybind11.h&gt; int add(int a, int b); namespace py = pybind11; PYBIND11_MODULE(myadder, m) { m.def(&quot;add&quot;, &amp;add, &quot;A function that adds two numbers&quot;); } </code></pre> <p><strong>CMakeLists.txt</strong></p> <pre><code>cmake_minimum_required(VERSION 3.14) project(myadder) set(CMAKE_CXX_STANDARD 17) set(CMAKE_BUILD_TYPE Debug) # Use Python to find pybind11 automatically execute_process( COMMAND python -c &quot;import pybind11; print(pybind11.get_cmake_dir())&quot; OUTPUT_VARIABLE pybind11_DIR OUTPUT_STRIP_TRAILING_WHITESPACE ) # find pybind11 package find_package(pybind11 REQUIRED) pybind11_add_module(myadder bindings.cpp add.cpp) </code></pre> <p><strong>Build</strong></p> <pre><code>mkdir build cd build cmake .. -G &quot;Visual Studio 16 2019&quot; -A x64 cmake --build . --config Debug </code></pre> <p><strong>✅Build Info:</strong></p> <p>Compiler: MSVC</p> <p>Built as: myadder.pyd with pybind11</p> <p>Python version: 3.12</p> <p>Platform: Windows 11</p> <p>Debugger: cppvsdbg</p> <p>VSCode Plugin: Python C++ Debugger</p> <p><strong>launch.json Config</strong></p> <pre><code>{ // Use IntelliSense to learn about possible attributes. // Hover to view descriptions of existing attributes. // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387 &quot;version&quot;: &quot;0.2.0&quot;, &quot;configurations&quot;: [ { &quot;name&quot;: &quot;Python C++ Debugger&quot;, &quot;type&quot;: &quot;pythoncpp&quot;, &quot;request&quot;: &quot;launch&quot;, &quot;pythonLaunchName&quot;: &quot;Python: Current File&quot;, &quot;cppAttachName&quot;: &quot;(Windows) Attach&quot; }, { &quot;name&quot;: &quot;(Windows) Attach&quot;, &quot;type&quot;: &quot;cppvsdbg&quot;, &quot;request&quot;: &quot;attach&quot;, &quot;processId&quot;: &quot;${command:pickProcess}&quot;, &quot;symbolSearchPath&quot;: &quot;${workspaceFolder}/build/Debug&quot;, // .pdb }, // Python { &quot;name&quot;: &quot;Python: Current File&quot;, &quot;type&quot;: &quot;debugpy&quot;, &quot;request&quot;: &quot;launch&quot;, &quot;program&quot;: &quot;${workspaceFolder}/debug.py&quot;, &quot;console&quot;: &quot;integratedTerminal&quot;, &quot;justMyCode&quot;: false, &quot;stopOnEntry&quot;: true, } ] } </code></pre> <p><strong>Current Behavior:</strong> When I run the Python C++ debugger and set a breakpoint at myadder.add(5, 3), it stops in Python, but Step Into (F11) does not go into the C++ function.</p> <p>However, if I first hit the Python breakpoint and then manually click &quot;Windows(Attach)&quot;, then I can then hit breakpoints in C++. But this is a manual.</p> <p><a href="https://i.sstatic.net/yrsdryA0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yrsdryA0.png" alt="click Windows (Attach)" /></a></p> <p><strong>✅Desired Behavior:</strong></p> <p>I want to be able to:</p> <p>Run Python C++ Debugger.</p> <p>Stop at the breakpoint in Python code.</p> <p>Press F11 (or clicking Step Into) can make VSCode step into the C++ implementation without clicking (Windows Attach).</p> <p>Is it possible to step into C++ pybind11 code directly from Python using the cppvsdbg debugger in VSCode without manually clicking <strong>(Windows) Attach</strong>?</p> <p>If so, what is the correct <strong>launch.json</strong> configuration or steps to make it work?</p> <p>Any advice? Thanks.</p>
<python><c++><debugging><pybind11>
2025-08-04 09:16:20
1
1,327
Ringo
79,724,624
17,510,144
How to generate aadhaar card data from qr code
<p>I'm trying to generate user data from aadhaar qr code, i'm trying the 2 methods</p> <pre><code>from aadhaar.secure_qr import extract_data extracted_data = extract_data(int(qrData)) print(extracted_data.text_data) # and from pyaadhaar.decode import AadhaarSecureQr secure_qr = AadhaarSecureQr(int(normal_int)) print(secure_qr.data) </code></pre> <p>Both are almost similar methods, but when I try to exctract data from the qr on the <a href="https://uidai.gov.in/te/ecosystem-te/authentication-devices-documents-te/qr-code-reader-te.html" rel="nofollow noreferrer">uidai</a> site (3181 chars) and this <a href="https://uidai.gov.in/images/resource/User_manulal_QR_Code_15032019.pdf" rel="nofollow noreferrer">sample</a> data (3166 chars) the data is succesfully generated, but when I try to scan my own aadhaar qr (3142 chars), the following errors occur</p> <pre><code>Traceback (most recent call last): File &quot;...&quot;, line 18, in &lt;module&gt; extracted_data = extract_data(int(qrData)) ^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\...\AppData\Local\Programs\Python\Python312\Lib\site-packages\aadhaar\secure_qr\extractor.py&quot;, line 330, in extract_data return data_extractor.extract() ^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\...\AppData\Local\Programs\Python\Python312\Lib\site-packages\aadhaar\secure_qr\extractor.py&quot;, line 318, in extract text_data=self._make_text_data(), ^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\...\AppData\Local\Programs\Python\Python312\Lib\site-packages\aadhaar\secure_qr\extractor.py&quot;, line 212, in _make_text_data reference_id=self._make_reference_id(extracted_text_data[&quot;reference_id&quot;]), ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\...\AppData\Local\Programs\Python\Python312\Lib\site-packages\aadhaar\secure_qr\extractor.py&quot;, line 197, in _make_reference_id timestamp=datetime.strptime(extracted_data[4:], &quot;%Y%m%d%H%M%S%f&quot;), ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\...\AppData\Local\Programs\Python\Python312\Lib\_strptime.py&quot;, line 554, in _strptime_datetime tt, fraction, gmtoff_fraction = _strptime(data_string, format) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\...\AppData\Local\Programs\Python\Python312\Lib\_strptime.py&quot;, line 333, in _strptime raise ValueError(&quot;time data %r does not match format %r&quot; % ValueError: time data '' does not match format '%Y%m%d%H%M%S%f' # and Traceback (most recent call last): File &quot;...&quot;, line 18, in &lt;module&gt; secure_qr = AadhaarSecureQr(int(normal_int)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\...\AppData\Local\Programs\Python\Python312\Lib\site-packages\pyaadhaar\decode.py&quot;, line 27, in __init__ self._extract_info_from_decompressed_array() File &quot;C:\...\AppData\Local\Programs\Python\Python312\Lib\site-packages\pyaadhaar\decode.py&quot;, line 52, in _extract_info_from_decompressed_array self.data['aadhaar_last_digit'] = self.data['referenceid'][3] ~~~~~~~~~~~~~~~~~~~~~~~~^^^ IndexError: string index out of range </code></pre> <p>I want to know the solution to the problem or any other methods to do the same. Thank you : )</p>
<python><aadhaar>
2025-08-04 08:15:49
0
432
Shubham Singhvi
79,724,473
4,107,349
Xlsxwriter custom data labels with custom data label positions
<p>We're trying to customize data label positions of individual data points on an xlsxwriter graph but can only work out how to set data label positions for entire series.</p> <p>Simplified reproducible example:</p> <pre class="lang-py prettyprint-override"><code>import xlsxwriter workbook = xlsxwriter.Workbook(&quot;chart_line.xlsx&quot;) worksheet = workbook.add_worksheet() headings = [&quot;Number&quot;, &quot;Batch 1&quot;] data = [ [2, 3, 4, 5], [10, 40, 50, 20], ] worksheet.write_row(&quot;A1&quot;, headings) worksheet.write_column(&quot;A2&quot;, data[0]) worksheet.write_column(&quot;B2&quot;, data[1]) chart1 = workbook.add_chart({&quot;type&quot;: &quot;line&quot;}) custom_labels = [ {&quot;value&quot;: True}, {&quot;value&quot;: True, &quot;position&quot;: &quot;right&quot;}, {&quot;value&quot;: True, &quot;position&quot;: &quot;left&quot;}, # where we want to set positions individually {&quot;value&quot;: True, &quot;position&quot;: &quot;above&quot;}, # these &quot;position&quot; keys appear to be ignored ] chart1.add_series( { &quot;name&quot;: &quot;=Sheet1!$B$1&quot;, &quot;categories&quot;: &quot;=Sheet1!$A$2:$A$5&quot;, &quot;values&quot;: &quot;=Sheet1!$B$2:$B$5&quot;, &quot;data_labels&quot;: {&quot;value&quot;: True, &quot;custom&quot;: custom_labels} # we can set positions for all labels here with &quot;position&quot;: &quot;left&quot; } ) worksheet.insert_chart(&quot;D2&quot;, chart1) workbook.close() </code></pre> <p>We want to set individual points programmatically for multiple series (not shown above) for visual problems avoided here for simplicity. <a href="https://xlsxwriter.readthedocs.io/working_with_charts.html#chart-series-option-custom-data-labels" rel="noreferrer">Xlsxwriter's docs state &quot;The property elements of the custom lists should be dicts with the following allowable keys/sub-properties: ... value, font, delete&quot;</a>, which indicates we're out of luck.</p> <p>Is there anything we can do? In <a href="https://stackoverflow.com/q/64169709">a closely related question</a> the Xlsxwriter dev appears to state this isn't possible, but we're wondering if we've misunderstood something or if anything has changed in the five years since.</p>
<python><excel><xlsxwriter>
2025-08-04 04:10:56
1
1,148
Chris Dixon
79,724,168
3,646,845
Create Action Button in Open edx Admin panel using tutor plugins
<p>I created and enabled a tutor plugin successfully using this <a href="https://docs.tutor.edly.io/plugins/v0/gettingstarted.html#python-package" rel="nofollow noreferrer">command</a></p> <blockquote> <p>cookiecutter <a href="https://github.com/overhangio/cookiecutter-tutor-plugin.git" rel="nofollow noreferrer">https://github.com/overhangio/cookiecutter-tutor-plugin.git</a></p> </blockquote> <p>How would I use this plugin to implement <strong>Admin Action Button</strong>: I have a folder adminUser with 2 files <strong>init</strong>.py (from . import admin) and <strong>admin.py</strong> see content below:</p> <pre><code>from django.contrib import admin from django.contrib.auth.models import User @admin.action(description=&quot;Mark selected Users as inactive&quot;) def mark_users_inactive(modeladmin, request, queryset): queryset.update(is_active=False) modeladmin.message_user(request, f&quot;{queryset.count()} users marked as inactive.&quot;) admin.site.unregister(User) @admin.register(User) class CustomUserAdmin(admin.ModelAdmin): list_display = (&quot;username&quot;, &quot;email&quot;, &quot;first_name&quot;, &quot;last_name&quot;, &quot;is_staff&quot;, &quot;is_active&quot;) actions = [mark_users_inactive] </code></pre> <p>I added the lines below to the <strong>plugin.py</strong>:</p> <pre><code>PLUGIN_ROOT = Path(__file__).parent.parent.resolve() hooks.Filters.COMPOSE_MOUNTS.add_item((&quot;lms&quot;, (str(PLUGIN_ROOT / &quot;adminAction&quot;), &quot;/openedx/edx-platform/adminAction&quot;))) hooks.Filters.COMPOSE_MOUNTS.add_item((&quot;cms&quot;, (str(PLUGIN_ROOT / &quot;adminAction&quot;), &quot;/openedx/edx-platform/adminAction&quot;))) </code></pre> <p>Added <strong>patches/openedx-lms-env</strong> with <code>INSTALLED_APPS += [&quot;adminAction&quot;]</code></p> <p>Added <code>recursive-include adminAction *</code> in <strong>./MANIFEST.in</strong></p> <p><strong>In pyproject.toml</strong> Added <code>include = [&quot;adminAction&quot;]</code> under <code>[tool.hatch.build.targets.wheel]</code></p> <p>Updated <code>include = [ &quot;/tutoradmin&quot;, &quot;/adminAction&quot;, &quot;.hatch_build.py&quot;]</code> under <code>[tool.hatch.build.targets.sdist]</code></p> <p>Yet the Action Button is not visible. Please what am I doing wrong?</p>
<python><django><django-admin><openedx><cookiecutter>
2025-08-03 16:15:51
0
2,137
Paullo
79,724,122
1,155,237
Modifying timestamps on Windows reparse points
<p>I need to copy timestamps (modified time) between NTFS reparse points of type junction, directory symlink and file symlink.</p> <p>I can get the timestamp of a reparse point with <code>os.lstat()</code>. But to apply that timestamp I would need to use <code>os.utime()</code> but its parameter <code>follow_symlinks</code> is not implemented on Windows, so <code>os.utime()</code> always sets the timestamp on the target of a reparse point but I need to set it on the reparse point itself.</p> <pre><code>&gt;&gt;&gt; os.utime in os.supports_follow_symlinks False </code></pre> <p>So how can I set time on Windows reparse points in Python?</p> <p>I prefer solutions using <code>ctypes</code> over third party libraries for this.</p>
<python><timestamp><symlink><ntfs><reparsepoint>
2025-08-03 15:17:58
0
3,610
mcu
79,724,059
1,155,237
Robocopy algorithm for /FFT and /DST options
<p>Microsoft's <code>Robocopy</code> program has these two options:</p> <pre><code>/FFT :: assume FAT File Times (2-second granularity). /DST :: compensate for one-hour DST time differences. </code></pre> <p>What is the exact algorithm that Robocopy uses for this?</p> <p>If I wanted to implement this in Python for example, how would I do this? I would like to know <em>specifically how Robocopy does it</em>, and I would like to know if there is a more correct way to compare timestamps while allowing for DST and filesystem differences.</p> <pre><code>def compare_file_timestamps_robocopy_style(file1, file2): &quot;&quot;&quot;Returns True if the file timestamps should be considered equal when adjusted for filesystem granularity and daylight savings time. This uses the same algorithm as Microsoft's Robocopy. &quot;&quot;&quot; time1 = os.path.getmtime(file1) time2 = os.path.getmtime(file2) return ??? </code></pre> <p>A pseudocode answer would be ok too if it is detailed enough.</p>
<python><timestamp><timezone><dst><robocopy>
2025-08-03 13:38:35
0
3,610
mcu
79,723,933
989,803
GLPK sensitivty result parsing with PULP
<p>I am trying to figure out how to read the GLPK sensitivity results back into Python</p> <p>An example of how i am generating it:</p> <pre><code>import pulp from pulp import LpStatus, value from glpk_sensitivity_parser import parse_sensitivity prob = pulp.LpProblem(&quot;TEST&quot;, pulp.LpMaximize) D = pulp.LpVariable(&quot;D&quot;, lowBound=0) P = pulp.LpVariable(&quot;P&quot;, lowBound=0) prob += 10 * D + 15 * P, &quot;Objective&quot; prob += 2 * D + 4 * P &lt;= 100, &quot;aluminium&quot; prob += 3 * D + 2 * P &lt;= 80, &quot;steel&quot; prob.solve() temp_file_name = f&quot;sensitivity_LP_problem.sen&quot; prob.solve(pulp.GLPK(options=['--ranges', f'{temp_file_name}'])) </code></pre> <p>This then generates a file which is humanly readable but not easy to parse.</p> <p>It is all doable I guess but what if the file format changes.</p> <p>I feel there must be a better/cleaner way of doing this than I am doing.</p> <p>So the question is, can i access the data before it gets put into this file format? There should be, but i was not able to find it</p>
<python><parsing><pulp><glpk>
2025-08-03 09:56:15
0
303
user989803
79,723,724
3,840,940
Apache Flink sinkfunction python code throws exception
<p>The Apache Flink 2.1 does not support mongodb python connectors. So I make the sample python codes by using SinkFunction.</p> <pre><code>from pyflink.datastream import StreamExecutionEnvironment from pyflink.datastream.functions import SinkFunction from pymongo import MongoClient import json class MongoSink(SinkFunction): def __init__(self, uri, database, collection): self._uri = uri self._db = database self._coll = collection self._client = None def open(self, runtime_context): self._client = MongoClient(self._uri) self.collection = self._client[self._db][self._coll] def invoke(self, value, context): doc = value if isinstance(value, str): doc = json.loads(value) self.collection.insert_one(doc) def close(self): if self._client: self._client.close() def main(): env = StreamExecutionEnvironment.get_execution_environment() env.add_jars('file:///home/joseph/flink/jars/flink-connector-mongodb-2.0.0-1.20.jar' ,'file:///home/joseph/flink/jars/flink-connector-mongodb-cdc-3.0.1.jar') # your data stream ds = env.from_collection([ '{&quot;_id&quot;:1, &quot;name&quot;:&quot;Alice&quot;}', '{&quot;_id&quot;:2, &quot;name&quot;:&quot;Bob&quot;}' ]) ds.add_sink(MongoSink( uri=&quot;mongodb://user:pass@127.0.0.1:27017&quot;, database=&quot;my_db&quot;, collection=&quot;my_coll&quot; )) env.execute(&quot;PyFlink MongoDB&quot;) if __name__ == &quot;__main__&quot;: main() </code></pre> <p>But the exceptions are thrown from Sink class.</p> <pre><code>Traceback (most recent call last): File &quot;/home/joseph/VSCode_Workspace/etl-stream-python/com/aaa/etl/etl_data_uploader_mysql.py&quot;, line 78, in &lt;module&gt; main() File &quot;/home/joseph/VSCode_Workspace/etl-stream-python/com/aaa/etl/etl_data_uploader_mysql.py&quot;, line 70, in main ds.add_sink(MongoSink( File &quot;/home/joseph/VSCode_Workspace/.venv-etl/lib/python3.11/site-packages/pyflink/datastream/data_stream.py&quot;, line 819, in add_sink return DataStreamSink(self._j_data_stream.addSink(sink_func.get_java_function())) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/home/joseph/VSCode_Workspace/.venv-etl/lib/python3.11/site-packages/pyflink/datastream/functions.py&quot;, line 586, in get_java_function return self._j_function ^^^^^^^^^^^^^^^^ AttributeError: 'MongoSink' object has no attribute '_j_function' </code></pre> <p>I want to know if I can make sink class with pyflink 2.1 or not. Kindly inform me the python MongoDB sink class example codes.</p>
<python><apache-flink><pyflink>
2025-08-03 01:00:05
0
1,441
Joseph Hwang
79,723,653
27,596,369
Pygame image not facing the mouse
<p>Here is my rotating code:</p> <pre><code>pos = pygame.mouse.get_pos() x_dist = pos[0] - self.rect.centerx y_dist = -(pos[1] - self.rect.centery) self.angle = math.degrees(math.atan2(y_dist, x_dist)) self.image = pygame.transform.rotate(self.original_image, self.angle) </code></pre> <p>Here is my image:</p> <p><a href="https://i.sstatic.net/Q5hE1nZb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Q5hE1nZb.png" alt="enter image description here" /></a></p> <p>The problem is, when I rotate it, I have to subtract the angle by 90 to get the right result.</p> <p>I have searched all over stack overflow but nothing works.</p>
<python><math><pygame><degrees><radians>
2025-08-02 20:53:13
1
1,512
Aadvik
79,723,518
3,208,351
How to set up a simple hello-world example where a C function calls a Cython function calling a Python function?
<p>I am having trouble making a simple &quot;Hello World&quot; python function which I can call from a C program.</p> <p>Here is the contents of my <code>helloworld.py</code> file:</p> <pre class="lang-python prettyprint-override"><code>def hw(): print(&quot;Hello World&quot;) </code></pre> <p>Here is the contents of the <code>caller.pyx</code> file:</p> <pre class="lang-none prettyprint-override"><code>from helloworld import hw cdef public void call_hw(): hw() </code></pre> <p>And here is the contents of my <code>main.c</code> file:</p> <pre class="lang-c prettyprint-override"><code>#include &lt;Python.h&gt; #include &quot;caller.h&quot; int main() { Py_Initialize(); call_hw(); Py_Finalize(); } </code></pre> <p>Here are the commands I do:</p> <pre class="lang-none prettyprint-override"><code>$ cython caller.pyx $ gcc -g -Wall -I/usr/include/python3.12 -c caller.c $ gcc -g -Wall -I/usr/include/python3.12 -c main.c $ gcc -g -Wall -I/usr/include/python3.12 -o main *.o -lpython3.12 $ ./main Segmentation fault (core dumped) </code></pre> <p>Here is a backtrace:</p> <pre class="lang-none prettyprint-override"><code>Program received signal SIGSEGV, Segmentation fault. 0x0000555555558898 in __Pyx__GetModuleGlobalName (name=0x0) at caller.c:2739 2739 result = _PyDict_GetItem_KnownHash(__pyx_d, name, ((PyASCIIObject *) name)-&gt;hash); (gdb) bt #0 0x0000555555558898 in __Pyx__GetModuleGlobalName (name=0x0) at caller.c:2739 #1 0x0000555555556d9a in call_hw () at caller.c:2002 #2 0x000055555555cef6 in main () at main.c:8 </code></pre> <p>Can anyone tell me what I'm doing wrong?</p>
<python><c><python-3.x><cython>
2025-08-02 16:59:55
2
585
vibe
79,723,508
5,732,745
RuntimeWarning: coroutine <aiortc> was never awaited
<p>I am trying to use two Python libraries together: <a href="https://github.com/NiiightmareXD/windows-capture/tree/main/windows-capture-python" rel="nofollow noreferrer">windows-capture</a> and <a href="https://github.com/aiortc/aiortc" rel="nofollow noreferrer">aiortc</a></p> <p>Below is the code in my server.py:</p> <pre class="lang-py prettyprint-override"><code>import asyncio from windows_capture import Frame, InternalCaptureControl from aiohttp import web from aiortc import RTCPeerConnection, RTCSessionDescription my_channel = None is_my_channel_open = False async def offer(request): params = await request.json() offer = RTCSessionDescription(sdp=params[&quot;sdp&quot;], type=params[&quot;type&quot;]) pc = RTCPeerConnection() global my_channel my_channel = pc.createDataChannel(&quot;my_channel&quot;) @my_channel.on(&quot;open&quot;) def on_open(): # REFERENCE BELOW my_channel.send(&quot;Hello from Python aiortc server!&quot;) # REFERENCE ABOVE global is_my_channel_open is_my_channel_open = True # handle offer await pc.setRemoteDescription(offer) # send answer answer = await pc.createAnswer() await pc.setLocalDescription(answer) def setup_aiohttp_server(): app = web.Application() app.router.add_post(&quot;/offer&quot;, offer) web.run_app( app, host=&quot;0.0.0.0&quot;, port=8080 ) print(&quot;This print will never happen&quot;) def setup_windows_capture(): # Called Every Time A New Frame Is Available @capture.event def on_frame_arrived(frame: Frame, capture_control: InternalCaptureControl): if is_my_channel_open: # PROBLEM AREA BELOW my_channel.send(&quot;data sent from on_frame_arrived&quot;) # PROBLEM AREA ABOVE print_capture_fps() # This prints the capture FPS once a second to console # Starts the capture session capture.start() print(&quot;This print will never happen&quot;) async def main(): task1 = asyncio.to_thread(setup_windows_capture) task2 = asyncio.to_thread(setup_aiohttp_server) await asyncio.gather(task1, task2) asyncio.run(main()) </code></pre> <p>My problem is that I'm trying to call <code>my_channel.send(&quot;data sent from on_frame_arrived&quot;)</code> in the <em>PROBLEM AREA</em> but it seems to completely break the <code>on_frame_arrived</code> function. Every second, the FPS is printed to console, but after calling this method, the FPS stops and I get this error:</p> <pre class="lang-bash prettyprint-override"><code>C:\Users\Dzenis\AppData\Local\Programs\Python\Python313\Lib\site-packages\windows_capture\__init__.py:224: RuntimeWarning: coroutine 'RTCSctpTransport._data_channel_flush' was never awaited self.capture.start() RuntimeWarning: Enable tracemalloc to get the object allocation traceback </code></pre> <p>On the browser, I get &quot;Hello from Python aiortc server!&quot; printed, but I only get &quot;data sent from on_frame_arrived&quot; once, instead of an expected ~60 times per second</p>
<python><aiortc>
2025-08-02 16:44:15
0
347
Dzenis Zigo
79,723,457
1,096,196
Is this Sumtracker API request correct?
<p>I wish to use the Sumtracker API, and I want to connect and do a basic get for <a href="https://inventory-api.sumtracker.com/api/version/2025-03/products/" rel="nofollow noreferrer">this endpoint</a>.</p> <p>Let's say that my API key is <code>1234567890abc</code>. The code is here:</p> <pre><code>import requests url = &quot;https://inventory-api.sumtracker.com/api/version/2025-03/products/&quot; headers = { &quot;accept&quot;: &quot;application/json&quot;, &quot;Authorization&quot;: &quot;1234567890abc&quot; } response = requests.get(url, headers=headers) </code></pre> <p>The response gives a 403 and the text is:</p> <pre class="lang-json prettyprint-override"><code>{&quot;type&quot;:&quot;client_error&quot;,&quot;errors&quot;:[{&quot;code&quot;:&quot;permission_denied&quot;,&quot;detail&quot;:&quot;You do not have permission to perform this action.&quot;,&quot;attr&quot;:null}]} </code></pre> <p>This suggests to me that the API key is not working correctly. This is the exact example that they give <a href="https://developers.sumtracker.com/reference/productlist" rel="nofollow noreferrer">on their website</a> so I have to presume it is correct.</p> <p>That said, on their <a href="https://developers.sumtracker.com/reference/authentication-1" rel="nofollow noreferrer">authentication section</a>, it says this:</p> <blockquote> <p><code>Authorization: Api-Key &lt;api-key-value&gt;</code></p> <p>Lets say the API key is <em>dv7dm.asm1hga2seks4uybay22hyuar</em></p> <p>Then, the value of the header will be <code>Api-Key dv7dm.asm1hga2seks4uybay22hyuar</code></p> </blockquote> <p>Does this mean that the API key should also include that text? I have tried using this line:</p> <pre><code>&quot;Authorization&quot;: &quot;Api-Key 1234567890abc&quot; </code></pre> <p>but it didn't seem to make a difference. Is it typical that the authorization will contain more than just the key itself?</p>
<python><rest><python-requests>
2025-08-02 15:17:32
1
741
TheFaithfulLearner
79,723,447
10,078,148
How to add automatic attributes to the Python Markdown library
<p>I'm trying to import CSS styles into my HTML file exported by the Markdown library, but for some reason the attributes are basic, or so to speak, to generate a clean layout. Is it possible to add these attributes to the entire layout?</p> <pre class="lang-py prettyprint-override"><code>from markdown import markdown text = ''' #Example ## Emphasis **This is bold text** __This is bold text__ ... html = markdown(text,extensions=['fenced_code','codehilite']) css = '&quot;css/github-markdown.css&quot;' doc = f'&lt;html&gt;&lt;head&gt;&lt;link rel=&quot;stylesheet&quot; type=&quot;text/css&quot; href={css}&gt;&lt;/head&gt;&lt;body&gt;{html}&lt;/body&gt;&lt;/html&gt;' print(doc) </code></pre> <pre class="lang-html prettyprint-override"><code>&lt;html&gt;&lt;head&gt;&lt;link rel=&quot;stylesheet&quot; type=&quot;text/css&quot; href=&quot;css/github-markdown.css&quot;&gt;&lt;/head&gt; &lt;body&gt;&lt;h1&gt;Example&lt;/h1&gt; &lt;h2&gt;Emphasis&lt;/h2&gt; &lt;p&gt;&lt;strong&gt;This is bold text&lt;/strong&gt;&lt;/p&gt; &lt;p&gt;&lt;strong&gt;This is bold text&lt;/strong&gt;&lt;/p&gt; &lt;p&gt;&lt;em&gt;This is italic text&lt;/em&gt;&lt;/p&gt; &lt;p&gt;&lt;em&gt;This is italic text&lt;/em&gt;&lt;/p&gt; &lt;p&gt;~~Strikethrough~~&lt;/p&gt; &lt;h2&gt;Blockquotes&lt;/h2&gt; &lt;blockquote&gt; &lt;p&gt;Blockquotes can also be nested...&lt;/p&gt; &lt;blockquote&gt; &lt;p&gt;...by using additional greater-than signs right next to each other...&lt;/p&gt; &lt;blockquote&gt; &lt;p&gt;...or with spaces between arrows.&lt;/p&gt; &lt;/blockquote&gt; &lt;/blockquote&gt; &lt;/blockquote&gt; &lt;h2&gt;Lists&lt;/h2&gt; &lt;p&gt;Unordered&lt;/p&gt; &lt;ul&gt; &lt;li&gt;Create a list by starting a line with &lt;code&gt;+&lt;/code&gt;, &lt;code&gt;-&lt;/code&gt;, or &lt;code&gt;*&lt;/code&gt;&lt;/li&gt; &lt;li&gt;Sub-lists are made by indenting 2 spaces:&lt;/li&gt; &lt;li&gt;Marker character change forces new list start:&lt;ul&gt; &lt;li&gt;Ac tristique libero volutpat at&lt;/li&gt; &lt;li&gt;Facilisis in pretium nisl aliquet&lt;/li&gt; &lt;li&gt;Nulla volutpat aliquam velit&lt;/li&gt; &lt;/ul&gt; &lt;/li&gt; &lt;li&gt;Very easy!&lt;/li&gt; &lt;/ul&gt; &lt;p&gt;Ordered&lt;/p&gt; &lt;ol&gt; &lt;li&gt;Lorem ipsum dolor sit amet&lt;/li&gt; &lt;li&gt;Consectetur adipiscing elit&lt;/li&gt; &lt;li&gt;Integer molestie lorem at massa&lt;/li&gt; &lt;/ol&gt;&lt;/body&gt;&lt;/html&gt; </code></pre> <p>By default, styles have attributes like <code>.markdown-body</code> and others.</p> <p>I expect a result like the one shown below:</p> <p><code>&lt;body class=&quot;markdown-body&quot;&gt;&lt;h1&gt;Example&lt;/h1&gt;</code></p>
<python><markdown>
2025-08-02 15:07:20
1
645
royer
79,723,352
1,035,783
Close terminal window opened by .command Python script
<p>I have a Python script. I'd like to double click a file on my desktop (macOS) to run it. I'd like the terminal window opened by running the script to close.</p> <p>To start, I changed the file extension of my script from <code>.py</code> to <code>.command</code> and gave it the following permissions: <code>chmod a+x file.py</code>.</p> <p>Here is the sample script:</p> <pre><code>#!/usr/bin/env python # this worked! # https://stackoverflow.com/questions/5125907/how-to-run-a-shell-script-in-os-x-by-double-clicking # chmod a+x filename # .command # https://nordvpn.com/blog/macos-cannot-verify-this-app-is-free-from-malware/ # xattr -d com.apple.quarantine filepath from time import sleep from selenium import webdriver from selenium.webdriver.common.by import By import os service = webdriver.ChromeService(executable_path = '/usr/local/bin/chromedriver') driver = webdriver.Chrome(service=service) driver.get(&quot;https://google.com/&quot;) sleep(3) </code></pre> <p>This is working nicely, however, when the terminal window opens to run, it does not close itself.</p> <p>I'd like the terminal window to close after completion. Right now it just says '[Process Completed]' but doesn't close the actual terminal window.</p> <p>I've tried <code>quit()</code> and <code>exit()</code>.</p> <p><a href="https://i.sstatic.net/c4XI6fgY.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/c4XI6fgY.jpg" alt="enter image description here" /></a></p>
<python><macos><terminal>
2025-08-02 12:18:14
2
1,474
chrisallick
79,723,041
5,125,230
Find roman numerals case-insensitively in regular text
<p>The regexes found in the answers to <a href="https://stackoverflow.com/questions/29586972/match-only-roman-numerals-with-regular-expression">this</a> SO question have various problems with finding only valid roman numbers that are within regular text (not on a line by themselves). (To be clear, that wasn't a requirement of the original question, but a couple of the answers did try to address it).</p> <p>The core issue is that, although various methods are used to not match an empty string, none of them handle an empty <em>match</em>. That is, it's possible for a match itself to be empty, even if the string isn't.</p> <p>I've searched quite a bit for other possible solutions, and they're either marked as duplicate and closed due to the one above (SO), or the answers aren't any better (SO and external).</p> <p>Small amount of test data to demonstrate the problem:</p> <pre class="lang-none prettyprint-override"><code>Charles I was a bad king, I was not. Charles X was a good one. Who was Louis XVI? The year is MCMXCIX, the month is June. Do you need an X-ray, do you think? My friends Cil and Cleo met me for coffee. MCMLXIX </code></pre> <p>This is the best regex I could identify of the group for this particular problem, improved slightly (it doesn't work without the changes, either, so I don't believe they're the problem).</p> <p><code>(?=\b[MCDXLVI]+\b)M{0,4}(?:CM|CD|D?C{0,3})(?:XC|XL|L?X{0,3})(?:IX|IV|V?I{0,3})(?!-)\b</code></p> <p>This regex (in <em>case-insensitive</em> mode) finds all of the valid roman numbers in the above test data (including the false-positive &quot;I&quot; in &quot;I was not&quot;, but that is to be expected and not an issue), but has two empty matches:</p> <ol> <li><p>the empty string prior to the <code>X</code> in <code>X-ray</code> in the 5th line.</p> </li> <li><p>the empty string prior to <code>Cil</code> in the next-to-last-line.</p> </li> </ol> <p>This occurs because both &quot;X&quot; and &quot;Cil&quot; pass the lookahead since they contain solely roman numeral characters, but the rest of the regex doesn't match anything due to the trailing dash in the first case and Cil not being a valid roman number in the second case. Thus in each case there's a match, but it's empty.</p> <p>The question: is it possible to modify the regex to not allow an empty match? (No empty match at all, not just on this test data.)</p> <p>Ultimately this will be in python 3, but I've also been testing in a PCRE2 editor for convenience. (And in regex101 with both PCRE2 and python for a sanity check.) For completeness, although it's in python, the solution needs to be a single regex, not programmatic, e.g. looping through matches.</p>
<python><regex><roman-numerals>
2025-08-02 00:02:10
1
614
vr8ce
79,722,918
5,322,739
Python Selenium: How can I specify a "no timeout" WebDriverWait?
<p>I'm using <a href="https://pypi.org/project/selenium/" rel="nofollow noreferrer">Selenium</a> <a href="https://www.selenium.dev/selenium/docs/api/py/webdriver_support/selenium.webdriver.support.wait.html" rel="nofollow noreferrer">WebDriverWait</a> in a Python function decorated with <code>@timeout()</code>. Since timeout is handled at the function level, I really don't need WebDriverWait to timeout. Is there a way to have a WebDriverWait instance with no timeout?</p>
<python><selenium-webdriver><webdriverwait>
2025-08-01 20:04:18
1
532
Geoff Alexander
79,722,891
2,957,581
Not all files have gcno with bazel --coverage
<p><strong>NB</strong> I have the custom targets which run by python scripts. So I couldn't use <code>bazel coverage</code> directly for this one.</p> <p>1.</p> <ul> <li>I defined in <code>.bazelrc</code></li> </ul> <pre><code>build:coverage --collect_code_coverage build:coverage --instrumentation_filter=&quot;^//src[/:]&quot; coverage --instrumentation_filter=&quot;^//src[/:]&quot; </code></pre> <ul> <li>In the root BUILD</li> </ul> <pre><code>config_setting( name = &quot;Coverage&quot;, flag_values = { &quot;:coverage&quot;: &quot;true&quot;, }, ) bool_flag( name = &quot;coverage&quot;, build_setting_default = False, ) </code></pre> <ul> <li>In the <code>cc_binary</code> wrapper I use</li> </ul> <pre><code>+ select({&quot;//:Coverage&quot;: coverageCopts, &quot;//conditions:default&quot;: [], }), // and select({&quot;//:Coverage&quot;: coverageLopts, &quot;//conditions:default&quot;: [], }), </code></pre> <p>where</p> <pre><code>coverageCopts = [&quot;--coverage&quot;, &quot;-fprofile-abs-path&quot;, &quot;-fprofile-arcs&quot;, &quot;-ftest-coverage&quot;] coverageLopts = [&quot;--coverage&quot;, &quot;-lgcov&quot;] </code></pre> <ul> <li>and run this python script (from bazel) like this:</li> </ul> <pre class="lang-bash prettyprint-override"><code>bazel run --subcommands --//\:coverage=true script:runner </code></pre> <ol start="2"> <li><p>But for some mystical reasons I have a lack of <code>gcno</code> files.</p> </li> <li><p>although there are these <code>gcno</code>'s for neighboring files from the same library.</p> </li> <li><p>Grepping for <code>bazel --subcommands</code> indicates the presences of <code>--coverage</code> flag for this cpp file.</p> </li> <li><p>I did <code>bazel clean --expunge</code> - same result.</p> </li> </ol>
<python><code-coverage><bazel>
2025-08-01 19:18:06
1
372
Ulrich Von Rekkenin
79,722,771
1,970,300
Using a model property (list of dictionaries) as an input to django's format_html_join() yields KeyError
<p>I am attempting to use Django's <code>format_html_join()</code> util to return an html formatted version history for one of my models. But I cannot get <code>format_html_join()</code> to accept my list of dictionaries.</p> <p>Here is what the documentation suggests:</p> <pre><code>format_html_join( &quot;\n&quot;, '&lt;li data-id=&quot;{id}&quot;&gt;{id} {title}&lt;/li&gt;', ({&quot;id&quot;: b.id, &quot;title&quot;: b.title} for b in books), ) </code></pre> <p>That third argument is intended to be: args_generator should be an iterator that yields arguments to pass to format_html(), either sequences of positional arguments or mappings of keyword arguments.</p> <p>I have tried different ways to get this to work and I'm not getting it, so I'm asking for help. I thought a list of dictionaries is iterable. I'm also thinking there has to be a way to use a list of dictionaries in a util that is expecting a list of dictionaries without having to re-create the list of dictionaries. Here is the model method I have to get the version history:</p> <pre><code> @property # I have tried this as a property and not as a property, neither works def get_version_history(self): versions = Version.objects.get_for_object(self) version_history = [] for version in versions: history_fields = version.field_dict hdict = {&quot;question&quot;: history_fields['question'], &quot;answer&quot;: history_fields['answer'], &quot;user&quot;: version.revision.user.username, &quot;timestamp&quot;: version.revision.date_created.strftime(&quot;%Y-%m-%d %H:%M&quot;), } version_history.append(hdict) return version_history </code></pre> <p>That method returns something like this:</p> <pre><code>[{'question': &quot;I'm out of questions&quot;, 'answer': 'bye', 'user': 'test.supervisor', 'timestamp': '2025-07-31 20:19'}, {'question': &quot;I'm out of questions&quot;, 'answer': 'me too', 'user': 'test.supervisor', 'timestamp': '2025-07-31 20:18'}, {'question': &quot;I'm out of questions&quot;, 'answer': '', 'user': 'test.supervisor', 'timestamp': '2025-07-31 20:18'}] </code></pre> <p>Now I am trying to return an html formatted version of that list of dictionaries:</p> <pre><code> def version_html(self): html = format_html_join( &quot;\n&quot;, &quot;&quot;&quot;&lt;tr&gt; &lt;td&gt;{question}&lt;/td&gt; &lt;td&gt;{answer}&lt;/td&gt; &lt;td&gt;{user}&lt;/td&gt; &lt;td&gt;{timestamp}&lt;/td&gt; &lt;/tr&gt;&quot;&quot;&quot;, self.get_version_history ) return html </code></pre> <p>The above code returns this error:</p> <pre><code>File ~/.virtualenvs/cep/lib/python3.12/site-packages/django/utils/html.py:112, in format_html(format_string, *args, **kwargs) 110 args_safe = map(conditional_escape, args) 111 kwargs_safe = {k: conditional_escape(v) for (k, v) in kwargs.items()} --&gt; 112 return mark_safe(format_string.format(*args_safe, **kwargs_safe)) KeyError: 'question' </code></pre> <p>I have tried various things for the third argument, all with various errors:</p> <pre><code>self.get_version_history self.get_version_history() **self.get_version_history **self.get_version_history() {&quot;question&quot;: v.question, &quot;answer&quot;: v.answer, &quot;user&quot;: v.user, &quot;timestamp&quot;: v.timestamp,} for v in self.get_version_history()) ({&quot;question&quot;: v['question'], &quot;answer&quot;: v['answer'], &quot;user&quot;: v['user'], &quot;timestamp&quot;: v['timestamp']} for v in self.get_version_history()) {&quot;question&quot;: v.question, &quot;answer&quot;: v.answer, &quot;user&quot;: v.user, &quot;timestamp&quot;: v.timestamp,} for v in self.get_version_history) ({&quot;question&quot;: v['question'], &quot;answer&quot;: v['answer'], &quot;user&quot;: v['user'], &quot;timestamp&quot;: v['timestamp']} for v in self.get_version_history) (d for d in self.get_version_history) (d for d in self.get_version_history()) [d for d in self.get_version_history] [d for d in self.get_version_history()] </code></pre> <p>Now I'm just thrashing.</p>
<python><html><django><django-reversion>
2025-08-01 16:40:22
1
504
robline
79,722,764
3,853,504
Polars bug using windowed aggregate functions on Decimal type columns
<h1>Windowed aggregate functions on Decimal-types move decimals to integers</h1> <p>I found a bug in <code>polars</code> (version 1.21.0 in a Python 3.10.8 environment) using windowed aggregate functions. They are not properly handling the decimal, essentially multiplying the result by 100. Here is a minimum reproducible example:</p> <pre><code>import polars as pl pl.__version__ # 1.21.0 pl.DataFrame( { 'People':['John', 'John', 'John', 'John', 'Jane', 'Jane', 'Jane', 'Jane'], 'Balance': [0.00, 10.59, 11.29, 0.00, 12.34, 23.45, 34.56, 45.67] } , schema={'People':pl.String, 'Balance':pl.Decimal(10, 2)} ).with_columns( Bal_max = pl.col(&quot;Balance&quot;).max(), Bal_max_person_wrong = pl.col(&quot;Balance&quot;).max().over(&quot;People&quot;), Bal_max_person_right = pl.col(&quot;Balance&quot;).cast(float).max().over(&quot;People&quot;), Bal_min_person_wrong = pl.col(&quot;Balance&quot;).min().over(&quot;People&quot;), Bal_sum_person_wrong = pl.col(&quot;Balance&quot;).sum().over(&quot;People&quot;) ) </code></pre> <p>The results are below:</p> <p><a href="https://i.sstatic.net/cwVrtQyg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cwVrtQyg.png" alt="A DataFrame with Decimal-type columns that are 100-times too large." /></a></p> <p>What should I do about this? I'm tempted to hack it a bit and divide by 100, but that seems unwise. I will probably treat the data as floats, but I prefer limiting the decimals for values that will always be treated as dollars and cents. Any advice you can give would be appreciated!</p>
<python><dataframe><window-functions><python-polars>
2025-08-01 16:34:31
0
935
jpm_phd
79,722,626
3,554,065
Python class attribute: How to refer to parent class attribute implicitly?
<p>I can't understand why this isolated code don't work :</p> <pre><code>class SwitchSchema(): CONF_CONST_POWER = &quot;constant_power&quot; class LightSchema(SwitchSchema): TestVar = CONF_CONST_POWER print(LightSchema.TestVar) </code></pre> <p>Error: <code>File &quot;main.py&quot;, line 7, in &lt;module&gt;, NameError: name 'CONF_CONST_POWER' is not defined</code></p> <p>Are the class attributes not inherited ?</p>
<python>
2025-08-01 14:18:39
1
3,155
Alsatian
79,722,617
11,580,831
pd.api.types.is_string_dtype() is misleading
<pre><code>df = pd.DataFrame({ 'col_str': [&quot;a&quot;, &quot;b&quot;, &quot;c&quot;], 'col_lst_str': [[&quot;a&quot;, &quot;b&quot;, &quot;c&quot;], [&quot;d&quot;, &quot;e&quot;, &quot;f&quot;], [&quot;g&quot;, &quot;h&quot;, &quot;i&quot;]], 'col_lst_int': [[1, 2, 3], [4, 5, 6], [7, 8, 9]], 'col_arr_int': [np.array([1, 2, 3]),np.array([4, 5, 6]), np.array([7, 8, 9])] }) print(df.dtypes) print(pd.api.types.is_object_dtype(df['col_lst_int'].dtype)) # return True expected ! print(pd.api.types.is_object_dtype(df['col_arr_int'].dtype)) # return True expected ! print(pd.api.types.is_string_dtype(df['col_lst_int'].dtype)) # return True confusing !! print(pd.api.types.is_string_dtype(df['col_arr_int'].dtype)) # return True confusing !! print(df['col_lst_int'].apply(lambda x: isinstance(x, list)).all()) # return True expected ! print(df['col_arr_int'].apply(lambda x: isinstance(x, np.ndarray)).all()) # return True expected ! </code></pre> <p>When a pandas dataframe column contains lists or numpy arrays of <strong>integer elements</strong> (column dtype=object) both pd.api.types.is_object_dtype() and pd.api.types.is_string_dtype() return True which is completely misleading. I was expecting that pd.api.types.is_string_dtype() will return False. Now my column is seems to have two valid dtypes, dtype = object and dtype = string which can cause serious problemes in conditionnal logics. Even the API doc <a href="https://pandas.pydata.org/docs/reference/api/pandas.api.types.is_string_dtype.html" rel="nofollow noreferrer">official</a> is misleading claiming that the element must be inferred as string. How come elements 1 2 3 can be infered as string in my example ? It seems to works as expected with pandas Series though , Is it a bug with dataframes ?</p> <p><a href="https://i.sstatic.net/9H2t7qKN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9H2t7qKN.png" alt="Doc" /></a></p>
<python><pandas><string><type-inference><dtype>
2025-08-01 14:08:08
1
1,663
Alexis
79,722,555
1,306,892
How to reliably download 1969 “Gazzetta Ufficiale” PDFs (Italian Official Gazette) with Python?
<p>I’m trying to programmatically download the full “pubblicazione completa non certificata” PDFs of the Italian <em>Gazzetta Ufficiale – Serie Generale</em> for <strong>1969</strong> (for an academic article). The site has a 1946–1985 “Formato grafico PDF” search and an archive index:</p> <ul> <li>Year index (1946–1985): <code>https://www.gazzettaufficiale.it/ricerca/pdf/foglio_ordinario2/2/0/0?reset=true</code></li> <li>Archive list for Serie Generale 1969: <code>https://www.gazzettaufficiale.it/ricercaArchivioCompleto/serie_generale/1969</code></li> <li>Detail pages look like: <code>.../gazzetta/serie_generale/caricaDettaglio?dataPubblicazioneGazzetta=1969-01-02&amp;numeroGazzetta=1</code></li> </ul> <h3>What I’ve tried</h3> <ol> <li><p><strong>Selenium</strong>: navigate to the year picker and click <strong>1969</strong>. On my machine it often times out: the year link/input is either not present, hidden in an iframe, or overlaid by a banner. I tried switching frames and even injecting the year via JS, but it’s brittle and unreliable.</p> </li> <li><p><strong>Requests + BeautifulSoup</strong> on the “year grid” page: in some HTML copies (from another session) I can see direct links like <code>&lt;a class=&quot;download_pdf&quot; href=&quot;/do/gazzetta/downloadPdf?...&quot;&gt;Download PDF&lt;/a&gt;</code> —but in my live session those anchors are <strong>not there</strong>, so scraping returns <strong>0 links</strong>.</p> </li> <li><p><strong>Manually building the download URL</strong> from the archive list (date &amp; issue number), e.g.: <code>/do/gazzetta/downloadPdf?dataPubblicazioneGazzetta=19690102&amp;numeroGazzetta=1&amp;tipoSerie=SG&amp;tipoSupplemento=GU&amp;numeroSupplemento=0&amp;progressivo=0&amp;edizione=0&amp;estensione=pdf</code> This returns <strong>HTML</strong>, not a PDF. When saved with “.pdf”, Acrobat says the file is damaged; the file actually contains an HTML message like: <strong>“Il pdf selezionato non è stato trovato”</strong>.</p> </li> <li><p><strong>Requests on each detail page</strong>: fetch the detail URL and look for either <code>a.download_pdf</code> or anchors containing “Scarica il PDF” / “pubblicazione completa non certificata”. For 1969 I consistently find <strong>no such link</strong> on the page, so I can’t discover a valid <code>downloadPdf</code> URL at runtime.</p> </li> </ol> <h3>Minimal reproducible example (requests + bs4)</h3> <pre class="lang-py prettyprint-override"><code>import requests from bs4 import BeautifulSoup from urllib.parse import urljoin, urlparse, parse_qs BASE = &quot;https://www.gazzettaufficiale.it&quot; YEAR = 1969 YEAR_URL = f&quot;{BASE}/ricercaArchivioCompleto/serie_generale/{YEAR}&quot; s = requests.Session() s.headers.update({ &quot;User-Agent&quot;: &quot;Mozilla/5.0&quot;, &quot;Accept&quot;: &quot;text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8&quot;, &quot;Referer&quot;: BASE, }) # 1) Collect detail pages (date + issue number) r = s.get(YEAR_URL, timeout=60) r.raise_for_status() soup = BeautifulSoup(r.text, &quot;html.parser&quot;) details = [] for a in soup.find_all(&quot;a&quot;, href=True): href = a[&quot;href&quot;] if (&quot;/gazzetta/serie_generale/caricaDettaglio&quot; in href and &quot;dataPubblicazioneGazzetta=&quot; in href and &quot;numeroGazzetta=&quot; in href): details.append(urljoin(BASE, href)) print(&quot;Detail pages found:&quot;, len(details)) print(&quot;Sample:&quot;, details[:3]) # 2) For one detail page, try to discover a real &quot;download PDF&quot; link detail_url = details[0] r = s.get(detail_url, timeout=60, headers={&quot;Referer&quot;: YEAR_URL}) r.raise_for_status() soup = BeautifulSoup(r.text, &quot;html.parser&quot;) # Try common selectors / texts dl = (soup.select_one('a.download_pdf[href]') or soup.select_one('a[href*=&quot;/do/gazzetta/downloadPdf&quot;]')) if not dl: for a in soup.find_all(&quot;a&quot;, href=True): if &quot;scarica il pdf&quot; in (a.get_text() or &quot;&quot;).lower(): dl = a break print(&quot;Download link found on detail page?&quot;, bool(dl)) if dl: print(&quot;Download href:&quot;, urljoin(BASE, dl[&quot;href&quot;])) </code></pre> <p><strong>Output I get:</strong></p> <pre><code>Detail pages found: 264 Sample: [https://www.gazzettaufficiale.it/gazzetta/serie_generale/caricaDettaglio?dataPubblicazioneGazzetta=1969-01-02&amp;numeroGazzetta=1, ...] Download link found on detail page? False </code></pre> <p>When I instead build the <code>downloadPdf</code> URL from the query params and try to download it, the response is <strong>HTML</strong> (not a PDF). Earlier I inadvertently saved those HTML responses as “.pdf”, resulting in 300+ “corrupted PDFs” that open as error in Acrobat.</p> <p>Any guidance or a working minimal example would be greatly appreciated. Thanks!</p>
<python><selenium-webdriver><web-scraping><pdf><beautifulsoup>
2025-08-01 13:13:38
2
1,801
Mark
79,722,449
243,031
Python gpt4all gives error for libllama.so python:3.12-slim image
<p>I am trying to use <a href="https://pypi.org/project/gpt4all/" rel="nofollow noreferrer"><code>gpt4all</code></a> in <code>python:3.12-slim</code> image.</p> <p>I created <code>Dockerfile</code> as below.</p> <pre><code>FROM python:3.12-slim RUN pip install setuptools gpt4all CMD [&quot;python&quot;, &quot;-c&quot;, &quot;from gpt4all import GPT4All&quot;] </code></pre> <p>I created <code>test</code> image with <code>docker build</code> command</p> <pre><code>% docker build . -t test [+] Building 4.0s (6/6) FINISHED docker:desktop-linux =&gt; [internal] load build definition from Dockerfile 0.0s =&gt; =&gt; transferring dockerfile: 148B 0.0s =&gt; [internal] load metadata for docker.io/library/python:3.12-slim 0.0s =&gt; [internal] load .dockerignore 0.0s =&gt; =&gt; transferring context: 1.03kB 0.0s =&gt; CACHED [1/2] FROM docker.io/library/python:3.12-slim 0.0s =&gt; [2/2] RUN pip install setuptools gpt4all 3.8s =&gt; exporting to image 0.1s =&gt; =&gt; exporting layers 0.1s =&gt; =&gt; writing image sha256:747f24854515a3d6583e930365d66553b79b30a2ccb180320fc399c349f670c5 0.0s =&gt; =&gt; naming to docker.io/library/test 0.0s View build details: docker-desktop://dashboard/build/desktop-linux/desktop-linux/1b9h0ans5zepm7k3enx62enpa </code></pre> <p>When try to run, it gives error.</p> <pre><code>% docker run --rm test /usr/local/lib/python3.12/site-packages/gpt4all/pyllmodel.py:2: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools&lt;81. import pkg_resources Traceback (most recent call last): File &quot;&lt;string&gt;&quot;, line 1, in &lt;module&gt; File &quot;/usr/local/lib/python3.12/site-packages/gpt4all/__init__.py&quot;, line 1, in &lt;module&gt; from . import gpt4all # noqa ^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.12/site-packages/gpt4all/gpt4all.py&quot;, line 6, in &lt;module&gt; from . import pyllmodel File &quot;/usr/local/lib/python3.12/site-packages/gpt4all/pyllmodel.py&quot;, line 39, in &lt;module&gt; llmodel, llama = load_llmodel_library() ^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.12/site-packages/gpt4all/pyllmodel.py&quot;, line 32, in load_llmodel_library llama_lib = ctypes.CDLL(llama_dir, mode=ctypes.RTLD_GLOBAL) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.12/ctypes/__init__.py&quot;, line 379, in __init__ self._handle = _dlopen(self._name, mode) ^^^^^^^^^^^^^^^^^^^^^^^^^ OSError: /usr/local/lib/python3.12/site-packages/gpt4all/llmodel_DO_NOT_MODIFY/build/libllama.so: cannot open shared object file: No such file or directory </code></pre> <p>Without docker image, it works fine on Mac M1.</p> <p>I tried <code>RUN apt-get install -y liblzma-dev</code> in the <code>Dockerfile</code> but that doesn't help.</p> <p><strong>UPDATE</strong></p> <p>After comments from <a href="https://stackoverflow.com/users/530160/nick-odell">Nick</a> I change the docker file as below.</p> <pre><code>FROM python:3.12-slim RUN apt-get update &amp;&amp; apt-get install -y git cmake g++ RUN git clone --recurse-submodules https://github.com/nomic-ai/gpt4all.git RUN cd gpt4all/gpt4all-backend &amp;&amp; ls -al &amp;&amp; pwd &amp;&amp; cmake -B build -DCMAKE_BUILD_TYPE=RelWithDebInfo RUN cmake --build build --parallel RUN cd ../gpt4all-bindings/python RUN pip install setuptools RUN pip install -e . CMD [&quot;bash&quot;] </code></pre> <p>It failing to build image.</p> <pre><code>internal] load build definition from Dockerfile transferring 469 Bytes 0.0s [internal] load metadata for docker.io/library/python:3.12-slim [internal] load .dockerignore transferring 1 kB 0.0s [1/8] FROM docker.io/library/python:3.12-slim [4/8] RUN cd gpt4all/gpt4all-backend &amp;&amp; ls -al &amp;&amp; pwd &amp;&amp; cmake -B build -DCMAKE_BUILD_TYPE=RelWithDebInfo total 80 drwxr-xr-x 5 root root 4096 Aug 4 05:07 . drwxr-xr-x 10 root root 4096 Aug 4 05:07 .. -rw-r--r-- 1 root root 6904 Aug 4 05:07 CMakeLists.txt -rw-r--r-- 1 root root 4305 Aug 4 05:07 README.md drwxr-xr-x 3 root root 4096 Aug 4 05:07 deps drwxr-xr-x 3 root root 4096 Aug 4 05:07 include -rw-r--r-- 1 root root 43964 Aug 4 05:07 llama.cpp.cmake drwxr-xr-x 2 root root 4096 Aug 4 05:07 src /gpt4all/gpt4all-backend -- The CXX compiler identification is GNU 12.2.0 -- The C compiler identification is GNU 12.2.0 -- Detecting CXX compiler ABI info -- Detecting CXX compiler ABI info - done -- Check for working CXX compiler: /usr/bin/c++ - skipped -- Detecting CXX compile features -- Detecting CXX compile features - done -- Detecting C compiler ABI info -- Detecting C compiler ABI info - done -- Check for working C compiler: /usr/bin/cc - skipped -- Detecting C compile features -- Detecting C compile features - done -- Interprocedural optimization support detected -- Performing Test CMAKE_HAVE_LIBC_PTHREAD -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success -- Found Threads: TRUE -- CMAKE_SYSTEM_PROCESSOR: aarch64 -- Using CUDA architectures: 50;52;61;70;75 -- Looking for a CUDA compiler -- Looking for a CUDA compiler - NOTFOUND CMake Warning at CMakeLists.txt:90 (message): CUDA Toolkit not found. To build without CUDA, use -DLLMODEL_CUDA=OFF. CMake Error at /usr/share/cmake-3.25/Modules/CMakeDetermineCUDACompiler.cmake:180 (message): Failed to find nvcc. Compiler requires the CUDA toolkit. Please set the CUDAToolkit_ROOT variable. Call Stack (most recent call first): CMakeLists.txt:92 (enable_language) -- Configuring incomplete, errors occurred! See also &quot;/gpt4all/gpt4all-backend/build/CMakeFiles/CMakeOutput.log&quot;. See also &quot;/gpt4all/gpt4all-backend/build/CMakeFiles/CMakeError.log&quot;. [3/8] RUN git clone --recurse-submodules https://github.com/nomic-ai/gpt4all.git [2/8] RUN apt-get update &amp;&amp; apt-get install -y git cmake g++ </code></pre> <p>How to makes sure this <code>libllama.so</code> is available in docker image?</p>
<python><docker><gpt-4>
2025-08-01 11:39:03
0
21,411
NPatel
79,722,429
232,888
Limit number of stack levels shown
<p>In Python pdb, can I limit number of stack levels shown by &quot;where&quot;, around my current position in the stack?</p> <p>In my specific use case, I want to investigate an maximum recursion problem, where the original problem is maybe on stack level 20, but the stack trace shows all 500 levels.</p> <p>I know I can set the recursion limit to a lower value, but it would be nicer if I could simply show the surrounding <em>n</em> levels around the current level.</p>
<python><pdb>
2025-08-01 11:21:33
1
4,722
quazgar
79,722,280
9,182,414
Turning off constructor addition reordering b+a => a+b
<p>In Sympy, I'm trying to get some symbolic expressions to look exactly like my source material, but it appears that constructors do some rearrangements, most simply <code>Add</code>:</p> <pre><code>&gt;&gt;&gt; a, b = symbols(&quot;a b&quot;) &gt;&gt;&gt; b+a a + b &gt;&gt;&gt; srepr(b+a) Add(Symbol('a'), Symbol('b')) </code></pre> <p>Obviously this is a good idea most of the time, but I'm trying to show my expressions to be exactly right before doing any change, however obvious.</p> <p>The help for <code>Add</code> says:</p> <blockquote> <p><code>Add()</code> evaluates the argument unless <code>evaluate=False</code> is passed. The evaluation logic includes:</p> <ol> <li>Flattening ...</li> <li>Identity removing ...</li> <li>Coefficient collecting ...</li> <li><strong>Term sorting</strong><br /> <code>Add(y, x, 2)</code> -&gt; <code>Add(2, x, y)</code></li> </ol> </blockquote> <p>But it seems not to work:</p> <pre><code>&gt;&gt;&gt; srepr(Add(b, a, evaluate=False)) Add(Symbol('a'), Symbol('b')) </code></pre> <p>Is this possible? Have I missed something?</p> <p>What I'm actually wanting to make is this:</p> <pre><code> ⎛ 1 ⎞ ⎜─────────────────⎟ ⎜ 1 1 ⎟ ⎜─────── + ───────⎟ ⎜R⋅N + S 1 ⎟ ⎜ E + ───⎟ ⎝ s⋅C⎠ ───────────────────────────── ⎛ 1 ⎞ ⎜D + L⋅s + ─────────────────⎟ ⎜ 1 1 ⎟ ⎜ ─────── + ───────⎟ ⎜ R⋅N + S 1 ⎟ ⎜ E + ───⎟ ⎝ s⋅C⎠ </code></pre> <p>But the expression reordering produces the following.</p> <pre><code> 1 ───────────────────────────────────────────────── ⎛ 1 1 ⎞ ⎛ 1 ⎞ ⎜─────── + ───────⎟⋅⎜D + L⋅s + ─────────────────⎟ ⎜N⋅R + S 1 ⎟ ⎜ 1 1 ⎟ ⎜ E + ───⎟ ⎜ ─────── + ───────⎟ ⎝ C⋅s⎠ ⎜ N⋅R + S 1 ⎟ ⎜ E + ───⎟ ⎝ C⋅s⎠ </code></pre> <ul> <li>Sympy 1.14.0</li> <li>Python 3.10.12</li> <li>Ubuntu 22.04.4 LTS</li> </ul>
<python><sympy>
2025-08-01 09:04:51
1
449
jonathanjo
79,722,131
1,059,860
How to properly escape UNC paths for string literals in Python while avoiding double prefixes?
<p>I'm working on a Python function that converts Windows file paths to UNC extended-length format and properly escapes them for string literal representation in configuration files. However, I'm running into issues with detecting existing UNC prefixes and avoiding double-escaping.</p> <p>Here is what I've tried</p> <pre class="lang-py prettyprint-override"><code>def convert_to_unc_path(path: str) -&gt; str: if path.startswith(r&quot;\\?\\&quot;): normalized = path else: normalized = r&quot;\\?\\&quot; + path # Escape all backslashes for string literal representation escaped = normalized.replace(&quot;\\&quot;, &quot;\\\\&quot;) return escaped </code></pre> <p>Here is my input details</p> <pre><code>input_path = r&quot;\\?\C:\Windows\system32\config\systemprofile\AppData\Local\temp\p\package_abc123\p&quot; expected_path = r&quot;\\\\?\\C:\\Windows\\system32\\config\\systemprofile\\AppData\\Local\\temp\\p\\package_abc123\\p&quot; result = convert_to_unc_path(input_path) print(&quot;Input:&quot;, repr(input_path)) print(&quot;Expected:&quot;, repr(expected_path)) print(&quot;Actual:&quot;, repr(result)) print(&quot;Match:&quot;, result == expected_path) </code></pre> <p>I'm escaping it too many times? Would love some input</p> <p>Current result:</p> <pre><code>'\\\\\\\\?\\\\\\\\\\\\\\\\?\\\\C:\\\\Windows\\\\system32\\\\config\\\\systemprofile\\\\AppData\\\\Local\\\\temp\\\\p\\\\package_abc123\\\\p' </code></pre>
<python>
2025-08-01 06:20:59
1
2,258
tandem
79,722,098
12,096,670
How do I align the nodes in a Sankey Diagram in Python?
<p>Need some help aligning my Sankey diagram properly.</p> <p>I've been working on a Sankey diagram project but can't seem to get the nodes aligned properly, they seem to be a bit off. I've tried to manually set and x/y coordinates, but still not up to what I want. Here is a bit of the generated data I am working with, along with all the script for the diagram.</p> <pre><code>import pandas as pd import plotly.graph_objects as go from io import StringIO # Load data csv = StringIO(&quot;&quot;&quot; from_cat,to_cat,percent rpf,bp,3.55314197051978 rpf,cc,6.19084561675718 rpf,es,1.21024049650892 rpf,ic,2.46702870442203 rpf,rpf,2.26532195500388 rpf,sc,6.54771140418929 bp,bp,0.977501939487975 bp,cc,0.403413498836307 bp,es,0.108611326609775 bp,ic,4.7944142746315 bp,rpf,0.387897595034911 bp,sc,1.81536074476338 ic,bp,0.124127230411171 ic,cc,0.21722265321955 ic,es,0.0155159038013964 ic,ic,0.170674941815361 ic,rpf,0.0155159038013964 ic,sc,0.294802172226532 cc,bp,1.25678820791311 cc,cc,7.50969743987587 cc,es,9.41815360744763 cc,ic,0.775795190069822 cc,rpf,1.05508145849496 cc,sc,20.8068269976726 cc,sr,0.0465477114041893 sc,bp,0.0155159038013964 sc,cc,0.325833979829325 sc,es,1.92397207137316 sc,rpf,0.0155159038013964 sc,sc,4.43754848719938 sr,bp,0.0620636152055857 sr,cc,1.55159038013964 sr,es,5.10473235065943 sr,ic,0.0155159038013964 sr,rpf,0.0155159038013964 sr,sc,9.71295577967417 sr,sr,0.0775795190069822 es,bp,0.108611326609775 es,cc,0.574088440651668 es,es,1.48952676493406 es,ic,0.0310318076027929 es,rpf,0.0620636152055857 es,sc,2.00155159038014 es,sr,0.0465477114041893 &quot;&quot;&quot;) df = pd.read_csv(csv, skipinitialspace=True) # Define category order cat_order = [ &quot;es&quot;, &quot;sr&quot;, &quot;sc&quot;, &quot;cc&quot;, &quot;ic&quot;, &quot;bp&quot;, &quot;rpf&quot; ] df[&quot;from_cat&quot;] = pd.Categorical(df[&quot;from_cat&quot;], categories=cat_order, ordered=True) df[&quot;to_cat&quot;] = pd.Categorical(df[&quot;to_cat&quot;], categories=cat_order, ordered=True) # sort for deterministic ordering df = df.sort_values([&quot;from_cat&quot;, &quot;to_cat&quot;]).reset_index(drop=True) # left/right hierarchies and labels left_order = cat_order right_order = cat_order n_left = len(left_order) n_right = len(right_order) labels = [f&quot;{c} (L)&quot; for c in left_order] + [f&quot;{c} (R)&quot; for c in right_order] label_to_index = {label: i for i, label in enumerate(labels)} # Map and coerce to int df[&quot;source_idx&quot;] = df[&quot;from_cat&quot;].map(lambda c: label_to_index.get(f&quot;{c} (L)&quot;, -1)) df[&quot;target_idx&quot;] = df[&quot;to_cat&quot;].map(lambda c: label_to_index.get(f&quot;{c} (R)&quot;, -1)) # Convert to numeric ints explicitly df[&quot;source_idx&quot;] = pd.to_numeric(df[&quot;source_idx&quot;], downcast=&quot;integer&quot;, errors=&quot;coerce&quot;).fillna(-1).astype(int) df[&quot;target_idx&quot;] = pd.to_numeric(df[&quot;target_idx&quot;], downcast=&quot;integer&quot;, errors=&quot;coerce&quot;).fillna(-1).astype(int) # Color definitions CATEGORY_COLORS = { &quot;es&quot;: &quot;#F6C57A&quot;, &quot;sr&quot;: &quot;#A6D8F0&quot;, &quot;sc&quot;: &quot;#7BDCB5&quot;, &quot;cc&quot;: &quot;#FFC20A&quot;, &quot;ic&quot;: &quot;#88BDE6&quot;, &quot;bp&quot;: &quot;#F4A582&quot;, &quot;rpf&quot;: &quot;#DDA0DD&quot;, &quot;Unknown&quot;: &quot;#D3D3D3&quot; } # Node and link colors node_colors = [CATEGORY_COLORS[c] for c in left_order] + [CATEGORY_COLORS[c] for c in right_order] link_colors = [node_colors[src] for src in df[&quot;source_idx&quot;].tolist()] # x/y coordinates to center Sankey x = [ 0.001, 0.001, 0.001, 0.001, 0.001, 0.001, 0.001, 0.999, 0.999, 0.999, 0.999, 0.999, 0.999, 0.999 ] y = [ 0.05, 0.18, 0.31, 0.44, 0.57, 0.70, 0.83, 0.05, 0.18, 0.31, 0.44, 0.57, 0.70, 0.83 ] # Build Sankey diagram fig = go.Figure(go.Sankey( arrangement=&quot;fixed&quot;, node=dict( pad=40, thickness=25, line=dict(color=&quot;black&quot;, width=0.5), label=labels, x=x, y=y, color=node_colors ), link=dict( source=df[&quot;source_idx&quot;].tolist(), target=df[&quot;target_idx&quot;].tolist(), value=df[&quot;percent&quot;].tolist(), color=link_colors, hovertemplate=&quot;%{source.label} → %{target.label}&lt;br&gt;&lt;b&gt;%{value:.2f}%&lt;/b&gt;&lt;extra&gt;&lt;/extra&gt;&quot; ), valueformat=&quot;.2f&quot;, valuesuffix=&quot;%&quot; )) fig.update_layout( title=&quot;Flow&quot;, font_size=12, paper_bgcolor=&quot;#f7f7f7&quot;, plot_bgcolor=&quot;#f7f7f7&quot;, margin=dict(l=30, r=30, t=60, b=30), width=1000, height=800 ) fig.show() </code></pre>
<python><sankey-diagram>
2025-08-01 05:38:31
1
845
GSA
79,721,938
3,595,972
How to shuffle inner dimensions /axis of an array without disturbing the internal structure/data?
<p>It seems like the standard <code>numpy.random.shuffle</code> function shuffles (in place) only the first dimension / axis. I want a similar functionality for inner dimensions.</p> <p>Note that <code>numpy.random.default_rng().permuted(axis)</code> works on any axis, but distorts the data i.e., mixes data from multiple rows. This can be problematic for example machine learning tasks, where you want to preserve the data but only shuffle the rows.</p>
<python><numpy>
2025-08-01 00:11:12
1
547
omsrisagar
79,721,816
1,358,422
Is there an elegant way to use a class attribute as a type annotation?
<p>Using the same paradigm as in RDF, I'm modelling a class and its non-literal attributes as the id to the referenced object, like a URI in RDF.</p> <p>For example:</p> <pre class="lang-py prettyprint-override"><code>GUID = NewType(&quot;GUID&quot;, int) # create a subclass of int print(GUID.__name__) # prints 'GUID' class Person: id: GUID parents: list[&quot;PersonId&quot;] </code></pre> <p>When it comes to define <code>PersonId</code>, I have this piece of code:</p> <pre class="lang-py prettyprint-override"><code>def field_info(class_model: type, attr_name: str) -&gt; tuple[type, str, Any]: attributes = inspect.get_annotations(class_model) assert attr_name in attributes return (class_model, attr_name, attributes[attr_name]) PersonId = TypeAliasType(&quot;PersonId&quot;, Annotated[GUID, field_info(Person, &quot;id&quot;)]) # create an alias of GUID. </code></pre> <p>and in the code, near the serialisation/deserialisation in the DB, I have checks like those to verify that the Annotation's type corresponds to the attribute's type.</p> <p>(I took an example to simplify. The real code is agnostic from the Person class.)</p> <pre class="lang-py prettyprint-override"><code>assert Person.__name__ == &quot;Person&quot; # true assert isinstance(PersonId, typing.TypeAliasType) # true origin = get_origin(PersonId.__value__) args = get_args(PersonId.__value__) assert origin is Annotated # true assert args[0] is GUID # true assert args[1] == (Person, &quot;id&quot;, GUID) # true assert args[0] == args[1][2] # true </code></pre> <p>Is there a way to directly define</p> <pre class="lang-py prettyprint-override"><code>PersonId = TypeAliasType(&quot;PersonId&quot;, Person.id) #AttributeError: type object 'Person' has no attribute 'id' </code></pre>
<python><python-typing>
2025-07-31 20:32:25
1
1,798
ticapix
79,721,780
5,094,589
Using Asyncio for downloading big files: 10 times slower than threading
<p>The goal is to download images from the URLs asynchronously.</p> <p>Here is the code for threading, which works fine:</p> <pre><code>import requests import threading import os import time import concurrent.futures as futures def download_image(url, folder): os.makedirs(folder, exist_ok=True) filename = url.split('/')[-1] filepath = os.path.join(folder, filename) response = requests.get(url) with open(filepath, &quot;wb&quot;) as f: f.write(response.content) urls = [ &quot;https://images-assets.nasa.gov/image/PIA03149/PIA03149~orig.jpg&quot;, &quot;https://upload.wikimedia.org/wikipedia/commons/3/37/African_Bush_Elephant.jpg&quot;, &quot;https://upload.wikimedia.org/wikipedia/commons/9/97/The_Earth_seen_from_Apollo_17.jpg&quot;, &quot;https://upload.wikimedia.org/wikipedia/commons/2/29/%22Arte_Colonial%22.jpg&quot;, &quot;https://upload.wikimedia.org/wikipedia/commons/d/d2/%22CUT%22_June_1929_04.jpg&quot;, &quot;https://upload.wikimedia.org/wikipedia/commons/8/82/%22CUT%22_June_1929_05.jpg&quot;, &quot;https://upload.wikimedia.org/wikipedia/commons/b/b1/%22Council_of_the_Gods%22_in_Galleria_Borghese_%28Rome%29_ceiling.jpg&quot;, &quot;https://upload.wikimedia.org/wikipedia/commons/7/71/%22East_front_of_Court_House_and_Planter%27s_House.%22.jpg&quot;, &quot;https://upload.wikimedia.org/wikipedia/commons/b/b6/%22Greater_Germany%22._Major_administrative_divisions._July_1._1944._100_kms_-_btv1b531213280.jpg&quot;, ] * 2 start_time = time.time() with futures.ThreadPoolExecutor(max_workers=3) as executor: executor.map(download_image, urls, [&quot;2a_threadpool&quot;] * len(urls)) end_time = time.time() print(f&quot;ThreadPoolExecutor download time: {end_time - start_time:.2f} seconds&quot;) </code></pre> <p>Here is the code for asyncio, which is very slow:</p> <pre><code>import requests import threading import asyncio import os import time import aiohttp import aiofiles async def download_image_async(url, folder, session): async with session.get(url) as response: data = await response.read() filename = url.split('/')[-1] filepath = os.path.join(folder, filename) async with aiofiles.open(filepath, &quot;wb&quot;) as f: await f.write(data) async def main(): async with aiohttp.ClientSession() as session: tasks = [download_image_async(url, &quot;3_asyncio&quot;, session) for url in urls] await asyncio.gather(*tasks) if __name__ == &quot;__main__&quot;: start_time = time.time() asyncio.run(main()) end_time = time.time() print(f&quot;Asyncio download time: {end_time - start_time:.2f} seconds&quot;) </code></pre> <p>The images are quite large (more than 5 MB), and I can't find any mistakes in my code. Are there any possible improvements to the code? Or are large files a bottleneck? If so, why?</p>
<python><async-await><concurrency><python-asyncio>
2025-07-31 19:50:06
1
1,106
Daniil Yefimov
79,721,713
496,289
How to read empty string as well as NULL values from a csv file in pyspark?
<ul> <li><a href="https://stackoverflow.com/questions/64317510/read-spark-csv-with-empty-values-without-converting-to-null">Read spark csv with empty values without converting to null</a> doesn't answer this one because:</li> </ul> <blockquote> <ol> <li>That one's scala and this is pyspark.</li> <li>Scala solution <code>.option(&quot;nullValue&quot;, null)</code> translates to pyspark's <code>nullValue=None</code>, which produces wrong result as listed below.</li> </ol> </blockquote> <hr /> <p>TL;DR; -- How to use <code>&quot;&quot;</code> as empty value and nothing as <code>NULL</code> in a csv file?</p> <p>I have a need where I need to specify an empty string in a csv file, which also has some <code>NULL</code> values. I'm trying to use <code>&quot;&quot;</code> as empty value and nothing as <code>NULL</code>, My expectation was that <code>nullValue=None</code> and <code>emptyValue=&quot;&quot;</code> should do what I want, but both get interpreted as <code>NULL</code>.</p> <p>I tried all combinations of <code>nullValue</code> and <code>emptyValue</code> options.</p> <pre class="lang-py prettyprint-override"><code>with open(&quot;/dbfs/tmp/c.csv&quot;, &quot;w&quot;) as f: f.write('''id,val 1, 2,&quot;&quot; 3,str1 ''') for e, n in [('', None), ('', ''), (None, None), (None, '')]: print(f'e: &quot;{e}&quot;, n: &quot;{n}&quot;') df = spark.read.csv('dbfs:/tmp/c.csv', header=True, emptyValue=e, nullValue=n).show() </code></pre> <p>prints:</p> <pre><code>e: &quot;&quot;, n: &quot;None&quot; +---+-----+ | id| val| +---+-----+ | 1| NULL| | 2| NULL| | 3| str1| +---+-----+ e: &quot;&quot;, n: &quot;&quot; +---+-----+ | id| val| +---+-----+ | 1| NULL| | 2| NULL| | 3| str1| +---+-----+ e: &quot;None&quot;, n: &quot;None&quot; +---+-----+ | id| val| +---+-----+ | 1| NULL| | 2| NULL| | 3| str1| +---+-----+ e: &quot;None&quot;, n: &quot;&quot; +---+-----+ | id| val| +---+-----+ | 1| NULL| | 2| NULL| | 3| str1| +---+-----+ </code></pre> <hr /> <p>PS: It works in scala, just not in python. So I'm guessing it might have something to do with the fact that <code>print(&quot;true&quot; if &quot;&quot; else &quot;false&quot;)</code> prints <code>&quot;false&quot;</code> in python.</p> <pre class="lang-scala prettyprint-override"><code>spark.read .option(&quot;header&quot;, &quot;true&quot;) .option(&quot;emptyValue&quot;, &quot;&quot;) .option(&quot;nullValue&quot;, null) .csv(&quot;dbfs:/tmp/c.csv&quot;).show() </code></pre> <p>prints:</p> <pre><code>+---+-----+ | id| val| +---+-----+ | 1| NULL| | 2| | | 3| str1| +---+-----+ </code></pre> <hr /> <p>I've read:</p> <ul> <li><a href="https://stackoverflow.com/questions/69213817/spark-read-reading-empty-string-as-null-when-data-is-read-from-part-file">spark.read. reading empty string as null when data is read from part file</a></li> <li><a href="https://community.databricks.com/t5/data-engineering/spark-csv-file-read-option-to-read-blank-empty-value-from-file/td-p/66574" rel="nofollow noreferrer">Spark CSV file read option to read blank/empty value from file as empty value only instead Null</a></li> <li><a href="https://spark.apache.org/docs/latest/sql-data-sources-csv.html#data-source-option" rel="nofollow noreferrer">CSV Files</a></li> </ul>
<python><csv><apache-spark><pyspark>
2025-07-31 18:41:50
1
17,945
Kashyap
79,721,627
5,432,667
Why does the ProtBERT model generate identical embeddings for all non-whitespace-separated (single token?) inputs?
<p>Why do non-identical inputs to ProtBERT generate identical embeddings when non-whitespace-separated?</p> <p>I've looked at answers <a href="https://stackoverflow.com/questions/78309756/mistral-model-generates-the-same-embeddings-for-different-input-texts">here</a> etc. but they appear to be different cases where the slicing of the <code>out.last_hidden_state</code> went wrong, which is not true for me.</p> <h2>Some background</h2> <p>I'm learning about using transformers for protein sequence analysis, specifically Hugging Face's <code>transformers</code> interface to the <a href="https://huggingface.co/Rostlab/prot_bert" rel="nofollow noreferrer">ProtBERT model</a>. I have a Python/biology background but I'm fairly new to deep learning libraries and language models.</p> <p>I have learned that the embeddings are (to me) surprisingly difficult to access, and documentation is somewhat inconsistent/inaccurate (see e.g. <a href="https://stackoverflow.com/questions/63461262/bert-sentence-embeddings-from-transformers">here</a>).</p> <p>Additionally, model behavior interfaces are inconsistent, for example the <a href="https://github.com/facebookresearch/esm/issues/348" rel="nofollow noreferrer">ESM models</a> have a different data input interface from the BERT models, in which BERT models <a href="https://medium.com/computational-biology-papers/training-protein-embeddings-simple-walthrough-using-protbert-example-2ead2758e4d4" rel="nofollow noreferrer">expect</a> (just one walkthrough example) individual residues to be whitespace-separated and ESM expects no whitespace. I suppose it might be related to the original use of BERT for human language NLP, in which spaces are expected to separate words as tokens rather than individual characters having semantic content as for biological sequences. This means that you have to preprocess sequences with shenanigans like:</p> <pre><code>sequence_examples = [&quot; &quot;.join(list(sequence)) for sequence in sequence_examples] </code></pre> <p>Which is annoying but fine. However, as a beginner I initially omitted this step and found a weird behavior/error mode/feature of the model as follows.</p> <h2>Example</h2> <p>Here is a MWE of the behavior:</p> <pre><code>from transformers import BertModel, BertTokenizer import random tokenizer = BertTokenizer.from_pretrained(&quot;Rostlab/prot_bert&quot;, do_lower_case=False, truncation=True ) model = BertModel.from_pretrained(&quot;Rostlab/prot_bert&quot;) ALPHABET = list(&quot;ACDEFGHIJKLMNPQRSTVWY&quot;) for i in range(26): aas = random.choices(ALPHABET, k=20) peptide = &quot; &quot;.join(aas) peptide_no_ws = &quot;&quot;.join(aas) encoded_input = tokenizer(peptide, return_tensors=&quot;pt&quot;, max_length=24) outputs = model(**encoded_input) print(peptide) print(outputs.last_hidden_state[:, 0, :]) encoded_input = tokenizer(peptide_no_ws, return_tensors=&quot;pt&quot;, max_length=24) outputs = model(**encoded_input) print(peptide_no_ws) print(outputs.last_hidden_state[:, 0, :]) </code></pre> <p>Which prints out a bunch of stuff:</p> <pre><code>T N F S J I L M D R C E A K Y G P V W H tensor([[ 0.0759, 0.1376, 0.0564, ..., -0.0675, -0.0184, -0.0030]], # different!!! grad_fn=&lt;SliceBackward0&gt;) TNFSJILMDRCEAKYGPVWH tensor([[-0.1096, 0.0474, -0.0857, ..., -0.0035, -0.0569, 0.0918]], # SAME!!! grad_fn=&lt;SliceBackward0&gt;) G I L P J T N K A R H Q E V Y W F M D C tensor([[ 0.0725, 0.1307, 0.0652, ..., -0.0378, -0.0352, -0.0315]], # different!!! grad_fn=&lt;SliceBackward0&gt;) GILPJTNKARHQEVYWFMDC tensor([[-0.1096, 0.0474, -0.0857, ..., -0.0035, -0.0569, 0.0918]], # SAME!!! </code></pre> <p>and no matter what the input is, even if the input length is 50 and the <code>max_length=100</code>, if there is no whitespace separation, it will return the tensor:</p> <pre><code>tensor([[-0.1096, 0.0474, -0.0857, ..., -0.0035, -0.0569, 0.0918]], grad_fn=&lt;SliceBackward0&gt;) </code></pre> <p>My question is just... why? This is a rather weird behavior/error mode/trap for the unwary.</p> <h2>A note on the right way of accessing embeddings</h2> <p>I still am not totally sure what the canonical way to access embeddings is. Unsurprisingly some of the available resources seem to themselves be LLM outputs and are thus not particularly reliable, and some of the answers e.g. <a href="https://stackoverflow.com/questions/61323621/how-to-understand-hidden-states-of-the-returns-in-bertmodel">here</a> while giving high quality background aren't as explicit. I have also heard by word of mouth that sometimes <code>last_hidden_states.mean(axis=1)</code> is <em>a</em> path.</p> <p>Regardless of the particular path I take, these provide similar qualitative behavior to the MWE above, though the exact values can differ.</p>
<python><nlp><bioinformatics><huggingface-transformers><bert-language-model>
2025-07-31 17:17:15
1
397
Maximilian Press
79,721,552
21,691,539
Importing some symbol from a Python script into its test script
<p>I've got the following project tree:</p> <pre class="lang-none prettyprint-override"><code>SomeDir |- script.py |- TestDir | |- test.py </code></pre> <p><code>script.py</code> is designed to be called in CLI and contains several classes: notably one that does the actual job and one that is a <code>class exit_codes(Enum)</code> that defines exit codes.</p> <p><code>test.py</code> is to be used by pytest and typical tests consist of calling <code>script.py</code> with arguments that are used to initialize a working class object. This object then does some processing and calls <code>sys.exit(&lt;some enumerator&gt;.value)</code>, using the exit codes in the <code>Enum</code> class.</p> <p>In the test I'm giving the command line to <code>subprocess.run</code> and assert its <code>returncode</code> against the desired exit code.</p> <p>Thus <code>test.py</code> and <code>script.py</code> should use the same code and I'd like to do something like</p> <pre class="lang-py prettyprint-override"><code>from script import exit_codes </code></pre> <p>but without additional actions <code>script</code> cannot be found.</p> <p>There are a few possibilities that works.</p> <p>The typical workaround is <code>sys.path.append</code> using <code>..</code> from <code>__file__</code> (<a href="https://stackoverflow.com/q/4383571/21691539">Importing files from different folder</a>).</p> <p>As suggested in comments I can also save <code>sys.path</code> before editing it and importing my symbols and then restore it's original value.</p> <p>Eventually, as suggested in a <a href="https://stackoverflow.com/a/79722008/21691539">first answer</a> I can make <code>script.py</code> a module by adding <code>__init__.py</code> in its directory, but then I can only run <code>pytest</code> from this directory.</p> <p>With respect to <code>pytest</code>, it seems that a <code>pytest.ini</code> file can also be used but I didn't experiment with it so far.</p> <p>It could also be a combination of these technics.</p> <p>What is considered as the best practice?</p>
<python><pytest><python-import>
2025-07-31 15:55:42
1
3,479
Oersted
79,721,529
13,147,413
mlflow cannot fetch model registered on GitLab model registry
<p>I’m trying to download artifacts and a model that has been saved using MLflow on GitLab model registry. That's the working part of my code, where I set up the client, create model and version and upload artifact previously saved on a local folder:</p> <pre><code># Running client import os from mlflow import MlflowClient os.environ[&quot;MLFLOW_TRACKING_URI&quot;] = &quot;...&quot; os.environ[&quot;MLFLOW_TRACKING_TOKEN&quot;] = &quot;...&quot; client = MlflowClient() # Model and version creation model_name = 'test_model_n' description = 'test model' model = client.create_registered_model(model_name, description=description) model_version = '1.0.1' tags = { &quot;gitlab.version&quot;: model_version } model_version = client.create_model_version(model_name, model_version, tags=tags) # Logging artifacts run_id = model_version.run_id client.log_artifact(run_id, 'mlartifacts/875771393960734527', artifact_path=&quot;&quot;) </code></pre> <p>Now, in order to fetch the model I tried multiple approaches:</p> <h4>Method 1</h4> <pre><code>model_uri = f&quot;models:/{model_name}/{model_version}&quot; model = mlflow.pyfunc.load_model(model_uri) </code></pre> <p>Download bar reach 100%, but a warning and an error pops out:</p> <pre><code>WARNING:urllib3.connectionpool:Connection pool is full, discarding connection: &quot;gitlab server&quot; Connection pool size: 10 Could not find a registered artifact repository for: c:. Currently registered schemes are: ['', 'file', 's3', 'r2', 'gs', 'wasbs', 'ftp', 'sftp', 'dbfs', 'hdfs', 'viewfs', 'runs', 'models', 'http', 'https', 'mlflow-artifacts', 'abfss'] </code></pre> <h4>Method 2</h4> <pre><code>artifacts_path, = mlflow.artifacts.download_artifacts( run_id=run_id, artifact_path=&quot;&quot;, dst_path=&quot;./downloaded_model&quot; </code></pre> <p>Error:</p> <pre><code>ValueError: not enough values to unpack (expected 2, got 1) </code></pre> <h4>Method 3</h4> <pre><code> # Se conosci il run_id specifico mv = client.get_model_version(model_name, model_version) run_id = mv.run_id # Costruisci l'URI usando il run_id run_uri = f&quot;runs:/{run_id}/[path_to_the_model]&quot; model_from_run = mlflow.pyfunc.load_model(run_uri) </code></pre> <p>Error:</p> <pre><code>ValueError: not enough values to unpack (expected 2, got 1) </code></pre> <p>Any help would be very appreciated, feel free to ask for more info if you think it would help.</p>
<python><gitlab><mlflow><model-registry>
2025-07-31 15:32:11
0
881
Alessandro Togni
79,721,444
12,633,371
transformers Pylance reportPrivateImportUsage
<p>I use VS Code and for every <code>import</code> from the <code>transformers</code> library - for example</p> <pre><code>from transformers import AutoTokenizer </code></pre> <p>I get the message</p> <pre><code>&quot;AutoTokenizer&quot; is not exported from module &quot;transformers&quot; Import from &quot;transformers.models.auto.tokenization_auto&quot; instead Pylance (reportPrivateImportUsage) </code></pre> <p>I have <code>transformers 4.51.3</code> and I think that these messages started when I installed <code>adapters</code> (<code>adapters 1.2.0</code>) in the environment too. The code runs though.</p>
<python><python-typing><pyright>
2025-07-31 14:33:13
1
603
exch_cmmnt_memb
79,721,439
3,782,911
Remote debugging of python webserver cannot listen on debug port
<p>Setup:</p> <ul> <li>FastAPI running inside Docker container, python3.12 image.</li> <li>Connection to it via VSCode remote attach debugger</li> </ul> <p>This works if the application is started via <code>python -m debugpy --listen 0.0.0.0:5680 --wait-for-client main.py</code></p> <p><code>docker run --mount type=bind,source=$(pwd),target=/home/app -p 5680:5680 -p 8080:8080 my_image</code></p> <p>However if I add the following lines to the <code>main.py</code>:</p> <pre><code>debugpy.listen((&quot;0.0.0.0&quot;, 5680)) debugpy.wait_for_client() </code></pre> <p>and then start the server with <code>python main.py</code> (so the <code>CMD</code> field in the Dockerfile reads <code>CMD [&quot;python&quot;, &quot;main.py&quot;]</code>:</p> <p>I get that <code>debugpy</code> can't listen on <code>5680</code> as the address is already in use</p> <pre><code>debugpy.listen((&quot;0.0.0.0&quot;, 5680)) File &quot;/usr/local/lib/python3.12/site-packages/debugpy/public_api.py&quot;, line 31, in wrapper return wrapped(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.12/site-packages/debugpy/server/api.py&quot;, line 133, in debug log.reraise_exception(&quot;{0}() failed:&quot;, func.__name__, level=&quot;info&quot;) File &quot;/usr/local/lib/python3.12/site-packages/debugpy/server/api.py&quot;, line 131, in debug return func(address, settrace_kwargs, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.12/site-packages/debugpy/server/api.py&quot;, line 260, in listen raise RuntimeError(str(endpoints[&quot;error&quot;])) RuntimeError: Can't listen for client connections: [Errno 98] Address already in use </code></pre> <p>It'd be really great if I could do things this way -- multiple use cases and less code change for development environments.</p> <p>What am I doing wrong with this?</p>
<python><docker><fastapi><remote-debugging>
2025-07-31 14:31:08
0
2,795
David Boshton
79,721,399
2,545,680
reveal_type in python as of python 3.10 in runtime
<p>When using mypy playground, this works fine:</p> <pre class="lang-py prettyprint-override"><code>def f(): x = 123 reveal_type(x) </code></pre> <p>However, at run time, it generates the error</p> <blockquote> <p>NameError: name 'reveal_type' is not defined</p> </blockquote> <p>I've seen many discussions as to how to use it, but all of them are old and a bit confusing. What's the way to use <code>reveal_type</code> as of Python 3.10?</p>
<python><python-typing><mypy><python-3.10>
2025-07-31 14:01:01
1
106,269
Max Koretskyi
79,721,365
6,314,254
Why is pandas not formatting dates with date_format?
<p>Why is pandas not formatting dates with <code>date_format</code> argument of <code>to_csv</code>?</p> <pre><code>pandas.DataFrame([datetime.datetime.now().date()]).to_csv(date_format=&quot;%Y %b&quot;) ',0\n0,2025-07-31\n' </code></pre>
<python><pandas><csv><date>
2025-07-31 13:41:58
3
2,101
Hugo Trentesaux
79,721,125
8,290,689
Trying to update context variable from included template in Jinja2
<p>I am trying to update variable from an included template in Jinja 2</p> <p>base.j2:</p> <pre><code>{% set ns = namespace(indent_lvl = 5) %} {% filter indent(width=4*ns.indent_lvl, first=True) %} base_indent_lvl: {{ns.indent_lvl}} {% include template_file with context %} base_indent_lvl: {{ns.indent_lvl}} {% endfilter %} </code></pre> <p>template_file.j2:</p> <pre><code>{% set ns.indent_lvl= ns.indent_lvl+1 %} template_indent_lvl: {{ns.indent_lvl}} </code></pre> <p>So ns.indent_lvl increase when it enter in the template but is not save when it returning to base template. Is there a way to do it?</p> <p>Edit: I add another display of <code>ns.indent_lvl</code>variable. Basically the output is on my side:</p> <pre><code>base_indent_lvl: 5 template_indent_lvl: 6 base_indent_lvl: 5 </code></pre> <p>Edit2: I tested it again separatly and as suggested @Detlef, it works perfectly fine. I just put the assignment of the initial value in a wrong place in my code.</p>
<python><jinja2>
2025-07-31 10:12:46
1
312
Tradjincal
79,721,033
3,446,927
Python Azure Function failing with `cannot convert value of field 'x' in trigger metadata into int`
<p>I am developing a Service Bus Trigger function in Python that is failing with the following error message:</p> <blockquote> <p>Exception while executing function: Functions.servicebus_trigger_name Result: Failure Exception: ValueError: cannot convert value of field 'State' in trigger metadata into int: invalid literal for int() with base 10: 'Out'</p> </blockquote> <p>Full stack trace:</p> <blockquote> <p>Stack: File &quot;/azure-functions-host/workers/python/3.11/LINUX/X64/azure_functions_worker/dispatcher.py&quot;, line 637, in _handle__invocation_request args[pb.name] = bindings.from_incoming_proto( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/azure-functions-host/workers/python/3.11/LINUX/X64/azure_functions_worker/bindings/meta.py&quot;, line 189, in from_incoming_proto return binding.decode(datum, trigger_metadata=metadata) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/home/site/wwwroot/.python_packages/lib/site-packages/azure/functions/servicebus.py&quot;, line 258, in decode return cls.decode_single_message( ^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/home/site/wwwroot/.python_packages/lib/site-packages/azure/functions/servicebus.py&quot;, line 336, in decode_single_message state=cls._decode_trigger_metadata_field( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/home/site/wwwroot/.python_packages/lib/site-packages/azure/functions/meta.py&quot;, line 182, in _decode_trigger_metadata_field return cls._decode_typed_data( ^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/home/site/wwwroot/.python_packages/lib/site-packages/azure/functions/meta.py&quot;, line 166, in _decode_typed_data raise ValueError(</p> </blockquote>
<python><azure><azure-functions>
2025-07-31 08:52:07
1
539
Joe Plumb
79,721,010
14,720,380
Uvicorn not getting trace id from otel in error logs (trace_id=0)
<p>I have a test FastAPI application:</p> <pre class="lang-py prettyprint-override"><code>from fastapi import FastAPI, Request from opentelemetry.exporter.otlp.proto.grpc import trace_exporter from opentelemetry.instrumentation.asgi import OpenTelemetryMiddleware from opentelemetry.sdk.resources import Resource from opentelemetry import trace from opentelemetry.sdk.trace import TracerProvider from opentelemetry.sdk.trace.export import BatchSpanProcessor from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter from opentelemetry.instrumentation.logging import LoggingInstrumentor from opentelemetry.instrumentation.requests import RequestsInstrumentor from opentelemetry.instrumentation.fastapi import FastAPIInstrumentor from starlette.responses import JSONResponse import logging, os # Setup OpenTelemetry resource = Resource.create(attributes={&quot;service.name&quot;: os.environ[&quot;SERVICE_NAME&quot;]}) trace_provider = TracerProvider(resource=resource) trace.set_tracer_provider(trace_provider) trace_provider.add_span_processor( BatchSpanProcessor(OTLPSpanExporter(insecure=True)) ) LoggingInstrumentor().instrument() RequestsInstrumentor().instrument() # Define FastAPI app app = FastAPI() @app.get(&quot;/throws&quot;) async def throws(): raise ValueError(&quot;Oops!&quot;) @app.get(&quot;/ok&quot;) async def ok(): return {&quot;message&quot;: &quot;Fine&quot;} @app.exception_handler(Exception) async def handle_all_exceptions(request: Request, exc: Exception): logging.exception(&quot;Handled exception&quot;, exc_info=exc) return JSONResponse(content={&quot;detail&quot;: &quot;Internal Server Error&quot;}, status_code=500) FastAPIInstrumentor().instrument_app(app) asgi_app = OpenTelemetryMiddleware(app, tracer_provider=trace_provider) </code></pre> <p>If I go to the <code>/throws</code> endpoint, I can see one log message from the custom exception handler with the trace id, then another exception message from uvicorn without the trace id:</p> <pre class="lang-none prettyprint-override"><code>fastapi-app-1 | 2025-07-31 08:10:22,886 INFO [uvicorn.error] [server.py:84] [trace_id=0 span_id=0 resource.service.name=fastapi-app-1] - Started server process [1] fastapi-app-1 | 2025-07-31 08:10:22,887 INFO [uvicorn.error] [on.py:48] [trace_id=0 span_id=0 resource.service.name=fastapi-app-1] - Waiting for application startup. fastapi-app-1 | 2025-07-31 08:10:22,887 INFO [uvicorn.error] [on.py:62] [trace_id=0 span_id=0 resource.service.name=fastapi-app-1] - Application startup complete. fastapi-app-1 | 2025-07-31 08:10:22,887 INFO [uvicorn.error] [server.py:216] [trace_id=0 span_id=0 resource.service.name=fastapi-app-1] - Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit) fastapi-app-1 | 2025-07-31 08:10:24,315 ERROR [root] [main.py:41] [trace_id=01475de07b3933590da528898e0d14a9 span_id=355c87a166b5f480 resource.service.name=fastapi-app-1] - Handled exception fastapi-app-1 | Traceback (most recent call last): fastapi-app-1 | File &quot;/usr/local/lib/python3.11/site- .... fastapi-app-1 | File &quot;/app/main.py&quot;, line 33, in throws fastapi-app-1 | raise ValueError(&quot;Oops!&quot;) fastapi-app-1 | ValueError: Oops! fastapi-app-1 | 2025-07-31 08:10:24,318 INFO [uvicorn.access] [h11_impl.py:473] [trace_id=01475de07b3933590da528898e0d14a9 span_id=355c87a166b5f480 resource.service.name=fastapi-app-1] - 192.168.97.1:38576 - &quot;GET /throws HTTP/1.1&quot; 500 fastapi-app-1 | 2025-07-31 08:10:24,319 ERROR [uvicorn.error] [h11_impl.py:408] [trace_id=0 span_id=0 resource.service.name=fastapi-app-1] - Exception in ASGI application fastapi-app-1 | Traceback (most recent call last): fastapi-app-1 | File &quot;/usr/local/lib/python3.11/site-packages/uvicorn/protocols/http/h11_impl.py&quot;, line 403, in run_asgi ... fastapi-app-1 | File &quot;/app/main.py&quot;, line 33, in throws fastapi-app-1 | raise ValueError(&quot;Oops!&quot;) fastapi-app-1 | ValueError: Oops! </code></pre> <p>My app was created with Docker with this Dockerfile:</p> <pre class="lang-none prettyprint-override"><code>FROM python:3.11 WORKDIR /app COPY ./requirements.txt ./ RUN pip install --no-cache-dir -r requirements.txt COPY ./main.py ./ COPY ./log-format.yaml ./ CMD [&quot;uvicorn&quot;, &quot;main:asgi_app&quot;, &quot;--host&quot;, &quot;0.0.0.0&quot;, &quot;--port&quot;, &quot;8000&quot;, &quot;--log-config&quot;, &quot;log-format.yaml&quot;] </code></pre> <p>and this logging config:</p> <pre class="lang-yaml prettyprint-override"><code>version: 1 formatters: default: format: &quot;%(asctime)s %(levelname)s [%(name)s] [%(filename)s:%(lineno)d] [trace_id=%(otelTraceID)s span_id=%(otelSpanID)s resource.service.name=%(otelServiceName)s] - %(message)s&quot; use_colors: true access: format: &quot;%(asctime)s %(levelname)s [%(name)s] [%(filename)s:%(lineno)d] [trace_id=%(otelTraceID)s span_id=%(otelSpanID)s resource.service.name=%(otelServiceName)s] - %(message)s&quot; handlers: default: class: logging.StreamHandler formatter: default stream: ext://sys.stderr access: class: logging.StreamHandler formatter: access stream: ext://sys.stdout loggers: uvicorn.error: level: INFO handlers: - default propagate: no uvicorn.access: level: INFO handlers: - access propagate: no root: level: DEBUG handlers: - default propagate: no </code></pre> <p>According to <a href="https://github.com/open-telemetry/opentelemetry-python/issues/3477" rel="nofollow noreferrer">this</a> GitHub discussion the fix (which I have in my example) is to do:</p> <pre class="lang-py prettyprint-override"><code>app.add_middleware( OpenTelemetryMiddleware, default_span_details=_get_default_span_details, excluded_urls=_excluded_urls_from_env, ) </code></pre> <p>or to do:</p> <pre class="lang-py prettyprint-override"><code>app = OpenTelemetryMiddleware(app) </code></pre> <p>But neither of those two options has added the trace id to the exception logs.</p> <p>How can I fix this?</p>
<python><logging><fastapi><uvicorn><otel>
2025-07-31 08:30:35
0
6,623
Tom McLean
79,720,805
546,888
Python UnpicklingError: invalid load key, '\xef'
<p>I am new to python and came across a object deserialisation issue (unpickling) while testing a program on jupyter lab.</p> <p>I am trying to serialize and deserialize object of Employee class as below.</p> <p><strong>- Definition of Employee class:</strong></p> <pre><code>class Employee: def __init__(self, id, name, salary): self.id = id self.name = name self.salary = salary def display(self): print('{:5d} -- {:20s} -- {:10.2f}'.format(self.id, self.name, self.salary)) </code></pre> <p><strong>- Code to pickle Employee Object:</strong></p> <pre><code>import pickle file = open('employee-data.csv', 'wb') n = int(input('How many employees ?')) for i in range(n): id = int(input('Enter the Employee id:')) name = input('Enter the Employee name:') salary = float(input('Enter the Employee salary:')) ob = Employee(id, name, salary) pickle.dump(ob,file) file.close() </code></pre> <p><strong>- Code to unpickle Employee Object:</strong></p> <pre><code>import pickle file2 = open('employee-data.csv', 'rb') print('Employee Details ....') while True: try: obj = pickle.load(file2) obj.display() except EOFError: print('End of File Reached ...') break file2.close() </code></pre> <p>Error:</p> <pre><code>&gt; --------- UnpicklingError Traceback (most recent call &gt; last) Cell In[5], line 9 &gt; 7 while True: &gt; 8 try: &gt; ----&gt; 9 obj = pickle.load(file2) &gt; 10 obj.display() &gt; 11 except EOFError: &gt; &gt; UnpicklingError: invalid load key, '\xef'. </code></pre> <p>Following snapshot shows that the code to serialize the object has run successfully. And the file was created.</p> <p><a href="https://i.sstatic.net/BHL1KMcz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BHL1KMcz.png" alt="enter image description here" /></a></p> <p>What can be the problem here ? Any suggestions/feedbacks appreciated.</p>
<python><deserialization><upickle>
2025-07-31 05:06:05
1
5,178
Gunjan Shah
79,720,634
1,854,821
How to navigate from exception to the code that caused it in interactive Python window in VSCode
<p>My understanding is that the &quot;Interactive Window&quot; is the <a href="https://github.com/microsoft/vscode-python/issues/22139" rel="nofollow noreferrer">supported/best practice way to do REPL in VSCode</a>. So I click the &quot;Run or Debug&quot; menu from my Python file and select &quot;Run current file in interactive window&quot;. If an exception occurs I get the usual trace in the interactive window. The line(s) that caused the exception are displayed as links, but clicking a link takes me to the prompt in the interactive window, not to that place in the code.</p> <p>How can I easily navigate to the line of code that caused the error in order to fix it?</p> <p>It seems I should be able to click the blue &quot;line 90&quot; or &quot;90&quot; links in the below image and be taken directly there in the editor to fix the typo. Instead I'm taken to the &quot;press enter to execute&quot; box.</p> <p><a href="https://i.sstatic.net/26pRl95M.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/26pRl95M.png" alt="enter image description here" /></a></p>
<python><visual-studio-code>
2025-07-30 23:36:40
0
473
Timothy W. Hilton
79,720,414
3,286,743
Rolling quantile with lots of groups
<p>I have a dataset with more than 300 million rows and 7 columns. I want to compute rolling quantiles over lots of groups, but I run out of memory. I use the following code:</p> <pre class="lang-py prettyprint-override"><code>( lf.sort('time') .rolling( index_column='time', group_by=['id1', 'id2'], period=&quot;1mo&quot;, closed=&quot;left&quot;, ) .agg( pl.quantile('x', 0.1).alias('q10') ) ) </code></pre> <p>There are around 17_000 different values in <code>id1</code> and 20 in <code>id2</code>. I use <code>sink_parquet</code> instead of <code>collect</code> and the engine is &quot;streaming&quot;.</p> <p>I have tried Hive partitioning the data by the <code>time</code> column, but that doesn't seem to make a difference. If I don't use <code>group_by</code> there is no memory issue, but it does take a loong time to run.</p> <p>I suppose I can split and save the dataset by the <code>id</code> columns to make it fit in memory, but it would be nice to avoid.</p>
<python><python-polars>
2025-07-30 19:03:06
0
1,177
robertdj
79,720,251
12,011,020
Polars schema_override for Datetimes as string
<h2>Issue</h2> <p>I have data in form of a list of dicts (see MRE below). To make everything type strict I would always like to pass in the expected schema (dtypes) when I read in this data. This option is given in the <code>pl.DataFrame</code> constructor with either <code>schema</code> or <code>schema_overrides</code>. However I frequently run into trouble with the Datetime columns in the schema. Especially when they presented as strings in the dictionaries</p> <h3>Traceback</h3> <pre class="lang-bash prettyprint-override"><code>polars.exceptions.ComputeError: could not append value: &quot;2020-02-11&quot; of type: str to the builder; make sure that all rows have the same schema or consider increasing `infer_schema_length` </code></pre> <h3>Question</h3> <p>Is there a way to &quot;automatically&quot; parse datetime strings when I construct the Dataframe (or use the <code>pl.from_dicts()</code> method)? Something comparable to the solution for data that is present as timestamps (int) in the dictionary of the data implemented early 2024 (<a href="https://github.com/pola-rs/polars/issues/13652" rel="nofollow noreferrer">github issue</a>)?</p> <p>Is there something similar for date information present as string (e.g. <code>2022-01-01</code>)?</p> <p>Or do I have to drop from my <code>schema_override</code> every pl.Datetime key and then later on convert this manually via</p> <pre><code>with_columns(pl.col(list_dropped_datetime_cols).cast(pl.Datetime)) </code></pre> <h3>MRE</h3> <pre class="lang-py prettyprint-override"><code>import polars as pl schema_override = { &quot;some_int_override&quot;: pl.Int8, &quot;some_date_override&quot;: pl.Datetime, } dict_data = [ { &quot;some_int_override&quot;: 1, &quot;some_date_override&quot;: &quot;2020-02-11&quot;, &quot;some_date&quot;: &quot;2025-02-11&quot;, } ] df_naiive = pl.DataFrame(dict_data) print(df_naiive) df_schema_override = pl.DataFrame(dict_data, schema_overrides=schema_override) print(df_schema_override) </code></pre>
<python><datetime><python-polars>
2025-07-30 16:05:15
0
491
SysRIP
79,720,198
12,423,732
NiceGUI AgGrid Flashing Cell Background
<p><strong>Background</strong></p> <p>I’m using NiceGUI to render an AG Grid table from a Pandas DataFrame in a Python web app. I want cells whose values change to flash in red when I update my data, similar to the “flashing cells” examples in AG Grid and Plotly Dash:</p> <p>AG Grid docs on cell flashing: <a href="https://www.ag-grid.com/javascript-data-grid/cell-flash/" rel="nofollow noreferrer">https://www.ag-grid.com/javascript-data-grid/cell-flash/</a><br /> Plotly Dash AG Grid flashing example: <a href="https://dash.plotly.com/dash-ag-grid/flashing-cells" rel="nofollow noreferrer">https://dash.plotly.com/dash-ag-grid/flashing-cells</a></p> <p>I’ve tried various combinations of:</p> <ul> <li>setting <code>enableCellChangeFlash: true</code> (both globally and per‑column)</li> <li>using <code>deltaRowDataMode: true</code> with <code>getRowId</code></li> <li>etc.</li> </ul> <p>…but nothing seems to cause the flash animation to ever trigger.</p> <p>I am trying to render an AG Grid with two fields: Name and Score. Have cells in the Score column flash red when their values change after I click a button that randomly updates scores.</p> <p><strong>Example</strong></p> <pre><code>from nicegui import ui import pandas as pd, random # 1) Inject a small script to register the HighlightChangesModule from the global agGrid ui.add_head_html(''' &lt;script&gt; // agGrid is already loaded by NiceGUI; pull ModuleRegistry &amp; HighlightChangesModule from it const { ModuleRegistry, HighlightChangesModule } = window.agGrid; ModuleRegistry.registerModules([ HighlightChangesModule ]); &lt;/script&gt; ''') # 2) Override the default green flash to red, scoped under the Alpine theme ui.add_head_html(''' &lt;style&gt; .ag-theme-alpine .ag-cell-data-changed { background-color: red !important; color: white !important; transition: background-color 0.5s ease; } &lt;/style&gt; ''') # 3) Sample DataFrame df = pd.DataFrame({'id':[0,1,2], 'name': ['Alice','Bob','Charlie'], 'score': [10,20,30]}) previous_df = df.copy() # 4) Create the grid inside a container with the Alpine theme with ui.column().classes('w-full ag-theme-alpine'): grid = ui.aggrid( { &quot;columnDefs&quot;: [{&quot;field&quot;: &quot;id&quot;}, {&quot;field&quot;: &quot;name&quot;}, {&quot;field&quot;: &quot;score&quot;, 'enableCellChangeFlash': True}], &quot;rowData&quot;: df.to_dict(&quot;records&quot;), &quot;animateRows&quot;: True, &quot;cellFlashDelay&quot;: 500, # ms before fade starts &quot;cellFadeDelay&quot;: 1000, # ms to fade out 'deltaRowDataMode': True, # 'getRowId': 'js:function(params) { return params.data.id; }', # &quot;rowSelection&quot;: &quot;multiple&quot;, } ) # 5) Update function: change data, reload, then flash changed cells def update_scores(): global df, previous_df previous_df = df.copy() df['score'] = [random.randint(0,100) for _ in df.index] # 5a) Mutate the grid's data and redraw grid.options['rowData'] = df.to_dict('records') grid.update() # 5b) Detect which cells changed and call flashCellsx changed = [] for i in df.index: if df.at[i,'score'] != previous_df.at[i,'score']: changed.append({'rowIndex': i, 'column': 'score'}) grid.run_method('api.flashCells', {'cells': changed}) # 6) Trigger button ui.button('Update Scores', on_click=update_scores) ui.run(host=&quot;127.0.0.1&quot;) </code></pre>
<python><nicegui>
2025-07-30 15:28:46
1
319
Iain MacCormick
79,720,124
4,948,719
uv `required-environments`: only require that some extras build in some environments
<p><a href="https://github.com/astral-sh/uv/issues/14974#issue-3277304321" rel="nofollow noreferrer">cross-posted to the uv issue tracker</a></p> <p>As all dependency problems, this one originates from <code>torch</code>.</p> <p>I need to support cuda 12.4 on x86_64, and cuda 12.8 on arm64. Importantly, pytorch does not provide builds for cuda 12.4 on arm64 platforms.</p> <p>so I wrote my <code>pyproject.toml</code> the following way, using, specifying the <code>extra</code> field in <code>required-environments</code> to limit which version of cuda uv would try to resolve for depending on which platform</p> <pre class="lang-ini prettyprint-override"><code>[project] name = &quot;minimal_reproduction&quot; version = &quot;0.1.0&quot; description = &quot;&quot; readme = &quot;README.md&quot; requires-python = &quot;&gt;=3.13&quot; dependencies = [] [tool.uv] conflicts = [ [{extra=&quot;cu128&quot;}, {extra=&quot;cu124&quot;}] ] required-environments = [ &quot;sys_platform == 'linux' and platform_machine == 'x86_64' and extra == 'cu124'&quot;, &quot;sys_platform == 'linux' and platform_machine == 'aarch64' and extra == 'cu128'&quot;, ] index-strategy = &quot;unsafe-best-match&quot; [project.optional-dependencies] cu124 = [ &quot;torch==2.6.0+cu124&quot;, ] cu128 = [ &quot;torch==2.7.0+cu128&quot;, ] [[tool.uv.index]] name = &quot;torch-cu128&quot; url = &quot;https://download.pytorch.org/whl/cu128&quot; [[tool.uv.index]] name = &quot;torch-cu124&quot; url = &quot;https://download.pytorch.org/whl/cu124&quot; </code></pre> <p>But it seems <code>uv</code> ignores the <code>extra</code> field, and still complains that it cannot build <code>cu124</code> on <code>arm</code></p> <pre><code>× No solution found when resolving dependencies for split (python_full_version == │ '3.13.*' and platform_machine == 'aarch64' and sys_platform == 'linux' and extra == │ 'cu128'): ╰─▶ Because torch==2.6.0+cu124 has no `python_full_version == '3.13.*' and platform_machine == 'aarch64' and sys_platform == 'linux' and extra == 'cu128'`-compatible wheels and test[cu124] depends on torch==2.6.0+cu124, we can conclude that test[cu124]'s requirements are unsatisfiable. And because your project requires test[cu124], we can conclude that your project's requirements are unsatisfiable. hint: The resolution failed for an environment that is not the current one, consider limiting the environments with `tool.uv.environments`. </code></pre> <p>Is there a way to limit which extras uv will try to build on certain environments?</p>
<python><pyproject.toml><uv>
2025-07-30 14:28:04
1
1,834
tbrugere
79,720,031
13,435,688
Plot characters on a graph using matplotlib without spaces
<p>I want to plot some data, but the problem that I am facing is that I am unable to plot the data without spaces between them.</p> <p>The data should print out a letter like this:</p> <p><a href="https://i.sstatic.net/GsqVr1wQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GsqVr1wQ.png" alt="enter image description here" /></a></p> <p>My code:</p> <pre><code>import matplotlib.pyplot as plt import numpy as np data = [ {'x': 0, 'char': '█', 'y': 0}, {'x': 0, 'char': '█', 'y': 1}, {'x': 0, 'char': '█', 'y': 2}, {'x': 1, 'char': '▀', 'y': 1}, {'x': 1, 'char': '▀', 'y': 2}, {'x': 2, 'char': '▀', 'y': 1}, {'x': 2, 'char': '▀', 'y': 2}, {'x': 3, 'char': '▀', 'y': 2}, ] # Create the plot fig, ax = plt.subplots() # Plot each character for item in data: ax.text(item['x'], item['y'], item['char'], fontsize=14, family='monospace', ha='center', va='bottom') # Compute axis limits x_vals = [d['x'] for d in data] y_vals = [d['y'] for d in data] ax.set_xlim(min(x_vals) - 0.5, max(x_vals) + 0.5) ax.set_ylim(min(y_vals) - 0.5, max(y_vals) + 0.5) # Remove padding and axes ax.axis('off') # # Remove margins plt.subplots_adjust(left=0, right=1, top=1, bottom=0) plt.margins(0, 0) plt.show() </code></pre> <p>The result of the code:</p> <p><a href="https://i.sstatic.net/oTqEvyXA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oTqEvyXA.png" alt="enter image description here" /></a></p> <p>How do I plot the points without the large spaces?</p> <p><strong>Answers criteria:</strong></p> <ul> <li>Answers using other libraries will also be accepted</li> <li>It must print out the characters in the <em>data</em> array</li> <li>x and y coordinates will never be less than 0, but may be infinitely large (i.e. solution must work with another array with a lot more data and different characters)</li> </ul>
<python><matplotlib>
2025-07-30 13:27:29
2
952
Franco
79,719,672
9,974,205
Problem refreshing excel data from Python Script
<p>I have a python code that theoretically downloads data to an excel file and then uses the refresh all command for it to execute some Power Query on the downloaded data.</p> <pre><code>import os import shutil import win32com.client staging_dir = r&quot;C:\\XXX\\XXX\\XXX\\StagingFolder&quot; if not os.path.exists(staging_dir): os.makedirs(staging_dir) destination_path_staging = os.path.join(staging_dir, destination_name) print(f&quot;Temporary destination path: {destination_path_staging}&quot;) shutil.copy2(source, destination_path_staging) print(f&quot;File copied to temporary folder: {destination_path_staging}&quot;) excel = win32com.client.Dispatch(&quot;Excel.Application&quot;) excel.Visible = False # Change to True if you want to see Excel print(&quot;Opening Excel file in temporary folder...&quot;) wb = excel.Workbooks.Open(destination_path_staging) print(f&quot;File opened: {destination_path_staging}&quot;) wb.RefreshAll() print(&quot;Refreshing data...&quot;) excel.CalculateUntilAsyncQueriesDone() print(&quot;Data refreshed.&quot;) wb.Save() wb.Close() excel.Quit() print(&quot;Data update completed in temporary folder.&quot;) </code></pre> <p>StagingFolder is considered trustworthy by Excel, I have configured it as such. Unfortunately, my code never reaches <code>print(&quot;Data refreshed.&quot;)</code>.</p> <p>If I refresh manually the created excel file in StagingFolder, it just takes 5 minutes to refresh.</p> <p>Can someone please help me fix this?</p>
<python><excel><windows><time><refresh>
2025-07-30 08:47:37
0
503
slow_learner
79,719,547
1,648,712
Can't convert geometry from back to WKT after exporting from MySQL
<p>I have a MySQL table with a <code>multipolygon</code> column, which has OSM geometry for each country's territorial waters. This is exported (by AWS) to a parquet file every night, which I then download - but I can't load it back into MySQL locally through a Python script.</p> <p>Here is my code:</p> <pre><code>from shapely import wkb import pandas as pd df = pd.read_parquet(&quot;...&quot;) name = df.iloc[26][&quot;name&quot;] coords = df.iloc[26][&quot;coords&quot;] geometry = df.iloc[26][&quot;geometry&quot;] print(&quot;name&quot;, name) print(&quot;coords&quot;, coords) print(wkb.loads(coords).wkt) print(&quot;geometry&quot;, geometry) print(wkb.loads(geometry).wkt) </code></pre> <p>outputs</p> <pre><code>name Niue coords b'\x00\x00\x00\x00\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00' POINT (7.291122019556398e-304 0) geometry b'\x00\x00\x00\x00\x01\x06\x00\x00\x00\x01\x00\x00\x00\x01\x03\x00\x00\x00\x01\x00\x00\x00F\x00\x00\x00.Bg\x88\x19De\xc0\x1c\x17\xc4\xf6I\xf82\xc0Y\xffB\xeaQDe\xc0h\x14\xdcY\x05\xfb2\xc0\xa8\xda\x13\xc9\xbcDe\xc0\xaf\xbd\x05\xc8\x86\x023\xc0\t\xc9]\xdf\xe2De\xc0n\x81\xa9\x0b\x1d\x063\xc0N\x19\xee\x12\x0bEe\xc0Kl\x88;(\x0c3\xc0\xc0c\xd3\xa5\x1aEe\xc0\xab\xaa&lt;&amp;\xf7\x143\xc0P\xc5\x8d[\x0cEe\xc0:\x9a~\xe41\x1a3\xc0\xc9H\xac{\xdaDe\xc0\x1d\xe76\xe1^!3\xc0\xaf\x00KS\xa9De\xc0.\xfc\x85\xc3w%3\xc0T\xbdF\xdc-De\xc0\xa2\x9b\xfd\x81r-3\xc0\x92xy:\xd7Ce\xc0p\xe9=:\x1a43\xc0/\x17\xf1\x9dXCe\xc0\xd84\xa5\xab*&lt;3\xc0+\xce\x07\xba\xacBe\xc0\x12\x93&amp;\x94tC3\xc0\xb6i\xc7h\xb8Ae\xc0\xb2\xaa\x14\xf1\x8cJ3\xc0\xe1\x85cL0Ae\xc0\xfc\xefw\xde|M3\xc0\x19R\xea7x@e\xc0Iw!g\xabP3\xc0\x08\xb7k\xf8\x8c?e\xc0-\xf9\x8e\xd0\x85S3\xc0\xb4\xab\x90\xf2\x93&gt;e\xc0\x91\xcee\x0fYW3\xc0\xee\xe2\xa2\xffJ&gt;e\xc0yk\xaa8\x1fX3\xc0/|\x8e\xd9\xe3&lt;e\xc0\n\xe1,\x80\x84Z3\xc0^emS&lt;&lt;e\xc0,\xf2\xeb\x87\xd8Z3\xc0\x91J\x0c]\xe4;e\xc0\xac\xf7ez\xd3Z3\xc0\x07\xaf\x02!\xbe:e\xc0\x06]\x0c/\xd2Y3\xc0\xdaH#\xb0\x03:e\xc0]\xd0\x8c\x8f`X3\xc0Oyt#\xac8e\xc0\x0b\xad\x98[kU3\xc0\x80\xb7@\x82&quot;8e\xc0\x14\xb5\x8fp\xb5S3\xc0c0\xda&gt;\xff6e\xc0\xe9pci\x96N3\xc0\xc6\xcf5\x82\x036e\xc0\xff@\xb9m\xdfG3\xc0\xec\xba\\i\xbe4e\xc0\x95OE\xe0\xfe&lt;3\xc0:\xbbk\xbf\xeb3e\xc0\xceZe\x01n53\xc0\x14\x03\xc9\xe0M3e\xc0\xd5\xca)\xa6w.3\xc0\x00\x98!D\xe82e\xc0\x884\x85)%)3\xc0\xae$\x1e\xf5|2e\xc0@1\xb2d\x8e!3\xc0\x955\xa0\x94?2e\xc0{\xe2\x94\x14\xb3\x1b3\xc0\x82\x88y\xfb\x182e\xc0\x00!\xed\xda9\x143\xc0\xacY\xc25\x122e\xc0w\x0b\xc9n\x0b\x0f3\xc0\xcag\x1ee)2e\xc0\xac\xbc.\r\r\x073\xc0\xe4\xe8\xe0OC2e\xc0\xa7\xdb#\xe58\x033\xc0KC\xe8\xfb\x842e\xc0K+t\x14\xd6\xfc2\xc0*\x0b\x15\xd2\xd02e\xc0\xfe\xa1\xf4\xe0I\xf82\xc0h\xa9ad\xee2e\xc0\x14\xf3\x07^\x88\xf42\xc0Z\\\x88z&amp;3e\xc0ze(8\x06\xee2\xc0\xce&amp;v#b3e\xc0\xda\xc8uS\xca\xe72\xc0\xd2\xf2M\xe4\x933e\xc0\x1a%\x9e\xa2\xd9\xe32\xc0\x94h\xc9\xe3\xe93e\xc0\xa8mho\x95\xde2\xc0\xf74q\xbcL4e\xc0\x94m\xe0\x0e\xd4\xd92\xc0\x1e2\x8a\x8a\x1d5e\xc0\x9e}\xe5Az\xd22\xc0Cz\xe5\xd5\x146e\xc0\xbf\x9fu\xe8O\xcc2\xc0vy)+|6e\xc0\xb8\x97\x8f\xff]\xca2\xc0\x00\x82\x83\x07\x177e\xc0\xbf\xc28n\xe7\xc72\xc0\x93\xa5{B[7e\xc0\xc62\xfd\x12\xf1\xc62\xc0\xda7\xf7W\x8f8e\xc0:\xa0[\x9e\xbd\xc32\xc0w\xa5\xc0]\x919e\xc0\xe7\'\x8b\xb1\x02\xc22\xc0**\xd1Hz:e\xc0\x0e\x05&quot;z\x08\xc12\xc0\x98m\xa7\xad\x91;e\xc0nza|\xe2\xc02\xc0\xd4Z=\xcc\x03&lt;e\xc0\x85\x11\xb1w5\xc12\xc0\xea\xbd\xe7v\xe5&lt;e\xc0\x8c\xca&lt;\xa8z\xc22\xc0R97\xb7S&gt;e\xc0\x80\xd1\x8a\x14o\xc52\xc0\x88i\xdf\xdc\xdf&gt;e\xc0\xfc\x12;d\x14\xc72\xc0\x7f\xf6#Ed?e\xc0\x0b\x9a\x96X\x19\xc92\xc0D*1tQ@e\xc0\x03\x0b`\xca\xc0\xcd2\xc0\xa0;\x1c\xb8\xde@e\xc0\xe1\x8e\x81\x86H\xd12\xc0\xe2\xf8\xeb_cAe\xc03\t\xcdd\xee\xd42\xc0\xac\x91\xb8&quot;\x0cBe\xc0\x9c\x84w^\xcf\xda2\xc0\xc7\xf6Z\xd0{Be\xc0\xfai\x92\x0e\xc5\xdf2\xc0\x00\x14m\xd8\x00Ce\xc0x\xa5\x1b\x17\xc4\xe42\xc0\x04kA\x94TCe\xc0\x03\xc0g\xda\xb4\xe82\xc0\x05\xe7B\x9b\x92Ce\xc0\x86!r\xfaz\xec2\xc0A\xc1{\xfd\xffCe\xc0\x11&quot;t+\xdf\xf42\xc0.Bg\x88\x19De\xc0\x1c\x17\xc4\xf6I\xf82\xc0' POINT (8.814425696238783e-280 8.658207398329322e-304) </code></pre> <p>even though the query on the original DB:</p> <pre><code>select st_aswkt(w.coords), st_aswkt(w.geometry) from world_location as w where w.name = 'Niue' </code></pre> <p>outputs</p> <pre><code>POINT(0 0) MULTIPOLYGON(((-170.1281168 -18.9698786,-170.1349994 -18.9805504,-170.1480451 -19.0098691,-170.1526944 -19.0238807,-170.1576018 -19.0474889,-170.1595029 -19.0818962,-170.1577585 -19.1023238,-170.1516703 -19.130354,-170.1456696 -19.1463587,-170.1305982 -19.1775285,-170.120023 -19.2035252,-170.1045675 -19.2350261,-170.0835848 -19.2634976,-170.0537609 -19.2912131,-170.0371458 -19.3026866,-170.0146751 -19.3151154,-169.9859583 -19.3262606,-169.95556 -19.3412027,-169.9466551 -19.3442264,-169.9028137 -19.3535843,-169.882364 -19.3548665,-169.8716264 -19.3547894,-169.8357091 -19.3508634,-169.8129502 -19.3452234,-169.771013 -19.3336694,-169.7542125 -19.3269873,-169.7186579 -19.3069826,-169.6879283 -19.280753,-169.6482436 -19.2382641,-169.6225278 -19.2087098,-169.6032566 -19.1815132,-169.5908528 -19.1607233,-169.5777536 -19.131079,-169.5702613 -19.1082013,-169.5655496 -19.0790078,-169.5647229 -19.0587682,-169.5675531 -19.0275429,-169.5707168 -19.0125869,-169.5787334 -18.9876416,-169.5879908 -18.9698773,-169.5916006 -18.9552058,-169.5984471 -18.9297824,-169.6057298 -18.905431,-169.6118032 -18.8900396,-169.622301 -18.8694677,-169.6343672 -18.850892,-169.6598561 -18.822178,-169.6900434 -18.7980943,-169.7026573 -18.7904968,-169.7215612 -18.7808751,-169.7298901 -18.777116,-169.767498 -18.7646121,-169.7989949 -18.7578536,-169.8274273 -18.7540356,-169.861533 -18.7534559,-169.8754636 -18.7547221,-169.9030108 -18.7596841,-169.9477192 -18.7712262,-169.964827 -18.7776549,-169.98099 -18.785543,-170.0099431 -18.803723,-170.0271874 -18.8175129,-170.0433807 -18.8317626,-170.0639814 -18.8547267,-170.077614 -18.8741006,-170.0938532 -18.8936171,-170.1040746 -18.9090096,-170.1116463 -18.9237515,-170.1249988 -18.9565303,-170.1281168 -18.9698786))) </code></pre> <p>(I'm aware that Niue isn't actually at (0,0))</p> <p>Note that the export to Parquet is handled by AWS RDS exports, so I do not control it.</p>
<python><mysql><shapely><geo>
2025-07-30 06:46:33
0
414
jimbofreedman
79,719,484
9,223,023
Does python STORE_SUBSCR bytecode instruction push back to the stack?
<p>The official description of STORE_SUBSCR bytecode instruction (<a href="https://docs.python.org/3/library/dis.html" rel="nofollow noreferrer">https://docs.python.org/3/library/dis.html</a>) gives the following explanation of how it works:</p> <pre class="lang-py prettyprint-override"><code>STORE_SUBSCR: key = STACK.pop() container = STACK.pop() value = STACK.pop() container[key] = value </code></pre> <p>It seems strange that the container needs to be popped from the stack but does not need to be pushed back (other bytecode ops do push back the operation results). Why is this so? Is this correct?</p> <p>I've tried searching the official docs as well as other sites.</p>
<python><bytecode>
2025-07-30 05:43:27
0
1,203
Petras Purlys
79,719,460
13,946,204
How to use yaml_file parameter for pydantic settings
<p>Here is my example code:</p> <pre><code>from pathlib import Path from pydantic_settings import BaseSettings, SettingsConfigDict class Settings(BaseSettings): model_config = SettingsConfigDict( yaml_file=Path('c.yaml'), yaml_config_section='blah', ) port: int s = Settings() </code></pre> <p>and my <code>c.yaml</code> stored in the same directory:</p> <pre class="lang-yaml prettyprint-override"><code>blah: port: 123 </code></pre> <p>When I'm running Python file I'm getting message:</p> <pre class="lang-none prettyprint-override"><code>pydantic_core._pydantic_core.ValidationError: 1 validation error for Settings port Field required [type=missing, input_value={}, input_type=dict] For further information visit https://errors.pydantic.dev/2.11/v/missing </code></pre> <p>That means that Pydantic isn't even trying to read YAML file. I found multiple answers how to read YAML and most of it is about how to read YAML, convert it to dict and pass the dict as constructor arguments. But I could not find any information why this <code>yaml_file</code> and <code>yaml_config_section</code> exists and what it does.</p> <p>Is here a simple way to read YAML using these arguments?</p>
<python><yaml><pydantic>
2025-07-30 05:03:21
1
9,834
rzlvmp
79,719,422
1,269,634
Python Wand: MagickReadImage returns false, but did not raise ImageMagick exception
<p>I've got some long-standing code in a Django code base that reads in a PDF and uses <a href="https://docs.wand-py.org/" rel="nofollow noreferrer">Wand</a> to take a screenshot of the first page of the PDF, which is then displayed on the website. We recently migrated servers (an upgrade from Ubuntu 22 LTS to 24 LTS), and something broke, and I can't for the life of me figure it out.</p> <p>First, some potentially useful information:</p> <ul> <li>OS: Ubuntu 24 LTS</li> <li>Python 3.12.3</li> <li>Django 5.2.4</li> <li>Wand 0.6.13</li> <li>Web server: nginx 1.24.0</li> <li>gunicorn version: 23.0.0</li> <li>We are <em>not</em> using Docker. This Django app is running directly on the server with a local virtual environment.</li> </ul> <p>The PDF-to-PNG code is on the admin side of the web app. Here's the heart of it:</p> <pre class="lang-py prettyprint-override"><code>with Image(filename=pdf_location) as pdf: with Image(pdf.sequence[0]) as first_page_pdf: with first_page_pdf.convert('png') as first_page_png: first_page_png.background_color = Color('white') first_page_png.alpha_channel = 'remove' return first_page_png.make_blob() </code></pre> <p>When I upload a PDF to the admin site for processing, I'm getting this error:</p> <p><code>MagickReadImage returns false, but did not raise ImageMagick exception. This can occur when a delegate is missing, or returns EXIT_SUCCESS without generating a raster.</code></p> <p>I have tried everything I can think of after a ton of searching, but nothing is working:</p> <ul> <li>I <em>do</em> have ghostscript installed:</li> </ul> <pre><code>$ gs --version 10.02.1 $ which gs /usr/bin/gs </code></pre> <ul> <li>My ImageMagick <code>policy.xml</code> contains the default content found in the <code>policy-debian.xml</code> file that's included with the ImageMagick package, with the notable exception of ensuring that <code>&lt;policy domain=&quot;coder&quot; rights=&quot;read|write&quot; pattern=&quot;PDF&quot; /&gt;</code> is in the <code>policy.xml</code>. I can verify that the PDF policy is properly set:</li> </ul> <pre><code>$ identify -list policy Path: /etc/ImageMagick-6/policy.xml Policy: Resource name: disk value: 2GiB Policy: Resource name: map value: 2048MiB Policy: Resource name: memory value: 1024MiB Policy: Resource name: area value: 256MP Policy: Resource name: height value: 32KP Policy: Resource name: width value: 32KP Policy: Undefined rights: None Policy: Path rights: None pattern: @* Policy: Delegate rights: None pattern: URL Policy: Delegate rights: None pattern: HTTPS Policy: Delegate rights: None pattern: HTTP Policy: Coder rights: Read Write pattern: PDF Path: [built-in] Policy: Undefined rights: None </code></pre> <ul> <li>Conversion of the same PDF file <em>does</em> work when I do it manually with both ghostscript and imagemagick (as suggested <a href="https://stackoverflow.com/a/57273281/1269634">here</a>):</li> </ul> <pre><code>$ gs -sDEVICE=pngalpha -o page-%03d.png -r120 pdf-test.pdf GPL Ghostscript 10.02.1 (2023-11-01) Copyright (C) 2023 Artifex Software, Inc. All rights reserved. This software is supplied under the GNU AGPLv3 and comes with NO WARRANTY: see the file COPYING for details. Processing pages 1 through 1. Page 1 Loading font ArialMT (or substitute) from /usr/share/ghostscript/10.02.1/Resource/Font/NimbusSans-Regular </code></pre> <p>and</p> <pre><code>$ convert -density 120 pdf-test.pdf page-%03d.png </code></pre> <p>Both correctly create <code>page-001.png</code> when using <a href="https://www.orimi.com/pdf-test.pdf" rel="nofollow noreferrer">this test PDF</a>.</p> <ul> <li>And finally, when doing this manually within my Django shell (i.e., within the same venv that nginx uses), it also works properly:</li> </ul> <pre class="lang-py prettyprint-override"><code>$ ./manage_dev.py shell 19 objects imported automatically (use -v 2 for details). Python 3.12.3 (main, Jun 18 2025, 17:59:45) [GCC 13.3.0] on linux Type &quot;help&quot;, &quot;copyright&quot;, &quot;credits&quot; or &quot;license&quot; for more information. (InteractiveConsole) &gt;&gt;&gt; from wand.image import Image, Color &gt;&gt;&gt; with Image(filename='pdf-test.pdf') as pdf: ... with Image(pdf.sequence[0]) as first_page_pdf: ... with first_page_pdf.convert('png') as first_page_png: ... first_page_png.background_color = Color('white') ... first_page_png.alpha_channel = 'remove' ... blob = first_page_png.make_blob() ... with open('screenshot.png', 'wb') as png: ... png.write(blob) ... 24420 </code></pre> <ul> <li>One thing that is a bit bizarre is that when I list the ImageMagick delegates, <code>gs</code> is not listed. This is the only thing I can think of that might be causing the issue, but I can't figure out how to get it listed:</li> </ul> <pre><code>$ convert -list configure | grep DELEGATES DELEGATES bzlib djvu fftw fontconfig freetype heic jbig jng jpeg lcms lqr lzma openexr openjp2 pango png ps raw tiff webp wmf x xml zlib zstd DELEGATES bzlib djvu fftw fontconfig freetype heic jbig jng jp2 jpeg lcms lqr ltdl lzma openexr pangocairo png raw tiff webp wmf x xml zlib </code></pre> <p>Note that this is not a typo; there are 2 <code>DELEGATES</code> lines here, and neither contains <code>gs</code>.</p> <p>To reiterate, this code has worked perfectly for many years prior to this server migration/upgrade, so all this leads me to believe that it must be some configuration file (ImageMagick, nginx?) somewhere outside of my code, but I just can't nail it down. I'm really hoping that one of you might have some insights.</p> <p>Thanks in advance!</p> <p>[edit]</p> <p>Here are some responses to comments below:</p> <ul> <li>I don't see <code>pdf</code> listed when I do <code>convert -version</code>. Should I?</li> </ul> <pre><code>$ convert -version Version: ImageMagick 6.9.12-98 Q16 x86_64 18038 https://legacy.imagemagick.org Copyright: (C) 1999 ImageMagick Studio LLC License: https://imagemagick.org/script/license.php Features: Cipher DPC Modules OpenMP(4.5) Delegates (built-in): bzlib djvu fftw fontconfig freetype heic jbig jng jp2 jpeg lcms lqr ltdl lzma openexr pangocairo png raw tiff webp wmf x xml zlib </code></pre> <ul> <li><p><code>convert -list configure | grep pdf</code> doesn't return any results. I don't see <code>pdf&lt;=&gt;eps...</code>.</p> </li> <li><p>I didn't have <code>&lt;policy domain=&quot;module&quot; rights=&quot;none&quot; pattern=&quot;{PS,PDF,XPS}&quot; /&gt;</code> in my <code>policy.xml</code>. I added that line and set <code>rights</code> to <code>read|write</code>, but I'm still getting the same error. Setting it to <code>read</code> actually introduced a <code>PolicyError</code>: <code>attempt to perform an operation not allowed by the security policy `PDF' @ error/module.c/OpenModule/1293</code>.</p> </li> </ul>
<python><django><nginx><imagemagick><wand>
2025-07-30 03:55:18
1
2,441
Geoff
79,719,282
619,804
How do I update my get_history() function to use langchain_postgres.PostgresChatMessageHistory instead of langchain_community?
<p>I’m updating a LangChain-based app to use the new langchain_postgres package instead of the deprecated langchain_community.chat_message_histories.</p> <p>Previously, my function looked like this:</p> <pre><code>from langchain_community.chat_message_histories import PostgresChatMessageHistory import os def get_history(session_id: str) -&gt; PostgresChatMessageHistory: connection_string = os.getenv(&quot;POSTGRES_CONNECTION_STRING&quot;) if not connection_string: raise ValueError(&quot;POSTGRES_CONNECTION_STRING environment variable is not set&quot;) return PostgresChatMessageHistory( session_id=session_id, connection_string=connection_string, table_name=&quot;chat_message_history&quot;, ) </code></pre> <p>After switching from:</p> <p><code>from langchain_community.chat_message_histories import PostgresChatMessageHistory</code></p> <p>to:</p> <p><code>from langchain_postgres import PostgresChatMessageHistory</code></p> <p>…I get this warning in VSCode:</p> <p><code>Expected 2 more positional arguments (Pylance reportCallIssue)</code></p> <p><strong>What I’ve Tried</strong></p> <p>I understand that the new PostgresChatMessageHistory requires more arguments, but I can’t find updated documentation or examples that clarify which ones are mandatory and what values to use for them.</p> <p><strong>Question</strong></p> <p>What is the correct way to rewrite my get_history() function to work with langchain_postgres.PostgresChatMessageHistory?</p> <p>Thanks in advance!</p>
<python><langchain>
2025-07-29 22:30:57
1
2,366
user619804
79,719,041
2,417,578
How to use a Python regular expression which depends on pre-initialised back-references?
<p>I want to have a regular expression which includes back-references to strings it hasn't actually seen in the current string.</p> <p>That is, I've already figured out what the literal values of the references should be, but they keep changing at run-time, so if I were to dynamically insert them into the regular expression then I would have to recompile the regular expression over and over as they changed.</p> <p>Here's a naive approach:</p> <pre><code>def rematch(pattern, string, **kwargs): for k,v in kwargs.items(): pattern = pattern.replace(f&quot;(?P={k})&quot;, v) return re.match(pattern, string) </code></pre> <p>Where we can see that every time <code>kwargs</code> values change the regex changes as well, and has to be recompiled.</p> <p>A more sophisticated approach which avoids recompilation is to prefix the regex with a structured pattern, compiling that just once, and then just append a matching structure to the front of the text to be searched:</p> <pre class="lang-py prettyprint-override"><code>import re def rematch(pattern, string, **kwargs): unique = &quot;pvDXpEvWblv93xul19ejlQ&quot; initlist = [ f&quot;{k}='(?P&lt;{k}&gt;.*)'&quot; for k in kwargs.keys() ] pattern = f&quot;{', '.join(initlist)}:{unique}:(&quot; + pattern + &quot;)&quot; initlist = [ f&quot;{k}='{v}'&quot; for k, v in kwargs.items() ] string = f&quot;{', '.join(initlist)}:{unique}:&quot; + string return re.match(pattern, string) </code></pre> <p>This is just illustrative, but the point is that the generation and compilation of the regex <em>could</em> be memoised with a bit of effort.</p> <p>What it does is create a prefix for the string to be searched, containing patterns like <code>foo='bar',</code> and a corresponding regex prefix <code>foo='(?P&lt;foo&gt;.*)',</code> where <code>(?P&lt;foo&gt;.*)</code> matches <code>.*</code> and stores the result to a back-ref named <code>foo</code>. Later on in the same regex this is expected to appear again in the form <code>(?P=foo)</code> which must match the value stored in <code>foo</code>, which in this example is the literal string <code>bar</code>.</p> <p>There's also an arbitrary barrier string <code>:pvDXpEvWblv93xul19ejlQ:</code> between the initialisation and the original regex, to avoid confusion with whatever the structure of the original regex is.</p> <p>And here's a simplified, contrived example of how that might be used:</p> <pre class="lang-py prettyprint-override"><code> some_code = [ &quot;function foo(in arg1_in, out arg2_out, inout arg3) {&quot;, &quot; arg1_in = 123;&quot;, &quot; arg2_out = 'hello';&quot;, &quot; arg3 = arg1_in;&quot;, &quot;}&quot;, &quot;function bar(inout chatterbox, in fyi, out result) {&quot;, &quot; result = chatterbox;&quot;, &quot; chatterbox = fyi;&quot;, &quot; chatterbox = result;&quot;, &quot;}&quot;, ] proto = None for line in some_code: if (fnmatch := re.match(r&quot;function (?P&lt;fnname&gt;\w+)[(]((\bin (?P&lt;in&gt;\w+)|\bout (?P&lt;out&gt;\w+)|inout \w+)[, ]*)+[)]&quot;, line)): print(fnmatch[0], fnmatch.groupdict()) proto = fnmatch elif proto: if (oops := rematch(&quot;.*((?P=in)) = .*|.* = ((?P=out))&quot;, line, **proto.groupdict())): print(f&quot;`{line}`: can not use {oops[1] or oops[2]} this way.&quot;) </code></pre> <p>But is there a better, cleaner way? These implementations have a lot of warts. It only works for <code>match()</code>, while <code>search()</code> would need a different regex, for example.</p>
<python><python-3.x><regex>
2025-07-29 17:51:00
1
4,990
sh1
79,718,887
2,071,807
Filter Django RangeField by comparing to a point, not to another range
<p>The <a href="https://docs.djangoproject.com/en/5.2/ref/contrib/postgres/fields/#comparison-functions" rel="nofollow noreferrer">PostgreSQL specific model fields</a> docs are very specific about how to compare one <code>RangeField</code> to another range. But how do you compare a range to a single point?</p> <p>For example, if I've got a model with <code>valid_range=DateTimeRangeField</code>, and I want to find all instances which are no longer valid, I need to do something like:</p> <pre class="lang-py prettyprint-override"><code>from django.utils import timezone as tz MyProduct.objects.filter(valid_range__lt=tz.now()) </code></pre> <p>But this isn't allowed. I thought I could use <code>fully_lt</code> but that's not allowed with a particular date either.</p> <p>How do I filter a <code>DateTimeRangeField</code> to find instances whose ranges ended before a certain datetime?</p>
<python><django><postgresql>
2025-07-29 15:42:37
1
79,775
LondonRob
79,718,724
1,484,601
How to create RGB healpix fits image?
<p>This python function creates 3 fits files, one per RGB channel:</p> <pre class="lang-py prettyprint-override"><code># HealpixFullArray is a numpy array of shape (N,3) and dtype uint8. # N: number of healpix pixels (in my case corresponding to nside 1024) def save_fits(healpix_map: HealpixFullArray, output_base: Path) -&gt; None: for i, channel in enumerate([&quot;red&quot;, &quot;green&quot;, &quot;blue&quot;]): output_path = output_base.parent / f&quot;{output_base.stem}_{channel}.fits&quot; healpy.write_map( output_path, healpix_map[:, i], nest=True, dtype=healpix_map.dtype, overwrite=True, ) logger.info(f&quot;Saved {channel} channel to {output_path}&quot;) </code></pre> <p>Following ds9 <a href="https://ds9.si.edu/doc/ref/file.html#FITSRGB" rel="nofollow noreferrer">instructions</a>, I can open a corresponding RGB image:</p> <pre class="lang-bash prettyprint-override"><code>ds9 -rgb -red foo.fits -green bar.fits -blue wow.fits # rgb image from 3 fits images </code></pre> <p>But I ideally I would like to create a single file containing all RGB data, i.e. that I could open via on of these commands:</p> <pre class="lang-bash prettyprint-override"><code>ds9 -rgbimage rgb.fits # load rgb image consisting of one fits file with 3 image exts </code></pre> <p>or</p> <pre class="lang-bash prettyprint-override"><code>ds9 -rgbcube cube.fits # load rgb image consisting of one fits data cube </code></pre> <p>No code I tried so far had any success.</p> <p>Note: the write_map function used above does not support RGB data: <a href="https://healpy.readthedocs.io/en/latest/generated/healpy.fitsfunc.write_map.html" rel="nofollow noreferrer">https://healpy.readthedocs.io/en/latest/generated/healpy.fitsfunc.write_map.html</a></p>
<python><fits><healpy><ds9>
2025-07-29 13:48:16
1
4,521
Vince
79,718,720
5,539,707
Python lambda returns 0
<p>Is there a simple way to return 0 with a <code>lambda</code> function using <code>and</code> and <code>or</code> operators? For example, consider this function to sum elements in an array:</p> <pre><code>sum = lambda tab: tab == [] and 0 or tab[0] + sum(tab[1:]) </code></pre> <p>Neutral element for number addition is 0 so it produces an <code>IndexError</code>: 0 is a falsy value.</p> <p>Thus with multiplication it works:</p> <pre><code>&gt;&gt;&gt; prod = lambda tab: tab == [] and 1 or tab[0] * prod(tab[1:]) &gt;&gt;&gt; prod([1, 2, 3]) 6 </code></pre> <p>Other numbers than 0, for example 1, are truthy.</p>
<python><recursion><lambda><sum>
2025-07-29 13:43:56
1
1,593
david
79,718,582
9,785,316
In pandas, how to write the word "nan" as string with to_excel?
<p>I have the reverse problem as described in <a href="https://stackoverflow.com/questions/33952142/prevent-pandas-from-interpreting-na-as-nan-in-a-string">Prevent pandas from interpreting &#39;NA&#39; as NaN in a string</a>.</p> <p>I work with older English text data and want to write the word &quot;nan&quot; (i.e. Modern English 'non(e)') into an Excel file.</p> <p>I want Excel to show this word as &quot;nan&quot; for a particular column. However, I don't want empty cells elsewhere in my dataframe be filled with any other Excel NaN-replacements.</p> <p>Instead, what I get when I use <code>df.to_excel()</code> is an empty cell.</p>
<python><pandas><string><nlp><nan>
2025-07-29 12:08:41
2
525
Mat
79,718,375
2,540,336
Why grouping a pandas series using the same series makes no sense?
<p>In the code example below I am grouping a pandas series using the same series but with a modified index.</p> <p>The groups in the end make no sense. There is no warning or error.</p> <p>Could you please help me understand what is going on? The modified index has clearly an effect but what exactly happens?</p> <pre class="lang-py prettyprint-override"><code> import pandas as pd # define series sf = pd.Series([10, 10, 20, 30, 30, 30], index=np.arange(6)+2) print(sf) # 2 10 # 3 10 # 4 20 # 5 30 # 6 30 # 7 30 # dtype: int64 # group by using the series itself &lt;- makes sense grouped = sf.groupby(sf) for name, group in grouped: print(f&quot;Group: {name}&quot;) print(group) # Group: 10 # 2 10 # 3 10 # dtype: int64 # Group: 20 # 4 20 # dtype: int64 # Group: 30 # 5 30 # 6 30 # 7 30 # dtype: int64 # change index in the group by series and examine groups &lt;- does not make sense grouped = sf.groupby(sf.reset_index(drop=True)) for name, group in grouped: print(f&quot;Group: {name}&quot;) print(group) # Group: 20.0 # 2 10 # dtype: int64 # Group: 30.0 # 3 10 # 4 20 # 5 30 # dtype: int64 </code></pre>
<python><pandas><group-by>
2025-07-29 09:24:01
2
597
karpan
79,718,256
14,098,258
How to automatically convert objective expression from pyomo model to use as fitness_func for pygad?
<p>I use <a href="https://pyomo.readthedocs.io/en/stable/index.html" rel="nofollow noreferrer">pyomo</a> to formulate optimization problems and tyically use solvers like for example IPOPT. Now I would like to apply metaheuristic solvers to those optimization problems. I have already heared of frameworks like <a href="https://pymoo.org/" rel="nofollow noreferrer">pymoo</a> or <a href="https://pygad.readthedocs.io/en/latest/" rel="nofollow noreferrer">pygad</a> that could be used for such purposes.</p> <p>However, one always has to formulate the objective and constraints by hand. I would like to find an automated way to fetch the objective or constraints formulations from the pyomo ConcreteModel instance and use it to formulate the fitness_func for pygad.</p> <p>For my MWE I just create a simple model with a simple objective (and try to pass this objective as a fitness_func to pygad GA), no constraints yet.</p> <p><strong>Minimum (not as desired) Working Example:</strong></p> <pre><code>from pyomo import environ as pe import pygad model = pe.ConcreteModel() model.set1 = pe.Set( initialize=[i for i in range(10)] ) model.param1 = pe.Param( model.set1, initialize={ i: (i+1)**2 for i in model.set1 }, mutable=True ) model.var = pe.Var( model.set1, domain=pe.PositiveReals, bounds=(2, 5) ) def obj_rule(model): return sum( model.param1[i] * model.var[i] for i in model.set1 ) model.obj = pe.Objective( rule=obj_rule, sense=pe.maximize ) solver = pe.SolverFactory('IPOPT') solver.solve(model) solution_ipopt = model.obj.expr() #=== pyomo model done, now comes everything pygad related ====================== n_vars = len(model.set1) def fitness_func(instance, solution, solution_idx): # desired: something automated like # return model.obj.expr # (but that would be too easy and doesn't work) return sum( (i+1)**2 * solution[i] for i in range(1, n_vars) ) # yes, that works but is coded manually, I need something automated ga_instance = pygad.GA( num_generations = 100, num_parents_mating = 10, fitness_func = fitness_func, sol_per_pop = 100, num_genes = n_vars, gene_type = float, gene_space = [{'low': 2., 'high': 5.} for _ in range(n_vars)], parent_selection_type = &quot;tournament&quot;, crossover_type = &quot;single_point&quot;, mutation_type = &quot;random&quot;, mutation_percent_genes = 50, parallel_processing= [&quot;thread&quot;, 8] ) ga_instance.run() solution, solution_fitness, solution_idx = ga_instance.best_solution() </code></pre> <p>In <code>fitness_func</code> I had to manually type <code>sum((i+1)**2 * solution[i] for i in range(1, n_vars))</code> to build the fitness function according to the <code>model.obj</code> objective function. And I need an automated way (for example I found out, that <code>model.obj.expr</code> (without paranthesis!) gives me something that looks promising, but of course that would be too easy and does not work).</p> <p>Is there a way to automatically convert the objective function of a pyomo optimization model to an expression suitable to pass to the fitness_func for pygad.GA?</p> <p><strong>EDIT:</strong></p> <p>Thank you @jsiirola for <a href="https://stackoverflow.com/questions/79718256/how-to-automatically-convert-objective-expression-from-pyomo-model-to-use-as-fit/79718774#79718774">the answer</a> (I already tried to explain in my comment to this answer why I cannot accept the answer (however, i still appreciate it!)).</p> <p>The main point is: Whatever we pass to <code>pygad.GA</code> as <code>fitness_func</code> must return a &quot;naked&quot;, non-evaluated expression, so that pygad can plug in values for the variables on its own (at least that is how I understand pygad).</p> <p>So for example when I run <code>model.obj.expr</code> after executing the code from my MWE, I get this: <code>param1[0]*var[0] + param1[1]*var[1] + param1[2]*var[2] + param1[3]*var[3] + param1[4]*var[4] + param1[5]*var[5] + param1[6]*var[6] + param1[7]*var[7] + param1[8]*var[8] + param1[9]*var[9]</code>. This already looks like a &quot;naked&quot; non-evaluated expression I was talking about. Then the next step would be to automatically turn this expression into something like what I get from running <code>sum((i+1)**2 * solution[i] for i in range(1, n_vars))</code> in the code from my MWE. And I have no clue how to do that.</p>
<python><optimization><pyomo><pygad>
2025-07-29 07:29:30
1
383
Andre
79,718,144
326,439
How to identify issue with my PIP installation
<p>Is there a reason I see a &quot;zsh: permission denied: pip&quot; when I try using pip from command line? I am still able to use pip3 fine.</p> <p>I am sort of doing a patch by creating an alias for pip in my .zshrc file.</p> <p>Question: I really want to understand what's going on with my python installations.</p> <pre><code>zsh: permission denied: pip ayusman (~):$ which pip pip not found ayusman (~):$ pip3 Usage: pip3 &lt;command&gt; [options] ... ayusman (~):$ which pip3 /Library/Frameworks/Python.framework/Versions/3.8/bin/pip3 ayusman (~):$ </code></pre> <p>System info:</p> <pre><code>Apple M3 Max macOS Sequoia Version 15.5 </code></pre> <p>Thank you.</p>
<python><pip>
2025-07-29 05:17:00
1
8,772
Ayusman
79,718,085
4,107,349
Pandas dt accessor or groupby function returning decimal numbers instead of integers in index labels where some series values NA
<p>We're trying to group up date counts by month and index values are returning as decimals instead of integers when series contain any number of NaTs / na values.</p> <p>Simplified reproducible example:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd df = pd.DataFrame({&quot;A&quot;: [&quot;2025-07-24&quot;,&quot;2025-07-24&quot;,&quot;2025-07-24&quot;], &quot;B&quot;: [&quot;2025-07-24&quot;,&quot;2025-07-24&quot;,pd.NA]}, dtype=&quot;datetime64[ns]&quot;) df['values'] = [1,2,3] a_df = df.groupby([df[&quot;A&quot;].dt.month])[&quot;values&quot;].count() b_df = df.groupby([df[&quot;B&quot;].dt.month])[&quot;values&quot;].count() print(a_df) print(b_df) </code></pre> <p>So the index value for <code>a_df</code> is &quot;7&quot; and the index value for <code>b_df</code> is &quot;7.0&quot;, with an undesired &quot;.0&quot; suffix.</p> <p>What's causing this and what's a good way to make values return as integers, or at least return consistently?</p>
<python><pandas><dataframe><group-by>
2025-07-29 03:54:56
2
1,148
Chris Dixon
79,717,980
16,389,095
Python Supabase: Unable to update user password
<p>I'm trying to reset the password of an already registered user, which is currently not authenticated. I wrote a simple Python script which is based on:</p> <ul> <li>defining a supabase client object</li> <li>sending a reset link to the inserted email</li> <li>verifying the token from the received link</li> <li>updating the user password</li> </ul> <p>Here is the code:</p> <pre><code>import os from supabase import create_client, Client from dotenv import load_dotenv load_dotenv() # method to return the supabase client object from the .env file # the object will be used to carry out authentication requests def get_supabase_object() -&gt; Client: url: str = os.environ[&quot;SUPABASE_URL&quot;] key: str = os.environ[&quot;SUPABASE_KEY&quot;] supabase: Client = create_client(url, key) return supabase def send_reset_link(supabase: Client): try: email = input(&quot;Please insert your email\n&quot;) resp = supabase.auth.reset_password_for_email( email, ) print(&quot;RESP=&quot;) print(resp) print(&quot;\nIf your email is already registered, you will receive a password reset email! Please check your inbox.\n&quot;) except Exception as e: print(&quot;Failed to send reset email: &quot;, str(e), &quot;\n&quot;) def update_password(supabase: Client): try: link = input(&quot;Please paste the link you received via email\n&quot;) email = input(&quot;Please insert your email\n&quot;) password = input(&quot;Please insert your new password\n&quot;) # Extract token from the link token = link.split(&quot;token=&quot;)[1].split(&quot;&amp;type&quot;)[0] print(&quot;TOKEN = &quot;,token) if not token: raise ValueError(&quot;Invalid link. No token found.&quot;) # Use the token to sign in temporarily (session for recovery) resp_1 = supabase.auth.verify_otp({ &quot;email&quot;: email, &quot;type&quot;: &quot;recovery&quot;, &quot;token&quot;: token, }) print(&quot;RESP_1=&quot;) print(resp_1) print(&quot;\n&quot;) # Now update the password resp_2 = supabase.auth.update_user({ &quot;password&quot;: password }) print(&quot;RESP_2=&quot;) print(resp_2) print(&quot;\n&quot;) print(&quot;Password updated successfully\n&quot;) except Exception as e: print(&quot;Failed to update password: &quot;, str(e), &quot;\n&quot;) supabase: Client = get_supabase_object() print(supabase) print(&quot;\n\n&quot;) send_reset_link(supabase) update_password(supabase) </code></pre> <p>Once I run the code, I receive a reset link in this format:</p> <p>&quot;https://ABC.supabase.co/auth/v1/verify?token=XYZ&amp;type=recovery&amp;redirect_to=http://localhost:3000&quot;</p> <p>Using print() logging, I verified that the token was correctly extracted. However, after having inserted the new password, I get this exception:</p> <pre><code>Please insert your new password ****** (censored) TOKEN = ***************************************************** (censored) Failed to update password: Token has expired or is invalid </code></pre> <p>What I am missing?</p>
<python><supabase><supabase-py>
2025-07-28 23:36:41
1
421
eljamba
79,717,971
18,920,490
Python asyncio: execute an asyncronous function while the main thread is blocked
<p>I'm coding a python project split into two parts:</p> <ol> <li>a <a href="https://doc.qt.io/qtforpython-6/" rel="nofollow noreferrer">PySide6</a> GUI interface that needs to run on the main thread synchronously, and consequently block it,</li> <li>a <a href="https://playwright.dev/python/" rel="nofollow noreferrer">playwright</a> virtual web browser (asynchronous),</li> </ol> <p>and I have a problem with the order of execution of asynchronous function.</p> <p>I came up with this code:</p> <pre class="lang-py prettyprint-override"><code>async def run_app(): global page_manager, gui_manager pw: Playwright | None = None try: # Those three following are quick and do not cause any significant time loss pw = await async_playwright().start() page_manager = PageManager(pw) gui_manager = GUIManager() # The `page_manager.init(settings.BROWSER)` should be starting about here gui_manager.init() # This is a prerequisit for the next line page_url, save_path = gui_manager.run_sync() # This blocks the main thread await page_manager.init(settings.BROWSER) # This function should be executed during the GUI await page_manager.run(page_url) # This is ran after the end of `gui_manager.run_sync()` except Exception as e: logger.critical(f&quot;An error occurred (exiting) : {str(e)}&quot;) raise finally: gui_manager.close() await page_manager.close() if pw: await pw.stop() if __name__ == &quot;__main__&quot;: asyncio.run(run_app()) </code></pre> <p>Currently, the program starts with the GUI and <strong>when the user did its selection</strong>, the playwright instance is initialized then processed.</p> <p>However, I what the initialization of the playwright instance to occur <strong>while the GUI is running</strong>, so that no time is lost when the user did its selection. Initializing it before the GUI would not work either.</p> <p>I tried <code>asyncio.to_thread</code>, I tried tinkering the event loops which I don't understand fully, I tried create intermediate function but none of those work, sometimes is raises an complex low-level error after the GUI exists and sometimes it simply initialized the <code>page_manager</code> after.</p> <p>If possible, I would like to not mix the <code>threading</code> library and the <code>asyncio</code> library.</p>
<python><multithreading><asynchronous><pyside6><playwright-python>
2025-07-28 23:16:47
1
304
Sisyffe
79,717,821
27,596,369
PIL getcolors returning None for some particular images
<p>I am getting weird behaviour when using PIL, here is my code:</p> <pre><code>main_img = Image.open(image_url) all_colors = main_img.getcolors(maxcolors=100000) </code></pre> <p>when I use this image:</p> <p><a href="https://i.sstatic.net/82kJPkbT.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/82kJPkbT.jpg" alt="enter image description here" /></a></p> <p>everything works perfectly, but when I use this:</p> <p><a href="https://i.sstatic.net/Uv6TWJED.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Uv6TWJED.jpg" alt="enter image description here" /></a></p> <p>PIL literally returns <code>None</code>. I am very confused.</p> <p>Note: The second image is absolutely massive, unlike the first.</p>
<python><python-imaging-library>
2025-07-28 19:52:23
1
1,512
Aadvik
79,717,750
2,453,904
Multiprocessing pool with stop flag
<p>I am trying to implement a stop flag that gracefully prevents new parallel jobs from being started. Specifically, I am running a large number of simulations that each takes a few to many hours; in order not to lose intermediate results, I would like the ability to tell my code to finish the ones that are currently running but not start any new ones. After lots of trial and error, including with <code>imap</code>, <code>apply_async</code>, etc, I came back to <code>imap_unordered</code> and a stop-aware generator.</p> <pre><code>CM_STOP_FLAG = 'tmp/cm_stop_flag' ... def stop_aware_generator(params): for param in params: if os.path.exists(CM_STOP_FLAG): print(&quot;Stop flag detected. Stopping generator.&quot;) break yield param def run_sims(n_procs): ... with Pool( processes=n_procs, initializer=_init_worker, initargs=(lock_a, lock_b) ) as pool: for _ in pool.imap_unordered( cmcompute, stop_aware_generator(param_list), chunksize=1 ): pass </code></pre> <p>My understanding is that this should at most pre-load <code>n_procs * chunksize</code> jobs, but while the generator clearly detects the flag being set (as evidenced by on-screen output), it keeps submitting jobs regardless.</p> <p>What is the correct way of implementing a graceful exit/stop flag? Thanks.</p>
<python><python-multiprocessing>
2025-07-28 18:31:28
0
468
barceloco
79,717,729
10,305,444
pyinstaller not adding matplotlib error, not linking qt6 deps warning
<p>I have a Python application. And I want to distribute as <code>.exe</code> file. It's quite small application. Here are all my deps:</p> <pre><code>PyQt6==6.9.1 PyQt6-Qt6==6.9.1 PyQt6_sip==13.10.2 qt-material==2.17 pyinstaller==6.14.2 </code></pre> <p>Here is my <code>.spec</code> file for py-installer:</p> <pre><code># -*- mode: python ; coding: utf-8 -*- a = Analysis( ['DSS.py'], pathex=[], binaries=[], datas=[], hiddenimports=['matplotlib', 'matplotlib.backends.backend_tkagg', 'matplotlib.backends.backend_pdf'], hookspath=[], hooksconfig={}, runtime_hooks=[], excludes=['PyQt5'], noarchive=False, optimize=0, ) pyz = PYZ(a.pure) exe = EXE( pyz, a.scripts, a.binaries, a.datas, [], name='DNA_Sequence_Similarities', debug=False, bootloader_ignore_signals=False, strip=False, upx=True, upx_exclude=[], runtime_tmpdir=None, console=False, disable_windowed_traceback=False, argv_emulation=False, target_arch=None, codesign_identity=None, entitlements_file=None, icon=['icon.ico'], ) # Collect all necessary files for matplotlib and qt_material coll = COLLECT( exe, a.binaries, a.zipfiles, a.datas, strip=False, upx=True, upx_exclude=[], name='DNA_Sequence_Similarities', ) </code></pre> <p>And here is the command how I am building it: <code>pyinstaller DSS.py --name DNA_Sequence_Similarities --onefile --windowed --icon=icon.ico --hidden-import=matplotlib --hidden-import=matplotlib.backends.backend_tkagg --hidden-import=matplotlib.backends.backend_pdf --collect-all matplotlib --collect-all qt_material --collect-all PyQt6</code></p> <p>But when I run it I get <code>matplotlib</code> is not there. And it's quite expected that those PC will never have these deps, we need to put everything together and link them properly.</p> <p>Here is a snap:</p> <p><a href="https://i.sstatic.net/MBQ7AsFp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MBQ7AsFp.png" alt="pyinstaller throwing error saying matplotlib not found" /></a></p> <p>Here is complete build log:</p> <pre><code>Run pyinstaller DSS.py --name DNA_Sequence_Similarities --onefile --windowed --icon=icon.ico --hidden-import=matplotlib --hidden-import=matplotlib.backends.backend_tkagg --hidden-import=matplotlib.backends.backend_pdf --collect-all matplotlib --collect-all qt_material --collect-all PyQt6 485 INFO: PyInstaller: 6.14.2, contrib hooks: 2025.8 485 INFO: Python: 3.10.11 485 INFO: Platform: Windows-10-10.0.20348-SP0 485 INFO: Python environment: C:\hostedtoolcache\windows\Python\3.10.11\x64 500 INFO: wrote D:\a\DSS\DSS\DNA_Sequence_Similarities.spec 501 WARNING: collect_data_files - skipping data collection for module 'matplotlib' as it is not a package. 501 WARNING: collect_dynamic_libs - skipping library collection for module 'matplotlib' as it is not a package. 156 WARNING: qt_material must be imported after PySide or PyQt! 2236 INFO: Module search paths (PYTHONPATH): ['C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\Scripts\\pyinstaller.exe', 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\python310.zip', 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\DLLs', 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib', 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64', 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages', 'D:\\a\\DSS\\DSS'] 140 WARNING: qt_material must be imported after PySide or PyQt! 2518 INFO: Appending 'binaries' from .spec 2534 INFO: Appending 'datas' from .spec 2815 INFO: checking Analysis 2815 INFO: Building Analysis because Analysis-00.toc is non existent 2815 INFO: Running Analysis Analysis-00.toc 2815 INFO: Target bytecode optimization level: 0 2815 INFO: Initializing module dependency graph... 2815 INFO: Initializing module graph hook caches... 2831 INFO: Analyzing modules for base_library.zip ... 3550 INFO: Processing standard module hook 'hook-heapq.py' from 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyInstaller\\hooks' 3644 INFO: Processing standard module hook 'hook-encodings.py' from 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyInstaller\\hooks' 5634 INFO: Processing standard module hook 'hook-pickle.py' from 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyInstaller\\hooks' 6884 INFO: Caching module dependency graph... 6916 INFO: Looking for Python shared library... 6947 INFO: Using Python shared library: C:\hostedtoolcache\windows\Python\3.10.11\x64\python310.dll 6947 INFO: Analyzing D:\a\DSS\DSS\DSS.py 6947 INFO: Processing standard module hook 'hook-PyQt6.py' from 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyInstaller\\hooks' 7525 INFO: Processing standard module hook 'hook-PyQt6.QtWidgets.py' from 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyInstaller\\hooks' 8136 INFO: Processing standard module hook 'hook-PyQt6.QtCore.py' from 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyInstaller\\hooks' 8700 INFO: Processing standard module hook 'hook-PyQt6.QtGui.py' from 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyInstaller\\hooks' 9154 INFO: Processing standard module hook 'hook-qt_material.py' from 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\qt_material' 9154 WARNING: qt_material must be imported after PySide or PyQt! 9201 INFO: Processing standard module hook 'hook-platform.py' from 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyInstaller\\hooks' 9201 INFO: Processing standard module hook 'hook-xml.py' from 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyInstaller\\hooks' 9201 INFO: Processing standard module hook 'hook-xml.dom.domreg.py' from 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyInstaller\\hooks' 9624 INFO: Processing standard module hook 'hook-PyQt6.uic.py' from 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyInstaller\\hooks' 9811 INFO: Processing standard module hook 'hook-xml.etree.cElementTree.py' from 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyInstaller\\hooks' 9921 INFO: Processing standard module hook 'hook-multiprocessing.util.py' from 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyInstaller\\hooks' 10234 INFO: Processing standard module hook 'hook-jinja2.py' from 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\_pyinstaller_hooks_contrib\\stdhooks' 10234 INFO: Processing pre-safe-import-module hook 'hook-typing_extensions.py' from 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyInstaller\\hooks\\pre_safe_import_module' 10234 INFO: SetuptoolsInfo: initializing cached setuptools info... 11860 INFO: Analyzing hidden import 'matplotlib.backends.backend_tkagg' 11860 ERROR: Hidden import 'matplotlib.backends.backend_tkagg' not found 11860 INFO: Analyzing hidden import 'matplotlib.backends.backend_pdf' 11860 ERROR: Hidden import 'matplotlib.backends.backend_pdf' not found 11860 INFO: Analyzing hidden import 'qt_material.hook-qt_material' 11860 INFO: Analyzing hidden import 'qt_material.resources.logo' 11860 INFO: Analyzing hidden import 'qt_material.resources.source' 11860 INFO: Analyzing hidden import 'PyQt6.QAxContainer' 11860 INFO: Processing standard module hook 'hook-PyQt6.QAxContainer.py' from 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyInstaller\\hooks' 11923 INFO: Analyzing hidden import 'PyQt6.QtBluetooth' 11969 INFO: Processing standard module hook 'hook-PyQt6.QtBluetooth.py' from 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyInstaller\\hooks' 12025 INFO: Analyzing hidden import 'PyQt6.QtDBus' 12032 INFO: Processing standard module hook 'hook-PyQt6.QtDBus.py' from 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyInstaller\\hooks' 12095 INFO: Analyzing hidden import 'PyQt6.QtDesigner' 12126 INFO: Processing standard module hook 'hook-PyQt6.QtDesigner.py' from 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyInstaller\\hooks' 12204 WARNING: QtLibraryInfo(PyQt6): could not find translations with base name 'designer'! These translations will not be collected. 12204 INFO: Analyzing hidden import 'PyQt6.QtHelp' 12220 INFO: Processing standard module hook 'hook-PyQt6.QtHelp.py' from 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyInstaller\\hooks' 12282 INFO: Analyzing hidden import 'PyQt6.QtMultimedia' 12360 INFO: Processing standard module hook 'hook-PyQt6.QtMultimedia.py' from 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyInstaller\\hooks' 12658 INFO: Processing standard module hook 'hook-PyQt6.QtNetwork.py' from 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyInstaller\\hooks' 13111 INFO: Analyzing hidden import 'PyQt6.QtMultimediaWidgets' 13127 INFO: Processing standard module hook 'hook-PyQt6.QtMultimediaWidgets.py' from 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyInstaller\\hooks' 13189 INFO: Analyzing hidden import 'PyQt6.QtNfc' 13189 INFO: Processing standard module hook 'hook-PyQt6.QtNfc.py' from 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyInstaller\\hooks' 13252 INFO: Analyzing hidden import 'PyQt6.QtOpenGL' 13439 INFO: Processing standard module hook 'hook-PyQt6.QtOpenGL.py' from 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyInstaller\\hooks' 13564 INFO: Analyzing hidden import 'PyQt6.QtOpenGLWidgets' 13564 INFO: Processing standard module hook 'hook-PyQt6.QtOpenGLWidgets.py' from 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyInstaller\\hooks' 13627 INFO: Analyzing hidden import 'PyQt6.QtPdf' 13627 INFO: Processing standard module hook 'hook-PyQt6.QtPdf.py' from 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyInstaller\\hooks' 13705 INFO: Analyzing hidden import 'PyQt6.QtPdfWidgets' 13721 INFO: Processing standard module hook 'hook-PyQt6.QtPdfWidgets.py' from 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyInstaller\\hooks' 13768 INFO: Analyzing hidden import 'PyQt6.QtPositioning' 13815 INFO: Processing standard module hook 'hook-PyQt6.QtPositioning.py' from 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyInstaller\\hooks' 13909 INFO: Analyzing hidden import 'PyQt6.QtPrintSupport' 13925 INFO: Processing standard module hook 'hook-PyQt6.QtPrintSupport.py' from 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyInstaller\\hooks' 14003 INFO: Analyzing hidden import 'PyQt6.QtQml' 14034 INFO: Processing standard module hook 'hook-PyQt6.QtQml.py' from 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyInstaller\\hooks' 14800 INFO: Analyzing hidden import 'PyQt6.QtQuick' 14863 INFO: Processing standard module hook 'hook-PyQt6.QtQuick.py' from 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyInstaller\\hooks' 14972 INFO: Analyzing hidden import 'PyQt6.QtQuick3D' 14988 INFO: Processing standard module hook 'hook-PyQt6.QtQuick3D.py' from 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyInstaller\\hooks' 15050 INFO: Analyzing hidden import 'PyQt6.QtQuickWidgets' 15066 INFO: Processing standard module hook 'hook-PyQt6.QtQuickWidgets.py' from 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyInstaller\\hooks' 15128 INFO: Analyzing hidden import 'PyQt6.QtRemoteObjects' 15128 INFO: Processing standard module hook 'hook-PyQt6.QtRemoteObjects.py' from 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyInstaller\\hooks' 15191 INFO: Analyzing hidden import 'PyQt6.QtSensors' 15207 INFO: Processing standard module hook 'hook-PyQt6.QtSensors.py' from 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyInstaller\\hooks' 15300 INFO: Analyzing hidden import 'PyQt6.QtSerialPort' 15300 INFO: Processing standard module hook 'hook-PyQt6.QtSerialPort.py' from 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyInstaller\\hooks' 15363 INFO: Analyzing hidden import 'PyQt6.QtSpatialAudio' 15378 INFO: Processing standard module hook 'hook-PyQt6.QtSpatialAudio.py' from 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyInstaller\\hooks' 15441 INFO: Analyzing hidden import 'PyQt6.QtSql' 15472 INFO: Processing standard module hook 'hook-PyQt6.QtSql.py' from 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyInstaller\\hooks' 15613 INFO: Analyzing hidden import 'PyQt6.QtStateMachine' 15628 INFO: Processing standard module hook 'hook-PyQt6.QtStateMachine.py' from 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyInstaller\\hooks' 15675 INFO: Analyzing hidden import 'PyQt6.QtSvg' 15691 INFO: Processing standard module hook 'hook-PyQt6.QtSvg.py' from 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyInstaller\\hooks' 15753 INFO: Analyzing hidden import 'PyQt6.QtSvgWidgets' 15753 INFO: Processing standard module hook 'hook-PyQt6.QtSvgWidgets.py' from 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyInstaller\\hooks' 15816 INFO: Analyzing hidden import 'PyQt6.QtTest' 15832 INFO: Processing standard module hook 'hook-PyQt6.QtTest.py' from 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyInstaller\\hooks' 15894 INFO: Analyzing hidden import 'PyQt6.QtTextToSpeech' 15894 INFO: Processing standard module hook 'hook-PyQt6.QtTextToSpeech.py' from 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyInstaller\\hooks' 16003 INFO: Analyzing hidden import 'PyQt6.QtWebChannel' 16003 INFO: Processing standard module hook 'hook-PyQt6.QtWebChannel.py' from 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyInstaller\\hooks' 16066 INFO: Analyzing hidden import 'PyQt6.QtWebSockets' 16066 INFO: Processing standard module hook 'hook-PyQt6.QtWebSockets.py' from 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyInstaller\\hooks' 16129 INFO: Analyzing hidden import 'PyQt6.QtXml' 16144 INFO: Processing standard module hook 'hook-PyQt6.QtXml.py' from 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyInstaller\\hooks' 16207 INFO: Analyzing hidden import 'PyQt6.lupdate' 16223 INFO: Analyzing hidden import 'PyQt6.lupdate.pylupdate' 16238 INFO: Analyzing hidden import 'PyQt6.uic.pyuic' 16238 INFO: Processing module hooks (post-graph stage)... 16316 INFO: Performing binary vs. data reclassification (3470 entries) 16660 INFO: Looking for ctypes DLLs 16676 INFO: Analyzing run-time hooks ... 16676 INFO: Including run-time hook 'pyi_rth_inspect.py' from 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyInstaller\\hooks\\rthooks' 16676 INFO: Including run-time hook 'pyi_rth_pkgutil.py' from 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyInstaller\\hooks\\rthooks' 16676 INFO: Including run-time hook 'pyi_rth_multiprocessing.py' from 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyInstaller\\hooks\\rthooks' 16676 INFO: Including run-time hook 'pyi_rth_pyqt6.py' from 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyInstaller\\hooks\\rthooks' 16676 INFO: Processing pre-find-module-path hook 'hook-_pyi_rth_utils.py' from 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyInstaller\\hooks\\pre_find_module_path' 16676 INFO: Processing standard module hook 'hook-_pyi_rth_utils.py' from 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyInstaller\\hooks' 16832 INFO: Creating base_library.zip... 16895 INFO: Looking for dynamic libraries 17098 INFO: Extra DLL search directories (AddDllDirectory): ['C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyQt6\\Qt6\\bin'] 17098 INFO: Extra DLL search directories (PATH): ['C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyQt6\\Qt6\\bin'] 21257 WARNING: Library not found: could not resolve 'Qt63DRender.dll', dependency of 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyQt6\\Qt6\\plugins\\geometryloaders\\defaultgeometryloader.dll'. 21257 WARNING: Library not found: could not resolve 'Qt63DCore.dll', dependency of 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyQt6\\Qt6\\plugins\\geometryloaders\\defaultgeometryloader.dll'. 21273 WARNING: Library not found: could not resolve 'Qt63DRender.dll', dependency of 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyQt6\\Qt6\\plugins\\geometryloaders\\gltfgeometryloader.dll'. 21273 WARNING: Library not found: could not resolve 'Qt63DCore.dll', dependency of 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyQt6\\Qt6\\plugins\\geometryloaders\\gltfgeometryloader.dll'. 21273 WARNING: Library not found: could not resolve 'Qt6QmlCompiler.dll', dependency of 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyQt6\\Qt6\\plugins\\qmllint\\quicklintplugin.dll'. 21273 WARNING: Library not found: could not resolve 'Qt6QmlCompiler.dll', dependency of 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyQt6\\Qt6\\plugins\\qmlls\\qmllsquickplugin.dll'. 21273 WARNING: Library not found: could not resolve 'Qt63DRender.dll', dependency of 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyQt6\\Qt6\\plugins\\renderers\\openglrenderer.dll'. 21273 WARNING: Library not found: could not resolve 'Qt63DCore.dll', dependency of 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyQt6\\Qt6\\plugins\\renderers\\openglrenderer.dll'. 21273 WARNING: Library not found: could not resolve 'Qt63DRender.dll', dependency of 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyQt6\\Qt6\\plugins\\renderers\\rhirenderer.dll'. 21273 WARNING: Library not found: could not resolve 'Qt63DCore.dll', dependency of 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyQt6\\Qt6\\plugins\\renderers\\rhirenderer.dll'. 21273 WARNING: Library not found: could not resolve 'Qt63DRender.dll', dependency of 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyQt6\\Qt6\\plugins\\sceneparsers\\assimpsceneimport.dll'. 21273 WARNING: Library not found: could not resolve 'Qt63DCore.dll', dependency of 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyQt6\\Qt6\\plugins\\sceneparsers\\assimpsceneimport.dll'. 21273 WARNING: Library not found: could not resolve 'Qt63DExtras.dll', dependency of 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyQt6\\Qt6\\plugins\\sceneparsers\\assimpsceneimport.dll'. 21273 WARNING: Library not found: could not resolve 'Qt63DAnimation.dll', dependency of 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyQt6\\Qt6\\plugins\\sceneparsers\\assimpsceneimport.dll'. 21273 WARNING: Library not found: could not resolve 'Qt63DRender.dll', dependency of 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyQt6\\Qt6\\plugins\\sceneparsers\\gltfsceneexport.dll'. 21273 WARNING: Library not found: could not resolve 'Qt63DCore.dll', dependency of 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyQt6\\Qt6\\plugins\\sceneparsers\\gltfsceneexport.dll'. 21273 WARNING: Library not found: could not resolve 'Qt63DExtras.dll', dependency of 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyQt6\\Qt6\\plugins\\sceneparsers\\gltfsceneexport.dll'. 21273 WARNING: Library not found: could not resolve 'Qt63DRender.dll', dependency of 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyQt6\\Qt6\\plugins\\sceneparsers\\gltfsceneimport.dll'. 21273 WARNING: Library not found: could not resolve 'Qt63DExtras.dll', dependency of 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyQt6\\Qt6\\plugins\\sceneparsers\\gltfsceneimport.dll'. 21273 WARNING: Library not found: could not resolve 'Qt63DCore.dll', dependency of 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyQt6\\Qt6\\plugins\\sceneparsers\\gltfsceneimport.dll'. 21273 WARNING: Library not found: could not resolve 'Qt6Scxml.dll', dependency of 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyQt6\\Qt6\\plugins\\scxmldatamodel\\qscxmlecmascriptdatamodel.dll'. 21273 WARNING: Library not found: could not resolve 'fbclient.dll', dependency of 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyQt6\\Qt6\\plugins\\sqldrivers\\qsqlibase.dll'. 21273 WARNING: Library not found: could not resolve 'MIMAPI64.dll', dependency of 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyQt6\\Qt6\\plugins\\sqldrivers\\qsqlmimer.dll'. 21273 WARNING: Library not found: could not resolve 'OCI.dll', dependency of 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyQt6\\Qt6\\plugins\\sqldrivers\\qsqloci.dll'. 21273 WARNING: Library not found: could not resolve 'Qt6WebViewQuick.dll', dependency of 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyQt6\\Qt6\\plugins\\webview\\qtwebview_webengine.dll'. 21273 WARNING: Library not found: could not resolve 'Qt6WebEngineCore.dll', dependency of 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyQt6\\Qt6\\plugins\\webview\\qtwebview_webengine.dll'. 21273 WARNING: Library not found: could not resolve 'Qt6WebView.dll', dependency of 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyQt6\\Qt6\\plugins\\webview\\qtwebview_webengine.dll'. 21273 WARNING: Library not found: could not resolve 'Qt6WebEngineQuick.dll', dependency of 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyQt6\\Qt6\\plugins\\webview\\qtwebview_webengine.dll'. 21273 WARNING: Library not found: could not resolve 'Qt6QmlCore.dll', dependency of 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyQt6\\Qt6\\qml\\QtCore\\qtqmlcoreplugin.dll'. 21273 WARNING: Library not found: could not resolve 'Qt6QmlNetwork.dll', dependency of 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyQt6\\Qt6\\qml\\QtNetwork\\qmlnetworkplugin.dll'. 21273 WARNING: Library not found: could not resolve 'Qt6QmlXmlListModel.dll', dependency of 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyQt6\\Qt6\\qml\\QtQml\\XmlListModel\\qmlxmllistmodelplugin.dll'. 21273 WARNING: Library not found: could not resolve 'Qt6Quick3DParticleEffects.dll', dependency of 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyQt6\\Qt6\\qml\\QtQuick3D\\ParticleEffects\\qtquick3dparticleeffectsplugin.dll'. 21273 WARNING: Library not found: could not resolve 'Qt6QuickControls2FluentWinUI3StyleImpl.dll', dependency of 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyQt6\\Qt6\\qml\\QtQuick\\Controls\\FluentWinUI3\\impl\\qtquickcontrols2fluentwinui3styleimplplugin.dll'. 21273 WARNING: Library not found: could not resolve 'Qt6QuickControls2FluentWinUI3StyleImpl.dll', dependency of 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyQt6\\Qt6\\qml\\QtQuick\\Controls\\FluentWinUI3\\qtquickcontrols2fluentwinui3styleplugin.dll'. 21273 WARNING: Library not found: could not resolve 'Qt6QuickControls2WindowsStyleImpl.dll', dependency of 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyQt6\\Qt6\\qml\\QtQuick\\Controls\\Windows\\impl\\qtquickcontrols2windowsstyleimplplugin.dll'. 21289 WARNING: Library not found: could not resolve 'Qt6QmlLocalStorage.dll', dependency of 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyQt6\\Qt6\\qml\\QtQuick\\LocalStorage\\qmllocalstorageplugin.dll'. 21289 WARNING: Library not found: could not resolve 'Qt63DQuickScene2D.dll', dependency of 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyQt6\\Qt6\\qml\\QtQuick\\Scene2D\\qtquickscene2dplugin.dll'. 21289 WARNING: Library not found: could not resolve 'Qt63DQuickScene3D.dll', dependency of 'C:\\hostedtoolcache\\windows\\Python\\3.10.11\\x64\\lib\\site-packages\\PyQt6\\Qt6\\qml\\QtQuick\\Scene3D\\qtquickscene3dplugin.dll'. 21414 INFO: Warnings written to D:\a\DSS\DSS\build\DNA_Sequence_Similarities\warn-DNA_Sequence_Similarities.txt 21461 INFO: Graph cross-reference written to D:\a\DSS\DSS\build\DNA_Sequence_Similarities\xref-DNA_Sequence_Similarities.html 21664 INFO: checking PYZ 21664 INFO: Building PYZ because PYZ-00.toc is non existent 21664 INFO: Building PYZ (ZlibArchive) D:\a\DSS\DSS\build\DNA_Sequence_Similarities\PYZ-00.pyz 22086 INFO: Building PYZ (ZlibArchive) D:\a\DSS\DSS\build\DNA_Sequence_Similarities\PYZ-00.pyz completed successfully. 22180 INFO: checking PKG 22180 INFO: Building PKG because PKG-00.toc is non existent 22180 INFO: Building PKG (CArchive) DNA_Sequence_Similarities.pkg 48516 INFO: Building PKG (CArchive) DNA_Sequence_Similarities.pkg completed successfully. 48610 INFO: Bootloader C:\hostedtoolcache\windows\Python\3.10.11\x64\lib\site-packages\PyInstaller\bootloader\Windows-64bit-intel\runw.exe 48610 INFO: checking EXE 48610 INFO: Building EXE because EXE-00.toc is non existent 48610 INFO: Building EXE from EXE-00.toc 48610 INFO: Copying bootloader EXE to D:\a\DSS\DSS\dist\DNA_Sequence_Similarities.exe 48610 INFO: Copying icon to EXE 48610 INFO: Copying 0 resources to EXE 48610 INFO: Embedding manifest in EXE 48610 INFO: Appending PKG archive to EXE 48672 INFO: Fixing EXE headers 49251 INFO: Building EXE from EXE-00.toc completed successfully. 49345 INFO: Build complete! The results are available in: D:\a\DSS\DSS\dist </code></pre> <p>How can I fix this? Thanks in advance.</p>
<python><windows><matplotlib><pyinstaller><qt6>
2025-07-28 18:10:12
1
4,689
Maifee Ul Asad
79,717,580
4,025,458
How to call R's stlm() from Python using rpy2, getting "missing value where TRUE/FALSE needed" error
<p>I’m using rpy2 in Python to call R's forecast::stlm() function from within a custom wrapper function defined in R. My goal is to fit a seasonal time series model (STL + ARIMA) on a univariate time series without any external regressors (xreg).</p> <p>Here is a minimal working version of my Python code:</p> <pre><code>import pandas as pd import numpy as np # For potential NaN generation in example data import rpy2.robjects as ro from rpy2.robjects.packages import STAP, importr from rpy2.robjects import pandas2ri from rpy2.robjects.conversion import localconverter from rpy2.robjects import FloatVector, IntVector # Direct import for clarity from rpy2.rinterface import NULLType fn_def = &quot;&quot;&quot; fit &lt;- function(y, FUN, ...) { arguments &lt;- list(...) names(arguments) &lt;- gsub(&quot;_&quot;, &quot;.&quot;, names(arguments)) arguments &lt;- c(list(y = y), arguments) model.fit &lt;- tryCatch({ do.call(FUN, arguments, quote = TRUE) }, error = function(err) { stop(err) }) print( summary(model.fit) ) return(model.fit) } &quot;&quot;&quot; error_wrapper = STAP(fn_def, &quot;STAP_fun&quot;) ro.r('library(forecast)') import rpy2.robjects as robjects r_ts_function = robjects.r('ts') y_ts = r_ts_function(pd.Series([1,2,4,2,4,6,3,4,7,6,4,4,7,8,5,8,9,5,6]).values, frequency=6) out =error_wrapper.fit(y= y_ts, FUN = ro.r['stlm'], method = &quot;ets&quot; ) </code></pre> <p>When I try to run the above code, I am getting below error.</p> <pre><code>RRuntimeError: Error in if (ncol(x) == 1L) { : missing value where TRUE/FALSE needed </code></pre> <p>How do I safely call stlm() from Python via rpy2 without triggering ncol(x) errors when xreg is not needed?</p> <p><em>Note: I am executing this code on the synapse notebook.</em></p> <p>Thanks in advance.</p>
<python><pandas><rpy2>
2025-07-28 15:45:40
0
765
RSK
79,717,324
12,263,674
Chart Title is in middle/between my bar graph plotted using openpyxl BarChart module
<p>I am using openpyxl (python module) to dynamically plot a bar graph with data from respective cells, but the chart title gets in between my bar graph.</p> <p>Code used to create the bar graph:</p> <pre><code>#sheet_velocity is the worksheet selected # Chart object chart_velocity = BarChart() # Data and labels chart_data = Reference(sheet_velocity, min_col=3, min_row=3, max_row=14) categories = Reference(sheet_velocity, min_col=1, min_row=3, max_row=14) chart_velocity.title = &quot;Custom Title&quot; chart_velocity.title.text.rich.paragraphs[0].pPr = ParagraphProperties(defRPr=CharacterProperties(sz=1250)) man_layout = ManualLayout(xMode=&quot;edge&quot;, yMode=&quot;edge&quot;, x=0.0, y = -0.05) #tried -ve value chart_velocity.title.layout = Layout(manualLayout=man_layout) chart_velocity.add_data(chart_data, titles_from_data=False) chart_velocity.set_categories(categories) chart_velocity.dataLabels = DataLabelList() chart_velocity.dataLabels.showVal = True # Show the value of the data point chart_velocity.dataLabels.showSerName = False chart_velocity.dataLabels.showCatName = False chart_velocity.dataLabels.showLegendKey = False chart_velocity.legend = None chart_velocity.x_axis.delete = False chart_velocity.y_axis.delete = False series = chart_velocity.series[0] series.graphicalProperties.solidFill = &quot;379e3e&quot; sheet_velocity.add_chart(chart_velocity, &quot;F3&quot;) </code></pre> <p><a href="https://i.sstatic.net/M6OkfdMp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/M6OkfdMp.png" alt="" /></a></p> <p>Things that I've tried so far (have not worked):</p> <ol> <li><p>Increasing the <code>chart.height</code> variable</p> </li> <li><p>Changing the position of chart title using <code>ManualLayout</code></p> <p>For example: <code>manual_layout = ManualLayout(xMode=&quot;edge&quot;, yMode=&quot;edge&quot;, x=0.0, y=0.0)</code></p> </li> </ol> <p>(Giving a +ve value to y would bring the title down but not move it up, tried -ve value as well --&gt; did not work.)</p>
<python><excel><bar-chart><openpyxl>
2025-07-28 12:29:27
1
535
Sagar Kulkarni
79,717,221
2,894,535
Is there a short way to write "import" and "from import" for the same module?
<p>I find myself mixing both <code>import x</code> and <code>from x import y</code> forms for the same module, depending how often a particular object is used and whether it is clear which modules it comes from.</p> <p>For example I might write the following:</p> <pre><code>import dataclasses from dataclasses import dataclass import json @dataclass class Point3D: x: int y: int z: int def to_json(self) -&gt; str: return json.dumps(dataclasses.as_dict(self)) </code></pre> <p>I want <code>json.dumps</code> and <code>dataclasses.as_dict</code> to be called with their module name to avoid ambiguity e.g. with <code>pickle.dumps</code> and <code>sympy.as_dict</code>. But <code>@dataclass</code> clearly comes from <code>dataclasses</code>, making <code>@dataclasses.dataclass</code> redundant.</p> <p>Is there a way to write:</p> <pre><code>import dataclasses from dataclasses import dataclass </code></pre> <p>more cleanly, on a single line (semicolon doesn't count)? I would imagine something like <code>from dataclasses import self, dataclass</code>.</p>
<python><python-import>
2025-07-28 10:53:29
1
3,116
Dominik Kaszewski
79,717,106
2,894,535
pdb.set_trace() fails inside test with monkeypatched open
<p>I have a test monkeypatching builtin <code>open</code> to test a class which expects to read a file. When trying to debug it, I see:</p> <pre><code>TypeError: test_unique.&lt;locals&gt;.&lt;lambda&gt;() got an unexpected keyword argument 'encoding' </code></pre> <pre class="lang-py prettyprint-override"><code>import builtins import io import pytest def count_unique_letters_in_file(path: str): with open(path) as f: return len(set(c.lower() for c in f.read() if c.isalpha())) def test_unique(monkeypatch: pytest.MonkeyPatch): text = 'Quick frog or something' monkeypatch.setattr(builtins, 'open', lambda _: io.StringIO(text)) assert count_unique_letters_in_file('blah.txt') == 15 </code></pre> <p>If I put <code>import pdb: pdb.set_trace()</code> anywhere after the <code>monkeypatch</code> line, or run <code>pytest --pdb</code>, then I get the above line. I understand it's because pdb is trying to use <code>open</code> which leads it to my lambda, but how do I work around it? Assume I cannot change the interface of tested function to accept an already open file or its contents.</p> <p>Traceback is long as it includes all the test framework stuff up to the <code>pdb.set_trace()</code> or failed <code>assert</code> running with <code>--pdb</code>, but ends at:</p> <pre><code>self = &lt;_pytest.debugging.pytestPDB._get_pdb_wrapper_class.&lt;locals&gt;.PytestPdbWrapper object at 0x123&gt; completekey = 'tab', stdin = None, stdout = None, skip = None, nosigint = False, readrc = True def __init__(completekey='tab', stdin=None, stdout=None, skip=None, nosigint=False, readrc=True): ... if readrc: try: with open(os.path.expanduser('~/.pdbrc', encoding='utf-8') as rcFile: </code></pre>
<python><debugging><pytest><monkeypatching>
2025-07-28 09:15:39
2
3,116
Dominik Kaszewski
79,716,880
19,459,262
How to change and load a png file without refreshing the app?
<p>I have a Shiny for Python app which displays an image. Whenever I regenerate the image from within the app, the entire app reloads, losing the input settings (the slider value resets). Is there a way to refresh the image itself without reloading the app, or is this impossible? I'm using the png library to generate the image.</p> <p>App code:</p> <pre><code>from shiny import reactive, render, ui from shiny.express import input, ui import imagechanger with ui.card(): ui.input_action_button(&quot;generate&quot;, &quot;Generate&quot;) ui.input_slider(&quot;slider&quot;, &quot;slider&quot;, value=1, min=1, max=100) @render.image def image(): img = {&quot;src&quot;: &quot;image.png&quot;} return img @reactive.effect @reactive.event(input.generate) def _(): imagechanger.generate_png() pass </code></pre> <p>Image generation code:</p> <pre><code>import png import random width = 100 height = 100 img = [] for y in range(height): row = [] for x in range(width): row.append([0, 0, 0]) img.append(row) def generate_png(): for i in range(10): x = random.randint(1, width-1) y = random.randint(1, height-1) img[y][x] = [255, 255, 255] output = [] for y in range(height): row = () for x in img[y]: row += tuple(x) output.append(row) with open('image.png', 'wb') as f: w = png.Writer(width, height, greyscale=False) w.write(f, output) </code></pre>
<python><py-shiny>
2025-07-28 04:12:22
0
784
Redz
79,716,705
1,018,226
Transform Ansible list of dictionaries to dictionary
<p>Many Ansible Modules return a set of objects. When these objects do not have a unique identifying attribute, the data is often returned as a list of dictionaries. For example, the built-in <em>setup</em> module returns mount points this way. (I omitted some attributes for better readability.)</p> <pre><code>&quot;ansible_mounts&quot;: [ { &quot;device&quot;: &quot;/dev/sda1&quot;, &quot;fstype&quot;: &quot;vfat&quot;, &quot;mount&quot;: &quot;/boot/efi&quot;, &quot;size_available&quot;: 202100736, &quot;size_total&quot;: 209469440 }, { &quot;device&quot;: &quot;/dev/mapper/rootvg-rootlv&quot;, &quot;fstype&quot;: &quot;xfs&quot;, &quot;mount&quot;: &quot;/&quot; }, { &quot;device&quot;: &quot;/dev/mapper/rootvg-homelv&quot;, &quot;fstype&quot;: &quot;xfs&quot;, &quot;mount&quot;: &quot;/home&quot; } ] </code></pre> <p>I understand, that the <em>device</em> or <em>mount</em> entries do not necessarily need to be unique, so a list is the only structure that can always be constructed. But in my case, they are. So I would prefer this data structure instead.</p> <pre><code>{ &quot;/dev/sda1&quot;: { &quot;mount&quot;: &quot;/boot/efi&quot;, &quot;fstype&quot;: &quot;vfat&quot;, &quot;size_available&quot;: 202100736, &quot;size_total&quot;: 209469440 }, &quot;/dev/mapper/rootvg-rootlv&quot;: { &quot;fstype&quot;: &quot;xfs&quot;, &quot;mount&quot;: &quot;/&quot; }, &quot;/dev/mapper/rootvg-homelv&quot;: { &quot;fstype&quot;: &quot;xfs&quot;, &quot;mount&quot;: &quot;/home&quot; } } </code></pre> <p>Is there a combination of built-in filters that can easily transform one into the other? A filter plugin doing this can be written in a few lines of Python, though I was unable to find one on Ansible Galaxy. So before I upload one myself I would like to know if I am overlooking an obvious built-in solution.</p>
<python><ansible><ansible-filter>
2025-07-27 21:22:52
2
2,574
XZS
79,716,674
5,495,304
Why does monkey patching a class's `__new__` not (always) work?
<p>I am trying to do some monkey patch work on an existing third-party library I am using.</p> <h2>Summary</h2> <h3>This sequence does not work:</h3> <ol> <li>Monkey patch <code>A.__new__</code> using <code>mynew</code></li> <li>Monkey patch <code>A.__new__</code> again using <code>object.__new__</code> to reset the previous</li> </ol> <h3>While the following sequence works:</h3> <ol> <li>Monkey patch <code>A.__new__</code> using <code>object.__new__</code></li> </ol> <h2>Details</h2> <p>I need to override the <code>__new__</code> method for that library. Assume <code>A</code> is a class from the third party. Everything works OK as follows:</p> <pre class="lang-py prettyprint-override"><code>class A: def __init__(self, x): self.x = x def mynew(cls, *args, **kwargs): print(&quot;Calling mynew&quot;) return object.__new__(cls) A.__new__ = mynew A(10) # Calling mynew # &lt;__main__.A object at 0x7f43806d8440&gt; </code></pre> <p>The problem is when I want to reset the class to the existing <code>__new__</code> method (which the third-party class does not implement, so it should be <code>object.__new__</code>, I run into an error:</p> <pre class="lang-py prettyprint-override"><code>A.__new__ = object.__new__ A(10) #TypeError: object.__new__() takes exactly one argument (the type to instantiate) </code></pre> <p>I somehow understand this... Something to do with the difference in object's definition which does not take any arguments, while the signature is defined to take arguments for subclassing compatibility.</p> <p>What tripped me off, however, is that if I set <code>A.__new__ = object.__new__</code>, without first overriding it using the <code>mynew</code> method, it will work ok. That is running this code block <em>only</em>, it will work</p> <pre class="lang-py prettyprint-override"><code>class A: def __init__(self, x): self.x = x A.__new__ = object.__new__ A(10) # works OK </code></pre> <p>What is going on?</p>
<python><python-3.x><metaprogramming>
2025-07-27 20:23:29
1
6,659
Gerges
79,716,478
1,592,008
Performance issue with Python, Mongo, Redis and pingpong threshold?
<p>I faced the issue that my integration tests get 3x performance boost on Ubuntu 20.04 with Kernel 5.4 but on Ubuntu 22.04+ and all Linux kernels after 5.10.135 there is not such boost.</p> <p>Integration tests is PyTest testing Python 3.12 application using MongoDB(PyMongo and Motor) and Redis. Tests, application itself, Mongo and Redis services are running only locally in Docker Compose so there is no network communication to other hosts through real physical network. PyMongo (MongoDB client) using keep alive and TCP_NODELAY by defaut:</p> <pre><code>sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1) sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, True) </code></pre> <p>I bisected Linux Kernel and found the commit which produce performance boost:</p> <p><a href="https://www.kernel.org/pub/linux/kernel/v5.x/ChangeLog-5.1" rel="nofollow noreferrer">https://www.kernel.org/pub/linux/kernel/v5.x/ChangeLog-5.1</a></p> <pre><code>commit 4a41f453bedfd5e9cd040bad509d9da49feb3e2c Author: Wei Wang &lt;weiwan@google.com&gt; Date: Fri Jan 25 10:53:20 2019 -0800 tcp: change pingpong threshold to 3 </code></pre> <p>but in <strong>5.10.135</strong> the commit was <strong>reverted</strong> by:</p> <p><a href="https://www.kernel.org/pub/linux/kernel/v5.x/ChangeLog-5.10.135" rel="nofollow noreferrer">https://www.kernel.org/pub/linux/kernel/v5.x/ChangeLog-5.10.135</a></p> <pre><code>commit 77ac046a9ad3b9ee94d02f999d30db3ac106c98c Author: Wei Wang &lt;weiwan@google.com&gt; Date: Thu Jul 21 20:44:04 2022 +0000 Revert &quot;tcp: change pingpong threshold to 3&quot; commit 4d8f24eeedc58d5f87b650ddda73c16e8ba56559 upstream. This reverts commit 4a41f453bedfd5e9cd040bad509d9da49feb3e2c. </code></pre> <p>and in 2023 introduced net.ipv4.tcp_pingpong_thresh setting which default is 1:</p> <p><a href="https://www.kernel.org/pub/linux/kernel/v6.x/ChangeLog-6.7" rel="nofollow noreferrer">https://www.kernel.org/pub/linux/kernel/v6.x/ChangeLog-6.7</a></p> <pre><code>commit 562b1fdf061bff9394ccd884456ed1173c224fdc Author: Haiyang Zhang &lt;haiyangz@microsoft.com&gt; Date: Wed Oct 11 13:30:44 2023 -0700 tcp: Set pingpong threshold via sysctl </code></pre> <p>The problem is changing net.ipv4.tcp_pingpong_thresh to 3 does not have the same performance effect like it was before 5.10.135.</p> <p>I found that the reason is exactly in 562b1fdf061bff9394ccd884456ed1173c224fdc, if revert 77ac046a9ad3b9ee94d02f999d30db3ac106c98c right before 562b1fdf061bff9394ccd884456ed1173c224fdc there is a desired 3x performance.</p> <p>So I made a small patch which returns the desired performance:</p> <pre><code>From: Oleg Neumyvakin &lt;oneumyvakin@gmail.com&gt; Date: Sun, 27 Jul 2025 20:40:47 +0700 Subject: [PATCH] Fix tcp pingpong threshold --- net/ipv4/tcp_ipv4.c | 2 +- net/ipv4/tcp_output.c | 13 ++++++++----- 2 files changed, 9 insertions(+), 6 deletions(-) diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c index f603ad9307af..fe5b1545c499 100644 --- a/net/ipv4/tcp_ipv4.c +++ b/net/ipv4/tcp_ipv4.c @@ -3288,7 +3288,7 @@ static int __net_init tcp_sk_init(struct net *net) net-&gt;ipv4.sysctl_tcp_syn_linear_timeouts = 4; net-&gt;ipv4.sysctl_tcp_shrink_window = 0; - net-&gt;ipv4.sysctl_tcp_pingpong_thresh = 1; + net-&gt;ipv4.sysctl_tcp_pingpong_thresh = 3; return 0; } diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c index 8e6ebf35ed58..912080d8d80a 100644 --- a/net/ipv4/tcp_output.c +++ b/net/ipv4/tcp_output.c @@ -167,13 +167,16 @@ static void tcp_event_data_sent(struct tcp_sock *tp, if (tcp_packets_in_flight(tp) == 0) tcp_ca_event(sk, CA_EVENT_TX_START); - tp-&gt;lsndtime = now; - - /* If it is a reply for ato after last received - * packet, increase pingpong count. + /* If this is the first data packet sent in response to the + * previous received data, + * and it is a reply for ato after last received packet, + * increase pingpong count. */ - if ((u32)(now - icsk-&gt;icsk_ack.lrcvtime) &lt; icsk-&gt;icsk_ack.ato) + if (before(tp-&gt;lsndtime, icsk-&gt;icsk_ack.lrcvtime) &amp;&amp; + (u32)(now - icsk-&gt;icsk_ack.lrcvtime) &lt; icsk-&gt;icsk_ack.ato) inet_csk_inc_pingpong_cnt(sk); + + tp-&gt;lsndtime = now; } /* Account for an ACK we sent. */ -- 2.43.0 </code></pre> <p>I'd created a syntetic test with Mongo and Redis using Python or YCSB but can't see performance difference.</p> <p>So my question, does anybody experiencing a performance issue with Python, Mongo, Redis and pingpong threshold?</p>
<python><linux><mongodb><redis><linux-kernel>
2025-07-27 14:46:30
0
10,382
Oleg Neumyvakin
79,716,359
6,293,886
singledispatchmethod for different list types (e.g. list[str] or list[int])
<p>I'm trying to overload a method with <a href="https://docs.python.org/3/library/functools.html#functools.singledispatchmethod" rel="nofollow noreferrer"><code>singledispatchmethod</code></a>. It works great for simple types like <code>int</code> and <code>str</code>, but for different types of nested lists like <code>list[int]</code> or <code>list[str]</code>, this breaks with the following error.</p> <pre class="lang-none prettyprint-override"><code>TypeError: Invalid annotation for 'value'. list[int] is not a class. </code></pre> <p>How can I handle this nested type when registering a specific-type method? Can I use an alternative object that mimics the <code>list[int]</code> and will work with <code>singledispatchmethod</code>?</p> <p>I know there is an option for using the <a href="https://pypi.org/project/multimethod/" rel="nofollow noreferrer">multimethod</a> package, but I'm looking for a <em>standard library</em> base Python solution.</p> <pre><code>from functools import singledispatchmethod class Processor: @singledispatchmethod def process(self, value): print(f&quot;Default processing for {value}&quot;) @process.register def _(self, value: int): print(f&quot;Processing an integer: {value}&quot;) @process.register def _(self, value: str): print(f&quot;Processing a string: {value}&quot;) @process.register def _(self, value: list[int]): print(f&quot;Processing a list of integers: {value}&quot;) @process.register def _(self, value: list[str]): print(f&quot;Processing a list of strings: {value}&quot;) # Example usage: p = Processor() p.process(10) # Processing an integer: 10 p.process(&quot;hello&quot;) # Processing a string: hello p.process([1, 2, 3]) # Processing a list: [1, 2, 3] p.process([&quot;a&quot;, &quot;b&quot;, &quot;c&quot;]) # Processing a list of strings: ['a', 'b', 'c'] </code></pre>
<python><overloading><single-dispatch>
2025-07-27 11:15:12
1
1,386
itamar kanter
79,716,280
17,580,381
Efficient extraction of first/only key in a dictionary
<p>Assumption is that we have a dictionary containing exactly one key/value pair. Objective is to extract the only key.</p> <p>I can think of four ways to do this (there may be more).</p> <pre><code>import timeit def func1(d): &quot;&quot;&quot; Convert to a list and return the first element &quot;&quot;&quot; return list(d)[0] def func2(d): &quot;&quot;&quot; Unpack &quot;&quot;&quot; rv, *_ = d return rv def func3(d): &quot;&quot;&quot; Classic efficient approach &quot;&quot;&quot; return next(iter(d)) def func4(d): &quot;&quot;&quot; Appears to be faster than the classic approach &quot;&quot;&quot; for key in d: return key if __name__ == &quot;__main__&quot;: d = {&quot;foo&quot;: 0} for func in (func1, func2, func3, func4): assert func1(d) == func(d) duration = timeit.timeit(lambda: func(d), number=5_000_000) print(func.__name__, f&quot;{duration=:.4f}s&quot;) </code></pre> <p><strong>Output:</strong></p> <pre><code>func1 duration=0.6322s func2 duration=0.8306s func3 duration=0.5505s func4 duration=0.5040s </code></pre> <p>I have always understood that the implementation in <em>func3()</em> is optimal - and that's what I would normally use.</p> <p>However, it appears that <em>func4()</em> is more efficient. This may be related to Python version (3.13.5 in this case).</p> <p>Why would <em>func4()</em> perform better than <em>func3()</em>? Are there optimisations in modern Python versions that would affect this?</p>
<python><performance><dictionary><optimization><micro-optimization>
2025-07-27 09:15:34
3
28,997
Ramrab
79,716,247
1,469,954
Unable to send mail from Godaddy registered email server using Python
<p>We have a Godaddy account where we have registered our domain and configured an email address for sending and receiving mails. We are trying those credentials to send mail programatically using Python. This is the code we are trying:</p> <pre><code>import smtplib from email.mime.multipart import MIMEMultipart from email.mime.text import MIMEText msg = MIMEMultipart() msg.set_unixfrom('author') msg['From'] = 'info@mydomain.com' msg['To'] = 'myfoo@gmail.com' msg['Subject'] = 'simple email in python' message = 'here is the email' msg.attach(MIMEText(message)) mailserver = smtplib.SMTP_SSL('smtpout.secureserver.net', 465) mailserver.ehlo() mailserver.login('info@mydomain.com', 'abc123') mailserver.sendmail('info@mydomain.com','myfoo@gmail.com',msg.as_string()) mailserver.quit() </code></pre> <p>However, we are getting this error consistently:</p> <pre><code>Exception has occurred: SMTPServerDisconnected Connection unexpectedly closed File &quot;/Users/nedstark/Desktop/code/foo.py&quot;, line 48, in &lt;module&gt; mailserver.login('info@mydomain.com', 'abc123') smtplib.SMTPServerDisconnected: Connection unexpectedly closed </code></pre> <p>Any idea how to fix this, or whether there is anything in Godaddy account that we need to configure?</p>
<python><python-3.x><email><smtplib>
2025-07-27 08:33:50
1
5,353
NedStarkOfWinterfell
79,716,150
20,087,266
Treesitter-based Python code folding not applied when initially loading file in Neovim
<p>Every time a Python file is loaded, no folds are automatically applied based on the fold options that have been set (see below). Instead, I have to manually press <kbd>zx</kbd> to force the folds to be applied.</p> <p>This problem is limited to Python files and the use of the Treesitter-based expression folding method. Initial and automatic folding works fine when the &quot;indent&quot; method is used, or another file type is loaded (e.g. Lua).</p> <p>How can I resolve this issue?</p> <p>I have checked that:</p> <ul> <li>The Python Treesitter parser has been installed and is functioning correctly (i.e. the output of <code>:checkhealth nvim-treesitter</code> is fine).</li> <li>No Python-specific autocommands have been interfering with the code folding settings (i.e. both <code>:autocmd FileType python</code> and <code>:autocmd BufRead,BufEnter *.py</code> do not show any autocommands).</li> <li>No other Python-specific Neovim plugins have been installed.</li> </ul> <p><strong>Fold Options</strong></p> <pre class="lang-lua prettyprint-override"><code>vim.opt.foldcolumn = &quot;1&quot; -- Show fold indication in column vim.opt.foldlevel = 1 -- The default fold level (higher means more folds open by default) vim.opt.foldtext = &quot;&quot; -- Do not show fold information (just show existing text on folded line) vim.opt.foldmethod = &quot;expr&quot; -- Use custom expression for folding vim.opt.foldexpr = &quot;nvim_treesitter#foldexpr()&quot; -- Set expression for folding to treesitter </code></pre>
<python><lua><editor><neovim><code-folding>
2025-07-27 04:58:37
1
4,086
Kyle F. Hartzenberg
79,716,103
6,312,979
Can save Plotly image to Cloudinary?
<p>Is it possible to write a Plotly chart directly into Cloudanary without first saving to a file?</p> <p>So Plotly is something like this.</p> <pre><code>fig.write_image(&quot;fig1.png&quot;) </code></pre> <p>I want to upload the fig straight into Cloudinary. Which is something like this.</p> <pre><code>cloudinary.uploader.upload(&quot;dog.mp4&quot;, asset_folder=&quot;pets&quot;, public_id=&quot;my_dog&quot;, overwrite=True) </code></pre> <p>can the fig.write_image() go inside the uploader?</p> <p>Possible? Not possible. Trying to skip writing it to disk due to permission issues.</p> <p>Thanks.</p>
<python><plotly><cloudinary>
2025-07-27 02:12:07
1
2,181
diogenes
79,716,069
9,208,758
How to resolve the type error in pomegranate?
<p>I am trying to set up a dummy code for the pomegranate (below), but for some reason I am getting an error when I try to run the ConditionalCategorical(). How do I resolve it?</p> <pre><code>from pomegranate.distributions import ConditionalCategorical import numpy as np prob_table = [ [1.0, 0.0], # parent = 0 -&gt; child = 0 [0.0, 1.0], # parent = 1 -&gt; child = 1 ] probs_array = np.array(prob_table, dtype=np.float32) # ✅ Use NumPy n_categories = [2, 2] # One binary parent, one binary child cc = ConditionalCategorical(probs_array, n_categories=n_categories) print(&quot;Created ConditionalCategorical:&quot;, cc) </code></pre> <pre><code>--------------------------------------------------------------------------- TypeError Traceback (most recent call last) Cell In[297], line 11 8 probs_array = np.array(prob_table, dtype=np.float32) # ✅ Use NumPy 9 n_categories = [2, 2] # One binary parent, one binary child ---&gt; 11 cc = ConditionalCategorical(probs_array, n_categories=n_categories) 12 print(&quot;Created ConditionalCategorical:&quot;, cc) File ~/anaconda3/lib/python3.10/site-packages/pomegranate/distributions/conditional_categorical.py:107, in ConditionalCategorical.__init__(self, probs, n_categories, pseudocount, inertia, frozen, check_data) 105 self.d = len(self.probs) if self._initialized else None 106 self.n_parents = len(self.probs[0].shape) if self._initialized else None --&gt; 107 self._reset_cache() File ~/anaconda3/lib/python3.10/site-packages/pomegranate/distributions/conditional_categorical.py:157, in ConditionalCategorical._reset_cache(self) 154 _xw_sum = [] 156 for n_categories in self.n_categories: --&gt; 157 _w_sum.append(torch.zeros(*n_categories[:-1], 158 dtype=self.probs[0].dtype, device=self.device)) 159 _xw_sum.append(torch.zeros(*n_categories, 160 dtype=self.probs[0].dtype, device=self.device)) 162 self._w_sum = BufferList(_w_sum) TypeError: zeros() received an invalid combination of arguments - got (device=torch.device, dtype=torch.dtype, ), but expected one of: * (tuple of ints size, *, tuple of names names, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad) * (tuple of ints size, *, Tensor out, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad) </code></pre>
<python><reinforcement-learning><pomegranate>
2025-07-27 00:28:48
1
589
Isaac A
79,716,059
27,596,369
Getting most common colors in an image using Pillow Image.get_colors() gives wrong result
<p>I have code which finds the top 10 colors from an image, so for that I used PIL to find all of the colors and then processed it and found the 10 most common ones, but when I try it online, results are completely different. Does PIL process colors differently or what exactly what is going on here?</p> <p>Here is my code:</p> <pre><code>img = Image.open(image_url) all_colors = img.getcolors(maxcolors=100000) # List with every color in image top_10_colors = all_colors[:10] # Set first 10 colors as starting list for color in all_colors[9:]: amount = color[0] top_indexes = [] for top_color in top_10_colors: top_index = top_color[0] top_indexes.append(top_index) if amount &gt; min(top_indexes): index = top_indexes.index(min(top_indexes)) top_10_colors[index] = color sorted_top_colors = sorted(top_10_colors, key=lambda x: x[0], reverse=True) print(sorted_top_colors) print(top_10_colors), </code></pre> <p>The website result (in greatest to least order):</p> <pre><code>rgb(49,49,49) rgb(211,211,211) rgb(67,67,67) rgb(71,71,71) rgb(61,61,61) rgb(79,79,79) rgb(166,82,19) rgb(70,70,70) rgb(65,65,65) rgb(29,28,28) </code></pre> <p>My result (greatest to least):</p> <pre><code>[(35, 35, 35), (41, 41, 41), (36, 36, 36), (34, 34, 34), (44, 44, 44), (33, 33, 33), (31, 31, 31), (42, 42, 42), (50, 50, 50), (32, 32, 32)] </code></pre> <p>Here is the image:</p> <p><a href="https://i.sstatic.net/fze4TJU6.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fze4TJU6.jpg" alt="enter image description here" /></a></p> <p>Here is the link of the website I used which gave me the result: <a href="https://www.imgonline.com.ua/eng/get-dominant-colors.php" rel="nofollow noreferrer">https://www.imgonline.com.ua/eng/get-dominant-colors.php</a></p>
<python><python-imaging-library>
2025-07-26 23:53:34
1
1,512
Aadvik
79,715,882
1,232,660
Exclusive group of arguments with optional values
<p>The following code</p> <pre><code>from argparse import ArgumentParser parser = ArgumentParser() parser.add_argument('--foo', nargs='?') parsed_args = parser.parse_args() </code></pre> <p>creates an optional argument with an <a href="https://docs.python.org/3/library/argparse.html#nargs" rel="nofollow noreferrer">optional value</a>. It means that <code>--foo 123</code> stores <code>123</code>, <code>--foo</code> stores <code>None</code>, leaving the argument altogether stores <code>None</code> too.</p> <p>I now create a <a href="https://docs.python.org/3/library/argparse.html#mutual-exclusion" rel="nofollow noreferrer">mutually exclusive group</a> of two such arguments:</p> <pre><code>from argparse import ArgumentParser parser = ArgumentParser() group = parser.add_mutually_exclusive_group() group.add_argument('--foo', nargs='?') group.add_argument('--bar', nargs='?') parsed_args = parser.parse_args() </code></pre> <p>As expected, <code>--foo 123 --bar 456</code> results in error <code>argument --bar: not allowed with argument --foo</code>. But running <code>--foo --bar</code> finishes without errors. Tested with Python 3.13. Is it a bug in Python or just my misunderstanding of the concept?</p>
<python><argparse>
2025-07-26 17:56:39
2
3,558
Jeyekomon
79,715,814
6,843,153
Altair fails to render chart out of pandas dataframe on Streamlit
<p>I have the following code in <em>Python</em>, using <em>Streamlit</em> as framework:</p> <pre><code>try: native_data = data.copy() # Create Altair chart with native data st.write(f&quot;Debug: Native data type: {type(native_data)}&quot;) chart = alt.Chart(native_data).mark_bar().encode( x=alt.X(x_col, type='nominal'), y=alt.Y(y_col, type='quantitative') ).properties( title=self._label, width=400, height=300 ) except Exception as chart_creation_error: st.write(f&quot;Debug: Chart creation error: {chart_creation_error}&quot;) return &quot;table_only&quot; try: container.altair_chart(chart, use_container_width=True) return &quot;success&quot; except Exception as render_error: st.write(f&quot;Debug: Chart rendering error: {render_error}&quot;) st.write(f&quot;Debug: Render error type: {type(render_error)}&quot;) import traceback st.write(&quot;Debug: Render error traceback:&quot;) st.code(traceback.format_exc()) raise render_error </code></pre> <p>And this is the debug output:</p> <pre><code>Debug: Native data type: &lt;class 'pandas.core.frame.DataFrame'&gt; Debug: Chart rendering error: You passed a &lt;class 'narwhals.stable.v1.DataFrame'&gt; to is_pandas_dataframe. </code></pre> <p>I can see that the dataframe used to generate the <em>Altair</em> object is a <em>Pandas</em> dataframe but, nevertheless, <em>Altair</em> seems to confuse it with a <em>Narwhals</em> dataframe.</p> <p>I haven't found a single reference to a similar case, so any help will be much appreciated.</p>
<python><pandas><streamlit><altair>
2025-07-26 16:01:09
1
5,505
HuLu ViCa