Dataset schema (column: type, observed range in this export):
QuestionId: int64, 74.8M to 79.8M
UserId: int64, 56 to 29.4M
QuestionTitle: string, 15 to 150 characters
QuestionBody: string, 40 to 40.3k characters
Tags: string, 8 to 101 characters
CreationDate: date string, 2022-12-10 09:42:47 to 2025-11-01 19:08:18
AnswerCount: int64, 0 to 44
UserExpertiseLevel: int64, 301 to 888k
UserDisplayName: string, 3 to 30 characters (nullable)
79,658,805
587,587
Why does pythonnet not load the correct dependency when loading DLLs via reflection?
<p>I'm writing a Python application that uses clr 3.0.3.</p> <p>I'm trying to load a DLL using <a href="https://learn.microsoft.com/en-us/dotnet/api/system.reflection.assembly.loadfrom?view=net-9.0" rel="nofollow noreferrer">System.Reflection.Assembly.LoadFrom</a>. My code is basically doing the following:</p> <pre><code>def load_from(dll_paths): for dll_path in dll_paths: print(dll_path) assembly = System.Reflection.Assembly.LoadFrom(dll_path) print(&quot;&quot;) for referenced in assembly.GetReferencedAssemblies(): print(f&quot; {referenced}&quot;) print(&quot;&quot;) for loaded_assembly in System.AppDomain.CurrentDomain.GetAssemblies(): print(f&quot; {loaded_assembly}&quot;) print(&quot;&quot;) </code></pre> <p>The result is something like this: <a href="https://i.sstatic.net/0kIngOPC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0kIngOPC.png" alt="LoadFrom result" /></a>:</p> <p>The assemblies I'm trying to load have the following directory structure:</p> <p><a href="https://i.sstatic.net/9zjrXXKN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9zjrXXKN.png" alt="Directory structure" /></a></p> <p><strong>Assembly.DLL Version 1</strong> and <strong>Assembly.DLL Version 2</strong> have the same name and public key but differing version numbers. When loading <strong>Assembly.DLL version 1</strong> it properly locates and loads <strong>Dependency Version 1</strong> but when I load <strong>Assembly.DLL version 2</strong> it never loads <strong>Dependency Version 2</strong></p> <p>My questions is why Dependency 2 is never loaded? If the CLR can't locate Dependency 2 when running LoadFrom, why doesn't it throw an exception? When running code in Assembly 2 it attempts to reference stuff in Dependency 1 which produces an exception at runtime since they are incompatible.</p>
<python><.net><dll><reflection><python.net>
2025-06-09 11:32:16
1
492
Anton Lahti
79,658,376
2,624,876
How do I install a python-based CLI from a venv globally?
<p>I'm developing a CLI application with Python and I can install it with <code>pip install -e .</code> for an editable install, however this installs it only in the virtual environment. I can't access the CLI from things like Ubuntu's custom keyboard shortcuts, or Thunar's custom keyboard shortcuts.</p> <p>I can install everything globally instead, but this causes package version conflicts.</p> <p>How can I expose my CLI globally, in an editable way, while managing dependencies in an isolated environment?</p> <p>Do I need some wrapper script, manually added to the <code>PATH</code>, or is there a <code>pip</code> option for this?</p>
<python><pip><python-venv>
2025-06-09 05:05:34
1
4,298
1j01
79,658,364
311,567
asyncio run coroutine from a synchronous function
<p>How can I call <code>task2</code> from <code>func</code> without declaring <code>func</code> async and awaiting it? My first thought was to create a thread and use <code>run_coroutine_threadsafe</code> but it deadlocks. Same as not using a thread. Do I have to start a new loop?</p> <pre class="lang-py prettyprint-override"><code>import asyncio from threading import Thread async def task2(): print(&quot;starting task2...&quot;) await asyncio.sleep(1) print(&quot;finished task2.&quot;) return &quot;done&quot; def func(loop=None): print(&quot;running func...&quot;) if not loop: loop = asyncio.get_running_loop() assert loop future = asyncio.run_coroutine_threadsafe( task2(), loop) result = future.result() print(f&quot;{result=}&quot;) print(&quot;done func...&quot;) async def task1(): print(&quot;starting task1...&quot;) await asyncio.sleep(1) # func() loop = asyncio.get_running_loop() t = Thread(target=func, args=(loop,)) t.start() t.join() print(&quot;finished task1.&quot;) if __name__ == '__main__': asyncio.run(task1()) </code></pre>
<python><python-asyncio>
2025-06-09 04:52:22
2
2,703
dashesy
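The deadlock in this snippet comes from `t.join()`: it blocks the event loop's own thread, and `run_coroutine_threadsafe` needs that same loop to actually run `task2`, so the future can never complete (calling `func` directly blocks the loop in the same way). A minimal sketch of one way out that keeps `func` synchronous: hand the blocking call to a worker thread with `asyncio.to_thread` (Python 3.9+), so the loop stays free to execute the coroutine.

```python
import asyncio

async def task2():
    print("starting task2...")
    await asyncio.sleep(1)
    print("finished task2.")
    return "done"

def func(loop):
    # Runs in a worker thread; the event loop in the main thread stays free,
    # so the coroutine scheduled here can actually execute.
    future = asyncio.run_coroutine_threadsafe(task2(), loop)
    print(f"result={future.result()}")

async def task1():
    loop = asyncio.get_running_loop()
    # to_thread replaces Thread(...).start()/join(): it awaits the worker
    # without blocking the loop's thread.
    await asyncio.to_thread(func, loop)

if __name__ == "__main__":
    asyncio.run(task1())
```

If there is no running loop at all (plain synchronous code), `asyncio.run(task2())` inside `func` is the simpler route, at the cost of spinning up a fresh event loop.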
79,658,347
11,793,491
How to form a projection while filtering some nodes in Neo4j
<p>I have the following query in Neo4j using the Python library graphdatascience:</p> <pre class="lang-py prettyprint-override"><code>G, result = gds.graph.project( &quot;communities&quot;, # Graph name &quot;__Entity__&quot;, # Node projection { &quot;_ALL_&quot;: { &quot;type&quot;: &quot;*&quot;, &quot;orientation&quot;: &quot;UNDIRECTED&quot;, &quot;properties&quot;: {&quot;weight&quot;: {&quot;property&quot;: &quot;*&quot;, &quot;aggregation&quot;: &quot;COUNT&quot;}}, } }, ) </code></pre> <p>And I try to run this for a filtered selection of nodes. So I used this:</p> <pre class="lang-py prettyprint-override"><code>G, result = gds.graph.cypher.project( &quot;&quot;&quot; MATCH (n:__Entity__)--(m:__Entity__) WHERE n.node_embedding IS NOT NULL AND m.node_embedding IS NOT NULL AND n.docID = $doc AND m.docID = $doc RETURN gds.graph.project($graph_name, n, m, { sourceNodeLabels: $label, targetNodeLabels: $label, sourceNodeProperties: n { .weight }, targetNodeProperties: m { .weight } }, { undirectedRelationshipTypes: ['*'] }) &quot;&quot;&quot;, database='neo4j', graph_name='communities', label='__Entity__', doc='Gasification' ) </code></pre> <p>But I'm stuck on how to relate the <code>&quot;properties&quot;: {&quot;weight&quot;: {&quot;property&quot;: &quot;*&quot;, &quot;aggregation&quot;: &quot;COUNT&quot;}}</code> in the code. Any pointer will be highly appreciated.</p>
<python><neo4j><cypher>
2025-06-09 04:18:34
0
2,304
Alexis
79,658,301
270,043
PySpark job with aggregations failing
<p>I have a 106M dataframe with nested columns, that is, I have a few columns where the values are <code>{[1,2,3,4,5], &lt;timestamp&gt;, 1, 2 ,3}</code>. I'm trying to add a few more columns using <code>when</code>, then do an aggregation on the dataframe.</p> <pre><code>df_1 = df.withColumn(&quot;newCol&quot;, when((col(&quot;colA.field&quot;) == 1) &amp; (col(&quot;colB.field1&quot;) == 2), col(&quot;colA.field1&quot;)).otherwise(&quot;colB.field1&quot;)))\ ... df_agg = df_1.groupby(&quot;colA&quot;,&quot;colB&quot;,&quot;colC&quot;,&quot;colD&quot;).agg(count(*).alias(&quot;numRecords&quot;), sort_array(collected_set(&quot;colE&quot;)).alias(&quot;colE&quot;), sum(&quot;colE&quot;).alias(&quot;colE&quot;), sum(&quot;colF&quot;).alias(&quot;colF&quot;), sum(&quot;colG&quot;).alias(&quot;colG&quot;), sum(&quot;colH&quot;).alias(&quot;colH&quot;), min(&quot;colI&quot;).alias(&quot;colI&quot;), max(&quot;colJ&quot;).alias(&quot;colJ&quot;), countDistinct(&quot;colK&quot;).alias(&quot;colK&quot;), first(&quot;colL&quot;).alias(&quot;colL&quot;), first(&quot;colM&quot;).alias(&quot;colM&quot;), first(&quot;colN&quot;).alias(&quot;colN&quot;), first(&quot;colO&quot;).alias(&quot;colO&quot;), sort_array(collected_set(&quot;colP&quot;)).alias(&quot;colP&quot;), sort_array(collected_set(&quot;colQ&quot;)).alias(&quot;colQ&quot;), max(&quot;colR&quot;).alias(&quot;colR&quot;), max(&quot;colS&quot;).alias(&quot;colS&quot;) ) </code></pre> <p><code>colL</code>, <code>colM</code>, <code>colN</code>, <code>colO</code> are strings, and they are the same value for each group, so I simply want to get the first (or any) instance.</p> <p>I've tried the following (separately) to simply do a <code>df_agg.show(10, truncate=False)</code>, but have always gotten the error below.</p> <pre><code>Job aborted due to stage failure: ShuffleMapStage 4 (showString at NativeMethodAccessorImpl.java:0) has failed the maximum allowable number of times: 4. Most recent failure reason: org.apache.spark.shuffle.FetchFailedException at org.apache.spark.errors.SparkCoreErrors$.fetchFailedError(SparkCoreErrors.scala:312) at org.apache.spark.storage.ShuffleBlockFetcherIterator.throwFetchFailedException at org.apache.spark.ShuffleBlockFetcherIterator.next at org.apache.spark.ShuffleBlockFetcherIterator.next at org.apache.spark.util.CompletionIterator.next at scala.collection.Iterator$$anon$11.nextCur at scala.collection.Iterator$$anon$11.nextNext at scala.collection.Iterator$$anon$10.nextNext ... at org.apache.spark.execution.aggregate.ObjectHashAggregateExec.$anonfun$doExecute$1 at org.apache.spark.execution.aggregate.ObjectHashAggregateExec.$anonfun$doExecute$1$adapted ... Caused by: org.apache.spark.ExecutorDeadException: The relative remote executor(Id: 253), which maintains the block data to fetch is dead. at org.apache.spark.network.netty.NettyBlockTransferService$$anon$2.createAndStart(NettyBlockTransferService.scala:136) ... </code></pre> <ol> <li>Run the PySpark code on the original dataframe as it is with the nested columns (parquet files at 11.4GB)</li> <li>Reduce the number of records from 106M to 99.8M using <code>df.sample(0.943)</code>. 
I had previously successfully ran the same code on another similar dataframe with 99.9M rows (but without the nested columns, and the parquet files were at 5.8GB).</li> <li>Flatten the schema and only selected relevant columns <code>df_flat = df.select(col(&quot;colA.field1&quot;).alias(&quot;colA_field1&quot;), ...)</code></li> <li>Write the above dataframe <code>df_flat</code> into parquet files, start a new Spark session, read the parquet files back into <code>df_flat</code> before the step of adding more columns and aggregation. (parquet files at 4.4GB)</li> </ol> <p>I also ran <code>df.groupBy(&quot;colA&quot;, &quot;colB&quot;, &quot;colC&quot;, &quot;colD&quot;).count().orderBy(desc(&quot;count&quot;)).show()</code>, and the largest group has 68K records, followed by 37K, 27K, 21K, 13 groups with &gt;10K records, and many more with ~9K or less. I think my data is skewed?</p> <p>I'm not sure if the error I'm seeing is because of lack of Spark resources, or the way I'm doing the aggregation.</p> <p>How can I optimize the code and get it to run successfully? This is only a small-scale test, and I would eventually need to run this on a much larger dataframe, in the order of billions of rows.</p>
<python><dataframe><apache-spark><pyspark><aggregate-functions>
2025-06-09 02:30:44
0
15,187
Rayne
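The `FetchFailedException` / `ExecutorDeadException` pair usually means executors are dying under shuffle or memory pressure rather than that the aggregation logic is wrong (as a side note, the snippet's `count(*)` and `collected_set` would need to be `F.count("*")` and `F.collect_set` to run at all). A hedged sketch of the usual first knobs to try, assuming Spark 3.x where adaptive query execution is available and reusing the `spark` session and `df_1` from the question; the partition counts are illustrative, not tuned values.

```python
from pyspark.sql import functions as F

# Illustrative session tuning; exact values depend on the cluster.
spark.conf.set("spark.sql.adaptive.enabled", "true")           # let AQE coalesce/split shuffle partitions
spark.conf.set("spark.sql.adaptive.skewJoin.enabled", "true")   # mitigate skewed shuffle partitions
spark.conf.set("spark.sql.shuffle.partitions", "800")           # more, smaller shuffle blocks

df_agg = (
    df_1.repartition(800, "colA", "colB", "colC", "colD")       # spread the large groups over more tasks
        .groupBy("colA", "colB", "colC", "colD")
        .agg(
            F.count("*").alias("numRecords"),
            F.approx_count_distinct("colK").alias("colK"),      # cheaper than exact countDistinct on big shuffles
            # ... remaining aggregate expressions from the question ...
        )
)
df_agg.show(10, truncate=False)
```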
79,658,282
3,286,489
VS Code and terminal give different results when detecting MCP tools
<p>I have this code</p> <pre><code>import os import sys from dotenv import load_dotenv load_dotenv() from praisonaiagents import Agent, MCP agent = Agent( instructions=&quot;You are a Precise Assistant.&quot;, llm=&quot;ollama/llama3.2&quot;, tools=MCP(&quot;python server.py&quot;), ) agent.start(&quot;What is Appleโ€™s stock price?&quot;) agent.start(&quot;What is Melbourne's Weather?&quot;) agent.start(&quot;What is the special of 3 and 4?&quot;) </code></pre> <p>Where the server is</p> <pre><code>from mcp.server.fastmcp import FastMCP import asyncio mcp = FastMCP(&quot;DemoServer&quot;) @mcp.tool() async def get_weather(city: str) -&gt; str: &quot;&quot;&quot;Get the current weather for a given city using wttr.in.&quot;&quot;&quot; import httpx url = f&quot;https://wttr.in/{city}?format=3&quot; headers = { &quot;User-Agent&quot;: ( &quot;Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) &quot; &quot;AppleWebKit/537.36 (KHTML, like Gecko) &quot; &quot;Chrome/114.0.0.0 Safari/537.36&quot; ) } async with httpx.AsyncClient(headers=headers) as client: response = await client.get(url) return response.text.strip() @mcp.tool() async def get_stock_price(symbol: str) -&gt; str: &quot;&quot;&quot;Get the current stock price using yfinance.&quot;&quot;&quot; import yfinance as yf stock = yf.Ticker(symbol) data = stock.info return f&quot;{symbol.upper()} current price: ${data['regularMarketPrice']}&quot; @mcp.tool() def get_special_number(a: int, b: int) -&gt; int: &quot;&quot;&quot; Provide a special number of the two numbers. Args: a (int): The first number b (int): The second number Returns: int: The secret number of the two numbers &quot;&quot;&quot; return int(a) + int(b) + 1000 if __name__ == &quot;__main__&quot;: mcp.run(transport=&quot;stdio&quot;) </code></pre> <p>When I run on terminal python environment, all good. 
It can detect the tools function</p> <pre><code>โ•ญโ”€ Agent Info โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฎ โ”‚ โ”‚ โ”‚ ๐Ÿ‘ค Agent: Agent โ”‚ โ”‚ Role: Assistant โ”‚ โ”‚ Tools: get_weather, get_stock_price, get_special_number โ”‚ โ”‚ โ”‚ โ•ฐโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฏ </code></pre> <p>However when I run it on VS Code environment, no tools are detected</p> <pre><code>[11:52:51] WARNING [11:52:51] mcp.py:398 WARNING No MCP tools available to convert to OpenAI mcp.py:398 format INFO [11:52:51] llm.py:296 INFO Getting response from ollama/llama3.2 llm.py:296 โ•ญโ”€ Agent Info โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฎ โ”‚ โ”‚ โ”‚ ๐Ÿ‘ค Agent: Agent โ”‚ โ”‚ Role: Assistant โ”‚ โ”‚ โ”‚ โ•ฐโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฏ </code></pre> <p>I check the .env all works. The <code>server.py</code> is detected by both. What cause the different?</p>
<python><visual-studio-code>
2025-06-09 01:55:37
1
61,245
Elye
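When the same script behaves differently in a terminal and inside VS Code, the usual suspects are a different working directory and a different Python interpreter, so `"python server.py"` may resolve to an interpreter that lacks `mcp`/`httpx`/`yfinance`, or to a `server.py` that is not where the agent expects it. A minimal hedged sketch that removes both ambiguities, assuming `MCP` accepts a command string as shown in the question:

```python
import os
import sys

from praisonaiagents import Agent, MCP

# Build the MCP command from the interpreter running this script and an
# absolute path to server.py, so the result does not depend on VS Code's
# working directory or its selected interpreter.
server_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), "server.py")

agent = Agent(
    instructions="You are a Precise Assistant.",
    llm="ollama/llama3.2",
    tools=MCP(f"{sys.executable} {server_path}"),
)
```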
79,658,157
1,826,893
Efficient algorithms to find the position of a vector in a 2D array
<p>I have an interesting problem but cannot find the relevant literature, so need algorithem suggestions or just the correct way to phrase the problem so I can find the literature.</p> <p>I'm given a vector of floats of varying length coming from a 2D array. I know the orientation (i.e. which axis I'm looking in). I need to find the start index of the vector. Essentially, I'm cross-correlating a short vector with a long vector.</p> <p>I have implemented a brute force approach but naturally it grows as <code>O(n^2)</code> where <code>n</code> is the dimension of the 2D array.</p> <p><strong>Q1) What is an efficient algorithem to tackel this problem?</strong></p> <p><strong>Q2) How is this type of problem called (so I can find relevant papers)?</strong></p> <p>There is measurement error so it will never be an exact match.</p> <p>Here the brute force approach, where I look for the minimum of the norm of two vectors:</p> <pre class="lang-py prettyprint-override"><code>import time import numpy as np def get_distance(a: np.ndarray, b: np.ndarray) -&gt; float: &quot;&quot;&quot; Calculate the distance between two vectors. &quot;&quot;&quot; return np.linalg.norm(a - b) def find_min_distance_subvector(array: np.ndarray, x: np.ndarray) -&gt; tuple[int, float]: leng = len(x) min_index = 0 min_distance = np.inf # Assuming we know the orientation of the vector for ii in range(0, len(array) - leng): # Extract a sub-vector of the same length as x, starting from index ii sub_vec = array[ii:ii + leng] dist = get_distance(sub_vec, x) if dist &lt; min_distance: min_distance = dist min_index = ii return min_index, min_distance def main(): leng = 100 size = 2000 # Create the search map arr = np.random.random((size, size)) # Pick a random sub-vector from the map index = np.random.randint(0, size - leng) x = arr[index:index + leng] start_time = time.time() min_index, min_metric = find_min_distance_subvector(arr, x) end_time = time.time() print(f&quot;Minimum distance: {min_metric} at index {min_index}, correct index: {index}, &quot; f&quot;time taken: {end_time - start_time:.4f} seconds&quot;) if __name__ == '__main__': main() </code></pre> <p>Thank you for your help</p>
<python><algorithm><search><graph-traversal>
2025-06-08 20:41:01
1
1,559
Edgar H
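This is essentially 1-D template matching (sliding-window distance, cross-correlation), which is the phrase that finds the literature. The brute force computes ||a_i - x||^2 at every offset i; expanding the square gives ||a_i||^2 + ||x||^2 - 2 a_i . x, where the sliding dot products are a correlation and the sliding norms come from a cumulative sum of squares, so one row costs O(n m) with `np.correlate` (or O(n log n) with an FFT) instead of a fresh norm per offset. A minimal NumPy sketch for a single row; the 2D case just repeats it per row or column.

```python
import numpy as np

def find_min_distance_subvector_fast(row: np.ndarray, x: np.ndarray) -> tuple[int, float]:
    """Return (index, distance) of the length-len(x) window of `row` closest to `x`."""
    m = len(x)
    # Sliding dot products row[i:i+m] . x for every offset i (cross-correlation).
    dots = np.correlate(row, x, mode="valid")
    # Sliding squared norms of the windows via a cumulative sum of squares.
    csum = np.concatenate(([0.0], np.cumsum(row * row)))
    win_sq = csum[m:] - csum[:-m]
    # ||row_i - x||^2 = ||row_i||^2 + ||x||^2 - 2 * row_i . x
    dist_sq = win_sq + np.dot(x, x) - 2.0 * dots
    i = int(np.argmin(dist_sq))
    return i, float(np.sqrt(max(dist_sq[i], 0.0)))  # clamp tiny negative rounding errors

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    row = rng.random(2000)
    start = 700
    x = row[start:start + 100]
    print(find_min_distance_subvector_fast(row, x), "expected index:", start)
```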
79,658,104
17,086,233
How to type hint `flax.linen.Module.apply`'s output correctly?
<p>As of writing, this code does not pass the PyRight type checker:</p> <pre class="lang-py prettyprint-override"><code>import jax import jax.numpy as jnp import jax.typing as jt import flax.linen as nn class MLP(nn.Module): @nn.compact def __call__(self, inputs: jt.ArrayLike): inputs = jnp.array(inputs) return nn.Dense(4)(inputs) if __name__ == &quot;__main__&quot;: inputs = jnp.ones((2, 4)) mlp = MLP() prng_key = jax.random.key(7) outputs: jax.Array = mlp.apply(mlp.init(prng_key, inputs), inputs) print(f&quot;type of outputs is {type(outputs)}&quot;) </code></pre> <p>Its output is: <code>type of outputs is &lt;class 'jaxlib.xla_extension.ArrayImpl'&gt;</code>.</p> <p>On like <code>outputs: jax.Array = mlp.apply(mlp.init(prng_key, inputs), inputs)</code>, Pylance shows this error:</p> <pre class="lang-py prettyprint-override"><code>Type &quot;Any | tuple[Any, FrozenVariableDict | dict[str, Any]]&quot; is not assignable to declared type &quot;Array&quot; Type &quot;Any | tuple[Any, FrozenVariableDict | dict[str, Any]]&quot; is not assignable to type &quot;Array&quot; &quot;tuple[Any, FrozenVariableDict | dict[str, Any]]&quot; is not assignable to &quot;Array&quot; Pylance reportAssignmentType </code></pre> <p>Any other operations that treat <code>outputs</code> like a JAX array succeeds at runtime but also fails to type check.</p> <p>Yet the output of that function is an array. This code type checks correctly:</p> <pre class="lang-py prettyprint-override"><code>import jax import jax.numpy as jnp x: jax.Array = jnp.zeros((2, 2)) print(f&quot;x's type is {type(x)}&quot;) </code></pre> <p>and its output is: <code>x's type is &lt;class 'jaxlib.xla_extension.ArrayImpl'&gt;</code>, the same type.</p> <p>How should I use this module to get the output array in a way that pyright can type check correctly?</p>
<python><python-typing><jax><pyright><flax>
2025-06-08 18:48:59
1
431
Arno
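The union in the Pylance error reflects `apply`'s signature: when `mutable=` collections are requested it returns `(output, variables)`, so the annotation is a union and Pyright will not narrow it on its own. A minimal sketch of two local narrowings, reusing `mlp`, `prng_key` and `inputs` from the question; this only changes what the type checker sees, not runtime behaviour.

```python
from typing import cast

import jax

# mlp, prng_key and inputs as defined in the question above.

# Option 1: explicit cast, stating what we know about this particular call.
outputs = cast(jax.Array, mlp.apply(mlp.init(prng_key, inputs), inputs))

# Option 2: runtime narrowing that Pyright also understands.
result = mlp.apply(mlp.init(prng_key, inputs), inputs)
assert isinstance(result, jax.Array)
outputs2: jax.Array = result
```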
79,658,062
2,437,508
How to add conflicts into an index using pygit2
<p>I want to create a <code>pygit2.Index</code> instance out of thin air and populate it with conflicts. I see that in <code>libgit2</code> there is a <a href="https://libgit2.org/docs/reference/main/index/git_index_conflict_add.html" rel="nofollow noreferrer"><code>git_index_conflict_add</code></a> function available, but it does not seem to be mapped in <code>pygit2</code>.</p>
<python><libgit2><pygit2>
2025-06-08 17:55:36
0
31,374
eftshift0
79,658,047
3,333,319
Pandas API behaviour when CopyOnWrite is enabled
<p>I am new to <code>pandas</code> and I am trying to catch up on its API design. What I am most interested in is to get a good rule of thumb to predict wheter calling a method on a dataframe will return a new copy of it (that I must assign to a variable) or will modify it inplace.</p> <p>The documentation mentions everywhere that <a href="https://pandas.pydata.org/docs/dev/user_guide/copy_on_write.html" rel="nofollow noreferrer">Copy-On-Write</a> will be the future standard, therefore I have enabled it setting <code>pd.options.mode.copy_on_write = True</code> and I am only interested in its behaviour when copy on write is active.</p> <p>Here is an example of the transformations I need to apply to a data set loaded from an Excel sheet. Although the snippet below seems to do what I need, I always have to reassign to the variable <code>df</code> the modified dataframe returned by each method.</p> <pre class="lang-py prettyprint-override"><code>df = pd.read_excel(&quot;my_excel_file.xls&quot;, sheet_name=&quot;my_sheet&quot;, usecols=&quot;A:N&quot;) # load dataframe from excel sheet df = df.dropna(how='all') # remove empty rows df = df.iloc[:-1,:] # remove last row df.columns.array[0] = &quot;Resource&quot; # change name of the first column df = df.astype({&quot;Resource&quot;: int}) # change column type df.columns = df.columns.str.replace('Avg of ', '').str.replace('Others', 'others') # alter column names df = df.set_index(&quot;Resource&quot;) # use 'Resource' column as index df = df.sort_index(axis=0) # sort df by index value df = df / 100 #ย divide each entry by 100 df = df.round(4) # round to 4 decimals df = df.reindex(columns=sorted(df)) # order columns in ascending alphabetical order </code></pre> <p>What is the recommended way to carry out the operations in the snippet above? Is it correct to assume that each method that modifies the dataframe is not applied inplace and returns a new dataframe object that I need to assign to a variable? More generally, is reassigning the variable <code>df</code> after each step the recommended way to use pandas API?</p>
<python><pandas><dataframe>
2025-06-08 17:29:29
1
973
Sirion
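Under Copy-on-Write the rule of thumb is simple: methods do not mutate their caller, they return new objects (the remaining `inplace=True` escape hatches are discouraged in the CoW documentation), so reassigning after every step is correct, and method chaining is the usual way to keep that readable. A sketch of the same pipeline written as one chain, assuming the same Excel file and column conventions as above; `.pipe` is used where a step needs to inspect the intermediate frame (first-column rename, alphabetical column order).

```python
import pandas as pd

pd.options.mode.copy_on_write = True

df = (
    pd.read_excel("my_excel_file.xls", sheet_name="my_sheet", usecols="A:N")
      .dropna(how="all")                                              # remove empty rows
      .iloc[:-1]                                                      # remove last row
      .pipe(lambda d: d.rename(columns={d.columns[0]: "Resource"}))   # rename first column
      .astype({"Resource": int})
      .rename(columns=lambda c: c.replace("Avg of ", "").replace("Others", "others"))
      .set_index("Resource")
      .sort_index()
      .div(100)                                                       # divide each entry by 100
      .round(4)
      .pipe(lambda d: d.reindex(columns=sorted(d.columns)))           # alphabetical column order
)
```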
79,657,990
799,813
Why is my Shiny Express app having trouble controlling the barchart plot?
<p>Im having trouble setting the y axis to start at zero for the following shiny express python script. Instead it starts at 4.1 . the set_ylim is having no affect.</p> <pre><code>from shiny.express import input, render, ui import matplotlib.pyplot as plt import numpy as np data = { &quot;Maturity&quot;: ['1Y', '2Y', '3Y', '4Y', '5Y', '6Y', '7Y', '8Y'], &quot;Yield&quot;: [4.1, 4.3, 4.5, 4.7, 4.8, 4.9, 5.0, 5.1] } data=np.array([data[&quot;Maturity&quot;], data[&quot;Yield&quot;]]) #df = pd.DataFrame(data) print(data[1]) def create_line_plot(): x = data[0] y = data[1] fig, ax = plt.subplots() #fig.title(&quot;Yield Curve&quot;) ax.bar(x, y) ax.set_xlabel(&quot;Maturity&quot;) ax.yaxis.set_label_text(&quot;Yield (%)&quot;) ax.set_ylim(bottom=0) # Ensures the y-axis starts at 0 # Ensures the y-axis starts at 0 return fig @render.plot def my_plot(): return create_line_plot() </code></pre>
<python><matplotlib><py-shiny>
2025-06-08 16:23:29
1
1,053
Tommie Jones
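The likely reason `set_ylim(bottom=0)` appears to do nothing here is that `np.array([data["Maturity"], data["Yield"]])` mixes strings and floats, so the yields are converted to strings and matplotlib treats the y axis as categorical (which is also why it "starts at 4.1", the first category). Keeping the yields numeric makes the limit behave normally. A minimal sketch of just the plotting function; the `@render.plot` wiring from the question stays as-is.

```python
import matplotlib.pyplot as plt
import numpy as np

data = {
    "Maturity": ["1Y", "2Y", "3Y", "4Y", "5Y", "6Y", "7Y", "8Y"],
    "Yield": [4.1, 4.3, 4.5, 4.7, 4.8, 4.9, 5.0, 5.1],
}

def create_line_plot():
    x = data["Maturity"]                          # categorical labels are fine for x
    y = np.asarray(data["Yield"], dtype=float)    # keep y numeric; do not fold it into one string array
    fig, ax = plt.subplots()
    ax.bar(x, y)
    ax.set_xlabel("Maturity")
    ax.set_ylabel("Yield (%)")
    ax.set_ylim(bottom=0)                         # now takes effect because the y axis is numeric
    return fig
```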
79,657,735
1,379,184
Kafka producing and consuming from Python does not work; creating topics does work (Kafka in Docker)
<p>Trying to get Kafka running in Docker on a Macbook.</p> <ul> <li>Starting Kafka works.</li> <li>Creating a Topic from inside the Docker Container works (with the provided Bash script)</li> <li>Creating a Topic from a Python script on the Host works. (See Python Script below)</li> </ul> <p><strong>But what does not work</strong>:</p> <ul> <li>Producing and Consuming messages from a Python script on the Host. Just nothing happens, and nothing to see in the Docker Logs</li> </ul> <h3>Libraries used</h3> <p>The following Python libraries are used in the Python Scripts:</p> <ul> <li>kafka-python-ng==2.2.3</li> <li>pybind11==2.13.6</li> </ul> <h3>Docker Compose file</h3> <p>The docker-compose.yml is taken from Apache/Kafka on Docker Hub. Only change is the volumes which is mapped to the Host, and the 9092 port is mapped to the Host port 9092.</p> <pre class="lang-yaml prettyprint-override"><code>services: broker: image: apache/kafka:latest container_name: kafka volumes: - ./data:/tmp/kraft-combined-logs:rw ports: - '9092:9092' environment: KAFKA_NODE_ID: 1 KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: 'CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT' KAFKA_ADVERTISED_LISTENERS: 'PLAINTEXT_HOST://localhost:9092,PLAINTEXT://broker:19092' KAFKA_PROCESS_ROLES: 'broker,controller' KAFKA_CONTROLLER_QUORUM_VOTERS: '1@broker:29093' KAFKA_LISTENERS: 'CONTROLLER://:29093,PLAINTEXT_HOST://:9092,PLAINTEXT://:19092' KAFKA_INTER_BROKER_LISTENER_NAME: 'PLAINTEXT' KAFKA_CONTROLLER_LISTENER_NAMES: 'CONTROLLER' CLUSTER_ID: 'sdfgvkdmpritgnvsyr56' KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1 KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0 KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1 KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1 KAFKA_SHARE_COORDINATOR_STATE_TOPIC_REPLICATION_FACTOR: 1 KAFKA_SHARE_COORDINATOR_STATE_TOPIC_MIN_ISR: 1 KAFKA_LOG_DIRS: '/tmp/kraft-combined-logs' </code></pre> <p>Kafka does start and runs without any errors.</p> <h3>Creating a Topic</h3> <pre class="lang-py prettyprint-override"><code>from kafka import KafkaAdminClient from kafka.admin import NewTopic, ConfigResource, ConfigResourceType from kafka.errors import TopicAlreadyExistsError if __name__ == '__main__': admin_client = KafkaAdminClient(bootstrap_servers='localhost:9092') # Create a Topic topic_list = [] new_topic = NewTopic(name=&quot;bankbranch&quot;, num_partitions=2, replication_factor=1) topic_list.append(new_topic) try: admin_client.create_topics(new_topics=topic_list) except TopicAlreadyExistsError: print('*** Topic already exists: bankbranch') </code></pre> <h3>See the topics</h3> <p>Inside the Docker Container</p> <pre class="lang-bash prettyprint-override"><code>9f30328dd248:/opt/kafka/bin$ ./kafka-topics.sh --bootstrap-server localhost:9092 --create --topic test-topic Created topic test-topic. 
9f30328dd248:/opt/kafka/bin$ ./kafka-topics.sh --bootstrap-server localhost:9092 --list bankbranch test-topic </code></pre> <p>So, the create-topic script works.</p> <h3>kafka-producer.py</h3> <pre class="lang-py prettyprint-override"><code>from kafka import KafkaProducer import json if __name__ == '__main__': producer = KafkaProducer( bootstrap_servers='localhost:9092', value_serializer=lambda v: json.dumps(v).encode('utf-8') ) producer.send(&quot;bankbranch&quot;, {'atmid': 1, 'transid': 100}) producer.send(&quot;bankbranch&quot;, {'atmid': 2, 'transid': 101}) </code></pre> <h3>kafka-consumer.py</h3> <pre class="lang-py prettyprint-override"><code>from kafka import KafkaConsumer if __name__ == '__main__': consumer = KafkaConsumer('bankbranch', bootstrap_servers='localhost:9092', auto_offset_reset='latest', ) for msg in consumer: print(msg.value.decode(&quot;utf-8&quot;)) </code></pre> <h3>Testing Producer en Consumer</h3> <p>I start both on the Host and no response. No errors, just nothing.</p> <p>Producer returns to the prompt, and Consumer just keeps waiting.</p>
<python><docker><apache-kafka>
2025-06-08 10:09:30
1
2,748
BertC
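Two things commonly produce exactly this silent behaviour with kafka-python: `KafkaProducer.send()` is asynchronous, so the script can exit before anything is transmitted unless `flush()`/`close()` is called, and a consumer started with `auto_offset_reset='latest'` and no committed offsets only sees messages produced after it has subscribed. A minimal hedged sketch of both adjustments, using the same kafka-python API as in the question:

```python
import json
from kafka import KafkaProducer, KafkaConsumer

# Producer: force delivery before the script exits.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("bankbranch", {"atmid": 1, "transid": 100})
producer.send("bankbranch", {"atmid": 2, "transid": 101})
producer.flush()    # block until the batched messages are actually sent
producer.close()

# Consumer: read from the beginning of the topic instead of only new messages.
consumer = KafkaConsumer(
    "bankbranch",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    consumer_timeout_ms=10000,   # stop iterating after 10 s of silence
)
for msg in consumer:
    print(msg.value.decode("utf-8"))
```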
79,657,561
16,399,497
Python authlib: how to handle multiple web servers
<p>I'm developing a web-server based on FastAPI, to be integrated into a container to be deployed on Kubernetes with replicas. Authentification is performed by authlib and Keycloak.</p> <p>The server which redirects (via the <code>/login</code>) the user to the identity provider may not be the same as the one which runs the <code>/auth</code> route.</p> <p>I may be able to build the redirection URL using the IP address of the server running the <code>/login</code> request, but this does not sound right for me, especially for the keycloak allowed redirect URLs which would be static IPs.</p> <p>Here is a code snippet I wrote:</p> <pre class="lang-py prettyprint-override"><code>oauth = OAuth() oauth.register( name=&quot;keycloak&quot;, client_id=settings.keycloak.client_id, client_secret=settings.keycloak.client_secret.get_secret_value(), server_metadata_url=settings.keycloak.server_metadata_url, # The group scope does not exist by default. This must be added. client_kwargs={&quot;scope&quot;: &quot;openid email profile group&quot;}, # The state is a value which is used as a kind of CSRF check. # When the /login redirects to keycloak, it includes this value. # Later, when keycloak redirects to /auth, it also includes the state value. # If the values mismatche, a CSRF error is raised. # Configuring it is required to perform replication. state=&quot;toto&quot;, ) oauth_client: StarletteOAuth2App = oauth.keycloak def get_user_name(userinfo: dict): &quot;&quot;&quot;Get the user name from user information dict.&quot;&quot;&quot; return userinfo.get(&quot;name&quot;, None) or userinfo.get(&quot;preferred_username&quot;) @router.get(&quot;/login&quot;, tags=[&quot;user&quot;]) async def login(request: Request): &quot;&quot;&quot;Redirect the user to keycloak login page.&quot;&quot;&quot; redirect_uri = request.url_for(&quot;auth&quot;) return await oauth_client.authorize_redirect(request, redirect_uri, state=&quot;toto&quot;) @router.get(&quot;/auth&quot;, tags=[&quot;user&quot;]) async def auth(request: Request): &quot;&quot;&quot;Authentificate the user by creating a session cookie.&quot;&quot;&quot; token = await oauth_client.authorize_access_token(request) userinfo = token[&quot;userinfo&quot;] # Needs json.dumps otherwise this is not correctly saved in session request.session[&quot;user&quot;] = json.dumps(userinfo) _LOGGER.info(&quot;User %s got connected&quot;, get_user_name(userinfo)) # Get the URL the user requested before login, if any. target_url = request.session.pop(&quot;target-url&quot;, &quot;/&quot;) return RedirectResponse(target_url) </code></pre> <p>I wrote a small comment to explain what I understood about the <code>state</code> value.</p> <p>The key question may be the <code>state</code> parameter value which should be the same across servers. Yet, the documentation is not clear on how to use it. Besides, <a href="https://stackoverflow.com/a/77029859/16399497">this answer</a> indicates the value should be random to prevent attacks. So, How could I do that?</p>
<python><oauth-2.0><keycloak><authlib>
2025-06-08 06:01:57
0
723
emonier
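authlib's Starlette integration keeps the OAuth `state` (and nonce) in the Starlette session, so the real requirement is not a fixed `state` value but a session that every replica can read: either cookie-based sessions signed with the same secret on all pods, or a shared server-side session store. Hard-coding `state="toto"` defeats the CSRF protection. A minimal sketch of the cookie-session approach; `SessionMiddleware` is standard Starlette, and `SESSION_SECRET` is a hypothetical environment variable fed from one shared Kubernetes Secret.

```python
import os

from fastapi import FastAPI
from starlette.middleware.sessions import SessionMiddleware

app = FastAPI()

# Every replica must use the same secret so a session cookie written by the
# pod that served /login can be validated by the pod that serves /auth.
app.add_middleware(
    SessionMiddleware,
    secret_key=os.environ["SESSION_SECRET"],
    https_only=True,
)
```

With that in place, the `state="toto"` arguments can be dropped and authlib can generate a fresh random state per login, as the linked answer recommends.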
79,657,532
2,840,697
Importance of putting self in a function definition in Python?
<p>Let's assume that you have the following:</p> <pre><code>class Solution1(object): def some_function(self, a, b): self.ans = 0 def some_other_function(x, y): manipulates self.ans return self.ans class Solution2(object): def some_function(self, a, b): ans = 0 def some_other_function(x, y): manipulates ans return ans </code></pre> <p>the only difference between Solution1 and Solution2 are that one has self.ans and one has ans as variables.</p> <p>Can they differ in values?</p>
<python>
2025-06-08 05:34:26
2
942
user98235
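The pseudocode above glosses over the key difference: an inner function can read and mutate `self.ans` (an attribute of the instance) freely, but rebinding a plain local `ans` from the inner function requires a `nonlocal` declaration, otherwise Python treats `ans` as a new local and raises `UnboundLocalError`. A minimal runnable illustration; the class and method names are the question's, the bodies are filled in only as an example.

```python
class Solution1:
    def some_function(self, a, b):
        self.ans = 0

        def some_other_function(x, y):
            self.ans += x + y      # attribute access: no special declaration needed

        some_other_function(a, b)
        return self.ans


class Solution2:
    def some_function(self, a, b):
        ans = 0

        def some_other_function(x, y):
            nonlocal ans           # without this line, `ans += ...` raises UnboundLocalError
            ans += x + y

        some_other_function(a, b)
        return ans


print(Solution1().some_function(2, 3))  # 5
print(Solution2().some_function(2, 3))  # 5
```

The computed values only diverge if the state has to outlive the call or be shared across methods: `self.ans` persists on the instance, while the closure variable disappears when `some_function` returns.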
79,657,393
9,452,512
How to get the code of Hugging Face models?
<p>There is a simple way to download a model from hugging face,</p> <pre class="lang-py prettyprint-override"><code># Load model directly from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained(&quot;sentence-transformers/all-MiniLM-L6-v2&quot;) model = AutoModel.from_pretrained(&quot;sentence-transformers/all-MiniLM-L6-v2&quot;) </code></pre> <p>use it, train it and even replace the layers as long as they take the same input (shapes) and give the same output.</p> <p>Since it is based on pytorch (in this case), you get a nice view of the layers once you print it out, but you don't know the forward methods and other methods used in pytorchs model class.</p> <p>Is there a way to get the entire model in a .py file (as a nn.Module) class like it must have been initially used for training (and the weights e.g. as a .pt file)?</p>
<python><huggingface-transformers><huggingface>
2025-06-07 23:03:04
1
1,473
Uwe.Schneider
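transformers models are ordinary `torch.nn.Module` subclasses whose source already lives in the installed package, so the `forward` code can be located with `inspect`, and the weights can be saved separately as a plain PyTorch state dict. A minimal sketch; the printed paths depend on the local installation, and the class shown in the comments assumes this checkpoint resolves to the BERT architecture.

```python
import inspect

import torch
from transformers import AutoModel

model = AutoModel.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")

# Where the nn.Module subclass (including its forward()) is defined on disk.
print(type(model))                               # e.g. BertModel
print(inspect.getsourcefile(type(model)))        # e.g. .../transformers/models/bert/modeling_bert.py
# print(inspect.getsource(type(model).forward))  # dump the forward() source itself

# Plain PyTorch weights, loadable into any compatible nn.Module definition.
torch.save(model.state_dict(), "minilm_weights.pt")
```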
79,657,387
4,018,331
DigitalOcean Function returns errors: cannot import module and cannot connect to database
<p>I have a simple Python function I've deployed as a Digital Ocean Function. However, the Function errors out on importing <code>psycopg2</code>.</p> <p>The function:</p> <pre class="lang-py prettyprint-override"><code>import os import psycopg2 def get_db_connection(): connection = psycopg2.connect( host=os.environ.get(&quot;DB_HOST&quot;), port=os.environ.get(&quot;DB_PORT&quot;), dbname=os.environ.get(&quot;DB_NAME&quot;), user=os.environ.get(&quot;DB_USER&quot;), password=os.environ.get(&quot;DB_PASSWORD&quot;), ) return connection def main(args): try: version = psycopg2.__version__ conn = get_db_connection() cursor = conn.cursor() cursor.execute(&quot;SELECT version();&quot;) db_version = cursor.fetchone() cursor.close() conn.close() return { &quot;body&quot;: { &quot;message&quot;: f&quot;Successfully imported dependencies and found DB version {db_version}.&quot;, &quot;psycopg2_version&quot;: version } } except Exception as e: return { &quot;body&quot;: { &quot;error&quot;: f&quot;{str(e)}&quot; }, &quot;statusCode&quot;: 500 } if __name__ == &quot;__main__&quot;: main([]) </code></pre> <p>The file structure of the function seems fine, because <code>doctl</code> is deploying it and Digital Ocean is successfully building it and running it. The problem is the import.</p> <p>Project.yml:</p> <pre class="lang-yaml prettyprint-override"><code>packages: - name: test-functions actions: - name: tester runtime: 'python:default' </code></pre> <p>requirements.txt:</p> <pre class="lang-py prettyprint-override"><code>psycopg2==2.9.10 </code></pre> <p>build.sh:</p> <pre class="lang-bash prettyprint-override"><code>#!/bin/bash set -e virtualenv --without-pip virtualenv source virtualenv/bin/activate pip install -r requirements.txt --target virtualenv/lib/python_3.9/site-packages </code></pre> <p>I use <code>doctl</code> to deploy the function to Digital Ocean (note: <code>sls</code> is short for <code>serverless</code>):</p> <pre class="lang-bash prettyprint-override"><code>$ doctl auth init -t &lt;access token from DO App Platform dashboard&gt; $ doctl sls namespace create --label &quot;tests&quot; --region &quot;west&quot; $ doctl sls connect $ doctl sls deploy tests </code></pre> <p>Observing the logs:</p> <pre class="lang-bash prettyprint-override"><code>$ doctl sls activations logs --function test-functions/tester --follow </code></pre> <p>Getting the response:</p> <pre class="lang-bash prettyprint-override"><code>$ doctl sls functions invoke test-functions/tester { &quot;error&quot;: &quot;could not import module...&quot; } </code></pre>
<python><digital-ocean>
2025-06-07 22:47:29
1
1,297
j3py
79,657,240
264,822
How can I run multiple commands on SSH in one go with Paramiko in Python?
<p>I've got some code that runs through a list of devices, connects to them via SSH and queries some parameters. The basic code looks like this:</p> <pre><code>ssh = paramiko.SSHClient() ssh.connect(ip_address, username='root', password=password) try: ssh_stdin, ssh_stdout, ssh_stderr = ssh.exec_command('uname -r') if ssh_stdout.channel.recv_exit_status() == 0: temp = ssh_stdout.readlines() if temp: kernel_version = temp[0].strip('\n').strip('\&quot;') except BaseException as ex: print(f'An exception occurred: {ex}') try: ssh_stdin, ssh_stdout, ssh_stderr = ssh.exec_command('lsb_release -rs') if ssh_stdout.channel.recv_exit_status() == 0: temp = ssh_stdout.readlines() if temp: ubuntu_version = temp[0].strip('\n').strip('\&quot;') except BaseException as ex: print(f'An exception occurred: {ex}') </code></pre> <p>But there are a lot of devices and it takes a long time to run. It's faster to run multiple commands in one go:</p> <pre><code>try: ssh_stdin, ssh_stdout, ssh_stderr = ssh.exec_command('uname -r; lsb_release -rs') if ssh_stdout.channel.recv_exit_status() == 0: temp = ssh_stdout.readlines() if temp[0]: kernel_version = temp[0].strip('\n').strip('\&quot;') if temp[1]: ubuntu_version = temp[1].strip('\n').strip('\&quot;') except BaseException as ex: print(f'An exception occurred: {ex}') </code></pre> <p>but how can I handle one command passing and one failing? Is there a better way of doing this?</p>
<python><ssh><paramiko>
2025-06-07 18:48:16
1
9,317
parsley72
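One way to keep a single `exec_command` round-trip and still know which command failed is to print a sentinel with each command's exit status and parse the combined output. The sketch below assumes a POSIX shell on the devices, reuses the connected `ssh` client from the question, and uses a hypothetical `__RC__` marker.

```python
# Run both commands in one exec_command, emitting a parseable status line per command.
remote = (
    'uname -r; echo "__RC__:uname:$?"; '
    'lsb_release -rs; echo "__RC__:lsb_release:$?"'
)
ssh_stdin, ssh_stdout, ssh_stderr = ssh.exec_command(remote)
output = ssh_stdout.read().decode()

kernel_version = ubuntu_version = None
pending = []                              # output lines of the command currently being read
for line in output.splitlines():
    if line.startswith("__RC__:"):
        _, name, rc = line.split(":", 2)  # marker, command name, exit status
        if rc == "0" and pending:
            value = pending[-1].strip().strip('"')
            if name == "uname":
                kernel_version = value
            elif name == "lsb_release":
                ubuntu_version = value
        pending = []                      # reset buffer for the next command
    else:
        pending.append(line)

print(kernel_version, ubuntu_version)
```

This way one command failing no longer masks the other, and per-command stderr can be folded in the same way by appending `2>&1` to each command.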
79,657,059
1,150,923
How to create individual rich progress bars for each worker in Python multiprocessing's imap_unordered()?
<p>I have a simple code that you can run (the <code>logging</code> is to differentiate 4 workers):</p> <pre class="lang-py prettyprint-override"><code>import time import random import logging import logging.handlers from multiprocessing.dummy import Pool def do_something(number): logger.info(number) time.sleep(number/100) logger = logging.getLogger(__name__) handler = logging.StreamHandler() handler.setFormatter(logging.Formatter(&quot;%(asctime)s [%(levelname)-7s] (%(threadName)-10s) %(message)s&quot;)) logger.addHandler(handler) logger.setLevel(logging.INFO) numbers = random.sample(range(1, 101), 50) pool = Pool(4) pool.imap_unordered(do_something, numbers) pool.close() pool.join() </code></pre> <p>How do I add an individual <a href="https://rich.readthedocs.io/en/stable/progress.html" rel="nofollow noreferrer"><code>rich</code> progress bar</a> for each of the 4 workers? Ideally, I would like a 5th main progress bar that tracks the total completion, but if it's too much work, then it's not necessary.</p> <p>Something like this: <a href="https://i.imgur.com/mtZlYyj.png" rel="nofollow noreferrer"><img src="https://i.imgur.com/mtZlYyj.png" alt="example" /></a></p> <p>With <code>tqdm</code>, it seems you can just do:</p> <pre class="lang-py prettyprint-override"><code>for _ in tqdm.tqdm(pool.imap_unordered(do_something, numbers), total=len(numbers)): pass </code></pre> <p>but I want to use <code>rich</code> since I like its customization better.</p>
<python><multithreading><progress-bar><tqdm><rich>
2025-06-07 14:38:55
2
30,704
hobbes3
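A hedged sketch of one way to do this with `rich.progress.Progress`, assuming the thread pool from the question (`multiprocessing.dummy`): each worker thread lazily creates its own task (bar) keyed by its thread name, and a fifth task tracks the total. With real processes the bars would need a queue back to the parent, because the `Progress` object lives in a single process; here the workers are threads, so calling `update()` from them works with the default configuration.

```python
import random
import threading
import time
from multiprocessing.dummy import Pool   # thread pool, as in the question

from rich.progress import Progress

numbers = random.sample(range(1, 101), 50)
N_WORKERS = 4

with Progress() as progress:
    overall = progress.add_task("[bold]Total", total=len(numbers))
    worker_tasks = {}                     # thread name -> rich task id
    lock = threading.Lock()
    per_worker_total = len(numbers) / N_WORKERS   # rough share, just to size each bar

    def do_something(number):
        name = threading.current_thread().name
        with lock:                        # create this worker's bar on first use
            if name not in worker_tasks:
                worker_tasks[name] = progress.add_task(name, total=per_worker_total)
        time.sleep(number / 100)
        progress.update(worker_tasks[name], advance=1)
        progress.update(overall, advance=1)

    with Pool(N_WORKERS) as pool:
        for _ in pool.imap_unordered(do_something, numbers):
            pass
```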
79,656,740
1,150,923
How to create a simple status/count/progress output when searching for files using rglob() in Python?
<p>I have a simple line that searches for all <code>.json</code> files, except there are about 28k of them, so this one line takes about a minute to complete:</p> <pre><code>from pathlib import Path files = list(Path(&quot;~/foo/bar/).rglob(&quot;*.json&quot;)) </code></pre> <p>Is there a way to create a simple counter that shows the user how many <code>rglob()</code> found so far? Or even a harder solution where there's an estimated progress bar since I know the total number is around 28k (but this number slowly grows in future updates)?</p> <p>I prefer <code>rich</code> over <code>tqdm</code>, but any solution is better than nothing. Thanks.</p>
<python><progress-bar><glob><status>
2025-06-07 07:10:10
2
30,704
hobbes3
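`rglob()` is a generator, so the results can be counted as they are found instead of waiting for `list()` to finish. A minimal sketch with `rich`, using the ~28k figure only as an estimate to size the bar and snapping to the real count at the end; `expanduser()` is added so the `~` actually resolves.

```python
from pathlib import Path

from rich.progress import Progress

EXPECTED = 28_000   # rough known total; only used to size the bar

files = []
with Progress() as progress:
    task = progress.add_task("Scanning for *.json", total=EXPECTED)
    for path in Path("~/foo/bar").expanduser().rglob("*.json"):
        files.append(path)
        progress.update(task, advance=1)
    # Snap the bar to the real final count (the estimate may be off).
    progress.update(task, total=len(files), completed=len(files))

print(f"found {len(files)} files")
```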
79,656,626
7,556,091
How to install pytorch and opencv using a fixed conda channel order?
<p>Conda creates a pristine environment, configures a fixed channel order, and then starts installing pytorch torchvisioni pytorch-cuda and opencv, and it prompts for dependency conflicts. Do I have to install opencv via pip?</p> <p>The same thing happens with python 3.10</p> <pre><code>$ conda create -n my_env3.9 python=3.9 $ conda activate my_env3.9 $ conda install pytorch torchvision pytorch-cuda=11.8 opencv Channels: - pytorch - nvidia - conda-forge - defaults Platform: linux-64 Collecting package metadata (repodata.json): done Solving environment: failed LibMambaUnsatisfiableError: Encountered problems while solving: - nothing provides libopencv 4.2.0 py36_5 needed by opencv-4.2.0-py36_5 Could not solve for environment specs The following packages are incompatible โ”œโ”€ opencv =* * is installable with the potential options โ”‚ โ”œโ”€ opencv [4.10.0|4.11.0] would require โ”‚ โ”‚ โ””โ”€ libopencv [==4.10.0 headless_py310h05fcec3_10|==4.10.0 headless_py310h2251c23_11|...|==4.11.0 qt6_py39hd96f159_602], which requires โ”‚ โ”‚ โ””โ”€ ffmpeg &gt;=7.1.0,&lt;8.0a0 *, which conflicts with any installable versions previously reported; โ”‚ โ”œโ”€ opencv [4.10.0|4.9.0] would require โ”‚ โ”‚ โ””โ”€ libopencv [==4.10.0 headless_py310h3d4b477_1|==4.10.0 headless_py310hef7d0a5_0|...|==4.9.0 qt6_py39hed63795_614], which requires โ”‚ โ”‚ โ””โ”€ ffmpeg &gt;=6.1.1,&lt;7.0a0 *, which conflicts with any installable versions previously reported; โ”‚ โ”œโ”€ opencv 4.10.0 would require โ”‚ โ”‚ โ””โ”€ libopencv [==4.10.0 headless_py310h8d94708_2|==4.10.0 headless_py38h5642e36_2|...|==4.10.0 qt6_py39hfd9fb6d_602], which requires โ”‚ โ”‚ โ””โ”€ ffmpeg &gt;=7.0.1,&lt;8.0a0 *, which conflicts with any installable versions previously reported; โ”‚ โ”œโ”€ opencv 4.10.0 would require โ”‚ โ”‚ โ””โ”€ libopencv [==4.10.0 headless_py310h5bfabb9_4|==4.10.0 headless_py310h5bfabb9_5|...|==4.10.0 qt6_py39hdeb11db_605], which requires โ”‚ โ”‚ โ””โ”€ ffmpeg &gt;=6.1.2,&lt;7.0a0 *, which conflicts with any installable versions previously reported; โ”‚ โ”œโ”€ opencv 4.10.0 would require โ”‚ โ”‚ โ””โ”€ libopencv [==4.10.0 headless_py311h63eac36_5|==4.10.0 headless_py311h63eac36_6|...|==4.10.0 qt6_py39h5d2977a_603], which requires โ”‚ โ”‚ โ””โ”€ ffmpeg &gt;=7.0.2,&lt;8.0a0 *, which conflicts with any installable versions previously reported; โ”‚ โ”œโ”€ opencv 4.11.0 would require โ”‚ โ”‚ โ””โ”€ libopencv [==4.11.0 headless_py310h8ace835_4|==4.11.0 headless_py310h8ace835_5|...|==4.11.0 qt6_py39hbfaaa73_603], which requires โ”‚ โ”‚ โ””โ”€ ffmpeg &gt;=7.1.1,&lt;8.0a0 *, which conflicts with any installable versions previously reported; โ”‚ โ”œโ”€ opencv [4.5.3|4.5.5|4.6.0] would require โ”‚ โ”‚ โ””โ”€ libopencv [==4.5.3 py310hc72b5f5_8|==4.5.3 py38hc6b509d_8|...|==4.6.0 py39hf4bb9d8_2], which requires โ”‚ โ”‚ โ””โ”€ ffmpeg &gt;=4.4.2,&lt;5.0a0 *, which conflicts with any installable versions previously reported; โ”‚ โ”œโ”€ opencv [4.6.0|4.7.0] would require โ”‚ โ”‚ โ””โ”€ libopencv [==4.6.0 py310h5bd1119_9|==4.6.0 py310h6214075_5|...|==4.7.0 py39hf99ad11_5], which requires โ”‚ โ”‚ โ””โ”€ ffmpeg &gt;=5.1.2,&lt;6.0a0 *, which conflicts with any installable versions previously reported; โ”‚ โ”œโ”€ opencv [4.7.0|4.8.0|4.8.1] would require โ”‚ โ”‚ โ””โ”€ libopencv [==4.7.0 py310h245f934_4|==4.7.0 py310h3e876cf_5|...|==4.8.1 py39hf605482_5], which requires โ”‚ โ”‚ โ””โ”€ ffmpeg &gt;=6.0.0,&lt;7.0a0 *, which conflicts with any installable versions previously reported; โ”‚ โ”œโ”€ opencv 4.9.0 would require โ”‚ โ”‚ โ””โ”€ libopencv [==4.9.0 
headless_py310hae237af_14|==4.9.0 headless_py38h0f7b093_14|...|==4.9.0 qt6_py39h067c833_615], which requires โ”‚ โ”‚ โ””โ”€ ffmpeg &gt;=7.0.0,&lt;8.0a0 *, which conflicts with any installable versions previously reported; โ”‚ โ”œโ”€ opencv [2.4.12|2.4.13|3.1.0|3.2.0] would require โ”‚ โ”‚ โ””โ”€ python =2.7 *, which can be installed; โ”‚ โ”œโ”€ opencv [2.4.13.4|3.2.0|3.3.0|3.4.1] would require โ”‚ โ”‚ โ””โ”€ ffmpeg &gt;=3.2.3,&lt;3.2.6 *, which conflicts with any installable versions previously reported; โ”‚ โ”œโ”€ opencv 3.1.0 would require โ”‚ โ”‚ โ””โ”€ python =3.4 *, which can be installed; โ”‚ โ”œโ”€ opencv [3.1.0|3.2.0] would require โ”‚ โ”‚ โ””โ”€ python =3.5 *, which can be installed; โ”‚ โ”œโ”€ opencv [3.1.0|3.2.0] would require โ”‚ โ”‚ โ””โ”€ python =3.6 *, which can be installed; โ”‚ โ”œโ”€ opencv [3.4.1|3.4.3|3.4.4|3.4.7] would require โ”‚ โ”‚ โ”œโ”€ ffmpeg &gt;=4.0.2,&lt;4.1.0a0 *, which conflicts with any installable versions previously reported; โ”‚ โ”‚ โ””โ”€ libopencv ==3.4.7 hc173e35_5, which requires โ”‚ โ”‚ โ””โ”€ ffmpeg &gt;=4.0.2,&lt;4.1.0a0 *, which conflicts with any installable versions previously reported; โ”‚ โ”œโ”€ opencv 3.4.1 would require โ”‚ โ”‚ โ””โ”€ ffmpeg &gt;=4.0.1,&lt;4.1.0a0 *, which conflicts with any installable versions previously reported; โ”‚ โ”œโ”€ opencv [3.4.4|3.4.7|...|4.1.1] would require โ”‚ โ”‚ โ”œโ”€ ffmpeg =4.1 *, which conflicts with any installable versions previously reported; โ”‚ โ”‚ โ””โ”€ libopencv [==3.4.7 h0cc45ee_4|==4.1.1 h0cc45ee_3], which requires โ”‚ โ”‚ โ””โ”€ ffmpeg =4.1 *, which conflicts with any installable versions previously reported; โ”‚ โ”œโ”€ opencv [3.4.7|3.4.8|...|4.2.0] would require โ”‚ โ”‚ โ””โ”€ libopencv [==3.4.7 h32d60f7_6|==3.4.7 py27_7|...|==4.2.0 py38_4], which requires โ”‚ โ”‚ โ””โ”€ ffmpeg &gt;=4.1.3,&lt;4.2.0a0 *, which conflicts with any installable versions previously reported; โ”‚ โ”œโ”€ opencv 4.2.0 would require โ”‚ โ”‚ โ””โ”€ libopencv ==4.2.0 py36_5, which does not exist (perhaps a missing channel); โ”‚ โ”œโ”€ opencv [4.2.0|4.3.0|4.4.0] would require โ”‚ โ”‚ โ””โ”€ py-opencv [==4.2.0 py36h0b673f9_6|==4.3.0 py36h0b673f9_2|==4.4.0 py36h0b673f9_2], which requires โ”‚ โ”‚ โ””โ”€ python &gt;=3.6,&lt;3.7.0a0 *, which can be installed; โ”‚ โ”œโ”€ opencv [4.2.0|4.3.0] would require โ”‚ โ”‚ โ””โ”€ libopencv [==4.2.0 py36_7|==4.2.0 py37_7|...|==4.3.0 py38_1], which requires โ”‚ โ”‚ โ””โ”€ ffmpeg &gt;=4.2.3,&lt;4.3.0a0 *, which conflicts with any installable versions previously reported; โ”‚ โ”œโ”€ opencv [4.2.0|4.3.0|4.4.0] would require โ”‚ โ”‚ โ””โ”€ py-opencv [==4.2.0 py37h43977f1_5|==4.2.0 py37h43977f1_6|==4.3.0 py37h43977f1_2|==4.4.0 py37h43977f1_2], which requires โ”‚ โ”‚ โ””โ”€ python &gt;=3.7,&lt;3.8.0a0 *, which can be installed; โ”‚ โ”œโ”€ opencv [4.2.0|4.3.0|4.4.0] would require โ”‚ โ”‚ โ””โ”€ py-opencv [==4.2.0 py38h23f93f0_5|==4.2.0 py38h23f93f0_6|==4.3.0 py38h23f93f0_2|==4.4.0 py38h23f93f0_2], which requires โ”‚ โ”‚ โ””โ”€ python &gt;=3.8,&lt;3.9.0a0 *, which can be installed; โ”‚ โ”œโ”€ opencv [4.4.0|4.5.0|4.5.1|4.5.2] would require โ”‚ โ”‚ โ””โ”€ libopencv [==4.4.0 py36_3|==4.4.0 py37_3|...|==4.5.2 py39h70bf20d_1], which requires โ”‚ โ”‚ โ””โ”€ ffmpeg &gt;=4.3.1,&lt;4.4.0a0 *, which conflicts with any installable versions previously reported; โ”‚ โ”œโ”€ opencv 4.5.0 would require โ”‚ โ”‚ โ””โ”€ libopencv ==4.5.0 py36_5, which does not exist (perhaps a missing channel); โ”‚ โ”œโ”€ opencv 4.5.0 would require โ”‚ โ”‚ โ””โ”€ libopencv ==4.5.0 py36_6, which does not exist (perhaps a missing 
channel); โ”‚ โ”œโ”€ opencv [4.5.3|4.5.5] would require โ”‚ โ”‚ โ””โ”€ libopencv [==4.5.3 py31hbd5a65a_6|==4.5.3 py31he7a5e20_7|...|==4.5.5 py39hfb30bf4_6], which requires โ”‚ โ”‚ โ””โ”€ ffmpeg &gt;=4.3.2,&lt;4.4.0a0 *, which conflicts with any installable versions previously reported; โ”‚ โ”œโ”€ opencv 4.5.5 would require โ”‚ โ”‚ โ””โ”€ libopencv [==4.5.5 py310h1897127_9|==4.5.5 py310hc83fb77_10|...|==4.5.5 py39he64e9e9_10], which requires โ”‚ โ”‚ โ””โ”€ ffmpeg &gt;=4.4.1,&lt;5.0a0 *, which conflicts with any installable versions previously reported; โ”‚ โ””โ”€ opencv [3.3.1|3.4.1|...|4.6.0] conflicts with any installable versions previously reported; โ””โ”€ pin on python 3.9.* =* * is not installable because it requires โ””โ”€ python =3.9 *, which conflicts with any installable versions previously reported. Pins seem to be involved in the conflict. Currently pinned specs: - python=3.9 </code></pre>
<python><opencv><ffmpeg><pytorch><conda>
2025-06-07 01:56:19
1
1,896
progquester
79,656,623
3,366,355
Removing double-slash // comments from JSON
<p>It turns out there's a format jsonc that allows comments like <code>//this</code> in the JSON along other things. Is it possible to produce a json example with <code>//this type</code> of comments that can make the resulting json invalid after applying this regex.</p> <pre class="lang-py prettyprint-override"><code>comment_re = re.compile( r'//.*|/\*[\s\S]*?\*/|(&quot;(\\.|.)*?&quot;)', # capture group for quoted strings ) cleaned = comment_re.sub(r'\1', jsonStr) </code></pre> <p>I have this one as the starting point and the regex is working fine. NOTE: I'm not elaborating on having comments in the json is good or bad idea, that's NOT the point. I'm here to verify if this regex has any edge cases not foreseen by this piece of python code. Please ignore <code>/* this type of comments */</code> as we don't use it in our code.</p> <p>This is an advanced example that shows the regex works</p> <pre class="lang-json prettyprint-override"><code> //tried this sed -r 's#\s//[^}]*##' // also tried this '%^*3//s39()' [ { &quot;test1&quot; : &quot;http://test.com&quot;, &quot;test2&quot; : &quot;http://test.com&quot;,//test // any thing &quot;ok&quot; : 3, //here 2 &quot;//networkpath1&quot; : true, //whynot &quot;//networkpath2&quot; : true // ok },//eof { &quot;statement&quot; : &quot;I like test cases&quot; }//eof ] </code></pre>
<python><json><regex>
2025-06-07 01:44:09
1
1,032
martin
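Rather than asserting a breaking input, a small harness makes it easy to probe candidates: strip the comments, then let `json.loads` be the judge. The classic things to try are `//` inside quoted strings (which the capture group is meant to protect) and strings containing escaped quotes or an escaped backslash right before the closing quote. A minimal sketch around the regex from the question; the cases listed here all survive it, and more can be appended.

```python
import json
import re

comment_re = re.compile(
    r'//.*|/\*[\s\S]*?\*/|("(\\.|.)*?")',  # capture group keeps quoted strings intact
)

def strip_comments(json_str: str) -> str:
    return comment_re.sub(r'\1', json_str)

cases = [
    '{"url": "http://example.com"} // trailing comment',
    '{"s": "not // a comment", "n": 1}',
    '{"s": "ends with backslash \\\\"} // escaped backslash before the quote',
    '{"s": "quote \\" inside"} // escaped quote inside the string',
]

for case in cases:
    cleaned = strip_comments(case)
    try:
        json.loads(cleaned)
        verdict = "valid"
    except json.JSONDecodeError as exc:
        verdict = f"INVALID: {exc}"
    print(f"{case!r:70} -> {verdict}")
```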
79,656,511
2,153,235
Pandas: What is wrong with this use of DataFrame.apply for finding maximum of other columns
<p>I have a DataFrame with a handful of date columns, I want to create a new column &quot;MaxDate&quot; that contains the maximum date. I tried using <code>apply</code>, but my various code patterns for the lambda function yield errors.</p> <pre><code>import pandas as pd import datetime as dt df=pd.DataFrame( [ [ dt.date(2025,6,5), dt.date(2025,6,6) ],[ dt.date(2025,6,7), dt.date(2025,6,8) ] ], columns=['A','B'], index=['Row1','Row2'] ) # Explicitly find maximum of row 0 (WORKS) max( df.loc[ df.index[0], ['A','B'] ] ) # None of the following 3 code patterns work for &quot;apply&quot; if False: df['MaxDate'] = df.apply( lambda row: max( row.loc[ row.index[0], ['A','B'] ] ) ) # IndexingError: &quot;Too many indexers&quot; elif False: df['MaxDate'] = df.apply( lambda row: max( row['A','B'] ) ) # KeyError: # &quot;key of type tuple not found and not a MultiIndex&quot; elif False: df['MaxDate'] = df.apply( lambda row: max( row['A'],row['B'] ) ) # KeyError: 'A' </code></pre> <p>I tried determining whether the <code>row</code> variable was a DataFrame or a Series, but the result was <code>nan</code></p> <pre><code># Querying class of &quot;row&quot; yields a column of &quot;nan&quot; df['MaxDate'] = df.apply( lambda row: type(row) ) </code></pre> <p>Of the 3 code patterns above, I would like to avoid th3 last one because it requires too many repetitions of the word <code>row</code>, making my code &quot;noisy&quot;.</p> <p>What am I doing wrong?</p> <p>Others have cited <a href="https://stackoverflow.com/questions/12169170">this duplicate question</a>, which I appreciate. To me, however, this is more than just a question of how to achieve the end effect. It is also sussing out my understand of the <code>apply</code> method. What is wrong with my use of its mechanics? And why doesn't <code>type(row)</code> show the class of the <code>row</code> object? Without visibility into its type, it's hard to smartly come up with code patterns that are likely to work. I've re-titled the question to reflect this.</p>
<python><pandas>
2025-06-06 21:37:29
1
1,265
user2153235
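`DataFrame.apply` defaults to `axis=0`, so the lambda receives each column (a Series named 'A', then 'B'), which is exactly why `row['A']` raises `KeyError: 'A'`; and `df.apply(lambda row: type(row))` shows NaN because the result is a Series indexed by 'A'/'B' that does not align with the Row1/Row2 index when assigned back. Passing `axis=1` makes `row` a Series indexed by the column labels; for this particular task the row-wise reduction avoids `apply` entirely. A minimal sketch with the question's frame:

```python
import datetime as dt
import pandas as pd

df = pd.DataFrame(
    [[dt.date(2025, 6, 5), dt.date(2025, 6, 6)],
     [dt.date(2025, 6, 7), dt.date(2025, 6, 8)]],
    columns=["A", "B"],
    index=["Row1", "Row2"],
)

# apply with axis=1: `row` is a Series whose index is the column labels.
df["MaxDate"] = df.apply(lambda row: max(row["A"], row["B"]), axis=1)

# Equivalent without apply: row-wise max across the selected columns.
df["MaxDate"] = df[["A", "B"]].max(axis=1)

# Inspecting what apply actually passes in (prints <class 'pandas.core.series.Series'> per row):
df.apply(lambda row: print(type(row)), axis=1)
```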
79,656,429
1,601,903
Feet and Hip Do Not Rotate During Circular Walking Motion - OpenSim RajaGopal2016
<p>I am currently working with motion capture data from the <a href="https://mocap.cs.cmu.edu/" rel="nofollow noreferrer">CMU MoCap dataset</a> and analyzing it using the <a href="https://simtk.org/projects/opensim" rel="nofollow noreferrer">RajaGopal2016 OpenSim model</a> in the GUI.</p> <p>To prepare the data, I did the following:</p> <ol> <li>Converted the .c3d files into .trc format using ezc3d in Python.</li> <li>Applied coordinate transformation if the data was in a Z-up system.</li> <li>Mapped CMU marker labels to OpenSim model marker names using a custom mapping.</li> <li>Exported the .trc file and successfully ran Inverse Kinematics (IK) in the OpenSim GUI.</li> </ol> <p>The results look good for straight-line walking. However, in circular or turning motion trials, the feet and hips do not rotate or follow the arc of motion. The model appears to walk in place rather than along a curve, even though the upper body follows the correct motion path.</p> <p>I have attached two frames below showing this issue: <a href="https://www.dropbox.com/scl/fi/jg0gdyekpwbaslnkuuv1i/challenge.jpeg?rlkey=lizuwwznpieapxt9uurijw4cq&amp;st=uie7593k&amp;dl=0" rel="nofollow noreferrer">image</a></p> <p>My questions are:</p> <ol> <li>Has anyone faced this issue with circular or turning motions?</li> <li>Could this be caused by missing or misaligned foot or hip markers?</li> <li>Are there known limitations of the RajaGopal2016 model with rotational lower-body motion?</li> <li>Do IK settings, such as marker weights, need adjustment for turning trials?</li> </ol> <p>Any suggestions or guidance would be greatly appreciated.</p> <p>I have shared below the Python code that creates .trc from .c3d files.</p> <pre><code>import ezc3d import numpy as np from scipy.spatial.transform import Rotation as R # Load the C3D file c3d = ezc3d.c3d(&quot;/Downloads/0005_Walking001.c3d&quot;) # Extract marker data points = c3d['data']['points'][:3] # (X, Y, Z) # Detect if Z-up (vertical movement is mostly in the Z axis) spread_y = np.ptp(points[1, :, :]) # Y range spread_z = np.ptp(points[2, :, :]) # Z range if spread_z &gt; spread_y: print(&quot;โ†ช Detected Z-up coordinate system. 
Rotating to Y-up for OpenSim...&quot;) rot = R.from_euler('x', -90, degrees=True).as_matrix() points = np.einsum('ij,jkl-&gt;ikl', rot, points) else: print(&quot;Coordinate system appears Y-up โ€” no rotation applied.&quot;) raw_labels = c3d['parameters']['POINT']['LABELS']['value'] labels = [l.split(':')[-1] for l in raw_labels] # Remove prefix like 'liu:' # OpenSim model marker list (from Rajagopal2016.osim) opensim_markers = [ &quot;RACR&quot;, &quot;LACR&quot;, &quot;C7&quot;, &quot;CLAV&quot;, &quot;RASH&quot;, &quot;RPSH&quot;, &quot;LASH&quot;, &quot;LPSH&quot;, &quot;RSJC&quot;, &quot;RUA1&quot;, &quot;RUA2&quot;, &quot;RUA3&quot;, &quot;RLEL&quot;, &quot;RMEL&quot;, &quot;RFAsuperior&quot;, &quot;RFAradius&quot;, &quot;RFAulna&quot;, &quot;LSJC&quot;, &quot;LUA1&quot;, &quot;LUA2&quot;, &quot;LUA3&quot;, &quot;LLEL&quot;, &quot;LMEL&quot;, &quot;LFAsuperior&quot;, &quot;LFAradius&quot;, &quot;LFAulna&quot;, &quot;RASI&quot;, &quot;LASI&quot;, &quot;RPSI&quot;, &quot;LPSI&quot;, &quot;LHJC&quot;, &quot;RHJC&quot;, &quot;RTH1&quot;, &quot;RTH2&quot;, &quot;RTH3&quot;, &quot;RLFC&quot;, &quot;RMFC&quot;, &quot;RKJC&quot;, &quot;RTB1&quot;, &quot;RTB2&quot;, &quot;RTB3&quot;, &quot;RLMAL&quot;, &quot;RMMAL&quot;, &quot;RAJC&quot;, &quot;RCAL&quot;, &quot;RTOE&quot;, &quot;RMT5&quot;, &quot;LTH1&quot;, &quot;LTH2&quot;, &quot;LTH3&quot;, &quot;LLFC&quot;, &quot;LMFC&quot;, &quot;LKJC&quot;, &quot;LTB1&quot;, &quot;LTB2&quot;, &quot;LTB3&quot;, &quot;LLMAL&quot;, &quot;LMMAL&quot;, &quot;LAJC&quot;, &quot;LCAL&quot;, &quot;LTOE&quot;, &quot;LMT5&quot;, &quot;REJC&quot;, &quot;LEJC&quot;, &quot;R_tibial_plateau&quot;, &quot;L_tibial_plateau&quot; ] # Map TRC labels to OpenSim model names rename_map = { 'LFWT': 'LASI', 'RFWT': 'RASI', 'LBWT': 'LPSI', 'RBWT': 'RPSI', 'LSHO': 'LASH', 'RSHO': 'RASH', 'LELB': 'LMEL', 'RELB': 'RMEL', 'LWRB': 'LFAradius', 'RWRB': 'RFAradius', 'LWRA': 'LFAradius', 'RWRA': 'RFAradius', 'LFRM': 'LFAsuperior', 'RFRM': 'RFAsuperior', 'LTHI': 'LTH1', 'RTHI': 'RTH1', 'LSHN': 'LTH3', 'RSHN': 'RTH3', 'LANK': 'LLMAL', 'RANK': 'RLMAL', 'LHEE': 'LCAL', 'RHEE': 'RCAL', 'LKNE': 'LKJC', 'RKNE': 'RKJC', 'LTOE': 'LTOE', 'RTOE': 'RTOE', 'LMT5': 'LMT5', 'RMT5': 'RMT5', 'STRN': 'CLAV', 'CLAV': 'CLAV', 'C7': 'C7' } # Reverse map for writing: only keep labels matching OpenSim markers matched_indices = [] mapped_labels = [] used_names = set() for i, label in enumerate(labels): new_label = rename_map.get(label) if new_label in opensim_markers and new_label not in used_names: matched_indices.append(i) mapped_labels.append(new_label) used_names.add(new_label) # Filter points points_filtered = points[:, matched_indices, :] # Prepare TRC content n_frames = points.shape[2] rate = c3d['header']['points']['frame_rate'] time = np.arange(n_frames) / rate # Write TRC trc_path = &quot;0005_Walking001.trc&quot; with open(trc_path, 'w') as f: f.write(f&quot;PathFileType\t4\t(X/Y/Z)\t{trc_path}\n&quot;) f.write(&quot;DataRate\tCameraRate\tNumFrames\tNumMarkers\tUnits\tOrigDataRate\tOrigDataStartFrame\tOrigNumFrames\n&quot;) f.write(f&quot;{rate:.2f}\t{rate:.2f}\t{n_frames}\t{len(mapped_labels)}\tmm\t{rate:.2f}\t1\t{n_frames}\n&quot;) f.write(&quot;Frame#\tTime\t&quot; + &quot;\t\t&quot;.join(mapped_labels) + &quot;\n&quot;) f.write(&quot;\t&quot; + &quot;\t&quot;.join([f&quot;{axis}{i+1}&quot; for i in range(len(mapped_labels)) for axis in [&quot;X&quot;, &quot;Y&quot;, &quot;Z&quot;]]) + &quot;\n&quot;) for i in range(n_frames): row = [str(i+1), f&quot;{time[i]:.5f}&quot;] for m in 
range(len(mapped_labels)): x, y, z = points_filtered[:, m, i] row.extend([f&quot;{x:.5f}&quot;, f&quot;{y:.5f}&quot;, f&quot;{z:.5f}&quot;]) f.write(&quot;\t&quot;.join(row) + &quot;\n&quot;) print(f&quot;TRC file written to: {trc_path}&quot;) </code></pre>
<python><inverse-kinematics><opensimulator>
2025-06-06 20:11:46
0
418
sana
79,656,394
7,344,164
How to configure VS Code launch.json to debug a Streamlit app with a conda environment?
<p>To debug a python project with a specific conda env in VS code, I normally use this launch.json</p> <pre><code>{ &quot;version&quot;: &quot;0.2.0&quot;, &quot;configurations&quot;: [ { &quot;name&quot;: &quot;Python Debugger: Current File&quot;, &quot;type&quot;: &quot;debugpy&quot;, &quot;request&quot;: &quot;launch&quot;, &quot;program&quot;: &quot;${file}&quot;, &quot;console&quot;: &quot;integratedTerminal&quot;, &quot;args&quot;: [ &quot;--video_path&quot;,&quot;./../test_real.mp4&quot;, ], &quot;python&quot;: &quot;/home/&lt;username&gt;/miniconda3/envs/myconda_env/bin/python&quot; // Update to the Python binary for gradtts environment } ] } </code></pre> <p>Now I want to debug a streamlit app in the VS code. The command to launch a streamlit app is <code>$ streamlit run main.py</code>. How to modify the launch.json to achieve this task in VS code?</p>
<python><visual-studio-code><streamlit><vscode-debugger>
2025-06-06 19:32:45
1
14,299
DevLoverUmar
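A launch approach that keeps the familiar debugpy setup working for a Streamlit app is a tiny launcher script that starts Streamlit from inside the debugged process. This is only a sketch: the file name debug_streamlit.py is hypothetical, main.py is assumed to be the app entry point, and it relies on streamlit.web.cli, which recent Streamlit releases use as their CLI entry point. Open the launcher and debug it with the conda interpreter already selected in launch.json.
<pre><code>
# debug_streamlit.py -- hypothetical launcher for debugging a Streamlit app
import sys

from streamlit.web import cli as stcli

if __name__ == "__main__":
    # emulate "streamlit run main.py" inside the process the debugger attaches to
    sys.argv = ["streamlit", "run", "main.py"]
    sys.exit(stcli.main())
</code></pre>
Alternatively, launch.json can point "module" at streamlit with "run" and the script path in "args", which avoids the extra file.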
79,656,194
9,884,998
Moving the Scientific Notation Offset of a Pyplot Graph below the top Spine
<p>I'm stacking multiple graphs using matplotlib-pyplot. This is my minimal reproducable example:</p> <pre><code>from matplotlib import pyplot as plt from numpy import linspace, sqrt, pi fig, ax = plt.subplots(3, 1, sharex=True) plt.subplots_adjust(hspace=0) def zr(w0, l): return pi*w0**2/l def w0(l, L, R): return sqrt(l/pi * sqrt(L*(R-L))) def width(z, l): return w0(l, 0.55, 1) * sqrt(1 + (z / zr(w0(l, 0.55, 1), l))**2) x = linspace(0, 1, 5) y = width(x, 5.54e-6) for i in range(3): #Do some analysis here, then plot the fit ax[i].set_xlabel(&quot;z / m&quot;) ax[i].set_ylabel(&quot;$d(w)$ / mm&quot;) ax[i].ticklabel_format(axis='y', style='sci', scilimits=(0, 0)) ax[i].grid(&quot;both&quot;, linestyle=&quot;:&quot;) #The Fix i tried: ax[i].get_yaxis().get_offset_text().set_position((0,0.7)) ax[1].errorbar(x, y, xerr = 0.025, yerr = 0.2*1e-3, color = &quot;black&quot;, linestyle = &quot;&quot;) plt.show() </code></pre> <p>Unfortunately in my example, the offset modifier intersects with the data in the graph above.<br /> To fix that I want to move it below the spine. I originally tried <a href="https://stackoverflow.com/questions/73215053/how-to-move-exponent-label-with-spine-in-matplotlib-twin-x-plot">this answer</a>, as can be seen above, but I was unable to observe any change.</p> <p><a href="https://i.sstatic.net/Um2bdwUE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Um2bdwUE.png" alt="three stacked graphs made in matplotlib" /></a></p> <p>EDIT: I did a little bit of experimenting. Before the canvas is drawn the position is set correctly, but the offset text is empty:</p> <pre><code>&gt;&gt;&gt; ax[2].get_yaxis().get_offset_text() Text(0, 0.7, '') </code></pre> <p>But after the canvas is drawn the position changes:</p> <pre><code>&gt;&gt;&gt; fig.canvas.draw() &gt;&gt;&gt; ax[1].get_yaxis().get_offset_text() Text(0, 303.3666666666667, '1eโˆ’3') </code></pre> <p>This also holds for <code>plt.plot()</code> which almost certainly calls <code>canvas.draw()</code></p>
<python><matplotlib>
2025-06-06 16:22:57
1
529
David K.
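One workaround for the offset label colliding with the panel above (a sketch, not the only possible fix): draw the canvas once so the offset strings such as "1e-3" exist, copy each one into its own axes as an annotation just below the top spine, and hide the originals. The toy data below only mimics the scale of the question's errorbar plot.
<pre><code>
import numpy as np
from matplotlib import pyplot as plt

fig, axes = plt.subplots(3, 1, sharex=True)
plt.subplots_adjust(hspace=0)
x = np.linspace(0, 1, 5)
for a in axes:
    a.ticklabel_format(axis="y", style="sci", scilimits=(0, 0))
    a.grid("both", linestyle=":")
axes[1].errorbar(x, 0.55e-3 * np.sqrt(1 + x**2), xerr=0.025, yerr=0.2e-3,
                 color="black", linestyle="")

fig.canvas.draw()  # populates the offset texts so get_text() is non-empty
for a in axes:
    offset = a.yaxis.get_offset_text()
    if offset.get_text():
        # re-draw the exponent inside the axes, below the top spine
        a.annotate(offset.get_text(), xy=(0.02, 0.85), xycoords="axes fraction")
        offset.set_visible(False)
plt.show()
</code></pre>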
79,656,139
5,964,034
tf.keras: Why my UNet with final layer Conv3D with sigmoid activation gives probabilities less than 0 and more than 1
<p>I am really confused because this should never happen according to common sense and to all things that I found on the internet.</p> <p>In short. My UNet with sigmoid activation as last layer gives probabilities of less than 0 and more than 1.</p> <p>I have a UNet for semantic segmentation of 3-dimensional brain MRIs. My last layer is a Conv3D with activation function sigmoid. I have confirmed that the last activation is sigmoid.</p> <pre><code>u_cnn.layers[-1] &lt;Conv3D name=conv3d_20, built=True&gt; u_cnn.layers[-1].activation &lt;function keras.src.activations.activations.sigmoid(x)&gt; </code></pre> <p>I know that any number (even infinity and minus infinity) should transform into something between 0 and 1 when passed through a sigmoid function. I know it is impossible to get anything out of that range. However, when I use the trained model to predict on the train data (which appeared to learn reasonably well but did not generalize well to the validation data), the result I got is:</p> <pre><code>Y_predictions_train = u_cnn.predict(X_train, batch_size=16) np.min(Y_predictions_train) -676.87195 np.max(Y_predictions_train) 20591.613 </code></pre> <p>To say that I am extremely confused about this result is an understatement. How can a number be passed through the sigmoid function and give something outside of the 0-1 interval?</p> <p>Any help appreciated.</p> <p>I am using tensorflow 2.17.0 with keras 3.3.3</p> <p>The full UNet architecture is presented below</p> <pre><code># Create functions to define CNN blocks def double_conv_block(x, n_filters): # Conv3D then ReLU activation x = layers.Conv3D(n_filters, kernel_size=5, strides=1, padding ='same', activation = 'relu', data_format=&quot;channels_last&quot;)(x) # Batch normalization x = layers.BatchNormalization()(x) # Conv3D then ReLU activation x = layers.Conv3D(n_filters, kernel_size=3, strides=1, padding ='same', activation = 'relu')(x) return x def downsample_block(x, n_filters): f = double_conv_block(x, n_filters) p = layers.MaxPool3D(pool_size=2)(f) return f, p def upsample_block(x, conv_features, n_filters): # upsample x = layers.Conv3DTranspose(n_filters, kernel_size=3, strides=2, padding='same')(x) # concatenate x = layers.concatenate([x, conv_features]) # Conv3D twice with ReLU activation x = double_conv_block(x, n_filters) return x # Define the U CNN model def create_u_cnn(SX, SY, SZ): # inputs inputs = layers.Input(shape=(SX, SY, SZ, 2)) # encoder: contracting path - downsample # 1 - downsample f1, p1 = downsample_block(inputs, 64) # 2 - downsample f2, p2 = downsample_block(p1, 128) # 3 - downsample f3, p3 = downsample_block(p2, 256) # 4 - downsample f4, p4 = downsample_block(p3, 512) # 5 - bottleneck bottleneck = double_conv_block(p4, 1024) # decoder: expanding path - upsample # 6 - upsample u6 = upsample_block(bottleneck, f4, 512) # 7 - upsample u7 = upsample_block(u6, f3, 256) # 8 - upsample u8 = upsample_block(u7, f2, 128) # 9 - upsample u9 = upsample_block(u8, f1, 64) # Final convolutional and dropout layers u10 = layers.Conv3D(64, kernel_size=3, strides=1, padding ='same', activation = 'relu')(u9) u11 = layers.Dropout(rate=0.4)(u10) u12 = layers.Conv3D(64, kernel_size=3, strides=1, padding ='same', activation = 'relu')(u11) # outputs outputs = layers.Conv3D(filters=1, kernel_size=1, strides=1, padding='same', activation = 'sigmoid')(u12) # unet model with Keras Functional API u_cnn = tf.keras.Model(inputs, outputs, name=&quot;U-Net&quot;) return u_cnn # Mirrored strategy for using GPUs in parallel mirrored_strategy = 
tf.distribute.MirroredStrategy() with mirrored_strategy.scope(): u_cnn = create_u_cnn(SX, SY, SZ) u_cnn.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001), loss=FocalTverskyLoss, metrics=['accuracy', tf.keras.metrics.BinaryIoU(target_class_ids=[1], threshold=0.5), dice_coef]) u_cnn.summary() </code></pre> <p>I repeated this experiment with the same data and different architectures (including a ResNet152 UNet from segmentation_models_3D. I have created a model with all layers except the last one and then added the last one as a conv3D with sigmoid activation. I still get the same result of probabilities less than 0 and more than 1.</p> <p>EDIT</p> <p>@Dueoksini Thank you for taking a look at this problem These are the layers</p> <pre><code>for layer in u_cnn.layers: print(layer) &lt;InputLayer name=input_layer, built=True&gt; &lt;Conv3D name=conv3d, built=True&gt; &lt;BatchNormalization name=batch_normalization, built=True&gt; &lt;Conv3D name=conv3d_1, built=True&gt; &lt;MaxPooling3D name=max_pooling3d, built=True&gt; &lt;Conv3D name=conv3d_2, built=True&gt; &lt;BatchNormalization name=batch_normalization_1, built=True&gt; &lt;Conv3D name=conv3d_3, built=True&gt; &lt;MaxPooling3D name=max_pooling3d_1, built=True&gt; &lt;Conv3D name=conv3d_4, built=True&gt; &lt;BatchNormalization name=batch_normalization_2, built=True&gt; &lt;Conv3D name=conv3d_5, built=True&gt; &lt;MaxPooling3D name=max_pooling3d_2, built=True&gt; &lt;Conv3D name=conv3d_6, built=True&gt; &lt;BatchNormalization name=batch_normalization_3, built=True&gt; &lt;Conv3D name=conv3d_7, built=True&gt; &lt;MaxPooling3D name=max_pooling3d_3, built=True&gt; &lt;Conv3D name=conv3d_8, built=True&gt; &lt;BatchNormalization name=batch_normalization_4, built=True&gt; &lt;Conv3D name=conv3d_9, built=True&gt; &lt;Conv3DTranspose name=conv3d_transpose, built=True&gt; &lt;Concatenate name=concatenate, built=True&gt; &lt;Conv3D name=conv3d_10, built=True&gt; &lt;BatchNormalization name=batch_normalization_5, built=True&gt; &lt;Conv3D name=conv3d_11, built=True&gt; &lt;Conv3DTranspose name=conv3d_transpose_1, built=True&gt; &lt;Concatenate name=concatenate_1, built=True&gt; &lt;Conv3D name=conv3d_12, built=True&gt; &lt;BatchNormalization name=batch_normalization_6, built=True&gt; &lt;Conv3D name=conv3d_13, built=True&gt; &lt;Conv3DTranspose name=conv3d_transpose_2, built=True&gt; &lt;Concatenate name=concatenate_2, built=True&gt; &lt;Conv3D name=conv3d_14, built=True&gt; &lt;BatchNormalization name=batch_normalization_7, built=True&gt; &lt;Conv3D name=conv3d_15, built=True&gt; &lt;Conv3DTranspose name=conv3d_transpose_3, built=True&gt; &lt;Concatenate name=concatenate_3, built=True&gt; &lt;Conv3D name=conv3d_16, built=True&gt; &lt;BatchNormalization name=batch_normalization_8, built=True&gt; &lt;Conv3D name=conv3d_17, built=True&gt; &lt;Conv3D name=conv3d_18, built=True&gt; &lt;Dropout name=dropout, built=True&gt; &lt;Conv3D name=conv3d_19, built=True&gt; &lt;Conv3D name=conv3d_20, built=True&gt; </code></pre> <p>Important to remember that</p> <pre><code>u_cnn.layers[-1].activation &lt;function keras.src.activations.activations.sigmoid(x)&gt; </code></pre> <p>So it appears that the latest Conv3D has activation sigmoid.</p> <p>@xdurch0 Thank you for taking a look at this problem.</p> <p>I cannot give a reproducible example because these are brain MRIs (not publicly available data). However, they are 3-dimensional arrays in which each number represents a voxel value in the MRI. 
Each MRI is 256x256x256x2 (2 channels, one for T1, one for FLAIR) and each MRI is broken down (patchified) into 64x64x64x2 blocks (just because of GPU limits).</p> <p>Each MRI is normalized to values between 0 and 1 and the segmentation masks have only two values: 0 and 1. The prediction that the network should give is for each voxel in the 3D data, it should classify as lesion or no lesion.</p> <p>There is quite a bit of data augmentation with volumentations 3D and with voxelmentations.</p> <p>Hope this helps, but I still do not understand how anything through a sigmoid is different than 0 to 1.</p> <p>Any direction appreciated.</p> <p>Thank you</p> <p>Each voxel in the MRI has two categories (0: no lesion, 1: lesion).</p> <p>EDIT 2 There was an answer that disappeared but helped me a lot.</p> <p>Essentially, they recommended to test this to see if the error is because the activation is not working or it is somewhere else.</p> <pre><code># Get the raw output of the last Conv3D layer (before its activation) # This requires creating a temporary model that outputs the pre-activation tensor last_conv3d_layer_name = u_cnn.layers[-1].name # This should be 'conv3d_20' pre_activation_model = tf.keras.Model(inputs=u_cnn.inputs, outputs=u_cnn.get_layer(last_conv3d_layer_name).input) raw_logits = pre_activation_model.predict(X_train, batch_size=16) print(f&quot;Min/Max of RAW LOGITS (before sigmoid): {np.min(raw_logits)}, {np.max(raw_logits)}&quot;) # Manually apply sigmoid to these raw logits manual_sigmoid_predictions = tf.sigmoid(raw_logits).numpy() print(f&quot;Min/Max after MANUAL SIGMOID: {np.min(manual_sigmoid_predictions)}, {np.max(manual_sigmoid_predictions)}&quot;) # Now, compare with your original prediction output print(f&quot;Min/Max of Y_predictions_train (original output): {np.min(Y_predictions_train)}, {np.max(Y_predictions_train)}&quot;) </code></pre> <p>Also, they recommended that I make the sigmoid layer explicit like this</p> <pre><code> # outputs #outputs = layers.Conv3D(filters=1, kernel_size=1, strides=1, padding='same', activation = 'sigmoid')(u12) logits = layers.Conv3D(filters=1, kernel_size=1, activation=None)(u12) # No activation outputs = layers.Activation('sigmoid')(logits) # Explicit layer # unet model with Keras Functional API u_cnn = tf.keras.Model(inputs, outputs, name=&quot;U-Net&quot;) </code></pre> <p>Even like this I still get probabilities greater than 1 or less than 0.</p> <p>Thank you to Dr. Snoopy, I predicted on a non-trained CNN with this structure and I still get probabilities greater than 1 or less than 0. I used this same UNet architecture in another study and worked perfectly well, so I am really confused.</p> <p>Any suggestion welcome. Thank you</p> <p>EDIT 3</p> <p>Among the multiple things I am trying, I have tried to figure out if there is any part of the X_train array that is the cause of the probabilities greater than 1 or less than 0. So I broke down the X_train in parts.</p> <p>X_train.shape is (11008, 64, 64, 64, 2)</p> <p>So I broke down the prediction into Y_predictions_train2 = trained_u_cnn.predict(X_train[:5000, :, :, :, :], batch_size=16) and Y_predictions_train2 = trained_u_cnn.predict(X_train[5000:, :, :, :, :], batch_size=16) and in both cases the np.min of Y_predictions_train2 is 0.0 and np.max of Y_predictions_train2 is 1.0. Now, if I calculate Y_predictions_train2 = trained_u_cnn.predict(X_train[:, :, :, :, :], batch_size=16) the np.min of Y_predictions_train2 is -2692.2175 and the np.max of Y_predictions_train2 is 61683.62. 
I have tried to break down it into smaller chunks and the same: the small arrays predict probabilities between 0.0 and 1.0, but the full array predicts probabilities less than 0 and more than 1. Any direction much appreciated, because I have no idea what the issue is here and the results of this are quite confusing.</p> <p>EDIT 4 In response to Dueoksini's question (how did you define the focal loss?), this is the loss I used focal Tversky loss as defined below:</p> <pre><code>ALPHA = 0.25 BETA = 0.75 GAMMA = 1.0 def FocalTverskyLoss(targets, inputs, alpha=ALPHA, beta=BETA, gamma=GAMMA, smooth=1e-6): #flatten label and prediction tensors inputs = K.flatten(inputs) targets = K.flatten(targets) #True Positives, False Positives &amp; False Negatives TP = K.sum(inputs * targets) FP = K.sum((1-targets) * inputs) FN = K.sum(targets * (1-inputs)) Tversky = (TP + smooth) / (TP + alpha*FP + beta*FN + smooth) FocalTversky = K.pow((1 - Tversky), gamma) return FocalTversky </code></pre> <p>I defined the Dice coefficient as below:</p> <pre><code>def dice_coef(y_true, y_pred, smooth=1e-6): y_true_f = K.flatten(y_true) y_pred_f = K.flatten(y_pred) intersection = K.sum(y_true_f * y_pred_f) return (2. * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth) </code></pre> <p>And compiled them as below:</p> <pre><code>u_cnn.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001), loss=FocalTverskyLoss, metrics=['accuracy', tf.keras.metrics.BinaryIoU(target_class_ids=[1], threshold=0.5), dice_coef]) </code></pre> <p>Please, remember that the data are 3D brain MRI data and we are trying to predict whether each voxel has lesion tissue or not (semantic segmentation in 3 dimensions).</p> <p>Thank you</p>
<python><tensorflow><keras><deep-learning>
2025-06-06 15:27:23
0
771
Ivan
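Not a root-cause explanation, but since the chunked predictions in EDIT 3 stay inside [0, 1], one way to keep working (and to confirm that only the single full-array predict() call misbehaves) is to stream the prediction in slices and track the extremes, instead of materialising the full output (roughly 11008 x 64^3 float32 values, on the order of 11 GB) in one call. A rough sketch, assuming trained_u_cnn and X_train as in the question:
<pre><code>
import numpy as np

gmin, gmax = np.inf, -np.inf
step = 512  # slice size is arbitrary; per EDIT 3 each slice stays within [0, 1]
for start in range(0, len(X_train), step):
    p = trained_u_cnn.predict(X_train[start:start + step], batch_size=16, verbose=0)
    gmin = min(gmin, float(p.min()))
    gmax = max(gmax, float(p.max()))
print(f"min={gmin}, max={gmax}")
</code></pre>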
79,656,085
2,620,122
Call LiteLLM proxy with LiteLLM SDK and client side credentials (Bedrock IAM + OpenAI)
<p>Is it possible to call a LiteLLM proxy that does not have configured model credentials with the LiteLLM Python SDK using client side credentials? Interested in doing this for bedrock with client side IAM instance role as well as OpenAI with explicit API key passed in client side. The examples in the docs <a href="https://docs.litellm.ai/docs/proxy/clientside_auth" rel="nofollow noreferrer">https://docs.litellm.ai/docs/proxy/clientside_auth</a> reference <code>extra_body</code> param for the OpenAI client but that's not something I see supported in litellm's <code>completion</code>.</p>
<python><litellm>
2025-06-06 14:43:44
0
1,664
KlugscheiรŸer
79,656,077
2,082,681
Intellij IDEA: After upgrading to 2025.1 cannot run Python configurations: packaging_tool.py': [Errno 2] No such file or directory
<p>I had an installation of some 2024 version of Intellij IDEA and had no issues running Python configurations, like just running a .py file in it.</p> <p>Today I got a notification to upgrade Intellij IDEA and it was version 2025.1 something. After I upgraded, when I click the green arrow button to run a Python configuration, there appears a spinner in place of it and nothing happens.</p> <p>When I open <code>requirements.txt</code> and click the <code>Install requirements</code> button at the top, I get this error:</p> <pre><code>C:\Users\Oleksandr_Gavryliuk\IdeaProjects\daily-dragon-agent\venv\Scripts\python.exe: can't open file 'C:\\Users\\Oleksandr_Gavryliuk\\AppData\\Roaming\\JetBrains\\IntelliJIdea2025.1\\plugins\\python-ce\\helpers\\packaging_tool.py': [Errno 2] No such file or directory </code></pre> <p>When I go to the directory specified, there is no <code>packaging_tool.py</code> there.</p> <p>Well, it is there for a previous version of Intellij IDEA. Can I just copy it over? XD</p> <p>When I click debug, I get this:</p> <pre><code>cannot be resolved against Python helpers roots: [C:\Users\Oleksandr_Gavryliuk\AppData\Roaming\JetBrains\IntelliJIdea2025.1\plugins\python-ce\helpers, C:\Users\Oleksandr_Gavryliuk\AppData\Roaming\JetBrains\IntelliJIdea2025.1\plugins\python\helpers-pro] </code></pre>
<python><intellij-idea>
2025-06-06 14:37:37
1
329
havryliuk
79,655,640
4,972,737
How to prevent imported code from making unknown network connections
<p>Importing public Github repositories is often necessary in coding. However, very often Github repositories contain malicious code that can send sensitive data to an attacker's IP.</p> <p>There are CodeQL and Semgrep that check for malicious code, but they don't support all languages. Also, they can't distinguish benign uploads from malicious uploads. (Hackers can always upload sensitive data in the name of 'cloud service', 'log', etc.)</p> <p>I concluded that the code needs a <strong>network control</strong> that would prevent unknown network connections. Based on my research, there were two candidate solutions.</p> <ol> <li><p><strong>Code level network control.</strong> For example, <a href="https://stackoverflow.com/a/18601897/4972737">Python's socket can be patched to only allow certain connections</a>. However, it only applies to Python sockets, and cannot prevent sockets opened by imports written in other languages. Is there a code-level network control method that prevents all network connections, unless explicitly allowed in code?</p> </li> <li><p><strong>Kernel level network control.</strong> This means using the OS firewall, or running in Docker with a specific firewall config. I tried adding the connected python.exe to Windows Firewall Outbound Rules, but it had no effect. How can the OS firewall be correctly configured to control network connections? Or, is there a better method?</p> </li> </ol> <p>Thanks!</p>
<python><security><network-programming><windows-firewall><network-security>
2025-06-06 08:44:23
1
387
new
79,655,544
2,172,547
Not able to send stream from Asterisk audio socket to python audio socket server
<p>I added this in asterisk conf</p> <pre><code> same =&gt; n(newIvr),Log(NOTICE, Start New IVR POC flow) same =&gt; n,AudioSocket(${UUID()},172.25.25.150:1579) same =&gt; n,Playback(goodbye) same =&gt; n,Hangup() </code></pre> <p>and my python audio socket server</p> <pre><code>import sys import threading import time import json import os import audioop import wave import logging import signal from google.cloud import speech from google.protobuf.json_format import MessageToDict from audiosocket import Audiosocket logging.basicConfig( stream=sys.stdout, format='%(asctime)s [%(levelname)s] %(message)s', level=logging.INFO ) log = logging.getLogger() # Configuration port = 1579 audiosocket = Audiosocket((&quot;0.0.0.0&quot;, port)) speech_client = speech.SpeechClient() def graceful_shutdown(signum, frame): log.info(&quot;๐Ÿ›‘ Shutting down service due to signal...&quot;) sys.exit(0) signal.signal(signal.SIGTERM, graceful_shutdown) signal.signal(signal.SIGINT, graceful_shutdown) def save_raw_to_wav(call_uuid, raw_filename, wav_filename, sample_rate=8000): try: file_size = os.path.getsize(raw_filename) if file_size == 0: log.warning(f&quot;[{call_uuid}] โš ๏ธ Raw file is empty, skipping WAV conversion&quot;) return with open(raw_filename, 'rb') as raw_file: raw_data = raw_file.read() duration_seconds = len(raw_data) / (sample_rate * 2) if duration_seconds &lt; 0.1: log.warning(f&quot;[{call_uuid}] โš ๏ธ Duration &lt; 0.1s ({duration_seconds:.2f}s), skipping WAV conversion&quot;) return with wave.open(wav_filename, 'wb') as wav_file: wav_file.setnchannels(1) wav_file.setsampwidth(2) wav_file.setframerate(sample_rate) wav_file.writeframes(raw_data) log.info(f&quot;[{call_uuid}] โœ… WAV file saved: {wav_filename} ({duration_seconds:.2f} sec)&quot;) except Exception as e: log.error(f&quot;[{call_uuid}] โŒ Error converting raw to WAV: {e}&quot;) def handle_call(conn): call_uuid = conn.uuid log.info(f&quot;[{call_uuid}] ๐Ÿ”” Incoming connection with UUID: {call_uuid}&quot;) audio_filename = f&quot;{call_uuid}.raw&quot; try: audio_file = open(audio_filename, &quot;wb&quot;) log.info(f&quot;[{call_uuid}] ๐Ÿ’พ Opened local raw audio file for writing: {audio_filename}&quot;) except Exception as e: log.error(f&quot;[{call_uuid}] โŒ Failed to open file {audio_filename}: {e}&quot;) audio_file = None def audio_generator(): total_bytes = 0 last_data_time = time.time() start_time = time.time() while conn.connected: data = conn.read() now = time.time() if not data: if now - last_data_time &gt; 2: log.info(f&quot;[{call_uuid}] ๐Ÿค Detected silence &gt; 2 seconds, stopping stream&quot;) break time.sleep(0.05) continue last_data_time = now total_bytes += len(data) # ๐Ÿ”ฅ Decode G.711 ฮผ-law (8-bit) to 16-bit PCM # decoded_data = audioop.ulaw2lin(data, 2) decoded_data = data if audio_file: try: audio_file.write(decoded_data) except Exception as file_write_err: log.error(f&quot;[{call_uuid}] โŒ Error writing audio data: {file_write_err}&quot;) log.info(f&quot;[{call_uuid}] ๐ŸŽง Received {len(data)} bytes (total: {total_bytes})&quot;) yield speech.StreamingRecognizeRequest(audio_content=decoded_data) if now - start_time &gt; 8: log.info(f&quot;[{call_uuid}] โฐ Stream duration &gt; 8s, stopping stream&quot;) break log.info(f&quot;[{call_uuid}] ๐Ÿ›‘ Finished streaming audio. 
Total bytes: {total_bytes}&quot;) config = speech.RecognitionConfig( encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16, sample_rate_hertz=8000, language_code=&quot;en-US&quot;, ) streaming_config = speech.StreamingRecognitionConfig( config=config, interim_results=False # Only final results ) try: log.info(f&quot;[{call_uuid}] ๐Ÿš€ Starting Google Speech API streaming&quot;) responses = speech_client.streaming_recognize( config=streaming_config, requests=audio_generator() ) responses_dict = [MessageToDict(responses) for response in responses] log.info(f&quot;[{call_uuid}] ๐Ÿ—‚๏ธ Full responses object:\n{json.dumps(responses_dict, indent=2)}&quot;) # โœ… Process responses here directly without exhausting the generator for response in responses: log.info(f&quot;[{call_uuid}] ๐ŸŽ™๏ธ Got response from Google&quot;) for result in response.results: if result.is_final: transcript = result.alternatives[0].transcript log.info(f&quot;[{call_uuid}] ๐Ÿ“ Final transcript: {transcript}&quot;) print(f&quot;[{call_uuid}] ๐Ÿ—ฃ๏ธ Final transcript: {transcript}&quot;) except Exception as e: log.error(f&quot;[{call_uuid}] โŒ Google Speech API error: {e}&quot;) finally: if audio_file: try: audio_file.close() log.info(f&quot;[{call_uuid}] ๐Ÿ’พ Closed raw audio file: {audio_filename}&quot;) save_raw_to_wav(call_uuid, audio_filename, f&quot;{call_uuid}.wav&quot;) except Exception as e: log.error(f&quot;[{call_uuid}] โŒ Error closing file: {e}&quot;) if conn: try: conn.hangup() conn.close() log.info(f&quot;[{call_uuid}] โœ… Connection closed&quot;) except Exception as close_err: log.error(f&quot;[{call_uuid}] โŒ Error closing connection: {close_err}&quot;) log.info(&quot;๐Ÿšฆ Server started and listening for incoming connections&quot;) while True: try: conn = audiosocket.listen() time.sleep(0.3) log.info(&quot;๐Ÿ”” Received connection from audiosocket&quot;) # print(&quot;conn.uuid:&quot;, getattr(conn, &quot;uuid&quot;, &quot;no uuid&quot;)) threading.Thread(target=handle_call, args=(conn,), daemon=True).start() except Exception as listener_err: log.error(f&quot;โŒ Exception in main listener loop: {listener_err}&quot;) import traceback traceback.print_exc() </code></pre> <p>and the logs</p> <pre><code>2025-06-06 12:53:53,404 [INFO] ๐Ÿšฆ Server started and listening for incoming connections 2025-06-06 12:53:58,886 [INFO] ๐Ÿ”” Received connection from audiosocket 2025-06-06 12:53:58,887 [INFO] [40325ec25efd4bd3805f53576e581d13] ๐Ÿ”” Incoming connection with UUID: 40325ec25efd4bd3805f53576e581d13 2025-06-06 12:53:58,951 [INFO] [40325ec25efd4bd3805f53576e581d13] ๐Ÿ’พ Opened local raw audio file for writing: 40325ec25efd4bd3805f53576e581d13.raw 2025-06-06 12:53:58,952 [INFO] [40325ec25efd4bd3805f53576e581d13] ๐Ÿš€ Starting Google Speech API streaming 2025-06-06 12:53:59,323 [INFO] [40325ec25efd4bd3805f53576e581d13] ๐ŸŽง Received 320 bytes (total: 320) 2025-06-06 12:53:59,527 [INFO] [40325ec25efd4bd3805f53576e581d13] ๐ŸŽง Received 320 bytes (total: 640) 2025-06-06 12:53:59,729 [INFO] [40325ec25efd4bd3805f53576e581d13] ๐ŸŽง Received 320 bytes (total: 960) 2025-06-06 12:53:59,931 [INFO] [40325ec25efd4bd3805f53576e581d13] ๐ŸŽง Received 320 bytes (total: 1280) 2025-06-06 12:54:00,132 [INFO] [40325ec25efd4bd3805f53576e581d13] ๐ŸŽง Received 320 bytes (total: 1600) 2025-06-06 12:54:00,334 [INFO] [40325ec25efd4bd3805f53576e581d13] ๐ŸŽง Received 320 bytes (total: 1920) 2025-06-06 12:54:00,536 [INFO] [40325ec25efd4bd3805f53576e581d13] ๐ŸŽง Received 320 bytes (total: 2240) 2025-06-06 12:54:00,737 [INFO] 
[40325ec25efd4bd3805f53576e581d13] ๐ŸŽง Received 320 bytes (total: 2560) 2025-06-06 12:54:00,939 [INFO] [40325ec25efd4bd3805f53576e581d13] ๐ŸŽง Received 320 bytes (total: 2880) 2025-06-06 12:54:01,141 [INFO] [40325ec25efd4bd3805f53576e581d13] ๐ŸŽง Received 320 bytes (total: 3200) 2025-06-06 12:54:01,348 [INFO] [40325ec25efd4bd3805f53576e581d13] ๐ŸŽง Received 320 bytes (total: 3520) 2025-06-06 12:54:01,550 [INFO] [40325ec25efd4bd3805f53576e581d13] ๐ŸŽง Received 320 bytes (total: 3840) 2025-06-06 12:54:01,753 [INFO] [40325ec25efd4bd3805f53576e581d13] ๐ŸŽง Received 320 bytes (total: 4160) 2025-06-06 12:54:01,959 [INFO] [40325ec25efd4bd3805f53576e581d13] ๐ŸŽง Received 320 bytes (total: 4480) 2025-06-06 12:54:02,164 [INFO] [40325ec25efd4bd3805f53576e581d13] ๐ŸŽง Received 320 bytes (total: 4800) 2025-06-06 12:54:02,369 [INFO] [40325ec25efd4bd3805f53576e581d13] ๐ŸŽง Received 320 bytes (total: 5120) 2025-06-06 12:54:02,572 [INFO] [40325ec25efd4bd3805f53576e581d13] ๐ŸŽง Received 320 bytes (total: 5440) 2025-06-06 12:54:02,778 [INFO] [40325ec25efd4bd3805f53576e581d13] ๐ŸŽง Received 320 bytes (total: 5760) 2025-06-06 12:54:02,985 [INFO] [40325ec25efd4bd3805f53576e581d13] ๐ŸŽง Received 320 bytes (total: 6080) 2025-06-06 12:54:03,194 [INFO] [40325ec25efd4bd3805f53576e581d13] ๐ŸŽง Received 320 bytes (total: 6400) 2025-06-06 12:54:03,404 [INFO] [40325ec25efd4bd3805f53576e581d13] ๐ŸŽง Received 320 bytes (total: 6720) 2025-06-06 12:54:03,606 [INFO] [40325ec25efd4bd3805f53576e581d13] ๐ŸŽง Received 320 bytes (total: 7040) 2025-06-06 12:54:03,808 [INFO] [40325ec25efd4bd3805f53576e581d13] ๐ŸŽง Received 320 bytes (total: 7360) 2025-06-06 12:54:04,011 [INFO] [40325ec25efd4bd3805f53576e581d13] ๐ŸŽง Received 320 bytes (total: 7680) 2025-06-06 12:54:04,213 [INFO] [40325ec25efd4bd3805f53576e581d13] ๐ŸŽง Received 320 bytes (total: 8000) 2025-06-06 12:54:04,416 [INFO] [40325ec25efd4bd3805f53576e581d13] ๐ŸŽง Received 320 bytes (total: 8320) 2025-06-06 12:54:04,619 [INFO] [40325ec25efd4bd3805f53576e581d13] ๐ŸŽง Received 320 bytes (total: 8640) 2025-06-06 12:54:04,821 [INFO] [40325ec25efd4bd3805f53576e581d13] ๐ŸŽง Received 320 bytes (total: 8960) 2025-06-06 12:54:05,044 [INFO] [40325ec25efd4bd3805f53576e581d13] ๐ŸŽง Received 320 bytes (total: 9280) 2025-06-06 12:54:05,246 [INFO] [40325ec25efd4bd3805f53576e581d13] ๐ŸŽง Received 320 bytes (total: 9600) 2025-06-06 12:54:05,447 [INFO] [40325ec25efd4bd3805f53576e581d13] ๐ŸŽง Received 320 bytes (total: 9920) 2025-06-06 12:54:05,649 [INFO] [40325ec25efd4bd3805f53576e581d13] ๐ŸŽง Received 320 bytes (total: 10240) 2025-06-06 12:54:05,851 [INFO] [40325ec25efd4bd3805f53576e581d13] ๐ŸŽง Received 320 bytes (total: 10560) 2025-06-06 12:54:06,053 [INFO] [40325ec25efd4bd3805f53576e581d13] ๐ŸŽง Received 320 bytes (total: 10880) 2025-06-06 12:54:06,255 [INFO] [40325ec25efd4bd3805f53576e581d13] ๐ŸŽง Received 320 bytes (total: 11200) 2025-06-06 12:54:06,458 [INFO] [40325ec25efd4bd3805f53576e581d13] ๐ŸŽง Received 320 bytes (total: 11520) 2025-06-06 12:54:06,659 [INFO] [40325ec25efd4bd3805f53576e581d13] ๐ŸŽง Received 320 bytes (total: 11840) 2025-06-06 12:54:06,861 [INFO] [40325ec25efd4bd3805f53576e581d13] ๐ŸŽง Received 320 bytes (total: 12160) 2025-06-06 12:54:07,063 [INFO] [40325ec25efd4bd3805f53576e581d13] ๐ŸŽง Received 320 bytes (total: 12480) 2025-06-06 12:54:07,265 [INFO] [40325ec25efd4bd3805f53576e581d13] ๐ŸŽง Received 320 bytes (total: 12800) 2025-06-06 12:54:07,265 [INFO] [40325ec25efd4bd3805f53576e581d13] โฐ Stream duration &gt; 8s, stopping stream 2025-06-06 12:54:07,266 [INFO] 
[40325ec25efd4bd3805f53576e581d13] ๐Ÿ›‘ Finished streaming audio. Total bytes: 12800 2025-06-06 12:54:07,483 [INFO] [40325ec25efd4bd3805f53576e581d13] ๐Ÿ—‚๏ธ Full responses object: [] 2025-06-06 12:54:07,490 [INFO] [40325ec25efd4bd3805f53576e581d13] ๐Ÿ’พ Closed raw audio file: 40325ec25efd4bd3805f53576e581d13.raw 2025-06-06 12:54:07,539 [INFO] [40325ec25efd4bd3805f53576e581d13] โœ… WAV file saved: 40325ec25efd4bd3805f53576e581d13.wav (0.80 sec) 2025-06-06 12:54:25,390 [ERROR] [40325ec25efd4bd3805f53576e581d13] โŒ Error closing connection: [Errno 9] Bad file descriptor </code></pre> <p>the problem is i am not recieving any text response after streaming to google speech stream apis.</p> <p>Additionally i tried to create files, it creats raw and wav file. both are 13kb every time, and 0 sec in wav. I tried online by converting raw to wav , but during conversion it shows invalid raw file.</p> <p>Even in the pjsip.conf in asterisk i tried <code>allow= slin,ulaw, alaw</code> but not working</p> <p>in my python code i tried with</p> <pre><code>decoded_data = audioop.ulaw2lin(data, 2) </code></pre> <p>still same issue</p> <p>i am using this audio repo <a href="https://github.com/silentindark/audiosocket_server/blob/master/README.md" rel="nofollow noreferrer">https://github.com/silentindark/audiosocket_server/blob/master/README.md</a></p> <p>any help on this?</p>
<python><asterisk><audiosocket>
2025-06-06 07:33:35
0
15,772
Code Guru
79,655,305
13,132,728
How to generate a given number of size N pairings from a list with minimum overlap between pairings?
<p>I have a list, <code>lst</code></p> <pre><code>lst = ['A','B','C','D','E','F','G'] </code></pre> <p>From this list, I want to generate a given number of lists, each of length <code>N</code> <strong>with the minimum possible amount of overlap between each item in <code>lst</code></strong>.</p> <p>For example, let's say I want to generate three lists of length three from <code>lst</code> by hand, and I come up with the following:</p> <pre><code>['A','B','C'] ['D','E','F'] ['G','A','B'] </code></pre> <p>With the length of <code>lst</code> and my parameters - three lists of length three - I actually did this sub-optimally, as <code>A</code> and <code>B</code> are in the same output twice. I understand repeats are unavoidable, but they still can be handled optimally. The following output is an example of what I am trying to achieve, as the output consists of the minimum amount of pairings between each item in <code>lst</code> as possible:</p> <pre><code>['A','B','C'] ['D','E','F'] ['G','A','D'] </code></pre> <p>Of course, this is easy to do manually when parameters are really small, but as they get larger it becomes increasingly difficult - and time consuming - to optimize it manually.</p> <p>I have tried looking through <code>itertools</code> docs, but can't quite find something that could help me (probably not looking or thinking hard enough...) <code>cycle()</code> came to mind, but that just cycles through the items in a list and doesn't handle either the generation of sublists nor the handling of minimum possible overlap. I would still think <code>cycle()</code> is useful here as I want an evenly distributed amount of each item from <code>lst</code> in the output lists. Also, is there a name for this type of problem? Some sort of graph theory? Thanks in advance for any help!</p> <p>EDIT: This problem has three parts. 1. input list (<code>lst</code>) 2. number of groupings to generate (in this example, 3) and 3. length of each grouping (in this example, also 3), with a max of this value being <code>len(lst)</code>. The goal is to have minimal duplicate pairings overall between any of the values in <code>lst</code> among all of the generated groupings.</p> <hr /> <h2><strong>EDIT 2: REPHRASING THE PROBLEM:</strong></h2> <p>I have a list, <code>lst</code>, from above.</p> <h4><strong>I have two desired parameters:</strong></h4> <p><code>N</code> - How many results to generate.</p> <p><code>x</code> - How many items are in each result.</p> <h4><strong>My conditions are:</strong></h4> <ul> <li><p>I want minimum pairings between any two items in <code>lst</code> out of all results. That is, if I want <code>N = 20</code>, I don't want, for example, <code>A</code> and <code>B</code> to appear in 5 results together while <code>A</code> and <code>C</code> or <code>B</code> and <code>C</code> appear in no results together. I understand under specific parameter settings that there are multiple ways to satisfy this condition. That does not matter to me. As long as the conditions are satisfied, it does not matter what specifically the results are</p> </li> <li><p>I want each item in <code>lst</code> to be used as evenly as possible. ie, I don't want <code>A</code> in every output and <code>G</code> in one output.</p> </li> </ul> <h4><strong>EXAMPLE:</strong></h4> <p>Let's say I want N = 5 and x = 3. This means that I want 5 results of length 3. Out of all 5 results, I want as little overlap between any two pairings as possible. 
I also want to use each item in <code>lst</code> an even number of times.</p> <p><strong>Unsatisfactory result:</strong></p> <pre><code>ABC DEF GAB CDE FGA </code></pre> <p><strong>Why is this unsatisfactory?</strong></p> <p>The first condition is sub optimally handled:</p> <p>AB - 2 AC - 1 AD - 0 AE - 0 AF - 1 AG - 1 BC - 1 BD - 0 BE - 0 BF - 0 BG - 1 CD - 1 CE - 1 CF - 0 CG - 0 DE - 2 DF - 1 DG - 0 EF - 1 EG - 0 FG - 1</p> <p>The second condition, however, is passed:</p> <p>A - 3, B - 2, C - 2, D - 2, E - 2, F - 2, G - 2</p> <p>It is easy to do this by hand when the parameters are small, but becomes increasingly difficult to keep track of as parameters grow. A satisfactory result for these parameters would likely not have some pairings with two common results and others with zero.</p> <p><strong>Look at the example in OP:</strong></p> <pre><code>['A','B','C'] ['D','E','F'] ['G','A','B'] </code></pre> <p>While satisfying the second condition, this is does not optimally handle the first condition as <code>AB</code> are in two outputs when it can be handled in many ways where no two pairings are in two outputs together, such as this:</p> <pre><code>['A','B','C'] ['D','E','F'] ['G','A','D'] </code></pre> <p>Yes, there are multiple solutions when the parameters are <code>N=3</code> and <code>x=3</code>. Again, I do not care about which solution is used, just as long as both conditions are satisfied.</p>
<python><list><algorithm>
2025-06-06 00:21:11
4
1,645
bismo
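A greedy heuristic gets surprisingly close to the behaviour asked for: fill each group one slot at a time, always taking the candidate with the fewest prior co-occurrences with the partial group, breaking ties by total usage so items are spread evenly. This is a sketch, not a provably optimal solver (the exact version of the problem is related to block/covering designs), but for the question's parameters it reproduces the desired output:
<pre><code>
import itertools
from collections import Counter

def make_groups(items, n_groups, group_size):
    """Greedily build groups, minimising repeated pairings and balancing usage."""
    pair_counts = Counter()          # co-occurrence count per unordered pair
    use_counts = Counter({x: 0 for x in items})
    groups = []
    for _ in range(n_groups):
        group = []
        for _ in range(group_size):
            candidates = [x for x in items if x not in group]
            # cost = overlap with the partial group, tie-broken by total usage
            pick = min(candidates,
                       key=lambda c: (sum(pair_counts[frozenset((c, g))] for g in group),
                                      use_counts[c]))
            group.append(pick)
        for a, b in itertools.combinations(group, 2):
            pair_counts[frozenset((a, b))] += 1
        for x in group:
            use_counts[x] += 1
        groups.append(group)
    return groups

print(make_groups(list("ABCDEFG"), n_groups=3, group_size=3))
# [['A', 'B', 'C'], ['D', 'E', 'F'], ['G', 'A', 'D']]
</code></pre>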
79,655,299
15,848,470
Extremely slow DB insert using Turbodbc
<p>I have built Turbodbc 5.1.2 from source with simdutf 7.3.0, Python 3.11. When trying to insert 150,000 rows of 46 columns to a MySQL 8.0 InnoDB table, Turbodb takes about 190s, compared to 15s with my existing method. I have modeled my attempt after the advice in here the advanced usage section <a href="https://turbodbc.readthedocs.io/en/latest/pages/advanced_usage.html#using-numpy-arrays-as-query-parameters" rel="nofollow noreferrer">https://turbodbc.readthedocs.io/en/latest/pages/advanced_usage.html#using-numpy-arrays-as-query-parameters</a>:</p> <pre><code>options = turbodbc.make_options( use_async_io=True, ) conn = turbodbc.connect( driver=&quot;MySQL Driver&quot;, server = get(&quot;host&quot;), port=3306, uid=get(&quot;user&quot;), pwd=get(&quot;pw&quot;), plugin_dir=&quot;/usr/local/lib/plugin&quot;, turbodbc_options = options, ) cursor = conn.cursor() cols = str(insert_df.columns).replace(&quot;'&quot;, &quot;&quot;).replace(&quot;[&quot;, &quot;(&quot;).replace(&quot;]&quot;, &quot;)&quot;) params = &quot;(&quot; + &quot;, &quot;.join([&quot;?&quot; for _ in insert_df.columns]) + &quot;)&quot; insert_df = insert_df.with_columns( cs.float().cast(pl.Float64), cs.integer().cast(pl.Int64) ) values = [x.to_numpy() for x in insert_df.iter_columns()] on_duplicate_key_update_stmts = ( str([i + &quot; = VALUES(&quot; + i + &quot;)&quot; for i in insert_df.columns]) .replace(&quot;[&quot;, &quot;&quot;) .replace(&quot;]&quot;, &quot;&quot;) .replace(&quot;'&quot;, &quot;&quot;) ) cursor.executemanycolumns(f&quot;INSERT INTO {table_name} {cols} VALUES {params} ON DUPLICATE KEY UPDATE updated_at = if(coalesce(data_change_hash,0) &lt;&gt; values(data_change_hash), NOW(),updated_at), updated_at_micro_ts = if(coalesce(data_change_hash,0) &lt;&gt; values(data_change_hash),NOW(6),updated_at_micro_ts), &quot; + on_duplicate_key_update_stmts + &quot;, modified_at=NOW();&quot;, values) </code></pre> <p>I also tried a row-wise insert like:</p> <pre><code>cursor.executemany(f&quot;INSERT INTO {table_name} {cols} VALUES {params} ON DUPLICATE KEY UPDATE updated_at = if(coalesce(data_change_hash,0) &lt;&gt; values(data_change_hash), NOW(),updated_at), updated_at_micro_ts = if(coalesce(data_change_hash,0) &lt;&gt; values(data_change_hash),NOW(6),updated_at_micro_ts), &quot; + on_duplicate_key_update_stmts + &quot;, modified_at=NOW();&quot;, list(insert_df.to_numpy())) </code></pre> <p>but this also took about 190s.</p> <p>My attempt using sqlalchemy looks just like the above cursor.executemany(query, data), but with the engine made from sqlalchemy and it uses %s to denote parameter substitutions:</p> <pre><code>INSERT INTO schema.table(my, forty, six, column, names, here, ...) VALUES ( %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s) ON DUPLICATE KEY UPDATE updated_at = if(coalesce(data_change_hash,0) &lt;&gt; values(data_change_hash), NOW(),updated_at), updated_at_micro_ts = if(coalesce(data_change_hash,0) &lt;&gt; values(data_change_hash),NOW(6),updated_at_micro_ts), col1 = VALUES(col1), col2 = VALUES(col2), ..., modified_at=NOW(); </code></pre> <p>Any help would be much appreciated.</p>
<python><insert><sql-insert><bulkinsert><turbodbc>
2025-06-06 00:00:44
0
684
GBPU
79,655,240
19,527,503
Accessing modules from other folders within a Python project
<p>I am getting the following error <code>No module named 'backend'</code> when trying to import from within the <code>python1.py</code> file.</p> <p>I am trying to import like this:</p> <pre><code>import sys from backend.utils import * </code></pre> <p>The project folder is structured as follows:</p> <pre><code>Project Folder ---backend ---folder_1 ---python1.py ---utils.py </code></pre>
<python>
2025-06-05 22:05:15
1
323
marsprogrammer
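The usual cause of this error is that python1.py is run directly, so only its own directory (folder_1) lands on sys.path and the sibling backend package is not importable. A sketch of one fix, under the assumption that the layout is Project Folder/backend/utils.py and Project Folder/folder_1/python1.py: put the project root on sys.path before the import.
<pre><code>
# top of folder_1/python1.py (layout assumed as described above)
import sys
from pathlib import Path

PROJECT_ROOT = Path(__file__).resolve().parents[1]  # .../Project Folder
if str(PROJECT_ROOT) not in sys.path:
    sys.path.insert(0, str(PROJECT_ROOT))

from backend.utils import *  # noqa: E402,F403
</code></pre>
Running the script as a module from the project root (python -m folder_1.python1) is an alternative that needs no sys.path editing, since the working directory then serves as the import root.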
79,655,193
27,596,369
I cannot create table using SQLAlchemy
<p>I define a table with a few columns in an empty database and call <code>db.create_all()</code> inside <code>with app.app_context()</code>, but when I run the code, the database is still empty and there are no tables. CODE EDITOR: Visual Studio Code, DATABASE EDITOR: DB Browser for SQLite</p> <p>Here is my code:</p> <pre><code>from flask import Flask from flask_sqlalchemy import SQLAlchemy from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column from sqlalchemy import Integer, String, Float app = Flask(__name__) class Base(DeclarativeBase): pass app.config['SQLALCHEMY_DATABASE_URI'] = &quot;sqlite:///new-books-collection.db&quot; db = SQLAlchemy(model_class=Base) db.init_app(app) class Book(db.Model): id: Mapped[int] = mapped_column(Integer, primary_key=True) title: Mapped[str] = mapped_column(String(250), unique=True, nullable=False) author: Mapped[str] = mapped_column(String(250), nullable=False) rating: Mapped[float] = mapped_column(Float, nullable=False) with app.app_context(): db.create_all() </code></pre> <p>But when I run it, the data source is still blank with no tables.</p>
<python><flask><sqlalchemy><flask-sqlalchemy>
2025-06-05 20:59:16
1
1,512
Aadvik
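Before concluding that create_all() did nothing, it is worth checking where the SQLite file actually went: Flask-SQLAlchemy 3.x resolves a relative sqlite:/// path against the app's instance folder, so the tables may have been created in instance/new-books-collection.db rather than in the file DB Browser has open. A sketch that removes the ambiguity by building an absolute path and printing it (the model is trimmed to two columns; the path handling is the point):
<pre><code>
import os

from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from sqlalchemy import Integer, String
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column

app = Flask(__name__)

class Base(DeclarativeBase):
    pass

os.makedirs(app.instance_path, exist_ok=True)
db_path = os.path.join(app.instance_path, "new-books-collection.db")
app.config["SQLALCHEMY_DATABASE_URI"] = f"sqlite:///{db_path}"

db = SQLAlchemy(model_class=Base)
db.init_app(app)

class Book(db.Model):
    id: Mapped[int] = mapped_column(Integer, primary_key=True)
    title: Mapped[str] = mapped_column(String(250), unique=True, nullable=False)

with app.app_context():
    db.create_all()
    print("SQLite file written to:", db_path)  # open exactly this file in DB Browser
</code></pre>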
79,655,174
7,959,614
Use Scipy multivariate_normal with multiple covariance matrices
<p>I have an array of covariance matrices with shape <code>(n_samples, n, n)</code>. I'd like to compute <code>logpdf</code> values for multiple coordinate values and covariance matrices in a vectorized way. For now I've tried to use the following code</p> <pre><code>import numpy as np import matplotlib.pyplot as plt from scipy.stats import multivariate_normal x, y = np.mgrid[-1:1:.1, -1:1:.1] pos = np.dstack((x, y)) mean = np.array([0.5, -0.2]) Sigma = np.array([ [[2.0, 0.3], [0.3, 0.5]], [[2.0, 0.3], [0.3, 0.5]] ]) mvn = multivariate_normal.logpdf( x=pos, mean=mean, cov=Sigma ) </code></pre> <p>However, this results in the following error:</p> <pre><code>&gt; ValueError: Array 'cov' must be at most two-dimensional, but cov.ndim &gt; = 3 </code></pre> <p>I don't want to use a &quot;for loop&quot; as the number of samples can be large. This <a href="https://gregorygundersen.com/blog/2020/12/12/group-multivariate-normal-pdf/" rel="nofollow noreferrer">blog</a> shows how to calculate the multivariate normal logpdf over multiple sets of parameters. However, I have a problem with adjusting the shapes of the arrays used.</p> <p>How do I change the code given in the blog so that it deals correctly with my data?</p> <pre><code>logpdfs = custom_multiple_logpdfs(x=pos, mu=[0.5, -0.2], cov=Sigma) </code></pre> <p>The code in the blog looks as follows:</p> <pre><code>def multiple_logpdfs_vec_input(x, mean, covs): # NumPy broadcasts `eigh`. vals, vecs = np.linalg.eigh(covs) # Compute the log determinants across the second axis. logdets = np.sum(np.log(vals), axis=1) # Invert the eigenvalues. valsinvs = 1./vals # Add a dimension to `valsinvs` so that NumPy broadcasts appropriately. Us = vecs * np.sqrt(valsinvs)[:, None] devs = x[:, None, :] - mean[None, :, :] # Use `einsum` for matrix-vector multiplications across the first dimension. devUs = np.einsum('jnk,nki-&gt;jni', devs, Us) # Compute the Mahalanobis distance by squaring each term and summing. mahas = np.sum(np.square(devUs), axis=2) # Compute and broadcast scalar normalizers. dim = x.shape[1] log2pi = np.log(2 * np.pi) out = -0.5 * (dim * log2pi + mahas + logdets[None, :]) return out.T </code></pre> <p>I can flatten <code>pos</code> easily by</p> <blockquote> <p>x = pos.reshape(-1, 2)</p> </blockquote> <p>How do I edit the <code>einsum</code> so it works with the defined variables?</p>
<python><scipy>
2025-06-05 20:30:46
2
406
HJA24
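scipy's multivariate_normal only accepts a single covariance, so the batching has to be done in NumPy. Below is a self-contained sketch equivalent in spirit to the blog's approach but written around a batched Cholesky factorisation, with shapes chosen to match the question: points of shape (n_points, d) (i.e. pos.reshape(-1, 2)), one mean of shape (d,), and covariances of shape (n_samples, d, d). It returns an (n_samples, n_points) array and checks itself against a plain scipy loop on a small case.
<pre><code>
import numpy as np
from scipy.stats import multivariate_normal

def logpdf_many_covs(x, mean, covs):
    """log N(x[j] | mean, covs[k]) for every point j and covariance k."""
    x = np.atleast_2d(x)                       # (n_points, d)
    n_points, d = x.shape
    dev = (x - mean).T                         # (d, n_points)
    chol = np.linalg.cholesky(covs)            # (n_cov, d, d), batched
    # whiten the deviations: solve L z = dev for each covariance
    z = np.linalg.solve(chol, np.broadcast_to(dev, (covs.shape[0], d, n_points)))
    maha = np.sum(z ** 2, axis=1)              # (n_cov, n_points)
    logdet = 2.0 * np.sum(np.log(np.diagonal(chol, axis1=1, axis2=2)), axis=1)
    return -0.5 * (d * np.log(2.0 * np.pi) + maha + logdet[:, None])

# small self-check against a plain scipy loop
rng = np.random.default_rng(0)
pts = rng.normal(size=(5, 2))
mean = np.array([0.5, -0.2])
covs = np.array([[[2.0, 0.3], [0.3, 0.5]],
                 [[1.0, 0.1], [0.1, 0.4]]])
ours = logpdf_many_covs(pts, mean, covs)
ref = np.array([multivariate_normal.logpdf(pts, mean=mean, cov=c) for c in covs])
assert np.allclose(ours, ref)
</code></pre>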
79,655,153
1,488,601
How to make a custom error message for Service Unavailable in FastAPI?
<p>I'm running my app like so:</p> <pre class="lang-bash prettyprint-override"><code>uvicorn main:app --host 0.0.0.0 --port 8080 --workers 2 --limit-concurrency 10 </code></pre> <p>Versions are:</p> <pre><code>fastapi==0.103.2 uvicorn==0.34.3 </code></pre> <p>When I start slamming it, I get the expected <code>503 Service Unavailable</code> error.</p> <p>I want to have a custom error message when this happens.</p> <p>I thought this code would catch this error, but that code never gets touched when I get a <code>503</code>.</p> <p>What am I doing wrong?</p> <pre class="lang-py prettyprint-override"><code>app = FastAPI() class ServiceUnavailableException(Exception): def __init__(self, message: str): self.message = message @app.exception_handler(ServiceUnavailableException) async def service_unavailable_exception_handler(request: Request, exc: ServiceUnavailableException): return JSONResponse( status_code=status.HTTP_503_SERVICE_UNAVAILABLE, # Use 503 status code content={&quot;message&quot;: exc.message} ) </code></pre>
<python><exception><fastapi>
2025-06-05 19:56:19
1
2,507
grayaii
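The handler is most likely never reached because --limit-concurrency is enforced by uvicorn itself: the 503 is emitted before the request ever enters the ASGI application, so FastAPI exception handlers cannot intercept it. One possible workaround (a sketch, not the only option) is to drop --limit-concurrency and enforce the cap inside the app, where the response body is yours to shape:
<pre><code>
import asyncio

from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse

app = FastAPI()
MAX_CONCURRENT = 10                 # mirrors the old --limit-concurrency value
_slots = asyncio.Semaphore(MAX_CONCURRENT)

@app.middleware("http")
async def concurrency_limit(request: Request, call_next):
    if _slots.locked():             # every slot taken: reply with our own 503 body
        return JSONResponse(
            status_code=503,
            content={"message": "Service temporarily overloaded, try again shortly."},
        )
    async with _slots:
        return await call_next(request)
</code></pre>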
79,654,994
11,608,962
How to customize Pydantic validation error messages to exclude "Value error" prefix?
<p>The below code consists of a company model and I have put up a validation for a blank company name. I am considering raising a custom validation error message but the error message has the prefix <code>Value error, </code> which I do not want. How can I get rid of it?</p> <p>Python Code:</p> <pre class="lang-py prettyprint-override"><code>from datetime import date from typing import Optional from pydantic import ( BaseModel, field_validator, HttpUrl, EmailStr ) from pydantic import ValidationError class Company(BaseModel): company_id: Optional[int] = None company_name: Optional[str] address: Optional[str] state: Optional[str] country: Optional[str] postal_code: Optional[str] phone_number: Optional[str] email: Optional[EmailStr] = None website_url: Optional[HttpUrl] = None cin: Optional[str] gst_in: Optional[str] = None incorporation_date: Optional[date] reporting_currency: Optional[str] fy_start_date: Optional[date] logo: Optional[str] = None @field_validator('company_name') def validate_company_name(cls, v): if v is None or not v.strip(): raise ValueError(&quot;Company name must be provided.&quot;) # Custom Error Message return v payload = { &quot;company_name&quot;: None, &quot;address&quot;: &quot;DLH Park, Mumbai&quot;, &quot;state&quot;: &quot;Maharashtra&quot;, &quot;country&quot;: &quot;India&quot;, &quot;postal_code&quot;: &quot;400066&quot;, &quot;phone_number&quot;: &quot;+9122380199&quot;, &quot;email&quot;: &quot;info@mailto.com&quot;, &quot;cin&quot;: &quot;U45678TX2023PTC111222&quot;, &quot;incorporation_date&quot;: &quot;2015-07-20&quot;, &quot;reporting_currency&quot;: &quot;INR&quot;, &quot;fy_start_date&quot;: &quot;2010-04-01&quot; } try: company = Company(**payload) print(&quot;โœ… Valid payload:&quot;, company.model_dump()) except ValidationError as e: print(&quot;โŒ&quot;, e.errors()[0]['msg']) </code></pre> <p>Output:</p> <pre><code>โŒ Value error, Company name must be provided. </code></pre> <p>Desired Output:</p> <pre><code>โŒ Company name must be provided. </code></pre> <p>Complete error message produced by <code>pydantic</code> validation:</p> <pre class="lang-py prettyprint-override"><code>[ { 'type': 'value_error', 'loc': ('company_name',), 'msg': 'Value error, Company name must be provided.', 'input': None, 'ctx': { 'error': ValueError('Company name must be provided.') }, 'url': 'https://errors.pydantic.dev/2.10/v/value_error' } ] </code></pre> <p><strong>Note:</strong> I know the workarounds by using <code>str.replace()</code> or raising a custom exception instead of <code>ValueError</code>. Can we do something else?</p>
<python><pydantic-v2>
2025-06-05 18:02:18
2
1,427
Amit Pathak
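Pydantic v2 has a dedicated escape hatch for this: raising PydanticCustomError from pydantic_core lets a validator set both the error type and the exact msg, so no "Value error, " prefix is added. A trimmed sketch keeping only the relevant field:
<pre><code>
from typing import Optional

from pydantic import BaseModel, ValidationError, field_validator
from pydantic_core import PydanticCustomError

class Company(BaseModel):
    company_name: Optional[str]

    @field_validator("company_name")
    @classmethod
    def validate_company_name(cls, v):
        if v is None or not v.strip():
            raise PydanticCustomError(
                "missing_company_name",            # custom error type
                "Company name must be provided.",  # used verbatim as msg
            )
        return v

try:
    Company(company_name=None)
except ValidationError as e:
    print(e.errors()[0]["msg"])   # Company name must be provided.
</code></pre>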
79,654,956
4,245,867
Python-Selenium: Loading the third page from bvc.com.co shows blank screen
<p>I'm trying to scrape some data from bvc.com.co (the Colombian Stock Exchange webpage). But always, when loading the third stock, the screen comes blank and the <code>target</code> <code>expected_condition</code> can not be executed (maybe because the page is not shown). Here is my code:</p> <pre><code>stocks = ['https://www.bvc.com.co/renta-variable-mercado-local/cibest?tab=operaciones', 'https://www.bvc.com.co/renta-variable-mercado-local/pfcibest?tab=operaciones', 'https://www.bvc.com.co/renta-variable-mercado-local/bogota?tab=operaciones', 'https://www.bvc.com.co/renta-variable-mercado-local/bhi?tab=operaciones', 'https://www.bvc.com.co/renta-variable-mercado-local/celsia?tab=operaciones'] import selenium, time import selenium.webdriver from selenium.webdriver.common.by import By from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.support.ui import WebDriverWait driver = selenium.webdriver.Chrome() for i in stocks: print(i) #driver = selenium.webdriver.Chrome() driver.get(i) time.sleep(1) target = WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.XPATH, '//*[@id=&quot;__next&quot;]/div/div[3]/div[3]/div/div[1]/ul/li[3]'))) driver.execute_script(&quot;arguments[0].scrollIntoView()&quot;, target) time.sleep(1) </code></pre> <p>A solution (but not the best) is initializing the <code>driver</code> into the loop. However, this makes the Chrome app open and close for each stock, which makes the code take a longer time to finish.</p> <p>PD: the <code>options.add_argument('--disable-blink-features=AutomationControlled')</code> does not fix the problem.</p>
<python><css><selenium-webdriver><web-scraping>
2025-06-05 17:30:11
1
615
Ivan Castro
79,654,724
1,076,009
Scapy Raw layer in data frame is missing initial 3 characters of load and shortened load is repeated when printed
<p>I am using Scapy to learn about Wi-Fi. I am new to both Scapy and Wi-Fi.</p> <p>I have a Raspberry Pi 5 and a laptop, running Debian GNU/Linux 12 (Bookworm) and Ubuntu 22.04.1 LTS, respectively, and an ALFA AWUS036AXML Wi-Fi adapter set in monitor mode connected to each. I am using IPython and Scapy to interact with the Wi-Fi adapters.</p> <p>I can create Beacon packets (using Scapy) on either device and sniff them successfully on the other device, so I think my basic setup is correct.</p> <p>However, when I try to send a data packet with text in a raw layer, the first 3 characters of the load are dropped and the shortened load is printed many times by the receiver.</p> <p>Creating a data packet like so (where addr1 below is the actual MAC of the unit I'm sending to):</p> <pre><code>import scapy.all as scapy from scapy.layers.dot11 import Dot11, Dot11Beacon, Dot11ProbeReq, Dot11Elt packet = scapy.RadioTap() / scapy.Dot11(type=2, subtype=0, addr1=&quot;00:c0:ca:b7:2e:7c&quot;, addr2=&quot;00:11:22:33:44:55&quot;, addr3=&quot;00:11:22:33:44:55&quot;) / scapy.Raw(load=&quot;Hello World!&quot;) </code></pre> <p>I can check packet.show()</p> <pre><code>###[ RadioTap ]### version = 0 pad = 0 len = None present = None notdecoded= b'' ###[ 802.11 ]### subtype = Data type = Data proto = 0 FCfield = ID = 0 addr1 = 00:c0:ca:b7:2e:7c (RA=DA) addr2 = 00:11:22:33:44:55 (TA=SA) addr3 = 00:11:22:33:44:55 (BSSID) SC = 0 ###[ Raw ]### load = b'Hello World!' </code></pre> <p>and</p> <pre><code>if packet.haslayer(&quot;Raw&quot;): print(packet[&quot;Raw&quot;].load.decode(&quot;utf-8&quot;)) </code></pre> <p>gives</p> <pre><code>Hello World! </code></pre> <p>I send the packet using</p> <pre><code>scapy.sendp(packet, iface=&quot;wlan1&quot;) </code></pre> <p>where &quot;wlan1&quot; is the the ALFA adapter on the transmitter.</p> <p>On the receiving machine, my sniffer function is</p> <pre><code>def sniff_callback(packet): if packet.haslayer(&quot;Dot11&quot;): if packet.getlayer(&quot;Dot11&quot;).addr1 == &quot;00:c0:ca:b7:2e:7c&quot;: if packet.haslayer(&quot;Raw&quot;): print(packet[&quot;Raw&quot;].load.decode(&quot;utf-8&quot;)) return </code></pre> <p>and I run it with</p> <pre><code>scapy.sniff(iface=&quot;wlan&quot;, prn=sniff_callback) </code></pre> <p>where &quot;wlan&quot; is the ALFA adapter on the receiver.</p> <p>The output by the receiver when the transmitted packet is received is</p> <pre><code>lo World! lo World! lo World! lo World! lo World! lo World! lo World! lo World! lo World! lo World! lo World! lo World! lo World! lo World! lo World! </code></pre> <p>So, the initial 3 characters are dropped, and it is printed 15 times. If I add 3 spaces to the beginning of the load (&quot; Hello World!&quot;), I get &quot;Hello World!&quot; printed 15 times.</p> <p>I think there is a mistake in my packet definition, but I don't know what's wrong.</p>
<python><wifi><scapy>
2025-06-05 15:03:39
0
398
Aldo
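Two things in that capture have plausible, mundane explanations worth ruling out. The frame body of an 802.11 data frame normally starts with an LLC/SNAP header, so the receiver's dissector most likely consumes the first three bytes ("Hel") as an LLC header, leaving "lo World!" in Raw; and because addr1 is a unicast address that never sends an 802.11 ACK back (the card is in monitor mode), the transmitter keeps retrying the frame, so the sniffer sees many copies. A hedged sketch of building the frame with an explicit LLC/SNAP header (the SNAP code value is an arbitrary local-experimental EtherType, chosen so the payload is not dissected further):
<pre><code>
from scapy.all import LLC, SNAP, Dot11, RadioTap, Raw, sendp

frame = (
    RadioTap()
    / Dot11(type=2, subtype=0,
            addr1="00:c0:ca:b7:2e:7c",
            addr2="00:11:22:33:44:55",
            addr3="00:11:22:33:44:55")
    / LLC(dsap=0xAA, ssap=0xAA, ctrl=0x03)
    / SNAP(OUI=0x000000, code=0x88B5)   # local experimental EtherType, assumption
    / Raw(load="Hello World!")
)
sendp(frame, iface="wlan1")
</code></pre>
With the header present, the receiver's Raw layer should carry the full payload; the repeated copies can be filtered on the receiver by checking the Retry flag in the Dot11 FCfield or by de-duplicating on the sequence number.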
79,654,659
4,247,599
AttributeError: module 'geoarrow.c._lib' has no attribute 'GEOARROW_GEOMETRY_TYPE_GEOMETRY'
<p>I am using <code>KeplerGL</code> maps library on a streamlit application.</p> <p>All worked fine until it did not. Here are the pinned libraries of a working code that suddenly stopped working:</p> <pre class="lang-bash prettyprint-override"><code> [tool.poetry.dependencies] python = &quot;~3.12.0&quot; streamlit = &quot;^1.45.0&quot; streamlit-keplergl = &quot;^0.3.0&quot; # ... many others here </code></pre> <p>This is the error for an <code>alpine:3</code> deployment:</p> <pre><code>File &quot;/app/src/k_tool/_pages/hello_kepler.py&quot;, line 7, in &lt;module&gt; from keplergl import KeplerGl File &quot;/app/.venv/lib/python3.12/site-packages/keplergl/__init__.py&quot;, line 6, in &lt;module&gt; from .keplergl import * File &quot;/app/.venv/lib/python3.12/site-packages/keplergl/keplergl.py&quot;, line 18, in &lt;module&gt; import geoarrow.pyarrow as ga File &quot;/app/.venv/lib/python3.12/site-packages/geoarrow/pyarrow/__init__.py&quot;, line 10, in &lt;module&gt; from geoarrow.c.lib import GeometryType, Dimensions, CoordType, EdgeType, CrsType File &quot;/app/.venv/lib/python3.12/site-packages/geoarrow/c/__init__.py&quot;, line 13, in &lt;module&gt; from .lib import GeometryType, Dimensions, CoordType, EdgeType, CrsType File &quot;/app/.venv/lib/python3.12/site-packages/geoarrow/c/lib.py&quot;, line 13, in &lt;module&gt; class GeometryType: File &quot;/app/.venv/lib/python3.12/site-packages/geoarrow/c/lib.py&quot;, line 26, in GeometryType GEOMETRY = _lib.GEOARROW_GEOMETRY_TYPE_GEOMETRY ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ </code></pre> <p>The page <code>hello_kepler.py</code> simply shows the map to an empty streamlit page.</p> <p>Any idea which system C++ libraries are suddenly missing, or being silently upgraded and made incompatible?</p>
<python><kepler.gl>
2025-06-05 14:20:23
0
4,299
SeF
79,654,658
29,295,031
How to switch to a streamlit page from html
<p>I have made a footer with HTML:</p> <pre><code>ft=&quot;&quot;&quot; &lt;div style='font-size: 0.875em;color: #000;cursor: pointer;font-family: Inter;font-size: 14px;font-style: normal;font-weight: 500;line-height: 140%; /* 14px */'&gt; Footer &lt;/div&gt; &quot;&quot;&quot; st.write(ft, unsafe_allow_html=True) </code></pre> <p>Is there a way to use <code>st.switch_page(&quot;pages/test.py&quot;)</code> from this HTML, please?</p>
<python><streamlit>
2025-06-05 14:20:10
0
401
user29295031
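Raw HTML written with st.write cannot call back into Python, so st.switch_page cannot be wired to that div directly. A small sketch of the usual alternatives, letting Streamlit render the clickable element itself (pages/test.py is taken from the question; labels and styling are illustrative):
<pre><code>
import streamlit as st

# Option 1: a native page link (available in newer Streamlit versions)
st.page_link("pages/test.py", label="Footer")

# Option 2: a button that performs the switch explicitly
if st.button("Footer"):
    st.switch_page("pages/test.py")
</code></pre>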
79,654,624
1,472,474
How to make mypy ignore pytest.approx in a code outside of test functions
<p>I want to use <code>pytest.approx(...)</code> inside immutable dataclasses (<code>frozen=True</code>) in my unittest, so I can use a single <code>assert</code> to check possibly quite complex structures.</p> <p>This works fine when I use these structures inside test functions, but when I declare a global variable/constant which uses <code>approx</code>, I get an error in mypy.</p> <p>Example code:</p> <pre class="lang-py prettyprint-override"><code>from pytest import approx from typing import Final from dataclasses import dataclass @dataclass(frozen=True) class A: x: float EXPECTED_VALUE_1: Final = A(approx(0.5)) # &lt;- this is the problem def test_1(): assert A(0.5) == EXPECTED_VALUE_1 def test_2(): expected_value_2 = A(approx(0.5)) # &lt;- this is fine assert A(0.5) == expected_value_2 </code></pre> <p>The error:</p> <pre class="lang-bash prettyprint-override"><code>$ mypy . a.py:10: error: Argument 1 to &quot;A&quot; has incompatible type &quot;ApproxBase&quot;; expected &quot;float&quot; [arg-type] Found 1 error in 1 file (checked 1 source file) </code></pre> <p>I think I know why - mypy by default ignores untyped functions (and my unittest functions are all untyped), but outside those functions it checks.</p> <p>I've found 2 solutions:</p> <ol> <li>add <code># type: ignore</code> where this happens - which is in my opinion ugly and makes the code less readable</li> <li>using <code>pytest.fixture</code> in place of a global constant. But in many cases, I have a lot of these <code>EXPECTED_...</code> constants (which are using <code>approx</code>) and converting them to fixtures makes code a lot longer and less readable, so, for simple, immutable data, I would like to keep using these global constants.</li> </ol> <p>Can I somehow configure mypy to <em>just</em> ignore every call to <code>approx</code> and check everything else?</p> <p>Configuration:</p> <ul> <li><p>Python 3.10.16</p> </li> <li><p>mypy 1.16.0</p> </li> <li><p>mypy configuration:</p> <pre class="lang-toml prettyprint-override"><code>[mypy] python_version = 3.10 ignore_missing_imports = True </code></pre> </li> </ul> <p>Suggested solutions/duplicate questions:</p> <ul> <li><a href="https://stackoverflow.com/questions/49220022/how-can-mypy-ignore-a-single-line-in-a-source-file">How can mypy ignore a single line in a source file?</a> - this is dealing with ignoring one line of code by adding a <code># type: ignore</code>, which is something I wrote in &quot;What I tried&quot; section I know about and that it's not a solution I want. I want a solution where I would, if possible, change mypy config or somehow &quot;patch&quot; <code>approx</code> so mypy would ignore just the <code>approx</code> calls but otherwise check typing errors.</li> </ul>
<python><pytest><python-typing><mypy>
2025-06-05 13:58:15
2
5,587
Jan Spurny
79,654,442
18,108,367
The redis command 'save' doesn't immediately store the snapshot to permanent storage
<p>I'm trying to force the storing of a Redis snapshot to permanent storage with the <code>save</code> command. The Redis instance is configured to use <a href="https://redis.io/docs/latest/operate/oss_and_stack/management/persistence/" rel="nofollow noreferrer">RDB persistence</a> with the configuration:</p> <pre><code>appendonly no
save 30 1
dbfilename dump.rdb
</code></pre> <p>so, normally, the data in RAM is saved to the file <code>dump.rdb</code> every 30 seconds if at least one key has changed.</p> <p>I'm using the Python <code>redis</code> client and the snippet of code with the problem is the following:</p> <pre><code>import redis

redis_client = redis.Redis(host=&quot;127.0.0.1&quot;, port=6379, encoding=&quot;utf-8&quot;, decode_responses=True)
redis_client.flushdb()
redis_client.save()
print(&quot;Save command executed!&quot;)
</code></pre> <p>The previous code deletes all keys from the dataset in RAM (via <code>flushdb()</code>) and, after that, should request that the (empty) snapshot be saved to permanent storage.</p> <p>If I execute the previous code and wait for the last instruction, <code>print(&quot;Save command executed!&quot;)</code>, to run, I can see the message <code>Save command executed!</code> on standard output and the following log in the journald of the redis service:</p> <pre><code>&gt; journalctl -u redis -xf
Jun 05 12:23:23 &lt;hostname&gt; redis-server[325]: 325:M 05 Jun 2025 12:23:23.055 * 1 changes in 30 seconds. Saving...
Jun 05 12:23:23 &lt;hostname&gt; redis-server[325]: 325:M 05 Jun 2025 12:23:23.056 * Background saving started by pid 2269
Jun 05 12:23:23 &lt;hostname&gt; redis-server[2269]: 2269:C 05 Jun 2025 12:23:23.071 * DB saved on disk
Jun 05 12:23:23 &lt;hostname&gt; redis-server[2269]: 2269:C 05 Jun 2025 12:23:23.075 * RDB: 0 MB of memory used by copy-on-write
Jun 05 12:23:23 &lt;hostname&gt; redis-server[325]: 325:M 05 Jun 2025 12:23:23.157 * Background saving terminated with success
Jun 05 12:23:25 &lt;hostname&gt; redis-server[325]: 325:M 05 Jun 2025 12:23:25.680 * DB saved on disk
</code></pre> <p>The last line of the log (<code>DB saved on disk</code>) corresponds to the execution of the <code>redis_client.save()</code> call.</p> <p>If I switch off the system (unfortunately I can't do a proper shutdown, I have to cut the power) within about a second of that log line appearing in the <code>journald</code> of the <code>redis</code> service, <strong>the snapshot is not saved to permanent storage</strong>; in fact, on the next boot the dataset of the redis server is not empty.</p> <p>In this context, why is the snapshot not correctly saved to permanent storage? Does it depend on calling <code>flushdb()</code> before <code>save()</code>?</p>
<python><redis><filesystems><persistence>
2025-06-05 12:27:26
1
2,658
User051209
79,654,182
704,329
Make Hydra load configs from a zip archive
<p>I am trying to adapt <a href="https://github.com/ashleve/lightning-hydra-template" rel="nofollow noreferrer">this wonderful repository</a> with the Lightning-Hydra template to my workflow.</p> <p>My workflow looks like the following.</p> <ol> <li><p>Create a new git branch for a new experiment.</p> </li> <li><p>Write the code (plus configs) to implement an idea (including debugging and testing)</p> </li> <li><p>Commit</p> </li> <li><p>Pack everything into a zip archive (<code>zip -rp experiment-$(git rev-parse HEAD).zip *</code>)</p> </li> <li><p>Add to the archive the file <code>__main__.py</code>, calling <code>main()</code></p> <p>This file helps the Python interpreter recognize the archive as a Python script and allows the call <code>python experiment-aabbcc.zip</code></p> <p>The approximate content of this file is</p> <pre><code>import sys

from src.train import main

main(sys.argv[1:])
</code></pre> </li> <li><p>Submit the command <code>python /path/to/experiment-aabbcc.zip --other --command --line-args</code> to the job queue.</p> <p>I use <a href="https://github.com/justanhduc/task-spooler" rel="nofollow noreferrer">this fork of the <code>task-spooler</code></a> as a job scheduler. Because of technical limitations of the PC that I use, I cannot run docker (actually I ssh to a docker container that doesn't support docker-in-docker, and I cannot change that).</p> </li> <li><p><code>git checkout master</code></p> </li> <li><p>go to 1</p> </li> </ol> <p>The issue is that Hydra + OmegaConf refuse to load configs from the created zip archive; they always look at the file system.</p> <p>Since I frequently switch branches, the content of the working directory often changes, and there is a high chance that a job submitted to the queue will run code different from what was intended.</p> <p>So, I am looking for a solution that would allow me to isolate the code submitted to the task spooler, without making extra copies of the repo on the file system.</p>
<python><git><fb-hydra><omegaconf>
2025-06-05 09:36:56
1
4,369
wl2776
79,654,091
15,001,463
Parent class initializes data that child class must use: better approach and enforcing contracts?
<p>I have a parent class that initializes a <code>matplotlib.axes._axes.Axes</code> object with some configuration that is necessary so that the resulting plots are saved correctly. A child class can extend the parent classes <code>_plot_initial_frame</code> and <code>_update_frame</code> methods in order to specialize the plot. However, with the current design, I do not actually &quot;force&quot; the child class to initialize the plot with the correct configuration, rather I would have to rely on the developer knowing (presumably by providing documentation) that they need to call <code>super()._plot_initial_frame()</code> in the child class to get expected behavior.</p> <p>Is there a more standard approach (pythonic or otherwise) to enforce such a contract between parent and child classes? It seems like my approach is already a bit suspect in terms of design because I am essentially modifying &quot;protected&quot; data---having read discussions on <a href="https://softwareengineering.stackexchange.com/questions/162643/why-is-clean-code-suggesting-avoiding-protected-variables">Why is clean code suggesting avoiding protected variables</a> and <a href="https://softwareengineering.stackexchange.com/questions/436521/are-there-any-legitimate-use-cases-for-protected-visibility">Are there any legitimate use cases for protected visibility</a>.</p> <p>Below are some dummy classes showing what I mean in code. The image at the end also shows why a particular configuration needs to be enforced: without specifying the rectangular configuration in the <code>Axes</code>, the image (both are 2048 x 1024) is <strong>not</strong> displayed properly. You can see the left image has lots of white space around it (<code>bbox_inches='tight'</code> is not a solution here btw as it just clips the whitespace and the resulting image is no longer 2048 x 1024) while the right image is much &quot;fuller&quot; (i.e., no whitespace).</p> <pre class="lang-py prettyprint-override"><code>from matplotlib.pyplot import axes, figure from cartopy.crs import Projection, PlateCarree class WorldMapAnimator: def __init__( self, projection: Projection = PlateCarree(), n_frames: int = 3): self._projection = projection self._n_frames = n_frames @staticmethod def get_rectangle_for_full_plot(): &quot;&quot;&quot;Emulating a public static const&quot;&quot;&quot; rectangle_for_full_plot = [0, 0, 1, 1] return rectangle_for_full_plot def animate(self): self._plot_initial_frame() for frame in range(self._n_frames): self._update_frame(frame) def _plot_initial_frame(self): self._fig = figure() # IMPORTANT: The axes must have this type of rectangle configuration self._ax = axes( self.get_rectangle_for_full_plot(), projection=self._projection) self._ax.coastlines() def _update_frame(self): return class PerlinNoiseAnimator(WorldMapAnimator): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) def _plot_initial_frame(self): # IMPORTANT: here I am modifying a member of the parent class # and also &quot;soft&quot; requiring the user to call super otherwise they # won't get expected behavior... super()._plot_initial_frame() self._ax.pcolormesh(...) # TODO: write some perlin noise to worldmap return def _update_frame(self): # do something to update the map with new perlin noise here return </code></pre> <p><a href="https://i.sstatic.net/kZhAgkYb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kZhAgkYb.png" alt="enter image description here" /></a></p>
<python><oop>
2025-06-05 08:43:04
1
714
Jared
79,654,033
364,595
debugging a PDF that will not open in Adobe
<p>disclaimer: I'm the author of <code>borb</code></p> <p>I'm currently working on the next (major) release. And for some reason I can not seem to get Adobe Reader to open the PDF.</p> <p>It throws 'There was a problem reading this document (14)'.</p> <p>You can get more information by ctrl+clicking the 'ok'. I did so. It told me 'Expected a dict object'.</p> <p>I'm currently working on a linux device, so using Adobe Pro (preflight) is a no-go for me.</p> <p>This is my code:</p> <pre class="lang-py prettyprint-override"><code>from borb.pdf.document import Document from borb.pdf import Paragraph from borb.pdf import Page from borb.pdf import PDF d: Document = Document() p: Page = Page() d.append_page(p) # useful constant(s) x: int = p.get_size()[0] // 10 y: int = p.get_size()[1] // 10 w: int = p.get_size()[0] - 2 * (p.get_size()[0] // 10) h: int = p.get_size()[1] - 2 * (p.get_size()[1] // 10) Paragraph(&quot;Lorem ipsum&quot;).paint(available_space=(x, y, w, h), page=p) PDF.write(what=d, where_to=&quot;new-borb.pdf&quot;) </code></pre> <p>This is the document produced:</p> <pre class="lang-none prettyprint-override"><code>%PDF-1.7 %\E2\E3\CF\D3 1 0 obj &lt;&lt;/Pages 2 0 R /Type /Catalog&gt;&gt; endobj 2 0 obj &lt;&lt;/Count 1 /Kids [3 0 R] /Type /Pages&gt;&gt; endobj 3 0 obj &lt;&lt;/Contents 7 0 R /MediaBox [0 0 595 842] /Resources 4 0 R /Type /Page&gt;&gt; endobj 4 0 obj &lt;&lt;/Font 5 0 R&gt;&gt; endobj 5 0 obj &lt;&lt;/F1 6 0 R&gt;&gt; endobj 6 0 obj &lt;&lt;/BaseFont /Helvetica /Encoding /WinAnsiEncoding /Name /F1 /Subtype /Type1 /Type /Font&gt;&gt; endobj 7 0 obj &lt;&lt; /Filter /FlateDecode /Length 84&gt;&gt; stream x\DA+\E4r \E12\D03P\80\E1\A2t.}7CC\85\904.C#\A0\A0\902\B5T071Q\C9\E5\D2\F0\C9/J\CD\D5T\C9\E2r \E1 \E4*$\CE\00K\B8 \A4k\B6\80k\CE,(.E\B2\00\BCA'\D8 endstream endobj 8 0 obj &lt;&lt;/CreationDate (D:20250605094605Z00) /ModDate (D:20250605094605Z00) /Producer (borb)&gt;&gt; endobj xref 0 9 0000000000 65535 f 0000000015 00000 n 0000000063 00000 n 0000000119 00000 n 0000000208 00000 n 0000000240 00000 n 0000000270 00000 n 0000000376 00000 n 0000000531 00000 n trailer &lt;&lt;/ID [&lt;1BE2A89A42B41E620D886CD2F315D35D&gt; &lt;1BE2A89A42B41E620D886CD2F315D35D&gt;] /Info 8 0 R /Root 1 0 R /Size 9&gt;&gt; startxref 635 %%EOF </code></pre> <p>This is the content stream (inflated)</p> <pre class="lang-none prettyprint-override"><code>q BT 0.0 0.0 0.0 rg /F1 1 Tf 12 0 0 12 59 744 Tm (Lorem) Tj ET Q q BT 0.0 0.0 0.0 rg /F1 1 Tf 12 0 0 12 94 744 Tm ( ) Tj ET Q q BT 0.0 0.0 0.0 rg /F1 1 Tf 12 0 0 12 98 744 Tm (ipsum) Tj ET Q </code></pre> <p>This is a link to the file (hosted on Google Drive):</p> <p><a href="https://drive.google.com/file/d/1omSHhbaXwGixHVV1lVdgFqQTS800D1En/view?usp=sharing" rel="nofollow noreferrer">https://drive.google.com/file/d/1omSHhbaXwGixHVV1lVdgFqQTS800D1En/view?usp=sharing</a></p>
<python><pdf><borb>
2025-06-05 08:01:19
1
9,217
Joris Schellekens
79,654,025
2,840,697
Does the way you define nested lists in Python matter?
<p>Let's assume that I defined a list <code>list1</code> as:</p> <pre><code> list1 = [[0]*10]*10 </code></pre> <p>and defined <code>list2</code> as:</p> <pre><code> list2 = [[0]*10 for i in range(10)] </code></pre> <p>Aren't these pretty much two equivalent ways of defining a nested list? Is there any reason these two lists would be different in some way?</p>
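<p>For context, this is the kind of check I plan to run to compare them (I haven't convinced myself of what it should print):</p> <pre><code>list1 = [[0] * 10] * 10
list2 = [[0] * 10 for i in range(10)]

# do the rows share identity?
print(list1[0] is list1[1])
print(list2[0] is list2[1])

# does mutating one element affect the other rows?
list1[0][0] = 1
list2[0][0] = 1
print(list1[1][0], list2[1][0])
</code></pre>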
<python>
2025-06-05 07:57:23
1
942
user98235
79,653,909
29,295,031
How can I color a st.data_editor cell based on a condition
<p>I'm using streamlit's <code>st.data_editor</code>, and I have this <code>DataFrame</code>:</p> <pre><code>import streamlit as st
import pandas as pd

df = pd.DataFrame(
    [
        {&quot;command&quot;: &quot;test 1&quot;, &quot;rating&quot;: 4, &quot;is_widget&quot;: True},
        {&quot;command&quot;: &quot;test 2&quot;, &quot;rating&quot;: 5, &quot;is_widget&quot;: False},
        {&quot;command&quot;: &quot;test 3&quot;, &quot;rating&quot;: 3, &quot;is_widget&quot;: True},
    ]
)
edited_df = st.data_editor(df)
</code></pre> <p>Is there a way to color a specific cell based on a condition? I want to color the cell (in the rating column) yellow when the value is less than 2.</p>
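<p>For a read-only <code>st.dataframe</code> I know the pandas <code>Styler</code> trick below (same condition and colour as described above), but I couldn't find the equivalent for <code>st.data_editor</code>:</p> <pre><code>def highlight_low_rating(val):
    # yellow background when the rating is below 2
    return 'background-color: yellow' if val &lt; 2 else ''

styled = df.style.applymap(highlight_low_rating, subset=['rating'])  # Styler.map in newer pandas
st.dataframe(styled)  # works here, but I need the same effect in st.data_editor
</code></pre>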
<python><pandas><streamlit>
2025-06-05 06:41:37
1
401
user29295031
79,653,787
12,314,521
How to manipulate a model (Pydantic) which just is defined at runtime
<p>I have a situation like this: I know there will be a <code>Person</code> model, but I won't know its attributes until runtime. A Person class will be defined and written into a Python file in the project directory. Another option is that, at runtime, a YAML or JSON file will be created to describe that Person object.</p> <p>But I also have to use that Person object in another, pre-defined function in the workflow; this function will be invoked later, after <code>Person</code> is defined.</p> <p>For example:</p> <pre><code>def my_func(person: Person):  # But this Person isn't defined yet.
    print(person)
</code></pre> <p>I'm not sure whether I can define a placeholder Person and then overwrite it later. Since Python will have already loaded that Person class into memory, I think there will be a conflict.</p> <p>Thank you.</p> <hr /> <p>Edit:</p> <p>This is for @Anerdw 's concern:</p> <p>Actually, I'll use an LLM to generate the code. Depending on the input data, different attributes will be created, and the generated script will be saved. I don't know if this makes sense. Alternatively, the description of that Person object could be contained in a dictionary variable. I guess Pydantic's <code>create_model</code> function can create a BaseModel object from such a JSON description.</p>
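<p>For context, the direction I'm considering with <code>create_model</code> is sketched below; the field names are placeholders, since the real description only arrives at runtime:</p> <pre><code>from pydantic import create_model

# hypothetical runtime description of Person (in reality generated by the LLM / read from JSON)
person_spec = {'name': (str, ...), 'age': (int, 0)}

Person = create_model('Person', **person_spec)

def my_func(person):  # can't annotate with Person at import time
    print(person)

my_func(Person(name='Alice', age=30))
</code></pre>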
<python><pydantic>
2025-06-05 04:27:06
1
351
jupyter
79,653,549
981,351
df.to_json output appears to skip a date that is in the dataframe
<p>I have very limited knowledge of pandas.</p> <p>The data I'm using covers two DST (daylight saving) transitions for the UK (from 1 Sep 24 to 30 Apr 25), and consists of timestamps in milliseconds with values. The transition dates are 27 Oct 24 and 30 Mar 25.</p> <p>Using df.resample with a rule of 86400000ms, I resample the data to get one value per day.</p> <p>If I then log the dataframe, I appear to have one value per day, with no omitted/missing days in the period.</p> <p>I then use df.to_json to convert the data to json.</p> <p>The resulting json data does not have a data point for the date 30th March 25. The value for the 30th has instead been given against the timestamp for the 31st.</p> <p>Any help much appreciated.</p>
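<p>A stripped-down sketch of the pipeline is below; the frame here is synthetic and the aggregation is illustrative, but the shape of the code matches what I run on the real data:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd

# synthetic stand-in for my data: one value per hour across the March DST change
idx = pd.date_range('2025-03-28', '2025-04-02', freq='h', tz='Europe/London')
df = pd.DataFrame({'value': range(len(idx))}, index=idx)

daily = df['value'].resample('86400000ms').mean()
print(daily)                # with my real data this shows a row for every day, incl. 30 Mar 25
json_out = daily.to_json()  # with my real data there is no entry for 30 Mar 25 here
</code></pre>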
<python><pandas><dataframe><time-series><dst>
2025-06-04 21:52:33
1
2,866
iss42
79,653,530
23,482,750
How to Test Password Reset Endpoint Using a Token That Doesn't Exist in the Database in FastAPI?
<p>I'm building a backend system using FastAPI, and I'm currently working on implementing unit tests for the password reset functionality that involves using tokens.</p> <p>Here's the snippet of the code I'm testing:</p> <pre><code>def test_web_reset_password():
    payload = {
        &quot;token&quot;: &quot;cozy_token&quot;,  # Token that doesn't exist in the database
        &quot;new_password&quot;: &quot;NewSecret123!&quot;
    }
    response = client.post(f&quot;{WEB_ENDPOINT}/reset-password&quot;, json=payload)
    assert response.status_code == 200
    data = response.json()
    assert &quot;status&quot; in data
</code></pre> <p>Since the token <code>&quot;cozy_token&quot;</code> doesn't exist in the database, I want to know how to handle this scenario in a unit test. Should I simulate the creation of a valid token during the test, or mock the token validation?</p>
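<p>The mocking variant I have in mind would look something like the sketch below; <code>app.auth.verify_reset_token</code> is only a placeholder for wherever my token check actually lives, not a real path in my project:</p> <pre><code>from unittest.mock import patch

def test_web_reset_password_with_mocked_validation():
    payload = {&quot;token&quot;: &quot;cozy_token&quot;, &quot;new_password&quot;: &quot;NewSecret123!&quot;}
    # placeholder import path: patch whatever function resolves the token to a user
    with patch(&quot;app.auth.verify_reset_token&quot;, return_value={&quot;user_id&quot;: 1}):
        response = client.post(f&quot;{WEB_ENDPOINT}/reset-password&quot;, json=payload)
    assert response.status_code == 200
</code></pre>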
<python><jwt><fastapi><python-unittest><reset-password>
2025-06-04 21:30:36
0
343
Hiroshi Ashikaga
79,653,483
6,067,528
Should I use threading for a CPU-intensive operation that is choking my service
<p>I am a maintainer of an ML service based on FastAPI, and we have CPU-intensive code that runs on the main thread with high CPU utilisation when a request heads down this path (I've also seen occasional throttling of CPU cycles).</p> <pre><code>def _svd_with_fallback(self, matrix: np.ndarray) -&gt; tuple[np.ndarray, np.ndarray, np.ndarray]:
    try:
        return np_singular_value_decomposition(matrix, full_matrices=False)
    ...
</code></pre> <p>What are approaches to solving this problem? We have already spun up more gunicorn servers; would the advice here be to submit the workload to a separate thread?</p> <p>Does anyone have any best-practice tips?</p>
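<p>For concreteness, one option I'm weighing is pushing the call off the event loop into a worker pool. A rough sketch is below, using numpy's <code>svd</code> directly as a stand-in for our wrapper and a guessed pool size:</p> <pre><code>import asyncio
from concurrent.futures import ProcessPoolExecutor
from functools import partial

import numpy as np

_pool = ProcessPoolExecutor(max_workers=2)  # placeholder sizing

async def svd_offloaded(matrix: np.ndarray):
    loop = asyncio.get_running_loop()
    # run the heavy SVD outside the event loop so other requests aren't starved
    return await loop.run_in_executor(
        _pool, partial(np.linalg.svd, matrix, full_matrices=False))
</code></pre>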
<python>
2025-06-04 20:54:32
0
1,313
Sam Comber
79,653,436
14,909,621
Which is a better way to move an iterator one step forward: `for x in iterator: break` or `x = next(iterator, None)`?
<p>While working on a learning task involving overlapping n-wise windows from an input iterable - similar to what <a href="https://docs.python.org/3/library/itertools.html#itertools.pairwise" rel="nofollow noreferrer">itertools.pairwise</a> does - I came across code like this:</p> <pre class="lang-py prettyprint-override"><code>def f(seq): it = iter(seq) for x in it: break for y in it: yield x, y x = y </code></pre> <p>The line <code>for x in it: break</code> was used instead of <code>x = next(it, None)</code> to avoid assigning <code>x</code> at all if <code>seq</code> is empty, which would be the case with the <code>next()</code> call.</p> <p>I'm not familiar with CPython internals or low-level details, so I can't say whether this is a justified choice. Could someone help me understand the pros and cons of both approaches for advancing an iterator by one item before a loop?</p>
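<p>For comparison, the <code>next()</code>-based variant I had in mind (using a sentinel so that nothing is assigned or yielded for an empty input) is:</p> <pre class="lang-py prettyprint-override"><code>_SENTINEL = object()

def f_next(seq):
    it = iter(seq)
    x = next(it, _SENTINEL)
    if x is _SENTINEL:  # empty input: nothing to yield
        return
    for y in it:
        yield x, y
        x = y
</code></pre>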
<python><iterator>
2025-06-04 20:15:39
1
7,606
Vitalizzare
79,653,395
1,310,814
Generate table rows in Word document
<p>I have a Word document template with a table. I want to generate the table rows from my python structure like below.</p> <pre><code>from docxtpl import DocxTemplate doc = DocxTemplate(&quot;template2.docx&quot;) context = { &quot;topic&quot;: &quot;Volcanoes&quot;, &quot;lessons&quot;: [ {&quot;title&quot;: &quot;Introduction to Volcanoes&quot;, &quot;objective&quot;: &quot;Understand basic concepts of volcanoes&quot;}, {&quot;title&quot;: &quot;Eruption Types&quot;, &quot;objective&quot;: &quot;Differentiate between types of volcanic eruptions&quot;}, {&quot;title&quot;: &quot;Volcano Case Studies&quot;, &quot;objective&quot;: &quot;Analyze real-world examples of eruptions&quot;}, {&quot;title&quot;: &quot;Lava and Ash&quot;, &quot;objective&quot;: &quot;Understand the geological impact of lava and ash&quot;}, ] } doc.render(context) doc.save(&quot;filled.docx&quot;) </code></pre> <p>My Word document template:</p> <p><a href="https://i.sstatic.net/65i9JzxB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/65i9JzxB.png" alt="enter image description here" /></a></p> <p>When I run the code <code>python3 gen.py</code> I get the following output Word document:</p> <p><a href="https://i.sstatic.net/jt605DqF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jt605DqF.png" alt="enter image description here" /></a></p> <p>As you can see, it did generate the rows, but the problem is the set of tables/cells it has inserted at the bottom. How can I avoid creating those extra cells/tables?</p>
<python><ms-word><docxtpl>
2025-06-04 19:36:05
1
894
ServerBloke
79,653,348
5,431,734
Efficiently group IDs by two keys into a 2D array of lists
<p>I have four NumPy arrays: class_label, student_name, student_id and grade. I am aggregating the grade per class and name as follows:</p> <pre><code>import numpy as np
import numpy_groupies as npg

class_label = [0, 2, 1, 1, 2, 3, 0, 1]
student_name = [0, 0, 1, 2, 1, 1, 0, 1]
student_id = [0, 1, 2, 3, 4, 5, 6, 7]
grade = [10, 5, 7, 4, 9, 8, 9, 3]

n_classes = 4
n_names = 3

group_idx = np.vstack((class_label, student_name))
npg.aggregate(group_idx, grade, size=(4,3))
</code></pre> <p>which returns:</p> <pre><code>array([[19,  0,  0],
       [ 0, 10,  4],
       [ 5,  9,  0],
       [ 0,  8,  0]])
</code></pre> <p>I am now trying to get the IDs involved in this grouping (adding as little overhead to the code as possible).</p> <p>I want to build a 2D array of shape (n_classes, n_names) in which each cell is a list of all student_id values that share the same (class_label, student_name) pair. For example:</p> <p>I expect the result to look like this (a 4×3 object array of lists):</p> <pre><code>array([[ [0, 6], [],     []  ],
       [ [],     [2, 7], [3] ],
       [ [1],    [4],    []  ],
       [ [],     [5],    []  ]], dtype=object)
</code></pre> <p>In the final 2D output array, at row 0 (class 0) and column 0 (name 0), I expect the list [0, 6]. In other words, the cell at position [0,0] must be a list containing the IDs for all records whose class_label == 0 and student_name == 0.</p>
<python><numpy>
2025-06-04 19:05:51
1
3,725
Aenaon
79,653,257
13,682,559
How do I specify "related" types in a container class?
<p>I have various model classes, each working on its own data class. Like so:</p> <pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass from typing import Any, Protocol, final @dataclass class Data(Protocol): val_generic: int @dataclass class DataA(Data): val_generic: int = 1 val_a: int = 2 @dataclass class DataB(Data): val_generic: int = 4 val_b: int = 1 class ModelA: def update(self, data: DataA) -&gt; None: data.val_a = data.val_a + data.val_generic class ModelB: def update(self, data: DataB) -&gt; None: data.val_b = data.val_b + data.val_generic </code></pre> <p>Now I want a container class doing this job for me:</p> <pre class="lang-py prettyprint-override"><code>@final class Container: def __init__(self, data: Data, model: Any): self.data = data self.model = model def update(self): self.model.update(self.data) model_a = ModelA() data_a = DataA() container1 = Container(data_a, model_a) container1.update() # OK </code></pre> <p>I fail to give a valid type annotation to the <code>model</code>-argument. In an ideal world, the type checker realizes that the code above works while this will raise an error:</p> <pre><code>data_b = DataB() container2 = Container(data_b, model_a) container2.update() #AttributeError </code></pre> <p>Can it be done in a generic fashion? Things are not as easy as in this MRE. I tried to define a Model-protocol but failed to write an abstract <code>update</code>-method since the signature of <code>ModelA.update</code> differs from <code>ModelB.update</code>.</p> <p>But even if that works out, how do I tell the type-checker that the argument-types in <code>Container.__init__</code> have to be related in order to make <code>Container.update</code> work?</p>
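<p>For reference, the (non-generic) protocol I tried looks roughly like the sketch below; the type checker rejects <code>ModelA</code> against it because its <code>update</code> only accepts the narrower <code>DataA</code>:</p> <pre class="lang-py prettyprint-override"><code>class Model(Protocol):
    def update(self, data: Data) -&gt; None: ...

# ModelA is not accepted as a Model: its update takes DataA, not the wider Data
</code></pre>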
<python><python-typing>
2025-06-04 17:39:05
1
1,108
Durtal
79,653,214
938,363
buildozer build error: configure: error: C compiler cannot create executables
<p>My kivy 2.3.1, buildozer 1.5.0, python-for-android 2024.01.21 in python virtualenv 3.11, on MacOS app throws error when <code>buildozer android debug</code>:</p> <pre><code>[DEBUG]: configure: error: in `/Users/macbook/Documents/code/py/vmonfront/.buildozer/android/platform/build-arm64-v8a_armeabi-v7a/build/other_builds/python3/armeabi-v7a__ndk_target_21/python3/android-build': [DEBUG]: configure: error: C compiler cannot create executables [DEBUG]: See `config.log' for more details Exception in thread background thread for pid 85320: Traceback (most recent call last): File &quot;/opt/homebrew/Cellar/python@3.11/3.11.12_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/threading.py&quot;, line 1045, in _bootstrap_inner self.run() File &quot;/opt/homebrew/Cellar/python@3.11/3.11.12_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/threading.py&quot;, line 982, in run self._target(*self._args, **self._kwargs) File &quot;/Users/macbook/Documents/code/py/vmonfront/venv311/lib/python3.11/site-packages/sh.py&quot;, line 1642, in wrap fn(*rgs, **kwargs) File &quot;/Users/macbook/Documents/code/py/vmonfront/venv311/lib/python3.11/site-packages/sh.py&quot;, line 2647, in background_thread handle_exit_code(exit_code) File &quot;/Users/macbook/Documents/code/py/vmonfront/venv311/lib/python3.11/site-packages/sh.py&quot;, line 2338, in fn return self.command.handle_command_exit_code(exit_code) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/macbook/Documents/code/py/vmonfront/venv311/lib/python3.11/site-packages/sh.py&quot;, line 823, in handle_command_exit_code raise exc sh.ErrorReturnCode_77: RAN: /Users/macbook/Documents/code/py/vmonfront/.buildozer/android/platform/build-arm64-v8a_armeabi-v7a/build/other_builds/python3/armeabi-v7a__ndk_target_21/python3/configure --host=arm-linux-androideabi --build=aarch64-apple-darwin24.5.0 --enable-shared --enable-ipv6 ac_cv_file__dev_ptmx=yes ac_cv_file__dev_ptc=no --without-ensurepip ac_cv_little_endian_double=yes ac_cv_header_sys_eventfd_h=no --prefix=/usr/local --exec-prefix=/usr/local --enable-loadable-sqlite-extensions --with-build-python=/Users/macbook/Documents/code/py/vmonfront/.buildozer/android/platform/build-arm64-v8a_armeabi-v7a/build/other_builds/hostpython3/desktop/hostpython3/native-build/python3 --with-openssl=/Users/macbook/Documents/code/py/vmonfront/.buildozer/android/platform/build-arm64-v8a_armeabi-v7a/build/other_builds/openssl/armeabi-v7a__ndk_target_21/openssl3.0.16 </code></pre> <p>When I click the c compiler, here is the output:</p> <pre><code>/Users/macbook/Library/Android/sdk/ndk/25.2.9519653/toolchains/llvm/prebuilt/darwin-x86_64/bin/armv7a-linux-androideabi21-clang ; exit; (base) macbook@JunCs-MacBook-Pro ~ % /Users/macbook/Library/Android/sdk/ndk/25.2.9519653/toolchains/llvm/prebuilt/darwin-x86_64/bin/armv7a-linux-androideabi21-clang ; exit; clang-14: error: no input files Saving session... ...copying shared history... ...saving history...truncating history files... ...completed. [Process completed] </code></pre> <p>The c compiler seems working fine to me. what is missing here? Is it configuration?</p> <p>Here is the buildozer.spec:</p> <pre><code>[app] # (str) Title of your application title = Video Monitor Front # (str) Package name package.name = vmonfront # (str) Package domain (needed for android/ios packaging) package.domain = org.test # (str) Source code where the main.py live source.dir = . 
# (list) Source files to include (let empty to include all the files) source.include_exts = py,png,jpg,kv,atlas # (str) Application versioning (method 1) version = 0.1 # (list) Application requirements # comma separated e.g. requirements = sqlite3,kivy requirements = python3==3.11.12,kivy,pyjnius, openssl==3.0.16 #,openssl==3.0.16,cryptography==42.0.8 # android.skip_download = True android.accept_sdk_license = True android.ndk_version = 25b android.sdk_version = 34 android.api = 34 android.minapi = 21 android.ndk_path = /Users/macbook/Library/Android/sdk/ndk/25.2.9519653 android.sdk_path = /Users/macbook/Library/Android/sdk android.sdkmanager_path = /Users/macbook/Library/Android/sdk/cmdline-tools/latest/bin/sdkmanager p4a.branch = develop #android.archs = arm64-v8a #android.extra_patches = patches/pyjnius_long_fix.patch:pyjnius # (list) Supported orientations # Valid options are: landscape, portrait, portrait-reverse or landscape-reverse orientation = portrait # # OSX Specific # # # author = ยฉ Copyright Info # change the major version of python used by the app osx.python_version = 3 # Kivy version to use osx.kivy_version = 1.9.1 # # Android specific # # (bool) Indicate if the application should be fullscreen or not fullscreen = android.python = 3.11.12 # Explicitly set Python version # (bool) enables Android auto backup feature (Android API &gt;=23) android.allow_backup = True # # Python for android (p4a) specific # # # iOS specific # # (str) Path to a custom kivy-ios folder #ios.kivy_ios_dir = ../kivy-ios # Alternately, specify the URL and branch of a git checkout: ios.kivy_ios_url = https://github.com/kivy/kivy-ios ios.kivy_ios_branch = master # Another platform dependency: ios-deploy # Uncomment to use a custom checkout #ios.ios_deploy_dir = ../ios_deploy # Or specify URL and branch ios.ios_deploy_url = https://github.com/phonegap/ios-deploy ios.ios_deploy_branch = 1.10.0 # (bool) Whether or not to sign the code ios.codesign.allowed = false [buildozer] # (int) Log level (0 = error only, 1 = info, 2 = debug (with command output)) log_level = 2 # (int) Display warning if buildozer is run as root (0 = False, 1 = True) warn_on_root = 1 </code></pre>
<python><kivy><buildozer>
2025-06-04 17:07:37
0
10,250
user938363
79,653,147
5,228,890
pip does not find new version of torch
<p>I have a Mac M1 laptop, where I have conda env with python=3.9.21, and pip=25.1.1.</p> <p>Currently I have torch=2.2.0 and I wanted to update it to the latest version, but when I tried <code>pip install -U torch</code> it shows that I have the latest version, and trying <code>pip install -U torch==2.7.0</code> ends in:</p> <pre class="lang-none prettyprint-override"><code>ERROR: Could not find a version that satisfies the requirement torch==2.7.0 (from versions: 1.7.1, 1.8.0, 1.8.1, 1.9.0, 1.9.1, 1.10.0, 1.10.1, 1.10.2, 1.11.0, 1.12.0, 1.12.1, 1.13.0, 1.13.1, 2.0.0, 2.0.1, 2.1.0, 2.1.1, 2.1.2, 2.2.0, 2.2.1, 2.2.2) ERROR: No matching distribution found for torch==2.7.0 </code></pre> <p>I also downloaded the corresponding <code>whl</code> file and tried to install that, but got:</p> <pre class="lang-none prettyprint-override"><code>pip install ~/Downloads/torch-2.7.1-cp39-none-macosx_11_0_arm64.whl ERROR: torch-2.7.1-cp39-none-macosx_11_0_arm64.whl is not a supported wheel on this platform. </code></pre> <p>Not sure what is going on.</p>
<python><macos><pytorch><pip>
2025-06-04 16:15:51
1
1,464
Afshin Oroojlooy
79,653,120
7,326,981
Supabase connection error "server didn't return client encoding"
<p>I am trying to connect and write a Pandas DataFrame to a Postgres custom schema on Supabase. Below is the code.</p> <pre><code>from sqlalchemy import create_engine

def server_access():
    # Create SQLAlchemy connection string
    conn_str = (
        f&quot;postgresql+psycopg2://{&quot;[USER]&quot;}:{&quot;[PASSWORD]&quot;}&quot;
        f&quot;@{&quot;[HOST]&quot;}:{[PORT]}/{&quot;[SCHEMA]&quot;}?client_encoding=utf8&quot;
    )
    engine = create_engine(url=conn_str)
    return engine

engine = server_access()
df.to_sql('tbl', engine, if_exists='append', index=False)
</code></pre> <p>As per the <a href="https://supabase.com/docs/guides/api/using-custom-schemas" rel="nofollow noreferrer">documentation</a> I've followed the steps to expose the custom schema, but it didn't work. However, if I use the public schema, the same code works fine without defining <code>client_encoding</code> in the connection string.</p> <p>How can I connect and write a Pandas DataFrame using SQLAlchemy to a custom schema on Supabase?</p> <p>Edit:</p> <p>Sample connection string: <code>postgresql+psycopg2://user:password@host:6543/custom_schema?client_encoding=utf8</code></p> <p>After replacing <code>custom_schema</code> with <code>postgres</code> it works fine.</p> <p>Error:</p> <pre><code>conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) server didn't return client encoding

(Background on this error at: https://sqlalche.me/e/20/e3q8)
</code></pre>
<python><pandas><postgresql><supabase>
2025-06-04 15:58:09
2
1,298
Furqan Hashim
79,653,054
11,578,996
Pandas multi-column multi-assignment merge / update
<p>I want to update the <code>base</code> pandas DataFrame with data from the <code>update</code> DataFrame. This applies to multiple rows of the base DataFrame (ie the merge features are not unique so can't be used as an index) and I cannot use the index matching techniques of <code>pd.join()</code> or <code>pd.merge()</code> because I want to overwrite the columns. Using <code>df.loc[]</code> and <code>df.update()</code> doesn't work because usind <code>['id','date']</code> as an index of base is non unique.</p> <p>Previous solutions end up with null values or using list comprehensions to create bool lists for indexing. There must be a better way!</p> <p>Note, I often do this where base has up to 20M rows and update has up to 5K rows, and often the matching is on 3-5 columns. Using list comps can be sloooow unless I do laborious jiggery pokery of multiple separate list comps for each conditional column</p> <pre><code>base = pd.DataFrame({ 'grp':['A','A','B','B','A','B','C'], 'id': ['a','b','a','b','a','a','c'], 'date': ['2025-01-01']*4+['2025-01-02']*3, 'cat1': [0]*7, 'cat2': [0]*7, }) base: grp id date cat1 cat2 0 A a 2025-01-01 0 0 1 A b 2025-01-01 0 0 2 B a 2025-01-01 0 0 3 B b 2025-01-01 0 0 4 A a 2025-01-02 0 0 5 B a 2025-01-02 0 0 6 C c 2025-01-02 0 0 update = pd.DataFrame({ 'date': ['2025-01-01', '2025-01-01', '2025-01-02', '2025-01-02'], 'id': ['a', 'b', 'a', 'b'], 'cat1': [1, 0, 1, 0], 'cat2': [0, 1, 1, 1], }) update: date id cat1 cat2 0 2025-01-01 a 1 0 1 2025-01-01 b 0 1 2 2025-01-02 a 1 1 3 2025-01-02 b 0 1 desired_output = pd.DataFrame({ 'grp':['A','A','B','B','A','B','C'], 'id': ['a','b','a','b','a','a','c'], 'date': ['2025-01-01']*4+['2025-01-02']*3, 'cat1': [1,0,1,0,1,1,0], 'cat2': [0,1,0,1,1,1,0], }) desired_output: grp id date cat1 cat2 0 A a 2025-01-01 1 0 1 A b 2025-01-01 0 1 2 B a 2025-01-01 1 0 3 B b 2025-01-01 0 1 4 A a 2025-01-02 1 1 5 B a 2025-01-02 1 1 6 C c 2025-01-02 0 0 </code></pre>
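<p>For scale context, the pattern I fall back to today looks roughly like the sketch below; the real version builds the boolean masks with list comprehensions over 3-5 key columns, which is what gets slow on 20M rows:</p> <pre><code># base and update are the DataFrames defined above
for _, row in update.iterrows():
    mask = [i == row['id'] and d == row['date']
            for i, d in zip(base['id'], base['date'])]
    base.loc[mask, ['cat1', 'cat2']] = row[['cat1', 'cat2']].values
</code></pre>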
<python><pandas><dataframe><join>
2025-06-04 15:11:42
2
389
ciaran haines
79,652,961
12,403,550
Calculate overlap in cluster area between multiple clusters
<p>I have a raster image. Using scipy, I am clustering pixels that belong to each peak in the histogram that corresponds to the values in the image. I am able to detect two clear peaks that are clearly segregated in the image. What I want to know is how much these points overlap.</p> <p><a href="https://i.sstatic.net/ose2INA4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ose2INA4.png" alt="Peak 1 cluster" /></a></p> <p><a href="https://i.sstatic.net/EDpYebsZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EDpYebsZ.png" alt="Peak 2 cluster" /></a></p> <p>For each mask, I extract the &quot;outline&quot; using dilation and erosion.</p> <p>I compute the intersection of the outlines from the two peaks. The overlap ratio is calculated as:</p> <pre><code> overlap_ratio = (number of overlapping outline pixels) / (minimum of the two outline sizes) </code></pre> <p>I visualize the outlines and their overlap using a color-coded plot (yellow for Peak 1, blue for Peak 2, red for overlap).</p> <p>If the mean outline overlap percentage is greater than 20%, there is significant overlap.</p> <p><a href="https://i.sstatic.net/oLvbWYA4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oLvbWYA4.png" alt="Overlap view" /></a></p> <p>However, this method is not effective at identifying overlap when the cluster is thin (as in the second picture), so the method identifies more overlap than there really is. How can I create a more precise mask that compares the overlap in area of the cluster to the overlap in area of the other cluster?</p> <p>Would I be better off comparing polygons using shapely?</p>
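<p>For reference, the outline/overlap step described above is essentially the sketch below (<code>mask1</code> and <code>mask2</code> are the boolean pixel masks of the two peaks):</p> <pre><code>import numpy as np
from scipy import ndimage

def outline(mask: np.ndarray) -&gt; np.ndarray:
    # boundary pixels = dilation minus erosion
    return ndimage.binary_dilation(mask) &amp; ~ndimage.binary_erosion(mask)

o1, o2 = outline(mask1), outline(mask2)
overlap = o1 &amp; o2
overlap_ratio = overlap.sum() / min(o1.sum(), o2.sum())
</code></pre>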
<python><scipy><shapely>
2025-06-04 14:19:44
1
433
prayner
79,652,902
401,516
Can the Django development server be restarted with a signal?
<p>I would like to be able to restart the <code>manage.py runserver</code> Django command using a signal (like in <code>kill -HUP PID</code>). Does Django even support this? <code>SIGHUP</code>, <code>SIGINT</code>, <code>SIGTERM</code> just exit the process.</p> <p>Tried <code>pkill -HUP</code>, didn't work.</p>
<python><django>
2025-06-04 13:45:18
1
1,034
dotz
79,652,650
10,305,444
Gedit plugin not showing custom context menu items with Python
<p>I'm developing a Gedit plugin in Python using Gtk. The plugin is supposed to add &quot;🔮 Generate&quot; and &quot;📝 Summarize&quot; items to the right-click context menu in the editor. However, the items are not showing up.</p> <p>Here is the related code:</p> <pre><code>...
def on_populate_popup(self, view, menu):
    generate_item = Gtk.MenuItem(label=&quot;🔮 Generate&quot;)
    summarize_item = Gtk.MenuItem(label=&quot;📝 Summarize&quot;)

    generate_item.connect(&quot;activate&quot;, self.on_generate_clicked, view)
    summarize_item.connect(&quot;activate&quot;, self.on_summarize_clicked, view)

    menu.append(Gtk.SeparatorMenuItem())
    menu.append(generate_item)
    menu.append(summarize_item)
    menu.show_all()
...
</code></pre> <p>I've confirmed the plugin loads and activates, but the menu items don't appear when right-clicking in the editor. <strong>Any ideas on what I might be missing?</strong></p> <p>Here is a related screenshot: <a href="https://imgur.com/AVYDmH2" rel="nofollow noreferrer">https://imgur.com/AVYDmH2</a></p> <p>And here is the full code: <a href="https://github.com/maifeeulasad/gedit-localllama/blob/e4bfb2a909924a2d71a0414eadee9d2880a4b8ef/geditlocalllama.py" rel="nofollow noreferrer">https://github.com/maifeeulasad/gedit-localllama/blob/e4bfb2a909924a2d71a0414eadee9d2880a4b8ef/geditlocalllama.py</a></p>
<python><gtk><gnome><gedit><gedit-plugin>
2025-06-04 11:22:46
1
4,689
Maifee Ul Asad
79,652,536
72,791
Matplotlib Hover Coordinates with Labelled XTicks
<p>I've got a matplotlib graph with labelled X-ticks:</p> <p><a href="https://i.sstatic.net/02h7A0CY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/02h7A0CY.png" alt="Graph of random data, with labelled X-ticks" /></a></p> <p>The labels repeat (in case that's relevant). In the real graph, there is a multi-level X-axis with more clarification in the lower layers.</p> <p>That works fine, but I want to be able to hover the mouse and see the X-coordinate in the top-right of the graph. Whenever I set xticks to labels, I just get a blank X-coordinate:</p> <p><a href="https://i.sstatic.net/bZOPzkUr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bZOPzkUr.png" alt="Graph with cursor hover indicator highlighted" /></a></p> <p>If I use <code>ax.xaxis.set_major_formatter('{x:g}')</code>, it gets rid of my labels but the cursor coordinate starts working:</p> <p><a href="https://i.sstatic.net/9QWRmyYK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9QWRmyYK.png" alt="Graph with cursor hover indicator highlighted" /></a></p> <p>Is there any way to make the cursor location still show the X coordinate even when I have a labelled X axis?</p> <p>This also affects <code>mplcursors</code>: it shows the X value as empty if I click on a line between points or with the label if I click exactly on a point (whereas I'd like to see the underlying numerical X-coordinate as &quot;A&quot; is a bit meaningless without the context from the secondary axis):</p> <p><a href="https://i.sstatic.net/3KPaj1al.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3KPaj1al.png" alt="mplcursors example graph" /></a></p> <p>Source code:</p> <pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt import numpy as np import mplcursors x = np.arange(0, 9) y = np.random.rand(*x.shape) labels = ['A', 'B', 'C']*3 fig, ax = plt.subplots() ax.plot(x, y, 'bx-', label='random') ax.set_xticks(x, labels=labels) # This makes the coordinate display work, but gets rid of the labels: # ax.xaxis.set_major_formatter('{x:g}') mplcursors.cursor(multiple=True) plt.show() </code></pre>
<python><matplotlib><mplcursors>
2025-06-04 10:01:28
2
73,231
DrAl
79,652,471
536,262
Type hint warning `"keys" is not a known attribute of "None"` for `Optional` in Pylance
<p>Recently Pylance has started to add red underlines in my code. I've managed to fix most of it, but I have problems with <code>Optional</code> typing:</p> <pre><code>def realms(testenv:str, active:bool=True) -&gt; Optional[Dict]:
    &quot;&quot;&quot; return a dict of active/inactive realms with info from keycloak, on error return None &quot;&quot;&quot;
    dat = api(testenv, endpoint=&quot;/admin/realms&quot;)
    :

all_realms = list(realms(testenv, active=False).keys())
                                                ------
</code></pre> <p>Pylance says:</p> <blockquote> <p>&quot;keys&quot; is not a known attribute of &quot;None&quot;</p> </blockquote> <p>If I change the <code>def</code> prototype to:</p> <pre><code>def realms(testenv:str, active:bool=True) -&gt; Dict:
</code></pre> <p>I get small issues with <code>return None</code> that I can fix by raising exceptions, or by just removing it.</p> <p>The code workaround can be a two-liner:</p> <pre><code>all_realms_dict = realms(testenv, active=False)
all_realms = list(all_realms_dict.keys()) if all_realms_dict else []
</code></pre> <p>Should I stop using <code>Optional</code> and switch to raising exceptions? Is returning <code>None</code> a bad idea?</p>
<python><python-typing><pylance>
2025-06-04 09:20:18
1
3,731
MortenB
79,652,378
93,684
How can I implement multiple OAuth provider parameters for a multi-tenant setup
<p>I am using Superset with SSO enabled via Cognito in a multi-tenant environment. Each tenant uses a separate user pool. Is it possible to plug in the connection parameters based on the tenant, so that each tenant is directed to their correct user pool when connecting? I have the connection parameters for each tenant, and I already have a way to distinguish each tenant.</p>
<python><flask><oauth-2.0><apache-superset><authlib>
2025-06-04 08:22:06
0
390
tamla83
79,652,218
2,947,469
Tensorflow - validation metric does not show up, it gets the same name as the train metric
<p>I am using <code>model.compile(metrics=[MyMetric])</code>.</p> <p>I was wondering why I see both loss and val_loss, but only my_metric and not val_my_metric, after the evaluation at the end of each epoch completes.</p> <p>I have therefore debugged the TensorFlow code. I can see that in trainer.py:fit() it is OK:</p> <p><a href="https://i.sstatic.net/CjZjnKrk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CjZjnKrk.png" alt="enter image description here" /></a></p> <p>but in CallbackList.on_epoch_end the dict is flattened using <code>logs = python_utils.pythonify_logs(logs)</code>, therefore the keys from compile_metrics and val_compile_metrics are saved under the same key: <a href="https://i.sstatic.net/A2fUDjS8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/A2fUDjS8.png" alt="enter image description here" /></a></p> <p>Note that I did not change anything in the Trainer logic. I just overrode train_step and test_step, as documented on the web:</p> <pre><code>def update_metrics(self, loss, f1, f2):
    # From Chollet's Keras docs:
    for metric in self.metrics:
        if metric.name == &quot;loss&quot;:
            metric.update_state(loss)
        else:
            metric.update_state(f1, f2)

def test_step(self, data):
    &quot;&quot;&quot;
    Args:
        data (tf.Tensor): Input batch of shape [B, 2, H, W, C]
    Returns:
        dict: Dictionary containing the loss
    &quot;&quot;&quot;
    f1, f2, p1, p2, z1, z2 = self(data, training=False)
    loss = self.compute_loss(p1, p2, z1, z2)
    self.update_metrics(loss, f1, f2)

    # Return metrics
    return {m.name: m.result() for m in self.metrics}
</code></pre>
<python><tensorflow><keras>
2025-06-04 06:32:36
0
1,827
Adam
79,652,036
10,531,186
Problems with streamlit using MCP client
<p>Having issues with Streamlit, it does not seem compatible with an MCP client. Getting <code>NotImplemented</code> error.</p> <p>This is the relevant streamlit code:</p> <pre class="lang-py prettyprint-override"><code>available_scripts = [&quot;mcp_server1.py&quot;, &quot;mcp_server2.py&quot;] selected_script = st.selectbox(&quot;MCP server script&quot;, available_scripts) if st.button(&quot;Enable MCP server&quot;): mcp_client = MCPClient() asyncio.run(mcp_client.connect_to_server(selected_script)) </code></pre> <p>Now my <code>MCPClient</code> looks like this (based on <a href="https://modelcontextprotocol.io/quickstart/client" rel="nofollow noreferrer">MCP quickstart</a>):</p> <pre class="lang-py prettyprint-override"><code>class MCPClient: def __init__(self): self.session: ClientSession | None = None self.exit_stack = AsyncExitStack() async def connect_to_server(self, server_script_path: str): is_python = server_script_path.endswith(&quot;.py&quot;) is_js = server_script_path.endswith(&quot;.js&quot;) if not (is_python or is_js): raise ValueError(&quot;Server script must be a .py or .js file&quot;) command = &quot;python&quot; if is_python else &quot;node&quot; server_params = StdioServerParameters( command=command, args=[server_script_path], env=None ) stdio_transport = await self.exit_stack.enter_async_context(stdio_client(server_params)) self.stdio, self.write = stdio_transport self.session = await self.exit_stack.enter_async_context(ClientSession(self.stdio, self.write)) await self.session.initialize() ... </code></pre> <p>If I do this, I get a strange <code>NotImplemented</code> error. I will paste the error at the end, it looks like a streamlit thing.</p> <p>What I discovered after debugging a bit is that the cause is <code>await self.exit_stack.enter_async_context(...)</code>. If I remove everything after that and replace it with <code>await asyncio.sleep(3)</code>, then it waits 3 seconds and does not fail.</p> <p>I have looked around, but have not found a solution. I have seen somewhere to try:</p> <pre><code>import nest_asyncio nest_asyncio.apply() </code></pre> <p>But either I am not putting it in the right place or this is not the problem, because I have seen no changes.</p> <p>Please help! 
Any idea why it is failing?</p> <pre><code>File &quot;C:\Users\danid\Documents\Personal\Projects\llm_chat_room\.venv\Lib\site-packages\streamlit\runtime\scriptrunner\exec_code.py&quot;, line 121, in exec_func_with_error_handling result = func() ^^^^^^ File &quot;C:\Users\danid\Documents\Personal\Projects\llm_chat_room\.venv\Lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py&quot;, line 595, in code_to_exec self._session_state.on_script_will_rerun( File &quot;C:\Users\danid\Documents\Personal\Projects\llm_chat_room\.venv\Lib\site-packages\streamlit\runtime\state\safe_session_state.py&quot;, line 68, in on_script_will_rerun self._state.on_script_will_rerun(latest_widget_states) File &quot;C:\Users\danid\Documents\Personal\Projects\llm_chat_room\.venv\Lib\site-packages\streamlit\runtime\state\session_state.py&quot;, line 558, in on_script_will_rerun self._call_callbacks() File &quot;C:\Users\danid\Documents\Personal\Projects\llm_chat_room\.venv\Lib\site-packages\streamlit\runtime\state\session_state.py&quot;, line 571, in _call_callbacks self._new_widget_state.call_callback(wid) File &quot;C:\Users\danid\Documents\Personal\Projects\llm_chat_room\.venv\Lib\site-packages\streamlit\runtime\state\session_state.py&quot;, line 272, in call_callback callback(*args, **kwargs) File &quot;C:\Users\danid\Documents\Personal\Projects\llm_chat_room\app.py&quot;, line 146, in toggle_mcp asyncio.run(mcp_client.test(st.session_state[&quot;mcp_script&quot;])) File &quot;C:\Users\danid\Documents\Personal\Projects\llm_chat_room\.venv\Lib\site-packages\nest_asyncio.py&quot;, line 30, in run return loop.run_until_complete(task) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\danid\Documents\Personal\Projects\llm_chat_room\.venv\Lib\site-packages\nest_asyncio.py&quot;, line 98, in run_until_complete return f.result() ^^^^^^^^^^ File &quot;C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.2544.0_x64__qbz5n2kfra8p0\Lib\asyncio\futures.py&quot;, line 203, in result raise self._exception.with_traceback(self._exception_tb) File &quot;C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.2544.0_x64__qbz5n2kfra8p0\Lib\asyncio\tasks.py&quot;, line 277, in __step result = coro.send(None) ^^^^^^^^^^^^^^^ File &quot;C:\Users\danid\Documents\Personal\Projects\llm_chat_room\mcp_client.py&quot;, line 52, in test stdio_transport = await self.exit_stack.enter_async_context(stdio_client(server_params)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.2544.0_x64__qbz5n2kfra8p0\Lib\contextlib.py&quot;, line 650, in enter_async_context result = await _enter(cm) ^^^^^^^^^^^^^^^^ File &quot;C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.2544.0_x64__qbz5n2kfra8p0\Lib\contextlib.py&quot;, line 210, in __aenter__ return await anext(self.gen) ^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\danid\Documents\Personal\Projects\llm_chat_room\.venv\Lib\site-packages\mcp\client\stdio\__init__.py&quot;, line 115, in stdio_client process = await _create_platform_compatible_process( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\danid\Documents\Personal\Projects\llm_chat_room\.venv\Lib\site-packages\mcp\client\stdio\__init__.py&quot;, line 230, in _create_platform_compatible_process process = await create_windows_process(command, args, env, errlog, cwd) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
&quot;C:\Users\danid\Documents\Personal\Projects\llm_chat_room\.venv\Lib\site-packages\mcp\client\stdio\win32.py&quot;, line 85, in create_windows_process process = await anyio.open_process( ^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\danid\Documents\Personal\Projects\llm_chat_room\.venv\Lib\site-packages\anyio\_core\_subprocesses.py&quot;, line 190, in open_process return await get_async_backend().open_process( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\danid\Documents\Personal\Projects\llm_chat_room\.venv\Lib\site-packages\anyio\_backends\_asyncio.py&quot;, line 2561, in open_process process = await asyncio.create_subprocess_exec( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.2544.0_x64__qbz5n2kfra8p0\Lib\asyncio\subprocess.py&quot;, line 223, in create_subprocess_exec transport, protocol = await loop.subprocess_exec( ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.2544.0_x64__qbz5n2kfra8p0\Lib\asyncio\base_events.py&quot;, line 1708, in subprocess_exec transport = await self._make_subprocess_transport( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.2544.0_x64__qbz5n2kfra8p0\Lib\asyncio\base_events.py&quot;, line 503, in _make_subprocess_transport raise NotImplementedError </code></pre>
<python><python-asyncio><streamlit><nest-asyncio><model-context-protocol>
2025-06-04 01:10:22
0
324
someguy
79,652,003
16,635,269
How do I upload unstructured documents with metadata to a Google Cloud Platform data store with the Python SDK?
<p>I am trying to upload unstructured data to a Google Cloud Platform (GCP) data store from a GCP Storage Bucket using the Python SDK. I want to use unstructured data with meta data which is mentioned <a href="https://cloud.google.com/generative-ai-app-builder/docs/prepare-data?_gl=1*cpveo*_ga*NTQ5OTkwMTE4LjE3NDY1NTM1MDI.*_ga_WH2QY8WWF5*czE3NDg5NzY0OTkkbzI4JGcxJHQxNzQ4OTg3NjAxJGo3JGwwJGgw#storage-unstructured" rel="nofollow noreferrer">here</a>. The process involves:</p> <ol> <li>Creating a GCP Data Store which I have done according to <a href="https://cloud.google.com/generative-ai-app-builder/docs/create-data-store-es#storage-import-once" rel="nofollow noreferrer">this documentation</a>. I have setup all the necessary access and set the <code>CONFIG</code> to <code>CONTENT REQUIRED</code>.</li> <li>Create a GCP Cloud Storage Bucket which contains 4 PDF documents (for now) and a <code>.jsonl</code> meta data file which are all at the root of my bucket.</li> <li>Populating the Data Store with a Document Import Request using documents from a Google Cloud Storage Bucket.</li> </ol> <p>The code I am attempting to use for Point 3 is below which I copied from <a href="https://cloud.google.com/generative-ai-app-builder/docs/create-data-store-es#storage-import-once" rel="nofollow noreferrer">Google's documentation</a> (second code snippet under &quot;Import Documents&quot;).</p> <pre class="lang-py prettyprint-override"><code> client_options = ( ClientOptions(api_endpoint=f&quot;{LOCATION}-discoveryengine.googleapis.com&quot;) if LOCATION != &quot;global&quot; else None ) # Create a client client = discoveryengine.DocumentServiceClient(client_options=client_options) parent = client.branch_path( project=PROJECT_ID, location=LOCATION, data_store=DATA_STORE_ID, branch=&quot;default_branch&quot;, ) request = discoveryengine.ImportDocumentsRequest( parent=parent, gcs_source=discoveryengine.GcsSource( # Multiple URIs are supported input_uris=[GCS_URI], # Options: # - `content` - Unstructured documents (PDF, HTML, DOC, TXT, PPTX) # - `custom` - Unstructured documents with custom JSONL metadata # - `document` - Structured documents in the discoveryengine.Document format. 
# - `csv` - Unstructured documents with CSV metadata data_schema=&quot;custom&quot;, ), id_field=&quot;id&quot;, # Options: `FULL`, `INCREMENTAL` reconciliation_mode=discoveryengine.ImportDocumentsRequest.ReconciliationMode.FULL, ) # Make the request operation = client.import_documents(request=request) print(f&quot;Waiting for operation to complete: {operation.operation.name}&quot;) response = operation.result() print(response) </code></pre> <p>The <code>GCS_URI</code> variable is the link to the gsutil URI of the <code>metadata.jsonl</code> file (<code>gs://meta-data-testing/metadata.jsonl</code>), and that file looks like this:</p> <pre class="lang-none prettyprint-override"><code>{&quot;id&quot;: &quot;1&quot;, &quot;structData&quot;: {&quot;title&quot;: &quot;Coldsmokesubmittal&quot;, &quot;category&quot;: &quot;212027&quot;}, &quot;content&quot;: {&quot;mimeType&quot;: &quot;application/pdf&quot;, &quot;uri&quot;: &quot;gs://meta-data-testing/ColdSmokeSubmittal.pdf&quot;}} {&quot;id&quot;: &quot;2&quot;, &quot;structData&quot;: {&quot;title&quot;: &quot;Defssubmittal&quot;, &quot;category&quot;: &quot;212027&quot;}, &quot;content&quot;: {&quot;mimeType&quot;: &quot;application/pdf&quot;, &quot;uri&quot;: &quot;gs://meta-data-testing/DEFSSubmittal.pdf&quot;}} {&quot;id&quot;: &quot;3&quot;, &quot;structData&quot;: {&quot;title&quot;: &quot;Cmu Submittal&quot;, &quot;category&quot;: &quot;222039&quot;}, &quot;content&quot;: {&quot;mimeType&quot;: &quot;application/pdf&quot;, &quot;uri&quot;: &quot;gs://meta-data-testing/CMU_Submittal.pdf&quot;}} {&quot;id&quot;: &quot;4&quot;, &quot;structData&quot;: {&quot;title&quot;: &quot;Concrete Mix Submittal&quot;, &quot;category&quot;: &quot;222039&quot;}, &quot;content&quot;: {&quot;mimeType&quot;: &quot;application/pdf&quot;, &quot;uri&quot;: &quot;gs://meta-data-testing/Concrete_Mix_Submittal.pdf&quot;}} </code></pre> <p>When I run my code, I get this response:</p> <pre><code>error_samples { code: 3 message: &quot;To create document without content, content config of data store must be NO_CONTENT.&quot; details { type_url: &quot;type.googleapis.com/google.rpc.ResourceInfo&quot; value: &quot;\022\'gs://meta-data-testing/metadata.jsonl:1&quot; } } </code></pre> <p>Which repeats 3 more times for each line of my <code>.jsonl</code> file.</p> <p><strong>Please if anyone has tried adding unstructured documents with meta data, please tell me where I am going wrong or a method that you were able to use to successfully execute this process</strong></p> <hr /> <h2>My Solution Attempts</h2> <h3>Change Data Store Config</h3> <p>I see it is telling me to change the data store config to <code>NO_CONTENT</code> but when I do that, only the meta data is uploaded to the data store and I am not able to actually perform a search on the documents via my Vertex AI app. I think this error might be a secondary output from whatever the real issue is.</p> <h3>Upload via GCP</h3> <p>I have tried manually uploading on GCP itself:</p> <p><a href="https://i.sstatic.net/CbwtyDtr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CbwtyDtr.png" alt="Upload Options" /></a></p> <p>but I get this error when I try:</p> <pre class="lang-json prettyprint-override"><code>message: &quot;INVALID_FORMAT gcsInputuri&quot; status: { @type: &quot;type.googleapis.com/google.rpc.Status&quot; code: 3 message: &quot;The provided GCS URI has invalid unstructured data format. 
Please provide a valid GCS path in either NDJSON(.ndjson) or JSON Lines(.jsonl) format.&quot; } </code></pre>
<python><google-cloud-platform><google-cloud-storage><google-cloud-datastore><google-cloud-vertex-ai>
2025-06-03 23:47:25
2
301
Fruity Fritz
79,652,001
1,084,875
GitLab CI pipeline for a uv Python project does not cache environment and dependencies
<p>I have a GitLab runner on a Linux machine that uses the shell executor. This runner is for testing a Python project that uses <a href="https://docs.astral.sh/uv/" rel="nofollow noreferrer">uv</a> for virtual environment and dependency management. I have the Python project setup to use the runner with the <code>.gitlab-ci.yml</code> file shown below. I followed the <a href="https://docs.astral.sh/uv/guides/integration/gitlab/" rel="nofollow noreferrer">caching</a> instructions in the uv docs for GitLab CI but nothing seems to get cached. Every time the pipeline runs, the cache and virtual environment are removed and then created again (see log below). The project has a lot of dependencies that can take several minutes to download. How can I reuse the uv cache and virtual environment with GitLab CI so the dependencies are not downloaded every time a job runs?</p> <pre class="lang-yaml prettyprint-override"><code># GitLab CI for custom runner instance with shell executor # Requires uv installed for gitlab-runner user stages: - check - test variables: UV_CACHE_DIR: .uv-cache cache: - key: files: - uv.lock paths: - $UV_CACHE_DIR checks: stage: check script: - uv sync - uv run ruff check . - uv run ruff format --check . - uv cache prune --ci tests: stage: test script: - uv sync - uv run pytest - uv cache prune --ci </code></pre> <pre class="lang-none prettyprint-override"><code>Fetching changes with git depth set to 20... Reinitialized existing Git repository in /home/cloud/builds/t2_jESXhP/0/ai4ops/risk-analysis/.git/ Checking out 94d41e64 as detached HEAD (ref is fix-checks)... Removing .uv-cache/ Removing .venv/ Skipping Git submodules setup Restoring cache 00:00 Checking cache for 0_uv-8be8d005e56e2d62177fb81b4081e3dc73657342-non_protected... Runtime platform arch=amd64 os=linux pid=184162 revision=4d7093e1 version=18.0.2 No URL provided, cache will not be downloaded from shared cache server. Instead a local version of cache will be extracted. WARNING: Cache file does not exist Failed to extract cache Executing &quot;step_script&quot; stage of the job script 00:42 $ uv sync Using CPython 3.12.10 Creating virtual environment at: .venv Resolved 215 packages in 1ms Downloading pygments (1.2MiB) Downloading setuptools (1.1MiB) Downloading numpy (15.8MiB) Downloading tokenizers (2.9MiB) Downloading kubernetes (1.9MiB) Downloading pymupdf (19.1MiB) Downloading llama-index-core (7.3MiB) Downloading sympy (5.9MiB) Downloading nvidia-cusparselt-cu12 (143.1MiB) Downloading tiktoken (1.1MiB) Downloading sqlalchemy (3.2MiB) Downloading streamlit (9.4MiB) Downloading nltk (1.4MiB) Downloading hf-xet (4.9MiB) Downloading grpcio (5.6MiB) Downloading nvidia-cublas-cu12 (346.6MiB) </code></pre>
<python><gitlab-ci><gitlab-ci-runner><uv>
2025-06-03 23:43:32
1
9,246
wigging
79,651,927
1,747,834
Installing my own single-file Python project
<p>I have a rather small package, with only the single <code>__init__.py</code> at the top-level, which I attempt to install with <code>pip-3.11 install --user .</code>.</p> <p>Pip duly processes my <code>pyproject.toml</code>, checking all of the dependencies, and ends with these reassuring messages:</p> <pre class="lang-none prettyprint-override"><code>Building wheels for collected packages: mypackage Building wheel for mypackage (pyproject.toml) ... done Created wheel for mypackage: filename=mypackage-0.1-py3-none-any.whl size=3251 sha256=4a4fbb798d9557c52b4751b545cebcca75501a238092d02d055aea4147680a2f Stored in directory: /tmp/pip-ephem-wheel-cache-nm8w_8n1/wheels/b7/e9/c7/84dde047e428daca2ad6421e22f0c84327b33be5ac57e2c8e2 Successfully built mypackage Installing collected packages: mypackage Successfully installed mypackage-0.1 </code></pre> <p>After this, I see my <code>__init__.py</code> copied into the newly-created <code>build/lib/</code> subdirectory. The <code>~/.local/lib/python3.11/site-packages/mypackage-0.1.dist-info bnysms.egg-info</code> is also created, with various files describing the package in it.</p> <p>But there is no <code>~/.local/lib/python3.11/site-packages/mypackage/</code> itself -- and my package is not, in fact, installed. Attempting to run <code>python3.11 -m mypackage</code> responds with <code>No module named mypackage</code>.</p> <p>The <code>pip-3.11 freeze | grep mypackage</code> lists it without version: <code>mypackage @ file:///home/me/mypackage</code>.</p> <p>Following @sinoroc's suggestion, I renamed the <code>__init__.py</code>into <code>myproject.py</code> and modified the <code>pyproject.toml</code> thus:</p> <pre class="lang-ini prettyprint-override"><code>[project] authors = [ {name = &quot;John Smith&quot;, email = &quot;John_Smith@example.com&quot;} ] name = &quot;mypackage&quot; description = &quot;Do the thing&quot; version = &quot;0.1&quot; requires-python = &quot;&gt;= 3.6&quot; dependencies = [ &quot;requests&quot;, &quot;requests_kerberos&quot;, &quot;pyspnego&quot; ] readme = &quot;README.md&quot; keywords = [&quot;SMS&quot;, &quot;secrets&quot;] classifiers = [ &quot;Development Status :: 4 - Beta&quot;, &quot;Topic :: Software Development :: Secrets&quot; ] [project.optional-dependencies] decodejwt = [&quot;pyjwt&quot;] [tool.setuptools] py-modules = [&quot;myproject&quot;] </code></pre> <p>With this I can create a &quot;wheel&quot; using <code>python3.11 -m build --wheel --outdir /tmp .</code>. The resulting <code>/tmp/mypackage-0.1-py3-none-any.whl</code> contains the following files:</p> <ul> <li><code>mypackage.py</code></li> <li><code>mypackage-0.1.dist-info/METADATA</code></li> <li><code>mypackage-0.1.dist-info/WHEEL</code></li> <li><code>mypackage-0.1.dist-info/top_level.txt</code></li> <li><code>mypackage-0.1.dist-info/RECORD</code></li> </ul> <p>This .whl file can then be used to install the package: <code>pip-3.11 install --user --find-links=/tmp/ mypackage</code>. The package then works -- it can be both imported and/or used from command-line <code>python3.11 -m mypackage</code>. (The <code>README.md</code> is <em>not</em> included for some reason, but I don't really care.)</p> <p>Life is good, although I don't understand, <em>why</em> this renaming was necessary :(</p>
<python><pip><python-packaging>
2025-06-03 21:46:04
0
4,246
Mikhail T.
79,651,863
14,122
Cleanup after waiting for first asyncio task to complete: Is it safe to swallow CancelledError when raised by task.get_exception()?
<p>I have an asyncio function that spawns multiple tasks, any of which can return a result, in which case we want to shut down all the others.</p> <p>Right now, it does so in a matter something like the following:</p> <pre class="lang-py prettyprint-override"><code>all_tasks = [asyncio.create_task(ds.get() for ds in data_sources] try: done, pending = await asyncio.await( all_tasks, return_when=asyncio.FIRST_COMPLETED ) for task in done: if (ex := task.exception()): raise ex response = await task # more handling here, but let's pretend it's just... return response finally: for task in pending: try: task.cancel() await task except asyncio.CancelledError: continue ex = task.exception() if ex is not None: logger.warn(&quot;task failed&quot;, ex) for task in done: ex = task.exception() if ex is not None: logger.warn(&quot;task failed&quot;, ex) </code></pre> <p>My concern here is that swallowing <code>CancelledError</code> is <a href="https://docs.python.org/3/library/asyncio-task.html#task-cancellation" rel="nofollow noreferrer">explicitly documented</a> as bad practice in user code, with safe usage requiring invocation of <code>uncancel()</code>.</p> <p>Is this in fact unsafe? If so, what would need to be changed to have equivalent functionality without risking corrupting asyncio's internal state?</p> <hr /> <p><sub>(I've viewed <a href="https://stackoverflow.com/questions/77974525/what-is-the-right-way-to-await-cancelling-an-asyncio-task">a related question</a> which suggests using an <code>asyncio.TaskGroup</code>; however, a TaskGroup waits for all tasks to complete and -- as I read its documentation -- cancels the parent task if any child was cancelled, whereas I want to wait only for the <em>first</em> task to complete, and cancel other children without propagating that cancellation to the parent).</sub></p>
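<p>A minimal sketch of one shape this cleanup can take (assuming <code>asyncio.wait</code> is the call intended above and the same <code>data_sources</code>): instead of catching <code>CancelledError</code> in this coroutine, the cancellations we initiated are collected with <code>asyncio.gather(..., return_exceptions=True)</code>, so nothing has to be swallowed or <code>uncancel()</code>-ed here. This is only a sketch, not a verdict on whether the original pattern corrupts asyncio state:</p>
<pre class="lang-py prettyprint-override"><code>import asyncio

async def first_result(data_sources):
    tasks = [asyncio.create_task(ds.get()) for ds in data_sources]
    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
    pending = list(pending)
    try:
        winner = done.pop()
        return winner.result()  # re-raises the winning task's exception, if any
    finally:
        for task in pending:
            task.cancel()
        # gather() with return_exceptions=True absorbs the CancelledError raised
        # inside the children we just cancelled ourselves, without a try/except here.
        results = await asyncio.gather(*pending, return_exceptions=True)
        for task, result in zip(pending, results):
            if isinstance(result, BaseException) and not isinstance(result, asyncio.CancelledError):
                print(&quot;task failed:&quot;, result)
</code></pre>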
<python><python-asyncio><cancellation>
2025-06-03 20:33:47
0
299,045
Charles Duffy
79,651,861
1,581,090
How to extract intraday stock data from onvista with python and playwright/selenium?
<p>Using 3.9.6 and playwright on MacOS 14.7.5 I am trying to extract the public available intraday stock data from a webpage like</p> <pre><code>https://www.onvista.de/aktien/Airbus-Group-EADS-Aktie-NL0000235190 </code></pre> <p>To make the Intraday data load you have to click the &quot;Intraday&quot; button. Here is a script I tried but it seems it did not click the Intraday button</p> <p><a href="https://i.sstatic.net/kEt8o8jb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kEt8o8jb.png" alt="enter image description here" /></a></p> <pre><code>import asyncio from playwright.async_api import async_playwright from bs4 import BeautifulSoup async def run(): url = &quot;https://www.onvista.de/aktien/Airbus-Group-EADS-Aktie-NL0000235190&quot; async with async_playwright() as p: browser = await p.chromium.launch(headless=True) page = await browser.new_page() await page.goto(url) # Wait for the chart area to appear await page.wait_for_selector(&quot;svg&quot;) # Click the 'Intraday' button try: await page.click(&quot;text=Intraday&quot;) await page.wait_for_timeout(3000) # wait for chart to update except Exception as e: print(&quot;Couldn't click Intraday:&quot;, e) # Get the full HTML after the chart is rendered content = await page.content() # Parse with BeautifulSoup soup = BeautifulSoup(content, &quot;html.parser&quot;) paths = soup.find_all(&quot;path&quot;) print(&quot;Extracted SVG path data:&quot;) for path in paths: d_attr = path.get(&quot;d&quot;) if d_attr and len(d_attr) &gt; 100: # skip small icons etc. print(&quot;d:&quot;, d_attr[:150], &quot;...\n&quot;) await browser.close() # Run the async function asyncio.run(run()) </code></pre> <p>The output is</p> <pre><code>Couldn't click Intraday: Page.click: Timeout 30000ms exceeded. </code></pre> <p>Is there a way to fix that script? Or would selenium be the better choice?</p>
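<p>One hedged variant worth trying (assumptions: the click may be blocked by a cookie/consent overlay, which on German sites often lives in its own iframe, and the range switch is exposed as a button whose accessible name contains &quot;Intraday&quot;; the consent button label below is a guess):</p>
<pre class="lang-py prettyprint-override"><code># Replace the bare page.click(&quot;text=Intraday&quot;) block with a role-based locator
# and try to dismiss a possible consent dialog first.
try:
    # Label is an assumption; the real dialog may differ or sit inside an iframe,
    # in which case page.frame_locator(...) would be needed instead.
    await page.get_by_role(&quot;button&quot;, name=&quot;Alle akzeptieren&quot;).click(timeout=5000)
except Exception:
    pass  # no consent dialog appeared

intraday = page.get_by_role(&quot;button&quot;, name=&quot;Intraday&quot;)
await intraday.scroll_into_view_if_needed()
await intraday.click()
await page.wait_for_timeout(3000)  # give the chart time to redraw
</code></pre>
<p>Running once with <code>headless=False</code> also makes it easy to see what is actually covering the button.</p>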
<python><macos><selenium-webdriver><playwright><playwright-python>
2025-06-03 20:32:12
1
45,023
Alex
79,651,859
5,036,928
mpi4py: only rank 0 participating in scipy minimize after first iteration
<p>Given the code below, I am unable to remedy the fact that only rank 0 participates in evaluations of <code>Objective</code> after the first iteration of the (SciPy) minimizer. Obviously the rank!=0 workers finish their initial tasking and the minimizer does not reengage these workers. My question is <em>how</em> can I reengage them?</p> <pre><code>from scipy.optimize import minimize, OptimizeResult from mpi4py import MPI import numpy as np import logging logging.basicConfig(filename='job.log', level=logging.INFO) class Solver(): def __init__(self, SampleTimes, InitialArray): self.comm = MPI.COMM_WORLD self.rank = self.comm.Get_rank() self.size = self.comm.Get_size() self.SampleTimes = SampleTimes self.InitialArray = InitialArray self.Max = None def f(self, x_): return np.sum(x_) def Objective(self, x): logging.info(f&quot;Entering Objective on rank {self.rank}&quot;) self.x = self.comm.bcast(x if self.rank == 0 else None, root=0) logging.info(f&quot;Logging x: {self.x}&quot;) if not isinstance(self.x, OptimizeResult): tstep_select = np.array_split(self.SampleTimes, self.size)[self.rank] local_results = [] for t in tstep_select: logging.info(f&quot;Processing t={t} on rank {self.rank}&quot;) result = t*self.f(self.x) local_results.append( (t, result) ) logging.info(f&quot;Response for t={t}: {local_results[-1][-1]}&quot;) all_results = self.comm.gather(local_results, root=0) if self.rank==0: all_results = [item for sublist in all_results for item in sublist] all_results = np.array(all_results) all_results = all_results[all_results[:,0].argsort()] scalar = np.trapz(all_results[:,1], all_results[:,0]) return -scalar def Maximize(self,): if self.rank == 0: self.Max = minimize(self.Objective, self.InitialArray) print(self.Max) else: while not self.Max: self.Objective(None) self.Max = self.comm.bcast(self.Max if self.rank==0 else None, root=0) if __name__=='__main__': t_eval = np.linspace(0, 100, 100) a_init = np.random.rand(10) Instance = Solver(SampleTimes=t_eval, InitialArray=a_init) Instance.Maximize() </code></pre>
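<p>A pattern often used for this, shown as a self-contained sketch of the control flow only (the per-rank work below is a stand-in, not the real model): rank 0 broadcasts a small command before every objective evaluation and a stop command once <code>minimize</code> returns, so the other ranks stay in a loop that only exits when told to:</p>
<pre class="lang-py prettyprint-override"><code>from mpi4py import MPI
from scipy.optimize import minimize
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

def objective(x):
    # All ranks enter together; rank 0 is the only one holding the real x.
    x = comm.bcast(x if rank == 0 else None, root=0)
    local = np.sum(x) ** 2            # stand-in for the per-rank work
    return comm.allreduce(local)      # every rank sees the same scalar

if rank == 0:
    def driver(x):
        comm.bcast(&quot;work&quot;, root=0)    # wake the workers for this evaluation
        return objective(x)
    res = minimize(driver, np.random.rand(10))
    comm.bcast(&quot;stop&quot;, root=0)        # release the workers
    print(res.x)
else:
    while comm.bcast(None, root=0) == &quot;work&quot;:
        objective(None)
</code></pre>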
<python><parallel-processing><scipy><scipy-optimize><mpi4py>
2025-06-03 20:28:21
1
1,195
Sterling Butters
79,651,358
7,036,941
Python type-hinting: a Tkinter Event
<p>I'm learning how to create GUIs with TKinter in Python. I'm using VSCode as my editor and I have this piece of code:</p> <pre class="lang-py prettyprint-override"><code>[...] # create a button and bind an action to the button press event button:tk.Button = tk.Button(master=gridFrame, text=&quot;Button&quot;) button.bind(&quot;&lt;ButtonPress-1&gt;&quot;, self.play) [...] def play(self, event: tk.Event)-&gt;None: &quot;&quot;&quot;Hanlde a player's move&quot;&quot;&quot; clickedBtn: tk.Button = event.widget </code></pre> <p>Pylance complains on the type hint for event being of type <code>Event</code>: Expected type arguments for generic class &quot;Event&quot;. I've tried with this other hint:</p> <pre><code>def play(self, event: tk.Event[tk.Button])-&gt;None: </code></pre> <p>Which silences the warning, but when I try to run the code with this, I get a TypeError exception:</p> <pre><code> def play(self, event: Event[tk.Button])-&gt;None: ~~~~~^^^ TypeError: type 'Event' is not subscriptable </code></pre> <p>Is there a way to hint this or should I settle for <code>Any</code> or silence the warning and admit defeat?</p> <p>Thanks!</p>
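<p>For what it's worth, <code>tkinter.Event</code> is only generic in the typeshed stubs, not at runtime, so the subscripted form has to stay out of runtime evaluation. A small sketch of the usual workaround (either the <code>__future__</code> import shown here, or quoting the annotation as <code>&quot;tk.Event[tk.Button]&quot;</code>):</p>
<pre class="lang-py prettyprint-override"><code>from __future__ import annotations  # annotations are no longer evaluated at runtime

import tkinter as tk


class Game:
    def play(self, event: tk.Event[tk.Button]) -&gt; None:
        &quot;&quot;&quot;Handle a player's move.&quot;&quot;&quot;
        clicked_btn = event.widget  # the type checker now treats this as tk.Button
        clicked_btn.configure(text=&quot;clicked&quot;)
</code></pre>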
<python><python-typing><pyright>
2025-06-03 13:49:19
1
408
Joel Santos Rico
79,651,328
15,307,950
How do I render images sharply in a Jupyter notebook with VS Code when Windows scaling is enabled?
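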
<p>I want to display an image from an array in a Jupyter notebook (inline) in Visual Studio Code. I'm running Windows 11 on a high DPI monitor with scaling set to 150%. Pixels don't render sharp in the notebook. 1 pixel in the source image should be 1 pixel on the screen. If I want to scale the image using integer scaling I can just scale the source image using nearest neighbor. I tried matplotlib and pillow. I can't get it sharp. Here is my python code:</p> <pre class="lang-py prettyprint-override"><code>from PIL import Image from IPython.display import display import numpy as np bw_data = np.array([ [0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0] ], dtype=np.uint8) # mode '1' = 1-bit black/white bw_image = Image.fromarray(bw_data * 255).convert(mode='1') display(bw_image) </code></pre> <p>output, Zoomed in with Gimp: <a href="https://i.sstatic.net/pBRHHcYf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pBRHHcYf.png" alt="Zoomed in via Gimp" /></a></p> <p>This is not sharp.</p> <p>Saving the image does produce a sharp image:</p> <pre class="lang-py prettyprint-override"><code>bw_image.save('image.png') </code></pre> <p>Result, Zoomed in with Gimp:</p> <p><a href="https://i.sstatic.net/KvJFp7Gy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KvJFp7Gy.png" alt="Zoomed in with Gimp" /></a></p> <p>I use iconico magnifier V2.4 (with scaling disabled on executable) to inspect the rendered images on pixel level. And using image editors such as Gimp or paint to inspect saved images or saved screenshots (since image viewers use upscaling when zooming in instead of nearest neighbor).</p> <p>I prefer not to write the image to storage first, but keep it in RAM. But this is not a hard requirement. The only requirement is that it is inline and not a separate window as that is easy to get working.</p> <p>Edit: This is how <a href="https://stackoverflow.com/a/79651347/15307950">this answer</a> (which just scales the source image)(renders on my machine: <a href="https://i.sstatic.net/XI5ZfhTc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XI5ZfhTc.png" alt="Zoomed in with Gimp" /></a></p> <p>As you can see scaling the source image does not fix the rendering issue. It just slightly masks it because the edges are relatively smaller. The edges are still blurry. Rending on pixel level is what is faulty. How do I display true pixels?</p>
<python><visual-studio-code><jupyter-notebook><highdpi>
2025-06-03 13:21:24
2
726
elechris
79,651,249
4,247,599
logger from Jupyter notebook: does not work unless calling the root logger at least once
<p>I would like to see the logging produced by some Python code from within a Jupyter notebook (logging version '0.5.1.2', python 3.12).</p> <p>If I run the following code:</p> <pre class="lang-py prettyprint-override"><code>import logging logger = logging.getLogger() logger.setLevel(logging.INFO) logger.info(&quot;logging test&quot;) </code></pre> <p>I get no output.</p> <p>But if I call the root logging first in a notebook cell:</p> <pre class="lang-py prettyprint-override"><code>import logging logger = logging.getLogger() logger.setLevel(logging.INFO) logging.info(&quot;logging test&quot;) # calling the root logger </code></pre> <p>I get the expected <code>&quot;logging test&quot;</code>, and every other subsequent logged message, like:</p> <pre class="lang-py prettyprint-override"><code>logger.info(&quot;another logging test&quot;) </code></pre> <p>works correctly.</p> <p>So within the same session, if I do not call the root logger at least once, no logging message is shown. But if I run the root logger at least once, all works correctly.</p> <p>Any idea or explanation why is it the case, or if I am doing something wrong?</p>
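<p>One explanation consistent with this behaviour: a freshly started kernel's root logger has no handlers, so <code>logger.info(...)</code> falls back to logging's &quot;last resort&quot; handler, whose level is WARNING, and the INFO record is dropped; the module-level <code>logging.info(...)</code> call implicitly runs <code>logging.basicConfig()</code>, which attaches a <code>StreamHandler</code>, so everything after that is shown. A sketch that configures the handler explicitly so the first cell works on its own (assuming the notebook frontend has not already installed a handler):</p>
<pre class="lang-py prettyprint-override"><code>import logging

logger = logging.getLogger()
if not logger.handlers:                      # nothing attached in this kernel yet
    handler = logging.StreamHandler()        # what basicConfig() would have added
    handler.setFormatter(logging.Formatter(&quot;%(levelname)s:%(name)s:%(message)s&quot;))
    logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info(&quot;logging test&quot;)                  # shows without ever calling logging.info()
</code></pre>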
<python><logging><python-logging>
2025-06-03 12:40:21
2
4,299
SeF
79,651,120
17,580,381
Formatting integers in pandas dataframe
<p>I've read <a href="https://pandas.pydata.org/docs/reference/api/pandas.io.formats.style.Styler.format.html" rel="nofollow noreferrer">the documentation</a> and simply cannot understand why I can't seem to achieve my objective.</p> <p>All I want to do is output integers with a thousands separator where appropriate.</p> <p>I'm loading a spreadsheet from my local machine that is in the public domain <a href="https://www.nsandi.com/files/asset/xlsx/prize-june-2025.xlsx" rel="nofollow noreferrer">here</a></p> <p>Here's my MRE:</p> <pre><code>import pandas as pd WORKBOOK = &quot;/Volumes/Spare/Downloads/prize-june-2025.xlsx&quot; def my_formatter(v): return f&quot;{v:,d}&quot; if isinstance(v, int) else v df = pd.read_excel(WORKBOOK, header=2, usecols=&quot;B,C,E:H&quot;) print(df.dtypes) df.style.format(my_formatter) print(df.head()) </code></pre> <p><strong>Output:</strong></p> <pre><code>Prize Value int64 Winning Bond NO. object Total V of Holding int64 Area object Val of Bond int64 Dt of Pur datetime64[ns] dtype: object Prize Value Winning Bond NO. Total V of Holding Area Val of Bond Dt of Pur 0 1000000 103FE583469 50000 Stockport 5000 2005-11-29 1 1000000 352AC359547 50000 Edinburgh, City Of 5000 2019-02-11 2 100000 581WF624503 50000 Birmingham 20000 2024-06-03 3 100000 265SM364866 50000 Hertfordshire 32500 2016-01-31 4 100000 570HE759643 11000 Hertfordshire 11000 2024-02-22 </code></pre> <p>I have determined that <em>my_formatter()</em> is never called and I have no idea why.</p>
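<p>A likely reason the formatter never runs: <code>df.style.format(...)</code> returns a new <code>Styler</code> object and does not change <code>df</code>, and the formatter is only invoked when that Styler is rendered (displayed in a notebook, or via <code>to_html</code>); <code>print(df.head())</code> prints the plain DataFrame and bypasses it entirely. Note also that the values are NumPy integer scalars, not instances of Python <code>int</code>, so the <code>isinstance</code> check would skip them even when the Styler is rendered. A hedged sketch of both routes, reusing the frame loaded above:</p>
<pre class="lang-py prettyprint-override"><code># Route 1: keep and render the Styler itself (e.g. display(styled) in a notebook).
styled = df.head().style.format(thousands=&quot;,&quot;)

# Route 2: for plain print() output, format the integer columns directly.
int_cols = df.select_dtypes(&quot;int64&quot;).columns
out = df.head().copy()
for col in int_cols:
    out[col] = out[col].map(lambda v: f&quot;{int(v):,d}&quot;)
print(out)
</code></pre>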
<python><pandas>
2025-06-03 11:12:10
1
28,997
Ramrab
79,651,081
5,157,277
Multivalued column cannot be transformed
<p>Im working with Stackoverflow 2024 survey. In the csv file there are several multivalued variables (separated by ;). I want to apply One-hot encoding to the variables <strong>Employment</strong> and <strong>LanguageAdmire</strong> by use <code>MultiLabelBinarizer</code>. However, my code works only for the first one. It fails for the second one.</p> <pre><code>import pandas as pd import gdown # pip install gdown from sklearn.preprocessing import MultiLabelBinarizer file_id = '1ul_F8Moo9jIGG5pAhUtYz-dIktQXp1Wf' url = f'https://drive.google.com/uc?id={file_id}' output = 'survey_results_public.csv' gdown.download(url, output, quiet=False) # It takes some seconds (150MB csv file) df = pd.read_csv(output) df.drop('ResponseId', axis=1, inplace=True) df=df[~df.duplicated(keep='first')].copy() #df['LanguageAdmired'].fillna('Other', inplace=True) df['LanguageAdmired'] = df['LanguageAdmired'].fillna('Other') df['LanguageAdmired'] = df['LanguageAdmired'].str.split(';') df['Employment'] = df['Employment'].str.split(';') # Create instance of binarizer and fit_transform the 'Employment' column mlb = MultiLabelBinarizer() # Apply one-hot encoding to the 'Employment' column binary_labels = mlb.fit_transform(df['Employment']) # Convert the 'Employment' binary labels to a DataFrame df_labels = pd.DataFrame(binary_labels, columns=['Employment_' + c for c in mlb.classes_]) # Concatenate the original DataFrame with the new one containing binary labels df = pd.concat([df, df_labels], axis=1).copy() # Create instance of binarizer and fit_transform the 'Employment' column mlb2 = MultiLabelBinarizer() # Apply one-hot encoding to the 'LanguageAdmired' column binary_labels = mlb2.fit_transform(df['LanguageAdmired']) # Convert the 'LanguageAdmired' binary labels to a DataFrame df_labels = pd.DataFrame(binary_labels, columns=['LanguageAdmired_' + c for c in mlb2.classes_]) # Concatenate the original DataFrame with the new one containing binary labels df = pd.concat([df, df_labels], axis=1) df.shape </code></pre> <p>It fails here:</p> <pre><code>binary_labels = mlb2.fit_transform(df['LanguageAdmired']) </code></pre> <p>The error:</p> <pre><code> 826 class_mapping = defaultdict(int) 827 class_mapping.default_factory = class_mapping.__len__ --&gt; 828 yt = self._transform(y, class_mapping) 830 # sort classes and reorder columns 831 tmp = sorted(class_mapping, key=class_mapping.get) ... --&gt; 901 for label in labels: 902 try: 903 index.add(class_mapping[label]) TypeError: 'float' object is not iterable </code></pre>
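<p>A possible cause (an assumption, since it depends on the data): dropping duplicates leaves gaps in <code>df</code>'s index while <code>df_labels</code> gets a fresh 0..n-1 index, so <code>pd.concat(..., axis=1)</code> aligns on mismatched labels and introduces rows whose <code>LanguageAdmired</code> is <code>NaN</code> (a float), which is exactly what the second binarizer then chokes on. A sketch of the realignment, continuing from the code above:</p>
<pre class="lang-py prettyprint-override"><code># Realign the index after dropping duplicates so concat cannot create NaN rows.
df = df[~df.duplicated(keep=&quot;first&quot;)].reset_index(drop=True)

binary_labels = mlb.fit_transform(df[&quot;Employment&quot;])
df_labels = pd.DataFrame(
    binary_labels,
    columns=[&quot;Employment_&quot; + c for c in mlb.classes_],
    index=df.index,          # keep the two frames explicitly aligned
)
df = pd.concat([df, df_labels], axis=1)
# ...and pass the same index= argument when building the LanguageAdmired labels.
</code></pre>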
<python><data-preprocessing><multivalue>
2025-06-03 10:42:09
1
843
Lev
79,650,791
8,602,940
Custom embedder and Custom knowledge source in CrewAI - getting KeyError: 'OPENAI_API_KEY'
<p>I have setup a Custom Embedder to use with knowledge and tools, but keep getting the error:</p> <blockquote> <p>litellm.AuthenticationError: AuthenticationError: OpenAIException - The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable</p> </blockquote> <p>The custom embedder I'm using is a privately hosted azure openai model. which can be accessed using certain API along with retrieved auth token.</p> <p>Here's the Custom Embedder class:</p> <pre><code>class Embedding(EmbeddingFunction): &quot;&quot;&quot;Custom EmbeddingFunction implementation for Azure OpenAI Embeddings with OAuth authentication. This class handles text embedding requests to Azure OpenAI's embedding models. It supports authentication via OAuth with Microsoft Identity Platform, handles token management, and processes embedding responses in various formats. &quot;&quot;&quot; def __init__(self): &quot;&quot;&quot;Initialize the embedding function.&quot;&quot;&quot; self.token_manager = TokenManager() # Initialize token manager def _encoding(self) -&gt; tiktoken.Encoding: &quot;&quot;&quot;Return the encoding function for token counting. This method uses tiktoken to get the encoding for the specified model. If tiktoken is not available, it falls back to a simple approximation. &quot;&quot;&quot; try: return tiktoken.get_encoding(&quot;cl100k_base&quot;) except Exception as e: logger.warning( f&quot;Failed to load tiktoken encoding for embeddings: {e}. Falling back to approximate token counting.&quot; ) raise e def __call__( self, messages: Documents, ) -&gt; Embeddings: &quot;&quot;&quot;Call the Azure OpenAI Embedding API to generate embeddings. This method follows the CrewAI BaseLLM interface and handles both single text and batch embedding scenarios. 
Args: messages: String or list of strings to embed Returns: Embeddings: A List[List[float]] containing the embedding vectors &quot;&quot;&quot; try: logger.info(f&quot;Embedding request received for {len(messages)} items&quot;) api_url = str(base_url) payload: Dict[str, Any] = { &quot;input&quot;: messages, } # Get cached token from token manager access_token = self.token_manager.get_token() headers = { &quot;Subscription-Key&quot;: subscription_key, &quot;Authorization&quot;: f&quot;Bearer {access_token}&quot;, &quot;Content-Type&quot;: &quot;application/json&quot;, } # Check if we're dealing with a batch request is_batch = isinstance(messages, list) # Handle batch processing if needed if is_batch and len(messages) &gt; 0: # Check if we need to split the batch due to token limits # The embedding model typically has a 8191 token limit per request MAX_BATCH_TOKENS = min(8000, context_window - 100) # Keep some buffer # If we have a large batch, split and process in chunks if ( len(messages) &gt; 20 ): # Arbitrary threshold for when to check token counts logger.info( f&quot;Large batch detected ({len(messages)} items), checking token counts&quot; ) total_tokens = 0 if self._encoding(): # Only attempt if we have tiktoken available try: # Sample a few items to get average token count sample_size = min(5, len(messages)) sample_tokens = sum( self.count_tokens(msg) for msg in messages[:sample_size] ) avg_tokens = sample_tokens / sample_size estimated_total = avg_tokens * len(messages) logger.info( f&quot;Estimated token count for batch: {estimated_total:.0f} tokens&quot; ) if estimated_total &gt; MAX_BATCH_TOKENS: # We should split the batch into smaller chunks logger.warning( f&quot;Batch size exceeds max tokens ({estimated_total:.0f} &gt; {MAX_BATCH_TOKENS})&quot; ) logger.warning( &quot;Consider splitting large batches manually for better control&quot; ) except Exception as e: logger.warning(f&quot;Error estimating token count: {e}&quot;) # Make the API request with retry decorator and proper error handling logger.info(f&quot;Sending embedding request to Azure OpenAI API: {api_url}&quot;) start_time = time.time() # The retry mechanism will handle 401/403 errors and automatically refresh # tokens as needed in the _make_api_request method response = self._make_api_request(api_url, headers, payload) api_time = time.time() - start_time logger.info(f&quot;Embedding API call completed in {api_time:.2f}s&quot;) response_data = response # Log usage information if available if response_data.get(&quot;usage&quot;): prompt_tokens = response_data[&quot;usage&quot;].get(&quot;prompt_tokens&quot;, 0) total_tokens = response_data[&quot;usage&quot;].get(&quot;total_tokens&quot;, 0) logger.info( f&quot;Embedding usage: {prompt_tokens} prompt tokens, {total_tokens} total tokens&quot; ) # Always return List[List[float]] as per the Embeddings type if not response_data.get(&quot;data&quot;): logger.warning(&quot;No embeddings returned from API&quot;) return [] # Extract embedding vectors, regardless of batch or single input embeddings = [item.get(&quot;embedding&quot;, []) for item in response_data[&quot;data&quot;]] return embeddings except ( AuthenticationError, RateLimitError, InvalidRequestError, APITimeoutError, TokenValidationError, ResponseParsingError, ) as e: # Log specific error logger.error(f&quot;API Error: {str(e)}&quot;) raise except Exception as e: # Log unexpected errors logger.exception(f&quot;Unexpected error in embedding API call: {str(e)}&quot;) raise LLMError( f&quot;An unexpected error 
occurred in embedding API call: {str(e)}&quot; ) def count_tokens(self, text: str) -&gt; int: &quot;&quot;&quot;Count the number of tokens in a text string. Args: text: The text to count tokens for Returns: Approximate token count &quot;&quot;&quot; if self._encoding(): return len(self._encoding().encode(text)) else: # Fallback to a simple approximation if encoding is not available return len(text) // 4 # Rough approximation: ~4 chars per token @staticmethod def _should_retry(exception) -&gt; bool: &quot;&quot;&quot;Determine if the exception warrants a retry.&quot;&quot;&quot; if isinstance(exception, requests.HTTPError): if hasattr(exception, &quot;response&quot;) and hasattr( exception.response, &quot;status_code&quot; ): status_code = exception.response.status_code return status_code in [401, 403] return False @retry( stop=stop_after_attempt(3), wait=wait_exponential(multiplier=1, min=4, max=10), retry=retry_if_exception(_should_retry), ) def _make_api_request( self, url: str, headers: Dict[str, str], payload: Dict[str, Any] ) -&gt; Dict[str, Any]: &quot;&quot;&quot;Make an API request with retries and proper error handling.&quot;&quot;&quot; try: response = requests.post(url, headers=headers, json=payload) # Check for auth errors first so we can refresh token before retry if response.status_code in [401, 403]: # Refresh token and update headers before retrying logger.warning( f&quot;Authentication error {response.status_code}. Refreshing token...&quot; ) access_token = self.token_manager.get_token(force_refresh=True) headers[&quot;Authorization&quot;] = f&quot;Bearer {access_token}&quot; # Raise HTTPError which will trigger retry with updated headers response.raise_for_status() # For other status codes, just raise for status as normal response.raise_for_status() try: return response.json() except ValueError: raise ResponseParsingError(&quot;Failed to parse JSON response from API&quot;) except requests.HTTPError as e: # Handle non-auth errors (since auth errors are handled above) status_code = e.response.status_code if status_code in [401, 403]: # Let the retry mechanism handle it with refreshed token raise e elif status_code == 429: raise RateLimitError(&quot;Rate limit exceeded. 
Please try again later.&quot;) elif status_code &gt;= 500: raise LLMError(f&quot;Server error: {status_code}&quot;) else: raise LLMError(f&quot;HTTP error: {status_code}&quot;) except requests.ConnectionError as e: raise LLMError(f&quot;Connection error: {str(e)}&quot;) except requests.RequestException as e: raise LLMError(f&quot;Request failed: {str(e)}&quot;) </code></pre> <p>It's usage with agent:</p> <pre><code>self.embedding = Embedding() self.column_identifier_agent = Agent( role=self.agents_config[&quot;column_identifier_agent&quot;][&quot;role&quot;], goal=self.agents_config[&quot;column_identifier_agent&quot;][&quot;goal&quot;], backstory=self.agents_config[&quot;column_identifier_agent&quot;][&quot;backstory&quot;], verbose=self.verbose, llm=self.llm, tools=[dir_tool, file_read_tool], embedder={ &quot;provider&quot;: &quot;custom&quot;, &quot;config&quot;: { &quot;embedding_model&quot;: self.embedding, }, }, ) </code></pre> <p>Similarly, when using the CrewAI Knowledge, created a custom Knowledge storage by extending the <code>KnowledgeStorage</code> and to use the postgres as vector db:</p> <pre><code>class PgVectorStorage(KnowledgeStorage): &quot;&quot;&quot;PgVector implementation for knowledge storage using SQLAlchemy.&quot;&quot;&quot; def __init__( self, table_name: str = &quot;knowledge_embeddings&quot;, vector_dim: int = 1536, index_type: str = &quot;hnsw&quot;, ): &quot;&quot;&quot;Initialize PgVector storage. Args: table_name (str, optional): Name of the table to store embeddings. Defaults to &quot;knowledge_embeddings&quot;. vector_dim (int, optional): Dimension of vectors. Defaults to 1536 (OpenAI's text-embedding-ada-002). index_type (str, optional): Type of index to create ('hnsw' or 'ivfflat'). Defaults to &quot;hnsw&quot;. &quot;&quot;&quot; self.table_name = table_name self.vector_dim = vector_dim self.index_type = index_type self._initialize_storage() def _initialize_storage(self) -&gt; None: &quot;&quot;&quot;Initialize the database table and create necessary indexes.&quot;&quot;&quot; with engine.connect() as conn: # Enable vector extension if not already enabled conn.execute(text(&quot;CREATE EXTENSION IF NOT EXISTS vector;&quot;)) # Create table if it doesn't exist conn.execute( text( f&quot;&quot;&quot; CREATE TABLE IF NOT EXISTS {self.table_name} ( id bigserial PRIMARY KEY, content text NOT NULL, embedding vector({self.vector_dim}) NOT NULL, metadata jsonb ); &quot;&quot;&quot; ) ) # Create vector similarity search index if self.index_type == &quot;hnsw&quot;: conn.execute( text( f&quot;&quot;&quot; CREATE INDEX IF NOT EXISTS {self.table_name}_embedding_idx ON {self.table_name} USING hnsw (embedding vector_cosine_ops) WITH (m = 16, ef_construction = 64); &quot;&quot;&quot; ) ) elif self.index_type == &quot;ivfflat&quot;: # For IVFFlat, we'll use sqrt(n) lists where n is the current number of rows result = conn.execute(text(f&quot;SELECT COUNT(*) FROM {self.table_name};&quot;)) row_count = max(100, int(result.scalar() ** 0.5)) conn.execute( text( f&quot;&quot;&quot; CREATE INDEX IF NOT EXISTS {self.table_name}_embedding_idx ON {self.table_name} USING ivfflat (embedding vector_cosine_ops) WITH (lists = {row_count}); &quot;&quot;&quot; ) ) conn.commit() def search( self, query: List[str], limit: int = 3, filter: Optional[dict] = None, score_threshold: float = 0.35, ) -&gt; List[Dict[str, Any]]: &quot;&quot;&quot;Search for documents in the knowledge base. 
Args: query (List[str]): Query vectors to search for limit (int, optional): Maximum number of results to return. Defaults to 3. filter (Optional[dict], optional): Metadata filter criteria. Defaults to None. score_threshold (float, optional): Minimum similarity score. Defaults to 0.35. Returns: List[Dict[str, Any]]: List of matching documents with their metadata and scores &quot;&quot;&quot; # Convert query list to a PG vector string query_vector = f&quot;'[{','.join(query)}]'&quot; logger.info(f&quot;searching to PgVector storage... {query_vector}&quot;) with engine.connect() as conn: # Build the base query base_query = f&quot;&quot;&quot; SELECT content, metadata, 1 - (embedding &lt;=&gt; {query_vector}::vector) as similarity FROM {self.table_name} WHERE 1 - (embedding &lt;=&gt; {query_vector}::vector) &gt; :score_threshold &quot;&quot;&quot; # Add metadata filters if provided params = {&quot;score_threshold&quot;: score_threshold} if filter: conditions = [] for idx, (key, value) in enumerate(filter.items()): param_name = f&quot;filter_{idx}&quot; conditions.append(f&quot;metadata-&gt;&gt;{key} = :{param_name}&quot;) params[param_name] = value if conditions: base_query += &quot; AND &quot; + &quot; AND &quot;.join(conditions) # Add ordering and limit base_query += f&quot; ORDER BY similarity DESC LIMIT {limit}&quot; # Execute query result = conn.execute(text(base_query), params) # Fetch results results = [] for row in result: results.append( { &quot;content&quot;: row.content, &quot;metadata&quot;: row.metadata or {}, &quot;score&quot;: float(row.similarity), } ) return results def save( self, documents: List[str], metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, ) -&gt; None: &quot;&quot;&quot;Save documents to the knowledge base. Args: documents (List[str]): List of document vectors to save metadata (Dict[str, Any] | List[Dict[str, Any]]): Metadata for the documents &quot;&quot;&quot; logger.info(&quot;Saving to PgVector storage...&quot;) if isinstance(metadata, dict): metadata = [metadata] * len(documents) with engine.connect() as conn: # Insert each document for doc, meta in zip(documents, metadata): vector_str = f&quot;[{','.join(doc)}]&quot; # Using parameterized query for safe insertion query = text( f&quot;&quot;&quot; INSERT INTO {self.table_name} (content, embedding, metadata) VALUES (:content, :embedding::vector, :metadata::jsonb) &quot;&quot;&quot; ) conn.execute( query, {&quot;content&quot;: doc, &quot;embedding&quot;: vector_str, &quot;metadata&quot;: meta} ) conn.commit() def reset(self) -&gt; None: &quot;&quot;&quot;Reset the knowledge base by dropping and recreating the table.&quot;&quot;&quot; logger.info(&quot;Reset PgVector storage...&quot;) with engine.connect() as conn: conn.execute(text(f&quot;DROP TABLE IF EXISTS {self.table_name};&quot;)) conn.commit() # Reinitialize the storage self._initialize_storage() </code></pre> <p>Usage:</p> <pre><code>business_terms = JSONKnowledgeSource( file_paths=[&quot;business_terms_metadata.json&quot;], storage=self.storage, collection_name=&quot;business_terms&quot;, metadata={ &quot;description&quot;: &quot;Business terms and their definitions&quot;, &quot;source&quot;: &quot;business_terms_metadata.md&quot; } ) business_terms.add() </code></pre> <p>This also falling back to the default and trying to find the OPENAI_API_KEY. 
Also, there is no proper documentation on CrewAI for either of these cases.</p> <p>As mentioned above, I have tried creating a custom embedder to suit my needs and a custom knowledge storage that stores the vector embeddings in PostgreSQL.</p>
<python><postgresql><vector><large-language-model><crewai>
2025-06-03 07:25:29
0
1,392
lazzy_ms
79,650,652
1,015,761
Python Django Admin Form: show inline without rendering a form
<p>I have a Django admin page which allows me to edit a model in my domain. The ModelAdmin looks like this:</p> <pre><code>@admin.register(models.VehicleTemplate) class VehicleTemplateAdmin(ModelAdminBase): list_reverse_relation_inline = False search_fields = [&quot;name&quot;, &quot;description&quot;] list_display = [&quot;name&quot;, &quot;description&quot;, &quot;parent&quot;, &quot;status&quot;] inlines = [VehicleInline] readonly_fields = [ &quot;config&quot;, &quot;status&quot;, &quot;properties&quot; ] fields = [ &quot;step&quot;, &quot;name&quot;, .... ] ... class VehicleInline(InlineModelAdmin): model = models.Vehicle def has_add_permission(self, request, obj=None): return False def has_change_permission(self, request, obj=None): return False def has_delete_permission(self, request, obj=None): return False .... </code></pre> <p>The <code>VehicleInline</code> can contain thousands of child models of <code>VehicleTemplate</code>, which ends up rendering thousands of inline forms which all get submitted together when the admin change form is submitted/saved. However, nothing in the <code>VehicleInline</code> is editable. So, instead, I would like to simply display the contents of these child models without rendering any form or input elements. The root problem I have is that the number of form elements is more than the <code>absolute_max</code> configured in Django so it fails the form submission even though none of the inline data is editable.</p> <p>I have tried many, many ways of preventing the form widgets from rendering by providing empty widgets and editing the <code>InlineModelAdmin</code> to not include the input HTML but I eventually just run into a management form manipulation error.</p> <p>How can I display these child models inline on the change page but not include any of the details in the form submission?</p>
<python><django><django-forms><django-admin>
2025-06-03 05:15:07
0
3,876
Goulash
79,650,615
1,230,724
Using array indices to address another array
<p>I'm trying to &quot;overlay&quot; a numpy 1-d array over another 1-d array of larger size, so that the overlaying array sets multiple values in the overlayed array.</p> <p>I'm struggling with an efficient way to convert the indices from the overlaying array (<code>overlay</code>) to the overlayed array (<code>arr</code>).</p> <pre><code>overlay = [ 0 , 1 , 1 , 4 , 3] subixs = [[0, 1, 2, 3], [4, 5], [6], [7], [8]] # relationship to `arr`, i.e. overlay[0] sets `arr[0]`, `arr[1]`, `arr[2]` and `arr[3]` assert len(overlay) == len(subixs) arr = [0, 0, 0, 0, 1, 1, 1, 4, 3] </code></pre> <p><code>overlay[1] = 7</code> should turn <code>arr</code> to <code>[0, 0, 0, 0, 7, 7, 1, 4, 3]</code> because [1] refers to <code>subixs[1]</code> which in turn refers to <code>arr[[4,5]]</code>. This index conversion is easy for scalar indices, but for indices such as boolean arrays it becomes difficult.</p> <p>For instance, imagine:</p> <pre><code>msk = (overlay == 1) | (overlay == 3) overlay[msk] = [44, 48, 47] </code></pre> <p>in that case <code>msk</code> would need to be converted to <code>[4, 5, 6, 8]</code> to be used to the corresponding indices in <code>arr</code> and also the values to be set (<code>[44, 48, 47]</code>) would need to be expanded to <code>[44, 44, 48, 47]</code>.</p> <p>I figured out how to do the translation to the indices in <code>arr</code>, but it is rather inefficient as it's using lists and I also haven't figured out how to expand the values.</p> <pre><code>new_value = [44, 48, 47] msk = (overlay == 1) | (overlay == 3) overlay[msk] = new_value # convert to indices in `arr` # Keep as list as numpy requires symmetric arrays subixs = [[0, 1, 2, 3], [4, 5], [6], [7], [8]] subixs = np.asarray(subixs, dtype=object) arr_ixs = sum(subixs[msk], []) new_arr_value = ? # should be `[44, 44, 48, 47]` arr[arr_ixs] = new_arr_value </code></pre> <p>How would I go about making the index translation more efficient (using numpy's vectorised functions) and expand <code>new_value</code> to <code>new_arr_value</code>?</p> <p><strong>Edit</strong></p> <p>I think I found a problem to expanding the values to fit <code>arr</code>. I can duplicate the values as many times as there are indices from one index in <code>overlay</code> pointing to <code>arr</code>.</p> <pre><code>value_subixs = subixs[msk] arr_ixs = sum(value_subixs, []) new_arr_value = np.repeat(value, np.asarray([len(ix) for ix in value_subixs])) arr[arr_ixs] = new_arr_value </code></pre> <p>However, both solution (index translation and value expansion) are using lists which is not desirable as <code>arr</code> contains in the order of 100k elements.</p>
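<p>Assuming, as in the example, that <code>subixs</code> partitions <code>arr</code> contiguously and in order, the groups can be described by their lengths alone, and both the index translation and the value expansion collapse into <code>np.repeat</code> calls with no Python lists. A sketch with the numbers from above:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

overlay = np.array([0, 1, 1, 4, 3])
lengths = np.array([4, 2, 1, 1, 1])          # len(subixs[i]) for each overlay slot
arr = np.array([0, 0, 0, 0, 1, 1, 1, 4, 3])

# Map every position of arr back to the overlay slot that owns it.
owner = np.repeat(np.arange(len(overlay)), lengths)   # [0 0 0 0 1 1 2 3 4]

msk = (overlay == 1) | (overlay == 3)
new_value = np.array([44, 48, 47])
overlay[msk] = new_value

arr_mask = msk[owner]                        # boolean mask selecting arr[[4, 5, 6, 8]]
arr[arr_mask] = np.repeat(new_value, lengths[msk])    # expands to [44, 44, 48, 47]
print(arr)                                   # [ 0  0  0  0 44 44 48  4 47]
</code></pre>
<p>If the groups were not contiguous, <code>owner</code> could still be built once by writing each group number into an array of length <code>len(arr)</code> at the positions listed in <code>subixs</code>.</p>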
<python><arrays><numpy>
2025-06-03 04:22:57
2
8,252
orange
79,650,567
203,454
Django-allauth - Make phone number optional for SocialLogin
<p>I am using django-allauth in my project and I have configured Google as a SocialAuth provider. I have a custom signal receiver that updates the phone number on the SocialAuthAccount after the user signs up. But currently the system throws an error when the user logins via SocialAuth if they do not have a public phone number on their account. I am getting an error - KeyError 'phone' at - allauth/account/internal/flows/phone_verification.py - line 46.</p> <p>This is my relevant settings.py:</p> <pre><code>ACCOUNT_LOGIN_METHODS = {&quot;phone&quot;, &quot;email&quot;, &quot;username&quot;} ACCOUNT_SIGNUP_FIELDS = [ &quot;phone*&quot;, &quot;email*&quot;, &quot;username*&quot;, &quot;password1&quot;, &quot;password2&quot; ] </code></pre> <p>How can I tell the SocialAuthAdapter that phone number is optional and might not be there?</p>
<python><django><django-allauth>
2025-06-03 03:09:12
1
34,403
arunkumar
79,650,437
4,996,797
How to pretty print an array with small numbers in numpy
<p>I have an array that I would like to display</p> <pre class="lang-py prettyprint-override"><code>import numpy as np # build a matrix with a few large values and noise matrix = np.diag([5.9, 0.0, 13.11]) rng = np.random.default_rng(seed=20250602) matrix += rng.random(size=(3, 3)) * 10e-10 with np.printoptions(precision=2): print(matrix) </code></pre> <p>The code above produces</p> <pre class="lang-none prettyprint-override"><code>[[5.90e+00 4.13e-10 5.66e-10] [2.30e-10 4.80e-10 9.56e-10] [2.48e-10 4.32e-10 1.31e+01]] </code></pre> <p>I would like instead to see something like this</p> <pre><code>[[5.90 0 0] [0 0 0] [0 0 13.1]] </code></pre> <p>namely, I don't want to see numbers smaller than a threshold (say <code>1e-3</code>) to appear as different values making it seems like there is some information there. I would like only the values above the threshold to stand out.</p>
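<p>For reference, NumPy's print options already have a switch for this: <code>suppress=True</code> keeps fixed-point notation, so the 1e-10 noise prints as 0 instead of forcing scientific notation for the whole array; an explicit threshold can also be applied by zeroing small entries before printing. A short sketch reusing the matrix above:</p>
<pre class="lang-py prettyprint-override"><code>with np.printoptions(precision=2, suppress=True):
    print(matrix)        # the 1e-10 noise shows as 0., while 5.9 and 13.11 stand out

# For a hard threshold (here 1e-3), zero the small entries explicitly:
threshold = 1e-3
cleaned = np.where(np.abs(matrix) &lt; threshold, 0.0, matrix)
with np.printoptions(precision=2):
    print(cleaned)
</code></pre>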
<python><numpy>
2025-06-02 23:16:49
0
408
Paweล‚ Wรณjcik
79,650,396
219,153
How to limit matplotlib button_press_event to a single axis?
<p>This script:</p> <pre><code>import matplotlib.pyplot as plt from matplotlib.widgets import Button def fAccept(_): print(f'accepted') def onclick(event): print(f'click {event.xdata, event.ydata}') fig, ax = plt.subplots() fig.subplots_adjust(bottom=0.2) fig.canvas.mpl_connect('button_press_event', onclick) axAccept = fig.add_axes([0.7, 0.05, 0.1, 0.075]) bAccept = Button(axAccept, 'Accept') bAccept.on_clicked(fAccept) plt.show() </code></pre> <p>displays</p> <p><a href="https://i.sstatic.net/2FGfSBM6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2FGfSBM6.png" alt="enter image description here" /></a></p> <p>Clicking on <code>bAccept</code> button calls <code>onclick</code> callback. Is there a way to limit <code>onclick</code> callback to the top axis? Alternatively, is there a way to distinguish <code>event</code> coordinates (0.5, 0.5) at the center of <code>bAccept</code> button (green circle) from (0.5, 0.5) coordinates at the center of the upper axis (red circle)?</p>
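<p>For reference, the event object records which axes it was fired over, so the callback can simply return early for anything outside the scatter axes; that is also how the two (0.5, 0.5) cases differ, since <code>event.xdata</code>/<code>event.ydata</code> are always expressed in the coordinates of <code>event.inaxes</code>. A minimal sketch using the <code>ax</code> and <code>axAccept</code> defined above:</p>
<pre class="lang-py prettyprint-override"><code>def onclick(event):
    if event.inaxes is not ax:      # click was on the button axes, or outside any axes
        return
    print(f&quot;click {event.xdata, event.ydata}&quot;)

# Conversely, event.inaxes is axAccept identifies clicks landing on the button.
</code></pre>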
<python><matplotlib><callback><onclick>
2025-06-02 22:10:43
1
8,585
Paul Jurczak
79,650,393
5,744,712
Is there a way to pre-define validators that can be used multiple times?
<p>I'm working on an API based flask app with flask-smorest and marshmallow validations. The issue I'm running into, is that many api endpoints require the same type of data, and I don't want to continually copy/paste the same validations over and over. Something like below would happen....</p> <pre><code>from marshmallow import Schema, fields, validate class api1(): class input1(Schema): request_ number = fields.Str(metadata={'description': 'Request Number', 'example':'REQUEST12345'}, validate=validate.Regexp(regex=r&quot;^REQUEST\d{3,9}$&quot;, error=&quot;Input string didn't match required format - REQUEST12345&quot;)) class api2(): class input1(Schema): request_ number = fields.Str(metadata={'description': 'Request Number', 'example':'REQUEST12345'}, validate=validate.Regexp(regex=r&quot;^REQUEST\d{3,9}$&quot;, error=&quot;Input string didn't match required format - REQUEST12345&quot;)) </code></pre> <p>What I want to do is create some sort of function or class BEFORE I define every api field/schema that I could recall.</p> <p>Below is the idea, though I don't believe it works.</p> <pre><code>from marshmallow import Schema, fields, validate class repetative_validators(): request_number = validate.Regexp(regex=r&quot;^REQUEST\d{3,9}$&quot;, error=&quot;Input string didn't match required format - REQUEST12345&quot;)) class api1(): class input1(Schema): request_ number = fields.Str(metadata={'description': 'Request Number', 'example':'REQUEST12345'}, validate=repetative_validators().request_number) class api2(): class input1(Schema): request_ number = fields.Str(metadata={'description': 'Request Number', 'example':'REQUEST12345'}, validate=repetative_validators().request_number) </code></pre> <p>I was able to successfully test creating my own function for validation as described <a href="https://marshmallow.readthedocs.io/en/3.x-line/marshmallow.validate.html#marshmallow.validate.And" rel="nofollow noreferrer">here</a>, but was hoping to use the built in validators.</p> <pre><code>from marshmallow import Schema, fields, validate, ValidationError import re def validate_request_number(value): if not re.match(r&quot;^REQUEST\d{3,9}$&quot;): raise ValidationError(&quot;Input string didn't match required format - REQUEST12345&quot;) class api1(): class input1(Schema): request_ number = fields.Str(metadata={'description': 'Request Number', 'example':'REQUEST12345'}, validate=validate_request_number) class api2(): class input1(Schema): request_ number = fields.Str(metadata={'description': 'Request Number', 'example':'REQUEST12345'}, validate=validate_request_number) </code></pre> <p>If anyone can link to appropriate documentation or an example that would be helpful. I haven't found anything like this in the marshmallow docs yet</p>
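<p>A hedged sketch of one way to do this with the built-in validators: validator instances such as <code>validate.Regexp(...)</code> are plain callables, so one instance created at module level can be passed to any number of fields, and a small factory function can share a whole pre-configured field (class and field names below follow the example above, with the stray space removed from <code>request_number</code>):</p>
<pre class="lang-py prettyprint-override"><code>from marshmallow import Schema, fields, validate

# One reusable validator instance for the whole module.
REQUEST_NUMBER_VALIDATOR = validate.Regexp(
    regex=r&quot;^REQUEST\d{3,9}$&quot;,
    error=&quot;Input string didn't match required format - REQUEST12345&quot;,
)

def request_number_field():
    # Factory so every schema gets its own Field object carrying the shared validator.
    return fields.Str(
        metadata={&quot;description&quot;: &quot;Request Number&quot;, &quot;example&quot;: &quot;REQUEST12345&quot;},
        validate=REQUEST_NUMBER_VALIDATOR,
    )

class Input1(Schema):
    request_number = request_number_field()

class Input2(Schema):
    request_number = request_number_field()
</code></pre>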
<python><marshmallow><flask-smorest>
2025-06-02 22:06:03
1
611
Stephan
79,650,348
2,153,235
Tuple comprehension creates generator; List comprehension evaluates all elements right away
<p>I am using a long ad hoc script for exploratory data analysis -- not for tool development. The script has gotten quite long, so I've taken to <code>del</code>ing ephemeral variables to keep the Spyder Variable Explorer clean. I've done this all over the script.</p> <p>I tried to be streamline the script by coding some loops as tuple comprehensions, thus eliminating the need for an extra line of code to <code>del</code> the iteration variable. Here is an example of three ways to iterate through figures to clear the plots:</p> <pre><code># Generate the figures import matplotlib as mpl import matplotlib.pyplot as plt plt.close('all') plt.scatter([1,2,3],[4,5,6]) plt.figure() plt.scatter([1,2,3],[6,5,4]) # 3 ways to clear the figures if True: # Use Tuple Comprehension ( plt.figure(iFig).clf() for iFig in plt.get_fignums() ) elif True: # Use List Comprehension [ plt.figure(iFig).clf() for iFig in plt.get_fignums() ] else: # Don't use comprehension for iFig in plt.get_fignums(): plt.figure(iFig).clf() del iFig # The extra line of source code I want to avoid </code></pre> <p>The 3rd and final option is the one I've been using. The 1st and 2nd options are my attempts at tuple comprehension and list comprehension.</p> <p>The tuple comprehension returns a generator object and doesn't actually execute evaluate the invocation to <code>clf</code> unless I assign the tuple to <code>x</code> and execute <code>next(x)</code> twice. I can see the figures clearing each time.</p> <p>This is unnecessary for the list comprehension.</p> <p>I started off with tuple comprehension with the rationale that I can avoid using a mutable container if I'm not going to use the contents anyway.</p> <p>Why does the tuple comprehension yield a generator that must be iterated through while list comprehension evaluates all the elements needed to build the list right away?</p> <p>Is there a known Python rule about list and tuple comprehension that would have allowed me to foresee this?</p>
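<p>For reference, the parenthesised form is not a tuple comprehension at all: Python has no tuple comprehension syntax, and <code>( ... for ... )</code> is a generator expression, which is lazy by definition, whereas a list comprehension is defined to build the whole list immediately. A short sketch of the spellings (the plain loop remains the idiomatic choice when the values are only wanted for their side effects):</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt

gen = (plt.figure(i).clf() for i in plt.get_fignums())       # generator expression: nothing runs yet
lst = [plt.figure(i).clf() for i in plt.get_fignums()]       # list comprehension: runs immediately
tup = tuple(plt.figure(i).clf() for i in plt.get_fignums())  # tuple(...) forces the generator to run

# Side-effect-only iteration is clearest as a plain loop:
for i in plt.get_fignums():
    plt.figure(i).clf()
</code></pre>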
<python><list><tuples><list-comprehension>
2025-06-02 21:20:05
1
1,265
user2153235
79,650,316
5,036,928
mpi4py deadlock with scipy.minimize
<p>I am trying to do something similar as described in <a href="https://stackoverflow.com/questions/37159923/parallelize-a-function-call-with-mpi4py">Parallelize a function call with mpi4py</a></p> <p>But there are some things there that make me skeptical of the provided answer. Additionally, I have a class implementation that makes things a little different. Here is some &quot;reduced&quot; code to demonstrate the main logic:</p> <pre><code>from scipy.optimize import minimize from mpi4py import MPI import numpy as np class Solver(): def __init__(self, SampleTimes, InitialArray): self.comm = MPI.COMM_WORLD self.rank = self.comm.Get_rank() self.size = self.comm.Get_size() self.SampleTimes = SampleTimes self.InitialArray = InitialArray self.Finished = False def f(x_): return &lt;some scalar result&gt; def Objective(self, x): self.x = self.comm.bcast(x if self.rank == 0 else None, root=0) tstep_select = np.array_split(self.SampleTimes, self.size)[self.rank] local_results = [] for t in tstep_select: result = self.f(self.x) # Some function of x local_results.append( (t, result) ) all_results = self.comm.gather(local_results, root=0) if self.rank==0: all_results = [item for sublist in all_results for item in sublist] all_results = np.array(all_results) all_results = all_results[all_results[:,0].argsort()] scalar = np.trapz(all_results[:,1], all_results[:,0]) return -scalar def Maximize(self,): if self.rank == 0: self.Max = minimize(self.Objective, self.InitialArray) self.Finished = self.comm.bcast(True, root=0) return self.Max else: while not self.Finished: self.Objective(None) if __name__=='__main__': t_eval = np.linspace(0, 100, 100) Instance = Solver(SampleTimes=t_eval) print(Instance.Maximize()) </code></pre> <p>So I understand that I need to only call the minimizer on rank 0. Additionally, I need to engage the rank!=0 workers to evaluate <code>Objective</code>. This is the intent of the if/else in the <code>Maximize</code> method.</p> <p>As I understand, I can pass <code>None</code> to <code>Objective</code> for rank!=0 because <code>self.x</code> will be set based on the broadcast from rank 0. 
I am wondering if the issue has something to do with the fact that scipy's <code>minimize</code> will expect a single value from each evaluation of <code>Objective</code> but, as-written, <code>Objective</code> will be returning <code>None</code> on all rank!=0 (on the other hand, the minimizer is only executed on rank 0 so I feel like this is potentially a non-issue).</p> <h2>Edit (code v2)</h2> <p>I got rid of the &quot;Finished&quot; variable and experience the same deadlock:</p> <pre><code>class Solver(): def __init__(self, SampleTimes, InitialArray): self.comm = MPI.COMM_WORLD self.rank = self.comm.Get_rank() self.size = self.comm.Get_size() self.SampleTimes = SampleTimes self.InitialArray = InitialArray def f(x_): return &lt;some scalar result&gt; def Objective(self, x): self.x = self.comm.bcast(x if self.rank == 0 else None, root=0) tstep_select = np.array_split(self.SampleTimes, self.size)[self.rank] local_results = [] for t in tstep_select: result = self.f(self.x) # Some function of x local_results.append( (t, result) ) all_results = self.comm.gather(local_results, root=0) if self.rank==0: all_results = [item for sublist in all_results for item in sublist] all_results = np.array(all_results) all_results = all_results[all_results[:,0].argsort()] scalar = np.trapz(all_results[:,1], all_results[:,0]) return -scalar def Maximize(self,): if self.rank == 0: self.Max = minimize(self.Objective, self.InitialArray) return self.Max else: self.Objective(None) if __name__=='__main__': t_eval = np.linspace(0, 100, 100) Instance = Solver(SampleTimes=t_eval) print(Instance.Maximize()) </code></pre>
<python><scipy><mpi><scipy-optimize><mpi4py>
2025-06-02 20:50:45
2
1,195
Sterling Butters
79,650,165
13,344,315
Getting Azure Credentials from Hybrid Worker in Azure Automation with Python
<p>I have an automation account with a bunch of stored credentials. In the past I typically use Powershell and using Get-AutomationPSCredential has worked great in the past. However I need to get some credentials into a Python script that is running from a hybrid worker. All of my research says to use the automationassets module, but apparently this only works when running it from Azure (<a href="https://stackoverflow.com/questions/77656894/get-azure-automation-runbook-credential-in-python">Get Azure Automation Runbook Credential in Python</a>).</p> <p>Running this script from a hybrid worker</p> <pre><code>#!/usr/bin/env python3 import automationassets print(&quot;hello world&quot;) cred = automationassets.get_automation_credential(&quot;TestCredentials&quot;) print (cred[&quot;username&quot;]) print (cred[&quot;password&quot;]) print (&quot;---DONE---&quot;) </code></pre> <p>I get this error:</p> <pre class="lang-none prettyprint-override"><code>Traceback (most recent call last): File &quot;C:\ProgramData\Microsoft\System Center\Orchestrator\7.2\SMA\Sandboxes\prccrwfo.uqy\Temp\u4krn2px.peo\2067bebf-6afe-4427-a1ba-ebe41539ff53&quot;, line 5, in &lt;module&gt; cred = automationassets.get_automation_credential(&quot;TestCredentials&quot;) File &quot;C:\Python39\automationassets.py&quot;, line 126, in get_automation_credential credential = _get_asset(_KEY_CREDENTIAL, name) File &quot;C:\Python39\automationassets.py&quot;, line 72, in _get_asset return_value = _get_asset_value(local_assets_file, asset_type, asset_name) File &quot;C:\Python39\automationassets.py&quot;, line 55, in _get_asset_value for asset, asset_values in local_assets.iteritems(): AttributeError: 'dict' object has no attribute 'iteritems' </code></pre> <p>And that same script with an additional import</p> <pre><code>import automationassets from automationassets import AutomationAssetNotFound </code></pre> <p>Throws this error:</p> <pre class="lang-none prettyprint-override"><code>Traceback (most recent call last): File &quot;C:\ProgramData\Microsoft\System Center\Orchestrator\7.2\SMA\Sandboxes\1v03pjym.qll\Temp\rv4yiqnf.c3v\4a5b24ae-bd26-40e6-82cc-86cf36915077&quot;, line 3, in &lt;module&gt; from automationassets import AutomationAssetNotFound ImportError: cannot import name 'AutomationAssetNotFound' from 'automationassets' (C:\Python39\automationassets.py) </code></pre> <p>Is it possible to get Azure credentials into a Python script running from a hybrid worker in Azure automation?</p>
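<p>The traceback shows the worker's bundled <code>automationassets.py</code> calling <code>dict.iteritems()</code>, which only exists on Python 2, so that copy of the module apparently cannot read local assets under the Python 3.9 runtime. One hedged workaround (an alternative approach rather than a fix for <code>automationassets</code>) is to keep the secret in Azure Key Vault and read it from the hybrid worker with its managed identity; the vault URL and secret name below are placeholders:</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3
# Requires azure-identity and azure-keyvault-secrets on the hybrid worker.
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

credential = DefaultAzureCredential()  # picks up the worker's managed identity if one is assigned
client = SecretClient(vault_url=&quot;https://my-vault.vault.azure.net&quot;, credential=credential)

password = client.get_secret(&quot;TestCredentials-password&quot;).value
print(&quot;---DONE---&quot;)
</code></pre>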
<python><azure-automation>
2025-06-02 18:33:25
1
301
Andrew Draper
79,650,102
1,715,495
PyPI publishing GitHub Action works with token but not trusted publisher
<p>This <a href="https://github.com/batfish/docker/pull/137/files" rel="nofollow noreferrer">pull request</a> modifies our open source package <code>batfish/docker</code> GitHub actions to use PyPI trusted publishing on Test PyPI instead of a password. I'm talking about the <code>dev_whl</code> step that pushes a development version of the wheel to test.pypi.org/legacy.</p> <p>All the PR does is <a href="https://github.com/batfish/docker/pull/137/files#diff-0f9e082055a0368298a8b74643fc42245fa6e5529993b14752508567a6872f9fL37" rel="nofollow noreferrer">remove the <code>password</code> argument</a> and add <a href="https://github.com/batfish/docker/pull/137/files#diff-0f9e082055a0368298a8b74643fc42245fa6e5529993b14752508567a6872f9fR82" rel="nofollow noreferrer"><code>id-token: write</code></a> permissions. The trusted publisher is already set up. The publishing action then executes but runs into a 400 error with no information.</p> <p>Passing run with password: <a href="https://github.com/batfish/docker/actions/runs/15381194869/job/43272421321" rel="nofollow noreferrer">https://github.com/batfish/docker/actions/runs/15381194869/job/43272421321</a> Failing run with trusted publisher: <a href="https://github.com/batfish/docker/actions/runs/15355188206/job/43212884353" rel="nofollow noreferrer">https://github.com/batfish/docker/actions/runs/15355188206/job/43212884353</a></p> <p>I can't think of how to debug this further. Any suggestions?</p>
<python><openid-connect><pypi>
2025-06-02 17:43:19
1
2,257
Dan Halperin
79,649,970
9,703,451
contextlib.contextmanager does not raise exceptions if return is in finally clause
<p>I just encountered a really strange behaviour of python's <code>contextlib.contextmanager</code>. Can anyone explain to me if this is intended (and why):</p> <blockquote> <p>If you put a <code>return</code> into the <code>finally</code> of the <code>contextmanager</code>, any exception raised within the <code>contextmanager</code> is no longer raised.</p> </blockquote> <pre class="lang-py prettyprint-override"><code>from contextlib import contextmanager

@contextmanager
def test():
    try:
        print(&quot;inside the contextmanager&quot;)
        yield
    finally:
        print(&quot;exiting contextmanager&quot;)
        return  # This somehow shadows any exception raised within the contextmanager

with test():
    try:
        raise RuntimeError(&quot;ASDF&quot;)
    except Exception as ex:
        print(f&quot;there's an {ex} exception... let's raise it again!&quot;)
        raise ex

print(&quot;i don't care about exceptions&quot;)
</code></pre> <p>If I run this, I get the following:</p> <pre class="lang-none prettyprint-override"><code>&gt;&gt;&gt; inside the contextmanager
&gt;&gt;&gt; there's an ASDF exception... let's raise it again!
&gt;&gt;&gt; exiting contextmanager
&gt;&gt;&gt; i don't care about exceptions
</code></pre> <p>I would have expected the exception to be raised.</p>
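<p>To narrow it down, I tried the same shape with a plain generator and <code>throw()</code>; the <code>return</code> in <code>finally</code> seems to swallow the exception there too (I get a <code>StopIteration</code> instead of the <code>RuntimeError</code>), so I suspect the contextmanager wrapper hits the same thing when it throws the exception into the generator:</p> <pre class="lang-py prettyprint-override"><code>def gen():
    try:
        print('inside the generator')
        yield
    finally:
        print('exiting generator')
        return  # same pattern as in the contextmanager above

g = gen()
next(g)
try:
    g.throw(RuntimeError('ASDF'))
except StopIteration:
    print('the RuntimeError was swallowed, the generator just returned')
</code></pre>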
<python>
2025-06-02 15:58:48
2
3,179
raphael
79,649,941
5,353,177
Get order of nodes in DiGraph networkx
<p>I have the following example graph:</p> <pre class="lang-py prettyprint-override"><code>import networkx as nx
G=nx.DiGraph()
G.add_edges_from([('b','a'), ('b','c'), ('a','c'),('c','d')])
</code></pre> <p>Visually it should look like this: <a href="https://i.sstatic.net/LhUJAEad.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LhUJAEad.png" alt="enter image description here" /></a></p> <pre class="lang-none prettyprint-override"><code>digraph {
layout=&quot;neato&quot;
node[ shape=rect ];
a [ pos = &quot;2,3!&quot; ];
b [ pos = &quot;1,2!&quot; ];
c [ pos = &quot;3,1!&quot; ];
d [ pos = &quot;4,0!&quot; ];
c -&gt; d ;
b -&gt; c ;
b -&gt; a ;
a -&gt; c ;
}
</code></pre> <p>I want to lay out graph G from left to right and get the list of nodes ordered from top to bottom. For this example it should be:</p> <pre class="lang-none prettyprint-override"><code>a
b
c
d
</code></pre> <p>Is it possible?</p>
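<p>A plain topological sort is the closest thing I have found so far, but it is not quite the order I described: it puts <code>b</code> before <code>a</code>, while the drawing has <code>a</code> at the top:</p> <pre class="lang-py prettyprint-override"><code>print(list(nx.topological_sort(G)))
# ['b', 'a', 'c', 'd']  -- a valid dependency order, but not the top-to-bottom order a, b, c, d
</code></pre>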
<python><networkx>
2025-06-02 15:37:07
2
736
Sanek Zhitnik
79,648,605
12,372,992
How to define nullable fields for SqlTransform
<p>I'm using Beam SqlTransform in python, trying to define/pass nullable fields.</p> <p>This code works just fine:</p> <pre class="lang-py prettyprint-override"><code>with beam.Pipeline(options=options) as p:
    # ...
    # Use beam.Row to create a schema-aware PCollection
    | &quot;Create beam Row&quot; &gt;&gt; beam.Map(lambda x: beam.Row(
        user_id=int(x['user_id']),
        user_name=str(x['user_name'])
    ))
    | 'SQL' &gt;&gt; SqlTransform(&quot;SELECT user_id, COUNT(*) AS msg_count FROM PCOLLECTION GROUP BY user_id&quot;)
</code></pre> <p>However, I am not able to create nullable fields with this approach.</p> <p>Without the direct cast, I'm getting a decoding Field error.</p> <pre class="lang-py prettyprint-override"><code>user_id = json.get('user_id')
</code></pre> <p>throws:</p> <pre class="lang-none prettyprint-override"><code>Failed to decode Schema due to an error decoding Field proto:

name: &quot;user_id&quot;
type {
  nullable: true
  logical_type {
    urn: &quot;beam:logical:pythonsdk_any:v1&quot;
  }
}
</code></pre> <p>Without using beam.Row (i.e. with any other object), I get a missing schema error:</p> <pre class="lang-none prettyprint-override"><code>Cannot call getSchema when there is no schema
</code></pre> <p>What is the proper way to define nullable fields?</p>
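<p>What I expected to work is declaring the schema with a <code>NamedTuple</code> that has <code>Optional</code> fields, but I'm not sure this is the intended way to get nullability into SqlTransform, so please treat the snippet below as a guess rather than something I know to be correct:</p> <pre class="lang-py prettyprint-override"><code>import typing
import apache_beam as beam

class UserRow(typing.NamedTuple):        # hypothetical schema type for this question
    user_id: typing.Optional[int]        # hoping this maps to a nullable INT64 field
    user_name: typing.Optional[str]

beam.coders.registry.register_coder(UserRow, beam.coders.RowCoder)

# ... then inside the pipeline, instead of beam.Row:
# | beam.Map(lambda x: UserRow(user_id=x.get('user_id'),
#                              user_name=x.get('user_name'))).with_output_types(UserRow)
</code></pre>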
<python><apache-beam><beam-sql>
2025-06-02 11:04:30
1
1,980
Yair Maron
79,648,524
6,681,932
ValueError: zero-size array to reduction operation maximum which has no identity in SVAR Model
<p>I'm trying to fit a Structural Vector Autoregression (SVAR) model using statsmodels in Python, but I'm encountering the following error: <code>ValueError: zero-size array to reduction operation maximum which has no identity</code>.</p> <p>Here is my code:</p> <pre><code>import pandas as pd
import numpy as np
from statsmodels.tsa.vector_ar.svar_model import SVAR

df_sample = pd.DataFrame(
    {
        'Product_1': np.random.rand(240) * 10000,
        'Product_2': np.random.rand(240) * 10000,
        'Product_3': np.random.rand(240) * 10000
    },
    index=pd.date_range(start='2019-11-16', periods=240, freq='W-SAT'))

A = np.array([
    [1, 0, 0],
    [np.nan, 1, 0],
    [np.nan, np.nan, 1]
], dtype='U')

# Fit SVAR
model = SVAR(df_sample, svar_type='A', A=A)
res = model.fit(maxlags=4)
</code></pre>
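<p>My current suspicion is the <code>dtype='U'</code>: casting to a string array turns <code>np.nan</code> into the literal string <code>'nan'</code>, so statsmodels may not see any free parameters at all. If I read the statsmodels examples correctly, the entries to estimate are marked with the string <code>'E'</code>, so I was going to try the variant below, but I would still like to understand why the <code>np.nan</code> version fails:</p> <pre><code># same data as above; only the A matrix changes ('E' marks parameters to estimate)
A = np.asarray([
    [1, 0, 0],
    ['E', 1, 0],
    ['E', 'E', 1]
])

model = SVAR(df_sample, svar_type='A', A=A)
res = model.fit(maxlags=4)
</code></pre>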
<python><valueerror><vector-auto-regression>
2025-06-02 10:02:42
1
478
PeCaDe
79,648,388
12,415,855
Problem with umlauts when writing with Python and opening with Excel?
<p>I am writing some content with umlauts in it using the following code:</p> <pre><code>import csv
import os
import sys

data = [[&quot;รคwien&quot;, &quot;รถbgld&quot;, &quot;รผktn&quot;, &quot;noe&quot;, &quot;ooe&quot;, &quot;sbg&quot;, &quot;stmk&quot;, &quot;tirol&quot;, &quot;vbg&quot;]]

path = os.path.abspath(os.path.dirname(sys.argv[0]))
fn = os.path.join(path, &quot;test.csv&quot;)

with open(fn, 'w', newline='', encoding=&quot;utf-8&quot;) as f:
    writer = csv.writer(f, delimiter=&quot;;&quot;)
    writer.writerows(data)
</code></pre> <p>When I check the CSV file in a text editor, everything is displayed correctly.</p> <p>But when I open the CSV with Excel, it looks like this:</p> <p><a href="https://i.sstatic.net/8MMU614T.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8MMU614T.png" alt="enter image description here" /></a></p> <p>How can I get the umlauts to display correctly in Excel as well?</p>
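<p>From what I have read, Excel only detects UTF-8 when the file starts with a BOM, so my best guess is to write with <code>utf-8-sig</code> instead. Is that the right fix, or is there a cleaner way?</p> <pre><code>with open(fn, 'w', newline='', encoding='utf-8-sig') as f:  # BOM so Excel recognizes UTF-8
    writer = csv.writer(f, delimiter=';')
    writer.writerows(data)
</code></pre>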
<python><excel><csv>
2025-06-02 08:12:48
1
1,515
Rapid1898
79,648,275
3,366,355
Convert JSONC to JSON via regex
<p>I have JSONC as input (it is a superset of JSON that supports comments like <code>//this one</code> and <code>/* this one */</code>), and want to transform it into standard JSON using a Python regex, but I'm not sure this can be solved with regex only. I know it can be done via semantic processing, maybe something like tree-sitter, but I'm looking for a regex-based solution. Since we don't use <code>/* */</code>, a regex that only removes <code>//</code> comments is fine.</p> <p>Note that:</p> <ol> <li>It is guaranteed that when you properly remove everything that's in the comments, you get valid JSON;</li> <li>The input is always pretty-formatted;</li> </ol> <p>Here is an example input with a failing sed attempt at the top:</p> <pre class="lang-json prettyprint-override"><code> //tried this sed -r 's#\s//[^}]*##'
 // also tried this '%^*3//s39()'
[
  {
    &quot;test1&quot; : &quot;http://test.com&quot;,
    &quot;test2&quot; : &quot;http://test.com&quot;,//test
    // any thing
    &quot;ok&quot; : 3, //here 2
    &quot;//networkpath1&quot; : true, //whynot
    &quot;//networkpath2&quot; : true // ok
  },//eof
  {
    &quot;statement&quot; : &quot;I like test cases&quot;
  }//eof
]
</code></pre> <p>Here is another failing attempt:</p> <pre class="lang-py prettyprint-override"><code>comment_re = re.compile(r'\s//[^}]*')
cleaned = comment_re.sub('', jsonStr)
</code></pre> <p>This removes too much when <code>//</code> occurs in a string literal.</p> <p>How can I make this work for such inputs as well?</p> <p>NB: A solution is already helpful if it doesn't deal with <code>/* this type of comment */</code>, so no need to cover that.</p>
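<p>The closest I have gotten is matching string literals first and only deleting <code>//</code> comments that fall outside of them, but I'm not confident this covers every edge case (escaped quotes, unusual whitespace), so I'd appreciate a check:</p> <pre class="lang-py prettyprint-override"><code>import re

# Alternation trick: capture string literals so they are re-emitted untouched,
# otherwise match a // comment up to the end of the line and drop it.
string_or_comment = re.compile(r'(&quot;(?:\\.|[^&quot;\\])*&quot;)|//[^\n]*')

def jsonc_to_json(text):
    return string_or_comment.sub(lambda m: m.group(1) or '', text)
</code></pre> <p>On the example above this keeps the <code>&quot;//networkpath1&quot;</code> keys and the <code>http://</code> URLs intact while stripping the trailing comments, and the result parses with <code>json.loads</code>, but I don't know if the approach is sound in general.</p>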
<python><json><regex><bash><json-c>
2025-06-02 06:37:32
2
1,032
martin
79,648,080
11,421,839
How to use browser-use to take screenshots for each step and record video for the whole session?
<p>I am trying to build a python project that uses <code>browser_use</code> to perform tasks, create a video recording of the whole session, and take screenshots for each step.</p> <p>But I can't find a way to do it. The official docs provide some suggestions, but even after completing the configuration those settings did not produce any recordings &amp; screenshots for me.</p> <p>Here's the current setup of my browser.</p> <pre class="lang-py prettyprint-override"><code>import os

from browser_use import Agent, BrowserContext, BrowserProfile, BrowserSession

recording_dir = os.path.join(temp_dir, 'recordings')

# Configure the browser session
browser_session = BrowserSession(
    headless=False,
    viewport={'width': 1920, 'height': 1080},
    disable_web_security=True,
    ignore_https_errors=True,
    disable_features=['VizDisplayCompositor'],
)

# Define the browser profile with recording settings
browser_profile = BrowserProfile(
    record_video_dir=recording_dir,
    record_video_size={'width': 1920, 'height': 1080},
    save_recording_path=recording_dir,
    deterministic_rendering=True,
    disable_images=False,
    disable_javascript=False,
    disable_animations=True,
    page_load_timeout=15000,
)

# Set up the browser context with video recording settings
browser_context = BrowserContext(
    enable_recording=True,
    recordings_dir=recording_dir,
    # record_video_dir=recording_dir,
    # record_video_size={'width': 1920, 'height': 1080},
    viewport={'width': 1920, 'height': 1080},
)

self.agent = Agent(
    task=&quot;Browser automation task with screenshot and video capture&quot;,
    llm=llm,
    browser_session=browser_session,
    browser_profile=browser_profile,
    browser_context=browser_context
)
</code></pre> <p>I want it to take a screenshot at each step and make a video recording of the whole session.</p> <p>The docs say that it will start recording as long as <code>record_video_dir</code> and <code>record_video_size</code> are provided, but that does not seem to be the case.</p> <p>I did see that both the &quot;screenshots&quot; and the &quot;video_recording&quot; fields exist, but both are empty.</p> <p>I have also checked <a href="https://stackoverflow.com/questions/77377151/not-able-to-make-video-recording-using-playwright">this question</a>, but it is playwright specific and I don't think it can be applied directly to browser-use.</p> <p>I also tried <a href="https://playwright.dev/docs/videos#record-video" rel="nofollow noreferrer">this doc</a>, but the &quot;video&quot; attribute is not provided by <code>browser_use</code>.</p> <p>I can see that this functionality has been implemented in the <a href="https://github.com/browser-use/web-ui" rel="nofollow noreferrer">official web-ui repo</a>, but I can't find how exactly it achieves that.</p>
<python><playwright><playwright-python><playwright-test><browser-use>
2025-06-02 01:11:00
1
1,963
Terry Windwalker
79,647,899
948,866
How to order functions to avoid NameError "x is not defined"?
<p>15 years ago the <a href="https://stackoverflow.com/questions/1590608/how-do-i-forward-declare-a-function-to-avoid-nameerrors-for-functions-defined">answer</a> to this question was:</p> <blockquote> <p>The general rule in Python is that a function should be defined before its usage, which does not necessarily mean it needs to be higher in the code.</p> </blockquote> <blockquote> <p>This is the correct answer, it also explains why the <code>if __name__==&quot;__main__&quot;:</code> solution works</p> </blockquote> <p>But it's not working for me. I have a class that initializes some class variables, one of which is calculated by a helper function that uses the value of another class variable.</p> <p>But this yields a <code>NameError</code> regardless of whether the class or the function is defined first.</p> <p>In both cases the first execution happens under <code>__main__</code> after the entire rest of the file has been seen.</p> <pre><code>def calc() -&gt; str: return Foo.x + '42' class Foo: x = 'bar' y = calc() def __init__(self): self.a = 'lunch' if __name__ == '__main__': f = Foo() print(f.x, f.y, f.a) </code></pre> <p>Error 1:</p> <pre><code> File &quot;C:\Users\David\PycharmProjects\jadn2\jadn\otest.py&quot;, line 4, in calc return Foo.x + '42' ^^^ NameError: name 'Foo' is not defined </code></pre> <p>Error 2:</p> <pre><code> File &quot;C:\Users\David\PycharmProjects\jadn2\jadn\otest.py&quot;, line 4, in Foo y = calc() ^^^^ NameError: name 'calc' is not defined </code></pre> <p>(I created a wrapper function <code>bar()</code> that calls <code>calc()</code> as in the original question, but as expected permuting the order makes no difference.)</p> <p>How can I get the circular definition to work, given that the order of execution is supposed to be what matters?</p>
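<p>For what it's worth, the only variant I have found that runs is assigning <code>y</code> after the class body has finished executing, but that feels like it sidesteps the question of whether the in-body version can work at all:</p> <pre><code>def calc() -&gt; str:
    return Foo.x + '42'

class Foo:
    x = 'bar'
    def __init__(self):
        self.a = 'lunch'

Foo.y = calc()  # works: Foo is already bound by the time calc() actually runs

if __name__ == '__main__':
    f = Foo()
    print(f.x, f.y, f.a)  # prints: bar bar42 lunch
</code></pre>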
<python><class><forward-reference>
2025-06-01 20:18:39
1
3,967
Dave
79,647,808
5,416,142
Cannot use local user data for Chrome while running Selenium
<p>I am running python program with Selenium to automate browser page processing. In order to open the pages correctly, I need to use my login and password saved in the Chrome user profile for my website. I created a new Chrome profile for that purpose. However, the program cannot read my user profile data for some reason. Here is the start of my python program:</p> <pre><code>from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.chrome.options import Options from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC import time, os CHROME_USER_DIR = &quot;C:/Users/LocalUserName/AppData/Local/Google/Chrome/User Data&quot; PROFILE_NAME = &quot;Profile 1&quot; options = Options() options.add_argument(f&quot;--user-data-dir={CHROME_USER_DIR}&quot;) options.add_argument(f&quot;--profile-directory={PROFILE_NAME}&quot;) driver = webdriver.Chrome(options=options) </code></pre> <p>I get the following error:</p> <pre><code>DevTools remote debugging requires a non-default data directory. Specify this using --user-data-dir. [12252:10400:0601/205536.845:ERROR:gpu\ipc\service\gpu_channel_manager.cc:946] Failed to create GLES3 context, fallback to GLES2. [12252:10400:0601/205536.845:ERROR:gpu\ipc\service\gpu_channel_manager.cc:957] ContextResult::kFatalFailure: Failed to create shared context for virtualization. [12252:10400:0601/205536.960:ERROR:gpu\ipc\service\gpu_channel_manager.cc:946] Failed to create GLES3 context, fallback to GLES2. [12252:10400:0601/205536.960:ERROR:gpu\ipc\service\gpu_channel_manager.cc:957] ContextResult::kFatalFailure: Failed to create shared context for virtualization. [12252:10400:0601/205536.960:ERROR:gpu\ipc\service\gpu_channel_manager.cc:946] Failed to create GLES3 context, fallback to GLES2. [12252:10400:0601/205536.960:ERROR:gpu\ipc\service\gpu_channel_manager.cc:957] ContextResult::kFatalFailure: Failed to create shared context for virtualization. [6180:10560:0601/205539.087:ERROR:chrome\browser\policy\cloud\fm_registration_token_uploader.cc:179] Client is missing for kUser scope WARNING: All log messages before absl::InitializeLog() is called are written to STDERR I0000 00:00:1748800539.099043 2104 voice_transcription.cc:58] Registering VoiceTranscriptionCapability Created TensorFlow Lite XNNPACK delegate for CPU. [6180:5096:0601/205539.234:ERROR:google_apis\gcm\engine\registration_request.cc:291] Registration response error message: DEPRECATED_ENDPOINT [6180:1708:0601/205539.382:ERROR:components\system_cpu\cpu_probe_win.cc:112] PdhAddEnglishCounter failed for '\Processor(_Total)\% Processor Time': Error (0x13D) while retrieving error. (0xC0000BB8) [6180:1864:0601/205539.382:ERROR:components\system_cpu\cpu_probe_win.cc:112] PdhAddEnglishCounter failed for '\Processor(_Total)\% Processor Time': Error (0x13D) while retrieving error. (0xC0000BB8) [6180:1708:0601/205551.758:ERROR:components\system_cpu\cpu_probe_win.cc:112] PdhAddEnglishCounter failed for '\Processor(_Total)\% Processor Time': Error (0x13D) while retrieving error. (0xC0000BB8) [6180:5096:0601/205605.808:ERROR:google_apis\gcm\engine\registration_request.cc:291] Registration response error message: DEPRECATED_ENDPOINT [6180:1708:0601/205606.768:ERROR:components\system_cpu\cpu_probe_win.cc:112] PdhAddEnglishCounter failed for '\Processor(_Total)\% Processor Time': Error (0x13D) while retrieving error. 
(0xC0000BB8) [6180:1708:0601/205621.778:ERROR:components\system_cpu\cpu_probe_win.cc:112] PdhAddEnglishCounter failed for '\Processor(_Total)\% Processor Time': Error (0x13D) while retrieving error. (0xC0000BB8) [6180:1708:0601/205636.778:ERROR:components\system_cpu\cpu_probe_win.cc:112] PdhAddEnglishCounter failed for '\Processor(_Total)\% Processor Time': Error (0x13D) while retrieving error. (0xC0000BB8) Traceback (most recent call last): File &quot;D:\Work\script_runner.py&quot;, line 38, in &lt;module&gt; driver = webdriver.Chrome(options=options) File &quot;C:\Users\LocalUserName\AppData\Local\Programs\Python\Python313\Lib\site-packages\selenium\webdriver\chrome\webdriver.py&quot;, line 47, in __init__ super().__init__( ~~~~~~~~~~~~~~~~^ browser_name=DesiredCapabilities.CHROME[&quot;browserName&quot;], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...&lt;3 lines&gt;... keep_alive=keep_alive, ^^^^^^^^^^^^^^^^^^^^^^ ) ^ File &quot;C:\Users\LocalUserName\AppData\Local\Programs\Python\Python313\Lib\site-packages\selenium\webdriver\chromium\webdriver.py&quot;, line 69, in __init__ super().__init__(command_executor=executor, options=options) ~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\LocalUserName\AppData\Local\Programs\Python\Python313\Lib\site-packages\selenium\webdriver\remote\webdriver.py&quot;, line 257, in __init__ self.start_session(capabilities) ~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^ File &quot;C:\Users\LocalUserName\AppData\Local\Programs\Python\Python313\Lib\site-packages\selenium\webdriver\remote\webdriver.py&quot;, line 356, in start_session response = self.execute(Command.NEW_SESSION, caps)[&quot;value&quot;] ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\LocalUserName\AppData\Local\Programs\Python\Python313\Lib\site-packages\selenium\webdriver\remote\webdriver.py&quot;, line 447, in execute self.error_handler.check_response(response) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^ File &quot;C:\Users\LocalUserName\AppData\Local\Programs\Python\Python313\Lib\site-packages\selenium\webdriver\remote\errorhandler.py&quot;, line 232, in check_response raise exception_class(message, screen, stacktrace) selenium.common.exceptions.SessionNotCreatedException: Message: session not created from chrome not reachable Stacktrace: GetHandleVerifier [0x0x7ff7c0f26f65+78965] GetHandleVerifier [0x0x7ff7c0f26fc0+79056] (No symbol) [0x0x7ff7c0cb9c0c] (No symbol) [0x0x7ff7c0caaa41] (No symbol) [0x0x7ff7c0cfceb2] (No symbol) [0x0x7ff7c0cf7a78] (No symbol) [0x0x7ff7c0cf297d] (No symbol) [0x0x7ff7c0d465be] (No symbol) [0x0x7ff7c0d45d50] (No symbol) [0x0x7ff7c0d38443] (No symbol) [0x0x7ff7c0d01311] (No symbol) [0x0x7ff7c0d020a3] GetHandleVerifier [0x0x7ff7c11de26d+2926461] GetHandleVerifier [0x0x7ff7c11d8993+2903715] GetHandleVerifier [0x0x7ff7c11f6aed+3026941] GetHandleVerifier [0x0x7ff7c0f416fe+187406] GetHandleVerifier [0x0x7ff7c0f496ef+220159] GetHandleVerifier [0x0x7ff7c0f2faf4+114692] GetHandleVerifier [0x0x7ff7c0f2fca9+115129] GetHandleVerifier [0x0x7ff7c0f164d8+10728] BaseThreadInitThunk [0x0x7ff82e307374+20] RtlUserThreadStart [0x0x7ff82f9dcc91+33] </code></pre> <p>Note that when I run the same program without the lines:</p> <pre><code>options.add_argument(f&quot;--user-data-dir={CHROME_USER_DIR}&quot;) options.add_argument(f&quot;--profile-directory={PROFILE_NAME}&quot;) </code></pre> <p>The program runs correctly, except the website is not displaying the gage correctly as it did not log in with my username/password.</p> <p>I already 
checked the following: ChromeDriver and Chrome have the same version. I do not use any other apps with my Chrome profile while running the program.</p> <p>Any ideas what goes wrong and how to fix it?</p>
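<p>My current suspicion is the first log line (<code>DevTools remote debugging requires a non-default data directory</code>): newer Chrome builds apparently refuse to let the driver attach to the live default <code>User Data</code> folder. Would pointing <code>--user-data-dir</code> at a dedicated directory (or a copy of my profile) be the right approach? Something like this is what I had in mind:</p> <pre><code># Dedicated profile directory just for Selenium (path is only an example).
# I would log in to the website once in this profile so the credentials get saved there.
CHROME_USER_DIR = r'C:\Users\LocalUserName\selenium-chrome-profile'

options = Options()
options.add_argument(f'--user-data-dir={CHROME_USER_DIR}')
options.add_argument('--profile-directory=Default')

driver = webdriver.Chrome(options=options)
</code></pre>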
<python><google-chrome><selenium-webdriver><selenium-chromedriver>
2025-06-01 18:14:54
2
757
BohdanZPM
79,647,650
1,088,979
Cannot import Python modules from sibling directory in Jupyter Notebook running in VS Code (os.chdir and sys.path.append do not work)
<p>I am running Jupyter Notebooks inside Visual Studio Code using the official Jupyter extension.</p> <p>My project structure is as follows:</p> <pre><code>my_project/ โ”œโ”€โ”€ notebooks/ โ”‚ โ””โ”€โ”€ analysis.ipynb โ”œโ”€โ”€ libs/ โ”‚ โ”œโ”€โ”€ __init__.py โ”‚ โ””โ”€โ”€ my_module.py </code></pre> <p>In <code>analysis.ipynb</code>, I would like to import <code>my_module</code> using either:</p> <pre class="lang-py prettyprint-override"><code>from libs import my_module </code></pre> <p>or:</p> <pre class="lang-py prettyprint-override"><code>import my_module </code></pre> <p>However, I consistently get the following error:</p> <pre><code>ModuleNotFoundError: No module named 'libs' </code></pre> <p>I have read the <a href="https://code.visualstudio.com/docs/datascience/jupyter-notebooks" rel="nofollow noreferrer">official VS Code Jupyter documentation</a>, but it does not cover anything about import paths or how <code>sys.path</code> is handled.</p> <p>Here are the things I have tried:</p> <ol> <li>Changing the working directory:</li> </ol> <pre class="lang-py prettyprint-override"><code>import os os.chdir(&quot;..&quot;) </code></pre> <p>This does not work. The import still fails.</p> <ol start="2"> <li>Manually adding the path to <code>sys.path</code>:</li> </ol> <pre class="lang-py prettyprint-override"><code>import sys import os sys.path.append(os.path.abspath(&quot;../libs&quot;)) import my_module </code></pre> <p>This also does not work. The import fails with the same <code>ModuleNotFoundError</code>.</p> <p>In a normal Python script (outside of a notebook), these methods usually work for me. But in this case, inside a Jupyter Notebook running in VS Code, they are not working.</p> <p>I would like to understand:</p> <ol> <li>Why do these approaches not work in this environment?</li> <li>How can I properly import modules from sibling or parent directories in this setup?</li> <li>What is the recommended way to structure this in VS Code when using Jupyter Notebooks?</li> </ol>
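<p>One thing I have not been able to confirm is which working directory the kernel actually starts in, which could explain why the relative paths above point to the wrong place. Is checking it like this (and building the path from whatever it prints) a reasonable way to debug this, or is there a VS Code setting I should be using instead?</p> <pre class="lang-py prettyprint-override"><code>import os
import sys

print(os.getcwd())    # where the VS Code kernel actually started
print(sys.path[:3])   # what the import system currently sees

# My assumption: if the CWD turns out to be the workspace root rather than notebooks/,
# then '..' points outside my_project, which would explain the ModuleNotFoundError.
sys.path.insert(0, os.getcwd())   # adjust based on what the printout shows
from libs import my_module
</code></pre>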
<python><visual-studio-code><jupyter-notebook><vscode-extensions>
2025-06-01 14:50:48
0
9,584
Allan Xu