| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
79,692,953
| 305,597
|
How can I prevent matplotlib from trying to build the font cache on first run?
|
<p>I am using <code>PyInstaller</code> to bundle Python and several of its libraries into a single macOS app, so I can distribute some software without any external dependencies.</p>
<p>The first time anyone runs this "fresh" copy of Python + matplotlib, it spends a significant amount of time trying to build the font cache.</p>
<pre><code>Matplotlib is building the font cache; this may take a moment
</code></pre>
<p>I am trying to prevent this. My current idea was bundling <code>DejaVuSans.ttf</code> with the program and modifying the font path in <code>rcParams</code> before importing <code>pyplot</code>.</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib
from matplotlib import font_manager
font_path = "resources/DejaVuSans.ttf"
font_manager.fontManager.addfont(font_path)
font_name = font_manager.FontProperties(fname = font_path).get_name()
matplotlib.rcParams['font.family'] = font_name
matplotlib.rcParams['font.sans-serif'] = [font_name]
matplotlib.rcParams['font.serif'] = [font_name]
matplotlib.rcParams['font.monospace'] = [font_name]
</code></pre>
<p>However, even after doing this, Matplotlib still tries to build the font cache.</p>
<p>How can I tell Matplotlib to not care about fonts that are not the <code>DejaVuSans.ttf</code> I'm bundling with this program?</p>
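<p>For reference, a minimal sketch of the direction I am considering (the <code>mpl-config</code> directory name and the fallback path are my own assumptions): ship a pre-built cache directory with the bundle and point <code>MPLCONFIGDIR</code> at it before matplotlib is imported, so nothing has to be rebuilt on first run.</p>
<pre><code># Sketch only: point matplotlib's config/cache dir at a bundled, pre-built cache.
# "mpl-config" is an assumed folder name shipped next to the app's resources.
import os
import sys
import pathlib

bundle_dir = pathlib.Path(getattr(sys, "_MEIPASS", pathlib.Path(__file__).parent))
os.environ["MPLCONFIGDIR"] = str(bundle_dir / "mpl-config")

import matplotlib  # must be imported only after MPLCONFIGDIR is set
</code></pre>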
|
<python><macos><matplotlib><fonts><pyinstaller>
|
2025-07-07 13:43:08
| 1
| 9,705
|
Martín Fixman
|
79,692,951
| 1,503,120
|
Custom type object tests as member of sequence but not of dictionary
|
<p>The <a href="https://docs.python.org/3/reference/expressions.html#in" rel="nofollow noreferrer">official Python docs for the <code>in</code> operator</a> state:</p>
<blockquote>
<p>For container types such as list, tuple, set, frozenset, dict, or collections.deque, the expression <code>x in y</code> is equivalent to <code>any(x is e or x == e for e in y)</code>.</p>
</blockquote>
<p>However, this does not seem to be true. Please see the following code:</p>
<pre><code>class Val:
    def __init__(self, val):
        self.val = val
    def __eq__(self, other):
        return self.val == other.val
    __hash__ = object.__hash__

one = Val(1) ; two = Val(2) ; three = Val(3)
valSeq = [one, two, three]
print("Val(2) in valSeq:\n ", Val(2) in valSeq)
</code></pre>
<p>This prints:</p>
<pre><code>Val(2) in valSeq:
True
</code></pre>
<p>as expected, but the same does not work when I use <code>Val</code> as a key in a dictionary:</p>
<pre><code>valMap = {one: "ekam", two: "dve", three: "trini"}
print("valMap[Val(2)]:")
try:
    print(valMap[Val(2)])
except KeyError:
    print(" KeyError")
print()
print("Val(2) in valMap:\n ", Val(2) in valMap)
</code></pre>
<p>This outputs:</p>
<pre><code>valMap[Val(2)]:
KeyError
Val(2) in valMap:
False
</code></pre>
<p>To investigate why this happens, I tried testing what the documentation says:</p>
<pre><code>def inTest(obj, seq):
    return any(obj is test or obj == test for test in seq)

print("inTest(Val(2), valMap):\n ", inTest(Val(2), valMap))
</code></pre>
<p>which gives:</p>
<pre><code>inTest(Val(2), valMap):
True
</code></pre>
<p>and sure enough, the equality test of <code>Val(2)</code> is satisfied as expected:</p>
<pre><code>def inTestStr(obj, seq):
    for test in seq:
        print(obj is test, obj == test)

print("inTestStr(Val(2), valMap):")
inTestStr(Val(2), valMap)
</code></pre>
<p>which gives:</p>
<pre><code>inTestStr(Val(2), valMap):
False False
False True
False False
</code></pre>
<p>What is wrong, and why doesn't the <code>in</code> operator work as expected?</p>
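<p>A quick sanity check of the hashes (a sketch reusing the <code>Val</code> class and the <code>two</code> object above) shows the part I find confusing:</p>
<pre><code>print(Val(2) == two)              # True  -- __eq__ compares .val
print(hash(Val(2)) == hash(two))  # almost certainly False -- object.__hash__ is identity-based
</code></pre>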
|
<python>
|
2025-07-07 13:42:40
| 1
| 1,276
|
jamadagni
|
79,692,691
| 20,292,449
|
MCP connection fails silently
|
<p>I am new to MCP and I am getting the connection failure below. A minimal setup works, the MCP Inspector can run the tool without problems, and I followed the official guideline from the docs and used <code>uv</code> for the installation and environment setup, but no matter what I try, my own server does not connect.
The server does not log anything connection-related when I try to connect from the client.</p>
<p>Many thanks in advance for any help.</p>
<p>Here is the error:</p>
<pre><code>(client) PS C:\Users\user\Desktop\codes\companies\Fanaye Technologies\MCP\MCP_X_Final\client> python main.py ../server/server.py
Traceback (most recent call last):
File "C:\Users\user\Desktop\codes\companies\Fanaye Technologies\MCP\MCP_X_Final\client\main.py", line 107, in <module>
asyncio.run(main())
File "D:\Programs\Python\Python312\Lib\asyncio\runners.py", line 194, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "D:\Programs\Python\Python312\Lib\asyncio\runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Programs\Python\Python312\Lib\asyncio\base_events.py", line 685, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "C:\Users\user\Desktop\codes\companies\Fanaye Technologies\MCP\MCP_X_Final\client\main.py", line 101, in main
await mc.connect_to_server(sys.argv[1])
File "C:\Users\user\Desktop\codes\companies\Fanaye Technologies\MCP\MCP_X_Final\client\main.py", line 30, in connect_to_server
await self.session.initialize()
File "C:\Users\user\Desktop\codes\companies\Fanaye Technologies\MCP\MCP_X_Final\client\.venv\Lib\site-packages\mcp\client\session.py", line 151, in initialize
result = await self.send_request(
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\user\Desktop\codes\companies\Fanaye Technologies\MCP\MCP_X_Final\client\.venv\Lib\site-packages\mcp\shared\session.py", line 286, in send_request raise McpError(response_or_error.error)
mcp.shared.exceptions.McpError: Connection closed
</code></pre>
<p>Here is my code for <code>server.py</code>:</p>
<pre><code>from typing import Any
import os
import time
import asyncio
import openpyxl
import httpx
import json
from mcp.server.fastmcp import FastMCP
from dotenv import load_dotenv
import sys

load_dotenv()

print("Starting MCP server...", file=sys.stderr)
print(f"Python executable: {sys.executable}", file=sys.stderr)
print(f"Working directory: {os.getcwd()}", file=sys.stderr)

# Initialize FastMCP server
mcp = FastMCP("twitter_monitor")

X_API_KEY = os.getenv("X_API_KEY")
print(f"X_API_KEY configured: {bool(X_API_KEY)}", file=sys.stderr)

TWITTER_SEARCH_URL = "https://api.twitter.com/2/tweets/search/recent"
HEADERS = {
    "Authorization": f"Bearer {X_API_KEY}",
    "Content-Type": "application/json"
}

def read_excel(file_path: str, column_name: str) -> list[str]:
    print(f"Attempting to read Excel file: {file_path}", file=sys.stderr)
    if not os.path.exists(file_path):
        raise FileNotFoundError(f"Excel file not found: {file_path}")
    try:
        wb = openpyxl.load_workbook(file_path)
        print(f"Workbook loaded. Available sheets: {wb.sheetnames}", file=sys.stderr)
    except Exception as e:
        raise Exception(f"Failed to load workbook: {e}")
    if "Market Makers" not in wb.sheetnames:
        raise ValueError(f"Sheet 'Market Makers' not found. Available sheets: {wb.sheetnames}")
    sheet = wb["Market Makers"]
    header = [cell.value for cell in next(sheet.iter_rows(min_row=1, max_row=1))]
    print(f"Excel headers: {header}", file=sys.stderr)
    try:
        col_idx = header.index(column_name)
    except ValueError:
        raise ValueError(f"Column '{column_name}' not found in sheet header: {header}")
    twitter_handles = set()
    for row in sheet.iter_rows(min_row=2):
        handle = row[col_idx].value
        if handle and handle not in twitter_handles:
            twitter_handles.add(handle)
            if len(twitter_handles) == 2:
                break
    print(f"Found Twitter handles: {list(twitter_handles)}", file=sys.stderr)
    return list(twitter_handles)

@mcp.tool()
async def get_company_tweets(column: str = "Twitter", max_results: int = 10) -> str:
    """Get tweets about Company, Announcement, or Update from Excel-provided usernames.

    Args:
        column: Column name in Excel sheet with Twitter usernames.
        max_results: Max tweets to retrieve per handle.
    """
    file_path = "mysheet.xlsx"
    try:
        twitter_handles = read_excel(file_path, column)
    except Exception as e:
        return json.dumps({"error": str(e)})
    all_posts = []
    async with httpx.AsyncClient() as client:
        for handle in twitter_handles:
            query = f"from:{handle} Company Announcement OR Update"
            params = {"query": query, "max_results": max_results}
            retries = 3
            for _ in range(retries):
                try:
                    response = await client.get(TWITTER_SEARCH_URL, headers=HEADERS, params=params)
                    if response.status_code == 200:
                        all_posts.extend(response.json().get("data", []))
                        break
                    elif response.status_code == 429:
                        reset_time = int(response.headers.get("x-rate-limit-reset", 0))
                        current_time = int(time.time())
                        wait_time = max(reset_time - current_time, 5)
                        print(f"Rate limited. Waiting {wait_time}s...", file=sys.stderr)
                        await asyncio.sleep(wait_time)
                    else:
                        print(f"Error {response.status_code} for {handle}: {response.text}", file=sys.stderr)
                        break
                except Exception as e:
                    print(f"Request failed for {handle}: {e}", file=sys.stderr)
                    break
            await asyncio.sleep(1)  # Use async sleep
            if len(all_posts) >= 25:
                break
    result = {"total_fetched": len(all_posts), "tweets": all_posts[:25]}
    return json.dumps(result, indent=2)

if __name__ == "__main__":
    print("Starting MCP server...", file=sys.stderr)
    mcp.run(transport="stdio")
</code></pre>
<p>Here is the client code:</p>
<pre><code>import os
import sys
import json
import asyncio
from contextlib import AsyncExitStack
from dotenv import load_dotenv
import httpx
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

load_dotenv()
DEEPSEEK_API_KEY = os.getenv("DEEPSEEK_API_KEY")
DEEPSEEK_API_URL = "https://api.deepseek.com/v1/chat/completions"

class MCPClient:
    def __init__(self):
        self.session = None
        self.exit_stack = AsyncExitStack()
        self.tool_to_session = {}
        self.deepseek_functions = []

    async def connect_to_server(self, server_script_path: str):
        is_py = server_script_path.endswith('.py')
        cmd = "python" if is_py else "node"
        params = StdioServerParameters(command=cmd, args=[server_script_path], env=None, stderr_to_stdout=True)
        read, write = await self.exit_stack.enter_async_context(stdio_client(params))
        self.session = await self.exit_stack.enter_async_context(ClientSession(read, write))
        await self.session.initialize()
        resp = await self.session.list_tools()
        for tool in resp.tools:
            self.tool_to_session[tool.name] = self.session
            self.deepseek_functions.append({
                "name": tool.name,
                "description": tool.description,
                "parameters": tool.inputSchema
            })
        print("Connected! Tools:", [f["name"] for f in self.deepseek_functions])

    async def call_deepseek(self, messages, functions=None, function_call="auto"):
        headers = {
            "Authorization": f"Bearer {DEEPSEEK_API_KEY}",
            "Content-Type": "application/json"
        }
        body = {
            "model": "deepseek-chat",
            "messages": messages,
            "max_tokens": 1000,
            "function_call": function_call
        }
        if functions:
            body["functions"] = functions
        async with httpx.AsyncClient() as client:
            response = await client.post(DEEPSEEK_API_URL, headers=headers, json=body)
            response.raise_for_status()
            return response.json()

    async def process_query(self, query: str) -> str:
        messages = [{"role": "user", "content": query}]
        resp_json = await self.call_deepseek(messages, self.deepseek_functions)
        msg = resp_json["choices"][0]["message"]
        if "function_call" in msg:
            fname = msg["function_call"]["name"]
            fargs = json.loads(msg["function_call"]["arguments"])
            result = await self.tool_to_session[fname].call_tool(fname, fargs)
            messages.append(msg)
            messages.append({"role": "function", "name": fname, "content": result.content})
            resp2_json = await self.call_deepseek(messages)
            return resp2_json["choices"][0]["message"]["content"]
        return msg["content"]

    async def chat_loop(self):
        print("MCP Twitter Client (DeepSeek) started. Type your query or 'quit'.")
        while True:
            q = input("You: ").strip()
            if q.lower() == "quit":
                break
            try:
                print("Assistant:\n", await self.process_query(q))
            except Exception as e:
                print("Error:", e)

    async def cleanup(self):
        await self.exit_stack.aclose()

async def main():
    if len(sys.argv) != 2:
        print("Usage: python client.py <path_to_server_script>")
        sys.exit(1)
    mc = MCPClient()
    try:
        await mc.connect_to_server(sys.argv[1])
        await mc.chat_loop()
    finally:
        await mc.cleanup()

if __name__ == "__main__":
    asyncio.run(main())
</code></pre>
|
<python><model-context-protocol>
|
2025-07-07 10:19:35
| 1
| 532
|
ayex
|
79,692,625
| 13,682,559
|
How to type hint a python factory method returning different types?
|
<p>I am working on a generic framework with the goal to solve different but related problems. A problem consists of data and a bunch of algorithms operating on this data. Data and algorithms may vary from problem to problem, so I need different classes. But they all share a common interface.</p>
<p>I start with a config-file defining the problem. At one point in my program I need a function/method that returns instances of different classes depending on the value (not the type) of a parameter.</p>
<p>The signatures look like this:</p>
<pre><code>from dataclasses import dataclass
from typing import Protocol

# Protocols
class BaseData(Protocol):
    common: int

class BaseAlg[D: BaseData](Protocol):
    def update(self, data: D) -> None: ...

# Implementations data
@dataclass
class Data1:
    common: int
    extra: int

@dataclass
class Data2:
    common: int
    extra: str

# Implementations algorithms
class Alg1:
    def update(self, data: Data1) -> None:
        data.extra += data.common

class Alg2a:
    def update(self, data: Data2) -> None:
        data.extra *= data.common

class Alg2b:
    def update(self, data: Data2) -> None:
        data.extra += "2b"
</code></pre>
<p>Now I want a factory initializing the algorithms and the data (omitted here) for each problem.</p>
<pre><code>class FactoryAlgorithms:
    def _create_1(self) -> list[BaseAlg[Data1]]:
        return [Alg1()]

    def _create_2(self) -> list[BaseAlg[Data2]]:
        return [Alg2a(), Alg2b()]

    def create(self, type_alg: int):  # <- How to annotate the return type?
        match type_alg:
            case 1:
                return self._create_1()
            case 2:
                return self._create_2()
            case _:
                raise ValueError(f"Unknown type of data {type_alg}")
</code></pre>
<p>How do I annotate the return type of the generic <code>create</code>-method?</p>
<p>mypy accepts <code>list[BaseAlg[Data1]] | list[BaseAlg[Data2]]</code> but</p>
<ol>
<li>This gets tedious as more and more business logic (algorithms and data structures) are added.</li>
<li>This explicit typing doesn't really reflect what I want to return: A bunch of algorithms, all operating on the same data.</li>
</ol>
<p>Intuitively I would write <code>list[BaseAlg[BaseData]]</code> which is rejected by mypy, I guess for covariance/contravariance reasons:</p>
<p>Incompatible return value type (got "list[BaseAlg[Data1]]", expected "list[BaseAlg[BaseData]]")</p>
<p>Is there a way to tackle this with generics? Or is this design fundamentally flawed?</p>
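<p>For reference, one pattern that might be relevant here (a sketch only; the class name is hypothetical, and it still enumerates every case, so it may not address point 1) is <code>typing.overload</code> with <code>Literal</code> parameter values:</p>
<pre><code>from typing import Literal, overload

class FactoryAlgorithmsOverloaded(FactoryAlgorithms):
    @overload
    def create(self, type_alg: Literal[1]) -> list[BaseAlg[Data1]]: ...
    @overload
    def create(self, type_alg: Literal[2]) -> list[BaseAlg[Data2]]: ...
    def create(self, type_alg: int) -> list[BaseAlg[Data1]] | list[BaseAlg[Data2]]:
        # same dispatch as above; the overloads only narrow the static return type
        return super().create(type_alg)
</code></pre>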
|
<python><generics><python-typing><factory-method>
|
2025-07-07 09:35:18
| 1
| 1,108
|
Durtal
|
79,692,584
| 8,771,201
|
Python cut text in pieces to upload to Wordpress Blog
|
<p>I have a piece of text that already has some formatting. Now I need to convert it to a format that I can send to WordPress via the WordPress API.</p>
<p>This is an example of my text:</p>
<pre><code>'**H1: Some text**\n\nSome text as paragraph.\n\n**H2: A subheader**\n\nText from the subheader.\n\nA line break with some more text.\n\n**H2: Another sub hearder**\n\n**H3: A sub sub header
</code></pre>
<p>I tried this:</p>
<pre><code>test = myFullText
header1 = re.findall('H1.*?ph.', test)
</code></pre>
<p>And</p>
<pre><code> test = myFullText
header1 = re.findall('H1.*?\n\n.', test)
</code></pre>
<p>Both give me an empty <code>header1</code>.</p>
<p>A more general question: is the <code>findall</code> function the best approach for my use case, or is there another option to achieve this? As I mentioned, my ultimate goal is to create a WordPress blog post from this text.</p>
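<p>To make the intent concrete, here is a sketch of the kind of splitting I have in mind (the regex pattern and the heading-to-tag mapping are assumptions, not something I am committed to):</p>
<pre><code>import re

text = '**H1: Some text**\n\nSome text as paragraph.\n\n**H2: A subheader**\n\nText from the subheader.'

# re.split with capture groups keeps the heading level and title in the result list:
# [leading text, 'H1', 'Some text', body, 'H2', 'A subheader', body, ...]
parts = re.split(r'\*\*(H[1-6]): (.*?)\*\*', text)
for level, title, body in zip(parts[1::3], parts[2::3], parts[3::3]):
    print(f"<{level.lower()}>{title}</{level.lower()}>")
    print(f"<p>{body.strip()}</p>")
</code></pre>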
|
<python><wordpress><wordpress-rest-api>
|
2025-07-07 09:11:53
| 1
| 1,191
|
hacking_mike
|
79,692,579
| 5,760,832
|
Unable to use dask-sql due to 'dask_expr.io' module
|
<h1>Aim:</h1>
<ol>
<li>Read data from Parquet files</li>
<li>Register each df as Table</li>
<li>Use dask-sql to join & query from the table</li>
</ol>
<p>Here are the installation steps:</p>
<ul>
<li><code>pip install --force-reinstall --no-cache-dir "dask[complete]" dask-sql </code></li>
<li><code>pip install dask-expr</code></li>
</ul>
<p>Installed versions:</p>
<pre><code>import dask
import dask_sql
print(dask.__version__)
print(dask_sql.__version__)
# output:
> 2025.5.1
> 2024.5.0
</code></pre>
<p>Steps to reproduce the error with a sample CSV file:</p>
<pre><code>import dask.dataframe as dd
from dask_sql import Context
tmp_df = dd.read_csv("./tmp.csv")
c = Context()
c.create_table("tmp", df)
</code></pre>
<pre><code>---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
Cell In[2], line 2
1 c = Context()
----> 2 c.create_table("tmp", tmp_df)
File ~\AppData\Roaming\Python\Python312\site-packages\dask_sql\context.py:266, in Context.create_table(self, table_name, input_table, format, persist, schema_name, statistics, gpu, **kwargs)
264 try:
265 if dd._dask_expr_enabled():
--> 266 from dask_expr.io.parquet import ReadParquet
268 dask_filepath = None
269 operations = input_table.find_operations(ReadParquet)
ModuleNotFoundError: No module named 'dask_expr.io'
</code></pre>
<p><strong>Note:</strong> even after running <code>pip install dask-expr</code>, I am getting the same error.</p>
|
<python><dataframe><dask><dask-dataframe>
|
2025-07-07 09:09:44
| 1
| 891
|
Aqua 4
|
79,692,568
| 243,872
|
showing 2 raised to the power of a fractional argument in sympy
|
<p>I'm trying to display 2^{\frac{1}{12}} in a jupyter notebook
using the following Python code:</p>
<pre><code>
from sympy import *
pow(2,Rational(1, 12))
</code></pre>
<p>which gives me</p>
<p><a href="https://i.sstatic.net/3GwcEeLl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3GwcEeLl.png" alt="12___
โ2" /></a></p>
<p>but I'd rather have</p>
<p>2 raised to the power of 1/12 (sorry, I have no quick way to display this symbolically here).</p>
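<p>For context, the kind of thing I am looking for might resemble the sketch below (hedged: I have not verified that the <code>root_notation</code> printer setting covers every display path in Jupyter):</p>
<pre><code>from sympy import Rational, latex
from IPython.display import Math, display

expr = 2**Rational(1, 12)
# ask the LaTeX printer not to use radical notation, then render the string
display(Math(latex(expr, root_notation=False)))
</code></pre>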
|
<python><jupyter-notebook><sympy>
|
2025-07-07 09:03:24
| 1
| 1,193
|
Krischu
|
79,692,554
| 7,208,627
|
What's the use case for `torch.randn_like` when `requires_grad=True`?
|
<p>In PyTorch, <code>requires_grad</code> can be set to <code>True</code> in <code>torch.randn_like</code>, what's the use case for this?</p>
<p>Say we have $\epsilon \sim N(0, I)$; to my understanding, since $\epsilon$ is not a parameter, there is no need to propagate gradients through it.</p>
<p>Thanks in advance.</p>
|
<python><pytorch>
|
2025-07-07 08:52:15
| 1
| 941
|
Incömplete
|
79,692,434
| 4,776,486
|
How to reply to messages in Telegram in Python?
|
<p>I created a simple Python program to automatically reply to messages in <strong>group topics</strong>. Part of it:</p>
<pre><code>bot = Client(name=login, api_id=api_id, api_hash=api_hash, phone_number=phone)

@bot.on_message(filters.text)
async def command_handler(client: Client, message: Message):
    if re.match(pattern, message.text):
        await message.reply(text="this is reply", quote=False)
</code></pre>
<p>It uses Pyrogram and replies only if the pattern matches.
The problem arises when messages come from a <strong>topic</strong> of a group: the bot replies in the <strong>General</strong> topic only. How can I fix this so it replies in exactly the topic the message comes from, <strong>without quoting</strong> the original message?
Should I maybe use some other Telegram library?</p>
|
<python><pyrogram>
|
2025-07-07 07:13:59
| 1
| 1,490
|
voipp
|
79,692,297
| 997,239
|
python pip-compile is rendering the absolute path of the requirements.in file
|
<p>Python version: 3.12
pip-tools: 7.4.1</p>
<p>Project structure</p>
<pre><code>/projectdir
------requirements/
------------requirements.in
------------requirements.txt
------------requirements-dev.in
------------requirements-dev.txt
------Makefile
</code></pre>
<p>Contents of requirements-dev.in</p>
<pre><code>-r requirements.in
localstack
*******
*******
</code></pre>
<p>Makefile command:</p>
<pre><code>pip-compile --no-emit-index-url requirements/requirements-dev.in --upgrade --resolver=backtracking --no-strip-extras
</code></pre>
<p>generated content of requirements-dev.txt</p>
<pre><code>#
# This file is autogenerated by pip-compile with Python 3.12
# by the following command:
#
# pip-compile --no-emit-index-url requirements/requirements-dev.in
#
annotated-types==0.7.0
# via pydantic
attrs==25.3.0
# via
# jsonschema
# referencing
aws-lambda-powertools==3.16.0
# via
# -r /Users/abhkumar/repo/pegasus-fulfill-ingest/requirements/requirements.in
# ewflib
</code></pre>
<p>As you can see, under aws-lambda-powertools,</p>
<pre><code>-r /Users/abhkumar/repo/pegasus-fulfill-ingest/requirements/requirements.in
</code></pre>
<p>I was expecting it to print only</p>
<pre><code>-r requirements.in
</code></pre>
<p>I have tried all the options I could find, but I can't get it working. Please help.</p>
|
<python><python-3.x><makefile><pip><pip-tools>
|
2025-07-07 04:06:53
| 0
| 537
|
Tarun
|
79,692,280
| 2,398,193
|
Python Marimo Slider not Showing
|
<p>I was basically following the first few minutes of this video: <a href="https://www.youtube.com/watch?v=dbPj3GOFa-g" rel="nofollow noreferrer">https://www.youtube.com/watch?v=dbPj3GOFa-g</a></p>
<p>In other words I did in gitbash:</p>
<pre><code>uvx marimo --help
</code></pre>
<p>Then:</p>
<pre><code>uvx marimo edit --sandbox notebook.py
</code></pre>
<p>Then, inside marimo, in the first cell:</p>
<pre><code>import marimo as mo
</code></pre>
<p>and in the second cell:</p>
<pre><code>mo.ui.slider(1,10,1)
</code></pre>
<p>So, there's no error and nothing breaks. Actually, I can save that to an object called <code>x</code>, and then call <code>x.value</code> and see the value printed. But there's no actual slider:</p>
<p><a href="https://i.sstatic.net/yIZOkn0w.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yIZOkn0w.png" alt="img from marimo" /></a></p>
<p>I'm using Windows 11, and the browser is Microsoft Edge (I also tried with Google Chrome).
I also tried:</p>
<pre><code>uv add "marimo[recommended]"
</code></pre>
<p>as per the documentation: <a href="https://docs.marimo.io/getting_started/installation/#__tabbed_2_2" rel="nofollow noreferrer">https://docs.marimo.io/getting_started/installation/#__tabbed_2_2</a></p>
<p>Any ideas?</p>
|
<python><marimo>
|
2025-07-07 03:36:33
| 1
| 477
|
Jorge Lopez
|
79,692,233
| 777,769
|
How to automatically print a variable in pdb?
|
<p>I am debugging a code in Python using <code>Pdb</code>:</p>
<pre class="lang-py prettyprint-override"><code>a = [12, 3, 4, 10, 23, 1]
def compute(x):
return 2 * x
for i in a:
b = compute(i)
</code></pre>
<p>To trace the value of a variable inside the loop, I set a breakpoint at the first line of the loop body and use the <code>continue</code> command to move to the next breakpoint, then use <code>print</code> command to inspect the variable:</p>
<pre class="lang-py prettyprint-override"><code>$ python -m pdb program.py
> /tmp/program.py(1)<module>()
-> a = [12, 3, 4, 10, 23, 1]
(Pdb) l
1 -> a = [12, 3, 4, 10, 23, 1]
2
3
4 def compute(x):
5 return 2 * x
6
7
8 for i in a:
9 b = compute(i)
[EOF]
(Pdb) b 9
Breakpoint 1 at /tmp/program.py:9
(Pdb) c
> /tmp/program.py(9)<module>()
-> b = compute(i)
(Pdb) p i
12
(Pdb) c
> /tmp/program.py(9)<module>()
-> b = compute(i)
(Pdb) p i
3
(Pdb) c
> /tmp/program.py(9)<module>()
-> b = compute(i)
(Pdb) p i
4
(Pdb)
</code></pre>
<p>Is there any way to automatically print the variable's value every time execution stops at the breakpoint?</p>
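<p>To be concrete, I am hoping for something along the lines of the sketch below, which uses pdb's <code>commands</code> feature (breakpoint number 1 is assumed to be the one set above):</p>
<pre><code>(Pdb) b 9
Breakpoint 1 at /tmp/program.py:9
(Pdb) commands 1
(com) p i
(com) end
(Pdb) c
</code></pre>
<p>pdb also has a <code>display i</code> command, which re-prints <code>i</code> at each stop whenever its value has changed.</p>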
|
<python><debugging><breakpoints><evaluation><pdb>
|
2025-07-07 01:33:34
| 1
| 2,459
|
Hamid Rouhani
|
79,692,191
| 14,488,413
|
In Python, how to find difference between a specific column in one dataframe and numeric columns of another dataframe?
|
<p>I have two datasets/dataframes, <code>df1</code> and <code>df2</code>. I want to generate <code>df3</code> by finding the difference between the numeric columns of df2 and df1's <code>column_X</code>.</p>
<pre><code>#### copy and paste below to generate df1 and df2
import pandas as pd
from random import uniform
import numpy as np
# generation of df1
data = np.random.uniform(15,40, size=(60, 2))
df1 = pd.DataFrame(data, columns=['column_A','column_B'])
df1['column_X'] = df1.mean(axis=1)
df1
# generation of df2
data = np.random.uniform(10.5,32.8, size=(60, 30))
df2 = pd.DataFrame(data, columns=['column_1','column_2','column_3','column_4','column_5',
'column_6','column_7','column_8','column_9','column_10',
'column_11','column_12','column_13','column_14','column_15',
'column_16','column_17','column_18','column_19','column_20',
'column_21','column_22','column_23','column_24','column_25',
'column_26','column_27','column_28','column_29','column_30',])
df2["Group"] = pd.DataFrame(np.repeat(['A','B','C'], 20, axis=0))
# make "Group" column the first column
col = df2.pop('Group')
df2.insert(0, 'Group', col)
df2
</code></pre>
<p>I want to generate df3 by subtracting df1's <code>column_X</code> from each of df2's numeric columns (<code>column_1</code> to <code>column_30</code>), while retaining the <code>Group</code> column.</p>
<pre><code># Step 1: create an empty df3 and then append df2['Group']
df3 = pd.DataFrame()
# substract "column_X from each numeric column
df3['col_1X_sub'] = df2['column_1'] - df1['column_X']
df3['col_2X_sub'] = df2['column_2'] - df1['column_X']
df3['col_3X_sub'] = df2['column_3'] - df1['column_X']
.
.
.
df3['col_30X_sub'] = df2['column_30'] - df1['column_X']
</code></pre>
<p>Final df3 should look something like this for all 30 columns</p>
<p><a href="https://i.sstatic.net/IYkL8QSW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IYkL8QSW.png" alt="enter image description here" /></a></p>
|
<python><pandas><for-loop><apply>
|
2025-07-07 00:00:36
| 1
| 322
|
nasa313
|
79,692,027
| 1,554,020
|
How to correctly type-annotate a classmethod-like descriptor in python?
|
<p>The <code>classonlymethod</code> decorator/descriptor defined below is like the built-in <code>classmethod</code>, but you're not allowed to call the method on instances, only on the class itself.</p>
<pre><code>from typing import Concatenate, Self
from collections.abc import Callable
from functools import partial

class classonlymethod[C, **P, R]:
    def __init__(self, func: Callable[Concatenate[type[C], P], R], /):
        self._func = func

    def __get__(self, instance: None, owner: type[C], /) -> Callable[P, R]:
        if instance is not None:
            raise AttributeError('class-only method')
        return partial(self._func, owner)

class Foo:
    @classonlymethod
    def bar(cls: type[Self]) -> None:
        raise NotImplementedError()
</code></pre>
<p>However, <code>pyright</code> <a href="https://pyright-play.net/?code=GYJw9gtgBALgngBwJYDsDmUkQWEMoDCYKAxgIYwCmKFlANFAMqUA2wAUKJFCWCy5RIwkxAM4A6MgCMSmbLnwEy-aQM7howAK6kYYPqLk48UBGTxJl7ayRZlRh2-dHEWcCJRgALMABMA2gQMAFTBAAoMAEoAugBc7FCJUL6UwFAA%2BumoSDCZABSirMAM2qSxhMp2UgKBxORUNFT%2B8AiUgdEMYR1QMQwA9ACU8UkjUIVs4umlsgC8UNPWIylpmWie%2BePFmCiiMGSklOUAcsT0UGAA7iiUIOUtbQTdg1AAtAB8FSrVbRE9cQmjRJINKoXb7EiUTCGFBgfAna7DQGjEBkJCFKAAQRgMBASCkWioAFEQOAQHkAOROBwvVxwKAebx%2BckDRZIkCeLQgFCmczCZQFIqTaYMS7XEAs9hUwwAMX0iMSAAEpbSGT5fADEssoFJzHlbKI7og2sw2NEBq8PvDDhrkaj0ScYABJbACDwoKi%2BYmkvIspFQIA" rel="nofollow noreferrer">complains</a> on the definition of <code>Foo.bar</code>:</p>
<blockquote>
<p>Type of parameter "cls" must be a supertype of its class "Foo" (reportGeneralTypeIssues)</p>
</blockquote>
<p>Which is odd, since it matches <a href="https://github.com/python/typeshed/blob/e940f855097b1ab9ceef5c422bd244093ab4fac1/stdlib/builtins.pyi#L166-L170" rel="nofollow noreferrer">annotations of <code>classmethod</code> in typeshed</a>.</p>
<p>I don't understand this error. Surely a <code>Foo</code> (denoted by <code>Self</code>) is a (non-strict) supertype of itself? I tried other ways to refer to <code>Foo</code> in the <code>cls</code> annotation, but they didn't work either.</p>
<p>What's going on here and how do I fix it? Is it a pyright bug?</p>
|
<python><python-typing><class-method><pyright><python-descriptors>
|
2025-07-06 19:06:45
| 3
| 14,259
|
yuri kilochek
|
79,692,022
| 606,943
|
Apache Airflow DAG not running
|
<p>I have installed the Airflow Docker environment as described in this <a href="https://airflow.apache.org/docs/apache-airflow/stable/howto/docker-compose/index.html" rel="nofollow noreferrer">guide</a> (3.0.2), and I'm able to run a very simple DAG using the EmptyOperator.</p>
<p>However, when I create another DAG using, for example, the BashOperator, I get the following error:</p>
<p>[2025-07-06, 20:51:22] ERROR - DAG not found during start up: dag_id="bash_operator_dag": bundle="BundleInfo(name='dags-folder', version=None)": path="chris_2.py": source="task"</p>
<p>The file I have is a very simple DAG:</p>
<pre class="lang-none prettyprint-override"><code>from airflow import DAG
from airflow.operators.bash import BashOperator
with DAG(
dag_id="bash_operator_dag",
start_date=datetime(2025, 4, 1),
catchup=False,
tags=["test"]
) as dag:
run_bash = BashOperator(
task_id="run_bash",
bash_command="echo chris"
)
process = BashOperator(
task_id="process",
bash_command="echo chris"
)
run_bash >> process
</code></pre>
<p>I'm completely lost as to why I get this error; any help will be appreciated.</p>
|
<python><airflow>
|
2025-07-06 19:00:15
| 1
| 707
|
Christian van R
|
79,691,957
| 16,883,182
|
How to configure the font used to render IME preview text in tkinter text input fields on Windows?
|
<h2>Problem Description</h2>
<p>I'm experiencing a problem concerning font configuration in Tkinter. The constructor of all Tkinter widgets have a <code>font</code> keyword argument which allows you to configure the font used for displaying text. <em>However</em>, this option doesn't seem to affect the <strong>IME preview text</strong> used in text input widgets (such as <code>Entry</code> and <code>Text</code>).</p>
<p>For those whose native language doesn't require IMEs (input method editors), I'll give a short explanation: while the "spelling" of a word/phrase is being entered, these spelling characters aren't immediately entered into the text field being typed in. Instead, the IME of the OS intercepts these spelling characters and translates them into the best-matching candidates as you type. And when you're done, you press Enter and the completed phrase is sent to the application. While you're composing the phrase, it's the application's responsibility to render the "preview" text in a manner which is visually distinct from text that has already been submitted into the text field.</p>
<p>Other applications (both built into Windows and 3rd party) usually uses the same font family <em>and</em> same font size to render the IME preview text, with the only difference being a dotted underline and sometimes a different background color. <strong>However</strong>, the problem I'm experiencing is that Tkinter always renders the IME preview text using the "default" font regardless of the font that the widget is configured to use.</p>
<p>For example, here's a screenshot of composing Chinese text in Notepad:</p>
<p><a href="https://i.sstatic.net/Jp4qNBz2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Jp4qNBz2.png" alt="Notepad Screenshot" /></a></p>
<p>And here's a screenshot of doing the same in the Tk application I wrote as an MRE:</p>
<p><a href="https://i.sstatic.net/l6bBaX9F.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/l6bBaX9F.png" alt="Tk Screenshot" /></a></p>
<p>As it can be seen, while Notepad uses the same font for both the actual text and the IME preview text, Tkinter uses the default font for the latter, which results in an ugly visual discrepancy (even the <em>placement</em> of the preview text is slightly off).</p>
<h3>Text in Image</h3>
<p>For those asking, this is the text in the image.</p>
<p><strong>Submitted Text:</strong></p>
<pre class="lang-none prettyprint-override"><code>The quick brown fox jumps over the lazy dog.
敏捷的棕色狐狸
^ ^
submitted text IME preview text
</code></pre>
<p><strong>IME preview text (start typing at the end of the second line):</strong></p>
<pre class="lang-none prettyprint-override"><code>่ทณ้ไบๆถๆฐ็็ใ
</code></pre>
<p>But please note that the problem is irrelevant to the specific text entered.</p>
<h2>Minimal Reproducible Example</h2>
<p>Here's the code for the MRE used in the screenshot to demonstrate the issue. The DPI configuration is necessary to ensure that one pixel in the window corresponds to one pixel on the physical screen. Otherwise Windows will automatically upscale the window on high DPI displays, causing the window to appear blurry.</p>
<pre class="lang-python prettyprint-override"><code>#!/usr/bin/env python3
from contextlib import suppress
import tkinter as tk
import tkinter.font
import platform
import ctypes
size = (800, 400)
def configure_dpi() -> bool:
"""Tries to set DPI awareness and returns whether successful."""
with suppress(AttributeError): # For Windows 8.1 and later
# set process DPI awareness to PROCESS_PER_MONITOR_DPI_AWARE (2).
return (ctypes.windll.shcore.SetProcessDpiAwareness(2) == 0)
with suppress(AttributeError): # For Windows Vista and later.
return (ctypes.windll.user32.SetProcessDPIAware() != 0)
return True # Windows version too low to support HiDPI.
def main() -> None:
root = tk.Tk()
root.title("Text Edit")
root.geometry("%dx%d" % size)
root.minsize(*size)
label_font = tk.font.Font(
name="label_font", family="ๅพฎ่ปๆญฃ้ป้ซ", size=12, weight=tk.font.NORMAL
)
input_font = tk.font.Font(
name="input_font", family="ๅพฎ่ปๆญฃ้ป้ซ", size=16, weight=tk.font.NORMAL
)
top_bar = tk.Frame(root, pady=10)
tk.Label(top_bar, text="Title:", font=label_font).pack(side=tk.LEFT)
entry = tk.Entry(top_bar, font=input_font)
entry.pack(side=tk.LEFT)
top_bar.grid(row=0, column=0, columnspan=2)
text = tk.Text(root, wrap=tk.WORD, font=input_font)
scrollbar = tk.Scrollbar(root, command=text.yview)
text.configure(yscrollcommand=scrollbar.set)
text.grid(row=1, column=0, sticky=tk.NSEW)
scrollbar.grid(row=1, column=1, sticky=tk.NS)
root.rowconfigure(1, weight=1)
root.columnconfigure(0, weight=1)
root.mainloop()
if __name__ == "__main__":
if platform.system() == "Windows":
configure_dpi()
main()
</code></pre>
<p>Run this script, and enter some non-IME text into either the single-line or multi-line text box, then compose some text after that using an IME. You should immediately notice that the IME preview text is rendered with a different font and size from the preceding text.</p>
<h2>What I've Tried</h2>
<ol>
<li><p>Assuming that the IME preview text's font is probably set to one of the built-in font names (such as <code>TkDefaultFont</code>, <code>TkFixedFont</code>, <code>TkCaptionFont</code>, etc.) in Tkinter's registry (returned by a call to <code>tkinter.font.names()</code>) but not knowing which one, I tried setting the size of <em>every</em> built-in font name to 16 pt, thinking that this way I would be guaranteed to hit the font that the preview text is associated with. So I wrote the following function:</p>
<pre class="lang-python prettyprint-override"><code>def resize_builtin_fonts(root: tk.Tk | tk.Widget, size: int) -> None:
"""Sets the size of all registered font names."""
for font_name in tk.font.names(root):
font_obj = tk.font.nametofont(font_name)
font_obj.configure(size=size)
</code></pre>
<p>Then I added the function call <code>resize_builtin_fonts(root, 16)</code> after the code to set up the root window and before the call to <code>root.mainloop()</code>. To my surprise, while this (as expected) caused all widgets without an explicit <code>font=</code> keyword argument in the constructor to display text at 16 pts, it had no effect on the IME preview text. So apparently the IME preview text operates separately from Tkinter's font registry.</p>
</li>
<li><p>Next, I tried searching through the <a href="https://docs.python.org/3/library/tkinter.html" rel="nofollow noreferrer">Tkinter documentation</a> and <a href="https://tkdocs.com/shipman/index.html" rel="nofollow noreferrer">John Shipman's Tkinter reference</a>, wondering if there was a module-level function or perhaps a method of <code>tkinter.Tk</code> which is dedicated to setting the IME preview text's font. I couldn't find anything that looked like it, but I could have missed something of course.</p>
</li>
<li><p>Next, I tried searching on the internet, but I couldn't find much. The closest thing I found was this <a href="https://wiki.tcl-lang.org/page/Common+Questions+about+Tcl%2FTk+and+Japanese+language+support" rel="nofollow noreferrer">page</a> on the Tcler's Wiki talking about how well Tkย 8.3 supports Windows IME (emphasize mine):</p>
<blockquote>
<p>Tk 8.3.4+ has support for Windows IME and Linux XIM. In 8.3, this uses the <strong>root-window style</strong>, but 8.4 has support for "over-the-spot" style IME and XIM. This support was possible with significant assistance from Koichi Yamamoto and Keiichi Takahashi. Koichi has confirmed that the 8.4 support is known to work on Win95 to WinXP using Windows IME, Japanese Win9*, and the ATOK13 IME variants.</p>
</blockquote>
<p>So now I know the IME preview text probably uses the root window style. Assuming that the term means the <code>ttk</code> style for the root window, I looked up the <code>ttk</code> class name of the <code>tkinter.Tk</code> object using <code>print(root.winfo_class())</code>, and found that it is <code>Tk</code>. So I tried to add this to my code:</p>
<pre class="lang-python prettyprint-override"><code>import tkinter.ttk as ttk
style = ttk.Style()
style.configure("Tk", font="input_font")
</code></pre>
<p>But it had no effect, so apparently there's no such option for the <code>Tk</code> style. Also, the <code>Tk</code> object doesn't even have a <code>style</code> configuration option, and after testing, the <code>Tk</code> object doesn't seem to obey any configurations made to the <code>Tk</code> style, so it seems that the <code>tkinter.Tk</code> object is exempt from <code>ttk</code> styling anyway.</p>
</li>
<li><p>Continuing to pursue the clue found on the Tcler's Wiki about the IME preview text following the root window style (assuming I interpreted it correctly, its phrasing was a little unclear), I guessed next that maybe by "root window style" it meant the configuration of the <code>tkinter.Tk</code> object, so I tried adding:</p>
<pre class="lang-python prettyprint-override"><code>root.configure(font=input_font)
</code></pre>
<p>But this raises the exception <code>tkinter.TclError</code> with the message <code>unknown option "-font"</code>, so apparently there's no such option.</p>
</li>
</ol>
<p>So now I'm at a loss for how to change the font size associated with the "root window style", whatever that means. I've also tried skimming through the tk documentation to see if there's a relevant command I could call from Python using <code>root.tk.eval("command")</code>. But since I don't know any tcl (and have also never used the tk package directly), I found the tk documentation hard to understand. So hopefully someone here knows the solution.</p>
<p>I tried to test my MRE in a Linux virtual machine to see if the same problem exists there, but I simply <em>could not</em> get the <code>ibus</code> IME framework to work at all after installing it in Linux Mint (all keypresses aren't caught by the IME even when an input method is active), so I had to give up.</p>
|
<python><windows><tkinter><fonts><ime>
|
2025-07-06 17:12:08
| 1
| 315
|
I Like Python
|
79,691,953
| 11,318,930
|
Ridge Polynomial Regression: How to get parameters for equation found
|
<p>I've used sklearn for polynomial ridge regression, and using grid search I am happy with the results. Now I would like to render the fitted model as a simple polynomial equation to run in a small Python module. The grid search returns the degree and the alpha parameter. The latter just sets the regularization strength for training; the former tells me the maximum degree of the resulting equation. But what are the coefficients of the equation it has found? I expect the equation to be of the form ax^3 + bx^2 + cx + d, so what are a, b, c, d?</p>
<p>Code for grid search pipeline:</p>
<pre><code>from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, PolynomialFeatures
from sklearn.linear_model import Ridge, Lasso, SGDRegressor
.
.
.
# === 4. Polynomial Ridge ===
poly_pipe = Pipeline([
    ('scaler', StandardScaler()),
    ('poly', PolynomialFeatures()),
    ('ridge', Ridge())
])

poly_params = {
    'poly__degree': [2, 3],
    'ridge__alpha': [0.01, 0.1, 1]
}

evaluate_model('Polynomial Ridge', poly_pipe, poly_params, is_linear=False)
</code></pre>
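<p>To make the question concrete: assuming <code>evaluate_model</code> keeps the fitted <code>GridSearchCV</code> object around (called <code>grid</code> below purely as an assumption), this is the kind of extraction I am after. My understanding is that any coefficients recovered this way would be in the scaled feature space because of the <code>StandardScaler</code> step.</p>
<pre><code>best = grid.best_estimator_            # the winning Pipeline
poly = best.named_steps['poly']
ridge = best.named_steps['ridge']

terms = poly.get_feature_names_out()   # e.g. ['1', 'x0', 'x0^2', 'x0^3', ...]
print(dict(zip(terms, ridge.coef_)))   # one coefficient per polynomial term
print('intercept:', ridge.intercept_)
</code></pre>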
|
<python><scikit-learn><regression><polynomial-approximations>
|
2025-07-06 17:06:54
| 1
| 1,287
|
MikeB2019x
|
79,691,896
| 10,634,126
|
Docker config for python/firefox/selenium scraping
|
<p>I have been trying to configure Docker within a repository that will be used for Python web scraping, in some cases with Selenium. (I am using a Firefox webdriver, but I am driver-agnostic; I just want to get this to work.)</p>
<p>I have the following docker-compose file in my root directory:</p>
<pre><code>services:
  scrapers:
    build:
      context: .
      dockerfile: pipeline/dockerfiles/Dockerfile
    container_name: scrapers
    image: scrapers
    volumes:
      - .:/repo
      - ~/.gitconfig:/etc/gitconfig
    command: tail -F anything
</code></pre>
<p>I also have the following Selenium configuration utility for my scrapers:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.firefox.options import Options

def firefox_driver(self):
    options = Options()
    options.add_argument("-headless")
    options.set_preference("browser.download.folderList", 2)
    options.set_preference("browser.download.manager.showWhenStarting", False)
    options.set_preference("browser.download.manager.focusWhenStarting", False)
    options.set_preference(
        "browser.helperApps.neverAsk.saveToDisk",
        "text/csv, application/vnd.openxmlformats-officedocument.spreadsheetml.sheet, application/octet-stream,"
        "application/vnd.ms-excel",
    )
    options.set_preference("browser.helperApps.alwaysAsk.force", False)
    options.set_preference("browser.download.manager.alertOnEXEOpen", False)
    options.set_preference("browser.download.manager.closeWhenDone", True)
    options.set_preference("browser.download.manager.showAlertOnComplete", False)
    options.set_preference("browser.download.manager.useWindow", False)
    options.enable_downloads = True
    # specify download directory for files within cwd
    options.set_preference("browser.download.dir", self.download_dir)
    return webdriver.Firefox(options=options)
</code></pre>
<p>I believe I need to keep the above headless in order for it to operate more simply in a Docker environment.</p>
<p>When I try to set up my actual Dockerfile is when I run into issues.</p>
<p>I have tried to set up an <code>ubuntu:jammy</code> Dockerfile where I <code>wget</code> geckodriver and Firefox, to no avail, and I have tried to set one up using <code>selenium/standalone-firefox</code>. Everything I do leads to errors at some point in the Docker pipeline build, seemingly in different places.</p>
<p>The shell of my Dockerfile looks like this:</p>
<pre><code>FROM [[some image]]
WORKDIR /repo
COPY . /repo
COPY pipeline/requirements.txt pipeline/requirements.txt
ENV DEBIAN_FRONTEND=noninteractive
[[some set up apt-get installs? I need python, pip, git, selenium, my requirements...]]
RUN pip3 install -r pipeline/requirements.txt
CMD tail -F anything
</code></pre>
<p>Does anyone have a tip for getting this to operate correctly?</p>
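<p>One layout I am considering (a sketch; the <code>selenium-firefox</code> service name and URL are assumptions on my part) is to run the browser in its own <code>selenium/standalone-firefox</code> container and connect to it from the scrapers container with <code>webdriver.Remote</code> instead of a local <code>webdriver.Firefox</code>:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.firefox.options import Options

options = Options()
options.add_argument("-headless")
# 4444 is the standalone image's default port; "selenium-firefox" is an assumed
# docker-compose service name reachable from the scrapers container.
driver = webdriver.Remote(
    command_executor="http://selenium-firefox:4444/wd/hub",
    options=options,
)
</code></pre>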
|
<python><docker><selenium-webdriver>
|
2025-07-06 15:50:06
| 0
| 909
|
OJT
|
79,691,830
| 1,886,641
|
cusolverDnDgesvdj for the computation of singular values only under Python
|
<p>I have set up the code below for the computation of the SVD of a matrix. The code uses <code>cuSOLVER</code>'s <code>cusolverDnDgesvdj</code>.</p>
<pre><code>import ctypes
import numpy as np
import pycuda.autoinit # implicit context creation
import pycuda.driver as cuda
# -----------------------------------------------------------------------------
# 0) C types
# -----------------------------------------------------------------------------
c_int = ctypes.c_int
c_double = ctypes.c_double
c_voidp = ctypes.c_void_p
# -----------------------------------------------------------------------------
# 1) Load cuSOLVER and CUDA driver
# -----------------------------------------------------------------------------
cusolver = ctypes.cdll.LoadLibrary("libcusolver.so")
cuda_drv = ctypes.cdll.LoadLibrary("libcudart.so")
# -----------------------------------------------------------------------------
# 2) Declare cuSOLVER / CUDA-driver function prototypes
# -----------------------------------------------------------------------------
# cusolverDnCreate / Destroy
cusolver.cusolverDnCreate.argtypes = [ctypes.POINTER(c_voidp)]
cusolver.cusolverDnCreate.restype = c_int
cusolver.cusolverDnDestroy.argtypes = [c_voidp]
cusolver.cusolverDnDestroy.restype = c_int
# Set stream on handle
cusolver.cusolverDnSetStream.argtypes = [c_voidp, ctypes.c_void_p] # Use c_void_p for stream handle
cusolver.cusolverDnSetStream.restype = c_int
# gesvdjInfo create/destroy
cusolver.cusolverDnCreateGesvdjInfo.argtypes = [ctypes.POINTER(c_voidp)]
cusolver.cusolverDnCreateGesvdjInfo.restype = c_int
cusolver.cusolverDnDestroyGesvdjInfo.argtypes = [c_voidp]
cusolver.cusolverDnDestroyGesvdjInfo.restype = c_int
# set tolerance / max sweeps
cusolver.cusolverDnXgesvdjSetTolerance.argtypes = [c_voidp, c_double]
cusolver.cusolverDnXgesvdjSetTolerance.restype = c_int
cusolver.cusolverDnXgesvdjSetMaxSweeps.argtypes = [c_voidp, c_int]
cusolver.cusolverDnXgesvdjSetMaxSweeps.restype = c_int
# query bufferSize (13 args!)
cusolver.cusolverDnDgesvdj_bufferSize.argtypes = [
c_voidp, # handle
c_int, # jobz
c_int, # econ
c_int, # m
c_int, # n
c_voidp, # A
c_int, # lda
c_voidp, # S
c_voidp, # U
c_int, # ldu
c_voidp, # V
c_int, # ldv
ctypes.POINTER(c_int), # &lwork
c_voidp # gesvdjInfo
]
cusolver.cusolverDnDgesvdj_bufferSize.restype = c_int
# run SVD
cusolver.cusolverDnDgesvdj.argtypes = [
c_voidp, c_int, c_int, c_int, c_int,
c_voidp, c_int, c_voidp, c_voidp, c_int,
c_voidp, c_int, c_voidp, c_int, c_voidp, c_voidp
]
cusolver.cusolverDnDgesvdj.restype = c_int
# get sweeps & residual
cusolver.cusolverDnXgesvdjGetSweeps.argtypes = [c_voidp, c_voidp, ctypes.POINTER(c_int)]
cusolver.cusolverDnXgesvdjGetSweeps.restype = c_int
cusolver.cusolverDnXgesvdjGetResidual.argtypes = [c_voidp, c_voidp, ctypes.POINTER(c_double)]
cusolver.cusolverDnXgesvdjGetResidual.restype = c_int
# -----------------------------------------------------------------------------
# 3) Problem parameters & host data
# -----------------------------------------------------------------------------
m, n = 3, 2
lda, ldu, ldv = m, m, n
minmn = min(m, n)
# A in column-major "matlab" layout:
h_A = np.array([1.0, 4.0, 2.0, # col 0
2.0, 5.0, 1.0], # col 1
dtype=np.float64)
# containers for results
h_U = np.zeros((ldu*m,), dtype=np.float64)
h_V = np.zeros((ldv*n,), dtype=np.float64)
h_S = np.zeros((minmn,), dtype=np.float64)
h_info = np.zeros((1,), dtype=np.int32)
# exact singulars for later check
h_S_exact = np.array([7.065283497082729, 1.040081297712078], dtype=np.float64)
# -----------------------------------------------------------------------------
# 4) Create handle, stream, gesvdjInfo
# -----------------------------------------------------------------------------
handle = c_voidp()
assert cusolver.cusolverDnCreate(ctypes.byref(handle)) == 0
# Create a PyCUDA stream
stream = cuda.Stream()
# Pass the underlying CUDA stream handle to cusolver
cusolver.cusolverDnSetStream(handle, ctypes.c_void_p(stream.handle))
params = c_voidp()
assert cusolver.cusolverDnCreateGesvdjInfo(ctypes.byref(params)) == 0
# tune
tol = c_double(1e-7)
sweeps = c_int(15)
cusolver.cusolverDnXgesvdjSetTolerance(params, tol)
cusolver.cusolverDnXgesvdjSetMaxSweeps(params, sweeps)
# -----------------------------------------------------------------------------
# 5) Allocate + copy device memory
# -----------------------------------------------------------------------------
d_A = cuda.mem_alloc(h_A.nbytes)
d_S = cuda.mem_alloc(h_S.nbytes)
d_U = cuda.mem_alloc(h_U.nbytes)
d_V = cuda.mem_alloc(h_V.nbytes)
d_info = cuda.mem_alloc(h_info.nbytes)
# Use the PyCUDA stream object in PyCUDA functions
cuda.memcpy_htod_async(d_A, h_A, stream)
# (we'll copy results back later)
# -----------------------------------------------------------------------------
# 6) Query workspace size
# -----------------------------------------------------------------------------
lwork = c_int()
status = cusolver.cusolverDnDgesvdj_bufferSize(
handle,
1, # jobz = CUSOLVER_EIG_MODE_VECTOR
0, # econ = full
m, n,
int(d_A), lda, # Cast d_A to int
int(d_S), # Cast d_S to int
int(d_U), ldu, # Cast d_U to int
int(d_V), ldv, # Cast d_V to int
ctypes.byref(lwork),
params
)
assert status == 0
d_work = cuda.mem_alloc(lwork.value * ctypes.sizeof(c_double))
# -----------------------------------------------------------------------------
# 7) Run batched SVD (batch = 1 here)
# -----------------------------------------------------------------------------
status = cusolver.cusolverDnDgesvdj(
handle,
1, # jobz
0, # econ
m, n,
int(d_A), lda, # Cast d_A to int
int(d_S), # Cast d_S to int
int(d_U), ldu, # Cast d_U to int
int(d_V), ldv, # Cast d_V to int
int(d_work), lwork, # Cast d_work to int
int(d_info), # Cast d_info to int
params
)
assert status == 0
# -----------------------------------------------------------------------------
# 8) Copy results back to host & sync
# -----------------------------------------------------------------------------
# Use the PyCUDA stream object in PyCUDA functions
cuda.memcpy_dtoh_async(h_U, d_U, stream)
cuda.memcpy_dtoh_async(h_V, d_V, stream)
cuda.memcpy_dtoh_async(h_S, d_S, stream)
cuda.memcpy_dtoh_async(h_info, d_info, stream)
# Synchronize the PyCUDA stream
stream.synchronize()
print("info =", int(h_info[0]))
print("S =", h_S)
print("S_exact=", h_S_exact)
# get internal stats
sweeps_out = c_int()
resid_out = c_double()
cusolver.cusolverDnXgesvdjGetSweeps(handle, params, ctypes.byref(sweeps_out))
cusolver.cusolverDnXgesvdjGetResidual(handle, params, ctypes.byref(resid_out))
print("executed sweeps =", sweeps_out.value)
print("residual =", resid_out.value)
# -----------------------------------------------------------------------------
# 9) Cleanup
# -----------------------------------------------------------------------------
# cuda_drv.cudaStreamDestroy(stream) # No need to destroy PyCUDA stream via ctypes
cusolver.cusolverDnDestroyGesvdjInfo(params)
cusolver.cusolverDnDestroy(handle)
</code></pre>
<p>The code works.</p>
<p>Now, if I change the second parameter of <code>cusolver.cusolverDnDgesvdj_bufferSize</code> and <code>cusolver.cusolverDnDgesvdj</code> to <code>0</code> to compute the singular values only, the code still works.</p>
<p>However, if I try to remove the pointers to <code>d_U</code> and <code>d_V</code>, the code does not work anymore. I have tried replacing the pointers with <code>0</code> and with <code>None</code>, obtaining a different <code>status</code> error each time.</p>
<p>How should I consistently modify the calls to <code>cusolver.cusolverDnDgesvdj_bufferSize</code> and <code>cusolver.cusolverDnDgesvdj</code> to avoid the need of defining <code>d_U</code> and <code>d_V</code>?</p>
<p>I'm running the code under Google Colab.</p>
|
<python><cuda><svd><pycuda><cusolver>
|
2025-07-06 14:22:37
| 0
| 21,685
|
Vitality
|
79,691,774
| 4,041,117
|
Efficient way creating a dict of dict from a pandas dataframe
|
<p>I have a pandas dataframe of the following structure:</p>
<pre><code>d = {'I': ['A', 'B', 'C', 'D'], 'X': [ 1, 0, 3, 1], 'Y': [0, 1, 2, 1], 'Z': [1, 0, 0, 0], 'W': [3, 2, 0, 0]}
df = pd.DataFrame(data=d, columns=['I','X', 'Y', 'Z', 'W'])
df.set_index('I', inplace=True, drop=True)
</code></pre>
<p>I need to create a dict of dict to get data of all existing edges (indicated by nonzero values) between nodes:</p>
<pre><code>{'A': {'X': {1}, 'Z': {1}, 'W': {3}}, 'B': {'Y': {1}, 'W': {2}}, 'C': {'X': {3}, 'Y': {2}}, 'D': {'Y': {1}, 'X': {1}}}
</code></pre>
<p>I need it to create a network graph using the NetworkX library and perform some calculations on it. Obviously it would be possible to loop over every cell in the dataframe to do this, but my data is quite large and that would be inefficient. I'm looking for a better way, possibly using vectorization and/or a comprehension. I've tried a list comprehension, but I'm stuck and cannot make it work. Can anyone suggest a more efficient way to do this, please?</p>
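<p>A sketch of the kind of approach I am imagining (stack the frame, drop the zeros, then group by row label); I have not verified that it is the most efficient option:</p>
<pre><code>stacked = df.stack()
nonzero = stacked[stacked != 0]
edges = {
    row: {col: {int(v)} for col, v in grp.droplevel(0).items()}
    for row, grp in nonzero.groupby(level=0)
}
# edges == {'A': {'X': {1}, 'Z': {1}, 'W': {3}}, 'B': {'Y': {1}, 'W': {2}}, ...}
</code></pre>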
|
<python><pandas><dataframe><dictionary><networkx>
|
2025-07-06 13:16:46
| 3
| 481
|
carpediem
|
79,691,688
| 954,643
|
mypy linter error (valid-type) with pydantic's Annotated pattern in a generic
|
<p>Why would the following give a linter error in the second case but not the first:</p>
<pre><code>from collections.abc import Sequence
from typing import Annotated

from pydantic import Field

# OK:
type MyAnnotatedType = Annotated[int | None, Field(strict=True)]

# Error: Invalid type alias: expression is not a valid type  mypy(valid-type)
type MyAnnotatedSeqType = Sequence[Annotated[int | None, Field(strict=True)]]

# OK again:
type MyAnnotatedSeqTypeOK = Sequence[MyAnnotatedType]
</code></pre>
<p>Is this a limitation specified in PEP 593 somewhere, or more of a simplification in mypy to avoid more complex type inference, possibly requiring handling metadata at runtime?</p>
|
<python><python-typing><mypy><pydantic><type-alias>
|
2025-07-06 11:00:51
| 1
| 8,064
|
qix
|
79,691,609
| 1,629,615
|
Jupyter breaks most recent Mayavi 4.8.3
|
<p>For me, Jupyter(-lab) fails to work with the most recent version of Mayavi 4.8.3.<br />
Other Python interpreter software (like Spyder, ipython, ...) works well.<br />
For me, this can be boiled down to a bare-bones test environment, as described next.</p>
<p><strong>1)</strong> Create a fresh test environment: <code>~> conda create --name tes</code> (this runs on Python 3.13.5 for me, but the issue persists for other versions of Python). Activate it: <code>~> conda activate tes</code>. Then
<pre><code>conda install jupyter
conda install mayavi
conda install ipyevents
conda install itkwidgets
conda install mesalib
conda install puremagic
</code></pre>
<p>will install a bunch of dependencies.</p>
<p><strong>2)</strong> In test environment, open <code>(tes)~> jupyter-lab</code>. Then do</p>
<pre><code>[1]: from mayavi import mlab
mlab.init_notebook()
Output is: Notebook initialized with ipy backend.
[2]: mlab.figure()
</code></pre>
<p>The expected output is a Mayavi figure canvas.<br />
<em>Instead: The jupyter-lab kernel dies and restarts with no indication why.</em></p>
<p><strong>3)</strong> For sanity, open another interpreter in the same environment, e.g., ipython by, e.g., <code>(tes)~> terminator --command="ipython --pylab"</code>. Then do</p>
<pre><code>In [1]: from mayavi import mlab
In [2]: mlab.figure()
</code></pre>
<p>This generates a Mayavi canvas window as expected. Same for Spyder.</p>
<p>Conclusion: Jupyter breaks the most recent Mayavi 4.8.3.<br />
Any suggestions as to what is wrong?</p>
|
<python><conda><jupyter-lab><mayavi>
|
2025-07-06 08:56:24
| 0
| 659
|
Mark
|
79,691,573
| 11,495,811
|
How to pass unpacking as an input parameter to initialize a member class object in Python?
|
<p>For example, in the following code I want to pass 6 parameters by unpacking <code>*num</code> to initialize a1, a2, a3 in class A and b1, b2, b3 in class B. However, there is an error because A only has 4 fields. How do I pass 6 parameters to initialize A, using the first 3 to initialize a1, a2, a3 and the last 3 to initialize b1, b2, b3 in the member object b?</p>
<p>The background of the question is that I want to use <code>vars(self)</code> to extract all member data of the subclasses of the Base class, as in the get_data method, so that I don't have to spell out the member names every time. Otherwise there is too much repetitive code, such as <code>return [self.b1, self.b2, self.b3]</code>, because writing each subclass that way requires hand-written variable names.</p>
<p>The example code is as follows. When executing <code>A(*num)</code>, an error message appears indicating that there are too many arguments. A sketch of one workaround follows after the code.</p>
<pre><code>from dataclasses import dataclass

@dataclass
class Base():
    def __str__(self):
        data = ','.join([str(x) for x in vars(self).values()])
        return data

    def get_data(self):
        return ','.join([str(x) for x in vars(self).values()])

@dataclass
class B(Base):
    b1: int = 0
    b2: int = 0
    b3: int = 0

@dataclass
class A(Base):
    a1: int = 0
    a2: int = 0
    a3: int = 0
    b: B = None

num = [1,2,3,4,5,6]
my_data = A(*num)
</code></pre>
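<p>To make the intended behaviour concrete, here is a minimal sketch of roughly what I am hoping to achieve; the <code>from_flat</code> classmethod name is just something I made up for illustration:</p>
<pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass

@dataclass
class B:
    b1: int = 0
    b2: int = 0
    b3: int = 0

@dataclass
class A:
    a1: int = 0
    a2: int = 0
    a3: int = 0
    b: B = None

    @classmethod
    def from_flat(cls, *values):
        # first three values initialize A itself, the rest go to the nested B
        return cls(*values[:3], b=B(*values[3:]))

num = [1, 2, 3, 4, 5, 6]
my_data = A.from_flat(*num)
print(my_data)  # A(a1=1, a2=2, a3=3, b=B(b1=4, b2=5, b3=6))
</code></pre>
<p>Ideally, though, I would like something generic on the Base class so that each subclass does not need its own hand-written splitting logic.</p>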
|
<python><argument-unpacking>
|
2025-07-06 07:42:06
| 2
| 355
|
Edward
|
79,691,197
| 4,528,716
|
How to refresh host mounts in a Flatpak environment?
|
<p>I'm working on a Flatpak Python app that also needs to interact with some host filesystem elements. While the application is running, another process running outside of the Flatpak environment creates new mounts - overlayfs and squashfs mounts. The problem is that the Flatpak application doesn't detect these mounts until it's restarted, which I cannot do. It goes like this:</p>
<pre><code>1. Flatpak app client is started
2. Another process mounts overlayfs in some directory
3. Flatpak should be able to access the contents of this mounting with --filesystem=/var/tmp/my_app, and the mount point itself is for example /var/tmp/my_app/random_id, but it doesn't see the contents.
</code></pre>
<p>Unfortunately, when dealing with mounts it does not detect the contents of this new directory. But it is able to access it with no problem after the client is started again.</p>
<p>I also checked the contents of /proc/mounts reported inside the Flatpak environment, and it doesn't show this new mount until the app is restarted.</p>
<p>Interestingly, this works fine on Fedora, but I have this issue on Gentoo.</p>
<p>I was wondering what components are responsible for this behavior and whether there is anything I can do to overcome it. Both Fedora and Gentoo use the same Flatpak version - 1.16.1.</p>
|
<python><linux><gtk><flatpak>
|
2025-07-05 16:56:21
| 0
| 2,800
|
Damian Dudycz
|
79,691,109
| 123,054
|
calculate diff to previous matching row in a dataframe
|
<p>I have a series of timestamps (+ other data) that come from 2 separate streams of data ticking at different rates, an example below (NB: the frequency of the real data has some jitter so it's not a simple fixed stride like below)</p>
<pre><code>src,idx,ts
B,1,20
A,1,100
A,2,200
A,3,300
B,2,320
A,4,400
A,5,500
A,6,600
B,3,620
</code></pre>
<p>for each A tick, I need to calculate the offset from the preceding B tick so it would become</p>
<pre><code>src,idx,ts
A,1,80
A,2,180
A,3,280
A,4,80
A,5,180
A,6,280
</code></pre>
<p>How can I do this in pandas without iteration?</p>
<p>I thought of some sort of rolling window but with a dynamic/criteria based window or some hybrid of merge_asof and group by but can't think of a way to do it.</p>
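<p>For reference, here is a rough, untested sketch of the <code>merge_asof</code> direction I was thinking about; I am not sure whether this is the right pattern or whether it handles the jitter in the real data correctly:</p>
<pre class="lang-py prettyprint-override"><code>import io
import pandas as pd

csv = """src,idx,ts
B,1,20
A,1,100
A,2,200
A,3,300
B,2,320
A,4,400
A,5,500
A,6,600
B,3,620
"""
df = pd.read_csv(io.StringIO(csv))

a = df[df.src == "A"]
b = df[df.src == "B"][["ts"]].rename(columns={"ts": "b_ts"})

# attach to each A tick the most recent preceding B tick, then take the offset
out = pd.merge_asof(a, b, left_on="ts", right_on="b_ts", direction="backward")
out["offset"] = out["ts"] - out["b_ts"]
print(out[["src", "idx", "offset"]])
</code></pre>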
|
<python><pandas><diff>
|
2025-07-05 14:36:42
| 3
| 8,600
|
Matt
|
79,690,825
| 2,819,689
|
How is value initialized in this line of Python code?
|
<p>I am trying to refactor some code (for deploying on a new infrastructure).</p>
<p>Quite a big class</p>
<pre><code>class GithubOrganization(dict):
def __init__(self, organization_dict, rest_session):
self.rest_session = rest_session
super().__init__(organization_dict)
</code></pre>
<p>and this is the line of the code that I do not understand:</p>
<pre><code>self.github_organization: GithubOrganization = self.github_session.organizations.get_org(self.login)
</code></pre>
<p>where <code>login</code> and <code>github_session</code> were initialized before. What does</p>
<pre><code>self.github_organization: GithubOrganization
</code></pre>
<p>do here? What is this part of the statement for?</p>
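<p>My current guess is that the <code>: GithubOrganization</code> part is just a variable annotation (a type hint) with no runtime effect on the assigned value, roughly as in the toy example below, but I would like to confirm that:</p>
<pre class="lang-py prettyprint-override"><code># annotated assignment: the ": int" is only a hint for readers and type checkers
count: int = 5
print(count)            # 5 -- the annotation changed nothing at runtime
print(__annotations__)  # {'count': <class 'int'>}
</code></pre>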
|
<python>
|
2025-07-05 06:00:17
| 1
| 2,874
|
MikiBelavista
|
79,690,793
| 2,311,946
|
pip install using a separate build machine
|
<p>I have two same-architecture Linux boxes running identical versions of pypy, installed via pyenv. Consider one a "production" machine and one a "build" machine, such that the production machine cannot easily collect and configure all required build dependencies for modern source packages. I need to install a package on the production machine for which PyPI provides a source distribution but no pre-built wheel.</p>
<pre><code>python -m pip install example-package
</code></pre>
<p>works fine on the build machine, but some of the build dependencies are impractical to get/configure/make/install on the production machine. Is there a convenient method to migrate the python packages pip built from PyPI onto another computer?</p>
|
<python><pip><python-wheel>
|
2025-07-05 04:27:46
| 1
| 543
|
mysteryegg
|
79,690,531
| 551,046
|
How to access documentation of libraries like numpy, scipy etc. via pydoc offline on my workstation
|
<p>Every time I want to access the documentation of libraries that I need for my project, I have to go to their websites. Python has the excellent utility called <code>pydoc</code>, but I can only access the documentation of the Python standard library functions in <code>pydoc</code>. How do I store/generate/get the documentation for any Python library locally and access it via <code>pydoc</code>?</p>
<p>Note: I am not talking about generating the documentation for my own project using pydoc, but accessing the documentation of other projects via the <code>pydoc</code> command as such</p>
<pre class="lang-bash prettyprint-override"><code>pydoc os
pydoc str
pydoc re
pydoc re.search
pydoc re.match
</code></pre>
<p>Essentially I want to be able to do</p>
<pre class="lang-bash prettyprint-override"><code>pydoc numpy.pi
pydoc numpy.linspace
</code></pre>
|
<python><numpy><scipy><pydoc>
|
2025-07-04 17:40:05
| 2
| 306
|
ksinkar
|
79,690,515
| 1,999,728
|
`AutoReg`'s model roots almost match the learned parameters but do not match the parameters of the `ArmaProcess` generating it
|
<p>Minimal working example</p>
<pre><code>from statsmodels.tsa.arima_process import ArmaProcess
from statsmodels.tsa.ar_model import AutoReg
import numpy as np
roots= 1/np.array([0.1, 0.5, -0.3, 0.9, 0.18, 0.1+0.3j, 0.1-0.3j, -0.5+0.1j, -0.5-0.1j,-0.8])
proc = ArmaProcess.from_roots([],roots)
y = proc.generate_sample(150)
model = AutoReg(y,10).fit()
print(np.poly(model.roots))
print(np.flip(model.params)/model.params[-1])
print(np.poly(roots)/np.poly(roots)[0])
</code></pre>
<p>Two things I find odd here:
the very last coefficient returned from <code>np.poly(model.roots)</code> does not match the very last coefficient of <code>np.flip(model.params)/model.params[-1]</code>.</p>
<p>And worst of all, the fitted <code>AutoReg</code> model's coefficients are nowhere near the actual coefficients...</p>
<p>Any idea what's going on?</p>
|
<python><autoregressive-models>
|
2025-07-04 17:22:49
| 0
| 913
|
user1999728
|
79,690,330
| 3,768,871
|
How to have a line break in the math mode text in plotly?
|
<p>Suppose you would like to have a line break in math mode text in Plotly. The following solutions have been tried, but none of them works, each for a different reason:</p>
<pre class="lang-py prettyprint-override"><code>import plotly.graph_objects as go
fig = go.Figure()
fig.add_trace(go.Box(y=[10, 14]))
fig.update_layout(xaxis_title="$\alpha \\ \beta$") # causes to_image error
# fig.update_layout(xaxis_title=r"$\alpha \\ \beta$") # no breaks between math shape of alpha and beta
# fig.update_layout(xaxis_title="$\\alpha$<br>$\\beta$") # only shows math shape of alpha and no beta at all!
# fig.update_layout(xaxis_title="$\\alpha \\\\ \\beta$") # # no breaks between math shape of alpha and beta
# fig.update_layout(xaxis_title="$$\\alpha \\\\ \\beta$$") # no breaks between math shape of alpha and beta
# fig.update_layout(xaxis_title="$$\\alpha$$ <br> $$\\beta$$") # # only shows math shape of alpha and no beta at all!
fig.show()
fig.write_image("this_image.pdf")
</code></pre>
<p>So, the question is: how can a line break be set in math mode text?</p>
|
<python><plotly><kaleido>
|
2025-07-04 14:19:16
| 1
| 19,015
|
OmG
|
79,690,292
| 6,930,340
|
How to assign an RGB background color to a Quarto dashboard valuebox?
|
<p>Consider the following quarto dashboard:</p>
<pre><code>---
title: "Test"
format: dashboard
---
```{python}
#| content: valuebox
#| title: "Test Box"
dict(value=10, color="red")
```
</code></pre>
<p><a href="https://i.sstatic.net/it67vJwj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/it67vJwj.png" alt="enter image description here" /></a></p>
<p>How can I define the background color of the <code>valuebox</code> dynamically using RGB colors? I am probably missing the correct syntax.</p>
<p>Doing it like this doesn't lead to the expected result:</p>
<pre><code>---
title: "Test"
format: dashboard
---
```{python}
#| content: valuebox
#| title: "Test Box"
dict(value=10, color="background-color: rgb(255,0,0)")
```
</code></pre>
<p><a href="https://i.sstatic.net/V0wQ7yht.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/V0wQ7yht.png" alt="enter image description here" /></a></p>
|
<python><quarto>
|
2025-07-04 13:51:08
| 2
| 5,167
|
Andi
|
79,690,269
| 4,653,485
|
Strange result when powering pandas integer series
|
<p>The result when powering a pandas integer Series seems wrong.</p>
<pre><code># Standard Python
42**42
# 150130937545296572356771972164254457814047970568738777235893533016064
# Pandas series, float dtype
s = pd.Series([12, 42], index=range(2), dtype=float)
s**42
# 0 2.116471e+45
# 1 1.501309e+68
# dtype: float64
# Pandas series, integer dtype
s = pd.Series([12, 42], index=range(2), dtype=int)
s**42
# 0 0
# 1 4121466560160202752
# dtype: int64
</code></pre>
<p>How come?</p>
|
<python><pandas>
|
2025-07-04 13:32:51
| 2
| 14,916
|
Jérôme
|
79,690,252
| 458,742
|
ctrl-C to terminate a pyqt application with a QOpenGLWidget
|
<p>My main file for a pyqt application looks like this:</p>
<pre><code>#!/usr/bin/env python3
from PyQt5.QtWidgets import QApplication
import signal
import sys
from main_window import MainWindow
if __name__ == "__main__":
signal.signal(signal.SIGINT, signal.SIG_DFL)
app = QApplication([])
window = MainWindow()
window.show()
app.exec()
</code></pre>
<p>It works, and when I ctrl-c in the terminal (on Ubuntu) the application will terminate...</p>
<p>...unless it creates a QOpenGLWidget, in which case ctrl-C does nothing.</p>
<p>I tried this</p>
<pre><code>def signal_handler(signum, frame):
"""Handle SIGINT signal"""
print("\nReceived SIGINT, closing application...")
QApplication.instance().quit()
if __name__ == "__main__":
signal.signal(signal.SIGINT, signal_handler)
...
</code></pre>
<p>but <code>signal_handler</code> is never called.</p>
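<p>One workaround I was considering (but have not been able to verify with a QOpenGLWidget) is a periodic no-op timer so the Python interpreter regularly gets control back and can deliver SIGINT; a sketch of what I mean:</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3
import signal
import sys

from PyQt5.QtCore import QTimer
from PyQt5.QtWidgets import QApplication, QOpenGLWidget

if __name__ == "__main__":
    signal.signal(signal.SIGINT, signal.SIG_DFL)
    app = QApplication(sys.argv)

    window = QOpenGLWidget()
    window.show()

    # periodic no-op so the interpreter regains control and can handle SIGINT
    timer = QTimer()
    timer.timeout.connect(lambda: None)
    timer.start(200)

    app.exec()
</code></pre>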
<p>How can I terminate a pyqt application using QOpenGLWidget using ctrl-C from the terminal?</p>
|
<python><linux><pyqt><signals><qtopengl>
|
2025-07-04 13:20:13
| 1
| 33,709
|
spraff
|
79,690,071
| 4,379,593
|
How to kill a thread in Python which is stuck in a blocking call
|
<p>I tried to cancel a thread with the C function <code>pthread_cancel()</code>. I also checked canceltype and cancelstate: they have the default values <code>PTHREAD_CANCEL_DEFERRED</code> and <code>PTHREAD_CANCEL_ENABLE</code>.</p>
<pre class="lang-py prettyprint-override"><code>import threading
import ctypes
from time import sleep
import os
import sys
libc = ctypes.CDLL("libc.so.6")
pthread_t = ctypes.c_ulong
PTHREAD_CANCEL_DEFERRED = 0
PTHREAD_CANCEL_ASYNCHRONOUS = 1
canceltype = ctypes.c_int
pthread_setcanceltype = libc.pthread_setcanceltype
pthread_setcanceltype.argtypes = [ctypes.c_int, ctypes.POINTER(canceltype)]
pthread_setcanceltype.restype = ctypes.c_int
PTHREAD_CANCEL_ENABLE = 0
PTHREAD_CANCEL_DISABLE = 1
cancelstate = ctypes.c_int
pthread_setcancelstate = libc.pthread_setcancelstate
pthread_setcancelstate.argtypes = [ctypes.c_int, ctypes.POINTER(cancelstate)]
pthread_setcancelstate.restype = ctypes.c_int
pthread_cancel = libc.pthread_cancel
pthread_cancel.argtypes = [pthread_t]
pthread_cancel.restype = ctypes.c_int
def terminate_thread(thread):
if not thread.is_alive():
return
#exc = ctypes.py_object(SystemExit)
#res = ctypes.pythonapi.PyThreadState_SetAsyncExc(ctypes.c_long(thread.ident), exc)
res = pthread_cancel(thread.ident)
#if res == 0:
# raise ValueError("Can't stop thread.")
if res != 0:
#ctypes.pythonapi.PyThreadState_SetAsyncExc(thread.ident, None)
raise SystemError("Error while stopping thread:",res)
def thread2():
prev_type = canceltype()
ret = pthread_setcanceltype(PTHREAD_CANCEL_DEFERRED, ctypes.byref(prev_type))
if ret != 0:
err = ctypes.get_errno()
print(f"pthread_setcanceltype failed: errno={err}")
return
print('prev_type=',prev_type)
ret = pthread_setcancelstate(PTHREAD_CANCEL_ENABLE, ctypes.byref(prev_type))
if ret != 0:
err = ctypes.get_errno()
print(f"pthread_setcancelstate failed: errno={err}")
return
print('prev_state=',prev_type)
print("thread2 start reading")
#input()
sleep(1000) # for sleep() and for input() it behaves the same
print("thread2 ends")
thr2 = threading.Thread(target=thread2)
thr2.start()
sleep(2)
print("killing thr2")
terminate_thread(thr2)
thr2.join()
print("main exit")
</code></pre>
<p>Why does this code get stuck in thr2.join(), with this output:</p>
<pre><code>prev_type= c_int(0)
prev_state= c_int(0)
thread2 start reading
killing thr2
</code></pre>
<p>But this code:</p>
<pre class="lang-c prettyprint-override"><code>#include <stdio.h>
#include <pthread.h>
#include <unistd.h>
void * thread2(void *){
printf("thread2 start reading\n");
getchar();
printf("thread2 ends\n");
}
int main(){
printf("PTHREAD_CANCEL_DEFERRED = %d\nPTHREAD_CANCEL_ASYNCHRONOUS = %d\nPTHREAD_CANCEL_ENABLE = %d\nPTHREAD_CANCEL_DISABLE = %d\n",
PTHREAD_CANCEL_DEFERRED,PTHREAD_CANCEL_ASYNCHRONOUS,PTHREAD_CANCEL_ENABLE,PTHREAD_CANCEL_DISABLE);
pthread_t thr2;
pthread_create(&thr2, NULL, thread2, NULL);
sleep(2);
printf("killing thr2\n");
pthread_cancel(thr2);
pthread_join(thr2, NULL);
printf("main exit\n");
}
</code></pre>
<p>handles the cancellation of the thread normally? Its output is:</p>
<pre><code>PTHREAD_CANCEL_DEFERRED = 0
PTHREAD_CANCEL_ASYNCHRONOUS = 1
PTHREAD_CANCEL_ENABLE = 0
PTHREAD_CANCEL_DISABLE = 1
thread2 start reading
killing thr2
main exit
</code></pre>
<p><code>sleep()</code>, <code>read()</code> - is <a href="https://man7.org/linux/man-pages/man7/pthreads.7.html" rel="nofollow noreferrer">cancellation points</a></p>
<p>Even if I set <code>PTHREAD_CANCEL_ASYNCHRONOUS</code> at the beginning of the thread and remove <code>thr2.join()</code> at the end of the program, the program continues to hang.</p>
<p><strong>UPD</strong></p>
<p>I found the error: I should use <code>thread.native_id</code> instead of <code>thread.ident</code>. As a result I get a segmentation fault, as expected.</p>
|
<python><c><multithreading>
|
2025-07-04 10:53:44
| 2
| 373
|
Филя Усков
|
79,690,013
| 13,087,048
|
Automate QGIS v.kernel.rast across multiple nested folders with Python
|
<p>I'm using QGIS 3.40.8 and need to automate kernel density calculations across a nested folder structure. I don't know Python - the code below was created by an LLM based on my QGIS log output from running <code>v.kernel.rast</code> manually in the GUI.</p>
<p>Current working code (single folder):</p>
<pre class="lang-py prettyprint-override"><code>import processing
import os
from qgis.core import QgsRasterLayer
# === Inputs ===
point_layer = 'main_folder/manchester/2018/01/poi.shp'
reference_raster = 'main_folder/manchester/2018/01/lc.tif'
output_dir = 'main_folder/manchester/2018/01/'
# === Bandwidths to test ===
bandwidths = [50, 100, 150, 200]
# === Extract parameters from reference raster ===
print("Extracting parameters from reference raster...")
ref_layer = QgsRasterLayer(reference_raster, "reference")
if not ref_layer.isValid():
print(f"ERROR: Could not load reference raster: {reference_raster}")
exit()
# Get extent
extent = ref_layer.extent()
region_extent = f"{extent.xMinimum()},{extent.xMaximum()},{extent.yMinimum()},{extent.yMaximum()} [EPSG:{ref_layer.crs().postgisSrid()}]"
# Get pixel size
pixel_size = ref_layer.rasterUnitsPerPixelX()
print(f"Extracted region extent: {region_extent}")
print(f"Extracted pixel size: {pixel_size}")
# === Kernel density loop ===
for radius in bandwidths:
output_path = os.path.join(output_dir, f'kernel_bw_{radius}.tif')
print(f"Processing bandwidth: {radius}...")
processing.run("grass7:v.kernel.rast", {
'input': point_layer,
'radius': radius,
'kernel': 5, # Gaussian
'multiplier': 1,
'output': output_path,
'GRASS_REGION_PARAMETER': region_extent,
'GRASS_REGION_CELLSIZE_PARAMETER': pixel_size,
'GRASS_RASTER_FORMAT_OPT': 'TFW=YES,COMPRESS=LZW',
'GRASS_RASTER_FORMAT_META': ''
})
print("All kernel rasters created.")
</code></pre>
<p>Folder structure:</p>
<pre><code>main_folder/
โโโ city (e.g., rome)/
โ โโโ year (e.g., 2018)/
โ โ โโโ month (e.g., 11)/
โ โ โ โโโ poi.shp
โ โ โ โโโ lc.tif
โ โ โโโ 04/
โ โ โโโ poi.shp
โ โ โโโ lc.tif
โ โโโ 2019/
โ โโโ 11/
โ โโโ poi.shp
โ โโโ lc.tif
โโโ london/
โโโ 2021/
โโโ 03/
โโโ poi.shp
โโโ lc.tif
</code></pre>
<p>What I need:</p>
<ul>
<li>Loop through all monthly folders following the pattern: <code>main_folder/city/year/month/</code></li>
<li>Skip folders that don't contain <code>poi.shp</code></li>
<li>Run kernel density analysis for each valid monthly folder</li>
<li>Save output rasters in the same monthly folder where <code>poi.shp</code> is located</li>
<li>Files are consistently named: <code>poi.shp</code> (points) and <code>lc.tif</code> (reference raster)</li>
</ul>
<p>How can I modify this code to automatically iterate through the entire nested folder structure?</p>
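<p>To be concrete, this is roughly the directory-walking skeleton I imagine wrapping around the working code above (untested; it only shows how I would locate each monthly folder):</p>
<pre class="lang-py prettyprint-override"><code>import os

main_folder = 'main_folder'

for dirpath, dirnames, filenames in os.walk(main_folder):
    # only folders that contain both required files are valid monthly folders
    if 'poi.shp' in filenames and 'lc.tif' in filenames:
        point_layer = os.path.join(dirpath, 'poi.shp')
        reference_raster = os.path.join(dirpath, 'lc.tif')
        output_dir = dirpath
        print(f"Would process: {point_layer} with {reference_raster}")
        # ...then run the bandwidth loop from the script above for this folder
</code></pre>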
|
<python><for-loop><automation><qgis>
|
2025-07-04 10:08:06
| 0
| 403
|
Nikos
|
79,689,943
| 1,826,066
|
Compute group-wise residual for polars data frame
|
<p>I am in a situation where I have a data frame with <code>X</code> and <code>Y</code> values as well as two groups <code>GROUP1</code> and <code>GROUP2</code>. Looping over both of the groups, I want to fit a linear model against the <code>X</code> and <code>Y</code> data and then subtract the fit from the true data to get a residual.</p>
<p>I'm currently implementing this in the following way:</p>
<pre><code>import polars as pl
import numpy as np
# --- Sample DataFrame for demonstration purposes
df = pl.DataFrame(
{
"GROUP1": [1, 1, 1, 2, 2, 2],
"GROUP2": ["A", "A", "A", "B", "B", "B"],
"X": [0.0, 1.0, 2.0, 0.0, 1.0, 2.0],
"Y": [5.0, 7.0, 9.0, 3.0, 4.0, 6.0],
}
)
# --- Function to subtract linear best fit per group
def subtract_linear_best_fit(df: pl.DataFrame) -> pl.DataFrame:
result = []
for _, subdf in df.group_by(["GROUP1", "GROUP2"]):
x = subdf["X"].to_numpy()
y = subdf["Y"].to_numpy()
a, b = np.polyfit(x, y, 1)
residuals = y - (a * x + b)
result.append(subdf.with_columns(pl.Series("residual", residuals)))
return pl.concat(result)
# --- Apply function
df_with_residuals = subtract_linear_best_fit(df)
print(df_with_residuals)
</code></pre>
<p>But this does not seem nice as it does not make use of <code>.group_by(...).agg(...)</code> or <code>.with_columns((...).over(...))</code>.
I tried both these approaches but I either lost columns from the original data frame or just computed a summary. But I want to have a data frame of the same height, just with one more column.</p>
<p>Is there any way to avoid concatenating data frames inside the loop? Ideally there would be something like <code>.group_by().pipe()</code> or <code>.pipe().over()</code>.</p>
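<p>For illustration, the shape of API I am hoping for is something like the sketch below, building on the sample data frame above; I am not sure whether <code>map_groups</code> is considered idiomatic either, and I have not benchmarked it:</p>
<pre class="lang-py prettyprint-override"><code>def add_residual(subdf: pl.DataFrame) -> pl.DataFrame:
    # fit per group, then attach the residual as a new column
    a, b = np.polyfit(subdf["X"].to_numpy(), subdf["Y"].to_numpy(), 1)
    return subdf.with_columns(
        (pl.col("Y") - (a * pl.col("X") + b)).alias("residual")
    )

df_with_residuals = df.group_by(["GROUP1", "GROUP2"]).map_groups(add_residual)
</code></pre>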
|
<python><dataframe><window-functions><python-polars>
|
2025-07-04 09:22:52
| 2
| 1,351
|
Thomas
|
79,689,885
| 3,668,129
|
Error while installing whisper-cpp-python
|
<p>I'm running on:</p>
<pre><code>Ubuntu 22.04.5 LTS
python 3.10.12
cmake version 4.0.3
</code></pre>
<p>I tried to install a library with the following command:</p>
<pre class="lang-bash prettyprint-override"><code>pip install whisper-cpp-python
</code></pre>
<p>but received an error:</p>
<pre><code>Building wheels for collected packages: whisper-cpp-python
Building wheel for whisper-cpp-python (pyproject.toml) ... error
error: subprocess-exited-with-error
ร Building wheel for whisper-cpp-python (pyproject.toml) did not run successfully.
โ exit code: 1
โฐโ> [91 lines of output]
--------------------------------------------------------------------------------
-- Trying 'Ninja' generator
--------------------------------
---------------------------
----------------------
-----------------
------------
-------
--
CMake Deprecation Warning at CMakeLists.txt:1 (cmake_minimum_required):
Compatibility with CMake < 3.10 will be removed from a future version of
CMake.
Update the VERSION argument <min> value. Or, use the <min>...<max> syntax
to tell CMake that the project requires at least <min> but has been updated
to work with policies introduced by <max> or earlier.
Not searching for unused variables given on the command line.
-- The C compiler identification is GNU 11.4.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- The CXX compiler identification is GNU 11.4.0
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Configuring done (0.4s)
-- Generating done (0.0s)
-- Build files have been written to: /tmp/pip-install-77w6z33k/whisper-cpp-python_a0b2df45a9ee419fb855cbd372dff6c6/_cmake_test_compile/build
--
-------
------------
-----------------
----------------------
---------------------------
--------------------------------
-- Trying 'Ninja' generator - success
--------------------------------------------------------------------------------
Configuring Project
Working directory:
/tmp/pip-install-77w6z33k/whisper-cpp-python_a0b2df45a9ee419fb855cbd372dff6c6/_skbuild/linux-x86_64-3.10/cmake-build
Command:
/tmp/pip-build-env-h5693axl/overlay/lib/python3.10/site-packages/cmake/data/bin/cmake /tmp/pip-install-77w6z33k/whisper-cpp-python_a0b2df45a9ee419fb855cbd372dff6c6 -G Ninja -DCMAKE_MAKE_PROGRAM:FILEPATH=ninja --no-warn-unused-cli -DCMAKE_INSTALL_PREFIX:PATH=/tmp/pip-install-77w6z33k/whisper-cpp-python_a0b2df45a9ee419fb855cbd372dff6c6/_skbuild/linux-x86_64-3.10/cmake-install -DPYTHON_VERSION_STRING:STRING=3.10.12 -DSKBUILD:INTERNAL=TRUE -DCMAKE_MODULE_PATH:PATH=/tmp/pip-build-env-h5693axl/overlay/lib/python3.10/site-packages/skbuild/resources/cmake -DPYTHON_EXECUTABLE:PATH=/home/amitli/python_venvs/venvWhisperCPP/bin/python3 -DPYTHON_INCLUDE_DIR:PATH=/usr/include/python3.10 -DPYTHON_LIBRARY:PATH=/usr/lib/x86_64-linux-gnu/libpython3.10.so -DPython_EXECUTABLE:PATH=/home/amitli/python_venvs/venvWhisperCPP/bin/python3 -DPython_ROOT_DIR:PATH=/home/amitli/python_venvs/venvWhisperCPP -DPython_FIND_REGISTRY:STRING=NEVER -DPython_INCLUDE_DIR:PATH=/usr/include/python3.10 -DPython3_EXECUTABLE:PATH=/home/amitli/python_venvs/venvWhisperCPP/bin/python3 -DPython3_ROOT_DIR:PATH=/home/amitli/python_venvs/venvWhisperCPP -DPython3_FIND_REGISTRY:STRING=NEVER -DPython3_INCLUDE_DIR:PATH=/usr/include/python3.10 -DCMAKE_MAKE_PROGRAM:FILEPATH=ninja -DCMAKE_BUILD_TYPE:STRING=Release
Not searching for unused variables given on the command line.
-- The C compiler identification is GNU 11.4.0
-- The CXX compiler identification is GNU 11.4.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
CMake Error at vendor/whisper.cpp/CMakeLists.txt:1 (cmake_minimum_required):
Compatibility with CMake < 3.5 has been removed from CMake.
Update the VERSION argument <min> value. Or, use the <min>...<max> syntax
to tell CMake that the project requires at least <min> but has been updated
to work with policies introduced by <max> or earlier.
Or, add -DCMAKE_POLICY_VERSION_MINIMUM=3.5 to try configuring anyway.
-- Configuring incomplete, errors occurred!
Traceback (most recent call last):
File "/tmp/pip-build-env-h5693axl/overlay/lib/python3.10/site-packages/skbuild/setuptools_wrap.py", line 660, in setup
env = cmkr.configure(
File "/tmp/pip-build-env-h5693axl/overlay/lib/python3.10/site-packages/skbuild/cmaker.py", line 354, in configure
raise SKBuildError(msg)
An error occurred while configuring with CMake.
Command:
/tmp/pip-build-env-h5693axl/overlay/lib/python3.10/site-packages/cmake/data/bin/cmake /tmp/pip-install-77w6z33k/whisper-cpp-python_a0b2df45a9ee419fb855cbd372dff6c6 -G Ninja -DCMAKE_MAKE_PROGRAM:FILEPATH=ninja --no-warn-unused-cli -DCMAKE_INSTALL_PREFIX:PATH=/tmp/pip-install-77w6z33k/whisper-cpp-python_a0b2df45a9ee419fb855cbd372dff6c6/_skbuild/linux-x86_64-3.10/cmake-install -DPYTHON_VERSION_STRING:STRING=3.10.12 -DSKBUILD:INTERNAL=TRUE -DCMAKE_MODULE_PATH:PATH=/tmp/pip-build-env-h5693axl/overlay/lib/python3.10/site-packages/skbuild/resources/cmake -DPYTHON_EXECUTABLE:PATH=/home/amitli/python_venvs/venvWhisperCPP/bin/python3 -DPYTHON_INCLUDE_DIR:PATH=/usr/include/python3.10 -DPYTHON_LIBRARY:PATH=/usr/lib/x86_64-linux-gnu/libpython3.10.so -DPython_EXECUTABLE:PATH=/home/amitli/python_venvs/venvWhisperCPP/bin/python3 -DPython_ROOT_DIR:PATH=/home/amitli/python_venvs/venvWhisperCPP -DPython_FIND_REGISTRY:STRING=NEVER -DPython_INCLUDE_DIR:PATH=/usr/include/python3.10 -DPython3_EXECUTABLE:PATH=/home/amitli/python_venvs/venvWhisperCPP/bin/python3 -DPython3_ROOT_DIR:PATH=/home/amitli/python_venvs/venvWhisperCPP -DPython3_FIND_REGISTRY:STRING=NEVER -DPython3_INCLUDE_DIR:PATH=/usr/include/python3.10 -DCMAKE_MAKE_PROGRAM:FILEPATH=ninja -DCMAKE_BUILD_TYPE:STRING=Release
Source directory:
/tmp/pip-install-77w6z33k/whisper-cpp-python_a0b2df45a9ee419fb855cbd372dff6c6
Working directory:
/tmp/pip-install-77w6z33k/whisper-cpp-python_a0b2df45a9ee419fb855cbd372dff6c6/_skbuild/linux-x86_64-3.10/cmake-build
Please see CMake's output for more information.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
</code></pre>
<p>How can I install <code>whisper-cpp</code> for Python?</p>
|
<python><pip><openai-whisper>
|
2025-07-04 08:34:47
| 1
| 4,880
|
user3668129
|
79,689,724
| 19,459,262
|
How can I change the background color of an action button?
|
<p>I have a Shiny for Python app with some buttons. I want to change the color of a button. How can I do this?</p>
<pre><code>from shiny.express import ui
with ui.layout_columns():
ui.input_action_button("red_button", "Make this button red!")
ui.input_action_button("blue_button", "Make this button blue!")
</code></pre>
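<p>For context, what I am hoping works is passing extra HTML attributes through to the button, e.g. a Bootstrap class or an inline style; the keyword arguments below are guesses on my part:</p>
<pre class="lang-py prettyprint-override"><code>from shiny.express import ui

with ui.layout_columns():
    # guess: extra kwargs forwarded as HTML attributes on the button
    ui.input_action_button("red_button", "Make this button red!",
                           style="background-color: red; color: white;")
    ui.input_action_button("blue_button", "Make this button blue!",
                           class_="btn-primary")
</code></pre>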
|
<python><background-color><py-shiny>
|
2025-07-04 05:56:16
| 1
| 784
|
Redz
|
79,689,720
| 6,930,340
|
quarto dashboard shiny package not found
|
<p>I have a quarto dashboard that makes use of <code>shiny</code>. The <code>test.qmd</code> file starts like this:</p>
<pre><code>---
title: "Test"
format: dashboard
server: shiny
---
...
</code></pre>
<p>When running quarto preview function within VS Code (i.e. using the shortcut <code>CTRL+SHIFT+K</code>), the shiny server starts up as expected.</p>
<p>However, when I run the quarto preview command from within an external terminal (with my venv activated), I get an error:</p>
<pre><code>$ quarto preview test.qmd
Starting python3 kernel...Done
Executing 'test.quarto_ipynb'
Cell 1/2: ''...Done
Cell 2/2: ''...Done
pandoc
to: html
output-file: test.html
template: >-
C:\Users\XXXXXX\AppData\Local\Programs\Quarto\share\formats\dashboard\template.html
standalone: true
section-divs: true
html-math-method: mathjax
wrap: none
default-image-extension: png
variables: {}
metadata
document-css: false
link-citations: true
date-format: long
lang: en
page-layout: custom
title: Test
server:
type: shiny
remove-hidden: all
Output created: Test.html
Watching files for changes
ERROR: The shiny package is required for documents with server: shiny
Install the latest version of shiny with pip install --upgrade shiny
</code></pre>
<p>I confirm that <code>shiny v.1.4.0</code> is already installed in my venv.</p>
<p>It seems as if <code>quarto</code> is not really using my activated venv. What puzzles me is the fact that <code>quarto check</code> doesn't pick up my python installation.</p>
<pre><code>quarto check jupyter
[>] Checking Python 3 installation....(None)
Unable to locate an installed version of Python 3.
Install Python 3 from https://www.python.org/downloads/
</code></pre>
<p>As per @furas's comment, setting <code>QUARTO_PYTHON</code> as an environment variable solves the issue.</p>
<p>Having said this, I would like to set this environment variable within the quarto document itself.</p>
<p>I tried the following in the first cell of my <code>test.qmd</code>.</p>
<pre><code>---
title: "Test"
format: dashboard
server: shiny
---
```{python}
#| context: setup
import os
py_path = os.path.join(os.getcwd(), ".venv/Scripts/python.exe")
os.environ["QUARTO_PYTHON"] = py_path
```
</code></pre>
<p>However, this doesn't solve the problem. I really need to set the environment variable in the terminal before calling the <code>quarto preview</code> command.</p>
|
<python><quarto><py-shiny>
|
2025-07-04 05:49:19
| 0
| 5,167
|
Andi
|
79,689,600
| 1,332,263
|
How to search, match, and correct an exact word in a text file
|
<p>My test.txt file contains two misspelled words and one correctly spelled word:</p>
<p>loyai</p>
<p>royai</p>
<p>pain</p>
<p>I want to search for the misspelled words and replace them using the dict words_. My problem is that if a misspelled word (pai) is part of another word (pain), it corrects pain to paln when it should ignore it.</p>
<p>How can I have my code ignore words that are not a full match to the misspelled word?</p>
<pre><code>#!/usr/bin/env python3
import re
words_ = {"loyai":"loyal", "royai":"royal", "pai":"pal"}
class srt_clean(object):
def __init__(self, file_name):
self.file_name = file_name
def replace_words_(self, file_name):
with open(file_name, 'r') as file:
data = file.read()
for search_text, replace_text in words_.items():
data = re.sub(search_text, replace_text, data)
# Opening our text file in write-only mode to write the replaced content
with open(file_name, 'w') as file:
# Writing the replaced data to our text file
file.write(data)
file_name = "test.srt"
clean = srt_clean(file_name)
clean.replace_words_(file_name)
</code></pre>
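<p>Would a word-boundary pattern along these lines be the right direction? A minimal sketch of what I mean (only tried on a toy string, not on the real file):</p>
<pre class="lang-py prettyprint-override"><code>import re

words_ = {"loyai": "loyal", "royai": "royal", "pai": "pal"}
data = "loyai\nroyai\npain\n"

for search_text, replace_text in words_.items():
    # \b word boundaries ensure 'pai' inside 'pain' is not treated as a match
    data = re.sub(r"\b" + re.escape(search_text) + r"\b", replace_text, data)

print(data)  # loyal / royal / pain
</code></pre>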
|
<python><python-3.x><regex>
|
2025-07-04 01:24:14
| 2
| 417
|
bob_the_bob
|
79,689,557
| 11,222,417
|
How to make the PEP8 formatter VS Code extension allow long lines?
|
<p>How can I make the PEP8 formatter VS Code extension allow long lines for Python? Currently the formatting enforces lines that are too short.</p>
<p>Furthermore, I really like the auto-formatting in PyCharm. Is there a way to make auto-formatting with the PEP8 extension (or another extension) behave more like the PyCharm formatting, specifically regarding the line length that PyCharm allows?</p>
|
<python><formatting><vscode-extensions><autopep8>
|
2025-07-03 23:30:25
| 0
| 305
|
J. Doe
|
79,689,397
| 2,540,336
|
Cannot understand the behaviour of pandas case_when used on Series with different indexes
|
<p>I am trying to use the case_when method of a pandas Series and I am not sure I understand why it behaves as shown below. I have indicated the behaviour that looks odd to me. It seems to have to do with the index of the Series, but why?</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
print(pd.__version__)
# 2.3.0
a = pd.Series([1, 2, 3, 4, 5], index=['a', 'b', 'c', 'd', 'e'], dtype='int')
b = pd.Series([1, 2, 3, 4, 5], index=['A', 'B', 'C', 'D', 'E'], dtype='int')
res = a.case_when(
[(a.gt(3), 'greater than 3'),
(a.lt(3), 'less than 3')])
print(res)
# a less than 3
# b less than 3
# c 3
# d greater than 3
# e greater than 3
res = a.case_when(
[(a.gt(3), 'greater than 3'),
(b.lt(3), 'less than 3')])
print(res)
# a less than 3
# b less than 3
# c less than 3 <- why is this not 3?
# d greater than 3
# e greater than 3
res = a.case_when(
[(b.gt(3), 'greater than 3'),
(b.lt(3), 'less than 3')])
print(res)
# a greater than 3 <- why is this not less than 3?
# b greater than 3 <- why is this not less than 3?
# c greater than 3 <- why is this not 3?
# d greater than 3
# e greater than 3
res = a.case_when(
[(b.gt(3).to_list(), 'greater than 3'),
(b.lt(3).to_list(), 'less than 3')])
print(res)
# a less than 3
# b less than 3
# c 3
# d greater than 3
# e greater than 3
</code></pre>
|
<python><pandas><series>
|
2025-07-03 19:42:11
| 2
| 597
|
karpan
|
79,689,261
| 11,222,417
|
Why does TensorDataset divide the data into minibatches?
|
<p>Why does TensorDataset divide the data into minibatches? For example, when a 2D array is put into it, instead of yielding 2D tensors as batches, it treats the requested batches as minibatches, and its actual "batch" is a tuple of length 1. For example, why is the batch length in the example below 1 and not 3?</p>
<pre class="lang-py prettyprint-override"><code>from torch.utils.data import TensorDataset
data = torch.randint(0, 100, (20, 3), dtype=torch.int32)
tensor_dataset = TensorDataset(data)
print('len(tensor_dataset)', len(tensor_dataset)) # output: 20
for batch in tensor_dataset:
print('type(batch)', type(batch)) ## output: tuple
print('len(batch)', len(batch)) ## output: 1 <<<******** Why not 3? ********
for minibatch in batch:
print('type(minibatch)', type(minibatch)) ## output: tensor
print('len(minibatch)', len(minibatch)) ## output: 3
break
break
</code></pre>
<p>Is there a built-in solution for this, or must it be handled manually by unpacking the batches into minibatches in the dataloader loop?</p>
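<p>For comparison, is wrapping the dataset in a <code>DataLoader</code> (as sketched below) the intended way to get real batches, or is there something built into <code>TensorDataset</code> itself?</p>
<pre class="lang-py prettyprint-override"><code>import torch
from torch.utils.data import TensorDataset, DataLoader

data = torch.randint(0, 100, (20, 3), dtype=torch.int32)
loader = DataLoader(TensorDataset(data), batch_size=4)

for (batch,) in loader:   # each item is a tuple with one entry per tensor in the dataset
    print(batch.shape)    # torch.Size([4, 3])
    break
</code></pre>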
|
<python><dataset><tensor><mini-batch>
|
2025-07-03 17:32:41
| 1
| 305
|
J. Doe
|
79,689,252
| 9,686,427
|
FuncAnimation: Fade out at the end of animation affecting fade in animations at the beginning
|
<p>In an attempt to fade latex text in and out of the screen using <code>FuncAnimation</code>, I have written the code below. In particular, I have written two methods <code>fade_in</code> and <code>fade_out</code> that fade an equation in and out respectively. When tested individually these methods work as expected, yet when tested together (see the bottom of the code below) an issue arises. If, after creating the objects <code>x</code> and <code>y</code> displaying a LaTeX <code>x</code> and <code>y</code> respectively, I</p>
<ol>
<li>Apply <code>fade_in</code> on <code>x</code>;</li>
<li>Apply <code>fade_in</code> on <code>y</code>;</li>
<li>Apply <code>fade_out</code> on <code>y</code>;</li>
</ol>
<p>then, during the first third of the animation, the symbol <code>y</code> is displayed on the screen. In contrast, I'd prefer if <code>y</code> is invisible during the first third of the animation, only to start appearing during its <code>fade_in</code> animation. Funnily enough, if I merely apply steps (1) and (2), then <code>y</code> is indeed invisible during the first third of the animation.</p>
<pre><code>import matplotlib.pyplot as plt
from matplotlib.font_manager import FontProperties
from matplotlib.animation import FuncAnimation, FFMpegWriter
def chain_animations(animations):
# Extract all update functions and frame counts
update_funcs = [a._func for a in animations]
frame_counts = [a.frame_seq.__length_hint__() for a in animations]
intervals = [a._interval for a in animations]
figs = [a._fig for a in animations]
assert all(interval == intervals[0] for interval in intervals), "All animations must have the same interval"
assert all(fig is figs[0] for fig in figs), "All animations must use the same figure"
total_frames = sum(frame_counts)
cumulative = [0]
for count in frame_counts:
cumulative.append(cumulative[-1] + count)
def update(frame):
for i in range(len(frame_counts)):
if cumulative[i] <= frame < cumulative[i + 1]:
local_frame = frame - cumulative[i]
return update_funcs[i](local_frame)
return []
return FuncAnimation(figs[0], update, frames=total_frames, interval=intervals[0], blit=True)
class Symbol:
def __init__(self, ax, letter, size=20, color='black', position=(0, 0), alpha=1.0, zorder=1, row = -1, col = -1):
self.ax = ax
self.letter = letter
self.size = size
self.color = color
self.position = position
self.alpha = alpha
self.zorder = zorder
self.font = FontProperties(family='cmr10')
self.row = row
self.col = col
self.add_to_axes(ax)
def add_to_axes(self, ax):
self.ax = ax
# Manually reduce font size slightly
fontsize = self.size # scale down for visual match
x, y = self.position
self.patch = ax.text(
x, y,
self.letter,
fontsize=fontsize,
color=self.color,
alpha=self.alpha,
zorder=self.zorder,
ha='left',
va='baseline',
math_fontfamily='cm',
usetex=False
)
def update_position(self, new_pos):
self.position = new_pos
def fade_in(self, ax, time=1.0, fps=30):
total_frames = max(1, int(time * fps)) # avoid divide by zero
target_alpha = 1.0
self.alpha = 0.0
self.patch.set_alpha(0.0)
def animate(frame):
t = frame / (total_frames - 1) if total_frames > 1 else 1.0
current_alpha = t * target_alpha
self.alpha = current_alpha
self.patch.set_alpha(current_alpha)
return [self.patch]
return FuncAnimation(ax.figure, animate, frames=total_frames, interval=1000/fps, blit=True)
def fade_out(self, ax, time=1.0, fps=30):
total_frames = max(1, int(time * fps)) # avoid divide by zero
target_alpha = 1.0
self.alpha = 1.0
if self.patch is not None:
self.patch.set_alpha(1.0)
def animate(frame):
t = frame / (total_frames - 1) if total_frames > 1 else 1.0
current_alpha = (1 - t) * target_alpha
self.alpha = current_alpha
self.patch.set_alpha(current_alpha)
return [self.patch]
return FuncAnimation(ax.figure, animate, frames=total_frames, interval=1000/fps, blit=True)
# --- driver script (originally a separate file that did `from Symbol import Symbol`) ---
fig, ax = plt.subplots()
ax.set_xlim(0, 200)
ax.set_ylim(0, 200)
# Create a Symbol instance
x = Symbol(ax, letter='$x$', size=30, color='blue', position=(50, 50))
y = Symbol(ax, letter='$y$', size=30, color='red', position=(55, 50))
# Doesn't work: x fade in, y fade in, y fade out
# Works: x fade in, y fade in, x fade out
anim1 = x.fade_in(ax, time=1.0, fps=30)
anim2 = y.fade_in(ax, time=1.0, fps=30)
anim3 = y.fade_out(ax, time=1.0, fps=30)
anim = chain_animations([anim1, anim2, anim3])
anim.save("fade.mp4", writer=FFMpegWriter(fps=30))
</code></pre>
|
<python><matplotlib><animation>
|
2025-07-03 17:23:36
| 0
| 484
|
Sam
|
79,689,146
| 3,768,871
|
Ignored font size of the math formula in Plotly
|
<p>Take the following example of code for a plot in plotly:</p>
<pre><code>import plotly.graph_objects as go
fig = go.Figure()
fig.add_trace(go.Box(x = [r"$\mu$", r"$\sigma$"], y=[10, 14]))
fig.update_layout(xaxis=dict(tickfont=dict(size=24)))
fig.show()
fig.write_image("this_image.png")
</code></pre>
<p>The <code>size</code> has no impact on the font size of the x-axis labels.
On the other hand, if the x values are plain text like <code>x=["mu", "sigma"]</code>, the size is respected.</p>
<p>Note that solutions like using <code>\displaystyle</code> (e.g., <code>r"$\displaystyle \mu$"</code> and <code>r"\LARGE $\mu$"</code>) do not work and are ignored by the renderer.</p>
<p>The engine of the image writer is "Kaleido" (<code>kaleido==1.0.0</code>, <code>plotly==6.2.0</code>, <code>Python 3.12</code>).</p>
<p>The question is: how can this issue be fixed for math formulas in the generated image?</p>
|
<python><plotly><kaleido>
|
2025-07-03 16:00:20
| 0
| 19,015
|
OmG
|
79,688,998
| 5,748,138
|
Access an Excel file in SharePoint using Python to ingest into SQL Server using a dataframe - Access has been blocked by conditional access policies
|
<p>I presently have a .env file which holds the sql login credentials and the sharepoint login credentials.</p>
<p>My code below fails in the second section of Step two with the following error:</p>
<blockquote>
<p>An error occurred while retrieving token from XML response: AADSTS53003: Access has been blocked by Conditional Access policies. The access policy does not allow token issuance.</p>
</blockquote>
<pre><code>from office365.runtime.auth.authentication_context import AuthenticationContext
from office365.sharepoint.client_context import ClientContext
from sqlalchemy import create_engine
import pandas as pd
from io import BytesIO
import os
from dotenv import load_dotenv
# Load environment variables from .env file
load_dotenv()
# SharePoint credentials & file path
SHAREPOINT_SITE = os.getenv("SHAREPOINT_SITE")
SHAREPOINT_USERNAME = os.getenv("SHAREPOINT_USERNAME")
SHAREPOINT_PASSWORD = os.getenv("SHAREPOINT_PASSWORD")
SHAREPOINT_FILE_URL = os.getenv("SHAREPOINT_FILE_URL") # Must be a server-relative path
# SQL Server connection details
SQL_SERVER = os.getenv("SQL_SERVER")
SQL_DATABASE = os.getenv("SQL_DATABASE")
SQL_USERNAME = os.getenv("SQL_USERNAME")
SQL_PASSWORD = os.getenv("SQL_PASSWORD")
SQL_DRIVER = os.getenv("SQL_DRIVER") # e.g., "ODBC Driver 17 for SQL Server"
TABLE_NAME = os.getenv("TABLE_NAME")
# Step 1: Authenticate to SharePoint
ctx_auth = AuthenticationContext(SHAREPOINT_SITE)
if not ctx_auth.acquire_token_for_user(SHAREPOINT_USERNAME, SHAREPOINT_PASSWORD):
raise Exception("SharePoint authentication failed")
# Step 2: Download Excel file to memory
ctx = ClientContext(SHAREPOINT_SITE, ctx_auth)
file_obj = BytesIO()
# Get the file from SharePoint using the server-relative URL
ctx.web.get_file_by_server_relative_url(SHAREPOINT_FILE_URL).download(file_obj).execute_query()
# Step 3: Load the Excel file into a pandas DataFrame
file_obj.seek(0) # Reset the pointer back to the beginning of the file
df = pd.read_excel(file_obj)
# Step 4: Connect to SQL Server and insert data
connection_string = f"mssql+pyodbc://{SQL_USERNAME}:{SQL_PASSWORD}@{SQL_SERVER}/{SQL_DATABASE}?driver={SQL_DRIVER}"
engine = create_engine(connection_string)
# Write the data to SQL Server (replace table or append)
df.to_sql(TABLE_NAME, con=engine, if_exists="replace", index=False)
print(f"Data from {SHAREPOINT_FILE_URL} has been successfully uploaded to {TABLE_NAME} in SQL Server.")
</code></pre>
<p>I have spoken to our infrastructure team, thinking this could be due to MFA blocking entry into SharePoint. However, I am using my own credentials and I am in the office, so there is no MFA/2FA request.</p>
<p>I have previously looked at
<a href="https://stackoverflow.com/questions/78740004/sharepoint-access-has-been-blocked-by-conditional-access-policies">sharepoint: Access has been blocked by Conditional Access policies</a>. There is no answer to that question; the post does suggest not using user credentials, but also suggests this doesn't work in any case.</p>
<p>Any help would be appreciated.</p>
|
<python><azure><sharepoint>
|
2025-07-03 14:21:37
| 0
| 342
|
Will
|
79,688,933
| 6,234,139
|
How can I create a column that is the result of replacing values in two or more other columns?
|
<p>Consider the dataframe <code>df</code>:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'Event1': ['Music', 'Something else 1', 'Theatre', 'Comedy'],
'Event2': ['Something else 2', 'Ballet', 'Something else 3', 'Something else 4'],
'Cost': [10000, 5000, 15000, 2000]})
print(df)
Event1 Event2 Cost
0 Music Something else 2 10000
1 Something else 1 Ballet 5000
2 Theatre Something else 3 15000
3 Comedy Something else 4 2000
</code></pre>
<p>I would like to map the values of the <code>Event1</code> and <code>Event2</code> to the values in the respective dictionaries:</p>
<pre><code> # Mapping tables
dict1 = {'Music': 'M', 'Cooking': 'C', 'Theatre': 'T', 'Comedy': 'C'}
dict2 = {'Ballet': 'B', 'Swimming': 'S'}
</code></pre>
<p>And store the mappings in a common column because I know that per row, only the value of one column will be mapped. The end result would be:</p>
<pre><code># desired outcome
result = pd.DataFrame({'Event1': ['Music', 'Something else 1', 'Theatre', 'Comedy'],
'Event2': ['Something else 2', 'Ballet', 'Something else 3', 'Something else 4'],
'Event': ['M', 'B', 'T', 'C'],
'Cost': [10000, 5000, 15000, 2000]})
print(result)
Event1 Event2 Event Cost
0 Music Something else 2 M 10000
1 Something else 1 Ballet B 5000
2 Theatre Something else 3 T 15000
3 Comedy Something else 4 C 2000
</code></pre>
<p>I can only do this in a messy and lengthy way and was hoping there is clean maybe idiomatic way of doing this.</p>
<p>How would you advise doing it?</p>
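<p>To make the desired behaviour concrete, this one-liner sketch produces the <code>Event</code> column above on the toy example; I do not know whether it is considered idiomatic or whether it generalises nicely to more than two columns:</p>
<pre class="lang-py prettyprint-override"><code># map Event1 first, then fall back to the mapping of Event2 where Event1 had no match
result = df.assign(Event=df['Event1'].map(dict1).fillna(df['Event2'].map(dict2)))
print(result)
</code></pre>
<p>(The column order differs from the desired output, but the values match.)</p>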
|
<python><pandas>
|
2025-07-03 13:34:27
| 1
| 701
|
koteletje
|
79,688,844
| 13,682,559
|
Pyright doesn't understand implementation of protocols
|
<p>In my code base I have many declarations like this:</p>
<pre><code>from typing import Protocol, override
from dataclasses import dataclass
# Definition of protocols (usually in a separate file)
@dataclass
class Data(Protocol):
val: int = 1
class Programm(Protocol):
def update(self, data: Data) -> int: ...
# Concrete implementation of the protocols:
@dataclass
class DataA(Data):
val_concrete: int = 2
class ProgrammA(Programm):
@override
def update(self, data: DataA) -> int:
return data.val_concrete + data.val
</code></pre>
<p>I was confident that this is the right way to use protocols. Maybe I am mistaken. When I turn on pyright (in fact I use basedpyright, but it should be the same), it flags <code>ProgrammA.update()</code> and complains:</p>
<p>[reportIncompatibleMethodOverride]: Method "update" overrides class "Programm" in an incompatible manner</p>
<p>Am I doing something wrong? I can get around this using a redundant generic. But I think this is unnecessary and in some way just wrong.</p>
<p>If my approach is correct, can I do something to clarify things in a way, pyright understands?</p>
|
<python><python-typing><pyright>
|
2025-07-03 12:21:01
| 0
| 1,108
|
Durtal
|
79,688,618
| 3,840,940
|
Cannot save a Spark 4.0 SQL DataFrame into a Redis database (Connection reset by peer)
|
<p>I am trying to write some Python code that saves a Spark SQL DataFrame into a Redis database. First, my development environment is:</p>
<pre><code>OS : Windows 11
Spark : Spark 4.0.0
python : Anaconda3-2024.02-1-Windows-x86_64
IDE : Visual Studio Code
</code></pre>
<p>This is the simple PySpark code:</p>
<pre><code>from pyspark.sql import SparkSession
from pyspark.sql.functions import to_json, struct
import redis
def save_to_redis_partition(rows):
r = redis.Redis(host='localhost', port=6379, db=0, decode_responses=False)
for row in rows:
key = f"user:{row['id']}"
value = row['json']
r.set(key, value)
r.close()
def main():
spark = SparkSession.builder \
.appName("PySpark Jedis Redis Demo") \
.master('local[*]')\
.getOrCreate()
data = [(1, "Alice", 30), (2, "Bob", 25), (3, "Charlie", 35)]
df = spark.createDataFrame(data, schema=["id","name","age"])
df_json = df.withColumn("json", to_json(struct("id","name","age")))
df_json.select("id","json").rdd.map(lambda x: x.asDict()).foreachPartition(save_to_redis_partition)
spark.stop()
if __name__ == "__main__":
main()
</code></pre>
<p>But errors are thrown when inserting the Spark DataFrame into Redis.</p>
<pre><code>Caused by: java.io.IOException: Connection reset by peer
at java.base/sun.nio.ch.SocketDispatcher.write0(Native Method)
at java.base/sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:54)
at java.base/sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:132)
at java.base/sun.nio.ch.IOUtil.write(IOUtil.java:76)
at java.base/sun.nio.ch.IOUtil.write(IOUtil.java:53)
at java.base/sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:532)
at org.apache.spark.api.python.BasePythonRunner$ReaderInputStream.writeAdditionalInputToPythonWorker(PythonRunner.scala:855)
at org.apache.spark.api.python.BasePythonRunner$ReaderInputStream.read(PythonRunner.scala:767)
at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:244)
at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:263)
at java.base/java.io.DataInputStream.readInt(DataInputStream.java:393)
at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:933)
</code></pre>
<p>The messages below are written to the log file.</p>
<pre><code>"message":"Executor is trying to kill task 7.0 in stage 0.0 (TID 7), interruptThread: false, reason: Stage cancelled: Job aborted due to stage failure: Task 2 in stage 0.0 failed 1 times, most recent failure: Lost task 2.0 in stage 0.0 (TID 2) (DESKTOP-D8UU19A.localdomain executor driver): org.apache.spark.SparkException: Python worker exited unexpectedly (crashed)
</code></pre>
<p>However, the code above runs successfully on Spark 4.0 installed on Ubuntu 22.04, so I have no idea why these errors occur only on Windows 11. Am I missing some configuration of Spark 4.0 on Windows 11? Any reply would be appreciated.</p>
|
<python><apache-spark><pyspark><redis><py-redis>
|
2025-07-03 09:35:12
| 0
| 1,441
|
Joseph Hwang
|
79,688,583
| 10,715,700
|
How do I process CSV files from S3 using pandas without loading the entire file?
|
<p>My current data processing flow looks like this</p>
<ol>
<li>Load CSV</li>
<li>Pivot Data</li>
<li>Filter the original data based on some results from previous step</li>
<li>Repeat several times</li>
</ol>
<p>I have this working on several CSV files in S3. But I am going to encounter files that can be several GB in size. I feel that it will take a lot of resources to load the entire dataset and process it. Is there a way I can achieve this without having to throw a lot of CPU and memory at the script?</p>
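<p>To make the question concrete, below is the kind of chunked pattern I am wondering about; the bucket, key, and column names are placeholders, it assumes <code>s3fs</code> is installed so pandas can stream <code>s3://</code> paths, and the filter-and-repeat steps of my flow would clearly need to be reformulated to work incrementally:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd

# placeholder path and column names; requires s3fs for pandas to read s3:// URLs
reader = pd.read_csv("s3://my-bucket/big-file.csv", chunksize=100_000)

partials = []
for chunk in reader:
    # aggregate each chunk separately instead of pivoting the whole file at once
    partials.append(chunk.groupby("some_key")["some_value"].sum())

# combine the per-chunk aggregates into one result
result = pd.concat(partials).groupby(level=0).sum()
</code></pre>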
|
<python><pandas><data-processing>
|
2025-07-03 09:10:06
| 0
| 430
|
BBloggsbott
|
79,688,514
| 12,439,683
|
Why are type-checkers fine with covariant method parameters when they are in a union?
|
<p>Method parameters should be contravariant, hence defining a <em>co</em>variant generic should raise a type error. However when using a covariant generic in a union <a href="https://pyright-play.net/?code=GYJw9gtgBALgngBwJYDsDmUkQWEMoAqiApgGoCGIANFAOLErEhIDGNRCxAggDZLkBnGlxRwAsAChJBAPoswUALyESFEAAoARLPmaa8gG6V%2BKGIoIgArsQCUkySx6CBUAGJgw6%2Bo2YsA2jpgALo2AFz2ElBRUAAmxMBQ5OoCxDzANJRooYRyYDZRALQAfFAicNlQAMRQAKIg4NRQfADWxFDEAB6cLDDEMYkugZguhsbkpgB0ktEzUBPzETNxCQBGyanpiSBZmKZQAD458vlQxaWiFdUAcgpMDTSd3b39YIzTs1HzE0A" rel="nofollow noreferrer">pyright</a>, <a href="https://mypy-play.net/?mypy=latest&python=3.12&gist=f9a8e6b612f2e9b7208807ee3de54e93" rel="nofollow noreferrer">mypy</a> and pyre-check all do <em>not</em> report an error on the following code:</p>
<pre class="lang-py prettyprint-override"><code>from typing import TypeVar, Generic, Any
T_co = TypeVar("T_co", covariant=True)
class Foo(Generic[T_co]):
def a(self, arg: T_co) -> Any: # Error, like expected as T_co is covariant.
...
def b(self, arg: int | T_co) -> Any: # No error, expected one
...
</code></pre>
<p>As all these type-checkers do not raise an error, I wonder: is this actually fine, or shouldn't this also break type safety? If it is fine, can you explain why, and what differs from the pure covariant parameter, which is definitely not safe; shouldn't I be able to break this case as well? Or, if it's not safe, is there an explanation for why all the type-checkers share the same gap here?</p>
|
<python><generics><python-typing><covariance>
|
2025-07-03 08:16:59
| 1
| 5,101
|
Daraan
|
79,688,390
| 1,080,517
|
In Airflow getting NotRegistered error on simple Celery task despite [celery] imports configuration
|
<p>I'm encountering a persistent NotRegistered error in my Airflow setup and have exhausted the standard debugging steps. Even a minimal test case fails, suggesting a fundamental issue with how my Celery worker is being configured by Airflow.</p>
<p><strong>My Environment:</strong></p>
<ul>
<li>Airflow: apache-airflow==3.0.2</li>
<li>Providers: apache-airflow-providers-celery==3.12.0</li>
<li>Executor: CeleryExecutor</li>
</ul>
<p><strong>Minimal Test Case</strong>
To isolate the issue, I removed all other files from my dags folder, leaving only two simple files:</p>
<p>dags/simple_task.py</p>
<pre><code>import logging
from airflow.providers.celery.executors.celery_executor import app
log = logging.getLogger(__name__)
@app.task
def my_simple_test_task(message):
"""A minimal task that only logs a message."""
log.info("SUCCESS! The simple task ran with message: %s", message)
</code></pre>
<p>dags/test_dag.py</p>
<pre><code>from __future__ import annotations
import pendulum
from airflow.decorators import dag, task
from simple_task import my_simple_test_task
@dag(
dag_id='minimal_celery_test',
schedule=None,
start_date=pendulum.now(),
catchup=False
)
def minimal_celery_test_dag():
@task
def trigger_the_simple_task():
my_simple_test_task.delay("Testing Celery import.")
trigger_the_simple_task()
minimal_celery_test_dag()
</code></pre>
<p><strong>Configuration and Debugging Steps</strong>
My airflow.cfg is configured to import this module:</p>
<p>airflow.cfg</p>
<pre><code>[celery]
imports = simple_task
</code></pre>
<p>I have already tried the following steps multiple times:</p>
<ol>
<li><strong>Hard Resetting Services</strong>: Completely stopping the airflow scheduler and airflow celery worker processes and restarting them.</li>
<li><strong>Clearing Cache</strong>: Deleting all <strong>pycache</strong> directories and .pyc files from my project.</li>
<li><strong>Verifying File Location</strong>: Ensuring both simple_task.py and test_dag.py are directly inside the dags folder which is referenced in config.</li>
</ol>
<p><strong>The Result</strong>
When I run the minimal_celery_test DAG, the trigger_the_simple_task task sends the job, but it immediately fails (as I can see it in Flower dashboard) on the worker with the following error:</p>
<p><code>NotRegistered('simple_task.my_simple_test_task')</code></p>
<p>When I check the Celery worker's startup logs, the <code>[tasks]</code> section only lists the default Airflow tasks; <code>my_simple_test_task</code> is missing, which confirms it's not being registered.</p>
<p><strong>My Question:</strong>
Given that this minimal configuration appears correct, what could be causing the Airflow Celery worker to completely ignore the [celery] imports setting in airflow.cfg? Are there any other known issues, environmental factors, or configurations specific to Airflow 3 that could lead to this behavior?
Maybe there is another way to achieve such behavior?</p>
<p><strong>Minor follow-up behind this approach</strong></p>
<p>My minimal test case represents a larger <strong>"polite" web crawling system</strong>. Understanding its design clarifies why I need this specific Airflow-Celery pattern to work.</p>
<p>My setup functions as follows:</p>
<ul>
<li>An Airflow DAG (<code>daily_crawler_dag.py</code>) acts as a dispatcher. It finds URLs to process and dispatches them as individual Celery tasks.</li>
<li>The Celery task (<code>html_acquisition.py</code>) is the actual crawler. Its
most critical job is to acquire a distributed <strong>Redis lock</strong> for each
URL's domain before making a request. This enforces politeness by
ensuring only one request is sent to a specific website at a time.</li>
<li>After dispatching, the main DAG uses a <strong>deferrable operator</strong> to
efficiently wait for all the crawling tasks to complete without
holding up an Airflow worker.</li>
</ul>
<p>This architecture separates orchestration from the I/O-heavy crawling work. Using a standalone Celery task, called via <code>.delay()</code>, is essential for the Redis-based politeness and concurrency control. This is why resolving the <code>NotRegistered</code> error is a critical blocker for the entire system.</p>
|
<python><airflow><celery>
|
2025-07-03 06:37:06
| 0
| 2,713
|
sebap123
|
79,687,761
| 27,596,369
|
How to get a slice of a list which is inside a pandas Series
|
<pre><code>numbers = df_data['prize_share'].str.split('/')
</code></pre>
<p>Here is numbers:</p>
<p><a href="https://i.sstatic.net/8aqHrVTK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8aqHrVTK.png" alt="image of Series" /></a></p>
<p>(Without using .split, it would be a fraction like 1/1 or 1/4)</p>
<p>I want to now create a new column which gives us the percentage of the fraction (100% for 1/1, 50% for 1/2, 25% for 1/4, e.t.c)</p>
<p>For that, I did this:</p>
<pre><code>df_data['share_pct'] = float(int(numbers[0]) / int(numbers[1]))
</code></pre>
<p>But then I realized that <code>numbers[0]</code> was [1,1] instead of a series, so I am confused how to go forward with this.</p>
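<p>Is something along these lines what I should be doing instead? (Column names as above; I have not verified the dtype handling.)</p>
<pre class="lang-py prettyprint-override"><code># expand=True turns the split parts into separate columns instead of lists
parts = df_data['prize_share'].str.split('/', expand=True).astype(float)
df_data['share_pct'] = parts[0] / parts[1] * 100
</code></pre>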
|
<python><pandas><series>
|
2025-07-02 16:14:13
| 1
| 1,512
|
Aadvik
|
79,687,721
| 14,409,562
|
How do AI Agent workflows (langgraph) know what tools to use?
|
<p>Hi, I'm currently starting to use langgraph and it's really awesome. I'm slightly confused about one thing, though: how does langgraph identify tools?</p>
<p>How does it know what a tool does? How can I convey the purpose of the tool in order to be clearer?</p>
<p>From what I understand, its only means of doing so is to read the function and the input it receives. Does this mean I should be writing very long function names in order to explain the finer details? This might have a very simple answer, but I'm confused about how it's able to fetch further information about the tool provided.</p>
<p>From the <a href="https://langchain-ai.github.io/langgraph/tutorials/get-started/4-human-in-the-loop/#1-add-the-human_assistance-tool" rel="nofollow noreferrer">tutorial</a> they use the following example:</p>
<pre class="lang-py prettyprint-override"><code>@tool
def human_assistance(query: str) -> str:
"""Request assistance from a human."""
human_response = interrupt({"query": query})
return human_response["data"]
</code></pre>
<p>I don't see how it's able to understand from the function name that <code>human_assistance</code> is capable of interrupting a query and telling the AI graph that it will get a response from a human. For all I know, it could interpret it as "human_assistance" -> "human helps me" -> "human does the work" and not that it will have to continue querying. This is a bit far-fetched, but it's to better understand more complex tools and how to properly convey a tool's purpose to an LLM.</p>
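<p>For reference, this is how I currently imagine conveying more detail; the longer docstring below is my own guess at what actually gets passed to the model as the tool description:</p>
<pre class="lang-py prettyprint-override"><code>from langchain_core.tools import tool
from langgraph.types import interrupt  # as used in the tutorial

@tool
def human_assistance(query: str) -> str:
    """Request assistance from a human operator.

    Use this when the user explicitly asks for a human or when you are not
    confident in your answer. Calling it pauses the graph until a human
    responds; the return value is the human's reply, to be relayed verbatim.
    """
    human_response = interrupt({"query": query})
    return human_response["data"]
</code></pre>
<p>My rough understanding is that the decorator turns the docstring and the typed arguments into the tool schema the LLM sees, but I would like confirmation of that.</p>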
|
<python><langgraph><crewai>
|
2025-07-02 15:42:13
| 0
| 412
|
a_confused_student
|
79,687,592
| 943,222
|
merge sorted array wrong result not sure why?
|
<p>I am doing this challenge, but I am getting two different results: one in PyCharm and one on the LeetCode webpage.</p>
<p>my code on pycharm:</p>
<pre><code>nums1 = [1,2,3,0,0,0]
m = 3
nums2 = [2,5,6]
n = 3
print(nums1[0:m])
print(nums2[0:n])
nums1 = nums1[0:m] + nums2[0:n]
nums1.sort()
print(nums1)
</code></pre>
<p>and the result:</p>
<pre><code>[1, 2, 3]
[2, 5, 6]
[1, 2, 2, 3, 5, 6]
</code></pre>
<p>my LeetCode code:</p>
<pre><code>class Solution:
def merge(self, nums1: List[int], m: int, nums2: List[int], n: int) -> None:
"""
Do not return anything, modify nums1 in-place instead.
"""
nums1=nums1[0:m]+nums2[0:n]
nums1.sort()
</code></pre>
<p>the LeetCode result:</p>
<pre><code>Input
nums1 = [1,2,3,0,0,0]
m = 3
nums2 = [2,5,6]
n = 3
Output
[1,2,3,0,0,0]
Expected
[1,2,2,3,5,6]
</code></pre>
<p>Both are python 3.</p>
|
<python><merge>
|
2025-07-02 14:28:36
| 1
| 816
|
D.Zou
|
79,687,543
| 2,473,898
|
How to call Microsoft 365 Copilot API to perform web search using Python?
|
<p>I'm working on a Python application that integrates with Microsoft 365 Copilot APIs. My goal is to <strong>use Copilot (or a related API) to perform web-grounded responses (i.e., public web search similar to Bing integration)</strong>.</p>
<p>From Microsoft documentation, I understand there are:</p>
<ul>
<li><strong>Retrieval APIs</strong> - for accessing tenant content (SharePoint, OneDrive, etc.)</li>
<li><strong>Copilot Chat API</strong> - in preview, used for chat-like experiences</li>
<li><strong>Web Search</strong> - supported in the Microsoft 365 Copilot UI, but it's unclear if it's available via API</li>
</ul>
<p>I have already registered an Azure AD app and can authenticate using MSAL. However, I couldn't find any API documentation for <strong>enabling or using Bing Web Search grounding</strong> via Graph API or Copilot API.</p>
<h3>My question:</h3>
<p><strong>Is there a way to call Microsoft 365 Copilot or Microsoft Graph API to include public web search results (e.g., Bing) in the response from Python?</strong>
Or is <strong>web search only available via the Microsoft 365 UI</strong> and not through APIs?</p>
<ul>
<li><p>Any API endpoint that supports web/Bing search grounding via
Microsoft 365 Copilot</p>
</li>
<li><p>Confirmation whether this is currently available, restricted, or not
exposed via API</p>
</li>
<li><p>If not possible, would using the Bing Search API separately be the
only option?</p>
</li>
</ul>
<p>Any guidance or official documentation link would be appreciated.</p>
|
<python><azure><azure-ad-msal><microsoft365><microsoft-copilot>
|
2025-07-02 13:50:58
| 0
| 531
|
Banng
|
79,687,525
| 5,378,816
|
Field validator calls a parser - how to save the parsed value?
|
<p>A form has a field. The field has a validator. The validator calls a parser. If and only if the parsing succeeds, the validation is successful.</p>
<p>However, later I have to call the parser again to get the parsed value. How can I avoid this duplication? Should I attach the result to the field object as a new attribute?</p>
<p>Simplified Flask code:</p>
<pre><code>def data_validator(form, field):
try:
# first parser call, but the result will be lost
result = parse(field.data)
except Exception as err:
raise ValidationError(f"Invalid data: {err}") from None
class DataForm(FlaskForm):
datafield = wtf.fields.StringField('Enter data', validators=[data_validator])
...
@app.post('/some/url')
def view_function():
form = DataForm()
if form.validate_on_submit():
# second parser call
result = parse(form.datafield.data)
...
</code></pre>
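<p>This is the attribute-attaching variant I have in mind, as a sketch reusing the names from the snippet above (the <code>parsed_data</code> attribute name is made up):</p>
<pre><code>def data_validator(form, field):
    try:
        # keep the parsed value around instead of throwing it away
        field.parsed_data = parse(field.data)
    except Exception as err:
        raise ValidationError(f"Invalid data: {err}") from None

@app.post('/some/url')
def view_function():
    form = DataForm()
    if form.validate_on_submit():
        result = form.datafield.parsed_data  # no second parse() call
        ...
</code></pre>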
|
<python><flask-wtforms><wtforms>
|
2025-07-02 13:41:26
| 1
| 17,998
|
VPfB
|
79,687,500
| 15,993,687
|
Can label or button act as parent in tkinter?
|
<p>I always thought that only frames and other container elements can be parents, but recently when I tried the code below, it seemed to work perfectly without any error.</p>
<pre class="lang-py prettyprint-override"><code>import os
import tkinter as tk
from PIL import Image, ImageTk
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
main = tk.Tk()
main.title("Main Window")
main.config(bg="#E4E2E2")
main.geometry("700x400")
frame = tk.Frame(master=main)
frame.config(bg="#d1c9c9")
frame.pack()
label2 = tk.Label(master=frame, text="Password")
label2.config(bg="#d1c9c9", fg="#000")
label2.pack(side=tk.TOP)
button = tk.Button(master=frame, text="Submit")
button.config(bg="#161515", fg="#ffffff")
button.pack(side=tk.TOP)
entry1 = tk.Entry(master=button)
entry1.config(bg="#fff", fg="#000")
entry1.pack(side=tk.TOP)
main.mainloop()
</code></pre>
<p>The entry appears inside the button when I try it on my Linux PC. So, is it fine to use labels, buttons, and other widgets as parents? Will it cause any issues on other OSes?</p>
|
<python><tkinter><tkinter-layout>
|
2025-07-02 13:28:06
| 1
| 3,141
|
Art
|
79,687,461
| 5,118,421
|
How to prevent black adding parentheses for chain of code
|
<p>Given code:</p>
<pre><code>longfunc(long_arg1, long_arg2...).longfunct().longfunct().longfunct()
</code></pre>
<p>How can I prevent Black from wrapping it in parentheses?</p>
<pre><code>(
longfunc(long_arg1, long_arg2...)
.longfunct()
.longfunct()
.longfunct()
)
</code></pre>
<p>What rule enforces that?</p>
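<p>For reference, the only escape hatch I know of is Black's formatter comments; this is just a sketch reusing the placeholder names above, and I would still like to understand which rule triggers the wrapping in the first place:</p>
<pre><code># keep the chain on one line by exempting it from formatting
longfunc(long_arg1, long_arg2).longfunct().longfunct().longfunct()  # fmt: skip

# or exempt a whole region
# fmt: off
longfunc(long_arg1, long_arg2).longfunct().longfunct().longfunct()
# fmt: on
</code></pre>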
|
<python><python-black>
|
2025-07-02 13:00:04
| 1
| 1,407
|
Irina
|
79,687,355
| 5,520,689
|
How to make the LLM call MCP functions hosted on Google Cloud Run with Python
|
<p>I have hosted a function on Google Cloud Run and am able to call it with FastMCPClient.
Thank you for the help with my earlier <a href="https://stackoverflow.com/questions/79685701/how-to-call-a-python-mcp-tool-hosted-on-google-cloud-run">question</a>.</p>
<p>This is my MCP Server code. This is deployed as a docker image on Google Cloud Run.</p>
<pre><code>import asyncio
import os
from fastmcp import FastMCP, Context
mcp = FastMCP("MCP Server on Cloud Run")
@mcp.tool()
async def add(a: int, b: int, ctx: Context) -> int:
    """Call this function when there are 2 numbers to add. Pass the 2 numbers as parameters."""
    await ctx.debug(f"[add] {a}+{b}")
result = a+b
await ctx.debug(f"result={result}")
return result
if __name__ == "__main__":
asyncio.run(
mcp.run_async(
transport="streamable-http",
host="0.0.0.0",
port=os.getenv("PORT", 8080),
)
)
</code></pre>
<p>The below code works and I am able to call the MCP tool to add 2 numbers.</p>
<pre><code>from fastmcp import Client
import asyncio
import google.oauth2.id_token
import google.auth.transport.requests
import os
import sys
args = sys.argv
if len(args) != 3:
sys.stderr.write(f"Usage: python {args[0]} <a> <b>\n")
sys.exit(1)
a = args[1]
b = args[2]
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = 'C:\\Path\\to\\file.json'
audience = "https://mcp-server-url-from-cloud-run"
request = google.auth.transport.requests.Request()
token = google.oauth2.id_token.fetch_id_token(request, audience)
config = {
"mcpServers": {
"cloud-run":{
"transport": "streamable-http",
"url": f"{audience}/mcp/",
"headers": {
"Authorization": "Bearer token",
},
"auth": token,
}
}
}
client = Client(config)
async def run():
async with client:
print("Connected")
aint=int(a)
bint=int(b)
result = await client.call_tool(
name="add",
arguments={"a":aint, "b":bint},
)
print(result)
if __name__ == "__main__":
asyncio.run(run())
</code></pre>
<p>My intention was to expose this tool to my LLM so it can decide when to call the tools at its disposal. For example, if I say "add 5 and 4" in the prompt, the LLM should call the add function and return 9. Just using the call_tool() function does not add much value when unstructured data is involved.</p>
<p>I could use the code below to make the LLM access the MCP tools when the MCP server was a local .py file.</p>
<pre><code>os.environ["OPENAI_API_KEY"] = "Open_API_Key"
# Instantiate Google Gemini LLM with deterministic output and retry logic
llm = ChatOpenAI(
model="gpt-4o",
temperature=0,
max_tokens=None,
timeout=None,
max_retries=2,
#api_key=""
# base_url="...",
# organization="...",
# other params...
)
server_script = sys.argv[1]
# Configure MCP server startup parameters
server_params = StdioServerParameters(
command="python" if server_script.endswith(".py") else "node",
args=[server_script],
)
mcp_client = None
async def run_agent():
global mcp_client
async with stdio_client(server_params) as (read, write):
async with ClientSession(read, write) as session:
await session.initialize()
mcp_client = type("MCPClientHolder", (), {"session": session})()
tools = await load_mcp_tools(session)
agent = create_react_agent(llm, tools,prompt=system_message_obj)
print("MCP Client Started! Type 'quit' to exit.")
while True:
query = input("\\nQuery: ").strip()
if query.lower() == "quit":
break
# Send user query to agent and print formatted response
response = await agent.ainvoke({"messages": query})
try:
formatted = json.dumps(response, indent=2, cls=CustomEncoder)
except Exception:
formatted = str(response)
print("\\nResponse:")
print(formatted)
return
</code></pre>
<p>Is there a way to expose the tools from my Google Cloud MCP server (called in my first code) to the LLM Python client using the Cloud Run URL and the secure JSON key, like how it is done in the second code with a local .py file?
This might be a basic question, but I could not find any answers so far. Any help will be really appreciated.
Due to scalability concerns I do not want to use local cloud proxies for authentication.</p>
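<p>This is the direction I have been experimenting with, shown only as a sketch: the <code>streamablehttp_client</code> import path and its <code>headers</code> parameter are assumptions on my side, as is the <code>load_mcp_tools</code> import.</p>
<pre><code>from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client  # assumed import path
from langchain_mcp_adapters.tools import load_mcp_tools       # assumed import path
from langgraph.prebuilt import create_react_agent

async def run_agent_over_http(token: str, question: str):
    audience = "https://mcp-server-url-from-cloud-run"
    headers = {"Authorization": f"Bearer {token}"}  # assuming the client forwards headers
    async with streamablehttp_client(f"{audience}/mcp/", headers=headers) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await load_mcp_tools(session)
            agent = create_react_agent(llm, tools)  # llm = the ChatOpenAI instance above
            return await agent.ainvoke({"messages": question})
</code></pre>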
|
<python><https><google-cloud-run><large-language-model><model-context-protocol>
|
2025-07-02 11:41:15
| 2
| 358
|
Sachu
|
79,687,351
| 6,010,635
|
Multiclass focal loss in xgboost doesn't train
|
<p>I have a dataframe with 60 columns as variables and a last column as target class (4 possible classes). I want to implement a custom loss function. I want that function to be the focal loss for a multiclass problem since I have a very imbalanced dataset. This is my approach following the docs from xgboost for custom functions:</p>
<pre><code>import numpy as np
import xgboost
from scipy.special import log_softmax, softmax
from sklearn.utils import compute_sample_weight
from sklearn.model_selection import train_test_split
num_classes = 4
def focal_loss_multiclass_objective(inputs: np.ndarray, dtrain: xgboost.DMatrix,
alpha=1, gamma=2, num_classes=4):
# Convert logits to probabilities and log-probabilities
log_prob = log_softmax(inputs)
prob = np.exp(log_prob)
print(f"prob: {prob}")
# One hot encoding the correct class
targets = dtrain.get_label().astype(np.int64)
targets_one_hot = np.identity(num_classes)[targets]
# Gradient
grad = -alpha*targets_one_hot*((1-prob)**(gamma-1)) * (-gamma*log_prob
+ (1-prob)/(prob+1e-6))
print(f"grad: {grad}")
# Hessian
hess = -alpha*targets_one_hot*(gamma*(gamma-1)*((1-prob)**(gamma-2)) * log_prob
- 2*gamma*((1-prob)**(gamma-1))/(prob+1e-6)
- ((1-prob)**gamma)/(prob+1e-6)**2)
print("hess: ", hess)
return grad, hess
</code></pre>
<p>The gradient and Hessian of the multiclass focal loss are quite messy, but I have double-checked them with SymPy.</p>
<p>The training:</p>
<pre><code>X = df.iloc[:, 0:60]
y = df.iloc[:, -1]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
# Sample weights:
sample_weights = compute_sample_weight('balanced', y_train)
print(np.unique(sample_weights))
# Create a DMatrix entity
dtrain = xgboost.DMatrix(X_train, label=y_train, weight=sample_weights)
dtest = xgboost.DMatrix(X_test, label=y_test)
params = {
'num_class': num_classes,
#'disable_default_eval_metric': True,
'eval_metric': 'mlogloss',
'eta': 0.3,
'max_depth': 6,
'subsample': 0.8,
'colsample_bytree': 0.8,
'tree_method': 'auto'
}
# Train the model
model = xgboost.train(params,
dtrain,
num_boost_round=100,
obj=focal_loss_multiclass_objective)
</code></pre>
<p>But I always obtain a 25% probability after training. I don't know what I am doing wrong.</p>
<pre><code>y_pred_logits = model.predict(dtest, output_margin=True)
y_prob = softmax(y_pred_logits, axis=1)
print(y_pred_logits)
print(y_prob)
</code></pre>
|
<python><xgboost><loss-function><xgbclassifier>
|
2025-07-02 11:34:01
| 1
| 1,297
|
David
|
79,687,251
| 11,267,281
|
How to create an OpenAI Assistant using Responses API
|
<p>When I try using Python Azure OpenAI SDK to create a thread for my OpenAI Assistant</p>
<pre><code>my_thread = azure_openai_client.beta.threads.create()
</code></pre>
<p>it works, but the following annoying error is shown:</p>
<blockquote>
<p>/tmp/ipykernel_1880725/4229537171.py:1: DeprecationWarning: The
Assistants API is deprecated in favor of the Responses API my_thread
= azure_openai_client.beta.threads.create()</p>
</blockquote>
<p><a href="https://i.sstatic.net/AJwvDf08.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AJwvDf08.png" alt="enter image description here" /></a></p>
<p>So I went to the OpenAI documentation page at <a href="https://platform.openai.com/docs/assistants/quickstart" rel="nofollow noreferrer">https://platform.openai.com/docs/assistants/quickstart</a>, where it actually says that they <em>incorporated key improvements into the Responses API</em>:
<a href="https://i.sstatic.net/oT3yDXkA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oT3yDXkA.png" alt="enter image description here" /></a></p>
<p>However, I can't find any way to create the assistant or the thread using the Responses API: even the OpenAI documentation on the same page above shows how to do this with the "deprecated" method.</p>
<p>Of course I know how to disable the warning, but I'm wondering why I'm receiving it: I expect to receive a warning right after an alternative (even in preview) is available, but not earlier.</p>
|
<python><openai-api>
|
2025-07-02 10:20:20
| 1
| 319
|
Mauro Minella
|
79,687,167
| 1,647,627
|
What does Python do with incorrect type hinting in global scope?
|
<p>I found <a href="https://superset.apache.org/docs/configuration/configuring-superset/#keycloak-specific-configuration-using-flask-oidc" rel="nofollow noreferrer">here</a> strange Python code, something like:</p>
<pre><code>myvar: "Some string"
</code></pre>
<p>I checked it in IPython and it works. But what does it actually do?</p>
<pre><code>In [16]: myvar: "Some string"
In [17]: myvar
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
Cell In[17], line 1
----> 1 myvar
NameError: name 'myvar' is not defined
</code></pre>
|
<python><python-typing>
|
2025-07-02 09:16:01
| 1
| 897
|
senior_pimiento
|
79,687,131
| 2,819,689
|
How to get branches instead of git.Head "ref/head/dev"?
|
<p>I am using GitPython</p>
<pre><code>from git import Repo
r=Repo(url)
repo_heads=r.heads
</code></pre>
<p>I got</p>
<pre><code>git.Head "ref/head/dev"
</code></pre>
<p>What should I change to get the list of branch names, like "main", "dev", ...?</p>
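<p>For clarity, this is the kind of listing I am after (a minimal sketch; as far as I can tell each <code>Head</code> object has a <code>name</code> attribute):</p>
<pre><code>from git import Repo

repo = Repo("/path/to/local/clone")   # Repo() expects a local path, not a URL
branch_names = [head.name for head in repo.heads]
print(branch_names)                   # hoping for something like ['main', 'dev']
</code></pre>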
|
<python>
|
2025-07-02 08:55:03
| 1
| 2,874
|
MikiBelavista
|
79,686,671
| 14,000,710
|
Why is pylance not able to statically determine Literal Union type in dictionary value
|
<p>When writing a <code>to_ChatCompletionMessageParam()</code> instance method, I have the following implementation:</p>
<pre class="lang-py prettyprint-override"><code>def to_ChatCompletionMessageParam(self) -> ChatCompletionMessageParam:
author: Literal["assistant", "user"] = self.author
result: ChatCompletionMessageParam = {
"role": author,
"content": self.item.value()
}
...
return result
</code></pre>
<p>The type of <code>ChatCompletionMessageParam</code> is a union of multiple TypedDict objects that can be coerced to a regular dictionary literal. What's important is that each type in the union has a field which is a required Literal. For example:</p>
<pre class="lang-py prettyprint-override"><code>ChatCompletionMessageParam: TypeAlias = Union[ChatCompletionUserMessageParam, ChatCompletionAssistantMessageParam]
class ChatCompletionUserMessageParam(TypedDict, total=False):
content: Required[Union[str, Iterable[ChatCompletionContentPartParam]]]
"""The contents of the user message."""
role: Required[Literal["user"]]
"""The role of the messages author, in this case `user`."""
...
class ChatCompletionAssistantMessageParam(TypedDict, total=False):
role: Required[Literal["assistant"]]
"""The role of the messages author, in this case `assistant`."""
content: Union[str, Iterable[ContentArrayOfContentPart], None]
"""The contents of the assistant message.
Required unless `tool_calls` or `function_call` is specified.
"""
...
</code></pre>
<p>The only fix I have right now is doing an explicit Literal check such as</p>
<pre class="lang-py prettyprint-override"><code>def to_ChatCompletionMessageParam(self) -> ChatCompletionMessageParam:
author: Literal["assistant", "user"] = self.author
if author == "assistant":
result: ChatCompletionMessageParam = {
"role": author,
"content": self.item.value()
}
else:
result: ChatCompletionMessageParam = {
"role": author,
"content": self.item.value()
}
return result
</code></pre>
<p>Is this a bug, or is there a specific edge case that Pylance is unable to rule out in the simplified code?</p>
<p><strong>TL;DR: When creating a dictionary literal that's meant to have a union type of multiple dictionary entries that require a Literal value for a field, pylance is unable to check that a supplied value type checks.</strong></p>
|
<python><python-typing><azure-openai><pyright>
|
2025-07-01 23:04:08
| 1
| 537
|
feverdreme
|
79,686,607
| 4,379,593
|
How do I make mypy treat my class as not being a subtype of object?
|
<p>I'm trying to write a class in Python where comparisons like <code>MyClass(5) == 5</code> are considered type errors by mypy, even though in Python all user-defined classes inherit from <code>object</code>.</p>
<p>My goal is to:</p>
<ol>
<li>Restrict comparisons to only instances of <code>MyClass</code>, so that <code>MyClass(5) == 5</code> triggers a type checker error.</li>
<li>Prevent passing <code>MyClass</code> instances into functions that expect generic <code>object</code> parameters, like this:</li>
</ol>
<pre><code>from typing import Self
class MyClass:
def __init__(self: Self, x: int):
self.x = x
def __eq__(self: Self, other: "MyClass") -> bool:
return self.x == other.x
print(MyClass(5) == 5) # <- I want mypy to give an error here
def abstract_comparator(a: object, b: object) -> bool:
return a == b
print(abstract_comparator(MyClass(5), 5)) # <- I also want an error here
</code></pre>
<p>Now this code violates the Liskov substitution principle. If I define</p>
<pre class="lang-py prettyprint-override"><code> def __eq__(self: Self, other: object) -> bool:
if not isinstance(other, MyClass):
return NotImplemented # or raise TypeError(...)
return self.x == other.x
</code></pre>
<p>and call <code>$ mypy test.py --strict-equality</code> says <code>Success: no issues found in 1 source file</code>. But I think this is wrong way.</p>
<p>I tried:</p>
<ul>
<li>Adding <code>@overload</code> definitions for <code>__eq__</code>, but mypy still treats <code>MyClass</code> as an <code>object</code>, so <code>abstract_comparator(MyClass(5), 5)</code> is accepted.</li>
<li>Creating fake protocols or dummy fields, but that either leaks into <code>bar()</code> or allows other unintended side effects.</li>
</ul>
<p>Ideally, I'd like a way to say "<code>MyClass</code> is not assignable to <code>object</code>" for the purposes of type checking, or a plugin / strict mode in mypy (or another tool like Pyright) that enforces <code>__eq__</code> type contracts more strictly and disallows structural compatibility with <code>object</code>.</p>
<p>Is there any way to express this intent to mypy or another type checker?</p>
|
<python><python-typing><mypy>
|
2025-07-01 21:08:01
| 1
| 373
|
ะคะธะปั ะฃัะบะพะฒ
|
79,686,564
| 5,719,229
|
Cannot send a request through FastAPI in Python (Failed to connect to localhost port 8000 after 0 ms: Couldn't connect to server)
|
<p>I devised a spam detector for my example, but I cannot send any request through Postman.</p>
<p>Here is the <strong>requirements.txt</strong> file:</p>
<pre><code>fastapi
uvicorn[standard]
transformers
torch
</code></pre>
<p>Here is my <strong>python file</strong> shown below</p>
<pre><code>from fastapi import FastAPI, HTTPException
from pydantic import BaseModel, Field
from transformers import pipeline
class ClassifyRequest(BaseModel):
text: str = Field(..., example="")
lang: str = Field(
"en",
pattern=r"^(en|tr|de|fr|es|it|pt|ru|ar|zh|ja|ko|hi|bn|ur|fa|th|vi|id|ms|nl|sv|no|da|fi|pl|cs|sk|hu|ro|bg|hr|sr|sl|et|lv|lt|el|he|uk|be|ky|uz|km|my|tg|az|hy|ga|cy|is|mk|bs|sq|mn|ne|pa|gl|la)$",
description="ISO language code",
example="tr"
)
class ClassifyResponse(BaseModel):
label: str
score: float
app = FastAPI(title="Spam & Abuse Detector")
classifier = pipeline(
"zero-shot-classification",
model="joeddav/xlm-roberta-large-xnli"
)
CANDIDATE_LABELS = ["spam", "adult_content", "drugs", "non_spam"]
@app.post("/classify", response_model=ClassifyResponse)
def classify(req: ClassifyRequest):
res = classifier(
sequences=req.text,
candidate_labels=CANDIDATE_LABELS
)
best_idx = res["scores"].index(max(res["scores"]))
label = res["labels"][best_idx]
score = res["scores"][best_idx]
return ClassifyResponse(label=label, score=score)
</code></pre>
<p>Here is <strong>Dockerfile</strong></p>
<pre><code>FROM python:3.10-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY app ./app
EXPOSE 8000
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000", "--reload", "--log-level", "info"]
</code></pre>
<p>When I run these commands shown below</p>
<pre><code>docker build -t spam-detector .
docker run -p 8000:8000 spam-detector
</code></pre>
<p>I got this console output</p>
<pre><code>INFO: Will watch for changes in these directories: ['/app']
INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
INFO: Started reloader process [1] using WatchFiles
</code></pre>
<p>When I send a request through Postman</p>
<pre><code>curl -X POST http://127.0.0.1:8000/classify \
-H "Content-Type: application/json" \
-d '{"text":"bla bla","lang":"en"}'
</code></pre>
<p>I get "Failed to connect to localhost port 8000 after 0 ms: Couldn't connect to server"</p>
<p>How can I fix the issue?</p>
|
<python><docker><fastapi><pydantic>
|
2025-07-01 20:17:40
| 1
| 2,928
|
Sercan Noyan Germiyanoฤlu
|
79,686,497
| 13,828,837
|
Matplotlib compiled .pgf output blurry
|
<p>I am generating .pgf plots in matplotlib. Here is a minimal example:</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as pyplot
import numpy.random as random
pyplot.matshow(random.random((20, 20)), interpolation="none")
pyplot.savefig("plot.pgf")
</code></pre>
<p>This creates a <code>.pgf</code> file and a <code>.png</code> file which contains the actual image data. Notably, the PNG file is only 20x20 pixels large. Of course, it does not need to be larger, since these 20x20 pixels contain the full information of the plot.</p>
<p>I then compile a LaTeX document which includes this <code>.pgf</code> file:</p>
<pre class="lang-tex prettyprint-override"><code>\documentclass{standalone}%
\usepackage{pgf}%
\begin{document}%
\input{plot.pgf}%
\end{document}%
</code></pre>
<p>The resulting pdf looks like this (MacOS Preview):</p>
<p><a href="https://i.sstatic.net/UgrmhkED.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UgrmhkED.png" alt="Blurry matshow plot, where the cells are indistinguishable." /></a></p>
<p>Clearly, this is not what should be displayed. Instead, Firefox shows this for example:</p>
<p><a href="https://i.sstatic.net/GPEz3WcQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GPEz3WcQ.png" alt="Properly resolved matshow plot with clearly distinct cells." /></a></p>
<p>Which is what I would have expected. Thus, this may be a Preview issue.</p>
<p>Is there a way to fix this?</p>
|
<python><matplotlib><latex><pdf-viewer><pgf>
|
2025-07-01 18:47:18
| 0
| 344
|
schade96
|
79,686,433
| 629,530
|
How are Python 3.14 t-strings different than f-strings
|
<p>Python 3.14 is introducing template strings, so-called t-strings. Can someone explain how these differ from f-strings? What new problems are t-strings helping developers solve?</p>
|
<python><python-3.14><python-template-strings>
|
2025-07-01 17:34:39
| 2
| 6,114
|
firebush
|
79,686,410
| 3,995,094
|
What is scipy's rotation function as_matrix from quaternion based on?
|
<p>The scipy implementation of <code>as_matrix</code> (from quaternions) which appears <a href="https://github.com/scipy/scipy/blob/main/scipy/spatial/transform/_rotation_cy.pyx" rel="nofollow noreferrer">here</a> seems different to the usual one found in books or online (as in <a href="https://en.wikipedia.org/wiki/Rotation_matrix#Conversions" rel="nofollow noreferrer">this wiki page</a>).</p>
<p>Where did they get it from? There is no reference there.</p>
|
<python><scipy>
|
2025-07-01 17:19:03
| 1
| 641
|
Anon
|
79,686,388
| 8,622,814
|
Leetcode Python 208 -- memory not clearing from previous test case?
|
<p>The problem for reference: <a href="https://leetcode.com/problems/implement-trie-prefix-tree" rel="nofollow noreferrer">https://leetcode.com/problems/implement-trie-prefix-tree</a></p>
<p>It seems straightforward enough, implement a trie. However, during one of the submission tests, where the input is</p>
<pre><code>["Trie","startsWith"]
[[],["a"]]
</code></pre>
<p>The expected answer is <code>[null,false]</code>, while my answer returned <code>[null,true]</code>. However, looking through my code, I was confident that my logic is correct and it should have returned False.</p>
<p>Upon further debugging by adding a print representation and printing out the input, I realised that despite the question input not having any insert, it in fact passed in a Trie with <code>app</code> and <code>apple</code>, presumably from the previous test case.</p>
<p>I read through the comments and found a couple of comments mentioning a similar issue, and a Github Issue at <a href="https://github.com/LeetCode-Feedback/LeetCode-Feedback/issues/6108" rel="nofollow noreferrer">https://github.com/LeetCode-Feedback/LeetCode-Feedback/issues/6108</a>, which was closed, even though the problem is still there.</p>
<p>Funnily enough, if I try to run the exact same code and test case using LeetCode's debugger mode, it returns the correct result.</p>
<p>While I am almost sure this was something wrong with LeetCode, this problem does not occur when I paste in other users' accepted solutions. This means that there is likely both something wrong with LeetCode and something in my code that reproduces this specific bug.</p>
<p>Code below for reference:</p>
<pre><code>from collections import deque
class Trie:
def __init__(self, children = {}, value = '', is_terminal = False):
self.children = children
self.value = value
self.is_terminal = is_terminal
def __str__(self):
s = ''
s += self.value
queue = deque([])
for child in self.children.values():
queue.append(child)
while queue:
node = queue.popleft()
s += node.value
if node.is_terminal:
s += 'T'
for child in node.children.values():
queue.append(child)
return s
def __repr__(self):
return self.__str__()
def get_child(self, val) -> "Trie":
return self.children[val]
def has_child(self, val) -> bool:
return val in self.children
def set_terminal(self) -> None:
self.is_terminal = True
def insert(self, word: str) -> None:
node = self
for char in word:
if not node.has_child(char):
new = Trie({}, char, False)
node.children[char] = new
node = node.get_child(char)
node.set_terminal()
def search(self, word: str) -> bool:
node = self
for char in word:
if not node.has_child(char):
return False
node = node.get_child(char)
return node.is_terminal
def startsWith(self, prefix: str) -> bool:
        print(self)  # this returns appTleT
node = self
for char in prefix:
if not node.has_child(char):
return False
node = node.get_child(char)
return True
</code></pre>
|
<python><python-3.x><algorithm><memory><memory-leaks>
|
2025-07-01 16:55:07
| 1
| 2,018
|
Samson
|
79,686,384
| 18,476,381
|
Pymongo and Beanie Incompatibility issues
|
<p>I'm trying to migrate from motor to pymongo's AsyncMongoClient.</p>
<p>After some upgrading/installing of pymongo, I am getting the error below when running the import <code>from beanie import Document</code>:</p>
<pre><code>ImportError: cannot import name '_QUERY_OPTIONS' from 'pymongo.cursor'
</code></pre>
<p>Using python 3.11.9</p>
<p>My dependencies are below:</p>
<pre><code>dependencies = [
"fastapi==0.95.0",
"uvicorn==0.22.0",
"gunicorn==20.1.0",
"elastic-apm==6.15.1",
"pymongo==4.13.2",
"pydantic==1.10.18",
"beanie==1.29.0",
"dnspython==2.2.1",
"python-dotenv==1.0.0",
"psutil==5.9.4",
"loguru==0.6.0",
"fastapi-etag==0.4.0",
"mongoengine==0.29.1",
"elasticsearch7==7.17.12",
"elasticsearch-dsl==7.4.1",
"promise==2.3",
"requests==2.31.0",
"pytz==2023.3",
"singleton-decorator==1.0.0",
"cachetools==5.3.1",
"pymysql==1.0.2",
"requests-custom-header==0.1.1",
"aiohttp==3.9.1",
"telempack==1.7.11",
"polars==1.9.0",
"jinja2==3.1.3",
"oracledb==2.5.1",
"numpy==2.2.3",
"pika==1.3.2",
"zstandard==0.23.0",
]
</code></pre>
<p>I've made multiple attempts to upgrade/downgrade beanie/pymongo/mongoengine, but it keeps throwing this error. Any ideas?</p>
|
<python><mongodb><pymongo><mongoengine><beanie>
|
2025-07-01 16:53:04
| 1
| 609
|
Masterstack8080
|
79,686,332
| 27,596,369
|
np.random() is getting treated as module
|
<p>I am trying to create a random NumPy array using <code>np.random()</code>, but for some reason, instead of treating it as a function, Google Colab treats it as a module. I checked the documentation, but I would prefer not to use <code>default_rng</code>, since, to my understanding, I will always get the same array with the same <code>default_rng</code>. I am very confused, and help will be appreciated!</p>
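<p>To make the confusion concrete, this is what I have been comparing (my understanding of <code>default_rng</code> may well be wrong):</p>
<pre><code>import numpy as np

a = np.random.random((2, 3))       # module-level function call; np.random() itself fails
rng = np.random.default_rng()      # no seed passed here
b = rng.random((2, 3))             # does this always give the same array, as I assumed?
print(a)
print(b)
</code></pre>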
|
<python><arrays><list><numpy><random>
|
2025-07-01 16:07:40
| 1
| 1,512
|
Aadvik
|
79,686,262
| 17,945,841
|
PCA with direction arrows showing genes associated with samples
|
<p>I'm trying to reproduce a PCA biplot where</p>
<p>dots = samples coloured by the column <code>sample</code> in <code>pb.obs</code></p>
<p>arrows = genes, pointing toward the region(s) of the plot where each gene is most highly expressed.</p>
<p>So, if a gene is associated with a certain sample, the direction of the arrow on the PCA plot would be towards that sample.
If a gene is high in two samples, I expect the arrow to end up between the two sample clouds; if it's high in three, the arrow should shift accordingly, and so on.</p>
<p><strong>Problem</strong> - I was able to make a PCA (the clustering on the PCA plot is good) with arrows, where each arrow (or line) corresponds to a single gene. But the arrows have no meaning. Some genes (arrows) point towards a certain cluster of samples, although those samples don't even express that gene. So I just need help with making the arrows meaningful.</p>
<p>An example photo I took from Google shows beautifully what I need:</p>
<p><a href="https://i.sstatic.net/gYTim9TI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gYTim9TI.png" alt="enter image description here" /></a></p>
<p>This is my code, but as I said it results in arrows with no meaning at all. The PCA itself is OK:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from adjustText import adjust_text
import scanpy as sc
sc.pp.normalize_total(pb, target_sum=1e6) # counts per million
sc.pp.log1p(pb)
sc.pp.highly_variable_genes(pb, n_top_genes=1000)
pb = pb[:, pb.var['highly_variable']].copy()
sc.tl.pca(pb)
sc.pp.neighbors(pb, n_neighbors=5, n_pcs=30)
sc.tl.umap(pb)
PCS = (1, 2) # PCs
N_PER_SIDE = 15
COLOR_BY = 'sample' # column in pb.obs for dot colours
GENE_CLR = 'firebrick'
ARROW_KW = dict(ls='-', lw=1.4, alpha=.9)
LABEL_PAD = 1.08 # push gene names just beyond arrow tips
LEN_FRAC = .85 # longest arrow reaches 85 % of sample radius
# โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ
# 1 pull scores & loadings
c1, c2 = (i - 1 for i in PCS)
S = pb.obsm['X_pca'][:, [c1, c2]]
L_raw = pb.varm['PCs'][:, [c1, c2]]
eigvals = pb.uns['pca']['variance'][[c1, c2]]
L_corr = L_raw * np.sqrt(eigvals) # correlations
# 2 pick genes: top N each side of each PC
idxs = []
for dim in (0, 1): # PC-1 then PC-2
for sign in (+1, -1):
# sort by signed loading
order = np.argsort(sign * L_corr[:, dim])
idxs.extend(order[-N_PER_SIDE:]) # grab N from the end
idxs = list(dict.fromkeys(idxs)) # keep order unique
# 3 rescale arrows so they fill the plot
max_r_gene = np.sqrt((L_corr[idxs]**2).sum(1)).max()
max_r_score = np.abs(S).max()
scale = (LEN_FRAC * max_r_score) / max_r_gene
V = L_corr[idxs] * scale
# 4 plot
fig, ax = plt.subplots(figsize=(15, 13))
# dots
codes = pb.obs[COLOR_BY].astype('category').cat.codes
scat = ax.scatter(S[:, 0], S[:, 1], c=codes, cmap='tab20',
s=200, edgecolor='k', alpha=.85)
# legend to the right
handles, _ = scat.legend_elements(prop='colors')
labels = pb.obs[COLOR_BY].astype('category').cat.categories
ax.legend(handles, labels, title=COLOR_BY, loc='center left',
bbox_to_anchor=(1.05, .5), frameon=False)
# arrows and text
texts = []
for (dx, dy), i in zip(V, idxs):
ax.arrow(0, 0, dx, dy, color=GENE_CLR, **ARROW_KW,
head_width=.03*scale/max_r_score,
head_length=.04*scale/max_r_score,
length_includes_head=True, zorder=3)
texts.append(ax.text(dx*LABEL_PAD, dy*LABEL_PAD,
pb.var_names[i], color=GENE_CLR,
fontsize=12, ha='center', va='center', zorder=4))
adjust_text(texts, arrowprops=dict(arrowstyle='-', color=GENE_CLR))
vr = pb.uns['pca']['variance_ratio'][[c1, c2]] * 100
ax.set_xlabel(f'PC{PCS[0]} ({vr[0]:.1f} % var)', fontsize=13)
ax.set_ylabel(f'PC{PCS[1]} ({vr[1]:.1f} % var)', fontsize=13)
ax.axhline(0, lw=.5, color='grey'); ax.axvline(0, lw=.5, color='grey')
ax.set_aspect('equal')
ax.set_title('PCA biplot โ top gene drivers per axis')
plt.tight_layout(); plt.show()
</code></pre>
<p>This code generates dummy data you can use. The <code>pb</code> generated here is built exactly like my data: <code>pb.obs</code> holds the metadata, <code>pb.var</code> holds the names of the genes, and <code>pb.X</code> holds the expression data itself.</p>
<pre><code>import numpy as np
import pandas as pd
import scanpy as sc
np.random.seed(42)
n_cells = 180
n_genes = 500
groups = np.repeat(['A', 'B', 'C'], repeats=n_cells//3) # 60 each
# counts
X = np.random.poisson(lam=1.5, size=(n_cells, n_genes)).astype(float)
# marker genes: first 50 for A, next 50 for B, next 50 for C
X[groups == 'A', :50] += np.random.poisson(4, ( (groups == 'A').sum(), 50))
X[groups == 'B', 50:100] += np.random.poisson(4, ((groups == 'B').sum(), 50))
X[groups == 'C',100:150] += np.random.poisson(4, ((groups == 'C').sum(), 50))
pb = sc.AnnData(X,
obs = pd.DataFrame({'sample': groups},
index=[f'cell{i}' for i in range(n_cells)]),
var = pd.DataFrame(index=[f'gene{j}' for j in range(n_genes)]))
sc.pp.normalize_total(pb, target_sum=1e6, inplace=True)
sc.pp.log1p(pb)
sc.pp.pca(pb, n_comps=20)
</code></pre>
|
<python><matplotlib><plot><seaborn><pca>
|
2025-07-01 15:16:31
| 0
| 1,352
|
Programming Noob
|
79,686,222
| 416,104
|
How to set up shiv to properly use an existent .venv site-packages and not redownload packages?
|
<p>I'm trying to use shiv with a project that contains a pyproject.toml in my CI.</p>
<p>This project is being built, in a previous step, by PDM with SetupTools as build-backend, and all dependencies are already in the .venv created by it.</p>
<p>I just want shiv to pack it.</p>
<p>But when it executes</p>
<pre class="lang-none prettyprint-override"><code>pdm run shiv --compressed -e myproject.cli:cli -o myproject.pyz
-p '/usr/bin/env python3.12' --no-build-isolation
--site-packages .venv/lib/python3.12/site-packages
</code></pre>
<p>Then shiv starts to download everything again and, for some deps, to build them.</p>
<pre class="lang-none prettyprint-override"><code>Collecting python-dotenv>=0.21.0 (from pydantic-settings->myproject==4.1.0rc1)
Downloading https://artifactory/api/pypi/python/packages/packages/5f/ed/539768cf28c661b5b068d66d96a2f155c4971a5d55684a514c1a0e0dec2f/python_dotenv-1.1.1-py3-none-any.whl (20 kB)
Building wheels for collected packages: airspeed
...
</code></pre>
|
<python><python-pdm><linkedin-shiv>
|
2025-07-01 14:42:30
| 0
| 1,865
|
Cristiano
|
79,686,145
| 11,064,604
|
PowerShell equivalent of Linux `nohup command &` - Run processes that survive terminal closure
|
<p>I need to run Python programs that continue running even after I close the PowerShell terminal window. In Linux bash, I use:</p>
<pre class="lang-bash prettyprint-override"><code>nohup python my_program.py &
</code></pre>
<p>This runs the program in the background and it survives terminal closure.</p>
<p><strong>What is the PowerShell equivalent of <code>nohup command &</code>?</strong></p>
<h2>What I've tried:</h2>
<ul>
<li>Running <code>python my_program.py</code> directly - process dies when terminal closes</li>
<li>Using <code>Start-Process</code> without parameters - still terminates with terminal</li>
</ul>
<h2>Requirements:</h2>
<ul>
<li>Process must survive PowerShell terminal closure</li>
<li>Should run in background (no visible window)</li>
<li>My Python program spawns subprocesses that should also survive</li>
</ul>
|
<python><powershell>
|
2025-07-01 13:34:22
| 1
| 353
|
Ottpocket
|
79,685,872
| 6,171,575
|
Python script for SOLIDserver DNS: IP/hostname update successful but "shortname" persists in Advanced Properties
|
<p>I'm working in a small laboratory. The DNS is managed by our institute, but we have access to the efficientIP web interface, so we can change the names of our computers.</p>
<p>But we want to automate this, so I created a little Python script to change the name for a given IP using the <strong>SOLIDserverRest</strong> library.</p>
<p>It almost works.
I can update an IP's hostname and it becomes active (nslookup confirms the changes in both directions, IP->DNS and DNS->IP), but if I look up this IP in the efficientIP web interface, in the Advanced Properties tab, DNS section, I still see a "Shortname" which points to the old hostname.</p>
<p>Here is the code that changes the IP hostname:</p>
<pre><code>#First I get ip_id corresponding to my IP object then
update_params = {
"ip_id": ip_id,
"ip_name": fqdn,
}
update_answer = sds_con.query("ip_address_update", update_params)
</code></pre>
<p>As I said, it works, but it didn't change the shortname in the DNS section.
After some googling and asking chat-bots, I tried:</p>
<ol>
<li><p>There was an idea that I also need to do "dns_rr_update". But it didn't change anything, and even without doing "dns_rr_update", if I do "dns_rr_info" for my IP, all the information points to the new hostname.</p>
</li>
<li><p>Some chat-bots proposed looking at the "rr_type: PTR" object, but it didn't work. Searching for rr_type:"PTR" gives me nothing.</p>
</li>
</ol>
<p>I'd like to understand what this "shortname" in the DNS properties corresponds to,
and how I can change it.</p>
<p>I suppose that I am doing something wrong, as changing it via the web interface changes everything: IPAM and DNS.</p>
|
<python><dns>
|
2025-07-01 09:57:14
| 0
| 577
|
Paul Zakharov
|
79,685,861
| 8,119,069
|
Weird characters in Shell after importing seleniumbase
|
<p>I have a small annoyance with seleniumbase: after importing it, printing to the Shell window will always add strange characters like "[0m" for some reason. See the image to show what I mean. <a href="https://i.sstatic.net/2aALhcM6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2aALhcM6.png" alt="enter image description here" /></a></p>
<p>Is there a way to make these characters go away? I'm sure it happens just because of importing something from seleniumbase.</p>
|
<python><seleniumbase>
|
2025-07-01 09:50:07
| 1
| 501
|
DoctorEvil
|
79,685,823
| 1,750,612
|
Is there a way to mimic pip --extra-index-url functionality for transitive dependency resolution?
|
<p>In my corporate environment we are forced to proxy PyPI through our internal artifactory service, e.g.:</p>
<pre><code>https://[COMPANY URL]/artifactory/api/pypi/pypi-release/simple
</code></pre>
<p>I therefore set up my <code>pyproject.toml</code> with this as the primary source:</p>
<pre class="lang-toml prettyprint-override"><code>[[tool.poetry.source]]
name = "artifactory_release"
url = "https://[COMPANY URL]/artifactory/api/pypi/pypi-release/simple"
priority = "primary"
</code></pre>
<p>The problem is that my company actually hosts multiple levels of artifactory for different types of internal builds. Most projects would get built & hosted in a different entirely firewalled internal repository, which does not have access to the wider PyPI:</p>
<pre><code>https://[COMPANY URL]/artifactory/api/pypi/python-internal-unstable/simple
</code></pre>
<p>Normally I would list this as a secondary/supplemental source, like so:</p>
<pre class="lang-toml prettyprint-override"><code>[[tool.poetry.source]]
name = "artifactory_unstable"
url = "https://[COMPANY URL]/artifactory/api/pypi/python-internal-unstable/simple"
priority = "supplemental"
</code></pre>
<p>I would then expect to be able to pull packages specifically from this secondary repository by specifying the dependencies in my <code>pyproject.toml</code> file like so:</p>
<pre class="lang-toml prettyprint-override"><code>[tool.poetry.dependencies]
python = "^3.10 <3.12"
numpy = "1.23.5"
pandas = "1.4.3"
[MY PACKAGE] = {version = "[VERSION NO]", source = "artifactory_unstable"}
</code></pre>
<p>My problem is that when I try to pull some internally built packages from this secondary source, it looks like Poetry tries to resolve their dependencies (i.e. transitive dependencies) through that same secondary source, which does not have access to the wider PyPI. This therefore results in an error:</p>
<pre class="lang-none prettyprint-override"><code>(base) my-computer: curr_dir$ poetry lock
Updating dependencies
Resolving dependencies... (10.6s)
ValueError
Package('[MY PACKAGE]', '[VERSION NO]') is not in list
at ~/.local/pipx/venvs/poetry/lib/python3.11/site-packages/poetry/repositories/legacy_repository.py:66 in package
62โ Note that this will be cached so the subsequent operations
63โ should be much faster.
64โ """
65โ try:
โ 66โ index = self._packages.index(Package(name, version))
67โ
68โ return self._packages[index]
69โ except ValueError:
70โ package = super().package(name, version, extras)
The following error occurred when trying to handle this error:
AssertionError
at ~/.local/pipx/venvs/poetry/lib/python3.11/site-packages/poetry/inspection/lazy_wheel.py:526 in _fetch_content_length
522โ # If we *could* download some file contents, then write them to the end of
523โ # the file and set up our bisect boundaries by hand.
524โ with self._stay():
525โ response_length = int(tail.headers["Content-Length"])
โ 526โ assert response_length == min(initial_chunk_size, ret_length)
527โ self.seek(-response_length, io.SEEK_END)
528โ # Default initial chunk size is currently 1MB, but streaming content
529โ # here allows it to be set arbitrarily large.
530โ for chunk in tail.iter_content(CONTENT_CHUNK_SIZE):
</code></pre>
<p>When I use normal pip rather than Poetry, I can force the correct behaviour by specifying the primary PyPI index URL as an extra index URL for this specific package. This allows pip to pull my package from the internal firewalled repository, but to resolve its dependencies through the public PyPI proxy:</p>
<pre class="lang-none prettyprint-override"><code>(base) my-computer: curr_dir$ python -m pip install --index-url https://[COMPANY URL]/artifactory/api/pypi/python-internal-unstable/simple/ --extra-index-url https://[COMPANY URL]/artifactory/api/pypi/pypi-release/simple [MY PACKAGE]
Looking in indexes: https://[COMPANY URL]/artifactory/api/pypi/python-internal-unstable/simple/, https://[COMPANY URL]/artifactory/api/pypi/pypi-release/simple
Collecting [MY PACKAGE]
Using cached https://[COMPANY URL]/artifactory/api/pypi/python-internal-unstable/[DIR PATH]/[MY PACKAGE]-[VERSION NO]-py3-none-any.whl (26 kB)
Requirement already satisfied: azure-monitor-opentelemetry<2.0.0,>=1.6.5 in /Users/[ME]/Library/Caches/pypoetry/virtualenvs/[POETRY ENV]/lib/python3.11/site-packages (from [MY PACKAGE]) (1.6.10)
Collecting opencensus-ext-azure<2.0.0,>=1.1.4 (from [MY PACKAGE])
...
...
...
Downloading https://COMPANY URL/artifactory/api/pypi/pypi-release/packages/packages/.../.../pyasn1-0.6.1-py3-none-any.whl (83 kB)
Installing collected packages: opencensus-context, typing-inspection, python-dotenv, pydantic-core, ..., opencensus-ext-azure, [MY PACKAGE]
Successfully installed annotated-types-0.7.0 cachetools-5.5.2 ... opencensus-ext-logging-0.1.1 [MY PACKAGE]-[VERSION NO] ... rsa-4.9.1 typing-inspection-0.4.1
</code></pre>
<p>Is this possible to do with Poetry?</p>
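<p>For completeness, this is the configuration I was planning to try next, based on my reading of Poetry's source priorities (I am not certain my Poetry version supports the <code>explicit</code> priority, so treat this as an assumption):</p>
<pre class="lang-toml prettyprint-override"><code>[[tool.poetry.source]]
name = "artifactory_unstable"
url = "https://[COMPANY URL]/artifactory/api/pypi/python-internal-unstable/simple"
priority = "explicit"

[tool.poetry.dependencies]
# only packages that explicitly name this source should be looked up there;
# transitive dependencies should then resolve via the primary source
[MY PACKAGE] = {version = "[VERSION NO]", source = "artifactory_unstable"}
</code></pre>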
|
<python><python-3.x><pip><artifactory><python-poetry>
|
2025-07-01 09:26:11
| 1
| 359
|
MikeFenton
|
79,685,744
| 6,322,924
|
cv2 findChessboardCornersSB not working, detects only some corners
|
<p>I have pictures of my checkerboard, but findChessboardCornersSB() or findChessboardCorners() cannot detect all corners, only some of them, or none, regardless of checkerboard dimensions that I input.<a href="https://i.sstatic.net/GPylrUXQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GPylrUXQ.png" alt="enter image description here" /></a></p>
<p>Here is the <a href="https://ibb.co/q31QHHwf" rel="nofollow noreferrer">image of the sample</a></p>
<p>I usually work with FITS files, but for this I exported the image to PNG.</p>
<pre><code>img = cv2.imread(filename, cv2.IMREAD_GRAYSCALE)
# Try finding the chessboard
chessboard_size = (6, 7) # Also tried with (7,6), (6,9), (9,6)
flags = cv2.CALIB_CB_EXHAUSTIVE + cv2.CALIB_CB_ACCURACY
flags_std = cv2.CALIB_CB_FAST_CHECK + cv2.CALIB_CB_ADAPTIVE_THRESH
# flags_std = cv2.CALIB_USE_INTRINSIC_GUESS
ret_sb, corners_sb = cv2.findChessboardCornersSB(img, chessboard_size, flags=flags)
ret_std, corners_std = cv2.findChessboardCorners(img, chessboard_size, flags=cv2.CALIB_CB_FAST_CHECK | cv2.CALIB_CB_ADAPTIVE_THRESH) # Standard method
</code></pre>
|
<python><opencv><image-processing><computer-vision>
|
2025-07-01 08:30:58
| 0
| 607
|
Falco Peregrinus
|
79,685,701
| 5,520,689
|
How to call a python MCP tool hosted on Google Cloud Run
|
<p>I have deployed a Python script for an MCP server in a Docker container on Google Cloud Run.
Below is a sample script.</p>
<pre><code>import asyncio
import logging
import os
from fastmcp import FastMCP
logger = logging.getLogger(__name__)
logging.basicConfig(format="[%(levelname)s]: %(message)s", level=logging.INFO)
mcp = FastMCP("MCP Server on Cloud Run")
@mcp.tool()
def add(a: int, b: int) -> int:
"""Use this to add two numbers together.
Args:
a: The first number.
b: The second number.
Returns:
The sum of the two numbers.
"""
logger.info(f">>> Tool: 'add' called with numbers '{a}' and '{b}'")
return a + b
if __name__ == "__main__":
logger.info(f" MCP server started on port {os.getenv('PORT', 8080)}")
# Could also use 'sse' transport, host="0.0.0.0" required for Cloud Run.
asyncio.run(
mcp.run_async(
transport="streamable-http",
host="0.0.0.0",
port=os.getenv("PORT", 8080),
)
)
</code></pre>
<p>I have put this in Docker, deployed the image to Cloud Run, and got the HTTPS endpoint for making streamable HTTP requests.
<a href="https://i.sstatic.net/8212yAUT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8212yAUT.png" alt="The Cloud Run service showing the deployment" /></a></p>
<p>I have created a service account with the Cloud Run Invoker permission and generated a JSON key. But when I try to access the service from Python, I am getting a 403 unauthorized error.
I used the code below to try to call the MCP server.</p>
<pre><code>import os
import json
import requests
import google.oauth2.id_token
import google.auth.transport.requests
def runCloudFunction():
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = 'path\to\file.json'
request = google.auth.transport.requests.Request()
audience = 'https://cloud_run_service_url'
TOKEN = google.oauth2.id_token.fetch_id_token(request, audience)
print(TOKEN)
r=requests.post(
audience+'/mcp',
headers={'Authorization':"Bearer "+TOKEN, 'Content-Type':'application/json'})
print(r.status_code)
if __name__ == "__main__":
runCloudFunction()
</code></pre>
<p>The above code prints the token but returns status 403 for the request to the service.
I do not want to remove authentication from the service, since that would make it insecure. So I have selected the Require Authentication option.
<a href="https://i.sstatic.net/HgNVhUOy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HgNVhUOy.png" alt="Security Settings for Cloud Service" /></a></p>
<p>I checked that public access is enabled for the Cloud Service in Networking Settings.
<a href="https://i.sstatic.net/9nFFpzsK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9nFFpzsK.png" alt="Networking Settings" /></a></p>
<p>I will be really grateful if someone can let me know what I missed. I am not aware of the body to pass to the service to call a particular Python function/MCP tool. It will be helpful if someone can guide me on that as well. Thank you in advance.</p>
|
<python><https><google-cloud-run><model-context-protocol>
|
2025-07-01 07:53:07
| 1
| 358
|
Sachu
|
79,685,618
| 2,638,256
|
Checking for Missing Signatures in Uploaded PDF Using Python
|
<p>I am new to Python and I want to check whether the uploaded PDF file contains all the required signatures. If any are missing, it should return "Signature missing." I've tried reading the file, but I'm not sure how to implement this check.</p>
<pre><code>try:
images = convert_from_path(pdf_path, dpi=200)
print(f"[DEBUG] PDF has {len(images)} pages")
if len(images) >= 1:
target_page_index = 0 # TEMP: Try first page (adjust as needed)
page_image = images[target_page_index]
page_image.save("debug_page0.png")
# Debug test crop
test_box = (100, 100, 300, 200)
crop_test = page_image.crop(test_box)
crop_test.save("debug_crop_test.png")
# Real signature areas โ update these after checking debug image
inspector_sig_area = (510, 590, 650, 630)
reviewer_sig_area = (510, 640, 650, 680)
inspector_crop = page_image.crop(inspector_sig_area)
reviewer_crop = page_image.crop(reviewer_sig_area)
inspector_crop.save("debug_inspector_box.png")
reviewer_crop.save("debug_reviewer_box.png")
fields["Inspector Signed"] = is_signature_box_filled(inspector_crop)
fields["Reviewer Signed"] = is_signature_box_filled(reviewer_crop)
else:
print("[!] No pages found in PDF!")
except Exception as e:
print(f"[!] Signature box check failed: {e}")
</code></pre>
<p><a href="https://i.sstatic.net/vnExvHo7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vnExvHo7.png" alt="enter image description here" /></a></p>
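<p>For reference, this is roughly the kind of check I am attempting inside <code>is_signature_box_filled</code> (the cutoff and threshold values here are arbitrary guesses, not my real ones):</p>
<pre><code>import numpy as np

def is_signature_box_filled(crop, dark_cutoff=128, ink_fraction=0.01):
    """Rough check: treat the box as signed if enough pixels are dark ("ink")."""
    gray = np.array(crop.convert("L"))          # PIL crop -> grayscale array
    dark_share = (gray < dark_cutoff).mean()    # fraction of dark pixels
    return dark_share > ink_fraction
</code></pre>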
|
<python><image-reader>
|
2025-07-01 06:43:32
| 1
| 1,239
|
kreya
|
79,685,528
| 6,141,238
|
When using asyncio, how can I throttle the rate of a dependent sequence of web requests without blocking responses to these requests?
|
<p>I am attempting to send out a grid, or lattice, or "matrix" of API requests using <code>asyncio</code>: For a "category 1," I may wish to request the datasets 1a, 1b, 1c. For category 2, I may wish to request the data sets 2a, 2b, 2c. And for category 3, I may wish to request the datasets 3a, 3b, 3c. Whether I request 1b depends on the nature of 1a, and whether I request 1c depends on the nature of 1a and 1b, so I have to request 1a, 1b, and 1c in the listed order. An analogous statement holds for categories 2 and 3.</p>
<p>The server receiving these requests takes 0.15 sec to respond to each request, but can respond to up to 10 requests/sec. So I may be able to reduce my runtime by 90% by parallelizing the requests using <code>asyncio</code>. But to do so I need to determine how to throttle my rate of sending to 10 requests/sec.</p>
<p>If I wanted to collect all 1a, 1b, ..., 3c (i.e., if the requests were independent), then I believe I could just use a sequence of <code>create_task</code> calls in a for loop like:</p>
<pre><code>secs_per_req = 0.1
async with aiohttp.ClientSession() as session:
async with asyncio.TaskGroup() as tg:
for k in [1, 2, 3]:
tg.create_task(collect_a_data(k, session))
await asyncio.sleep(secs_per_req)
tg.create_task(collect_b_data(k, session))
await asyncio.sleep(secs_per_req)
tg.create_task(collect_c_data(k, session))
await asyncio.sleep(secs_per_req)
</code></pre>
<p>However, the fact that 1b and 1c depend on 1a, and 1c depends on 1b, perhaps suggests that I should try a for loop like:</p>
<pre><code>secs_per_req = 0.1
async with aiohttp.ClientSession() as session:
async with asyncio.TaskGroup() as tg:
for k in [1, 2, 3]:
tg.create_task(collect_abc_data(k, session))
await asyncio.sleep(secs_per_req)
</code></pre>
<p>where <code>collect_abc_data(k, session)</code> collects ka and, if needed, kb and kc, and contains all of the logic for deciding whether to request kb and kc. But then I need a way to limit the rate of the requests within <code>collect_abc_data</code>. A natural way to do this would be to sleep (in some manner) immediately before sending each request. However:</p>
<ul>
<li><p>Including an <code>await asyncio.sleep(secs_per_req)</code> before each request in <code>collect_abc_data</code> does not limit the rate at which requests are sent. The parent function just moves on to the next task and continues sending out requests without pause.</p>
</li>
<li><p>Including a <code>time.sleep(secs_per_req)</code> before each request achieves closer to the desired result; it blocks any other task from running during the sleep interval, which limits the rate at which requests are sent. <em>However, it also blocks in a second manner: it forces the requests for 1a, 2a, and 3a to be sent before it processes the response to the request for 1a.</em> For my use, this is nonideal because it delays the collection of ka, kb, and kc, which then delays downstream processing of this data. This becomes especially problematic when there are, say, tens of thousands of k values to request, as there are in my case. (The delay may then amount to hours.)</p>
</li>
</ul>
<p>So, if the above approaches fail, is there a standard way to solve this problem within the <code>asyncio</code> framework?</p>
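<p>For concreteness, this is the kind of shared limiter I have been sketching, awaited right before each request inside <code>collect_abc_data</code>; I do not know whether something like this is the idiomatic solution:</p>
<pre><code>import asyncio
import time

class RateLimiter:
    """Allow at most one acquire() per `interval` seconds, shared by all tasks."""

    def __init__(self, interval: float):
        self.interval = interval
        self._lock = asyncio.Lock()
        self._next_slot = 0.0

    async def acquire(self) -> None:
        async with self._lock:
            now = time.monotonic()
            wait = self._next_slot - now
            self._next_slot = max(now, self._next_slot) + self.interval
            if wait > 0:
                await asyncio.sleep(wait)

limiter = RateLimiter(0.1)  # roughly 10 requests/sec

async def collect_abc_data(k, session):
    await limiter.acquire()
    # ... request and await dataset ka, decide whether kb is needed ...
    await limiter.acquire()
    # ... request kb, then possibly kc after another acquire() ...
</code></pre>
<p>With this, only tasks waiting on the limiter are delayed, while tasks that have already sent their requests keep processing responses as they arrive.</p>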
|
<python><asynchronous><async-await><python-asyncio><sleep>
|
2025-07-01 04:33:37
| 1
| 427
|
SapereAude
|
79,685,448
| 1,850,484
|
Request for Hyperledger Aries framework persistent askar wallet
|
<p>The default configuration for the Askar wallet in ACA-Py creates an in-memory (non-persistent) wallet unless explicit parameters are provided to make it persistent. My intention is to create a persistent wallet, so I have specified the required wallet parameters (such as <code>--wallet-type askar</code>, <code>--wallet-name</code>, and <code>--wallet-key</code>).</p>
<p>However, I suspect that my custom parameters are not actually being used by the Aries agent, possibly due to internal defaults or code logic that overrides user-supplied values. The primary evidence for this is that, after running the agent, there is no <code>.sqlite</code> wallet database in the <code>~/.aries_askar/</code> directory (as verified with <code>ls -la ~/.aries_askar/</code>). This strongly suggests that the agent is still creating an in-memory wallet, rather than a persistent one.</p>
<p>Additionally, as noted in the documentation, in-memory wallets are usually assigned a random name such as <code>faber_wallet_check30a06fc5</code>. If the agent output shows such a name, or a name different from the one I specified, it further confirms that my parameters are being ignored or overridden.</p>
<p>Despite specifying persistent wallet parameters, the agent continues to create a non-persistent in-memory wallet, most likely because internal defaults take precedence over my custom arguments. How can I make the wallet persistent?</p>
<pre><code># faber_wallet_check.py
import asyncio
import sys
from runners.agent_container import AriesAgent, create_agent_with_args, arg_parser
async def main():
# Step 1: Parse CLI args (arg_parser already includes --wallet-type)
parser = arg_parser(ident="faber_wallet_check", port=8020)
# Add only what is NOT present in arg_parser
parser.add_argument(
"--wallet-name",
help="Name of the wallet (for persistence, e.g., 'faber_wallet')",
default=None,
)
parser.add_argument(
"--wallet-key",
help="Key/passphrase for the wallet",
default=None,
)
parser.add_argument(
"--seed",
help="Deterministic seed for agent DID (32 characters, optional)",
default=None,
)
args = parser.parse_args()
# Debug: print parsed configuration
print("Initializing demo agent faber_wallet_check with AIP 20 and credential type indy")
print("Parsed configuration:")
for key in ("wallet_type", "wallet_name", "wallet_key", "seed"):
print(f" {key}: {getattr(args, key, None)}")
# Validate seed length if provided
if args.seed and len(args.seed) != 32:
print(f"ERROR: --seed must be exactly 32 characters (got {len(args.seed)})", file=sys.stderr)
sys.exit(1)
# Step 2: Prepare extra args for the agent launcher
extra = {}
if args.wallet_name:
extra["wallet-name"] = args.wallet_name
if args.wallet_key:
extra["wallet-key"] = args.wallet_key
if args.seed:
extra["seed"] = args.seed
if args.wallet_type:
extra["wallet-type"] = args.wallet_type
# Step 3: Launch the agent
faber_agent = await create_agent_with_args(
args,
ident="faber_wallet_check",
extra_args=extra,
)
# Step 4: Create AriesAgent wrapper for admin API calls
agent = AriesAgent(
ident="faber_wallet_check",
http_port=faber_agent.start_port,
admin_port=faber_agent.start_port + 1,
genesis_data=faber_agent.genesis_txns,
genesis_txn_list=faber_agent.genesis_txn_list,
wallet_type=args.wallet_type,
seed=args.seed,
public_did=False,
)
# Step 5: Initialize ACA-Py (create wallet, register public DID if enabled)
await faber_agent.initialize(the_agent=agent)
# Step 6: Wait for admin API readiness
max_retries = 30
for i in range(1, max_retries + 1):
try:
await agent.admin_GET("/status")
break
except Exception:
print(f"Waiting for admin API (attempt {i}/{max_retries})...")
await asyncio.sleep(1)
else:
print("ERROR: Admin API did not become available, exiting.")
await faber_agent.terminate()
sys.exit(1)
# Step 7: List all DIDs in the wallet to confirm persistence
try:
resp = await agent.admin_GET("/wallet/did")
print("DIDs currently in wallet:")
for did_entry in resp.get("results", []):
did = did_entry.get("did")
verkey = did_entry.get("verkey")
public = did_entry.get("public", False)
print(f" DID: {did}, Verkey: {verkey}, Public: {public}")
except Exception as e:
print(f"ERROR fetching DIDs from wallet: {e}", file=sys.stderr)
# Cleanup: terminate the ACA-Py agent process
await faber_agent.terminate()
if __name__ == '__main__':
asyncio.get_event_loop().run_until_complete(main())
</code></pre>
<p>Input</p>
<pre><code>python3 -m runners.faber_wallet_check \
--wallet-type askar \
--wallet-name faber_wallet \
--wallet-key supersecretfaberkek \
--seed faber000000000000000000000000001
</code></pre>
<p>Output</p>
<pre><code>(venv) nmuslim162022@instance-20250614-033016:~/aries-cloudagent-python/demo$ python3 -m runners.faber_wallet_check \
--wallet-type askar \
--wallet-name faber_wallet \
--wallet-key supersecretfaberkek \
--seed faber000000000000000000000000001
Initializing demo agent faber_wallet_check with AIP 20 and credential type indy
Parsed configuration:
**wallet_type**: askar
**wallet_name**: faber_wallet
**wallet_key**: supersecretfaberkek
seed: faber000000000000000000000000001
Initializing demo agent faber_wallet_check with AIP 20 and credential type indy
Started webhook listener on port: 8022
Aries | Registering faber_wallet_check ...
using ledger: http://localhost:9000/register
Aries | nym_info: {'did': 'QWTxizRo9A1tWdEPYkFPHe', 'seed': 'faber000000000000000000000000001', 'verkey': 'Dp8p4z8fC3MadWn4q5BxtfJMwyXNhaCJRfhB4sAepkU7'}
Aries | Registered DID: QWTxizRo9A1tWdEPYkFPHe
Created public DID
Aries | ['/home/nmuslim162022/aries-cloudagent-python/venv/bin/python3', '-m', 'aries_cloudagent', 'start', '--endpoint', 'http://localhost:8020', '--label', 'faber_wallet_check', '--auto-ping-connection', '--auto-respond-messages', '--inbound-transport', 'http', '0.0.0.0', '8020', '--outbound-transport', 'http', '--admin', '0.0.0.0', '8021', '--admin-insecure-mode', '**--wallet-type**', 'askar', '-**-wallet-name**', 'faber_wallet_check30a06fc5', '**--wallet-key**', 'faber_wallet_check30a06fc5', '--preserve-exchange-records', '--auto-provision', '--public-invites', '--emit-new-didcomm-prefix', '--genesis-transactions',
...
(venv) nmuslim162022@instance-20250614-033016:~/aries-cloudagent-python/demo$ ls -la ~/.aries_askar/
total 8
drwxrwxr-x 2 nmuslim162022 nmuslim162022 4096 Jun 30 08:35 .
drwxr-x--- 37 nmuslim162022 nmuslim162022 4096 Jul 1 00:51 ..
</code></pre>
|
<python><hyperledger-indy><hyperledger-aries>
|
2025-07-01 02:08:42
| 0
| 388
|
user1850484
|
79,685,368
| 4,690,023
|
How to accurately pick/select cells in PyVista with single mouse click?
|
<p>I'm trying to implement single-click cell selection in PyVista to get the correct cell index when clicking on a mesh.</p>
<p>I've tried using <code>pv.Plotter().enable_cell_picking()</code>, but this has two issues: it seems to allow only rectangle selection, and the callback does not return a single cell but an <code>UnstructuredGrid</code> object from which I can't extract the cell index.</p>
<p>I've also tried <code>enable_element_picking</code>, which allows selecting a single cell, but it still returns an <code>UnstructuredGrid</code> object.</p>
<p>Here is a simple example to reproduce the problem</p>
<pre><code>import pyvista as pv
import numpy as np
pl = pv.Plotter()
mesh = pv.Sphere()
mesh.cell_data['colors'] = np.ones(mesh.n_cells) * 0
pl.add_mesh(mesh, show_edges=True, scalars='colors',)
def my_callback(picked_cell):
"""
Callback function to print the cell index.
This function is triggered when a cell is picked.
"""
cell_index = picked_cell.index
mesh.cell_data['colors'][:] = 0
mesh.cell_data['colors'][cell_index] = 1
pl.update()
print(f"You clicked on cell index: {cell_index}")
pl.enable_element_picking(
callback=my_callback,
mode='cell',
picker='cell',
)
pl.show()
</code></pre>
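<p>For reference, a sketch of one way the original index might be recovered from the picked selection. It assumes the extracted <code>UnstructuredGrid</code> carries a <code>vtkOriginalCellIds</code> cell-data array (VTK extraction often attaches one), which is worth verifying for your PyVista version; it is meant as a drop-in replacement for the callback above:</p>
<pre class="lang-py prettyprint-override"><code>def my_callback(picked):
    # `picked` is the extracted selection (an UnstructuredGrid), not an index.
    if picked is None or picked.n_cells == 0:
        return
    # Assumption: extraction kept the mapping back to the original mesh.
    if "vtkOriginalCellIds" not in picked.cell_data.keys():
        print("No vtkOriginalCellIds array on the picked selection")
        return
    cell_index = int(picked.cell_data["vtkOriginalCellIds"][0])
    mesh.cell_data['colors'][:] = 0
    mesh.cell_data['colors'][cell_index] = 1
    pl.update()
    print(f"You clicked on cell index: {cell_index}")
</code></pre>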
|
<python><pyvista>
|
2025-06-30 23:21:58
| 1
| 1,870
|
Luca
|
79,685,288
| 8,318,573
|
UV Workspaces Module Not Found
|
<p>This is my directory structure:</p>
<pre><code>myProject
├── pyproject.toml
├── apps
│   └── myApp
│       ├── pyproject.toml
│       └── main.py
└── lib
    └── myLib
        ├── pyproject.toml
        ├── main.py
        └── __init__.py
</code></pre>
<p>In the top level <code>pyproject.toml</code> I have the following:</p>
<pre><code>[project]
name = "myProject"
version = "0.1.0"
description = "description"
readme = "README.md"
requires-python = ">=3.13"
# Added these here as I want to ensure that python libraries across
# projects and internal libs are updated together [ living at the
# head ]
dependencies = [
"myApp @ file:///${PROJECT_ROOT}/apps/myApp",
"myLib @ file:///${PROJECT_ROOT}/lib/myLib"
]
# Ensuring these come from the workspace i.e. the repo itself
[tool.uv.sources]
myLib = { workspace = true }
myApp = { workspace = true }
[tool.uv.workspace]
members = [
"apps/myApp",
"lib/myLib",
]
[build-system]
requires = ["setuptools", "wheel"]
build-backend = "setuptools.build_meta"
</code></pre>
<p>Now in my <code>main.py</code> in <code>myLib</code> I have a very simple script:</p>
<pre class="lang-py prettyprint-override"><code>def greet():
    return "Hello!"
</code></pre>
<p>And in <code>main.py</code> in <code>myApp</code> I intend to use it:</p>
<pre class="lang-py prettyprint-override"><code>from myLib.main import greet
print(greet())
</code></pre>
<p>To achieve this I run the following from root:</p>
<pre><code>uv pip install .
uv python run apps/myApp/main.py
</code></pre>
<p>But I constantly run into:</p>
<pre><code> File "/Users/dummy/myProject/apps/myApp/main.py", line 1, in <module>
from myLib.main import greet
ModuleNotFoundError: No module named 'myLib'
</code></pre>
<p>What step am I missing here?</p>
|
<python><uv>
|
2025-06-30 21:00:45
| 1
| 589
|
Abhishek Malik
|
79,685,176
| 4,534,466
|
Saving Spark's MLlib model using Kedro data catalog
|
<p>Consider the model that is trained in <a href="https://docs.kedro.org/en/0.19.14/integrations/pyspark_integration.html" rel="nofollow noreferrer">this example</a> from Kedro's documentation:</p>
<pre><code>from typing import Any, Dict
from kedro.pipeline import Pipeline, node, pipeline
from pyspark.ml.classification import RandomForestClassifier
from pyspark.sql import DataFrame
def train_model(training_data: DataFrame) -> RandomForestClassifier:
"""Node for training a random forest model to classify the data."""
classifier = RandomForestClassifier(numTrees=10)
return classifier.fit(training_data)
def predict(model: RandomForestClassifier, testing_data: DataFrame) -> DataFrame:
"""Node for making predictions given a pre-trained model and a testing dataset."""
predictions = model.transform(testing_data)
return predictions
def create_pipeline(**kwargs) -> Pipeline:
return pipeline(
[
node(train_model, inputs=["training_data"], outputs="example_classifier"),
node(
predict,
inputs=dict(model="example_classifier", testing_data="testing_data"),
outputs="example_predictions",
),
]
)
</code></pre>
<p>I would like not to lose my pre-trained model, and to save it as a <a href="https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.ml.PipelineModel.html" rel="nofollow noreferrer">PySpark PipelineModel</a>, but I could not find a suitable dataset for that in the <a href="https://docs.kedro.org/projects/kedro-datasets/en/kedro-datasets-7.0.0/api/kedro_datasets.html" rel="nofollow noreferrer">Kedro datasets documentation</a>.</p>
<p>Usually I would do something like this:</p>
<pre><code>save_path = "path/to/save/pipeline_model"
pipeline_model.save(save_path)
</code></pre>
<p>But as I'm using Kedro, I don't want to have IO outside of my catalog. Is this a supported use case, or would I have to implement my own custom Kedro dataset to achieve this?</p>
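<p>If no ready-made dataset fits, a small custom dataset along these lines might work. This is only a sketch, assuming a recent Kedro where <code>AbstractDataset</code> lives in <code>kedro.io</code> (older releases spell it <code>AbstractDataSet</code>); the class and argument names are made up:</p>
<pre class="lang-py prettyprint-override"><code>from kedro.io import AbstractDataset
from pyspark.ml import PipelineModel


class SparkPipelineModelDataset(AbstractDataset[PipelineModel, PipelineModel]):
    """Hypothetical dataset persisting a fitted PipelineModel to a path."""

    def __init__(self, filepath: str):
        self._filepath = filepath

    def _load(self) -> PipelineModel:
        return PipelineModel.load(self._filepath)

    def _save(self, model: PipelineModel) -> None:
        model.write().overwrite().save(self._filepath)

    def _describe(self) -> dict:
        return {"filepath": self._filepath}
</code></pre>
<p>Such a class could then be registered in the catalog like any other dataset and used for the <code>example_classifier</code> output, keeping all IO inside the catalog.</p>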
|
<python><pyspark><apache-spark-mllib><kedro>
|
2025-06-30 18:50:00
| 1
| 1,530
|
Joรฃo Areias
|
79,685,147
| 6,936,582
|
3D array to 2D then back again
|
<p>I have the array</p>
<pre><code>import numpy as np
a = np.array([[[11,12,13], [14,15,16]],
[[21,22,23], [24,25,26]],
[[31,32,33], [34,35,36]]])
# array([[[11, 12, 13],
# [14, 15, 16]],
# [[21, 22, 23],
# [24, 25, 26]],
# [[31, 32, 33],
# [34, 35, 36]]])
#a.shape
#(3, 2, 3)
</code></pre>
<p>I need to reshape it to a 2D array with three columns:</p>
<pre><code>step1 = a.transpose(1, 2, 0).reshape(-1, 3)
# array([[11, 21, 31],
# [12, 22, 32],
# [13, 23, 33],
# [14, 24, 34],
# [15, 25, 35],
# [16, 26, 36]])
</code></pre>
<p>I then need to reshape it back to the original shape, but I can't figure out how.</p>
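<p>For reference, a short sketch of the inverse (assuming the <code>(3, 2, 3)</code> shape above): undo the two steps in reverse order, using the inverse of the axis permutation <code>(1, 2, 0)</code>, which is <code>(2, 0, 1)</code>:</p>
<pre class="lang-py prettyprint-override"><code>back = step1.reshape(2, 3, 3).transpose(2, 0, 1)
assert (back == a).all()  # recovers the original (3, 2, 3) array
</code></pre>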
|
<python><numpy>
|
2025-06-30 18:15:23
| 1
| 2,220
|
Bera
|
79,685,121
| 11,062,613
|
How to efficiently wrap GSL functions with structs and pointers using Numba?
|
<p>I'm trying to wrap some GSL (GNU Scientific Library) functions using Numba. I'd like to avoid C glue code wrappers and to be able to cache and extend the wrapped functions.</p>
<p>Here's a simplified structure of the site-package:</p>
<pre><code>numba_gsl/
├── numba_gsl/
│   ├── gsl_integration.c   # Minimal C wrapper for GSL integration
│   └── integration.py      # Python-side ctypes + numba wrapper
└── setup.py                # build script
</code></pre>
<p>gsl_integration.c exposes a GSL function:</p>
<pre><code>// gsl_integration.c (must be compiled)
#include <gsl/gsl_integration.h>
#include <stdint.h>
typedef double (*func_ptr)(double x, void* params);
double qag(
func_ptr f,
void* user_data,
double a,
double b,
double epsabs,
double epsrel,
int limit,
int key) {
gsl_integration_workspace* w = gsl_integration_workspace_alloc(limit);
double result, error;
gsl_function gsl_func;
gsl_func.function = f;
gsl_func.params = user_data;
gsl_integration_qag(&gsl_func, a, b, epsabs, epsrel, limit, key, w, &result, &error);
gsl_integration_workspace_free(w);
return result;
}
</code></pre>
<p>In integration.py I load the compiled shared library with ctypes, define argument types, and expose a jitted function that calls into it by passing function pointers obtained via Numba's cfunc.</p>
<pre><code># integration.py
import ctypes as ct
from pathlib import Path

from numba import njit
def get_extension_path(lib_name: str) -> str:
search_path = Path(__file__).parent
pattern = f"*{lib_name}.*"
matches = search_path.glob(pattern)
try:
return str(next(matches))
except StopIteration:
return None
# Load the shared GSL integration library (.so)
_lib_path = get_extension_path('gsl_integration')
_lib = ct.CDLL(_lib_path)
# Define ctypes function prototype for GSL wrapper:
# double qag(void *f, void *params, double a, double b, double epsabs, double epsrel, int limit, int key)
qag_c = _lib.qag
qag_c.argtypes = [ct.c_void_p, ct.c_void_p,
ct.c_double, ct.c_double,
ct.c_double, ct.c_double,
ct.c_int, ct.c_int]
qag_c.restype = ct.c_double
@njit
def qag(func_ptr: int,
a: float,
b: float,
epsabs: float = 1.49e-08,
epsrel: float = 1.49e-08,
limit: int = 50,
key: int = 1,
params_ptr: int = 0) -> float:
"""GSL integration wrapper with finite intervalls."""
return qag_c(func_ptr, params_ptr, a, b, epsabs, epsrel, limit, key)
</code></pre>
<p>Here is a build-script:</p>
<pre><code># setup.py
from setuptools import setup, Extension, find_packages
ext = Extension(
"numba_gsl.gsl_integration",
sources=["numba_gsl/gsl_integration.c"],
libraries=["gsl", "gslcblas"],
extra_compile_args=["-O3"]
)
setup(
name="numba_gsl",
version="0.1.0",
description="Numba-friendly wrappers for GSL routines",
author="No Name",
packages=find_packages(),
ext_modules=[ext],
install_requires=["numba", "llvmlite"],
python_requires=">=3.12",
)
</code></pre>
<p>And an example:</p>
<pre><code># example.py
import math
from numba import cfunc, types
from numba_gsl.integration import qag
from scipy.integrate import quad
# Define the integrand as a Numba cfunc with the proper signature:
@cfunc(types.float64(types.float64, types.voidptr))
def sin_over_x(x, _):
return math.sin(x) / x if x != 0.0 else 1.0
def py_func(x):
return math.sin(x) / x if x != 0 else 1.0
func_ptr = sin_over_x.address
qag_res = qag(func_ptr, a=1e-8, b=3.14)
scipy_res = quad(py_func, a=1e-8, b=3.14)[0]
print("numba_gsl quad result:", qag_res)
print("scipy quad result:", scipy_res)
# numba_gsl quad result: 1.8519366381423115
# scipy quad result: 1.8519366381423115
</code></pre>
<p>Is there a (better) way to wrap complex GSL structs and pass them along with function pointers to GSL routines with Numba, perhaps using one of the following (a rough sketch with the first two is shown after the list):</p>
<ul>
<li>llvmlite.binding.load_library_permanently</li>
<li>numba.types.ExternalFunction</li>
<li>numba.types.Record.make_c_struct</li>
</ul>
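<p>For what it is worth, here is a rough sketch of binding the same <code>qag</code> symbol through <code>llvmlite.binding.load_library_permanently</code> plus <code>numba.types.ExternalFunction</code> instead of going through ctypes. The pointer arguments are declared as pointer-sized integers, which is an assumption about the ABI worth checking; whether this composes better with structs and caching is exactly the open question:</p>
<pre class="lang-py prettyprint-override"><code># Sketch only: reuses _lib_path from integration.py above.
from llvmlite import binding
from numba import njit, types

binding.load_library_permanently(_lib_path)  # expose the symbol to the JIT

_qag_sig = types.float64(
    types.uintp,                   # function pointer (assumed pointer-sized int)
    types.uintp,                   # params pointer
    types.float64, types.float64,  # a, b
    types.float64, types.float64,  # epsabs, epsrel
    types.intc, types.intc,        # limit, key
)
qag_ext = types.ExternalFunction("qag", _qag_sig)

@njit
def qag_v2(func_ptr, a, b, epsabs=1.49e-08, epsrel=1.49e-08,
           limit=50, key=1, params_ptr=0):
    return qag_ext(func_ptr, params_ptr, a, b, epsabs, epsrel, limit, key)
</code></pre>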
|
<python><numba><gsl><llvmlite>
|
2025-06-30 17:33:10
| 1
| 423
|
Olibarer
|
79,685,103
| 5,616,309
|
How to properly store a date time value in a sqlite3 database with python
|
<p>I wanted to store a timestamp from Python in a sqlite3 database. For that I used this kind of code</p>
<pre><code>from datetime import datetime, date, timedelta, time
import sqlite3
import os
def main():
script_dir = os.path.abspath(os.path.dirname(__file__))
database_file = os.path.join(script_dir, "sqlite.db")
print(database_file)
con = sqlite3.connect(database_file, detect_types=sqlite3.PARSE_DECLTYPES | sqlite3.PARSE_COLNAMES)
cur = con.cursor()
cur.execute("""CREATE TABLE if not exists timstmp
(DateCrt timestamp, DateSupp timestamp)"""
)
con.commit()
cur.execute("SELECT * FROM timstmp")
print(cur.fetchall())
print(datetime.now())
con = sqlite3.connect(database_file, detect_types=sqlite3.PARSE_DECLTYPES | sqlite3.PARSE_COLNAMES)
cur = con.cursor()
cur.execute("INSERT into timstmp values (?, ?)", (datetime.now(), None))
con.commit()
cur.execute("SELECT * FROM timstmp")
print(cur.fetchall())
con.close()
if __name__ == '__main__':
main()
</code></pre>
<p>It works well, I can store and retrieve values, they are timestamps in sqlite, and in Python too</p>
<p>But when executing, I have the warning</p>
<p><strong>DeprecationWarning: The default datetime adapter is deprecated as of Python 3.12; see the sqlite3 documentation for suggested replacement recipes</strong></p>
<p>So, I wanted to correct my code, and i added this lines of code (found in <a href="https://docs.python.org/3/library/sqlite3.html#sqlite3-adapter-converter-recipes" rel="nofollow noreferrer">https://docs.python.org/3/library/sqlite3.html#sqlite3-adapter-converter-recipes</a>)</p>
<pre><code>def adapt_date_iso(val):
"""Adapt datetime.date to ISO 8601 date."""
return val.isoformat()
def adapt_datetime_iso(val):
"""Adapt datetime.datetime to timezone-naive ISO 8601 date."""
return val.isoformat()
def adapt_datetime_epoch(val):
"""Adapt datetime.datetime to Unix timestamp."""
return int(val.timestamp())
sqlite3.register_adapter(date, adapt_date_iso)
sqlite3.register_adapter(datetime, adapt_datetime_iso)
sqlite3.register_adapter(datetime, adapt_datetime_epoch)
def convert_date(val):
"""Convert ISO 8601 date to datetime.date object."""
return date.fromisoformat(val.decode())
def convert_datetime(val):
"""Convert ISO 8601 datetime to datetime.datetime object."""
return datetime.fromisoformat(val.decode())
def convert_timestamp(val):
"""Convert Unix epoch timestamp to datetime.datetime object."""
return datetime.fromtimestamp(int(val))
sqlite3.register_converter("date", convert_date)
sqlite3.register_converter("datetime", convert_datetime)
sqlite3.register_converter("timestamp", convert_timestamp)
</code></pre>
<p>It seems to work (from the python program), but the values stored in the database aren't timestamps anymore, they are integers (and now the precision is only one second).</p>
<p>Did I misunderstand the goal of the adapters and converters? Is there a way to again have a timestamp in the sqlite3 database, without potential problems with future versions of Python?</p>
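<p>One detail worth noting: <code>sqlite3.register_adapter</code> keeps only the last adapter registered for a given type, so the epoch adapter above silently replaces the ISO one, which is why integers with one-second precision end up in the database. A sketch that keeps only the ISO-8601 pair, assuming text timestamps in the database are acceptable:</p>
<pre class="lang-py prettyprint-override"><code>import sqlite3
from datetime import datetime

def adapt_datetime_iso(val):
    """Adapt datetime.datetime to timezone-naive ISO 8601 text."""
    return val.isoformat()

def convert_timestamp(val):
    """Convert an ISO 8601 TIMESTAMP column back into datetime.datetime."""
    return datetime.fromisoformat(val.decode())

sqlite3.register_adapter(datetime, adapt_datetime_iso)
sqlite3.register_converter("timestamp", convert_timestamp)
</code></pre>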
|
<python><timestamp><sqlite3-python>
|
2025-06-30 17:10:09
| 0
| 1,129
|
FredericP
|
79,685,012
| 6,876,422
|
Azure ApplicationAccessPolicy not blocking access for certain users in Microsoft Graph API (application permission)
|
<p>I am using an <strong>ApplicationAccessPolicy</strong> in Exchange Online to restrict an Azure AD application's access to only one specific mailbox (my personal account).
The goal is for the application to <strong>only access the mailbox of the user "famas@example.com"</strong>, and block access to all other mailboxes.</p>
<p>I created the policy with the following PowerShell command:</p>
<pre class="lang-bash prettyprint-override"><code>New-ApplicationAccessPolicy `
-AppId "00000000-0000-0000-0000-000000000000" `
-PolicyScopeGroupId "famas@example.com" `
-AccessRight RestrictAccess `
-Description "Restrict app access to only Famasโs mailbox"
</code></pre>
<p>I then tested the policy with:</p>
<pre class="lang-bash prettyprint-override"><code>Test-ApplicationAccessPolicy `
-Identity "famas@example.com" `
-AppId "00000000-0000-0000-0000-000000000000"
</code></pre>
<p>It returns:</p>
<pre><code>AccessCheckResult : Granted
</code></pre>
<p>However, when testing with other users (user1@example.com, user2@example.com), these commands:</p>
<pre class="lang-bash prettyprint-override"><code>Test-ApplicationAccessPolicy `
-Identity "user1@example.com" `
-AppId "00000000-0000-0000-0000-000000000000"
</code></pre>
<p>and</p>
<pre class="lang-bash prettyprint-override"><code>Test-ApplicationAccessPolicy `
-Identity "user2@example.com" `
-AppId "00000000-0000-0000-0000-000000000000"
</code></pre>
<p>return:</p>
<pre><code>AccessCheckResult : Denied
</code></pre>
<hr />
<h2>Technical context</h2>
<p>The application uses an OAuth2 client credentials flow token (application permission) via the MSAL Python library.
Here is a simplified code snippet to retrieve emails via Microsoft Graph API:</p>
<pre class="lang-py prettyprint-override"><code>from msal import ConfidentialClientApplication
import requests
config = {
"application_id": "00000000-0000-0000-0000-000000000000",
"application_secret": "your_secret",
"tenant_id": "your_tenant_id",
"user_email": "user2@example.com"
}
app = ConfidentialClientApplication(
config["application_id"],
authority=f"https://login.microsoftonline.com/{config['tenant_id']}",
client_credential=config["application_secret"]
)
result = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])
if "access_token" in result:
headers = {
"Authorization": f"Bearer {result['access_token']}",
"ConsistencyLevel": "eventual"
}
endpoint = f"https://graph.microsoft.com/v1.0/users/{config['user_email']}/mailFolders/inbox/messages?$top=10&$select=subject,receivedDateTime,from,body"
response = requests.get(endpoint, headers=headers)
if response.status_code == 200:
data = response.json()
mails = data.get("value", [])
for mail in mails:
print(mail["subject"])
else:
print("Error accessing mail:", response.status_code, response.text)
else:
print("Authentication failed:", result.get("error_description"))
</code></pre>
<hr />
<h2>Questions</h2>
<ol>
<li>Why does the ApplicationAccessPolicy seem not to apply consistently to all users?</li>
<li>How can I ensure access is <strong>strictly limited to a single user</strong> with ApplicationAccessPolicy?</li>
<li>Are there additional Azure AD configurations required to enforce this restriction?</li>
<li>Does the application need to use only <strong>Application</strong> permissions (not Delegated) for the policy to be effective?</li>
</ol>
<p>Thanks in advance for your help!</p>
|
<python><azure><azure-active-directory><microsoft-graph-api><azure-powershell>
|
2025-06-30 15:32:54
| 1
| 2,330
|
famas23
|
79,684,997
| 6,546,694
|
Polars reading just one file from s3 with glob patterns
|
<p>I have an S3 location that contains a list of directories, and each directory contains a CSV named <code>sample_csv.csv</code>. I am trying to read these files using a glob pattern in <code>pl.read_csv</code>, but it reads only one file and silently ignores the rest. The issue was mentioned in the Polars GitHub issues earlier (<a href="https://github.com/pola-rs/polars/issues/10008" rel="nofollow noreferrer">link</a>), but there it seems to have been resolved, so I want to understand whether I am doing something wrong. Below is my code:</p>
<pre><code>import polars as pl
s3_bucket = "sample_bucket"
prefix = "abcd/efgh/ijkl/"
storage_options = {"expand": True}
df = pl.read_csv(source = f"s3://{s3_bucket}/{prefix}*/*.csv", storage_options = storage_options)
</code></pre>
<p>I can change</p>
<pre><code>df = pl.read_csv(source = f"s3://{s3_bucket}/{prefix}*/*.csv", storage_options = storage_options)
</code></pre>
<p>to</p>
<pre><code>df = pl.read_csv(source = f"s3://{s3_bucket}/{prefix}*/sample_csv.csv", storage_options = storage_options)
</code></pre>
<p>and nothing changes</p>
<p>I have checked, and only the <code>sample_csv.csv</code> from the first directory present at <code>prefix</code> has been read into the df.</p>
<p><code>scan_csv</code> seems to work just fine</p>
<pre><code>import polars as pl
s3_bucket = "sample_bucket"
prefix = "abcd/efgh/ijkl/"
df = pl.scan_csv(source = f"s3://{s3_bucket}/{prefix}*/*.csv").collect()
</code></pre>
<p>Without the <code>storage_options</code>, <code>read_csv</code> would not expand the glob and would throw a file-not-found error, but in the case of <code>scan_csv</code> even that is not needed. This feels like odd Polars tribal knowledge if there is no catch to this and I am not doing anything wrong!</p>
<p>What am I doing wrong?</p>
|
<python><amazon-s3><python-polars><polars>
|
2025-06-30 15:23:30
| 1
| 5,871
|
figs_and_nuts
|
79,684,967
| 11,598,948
|
Efficient way to get several subsets of list elements?
|
<p>I have a DataFrame like this:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.DataFrame(
{
"grp": ["a", "b"],
"val": [[1, 2, 3, 4, 5], [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]],
}
)
df
</code></pre>
<pre><code>shape: (2, 2)
┌─────┬──────────────┐
│ grp ┆ val          │
│ --- ┆ ---          │
│ str ┆ list[i64]    │
╞═════╪══════════════╡
│ a   ┆ [1, 2, … 5]  │
│ b   ┆ [1, 2, … 10] │
└─────┴──────────────┘
</code></pre>
<p>I want to select elements in the <code>val</code> column based on this pattern:</p>
<ul>
<li>take the first 2 values</li>
<li>take the last 2 values</li>
<li>take a sample of 2 values in the "middle" (meaning the remaining set of values excluding the first and last 2 values)</li>
</ul>
<p>From those three selections, keep unique elements.</p>
<p>This means that when there are 6 or fewer values (as in the first row) then all values are returned, but otherwise (as in the second row) only a subset of 6 values will be returned.</p>
<p>Therefore, the desired output would look like this:</p>
<pre class="lang-py prettyprint-override"><code>shape: (2, 2)
┌─────┬─────────────────────┐
│ grp ┆ val                 │
│ --- ┆ ---                 │
│ str ┆ list[i64]           │
╞═════╪═════════════════════╡
│ a   ┆ [1, 2, 3, 4, 5]     │
│ b   ┆ [1, 2, 4, 7, 9, 10] │  # <<<< 4 and 7 are the two randomly selected values in the "middle" set
└─────┴─────────────────────┘
</code></pre>
<p>To select the first two and last two values, I can use <code>list.head()</code> and <code>list.tail()</code>. For the random pick in the remaining values, I thought I could do <code>list.set_difference()</code> to remove the first and last two values, and then <code>list.sample()</code>. However, <code>list.sample()</code> fails because in the first row, there's only one value left after removing the first and last two, and I ask for two values:</p>
<pre class="lang-py prettyprint-override"><code>(
df.select(
head=pl.col("val").list.head(2),
middle=pl.col("val")
.list.set_difference(pl.col("val").list.head(2))
.list.set_difference(pl.col("val").list.tail(2))
.list.sample(2, seed=1234),
tail=pl.col("val").list.tail(2),
).select(concat=pl.concat_list(["head", "middle", "tail"]).list.unique())
)
</code></pre>
<pre><code>ShapeError: cannot take a larger sample than the total population when `with_replacement=false`
</code></pre>
<p>and I don't want a sample with replacement.</p>
<p>What would be the best way to do this with Polars?</p>
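<p>For what it is worth, one sketch that sidesteps the error is clamping the sample size to the length of the "middle" slice. It assumes <code>list.slice</code> and <code>list.sample</code> accept expressions for their length arguments, which is worth double-checking in your Polars version; <code>middle_len</code> is a name made up here:</p>
<pre class="lang-py prettyprint-override"><code># Sketch: clamp the middle sample size so short lists no longer error out.
middle_len = (pl.col("val").list.len() - 4).clip(0)

out = df.select(
    "grp",
    val=pl.concat_list(
        pl.col("val").list.head(2),
        pl.col("val")
        .list.slice(2, middle_len)
        .list.sample(pl.min_horizontal(2, middle_len), seed=1234),
        pl.col("val").list.tail(2),
    ).list.unique(maintain_order=True),
)
</code></pre>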
|
<python><python-polars>
|
2025-06-30 15:06:47
| 1
| 8,865
|
bretauv
|
79,684,837
| 5,568,409
|
Why does `Matplotlib` issue this error with `\textbf`?
|
<p>I was looking to have <strong>bold</strong> text in my annotation; many advise using <code>\textbf</code>, so I tried this:</p>
<pre><code>import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(6, 4))
# Annotated line in bold LaTeX
ax.annotate(r"$ \textbf {y = -\,\frac{1}{\sqrt{2}}x + 1} $",
xy=(0.2, 0.7), rotation=-35, fontweight='bold', fontsize=12)
plt.show()
</code></pre>
<p>This results in an incomprehensible error:</p>
<pre><code>ValueError:
\textbf {y = -\,\frac{1}{\sqrt{2}}x + 1}
^
ParseSyntaxException: Expected \text, found 'bf' (at char 6), (line:1, col:7)
</code></pre>
<p>Do you understand what this means, and how can I get the text in bold?</p>
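<p>For reference, a sketch that avoids the parse error by using mathtext's <code>\mathbf</code> instead of <code>\textbf</code> (this assumes the default mathtext renderer, i.e. <code>text.usetex</code> left off):</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(6, 4))
# \mathbf is understood by Matplotlib's built-in mathtext parser.
ax.annotate(r"$\mathbf{y = -\,\frac{1}{\sqrt{2}}x + 1}$",
            xy=(0.2, 0.7), rotation=-35, fontsize=12)
plt.show()
</code></pre>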
|
<python><matplotlib>
|
2025-06-30 13:27:07
| 1
| 1,216
|
Andrew
|
79,684,576
| 21,049,944
|
How to change legend patches after plotting?
|
<p>I have a function that is supposed to enlarge all labels in a figure to make it ready for export. However I fail to enlarge the legend properly:</p>
<pre><code>import matplotlib.pyplot as plt
def enlarge_legend(ax, fontSize = 25):
for text in ax.get_legend().get_texts():
text.set_fontsize(fontSize)
fig, ax = plt.subplots()
ax.scatter(
[1,2,3],
[2,3,1],
label = "auto"
)
ax.legend()
enlarge_legend(ax)
</code></pre>
<p>result:<br />
<a href="https://i.sstatic.net/YjAoiAax.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YjAoiAax.png" alt="enter image description here" /></a></p>
<p>I would like the circle patch to be bigger as well but both <code>ax.get_legend().get_patches()</code> and <code>ax.get_legend().get_lines()</code>
return empty lists.
Any idea how to do it?</p>
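<p>For what it is worth, with a scatter plot the legend entries are <code>PathCollection</code> handles rather than patches or lines, which would explain the two empty lists. One simple sketch is to rebuild the legend with both a larger font and a marker scale (the factor 2.5 is arbitrary):</p>
<pre class="lang-py prettyprint-override"><code>def enlarge_legend(ax, fontSize=25):
    # Re-create the legend from the labelled artists so that text and
    # markers are scaled together.
    ax.legend(fontsize=fontSize, markerscale=2.5)
</code></pre>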
|
<python><matplotlib><legend><patch>
|
2025-06-30 09:55:19
| 2
| 388
|
Galedon
|
79,684,559
| 5,617,371
|
Failed to call api `https://mdmenrollment.apple.com/server/devices` use python3
|
<p>I want to use Python 3 to call an Apple API, but the call fails. The API documentation is <a href="https://developer.apple.com/documentation/devicemanagement/fetch-devices" rel="nofollow noreferrer">https://developer.apple.com/documentation/devicemanagement/fetch-devices</a>.</p>
<p>I suspect that I am using the wrong <code>aud</code> in the JWT payload, but the documentation has almost no explanation: <a href="https://developer.apple.com/documentation/appstoreconnectapi/generating-tokens-for-api-requests" rel="nofollow noreferrer">https://developer.apple.com/documentation/appstoreconnectapi/generating-tokens-for-api-requests</a>. My test code is below. Thanks.</p>
<pre class="lang-py prettyprint-override"><code>from datetime import datetime, timedelta
from time import time, mktime
import jwt
import requests
key_id="xxxxxxxxx"
issuer_id="xxxxxxxxxxx"
private_key_path = "./xxxxxxxxxxxxx.p8"
dt = datetime.now() + timedelta(minutes=19)
headers = {
"alg": "ES256",
"kid": key_id,
"typ": "JWT",
}
payload = {
"iss": issuer_id,
"iat": int(time()),
"exp": int(mktime(dt.timetuple())),
"aud": "apple-developer-enterprise-v1",
}
with open(private_key_path, "rb") as fh:
signing_key = fh.read()
gen_jwt = jwt.encode(payload, signing_key, algorithm="ES256", headers=headers)
headers = {
"Authorization": f"Bearer {gen_jwt}",
"Content-Type": "application/json",
"Accept": "application/json"
}
url = f"https://mdmenrollment.apple.com/server/devices"
payload = {
"limit": 1000
}
response = requests.post(url, headers=headers, json=payload)
print(f"ๅๅบ็ถๆ็ : {response.status_code}")
print(f"payload: {payload}")
print(f"ๅๅบๅคด: {response.headers}")
print(f"ๅๅบๅ
ๅฎน: {response.text}")
response.raise_for_status()
print("aa11-------------", response.json())
</code></pre>
<p>And error is</p>
<pre><code>401 Client Error: Unauthorized for url: https://mdmenrollment.apple.com/server/devices
</code></pre>
|
<python><apple-developer><apple-developer-account><apple-developer-enterprise>
|
2025-06-30 09:38:02
| 1
| 1,215
|
leo
|
79,684,430
| 1,866,775
|
Why does `isort` order the imports differently when renaming a directory from test to tests?
|
<p>In the example below, everything is already processed using <code>isort .</code> (version <code>6.0.1</code>):</p>
<pre class="lang-bash prettyprint-override"><code>tree .
</code></pre>
<pre><code>.
โโโ a
โ โโโ test
โ โโโ __init__.py
โ โโโ test_integration.py
โโโ b
โโโ tests
โโโ __init__.py
โโโ test_integration.py
</code></pre>
<pre class="lang-bash prettyprint-override"><code>cat ./a/test/test_integration.py
</code></pre>
<pre class="lang-py prettyprint-override"><code>from test.asdasd import hello
import numpy as np
if np.random.random() < 42:
hello()
</code></pre>
<pre class="lang-bash prettyprint-override"><code>cat ./b/tests/test_integration.py
</code></pre>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from tests.asdasd import hello
if np.random.random() < 42:
hello()
</code></pre>
<p>Why does <code>isort</code> put <code>from test.asdasd import hello</code> (in <code>a</code>) above <code>import numpy as np</code>, while it puts <code>from tests.asdasd import hello</code> (in <code>b</code>) below <code>import numpy as np</code>?</p>
|
<python><isort>
|
2025-06-30 08:16:52
| 1
| 11,227
|
Tobias Hermann
|
79,684,326
| 354,051
|
embedding python in multi threaded c++ environment
|
<ul>
<li>python 3.13.5</li>
<li>windows 10</li>
<li>msvc</li>
</ul>
<p>This is how I initialize Python in <code>main</code>.</p>
<pre class="lang-cpp prettyprint-override"><code>#include <iostream>
#include <taskflow/taskflow.hpp> // Taskflow is header-only
#define PY_SSIZE_T_CLEAN
#include <Python.h>
int main(int argc, char* argv[]) {
wchar_t pythonHome[] = L".venv";
Py_SetPythonHome(pythonHome);
Py_Initialize();
PyEval_InitThreads();
if (!Py_IsInitialized()) {
std::cerr << "Python failed to initialize\n";
return 1;
}
// Set up Python paths
PyRun_SimpleString(
"import sys\n"
"sys.path.insert(0, '.venv/Lib')\n"
"sys.path.insert(0, '.venv/Lib/site-packages')\n"
);
// Test script execution
PyRun_SimpleString(
"from time import time, ctime\n"
"print('Today is', ctime(time()))\n"
);
PyObject* main = PyImport_AddModule("__main__");
PyObject* globals = PyModule_GetDict(main);
tf::Executor executor;
tf::Taskflow taskflow;
auto [A, B, C, D] = taskflow.emplace( // create four tasks
[] () { std::cout << "TaskA\n"; PyGILState_STATE gstate = PyGILState_Ensure();PyGILState_Release(gstate);},
[] () { std::cout << "TaskB\n"; },
[] () { std::cout << "TaskC\n"; },
[] () { std::cout << "TaskD\n"; }
);
A.precede(B, C); // A runs before B and C
D.succeed(B, C); // D runs after B and C
executor.run(taskflow).wait();
if (Py_FinalizeEx() < 0) {
return 120;
}
return 0;
}
</code></pre>
<p>Python is getting properly initialized.</p>
<p>It hangs at the execution of TaskA. You can't even close the app by pressing Ctrl+C. This pretty much explains my requirement: to execute the Python code inside tasks.</p>
|
<python><c++>
|
2025-06-30 06:37:15
| 1
| 947
|
Prashant
|
79,684,175
| 467,054
|
Setting row colors to a pandastable with setrowcolors
|
<p>I'm working on a script to collate information from a few utilities and applications. The last part I'm trying to finish up is to take color information and apply it to a table. My issue is that when I run this portion of the code:</p>
<pre><code> print(html_color)
print(isValidHexaCode(html_color))
try:
global_table.setRowColors(rows=int(index), clr = html_color, cols='all')
except Exception as e:
print(f"An unexpected error occurred: {e}")
</code></pre>
<p>On elements that have a legitimate color, the following is what is shown in the console:</p>
<pre><code>Name: PNO
Color: 4294902015
new color = #ff00ff
#ff00ff
True
An unexpected error occurred: invalid color name "#00585858"
</code></pre>
<p>The line containing <code>print(isValidHexaCode(html_color))</code> is simply there to help me troubleshoot.</p>
<p>The entire file can be found here: <a href="https://github.com/lorenjz/simplescan4" rel="nofollow noreferrer">https://github.com/lorenjz/simplescan4</a> and the method containing this snippet is called import_wwb</p>
<p>I'm not sure where to go from here. What am I doing wrong?</p>
<p>Any input or suggestions would be greatly appreciated.</p>
<p>Edit:</p>
<p>Traceback without the try/except:</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\Loren\AppData\Local\Programs\Python\Python311\Lib\tkinter\__init__.py", line 1948, in __call__
return self.func(*args)
^^^^^^^^^^^^^^^^
File "C:\Users\Loren\Documents\python\simplev4.py", line 245, in import_wwb
global_table.setRowColors(rows=int(index), clr = html_color, cols='all')
File "C:\Users\Loren\AppData\Local\Programs\Python\Python311\Lib\site-packages\pandastable\core.py", line 664, in setRowColors
self.redraw()
File "C:\Users\Loren\AppData\Local\Programs\Python\Python311\Lib\site-packages\pandastable\core.py", line 537, in redraw
self.redrawVisible(event, callback)
File "C:\Users\Loren\AppData\Local\Programs\Python\Python311\Lib\site-packages\pandastable\core.py", line 507, in redrawVisible
self.colorRows()
File "C:\Users\Loren\AppData\Local\Programs\Python\Python311\Lib\site-packages\pandastable\core.py", line 634, in colorRows
self.drawRect(row, col, color=clr, tag='colorrect', delete=0)
File "C:\Users\Loren\AppData\Local\Programs\Python\Python311\Lib\site-packages\pandastable\core.py", line 3308, in drawRect
rect = self.create_rectangle(x1+w/2,y1+w/2,x2-w/2,y2-w/2,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Loren\AppData\Local\Programs\Python\Python311\Lib\tkinter\__init__.py", line 2862, in create_rectangle
return self._create('rectangle', args, kw)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Loren\AppData\Local\Programs\Python\Python311\Lib\tkinter\__init__.py", line 2832, in _create
return self.tk.getint(self.tk.call(
^^^^^^^^^^^^^
_tkinter.TclError: invalid color name "#00585858"
</code></pre>
|
<python><tkinter><pandastable>
|
2025-06-30 02:06:49
| 1
| 480
|
Loren Zimmer
|
79,684,073
| 5,305,242
|
Identify a column cell and perform subtraction with the same column
|
<p>This is just a small example DataFrame, but in reality I'm working with a much larger and uneven dataset. I need to identify the pattern in the data in order to perform a subtraction within the same column. Please take a look at my attempt and the expected result below.</p>
<pre><code>df = pd.DataFrame([{'col1': "", 'col2': ""}, {'col1': 10.0, 'col2': 'A'}, {'col1': 20.0, 'col2': 'D'}, {'col1': "", 'col2': ""}, {'col1': "", 'col2': ""}, {'col1': "", 'col2': ""}, {'col1': 40.0, 'col2': 'W'}, {'col1': 10.0, 'col2': 'E'}, {'col1': 15.0, 'col2': 'R'}, {'col1': "", 'col2': ""}, {'col1': "", 'col2': ""}, {'col1': 5.0, 'col2': 'F'}, {'col1': 10.0, 'col2': 'H'}, {'col1': 15.0, 'col2': 'U'}, {'col1': 11.0, 'col2': 'T'}])
print (df)
col1 col2
0 NaN NaN
1 10.0 A
2 20.0 D
3 NaN NaN
4 NaN NaN
5 NaN NaN
6 2.0 W
7 10.0 E
8 15.0 R
9 NaN NaN
10 NaN NaN
11 5.0 F
12 10.0 H
13 9.0 U
14 11.0 T
</code></pre>
<p>I need to number the rows within each block of consecutive non-NaN values, as shown below in the additional column <code>IdentifySub</code>.</p>
<pre><code> col1 col2 IdentifySub
0 NaN NaN NaN
1 10.0 A 1.0
2 20.0 D 2.0
3 NaN NaN NaN
4 NaN NaN NaN
5 NaN NaN NaN
6 2.0 W 1.0
7 10.0 E 2.0
8 15.0 R 3.0
9 NaN NaN NaN
10 NaN NaN NaN
11 5.0 F 1.0
12 10.0 H 2.0
13 9.0 U 3.0
14 11.0 T 4.0
</code></pre>
<p>Then perform the subtraction on even-numbered rows only: subtract the preceding odd-numbered row's value from the even-numbered row's value.</p>
<pre><code> col1 col2 IdentifySub Result
0 NaN NaN NaN NaN
1 10.0 A 1.0 NaN
2 20.0 D 2.0 10.0
3 NaN NaN NaN NaN
4 NaN NaN NaN NaN
5 NaN NaN NaN NaN
6 2.0 W 1.0 NaN
7 10.0 E 2.0 8.0
8 15.0 R 3.0 NaN
9 NaN NaN NaN NaN
10 NaN NaN NaN NaN
11 5.0 F 1.0 NaN
12 10.0 H 2.0 5.0
13 9.0 U 3.0 NaN
14 11.0 T 4.0 2.0
</code></pre>
<p>Please see the image to understand the math behind the output:</p>
<p><a href="https://i.sstatic.net/UDGUO6mE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UDGUO6mE.png" alt="" /></a></p>
|
<python><pandas><data-science>
|
2025-06-29 21:26:23
| 1
| 458
|
OO7
|