QuestionId (int64) | UserId (int64) | QuestionTitle (string) | QuestionBody (string) | Tags (string) | CreationDate (string) | AnswerCount (int64) | UserExpertiseLevel (int64) | UserDisplayName (string, nullable)
|---|---|---|---|---|---|---|---|---|
79,738,656
| 2,416,984
|
Accessing SharedMemory in a Gitlab runner
|
<p>I'm writing a pytest test that should communicate with Docker containers through shared memory. Locally, that's pretty easy to achieve using something like this:</p>
<pre class="lang-py prettyprint-override"><code># Create a shared memory block
shm: SharedMemory = SharedMemory(create=True, name=shm_name, size=10**6)
# Create a docker mount for the shared memory
shm_mount = docker.types.Mount(source="/dev/shm", target="/dev/shm", type="bind")
# Run a container with access to the shared memory
client.containers.run(
image="alpine",
name="my_container",
detach=True,
remove=True,
command="tail -f /dev/null",
mounts=[shm_mount],
environment={
"SHM_NAME": shm_name
}
)
</code></pre>
<p>Now that test needs to run in a Gitlab runner. To use the Docker SDK, I'm using a Docker-in-Docker configuration looking like this:</p>
<pre class="lang-yaml prettyprint-override"><code>unittest:
image:
name: docker:27.0.2
pull_policy: if-not-present
tags:
- general-runner-dind
services:
- name: docker:27.0.2-dind
alias: docker-service
command: [dockerd-entrypoint.sh, --tls=0]
variables:
DOCKER_HOST: tcp://docker-service:2375
DOCKER_TLS_CERTDIR: ""
FF_NETWORK_PER_BUILD: "true"
before_script:
- apk update && apk upgrade && apk add --no-cache python3 python3-dev py3-pip curl
- curl -LsSf https://astral.sh/uv/0.6.17/install.sh | sh
- source $HOME/.local/bin/env
script:
- cd /path/to/project
- uv sync --frozen
- uv run --no-sync pytest
</code></pre>
<p>So pytest runs inside a docker image now. In addition, control over the parameters used to spin up the container where pytest is running in is rather limited. Things that I've tried to get a shared memory writeable within pytest and shareable with other docker containers spun up with the Docker SDK:</p>
<ol>
<li>Detect the container ID of the pytest container and pass that to <code>ipc_mode</code> option of <code>docker.container.run</code>:</li>
</ol>
<pre class="lang-py prettyprint-override"><code>cid = get_container_id() # Some logic to get the container ID.
shm: SharedMemory = SharedMemory(create=True, name=shm_name, size=10**6)
if cid is None:
# Running on host
shm_mount = docker.types.Mount(source="/dev/shm", target="/dev/shm", type="bind")
mounts = [shm_mount]
ipc_mode = None
else:
# Running inside docker
mounts = []
ipc_mode = f"container:{cid}"
client.containers.run(
image="alpine",
name="my_container",
detach=True,
remove=True,
command="tail -f /dev/null",
mounts=mounts,
    ipc_mode=ipc_mode,
environment={
"SHM_NAME": shm_name
}
)
</code></pre>
<p>This doesn't work because the docker-in-docker setup on Gitlab seems to be rather specialized, and none of the usual means of getting the container ID work (<code>/.dockerenv</code> doesn't exist, and neither <code>/proc/self/cgroup</code> (cgroups v1) nor <code>/proc/self/mountinfo</code> (cgroups v2) seems to contain the docker container ID).</p>
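<p>For reference, the detection logic I tried looks roughly like this (a minimal sketch; the helper name and the regex are my own and purely illustrative):</p>
<pre class="lang-py prettyprint-override"><code>import os
import re

def get_container_id():
    # Hypothetical sketch: none of these sources yield an ID inside the Gitlab dind job
    if not os.path.exists("/.dockerenv"):
        pass  # /.dockerenv is missing in this setup anyway
    for path in ("/proc/self/cgroup", "/proc/self/mountinfo"):
        try:
            with open(path) as f:
                match = re.search(r"[0-9a-f]{64}", f.read())
            if match:
                return match.group(0)
        except FileNotFoundError:
            continue
    return None
</code></pre>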
<ol start="2">
<li>Create an additional container during the pytest that will have a shareable shared memory block accessible by other containers:</li>
</ol>
<pre class="lang-py prettyprint-override"><code>ipc_anchor = client.containers.run(
image="alpine",
name=ipc_anchor_name,
detach=True,
remove=True,
command="tail -f /dev/null",
ipc_mode="shareable",
)
ipc_mode = f"container:{ipc_anchor.id}"
</code></pre>
<p>Although this option looks promising, it's unclear to me how to reach that shared memory block from pytest in order to write into it (with <code>SharedMemory(create=False, name=...)</code>).</p>
<p>Any suggestions on how to get either of the two approaches above completely working? Or a different approach that I might still be missing?</p>
|
<python><gitlab-ci><docker-in-docker>
|
2025-08-18 11:43:19
| 1
| 973
|
user2416984
|
79,738,518
| 270,043
|
PySpark program stuck at adding broadcast piece
|
<p>I'm trying to write a PySpark program that filters for records in a very large dataframe (1-2B records) that match some conditions on another, smaller reference dataframe. This is done using a left join between the two dataframes, and then writing the results to a parquet file. When the reference dataframe is empty, the program runs successfully. But when the reference dataframe contains 414K records, the Spark program hangs at the message <code>storage.BlockManagerInfo: [dispatcher-BlockManagerMaster]: Added broadcast_4_piece0 in memory on xxx:12345 (size:4.0 MiB, free: 10 GiB)</code>.</p>
<p>My code is as follows.</p>
<pre><code>def extract_to_df(spark, ref_db):
columns_to_drop = ["ColA", "ColB", "ColC"]
# Join conditions
join_cond_1 = (col("Col1") >= col("Col3a")) & (col("Col1") >= col("Col3b"))
join_cond_2 = (col("Col2") >= col("Col3a")) & (col("Col2") >= col("Col3b"))
df = spark.read.parquet(folder)
df_2 = df.filter(df["Col4"]=="abc").withColumn("Col1", udf_col(col("Col1a"))).withColumn("Col2", udf_col(col("Col2a")))
df_tmp = df_2.join(ref_db, on=join_cond_1, how="left").drop(*columns_to_drop).withColumnRenamed("Col5", "Col5a")
df_results = df_tmp.join(ref_db, on=join_cond_2, how="left").drop(*columns_to_drop).withColumnRenamed("Col6", "Col6a")
df_final_results = df_results.dropna(subset=["Col5a", "Col6a"])
df_final_results.write.mode("overwrite").parquet(output_folder)
def main():
ref_db = spark.read.parquet("/ref_db.parquet")
ref_db.persist()
extract_to_df(spark, ref_db)
if __name__ == "__main__":
main()
</code></pre>
<p>What is wrong with the code?</p>
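<p>For context, the "broadcast" in the log message is Spark's automatic broadcast join of the smaller dataframe. A knob I am aware of, but have not verified as a fix, is the auto-broadcast threshold:</p>
<pre><code># Sketch only: disable automatic broadcast joins (assumption on my side that
# the hang is related to broadcasting the 414K-row reference dataframe)
spark.conf.set("spark.sql.autoBroadcastJoinThreshold", "-1")
</code></pre>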
|
<python><apache-spark><pyspark>
|
2025-08-18 09:32:29
| 1
| 15,187
|
Rayne
|
79,738,395
| 219,153
|
How to reduce horizontal padding in this matplotlib plot?
|
<p>I'm trying to eliminate horizontal padding of a Matplotlib plot. Here is the script:</p>
<pre><code>import numpy as np, matplotlib.pyplot as plt
a = np.random.rand(100)
plt.figure(figsize=(12, 6), dpi=100)
plt.tight_layout()
plt.plot(a)
plt.show()
</code></pre>
<p>The result contains a lot of horizontal padding between the plot and the window:</p>
<p><a href="https://i.sstatic.net/orzjXrA4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/orzjXrA4.png" alt="enter image description here" /></a></p>
<p>I tried this alternative, but the result was the same:</p>
<pre><code>import numpy as np, matplotlib.pyplot as plt
a = np.random.rand(100)
fig, ax = plt.subplots()
fig.set_dpi(100)
fig.set_size_inches(12, 6)
ax.autoscale_view('tight')
ax.plot(a)
plt.show()
</code></pre>
<p>Is there a simple solution to this problem?</p>
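<p>A minimal sketch of the kind of adjustment I suspect is needed, assuming <code>ax.margins</code> and <code>fig.subplots_adjust</code> are the relevant knobs (untested on my side):</p>
<pre><code>import numpy as np, matplotlib.pyplot as plt
a = np.random.rand(100)
fig, ax = plt.subplots(figsize=(12, 6), dpi=100)
ax.plot(a)
ax.margins(x=0)                              # no padding between the data and the spines
fig.subplots_adjust(left=0.04, right=0.99)   # shrink the figure-level margins
plt.show()
</code></pre>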
|
<python><matplotlib>
|
2025-08-18 07:40:13
| 1
| 8,585
|
Paul Jurczak
|
79,738,197
| 11,168,635
|
Python: How to add variable in run statement?
|
<p>My code in VS Code looks like this:</p>
<pre><code>import requests
import json
import os
os.environ = "website.abc.com"
header={"Accept":application/json", "Authorization": f"Bearer {os.environ['MY_API_KEY']}"
#then a parameter list={}
#other code...
</code></pre>
<p>I am using Windows, and my other files run with no problems, but when I try to add the API key, the system prints an error message related to the <code>os.environ</code> variable. This is how I run it:</p>
<pre><code># assume MY_API_KEY = 'abcd1234'
>python myfile.py MY_API_KEY='abcd1234'
</code></pre>
<p>I searched for answers online then I tried this:</p>
<pre><code>>set MY_API_KEY = 'abcd1234' && myfile.py
</code></pre>
<p>then I searched again and tried this:</p>
<pre><code>>MY_API_KEY='abcd1234' python myfile.py
</code></pre>
<p>and the error always highlights the <code>MY_API_KEY</code> variable with this message: frozen os in getitem. I looked up the definition of the error, which is 'you are trying to access an environment variable that is not currently set within the environment where your Python script is running', but I am not sure how it is in a different environment if I am setting it at run time.</p>
<p>Is there a way to set the key value in my run statement on Windows? If not, how can I send my API key without hard-coding it? Thanks for your help.</p>
|
<python><windows><visual-studio-code><environment-variables>
|
2025-08-18 00:41:55
| 1
| 323
|
ithoughtso
|
79,738,189
| 3,934,049
|
Spire.XLS with pyinstaller
|
<p>I am using Spire.XLS in my Python script. When the script is executed from VS Code, everything works as expected. I created an executable using PyInstaller and added the specific DLLs:</p>
<pre><code>pyinstaller --noconfirm --onefile --console --add-data "I:\_DEV\Scripts\venv\Lib\site-packages\spire\xls\lib\Spire.Xls.Base.dll;." --add-data "I:\_DEV\Scripts\venv\Lib\site-packages\spire\xls\bin\Spire.XLS.dll;." --add-data "I:\_DEV\Scripts\jvenv\Lib\site-packages\spire\xls\lib\libSkiaSharp.dll;." "I:\_DEV\Scripts\XXXX_Report.py"
</code></pre>
<p>When the executable is running, I get an error:</p>
<p><strong>An exception of type AttributeError occurred. Arguments:
("'NoneType' object has no attribute 'Workbook_CreateWorkbook'",)</strong></p>
|
<python><pyinstaller><spire.xls>
|
2025-08-18 00:15:29
| 1
| 1,491
|
goryef
|
79,738,129
| 11,499,270
|
Cannot access R or Python objects in R and Python Quarto Document
|
<p>I am using Quarto and VS Code to try to run Python and R together. I'd like to be able to access R objects in Python or Python objects in R. If I render the document below, it works as expected. However, if I am running in interactive mode, i.e. running each chunk while developing the code, I cannot access objects from the other language. It seems they do not share the same environment. Is this possible, or am I misunderstanding the abilities of Quarto and reticulate?</p>
<pre><code>---
title: "Untitled"
format: html
---
```{r}
library(reticulate)
reticulate::use_virtualenv("/Users/xxx/.pyenv/versions/xxx")
```
```{r}
x <- 123
```
```{python}
y = r.x * 2
print(f"Python y = {y}")
```
```{r}
# Access Python variable from R
cat("R accessing Python y:", py$y, "\n")
# Create an R data frame
df_r <- data.frame(
a = 1:3,
b = c("x", "y", "z")
)
print(df_r)
```
```{python}
# Access R data frame from Python
import pandas as pd
df_python = r.df_r
print("Python accessing R data frame:")
print(df_python)
print(f"Type: {type(df_python)}")
```
</code></pre>
<p>For reference, I am running Quarto in VS Code.</p>
|
<python><r><quarto>
|
2025-08-17 21:54:39
| 0
| 759
|
acircleda
|
79,738,112
| 3,745,908
|
Open3D - memory usage increase on showing webcam stream
|
<p>I am trying to visualize a webcam feed on a plane in Open3D (testable code below).
After I run the script, the memory usage starts increasing non-stop after 3-5 minutes.
When I comment out the <code>modify_geometry_material</code> line, it works fine (no memory increase is noticed), although the texture is not updated until I move the camera.
I tried <code>self.app.post_to_main_thread(self.vis, lambda: self.vis.modify_geometry_material(self.geometries[0]['name'], self.geometries[0]['material']))</code>; the texture is updated accordingly, but memory usage still increases after a few minutes.
Any ideas on how to achieve this real-time VideoCapture texture on a 3D plane?</p>
<pre><code>import open3d as o3d
import numpy as np
import cv2
import open3d.visualization.rendering as rendering
import open3d.visualization.gui as gui
import time
import threading
def create_video_plane(width, height):
# Vertices for a rectangle in the XY plane, centered at (0,0,0)
vertices = np.array([
[-width/2, -height/2, 0],
[ width/2, -height/2, 0],
[ width/2, height/2, 0],
[-width/2, height/2, 0]
])
triangles = np.array([
[0, 1, 2],
[2, 3, 0]
])
# UV coordinates for each vertex (must match vertices order)
uvs = np.array([
[0, 0], # vertex 0
[1, 0], # vertex 1
[1, 1], # vertex 2
[0, 1], # vertex 3
])
mesh = o3d.geometry.TriangleMesh()
mesh.vertices = o3d.utility.Vector3dVector(vertices)
mesh.triangles = o3d.utility.Vector3iVector(triangles)
mesh.triangle_uvs = o3d.utility.Vector2dVector([
[0, 0], [1, 0], [1, 1], # triangle 0
[1, 1], [0, 1], [0, 0] # triangle 1
])
mesh.compute_vertex_normals()
return mesh
class TestApp:
def __init__(self):
self.rgb_frame_width = 3840
self.rgb_frame_height = 2160
self.ratio = self.rgb_frame_width / self.rgb_frame_height
self.window_width = 1280
self.window_height = int(self.window_width / self.ratio)
self.app = o3d.visualization.gui.Application.instance
self.app.initialize()
self.vis = o3d.visualization.O3DVisualizer("Test")
self.app.add_window(self.vis)
self.frame = None
self.image_texture = o3d.geometry.Image(np.zeros((self.rgb_frame_height, self.rgb_frame_width, 3), dtype=np.uint8))
self.plane = create_video_plane(self.rgb_frame_width, self.rgb_frame_height)
self.plane.translate([0, 0, self.rgb_frame_width/100])
self.image_added = False
self.cap = cv2.VideoCapture(0)
self.cap.set(cv2.CAP_PROP_FRAME_WIDTH, self.rgb_frame_width)
self.cap.set(cv2.CAP_PROP_FRAME_HEIGHT, self.rgb_frame_height)
self.cap.set(cv2.CAP_PROP_FPS, 30)
self.cap.set(cv2.CAP_PROP_AUTOFOCUS, 0)
self.is_running = True
self._thread = threading.Thread(target=self.video_loop, daemon=True)
self._thread.start()
video_material = rendering.MaterialRecord()
video_material.shader = "defaultUnlit"
video_material.base_color = [1.0, 1.0, 1.0, 1.0]
video_material.albedo_img = self.image_texture
self.geometries = []
self.geometries.append({"name":"video_plane", "geometry":self.plane, "material":video_material})
threading.Thread(target=self.update_thread, daemon=True).start()
self.app.run()
def video_loop(self):
while self.is_running:
ret, frame = self.cap.read()
if ret:
frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
frame = np.ascontiguousarray(frame)
self.frame = frame # Store only the raw frame
time.sleep(1.0 / 30)
def update_thread(self):
while(self.frame is None):
time.sleep(0.1)
while(self.frame is not None):
self.geometries[0]['material'].albedo_img = o3d.geometry.Image(self.frame)
if not self.image_added:
self.vis.add_geometry(self.geometries[0]['name'], self.geometries[0]['geometry'], self.geometries[0]['material'])
self.image_added = True
else:
self.app.post_to_main_thread(self.vis, lambda: self.vis.modify_geometry_material(self.geometries[0]['name'], self.geometries[0]['material']))
#self.vis.modify_geometry_material(self.geometries[0]['name'], self.geometries[0]['material'])
time.sleep(1/20)
app = TestApp()
</code></pre>
|
<python><opencv><open3d>
|
2025-08-17 21:20:19
| 1
| 1,290
|
Mben.
|
79,737,751
| 905,845
|
Static type for "any SQLAlchemy ORM class with an integer field named 'id'"
|
<p>How can I give a static type to a function that should accept "any SQLAlchemy ORM class with an integer field named 'id'"?</p>
<p>Below are my attempts, commented with the <code>mypy</code> errors I got. I had to resort to <code>id: Any</code> but I still hope I can do better.</p>
<p>I can't modify the ORM classes, so I can't add an intermediate <code>class BaseWithId</code> between <code>Base</code> and <code>Foo</code>.</p>
<p>I'm open to solutions, workarounds, or explanations why it's impossible.</p>
<p>I am using Python 3.12.3 and SQLAlchemy 2.0.41.</p>
<pre class="lang-py prettyprint-override"><code>from typing import Any, Protocol, TypeVar
import sqlalchemy as sql
from sqlalchemy import orm
class HasIntId(Protocol):
# id: int # Argument 1 to "where" of "Select" has incompatible type "bool" ... [arg-type]
# id: orm.Mapped[int] # Value of type variable "T" of "f" cannot be "Foo" [type-var]
# id: orm.attributes.InstrumentedAttribute[int] # Value of type variable "T" of "f" cannot be "Foo" [type-var]
id: Any
T = TypeVar('T', bound=HasIntId)
def f(model: type[T]) -> sql.Select[tuple[T]]:
return sql.select(model).where(model.id > 42).order_by(model.id.desc())
class Base(orm.DeclarativeBase):
pass
class Foo(Base):
__tablename__ = 'foos'
id: orm.Mapped[int] = orm.mapped_column(primary_key=True)
print(f(Foo))
</code></pre>
|
<python><sqlalchemy><python-typing>
|
2025-08-17 09:41:27
| 1
| 668
|
jacquev6
|
79,737,733
| 18,349,319
|
Upload a large file to Minio from Minio objects
|
<p>I have the following case:</p>
<pre><code>Get a certain number (N) of objects from Minio, create a zip archive from them, and upload that zip to Minio as one object.
</code></pre>
<p>Problem:</p>
<ol>
<li>I have many objects that are up to 40 GB in size</li>
<li>I can't load all object bytes into memory; the server has 4 GB of RAM</li>
<li>The server hard drive is 240 GB</li>
</ol>
<p>I use miniopy-async to work with Minio.</p>
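<p>To make the constraints concrete, here is the kind of flow I have in mind (a rough sketch using the synchronous <code>minio</code> client for brevity, although I actually use miniopy-async; endpoint, credentials, and names are placeholders):</p>
<pre><code>import zipfile
from minio import Minio

client = Minio("minio.example.com", access_key="...", secret_key="...")

def zip_objects(bucket, object_names, target_object, tmp_path="/tmp/archive.zip"):
    # Spool the zip to disk (240 GB available) instead of RAM (only 4 GB)
    with zipfile.ZipFile(tmp_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for name in object_names:
            resp = client.get_object(bucket, name)
            try:
                with zf.open(name, "w") as dest:              # stream into the archive
                    for chunk in resp.stream(8 * 1024 * 1024):
                        dest.write(chunk)
            finally:
                resp.close()
                resp.release_conn()
    # fput_object performs a multipart upload for large files
    client.fput_object(bucket, target_object, tmp_path)
</code></pre>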
<p>Does anyone have any ideas?</p>
|
<python><large-files><minio><large-file-upload>
|
2025-08-17 09:04:01
| 1
| 345
|
TASK
|
79,737,582
| 1,940,534
|
Using SeleniumBase to wait for an image to become visible
|
<p>I have a table header element:</p>
<pre><code><th><input type="image" src="../../..//images/icons/cell_state_header_icon.png" onclick="javascript:__doPostBack('ctl00$left_pane$gvSelectedFeatures','Sort$Status')" style="border-width:0px;"></th>
</code></pre>
<p>and I am trying to "WAIT" for this element to be displayed, using the code below:</p>
<pre><code>sb.wait_for_element_present("//*[/html/body/form/div[3]/div/div[3]/div/div[5]/div[2]/div/div[2]/table/tbody/tr[1]/th[1]/input=cell_state_header_icon.png]", timeout=None)
</code></pre>
<p>However, this does not seem to wait. Do I have the selector formatted correctly?</p>
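<p>For comparison, this is what I guess a well-formed selector might look like (just an assumption on my part, not verified):</p>
<pre><code># Hypothetical: match the input by its src attribute instead
sb.wait_for_element_visible('//th/input[contains(@src, "cell_state_header_icon.png")]', timeout=60)
</code></pre>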
|
<python><selector><seleniumbase>
|
2025-08-17 01:19:02
| 1
| 1,217
|
robm
|
79,737,525
| 13,968,392
|
Connection string to read and write database with different engines
|
<p>In polars, there are the engines <code>connectorx</code> and <code>adbc</code> for <code>pl.read_database_uri</code>, and <code>sqlalchemy</code> and <code>adbc</code> for <code>pl.write_database</code>. In the case of a PostgreSQL database: is it possible to use the same connection string for these engines when the connection string looks as follows?</p>
<pre><code>"postgresql://scott:tiger@localhost:5432/mydatabase?options=-c%20default_transaction_read_only%3DTrue"
</code></pre>
<p>The essential part is that read_only should be True.
I just know that <code>adbc</code> accepts the part <code>options=-c%20default_transaction_read_only%3DTrue</code>. If it is not possible to use such a connection string for these engines, what would be the closest alternative?</p>
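<p>For reference, this is roughly how I intend to use the same URI in both directions (table and query names are just placeholders):</p>
<pre><code>import polars as pl

uri = "postgresql://scott:tiger@localhost:5432/mydatabase?options=-c%20default_transaction_read_only%3DTrue"

df = pl.read_database_uri("SELECT * FROM my_table", uri, engine="connectorx")
df.write_database("my_table_copy", uri, engine="adbc")
</code></pre>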
|
<python><database><postgresql><python-polars><adbc>
|
2025-08-16 22:30:05
| 3
| 2,117
|
mouwsy
|
79,737,395
| 2,654,773
|
Correct input for OpenAI embeddings API?
|
<p>I'm using the OpenAI text-embedding-3-small model to create embeddings for each product category in a file. In total it's about 6000 product categories, and they look like this:</p>
<pre><code>Vehicles & Parts > Vehicle Parts & Accessories > Vehicle Safety & Security > Off-Road & All-Terrain Vehicle Protective Gear
Vehicles & Parts > Vehicle Parts & Accessories > Vehicle Safety & Security > Off-Road & All-Terrain Vehicle Protective Gear > ATV & UTV Bar Pads
Vehicles & Parts > Vehicle Parts & Accessories > Vehicle Safety & Security > Vehicle Alarms & Locks
Vehicles & Parts > Vehicle Parts & Accessories > Vehicle Safety & Security > Vehicle Alarms & Locks > Automotive Alarm Accessories
Vehicles & Parts > Vehicle Parts & Accessories > Vehicle Safety & Security > Vehicle Alarms & Locks > Automotive Alarm Systems
Vehicles & Parts > Vehicle Parts & Accessories > Vehicle Safety & Security > Vehicle Alarms & Locks > Motorcycle Alarms & Locks
</code></pre>
<p>For each line in that file, I'm using the following code to generate an embedding:</p>
<pre><code>from openai import OpenAI
client = OpenAI()
response = client.embeddings.create(
input="Vehicles & Parts > Vehicle Parts & Accessories > Vehicle Safety & Security > Vehicle Alarms & Locks",
model="text-embedding-3-small",
encoding_format="float",
dimensions=512
)
</code></pre>
<p>I'm storing the embeddings in a vector database (Cosmos DB for MongoDB). I'm running a vector similarity search on the DB in order to help customers find the best possible category for their entered product title. The similarity search works very well, but sometimes I'm getting bad results.
For example, when I search for "Pinus Sylvestris", which is the name of a plant, I'm getting an entirely wrong product category suggested.</p>
<p><strong>My question:</strong>
Is it OK to pass the product category in that hierarchical representation (with the > character) into the model?
Is there a way to tell the model that this is a product category for an e-commerce website, so that it understands the input better?</p>
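<p>For example, is prefixing the input with a short description the right way to provide that context (just a guess on my part)?</p>
<pre><code>response = client.embeddings.create(
    input="E-commerce product category: Vehicles & Parts > Vehicle Parts & Accessories > Vehicle Safety & Security > Vehicle Alarms & Locks",
    model="text-embedding-3-small",
    encoding_format="float",
    dimensions=512
)
</code></pre>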
<p>Edit:
Adding the query code:</p>
<pre><code>from openai import OpenAI
from pymongo import MongoClient
import sys
MONGODB_CON_STR=XXXXXX
db = MongoClient(MONGODB_CON_STR)["shop"]
client = OpenAI()
def get_vector_for_text(input:str):
response = client.embeddings.create(
input=input,
model="text-embedding-3-small",
encoding_format="float",
dimensions=512
)
return response.data[0].embedding
for line in sys.stdin:
queryVector = get_vector_for_text(line)
res = db["product_taxonomy"].aggregate([
{
"$search": {
"cosmosSearch": {
"vector": queryVector,
"path": "vector",
"k": 2
},
"returnStoredSource": True }},
{
"$project": { "similarityScore": {
"$meta": "searchScore" },
"document" : "$$ROOT"
}
}
]);
while res.alive:
for doc in res:
print(f'\tsimilarityScore: {doc["similarityScore"]} {doc["document"]["text"]}')
print('\n')
</code></pre>
|
<python><artificial-intelligence><openai-api><azure-openai><openaiembeddings>
|
2025-08-16 18:16:52
| 0
| 3,873
|
eztam
|
79,737,165
| 9,669,142
|
Starting server to get Cesium terrain height only works one time
|
<p>I have multiple datasets with coordinates in them and I want to plot these points in Cesium for Unreal. For me to do that, I need the correct terrain heights, and I want to have them in Python (since I'm doing some other stuff as well). Here I'm a bit stuck.</p>
<p>There is no Cesium API that seems to be available for this task, so I used Python to create an HTML file, put the coordinates in there and retrieve the terrain heights:</p>
<pre><code>import json
import threading
import time
from flask import Flask, request, send_file
import webbrowser
import tempfile
import os
def get_terrain_heights(points, access_token):
results = []
app = Flask(__name__)
temp_html_file = tempfile.NamedTemporaryFile(delete=False, suffix=".html")
temp_html_file.close()
html_content = f"""
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Cesium Terrain Fetch</title>
<link href="https://cdn.jsdelivr.net/npm/cesium@1.105.1/Build/Cesium/Widgets/widgets.css" rel="stylesheet">
<script src="https://cdn.jsdelivr.net/npm/cesium@1.105.1/Build/Cesium/Cesium.js"></script>
<style>
html, body, #cesiumContainer {{width:100%; height:100%; margin:0; padding:0; overflow:hidden;}}
</style>
</head>
<body>
<div id="cesiumContainer"></div>
<script>
Cesium.Ion.defaultAccessToken = '{access_token}';
const viewer = new Cesium.Viewer('cesiumContainer', {{
terrainProvider: Cesium.createWorldTerrain()
}});
const points = {json.dumps(points)};
let results = [];
function sendToPython(data) {{
fetch('/submit', {{
method: 'POST',
headers: {{'Content-Type': 'application/json'}},
body: JSON.stringify(data)
}});
}}
points.forEach(p => {{
const position = Cesium.Cartographic.fromDegrees(p.lon, p.lat);
Cesium.sampleTerrainMostDetailed(viewer.terrainProvider, [position]).then(updatedPositions => {{
const terrainHeight = updatedPositions[0].height;
const ellipsoidHeight = terrainHeight + p.alt_above_ground;
results.push({{
lat: p.lat,
lon: p.lon,
alt_above_ground: p.alt_above_ground,
terrain_height: terrainHeight,
alt_ellipsoid: ellipsoidHeight
}});
if (results.length === points.length) {{
sendToPython(results);
alert('Terrain heights fetched. You can close this window.');
}}
}});
}});
</script>
</body>
</html>
"""
with open(temp_html_file.name, "w") as f:
f.write(html_content)
@app.route('/')
def index():
return send_file(temp_html_file.name)
@app.route('/submit', methods=['POST'])
def submit():
nonlocal results
results = request.json
return {"status": "ok"}
def run_app():
app.run(port=5000)
thread = threading.Thread(target=run_app, daemon=True)
thread.start()
webbrowser.open("http://127.0.0.1:5000/")
while not results:
time.sleep(0.5)
# Delete the temporary file only after getting results
try:
os.unlink(temp_html_file.name)
except:
pass
return results
# Usage:
points = [{"lat": 52.3109, "lon": 4.77028, "alt_above_ground": 0},
{"lat": 52.3109, "lon": 4.57028, "alt_above_ground": 0}]
access_token = "access_token"
terrain_data = get_terrain_heights(points, access_token)
</code></pre>
<p>This actually works really well: it returns a list of dictionaries with the information I need. But it only works once. The second time I want to use this code it gives me the error:</p>
<p>FileNotFoundError: [WinError 2] The system cannot find the file specified: 'C:\Users\username\AppData\Local\Temp\tmp4y1vst8b.html'</p>
<p>This is logical, since it removes the temporary file after use. However, when I remove that piece of code, it still does not work: it keeps running in the Python kernel and nothing happens, no data is retrieved.</p>
<p>Can someone assist with how to handle this? I want a solution where I simply run the function, it retrieves the information I need, and then it's done. If I want to run it again one minute later with other information, it should be able to do that.</p>
|
<python><flask>
|
2025-08-16 11:49:52
| 0
| 567
|
Fish1996
|
79,736,635
| 6,068,731
|
Post a local video file as a REEL using Instagram API in Python
|
<p>I am able to post a video from a public URL as a REEL:</p>
<pre class="lang-py prettyprint-override"><code>import requests
token = "..."
api_version="v23.0"
headers = {
"Content-Type": "application/json",
"Authorization": f"Bearer {token}"
}
# Get my user ID
id_url = f"https://graph.instagram.com/{api_version}/me"
id_response = requests.get(id_url, headers=headers)
user_id = id_response.json()['id']
# Upload a video on a public URL as a REEL
url = f"https://graph.instagram.com/{api_version}/{user_id}/media"
video_url = "https://download.samplelib.com/mp4/sample-5s.mp4"
payload = {
"video_url": video_path,
"media_type": "REELS",
}
response = requests.post(
url=url,
headers=headers,
data=payload
)
post_id = response.json()['id']
# wait until video has been uploaded ...
publish_url = f"https://graph.instagram.com/{api_version}/{user_id}/media_publish"
payload = {
"creation_id": post_id
}
response = requests.post(publish_url, headers=headers, data=payload)
reel_id = response.json()['id']
</code></pre>
<p>However, I cannot upload a local file. I've tried following <a href="https://developers.facebook.com/docs/instagram-platform/content-publishing" rel="nofollow noreferrer">the documentation</a> but without any luck. I currently have an App with Instagram Login.</p>
|
<python><facebook-graph-api><python-requests><instagram-api><instagram-graph-api>
|
2025-08-15 16:25:43
| 0
| 728
|
Physics_Student
|
79,736,447
| 6,930,340
|
polars map_batches return_dtype argument for arrays
|
<p>I am applying a user defined function (UDF) to a <code>polars</code> dataframe using the <code>map_batches</code> function (c.p. <a href="https://docs.pola.rs/user-guide/expressions/user-defined-python-functions/#combining-multiple-column-values" rel="nofollow noreferrer">https://docs.pola.rs/user-guide/expressions/user-defined-python-functions/#combining-multiple-column-values</a>).
The UDF returns an array of type <code>int8</code>.</p>
<p>It seems that from polars <code>v1.32</code> on the argument <code>return_dtype</code> is required for the <code>map_batches</code> function (c.p. <a href="https://docs.pola.rs/api/python/stable/reference/expressions/api/polars.Expr.map_batches.html" rel="nofollow noreferrer">https://docs.pola.rs/api/python/stable/reference/expressions/api/polars.Expr.map_batches.html</a>)</p>
<p>I am struggling to correctly define the <code>return_dtype</code> argument. I tried things like <code>return_dtype=pl.Array[pl.Int8]</code>. However, this does not work.</p>
<p>EDIT:
Before <code>polars v1.32</code>, the schema of my dataframe was as follows (only showing the relevant column here):</p>
<pre><code>Schema([(...),
('signals', Array(Int8, shape=(9,)))])
</code></pre>
<p>The <code>signals</code> column will be retrieved from a UDF, which returns a numpy array. The UDF will be applied to groups of the dataframe (i.e. there's an <code>over</code> operation involved).</p>
|
<python><dataframe><user-defined-functions><python-polars>
|
2025-08-15 13:00:49
| 1
| 5,167
|
Andi
|
79,736,262
| 3,179,698
|
Clear output then lose control of IPython debugger in jupyterhub notebook
|
<p>So I import a module:</p>
<pre><code>from IPython.core.debugger import Pdb
</code></pre>
<p>Then I initiate an object for debugging:</p>
<pre><code>ipdb = Pdb()
</code></pre>
<p>I define a function to test debug mode:</p>
<pre><code>def f(a=1):
ipdb.set_trace()
print(a)
</code></pre>
<p>After I run the function:</p>
<pre><code>f()
</code></pre>
<p>I am in debug mode, but if someone clears the output or switches the cell from code to markdown or NBConvert, I lose control of the debug mode. To make things worse, the interrupt key no longer works (which is a very helpful weapon if a cell runs forever). Is there a way to get back control of the debugger, or at least quit debug mode, instead of killing the whole kernel?</p>
|
<python><jupyter-notebook><jupyterhub>
|
2025-08-15 08:52:13
| 0
| 1,504
|
cloudscomputes
|
79,735,957
| 1,747,834
|
How to make setup.py invoke C++ compiler instead of C?
|
<p>I have no problem building native modules when using Python 3.11 on my FreeBSD desktop – invoking <code>distutils.core.Extension</code> with <code>language = 'c++'</code> is enough to have the C++ compiler invoked.</p>
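<p>For reference, a minimal sketch of the kind of declaration I'm using (module and file names are placeholders):</p>
<pre><code>from distutils.core import setup, Extension

setup(
    name="mymodule",
    ext_modules=[
        # language='c++' is enough on the FreeBSD/Python 3.11 setup
        Extension("mymodule", sources=["mymodule.cpp"], language="c++"),
    ],
)
</code></pre>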
<p>However, on the RHEL7 servers Python is much older – 3.6 – and the included <code>distutils</code> always invokes <code>gcc</code>, even with <code>language = 'c++'</code> and when the source filenames end with <code>.cpp</code>. The only way to fool it, that I found, is to use <code>env CC=/path/to/g++</code>.</p>
<p>Is there a better method, perhaps?</p>
|
<python><python-3.6><distutils>
|
2025-08-14 22:43:19
| 1
| 4,246
|
Mikhail T.
|
79,735,939
| 1,747,834
|
How to combine PyVarObject_HEAD_INIT and designated initializers?
|
<p>In my own "native" Python module I declare a new type this way:</p>
<pre class="lang-c prettyprint-override"><code>PyTypeObject calculatorType = {
PyVarObject_HEAD_INIT(NULL, 0)
.tp_name = "My.Calculator",
.tp_basicsize = sizeof(PyMyCalculator),
.tp_itemsize = 0,
.tp_dealloc = (destructor)NULL,
.tp_getattr = NULL,
.tp_flags = Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE,
.tp_doc = "Python wrapper of My Calculator",
.tp_methods = calculatorMethods,
.tp_members = NULL,
.tp_init = NULL,
.tp_new = calcInitialize
};
</code></pre>
<p>This works, but CLang (the default compiler on FreeBSD) produces a warning:</p>
<pre class="lang-none prettyprint-override"><code>warning: mixture of designated and non-designated initializers in the same initializer list is a C99 extension [-Wc99-designator]
</code></pre>
<p>This is because the <code>PyVarObject_HEAD_INIT</code>-macro does <em>not</em> explicitly name the fields...</p>
<p>I like designated initializers for obvious reasons, but I don't want to trigger a useless warning either. Disabling it with <code>-Wno-c99-designator</code> would work for CLang, but on Linux we use <code>g++14</code>, which does not have this particular warning and complains:</p>
<pre class="lang-none prettyprint-override"><code>unrecognized command-line option '-Wno-c99-designator'
</code></pre>
<p>How should I deal with it? I can make adding the flag conditional based on the operating system's name, but some day we might use CLang on Linux or GNU C++ on FreeBSD...</p>
<p>Is there a way to query, from inside <code>setup.py</code>, which compiler will be invoked -- a way, that would work with Python as old as 3.6?</p>
|
<python><python-3.x><g++><clang++>
|
2025-08-14 22:15:54
| 0
| 4,246
|
Mikhail T.
|
79,735,835
| 1,747,834
|
How to add/subtract time in Python C API?
|
<p>There are many tutorials and examples out there on how to perform date-arithmetic from Python. But I cannot find anything for the Python C API...</p>
<p>My C function, callable from Python, operates on two dates: <code>start</code> and <code>finish</code>. If <code>start</code> is not explicitly specified, it must be understood as <em>one day before</em> the <code>finish</code>.</p>
<p>How do I do this?</p>
<pre class="lang-c prettyprint-override"><code> /* Attempt converting the finish-date, if not a date already */
if (!PyDate_Check(finishO)) {
o = PyDate_FromTimestamp(finishO);
if (o == NULL)
return PyErr_Format(PyExc_ValueError,
"%R is not a valid date", finishO);
finishO = o;
}
/* If start-date is not specified, assume one day prior the finish date */
if (startO == NULL) {
PyObject *oneDay = PyDelta_FromDSU(1, 0, 0);
/* Derive startO from finishO by applying the one-day delta somehow */
}
</code></pre>
|
<python><c><python-3.x>
|
2025-08-14 19:35:28
| 1
| 4,246
|
Mikhail T.
|
79,735,667
| 785,404
|
Why isn't dict[str, str] assignable to Mapping[str | int, str] (Mapping key type isn't covariant)?
|
<p>Given this code:</p>
<pre class="lang-py prettyprint-override"><code>from collections.abc import Mapping
def my_fn(m: Mapping[str | int, str]):
print(m)
d = {"a": "b"}
my_fn(d)
</code></pre>
<p>both <code>mypy</code> 1.16.0 and <code>pyright</code> 1.1.400 report that it is invalid to assign <code>d</code> to the argument <code>m</code>. For example <code>pyright</code> outputs:</p>
<pre><code>error: Argument of type "dict[str, str]" cannot be assigned to parameter "m" of type "Mapping[str | int, str]" in function "my_fn"
"dict[str, str]" is not assignable to "Mapping[str | int, str]"
Type parameter "_KT@Mapping" is invariant, but "str" is not the same as "str | int" (reportArgumentType)
</code></pre>
<p>Why is this the case? I understand why assigning <code>dict[str, str]</code> to a <code>MutableMapping[str | int, str]</code> would be bad (the callee could insert <code>int</code> keys into the <code>dict</code>), but <code>Mapping</code> is immutable.</p>
|
<python><python-typing><mypy><pyright>
|
2025-08-14 16:04:33
| 2
| 2,085
|
Kerrick Staley
|
79,735,543
| 148,423
|
How to send multipart text and form parts with FastAPI TestClient?
|
<p>I'm working against an API specification outside my control. It expects to POST <code>multipart/form-data</code> to my server. Some parts are sent as files, some are sent as text.</p>
<p>I want to write a test using <code>TestClient</code> that sends both a file and text parts.</p>
<p>I don't have access to the exact body that is posted, so I don't know exactly how the multipart form data looks. But I can parse the input from the external system with FastAPI's Request. Some parts are of type <code>UploadFile</code> and some parts are of type <code>str</code>.</p>
<p>The <a href="https://github.com/encode/starlette/blob/6ee94f2cac955eeae68d2898a8dec8cf17b48736/starlette/datastructures.py#L485" rel="nofollow noreferrer">Starlette source for <code>FormData</code></a> is a <code>ImmutableMultiDict[str, Union[UploadFile, str]]</code>. So this looks like a valid use case.</p>
<p>The following code can handle the POST:</p>
<pre class="lang-py prettyprint-override"><code>@router.post("/test")
async def test_multipart(
request: Request,
):
content_type = request.headers.get("content-type")
if content_type.startswith("multipart/form-data"):
async with request.form() as form:
for (filename, form_file) in form.items():
logger.info("Form file type: %s", str(type(form_file)))
# This is the line I want to test:
if type(form_file) is str:
print("Got data as string")
elif type(form_file) is UploadFile:
print("Got data as UploadFile")
if form_file.content_type == "application/octet-stream":
print("Got octet")
</code></pre>
<p>I can test the <code>UploadFile</code> parts easily:</p>
<pre class="lang-py prettyprint-override"><code>with TestClient(app) as client:
client.post(
"/test",
files={
"file1": ("upload.json", "STRING CONTENT"),
"file2": ("file2.pdf", "FILE CONTENT", "application/octet-stream")
}
)
</code></pre>
<p>But I want that first entry to appear as <code>str</code> not <code>UploadFile</code>. How can I do this?</p>
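<p>What I have been experimenting with (not sure it is the intended way) is passing the plain fields via <code>data=</code> next to <code>files=</code>:</p>
<pre class="lang-py prettyprint-override"><code>with TestClient(app) as client:
    client.post(
        "/test",
        data={"file1": "STRING CONTENT"},  # hopefully arrives as a plain text part
        files={
            "file2": ("file2.pdf", "FILE CONTENT", "application/octet-stream"),
        },
    )
</code></pre>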
|
<python><fastapi><multipartform-data><starlette>
|
2025-08-14 14:29:26
| 1
| 47,989
|
Joe
|
79,735,449
| 11,328,614
|
Conversion to int with Unicode strings
|
<p>I recognized that <code>int(unicode_string)</code> sometimes gives obscure results.
E.g. <code>int('᪐᭒') == 2</code>.</p>
<pre><code>>>> bytes('᪐᭒', 'utf-8')
b'\xe1\xaa\x90\xe1\xad\x92'
>>> [f'U+{ord(c):04X}' for c in '᪐᭒']
['U+1A90', 'U+1B52']
</code></pre>
<p>My expectation would be it fails, because the string does not contain a number.</p>
<p>Is there some explanation for this behaviour?</p>
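<p>Inspecting the characters with <code>unicodedata</code> (a quick sketch) shows they do carry decimal digit values, which I suppose is related:</p>
<pre><code>>>> import unicodedata
>>> [unicodedata.decimal(c) for c in '᪐᭒']
[0, 2]
</code></pre>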
|
<python><string><unicode><integer>
|
2025-08-14 13:11:52
| 1
| 1,132
|
Wör Du Schnaffzig
|
79,735,272
| 735,926
|
How to avoid repeating type hints when overriding a parent method?
|
<p>I have this code:</p>
<pre class="lang-py prettyprint-override"><code>class A:
def f(self, a: int) -> int:
raise NotImplementedError
class B(A):
def f(self, a):
return a
</code></pre>
<p>However <code>mypy --strict</code> tells me that <code>B.f</code> "is missing a type annotation". It seems obvious to me that <code>B.f</code> has the same types as <code>A.f</code>, especially given that mypy shows me an error if I try to put different ones:</p>
<pre class="lang-py prettyprint-override"><code>class B(A):
def f(self, a: bool) -> str:
return str(a)
</code></pre>
<pre><code>error: Return type "str" of "f" incompatible with return type "int" in supertype "A" [override]
error: Argument 1 of "f" is incompatible with supertype "A"; supertype defines the argument type as "int" [override]
note: This violates the Liskov substitution principle
note: See https://mypy.readthedocs.io/en/stable/common_issues.html#incompatible-overrides
</code></pre>
<p>Since mypy already knows the expected types of <code>B.f</code>, why does it force me to re-declare them? Is there some way to avoid it?</p>
|
<python><python-typing><mypy>
|
2025-08-14 10:31:02
| 0
| 21,226
|
bfontaine
|
79,735,257
| 5,810,060
|
asyncio.run() cannot be called from a running event loop
|
<p>I have looked at previous answers and none of them seem to solve my issue. I am running the below code</p>
<pre><code>from pyracing.client import Client
import asyncio
username = 'My Email Address'
password = 'My Password'
# Authentication is automated and will be initiated on first request
ir = Client(username, password)
# Example async function with hardcoded results
async def main():
seasons_list = await ir.current_seasons()
for season in seasons_list:
if season.season_id == 2846:
print(f'Schedule for {season.series_name_short}'
f' ({season.season_year} S{season.season_quarter})')
for t in season.tracks:
print(f'\tWeek {t.race_week} will take place at {t.name} ({t.config})')
asyncio.run(main())
</code></pre>
<p>and I get the below error:</p>
<pre><code>RuntimeError Traceback (most recent call last)
Cell In[1], line 23
20 for t in season.tracks:
21 print(f'\tWeek {t.race_week} will take place at {t.name} ({t.config})')
---> 23 asyncio.run(main())
File ~\AppData\Local\anaconda3\Lib\asyncio\runners.py:191, in run(main, debug, loop_factory)
161 """Execute the coroutine and return the result.
162
163 This function runs the passed coroutine, taking care of
(...)
187 asyncio.run(main())
188 """
189 if events._get_running_loop() is not None:
190 # fail fast with short traceback
--> 191 raise RuntimeError(
192 "asyncio.run() cannot be called from a running event loop")
194 with Runner(debug=debug, loop_factory=loop_factory) as runner:
195 return runner.run(main)
RuntimeError: asyncio.run() cannot be called from a running event loop
</code></pre>
<p>What I am trying to do is eliminate the runtime error.</p>
|
<python><python-3.x><runtime>
|
2025-08-14 10:13:21
| 1
| 906
|
Raul Gonzales
|
79,735,236
| 305,865
|
pypi caching in Artifactory
|
<p>we're evaluating using Artifactory as a proxy for pypi.org so we can whitelist packages for our developers, and prevent things like typosquatting.</p>
<p>I was pleasantly surprised when the initial setup of the repository cache (as described in <a href="https://jfrog.com/help/r/jfrog-artifactory-documentation/remote-repositories" rel="nofollow noreferrer">https://jfrog.com/help/r/jfrog-artifactory-documentation/remote-repositories</a>) was done in about two minutes. However, Include Patterns don't seem to work the way we thought, at least for Python packages.</p>
<ul>
<li><p>The only "documentation" for the patterns is the tooltip: <em>List of artifact patterns to include when evaluating artifact requests in the form of <code>x/y/**/z/*</code>. When used, only artifacts matching one of the include patterns are served. By default, all artifacts are included (<code>**/*</code>).</em> It seems to be Ant syntax, but even information on that is sparse online.</p>
</li>
<li><p>Blank Include and Exclude Patterns work as expected, all packages can be installed through the cache.</p>
</li>
<li><p>Adding <code>**/numpy*</code> to the Include Patterns works as expected. Only numpy can be installed; other packages like pandas cannot be installed.</p>
</li>
<li><p>However, whitelisting <code>**/numpy*</code> also allows similarly named packages like numpyx, numpyp, etc. Which isn't exactly the idea of whitelisting.</p>
</li>
<li><p>We haven't been able to find expressions that will allow precise whitelisting (even running <code>pip install --verbose numpy</code> to see all the URLs it tries to fetch).</p>
</li>
<li><p>Adding any or all of the following patterns will <strong>not</strong> allow installation of numpy.</p>
<ul>
<li><code>**/numpy*/**</code></li>
<li><code>**/numpy/*</code></li>
<li><code>**/numpy/**</code></li>
<li><code>**/numpy/</code></li>
<li><code>**/numpy</code></li>
<li><code>**/numpy-*</code></li>
<li><code>**/numpy-</code></li>
</ul>
</li>
<li><p>There's other weird symptoms:</p>
<ul>
<li><code>**/Cython*</code> does not allow installation of Cython, but <code>**/cython*</code> does. So it looks only lowercase patterns work (okay, this is probably because <code>pip install</code> shows a 'nice' name, but the url is all lowercase)</li>
<li><code>**/pyproject-metadata*</code> does not allow installation of pyproject-metadata, but <code>**/pyproject*</code> does. So it looks like dashes in package names break patterns too.</li>
</ul>
</li>
<li><p>We were hoping to do version pinning, but it doesn't look like this will be possible. Also there's no export/import functionality for lists of Include/Exclude Patterns, and maintaining them through the web interface would probably get hard to manage soon.</p>
</li>
</ul>
<p>I was under the impression that Artifactory is an enterprise-grade solution for scenarios just like this, but this looks too broken to be useful.</p>
<p>Am I missing something?
Are there other patterns?
Are there plugins that work better?
Do we need to switch to an entirely different solution?</p>
<p>Thanks.</p>
|
<python><pip><artifactory><pypi>
|
2025-08-14 09:59:08
| 1
| 784
|
Zak
|
79,734,823
| 5,429,268
|
I'm trying to get two functions to run at the same time
|
<p>I'm trying to get my script to run two functions at the same time. But what happens is function2 will not start until function1 finishes.</p>
<pre><code> import multiprocessing
process1 = multiprocessing.Process(target=function1())
process1.start()
process2 = multiprocessing.Process(target=function2())
process2.start()
process3 = multiprocessing.Process(target=function3())
process3.start()
process1.join()
process2.join()
process3.join()
</code></pre>
|
<python><python-3.x>
|
2025-08-13 23:29:13
| 1
| 563
|
BioRod
|
79,734,575
| 10,082,415
|
Retrieve per-class gradients without create_graph=True and retain_graph=True
|
<p>I am trying to keep track of the per-class gradients of 10k+ classes. Having <code>retain_graph=True</code> allows me to access them but uses a significant amount of memory. However, I am trying to figure out a way to have both <code>create_graph=False</code> and <code>retain_graph=False</code> while still having access to per-class gradients. I have attached my sample code below and am able to set <code>create_graph</code> to <code>False</code>. The line to focus on is:</p>
<p><code>self._scaler.scale(current_loss).backward(create_graph=create_graph, retain_graph=True)</code></p>
<p>Any help is appreciated. Thank you in advance!</p>
<pre><code>class NativeGrad:
state_dict_key = "amp_scaler"
def __init__(self):
self._scaler = torch.amp.GradScaler('cuda')
def __call__(self, loss, optimizer, clip_grad=None, model=None, create_graph=False, update_grad=True):
grads: list = []
for current_loss in loss:
for p in model.parameters():
if p.grad is not None:
p.grad.data.zero_()
self._scaler.scale(current_loss).backward(create_graph=create_graph, retain_graph=True) # ← ← ← retain_graph should be False here
grad = torch.cat([
p.grad.flatten().clone() if p.grad is not None else torch.zeros_like(p).flatten()
for p in model.parameters()
])
grads.append(grad)
for p in model.parameters():
if p.grad is not None:
p.grad.data.zero_()
grads_tensor = torch.stack(grads, dim=0)
# Manipulate the grads_tensor...
def state_dict(self):
return self._scaler.state_dict()
def load_state_dict(self, state_dict):
self._scaler.load_state_dict(state_dict)
</code></pre>
|
<python><pytorch>
|
2025-08-13 17:29:40
| 0
| 1,003
|
M. Al Jumaily
|
79,734,526
| 857,741
|
Combining unittest with ddt.idata
|
<p>My tests are using the <code>@idata</code> decorator from the ddt package. I want to run only one of the decorated test methods (<code>test_01...</code>) in the test class, not all of them. This conflicts with unittest's way of selecting tests: no approach <a href="https://stackoverflow.com/questions/15971735">listed there</a> works, neither specifying <code>TestClass.test_01...</code> on the command line nor passing arguments to <code>unittest.main()</code>. How can I achieve this?</p>
<p>Example:</p>
<pre><code>#!/usr/bin/env python3
import unittest
import sys
from ddt import idata, ddt
test_data_01 = [
["example1", "example2"],
["example3", "example4"],
]
test_data_02 = [
["other1", "other2"],
]
@ddt
class ExampleTest(unittest.TestCase):
@idata(test_data_01)
def test_01(self, params):
print(params)
@idata(test_data_02)
def test_02(self, params):
print(params)
if __name__ == "__main__":
sys.stdout = sys.__stdout__
unittest.main()
</code></pre>
<p>This works:</p>
<pre><code>bash-3.2$ ./ddt_problem.py
['example1', 'example2']
.['example3', 'example4']
.['other1', 'other2']
.
----------------------------------------------------------------------
Ran 3 tests in 0.005s
OK
</code></pre>
<p>The typical way of running a specific test doesn't:</p>
<pre><code>bash-3.2$ /usr/bin/python3 ./ddt_problem.py ExampleTest.test_02
Traceback (most recent call last):
File "./ddt_problem.py", line 31, in <module>
unittest.main()
File "/usr/python3.6/lib/python3.6/unittest.py", line 1296, in __init__
self.parseArgs(argv)
File "/usr/python3.6/lib/python3.6/unittest.py", line 1349, in parseArgs
self.createTests()
File "/usr/python3.6/lib/python3.6/unittest.py", line 1353, in createTests
self.module)
File "/usr/python3.6/lib/python3.6/unittest.py", line 659, in loadTestsFromNames
testCase = self.loadTestsFromName(name, module)
File "/usr/python3.6/lib/python3.6/unittest.py", line 626, in loadTestsFromName
obj = getattr(obj, part)
AttributeError: type object 'ExampleTest' has no attribute 'test_02'
</code></pre>
|
<python><unit-testing>
|
2025-08-13 16:16:41
| 0
| 6,914
|
LetMeSOThat4U
|
79,734,461
| 509,977
|
Why does Gmail API override the Date header even with internalDateSource: 'dateHeader' and deleted: true in users.messages.insert?
|
<p>I am working on a Python tool to migrate emails into Gmail while preserving the original Date header.
My goal is simply to build a CLI tool that copies email from Gmail account A to Gmail account B, preserving all data and metadata (including date and time).</p>
<p>I am using the Gmail API's users.messages.insert method, as suggested in the Google support documentation. The support states that using internalDateSource: 'dateHeader' and deleted: true should enforce the Date header from the email:
<a href="https://support.google.com/vault/answer/9987957?hl=en" rel="nofollow noreferrer">https://support.google.com/vault/answer/9987957?hl=en</a></p>
<p>Here is a minimal code example:</p>
<pre><code>from googleapiclient.discovery import build
import base64
# Initialize Gmail API client
service = build('gmail', 'v1', credentials=your_credentials)
# Raw email with a custom Date header
raw_email = """\
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
to: recipient@gmail.com
from: sender@gmail.com
subject: Test Email
Date: Tue, 13 Aug 2024 14:00:00 +0000
This is a test email.
"""
# Encode the email
raw_email_b64 = base64.urlsafe_b64encode(raw_email.encode('utf-8')).decode('utf-8')
# Insert the email using the Gmail API
body = {
'raw': raw_email_b64,
'internalDateSource': 'dateHeader',
'deleted': True
}
response = service.users().messages().insert(userId='me', body=body).execute()
# Log the response
print(response)
</code></pre>
<p>Problem:
Despite setting internalDateSource: 'dateHeader' and deleted: true, the Date header in the inserted email is overridden by the current timestamp. The original Date header is not preserved and the datetime of insert is used instead.</p>
<p>Question:
Is this behavior expected, or am I missing something in the implementation? Are there additional steps required to enforce the Date header during email insertion? Any insights or workarounds would be greatly appreciated!</p>
<p>What I tried:</p>
<ul>
<li>Verified that the Date header is correctly set in the raw email before insertion.</li>
<li>Used the internalDateSource: 'dateHeader' parameter as per the Google support suggestion.</li>
<li>Added the deleted: true parameter to the users.messages.insert method.</li>
</ul>
<p>Observations:</p>
<ul>
<li>The Gmail API still overrides the Date header with the current timestamp.</li>
<li>The X-Original-Date header workaround works, but I would prefer to rely on the Date header directly.</li>
</ul>
|
<python><email><gmail-api>
|
2025-08-13 15:13:44
| 1
| 8,679
|
Pitto
|
79,734,370
| 3,909,202
|
How can I remove all legends from a matplotlib axes with multiple legends?
|
<p>Given a matplotlib plot with more than one legend, I would like to be able to remove all legends from the plot.</p>
<p>Here is a simple example code to produce a plot with two legends:</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
fig, ax = plt.subplots()
# Plot some dummy data
x = [1, 2, 3, 4, 5]
y1 = [1, 4, 9, 16, 25]
y2 = [25, 16, 9, 4, 1]
ax.plot(x, y1, label='y = x^2')
ax.plot(x, y2, label='y = 25 - x^2')
# Add two legends
ax.legend()
handles, labels = ax.get_legend_handles_labels()
ax.get_legend().remove()
half_len = len(handles) // 2
leg1 = ax.legend(handles[:half_len], labels[:half_len], loc="upper left")
leg2 = ax.legend(handles[half_len:], labels[half_len:], loc="lower right")
ax.add_artist(leg1)
ax.add_artist(leg2)
</code></pre>
<p>We get a figure with two legends as expected:</p>
<p><a href="https://i.sstatic.net/rrtRF3kZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rrtRF3kZ.png" alt="Figure with two legends" /></a></p>
<p>Now, I want to remove these legends again.
Here's what I tried and what does not work:</p>
<pre class="lang-py prettyprint-override"><code># The following solution removes only `leg1`, also in reverse order of calls:
leg2.remove()
leg1.remove()
# The following does not remove anything:
ax.get_legend().remove()
# The following throws an error (matplotlib/artist.py:239, ValueError: list.remove(x): x not in list):
for c in ax.get_children():
if isinstance(c, matplotlib.legend.Legend):
c.remove()
</code></pre>
<p>I would prefer to have a generic solution like the last one, but for the moment, any solution would be highly appreciated.</p>
|
<python><matplotlib>
|
2025-08-13 13:55:34
| 2
| 1,379
|
BernhardWebstudio
|
79,734,301
| 3,385,382
|
PettingZoo + Ray RLlib card game
|
<p>I created a simple card game for 2 players with PettingZoo and agents are trained with Ray RLlib (PPO). Game rules are:</p>
<ul>
<li>2 players, 32 cards (7 to ace)</li>
<li>each trick both players play one card on table, higher card takes and that player plays next</li>
<li>both players have 10 cards and after each trick they both draw one card, so it's 10 again, until there are no more cards to draw, then the rest of the cards are played without drawing (so, 16 tricks in total for each episode)</li>
<li>you have to "follow the suit" (if opponent plays spades, you have too, if you have any)</li>
<li><strong>if you can't follow the suit, you can play any card, but then you never win a trick</strong></li>
</ul>
<p>The agent has been training for 10+ hours now and is playing fine, except for one weird detail:
it discards high cards (aces and kings) in positions where it can't win a trick (when it didn't follow suit).</p>
<p>I have been adjusting rewards and penalties, and the agent still does this unless I assign a ridiculously high penalty to stop it. That works, but it doesn't feel right. I am giving it a -35 penalty for discarding an ace, and it gets only +1 for taking a trick. I have an action mask on the suit when it can follow suit, but it can play any card when it can't follow. Why can't it learn that?</p>
<p>The file is long, so I am giving you the observation space. I went through everything else a zillion times, everything is debugged and seems fine. But it seems like the agent can't understand that it can't take the Q of diamonds with the A of spades.
(It was GPT's suggestion to use 0-31 for cards instead of a 2-dimensional array.)</p>
<pre><code> return spaces.Dict({
"hand": spaces.MultiBinary(32),
"opponent_known_cards": spaces.MultiBinary(32),
"talon_count": spaces.Discrete(13),
"table_card": spaces.Discrete(33),
"lead_suit": spaces.Discrete(5),
"passed_cards": spaces.MultiBinary(32),
"turn": spaces.Discrete(2),
"action_mask": spaces.MultiBinary(32),
})
</code></pre>
<p>Do you think it would be clearer for the agent to have cards represented as suits 0-3 and ranks 0-7? So (0, 7) is the ace of spades, etc.</p>
<p>I'd appreciate any suggestions or ideas.</p>
|
<python><artificial-intelligence><ray><pettingzoo>
|
2025-08-13 12:57:57
| 0
| 408
|
GMarco24
|
79,734,200
| 1,797,912
|
How to test the concrete method of an abstract class without manual subclassing?
|
<p>I have an abstract Python class with a concrete method:</p>
<pre><code>class AbstractFoo(abc.ABC):
def append_something(self, text: str) -> str:
return text + self.create_something(len(text))
@abc.abstractmethod
def create_something(self, number: int) -> str:
raise NotImplementedError()
# IMPORTANT: many other abstract methods irrelevant for the test omitted here
</code></pre>
<p>Given that there are many abstract methods, I'm looking for a way to test the concrete method <em>without</em> having to manually subclass and implement all these methods. I was hoping to use a mock instance of the class somehow.</p>
<ul>
<li><p>First failed attempt:</p>
<pre><code># Without this, `AbstractFoo` cannot be instantiated.
AbstractFoo.__abstractmethods__ = frozenset()
mock_foo = unittest.mock.Mock(wraps=AbstractFoo())
mock_foo.create_something.return_value = "bar"
assert mock_foo.append_something("foo") == "foobar"
mock_foo.create_something.assert_called_once_with(3)
</code></pre>
<p>Even though I try to mock the <code>create_something</code> method return value, the <code>NotImplementedError</code> from <code>AbstractFoo.create_something</code> is raised when <code>mock_foo.append_something("foo")</code> is called.</p>
</li>
<li><p>Second failed attempt</p>
<pre><code>class MockFoo(AbstractFoo):
def create_something(self, number: int) -> str:
return "bar"
MockFoo.__abstractmethods__ = frozenset()
mock_foo = unittest.mock.Mock(wraps=MockFoo())
mock_foo.create_something.side_effect = lambda *args, **kwargs: unittest.mock.DEFAULT
assert mock_foo.append_something("foo") == "foobar"
mock_foo.create_something.assert_called_once_with(3)
</code></pre>
<p>This time I get:</p>
<pre><code>AssertionError: Expected 'create_something' to be called once. Called 0 times.
</code></pre>
</li>
</ul>
<p>Questions:</p>
<ol>
<li><p>In both of my failed attempts, the <code>mock_foo.create_something</code> mock does not seem to be used. Why's that?</p>
</li>
<li><p>How can I test the <code>append_something</code> method of <code>AbstractFoo</code> without having to manually implement all the abstract methods of <code>AbstractFoo</code> in a dummy subclass?</p>
</li>
</ol>
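<p>For clarity, this is roughly the shape of test I am hoping to end up with (a hypothetical sketch, not something I have verified):</p>
<pre><code>import unittest.mock

# Sketch: use a spec'd mock as `self` and call the concrete method unbound,
# so no abstract methods ever need real implementations.
mock_foo = unittest.mock.Mock(spec=AbstractFoo)
mock_foo.create_something.return_value = "bar"

assert AbstractFoo.append_something(mock_foo, "foo") == "foobar"
mock_foo.create_something.assert_called_once_with(3)
</code></pre>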
|
<python><abstract-class>
|
2025-08-13 10:54:29
| 2
| 16,550
|
Chriki
|
79,734,128
| 15,980,284
|
using pydantic.logfire sending data to grafana-otel-container
|
<p>I am using logfire to send traces, logs and metrics to the grafana-otel container. However, in the Grafana UI (reachable under http://localhost:3000, login: user admin, password admin), only traces and metrics but no logs are shown in the Drilldown menu.</p>
<p>Here is my docker-compose to set up grafana-otel:</p>
<pre><code>volumes:
  grafana:
driver: local
loki:
driver: local
prometheus:
driver: local
services:
alternative-backend-opentelemetry:
image: grafana/otel-lgtm
network_mode: host
ports:
- 4317:4317
- 4318:4318
- 3000:3000
environment:
- ENABLE_LOGS_LOKI=true
- OTEL_METRIC_EXPORT_INTERVAL=500
- GF_PATHS_DATA=/data/grafana
volumes:
- grafana:/data/grafana
- prometheus:/data/prometheus
- loki:/data/loki
</code></pre>
<p>and here is an example of my app.py to send the data via logfire:</p>
<pre><code>import os

os.environ["OTEL_METRIC_EXPORT_INTERVAL"]="5000"
os.environ["OTEL_EXPORTER_OTLP_PROTOCOL"]="http/protobuf"
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"]="http://localhost:4318"
os.environ["LOGFIRE_HTTPX_CAPTURE_ALL"]="true"
import logfire
import time
logfire.configure(
service_name='service_name',
send_to_logfire=False,
)
logfire.info("User logged in")
with logfire.span("Processing request {request_id}", request_id=123):
# Code within this block is traced within the span
logfire.info("Started processing")
# ... more code ...
logfire.info("Finished processing")
def my_logger():
logfire.info("I'm logging..")
if __name__ == "__main__":
while True:
my_logger()
time.sleep(20)
</code></pre>
|
<python><grafana><pydantic><open-telemetry>
|
2025-08-13 09:38:56
| 0
| 1,303
|
JKupzig
|
79,733,788
| 11,462,274
|
How to rename and keep the window name fixed without updating when refreshing webpage? (Like Name window function in Chrome and Edge Browser)
|
<p>My current code opens the desired URL and renames the window to the string I prefer, but as soon as the open page refreshes or navigates, my personalized name is lost and the window shows the web page title again:</p>
<pre class="lang-python prettyprint-override"><code>import pygetwindow as gw
import subprocess
import ctypes
import time
def open_and_rename(url: str, new_title: str):
windows_before = set(gw.getAllTitles())
subprocess.Popen([r"C:\Program Files (x86)\Microsoft\Edge\Application\msedge.exe", '--new-window', url])
time.sleep(1)
windows_after = set(gw.getAllTitles())
new_windows = windows_after - windows_before
if new_windows:
new_window_title = list(new_windows)[0]
window = gw.getWindowsWithTitle(new_window_title)[0]
hwnd = window._hWnd
ctypes.windll.user32.SetWindowTextW(hwnd, new_title)
else:
print("new window not found")
open_and_rename("https://www.google.com/", "Test")
</code></pre>
<p>But my desire is that it works exactly like this function of browsers Chrome and Edge:</p>
<p><a href="https://i.sstatic.net/19nuqtx3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/19nuqtx3.png" alt="enter image description here" /></a></p>
<p>With this option you rename the window and the value stays fixed regardless of what you do in the browser (you find this option by right-clicking the top of the window, or by right-clicking the window thumbnail that appears when hovering over it in the taskbar).</p>
<p>Is there a way to do this with <code>Python + Windows 11</code> without simulating mouse movement, clicks, and keyboard typing from Python?</p>
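<p>For reference, the only workaround I can think of so far is a brute-force polling loop that keeps re-applying the title whenever the browser overwrites it (untested sketch below, using plain Win32 calls), but I am hoping for a persistent solution like the browsers' built-in "Name window" feature:</p>
<pre class="lang-python prettyprint-override"><code>import ctypes
import time

def keep_title(hwnd: int, new_title: str, interval: float = 1.0):
    """Re-apply the custom title whenever the browser changes it (brute force)."""
    buf = ctypes.create_unicode_buffer(512)
    while True:
        ctypes.windll.user32.GetWindowTextW(hwnd, buf, 512)
        if buf.value != new_title:
            ctypes.windll.user32.SetWindowTextW(hwnd, new_title)
        time.sleep(interval)
</code></pre>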
|
<python><window><rename><chromium><windows-11>
|
2025-08-13 02:05:32
| 0
| 2,222
|
Digital Farmer
|
79,733,777
| 3,163,618
|
Spelling Bee optimal word set search
|
<p>The NYT Spelling Bee game is a word game, where given 7 letters including 1 "center letter", you find words at least 4 letters long that must use the "center letter". The scoring is as follows: 4-letter words get 1 point, >4-letter words get their length in points, and pangrams (using all 7 letters) gets +7 points.</p>
<p>Often when I play, I think about what set of letters would generate the maximum score. <a href="https://www.reddit.com/r/NYTSpellingBee/comments/163n16l/project_find_the_spelling_bee_letterset_that/" rel="nofollow noreferrer">This reddit post</a> takes a greedy backwards stepwise approach, starting with the letter set of all letters, then greedily tries removing one letter and rescoring the subset. I think the end result is near-optimal, if not optimal, but I wanted to know if there was a way to find the optimal solution.</p>
<p>My idea was modifying the method to use uniform-cost search (essentially Dijkstra's algorithm), so starting with the node representing the full letter set and exploring the neighbors that remove one letter. The max priority queue will always consider the neighbor in the frontier with the highest score, and the search ends when reaching the first node with 7 letters in the letter set.</p>
<p>The issue with this approach is that many intermediate nodes have the same score, so it ends up searching a huge amount of the 2^26 possible nodes anyway. I don't know of an admissible heuristic with A* that would help.</p>
<p>One trick I came up with to test if a word satisfies a given letter set is to preprocess words as letter bitmasks, where the i-th bit is set if the word contains the i-th letter of the alphabet. Then if <code>word | mask == mask</code>, the word only uses the letters in the mask.</p>
<p>I wrote two sample implementations in python/pypy and C++, but neither finished within an hour. So how can this be solved efficiently?</p>
<p>The sample implementation right now doesn't consider pangrams or the center letter (which may cut down on some searching).</p>
<p>The word list can be generated with <code>grep -P "^[a-z]{4,}$" /usr/share/dict/words > words.txt</code>.</p>
<pre><code>from string import ascii_lowercase
S = ascii_lowercase # alphabet Σ
with open("words.txt") as f:
L0 = f.read().splitlines() # language
# filter out words with more than 7 distinct letters
L = list(filter(lambda w: len(set(w)) <= 7, L0))
print(len(L0), len(L))
print(L[:20])
# for fun, make words into bitsets of letters: every word is a 26-bit integer
# the game letterset is a 26-bit mask
bitsets = [0] * len(L)
for i in range(len(L)):
for c in L[i]:
bitsets[i] |= 1 << (ord(c) - ord('a'))
if i < 20:
print(L[i].ljust(16), bin(bitsets[i])[2:].zfill(26))
# simple counting for now
def word_score(word):
if len(word) == 4:
return 1
else:
return len(word)
def mask_score(mask):
s = 0
for i in range(len(L)):
if mask | bitsets[i] == mask:
s += word_score(L[i])
return -s
from heapq import heappop, heappush
# Graph search with Dijkstra's algorithm (but modified to calculate node weights)
start_mask = (1 << len(S)) - 1
start_node = (mask_score(start_mask), start_mask)
pq = [start_node]
visited = set()
while True:
score, mask = heappop(pq)
print(score, "".join(S[i] if mask & (1 << i) else ' ' for i in range(26)))
if mask.bit_count() == 7:
break
# Add unvisited neighbors to queue
for i in range(len(S)):
if mask & (1 << i):
new_mask = mask & ~ (1 << i)
if new_mask in visited:
continue
new_score = mask_score(new_mask)
new_node = (new_score, new_mask)
heappush(pq, new_node)
visited.add(new_mask)
</code></pre>
|
<python><algorithm><search><a-star><discrete-optimization>
|
2025-08-13 01:25:30
| 1
| 11,524
|
qwr
|
79,733,745
| 5,631,449
|
How Do I Open A File in a Subclass of BufferedIOBase in Python
|
<p>I want to create my own reader class that is derived from BufferedIOBase. I'd like to have multiple 'open' class methods with differing parameters to create class instances, like this:</p>
<pre><code>from __future__ import annotations
import io
class MyReader(io.BufferedIOBase):
@classmethod
def open1(cls, filename:str) -> MyReader:
r = MyReader()
# How do I open 'r'?
return r
@classmethod
def open2(cls, buffer:bytes) -> MyReader:
...
def __init__(self):
super().__init__()
with MyReader.open1("filename") as reader:
reader.read()
</code></pre>
<p>How do I actually open the file in my class? I can't use "r = open(...)" in place of "r = MyReader()" because that won't create a "MyReader" object, and there's no available method in the MyReader class or its superclasses to do an open.</p>
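<p>For illustration, the only approach I can think of is to keep an internal file object and delegate to it (sketch below; <code>_inner</code> is just a name I made up), but I am not sure this is the intended way to build a <code>BufferedIOBase</code> subclass:</p>
<pre><code>import io

class MyReader(io.BufferedIOBase):
    @classmethod
    def open1(cls, filename: str) -> "MyReader":
        r = cls()
        r._inner = open(filename, "rb")  # wrap a real binary file object
        return r

    def readable(self) -> bool:
        return True

    def read(self, size: int = -1) -> bytes:
        return self._inner.read(size)  # delegate to the wrapped stream

    def close(self) -> None:
        self._inner.close()
        super().close()
</code></pre>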
|
<python><io>
|
2025-08-13 00:27:05
| 0
| 1,006
|
BoCoKeith
|
79,733,680
| 388,520
|
`add_geometries` - how to automatically update extent of axis to include the geometry
|
<p>I want to draw shapes on a cartopy <code>GeoAxis</code> and have the extent of the plot automatically update so that all of the shapes are fully visible. Simply calling <code>ax.add_geometries([shapes], crs, ...)</code> does not update the plot extent to include the shape. What do I need to do to get the plot extent updated? I can't just use <code>ax.set_extent</code> on the union of all the shapes' bounding boxes because that won't take the projection into account.</p>
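<p>For illustration, this is roughly the situation (a sketch; <code>shapes</code> and <code>crs</code> stand in for my actual geometries and their coordinate system), including the naive bounding-box approach I mentioned that ignores the projection:</p>
<pre><code>import cartopy.crs as ccrs
import matplotlib.pyplot as plt
from shapely.geometry import box
from shapely.ops import unary_union

shapes = [box(-10, 40, 20, 60)]  # placeholder geometries
crs = ccrs.PlateCarree()         # placeholder data CRS

ax = plt.axes(projection=ccrs.Robinson())
ax.add_geometries(shapes, crs, facecolor="none", edgecolor="black")

# Naive attempt: union of bounds in the data CRS. This does not account for
# the axes projection, so it is not what I want.
minx, miny, maxx, maxy = unary_union(shapes).bounds
ax.set_extent([minx, maxx, miny, maxy], crs=crs)
plt.savefig("shapes.png")
</code></pre>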
|
<python><cartopy>
|
2025-08-12 21:40:33
| 3
| 142,389
|
zwol
|
79,733,411
| 639,361
|
Azure Cache for Redis randomly throws WRONGPASS
|
<p>I have a Django website running as an Azure Web App, which is using Azure Cache for Redis.</p>
<p>We use a Managed Identity to connect to Azure Cache for Redis so we do not need to use access keys.</p>
<p>The application itself runs under gunicorn, so as multiple workers.</p>
<p>There is a health check, being called by Azure Web App platform itself, for which I have this middleware:</p>
<pre class="lang-py prettyprint-override"><code>import logging
import os
import signal
from django.conf import settings
from django.core.cache import cache
from django.db import connection
from django.http import HttpResponse
from django_redis import get_redis_connection
from tenacity import retry, stop_after_attempt, wait_random_exponential
from .azure_helper import RedisCredentials, get_db_password, get_redis_credentials
logger = logging.getLogger("pulumi_django_azure.health_check")
class HealthCheckMiddleware:
def __init__(self, get_response):
self.get_response = get_response
def _self_heal(self):
logger.warning("Self-healing by gracefully restarting Gunicorn.")
master_pid = os.getppid()
logger.debug("Master PID: %d", master_pid)
# https://docs.gunicorn.org/en/latest/signals.html
# Reload a new master with new workers,
# since the application is preloaded this is the only safe way for now.
os.kill(master_pid, signal.SIGUSR2)
# Gracefully shutdown the current workers
os.kill(master_pid, signal.SIGTERM)
def _test_redis_connection(self) -> bool:
try:
cache.set("health_check", "test")
return True
except Exception:
return False
@retry(stop=stop_after_attempt(3), wait=wait_random_exponential(multiplier=0.5, max=5))
def _update_redis_credentials(self, redis_credentials: RedisCredentials):
logger.debug("Updating Redis credentials with password: %s", redis_credentials.password)
# Re-authenticate the Redis connection
redis_connection = get_redis_connection("default")
redis_connection.execute_command("AUTH", redis_credentials.username, redis_credentials.password)
settings.CACHES["default"]["OPTIONS"]["PASSWORD"] = redis_credentials.password
def __call__(self, request):
if request.path == settings.HEALTH_CHECK_PATH:
# Update the database credentials if needed
if settings.AZURE_DB_PASSWORD:
try:
current_db_password = settings.DATABASES["default"]["PASSWORD"]
new_db_password = get_db_password()
if new_db_password != current_db_password:
logger.debug("Database password has changed, updating credentials")
settings.DATABASES["default"]["PASSWORD"] = new_db_password
# Close existing connections to force reconnect with new password
connection.close()
else:
logger.debug("Database password unchanged, keeping existing credentials")
except Exception as e:
logger.error("Failed to update database credentials: %s", str(e))
self._self_heal()
return HttpResponse(status=503)
# Update the Redis credentials if needed
if settings.AZURE_REDIS_CREDENTIALS:
try:
current_redis_password = settings.CACHES["default"]["OPTIONS"]["PASSWORD"]
redis_credentials = get_redis_credentials()
if redis_credentials.password != current_redis_password:
logger.debug("Redis password has changed, updating credentials")
self._update_redis_credentials(redis_credentials)
elif not self._test_redis_connection():
logger.debug("Redis connection check failed, updating credentials")
self._update_redis_credentials(redis_credentials)
else:
logger.debug("Redis password unchanged and connection check passed, keeping existing credentials")
except Exception as e:
logger.error("Failed to update Redis credentials: %s", str(e))
self._self_heal()
return HttpResponse(status=503)
try:
# Test the database connection
connection.ensure_connection()
logger.debug("Database connection check passed")
# Test the Redis connection
cache.set("health_check", "test")
logger.debug("Redis connection check passed")
return HttpResponse("OK")
except Exception as e:
logger.error("Health check failed with unexpected error: %s", str(e))
self._self_heal()
return HttpResponse(status=503)
return self.get_response(request)
</code></pre>
<p>So essentially, we retrieve an auth token from the Azure management API, and if that password has changed OR the connection test fails, we perform an AUTH command.</p>
<p>Now, randomly we see that our self-healing mechanisms kicks in (which we added because of all the issues we had and we want to keep the site running while we figure this out).
After adding more logging, as can be seen in the above code, we see in the logs:</p>
<pre><code>2025-08-12 13:56:54,647 DEBUG [p:1293] [t:123417039960960] Updating Redis credentials with password: eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6IkpZaEFjVFBNWl9MWDZEQmxPV1E3SG4wTmVYRSIsImtpZCI6IkpZaEFjVFBNWl9MWDZEQmxPV1E3SG4wTmVYRSJ9.eyJhdWQiOiJodHRwczovL3JlZGlzLmF6dXJlLmNvbSIsImlzcyI6Imh0dHBzOi8vc3RzLndpbmRvd3MubmV0L2UyOThmYWJhLTdjNGEtNGQzZS1iZGQ2LTg3ZDA5OGI0ZjdlMi8iLCJpYXQiOjE3NTQ5OTk0NTksIm5iZiI6MTc1NDk5OTQ1OSwiZXhwIjoxNzU1MDg2MTU5LCJhaW8iOiJrMlJnWU5nLzYvY2JXN3ZGWmgwdERvOC9zbmJ3QVFBPSIsImFwcGlkIjoiN2YxY2UzOWItZmVmOS00NDAwLTg0NDMtYzhjMGVlMDc2ZGJlIiwiYXBwaWRhY3IiOiIyIiwiaWRwIjoiaHR0cHM6Ly9zdHMud2luZG93cy5uZXQvZTI5OGZhYmEtN2M0YS00ZDNlLWJkZDYtODdkMDk4YjRmN2UyLyIsImlkdHlwIjoiYXBwIiwib2lkIjoiZTlmYWE3ZjUtZWQ4MC00MjhmLWFkNzMtMjM4MzdmNTFmZTVlIiwicmgiOiIxLkFRd0F1dnFZNGtwOFBrMjkxb2ZRbUxUMzRydGZ5cXprdHdsQWdmRTM0NF9XYlhqZEFBQU1BQS4iLCJzdWIiOiJlOWZhYTdmNS1lZDgwLTQyOGYtYWQ3My0yMzgzN2Y1MWZlNWUiLCJ0aWQiOiJlMjk4ZmFiYS03YzRhLTRkM2UtYmRkNi04N2QwOThiNGY3ZTIiLCJ1dGkiOiIwdDltQThYeVJVeVpzWWozOTEwYUFBIiwidmVyIjoiMS4wIiwieG1zX2Z0ZCI6InIwdGUwZVZ6TURXV0xJaThJcm5LMXpmRWJ3bDNqdEpnYTk4RzlQZm1CblVCYzNkbFpHVnVZeTFrYzIxeiIsInhtc19pZHJlbCI6IjggNyIsInhtc19yZCI6IjAuNDJMbFlCSmk1QkVTNGVBVUVtQkFBMEJSRGlHQjNmR24xanpaenVuUS1OSDhjZi0xM0FvQSJ9.nQTOjqYjyyfjKU5b4Y9fSWMYrkggCRRCeQAF4dhCTcuw-vUOpbLbBeRAJ1nM987L6qlOW03guLfENyGlPCdGEpCwDZrlo0HA13i35qeEklxKUUvieBu_rSz9lDBPrRLiPWDKANzgLZ9JuDj6ZpdH7RqVazK1Ygn-aIZ-J1iYPzgY-XCA0vMcCyHTwDDlNsOAq7uuvQwdDu9cUzbQk5bUwE7tWiyvr6cbzImuROAIAv6ZmxwfxrTBgUV7KcEwLP_vmMFipgvjqJDiydHtq7rERPSq0ixVspbZZCM0wsW56XZ5WjfXW2lfBnqRdTOeFhMTtJAQBDgOgijsIfRSup902Q
2025-08-12 13:56:54,852 DEBUG [p:1293] [t:123417039960960] Database connection check passed
2025-08-12 13:56:54,856 DEBUG [p:1293] [t:123417039960960] Redis connection check passed
2025-08-12 13:57:54,562 DEBUG [p:1294] [t:123417039960960] Database password unchanged, keeping existing credentials
2025-08-12 13:57:54,565 DEBUG [p:1294] [t:123417039960960] Redis password unchanged and connection check passed, keeping existing credentials
2025-08-12 13:57:54,565 DEBUG [p:1294] [t:123417039960960] Database connection check passed
2025-08-12 13:57:54,570 DEBUG [p:1294] [t:123417039960960] Redis connection check passed
2025-08-12 13:58:54,760 DEBUG [p:1297] [t:123417039960960] Database password has changed, updating credentials
2025-08-12 13:58:54,788 DEBUG [p:1297] [t:123417039960960] Redis password has changed, updating credentials
2025-08-12 13:58:54,795 DEBUG [p:1297] [t:123417039960960] Updating Redis credentials with password: eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6IkpZaEFjVFBNWl9MWDZEQmxPV1E3SG4wTmVYRSIsImtpZCI6IkpZaEFjVFBNWl9MWDZEQmxPV1E3SG4wTmVYRSJ9.eyJhdWQiOiJodHRwczovL3JlZGlzLmF6dXJlLmNvbSIsImlzcyI6Imh0dHBzOi8vc3RzLndpbmRvd3MubmV0L2UyOThmYWJhLTdjNGEtNGQzZS1iZGQ2LTg3ZDA5OGI0ZjdlMi8iLCJpYXQiOjE3NTQ5OTk0NTksIm5iZiI6MTc1NDk5OTQ1OSwiZXhwIjoxNzU1MDg2MTU5LCJhaW8iOiJrMlJnWU5nLzYvY2JXN3ZGWmgwdERvOC9zbmJ3QVFBPSIsImFwcGlkIjoiN2YxY2UzOWItZmVmOS00NDAwLTg0NDMtYzhjMGVlMDc2ZGJlIiwiYXBwaWRhY3IiOiIyIiwiaWRwIjoiaHR0cHM6Ly9zdHMud2luZG93cy5uZXQvZTI5OGZhYmEtN2M0YS00ZDNlLWJkZDYtODdkMDk4YjRmN2UyLyIsImlkdHlwIjoiYXBwIiwib2lkIjoiZTlmYWE3ZjUtZWQ4MC00MjhmLWFkNzMtMjM4MzdmNTFmZTVlIiwicmgiOiIxLkFRd0F1dnFZNGtwOFBrMjkxb2ZRbUxUMzRydGZ5cXprdHdsQWdmRTM0NF9XYlhqZEFBQU1BQS4iLCJzdWIiOiJlOWZhYTdmNS1lZDgwLTQyOGYtYWQ3My0yMzgzN2Y1MWZlNWUiLCJ0aWQiOiJlMjk4ZmFiYS03YzRhLTRkM2UtYmRkNi04N2QwOThiNGY3ZTIiLCJ1dGkiOiIwdDltQThYeVJVeVpzWWozOTEwYUFBIiwidmVyIjoiMS4wIiwieG1zX2Z0ZCI6InIwdGUwZVZ6TURXV0xJaThJcm5LMXpmRWJ3bDNqdEpnYTk4RzlQZm1CblVCYzNkbFpHVnVZeTFrYzIxeiIsInhtc19pZHJlbCI6IjggNyIsInhtc19yZCI6IjAuNDJMbFlCSmk1QkVTNGVBVUVtQkFBMEJSRGlHQjNmR24xanpaenVuUS1OSDhjZi0xM0FvQSJ9.nQTOjqYjyyfjKU5b4Y9fSWMYrkggCRRCeQAF4dhCTcuw-vUOpbLbBeRAJ1nM987L6qlOW03guLfENyGlPCdGEpCwDZrlo0HA13i35qeEklxKUUvieBu_rSz9lDBPrRLiPWDKANzgLZ9JuDj6ZpdH7RqVazK1Ygn-aIZ-J1iYPzgY-XCA0vMcCyHTwDDlNsOAq7uuvQwdDu9cUzbQk5bUwE7tWiyvr6cbzImuROAIAv6ZmxwfxrTBgUV7KcEwLP_vmMFipgvjqJDiydHtq7rERPSq0ixVspbZZCM0wsW56XZ5WjfXW2lfBnqRdTOeFhMTtJAQBDgOgijsIfRSup902Q
2025-08-12 13:58:55,099 DEBUG [p:1297] [t:123417039960960] Database connection check passed
2025-08-12 13:58:55,102 DEBUG [p:1297] [t:123417039960960] Redis connection check passed
2025-08-12 13:59:54,606 DEBUG [p:1292] [t:123417039960960] Database password has changed, updating credentials
2025-08-12 13:59:54,644 DEBUG [p:1292] [t:123417039960960] Redis password has changed, updating credentials
2025-08-12 13:59:54,644 DEBUG [p:1292] [t:123417039960960] Updating Redis credentials with password: eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6IkpZaEFjVFBNWl9MWDZEQmxPV1E3SG4wTmVYRSIsImtpZCI6IkpZaEFjVFBNWl9MWDZEQmxPV1E3SG4wTmVYRSJ9.eyJhdWQiOiJodHRwczovL3JlZGlzLmF6dXJlLmNvbSIsImlzcyI6Imh0dHBzOi8vc3RzLndpbmRvd3MubmV0L2UyOThmYWJhLTdjNGEtNGQzZS1iZGQ2LTg3ZDA5OGI0ZjdlMi8iLCJpYXQiOjE3NTQ5OTk0NTksIm5iZiI6MTc1NDk5OTQ1OSwiZXhwIjoxNzU1MDg2MTU5LCJhaW8iOiJrMlJnWU5nLzYvY2JXN3ZGWmgwdERvOC9zbmJ3QVFBPSIsImFwcGlkIjoiN2YxY2UzOWItZmVmOS00NDAwLTg0NDMtYzhjMGVlMDc2ZGJlIiwiYXBwaWRhY3IiOiIyIiwiaWRwIjoiaHR0cHM6Ly9zdHMud2luZG93cy5uZXQvZTI5OGZhYmEtN2M0YS00ZDNlLWJkZDYtODdkMDk4YjRmN2UyLyIsImlkdHlwIjoiYXBwIiwib2lkIjoiZTlmYWE3ZjUtZWQ4MC00MjhmLWFkNzMtMjM4MzdmNTFmZTVlIiwicmgiOiIxLkFRd0F1dnFZNGtwOFBrMjkxb2ZRbUxUMzRydGZ5cXprdHdsQWdmRTM0NF9XYlhqZEFBQU1BQS4iLCJzdWIiOiJlOWZhYTdmNS1lZDgwLTQyOGYtYWQ3My0yMzgzN2Y1MWZlNWUiLCJ0aWQiOiJlMjk4ZmFiYS03YzRhLTRkM2UtYmRkNi04N2QwOThiNGY3ZTIiLCJ1dGkiOiIwdDltQThYeVJVeVpzWWozOTEwYUFBIiwidmVyIjoiMS4wIiwieG1zX2Z0ZCI6InIwdGUwZVZ6TURXV0xJaThJcm5LMXpmRWJ3bDNqdEpnYTk4RzlQZm1CblVCYzNkbFpHVnVZeTFrYzIxeiIsInhtc19pZHJlbCI6IjggNyIsInhtc19yZCI6IjAuNDJMbFlCSmk1QkVTNGVBVUVtQkFBMEJSRGlHQjNmR24xanpaenVuUS1OSDhjZi0xM0FvQSJ9.nQTOjqYjyyfjKU5b4Y9fSWMYrkggCRRCeQAF4dhCTcuw-vUOpbLbBeRAJ1nM987L6qlOW03guLfENyGlPCdGEpCwDZrlo0HA13i35qeEklxKUUvieBu_rSz9lDBPrRLiPWDKANzgLZ9JuDj6ZpdH7RqVazK1Ygn-aIZ-J1iYPzgY-XCA0vMcCyHTwDDlNsOAq7uuvQwdDu9cUzbQk5bUwE7tWiyvr6cbzImuROAIAv6ZmxwfxrTBgUV7KcEwLP_vmMFipgvjqJDiydHtq7rERPSq0ixVspbZZCM0wsW56XZ5WjfXW2lfBnqRdTOeFhMTtJAQBDgOgijsIfRSup902Q
2025-08-12 13:59:55,021 DEBUG [p:1292] [t:123417039960960] Updating Redis credentials with password: eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6IkpZaEFjVFBNWl9MWDZEQmxPV1E3SG4wTmVYRSIsImtpZCI6IkpZaEFjVFBNWl9MWDZEQmxPV1E3SG4wTmVYRSJ9.eyJhdWQiOiJodHRwczovL3JlZGlzLmF6dXJlLmNvbSIsImlzcyI6Imh0dHBzOi8vc3RzLndpbmRvd3MubmV0L2UyOThmYWJhLTdjNGEtNGQzZS1iZGQ2LTg3ZDA5OGI0ZjdlMi8iLCJpYXQiOjE3NTQ5OTk0NTksIm5iZiI6MTc1NDk5OTQ1OSwiZXhwIjoxNzU1MDg2MTU5LCJhaW8iOiJrMlJnWU5nLzYvY2JXN3ZGWmgwdERvOC9zbmJ3QVFBPSIsImFwcGlkIjoiN2YxY2UzOWItZmVmOS00NDAwLTg0NDMtYzhjMGVlMDc2ZGJlIiwiYXBwaWRhY3IiOiIyIiwiaWRwIjoiaHR0cHM6Ly9zdHMud2luZG93cy5uZXQvZTI5OGZhYmEtN2M0YS00ZDNlLWJkZDYtODdkMDk4YjRmN2UyLyIsImlkdHlwIjoiYXBwIiwib2lkIjoiZTlmYWE3ZjUtZWQ4MC00MjhmLWFkNzMtMjM4MzdmNTFmZTVlIiwicmgiOiIxLkFRd0F1dnFZNGtwOFBrMjkxb2ZRbUxUMzRydGZ5cXprdHdsQWdmRTM0NF9XYlhqZEFBQU1BQS4iLCJzdWIiOiJlOWZhYTdmNS1lZDgwLTQyOGYtYWQ3My0yMzgzN2Y1MWZlNWUiLCJ0aWQiOiJlMjk4ZmFiYS03YzRhLTRkM2UtYmRkNi04N2QwOThiNGY3ZTIiLCJ1dGkiOiIwdDltQThYeVJVeVpzWWozOTEwYUFBIiwidmVyIjoiMS4wIiwieG1zX2Z0ZCI6InIwdGUwZVZ6TURXV0xJaThJcm5LMXpmRWJ3bDNqdEpnYTk4RzlQZm1CblVCYzNkbFpHVnVZeTFrYzIxeiIsInhtc19pZHJlbCI6IjggNyIsInhtc19yZCI6IjAuNDJMbFlCSmk1QkVTNGVBVUVtQkFBMEJSRGlHQjNmR24xanpaenVuUS1OSDhjZi0xM0FvQSJ9.nQTOjqYjyyfjKU5b4Y9fSWMYrkggCRRCeQAF4dhCTcuw-vUOpbLbBeRAJ1nM987L6qlOW03guLfENyGlPCdGEpCwDZrlo0HA13i35qeEklxKUUvieBu_rSz9lDBPrRLiPWDKANzgLZ9JuDj6ZpdH7RqVazK1Ygn-aIZ-J1iYPzgY-XCA0vMcCyHTwDDlNsOAq7uuvQwdDu9cUzbQk5bUwE7tWiyvr6cbzImuROAIAv6ZmxwfxrTBgUV7KcEwLP_vmMFipgvjqJDiydHtq7rERPSq0ixVspbZZCM0wsW56XZ5WjfXW2lfBnqRdTOeFhMTtJAQBDgOgijsIfRSup902Q
2025-08-12 13:59:55,775 DEBUG [p:1292] [t:123417039960960] Updating Redis credentials with password: eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6IkpZaEFjVFBNWl9MWDZEQmxPV1E3SG4wTmVYRSIsImtpZCI6IkpZaEFjVFBNWl9MWDZEQmxPV1E3SG4wTmVYRSJ9.eyJhdWQiOiJodHRwczovL3JlZGlzLmF6dXJlLmNvbSIsImlzcyI6Imh0dHBzOi8vc3RzLndpbmRvd3MubmV0L2UyOThmYWJhLTdjNGEtNGQzZS1iZGQ2LTg3ZDA5OGI0ZjdlMi8iLCJpYXQiOjE3NTQ5OTk0NTksIm5iZiI6MTc1NDk5OTQ1OSwiZXhwIjoxNzU1MDg2MTU5LCJhaW8iOiJrMlJnWU5nLzYvY2JXN3ZGWmgwdERvOC9zbmJ3QVFBPSIsImFwcGlkIjoiN2YxY2UzOWItZmVmOS00NDAwLTg0NDMtYzhjMGVlMDc2ZGJlIiwiYXBwaWRhY3IiOiIyIiwiaWRwIjoiaHR0cHM6Ly9zdHMud2luZG93cy5uZXQvZTI5OGZhYmEtN2M0YS00ZDNlLWJkZDYtODdkMDk4YjRmN2UyLyIsImlkdHlwIjoiYXBwIiwib2lkIjoiZTlmYWE3ZjUtZWQ4MC00MjhmLWFkNzMtMjM4MzdmNTFmZTVlIiwicmgiOiIxLkFRd0F1dnFZNGtwOFBrMjkxb2ZRbUxUMzRydGZ5cXprdHdsQWdmRTM0NF9XYlhqZEFBQU1BQS4iLCJzdWIiOiJlOWZhYTdmNS1lZDgwLTQyOGYtYWQ3My0yMzgzN2Y1MWZlNWUiLCJ0aWQiOiJlMjk4ZmFiYS03YzRhLTRkM2UtYmRkNi04N2QwOThiNGY3ZTIiLCJ1dGkiOiIwdDltQThYeVJVeVpzWWozOTEwYUFBIiwidmVyIjoiMS4wIiwieG1zX2Z0ZCI6InIwdGUwZVZ6TURXV0xJaThJcm5LMXpmRWJ3bDNqdEpnYTk4RzlQZm1CblVCYzNkbFpHVnVZeTFrYzIxeiIsInhtc19pZHJlbCI6IjggNyIsInhtc19yZCI6IjAuNDJMbFlCSmk1QkVTNGVBVUVtQkFBMEJSRGlHQjNmR24xanpaenVuUS1OSDhjZi0xM0FvQSJ9.nQTOjqYjyyfjKU5b4Y9fSWMYrkggCRRCeQAF4dhCTcuw-vUOpbLbBeRAJ1nM987L6qlOW03guLfENyGlPCdGEpCwDZrlo0HA13i35qeEklxKUUvieBu_rSz9lDBPrRLiPWDKANzgLZ9JuDj6ZpdH7RqVazK1Ygn-aIZ-J1iYPzgY-XCA0vMcCyHTwDDlNsOAq7uuvQwdDu9cUzbQk5bUwE7tWiyvr6cbzImuROAIAv6ZmxwfxrTBgUV7KcEwLP_vmMFipgvjqJDiydHtq7rERPSq0ixVspbZZCM0wsW56XZ5WjfXW2lfBnqRdTOeFhMTtJAQBDgOgijsIfRSup902Q
2025-08-12 13:59:55,959 ERROR [p:1292] [t:123417039960960] Failed to update Redis credentials: RetryError[<Future at 0x703f38c02d70 state=finished raised ResponseError>]
2025-08-12 13:59:55,970 WARNING [p:1292] [t:123417039960960] Self-healing by gracefully restarting Gunicorn.
</code></pre>
<p>So the health check arrives every minute on a different process (chosen at random); in that process the logic kicks in, sees that the password has changed, and then the AUTH fails.
It is no longer visible in these logs, but the error from Redis is WRONGPASS.</p>
<p>So the credentials we freshly obtained from Azure are not valid in PID 1292 in this example, while they were valid three minutes earlier in PID 1293.</p>
<p>I have enabled full logging on Azure Cache for Redis hoping to spot something, but while awaiting those findings, I am hoping to find an answer here.</p>
|
<python><azure><redis>
|
2025-08-12 15:44:36
| 0
| 475
|
Maarten Ureel
|
79,733,367
| 7,995,302
|
rpy2 on Windows prints “Error importing in API mode … only ABI” — how to force ABI mode and suppress the message across Flask and pytest?
|
<p>I’m developing a Flask app on Windows that calls R via rpy2. On every run or during pytest collection I see this message:</p>
<pre><code>Error importing in API mode: ImportError('On Windows, cffi mode "ANY" is only "ABI".')
Trying to import in ABI mode.
</code></pre>
<p>I understand this isn’t a crash; rpy2 falls back to ABI and works. I’d like to reliably force ABI mode (which is recommended on Windows) and suppress this message in both my app and tests.</p>
<p><strong>Environment Details</strong></p>
<ul>
<li>OS: Windows 10 (build 26100)</li>
<li>Python: 3.12.10 (venv)</li>
<li>R: 4.5.1</li>
<li>rpy2: [please fill your exact version]</li>
<li>Flask + pytest</li>
<li>python-dotenv for reading .env</li>
</ul>
<p><strong>What I expect</strong></p>
<ul>
<li>rpy2 to use ABI mode on Windows without printing the API/ABI message</li>
<li>A consistent way to apply this across app</li>
</ul>
<p><strong>What I’ve tried</strong></p>
<ul>
<li>.env approach (preferred)</li>
</ul>
<p><code>RPY2_CFFI_MODE=ABI</code></p>
<ul>
<li>Code-level env</li>
</ul>
<pre><code>import os
os.environ["RPY2_CFFI_MODE"] = "ABI"
# then import rpy2
</code></pre>
<p>But the message is still there, even after clearing caches.</p>
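<p>I am also considering setting the variable in a top-level <code>conftest.py</code>, so it is applied before any test module imports rpy2 (sketch):</p>
<pre><code># conftest.py in the project root -- set the mode before rpy2 is imported anywhere
import os

os.environ.setdefault("RPY2_CFFI_MODE", "ABI")
</code></pre>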
|
<python><windows><flask><rpy2><cffi>
|
2025-08-12 15:06:41
| 0
| 961
|
Naser Nikzad
|
79,733,202
| 2,413,767
|
Trying to use Autofac in python 3.12 is triggering cannot be converted to type error
|
<p>I'm tasked with converting some legacy Python code to Python 3.12. Before, it was using .DotNet to import .NET DLLs; now, due to version constraints, I'm using clr. However, where the old code worked, I'm having trouble resolving a SimpleJsonSerializer.</p>
<p>This is the error:</p>
<pre><code>"Object of type 'System.RuntimeType' cannot be converted to type 'System.Collections.Generic.IEnumerable`1[Autofac.Core.Parameter]'."
</code></pre>
<p>That is being triggered by this code:</p>
<pre><code>import clr
clr.AddReference("Autofac")
clr.AddReference("MyNamespace")
from MyNamespace import SimpleJsonSerializer
from System.Collections.Generic import List
import Autofac
from Autofac import ContainerBuilder
from Autofac.Core import Parameter
def setUpSerializer(self):
builder = ContainerBuilder()
container = builder.Build()
serializer_type = clr.GetClrType(SimpleJsonSerializer)
scope = container.BeginLifetimeScope()
registrations = container.ComponentRegistry.Registrations
empty_parameters = Parameter
serializer = registration.Activator.ActivateInstance(scope, empty_parameters)
for registration in registrations:
try:
registered_type = registration.Activator.LimitType
if registered_type == serializer_type:
serializer = registration.Activator.ActivateInstance(scope, empty_parameters)
break
except Exception as e:
print(f"Error inspecting registration: {e}")
</code></pre>
<p>Copilot has tied itself in a knot suggesting I use</p>
<pre><code>empty_parameters = ListParameter
serializer = registration.Activator.ActivateInstance(scope, empty_parameters)
</code></pre>
<p>But ListParameter isn't defined anywhere and I don't know where to get it.</p>
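<p>My best guess at what Copilot meant is an empty .NET list of <code>Parameter</code> built with pythonnet's generic syntax (untested sketch):</p>
<pre><code>from System.Collections.Generic import List
from Autofac.Core import Parameter

# An empty IEnumerable<Parameter> to pass to ActivateInstance
empty_parameters = List[Parameter]()
</code></pre>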
<p>Any help is much appreciated.</p>
|
<python><autofac><clr>
|
2025-08-12 12:45:56
| 0
| 1,011
|
John
|
79,733,200
| 6,386,155
|
How do I load joblib file on spark?
|
<p>I have the following code. It reads a pre-existing file for an ML model. I am trying to run it on Databricks for multiple cases.</p>
<pre><code>import numpy as np
import pandas as pd
import joblib
class WeightedEnsembleRegressor:
"""
Holds models, their weights, scaler and feature order for prediction & persistence.
"""
import joblib
def __init__(self, trained_models, model_weights, scaler, feature_order):
self.trained_models = trained_models
self.model_weights = model_weights
self.scaler = scaler
self.feature_order = feature_order
def save(self, path):
joblib.dump(self, path)
@staticmethod
def load(path):
return joblib.load(path)
def Process(df):
import traceback
import xgboost
import pickle
import joblib
import sys
#print(sys.version)
data={}
data['msg']=[""]
try:
ensemble = WeightedEnsembleRegressor.load('final_model_v3_1.joblib')
data['msg'] = ["success" + f"""{sys.version} """+ f"""{joblib.__version__} """ + f"""{pickle.compatible_formats} """]
except Exception:
data['msg'] = ["fail"+ f"""{sys.version} """ + f"""{joblib.__version__} """ + f"""{pickle.compatible_formats} """+f"""{traceback.format_exc()}""" + f"""{xgboost.__version__}""" + f"""{catboost.__version__}"""]
return pd.DataFrame.from_dict(data,orient='index').transpose()
</code></pre>
<p>When I run this like below</p>
<pre><code>display(Process(df))
</code></pre>
<p>It runs fine with following message</p>
<pre><code>success3.11.10 (main, Sep 7 2024, 18:35:41) [GCC 11.4.0] 1.2.0 ['1.0', '1.1', '1.2', '1.3', '2.0', '3.0', '4.0', '5.0']
</code></pre>
<p>However, when I run like this</p>
<pre><code>display(pDf.groupBy('num').applyInPandas(Process,'msg string'))
</code></pre>
<p>it fails with message</p>
<pre><code>fail3.11.10 (main, Sep 7 2024, 18:35:41) [GCC 11.4.0] 1.2.0 ['1.0', '1.1', '1.2', '1.3', '2.0', '3.0', '4.0', '5.0'] Traceback (most recent call last):
File "/home/spark-bf403da4-5cb5-46b6-b647-9a/.ipykernel/41375/command-8846548913142126-766074700", line 13, in Process
File "/home/spark-bf403da4-5cb5-46b6-b647-9a/.ipykernel/41375/command-8846548913142125-2478242552", line 31, in load
File "/databricks/python/lib/python3.11/site-packages/joblib/numpy_pickle.py", line 658, in load
obj = _unpickle(fobj, filename, mmap_mode)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/databricks/python/lib/python3.11/site-packages/joblib/numpy_pickle.py", line 577, in _unpickle
obj = unpickler.load()
^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/pickle.py", line 1213, in load
dispatch[key[0]](self)
File "/usr/lib/python3.11/pickle.py", line 1538, in load_stack_global
self.append(self.find_class(module, name))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/pickle.py", line 1582, in find_class
return _getattribute(sys.modules[module], name)[0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/pickle.py", line 331, in _getattribute
raise AttributeError("Can't get attribute {!r} on {!r}"
AttributeError: Can't get attribute 'WeightedEnsembleRegressor' on <module '_engine_pyspark.databricks.workerwrap' from '/databricks/spark/python/_engine_pyspark.zip/_engine_pyspark/databricks/workerwrap.py'>
3.0.21.2.8
</code></pre>
<p>Any insight into how to fix this appreciated</p>
|
<python><pyspark><databricks><pickle><user-defined-functions>
|
2025-08-12 12:40:48
| 1
| 885
|
user6386155
|
79,733,185
| 1,136,807
|
Add prefix to all URLs while using Blueprint
|
<p>I am trying to prefix all endpoints with <code>/api/</code>, but I get a 404 unless the prefix is written directly in the route URL or passed while registering the <code>blueprint</code>.</p>
<p>main.py</p>
<pre class="lang-py prettyprint-override"><code>from init import app
from modules.User import User
app.register_blueprint(User)
if __name__ == "__main__":
app.run(debug=True)
</code></pre>
<p>init.py</p>
<pre class="lang-py prettyprint-override"><code>from flask import Flask
# initiating application instance
app = Flask(__name__)
# app configurations
app.config["SCRIPT_NAME"] = "/api/"
# -- OR --
app.config["APPLICATION_ROOT"] = "/api/"
...
</code></pre>
<p>modules/User.py</p>
<pre class="lang-py prettyprint-override"><code>from flask import request, Blueprint
User = Blueprint("user_blueprint", __name__)
# if "/api/" is provided directly
@User.route("/api/", methods=["GET"])
def get():
return "Called get method"
# 404 if "/" is provided
@User.route("/", methods=["GET"])
def get():
return "Called get method"
</code></pre>
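<p>For clarity, the variant that does work, but which I would like to avoid repeating for every blueprint, is passing the prefix at registration time:</p>
<pre class="lang-py prettyprint-override"><code># main.py -- works, but must be repeated for each blueprint
app.register_blueprint(User, url_prefix="/api")
</code></pre>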
|
<python><flask><endpoint><prefix><blueprint>
|
2025-08-12 12:33:27
| 1
| 2,035
|
Mr.Singh
|
79,732,697
| 1,162,409
|
CORS issue with Google ADK runnning as server and consuming from react app
|
<p>I’m working on a React application that needs to interact with the Google ADK. Since the ADK enables agent invocation via the API server, I’ll be following the instructions outlined in the documentation here: - <a href="https://google.github.io/adk-docs/get-started/testing/#run-agent-single-response" rel="nofollow noreferrer">https://google.github.io/adk-docs/get-started/testing/#run-agent-single-response</a></p>
<p>From the React application I make this call:</p>
<pre><code>import axios from "axios";
export class AgentCallWithStreaming {
async callSSEWithStreaming(
messageText: string,
onMessage: (data: any) => void,
userId: string,
sessionId: string
) {
try {
const response = await axios.post('http://localhost:8000/run_sse', {
app_name: "teams_tool_agent",
user_id: userId,
session_id: sessionId,
new_message: {
role: "user",
parts: [
{
text: messageText
}
]
}
}, {
headers: {
'Content-Type': 'application/json'
},
responseType: 'stream' // For streaming responses
});
return response;
} catch (error) {
console.error('Error calling SSE endpoint:', error);
throw error;
}
}
</code></pre>
<p>And the adk code as</p>
<pre><code>from google.adk.agents import LlmAgent
from google.adk.tools.mcp_tool import MCPToolset, StdioConnectionParams
from mcp import StdioServerParameters
from .prompt.graph_prompt import MS_GRAPH_PROMPT_INSTRUCTION, MS_GRAPH_PROMPT_DESCRIPTION
import os
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
MODEL_NAME = os.getenv("GOOGLE_GEMINI_MODEL", "")
AZURE_TENANT_ID = os.getenv("AZURE_TENANT_ID", "")
AZURE_CLIENT_ID = os.getenv("AZURE_CLIENT_ID", "")
AZURE_CLIENT_SECRET = os.getenv("AZURE_CLIENT_SECRET", "")
MCP_COMMAND = os.getenv("MCP_COMMAND", "")
MCP_ARG_ONE = os.getenv("MCP_ARG_ONE", "")
MCP_ARG_TWO = os.getenv("MCP_ARG_TWO", "")
MCP_ARG_THREE = os.getenv("MCP_ARG_THREE", "")
MCP_ARG_FOUR = os.getenv("MCP_ARG_FOUR", "")
root_agent = LlmAgent(
name="Ms_Graph_Agent",
model=MODEL_NAME,
description=MS_GRAPH_PROMPT_DESCRIPTION,
instruction=MS_GRAPH_PROMPT_INSTRUCTION,
tools=[
MCPToolset(
connection_params=StdioConnectionParams(
server_params=StdioServerParameters(
command=MCP_COMMAND,
args=[MCP_ARG_ONE, MCP_ARG_TWO, MCP_ARG_THREE, MCP_ARG_FOUR],
env={
"DOTNET_ENVIRONMENT": "Development",
"AZURE_TENANT_ID": AZURE_TENANT_ID,
"AZURE_CLIENT_ID": AZURE_CLIENT_ID,
"AZURE_CLIENT_SECRET": AZURE_CLIENT_SECRET}
),
)
)
]
)
</code></pre>
<p>When we run the application using <code>adk api_server</code></p>
<pre><code>(.venv) PS C:\project\rhcdev-gen-ai-ramsay-companion-plus\src\Services\ms_teams> adk api_server
C:\project\rhcdev-gen-ai-ramsay-companion-plus\src\Services\ms_teams\.venv\Lib\site-packages\pydantic\_internal\_fields.py:198: UserWarning: Field name "config_type" in "SequentialAgent" shadows an attribute in parent "BaseAgent"
warnings.warn(
C:\project\rhcdev-gen-ai-ramsay-companion-plus\src\Services\ms_teams\.venv\Lib\site-packages\google\adk\cli\fast_api.py:175: UserWarning: [EXPERIMENTAL] InMemoryCredentialService: This feature is experimental and may change or be removed in future versions without notice. It may introduce breaking changes at any time.
credential_service = InMemoryCredentialService()
C:\project\rhcdev-gen-ai-ramsay-companion-plus\src\Services\ms_teams\.venv\Lib\site-packages\google\adk\auth\credential_service\in_memory_credential_service.py:33: UserWarning: [EXPERIMENTAL] BaseCredentialService: This feature is experimental and may change or be removed in future versions without notice. It may introduce breaking changes at any time.
super().__init__()
INFO: Started server process [25628]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
</code></pre>
<p>Now when I invoke the application from the React app, I am facing a CORS issue:</p>
<pre><code>Access to XMLHttpRequest at 'http://localhost:8000/run_sse' from origin 'https://localhost:53000' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource.
</code></pre>
<p>Since this React app is a Teams app, it needs to be a separate app, so I can't use the ADK web UI.</p>
<p>I need to set CORS permissions for these URLs. Is there a way to get the FastAPI instance that the ADK agent runs?</p>
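<p>For reference, if I could get hold of the FastAPI instance that <code>adk api_server</code> creates, I would attach the middleware like this (sketch; <code>app</code> is exactly the unknown piece):</p>
<pre><code>from fastapi.middleware.cors import CORSMiddleware

app.add_middleware(
    CORSMiddleware,
    allow_origins=["https://localhost:53000"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)
</code></pre>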
|
<python><google-gemini><google-agent-development-kit>
|
2025-08-12 04:20:36
| 2
| 17,406
|
San Jaisy
|
79,732,475
| 11,159,734
|
Is there an issue with starting a FastAPI backend within Docker using uv run main.py?
|
<p>I'm setting up a FastAPI backend with Docker on my machine (will be deployed to Azure later as well) right now.
All examples I have seen including the <a href="https://github.com/astral-sh/uv-docker-example/blob/main/Dockerfile" rel="nofollow noreferrer">official uv docker example</a> start the backend like this in Docker:</p>
<pre><code>FROM ghcr.io/astral-sh/uv:python3.12-bookworm-slim
# ...
CMD ["fastapi", "dev", "--host", "0.0.0.0", "src/uv_docker_example"]
</code></pre>
<p>They use the fastapi cli to start the backend with the starting configuration being specified within Docker.</p>
<p>However I use variables for my starting configuration that I effectively get from a <code>.env</code> file (which gets validated using <code>pydantic-settings</code> first)</p>
<pre class="lang-py prettyprint-override"><code># ./main.py
import uvicorn
from fastapi import FastAPI
# ...
if __name__ == "__main__":
uvicorn.run("main:app", host="0.0.0.0", port=config.fastapi_port, workers=config.workers,
log_level="info", reload=True)
</code></pre>
<p>I therefore tried to start the backend in docker the same way by invoking the <code>main.py</code> file itself. This way all <code>.env</code> variables get read and validated first and then the startup config gets filled dynamically.</p>
<pre><code>FROM ghcr.io/astral-sh/uv:python3.12-bookworm-slim
# ...
CMD ["uv", "run", "main.py"]
</code></pre>
<p>This <strong>seems to work fine</strong> at first glance; at least I could not find anything wrong with it. However, since I have not seen anyone else doing it this way, I'd like to know if there is <strong>any major downside</strong> to it that I'm missing.
In theory this should also work properly on Azure, since I pass the env variables using the Azure Environment Variables page in the web app configuration, but I'm not entirely sure. Can someone please clarify this?</p>
|
<python><docker><fastapi><uv>
|
2025-08-11 20:14:14
| 1
| 1,025
|
Daniel
|
79,732,371
| 850,781
|
Column level alignment in pandas DataFrame printing
|
<p>When a pandas <code>DataFrame</code> is printed, the <code>MultiIndex</code> column levels are aligned with the 1st (left most) column instead of the last (right most) column:</p>
<pre><code>import numpy as np
import pandas as pd
df = pd.DataFrame(
[np.arange(6), np.arange(6)+5],
columns=pd.MultiIndex.from_product([
("a"*10, "b"*20),
("A", "B", "C")]))
print(df)
</code></pre>
<p>produces</p>
<pre><code> aaaaaaaaaa bbbbbbbbbbbbbbbbbbbb
A B C A B C
0 0 1 2 3 4 5
1 5 6 7 8 9 10
</code></pre>
<p>instead of the more compact (and, arguably, clearer and more natural)</p>
<pre><code> aaaaaaaaaa bbbbbbbbbbbbbbbbbbbb
A B C A B C
0 0 1 2 3 4 5
1 5 6 7 8 9 10
</code></pre>
<p>Is there a way to get the second output?</p>
<p>PS. As suggested by @mozway, <a href="https://github.com/pandas-dev/pandas/issues/62089" rel="nofollow noreferrer">RFE</a>.</p>
|
<python><pandas><dataframe><printing>
|
2025-08-11 18:26:30
| 0
| 60,468
|
sds
|
79,732,064
| 3,909,202
|
How can I have a matplotlib legend span the plot areas of the subplots?
|
<p>I want to have the shared legend of multiple subplots take the width of the plot areas of the subplots.</p>
<p>Here is an illustration:</p>
<p><a href="https://i.sstatic.net/E4M2mxaZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/E4M2mxaZ.png" alt="Illustration of where the legend shall be" /></a></p>
<p>Here's what I tried / a minimum reproducible example of what I expect should work, based on the documentation:</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(7, 4), layout="constrained")
# Plot some dummy data
x = [1, 2, 3, 4, 5]
y1 = [1, 2, 3, 4, 5]
y2 = [5, 4, 3, 2, 1]
axes[0].plot(x, y1, label="Line 1")
axes[1].plot(x, y1, label="Line 1")
axes[1].plot(x, y2, label="Line 2")
def find_ideal_bbox(axes):
# Use the box that spans the combined plot width (excluding figure side margins)
# Collect axes positions in figure coordinates
positions = [ax.get_position() for ax in axes]
if positions:
x0 = min(p.xmin for p in positions)
x1 = max(p.xmax for p in positions)
# Full vertical extent (0->1) so outside locations can align horizontally;
# width restricted to the subplot grid width.
return (x0, 0.0, x1 - x0, 1.0)
return None
# Collect all handles and labels from all axes
handles, labels = [], []
for ax in axes:
h, label_list = ax.get_legend_handles_labels()
for handle, label in zip(h, label_list):
if label not in labels: # Avoid duplicates
handles.append(handle)
labels.append(label)
# Add one legend for both axes
fig.legend(
handles,
labels,
loc="outside upper center",
ncol=2,
bbox_to_anchor=find_ideal_bbox(axes),
mode="expand",
borderaxespad=-1, # Move it outside the figure
columnspacing=1.0, # Reduce space between columns (default is 2.0)
handletextpad=0.4,
# Reduce space between symbol and text (default is 0.8)
)
# Finally, save the figure in three different sizes
widths = [None, 3.16, 4.21]
# Make sure they have the same aspect ratio
original_size = fig.get_size_inches()
aspect_ratio = original_size[1] / original_size[0]
for w in widths:
if w is not None:
fig.set_size_inches(w, w * aspect_ratio)
# Reset the legend size
fig.legends[0].set_bbox_to_anchor(find_ideal_bbox(axes))
fig.savefig(
f"mvr_figure_{w}.png" if w is not None else "mvr_figure.png",
bbox_inches="tight",
)
</code></pre>
<p>Here's what this results in:</p>
<p><a href="https://i.sstatic.net/AtwJRC8J.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AtwJRC8J.png" alt="Default width" /></a></p>
<p><a href="https://i.sstatic.net/bZvXax0U.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bZvXax0U.png" alt="Width = 4.21" /></a></p>
<p><a href="https://i.sstatic.net/51oLy3dH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/51oLy3dH.png" alt="Width = 3.16" /></a></p>
<p>As can be seen in these examples, the legend is too wide in all cases and always sits slightly too far to the left. Note the <code>layout="constrained"</code>. Without it, the opposite can be observed: the legend always extends a bit too far to the right.</p>
<p>The strangest thing to me is that I can debug the <code>find_ideal_bbox</code> method by simply using <code>fig.text()</code> on the coordinates that are found, and the text is placed where I expect it to.</p>
<p>What am I missing? Why are the legends not using the bbox I tell them to?</p>
|
<python><matplotlib>
|
2025-08-11 12:48:13
| 2
| 1,379
|
BernhardWebstudio
|
79,732,011
| 3,938,402
|
Convert XML to JSON using xmltodict package
|
<p>I'm trying to convert the XML file below to JSON using the <code>xmltodict</code> package, but I'm getting an Expat syntax error.</p>
<p><code>abc.xml</code>:</p>
<pre class="lang-xml prettyprint-override"><code><?xml version="1.0" encoding="UTF-8"?>
<testsuites tests="1" failures="0" disabled="0" errors="0" time="0.213" timestamp="2025-06-22T17:35:10.404" name="AllTests">
<testsuite name="qw" tests="4" failures="0" disabled="0" skipped="0" errors="0" time="0.213" timestamp="2025-06-22T17:35:10.404">
<testcase name="er" file="" line="" status="run" result="PASSED" time="0.0" timestamp="2025-06-22T17:35:10.404" classname="qw" />
</testsuite>
</testsuites>
</code></pre>
<p><code>main.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>import xmltodict, json
def main():
o = xmltodict.parse("abc.xml")
json.dumps(o)
if __name__ == "__main__":
main()
</code></pre>
<p>Error:</p>
<pre><code>Traceback (most recent call last):
File "/home/harry/xml_to_json/main.py", line 9, in <module>
main()
File "/home/harry/xml_to_json/main.py", line 5, in main
o = xmltodict.parse("abc.xml")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/harry/xml_to_json/.venv/lib/python3.12/site-packages/xmltodict.py", line 359, in parse
parser.Parse(xml_input, True)
xml.parsers.expat.ExpatError: syntax error: line 1, column 0
</code></pre>
|
<python><xmltodict><xml-to-json>
|
2025-08-11 11:42:53
| 2
| 4,026
|
Harry
|
79,731,908
| 5,197,329
|
Type hinting returned array shape using class attributes
|
<p>I am trying to create the following type hints in Python, but I have thus far been unable to find a way to actively use my class attributes as type hints:</p>
<pre><code>@dataclass
class MyClass:
x: int
y: int
def my_method(self) -> np.ndarray([tuple[self.x, self.y], np.float32]):
pass
</code></pre>
<p>Is this possible in Python, or do I have to just write the generic <code>int</code> instead of <code>self.x</code> and <code>self.y</code> in the type hint?</p>
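<p>For reference, the generic fallback I am referring to would look something like this (sketch), which loses the shape information entirely:</p>
<pre><code>from dataclasses import dataclass

import numpy as np
import numpy.typing as npt

@dataclass
class MyClass:
    x: int
    y: int

    def my_method(self) -> npt.NDArray[np.float32]:
        # The shape (self.x, self.y) is not expressed in the annotation.
        return np.zeros((self.x, self.y), dtype=np.float32)
</code></pre>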
|
<python><numpy><python-typing>
|
2025-08-11 09:53:24
| 1
| 546
|
Tue
|
79,731,839
| 13,944,524
|
UV package manager cannot find my workspace member
|
<p>I intended to use <code>git submodule</code> to bring my other source code(<code>backend</code>) into <code>my-app</code>. Then I wanted utilize <a href="https://docs.astral.sh/uv/concepts/projects/workspaces/" rel="nofollow noreferrer"><code>uv</code>'s workspace</a> so that it manages the dependencies of both <code>my-app</code> and the <code>backend</code> altogether.</p>
<p>Tree structure (the root directory name is also <code>my-app</code> it's denoted by <code>.</code> here):</p>
<pre class="lang-none prettyprint-override"><code>.
├── my-app
│ ├── __init__.py
│ ├── __pycache__
│ ├── api
│ ├── clients
|
├── justfile
├── model.py
├── packages
│ └── backend ###### here
├── pyproject.toml
├── README.md
├── tests
│ ├── __pycache__
│ ├── conftest.py
├── uv.lock
</code></pre>
<p><code>my-app</code>'s <code>pyproject.toml</code>:</p>
<pre class="lang-ini prettyprint-override"><code>[project]
name = "my-app"
version = "0.1.0"
description = "Add your description here"
readme = "README.md"
requires-python = "==3.11.4"
dependencies = [
...
"rich>=13.8.1",
"backend" ##### here
]
[tool.uv.sources]
backend = { workspace = true }
[tool.uv.workspace]
members = ["packages/*"]
[build-system]
requires = ["uv_build>=0.8.7,<0.9.0"]
build-backend = "uv_build"
# other configs
</code></pre>
<p>And here is the error message:</p>
<pre class="lang-none prettyprint-override"><code>error: Failed to generate package metadata for `my-app==0.1.0 @ editable+.`
Caused by: Failed to parse entry: `backend`
Caused by: `backend` references a workspace in `tool.uv.sources` (e.g., `backend = { workspace = true }`), but is not a workspace member
</code></pre>
<p>With those two blocks in the TOML file I specified that <code>backend</code> should be a member. In fact, the example in the documentation does the same thing, and the directory locations are the same between my project and the docs example.</p>
<p>Let me know if I need to provide more information.</p>
|
<python><uv>
|
2025-08-11 08:41:43
| 0
| 17,004
|
S.B
|
79,731,713
| 2,828,086
|
AttributeError: module 'igl' has no attribute 'tet_tet_adjacency'
|
<p>When I try to run the <code>example.py</code> <a href="https://github.com/sgsellan/fracture-modes/blob/main/scripts/example.py" rel="nofollow noreferrer">script</a> from <a href="https://github.com/sgsellan/fracture-modes" rel="nofollow noreferrer">this repo</a>, I get the error <code>AttributeError: module 'igl' has no attribute 'tet_tet_adjacency'</code> (more details below). I have the same issue both on macOS and on Windows (WSL and native Windows with Miniconda). Do you have any suggestion on how to fix this?</p>
<p>I already tested the code a couple of weeks ago, and everything worked fine. I guess there is an issue with some module update, but I couldn't figure out which one.</p>
<p>Output of <code>python scripts/example.py</code>:</p>
<pre><code>Starting fracture mode computation
We will find 10 unique fracture modes
Our input (unexploded) mesh has 1266 vertices and 4025 tetrahedra.
Traceback (most recent call last):
File "/mnt/c/Users/al/Documents/fracture-modes/scripts/example.py", line 24, in <module>
modes.compute_modes(parameters=params)
~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^
File "/mnt/c/Users/al/Documents/fracture-modes/fracture_utility/fracture_modes.py", line 32, in compute_modes
self.exploded_vertices,self.exploded_elements,self.modes, self.labels, self.tet_to_vertex_matrix,self.tet_neighbors,self.massmatrix,self.unexploded_to_exploded_matrix = compute_fracture_modes(self.vertices,self.elements,parameters)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/c/Users/al/Documents/fracture-modes/fracture_utility/compute_fracture_modes.py", line 39, in compute_fracture_modes
exploded_vertices, exploded_elements, discontinuity_matrix, unexploded_to_exploded_matrix, tet_to_vertex_matrix, tet_neighbors = explode_mesh(vertices,elements,num_quad=1)
~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/c/Users/al/Documents/fracture-modes/fracture_utility/explode_mesh.py", line 36, in explode_mesh
TT, TTi = igl.tet_tet_adjacency(elements)
^^^^^^^^^^^^^^^^^^^^^
AttributeError: module 'igl' has no attribute 'tet_tet_adjacency'
</code></pre>
|
<python><attributeerror><libigl>
|
2025-08-11 06:26:16
| 1
| 6,724
|
albusdemens
|
79,731,634
| 3,179,698
|
No way to quit IPython debugger's interactive mode in jupyterhub notebook
|
<p>So I import a module:</p>
<pre><code>from IPython.core.debugger import Pdb
</code></pre>
<p>Then I initiate an object for debugging:</p>
<pre><code>ipdb = Pdb()
</code></pre>
<p>I define a function to test debug mode:</p>
<pre><code>def f(a=1):
ipdb.set_trace()
print(a)
</code></pre>
<p>After I run the function:</p>
<pre><code>f()
</code></pre>
<p>I am then in debug mode. In this mode I can quit easily with q; however, after I type</p>
<pre><code>interact
</code></pre>
<p>to get into interactive mode (because in this mode I can enter multi-line code rather than single lines, to run more complicated code),</p>
<p>I can't quit this mode unless I kill the whole Jupyter Python session (i.e. kill the kernel). I have tried quit(), quit, q(), q, exit(), exit, Ctrl+C and Ctrl+D; none of them work.</p>
<p>I checked the source code of the IPython.core.debugger module and didn't get helpful info.</p>
<p>Does anyone know how to get out of this interactive session?</p>
|
<python><jupyter-notebook><jupyterhub>
|
2025-08-11 03:26:42
| 1
| 1,504
|
cloudscomputes
|
79,731,614
| 3,822,232
|
ValueError: variable agent_scratchpad should be a list of base messages, got of type <class 'str'>
|
<p>This is a very basic example of how I am trying to use LangChain to invoke an LLM and find the tool to use:</p>
<pre><code>import asyncio
import json
from langchain.agents import AgentExecutor, create_structured_chat_agent, Tool
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.messages import AIMessage, ToolCall
from langchain_community.chat_models.fake import FakeMessagesListChatModel
# 1. Define a simple, predictable tool
def simple_tool_function(input: str) -> str:
"""A simple tool that returns a fixed string."""
print(f"Tool called with input: '{input}'")
return "The tool says hello back!"
tools = [
Tool(
name="simple_tool",
func=simple_tool_function,
description="A simple test tool.",
)
]
# 2. Create responses that follow the structured chat format
responses = [
# First response: Agent decides to use a tool
AIMessage(
content=json.dumps({
"action": "simple_tool",
"action_input": {"input": "hello"}
})
),
# Second response: Agent provides final answer after tool execution
AIMessage(
content=json.dumps({
"action": "Final Answer",
"action_input": "The tool call was successful. The tool said: 'The tool says hello back!'"
})
),
]
# Use the modern FakeMessagesListChatModel
llm = FakeMessagesListChatModel(responses=responses)
# 3. Create the prompt using the standard structured chat prompt format
prompt = ChatPromptTemplate.from_messages([
("system", """Respond to the human as helpfully and accurately as possible. You have access to the following tools:
{tools}
Use a json blob to specify a tool by providing an action key (tool name) and an action_input key (tool input).
Valid "action" values: "Final Answer" or {tool_names}
Provide only ONE action per $JSON_BLOB, as shown:
```
{{
"action": $TOOL_NAME,
"action_input": $INPUT
}}
```
Follow this format:
Question: input question to answer
Thought: consider previous and subsequent steps
Action:
```
$JSON_BLOB
```
Observation: action result
... (repeat Thought/Action/Observation as needed)
Thought: I know what to respond
Action:
```
{{
"action": "Final Answer",
"action_input": "Final response to human"
}}
```
Begin! Reminder to ALWAYS respond with a valid json blob of a single action. Use tools if necessary. Respond directly if appropriate. Format is Action:```$JSON_BLOB```then Observation"""),
("human", "{input}"),
MessagesPlaceholder(variable_name="agent_scratchpad"),
])
# 4. Create the agent and executor
agent = create_structured_chat_agent(llm, tools, prompt)
agent_executor = AgentExecutor(
agent=agent,
tools=tools,
verbose=True,
handle_parsing_errors=True,
max_iterations=3
)
# 5. Run the agent
result = asyncio.run(agent_executor.ainvoke({"input": "call the tool"}))
</code></pre>
<p>Python requirements I am using:</p>
<pre class="lang-none prettyprint-override"><code>langchain==0.3.27
langchain-community==0.3.27
langchain-core==0.3.74
langchain-aws==0.2.30
langchain-openai==0.3.29
</code></pre>
<p>Python version: 3.9</p>
<p>How do I get rid of the error <code>ValueError: variable agent_scratchpad should be a list of base messages, got of type <class 'str'></code>?</p>
|
<python><langchain><valueerror><py-langchain><langchain-agents>
|
2025-08-11 02:04:39
| 2
| 389
|
hitesh
|
79,731,607
| 852,795
|
plt.imshow() causes Jupyter notebook kernel to crash
|
<p>I'm following Karpathy's Makemore tutorial step by step. When I get to the <a href="https://youtu.be/PaCmpygFfXo?si=l7XEFpOyZrPN6lSB&t=1117" rel="nofollow noreferrer">part</a> where he uses plt.imshow(N) to create a plot inside the notebook, my kernel crashes. I've been troubleshooting this for half the day, including completely reinstalling Anaconda, Python and, for what it's worth, CUDA and my graphics drivers. I've created a brand new environment with Python and have tried <code>%matplotlib agg</code>. I've also checked to make sure Jupyter is using the Python from my Pytorch environment.</p>
<p>This code works:</p>
<pre><code>import matplotlib
matplotlib.use('agg')
import matplotlib.pyplot as plt
plt.plot([1,2,3])
plt.savefig('test.png')
</code></pre>
<p>Following a suggestion in the comments, this also works:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
data = np.random.rand(10, 10)
plt.imshow(data, cmap='viridis')
</code></pre>
<p>This code, however, as soon as I run <code>plt.imshow(N)</code>, will give me a pop-up that says, "The kernel for Documents/Python Scripts/Makemore-old.ipynb appears to have died. It will restart automatically."</p>
<pre><code>import torch
import matplotlib.pyplot as plt
words = open('names.txt', 'r').read().splitlines()
N = torch.zeros((27,27), dtype=torch.int32)
chars = sorted(list(set(''.join(words))))
stoi = {s:i+1 for i,s in enumerate(chars)}
stoi['.'] = 0
for w in words:
chs = ['.'] + list(w) + ['.']
for ch1, ch2 in zip(chs, chs[1:]):
ix1 = stoi[ch1]
ix2 = stoi[ch2]
N[ix2, ix2] += 1
%matplotlib inline
plt.imshow(N)
</code></pre>
<p>So maybe it's <code>plt.imshow()</code> with Pytorch? The Jupyter log console doesn't show anything but here's what's in the window where I launch Jupyter:</p>
<pre><code> To access the server, open this file in a browser:
file:///C:/Users/mcram/AppData/Roaming/jupyter/runtime/jpserver-13316-open.html
Or copy and paste one of these URLs:
http://localhost:8889/tree?token=f71b5330f81845c951bfdba4d58d44c91337cae89e2e2272
http://127.0.0.1:8889/tree?token=f71b5330f81845c951bfdba4d58d44c91337cae89e2e2272
[I 2025-08-11 21:09:42.732 ServerApp] Skipped non-installed server(s): bash-language-server, dockerfile-language-server-nodejs, javascript-typescript-langserver, jedi-language-server, julia-language-server, pyright, python-language-server, python-lsp-server, r-languageserver, sql-language-server, texlab, typescript-language-server, unified-language-server, vscode-css-languageserver-bin, vscode-html-languageserver-bin, vscode-json-languageserver-bin, yaml-language-server
[I 2025-08-11 21:09:58.585 ServerApp] Kernel started: 1d03fece-b7c4-4de0-a81f-4a575663f8af
[I 2025-08-11 21:09:59.951 ServerApp] Connecting to kernel 1d03fece-b7c4-4de0-a81f-4a575663f8af.
[W 2025-08-11 21:09:59.962 ServerApp] The websocket_ping_timeout (90000) cannot be longer than the websocket_ping_interval (30000).
Setting websocket_ping_timeout=30000
[I 2025-08-11 21:09:59.965 ServerApp] Connecting to kernel 1d03fece-b7c4-4de0-a81f-4a575663f8af.
[I 2025-08-11 21:09:59.984 ServerApp] Connecting to kernel 1d03fece-b7c4-4de0-a81f-4a575663f8af.
[I 2025-08-11 21:10:06.095 ServerApp] Connecting to kernel 1d03fece-b7c4-4de0-a81f-4a575663f8af.
OMP: Error #15: Initializing libiomp5md.dll, but found libiomp5md.dll already initialized.
OMP: Hint This means that multiple copies of the OpenMP runtime have been linked into the program. That is dangerous, since it can degrade performance or cause incorrect results. The best thing to do is to ensure that only a single OpenMP runtime is linked into the process, e.g. by avoiding static linking of the OpenMP runtime in any library. As an unsafe, unsupported, undocumented workaround you can set the environment variable KMP_DUPLICATE_LIB_OK=TRUE to allow the program to continue to execute, but that may cause crashes or silently produce incorrect results. For more information, please see http://www.intel.com/software/products/support/.
[I 2025-08-11 21:10:19.567 ServerApp] AsyncIOLoopKernelRestarter: restarting kernel (1/5), keep random ports
[W 2025-08-11 21:10:19.567 ServerApp] kernel 1d03fece-b7c4-4de0-a81f-4a575663f8af restarted
[I 2025-08-11 21:10:19.573 ServerApp] Starting buffering for 1d03fece-b7c4-4de0-a81f-4a575663f8af:2dc781e5-1b0e-43e0-892e-fa441f606dbe
[I 2025-08-11 21:10:19.613 ServerApp] Connecting to kernel 1d03fece-b7c4-4de0-a81f-4a575663f8af.
[I 2025-08-11 21:10:19.614 ServerApp] Restoring connection for 1d03fece-b7c4-4de0-a81f-4a575663f8af:2dc781e5-1b0e-43e0-892e-fa441f606dbe
</code></pre>
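<p>The OMP hint in that log mentions the <code>KMP_DUPLICATE_LIB_OK</code> environment variable as an unsafe workaround. I have not tried forcing it yet, but presumably it would look something like this (set before importing torch/matplotlib):</p>
<pre><code>import os
# unsafe workaround quoted in the OMP hint above; may crash or silently give wrong results
os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"
import torch
import matplotlib.pyplot as plt
</code></pre>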
<p>I have run out of ideas and it's frustrating because I'm doing the exact same thing as the video. Thanks to anyone who tries to give me a hand.</p>
|
<python><matplotlib><jupyter-notebook>
|
2025-08-11 01:43:40
| 0
| 2,904
|
Mark Cramer
|
79,731,600
| 344,286
|
Why does my Django models.Manager return all objects on a relationship?
|
<p>I don't particularly understand what's going on here, but it seems that <code>super().get_queryset()</code> doesn't do what I think it does?</p>
<p>I have 1:N relationship, and the default FK reverse lookup works:</p>
<pre><code>>>> for thing in this.thing_set.all():
... print(thing.this)
</code></pre>
<p>Each one of these is <code>This</code>.</p>
<p>However, with my customized:</p>
<pre><code>>>> for thing in this.thing_set.by_date():
... print(thing.this)
</code></pre>
<p>will produce <code>This</code> <em>and</em> <code>That</code>. The manager is:</p>
<pre><code>class ThingByDateManager(models.Manager):
def by_date(self):
return super().get_queryset().order_by("start_time")
</code></pre>
<p>That's all there is.</p>
<pre><code>class This(models.Model):
name = models.CharField(max_length=255, primary_key=True)
class Thing(models.Model):
    start_time = models.DateTimeField()
name = models.CharField(max_length=255)
this = models.ForeignKey(This, on_delete=models.CASCADE)
objects = ThingByDateManager()
</code></pre>
<p>(might not have those models perfectly written, but I'm hoping it's just something silly with the queryset or something)</p>
<p>Why doesn't this correctly filter my objects by <code>this</code>, instead returning <em>all</em> of the <code>Thing</code>s?</p>
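<p>For what it's worth, I also wondered whether calling <code>self.get_queryset()</code> instead of <code>super().get_queryset()</code> would behave differently on the related manager, but I have not verified it; a sketch of what I mean:</p>
<pre><code>class ThingByDateManager(models.Manager):
    def by_date(self):
        # self.get_queryset() goes through whatever manager this method is
        # called on (including the related manager created for thing_set),
        # rather than jumping straight to models.Manager.get_queryset()
        return self.get_queryset().order_by("start_time")
</code></pre>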
|
<python><django><foreign-keys>
|
2025-08-11 01:27:34
| 3
| 52,263
|
Wayne Werner
|
79,731,216
| 3,144,092
|
Function call with OpenAI Agent SDK with Ollama fails
|
<p>I'm having trouble getting function calling to work with the OpenAI Agents SDK and Ollama. Suggestions/solutions would be of great help. Thank you.</p>
<p>Using UV, python-12 and added <code>"openai>=1.68.2"</code> and <code>"openai-agents>=0.0.15"</code> dependencies to the project.</p>
<p>Below is my code:</p>
<pre><code>from dotenv import load_dotenv
from agents import Agent, Runner, AsyncOpenAI, OpenAIChatCompletionsModel, function_tool
import asyncio
load_dotenv(override=True)
ollama_url = 'http://localhost:11434/v1'
ollama_api_key = 'dummy'
model = 'qwen3:4b'
@function_tool
def math_tool(a: int, b: int) -> int:
""" Send two integers and this returns their sum """
print(f"Adding {a} and {b}")
return a + b
async def main():
ollama_client = AsyncOpenAI(base_url=ollama_url, api_key=ollama_api_key)
ollama_qwen3_4b_model = OpenAIChatCompletionsModel(model=model, openai_client=ollama_client)
simple_agent = Agent(name="simple_agent", instructions="You are a simple agent that can perform basic tasks", tools=[math_tool], model=ollama_qwen3_4b_model)
output = await Runner.run(simple_agent, "What is 2 + 2?")
print(output)
asyncio.run(main())
</code></pre>
<p>It errors out, I assume below is the prime error.</p>
<pre><code> File "/Users/RandomName/.local/share/uv/python/cpython-3.12.11-macos-aarch64-none/lib/python3.12/typing.py", line 501, in __call__
raise TypeError(f"Cannot instantiate {self!r}")
TypeError: Cannot instantiate typing.Union
</code></pre>
<p>This is the full trace when I run <code>uv run ollama_simple.py</code></p>
<pre><code>Traceback (most recent call last):
File "/Users/RandomName/Documents/ai/github/openai_agent_learning/ollama_agent_tool_simple/ollama_simple.py", line 33, in <module>
asyncio.run(main())
File "/Users/RandomName/.local/share/uv/python/cpython-3.12.11-macos-aarch64-none/lib/python3.12/asyncio/runners.py", line 195, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/Users/RandomName/.local/share/uv/python/cpython-3.12.11-macos-aarch64-none/lib/python3.12/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/RandomName/.local/share/uv/python/cpython-3.12.11-macos-aarch64-none/lib/python3.12/asyncio/base_events.py", line 691, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/Users/RandomName/Documents/ai/github/openai_agent_learning/ollama_agent_tool_simple/ollama_simple.py", line 30, in main
output = await Runner.run(simple_agent, "What is 2 + 2?")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/RandomName/Documents/ai/github/openai_agent_learning/ollama_agent_tool_simple/.venv/lib/python3.12/site-packages/agents/run.py", line 206, in run
return await runner.run(
^^^^^^^^^^^^^^^^^
File "/Users/RandomName/Documents/ai/github/openai_agent_learning/ollama_agent_tool_simple/.venv/lib/python3.12/site-packages/agents/run.py", line 434, in run
turn_result = await self._run_single_turn(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/RandomName/Documents/ai/github/openai_agent_learning/ollama_agent_tool_simple/.venv/lib/python3.12/site-packages/agents/run.py", line 956, in _run_single_turn
new_response = await cls._get_new_response(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/RandomName/Documents/ai/github/openai_agent_learning/ollama_agent_tool_simple/.venv/lib/python3.12/site-packages/agents/run.py", line 1168, in _get_new_response
new_response = await model.get_response(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/RandomName/Documents/ai/github/openai_agent_learning/ollama_agent_tool_simple/.venv/lib/python3.12/site-packages/agents/models/openai_chatcompletions.py", line 66, in get_response
response = await self._fetch_response(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/RandomName/Documents/ai/github/openai_agent_learning/ollama_agent_tool_simple/.venv/lib/python3.12/site-packages/agents/models/openai_chatcompletions.py", line 228, in _fetch_response
converted_messages = Converter.items_to_messages(input)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/RandomName/Documents/ai/github/openai_agent_learning/ollama_agent_tool_simple/.venv/lib/python3.12/site-packages/agents/models/chatcmpl_converter.py", line 443, in items_to_messages
new_tool_call = ChatCompletionMessageToolCallParam(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/RandomName/.local/share/uv/python/cpython-3.12.11-macos-aarch64-none/lib/python3.12/typing.py", line 1184, in __call__
result = self.__origin__(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/RandomName/.local/share/uv/python/cpython-3.12.11-macos-aarch64-none/lib/python3.12/typing.py", line 501, in __call__
raise TypeError(f"Cannot instantiate {self!r}")
TypeError: Cannot instantiate typing.Union
</code></pre>
|
<python><openai-api><agent><openai-agents>
|
2025-08-10 12:41:28
| 1
| 391
|
Dileep17
|
79,731,196
| 3,938,402
|
Generate XML file using ElementTree API
|
<p>I'm trying to generate the below XML file using the data populated in a list:</p>
<pre class="lang-xml prettyprint-override"><code><?xml version="1.0" encoding="UTF-8"?>
<testsuites tests="65" failures="0" disabled="0" errors="0" name="AllTests">
<testsuite name="Tests" tests="65" failures="0" disabled="0" skipped="0" errors="0">
<testcase name="TestInit" file="" status="run" result="completed" />
<testcase name="TestAlways" file="" status="run" result="completed" />
</testsuite>
...
</testsuites>
</code></pre>
<p><code>main.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>import xml.etree.cElementTree as ET
class TestCase:
def __init__(self, name, result):
self.name = name
self.result = result
def __str__(self):
return f"Test Case: {self.name}, Status: {self.result}"
class TestSuite:
def __init__(self, name):
self.name = name
self.test_case_list = []
def __str__(self):
return f"Test Suite: {self.name}"
def main():
print("Hello from html-test-report-generator!")
test_suite_list = generate_html_report()
for test_suite in test_suite_list:
print(test_suite)
for test_case in test_suite.test_case_list:
print(test_case)
print()
generate_xml(test_suite_list)
def generate_html_report():
# generates a list of TestSuite objects and returns the list back to the caller
def generate_xml(test_suite_list):
testsuites = ET.Element("testsuites", tests=len(test_suite_list), failures="0", disabled="0", errors="0", name="AllTests")
for test_suite in test_suite_list:
testsuite = ET.SubElement(testsuites, "testsuite", name=test_suite.name, tests=len(test_suite.test_case_list), failures="0", disabled="0", skipped="0", errors="0")
for test_case in test_suite.test_case_list:
ET.SubElement(testsuite, "testcase", name=test_case.name, file="", line="", status="run", result=test_case.result)
tree = ET.ElementTree(testsuites)
tree.write("filename.xml")
</code></pre>
<p>Error:</p>
<pre><code>Traceback (most recent call last):
File "/usr/lib/python3.12/xml/etree/ElementTree.py", line 743, in _get_writer
write = file_or_filename.write
^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'str' object has no attribute 'write'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/harry/html_test_report_generator/main.py", line 94, in <module>
main()
File "/home/harry/html_test_report_generator/main.py", line 28, in main
generate_xml(test_suite_list)
File "/home/harry/html_test_report_generator/main.py", line 37, in generate_xml
tree.write("filename.xml")
File "/usr/lib/python3.12/xml/etree/ElementTree.py", line 729, in write
serialize(write, self._root, qnames, namespaces,
File "/usr/lib/python3.12/xml/etree/ElementTree.py", line 885, in _serialize_xml
v = _escape_attrib(v)
^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/xml/etree/ElementTree.py", line 1050, in _escape_attrib
_raise_serialization_error(text)
File "/usr/lib/python3.12/xml/etree/ElementTree.py", line 1004, in _raise_serialization_error
raise TypeError(
TypeError: cannot serialize 23 (type int)
</code></pre>
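<p>My guess from the last line is that attribute values may have to be strings, so something like the (untested) change below, but I would like to confirm that this is the right way with ElementTree:</p>
<pre class="lang-py prettyprint-override"><code># hypothetical change: convert the count to str before using it as an attribute value
testsuites = ET.Element("testsuites", tests=str(len(test_suite_list)), failures="0",
                        disabled="0", errors="0", name="AllTests")
</code></pre>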
|
<python><xml><elementtree>
|
2025-08-10 11:54:47
| 1
| 4,026
|
Harry
|
79,730,861
| 530,160
|
What coordinate transform does scipy.integrate.quad use for infinite bounds?
|
<p>In SciPy's quad function, it is possible to integrate over an integral with infinite bounds.</p>
<pre><code>import scipy
import numpy as np
def normal_distribution_pdf(x):
return np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
result, eps, info = scipy.integrate.quad(normal_distribution_pdf, 0, np.inf, full_output=True)
print(result)
</code></pre>
<p>According to the SciPy docs, infinite bounds are handled using a coordinate transformation, using the QUADPACK qagie subroutine.</p>
<blockquote>
<p><strong>qagie</strong></p>
<p>handles integration over infinite intervals. The infinite range is mapped onto a finite interval and subsequently the same strategy as in QAGS is applied.</p>
</blockquote>
<p>But it is vague about how the infinite range is mapped onto the finite range. Is it something like <code>log(x)</code>? Or another function?</p>
<p>I also examined the <a href="https://www.netlib.org/quadpack/doc" rel="nofollow noreferrer">QUADPACK documentation</a>. It provides the destination range, [0, 1], but is also vague about how it happens.</p>
<p>I am interested in knowing the specifics of how an infinite range is mapped to the range [0, 1], because I am interested in understanding the limitations of quad().</p>
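<p>For reference, here is how I have been picturing it: the classic substitution x = a + (1 - t) / t maps t in (0, 1] onto [a, inf). I am not claiming this is QUADPACK's exact internal mapping, but applying it by hand gives the same answer as quad with an infinite bound:</p>
<pre><code>import numpy as np
from scipy import integrate

def f(x):
    return np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

a = 0.0

def g(t):
    # substitution x = a + (1 - t) / t, so dx = dt / t**2
    x = a + (1 - t) / t
    return f(x) / t**2

direct, _ = integrate.quad(f, a, np.inf)
mapped, _ = integrate.quad(g, 0, 1)
print(direct, mapped)  # both ~0.5
</code></pre>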
|
<python><scipy><quad>
|
2025-08-09 20:12:03
| 1
| 27,892
|
Nick ODell
|
79,730,688
| 11,462,274
|
Using Python, how to continue opening new URLs in a specific Edge window even if another Edge window is in the foreground?
|
<p>Using this default approach, if I separate one of these URLs into a new window and use it for reading, the next URL will open in the window I've set aside for reading, obviously hindering my reading.</p>
<p>Is there a way to set a specific window as the primary one that will open all new tabs, and the ones I separate will remain intact even when they're in the foreground?</p>
<p>1 → I can't use automated browsers because the sites ban automation.</p>
<p>2 → These sites require a login and password, and I want to keep all of this saved only in my browser profile.</p>
<p>3 → I can't risk losing my account due to a ban, so I'm looking for an option that only actually affects window selection and URL opening.</p>
<pre class="lang-python prettyprint-override"><code>import webbrowser
import time
urls = [
"https://www.example.com",
"https://www.python.org",
"https://www.github.com"
]
while True:
for url in urls:
webbrowser.open(url)
time.sleep(3)
</code></pre>
|
<python><tabs><microsoft-edge><windows-11><python-webbrowser>
|
2025-08-09 14:52:57
| 2
| 2,222
|
Digital Farmer
|
79,730,585
| 237,105
|
sympy unable to take simple integral
|
<p>I'm trying to take the following integral using sympy and it won't take it:</p>
<pre><code>>>> from sympy import *
>>> integrate(x*exp(-(x-x0)**2), (x, 0, k))
</code></pre>
<img src="https://i.sstatic.net/VvnI8lth.png" width="200"/>
<p>If I 'help' it by adding and subtracting the same expression, it works fine:</p>
<pre><code>>>> sp.integrate((x-x0)*sp.exp(-(x-x0)**2)+x0*sp.exp(-(x-x0)**2), (x, 0, k))
</code></pre>
<img src="https://i.sstatic.net/2nkoY6M6.png" width="400"/>
<p>Is there any way to automate this process?</p>
<p>Wolfram Alpha takes it easily without any hints from user's side:
<a href="https://i.sstatic.net/0b4X7MxC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0b4X7MxC.png" alt="enter image description here" /></a></p>
|
<python><sympy><symbolic-math><integral>
|
2025-08-09 11:33:48
| 1
| 34,425
|
Antony Hatchkins
|
79,730,533
| 19,130,803
|
Version unsupported imbalanced-learn
|
<p>I am trying to install <code>scikit-learn</code> and <code>imbalanced-learn</code> for ML project using <code>poetry</code>.</p>
<pre><code># File pyproject.toml
[project]
name = "hello"
version = "0.1.0"
description = ""
authors = [
{name = "",email = ""}
]
readme = "README.md"
requires-python = ">=3.13"
dependencies = [
]
</code></pre>
<p>python (version 3.13.6)</p>
<p>Poetry (version 2.1.4)</p>
<p>But I'm getting the error below:</p>
<pre><code>ubuntu@ubuntu:~/hello$ poetry add scikit-learn imbalanced-learn
Creating virtualenv hello in /home/ubuntu/hello/.venv
Using version ^1.7.1 for scikit-learn
Using version ^0.13.0 for imbalanced-learn
Updating dependencies
Resolving dependencies... (0.4s)
Because no versions of sklearn-compat match >0.1,<0.1.1 || >0.1.1,<0.1.2 || >0.1.2,<0.1.3 || >0.1.3,<1
and sklearn-compat (0.1.0) depends on scikit-learn (>1.2,<1.7), sklearn-compat (>=0.1,<0.1.1 || >0.1.1,<0.1.2 || >0.1.2,<0.1.3 || >0.1.3,<1) requires scikit-learn (>1.2,<1.7).
And because sklearn-compat (0.1.1) depends on scikit-learn (>=1.2,<1.7), sklearn-compat (>=0.1,<0.1.2 || >0.1.2,<0.1.3 || >0.1.3,<1) requires scikit-learn (>=1.2,<1.7).
And because sklearn-compat (0.1.2) depends on scikit-learn (>=1.2,<1.7)
and sklearn-compat (0.1.3) depends on scikit-learn (>=1.2,<1.7), sklearn-compat (>=0.1,<1) requires scikit-learn (>=1.2,<1.7).
Because no versions of imbalanced-learn match >0.13.0,<0.14.0
and imbalanced-learn (0.13.0) depends on sklearn-compat (>=0.1,<1), imbalanced-learn (>=0.13.0,<0.14.0) requires sklearn-compat (>=0.1,<1).
Thus, imbalanced-learn (>=0.13.0,<0.14.0) requires scikit-learn (>=1.2,<1.7).
So, because hello depends on both scikit-learn (^1.7.1) and imbalanced-learn (^0.13.0), version solving failed.
ubuntu@ubuntu:~/hello$
</code></pre>
<p>please help.</p>
|
<python><scikit-learn>
|
2025-08-09 09:48:38
| 2
| 962
|
winter
|
79,729,951
| 4,613,734
|
How to make a python package that can have two different version of a dependency?
|
<p>It is now not uncommon to have a python package that is distributed in a multitude of different "flavors". This happens often with machine learning packages, e.g. <code>onnxruntime</code> has many "flavors" <code>onnxruntime</code>, <code>onnxruntime-gpu</code>, <code>onnxruntime-directml</code>, ... (same for <code>xgboost</code>). All those package provide the same content but have different implementations. Note that most of the time only one "flavor" can be installed in an env.</p>
<p>Now, let say I have a package <code>A</code> that depends on <code>onnxruntime</code> but I want to install this package (<code>A</code>) on a machine that has a (cuda) GPU. To take benefit of the GPU I would have to install <code>onnxruntime-gpu</code> instead of <code>onnxruntime</code> (and not both).</p>
<p>Ideally I would have liked to have an extra-dependency option <code>gpu</code> added to my package <code>A</code> so that if I do</p>
<pre class="lang-bash prettyprint-override"><code>pip install A
</code></pre>
<p>the dependency <code>onnxruntime</code> would be installed with it. And if I do</p>
<pre class="lang-bash prettyprint-override"><code>pip install A[gpu]
</code></pre>
<p>the dependency <code>onnxruntime-gpu</code> would be installed with it. Sadly if I do that both dependencies would be installed which is not working.</p>
<p>The only workable solution I see is to distribute two "flavors" of the package, <code>A</code> and <code>A-gpu</code>, with two different sets of dependencies (which I don't find ideal). Is there a better option?</p>
<p><strong>Edit - Using extra markers in requirements</strong></p>
<p>I also tried adding a condition on the extra marker in my dependency list, but it does not work for excluding a package from installation when an extra is used.</p>
<p>With</p>
<pre><code>[project.dependencies]
onnxruntime; extra!=cuda #(or extra=="")
[project.optional-dependencies]
cuda=["onnxruntime-gpu"]
</code></pre>
<p>pip will always install the <code>onnxruntime</code> package when installing <code>A[cuda]</code>, because pip always evaluates the dependencies without any extra at least once.</p>
<p>That means that I can't have a valid default installation (no extra).</p>
|
<python><dependencies><xgboost><packaging><onnxruntime>
|
2025-08-08 15:17:59
| 1
| 385
|
MajorTom
|
79,729,734
| 967,330
|
While installing pytorch, got missing file that exists
|
<p>I am running Ubuntu 22.04 and I've rolled Python back to 3.10.12 so I can install AudioX. I've never worked much with Python.</p>
<p>I ran the command <code>pip install aeiou</code> and, after doing a bunch of things, it gets to torch and says.</p>
<pre><code> ... many moving lines omitted ...
Moving to /home/thomas/anaconda3/share/cmake/Caffe2/
from /home/thomas/anaconda3/share/cmake/~affe2
Moving to /home/thomas/anaconda3/share/cmake/Tensorpipe/
from /home/thomas/anaconda3/share/cmake/~ensorpipe
Moving to /home/thomas/anaconda3/share/cmake/Torch/
from /home/thomas/anaconda3/share/cmake/~orch
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0/6 [torch]ERROR: Could not install packages due to an OSError: [Errno 2] No such file or directory: '/home/thomas/anaconda3/lib/python3.13/site-packages/torch/include/ATen/ATen.h'
</code></pre>
<p>Which is really weird because:</p>
<pre><code>(base) thomas@audiox:~/AudioX$ ls -l /home/thomas/anaconda3/lib/python3.13/site-packages/torch/include/ATen/ATen.h
-rw-rw-r-- 2 thomas thomas 1107 May 7 15:10 /home/thomas/anaconda3/lib/python3.13/site-packages/torch/include/ATen/ATen.h
</code></pre>
<p>What???</p>
|
<python><pip><pytorch>
|
2025-08-08 11:59:23
| 1
| 15,310
|
Thom
|
79,729,500
| 13,392,257
|
Sqlalchemy delete on table "tasks" violates foreign key constraint for cascade mode
|
<p>I have the following postgres table</p>
<pre><code>dbname=# \d+ tasks
Table "public.tasks"
Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
--------------------+-----------------------------+-----------+----------+---------+----------+--------------+-------------
id | character varying | | not null | | extended | |
external_id | character varying | | not null | | extended | |
rq_id | character varying | | not null | | extended | |
priority | integer | | not null | | plain | |
status | taskstatus | | not null | | plain | |
creation_time | timestamp without time zone | | not null | | plain | |
other fields ...
Indexes:
"tasks_pkey" PRIMARY KEY, btree (id)
Referenced by:
TABLE "solutions" CONSTRAINT "solutions_task_id_fkey" FOREIGN KEY (task_id) REFERENCES tasks(id)
TABLE "tasks_files" CONSTRAINT "tasks_files_task_id_fkey" FOREIGN KEY (task_id) REFERENCES tasks(id)
Access method: heap
</code></pre>
<p>The table created with help of sqlalchemy class</p>
<pre><code>class Task(Base):
__tablename__ = 'tasks'
id = Column(String, unique=True, nullable=False, primary_key=True)
external_id = Column(String, nullable=False)
rq_id = Column(String, nullable=False)
priority = Column(Integer, nullable=False)
status = Column(Enum(TaskStatus), nullable=False)
creation_time = Column(DateTime, nullable=False)
...
files: Mapped[List["TaskFile"]] = relationship(cascade="all, delete")
solution: Mapped[List["Solution"]] = relationship(cascade="all, delete")
</code></pre>
<p>I have a Python function that runs a SQL command, and I get the following error:</p>
<pre><code>
dbname=# DELETE FROM tasks WHERE tasks.status = 'SOLVED' AND tasks.creation_time <= '2025-08-07T08:15:40.867870'::timestamp OR tasks.status = 'ERROR' AND tasks.creation_time <= '2025-08-07T08:15:40.867877'::timestamp RETURNING tasks.id;
ERROR: update or delete on table "tasks" violates foreign key constraint "solutions_task_id_fkey" on table "solutions"
DETAIL: Key (id)=(7bff9459-2ef0-4650-8f3e-6774203e10fb) is still referenced from table "solutions".
</code></pre>
<pre><code>def f(db, solved_task_window_days: int, error_task_window_days: int) -> None:
solved_days_ago = datetime.utcnow() - timedelta(days=solved_task_window_days)
failed_days_ago = datetime.utcnow() - timedelta(days=error_task_window_days)
print("XXX_", db.query(Task).filter(
or_(
and_(
Task.status == TaskStatus.SOLVED,
Task.creation_time <= solved_days_ago
),
and_(
Task.status == TaskStatus.ERROR,
Task.creation_time <= failed_days_ago
)
)
).delete(synchronize_session='fetch')) # was trying different flags
</code></pre>
<p>I have the following error (from python function)</p>
<pre><code>(psycopg2.errors.InFailedSqlTransaction) current transaction is aborted, commands ignored until end of transaction block\n\n[SQL: DELETE FROM tasks WHERE tasks.status = %(status_1)s AND tasks.creation_time <= %(creation_time_1)s OR tasks.status = %(status_2)s AND tasks.creation_time <= %(creation_time_2)s RETURNING tasks.id]\n[parameters: {'status_1': 'SOLVED', 'creation_time_1': datetime.datetime(2025, 8, 7, 8, 15, 40, 863536), 'status_2': 'ERROR', 'creation_time_2': datetime.datetime(2025, 8, 7, 8, 15, 40, 863541)}]\n(Background on this error at: https://sqlalche.me/e/20/2j85)"
2025-08-08 08:15:40.869 UTC [26] ERROR: current transaction is aborted, commands ignored until end of transaction block
2025-08-08 08:15:40.869 UTC [26] STATEMENT: DELETE FROM tasks WHERE tasks.status = 'SOLVED' AND tasks.creation_time <= '2025-08-07T08:15:40.867870'::timestamp OR tasks.status = 'ERROR' AND tasks.creation_time <= '2025-08-07T08:15:40.867877'::timestamp RETURNING tasks.id
</code></pre>
<p>Solution table looks like this:</p>
<pre><code>dbname=# \d+ solutions
Table "public.solutions"
Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
-------------------+-----------------------------+-----------+----------+---------+----------+--------------+-------------
id | character varying | | not null | | extended | |
task_id | character varying | | not null | |
...
Indexes:
"solutions_pkey" PRIMARY KEY, btree (id)
Foreign-key constraints:
"solutions_task_id_fkey" FOREIGN KEY (task_id) REFERENCES tasks(id)
</code></pre>
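<p>For completeness, my current understanding (possibly wrong) is that <code>cascade="all, delete"</code> on the relationship only applies when deleting loaded objects through the session, not to a bulk <code>query(...).delete()</code>, and that the latter would need a database-level cascade on the foreign keys, roughly like this sketch:</p>
<pre><code>from sqlalchemy import Column, ForeignKey, String

class Solution(Base):
    __tablename__ = 'solutions'
    id = Column(String, primary_key=True)
    # ondelete="CASCADE" asks PostgreSQL itself to delete the child rows,
    # which is what a bulk query(...).delete() would rely on
    task_id = Column(String, ForeignKey('tasks.id', ondelete='CASCADE'), nullable=False)

# and on Task, passive_deletes=True so the ORM defers to the DB-level cascade:
#   solution: Mapped[List["Solution"]] = relationship(cascade="all, delete", passive_deletes=True)
</code></pre>
<p>I have not applied this yet, so I am not sure it is the whole story.</p>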
|
<python><postgresql><sqlalchemy>
|
2025-08-08 08:24:50
| 0
| 1,708
|
mascai
|
79,729,416
| 13,184,183
|
How to filter values from struct by field in pyspark?
|
<p>I wonder how I can filter objects out of an array of structs on the condition that their id is in a list held in another column.</p>
<p>Suppose I have a dataframe <code>df</code> with a column <code>events</code> which is a list of <code>Struct</code> with fields <code>id</code> and <code>time</code>. There is another column <code>target_ids</code> which contains lists of ids. I want to keep, in each row of the column <code>events</code>, only those events whose <code>id</code> is in the corresponding list of <code>target_ids</code>.</p>
<p>If <code>target_ids</code> is a column of a single id of integer type, then the following works:</p>
<pre><code>df = df.withColumn('target_time', F.expr('filter(events, x -> x.id == target_ids)').getField('time')
</code></pre>
<p>However, if I change <code>target_ids</code> to a list column and <code>==</code> to <code>in</code>, I get a syntax error. How can I solve this? And is there any documentation about the syntax of expressions inside <code>expr</code>? This <a href="https://spark.apache.org/docs/latest/api/python/reference/pyspark.sql/api/pyspark.sql.functions.expr.html" rel="nofollow noreferrer">page</a> is sadly blank.</p>
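<p>For what it is worth, the shape of expression I was imagining is something like the sketch below with <code>array_contains</code> (the new column name is just a placeholder), but I have not managed to confirm whether that is the intended way:</p>
<pre><code>df = df.withColumn(
    'target_times',
    F.expr('filter(events, x -> array_contains(target_ids, x.id))').getField('time')
)
</code></pre>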
|
<python><pyspark>
|
2025-08-08 06:52:40
| 1
| 956
|
Nourless
|
79,729,226
| 9,541,464
|
How to release memory of intermediate Python-Polars DataFrames in a large dependency graph?
|
<p>I am performing operations on a <em>directed acyclic graph</em> (DAG) of DataFrames using <strong>Polars</strong> (eager API).</p>
<p>Here’s a simplified example of my workflow:</p>
<ol>
<li>Read parquet files into <code>df1</code>.</li>
<li>Use <code>df1</code> to create <code>df2</code> and <code>df3</code>.</li>
<li>Use <code>df3</code> to create <code>df4</code>.</li>
<li>Use both <code>df2</code> and <code>df3</code> to create <code>df5</code>.</li>
</ol>
<p>In reality, I have <strong>more than 200 such levels</strong> of DataFrame transformations to reach the final <em>leaf nodes</em> (output DataFrames).</p>
<hr />
<h3><strong>Problem</strong></h3>
<p>Each DataFrame holds <strong>millions of rows</strong> and <strong>~100 columns</strong>, so memory usage grows rapidly. I want to release memory for intermediate DataFrames <strong>as soon as they are no longer needed</strong>.</p>
<p>Example:</p>
<ul>
<li>Once <code>df2</code> and <code>df3</code> are created, I no longer need <code>df1</code> → release it from memory.</li>
<li>Once <code>df4</code> and <code>df5</code> are created, I no longer need <code>df2</code> and <code>df3</code>.</li>
</ul>
<p>I have tried:</p>
<pre class="lang-py prettyprint-override"><code>del df1
import gc
gc.collect()
</code></pre>
<p>...but the memory is not being released.</p>
<h3>Constraints</h3>
<ul>
<li><p>I am aware I could run the entire DAG execution in a subprocess, so that when the process finishes, memory is released.</p>
</li>
<li><p>However, I cannot wait for the entire graph to finish — I must free memory incrementally during execution because I run out of memory before completing the graph.</p>
</li>
</ul>
<h3>Question</h3>
<p>How can I explicitly release memory of intermediate Polars DataFrames while processing a large DAG of transformations like this?</p>
|
<python><garbage-collection><out-of-memory><python-polars><memory-optimization>
|
2025-08-08 00:27:18
| 2
| 1,022
|
Deep Ghodasara
|
79,729,174
| 6,197,439
|
Preventing unexpected trigger of setModelData and setData in QTableView with QComboBox?
|
<p>The code below mostly does what I want: I start out with data of 2D table (list of lists), where 1st and 3rd column start out as random numbers, but 2nd column starts with <code>None</code>. This data is used for a table model of a QTableView, which for the cells in the 2nd <code>None</code> column, displays QComboBox with several entries that are also random numbers. The idea is then, that making a choice of a value in a QComboBox sets the corresponding integer value in the 2D table at the given row and column. This is done well by the <code>combo.currentIndexChanged.connect...</code> line.</p>
<p>However, I have commented that <code>combo.currentIndexChanged.connect...</code> line in the code below, just to emphasize the weird behavior that I'm seeing.</p>
<p>First, I'd like to note something that was not obvious to me, which is that</p>
<ul>
<li>Without <code>openPersistentEditor</code>, the corresponding cell will never run <code>ComboBoxDelegate.createEditor</code> (and <code>ComboBoxDelegate.setModelData</code>), and correspondingly never show dropdowns/QComboBoxes.</li>
<li>Cells that do not have <code>openPersistentEditor</code>, but return flags with <code>Qt.ItemIsEditable</code>, will still allow for double-click and edit via the default line edit (and will only call <code>setData</code> on change).</li>
</ul>
<p>Now, then, the weird behavior is:</p>
<ul>
<li>I start the code, I get the window rendered with the table, and I immediately close the application. In the printout, I get:</li>
</ul>
<pre class="lang-none prettyprint-override"><code>setModelData self.sender()=None index.row()=2 index.column()=1 editor.currentText()='70'
setData self.sender()=None row=2 col=1 value='70'
... self._data=[[67, None, 41], [99, None, 5], [27, '70', 47]]
</code></pre>
<p>So, without any interaction at all, somehow both <code>QStyledItemDelegate.setModelData</code> and <code>QAbstractTableModel.setData</code> triggered on application exit; and while the value of the QComboBox, at this time, at 3rd row 2nd column was indeed '70', this got written into my table - as a string - without any intention, which I do not want.</p>
<blockquote>
<p>Note: since there is a call to <code>setData</code> from <code>setModelData</code>, I'd assume the <code>setData</code> is dependent on <code>setModelData</code>; however, even if the method <code>setModelData</code> is completely commented away from the code (with <code>openPersistentEditor</code> left), the same experiment prints out:</p>
<pre class="lang-none prettyprint-override"><code>setData self.sender()=None row=2 col=1 value='70'
... self._data=[[67, None, 41], [99, None, 5], [27, '70', 47]]
</code></pre>
<p>... so <code>setData</code> of the model gets independently triggered by application close (even without <code>setModelData</code>)</p>
</blockquote>
<hr />
<p>Here is another example:</p>
<ul>
<li><p>I start the application, I get the window rendered with the table</p>
</li>
<li><p>I click on empty space in the application window - I get a printout:</p>
<pre class="lang-none prettyprint-override"><code>setModelData self.sender()=None index.row()=2 index.column()=1 editor.currentText()='70'
setData self.sender()=None row=2 col=1 value='70'
... self._data=[[67, None, 41], [99, None, 5], [27, '70', 47]]
</code></pre>
</li>
<li><p>I close the application - there is no printout.</p>
</li>
</ul>
<hr />
<p>Another example:</p>
<ul>
<li>I start the application, I get the window rendered with the table</li>
<li>I click on empty space in the application window - I get a printout:
<pre class="lang-none prettyprint-override"><code>setModelData self.sender()=None index.row()=2 index.column()=1 editor.currentText()='70'
setData self.sender()=None row=2 col=1 value='70'
... self._data=[[67, None, 41], [99, None, 5], [27, '70', 47]]
</code></pre>
</li>
<li>I click on the QComboBox at row=2 col=1 (third row, second column) - popup is shown, there is no printout</li>
<li>I click on the QComboBox at second row, second column - popup is shown, and exact same previous printout is repeated</li>
</ul>
<hr />
<p>What causes these triggers of <code>QStyledItemDelegate.setModelData</code> and <code>QAbstractTableModel.setData</code> on application exit, click on empty space - and every time a QComboBox opens its popup in response to mouse left-click (though only if it is not the same QComboBox reported in the last printout); and how can I prevent them, so it is only the <code>currentIndexChanged</code> handler that is allowed to call <code>setData</code>?</p>
<p>Here is the example:</p>
<pre class="lang-py prettyprint-override"><code>import sys
import random
from PyQt5.QtWidgets import QApplication, QTableView, QComboBox, QStyledItemDelegate
from PyQt5.QtCore import Qt, QAbstractTableModel
class RandomDataModel(QAbstractTableModel):
def __init__(self, data, parent=None):
super().__init__(parent)
self._data = data
def rowCount(self, parent=None):
return len(self._data)
def columnCount(self, parent=None):
return len(self._data[0])
def data(self, index, role=Qt.DisplayRole):
if role == Qt.DisplayRole:
return self._data[index.row()][index.column()]
return None
def setData(self, index, value, role=Qt.EditRole):
row, col = index.row(), index.column()
if role == Qt.EditRole:
print(f"setData {self.sender()=} {row=} {col=} {value=}")
self._data[row][col] = value
self.dataChanged.emit(index, index)
print(f"... {self._data=}")
return True
return False
def flags(self, index):
return Qt.ItemIsEditable | Qt.ItemIsEnabled | Qt.ItemIsSelectable
class ComboBoxDelegate(QStyledItemDelegate):
def createEditor(self, parent, option, index):
# self.parent() here is MainWindow (that is, QTableView)
#print(f"createEditor {self.parent()=} {parent=}")
combo = QComboBox(parent)
combo.addItems([str(random.randint(1, 100)) for _ in range(3)])
#combo.currentIndexChanged.connect(lambda idx, qmidx=index: self.commitComboValue(idx, qmidx))
return combo
def setModelData(self, editor, model, index):
print(f"setModelData {self.sender()=} {index.row()=} {index.column()=} {editor.currentText()=}")
model.setData(index, editor.currentText())
def commitComboValue(self, index, qmidx):
# self.parent() here is MainWindow (that is, QTableView)
# self.sender() is QComboBox
row, col = qmidx.row(), qmidx.column()
value = int(self.sender().itemText(index))
print(f"commitComboValue {row=} {col=} {index=} {value=} {self.sender()=}")
table_view = self.parent()
table_view.model().setData(qmidx, value)
class MainWindow(QTableView):
def __init__(self):
super().__init__()
data = [
[random.randint(1, 100), None, random.randint(1, 100)],
[random.randint(1, 100), None, random.randint(1, 100)],
[random.randint(1, 100), None, random.randint(1, 100)],
]
self.the_model = RandomDataModel(data)
self.delegate = ComboBoxDelegate(self)
self.setModel(self.the_model)
self.setItemDelegateForColumn(1, self.delegate)
for row in range(3):
self.openPersistentEditor(self.the_model.index(row, 1))
self.resize(300, 300)
if __name__ == "__main__":
app = QApplication(sys.argv)
window = MainWindow()
window.setWindowTitle("QTableView Example")
window.show()
sys.exit(app.exec_())
</code></pre>
|
<python><pyqt5><qt5><qtableview>
|
2025-08-07 22:19:23
| 1
| 5,938
|
sdbbs
|
79,729,089
| 356,011
|
A generated protobuf python file is unable to locate another protobuf module it generated
|
<p>I have a project set up like this with <code>protos/</code> as a top level directory and the project that actually builds it under <code>services/common-py</code>.</p>
<pre class="lang-none prettyprint-override"><code>├── protos
│ ├── common.proto
│ ├── doodad.proto
└── services
└── common-py
├── my_common
│ ├── __init__.py
│ └── pb
│ ├── __init__.py
│ ├── common_pb2.py
│ └── doodad_pb2.py
├── test_pb_construction.py
</code></pre>
<p>The protos look like this:</p>
<p>common.proto:</p>
<pre class="lang-proto prettyprint-override"><code>syntax = "proto3";
package io.mycompany.proto.model;
message DoodadHeader {
string id = 1;
}
</code></pre>
<p>and doodad.proto:</p>
<pre class="lang-proto prettyprint-override"><code>syntax = "proto3";
package io.mycompany.proto.model;
import "common.proto";
message Doodad {
DoodadHeader header = 1;
string doodad = 2;
}
</code></pre>
<p>I generate the protos like this:</p>
<pre class="lang-none prettyprint-override"><code>cd ./services/common-py
python -m grpc_tools.protoc --python_out=my_common/pb --proto_path=/Users/paul/dev/protos_broken/protos common.proto
python -m grpc_tools.protoc --python_out=my_common/pb --proto_path=/Users/paul/dev/protos_broken/protos doodad.proto
</code></pre>
<p>I also have this <code>__init__.py</code></p>
<pre class="lang-py prettyprint-override"><code>try:
from .common_pb2 import DoodadHeader
from .doodad_pb2 import Doodad
except ImportError as e:
raise e
__all__ = [
"DoodadHeader",
"Doodad",
]
</code></pre>
<p>I try to use <code>Doodad</code> and I get an error that <code>doodad_pb2.py</code> cannot import <code>common_pb2.py</code>. It looks like <code>Doodad</code> is found ok, but <code>Doodad</code> cannot find <code>DoodadHeader</code></p>
<pre class="lang-py prettyprint-override"><code>from my_common.pb import Doodad, DoodadHeader
def test_construction():
doodad = Doodad(
header=DoodadHeader(id="1"),
doodad="doodad"
)
</code></pre>
<pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
File "/Users/paul/dev/protos_broken/services/common-py/test_pb_construction.py", line 3, in <module>
from my_common.pb import Doodad, DoodadHeader
File "/Users/paul/dev/protos_broken/services/common-py/my_common/pb/__init__.py", line 16, in <module>
raise e
File "/Users/paul/dev/protos_broken/services/common-py/my_common/pb/__init__.py", line 14, in <module>
from .doodad_pb2 import Doodad
File "/Users/paul/dev/protos_broken/services/common-py/my_common/pb/doodad_pb2.py", line 25, in <module>
import common_pb2 as common__pb2
ModuleNotFoundError: No module named 'common_pb2'
</code></pre>
<p>Looking at that line 25 in <code>doodad_pb2.py</code>, there is this which seems odd:</p>
<pre class="lang-py prettyprint-override"><code>import common_pb2 as common__pb2
</code></pre>
<p>What am I doing wrong here?</p>
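<p>The only workaround I have come up with so far is post-processing the generated file to make that import relative, e.g. with a small (hypothetical) script like the one below, but it feels like I am fighting the generator:</p>
<pre class="lang-py prettyprint-override"><code># rewrite the absolute import emitted by protoc into a package-relative one
from pathlib import Path

p = Path("my_common/pb/doodad_pb2.py")
p.write_text(p.read_text().replace(
    "import common_pb2 as common__pb2",
    "from . import common_pb2 as common__pb2",
))
</code></pre>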
|
<python><protocol-buffers><protobuf-python>
|
2025-08-07 20:02:29
| 0
| 8,429
|
Paul C
|
79,729,083
| 199,818
|
Unable to install our Poetry plugin: pip says there is no matching distribution found
|
<p>I wrote a Poetry plugin and have it stored on a company PyPI repo. I am working on a Python project using Poetry and want to use my plugin. I thought specifying the company repo with <code>tool.poetry.source</code> and then adding a line to</p>
<pre class="lang-toml prettyprint-override"><code>[tool.poetry.requires-plugins]
mycoolplugin = ">=1.0.3"
</code></pre>
<p>would be enough to get Poetry to install the plugin but it did not work as I expected.</p>
<p>I tried to install it manually and discovered there is a problem of some kind. I started with:</p>
<pre class="lang-bash prettyprint-override"><code>pip install mycoolplugin -i https://artifactory.compay.com/artifactory/api/pypi/thw-pypi/simple
Looking in indexes: https://artifactory.compay.com/artifactory/api/pypi/thw-pypi/simple
Collecting mycoolplugin
Downloading https://artifactory.compay.com/artifactory/api/pypi/thw-pypi/mycoolplugin/1.0.3/mycoolplugin-1.0.3-py3-none-any.whl (10 kB)
INFO: pip is looking at multiple versions of mycoolplugin to determine which version is compatible with other requirements. This could take a while.
Downloading https://artifactory.compay.com/artifactory/api/pypi/thw-pypi/mycoolplugin/1.0.2/mycoolplugin-1.0.2-py3-none-any.whl (10 kB)
ERROR: Cannot install mycoolplugin==1.0.2 and mycoolplugin==1.0.3 because these package versions have conflicting dependencies.
The conflict is caused by:
mycoolplugin 1.0.3 depends on poetry==2.1.3
mycoolplugin 1.0.2 depends on poetry==2.1.1
To fix this you could try to:
1. loosen the range of package versions you've specified
2. remove package versions to allow pip to attempt to solve the dependency conflict
ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts
</code></pre>
<p>But I am running in a venv with Poetry 2.1.3 so why not choose the correct one? There is no ambiguity.</p>
<pre class="lang-bash prettyprint-override"><code>poetry --version
Poetry (version 2.1.3)
</code></pre>
<p>So, I thought perhaps pip cannot tell which Poetry is already there, so I ran it via Poetry instead; sadly, this command line gives the exact same results:</p>
<pre class="lang-bash prettyprint-override"><code>poetry run python3 -m pip install mycoolplugin -i https://artifactory.compay.com/artifactory/api/pypi/thw-pypi/simple`
</code></pre>
<p>So, then I tried installing the specific version I wanted:</p>
<pre class="lang-bash prettyprint-override"><code>poetry run python3 -m pip install mycoolplugin==1.0.3 -i https://artifactory.company.com/artifactory/api/pypi/thw-pypi/simple
Looking in indexes: https://artifactory.company.com/artifactory/api/pypi/thw-pypi/simple
Collecting mycoolplugin==1.0.3
Using cached https://artifactory.company.com/artifactory/api/pypi/thw-pypi/mycoolplugin/1.0.3/mycoolplugin-1.0.3-py3-none-any.whl (10 kB)
INFO: pip is looking at multiple versions of mycoolplugin to determine which version is compatible with other requirements. This could take a while.
ERROR: Could not find a version that satisfies the requirement poetry==2.1.3 (from mycoolplugin) (from versions: none)
ERROR: No matching distribution found for poetry==2.1.3
</code></pre>
<p>What is wrong here?</p>
<p>The <code>mycoolplugin</code> project has a <code>pyproject.toml</code> file with:</p>
<pre class="lang-toml prettyprint-override"><code>[project]
name = "mycoolplugin"
...
dependencies = [ "poetry (==2.1.3)", ...
...
[build-system]
requires = ["poetry-core>=2.0"]
</code></pre>
|
<python><pip><python-poetry>
|
2025-08-07 19:57:02
| 0
| 857
|
Ian Leslie
|
79,729,078
| 9,397,585
|
How to correctly pass float4 vector to kernel using PyCUDA?
|
<p>I am trying to pass a float4 as argument to my cuda kernel (by value) using PyCUDA’s <code>make_float4()</code>. But there seems to be some misalignment when the data is transferred to the kernel. If I read the output for an input (1,2,3,4) I instead get (3,4,0,0). This happens with <code>int4</code> as well, but <code>int3</code> and <code>float3</code> work just fine.</p>
<p>Minimal code to reproduce error in Google Colab:</p>
<pre class="lang-py prettyprint-override"><code># --- Minimal PyCUDA Test ---
import pycuda.driver as drv
import pycuda.compiler
import pycuda.gpuarray as gpa
import numpy as np
import pycuda.autoinit
minimal_kernel_code = """
__global__ void write_constant(
int* output,
const int4 test
) {
output[0] = test.x;
output[1] = test.y;
output[2] = test.z;
output[3] = test.w;
}
"""
module_test = pycuda.compiler.SourceModule(minimal_kernel_code)
write_constant_kernel = module_test.get_function("write_constant")
test_gpu_mem = drv.mem_alloc(4 * np.int32().nbytes)
write_constant_kernel(
test_gpu_mem,
gpa.vec.make_int4(1,2,3,4), # Constant value to write
block=(1, 1, 1),
grid=(1, 1)
)
test_cpu_mem = np.empty(4, dtype=np.int32)
drv.memcpy_dtoh(test_cpu_mem, test_gpu_mem)
print(test_cpu_mem)
</code></pre>
<p>The expected output would be [1,2,3,4] but it is [3,4,0,0].</p>
|
<python><cuda><gpu><nvidia><pycuda>
|
2025-08-07 19:49:21
| 1
| 308
|
Dodilei
|
79,728,808
| 458,738
|
Does peewee ORM have a native way to include table/columns descriptions/comments?
|
<p>Does <a href="https://pypi.org/project/peewee" rel="nofollow noreferrer">peewee ORM</a> have a native way to include table/columns descriptions/comments?</p>
<p>If not, what would be the preferred way to add comments to db objects?</p>
<p>Thank you for your help.</p>
|
<python><orm><peewee>
|
2025-08-07 15:25:01
| 0
| 579
|
cytochrome
|
79,728,690
| 11,063,709
|
How is the execution of Jax and non-Jax parts interleaved in a Python program and when does an abstract value become concrete?
|
<p>I have the following code:</p>
<pre><code>def non_jitted_setup():
print("This code runs once at the beginning of the program.")
return jnp.array([1.0, 2.0, 3.0])
class A:
@partial(jax.jit, static_argnums=0)
def my_jitted_function(self, x):
print("This code runs once during the first trace.")
y = x * 2
self.temp = y
return y
# Program execution
data = non_jitted_setup()
A = A()
result1 = A.my_jitted_function(data) # Tracing happens here
np.array(result1)
np.array(A.temp)
</code></pre>
<p>If I understand correctly, Jax runs the program line by line and traces the jitted function and runs the Python code inside it once whenever it needs to be compiled and uses the cached version otherwise.</p>
<p>Once <code>y</code> is returned into <code>result1</code> above, <code>result1</code> becomes concrete and can be converted to a <code>numpy.array</code>. However, <code>A.temp</code> still seems to be an abstract array, despite it being assigned <code>y</code>, which is what was returned and concretised to <code>result1</code> in the previous line, because I get the following error when trying to convert it:</p>
<pre><code>jax.errors.TracerArrayConversionError: The numpy.ndarray conversion method __array__() was called on traced array with shape float32[3]
</code></pre>
<p>When will the value in <code>A.temp</code> be made concrete? Can we make the value in <code>A.temp</code> be concrete somehow as we know it is used outside the jitted function after it is called?</p>
|
<python><jax>
|
2025-08-07 13:58:00
| 1
| 1,442
|
Warm_Duscher
|
79,728,678
| 2,307,934
|
How to make a left join in an update with sqlalchemy and mysql?
|
<p>I am trying to craft a query with SQLAlchemy 2.5. My main goal is to do an update while:</p>
<ul>
<li>doing a join on a "parent" table</li>
<li>and doing a left join between the "parent" table and a third table</li>
</ul>
<p>I can't seem to make this work: I always end up with either errors or Cartesian products.</p>
<p>I have tried a few options. Currently I am trying with a subquery, but getting a Cartesian product:</p>
<pre class="lang-py prettyprint-override"><code>from sqlalchemy import (
DateTime,
ForeignKey,
Integer,
create_engine,
func,
select,
update,
)
from sqlalchemy.orm import DeclarativeBase, mapped_column, sessionmaker
###########
# MODELS #
###########
class Base(DeclarativeBase):
pass
class Parent(Base):
__tablename__ = "parent"
id = mapped_column(Integer, primary_key=True, nullable=False)
updated_at = mapped_column(DateTime, nullable=True)
class Child(Base):
__tablename__ = "child"
id = mapped_column(Integer, primary_key=True, nullable=False)
parent_id = mapped_column(Integer, ForeignKey("parent.id"))
last_status_change = mapped_column(DateTime, nullable=True)
class OtherChild(Base):
__tablename__ = "other_child"
id = mapped_column(Integer, primary_key=True, nullable=False)
parent_id = mapped_column(Integer, ForeignKey("parent.id"))
###########
# DB INIT #
###########
engine = create_engine("mysql://root:@127.0.0.1/dev?charset=utf8mb4")
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)
###########
# EXAMPLE #
###########
subq = (
select(Parent.id, Parent.updated_at)
.outerjoin(OtherChild)
.where(OtherChild.id.is_(None))
).subquery()
stmt = (
update(Child)
.where(Child.parent_id.in_(select(subq.c.id)))
.values(
last_status_change=func.CONVERT_TZ(subq.c.updated_at, "Europe/Paris", "UTC")
)
)
compiled = stmt.compile(dialect=engine.dialect, compile_kwargs={"literal_binds": True})
# print(compiled)
with Session() as session:
session.execute(stmt)
</code></pre>
<p>In a nutshell, the SQL that I am trying to generate looks like this (I am aware of the possibility to use <code>text()</code> and raw SQL, but I'd like to make this work with the ORM):</p>
<pre class="lang-sql prettyprint-override"><code>UPDATE
child
JOIN parent ON
parent.id = child.parent_id
LEFT JOIN other_child ON
other_child.parent_id = parent.id
SET
last_status_change = CONVERT_TZ(parent.updated_at, 'Europe/Paris', 'UTC')
WHERE
other_child.id IS NULL
</code></pre>
<p>What am I missing?</p>
|
<python><mysql><sqlalchemy>
|
2025-08-07 13:47:41
| 1
| 1,240
|
edg
|
79,728,412
| 8,290,689
|
Add autocomplete for tcsh shell in python script using click
|
<p>I want to add autocompletion for tcsh to the naval example of click: <a href="https://github.com/pallets/click/tree/main/examples/naval" rel="nofollow noreferrer">https://github.com/pallets/click/tree/main/examples/naval</a></p>
<p>I started to add this:</p>
<pre class="lang-py prettyprint-override"><code>_tcsh_source = """\
%(complete_func)s {
response=
}
"""
@add_completion_class
class TcshComplete(ShellComplete):
name = "tcsh"
source_template = _tcsh_source
</code></pre>
<p>but I am not sure what the content of the <code>_tcsh_source</code> variable should be.</p>
<p>Directly in the shell, I will use a command similar to this (I did not add all the options):</p>
<pre class="lang-none prettyprint-override"><code>complete naval 'p/1/(mine ship)/' \
'n/mine/(remove set)/' \
'n/set/(--moored --drifting)/' \
'n/ship/(move --new --shoot)/' \
'c/"ship move --"/(speed)/'
</code></pre>
<p>Is this feasible? If not, is there another way to generate autocompletion for tcsh?</p>
|
<python><command-line-interface><click>
|
2025-08-07 10:26:54
| 0
| 312
|
Tradjincal
|
79,728,399
| 1,719,931
|
Undocumented pandas DataFrame shuffle()
|
<p>The following seems to work:</p>
<pre><code>import pandas as pd
import sklearn
df = sklearn.datasets.load_iris()
df = pd.DataFrame(df.data, columns=df.feature_names)
df.shuffle()
</code></pre>
<p>However this <code>shuffle</code> function seems <a href="https://pandas.pydata.org/docs/search.html?q=shuffle" rel="nofollow noreferrer">not to be a documented DataFrame function</a>?</p>
<p>Is this an internal function we are not supposed to use?</p>
|
<python><pandas><dataframe><shuffle>
|
2025-08-07 10:18:13
| 2
| 5,202
|
robertspierre
|
79,728,364
| 15,993,687
|
python standalone builds causes problem with pillow tkinter
|
<p>I am trying to embed a standalone Python build into my Electron app. I downloaded the standalone build from <a href="https://github.com/astral-sh/python-build-standalone" rel="nofollow noreferrer">https://github.com/astral-sh/python-build-standalone</a></p>
<p>Everything seems to work fine until Pillow is installed. While the installation works correctly, when using the library with tkinter I get the following error on macOS:</p>
<pre class="lang-py prettyprint-override"><code>Traceback (most recent call last):
File "/.venv/lib/python3.13/site-packages/PIL/ImageTk.py", line 59, in _pyimagingtkcall
tk.call(command, photo, repr(ptr))
~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^
_tkinter.TclError: invalid command name "PyImagingPhoto"
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "main.py", line 14, in <module>
main_img = ImageTk.PhotoImage(main_img)
File ".venv/lib/python3.13/site-packages/PIL/ImageTk.py", line 132, in __init__
self.paste(image)
~~~~~~~~~~^^^^^^^
File "/.venv/lib/python3.13/site-packages/PIL/ImageTk.py", line 188, in paste
_pyimagingtkcall("PyImagingPhoto", self.__photo, ptr)
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.venv/lib/python3.13/site-packages/PIL/ImageTk.py", line 63, in _pyimagingtkcall
from . import _imagingtk
TypeError: bad argument type for built-in operation
</code></pre>
<p>code</p>
<pre class="lang-py prettyprint-override"><code>import os
import tkinter as tk
from PIL import Image, ImageTk
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
main = tk.Tk()
main.title("Main Window")
main.config(bg="#E4E2E2")
main.geometry("700x400")
main_img = Image.open(os.path.join(BASE_DIR, "assets", "images", "Screenshot.png"))
main_img = ImageTk.PhotoImage(main_img)
main.iconphoto(False, main_img)
main.mainloop()
</code></pre>
<p>The code works with the standard Python installation on my Mac, but the build I am embedding causes this issue.</p>
<p>I also saw this issue here: <a href="https://github.com/astral-sh/python-build-standalone/issues/533" rel="nofollow noreferrer">https://github.com/astral-sh/python-build-standalone/issues/533</a></p>
<p>Has anyone who has worked with something like this know how to solve this?</p>
<p>Edit: it's been fixed in the latest release according to the <a href="https://github.com/astral-sh/python-build-standalone/issues/533#issuecomment-3169287979" rel="nofollow noreferrer">maintainer</a></p>
|
<python><tkinter><electron><python-imaging-library>
|
2025-08-07 09:35:24
| 1
| 3,141
|
Art
|
79,728,358
| 5,232,323
|
Selenium can't get performance log in iframe of google spreadsheets
|
<p>Environment:
ChromeDriver 138.0.7204.168
Chrome 138.0.7204.184
Selenium 4.34.2
Windows 11</p>
<p>I shared a spreadsheet via an iframe and embedded it in a web page, a.html:</p>
<pre><code><html>
<head>
</head>
<body>
<iframe src="https://docs.google.com/spreadsheets/d/e/2PACX-1vQaJwXqci0KQzDjzs8kvy--p80OZY1n30t4NWRh2qkU3pJqAdB4ZJEc79ohh4OkuifHWOHBRi0Z0yKS/pubhtml?gid=0&amp;single=true&amp;widget=true&amp;headers=false"></iframe>
</body>
</html>
</code></pre>
<p>It looks like this:</p>
<p><a href="https://i.sstatic.net/yvxHky0w.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yvxHky0w.png" alt="the page image" /></a></p>
<p>I write a code to get all the domain names a page accessed:</p>
<pre class="lang-py prettyprint-override"><code>from urllib.parse import urlparse
import json
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.chrome.service import Service
import time
chrome_options = Options()
chrome_options.set_capability('goog:loggingPrefs',
{'performance': 'ALL', })
service = Service(executable_path='C:\\Users\\qi\\PycharmProjects\\PythonLearn\\DomainCollector\\chromedriver.exe')
driver = webdriver.Chrome(service=service, options=chrome_options)
test_url = "file:///C:/Users/qi/Desktop/a.html"
driver.get(test_url)
time.sleep(5)
dms = set()
logs = driver.get_log('performance')
for entry in logs:
msg = entry['message']
try:
data = json.loads(msg)
method = data['message']['method']
if method == 'Network.requestWillBeSent':
url = data['message']['params']['request']['url']
domain = urlparse(url).netloc
if domain:
dms.add(domain)
except Exception:
continue
print(f"domains:{dms}")
</code></pre>
<p>Result:</p>
<pre><code>domains:{'docs.google.com'}
</code></pre>
<p>but it accessed other domain names:</p>
<p><a href="https://i.sstatic.net/pz61HxXf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pz61HxXf.png" alt="result by DevTools" /></a></p>
<p>Why can't it get all the domain names?</p>
|
<python><google-chrome><selenium-webdriver><iframe>
|
2025-08-07 09:25:11
| 1
| 2,471
|
tinyhare
|
79,728,153
| 9,223,023
|
What is the length of a python bytecode instruction in CPython?
|
<p>Python docs on the dis module state that the length of a python bytecode instruction in CPython is 2 bytes (<a href="https://docs.python.org/3/library/dis.html" rel="nofollow noreferrer">https://docs.python.org/3/library/dis.html</a>)</p>
<p>However, when I disassemble a function and look at the actual bytecode, I see that the instructions only take up 1 byte:</p>
<pre><code>def example_function(x):
if x > 0:
return x + 1
return 0
print(example_function.__code__.co_code.hex(' '))
# 97 00 7c 00 64 01 6b 44 00 00 72 05 7c 00 64 02 7a 00 00 00 53 00 79 01
print(*example_function.__code__.co_code)
# 151 0 116 1 0 0 0 0 0 0 0 0 106 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 124 0 171 1 0 0 0 0 0 0 83 0
dis.dis(example_function)
10 0 RESUME 0
11 2 LOAD_FAST 0 (x)
4 LOAD_CONST 1 (0)
6 COMPARE_OP 68 (>)
10 POP_JUMP_IF_FALSE 5 (to 22)
12 12 LOAD_FAST 0 (x)
14 LOAD_CONST 2 (1)
16 BINARY_OP 0 (+)
20 RETURN_VALUE
13 >> 22 RETURN_CONST 1 (0)
</code></pre>
<p>Are they variable in length?</p>
<p>The instruction list defined in dis.opname does have a few instructions longer than 1 byte, but how does the interpreter then know when it's 1 byte and when it's 2 bytes?</p>
<p>As you can see above, I've tried disassembling the code to see for myself and inspected the docs.</p>
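<p>For reference, this is the kind of decoding I have been trying, which assumes every instruction is a fixed opcode/oparg pair of one byte each:</p>
<pre><code>import dis

raw = example_function.__code__.co_code
# step through the bytecode two bytes at a time: opcode, then its argument
for offset in range(0, len(raw), 2):
    opcode, oparg = raw[offset], raw[offset + 1]
    print(offset, dis.opname[opcode], oparg)
</code></pre>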
|
<python><bytecode><python-internals>
|
2025-08-07 06:46:31
| 1
| 1,203
|
Petras Purlys
|
79,728,136
| 11,640,299
|
Issue with running training on multigpu using DDP
|
<p>I am training a classifier model, but since it is taking far too long I want to use multiple GPUs for the training. The current code is</p>
<pre><code>rank = int(os.environ["RANK"])
world_size = int(os.environ["WORLD_SIZE"])
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)
device = torch.device("cuda", local_rank)
dist.init_process_group(backend='nccl')
model = DDP(model, device_ids=[local_rank])
for epoch in range(1, epochs + 1):
tr_loss = train_epoch(model, train_loader, epoch, backbone, optimizer, device, logger)
te_loss, te_acc = eval_epoch(model, test_loader, epoch, backbone, device, logger, num_classes, all_classes)
if rank == 0:
if te_loss < best_loss:
ckpt = { ... }
torch.save(ckpt, path)
dist.destroy_process_group()
</code></pre>
<p>Running the code using</p>
<pre><code>CUDA_VISIBLE_DEVICES=0,1,2,3 torchrun --nproc_per_node 4 train.py
</code></pre>
<p>I am getting an error on the line</p>
<pre><code>model = DDP(model, device_ids=[local_rank])
</code></pre>
<p>Error:
Duplicate GPU detected: rank 2 and rank 0 both on CUDA device 7000</p>
<p>And the same error for rank 0 and rank 1, rank 3 and rank 0, rank 1 and rank 0.</p>
<p>I'm assuming all the models are being loaded onto GPU 0, but I'm not sure why (a small diagnostic like the sketch below could confirm this). Also, I understand why we save the checkpoint only on rank 0, but I want to make sure the code is correct for that.
Thanks</p>
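<p>For what it's worth, a minimal diagnostic sketch that can be run under the same <code>torchrun</code> command to check which CUDA device each rank actually ends up on (purely illustrative, not part of the training script):</p>
<pre class="lang-py prettyprint-override"><code>import os

import torch
import torch.distributed as dist

# If every rank prints device 0 here, torch.cuda.set_device(local_rank) is not taking effect.
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)
dist.init_process_group(backend="nccl")
print(f"rank={dist.get_rank()} local_rank={local_rank} "
      f"device={torch.cuda.current_device()} name={torch.cuda.get_device_name()}")
dist.destroy_process_group()
</code></pre>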
|
<python><multi-gpu><ddp>
|
2025-08-07 06:25:00
| 0
| 342
|
Shlok Sharma
|
79,727,981
| 1,977,587
|
Cartesian product for both keys and values of a dictionary?
|
<p>I need to get the Cartesian product of a dictionary <code>{str: Fraction}</code> with itself, but currently need to "loop" through the dict twice, once for the keys and once for the values.</p>
<p>The difference is that a tuple output works well for the keys, but for the values I actually need the regular math product.</p>
<p>How can I do better? Keen to stick to the standard library if possible.</p>
<pre><code>import itertools as it
import math as mt
from fractions import Fraction as fr
dl = {
'P1': fr(7, 120), 'P2': fr(20, 120), 'P3': fr(6, 120),
    'P4': fr(10, 120), 'P5': fr(7, 120), 'P6': fr(18, 120),
'P7': fr(16, 120), 'P8': fr(19, 120), 'P9': fr(17, 120)
}
rp = 2 # <- This is variable, can be an unknown integer
iter_keys = list(it.product(dl, repeat = rp))
iter_vals = list(map(mt.prod, it.product(dl.values(), repeat = rp)))
# Desired output
print({iter_keys[i]: iter_vals[i] for i in range(len(iter_vals))})
</code></pre>
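<p>For illustration, a minimal sketch (not necessarily the best approach) of what a single pass could look like, iterating over <code>dl.items()</code> so each element of the product carries both the key and the value; the variable names here are just placeholders:</p>
<pre class="lang-py prettyprint-override"><code>import itertools as it
import math as mt
from fractions import Fraction as fr

dl = {'P1': fr(7, 120), 'P2': fr(20, 120), 'P3': fr(6, 120)}
rp = 2

# Each combo is a tuple of (key, value) pairs, so the key tuple and the math
# product of the values can both be derived from the same iteration.
result = {
    tuple(k for k, _ in combo): mt.prod(v for _, v in combo)
    for combo in it.product(dl.items(), repeat=rp)
}
print(result)
</code></pre>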
|
<python><for-loop><python-itertools>
|
2025-08-07 01:27:20
| 4
| 1,770
|
Ameya
|
79,727,814
| 10,663,096
|
python+asyncpg+PostgreSQL returns jsonb column as a string instead of an object
|
<p>I'm new to Python and just ran into a problem: I've set up a simple program to fetch some data from a PostgreSQL database. They say that the asyncpg library automatically converts JSONB data to Python objects (dicts/lists) by default. This doesn't happen in my case when I select data from a table. I've made a synthetic example to demonstrate this:</p>
<pre><code>import asyncio
import asyncpg
async def main():
conn = await asyncpg.connect("postgres://user:pass@localhost/database")
res = await conn.fetch("select '[1, 2, 3]'::jsonb as column;")
for row in res:
for key, value in row.items():
print("'" + key + "'")
print("'" + value + "'")
# Call main
if __name__ == "__main__":
asyncio.run(main())
</code></pre>
<p>and the output is</p>
<pre><code>'column'
'[1, 2, 3]'
</code></pre>
<p>Using the debugger, I can see that the value is a string as well.
How can I fix that?</p>
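<p>For reference, a minimal sketch of the per-connection codec approach (assuming <code>set_type_codec</code> can be pointed at <code>json.loads</code>/<code>json.dumps</code> for the built-in <code>jsonb</code> type, as the asyncpg documentation describes for <code>json</code>):</p>
<pre class="lang-py prettyprint-override"><code>import asyncio
import json
import asyncpg

async def main():
    conn = await asyncpg.connect("postgres://user:pass@localhost/database")
    # Register json.loads / json.dumps as the codec for jsonb values on this connection
    await conn.set_type_codec(
        "jsonb",
        encoder=json.dumps,
        decoder=json.loads,
        schema="pg_catalog",
    )
    res = await conn.fetch("select '[1, 2, 3]'::jsonb as column;")
    print(type(res[0]["column"]), res[0]["column"])  # expecting a Python list here
    await conn.close()

if __name__ == "__main__":
    asyncio.run(main())
</code></pre>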
|
<python><postgresql><jsonb><asyncpg>
|
2025-08-06 20:25:08
| 2
| 310
|
CoderFF
|
79,727,651
| 11,160,879
|
Reading an IBM PIC S9(09) COMP data from file in python
|
<p>I was given sample data by the mainframe engineer, for which the copybook looks like this:</p>
<pre><code>01 CCS001-DETAIL-RECORD.
05 CCS001-REC-TYPE PIC X(01). Should be '1'
05 CCS001-MEMB-NUM PIC S9(09) COMP.
05 CCS001-PR-CAT PIC X(03).
05 CCS001-CM-TS PIC X(26).
05 CCS001-CR-ID PIC X(08).
05 FILLER PIC X(38).
</code></pre>
<p>When I read it in Python and printed it, it looked like this:</p>
<pre><code>b'1\x07\xc2\xae\xc2\x82\xc3\x9dABC...'
b'1\x07\xc2\xaen\xc3\xb4ABC...'
b'1\x07^\x15jABC...'
b'1\x07^\xc3\xa4\xc3\xbbABC...'
b'1\x07^\xc2\xb8\xc2\xb7ABC...'
b'1\x07^z\x10ABC...'
b'1\x07^\xc2\xae\xc3\x81ABC...'
b'1\x07^X\xc3\xb6ABC...'
b'1\x07^\xc3\x9b\xc3\x94ABC...'
b'1\x07\xc2\xa3\xc3\xa1`ABC...'
b'1\x07\xc2\xb7%vABC...'
</code></pre>
<p>where <code>...'</code> stands for more data that I am not interested in. I am interested in the <code>CCS001-MEMB-NUM</code> field. The <code>ABC</code> values are for the field <code>CCS001-PR-CAT</code>. The Python code I used is below:</p>
<pre><code>with open('sample.dat', 'rb') as f:
data = f.read()
records=data.split(b'\r\n')
for i, record in enumerate(records):
if not record:
continue
print(record)
if i==10:
break
</code></pre>
<p>When I opened the file in PyCharm, it looks like below:</p>
<p><a href="https://i.sstatic.net/8NZtBqTK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8NZtBqTK.png" alt="enter image description here" /></a></p>
<p>However, if I assume that the COMP field should be 4 bytes, then the byte output shows that the records are not of equal length, i.e. <code>ABC</code> does not start at the same position on every line. Does that mean the data the engineer copy-pasted into a text file got corrupted? Or am I using the wrong logic to interpret it?</p>
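<p>For reference, a minimal sketch (assuming the copybook layout above, with <code>CCS001-MEMB-NUM</code> as a 4-byte big-endian signed integer and fixed 80-byte records) of how I would expect one record to be sliced and decoded; the sample value here is made up:</p>
<pre class="lang-py prettyprint-override"><code>import struct

# Build a fake 80-byte record matching the copybook layout:
# 1 byte REC-TYPE, 4 bytes COMP, 3 bytes PR-CAT, then 26 + 8 + 38 bytes of text/filler.
record = b'1' + struct.pack('>i', 800006829) + b'ABC' + b' ' * 72

rec_type = record[0:1]
memb_num = struct.unpack('>i', record[1:5])[0]  # big-endian signed 32-bit
pr_cat = record[5:8]
print(rec_type, memb_num, pr_cat)  # b'1' 800006829 b'ABC'
</code></pre>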
|
<python><db2><endianness><cobol><copybook>
|
2025-08-06 17:33:11
| 1
| 323
|
dijeah
|
79,727,612
| 4,200,997
|
Setting os.environ["TMP"] works in the code but not in pytest tests
|
<p>I have the following python code:</p>
<pre class="lang-py prettyprint-override"><code>import tempfile
import os
def test_paul_confused():
os.environ["TMP"] = "/home/fedora"
assert tempfile.gettempdir() == "/home/fedora"
print(tempfile.gettempdir())
test_paul_confused()
</code></pre>
<p>When I run this code from the command line I get the expected result:</p>
<pre><code>(fedora) [fedora@ip-10-254-6-135 ~]$ uv run pytest.py
/home/fedora
</code></pre>
<p>However when I run the code via pytest, the TMP directory doesn't get set correctly!</p>
<pre><code>(fedora) [fedora@ip-10-254-6-135 ~]$ uv run pytest pytest.py
============================ test session starts =============================
platform linux -- Python 3.13.5, pytest-8.4.1, pluggy-1.6.0
rootdir: /home/fedora
collected 0 items / 1 error
=================================== ERRORS ===================================
_________________________ ERROR collecting pytest.py _________________________
pytest.py:9: in <module>
test_paul_confused()
pytest.py:6: in test_paul_confused
assert tempfile.gettempdir() == "/home/fedora"
E AssertionError: assert '/tmp' == '/home/fedora'
E + where '/tmp' = <function gettempdir at 0xffffb3d83d80>()
E + where <function gettempdir at 0xffffb3d83d80> = tempfile.gettempdir
========================== short test summary info ===========================
ERROR pytest.py - AssertionError: assert '/tmp' == '/home/fedora'
!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!
============================== 1 error in 0.07s ==============================
(fedora) [fedora@ip-10-254-6-135 ~]$
</code></pre>
<p>Why is this code behaving differently in testing than when it's run outside of testing?</p>
|
<python><pytest><temporary-files>
|
2025-08-06 16:57:37
| 1
| 3,796
|
Paul Dejean
|
79,727,568
| 186,202
|
How to expose Enum to FastAPI OpenAPI schema?
|
<p>We are using Pydantic's BaseModel to define a discriminated configuration for our provider by type.</p>
<pre class="lang-py prettyprint-override"><code>class Provider(StrEnum):
ANTHROPIC = "anthropic"
GROQ = "groq"
OPENAI = "openai"
class BaseConfig(BaseModel):
provider: Provider
model_name: str
</code></pre>
<p>Each provider then implements a model that inherits from the BaseConfig like so:</p>
<pre class="lang-py prettyprint-override"><code>class OpenAIModelName(StrEnum):
GPT_4O = "gpt-4o"
GPT_4_1 = "gpt-4.1"
GPT_4_1_MINI = "gpt-4.1-mini"
GPT_4_1_NANO = "gpt-4.1-nano"
O4_MINI = "o4-mini"
class OpenAIConfig(BaseConfig):
provider: Literal[Provider.OPENAI] = Provider.OPENAI
model_name: OpenAIModelName
</code></pre>
<p>We can then create an Annotated type that will use the provider key to select the proper validation BaseModel:</p>
<pre class="lang-py prettyprint-override"><code>LLMConfig = Annotated[OpenAIConfig | AnthropicConfig | GroqConfig, Field(discriminator="provider")]
</code></pre>
<p>We are using OpenAPI to expose backend types to our front-end UI.</p>
<p>However, when using <code>Literal[Provider.OPENAI]</code> in the BaseModel definitions, <code>Provider</code> is not correctly exported to the <code>openapi.json</code> definition.</p>
<p>In this case, the Enum is not properly shared with the front-end.</p>
<p>How can one force FastAPI to expose it?</p>
|
<python><enums><fastapi><openapi>
|
2025-08-06 16:12:49
| 1
| 18,222
|
Natim
|
79,727,531
| 12,461,032
|
TypeError: PPOTrainer.__init__() got an unexpected keyword argument 'config'
|
<p>I am trying to initialize a <code>PPOTrainer</code> but am running into issues.</p>
<pre><code>from trl import PPOTrainer, PPOConfig
ppo_config = PPOConfig(
batch_size=4,
learning_rate=1e-5,
mini_batch_size=2,
use_cpu=True
)
ppo_trainer = PPOTrainer(
config=ppo_config,
model=model,
tokenizer=tokenizer,
train_dataset=dataset,
)
</code></pre>
<p>I keep getting all kinds of errors regarding the arguments, but the most common one is <code>TypeError: PPOTrainer.__init__() got an unexpected keyword argument 'config'</code></p>
<p>I am using trl==0.4.7.</p>
<p>Thanks.</p>
|
<python><large-language-model><huggingface>
|
2025-08-06 15:43:07
| 0
| 472
|
m0ss
|
79,727,484
| 13,392,257
|
How to obtain familysearch.com token without entering login and password
|
<p>I am using the following code to obtain a FamilySearch API token (according to the documentation <a href="https://developers.familysearch.org/main/docs/authentication" rel="nofollow noreferrer">https://developers.familysearch.org/main/docs/authentication</a> )</p>
<p>During authentication I have to manually enter my login and password on the FamilySearch page, then the site redirects me to a page with the token (my redirect URL).</p>
<p>The code is working fine. But is it possible to obtain the token without manually entering the login and password?</p>
<pre><code>from fastapi import FastAPI, Request
from fastapi.responses import RedirectResponse, HTMLResponse
from urllib.parse import urlencode
import requests
CLIENT_ID = 'MY_ID'
CLIENT_SECRET = None  # set this if the app has a client secret; it is checked further down
REDIRECT_URI = 'MY_URL'
AUTH_URL = 'https://identbeta.familysearch.org/cis-web/oauth2/v3/authorization'
TOKEN_URL = 'https://identbeta.familysearch.org/cis-web/oauth2/v3/token'
app = FastAPI()
@app.get("/authorize")
def authorize():
params = {
'response_type': 'code',
'client_id': CLIENT_ID,
'redirect_uri': REDIRECT_URI,
'scope': 'openid profile email',
'state': 'random_state_string',
'realm': '/wales' # Add realm parameter for FamilySearch
}
url = f"{AUTH_URL}?{urlencode(params)}"
return RedirectResponse(url)
@app.get("/oauth2/callback")
async def oauth_callback(code: str = None, error: str = None, state: str = None):
if error:
return HTMLResponse(f"""
<html>
<body>
<h1>OAuth Error</h1>
<p>Error: {error}</p>
<p>State: {state}</p>
<a href="/authorize">Try Again</a>
</body>
</html>
""")
if not code:
return HTMLResponse("No authorization code received")
# Exchange authorization code for access token
token_data = {
'grant_type': 'authorization_code',
'client_id': CLIENT_ID,
'redirect_uri': REDIRECT_URI,
'code': code
}
if CLIENT_SECRET:
token_data['client_secret'] = CLIENT_SECRET
try:
response = requests.post(TOKEN_URL, data=token_data)
response.raise_for_status()
token_info = response.json()
return HTMLResponse(f"""
<html>
<body>
<h1>OAuth Success!</h1>
<p>Authorization code: {code}</p>
<p>Access token: {token_info.get('access_token', 'Not provided')}</p>
<p>Token type: {token_info.get('token_type', 'Not provided')}</p>
<p>Expires in: {token_info.get('expires_in', 'Not provided')}</p>
<a href="/authorize">Authorize Again</a>
</body>
</html>
""")
except requests.exceptions.RequestException as e:
return HTMLResponse(f"""
<html>
<body>
<h1>Token Exchange Error</h1>
<p>Error: {str(e)}</p>
<a href="/authorize">Try Again</a>
</body>
</html>
""")
@app.get("/")
def root():
return {"message": "FamilySearch OAuth2 API", "endpoints": ["/authorize", "/oauth2/callback"]}
if __name__ == "__main__":
import uvicorn
uvicorn.run("main:app", host="0.0.0.0", port=80, reload=True)
</code></pre>
|
<python><oauth-2.0><familysearch-api>
|
2025-08-06 15:04:44
| 0
| 1,708
|
mascai
|
79,727,284
| 4,691,830
|
Restrict date selection from calendar to specific dates
|
<p>I'd like to let the user pick one out of a list of dates from my calendar but prevent her from selecting any other date.</p>
<p>Based on <a href="https://stackoverflow.com/a/60203462/4691830">this answer</a> I managed to restrict the date range that can be selected but I couldn't exclude days in between.</p>
<pre class="lang-py prettyprint-override"><code>import datetime as dt
import tkinter as tk
from tkinter import ttk
from tkcalendar import DateEntry
# fmt: off
dates = [
"2024-04-08", "2024-04-10", "2024-04-11", "2024-04-12",
"2024-04-15", "2024-04-16", "2024-04-17", "2024-04-18", "2024-04-19",
"2024-04-22",
"2024-05-21", "2024-05-22", "2024-05-23", "2024-05-24",
"2024-05-27", "2024-05-28", "2024-05-29", "2024-05-30", "2024-05-31",
"2024-06-03", "2024-06-04", "2024-06-05", "2024-06-07",
"2024-06-10", "2024-06-11", "2024-06-12", "2024-06-13", "2024-06-14",
]
# fmt: on
root = tk.Tk()
date_entry_var = tk.StringVar()
date_entry = DateEntry(
root,
textvariable=date_entry_var,
date_pattern="yyyy-mm-dd",
mindate=dt.date.fromisoformat(dates[0]),
maxdate=dt.date.fromisoformat(dates[-1]),
)
date_entry.pack()
root.mainloop()
</code></pre>
<p>Is there a way that makes it possible to select from a list of dates with a <strong>calendar widget</strong>?</p>
<p><code>tk</code> is required because I'm working with <code>tk</code> in the entire app, but answers do not need to use the <code>tkcalendar</code> module if there are alternatives.</p>
|
<python><tkinter><tkcalendar>
|
2025-08-06 12:34:56
| 2
| 4,145
|
Joooeey
|
79,727,218
| 240,976
|
simplest jinja2 template loading for one-file python script
|
<p>I'm using Python only for simple scripts, and every time I look into Python packaging there is a new standard or technology.</p>
<p>Now when I develop a small one-file tool and want to load a jinja2 template, I don't know how.</p>
<p>I could use <a href="https://jinja.palletsprojects.com/en/stable/api/#jinja2.FileSystemLoader" rel="nofollow noreferrer">FileSystemLoader</a> and somehow build a path relative to the currently executed python file. This works in my git checkout during development.</p>
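<p>For context, a minimal sketch of the FileSystemLoader variant I mean (the <code>templates</code> directory name and the template file name are just placeholders):</p>
<pre class="lang-py prettyprint-override"><code>from pathlib import Path

from jinja2 import Environment, FileSystemLoader

# Look for templates in a "templates" directory next to this script file
env = Environment(loader=FileSystemLoader(Path(__file__).parent / "templates"))
template = env.get_template("report.txt.j2")
print(template.render(name="world"))
</code></pre>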
<p>But what if I also want to package my script? Then I should probably use <a href="https://jinja.palletsprojects.com/en/stable/api/#jinja2.PackageLoader" rel="nofollow noreferrer">PackageLoader</a>. But is there some magic variable that gives me the package name of the current Python file?</p>
<p>Could anybody provide a snippet that just works and maybe does not even need a lot of boilerplate?</p>
|
<python><jinja2><python-packaging>
|
2025-08-06 11:38:12
| 1
| 2,941
|
Thomas Koch
|
79,727,190
| 7,475,838
|
adjust plotnine legend (swap order)
|
<p>I'm trying to recreate the following graph with <a href="https://plotnine.org/" rel="nofollow noreferrer">plotnine</a>:</p>
<p><a href="https://i.sstatic.net/AJxZTtZ8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AJxZTtZ8.png" alt="enter image description here" /></a></p>
<p>And... I'm almost there:</p>
<p><a href="https://i.sstatic.net/JpaM7Uy2.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JpaM7Uy2.jpg" alt="enter image description here" /></a></p>
<ul>
<li>I would like the legend order to be switched around</li>
</ul>
<p>My (working) code so far:</p>
<pre class="lang-py prettyprint-override"><code># imports
import polars as pl
import polars.selectors as cs
from plotnine import *
# create data
data = {
"Category":['0-1', '1-5', '5-10', '10-15', '15-20', '20-25', '25-30', '30-35', '35-40', '40-45', '45-50', '50-55', '55-60', '60-65', '65-70', '70-75', '75-80', '80-85', '85-90', '90-95', '95 en ouder'],
"Mannen 2040":[-540, -910, -1530, -1990, -2120, -2300, -2270, -2340, -2540, -2690, -3140, -3370, -3750, -4200, -5310, -7250, -8310, -8170, -6970, -4610, -1480],
"Vrouwen 2040":[460, 670, 1050, 1740, 2030, 2450, 3040, 3280, 3370, 3420, 3990, 4390, 4510, 4880, 6170, 8050, 9530, 11080, 11470, 8900, 3350],
"Mannen 2015":[-420, -660, -980, -1310, -1380, -1550, -1490, -1410, -1460, -1810, -2360, -2690, -2910, -3030, -3430, -3050, -2920, -2540, -1800, -840, -210],
"Vrouwen 2015":[360, 500, 710, 1180, 1350, 1720, 2070, 2060, 1930, 2060, 2570, 2850, 2870, 2840, 3300, 3070, 3540, 4240, 4400, 2950, 1020]
}
df = pl.DataFrame(data)
category = data["Category"]
df = df.unpivot(cs.numeric(), index="Category")
# plot data
(
ggplot() +
geom_bar(
data=df.filter(pl.col('variable').is_in(['Mannen 2040', 'Vrouwen 2040'])),
mapping=aes(x = 'Category', y = 'value', fill = 'variable'),
stat='identity'
) +
scale_fill_manual(values = ['#007BC7', '#CA005D']) +
geom_step(
data=df.filter(pl.col('variable').is_in(['Mannen 2015', 'Vrouwen 2015'])),
mapping=aes(x = 'Category', y = 'value', color = 'variable', group = 'variable'),
stat='identity',
size = 1.5,
direction = "mid"
) +
scale_color_manual(values = ['#0C163F', '#FFB612']) +
scale_x_discrete(limits = category) +
scale_y_continuous(
name = "Euro's (x miljoen)",
limits = [-10_000, 12_500],
breaks = range(-10_000, 12_500, 2_000),
labels = list(map(abs, range(-10_000, 12_500, 2_000)))
) +
coord_flip() +
theme(
figure_size = (8, 6),
plot_title = element_text(color = "#01689B", size = 16, hjust = 0.5),
legend_position = "bottom",
legend_title = element_blank(),
legend_key = element_blank(),
panel_background = element_rect(fill = "white"),
panel_grid_major_x = element_line(colour = "black", size = 0.25),
axis_ticks_length = 5,
axis_ticks_y = element_line(color = "black", size = 1.5),
axis_line_y = element_line(color = "black", size = 1.5, linetype = "solid"),
axis_title_y = element_blank(),
axis_title_x = element_text(colour = "#007BC7", size = 10, hjust = 1)
) +
ggtitle("De uitgaven aan zorg voor vrouwen zijn hoger dan\nvoor mannen, en dit verschil neemt toe in de toekomst")
)
</code></pre>
<p>Any suggestions on how to adjust the code to swap the legends?</p>
|
<python><ggplot2><python-polars><plotnine>
|
2025-08-06 11:07:39
| 2
| 4,919
|
René
|
79,726,988
| 3,447,369
|
FastAPI endpoint stream LLM output word for word
|
<p>I have a FastAPI endpoint (<code>/generateStreamer</code>) that generates responses from an LLM model. I want to stream the output so users can see the text as it’s being generated, rather than waiting for the full response. Currently, I'm using TextIteratorStreamer from the transformers library and FastAPI's StreamingResponse, which works, but when I test with curl, the response arrives sentence by sentence instead of word by word. I've read several SO topics on similar issues, but none offer me a solution to this problem.</p>
<p>I would appreciate any advice on how to stream the response more granularly, ideally word by word.</p>
<pre><code>@router.post(
"/generateStreamer"
)
@api_version(1)
async def generateStreamer(features: InputFeatures, request: Request):
print("GenerateStreamer endpoint was called, generating response...")
tokenizer = request.app.state.tokenizer
model = request.app.state.model
messages = [
{"role": "system", "content": features.prompt.strip()},
{"role": "user", "content": features.text.strip()}
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
def generate():
torch.set_grad_enabled(False)
model.generate(
inputs['input_ids'],
streamer=streamer,
max_new_tokens=512,
num_return_sequences=1,
do_sample=True,
temperature=0.7,
repetition_penalty=1,
pad_token_id=tokenizer.pad_token_id or tokenizer.eos_token_id,
top_p=1,
top_k=50
)
thread = Thread(target=generate)
thread.start()
return StreamingResponse(streamer, media_type="text/plain")
</code></pre>
|
<python><fastapi><huggingface-transformers><large-language-model>
|
2025-08-06 08:05:27
| 3
| 1,490
|
sander
|
79,726,980
| 11,173,364
|
undefined symbol: PyObject_SelfIter while using CPython C API
|
<p>I want to use the CPython C API, but I keep getting this error:</p>
<pre><code>[component_container-1] Traceback (most recent call last):
[component_container-1] File "/usr/local/lib/python3.10/dist-packages/numpy/core/__init__.py", line 24, in <module>
[component_container-1] from . import multiarray
[component_container-1] File "/usr/local/lib/python3.10/dist-packages/numpy/core/multiarray.py", line 10, in <module>
[component_container-1] from . import overrides
[component_container-1] File "/usr/local/lib/python3.10/dist-packages/numpy/core/overrides.py", line 8, in <module>
[component_container-1] from numpy.core._multiarray_umath import (
[component_container-1] ImportError: /usr/local/lib/python3.10/dist-packages/numpy/core/_multiarray_umath.cpython-310-x86_64-linux-gnu.so: undefined symbol: PyObject_SelfIter
[component_container-1]
[component_container-1] During handling of the above exception, another exception occurred:
[component_container-1]
[component_container-1] Traceback (most recent call last):
[component_container-1] File "/usr/local/lib/python3.10/dist-packages/numpy/__init__.py", line 130, in <module>
[component_container-1] from numpy.__config__ import show as show_config
[component_container-1] File "/usr/local/lib/python3.10/dist-packages/numpy/__config__.py", line 4, in <module>
[component_container-1] from numpy.core._multiarray_umath import (
[component_container-1] File "/usr/local/lib/python3.10/dist-packages/numpy/core/__init__.py", line 50, in <module>
[component_container-1] raise ImportError(msg)
[component_container-1] ImportError:
</code></pre>
<p>my current cmakelist file</p>
<pre class="lang-none prettyprint-override"><code>set(My_Python_Version "3.10")
find_package(Python ${My_Python_Version} EXACT COMPONENTS Interpreter Development NumPy REQUIRED)
message(STATUS "Using Python: ${Python_EXECUTABLE}")
message(STATUS "Python include: ${Python_INCLUDE_DIRS}")
message(STATUS "Python numpy include: ${Python_NumPy_INCLUDE_DIRS}")
# execute_process(COMMAND ${Python_EXECUTABLE} -c "import numpy;print(numpy.get_include())" OUTPUT_VARIABLE Python_NumPy_INCLUDE_DIR OUTPUT_STRIP_TRAILING_WHITESPACE)
# execute_process(COMMAND which python3 OUTPUT_VARIABLE Python_EXECUTABLE OUTPUT_STRIP_TRAILING_WHITESPACE)
# execute_process(COMMAND ${Python_EXECUTABLE} -c "import sysconfig; print(sysconfig.get_paths()['include'])" OUTPUT_VARIABLE Python_INCLUDE_DIR OUTPUT_STRIP_TRAILING_WHITESPACE)
# execute_process(COMMAND ${Python_EXECUTABLE} -c "import sysconfig; print(sysconfig.get_config_var('LIBDIR'))" OUTPUT_VARIABLE Python_LIBRARY_DIR OUTPUT_STRIP_TRAILING_WHITESPACE)
# set(Python_LIBRARY "${Python_LIBRARY_DIR}/libpython3.10.so")
# include_directories(/usr/include/python3.10)
...
target_link_libraries(mynode
${OpenCV_LIBS}
${cpp_typesupport_target}
${ONNX_LIBS}
${TORCH_LIBS}
${Python_LIBRARIES}
common_tool_lib
)
target_include_directories(mynode PRIVATE
/usr/local/onnxruntime/include
/usr/local/libtorch/include
/usr/local/libtorch/include/torch/csrc/api/include
${Python_INCLUDE_DIRS}
${Python_NumPy_INCLUDE_DIRS}
)
</code></pre>
<p>I've tried various methods and even asked several AI tools, but I still cannot solve this, and it seems I have to give up on calling Python code from my C++ environment. Any ideas?</p>
<p>PS:</p>
<pre class="lang-cpp prettyprint-override"><code>MyNode::MyNode(const rclcpp::NodeOptions& options)
: BaseNode("perception_node", options), time_stamp_(0), tracking_module_(nullptr), tracking_function_(nullptr) {
// Initialize Python interpreter
if (!Py_IsInitialized()) {
Py_Initialize();
// Initialize numpy by importing it
PyRun_SimpleString("import numpy");
}
// Import tracking module
PyObject* sys_path = PySys_GetObject("path");
PyObject* path_str = PyUnicode_FromString("/workspaces/perception/LSTM_Tracking/Tracking");
PyList_Append(sys_path, path_str);
Py_DECREF(path_str);
tracking_module_ = PyImport_ImportModule("Track_infer");
if (!tracking_module_) {
RCLCPP_ERROR(this->get_logger(), "Failed to import tracking module");
PyErr_Print();
} else {
tracking_function_ = PyObject_GetAttrString(tracking_module_, "return_tracking_id");
if (!tracking_function_) {
RCLCPP_ERROR(this->get_logger(), "Failed to get tracking function");
PyErr_Print();
}
}
</code></pre>
<p>This is mynode's constructor; I have trouble even running it.</p>
|
<python><c++><cmake><cpython><pyobject>
|
2025-08-06 07:49:11
| 0
| 769
|
user900476
|
79,726,783
| 19,459,262
|
How to modify the colors of individual input_slider?
|
<p>I have sliders in my app and I want to change the colors individually. How can I do this? I'm using custom CSS to hack my way around for checkboxes and radio buttons, but I haven't been able to do this for sliders.</p>
<p>Sliders:</p>
<pre><code>ui.input_slider("red", "red", value=0, min=0, max=50)
ui.input_slider("green", "green", value=0, min=0, max=50)
ui.input_slider("blue", "blue", value=0, min=0, max=50)
</code></pre>
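<p>For illustration, a sketch of the direction I've been trying: wrap each slider in its own container and scope the CSS to that container. The <code>.irs-*</code> class names are an assumption based on the ionRangeSlider widget that Shiny sliders appear to use, so they may need adjusting, and the wrapper ids are made up:</p>
<pre class="lang-py prettyprint-override"><code>from shiny import ui

# Scope the CSS to per-slider wrapper divs so each slider can get its own colour.
# The .irs-* selectors are an assumption about the underlying slider widget's markup.
app_ui = ui.page_fluid(
    ui.tags.style(
        """
        #red-wrap .irs-bar, #red-wrap .irs-handle { background: red; border-color: red; }
        #green-wrap .irs-bar, #green-wrap .irs-handle { background: green; border-color: green; }
        #blue-wrap .irs-bar, #blue-wrap .irs-handle { background: blue; border-color: blue; }
        """
    ),
    ui.div(ui.input_slider("red", "red", value=0, min=0, max=50), id="red-wrap"),
    ui.div(ui.input_slider("green", "green", value=0, min=0, max=50), id="green-wrap"),
    ui.div(ui.input_slider("blue", "blue", value=0, min=0, max=50), id="blue-wrap"),
)
</code></pre>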
|
<python><css><py-shiny>
|
2025-08-06 03:31:14
| 1
| 784
|
Redz
|
79,726,701
| 2,552,290
|
user-defined lazy mpmath constants
|
<p>Can I define my own lazy mpmath constants, just like <a href="https://mpmath.org/doc/0.19/functions/constants.html" rel="nofollow noreferrer">mpmath.pi and the others documented here</a>?</p>
<p>In other words, can I define a constant whose value is an mpf or mpc computed
(or retrieved from memo cache) to the current precision at the time it is used
(which is <em>not</em> necessarily the precision at the time when it is defined)?</p>
<p>E.g. I would like this behavior:</p>
<pre><code>>>> from mpmath import *
>>> my_lazy_one_third = MyLazyMpMathConstantSomehow(lambda: mpf(1)/3)
>>> mp.dps = 15
>>> +pi
mpf('3.1415926535897931')
>>> +my_lazy_one_third
mpf('0.33333333333333331')
>>> mp.dps = 40
>>> +pi
mpf('3.141592653589793238462643383279502884197169')
>>> +my_lazy_one_third
mpf('0.3333333333333333333333333333333333333333352')
</code></pre>
<p>Does mpmath provide a way to do this?</p>
<p>If not, what is the cleanest way to accomplish this using other python constructs?</p>
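<p>For illustration, a minimal sketch of one plain-Python construct I've considered (it only supports the unary <code>+</code> usage pattern shown above; arithmetic on the constant itself would need more work):</p>
<pre class="lang-py prettyprint-override"><code>from mpmath import mp, mpf

class LazyConstant:
    """Re-evaluate the callable at whatever precision is active when it is used."""
    def __init__(self, func):
        self._func = func

    def __pos__(self):
        # unary + recomputes (and rounds) at the current working precision
        return +self._func()

my_lazy_one_third = LazyConstant(lambda: mpf(1) / 3)
mp.dps = 15
print(+my_lazy_one_third)   # ~15 significant digits
mp.dps = 40
print(+my_lazy_one_third)   # recomputed to ~40 significant digits
</code></pre>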
|
<python><lazy-evaluation><mpmath>
|
2025-08-06 00:35:52
| 0
| 5,611
|
Don Hatch
|
79,726,698
| 4,539,999
|
Extract White Input Boxes from pdf
|
<p>Some of the Adobe XFA form fields are missing when the <code>/PageItemUIDToLocationDataMap</code> is extracted from some pdf files, as shown in the image below, where only the fields identified with black dots on pages 1 and 3 are shown (click the image to open the pdf). How can the missing XFA form fields be extracted without using commercial software?</p>
<p><a href="https://drive.google.com/file/d/1w5G9AahbvsEvaGuyTeiJ7Y6grIoFAV-k/view?usp=drive_link" rel="nofollow noreferrer"><img src="https://i.sstatic.net/f5iP7jY6.png" alt="Sample pdf document output" /></a></p>
<p>The following code was used to extract the <a href="https://drive.google.com/file/d/1_bAGYU1k0vcHVUFdpY_Funu1YEXVZhVO/view?usp=drive_link" rel="nofollow noreferrer"><code>input.pdf</code></a> datamaps, save them in a CSV file, and add the points to <a href="https://drive.google.com/file/d/1w5G9AahbvsEvaGuyTeiJ7Y6grIoFAV-k/view?usp=drive_link" rel="nofollow noreferrer"><code>output.pdf</code></a>. Disable <code>sort_and_filter</code> to see the original data in the <code>.csv</code> file:</p>
<pre class="lang-py prettyprint-override"><code>import pikepdf
import fitz # PyMuPDF
import csv
INPUT_PDF = "input.pdf"
OUTPUT_CSV = "points.csv"
OUTPUT_PDF = "output.pdf"
TARGET_KEY = "/PageItemUIDToLocationDataMap"
def extract_datamap_points(pdf_path, target_key=TARGET_KEY):
out_rows = []
with pikepdf.open(pdf_path) as pdf:
for i, page in enumerate(pdf.pages):
piece_info = page.get('/PieceInfo', None)
if piece_info and '/InDesign' in piece_info:
indesign = piece_info['/InDesign']
if target_key in indesign:
for k, v in indesign[target_key].items():
try:
id_ = int(str(k).lstrip('/'))
type_val = float(v[2])
coords = [float(val) for val in list(v)[3:7]]
out_rows.append([i+1, id_, type_val] + coords)
except Exception as e:
print(f"Error parsing {k}:{v} ({e})")
return out_rows
def get_pdf_page_count(pdf_path):
with pikepdf.open(pdf_path) as pdf:
return len(pdf.pages)
def process_rows(rows, max_pdf_pages):
Y_TRANSFORM_BASE = 420.945 # Local constant hack for y-coordinate transform
# Datamaps are read sequentially so hack to pages
total_pages = get_pdf_page_count(INPUT_PDF)
hack_page = lambda page: 2 if (page >= max_pdf_pages) else (page + 1 if page > 1 else page)
processed_rows = []
for row in rows:
page, id_, type_val, x1, y1, x2, y2 = row
hacked_page = hack_page(page)
new_y1 = round(Y_TRANSFORM_BASE - y1, 3)
new_y2 = round(Y_TRANSFORM_BASE - y2, 3)
new_x1 = round(x1, 3)
new_x2 = round(x2, 3)
h = round(abs(new_y2 - new_y1), 1)
processed_rows.append([hacked_page, id_, type_val, new_x1, new_y1, new_x2, new_y2, h])
return processed_rows
def sort_and_filter(rows):
# Sort by page ascending, -y2 descending, x1 ascending, id ascending
rows_sorted = sorted(rows, key=lambda r: (r[0], -r[6], r[3], r[1]))
# Filter rows
filtered = []
for row in rows_sorted:
if (row[2] == 4 # type
and row[7] == 17): # height
filtered.append(row)
return filtered
def write_csv(csv_filename, rows):
with open(csv_filename, 'w', newline='', encoding='utf-8') as f:
writer = csv.writer(f)
writer.writerow(['page', 'id', 'type', 'x1', 'y1', 'x2', 'y2', 'h'])
writer.writerows(rows)
def mark_points_on_pdf(input_pdf, output_pdf, rows):
doc = fitz.open(input_pdf)
for row in rows:
page_num = int(row[0])
cx = row[3]
cy = row[6]
page = doc[page_num - 1]
pymupdf_y = page.rect.height - cy
page.draw_circle((cx, pymupdf_y), radius=2, color=(0, 0, 0), fill=(0, 0, 0))
doc.save(output_pdf)
if __name__ == "__main__":
    points = extract_datamap_points(INPUT_PDF)
    total_pages = get_pdf_page_count(INPUT_PDF)  # needed by process_rows below
    processed_points = process_rows(points, total_pages)
filtered_points = sort_and_filter(processed_points)
write_csv(OUTPUT_CSV, filtered_points)
mark_points_on_pdf(INPUT_PDF, OUTPUT_PDF, filtered_points)
print(f"Done. Points: {len(filtered_points)}; Wrote {OUTPUT_CSV} and {OUTPUT_PDF}")
</code></pre>
<hr />
<p><a href="https://www.pdflabs.com/docs/pdftk-cli-examples" rel="nofollow noreferrer">PDFtk</a>:</p>
<blockquote>
<p>Uncompress PDF page streams for editing the PDF in a text editor (e.g., vim, emacs)</p>
<ul>
<li>pdftk doc.pdf output doc.unc.pdf uncompress</li>
</ul>
</blockquote>
<p>Gives: <a href="https://drive.google.com/file/d/1nect0j863-pUgQVE1T3dpjwg0sTi39tQ/view?usp=sharing" rel="nofollow noreferrer">input_uncompressed.pdf</a></p>
<hr />
<p>Given the comments, this code extracts text blocks and adds points to the pdf, as shown below the code:</p>
<pre class="lang-py prettyprint-override"><code>import fitz # PyMuPDF
INPUT_PDF = "input.pdf"
OUTPUT_PDF = "output_bb.pdf"
doc = fitz.open(INPUT_PDF)
for page_num in range(len(doc)):
page = doc[page_num]
blocks = page.get_text("blocks")
for block in blocks:
x0, y0, x1, y1, text, block_no = block[:6]
# Mark the lower-left corner (x0, y1)
cx, cy = x0, y1
shape = page.new_shape()
# Draw a blue filled circle (RGB: (0, 0, 1)), radius 2
shape.draw_circle((cx, cy), 2)
shape.finish(color=(0, 0, 1), fill=(0, 0, 1))
shape.commit()
# Label with block number (remove if not wanted)
page.insert_text((cx + 5, cy - 5), str(block_no), fontname="helv", fontsize=8, color=(0, 0, 1))
doc.save(OUTPUT_PDF)
print(f"Done. Saved {OUTPUT_PDF}")
</code></pre>
<p><a href="https://drive.google.com/file/d/1rBsZAy-IhmCkWrsORPRZ1o39nzeQorQO/view?usp=drive_link" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oJweakCA.png" alt="Sample pdf document output for text Box" /></a></p>
<p>(Click image to open 20 page pdf).</p>
|
<python><pdf><pymupdf><pikepdf>
|
2025-08-06 00:29:01
| 1
| 1,435
|
flywire
|
79,726,627
| 2,471,211
|
Lambda container - Pyarrow and numpy
|
<p>I am running into difficulties related to this issue: <a href="https://github.com/aws/aws-cdk/issues/34685" rel="nofollow noreferrer">(aws-lambda-python-alpha): Failed to install numpy 2.3.0 with Python 3.11 or lower</a></p>
<p>My Dockerfile:</p>
<pre class="lang-none prettyprint-override"><code>FROM public.ecr.aws/lambda/python:3.11
# Install
RUN pip install 'numpy<2.3.0'
RUN pip install 'pyarrow[s3]'
</code></pre>
<p>The pyarrow package still fails on</p>
<pre class="lang-none prettyprint-override"><code>Collecting numpy>=1.25
Downloading numpy-2.3.2.tar.gz (20.5 MB)
...
</code></pre>
<p>I wanted to force pyarrow to use numpy==2.2.1, but I don't see how to do that from here. Do I need to lower the version of pyarrow?</p>
|
<python><numpy><aws-lambda><pyarrow>
|
2025-08-05 21:37:23
| 1
| 485
|
Flo
|
79,726,616
| 1,604,008
|
cant find element with Selenium
|
<p><a href="https://i.sstatic.net/tCKqgUFy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tCKqgUFy.png" alt="screen shot" /></a></p>
<p>I'm getting the following error:</p>
<pre><code>An error occurred: Message:
Stacktrace:
RemoteError@chrome://remote/content/shared/RemoteError.sys.mjs:8:8
WebDriverError@chrome://remote/content/shared/webdriver/Errors.sys.mjs:199:5
NoSuchElementError@chrome://remote/content/shared/webdriver/Errors.sys.mjs:552:5
dom.find/</<@chrome://remote/content/shared/DOM.sys.mjs:136:16
</code></pre>
<p>I'm new to Selenium. My first question is about the references to Chrome: I'm using Firefox, so is this error message typical?</p>
<p>My real question is: how do I retrieve this button?</p>
<p>I've tried many ways to select the element, e.g.:</p>
<pre class="lang-py prettyprint-override"><code>from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
driver = webdriver.Firefox()
link = 'https://ennexos.sunnyportal.com/login?next=%2Fdashboard%2Finitialize'
driver.get(link)
try:
# Wait for the login button to be present
login_button = WebDriverWait(driver, 10).until(
EC.element_to_be_clickable((By.CSS_SELECTOR, "[data-testid='button-primary']"))
)
print("found login button!")
except Exception as e:
print(f"An error occurred: {e}")
finally:
driver.quit()
</code></pre>
|
<python><selenium-webdriver>
|
2025-08-05 21:06:48
| 2
| 1,159
|
user1604008
|
79,726,515
| 25,874,132
|
What's wrong with this Python assignment on signal processing - mostly Fourier series and transform
|
<p>In this assignment I created the basic rect signal <code>a[n]</code> such that over the domain [-1000, 1000] it is 1 only for |n|<100, meaning it's an array (a complex one with zero in the imaginary part) that looks like [0, 0, ... 0, 0, 1, 1, ... 1, 1, 0, ... 0, 0, 0], where there are exactly 199 ones (99 on the positive side, 99 on the negative side, and one at 0). It looks like this:</p>
<p><a href="https://i.sstatic.net/oTLdzITA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oTLdzITA.png" alt="" /></a></p>
<p>I've also created the following FT function, and a threshold function to clean floating-point errors from the results:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import cmath
import matplotlib.pyplot as plt
D=1000
j = complex(0, 1)
pi = np.pi
N = 2 * D + 1
a=np.zeros(2*D+1)
for i in range(-99,100):
a[i+D] = 1
threshold = 1e-10
def clean_complex_array(arr, tol=threshold):
real = np.real(arr)
imag = np.imag(arr)
# Snap near-zero components
real[np.abs(real) < tol] = 0
imag[np.abs(imag) < tol] = 0
# Snap components whose fractional part is close to 0 or 1
real_frac = real - np.round(real)
imag_frac = imag - np.round(imag)
real[np.abs(real_frac) < tol] = np.round(real[np.abs(real_frac) < tol])
imag[np.abs(imag_frac) < tol] = np.round(imag[np.abs(imag_frac) < tol])
return real + 1j * imag
def fourier_series_transform(data, pos_range, inverse=False):
full_range = 2 * pos_range + 1
# Allocate result array
result = np.zeros(full_range, dtype=complex)
if inverse:
# Inverse transform: reconstruct time-domain signal from bk
for n in range(-pos_range, pos_range+ 1):
for k in range(-pos_range, pos_range+ 1):
result[n + pos_range] += data[k + pos_range] * cmath.exp(j * 2 * pi * k * n / full_range)
else:
# Forward transform: compute bk from b[n]
for k in range(-pos_range, pos_range+ 1):
for n in range(-pos_range, pos_range+ 1):
result[k + pos_range] += (1 / full_range) * data[n + pos_range] * cmath.exp(-j * 2 * pi * k * n / full_range)
return result
ak = fourier_series_transform(a, D)
ak = clean_complex_array(ak)
</code></pre>
<p>a_k looks like this (a real sinc signal, which is to be expected):</p>
<p><a href="https://i.sstatic.net/BHpwqpzu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BHpwqpzu.png" alt="" /></a></p>
<p>I've checked that the threshold value is good: floating-point errors start at around 1e-14, and there are no significant contributions to the signal below 1e-8.</p>
<p>Now for the part I had a problem with: we're asked to create the frequency signal <code>f_k</code> such that <code>f_k</code> is <code>a_k</code> padded with 4 zeros after each value and multiplied by 0.2, meaning it will look like 0.2*[a_0, 0, 0, 0, 0, a_1, 0, 0, 0, 0, a_2, 0, 0, 0, 0, a_3, ...]. We want to show that doing so corresponds to a stretching of the signal in the time domain.</p>
<p>When I did the math it checks out: you get 5 copies of the original signal over a range of [-5002, 5002] (giving 10005 samples, which is exactly 5*2001, the original number of samples of the signal). The following is the code for this section, to set <code>f_k</code> and <code>f[n]</code>:</p>
<pre class="lang-py prettyprint-override"><code>stretch_factor = 5
f_k = np.zeros(stretch_factor * N, dtype=complex)
f_k[::stretch_factor] = 0.2 * ak # scale to keep energy in check
# New domain size after stretching
D_new = (len(f_k) - 1) // 2
# Inverse transform to get f[n]
f_n = fourier_series_transform(f_k, D_new, inverse=True)
f_n = clean_complex_array(f_n)
plt.figure()
plt.plot(np.arange(-D_new, D_new + 1), np.real(f_n), label='Real part')
plt.plot(np.arange(-D_new, D_new + 1), np.imag(f_n), label='Imaginary part', color='red')
plt.grid(True)
plt.title("Compressed signal $f[n]$ after frequency stretching")
plt.xlabel("n")
plt.ylabel("Amplitude")
plt.legend()
</code></pre>
<p>and this is what I get:</p>
<p><a href="https://i.sstatic.net/icAvKcj8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/icAvKcj8.png" alt="" /></a></p>
<p>This is wrong: I should be getting a completely real signal, and as I said, it should be 5 identical copies spaced about 2000 samples apart, something like this:</p>
<p><a href="https://i.sstatic.net/Byga0hzu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Byga0hzu.png" alt="" /></a></p>
<p>Why does it do that? I even tried to use AI to explain why it happens and how to fix it and it couldn't help with either.</p>
|
<python><numpy><signal-processing><fft>
|
2025-08-05 18:31:08
| 2
| 314
|
Nate3384
|
79,726,327
| 850,781
|
How to reclaim horizontal space made available by removing labels and ticks?
|
<p>I have a row of subplots which start with a histogram and the rest are some <a href="https://www.statsmodels.org/stable/generated/statsmodels.graphics.gofplots.qqplot.html" rel="nofollow noreferrer"><code>qqplot</code></a>s:</p>
<p><a href="https://i.sstatic.net/kj4U3hb8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kj4U3hb8.png" alt="qqplots with extra horizontal space" /></a></p>
<p>I removed the y-axis and tick labels using</p>
<pre><code> for ax in axqq[1:]:
ax.set_ylabel(None)
ax.tick_params(labelleft=False)
# does not help ax.yaxis.set_visible(False)
ax.sharey(axqq[0])
fig.tight_layout()
</code></pre>
<p><strong>I would like to remove the horizontal whitespace between the QQ plots.</strong></p>
<p>Since my subplots are not homogeneous (the 1st subplot is a histogram), <a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.figure.Figure.subplots_adjust.html" rel="nofollow noreferrer"><code>fig.subplots_adjust</code></a> does not help.</p>
|
<python><matplotlib><whitespace><subplot>
|
2025-08-05 15:22:19
| 1
| 60,468
|
sds
|
79,726,242
| 25,413,271
|
python gmsh: build 3D mesh from solid-split stl surface for openFOAM
|
<p>I have a simple task: I have a closed STL surface, and the whole surface is split into solids. I need to use the Python gmsh package to build a 3D mesh (tetra or hexa) with named selections (let's call them that) for BC patches according to the solids from the STL. I need this for an OpenFOAM calculation. I am very new to gmsh.</p>
<p>Are there any ways to do this? I tried this:</p>
<pre><code>gmsh.initialize()
input_file = r"geometry\geom_tut1\geom.stl"
gmsh.merge(input_file)
gmsh.model.mesh.createGeometry()
gmsh.model.geo.removeAllDuplicates()
loop = gmsh.model.geo.addSurfaceLoop([e[1] for e in gmsh.model.getEntities(2)])
gmsh.model.geo.addVolume([loop])
gmsh.model.geo.synchronize()
gmsh.option.setNumber("Mesh.Algorithm", 5)
gmsh.option.setNumber("Mesh.Algorithm3D", 10)
gmsh.option.setNumber("Mesh.MeshSizeMin", 0.0001)
gmsh.option.setNumber("Mesh.MeshSizeMax", 0.001)
gmsh.model.mesh.generate(3)
</code></pre>
<p>I am not sure about using <code>gmsh.model.mesh.createTopology()</code> because I don't know exactly how it works. But I don't want to use <code>classifySurfaces</code> because it classifies surfaces by angle.</p>
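<p>For illustration, a sketch of the physical-group step I've been experimenting with after the snippet above (assuming the surfaces from the STL survive as separate entities; the names here are made up), since converters such as gmshToFoam generally need physical groups to create named boundary patches:</p>
<pre class="lang-py prettyprint-override"><code>import gmsh  # continues after gmsh.model.mesh.generate(3) in the script above

# Tag each surface entity and the volume as physical groups so they show up
# as named patches/regions in the exported mesh.
for dim, tag in gmsh.model.getEntities(2):
    group = gmsh.model.addPhysicalGroup(dim, [tag])
    gmsh.model.setPhysicalName(dim, group, f"patch_{tag}")  # made-up naming scheme

volumes = [tag for _, tag in gmsh.model.getEntities(3)]
vol_group = gmsh.model.addPhysicalGroup(3, volumes)
gmsh.model.setPhysicalName(3, vol_group, "internal")

gmsh.write("mesh.msh")
gmsh.finalize()
</code></pre>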
|
<python><stl><gmsh>
|
2025-08-05 14:10:15
| 0
| 439
|
IzaeDA
|
79,726,067
| 11,160,879
|
Convert an integer to IBM PIC S9(09) COMP. type in python
|
<p>I have the following three values in a CSV file:</p>
<pre><code>1683199814
2087175640
1348771152
</code></pre>
<p>I need to write them into a flat file using a Python program, which will then be loaded into mainframe DB2 using a COBOL program.</p>
<p>The copybook says that it is of type <code>PIC S9(09) COMP</code>. This will be loaded into a DB2 table of type INTEGER of DB2 length 4.</p>
<p>Below is the python code logic that I used to convert this:</p>
<pre><code>record = bytearray()
record.extend(b'1')
member_id= int(memb_num)
record.extend(struct.pack('>i', member_id))
record.extend(pad_right(pr_cat, 3))
record.extend(pad_right(cm_ts, 26))
record.extend(pad_right(cr_id, 8))
record.extend(b' ' * 38)
return record
</code></pre>
<p>Here <code>member_id</code> holds the values from the CSV file over which I am iterating. I then write this record out into the file, which they load into the DB2 table. However, after loading, the mainframe engineer showed me that the values are completely different and sometimes negative. He mentioned that the mainframe uses big endian for that value. The values show up differently, as below:</p>
<pre><code>-2065547322
1334273920
-679209769
</code></pre>
<p>I don't know what I am doing wrong!</p>
<p>EDIT:</p>
<p>Below is how the copybook looks:</p>
<pre><code>01 CCS001-DETAIL-RECORD.
05 CCS001-REC-TYPE PIC X(01). Should be '1'
05 CCS001-MEMB-NUM PIC S9(09) COMP.
05 CCS001-PR-CAT PIC X(03).
05 CCS001-CM-TS PIC X(26).
05 CCS001-CR-ID PIC X(08).
05 FILLER PIC X(38).
</code></pre>
<p>I got a sample file that COBOL used and checked the hex value in it: I got <code>07C2AEC2</code>. But the actual integer value <code>800006829</code> has the hex representation <code>2faf22ad</code>. Does this have something to do with how my Windows laptop (x64) treats endianness in my Python IDE?</p>
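<p>For what it's worth, a minimal sketch for checking what bytes <code>struct.pack</code> actually produces; with an explicit <code>'>'</code> format the result is big-endian regardless of the host platform:</p>
<pre class="lang-py prettyprint-override"><code>import struct

# '>i' always packs as a big-endian signed 32-bit integer, independent of the
# endianness of the machine running Python.
for value in (1683199814, 2087175640, 1348771152, 800006829):
    print(value, struct.pack('>i', value).hex())
# 800006829 packs to '2faf22ad' on any platform
</code></pre>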
|
<python><db2><endianness><cobol>
|
2025-08-05 11:51:40
| 0
| 323
|
dijeah
|
79,726,058
| 13,933,721
|
pydantic model validator raise ValueError to field
|
<p>I am trying to validate passwords so that if they don't match, an error is returned. But I want to assign the error to a specific field.</p>
<pre><code>class RequestFile(
BaseModel
):
password: str = Field(..., min_length=8, max_length=128)
confirm_password: str = Field(..., min_length=8, max_length=128)
@field_validator("password")
def password_validate(cls, v: str) -> str:
if not v.strip():
raise ValueError("Password required")
return v
@field_validator("confirm_password")
def confirm_password_validate(cls, v: str) -> str:
if not v.strip():
raise ValueError("Confirm Password Required")
return v
@model_validator(mode="before")
def check_passwords_match(cls, values):
password = values.get("password")
confirm_password = values.get("confirm_password")
if password != confirm_password:
raise ValueError("Passwords does not match")
return values
</code></pre>
<p>In the code above, when either password or confirm_password fails validation (character length), the ValueError is attached to the password or confirm_password field respectively. However, when they don't match, the ValueError is returned but not attached to any field. How can I push the ValueError onto either password or confirm_password? Preferably confirm_password.</p>
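<p>For illustration, a minimal sketch of one direction (assuming pydantic v2, where <code>ValidationInfo.data</code> exposes the fields that have already been validated) that moves the comparison into a field validator on confirm_password so the error is reported on that field:</p>
<pre class="lang-py prettyprint-override"><code>from pydantic import BaseModel, Field, ValidationInfo, field_validator

class RequestFileSketch(BaseModel):
    password: str = Field(..., min_length=8, max_length=128)
    confirm_password: str = Field(..., min_length=8, max_length=128)

    @field_validator("confirm_password")
    @classmethod
    def passwords_match(cls, v: str, info: ValidationInfo) -> str:
        # "password" appears in info.data only if it is declared earlier in the model
        # and passed its own validation; the mismatch error then lands on confirm_password.
        if "password" in info.data and v != info.data["password"]:
            raise ValueError("Passwords do not match")
        return v
</code></pre>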
|
<python><pydantic><pydantic-v2>
|
2025-08-05 11:46:46
| 2
| 1,047
|
Mr. Kenneth
|