| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
79,715,602
| 11,415,809
|
When using scipy.optimize.curve_fit what is the optimal formulation of the function being fitted?
|
<p>I noticed that the formulation matters when trying to fit a non-linear equation of the form <code>y = a + b * x ** c</code> and I wonder which formulation <strong>in general</strong> results in the best fit?</p>
<h2>Formulation 1</h2>
<ul>
<li><code>y = a + b * x ** c</code></li>
<li>needs more than default maximum number of function evaluations (<code>maxfev</code>)</li>
<li>scaling data influences result</li>
</ul>
<h2>Formulation 2</h2>
<ul>
<li><code>y = d + e * (x ** f - 1) / f</code> (Box-Cox transformation)</li>
<li>equivalent to Formulation 1, where <code>a=d-e/f</code>, <code>b=e/f</code> and <code>c=f</code> (see the check below the list)</li>
<li>finishes with default <code>maxfev</code> setting</li>
<li>results in a better fit</li>
<li>scaling data has no effect</li>
</ul>
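<p>For reference, the equivalence follows from expanding Formulation 2: <code>d + e * (x ** f - 1) / f = (d - e / f) + (e / f) * x ** f</code>. A minimal sketch to check that mapping numerically (the parameter values below are made up for illustration, not fitted ones):</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

# Hypothetical Formulation 2 parameters (d, e, f), chosen only for the check
d, e, f = 250.0, -30.0, 0.5
a, b, c = d - e / f, e / f, f  # claimed mapping to Formulation 1

x = np.linspace(10, 900, 50)
print(np.allclose(d + e * (x**f - 1) / f, a + b * x**c))  # True
</code></pre>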
<p>NOTE: after scaling data such that standard deviation is 1, both formulations give the same result.</p>
<p><a href="https://i.sstatic.net/6skSd3BM.png" rel="noreferrer"><img src="https://i.sstatic.net/6skSd3BM.png" alt="enter image description here" /></a></p>
<p>Data and code below.</p>
<pre class="lang-py prettyprint-override"><code>from io import StringIO
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from scipy.optimize import curve_fit
from sklearn.metrics import mean_absolute_error
from sklearn.preprocessing import StandardScaler
def formulation_1(x, a, b, c):
return a + b * x**c
def formulation2(x, d, e, f):
return d + e * (x**f - 1) / f
TESTDATA = StringIO(
"""x,y
410.0,73.06578085929756
25.0,205.29417389522575
72.0,110.48653325137172
51.0,168.52111516008628
15.0,119.75684720989004
72.0,164.46280991735537
73.0,145.53126751391522
36.0,161.41429319490925
40.0,219.91735537190084
190.0,89.18717804003897
21.0,203.3350571291969
47.0,170.12964176670877
21.0,197.1020714822368
18.0,96.43526170798899
53.0,117.55060034305319
50.0,189.89358650365415
43.0,179.03132807995385
863.0,69.63888057656149
71.0,131.42730764753813
40.0,205.2892561983471
65.0,131.3857292219426
401.0,133.81511047189076
50.0,115.65603442387814
58.0,151.99074870050802
50.0,165.8640803223824
21.0,210.87942861045792
236.0,124.21734182739671
53.0,180.11451429366744
12.0,320.77043917765184
36.0,244.3526170798898
25.0,202.41568198893515
21.0,184.03895128162597
29.0,165.64724945771087
25.0,218.1818181818182
72.0,161.8457300275482
130.0,107.38232466256511
84.0,177.52397865095088
38.0,57.524112172378224
50.0,168.132777815723
25.0,202.41568198893515
21.0,244.3978260657449
48.0,168.3167133392528
200.0,122.8403797554812
37.0,167.84185838731295
83.0,173.75445583988846
13.0,315.835929660122
11.0,314.47181327653976
32.0,203.68741889215215
200.0,123.96694214876034
39.0,110.59353869271226
39.0,190.81504521023686
40.0,235.53719008264466
37.0,181.71111484409758
25.0,215.55576804863057
40.0,235.53719008264466"""
)
df = pd.read_csv(TESTDATA)
sc = StandardScaler(with_mean=False)
df["x_scaled"] = sc.fit_transform(df[["x"]])
fig, ax = plt.subplots(1, 2, figsize=(10, 6), sharey=True)
for ax_i, scale in enumerate([False, True]):
feature_name = "x"
if scale:
feature_name = "x_scaled"
ax[ax_i].scatter(df[feature_name], df["y"], label="Data", alpha=0.5)
for formulation_i, (func, linestyle) in enumerate(
zip([formulation_1, formulation2], ["--", ":"])
):
params, covariance = curve_fit(
func,
df[feature_name],
df["y"],
maxfev=10_000,
)
df["fit"] = func(df[feature_name], *params)
mae = mean_absolute_error(df["y"], df["fit"])
x = np.linspace(df[feature_name].min(), df[feature_name].max(), 100)
fit = func(x, *params)
ax[ax_i].plot(
x,
fit,
linestyle=linestyle,
label=f"Form {formulation_i + 1} (MAE: {mae:.2f})",
)
ax[ax_i].legend()
ax[ax_i].set_title(f"Scaled: {scale}")
ax[ax_i].set_xlabel(feature_name)
ax[ax_i].set_ylabel("y")
plt.show()
</code></pre>
|
<python><scipy><curve-fitting><scipy-optimize>
|
2025-07-26 10:29:03
| 2
| 481
|
3UqU57GnaX
|
79,715,279
| 27,596,369
|
Pyttsx3 not saving MP3
|
<p>Here is my code:</p>
<pre><code>from tkinter import filedialog
import pyttsx3

mp3_path = filedialog.asksaveasfile(filetypes=(("MP3 files", "*.mp3"),
                                               ("All files", "*.*")))  # User prompt to select a path.
engine = pyttsx3.init() # A pyttsx3 object
engine.setProperty('rate', 100) # Set speed to speed given in entry
engine.save_to_file('Hi and welcome to my audiobook.', mp3_path) # Save mp3 to specified path
engine.runAndWait()
</code></pre>
<p>Now, when I try to save the MP3, it doesn't work and the file is empty. I copied and pasted the example from the docs, but it still doesn't work.</p>
<p>I am using macOS, so installing espeak-ng should not be needed either.</p>
|
<python><tkinter><pyttsx3>
|
2025-07-25 22:08:17
| 2
| 1,512
|
Aadvik
|
79,715,255
| 9,965,155
|
Python asyncio: AttributeError: 'coroutine' object has no attribute 'state' with RuntimeWarning: coroutine was never awaited
|
<p>I'm trying to run a basic stateful session using the <code>google.adk</code> library in Python, interacting with a <code>question_answering_agent</code> and <code>InMemorySessionService</code>. I encountered a series of errors related to imports, asynchronous operations, and API key loading.</p>
<p>My goal is to successfully run the <code>basic_stateful_session.py</code> script which initializes a session, sends a message to an agent, and then logs the final session state.</p>
<p>However, I get the following error:</p>
<blockquote>
<p>AttributeError: 'coroutine' object has no attribute 'state' and ValueError: Session not found (with RuntimeWarning: coroutine '...' was never awaited)</p>
</blockquote>
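<p>For context, this is the generic pattern that produces both messages, shown as a minimal standalone sketch that does not use <code>google.adk</code> (all names below are made up for illustration):</p>
<pre><code>import asyncio

async def create_session():
    """Stand-in for an async factory (e.g. a session service's create_session)."""
    class Session:
        state = {"user_name": "John Doe"}
    return Session()

async def main():
    session = create_session()  # missing await -> a coroutine object, not a Session
    print(session.state)        # AttributeError: 'coroutine' object has no attribute 'state'

asyncio.run(main())             # also emits: RuntimeWarning: coroutine ... was never awaited
</code></pre>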
<p>The script: <code>basic_stateful_session.py</code></p>
<pre><code>import uuid
from dotenv import load_dotenv
from google.adk.runners import Runner
from google.adk.sessions import InMemorySessionService
from google.genai import types
from question_answering_agent import question_answering_agent
load_dotenv()
# Create a new session service to store state
session_service_stateful = InMemorySessionService()
initial_state = {
"user_name": "John Doe",
"user_preferences": """
I like to play Pickleball, Disc Golf, and Tennis.
My favorite food is Mexican.
My favorite TV show is Game of Thrones.
Loves it when people like and subscribe to his YouTube channel.
""",
}
# Create a NEW session
APP_NAME = "John Doe Bot"
USER_ID = "john_doe"
SESSION_ID = str(uuid.uuid4())
stateful_session = session_service_stateful.create_session(
app_name=APP_NAME,
user_id=USER_ID,
session_id=SESSION_ID,
state=initial_state,
)
print("CREATED NEW SESSION:")
print(f"\tSession ID: {SESSION_ID}")
runner = Runner(
agent=question_answering_agent,
app_name=APP_NAME,
session_service=session_service_stateful,
)
new_message = types.Content(
role="user", parts=[types.Part(text="What is Johns favorite TV show?")]
)
for event in runner.run(
user_id=USER_ID,
session_id=SESSION_ID,
new_message=new_message,
):
if event.is_final_response():
if event.content and event.content.parts:
print(f"Final Response: {event.content.parts[0].text}")
print("==== Session Event Exploration ====")
session = session_service_stateful.get_session(
app_name=APP_NAME, user_id=USER_ID, session_id=SESSION_ID
)
# Log final Session state
print("=== Final Session State ===")
for key, value in session.state.items():
print(f"{key}: {value}")
</code></pre>
<p>The agent:</p>
<pre><code>from google.adk.agents import Agent
# Create the root agent
question_answering_agent = Agent(
name="question_answering_agent",
model="gemini-2.0-flash",
description="Question answering agent",
instruction="""
You are a helpful assistant that answers questions about the user's preferences.
Here is some information about the user:
Name:
{user_name}
Preferences:
{user_preferences}
""",
)
</code></pre>
|
<python><google-agent-development-kit>
|
2025-07-25 21:22:06
| 1
| 2,006
|
PinkBanter
|
79,715,248
| 814,438
|
RabbitMQ - Consume a Large Volume of Messages from a Queue without ACK
|
<p>I would like to read a large volume of messages from a RabbitMQ queue, but negatively acknowledge (NACK) them. Some other consumer will ACK them and do work later.</p>
<p>I have 100,000 messages in a queue. The following Python code is my attempt to print information about all messages:</p>
<pre class="lang-py prettyprint-override"><code># Script settings
seen_message_ids = set()
retrieved_number = 0
delete_all_messages = False
ids_to_delete = ['da140a51-cdd9-f71d-ac5c-ec15766e6aa4']
def on_message(ch, method_frame, header_frame, body):
# If all arguments other than `ch` are None, then the `consume` method timed out due to queue inactivity.
# Stop listening to the queue.
if method_frame is None and header_frame is None and body is None:
ch.stop_consuming()
return
# Messages that are not deleted are re-queued, meaning they will be consumed again, infinitely. Keep track
# of `message_id`, which is a UUID to identify message that were already seen. Since this is a FIFO queue,
# seeing any message again means all messages in the queue have been examined.
message_id = header_frame.message_id
if message_id in seen_message_ids:
print("All messages evaluated.")
ch.stop_consuming()
return
seen_message_ids.add(message_id)
# Keep track of the number of retrieved messages.
global retrieved_number
retrieved_number += 1
# To keep the message, negatively acknowledge it and re-queue it.
ch.basic_nack(delivery_tag=method_frame.delivery_tag, requeue=True)
status = "READ "
print(f"{status} {retrieved_number:06d} Message: {body.decode()} UUID: {header_frame.message_id}")
return
def main():
connection = pika.BlockingConnection(
pika.ConnectionParameters(host=RABBITMQ_HOST,
port=RABBITMQ_PORT,
virtual_host=RABBITMQ_VHOST,
credentials=credentials))
channel = connection.channel()
channel.queue_declare(queue='test1')
# Consume messages and acknowledge them
print("START LISTENING")
last_message_received_time = datetime.datetime.now()
for method_frame, properties, body in channel.consume('test1', inactivity_timeout=1):
# `consume` is a generator https://pika.readthedocs.io/en/stable/examples/blocking_consumer_generator.html
# That means that the body of this for loop will not be reached unless a new message is found.
new_message_received_time = datetime.datetime.now()
if method_frame is None and properties is None and body is None:
print(f"Timeout due to inactivity after {new_message_received_time - last_message_received_time} elapsed.")
on_message(channel, method_frame, properties, body)
try:
channel.start_consuming()
except KeyboardInterrupt:
channel.stop_consuming()
finally:
print("STOP LISTENING")
connection.close()
</code></pre>
<p>This approach succeeds until about 5,000 messages have been retrieved. Then this happens:</p>
<ul>
<li>The ~5,000th message is printed: <code>READ 005112 Message: {"job_id": "AAAAAAAAAAAAA"} UUID: 86d26605-33d6-9c61-68f8-a67abdf76a13</code></li>
<li>This message is printed: <code>All messages evaluated</code>. For this condition to be met, the script must have encountered a previously seen <code>message_id</code>, which is a GUID. Since the GUIDs are distinct, the only way to see a repeated GUID is for a re-queued message to be retrieved again. Shouldn't re-queued messages go to the back of the queue, though? I expected to see re-queued messages only after the 100,000 messages already in the queue were retrieved.</li>
<li>The RabbitMQ management UI shows most messages as Unacked. The number of unacked messages goes to zero over several minutes, or sometimes a few seconds.</li>
</ul>
<p><a href="https://i.sstatic.net/82M3vbWT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/82M3vbWT.png" alt="enter image description here" /></a></p>
<ul>
<li>The consumer remains stuck on <code>ch.stop_consuming()</code>, which is immediately after <code>All messages evaluated</code> until the management UI shows all messages as Ready.</li>
<li>An attempt to retrieve messages using a second consumer fails to find any.</li>
<li>An attempt to retrieve messages using the management UI fails to find any.</li>
</ul>
<p>Why does the consumer not receive and print all 100,000 messages? What work-arounds are available?</p>
<p>Someone will probably ask why I want to read and NACK all messages. In fact, I want to ACK one or a few of them as a way to remove them from the queue. However, I can't ACK a message if my consumer never retrieves it. References to this approach for removing a message are here: <a href="https://stackoverflow.com/questions/53273463/how-to-remove-specific-message-from-queue-in-rabbitmq/53273588">SO1</a> and here: <a href="https://stackoverflow.com/questions/62920752/how-to-remove-a-message-from-the-queue">SO2</a></p>
|
<python><rabbitmq><pika>
|
2025-07-25 21:05:17
| 0
| 1,199
|
Jacob Quisenberry
|
79,715,225
| 5,294,585
|
Python Flask Class initiating with NoneType despite being passed
|
<p>I'm working on a Flask app that has been running for a while. I don't often add new users, so I don't know exactly when this broke, but recently I tried to add a new user and got an error when adding a <code>User</code> to the database. It appears to be caused by the class not instantiating with the variables set. Here's the base class with <code>__init__</code>:</p>
<pre><code>class User(UserMixin,db.Model):
__tablename__='users'
id=db.Column(db.Integer,primary_key=True)
email = db.Column(db.String(128),unique=True,index=True)
username=db.Column(db.String(64),unique=True,index=True)
first_name=db.Column(db.String(64))
last_name=db.Column(db.String(64))
about_me=db.Column(db.Text())
role_id = db.Column(db.Integer,db.ForeignKey('roles.id'))
password_hash = db.Column(db.String(128))
confirmed = db.Column(db.Boolean, default=False)
member_since=db.Column(db.DateTime(),default=datetime.utcnow)
last_seen=db.Column(db.DateTime(),default=datetime.utcnow)
avatar_hash = db.Column(db.String(32))
articles = db.relationship('Article',backref='author',lazy="dynamic")
followed = db.relationship('Follow',
foreign_keys=[Follow.follower_id],
backref=db.backref('follower', lazy='joined'),
lazy='dynamic',
cascade='all, delete-orphan')
followers = db.relationship('Follow',
foreign_keys=[Follow.followed_id],
backref=db.backref('followed', lazy='joined'),
lazy='dynamic',
cascade='all, delete-orphan')
def __init__(self, **kwargs):
super(User, self).__init__(**kwargs)
print(self.email.lower())
</code></pre>
<p>I was able to narrow the problem down to a <code>NoneType</code> error on <code>self.email</code>, so I clipped the <code>__init__</code> function to just initialize and then access the problem attribute. I can then replicate the error from <code>flask shell</code>:</p>
<pre><code>>>> u = User(email="test@test.com")
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "<string>", line 4, in __init__
File "/home/eskimotv/venv/lib/python3.8/site-packages/sqlalchemy/orm/state.py", line 433, in _initialize_instance
manager.dispatch.init_failure(self, args, kwargs)
File "/home/eskimotv/venv/lib/python3.8/site-packages/sqlalchemy/util/langhelpers.py", line 68, in __exit__
compat.raise_(
File "/home/eskimotv/venv/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 178, in raise_
raise exception
File "/home/eskimotv/venv/lib/python3.8/site-packages/sqlalchemy/orm/state.py", line 430, in _initialize_instance
return manager.original_init(*mixed[1:], **kwargs)
File "/home/eskimotv/app/app/models.py", line 109, in __init__
super(User, self).__init__(**kwargs)
File "/home/eskimotv/venv/lib/python3.8/site-packages/sqlalchemy/ext/declarative/base.py", line 842, in _declarative_constructor
setattr(self, k, kwargs[k])
File "/home/eskimotv/venv/lib/python3.8/site-packages/sqlalchemy/orm/attributes.py", line 272, in __set__
self.impl.set(
File "/home/eskimotv/venv/lib/python3.8/site-packages/sqlalchemy/orm/attributes.py", line 865, in set
value = self.fire_replace_event(
File "/home/eskimotv/venv/lib/python3.8/site-packages/sqlalchemy/orm/attributes.py", line 873, in fire_replace_event
value = fn(
File "/home/eskimotv/venv/lib/python3.8/site-packages/sqlalchemy/orm/events.py", line 2162, in wrap
fn(target, *arg)
File "/home/eskimotv/app/app/models.py", line 216, in on_changed_email
target.avatar_hash = target.gravatar_hash()
File "/home/eskimotv/app/app/models.py", line 212, in gravatar_hash
return hashlib.md5(self.email.lower().strip().encode('utf-8')).hexdigest()
AttributeError: 'NoneType' object has no attribute 'lower'
</code></pre>
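<p>Based on the traceback, the failure can be reproduced with a minimal, self-contained sketch along these lines (this is an assumption about what <code>models.py</code> does around <code>on_changed_email</code>, not the actual application code; it uses SQLAlchemy 1.4+ import paths):</p>
<pre><code>import hashlib
from sqlalchemy import Column, Integer, String, event
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    email = Column(String(128))
    avatar_hash = Column(String(32))

    def gravatar_hash(self):
        # Reads self.email, which is still None while the 'set' event is firing
        return hashlib.md5(self.email.lower().strip().encode("utf-8")).hexdigest()

@event.listens_for(User.email, "set")
def on_changed_email(target, value, oldvalue, initiator):
    target.avatar_hash = target.gravatar_hash()

User(email="test@test.com")  # AttributeError: 'NoneType' object has no attribute 'lower'
</code></pre>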
<p>Can anyone help me understand why that variable is not being set in the instance of the object?</p>
|
<python><flask>
|
2025-07-25 20:35:01
| 1
| 908
|
David Scott
|
79,715,189
| 27,596,369
|
How to slowly make scrolledtext.ScrolledText text transparent
|
<p>I am making an app where, when you stop typing, your text gradually becomes more transparent until it disappears. How do I do that?</p>
|
<python><tkinter>
|
2025-07-25 19:51:29
| 1
| 1,512
|
Aadvik
|
79,715,164
| 2,000,640
|
Wxpython with matplotlib - resize plot
|
<p>I'm using matplotlib in wxpython, and here is what I'm trying to do...</p>
<p>I'd like to define my figure to be a specific (user chosen) size... say 8 inches wide by 6 inches high.</p>
<p>Then, I'd like the user to be able to size the window and resize/scale the plot to fit the window, but the underlying figure still be 8x6 (to ensure fonts stay the right size and such).</p>
<p>What I'm currently doing is this:</p>
<pre><code> size = self.GetClientSize()
self.canvas.SetSize(size)
self.figure.set_size_inches(8, 6)
self.figure.set_dpi(min(size.width/8, size.height/6))
# my plot configuration code is here
self.canvas.draw_idle()
</code></pre>
<p>This works great from a sizing standpoint, but if you size up the plot and back down, I end up with clutter on the right or bottom. Like this:</p>
<p><a href="https://i.sstatic.net/Um5wdRDE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Um5wdRDE.png" alt="enter image description here" /></a></p>
<p>Any suggestions? I saw something about wxDC.SetLogicalScale, but couldn't figure out how to get a DC or how to custom paint the background.</p>
|
<python><matplotlib><wxpython>
|
2025-07-25 19:21:32
| 1
| 2,268
|
David Hope
|
79,715,118
| 27,596,369
|
Why is Tkinter scrolled text width and height different sizes?
|
<p>I created a <code>scrolledtext.ScrolledText</code> in my application and passed <code>width=50</code> and <code>height=50</code>, but the width was half the height, so I changed the width to 100 and it worked. I was just wondering why it is like that.</p>
|
<python><tkinter>
|
2025-07-25 18:16:41
| 1
| 1,512
|
Aadvik
|
79,715,106
| 2,058,333
|
Replicate PIL.Image.show() scaling and normalization
|
<p>I am curious what the PIL library does in terms of value scaling and normalization to show me a crisp image, and why just plotting the extracted NumPy values with matplotlib looks really bad.</p>
<p>Here is my code</p>
<pre><code>from PIL import Image
import matplotlib.pyplot as plt
import numpy as np

the_image = Image.open(temp_image_file)
sub_image = the_image.crop((520,965,565,1900))
plt.imshow(sub_image, cmap='gray')
plt.show()
si = np.array(sub_image, dtype=np.uint8)
si[np.where(si == 48)] = 255
plt.imshow(si, cmap='gray')
plt.show()
</code></pre>
<p>and attached are the two plots. The first direct plot looks much crisper while the second is rather illegible. This image is supposed to go into <code>EasyOCR</code> for number recognition, but I would rather feed it the first crisp image than what the numpy array has.
Ideas? Looks like a lowpass filter...?</p>
<p><a href="https://i.sstatic.net/G56Ed4QE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/G56Ed4QE.png" alt="First plt.show() figure" /></a>
<a href="https://i.sstatic.net/mqcmW0Ds.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mqcmW0Ds.png" alt="Second plt.show() figure" /></a></p>
|
<python><matplotlib><python-imaging-library><ocr>
|
2025-07-25 18:05:39
| 1
| 5,698
|
El Dude
|
79,714,981
| 9,879,534
|
How to specify one letter's case while ignore other letters' case in a regex?
|
<p>Say I want to write a regex to match text like "Table 1.1-1 Abcd" / "FIGURE 1: Efg" / "Image 1.1 Hijk lmn" but reject "Table 1.1: abcd efg". That is, the first letter of the text after "Table" / "Figure" ... and the section number must be uppercase, but "Table" and "TABLE", "Figure" and "FIGURE", "Image" and "IMAGE" ... should all be accepted.</p>
<p>My colleagues had written the regex with the <code>re.IGNORECASE</code> flag, so I don't want to change it too much. It looks like this:</p>
<pre class="lang-py prettyprint-override"><code>pattern = re.compile(r"^(?!.*[._β¦-]{{3,}}\s*\d+$)([1-9]\d?)?\s?({})\s+({}):?\s?.*".format('|'.join(SECTION_KWT), SN_KW_SECTION),
re.DOTALL | re.IGNORECASE)
</code></pre>
<p><code>SECTION_KWT</code> is a list containing words like "Table", "Figure" and others, <code>SN_KW_SECTION</code> is a str representing something like "1.1-1", "1", etc.</p>
<p>I just want to modify the regex so that this one letter must be uppercase, but even after asking AI, I still cannot get a good solution. I tried <code>[A-Z]</code> and <code>(?-i)</code>, but neither works.</p>
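<p>For reference, Python's <code>re</code> module supports scoped inline flags, which let a single group opt out of <code>re.IGNORECASE</code>. A minimal sketch on a simplified pattern (not the colleagues' full regex):</p>
<pre class="lang-py prettyprint-override"><code>import re

# (?-i:[A-Z]) re-enables case sensitivity only inside that group (Python 3.6+)
pattern = re.compile(r"(table|figure|image)\s+\S+\s+(?-i:[A-Z])", re.IGNORECASE)

print(bool(pattern.match("Table 1.1-1 Abcd")))    # True
print(bool(pattern.match("FIGURE 1: Efg")))       # True
print(bool(pattern.match("Table 1.1: abcd efg"))) # False
</code></pre>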
|
<python><regex>
|
2025-07-25 15:46:12
| 1
| 365
|
Chuang Men
|
79,714,931
| 4,704,065
|
Fit the rows and column names using pandas.set_option
|
<p>I am trying to use <strong>pandas.set_option</strong> in my Python script to display a table, but the data does not fill the HTML page properly. Since the names in some columns are a bit long, the columns wrap and appear one below another.</p>
<p>Is it possible to split the entries, e.g. in the "<strong>Log File</strong>" column, so that the table looks better? I want to be able to read the log file name completely and display the complete table in a readable way.</p>
<p>Below is a screenshot of the output from the html page:</p>
<p><a href="https://i.sstatic.net/bZOkC79U.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bZOkC79U.png" alt="enter image description here" /></a></p>
<p>Below is the code snippet:</p>
<pre><code>def print_table(self, test_result: dict) -> None:
sf_detection_table = {
"Log File ": test_result["Log"],
"Total Sf Percentage Epochs (Fix type 1 and 4)": test_result[
"Total Sf Percentage Epochs (Fix type 1 and 4)"
],
"Sf DR Percentage Epochs (Fix type 1)": test_result["Sf DR Percentage Epochs (Fix type 1)"],
"Sf GNSS DR Percentage Epochs (Fix type 4)": test_result["Sf GNSS DR Percentage Epochs (Fix type 4)"],
"HPG Fixed Ambuity Percentage Epochs (Carr sol=2)": test_result[
"HPG Fixed Ambuity Percentage Epochs (Carr sol=2)"
],
"HPG Float Percentage Epochs (Carr sol=1)": test_result["HPG Float Percentage Epochs (Carr sol=1)"],
}
pd.set_option("display.max_colwidth", None)
pd.set_option("display.max_columns", None)
logger.info(pd.DataFrame(sf_detection_table))
</code></pre>
|
<python><pandas><dataframe>
|
2025-07-25 15:05:48
| 1
| 321
|
Kapil
|
79,714,763
| 11,246,056
|
How to display dataframe index when executing a cell in marimo?
|
<p>If I execute the following toy code in a Jupyter notebook:</p>
<pre><code>import pandas as pd
pd.DataFrame({"col1": ["a", "b", "c", "d"], "col2": ["e", "f", "g", "h"]})
</code></pre>
<p>The dataframe is displayed with its index (0, 1, 2, 3):</p>
<p><a href="https://i.sstatic.net/LRhO4AGd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LRhO4AGd.png" alt="enter image description here" /></a></p>
<p>But in <code>marimo</code>, the index is hidden when I run the same cell:</p>
<p><a href="https://i.sstatic.net/7oTbudRe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7oTbudRe.png" alt="enter image description here" /></a></p>
<p>Is there an option in <code>marimo</code> to display the index?</p>
|
<python><marimo>
|
2025-07-25 12:59:55
| 1
| 13,680
|
Laurent
|
79,714,638
| 14,720,380
|
How can I use ImportString in Pydantic to get an instance of a class?
|
<p>I am trying to use an <code>ImportString</code> to get an instance of a given object:</p>
<pre class="lang-py prettyprint-override"><code>from __future__ import annotations
from pydantic.types import ImportString
from pydantic import Field, BaseModel
from typing import Annotated, Callable
class Foo:
"""A test class."""
def __init__(self, name: str) -> None:
self.name = name
def __call__(self, name: str) -> str:
"""Call the foo class."""
return f"hello {name}"
x = Foo("world")
def say_hello(name: str) -> str:
"""Say hello to a name."""
return f"hello {name}"
def test_pydantic_ai_tool() -> None:
"""Test that the PydanticAiTool type can be used to validate a tool."""
class Bar(BaseModel):
foo: Annotated[
ImportString[Callable[..., str]],
Field(description="A tool that can be used in a pydantic-ai agent."),
]
bar = Bar.model_validate(
{
"foo": "_tests.utils.types.test__pydantic_ai_tool.say_hello",
},
)
assert bar.foo("world") == "hello world"
assert bar.model_dump_json() == '{"foo":"_tests.utils.types.test__pydantic_ai_tool.say_hello"}'
def test_pydantic_ai_tool_two() -> None:
"""Test that the PydanticAiTool type can be used to validate a tool."""
class Bar(BaseModel):
foo: Annotated[
ImportString[Foo],
Field(description="A tool that can be used in a pydantic-ai agent."),
]
bar = Bar.model_validate(
{
"foo": "_tests.utils.types.test__pydantic_ai_tool.x",
},
)
assert bar.foo("world") == "hello world"
assert bar.model_dump_json() == '{"foo": "_tests.utils.types.test__pydantic_ai_tool.x"}'
</code></pre>
<p>In this example, the first test works but the second test fails with:</p>
<pre class="lang-none prettyprint-override"><code>E pydantic.errors.PydanticSchemaGenerationError: Unable to generate pydantic-core schema for <class 'test__pydantic_ai_tool.Foo'>. Set `arbitrary_types_allowed=True` in the model_config to ignore this error or implement `__get_pydantic_core_schema__` on your type to fully support it.
E
E If you got this error by calling handler(<some type>) within `__get_pydantic_core_schema__` then you likely need to call `handler.generate_schema(<some type>)` since we do not call `__get_pydantic_core_schema__` on `<some type>` otherwise to avoid infinite recursion.
E
E For further information visit https://errors.pydantic.dev/2.10/u/schema-for-unknown-type
.venv/lib/python3.11/site-packages/pydantic/_internal/_generate_schema.py:513: PydanticSchemaGenerationError
</code></pre>
<p>I would expect the <code>ImportString</code> to import the object at the path and check that it is an instance of the given type, and then for the schema generation it should just return the import string (which is the case for the first test, if you do <code>model_dump_json</code> you get the import string and not the schema of whatever type you are trying to import).</p>
<p>I am not sure why there needs to be a schema of the <code>ImportString</code> generic type. Is there a way around this?</p>
|
<python><pydantic-v2>
|
2025-07-25 11:14:35
| 0
| 6,623
|
Tom McLean
|
79,714,594
| 12,439,683
|
Check if Python argparse used explicit CLI argument or default value?
|
<p>I derive some settings from my parser that I later store on disk to be reloaded again. During the reload I want to overwrite some values but keep the first ones, similar to new default values.</p>
<p>I want a priority of <code>args2</code> > <code>args1</code> > <code>default</code> values. However, I face the challenge of how to determine the highest priority: when I pass the default value explicitly in <code>args2</code>, it should not fall back to <code>args1</code>, i.e. a simple comparison with the default is not sufficient.</p>
<pre class="lang-py prettyprint-override"><code>args1 = ["-a", "1", "--bar", "2"] # Run 1
args2 = ["--bar", "default-bar", "-c", "3"] # Run 2, override bar with default
args2_b = ["-c", "3"] # Run 2b, bar should be restored from Run 1
# Equivalent full args for Run 2
wanted = ["-a", "1", "--bar", "default-bar", "-c", "3"]
wanted_b = ["-a", "1", "--bar", "2", "-c", "3"]
from argparse import ArgumentParser
parser = ArgumentParser()
# required arguments without defaults can be present too
parser.add_argument("--foo", "-a", default="default-foo")
parser.add_argument("--bar", "-b", default="default-bar")
parser.add_argument("--baz", "-c", default="default-baz")
results1 = parser.parse_args(args1)
print(results1) # Namespace(foo='1', bar='2', baz='default-baz')
results2 = parser.parse_args(args2)
print(results2) # Namespace(foo='default-foo', bar='default-bar', baz='3')
# merge results
# ...?
</code></pre>
<p>I do not want to make things too complicated and hard to maintain, therefore manually looking at <code>sys.argv</code> does not look feasible (I have many arguments, shorthand codes, usage of <code>dest</code>, ...).<br />
I use <code>tap</code> <a href="https://github.com/swansonk14/typed-argument-parser" rel="nofollow noreferrer">Typed Argument Parser</a> - if that info helps to find a better solution, also I would like to keep most of <code>tap</code>'s features, i.e. to define most arguments in a class and keep meaningful default values there.</p>
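<p>For reference, one way to detect which options were passed explicitly with plain <code>argparse</code> is to re-parse with <code>argparse.SUPPRESS</code> defaults, so that only explicitly given options appear in the namespace. A minimal sketch on a simplified parser (not the <code>tap</code>-based one):</p>
<pre class="lang-py prettyprint-override"><code>from argparse import ArgumentParser, SUPPRESS

# Auxiliary parser: same flags, but suppressed defaults, so attributes exist
# only for options that were actually passed on the command line.
aux = ArgumentParser(argument_default=SUPPRESS)
aux.add_argument("--foo", "-a")
aux.add_argument("--bar", "-b")
aux.add_argument("--baz", "-c")

explicit2 = vars(aux.parse_args(["--bar", "default-bar", "-c", "3"]))
print(explicit2)  # {'bar': 'default-bar', 'baz': '3'} -> these were given explicitly
</code></pre>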
<hr />
<p>TL;DR: how could I differentiate between these two cases in Run 2?</p>
<pre class="lang-bash prettyprint-override"><code># Run 1
file.py -a 1 --bar 2
# Run 2, that references some state from Run 1
# Use passed value:
file.py -c 3 --restore-run 1 --bar "default-bar"
# Use value from Run 1
file.py -c 3 --restore-run 1
</code></pre>
<hr />
<p>Related questions:</p>
<ul>
<li><a href="https://stackoverflow.com/q/66764775/12439683">python get changed (non-default) cli arguments?</a> - helpful and closest to my problem, but fails with the restored default value</li>
<li><a href="https://stackoverflow.com/q/25035109/12439683">Python command line arguments check if default or given</a> - problem and solution are far off</li>
<li><a href="https://stackoverflow.com/q/15301147/12439683">Python argparse: default value or specified value</a> - asks for <code>const</code></li>
</ul>
|
<python><argparse><python-tap>
|
2025-07-25 10:45:10
| 1
| 5,101
|
Daraan
|
79,714,515
| 1,719,931
|
SPECTER2 similarity performs poorly
|
<p>I'm trying to compute a measure of semantic similarity between titles of scientific publications using <a href="https://huggingface.co/allenai/specter2" rel="nofollow noreferrer">SPECTER2</a>, but the model performs poorly.</p>
<p>Here is my code:</p>
<pre class="lang-py prettyprint-override"><code>from transformers import AutoTokenizer
from adapters import AutoAdapterModel
import torch
# load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("allenai/specter2_base")
# load base model
model = AutoAdapterModel.from_pretrained("allenai/specter2_base")
# load the adapter(s) as per the required task, provide an identifier for the adapter in load_as argument and activate it
model.load_adapter("allenai/specter2", source="hf", load_as="proximity", set_active=True)
def novel2(txt):
assert isinstance(txt, list), "Input should be a list of strings"
assert all(isinstance(s, str) for s in txt), (
"All elements in the list should be strings"
)
# preprocess the input
inputs = tokenizer(
txt,
padding=True,
truncation=True,
return_tensors="pt",
return_token_type_ids=False,
max_length=512,
)
output = model(**inputs)
# take the first token in the batch as the embedding
embeddings = output.last_hidden_state[:, 0, :]
print(embeddings.shape)
n = len(txt)
dim = (n * (n - 1)) // 2
dists = torch.zeros(dim)
pos = 0
for i in range(n):
for j in range(i + 1, n):
dists[pos] = torch.nn.functional.cosine_similarity(
embeddings[i, :].reshape(1, -1), embeddings[j, :].reshape(1, -1)
)
pos += 1
return dists
</code></pre>
<p>Testing it with:</p>
<pre class="lang-py prettyprint-override"><code>novel2(["BERT", "Attention is all you need", "How the Romans conquered the world"])
</code></pre>
<p>Returns</p>
<pre><code>tensor([0.8831, 0.8758, 0.8812], grad_fn=<CopySlices>)
</code></pre>
<p>These would be the scores for the comparisons document 1 - document 2, document 1 - document 3, document 2 - document 3.</p>
<p>I would have expected a very low score for the latter 2 comparisons.</p>
<p>Instead, the first document is similar to the second as much as the second is similar to the third.</p>
<p>And the drop when comparing the first document with the third is very small.</p>
|
<python><pytorch><huggingface-transformers><word-embedding><sentence-similarity>
|
2025-07-25 09:38:07
| 0
| 5,202
|
robertspierre
|
79,714,499
| 1,877,600
|
Unable to Connect to Azurite Container from Custom Engine Container in Integration Tests
|
<p>I'm currently working on integration tests for my project, which consists of two main components: an application and an engine. Both components are deployed into Azure Container Apps. The engine scales using Azure Storage Queue. The workflow is as follows:</p>
<ol>
<li>The application sends a request to the Azure queue.</li>
<li>The engine consumes the request and processes it.</li>
<li>Engine sends status to the status queue</li>
<li>Engine saves artifacts to the blob storage.</li>
</ol>
<p>I'm facing difficulties in establishing a connection to the Azurite container from my custom engine container. Below is the test case I'm using:</p>
<pre class="lang-py prettyprint-override"><code>class TestEngine(BaseTestCase):
def test_integration_engine(self):
with Network() as network:
logger.info(f"Network {network.name}")
with AzuriteContainer() \
.with_network(network) \
.with_network_aliases("azurite_server") \
.with_exposed_ports(10000, 10000) \
.with_exposed_ports(10001, 10001) as azurite_container:
connection_string = azurite_container.get_connection_string()
with DockerContainer("57361aab4105730a7ff9d538e9bee44dfe9c24d57cc6e396edff7dbea2de5031") \
.with_env("blob_connection_string", connection_string) \
.with_network(network) \
.with_network_aliases("engine_app") \
.with_exposed_ports(80, 80) as container:
logger.info("Waiting for logs from engine!")
try:
wait_for_logs(container, "Starting handling", timeout=60)
logger.info("Engine processing started according to logs.")
except Exception as e:
logger.warning(f"Engine processing start logs not found within timeout: {e}")
logger.warning("Collecting logs and continuing anyway.")
# send_request_task(connection_string)
print(collect_logs(container, 200))
</code></pre>
<p>Note: The validation logic and docstrings have been omitted for brevity.</p>
<p>However, the engine process (running on a custom container) is terminated with the following error message:</p>
<pre><code> raise error\nazure.core.exceptions.ServiceRequestError: <urllib3.connection.HTTPConnection object at 0x7fddad8b47c0>: Failed to establish a new connection: [Errno 111] Connection refused\n')]
</code></pre>
<p>Could someone help me understand why the connection to the Azurite container is being refused and how I can resolve this issue? Any guidance or suggestions would be greatly appreciated!</p>
|
<python><integration-testing><testcontainers><azurite>
|
2025-07-25 09:25:21
| 0
| 597
|
user1877600
|
79,714,469
| 4,662,490
|
Pytest rootdir differs between local and GitHub Actions (macOS)
|
<p>I'm developing a Python <a href="https://github.com/DT-Service-Consulting/gtfs_railways" rel="nofollow noreferrer">package</a> called <code>gtfs_railways</code>, and my local setup is working perfectly, but in CI, path resolution breaks and makes my tests fail.
Note that, for legacy reasons, I am using Python 3.8.</p>
<p>Here's the structure of my project:</p>
<pre class="lang-none prettyprint-override"><code>gtfs_railways/
βββ gtfs_railways/
β βββ init.py
β βββ utils/
β βββ data/
β βββ functions/
β βββ tests/
βββ setup.py
βββ README.md
</code></pre>
<p>Locally I'm using pytest 8.3.5:</p>
<pre class="lang-none prettyprint-override"><code>platform darwin -- Python 3.8.20, pytest-8.3.5, pluggy-1.5.0
rootdir: /Users/marco/Work-MBP/gtfs_railways
plugins: cov-5.0.0, anyio-4.5.2
</code></pre>
<p>and everything works fine.</p>
<p>On macOS GitHub Actions runners (<code>macos-latest</code>), even with <code>actions/setup-python@v5</code> and specifying Python 3.8.20, I get:</p>
<pre class="lang-none prettyprint-override"><code>platform darwin -- Python 3.8.10, pytest-8.3.5, pluggy-1.5.0
rootdir: /Users/runner/work/gtfs_railways/gtfs_railways
</code></pre>
<p>As you can see, there is an extra <code>gtfs_railways/</code> nesting in the path that breaks everything, especially any relative imports or dynamic file loading logic that relies on the <code>rootdir</code>.</p>
<p>So far I've tried several workarounds to force pytest into the right <code>rootdir</code>, but all of them seem to fail:</p>
<ol>
<li><p>Checkout fix</p>
<pre class="lang-yaml prettyprint-override"><code>- uses: actions/checkout@v4
with:
path: .
</code></pre>
</li>
<li><p>Explicit working directory</p>
<pre class="lang-yaml prettyprint-override"><code>- name: Run tests
run: pytest -v -s
working-directory: ${{ github.workspace }}
</code></pre>
</li>
<li><p>Set <code>PYTHONPATH</code></p>
<pre class="lang-yaml prettyprint-override"><code>- name: Set PYTHONPATH
run: echo "PYTHONPATH=${{ github.workspace }}" >> $GITHUB_ENV
</code></pre>
</li>
<li><p>Tried</p>
<pre class="lang-yaml prettyprint-override"><code>pytest --rootdir=.
</code></pre>
</li>
</ol>
<p>I'm using only basic sh steps. No tox, no poetry, no custom runners, just plain pip and pytest.</p>
<p>Any thoughts?</p>
<p>I noticed that <code>${{ github.workspace }} = /Users/runner/work/gtfs_railways/gtfs_railways</code> perhaps I could simply modify this line?</p>
<p>What else can I try to make pytest behave consistently across environments?</p>
|
<python><github-actions><pytest>
|
2025-07-25 08:59:08
| 1
| 423
|
Marco Di Gennaro
|
79,714,452
| 4,311,316
|
False output with datetime week and isocalendar()
|
<p>I have this GUI with the button "get dates for last week".</p>
<p>I programmed this in 2024, using <code>datetime</code> objects and the <code>isocalendar()</code> method - and the button worked like a charm, returning the dates for Monday (first date of the week) and Sunday (last date of the week)</p>
<p>In 2025 my program returns the wrong dates.</p>
<p>Running this code:</p>
<pre><code>from datetime import datetime
str_today_iso = "2025-07-25"
dt_today_1st = datetime.strptime(str_today_iso, "%Y-%m-%d")
str_today_in_week_format = str(dt_today_1st.isocalendar().year) + " " + str(dt_today_1st.isocalendar().week) + " " + str(dt_today_1st.isocalendar().weekday)
dt_today_2nd = datetime.strptime(str_today_in_week_format, "%Y %W %w")
str_today_final = dt_today_2nd.strftime("%Y-%m-%d")
str_monday_this_week = str(dt_today_1st.isocalendar().year) + " " + str(dt_today_1st.isocalendar().week) + " 1"
print(f"today in iso-format: {str_today_iso}")
print(f"today in week-format: {str_today_in_week_format}")
print(f"today after conversion: {str_today_final}")
</code></pre>
<p>Returns this output:</p>
<pre><code>today in iso-format: 2025-07-25
today in week-format: 2025 30 5
today after conversion: 2025-08-01
</code></pre>
<p>Double-checking with <a href="https://kalenderwoche.de" rel="nofollow noreferrer">kalenderwoche.de</a>, 25 July 2025 is in week 30, and as a Friday it is day 5. Yet converting "2025 30 5" returns Aug 1 2025.</p>
<p>Exchanging the year (still using the 25th July):</p>
<ul>
<li>2023 -> the result looks fine.</li>
<li>2024 -> the result looks fine.</li>
<li>2025 -> false result</li>
<li>2026 -> false result</li>
<li>2027 -> error - as the weekday returns a 7, which is invalid for weekday!</li>
<li>2028 -> the result looks fine</li>
</ul>
<p>According to kalenderwoche.de and isoweeks.com Jan 01 2025 is in Week 1.<br />
Hence, the week 30 in 2025 starts on SUN July 20th 2025</p>
<p>According to <code>isocalendar()</code> Jan 01 2025 is in Week 0.<br />
Hence, the week 30 in 2025 starts on SUN July 27th 2025</p>
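<p>For comparison, <code>strptime</code> also has ISO-week directives (<code>%G</code>, <code>%V</code>, <code>%u</code>, available since Python 3.6), which interpret "2025 30 5" using the ISO calendar rather than the <code>%Y %W %w</code> rules. A minimal sketch:</p>
<pre><code>from datetime import datetime

# ISO year / ISO week / ISO weekday instead of %Y %W %w
dt = datetime.strptime("2025 30 5", "%G %V %u")
print(dt.strftime("%Y-%m-%d"))  # 2025-07-25
</code></pre>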
<p>Shouldn't that be standardized?
Does anybody have the same problem?<br />
Do I have a logical error?</p>
<p>Using Python 3.13.1.</p>
|
<python><datetime>
|
2025-07-25 08:49:09
| 2
| 385
|
Red
|
79,714,421
| 8,964,393
|
Assign column status retrospectively in pandas
|
<p>I have created the following pandas dataframe:</p>
<pre><code>import pandas as pd
import numpy as np
ds = {'col1' : [234,321,284,286,287,300,301,303,305,299,288,300,299,287,286,280,279,270,269,301]}
df = pd.DataFrame(data=ds)
</code></pre>
<p>The dataframe looks like this:</p>
<pre><code>display(df)
col1
0 234
1 321
2 284
3 286
4 287
5 300
6 301
7 303
8 305
9 299
10 288
11 300
12 299
13 287
14 286
15 280
16 279
17 270
18 269
19 301
</code></pre>
<p>I have then created two columns (<code>cnsIncr</code> , <code>cnsDecr</code>), which calculate the consecutive increase and decrease in <code>col1</code> respectively.</p>
<pre><code>cnsIncr = []
cnsDecr = []
col1 = np.array(df['col1'])
for i in range(len(df)):
cnsIncr.append(0)
cnsDecr.append(0)
if(col1[i] > col1[i-1]):
cnsIncr[i] = cnsIncr[i-1]+1
else:
cnsIncr[i] = 0
if(col1[i] < col1[i-1]):
cnsDecr[i] = cnsDecr[i-1]+1
else:
cnsDecr[i] = 0
df['cnsIncr'] = cnsIncr
df['cnsDecr'] = cnsDecr
</code></pre>
<p>The resulting dataframe looks as follows:</p>
<p><a href="https://i.sstatic.net/ovXJlPA4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ovXJlPA4.png" alt="enter image description here" /></a></p>
<p>I need to create a columns called <code>Trend</code> which is populated as follows:</p>
<ul>
<li>if the column <code>cnsIncr</code> contains a value greater than or equal to 5, then <code>Trend = UpTrend</code>, starting from the record in which <code>cnsIncr = 0</code> and ending at the record for which <code>cnsIncr >= 5</code>;</li>
<li>if the column <code>cnsDecr</code> contains a value greater than or equal to 5, then <code>Trend = DownTrend</code>, starting from the record in which <code>cnsDecr = 0</code> and ending at the record for which <code>cnsDecr >= 5</code>;</li>
<li>if neither the column <code>cnsIncr</code> nor the column <code>cnsDecr</code> contains a value of 5 or greater, then <code>Trend = NoTrend</code></li>
</ul>
<p>So, from the example above, the resulting dataframe would look like this:</p>
<p><a href="https://i.sstatic.net/2foCKmNM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2foCKmNM.png" alt="enter image description here" /></a></p>
<p>How can I do this?</p>
|
<python><pandas><dataframe><if-statement><multiple-columns>
|
2025-07-25 08:24:36
| 1
| 1,762
|
Giampaolo Levorato
|
79,714,287
| 219,153
|
Why is Numba more efficient with 2D vs 1D version of this loop?
|
<p>I'm using Python 3.12.11 with Numba 0.61.2 on Ubuntu 22.04.5 and AMD Ryzen 7 3800X CPU. This benchmark:</p>
<pre><code>import numpy as np, timeit as ti, numba as nb
@nb.njit(fastmath=True)
def f1d(img, w):
fl = np.zeros_like(img)
for i in range(w, len(img)-w):
p = img[i]
fl[i] = max(abs(p-img[i+1]), abs(p-img[i+w]), abs(p-img[i-w]), abs(p-img[i-1]))
return fl
@nb.njit(fastmath=True)
def f2d(img):
fl = np.zeros_like(img)
for y in range(1, img.shape[0]-1):
for x in range(1, img.shape[1]-1):
p = img[y, x]
fl[y, x] = max(abs(p-img[y, x+1]), abs(p-img[y, x-1]), abs(p-img[y+1, x]), abs(p-img[y-1, x]))
return fl
img2d = np.random.randint(256, size=(500, 500)).astype('i2')
img1d = img2d.ravel().astype('i2')
w = img2d.shape[1]
print(np.array_equal(f2d(img2d)[:, 1:-1], f1d(img1d, w).reshape((w, -1))[:, 1:-1]))
print(f'Minimum, median and maximum execution time in us:')
for fun in ('f2d(img2d)', 'f1d(img1d, w)'):
t = 10**6 * np.array(ti.repeat(stmt=fun, setup=fun, globals=globals(), number=1, repeat=99))
print(f'{fun:20} {np.amin(t):8,.3f} {np.median(t):8,.3f} {np.amax(t):8,.3f}')
</code></pre>
<p>produces:</p>
<pre><code>True
Minimum, median and maximum execution time in us:
f2d(img2d) 170.701 172.806 180.690
f1d(img1d, w) 853.687 864.637 873.464
</code></pre>
<p>The 2D version is over 5x faster than the 1D version, due to heavy use of SIMD: <code>ymm</code> registers appear 559 times in the <code>f2d</code> assembly vs. 6 times in <code>f1d</code>. What is the reason that essentially the same loop is vectorized for <code>f2d</code>, but not for <code>f1d</code>?</p>
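<p>For reference, the <code>ymm</code> counts above can be reproduced with Numba's <code>inspect_asm()</code>. A small sketch, assuming the functions have already been compiled by the calls above:</p>
<pre><code># Count SIMD (ymm) register references in the generated assembly
for fun in (f2d, f1d):
    asm = "".join(fun.inspect_asm().values())
    print(fun.py_func.__name__, asm.count("ymm"))
</code></pre>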
<hr />
<p>In response to <em>Homer512</em>, I tried a loopless version of <code>f1d</code>:</p>
<pre><code>@nb.njit(fastmath=True)
def f1da(img, w):
p = img[w:-w]
l = img[w-1:-w-1]
r = img[w+1:-w+1]
d = img[:-2*w]
u = img[2*w:]
fl = np.maximum(np.maximum(np.abs(p-l), np.abs(p-r)), np.maximum(np.abs(p-u), np.abs(p-d)))
return fl
</code></pre>
<p>which ends up only slightly faster at:</p>
<pre><code>f1da(img1d, w) 713.653 714.284 727.849
</code></pre>
<p>These examples greatly simplify my use case, which can't be expressed in a loopless fashion with NumPy array operations.</p>
|
<python><for-loop><vectorization><numba>
|
2025-07-25 06:35:33
| 2
| 8,585
|
Paul Jurczak
|
79,714,277
| 4,281,353
|
FEAST Feature Store - What is event_timestamp in entity_df parameter of FeatureStore.get_historical_features method
|
<p>What is <code>event_timestamp</code> in <code>entity_df</code> parameter of <code>FeatureStore.get_historical_features</code> method?</p>
<p>Are they just dummy random timestamps used to create data? And what does <strong>ensure point-in-time correctness</strong> mean?</p>
<ul>
<li><a href="https://rtd.feast.dev/en/master/#feast.feature_store.FeatureStore.get_historical_features" rel="nofollow noreferrer">get_historical_features</a></li>
</ul>
<blockquote>
<p>This method joins historical feature data from one or more feature views to an entity dataframe by using a time travel join. Each feature view is joined to the entity dataframe using all entities configured for the respective feature view.</p>
<p><strong>Parameters</strong></p>
<p>entity_df (Union[pd.DataFrame, str]) - An entity dataframe is a collection of rows containing all entity columns (e.g., customer_id, driver_id) on which features need to be joined, as well as a <code>event_timestamp</code> column used <strong>to ensure point-in-time correctness</strong>. Either a Pandas DataFrame can be provided or a string SQL query. The query must be of a format supported by the configured offline store (e.g., BigQuery):</p>
</blockquote>
<p>Example code from <a href="https://docs.feast.dev/master/getting-started/quickstart" rel="nofollow noreferrer">Quickstart - Generating training data</a>.</p>
<pre><code>entity_df = pd.DataFrame.from_dict(
{
# entity's join key -> entity values
"driver_id": [1001, 1002, 1003],
# "event_timestamp" (reserved key) -> timestamps
"event_timestamp": [ # <-------- Are these random timestamp?
datetime(2021, 4, 12, 10, 59, 42),
datetime(2021, 4, 12, 8, 12, 10),
datetime(2021, 4, 12, 16, 40, 26),
],
# (optional) label name -> label values. Feast does not process these
"label_driver_reported_satisfaction": [1, 5, 3],
# values we're using for an on-demand transformation
"val_to_add": [1, 2, 3],
"val_to_add_2": [10, 20, 30],
}
)
store = FeatureStore(repo_path=".")
training_df = store.get_historical_features(
entity_df=entity_df,
features=[
"driver_hourly_stats:conv_rate",
"driver_hourly_stats:acc_rate",
"driver_hourly_stats:avg_daily_trips",
"transformed_conv_rate:conv_rate_plus_val1",
"transformed_conv_rate:conv_rate_plus_val2",
],
).to_df()
</code></pre>
<p>Documentation <a href="https://docs.feast.dev/getting-started/concepts/feature-retrieval#event-timestamp" rel="nofollow noreferrer">Feature retrieval - Event timestamp</a> says:</p>
<blockquote>
<p>The timestamp on which an event occurred, as found in a feature view's data source. The event timestamp describes the event time at which a feature was observed or generated.</p>
<p>Event timestamps are used during point-in-time joins to ensure that the latest feature values are joined from feature views onto entity rows. Event timestamps are also used to ensure that old feature values aren't served to models during online serving.</p>
</blockquote>
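<p>To make the quoted point-in-time behaviour concrete, here is a toy sketch using a pandas <code>merge_asof</code> (an illustration with made-up data, not how Feast is implemented):</p>
<pre><code>import pandas as pd

# For each entity row, pick the latest feature value observed at or before
# that row's event_timestamp (this is what point-in-time correctness means).
features = pd.DataFrame({
    "driver_id": [1001, 1001, 1001],
    "event_timestamp": pd.to_datetime(["2021-04-12 07:00", "2021-04-12 10:00", "2021-04-12 12:00"]),
    "conv_rate": [0.1, 0.5, 0.9],
})
entity_df = pd.DataFrame({
    "driver_id": [1001],
    "event_timestamp": pd.to_datetime(["2021-04-12 10:59:42"]),
})
joined = pd.merge_asof(
    entity_df.sort_values("event_timestamp"),
    features.sort_values("event_timestamp"),
    on="event_timestamp", by="driver_id", direction="backward",
)
print(joined)  # conv_rate == 0.5: the value from 10:00, not the later 12:00 one
</code></pre>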
<p>However, <code>datetime(2021, 4, 12, 10, 59, 42)</code> for <code>driver_id</code> 1001 does not exist in the <a href="https://github.com/feast-dev/feast/blob/master/go/internal/test/feature_repo/driver_stats.parquet" rel="nofollow noreferrer">data source</a>.</p>
|
<python><feast>
|
2025-07-25 06:31:59
| 0
| 22,964
|
mon
|
79,714,241
| 13,687,718
|
Python service memory leaks on Ubuntu at scale
|
<p>I have a Python-based service that uses libraries such as requests or curl-cffi to fetch the content of our webpages, which we are currently testing at scale.</p>
<p>As the expected response is the content of my webpage, which might be around 1 MB per request, I see an increase in heap memory usage when a couple of requests are run. The memory does not come down, even after GC runs, and this is much more evident at scale.</p>
<p>The code snippet looks something like this:</p>
<pre><code>async def scrape(self, url, counter):
response = None
response_dict = {}
try:
proxy = "some proxy info"
impersonate_val = "chrome"
headers = {} # Setting up some headers
async with AsyncSession() as s:
response = await s.get(
url,
impersonate=impersonate_val,
proxies={'http': proxy, 'https': proxy},
headers=headers,
timeout=30
)
response_dict = {"response": response.text, "http_code": response.status_code}
except Exception as e:
response_dict = await self.scrape_url(url, counter + 1)
finally:
if response:
await response.aclose()
return response_dict
</code></pre>
<p><em>Additional info:</em>
The reproducible script above is called by a ThreadPoolExecutor where each thread within the executor is associated with a long living async loop.</p>
<p><em>Continuous increase in memory:</em></p>
<p><a href="https://i.sstatic.net/trhjT6Ay.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/trhjT6Ay.png" alt="Memory-leak" /></a></p>
<p>Although there are no memory leaks as per tracemalloc, I found that more requests to the script led to heap memory increase over time.</p>
<p>After deep diving, I found that libraries such as cffi hold C-level allocations that are not cleared by Python's GC.
So I updated my script to run the following periodically:</p>
<p><code>ctypes.CDLL("libc.so.6").malloc_trim(0)</code></p>
<p>Every time this runs, the memory usage shows a big dip. If there are no incoming requests, the memory comes back to its original state; at scale, however, the dip is immediately followed by a climb back to whatever level the memory was at its peak.</p>
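<p>For reference, the periodic trim described above looks roughly like this (a sketch, assuming glibc on Linux and a made-up interval):</p>
<pre><code>import ctypes
import threading

libc = ctypes.CDLL("libc.so.6")

def trim_periodically(interval_s: float = 60.0) -> None:
    libc.malloc_trim(0)  # ask glibc to return freed arena memory to the OS
    threading.Timer(interval_s, trim_periodically, args=(interval_s,)).start()

trim_periodically()
</code></pre>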
<p>I therefore want to understand the following:</p>
<ul>
<li>Is this behavior expected with python (According to <a href="https://stackoverflow.com/questions/51938963/python-memory-not-being-released-on-linux">this</a>, this appears to be the case)?</li>
<li>What more can I do apart from or in addition to <code>ctypes.CDLL("libc.so.6").malloc_trim(0)</code> to handle this leak?</li>
</ul>
|
<python><linux><ubuntu><memory-leaks>
|
2025-07-25 05:52:23
| 0
| 832
|
mang4521
|
79,714,189
| 11,218,687
|
How to extend fixture from the base class?
|
<p>I'm trying to extend a fixture: that means not only overriding, but partially reusing the old one. For example, in the code below the fixture <code>Test.values</code> uses the fixture <code>values</code> and concatenates lists defined in both functions:</p>
<pre class="lang-py prettyprint-override"><code>import pytest
@pytest.fixture
def values():
return [1, 2, 3]
class Test:
@pytest.fixture
def values(self, values):
return values + [4, 5, 6]
def test_values(self, values):
assert 1 in values
assert 2 in values
assert 3 in values
assert 4 in values
assert 5 in values
assert 6 in values
</code></pre>
<p>I want to do the same for a fixture defined as a class member:</p>
<pre class="lang-py prettyprint-override"><code>class Base:
@pytest.fixture
def values(self):
return [1, 2, 3]
class Test(Base):
@pytest.fixture
def values(self, values):
return values + [4, 5, 6]
def test_values(self, values):
assert 1 in values
assert 2 in values
assert 3 in values
assert 4 in values
assert 5 in values
assert 6 in values
</code></pre>
<p>This time I'm getting an error:</p>
<blockquote>
<p>E recursive dependency involving fixture 'values' detected</p>
</blockquote>
<p>The definition of the second fixture doesn't see the first one, and that is because there is no function overloading in Python. Please advise a way to overcome the issue.</p>
<p><strong>Update</strong> Renaming the fixture, as the first two responders have recommended, is not a solution. Consider this usage:</p>
<pre class="lang-py prettyprint-override"><code>class Base:
@pytest.fixture
def values(self):
return [1, 2, 3]
def test_values(self, values, single_value):
assert single_value in values
class TestA(Base):
@pytest.fixture
def values(self, values):
return values + [4, 5, 6]
@pytest.fixture(params=[1, 2, 3, 4, 5, 6])
def single_value(self, request):
return request.param
class TestB(Base):
@pytest.fixture
def values(self, values):
return values + [7, 8, 9]
@pytest.fixture(params=[1, 2, 3, 7, 8, 9])
def single_value(self, request):
return request.param
</code></pre>
<p>Here I'm defining a <code>Base</code> class with some basic stuff: some basic values and a test function that is not called in scope of this class, as its name doesn't start with <code>Test</code>. Both <code>TestA</code> and <code>TestB</code> reuse the same <code>test_</code> function where they override the <code>single_value</code> fixture and try to extend the <code>values</code>. I could also define a <code>TestC(TestA)</code> that would need to extend the <code>values</code> from <code>TestA</code> that are being extended from <code>Base</code>. The idea with having a separate <code>base_values</code> fixture doesn't work here.</p>
|
<python><pytest><overloading><fixtures><pytest-fixtures>
|
2025-07-25 04:29:37
| 2
| 6,630
|
Dmitry Kuzminov
|
79,714,113
| 3,361,802
|
Error while setting a value using Actuator on kuksa
|
<p>I am using <a href="https://github.com/eclipse-kuksa/kuksa-mock-provider" rel="nofollow noreferrer">https://github.com/eclipse-kuksa/kuksa-mock-provider</a> locally to try this out. My KUKSA databroker is up and running fine.</p>
<p>When I do a GET for the actuator path 'Vehicle.Body.Windshield.Front.Wiping.System.Mode', it works fine. However, when I send a value via Actuate, it throws an error saying the provider doesn't exist. I don't have any custom code, but as I understand it, the actuator flow should work.</p>
<pre><code> ./grpcurl -plaintext -d "{\"signalIds\": [{\"path\": \"Vehicle.Body.Windshield.Front.Wiping.System.Mode\"}]}" localhost:44444 kuksa.val.v2.VAL/GetValues
{
"dataPoints": [
{
"timestamp": "2025-07-25T01:30:59.091750070Z",
"value": {
"string": "STOP_HOLD"
}
}
]
}
./grpcurl -plaintext -d '{
"signalId": {
"path": "Vehicle.Body.Windshield.Front.Wiping.System.Mode"
},
"value": {
"string": "STOP_HOLD"
}
}' localhost:44444 kuksa.val.v2.VAL/Actuate
ERROR:
Code: Unavailable
Message: Provider for vss_id 105 does not exist
</code></pre>
<p>It is the same with other actuator paths as well.</p>
|
<python><grpc><grpc-python>
|
2025-07-25 01:45:39
| 0
| 339
|
user12
|
79,713,886
| 27,596,369
|
How to set an image for class that inherits Turtle
|
<p>What I am trying to do is create a class called <code>Meteor()</code> which inherits from the <code>turtle.Turtle()</code> class. My current code is this:</p>
<pre><code>from turtle import Turtle
class Meteor(Turtle):
def __init__(self, health):
Turtle.__init__(self)
self.health = health
</code></pre>
<p>I am trying to somehow make my <code>Meteor()</code> class have a shape of an image of a meteor. I tried this:</p>
<pre><code>import tkinter
from turtle import Screen, Shape, Turtle

class Meteor(Turtle):
def __init__(self, health):
Turtle.__init__(self)
self.health = health
self.image = tkinter.PhotoImage('Spinning-asteroid-6.gif')
Screen().register_shape('meteor', Shape('image', self.image))
self.shape('meteor')
</code></pre>
<p>But when I call <code>Meteor()</code> in my main file, the turtle does not appear.</p>
|
<python><turtle-graphics>
|
2025-07-24 19:44:01
| 2
| 1,512
|
Aadvik
|
79,713,687
| 1,935,424
|
Msys2 open a terminal from a python script with given rows and columns
|
<p>I am using MSYS2, and I use a Python script invoked from the command line in a bash terminal to open another terminal with given rows and columns.</p>
<p>Here's what I got so far:</p>
<pre><code>import os

cmd = 'C:/msys64/msys2_shell.cmd ' \
'-msys2 ' \
'-shell bash ' \
'-full-path ' \
f'-where {next_dir} ' \
'-mintty -s 125x38 '
os.system(cmd)
</code></pre>
<p>Please note: the msys2_shell.cmd is part of msys2. The source can be seen here:
<a href="https://github.com/msys2/MSYS2-packages/blob/master/filesystem/msys2_shell.cmd" rel="nofollow noreferrer">https://github.com/msys2/MSYS2-packages/blob/master/filesystem/msys2_shell.cmd</a></p>
<p>see here for the doc: <a href="https://www.msys2.org/wiki/Launchers/" rel="nofollow noreferrer">https://www.msys2.org/wiki/Launchers/</a></p>
<p>This does open a new terminal, but the "125x38" is ignored, the new terminal has the same size as the default MSYS2 size.</p>
<p>If I do this <code>mintty -s 125,38</code> or <code>mintty --size=125,38</code> from the command line it works, the new terminal is the right size.</p>
<p>So it seems that the parameters for mintty are not being passed through to mintty by msys2_shell.cmd. I looked at that code, but I don't know Windows scripting very well, so I can't figure it out: <a href="https://github.com/msys2/MSYS2-packages/blob/master/filesystem/msys2_shell.cmd" rel="nofollow noreferrer">https://github.com/msys2/MSYS2-packages/blob/master/filesystem/msys2_shell.cmd</a></p>
<p>The SHELL_ARGS variable is somehow filled in but I don't understand how:</p>
<pre><code> "%WD%mintty" -i "/%CONICON%" -t "%CONTITLE%" "/usr/bin/%LOGINSHELL%" -l !SHELL_ARGS!
</code></pre>
<p>Can someone please give an example of passing arguments to mintty from the command line to msys2_shell.cmd?</p>
<p>I have tried a ton of variations, some based on recommendations from Gemini, with no luck:</p>
<pre><code>C:/msys64/msys2_shell.cmd -mintty "--size=125,38"
C:/msys64/msys2_shell.cmd -mintty "--size=125x38"
C:/msys64/msys2_shell.cmd -mintty "-s 125x38"
C:/msys64/msys2_shell.cmd -mintty -s 125x38
C:/msys64/msys2_shell.cmd "-mintty -s 125x38"
C:/msys64/msys2_shell.cmd -mintty -o -s 126x38
C:/msys64/msys2_shell.cmd -mintty -o Geometry=125x38
</code></pre>
<p>(many many more)</p>
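<p>For reference, the closest working alternative I have is skipping <code>msys2_shell.cmd</code> and launching mintty directly from Python; the mintty path and the environment variables below are assumptions about a default MSYS2 install, and this bypasses whatever extra setup <code>msys2_shell.cmd</code> normally performs:</p>
<pre class="lang-py prettyprint-override"><code>import os
import subprocess

next_dir = 'C:/msys64/home/user'  # hypothetical target directory

env = os.environ.copy()
env['MSYSTEM'] = 'MSYS'        # assumed; normally set by msys2_shell.cmd -msys2
env['CHERE_INVOKING'] = '1'    # keep the login shell in next_dir

# mintty honors -s directly when it is not wrapped by msys2_shell.cmd
subprocess.Popen(
    ['C:/msys64/usr/bin/mintty.exe', '-s', '125,38', '/usr/bin/bash', '-l'],
    cwd=next_dir,
    env=env,
)
</code></pre>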
|
<python><msys2><mintty>
|
2025-07-24 16:24:20
| 1
| 899
|
JohnA
|
79,713,572
| 18,002,913
|
How to access the base train class in YOLOv11 from the Ultralytics GitHub repository?
|
<p>I'm currently working on training a custom object detection model using <strong>YOLOv11</strong>, and I'm diving into the Ultralytics GitHub repository to better understand the internal structure of the training process.</p>
<p>I've noticed that older versions like <strong>YOLOv5</strong> or <strong>YOLOv8</strong> have well-defined training functions and sometimes even a <code>train.py</code> file with a clear entry point. However, in <strong>YOLOv11</strong>, <strong>I couldn't locate a specific base train class or a modular training component that can be directly extended or overridden</strong>.</p>
<p>I'm trying to access the core logic that handles the training loop (optimizer setup, dataloader creation, forward pass, loss calculation, and backpropagation), <strong>so that I can integrate my own metaheuristic optimization (e.g., GWO, PSO) to dynamically adjust hyperparameters.</strong></p>
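<p>For context, this is as far as I have gotten with the public API; the trainer import path is my assumption from browsing the repo and may differ between releases, so treat it as a sketch rather than the confirmed entry point:</p>
<pre class="lang-py prettyprint-override"><code>from ultralytics.models.yolo.detect import DetectionTrainer  # assumed module path


class MyTrainer(DetectionTrainer):
    """Hypothetical subclass where a metaheuristic (GWO, PSO, ...) could mutate
    hyperparameters such as self.args.lr0 before or between training runs."""
    pass


if __name__ == "__main__":
    # overrides mirror the usual CLI / YOLO("yolo11n.pt").train(...) arguments
    trainer = MyTrainer(overrides={"model": "yolo11n.pt", "data": "coco8.yaml", "epochs": 3})
    trainer.train()
</code></pre>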
<p><strong>My questions are:</strong></p>
<ol>
<li>Does YOLOv11 have a base training class or script that encapsulates
the training loop?</li>
<li>If so, where in the Ultralytics YOLOv11 repo can I find it?</li>
<li>Is the training process callable as a function or is it tied
directly to CLI execution?</li>
<li>Any tips for customizing training or plugging in an external
hyperparameter tuning loop?</li>
<li>Any help or guidance would be appreciated, especially if you've
done similar modifications or worked with YOLOv11's internals.</li>
</ol>
<p>Thanks in advance!</p>
|
<python><optimization><parameters><ultralytics><yolov11>
|
2025-07-24 14:52:34
| 0
| 1,298
|
NewPartizal
|
79,713,450
| 1,982,032
|
Why do I get nothing in output with pytesseract?
|
<p>I have installed language support for <code>chi_sim</code>:</p>
<pre class="lang-none prettyprint-override"><code> ls /usr/share/tesseract-ocr/5/tessdata
chi_sim.traineddata eng.traineddata pdf.ttf
configs osd.traineddata tessconfigs
</code></pre>
<p>You can try it by downloading <a href="https://www.dropbox.com/scl/fi/nne0iabnv8menzh9gg04l/photo.jpeg?rlkey=ichm9c3uco67x34yq994ah49r&st=y33k5pez&dl=0" rel="nofollow noreferrer">photo.jpeg</a> and using the following code:</p>
<pre><code>import cv2
from PIL import Image
import pytesseract
from pyocr import tesseract
image_path = 'photo.jpeg'
image = cv2.imread(image_path)
image = Image.fromarray(image)
text = pytesseract.image_to_string(image, lang='chi_sim')
print(text)
</code></pre>
<p>Why do I get nothing in the output with the above code?</p>
<pre><code>>>> print(pytesseract.get_languages(config=''))
['chi_sim', 'eng', 'osd']
</code></pre>
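<p>For reference, this is the kind of preprocessing I would try next before concluding the language data is at fault: grayscale plus Otsu thresholding and an explicit page segmentation mode. The <code>--psm 6</code> choice is an assumption, not a known fix for this particular photo:</p>
<pre class="lang-py prettyprint-override"><code>import cv2
import pytesseract
from PIL import Image

image = cv2.imread('photo.jpeg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# Otsu picks the threshold automatically; assumes dark text on a light background
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
text = pytesseract.image_to_string(
    Image.fromarray(binary),
    lang='chi_sim',
    config='--psm 6',  # assume a single uniform block of text
)
print(text)
</code></pre>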
|
<python><ocr><python-tesseract>
|
2025-07-24 13:33:11
| 2
| 355
|
showkey
|
79,713,188
| 8,445,557
|
Issue on call FastMCP Server using Postman
|
<p>I have this simple MCP Server:</p>
<pre class="lang-py prettyprint-override"><code>from fastmcp import FastMCP, Context
mcp = FastMCP("Demo π")
@mcp.tool
def add(a: int, b: int) -> int:
"""Add two numbers"""
return a + b
@mcp.tool
def hello(ctx: Context):
ctx.info("in Hello!")
return {"Respone": "Hello!"}
# Static resource
@mcp.resource("config://version")
def get_version(ctx: Context):
ctx.info("Sono in get_version!")
return "2.0.1"
if __name__ == "__main__":
mcp.run(transport='streamable-http')
</code></pre>
<p>And I wish to test it using a REST call, but when I try:</p>
<pre class="lang-bash prettyprint-override"><code>curl --request POST \
--url http://localhost:8000/mcp/tool/hello \
--header 'accept: application/json, text/event-stream' \
--header 'content-type: application/json' \
--header 'x-session-id: my-test-session-123' \
--data '{
"jsonrpc": "2.0",
"method": "hello",
"id": 1
}'
</code></pre>
<p>I got this error:</p>
<pre class="lang-json prettyprint-override"><code>{
"jsonrpc": "2.0",
"id": "server-error",
"error": {
"code": -32600,
"message": "Bad Request: Missing session ID"
}
}
</code></pre>
<p>How can I resolve it?</p>
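<p>As a point of comparison, here is how the same tool would be called through the bundled MCP client, which performs the initialize handshake and session negotiation that the raw POST above is missing; this sketch assumes fastmcp 2.x and the default <code>/mcp</code> path:</p>
<pre class="lang-py prettyprint-override"><code>import asyncio
from fastmcp import Client

async def main():
    # The client negotiates the session ID before calling the tool
    async with Client("http://localhost:8000/mcp") as client:
        result = await client.call_tool("hello")
        print(result)

asyncio.run(main())
</code></pre>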
|
<python><model-context-protocol>
|
2025-07-24 10:37:22
| 2
| 361
|
Stefano G.
|
79,713,113
| 4,844,184
|
Streaming the reading of a very large compressed JSON file in Python
|
<p>I have a very large (too large to hold in RAM) <code>.json.zstd</code> file that I built iteratively with a generator of texts <code>data_chunks</code>.
A completely toy example of such a generator is:</p>
<pre class="lang-json prettyprint-override"><code>[{"text": "very very"}," very very", " very very very", {"text": " very very long"}]
</code></pre>
<p>Obviously here in this toy example it can hold in RAM.</p>
<pre class="lang-py prettyprint-override"><code>import zstandard as zstd
# real code used to write the very large file
def write_to_file(output_path, data_chunks, level=22):
cctx = zstd.ZstdCompressor(level=level)
with open(output_path, 'wb') as f_out:
with cctx.stream_writer(f_out) as compressor:
for idx, chunk in enumerate(data_chunks):
if isinstance(chunk, str):
pass
elif isinstance(chunk, dict):
chunk = chunk["text"]
else:
raise ValueError(f"Unrecognized chunk {type(chunk)}")
if idx == 0:
chunk = '{"text": "' + chunk
chunk = chunk.encode("utf-8")
compressor.write(chunk)
compressor.write('"}'.encode("utf-8"))
# toy example
write_to_file("test.json.zstd", [{"text": "very very"}," very very", " very very very", {"text": " very very long"}], level=22)
</code></pre>
<p>This compressed file now just holds a JSON file <code>{"text": "very very very long text ... "}</code>
We can convince ourselves of this by reading it easily:</p>
<pre class="lang-py prettyprint-override"><code>import io
import json
import zstandard as zstd
def read_zstd_lines(input_path):
dctx = zstd.ZstdDecompressor()
with open(input_path, 'rb') as compressed:
with dctx.stream_reader(compressed) as decompressor:
text_stream = io.TextIOWrapper(decompressor, encoding='utf-8')
for line in text_stream:
if line.strip():
yield json.loads(line)
next(read_zstd_lines("test.json.zstd"))
</code></pre>
<p>I am NOT looking for recommendations of tools; I am looking for any solution that would read this file into RAM <em>iteratively</em>, by (potentially size-adjustable) chunks (or just chunks no larger than a fixed size), but that would work <em>on the compressed file</em> as in the example above. <a href="https://github.com/ICRAR/ijson" rel="nofollow noreferrer">ijson</a> does not work on a zstd-compressed stream as far as I can tell.</p>
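<p>To make the size constraint concrete, this is the shape of fixed-size chunked reading I am after, sketched with the same <code>zstandard</code> stream reader (the 1 MiB chunk size is an arbitrary placeholder, and this yields plain text pieces rather than parsed JSON):</p>
<pre class="lang-py prettyprint-override"><code>import io
import zstandard as zstd

def iter_decompressed_chunks(input_path, chunk_chars=1 << 20):
    """Yield the decompressed text in pieces of at most chunk_chars characters."""
    dctx = zstd.ZstdDecompressor()
    with open(input_path, 'rb') as compressed:
        with dctx.stream_reader(compressed) as decompressor:
            # TextIOWrapper handles UTF-8 sequences split across chunk boundaries
            text_stream = io.TextIOWrapper(decompressor, encoding='utf-8')
            while True:
                piece = text_stream.read(chunk_chars)
                if not piece:
                    break
                yield piece

for piece in iter_decompressed_chunks('test.json.zstd'):
    print(len(piece))
</code></pre>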
|
<python><json><streaming><zstd><ijson>
|
2025-07-24 09:36:53
| 1
| 2,566
|
jeandut
|
79,712,795
| 11,085,329
|
Python 3 changes variable even after if condition fails
|
<p>I have a kaggle notebook (<a href="https://www.kaggle.com/code/haroonazharkhan/exercise-underfitting-and-overfitting" rel="nofollow noreferrer">link here</a>). The notebook iterates over a list of numbers (<code>candidate_max_leaf_nodes</code>) and then gets an MAE for each number by passing it as a parameter (ignore the other parameters) to the <code>get_mae</code> function.</p>
<p>The goal is to store the value from the list that generates the smallest value of mae.</p>
<p>Although I have a condition to only change <code>min_mae</code> and <code>min_val</code> (the value from the list that produces the smallest MAE), it so happens that even when the condition for changing them, <code>curr_mae < min_mae</code>, fails, the values still change.</p>
<p>The code that produces the issue.</p>
<pre><code>candidate_max_leaf_nodes = [5, 25, 50, 100, 250, 500]
# Write loop to find the ideal tree size from candidate_max_leaf_nodes
min_mae = float('inf')
min_val = 0
for s in candidate_max_leaf_nodes:
print(f"s = {s}")
min_val = s
curr_mae = get_mae(s,train_X, val_X, train_y, val_y)
print(f" Preloop || curr mae = {curr_mae}")
print(f" Preloop || min mae = {min_mae}")
print(f" Preloop || min val = {min_val}")
print(f" Preloop || cond = {curr_mae < min_mae}")
if(curr_mae < min_mae):
min_mae = curr_mae
min_val = s
else:
continue
print(f" Post || min_mae = { min_mae}")
print(f" Post || min_val = { min_val}")
# Store the best value of max_leaf_nodes (it will be either 5, 25, 50, 100, 250 or 500)
best_tree_size = min_val
# Check your answer
step_1.check()
</code></pre>
<p>Here are the logs or prints from the execution of the code:</p>
<pre><code>s = 5
Preloop || curr mae = 35044.51299744237
Preloop || min mae = inf
Preloop || min val = 5
Preloop || cond = True
s = 25
Preloop || curr mae = 29016.41319191076
Preloop || min mae = 35044.51299744237
Preloop || min val = 25
Preloop || cond = True
s = 50
Preloop || curr mae = 27405.930473214907
Preloop || min mae = 29016.41319191076
Preloop || min val = 50
Preloop || cond = True
s = 100
Preloop || curr mae = 27282.50803885739
Preloop || min mae = 27405.930473214907
Preloop || min val = 100
Preloop || cond = True
s = 250
Preloop || curr mae = 27893.822225701646
Preloop || min mae = 27282.50803885739
Preloop || min val = 250
Preloop || cond = False
s = 500
Preloop || curr mae = 29454.18598068598
Preloop || min mae = 27282.50803885739
Preloop || min val = 500
Preloop || cond = False
Post || min_mae = 27282.50803885739
Post || min_val = 500
</code></pre>
<p>As you can see from the logs, the values for <code>min_val</code> and <code>min_mae</code> are altered when the value of <code>s</code> is <code>250</code> even though the condition is false, so they shouldn't change.</p>
<p>Note: I looked at the prints closely and it seems that the values for <code>min_mae</code> and <code>min_val</code> are updated even before the if condition.</p>
<p>If anyone is having issues running the notebook, then run all the cell before it.</p>
<p>Can someone please help me understand what I'm doing wrong here?</p>
|
<python><machine-learning><jupyter-notebook><scope><kaggle>
|
2025-07-24 04:25:14
| 1
| 450
|
HAK
|
79,712,728
| 16,312,980
|
Why does polars kept killing the python kernel when joining two lazy frames and collecting them?
|
<p>I have one dataframe, bos_df_3, with about 30k+ rows and another, taxon_ranked_only, with 6 million rows. When I tried to join them using:</p>
<pre><code>matching_df = (
pl.LazyFrame(bos_df_3)
.join(
other=taxon_ranked_only,
on=["genericName", "specificEpithet", "infraspecificEpithet"],
how="right",
)
.rename({"taxonID": "matched_taxonID"})
)
</code></pre>
<p>And then:</p>
<p><code>matching_df.collect()</code></p>
<p>it kills the kernel of my marimo notebook.
I tried to use gpu=True or streaming=True but it still crashes.<br />
I also tried slicing taxon_ranked_only into smaller portions of length 1 million but it still crashes.<br />
This problem seems to be similar to this: <a href="https://stackoverflow.com/questions/79487658/python-polars-numerous-joins-crashing">python polars numerous joins crashing</a></p>
<p>Edit: it was because, in some lines not shown before this snippet, I had previously imputed nulls with empty strings, which caused the output to balloon to a 91 GB CSV file; I found that out when using <code>.sink_csv()</code>. Sorry.</p>
|
<python><memory><python-polars><polars>
|
2025-07-24 02:32:17
| 1
| 426
|
Ryan
|
79,712,538
| 199,818
|
My python project that requires a plugin that I also wrote fails on poetry install because the "version solving failed"
|
<p>I am using the following versions:</p>
<pre><code>Poetry (version 2.1.1)
Python 3.13.2
pip 25.0.1 from <correct path to the pip location in venv>
</code></pre>
<p>I have written an application plugin for poetry. That project has a section in the <code>pyproject.toml</code> file like this:</p>
<pre><code>[project]
name = "mycoolplugin"
version = "1.0.0"
...
[tool.poetry.plugins."poetry.application.plugin"]
mycoolplugin = "mycoolplugin:CoolPlugin"
</code></pre>
<p>It builds and creates the expected wheel. I can run and test its commands in that project and during tests run by pytest.</p>
<p>In the project that I want to use the plugin the <code>pyproject.toml</code> file contains:</p>
<pre><code>[tool.poetry.requires-plugins]
mycoolplugin = ">=1.0.0"
</code></pre>
<p>I have not yet published the plugin to the company pypi site since I am still testing this. So, I am trying to do a manual install.</p>
<p>Once I create the venv, I activate it with <code>eval $(poetry env activate 2>/dev/null)</code>, which is where I get the version list above. I try to install the plugin with
<code>pip install ../mycoolplugin/dist/mycoolplugin-1.0.0-py3-none-any.whl</code>.
Now <code>pip list</code> shows my plugin there with a version of 1.0.0.</p>
<p>However, <code>poetry install</code> fails:</p>
<pre><code>poetry install
Removing the project's plugin cache because it is outdated
Ensuring that the Poetry plugins required by the project are available...
The following Poetry plugins are required by the project but are not installed in Poetry's environment:
- mycoolwplugin (>=1.0.0)
Installing Poetry plugins only for the current project...
Updating dependencies
Resolving dependencies...
Because poetry-project-instance depends on mycoolplugin (>=1.0.0) which doesn't match any versions, version solving failed.
</code></pre>
<p>Trying to add it to poetry also fails:</p>
<pre><code>poetry add mycoolplugin
Could not find a matching version of package mycoolplugin
</code></pre>
<p>The most confusing thing is that I had this working at some point; I was doing some testing, deleted all the built binaries to start clean, and started seeing this. I don't see what I am missing: the names match up, the versions match up, and the pip install command succeeds.</p>
<p>Two additional notes:</p>
<ol>
<li>yes I changed the name of the plugin since the company I work for will not allow even names to leak out.</li>
<li>Both projects pull dependencies from an internal PyPI mirror, specified as follows:</li>
</ol>
<pre><code>[[tool.poetry.source]]
name = "company-pypi"
url = "https://artifactory.company.com/api/pypi/pypi.python.org/simple"
priority = "primary"
</code></pre>
<p>What is going on? Why doesn't poetry find the plugin when it is available in pip locally? Is it really complaining about version mismatch or is it really unable to find it? What can I do to fix this?</p>
|
<python><python-3.x><pip><python-poetry>
|
2025-07-23 20:37:32
| 1
| 857
|
Ian Leslie
|
79,712,396
| 8,297,745
|
How to get a billing cycle period between the 26th of the previous month and the 25th of the current month using Python (timezone-aware)?
|
<h2>The Problem</h2>
<p>I'm building a billing system in Django, and I need to calculate the billing period for each invoice.</p>
<p>Our business rule is simple:</p>
<ul>
<li>The billing cycle starts on the <strong>26th of the previous month</strong> at midnight (00:00:00);</li>
<li>And ends on the <strong>25th of the current month</strong> at 23:59:59.</li>
</ul>
<p>For example, if the current date is <code>2025-07-23</code>, the result should be:</p>
<pre class="lang-py prettyprint-override"><code>start = datetime(2025, 6, 26, 0, 0, 0)
end = datetime(2025, 7, 25, 23, 59, 59)
</code></pre>
<p>We're using Django, so the dates must be <strong>timezone-aware</strong> (UTC preferred), as Django stores all datetime fields in UTC.</p>
<p>The problem is: when I run my current code (below), the values saved in the database are shifted, like <code>2025-06-26T03:00:00Z</code> instead of <code>2025-06-26T00:00:00Z</code>.</p>
<h2>What We Tried</h2>
<p>We tried the following function:</p>
<pre class="lang-py prettyprint-override"><code>from datetime import datetime, timedelta
from dateutil.relativedelta import relativedelta
def get_invoice_period(reference_date: datetime = None) -> tuple[datetime, datetime]:
if reference_date is None:
reference_date = datetime.now()
end = (reference_date - timedelta(days=1)).replace(hour=23, minute=59, second=59, microsecond=0)
start = (reference_date - relativedelta(months=1)).replace(day=26, hour=0, minute=0, second=0, microsecond=0)
return start, end
</code></pre>
<p>But this causes timezone problems, and <code>datetime.now()</code> is not timezone-aware in Django. So when we save these values to the database, Django converts them to UTC, shifting the time (e.g., +3 hours).</p>
<p>We looked at:</p>
<ul>
<li><a href="https://stackoverflow.com/a/38200717/8297745">this answer</a> from Mitghi</li>
<li>and <a href="https://stackoverflow.com/questions/64825983/getting-the-date-as-yyyy-mm-minus-one-month">this question</a>, which was closed</li>
</ul>
<p>But none of them solves the business logic <em>and</em> works well with Django timezone-aware datetimes.</p>
<h2>What We Expected</h2>
<p>We expect a function that:</p>
<ul>
<li>Returns a tuple with two <code>datetime</code> objects;</li>
<li>The first is the 26th of the <strong>previous month</strong>, at midnight;</li>
<li>The second is the 25th of the <strong>current month</strong>, at 23:59:59;</li>
<li>Both datetimes are timezone-aware (<code>pytz.UTC</code> or Django's <code>timezone.now()</code> preferred).</li>
</ul>
<p><strong>Edit</strong> - Solution</p>
<pre class="lang-py prettyprint-override"><code>from django.utils import timezone
from dateutil.relativedelta import relativedelta
def get_invoice_period(reference_date=None):
if reference_date is None:
reference_date = timezone.localtime()
start = reference_date - relativedelta(months=1)
start = start.replace(day=26, hour=0, minute=0, second=0, microsecond=0)
end = reference_date.replace(day=25, hour=23, minute=59, second=59, microsecond=0)
return start, end
</code></pre>
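<p>A quick sanity check of the function above with an explicit timezone-aware reference date (aliasing the stdlib <code>timezone</code> so it does not clash with <code>django.utils.timezone</code>), which gives the values we expect for 2025-07-23:</p>
<pre class="lang-py prettyprint-override"><code>from datetime import datetime, timezone as dt_timezone

ref = datetime(2025, 7, 23, 12, 0, tzinfo=dt_timezone.utc)
start, end = get_invoice_period(ref)
print(start)  # 2025-06-26 00:00:00+00:00
print(end)    # 2025-07-25 23:59:59+00:00
</code></pre>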
|
<python><django><datetime><timezone><billing>
|
2025-07-23 18:13:01
| 1
| 849
|
Raul Chiarella
|
79,712,154
| 2,137,996
|
Where should development dependencies go in a post-PEP 735 pyproject.toml?
|
<p>Much has changed in the world of Python dependency management in the last few years and I am looking for some authority regarding where to specify a few classes of dependency in the <code>pyproject.toml</code> file. For context, I am working on a Python project that includes a native extension module.</p>
<p>Here are the classes:</p>
<ol>
<li>Dependencies of the installed package (e.g. libraries like <code>imageio</code> or <code>httpx</code>)</li>
<li>Dependencies of the build system (e.g. <code>scikit-build-core</code> or <code>pybind11</code>)</li>
<li>Dependencies of the test environment (e.g. <code>pytest</code>)</li>
<li>Developer tools that would typically be managed by <code>pipx</code> or <code>uvx</code> (e.g. <code>tbump</code>, <code>cmake</code>, <code>ninja</code>, <code>ruff</code>, etc.)</li>
</ol>
<p>It is fairly clear that category (1) belongs in <code>[project.dependencies]</code>. The only place that makes sense for (2) is <code>[build-system.requires]</code>. However, even for (2) it is not clear where to place these such that <code>uv</code> will install them into the present environment without duplication (e.g. specifying <code>pybind11</code> in two places).</p>
<p>Things get murkier with (3). I have found references to <code>[project.optional-dependencies]</code> (PEP 621) which can be installed with the <code>--extra</code> flag at the <code>uv</code> command line. However, there is now PEP 735, which standardizes the <code>[dependency-groups]</code> section, whose <code>dev</code> group <code>uv</code> treats specially. This <em>feels</em> like the right place, but I am not sure how best to use it.</p>
<p>What's really throwing me for a loop is (4). These tools are usually expected to be ambient on the system and the common advice is to manage them with <code>pipx</code> (or <code>uvx</code>) so they can be isolated from one another. Where do these belong? In a separate <code>[dependency-groups]</code> group? As an entry in <code>[project.optional-dependencies]</code>?</p>
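<p>To make (3) and (4) concrete, here is one possible <code>[dependency-groups]</code> layout under PEP 735; the group names, and the choice to route some developer tools through a group rather than <code>pipx</code>/<code>uvx</code>, are illustrative assumptions rather than a settled recommendation:</p>
<pre><code>[dependency-groups]
test = ["pytest"]
# PEP 735 lets one group include another
dev = [
    { include-group = "test" },
    "ruff",
    "tbump",
]
</code></pre>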
|
<python><dependency-management><pyproject.toml>
|
2025-07-23 15:09:04
| 0
| 20,663
|
Alex Reinking
|
79,712,006
| 1,659,527
|
Python requests library refusing to accept correct verify certificate
|
<p>The following application performs a basic HTTP GET request against <code>https://google.com</code>, retrieves the peer certificate and saves it in PEM format into a file called <code>cert.pem</code>.</p>
<p>After that it attempts the connect to <code>https://google.com</code> again, but this time specifies the saved certificate (<code>cert.pem</code>) in the <code>verify</code> parameter:</p>
<pre class="lang-py prettyprint-override"><code>import requests
from cryptography import x509
from cryptography.hazmat.primitives import serialization
with requests.get("https://google.com", stream=True) as response:
cert = response.raw.connection.sock.getpeercert(binary_form =True)
cert_der = x509.load_der_x509_certificate(cert)
cert_pem = cert_der.public_bytes(serialization.Encoding.PEM)
with open("cert.pem", "wb") as f:
f.write(cert_pem)
with requests.get("https://google.com", stream=True, verify='./cert.pem') as response:
cert = response.raw.connection.sock.getpeercert(binary_form =True)
</code></pre>
<p>When I execute this application the second <code>requests.get(...)</code> invocation fails with the following error:</p>
<pre><code>requests.exceptions.SSLError: HTTPSConnectionPool(host='google.com', port=443): Max retries exceeded with url: / (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1131)')))
</code></pre>
<p>This doesn't make sense, as it's the certificate that was provided by the same webserver in the first request. I have validated that this works using the <code>curl</code> cli:</p>
<pre class="lang-bash prettyprint-override"><code>curl -v --cacert ./cert.pem --capath ./ https://www.google.com
</code></pre>
|
<python><python-requests><cryptography>
|
2025-07-23 13:43:14
| 0
| 3,307
|
jwa
|
79,711,979
| 850,781
|
Moving from matplotlib to pyqtgraph: how to reuse PlotWidgets?
|
<p>I have a large gui app based on <code>matplotlib</code> which is horribly slow (on windows) so I am trying to re-host it on <code>pyqtgraph</code> (which seems to be <em>much</em> faster).</p>
<p>I am facing the following problem: I need to plot <em>very</em> different things (vastly different ranges, axes formatting &c: e.g., sometimes X is $$ from 0 to 1B and sometimes X is time from midnight to now, and sometimes date from last year to now) in the same screen area.</p>
<p>With <code>matplotlib</code> I re-use the same <code>Axes</code> object (pre-inserted in the <code>subplot2grid</code>), because after <code>ax.clear()</code> it is as new and I can set axis formatters and the <code>ax.plot</code> sets <code>xlim</code> and <code>ylim</code> correctly.</p>
<p>With <code>pyqtgraph</code> re-using is much harder because I cannot reset <code>axisItems</code> in a <code>PlotWidget</code>, and <code>pw.clear()</code> does not remove title, axis labels, grids, ticks.</p>
<p>Should I create a new <code>PlotWidget</code> for each plot type and replace it in the layout?</p>
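<p>For reference, this is the manual reset I would otherwise have to do between plots; it is only a sketch of undoing the per-plot decorations by hand, and whether it covers everything (custom <code>axisItems</code> in particular) is exactly what I am unsure about:</p>
<pre class="lang-py prettyprint-override"><code>def reset_plot(pw):
    """Rough equivalent of matplotlib's ax.clear() for a pyqtgraph PlotWidget."""
    pw.clear()                    # removes only the data items
    pw.setTitle(None)             # hide the title again
    pw.setLabel('bottom', '')     # approximate: blank out the axis labels
    pw.setLabel('left', '')
    pw.showGrid(x=False, y=False)
    pw.enableAutoRange()          # let the next plot rescale itself
</code></pre>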
|
<python><matplotlib><pyqt6><pyqtgraph>
|
2025-07-23 13:30:46
| 0
| 60,468
|
sds
|
79,711,862
| 7,446,528
|
Parse a CSV translation file that contains "None" as a standalone string
|
<p>I am working on a large CSV file that contains numeric IDs for translations followed by entries for different languages. These entries represent localization strings in an application. I was tasked with parsing and making changes to these entries, but I have encountered an issue when the string "None" appears as an entry in the CSV file.</p>
<p>For example I prepared the following test CSV file (<code>test.csv</code>):</p>
<pre><code>2759,Keine,Keine,null,nan,,Nessuno,Ε½Γ‘dnΓ©,Ingen,Nenhum,Nada,ζ ,η‘γ,Brak,η‘
2762,Keine,"None",ΠΠ΅Ρ,Aucun,None,Nessuno,Ε½Γ‘dnΓ©,Ingen,Nenhum,Nada,ζ ,η‘γ,Brak,η‘
</code></pre>
<p>I made some manual changes to the entries just so I can test the pandas parser with different <code>null</code> connotations and see how it reacts. One entry is empty because it's normal to have empty entries; this sometimes happens while translations are being worked on, and it is important that we parse the empty entries as empty strings.</p>
<p>I ran the following script:</p>
<pre><code>import os, pandas
testDir = "{some_directory}\\Test"
testCSV = os.path.join(testDir, "test.csv")
csvContent = None
# read export CSV translations file
if os.path.exists(testCSV):
csvContent = pandas.read_csv(testCSV, sep=',', header = None, dtype=str)
else:
print("Error", f"Could not locate a suitable translations file at:\n{testCSV}")
print(f"Translation CSV file read with {len(csvContent)} rows and {len(csvContent.columns)} columns")
print(csvContent)
</code></pre>
<p>Output would look like:</p>
<pre><code>Translation CSV file read with 2 rows and 15 columns
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
0 2759 Keine Keine NaN NaN NaN Nessuno Ε½Γ‘dnΓ© Ingen Nenhum Nada ζ η‘γ Brak η‘
1 2762 NaN NaN ΠΠ΅Ρ Aucun NaN Nessuno Ε½Γ‘dnΓ© Ingen Nenhum Nada ζ η‘γ Brak η‘
</code></pre>
<p>As expected, all instances of <code>None</code>, <code>"None"</code>, <code>nan</code>, <code>null</code>, and empty entries are parsed as <code>NaN</code>. But I want to make sure that the strings are parsed literally and that I capture the empty entries as empty strings.</p>
<p>I tried first using <code>fillna()</code> like this:</p>
<pre><code>csvContent[::] = csvContent[::].fillna('')
</code></pre>
<p>This would help me preserve the empty fields but also erases any instance of <code>None</code>. I tried to replace any <code>None</code> with a filler string and then convert back using:</p>
<pre><code>csvContent[::] = csvContent[::].replace([None], "$None").fillna('').replace("$None", "None")
</code></pre>
<p>I did this assuming that the pandas parser would distinguish between empty and <code>[None]</code> entries and fix my issue with the "None" localization strings. However, this turns everything into the "None" string and the output looks like this:</p>
<pre><code> 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
0 2759 Keine Keine None None None Nessuno Ε½Γ‘dnΓ© Ingen Nenhum Nada ζ η‘γ Brak η‘
1 2762 None None ΠΠ΅Ρ Aucun None Nessuno Ε½Γ‘dnΓ© Ingen Nenhum Nada ζ η‘γ Brak η‘
</code></pre>
<p>At this point I am stumped: is there a way I can parse the file literally, as is, preserving the values? My fallback will be to parse the document manually, but then I would lose all the nice pandas functionality.</p>
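<p>For what it's worth, the direction I was going to try next is disabling the default NA detection at read time instead of repairing values afterwards; a sketch of the relevant <code>read_csv</code> option, reusing <code>testCSV</code> from above:</p>
<pre><code>import pandas

# keep_default_na=False stops pandas from treating "None", "null", "nan", "" etc.
# as missing values, so they arrive as literal strings and empty fields stay ""
csvContent = pandas.read_csv(testCSV, sep=',', header=None, dtype=str,
                             keep_default_na=False)
</code></pre>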
|
<python><pandas><dataframe>
|
2025-07-23 12:14:28
| 0
| 1,180
|
Hadi Farah
|
79,711,807
| 72,437
|
Is It Safe to Use stream() Inside a Firestore Transaction in Python?
|
<p>While conducting a code review today, multiple AIs advised against using the <code>stream()</code> function within a Firestore transaction in Python.</p>
<p>However, I couldn't find any mention of this limitation in the official documentation:</p>
<p><a href="https://firebase.google.com/docs/firestore/manage-data/transactions#python" rel="nofollow noreferrer">https://firebase.google.com/docs/firestore/manage-data/transactions#python</a></p>
<p>I also tested the following code, and it runs without errors or crashes.</p>
<p>Could someone clarify whether this AI-generated advice is still valid or outdated?</p>
<p>Is there an undocumented limitation I should be aware of?</p>
<p>Thanks!</p>
<pre><code>def update_markdown(transaction: firestore.Transaction, note_ref: firestore.DocumentReference, markdown: str):
# Reference to the transcript subcollection
markdown_ref = note_ref.collection('markdowns')
#
# Remove existing chunks
#
existing_docs = markdown_ref.stream()
# Delete each document within the transaction
for doc in existing_docs:
doc_ref = markdown_ref.document(doc.id)
transaction.delete(doc_ref)
print("I THOUGHT WE SHOULD CRASH. BUT IT DIDN'T. WHY?")
chunks = utils.chunk_text(markdown, MAX_CHUNK_SIZE)
# Write each chunk to the transcript subcollection within the transaction
for i, chunk in enumerate(chunks):
chunk_doc_ref = markdown_ref.document()
transaction.set(chunk_doc_ref, {
'text': chunk,
'order': i + 1
})
@https_fn.on_request(
cors=options.CorsOptions(
cors_origins=["*"],
cors_methods=["POST"],
)
)
def edit_markdown(req: https_fn.Request) -> https_fn.Response:
if req.method != 'POST':
return https_fn.Response(
json.dumps({"error": "Only POST requests are allowed"}),
status=405
)
request_json = req.get_json(silent=True)
if not request_json:
return https_fn.Response(
json.dumps({"error": "Invalid request data"}),
status=400
)
uid = request_json['data']['uid']
doc_id = request_json['data']['doc_id']
new_markdown = request_json['data']['markdown']
if utils.is_none_or_trimmed_empty(new_markdown):
return https_fn.Response(
json.dumps({"error": "Invalid request data"}),
status=400
)
if len(new_markdown) > 524288:
return https_fn.Response(
json.dumps({"error": "Invalid request data"}),
status=400
)
db = firestore.client()
# Prepare timestamp
current_timestamp = int(time.time() * 1000)
try:
@firestore.transactional
def update_note(transaction):
# References
note_ref = (
db.collection('users')
.document(uid)
.collection('notes')
.document(doc_id)
)
current_markdown = markdown_utils.get_markdown(
transaction=transaction,
note_ref=note_ref
)
# 1. Title change logic
if new_markdown != current_markdown:
original_markdown = markdown_utils.get_original_markdown(
transaction=transaction,
note_ref=note_ref
)
if new_markdown == original_markdown:
# 2a. If user reverted back to the original markdown, remove it
markdown_utils.delete_original_markdown(
transaction=transaction,
note_ref=note_ref
)
else:
# 2b. If this is the first time changing away from the original, save it
markdown_utils.insert_original_markdown_if_not_exist(
transaction=transaction,
note_ref=note_ref,
original_markdown=current_markdown
)
# 4. Update markdown
markdown_utils.update_markdown(
transaction=transaction,
note_ref=note_ref,
markdown=new_markdown
)
# 4. Update timestamps
transaction.update(
note_ref,
{
'modified_timestamp': current_timestamp,
'synced_timestamp': current_timestamp
}
)
# Run in a transaction
transaction = db.transaction()
update_note(transaction)
response_data = {
"data": {
"modified_timestamp": current_timestamp,
"synced_timestamp": current_timestamp
}
}
return https_fn.Response(
json.dumps(response_data),
status=200
)
except Exception as e:
# Log the error with more context
print(f"Error updating note markdown: {str(e)}")
error_message = {
"data": {
"error": f"An error occurred: {str(e)}"
}
}
return https_fn.Response(
json.dumps(error_message),
status=500
)
</code></pre>
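<p>One variant I considered while reviewing this is passing the transaction to the query read explicitly, so the chunk listing is at least tied to the transaction; a minimal sketch of just the relevant part (whether this is required, or merely cosmetic, is part of what I am asking):</p>
<pre><code>def update_markdown(transaction, note_ref, markdown: str):
    markdown_ref = note_ref.collection('markdowns')
    # Read the existing chunk documents as part of the transaction
    # instead of as an independent stream()
    existing_docs = markdown_ref.stream(transaction=transaction)
    for doc in existing_docs:
        transaction.delete(markdown_ref.document(doc.id))
</code></pre>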
|
<python><firebase><google-cloud-firestore>
|
2025-07-23 11:37:33
| 1
| 42,256
|
Cheok Yan Cheng
|
79,711,708
| 736,221
|
Is it possible to use retries and deadletter queues in Celery?
|
<p>I have successfully got Celery to put failed tasks onto deadletter queues in RabbitMQ, thanks to this answer <a href="https://stackoverflow.com/a/46128311/736221">https://stackoverflow.com/a/46128311/736221</a> and this article <a href="https://medium.com/@hengfeng/how-to-create-a-dead-letter-queue-in-celery-rabbitmq-401b17c72cd3" rel="nofollow noreferrer">https://medium.com/@hengfeng/how-to-create-a-dead-letter-queue-in-celery-rabbitmq-401b17c72cd3</a></p>
<p>This doesn't appear to work when using retries in Celery: <a href="https://docs.celeryq.dev/en/stable/userguide/tasks.html#retrying" rel="nofollow noreferrer">https://docs.celeryq.dev/en/stable/userguide/tasks.html#retrying</a></p>
<p>The retries happen but when the task finally fails it doesn't appear on the deadletter queue. I tried to detect the number of retries in the task then raise <code>Reject</code> and have added a custom task class and raised <code>Reject</code> in the <code>on_failure</code> handler (<a href="https://docs.celeryq.dev/en/stable/userguide/tasks.html#on_failure" rel="nofollow noreferrer">https://docs.celeryq.dev/en/stable/userguide/tasks.html#on_failure</a>) but to no avail.</p>
<p>Is there a way to get this to work?</p>
|
<python><rabbitmq><celery>
|
2025-07-23 10:23:43
| 1
| 687
|
grahamlyons
|
79,711,502
| 12,415,855
|
WebGL is disabled error when opening bing maps with python selenium?
|
<p>I try to open Bing Maps using Python Selenium with this code:</p>
<pre><code>import os, sys
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.chrome.service import Service
options = Options()
options.add_argument("start-maximized")
options.add_argument('--use-gl=swiftshader')
options.add_argument('--enable-unsafe-webgpu')
options.add_argument('--enable-unsafe-swiftshader')
options.add_argument("--disable-3d-apis")
options.add_argument('--disable-gpu')
options.add_argument('--no-sandbox')
options.add_argument('--disable-dev-shm-usage')
options.add_argument("start-maximized")
options.add_argument('--log-level=3')
options.add_experimental_option("prefs", {"profile.default_content_setting_values.notifications": 1})
options.add_experimental_option("excludeSwitches", ["enable-automation"])
options.add_experimental_option('excludeSwitches', ['enable-logging'])
options.add_experimental_option('useAutomationExtension', False)
options.add_argument('--disable-blink-features=AutomationControlled')
srv=Service()
baseLink = "https://www.bing.com/maps"
driver = webdriver.Chrome (service=srv, options=options)
driver.get (baseLink)
input("Press!")
</code></pre>
<p>It worked fine for a long time, but suddenly I get this error:</p>
<p><a href="https://i.sstatic.net/eineWxvI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eineWxvI.png" alt="enter image description here" /></a></p>
<p>How can I open Bing Maps using Python and Selenium?</p>
|
<python><selenium-webdriver><webgl>
|
2025-07-23 07:59:23
| 1
| 1,515
|
Rapid1898
|
79,711,017
| 27,596,369
|
How to make turtle detect which side it collided on
|
<p>Here is an MRE of my code:</p>
<pre><code>from turtle import Turtle, Screen
import time
import random
# --------------- GLOBAL VARIABLES --------------------
speed_x = 20
speed_y = 20
bricks = []
def check_collision():
global speed_x, speed_y, bricks
if ball.xcor() > (window.window_width() / 2) - 20 or ball.xcor() < -(window.window_width() / 2) + 20:
speed_x *= -1
elif ball.ycor() > (window.window_height() / 2) - 20 or ball.ycor() < -(window.window_height() / 2) + 20:
speed_y *= -1
elif abs(paddle.xcor() - ball.xcor()) < 20/2 + 150/2 and abs(paddle.ycor() - ball.ycor()) < 20/2 + 60/2:
if abs(paddle.xcor() - ball.xcor()) == 20/2 + 150/2:
speed_x *= -1
else:
speed_y *= -1
def game_loop():
ball.goto(ball.xcor() + speed_x, ball.ycor() + speed_y)
check_collision()
window.update()
window.ontimer(game_loop, 1)
def move_right():
paddle.setx(paddle.xcor() + 25)
def move_left():
paddle.setx(paddle.xcor() - 25)
window = Screen()
time.sleep(2)
ball = Turtle(shape='circle')
ball.pu()
ball.speed(0)
ball.goto(0, -300)
ball.speed(0)
ball.left(random.randint(45, 135))
paddle = Turtle(shape='square')
paddle.speed(0)
paddle.goto(0, -400)
paddle.shapesize(1, 7.5, 3)
paddle.up()
window.listen()
window.onkeypress(move_right, 'Right')
window.onkeypress(move_left, 'Left')
window.onkeypress(move_right, 'd')
window.onkeypress(move_left, 'a')
game_loop()
window.mainloop()
</code></pre>
<p>Is it possible to detect if the ball hit the side (the short one) of the paddle? The reason I need this is that if it collides on the left side of the paddle, I want the ball's x speed to change and not the y. In my code, the <code>if abs(paddle.xcor() - ball.xcor()) == 20/2 + 150/2:</code> is my attempt to distinguish the side from the top (the big side).</p>
|
<python><turtle-graphics>
|
2025-07-22 20:30:29
| 2
| 1,512
|
Aadvik
|
79,710,997
| 4,976,543
|
How to regex match all strings except when the pattern occurs at a specific position?
|
<p>I have a problem that would benefit greatly from being able to include all matches of a pattern except those that occur at one specific index in a string. For example, if I want to match "ABC", except when it occurs in the 4th position of the string, I would need the following results:</p>
<ul>
<li><strong>ABC</strong>DEFGH (Match)</li>
<li>H<strong>ABC</strong>DEFG (Match)</li>
<li>GH<strong>ABC</strong>DEF (Match)</li>
<li>FGH<strong>ABC</strong>DE (Don't Match)</li>
<li>EFGH<strong>ABC</strong>D (Match)</li>
</ul>
<p>I have been reading about negative lookaheads and negative lookbehinds for a while now and cannot seem to find an example that fits my use case. Any help with this would be extremely appreciated.</p>
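<p>My fallback, in case a pure-regex lookaround turns out to be awkward, is to filter matches by their start offset after the fact; a small sketch (index 3 corresponds to the 4th position):</p>
<pre><code>import re

strings = ["ABCDEFGH", "HABCDEFG", "GHABCDEF", "FGHABCDE", "EFGHABCD"]
for s in strings:
    matches = [m for m in re.finditer("ABC", s) if m.start() != 3]
    print(s, "Match" if matches else "Don't Match")
</code></pre>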
|
<python><regex>
|
2025-07-22 20:09:51
| 4
| 712
|
Branden Keck
|
79,710,989
| 27,596,369
|
Turtle not detecting key presses in Python
|
<p>Here is a HUGELY simplified version of my code:</p>
<pre><code>from turtle import Turtle, Screen
import time
speed_x = 20
speed_y = 29
def move_right():
print('exectuted')
def game_loop():
ball.goto(ball.xcor() + speed_x, ball.ycor() + speed_y)
window.update()
window.ontimer(game_loop, 1)
window = Screen()
time.sleep(2)
ball = Turtle(shape='circle')
ball.color('white')
paddle = Turtle(shape='square')
window.onkeypress(move_right, 'Right')
window.onkeypress(move_right, 'D')
game_loop()
window.mainloop()
</code></pre>
<p>I expected the <code>move_right()</code> function to execute (which, for testing, should have printed something out).</p>
<p>But I get nothing.</p>
|
<python><turtle-graphics><python-turtle>
|
2025-07-22 19:58:51
| 1
| 1,512
|
Aadvik
|
79,710,792
| 27,596,369
|
Turtle pauses when I try to turn it
|
<p>Here is my code:</p>
<pre><code>from turtle import Turtle, Screen
import time
import random
def check_wall_collision():
if ball.xcor() > (window.window_width() / 2) - 20 or ball.xcor() < -(window.window_width() / 2) + 20:
ball.left(180)
elif ball.ycor() > (window.window_height() / 2) - 20 or ball.ycor() < -(window.window_height() / 2) + 20:
ball.left(180)
window = Screen()
ball = Turtle(shape='circle')
while True:
ball.forward(7)
check_wall_collision()
window.mainloop()
</code></pre>
<p>This code basically makes a ball go forward until it collides with the edge of the screen; when it does, it turns 180 degrees. The problem, though, is that when the ball collides with the wall, it pauses for a second or two before turning.</p>
|
<python><turtle-graphics><python-turtle>
|
2025-07-22 16:28:44
| 1
| 1,512
|
Aadvik
|
79,710,699
| 850,781
|
QLabel css "text-align: center" is ignored
|
<p>Contrary to <a href="https://doc.qt.io/qt-6/stylesheet-reference.html" rel="nofollow noreferrer">the spec</a>, CSS setting <code>text-align: center</code> is ignored by <code>QLabel</code> (other settings like <code>font-size</code> and <code>color</code> are respected):</p>
<pre><code>from PyQt6.QtWidgets import QApplication, QLabel
app = QApplication([])
label = QLabel("This is centered?")
label.setStyleSheet("""
QLabel {
background-color: #333;
color: orange;
text-align: center;
}
""")
label.resize(300, 100)
label.show()
app.exec()
</code></pre>
<p>Is this a known bug?
Is there a workaround?</p>
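<p>The workaround I am falling back on for now is setting the alignment from code rather than from the stylesheet; a sketch (I have not confirmed whether a stylesheet-only route such as <code>qproperty-alignment</code> is officially supported):</p>
<pre><code>from PyQt6.QtCore import Qt
from PyQt6.QtWidgets import QApplication, QLabel

app = QApplication([])
label = QLabel("This is centered?")
label.setStyleSheet("QLabel { background-color: #333; color: orange; }")
label.setAlignment(Qt.AlignmentFlag.AlignCenter)  # alignment set in code, not CSS
label.resize(300, 100)
label.show()
app.exec()
</code></pre>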
|
<python><css><qt><pyqt><text-align>
|
2025-07-22 15:11:30
| 0
| 60,468
|
sds
|
79,710,652
| 2,311,202
|
Delta/Difference Histogram
|
<p>Minimum working example:</p>
<pre><code>import pandas as pd
import plotly.graph_objects as go
# Example data
data = {
"data-a": [10, 15, 10, 20, 25, 30, 15, 10, 20, 25],
"data-b": [12, 18, 14, 22, 28, 35, 17, 13, 21, 27]
}
df = pd.DataFrame(data)
# Create the figure
fig = go.Figure()
fig.add_trace(go.Histogram(x=df["data-a"], name="Data A"))
fig.add_trace(go.Histogram(x=df["data-b"], name="Data B", opacity=0.5))
fig.update_layout(font_size=15)
fig.update_yaxes(title_text="Count")
fig.show()
</code></pre>
<p>The result:</p>
<p><a href="https://i.sstatic.net/0kih0i8C.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0kih0i8C.png" alt="enter image description here" /></a></p>
<p>I now want to create a new histogram that is basically depicting the difference in counts between both histograms (same X-axis and bins), so in this case that should be something like:</p>
<p><a href="https://i.sstatic.net/BHpPOR9z.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BHpPOR9z.png" alt="enter image description here" /></a></p>
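<p>My current thinking is to bin both series myself over shared edges and plot the difference as a bar trace; a sketch of that direction, reusing <code>df</code> and <code>go</code> from above (the number of bins is picked arbitrarily):</p>
<pre><code>import numpy as np

# Shared bin edges over the combined range so both columns are binned identically
edges = np.histogram_bin_edges(df[["data-a", "data-b"]].values.ravel(), bins=10)
counts_a, _ = np.histogram(df["data-a"], bins=edges)
counts_b, _ = np.histogram(df["data-b"], bins=edges)
centers = (edges[:-1] + edges[1:]) / 2

fig_delta = go.Figure(go.Bar(x=centers, y=counts_a - counts_b, name="A - B"))
fig_delta.update_yaxes(title_text="Count difference")
fig_delta.show()
</code></pre>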
|
<python><plotly><histogram>
|
2025-07-22 14:38:13
| 2
| 506
|
Pietair
|
79,710,580
| 8,704,639
|
Numpy Bug in computing business days?
|
<p>I am trying to compute business days using NumPy.</p>
<p>However, I found an inconsistency.</p>
<pre><code>import numpy as np
# Count from 1st Jan 2023 to 31st Jan 2023
print("Jan: {0}".format(np.busday_count('2023-01-01', '2023-01-31')))
# Returns 21 days
# To get actual business days do 21 + 1 = 22, since start day is not counted
</code></pre>
<p>For December;</p>
<pre><code>import numpy as np
# Count from 1st Dec 2023 to 31st Dec 2023
print("Dec: {0}".format(np.busday_count('2023-12-01', '2023-12-31')))
# Returns 21 days
# To get actual business days, use 21, which is the actual number of business days in December.
# So adding 21 + 1 would give an incorrect number of 22 business days.
</code></pre>
<p>Is this a bug, or am I simply using NumPy wrong?</p>
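<p>For what it's worth, if I treat the end date as exclusive and pass the first day of the following month, the numbers match my manual counts for both months, which makes me suspect half-open interval semantics rather than a bug; I would still like confirmation:</p>
<pre><code>import numpy as np

# busday_count counts valid days in [begindate, enddate), excluding enddate,
# so passing the first day of the next month covers the whole month.
print(np.busday_count('2023-01-01', '2023-02-01'))  # 22 business days in January 2023
print(np.busday_count('2023-12-01', '2024-01-01'))  # 21 business days in December 2023
</code></pre>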
|
<python><numpy><datetime>
|
2025-07-22 13:45:21
| 1
| 583
|
M4X_
|
79,710,168
| 4,702,527
|
Human in the loop to work in both Langgraph studio and CLI based
|
<p>I have a LangGraph agent from a Python shell script, and I expose it as a Typer command to be executed from the command line. Everything works fine. But now I want to add a human in the loop as one node. The problem is if I add input("Enter yes or no"), then it works in CLI but not in LangGraph Studio. And if I use from langgraph.types import interrupt, interrupt("Enter yes or no"), it works in LangGraph Studio but not in CLI. Is there any way I can make it work in both modes?</p>
|
<python><command-line-interface><langgraph><typer>
|
2025-07-22 08:38:01
| 2
| 471
|
Sowmya
|
79,709,834
| 2,063,329
|
Streamlit Cloud fails to clone public GitHub repo: "Failed to download sources" error despite correct config
|
<p>I'm deploying a Streamlit app to Streamlit Cloud from a public GitHub repo named MonteSimLite. Everything in the repository appears healthy:</p>
<ul>
<li><p>Repo is public</p>
</li>
<li><p>Branch is main</p>
</li>
<li><p>Entry point is app.py, located at the root</p>
</li>
<li><p>Python version set to 3.11</p>
</li>
<li><p>All dependencies listed in requirements.txt</p>
</li>
<li><p>Streamlit dashboard configured with correct casing (mysorian/MonteSimLite)</p>
</li>
</ul>
<p>Despite this, my app fails to deploy and shows this persistent log:</p>
<blockquote>
<p>Failed to download the sources for repository: 'montesimlite', branch: 'main', main module: 'app.py'<br />
Make sure the repository and the branch exist and you have write access to it, and then reboot the app.</p>
</blockquote>
<p>Things I've already tried:</p>
<ul>
<li><p>Creating a fresh app with a new name</p>
</li>
<li><p>Disconnecting and reconnecting GitHub in Streamlit Cloud</p>
</li>
<li><p>Renaming the repo</p>
</li>
<li><p>Using a stripped-down app.py that contains only a title</p>
</li>
<li><p>Verified that the repo URL is correctly cased (MonteSimLite, not montesimlite)</p>
</li>
<li><p>Deleted all prior apps that used the lowercase slug</p>
</li>
</ul>
<p>What's interesting is I've previously deployed another app (MontecarloSimulator) using a similar setup with no issues. Streamlit seems to lowercase the subdomain (e.g., montesimlite.streamlit.app) regardless, which shouldn't affect cloning, but here it appears to be failing because of that.</p>
<p>Has anyone encountered this repo cloning failure in Streamlit Cloud even with correct public repo settings? Could it be a caching issue, a legacy app fingerprint, or something else with Streamlit's backend?</p>
|
<python><git><github><streamlit>
|
2025-07-22 03:22:12
| 1
| 457
|
user2063329
|
79,709,674
| 395,255
|
JMESPath filter nested objects based on exact match of the value of an attribute
|
<p>Is it possible to get a JMESPath expression to return <code>groups</code> and <code>group_elements</code> based on the match on <code>group_name</code>?</p>
<p>I haven't even been able to get <code>group_elements</code> so far so getting two objects returned is beyond me.</p>
<p>This is what I have tried so far:</p>
<pre class="lang-none prettyprint-override"><code>[*].changed_resource_value.group_elements[?group_name == 'contact_information_address']
[0].changed_resource_value.group_elements[?group_name == 'contact_information_address']
[*].changed_resource_value.group_elements.*[?group_name == 'contact_information_address'] # this returns false
</code></pre>
<p>And here is the JSON document I am trying to filter:</p>
<pre class="lang-json prettyprint-override"><code>[
{
"changed_resource_value": {
"groups": {
"contact_information": {
"group_name": "contact_information",
"group_path": "group_elements.contact_information"
},
"contact_information_address": {
"group_name": "contact_information_address",
"group_path": "group_elements.contact_information_address"
}
},
"group_elements": {
"city": {
"new_value": "some city",
"old_value": "",
"group_name": "contact_information_address",
"group_path": "group_elements.contact_information_address",
"element_name": "city"
},
"full_name": {
"new_value": "some full name",
"old_value": "",
"group_name": "contact_information",
"group_path": "group_elements.contact_information",
"element_name": "full_name"
}
}
},
"id": "107a564f-9d0e-4a0d-b58b-6fd35ee78933"
}
]
</code></pre>
|
<python><jmespath>
|
2025-07-21 22:22:44
| 1
| 12,380
|
Asdfg
|
79,709,532
| 27,596,369
|
How to delete last word of Tkinter ScrolledText Widget
|
<p>I have a <code>scrolledtext.ScrolledText</code> widget which has some text in it; what I want to do is delete the last word in the widget. I tried searching on the internet and found nothing.</p>
<p>If sample code is needed I can give it.</p>
<p>I have tried to find the index of the final word, since I do know what it is; here is the code:</p>
<pre><code> from tkinter import *
from tkinter import ttk, scrolledtext
target_list = rndm_words
words = scrolledtext.ScrolledText(root, font=("Arial", 15), width=40, height=2)
words.insert(INSERT, " ".join(rndm_words))
words.config(state=DISABLED)
words.config(state=NORMAL)
words.delete(words.index(target_list[-1]), END)
words.config(state=DISABLED)
</code></pre>
<p><code>rndm_words</code> is a random 50 words from the Oxford 3000</p>
|
<python><tkinter>
|
2025-07-21 19:09:26
| 1
| 1,512
|
Aadvik
|
79,709,521
| 3,577,105
|
python asyncio: allow user to choose fire-and-forget vs blocking-group-of-functions
|
<p>I'd like to send an http request, and then operate on its response:</p>
<pre><code>import requests
s=requests.Session()
....
(session setup...)
....
(url and params setup...)
....
r=s.get(url,params=params)
print('response json:'+str(r.json()))
</code></pre>
<p>In that case, the request should be blocking, so that the print line doesn't happen until a response has been received.</p>
<p>At other times, I'd like to send fire-and-forget requests - the response is never used, so, I don't care how long it takes to be received:</p>
<pre><code>s.post(url,data=data,timeout=0)
</code></pre>
<p>What are the options for providing for both cases (blocking vs. fire-and-forget) with an intermittent network connection?</p>
<p>'retry' (with exponential back-off timing or such - as in the tenacity module) works in a blocking manner, but, this question isn't for a web-scraping or large-number-of-requests context.</p>
<p>I do have a functioning queue-and-thread technique in place: submit the request to a Queue instance, and then set a threading.Event to tell the worker thread to start working through the queue, with a delay between non-successful iterations, with each queue item not being removed from the queue until a valid response is received. This works nicely for fire-and-forget, but, it doesn't work in the blocking case, because the timing of the valid response can't be known, so the main thread has likely moved on before the response is available.</p>
<p>asyncio (instead of threading) could be made to either fire-and-forget or block, but I'm trying to find the technique that would be the most clear for the downstream coder.</p>
<p>E.g. should the default be blocking, or, should the default be fire-and-forget?</p>
<p>It's easy enough to specify blocking-or-fire-and-forget with an argument, but the downstream coder could be blindsided when they expect it to be one way and it's actually the other.</p>
<p>Is there a way to specify a 'group' of lines of code that should be a 'blocking set'?</p>
<p>While it would be great to be able to tell if the return value is "used", so that the code could 'automatically' determine if a call should be blocking or not, the answers to <a href="https://stackoverflow.com/questions/79697826/is-there-a-way-to-tell-if-a-functions-return-value-is-used">this previous question</a> point out that it's not really possible to do so reliably.</p>
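<p>To make the question less abstract, this is the shape of API I am considering exposing to the downstream coder: every call returns a future, fire-and-forget simply ignores it, and blocking is an explicit <code>.result()</code>. It is a sketch built on a thread pool rather than asyncio, with the retry/queue logic omitted and placeholder URLs:</p>
<pre><code>from concurrent.futures import ThreadPoolExecutor

import requests

_session = requests.Session()
_pool = ThreadPoolExecutor(max_workers=4)

def submit_get(url, **kwargs):
    """Always returns a Future; the caller decides whether to block on it."""
    return _pool.submit(_session.get, url, **kwargs)

# fire-and-forget: just drop the future
submit_get("https://httpbin.org/get")

# blocking: wait explicitly for the response
r = submit_get("https://httpbin.org/get", params={"q": 1}).result()
print('response json:' + str(r.json()))
</code></pre>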
|
<python><multithreading><python-requests><python-asyncio>
|
2025-07-21 18:50:04
| 1
| 904
|
Tom Grundy
|
79,709,488
| 459,745
|
click.option with deprecated=str failed with unexpected keyword argument
|
<p>Here is a short sample, which uses <code>click==8.1.7</code>:</p>
<pre><code># main.py
import click
@click.command
@click.option(
"--disable-all",
is_flag=True,
default=False,
deprecated="Please use --disable-all-features",
)
@click.option("--disable-all-features", is_flag=True, default=False)
def main(disable_all, disable_all_features):
print(f"{disable_all=}")
print(f"{disable_all_features=}")
if __name__ == "__main__":
main()
</code></pre>
<p>When I run this, I get the following error:</p>
<pre><code>Traceback (most recent call last):
File "/home/xxx/sandbox/click-option-alias/main.py", line 12, in <module>
def main(disable_all, disable_all_features):
File "/home/xxx/sandbox/click-option-alias/.venv/lib/python3.9/site-packages/click/decorators.py", line 373, in decorator
_param_memo(f, cls(param_decls, **attrs))
File "/home/xxx/sandbox/click-option-alias/.venv/lib/python3.9/site-packages/click/core.py", line 2536, in __init__
super().__init__(param_decls, type=type, multiple=multiple, **attrs)
TypeError: __init__() got an unexpected keyword argument 'deprecated'
</code></pre>
<p>How do I make the deprecation feature work?</p>
|
<python><python-click>
|
2025-07-21 18:18:36
| 1
| 41,381
|
Hai Vu
|
79,709,465
| 12,311,820
|
Django Celery with prefork workers breaks OpenTelemetry metrics
|
<p>I have a Django application I wanted to instrument with OpenTelemetry for traces and metrics. I created an <code>otel_config.py</code> file next to my <code>manage.py</code> with this content:</p>
<pre class="lang-py prettyprint-override"><code># resources
def get_default_service_instance_id():
try:
hostname = socket.gethostname() or "unknown-host"
except Exception as e:
hostname = "unknown-host"
try:
process_id = os.getpid()
except Exception as e:
process_id = "unknown-pid"
return f"{hostname}-{process_id}"
service_name = "my-service"
otlp_endpoint = "http://otel-collector:4318"
service_instance_id = get_default_service_instance_id()
resource = Resource.create(
{
SERVICE_NAME: service_name,
SERVICE_INSTANCE_ID: service_instance_id,
}
)
# traces
otlp_endpoint_traces = urljoin(otlp_endpoint, "/v1/traces")
trace_exporter = OTLPSpanExporter(endpoint=otlp_endpoint_traces)
span_processor = BatchSpanProcessor(trace_exporter)
tracer_provider = TracerProvider(resource=resource)
trace.set_tracer_provider(tracer_provider)
trace.get_tracer_provider().add_span_processor(span_processor)
# metrics
otlp_endpoint_metrics = urljoin(otlp_endpoint, "/v1/metrics")
metric_exporter = OTLPMetricExporter(endpoint=otlp_endpoint_metrics)
metric_reader = PeriodicExportingMetricReader(metric_exporter)
meter_provider = MeterProvider(resource=resource, metric_readers=[metric_reader])
metrics.set_meter_provider(meter_provider)
# instrument
DjangoInstrumentor().instrument()
Psycopg2Instrumentor().instrument()
CeleryInstrumentor().instrument()
</code></pre>
<p>Then, I simply imported it at the end of my <code>settings.py</code> file like below:</p>
<pre class="lang-py prettyprint-override"><code>import otel_config
</code></pre>
<p>Although my traces and metrics work fine in most cases, my OpenTelemetry metrics are broken in the case of Celery workers in prefork mode.<br />
With prefork workers, the child processes end up with the same <code>SERVICE_INSTANCE_ID</code> as the parent process. As a result, different child processes report to the same metric time series even though each has its own exclusive memory, so the value in my collector flips back and forth and is wrong, since it is not the aggregate of all the child processes.<br />
It's not the same with my thread workers (<code>--pool=threads</code>), since all threads of the same worker use the same metric instance in memory, and the final value in the collector is always the correct aggregated value.</p>
<hr />
<p>I've tried multiple solutions, but I get other drawbacks that I'll explain hereafter.</p>
<ol>
<li>Using <code>worker_init</code> and <code>worker_process_init</code> signals</li>
</ol>
<p>here, I removed <code>import otel_config</code> from <code>settings.py</code> and added it in <code>worker_init</code> and <code>worker_process_init</code> signal handlers like below:</p>
<pre class="lang-py prettyprint-override"><code>from celery.signals import worker_init, worker_process_init
@worker_init
def worker_init_handler(sender, **kwargs):
pool_cls = str(sender.pool_cls) if hasattr(sender, 'pool_cls') else None
if "threads" in pool_cls: # only if --pool=threads
import otel_config
@worker_process_init.connect
def worker_process_init_handler(**kwargs):
import otel_config
</code></pre>
<p>I had to keep <code>worker_init_handler</code> as I also have thread workers, and in thread workers, no <code>worker_process_init</code> signal is sent. However, I needed the instrumentation and metrics in my thread workers as well.<br />
I also had to add <code>import otel_config</code> to my <code>wsgi.py</code> file for my Django API traces and metrics.</p>
<p>However, in this approach, I lose the metrics sent by the parent process. I have a <code>task_received</code> signal handler that exports some metrics, and this signal is handled only in the parent process.<br />
I'm also not sure if it's a good practice to instrument OpenTelemetry in multiple places instead of only one place (i.e. <code>settings.py</code>).</p>
<ol start="2">
<li>Reinstrumentation in case of prefork child processes</li>
</ol>
<p>In this approach, I tried to re-import everything in case of the child process initialization:</p>
<pre class="lang-py prettyprint-override"><code>@worker_process_init.connect
def worker_process_init_handler(**kwargs):
    import otel_config
</code></pre>
<p>However, the Python OpenTelemetry SDK works on a singleton basis, meaning that setting the <code>MeterProvider</code> and <code>TracerProvider</code> and instrumenting (e.g. <code>DjangoInstrumentor().instrument()</code>) once blocks subsequent set calls and instrumentations. So this re-import in <code>worker_process_init_handler</code> wouldn't do anything at all. I even tried other hacks, like using the following in the case of <code>worker_process_init_handler</code> (child processes):</p>
<pre class="lang-py prettyprint-override"><code># trace.set_tracer_provider(tracer_provider) # this one is singleton so commented
trace._TRACER_PROVIDER = tracer_provider
...
# metrics.set_meter_provider(meter_provider) # this one is singleton so commented
metrics._internal._METER_PROVIDER = meter_provider
</code></pre>
<p>But this approach is too hacky and requires other hacks on the Instrumentors and <code>get_meter</code> usage in the codebase.</p>
<hr />
<p>All in all, I'd like to know if anyone has had a similar experience and what the best practice would be for such issues.</p>
|
<python><django><celery><open-telemetry>
|
2025-07-21 17:56:16
| 1
| 357
|
Ashkan Khademian
|
79,709,374
| 281,965
|
Using polymorphic_identity and inheritance is raising a warning
|
<p>I have the following code, where I need to inherit <code>GroupAB</code> from <code>GroupA</code> because I want to reuse its functions rather than copy-paste them again. Since I can't use multiple values for <code>polymorphic_identity=["GroupA", "GroupAB"]</code>, I tried the code below, yet it resulted in a warning.</p>
<p>The main group is:</p>
<pre><code>class Group(Base):
    id = Column('id', Integer, primary_key=True)
    name = Column('name', String(15))
    g_type = Column('type', String(15))

    __mapper_args__ = {
        'polymorphic_on': g_type,
        'polymorphic_identity': 'Group'
    }
</code></pre>
<p>What I wished for:</p>
<pre><code>class GroupCommon(Group):
    __mapper_args__ = {
        'polymorphic_identity': ['GroupA', 'GroupAB']
    }

    def some_function(self):
        ....

# not possible, it doesn't accept a list - SQLAlchemy raises from:
#     if self.polymorphic_identity in self.polymorphic_map:
# TypeError: unhashable type: 'list'
</code></pre>
<p>What I tried to do- is to inherit them:</p>
<pre><code>class GroupA(Group):
    __mapper_args__ = {
        'polymorphic_identity': 'GroupA'
    }

    def some_function(self):
        ....

class GroupAB(GroupA):  # <= inherit from GroupA, which is a polymorphic_identity of Group
    __mapper_args__ = {
        'polymorphic_identity': 'GroupAB'
    }
    pass  # nothing here as I'm using GroupA's functions

# -- this works -- but raises a warning:
# SAWarning: Flushing object <GroupAB at 0x1082...> with incompatible polymorphic identity 'GroupAB'; the object may not refresh and/or load correctly
# (this warning may be suppressed after 10 occurrences)
</code></pre>
<p>No warning when I use:</p>
<pre><code>class GroupAB(Group):  # <= inherit from Group; no functions available, of course
    __mapper_args__ = {
        'polymorphic_identity': 'GroupAB'
    }
    pass
</code></pre>
<p>Any thoughts?</p>
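<p>One idea I haven't fully explored is to keep the shared functions on a plain, non-mapped mixin and let both subclasses inherit it alongside <code>Group</code>. A rough sketch (the mixin name is made up, and I'm not sure this is the intended pattern):</p>
<pre><code>class GroupFunctionsMixin:
    # Plain Python class, not mapped by SQLAlchemy; only shared behaviour lives here.
    def some_function(self):
        ...

class GroupA(GroupFunctionsMixin, Group):
    __mapper_args__ = {'polymorphic_identity': 'GroupA'}

class GroupAB(GroupFunctionsMixin, Group):
    __mapper_args__ = {'polymorphic_identity': 'GroupAB'}
</code></pre>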
|
<python><sqlalchemy>
|
2025-07-21 16:35:09
| 1
| 8,181
|
Ricky Levi
|
79,709,356
| 11,829,002
|
Codecov failing, and I don't understand why (and how fix this)
|
<p>I have added a CI using Codecov to ensure that code is sufficiently covered when PR are done in my code.</p>
<p>But <a href="https://github.com/thomas-saigre/tikzplotly/runs/46363182837" rel="nofollow noreferrer">the job</a> is still failing after the modifications I made.
Here is the <a href="https://app.codecov.io/gh/thomas-saigre/tikzplotly/pull/26?dropdown=coverage&src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=checks&utm_campaign=pr%20comments" rel="nofollow noreferrer">full report</a> from codecov, which displays a positive change and no red flags.</p>
<p>There are some parts of the code that are not covered. Still, they correspond to very specific cases that are never reached in normal usage (except by deliberately engineering them; for instance, I manually added a non-printable character somewhere to fall into the condition <code>if not_printable(ch): ...</code>).</p>
<p>So my question is actually two questions:</p>
<ol>
<li>Why does the CI job fail while no red flags appear on the Codecov dashboard?</li>
<li>Is it possible to configure the coverage tooling so that the check passes?
An idea would be to skip the kind of code lines mentioned above (see the sketch after this list), but I have no idea whether that is possible, nor how to do it.</li>
</ol>
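<p>To make question 2 concrete, this is the kind of exclusion I have in mind, assuming <code>coverage.py</code> is the tool doing the measuring (the function names below are made up, not my real code):</p>
<pre><code>def not_printable(ch: str) -> bool:
    # Hypothetical helper, standing in for the real check in my code.
    return not ch.isprintable()

def sanitize(text: str) -> str:
    out = []
    for ch in text:
        if not_printable(ch):  # pragma: no cover  (defensive case, excluded from coverage)
            continue
        out.append(ch)
    return "".join(out)
</code></pre>
<p>As far as I understand, the same effect can also be configured globally via an <code>exclude_lines</code> entry in the coverage configuration, but I don't know whether Codecov would then consider the check as passing.</p>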
|
<python><continuous-integration><code-coverage><codecov>
|
2025-07-21 16:23:50
| 0
| 398
|
Thomas
|
79,709,226
| 6,461,882
|
Getting some columns as raw data while others converted to pandas types
|
<p>Is there a way in KDB/pykx to get only some columns as raw data while get others converted to pandas types?</p>
<p>In the example below, I want to be able to do what is shown in the last line (for variable <code>d3</code>), i.e. only get some columns as raw data:</p>
<pre><code>import pykx as kx
q = kx.SyncQConnection(host = 'localhost', port = 8888, timeout = 888.8)
cmd = 'select from table where date=2025.04.30'
d1 = q(cmd).pd() # all columns converted to pandas types
d2 = q(cmd).pd(raw=True) # all columns returned as raw data
d3 = q(cmd).pd(raw=['col1','col2','col3']) # is it possible to get only some columns as raw ?
</code></pre>
|
<python><pandas><kdb+><raw-data><pykx>
|
2025-07-21 14:43:26
| 1
| 2,855
|
S.V
|
79,709,216
| 13,392,257
|
Can't find element by XPATH playwright
|
<p>I am trying to get all search results (URLs) from <a href="https://docs.vgd.ru/search/?v=1" rel="nofollow noreferrer">https://docs.vgd.ru/search/?v=1</a>. I am using the xpath <code>//a[@class='resultsMain']</code> to find them. The xpath is valid.</p>
<p><a href="https://i.sstatic.net/ilSe0cj8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ilSe0cj8.png" alt="webpage" /></a></p>
<p>My code:</p>
<pre class="lang-py prettyprint-override"><code>import asyncio
import time
from playwright.async_api import async_playwright
class VgdParser:
def __init__(self, headless: bool = False):
self.headless = headless
self.browser = None
self.playwright = None
self.page = None
async def start_browser(self):
"""Start the browser and create a new page with stealth mode"""
self.playwright = await async_playwright().start()
self.browser = await self.playwright.chromium.launch(
headless=self.headless,
)
self.page = await self.browser.new_page()
await self.page.goto("https://docs.vgd.ru/search/?v=1")
async def search_by_name(self, name: str):
"""Enter name in https://docs.vgd.ru/search/?v=1 and collect results"""
# Wait for the iframe to appear
# Get the iframe (case-insensitive id)
frame = self.page.frame(name="iFrame1")
if frame is None:
raise Exception("iframe with id 'iFrame1' not found")
# Wait for the input inside the iframe
input_locator = frame.locator('//input[@placeholder="Введите запрос"]')
await input_locator.fill(name)
await input_locator.press('Enter')
frame = self.page.frame(name="iFrame1")
if frame is None:
raise Exception("iframe with id 'iFrame1' not found")
results_raw = frame.locator("//a[@class='resultsMain']")
count = await results_raw.count()
print("XXX_ ", count) # PRINTS 0
for i in range(count):
cur_result = results_raw.nth(i)
text = await cur_result.inner_text()
print("Result:", text)
async def main():
parser = VgdParser()
await parser.start_browser()
await parser.search_by_name("Алексей Ермаков")
if __name__ == "__main__":
asyncio.run(main())
</code></pre>
<p>The problem is that in the function <code>search_by_name</code>, the line <code>print("XXX_ ", count)</code> prints 0, meaning that it didn't find any elements.</p>
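<p>My current suspicion is a timing issue: the count is taken before the results have rendered. A minimal sketch of the explicit wait I am considering inside <code>search_by_name</code> (untested, so I am not sure it is the right fix):</p>
<pre class="lang-py prettyprint-override"><code># Hypothetical addition after pressing Enter: wait for at least one result
# to appear in the iframe before counting (timeout is in milliseconds).
await frame.wait_for_selector("//a[@class='resultsMain']", timeout=10_000)
results_raw = frame.locator("//a[@class='resultsMain']")
count = await results_raw.count()
</code></pre>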
|
<python><playwright><playwright-python>
|
2025-07-21 14:28:52
| 1
| 1,708
|
mascai
|
79,709,133
| 9,112,151
|
Problem with multiple Query in view of FastAPI application endpoint
|
<p>I'm trying to develop filtering/ordering/pagination functionality for FastAPI applications. For now I'm facing difficulty with separating filtering and sorting. The code below generates undesirable swagger:</p>
<pre class="lang-py prettyprint-override"><code>from fastapi import FastAPI, Query
from pydantic import BaseModel
from sqlalchemy import Select, create_engine, MetaData, select
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column
app = FastAPI()
url = "postgresql+psycopg2://postgres:postgres@localhost:5432/database"
engine = create_engine(url, echo=True)
class Base(DeclarativeBase):
metadata = MetaData()
class User(Base):
__tablename__ = "users"
id: Mapped[int] = mapped_column(primary_key=True)
first_name: Mapped[str]
last_name: Mapped[str]
class FilterSet(BaseModel):
first_name: str | None
def filter_queryset(self, query: Select) -> Select:
conditions = self.model_dump(exclude_unset=True)
# apply filtering using `conditions`
return query
class Ordering(BaseModel):
ordering: list[str] | None = None
def order_queryset(self, query: Select) -> Select:
self.ordering
# apply ordering
return query
@app.get("/")
async def index(filterset: FilterSet = Query(), ordering: Ordering = Query()):
query = select(User)
query = filterset.filter_queryset(query)
query = ordering.order_queryset(query)
# request database
</code></pre>
<p>The bad swagger:</p>
<p><a href="https://i.sstatic.net/TMDrQCJj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TMDrQCJj.png" alt="enter image description here" /></a></p>
<p>Is it possible to fix it without combining FilterSet and Ordering classes into a single class?</p>
|
<python><postgresql><sqlalchemy><fastapi><pydantic>
|
2025-07-21 13:21:51
| 1
| 1,019
|
Альберт Александров
|
79,708,948
| 2,950,593
|
Tensorflow Graph is finalized and cannot be modified
|
<p>This is my code:</p>
<pre><code>vocals_path = "output/song.wav"
audio_path = vocals_path
from inaSpeechSegmenter import Segmenter
import tensorflow as tf
# Initialize the segmenter
seg = Segmenter()
# Analyze the audio file
segments = seg(vocals_path)
# Print the segments
for segment in segments:
print(segment)
</code></pre>
<p>This is the error:</p>
<pre><code> File "/root/code/aimusic/essgender.py", line 138, in <module>
seg = Segmenter()
^^^^^^^^^^^
File "/root/code/aimusic/venv/lib/python3.11/site-packages/inaSpeechSegmenter/segmenter.py", line 241, in __init__
self.vad = SpeechMusicNoise(batch_size)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/code/aimusic/venv/lib/python3.11/site-packages/inaSpeechSegmenter/segmenter.py", line 131, in __init__
self.nn = keras.models.load_model(model_path, compile=False)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/code/aimusic/venv/lib/python3.11/site-packages/keras/saving/saving_api.py", line 212, in load_model
return legacy_sm_saving_lib.load_model(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/code/aimusic/venv/lib/python3.11/site-packages/keras/utils/traceback_utils.py", line 70, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/root/code/aimusic/venv/lib/python3.11/site-packages/tensorflow/python/framework/ops.py", line 3375, in _check_not_finalized
raise RuntimeError("Graph is finalized and cannot be modified.")
RuntimeError: Graph is finalized and cannot be modified.
</code></pre>
<p>I've tried to manually reset graph like this:</p>
<pre><code>tf.compat.v1.reset_default_graph()
</code></pre>
<p>or like this</p>
<pre><code>from tensorflow.python.framework import ops
ops.reset_default_graph()
sess = tf.InteractiveSession()
</code></pre>
<p><strong>But it didn't work (same error).</strong></p>
<p>What do I do?</p>
|
<python><tensorflow>
|
2025-07-21 10:43:33
| 4
| 9,627
|
user2950593
|
79,708,570
| 10,844,937
|
Why httpx timeout not working with stream response
|
<p>I use <code>httpx</code> to make requests to my LLM. Here is the code.</p>
<pre><code>import httpx
import logging
import json
from httpx import TimeoutException
try:
content = ''
logging.info(f"request payload:{payload}")
async with httpx.AsyncClient(timeout=LLM_TIMEOUT) as client:
async with client.stream("POST", MODEL_URL, headers=HEADERS, json=payload,
timeout=LLM_TIMEOUT) as response:
if response.status_code != 200:
ans = await response.aread()
logging.error(f"response.aread() error")
raise Exception(f"Failed to generate completion stream: {ans}")
async for line in response.aiter_lines():
if line.startswith("data:"):
data_str = line[5:]
if data_str.strip() == "[DONE]":
break
try:
chunk_data = json.loads(data_str)
if chunk_data['choices']:
content += chunk_data["choices"][0]["delta"].get("content")
except json.JSONDecodeError:
logging.error("json parse error:", data_str)
logging.info(f"request response:{content}")
except TimeoutException as te:
logging.warning("request timeout")
</code></pre>
<p>Here the <code>LLM_TIMEOUT</code> is <code>60</code>. However, I can see from my log that there are some requests that cost more than 60 seconds. Can anyone help me solve this? Thanks in advance.</p>
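<p>For reference, this is the kind of wall-clock limit I am considering adding on top, in case <code>httpx</code>'s timeout only applies to individual network operations rather than to consuming the whole stream (that is an assumption on my part, and <code>with_deadline</code> below is a made-up helper):</p>
<pre><code>import asyncio
import logging

async def with_deadline(coro, deadline_s: float = 60.0):
    # Enforce a wall-clock limit over an entire awaitable, e.g. a coroutine
    # that performs the whole streamed LLM call shown above.
    try:
        return await asyncio.wait_for(coro, timeout=deadline_s)
    except asyncio.TimeoutError:
        logging.warning("request exceeded wall-clock deadline")
        return None
</code></pre>
<p>I would then move the streaming logic into its own coroutine and await it through this helper.</p>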
|
<python><python-requests><httpx>
|
2025-07-21 05:15:15
| 1
| 783
|
haojie
|
79,708,291
| 2,961,927
|
How to use Polars copy-on-write principle?
|
<p>I come from the C++ and R world and just started using Polars. This is a great library. I want to confirm my understanding of its copy-on-write principle:</p>
<pre><code>import polars as pl
x = pl.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6]})
y = x # Here, y is semantically a copy of x and
# users shall treat y as a copy of x, but under the
# hood, y is currently still just a "reference" to x.
y = y.with_columns(
pl.Series([7, 8, 9]).alias('b')
) # Copy-on-write occurs, but only a new column 'b' are created.
z = y # Users shall treat z as a copy of y, but
# under the hood, z is currently still just a
# "reference" to y.
# Create row index for conditional operations
z = z.with_row_index("row_idx")
z = z.with_columns(
pl.when(pl.col("row_idx") == 0)
.then(10)
.otherwise(pl.col("b"))
.alias("b")
) # Copy-on-write kicks in. The entire
# column 'b' is copied and then the first element is
# changed to 10.
z = z.with_columns(
pl.when(pl.col("row_idx") == 1)
.then(11)
.otherwise(pl.col("b"))
.alias("b")
) # The 2nd element is changed in-place to 11.
# Remove the temporary row index column
z = z.drop("row_idx")
</code></pre>
<p>And at this point, <code>x</code>, <code>y</code> and <code>z</code> are semantically independent data frames, and users shall treat them as such. But under the hood, only one column <code>'a'</code> exists right now. In other words, here are the data actually existing in memory:</p>
<pre><code>[1, 2, 3]
[4, 5, 6]
[7, 8, 9]
[10, 11, 9]
</code></pre>
<p>Are all my statements and code comments correct? If not, what should be the correct ones?</p>
<p>Edit: I threw my post into ChatGPT and it says:</p>
<p><em>"The 2nd element is changed in-place to 11."
❌ Not in-place. Polars is immutable. Even if you just change one element, the result is a new Series (and potentially a new DataFrame). So z.with_columns(...) always creates a new DataFrame and any modified Series is new memory.</em></p>
<p>Am I right or is the AI right? Is there any part of the official document that can answer this authoritatively?</p>
<p>Edit II: Experiments show the AI is right. Treat a column in a Polars data frame as an "atomic" object: any modification to it will trigger a copy. Even when the column is owned by just one data frame, the modification results in a new column which is then "swapped" back into the data frame. In principle the old column should be released, but I did not see the system monitor gain back the memory, at least not immediately. Polars (or it could just be a Python quirk) might return the block to its own memory pool to avoid calling the OS for its next allocation.</p>
|
<python><dataframe><python-polars><polars>
|
2025-07-20 18:47:43
| 1
| 1,790
|
user2961927
|
79,708,270
| 3,079,439
|
Adding EarlyStopping() to transformers Trainer() error
|
<p>I'm using this code for fine-tuning a LoRA model:</p>
<pre><code>bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_compute_dtype=torch.float16,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type="nf4",
)
model = AutoModelForCausalLM.from_pretrained(
"tiiuae/falcon-rw-1b",
quantization_config=bnb_config,
device_map={"": torch.cuda.current_device()},
)
tokenizer = AutoTokenizer.from_pretrained("zhihan1996/DNABERT-2-117M", trust_remote_code=True)
peft_config = LoraConfig(
task_type=TaskType.CAUSAL_LM,
inference_mode=False,
r=8,
lora_alpha=16,
lora_dropout=0.05,
bias="none"
)
model = get_peft_model(model, peft_config)
dataset = load_from_disk('tokenized_dataset_50_percent')
train_size = int(0.8 * len(dataset["train"]))
test_size = len(dataset["train"]) - train_size
train_set, val_set = torch.utils.data.random_split(dataset["train"], [train_size, test_size])
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)
training_args = TrainingArguments(
output_dir="./falcon-dna-lora",
per_device_train_batch_size=4,
gradient_accumulation_steps=32,
num_train_epochs=1,
fp16=True,
save_total_limit=2,
logging_steps=10,
save_steps=500,
learning_rate=2e-4,
weight_decay=0.01,
report_to="none"
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_set,
eval_dataset=val_set,
data_collator=data_collator,
tokenizer=tokenizer,
)
print("Trainable params:", sum(p.numel() for p in model.parameters() if p.requires_grad))
trainable_params = []
total_params = 0
trainable_count = 0
for name, param in model.named_parameters():
total_params += param.numel()
if param.requires_grad:
trainable_count += param.numel()
trainable_params.append(name)
print(f"Total parameters: {total_params:,}")
print(f"Trainable parameters: {trainable_count:,}")
print(f"Percentage trainable: {100 * trainable_count / total_params:.4f}%")
print(f"Trainable layers: {trainable_params}")
trainer.train()
trainer.save_model("falcon-rw-1b-50percent-checkpoint")
</code></pre>
<p>Trainer() method works fine and the model trains correctly. Problem begins if I add EarlyStopping callback, by doing the following changes:</p>
<pre><code>training_args = TrainingArguments(
output_dir="./falcon-dna-lora",
per_device_train_batch_size=4,
gradient_accumulation_steps=32,
num_train_epochs=1,
fp16=True,
save_total_limit=2,
logging_steps=10,
save_steps=500,
weight_decay=0.01,
report_to="none",
eval_strategy="steps",
eval_steps=500,
load_best_model_at_end=True,
metric_for_best_model="eval_loss",
greater_is_better=False,
learning_rate=2e-4,
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_set,
eval_dataset=val_set,
data_collator=data_collator,
tokenizer=tokenizer,
callbacks=[EarlyStoppingCallback(early_stopping_patience=3)]
)
</code></pre>
<p>After this I receive the following error:</p>
<pre><code>RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.
</code></pre>
<p>I have run out of ideas and none of them work. Can you suggest what the issue with this approach could be?</p>
<p>Thanks in advance.</p>
|
<python><huggingface-transformers><large-language-model>
|
2025-07-20 18:20:20
| 0
| 3,158
|
Keithx
|
79,708,131
| 6,751,456
|
django-import-export id auto generated by the package during insert?
|
<p>I'm using <code>django-import-export</code> and trying to make it work with multi-thread concurrency.</p>
<p>I tried logging the SQL queries and noticed that the <code>INSERT</code> query has <code>id</code> values generated as well.</p>
<p>*EDIT: First there's an <code>INSERT</code> without <code>id</code> (there's no <code>id</code> in the input file):</p>
<pre><code> INSERT INTO "lprovider" ("npi", "provider_id", "first_name", "last_name") VALUES ('1345', NULL, 'CHARLES', 'STEVENS')
</code></pre>
<p>and then with <code>id</code>.</p>
<pre><code> INSERT INTO "lprovider" ("id", "npi", "provider_id", "first_name", "last_name") VALUES (278082, '1345', NULL, 'CHARLES', 'STEVENS')
</code></pre>
<p>Is this expected? Does the package populate the <code>primary key</code> by itself?</p>
|
<python><django><django-models><django-import-export>
|
2025-07-20 15:23:09
| 0
| 4,161
|
Azima
|
79,707,949
| 7,483,211
|
How to type-annotate write when subclassing io.RawIOBase, getting "Liskov substitution principle" violation
|
<p>I want to type-annotate the <code>write</code> in a class that subclasses <code>io.RawIOBase</code>.</p>
<p>I'm struggling to get anything other than <code>Any</code> to type check, which is frustrating, because I should be able to use a much more specific type here.</p>
<p>The following doesn't type check unfortunately:</p>
<pre><code>from io import RawIOBase
from typing import Union

class BytesWrittenCounterIO(RawIOBase):
    def __init__(self) -> None:
        self.written: int = 0

    def write(self, b: Union[bytes, bytearray, memoryview]) -> int:
        n = len(b)
        self.written += n
        return n
</code></pre>
<p>mypy 1.17.0 errors with:</p>
<pre class="lang-none prettyprint-override"><code>$ mypy counter.py
counter.py:8: error: Argument 1 of "write" is incompatible with supertype "_RawIOBase"; supertype defines the argument type as "Buffer" [override]
counter.py:8: note: This violates the Liskov substitution principle
counter.py:8: note: See https://mypy.readthedocs.io/en/stable/common_issues.html#incompatible-overrides
</code></pre>
<p>I know I can use <code>Any</code> but that's annoyingly broad. Is there any solution other than <code>Any</code> or <code># type: ignore[override]</code>?</p>
|
<python><python-typing><mypy><liskov-substitution-principle>
|
2025-07-20 11:02:48
| 1
| 10,272
|
Cornelius Roemer
|
79,707,880
| 694,162
|
Python type-annotation for getitem
|
<p>I'm implementing a custom class and want to provide type hints for item access (e.g. <code>obj[key]</code>) so that</p>
<ul>
<li>Pyright/Pylance/VSCode suggests allowed item keys,</li>
<li>Pyright/Pylance/VSCode suggests correct methods on the return values, and</li>
<li>MyPy performs type checks when the returned values are used.</li>
</ul>
<p>When using the custom class, the user should provide a schema that indicates allowed item keys and their types, analogously to <code>TypedDict</code>.</p>
<p><em>In this post I describe the problem very generically. My specific motivation is to extend type annotations for DataFrames so that column access via <code>__getitem__</code> is aware of the column types, as this would greatly improve the maintainability of data science code heavily relying on DataFrames. I'm aware of Pandera. It does provide DataFrame schemas, but doesn't really support column access. I'm also aware that the exact columns can be different at runtime, but that applies to all type hints in Python.</em></p>
<h2>Attempt 1: Extending TypedDict</h2>
<p>TypedDict provides the features I'd like to have for my own class,
so my first idea was to inherit from TypedDict. However, Python doesn't allow inheriting from TypedDict and other classes at the same time. Not being able to inherit from other classes is too big a limitation.</p>
<pre><code>from typing import TypedDict
class MyClass(TypedDict, OtherClass):
x: str
y: int
</code></pre>
<h2>Attempt 2: Providing Stubs</h2>
<p>I managed to get correct coding suggestions in VSCode by providing stubs.</p>
<pre><code># myclass.pyi
from typing import Literal, overload

class MyClass:
    @overload
    def __getitem__(self, key: Literal["x"]) -> str: ...
    @overload
    def __getitem__(self, key: Literal["y"]) -> int: ...
</code></pre>
<pre><code># myclass.py
class MyClass:
    def __getitem__(self, key):
        if key == "x":
            return "example string"
        elif key == "y":
            return 42
</code></pre>
<p>With this, I get suggestions for the allowed keys and their items.</p>
<pre><code>var = MyClass()
# Dict keys and methods suggested by Pylance
print(var["x"].capitalize())
print(var["y"].as_integer_ratio())
</code></pre>
<p>However, using them incorrectly isn't spotted by any of the tools.</p>
<pre><code># No problem reported by Pylance or mypy
print(var["y"].capitalize())
print(var["x"].as_integer_ratio())
</code></pre>
<p>It is very cumbersome to define stubs. In the context of DataFrames, users would need to define stubs every time they define a schema. That is a lot of boilerplate code. It would be great if the stubs could be created dynamically.</p>
<h2>Attempt 3: Meta programming stubs</h2>
<p>The idea is to use a metaclass that creates the stubs automatically. However, I don't know how to create the overloaded methods, as they cannot coexist in the object's <code>__dict__</code>. I'm also not sure whether my metaclass would even run during type checks.</p>
<h2>Question</h2>
<ul>
<li>How to get mypy to report type errors with my stubs?</li>
<li>Is there a way to dynamically create stubs based on a schema?</li>
<li>Are there better alternatives?</li>
<li>How does TypedDict work in type checks? Is TypedDict implemented using standard type hints, or is its behavior hard-coded?</li>
</ul>
|
<python><dataframe><python-typing>
|
2025-07-20 09:05:54
| 0
| 5,188
|
sauerburger
|
79,707,696
| 25,261,014
|
How does one read from stdin with a timeout in python?
|
<p>I'm trying to read from stdin and get <code>None</code> (or some other defined value) should the user not input any text within a predefined timeout. However, when I tried the code below, the script just hangs indefinitely at the <code>Enter some text:</code> prompt and does not time out after 5 seconds, as I expected it to.</p>
<pre class="lang-python prettyprint-override"><code>import os
import tty
import sys
import time
fd = sys.stdin.fileno()
tty.setraw(fd)
print("Enter some text:", end=' ', flush=True)
t1 = time.time()
d = os.read(fd, 5)
print("Time taken:", time.time() - t1, "d:", d)
</code></pre>
<p>Htop (process monitor) tells me that the python process running my script is not in disk sleep (as I would have thought it would be), and when I enter a character, the <code>Time taken:</code> value printed at the end often exceeds 5 seconds. I am aware that I am only able to read user input one char at a time and that it leaves my terminal in a weird state after the script exits, because I am not doing anything to reset the terminal afterwards...</p>
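<p>For comparison, this is a sketch of the behaviour I am after, using <code>select()</code> to wait for stdin with a timeout instead of a bare <code>os.read()</code> (I have not confirmed this is the idiomatic approach):</p>
<pre class="lang-python prettyprint-override"><code>import select
import sys

def read_with_timeout(timeout_s: float = 5.0):
    # Wait until stdin becomes readable or the timeout expires.
    ready, _, _ = select.select([sys.stdin], [], [], timeout_s)
    if not ready:
        return None  # nothing typed within the timeout
    return sys.stdin.readline()
</code></pre>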
|
<python><stdin>
|
2025-07-20 03:40:28
| 2
| 362
|
DeepThought42
|
79,707,651
| 5,477,662
|
Is there a way to specifiy a portable chrome version executable in seleniumbase in python 3
|
<p>Nothing needed, sorry for the post.</p>
|
<python><python-3.x><seleniumbase>
|
2025-07-20 01:40:52
| 2
| 497
|
R.Merritt
|
79,707,650
| 3,508,551
|
Sage Differentiation Taking Too Long to Compute
|
<p>Let <code>U, V</code> both be independent uniform random variables on <code>[0, 1]</code>. I am interested in the function <code>F(x, y) = Pr[2/3 U - 2/3 <= x, 2/3 V - 2/3 <= y]</code> and its derivative. I wrote the following Sage code to calculate it, but it's taking very long (more than 10 minutes with no output yet). Is there a way to speed up the code?</p>
<pre><code>from sage.all import *
var('u v x y')
ineq1 = heaviside(x + QQ(2)/3 - QQ(2)/3 * u)
ineq2 = heaviside(y + QQ(2)/3 - QQ(2)/3 * v )
indicator = ineq1 * ineq2
cdf = integrate(integrate(indicator, v, 0, 1), u, 0, 1)
diff(cdf, x)
</code></pre>
<p>If I change it to, say, another transformation like <code>Pr[2/3 U + 1/3 V - 2/3 <= x, 1/3 U + 2/3 V - 2/3 <= y]</code>, it outputs the result in less than a second:</p>
<pre><code>from sage.all import *
var('u v x y')
ineq1 = heaviside(x + QQ(2)/3 - QQ(2)/3 * u - QQ(1)/3 * v)
ineq2 = heaviside(y + QQ(2)/3 - QQ(1)/3 * u - QQ(2)/3 * v )
indicator = ineq1 * ineq2
cdf = integrate(integrate(indicator, v, 0, 1), u, 0, 1)
diff(cdf, x)
</code></pre>
|
<python><sage>
|
2025-07-20 01:38:52
| 1
| 2,319
|
AspiringMat
|
79,707,645
| 219,153
|
Is there a simpler way to select 2D vectors bounded by a box from a NumPy array?
|
<p>This Python script:</p>
<pre><code>import numpy as np
a = np.arange(12).reshape(6, 2)
inf = np.array([2, 2])
sup = np.array([9, 9])
b = (inf < a) & (a < sup)
r = a[b[:, 0] & b[:, 1]]
</code></pre>
<p>creates a subarray <code>r</code>:</p>
<pre><code>[[4 5]
[6 7]]
</code></pre>
<p>of array <code>a</code>:</p>
<pre><code>[[ 0 1]
[ 2 3]
[ 4 5]
[ 6 7]
[ 8 9]
[10 11]]
</code></pre>
<p>containing only 2D vectors, which fit inside the box with lower left corner at <code>[2, 2]</code> and upper right corner at <code>[9, 9]</code>. I suspect there is a simpler way to compute it. Any suggestions?</p>
<hr />
<p>Somewhat surprisingly, the solution offered by <em>furas</em>, while more compact, is slower than the uglier original. On an AMD Ryzen 7 3800X CPU with Python 3.12.7, this script:</p>
<pre><code>import numpy as np, timeit as ti
t = 'i2'
a = np.random.randint(1024, size=(1000, 2)).astype(t)
inf = np.array([100, 200]).astype(t)
sup = np.array([300, 400]).astype(t)
def f0(a, inf, sup):
    return a[((inf < a) & (a < sup)).all(axis=1)]

def f1(a, inf, sup):
    b = (inf < a) & (a < sup)
    return a[b[:, 0] & b[:, 1]]

print(f'Minimum, median and maximum execution time in us:')
for fun in ('f0(a, inf, sup)', 'f1(a, inf, sup)'):
    t = 10**6 * np.array(ti.repeat(stmt=fun, setup=fun, globals=globals(), number=1, repeat=999))
    print(f'{fun:20} {np.amin(t):8,.3f} {np.median(t):8,.3f} {np.amax(t):8,.3f}')
</code></pre>
<p>produces:</p>
<pre><code>Minimum, median and maximum execution time in us:
f0(a, inf, sup) 29.485 29.806 133.301
f1(a, inf, sup) 17.773 17.964 22.663
</code></pre>
|
<python><arrays><numpy>
|
2025-07-20 01:16:03
| 1
| 8,585
|
Paul Jurczak
|
79,707,467
| 4,118,781
|
Python asyncio/Telethon script suddenly stopped working after VSCode restart (Python 3.9.6 on macOS 12.7.6)
|
<p>I have a Python script using asyncio (for Telethon) and until recently, it was running just fine in a Terminal inside VSCode.</p>
<p>However, an extension install prompted a VSCode restart and now I can't get my code to run anymore, even though the code files themselves haven't changed.</p>
<p>It seems that Telethon fails to initialise, and the problem doesn't seem to be with the Telethon database file, as I also tried renaming the DB file and starting my code which leads to the same error message.</p>
<p>I tried updating Telethon using pip, but the error remains.</p>
<p>The error I'm getting is:</p>
<pre><code>Traceback (most recent call last):
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/asyncio/selector_events.py", line 261, in _add_reader
key = self._selector.get_key(fd)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/selectors.py", line 193, in get_key
raise KeyError("{!r} is not registered".format(fileobj)) from None
KeyError: '6 is not registered'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/sora/Documents/*******/*******-scraper-redeemer-dist.py", line 36, in <module>
client = TelegramClient("*******-scraper", api_id, api_hash)
File "/Users/sora/Library/Python/3.9/lib/python/site-packages/telethon/client/telegrambaseclient.py", line 339, in __init__
if not callable(getattr(self.loop, 'sock_connect', None)):
File "/Users/sora/Library/Python/3.9/lib/python/site-packages/telethon/client/telegrambaseclient.py", line 488, in loop
return helpers.get_running_loop()
File "/Users/sora/Library/Python/3.9/lib/python/site-packages/telethon/helpers.py", line 432, in get_running_loop
return asyncio.get_event_loop_policy().get_event_loop()
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/asyncio/events.py", line 639, in get_event_loop
self.set_event_loop(self.new_event_loop())
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/asyncio/events.py", line 659, in new_event_loop
return self._loop_factory()
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/asyncio/unix_events.py", line 54, in __init__
super().__init__(selector)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/asyncio/selector_events.py", line 61, in __init__
self._make_self_pipe()
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/asyncio/selector_events.py", line 112, in _make_self_pipe
self._add_reader(self._ssock.fileno(), self._read_from_self)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/asyncio/selector_events.py", line 263, in _add_reader
self._selector.register(fd, selectors.EVENT_READ,
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/selectors.py", line 523, in register
self._selector.control([kev], 0, 0)
TypeError: changelist must be an iterable of select.kevent objects
Exception ignored in: <function BaseEventLoop.__del__ at 0x103a88ee0>
Traceback (most recent call last):
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/asyncio/base_events.py", line 683, in __del__
self.close()
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/asyncio/unix_events.py", line 63, in close
if self._signal_handlers:
AttributeError: '_UnixSelectorEventLoop' object has no attribute '_signal_handlers'
</code></pre>
<p>Maybe some of you have experienced this error before and can help me pinpoint the problem.</p>
|
<python><python-3.x><macos><exception><runtime-error>
|
2025-07-19 18:05:20
| 1
| 1,495
|
Sora.
|
79,707,458
| 27,596,369
|
Is Pillow ImageDraw.Draw.draw and Tkinter units the same?
|
<p>I have a program which writes text based on the size of the image. If the size is 900x900 (tkinter units), then it should write text at 300x300, 300x600, 600x300 and 600x600, but when I try to do that, it only writes in the centre.</p>
<pre><code>global draw, img_tk
draw = ImageDraw.Draw(img)
font = ImageFont.load_default()
for line in range(1, int(2)+1):
    print(img_tk.height() / int(2) + 1)
    line_coord = (img_tk.height() / int(2) + 1) * line
    for mark in range(1, int(2)+1):
        mark_coord = (img_tk.width() / int(2) + 1) * mark
        print(mark_coord)
        draw.text(xy=(mark_coord, line_coord), text="TESTER", fill=(255,255,255), font=font)
img_tk = ImageTk.PhotoImage(img)
image_label.image = img_tk
image_label.config(image=img_tk)
</code></pre>
<p>Result:</p>
<p><a href="https://i.sstatic.net/MBQ2gEyp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MBQ2gEyp.png" alt="enter image description here" /></a></p>
<p>My guess was that the units of the ImageTk image and the PIL Image weren't the same. But I checked and they were, so I then thought that ImageTk and the ImageDraw.Draw.text function take different types of units.</p>
|
<python><tkinter><python-imaging-library>
|
2025-07-19 17:46:33
| 1
| 1,512
|
Aadvik
|
79,707,091
| 2,446,374
|
Getting two titles in pywebview on macOS
|
<p>I'm using <a href="https://pywebview.flowrl.com/" rel="nofollow noreferrer"><code>pywebview</code></a> to display a web page on a Mac - and for some reason I am getting two titles:</p>
<p><a href="https://i.sstatic.net/nuZWkpwP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nuZWkpwP.png" alt="Two titles" /></a></p>
<p>The code I'm using to run the <code>pywebview</code> server and create the window is:</p>
<pre class="lang-py prettyprint-override"><code> webview.create_window(
"blah", #The title for the webpage
f"http://localhost:{port}",
width=1400,
height=900,
min_size=(800, 600),
on_top=False
)
</code></pre>
<p>Whatever I set as the title (see documentation <a href="https://pywebview.flowrl.com/api/#webview-create-window" rel="nofollow noreferrer">here</a>) appears in both places. How can I get rid of the bottom occurrence?</p>
|
<python><macos><pywebview>
|
2025-07-19 08:45:47
| 2
| 3,724
|
Darren Oakey
|
79,706,845
| 2,446,374
|
How do I cleanly stop a sanic server without getting errors related to websocket closing?
|
<p>I have a Python Sanic website. I'm using pywebview to display it, and when I close the pywebview window, I have a watcher which is doing an "app.stop".</p>
<p>That works beautifully, except I always get this error:</p>
<pre><code>Srv 0 10:21:39 ERROR: Error closing websocket connection
Traceback (most recent call last):
File "/opt/homebrew/lib/python3.13/site-packages/sanic/server/websockets/impl.py", line 422, in auto_close_connection
self.io_proto.transport.write_eof()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "uvloop/handles/stream.pyx", line 703, in uvloop.loop.UVStream.write_eof
File "uvloop/handles/handle.pyx", line 159, in uvloop.loop.UVHandle._ensure_alive
RuntimeError: unable to perform operation on <TCPTransport closed=True reading=False 0xa01669510>; the handler is closed
</code></pre>
<p>I can't find any way of getting rid of it. All the AIs I've asked give various "monkey patches" for Sanic, which I'm not really OK with... but I'm not OK with the error either.</p>
<p>Is there any way of gracefully shutting down sanic such that it nicely closes off everything without errors?</p>
<p>As requested, here's some example code; this gives an error upon closing, 100% of the time:</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3
# main.py
# -----------------------------------------------------------------------------
# Suppress pkg_resources warning from tracerite/html
# -----------------------------------------------------------------------------
import warnings
warnings.filterwarnings(
"ignore",
message="pkg_resources is deprecated as an API.*",
module="tracerite.html",
)
import argparse
import asyncio
import os
import sys
from sanic import Sanic, Websocket
from sanic.response import html
from sanic.server.protocols.websocket_protocol import WebSocketProtocol
import webview
from websockets.exceptions import ConnectionClosedOK, ConnectionClosedError
# -----------------------------------------------------------------------------
# Defaults
# -----------------------------------------------------------------------------
DEFAULT_PORT = 8000
HOST = "127.0.0.1"
# -----------------------------------------------------------------------------
# App setup
# -----------------------------------------------------------------------------
app = Sanic("ShellApp")
app.config.DEBUG = True
app.config.AUTO_RELOAD = True
app.config.UI_PORT = DEFAULT_PORT
clients = set()
app.ctx.tasks = []
# -----------------------------------------------------------------------------
# HTTP & WebSocket handlers
# -----------------------------------------------------------------------------
@app.route("/")
async def index(request):
return html(
f"""<!DOCTYPE html>
<html>
<head><title>Sanic Shell</title></head>
<body>
<h1>Messages (port {app.config.UI_PORT})</h1>
<ul id="messages"></ul>
<script>
const ws = new WebSocket(`ws://${{window.location.host}}/ws`);
ws.onmessage = e => {{
const li = document.createElement("li");
li.textContent = e.data;
document.getElementById("messages").appendChild(li);
}};
window.onbeforeunload = () => ws.close();
</script>
</body>
</html>"""
)
@app.websocket("/ws")
async def websocket_handler(request, ws: Websocket):
clients.add(ws)
try:
async for _ in ws:
pass
except (ConnectionClosedOK, ConnectionClosedError, asyncio.CancelledError):
...
finally:
clients.discard(ws)
# -----------------------------------------------------------------------------
# Lifecycle hooks
# -----------------------------------------------------------------------------
@app.listener("before_server_start")
async def setup(app, loop):
app.ctx.running = True
# start ticker + watcher
t1 = loop.create_task(tick_worker(app))
t2 = loop.create_task(process_watcher(app))
app.ctx.tasks.extend([t1, t2])
@app.listener("before_server_stop")
async def cleanup(app, loop):
# signal tasks to stop
app.ctx.running = False
# cancel & await them
for t in app.ctx.tasks:
t.cancel()
await asyncio.gather(*app.ctx.tasks, return_exceptions=True)
app.ctx.tasks.clear()
# remove any leftover clients
clients.clear()
# -----------------------------------------------------------------------------
# Background tasks
# -----------------------------------------------------------------------------
async def tick_worker(app):
count = 0
while app.ctx.running:
for ws in list(clients):
try:
await ws.send(f"tick {count}")
except Exception:
pass
count += 1
await asyncio.sleep(1)
async def process_watcher(app):
port = app.config.UI_PORT
proc = await asyncio.create_subprocess_exec(
sys.executable,
__file__,
"--ui",
"--port",
str(port),
stdout=asyncio.subprocess.PIPE,
stderr=asyncio.subprocess.STDOUT,
cwd=os.getcwd(),
)
try:
while app.ctx.running and proc.returncode is None:
line = await proc.stdout.readline()
if not line:
break
print(f"[UI] {line.decode().rstrip()}", file=sys.stdout)
finally:
if proc.returncode is None:
proc.kill()
await proc.wait()
if app.ctx.running:
app.stop()
# -----------------------------------------------------------------------------
# UI-only launcher
# -----------------------------------------------------------------------------
def launch_ui(port: int):
webview.create_window("Sanic Shell", f"http://localhost:{port}")
webview.start()
# -----------------------------------------------------------------------------
# Entrypoint
# -----------------------------------------------------------------------------
def main():
parser = argparse.ArgumentParser()
parser.add_argument("--ui", action="store_true", help="UIβonly mode")
parser.add_argument("--port", type=int, default=DEFAULT_PORT, help="Port")
args = parser.parse_args()
app.config.UI_PORT = args.port
if args.ui:
launch_ui(args.port)
else:
app.run(
host=HOST,
port=args.port,
protocol=WebSocketProtocol,
debug=True,
auto_reload=True,
)
if __name__ == "__main__":
main()
</code></pre>
|
<python><sanic>
|
2025-07-19 00:32:10
| 0
| 3,724
|
Darren Oakey
|
79,706,727
| 10,034,073
|
In what order do model_validators run in Pydantic?
|
<p>The Pydantic documentation explicitly <a href="https://docs.pydantic.dev/latest/concepts/validators/#ordering-of-validators" rel="nofollow noreferrer">describes the order of <code>field validators</code></a>, but what about <a href="https://docs.pydantic.dev/latest/concepts/validators/#model-validators" rel="nofollow noreferrer"><code>model_validators</code></a>?</p>
<ol>
<li>In what order do <code>model_validators</code> run?</li>
<li>Specifically, how are they ordered in an inheritance context, where both parent and child have a <code>model_validator</code>?</li>
<li>Is the order guaranteed, or is it some implementation detail that could be changed? (A small experiment I would use to check the current behaviour is sketched after this list.)</li>
</ol>
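<p>For context, this is the minimal experiment I would run to observe the order empirically; the printed order is exactly what I am asking about, so I am not asserting it here:</p>
<pre><code>from pydantic import BaseModel, model_validator

class Parent(BaseModel):
    x: int = 0

    @model_validator(mode="before")
    @classmethod
    def parent_before(cls, data):
        print("parent before")
        return data

    @model_validator(mode="after")
    def parent_after(self):
        print("parent after")
        return self

class Child(Parent):
    @model_validator(mode="after")
    def child_after(self):
        print("child after")
        return self

Child()  # the print order shows which validators run first in this version
</code></pre>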
|
<python><pydantic><pydantic-v2>
|
2025-07-18 20:43:36
| 0
| 444
|
kviLL
|
79,706,606
| 27,596,369
|
Tkinter widget formatting problems
|
<p>This is my code:</p>
<pre><code>from tkinter import filedialog, ttk
from tkinter import *
from PIL import Image, ImageTk
import numpy as np
########## GLOBAL VARIABLES ########
img_tk = None
####################### BACKEND ################################
# Upload Image Button
def upload_image():
    global img_tk
    file_path = filedialog.askopenfilename()
    if file_path:
        img = Image.open(file_path)
        img_tk = ImageTk.PhotoImage(img)
        if img_tk.width() < 1200 or img_tk.height() < 600:
            image_label.image = img_tk
            image_label.config(image=img_tk)
##################### GUI ##################################
window = Tk()
img_frame = Frame(window)
image_label = Label(img_frame, width=1200, height=600)
upload_button = Button(window, text='Upload Image', command=upload_image)
text_size_label = Label(window, text="Text size:", font=('calibre',12, 'bold'))
text_size = ttk.Combobox(window, font=('calibre',12, 'normal'), values=list(np.arange(8, 51, step=7)))
rotation_label = Label(window, text="Rotation:", font=('calibre',12, 'bold'))
rotation = Entry(window, font=('calibre',12, 'normal'))
opacity_label = Label(window, text="Opacity:", font=('calibre',12, 'bold'))
opacity = Scale(window, from_=0, to=100, orient='horizontal')
# Placements
img_frame.grid(column=3, row=2, columnspan=5, rowspan=10, padx=50)
image_label.pack()
upload_button.grid(column=1,row=1, padx=60, pady=60)
text_size_label.grid(column=1, row=2)
text_size.grid(row=2, column=2)
rotation_label.grid(row=3, column=1)
rotation.grid(row=3, column=2)
opacity_label.grid(row=4, column=1)
opacity.grid(row=4, column=2)
window.mainloop()
</code></pre>
<p>So there is an upload button which uploads an image into my <code>Frame</code> object. Now, the problem is that before I upload my image, my formatting goes all wrong, like this:
<a href="https://i.sstatic.net/TYZ0LUJj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TYZ0LUJj.png" alt="app without upload" /></a></p>
<p>and when I upload my image, it formats right:</p>
<p><a href="https://i.sstatic.net/2fSIyYQM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2fSIyYQM.png" alt="app with upload" /></a></p>
<p>Also, before uploading the image, the app almost covers the whole screen and after uploading, doesn't cover the whole screen.</p>
|
<python><tkinter>
|
2025-07-18 18:46:33
| 1
| 1,512
|
Aadvik
|
79,706,522
| 14,824,108
|
HDF5 Write Performance Degrades Over Time When Converting from LMDB (~3.7M entries)
|
<p>I'm experiencing significant slow-downs when converting data from LMDB to HDF5 format. While the conversion starts off quickly, performance degrades substantially partway through the process.</p>
<p>Specifically, my dataset contains around 3.7 million entries, and the first ~1.5 million entries are processed quite fast. However, after that point, the process becomes increasingly slow.</p>
<p>I'm already processing the data in chunks to manage memory and improve performance, but this hasn't solved the issue.</p>
<p>I'm suspecting the issue might relate to how I'm creating the HDF5 file. Are there any optimizations I can apply to reduce this performance degradation during dataset creation?</p>
<p>Any suggestions for improving performance, or explanations for why this slowdown happens, would be greatly appreciated.</p>
<p>Please let me know if you'd like me to share additional information.</p>
<p>Thanks in advance!</p>
<pre><code>import lmdb
import pickle
import zlib
import numpy as np
import h5py
from tqdm import tqdm
import os
import gc
import time
def convert_pretrain_to_h5(
lmdb_path,
output_dir,
train_keys_path,
val_keys_path,
test_keys_path
):
start_time = time.time()
os.makedirs(output_dir, exist_ok=True)
# Load split keys (preserving original indices)
train_keys = np.load(train_keys_path, allow_pickle=True)
val_keys = np.load(val_keys_path, allow_pickle=True)
test_keys = np.load(test_keys_path, allow_pickle=True)
# Save the original split indices directly - this preserves your exact splits
splits = {
'train': list(train_keys),
'val': list(val_keys),
'test': list(test_keys)
}
# Save split indices with original values
with open(os.path.join(output_dir, 'split_indices.p'), 'wb') as f:
pickle.dump(splits, f)
print(f"Saved original split indices with {len(splits['train'])} train, {len(splits['val'])} val, {len(splits['test'])} test samples")
# Now use all_keys with original values for processing
all_keys = list(train_keys) + list(val_keys) + list(test_keys)
total_samples = len(all_keys)
print(f"Total keys from split files: {len(all_keys)}")
# Create H5 files
h5_path = os.path.join(output_dir, 'substructure_graphs.h5')
smiles_list = []
# Get available LMDB keys for verification
env = lmdb.open(lmdb_path, readonly=True, lock=False, readahead=False, meminit=False)
# Create H5 file
with h5py.File(h5_path, 'w') as graphs_h5:
# Create groups for graph data
node_features_group = graphs_h5.create_group('node_features')
edge_index_group = graphs_h5.create_group('edge_index')
num_nodes_dset = graphs_h5.create_dataset('num_nodes', (total_samples,), dtype=np.int32)
num_edges_dset = graphs_h5.create_dataset('num_edges', (total_samples,), dtype=np.int32)
# Process in chunks for memory efficiency
chunk_size = 100000
with env.begin(write=False) as txn:
for start in range(0, total_samples, chunk_size):
end = min(start + chunk_size, total_samples)
print(f"Processing chunk {start} to {end}...")
# Process each sample in this chunk
for idx, i in enumerate(tqdm(all_keys[start:end])):
key = f"{i}".encode("ascii")
try:
# Get data from LMDB
data = txn.get(key)
# Decompress and load
sample = pickle.loads(zlib.decompress(data))
# Extract SMILES
if 'smiles' in sample:
smiles = sample['smiles']
if isinstance(smiles, bytes):
smiles = smiles.decode('utf-8')
smiles_list.append(smiles)
# Store graph data
if 'node_features' in sample and 'edge_index' in sample:
# Node features
node_features = sample['node_features']
if not isinstance(node_features, np.ndarray):
node_features = np.array(node_features, dtype=np.int16)
node_features_group.create_dataset(
str(i),
data=node_features,
compression="gzip" # Faster without compression
)
# Edge indices
edge_index = sample['edge_index']
if not isinstance(edge_index, np.ndarray):
edge_index = np.array(edge_index, dtype=np.int32)
edge_index_group.create_dataset(
str(i),
data=edge_index,
compression="gzip" # Faster without compression
)
# Graph metadata
if 'num_nodes' in sample:
num_nodes_dset[i] = sample['num_nodes']
if 'num_edges' in sample:
num_edges_dset[i] = sample['num_edges']
except Exception as e:
print(f"Error processing sample {i}: {e}")
raise e
# Save SMILES after each chunk for safety
np.save(os.path.join(output_dir, 'smiles.npy'), np.array(smiles_list, dtype=object))
print(f"Saved progress: {len(smiles_list)}/{total_samples} samples")
# After processing a chunk (just before next chunk starts)
graphs_h5.flush() # Force HDF5 to write buffered data to disk
gc.collect() # Clear unused memory (buffers, zlib, arrays)
# Final save of SMILES
np.save(os.path.join(output_dir, 'smiles.npy'), np.array(smiles_list, dtype=object))
print(f"Conversion complete! Files saved to {output_dir}")
print(f"Total samples: {len(smiles_list)}")
print(f"Total time: {time.time() - start_time:.2f} seconds")
</code></pre>
|
<python><hdf5><lmdb>
|
2025-07-18 17:11:25
| 0
| 676
|
James Arten
|
79,706,486
| 27,596,369
|
Is there a way for empty tkinter frame objects to take up space?
|
<p>I have an app in which you can upload a picture into an image label object, but until I upload the picture, the image label just doesn't take up space. So I put the image in a frame, but it still didn't take up space until I uploaded the image. Is there any way around this?</p>
<p>Here is my code:</p>
<pre><code>from tkinter import filedialog
from tkinter import *
from PIL import Image, ImageTk
####################### FUNCTIONS ################################
def upload_image():
    file_path = filedialog.askopenfilename()
    if file_path:
        img = Image.open(file_path)
        img_tk = ImageTk.PhotoImage(img)
        image_label.image = img_tk
        image_label.config(image=img_tk)
##################### GUI ##################################
window = Tk()
img_frame = Frame(window, width=500, height=500)
image_label = Label(img_frame)
upload_button = Button(window, text='Upload Image', command=upload_image)
# Other UI things...
img_frame.grid(column=3, row=2, columnspan=5, rowspan=10, padx=50)
image_label.pack()
upload_button.grid(column=1,row=1, padx=60, pady=60)
window.mainloop()
</code></pre>
<p>Edit: I have many more elements too</p>
<pre><code>text_size_label = Label(window, text="Text size:", font=('calibre',12, 'bold'))
text_size = ttk.Combobox(window, font=('calibre',12, 'normal'), values=list(np.arange(8, 51, step=7)))
rotation_label = Label(window, text="Rotation:", font=('calibre',12, 'bold'))
rotation = Entry(window, font=('calibre',12, 'normal'))
opacity_label = Label(window, text="Opacity:", font=('calibre',12, 'bold'))
opacity = Scale(window, from_=0, to=100, orient='horizontal')
text_label = Label(window, text='Text:', font=('calibre',12, 'bold'))
text = Entry(window, font=('calibre',12, 'normal'))
font_label = Label(window, text='Font:', font=('calibre', 12, 'bold'))
font = Entry(window, font=('calibre',12, 'normal'))
lines_label = Label(window, text='Lines:', font=('calibre', 12, 'bold'))
lines = Entry(window, font=('calibre', 12, 'normal'))
per_line_label = Label(window, text='Watermarks per Line:', font=('calibre', 12, 'bold'))
per_line = Entry(window, font=('calibre', 12, 'normal'))
# Placements
img_frame.grid(column=3, row=2, columnspan=5, rowspan=10, padx=50)
image_label.pack()
upload_button.grid(column=1,row=1, padx=60, pady=60)
text_size_label.grid(column=1, row=2)
text_size.grid(row=2, column=2)
rotation_label.grid(row=3, column=1)
rotation.grid(row=3, column=2)
opacity_label.grid(row=4, column=1)
opacity.grid(row=4, column=2)
text_label.grid(row=5, column=1)
text.grid(row=5, column=2)
font_label.grid(row=6, column=1)
font.grid(row=6, column=2)
lines_label.grid(row=7, column=1)
lines.grid(row=7, column=2)
per_line_label.grid(row=8, column=1)
per_line.grid(row=8, column=2)
</code></pre>
|
<python><tkinter>
|
2025-07-18 16:31:22
| 1
| 1,512
|
Aadvik
|
79,706,357
| 4,504,711
|
How to concurrently remove lines from a file in Python?
|
<p>I have a cluster of compute nodes, each node with many CPUs. I want them to execute commands located in a file, one command per line. The drive where the command file is located is mounted on all compute nodes.</p>
<p>The execution times of these commands vary and to improve load balancing, I am planning to launch as many workers on each node as the number of CPUs on that node. The task of each worker would then be to retrieve one line from the command file, delete that line from the file and execute that line of command; and repeat until no more lines are left.</p>
<p>I am planning to implement these workers with Python's <code>multiprocessing</code> module. The problem is that eventually multiple workers will attempt to write to (i.e. remove lines from) the same file. I was wondering whether there is a simple solution to make workers wait for each other, or whether the operating system takes care of that automatically. I believe some sort of file lock could work; however, note that there are multiple processes from <strong>multiple different compute nodes</strong> writing to the same file.</p>
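<p>This is roughly the kind of lock-and-pop helper I have in mind for each worker, using an advisory <code>fcntl</code> lock; I am unsure whether such locks are reliable when the file lives on a drive mounted across several nodes, which is really the core of my question:</p>
<pre><code>import fcntl

def pop_next_command(path: str):
    # Take an exclusive lock, remove the first line from the file and return it.
    with open(path, "r+") as f:
        fcntl.flock(f, fcntl.LOCK_EX)
        try:
            lines = f.readlines()
            if not lines:
                return None  # no commands left
            f.seek(0)
            f.writelines(lines[1:])
            f.truncate()
            return lines[0].rstrip("\n")
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)
</code></pre>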
|
<python><locking><cluster-computing><python-multiprocessing>
|
2025-07-18 14:44:22
| 1
| 2,842
|
Botond
|
79,706,173
| 6,805,396
|
Rotate a label in Plotly Treemap with JavaScript
|
<p>Sometimes there are narrow bricks on treemaps. Plotly decreases the font size of such labels, so you cannot read them. But another way is to make these labels horizontal. As far as I understand, the only way to rotate labels in Plotly is using JavaScript. I'm new to it; please help me understand how to select a particular node with JS.
Here's the example code, where I try to rotate the label 'a1' (but this fails):</p>
<pre><code>import plotly.express as px

data = {'level1': ['A', 'A', 'B', 'B'], 'level2': ['a1', 'a2', 'b1', 'b2']}
fig = px.treemap(data, path=['level1', 'level2'])
js = """
function rotateLabel(){
const nodes = gd.querySelectorAll('slicetext');
for (const node of nodes){
const label = node.querySelector('[data-unformatted="a1"]');
label.style.transform = 'rotate(90deg)';
}
}
const gd = document.querySelector('.plotly-graph-div');
gd.on('plotly_afterplot', rotateLabel.bind(gd));
gd.emit('plotly_afterplot');
"""
fig.show(post_script=[js])
</code></pre>
|
<javascript><python><plotly><treemap><plotly.js>
|
2025-07-18 12:31:44
| 1
| 609
|
Vlad
|
79,706,145
| 11,232,091
|
Align bars on different axes on top on each other in matplotlib
|
<p>I have a df and I am trying to create horizontal bar charts.
Currently, my code looks like the below.</p>
<pre><code>
import pandas as pd
import matplotlib.pyplot as plt
data = {'Name': ["A", "B", "C", "D",'E'],
'Todo': [4, 5, 6, 7, 3],
'Done': [6, 2, 6, 8, 6],
'TimeRemaining': [4, 4, 4, 4, 4]}
df = pd.DataFrame(data)
fig, ax1 = plt.subplots(figsize=(10, 8))
ax2 = ax1.twinx()
# Get the names of columns at indices 1 to 10
selected_column_names = df.columns[0:2].to_list()
ax = df.plot(kind='barh',y=selected_column_names, stacked=True, ax=ax2, )
for c in ax.containers:
    # Optional: if the segment is small or 0, customize the labels
    labels = [v.get_width() if v.get_width() > 0 else '' for v in c]
    # remove the labels parameter if it's not needed for customized labels
    ax.bar_label(c, fmt=lambda x: f'{x:.0f}' if x > 0 else '', label_type='center')
df.set_index('Name').plot(kind='barh', y=["TimeRemaining"], color='whitesmoke', alpha=0.3,ax=ax1, align='center', width=0.8, edgecolor='blue',)
# Hide y-axis tick labels
ax2.tick_params(axis='y', labelright=False, right=False)
ax2.set_yticklabels([])
ax1.get_legend().remove()
plt.title('Status Chart')
plt.tight_layout()
plt.show()
</code></pre>
<p>This results in a plot like the one below, and as you can see, the dark blue bars are not centered on the semi-transparent bars (the two sets of bars are on different axes):
<a href="https://i.sstatic.net/EDiFviEZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EDiFviEZ.png" alt="enter image description here" /></a></p>
<p>If I put them on the same axis by changing <code>ax1</code> to <code>ax2</code> on the second plot like so <code>df.set_index('Name').plot(kind='barh', y=["TimeRemaining"], color='whitesmoke', alpha=0.3,ax=ax2, align='center', width=0.8, edgecolor='blue',)</code> they align perfectly but the names are not visible any more! I also get the legend for "TimeRemaining" which I don't want and the semi transparent bars are in the front now.</p>
<p><a href="https://i.sstatic.net/p32MvCfg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/p32MvCfg.png" alt="enter image description here" /></a></p>
<p>How do I fix the chart such that both bars are on top & I also have the name shown in the y-axis on the left?</p>
|
<python><matplotlib>
|
2025-07-18 12:05:21
| 1
| 8,117
|
moys
|
79,706,085
| 6,385,767
|
How to efficiently find shortest and longest paths between node types in Dgraph?
|
<p>I'm trying to find the <strong>shortest</strong> and <strong>longest</strong> path between two node types across the entire graph in <strong>Dgraph</strong>, similar to how it's done using <strong>APOC</strong> in <strong>Neo4j</strong>.</p>
<p>In <strong>Neo4j</strong>, I can use a single query like this:</p>
<pre class="lang-none prettyprint-override"><code>MATCH (start: `ntype1`)
CALL apoc.path.expandConfig(start, {
labelFilter: '>ntype2',
minLevel: 1,
uniqueness: 'NODE_GLOBAL'
})
YIELD path
RETURN min(length(path)) as min, max(length(path)) as max
</code></pre>
<p>This efficiently returns the shortest and longest path lengths between any nodes of type <code>ntype1</code> to <code>ntype2</code> in one go.</p>
<hr />
<h3>What I'm doing in Dgraph:</h3>
<p>Since Dgraph doesn't support such a direct query, I'm first fetching all UIDs of source and target node types, and then looping over combinations to run multiple <code>shortest</code> queries:</p>
<pre class="lang-py prettyprint-override"><code>combined_query = f"""
{{
sources(func: type({ntype1})) {{ uid RELATED_TO{{uid}} ~RELATED_TO{{uid}} }}
targets(func: type({ntype2})) {{ uid RELATED_TO{{uid}} ~RELATED_TO{{uid}} }}
}}
"""
result = dgraphManager.query(combined_query)
source_uids = [x['uid'] for x in result['sources'] if 'RELATED_TO' in x or '~RELATED_TO' in x]
target_uids = [x['uid'] for x in result['targets'] if 'RELATED_TO' in x or '~RELATED_TO' in x]
uid_list = [{"from": s, "to": t} for s in source_uids for t in target_uids]
query_parts = []
result_parts = []
for i, item in enumerate(uid_list, 1):
query_parts.append(f"""
var(func: uid({item['from']})) {{
start{i} as uid
}}
var(func: uid({item['to']})) {{
end{i} as uid
}}
path{i} as shortest(from: uid(start{i}), to: uid(end{i})) {{ RELATED_TO ~RELATED_TO }}
""")
result_parts.append(f"result{i}(func: uid(path{i})) {{ uid }}")
final_query = "{\n" + "\n".join(query_parts) + "\n" + "\n".join(result_parts) + "\n}"
data = dgraphManager.query(final_query)
path_lengths = [len(val) for key, val in data.items() if val]
</code></pre>
<h3>Problem:</h3>
<p>This approach works, but it is very inefficient when handling numerous nodes. For example, if <code>ntype1</code> has 100 nodes and <code>ntype2</code> has 50 nodes, it results in 5,000 shortest-path queries just to determine path lengths between the two types. Sometimes it also causes query timeout errors.</p>
<hr />
<h3>Question:</h3>
<p>Is there a <strong>more efficient way</strong> in Dgraph to compute:</p>
<ul>
<li>The <strong>shortest</strong> path between <strong>any node of type A</strong> to <strong>any node of type B</strong></li>
<li>The <strong>longest</strong> such path</li>
</ul>
<p>Ideally similar to how APOC works in Neo4j, with just one query.</p>
<p>Any insights or optimizations would be highly appreciated!</p>
|
<python><graph-theory><dql><dgraph>
|
2025-07-18 11:07:17
| 1
| 642
|
Ravindra Gupta
|
79,706,023
| 16,389,095
|
Flet app for Supabase account management: unable to reset password for already registered email
|
<p>I developed a simple authentication UI (login, registration, and password reset) for a Flet app using Supabase as the backend for user management. The app entry point is the main function which sets up the Supabase client, creates page/view instances, and configures routing.</p>
<pre><code>def main(page: ft.Page):
theme = ft.Theme()
theme.page_transitions.macos = ft.PageTransitionTheme.NONE
page.theme = theme
page.bgcolor="#202020"
supabase: Client = get_supabase_object()
create: ft.View = CreatePage(page, supabase)
login: ft.View = LogInPage(page, supabase)
reset_password: ft.View = ResetPasswordPage(page, supabase)
update_password: ft.View = UpdatePasswordPage(page, supabase)
main_view: MainPage = MainPage(page, supabase)
# router method
def route_change (route)-> None:
page.views.clear()
if page.route == create.route:
page.views.append(create)
if page.route == login.route:
page.views.append(login)
if page.route == reset_password.route:
page.views.append(reset_password)
if page.route == update_password.route:
page.views.append(update_password)
if page.route == main_view.route:
page.views.append(main_view)
page.update()
page.on_route_change = route_change
page.go(create.route)
</code></pre>
<p>The app starts on the registration page. Users can register or log in. If registration succeeds, users are taken to the login page. After login, users see the main page and can log out. From the login page, users can also reset their account password: the class implements a simple UI with a TextField for the email and a button that redirects to the update-password page. Note the redirect argument in the reset_password_for_email() method; I added this redirect URL to the authorized redirect URLs of my Supabase project.</p>
<pre><code>class ResetPasswordPage(ft.View):
def __init__(self, page: ft.Page, supabase: Client) -> None:
super().__init__(
route="/reset_password",
vertical_alignment="center",
horizontal_alignment="center",
)
self.page = page
self.supabase = supabase
self.body = Body(
title="Supabase - Reset Password",
btn_name = "Reset password",
function = self.reset_password,
contains_password=False,
)
self.controls = [self.body]
def reset_password(self, event=None) -> None:
email = self.body.email.value
if email:
try:
self.supabase.auth.reset_password_for_email(email, {"redirect_to": "http://localhost:3000/update_password"},)
show_snackbar(self.page, "Password reset email sent! Please check your inbox.", color="#55a271")
except Exception as e:
show_snackbar(self.page, f"Failed to send reset email: {str(e)}", color="#e57373")
else:
show_snackbar(self.page, "Please insert your email", color="#e57373")
</code></pre>
<p>Finally, the UpdatePasswordPage class features a simple UI that displays a text message, regardless of whether the token is successfully recognized or not.</p>
<pre><code>class UpdatePasswordPage(ft.View):
def __init__(self, page: ft.Page, supabase: Client) -> None:
super().__init__(
route="/update_password",
vertical_alignment="center",
horizontal_alignment="center",
)
self.page = page
self.supabase = supabase
        # Extract the token from the query string
self.token = self.page.route.split("?token=")[1] if "?token=" in self.page.route else None
if self.token:
print(self.token)
self.controls = [ft.Text("Valid token.")]
else:
self.controls = [ft.Text("Invalid or expired token.")]
########################################
########################################
# AT THIS TIME, IMPLEMENTED BUT NOT USED
########################################
########################################
def update_password(self, event=None) -> None:
new_password = self.controls[1].value
if self.token:
try:
                # Use Supabase to update the password with the token
self.supabase.auth.update_user(self.token, password=new_password)
show_snackbar(self.page, "Password updated successfully!")
self.page.route = "/login"
except Exception as e:
show_snackbar(self.page, f"Error: {e}")
else:
show_snackbar(self.page, "Invalid token!")
</code></pre>
<p>Even if I set the port with</p>
<pre><code>ft.app(target=main, port=3000, view=ft.AppView.WEB_BROWSER)
</code></pre>
<p>I must run the app with this command:</p>
<pre><code>flet run -w --port 3000 projectName
</code></pre>
<p>which ensures it runs as a web page on the correct predefined port.
The problem concerns the redirect phase. Once the user clicks the reset-password button on the login page, the update-password page appears for less than a second (actually I only saw the URL change) and I am immediately redirected to the first page (the registration page) without any error or exception. It seems that I am not authorized to access the update page. Moreover, I verified that the token sent by email is correctly extracted from the URL (I print it to the console). What am I missing?</p>
|
<python><supabase><flet>
|
2025-07-18 10:12:00
| 1
| 421
|
eljamba
|
79,705,864
| 6,074,182
|
How is import os.path possible?
|
<p>Since <code>os</code> is a module instead of a package, <code>import os.path</code> should fail. For comparison:</p>
<pre class="lang-none prettyprint-override"><code>>>> import os.sys
Traceback (most recent call last):
File "<python-input-0>", line 1, in <module>
import os.sys
ModuleNotFoundError: No module named 'os.sys'; 'os' is not a package
</code></pre>
<p>Note that <code>os.sys</code> is available:</p>
<pre class="lang-none prettyprint-override"><code>>>> import os
>>> os.sys
<module 'sys' (built-in)>
</code></pre>
<p>So, how does <code>import os.path</code> work? Are there any other cases of something like this?</p>
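<p>For what it's worth, a minimal check along the lines of my current guess (assumption on my part: the <code>os</code> module registers its platform-specific path module under the name <code>os.path</code> in <code>sys.modules</code> when it is imported):</p>
<pre class="lang-py prettyprint-override"><code>import sys
import os

# After `import os`, an entry for 'os.path' already exists in sys.modules,
# so a later `import os.path` can succeed even though os is not a package.
print('os.path' in sys.modules)            # True
print(sys.modules['os.path'] is os.path)   # True
</code></pre>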
|
<python><python-import><python-internals><os.path>
|
2025-07-18 08:00:12
| 0
| 2,445
|
Aemyl
|
79,705,839
| 2,265,497
|
Why python prometheus client collectors create metric object every time when collect method is invoked
|
<p><a href="https://github.com/prometheus/client_python/blob/master/prometheus_client/gc_collector.py" rel="nofollow noreferrer">https://github.com/prometheus/client_python/blob/master/prometheus_client/gc_collector.py</a></p>
<pre><code>import gc
import platform
from typing import Iterable
from .metrics_core import CounterMetricFamily, Metric
from .registry import Collector, CollectorRegistry, REGISTRY
class GCCollector(Collector):
"""Collector for Garbage collection statistics."""
def __init__(self, registry: CollectorRegistry = REGISTRY):
if not hasattr(gc, 'get_stats') or platform.python_implementation() != 'CPython':
return
registry.register(self)
def collect(self) -> Iterable[Metric]:
collected = CounterMetricFamily(
'python_gc_objects_collected',
'Objects collected during gc',
labels=['generation'],
)
uncollectable = CounterMetricFamily(
'python_gc_objects_uncollectable',
'Uncollectable objects found during GC',
labels=['generation'],
)
collections = CounterMetricFamily(
'python_gc_collections',
'Number of times this generation was collected',
labels=['generation'],
)
for gen, stat in enumerate(gc.get_stats()):
generation = str(gen)
collected.add_metric([generation], value=stat['collected'])
uncollectable.add_metric([generation], value=stat['uncollectable'])
collections.add_metric([generation], value=stat['collections'])
return [collected, uncollectable, collections]
</code></pre>
<p>Instead of reusing the metric objects, the collector creates them every time just to record one value. What if I need to scrape the metrics every second? Does that mean the objects are created every second? That does not seem optimal at all. I suspect that under certain circumstances this may contribute more to memory allocation over time than the application itself does.</p>
<p>What am I missing? Is this by design?</p>
|
<python><prometheus><monitoring>
|
2025-07-18 07:26:58
| 1
| 2,057
|
slesh
|
79,705,666
| 14,373,886
|
MCPToolConversionError: Failed to get tools from MCP server: 404
|
<p>I am using python_a2a version 0.5.9. I am trying to use <code>to_langchain_tool</code> from the <code>python_a2a.langchain</code> package, but I get the following error while trying to convert an MCP server into a LangChain tool.</p>
<p>code:</p>
<pre><code>from python_a2a.langchain import to_langchain_tool
add = to_langchain_tool("http://localhost:8080/mcp", "add")
print(add)
</code></pre>
<p>Error:</p>
<pre><code>MCPToolConversionError Traceback (most recent call last)
File ~/Documents/project/venv/lib/python3.12/site-packages/python_a2a/langchain/mcp.py:492, in to_langchain_tool(mcp_url, tool_name)
491 if tools_response.status_code != 200:
--> 492 raise MCPToolConversionError(f"Failed to get tools from MCP server: {tools_response.status_code}")
494 available_tools = tools_response.json()
MCPToolConversionError: Failed to get tools from MCP server: 404
During handling of the above exception, another exception occurred:
MCPToolConversionError Traceback (most recent call last)
Cell In[2], line 2
1 from python_a2a.langchain import to_langchain_tool
----> 2 add = to_langchain_tool("http://localhost:8080/mcp", "add")
3 print(add)
File ~/Documents/project/venv/lib/python3.12/site-packages/python_a2a/langchain/mcp.py:498, in to_langchain_tool(mcp_url, tool_name)
496 except Exception as e:
497 logger.error(f"Error getting tools from MCP server: {e}")
--> 498 raise MCPToolConversionError(f"Failed to get tools from MCP server: {str(e)}")
500 # Filter tools if a specific tool is requested
501 if tool_name is not None:
MCPToolConversionError: Failed to get tools from MCP server: Failed to get tools from MCP server: 404
</code></pre>
<p>which basically means that it is unable to find the MCP server path.</p>
<p>Following is my code for MCP Server:</p>
<p>add_mcp.py</p>
<pre><code>from fastmcp import FastMCP
mcp = FastMCP("Demo", stateless_http=True)
@mcp.tool
def add(a: int, b:int) -> int:
print(a + b)
return a + b
if __name__ == "__main__":
mcp.run()
</code></pre>
<p>I am able to use <code>fastmcp.client</code> to call the MCP server; here is the code for that:</p>
<pre><code>from fastmcp.client import Client
client = Client("http://localhost:8080/mcp")
async with client:
tools = await client.list_tools()
print(tools)
result = await client.call_tool("add", arguments={"a": 10, "b": 20})
print(result.content)
</code></pre>
<p>Am I making a mistake in how I use the python_a2a package to convert an MCP server into LangChain tools?</p>
<p>Following are the version details:</p>
<pre><code>fastmcp==2.10.5
langchain==0.3.26
langchain-anthropic==0.3.17
langchain-core==0.3.69
langchain-mcp-adapters==0.1.9
langchain-ollama==0.3.5
langchain-tavily==0.2.10
langchain-text-splitters==0.3.8
python-a2a==0.5.9
</code></pre>
|
<python><langchain><model-context-protocol>
|
2025-07-18 04:09:55
| 1
| 540
|
JayantSeth
|
79,705,629
| 14,373,886
|
cannot import name 'LangChainBridge' from 'python_a2a.langchain'
|
<p>I am using python_a2a version 0.5.9 and I am trying to use its LangChain integration, but I am unable to import ToolServer, LangChainBridge, or even AgentFlow.</p>
<p>I am referring to this document of Python A2A: <a href="https://python-a2a.readthedocs.io/en/latest/guides/langchain.html" rel="nofollow noreferrer">https://python-a2a.readthedocs.io/en/latest/guides/langchain.html</a></p>
<p>error:</p>
<pre><code>Python 3.12.0 (main, Oct 3 2023, 01:27:23) [Clang 17.0.1 ] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from langchain.tools import BaseTool
>>> from python_a2a.langchain import AgentFlow
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: cannot import name 'AgentFlow' from 'python_a2a.langchain' (/home/jayant/Documents/project/venv/lib/python3.12/site-packages/python_a2a/langchain/__init__.py)
>>> from python_a2a.langchain import ToolServer
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: cannot import name 'ToolServer' from 'python_a2a.langchain' (/home/jayant/Documents/project/venv/lib/python3.12/site-packages/python_a2a/langchain/__init__.py)
>>> from python_a2a.langchain import LangChainBridge
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: cannot import name 'LangChainBridge' from 'python_a2a.langchain' (/home/jayant/Documents/project/venv/lib/python3.12/site-packages/python_a2a/langchain/__init__.py)
>>> exit()
(venv) user@home:~/Documents/project$ python --version
Python 3.12.0
(venv) user@home:~/Documents/project$ uv pip freeze | grep python-a2a
Using Python 3.12.0 environment at: venv
python-a2a==0.5.9
(venv) user@home:~/Documents/project$ pip freeze | grep langchain
langchain==0.3.26
langchain-anthropic==0.3.17
langchain-core==0.3.69
langchain-mcp-adapters==0.1.9
langchain-ollama==0.3.5
langchain-tavily==0.2.10
langchain-text-splitters==0.3.8
</code></pre>
|
<python><langchain><model-context-protocol>
|
2025-07-18 03:07:15
| 2
| 540
|
JayantSeth
|
79,705,542
| 27,596,369
|
Tkinter image label object is not going to the desired spot
|
<p>I have this code with an upload image button and an image label.</p>
<pre><code>import tkinter
from tkinter import filedialog, Label, Button
from PIL import Image, ImageTk
# Functions
def upload_image():
file_path = filedialog.askopenfilename()
if file_path:
img = Image.open(file_path)
img_tk = ImageTk.PhotoImage(img)
image_label.image = img_tk
image_label.config(image=img_tk)
window = tkinter.Tk()
window.geometry("5000x3000")
image_label = Label(window, width=2500, height=1000)
image_label.grid(column=2,row=2)
upload_button = Button(window, text='Upload Image', command=upload_image)
upload_button.grid(column=1,row=1, padx=60, pady=60)
window.mainloop()
</code></pre>
<p>Now I expected the image and the upload button to be pretty close, but the image label is on the other side of the app. <a href="https://i.sstatic.net/e8nwtmVv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/e8nwtmVv.png" alt="App" /></a></p>
<p>If I add a bigger image, the whole screen gets covered by it.</p>
<p>Since I am new to Tkinter and don't know that much, I thought that by defining a width and height for the image label I would avoid big images being too big and small images being too small. My desired output is for the button and the image to be side by side. Also, when I try to add an entry and a label using this:</p>
<pre><code> text_size_label = Label(window, text="Text size:", font=('calibre',10, 'bold'))
text_size_label.grid(row=1, column=2)
text_size = tkinter.Entry(window, font=('calibre',12, 'normal'))
text_size.grid(column=1, row=2)
</code></pre>
<p>this is what happens:
<a href="https://i.sstatic.net/ZSkS2omS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZSkS2omS.png" alt="app2" /></a></p>
|
<python><user-interface><tkinter>
|
2025-07-18 00:48:49
| 0
| 1,512
|
Aadvik
|
79,705,527
| 11,628,437
|
Why does `np.random.default_rng()` prevent seed change compared to `numpy.random.seed`?
|
<p>I recently learned about the downsides of using <code>numpy.random.seed(seed)</code>: other code, for example inside an imported library, can modify the global random state behind my back. I read an online <a href="https://builtin.com/data-science/numpy-random-seed" rel="nofollow noreferrer">article</a> that suggested I use <code>np.random.default_rng()</code>, but I am not sure how that works.</p>
<p>I came across <a href="https://stackoverflow.com/questions/68222756/how-do-i-globally-seed-np-random-default-rng-for-unit-tests">this</a> related SO post but it did not help my understanding.</p>
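<p>For context, a small sketch of how I currently understand the difference (just my understanding, happy to be corrected): the Generator returned by <code>default_rng</code> carries its own state, while <code>np.random.seed</code> sets one global state that any other code can reset.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

# Legacy API: one hidden global state; any library that calls np.random.seed
# or draws from np.random changes what my subsequent np.random calls return.
np.random.seed(123)
print(np.random.random(3))

# Generator API: the state lives in this object, so only code that holds `rng`
# can advance or reseed it.
rng = np.random.default_rng(123)
print(rng.random(3))
</code></pre>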
|
<python><python-3.x><numpy>
|
2025-07-18 00:11:09
| 2
| 1,851
|
desert_ranger
|
79,705,516
| 6,480,859
|
With Plotly's Python fig.to_html() function, don't encode numbers as binary
|
<ul>
<li>Python: 3.12.11</li>
<li>Plotly: 6.0.1</li>
<li>Pandas: 2.2.3</li>
<li>Numpy: 2.3.1</li>
</ul>
<p>When Plotly's Python library exports figures to HTML, it sometimes converts data to binary, which browsers aren't rendering correctly. I can't seem to find any option that forces Plotly to export the data as plain integers or to prevent it from exporting in binary format.</p>
<pre><code>import pandas as pd
import plotly.express as px
data = [
{
"date": "2023-01-01",
"value": 10
},
{
"date": "2023-02-01",
"value": 15
},
{
"date": "2023-03-01",
"value": 12
}
]
df = pd.DataFrame(data)
fig = px.line(x=df['date'].to_list(), y=df['value'].to_list())
html_snippet = fig.to_html(
include_plotlyjs=False,
full_html=False,
)
print(html_snippet)
</code></pre>
<p>If you look in this output for <code>"y":</code> you'll see <code>"y":{"dtype":"i1","bdata":"Cg8M"}</code>. Here, <code>Cg8M</code> is presumably a binary representation of 10, 15, 12. I expect this instead to be something like <code>"y": array([10, 15, 12])</code>.</p>
<p>Otherwise, browsers are confused by the binary and render this html as if the y-axis values were 0,1,2 instead of 10,15,12: <a href="https://i.sstatic.net/nNX9jUPN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nNX9jUPN.png" alt="chart with y-axis values 0, 1, 2 instead of 10,15,12" /></a></p>
|
<python><html><plotly><plotly-express>
|
2025-07-17 23:48:35
| 1
| 375
|
relizt
|
79,705,236
| 388,520
|
How do I control the size of margins around a cartopy map projection?
|
<p>I'm trying to plot a bunch of data on a map of the sky in various projections, using matplotlib + cartopy, and the margins around the maps are always too large and none of the controls I can find seem to help. Example (annotations added after rendering):</p>
<p><a href="https://i.sstatic.net/Q9rXdSnZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Q9rXdSnZ.png" alt="enter image description here" /></a></p>
<p>I would like to make the outer margin of the entire image, including the colorbar, be something like 2.5mm, and the gap between the colorbar and the image be something like 5mm (these numbers will need to be tweaked of course), and then the map should fill the rest of the available space.</p>
<p>Note that I may need to turn this into a 2-subplot figure, two maps sharing a colorbar, each with a label, and possibly also add meridians and parallels with labels, so a solution that works regardless of how much 'furniture' each <code>Axes</code> has is strongly preferred.</p>
<p>Part of the problem seems to be that each map projection has its own desired aspect ratio, and if that doesn't agree with the aspect ratio of the figure then spacing will be added to preserve said aspect ratio, but since the desired aspect ratio is not documented anywhere and the width of the colorbar is unpredictable, knowing that doesn't actually help me any. Also, this really is only <em>part</em> of the problem; if I hold the overall figure height constant and vary the width over a range of values, the figure has the <em>least</em> amount of unwanted white space when the figure's aspect ratio is just so, but it still has unwanted white space.</p>
<p>Here's one version of the code I have now. Please note how each projection gets rendered with different margins.</p>
<pre><code>import cartopy.crs as ccrs
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.layout_engine import ConstrainedLayoutEngine
def main():
cel_sphere = ccrs.Globe(datum=None, ellipse=None,
semimajor_axis=180/np.pi,
semiminor_axis=180/np.pi)
sky_plate = ccrs.PlateCarree(globe=cel_sphere)
ra, dec = np.mgrid[-179.5:180:1, -89.5:90:1]
fake_data = generate_perlin_noise_2d(ra.shape, (1, 1))
for label, proj in [("ee", ccrs.EqualEarth),
("mw", ccrs.Mollweide),
("lc", ccrs.LambertCylindrical)]:
try:
fig, ax = plt.subplots(
figsize=(20, 10),
layout=ConstrainedLayoutEngine(
h_pad=0, w_pad=0, hspace=0, wspace=0
),
subplot_kw={
"xlim": (-180, 180),
"ylim": (-90, 90),
"projection": proj(globe=cel_sphere)
},
)
ctr = ax.contourf(ra, dec, fake_data,
transform=sky_plate,
cmap="Greys")
fig.colorbar(ctr, shrink=0.5, pad=0.02)
fig.savefig(f"layout_test_{label}.png")
finally:
plt.close(fig)
# stolen from https://pvigier.github.io/2018/06/13/perlin-noise-numpy.html
def generate_perlin_noise_2d(shape, res):
def f(t):
return 6*t**5 - 15*t**4 + 10*t**3
delta = (res[0] / shape[0], res[1] / shape[1])
d = (shape[0] // res[0], shape[1] // res[1])
grid = np.mgrid[0:res[0]:delta[0],0:res[1]:delta[1]].transpose(1, 2, 0) % 1
# Gradients
angles = 2*np.pi*np.random.rand(res[0]+1, res[1]+1)
gradients = np.dstack((np.cos(angles), np.sin(angles)))
g00 = gradients[0:-1,0:-1].repeat(d[0], 0).repeat(d[1], 1)
g10 = gradients[1:,0:-1].repeat(d[0], 0).repeat(d[1], 1)
g01 = gradients[0:-1,1:].repeat(d[0], 0).repeat(d[1], 1)
g11 = gradients[1:,1:].repeat(d[0], 0).repeat(d[1], 1)
# Ramps
n00 = np.sum(grid * g00, 2)
n10 = np.sum(np.dstack((grid[:,:,0]-1, grid[:,:,1])) * g10, 2)
n01 = np.sum(np.dstack((grid[:,:,0], grid[:,:,1]-1)) * g01, 2)
n11 = np.sum(np.dstack((grid[:,:,0]-1, grid[:,:,1]-1)) * g11, 2)
# Interpolation
t = f(grid)
n0 = n00*(1-t[:,:,0]) + t[:,:,0]*n10
n1 = n01*(1-t[:,:,0]) + t[:,:,0]*n11
return np.sqrt(2)*((1-t[:,:,1])*n0 + t[:,:,1]*n1)
main()
</code></pre>
<p>And here's a version that renders two subplots with all possible labels, demonstrating that the unwanted space is <em>not</em> just because of leaving space for furniture.</p>
<pre><code>import cartopy.crs as ccrs
import matplotlib.pyplot as plt
import numpy as np
from matplotlib import colors, cm
from matplotlib.layout_engine import ConstrainedLayoutEngine
def main():
cel_sphere = ccrs.Globe(datum=None, ellipse=None,
semimajor_axis=180/np.pi,
semiminor_axis=180/np.pi)
sky_plate = ccrs.PlateCarree(globe=cel_sphere)
ra, dec = np.mgrid[-179.5:180:1, -89.5:90:1]
fake_data_1 = generate_perlin_noise_2d(ra.shape, (1, 1))
fake_data_2 = generate_perlin_noise_2d(ra.shape, (1, 1)) + 2
norm = colors.Normalize(
vmin=np.min([fake_data_1, fake_data_2]),
vmax=np.max([fake_data_1, fake_data_2]),
)
for label, proj in [("ee", ccrs.EqualEarth),
("mw", ccrs.Mollweide),
("lc", ccrs.LambertCylindrical)]:
for width in np.linspace(18, 22, 21):
draw(fake_data_1, fake_data_2,
ra, dec, sky_plate, proj(globe=cel_sphere),
norm, width, 10, label)
def draw(d1, d2, ra, dec, data_crs, map_crs, norm, width, height, label):
fig, (a1, a2) = plt.subplots(
1, 2,
figsize=(width, height),
layout=ConstrainedLayoutEngine(
h_pad=0, w_pad=0, hspace=0, wspace=0
),
subplot_kw={
"xlim": (-180, 180),
"ylim": (-90, 90),
"projection": map_crs,
},
)
try:
a1.gridlines(draw_labels=True)
a2.gridlines(draw_labels=True)
a1.contourf(ra, dec, d1,
transform=data_crs,
cmap="Greys",
norm=norm)
a2.contourf(ra, dec, d2,
transform=data_crs,
cmap="Greys",
norm=norm)
a1.set_title(label, loc="left")
a2.set_title(f"{width}x{height}", loc="left")
fig.colorbar(cm.ScalarMappable(norm=norm, cmap="Greys"),
shrink=0.5, pad=0.02, ax=[a1, a2])
fig.savefig(f"layout_test_{label}_{width}x{height}.png",
bbox_inches="tight", pad_inches=0.125)
finally:
plt.close(fig)
# stolen from https://pvigier.github.io/2018/06/13/perlin-noise-numpy.html
def generate_perlin_noise_2d(shape, res):
def f(t):
return 6*t**5 - 15*t**4 + 10*t**3
delta = (res[0] / shape[0], res[1] / shape[1])
d = (shape[0] // res[0], shape[1] // res[1])
grid = np.mgrid[0:res[0]:delta[0],0:res[1]:delta[1]].transpose(1, 2, 0) % 1
# Gradients
angles = 2*np.pi*np.random.rand(res[0]+1, res[1]+1)
gradients = np.dstack((np.cos(angles), np.sin(angles)))
g00 = gradients[0:-1,0:-1].repeat(d[0], 0).repeat(d[1], 1)
g10 = gradients[1:,0:-1].repeat(d[0], 0).repeat(d[1], 1)
g01 = gradients[0:-1,1:].repeat(d[0], 0).repeat(d[1], 1)
g11 = gradients[1:,1:].repeat(d[0], 0).repeat(d[1], 1)
# Ramps
n00 = np.sum(grid * g00, 2)
n10 = np.sum(np.dstack((grid[:,:,0]-1, grid[:,:,1])) * g10, 2)
n01 = np.sum(np.dstack((grid[:,:,0], grid[:,:,1]-1)) * g01, 2)
n11 = np.sum(np.dstack((grid[:,:,0]-1, grid[:,:,1]-1)) * g11, 2)
# Interpolation
t = f(grid)
n0 = n00*(1-t[:,:,0]) + t[:,:,0]*n10
n1 = n01*(1-t[:,:,0]) + t[:,:,0]*n11
return np.sqrt(2)*((1-t[:,:,1])*n0 + t[:,:,1]*n1)
main()
</code></pre>
|
<python><matplotlib><cartopy>
|
2025-07-17 17:54:30
| 2
| 142,389
|
zwol
|
79,705,222
| 16,563,251
|
Use importlib.resources for files inside the top level parent module
|
<p>I want to load a file from my python module using <a href="https://docs.python.org/3/library/importlib.resources.html" rel="nofollow noreferrer"><code>importlib.resources</code></a> (see also <a href="https://stackoverflow.com/questions/6028000/how-to-read-a-static-file-from-inside-a-python-package">this question</a>).
If the file is inside a submodule, this is straightforward (run using <code>python -m</code>, <a href="https://stackoverflow.com/questions/14132789/relative-imports-for-the-billionth-time">see here</a>):</p>
<pre class="lang-py prettyprint-override"><code>import importlib.resources
from . import submodule
print(importlib.resources.files(submodule).joinpath("file.txt").read_bytes())
</code></pre>
<p>For files in the same module, <a href="https://stackoverflow.com/questions/58883423/how-to-reference-the-current-package-for-use-with-importlib-resources">it is possible</a> to use <code>__package__</code>:</p>
<pre class="lang-py prettyprint-override"><code>import importlib.resources
print(importlib.resources.files(__package__).joinpath("file.txt").read_bytes())
</code></pre>
<p>However, I cannot get this approach to work for files inside the top level module, if I want to access them from a submodule:</p>
<pre><code>module
βββ file.txt
βββ __init__.py
βββ submodule
βββ __init__.py
βββ submodule_script.py
</code></pre>
<pre class="lang-py prettyprint-override"><code>import importlib.resources
from ... import module # ImportError: attempted relative import beyond top-level package
import .. as module # SyntaxError: invalid syntax
from .. import . as module # SyntaxError: invalid syntax
print(importlib.resources.files(module).joinpath("file.txt").read_bytes())
</code></pre>
<p>How can I obtain an import-like reference to the top level module location, such that <code>importlib.resources</code> can understand it?</p>
<p>One option is to use an absolute import (<code>import module</code>), but I use relative imports everywhere else in my project and would prefer consistency. Is it possible using relative imports?</p>
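<p>For completeness, a workaround sketch I can fall back on (still string-based rather than a true relative import, so it may not be what I am after; it assumes the code lives in <code>module.submodule</code>, so <code>__package__</code> is <code>"module.submodule"</code>):</p>
<pre class="lang-py prettyprint-override"><code>import importlib.resources

# Take everything before the first dot to get the top-level package name,
# then hand that string to importlib.resources.
top_level = __package__.split(".")[0]
print(importlib.resources.files(top_level).joinpath("file.txt").read_bytes())
</code></pre>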
|
<python><python-import><python-importlib>
|
2025-07-17 17:45:03
| 1
| 573
|
502E532E
|
79,704,941
| 442,650
|
Is it possible to type a subclass that only implements a single overload variant?
|
<p>I am trying to add typing to a legacy project that involves a Superclass and Subclasses that only implement specific variants; I am not sure if this is possible.</p>
<p>Classes expose the same general interface - the API method names are the same, but the input/output might be string based for one and int for another. The various classes all inherit from the base <code>API</code> class that has some core shared functions.</p>
<pre><code>class API:
def get_by_ids(
self,
ids: Union[List[int], List[str]],
) -> Union[Dict[int, Any], Dict[str, Any]]:
pass
</code></pre>
<p>In the example below, <code>Foo</code> and <code>Bar</code> both implement <code>get_by_ids</code> - however <code>Foo</code> ids are <code>int</code> and <code>Bar</code> ids are <code>str</code>. Both return <code>dict</code> where the submitted ids are keys.</p>
<pre><code>class Foo(API):
def get_by_ids(
self,
ids: List[int],
) -> Dict[int, Any]:
pass
class Bar(API):
def get_by_ids(
self,
ids: List[str],
) -> Dict[str, Any]:
pass
</code></pre>
<p>The problem I have run into:</p>
<p>Using the code above, I run into the expected <code>Liskov substitution principle</code> errors.</p>
<pre><code>error: Argument 1 of "get_ids" is incompatible with supertype "API"; supertype defines the argument type as "list[int] | list[str]" [override]
note: This violates the Liskov substitution principle
note: See https://mypy.readthedocs.io/en/stable/common_issues.html#incompatible-overrides
</code></pre>
<p>I extended the code with the following approach to use <code>@overload</code> and define the variants. With that approach, mypy wants all 4 overloads defined in the subclass:</p>
<pre><code>error: Signature of "get_ids" incompatible with supertype "API" [override]
note: Superclass:
note: @overload
note: def get_ids(self, ids: list[int]) -> dict[int, Any]
note: @overload
note: def get_ids(self, ids: list[str]) -> dict[str, Any]
note: @overload
note: def get_ids(self, ids: list[tuple[int, int]]) -> dict[tuple[int, int], Any]
note: @overload
note: def get_ids(self, ids: list[tuple[str, int]]) -> dict[tuple[str, int], Any]
note: Subclass:
note: def get_ids(self, ids: list[int]) -> dict[int, Any]
</code></pre>
<p>Can anyone offer some pointers on how to get basic typing into this scenario, or say whether it is even possible? This is a large legacy project, so redesigning is not within the scope of things I can do right now. I wish it were.</p>
<p>I've tried multiple approaches and permutations of changes, but always end up in a Liskov or missing overloads situation.</p>
<p>Extended Example</p>
<pre class="lang-py prettyprint-override"><code>class API:
@overload
def get_by_ids(
self,
ids: List[int],
) -> Dict[int, Any]: ...
@overload
def get_by_ids(
self,
ids: List[str],
) -> Dict[str, Any]: ...
def get_by_ids(
self,
ids: Union[List[int], List[str]],
) -> Union[Dict[int, Any], Dict[str, Any]]:
pass
class Foo(API):
def get_by_ids(
self,
ids: List[int],
) -> Dict[int, Any]:
pass
class Bar(API):
def get_by_ids(
self,
ids: List[str],
) -> Dict[str, Any]:
pass
</code></pre>
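<p>One variant I have been sketching is a generic base class instead of overloads (I am not sure it fits the constraints of this legacy code base, but mypy seems to accept it, since each subclass then commits to a single key type):</p>
<pre class="lang-py prettyprint-override"><code>from typing import Any, Dict, Generic, List, TypeVar

K = TypeVar("K")  # the id/key type a concrete API commits to

class API(Generic[K]):
    def get_by_ids(self, ids: List[K]) -> Dict[K, Any]:
        raise NotImplementedError

class Foo(API[int]):
    def get_by_ids(self, ids: List[int]) -> Dict[int, Any]:
        return {i: None for i in ids}

class Bar(API[str]):
    def get_by_ids(self, ids: List[str]) -> Dict[str, Any]:
        return {s: None for s in ids}
</code></pre>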
|
<python><python-typing>
|
2025-07-17 14:08:07
| 1
| 15,770
|
Jonathan Vanasco
|
79,704,784
| 4,889,002
|
R: How to properly install and use the rgee package?
|
<p>I first created an account and a project with Google Earth Engine.</p>
<p>Then I installed and loaded the <code>rgee</code> package:</p>
<pre><code>install.packages("rgee")
library(rgee)
</code></pre>
<p>I checked if python is installed:</p>
<pre><code>> reticulate::py_available()
[1] FALSE
</code></pre>
<p>And installed it on my system:</p>
<pre><code>> reticulate::py_discover_config()
python: /usr/bin/python
libpython: /usr/lib64/libpython3.13.so.1.0
pythonhome: //usr://usr
version: 3.13.5 (main, Jun 12 2025, 00:00:00) [GCC 15.1.1 20250521 (Red Hat 15.1.1-2)]
numpy: /usr/lib64/python3.13/site-packages/numpy
numpy_version: 2.2.6
ee: /home/saidmaanan/.local/lib/python3.13/site-packages/ee
NOTE: Python version was forced by use_python() function
> rgee::ee_install_set_pyenv(py_path = "/usr/bin/python", py_env = "rgee")
EARTHENGINE_PYTHON='/usr/bin/python'
EARTHENGINE_ENV='rgee'
saved in: /home/saidmaanan/.Renviron
Do you want restart your R session? Windows users could need to terminate R instead of restarting.
1: yes
2: no
Selection: 1
Restarting R session...
>
</code></pre>
<p>I checked if everything is installed properly, and it gave me this:</p>
<pre><code>> rgee::ee_check()
◉ Python version
✔ [Ok] /usr/bin/python v3.13
Error in strsplit(a, "[.-]") : non-character argument
</code></pre>
<p>What is the cause of this error? And how do I address it?</p>
<p>Here is some information about my machine:</p>
<p><a href="https://i.sstatic.net/ECHOpIZP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ECHOpIZP.png" alt="enter image description here" /></a></p>
<h3>Update:</h3>
<p>As suggested in the comment below, I ignored the error above, and proceeded with the following:</p>
<pre><code>> rgee:::ee_check_python_packages()
◉ Python packages:
✔ [Ok] numpy
✔ [Ok] earthengine-api
NOTE: The Earth Engine Python API version 1.5.24 is installed
correctly in the system but rgee was tested using the version
0.1.370. To avoid possible issues, we recommend install the
version used by rgee (0.1.370). You might use:
* rgee::ee_install_upgrade()
* reticulate::py_install('earthengine-api==0.1.370', envname='PUT_HERE_YOUR_PYENV')
* pip install earthengine-api==0.1.370 (Linux and Mac0S)
* conda install earthengine-api==0.1.370 (Linux, Mac0S, and Windows)
>
> rgee:::ee_check_credentials()
◉ Credentials neccesaries for rgee:
✔ [Ok] Earth Engine Credentials found.
✔ [Ok] Google Drive credentials found.
✔ [Ok] Google Cloud Storage credentials found.
>
</code></pre>
<p>But then while initializing the Earth Engine, I stumbled upon another error:</p>
<pre><code>> # initialize Earth Engine
> rgee::ee_Initialize(
+ user = "maanan.said@gmail.com"
+ )
── rgee 1.1.7 ────────────────────────────────────────────────────────────────── earthengine-api 1.5.24 ──
✔ user: maanan.said@gmail.com
✔ Initializing Google Earth Engine: DONE!
Error in value[[3L]](cond) :
It looks like your EE credential has expired. Try running ee_Authenticate() again or clean your credentials ee_clean_user_credentials().
</code></pre>
<p>I tried to reinitialize my credentials, but the error persists:</p>
<pre><code>> rgee::ee_clean_user_credentials(user = "maanan.said@gmail.com")
> # initialize Earth Engine
> rgee::ee_Initialize(
+ user = "maanan.said@gmail.com"
── rgee 1.1.7 ────────────────────────────────────────────────────────────────── earthengine-api 1.5.24 ──
✔ user: maanan.said@gmail.com
✔ Initializing Google Earth Engine:
Opening in existing browser session.
Enter verification code: 4/1AVMBsJjrMNc8YFpydvgq9xzJtHurdWOFt5hw7MqTdkWOPgE9cB2_nvYBSSg
To authorize access needed by Earth Engine, open the following URL in a web browser and follow the instructions. If the web browser does not start automatically, please manually browse the URL below.
https://code.earthengine.google.com/client-auth?scopes=https%3A//www.googleapis.com/auth/earthengine%20https%3A//www.googleapis.com/auth/cloud-platform%20https%3A//www.googleapis.com/auth/drive%20https%3A//www.googleapis.com/auth/devstorage.full_control&request_id=u4pIqquXNMJlx6fcGzpbuvv24bcsxwGWvKGSPbHGgak&tc=kQam27-k84oXxo5xO5F9EvU8tqhheHiL_ZwxuOPnzb4&cc=oNAJMZ41t98bRvSeKG2p0bsl-PPDV6GQ1h3tpT3aDNo
The authorization workflow will generate a code, which you should paste in the box below.
✔ Initializing Google Earth Engine: DONE!
Successfully saved authorization token.
Error in value[[3L]](cond) :
It looks like your EE credential has expired. Try running ee_Authenticate() again or clean your credentials ee_clean_user_credentials().
</code></pre>
|
<python><r><google-earth-engine><rgee>
|
2025-07-17 12:12:37
| 2
| 811
|
SaΓ―d Maanan
|
79,704,608
| 5,931,672
|
Write AEM files with OREKIT
|
<p>I am new to Orekit and do not have much knowledge about space physics. I am using Orekit through JPype to create an AEM file.
No matter what I try, I get errors of the same format, something like:</p>
<pre class="lang-bash prettyprint-override"><code>in write_aem
writer.writeMessage(kvn_gen, aem)
org.orekit.errors.OrekitException: org.orekit.errors.OrekitException: frame EME2000 is not valid in this CCSDS file context
</code></pre>
<p>Also tried other frames.</p>
<p>My code is this:</p>
<pre class="lang-py prettyprint-override"><code>def write_aem(attitudes_sequence: AttitudesSequence, states: list, filename: str, switch_date: AbsoluteDate):
start_time = states[0].getDate()
stop_time = states[-1].getDate()
# Header
header = AdmHeader()
header.setOriginator("SAFRAN SCSA")
header.setCreationDate(start_time) # Use start time as creation date
# Determine switch date from the attitude sequence and split states into two segments
states_segment_1 = [s for s in states if s.getDate().compareTo(switch_date) <= 0]
states_segment_2 = [s for s in states if s.getDate().compareTo(switch_date) > 0]
segments = ArrayList()
for segment_states in [states_segment_1, states_segment_2]:
if not segment_states:
continue # Skip empty segments
start_time = segment_states[0].getDate()
stop_time = segment_states[-1].getDate()
metadata = AemMetadata(1)
metadata.setObjectName("Satellite_name")
metadata.setObjectID("Sat_ID")
metadata.setCenter(BodyFacade("EARTH", CelestialBodyFactory.getEarth()))
metadata.setTimeSystem(TimeSystem.UTC)
metadata.setStartTime(start_time)
metadata.setStopTime(stop_time)
# TODO: Check this! which one is A and which is B???
metadata.getEndpoints().setFrameA(FrameFacade.map(states[0].getFrame()))
metadata.getEndpoints().setFrameB(FrameFacade.map(FramesFactory.getEME2000()))
aem_data = AemData()
for state in segment_states:
attitude = state.getAttitude()
timestamped = TimeStampedAngularCoordinates(
attitude.getDate(), attitude.getRotation(), attitude.getSpin(), attitude.getRotationAcceleration()
)
aem_data.addData(timestamped)
segments.add(AemSegment(metadata, aem_data))
# Create AEM file object
aem = Aem(header, segments, IERSConventions.IERS_2010, DataContext.getDefault())
# Writer
writer = WriterBuilder().buildAemWriter()
with open(filename, "w") as fp:
buffer = StringBuilder()
kvn_gen = KvnGenerator(buffer, AemWriter.KVN_PADDING_WIDTH, String("AEM file"), Constants.JULIAN_DAY, 60)
writer.writeMessage(kvn_gen, aem)
fp.write(str(buffer.toString()))
</code></pre>
<p>Of course I want to choose the correct frames eventually, but as a first step, just getting the file written would already be great.</p>
<p>In the document <a href="https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=cfa551a3f27efa90400e9f58015c6d18c377b0cb" rel="nofollow noreferrer">here</a>, example 4.3.1 uses 'SC_BODY'. I tried creating this, but <code>setFrameB</code> only accepts a <code>FrameFacade</code> input, and I have no idea how to create SC_BODY, as it is not in the <code>FramesFactory</code>.</p>
<p>Why am I having this problem? And how can I solve it?</p>
|
<python><jpype><orekit>
|
2025-07-17 09:42:50
| 1
| 4,192
|
J Agustin Barrachina
|
79,704,338
| 4,141,279
|
Tensorflow dataset running out of data during training
|
<p>I'm creating a dataset from a csv file containing path-names to image locations and mask-locations (<code>xtr</code> and <code>ytr</code>). Initially, the csv contains approx. 1000 elements. Each image is loaded from cloud space and then split using flat_map into 4 or 8 images depending on the height of the image. So, the dataset should still have at least 1000 elements.</p>
<p>Here is my pipeline:</p>
<pre><code> train_ds = (
tf.data.Dataset.from_tensor_slices((xtr, ytr))
.shuffle(200)
.map(self._load_image, num_parallel_calls=tf.data.AUTOTUNE)
.map(self._preprocess, num_parallel_calls=tf.data.AUTOTUNE)
.flat_map(self._split)
.map(self._augment)
.batch(self._batch_size, drop_remainder=True)
.prefetch(tf.data.experimental.AUTOTUNE)
)
</code></pre>
<p>Counting images in the dataset usually returns about 1700 elements (after splitting etc.)
Training is done with a batch-size of 4 and approx. 250 steps per epoch.</p>
<p>My problem is that I receive errors during training indicating that I am running out of data,
e.g. <code>Local rendezvous is aborting with status: OUT_OF_RANGE</code>.</p>
<p>Sometimes I also get explicit UserWarnings about running out of data.</p>
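<p>For reference, a variant I have been considering (just a sketch; my assumption is that the OUT_OF_RANGE error means the dataset is exhausted before <code>steps_per_epoch * epochs</code> batches have been drawn, so repeating the dataset would keep it from running dry):</p>
<pre><code> train_ds = (
     tf.data.Dataset.from_tensor_slices((xtr, ytr))
     .shuffle(200)
     .map(self._load_image, num_parallel_calls=tf.data.AUTOTUNE)
     .map(self._preprocess, num_parallel_calls=tf.data.AUTOTUNE)
     .flat_map(self._split)
     .map(self._augment)
     .repeat()                                    # added: cycle through the data indefinitely
     .batch(self._batch_size, drop_remainder=True)
     .prefetch(tf.data.experimental.AUTOTUNE)
 )
</code></pre>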
<p>Any pointers on how to solve this?</p>
<p>Thanks</p>
|
<python><tensorflow><tensorflow-datasets>
|
2025-07-17 05:53:33
| 3
| 1,597
|
RaJa
|
79,704,151
| 588,308
|
Cannot find settings module in mod_wsgi (Apache2 on Ubuntu with django)
|
<p>I am serving a Python app using Django through an Apache2 server. I have the wsgi.py file in a directory</p>
<pre><code>home/peter/django-apps/anaaccess/anaaccess/ana_access/wsgi.py
</code></pre>
<p>I have a venv in home/peter/django-apps/anaaccess/anaaccess/myenv into which I have installed mod_wsgi and django, etc. I have put these lines into apache.conf so that this venv handles the Python code:</p>
<pre><code>LoadModule wsgi_module "/home/peter/django-apps/anaaccess/anaaccess/myenv/lib/python3.12/site-packages/mod_wsgi/server/mod_wsgi-py312.cpython-312-x86_64-linux-gnu.so"
WSGIPythonHome "/home/peter/django-apps/anaaccess/anaaccess/myenv"
</code></pre>
<p>I call the application in the virtual host section of the Apache2 configuration:</p>
<pre><code> WSGIScriptAlias /bayeux /home/peter/django-apps/anaaccess/anaaccess/ana_access/wsgi.py
<Directory /home/peter/django-apps/anaaccess/anaaccess/ana_access>
<Files wsgi.py>
Require all granted
</Files>
</Directory>
</code></pre>
<p>In the wsgi.py file I call the application as follows:</p>
<pre><code> import os
import django
from django.core.wsgi import get_wsgi_application
#os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'ana_access.settings')
os.environ["DJANGO_SETTINGS_MODULE"] = "ana_access.settings"
application = get_wsgi_application()
</code></pre>
<p>All seems to start fine. Apache finds and loads the mod_wsgi module, finds the wsgi.py file, and finds the os and django modules. But it fails to find the settings file, with this error:</p>
<pre><code>mod_wsgi (pid=98141): Failed to exec Python script file '/home/peter/django-apps/anaaccess/anaaccess/ana_access/wsgi.py'., referer: http://109.123.108.170/
mod_wsgi (pid=98141): Exception occurred processing WSGI script '/home/peter/django-apps/anaaccess/anaaccess/ana_access/wsgi.py'., referer: http://109.123.108.170/
Traceback (most recent call last):, referer: http://109.123.108.170/
File "/home/peter/django-apps/anaaccess/anaaccess/ana_access/wsgi.py", line 19, in <module>, referer: http://109.123.108.170/
application = get_wsgi_application(), referer: http://109.123.108.170/
^^^^^^^^^^^^^^^^^^^^^^, referer: http://109.123.108.170/
File "/home/peter/django-apps/anaaccess/anaaccess/myenv/lib/python3.12/site-packages/django/core/wsgi.py", line 12, in get_wsgi_application, referer: http://109.123.108.170/
django.setup(set_prefix=False), referer: http://109.123.108.170/
File "/home/peter/django-apps/anaaccess/anaaccess/myenv/lib/python3.12/site-packages/django/__init__.py", line 19, in setup, referer: http://109.123.108.170/
configure_logging(settings.LOGGING_CONFIG, settings.LOGGING), referer: http://109.123.108.170/
^^^^^^^^^^^^^^^^^^^^^^^, referer: http://109.123.108.170/
File "/home/peter/django-apps/anaaccess/anaaccess/myenv/lib/python3.12/site-packages/django/conf/__init__.py", line 81, in __getattr__, referer: http://109.123.108.170/
self._setup(name), referer: http://109.123.108.170/
File "/home/peter/django-apps/anaaccess/anaaccess/myenv/lib/python3.12/site-packages/django/conf/__init__.py", line 68, in _setup, referer: http://109.123.108.170/
self._wrapped = Settings(settings_module), referer: http://109.123.108.170/
^^^^^^^^^^^^^^^^^^^^^^^^^, referer: http://109.123.108.170/
File "/home/peter/django-apps/anaaccess/anaaccess/myenv/lib/python3.12/site-packages/django/conf/__init__.py", line 166, in __init__, referer: http://109.123.108.170/
mod = importlib.import_module(self.SETTINGS_MODULE), referer: http://109.123.108.170/
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^, referer: http://109.123.108.170/
File "/usr/lib/python3.12/importlib/__init__.py", line 90, in import_module, referer: http://109.123.108.170/
return _bootstrap._gcd_import(name[level:], package, level), referer: http://109.123.108.170/
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^, referer: http://109.123.108.170/
File "<frozen importlib._bootstrap>", line 1387, in _gcd_import, referer: http://109.123.108.170/
File "<frozen importlib._bootstrap>", line 1360, in _find_and_load, referer: http://109.123.108.170/
File "<frozen importlib._bootstrap>", line 1310, in _find_and_load_unlocked, referer: http://109.123.108.170/
File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed, referer: http://109.123.108.170/
File "<frozen importlib._bootstrap>", line 1387, in _gcd_import, referer: http://109.123.108.170/
File "<frozen importlib._bootstrap>", line 1360, in _find_and_load, referer: http://109.123.108.170/
File "<frozen importlib._bootstrap>", line 1324, in _find_and_load_unlocked, referer: http://109.123.108.170/
ModuleNotFoundError: No module named 'ana_access', referer: http://109.123.108.170/
</code></pre>
<p>I have been using mod_wsgi/Django/Apache2 for over twenty years now, through lots of upgrades, server moves, etc. These problems appeared in the latest move, to Apache/2.4.58 on Ubuntu 24.04.2 LTS (GNU/Linux 6.8.0-63-generic x86_64).</p>
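<p>For reference, the variant of <code>wsgi.py</code> I am planning to try next (my assumption, not verified: the directory containing the <code>ana_access</code> package is not on <code>sys.path</code> in the mod_wsgi interpreter, so the settings module cannot be imported):</p>
<pre><code>import os
import sys

# Make sure the directory that *contains* the ana_access package is importable.
sys.path.insert(0, "/home/peter/django-apps/anaaccess/anaaccess")

os.environ["DJANGO_SETTINGS_MODULE"] = "ana_access.settings"

from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
</code></pre>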
<p>Help gratefully received.</p>
|
<python><django><apache2><mod-wsgi>
|
2025-07-17 00:14:35
| 0
| 337
|
peter
|
79,703,959
| 880,783
|
Why does the Python REPL behave differently in VS Code's terminal vs native cmd.exe?
|
<p>I am running the same <code>python.exe</code>, from the same working directory, in VS Code's terminal running <code>cmd.exe</code>, and in Windows Terminal running <code>cmd.exe</code>. For some reason, these two behave differently.</p>
<p>In fact, one recognizes <code>exit</code> while the other does not, insisting on <code>exit()</code>:</p>
<p><a href="https://i.sstatic.net/Tp2UdWrJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Tp2UdWrJ.png" alt="Screenshot of python.exe inside two different terminals running cmd.exe" /></a></p>
<p>I can't even begin to think what may be different between these two. Could that be an issue of</p>
<ul>
<li>Python itself?</li>
<li>VS Code? The Python environment extension? Windows Terminal?</li>
<li>Astral's <a href="https://github.com/astral-sh/python-build-standalone" rel="nofollow noreferrer">https://github.com/astral-sh/python-build-standalone</a>?</li>
</ul>
|
<python><terminal><exit><read-eval-print-loop><parentheses>
|
2025-07-16 19:51:59
| 1
| 6,279
|
bers
|
79,703,817
| 9,884,998
|
Unfolding a cartesian binned dataset into polar coordinates
|
<p>I have a dataset of binned events, corresponding to a Cartesian coordinate grid:</p>
<pre><code>[[ 0. 0. 0. 0. 0. 0. 2. 5. 2. 3. 3. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 4. 10. 9. 7. 10. 6. 6. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 9. 12. 10. 11. 14. 13. 11. 12. 6. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 12. 16. 17. 14. 13. 14. 13. 12. 12. 6. 0. 0. 0. 0.]
[ 0. 0. 10. 11. 0. 14. 18. 16. 14. 18. 16. 14. 13. 0. 9. 0. 0. 0.]
[ 0. 7. 10. 13. 13. 16. 16. 15. 14. 16. 13. 16. 13. 13. 13. 7. 0. 0.]
[ 1. 6. 15. 14. 17. 14. 13. 13. 14. 15. 1. 13. 13. 12. 12. 7. 2. 0.]
[ 5. 13. 11. 14. 12. 14. 14. 16. 16. 16. 12. 1. 12. 14. 12. 9. 5. 0.]
[ 2. 11. 11. 16. 13. 17. 15. 14. 0. 14. 14. 13. 13. 16. 10. 9. 6. 1.]
[ 4. 11. 13. 12. 14. 14. 16. 16. 14. 18. 16. 1. 14. 12. 12. 11. 5. 1.]
[ 1. 7. 10. 11. 13. 14. 1. 19. 15. 19. 1. 1. 14. 14. 11. 10. 1. 0.]
[ 0. 5. 10. 15. 14. 15. 16. 1. 14. 1. 1. 16. 12. 13. 10. 5. 0. 0.]
[ 0. 0. 7. 12. 16. 15. 13. 17. 14. 16. 14. 14. 14. 14. 7. 0. 0. 0.]
[ 0. 0. 0. 7. 0. 14. 14. 15. 16. 16. 14. 11. 13. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 8. 12. 14. 12. 14. 10. 11. 12. 7. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 8. 8. 11. 9. 9. 10. 5. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 4. 3. 7. 6. 3. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 2. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0.]]
</code></pre>
<p>So the coordinate (x, y) corresponds to the bin [x, y]. Some cells do not provide useful information and are set to None (the detection cell is broken). The dataset appears to be roughly circular. To test this, I want to show the dataset with cos(phi) on the x-axis and the radius from the center (8, 8) on the y-axis.</p>
<p>The minimal reproducible example below shows the dataset in Cartesian coordinates:</p>
<pre><code>import numpy as np
binned_data = np.asarray(
[[ 0., 0., 0., 0., 0., 0., 2., 5., 2., 3., 3., 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 4., 10., 9., 7., 10., 6., 6., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 9., 12., 10., 11., 14., 13., 11., 12., 6., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 12., 16., 17., 14., 13., 14., 13., 12., 12., 6., 0., 0., 0., 0.],
[ 0., 0., 10., 11., 0., 14., 18., 16., 14., 18., 16., 14., 13., 0., 9., 0., 0., 0.],
[ 0., 7., 10., 13., 13., 16., 16., 15., 14., 16., 13., 16., 13., 13., 13., 7., 0., 0.],
[ 1., 6., 15., 14., 17., 14., 13., 13., 14., 15., 1., 13., 13., 12., 12., 7., 2., 0.],
[ 5., 13., 11., 14., 12., 14., 14., 16., 16., 16., 12., 1., 12., 14., 12., 9., 5., 0.],
[ 2., 11., 11., 16., 13., 17., 15., 14., 0., 14., 14., 13., 13., 16., 10., 9., 6., 1.],
[ 4., 11., 13., 12., 14., 14., 16., 16., 14., 18., 16., 1., 14., 12., 12., 11., 5., 1.],
[ 1., 7., 10., 11., 13., 14., 1., 19., 15., 19., 1., 1., 14., 14., 11., 10., 1., 0.],
[ 0., 5., 10., 15., 14., 15., 16., 1., 14., 1., 1., 16., 12., 13., 10., 5., 0., 0.],
[ 0., 0., 7., 12., 16., 15., 13., 17., 14., 16., 14., 14., 14., 14., 7., 0., 0., 0.],
[ 0., 0., 0., 7., 0., 14., 14., 15., 16., 16., 14., 11., 13., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 8., 12., 14., 12., 14., 10., 11., 12., 7., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 8., 8., 11., 9., 9., 10., 5., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 4., 3., 7., 6., 3., 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 2., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0.]]
)/12
binned_data[4][4] = None
binned_data[-5][-5] = None
binned_data[4][-5] = None
binned_data[-5][4] = None
binned_data[8][8] = None
#Broken Cells
binned_data[6, 10] = None
binned_data[10, 6] = None
binned_data[7, 11] = None
binned_data[11, 7] = None
binned_data[11, 9] = None
binned_data[11, 10] = None
binned_data[10, 11] = None
binned_data[9, 11] = None
binned_data[10, 10] = None
from matplotlib import pyplot as plt
from matplotlib.patches import Circle
fig, ax = plt.subplots()
plt.pcolor(binned_data)
plt.colorbar(label = "Count per hour")
circle = Circle((8.5, 8.5), 8, edgecolor='red', facecolor='none', linewidth=1)
ax.add_patch(circle)
circle = Circle((8.5, 8.5), 6, edgecolor='red', facecolor='none', linewidth=1)
ax.add_patch(circle)
plt.show()
</code></pre>
<p>My problem is less with the code itself and more about wrapping my head around the problem. Any help would be greatly appreciated.</p>
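<p>For concreteness, here is a rough sketch of the transformation I have in mind, to be appended after the code above (my assumptions: the centre is bin (8, 8), and every non-broken bin becomes one point in (cos(phi), r) space, coloured by its count):</p>
<pre><code>ny, nx = binned_data.shape
yy, xx = np.mgrid[0:ny, 0:nx]          # bin indices
r = np.hypot(xx - 8, yy - 8)           # radius of each bin from the centre bin
phi = np.arctan2(yy - 8, xx - 8)       # polar angle of each bin

valid = ~np.isnan(binned_data)         # skip the broken (None/NaN) cells
plt.scatter(np.cos(phi[valid]), r[valid], c=binned_data[valid])
plt.xlabel("cos(phi)")
plt.ylabel("radius from centre (bins)")
plt.colorbar(label="Count per hour")
plt.show()
</code></pre>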
|
<python><numpy><matplotlib><linear-algebra>
|
2025-07-16 17:41:33
| 3
| 529
|
David K.
|