Dataset columns (with observed ranges):
- QuestionId: int64, 74.8M to 79.8M
- UserId: int64, 56 to 29.4M
- QuestionTitle: string, 15 to 150 characters
- QuestionBody: string, 40 to 40.3k characters
- Tags: string, 8 to 101 characters
- CreationDate: string (datetime), 2022-12-10 09:42:47 to 2025-11-01 19:08:18
- AnswerCount: int64, 0 to 44
- UserExpertiseLevel: int64, 301 to 888k
- UserDisplayName: string, 3 to 30 characters (nullable)
79,523,897
1,719,931
SQLAlchemy add a table to an automapped Base results in `object has no attribute '_sa_instance_state'`
<p>I'm mirroring a remote MS SQL into a local SQLite db.</p> <p>Here is the relevant code:</p> <pre><code>eng_str = rf&quot;mssql+pymssql://{user_domain}\{username}:{password}@{hostip}/{dbname}&quot; engine_remote = create_engine(eng_str, echo=False) dbfp = Path(&quot;../../data/mydb.sqlite3&quot;) engine_local = create_engine(f&quot;sqlite:///{dbfp}&quot;, echo=False) Base = automap_base() # See ORM documentation on intercepting column definitions: https://docs.sqlalchemy.org/en/20/orm/extensions/automap.html#intercepting-column-definitions @event.listens_for(Base.metadata, &quot;column_reflect&quot;) def genericize_datatypes(inspector, tablename, column_dict): # Convert dialect specific column types to SQLAlchemy agnostic types # See: https://stackoverflow.com/questions/79496414/convert-tinyint-to-int-when-mirroring-microsoft-sql-server-to-local-sqlite-with # See Core documentation on reflecting with database-agnostic types: https://docs.sqlalchemy.org/en/20/core/reflection.html#reflecting-with-database-agnostic-types old_type = column_dict['type'] column_dict[&quot;type&quot;] = column_dict[&quot;type&quot;].as_generic() # We have to remove collation when mirroring a Microsoft SQL server into SQLite # See: https://stackoverflow.com/a/59328211/1719931 if getattr(column_dict[&quot;type&quot;], &quot;collation&quot;, None) is not None: column_dict[&quot;type&quot;].collation = None # Load Base with remote DB metadata Base.prepare(autoload_with=engine_remote) </code></pre> <p>I need however to add a table to my db, so I did:</p> <pre><code>class MissingApl(Base): __tablename__ = &quot;missing_apl&quot; id: Mapped[int] = Column(Integer, primary_key=True) appln_nr: Mapped[int] = Column(Integer) </code></pre> <p>and then I'm creating the metadata on the local db:</p> <pre><code>Base.metadata.create_all(engine_local) </code></pre> <p>However doing for instance:</p> <pre><code>testobj=MissingApl(appln_nr=9999) print(testobj) with Session(engine_local) as session: session.add(testobj) session.commit() </code></pre> <p>results in:</p> <pre><code>--------------------------------------------------------------------------- AttributeError Traceback (most recent call last) File \Python\Python313\Lib\site-packages\sqlalchemy\orm\session.py:3477, in Session.add(self, instance, _warn) 3476 try: -&gt; 3477 state = attributes.instance_state(instance) 3478 except exc.NO_STATE as err: AttributeError: 'MissingApl' object has no attribute '_sa_instance_state' The above exception was the direct cause of the following exception: UnmappedInstanceError Traceback (most recent call last) Cell In[26], line 4 2 print(testobj) 3 with Session(engine_local) as session: ----&gt; 4 session.add(testobj) 5 session.commit() File \Python\Python313\Lib\site-packages\sqlalchemy\orm\session.py:3479, in Session.add(self, instance, _warn) 3477 state = attributes.instance_state(instance) 3478 except exc.NO_STATE as err: -&gt; 3479 raise exc.UnmappedInstanceError(instance) from err 3481 self._save_or_update_state(state) UnmappedInstanceError: Class '__main__.MissingApl' is not mapped </code></pre> <p>Why it does give me this error and how can I fix it?</p>
<python><sqlalchemy>
2025-03-20 19:15:27
2
5,202
robertspierre
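A likely explanation for the question above: with `automap_base()`, declaratively defined classes use deferred mapping and are only instrumented when `Base.prepare()` runs, so a class declared after `prepare()` is never mapped (hence the missing `_sa_instance_state`). A minimal, hedged sketch of the ordering fix; the connection strings are placeholders standing in for the engines in the question, and in recent SQLAlchemy versions calling `Base.prepare()` a second time after declaring the class may also work:

```python
from sqlalchemy import Column, Integer, create_engine
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session

engine_remote = create_engine("mssql+pymssql://user:pass@host/db")  # placeholder
engine_local = create_engine("sqlite:///mydb.sqlite3")              # placeholder

Base = automap_base()

# Declare the extra table *before* prepare(), so it gets mapped along with
# the reflected remote tables.
class MissingApl(Base):
    __tablename__ = "missing_apl"
    id = Column(Integer, primary_key=True)
    appln_nr = Column(Integer)

Base.prepare(autoload_with=engine_remote)   # maps reflected and declared classes
Base.metadata.create_all(engine_local)      # create everything in the local SQLite db

with Session(engine_local) as session:
    session.add(MissingApl(appln_nr=9999))
    session.commit()
```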
79,523,835
1,178,960
python socket streaming (yield) llm response seems to be blocking everything
<p>I am learning llm recently and trying to build a simple chatbot, where multiple clients can connect to this chatbot and chat with the model. I created a simple python code below, but I noticed when multiple clients are connected, and 1 client is receiving the stream, other clients are blocked.</p> <p>I tried chatgpt/copilot to fix the code, tried <code>asyncio.create_task</code>, <code>asyncio.to_thread</code>, tried using FastAPI WebSockets, but none of them works. It seems when <code>yield</code> is streaming the response, everything is blocked.</p> <p>Could someone help please? I must be missing something obvious. I am open to everything (e.g. use some python frameworks, use different language, completely rewrite code etc).</p> <pre><code>import asyncio import websockets from openai import OpenAI from dotenv import load_dotenv import os load_dotenv() client = OpenAI() async def get_data_from_model(): stream = client.chat.completions.create( model=&quot;grok-3-beta&quot;, messages=[ { &quot;role&quot;: &quot;user&quot;, &quot;content&quot;: &quot;Write a 5 sentences bedtime story about a unicorn.&quot; } ], stream=True, ) for chunk in stream: if chunk.choices[0].delta.content is not None: yield chunk.choices[0].delta.content async def handle_client(websocket): try: while True: user_message = await websocket.recv() async def send_stream(): async for chunk in get_data_from_model(): await websocket.send(chunk) # Run send_stream in the background for concurrent streaming asyncio.create_task(send_stream()) except Exception as e: print(f&quot;Connection error: {e}&quot;) async def start(port=8765): async with websockets.serve(handle_client, &quot;0.0.0.0&quot;, port): await asyncio.Future() if __name__ == &quot;__main__&quot;: asyncio.run(start(port=8000)) </code></pre>
<python><websocket><stream><large-language-model>
2025-03-20 18:37:39
0
842
Dongminator
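For the streaming question above, one common cause is that `client.chat.completions.create(..., stream=True)` returns a plain blocking iterator, so iterating it inside an `async def` stalls the event loop between chunks and every other websocket client waits. A hedged sketch of one way around that, pushing each blocking `next()` call onto a worker thread; `make_stream` is a stand-in for a zero-argument wrapper around the OpenAI call in the question:

```python
import asyncio

async def iter_in_thread(make_stream):
    """Adapt a blocking iterator into an async generator.

    make_stream: zero-argument callable returning a blocking iterator
    (e.g. a lambda wrapping the stream=True OpenAI call). Each next()
    runs in the default thread pool, so other clients keep being served.
    """
    stream = await asyncio.to_thread(make_stream)  # creating the stream can block too
    it = iter(stream)
    done = object()
    while True:
        chunk = await asyncio.to_thread(next, it, done)
        if chunk is done:
            break
        yield chunk
```

Usage would look like `async for chunk in iter_in_thread(lambda: client.chat.completions.create(...)): await websocket.send(chunk...)`. The openai package also ships an `AsyncOpenAI` client, which avoids the thread hop entirely, but the sketch above stays closest to the code in the question.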
79,523,764
856,588
using asyncio in Streamlit
<p>Many frameworks use <code>asyncio</code> internally, so there is no way to get around it. Is there a reliable, recommended way of dealing with <code>asyncio</code> in Streamlit apps? I often get <code>Event loop is closed</code> or something similar when using it. I'm not providing any code because it is basically anything with an <code>asyncio</code> call inside and a Streamlit UI element depending on it.</p>
<python><python-asyncio><streamlit>
2025-03-20 18:03:57
0
4,085
Maxim Volgin
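For the Streamlit question above, "Event loop is closed" typically appears when coroutines or clients created under one `asyncio.run()` call are reused after that loop has been torn down on a later rerun. A hedged sketch of one common workaround — a single long-lived loop in a background thread that every rerun submits to — assuming a recent Streamlit with `st.cache_resource`:

```python
import asyncio
import threading

import streamlit as st

@st.cache_resource  # reuse the same loop and thread across Streamlit reruns
def get_background_loop() -> asyncio.AbstractEventLoop:
    loop = asyncio.new_event_loop()
    threading.Thread(target=loop.run_forever, daemon=True).start()
    return loop

def run_async(coro, timeout=None):
    """Submit a coroutine to the persistent loop and block for its result."""
    loop = get_background_loop()
    return asyncio.run_coroutine_threadsafe(coro, loop).result(timeout)

# Inside the Streamlit script, UI code stays synchronous:
# data = run_async(fetch_data())
# st.write(data)
```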
79,523,708
395,857
How can I resolve the 403 Forbidden error when deploying a fine-tuned GPT model in Azure via Python?
<p>I follow Azure's <a href="https://learn.microsoft.com/en-us/azure/ai-services/openai/tutorials/fine-tune?tabs=python-new%2Ccommand-line" rel="nofollow noreferrer">tutorial</a> on fine-tuning GPT. I'm stuck at the deployment phase.</p> <p>Code:</p> <pre><code># Deploy fine-tuned model import json import requests token = '[redacted]' subscription = '[redacted]' resource_group = &quot;[redacted]&quot; resource_name = &quot;[redacted]&quot; model_deployment_name = &quot;gpt-4o-mini-2024-07-18-ft&quot; # Custom deployment name you chose for your fine-tuning model deploy_params = {'api-version': &quot;2023-05-01&quot;} deploy_headers = {'Authorization': 'Bearer {}'.format(token), 'Content-Type': 'application/json'} deploy_data = { &quot;sku&quot;: {&quot;name&quot;: &quot;standard&quot;, &quot;capacity&quot;: 1}, &quot;properties&quot;: { &quot;model&quot;: { &quot;format&quot;: &quot;OpenAI&quot;, &quot;name&quot;: &quot;gpt-4o-mini-2024-07-18.ft-[redacted]&quot;, #retrieve this value from the previous call, it will look like gpt-4o-mini-2024-07-18.ft-[redacted] &quot;version&quot;: &quot;1&quot; } } } deploy_data = json.dumps(deploy_data) request_url = f'https://management.azure.com/subscriptions/{subscription}/resourceGroups/{resource_group}/providers/Microsoft.CognitiveServices/accounts/{resource_name}/deployments/{model_deployment_name}' print('Creating a new deployment...') r = requests.put(request_url, params=deploy_params, headers=deploy_headers, data=deploy_data) print(r) print(r.reason) print(r.json()) </code></pre> <blockquote> <p>Creating a new deployment... &lt;Response [403]&gt; Forbidden {'error': {'code': 'AuthorizationFailed', 'message': &quot;The client '[redacted email]' with object id '[redacted]' does not have authorization to perform action 'Microsoft.CognitiveServices/accounts/deployments/write' over scope '/subscriptions/[redacted]/resourceGroups/[redacted]/providers/Microsoft.CognitiveServices/accounts/[redacted]/deployments/gpt-4o-mini-2024-07-18-ft' or the scope is invalid. If access was recently granted, please refresh your credentials.&quot;}}</p> </blockquote> <p>I was able to <a href="https://i.sstatic.net/HHN6ejOy.png" rel="nofollow noreferrer">deploy</a> the model via Azure web UI. Why is the Python code returning <code>&lt;Response [403]&gt; Forbidden</code>?</p> <hr /> <ul> <li>The subscription looks correct since I get an <code>InvalidSubscriptionId</code> if I add a typo in it.</li> <li>I got the access token by launching the Cloud Shell from the Azure portal then runningย <code>az account get-access-token</code>.</li> </ul>
<python><azure><azure-openai><fine-tuning><gpt-4>
2025-03-20 17:31:33
2
84,585
Franck Dernoncourt
79,523,703
720,300
Indexing issue in python script for detection of hairpin curves in road gpx
<p>I have a working script that will process gpx files in the same folder as the script, detect hairpin curves and make a report, with length and gradient statistics, for straight and for curved portions of the track.</p> <p>The algorithm looks for cumulative changes in bearings in a look ahead search distance along the track. A hairpin is defined as a cummulative change in bearing (to one side) of &gt; 120 degrees within a distance of 50 meters.</p> <p>The whole script is working well, but somehow the very last edge is missing from the calculation of the straight portions, which is revealed when cross checking the total distances with cumulated edge distances and sums of &quot;curve&quot; and &quot;straight distances&quot;.</p> <p>The script will print a report and has some internal debugging, where the error becomes obvious when comparing the differently calculated distances:</p> <pre class="lang-none prettyprint-override"><code>import math import os import gpxpy def calculate_distance(point1, point2): # Create gpxpy points for distance calculation p1 = gpxpy.gpx.GPXTrackPoint(point1['latitude'], point1['longitude']) p2 = gpxpy.gpx.GPXTrackPoint(point2['latitude'], point2['longitude']) return p1.distance_2d(p2) def calculate_bearing(lat1, lon1, lat2, lon2): # Convert to radians lat1, lon1, lat2, lon2 = map(math.radians, [lat1, lon1, lat2, lon2]) # Calculate bearing dlon = lon2 - lon1 y = math.sin(dlon) * math.cos(lat2) x = math.cos(lat1) * math.sin(lat2) - math.sin(lat1) * math.cos(lat2) * math.cos(dlon) bearing = math.atan2(y, x) # Convert to degrees return math.degrees(bearing) def calculate_angle_change(p1, p2, p3): # Calculate bearings for both segments bearing1 = calculate_bearing(p1['latitude'], p1['longitude'], p2['latitude'], p2['longitude']) bearing2 = calculate_bearing(p2['latitude'], p2['longitude'], p3['latitude'], p3['longitude']) # Calculate the change in bearing angle_change = bearing2 - bearing1 # Normalize to -180 to +180 if angle_change &gt; 180: angle_change -= 360 elif angle_change &lt; -180: angle_change += 360 return angle_change # Positive for right turns, negative for left turns def combine_segments_by_curvature(points, curve_angle_threshold=120, look_ahead_distance=50): print(&quot;Starting curve detection...&quot;) combined = [] i = 0 straight_points = [points[0]] # First point is included while i &lt; len(points) - 2: print(f&quot;Processing point {i} of {len(points)}&quot;) look_ahead_points = [] cumulative_distance = 0 cumulative_angle = 0 last_angle_sign = 0 j = i + 1 while j &lt; len(points) - 2 and cumulative_distance &lt; look_ahead_distance: angle = calculate_angle_change(points[j-1], points[j], points[j+1]) if last_angle_sign == 0: cumulative_angle = abs(angle) last_angle_sign = 1 if angle &gt; 0 else -1 elif (angle &gt; 0 and last_angle_sign &gt; 0) or (angle &lt; 0 and last_angle_sign &lt; 0): cumulative_angle += abs(angle) else: cumulative_angle = abs(angle) last_angle_sign = 1 if angle &gt; 0 else -1 look_ahead_points.append(points[j]) cumulative_distance += points[j]['distance'] if cumulative_angle &gt;= curve_angle_threshold: # Found a curve - add any collected straight points first if straight_points: # Include the distance TO the first point of the curve total_distance = sum(p['distance'] for p in straight_points[1:]) straight_segment = { 'start_point': straight_points[0], 'end_point': straight_points[-1], 'distance': total_distance, 'elevation_change': straight_points[-1]['elevation'] - straight_points[0]['elevation'], 'points': 
straight_points.copy(), 'type': 'straight', 'gradient': ((straight_points[-1]['elevation'] - straight_points[0]['elevation']) / total_distance * 100) if total_distance &gt; 0 else 0 } combined.append(straight_segment) straight_points = [] # Add the curve segment - include ALL points from i to j inclusive curve_points = [] for k in range(i, j + 1): # Include point j curve_points.append(points[k]) # Calculate curve distance including the distance TO point j curve_distance = sum(points[k]['distance'] for k in range(i + 1, j + 1)) curve_segment = { 'start_point': points[i], 'end_point': points[j], 'distance': curve_distance, 'elevation_change': points[j]['elevation'] - points[i]['elevation'], 'points': curve_points, 'type': 'curve', 'cumulative_angle': cumulative_angle, 'turn_direction': 'right' if last_angle_sign &gt; 0 else 'left', 'gradient': ((points[j]['elevation'] - points[i]['elevation']) / curve_distance * 100) if curve_distance &gt; 0 else 0 } combined.append(curve_segment) i = j straight_points = [points[j]] # Start new straight section from end of curve break j += 1 if cumulative_angle &lt; curve_angle_threshold: if i &lt; len(points) - 1: straight_points.append(points[i+1]) i += 1 # Add any remaining straight points as final segment if straight_points: total_distance = sum(p['distance'] for p in straight_points[1:]) if total_distance &gt; 0: straight_segment = { 'start_point': straight_points[0], 'end_point': straight_points[-1], 'distance': total_distance, 'elevation_change': straight_points[-1]['elevation'] - straight_points[0]['elevation'], 'points': straight_points, 'type': 'straight', 'gradient': ((straight_points[-1]['elevation'] - straight_points[0]['elevation']) / total_distance * 100) if total_distance &gt; 0 else 0 } combined.append(straight_segment) print(&quot;Finished curve detection&quot;) return combined def analyze_gpx(gpx_file): with open(gpx_file, 'r') as f: gpx = gpxpy.parse(f) # Get the true track length true_length_2d = gpx.length_2d() # Process points and calculate distances between them points = [] for track in gpx.tracks: for segment in track.segments: # First, collect all points without distances track_points = [] for point in segment.points: track_points.append({ 'latitude': point.latitude, 'longitude': point.longitude, 'elevation': point.elevation, 'distance': 0 }) # Then calculate distances, starting from the first point for i in range(len(track_points)): if i &gt; 0: # For all points except the first p1 = gpxpy.gpx.GPXTrackPoint( track_points[i-1]['latitude'], track_points[i-1]['longitude'], elevation=track_points[i-1]['elevation'] ) p2 = gpxpy.gpx.GPXTrackPoint( track_points[i]['latitude'], track_points[i]['longitude'], elevation=track_points[i]['elevation'] ) track_points[i]['distance'] = p1.distance_2d(p2) points.extend(track_points) # Continue with segment analysis... segments = combine_segments_by_curvature(points) # Calculate overall statistics curve_segments = [s for s in segments if s['type'] == 'curve'] straight_segments = [s for s in segments if s['type'] == 'straight'] curve_distance = sum(s['distance'] for s in curve_segments) straight_distance = sum(s['distance'] for s in straight_segments) # Verify distances calculated_total = sum(p['distance'] for p in points) print(f&quot;Check: Curve distances and straight distances should match total length: {curve_distance + straight_distance:.1f} m&quot;) print(f&quot;Check: The distance diff. from cum. 
segments and gpx length is: {calculated_total:.1f} m, {true_length_2d:.1f} m\n&quot;) if abs(calculated_total - true_length_2d) &gt; 1: # Allow 1m difference for rounding print(f&quot;Warning: Distance mismatch!&quot;) print(f&quot; True GPX length: {true_length_2d:.1f} m&quot;) print(f&quot; Calculated total: {calculated_total:.1f} m&quot;) print(f&quot; Difference: {abs(true_length_2d - calculated_total):.1f} m&quot;) return { 'filename': os.path.basename(gpx_file), 'total_distance': true_length_2d, # Use 2D length 'curve_distance': curve_distance, 'straight_distance': straight_distance, 'curve_percentage': (curve_distance / true_length_2d * 100) if true_length_2d &gt; 0 else 0, 'total_elevation_gain': sum(max(0, s['elevation_change']) for s in segments), 'total_elevation_loss': abs(sum(min(0, s['elevation_change']) for s in segments)), 'segments': segments } def generate_report(analysis_results, output_file): with open(output_file, 'w') as f: f.write(&quot;# Track Analysis Report (Curve-based)\n\n&quot;) # Add curve detection parameters info f.write(&quot;## Analysis Parameters\n&quot;) f.write(&quot;Curve detection is based on the following parameters:\n&quot;) f.write(&quot;- Look-ahead distance: 50 meters\n&quot;) f.write(&quot;- Curve angle threshold: 120 degrees\n&quot;) f.write(&quot;- A curve is detected when the cumulative angle within the look-ahead distance exceeds the threshold\n&quot;) f.write(&quot;- Angles are only accumulated when consecutive turns are in the same direction\n\n&quot;) f.write(&quot;---\n\n&quot;) for result in analysis_results: f.write(f&quot;## {result['filename']}\n\n&quot;) f.write(&quot;### Overall Statistics\n\n&quot;) f.write(f&quot;- Total Distance: {result['total_distance']:.1f} m\n&quot;) f.write(f&quot;- Distance in Curves: {result['curve_distance']:.1f} m ({result['curve_percentage']:.1f}%)\n&quot;) f.write(f&quot;- Distance in Straight Sections: {result['straight_distance']:.1f} m\n&quot;) f.write(f&quot;- Check Sum of Straight + Curve Sections: {result['straight_distance'] + result['curve_distance']:.1f} m\n&quot;) f.write(f&quot;- Total Elevation Gain: {result['total_elevation_gain']:.1f} m\n&quot;) f.write(f&quot;- Total Elevation Loss: {result['total_elevation_loss']:.1f} m\n&quot;) f.write(&quot;### Segment Analysis\n\n&quot;) f.write(&quot;| Type | Direction | Start Distance (m) | End Distance (m) | Length (m) | Elevation Change (m) | Gradient (%) | Curve Angle (ยฐ) |\n&quot;) f.write(&quot;|------|-----------|-------------------|-----------------|------------|-------------------|-------------|----------------|\n&quot;) cumulative_distance = 0 for segment in result['segments']: direction = segment.get('turn_direction', 'N/A') curve_angle = f&quot;{segment.get('cumulative_angle', 0):.1f}&quot; if segment['type'] == 'curve' else 'N/A' segment_length = segment['distance'] f.write( f&quot;| {segment['type']} | {direction} | &quot; f&quot;{cumulative_distance:.1f} | {(cumulative_distance + segment_length):.1f} | &quot; f&quot;{segment_length:.1f} | {segment['elevation_change']:.1f} | &quot; f&quot;{segment['gradient']:.1f} | {curve_angle} |\n&quot; ) cumulative_distance += segment_length f.write(&quot;\n---\n\n&quot;) def main(): # Process all GPX files in current directory gpx_files = [f for f in os.listdir('.') if f.endswith('.gpx')] if not gpx_files: print(&quot;No GPX files found in current directory&quot;) return analysis_results = [] for gpx_file in gpx_files: print(f&quot;\n\nProcessing {gpx_file}...\n&quot;) result = 
analyze_gpx(gpx_file) analysis_results.append(result) # Generate report generate_report(analysis_results, 'track_analysis_curves_report.md') print(&quot;Analysis complete. Results written to track_analysis_curves_report.md&quot;) if __name__ == &quot;__main__&quot;: main()``` </code></pre>
<python><distance><gpx><bearing>
2025-03-20 17:29:32
0
2,874
Kay
79,523,696
6,471,140
how to modify a step or a prompt of an existing langchain chain (customize SelfQueryRetriever)?
<p>I need to customize a SelfQueryRetriever (the reason is: the generated target queries in OpenSearch are being generated incorrectly, so we need to tune prompts, and we need to add some custom behavior such as multi-tenancy), but we don't want to rewrite the whole chain, just the parts that we need to customize. How can we customize specific steps of a chain? Is there a way to modify it by position, say something like this (pseudo-code):</p> <pre><code>retriever = SelfQueryRetriever(**config) retriever[2] = create_custom_module1() retriever[4] = create_custom_module2() </code></pre> <p>In this example we preserve the majority of the chain but customize only the third and fifth elements.</p> <p>Is it possible to do this?</p>
<python><nlp><artificial-intelligence><langchain><large-language-model>
2025-03-20 17:26:35
0
3,554
Luis Leal
79,523,622
7,483,211
Python Intellisense tooltip forever "Loading" in Cursor AI Editor
<p>I'm trying out the Cursor &quot;AI&quot; editor with a Python repository.</p> <p>Cursor offered to import all my VS Code extensions to ease onboarding.</p> <p>But now Intellisense doesn't seem to work at all. In VS Code, when I hover over a variable or method, I get type information and/or method documentation. This doesn't work at all in Cursor.</p> <p>I noticed that they ship their own version of the &quot;Python&quot; extension, presumably because VS code does not allow Cursor to use its extension store.</p> <p>How do I get Python Intellisense to work in Cursor? I've tried restarting extension host multiple times to no avail.</p>
<python><intellisense><cursor-ide>
2025-03-20 16:55:53
0
10,272
Cornelius Roemer
79,523,584
169,947
Use a single pyproject.toml for 'poetry' & 'uv': dev dependencies
<p>I'm trying to let people on my project (including myself) migrate to <code>uv</code> while maintaining compatibility with people who still want to use <code>poetry</code> (including some of our builds). Now that <a href="https://python-poetry.org/blog/announcing-poetry-2.0.0/#supporting-the-project-section-in-pyprojecttoml-pep-621" rel="nofollow noreferrer"><code>poetry</code> 2.0 has been released</a>, and the more standard <code>[project]</code> fields are supported, this should be possible.</p> <p>It's working well in almost all ways, but the main glitch I'm hitting is the set of dev dependencies. <code>uv</code> and the standard seem to want them as</p> <pre class="lang-ini prettyprint-override"><code>[dependency-groups] dev = [ &quot;bandit ~= 1.8.3&quot;, ... ] </code></pre> <p>while <code>poetry</code> seems to want them as</p> <pre class="lang-ini prettyprint-override"><code>[tool.poetry.group.dev.dependencies] bandit = &quot;^1.8.3&quot; ... </code></pre> <p>. Without including that section, the dev dependencies get left out of the lockfile when doing <code>poetry lock</code>.</p> <p>Anyone have any techniques for getting the two tools to play nicely in this situation?</p>
<python><python-poetry><pyproject.toml><uv>
2025-03-20 16:40:17
1
24,277
Ken Williams
79,523,536
5,602,104
Can you raise an AirflowException without dumping the entire traceback into the logs?
<p>In Airflow, you're supposed to raise an AirflowException if you want a task to be marked as a failure. But the raised error doesn't seem to be caught in the top-level Airflow module, and so it results in the entire stack trace being dumped into the logs. If you do your error handling properly, it should be possible to print a helpful error message and then fail the task, without gumming up the logs. How can this be done?</p>
<python><airflow>
2025-03-20 16:24:10
1
729
jcgrowley
79,523,500
1,570,985
RMSNorm derivative using sympy -- problem with summation over fixed number of elements
<p>I have the following sympy equation for RMSNorm (easier to see in a Jupyter notebook)</p> <pre class="lang-py prettyprint-override"><code>import sympy as sp # Define the symbols x = sp.Symbol('x') # Input variable n = sp.Symbol('n') # Number of elements gamma = sp.Symbol('gamma') epsilon = sp.Symbol('epsilon') # Small constant to avoid division by zero # Define the RMS normalization equation mean_square = sp.Sum(x**2, (x, 1, n)) / n rms = sp.sqrt(mean_square + epsilon) fwd_out = x * gamma / rms # Display the equation sp.pprint(fwd_out) </code></pre> <p>I have an issue with the <code>rms</code> term when I take the derivative of <code>fwd_out</code> wrt <code>x</code> as follows:</p> <pre class="lang-py prettyprint-override"><code>d_activation = sp.diff(fwd_out, x) </code></pre> <p>Sympy does not consider <code>rms</code> as a function of <code>x</code> -- it considers it as a constant, as it evaluates <code>rms</code> over <code>n</code>; the following displays 0:</p> <pre class="lang-py prettyprint-override"><code>sp.diff(rms, x) </code></pre> <p>But as per the RMSNorm paper, <code>rms</code> should be considered as a function of <code>x</code>.</p> <p>Is there a way sympy can be forced to consider <code>rms</code> as a function of <code>x</code>?</p> <p>I am using <code>Python 3.12.9</code> and <code>Sympy 1.12.1</code>.</p> <hr /> <p>Complete answer based on @smichr's answer:</p> <pre class="lang-py prettyprint-override"><code>from sympy import * from sympy.abc import n, gamma, epsilon x = IndexedBase(&quot;x&quot;) i = symbols('i', cls = Idx) mean_squared = Sum(x[i] ** 2, (i, 1, n)) / n rms = sqrt(mean_squared + epsilon) fwd_out = x[i] * gamma / rms # diff wrt x[i] d_fwd_out = diff(fwd_out, x[i]) d_rms = diff(rms, x[i]) </code></pre> <hr /> <p>Ref:</p> <p>RMSNorm Paper: <a href="https://arxiv.org/pdf/1910.07467" rel="nofollow noreferrer">https://arxiv.org/pdf/1910.07467</a> Pytorch API: <a href="https://pytorch.org/docs/stable/generated/torch.nn.modules.normalization.RMSNorm.html" rel="nofollow noreferrer">https://pytorch.org/docs/stable/generated/torch.nn.modules.normalization.RMSNorm.html</a></p>
<python><sympy>
2025-03-20 16:09:07
1
730
algoProg
79,523,375
3,815,773
max() output depends on order when nan is present
<p>I am puzzled by this behaviour of Python's max() function:</p> <pre><code>&gt;&gt;&gt; a = 100 &gt;&gt;&gt; n = float(&quot;nan&quot;) &gt;&gt;&gt; a 100 &gt;&gt;&gt; n nan &gt;&gt;&gt; max(a, n) 100 &gt;&gt;&gt; max(n, a) nan </code></pre> <p>That is, the ORDER of the parameters determines the outcome?</p>
<python><math><comparison><nan>
2025-03-20 15:16:25
0
505
ullix
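The behaviour in the question above follows from how `max()` works: it keeps the first argument and only replaces it when a later one compares strictly greater, and every comparison involving `nan` is False, so whichever value comes first simply survives. A small sketch of an order-independent alternative:

```python
import math

values = [100, float("nan")]

print(max(values), max(reversed(values)))   # 100 nan  -- order-dependent

# Dropping nans first gives the same answer regardless of order.
finite = [v for v in values if not math.isnan(v)]
print(max(finite))                          # 100
```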
79,523,183
10,658,339
Why are date and time data not mapping/uploading to PostgreSQL with the correct datatype?
<p>I'm working with a PostgreSQL database in Python using psycopg2 and pandas. My workflow involves reading data into Python, uploading it to the database, retrieving it back into Python, and then updating the database. To ensure data integrity, I aim to create a comprehensive mapping between pandas data types and PostgreSQL data types. Currently, I'm encountering an issue where date and time data (including both day and hour) are not being uploaded to PostgreSQL with the correct data type.</p> <p>I'm utilizing the following function to map pandas data types to PostgreSQL data types:</p> <pre><code> def map_dtype(dtype): if np.issubdtype(dtype, np.integer): return 'BIGINT' # Maps to PostgreSQL's BIGINT for large integers elif np.issubdtype(dtype, np.floating): return 'DOUBLE PRECISION' # Maps to PostgreSQL's DOUBLE PRECISION for floating-point numbers elif np.issubdtype(dtype, np.datetime64): return 'TIMESTAMP' # Maps to PostgreSQL's TIMESTAMP for date and time elif np.issubdtype(dtype, np.bool_): return 'BOOLEAN' # Maps to PostgreSQL's BOOLEAN for boolean values else: return 'TEXT' # Default mapping to PostgreSQL's TEXT for other types </code></pre> <p>This is the function I'm using to upload data to DB:</p> <pre><code>def upload_to_db(path_to_config, tbl_name, col_str, dataframe): &quot;&quot;&quot;Upload a DataFrame to the specified database.&quot;&quot;&quot; conn = connect_to_db(path_to_config) if not conn: return cursor = conn.cursor() try: # Get the current user cursor.execute(&quot;SELECT current_user;&quot;) current_user = cursor.fetchone()[0] # Define schema based on user schema = &quot;developer_schema&quot; if current_user == &quot;dev1&quot; else &quot;public&quot; full_table_name = f&quot;{schema}.{tbl_name}&quot; cursor.execute(f&quot;DROP TABLE IF EXISTS {full_table_name};&quot;) # Generate the CREATE TABLE statement with explicit data types cursor.execute(f&quot;CREATE TABLE {full_table_name} ({col_str});&quot;) print(f'Table {full_table_name} was created successfully.') csv_file_path = f&quot;{tbl_name}.csv&quot; dataframe.to_csv(csv_file_path, header=True, index=False, encoding='utf-8') with open(csv_file_path, 'r') as my_file: print('File opened in memory.') sql_statement = f&quot;&quot;&quot; COPY {full_table_name} FROM STDIN WITH CSV HEADER DELIMITER ',' &quot;&quot;&quot; cursor.copy_expert(sql=sql_statement, file=my_file) print('File copied to the database.') cursor.execute(f&quot;GRANT SELECT ON TABLE {full_table_name} TO public;&quot;) conn.commit() print(f'Table {full_table_name} imported to the database successfully.') except Exception as e: print(f&quot;Error during upload: {e}&quot;) conn.rollback() finally: cursor.close() conn.close() if os.path.exists(csv_file_path): try: os.remove(csv_file_path) print(f&quot;Temporary file {csv_file_path} has been deleted.&quot;) except Exception as e: print(f&quot;Error deleting file {csv_file_path}: {e}&quot;) </code></pre> <p>Is this the best approach? Do you have any suggestions? Ps. I don't want to use sqlalchemy because is too slow for large dataset (my case).</p>
<python><pandas><database><postgresql><dtype>
2025-03-20 13:59:56
0
527
JCV
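One plausible reason for the TIMESTAMP issue in the question above: if the date/time column arrives in pandas as dtype `object` (plain strings), `map_dtype()` falls through to `TEXT`. A hedged sketch using the `map_dtype()` from the question, where `df` and the column name `created_at` are illustrative placeholders:

```python
import pandas as pd

# Make sure datetime columns really are datetime64 before mapping dtypes,
# otherwise np.issubdtype(dtype, np.datetime64) never matches.
df["created_at"] = pd.to_datetime(df["created_at"], errors="coerce")

# Build the CREATE TABLE column list from the (now correct) dtypes.
col_str = ", ".join(f'"{col}" {map_dtype(dtype)}' for col, dtype in df.dtypes.items())
```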
79,523,161
5,506,912
Connect to socket.io xhr request with python
<p>I'm trying to retrieve some data from <a href="https://www.winamax.fr/paris-sportifs/sports/1/7/4" rel="nofollow noreferrer">here</a>, namely games and odds. I know the data is in the response of this GET request as shown in the network tab below:</p> <p><a href="https://i.sstatic.net/ykrU3Pk0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ykrU3Pk0.png" alt="network tab showing the request" /></a></p> <p>However we can see that there is some websocket protocol and I'm not sure how to handle this.</p> <p>I should mention I'm new to python (usually coding in R) and websockets but I've managed to find the socketio path in the code elements so here is what I've tried :</p> <pre><code>import socketio sio = socketio.Client(logger=True, engineio_logger=True) @sio.event def connect(): print('connected!') sio.emit('add user', 'Testing') @sio.event def print_message(sid): print(&quot;Socket ID: &quot; , sid) @sio.event def disconnect(): print('disconnected!') sio.connect('https://sports-eu-west-3.winamax.fr',transports=['websocket'], socketio_path = '/uof-sports-server/socket.io') sio.wait() </code></pre> <p>I'm able to connect but I'm not sure where to go next and get the actual response from the GET request above.</p> <p>Any hints appreciated</p>
<python><web-scraping><socket.io>
2025-03-20 13:52:29
1
521
M.O
79,523,159
4,875,766
How do I type a generic subclass correctly?
<p>Here is some example code:</p> <p>I have my underlying generic object.</p> <pre class="lang-py prettyprint-override"><code>T = TypeVar(&quot;T&quot;, str, int, bytes) class MyObj(Generic[T]): id: T </code></pre> <p>and some implementations</p> <pre class="lang-py prettyprint-override"><code>class MyObjImplInt(MyObj[int]): ... class MyObjImplStr(MyObj[str]): ... obj_int = MyObjImplInt() type(obj_int.id) # int obj_str = MyObjImplStr() type(obj_str.id) # str </code></pre> <p>Now I have built a manager for these objects</p> <pre class="lang-py prettyprint-override"><code>TObj = TypeVar(&quot;TObj&quot;, bound=MyObj) class MyObjMgr(Generic[TObj, T]): objs: list[TObj] def __init__(self, objs: list[TObj]): self.objs = objs def get_by_id(self, obj_id: T): return [e for e in self.objs if e.id == obj_id] class MyObjMgrInts(MyObjMgr[MyObjImplInt, int]): ... class MyObjMgrStrs(MyObjMgr[MyObjImplStr, str]): ... </code></pre> <p>My issue is that I have to provide the type <code>T</code> for <code>MyObjMgr(Generic[TObj, T])</code> in order to specify the <code>id</code> type of the underlying object. Is there a way to infer <code>T</code> from <code>TObj[T]</code>?</p> <p>i.e. just write this:</p> <pre class="lang-py prettyprint-override"><code>class MyObjMgr2(Generic[TObj]): objs: list[TObj] def __init__(self, objs: list[TObj]): self.objs = objs def get_by_id(self, obj_id: T@TObj): # err here return [e for e in self.objs if e.id == obj_id] class MyObjMgr2Ints(MyObjMgr2[MyObjImplInt]): ... </code></pre>
<python><oop><generics><python-typing>
2025-03-20 13:51:34
2
331
TobyStack
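For the typing question above: Python's type system has no way to pull `T` back out of an already-bound `TObj`, so a common compromise is to parametrise the manager by the id type and hold `MyObj[T]` instances, giving up the concrete subclass type in exchange for inference on `get_by_id`. A sketch under that assumption (a reworked `MyObj`, not the exact classes from the question):

```python
from collections.abc import Sequence
from typing import Generic, TypeVar

T = TypeVar("T", str, int, bytes)

class MyObj(Generic[T]):
    def __init__(self, id: T) -> None:
        self.id = id

class MyObjImplInt(MyObj[int]):
    ...

class MyObjMgr2(Generic[T]):
    def __init__(self, objs: Sequence[MyObj[T]]) -> None:
        self.objs = list(objs)

    def get_by_id(self, obj_id: T) -> list[MyObj[T]]:
        return [e for e in self.objs if e.id == obj_id]

mgr = MyObjMgr2([MyObjImplInt(42)])   # T is inferred as int
ints = mgr.get_by_id(42)              # obj_id must be an int here
```

The trade-off is that the manager's elements are typed as `MyObj[int]` rather than `MyObjImplInt`; if the concrete subclass type matters to callers, the two-TypeVar version in the question is still the explicit way to express it.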
79,522,949
5,060,208
Adding time-series logging to tensorboard
<p>Tensorboard provides access plotting <a href="https://www.tensorflow.org/tensorboard/scalars_and_keras" rel="nofollow noreferrer">scalars, histograms and images</a>. I am trying to add model predictions as simple vectors to tensorboard. Currently I am doing that through adding matplotlib images, which works great to visual inspection, but it's not possible to infer the prediction traces after model training. There are methods to represent the traces as scalars in tensorboard, as recommended <a href="https://stackoverflow.com/questions/41353169/tensorboard-with-numpy-array">here</a>, but I am curious what other methods exist, since it 1. does not seem that tensorboard provides native support for that, and 2. it is due to scalar logging rather slow.</p> <p>Are there other methods in tensorboard for vector-based logging? And which other methods could be recommended for that purpose? My best bet would be to simple save them as <code>.npy</code> arrays, but when reading the run computations I would then end up with multiple files in addition to the <code>.tsevents</code> file. So I am just curious what people can recommend.</p>
<python><machine-learning><logging><tensorboard>
2025-03-20 12:33:41
0
331
Merk
79,522,829
357,024
Python type hints with default value as Type
<p>Below is a simplified example of an object factory in my Python application.</p> <pre><code>from typing import Type class Animal: def speak(self): assert False class Dog(Animal): def speak(self): print(&quot;bark&quot;) def make_animal[T : Animal](animal_class: Type[T] = Animal) -&gt; T: return animal_class() dog = make_animal(Dog) dog.speak() &gt;&gt;&gt;bark </code></pre> <p>The code runs as I expect. My issue is that <code>mypy</code> reports the below error when type checking the default value of parameter <code>animal_class</code> in the method <code>make_animal</code></p> <pre><code>$ mypy test.py error: Incompatible default for argument &quot;animal_class&quot; (default has type &quot;type[Animal]&quot;, argument has type &quot;type[T]&quot;) [assignment] $ mypy --version mypy 1.15.0 (compiled: yes) </code></pre> <p>Have I done something wrong or is mypy incorrectly reporting an error?</p>
<python><python-typing><mypy>
2025-03-20 11:46:32
0
61,290
Mike
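mypy's complaint in the question above is arguably by design: with `animal_class: Type[T]`, a caller may pin `T` to `Dog`, and then the default `Animal` would be the wrong type, so the default is unsafe in general. One hedged workaround is to drop the default and expose a small convenience wrapper instead:

```python
from typing import TypeVar

class Animal:
    def speak(self) -> None:
        raise NotImplementedError

class Dog(Animal):
    def speak(self) -> None:
        print("bark")

T = TypeVar("T", bound=Animal)

def make_animal(animal_class: type[T]) -> T:
    return animal_class()

def make_default_animal() -> Animal:
    return make_animal(Animal)

dog = make_animal(Dog)   # inferred as Dog
dog.speak()
```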
79,522,783
12,040,751
NonExistentTime error caused by pandas.Timestamp.floor with localised timestamp
<p>I need to calculate the floor of a localized timestamp with daily resolution, but I get an exception when the daylight saving time starts.</p> <pre><code>&gt;&gt;&gt; pd.Timestamp('2024-09-08 12:00:00-0300', tz='America/Santiago').floor(&quot;D&quot;) NonExistentTimeError: 2024-09-08 00:00:00 </code></pre> <p>I understand that midnight does not exist on that day, clocks are moved to 1am after 11:59pm. Still I would have expected floor to return <code>pd.Timestamp('2024-09-08 01:00:00-0300', tz='America/Santiago')</code>.</p> <p>The same happens with ceil applied to the previous day:</p> <pre><code>&gt;&gt;&gt; pd.Timestamp('2024-09-07 12:00:00-0300', tz='America/Santiago').ceil(&quot;D&quot;) NonExistentTimeError: 2024-09-08 00:00:00 </code></pre> <p>I made two attempts at solving this but I either get the same exception or the wrong answer:</p> <pre><code>&gt;&gt;&gt; pd.Timestamp('2024-09-08 12:00:00').floor(&quot;D&quot;).tz_localize('America/Santiago') NonExistentTimeError: 2024-09-08 00:00:00 &gt;&gt;&gt; pd.Timestamp('2024-09-08 12:00:00-0300').floor(&quot;D&quot;).tz_convert('America/Santiago') Timestamp('2024-09-07 23:00:00-0400', tz='America/Santiago') # Wrong answer </code></pre>
<python><pandas><datetime><timestamp>
2025-03-20 11:27:10
1
1,569
edd313
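For the question above, `Timestamp.floor` and `ceil` accept a `nonexistent` argument (like `tz_localize`) that controls what happens when the rounded wall-clock time falls into a DST gap; `"shift_forward"` should give the 01:00 result the asker expected. A small sketch (the exact offset shown depends on the timezone database in use):

```python
import pandas as pd

ts = pd.Timestamp('2024-09-08 12:00:00-0300', tz='America/Santiago')
ts.floor("D", nonexistent="shift_forward")
# Expected: Timestamp('2024-09-08 01:00:00-0300', tz='America/Santiago')

pd.Timestamp('2024-09-07 12:00:00-0300', tz='America/Santiago').ceil(
    "D", nonexistent="shift_forward"
)
```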
79,522,749
11,863,823
How to change the title of a DataFrameSchema in pandera?
<p>I've been going through the <code>pandera</code> docs and I cannot find a way to change the title of an existing <code>pandera.DataFrameSchema</code>. Currently I do the following, which seems to work but <em>feels</em> weird because usually APIs define methods that cleanly change the properties (for instance, I have no way of knowing if changing the <code>title</code> like this causes some unforeseen side effects):</p> <pre class="lang-py prettyprint-override"><code>import pandera as pa dfs1 = pa.DataFrameSchema( columns={ &quot;a&quot;: pa.Column(int) }, title=&quot;DFS 1 with col 'a'&quot; ) dfs2 = dfs1.add_columns( {&quot;b&quot;: pa.Column(int)} ) dfs2.title = &quot;DFS 2 with cols 'a' &amp; 'b'&quot; </code></pre> <p>Is it how we're supposed to do or is there some utility somewhere in <code>pandera</code> that I missed?</p>
<python><dataframe><pandera>
2025-03-20 11:15:43
0
628
globglogabgalab
79,522,677
364,088
How to pass an argument to a 'dependencies' function in FastAPI?
<p>Is there a way to pass an argument to a function defined within the <code>dependencies</code> argument of, for instance, <code>app.get</code>? If there isn't, what other options exist ?</p> <p>I can do the following to test a request, including examining the bearer token, before it reaches the endpoint handler:</p> <pre><code>async def verify_token(Authorization: Annotated[str, Header()]): # decode token and do processing based on it @app.get(&quot;/read_items/&quot;, dependencies=[Depends(verify_token)]) async def read_items(): return [{&quot;item&quot;: &quot;Foo&quot;}, {&quot;item&quot;: &quot;Bar&quot;}] </code></pre> <p>but what I really want to do is provide an argument to <code>verify_token</code>, like this:</p> <pre><code>@app.get(&quot;/read_items/&quot;, dependencies=[Depends(verify_token(&quot;red&quot;)]) async def read_items(): return [{&quot;item&quot;: &quot;Foo&quot;}, {&quot;item&quot;: &quot;Bar&quot;}] </code></pre> <p>in this scenario <code>verify_token</code> would accept a string as an argument and the value of that string would be easily seen by anyone inspecting the code.</p> <p>Of course this type of outcome could be produced by using a decorator:</p> <pre><code>@verify_token_decorator(&quot;red&quot;) @app.get(&quot;/read_items/&quot;) async def read_items(): return [{&quot;item&quot;: &quot;Foo&quot;}, {&quot;item&quot;: &quot;Bar&quot;}] </code></pre> <p>but decorators cannot inspect the contents of Request headers and so such a decorator would not be able to inspect the token.</p> <p>Is there a way to do the functional equivalent of passing a value into a function within the <code>dependencies</code> argument of an end point?</p>
<python><fastapi>
2025-03-20 10:45:52
2
8,432
shearichard
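A common pattern for the FastAPI question above is a "dependency factory": a plain function that takes the extra argument and returns the actual dependency as a closure, so `Depends()` still receives a callable. A hedged sketch — the token-checking logic is a placeholder, not a real validation scheme:

```python
from typing import Annotated

from fastapi import Depends, FastAPI, Header, HTTPException

app = FastAPI()

def verify_token(required_scope: str):
    """Return a dependency that checks the bearer token for `required_scope`."""
    async def _verify(authorization: Annotated[str, Header()]) -> None:
        # Placeholder: decode the token and compare against required_scope.
        if required_scope not in authorization:
            raise HTTPException(status_code=403, detail="insufficient scope")
    return _verify

@app.get("/read_items/", dependencies=[Depends(verify_token("red"))])
async def read_items():
    return [{"item": "Foo"}, {"item": "Bar"}]
```

A callable class (taking the string in `__init__` and doing the check in an async `__call__`) achieves the same thing when the logic gets bigger.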
79,522,584
9,729,847
{{ ds }} isn't interpreted in my airflow dag
<p>My goal is to dynamically create some <code>SFTPToS3Operator</code> to retrieve files from a server to Amazon S3.</p> <p>For instance, ,my filename is <code>xxx_date_hours_minutes_seconds.csv</code>, so I want to fetch filename from the server to use it in the <code>SFTPToS3Operator</code> operator.</p> <p>The main problem is that the <code>{{ ds }}</code> is not interpreted</p> <pre><code>with DAG(...) as dag: init_task = DummyOperator(task_id='dag_start') alertPerfOperatorsTask = list_files_to_transfer() init_task &gt;&gt; alertPerfOperatorsTask </code></pre> <p>The goal of <code>list_files_to_transfer</code> is to create a list of task, one task for each file :</p> <pre><code>def list_files_to_transfer(): logger = logging.getLogger('airflow.task') yesterday_date = '{{ macros.ds_add(ds, -1) }}' logger.info(&quot;yesterday_date: &quot; + yesterday_date) sftp_hook = SFTPHook(ssh_conn_id='SFTP__gas__PointBreakExporter') with sftp_hook.get_conn() as sftp_client: remote_path = 'out/IQUAL_DEV/' ... return alertPerfOperatorsTask (list of SFTPToS3Operator) </code></pre> <p>If I look at my log, I get :</p> <pre><code>INFO - yesterday_date: `{{ macros.ds_add(ds, -1) }}` instead of yesterday_date: 2025-03-19 </code></pre> <p>and of course I am using this date to find file but obvioulsy didn't find any file named <code>xxx_{{ macros.ds_add(ds, -1) }}.csv</code></p> <p>I don't understand why <code>{{ macros.ds_add(ds, -1) }}</code> isn't interpreted</p>
<python><airflow>
2025-03-20 10:17:55
1
1,798
maxime G
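The behaviour in the question above is expected: Jinja such as `{{ ds }}` or `{{ macros.ds_add(ds, -1) }}` is only rendered inside an operator's `template_fields`, at task run time; code that executes while the DAG file is parsed (like `list_files_to_transfer()`) just sees the literal string. A hedged sketch of passing the template straight into the operator instead — it assumes `sftp_path` and `s3_key` are template fields of `SFTPToS3Operator` (as in recent Amazon provider releases) and uses placeholder connection and bucket names:

```python
from airflow.providers.amazon.aws.transfers.sftp_to_s3 import SFTPToS3Operator

fetch_file = SFTPToS3Operator(
    task_id="fetch_file",
    sftp_conn_id="SFTP__gas__PointBreakExporter",
    sftp_path="out/IQUAL_DEV/xxx_{{ macros.ds_add(ds, -1) }}.csv",   # rendered at run time
    s3_conn_id="aws_default",   # placeholder
    s3_bucket="my-bucket",      # placeholder
    s3_key="landing/xxx_{{ macros.ds_add(ds, -1) }}.csv",
)
```

If the exact filename (with hours/minutes/seconds) has to be discovered by listing the server, that listing needs to happen inside a running task — for example a `@task` function that reads `ds` from the context, possibly combined with dynamic task mapping — rather than at module level during DAG parsing.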
79,522,491
13,158,157
Error opening Excel file in python edited on Excel Web Interface
<p><strong>Problem</strong>:</p> <p>I'm encountering issues when trying to open Excel files (.xlsx) with pandas that was edited on SharePoint Online (Microsoft 365) via the web Excel interface and then downloaded.</p> <p><strong>Code and Errors</strong>:</p> <pre><code>import pandas as pd # Attempt to read the Excel file with various engines # Auto df = pd.read_excel('file_path.xlsx') # Error: XLRDError: Can't find workbook in OLE2 compound document # openpyxl df = pd.read_excel('file_path.xlsx', engine='openpyxl') # Error: BadZipFile: File is not a zip file # calamine import calamine df = calamine.read_excel('file_path.xlsx', engine='calamine') # Error: CalamineError: Cannot detect file format # pyxlsxb import pyxlsb df = pyxlsb.read_excel('file_path.xlsx', engine='pyxlsb') # Error: BadZipFile: File is not a zip file </code></pre> <p><strong>Additional Details</strong>:</p> <ol> <li>The same .xlsx file is opened without errors on desktop Excel.</li> <li>Just for a test, I tried to open such &quot;corrupted&quot; .xlsx with Excel and save it to another file and later open that new .xlsx but that did not resolve the error.</li> <li>The error persists across different library versions.</li> </ol> <p>Has anyone encountered this problem or have suggestions on how to resolve it?</p>
<python><excel><pandas>
2025-03-20 09:38:40
0
525
euh
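All of the engines in the question above are effectively reporting the same thing: the bytes on disk are not a real .xlsx (which is a zip archive). A quick, hedged diagnostic is to inspect the first bytes of the downloaded file — a genuine .xlsx starts with the zip signature, a legacy .xls is an OLE2 compound document, and an HTML page (a common result of downloading from SharePoint without a valid session) starts with markup:

```python
with open("file_path.xlsx", "rb") as f:
    head = f.read(8)

print(head)
# b'PK\x03\x04'...                        -> real .xlsx (zip container)
# b'\xd0\xcf\x11\xe0\xa1\xb1\x1a\xe1'     -> OLE2 (.xls or another Office binary)
# b'<!DOCTYPE' or b'<html'                -> an HTML page was downloaded instead
```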
79,522,367
4,080,181
How do I read a string as if it were bytes
<p>I have a string in a file encoded in utf-8. It contains escape sequences that should not be interpreted as Unicode code points, but rather as binary numbers. How can I convert it to bytes?</p> <p>For example, consider the string 'abc \x00\x01\xff'. If I were to type this in to the Python interpreter as a bytes literal I would get:</p> <pre><code> &gt;&gt;&gt; b = b'abc \x00\x01\xff' &gt;&gt;&gt; list(b) [97, 98, 99, 32, 0, 1, 255] </code></pre> <p>But if if I start with a string and try to interpret the escape sequences I get:</p> <pre><code> &gt;&gt;&gt; s = r'abc \x00\x01\xff'.encode('utf8').decode('unicode-escape') &gt;&gt;&gt; list(s) ['a', 'b', 'c', ' ', '\x00', '\x01', 'รฟ'] </code></pre> <p>Notice that \xff is not interpreted as a binary number, it is taken to be a Unicode code point. What I need is a 'binary-escape'.</p> <p>How can I read the string from the file and interpret it as if I entered it as a bytes literal?</p>
<python>
2025-03-20 08:54:45
2
548
August West
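One way to answer the question above without relying on the undocumented `codecs.escape_decode` helper is a Latin-1 round trip: `unicode_escape` turns `\xff` into the code point U+00FF, and encoding the result back with Latin-1 maps every code point below 256 to the byte with the same value:

```python
raw = r'abc \x00\x01\xff'          # as read from the file

data = raw.encode('latin-1').decode('unicode_escape').encode('latin-1')
print(list(data))                  # [97, 98, 99, 32, 0, 1, 255]
```

The caveat is that the input must contain only Latin-1 characters and byte-range escapes; an escape such as `\u2603` would decode to a code point that the final Latin-1 encode cannot represent and would raise.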
79,522,011
2,854,673
Python PIL image text overlay not displayed with expected color on white background image (RGB vs RGBA mode)
<p>Python PIL Image not working properly with overlay text image. I am trying to use FPDF image to convert an overlayed text png image to pdf file. However, the overlay text is not in expected colour (looks transparent) on a white background image. However, the same logic works in a zebra pattern background image.</p> <p>Source background image:</p> <p><a href="https://i.sstatic.net/26YfSShM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/26YfSShM.png" alt="Zebra pattern png image" /></a> <a href="https://i.sstatic.net/bZrxpmMU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bZrxpmMU.png" alt="white cross pattern png image" /></a></p> <p>Text overlayed image:</p> <p><a href="https://i.sstatic.net/fEBd8j6t.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fEBd8j6t.png" alt="Text overlayed zebra pattern png image" /></a> <a href="https://i.sstatic.net/WP0639wX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WP0639wX.png" alt="Text overlayed white cross pattern png image" /></a></p> <p>Code used for overlayed text: <a href="https://stackoverflow.com/questions/79520460/python-image-attributeerror-imagedraw-object-has-no-attribute-array-interf/79520602#79520602">Python Image AttributeError: &#39;ImageDraw&#39; object has no attribute &#39;__array_interface__&#39;</a></p> <p>code:</p> <pre><code>fontuse = &quot;D:/Ubuntusharefolder/CustomFonts/EnglishFonts/NotoSans-Medium.ttf&quot; font_color_bgra = (0,0,0,1)#black #(0,255,0,0)#green font = ImageFont.truetype(fontuse, 32 * int(2), layout_engine=ImageFont.LAYOUT_RAQM) src_img_use = np.array(Image.open(os.path.join(input_subfolder_path, filename) ) ) #(511, 898, 4) print('src_img_use gen size: ',os.path.join(input_subfolder_path, filename), src_img_use.shape) src_img_pil = Image.fromarray(src_img_use) print('src_img_pil gen size: ', src_img_use.size) img_pil_4dtxt = ImageDraw.Draw(src_img_pil) img_pil_4dtxt.text(textpos, subjectuse, font = font, fill = font_color_bgra) #fill = &quot;black&quot; ) src_img_pil.save(output_folder_final_path + '/updtxtimg/' + str(imgidx) + '_' + filename) print('label_img_withtxt drawtext saved subjectuse idx', imgidx, subjectuse) </code></pre> <p>Note sure whether there is a problem with input image or overlay code logic. The same image has to be used in FPDF.image but same behavior noticed.</p> <p>Updated Console logs:</p> <pre><code>//zebra pattern image &gt; &amp; C:/ProgramData/Anaconda3/python.exe ./code/txtoverimgv1.py src_base imguse gen size: (1920, 1080, 3) input img file path: D:/testinput/order_image_1.png src_imguse gen size: (540, 360) saved img in path: D:/testinput/updtxtimg/upd_order_image_1.png &lt;PIL.Image.Image image mode=RGB size=540x360 at 0x1293F0228E0&gt; saved_img1 done //white background &gt; &amp; C:/ProgramData/Anaconda3/python.exe ./code/txtoverimgv1.py src_base imguse gen size: (1920, 1080, 3) input img file path: D:/testinput/order_image_1.png src_imguse gen size: (1920, 1080) saved img in path: D:/testinput/updtxtimg/upd_order_image_1.png &lt;PIL.Image.Image image mode=RGBA size=1920x1080 at 0x2636D84B280&gt; saved_img1 done </code></pre> <p>mode= RGB and RGBA differs for each image but both are png images</p> <p>Code update: Converting the image to RGB while opening solves the problem, but would like to know best way to work on RGBA images too.</p> <pre><code>src_img_use = np.array(Image.open( input_subfolder_path + '/' + filename ).convert(&quot;RGB&quot;) ) </code></pre>
<python><image><python-imaging-library><overlay>
2025-03-20 05:52:06
1
334
UserM
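A likely culprit in the question above is the fill value itself: `font_color_bgra = (0,0,0,1)` has an alpha of 1 out of 255, so on an RGBA image the text is drawn almost fully transparent, while on an RGB image the fourth component is not meaningful. A hedged sketch that keeps both modes working — file paths, position, and text are placeholders:

```python
from PIL import Image, ImageDraw, ImageFont

img = Image.open("background.png")          # may be RGB or RGBA
draw = ImageDraw.Draw(img)
font = ImageFont.truetype("NotoSans-Medium.ttf", 64)

# Alpha 255 = fully opaque; (0, 0, 0, 1) would be nearly invisible on RGBA.
fill = (0, 0, 0, 255) if img.mode == "RGBA" else (0, 0, 0)
draw.text((20, 20), "Sample text", font=font, fill=fill)

img.convert("RGB").save("out.png")          # convert if a 3-channel file is needed (e.g. for FPDF)
```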
79,521,913
577,288
python 3 - audio effect - pedalboard cannot export file
<p>I'm trying to add some reverb to an audio file. Pedalboard says it has succeeded, yet there is no file in the output directory</p> <p>I have taken the example directly from the official site</p> <p><a href="https://pypi.org/project/pedalboard/0.4.1/" rel="nofollow noreferrer">official site</a></p> <pre><code>import soundfile as sf from pedalboard import Pedalboard, Chorus, Reverb # Read in an audio file: audio, sample_rate = sf.read('Liq1.wav') # Make a Pedalboard object, containing multiple plugins: board = Pedalboard([Chorus(), Reverb(room_size=0.25)]) # Run the audio through this pedalboard! effected = board(audio, sample_rate) # Write the audio back as a wav file: sf.write('./Liq11.wav', effected, sample_rate) </code></pre> <p>the code returns this,</p> <pre><code>Process finished with exit code -1073741819 (0xC0000005) </code></pre> <p>but the output folder has no file called, <code>Liq11.wav</code></p> <p>I'm also trying to add some time stretch to this audio file, to create a reverb-slowed song. But this seems to be much more difficult then expected, without installing .exe setup files like SOX requires</p> <p><strong>Another Example</strong></p> <p>code examples taken directly from this code presentation also fails to export the file, <code>out1.mp3</code></p> <p><a href="https://www.youtube.com/watch?v=NYhkqXpFAlg" rel="nofollow noreferrer">youtube presentation</a></p> <pre><code>from pedalboard import Pedalboard, Reverb from pedalboard.io import AudioFile with AudioFile('Liq1.mp3') as f: audio = f.read(f.frames) louder_audio = audio * 2 with AudioFile('out1.mp3', 'w', f.samplerate) as o: o.write(louder_audio) </code></pre>
<python><pyaudio>
2025-03-20 04:47:24
0
5,408
Rhys
79,521,805
2,469,032
Shift+enter inserts extra indents
<p>I have a Python source file with some dummy code:</p> <pre><code>a = 3 if a == 1: print(&quot;a = 1&quot;) elif a == 2: print(&quot;a = 2&quot;) else: print(&quot;Other&quot;) </code></pre> <p>When I submit the code to terminal with shift+enter, I get the following error. It looks like VS Code changed the indentation of my code. The same code ran just fine with shift+enter on my other computer. The error message is as below:</p> <pre><code>PS C:\Users\win32&gt; &amp; C:/Users/win32/AppData/Local/Programs/Python/Python313/python.exe Python 3.13.0 (tags/v3.13.0:60403a5, Oct 7 2024, 09:38:07) [MSC v.1941 64 bit (AMD64)] on win32 Type &quot;help&quot;, &quot;copyright&quot;, &quot;credits&quot; or &quot;license&quot; for more information. &gt;&gt;&gt; a = 3 &gt;&gt;&gt; if a == 1: ... print(&quot;a = 1&quot;) ... elif a == 2: ... print(&quot;a = 2&quot;) ... else: ... print(&quot;Other&quot;) ... File &quot;&lt;python-input-1&gt;&quot;, line 3 elif a == 2: ^^^^ SyntaxError: invalid syntax </code></pre> <p>Any insights?</p>
<python><visual-studio-code><indentation><read-eval-print-loop>
2025-03-20 03:09:10
2
1,037
PingPong
79,521,599
3,084,178
Python Kivy Button text_size = self.size wraps in button way before it should
<p>I'm coding an application in Kivy (seriously impressive library).</p> <p>I'm coming up against an interesting issue, and wondering if I'm missing something and/or if there's a fix.</p> <p><a href="https://i.sstatic.net/XIBd9nNc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XIBd9nNc.png" alt="Screenshot" /></a></p> <p>I created some buttons in the app (I've actually converted to an apk and installed it on my phone and this is a screenshot from there, but the same thing happens on the emulator).</p> <p>You'll notice that the word &quot;Random&quot; breaks into two lines with &quot;Rando&quot; and &quot;m&quot;.</p> <p>The buttons' text_size is linked to the buttons's size property, so the code for the text was like this:</p> <pre><code>for button in self.button_layout.children: button.text_size = button.size button.halign = 'center' button.valign = 'middle' </code></pre> <p>From my perspective, this should have allowed &quot;Random&quot; and &quot;Breadth&quot; to have fitted on one line quite easily. I can only guess that there's a race condition when it does the layout.</p> <p>When I run</p> <pre><code>&gt;&gt;python main.py </code></pre> <p>And size it similar to the phone size, it looks like this (which is what I want):</p> <p><a href="https://i.sstatic.net/iVEL75ej.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iVEL75ej.png" alt="screenshot" /></a></p> <p>What can I do to make sure that it renders the screen like this? Any common traps that I might be missing.</p> <hr /> <p>Minimum Working Example:</p> <pre><code>#__version__ = '1.0' from kivy.app import App from kivy.uix.boxlayout import BoxLayout from kivy.uix.gridlayout import GridLayout from kivy.uix.button import Button class ButtonTest(BoxLayout): def __init__(self, **kwargs): super(ButtonTest, self).__init__(**kwargs) self.drawGui() def drawGui(self,instance=None): self.clear_widgets() self.orientation = 'vertical' #self.define_square_positions() # Buttons self.prac_button_layout = GridLayout(cols=3) prac_random = Button(text=&quot;Random&quot;) prac_depth_first = Button(text=&quot;Depth first&quot;) prac_breadth_first = Button(text=&quot;Breadth first&quot;) self.prac_button_layout.add_widget(prac_random) self.prac_button_layout.add_widget(prac_breadth_first) self.prac_button_layout.add_widget(prac_depth_first) self.add_widget(self.prac_button_layout) for button in self.prac_button_layout.children: button.text_size = button.size button.halign = 'center' button.valign = 'middle' def on_size(self, *args): self.drawGui() class MyApp(App): def build(self): return ButtonTest() if __name__ == '__main__': MyApp().run() </code></pre> <p>The intent is that the text in the button more or less fills the button (ie wraps when necessary but is able to take the full width of the button), and that this adjusts dynamically when re-sized.</p> <p>If I run</p> <pre><code>&gt;python main.py </code></pre> <p>it starts like this (which suggests it's already not working here actually):</p> <p><a href="https://i.sstatic.net/1c2x4B3L.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1c2x4B3L.png" alt="screenshot" /></a></p> <p>But I simply moved this from one screen to another and it goes like this:</p> <p><a href="https://i.sstatic.net/2fNcfYmM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2fNcfYmM.png" alt="screenshot2" /></a></p> <p>And when I resize it, it behaves reasonably well:</p> <p><a href="https://i.sstatic.net/pi1lyTfg.png" rel="nofollow noreferrer"><img 
src="https://i.sstatic.net/pi1lyTfg.png" alt="screenshot3" /></a>.</p> <p>However, when I create the apk (bulldozer android) and open it in an emulator (Android Studios, Pixel 9 emulator in this case), it looks like this:</p> <p><a href="https://i.sstatic.net/jtoO679F.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jtoO679F.png" alt="emulator" /></a></p> <p>So the question is how do I make a set of simple buttons where the text wraps if it's too wide, but fills the button's width and height and is robust against dynamic resizing?</p>
<python><android><kivy>
2025-03-19 23:43:03
1
1,014
Dr Xorile
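For the Kivy question above, the loop copies `button.size` into `text_size` once, at a moment when the buttons may still have their default size, and the value is never updated afterwards — which is why the result depends on when and how often `drawGui` runs. A hedged sketch of binding the two properties instead, so the wrapping width always follows the actual button size:

```python
# Inside drawGui(), after the buttons have been added to the layout:
for button in self.prac_button_layout.children:
    button.halign = 'center'
    button.valign = 'middle'
    # Keep text_size equal to the widget size whenever the button is resized.
    button.bind(size=lambda btn, new_size: setattr(btn, 'text_size', new_size))
```

In kv language this is the usual one-liner `text_size: self.size`. With the binding in place, the `on_size` → `drawGui()` rebuild can likely be removed, since the labels re-wrap on their own.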
79,521,249
927,039
Advice on using Wagtail (e.g. RichTextField) with Pylance type checking
<p>Nearly all my Wagtail models files are full of errors according to Pylance and I'm not sure how to silence them without either adding <code># type: ignore</code> to hundreds of lines or turning off Pylance rules that help catch genuine bugs. The errors often come from <code>RichTextField</code> properties on my models. Here is a simple example:</p> <pre class="lang-py prettyprint-override"><code>from wagtail.models import Page from wagtail.fields import RichTextField class SpecialPage(Page): introduction = RichTextField( blank=True, features=settings.RICHTEXT_ADVANCED, help_text=&quot;Text that appears at the top of the page.&quot;, ) </code></pre> <p>where <code>RICHTEXT_ADVANCED</code> is a <code>list[str]</code> in my settings file. This code works fine. The arguments passed to <code>RichTextField</code> all exist in its <code>__init__</code> method or the <code>__init__</code> method of one a parent class. Yet Pylance in VS Code underlines all three lines of <code>introduction</code> in red and says:</p> <blockquote> <p>No overloads for &quot;__new__&quot; match the provided arguments Argument types: (Literal[True], Any, Literal['Text that appears at the top of the page.']) Pylance(<a href="https://github.com/microsoft/pyright/blob/main/docs/configuration.md#reportCallIssue" rel="nofollow noreferrer">reportCallIssue</a>)</p> </blockquote> <p>Is this a bug in Pylance? Is there a way around it other than the two (undesirable) approaches I mentioned above? Or could Wagtail provide more explicit type hints or <code>@overload</code> indicators to prevent the errors?</p> <p>The class inheritance goes RegisterLookupMixin &gt; Field (has <code>blank</code> and <code>help_text</code>) &gt; TextField (has <code>features</code>) &gt; RichTextField. None of these classes have a <code>__new__</code> method. The values I'm providing all match the types defined in the parent classes. I'm on a 5.x Wagtail, has this perhaps been fixed in more recent releases?</p>
<python><python-typing><wagtail><pyright>
2025-03-19 19:46:07
1
525
phette23
79,521,147
774,575
How to manage QLineEdit.returnPressed signal from multiple QLineEdit?
<p>Using <code>qtpy</code> (and the actual <code>Qt5</code> behind):</p> <pre><code>from qtpy.QtWidgets import QApplication, QMainWindow, QLineEdit class MainWindow(QMainWindow): def __init__(self): super().__init__() widget = QLineEdit() widget.setPlaceholderText(&quot;Enter your text&quot;) widget.returnPressed.connect(self.le_return_pressed) self.setCentralWidget(widget) self.le = widget def le_return_pressed(self): print(f'Line edit text: {self.le.text()}') def main(): app = QApplication([]) window = MainWindow() window.show() app.exec() if __name__ == '__main__': main() </code></pre> <p>In the signal handler <code>le_return_pressed</code> I need to use a hard reference <code>self.le</code> to the widget to access its properties. If the handler is common to multiple <code>QLineEdit</code>, how to identify the signal source?</p>
<python><qt><pyqt><qlineedit>
2025-03-19 18:57:31
0
7,768
mins
79,521,127
4,043,845
filedialog doesn't return anything on second run
<p>I'm working on a tkinter/customtkinter app to load data to MySQL. Below are the relevant classes. When I run this code to load a single file, I have no issues. The problem comes when I click <code>self.load_another_button</code> in the class <code>loading_window</code>. It properly brings me back to <code>file_select</code>, but when I click <code>self.fd_button</code> and select a file, it doesn't return anything. Is there a nuance with filedialog in tkinter? I set <code>self.filedialog</code> to <code>None</code> on <code>__init__</code> wondering if I needed to clear the variable for some reason, but that didn't work either. Happy to clarify anything if there are questions. This is my first tkinter/customtkinter app, so have no real clue what I'm doing.</p> <pre><code>class base_window(ctk.CTk): def __init__(self,title = None, **kwargs): super().__init__(**kwargs) self.title(title) self.geometry('500x400') self.grid_columnconfigure(0,weight=1) class table_selector(base_window): def __init__(self): super().__init__(title='Select Table') self.queries = queries() self.frame = ctk.CTkFrame(self,corner_radius=0,fg_color='transparent') self.val = ctk.StringVar() self.table_list = self.get_tables_from_schema() self.ts_label = ctk.CTkLabel(self.frame,text='Select Table to Load') self.combo = ttk.Combobox(self.frame,textvariable=self.val,values=self.table_list) self.combo.set(self.table_list[0]) self.frame.columnconfigure((0,1),weight=1) self.combo.grid(row=1,column=0,padx=5,pady=5) self.ts_label.grid(row=0,column=0,padx=5,pady=5,sticky='ew',columnspan=2) self.frame.grid(row=0,column=0,padx=5,pady=5) self.select_btn = ctk.CTkButton(self.frame,text='Select',command=self.select_btn_clicked) self.select_btn.grid(row=1,column=1,padx=5,pady=5) def run(self): self.mainloop() def get_tables_from_schema(self): sql = self.queries.get_tables_from_schema_sql() df = run_query(sql,user=user,password=password,database=schema) table_list = df.iloc[:,0].to_list() return table_list def select_btn_clicked(self): global table_to_load table_to_load = self.combo.get() self.fs = file_select() self.destroy() self.fs.run() class file_select(base_window): def __init__(self): super().__init__(title='Select File to Load') self.filedialog = None self.entry_text = None self.file_to_load = None self.queries = queries() self.frame = ctk.CTkFrame(self,corner_radius=0,fg_color='transparent') self.fs_label = ctk.CTkLabel(self.frame,text='Select File to Load (Only CSV is currently supported)') self.entry_text = ctk.StringVar() self.entry = ctk.CTkEntry(self.frame,textvariable=self.entry_text,width=50) self.fd_button = ctk.CTkButton(self.frame,text='Click to Select File',command=self.fd_btn_clicked) self.load_btn = ctk.CTkButton(self.frame,text='Click to Load File',command=self.load_btn_clicked) self.frame.columnconfigure((0,1),weight=1) self.fs_label.grid(row=0,column=0,padx=5,pady=5,sticky='new',columnspan=2) self.entry.grid(row=1,column=0,padx=5,pady=5,sticky='ew',columnspan=2) self.fd_button.grid(row=2,column=0,padx=5,pady=5,sticky='ew') self.load_btn.grid(row=2,column=1,padx=5,pady=5,sticky='ew') self.frame.grid(row=0,column=0,sticky='ew',padx=5,pady=5) def run(self): self.mainloop() def fd_btn_clicked(self): self.filedialog = filedialog.askopenfilename(parent=self,initialdir=os.path.join(os.path.expanduser('~'),'Desktop')) self.entry_text.set(self.filedialog) def load_btn_clicked(self): self.file_to_load = self.entry_text.get() if self.file_to_load[-3:].lower() != 'csv': self.error_window = error_window(message='File 
selected is not a csv. Please select a new file that is a csv, or exit the program') self.error_window.add_restart_button('file_select') self.error_window.run() else: self.check_load_file() def check_load_file(self): self.loading_window = loading_window(file_to_load=self.file_to_load) self.destroy() self.loading_window.run() class loading_window(base_window): def __init__(self, title='Loading File',file_to_load='', **kwargs): super().__init__(title=title, **kwargs) self.file_to_load = file_to_load self.frame = ctk.CTkFrame(self,corner_radius=0,fg_color='transparent') self.label = ctk.CTkLabel(self.frame,text=f'Validating and Loading File {file_to_load}') self.progress = ctk.CTkProgressBar(self.frame,mode='indeterminate',indeterminate_speed=1) self.progress.set(0) self.label.grid(row=0,column=0,padx=5,pady=5,sticky='new') self.progress.grid(row=1,column=0,padx=5,pady=5,sticky='ew') self.frame.grid(row=0,column=0,padx=5,pady=5,sticky='ew') self.load_file() def load_file(self): self.progress.start() df = pd.read_csv(self.file_to_load) df.columns = [i.strip() for i in df.columns] l_table_columns = run_query(queries().get_columns_from_table(),format='list',user=user,password=password,database=schema) if l_table_columns is not None: d = {} for i in l_table_columns: d[i[0]] = i[1] cols_to_load = [] for col in df.columns: if col in d.keys(): cols_to_load.append(col) if d[col] not in ['datetime','date','timestamp']: df.loc[:,col] = df.loc[:,col].fillna(fillna_map()[d[col]]).astype(dtype_map()[d[col]]) df_to_load = df.loc[:,cols_to_load] success_fail = df_to_db(df_to_load,table=table_to_load,user=user,password=password,database=schema) self.complete_load(text=success_fail) def complete_load(self,text): self.progress.set(1) self.progress.stop() self.complete_label = ctk.CTkLabel(self.frame,text=text) self.end_button = ctk.CTkButton(self.frame,text='Click to Exit Program',command=sys.exit) self.load_another_button = ctk.CTkButton(self.frame,text='Click to Load Another File',command=self.load_another_btn_clicked) self.complete_label.grid(row=2,column=0,padx=5,pady=5,sticky='ew') self.end_button.grid(row=3,column=0,padx=5,pady=5,sticky='ew') self.load_another_button.grid(row=4,column=0,padx=5,pady=5,sticky='ew') self.frame.grid(row=0,column=0,padx=5,pady=5,sticky='ew') def run(self): self.mainloop() def load_another_btn_clicked(self): self.ts = table_selector() self.destroy() self.ts.run() </code></pre>
<python><tkinter><customtkinter><filedialog>
2025-03-19 18:47:58
0
2,545
Kyle
79,521,003
1,443,801
How to install CPU only version of pytorch using setuptools backed pyproject.toml
<p>Consider the following pyproject.toml:</p> <pre><code>[build-system] requires = [&quot;setuptools&gt;=75&quot;, &quot;wheel&quot;] build-backend = &quot;setuptools.build_meta&quot; [project] name = &quot;test&quot; version = &quot;0.0.1-dev&quot; requires-python = &quot;&gt;=3.9&quot; dependencies = [ &quot;tensorflow-cpu&gt;=2.18&quot;, &quot;torch-cpu&gt;=2.6&quot;, ] </code></pre> <p>How can I install a CPU-only version of pytorch here? <code>torch-cpu</code> doesn't exist.</p> <p>I found a <a href="https://stackoverflow.com/questions/59158044/installing-a-specific-pytorch-build-f-e-cpu-only-with-poetry">Poetry-based solution</a> but couldn't make it work with setuptools.</p> <p>I added the following to my pyproject but it still ended up downloading CUDA-based libraries.</p> <pre><code>[tool.uv.sources] torch = { index = &quot;pytorch&quot; } [[tool.uv.index]] name = &quot;pytorch&quot; url = &quot;https://download.pytorch.org/whl/cpu&quot; </code></pre>
<python><pytorch><setuptools><pyproject.toml>
2025-03-19 17:51:51
0
1,321
Paagalpan
79,520,985
16,383,578
How to simplify the generation process of these boolean images?
<p>I have written code that generates <a href="https://en.wikipedia.org/wiki/Thue%E2%80%93Morse_sequence" rel="nofollow noreferrer">Thue-Morse sequence</a>, its output is a NumPy 2D array containing only 0s and 1s, the height of the array is 2<sup>n</sup> and the width is n. More specifically, each intermediate result is kept as a column in the final output, with the elements repeated so that every column is of the same length.</p> <p>For example, if the input is 3, we start with 01, we repeat the elements 4 times so that the first column is 00001111, in the next iteration we invert the bits and add to the previous iteration to get 0110, we repeat the elements twice so the second column is 00111100, and finally the last column is 01101001, so the output is:</p> <pre><code>array([[0, 0, 0], [0, 0, 1], [0, 1, 1], [0, 1, 0], [1, 1, 1], [1, 1, 0], [1, 0, 0], [1, 0, 1]], dtype=uint8) </code></pre> <p>Now the first kind of image I want to generate is simple, we repeat each column 2<sup>n - 1 - i</sup> times, so that each run of 1s becomes a rectangle, whose height is the length of the run, and in each subsequent column the rectangles are halved in width, so that sum of the width of the rectangles is height - 1.</p> <p>This is the 7-bit ABBABAAB example of such image:</p> <p><a href="https://i.sstatic.net/rERFYB7k.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rERFYB7k.png" alt="enter image description here" /></a></p> <p>And the second kind of image I want to generate is to take the fractal squares and convert them to polar:</p> <p>7-bit ABBABAAB fractal polar:</p> <p><a href="https://i.sstatic.net/JplX4ii2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JplX4ii2.png" alt="enter image description here" /></a></p> <hr /> <p>I wrote code to generate these images. But it is inefficient. It is easy to write working code, but it is hard to make it run fast.</p> <h2><em><strong>Code</strong></em></h2> <pre><code>import cv2 import numpy as np from enum import Enum from PIL import Image from typing import Generator, List, Literal def validate(n: int) -&gt; None: if not (n and isinstance(n, int)): raise ValueError(f&quot;Argument {n} is not a valid bit width&quot;) def abba_codes(n: int) -&gt; np.ndarray: validate(n) power = 1 &lt;&lt; (n - 1) abba = np.array([0, 1], dtype=np.uint8) powers = 1 &lt;&lt; np.arange(n - 2, -1, -1) return ( np.concatenate( [ np.zeros(power, dtype=np.uint8), np.ones(power, dtype=np.uint8), *( np.tile( (abba := np.concatenate([abba, abba ^ 1])), (power, 1) ).T.flatten() for power in powers ), ] ) .reshape((n, 1 &lt;&lt; n)) .T ) def abba_squares(n: int) -&gt; np.ndarray: arr = abba_codes(n).T[::-1] powers = 1 &lt;&lt; np.arange(n + 1) result = np.zeros((powers[-1],) * 2, dtype=np.uint8) for i, (a, b) in enumerate(zip(powers, powers[1:])): result[a:b] = arr[i] return (result.T[:, ::-1] ^ 1) * 255 def abba_square_img(n: int, length: int = 1024) -&gt; np.ndarray: return Image.fromarray(abba_squares(n)).resize((length, length), resample=0) def abba_polar(n: int, length: int = 1024) -&gt; np.ndarray: square = np.array(abba_square_img(n, length)) return cv2.warpPolar(square, (length, length), [half := length &gt;&gt; 1] * 2, half, 16) </code></pre> <h2>Performance</h2> <pre><code>In [2]: %timeit abba_square_img(10, 1024) 10.3 ms ยฑ 715 ฮผs per loop (mean ยฑ std. dev. of 7 runs, 100 loops each) In [3]: %timeit abba_polar(10, 1024) 27.2 ms ยฑ 5.41 ms per loop (mean ยฑ std. dev. 
of 7 runs, 10 loops each) </code></pre> <hr /> <p>You see, I had to mix <code>PIL</code> and <code>cv2</code>, because only <code>cv2</code> offers polar coordinate transformation, and only <code>PIL</code> lets me resize without any interpolation at all.</p> <p>The following don't give intended result:</p> <pre><code>In [30]: cv2.imwrite('D:/garbage.png', cv2.resize(abba_squares(4), (1024, 1024), cv2.INTER_NEAREST)) Out[30]: True In [31]: cv2.imwrite('D:/garbage1.png', cv2.resize(abba_squares(4), (1024, 1024), cv2.INTER_NEAREST_EXACT)) Out[31]: True In [32]: cv2.imwrite('D:/garbage2.png', cv2.resize(abba_squares(4), (1024, 1024), cv2.INTER_AREA)) Out[32]: True </code></pre> <p><a href="https://i.sstatic.net/wiirISUY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wiirISUY.png" alt="enter image description here" /></a></p> <p>No matter what interpolation mode I try, it always gives a blurry image. I want the edges to be sharp and everything staying rectangles. So I had to use <code>PIL</code> to resize the images. Of course I can use a two level nested for loop to broadcast the pixels with <code>result[i*n:i*n+n, j*n:j*n+n] = img[i, j]</code> but that would be slow.</p> <p>The edges of the polar images are jagged, not smooth, I want the edges to be smooth, and the corners are black, I want the corners white. And if I pass <code>n</code> larger than 14 to <code>bool_squares</code> it just hangs.</p> <p>What are better ways to do these?</p> <hr /> <p>I have improved the code a little bit, but I still haven't figured out an efficient way to generate polar images directly. This question still deals with two kinds of images, that is because two things, 1, I still don't know how to efficiently fill rectangles in an array using a completely vectorized way, and 2, I still need to generate the fractal squares first in order to generate the polar image.</p> <p>But I have made big progress on the generation of the fractal squares, so I found the consecutive runs of 1s and created many rectangles and used those rectangles to fill the array:</p> <pre><code>def find_transitions(line: np.ndarray) -&gt; np.ndarray: if not line.size: return np.array([]) return np.concatenate( [ np.array([0] if line[0] else [], dtype=np.uint64), ((line[1:] != line[:-1]).nonzero()[0] + 1).astype(np.uint64), np.array([line.size] if line[-1] else [], dtype=np.uint64), ] ) def fractal_squares_helper(arr: np.ndarray, n: int, scale: int) -&gt; List[np.ndarray]: x_starts = [] x_ends = [] y_starts = [] y_ends = [] widths = np.concatenate([[0], ((1 &lt;&lt; np.arange(n - 1, -1, -1)) * scale).cumsum()]) for i, (start, end) in enumerate(zip(widths, widths[1:])): line = find_transitions(arr[:, i]) * scale half = line.size &gt;&gt; 1 y_starts.append(line[::2]) y_ends.append(line[1::2]) x_starts.append(np.tile([start], half)) x_ends.append(np.tile([end], half)) return [np.concatenate(i) for i in (x_starts, x_ends, y_starts, y_ends)] def fill_rectangles( length: int, x_starts: np.ndarray, x_ends: np.ndarray, y_starts: np.ndarray, y_ends: np.ndarray, ) -&gt; np.ndarray: img = np.full((length, length), 255, dtype=np.uint8) x = np.arange(length) y = x[:, None] mask = ( (y &gt;= y_starts[:, None, None]) &amp; (y &lt; y_ends[:, None, None]) &amp; (x &gt;= x_starts[:, None, None]) &amp; (x &lt; x_ends[:, None, None]) ) img[mask.any(axis=0)] = 0 return img def fill_rectangles1( length: int, x_starts: np.ndarray, x_ends: np.ndarray, y_starts: np.ndarray, y_ends: np.ndarray, ) -&gt; np.ndarray: img = np.full((length, length), 255, 
dtype=np.uint8) for x0, x1, y0, y1 in zip(x_starts, x_ends, y_starts, y_ends): img[y0:y1, x0:x1] = 0 return img def fractal_squares(n: int, length: int, func: bool) -&gt; np.ndarray: arr = abba_codes(n) scale, mod = divmod(length, total := 1 &lt;&lt; n) if mod: raise ValueError( f&quot;argument length: {length} is not a positive multiple of {total} n-bit codes&quot; ) return (fill_rectangles, fill_rectangles1)[func]( length, *fractal_squares_helper(arr, n, scale) ) </code></pre> <pre><code>In [4]: %timeit fractal_squares(10, 1024, 0) 590 ms ยฑ 19.9 ms per loop (mean ยฑ std. dev. of 7 runs, 1 loop each) In [5]: %timeit fractal_squares(10, 1024, 1) 1.65 ms ยฑ 56.6 ฮผs per loop (mean ยฑ std. dev. of 7 runs, 1,000 loops each) </code></pre> <p>The first method I used to fill the rectangles is completely vectorized, but it is very slow and memory consuming, it is the only way I could make it work.</p> <p>The for loop based method is much faster, but it isn't completely vectorized, I want to vectorize it completely, to do away with the loop.</p> <p>Now the polar images can be generated similarly, instead of filling Cartesian rectangles, we fill &quot;polar rectangles&quot;, I have calculated the coordinates, but I cannot fill the rectangles:</p> <pre><code>def rectangularize(y: np.ndarray, x: np.ndarray) -&gt; np.ndarray: l = y.shape[0] h = l // 2 return np.stack([np.tile(y, (2, 1)).T.flatten(), np.tile(x, l)]).T.reshape( (h, 4, 2) ) def radial_rectangles(n: int, length: int) -&gt; np.ndarray: arr = abba_codes(n) radii = np.concatenate([[0], (length &gt;&gt; np.arange(1, n + 1)).cumsum()]) rectangles = [] total = 1 &lt;&lt; n tau = 2 * np.pi for i, (start, end) in enumerate(zip(radii, radii[1:])): line = find_transitions(arr[:, i]) / total * tau rectangles.append(rectangularize(line, [start, end])) return np.concatenate(rectangles) </code></pre> <pre><code>In [6]: radial_rectangles(4, 1024) Out[6]: array([[[3.14159265e+00, 0.00000000e+00], [3.14159265e+00, 5.12000000e+02], [6.28318531e+00, 0.00000000e+00], [6.28318531e+00, 5.12000000e+02]], [[1.57079633e+00, 5.12000000e+02], [1.57079633e+00, 7.68000000e+02], [4.71238898e+00, 5.12000000e+02], [4.71238898e+00, 7.68000000e+02]], [[7.85398163e-01, 7.68000000e+02], [7.85398163e-01, 8.96000000e+02], [2.35619449e+00, 7.68000000e+02], [2.35619449e+00, 8.96000000e+02]], [[3.14159265e+00, 7.68000000e+02], [3.14159265e+00, 8.96000000e+02], [3.92699082e+00, 7.68000000e+02], [3.92699082e+00, 8.96000000e+02]], [[5.49778714e+00, 7.68000000e+02], [5.49778714e+00, 8.96000000e+02], [6.28318531e+00, 7.68000000e+02], [6.28318531e+00, 8.96000000e+02]], [[3.92699082e-01, 8.96000000e+02], [3.92699082e-01, 9.60000000e+02], [1.17809725e+00, 8.96000000e+02], [1.17809725e+00, 9.60000000e+02]], [[1.57079633e+00, 8.96000000e+02], [1.57079633e+00, 9.60000000e+02], [1.96349541e+00, 8.96000000e+02], [1.96349541e+00, 9.60000000e+02]], [[2.74889357e+00, 8.96000000e+02], [2.74889357e+00, 9.60000000e+02], [3.53429174e+00, 8.96000000e+02], [3.53429174e+00, 9.60000000e+02]], [[4.31968990e+00, 8.96000000e+02], [4.31968990e+00, 9.60000000e+02], [4.71238898e+00, 8.96000000e+02], [4.71238898e+00, 9.60000000e+02]], [[5.10508806e+00, 8.96000000e+02], [5.10508806e+00, 9.60000000e+02], [5.89048623e+00, 8.96000000e+02], [5.89048623e+00, 9.60000000e+02]]]) </code></pre> <p>The output is of the shape <code>(n, 4, 2)</code>, each <code>(4, 2)</code> shape is a &quot;radial rectangle&quot;, the first element of the innermost pairs is the angle from x-axis measured in radians, the second is the 
radius. The &quot;radial rectangles&quot; are in the format <code>[(a0, r0), (a0, r1), (a1, r0), (a1, r1)]</code></p> <p>What is a more efficient way to fill rectangles and how can I fill &quot;radial rectangles&quot;?</p>
<python><numpy><image><opencv><image-processing>
2025-03-19 17:43:36
1
3,930
ฮžฮญฮฝฮท ฮ“ฮฎฮนฮฝฮฟฯ‚
79,520,972
5,215,538
FastAPI hosted on Lambda does not serve static content
<p>I have a FastAPI app hosted on AWS Lambda + Api Gateway using Mangum. However it seems not to be able to serve static content returning 404 error.</p> <p>Here is how static directory is mounted</p> <pre class="lang-py prettyprint-override"><code>application = FastAPI(title=&quot;MyApp&quot;) static_directory = Path(__file__).parent / &quot;static&quot; application.mount( f&quot;{settings.formatted_api_stage}/static&quot;, StaticFiles(directory=static_directory, check_dir=True), name=&quot;static&quot;, ) </code></pre> <p><code>settings.formatted_api_stage</code> corresponds to API Gateway stage like <code>/dev</code> in <code>https://a123bcdefg.execute-api.eu-north-1.amazonaws.com/dev/</code></p> <p>Mangum handler</p> <pre class="lang-py prettyprint-override"><code>handler = Mangum( application, lifespan=&quot;off&quot;, api_gateway_base_path=settings.formatted_api_stage ) </code></pre> <p>AWS infra is managed using CDK</p> <pre class="lang-py prettyprint-override"><code>fastapi_lambda = PythonFunction( self, &quot;MyLambda&quot;, entry=&quot;../app&quot;, index=&quot;main.py&quot;, handler=&quot;handler&quot;, runtime=_lambda.Runtime.PYTHON_3_13, role=lambda_role, memory_size=512, timeout=Duration.seconds(5), bundling=BundlingOptions(asset_excludes=[&quot;*.pyc&quot;, &quot;__pycache__/&quot;]), ) api = apigateway.LambdaRestApi( self, &quot;MyAPI&quot;, handler=fastapi_lambda, proxy=True, deploy_options=apigateway.StageOptions( stage_name=api_stage, ), ) </code></pre> <p>Then in the templates I reference static content like</p> <pre><code>&lt;link rel=&quot;stylesheet&quot; href=&quot;{{ url_for('static', path='/css/style.css') }}&quot; /&gt; </code></pre> <p>But it results in a 404 error.</p> <p>I can verify that static folder is packaged ok and is present in the same folder as <code>main.py</code> in the lambda bundle.</p> <p>The app works as expected when run locally.</p>
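<p>One thing worth checking, sketched below as an assumption rather than a verified fix: <code>Mangum(..., api_gateway_base_path=...)</code> already strips the stage prefix from incoming request paths, so the static files should be mounted at <code>/static</code> (with a leading slash) rather than at the stage-prefixed path; otherwise a request for <code>/dev/static/css/style.css</code> reaches the app as <code>/static/css/style.css</code> and matches no route. Passing the stage as <code>root_path</code> is intended to let <code>url_for()</code> keep generating stage-prefixed URLs in templates. The <code>api_stage</code> value below is a hypothetical stand-in for <code>settings.formatted_api_stage</code>.</p> <pre><code>from pathlib import Path
from fastapi import FastAPI
from fastapi.staticfiles import StaticFiles
from mangum import Mangum

api_stage = '/dev'   # hypothetical stand-in for settings.formatted_api_stage

application = FastAPI(title='MyApp', root_path=api_stage)
static_directory = Path(__file__).parent / 'static'

# Mount at '/static': Mangum strips the stage prefix before the app sees the path
application.mount(
    '/static',
    StaticFiles(directory=static_directory, check_dir=True),
    name='static',
)

handler = Mangum(application, lifespan='off', api_gateway_base_path=api_stage)
</code></pre>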
<python><amazon-web-services><aws-lambda><fastapi><mangum>
2025-03-19 17:32:56
0
4,109
Sergii Gryshkevych
79,520,909
2,410,558
What's necessary for the `keras` tag to populate on TensorBoard?
<p>I'm trying to view the conceptual graph of a fairly complex TensorFlow model in TensorBoard. However, the option is greyed out. (I have no issue viewing the op graph).</p> <p>My understanding is that in TensorBoard, the <code>keras</code> tag is necessary to view the conceptual graph. However, there are no tags at all when I look at the &quot;Graph&quot; tab on TensorBoard.</p> <p>All I've written for the callback is the below block. There is no additional custom TensorBoard code:</p> <pre><code>tensorboard_callback = tf.keras.callbacks.TensorBoard( log_dir='tensorboard_logs', write_graph=True, histogram_freq=1, ) </code></pre> <p>The model is multivariate with a custom loss function, which I double checked inherits from <code>tf.keras.losses.Loss</code>. From what I can tell (this isn't a repo I wrote), all other custom code also inherits from the respective Keras class.</p> <p>Additionally, the code base uses TensorFlow 2.13.0, Keras 2.13.1, and TensorBoard 2.13.0.</p> <p><strong>How can I ensure that the <code>keras</code> tag shows up on TensorBoard so I can view the model's conceptual graph?</strong></p>
<python><tensorflow><keras><tensorboard>
2025-03-19 17:03:33
1
674
Brandon Sherman
79,520,845
2,130,515
How can I get the bot response using the Messaging API?
<p>This is my first attempt to build a chatbot using Botpress. To start, my bot is so simple and consists of: start node and autonomous node (&quot;country name extraction&quot;). The latter node receive a text and return the extracted countries.</p> <p>Everything is working well on the emulator.</p> <p>I setup the messaging API webhook and sent the request via postman and python script. I got an answer but it does not contain the response of the bot.</p> <p>This is the request:</p> <pre><code>{ &quot;userId&quot;: &quot;user_01JPQBFY1FC5MP0DJ5F8BJJ7S3&quot;, &quot;messageId&quot;: &quot;remotemessageId&quot;, &quot;conversationId&quot;: &quot;conv_01JPQBFY8SKZ00BND1ZMNGAJH3&quot;, &quot;type&quot;: &quot;text&quot;, &quot;text&quot;: &quot;I travel to USA and Brazil&quot; } </code></pre> <p>I got this answer:</p> <pre><code>{ &quot;message&quot;: { &quot;id&quot;: &quot;8ce18401-8e31-429b-b302-bad79c4f6293&quot;, &quot;createdAt&quot;: &quot;2025-03-19T15:42:25.889Z&quot;, &quot;conversationId&quot;: &quot;conv_01JPQDZG4XGVZWX6H6TG6H0FAX&quot;, &quot;payload&quot;: { &quot;text&quot;: &quot;I travel to USA and Brazil.&quot; }, &quot;tags&quot;: { &quot;plus/messaging:id&quot;: &quot;remotemessageId&quot;, &quot;id&quot;: &quot;remotemessageId&quot; }, &quot;userId&quot;: &quot;user_01JPQDZG6ZYE9QW4ZRS12MDSZ8&quot;, &quot;type&quot;: &quot;text&quot;, &quot;direction&quot;: &quot;incoming&quot; }, &quot;user&quot;: { &quot;id&quot;: &quot;user_01JPQDZG6ZYE9QW4ZRS12MDSZ8&quot;, &quot;createdAt&quot;: &quot;2025-03-19T14:47:34.367Z&quot;, &quot;updatedAt&quot;: &quot;2025-03-19T14:47:34.367Z&quot;, &quot;tags&quot;: { &quot;plus/messaging:id&quot;: &quot;user_01JPQBFY1FC5MP0DJ5F8BJJ7S3&quot;, &quot;id&quot;: &quot;user_01JPQBFY1FC5MP0DJ5F8BJJ7S3&quot; } }, &quot;conversation&quot;: { &quot;id&quot;: &quot;conv_01JPQDZG4XGVZWX6H6TG6H0FAX&quot;, &quot;createdAt&quot;: &quot;2025-03-19T14:47:34.301Z&quot;, &quot;updatedAt&quot;: &quot;2025-03-19T15:41:20.037Z&quot;, &quot;channel&quot;: &quot;channel&quot;, &quot;integration&quot;: &quot;plus/messaging&quot;, &quot;tags&quot;: { &quot;plus/messaging:id&quot;: &quot;conv_01JPQBFY8SKZ00BND1ZMNGAJH3&quot;, &quot;id&quot;: &quot;conv_01JPQBFY8SKZ00BND1ZMNGAJH3&quot; } } } </code></pre> <p>What I understand is the response is a confirmation that the request is sent to the bot.</p> <p>What I need: I need to get the answer of the bot to the request.</p>
<python><bots><botpress>
2025-03-19 16:38:34
0
1,790
LearnToGrow
79,520,843
1,593,077
Am I using a "backported module"?
<p>I'm trying to use the Python <a href="https://github.com/netromdk/vermin" rel="nofollow noreferrer">vermin</a> utility to determine the minimum version needed to run a script of mine. Running it, I get:</p> <pre class="lang-none prettyprint-override"><code>$ vermin --no-parse-comments foo.py Tips: - You're using potentially backported modules: argparse If so, try using the following for better results: --backport argparse (disable using: --no-tips) Minimum required versions: 2.7, 3.2 </code></pre> <p>First, I'm not even sure what backporting means. That is, if the authors of argparse have versions of it which work with different versions of Python, does that count as back-porting? Isn't it just the supported version range? Or - does it have to be someone else who has taken some kind of &quot;back-porting&quot; action?</p> <p>Second, suppose we've agreed on what that means. How can I tell whether I'm &quot;using a back-ported module&quot;? Is it the way I'm using it, or is it that I take it from some location that indicates &quot;back-porting&quot;?</p> <p>Additional info (ask for anything else that might help):</p> <pre><code>$ python3 --version Python 3.6.5 $ pip3 show argparse Name: argparse Version: 1.4.0 Summary: Python command-line parsing library Home-page: https://github.com/ThomasWaldmann/argparse/ Author: Thomas Waldmann Author-email: tw@waldmann-edv.de License: Python Software Foundation License Location: /home/joeuser/.local/lib/python3.6/site-packages Requires: Required-by: ptop </code></pre>
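<p>A quick way to answer the second part for your own environment, shown as a small check rather than a definitive rule: print where <code>import argparse</code> actually resolves to. A path under <code>site-packages</code> means the separately installed PyPI distribution (the 1.4.0 backport of the stdlib module) is what gets picked up; a path inside the interpreter's standard library means the built-in module is used and the PyPI copy is just sitting there.</p> <pre><code>import argparse

# Location of the module that 'import argparse' actually resolved to
print(argparse.__file__)

# Heuristic: a site-packages path means the PyPI backport, otherwise the stdlib module
print('backport' if 'site-packages' in argparse.__file__ else 'standard library')
</code></pre> <p>If it turns out only the stdlib copy is used, <code>pip3 uninstall argparse</code> removes the ambiguity vermin is warning about; otherwise its suggested <code>--backport argparse</code> flag is the way to tell it so.</p>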
<python><pip><python-module><backport>
2025-03-19 16:37:24
1
137,004
einpoklum
79,520,697
7,687,981
Dynamic dependency install with pyproject.toml
<p>Is there a way to dynamically check the version of a package installed system-wide and set that as a package dependency in the pyproject.toml? Specifically, I need to check if a person already has GDAL installed system-wide and, if they do, set the Python gdal version to that. If I were manually installing the gdal python package, I could do something like below; I just don't know if it's possible to set it on the fly/dynamically in the toml.</p> <pre><code>pip install GDAL==`gdal-config --version` </code></pre>
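<p>There is no hook for this in the static <code>[project]</code> table itself, but one approach (a sketch, and only exercised when installing from the source tree or an sdist, not from a prebuilt wheel) is to declare <code>dynamic = [&quot;dependencies&quot;]</code> in pyproject.toml and let a small <code>setup.py</code> compute the pin at build time:</p> <pre><code># setup.py -- kept alongside pyproject.toml; setuptools still executes it at build time
import subprocess
from setuptools import setup

def gdal_requirement():
    try:
        version = subprocess.check_output(
            ['gdal-config', '--version'], text=True
        ).strip()
        return ['GDAL==' + version]       # pin to the system GDAL
    except (OSError, subprocess.CalledProcessError):
        return ['GDAL']                   # no system GDAL found: leave it unpinned

setup(install_requires=gdal_requirement())
</code></pre>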
<python><gdal><toml>
2025-03-19 15:44:32
1
815
andrewr
79,520,460
2,854,673
Python Image AttributeError: 'ImageDraw' object has no attribute '__array_interface__'
<p>Getting this issue while saving and writing text over image using python image library. I am trying to write a text over a png image using Pillow imaging library, however after trying previous answers in stack overflow, still face this issue.</p> <pre><code>from pickle import FALSE import sys #keep sys import and insert as first 2 lines import os from sys import exit import numpy as np from PIL import ImageFont, ImageDraw, Image fontuse = &quot;D:/Ubuntusharefolder/CustomFonts/EnglishFonts/NotoSans-Medium.ttf&quot; font_color_bgra = (0,255,0,0) font = ImageFont.truetype(fontuse, 32 * int(2), layout_engine=ImageFont.LAYOUT_RAQM) filename = 'order_image_1.png' input_subfolder_path= 'D:/testinput' output_folder_final_path = 'D:/testinput/updtxtimg' src_img_base = np.full((1920, 1080, 3),255, dtype = np.uint8) #(1920, 1080, 3) print('src_base imguse gen size: ',src_img_base.shape) print ('input img file path:',input_subfolder_path + '/' + filename) src_img_use = np.array(Image.open( input_subfolder_path + '/' + filename ) ) #(511, 898, 4) print('src_imguse gen size: ', src_img_use.shape) textpos = (520, 210) textwrite = 'Testname' img_pil_4dtxt = ImageDraw.Draw(Image.fromarray(src_img_base)) #img_pil_4dtxt = ImageDraw.Draw(Image.fromarray(src_img_use)) img_pil_4dtxt.text(textpos, textwrite, font = font, fill = font_color_bgra) print('saved img in path:',output_folder_final_path + '/' + 'upd' + '_' + filename) Image.fromarray(img_pil_4dtxt).save(output_folder_final_path + '/' + 'upd' + '_' + filename) print('saved_img done') </code></pre> <p>command line and error obtained:</p> <pre><code>&gt; &amp; C:/ProgramData/Anaconda3/python.exe ./code/txtoverimgv1.py src_base imguse gen size: (1920, 1080, 3) input img file path: D:/testinput/order_image_1.png src_imguse gen size: (1080, 1920, 4) saved img in path: D:/testinput/updtxtimg/upd_order_image_1.png Traceback (most recent call last): File &quot;./code/txtoverimgv1.py&quot;, line 29, in &lt;module&gt; Image.fromarray(img_pil_4dtxt).save(output_folder_final_path + '/' + 'upd' + '_' + filename) File &quot;C:\ProgramData\Anaconda3\lib\site-packages\PIL\Image.py&quot;, line 2762, in fromarray arr = obj.__array_interface__ AttributeError: 'ImageDraw' object has no attribute '__array_interface__' </code></pre>
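<p>The traceback points at the cause: <code>Image.fromarray()</code> is being handed the <code>ImageDraw</code> object rather than an image. <code>ImageDraw.Draw()</code> draws in place on the <code>Image</code> it was created from, so the fix is to keep a reference to that image and save it directly. A trimmed sketch (font handling omitted, since the TrueType path is machine specific):</p> <pre><code>import numpy as np
from PIL import Image, ImageDraw

src_img_base = np.full((1920, 1080, 3), 255, dtype=np.uint8)

base_img = Image.fromarray(src_img_base)      # keep a handle on the Image itself
draw = ImageDraw.Draw(base_img)               # draw writes into base_img in place
draw.text((520, 210), 'Testname', fill=(0, 255, 0))

base_img.save('upd_order_image_1.png')        # save the Image, not the ImageDraw object
</code></pre>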
<python><numpy><image><python-imaging-library>
2025-03-19 14:15:50
1
334
UserM
79,520,442
432,691
How do I detect a database timeout in python?
<p>I have some code that executes database queries, like so:</p> <pre><code>self.db_cursor = self.db_conn.cursor(buffered=False) self.db_cursor.execute(query) </code></pre> <p>Now I want to add a timeout, so that long queries are killed. I can do this in MYSQL like this:</p> <pre><code>self.db_conn.reset_session(session_variables={'max_execution_time': 10}) </code></pre> <p>(I deliberately set the timeout to be crazy short, for testing.)</p> <p>How can I tell if a query timed out? I want to be able to report back to the user. There's no exception thrown, no warnings on the cursor, I can't find anything to check against.</p>
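<p>Two things seem worth noting, sketched below under the assumption that this is mysql-connector-python talking to MySQL 5.7 or later: the server kills an over-long SELECT with error 3024 (&quot;maximum statement execution time exceeded&quot;), and with an unbuffered cursor that error may only surface when the rows are fetched, not on <code>execute()</code> itself, which would explain seeing no exception. Wrapping both calls and checking <code>errno</code> lets you report the timeout to the user:</p> <pre><code>import mysql.connector

ER_QUERY_TIMEOUT = 3024   # server error: 'maximum statement execution time exceeded'

def run_with_timeout(conn, query, timeout_ms=10):
    conn.reset_session(session_variables={'max_execution_time': timeout_ms})
    cursor = conn.cursor(buffered=False)
    try:
        cursor.execute(query)
        return cursor.fetchall(), False        # (rows, timed_out)
    except mysql.connector.Error as err:
        if err.errno == ER_QUERY_TIMEOUT:
            return None, True                  # tell the caller the query timed out
        raise
</code></pre>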
<python><mysql><timeout>
2025-03-19 14:07:03
1
340
pecks
79,520,380
4,634,965
Coloring Python VTK PolyData by additional attribute values in the dataset
<p>Utilizing the <a href="https://pypi.org/project/vtk/" rel="nofollow noreferrer">vtk python library</a> I am trying to color vtk polydata by a defined attribute (atype). So far I did not succeed. The renderer does not color by the specified attribute (atype) but instead by other date specified (position_and_radii).</p> <p>Checking with <a href="https://www.paraview.org/" rel="nofollow noreferrer">paraview</a>, all the data is around, and in paraview I can easily color by the requested attribute.</p> <p>There is something I miss and I cannot figure out what ...</p> <p>This is a simple code example.</p> <pre class="lang-py prettyprint-override"><code># libraries import vtk import pandas as pd # generate data df_cell = pd.DataFrame( [[1,1,1,1,0.1,'abc'],[2,2,2,2,0.2,'def'],[3,3,3,3,0.3,'def'],[4,4,4,4,0.4,'abc']], columns=['ID','position_x','position_y','position_z','radius','atype'], ) # generate VTK instances to fill for positions and radii vp_points = vtk.vtkPoints() vfa_radii = vtk.vtkFloatArray() vfa_radii.SetName('radius') # fill VTK instance with positions and radii for i in df_cell.index: vp_points.InsertNextPoint( df_cell.loc[i, 'position_x'], df_cell.loc[i, 'position_y'], df_cell.loc[i, 'position_z'] ) vfa_radii.InsertNextValue(df_cell.loc[i, 'radius']) # generate data instances vfa_data = vtk.vtkFloatArray() vfa_data.SetNumberOfComponents(2) vfa_data.SetNumberOfTuples(df_cell.shape[0]) vfa_data.CopyComponent(0, vfa_radii, 0) vfa_data.SetName('positions_and_radii') # generate unstructred grid for data vug_data = vtk.vtkUnstructuredGrid() vug_data.SetPoints(vp_points) vug_data.GetPointData().AddArray(vfa_data) vug_data.GetPointData().SetActiveScalars('positions_and_radii') # fill this grid with given attributes voa_data = vtk.vtkStringArray() voa_data.SetName('atype') for i in df_cell.index: voa_data.InsertNextValue(df_cell.loc[i, 'atype']) vug_data.GetPointData().AddArray(voa_data) # generate sphere source vss_data = vtk.vtkSphereSource() vss_data.SetRadius(1.0) vss_data.SetPhiResolution(16) vss_data.SetThetaResolution(32) # generate Glyph to save vg_data = vtk.vtkGlyph3D() vg_data.SetInputData(vug_data) vg_data.SetSourceConnection(vss_data.GetOutputPort()) # define important preferences for VTK vg_data.ClampingOff() vg_data.SetScaleModeToScaleByScalar() vg_data.SetScaleFactor(1.0) vg_data.SetColorModeToColorByScalar() vg_data.Update() # build VTKLooktupTable (color scheme) vlt_color = vtk.vtkLookupTable() vlt_color.SetNumberOfTableValues(256) vlt_color.SetHueRange(9/12, 0/12) # rainbow heat map vlt_color.Build() # set up the mapper vpdm_data = vtk.vtkPolyDataMapper() vpdm_data.SetInputConnection(vg_data.GetOutputPort()) vpdm_data.ScalarVisibilityOn() vpdm_data.SetLookupTable(vlt_color) vpdm_data.ColorByArrayComponent('atype', 1) # set up the actor actor = vtk.vtkActor() actor.SetMapper(vpdm_data) # do renderer setup ren = vtk.vtkRenderer() renWin = vtk.vtkRenderWindow() renWin.AddRenderer(ren) renWin.SetSize(800, 600) iren = vtk.vtkRenderWindowInteractor() iren.SetRenderWindow(renWin) # add the actor to the renderer #ren.ResetCamera() ren.SetBackground(1/3, 1/3, 1/3) # gray ren.AddActor(actor) # render iren.Initialize() renWin.Render() iren.Start() # write VTK vw_writer = vtk.vtkXMLPolyDataWriter() vw_writer.SetFileName('vtkpoly.vtp') vw_writer.SetInputData(vg_data.GetOutput()) vw_writer.Write() </code></pre> <p>This is the output I get: <a href="https://i.sstatic.net/2GShhM6N.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2GShhM6N.png" alt="enter image 
description here" /></a></p> <p>And this is the output I would expect, displayed by paraview.</p> <p><a href="https://i.sstatic.net/DlkAS74E.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DlkAS74E.png" alt="enter image description here" /></a></p> <p>Additionaly, does anyone know by heart, how it is possible to add a <code>colorbar with legend</code> to the rendered image (as viewable in the paraview image)?</p> <p>Thank you, Elmar</p>
<python><vtk>
2025-03-19 13:43:55
1
693
bue
79,520,326
10,161,091
Shap text plot does not show properly in notebook
<p>I am running the demo code provided <a href="https://github.com/shap/shap/blob/master/notebooks/text_examples/sentiment_analysis/Emotion%20classification%20multiclass%20example.ipynb" rel="nofollow noreferrer">here</a>. But I do not get the same plots with the nice coloring and highlights.</p> <p>Here is how it looks like in the source notebook:</p> <p><a href="https://i.sstatic.net/bZZ6SJWU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bZZ6SJWU.png" alt="enter image description here" /></a></p> <p>Here is how mine looks like</p> <p><a href="https://i.sstatic.net/MBYL5Ozp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MBYL5Ozp.png" alt="enter image description here" /></a></p> <p>I am running jupyter 7.3.3. Any thoughts on how to fix this?</p>
<python><plot><jupyter><shap>
2025-03-19 13:20:34
1
2,750
SaTa
79,520,314
4,681,187
Why does tqdm mess up cProfile output?
<p>When profiling some Python code, I've been frustrated by functions like <code>threading.py:637(wait)</code> appearing high in cProfile output instead of the hot functions I want to see. After some tests I realized that the problem is that I've been using tqdm to monitor the overall progress of the program. Here is a minimal example:</p> <p><code>test.py</code>:</p> <pre><code>from tqdm.auto import tqdm def test(): for i in tqdm(range(100000)): for j in range(10000): pass test() </code></pre> <p>In the shell:</p> <pre><code>% python3 -m cProfile test.py | head -n 20 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 100000/100000 [00:24&lt;00:00, 4027.92it/s] 358918 function calls (357823 primitive calls) in 25.003 seconds Ordered by: cumulative time ncalls tottime percall cumtime percall filename:lineno(function) 3/2 0.000 0.000 20.019 10.009 threading.py:637(wait) 3/2 0.000 0.000 20.019 10.009 threading.py:323(wait) 18/12 19.885 1.105 20.018 1.668 {method 'acquire' of '_thread.lock' objects} 79/1 0.001 0.000 4.811 4.811 {built-in method builtins.exec} 2/1 0.000 0.000 4.811 4.811 test.py:1(&lt;module&gt;) 2/1 4.785 2.392 4.811 4.811 test.py:3(test) 104/5 0.001 0.000 0.172 0.034 &lt;frozen importlib._bootstrap&gt;:1349(_find_and_load) 104/5 0.001 0.000 0.171 0.034 &lt;frozen importlib._bootstrap&gt;:1304(_find_and_load_unlocked) 103/6 0.001 0.000 0.170 0.028 &lt;frozen importlib._bootstrap&gt;:911(_load_unlocked) 77/5 0.001 0.000 0.168 0.034 &lt;frozen importlib._bootstrap_external&gt;:988(exec_module) 256/12 0.001 0.000 0.168 0.014 &lt;frozen importlib._bootstrap&gt;:480(_call_with_frames_removed) 100001 0.054 0.000 0.157 0.000 std.py:1161(__iter__) 6 0.000 0.000 0.153 0.025 __init__.py:1(&lt;module&gt;) 238 0.003 0.000 0.100 0.000 std.py:1199(update) 1 0.000 0.000 0.097 0.097 auto.py:1(&lt;module&gt;) </code></pre> <p>As can be seen, some threading-related functions are appearing high, and my main function <code>test</code> didn't get credited for all the time it has taken.</p> <p>My questions are:</p> <ol> <li>Why does this happen? I've read that <a href="https://github.com/cython/cython/issues/5470" rel="nofollow noreferrer">Python 3.12 has changed how profiling works</a>, so is this problem new in Python 3.12?</li> <li>Is there a workaround that still allows me to visually see the progress while profiling?</li> </ol>
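<p>One workaround worth trying, offered as an assumption about the cause rather than a confirmed explanation of the 3.12 behaviour: tqdm starts a background monitor thread, and the <code>threading.py:637(wait)</code> and lock-acquire rows in the profile look like bookkeeping around that thread rather than your own code. Disabling the monitor before any bars are created keeps the visible progress display while removing those rows:</p> <pre><code>from tqdm.auto import tqdm

# Class-level setting: disable tqdm's monitor thread before creating any bars,
# so the profiler output is not dominated by thread/lock waits.
tqdm.monitor_interval = 0

def test():
    for i in tqdm(range(100000)):
        for j in range(10000):
            pass

test()
</code></pre> <p>Run it with <code>python3 -m cProfile test.py</code> as before; the progress bar still renders on stderr.</p>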
<python><python-multithreading><tqdm><cprofile>
2025-03-19 13:14:49
1
1,565
Imperishable Night
79,520,271
11,062,613
Can you wrap NumPy functions in Numba-jitted code using llvmlite?
<p>This is a follow up question to: <a href="https://stackoverflow.com/questions/79514906/how-to-wrap-numpy-functions-in-numba-jitted-code-with-persistent-disk-caching">How to wrap NumPy functions in Numba-jitted code with persistent disk caching?</a></p> <p>Background: In general Numba's implementations of Numpy functions are pretty efficient. In some cases like numpy.sort() this is not the case. I would like to use NumPyโ€™s native sorting functionality (i.e., numpy.sort) within a Numba pipeline. The goal is to be able to make stable, cached calls to the underlying C API function for sorting across multiple Python sessions.</p> <p>The C API for sorting (equivalent to ndarray.sort) is defined as:</p> <pre><code>PyObject *PyArray_Sort(PyArrayObject *self, int axis, NPY_SORTKIND kind) </code></pre> <p><a href="https://numpy.org/doc/stable/reference/c-api/array.html#item-selection-and-manipulation" rel="nofollow noreferrer">https://numpy.org/doc/stable/reference/c-api/array.html#item-selection-and-manipulation</a></p> <p>One potential workaround I've been thinking of is to use llvmlite.binding.load_library_permanently to stable load the NumPy extension &quot;multiarray_umath&quot; so that the C-function pointer to PyArray_Sort remains stable.</p> <p>There are 2 potential problems which may or may not be solvable:</p> <ol> <li>PyArray_Sort must be exposed as a public symbol. This does not seem to be the case.</li> <li>PyArray_Sort uses types which might not be available (PyObject, PyArrayObject) when defining the C-function signature.</li> </ol> <p>Is it possible to access NumPyโ€™s native sorting function from external jit-compiled code? I have limited experience with C++ internals and the NumPy build system. Any insights, workarounds, or recommendations would be greatly appreciated.</p> <p>Thank you for your time!</p> <p>Here is an incomplete attempt which is not working:</p> <pre><code>import numpy as np from numpy._core import _multiarray_umath as multiarray_umath from numba import njit from numba.core import types, typing from llvmlite.binding import load_library_permanently, address_of_symbol # Stable load numpy extension np_library_path = multiarray_umath.__file__ load_library_permanently(np_library_path) # Check if symbol is public and address can be found np_fn_name = 'PyArray_Sort' # The 1st issue: 'PyArray_Sort' is not a public symbol func_addr = address_of_symbol(np_fn_name) if func_addr is None: raise RuntimeError(f&quot;Could not find symbol {np_fn_name}&quot;) print(f&quot;Address of {np_fn_name}:&quot;, hex(func_addr)) # &gt;&gt;&gt; This will raise a RuntimeError because the symbol is not publicly available # The 2nd issue: Are there matching types to define the signature? # &gt;&gt;&gt; What is the return type for PyObject ??? return_type = types.pyobject # &gt;&gt;&gt; What is the argument type for PyArrayObject ??? arg_types = (types.pyobject, types.int64, types.int64) np_fn_signature = typing.signature(return_type, *arg_types) pyarray_sort = types.ExternalFunction(np_fn_name, np_fn_signature) @njit(cache=True) def wrapped_numpy_sort(arr, axis, kind): return pyarray_sort(arr, axis, kind) </code></pre>
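<p>Short of resolving the private C symbol, one much simpler route (a sketch, and not equivalent in performance to a direct C call) is Numba's <code>objmode</code> block, which drops back to the interpreter just for the sort while the surrounding code stays jitted. Whether this combines with <code>cache=True</code> depends on the Numba version, so caching is left out here:</p> <pre><code>import numpy as np
from numba import njit, objmode

@njit
def wrapped_numpy_sort(arr):
    # Call NumPy's own (stable) sort through an object-mode block;
    # the output type must be declared explicitly.
    with objmode(out='float64[:]'):
        out = np.sort(arr, kind='stable')
    return out

result = wrapped_numpy_sort(np.random.rand(1000))
</code></pre>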
<python><numpy><llvm><numba><llvmlite>
2025-03-19 12:58:30
0
423
Olibarer
79,520,098
7,245,066
Using pyarrow back end with custom dtype
<p>I have a custom dtype in Pandas as well as an extension array. I would like to use the <code>pyarrow</code> back end over the default <code>numpy</code> backend.</p> <p>See: <a href="https://www.practicaldatascience.org/notebooks/class_3/week_2/50_pandas_pyarrow.html#is-this-something-to-worry-about-now" rel="nofollow noreferrer">https://www.practicaldatascience.org/notebooks/class_3/week_2/50_pandas_pyarrow.html#is-this-something-to-worry-about-now</a> See: <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.convert_dtypes.html" rel="nofollow noreferrer">https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.convert_dtypes.html</a></p> <p>On my extension array, I did implement <code>__arrow_array__</code>.</p> <p>My question is what do I need to implement on the custom <code>ExtensionDtype</code> to get the <code>convert_dtypes</code> to work with the pyarrow back end?</p> <p>Thank you</p>
<python><pandas><pyarrow><dtype>
2025-03-19 11:47:47
0
403
JabberJabber
79,520,072
11,227,857
Azure Function (python) won't trigger from Azure IoT Hub event
<p>I have an IoT Hub and an Azure Function App written in Python. I want the Azure function to trigger on messages received by the hub.</p> <p>The IoT Hub is publicly accessible and I can successfully send messages to it. I have confirmed the JSON payloads are being received by using the Azure CLI.</p> <p>I've deployed the Azure function using VS Code (Azure extension) and the <code>function.json</code> looks like this:</p> <pre><code>{ &quot;scriptFile&quot;: &quot;__init__.py&quot;, &quot;bindings&quot;: [ { &quot;type&quot;: &quot;iotHubTrigger&quot;, &quot;name&quot;: &quot;event&quot;, &quot;direction&quot;: &quot;in&quot;, &quot;eventHubName&quot;: &quot;iothub-ehub-[redacted]-36471244-3e33dca112&quot;, &quot;connection&quot;: &quot;IoTHubConnectionString&quot;, &quot;cardinality&quot;: &quot;one&quot;, &quot;consumerGroup&quot;: &quot;mydevices&quot; } ] } </code></pre> <p>My python function (in <code>__init__.py</code>) starts with:</p> <pre><code>def main(event: func.IoTHubEvent) -&gt; None: ... </code></pre> <p>In the Function App on the Azure Portal I have gone to <strong>Settings &gt; Environmental Variables</strong> and set <code>IoTHubConnectionString</code> to <code>Endpoint=sb://ihsuprodlnres017dednamespace.servicebus.windows.net/;SharedAccessKeyName=iothubowner;SharedAccessKey=[redacted];EntityPath=iothub-ehub-[redacted]-36471244-3e33dca112</code></p> <p>I've been struggling for a while now and I can't get the function to trigger, the Function App logs show nothing so I have no idea what's wrong.</p> <p>How can I troubleshoot this?</p>
<python><azure><azure-functions><azure-iot-hub>
2025-03-19 11:32:02
1
530
gazm2k5
79,520,044
18,002,913
How to measure the perimeter and area of a region in a masked image using OpenCV?
<p>I am new in image processing and I am trying to improve myself by doing some projects and I have a problem about my project. I have an image dataset containing lakes with their corresponding binary mask images. I want to calculate the perimeter (boundary length) and area of the lake in each image using OpenCV.</p> <p>So far, I have tried Canny Edge Detection and <code>cv2.findContours()</code> to detect the lake boundaries to find lake's real area and real perimeter, but I am struggling to get only the lake's contour without including unwanted edges (such as image borders or noise).</p> <p>Here is my current code:</p> <pre><code>import cv2 import matplotlib.pyplot as plt import numpy as np # Load the image image = cv2.imread(&quot;water_body_3.jpg&quot;) # Apply Canny edge detection edged = cv2.Canny(image, 50, 100) # Find contours contours, _ = cv2.findContours(edged, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE) largest_contour = max(contours, key=cv2.contourArea) # Draw the detected contours cv2.drawContours(image, [largest_contour], -1, (0, 255, 0), 2) plt.imshow(image, cmap=&quot;gray&quot;) plt.title(&quot;Canny Edge Detection&quot;) plt.axis(&quot;off&quot;) plt.show() </code></pre> <p><strong>Problems I'm Facing:</strong></p> <ul> <li><p>The detected contours sometimes include the image border or noise instead of just the lake.</p> </li> <li><p>I want to calculate the perimeter and area of the lake, but I am not sure if I am selecting the correct contour.</p> </li> </ul> <p><strong>Expected Output:</strong></p> <ul> <li><p>Extract only the lake contour from the image.</p> </li> <li><p>Compute the perimeter and area .</p> </li> </ul> <p><strong>Question:</strong></p> <ul> <li><p>How can I ensure that I am only selecting the lake's contour and not unwanted edges?</p> </li> <li><p>What is the best way to calculate the lake's area and perimeter<br /> correctly?</p> </li> <li><p>Should I preprocess the image differently (e.g., thresholding,<br /> morphological operations)?</p> </li> </ul> <p><strong>Images:</strong></p> <p>code output image: <a href="https://i.sstatic.net/mLCvT5sD.png" rel="nofollow noreferrer">https://i.sstatic.net/mLCvT5sD.png</a></p> <p>mask image that I use in code (water_body_3.jpg): <a href="https://i.sstatic.net/psXSsJfg.jpg" rel="nofollow noreferrer">https://i.sstatic.net/psXSsJfg.jpg</a></p> <p>original image:<a href="https://i.sstatic.net/4v6f4wLj.jpg" rel="nofollow noreferrer">https://i.sstatic.net/4v6f4wLj.jpg</a></p>
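<p>Since a binary mask is already available, a sketch of the usual approach (pixel units only; converting to real-world units needs the ground resolution of the imagery): threshold the mask to clean up JPEG compression noise, take only external contours so holes and the image border never become candidates, then measure the largest one.</p> <pre><code>import cv2

# Load the mask as grayscale and force it to clean 0/255 values
mask = cv2.imread('water_body_3.jpg', cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)

# RETR_EXTERNAL returns only outer boundaries, never the image frame or inner noise
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
lake = max(contours, key=cv2.contourArea)

area_px = cv2.contourArea(lake)            # enclosed area in pixels
perimeter_px = cv2.arcLength(lake, True)   # closed boundary length in pixels

print(f'area: {area_px:.0f} px, perimeter: {perimeter_px:.0f} px')
</code></pre>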
<python><opencv><image-processing><contour>
2025-03-19 11:20:38
1
1,298
NewPartizal
79,519,937
12,730,925
Azure Batch Python SDK - 5 - 10% Chance to encounter "CreateTasksErrorException"
<p>I have a python service running that is starting azure batch pools and adds a single job + task on user requests. In ~90% of the time, everything works fine, but sometimes i get the following error message:</p> <pre><code>Traceback (most recent call last): [...] azure.batch.custom.custom_errors.CreateTasksErrorException: Task with id `sim_07936` failed due to client error - InvalidPropertyValue::{'additional_properties': {}, 'lang': 'en-US', 'value': 'The value provided for one of the properties in the request body is invalid.\nRequestId:5a36d9fd-f230-46ba-ac7d-961d3572521f\nTime:2025-03-19T10:10:10.9862777Z'} </code></pre> <p>I assume the main culprit is this:</p> <pre><code>InvalidPropertyValue::{'additional_properties': {}, 'lang': 'en-US', 'value': 'The value provided for one of the properties in the request body is invalid. </code></pre> <p>However, nowhere in the task creation do i use &quot;additional_properties&quot;, nor do my tasks differ between different user calls. This is the relevant piece of code in the &quot;add_simulation_task&quot; function:</p> <pre><code> job_id = str(&quot;some_uuid4&quot;) number_cpus = 100 tasks = [] resource_files = [ batchmodels.ResourceFile(somefile1), batchmodels.ResourceFile(somefile2) ] display_name = &quot;some_name&quot; random_tag = random.randrange(0, 10000) sim_id = f&quot;sim_{random_tag:05d}&quot; parameters = &quot;something&quot; command_string = ( f&quot;some_f_string_with_a_few_{parameters}&quot; ) command = ( f&quot;/bin/bash -c '{command_string}'&quot; ) tasks.append(batchmodels.TaskAddParameter( id=sim_id, display_name=display_name, command_line=command, resource_files=resource_files, environment_settings=[ batchmodels.EnvironmentSetting( name=&quot;NODES&quot;, value=&quot;1&quot; ), batchmodels.EnvironmentSetting( name=&quot;PPN&quot;, value=str(number_cpus) ) ], multi_instance_settings=batchmodels.MultiInstanceSettings( coordination_command_line=&quot;/bin/bash -c env&quot;, number_of_instances=1, common_resource_files=[] ), user_identity=batchmodels.UserIdentity( auto_user=batchmodels.AutoUserSpecification( scope=&quot;pool&quot;, elevation_level=&quot;nonadmin&quot; ) ) )) batch_service_client.task.add_collection(job_id, tasks) </code></pre> <p>I have started the exact same task 2 or 3 times without success on the one day, with no issues on the second.</p> <p>I would be happy for any idea why the interface is not working 100% of the time.</p>
<python><azure><azure-batch>
2025-03-19 10:40:08
1
504
SebSta
79,519,871
3,813,371
How to make Visual Studio 2022 Python CLI project launch Windows Terminal instead of Python.exe?
<p>This is a Windows 11 machine. I have 2 projects in a Visual Studio 2022 solution.</p> <ol> <li>A Python CLI.<br /> When I run this app, it opens the <code>python.exe</code> terminal.</li> </ol> <p><a href="https://i.sstatic.net/YjZajfpx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YjZajfpx.png" alt="enter image description here" /></a></p> <ol start="2"> <li>A C# Console app.<br /> When I run this app, I think it opens <code>cmd.exe</code> terminal.</li> </ol> <p><a href="https://i.sstatic.net/gweTMmmI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gweTMmmI.png" alt="enter image description here" /></a></p> <p>For the Python app also, I would like to run it in Windows cmd.exe terminal, so that I can use text formatters like <code>colorama</code> or <code>termcolor</code>.<br /> The Environment is already set for cmd.exe, but it doesn't seem to work.</p> <p>So, is it possible to run the Python CLI app, from VS 2022, in a Windows CMD terminal.</p> <p><a href="https://i.sstatic.net/bBl4P6Ur.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bBl4P6Ur.png" alt="How to make Visual Studio 2022 project launch Windows Terminal instead of PowerShell?" /></a></p>
<python><visual-studio-2022>
2025-03-19 10:19:28
1
2,345
sukesh
79,519,830
10,200,497
What is the best way to get the last non zero value in a window of N rows?
<p>This is my dataframe:</p> <pre><code>df = pd.DataFrame({ 'a': [0, 0, 1, -1, -1, 0, 0, 0, 0, 0, -1, 0, 0, 1, 0] }) </code></pre> <p>Expected output is creating column <code>b</code>:</p> <pre><code> a b 0 0 0 1 0 0 2 1 0 3 -1 1 4 -1 -1 5 0 -1 6 0 -1 7 0 -1 8 0 0 9 0 0 10 -1 0 11 0 -1 12 0 -1 13 1 -1 14 0 1 </code></pre> <p>Logic:</p> <p>I explain the logic by some examples:</p> <p>I want to create column b to df</p> <p>I want to have a window of three rows</p> <p>for example for row number 3 I want to look at three previous rows and capture the last non 0 value. if all of the values are 0 then 'b' is 0. in this case the last non zero value is 1. so column b is 1</p> <p>for example for row number 4 . The last non zero value is -1 so column b is -1</p> <p>I want to do the same for all rows.</p> <p>This is what I have tried so far. I think there must be a better way.</p> <pre><code>import pandas as pd df = pd.DataFrame({ 'a': [0, 0, 1, -1, -1, 0, 0, 0, 0, 0, -1, 0, 0, 1, 0] }) def last_nonzero(x): # x is a pandas Series representing a window nonzero = x[x != 0] if not nonzero.empty: # Return the last non-zero value in the window (i.e. the one closest to the current row) return nonzero.iloc[-1] return 0 # Shift by 1 so that the rolling window looks only at previous rows. # Use a window size of 3 and min_periods=1 to allow early rows. df['b'] = df['a'].shift(1).rolling(window=3, min_periods=1).apply(last_nonzero, raw=False).astype(int) </code></pre>
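<p>One vectorised alternative that reproduces the expected column for the sample above: turn zeros into NaN, shift by one so only previous rows are visible, forward-fill at most <code>window - 1</code> positions so a value cannot reach rows whose trailing window no longer contains it, and fill whatever remains with 0.</p> <pre><code>import pandas as pd

df = pd.DataFrame({
    'a': [0, 0, 1, -1, -1, 0, 0, 0, 0, 0, -1, 0, 0, 1, 0]
})

window = 3
prev = df['a'].where(df['a'] != 0).shift(1)   # zeros become NaN; look only at previous rows
df['b'] = prev.ffill(limit=window - 1).fillna(0).astype(int)
</code></pre>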
<python><pandas>
2025-03-19 10:04:07
6
2,679
AmirX
79,519,665
5,838,180
Overplotting healpy gnomview with data points ignores the data points
<p>I am trying to create a figure in Python containing subplots with a gnomview map inside and data points overplotted. I am using this code:</p> <pre><code>import healpy as hp import matplotlib.pyplot as plt import numpy as np nside = 32 npix = hp.nside2npix(nside) rot = [0, 0] xsize = 500 reso = 1.0 fig, axs = plt.subplots(1, 3, figsize=(15, 5)) data = np.random.randn(npix) num_points = 100 ra = np.random.uniform(0, 360, num_points) dec = np.random.uniform(-90, 90, num_points) for i, ax in enumerate(axs): hp.gnomview(data, rot=rot, xsize=xsize, reso=reso, sub=(1, 3, i + 1), fig=fig, notext=True, cbar=False) hp.projscatter(ra, dec, lonlat=True, coord='G') plt.tight_layout() plt.show() </code></pre> <p>This creates the figure with subplots showing the gnomview in each of them, but the scattered data points are not displayed. I also tried substituting the line <code>hp.projscatter(...)</code> with <code>ax.scatter(ra, dec, marker='o', color='red', s=10)</code>, but again no points are being shown. What am I doing wrong? Is there an alternative way to plot data points on top of healpy gnomview subplots?</p>
<python><matplotlib><plot><healpy>
2025-03-19 09:06:57
0
2,072
NeStack
79,519,597
17,148,835
python API wait for .cmm script to finish (Practice stack depth error)
<p>I am using python with the lauterbach.trace32.rcl library to control a trace32 instance. I am getting sporadic 'Practice stack depth error' while calling multiple cmm files My goal is to call a .cmm script and wait for the script to complete before calling another one.</p> <p>My current python script looks like this (simplified):</p> <pre><code>instance.cmm(&quot;testScript.cmm&quot;) instance.cmd(&quot;SYStem.Mode Go&quot;) </code></pre> <p>This causes problems because the second command is being executed while the <code>testScript.cmm</code> is still running.</p> <p>I noticed that putting a simple <code>time.sleep(1)</code> between the two lines would resolve the issue, but I don't like the solution because in some cases the script may need more time to complete the execution:</p> <pre><code>instance.cmm(&quot;testScript.cmm&quot;) time.sleep(1) # wait for 1 second instance.cmd(&quot;SYStem.Mode Go&quot;) </code></pre> <p>Is there a more intelligent solution?</p> <p>Thank you for helping out &lt;3</p> <hr /> <p><strong>UPDATE:</strong></p> <p>I found out that the parameter <code>timeout=None</code> should do the trick, but still this does not work for me... :(</p> <pre><code>instance.cmm(&quot;testScript.cmm&quot;, timeout=None) </code></pre>
<python><trace32><lauterbach>
2025-03-19 08:42:09
1
1,045
BeanBoy
79,519,395
1,079,907
How to skip strings that start with certain prefixes but match and substitute in the rest
<p>I want to match and substitute for strings as shown in the example below, but not for some strings which start with <code>test</code> or <code>!!</code>. I have used negative lookahead to skip matching unwanted strings but <code>(Street|,)(?=\d)</code> matching for <code>Street</code> &amp; comma replacing group 1 with <code>UK/</code> is not working as expected.</p> <pre><code>import re input = [ 'Street1-2,4,6,8-10', '!! Street4/31/2', 'test Street4' ] pattern = r'(^(?!test\s|!!\s).*(Street|,)(?=\d))' output = [re.sub(pattern, r'\g&lt;1&gt;UK/', line) for line in input ] </code></pre> <p><strong>Actual output</strong>:</p> <pre><code>['Street1-2,4,6,UK/8-10', '!! Street4/31/2', 'test Street4'] </code></pre> <p><strong>Expected output</strong>:</p> <pre><code>['StreetUK/1-2,UK/4,UK/6,UK/8-10', '!! Street4/31/2', 'test Street4'] </code></pre>
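<p>The reason only the last position gets <code>UK/</code> is the greedy <code>.*</code> inside the group: the whole prefix up to the final <code>Street</code> or comma is consumed by a single match, so only one substitution happens per line. Since the skip condition applies to the line as a whole, one option is to separate it from the substitution; the list is renamed to <code>lines</code> below only to avoid shadowing the built-in <code>input</code>.</p> <pre><code>import re

lines = [
    'Street1-2,4,6,8-10',
    '!! Street4/31/2',
    'test Street4',
]

def add_uk(line):
    # Leave lines starting with 'test ' or '!! ' untouched
    if re.match(r'(?:test|!!)\s', line):
        return line
    # Insert 'UK/' after every 'Street' or ',' that is immediately followed by a digit
    return re.sub(r'(Street|,)(?=\d)', r'\g&lt;1&gt;UK/', line)

output = [add_uk(line) for line in lines]
# ['StreetUK/1-2,UK/4,UK/6,UK/8-10', '!! Street4/31/2', 'test Street4']
</code></pre>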
<python><regex>
2025-03-19 07:04:19
6
12,828
Sunil Bojanapally
79,519,353
28,063,240
How to cache python pip requirements during the docker build process?
<p>I'm on a very slow internet connection, and the</p> <pre><code>RUN pip install -r requirements.txt </code></pre> <p>step of <code>docker compose up --build</code> keeps timing out halfway through.</p> <p>When I run <code>docker compose up --build</code> again, it looks like it restarts from the very beginning. All of the python packages get downloaded from scratch.</p> <p>How can I make docker use the downloaded packages from the previous attempt?</p> <hr /> <p>My dockerfile:</p> <pre><code>FROM python:3.11 ENV PYTHONUNBUFFERED 1 WORKDIR /app COPY requirements.txt ./ RUN pip install -r requirements.txt COPY . . CMD celery -A myapp worker -l info -Q ${CELERY_QUEUE} </code></pre>
<python><django><docker><celery><django-celery>
2025-03-19 06:45:29
1
404
Nils
79,519,074
3,604,745
Do Python version issues with TTA lead to fasttransform vs. fastcore bugs in Python >= 3.10?
<p>Test Time Augmentation (TTA) in FastAI should be easily applied with <code>learn.tta</code>, yet has led to numerous issues in my Cloud Run deployment. I have a working Cloud Run deployment that does base learner and metalearner scoring as a prediction endpoint using <code>load_learner</code> from FastAI.</p> <p>I want to switch <code>learn.predict</code> to <code>learn.tta</code> but issues keep arising. FastAI requires a slightly different input shape for <code>tta</code> and has different shape of returned values. I wanted to make it more of a direct drop-in replacement for <code>learn.predict</code>. This function worked to accomplish that in a minimalistic <a href="https://colab.research.google.com/drive/1X_GLZHmiTOKfjkmdrJwJZmgzC_U4dhk9?usp=sharing" rel="nofollow noreferrer">test notebook</a> on Colab:</p> <pre><code>import random from fastai.vision.all import * # Function to perform TTA and format the output to match predict def tta_predict(learner, img): # Create a DataLoader for the single image using the test DataLoader test_dl = learner.dls.test_dl([img]) # Perform TTA on the single image using the test DataLoader preds, _ = learner.tta(dl=test_dl) # Get the average probabilities avg_probs = preds.mean(dim=0) # Get the predicted class index pred_idx = avg_probs.argmax().item() # Get the class label class_label = learner.dls.vocab[pred_idx] # Format the output to match the structure of the predict method return (class_label, pred_idx, avg_probs) # Use the tta_predict function prediction = tta_predict(learn, grayscale_img) # Print the results print(type(prediction)) # Print the type of the prediction object print(prediction) # Print the prediction itself (class label, index, probabilities) print(prediction[0]) # Print the predicted class label print(prediction[2]) # Print the average probabilities </code></pre> <p>Although it seemed to work fine in the notebook, when I add that to the top of my production script and switch <code>learn.predict</code> to <code>tta_predict(learn, img)</code> for my base learners, the entire image starts to fail to build with Python 3.9:</p> <pre><code>Traceback (most recent call last): File &quot;/app/main.py&quot;, line 11, in &lt;module&gt; from fastai.vision.all import PILImage, BCEWithLogitsLossFlat, load_learner File &quot;/usr/local/lib/python3.9/site-packages/fastai/vision/all.py&quot;, line 4, in &lt;module&gt; from .augment import * File &quot;/usr/local/lib/python3.9/ site-packages/fastai/vision/augment.py&quot;, line 8, in &lt;module&gt; from .core import * File &quot;/usr/local/lib/python3.9/site-packages/fastai/vision/core.py&quot;, line 259, in &lt;module&gt; class PointScaler(Transform): File &quot;/usr/local/lib/python3.9/site-packages/fasttransform/transform.py&quot;, line 75, in __new__ if funcs: setattr(new_cls, nm, _merge_funcs(*funcs)) File &quot;/usr/local/lib/python3.9/site-packages/fasttransform/transform.py&quot;, line 42, in _merge_funcs res = Function(fs[-1].methods[0].implementation) File &quot;/usr/local/lib/python3.9/site-packages/plum/function.py&quot;, line 181, in methods self._resolve_pending_registrations() File &quot;/usr/local/lib/python3.9/site-packages/plum/function.py&quot;, line 280, in _resolve_pending_registrations signature = Signature.from_callable(f, precedence=precedence) File &quot;/usr/local/lib/python3.9/site-packages/plum/signature.py&quot;, line 88, in from_callable types, varargs = _extract_signature(f) File &quot;/usr/local/lib/python3.9/site-packages/plum/signature.py&quot;, line 346, in 
_extract_signature resolve_pep563(f) File &quot;/usr/local/lib/python3.9/site-packages/plum/signature.py&quot;, line 329, in resolve_pep563 beartype_resolve_pep563(f) # This mutates `f`. File &quot;/usr/local/lib/python3.9/site-packages/beartype/peps/_pep563.py&quot;, line 263, in resolve_pep563 arg_name_to_hint[arg_name] = resolve_hint( File &quot;/usr/local/lib/python3.9/site-packages/beartype/_check/forward/fwdmain.py&quot;, line 308, in resolve_hint return _resolve_func_scope_forward_hint( File &quot;/usr/local/lib/python3.9/site-packages/beartype/_check/forward/fwdmain.py&quot;, line 855, in _resolve_func_scope_forward_hint raise exception_cls(exception_message) from exception beartype.roar.BeartypeDecorHintPep604Exception: Stringified PEP 604 type hint 'PILBase | TensorImageBase' syntactically invalid under Python &lt; 3.10 (i.e., TypeError(&quot;unsupported operand type(s) for |: 'BypassNewMeta' and 'torch._C._TensorMeta'&quot;)). Consider either: * Requiring Python &gt;= 3.10. Abandon Python &lt; 3.10 all ye who code here. * Refactoring PEP 604 type hints into equivalent PEP 484 type hints: e.g., # Instead of this... from __future__ import annotations def bad_func() -&gt; int | str: ... # Do this. Ugly, yet it works. Worky &gt;&gt;&gt;&gt; pretty. from typing import Union </code></pre> <p>I don't see anything in my code that could've caused that, yet there it is. I noticed somewhere in those messages it mentions &quot;augment&quot;, which I take as confirmation that TTA is at fault (it was also the only thing that changed). So, I tried switching the Python version to 3.10. Now it builds but it's clearly broken:</p> <pre><code>ERROR loading model.pkl: Could not import 'Pipeline' from fastcore.transform - this module has been moved to the fasttransform package. To migrate your code, please see the migration guide at: https://answerdotai.github.io/fasttransform/fastcore_migration_guide.html </code></pre> <p>The migration guide it mentions says to change</p> <p><code>from fastcore.transform import Transform, Pipeline</code> to</p> <p><code>from fasttransform import Transform, Pipeline</code></p> <p>but my code never directly imports <code>Pipeline</code> or <code>Transform</code>, nor does it directly import <code>fastcore</code>.</p>
<python><pytorch><prediction><fast-ai>
2025-03-19 03:41:56
1
23,531
Hack-R
79,519,059
3,099,733
provision a custom python virtualenv with apptainer
<p>In order to run a software that require 32 bit environment on HPC, I have to build a container with Apptainer.</p> <p>The problem is, I need to run a python script with extra dependencies in the container. I don't want to rebuild the container whenever I add/remove python packages, so I am thinking of building a virutalenv out of the container, and mount it to the container when I run my python script. One more thing is, if a python packages has been install in the container, I don't want it to be override by the one installed in virtualenv.</p>
<python><singularity-container><apptainer>
2025-03-19 03:27:33
0
1,959
link89
79,518,999
8,444,568
Why do std(skipna=False) and std(skipna=True) yield different results even when there are no NaN or null values in the Series?
<p>I have a pandas Series <code>s</code>, and when I call <code>s.std(skipna=True)</code> and <code>s.std(skipna=False)</code> I get different results even when there are no NaN/null values in <code>s</code>, why? Did I misunderstand the <code>skipna</code> parameter? I'm using pandas 1.3.4</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd s = pd.Series([10.0]*4800000, index=range(4800000), dtype=&quot;float32&quot;) # No NaN/null in the Series print(s.isnull().any()) # False print(s.isna().any()) # False # Why the code below prints different results? print(s.std(skipna=False)) # 0.0 print(s.std(skipna=True)) # 0.61053276 </code></pre>
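<p>A hedged way to check whether this is a single-precision accumulation issue rather than anything about <code>skipna</code> itself is to recompute on the same Series cast to float64 (everything below comes from the question except the cast):</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd

s = pd.Series([10.0] * 4800000, index=range(4800000), dtype='float32')

# With float64 accumulation both calls should agree (the std of a constant
# series is 0.0); if they do, the mismatch above is a float32 precision
# artefact of the skipna=True code path rather than a skipna bug.
s64 = s.astype('float64')
print(s64.std(skipna=False))
print(s64.std(skipna=True))
</code></pre>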
<python><pandas>
2025-03-19 02:39:54
1
893
konchy
79,518,840
19,537,838
Forward slash ```/``` and the backslash ```\``` following a quote disappear when typing the third following character (Sublime Text/Python editor)
<p>Sublime Text / Python editor: When I type the third character after a slash character that follows and opening quotation mark (double <code>&quot;/...</code> or single <code>'/...</code> OR <code>&quot;\...</code> <code>'\...</code>) the slash disappears. How can I stop this behavior?</p> <p>When typing:<br /> <code>&quot;/hom</code> turns into this: <code>&quot;hom</code>.<br /> <code>&quot;\hom</code> turns into this: <code>&quot;hom</code>.<br /> Unicode codepoint <code>&quot;\u002F</code> turns into <code>&quot;u002F</code><br /> Newline <code>\n</code> after the double quote <code>&quot;\n555</code> turns into <code>&quot;n555</code></p> <p>So far, the only thing that seems to help is if I (in <code>Python.sublime-settings</code>) turn off the automatic pairing of quotes and brackets by setting <code>auto_match_enabled</code> to false. This causes other issues, so I would like to find a more workable solution.</p> <pre><code>// Controls auto pairing of quotes, brackets etc &quot;auto_match_enabled&quot;: false, </code></pre> <p>For some reason, I am not able figure out how to change this behavior. I have spend hours on this, I have tried changing everything I can think of (punctuation, auto_corrects, triggers, and other stuff) so that could cause this behavior with no luck.</p> <p>This behavior only occurs only with <em>Python editor</em> when editing <strong>.py</strong> files in <em>Sublime Text</em>, and <strong>only when I <em>type</em> in the third character following the slash that follows the quotation mark</strong>. It is the same behavior with raw strings: <code>r&quot;/hom...</code> turn into <code>r&quot;hom</code> when I type in the third character after the forward slash. <code>CTRL+Z</code> has no effect on bringing the forward slash back after it disappears.</p> <p>Excluding the undesirable option of making the <code>auto_match_enabled</code> to false, the only other thing I can do is have to (1) remember that this happens, AND (2) I have to go back to the beginning of the string that I am trying to type and manually add the forward slash. <em>NOTE: When I do go back to insert the slash character at the beginning of the sting in quotes, the slash does not disappear. I can also paste a string starting with a forward slash, and it will not disappear. <strong>This happens only when I type in the third character following the slash that follows the opening quotation mark</strong>.</em></p> <p>How can I type a quoted string when the string starts with a slash? What am I missing? Thank you for your help in advance.</p>
<python><autocorrect><sublimetext4>
2025-03-19 00:01:05
1
795
rich neadle
79,518,764
4,463,825
How to load a Neural Network Model along with MinMaxScaler?
<p>I have a simple neural network model, of 4 layers, that I trained on a numerical dataset of 25K data points.</p> <p>Loading the data takes a long time whenever I want to evaluate new features in my Python code. So how could I save the model in my project folder, and just load it as required?</p> <p>It is a sequential model that looks something like this:</p> <pre><code> model = keras.Sequential(name=&quot;my_sequential&quot;) model.add(layers.Dense(32, activation=&quot;relu&quot;, name=&quot;layer1&quot;, input_shape = (3,))) model.add(layers.Dense(64, activation=&quot;relu&quot;, name=&quot;layer2&quot;)) model.add(layers.Dense(64, activation=&quot;relu&quot;, name=&quot;layer3&quot;)) model.add(layers.Dense(3, activation=&quot;linear&quot;, name=&quot;layer4&quot;)) # # model.compile(loss='mse', optimizer='adam', metrics=['mse', 'mae', air_vol_flow_mse, air_outlet_temp_mse, heat_load_mse]) # history = model.fit(scaled_x_train, scaled_y_train, epochs = 320, batch_size = 64, verbose = 1)#, validation_data = (x_test, y_test)) results = model.evaluate(scaled_x_test, scaled_y_test) </code></pre> <p>The main aspect - the part where I am unable to find a solution - is that I scaled the data while training this model, as shown below. I have to save this scaling information and restore it when loading the model as well - how can I achieve that?</p> <pre><code># Normalization x_scaler = MinMaxScaler(feature_range=(0,1)) scaled_x_train = x_scaler.fit_transform(x_train) scaled_x_test = x_scaler.transform(x_test) y_scaler = MinMaxScaler(feature_range=(0,1)) scaled_y_train = y_scaler.fit_transform(y_train) scaled_y_test = y_scaler.transform(y_test) </code></pre>
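<p>A minimal sketch of one common approach - persisting the fitted scalers with joblib next to the saved Keras model - is shown below. The names <code>model</code>, <code>x_scaler</code> and <code>y_scaler</code> are the ones from the question, while the file names and <code>x_new</code> are placeholders; if the model was compiled with the custom metrics shown above, they would additionally have to be passed via <code>custom_objects</code> when loading:</p> <pre class="lang-py prettyprint-override"><code>import joblib
from tensorflow import keras

# after training: save the model and both fitted scalers into the project folder
model.save('my_model.keras')                      # or 'my_model.h5' on older Keras
joblib.dump(x_scaler, 'x_scaler.joblib')
joblib.dump(y_scaler, 'y_scaler.joblib')

# later, in another script: load everything back and reuse it
model = keras.models.load_model('my_model.keras')
x_scaler = joblib.load('x_scaler.joblib')
y_scaler = joblib.load('y_scaler.joblib')

scaled_x_new = x_scaler.transform(x_new)          # x_new: new raw feature rows
scaled_pred = model.predict(scaled_x_new)
pred = y_scaler.inverse_transform(scaled_pred)    # back to the original y units
</code></pre>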
<python><keras><scikit-learn><neural-network><minmax>
2025-03-18 22:49:01
0
993
Jesh Kundem
79,518,643
2,698,266
pre-commit is failing due to virtualenv
<p>pre-commit is failing on all invocations due to an upstream dependency failure with virtualenv. I get the following error:</p> <pre><code>[INFO] Initializing environment for https://github.com/pre-commit/pre-commit-hooks. [INFO] Initializing environment for https://github.com/psf/black. [INFO] Initializing environment for https://github.com/pre-commit/mirrors-prettier. [INFO] Initializing environment for https://github.com/pre-commit/mirrors-prettier:prettier@4.0.0-alpha.8. [INFO] Initializing environment for https://github.com/pre-commit/mirrors-eslint. [INFO] Initializing environment for https://github.com/pre-commit/mirrors-eslint:eslint@7.0.0-rc.0. [INFO] Initializing environment for https://github.com/adamchainz/django-upgrade. [INFO] Installing environment for https://github.com/pre-commit/pre-commit-hooks. [INFO] Once installed this environment will be reused. [INFO] This may take a few minutes... An unexpected error has occurred: CalledProcessError: command: ('/Users/xx/work/yy/bin/python3.9', '-mvirtualenv', '/Users/xx/.cache/pre-commit/repo3mzqp2sc/py_env-python3.9') return code: 1 stdout: RuntimeError: failed to detect cpython3.9.21-64|cpython3.9.21|cpython3.9-64|cpython3.9|cpython3-64|cpython3|cpython-64|cpython|python3.9.21-64|python3.9.21|python3.9-64|python3.9|python3-64|python3|python-64|python in /opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/bin:/opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9 stderr: (none) Check the log at /Users/xx/.cache/pre-commit/pre-commit.log </code></pre> <p>I have verified that the path listed for python contains at least one of the python keywords virtualenv is attempting to detect. I have tried many things to resolve this, all to no avail. I have tried</p> <ul> <li>completely reinstalling python and python depedencies via homebrew</li> <li><code>pre-commit clean</code> and removing the cache directory entirely</li> <li>trying to install all previous 10 versions of pre-commit and rerun pre-commit (nothing, or a new bug on an older version)</li> <li>Update <code>virtualenv</code> package, but python reference issue still persists</li> </ul> <p>System details:</p> <ul> <li>mac OS Sonoma 14.5</li> <li>Python 3.9.21</li> </ul> <p>If anyone has pointers, or has solved this issue themselves that would be very helpful.</p>
<python><macos><virtualenv>
2025-03-18 21:05:03
0
972
Wold
79,518,604
3,566,606
Python Typing: Put Constraint on Annotated
<p>I would like to do some meta-programming with Python type annotations.</p> <p>I want to define a certain kind of <code>Annotated</code>, constraining the type of the metadata of the Annotated special form.</p> <p>For example, I want to write a function which only allows Annotated types whose first metadata entry is a tuple:</p> <pre class="lang-py prettyprint-override"><code>from typing import Annotated PositiveIntWithExamples = Annotated[int, (12, 1001)] Bad = Annotated[int, &quot;This is bad.&quot;] def extract_examples(annotated_type: &quot;AnnotatedWithExamples&quot;) -&gt; tuple: return annotated_type.__metadata__[0] extract_examples(PositiveIntWithExamples) extract_examples(Bad) # typechecker should alert here </code></pre> <p>In the example above, how would I have to define <code>AnnotatedWithExamples</code>?</p> <p>I think it might be possible to achieve this with a mypy plugin, and bake the structure of <code>AnnotatedWithExamples</code> into the plugin. But is it possible to do this within the Python typing system?</p> <p>I don't think I can do this either with <code>Annotated</code>, or something equivalent with <code>NewType</code>, or can I?</p>
<python><metaprogramming><python-typing>
2025-03-18 20:41:49
1
6,374
Jonathan Herrera
79,518,434
16,891
Trying to deploy my first Modal app with a Chroma database but the data is not being used. Need help debugging the retrieveInfoForQuery function?
<p>I am having trouble figuring out why I can't see the print statements in the terminal for my retrieveInfoForQuery function and trying to figure out what is wrong. I have verified the chroma db is on the volume. Here is the code.</p> <pre><code>from langchain_core.tools import tool from langchain_core.messages import SystemMessage from langchain import hub from langchain.text_splitter import RecursiveCharacterTextSplitter from langchain_community.document_loaders import CSVLoader from langgraph.graph import MessagesState, StateGraph from langchain_chroma import Chroma from langchain_core.output_parsers import StrOutputParser from langchain_core.runnables import RunnablePassthrough, RunnableMap from langchain_core.documents import Document from langchain_huggingface import ChatHuggingFace, HuggingFaceEndpoint from langchain_community.llms import HuggingFaceHub from typing_extensions import List, TypedDict from langchain.chat_models import init_chat_model from langchain_openai import OpenAIEmbeddings import sys import modal import os # Create an image with dependencies image = modal.Image.debian_slim().pip_install( &quot;openai&quot;, &quot;langchain&quot;, &quot;langchain_community&quot;, &quot;langchain_core&quot;, &quot;langchain_huggingface&quot;, &quot;langchain_openai&quot;, &quot;langgraph&quot;, &quot;langchain_chroma&quot; ) # Create Modal app app = modal.App(&quot;rag-modal-deployment&quot;, image=image) # Define image correctly # Persistent storage vectorstore_volume = modal.Volume.from_name(&quot;gotquestions-storage&quot;,create_if_missing=True) # Define CSV processing function # Define RAG function class State(MessagesState): context: List[Document] @app.function(volumes={&quot;/vectorstore&quot;:vectorstore_volume},secrets=[modal.Secret.from_name(&quot;openai-secret&quot;),modal.Secret.from_name(&quot;langsmith-secret&quot;)],timeout=6000) def loadData(forceUpload): # Load or create vectorstore vectorstore_path = &quot;/vectorstore&quot; if forceUpload == &quot;true&quot;: print(&quot;Created new vector store.&quot;) # Load CSV loader = CSVLoader(file_path=&quot;/vectorstore/gotquestions.csv&quot;, encoding=&quot;utf8&quot;, csv_args={'delimiter': ',', 'quotechar': '&quot;'}, metadata_columns=[&quot;url&quot;, &quot;question&quot;]) docs = loader.load() # Split Documents text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=500) splits = text_splitter.split_documents(docs) # Create Vector Store vectorstore = Chroma.from_documents( documents=splits, embedding=OpenAIEmbeddings(model=&quot;text-embedding-3-large&quot;), persist_directory=vectorstore_path ) else: print(&quot;Loaded existing vector store.&quot;) vectorstore = Chroma(persist_directory=vectorstore_path, embedding_function=OpenAIEmbeddings(model=&quot;text-embedding-3-large&quot;)) print(&quot;done&quot;) return vectorstore @app.function(secrets=[modal.Secret.from_name(&quot;openai-secret&quot;),modal.Secret.from_name(&quot;langsmith-secret&quot;)], volumes={&quot;/vectorstore&quot;: vectorstore_volume},timeout=6000) @modal.fastapi_endpoint(docs=True) def getDataAndAnswerQuestion(question: str,forceUpload:str): # Set environment variables #os.environ[&quot;OPENAI_API_KEY&quot;] = modal.Secret().get(&quot;OPENAI_API_KEY&quot;) #os.environ[&quot;HUGGINGFACEHUB_API_TOKEN&quot;] = modal.Secret().get(&quot;HUGGINGFACEHUB_API_TOKEN&quot;) # Load data #loadData.remote(forceUpload) graph_builder = StateGraph(State) from langgraph.graph import END from langgraph.prebuilt import ToolNode, 
tools_condition graph_builder.add_node(query_or_respond) graph_builder.add_node(generate) graph_builder.set_entry_point(&quot;query_or_respond&quot;) graph_builder.add_edge(&quot;query_or_respond&quot;, &quot;generate&quot;) graph_builder.add_edge(&quot;generate&quot;, END) graph = graph_builder.compile() finalAnswer = graph.invoke({&quot;messages&quot;: [{&quot;role&quot;: &quot;user&quot;, &quot;content&quot;: question}], &quot;context&quot;: &quot;&quot;}) #for step in graph.stream({&quot;messages&quot;: [{&quot;role&quot;: &quot;user&quot;, &quot;content&quot;: question}], &quot;context&quot;: &quot;&quot;},stream_mode=&quot;values&quot;): #step[&quot;messages&quot;][-1].pretty_print() # Return formatted results sources_html = &quot;&quot;.join(f'&lt;a href=&quot;{doc.metadata[&quot;url&quot;]}&quot;&gt;{doc.metadata[&quot;question&quot;]}&lt;/a&gt;&lt;br&gt;' for doc in finalAnswer[&quot;context&quot;]) return {&quot;content&quot;: finalAnswer[&quot;messages&quot;][-1].content, &quot;sources&quot;: sources_html} @tool(response_format=&quot;content_and_artifact&quot;) def retrieveInfoForQuery(query: str): &quot;&quot;&quot;Retrieve information related to a query.&quot;&quot;&quot; print(&quot;retrieving... &quot;+query) vectorstore_path = &quot;/vectorstore&quot; vectorstore=loadData.remote(&quot;false&quot;) if isinstance(vectorstore, Chroma): # Ensure it's properly loaded retrieved_docs = vectorstore.similarity_search(query, k=2) else: raise ValueError(&quot;Vectorstore did not initialize correctly.&quot;) retrieved_docs = vectorstore.similarity_search(query, k=2) #print(&quot;retrieved... &quot;+str(retrieved_docs)) serialized = &quot;\n\n&quot;.join( (f&quot;Source: {doc.metadata}\n&quot; f&quot;Content: {doc.page_content}&quot;) for doc in retrieved_docs ) return serialized, retrieved_docs def query_or_respond(state: MessagesState): &quot;&quot;&quot;Generate tool call for retrieval or respond.&quot;&quot;&quot; llm = init_chat_model(&quot;gpt-4o&quot;, model_provider=&quot;openai&quot;) llm_with_tools = llm.bind_tools([retrieveInfoForQuery]) response = llm_with_tools.invoke(state[&quot;messages&quot;]) return {&quot;messages&quot;: [response]} def generate(state: State): &quot;&quot;&quot;Generate answer.&quot;&quot;&quot; tool_messages = [ message for message in reversed(state[&quot;messages&quot;]) if message.type == &quot;tool&quot; ][::-1] docs_content = &quot;\n\n&quot;.join(doc.content for doc in tool_messages) system_message_content = ( &quot;You are an assistant for question-answering tasks. &quot; &quot;Use the following pieces of retrieved context to answer &quot; &quot;the question. If you don't know the answer, say that you &quot; &quot;don't know. Keep the answer concise. 
Only use data from the tool.&quot; &quot;\n\n&quot; f&quot;{docs_content}&quot; ) conversation_messages = [ message for message in state[&quot;messages&quot;] if message.type in (&quot;human&quot;, &quot;system&quot;) or (message.type == &quot;ai&quot; and not message.tool_calls) ] prompt = [SystemMessage(system_message_content)] + conversation_messages llm = init_chat_model(&quot;gpt-4o&quot;, model_provider=&quot;openai&quot;) response = llm.invoke(prompt) context = [] for tool_message in tool_messages: context.extend(tool_message.artifact) return {&quot;messages&quot;: [response], &quot;context&quot;: context} @app.local_entrypoint() def main(): #retrieveInfoForQuery(&quot;who was Jesus&quot;) vector=loadData.remote(&quot;true&quot;) print(type(vector)) </code></pre> <p>Thanks for any ehlp you can provide.</p>
<python><py-langchain><rag>
2025-03-18 18:59:00
0
2,130
Chris Westbrook
79,518,393
835,073
Can we get "-x^{2}+1" instead of "1-x^{2}" with sympy.latex(-x**2+1)?
<p>I need <code>-x^{2}+1</code> rather than <code>1-x^{2}</code> with <code>sympy.latex(-x**2+1)</code>.</p> <pre class="lang-py prettyprint-override"><code>from sympy import symbols, latex x = symbols('x') print(-x**2+1) print(latex(-x**2+1)) </code></pre> <h4>Output:</h4> <pre><code>1 - x**2 1 - x^{2} </code></pre> <p>Is it possible to change the default format?</p>
<python><sympy>
2025-03-18 18:39:56
2
880
D G
79,518,311
1,980,208
Arrange consecutive zeros in pandas by a specific rule
<p>I have panda series as the following :</p> <pre><code> 1 1 2 2 3 3 4 4 5 0 6 0 7 1 8 2 9 3 10 0 11 0 12 0 13 0 14 1 15 2 </code></pre> <p>I have to arrange this in following format :</p> <pre><code> 1 1 2 2 3 3 4 4 5 0 6 0 7 3 ---&gt; 4-2+1 (previous non zero value - amount of previous zeroes + current value) 8 4 ---&gt; 4-2+2 (previous non zero value - amount of previous zeroes + current value) 9 5 ---&gt; 4-2+3 (previous non zero value - amount of previous zeroes + current value) 10 0 11 0 12 0 13 0 14 2 ---&gt; 5-4+1 (previous non zero value - amount of previous zeroes + current value) 15 3 ---&gt; 5-4+2 (previous non zero value - amount of previous zeroes + current value) </code></pre> <p>I am stuck at this. Till now I am able to produce a data frame with consecutive zeroes.</p> <pre><code>zero = ser.eq(0).groupby(ser.ne(0).cumsum()).cumsum() </code></pre> <p>which gave me:</p> <pre><code> 1 0 2 0 3 0 4 0 5 1 6 2 7 0 8 0 9 0 10 1 11 2 12 3 13 4 14 0 15 0 </code></pre> <p>if someone willing to assist on this. i am dropping cookie cutter for this problem which will create the above series.</p> <pre><code>d = {'1': 1, '2': 2, '3': 3, '4':4, '5':0, '6':0, '7':1, '8':2, '9':3, '10':0, '11':0, '12':0, '13':0, '14':1, '15':2} ser = pd.Series(data=d) </code></pre>
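<p>A plain-Python sketch of the rule - not vectorised, shown only to make the intended recurrence explicit - reproduces the expected output for the cookie-cutter series above:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd

d = {'1': 1, '2': 2, '3': 3, '4': 4, '5': 0, '6': 0, '7': 1, '8': 2,
     '9': 3, '10': 0, '11': 0, '12': 0, '13': 0, '14': 1, '15': 2}
ser = pd.Series(data=d)

result = []
base = 0           # offset applied to every value of the current non-zero run
prev_nonzero = 0   # last emitted non-zero value
zeros = 0          # length of the zero run currently being walked
for v in ser:
    if v == 0:
        zeros += 1
        result.append(0)
    else:
        if zeros:                        # first value after a block of zeros
            base = prev_nonzero - zeros  # previous non-zero value minus zero count
            zeros = 0
        prev_nonzero = base + v          # current value added on top of the offset
        result.append(prev_nonzero)

print(pd.Series(result, index=ser.index))
# 1, 2, 3, 4, 0, 0, 3, 4, 5, 0, 0, 0, 0, 2, 3
</code></pre>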
<python><pandas><numpy>
2025-03-18 17:52:20
2
439
prem
79,518,189
13,806,869
Why is my Winsorization code telling me it has too many arguments?
<p>I have an array that looks like this:</p> <pre><code>[ 3.4 0. 10.6 ... -0.4 -0.4 0. ] </code></pre> <p>The array has around 13.5m values in it. I want to winsorize the top and bottom 5% to deal with outliers. This is the code I'm using:</p> <pre><code>from scipy.stats.mstats import winsorize winsorized_array = winsorize(array, (0.05, 0.05)) </code></pre> <p>However, this returns the following error message:</p> <blockquote> <p>TypeError: winsorize() takes 1 positional argument but 2 were given</p> </blockquote> <p>I've also tried the following variation:</p> <pre><code>from scipy.stats.mstats import winsorize winsorized_array = winsorize(array, limits = [0.05, 0.05]) </code></pre> <p>This returns the following error message, which doesn't make any sense to me, since as far as I can tell this is the same syntax found in the scipy documentation <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.mstats.winsorize.html" rel="nofollow noreferrer">here</a>:</p> <blockquote> <p>TypeError: winsorize() got an unexpected keyword argument 'limits'</p> </blockquote> <p>Does anyone know what I'm doing wrong please?</p>
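<p>One thing worth ruling out - purely as a diagnostic sketch, since the reported error does not match the documented signature - is that the name <code>winsorize</code> has been shadowed by some other object in the session:</p> <pre class="lang-py prettyprint-override"><code>import inspect
from scipy.stats.mstats import winsorize

# If these do not point at a scipy module and a signature containing a
# 'limits' parameter, something else named winsorize is being called instead.
print(winsorize.__module__)
print(inspect.signature(winsorize))
</code></pre>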
<python><scipy>
2025-03-18 17:00:39
1
521
SRJCoding
79,518,161
2,893,712
Python Telegram Bot Multiple Bots
<p>I have multiple bots that utilize Python Telegram Bot module. Each bot has code like:</p> <pre><code>from telegram.ext import Updater, CommandHandler def start(update, context): update.message.reply_text(&quot;Command List:\n/start - Display this message&quot;) def main(): updater = Updater(token='XXXXXXX:XXXXXXX-XXXXXXXXXXX', use_context=True) updater.dispatcher.add_handler(CommandHandler('start', start)) # One example of a bot command # ... Several CommandHandler and ConversationHandler functions ... updater.start_polling() updater.idle() if __name__ == '__main__': main() </code></pre> <p>I found <a href="https://github.com/python-telegram-bot/python-telegram-bot/wiki/Frequently-requested-design-patterns#running-ptb-alongside-other-asyncio-frameworks" rel="nofollow noreferrer">this</a> documentation for running multiple <code>asyncio</code> frameworks but I could not find a specific example that creates multiple instances of python telegram bot. The bots I have run on completely separate channels and have different set of commands/functions/tokens. How would I have one script for multiple bots?</p>
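<p>A rough sketch of one way this is sometimes handled with the v13-style <code>Updater</code> API shown above (the tokens and handler bodies are placeholders): <code>start_polling()</code> returns immediately, so several updaters can be started from one script and <code>idle()</code> called once at the end to keep the process alive.</p> <pre class="lang-py prettyprint-override"><code>from telegram.ext import Updater, CommandHandler

def build_bot(token, start_text):
    updater = Updater(token=token, use_context=True)

    def start(update, context):
        update.message.reply_text(start_text)

    updater.dispatcher.add_handler(CommandHandler('start', start))
    # ... add the other handlers for this particular bot here ...
    return updater

def main():
    bots = [
        build_bot('TOKEN_A', 'Command List for bot A ...'),
        build_bot('TOKEN_B', 'Command List for bot B ...'),
    ]
    for bot in bots:
        bot.start_polling()   # non-blocking: polling runs in background threads
    bots[0].idle()            # block the main thread until interrupted

if __name__ == '__main__':
    main()
</code></pre>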
<python><telegram><telegram-bot><python-telegram-bot>
2025-03-18 16:49:41
1
8,806
Bijan
79,518,142
16,563,251
Implement the __getitem__ method of a minimal collections.abc sequence with type hints
<p>How does a minimal implementation of a <a href="https://docs.python.org/3/library/collections.abc.html#collections.abc.Sequence" rel="nofollow noreferrer"><code>Sequence</code></a> from <a href="https://docs.python.org/3/library/collections.abc.html" rel="nofollow noreferrer"><code>collections.abc</code></a>, together with the correct type hints, look like?</p> <p>According to its documentation, <code>__len__</code> and <code>__getitem__</code> are sufficient. My type hinter complains about the implementation of <code>__getitem__</code> though, although my implementation follows the <a href="https://docs.python.org/3/reference/datamodel.html#object.__getitem__" rel="nofollow noreferrer">python docs</a>.</p> <p>My current version is the following (with warnings from basedpyright):</p> <pre class="lang-py prettyprint-override"><code>from collections.abc import Sequence from typing import override class MySeq(Sequence[float]): def __init__(self): self._data: list[float] = list() @override def __len__(self) -&gt; int: return len(self._data) @override def __getitem__(self, key) -&gt; float: # Type annotation is missing for parameter &quot;key&quot; return self._data[key] # Return type is unknown </code></pre> <ul> <li>How do I type hint <code>key</code>? Just <code>key: int</code> is not accepted by the type checker. It gives the warning: <code>&quot;slice[Any, Any, Any]&quot; is not assignable to &quot;int&quot;</code>. What choices do I have here?</li> <li>What is wrong with the return type? The full warning is <code>&quot;float&quot; is not assignable to &quot;Sequence[float]&quot;</code></li> </ul>
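<p>For reference, the usual pattern - mirroring how typeshed declares <code>Sequence.__getitem__</code> - is to overload the method for <code>int</code> and <code>slice</code> keys; a sketch based on the class above (the <code>@override</code> decorators are omitted for brevity):</p> <pre class="lang-py prettyprint-override"><code>from collections.abc import Sequence
from typing import overload

class MySeq(Sequence[float]):
    def __init__(self) -&gt; None:
        self._data: list[float] = []

    def __len__(self) -&gt; int:
        return len(self._data)

    @overload
    def __getitem__(self, key: int) -&gt; float: ...
    @overload
    def __getitem__(self, key: slice) -&gt; Sequence[float]: ...

    def __getitem__(self, key: int | slice) -&gt; float | Sequence[float]:
        # list indexing already returns float for int keys and list[float]
        # (itself a Sequence[float]) for slices, so delegating is enough
        return self._data[key]
</code></pre>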
<python><python-typing><pyright><python-collections>
2025-03-18 16:41:30
1
573
502E532E
79,518,011
1,858,864
WebP support not enabled in Pillow 2.9.0 on CentOS 7 despite installing libwebp
<p>I'm trying to enable WebP support in Pillow 2.9.0 on CentOS 7. I can only use yum to install packages (I cannot use pip). Here's what I've done so far:</p> <p>Installed libwebp and libwebp-devel using yum:</p> <pre><code>yum install libwebp libwebp-devel </code></pre> <p>The installed version of libwebp is 0.3.0:</p> <pre><code># rpm -q libwebp libwebp-0.3.0-11.el7.x86_64 </code></pre> <p>Reinstalled python-pillow to ensure it is compiled with WebP support:</p> <pre><code>yum reinstall python-pillow </code></pre> <p>Checked for WebP support:</p> <pre><code>python -c &quot;from PIL import Image; print('WebP support:', 'WEBP' in Image.SAVE)&quot; ('WebP support:', False) </code></pre> <p>Extra info: I cannot use pip since it's a legacy system. I am restricted to using yum for package management. The libwebp and libwebp-devel packages are installed and up to date. I am using the default CentOS 7 repositories.</p> <p>Despite these steps, WebP support is still not enabled. What am I missing? Is there something else I need to do to ensure Pillow is compiled with WebP support? I tried to recompile PIL from source code to enable webp support but something went wrong and even simple imports like &quot;from PIL import Image&quot; raised some weird errors like this:</p> <pre><code>from PIL import Image Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; File &quot;PIL/Image.py&quot;, line 63, in &lt;module&gt; from PIL import _imaging as core ImportError: cannot import name _imaging </code></pre> <p>Is there a way to enable webp support with python-pillow 2.9 and python2.7? Maybe I'm doing something wrong?</p>
<python><python-imaging-library><webp>
2025-03-18 15:47:11
0
6,817
Paul
79,517,991
3,621,143
Using "no challenge" to create a certificate with private CA?
<p>I am working on a python script that performs the same SSL certificate generation using the ACME protocol as I accomplished with an Ansible playbook (<a href="https://stackoverflow.com/questions/77902933/acme-certificates-in-ansible-using-incommon-sectigo-ca">ACME certificates in Ansible using InCommon/Sectigo CA</a>).</p> <p>The documentation I found (<a href="https://acme-python.readthedocs.io/en/stable/index.html" rel="nofollow noreferrer">https://acme-python.readthedocs.io/en/stable/index.html</a>) is lacking any real good examples or logical order of calls to do anything. While documentation is thorough, without any examples.. you are left to figure it out on your own!</p> <p>I have no problems with creating private keys, CSRs, file manipulation, and using the cryptography modules to work with SSL certificates (successfully wrote a script generate certificates using the private CA REST API), but that is where my success ends, and my frustration begins .. using the ACME protocol and this module.</p> <p>Since I am using a private CA (Sectigo/InCommon); they support the &quot;no challenge&quot; method of using a private key to authenticate, generate and output an SSL certificate with a provided CSR.</p> <p>I have not been able to make heads or tails of the documentation (still reading) .. and was hoping someone here may have already done this and could give me something I could use or pointers on how to get started.</p> <p>Anyone out there have any existing code or tips?</p>
<python><acme>
2025-03-18 15:38:15
1
1,175
jewettg
79,517,632
16,563,251
Type hint private member variable of subclass more specific than superclass
<p>I have some private field of a class that is type hinted as a <code>Collection</code>. Now, I want to inherit from this class, changing the type to a <code>Sequence</code>, which itself inherits from <code>Collection</code>. Thus, everything the superclass was doing before is still supported by the type hint. Nevertheless, (based)pyright warns me about this:</p> <pre class="lang-none prettyprint-override"><code>&quot;_member&quot; overrides symbol of same name in class &quot;A&quot;   Variable is mutable so its type is invariant     Override type &quot;Sequence[Unknown]&quot; is not the same as base type &quot;Collection[Unknown]&quot; (reportIncompatibleVariableOverride) </code></pre> <p>While I (mostly) understand why the warning is raised on a technical level, I cannot really see the underlying type safety issue in my implementation, because I do not take away any functionality from the superclass member, but only add to it.</p> <p>Is there any reason why overriding in this way can be dangerous? What would be the &quot;correct&quot; way to implement and type hint my code below?</p> <pre class="lang-py prettyprint-override"><code>from collections.abc import Collection, Sequence class A: def __init__(self): self._member : Collection = {1, 2, 3} def __len__(self): return len(self._member) class B(A): def __init__(self): self._member : Sequence = [1, 2, 3] #reportIncompatibleVariableOverride def first(self): return self._member[0] </code></pre> <p>EDIT: One issue is when someone tries to set the member variable of an instance of <code>B</code> to a <code>Collection</code> that is not a <code>Sequence</code>, e.g.</p> <pre class="lang-py prettyprint-override"><code>foo : A = B() foo._member = {4, 5, 6} </code></pre> <p>However, as <code>_member</code> is private by convention and not supposed to be changed from the outside, I would argue that this is the user's fault, and not something a type-checker needs to warn about.</p> <p>The superclass is allowed to change <code>_member</code>, but since the subclass is aware of the superclass, it can override these occurrences.</p>
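<p>One restructuring that avoids the warning altogether - offered only as a sketch, not necessarily the best design - is to make the member's type a type variable of the base class, so each subclass pins it explicitly instead of overriding the annotation:</p> <pre class="lang-py prettyprint-override"><code>from collections.abc import Collection, Sequence
from typing import Generic, TypeVar

C = TypeVar('C', bound=Collection)

class A(Generic[C]):
    def __init__(self, member: C) -&gt; None:
        self._member: C = member

    def __len__(self) -&gt; int:
        return len(self._member)

class B(A[Sequence[int]]):
    def __init__(self) -&gt; None:
        super().__init__([1, 2, 3])

    def first(self) -&gt; int:
        # _member is known to be a Sequence[int] here, so indexing is fine
        return self._member[0]
</code></pre>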
<python><python-typing><pyright>
2025-03-18 13:42:37
0
573
502E532E
79,517,500
12,859,833
z3py threshold Optimization results in worse performance than unoptimized solution
<p>In a <a href="https://stackoverflow.com/questions/79506894/constraint-based-optimizing-the-decision-threshold-of-a-prediction-model">previous question</a>, I asked about optimizing the decision threshold of a prediction model. The solution led me to the <code>z3py</code> library.</p> <p>I am now trying a similar setup as before, but want to optimize the decision threshold of a binary prediction model to maximize the accuracy.</p> <p>However, I found that optimization on the threshold results in worse performance than with the default threshold (which could also be chosen by the optimizer).</p> <p>My MWP is below (it uses fixed-seed random targets and probabilities to replicate my findings):</p> <pre class="lang-py prettyprint-override"><code>import numpy as np from z3 import z3 def compute_eval_metrics(ground_truth, predictions): from sklearn.metrics import accuracy_score, f1_score accuracy = accuracy_score(ground_truth, predictions) macro_f1 = f1_score(ground_truth, predictions, average=&quot;macro&quot;) return accuracy, macro_f1 def optimization_acc_target( predictions: np.array, ground_truth: np.array, default_threshold=0.5, ): tp = np.sum((predictions &gt; default_threshold) &amp; (ground_truth == 1)) tn = np.sum((predictions &lt;= default_threshold) &amp; (ground_truth == 0)) initial_accuracy = (tp + tn) / len(ground_truth) print(f&quot;Accuracy: {initial_accuracy:.3f}&quot;) _, initial_macro_f1_score = compute_eval_metrics( ground_truth, np.where(predictions &gt; default_threshold, 1, 0) ) n = len(ground_truth) iRange = range(n) threshold = z3.Real(&quot;threshold&quot;) opt = z3.Optimize() predictions = predictions.tolist() ground_truth = ground_truth.tolist() true_positives = z3.Sum( [ z3.If(predictions[i] &gt; threshold, 1, 0) for i in iRange if ground_truth[i] == 1 ] ) true_negatives = z3.Sum( [ z3.If(predictions[i] &lt;= threshold, 1, 0) for i in iRange if ground_truth[i] == 0 ] ) acc = z3.Sum(true_positives, true_negatives) / n # Add constraints opt.add(threshold &gt;= 0.0) opt.add(threshold &lt;= 1.0) # Maximize accuracy opt.maximize(acc) if opt.check() == z3.sat: m = opt.model() t = m[threshold].as_decimal(10) if type(t) == str: if len(t) &gt; 1: t = t[:-1] t = float(t) print(f&quot;Optimal threshold: {t}&quot;) optimized_accuracy, optimized_macro_f1_score = compute_eval_metrics( ground_truth, np.where(np.array(predictions) &gt; t, 1, 0) ) print(f&quot;Accuracy: {optimized_accuracy:.3f} (was: {initial_accuracy:.3f})&quot;) print( f&quot;Macro F1 Score: {optimized_macro_f1_score:.3f} (was: {initial_macro_f1_score:.3f})&quot; ) print() else: print(&quot;Failed to optimize&quot;) np.random.seed(42) ground_truth = np.random.randint(0, 2, size=50) predictions = np.random.rand(50) optimization_acc_target( predictions=predictions, ground_truth=ground_truth, ) </code></pre> <p>In my code, I am using the true positive and true negative count to yield the accuracy.</p> <p>The output is:</p> <pre><code>Accuracy: 0.600 Optimal threshold: 0.9868869366 Accuracy: 0.480 (was: 0.600) Macro F1 Score: 0.355 (was: 0.599) </code></pre> <p>It always returns a <em>worse</em> solution than the default threshold of <code>0.5</code>). I am puzzled why this could be the case? Should it not performe at least as good as the default solution?</p> <p>To solve this, I tried using constructs from <code>z3py</code> (e.g. <code>z3.If</code> in the <code>z3.Sum</code> parts), thinking that maybe different data types lead to wrong results? 
But this turned out to not make a difference (which is good, as this aligns with an <a href="https://ericpony.github.io/z3py-tutorial/guide-examples.htm" rel="nofollow noreferrer">official example</a>). I also found this <a href="https://github.com/Z3Prover/z3/issues/5415" rel="nofollow noreferrer">GitHub</a> issue, but that seems to relate to a case with non-linear <em>constraints</em> (which I am not using).</p> <p>I am now wondering: what causes the results with the optimized threshold to be worse than the default threshold? I appreciate pointers to further resources and background information.</p>
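<p>One detail worth checking - stated as a guess rather than a confirmed diagnosis - is the objective <code>z3.Sum(true_positives, true_negatives) / n</code>: dividing an integer expression by the Python int <code>n</code> can yield an integer (truncating) objective, which would make every threshold look equally good to the optimizer. Since <code>n</code> is fixed, maximizing the raw count is equivalent to maximizing accuracy and sidesteps the division entirely:</p> <pre class="lang-py prettyprint-override"><code># maximize the number of correctly classified samples directly;
# the argmax over thresholds is the same as for accuracy because n is constant
correct = z3.Sum(true_positives, true_negatives)
opt.maximize(correct)
</code></pre>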
<python><z3><z3py>
2025-03-18 12:57:02
1
343
emil
79,517,387
21,446,483
Dependency error when using a cloud storage connector with Hadoop 3
<p>I'm trying to set up a simple PySpark project which writes a DataFrame to a cloud storage bucket, but I keep getting errors related to incorrect dependency management. I have tried multiple versions of the cloud storage connector without any success.</p> <p>I've also tried adding additional configuration options I've seen from documentation online and again, but I have any success, just different errors. I do have a different fix, but I don't understand why it works.</p> <p>Code:</p> <pre class="lang-py prettyprint-override"><code>from pyspark.sql import SparkSession from pyspark.sql.types import StructType, StructField, StringType, IntegerType def main(): # Initialize Spark session with GCS support spark = SparkSession.builder \ .appName(&quot;Basic PySpark Example&quot;) \ .config(&quot;spark.jars.packages&quot;, &quot;com.google.cloud.bigdataoss:gcs-connector:hadoop3-2.2.26&quot;) \ .config(&quot;spark.hadoop.google.cloud.auth.service.account.enable&quot;, &quot;true&quot;) \ .config(&quot;spark.hadoop.fs.gs.auth.service.account.json.keyfile&quot;, &quot;&lt;key-file&gt;.json&quot;) \ .getOrCreate() schema = StructType([ StructField(&quot;name&quot;, StringType(), True), StructField(&quot;age&quot;, IntegerType(), True), StructField(&quot;city&quot;, StringType(), True) ]) data = [ (&quot;John&quot;, 30, &quot;New York&quot;), (&quot;Alice&quot;, 25, &quot;London&quot;), (&quot;Bob&quot;, 35, &quot;Paris&quot;) ] df = spark.createDataFrame(data, schema=schema) df.show() gcs_bucket = &quot;name&quot; df.write.parquet(f&quot;gs://{gcs_bucket}/data/people.parquet&quot;) df.write.csv(f&quot;gs://{gcs_bucket}/data/people.csv&quot;) print(f&quot;Data written to GCS bucket: {gcs_bucket}&quot;) spark.stop() if __name__ == &quot;__main__&quot;: main() </code></pre> <p>requirements.txt:</p> <pre class="lang-none prettyprint-override"><code>pyspark==3.5.5 google-cloud-storage==3.1.0 </code></pre> <p>Error:</p> <pre class="lang-none prettyprint-override"><code>py4j.protocol.Py4JJavaError: An error occurred while calling o49.parquet. 
: org.apache.hadoop.fs.UnsupportedFileSystemException: No FileSystem for scheme &quot;gs&quot; at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3443) at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3466) at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:174) at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3574) at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3521) at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:540) at org.apache.hadoop.fs.Path.getFileSystem(Path.java:365) at org.apache.spark.sql.execution.datasources.DataSource.planForWritingFileFormat(DataSource.scala:454) at org.apache.spark.sql.execution.datasources.DataSource.planForWriting(DataSource.scala:530) at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:391) at org.apache.spark.sql.DataFrameWriter.saveInternal(DataFrameWriter.scala:364) at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:243) at org.apache.spark.sql.DataFrameWriter.parquet(DataFrameWriter.scala:802) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:75) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:52) at java.base/java.lang.reflect.Method.invoke(Method.java:580) at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244) at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:374) at py4j.Gateway.invoke(Gateway.java:282) at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132) at py4j.commands.CallCommand.execute(CallCommand.java:79) at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182) at py4j.ClientServerConnection.run(ClientServerConnection.java:106) at java.base/java.lang.Thread.run(Thread.java:1583) </code></pre> <p>Fix: Substitute</p> <pre class="lang-py prettyprint-override"><code>.config(&quot;spark.jars.packages&quot;, &quot;com.google.cloud.bigdataoss:gcs-connector:hadoop3-2.2.26&quot;) </code></pre> <p>with</p> <pre class="lang-py prettyprint-override"><code>.config(&quot;spark.jars&quot;, &quot;https://storage.googleapis.com/hadoop-lib/gcs/gcs-connector-hadoop3-latest.jar&quot;) </code></pre>
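<p>For what it is worth, a configuration sometimes shown alongside the connector documentation registers the <code>gs://</code> filesystem classes explicitly; whether it removes the need for the shaded jar used in the fix above is not verified here, and the key file placeholder is kept from the question:</p> <pre class="lang-py prettyprint-override"><code>spark = (
    SparkSession.builder
    .appName('Basic PySpark Example')
    .config('spark.jars', 'https://storage.googleapis.com/hadoop-lib/gcs/gcs-connector-hadoop3-latest.jar')
    # explicitly map the gs:// scheme to the connector classes
    .config('spark.hadoop.fs.gs.impl', 'com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem')
    .config('spark.hadoop.fs.AbstractFileSystem.gs.impl', 'com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS')
    .config('spark.hadoop.google.cloud.auth.service.account.enable', 'true')
    .config('spark.hadoop.fs.gs.auth.service.account.json.keyfile', '&lt;key-file&gt;.json')
    .getOrCreate()
)
</code></pre>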
<python><pyspark><google-cloud-storage>
2025-03-18 12:15:06
1
332
Jesus Diaz Rivero
79,517,202
1,751,393
Define a custom tree splitter from sklearn
<p>I'm trying to define a custom splitter using sklearn Classification Trees classes, but I'm getting no results so far. I got no errors but the tree is not developed. How to achieve this?</p> <p>My strategy is largely inspired by this approach: <a href="https://stackoverflow.com/questions/47624000/cinit-takes-exactly-2-positional-arguments-when-extending-a-cython-class">__cinit__() takes exactly 2 positional arguments when extending a Cython class</a></p> <p>Here is the code:</p> <pre><code>import numpy as np from sklearn.tree import DecisionTreeClassifier from sklearn.tree._tree import Tree from sklearn.tree._splitter import BestSplitter from sklearn.tree._criterion import Gini from sklearn.tree._classes import DepthFirstTreeBuilder class CustomBestSplitter(BestSplitter): &quot;&quot;&quot; Custom splitter that only allows splits on even-indexed features &quot;&quot;&quot; def __init__(self, *args, **kwargs): pass &quot;&quot;&quot; Custom splitter that only allows splitting on even-indexed features &quot;&quot;&quot; def best_split(self, *args, **kwargs): best_split = super().best_split(*args, **kwargs) if best_split is not None: print(best_split) feature_index = best_split[0] # Extract feature index from split if feature_index % 2 != 0: # Enforce even-index features only return None # Reject the split if it does not satisfy the constraint return best_split # Otherwise, allow the split class CustomDepthFirstTreeBuilder(DepthFirstTreeBuilder): def __init__(self, *args, **kwargs): pass def best_tree(self, *args, **kwargs): best_tree = super().best_tree(*args, **kwargs) # Build the tree manually builder.build(tree = self.tree_, X = X, y = y) return self class CustomDecisionTreeClassifier(DecisionTreeClassifier): def __init__(self, criterion='gini', splitter='best', max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features=None, random_state=None, max_leaf_nodes=None, min_impurity_decrease=0.0, class_weight=None, ccp_alpha=0.0, monotonic_cst=None): # Appeler le constructeur de la classe parente super().__init__(criterion=criterion, splitter=splitter, max_depth=max_depth, min_samples_split=min_samples_split, min_samples_leaf=min_samples_leaf, min_weight_fraction_leaf=min_weight_fraction_leaf, max_features=max_features, random_state=random_state, max_leaf_nodes=max_leaf_nodes, min_impurity_decrease=min_impurity_decrease, class_weight=class_weight, ccp_alpha=ccp_alpha, monotonic_cst = monotonic_cst) pass def fit(self, X, y, sample_weight=None, check_input=True, X_idx_sorted=None): &quot;&quot;&quot; Override fit to inject custom splitter &quot;&quot;&quot; y = y.reshape(-1, 1) if y.ndim == 1 else y # Compute number of outputs and classes n_outputs = 1 if len(y.shape) == 1 else y.shape[1] n_classes = np.array([np.unique(y).shape[0]], dtype=np.intp) # Create tree structure self.tree_ = Tree(X.shape[1], n_classes, n_outputs) # Initialize Gini criterion criterion = Gini(n_outputs, n_classes) # Initialize the custom splitter correctly splitter = CustomBestSplitter(criterion=criterion, max_features=X.shape[1], min_samples_leaf=1, min_weight_leaf=0.0, random_state=None, monotonic_cst=None) # Manually create a tree builder with the custom splitter builder = CustomDepthFirstTreeBuilder( splitter=splitter, min_samples_split=2, min_samples_leaf=1, min_weight_leaf=0.0, max_depth=3, min_impurity_decrease=0.0 ) return builder # Generate synthetic data X = np.random.rand(100, 5) # 100 samples, 5 features y = np.random.randint(0, 2, 100) # Binary target # Train the 
custom decision tree model = CustomDecisionTreeClassifier(max_depth=3) model.fit(X, y) </code></pre>
<python><scikit-learn><classification><decision-tree>
2025-03-18 11:11:56
0
356
Jojo
79,517,158
16,563,251
Test event handler registration using pytest
<p>Consider a module that allows to register event handlers and fires them at some condition:</p> <pre class="lang-py prettyprint-override"><code># mymodule/myfile.py _event_handlers = [] def register_event_handler(handler): _event_handlers.append(handler) def fire_event_handlers(): for handler in _event_handlers: handler() </code></pre> <p>I now want to test that the registration of an event handler works:</p> <pre class="lang-py prettyprint-override"><code># tests/test_mymodule.py from mymodule.myfile import register_event_handler, fire_event_handlers def test_event_handler(): def example_handler(): pass register_event_handler(example_handler) fire_event_handlers() # assert example_handler was called </code></pre> <p>How do I test if my function <code>example_handler</code> is called?</p> <p>There are some similar test cases that use <a href="https://docs.python.org/3/library/unittest.mock.html#patch" rel="nofollow noreferrer"><code>unittest.mock</code></a> (e.g. <a href="https://stackoverflow.com/questions/74029343/how-to-assert-that-nested-function-called-pytest">How to assert that nested function called pytest</a>), but as I do not want to mock any existing function of the module, I do not see how to use it here.</p>
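<p>A minimal sketch without <code>unittest.mock</code> is to record calls in a local list; alternatively, a <code>MagicMock</code> can stand in for the handler, since any callable can be registered. Both variants assume the module layout from the question:</p> <pre class="lang-py prettyprint-override"><code>from unittest.mock import MagicMock

from mymodule.myfile import register_event_handler, fire_event_handlers

def test_event_handler_with_closure():
    calls = []

    def example_handler():
        calls.append(True)     # record that the handler ran

    register_event_handler(example_handler)
    fire_event_handlers()

    assert calls == [True]

def test_event_handler_with_mock():
    handler = MagicMock()      # a MagicMock is itself callable
    register_event_handler(handler)
    fire_event_handlers()
    handler.assert_called_once()
</code></pre> <p>Note that registrations accumulate in the module-level <code>_event_handlers</code> list across tests, so in a real suite a fixture that clears it between tests may be needed.</p>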
<python><event-handling><pytest><function-call>
2025-03-18 10:56:19
1
573
502E532E
79,516,990
7,662,164
General way to define JAX functions with non-differentiable arguments
<p>For a particular JAX function <code>func</code>, one can define non-differentiable arguments by using the decorator <code>@partial(jax.custom_jvp, nondiff_argnums=...)</code>. However, in order to make it work, one must also explicitly define the differentiation rules in a custom <code>jvp</code> function by using the decorator <code>@func.defjvp</code>. I'm wondering if there is a generic way to define non-differentiable arguments for any given <code>func</code>, without defining a custom <code>jvp</code> (or <code>vjp</code>) function? This will be useful when the differentiation rules are too complicated to write out.</p>
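<p>Not a general answer, but one lightweight trick that avoids writing a custom JVP in some cases is to block gradients through selected arguments with <code>jax.lax.stop_gradient</code>; a sketch with a made-up <code>func</code>:</p> <pre class="lang-py prettyprint-override"><code>import jax
import jax.numpy as jnp

def func(x, n):
    # gradients are blocked through n, so it behaves like non-differentiable data
    n = jax.lax.stop_gradient(n)
    return jnp.sum(x ** 2) * n

grad_wrt_x = jax.grad(func, argnums=0)(jnp.ones(3), 2.0)
print(grad_wrt_x)   # [4. 4. 4.], i.e. 2 * x * n differentiated only through x
</code></pre>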
<python><function><jax><automatic-differentiation>
2025-03-18 09:55:30
1
335
Jingyang Wang
79,516,928
2,307,441
swifter module causing my executable built by pyinstaller to fail
<p>I am building an onefile exe using pyinstaller with my python file.</p> <p>my <code>testing.py</code> file contains following code:</p> <pre class="lang-py prettyprint-override"><code> import pandas as pd import swifter file = &quot;D:/Testing/file1.csv&quot; if __name__ == '__main__': try: df = pd.read_csv(file) print(df) except Exception as e: print(str(e)) print(str(repr(e))) </code></pre> <p>I know in the above sample code I am importing <code>swifter</code> not using it. In my actual code I am using it. This code is to reproduce the error only.</p> <blockquote> <p>I have used following command to build my exe file using pyinstaller. which created the exe as expected.</p> </blockquote> <pre class="lang-py prettyprint-override"><code>pyinstaller --onefile testing.py </code></pre> <p>When I execute the testing.exe file I got the following error.</p> <pre class="lang-py prettyprint-override"><code> Traceback (most recent call last): File &quot;testing.py&quot;, line 2, in &lt;module&gt; File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1007, in _find_and_load File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 986, in _find_and_load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 680, in _load_unlocked File &quot;PyInstaller\loader\pyimod02_importers.py&quot;, line 450, in exec_module File &quot;swifter\__init__.py&quot;, line 5, in &lt;module&gt; File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1007, in _find_and_load File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 986, in _find_and_load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 680, in _load_unlocked File &quot;PyInstaller\loader\pyimod02_importers.py&quot;, line 450, in exec_module File &quot;swifter\swifter.py&quot;, line 13, in &lt;module&gt; File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1007, in _find_and_load File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 986, in _find_and_load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 680, in _load_unlocked File &quot;PyInstaller\loader\pyimod02_importers.py&quot;, line 450, in exec_module File &quot;swifter\base.py&quot;, line 11, in &lt;module&gt; File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1007, in _find_and_load File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 986, in _find_and_load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 680, in _load_unlocked File &quot;PyInstaller\loader\pyimod02_importers.py&quot;, line 450, in exec_module File &quot;numba\__init__.py&quot;, line 19, in &lt;module&gt; File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1007, in _find_and_load File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 986, in _find_and_load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 680, in _load_unlocked File &quot;PyInstaller\loader\pyimod02_importers.py&quot;, line 450, in exec_module File &quot;numba\core\config.py&quot;, line 15, in &lt;module&gt; File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1007, in _find_and_load File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 986, in _find_and_load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 680, in _load_unlocked File &quot;PyInstaller\loader\pyimod02_importers.py&quot;, line 450, in exec_module File &quot;llvmlite\binding\__init__.py&quot;, line 4, in &lt;module&gt; File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1007, in _find_and_load File &quot;&lt;frozen 
importlib._bootstrap&gt;&quot;, line 986, in _find_and_load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 680, in _load_unlocked File &quot;PyInstaller\loader\pyimod02_importers.py&quot;, line 450, in exec_module File &quot;llvmlite\binding\dylib.py&quot;, line 3, in &lt;module&gt; File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1007, in _find_and_load File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 986, in _find_and_load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 680, in _load_unlocked File &quot;PyInstaller\loader\pyimod02_importers.py&quot;, line 450, in exec_module File &quot;llvmlite\binding\ffi.py&quot;, line 162, in &lt;module&gt; File &quot;os.py&quot;, line 1111, in add_dll_directory FileNotFoundError: [WinError 3] The system cannot find the path specified: 'D:\\Users\\username\\AppData\\Local\\Temp\\_MEI82602\\Library\\bin' [PYI-23332:ERROR] Failed to execute script 'testing' due to unhandled exception! </code></pre> <p>As suggested in other blogs/posts, I have updated the testing.spec file with required information as below.</p> <pre class="lang-none prettyprint-override"><code># -*- mode: python ; coding: utf-8 -*- import sys sys.setrecursionlimit(20000) a = Analysis( ['testing.py'], pathex=['D:\\Users\\username\\Anaconda3\\Lib\\site-packages\\swifter'], binaries=[], datas=[], hiddenimports=['swifter','dask','numba','tqdm','modin','ray','llvmlite'], hookspath=[], hooksconfig={}, runtime_hooks=[], excludes=[], noarchive=False, optimize=0, ) pyz = PYZ(a.pure) exe = EXE( pyz, a.scripts, a.binaries, a.datas, [], name='testing', debug=False, bootloader_ignore_signals=False, strip=False, upx=True, upx_exclude=[], runtime_tmpdir=None, console=True, disable_windowed_traceback=False, argv_emulation=False, target_arch=None, codesign_identity=None, entitlements_file=None, ) </code></pre> <p>using this updated spec file I have rebuit the exe file using the following command.</p> <pre><code>pyinstaller testing.spec </code></pre> <p>Which built an exe but getting the same error.</p> <p>If I remove the <code>import swifter</code> the exe file is working fine.</p>
<python><pyinstaller>
2025-03-18 09:37:04
0
1,075
Roshan
79,516,854
4,872,540
Can a metaclass be used as a type hint for a class instance in Python 3.12+?
<p>The project I'm working on uses a somewhat complex typing system, I have a base class called RID which uses the metaclass RIDType. Custom RID types are classes which derive from RID:</p> <pre class="lang-py prettyprint-override"><code>class RIDType(ABCMeta): ... class RID(metaclass=RIDType): ... class CustomType(RID): ... </code></pre> <p>Essentially, RIDs are instances of classes deriving from RID, but I also need to use RID types which are instances of the RIDType metaclass. From what I've found online, <code>type[RID]</code> would be the standard way of type hinting the RID class (or derived classes) not the instance. My question is whether it is supported to use my metaclass <code>RIDType</code> as a type hint, since the classes would be considered instances of it.</p> <p>This is relevant since the RID and RIDType class provide class methods which can instantiate RID instances or RID classes from a string. This is handled by lookup table in RIDType. It seems more &quot;symmetrical&quot; to use <code>RIDType</code> instead of <code>type[RID]</code> because developers would be calling methods like <code>RIDType.from_string(...)</code>.</p> <p>This is also relevant for a custom Pydantic validator which I previously implemented for RIDs as <code>Annotated[RID, RIDFieldAnnotation]</code>, so I'm unsure whether I should define the validator for RID types as <code>Annotated[RIDType, RIDTypeFieldAnnotation]</code> or <code>Annotated[type[RID], RIDTypeFieldAnnotation]</code>.</p> <p>Full class definitions here:</p> <pre class="lang-py prettyprint-override"><code>from abc import ABCMeta, abstractmethod from . import utils from .consts import ( BUILT_IN_TYPES, NAMESPACE_SCHEMES, ORN_SCHEME, URN_SCHEME ) class RIDType(ABCMeta): scheme: str | None = None namespace: str | None = None # maps RID type strings to their classes type_table: dict[str, type[&quot;RID&quot;]] = dict() def __new__(mcls, name, bases, dct): &quot;&quot;&quot;Runs when RID derived classes are defined.&quot;&quot;&quot; cls = super().__new__(mcls, name, bases, dct) # ignores built in RID types which aren't directly instantiated if name in BUILT_IN_TYPES: return cls if not getattr(cls, &quot;scheme&quot;, None): raise TypeError(f&quot;RID type '{name}' is missing 'scheme' definition&quot;) if not isinstance(cls.scheme, str): raise TypeError(f&quot;RID type '{name}' 'scheme' must be of type 'str'&quot;) if cls.scheme in NAMESPACE_SCHEMES: if not getattr(cls, &quot;namespace&quot;, None): raise TypeError(f&quot;RID type '{name}' is using namespace scheme but missing 'namespace' definition&quot;) if not isinstance(cls.namespace, str): raise TypeError(f&quot;RID type '{name}' is using namespace scheme but 'namespace' is not of type 'str'&quot;) # check for abstract method implementation if getattr(cls, &quot;__abstractmethods__&quot;, None): raise TypeError(f&quot;RID type '{name}' is missing implemenation(s) for abstract method(s) {set(cls.__abstractmethods__)}&quot;) # save RID type to lookup table mcls.type_table[str(cls)] = cls return cls @classmethod def _new_default_type(mcls, scheme: str, namespace: str | None) -&gt; type[&quot;RID&quot;]: &quot;&quot;&quot;Returns a new RID type deriving from DefaultType.&quot;&quot;&quot; if namespace: name = &quot;&quot;.join([s.capitalize() for s in namespace.split(&quot;.&quot;)]) else: name = scheme.capitalize() bases = (DefaultType,) dct = dict( scheme=scheme, namespace=namespace ) return type(name, bases, dct) @classmethod def from_components(mcls, scheme: str, namespace: str | None 
= None) -&gt; type[&quot;RID&quot;]: context = utils.make_context_string(scheme, namespace) if context in mcls.type_table: return mcls.type_table[context] else: return mcls._new_default_type(scheme, namespace) @classmethod def from_string(mcls, string: str) -&gt; type[&quot;RID&quot;]: &quot;&quot;&quot;Returns an RID type class from an RID context string.&quot;&quot;&quot; scheme, namespace, _ = utils.parse_rid_string(string, context_only=True) return mcls.from_components(scheme, namespace) def __str__(cls) -&gt; str: return utils.make_context_string(cls.scheme, cls.namespace) # backwards compatibility @property def context(cls) -&gt; str: return str(cls) class RID(metaclass=RIDType): scheme: str | None = None namespace: str | None = None @property def type(self): return self.__class__ @property def context(self): return str(self.type) def __str__(self) -&gt; str: return str(self.type) + &quot;:&quot; + self.reference def __repr__(self) -&gt; str: return f&quot;&lt;{self.type.__name__} RID '{str(self)}'&gt;&quot; def __eq__(self, other) -&gt; bool: if isinstance(other, self.__class__): return str(self) == str(other) else: return False def __hash__(self): return hash(str(self)) @classmethod def from_string(cls, string: str) -&gt; &quot;RID&quot;: scheme, namespace, reference = utils.parse_rid_string(string) return RIDType.from_components(scheme, namespace).from_reference(reference) @abstractmethod def __init__(self, *args, **kwargs): ... @classmethod @abstractmethod def from_reference(cls, reference: str): ... @property @abstractmethod def reference(self) -&gt; str: ... class ORN(RID): scheme = ORN_SCHEME class URN(RID): scheme = URN_SCHEME class DefaultType(RID): def __init__(self, _reference): self._reference = _reference @classmethod def from_reference(cls, reference): return cls(reference) @property def reference(self): return self._reference # example RID type implementation from rid_lib.core import ORN from .slack_channel import SlackChannel from .slack_workspace import SlackWorkspace class SlackMessage(ORN): namespace = &quot;slack.message&quot; def __init__( self, team_id: str, channel_id: str, ts: str, ): self.team_id = team_id self.channel_id = channel_id self.ts = ts @property def reference(self): return f&quot;{self.team_id}/{self.channel_id}/{self.ts}&quot; @property def workspace(self): return SlackWorkspace( self.team_id ) @property def channel(self): return SlackChannel( self.team_id, self.channel_id ) @classmethod def from_reference(cls, reference): components = reference.split(&quot;/&quot;) if len(components) == 3: return cls(*components) </code></pre> <p>UPDATE: adding my Pydantic field validator code here too:</p> <pre class="lang-py prettyprint-override"><code>class RIDTypeFieldAnnotation: @classmethod def __get_pydantic_core_schema__( cls, _source_type: Any, _handler: GetCoreSchemaHandler ) -&gt; CoreSchema: def validate_from_str(value: str) -&gt; RIDType: # str -&gt; RID validator return RIDType.from_string(value) from_str_schema = core_schema.chain_schema( [ core_schema.str_schema(), core_schema.no_info_plain_validator_function(validate_from_str), ] ) return core_schema.json_or_python_schema( # str is valid type for JSON objects json_schema=from_str_schema, # str or RID are valid types for Python dicts python_schema=core_schema.union_schema( [ core_schema.is_instance_schema(RIDType), from_str_schema ] ), # RIDs serialized with __str__ function serialization=core_schema.plain_serializer_function_ser_schema(str) ) @classmethod def __get_pydantic_json_schema__( cls, 
_core_schema: CoreSchema, handler: GetJsonSchemaHandler ) -&gt; JsonSchemaValue: return handler(core_schema.str_schema()) type RIDTypeField = Annotated[RIDType, RIDTypeFieldAnnotation] </code></pre>
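<p>For illustration, here is a minimal sketch with hypothetical stand-in classes (not the real RID/RIDType definitions) showing what each style of annotation accepts; type checkers generally treat <code>type[Base]</code> as the conventional spelling, while the metaclass hint also works at runtime because the classes are instances of it:</p> <pre><code>class MyMeta(type):
    # stand-in for RIDType
    registry: dict[str, type] = {}

    def __new__(mcls, name, bases, dct):
        cls = super().__new__(mcls, name, bases, dct)
        mcls.registry[name] = cls
        return cls


class Base(metaclass=MyMeta):
    # stand-in for RID
    pass


class Custom(Base):
    # stand-in for a custom RID type
    pass


def takes_meta(t: MyMeta) -&gt; str:
    # accepts any class whose metaclass is MyMeta (classes are instances of it)
    return t.__name__


def takes_type(t: type[Base]) -&gt; str:
    # accepts Base or any subclass; the more common way to hint a class object
    return t.__name__


print(takes_meta(Custom), takes_type(Custom))
</code></pre>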
<python><metaprogramming><python-typing>
2025-03-18 09:03:46
0
1,086
Aeolus
79,516,763
1,256,529
Method decorators which "tag" method - prevent overwriting by other decorators
<p>I'm investigating the pattern whereby you have a method decorator which annotates the method in some way, and then once the class is defined it looks through its methods, finds the annotated methods, and registers or processes them in some way. e.g.</p> <pre><code>def class_decorator(cls): for name, method in cls.__dict__.iteritems(): if hasattr(method, &quot;use_class&quot;): # do something with the method and class print name, cls return cls def method_decorator(view): # mark the method as something that requires view's class view.use_class = True return view @class_decorator class ModelA(object): @method_decorator def a_method(self): # do some stuff pass </code></pre> <p>(From here: <a href="https://stackoverflow.com/a/2367605/1256529">https://stackoverflow.com/a/2367605/1256529</a>)</p> <p>I wondered if anyone has a solution to the problem of multiple decorators. For example, this example works fine in isolation, but if I do the following, it will break because the <code>use_class</code> annotation will be hidden.</p> <pre><code>def hiding_decorator(fn): def wrapper(*args, **kwargs): print(&quot;do some logging etc.&quot;) return fn(*args, **kwargs) return wrapper @class_decorator class ModelA(object): @hiding_decorator @method_decorator def a_method(self): # do some stuff pass </code></pre> <p>Does anyone have a reliable and user-friendly way around this problem?</p> <p>Non-optimal solutions I can think of:</p> <ul> <li>Dictate that your decorator should be outermost (but what if other decorators also have this requirement?)</li> <li>Dictate that it only works with decorators that are transparent proxies for attributes of their inner functions/methods.</li> <li>Require other decorators to be aware of the requirements of the tagging decorator (means generic utility decorators become difficult to use).</li> </ul> <p>None of these are particularly attractive.</p>
<python><metaprogramming>
2025-03-18 08:21:17
1
3,817
samfrances
79,516,714
6,730,854
Viterbi Decoding Returns Incorrect State Sequence with One-Hot Observations in MultinomialHMM (Tried v0.3.0, v0.3.2, and v0.3.3)
<p>I'm experiencing unexpected behavior with the MultinomialHMM in hmmlearn. When using one-hot encoded observations (with n_trials=1), the Viterbi algorithm returns the state sequence incorrectly.</p> <p>In my minimal reproducible example, the decoded state sequence consists entirely of state 0, even though the parameters push for state 2.</p> <p>Steps to Reproduce:</p> <p>Use the following script as a minimal reproducible example:</p> <pre><code>import numpy as np from hmmlearn import hmm def main(): # Define HMM parameters start_prob = np.array([0.3, 0.3, 0.4]) trans_mat = np.array([ [0.8, 0.1, 0.1], [0.1, 0.8, 0.1], [0.1, 0.4, 0.5] ]) emission_mat = np.array([ [0.3, 0.3, 0.3, 0.1], [0.25, 0.25, 0.25, 0.25], [0.25, 0.25, 0.25, 0.25] ]) # Create an observation sequence: # Here, we create 20 observations, all of which are symbol 2. obs_int = np.array([2] * 20) # Convert to one-hot encoded observations (required by hmmlearn with n_trials=1) observations = np.eye(4)[obs_int] print(observations) # Initialize the HMM model. model = hmm.MultinomialHMM(n_components=3, n_trials=1, init_params=&quot;&quot;) model.startprob_ = start_prob model.transmat_ = trans_mat model.emissionprob_ = emission_mat # Decode the observation sequence using the Viterbi algorithm. logprob, state_seq = model.decode(observations, algorithm=&quot;viterbi&quot;) print(&quot;Log probability:&quot;, logprob) print(&quot;State sequence:&quot;, state_seq) if __name__ == &quot;__main__&quot;: main() </code></pre> <p>The outputs:</p> <pre><code>[[0. 0. 1. 0.] [0. 0. 1. 0.] [0. 0. 1. 0.] [0. 0. 1. 0.] [0. 0. 1. 0.] [0. 0. 1. 0.] [0. 0. 1. 0.] [0. 0. 1. 0.] [0. 0. 1. 0.] [0. 0. 1. 0.] [0. 0. 1. 0.] [0. 0. 1. 0.] [0. 0. 1. 0.] [0. 0. 1. 0.] [0. 0. 1. 0.] [0. 0. 1. 0.] [0. 0. 1. 0.] [0. 0. 1. 0.] [0. 0. 1. 0.] [0. 0. 1. 0.]] MultinomialHMM has undergone major changes. The previous version was implementing a CategoricalHMM (a special case of MultinomialHMM). This new implementation follows the standard definition for a Multinomial distribution (e.g. as in https://en.wikipedia.org/wiki/Multinomial_distribution). See these issues for details: https://github.com/hmmlearn/hmmlearn/issues/335 https://github.com/hmmlearn/hmmlearn/issues/340 Log probability: -29.52315636581463 State sequence: [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0] </code></pre>
<python><markov-chains><hidden-markov-models><hmmlearn>
2025-03-18 07:59:36
0
472
Mike Azatov
79,516,463
4,451,521
A timeout exception is not caught as such
<p>I have a function that calls an API</p> <pre><code>def somefunction(): #.... try: response = requests.post(api_url, json=payload, timeout=timeout) response.raise_for_status() result = response.json() # # Handle response # if response.status_code == 200: # result = response.json() try: structured_output = json.loads(result[&quot;message&quot;][&quot;content&quot;]) formatted_output = json.dumps(structured_output, indent=2) logger.info(json.dumps(structured_output, indent=2)) return formatted_output except json.JSONDecodeError: logger.warning(&quot;Error parsing response:&quot;, result) return f&quot;Error parsing response: {result}&quot; except requests.exceptions.Timeout: logger.error(f&quot;Request to {api_url} timed out after {timeout} seconds&quot;) raise TimeoutError(f&quot;Request to {api_url} timed out after {timeout} seconds&quot;) except TimeoutError: logger.error(f&quot;Low-level timeout while connecting to {api_url}&quot;) raise TimeoutError(f&quot;Low-level timeout while connecting to {api_url}&quot;) except requests.exceptions.RequestException as e: # ๐Ÿ” Check if the underlying exception is a TimeoutError if isinstance(e.__cause__, TimeoutError): logger.error(f&quot;Wrapped TimeoutError detected: {e}&quot;) raise TimeoutError(f&quot;Request failed due to a timeout: {e}&quot;) if hasattr(e, &quot;response&quot;) and e.response is not None: logger.error( f&quot;Request failed: {e}, Status Code: {e.response.status_code}, Response: {e.response.text}&quot; ) return f&quot;Request failed: {e}, Status Code: {e.response.status_code}, Response: {e.response.text}&quot; else: logger.error(f&quot;Request failed: {e}&quot;) return f&quot;Request failed: {e}&quot; </code></pre> <p>As you can see I ve gone to great lenghts to make sure a timeout is caught, and sometimes it is by the <code>requests.exceptions.Timeout:</code> part.</p> <pre><code>somefunction:222 - Request to http://someserver:11434/api/chat timed out after 10 seconds </code></pre> <p>however on other times I got</p> <pre><code>somefunction:232 - Request failed: ('Connection aborted.', TimeoutError('timed out')) </code></pre> <p>which is caught by the last else</p> <p>Why is this timeout not being caught as a timeout and how can I correct it?</p>
<python><python-requests><connection><timeout>
2025-03-18 06:22:16
1
10,576
KansaiRobot
79,516,316
1,635,450
psycopg_pool.ConnectionPool conninfo WARNING error connecting in 'pool-1': [Errno -2] Name or service not known
<pre><code>with ConnectionPool( conninfo = app.config[&quot;POSTGRESQL_DATABASE_URI&quot;], max_size = app.config[&quot;DB_MAX_CONNECTIONS&quot;], kwargs = connection_kwargs, ) as pool: </code></pre> <p>With a connection string of <code>postgresql://username:password@ipaddress:5432/database</code>, I get the error:</p> <pre><code>WARNING error connecting in 'pool-1': [Errno -2] Name or service not known </code></pre> <p>I can connect to it from <code>psql</code> CLI with exactly the same parameters without any problem. Note that the password contains special characters such as <code>@</code> and <code>$</code></p>
<python><postgresql><connection-string><psycopg3><connection-pool>
2025-03-18 04:40:46
1
4,280
khteh
79,516,194
9,951,273
Typing a generic iterable
<p>I'm creating a function that yields chunks of an iterable.</p> <p>How can I properly type this function so that the return value <code>bar</code> is of type <code>list[int]</code>.</p> <pre><code>from typing import Any, Generator, Sequence def chunk[T: Any, S: Sequence[T]](sequence: S, size: int) -&gt; Generator[S, None, None]: for i in range(0, len(sequence), size): yield sequence[i : i + size] foo: list[int] = [1,2,3] bar = chunk(foo, 1) </code></pre> <p><code>Sequence[T]</code> is invalid because <code>TypeVar constraint type cannot be generic PylancereportGeneralTypeIssues</code></p>
<python><python-typing>
2025-03-18 03:19:23
2
1,777
Matt
79,516,082
219,153
Is there a simpler way to write this Numpy structured array query?
<p>This Python 3.12 script</p> <pre><code>import numpy as np a = np.array([('Rex', 9, 18), ('Fido', 3, 22), ('Fido', 7, 42), ('Fluffy', 1, 30), ('Fido', 5, 19)], dtype=[('name', 'U10'), ('age', 'f4'), ('weight', 'f4')]) b = a[np.where((a['name'] == 'Fido') &amp; (a['weight'] &lt; 30))] oldestFidoUnder30 = b[np.argmax(b['age'])] </code></pre> <p>looks for the oldest dog named <code>Fido</code> and weighing less than <code>30</code> units. Is there a simpler way to write this query? Would it be simpler or more readable with Pandas or another data frame library?</p>
<python><numpy><structured-array>
2025-03-18 01:35:29
2
8,585
Paul Jurczak
79,515,992
16,674,436
Correctly Assign Street Sectors Based on Even/Odd Street Numbers and Street Segment Ranges
<p>Ok I cannot wrap my head around that.</p> <p>I have two dataframes, <code>first_df</code> and <code>second_df</code>, where <code>first_df</code> contains information about street segments, including the street name, start and end numbers of street segments, and whether the segment is for even or odd numbers. <code>second_df</code> contains street numbers and their corresponding addresses.</p> <p>The goal is to assign the correct &quot;Secteur Elementaire&quot; (elementary sector) to each street number in <code>second_df</code> based on the information in <code>first_df</code>. So it needs to pass three checks:</p> <ol> <li>Check if streetname exists (and create a subset dataframe containing only the elemnts corresponding to this streetname)</li> <li>Check if streetnumber is within the range defined by <code>first_df['Dรฉbut']</code> and <code>first_df['Fin']</code></li> <li>Check if it is even or odd, and assign the sector based on its whether the number is part of the even or odd sector.</li> </ol> <p>The issue I'm facing is that the logic to determine the correct sector based on whether the street number is even or odd is not working as expected. The current implementation is assigning the wrong sectors in some cases.</p> <pre class="lang-py prettyprint-override"><code>first_df = pd.DataFrame({ 'Voie': ['AVENUE D EPERNAY', 'AVENUE D EPERNAY', 'AVENUE D EPERNAY', 'AVENUE D EPERNAY'], 'Pรฉrimรจtre รฉlรฉmentaire': ['PROVENCAUX', 'AVRANCHES', 'MAISON BLANCHE', 'SCULPTEURS JACQUES'], 'Pรฉrimรจtre maternelle': ['AUVERGNATS-PROVENCAUX', 'AVRANCHES', 'MAISON BLANCHE', 'SCULPTEURS JACQUES'], 'Dรฉbut': [142, 1, 73, 2], 'Fin': [998, 71, 999, 140], 'Partie': ['Pair', 'Impair', 'Impair', 'Pair'] }) second_df = pd.DataFrame({ 'numero': [1, 2, 6, 7, 8, 9, 10, 12], 'nom_afnor': ['AVENUE D EPERNAY', 'AVENUE D EPERNAY', 'AVENUE D EPERNAY', 'AVENUE D EPERNAY', 'AVENUE D EPERNAY', 'AVENUE D EPERNAY', 'AVENUE D EPERNAY', 'AVENUE D EPERNAY'], 'Secteur Elementaire': ['tbd', 'tbd', 'tbd', 'tbd', 'tbd', 'tbd', 'tbd', 'tbd'] }) def sort_df(first_df, second_df): for bano_index, row in second_df.iterrows(): street_number = row['numero'] street_name = row['nom_afnor'] #print(f'first loop: {num_de_rue}') if street_name in first_df['Voie'].values: # check if street name exists reims_filtered = first_df.loc[first_df['Voie'] == street_name] # if it does create a dataframe containing only the elements matching the street name #print(reims_filtered) for reims_index, reims_matching_row in reims_filtered.iterrows(): # iterate over the rows the filtered dataframe #print(reims_matching_row) if street_number &gt;= reims_matching_row['Dรฉbut'] and street_number &lt;= reims_matching_row['Fin']: # check if street number is between range of street segment #print(f'Check range {street_number} {reims_matching_row['Pรฉrimรจtre รฉlรฉmentaire']}') if street_number % 2 == 0: # check if street number is even print(reims_index, street_number, reims_matching_row['Partie']) if reims_matching_row['Partie'] == 'Pair': # if it is even, then check which sector should be taken to be assigned based on street segment and its corresponding odd/even sector print(f'Check column {street_number} {reims_matching_row['Pรฉrimรจtre รฉlรฉmentaire']}') sector_to_assign = reims_matching_row['Pรฉrimรจtre รฉlรฉmentaire'] second_df.at[bano_index, 'Secteur Elementaire'] = sector_to_assign break else: print(f'Check odd {street_number} {reims_matching_row['Pรฉrimรจtre รฉlรฉmentaire']}') sector_to_assign = reims_matching_row['Pรฉrimรจtre รฉlรฉmentaire'] 
second_df.at[bano_index, 'Secteur Elementaire'] = sector_to_assign break return second_df sort_df(first_df, second_df) </code></pre> <p>Output of running this function:</p> <pre><code>Check odd 1 AVRANCHES 1 2 Impair 1 6 Impair Check odd 7 AVRANCHES 1 8 Impair Check odd 9 AVRANCHES 1 10 Impair 1 12 Impair numero nom_afnor Secteur Elementaire 0 1 AVENUE D EPERNAY AVRANCHES 1 2 AVENUE D EPERNAY tbd 2 6 AVENUE D EPERNAY tbd 3 7 AVENUE D EPERNAY AVRANCHES 4 8 AVENUE D EPERNAY tbd 5 9 AVENUE D EPERNAY AVRANCHES 6 10 AVENUE D EPERNAY tbd 7 12 AVENUE D EPERNAY tbd </code></pre> <p>Clearly there is an issue with the odd or even logic, especially when trying to assign <code>reims_matching_row['Partie'] == 'Pair'</code>.</p> <p>If someone has any leads…</p>
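<p>For reference, here is a sketch of the same three checks with the even/odd test folded into the segment selection, so a number is only ever matched against segments of its own parity (this is my restructuring of the logic, not a confirmed diagnosis of the original indentation):</p> <pre><code>import pandas as pd


def assign_sectors(first_df: pd.DataFrame, second_df: pd.DataFrame) -&gt; pd.DataFrame:
    def find_sector(row):
        # 1. keep only segments of the same street
        segments = first_df[first_df['Voie'] == row['nom_afnor']]
        # 3. keep only segments of the matching parity ('Pair' = even)
        parity = 'Pair' if row['numero'] % 2 == 0 else 'Impair'
        # 2. keep only segments whose range contains the number
        match = segments[(segments['Partie'] == parity)
                         &amp; (segments['Début'] &lt;= row['numero'])
                         &amp; (segments['Fin'] &gt;= row['numero'])]
        return match['Périmètre élémentaire'].iloc[0] if len(match) else 'tbd'

    out = second_df.copy()
    out['Secteur Elementaire'] = out.apply(find_sector, axis=1)
    return out


# using the two dataframes defined above
print(assign_sectors(first_df, second_df))
</code></pre>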
<python><pandas><dataframe><sorting>
2025-03-17 23:59:52
3
341
Louis
79,515,822
12,415,855
Reading emails with imap_tools returns the wrong sort order
<p>i want to read the emails from an email-account from the newsest to the oldest mail in the inbox using the following code:</p> <pre><code>from imap_tools import MailBox import os import sys from dotenv import load_dotenv path = os.path.abspath(os.path.dirname(sys.argv[0])) fn = os.path.join(path, &quot;.env&quot;) load_dotenv(fn) LOGIN_EMAIL = os.environ.get(&quot;LOGIN_EMAIL&quot;) LOGIN_PW = os.environ.get(&quot;LOGIN_PW&quot;) SERVER = os.environ.get(&quot;SERVER&quot;) with MailBox(SERVER).login(LOGIN_EMAIL, LOGIN_PW) as mailbox: for msg in mailbox.fetch(reverse=True): print(msg.date, msg.subject) input(&quot;Press!&quot;) </code></pre> <p>When i run this code i get this output for the first 3 emails:</p> <pre><code>(test) C:\DEVNEU\Fiverr2025\TRY\franksrn&gt;python test.py 2025-02-17 17:14:02+01:00 Bestellung Press! 2024-12-17 17:14:02+01:00 Bestellung Franklin Apotheke e.K. Press! 2025-02-10 12:38:46+01:00 Bestellnr. 4500606968 Press! </code></pre> <p>But when i look in the email-account i have different mails as the newest first 3 in the inbox:</p> <p><a href="https://i.sstatic.net/f534GaK6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/f534GaK6.png" alt="enter image description here" /></a></p> <p>How can i get the newest mails with the correct sorting from the email-inbox?</p>
<python><imap-tools>
2025-03-17 21:46:46
1
1,515
Rapid1898
79,515,814
13,971,251
Photomosaic library raising TypeError: slice indices must be integers or None or have an __index__ method
<p>I have the following code (a slightly modified version of <a href="https://www.tutorialspoint.com/implementing-photomosaics-in-python" rel="nofollow noreferrer">this</a>) to create a mosaic from many individual images:</p> <pre class="lang-py prettyprint-override"><code>import sys import photomosaic as phmos from skimage import io image = io.imread(&quot;r.jpg&quot;) #Get the mosaic size from the command line argument. mos_size = (int(sys.argv[1]), int(sys.argv[2])) #can be replaced with mos_size = (100, 100) for non-command line use #create all image squares and generate pool phmos.rainbow_of_squares('crks/') square_pool = phmos.make_pool('crks/*.jpg') #Create the mosaic image and save mosaic = phmos.basic_mosaic(image, square_pool, mos_size) io.imsave('mosaic_op.png', mosaic) </code></pre> <p>However, when running it (both from command line and in the terminal) I get the following error:</p> <pre><code>Traceback (most recent call last): File &quot;/home/GVA/corks/create_mosaic.py&quot;, line 15, in &lt;module&gt; mosaic = phmos.basic_mosaic(image, square_pool, mos_size) File &quot;/home/GVA/.local/lib/python3.10/site-packages/photomosaic/photomosaic.py&quot;, line 102, in basic_mosaic image = rescale_commensurate(image, grid_dims, depth) File &quot;/home/GVA/.local/lib/python3.10/site-packages/photomosaic/photomosaic.py&quot;, line 218, in rescale_commensurate return crop_to_fit(image, new_shape) File &quot;/home/GVA/.local/lib/python3.10/site-packages/photomosaic/photomosaic.py&quot;, line 873, in crop_to_fit cropped = crop(resized, crop_width) File &quot;/usr/local/lib/python3.10/site-packages/skimage/util/arraycrop.py&quot;, line 72, in crop cropped = ar[slices] TypeError: slice indices must be integers or None or have an __index__ method </code></pre> <p>What is causing this error, and how can I fix it?</p>
<python><scikit-image>
2025-03-17 21:43:32
1
1,181
Kovy Jacob
79,515,692
1,934,800
How to reference class static data in decorator method
<p>Is it possible to make the global variable <code>handlers</code> a static class variable?</p> <pre class="lang-py prettyprint-override"><code>from typing import Callable, Self # dispatch table with name to method mapping handlers:dict[str, Callable[..., None]] = {} class Foo: # Mark method as a handler for a provided request name @staticmethod def handler(name: str) -&gt; Callable[[Callable], Callable]: def add_handler(func: Callable[..., None]) -&gt; Callable: handlers[name] = func # This line is the problem return func return add_handler # This method will handle request &quot;a&quot; @handler(&quot;a&quot;) def handle_a(self) -&gt; None: pass # Handle one request with provided name, using dispatch table to determine method to call def handle(self, name: str) -&gt; None: handlers[name](self) </code></pre> <p>The goal is to have the decorator add the decorated method into a dict that is part of the class. The dict can then be used to dispatch methods via the name used in request messages it will receive.</p> <p>The problem seems to be how to refer to class data inside a decorator.</p> <p>Of course using <code>self.handlers</code> won't work, as <code>self</code> isn't defined. The decorator is called when the class is defined and there aren't any instances of the class created yet for <code>self</code> to reference.</p> <p>Using <code>Foo.handlers</code> doesn't work either, as the class name isn't defined until after the class definition is finished.</p> <p>If this wasn't a decorator, then <code>handler()</code> could be defined as a <code>@classmethod</code>, and then the class would be the first argument to the method. But it doesn't appear possible to make a decorator a class method.</p> <p>An example of how this might be used, would be as a handler for a server requests, e.g. a websocket server:</p> <pre class="lang-py prettyprint-override"><code>from websockets.sync.server import serve def wshandler(websocket): f = Foo() # Create object to handle all requests for msg in websocket: decoded = json.loads(msg) # Decoded message will have a field named 'type', which is a string indicating the request type. Foo.handler(decoded['type']) with serve(wshandler, &quot;localhost&quot;, 8888) as server: server.serve_forever() </code></pre>
<python>
2025-03-17 20:27:36
4
4,817
TrentP
79,515,585
10,242,281
How to install the datascience Python package in 2025?
<p>I'm trying to install python package and still getting errors after trying all options for online and offline installation. Is this package still valid? maybe there is some license restrictions? I'm using terminal windows in <code>pyCharm</code></p> <p><code>https://github.com/data-8/datascience</code></p> <p>Thanks L</p>
<python><package>
2025-03-17 19:31:29
0
504
Mich28
79,515,499
5,678,653
Categorize 3D points into octants based on their signs in numpy
<p>I have the set of all <em>8</em> possible signs that non-zero 3D (euclidean). I want to efficiently categorise 3D points <code>pts</code> into octants based on their signs, as part of an ongoing project on octahedral spaces, as per <a href="https://stackoverflow.com/questions/79498948">this and many other questions</a>.</p> <p>The entire project is written in numpy, and it does not make sense to use another tool for this part of the project. The purpose of the partitioning is to handle pipelines when dealing with octant specific transforms, as well as projections.</p> <p>I want to use those to partition/bin a numpy array <code>points</code> of x,y,z (euclidean) coordinates such that subsequent array belongs to the octant governed by the rule that the first sign will filter x, second y, third z, and a -1 means &lt;0, while a +1 means &gt;= 0.</p> <p>I can write it all out...</p> <pre class="lang-py prettyprint-override"><code>import numpy as np def octate(pts): results = {} results[(-1, -1, -1)] = pts[np.where((pts[:, 0] &lt; 0) &amp; (pts[:, 1] &lt; 0) &amp; (pts[:, 2] &lt; 0))] results[(-1, -1, +1)] = pts[np.where((pts[:, 0] &lt; 0) &amp; (pts[:, 1] &lt; 0) &amp; (pts[:, 2] &gt;= 0))] results[(-1, +1, -1)] = pts[np.where((pts[:, 0] &lt; 0) &amp; (pts[:, 1] &gt;= 0) &amp; (pts[:, 2] &lt; 0))] results[(-1, +1, +1)] = pts[np.where((pts[:, 0] &lt; 0) &amp; (pts[:, 1] &gt;= 0) &amp; (pts[:, 2] &gt;= 0))] results[(+1, -1, -1)] = pts[np.where((pts[:, 0] &gt;= 0) &amp; (pts[:, 1] &lt; 0) &amp; (pts[:, 2] &lt; 0))] results[(+1, -1, +1)] = pts[np.where((pts[:, 0] &gt;= 0) &amp; (pts[:, 1] &lt; 0) &amp; (pts[:, 2] &gt;= 0))] results[(+1, +1, -1)] = pts[np.where((pts[:, 0] &gt;= 0) &amp; (pts[:, 1] &gt;= 0) &amp; (pts[:, 2] &lt; 0))] results[(+1, +1, +1)] = pts[np.where((pts[:, 0] &gt;= 0) &amp; (pts[:, 1] &gt;= 0) &amp; (pts[:, 2] &gt;= 0))] return results if __name__ == '__main__': rxy = np.random.uniform(-4, 4, [1000, 3]) roc = octate(rxy) </code></pre> <p>But it seems pretty clunky and redundant.</p> <p>Likewise, the above uses 8 operations, but it's got to be possible to subdivide based on the <code>&lt;0</code> / <code>&gt;=0</code> dichotomy across 3 axes? I would have thought that 3 operations should be feasible.</p> <p>A manual method involves code redundancy, adds unnecessary fragility to the code, and it lacks scalability to higher dimensions. I am looking for something that is concise, efficient, and readable. I have had a look at binning, partition, and arg_partition but they don't seem to be a good fit, though I might be mistaken.</p>
<python><numpy><binning>
2025-03-17 18:54:34
3
2,248
Konchog
79,515,343
850,781
How to compare objects based on their superclass
<p>How do I check for equality using super-class-level comparison?</p> <pre><code>from dataclasses import dataclass @dataclass(order=True) class Base: foo: int @dataclass(order=True) class X(Base): x: str @dataclass(order=True) class Y(Base): y: float x = X(0, &quot;x&quot;) y = Y(1, 1.0) </code></pre> <p>How do I compare <code>x</code> and <code>y</code> on their <code>Base</code> attributes only?</p> <p><code>Base.__eq__(x,y)</code> seems to do the job, is this the &quot;pythonic&quot; way?</p>
<python><python-dataclasses>
2025-03-17 17:30:34
4
60,468
sds
79,515,167
2,266,881
Filtering polars dataframe by row with boolean mask
<p>I'm trying to filter a Polars dataframe by using a boolean mask for the rows, which is generated from conditions on an specific column using:</p> <pre><code>df = df[df['col'] == cond] </code></pre> <p>And it's giving me an error because that filter is meant for column filter:</p> <blockquote> <p>expected xx values when selecting columns by boolean mask, got yy</p> </blockquote> <p>Where xx is the total columns and yy is the count of True's in the mask result.</p> <p>According to the Polars API, that syntax should apply to filter to the rows (the same as how Pandas work), but it's instead trying to apply it to the columns.</p> <p>Is there any way to change this behaviour?</p> <p>PS: Please don't advice to use .filter or .sql instead, that's not what I'm asking here.</p>
<python><dataframe><python-polars><polars>
2025-03-17 16:26:30
1
1,594
Ghost
79,515,104
13,440,165
Sliding window Singular Value Decomposition
<p>Throughout the question, I will use Python notation. Suppose I have a matrix <code>A</code> of shape <code>(p, nb)</code> and I create a sliding window, taking the submatrix of <code>p</code> rows and <code>n</code> columns <code>Am = A[:, m : m + n]</code>. Now I want to compute it <strong>singular value decomposition</strong> (SVD):</p> <pre><code>U_m, S_m, Vh_m = svd(Am) = svd(A[:, m:m+n] </code></pre> <p>and then go to the next window <code>m+1</code> and compute its SVD:</p> <pre><code>U_m1, S_m1, Vh_m1 = svd(Am1) = svd(A[:, m+1:m+1+n] </code></pre> <p>Computing full SVD from scratch has the complexity of <code>o(min(m*n**2, n*m**2))</code>.</p> <p>I want to compute the SVD of the <code>m+1</code> window using the SVD of the <code>m</code> window without computing the full SVD from scratch, so it will be more efficient (with less complexity) than doing a full SVD from scratch. Also I prefer it won't resort to low-rank approximation but will assume that <code>rank(Am) = min(p, n)</code>.</p> <pre><code>U_m1, S_m1, Vh_m1 = sliding_svd(U_m, S_m, Vh_m, A[:,m+1:m+1+n], A[:,m],A[:,m+n]) </code></pre> <p>A similar problem is called <strong>incremental SVD</strong>. To my problem, I call <strong>sliding SVD</strong> or <strong>moving SVD</strong> or <strong>sliding window SVD</strong>.</p> <p>I asked a similar question in Math Stack Exchange: <a href="https://math.stackexchange.com/questions/5046319/sliding-singular-value-decomposition-sliding-svd-moving-svd">https://math.stackexchange.com/questions/5046319/sliding-singular-value-decomposition-sliding-svd-moving-svd</a></p> <p>I am looking for code or a paper that can be implemented in Python using NumPy and SciPy to solve this problem, or at least some guiding (e.g. related papers that deal with a similar problem).</p>
<python><algorithm><svd>
2025-03-17 16:04:41
1
883
Triceratops
79,515,072
2,893,712
Pandas Join Two Series Based on Conditions
<p>I have a dataframe that has information about employee's employment info and I am trying to combine with another dataframe that has their Employee ID #.</p> <p><code>df</code></p> <pre><code>Name SSN Doe, John A XXXX-XX-1234 Doe, Jane B XXXX-XX-9876 Test, Example XXXX-XX-0192 </code></pre> <p><code>Employee_Info</code></p> <pre><code>First_Name Last_Name SSN EmployeeID John Doe 999-45-1234 12 JANE DOE 999-45-9876 13 Example Test 999-45-0192 14 </code></pre> <p>My desired output is:</p> <pre><code>Name SSN EmployeeID Doe, John A XXX-XX-1234 12 Doe, Jane B XXX-XX-9876 13 Test, Example XXX-XX-0192 14 </code></pre> <p>The <code>df</code> dataframe actually has the SSN masked except for the last 4 characters. Here is the code I have currently:</p> <pre><code>df['SSN_Last_4'] = df['SSN4'].str[-4:] Employee_Info['SSN_Last_4'] = Employee_Info['SSN'].str[-4:] df2 = pd.merge(df, Employee_Info, on='SSN', how='left') </code></pre> <p>However because some employees might have the same last 4 digits of SSN, I need to also match based on name. However the caveat is that the <code>Name</code> in <code>df</code> is the employee fullname (which might include middle initial) and the case might be different. My original idea was to split the Name on <code>, </code> and drop middle initial, and then convert all the name columns to be lowercase and modify the join. However I feel that there are better methods to join the data.</p>
<python><pandas>
2025-03-17 15:54:57
2
8,806
Bijan
79,515,028
9,422,807
Find the min int value for each string key in a nested list in Python
<p>I have a nested li below, I wanted to return a nested li with the string with and min of the int value within the same string value. Any idea?</p> <pre><code>li=[['a', 10], ['a', 20], ['a', 20], ['a', 40], ['a', 50], ['a', 60] , ['b', 10], ['b', 20], ['b', 30], ['b', 40] , ['c', 10], ['c', 10], ['c', 20]] </code></pre> <p>return</p> <pre><code>min_li=[['a', 10], ['b', 10], ['c', 10]] </code></pre>
<python><nested-lists>
2025-03-17 15:32:56
2
413
Liu Yu
79,514,922
20,895,654
Most performant approach to find closest match from unordered collection
<p>I'm wondering what the best approach for finding the closest match to a given value from a collection of items is. The most important part is the lookup time relative to the input size, the data can be shuffled and moved around as much as needed, as long as the lookup is therefore faster.</p> <p>Here the initial script:</p> <pre><code>MAX = 1_000_000 MIN = 0 target: float = 333_333.0 collection: set[float] = {random.uniform(MIN, MAX) for _ in range(100_000)} # --- Code here to find closest as fast as possible, now and for future lookups --- assert closest in collection and all(abs(target - v) &gt;= delta for v in collection) </code></pre> <h2>Approach 1</h2> <p>Iterate through all values and update <code>closest</code> accordingly. Very simple but very slow for big collections.</p> <pre><code>closest: float = next(iter(collection)) # get any element delta: float = abs(closest - target) for v in collection: tmp_delta = abs(v - target) if tmp_delta &lt; delta: closest = v delta = tmp_delta </code></pre> <h2>Approach 2</h2> <p>Sort data and then find closest via binary search. Sort time is O(n log n), but future lookups will only take O(log n) time to find!</p> <pre><code>sorted_collection: list[float] = sorted(collection) idx = bisect.bisect_right(sorted_collection, target) if idx == 0: return sorted_collection[0] if idx == len(sorted_collection): return sorted_collection[-1] before, after = sorted_collection[idx - 1], sorted_collection[idx] return before if target - before &lt;= after - target else after </code></pre> <h2>Approach 3</h2> <p>Using <code>dict</code> with some custom hashing? I thought about implemenenting a <code>__hash__</code> method for a custom class that could then be used to find values with (armored) O(1) lookup time, but couldn't get anything quite working without making some previous assumptions about the data involved and wonder if it is even possible.</p> <p>And that is where I got to in a nutshell. I am wondering if there is a faster way than the simple sorting + binary search approach and if my idea with <code>dict</code> + custom hash function is somehow possible.</p>
<python><algorithm><hash><lookup><closest>
2025-03-17 14:48:01
1
346
JoniKauf
79,514,906
11,062,613
How to wrap NumPy functions in Numba-jitted code with persistent disk caching?
<p>Numba reimplements many NumPy functions in pure Python and uses LLVM to compile them, resulting in generally efficient performance. However, some Numba implementations show slower performance compared to their optimized NumPy counterparts, such as numpy.sort().</p> <p>A quick approach to wrap NumPy's optimized functions within Numba-jitted code is to use an &quot;objmode&quot; block:</p> <pre><code>import numpy as np from numba import njit, objmode # Original numpy.sort() def sort_numpy(arr: np.ndarray) -&gt; np.ndarray: return np.sort(arr) # Wrapped numpy.sort() in objmode @njit(cache=True) def sort_numpy_obj_njit(arr: np.ndarray) -&gt; np.ndarray: out = np.empty_like(arr) with objmode: out[:] = sort_numpy(arr) return out </code></pre> <p>Unfortunately, each time the kernel restarts, the memory address (function pointer) of np.sort can change. This leads to recompilation of the cached function, as Numba's caching mechanism relies on consistent function pointers. Caching to disk is effective only within a single Python session.</p> <p>Is there a method to wrap NumPy functions using Numba, potentially through external C functions, that enables persistent disk caching across kernel restarts?</p>
<python><numpy><numba>
2025-03-17 14:40:03
0
423
Olibarer
79,514,805
65,424
Having trouble with Posit great_tables formatting of whitespace-sensitive text: fmt_markdown, cols_width ignored
<p>I am using the great_tables python library from Posit and am trying to format some long text for a particular column in a html table derived from a pandas DataFrame inside a Jupyter Notebook .</p> <p>The text represents a Protein Sequence alignment and these are best showed with a fixed width font and with column width larger than the length of the alignment , so the matched sequences, often in upper and lower case and other symbols lie on top of each other.</p> <p>However I cannot get great_tables to format the sequences correctly in the output. I am wondering how to control the way whitespace or the col_width is assigned, because it seems to be overridden</p> <p>The output format ideally lays out a string with the alignment information within a great_tables cell:</p> <pre><code>..........Right_justfied_ID : ABCDEFG-LONG-SEQUENCE-OF-AMINO-ACIDS ............................. abcdefg-long-sequence-of-information ...............second_ID_NUM: AACDeFF-LONG-SEQUENCE-NUMBER-2-ofAAS ............................: Additional annotation information--- </code></pre> <p>I am currently formatting the string in python within the cell, but when they are rendered in the table , despite using a fixed width font. The final text seems to ignore whitespace and messes up the layout within the cell.</p> <pre><code> def create_dummy_dataframe_mkdown(num_elements = 100): pat_id = [] seq_top = create_random_aa_seq(150) seq_bot = [] ali = [] for i in range(num_elements): pat_id.append(create_dummy_pat_ids()) seq_bottom_new = mutate_sequence_randomly(seq_top) seq_bot.append(seq_bottom_new) ali.append(write_alignment_string_mkdown(seq_top, seq_bottom_new)) data ={&quot;pat_id&quot;: pat_id, &quot;seq_bot&quot;:seq_bot , &quot;Alignment&quot;: ali} return pandas.DataFrame(data) # Create DataFrame with alignment df_mkdown = create_dummy_dataframe() # Now creating the great_tables gt_mkdown = GT(df_mkdown) gt_mkdown.cols_width(cases={&quot;Alignment&quot;:&quot;3000px&quot;}).fmt_markdown(columns=&quot;Alignment&quot;) </code></pre> <p>No matter what I try , the markdown table or the cols_width overrules the white space. I even tried not using markdown and still the whitespace still required to make everything align was not included in the cell. Please can you help me troubleshoot. A more suitable working example is at this gist: <a href="https://gist.github.com/harijay/177caf12d312b22a93a540a08c03713f" rel="nofollow noreferrer">https://gist.github.com/harijay/177caf12d312b22a93a540a08c03713f</a></p>
<python><markdown><great-tables><posit>
2025-03-17 13:59:41
2
11,935
harijay
79,514,802
8,079,611
Have a 3D Effect on Y-axis of a Matplotlib Graph
<p>I have been trying to mimic this bar graph style that I found online. I was able to mimic several different parts, however, I am unable to mimic this 3D feel on the Y-axis. Using Python, any ideas of how could I make this happen? I am happy to use a different library if needed (tried Seaborn but same problem, no 3D effect. <a href="https://i.sstatic.net/AJsZ2cE8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AJsZ2cE8.png" alt="enter image description here" /></a></p> <p>Don't expect you to change my code as it is too long but if you wanna check what I have:</p> <pre><code>def create_bar_chart( countries, values, date_str, title ): &quot;&quot;&quot; Create an equity allocation chart with countries sorted by allocation percentage Parameters ---------- - countries: list of country names - values: list of allocation percentages (0-100) - date_str: date to display as subtitle - title: main chart title &quot;&quot;&quot; # ---- DATA PREPARATION ---- # Sort data by values in descending order (highest allocation first) data = sorted(zip(countries, values, strict=False), key=lambda x: x[1], reverse=True) sorted_countries = [item[0] for item in data] sorted_values = [item[1] for item in data] # Reverse the lists to get highest value at the top of the chart # This makes the chart more intuitive to read (largest values at the top) sorted_countries.reverse() sorted_values.reverse() # ---- FIGURE SETUP ---- # Create figure and axis with specified dimensions (width=12, height=6 inches) fig, ax = plt.subplots(figsize=(12, 6)) # ---- PLOTTING THE BARS ---- # Create horizontal bars with yellow color (#FFDC00) # zorder=3 ensures bars appear in front of grid lines # height=0.5 controls the thickness of each bar bars = ax.barh(sorted_countries, sorted_values, color=&quot;#FFDC00&quot;, height=0.5, zorder=3) # ---- ADDING PERCENTAGE LABELS ---- # Add text labels showing the exact percentage at the end of each bar for i, bar in enumerate(bars): value = sorted_values[i] # Get the correct value for this bar width = bar.get_width() label_pos = width + 1 # Position label 1 unit to the right of bar end # Place text vertically centered on the bar with 1 decimal place precision ax.text(label_pos, bar.get_y() + bar.get_height() / 2, f&quot;{value:.1f}%&quot;, va=&quot;center&quot;, fontsize=9) # ---- ADDING 3D VISUAL EFFECTS ---- # Create gray divider strip on left side of chart (3D effect) divider_width = 2 ax.add_patch(Rectangle((-divider_width, -0.5), divider_width, len(sorted_countries), color=&quot;#E6E6E6&quot;, zorder=1)) # Add 3D effect details for each country row for i, country in enumerate(sorted_countries): # Right shadow edge - creates depth illusion ax.add_patch( Polygon( [(0, i - 0.25), (0, i + 0.25), (-0.15, i + 0.25), (-0.15, i - 0.25)], closed=True, color=&quot;#AAAAAA&quot;, zorder=2, ) ) # Top shadow edge - enhances 3D appearance ax.add_patch( Polygon( [(-divider_width, i + 0.25), (0, i + 0.25), (-0.15, i + 0.25), (-divider_width - 0.15, i + 0.25)], closed=True, color=&quot;#C0C0C0&quot;, zorder=2, ) ) # ---- GRID AND AXES STYLING ---- # Add light gray grid lines for x-axis only # zorder=0 places grid behind all other elements ax.grid(axis=&quot;x&quot;, color=&quot;#DDDDDD&quot;, linestyle=&quot;-&quot;, linewidth=0.5, zorder=0) ax.set_axisbelow(True) # Ensures grid is behind other elements # Position x-axis ticks and labels at the top of the chart ax.xaxis.tick_top() # Create ticks from 0% to 100% in increments of 10% ax.set_xticks(np.arange(0, 110, 10)) 
ax.set_xticklabels([f&quot;{x}%&quot; for x in range(0, 110, 10)]) # Remove all border lines around the plot for spine in ax.spines.values(): spine.set_visible(False) # ---- TITLE AND SUBTITLE ---- # Add main title in bold at specific position title_obj = plt.figtext(0.09, 0.925, title, fontsize=14, fontweight=&quot;bold&quot;) # Add date subtitle in gray below the title date_obj = plt.figtext(0.09, 0.875, f&quot;as of {date_str}&quot;, fontsize=10, color=&quot;gray&quot;) # Remove y-axis tick marks but keep the country labels ax.tick_params(axis=&quot;y&quot;, which=&quot;both&quot;, left=False) # Set x-axis limit to 105 to leave room for percentage labels ax.set_xlim(0, 105) # Add padding to left margin for better appearance plt.subplots_adjust(left=0.09) # Adjust overall layout to accommodate title and subtitle area plt.tight_layout(rect=[0.0, 0.0, 1.0, 0.85]) # Return the figure and axis for further customization if needed return fig, ax </code></pre> <p>Disclosure - I changed part of the code but I generated above Python using Chatgpt. (I also tried using Claude)</p>
<python><matplotlib><seaborn>
2025-03-17 13:58:54
0
592
FFLS
79,514,742
6,382,434
Llama Index AgentWorkflow WorkflowRuntimeError: Error in step 'run_agent_step': 'toolUse'
<p>I have a simple llama-index AgentWorkflow based on the first example from this <a href="https://docs.llamaindex.ai/en/stable/examples/agent/agent_workflow_basic/" rel="nofollow noreferrer">llama-index doc example notebook</a>:</p> <pre class="lang-py prettyprint-override"><code>from llama_index.core.agent.workflow import AgentWorkflow import asyncio async def magic_number(): &quot;&quot;&quot;Get the magic number.&quot;&quot;&quot; print(&quot;Here&quot;) await asyncio.sleep(1) return 42 workflow = AgentWorkflow.from_tools_or_functions( [magic_number], verbose=True, llm=llm # &lt;--- Need to define llm for this to run ) async def main(): result = await workflow.run(user_msg=&quot;Get the magic number&quot;) print(result) if __name__ == &quot;__main__&quot;: asyncio.run(main(), debug=True) </code></pre> <p>Which produces this error:</p> <pre class="lang-py prettyprint-override"><code>Running step init_run Step init_run produced event AgentInput Executing &lt;Task pending name='init_run' coro=&lt;Workflow._start.&lt;locals&gt;._task() running at tasks.py:410&gt; took 0.135 seconds Running step setup_agent Step setup_agent produced event AgentSetup Running step run_agent_step Executing &lt;Task pending name='run_agent_step' coro=&lt;Workflow._start.&lt;locals&gt;._task() running at tasks.py:410&gt; took 0.706 seconds Exception in callback Dispatcher.span.&lt;locals&gt;.wrapper.&lt;locals&gt;.handle_future_result(span_id='Workflow.run...-e79838aa3b7a', bound_args=&lt;BoundArgumen...mory': None})&gt;, instance=&lt;llama_index....00203B74F7620&gt;, context=&lt;_contextvars...00203B6D93440&gt;)(&lt;WorkflowHand...handler.py:20&gt;) at dispatcher.py:274 handle: &lt;Handle Dispatcher.span.&lt;locals&gt;.wrapper.&lt;locals&gt;.handle_future_result(span_id='Workflow.run...-e79838aa3b7a', bound_args=&lt;BoundArgumen...mory': None})&gt;, instance=&lt;llama_index....00203B74F7620&gt;, context=&lt;_contextvars...00203B6D93440&gt;)(&lt;WorkflowHand...handler.py:20&gt;) at workflow.py:553&gt; source_traceback: Object created at (most recent call last): File &quot;test.py&quot;, line 36, in &lt;module&gt; asyncio.run(main(), debug=True) File &quot;runners.py&quot;, line 194, in run return runner.run(main) File &quot;runners.py&quot;, line 118, in run return self._loop.run_until_complete(task) File &quot;base_events.py&quot;, line 708, in run_until_complete self.run_forever() File &quot;base_events.py&quot;, line 679, in run_forever self._run_once() File &quot;base_events.py&quot;, line 2019, in _run_once handle._run() File &quot;events.py&quot;, line 89, in _run self._context.run(self._callback, *self._args) File &quot;workflow.py&quot;, line 553, in _run_workflow result.set_exception(e) Traceback (most recent call last): File &quot;workflow.py&quot;, line 304, in _task new_ev = await instrumented_step(**kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;dispatcher.py&quot;, line 368, in async_wrapper result = await func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;multi_agent_workflow.py&quot;, line 329, in run_agent_step agent_output = await agent.take_step( ^^^^^^^^^^^^^^^^^^^^^^ ...&lt;4 lines&gt;... ) ^ File &quot;function_agent.py&quot;, line 48, in take_step async for last_chat_response in response: ...&lt;16 lines&gt;... ) File &quot;callbacks.py&quot;, line 88, in wrapped_gen async for x in f_return_val: ...&lt;8 lines&gt;... 
last_response = x File &quot;base.py&quot;, line 495, in gen tool_use = content_block_start[&quot;toolUse&quot;] ~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^ KeyError: 'toolUse' The above exception was the direct cause of the following exception: Traceback (most recent call last): File &quot;events.py&quot;, line 89, in _run self._context.run(self._callback, *self._args) ~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;dispatcher.py&quot;, line 286, in handle_future_result raise exception File &quot;workflow.py&quot;, line 542, in _run_workflow raise exception_raised File &quot;workflow.py&quot;, line 311, in _task raise WorkflowRuntimeError( f&quot;Error in step '{name}': {e!s}&quot; ) from e llama_index.core.workflow.errors.WorkflowRuntimeError: Error in step 'run_agent_step': 'toolUse' Traceback (most recent call last): File &quot;workflow.py&quot;, line 304, in _task new_ev = await instrumented_step(**kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;dispatcher.py&quot;, line 368, in async_wrapper result = await func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;multi_agent_workflow.py&quot;, line 329, in run_agent_step agent_output = await agent.take_step( ^^^^^^^^^^^^^^^^^^^^^^ ...&lt;4 lines&gt;... ) ^ File &quot;function_agent.py&quot;, line 48, in take_step async for last_chat_response in response: ...&lt;16 lines&gt;... ) File &quot;callbacks.py&quot;, line 88, in wrapped_gen async for x in f_return_val: ...&lt;8 lines&gt;... last_response = x File &quot;base.py&quot;, line 495, in gen tool_use = content_block_start[&quot;toolUse&quot;] ~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^ KeyError: 'toolUse' The above exception was the direct cause of the following exception: Traceback (most recent call last): File &quot;test.py&quot;, line 36, in &lt;module&gt; asyncio.run(main(), debug=True) ~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^ File &quot;runners.py&quot;, line 194, in run return runner.run(main) ~~~~~~~~~~^^^^^^ File &quot;runners.py&quot;, line 118, in run return self._loop.run_until_complete(task) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^ File &quot;base_events.py&quot;, line 721, in run_until_complete return future.result() ~~~~~~~~~~~~~^^ File &quot;test.py&quot;, line 31, in main result = await workflow.run(user_msg=&quot;Get the magic number&quot;) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;workflow.py&quot;, line 542, in _run_workflow raise exception_raised File &quot;workflow.py&quot;, line 311, in _task raise WorkflowRuntimeError( f&quot;Error in step '{name}': {e!s}&quot; ) from e llama_index.core.workflow.errors.WorkflowRuntimeError: Error in step 'run_agent_step': 'toolUse' </code></pre> <p>This should be simpler than the example in the docs, but I keep getting this error. Can anyone help me understand why?</p> <p>I am running Python 3.13, llama-index 0.12.24.post1, and using LLM anthropic claude 3.5 sonnet.</p>
<python><llama-index>
2025-03-17 13:35:51
1
19,000
LMc
79,514,688
3,648,768
python requests on URL with bad certificate [SSL: SSLV3_ALERT_HANDSHAKE_FAILURE]
<p>I need to get data from a URL that has some certificate problems. It works with <code>curl -k</code> (that skips verification). So I tried with <code>verify=False</code> in python <code>requests</code> but I'm still getting [SSL: SSLV3_ALERT_HANDSHAKE_FAILURE]. Any idea how to bypass this?</p> <p>I'm using python 3.10.12 and requests 2.32.3 Here goes a code for testing purposes:</p> <pre class="lang-py prettyprint-override"><code>import os import requests # curl fails teste = os.system(&quot;curl -o capabilities.xml 'https://geoserver.car.gov.br/geoserver/ows?service=WMS&amp;version=1.3.0&amp;request=GetCapabilities'&quot;) # ignoring verification -k works teste = os.system(&quot;curl -k -o capabilities.xml 'https://geoserver.car.gov.br/geoserver/ows?service=WMS&amp;version=1.3.0&amp;request=GetCapabilities'&quot;) # Trying with python url = &quot;https://geoserver.car.gov.br/geoserver/ows&quot; params = { 'service':'WMS', 'version':'1.3.0', 'request':'GetCapabilities' } # this fails with certificate error teste = requests.get(url, params) # verify = False does not do the trick teste = requests.get(url, params, verify=False) </code></pre> <p>Update:</p> <p>It appears as though the site uses only TLS1.2 for cypher suit and python requests does not like that. Any way to have python use that Cypher?</p> <p><a href="https://i.sstatic.net/6XSrxHBM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6XSrxHBM.png" alt="enter image description here" /></a></p>
<python><ssl>
2025-03-17 13:11:01
1
630
Daniel
79,514,684
17,551,958
Encountering Flet Build APK Issue on Windows Machine
<p>I am trying to build the android apk of a simple python flet application on my Windows 10 machine. However, despite the fact that I have installed Android Studio, flutter, Java SDK, gradle, accepted all the licences, and set all the system paths, I am still encountering the error below:</p> <pre><code>[13:08:01] 1 Failed to validate Flutter version. Flutter 3.27.4 installed โœ… [13:08:08] JDK installed โœ… Android SDK installed โœ… [13:08:29] Created Flutter bootstrap project from gh:flet-dev/flet-build-template with ref &quot;0.27.6&quot; โœ… [13:14:38] Packaged Python app โœ… [13:14:55] Customized app icons โœ… [13:15:38] Generated app icons โœ… Customized app splash images โœ… [13:16:04] Generated splash screens โœ… [13:16:48] Running Gradle task 'assembleRelease'... Running Gradle task 'assembleRelease'... 32.8s FAILURE: Build failed with an exception. * What went wrong: Could not resolve all artifacts for configuration 'classpath'. &gt; Failed to transform gradle-1.0.0.jar (project :gradle) to match attributes {artifactType=jar, org.gradle.category=library, org.gradle.dependency.bundling=external, org.gradle.internal.hierarchy-collected=false, org.gradle.internal.instrumented=instrumented-project-dependency, org.gradle.jvm.version=17, org.gradle.libraryelements=jar, org.gradle.usage=java-runtime}. &gt; Could not read workspace metadata from C:\Users\IFEANYI\.gradle\caches\transforms-4\3693a314d72045c583dc22f194ba8794\metadata.bin * Try: &gt; Run with --stacktrace option to get the stack trace. &gt; Run with --info or --debug option to get more log output. &gt; Run with --scan to get full insights. &gt; Get more help at https://help.gradle.org. BUILD FAILED in 7s Gradle task assembleRelease failed with exit code 1 [13:17:08] Doctor summary (to see all details, run flutter doctor -v): [โˆš] Flutter (Channel stable, 3.27.4, on Microsoft Windows [Version 10.0.19045.5247], locale en-NG) [โˆš] Windows Version (Installed version of Windows is version 10 or higher) [โˆš] Android toolchain - develop for Android devices (Android SDK version 35.0.1) [โˆš] Chrome - develop for the web [โˆš] Visual Studio - develop Windows apps (Visual Studio Build Tools 2019 16.11.45) [โˆš] Android Studio (version 2024.3) [โˆš] Connected device (3 available) [โˆš] Network resources โ€ข No issues found! โ•ญโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฎโ”‚ Error building Flet app - see the log of failed command above. </code></pre> <p>I have even tried downgrading my flet package to version 0.24, but that could not even interact with my <code>pyproject.toml</code> file. I have also tried out the suggestions in this <a href="https://stackoverflow.com/questions/79256469/flet-build-apk-is-not-creating-the-apk-file">SO post</a> to no avail.</p> <p>Please if anybody has any idea on what to do, I would really appreciate your sharing. Thanks!</p>
<python><android><flutter><apk><flet>
2025-03-17 13:10:00
0
1,392
Ifeanyi Idiaye
79,514,574
4,000,073
How to create a set with the values 1, 0, True, and False along with other values in the same set
<p>I am trying to create a set having different values along with True, False, 1, 0.</p> <p>I am aware that Python treats 1 as True and 0 as False. Imperatively, since the set keeps only the <em>unique</em> values, my set stores either of 1 or True, and of 0 or False.</p> <p>Is there any workaround to deal with the above requirement?</p> <pre><code>set1 = {&quot;Python&quot;, &quot;Java&quot;, 1, 0, 23.45, True, False} </code></pre> <p>Output1: <code>{&quot;Python&quot;, &quot;Java&quot;, 1, 0, 23.45}</code> (not in the same order obviously)</p> <p>Output2: <code>{&quot;Python&quot;, &quot;Java&quot;, 1, False, 23.45}</code> (not in the same order obviously)</p>
<python><data-structures><set>
2025-03-17 12:28:05
1
311
Venkata