Each record below has the following columns: QuestionId (int64), UserId (int64), QuestionTitle (string), QuestionBody (string, HTML), Tags (string), CreationDate (string, ranging 2022-12-10 09:42:47 to 2025-11-01 19:08:18), AnswerCount (int64), UserExpertiseLevel (int64), UserDisplayName (string).
79,626,166
219,153
Why does cv.convexityDefects fail in this example? Is it a bug?
<p>This script:</p> <pre><code>import numpy as np, cv2 as cv contour = np.array([[0, 0], [1, 0], [1, 1], [0.5, 0.2], [0, 0]], dtype='f4') hull = cv.convexHull(contour, returnPoints=False) defects = cv.convexityDefects(contour, hull) </code></pre> <p>fails and produces this error message:</p> <pre><code> File &quot;/home/paul/upwork/pickleball/code/so-65-ocv-convexity-defects.py&quot;, line 5, in &lt;module&gt; defects = cv.convexityDefects(contour, hull) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ cv2.error: OpenCV(4.11.0) /io/opencv/modules/imgproc/src/convhull.cpp:319: error: (-215:Assertion failed) npoints &gt;= 0 in function 'convexityDefects' </code></pre> <p>What is the reason? Here is a plot of <code>contour</code>:</p> <p><a href="https://i.sstatic.net/eATeo9cv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eATeo9cv.png" alt="enter image description here" /></a></p> <p>And <code>hull</code> is:</p> <pre><code>[[0] [1] [2]] </code></pre>
<python><opencv><convex-hull>
2025-05-17 04:00:18
2
8,585
Paul Jurczak
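A note on the question above: `cv.convexityDefects` only accepts contours with integer (`CV_32S`) point coordinates; the internal type check returns -1 points for a float32 array, which is what trips the `npoints >= 0` assertion. A minimal sketch of a workaround, assuming the float dtype is indeed the trigger, is to scale the contour onto an integer grid before rounding so the fractional point keeps its geometry:

```python
import numpy as np
import cv2 as cv

contour = np.array([[0, 0], [1, 0], [1, 1], [0.5, 0.2], [0, 0]], dtype='f4')

# Scale up before rounding: convexityDefects requires int32 coordinates,
# and a plain cast would collapse (0.5, 0.2) onto a corner.
contour_i = np.rint(contour * 1000).astype(np.int32)

hull = cv.convexHull(contour_i, returnPoints=False)
defects = cv.convexityDefects(contour_i, hull)
print(defects)
```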
79,626,027
8,674,852
While installing numpy, `../meson.build:1:0: ERROR: Compiler cc cannot compile programs.` occurred
<p>When I ran <code>pip install numpy==2.0.0</code> in my docker container, following error happened.</p> <pre class="lang-bash prettyprint-override"><code># pip install numpy==2.0.0 Collecting numpy==2.0.0 Downloading numpy-2.0.0.tar.gz (18.3 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 18.3/18.3 MB 35.3 MB/s eta 0:00:00 Installing build dependencies ... done Getting requirements to build wheel ... done Installing backend dependencies ... done Preparing metadata (pyproject.toml) ... error error: subprocess-exited-with-error × Preparing metadata (pyproject.toml) did not run successfully. │ exit code: 1 ╰─&gt; [12 lines of output] + /usr/local/bin/python3.13 /tmp/pip-install-l4g9wf4v/numpy_5970e636fac947f4876fa7f4f02504f9/vendored-meson/meson/meson.py setup /tmp/pip-install-l4g9wf4v/numpy_5970e636fac947f4876fa7f4f02504f9 /tmp/pip-install-l4g9wf4v/numpy_5970e636fac947f4876fa7f4f02504f9/.mesonpy-fskz9xxu -Dbuildtype=release -Db_ndebug=if-release -Db_vscrt=md --native-file=/tmp/pip-install-l4g9wf4v/numpy_5970e636fac947f4876fa7f4f02504f9/.mesonpy-fskz9xxu/meson-python-native-file.ini The Meson build system Version: 1.2.99 Source dir: /tmp/pip-install-l4g9wf4v/numpy_5970e636fac947f4876fa7f4f02504f9 Build dir: /tmp/pip-install-l4g9wf4v/numpy_5970e636fac947f4876fa7f4f02504f9/.mesonpy-fskz9xxu Build type: native build Project name: NumPy Project version: 2.0.0 ../meson.build:1:0: ERROR: Compiler cc cannot compile programs. A full log can be found at /tmp/pip-install-l4g9wf4v/numpy_5970e636fac947f4876fa7f4f02504f9/.mesonpy-fskz9xxu/meson-logs/meson-log.txt [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: metadata-generation-failed × Encountered error while generating package metadata. ╰─&gt; See above for output. note: This is an issue with the package mentioned above, not pip. hint: See above for details. </code></pre> <p>How can I resolve the issue?</p> <h2>Environmental Info</h2> <p>I used these tools.</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th>tool</th> <th>version</th> </tr> </thead> <tbody> <tr> <td>Host OS</td> <td>macOS 13.6</td> </tr> <tr> <td>Docker image</td> <td>python:3.13.3-slim</td> </tr> <tr> <td>Python</td> <td>3.13.3</td> </tr> <tr> <td>numpy</td> <td>2.0.0</td> </tr> </tbody> </table></div>
<python><numpy>
2025-05-16 23:04:54
2
608
siruku6
79,625,948
811,335
Why are some nested Python functions defined as `def _():`?
<p>I understand internal functions are prefixed with '_' to indicate they are helper/internal functions. It also helps with tooling etc. But I find some functions with just '_' as their name. Can't even find where they are called from. e.g., from</p> <p><a href="https://github.com/jax-ml/jax/blob/7412adec21c534f8e4bcc627552f28d162decc86/jax/_src/pallas/mosaic/helpers.py#L72" rel="nofollow noreferrer">https://github.com/jax-ml/jax/blob/7412adec21c534f8e4bcc627552f28d162decc86/jax/_src/pallas/mosaic/helpers.py#L72</a></p> <pre class="lang-py prettyprint-override"><code>def run_on_first_core(core_axis_name: str): &quot;&quot;&quot;Runs a function on the first core in a given axis.&quot;&quot;&quot; num_cores = jax.lax.axis_size(core_axis_name) if num_cores == 1: return lambda f: f() def wrapped(f): core_id = jax.lax.axis_index(core_axis_name) @pl_helpers.when(core_id == 0) @functools.wraps(f) def _(): ## How is this called? return f() return wrapped </code></pre> <p>There are several of them in an internal code base but here are some references</p> <ul> <li><p><a href="https://github.com/search?q=repo%3Ajax-ml%2Fjax%20def%20_()%3A&amp;type=code" rel="nofollow noreferrer">https://github.com/search?q=repo%3Ajax-ml%2Fjax%20def%20_()%3A&amp;type=code</a></p> </li> <li><p><a href="https://github.com/jax-ml/jax/blob/7412adec21c534f8e4bcc627552f28d162decc86/docs/pallas/tpu/distributed.ipynb#L1125" rel="nofollow noreferrer">https://github.com/jax-ml/jax/blob/7412adec21c534f8e4bcc627552f28d162decc86/docs/pallas/tpu/distributed.ipynb#L1125</a></p> </li> </ul>
<python><jax>
2025-05-16 21:19:09
1
39,244
A. K.
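A short, self-contained illustration of the pattern in the question above: the decorator consumes the function at definition time, so nothing ever calls it by name, and `_` just signals that the binding is intentionally unused. This is a simplified analogue of `pl_helpers.when`, not JAX's actual implementation:

```python
def when(condition):
    """Simplified stand-in for pl_helpers.when: run the decorated
    function immediately if the condition holds."""
    def decorator(f):
        if condition:
            f()          # the decorator itself invokes the function...
        return None      # ...and the returned binding is never used again
    return decorator

@when(2 + 2 == 4)
def _():                 # "_" = this name will never be referenced
    print("ran at definition time")
```

Running this prints "ran at definition time" once, without any explicit call to `_`.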
79,625,827
16,563,251
Type hint dict keys and values with matching generic type
<p>I have a class that behaves like a wrapper around some object. The object will always be a subclass of some parent class, but otherwise it can be anything (to simplify, I assume this parent class is <code>object</code> in my example). Therefore, I made the <code>Wrapper</code> class <a href="https://typing.python.org/en/latest/reference/generics.html" rel="nofollow noreferrer">generic</a>.</p> <p>I now want to have a dict that maps some objects classes to their corresponding wrapper class. The wrapper will always correspond, no <code>issubclass</code> check (or similar) needed from a runtime perspective. As far as I can tell, it is not even possible, because I use some generic method to perform the mapping.</p> <p>I would like to reflect this in my type hints, i.e. the type checker should only allow key-value-pairs with types <code>type[SomeObject], type[Wrapper[SomeObject]]</code>.</p> <p>This is mostly relevant when accessing the dict, if I access the value for the key <code>SomeObject</code>, then the result should be type narrowed to <code>Wrapper[SomeObject]</code>.</p> <p><code>TypeVar</code>s do not work for me though (and I assume they are not intended for this use case):</p> <pre class="lang-py prettyprint-override"><code>wrapper_mapping: dict[type[T], type[Wrapper[T]]] # Does not work </code></pre> <p>Is it even possible with the current type hint system? How can I make the type checker happy, without using ignore comments?</p> <pre class="lang-py prettyprint-override"><code>class Wrapper[T]: def __init__(self, thing: T): self.thing: T = thing class StringWrapper(Wrapper[str]): def __init__(self, thing: str): thing += &quot;!!!&quot; super().__init__(thing) wrapper_mapping = {str: StringWrapper, int: Wrapper[int]} def wrap[T](thing: T) -&gt; Wrapper[T]: # A lot of type checker warnings, but works at runtime return wrapper_mapping[type(thing)](thing) print(wrap(&quot;hello&quot;).thing) # hello!!! print(wrap(2).thing) # 2 </code></pre> <p><a href="https://mypy-play.net/?mypy=latest&amp;python=3.13&amp;flags=strict&amp;gist=29a3a993891092f8611733be6234b7e6" rel="nofollow noreferrer">mypy-playground</a> <a href="https://basedpyright.com/?typeCheckingMode=all&amp;code=MYGwhgzhAEDqBOYAOSCm8DaAVAugLgChpjoATVAM2gH1qBLAOzoBdaAKCVECgGmmYAWjAOZ5oWAJSESM6J24A6QSLFZoAXn5CGwggVCQYAZWbwRCZGnhsLKdBgimcUoiXJVajFu3m8tKuVMXWRJlHWgAak0AIgBCeOjXEIgAVys2CQVPJlZqNjDhCT0Ad0Q7eGoAW0sRDWgAb0d4MRMzHVsrPkZmMQ77bpwAXz13aFLkbBx87VFxCWgAWgA%2BODKrSekSAGJoAEFoEAB7ZmhDqmYATzRoYAFUYABrdDGweCYdCD4AIxST4sP4A8YGATvAUgxmHRKqgktB4KhmCk3mM1ugqjUdBhLmhpiIJFMCkUCEg2sw2OMkGxoncQEdoplCcQdjSjvFYsTSeSymwAEwMmbzaA7HlAA" rel="nofollow noreferrer">basedpyright-playground</a></p>
<python><python-typing>
2025-05-16 19:31:33
0
573
502E532E
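One commonly suggested workaround for the question above (a sketch, not the only approach): the key/value type dependency cannot be expressed on a plain `dict`, so hide the dict behind a small registry whose generic methods enforce the correspondence at every call site and concentrate the single unavoidable `cast`:

```python
from typing import cast

class Wrapper[T]:
    def __init__(self, thing: T):
        self.thing: T = thing

class WrapperRegistry:
    """Generic methods guarantee matching pairs go in, so the one
    internal cast on the way out is safe by construction."""

    def __init__(self) -> None:
        self._mapping: dict[type, type[Wrapper]] = {}

    def register[T](self, key: type[T], value: type[Wrapper[T]]) -> None:
        self._mapping[key] = value

    def lookup[T](self, key: type[T]) -> type[Wrapper[T]]:
        return cast(type[Wrapper[T]], self._mapping[key])

registry = WrapperRegistry()
registry.register(int, Wrapper[int])
wrapper_cls = registry.lookup(int)   # checkers see type[Wrapper[int]]
```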
79,625,826
10,569,563
Django manage command set stdout and stderr
<p>Is there a way to set <code>stdout</code> and <code>stderr</code> from <code>base_stealth_options</code> in <code>BaseCommand</code> when running the django command? I can't find any documentation about how to use those options. For example, I would like to set <code>stdout</code> and <code>stderr</code> to a logger <code>info</code> and <code>error</code> when running <code>python manage.py foo</code>.</p> <p>Code reference: <a href="https://github.com/django/django/blob/stable/5.2.x/django/core/management/base.py#L269" rel="nofollow noreferrer">https://github.com/django/django/blob/stable/5.2.x/django/core/management/base.py#L269</a></p> <p>This is my attempt at making my own <code>manage_custom_logging.py</code>. I am wondering if there is a better way to do this since <code>base_stealth_options</code> exists.</p> <pre class="lang-python prettyprint-override"><code># manage_custom_logging.py #!/usr/bin/env python &quot;&quot;&quot;Django's command-line utility for administrative tasks.&quot;&quot;&quot; import os import sys import traceback import logging logger = logging.getLogger('manage_custom_logging') class StreamToLogger(object): def __init__(self, logfct): self.logfct = logfct def write(self, buf): for line in buf.rstrip().splitlines(): self.logfct(line.rstrip()) def flush(self): pass def main(): &quot;&quot;&quot;Run administrative tasks.&quot;&quot;&quot; os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'my_django_app.settings') try: from django.core.management import execute_from_command_line except ImportError as exc: raise ImportError( &quot;Couldn't import Django. Are you sure it's installed and &quot; 'available on your PYTHONPATH environment variable? Did you ' 'forget to activate a virtual environment?' ) from exc try: sys.stdout = StreamToLogger(logging.info) sys.stderr = StreamToLogger(logging.error) execute_from_command_line(sys.argv) except Exception: logger.error(traceback.format_exc()) if __name__ == '__main__': main() </code></pre>
<python><django><stdout><stderr>
2025-05-16 19:31:22
0
401
Zuerst
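For the question above, one documented alternative worth sketching: `stdout` and `stderr` are exactly the `base_stealth_options`, and `call_command()` accepts them as keyword arguments, so any file-like object (including a logger adapter) can be supplied per invocation instead of patching `sys.stdout` globally. A sketch reusing the `StreamToLogger` idea from the question:

```python
import logging
from django.core.management import call_command

logger = logging.getLogger("manage_custom_logging")

class StreamToLogger:
    def __init__(self, logfct):
        self.logfct = logfct

    def write(self, buf):
        for line in buf.rstrip().splitlines():
            self.logfct(line.rstrip())

    def flush(self):
        pass

# Redirects the command's self.stdout / self.stderr writes to the logger.
call_command("foo",
             stdout=StreamToLogger(logger.info),
             stderr=StreamToLogger(logger.error))
```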
79,625,804
2,072,516
Can't sort by field on joined table
<p>I have a model which I'm joining subsequently onto 3 other models:</p> <pre><code> statement = select(Item).filter(Item.id == item_id) if include_purchases: statement = statement.options( joinedload(Item.purchases) .joinedload(Purchase.receipt) .joinedload(Receipt.store) ).order_by(Receipt.date.desc()) else: statement = statement.limit(1) </code></pre> <p>However, it errors:</p> <pre><code>| sqlalchemy.exc.ProgrammingError: (sqlalchemy.dialects.postgresql.asyncpg.ProgrammingError) &lt;class 'asyncpg.exceptions.UndefinedTableError'&gt;: invalid reference to FROM-clause entry for table &quot;receipts&quot; | HINT: Perhaps you meant to reference the table alias &quot;receipts_1&quot;. | [SQL: SELECT items.id, items.name, items.notes, stores_1.id AS id_1, stores_1.name AS name_1, receipts_1.id AS id_2, receipts_1.store_id, receipts_1.date, receipts_1.notes AS notes_1, purchases_1.id AS id_3, purchases_1.item_id, purchases_1.receipt_id, purchases_1.price, purchases_1.amount, purchases_1.notes AS notes_2 | FROM items LEFT OUTER JOIN purchases AS purchases_1 ON items.id = purchases_1.item_id LEFT OUTER JOIN receipts AS receipts_1 ON receipts_1.id = purchases_1.receipt_id LEFT OUTER JOIN stores AS stores_1 ON stores_1.id = receipts_1.store_id | WHERE items.id = $1::INTEGER ORDER BY receipts.date DESC] </code></pre> <p>It's creating aliases for the joined loads, so the order by doesn't work directly, but I'm missing in the docs how to actually resolve it.</p>
<python><sqlalchemy>
2025-05-16 19:15:21
1
3,210
Rohit
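For the question above, the conventional fix (a sketch): `joinedload()` deliberately aliases its joins so other query clauses cannot reference them; when the ordering must use the joined table, write the joins explicitly and point the eager loader at them with `contains_eager()`. Note this produces inner joins unless `isouter=True` is passed, which differs from `joinedload`'s LEFT OUTER JOINs:

```python
from sqlalchemy import select
from sqlalchemy.orm import contains_eager

statement = (
    select(Item)
    .where(Item.id == item_id)
    .join(Item.purchases)
    .join(Purchase.receipt)
    .join(Receipt.store)
    .options(
        contains_eager(Item.purchases)
        .contains_eager(Purchase.receipt)
        .contains_eager(Receipt.store)
    )
    # Receipt now refers to the explicitly joined table, so this works.
    .order_by(Receipt.date.desc())
)
```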
79,625,529
3,067,485
Creating TabularInline in Django Admin View for inherited Many2Many relation
<p>I have a Tag Model and Mixin that is used for adding tags to entities whenever it is needed.</p> <pre><code>class Tag(models.Model): name = models.CharField(max_length=64, unique=True) class TagMixin(models.Model): class Meta: abstract = True tags = models.ManyToManyField(Tag, blank=True) </code></pre> <p>For creating new entities this works well; it implicitly creates the correspondence table for the many-to-many relation:</p> <pre><code>class Item(TagMixin): name = models.CharField(max_length=64) </code></pre> <p>But what if I want to create an admin view on Item where tags are a TabularInline input?</p> <p>How should I fill in the configuration?</p> <pre><code>class ItemTagInline(admin.TabularInline): model = ? @admin.register(models.Item) class ItemAdmin(admin.ModelAdmin): list_display = (&quot;id&quot;, &quot;name&quot;) inlines = [ItemTagInline] </code></pre>
<python><django>
2025-05-16 15:47:46
1
11,564
jlandercy
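Django documents this case directly: an inline for a `ManyToManyField` points at the auto-created through model, which is reachable from the concrete model even when the field is declared on an abstract mixin. A sketch, assuming `Item` is imported directly:

```python
from django.contrib import admin

class ItemTagInline(admin.TabularInline):
    model = Item.tags.through     # the implicit Item<->Tag join table

@admin.register(Item)
class ItemAdmin(admin.ModelAdmin):
    list_display = ("id", "name")
    exclude = ("tags",)           # hide the default M2M widget; the inline replaces it
    inlines = [ItemTagInline]
```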
79,625,503
7,295,599
How to correctly crop a PDF with Python to its bounding box?
<p>There are a few similar posts here on SO, but they do not cover my issue. I am using the following script to crop a PDF to its visible content.</p> <pre><code>### crop a PDF to visible objects import fitz # PyMuPDF def crop_pdf_to_visible_content(input_pdf, output_pdf): # Open the input PDF doc = fitz.open(input_pdf) for page in doc: # Get the text blocks to determine the visible content text_blocks = page.get_text(&quot;dict&quot;)[&quot;blocks&quot;] if text_blocks: # Initialize variables to store the minimum and maximum coordinates min_x = float('inf') min_y = float('inf') max_x = 0 max_y = 0 for block in text_blocks: if &quot;bbox&quot; in block: x0, y0, x1, y1 = block[&quot;bbox&quot;] min_x = min(min_x, x0) min_y = min(min_y, y0) max_x = max(max_x, x1) max_y = max(max_y, y1) # Set the cropbox to the bounding box of the visible content rect = fitz.Rect(min_x, min_y, max_x, max_y) page.set_cropbox(rect) # Save the output PDF doc.save(output_pdf) doc.close() input_pdf = &quot;Test.pdf&quot; output_pdf = input_pdf[:-4] + &quot;_crop.pdf&quot; crop_pdf_to_visible_content(input_pdf, output_pdf) ### end of script </code></pre> <p>This works fine for text:</p> <p>original:</p> <p><a href="https://i.sstatic.net/EbOtpTZP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EbOtpTZP.png" alt="enter image description here" /></a></p> <p>cropped:</p> <p><a href="https://i.sstatic.net/UmammMFE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UmammMFE.png" alt="enter image description here" /></a></p> <p>However, apparently, for music symbols this doesn't seem to work correctly.</p> <p>original:</p> <p><a href="https://i.sstatic.net/Jp6CznS2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Jp6CznS2.png" alt="enter image description here" /></a></p> <p>cropped:</p> <p><a href="https://i.sstatic.net/fnPBSn6t.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fnPBSn6t.png" alt="enter image description here" /></a></p> <p>My suspicion is that the shapes close to the borders might be described as curves with control points and the file might be cropped at the control points, but not including the full shape. I cannot tell in advance how much space is needed, so, to add an additional fixed &quot;safety&quot; margin would not help and other files would then get a too wide margin.</p> <p>Any ideas how to work around this?</p> <p><strong>Addition:</strong> Result when importing such a PDF to Inkscape. Correct bounding box, i.e. complete clef, brace on the left, bar on the right and no extra margin.</p> <p><a href="https://i.sstatic.net/eAEjctlv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eAEjctlv.png" alt="enter image description here" /></a></p> <p><strong>Addition 2:</strong></p> <p>In order to illustrate clearly what's going wrong, you can draw for example a Bézier-curve (e.g. in Inkscape):</p> <p><a href="https://i.sstatic.net/QTiv9SnZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QTiv9SnZ.png" alt="enter image description here" /></a></p> <p>As you can see, some control points are located outside the actual visible curve. Apparently, this leads <code>pymupdf</code> to believe the bounding box to be larger than it actually is.</p> <p>Here, the cropped PDF using the improved script from my answer. 
As you can see, the bounding box is way too large.</p> <p><a href="https://i.sstatic.net/pe25Aafg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pe25Aafg.png" alt="enter image description here" /></a></p> <p>I guess the same behaviour was observed with Ghostscript when trying to calculate the bounding box.</p> <p>So, the question remains: is there nevertheless a way to get the correct bounding box with <code>pymupdf</code>?</p>
<python><pdf><crop><bounding-box><pymupdf>
2025-05-16 15:28:30
2
27,030
theozh
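One workaround for the question above (a sketch, with the caveat that it rasterises, and assuming an unrotated page with origin at (0, 0) and a white background): render the page to pixels, take the bounding box of non-white pixels, and map it back to PDF points. Only painted pixels count, so Bezier control points outside the visible curve cannot inflate the box:

```python
import fitz  # PyMuPDF
import numpy as np

def visible_bbox(page, zoom=4):
    """Bounding box of 'ink' on the page, in PDF points."""
    pix = page.get_pixmap(matrix=fitz.Matrix(zoom, zoom))
    img = np.frombuffer(pix.samples, dtype=np.uint8)
    img = img.reshape(pix.height, pix.width, pix.n)
    ink = (img[..., :3] < 250).any(axis=2)   # any pixel that isn't near-white
    ys, xs = np.nonzero(ink)
    if xs.size == 0:
        return page.rect                     # blank page: leave unchanged
    return fitz.Rect(xs.min() / zoom, ys.min() / zoom,
                     (xs.max() + 1) / zoom, (ys.max() + 1) / zoom)
```

A higher `zoom` tightens the box at the cost of memory; the result can be passed to `page.set_cropbox()` as in the question's script.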
79,625,501
337,149
Wrong encoding with Python decode in Windows command prompt
<p>When I run <em>decode</em> on a byte string encoded as UTF-8 I get ANSI encoding in a Windows command prompt.</p> <pre><code>&gt;python --version Python 3.13.0 &gt;python -c &quot;print(b'\xc3\x96'.decode('utf-8'))&quot; &gt; test.txt </code></pre> <p>When I open test.txt in Notepad++ it says that the encoding is ANSI. If I run the same command in MSYS2 (using Python 3.11.6) the resulting encoding is UTF-8 as expected. How come the encoding is wrong using the Windows command prompt?</p>
<python><windows><character-encoding><command-prompt>
2025-05-16 15:27:15
2
11,423
August Karlstrom
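Context that may explain the behaviour above: when stdout is redirected to a file on Windows, CPython encodes output with the locale's ANSI code page rather than UTF-8 unless UTF-8 mode or an explicit encoding is requested, whereas MSYS2's environment defaults to UTF-8. A sketch of the in-script fix:

```python
import sys

# Re-encode the already-open stdout stream as UTF-8 before printing.
sys.stdout.reconfigure(encoding="utf-8")
print(b"\xc3\x96".decode("utf-8"))
```

Alternatively, enabling UTF-8 mode from outside the script (`python -X utf8 ...`, or setting `PYTHONUTF8=1` / `PYTHONIOENCODING=utf-8`) changes the default encoding for redirected streams.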
79,625,500
3,161,801
Entity.put error after Python 3 migration for Google Data Store - refactored
<p>I have the code below to update an entity in the Google data store.</p> <p>I'm now getting an internal error on the <code>model.Collection.put(vhighest)</code></p> <p>sync.yaml</p> <pre><code> service: sync instance_class: F2 automatic_scaling: max_instances: 1 runtime: python312 app_engine_apis: true entrypoint: gunicorn -b :$PORT sync:app #inbound_services: #- warmup #libraries: #- name: jinja2 # version: latest #- name: ssl # version: latest # taskqueue and cron tasks can access admin urls handlers: - url: /.* script: sync.app secure: always redirect_http_response_code: 301 env_variables: MEMCACHE_USE_CROSS_COMPATIBLE_PROTOCOL: &quot;True&quot; NDB_USE_CROSS_COMPATIBLE_PICKLE_PROTOCOL: &quot;True&quot; DEFERRED_USE_CROSS_COMPATIBLE_PICKLE_PROTOCOL: &quot;True&quot; CURRENT_VERSION_TIMESTAMP: &quot;1677721600&quot; </code></pre> <pre class="lang-none prettyprint-override"><code>google.cloud.ndb.tasklets.Return: Key('Collection', 6266129674665984) </code></pre> <p>Can I ask for some guidance, did something with how i should use these methods?</p> <pre class="lang-none prettyprint-override"><code> import flask import config import util app = flask.Flask(__name__) from google.appengine.api import taskqueue, search, memcache from apiclient.discovery import build, HttpError from google.cloud import ndb from apiclient.http import MediaIoBaseUpload from datetime import datetime, timedelta from functools import partial from io import BytesIO import os from os.path import splitext, basename from model import Config from model import VideosToCollections from pytz import timezone import datetime import httplib2 import iso8601 import time from flask import request from operator import attrgetter import model from model import CallBack import re import config import google.appengine.api client = ndb.Client() def ndb_wsgi_middleware(wsgi_app): def middleware(environ, start_response): with client.context(): return wsgi_app(environ, start_response) return middleware app.wsgi_app = ndb_wsgi_middleware(app.wsgi_app) # ################################################################################ ## Flush all caches, rebuild search index, and sync all videos ################################################################################ # @app.route('/rx/', methods=['GET']) def rx(): GAE_APP_ID = os.environ['GOOGLE_CLOUD_PROJECT'] index = search.Index('general-index') while True: document_ids = [ document.doc_id for document in index.get_range(ids_only=True)] # # If no IDs were returned, we've deleted everything. if not document_ids: break # # Delete the documents for the given IDs index.delete(document_ids) # # flush memcache memcache.flush_all() # # # get/put all collections so they are reindexed collections_dbs_keys, cursor = model.Collection.get_dbs(keys_only=True, limit=-1) collections_dbs = ndb.get_multi(collections_dbs_keys) for collection in collections_dbs: collection.search_update_index() # # # sync all videos taskqueue.add(url=flask.url_for('sync_video'), params={'syncthumb': True}, method='GET') # return 'ok. flushed everything and started video sync.' 
# # ################################################################################ ## Sync video(s) task worker ################################################################################ # @app.route('/sync/', methods=['GET']) @app.route('/sync/&lt;yt_video_id&gt;', methods=['POST','GET']) def sync_video(yt_video_id=None): GAE_APP_ID = os.environ['GOOGLE_CLOUD_PROJECT'] syncthumb = util.param('syncthumb', bool) if not syncthumb: syncthumb = False if yt_video_id: util.sync_video_worker(yt_video_id, syncthumb=syncthumb) success = 'ok: synced ' + yt_video_id return success index = search.Index('general-index') while True: document_ids = [ document.doc_id for document in index.get_range(ids_only=True)] # If no IDs were returned, we've deleted everything. if not document_ids: break # Delete the documents for the given IDs index.delete(document_ids) # get/put all collections so they are reindexed collections_dbs_keys, cursor = model.Collection.get_dbs(keys_only=True, limit=-1) collections_dbs = ndb.get_multi(collections_dbs_keys) for collection in collections_dbs: collection.search_update_index() video_dbs, video_cursor = model.Video.get_dbs(limit=-1) tasks = [taskqueue.Task( url='/sync/' + video_db.yt_video_id, params={'syncthumb': syncthumb}, ) for video_db in video_dbs] for batches in [tasks[i:i + 5] for i in range(0, len(tasks), 5)]: rpc = taskqueue.Queue('sync').add_async(batches) rpc.wait() success = 'ok: dispatched ' + str(len(tasks)) + ' videos for sync tasks' return success ############################################################################### # Populate Collections ############################################################################### @app.route('/collectionsync/', methods=['GET']) #@ndb.transactional def collectionsync(): GAE_APP_ID = os.environ['GOOGLE_CLOUD_PROJECT'] vnew=model.Collection.query(model.Collection.slug=='new').get() for p in model.VideosToCollections.query(model.VideosToCollections.collection_key==vnew.key): p.key.delete() #populate newest collection tot=0 ct=0 for p in model.Video.query().order(-model.Video.launch_date): #print(p) if(p.launch_date): if p.get_launch_date().date() &gt; (datetime.datetime.now(timezone(&quot;UTC&quot;)).date()): continue if(p.yt_date_added is not None): if(p.yt_date_added &gt; (datetime.datetime.today() - timedelta(days=30))): ct+=1 vc = VideosToCollections() vc.video_key = p.key vc.collection_key = vnew.key vc.order = ct vc.launch_date = datetime.datetime.now(timezone(&quot;UTC&quot;)).date() model.VideosToCollections.put(vc) if(ct==1): vnew.featured_primary=p.key model.Collection.put(vnew) if(ct==2): vnew.featured_secondary=p.key model.Collection.put(vnew) if(ct&gt;=25): break tot+=ct #populate highest rated collection ct=0 vhighest=model.Collection.query(model.Collection.slug=='highest-rated').get() for p in model.VideosToCollections.query(model.VideosToCollections.collection_key==vhighest.key): p.key.delete() for p in model.Video.query().order(-model.Video.approval): if(p.launch_date): if p.get_launch_date().date() &gt; (datetime.datetime.now(timezone(&quot;UTC&quot;)).date()): continue if(p.yt_views &gt; 25000): ct+=1 vc = VideosToCollections() vc.video_key = p.key vc.collection_key = vhighest.key vc.launch_date = datetime.datetime.now(timezone(&quot;UTC&quot;)).date() vc.order = ct model.VideosToCollections.put(vc) if(ct==1): vhighest.featured_primary=p.key model.Collection.put(vhighest) if(ct==2): vhighest.featured_secondary=p.key model.Collection.put(vhighest) if(ct&gt;=25): break tot+=ct # flush memcache 
#memcache.flush_all() success = 'ok: dispatched ' + str(tot) + ' videos into collections' return success </code></pre> <pre><code>2025-05-16 15:24:21 sync[20250516t112108] &quot;GET /collectionsync/ HTTP/1.1&quot; 500 2025-05-16 15:24:21 sync[20250516t112108] [2025-05-16 15:24:21 +0000] [11] [INFO] Starting gunicorn 22.0.0 2025-05-16 15:24:21 sync[20250516t112108] [2025-05-16 15:24:21 +0000] [11] [INFO] Listening at: http://0.0.0.0:8081 (11) 2025-05-16 15:24:21 sync[20250516t112108] [2025-05-16 15:24:21 +0000] [11] [INFO] Using worker: sync 2025-05-16 15:24:21 sync[20250516t112108] [2025-05-16 15:24:21 +0000] [15] [INFO] Booting worker with pid: 15 2025-05-16 15:24:25 sync[20250516t112108] [2025-05-16 15:24:25,568] ERROR in app: Exception on /collectionsync/ [GET] 2025-05-16 15:24:25 sync[20250516t112108] Traceback (most recent call last): File &quot;/layers/google.python.pip/pip/lib/python3.12/site-packages/google/cloud/ndb/tasklets.py&quot;, line 323, in _advance_tasklet yielded = self.generator.send(send_value) 2025-05-16 15:24:25 sync[20250516t112108] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2025-05-16 15:24:25 sync[20250516t112108] File &quot;/layers/google.python.pip/pip/lib/python3.12/site-packages/google/cloud/ndb/model.py&quot;, line 5518, in put 2025-05-16 15:24:25 sync[20250516t112108] raise tasklets.Return(self._key) 2025-05-16 15:24:25 sync[20250516t112108] google.cloud.ndb.tasklets.Return: Key('Collection', 6266129674665984) 2025-05-16 15:24:25 sync[20250516t112108] During handling of the above exception, another exception occurred: 2025-05-16 15:24:25 sync[20250516t112108] Traceback (most recent call last): File &quot;/layers/google.python.pip/pip/lib/python3.12/site-packages/flask/app.py&quot;, line 2529, in wsgi_app response = self.full_dispatch_request() 2025-05-16 15:24:25 sync[20250516t112108] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2025-05-16 15:24:25 sync[20250516t112108] File &quot;/layers/google.python.pip/pip/lib/python3.12/site-packages/flask/app.py&quot;, line 1825, in full_dispatch_request 2025-05-16 15:24:25 sync[20250516t112108] rv = self.handle_user_exception(e) 2025-05-16 15:24:25 sync[20250516t112108] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2025-05-16 15:24:25 sync[20250516t112108] File &quot;/layers/google.python.pip/pip/lib/python3.12/site-packages/flask/app.py&quot;, line 1823, in full_dispatch_request 2025-05-16 15:24:25 sync[20250516t112108] rv = self.dispatch_request() 2025-05-16 15:24:25 sync[20250516t112108] ^^^^^^^^^^^^^^^^^^^^^^^ 2025-05-16 15:24:25 sync[20250516t112108] File &quot;/layers/google.python.pip/pip/lib/python3.12/site-packages/flask/app.py&quot;, line 1799, in dispatch_request 2025-05-16 15:24:25 sync[20250516t112108] return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args) 2025-05-16 15:24:25 sync[20250516t112108] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2025-05-16 15:24:25 sync[20250516t112108] File &quot;/workspace/sync.py&quot;, line 210, in collectionsync 2025-05-16 15:24:25 sync[20250516t112108] model.Collection.put(vhighest) 2025-05-16 15:24:25 sync[20250516t112108] File &quot;/layers/google.python.pip/pip/lib/python3.12/site-packages/google/cloud/ndb/_options.py&quot;, line 102, in wrapper 2025-05-16 15:24:25 sync[20250516t112108] return wrapped(*pass_args, **kwargs) 2025-05-16 15:24:25 sync[20250516t112108] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2025-05-16 15:24:25 sync[20250516t112108] File &quot;/layers/google.python.pip/pip/lib/python3.12/site-packages/google/cloud/ndb/utils.py&quot;, line 118, in wrapper 2025-05-16 
15:24:25 sync[20250516t112108] return wrapped(*args, **new_kwargs) 2025-05-16 15:24:25 sync[20250516t112108] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2025-05-16 15:24:25 sync[20250516t112108] File &quot;/layers/google.python.pip/pip/lib/python3.12/site-packages/google/cloud/ndb/utils.py&quot;, line 150, in positional_wrapper 2025-05-16 15:24:25 sync[20250516t112108] return wrapped(*args, **kwds) 2025-05-16 15:24:25 sync[20250516t112108] ^^^^^^^^^^^^^^^^^^^^^^ 2025-05-16 15:24:25 sync[20250516t112108] File &quot;/layers/google.python.pip/pip/lib/python3.12/site-packages/google/cloud/ndb/model.py&quot;, line 5449, in _put 2025-05-16 15:24:25 sync[20250516t112108] return self._put_async(_options=kwargs[&quot;_options&quot;]).result() 2025-05-16 15:24:25 sync[20250516t112108] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2025-05-16 15:24:25 sync[20250516t112108] File &quot;/layers/google.python.pip/pip/lib/python3.12/site-packages/google/cloud/ndb/tasklets.py&quot;, line 210, in result 2025-05-16 15:24:25 sync[20250516t112108] self.check_success() 2025-05-16 15:24:25 sync[20250516t112108] File &quot;/layers/google.python.pip/pip/lib/python3.12/site-packages/google/cloud/ndb/tasklets.py&quot;, line 154, in check_success 2025-05-16 15:24:25 sync[20250516t112108] self.wait() 2025-05-16 15:24:25 sync[20250516t112108] File &quot;/layers/google.python.pip/pip/lib/python3.12/site-packages/google/cloud/ndb/tasklets.py&quot;, line 145, in wait 2025-05-16 15:24:25 sync[20250516t112108] if not _eventloop.run1(): 2025-05-16 15:24:25 sync[20250516t112108] ^^^^^^^^^^^^^^^^^ 2025-05-16 15:24:25 sync[20250516t112108] File &quot;/layers/google.python.pip/pip/lib/python3.12/site-packages/google/cloud/ndb/_eventloop.py&quot;, line 390, in run1 2025-05-16 15:24:25 sync[20250516t112108] return loop.run1() 2025-05-16 15:24:25 sync[20250516t112108] ^^^^^^^^^^^ 2025-05-16 15:24:25 sync[20250516t112108] File &quot;/layers/google.python.pip/pip/lib/python3.12/site-packages/google/cloud/ndb/_eventloop.py&quot;, line 326, in run1 2025-05-16 15:24:25 sync[20250516t112108] delay = self.run0() 2025-05-16 15:24:25 sync[20250516t112108] ^^^^^^^^^^^ 2025-05-16 15:24:25 sync[20250516t112108] File &quot;/layers/google.python.pip/pip/lib/python3.12/site-packages/google/cloud/ndb/_eventloop.py&quot;, line 286, in run0 2025-05-16 15:24:25 sync[20250516t112108] if self._run_current() or self.run_idle(): 2025-05-16 15:24:25 sync[20250516t112108] ^^^^^^^^^^^^^^^^^^^ 2025-05-16 15:24:25 sync[20250516t112108] File &quot;/layers/google.python.pip/pip/lib/python3.12/site-packages/google/cloud/ndb/_eventloop.py&quot;, line 276, in _run_current 2025-05-16 15:24:25 sync[20250516t112108] callback(*args, **kwargs) 2025-05-16 15:24:25 sync[20250516t112108] File &quot;/layers/google.python.pip/pip/lib/python3.12/site-packages/google/cloud/ndb/tasklets.py&quot;, line 337, in _advance_tasklet 2025-05-16 15:24:25 sync[20250516t112108] self.set_result(_get_return_value(stop)) 2025-05-16 15:24:25 sync[20250516t112108] File &quot;/layers/google.python.pip/pip/lib/python3.12/site-packages/google/cloud/ndb/tasklets.py&quot;, line 170, in set_result 2025-05-16 15:24:25 sync[20250516t112108] self._finish() 2025-05-16 15:24:25 sync[20250516t112108] File &quot;/layers/google.python.pip/pip/lib/python3.12/site-packages/google/cloud/ndb/tasklets.py&quot;, line 199, in _finish 2025-05-16 15:24:25 sync[20250516t112108] callback(self) 2025-05-16 15:24:25 sync[20250516t112108] File &quot;/workspace/NdbSearchableBase/SearchableModel.py&quot;, line 203, in 
_post_put_hook 2025-05-16 15:24:25 sync[20250516t112108] self.search_update_index() 2025-05-16 15:24:25 sync[20250516t112108] File &quot;/workspace/NdbSearchableBase/SearchableModel.py&quot;, line 144, in search_update_index 2025-05-16 15:24:25 sync[20250516t112108] index.put(document) 2025-05-16 15:24:25 sync[20250516t112108] File &quot;/layers/google.python.pip/pip/lib/python3.12/site-packages/google/appengine/datastore/datastore_rpc.py&quot;, line 94, in positional_wrapper 2025-05-16 15:24:25 sync[20250516t112108] return wrapped(*args, **kwds) 2025-05-16 15:24:25 sync[20250516t112108] ^^^^^^^^^^^^^^^^^^^^^^ 2025-05-16 15:24:25 sync[20250516t112108] File &quot;/layers/google.python.pip/pip/lib/python3.12/site-packages/google/appengine/api/search/search.py&quot;, line 3610, in put 2025-05-16 15:24:25 sync[20250516t112108] return self.put_async(documents, deadline=deadline).get_result() 2025-05-16 15:24:25 sync[20250516t112108] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2025-05-16 15:24:25 sync[20250516t112108] File &quot;/layers/google.python.pip/pip/lib/python3.12/site-packages/google/appengine/datastore/datastore_rpc.py&quot;, line 94, in positional_wrapper 2025-05-16 15:24:25 sync[20250516t112108] return wrapped(*args, **kwds) 2025-05-16 15:24:25 sync[20250516t112108] ^^^^^^^^^^^^^^^^^^^^^^ 2025-05-16 15:24:25 sync[20250516t112108] File &quot;/layers/google.python.pip/pip/lib/python3.12/site-packages/google/appengine/api/search/search.py&quot;, line 3667, in put_async 2025-05-16 15:24:25 sync[20250516t112108] return _PutOperationFuture(self, request, response, deadline, hook) 2025-05-16 15:24:25 sync[20250516t112108] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2025-05-16 15:24:25 sync[20250516t112108] File &quot;/layers/google.python.pip/pip/lib/python3.12/site-packages/google/appengine/api/search/search.py&quot;, line 281, in __init__ 2025-05-16 15:24:25 sync[20250516t112108] super(_PutOperationFuture, self).__init__('IndexDocument', request, 2025-05-16 15:24:25 sync[20250516t112108] File &quot;/layers/google.python.pip/pip/lib/python3.12/site-packages/google/appengine/api/search/search.py&quot;, line 265, in __init__ 2025-05-16 15:24:25 sync[20250516t112108] self._rpc = apiproxy_stub_map.UserRPC('search', deadline=deadline) 2025-05-16 15:24:25 sync[20250516t112108] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2025-05-16 15:24:25 sync[20250516t112108] File &quot;/layers/google.python.pip/pip/lib/python3.12/site-packages/google/appengine/api/apiproxy_stub_map.py&quot;, line 444, in __init__ 2025-05-16 15:24:25 sync[20250516t112108] self.__rpc = CreateRPC(service, stubmap) 2025-05-16 15:24:25 sync[20250516t112108] ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2025-05-16 15:24:25 sync[20250516t112108] File &quot;/layers/google.python.pip/pip/lib/python3.12/site-packages/google/appengine/api/apiproxy_stub_map.py&quot;, line 69, in CreateRPC 2025-05-16 15:24:25 sync[20250516t112108] assert stub, 'No api proxy found for service &quot;%s&quot;' % service 2025-05-16 15:24:25 sync[20250516t112108] ^^^^ 2025-05-16 15:24:26 sync[20250516t112108] AssertionError: No api proxy found for service &quot;search&quot; </code></pre>
<python><google-app-engine><google-cloud-datastore><ndb>
2025-05-16 15:26:56
0
775
ffejrekaburb
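Two observations on the traceback above, plus a sketch assuming the bundled-services setup is the issue. First, the `google.cloud.ndb.tasklets.Return` at the top is not itself the failure: it is the internal control-flow exception NDB uses to return values from tasklets. The real failure is the final `AssertionError: No api proxy found for service "search"`, which the legacy Search/Memcache/Taskqueue shims raise when a request is not running inside the App Engine API proxy. On second-generation runtimes with `app_engine_apis: true`, the WSGI app must be wrapped with `wrap_wsgi_app` in addition to the NDB context middleware:

```python
from google.appengine.api import wrap_wsgi_app

# Both wrappers must be applied: one provides the legacy API proxy
# (search, memcache, taskqueue), the other the NDB client context.
app.wsgi_app = wrap_wsgi_app(app.wsgi_app)
app.wsgi_app = ndb_wsgi_middleware(app.wsgi_app)
```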
79,625,465
1,200,914
Is there an alternative to asking every n seconds to know if a Celery task is done?
<p>I used Celery several times in the past and the idea I always had is that:</p> <ul> <li>You have a FastAPI app, which delays tasks by connecting to a broker</li> <li>Celery gets the tasks, does them and then sends back the result to the broker.</li> <li>In the FastAPI app you have an endpoint to check, using AsyncResult, the result of a given task id, which you call every N seconds.</li> </ul> <p>However, in my current application I was asked to use Redis, and let the team that had to send me the calls to my FastAPI deliver their messages directly in Redis, so we avoid using the FastAPI in the middle. I have already done this, and I know how they have to write the messages in Redis. Nonetheless, we still have the question of how to notify them that the results are already done. Is there any alternative other than asking every N seconds? I was thinking they might send me an ID and I would ping them with that ID, but they were asking about using Redis streams. I'm connecting using &quot;rediss&quot;, but tasks are given to Redis lists, and results are Redis strings.</p>
<python><redis><celery><celery-task>
2025-05-16 15:07:28
2
3,052
Learning from masters
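Since the team mentioned Redis streams, here is a minimal push-style sketch (stream and field names are made up): the producer appends each finished result with XADD, and the consumer blocks on XREAD, so no fixed-interval polling is needed. Consumer groups (XREADGROUP/XACK) would add acknowledgement on top of this:

```python
import redis

r = redis.Redis(ssl=True)  # "rediss" connection; parameters are illustrative

def publish_result(task_id: str, result: str) -> None:
    # Worker side: append each finished result to a stream.
    r.xadd("results", {"task_id": task_id, "result": result})

def wait_for_results(last_id: str = "$"):
    # Consumer side: block up to 30s per call; returns as soon as an
    # entry arrives, instead of waking up every N seconds.
    while True:
        for _stream, entries in r.xread({"results": last_id}, block=30_000) or []:
            for entry_id, fields in entries:
                last_id = entry_id
                yield fields
```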
79,625,439
15,915,737
Get files from S3 with lambda
<p>I'm trying to retrieve files from an AWS S3 bucket using a Lambda function, but my script keeps timing out, and I can't figure out why. <code>&quot;errorMessage&quot;: &quot;2025-05-16T14:37:13.093Z fdb6***4165 Task timed out after 30.03 seconds&quot;</code></p> <p>The code I'm using is a basic script I got from the <a href="https://pypi.org/project/boto3/" rel="nofollow noreferrer">Doc</a>:</p> <pre><code>import boto3 def lambda_handler(event, context): bucket_name = &quot;&lt;myS3_bucket&quot; file_key = &quot;&lt;path_to/file.csv&gt;&quot; s3 = boto3.resource('s3') for bucket in s3.buckets.all(): print(bucket.name) </code></pre> <p>I create my Lambda with terraform, here is my Lambda policy:</p> <pre><code>resource &quot;aws_iam_role&quot; &quot;lambda_role&quot; { name = &quot;${var.lambda_name}_role&quot; assume_role_policy = jsonencode({ Statement = [ { Action = &quot;sts:AssumeRole&quot; Effect = &quot;Allow&quot; Principal = { Service = &quot;lambda.amazonaws.com&quot; } }] }) } resource &quot;aws_iam_role_policy_attachment&quot; &quot;S3_read_only&quot; { role = aws_iam_role.lambda_role.name policy_arn = &quot;arn:aws:iam::aws:policy/AmazonS3FullAccess&quot; } resource &quot;aws_iam_role_policy_attachment&quot; &quot;lambda_logs&quot; { role = aws_iam_role.lambda_role.name policy_arn = &quot;arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole&quot; } resource &quot;aws_iam_role_policy_attachment&quot; &quot;lambda_vpc_access&quot; { role = aws_iam_role.lambda_role.name policy_arn = &quot;arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole&quot; } </code></pre> <p>Security group:</p> <pre><code>data &quot;aws_vpc&quot; &quot;default&quot; { default = true } resource &quot;aws_security_group&quot; &quot;lambda_sg&quot; { name = &quot;sg_${var.lambda_name}&quot; description = &quot;Allow all the ports needed for lambda&quot; vpc_id = data.aws_vpc.default.id # allow all inbound traffic ingress { from_port = 0 to_port = 0 protocol = &quot;-1&quot; cidr_blocks = [&quot;0.0.0.0/0&quot;] } # allow all outbound traffic egress { from_port = 0 to_port = 0 protocol = &quot;-1&quot; cidr_blocks = [&quot;0.0.0.0/0&quot;] } } } </code></pre> <p>I also add bucket policy:</p> <pre><code>resource &quot;aws_s3_bucket_policy&quot; &quot;lambda_s3_access_policy&quot; { bucket = &quot;superset-dockerfiles&quot; policy = jsonencode({ Version = &quot;2012-10-17&quot;, Statement = [ { Effect = &quot;Allow&quot;, Principal = { AWS = aws_iam_role.lambda_role.arn }, Action = [ &quot;s3:GetObject&quot;, &quot;s3:ListBucket&quot; ], Resource = [ &quot;arn:aws:s3:::superset-dockerfiles&quot;, &quot;arn:aws:s3:::superset-dockerfiles/*&quot; ] } ] }) } </code></pre> <p>What am i missing ?</p>
<python><amazon-web-services><amazon-s3><aws-lambda><terraform-provider-aws>
2025-05-16 14:51:10
1
418
user15915737
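A debugging sketch for the timeout above, assuming the cause is networking rather than permissions: a silent hang until the Lambda limit is the typical symptom of a VPC-attached function with no route to S3 (no NAT gateway and no S3 gateway VPC endpoint). Shortening boto3's timeouts makes the failure surface as an explicit exception in the logs instead of a task timeout:

```python
import boto3
from botocore.config import Config

s3 = boto3.client(
    "s3",
    config=Config(connect_timeout=3, read_timeout=5,
                  retries={"max_attempts": 1}),
)

def lambda_handler(event, context):
    # A ConnectTimeoutError here points to a missing network path to S3
    # (NAT gateway or S3 gateway endpoint), not to an IAM problem.
    return [b["Name"] for b in s3.list_buckets()["Buckets"]]
```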
79,625,371
1,878,545
How do I save data of the following format to a file for a vision language model machine learning training job
<p>I have data of the following JSON format:</p> <pre class="lang-py prettyprint-override"><code>from datasets import load_dataset train_dataset, eval_dataset, test_dataset = load_dataset( &quot;HuggingFaceM4/ChartQA&quot;, split=['train[:5%]', 'val[:5%]', 'test[:5%]'] ) </code></pre> <pre class="lang-json prettyprint-override"><code>[{'role': 'system', 'content': [{'type': 'text', 'text': 'You are a Vision Language Model specialized in interpreting visual data from chart images.\nYour task is to analyze the provided chart image and respond to queries with concise answers, usually a single word, number, or short phrase.\nThe charts include a variety of types (e.g., line charts, bar charts) and contain colors, labels, and text.\nFocus on delivering accurate, succinct answers based on the visual information. Avoid additional explanation unless absolutely necessary.'}]}, {'role': 'user', 'content': [{'type': 'image', 'image': &lt;PIL.PngImagePlugin.PngImageFile image mode=RGB size=308x369&gt;}, {'type': 'text', 'text': 'Is the rightmost value of light brown graph 58?'}]}, {'role': 'assistant', 'content': [{'type': 'text', 'text': 'No'}]}] </code></pre> <p>How may I save this dataset as file in Python? This is for a machine learning training job for a vision language model such as SmolVLM. I am following the example <a href="https://huggingface.co/learn/cookbook/en/fine_tuning_smol_vlm_sft_trl" rel="nofollow noreferrer">here</a>, but making modifications using <a href="https://www.philschmid.de/sagemaker-train-evalaute-llms-2024" rel="nofollow noreferrer">this guide</a> for training in SageMaker, which requires uploading the file to S3.</p> <p>For the following initial attempt, I'm running into an error due to the embedded PNG image.</p> <pre class="lang-py prettyprint-override"><code>import json from sagemaker.s3 import S3Uploader def upload_json_dataset_to_s3(dataset, filename, s3_location): with open(filename, 'w') as f: json.dump(dataset, f) S3Uploader.upload(filename, s3_location) </code></pre> <p>Error:</p> <pre><code>TypeError: Object of type PngImageFile is not JSON serializable </code></pre>
<python>
2025-05-16 14:17:01
0
1,725
nathanielng
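One way past the `PngImageFile is not JSON serializable` error above (a sketch; the training job must apply the inverse transform when loading): walk the records and replace every PIL image with a base64-encoded PNG so the structure becomes plain JSON. The `__pil_png_b64__` marker key is an invented convention:

```python
import base64
import io
import json
from PIL import Image

def jsonable(obj):
    """Recursively replace PIL images with base64-encoded PNG payloads."""
    if isinstance(obj, Image.Image):
        buf = io.BytesIO()
        obj.save(buf, format="PNG")
        return {"__pil_png_b64__": base64.b64encode(buf.getvalue()).decode("ascii")}
    if isinstance(obj, list):
        return [jsonable(x) for x in obj]
    if isinstance(obj, dict):
        return {k: jsonable(v) for k, v in obj.items()}
    return obj

def save_dataset(records, filename):
    with open(filename, "w") as f:
        json.dump(jsonable(records), f)
```

Alternatively, `Dataset.save_to_disk()` from the `datasets` library serialises image columns natively, avoiding the conversion entirely.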
79,625,347
5,568,409
Best way to create a list of ticklabels by multiples of pi
<p>I am working on <code>Python 3x</code> and I have this list of integers:</p> <pre><code>L = [0, 1, 2, 3, 4] </code></pre> <p>How could I from <code>L</code> create this list of tick labels:</p> <pre><code>T = [0, \pi, 2\pi, 3\pi, 4\pi] ? </code></pre> <p>I have no idea and just tried:</p> <pre><code>L = [0,1,2,3,4] ticklabels = [f'L[{i}]' + r'$\pi$' for i in L] </code></pre> <p>which doesn't give the expected result...</p>
<python><python-3.x><list>
2025-05-16 14:07:40
1
1,216
Andrew
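For the list question above: the attempt fails because `f'L[{i}]'` interpolates `i` into the literal text `L[...]` instead of using the value, and it never special-cases 0 and 1. A sketch:

```python
L = [0, 1, 2, 3, 4]

def pi_label(n: int) -> str:
    if n == 0:
        return "$0$"
    coeff = "" if n == 1 else str(n)   # "pi", not "1pi"
    return rf"${coeff}\pi$"

ticklabels = [pi_label(i) for i in L]
print(ticklabels)   # ['$0$', '$\\pi$', '$2\\pi$', '$3\\pi$', '$4\\pi$']
```

These strings can be passed directly to e.g. `ax.set_xticklabels(ticklabels)` in matplotlib.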
79,625,316
3,732,793
Azure Management not showing size for search service index
<p>With this code I can get the number of documents in an Azure search service index.</p> <pre><code>credential = DefaultAzureCredential() search_client = SearchClient(endpoint, index_name, credential) document_count = search_client.get_document_count() </code></pre> <p>However, this</p> <pre><code>index_stats = search_client.get_index_statistics() </code></pre> <p>fails with</p> <pre><code>'SearchClient' object has no attribute 'get_index_statistics' </code></pre> <p>even though it is mentioned in the documentation, as pointed out in the answers.</p> <p>There might be a way to get this information via the service statistics API. Is there a way to get the total size of a search service index with Python for RBAC-controlled search services?</p>
<python><azure-cognitive-search>
2025-05-16 13:45:44
2
1,990
user3732793
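A sketch of what may resolve the attribute error above: in the Python SDK, `get_index_statistics` lives on `SearchIndexClient` (the index-management client in `azure.search.documents.indexes`), not on `SearchClient`. This assumes the identity has an RBAC role that covers index statistics (e.g. a reader/contributor role on the service):

```python
from azure.identity import DefaultAzureCredential
from azure.search.documents.indexes import SearchIndexClient

credential = DefaultAzureCredential()
index_client = SearchIndexClient(endpoint, credential)

stats = index_client.get_index_statistics(index_name)
print(stats["document_count"], stats["storage_size"])  # storage size in bytes
```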
79,625,291
2,091,953
Running a script from /opt/ is failing with Docker mounts
<p>I have a Python script that creates <code>docker run</code> commands and executes them. This runs just fine from my <code>/home/myuser</code> directory, but if I move it to <code>/opt/myuser</code> (having to sudo run everything there), all of a sudden my docker run command can't find the mounted data. Specifically, my <code>docker run</code> string is</p> <pre class="lang-bash prettyprint-override"><code>docker run --gpus all --privileged -v /home/myuser/data/input:/input/ -v /home/myuser/data/output:/output/ -v /home/myuser/data/key:/key mydocker process /input/filename -k /key/mykey.key -o /output/ </code></pre> <p>And the Docker error output says:</p> <pre class="lang-json prettyprint-override"><code>&quot;error&quot;: &quot;a given key file \&quot;/key/mykey.key\&quot; does not exist&quot; </code></pre> <p>Any idea why moving this script to <code>/opt/</code> all of a sudden fails when it's producing the same exact command that successfully completes when in my <code>/home/</code> directory?</p> <p>I'm executing this from the Python script with:</p> <pre class="lang-py prettyprint-override"><code>subprocess.call(docker_run_string, shell=True, stderr=log) </code></pre> <p>May the <code>subprocess.call()</code> be the culprit?</p>
<python><linux><docker><opt>
2025-05-16 13:33:06
0
753
mjswartz
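The command string shown above looks correct, so for the question above a diagnostic sketch may help more than a guess (common suspects when moving under `/opt` and running via `sudo`: a changed environment, SELinux/AppArmor labelling on the mount source, or the generated string differing subtly from what is assumed). Verify the paths from the script's own point of view and log exactly what is executed; `docker_run_string` and `log` are the question's own variables:

```python
import os
import shlex
import subprocess

for p in ("/home/myuser/data/key/mykey.key",
          "/home/myuser/data/input",
          "/home/myuser/data/output"):
    print(p, "exists:", os.path.exists(p), "readable:", os.access(p, os.R_OK))

print(repr(docker_run_string))   # exposes stray quoting/whitespace, if any

# Avoiding shell=True removes one layer of interpretation.
subprocess.call(shlex.split(docker_run_string), stderr=log)
```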
79,625,264
5,767,511
How to find the latest version of a python package supported by currently installed packages
<p>I am working on a project with <code>packaging==21.3</code> specified in the <code>requirements.txt</code> file. I want to install black, but if I do so, it installs the latest version of black which depends on <code>packaging&gt;=22.0</code>, so it updates packaging.</p> <pre><code>$ pip install black ... Collecting packaging&gt;=22.0 (from black) Using cached packaging-25.0-py3-none-any.whl.metadata (3.3 kB) ... Installing collected packages: packaging, black Attempting uninstall: packaging Found existing installation: packaging 21.3 Uninstalling packaging-21.3: Successfully uninstalled packaging-21.3 Successfully installed black-25.1.0 packaging-25.0 </code></pre> <p>I would prefer not to modify the current packages as it's a new system and I'm not familiar enough with it to know if this will change any behaviour.</p> <p>So I want to install a version of black that is compatible with <code>packaging==21.3</code>, but the only way I can think of is to work backwards version by version until I find one.</p> <p>Is there a way to retrieve the latest version of black that is compatible with this version of packaging?</p>
<python><pip>
2025-05-16 13:18:41
1
553
dylanmorroll
79,625,228
12,184,608
Type narrowing an element of tuple based on type of other element
<p>I am looking for a way to type hint a function that returns a tuple containing a success flag and, in the case of success, <code>None</code>, or, in the case of failure, an <code>Exception</code> instance. Declaring the function as returning <code>tuple[bool, Union[Exception, None]]</code> obviously doesn't work -- there is no way for the type checker to know whether the second element of the tuple is an <code>Exception</code> or <code>None</code>, so when I write, e.g., <code>raise result[1]</code>, pyright complains that <code>None</code> does not inherit from <code>BaseException</code>.</p> <p>I have tried overloading the return value, i.e.</p> <pre class="lang-py prettyprint-override"><code> @overload def validate(value: MyClass) -&gt; tuple[Literal[True], None]: ... @overload def validate(value: MyClass) -&gt; tuple[Literal[False], Exception]: ... def validate(value: MyClass) -&gt; tuple[bool, Union[Exception, None]]: try: value.validate() except Exception as e: return False, e return True, None def do_something(value: MyClass): success, error = validate(value) if not success: raise error value.do_something() </code></pre> <p>But because the arguments are identical in both overloads, pyright seems to think I am always calling the first overload, and that the return value is always <code>Literal[True], None</code>.</p> <p>A simpler example also demonstrates the issue:</p> <pre class="lang-py prettyprint-override"><code>from typing import Literal, reveal_type type T = tuple[Literal[True], None] | tuple[Literal[False], Exception] def test(t: T): a, b = t reveal_type(a) # bool reveal_type(b) # Exception | None if not a: reveal_type(a) # Literal[False] reveal_type(b) # Exception | None raise b # &lt;-- Invalid exception class or object; &quot;None&quot; does not derive from BaseException (PylancereportGeneralTypeIssues) </code></pre> <p>This seems like a fairly common use-case, but I can't find a way to do it.</p> <p>This also doesn't work with mypy.</p>
<python><python-typing>
2025-05-16 12:55:57
2
364
couteau
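A sketch of the usual workaround for the question above: keep the tuple intact and test `t[0]` directly. Checkers narrow a union of tuple types through a literal discriminant element, but unpacking into two variables breaks the link, since `a` and `b` are then treated as independent. Pyright supports this form of narrowing; mypy's support may depend on the version:

```python
from typing import Literal

type Result = tuple[Literal[True], None] | tuple[Literal[False], Exception]

def handle(t: Result) -> None:
    if not t[0]:
        # t is narrowed to tuple[Literal[False], Exception] here,
        # so t[1] is known to be an Exception.
        raise t[1]
```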
79,625,172
2,528,063
Format number to at most n digits in python
<p>I know I can use <code>format</code> to format a number in whatever way I like. What I want is to format to <em>at most</em> n digits.</p> <p>This can be achieved using the <code>g</code>-format-specifier. <em>But</em> there's one caveat here. What if my input is a really small number, e.g. <code>-0.0000000001</code>? With <code>g</code> this prints as <code>-1e-10</code>, but I want it to return simply <code>0</code>.</p> <p>I tried to do <code>format(round(mynumber, 7))</code>, but that interestingly gave me <code>-0</code> (with a leading <code>-</code>) when <code>mynumber</code> is negative near zero.</p> <p>How can I format <code>mynumber</code> to at most 7 digits without a leading sign, if the number is near zero?</p> <p>After all, this is my input and the expected result:</p> <pre><code>-0.0000000001 -&gt; 0 0.0000000001 -&gt; 0 3 -&gt; 3 3.1415927 -&gt; 3.1415927 </code></pre>
<python><rounding><number-formatting>
2025-05-16 12:21:56
3
37,422
MakePeaceGreatAgain
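A sketch for the formatting question above: round first, then add `0.0` (IEEE 754 addition normalises `-0.0` to `0.0`), then format with a fixed number of decimals and strip trailing zeros, which also avoids `g`'s switch to scientific notation for tiny values:

```python
def fmt(x: float, digits: int = 7) -> str:
    v = round(x, digits) + 0.0                       # +0.0 turns -0.0 into 0.0
    return f"{v:.{digits}f}".rstrip("0").rstrip(".")

for x in (-0.0000000001, 0.0000000001, 3, 3.1415927):
    print(fmt(x))
# 0
# 0
# 3
# 3.1415927
```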
79,624,532
13,933,721
Python: series of `No module named <module name>` errors
<p>Consider this project tree:</p> <pre><code>project │ └── app/ │ ├── main.py ├── __init__.py │ ├── common/ │ ├── __init__.py │ └── do_common.py │ └── util/ ├── __init__.py └── do_util.py </code></pre> <p>main.py</p> <pre><code>import sys from pathlib import Path from common.do_common import foo sys.path.append(str(Path(__file__).resolve().parent)) foo() </code></pre> <p>do_common.py</p> <pre><code>def foo(): print(&quot;foo&quot;) </code></pre> <p>Run with the venv activated:</p> <pre><code>python -m app.main </code></pre> <p>I get an error <code>ModuleNotFoundError: No module named 'common'</code></p> <p>To fix this, I would need to change the import to</p> <pre><code>from .common.do_common import foo #### Adding a dot 「.」before `common` module </code></pre> <p>and then it is able to print <code>foo</code></p> <p>However, if I call <code>do_util.py</code> from <code>do_common.py</code> like below:</p> <pre><code>from util.do_util import bar def foo(): print(&quot;foo&quot;) bar() </code></pre> <p>I get <code>ModuleNotFoundError: No module named 'util'</code></p> <p>Changing the import to</p> <pre><code>from .util.do_util import bar </code></pre> <p>returns</p> <pre><code>ModuleNotFoundError: No module named 'app.common.util' </code></pre> <p>How should I fix this so that I won't need to prepend the modules with a dot?</p> <p>This is Python 3.12 on Windows 10.</p>
<python>
2025-05-16 05:30:43
1
1,047
Mr. Kenneth
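For the import question above: with `python -m app.main`, the top-level package is `app`, so the bare names `common` and `util` are not importable; also note the `sys.path.append` in `main.py` runs after the failing import, so it never takes effect. A sketch of consistent absolute imports, with the equivalent relative forms in comments:

```python
# main.py
from app.common.do_common import foo    # or: from .common.do_common import foo

foo()

# do_common.py
from app.util.do_util import bar        # or: from ..util.do_util import bar
                                         # (two dots: climb out of `common`
                                         # before descending into `util`)

def foo():
    print("foo")
    bar()
```

The single-dot form `from .util...` failed above because, inside `app.common`, one dot means "relative to `app.common`", hence the `app.common.util` in the error.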
79,624,324
6,141,238
Is there a natural way to load the data of a sqlite3 table as a dictionary of lists rather than a list of dictionaries?
<p>The following table <code>my_table</code></p> <pre><code>idx speed duration 0 420 50 1 380 40 2 390 45 </code></pre> <p>is stored in a database file <code>my_db.db</code>. Loading this table as a list of dictionaries appears straightforward: the code</p> <pre><code>conn = sqlite3.connect(folder_path + 'my_db.db') conn.row_factory = sqlite3.Row cur = conn.cursor() cur.execute(f'SELECT * FROM my_table') list_of_objs = cur.fetchall() list_of_dicts = [dict(obj) for obj in list_of_objs] cur.close() conn.close() </code></pre> <p>yields</p> <pre><code>list_of_dicts = [ {'idx': 0, 'speed': 420, 'duration': 50}, {'idx': 1, 'speed': 380, 'duration': 40}, {'idx': 2, 'speed': 390, 'duration': 45} ] </code></pre> <p>However, for my data, I would like to load the data as a dictionary of lists:</p> <pre><code>dict_of_lists = { 'idx': [0, 1, 2], 'speed': [420, 380, 390], 'duration': [50, 40, 45] } </code></pre> <p>Is there a straightforward way to do this?</p> <p>(Part of my motivation for the dictionary-of-lists representation is that it seems like it would require less RAM than a list of dictionaries when the table has many rows. But does it, or does Python somehow recognize and avoid the redundancy of keys in the list-of-dictionaries representation?)</p>
<python><database><list><sqlite><dictionary>
2025-05-16 00:36:49
2
427
SapereAude
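A sketch for the question above: fetch the rows once and pivot them, taking the column names from `cursor.description`. (On the RAM aside: each row-dict stores only pointers to shared key strings, but every dict still carries its own hash-table overhead, so a dict of lists is indeed considerably smaller for tall tables.)

```python
import sqlite3

conn = sqlite3.connect("my_db.db")
cur = conn.execute("SELECT * FROM my_table")
columns = [d[0] for d in cur.description]   # ('idx', 'speed', 'duration', ...)
rows = cur.fetchall()

dict_of_lists = {col: [row[i] for row in rows]
                 for i, col in enumerate(columns)}

cur.close()
conn.close()
```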
79,624,258
11,602,400
ctypes + cgo and ownership of strings
<p><strong>Background</strong></p> <p>I am currently using a DLL I generated with golang in python. As a sanity check for debugging I added a function called <code>return_string()</code>, which takes a pointer to a string in C (<code>char*</code>) converts to go, then returns the result. Mostly this is a debugging tool to look for encoding issues between the two languages.</p> <p><strong>Specific issue</strong></p> <p>In python I've wrapped the function, and everything works, <strong>except</strong> if I free after copying the bytes in. Here's the function:</p> <pre class="lang-py prettyprint-override"><code># import library if platform().lower().startswith(&quot;windows&quot;): lib = cdll.LoadLibrary(os.path.join(os.path.dirname(os.path.realpath(__file__)), &quot;lib.dll&quot;)) else: lib = cdll.LoadLibrary(os.path.join(os.path.dirname(os.path.realpath(__file__)), &quot;lib.so&quot;)) lib.return_string.argtypes = [c_char_p] lib.return_string.restype = c_char_p lib.FreeCString.argtypes = [c_char_p] def prepare_string(data: str | bytes) -&gt; c_char_p: &quot;&quot;&quot;Takes in a string and returns a C-compatible string&quot;&quot;&quot; if isinstance(data, str): return c_char_p(data.encode()) return c_char_p(bytes(data)) def return_string(text: str | bytes) -&gt; str: &quot;&quot;&quot;Debugging function that shows you the Go representation of a C string and returns the python string version&quot;&quot;&quot; c_input = prepare_string(text) result = lib.return_string(c_input) # This is allocated in go using essentially malloc if not result: return &quot;&quot; copied_bytes = string_at(result) # This should be a COPY into new python-managed memory afaik decoded = copied_bytes.decode(errors=&quot;replace&quot;) # Gets to here no issues lib.FreeCString(result) # Program silently fails here return copied_bytes </code></pre> <p>If I remove the call to <code>lib.FreeCString(result)</code>, everything works, but my understanding is that because the <code>result</code> variable is allocated in go with a <code>malloc()</code> call, it needs to be freed in python. But, when I free it, it's either double-freeing or free-after-use and I'm not sure which, or why.</p> <p>Do I need to free <code>result</code>? and if not, where does the pointer allocated in go get cleaned up?</p> <p><strong>Go code</strong></p> <p>I don't think it's a go-side issue, but just in case, here's the go code as well:</p> <pre class="lang-golang prettyprint-override"><code>// All code returns `unsafe.Pointer` because this is a package, and CGo breaks your types if you don't // Used to convert a C-compatible string back to itself, good for debugging encoding issues // // Parameters: // - cString: Pointer to the C string (*C.char). // // Returns: // - Pointer to a new C string with the same content (*C.char). // Note: The caller is responsible for freeing the allocated memory using FreeCString. // //export return_string func return_string(cString unsafe.Pointer) unsafe.Pointer { internalRepresentation := C.GoString((*C.char)(cString)) result := StringToCString(internalRepresentation) return result } // Convert a string to a c-compatible C-string (glorified alias for C.CString) // // Parameters: // - input: The Go string to convert. // // Returns: // - A pointer to the newly allocated C string (*C.char). // Note: The caller is responsible for freeing the allocated memory using FreeCString. func StringToCString(input string) unsafe.Pointer { return unsafe.Pointer(C.CString(input)) } // Free a previously allocated C string from Go. 
// // Parameters: // - ptr: Pointer to the C string to be freed (*C.char). // //export FreeCString func FreeCString(ptr unsafe.Pointer) { fmt.Println(&quot;FreeCString(): freeing&quot;, ptr) if ptr != nil { C.free(ptr) } } </code></pre>
<python><go><ctypes><cgo>
2025-05-15 22:34:06
1
1,481
Kieran Wood
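The silent crash above matches a classic ctypes pitfall, sketched below: declaring `restype = c_char_p` makes ctypes convert the returned pointer into a brand-new Python `bytes` object and discard the original address, so the later `FreeCString` call receives a pointer into the Python object's internal buffer, and freeing that is a free of memory the Go side never allocated. Keeping the raw address with `c_void_p` avoids this:

```python
from ctypes import c_char_p, c_void_p, string_at

lib.return_string.argtypes = [c_char_p]
lib.return_string.restype = c_void_p      # keep the raw pointer value
lib.FreeCString.argtypes = [c_void_p]

def return_string(text: str | bytes) -> str:
    data = text.encode() if isinstance(text, str) else bytes(text)
    ptr = lib.return_string(data)
    if not ptr:
        return ""
    copied = string_at(ptr)               # copy out of the Go-allocated buffer
    lib.FreeCString(ptr)                  # free the original allocation exactly once
    return copied.decode(errors="replace")
```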
79,624,041
21,370,869
Why is Maya failing to import my Python library and run a function defined within it?
<p>My library is saved at</p> <pre><code>C:\Users\user1\Documents\maya\scripts\test_lib.py </code></pre> <p>and its entire content is:</p> <pre><code>def my_func(): print(&quot;hello world!&quot;) </code></pre> <p>When I run the following in the script editor, I get an error:</p> <pre><code>import test_lib my_func() </code></pre> <blockquote> <h1>Error: NameError: name 'my_func' is not defined</h1> </blockquote> <p>I did restart Maya to make sure the changes were picked up, yet the error persists.</p> <p>I don't understand what is causing this issue, I have other libraries and scripts in the same script folder and they import just fine, some example files in my script folder:</p> <pre><code>C:\Users\user1\Documents\maya\scripts\test_lib.py C:\Users\user1\Documents\maya\scripts\grid_fill.py C:\Users\user1\Documents\maya\scripts\userSetup.py C:\Users\user1\Documents\maya\scripts\component.py C:\Users\user1\Documents\maya\scripts\quickMat.mel ... </code></pre> <p>Running just the <code>import grid_fill</code> in the script will successfully import <code>grid_fill.py</code>, and running its corresponding functions does not throw a &quot;not defined&quot; error.</p> <p>What could this be down to? I am on Maya 2026</p>
<python><maya>
2025-05-15 18:59:39
1
1,757
Ralf_Reddings
79,623,944
3,357,935
How do I request and download a CSV of email activity from the SendGrid API?
<p>I want to get a CSV of email activity from the SendGrid v3 Email Activity API.</p> <p>I tried to use the API to <a href="https://www.twilio.com/docs/sendgrid/api-reference/email-activity/request-a-csv" rel="nofollow noreferrer">request a CSV export of email activity</a>, like so:</p> <pre class="lang-py prettyprint-override"><code>import os from sendgrid import SendGridAPIClient sg = SendGridAPIClient(api_key=os.environ.get('SENDGRID_API_KEY')) response = sg.client.messages.download.post() print(response.body) </code></pre> <p>This successfully makes a POST request to <code>https://api.sendgrid.com/v3/messages/download</code>, but the response simply says the CSV will be emailed to the administrator of our SendGrid account:</p> <pre class="lang-json prettyprint-override"><code>{ &quot;status&quot;: &quot;pending&quot;, &quot;message&quot;: &quot;An email will be sent to name_of_admin@example.com when the CSV is ready to download.&quot; } </code></pre> <p>That is, it does <em>not</em> provide a link to the CSV. The only way I can find to download it is through a link emailed to the administrator a few minutes after the request was sent. I want the export to be automated so that the admin doesn't need to be involved.</p> <p>SendGrid's documentation mentions an endpoint for <a href="https://www.twilio.com/docs/sendgrid/api-reference/email-activity/download-csv" rel="nofollow noreferrer">downloading a CSV</a>, but it requires a <code>download_uuid</code> that isn't in the response from the initial request.</p> <p>SendGrid has an API to <a href="https://www.twilio.com/docs/sendgrid/api-reference/email-activity/filter-all-messages" rel="nofollow noreferrer">&quot;Filter all messages&quot;</a>, but it is limited to a <a href="https://www.twilio.com/docs/sendgrid/api-reference/email-activity/filter-all-messages#query-string" rel="nofollow noreferrer">maximum of 1,000 messages</a> and doesn't support pagination for receiving more entries beyond that. In contrast, the CSV export supports up to 1,000,000 events and contains more information about the messages.</p> <p>How can I retrieve the generated CSV of email activity through the SendGrid API in a way that doesn't require the admin to receive and forward an email?</p>
<python><sendgrid><sendgrid-api-v3>
2025-05-15 17:51:38
1
27,724
Stevoisiak
79,623,918
2,526,586
Show description of each API endpoint method for Swagger doc with flask_restx
<p>I have something like this for my Flask app using <code>flask_restx</code>:</p> <pre><code>from flask import Blueprint, request from flask_restx import Api, Resource, Namespace blueprint = Blueprint('location_api', __name__) api = Api(blueprint) ns = Namespace('location', description='Location update operations') @ns.route('/cache_update', doc={&quot;description&quot;: &quot;Location cache update operations&quot;}) class LocationCacheUpdate(Resource): @ns.doc('get_location_cache_update_progress') @ns.response(200, 'Success') def get(self): logger.info(f&quot;Location cache update progress requested&quot;) ... return {&quot;status&quot;: &quot;Location cache updating...&quot;} @ns.doc('post_location_cache_update', params={'callback': {'description':'Callback URL', 'type':'string'}}) @ns.response(200, 'Success') def post(self): logger.info(f&quot;Location cache update requested&quot;) ... return {&quot;status&quot;: &quot;Location cache update has started&quot;} api.add_namespace(ns, path='/location') </code></pre> <p>This utilises the <code>Namespace</code> decorators of <code>flask_restx</code> to help generate the swagger doc for me. Note <code>@ns.doc(...)</code> in the code above each class function.</p> <p>In the swagger doc, I managed to get a description displayed for each <strong>namespace</strong> (using <code>ns = Namespace('location', description='...')</code> ) and the <strong>same description</strong> displayed for all the endpoints under <code>class LocationCacheUpdate</code> (using <code>@ns.route(doc={&quot;description&quot;: &quot;...&quot;})</code> ). However, I want to have a different description for each <strong>method</strong> (each function under <code>class LocationCacheUpdate</code>) but I just don't see how that can be achieved.</p> <p>For example, in the swagger doc, I would expect to have different descriptions for each of these:</p> <ul> <li><code>GET /location/cache_update</code></li> <li><code>POST /location/cache_update</code></li> </ul> <p>But at the moment, these all share the same description. I can't find where this is documented. Is there a <code>Namespace</code> decorator/function I can take advantage of?</p>
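<p>A sketch of what I am hoping exists, assuming <code>@ns.doc()</code> accepts a per-method <code>description</code> keyword (and that method docstrings are also picked up as descriptions):</p> <pre><code>@ns.route('/cache_update')
class LocationCacheUpdate(Resource):
    @ns.doc('get_location_cache_update_progress', description='Report progress of a running cache update')
    @ns.response(200, 'Success')
    def get(self):
        &quot;&quot;&quot;The docstring is another place Swagger can take the method description from.&quot;&quot;&quot;
        ...
</code></pre>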
<python><flask><swagger><flask-restx>
2025-05-15 17:27:24
1
1,342
user2526586
79,623,848
16,462,878
close or not close file descriptor in a function body?
<p>I have seen code like this many times:</p> <pre><code>def content(path): return open(path).read() </code></pre> <p>I have typically encountered this construct from programmers coming from lower-level languages such as C or C++.</p> <p>More conventional constructs are instead the following:</p> <pre><code>def content(path): with open(path) as fd: return fd.read() </code></pre> <p>and the &quot;safest&quot; version</p> <pre><code>def content(path): with open(path) as fd: text = fd.read() return text </code></pre> <hr /> <p>The 1-liner is quite attractive and could often be used as a callback implementation (for example in a GUI environment), but I don't know if it is safe to use in production or if it is considered Pythonic.</p> <hr /> <p>EDIT: another example using the in-memory stream <code>io.BytesIO</code> and <code>PIL.Image</code></p> <p>Open/close version:</p> <pre><code>def array2bytes_image(a:np.array, out_format:str='PNG') -&gt; bytes: &quot;&quot;&quot;Numpy array - 1:1 -&gt; image as bytes&quot;&quot;&quot; im = Image.fromarray(a, mode='L') # 8-bit grayscale bio_im = io.BytesIO() im.save(bio_im, format=out_format.upper()) im_binary = bio_im.getvalue() bio_im.close() im.close() return im_binary </code></pre> <p>Open + GC version: (less verbose)</p> <pre><code>def array2bytes_image(a:np.array, out_format:str='PNG') -&gt; bytes: &quot;&quot;&quot;Numpy array - 1:1 -&gt; image as bytes&quot;&quot;&quot; im = Image.fromarray(a, mode='L') # 8-bit grayscale bio_im = io.BytesIO() im.save(bio_im, format=out_format.upper()) return bio_im.getvalue() </code></pre>
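<p>For completeness, a <code>with</code>-based variant of the image function, assuming the Pillow <code>Image</code> object supports the context-manager protocol (recent versions do):</p> <pre><code>def array2bytes_image(a: np.array, out_format: str = 'PNG') -&gt; bytes:
    &quot;&quot;&quot;Numpy array - 1:1 -&gt; image as bytes&quot;&quot;&quot;
    with io.BytesIO() as bio_im:
        with Image.fromarray(a, mode='L') as im:  # 8-bit grayscale
            im.save(bio_im, format=out_format.upper())
        return bio_im.getvalue()  # evaluated before the buffer is closed
</code></pre>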
<python><file-descriptor>
2025-05-15 16:42:57
0
5,264
cards
79,623,815
3,486,684
How can I automatically create a type annotation for a dataclass method based on the dataclass' members?
<p>Consider the following example:</p> <pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass import dataclasses from typing import Self @dataclass class Dataclass: a: int b: str c: list[int] def update_attrs(self, **kwargs) -&gt; Self: return self.__class__(**(dataclasses.asdict(self) | kwargs)) </code></pre> <p>I can then do something like this:</p> <pre class="lang-py prettyprint-override"><code>x = Dataclass(0, &quot;1&quot;, [2]) y = x.update_attrs(b=&quot;hello&quot;) </code></pre> <pre><code>Dataclass(a=0, b='hello', c=[2]) </code></pre> <p>I would like to automatically annotate the <code>update_attrs</code> method, so that instead of taking <code>**kwargs</code> as it currently does, it takes exactly: <code>a: int, b: str, c: list[int]</code> without writing it all out manually (one can imagine that in our actual usecase there are a lot of attributes to type out).</p> <p>This way, I get the benefit of autocompletion in my editor when calling <code>Dataclass.update_attrs</code>.</p> <p>Is something like this possible?</p>
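<p>For context, the closest thing I know of is the standard-library helper below, which type checkers special-case: it validates keyword names and types at the call site, although it does not surface them as an explicit signature (a sketch, assuming shallow replacement is acceptable):</p> <pre class="lang-py prettyprint-override"><code>import dataclasses

x = Dataclass(0, '1', [2])
y = dataclasses.replace(x, b='hello')  # checkers flag unknown fields or wrong types
</code></pre>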
<python><python-typing><python-dataclasses>
2025-05-15 16:17:14
0
4,654
bzm3r
79,623,779
4,444,757
How can I pass the variable value to this string in python?
<p>I want to assign a variable to the stop loss and take profit in the following payload. According to the <a href="https://bingx-api.github.io/docs/#/en-us/swapV2/trade-api.html#Place%20order" rel="nofollow noreferrer">exchange documentation</a>, these values are constants, as follows (for example: 102217.88):</p> <pre><code>paramsMap = { &quot;symbol&quot;: symbol, &quot;side&quot;: side, &quot;positionSide&quot;: positionSide, &quot;type&quot;: &quot;MARKET&quot;, &quot;quantity&quot;: amount, &quot;stopLoss&quot;: &quot;{\&quot;type\&quot;: \&quot;STOP_MARKET\&quot;, \&quot;stopPrice\&quot;: 102217.88,\&quot;workingType\&quot;:\&quot;MARK_PRICE\&quot;}&quot;, &quot;takeProfit&quot;: &quot;{\&quot;type\&quot;: \&quot;TAKE_PROFIT_MARKET\&quot;, \&quot;stopPrice\&quot;: 102363.2178,\&quot;price\&quot;: 102363.2178,\&quot;workingType\&quot;:\&quot;MARK_PRICE\&quot;}&quot; } </code></pre> <p>When I use the <code>str.format()</code> function as below:</p> <pre><code>&quot;stopLoss&quot;: &quot;{\&quot;type\&quot;: \&quot;STOP_MARKET\&quot;, \&quot;stopPrice\&quot;: {0},\&quot;workingType\&quot;:\&quot;MARK_PRICE\&quot;}&quot;.format(str(sl)), </code></pre> <p>I get this error:</p> <pre><code>KeyError: '&quot;type&quot;' </code></pre> <p>So, how can I pass the sl variable value to this string?</p> <p><strong>Edit:</strong> For clarification, here is the whole of my function that places an order on the exchange.</p> <pre><code>def PlaceOrder(symbol, side, positionSide, amount, tp, sl): payload = {} path = '/openApi/swap/v2/trade/order' method = &quot;POST&quot; paramsMap = { &quot;symbol&quot;: symbol, &quot;side&quot;: side, &quot;positionSide&quot;: positionSide, &quot;type&quot;: &quot;MARKET&quot;, &quot;quantity&quot;: amount, &quot;stopLoss&quot;: &quot;{\&quot;type\&quot;: \&quot;STOP_MARKET\&quot;, \&quot;stopPrice\&quot;: {},\&quot;workingType\&quot;:\&quot;MARK_PRICE\&quot;}&quot;.format(str(sl)), &quot;takeProfit&quot;: &quot;{\&quot;type\&quot;: \&quot;TAKE_PROFIT_MARKET\&quot;, \&quot;stopPrice\&quot;: {},\&quot;price\&quot;: 102363.2178,\&quot;workingType\&quot;:\&quot;MARK_PRICE\&quot;}&quot;.format(str(tp)) } paramsStr = request_client.parseParam(paramsMap) response = request_client.send_request(method, path, paramsStr, payload) return response.json() </code></pre> <p>According to JonSG's comment, I used doubled braces (<code>{{}}</code>); however, I get this error again:</p> <pre><code>--------------------------------------------------------------------------- KeyError Traceback (most recent call last) Cell In[4], line 5 3 tp, sl = Bingx.RiskReward(symbol, buy=True, risk=0.12, reward=0.06,leverage=30) 4 print(f&quot;tp:{tp} sl:{sl}&quot;) ----&gt; 5 response = Bingx.PlaceOrder(symbol, 'BUY', 'LONG', 0.0001, tp, sl) 6 response Cell In[1], line 465, in Bingx.PlaceOrder(symbol, side, positionSide, amount, tp, sl) 457 path = '/openApi/swap/v2/trade/order' 458 method = &quot;POST&quot; 459 paramsMap = { 460 &quot;symbol&quot;: symbol, 461 &quot;side&quot;: side, 462 &quot;positionSide&quot;: positionSide, 463 &quot;type&quot;: &quot;MARKET&quot;, 464 &quot;quantity&quot;: amount, --&gt; 465 &quot;stopLoss&quot;: &quot;{\&quot;type\&quot;: \&quot;STOP_MARKET\&quot;, \&quot;stopPrice\&quot;: {{}},\&quot;workingType\&quot;:\&quot;MARK_PRICE\&quot;}&quot;.format(str(sl)), 466 &quot;takeProfit&quot;: &quot;{\&quot;type\&quot;: \&quot;TAKE_PROFIT_MARKET\&quot;, \&quot;stopPrice\&quot;: {{}},\&quot;workingType\&quot;:\&quot;MARK_PRICE\&quot;}&quot;.format(str(tp)) 467 } 468 paramsStr = 
request_client.parseParam(paramsMap) 469 response = request_client.send_request(method, path, paramsStr, payload) KeyError: '&quot;type&quot;' </code></pre>
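<p>For what it's worth, I believe the <code>KeyError</code> happens because <code>str.format()</code> treats the JSON's own braces as replacement fields, so the leading <code>{\&quot;type\&quot;...</code> is parsed as a field named <code>&quot;type&quot;</code>. A sketch of a <code>json.dumps</code> approach that sidesteps brace escaping entirely (assuming the exchange accepts the same JSON string):</p> <pre><code>import json

paramsMap = {
    # ... other parameters as before ...
    &quot;stopLoss&quot;: json.dumps({&quot;type&quot;: &quot;STOP_MARKET&quot;, &quot;stopPrice&quot;: sl, &quot;workingType&quot;: &quot;MARK_PRICE&quot;}),
    &quot;takeProfit&quot;: json.dumps({&quot;type&quot;: &quot;TAKE_PROFIT_MARKET&quot;, &quot;stopPrice&quot;: tp, &quot;price&quot;: tp, &quot;workingType&quot;: &quot;MARK_PRICE&quot;}),
}
</code></pre>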
<python><string><variables>
2025-05-15 15:58:21
1
1,290
Sadabadi
79,623,688
12,881,307
Subclassing Pandas DataFrame to obtain column names autocompletion in IDE
<p>I am working with Pandas DataFrames in <code>.py</code> files. I would like to have column name autocompletions in VSCode, similar to how it works with <code>.ipynb</code> files.</p> <p>I remember reading I could subclass a <code>DataFrame</code> to achieve this. The post showed a code snippet similar to the following:</p> <pre class="lang-py prettyprint-override"><code>class TypedDataFrame(pd.DataFrame): day: int country: str weather: str df_path: str = &quot;...&quot; df: TypedDataFrame = pd.read_csv(df_path, delimiter=&quot;,&quot;) # now typing &quot;df[&quot; would pull up autocomplete suggestions for the above annotations </code></pre> <p>I haven't been able to find the post again. Most of the questions relating to this issue date back to 2023 and suggest using <code>Pandera</code>, <code>StaticFrame</code> and other typing libraries, so the situation may have changed since then.</p> <p>Is there a way to provide type hinting using only Pandas in 2025?</p>
<python><pandas><dataframe>
2025-05-15 15:19:55
1
316
Pollastre
79,623,671
23,196,983
Find the largest itemset in agroup of itemsets with the same support efficiently
<p>I have a spark-DataFrame with two columns, ID and Items. Items is a list of strings.</p> <p>My goal is to find frequent subsets of the itemsets in my data. I am familiar with apriori and fp-growth and used the latter to find frequent subsets. But I have one more requirement: in cases where multiple itemsets along a path in the fp-tree have the same support, I only want to keep the largest itemset. In my example, [&quot;a&quot;] has the same support as [&quot;a&quot;, &quot;b&quot;]. Thus I am only interested in keeping the latter.</p> <p>Even that is in principle not a problem; the simplest solution is grouping itemsets by support, selecting the one with the most items in it, and deleting all subsets of that set. My problem is that the number of possible combinations is extremely large. I have 60 different items and itemsets of up to 50 items.</p> <p>So my question is, is there any way to make this more efficient?</p> <p>Here is a basic example of what I am initially doing:</p> <pre><code>from pyspark.ml.fpm import FPGrowth from pyspark.sql import functions as F from pyspark.sql.functions import explode, udf from pyspark.sql.types import ArrayType, IntegerType, StringType, StructType, StructField basic_data = {100: [['a', 'b']], 101: [['a', 'b', 'c']], 102: [['a', 'b', 'c', 'd']], 103: [['a', 'b', 'c']], 104: [['a', 'b']], 105: [['a', 'b']], 106: [['c', 'e']], 107: [['c', 'e']], 108: [[ 'c', 'e']], } data = [(key, value[0]) for key, value in basic_data.items()] schema = ['id', 'items'] df = spark.createDataFrame(data, schema=schema) fpGrowth = FPGrowth(itemsCol = 'items', minSupport=0.3) model = fpGrowth.fit(df) # Display frequent itemsets frequentItemsets = model.freqItemsets frequentItemsets.show() </code></pre> <p>Which gives me this result:</p> <pre><code>+---------+----+ | items|freq| +---------+----+ | [a]| 6| | [b]| 6| | [b, a]| 6| | [c]| 6| | [c, b]| 3| |[c, b, a]| 3| | [c, a]| 3| | [e]| 3| | [e, c]| 3| +---------+----+ </code></pre> <p>I then group the data by frequency:</p> <pre><code>df = frequentItemsets.groupBy('freq').agg(F.collect_list('items').alias('items_list')) df.show(truncate=False) +----+----------------------------------------+ |freq|items_list | +----+----------------------------------------+ |6 |[[b], [b, a], [a], [c]] | |3 |[[c, b], [c, b, a], [c, a], [e], [e, c]]| +----+----------------------------------------+ </code></pre> <p>Next, for each frequency I remove all itemsets that are subsets of another itemset with the same frequency. So for instance I remove [b] and [a] from the first row, because they are both subsets of [a,b]. My code for this is not the best, but I hope you get the idea:</p> <pre><code>def getlongestitem(itemlist:list)-&gt;int: longestlen = 0 longestitem_index = -1 for i in range(len(itemlist)): if len(itemlist[i]) &gt; longestlen: longestlen = len(itemlist[i]) longestitem_index = i return longestitem_index def find_subsets(itemlist:list, index:int)-&gt;list: ''' Find all subsets of itemlist that are subsets of itemlist[index] Return a list of indexes of the subsets ''' subset_indexes = [] longestitem = itemlist[index] for i in range(len(itemlist)): if all(element in longestitem for element in itemlist[i]): subset_indexes.append(i) return subset_indexes def extract_items(itemlist:list)-&gt;list: ''' Find the largest item in itemlist and find all subsets of itemlist that are subsets of the largest item. Remove the largest item and all its subsets from itemlist. Repeat until itemlist is empty. Return a list of the largest items. 
''' items = [] processed_indices = set() # Track indices of processed items while len(processed_indices) &lt; len(itemlist): longestitem_index = getlongestitem(itemlist) while longestitem_index in processed_indices: itemlist[longestitem_index] = [] longestitem_index = getlongestitem(itemlist) subset_indexes = find_subsets(itemlist, longestitem_index) items.append(itemlist[longestitem_index]) processed_indices.update(subset_indexes) return items extract_items_udf = udf(extract_items, ArrayType(ArrayType(StringType()))) df_with_items = df.withColumn(&quot;extracted_items&quot;, extract_items_udf(df[&quot;items_list&quot;])) result_df = df_with_items.select(df_with_items[&quot;freq&quot;], explode(df_with_items[&quot;extracted_items&quot;]).alias(&quot;items&quot;)) result_df.show(truncate=False) </code></pre> <p>The results then look like this:</p> <pre><code>+----+---------+ |freq|items | +----+---------+ |3 |[c, b, a]| |3 |[e, c] | |6 |[b, a] | |6 |[c] | +----+---------+ </code></pre> <p>Now that is all fine and runs - on a small dataset. But my issue is that the number of items I have is around 60 and thus the number of combinations is extremely large. fpgrowth delivered results, but the next step, removing those subsets, crashes because I run out of memory. I tried batching this by looking at one frequency at a time, but even then I have the problem that there are so many itemsets with the same frequency that I cannot process this well.</p>
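<p>One direction I am exploring is to encode each itemset as a bitmask over the roughly 60 distinct items, so that the subset test becomes a single integer operation. A sketch of the per-frequency filter (plain Python, quadratic in the group size, but with very cheap comparisons):</p> <pre><code>def maximal_itemsets(itemsets: list) -&gt; list:
    &quot;&quot;&quot;Keep only itemsets that are not subsets of another itemset in the group.&quot;&quot;&quot;
    # map each distinct item to a bit position (fits in one int for ~60 items)
    vocab = {item: i for i, item in enumerate({x for s in itemsets for x in s})}
    masks = [(sum(1 &lt;&lt; vocab[x] for x in s), s) for s in itemsets]
    masks.sort(key=lambda t: bin(t[0]).count('1'), reverse=True)  # largest first
    kept = []
    for mask, items in masks:
        # mask &amp; k == mask  means this itemset is a subset of a kept (larger) one
        if not any(mask &amp; k == mask for k, _ in kept):
            kept.append((mask, items))
    return [items for _, items in kept]
</code></pre>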
<python><algorithm><pyspark><data-mining><fpgrowth>
2025-05-15 15:09:44
1
310
Frede
79,623,568
2,263,683
How to export dependencies to requirements.txt and requirements-dev.txt using uv-pre-commit
<p>I'm using <code>uv</code> as my package manager in my Python project. My <code>pyproject.toml</code> file looks like this:</p> <pre class="lang-toml prettyprint-override"><code>[project] name = &quot;some-name&quot; version = &quot;0.1.0&quot; readme = &quot;README.md&quot; requires-python = &quot;&gt;=3.11&quot; dependencies = [ &lt;some-dependencies&gt; ] [dependency-groups] dev = [ &lt;some-dev-dependencies&gt; ] </code></pre> <p>I'm using <a href="https://github.com/astral-sh/uv-pre-commit" rel="nofollow noreferrer">uv-pre-commit</a> to export dependencies to a <code>requirements.txt</code> file. But what I really need is to separate general and dev dependencies into separate files. Right now I have this in my <code>.pre-commit-config.yaml</code>:</p> <pre class="lang-yaml prettyprint-override"><code>repos: - repo: https://github.com/astral-sh/uv-pre-commit # uv version. rev: 0.6.17 hooks: - id: uv-export - id: uv-lock </code></pre> <p>But this would export everything in one single <code>requirements.txt</code> file.</p> <p>From what I see in the docs, I can do something like this:</p> <pre class="lang-yaml prettyprint-override"><code> - repo: https://github.com/astral-sh/uv-pre-commit # uv version. rev: 0.7.3 hooks: # Run the pip compile - id: pip-compile name: pip-compile requirements.in args: [requirements.in, -o, requirements.txt] - id: pip-compile name: pip-compile requirements-dev.in args: [requirements-dev.in, -o, requirements-dev.txt] files: ^requirements-dev\.(in|txt)$ </code></pre> <p>But I'm not sure if this really fits my case because I don't have <code>*.in</code> files. So I can't figure out how to deal with a <code>pyproject.toml</code> file in <code>uv-pre-commit</code>.</p> <p>Does anyone have an idea how to separate them?</p>
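<p>A sketch of the direction I am considering: run the <code>uv-export</code> hook twice with different arguments. This assumes the hook forwards its <code>args</code> to <code>uv export</code> and that the installed uv supports the <code>--no-dev</code> / <code>--only-dev</code> flags:</p> <pre class="lang-yaml prettyprint-override"><code>repos:
  - repo: https://github.com/astral-sh/uv-pre-commit
    rev: 0.6.17
    hooks:
      - id: uv-lock
      - id: uv-export
        name: export runtime dependencies
        args: [&quot;--no-dev&quot;, &quot;--output-file=requirements.txt&quot;]
      - id: uv-export
        name: export dev dependencies
        args: [&quot;--only-dev&quot;, &quot;--output-file=requirements-dev.txt&quot;]
</code></pre>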
<python><requirements.txt><pre-commit.com><uv>
2025-05-15 14:22:36
1
15,775
Ghasem
79,623,523
7,692,855
Add Column Names to retrieved data from pymysql
<p>I have a simple SQL command via pymysql to retrieve data:</p> <pre><code>db = pymysql.connect( db=DATABASE, passwd=DB_PASSWORD, host=DB_HOST, user=DB_USER, ) cursor = db.cursor() cursor.execute(&quot;&quot;&quot; select flight, aircraft_model from flights where flights.aircraft_type in ('boeing', 'airbus') &quot;&quot;&quot;) res = cursor.fetchall() </code></pre> <p>This will return an array of tuples, e.g.:</p> <pre><code>[ ('tk123', 'b737'), ('us123', 'a230') ] </code></pre> <p>How can I make the first tuple be the column names, e.g.:</p> <pre><code>[ ('flight', 'aircraft_model') ('tk123', 'b737'), ('us123', 'a230') ] </code></pre> <p>I thought it was quite obvious from the above, but I am not looking for something similar to <a href="https://stackoverflow.com/questions/5010042/mysql-get-column-name-or-alias-from-query">MySQL: Get column name or alias from query</a>, as that has a different output format than the one I specified.</p>
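<p>For reference, a sketch using the DB-API <code>cursor.description</code> attribute, where the first element of each entry is the column name:</p> <pre><code>col_names = tuple(col[0] for col in cursor.description)
res = [col_names, *cursor.fetchall()]
# [('flight', 'aircraft_model'), ('tk123', 'b737'), ('us123', 'a230')]
</code></pre>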
<python><pymysql>
2025-05-15 14:01:58
1
1,472
user7692855
79,623,319
2,410,605
Locate link that is not an anchor and does not have unique identifiers other than text
<p>I have to log into a website and navigate to a specific link. The content on the site is dynamic and tends to move, so I don't really trust using an XPath. Also, the link is not in an anchor, does not have an ID, and has a generic class. As best as I can tell the only thing I can trust is the text field:</p> <pre class="lang-html prettyprint-override"><code>&lt;div class=&quot;prevent-select&quot;&gt; &lt;div&gt;&lt;/div&gt; &lt;span class=&quot;tyl-list-item__text menuTitleClass&quot; tabindex=&quot;0&quot; data-testid=&quot;hub__tylermenu__node__title&quot; text=&quot;Employee Job/Salary&quot; aria-describedby=&quot;tcw-tooltip-56&quot;&gt;Employee Job/Salary&lt;/span&gt; &lt;tcw-tooltip id=&quot;tcw-tooltip-56&quot; style=&quot;border: 0px; clip: rect(0px, 0px, 0px, 0px); height: 1px; margin: -1px; overflow: hidden; padding: 0px; position: absolute; width: 1px; outline: 0px; appearance: none;&quot;&gt;Enterprise ERP&amp;gt;Human Capital Management&amp;gt;Payroll&amp;gt;Employee Maintenance&amp;gt;Employee Job/Salary&lt;/tcw-tooltip&gt; &lt;/div&gt; </code></pre> <p>Do I just not understand how XPaths work -- is that the actual solution? I'm trying to use an XPath currently just to see it work and not having success. I'm inspecting the link and then selecting Copy XPath. Here's my code I'm using and the link is being sent to the url_search variable:</p> <pre><code>#1. Go to Munis Login Page and enter username browser = webdriver.Chrome(options=options) browser.get(&quot;&lt;website http&gt;&quot;) user_val = WebDriverWait(browser, 10).until(EC.visibility_of_element_located((By.ID, &quot;idp-discovery-username&quot;))) user_val.send_keys(usr) user_val.send_keys(Keys.ENTER) #2. Enter Password on the next screen pwd_val = WebDriverWait(browser, 10).until(EC.visibility_of_element_located((By.ID, &quot;i0118&quot;))) pwd_val.send_keys(pwd) pwd_val.send_keys(Keys.ENTER) #3. Locate the Employee Job/Salary link and select it # To get to the URL follow the path: Select Human Capital Management --&gt; Payroll --&gt; Employee Maintenance --&gt; Employee Job/Salary url_search = WebDriverWait(browser, 15).until(EC.element_to_be_clickable((By.XPATH, '//*[@id=&quot;FAV_f5284d90-9650-463b-aa03-67827413f2f6&quot;]/tcw-list/tcw-expansion-panel/tcw-list/tcw-list-item[1]//div/div'))).click() </code></pre>
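<p>A sketch of the text-based locator I am trying to move towards, assuming the visible text (or the <code>data-testid</code> attribute) is stable across page loads:</p> <pre><code>url_search = WebDriverWait(browser, 15).until(
    EC.element_to_be_clickable((
        By.XPATH,
        &quot;//span[@data-testid='hub__tylermenu__node__title' and normalize-space(text())='Employee Job/Salary']&quot;,
    ))
)
url_search.click()
</code></pre>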
<python><selenium-webdriver>
2025-05-15 12:24:55
1
657
JimmyG
79,623,174
7,321,700
Calculating a pct_change between 3 values in a pandas series, where one or more of these values can be nan
<p><strong>Scenario:</strong> I have a pandas series that contains 3 values. These values can vary between nan, 0 and any value above zero. I am trying to get the pct_change among the series whenever possible.</p> <p><strong>Examples:</strong></p> <pre><code>[0,nan,50] [0,0,0] [0,0,50] [nan,nan,50] [nan,nan,0] [0,0,nan] [0,nan,0] </code></pre> <p><strong>What I tried:</strong> from other SO questions I was able to come up with methods either trying to ignore the nan or shifting, but these can potentially yield a result with empty values. Ideally, if a result cannot be calculated, I would like to output a 0.</p> <p><strong>Code tried:</strong></p> <pre><code>series_test = pd.Series([0,None,50]) series_test.pct_change().where(series_test.notna()) # tested but gives only NaN or inf series_test.pct_change(fill_method=None)[series_test.shift(2).notnull()].dropna() # tested but gives empty result </code></pre> <p><strong>Question:</strong> What would be the correct way to approach this?</p> <p><strong>Expected outputs:</strong></p> <pre><code>[0,nan,50] - 0 (undefined case) [0,0,0] - 0 (undefined case) [0,0,50] - 0 (undefined case) [nan,nan,50] - 0 (undefined case) [nan,nan,0] - 0 (undefined case) [0,0,nan] - 0 (undefined case) [0,nan,0] - 0 (undefined case) [1,nan,5] - 400% [0,1,5] - 400% [1,2,nan] - 100% [1,1.3,1.8] - 80% </code></pre>
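<p>Based on the expected outputs above, the rule seems to be: drop NaNs, ignore leading zeros (no defined base), require at least two remaining values, and take the change from the first to the last. A sketch of that logic:</p> <pre><code>import pandas as pd

def safe_pct_change(s: pd.Series) -&gt; float:
    vals = s.dropna()
    vals = vals[vals.ne(0).cummax()]  # strip leading zeros (undefined base)
    if len(vals) &lt; 2:
        return 0.0  # undefined case
    return (vals.iloc[-1] / vals.iloc[0] - 1) * 100

safe_pct_change(pd.Series([0, 1, 5]))      # 400.0
safe_pct_change(pd.Series([1, 1.3, 1.8]))  # 80.0
</code></pre>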
<python><pandas>
2025-05-15 11:03:30
1
1,711
DGMS89
79,623,173
12,013,353
Can you change the default tick density when creating a matplotlib plot?
<p>When you create a plot with <code>matplotlib.pyplot</code> (<code>plt</code>), and call the grid with <code>plt.grid()</code>, from what I understand a call is made to the <code>matplotlib.ticker.Locator</code> class. I was wondering if there is a simple way, through some parameter, to change the default behaviour so the grid is always denser or sparser. I have checked the documentation for this class but didn't catch anything.<br /> I know how to set the ticks through <code>.set_xticks</code> and <code>.set_xticklabels</code>; my question is whether the default behaviour can be changed.</p>
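<p>The closest thing I have found so far is the per-axes shortcut below, which forwards <code>nbins</code> to the underlying <code>MaxNLocator</code>; it is not a global default, but it avoids setting the tick positions manually:</p> <pre><code>import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot(range(10))
ax.locator_params(axis='both', nbins=20)  # larger nbins gives a denser grid
ax.grid(True)
plt.show()
</code></pre>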
<python><matplotlib><xticks>
2025-05-15 11:03:22
1
364
Sjotroll
79,622,860
8,831,116
How to import submodules under tests/ and still run only tests in subdirectory in Python
<p>I have a the following test folder layout:</p> <pre><code>tests ├── __init__.py ├── conftest.py ├── data_validation │ └── test_data_validation.py └── per_vendor ├── __init__.py ├── vendor_a │ ├── conftest.py │ └── test_vendor_a.py └── vendor_b ├── __init__.py ├── conftest.py └── test_vendor_b.py </code></pre> <p>In <code>tests/conftest.py</code> I do:</p> <pre><code>import pytest from tests import per_vendor @pytest.fixture def database_with_data(empty_database: Engine) -&gt; Engine: helpers = [ per_vendor.vendor_a.conftest.fill_with_data_for_a, per_vendor.vendor_b.conftest.fill_with_data_for_b, ] for cur in helpers: cur(empty_database) return empty_database </code></pre> <p>Here, <code>empty_database</code> is another fixture providing an empty database for my tests.</p> <p>This setup works if I run all tests: <code>python -m pytest tests/</code>. However, if I only want to run tests in a specific subfolder, e.g. <code>python -m pytest tests/data_validation/</code>, all tests relying on the fixture <code>database_with_data</code> fail due to an error: <code>AttributeError: module 'tests.per_vendor' has no attribute 'vendor_a'</code>.</p> <p>Note that <code>tests.per_vendor</code> still can be imported. However, the submodules there are not picked up.</p> <p>I'd like to understand better why in one case the submodules are picked up and in the other they aren't. Also, if possible, I'd like to find a solution that allows the modular setup and running tests in subfolders.</p>
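<p>A sketch of the explicit-import variant I am considering as a fallback, assuming every directory involved (including <code>vendor_a</code>) gets an <code>__init__.py</code> so the submodules are importable without relying on pytest's collection having touched them:</p> <pre><code>import pytest
from tests.per_vendor.vendor_a.conftest import fill_with_data_for_a
from tests.per_vendor.vendor_b.conftest import fill_with_data_for_b

@pytest.fixture
def database_with_data(empty_database: Engine) -&gt; Engine:  # Engine as in the original fixture
    for fill in (fill_with_data_for_a, fill_with_data_for_b):
        fill(empty_database)
    return empty_database
</code></pre>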
<python><pytest>
2025-05-15 07:59:18
1
858
Max Görner
79,622,744
12,415,855
Cannot find shadow-root using selenium?
<p>I am trying to find a shadow root on a website and click a button using the following code:</p> <pre><code>import time from selenium import webdriver from selenium.webdriver.chrome.options import Options from selenium.webdriver.chrome.service import Service from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.common.by import By print(f&quot;Checking Browser driver...&quot;) options = Options() options.add_argument(&quot;start-maximized&quot;) options.add_argument('--log-level=3') options.add_experimental_option(&quot;prefs&quot;, {&quot;profile.default_content_setting_values.notifications&quot;: 1}) options.add_experimental_option(&quot;excludeSwitches&quot;, [&quot;enable-automation&quot;]) options.add_experimental_option('excludeSwitches', ['enable-logging']) options.add_experimental_option('useAutomationExtension', False) options.add_argument('--disable-blink-features=AutomationControlled') srv=Service() driver = webdriver.Chrome (service=srv, options=options) link = &quot;https://www.arbeitsagentur.de/jobsuche/suche?wo=Berlin&amp;angebotsart=1&amp;was=Gastronomie%20-%20Minijob&amp;umkreis=50&quot; driver.get (link) time.sleep(5) shadowHost = driver.find_element(By.XPATH,'//bahf-cd-modal[@class=&quot;modal-open sc-bahf-cd-modal-h sc-bahf-cd-modal-s hydrated&quot;]') shadowRoot = shadowHost.shadow_root shadowRoot.find_element(By.CSS_SELECTOR, &quot;button[data-testid='bahf-cookie-disclaimer-btn-alle']&quot;).click() input(&quot;Press!&quot;) </code></pre> <p>But I always get this error:</p> <pre><code>(selenium) C:\DEVNEU\Fiverr2025\TRY\hedifeki&gt;python test.py Checking Browser driver... Press Traceback (most recent call last): File &quot;C:\DEVNEU\Fiverr2025\TRY\hedifeki\test.py&quot;, line 26, in &lt;module&gt; shadowHost = driver.find_element(By.XPATH,'//bahf-cd-modal[@class=&quot;modal-open sc-bahf-cd-modal-h sc-bahf-cd-modal-s hydrated&quot;]') File &quot;C:\DEVNEU\.venv\selenium\Lib\site-packages\selenium\webdriver\remote\webdriver.py&quot;, line 770, in find_element return self.execute(Command.FIND_ELEMENT, {&quot;using&quot;: by, &quot;value&quot;: value})[&quot;value&quot;] ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\DEVNEU\.venv\selenium\Lib\site-packages\selenium\webdriver\remote\webdriver.py&quot;, line 384, in execute self.error_handler.check_response(response) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^ File &quot;C:\DEVNEU\.venv\selenium\Lib\site-packages\selenium\webdriver\remote\errorhandler.py&quot;, line 232, in check_response raise exception_class(message, screen, stacktrace) selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {&quot;method&quot;:&quot;xpath&quot;,&quot;selector&quot;:&quot;//bahf-cd-modal[@class=&quot;modal-open sc-bahf-cd-modal-h sc-bahf-cd-modal-s hydrated&quot;]&quot;} (Session info: chrome=136.0.7103.94); For documentation on this error, please visit: https://www.selenium.dev/documentation/webdriver/troubleshooting/errors#no-such-element-exception Stacktrace: GetHandleVerifier [0x00007FF732A4CF65+75717] GetHandleVerifier [0x00007FF732A4CFC0+75808] </code></pre> <p>How can I click the button in this shadow root?</p>
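<p>One variant I am testing replaces the brittle class-based XPath with an explicit wait on the custom-element tag itself (a sketch; I am assuming the host simply has not rendered yet at the fixed sleep, or that the class list differs between loads):</p> <pre><code>from selenium.webdriver.support import expected_conditions as EC

shadowHost = WebDriverWait(driver, 15).until(
    EC.presence_of_element_located((By.CSS_SELECTOR, &quot;bahf-cd-modal&quot;))
)
shadowRoot = shadowHost.shadow_root
WebDriverWait(driver, 15).until(lambda d: shadowRoot.find_element(
    By.CSS_SELECTOR, &quot;button[data-testid='bahf-cookie-disclaimer-btn-alle']&quot;)
).click()
</code></pre>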
<python><selenium-webdriver><shadow-root>
2025-05-15 06:31:41
4
1,515
Rapid1898
79,622,730
6,930,340
Split an array in a polars dataframe into regular columns
<p>I have a <code>pl.DataFrame</code> with an array column. I need to split the array columns into traditional columns while assigning headers at the same time.</p> <pre><code>import polars as pl df = pl.DataFrame( { &quot;first_last&quot;: [ [&quot;Anne&quot;, &quot;Adams&quot;], [&quot;Brandon&quot;, &quot;Branson&quot;], [&quot;Camila&quot;, &quot;Campbell&quot;], [&quot;Dennis&quot;, &quot;Doyle&quot;], ], }, schema={ &quot;first_last&quot;: pl.Array(pl.String, 2), }, ) shape: (4, 1) ┌────────────────────────┐ │ first_last │ │ --- │ │ array[str, 2] │ ╞════════════════════════╡ │ [&quot;Anne&quot;, &quot;Adams&quot;] │ │ [&quot;Brandon&quot;, &quot;Branson&quot;] │ │ [&quot;Camila&quot;, &quot;Campbell&quot;] │ │ [&quot;Dennis&quot;, &quot;Doyle&quot;] │ └────────────────────────┘ </code></pre> <p>I need it like this:</p> <pre><code>shape: (4, 2) ┌─────────┬──────────┐ │ first ┆ last │ │ --- ┆ --- │ │ str ┆ str │ ╞═════════╪══════════╡ │ Anne ┆ Adams │ │ Brandon ┆ Branson │ │ Camila ┆ Campbell │ │ Dennis ┆ Doyle │ └─────────┴──────────┘ </code></pre>
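<p>A sketch of the direction I have been looking at, assuming a reasonably recent Polars where the <code>Array</code> namespace exposes <code>.get()</code>:</p> <pre><code>result = df.select(
    pl.col(&quot;first_last&quot;).arr.get(0).alias(&quot;first&quot;),
    pl.col(&quot;first_last&quot;).arr.get(1).alias(&quot;last&quot;),
)
</code></pre>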
<python><dataframe><python-polars><polars>
2025-05-15 06:20:15
1
5,167
Andi
79,622,589
3,161,801
NDB Python Error returning object has no attribute 'connection_from_host'
<p>I have the code below which is built on top of ndb.</p> <p>When running I receive the two errors below.</p> <p>Can I ask for some guidance, specifically what is the <code>connection_from_host</code> referring to?</p> <pre><code>import flask import config import util app = flask.Flask(__name__) from google.appengine.api import app_identity from google.appengine.api import taskqueue, search, memcache from apiclient.discovery import build, HttpError from google.cloud import ndb #from oauth2client.client import GoogleCredentials from apiclient.http import MediaIoBaseUpload from datetime import datetime, timedelta from functools import partial from io import BytesIO import os from os.path import splitext, basename from model import Config from model import VideosToCollections from pytz import timezone import datetime import httplib2 import iso8601 import time import requests import requests_toolbelt.adapters.appengine requests_toolbelt.adapters.appengine.monkeypatch() from operator import attrgetter import model from model import CallBack import re import config import google.appengine.api client = ndb.Client() def ndb_wsgi_middleware(wsgi_app): def middleware(environ, start_response): with client.context(): return wsgi_app(environ, start_response) return middleware app.wsgi_app = ndb_wsgi_middleware(google.appengine.api.wrap_wsgi_app(app.wsgi_app)) @app.route('/collectionsync/', methods=['GET']) #@ndb.transactional def collectionsync(): collection_dbs, collection_cursor = model.Collection.get_dbs( order='name' ) </code></pre> <p>This returns:</p> <blockquote> <p>/layers/google.python.pip/pip/lib/python3.12/site-packages/urllib3/contrib/appengine.py:111: AppEnginePlatformWarning: urllib3 is using URLFetch on Google App Engine sandbox instead of sockets. To use sockets directly instead of URLFetch see <a href="https://urllib3.readthedocs.io/en/1.26.x/reference/urllib3.contrib.html" rel="nofollow noreferrer">https://urllib3.readthedocs.io/en/1.26.x/reference/urllib3.contrib.html</a>.</p> <p>google.api_core.exceptions.RetryError: Maximum number of 3 retries exceeded while calling &lt;function make_call..rpc_call at 0x3ee3d42d6840&gt;, last exception: 503 Getting metadata from plugin failed with error: '_AppEnginePoolManager' object has no attribute 'connection_from_host'</p> </blockquote>
<python><google-app-engine><ndb>
2025-05-15 03:55:50
1
775
ffejrekaburb
79,622,553
2,278,546
nodriver crashes with infinite recursion in headless mode when running browser.get()
<p>Here is the code I'm trying to run:</p> <pre><code>import nodriver as uc async def main(): browser = await uc.start(headless=True) page = await browser.get('https://www.nowsecure.nl') if __name__ == '__main__': uc.loop().run_until_complete(main()) </code></pre> <p>Here is the output I get in the terminal:</p> <pre><code> File &quot;.venv/lib/python3.12/site-packages/nodriver/core/tab.py&quot;, line 217, in _prepare_headless resp = await self._send_oneshot( ^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;.venv/lib/python3.12/site-packages/nodriver/core/connection.py&quot;, line 522, in _send_oneshot return await self.send(cdp_obj, _is_update=True) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;.venv/lib/python3.12/site-packages/nodriver/core/tab.py&quot;, line 208, in send await self._prepare_headless() File &quot;.venv/lib/python3.12/site-packages/nodriver/core/tab.py&quot;, line 217, in _prepare_headless resp = await self._send_oneshot( ^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;.venv/lib/python3.12/site-packages/nodriver/core/connection.py&quot;, line 522, in _send_oneshot return await self.send(cdp_obj, _is_update=True) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ RecursionError: maximum recursion depth exceeded successfully removed temp profile /tmp/uc_8kjtj4bv </code></pre> <p>I'm not sure whether this is a bug, or whether I'm doing something wrong (e.g., something is wrong with my environment). In case it was a bug, I tried raising an issue on the repository's github page (<a href="https://github.com/ultrafunkamsterdam/nodriver/issues/new" rel="nofollow noreferrer">https://github.com/ultrafunkamsterdam/nodriver/issues/new</a>), but I keep getting a &quot;Unable to create issue&quot; error. I experimented with another repo and this error did not come up, so I'm assuming the repository owner has turned off issues or something...?</p>
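<p>A workaround I have seen suggested in similar reports (a sketch; it assumes the recursion is specific to nodriver's <code>_prepare_headless</code> path, so passing the Chrome flag directly avoids entering it):</p> <pre><code>import nodriver as uc

async def main():
    # leave headless=False so _prepare_headless is never entered,
    # and ask Chrome itself for headless mode via a browser flag instead
    browser = await uc.start(browser_args=['--headless=new'])
    page = await browser.get('https://www.nowsecure.nl')

if __name__ == '__main__':
    uc.loop().run_until_complete(main())
</code></pre>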
<python><web-scraping><nodriver>
2025-05-15 03:05:50
2
505
grasswistle
79,622,539
7,419,457
mupdf mark field form widgets as read only
<p>What I am trying to do is update form fields in a PDF with a value and then mark them as read-only so that they cannot be further edited.</p> <p>I have tried two approaches. The first is to use</p> <pre><code>pdf.bake() </code></pre> <p>This did not work for my use case, as after filling in data in the PDF I am doing an e-signature on the PDF. Now, if I use bake, all the signatures show up on the first page even though they are distributed across different pages.</p> <p>The second approach I tried is marking the widget read-only:</p> <pre><code> pdf_document = fitz.open(&quot;pdf&quot;, pdf_stream) READONLY_FLAG = 1 &lt;&lt; 0 processed_fields = set() for index, page in enumerate(pdf_document): for field in page.widgets(): flag_image = False if not flag_image: if field.field_name not in processed_fields: print(f&quot;non repeated name {field.field_name}, page {index}&quot;) processed_fields.add(field.field_name) field.field_flags |= READONLY_FLAG field.update() </code></pre> <p>This works fine for most of the fields, except for fields that are repeated on multiple pages or on the same page. These fields are still editable; other fields become non-editable.</p> <p>How do I account for repeated fields as well?</p> <p>Sample pdf file: <a href="https://filebin.net/orhtcmxp8c3b9cvh" rel="nofollow noreferrer">https://filebin.net/orhtcmxp8c3b9cvh</a></p>
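<p>A sketch of the variant I am experimenting with, assuming the skipping of repeated names is the problem: the read-only flag may need to be applied to every widget occurrence of a field, not just the first one per field name:</p> <pre><code>READONLY_FLAG = 1 &lt;&lt; 0
for page in pdf_document:
    for field in page.widgets():
        field.field_flags |= READONLY_FLAG
        field.update()  # update each occurrence, even repeated appearances of the same field
</code></pre>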
<python><pdf><pymupdf><mupdf>
2025-05-15 02:48:48
2
977
Rahul
79,622,523
21,370,869
Struggling to provide a correct value to 'OpenMaya.MItMeshPolygon.center()'
<p>I am taking my first steps into the Open Maya API, the Python version. I've gone through a <a href="https://zurbrigg.teachable.com/p/maya-python-api-vol-1" rel="nofollow noreferrer">Maya API</a> series, and did some tasks such as printing to the script editor. I am now trying to write my own small bits of code. In this case, I am trying to solve a task I recently solved through maya commands, which is to get the centre of a polygon face.</p> <p>I am sure I can accomplish this with just the <a href="https://help.autodesk.com/cloudhelp/2026/ENU/MAYA-API-REF/py_ref/class_open_maya_1_1_m_it_mesh_polygon.html#ae81a8a959c96bdc2669ec8992995e4ce" rel="nofollow noreferrer"><code>OpenMaya.MItMeshPolygon.center()</code></a> method, but I am not having any success calling it. The documentation states that it is expecting a <a href="https://help.autodesk.com/cloudhelp/2026/ENU/MAYA-API-REF/py_ref/class_open_maya_1_1_m_space.html#a604d34adc312ec2f23e8a73faeed52ff" rel="nofollow noreferrer"><code>kObject</code></a> as input. So with a face selected, I tried the following, which results in an error:</p> <pre><code>import maya.api.OpenMaya as om coordValue = om.MItMeshPolygon.center(om.MSpace.kWorld) print(coordValue) #am expecting this to give me the center of the face I have selected #Error: TypeError: descriptor 'center' for 'OpenMaya.MItMeshPolygon' objects doesn't apply to a 'int' object </code></pre> <p>Since this <code>kObject</code> just seems to be an integer, I also tried passing an integer directly:</p> <pre><code>import maya.api.OpenMaya as om print(om.MSpace.kWorld) #prints 4 coordValue = om.MItMeshPolygon.center(4) #am expecting this to give me the center of the face I have selected print(coordValue) </code></pre> <p>I have looked through the Devkit examples that Autodesk provides but have not been able to find an answer. What could I be doing wrong? I am on Maya 2026.</p> <p>Edit1:</p> <p>Okay, after spending another hour on it and some more searching... I figured it out! Well, partly.</p> <p>Here is my code as it stands:</p> <pre><code>import maya.api.OpenMaya as om import maya.cmds as cmds def get_selected_center(): selection = om.MGlobal.getActiveSelectionList() if selection.length() == 0: raise RuntimeError(&quot;Nothing selected&quot;) dag_path, component = selection.getComponent(0) if not dag_path.node().apiType() == om.MFn.kMesh: raise RuntimeError(&quot;Selected object is not a mesh&quot;) poly_iter = om.MItMeshPolygon(dag_path, component) center = poly_iter.center(om.MSpace.kWorld) return center center = get_selected_center() # this returns a three part value, well maybe not an array, but definitely something cmds.spaceLocator(p=(center.x, center.y, center.z)) </code></pre> <p>The only issue I am facing is that the above code will only work as I expect if I have a polygon face selected. When I have a polygon face selected, the locator is created on its centre, but if I have an edge or vertex selected, the locator is still created at the centre of the face.</p> <p>As shown <a href="https://i.imgur.com/8NpN7GN.gif" rel="nofollow noreferrer">HERE</a>; I am looking to create the locator at the centre of a selected edge or face, or at the position of a vertex.</p>
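<p>My current idea, sketched below, is to branch on the component type inside <code>get_selected_center()</code> before choosing the iterator. I am assuming <code>MItMeshEdge.center()</code> and <code>MItMeshVertex.position()</code> behave like the polygon version:</p> <pre><code>comp_type = component.apiType()
if comp_type == om.MFn.kMeshPolygonComponent:
    center = om.MItMeshPolygon(dag_path, component).center(om.MSpace.kWorld)
elif comp_type == om.MFn.kMeshEdgeComponent:
    center = om.MItMeshEdge(dag_path, component).center(om.MSpace.kWorld)
elif comp_type == om.MFn.kMeshVertComponent:
    center = om.MItMeshVertex(dag_path, component).position(om.MSpace.kWorld)
else:
    raise RuntimeError(&quot;Select a face, edge or vertex&quot;)
</code></pre>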
<python><maya><maya-api>
2025-05-15 02:12:29
1
1,757
Ralf_Reddings
79,622,504
154,048
How to limit the length of or suppress local variables in structlog exception reports?
<p>I'm using structlog in an application. When I use it to render an exception:</p> <pre><code>logger = structlog.get_logger() logger.exception(&quot;boom&quot;) </code></pre> <p>I get the expected pretty exception report. However there are some local variables in that context which are very large. They make the exception report thousands of lines long. Not useful.</p> <p>Is there a way to either prevent specific variables from being rendered, or limiting the length of the output?</p>
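<p>A configuration sketch I am considering, assuming a recent structlog where <code>ConsoleRenderer</code> accepts an <code>exception_formatter</code> and the Rich-based formatter forwards its options to <code>rich.traceback</code>:</p> <pre><code>import structlog

structlog.configure(
    processors=[
        structlog.processors.add_log_level,
        structlog.dev.ConsoleRenderer(
            exception_formatter=structlog.dev.RichTracebackFormatter(
                show_locals=False,  # suppress locals entirely
                # alternatively, keep locals but cap their rendered size,
                # e.g. locals_max_string=80 (assuming the option is forwarded)
            )
        ),
    ]
)
</code></pre>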
<python><exception><structlog>
2025-05-15 01:40:43
1
1,166
Peter Loron
79,622,434
14,122
Unable to create a nested DefaultDict in a pydantic BaseModel
<p>Consider:</p> <pre><code>#!/usr/bin/env -S uv run --script # /// script # dependencies = [ &quot;pydantic&gt;=2.10.5,&lt;3&quot; ] # requires-python = &quot;&gt;=3.12,&lt;3.13&quot; # /// import pydantic from typing import Annotated from collections import defaultdict from uuid import UUID class Repro(pydantic.BaseModel): value: Annotated[ defaultdict[UUID, defaultdict[UUID, dict]], pydantic.Field(default_factory=lambda: defaultdict[UUID, defaultdict[UUID, dict]](lambda: defaultdict(dict))), ] # to validate that the typechecker thinks the above are valid: instance = Repro(value=defaultdict[UUID, defaultdict[UUID, dict]](lambda: defaultdict(dict))) </code></pre> <p>The above passes type checking with pyright, but at runtime, the class definition fails while pydantic is trying to introspect the object's schema:</p> <blockquote> <p><code>pydantic.errors.PydanticSchemaGenerationError: Unable to infer a default factory for keys of type &lt;class 'dict'&gt;. Only str, int, bool, list, dict, frozenset, tuple, float, set are supported, other types require an explicit default factory set using `DefaultDict[..., Annotated[..., Field(default_factory=...)]]</code></p> </blockquote> <hr /> <p>Thinking it might be introspection of the <em>inner</em> <code>defaultdict</code> triggering the error message, I tried adding a <code>pydantic.Field</code> to its type via an inner annotation:</p> <pre><code>class Repro(pydantic.BaseModel): value: Annotated[ defaultdict[ UUID, Annotated[defaultdict[UUID, dict], pydantic.Field(default_factory=dict)] ], pydantic.Field(default_factory=lambda: defaultdict[UUID, defaultdict[UUID, dict]](lambda: defaultdict(dict))), ] </code></pre> <p>...but with identical effect.</p> <hr /> <p>The above is observed with Python 3.12 and Pydantic 2.11.0, type-checked with pyright 1.1.382.</p> <p>The full stack trace follows:</p> <pre class="lang-none prettyprint-override"><code>Traceback (most recent call last): File &quot;/Users/chaduffy/repro.py&quot;, line 13, in &lt;module&gt; class Repro(pydantic.BaseModel): File &quot;/Users/chaduffy/.cache/uv/archive-v0/L2nssVviE9tesQFv5NzrQ/lib/python3.12/site-packages/pydantic/_internal/_model_construction.py&quot;, line 237, in __new__ complete_model_class( File &quot;/Users/chaduffy/.cache/uv/archive-v0/L2nssVviE9tesQFv5NzrQ/lib/python3.12/site-packages/pydantic/_internal/_model_construction.py&quot;, line 597, in complete_model_class schema = gen_schema.generate_schema(cls) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/chaduffy/.cache/uv/archive-v0/L2nssVviE9tesQFv5NzrQ/lib/python3.12/site-packages/pydantic/_internal/_generate_schema.py&quot;, line 706, in generate_schema schema = self._generate_schema_inner(obj) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/chaduffy/.cache/uv/archive-v0/L2nssVviE9tesQFv5NzrQ/lib/python3.12/site-packages/pydantic/_internal/_generate_schema.py&quot;, line 999, in _generate_schema_inner return self._model_schema(obj) ^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/chaduffy/.cache/uv/archive-v0/L2nssVviE9tesQFv5NzrQ/lib/python3.12/site-packages/pydantic/_internal/_generate_schema.py&quot;, line 832, in _model_schema {k: self._generate_md_field_schema(k, v, decorators) for k, v in fields.items()}, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/chaduffy/.cache/uv/archive-v0/L2nssVviE9tesQFv5NzrQ/lib/python3.12/site-packages/pydantic/_internal/_generate_schema.py&quot;, line 1201, in _generate_md_field_schema common_field = self._common_field_schema(name, field_info, decorators) 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/chaduffy/.cache/uv/archive-v0/L2nssVviE9tesQFv5NzrQ/lib/python3.12/site-packages/pydantic/_internal/_generate_schema.py&quot;, line 1367, in _common_field_schema schema = self._apply_annotations( ^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/chaduffy/.cache/uv/archive-v0/L2nssVviE9tesQFv5NzrQ/lib/python3.12/site-packages/pydantic/_internal/_generate_schema.py&quot;, line 2279, in _apply_annotations schema = get_inner_schema(source_type) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/chaduffy/.cache/uv/archive-v0/L2nssVviE9tesQFv5NzrQ/lib/python3.12/site-packages/pydantic/_internal/_schema_generation_shared.py&quot;, line 83, in __call__ schema = self._handler(source_type) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/chaduffy/.cache/uv/archive-v0/L2nssVviE9tesQFv5NzrQ/lib/python3.12/site-packages/pydantic/_internal/_generate_schema.py&quot;, line 2261, in inner_handler schema = self._generate_schema_inner(obj) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/chaduffy/.cache/uv/archive-v0/L2nssVviE9tesQFv5NzrQ/lib/python3.12/site-packages/pydantic/_internal/_generate_schema.py&quot;, line 1004, in _generate_schema_inner return self.match_type(obj) ^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/chaduffy/.cache/uv/archive-v0/L2nssVviE9tesQFv5NzrQ/lib/python3.12/site-packages/pydantic/_internal/_generate_schema.py&quot;, line 1118, in match_type return self._match_generic_type(obj, origin) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/chaduffy/.cache/uv/archive-v0/L2nssVviE9tesQFv5NzrQ/lib/python3.12/site-packages/pydantic/_internal/_generate_schema.py&quot;, line 1157, in _match_generic_type return self._mapping_schema(origin, *self._get_first_two_args_or_any(obj)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/chaduffy/.cache/uv/archive-v0/L2nssVviE9tesQFv5NzrQ/lib/python3.12/site-packages/pydantic/_internal/_generate_schema.py&quot;, line 571, in _mapping_schema values_schema = self.generate_schema(values_type) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/chaduffy/.cache/uv/archive-v0/L2nssVviE9tesQFv5NzrQ/lib/python3.12/site-packages/pydantic/_internal/_generate_schema.py&quot;, line 706, in generate_schema schema = self._generate_schema_inner(obj) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/chaduffy/.cache/uv/archive-v0/L2nssVviE9tesQFv5NzrQ/lib/python3.12/site-packages/pydantic/_internal/_generate_schema.py&quot;, line 1004, in _generate_schema_inner return self.match_type(obj) ^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/chaduffy/.cache/uv/archive-v0/L2nssVviE9tesQFv5NzrQ/lib/python3.12/site-packages/pydantic/_internal/_generate_schema.py&quot;, line 1118, in match_type return self._match_generic_type(obj, origin) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/chaduffy/.cache/uv/archive-v0/L2nssVviE9tesQFv5NzrQ/lib/python3.12/site-packages/pydantic/_internal/_generate_schema.py&quot;, line 1157, in _match_generic_type return self._mapping_schema(origin, *self._get_first_two_args_or_any(obj)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/chaduffy/.cache/uv/archive-v0/L2nssVviE9tesQFv5NzrQ/lib/python3.12/site-packages/pydantic/_internal/_generate_schema.py&quot;, line 583, in _mapping_schema default_default_factory = get_defaultdict_default_default_factory(values_type) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
&quot;/Users/chaduffy/.cache/uv/archive-v0/L2nssVviE9tesQFv5NzrQ/lib/python3.12/site-packages/pydantic/_internal/_validators.py&quot;, line 482, in get_defaultdict_default_default_factory default_default_factory = infer_default() ^^^^^^^^^^^^^^^ File &quot;/Users/chaduffy/.cache/uv/archive-v0/L2nssVviE9tesQFv5NzrQ/lib/python3.12/site-packages/pydantic/_internal/_validators.py&quot;, line 466, in infer_default raise PydanticSchemaGenerationError( pydantic.errors.PydanticSchemaGenerationError: Unable to infer a default factory for keys of type &lt;class 'dict'&gt;. Only str, int, bool, list, dict, frozenset, tuple, float, set are supported, other types require an explicit default factory set using `DefaultDict[..., Annotated[..., Field(default_factory=...)]]` For further information visit https://errors.pydantic.dev/2.11/u/schema-for-unknown-type </code></pre>
<python><pydantic>
2025-05-14 23:57:41
1
299,045
Charles Duffy
79,622,266
10,727,283
Explicitly declared protocol on child fails with "Protocols cannot be instantiated"
<p>Please help me understand why this code fails.</p> <pre class="lang-py prettyprint-override"><code>from typing import Protocol class Base: pass class MyProto(Protocol): def hello(self) -&gt; None: pass class Child(Base, MyProto): def __init__(self) -&gt; None: super().__init__() # TypeError: Protocols cannot be instantiated def hello(self) -&gt; None: return print(&quot;hello&quot;) Child() </code></pre> <p>Error:</p> <pre><code>TypeError: Protocols cannot be instantiated </code></pre> <p>I understand that a protocol should not be instantiated, but isn't this the responsibility of the language to avoid this error?<br /> Or is there another way to explicitly implement a Protocol on a child class?</p> <p>Yes, we can avoid <code>super</code> and it will work:</p> <pre class="lang-py prettyprint-override"><code>class Child(Base, MyProto): def __init__(self) -&gt; None: Base.__init__(self) </code></pre> <p>But it's more common (and easier) to use <code>super</code>.</p> <p>It feels like something is broken in the language.</p> <p>I am using Python 3.9.</p>
<python><python-3.9>
2025-05-14 20:48:40
1
1,004
Noam-N
79,622,206
219,153
Can you explain values of imgIdx and trainIdx in this example of cv.match?
<p>This OpenCV Python script:</p> <pre><code>import numpy as np, cv2 as cv bf = cv.BFMatcher(cv.NORM_L1, crossCheck=False) bf.add(np.array([[0], [1], [2]], dtype='f4')) # train image 0 bf.add(np.array([[0.1], [1.1], [2.1]], dtype='f4')) # train image 1 bf.train() src = np.array([[0], [1.1], [2.1]], dtype='f4') matches = bf.match(src) for m in matches: print(m.distance, m.queryIdx, m.imgIdx, m.trainIdx) </code></pre> <p>attempts to match <code>src</code> descriptors of an image, greatly simplified for clarity, with descriptors of image 0 and 1 used to train <code>cv.BFMatcher</code>. Here is the result:</p> <pre><code>0.0 0 0 0 0.0 1 4 0 0.0 2 5 0 </code></pre> <p>Why <code>m.trainIdx</code> is always zero, despite two train images added with <code>bf.add</code>? Is there a way to have descriptors in <code>BFMatcher</code> separated into groups belonging to individual training images? If so, can match result not cross group boundaries, i.e. all matches have resulting descriptors from a single training image?</p> <p>The values of <code>m.imgIdx</code> and <code>m.trainIdx</code> seem to be inconsistent with OpenCV documentation at <a href="https://docs.opencv.org/4.10.0/dc/dc3/tutorial_py_matcher.html" rel="nofollow noreferrer">https://docs.opencv.org/4.10.0/dc/dc3/tutorial_py_matcher.html</a>:</p> <ul> <li>DMatch.trainIdx - Index of the descriptor in train descriptors</li> <li>DMatch.queryIdx - Index of the descriptor in query descriptors</li> <li>DMatch.imgIdx - Index of the train image.</li> </ul> <p>Also posted at <a href="https://forum.opencv.org/t/can-you-explain-values-of-imgidx-and-trainidx-in-this-example-of-cv-match/21018" rel="nofollow noreferrer">https://forum.opencv.org/t/can-you-explain-values-of-imgidx-and-trainidx-in-this-example-of-cv-match/21018</a>. Trying to get some traction and will delete the other post as soon as the answer appears somewhere.</p>
<python><opencv>
2025-05-14 20:09:10
1
8,585
Paul Jurczak
79,622,141
364,696
Is there a reason not to replace the default logging.Handler lock with a multiprocessing.RLock to synchronize multiprocess logging
<p>I've got some code that, for reasons not germane to the problem at hand:</p> <ol> <li>Must write <em>very</em> large log messages</li> <li>Must write them from multiple <code>multiprocessing</code> worker processes</li> <li>Must not interleave the logs at all</li> </ol> <p>Performance of logging is a secondary consideration; I'm not writing very many logs.</p> <p>The problem we've encountered is that our logs are so big (a single log can be up to 1 GB in size) that they appear to be written via multiple underlying <code>write</code> calls, and if another process emits a small log in the middle of the larger log being written, the log is broken up (and can't be parsed by the tooling we use to upload events). This is possible because the <code>logging</code> module is only thread-safe, not multiprocess-safe; the lock it takes when <code>emit</code>ing a log is a <code>threading.RLock</code> and therefore two worker processes can be <code>emit</code>ing to the same file at the same time, and if the log is large enough to not be handled atomically, oops, they can interleave.</p> <p>We can't use a <code>QueueHandler</code>, because we need to be able to block on an <code>os.fsync</code> call in certain circumstances (we're logging to a FUSE-based file system that parses what we send and uploads it to RabbitMQ with publisher_confirms enabled, but only blocks the writing process if the calling process <code>fsync</code>s the file, so important messages can be blocked on until all prior messages are confirmed, while simple logs can be allowed to occur in the background).</p> <p>My idea is to have our existing (already custom, to enable the conditional <code>fsync</code>) log <code>StreamHandler</code> subclass redefine <code>Handler.createLock</code> (and <code>.acquire</code> and <code>.release</code> for good measure, since the name of the attribute, <code>.lock</code>, doesn't appear to be a documented guarantee) to create a <code>multiprocessing.RLock</code> instead of a <code>threading.RLock</code>.</p> <p>The downside would be that when a huge log message is being written, nothing else can emit logs, but I don't see any other downsides. But I'm a little concerned, because I see no one else who has even considered doing this, and I'm worried I'm missing something obvious. It's possible no one else does it because performance trumps blocking behavior in most such cases, but I want to make sure I haven't missed anything else.</p> <p>TL;DR: Will defining a <code>StreamHandler</code> subclass like:</p> <pre><code>import logging import multiprocessing class FuseHandler(logging.StreamHandler): def createLock(self): self.lock = multiprocessing.RLock() def acquire(self): self.lock.acquire() def release(self): self.lock.release() def emit(self, record): # Custom code that sometimes os.fsync's the file after the write here </code></pre> <p>cause any problems I'm not foreseeing (beyond the slightly slower locking performance, and the pile-up effect if a big message takes a few seconds to be written and other logs are waiting on it)?</p>
<python><multiprocessing><python-multiprocessing><mutex><python-logging>
2025-05-14 19:18:05
0
157,585
ShadowRanger
79,622,114
3,357,935
How do I get a list of Single Send campaign names from the Sendgrid API?
<p>I want to use the <a href="https://www.twilio.com/docs/sendgrid/api-reference" rel="nofollow noreferrer">SendGrid v3 API</a> to retrieve a list of all my Single Send campaigns and their names.</p> <p>According to the SendGrid API documentation, I can retrieve statistics for all my Single Sends by sending a GET request to the endpoint <a href="https://www.twilio.com/docs/sendgrid/api-reference/marketing-campaign-stats/get-all-single-sends-stats" rel="nofollow noreferrer"><code>/v3/marketing/stats/singlesends</code></a>. However, the response doesn't include the name of each SingleSend campaign, only message level data like recipient, status, and clicks.</p> <p>Example Python code:</p> <pre class="lang-py prettyprint-override"><code>import sendgrid import json import os sg = sendgrid.SendGridAPIClient(os.environ.get('SENDGRID_API_KEY')) response = sg.client.messages.get() parsed_json = json.loads(response.body) print(json.dumps(parsed_json, indent=4)) </code></pre> <p>Example response</p> <pre class="lang-json prettyprint-override"><code>{ &quot;messages&quot;: [ { &quot;from_email&quot;: &quot;email@example.com&quot;, &quot;msg_id&quot;: &quot;some-message-id-here1&quot;, &quot;subject&quot;: &quot;Message subject here&quot;, &quot;to_email&quot;: &quot;madeline@example.com&quot;, &quot;status&quot;: &quot;processing&quot;, &quot;opens_count&quot;: 0, &quot;clicks_count&quot;: 0, &quot;last_event_time&quot;: &quot;2025-05-14T12:00:10Z&quot; }, { &quot;from_email&quot;: &quot;email@example.com&quot;, &quot;msg_id&quot;: &quot;some-message-id-here2&quot;, &quot;subject&quot;: &quot;Message subject here&quot;, &quot;to_email&quot;: &quot;theo@example.com&quot;, &quot;status&quot;: &quot;delivered&quot;, &quot;opens_count&quot;: 1, &quot;clicks_count&quot;: 0, &quot;last_event_time&quot;: &quot;2025-05-14T12:00:08Z&quot; } ] } </code></pre> <p>There's also an endpoint for <a href="https://www.twilio.com/docs/sendgrid/api-reference/marketing-campaign-stats/get-single-send-click-tracking-stats-by-id" rel="nofollow noreferrer">getting Single Send stats by ID</a>, but that only helps if I already have the ID linked to a specific Single Send name.</p> <p>Is there an API Endpoint or method to retrieve a list of all my Single Send campaigns and their names?</p>
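<p>The closest endpoint I have found in the docs is <code>GET /v3/marketing/singlesends</code> (&quot;Get All Single Sends&quot;), which I believe returns each Single Send's <code>id</code> and <code>name</code>. A sketch, assuming the dynamic Python client maps the path as usual and the list lives under a <code>result</code> key:</p> <pre class="lang-py prettyprint-override"><code>response = sg.client.marketing.singlesends.get()
parsed_json = json.loads(response.body)
for single_send in parsed_json['result']:
    print(single_send['id'], single_send['name'])
</code></pre>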
<python><python-3.x><twilio><sendgrid><sendgrid-api-v3>
2025-05-14 18:49:56
1
27,724
Stevoisiak
79,621,934
3,125,823
Specific permissions for different types of users with django rest framework model and view
<p>I'm using DRF's ModelViewSet for my views and DefaultRouter to create the URLs.</p> <p>This particular model backs a feature to which only admins should have full CRUD access.</p> <p>Authenticated users should have read-only and delete permissions, and anonymous users should only have read-only access.</p> <p>I'm pretty sure that <code>is_staff</code> set to True takes care of any admin permissions for CRUD access in the Django admin. I'm just not sure how to set up the anonymous and authenticated user permissions in the model and view.</p> <p>How can I set up these particular permissions for the different types of users? Does this setup require specific model and view permissions?</p>
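<p>To make the question concrete, this is the shape of custom permission class I have been sketching (the class name is my own placeholder, and I'm unsure whether this is the intended approach):</p> <pre><code>from rest_framework import permissions

class AdminWriteAuthDeleteAnonRead(permissions.BasePermission):
    # Admins: full CRUD; authenticated users: read + delete; anonymous: read-only.
    def has_permission(self, request, view):
        if request.method in permissions.SAFE_METHODS:
            return True  # reads are open to everyone
        if request.method == 'DELETE':
            return bool(request.user and request.user.is_authenticated)
        return bool(request.user and request.user.is_staff)  # create/update
</code></pre> <p>which would then be attached to the viewset with <code>permission_classes = [AdminWriteAuthDeleteAnonRead]</code>.</p>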
<python><django-models><django-rest-framework><django-views><permissions>
2025-05-14 16:43:15
1
1,958
user3125823
79,621,875
2,343,309
Cython *.pxd file cannot use absolute imports for Cython submodule
<p>I am getting an <code>Error compiling Cython file: 'myproject/utils/hdf5.pxd' not found</code> error when trying to build my Cython project in a certain way.</p> <p>My Cython project is organized this way:</p> <pre><code>myproject/
    setup.cfg
    setup.py            &lt;-- For installing the Python project
    myproject/
        __init__.py
        reco.pyx
        resample.pxd
        resample.pyx
        setup.py        &lt;-- For building the Cython module
        utils/
            __init__.py
            hdf5.pxd
            hdf5.pyx
            ...
            setup.py    &lt;-- For building the Cython (sub-)module
</code></pre> <p>In <code>myproject/myproject/setup.py</code> (for building the Cython module), I have:</p> <pre class="lang-py prettyprint-override"><code>from distutils.core import setup, Extension
from Cython.Build import cythonize

reco = Extension(
    name = 'reco',
    sources = ['reco.pyx'],
    include_dirs = ['.', './utils', *HDF5_DIRS],
    ...
)

resample = Extension(
    name = 'resample',
    sources = ['resample.pyx'],
    include_dirs = ['.', './utils', *HDF5_DIRS],
    ...
)

setup(
    name = 'myproject',
    packages = ['myproject', 'myproject.utils'],
    ext_modules = cythonize([resample, reco]))
</code></pre> <p>Inside <code>resample.pxd</code>:</p> <pre class="lang-py prettyprint-override"><code># cython: language_level=3

import numpy as np
from myproject.utils.hdf5 cimport hid_t, hsize_t, ...
...

cdef inline hid_t write_resampled(...
...
</code></pre> <p><code>resample.pyx</code> is empty and contains only the Cython <code>language_level</code> declaration.</p> <p><strong>When I exclude the <code>resample</code> module, this project builds and runs just fine. However, when I try to build it including the <code>resample</code> module, I get several errors related to my imports within <code>resample.pxd</code>; here's one example:</strong></p> <pre><code>python3 setup.py build_ext -if
Compiling resample.pyx because it changed.
[1/1] Cythonizing resample.pyx

Error compiling Cython file:
------------------------------------------------------------
...

import numpy as np
from myproject.utils.hdf5 cimport hid_t, hsize_t
^
------------------------------------------------------------

./resample.pxd:23:0: 'myproject/utils/hdf5.pxd' not found
</code></pre> <p><strong>This error surprises me because the <code>reco.pyx</code> file contains identical imports, and the <code>reco</code> module builds just fine.</strong></p> <pre class="lang-py prettyprint-override"><code># Inside reco.pyx
...
from myproject.utils.hdf5 cimport H5T_STD_U8LE, hid_t, read_hdf5, close_hdf5
...
</code></pre> <p><strong>Further, it seems that if I change the problematic imports in <code>resample.pxd</code> to <em>relative imports,</em> everything works:</strong></p> <pre class="lang-py prettyprint-override"><code># Inside resample.pxd

# Doesn't work:
# from myproject.utils.hdf5 cimport hid_t, hsize_t

# Works:
from utils.hdf5 cimport hid_t, hsize_t
</code></pre> <p><strong>Is there a reason why I can use an absolute <code>cimport</code> from a Cython sub-module in my <code>.pyx</code> file but <em>not</em> in my <code>.pxd</code> file?</strong> The C-defined functions in my <code>.pxd</code> file are re-used throughout the program, so I definitely want to have them available for re-use (i.e., not have to copy them to every <code>.pyx</code> file that needs them). However, they depend, in turn, on definitions from other <code>.pxd</code> files.</p>
<python><cython><distutils>
2025-05-14 16:09:38
1
376
Arthur
79,621,854
2,287,458
Compute cumulative mean & std on polars dataframe (using over)
<p>I want to compute the cumulative mean &amp; std on a polars dataframe column.</p> <p>For the <code>mean</code> I tried this:</p> <pre class="lang-py prettyprint-override"><code>import polars as pl df = pl.DataFrame({ 'value': [4, 6, 8, 11, 5, 6, 8, 15], 'class': ['A', 'A', 'B', 'A', 'B', 'A', 'B', 'B'] }) df.with_columns(cum_mean=pl.col('value').cum_sum().over('class') / pl.int_range(pl.len()).add(1).over('class')) </code></pre> <p>which correctly gives</p> <pre><code>shape: (8, 3) ┌───────┬───────┬──────────┐ │ value ┆ class ┆ cum_mean │ │ --- ┆ --- ┆ --- │ │ i64 ┆ str ┆ f64 │ ╞═══════╪═══════╪══════════╡ │ 4 ┆ A ┆ 4.0 │ │ 6 ┆ A ┆ 5.0 │ │ 8 ┆ B ┆ 8.0 │ │ 11 ┆ A ┆ 7.0 │ │ 5 ┆ B ┆ 6.5 │ │ 6 ┆ A ┆ 6.75 │ │ 8 ┆ B ┆ 7.0 │ │ 15 ┆ B ┆ 9.0 │ └───────┴───────┴──────────┘ </code></pre> <p>However, this seems very clunky, and becomes a little more complicated (and possibly error-prone) for <code>std</code>.</p> <p>Is there a nicer (possibly built-in) version for computing the cum mean &amp; cum std?</p>
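<p>For reference, the equivalent hand-rolled <code>cum_std</code> I would write with the same pattern (sample std, <code>ddof=1</code>, so the first row of each group divides by zero and comes out non-finite) is exactly the kind of expression I'd hope a built-in could replace:</p> <pre class="lang-py prettyprint-override"><code>n = pl.int_range(pl.len()).add(1).over('class')   # running count per group
s1 = pl.col('value').cum_sum().over('class')       # running sum
s2 = (pl.col('value') ** 2).cum_sum().over('class')  # running sum of squares

df.with_columns(cum_std=((s2 - s1 ** 2 / n) / (n - 1)).sqrt())
</code></pre>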
<python><python-3.x><python-polars><cumulative-sum>
2025-05-14 15:53:28
2
3,591
Phil-ZXX
79,621,805
662,285
azure.core.exceptions.HttpResponseError: (None) Internal server error 500 - model response from Azure AI foundry
<p>I have an Azure AI Foundry hub which has a private endpoint enabled and all other network access disabled. I also have an Azure AI Project under the hub which is used to deploy a serverless endpoint model. When I use AzureKeyCredential(&quot;key&quot;) it works fine, but when I use DefaultAzureCredential() it gives me an Internal Server Error 500. I debugged and found that authentication is successful in the case of DefaultAzureCredential (it uses AzureCLICredential to authenticate). I tried printing the stack trace, but it is not giving any detailed error information, just the 500 error.</p> <ul> <li>Am I missing any permissions, etc.?</li> </ul> <p><strong>This works when I disable the Private Endpoint with DefaultAzureCredential.</strong> But as soon as the POST request goes out, it starts sending back a 500 error code:</p> <pre><code>azure.core.exceptions.HttpResponseError: (None) Internal server error
</code></pre> <pre><code>import logging
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import AssistantMessage, SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential
from azure.identity import DefaultAzureCredential
from azure.core.pipeline.policies import HttpLoggingPolicy

endpoint = &quot;https://abc.eastus.models.ai.azure.com&quot;
model_name = &quot;se-Mistral-large&quot;

# It uses the DefaultAzureCredential to authenticate
client = ChatCompletionsClient(
    endpoint=endpoint,
    credential=DefaultAzureCredential()
)

response = client.complete(
    messages=[
        {
            &quot;role&quot;: &quot;user&quot;,
            &quot;content&quot;: &quot;I am going to Paris, what should I see?&quot;
        }
    ],
    max_tokens=2048,
    temperature=0.8,
    top_p=0.1,
    model=model_name
)

print(response.choices[0].message.content)
</code></pre> <hr /> <p>This also returns a 500 error:</p> <pre><code>credential = DefaultAzureCredential()
token = credential.get_token(&quot;https://ml.azure.com/.default&quot;)

client = ChatCompletionsClient(
    endpoint=&quot;https://abc.eastus.models.ai.azure.com&quot;,
    credential=AzureKeyCredential(token.token)
)
</code></pre> <p>Printing the headers and body:</p> <pre><code>2025-05-15 16:37:43,989 - ERROR - Request Headers: {'Content-Type': 'application/json', 'Content-Length': '136', 'Accept': 'application/json', 'x-ms-client-request-id': 'e9e6ba1a-31aa-11f0-8e44-7c1e528244e5', 'User-Agent': 'azsdk-python-ai-inference/1.0.0b9 Python/3.13.3 (Windows-2022Server-10.0.20348-SP0)', 'Authorization': 'Bearer token'}
2025-05-15 16:37:44,005 - ERROR - Request Body: {&quot;messages&quot;: [{&quot;role&quot;: &quot;user&quot;, &quot;content&quot;: &quot;I am going to London, what should I see?&quot;}], &quot;max_tokens&quot;: 2048, &quot;model&quot;: &quot;se-Mistral-large&quot;}
2025-05-15 16:37:44,006 - ERROR - Response Headers: {'Content-Length': '111', 'Content-Type': 'application/json', 'x-request-id': '6023e2cb-7220-4f9a-a439-5c84ea572f02', 'x-ms-error-reason': 'ExpressionValueEvaluationFailure', 'Request-Context': 'appId=cid-v1:38659f32-9fc9-4714-92e6-1412a97fbbfc', 'Date': 'Thu, 15 May 2025 16:37:43 GMT'}
2025-05-15 16:37:44,007 - ERROR - Response Body: {
    &quot;statusCode&quot;: 500,
    &quot;message&quot;: &quot;Internal server error&quot;,
    &quot;activityId&quot;: &quot;6023e2cb-7220-4f9a-a439-5c84ea572f02&quot;
}
</code></pre>
<python><azure><azure-machine-learning-service><azure-ai-foundry>
2025-05-14 15:20:50
1
4,564
Bokambo
79,621,538
130,948
Error in while running Jupyter Notebook connecting to Azure PostgreSQL database with Entra Auth
<p>I am new to Jupyter Notebook and am running some basic code to connect to an Azure PostgreSQL database. To my understanding, the connection is successful, but somehow the output isn't being parsed; I'm not 100% sure about this, though.</p> <pre><code>from azure.identity import DefaultAzureCredential

username = &quot;username&quot;
server = &quot;database.postgres.database.azure.com&quot;
port = 5432
database = &quot;databasename&quot;

# Obtain an access token using DefaultAzureCredential
credential = DefaultAzureCredential()
access_token = credential.get_token(&quot;https://ossrdbms-aad.database.windows.net&quot;).token

# Load the SQL extension
%reload_ext sql

# Define the connection string for Azure PostgreSQL with Entra authentication
connection_string = f&quot;postgresql://{username}:{access_token}@{server}:{port}/{database}&quot;

# Set the connection string as the default connection for the %sql magic command
%sql $connection_string

# Test the connection by running a simple query
result = %sql SELECT 1
print(result)
</code></pre> <p>Below is the error I am getting:</p> <pre><code>1 rows affected.
KeyError                                  Traceback (most recent call last)
Cell In[13], line 1
----&gt; 1 get_ipython().run_line_magic('sql', 'SELECT 1')

File c:\Ravi\Source\POC\jupyter\.venv\Lib\site-packages\IPython\core\interactiveshell.py:2486, in InteractiveShell.run_line_magic(self, magic_name, line, _stack_depth)
   2484     kwargs['local_ns'] = self.get_local_scope(stack_depth)
   2485 with self.builtin_trap:
-&gt; 2486     result = fn(*args, **kwargs)
   2488 # The code below prevents the output from being displayed
   2489 # when using magics with decorator @output_can_be_silenced
   2490 # when the last Python token in the expression is a ';'.
   2491 if getattr(fn, magic.MAGIC_OUTPUT_CAN_BE_SILENCED, False):

File c:\Ravi\Source\POC\jupyter\.venv\Lib\site-packages\sql\magic.py:219, in SqlMagic.execute(self, line, cell, local_ns)
    216     return
    218 try:
--&gt; 219     result = sql.run.run(conn, parsed[&quot;sql&quot;], self, user_ns)
    221     if (
    222         result is not None
    223         and not isinstance(result, str)
   (...)
    226     # Instead of returning values, set variables directly in the
    227     # user's namespace. Variable names given by column names
    229     if self.autopandas:
...
--&gt; 116     self.pretty = PrettyTable(self.field_names, style=prettytable.__dict__[config.style.upper()])
    117 else:
    118     list.__init__(self, [])

KeyError: 'DEFAULT'
</code></pre>
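<p>In case it is relevant: the <code>KeyError: 'DEFAULT'</code> comes from the SQL magic looking up <code>prettytable.__dict__['DEFAULT']</code>, and my assumption (not verified against the changelog) is that newer prettytable releases removed that module-level constant. Pinning an older prettytable would be one way to test that theory:</p> <pre><code># assumption: the DEFAULT style constant still exists in releases before 3.12
pip install &quot;prettytable&lt;3.12&quot;
</code></pre>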
<python><postgresql><azure><pip><prettytable>
2025-05-14 13:02:39
0
743
Ravi Khambhati
79,621,480
13,443,954
How to locate custom elements of pdftron in DOM
<p>Our dev team is working with Apryse PDFTron (Angular) and using custom elements on the loaded document, like checkboxes and textboxes, which you can drag and drop into the WebViewer. My problem is that these elements do not exist in the DOM at all, so I cannot locate them for automated tests. The PDF is under an open shadow-root &gt; virtualListBody.</p> <p>How can I locate the placed elements? Or is there a way/configuration to create the objects in the DOM?</p>
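<p>For context, this is the kind of traversal I would normally use for an open shadow root with Selenium (the URL and host selector are placeholders, since I don't know the real tag name here):</p> <pre><code>from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get('https://my-app.example/viewer')  # placeholder URL

# placeholder selector for the component hosting the open shadow root
host = driver.find_element(By.CSS_SELECTOR, 'apryse-webviewer')
body = host.shadow_root.find_element(By.CSS_SELECTOR, '.virtualListBody')
</code></pre>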
<python><angular><dom><pdftron>
2025-05-14 12:32:39
0
333
M András
79,621,448
3,760,519
How do I show a formatted html table in python, during an interactive programming session?
<p>Essentially, I am looking for a python equivalent of the R behaviour when I use packages such as <a href="https://gt.rstudio.com/" rel="nofollow noreferrer">GT</a> or <a href="https://rstudio.github.io/DT/" rel="nofollow noreferrer">DT</a>. When I &quot;print&quot; the objects generated by these packages, I can see the rendered result in something like a plot window.</p> <p>I am not asking for a recommendation for python packages.</p> <p>I have found <a href="https://pypi.org/project/pretty-html-table/" rel="nofollow noreferrer">pretty-html-table</a>, but this just produces an HTML-encoded string that I can save as a file. As far as I can tell, I cannot display this in the &quot;plot&quot; window during an interactive programming session.</p> <p>Is there a way to display such html during interactive programming, similar to how plots can be displayed?</p>
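<p>For contrast, the nearest workaround I know of leaves the session for a browser tab, which is exactly what I'd like to avoid (plain standard library; any HTML string works):</p> <pre><code>import tempfile
import webbrowser

html = '&lt;table&gt;&lt;tr&gt;&lt;th&gt;x&lt;/th&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;1&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;'  # any html string
with tempfile.NamedTemporaryFile('w', suffix='.html', delete=False) as f:
    f.write(html)
webbrowser.open('file://' + f.name)
</code></pre>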
<python><dataframe>
2025-05-14 12:18:08
1
2,406
Chechy Levas
79,621,191
29,430,839
Issue in scraping data
<p>I have an issue scraping school data. I need each school's email and website URL. I have tried a lot, but it keeps returning empty results.</p> <p>What's the best way to do this?</p> <p>Here is the code:</p> <pre><code>from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.common.by import By
from webdriver_manager.chrome import ChromeDriverManager
from selenium.webdriver.chrome.options import Options
import time

def google_search(query, num_results=10):
    options = Options()
    options.add_argument(&quot;--headless&quot;)  # Run headless browser
    options.add_argument(&quot;--disable-blink-features=AutomationControlled&quot;)
    options.add_argument(&quot;--no-sandbox&quot;)
    options.add_argument(&quot;--disable-dev-shm-usage&quot;)

    driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()), options=options)

    search_url = f&quot;https://www.google.com/search?q={query}&amp;num={num_results}&quot;
    driver.get(search_url)
    time.sleep(2)

    links = []
    results = driver.find_elements(By.CSS_SELECTOR, 'div.yuRUbf &gt; a')
    for result in results:
        href = result.get_attribute('href')
        if href:
            links.append(href)

    driver.quit()
    return links

query = &quot;PSHE site:.sch.uk&quot;
results = google_search(query, num_results=20)

for i, url in enumerate(results, 1):
    print(f&quot;{i}. {url}&quot;)
</code></pre>
<python><selenium-webdriver><web-scraping>
2025-05-14 09:50:12
1
2,155
Omprakash S
79,621,152
1,764,129
Multiple line figure without new coloring in Altair
<p>The basic example of multiple lines in Altair shows distinct lines in a different color:</p> <pre><code>import altair as alt from vega_datasets import data source = data.stocks() alt.Chart(source).mark_line().encode( x='date:T', y='price:Q', color='symbol:N', ) </code></pre> <p>How does one make this same figure but using the same colour for each line and ignoring their labels (for instance when you have replicates of a model)?</p> <p>I don't want them faceted (on different panels) and I would like this scalable for 100s of different groups (such as in simulation replicates).</p>
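<p>To make the goal concrete, the kind of spec I have in mind would swap the colour channel for something that only groups the rows without encoding them visually; a sketch (I don't know whether <code>detail</code> is the intended channel for this):</p> <pre><code>alt.Chart(source).mark_line().encode(
    x='date:T',
    y='price:Q',
    detail='symbol:N',  # group into separate lines, no colour legend
)
</code></pre>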
<python><altair>
2025-05-14 09:33:45
1
4,935
p-robot
79,621,149
393,010
only change type annotation of method in subclass
<p>Is it possible to change/&quot;override&quot; the return type of a method in a subclass without overriding/altering the method (implementation) itself? Something like:</p> <pre class="lang-py prettyprint-override"><code>class Foo:
    def gimme_thing(self) -&gt; Thing:
        return self.factory.create()

class Bar(Foo):
    def gimme_thing(self) -&gt; BarThing:
        ...  # &lt;--- for Bar the factory will return another type, how to spell this?
</code></pre>
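<p>For contrast, the only spelling I can come up with makes the base class generic, which is heavier than just narrowing the annotation (<code>Factory</code> here is a hypothetical generic factory type):</p> <pre class="lang-py prettyprint-override"><code>from typing import Generic, TypeVar

T = TypeVar('T', bound=Thing)

class Foo(Generic[T]):
    factory: Factory[T]  # hypothetical generic factory attribute

    def gimme_thing(self) -&gt; T:
        return self.factory.create()

class Bar(Foo[BarThing]):  # Bar.gimme_thing is now typed as returning BarThing
    ...
</code></pre>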
<python><python-typing>
2025-05-14 09:33:01
3
5,626
Moberg
79,620,845
14,245,686
How is np.repeat so fast?
<p>I am implementing the Poisson bootstrap in Rust and wanted to benchmark my repeat function against numpy's. Briefly, <code>repeat</code> takes in two arguments, data and weight, and repeats each element of data by the weight, e.g. [1, 2, 3], [1, 2, 0] -&gt; [1, 2, 2]. My naive version was around 4.5x slower than <code>np.repeat</code>.</p> <pre class="lang-rust prettyprint-override"><code>pub fn repeat_by(arr: &amp;[f64], repeats: &amp;[u64]) -&gt; Vec&lt;f64&gt; {
    // Use flat_map to create a single iterator of all repeated elements
    let result: Vec&lt;f64&gt; = arr
        .iter()
        .zip(repeats.iter())
        .flat_map(|(&amp;value, &amp;count)| std::iter::repeat_n(value, count as usize))
        .collect();

    result
}
</code></pre> <p>I also tried a couple more versions, e.g. one where I pre-allocated a vector with the necessary capacity, but all performed similarly.</p> <p>While doing more investigating though, I found that <code>np.repeat</code> is actually way faster than other numpy functions that I expected to perform similarly. For example, we can build a list of indices and use numpy slicing / take to perform the same operation as <code>np.repeat</code>. However, doing this (and even removing the list construction from the timings), <code>np.repeat</code> is around 3x faster than numpy slicing / take.</p> <pre class="lang-py prettyprint-override"><code>import timeit
import numpy as np

N_ROWS = 100_000
x = np.random.rand(N_ROWS)
weight = np.random.poisson(1, len(x))

# pre-compute the indices so slow python looping doesn't affect the timing
indices = []
for idx, w in enumerate(weight):
    for _ in range(w):
        indices.append(idx)

print(timeit.timeit(lambda: np.repeat(x, weight), number=1_000))
# 0.8337333500003297
print(timeit.timeit(lambda: np.take(x, indices), number=1_000))
# 3.1320624930012855
</code></pre> <p>My C is not so good, but it seems like the relevant implementation is here: <a href="https://github.com/numpy/numpy/blob/main/numpy/_core/src/multiarray/item_selection.c#L785" rel="nofollow noreferrer">https://github.com/numpy/numpy/blob/main/numpy/_core/src/multiarray/item_selection.c#L785</a>. It would be amazing if someone could help me understand at a high level what this code is doing--on the surface, it doesn't look like anything particularly special (SIMD, etc.), and looks pretty similar to my naive Rust version (memcpy vs repeat_n). In addition, I am struggling to understand why it performs so much better than even numpy slicing.</p>
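<p>One confound worth noting alongside those timings: <code>indices</code> is a plain Python list, so each <code>np.take</code> call also pays a list-to-ndarray conversion that <code>np.repeat</code> never does. Pre-converting isolates the pure indexing cost:</p> <pre class="lang-py prettyprint-override"><code>indices = np.asarray(indices, dtype=np.intp)  # pay the conversion once, up front
print(timeit.timeit(lambda: np.take(x, indices), number=1_000))
</code></pre>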
<python><arrays><numpy><rust>
2025-05-14 06:34:04
1
482
stressed
79,620,649
3,026,965
Configuring external user-defined module for multiple scripts in python
<p>I am running several python scripts (one at a time) to manipulate photo, video, and audio files in different combinations in the CWD.</p> <p>Instead of specifying in each script's body the dozens of &quot;foto_files&quot;, for example, what is the best way to call that information from an external user-defined module or lookup or dictionary or whatever file? In this way, I only have to edit 1 file instead of editing all scripts when adding a new extension, etc.</p> <p><strong>Typical structure:</strong></p> <pre><code>C:\code\script1.py C:\code\SYS_python\media_file_types.py (base) P:\aaa\bbb\CWD&gt; python c:\code\script1.py </code></pre> <p>Before updating all of my scripts, I am wondering if this is the most appropriate way to configure or there are better solutions and considerations to address?</p> <p><strong>Module (media_file_types.py):</strong></p> <pre><code>foto_file = ('jpg', 'jpeg', 'jfif', 'gif') video_file = ('mov', 'lrv', 'thm', 'xml') audio_file = ('3gpp', 'aiff', 'pcm', 'aac') </code></pre> <p><strong>Module import:</strong></p> <pre><code>import os import sys module_path = os.path.join(os.path.dirname(__file__), 'SYS_python') module_file = os.path.join(module_path, 'media_file_types.py') if not os.path.exists(module_file): print(f&quot;Error: The configuration file '{module_file}' was not found.&quot;) sys.exit(1) else: sys.path.append(module_path) try: import media_file_types foto_extensions = tuple(f&quot;.{ext.lower()}&quot; for ext in media_file_types.foto_file) video_extensions = tuple(f&quot;.{ext.lower()}&quot; for ext in media_file_types.video_file) audio_extensions = tuple(f&quot;.{ext.lower()}&quot; for ext in media_file_types.audio_file) all_media = foto_extensions + video_extensions + audio_extensions foto_files = [f for f in os.listdir('.') if f.lower().endswith(foto_extensions)] video_files = [f for f in os.listdir('.') if f.lower().endswith(video_extensions)] audio_files = [f for f in os.listdir('.') if f.lower().endswith(audio_extensions)] print(f&quot;Found {len(foto_files)} photo files.&quot;) print(f&quot;Found {len(video_files)} video files.&quot;) print(f&quot;Found {len(audio_files)} audio files.&quot;) except ImportError: print(f&quot;Error: Could not import the module 'media_file_types' from '{module_path}'.&quot;) sys.exit(1) except AttributeError as e: print(f&quot;Error: Missing expected attributes in 'media_file_types': {e}&quot;) sys.exit(1) </code></pre>
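<p>For comparison, a variant that loads the file directly by its path with the standard <code>importlib</code> machinery, avoiding the <code>sys.path.append</code> entirely (a sketch of an alternative, not necessarily better):</p> <pre><code>import importlib.util

# Load media_file_types.py from an explicit path without touching sys.path
spec = importlib.util.spec_from_file_location('media_file_types', module_file)
media_file_types = importlib.util.module_from_spec(spec)
spec.loader.exec_module(media_file_types)
</code></pre>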
<python><dictionary><module><lookup>
2025-05-14 02:39:19
1
727
user3026965
79,620,550
1,380,285
Python global variable changes depending on how script is run
<p>I have a short example python script that I'm calling glbltest.py:</p> <pre><code>a = []

def fun():
    global a
    a = [20,30,40]

print(&quot;before &quot;,a)
fun()
print(&quot;after &quot;,a)
</code></pre> <p>If I run it from the command line, I get what I expect:</p> <pre><code>$ python glbltest.py
before  []
after  [20, 30, 40]
</code></pre> <p>I open a python shell and run it by importing, and I get basically the same thing:</p> <pre><code>&gt;&gt;&gt; from glbltest import *
before  []
after  [20, 30, 40]
</code></pre> <p>So far so good. Now I comment out those last three lines and do everything &quot;by hand&quot;:</p> <pre><code>&gt;&gt;&gt; from glbltest import *
&gt;&gt;&gt; a
[]
&gt;&gt;&gt; fun()   # I run fun() myself
&gt;&gt;&gt; a       # I look at a again. Surely I will get the same result as before!
[]          # No! I don't!
</code></pre> <p>What is the difference between <code>fun()</code> being run &quot;automatically&quot; by the importing of the script, and me running <code>fun()</code> &quot;by hand&quot;?</p>
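<p>For reference, going through the module object instead of the star-imported name does behave the way I originally expected, which may be a hint about what's going on:</p> <pre><code>&gt;&gt;&gt; import glbltest
&gt;&gt;&gt; glbltest.fun()
&gt;&gt;&gt; glbltest.a      # the module's own binding did change
[20, 30, 40]
</code></pre>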
<python><global-variables>
2025-05-13 23:51:32
1
6,713
bob.sacamento
79,620,339
1,305,287
Clearing clipboard on program exit in python Tk and Kubuntu
<p>Python 3.12 in a venv, Kubuntu 24.04.02.</p> <p>I'm using the tkinter clipboard in a python program to hold a small amount of text, to ctrl-V into text editors and browser fields. This bit works fine.</p> <p><strong>I would now like the clipboard to be set to a defined value on program exit.</strong> Empty would be fine. I don't want sensitive data lying around to be accidentally pasted after program exit.</p> <p>This is a Minimal Reproducible Example that illustrates the problem.</p> <pre><code>#! /home/neil/pye/bin/python3 import tkinter as tki import time class App(tki.Frame): def __init__(self, parent): tki.Frame.__init__(self, master=parent) self.B0 = tki.Button(self, text='set clip', command=self.set_clip) self.B0.grid() self.quitbut = tki.Button(self, text='Quit', command=self.quit) self.quitbut.grid() def set_clip(self): self.clipboard_clear() self.clipboard_append(f'{time.time()}') print('set clipboard') def quit(self): print('clearing up before exit') self.clipboard_clear() self.clipboard_append('trashed') time.sleep(0.1) self.update_idletasks() self.update() time.sleep(0.1) print('hopefully updated clipboard?') self.real_exit() def real_exit(self): exit() if __name__ == '__main__': root = tki.Tk() app = App(root) app.pack() # this bodgery points the window 'X' button to my quit routine root.protocol(&quot;WM_DELETE_WINDOW&quot;, app.quit ) root.mainloop() </code></pre> <p>The set-clip function writes a time string to the clipboard which can be pasted to a text file with ctrl-V. However nothing I have yet tried gets the quit function to write 'trashed' to it. You can see my peppering of delays, updates, other function calls, to try to give it time to happen.</p> <p>The actual behaviour together with the system is more subtle and puzzling. In Kubuntu, there is a clipboard icon on the panel that apparently shows the present and historical contents of the clipboard.</p> <p>If I run my MRE, then on first press of 'set_clip', ctrl-V into a text field pastes a time string t0, and I see t0 in the Kubuntu app.</p> <p>On the next press of set_clip, ctrl-V pastes a new time string tn, however the Kubuntu app shows no change in history, still just the t0 entry. Subsequent presses yield the same behaviour.</p> <p>If I now exit the program, ctrl-V will now paste t0 again, not tn or 'trashed'.</p> <p>I found this Q/A, <a href="https://stackoverflow.com/questions/46178950/tk-only-copies-to-clipboard-if-paste-is-used-before-program-exits">Tk only copies to clipboard if &quot;paste&quot; is used before program exits</a>, which suggests the use of pyperclip. I won't repeat the whole MRE for that, just the key clipboard interaction part.</p> <pre><code> # original using just the tkinter clipboard self.clipboard_clear() self.clipboard_append(f'{time.time()}') # replaced with pyperclip calls self.clipboard_clear() text = f'{time.time()}' pyperclip.copy(text) pyperclip.paste() self.update() </code></pre> <p>This has had two effects</p> <ul> <li>Now both set_clip() and quit() behave the same way, I do sometimes see 'trashed' after program exit</li> <li>Although each button press changes the Kubuntu app history, the value that gets crtl-V'd into a text field only changes with every other button press???</li> </ul> <p>So pyperclip is not a solution like this, though it has had an effect. 
The contradictory behaviour between what gets ctrl-V'd and what Kubuntu thinks is on the clipboard hints at a deeper compatibility problem to me.</p> <p>I would rather solve this in tkinter, rather than rewrite for another GUI. However, if another GUI is better integrated with the Ubuntu base, then maybe I've got to use that. I would also happily switch to Mint (still Ubuntu under the hood), if that's more compatible at this clipboard level.</p>
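<p>For completeness, the fallback I can think of is handing the text to an external tool on exit, so another process owns the selection after my app dies; a sketch assuming <code>xclip</code> is installed:</p> <pre><code>import subprocess

def set_clipboard_external(text):
    # xclip takes ownership of the CLIPBOARD selection and outlives this process
    subprocess.run(['xclip', '-selection', 'clipboard'],
                   input=text.encode(), check=True)
</code></pre>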
<python><tkinter><clipboard>
2025-05-13 19:47:19
2
1,103
Neil_UK
79,620,333
2,648,504
Insert new column (of blanks) into an existing dataframe
<p>I have an existing dataframe:</p> <pre><code>data = [[5011025, 234],
        [5012025, 937],
        [5013025, 625]]
df = pd.DataFrame(data)
</code></pre> <p>output:</p> <pre><code>         0    1
0  5011025  234
1  5012025  937
2  5013025  625
</code></pre> <p>What I need to do is insert a new column at <code>0</code> (the same # of rows) that contains 3 spaces. Recreating the dataframe, from scratch, it would be something like this:</p> <pre><code>data = [['   ',5011025, 234],
        ['   ',5012025, 937],
        ['   ',5013025, 625]]
df = pd.DataFrame(data)
</code></pre> <p>desired output:</p> <pre><code>  0        1    2
0    5011025  234
1    5012025  937
2    5013025  625
</code></pre> <p>What is the best way to <code>insert()</code> this new column into an existing dataframe, that may be hundreds of rows? Ultimately, I'm trying to figure out how to write a function that will shift all columns of a dataframe x number of spaces to the right.</p>
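<p>A sketch of what I mean for the single-column case, using <code>DataFrame.insert</code> plus relabelling (the temporary 'pad' name is just a placeholder):</p> <pre><code>df.insert(0, 'pad', '   ')        # new first column holding three spaces
df.columns = range(df.shape[1])   # relabel columns as 0..n-1
</code></pre>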
<python><pandas>
2025-05-13 19:42:50
1
881
yodish
79,620,274
8,119,509
Integrating Spiceai MCP with Slack Bot
<p>I followed the <a href="https://spiceai.org/docs/installation" rel="nofollow noreferrer">Spice.ai documentation</a> and the getting started guide to use Spice MCP with OpenAI. Now, I am trying to use it with a Slack bot, but I keep getting 404 errors or &quot;method not found&quot; errors. I believe this might be due to not fully understanding how <code>spice chat</code> receives and responds.</p> <p>Here's the code I am using for my Slack bot:</p> <pre class="lang-py prettyprint-override"><code>import os import requests from slack_bolt import App from slack_bolt.adapter.socket_mode import SocketModeHandler from dotenv import load_dotenv # Load env variables load_dotenv() app = App( token=os.environ[&quot;SLACK_BOT_TOKEN&quot;], signing_secret=os.environ[&quot;SLACK_SIGNING_SECRET&quot;] ) SPICE_API_URL = os.getenv(&quot;SPICE_API_URL&quot;, &quot;http://localhost:8090/v1/chat/completions&quot;) @app.event(&quot;app_mention&quot;) def handle_app_mention_events(body, say, logger): text = body[&quot;event&quot;][&quot;text&quot;] channel_id = body[&quot;event&quot;][&quot;channel&quot;] allowed_channels = os.environ[&quot;SLACK_ALLOWED_CHANNELS&quot;].split(&quot;,&quot;) if channel_id not in allowed_channels: logger.info(f&quot;Ignored message from unauthorized channel: {channel_id}&quot;) return query_text = text.split(&quot;&gt;&quot;, 1)[1].strip() if &quot;&gt;&quot; in text else text.strip() logger.info(f&quot;Received message: {query_text}&quot;) try: response = requests.post( SPICE_API_URL, json={ &quot;model&quot;: &quot;openai&quot;, # This must match your model in the MCP config &quot;messages&quot;: [ {&quot;role&quot;: &quot;user&quot;, &quot;content&quot;: query_text} ] } ) response.raise_for_status() result = response.json() message = result[&quot;choices&quot;][0][&quot;message&quot;][&quot;content&quot;] say(message) except Exception as e: logger.exception(&quot;Failed to fetch response from Spice MCP&quot;) say(f&quot;Error: {str(e)}&quot;) if __name__ == &quot;__main__&quot;: handler = SocketModeHandler(app, os.environ[&quot;SLACK_APP_TOKEN&quot;]) handler.start() </code></pre> <p>I'm getting 404 or &quot;method not found&quot; errors when I try to make the request to the <code>SPICE_API_URL</code>. I checked for the <code>spice chat</code> endpoint URL but didn't find much documentation on it. Just looking for any clue in the right direction from the community.</p>
<python><model-context-protocol><slack-bot>
2025-05-13 18:59:43
0
403
Ritesh Kankonkar
79,619,949
5,944,880
ffmpeg how to set max_num_reorder_frames H264
<p>Does anyone know how I can set max_num_reorder_frames to 0 when encoding H264 video? You can find it in the <a href="https://ffmpeg.org/doxygen/4.0/structH264RawVUI.html#ad5f9d8d1aac32a586542e2d133d7aae8" rel="nofollow noreferrer">docs</a> as <code>uint8_t H264RawVUI::bitstream_restriction_flag</code>.</p> <p>PS. Based on the discussion in the comments: what I actually want to accomplish is to have all the frames written in the order in which they were encoded. My use-case is: I have 1000 images, for example. I encode each one of them using the codec, but when I investigate a little and check the actual packets in the H264 container, I see cases where one frame is written twice (for example ... 1,2,3,3,4,5,6,7,7 ...). What I want is that once I decode the H264 container, I get back the same images which I encoded. Is that possible, and how?</p> <p>P.P.S.: I don't think <code>g=1</code> works - giving some more code for reference. This is what I currently have:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np
import ffmpeg, subprocess, av

width, height, encoding_profile, pixel_format = 1280, 800, 'main', 'yuv420p'

# here I create 256 frames where each one has unique pixels all zeros, ones, twos and etc.
np_images = []
for i in range(256):
    np_image = i + np.zeros((height, width, 3), dtype=np.uint8)
    np_images.append(np_image)

print(f'number of numpy images: {len(np_images)}')

encoder = (ffmpeg
    .input('pipe:', format='rawvideo', pix_fmt='rgb24', s='{}x{}'.format(width, height))
    .output('pipe:', format='H264', pix_fmt=pixel_format, vcodec='libx264', profile='main', g=1)
    .run_async(pipe_stdin=True, pipe_stdout=True)
)

for timestamp, frame in enumerate(np_images):
    encoder.stdin.write(
        frame
        .astype(np.uint8)
        .tobytes()
    )

encoder.stdin.close()
output = encoder.stdout.read()
encoder.stdout.close()

# here I decode the encoded frames using PyAV
frame_decoder = av.CodecContext.create(&quot;h264&quot;, &quot;r&quot;)
frame_decoder.thread_count = 0
frame_decoder.thread_type = 'NONE'

packets = frame_decoder.parse(output)

decoded_frames = []
for packet in packets:
    frame = frame_decoder.decode(packet)
    decoded_frames.extend(frame)

decoded_frames.extend(frame_decoder.decode())

print(f'number of decoded frames: {len(decoded_frames)}')
print('keyframe boolean mask')
print([e.key_frame for e in decoded_frames])

decoded_np_images = []
for frame in decoded_frames:
    decoded_np_images.append(np.array(frame.to_image()))

print(f'number of decoded numpy images: {len(decoded_np_images)}')

# here I check what the decoded frames contain (all zeros, ones, twos and etc.)
print([e[0,0,0].item() for e in decoded_np_images]) </code></pre> <p>the particular problem which I am facing is that in the output you can observe this:</p> <blockquote> <p>number of decoded numpy images: 255</p> <p>[0, 1, 2, 3, 3, 4, 5, 6, 8, 9, 10, 10, 11, 12, 13, 15, 16, 17, 17, 18, 19, 20, 22, 23, 24, 24, 25, 26, 27, 29, 30, 31, 31, 32, 33, 34, 36, 37, 38, 39, 39, 40, 41, 43, 44, 45, 46, 46, 47, 48, 50, 51, 52, 53, 53, 54, 55, 57, 58, 59, 60, 60, 61, 62, 64, 65, 66, 67, 67, 68, 69, 71, 72, 73, 74, 74, 75, 76, 78, 79, 80, 81, 81, 82, 83, 85, 86, 87, 88, 88, 89, 90, 91, 93, 94, 95, 95, 96, 97, 98, 100, 101, 102, 102, 103, 104, 105, 107, 108, 109, 109, 110, 111, 112, 114, 115, 116, 116, 117, 118, 119, 121, 122, 123, 123, 124, 125, 126, 128, 129, 130, 131, 131, 132, 133, 135, 136, 137, 138, 138, 139, 140, 142, 143, 144, 145, 145, 146, 147, 149, 150, 151, 152, 152, 153, 154, 156, 157, 158, 159, 159, 160, 161, 163, 164, 165, 166, 166, 167, 168, 170, 171, 172, 173, 173, 174, 175, 176, 178, 179, 180, 180, 181, 182, 183, 185, 186, 187, 187, 188, 189, 190, 192, 193, 194, 194, 195, 196, 197, 199, 200, 201, 201, 202, 203, 204, 206, 207, 208, 208, 209, 210, 211, 213, 214, 215, 216, 216, 217, 218, 220, 221, 222, 223, 223, 224, 225, 227, 228, 229, 230, 230, 231, 232, 234, 235, 236, 237, 237, 238, 239, 241, 242, 243, 244, 244, 245, 246, 248, 249, 250, 251, 251, 252, 253]</p> </blockquote> <p>I still have frames which are appearing twice (and respectively some are missing)</p>
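<p>In case it is useful context, the next variant I would try (an assumption on my part, not verified here): disabling B-frames entirely, since as I understand it only B-frames force reordering, which is what max_num_reorder_frames describes. In ffmpeg-python that would be:</p> <pre class="lang-py prettyprint-override"><code>.output('pipe:', format='H264', pix_fmt=pixel_format, vcodec='libx264',
        profile='main', g=1, bf=0, tune='zerolatency')  # bf=0 maps to ffmpeg's -bf 0
</code></pre>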
<python><ffmpeg><h.264><pyav>
2025-05-13 15:14:03
1
417
Vasil Yordanov
79,619,885
4,050,510
How can I convert a Sequence(Image) to an Array4D without going through Seqence(Sequence(Sequence(Sequence()))?
<p>I have a huggingface dataset with a column <code>ImageData</code> that has the feature descriptor <code>s.features={'images': Sequence(feature=Image(mode=None, decode=True, id=None), length=16, id=None)}</code>.</p> <p>I need to convert this into a torch 4D tensor (all images are RGB and have the same shape). It is possible to do this via a simple iteration, but if I do a <code>map</code> on my dataset, the column is no longer Array4D, but a nested sequence of floats. This is not good at all, since that storage type is very very slow. I <em>can</em> make a new cast into Array4D, but I have already paid the price of going to sequences once, and I need to get rid of that.</p> <p>How can I convert directly to Array4D, without ever having the terrible <code>Sequence(feature=Sequence(feature=Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None), length=-1, id=None), length=-1, id=None)</code> representation?</p> <p>Below is a MWE. With only 10 data points, it takes a few seconds on my macbook. With 1000 samples, on my compute cluster, this takes hours (because apparently the sequence-construction is single threaded and the cluster has poor single-core performance....)</p> <pre class="lang-py prettyprint-override"><code>import datasets
import PIL.Image

V = 16
H, W, C = 244, 244, 3

def get_ds():
    &quot;&quot;&quot;This is a function outside my control&quot;&quot;&quot;
    N = 10
    data = [
        {&quot;ImageData&quot;: [PIL.Image.new(&quot;RGB&quot;, (W, H)) for _ in range(V)]}
        for _ in range(N)
    ]
    ds = datasets.Dataset.from_list(data)
    ds = ds.cast(
        datasets.Features({&quot;ImageData&quot;: datasets.Sequence(datasets.Image(), length=V)})
    )
    return ds

ds = get_ds()
print(f&quot;Dataset before conversion to 4D array {ds.features=}&quot;)
# ds.features={'ImageData': Sequence(feature=Image(mode=None, decode=True, id=None), length=16, id=None)}

# nice load. works great
ds.set_format(type=&quot;torch&quot;)
assert next(iter(ds))['ImageData'].shape == (V, C, H, W)
# nice, it pivots the color channel to front, as is customary in pytorch

ds = ds.map(
    lambda x: {&quot;ImageData&quot;: x['ImageData'].float() / 255.0},
    #remove_columns=[&quot;ImageData&quot;],
)
print(f&quot;Dataset after conversion to 4D array {ds.features=}&quot;)
# ds.features={'ImageData': Sequence(feature=Sequence(feature=Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None), length=-1, id=None), length=-1, id=None)}

ds = ds.cast_column('ImageData', datasets.Array4D(shape=(V, C, H, W), dtype=&quot;float32&quot;))
print(f&quot;Dataset after conversion to 4D array {ds.features=}&quot;)
# ds.features={'ImageData': Array4D(shape=(16, 3, 244, 244), dtype='float32', id=None)}
</code></pre>
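<p>One route that might avoid the intermediate representation entirely (I have not profiled this, so treat it as a sketch): declaring the target schema up front via <code>map</code>'s <code>features</code> argument, so the writer never falls back to inferring nested sequences:</p> <pre class="lang-py prettyprint-override"><code>target = datasets.Features(
    {&quot;ImageData&quot;: datasets.Array4D(shape=(V, C, H, W), dtype=&quot;float32&quot;)}
)
ds = ds.map(
    lambda x: {&quot;ImageData&quot;: x[&quot;ImageData&quot;].float() / 255.0},
    features=target,
)
</code></pre>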
<python><arrays><huggingface-datasets>
2025-05-13 14:36:22
1
4,934
LudvigH
79,619,717
8,477,566
How to count consecutive increases in a 1d array
<ul> <li>I have a 1d <code>numpy</code> array</li> <li>It's mostly decreasing, but it increases in a few places</li> <li>I'm interested in the places where it increases in several consecutive elements, and how many consecutive elements it increases for in each case</li> <li>In other words, I'm interested in the <em>lengths of increasing contiguous sub-arrays</em></li> <li>I'd like to compute and store this information in an array with the same shape as the input (EG that I could use for plotting)</li> <li>This could be achieved using <code>cumsum</code> on a binary mask, except I want to reset the accumulation every time the array starts decreasing again</li> <li>See example input and expected output below</li> <li>How do I do that?</li> </ul> <pre class="lang-py prettyprint-override"><code>import numpy as np def count_consecutive_increases(y: np.ndarray) -&gt; np.ndarray: ... y = np.array([9, 8, 7, 9, 6, 5, 6, 7, 8, 4, 3, 1, 2, 3, 0]) c = count_consecutive_increases(y) print(y) print(c) # &gt;&gt;&gt; [9 8 7 9 6 5 6 7 8 4 3 1 2 3 0] # &gt;&gt;&gt; [0 0 0 1 0 0 1 2 3 0 0 0 1 2 0] </code></pre>
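<p>For checking candidate answers, here is a plain-Python reference implementation (slow, but obviously correct against the example above):</p> <pre class="lang-py prettyprint-override"><code>def count_consecutive_increases_ref(y: np.ndarray) -&gt; np.ndarray:
    counts, run, prev = [], 0, None
    for v in y:
        run = run + 1 if prev is not None and v &gt; prev else 0  # reset on non-increase
        counts.append(run)
        prev = v
    return np.array(counts)
</code></pre>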
<python><arrays><numpy><cumsum>
2025-05-13 13:08:50
4
1,950
Jake Levi
79,619,552
11,350,845
Get the canonical timezone name of a backward linked timezone in python?
<p>As the title says, I'm looking for a simple way to get the canonical name of a timezone when providing a timezone name.</p> <p>For example, <code>Asia/Calcutta</code> is the backward-linked name of <code>Asia/Kolkata</code>.</p> <p>I'd expect <code>ZoneInfo(&quot;Asia/Calcutta&quot;).key</code> to be <code>Asia/Kolkata</code>, but that's not the case, and ZoneInfo doesn't have any apparent way to follow the link or get the canonical name.</p> <p>Is there any way to get this information? Either in ZoneInfo or directly in tzdata?</p>
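<p>The only direction I can see so far is parsing the Link lines out of <code>tzdata.zi</code> myself; a sketch, assuming a system tzdata install that ships the file (I don't know whether the pip <code>tzdata</code> wheel includes it):</p> <pre><code>links = {}
with open('/usr/share/zoneinfo/tzdata.zi') as f:   # path is an assumption
    for line in f:
        parts = line.split()
        if parts and parts[0] == 'L':              # 'L &lt;target&gt; &lt;link-name&gt;'
            links[parts[2]] = parts[1]

print(links.get('Asia/Calcutta'))  # expected: 'Asia/Kolkata'
</code></pre>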
<python><python-3.x><timezone><zoneinfo><tzdata>
2025-05-13 11:31:06
1
382
Junn Sorran
79,619,504
6,231,383
Keep the '+' in URL in python
<p>I have an input which is a decoded URL and can contain incorrectly escaped '+' symbols, such as <code>'http://example.com/path/tofile?query=param+with+questionmark'</code>, where '+' symbols should be converted to %2B. Here is my code to encode the URL safely.</p> <pre class="lang-py prettyprint-override"><code># Safely encode the path
encoded_path = quote(parsed.path)

# Safely encode query parameters
query_params = parse_qsl(parsed.query, keep_blank_values=True)
encoded_query = urlencode(query_params, quote_via=quote, safe='')
</code></pre> <p>I suspect parse_qsl is converting my '+' to spaces, as the final encoded query has %20 instead of %2B. How do I fix this?</p> <p>EDIT: Adding a reproducible script</p> <pre class="lang-py prettyprint-override"><code>import urllib
from urllib.parse import urlparse, urlunparse, quote, urlencode, parse_qsl, quote_plus

def safe_presigned_url(raw_url):
    parsed = urlparse(raw_url)

    # Safely encode the path
    encoded_path = quote(parsed.path)

    # Safely encode query parameters
    query_params = parse_qsl(parsed.query, keep_blank_values=True)
    encoded_query = urlencode(query_params, quote_via=quote, safe='')

    # Rebuild and return the full URL
    return urlunparse((
        parsed.scheme,
        parsed.netloc,
        encoded_path,
        parsed.params,
        encoded_query,
        parsed.fragment
    ))

url = 'http://example.com/path/tofile?query=param+with+questionmark' #decoded url
print(f'BEFORE {url}')
# BEFORE http://example.com/path/tofile?query=param+with+questionmark

url = safe_presigned_url(url)
print(f'AFTER {url}')
# AFTER http://example.com/path/tofile?query=param%20with%20questionmark
# EXPECTED http://example.com/path/tofile?query=param%2Bwith%2Bquestionmark
</code></pre>
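<p>For what it's worth, the workaround I can see is pre-encoding the literal pluses before parsing; <code>parse_qsl</code> will unquote %2B back to '+', and <code>urlencode(quote_via=quote)</code> then re-escapes it (sketch):</p> <pre class="lang-py prettyprint-override"><code>query = parsed.query.replace('+', '%2B')   # protect literal '+' from form decoding
query_params = parse_qsl(query, keep_blank_values=True)
encoded_query = urlencode(query_params, quote_via=quote, safe='')
# -&gt; 'query=param%2Bwith%2Bquestionmark'
</code></pre>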
<python><url><urllib><urlencode>
2025-05-13 10:48:29
1
2,191
Prithvi Raj
79,619,420
2,526,586
Enabling Daylight saving adjustment for Flask-APScheduler
<p>I am trying to run APScheduler for my Flask app, but I can't get daylight saving time working for the cron time.</p> <p>In my <code>app.py</code>, I have something like this:</p> <pre><code>from flask import Flask
import pytz
from .scheduled_jobs import ScheduledJobs

class SchedulerAppConfig:
    SCHEDULER_TIMEZONE = pytz.timezone('Europe/London')

app = Flask(__name__)
app.config.from_object(SchedulerAppConfig())
ScheduledJobs.init(app)
</code></pre> <p>Then in <code>scheduled_jobs.py</code>, I have this:</p> <pre><code>from flask_apscheduler import APScheduler

scheduler = APScheduler()
scheduler.init_app(app)
scheduler.start()

@scheduler.task('cron', id='test_job_1', hour='09', minute='15')
def test_job_1():
    logger.info(&quot;Test job 1 executed&quot;)
</code></pre> <p>I was expecting the task to be smart enough to adjust itself to British Summer Time and start at 09:15, but the scheduled task seems to run at 10:15 rather than 09:15 (London time is UTC+1 in the summer and UTC+0 during winter).</p> <p>What have I missed?</p> <p>Does the timezone in <code>SchedulerAppConfig</code> matter in my case? I don't see setting the timezone here doing anything for me.</p> <p>I am running the Flask app on a generic Ubuntu inside a Docker container. Do I need to set anything about the date/time settings on Ubuntu?</p>
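<p>A variant I have seen suggested but have not verified: pass the timezone to the trigger itself, since APScheduler's cron trigger accepts a <code>timezone</code> argument that flask-apscheduler should forward:</p> <pre><code>@scheduler.task('cron', id='test_job_1', hour='09', minute='15',
                timezone='Europe/London')
def test_job_1():
    logger.info(&quot;Test job 1 executed&quot;)
</code></pre>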
<python><flask><dst><apscheduler>
2025-05-13 09:54:14
1
1,342
user2526586
79,619,273
6,017,822
How do determine inner border points of polygon intersection
<p>I am trying to write a Python script that will return all vertex locations (x, y) of the inner intersecting edge between a square tile and a closed polygon with n vertices. I got this far with my code, but am getting the wrong inner border vertices. Is there an algorithm of some kind specifically for determining inside edges, so that I know in which direction to search?</p> <pre><code>from shapely.geometry import Polygon, LineString, MultiLineString
import matplotlib.pyplot as plt
from matplotlib.patches import Polygon as MplPolygon

square_coords = [
    (53906250, 35456250), # lt
    (54296875, 35456250), # rt
    (54296875, 35546875), # rb
    (53906250, 35546785)  # lb
]

polygon_coords = [
    (54340742, 35518900),
    (53997222, 35518800),
    (53997298, 35183478),
    (54340666, 35183478),
    (54340742, 35518890)
]

square = Polygon(square_coords)
polygon = Polygon(polygon_coords)

intersection = square.intersection(polygon)

fig, ax = plt.subplots()
ax.set_aspect('equal')

square_patch = MplPolygon(list(square.exterior.coords), color='blue', alpha=0.4, label='Square')
ax.add_patch(square_patch)

polygon_patch = MplPolygon(list(polygon.exterior.coords), color='green', alpha=0.4, label='Polygon')
ax.add_patch(polygon_patch)

if not intersection.is_empty:
    if intersection.geom_type == 'Polygon':
        inter_coords = list(intersection.exterior.coords)
        inter_patch = MplPolygon(inter_coords, color='red', alpha=0.6, label='Intersection')
        ax.add_patch(inter_patch)
    elif intersection.geom_type == 'MultiPolygon':
        for part in intersection.geoms:
            inter_coords = list(part.exterior.coords)
            inter_patch = MplPolygon(inter_coords, color='red', alpha=0.6, label='Intersection')
            ax.add_patch(inter_patch)

polygon_boundary = polygon.boundary
intersection_boundary = intersection.boundary
inner_border = polygon_boundary.intersection(intersection_boundary)

if not inner_border.is_empty:
    if isinstance(inner_border, (LineString, MultiLineString)):
        if isinstance(inner_border, LineString):
            lines = [inner_border]
        else:
            lines = list(inner_border.geoms)
        for line in lines:
            x, y = line.xy
            ax.plot(x, y, color='black', linewidth=2, label='Inner Border')
            ax.scatter([x[0], x[-1]], [y[0], y[-1]], color='purple', s=40, zorder=10)
            ax.text(x[0], y[0], f&quot;({x[0]:.5f}, {y[0]:.5f})&quot;, fontsize=7, color='purple')
            ax.text(x[-1], y[-1], f&quot;({x[-1]:.5f}, {y[-1]:.5f})&quot;, fontsize=7, color='purple')

ax.set_title(&quot;Polygon-Square Intersection &amp; Inner Border&quot;)
ax.set_xlabel(&quot;X&quot;)
ax.set_ylabel(&quot;Y&quot;)
ax.grid(True)
ax.legend()
plt.tight_layout()
plt.show()
</code></pre> <p>Two examples of what I am trying to achieve. My goal is to get all intersecting vertices (x, y) on the yellow border.</p> <p><a href="https://i.sstatic.net/CyC4qrkA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CyC4qrkA.png" alt="Example 1" /></a></p> <p><a href="https://i.sstatic.net/DfMk1Y4E.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DfMk1Y4E.png" alt="Example 2" /></a></p>
<python><geometry><clipping>
2025-05-13 08:39:18
0
313
tomazj
79,619,068
7,500,106
Unable to fetch user list from google workspace due to: Resource Not Found: userKey
<p>I was trying to fetch the user list using the below approach:</p> <pre><code>def getGsuiteUserData(self, data):
    access_token = data.get('access_token')
    if not access_token:
        raise Exception(&quot;access token is required.&quot;)

    headers = {
        'Authorization': f'Bearer {access_token}'
    }

    params = {
        'domain': ''
    }

    try:
        # Fetch users
        response = self._requestsMod.get(&quot;https://www.googleapis.com/admin/directory/v1/users&quot;, headers=headers, params=params)
        response.raise_for_status()
        user_data = response.json()
    except Exception as e:
        raise e
</code></pre> <p>The above approach was working fine a few days ago, but when I tried to fetch the user list now, I got the below error:</p> <pre><code>{&quot;message&quot;: &quot;Resource Not Found: userKey&quot;, &quot;domain&quot;: &quot;global&quot;, &quot;reason&quot;: &quot;notFound&quot;}
</code></pre> <p>I did update the params with the below values:</p> <pre><code>params = {
    'domain': 'my_domain'  # working
}

params = {
    'customer': 'my_customer'  # working
}
</code></pre> <p>My question is: why was <code>domain</code> with an empty value working fine before, but now it is breaking and raising an error?</p>
<python><google-cloud-platform><google-workspace>
2025-05-13 06:30:57
0
664
Utkarsh
79,619,061
11,769,133
Replacing values in columns with values from another columns according to mapping
<p>I have this kind of dataframe:</p> <pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({
    &quot;A1&quot;: [1, 11, 111],
    &quot;A2&quot;: [2, 22, 222],
    &quot;A3&quot;: [3, 33, 333],
    &quot;A4&quot;: [4, 44, 444],
    &quot;A5&quot;: [5, 55, 555]
})

    A1   A2   A3   A4   A5
0    1    2    3    4    5
1   11   22   33   44   55
2  111  222  333  444  555
</code></pre> <p>and this kind of mapping:</p> <pre class="lang-py prettyprint-override"><code>mapping = {
    &quot;A1&quot;: [&quot;A2&quot;, &quot;A3&quot;],
    &quot;A4&quot;: [&quot;A5&quot;]
}
</code></pre> <p>which means that I want all columns in the list to have values from the key column, so: A2 and A3 should be populated with values from A1, and A5 should be populated with values from A4. The resulting dataframe should look like this:</p> <pre class="lang-py prettyprint-override"><code>    A1   A2   A3   A4   A5
0    1    1    1    4    4
1   11   11   11   44   44
2  111  111  111  444  444
</code></pre> <p>I managed to do it pretty simply like this:</p> <pre class="lang-py prettyprint-override"><code>for k, v in mapping.items():
    for col in v:
        df[col] = df[k]
</code></pre> <p>but I was wondering if there is a vectorized way of doing it (a more pandas-idiomatic way)?</p>
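<p>For example, the inner loop can at least be collapsed into a single assignment per key, though whether this counts as vectorized I'm not sure:</p> <pre class="lang-py prettyprint-override"><code>for k, cols in mapping.items():
    # select the key column once per target, assign all targets in one go
    df[cols] = df[[k] * len(cols)].to_numpy()
</code></pre>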
<python><pandas>
2025-05-13 06:28:18
2
1,142
Milos Stojanovic
79,618,963
13,933,721
python .env inconsistent result
<p>.env</p> <pre><code>ENDPOINT=&quot;http://localhost:9000&quot; </code></pre> <p>py file</p> <pre><code>import os from urllib.parse import unquote from dotenv import load_dotenv load_dotenv() endpoint = os.getenv(&quot;ENDPOINT&quot;, &quot;&quot;) print(endpoint) # OR endpoint = unquote(os.getenv(&quot;ENDPOINT&quot;, &quot;&quot;)) print(endpoint) </code></pre> <p>both prints <code>http\x3a//localhost\x3a9000</code></p> <p>If I add a new .env entry</p> <pre><code>NEW_ENDPOINT=&quot;http://localhost:9001&quot; </code></pre> <p>and print as</p> <pre><code>endpoint = os.getenv(&quot;NEW_ENDPOINT&quot;, &quot;&quot;) print(endpoint) </code></pre> <p>will return as <code>http://localhost:9001</code></p> <p>However, if I restart my .venv, it will return as <code>http\x3a//localhost\x3a9001</code></p> <p>TLDR, I was already able to sanitize the url, but the question is about the inconsistency of the value from .env to python.</p>
<python><python-3.x><.env>
2025-05-13 04:54:45
2
1,047
Mr. Kenneth
79,618,918
983,556
using blender as a python module with pylance
<p>I'm trying to set up VSCode with Pylance and bpy for the purpose of making a Blender plugin, since I have very little knowledge of Python and I think LSP support would help quite a bit.</p> <p>I have cloned Blender's git repo, ran <code>make update</code> and <code>make bpy</code>, and got a bpy directory containing the <code>__init__.pyd</code> file. I opened it in Dependencies and it was missing <code>python311.dll</code>, and <code>COMCTL32.dll</code> says &quot;missing imports&quot;. I fixed the python311 issue by installing that version of Python, but I don't know what to do about the COMCTL32.dll file, or if I'm even on the right track.</p> <p>Relevant VSCode settings:</p> <pre><code>&quot;python.languageServer&quot;: &quot;Pylance&quot;,
&quot;python.analysis.extraPaths&quot;: [
    &quot;C:/SDK/build_windows_Bpy_x64_vc17_Release/bin/Release&quot;
],
&quot;python.envFile&quot;: &quot;${workspaceFolder}/.env&quot;,
&quot;python.analysis.autoSearchPaths&quot;: true,
&quot;python.defaultInterpreterPath&quot;: &quot;C:\\Users\\david\\AppData\\Local\\Programs\\Python\\Python311\\python.exe&quot;,
</code></pre> <p>Loading the <code>__init__.pyd</code> file in Dependencies: <a href="https://i.sstatic.net/M6l2ffFp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/M6l2ffFp.png" alt="DependenciesGUI" /></a></p> <p>I'm not sure if this issue will stop Pylance from loading bpy; it seems to see the bpy module but none of its members. Uncommenting the first 2 lines doesn't seem to make any difference. <a href="https://i.sstatic.net/HNld5COy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HNld5COy.png" alt="PyLance" /></a></p> <p>Pylance output:</p> <pre class="lang-none prettyprint-override"><code>2025-05-12 23:06:39.664 [info] [Info - 11:06:39 PM] (33104) Server root directory: file:///c%3A/Users/david/.vscode/extensions/ms-python.vscode-pylance-2025.4.1/dist 2025-05-12 23:06:39.668 [info] [Info - 11:06:39 PM] (33104) Pylance language server 2025.4.1 (pyright version 1.1.398, commit 4f24eccc) starting 2025-05-12 23:06:39.672 [info] [Info - 11:06:39 PM] (33104) Starting service instance &quot;vkf&quot; for workspace &quot;c:\SDK\vkf&quot; 2025-05-12 23:06:39.733 [info] [Info - 11:06:39 PM] (33104) Setting environmentName for service &quot;vkf&quot;: &quot;3.11.9 (global)&quot; 2025-05-12 23:06:39.734 [info] [Info - 11:06:39 PM] (33104) Setting pythonPath for service &quot;vkf&quot;: &quot;C:\Users\david\AppData\Local\Programs\Python\Python311\python.exe&quot; 2025-05-12 23:06:39.734 [info] [Info - 11:06:39 PM] (33104) No include entries specified; assuming c:\SDK\vkf 2025-05-12 23:06:39.735 [info] [Info - 11:06:39 PM] (33104) Auto-excluding **/node_modules 2025-05-12 23:06:39.735 [info] [Info - 11:06:39 PM] (33104) Auto-excluding **/__pycache__ 2025-05-12 23:06:39.735 [info] [Info - 11:06:39 PM] (33104) Auto-excluding **/.* 2025-05-12 23:06:39.796 [info] [Info - 11:06:39 PM] (33104) Assuming Python version 3.11.9.final.0 2025-05-12 23:06:41.047 [info] [Info - 11:06:41 PM] (33104) Found 2 source files 2025-05-12 23:06:41.064 [info] [Info - 11:06:41 PM] (33104) BG: Priority queue background worker(2) root directory: file:///c%3A/Users/david/.vscode/extensions/ms-python.vscode-pylance-2025.4.1/dist 2025-05-12 23:06:41.064 [info] [Info - 11:06:41 PM] (33104) BG: Priority queue background worker(2) started 2025-05-12 23:06:41.066 [info] [Info - 11:06:41 PM] (33104) Setting environmentName for service &quot;vkf&quot;: &quot;3.11.9 (global)&quot; 2025-05-12 23:06:41.067 [info] [Info - 
11:06:41 PM] (33104) Setting pythonPath for service &quot;vkf&quot;: &quot;C:\Users\david\AppData\Local\Programs\Python\Python311\python.exe&quot; 2025-05-12 23:06:41.067 [info] [Info - 11:06:41 PM] (33104) No include entries specified; assuming c:\SDK\vkf 2025-05-12 23:06:41.067 [info] [Info - 11:06:41 PM] (33104) Auto-excluding **/node_modules 2025-05-12 23:06:41.067 [info] [Info - 11:06:41 PM] (33104) Auto-excluding **/__pycache__ 2025-05-12 23:06:41.067 [info] [Info - 11:06:41 PM] (33104) Auto-excluding **/.* 2025-05-12 23:06:41.112 [info] [Info - 11:06:41 PM] (33104) Assuming Python version 3.11.9.final.0 2025-05-12 23:06:42.062 [info] [Info - 11:06:42 PM] (33104) Found 2 source files 2025-05-12 23:06:42.553 [info] [Info - 11:06:42 PM] (33104) BG: Indexer background runner(3) root directory: file:///c%3A/Users/david/.vscode/extensions/ms-python.vscode-pylance-2025.4.1/dist (index) 2025-05-12 23:06:42.553 [info] [Info - 11:06:42 PM] (33104) BG: Indexing(3) started 2025-05-12 23:06:43.111 [info] [Info - 11:06:43 PM] (33104) BG: scanned(3) 16 files over 1 exec env 2025-05-12 23:06:43.195 [info] [Info - 11:06:43 PM] (33104) BG: indexed(3) 16 files over 1 exec env 2025-05-12 23:06:43.205 [info] [Info - 11:06:43 PM] (33104) BG: Indexing finished(3). 2025-05-12 23:07:58.119 [info] [Info - 11:07:58 PM] (33104) Setting environmentName for service &quot;vkf&quot;: &quot;3.11.9 (global)&quot; 2025-05-12 23:07:58.119 [info] [Info - 11:07:58 PM] (33104) Setting pythonPath for service &quot;vkf&quot;: &quot;C:\Users\david\AppData\Local\Programs\Python\Python311\python.exe&quot; 2025-05-12 23:07:58.119 [info] [Info - 11:07:58 PM] (33104) No include entries specified; assuming c:\SDK\vkf 2025-05-12 23:07:58.120 [info] [Info - 11:07:58 PM] (33104) Auto-excluding **/node_modules 2025-05-12 23:07:58.120 [info] [Info - 11:07:58 PM] (33104) Auto-excluding **/__pycache__ 2025-05-12 23:07:58.120 [info] [Info - 11:07:58 PM] (33104) Auto-excluding **/.* 2025-05-12 23:07:58.163 [info] [Info - 11:07:58 PM] (33104) Assuming Python version 3.11.9.final.0 2025-05-12 23:07:59.141 [info] [Info - 11:07:59 PM] (33104) Found 2 source files 2025-05-12 23:07:59.175 [info] [Info - 11:07:59 PM] (33104) Setting environmentName for service &quot;vkf&quot;: &quot;3.11.9 (global)&quot; 2025-05-12 23:07:59.175 [info] [Info - 11:07:59 PM] (33104) Setting pythonPath for service &quot;vkf&quot;: &quot;C:\Users\david\AppData\Local\Programs\Python\Python311\python.exe&quot; 2025-05-12 23:07:59.175 [info] [Info - 11:07:59 PM] (33104) No include entries specified; assuming c:\SDK\vkf 2025-05-12 23:07:59.176 [info] [Info - 11:07:59 PM] (33104) Auto-excluding **/node_modules 2025-05-12 23:07:59.176 [info] [Info - 11:07:59 PM] (33104) Auto-excluding **/__pycache__ 2025-05-12 23:07:59.176 [info] [Info - 11:07:59 PM] (33104) Auto-excluding **/.* 2025-05-12 23:07:59.222 [info] [Info - 11:07:59 PM] (33104) Assuming Python version 3.11.9.final.0 2025-05-12 23:08:00.203 [info] [Info - 11:08:00 PM] (33104) Found 2 source files 2025-05-12 23:08:00.211 [info] [Info - 11:08:00 PM] (33104) BG: Indexer background runner(4) root directory: file:///c%3A/Users/david/.vscode/extensions/ms-python.vscode-pylance-2025.4.1/dist (index) 2025-05-12 23:08:00.211 [info] [Info - 11:08:00 PM] (33104) BG: Indexing(4) started 2025-05-12 23:08:00.211 [info] [Info - 11:08:00 PM] (33104) BG: scanned(4) 16 files over 1 exec env 2025-05-12 23:08:00.434 [info] [Info - 11:08:00 PM] (33104) BG: scanned(4) 16 files over 1 exec env 2025-05-12 23:08:00.517 [info] [Info - 
11:08:00 PM] (33104) BG: indexed(4) 16 files over 1 exec env 2025-05-12 23:08:00.525 [info] [Info - 11:08:00 PM] (33104) BG: Indexing finished(4). 2025-05-12 23:08:03.848 [info] [Info - 11:08:03 PM] (18636) Server root directory: file:///c%3A/Users/david/.vscode/extensions/ms-python.vscode-pylance-2025.4.1/dist 2025-05-12 23:08:03.851 [info] [Info - 11:08:03 PM] (18636) Pylance language server 2025.4.1 (pyright version 1.1.398, commit 4f24eccc) starting 2025-05-12 23:08:03.852 [info] [Info - 11:08:03 PM] (18636) Starting service instance &quot;vkf&quot; for workspace &quot;c:\SDK\vkf&quot; 2025-05-12 23:08:03.901 [info] [Info - 11:08:03 PM] (18636) Setting environmentName for service &quot;vkf&quot;: &quot;3.11.9 (global)&quot; 2025-05-12 23:08:03.901 [info] [Info - 11:08:03 PM] (18636) Setting pythonPath for service &quot;vkf&quot;: &quot;C:\Users\david\AppData\Local\Programs\Python\Python311\python.exe&quot; 2025-05-12 23:08:03.902 [info] [Info - 11:08:03 PM] (18636) No include entries specified; assuming c:\SDK\vkf 2025-05-12 23:08:03.902 [info] [Info - 11:08:03 PM] (18636) Auto-excluding **/node_modules 2025-05-12 23:08:03.902 [info] [Info - 11:08:03 PM] (18636) Auto-excluding **/__pycache__ 2025-05-12 23:08:03.902 [info] [Info - 11:08:03 PM] (18636) Auto-excluding **/.* 2025-05-12 23:08:03.951 [info] [Info - 11:08:03 PM] (18636) Assuming Python version 3.11.9.final.0 2025-05-12 23:08:05.105 [info] [Info - 11:08:05 PM] (18636) Found 2 source files 2025-05-12 23:08:05.121 [info] [Info - 11:08:05 PM] (18636) BG: Priority queue background worker(2) root directory: file:///c%3A/Users/david/.vscode/extensions/ms-python.vscode-pylance-2025.4.1/dist 2025-05-12 23:08:05.122 [info] [Info - 11:08:05 PM] (18636) BG: Priority queue background worker(2) started 2025-05-12 23:08:05.124 [info] [Info - 11:08:05 PM] (18636) Setting environmentName for service &quot;vkf&quot;: &quot;3.11.9 (global)&quot; 2025-05-12 23:08:05.124 [info] [Info - 11:08:05 PM] (18636) Setting pythonPath for service &quot;vkf&quot;: &quot;C:\Users\david\AppData\Local\Programs\Python\Python311\python.exe&quot; 2025-05-12 23:08:05.125 [info] [Info - 11:08:05 PM] (18636) No include entries specified; assuming c:\SDK\vkf 2025-05-12 23:08:05.125 [info] [Info - 11:08:05 PM] (18636) Auto-excluding **/node_modules 2025-05-12 23:08:05.125 [info] [Info - 11:08:05 PM] (18636) Auto-excluding **/__pycache__ 2025-05-12 23:08:05.125 [info] [Info - 11:08:05 PM] (18636) Auto-excluding **/.* 2025-05-12 23:08:05.167 [info] [Info - 11:08:05 PM] (18636) Assuming Python version 3.11.9.final.0 2025-05-12 23:08:06.109 [info] [Info - 11:08:06 PM] (18636) Found 2 source files 2025-05-12 23:08:06.569 [info] [Info - 11:08:06 PM] (18636) BG: Indexer background runner(3) root directory: file:///c%3A/Users/david/.vscode/extensions/ms-python.vscode-pylance-2025.4.1/dist (index) 2025-05-12 23:08:06.569 [info] [Info - 11:08:06 PM] (18636) BG: Indexing(3) started 2025-05-12 23:08:07.190 [info] [Info - 11:08:07 PM] (18636) BG: scanned(3) 16 files over 1 exec env 2025-05-12 23:08:07.293 [info] [Info - 11:08:07 PM] (18636) BG: indexed(3) 16 files over 1 exec env 2025-05-12 23:08:07.302 [info] [Info - 11:08:07 PM] (18636) BG: Indexing finished(3). 2025-05-12 23:08:13.569 [info] (Client) The existing extension didn't exit within 10 seconds. New instance will start, but you might encounter issues. 
2025-05-12 23:08:13.569 [info] (Client) Pylance async client (2025.4.1) started with python extension (2025.6.0) 2025-05-12 23:08:15.780 [info] [Info - 11:08:15 PM] (18636) Setting environmentName for service &quot;vkf&quot;: &quot;3.11.9 (global)&quot; 2025-05-12 23:08:15.781 [info] [Info - 11:08:15 PM] (18636) Setting pythonPath for service &quot;vkf&quot;: &quot;C:\Users\david\AppData\Local\Programs\Python\Python311\python.exe&quot; 2025-05-12 23:08:15.781 [info] [Info - 11:08:15 PM] (18636) No include entries specified; assuming c:\SDK\vkf 2025-05-12 23:08:15.781 [info] [Info - 11:08:15 PM] (18636) Auto-excluding **/node_modules 2025-05-12 23:08:15.781 [info] [Info - 11:08:15 PM] (18636) Auto-excluding **/__pycache__ 2025-05-12 23:08:15.781 [info] [Info - 11:08:15 PM] (18636) Auto-excluding **/.* 2025-05-12 23:08:15.835 [info] [Info - 11:08:15 PM] (18636) Assuming Python version 3.11.9.final.0 2025-05-12 23:08:16.850 [info] [Info - 11:08:16 PM] (18636) Found 2 source files 2025-05-12 23:08:16.877 [info] [Info - 11:08:16 PM] (18636) Setting environmentName for service &quot;vkf&quot;: &quot;3.11.9 (global)&quot; 2025-05-12 23:08:16.877 [info] [Info - 11:08:16 PM] (18636) Setting pythonPath for service &quot;vkf&quot;: &quot;C:\Users\david\AppData\Local\Programs\Python\Python311\python.exe&quot; 2025-05-12 23:08:16.877 [info] [Info - 11:08:16 PM] (18636) No include entries specified; assuming c:\SDK\vkf 2025-05-12 23:08:16.877 [info] [Info - 11:08:16 PM] (18636) Auto-excluding **/node_modules 2025-05-12 23:08:16.877 [info] [Info - 11:08:16 PM] (18636) Auto-excluding **/__pycache__ 2025-05-12 23:08:16.877 [info] [Info - 11:08:16 PM] (18636) Auto-excluding **/.* 2025-05-12 23:08:16.923 [info] [Info - 11:08:16 PM] (18636) Assuming Python version 3.11.9.final.0 2025-05-12 23:08:17.919 [info] [Info - 11:08:17 PM] (18636) Found 2 source files 2025-05-12 23:08:17.928 [info] [Info - 11:08:17 PM] (18636) BG: Indexer background runner(4) root directory: file:///c%3A/Users/david/.vscode/extensions/ms-python.vscode-pylance-2025.4.1/dist (index) 2025-05-12 23:08:17.929 [info] [Info - 11:08:17 PM] (18636) BG: Indexing(4) started 2025-05-12 23:08:17.929 [info] [Info - 11:08:17 PM] (18636) BG: scanned(4) 16 files over 1 exec env 2025-05-12 23:08:18.146 [info] [Info - 11:08:18 PM] (18636) BG: scanned(4) 16 files over 1 exec env 2025-05-12 23:08:18.230 [info] [Info - 11:08:18 PM] (18636) BG: indexed(4) 16 files over 1 exec env 2025-05-12 23:08:18.237 [info] [Info - 11:08:18 PM] (18636) BG: Indexing finished(4). 2025-05-12 23:11:53.501 [info] [Info - 11:11:53 PM] (18636) BG: Indexer background runner(5) root directory: file:///c%3A/Users/david/.vscode/extensions/ms-python.vscode-pylance-2025.4.1/dist (refresh) 2025-05-12 23:11:53.501 [info] [Info - 11:11:53 PM] (18636) BG: Indexing(5) started 2025-05-12 23:11:54.152 [info] [Info - 11:11:54 PM] (18636) BG: scanned(5) 16 files over 1 exec env 2025-05-12 23:11:54.271 [info] [Info - 11:11:54 PM] (18636) BG: indexed(5) 16 files over 1 exec env 2025-05-12 23:11:54.284 [info] [Info - 11:11:54 PM] (18636) BG: Indexing finished(5). 
2025-05-12 23:51:58.877 [info] [Info - 11:51:58 PM] (27604) Server root directory: file:///c%3A/Users/david/.vscode/extensions/ms-python.vscode-pylance-2025.4.1/dist 2025-05-12 23:51:58.885 [info] [Info - 11:51:58 PM] (27604) Pylance language server 2025.4.1 (pyright version 1.1.398, commit 4f24eccc) starting 2025-05-12 23:51:58.888 [info] [Info - 11:51:58 PM] (27604) Starting service instance &quot;vkf&quot; for workspace &quot;c:\SDK\vkf&quot; 2025-05-12 23:51:58.898 [info] (Client) Pylance async client (2025.4.1) started with python extension (2025.6.0) 2025-05-12 23:51:58.927 [info] [Info - 11:51:58 PM] (27604) Setting environmentName for service &quot;vkf&quot;: &quot;3.11.9 (global)&quot; 2025-05-12 23:51:58.927 [info] [Info - 11:51:58 PM] (27604) Setting pythonPath for service &quot;vkf&quot;: &quot;C:\Users\david\AppData\Local\Programs\Python\Python311\python.exe&quot; 2025-05-12 23:51:58.928 [info] [Info - 11:51:58 PM] (27604) No include entries specified; assuming c:\SDK\vkf 2025-05-12 23:51:58.928 [info] [Info - 11:51:58 PM] (27604) Auto-excluding **/node_modules 2025-05-12 23:51:58.928 [info] [Info - 11:51:58 PM] (27604) Auto-excluding **/__pycache__ 2025-05-12 23:51:58.928 [info] [Info - 11:51:58 PM] (27604) Auto-excluding **/.* 2025-05-12 23:51:58.984 [info] [Info - 11:51:58 PM] (27604) Assuming Python version 3.11.9.final.0 2025-05-12 23:52:00.171 [info] [Info - 11:52:00 PM] (27604) Found 2 source files 2025-05-12 23:52:00.186 [info] [Info - 11:52:00 PM] (27604) BG: Priority queue background worker(2) root directory: file:///c%3A/Users/david/.vscode/extensions/ms-python.vscode-pylance-2025.4.1/dist 2025-05-12 23:52:00.186 [info] [Info - 11:52:00 PM] (27604) BG: Priority queue background worker(2) started 2025-05-12 23:52:00.189 [info] [Info - 11:52:00 PM] (27604) Setting environmentName for service &quot;vkf&quot;: &quot;3.11.9 (global)&quot; 2025-05-12 23:52:00.189 [info] [Info - 11:52:00 PM] (27604) Setting pythonPath for service &quot;vkf&quot;: &quot;C:\Users\david\AppData\Local\Programs\Python\Python311\python.exe&quot; 2025-05-12 23:52:00.189 [info] [Info - 11:52:00 PM] (27604) No include entries specified; assuming c:\SDK\vkf 2025-05-12 23:52:00.189 [info] [Info - 11:52:00 PM] (27604) Auto-excluding **/node_modules 2025-05-12 23:52:00.189 [info] [Info - 11:52:00 PM] (27604) Auto-excluding **/__pycache__ 2025-05-12 23:52:00.189 [info] [Info - 11:52:00 PM] (27604) Auto-excluding **/.* 2025-05-12 23:52:00.233 [info] [Info - 11:52:00 PM] (27604) Assuming Python version 3.11.9.final.0 2025-05-12 23:52:01.167 [info] [Info - 11:52:01 PM] (27604) Found 2 source files 2025-05-12 23:52:01.666 [info] [Info - 11:52:01 PM] (27604) BG: Indexer background runner(3) root directory: file:///c%3A/Users/david/.vscode/extensions/ms-python.vscode-pylance-2025.4.1/dist (index) 2025-05-12 23:52:01.666 [info] [Info - 11:52:01 PM] (27604) BG: Indexing(3) started 2025-05-12 23:52:02.235 [info] [Info - 11:52:02 PM] (27604) BG: scanned(3) 16 files over 1 exec env 2025-05-12 23:52:02.330 [info] [Info - 11:52:02 PM] (27604) BG: indexed(3) 16 files over 1 exec env 2025-05-12 23:52:02.340 [info] [Info - 11:52:02 PM] (27604) BG: Indexing finished(3). 
2025-05-12 23:55:18.945 [info] [Info - 11:55:18 PM] (1740) Server root directory: file:///c%3A/Users/david/.vscode/extensions/ms-python.vscode-pylance-2025.4.1/dist 2025-05-12 23:55:18.946 [info] [Info - 11:55:18 PM] (1740) Pylance language server 2025.4.1 (pyright version 1.1.398, commit 4f24eccc) starting 2025-05-12 23:55:18.950 [info] [Info - 11:55:18 PM] (1740) Starting service instance &quot;vkf&quot; for workspace &quot;c:\SDK\vkf&quot; 2025-05-12 23:55:19.003 [info] [Info - 11:55:19 PM] (1740) Setting environmentName for service &quot;vkf&quot;: &quot;3.11.9 (global)&quot; 2025-05-12 23:55:19.004 [info] [Info - 11:55:19 PM] (1740) Setting pythonPath for service &quot;vkf&quot;: &quot;C:\Users\david\AppData\Local\Programs\Python\Python311\python.exe&quot; 2025-05-12 23:55:19.004 [info] [Info - 11:55:19 PM] (1740) No include entries specified; assuming c:\SDK\vkf 2025-05-12 23:55:19.004 [info] [Info - 11:55:19 PM] (1740) Auto-excluding **/node_modules 2025-05-12 23:55:19.005 [info] [Info - 11:55:19 PM] (1740) Auto-excluding **/__pycache__ 2025-05-12 23:55:19.005 [info] [Info - 11:55:19 PM] (1740) Auto-excluding **/.* 2025-05-12 23:55:19.052 [info] [Info - 11:55:19 PM] (1740) Assuming Python version 3.11.9.final.0 2025-05-12 23:55:20.187 [info] [Info - 11:55:20 PM] (1740) Found 2 source files 2025-05-12 23:55:20.201 [info] [Info - 11:55:20 PM] (1740) BG: Priority queue background worker(2) root directory: file:///c%3A/Users/david/.vscode/extensions/ms-python.vscode-pylance-2025.4.1/dist 2025-05-12 23:55:20.202 [info] [Info - 11:55:20 PM] (1740) BG: Priority queue background worker(2) started 2025-05-12 23:55:20.204 [info] [Info - 11:55:20 PM] (1740) Setting environmentName for service &quot;vkf&quot;: &quot;3.11.9 (global)&quot; 2025-05-12 23:55:20.204 [info] [Info - 11:55:20 PM] (1740) Setting pythonPath for service &quot;vkf&quot;: &quot;C:\Users\david\AppData\Local\Programs\Python\Python311\python.exe&quot; 2025-05-12 23:55:20.204 [info] [Info - 11:55:20 PM] (1740) No include entries specified; assuming c:\SDK\vkf 2025-05-12 23:55:20.205 [info] [Info - 11:55:20 PM] (1740) Auto-excluding **/node_modules 2025-05-12 23:55:20.205 [info] [Info - 11:55:20 PM] (1740) Auto-excluding **/__pycache__ 2025-05-12 23:55:20.205 [info] [Info - 11:55:20 PM] (1740) Auto-excluding **/.* 2025-05-12 23:55:20.246 [info] [Info - 11:55:20 PM] (1740) Assuming Python version 3.11.9.final.0 2025-05-12 23:55:21.193 [info] [Info - 11:55:21 PM] (1740) Found 2 source files 2025-05-12 23:55:21.647 [info] [Info - 11:55:21 PM] (1740) BG: Indexer background runner(3) root directory: file:///c%3A/Users/david/.vscode/extensions/ms-python.vscode-pylance-2025.4.1/dist (index) 2025-05-12 23:55:21.647 [info] [Info - 11:55:21 PM] (1740) BG: Indexing(3) started 2025-05-12 23:55:22.241 [info] [Info - 11:55:22 PM] (1740) BG: scanned(3) 16 files over 1 exec env 2025-05-12 23:55:22.301 [info] [Info - 11:55:22 PM] (1740) BG: indexed(3) 16 files over 1 exec env 2025-05-12 23:55:22.312 [info] [Info - 11:55:22 PM] (1740) BG: Indexing finished(3). 
</code></pre> <p>Test script:</p> <pre><code>import sys sys.path.insert(0, r&quot;C:\\SDK\\build_windows_Bpy_x64_vc17_Release\\bin\\Release&quot;) import bpy print(&quot;version: &quot; + str(bpy.app.version)) </code></pre> <p>Output:</p> <pre><code>C:\SDK\vkf&gt;C:/Users/david/AppData/Local/Programs/Python/Python311/python.exe c:/SDK/vkf/python_test.py ModuleNotFoundError: No module named 'numpy' Unable to initialise audio ImportError: numpy.core.multiarray failed to import version: (4, 5, 0) </code></pre> <p>Does anyone know how to get this to work?</p>
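<p>A hedged note: the output shows <code>bpy</code> failing to import <code>numpy</code> under the interpreter that runs the script, so the first thing to try is installing numpy for exactly that interpreter (a sketch, not a guaranteed fix; if an ABI mismatch remains, the numpy version may need to match the one the bpy build was compiled against):</p> <pre><code>C:\Users\david\AppData\Local\Programs\Python\Python311\python.exe -m pip install numpy
</code></pre>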
<python><visual-studio-code><blender><pylance><bpy>
2025-05-13 03:59:25
0
1,700
David Carpenter
79,618,903
19,546,216
Is this folder structure possible in Behave Selenium?
<p>I'm currently refactoring our code and our organization wants to combine different repositories into 1 and now I'm arranging the folder structure.</p> <pre><code>DIRECTORY STRUCTURE: +-- main_repository/ +-- other_dev_folders/ +-- testing/ +-- another_folder_layer/ +-- features/ +-- platform1/ | +-- platform1feature1.feature | +-- platform1feature2.feature +-- platform2/ | +-- platform2feature1.feature | +-- platform2feature2.feature +-- steps/ +-- common/ | +-- common_steps.py +-- platform1/ | +-- platform1feature1.py | +-- platform1feature2.py +-- platform2/ | +-- platform2feature1.py | +-- platform2feature2.py +-- environment.py +-- web_pages_initializer.py +-- behave.ini </code></pre> <p>I put the <code>behave.ini</code> with the following but still seems not working since it loads the steps in <code>testing/steps</code> instead of <code>testing/another_folder_layer/steps</code></p> <pre><code>[behave] paths = testing/another_folder_layer/features steps_dir = testing/another_folder_layer/steps format = plain color = true stdout_capture = false stderr_capture = false </code></pre> <p>Not sure if this is possible in behave but can you give tips as to how to organize this? I tried running normally using <code>behave testing/another_folder_layer/features/platform1/platform1feature1.feature</code> but the steps are not recognized.</p>
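<p>A hedged suggestion: behave discovers step modules in a <code>steps/</code> directory directly under the features base directory, and as far as I know <code>steps_dir</code> cannot point at an arbitrary sibling path. So one layout that default discovery should find is to move <code>steps/</code> (and <code>environment.py</code>) inside the features directory:</p> <pre><code>[behave]
paths = testing/another_folder_layer/features

# expected layout (sketch):
# testing/another_folder_layer/features/
#     platform1/*.feature
#     platform2/*.feature
#     steps/           &lt;-- only top-level *.py modules are auto-loaded
#     environment.py
</code></pre> <p>Note that behave only auto-loads the top-level modules in <code>steps/</code>, so the <code>common/</code>, <code>platform1/</code>, and <code>platform2/</code> step packages would need to be imported from a top-level file such as a hypothetical <code>steps/all_steps.py</code>.</p>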
<python><selenium-webdriver><bdd><python-behave>
2025-05-13 03:38:11
1
321
Faith Berroya
79,618,851
2,817,520
Uvicorn workers greater than one apparently reduce performance by half
<p><strong>Update 2:</strong> The following results do not hold when installing Uvicorn using <code>pip install uvicorn[standard]</code> which installs Uvicorn with Cython-based dependencies (where possible) and other optional extras.</p> <p><strong>Update 1:</strong> I tested the app using <a href="https://hypercorn.readthedocs.io/en/latest/index.html" rel="nofollow noreferrer">Hypercorn</a> and <a href="https://github.com/emmett-framework/granian" rel="nofollow noreferrer">Granian</a> with different numbers of workers, and I can confirm that this only applies to <a href="https://www.uvicorn.org/" rel="nofollow noreferrer">Uvicorn</a>.</p> <p>My main test app:</p> <pre><code>from starlette.applications import Starlette from starlette.responses import PlainTextResponse from starlette.routing import Route async def homepage(request): return PlainTextResponse(&quot;Hello world!&quot;) app = Starlette(routes=[Route(&quot;/&quot;, homepage)]) </code></pre> <p>Tests using <a href="https://github.com/wg/wrk" rel="nofollow noreferrer">wrk</a> and <a href="https://github.com/codesenberg/bombardier" rel="nofollow noreferrer">bombardier</a>:</p> <pre><code>uvicorn --host 0.0.0.0 --port 8000 --workers 1 --log-level critical main:app wrk -t4 -c100 -d15s http://127.0.0.1:8000 Running 15s test @ http://127.0.0.1:8000 4 threads and 100 connections Thread Stats Avg Stdev Max +/- Stdev Latency 20.97ms 1.97ms 42.51ms 72.49% Req/Sec 1.20k 75.01 1.33k 81.00% 71484 requests in 15.01s, 9.95MB read Requests/sec: 4763.71 Transfer/sec: 679.21KB bombardier-linux-amd64 -c 125 -d 15s http://localhost:8000 Bombarding http://localhost:8000 for 15s using 125 connection(s) [====================================================================================================================] 15s Done! Statistics Avg Stdev Max Reqs/sec 3554.25 1317.96 6167.59 Latency 35.11ms 4.23ms 59.66ms HTTP codes: 1xx - 0, 2xx - 53452, 3xx - 0, 4xx - 0, 5xx - 0 others - 0 Throughput: 722.57KB/s uvicorn --host 0.0.0.0 --port 8000 --workers 2 --log-level critical main:app wrk -t4 -c100 -d15s http://127.0.0.1:8000 Running 15s test @ http://127.0.0.1:8000 4 threads and 100 connections Thread Stats Avg Stdev Max +/- Stdev Latency 41.52ms 1.58ms 59.52ms 96.24% Req/Sec 603.89 33.36 720.00 73.50% 36094 requests in 15.02s, 5.04MB read Requests/sec: 2403.45 Transfer/sec: 343.55KB bombardier-linux-amd64 -c 125 -d 15s http://localhost:8000 Bombarding http://localhost:8000 for 15s using 125 connection(s) [====================================================================================================================] 15s Done! Statistics Avg Stdev Max Reqs/sec 2905.55 432.64 5376.82 Latency 42.99ms 2.20ms 74.43ms HTTP codes: 1xx - 0, 2xx - 43671, 3xx - 0, 4xx - 0, 5xx - 0 others - 0 Throughput: 589.74KB/s </code></pre> <p>From now on, increasing the number of workers does not make any difference. The results with <a href="https://httpd.apache.org/docs/2.4/programs/ab.html" rel="nofollow noreferrer">ab</a> is different. Increasing the number of workers actually increases the performance. Now my question is why?</p>
<python><fastapi><uvicorn><starlette>
2025-05-13 02:21:05
0
860
Dante
79,618,567
3,806,728
How to Cache Elements to increase the Runtime Performance with the lxml Python Library
<p>On the lxml.de website <a href="https://lxml.de/performance.html" rel="nofollow noreferrer">https://lxml.de/performance.html</a> I see the following statement:</p> <p>A way to improve the normal attribute access time is static instantiation of the Python objects, thus trading memory for speed. Just create a cache dictionary and run:</p> <p>cache[root] = list(root.iter()) after parsing and:</p> <p>del cache[root]</p> <p><strong>Can anyone provide a suitable Python code example showing how the above mechanism can be used in a Python function?</strong></p>
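<p>A minimal sketch of the documented trick (the function names below are illustrative, not part of lxml): keep a dictionary that pins every element proxy alive while you work on the tree, then drop the entry when done:</p> <pre><code>from lxml import etree

cache = {}

def parse_pinned(xml_bytes):
    root = etree.fromstring(xml_bytes)
    # statically instantiate all Python proxy objects once,
    # trading memory for faster attribute access later
    cache[root] = list(root.iter())
    return root

def release(root):
    del cache[root]

root = parse_pinned(b'&lt;a&gt;&lt;b x=&quot;1&quot;/&gt;&lt;b x=&quot;2&quot;/&gt;&lt;/a&gt;')
for elem in root.iter('b'):
    print(elem.get('x'))  # attribute access reuses the cached proxies
release(root)
</code></pre>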
<python><lxml>
2025-05-12 20:05:19
1
353
user3806728
79,618,401
3,890,560
Tensorflow: XLA compilation requires a fixed tensor list size
<p>I'm running Ubuntu on a laptop with a dGPU.</p> <p>I installed tensorflow using docker: <code>docker pull tensorflow/tensorflow:latest</code>. Tensorflow (2.19.0) can use a XLA CPU and a GPU.</p> <p>I'm following this tutorial: <a href="https://www.tensorflow.org/tutorials/load_data/video" rel="nofollow noreferrer">https://www.tensorflow.org/tutorials/load_data/video</a>.</p> <p>When running the tutorial, I hit this message <code>XLA compilation requires a fixed tensor list size. Set the max number of elements. This could also happen if you're using a TensorArray in a while loop that does not have its maximum_iteration set, you can fix this by setting maximum_iteration to a suitable value.</code></p> <p>The tutorial calls <code>model.fit</code> that triggers the error. According to the python callstack, the problem is around this line:</p> <pre><code>Stack trace for op definition: ... File &quot;usr/local/lib/python3.11/dist-packages/keras/src/utils/traceback_utils.py&quot;, line 117, in error_handler File &quot;usr/local/lib/python3.11/dist-packages/keras/src/backend/tensorflow/trainer.py&quot;, line 371, in fit File &quot;usr/local/lib/python3.11/dist-packages/keras/src/backend/tensorflow/trainer.py&quot;, line 219, in function &gt; vi /usr/local/lib/python3.11/dist-packages/keras/src/backend/tensorflow/trainer.py +371 def fit( ... self.stop_training = False self.make_train_function() callbacks.on_train_begin() training_logs = None logs = {} initial_epoch = self._initial_epoch or initial_epoch for epoch in range(initial_epoch, epochs): self.reset_metrics() callbacks.on_epoch_begin(epoch) with epoch_iterator.catch_stop_iteration(): for step, iterator in epoch_iterator: callbacks.on_train_batch_begin(step) logs = self.train_function(iterator) &lt;= ERROR SPAWNED </code></pre> <p>This problem is referenced here: <a href="https://android.googlesource.com/platform/external/tensorflow/+/refs/heads/android-s-beta-5/tensorflow/compiler/xla/g3doc/known_issues.md#tensorflow-while-loops-need-to-be-bounded-or-have-backprop-disabled" rel="nofollow noreferrer">https://android.googlesource.com/platform/external/tensorflow/+/refs/heads/android-s-beta-5/tensorflow/compiler/xla/g3doc/known_issues.md#tensorflow-while-loops-need-to-be-bounded-or-have-backprop-disabled</a> and the solution seems to fix <code>maximum_iterations</code>.</p> <p>I tried to pass <code>maximum_iterations</code> to <code>fit</code>... But I hit this: <code>TypeError: TensorFlowTrainer.fit() got an unexpected keyword argument 'maximum_iterations' </code></p> <p>How to fix this error if I can't pass arguments from <code>model.fit</code> to <code>self.train_function</code>?</p> <p>On dGPU with XLA, how to deal with these error?</p>
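<p><code>maximum_iterations</code> is an argument of <code>tf.while_loop</code> (or a <code>TensorArray</code>'s size), not of <code>fit()</code>, so it cannot be passed through the trainer. A hedged workaround, assuming the loop comes from the tutorial's model rather than your own code, is to keep training off the XLA path when compiling (the argument values below are illustrative, not the tutorial's exact ones):</p> <pre><code>model.compile(
    optimizer='adam',
    loss='sparse_categorical_crossentropy',
    metrics=['accuracy'],
    jit_compile=False,  # keep the unbounded while-loop off XLA
)
</code></pre>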
<python><tensorflow2.0><tensorflow-xla>
2025-05-12 17:49:05
0
599
fghoussen
79,618,176
307,050
Matplotlib plot continuous time series of data
<p>I'm trying to continuously plot data received via network using matplotlib.</p> <p>On the y-axis, I want to plot a particular entity, while the x-axis is the current time.</p> <p>The x-axis should cover a fixed period of time, ending with the current time.</p> <p>Here's my current test code, which simulates the data received via network with random numbers.</p> <pre><code>import threading import random import time import signal import numpy as np import matplotlib.pyplot as plt import matplotlib.dates as md class NPData(): def __init__(self, size): self.data = np.zeros((size,2)) # size rows, 2 cols self.size = size self.head = 0 def __len__(self): return self.data.__len__() def __str__(self): return str(self.data) def append(self, data): self.data[self.head] = data self.head = (self.head + 1) % self.size def get_x_range(self): return (self.data.min(axis=0)[0], self.data.max(axis=0)[0]) class Producer(threading.Thread): def __init__(self): super().__init__() random.seed() self.running = True self.data = NPData(100) def get_data(self): return self.data.data def stop(self): self.running = False def run(self): while self.running: now_ms = md.date2num(int(time.time() * 1000)) # ms sample = np.array([now_ms, np.random.randint(0,999)]) self.data.append(sample) time.sleep(0.1) prog_do_run = True def signal_handler(sig, frame): global prog_do_run prog_do_run = False def main(): signal.signal(signal.SIGINT, signal_handler) p = Producer() p.start() fig, ax = plt.subplots() xfmt = md.DateFormatter('%H:%M:%S.%f') ax.xaxis.set_major_formatter(xfmt) #ax.plot(p.get_data()) #ax.set_ylim(0,999) plt.show(block=False) while prog_do_run: x_range = p.data.get_x_range() ax.set_xlim(x_range) #ax.set_ylim(0,999) print(p.get_data()) #ax.plot(p.get_data()) plt.draw() plt.pause(0.05) p.stop() </code></pre> <p>Notes:</p> <p>The <code>Producer</code> class is supposed to emulate data received via network.</p> <p>I've encountered two main issues:</p> <ol> <li><p>I'm struggling to find out what <strong>actually</strong> needs to be called inside an endless loop in order for matplotlib to continuously update a plot (efficiently). Is it <code>draw()</code>, <code>plot()</code>, <code>pause()</code> or a combination of those?</p> </li> <li><p>I've been generating milliseconds timestamps and matplotlib seems to not like them at all. The official <a href="https://matplotlib.org/stable/api/dates_api.html" rel="nofollow noreferrer">docs</a> say to use <code>date2num()</code>, which does not work. If I just use <code>int(time.time() * 1000)</code> or <code>round(time.time() * 1000)</code>, I get <code>OverflowError: int too big to convert</code> from the formatter.</p> </li> </ol>
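<p>A sketch of the usual live-update pattern (reusing the <code>Producer</code> above): create one line artist up front, then only update its data inside the loop. <code>plt.pause()</code> already redraws and runs the GUI event loop, so a separate <code>draw()</code> is not needed, and calling <code>plot()</code> each iteration would pile up new artists. For the second issue, <code>md.date2num()</code> expects datetime objects, not millisecond integers:</p> <pre><code>import datetime as dt

# in Producer.run(): now_ms = md.date2num(dt.datetime.now())

(line,) = ax.plot([], [])
while prog_do_run:
    data = p.get_data()
    line.set_data(data[:, 0], data[:, 1])
    ax.relim()
    ax.autoscale_view()
    plt.pause(0.05)  # redraws the canvas and processes GUI events
</code></pre>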
<python><numpy><matplotlib>
2025-05-12 15:30:49
1
1,347
mefiX
79,617,933
24,271,353
multidimensional coordinate transform with xarray
<p>How to convert multidimensional coordinate to standard coordinate in order to unify data when using <code>xarray</code> for nc data:</p> <pre class="lang-py prettyprint-override"><code>import xarray as xr da = xr.DataArray( [[0, 1], [2, 3]], coords={ &quot;lon&quot;: ([&quot;ny&quot;, &quot;nx&quot;], [[30, 40], [40, 50]]), &quot;lat&quot;: ([&quot;ny&quot;, &quot;nx&quot;], [[10, 10], [20, 20]]), }, dims=[&quot;ny&quot;, &quot;nx&quot;], ) </code></pre> <p>Expected conversion result:</p> <pre class="lang-py prettyprint-override"><code>xr.DataArray( [[0, 1, np.nan], [np.nan, 2, 3]], coords={ &quot;lat&quot;: [10, 20], &quot;lon&quot;: [30, 40, 50], }) </code></pre>
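<p>One sketch that should produce exactly the expected result: stack the two logical dimensions into a single point dimension, promote the 2D <code>lat</code>/<code>lon</code> coordinates to that dimension's index, then unstack. Grid cells with no data come back as NaN:</p> <pre><code>result = (
    da.stack(point=('ny', 'nx'))
      .set_index(point=('lat', 'lon'))
      .unstack('point')
)
# dims: lat [10, 20], lon [30, 40, 50]; missing cells are NaN
</code></pre>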
<python><python-xarray>
2025-05-12 13:26:06
1
586
Breeze
79,617,815
865,169
Can I be sure of the order requests are sent in in a Python requests session?
<p>I am using Python requests to communicate with an external API. When I do something like:</p> <pre><code>with requests.Session() as sess: resp1 = sess.put( api_url + f&quot;stuff/{id}/endpoint1&quot;, json=contents1, ) resp2 = sess.put( api_url + f&quot;things/{id}/endpoint2&quot;, json=contents2, ) </code></pre> <p>Can I be sure that the put request to 'endpoint1' gets sent before the put request to 'endpoint2'? Does requests perhaps handle this asynchronously in some way that <em>might</em> make the request to 'endpoint2' to depart before the one to 'endpoint1'?</p> <p>If if cannot be sure, what is then a good approach to ensure that the requests get sent in the order I write them in the code?</p> <p>Background: I use a session in order to re-use some session headers across requests.</p>
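<p>For what it's worth, plain <code>requests</code> calls are synchronous: <code>sess.put(...)</code> does not return until the response has been received (unless <code>stream=True</code> defers the body), so the second request cannot leave before the first round trip completes. A small sketch that also fails fast if the first call errors:</p> <pre><code>with requests.Session() as sess:
    resp1 = sess.put(api_url + f'stuff/{id}/endpoint1', json=contents1)
    resp1.raise_for_status()  # resp1 is fully received before this line runs
    resp2 = sess.put(api_url + f'things/{id}/endpoint2', json=contents2)
</code></pre>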
<python><python-requests>
2025-05-12 12:15:23
1
1,372
Thomas Arildsen
79,617,790
1,552,080
Creating a Polars DataFrame from list of dicts?
<p>I want to create a Polars DataFrame or LazyFrame (I'm not sure which is appropriate) from a list of dictionaries where each of the dicts represents one row of data. Each individual dict has fields of various data types and, most importantly, some fields have array-like data. The array-like data related to one key can vary in length from dict to dict. A single row dictionary looks like this:</p> <pre><code>{ 'col1': int, 'col2': int, 'col3': str, ... 'colA': str:timestamp, 'colB': int(long), 'colC': float, ... 'colK': bool, ... 'colL': str[L], &lt;-- length differs from dict to dict 'colM': float[M], &lt;-- length differs from dict to dict 'colN': int[N], &lt;-- length differs from dict to dict ... } </code></pre> <p>Currently, I am trying to do the following:</p> <pre><code>for row_dict in list_of_row_dicts: dataframe = pl.DataFrame(row_dict) slice_dataframe = slice_dataframe.join(other=dataframe) </code></pre> <p>This fails with the error message:</p> <blockquote> <p>polars.exceptions.ShapeError: could not create a new DataFrame: height of column 'colL' (x) does not match height of column 'colM' (y)</p> </blockquote> <p>How can I achieve what I am trying to do?</p>
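<p>A sketch of what may be the direct route: <code>pl.DataFrame</code> accepts the list of row dicts as-is, and the ragged array-like fields become <code>List</code>-typed columns where each row keeps its own length, so no per-row join is needed (the field values below are illustrative):</p> <pre><code>import polars as pl

rows = [
    {'col1': 1, 'colC': 0.5, 'colL': ['a'], 'colM': [1.0, 2.0]},
    {'col1': 2, 'colC': 1.5, 'colL': ['b', 'c'], 'colM': [3.0]},
]
df = pl.DataFrame(rows)  # colL: List(String), colM: List(Float64)
lf = df.lazy()           # LazyFrame, if deferred execution is wanted
</code></pre>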
<python><dataframe><python-polars>
2025-05-12 11:55:40
1
1,193
WolfiG
79,617,641
617,603
./manage.py runserver on mac
<p>I'm setting up a new dev environment for Django development on a Mac as a new Python dev, but I'm receiving an error running a basic Django app, so I think my Python setup is incorrect.</p> <p>I have <code>python3</code> installed, so to make it easy to access, in my <code>.zshrc</code> I have added the line</p> <pre class="lang-bash prettyprint-override"><code>alias python='python3' </code></pre> <p>I have used Homebrew to install <code>python</code>, <code>django-admin</code>, <code>pipx</code> as well as <code>python-language-server</code> and <code>ruff-lsp</code>, and installed <code>jedi-language-server</code> via <code>pipx</code> as a basic setup for Helix.</p> <p>I have used the django ninja tutorial to get a project started</p> <pre class="lang-bash prettyprint-override"><code>django-admin startproject ninjaapidemo cd ninjaapidemo </code></pre> <p>which gives me a project structure</p> <pre><code># ~/dev/ninjaapidemo . ├── manage.py └── ninjaapidemo ├── __init__.py ├── asgi.py ├── settings.py ├── urls.py └── wsgi.py </code></pre> <p>I have updated <code>urls.py</code> to the following (as per the django ninja tutorial)</p> <pre class="lang-py prettyprint-override"><code>from django.contrib import admin from django.urls import path from ninja import NinjaAPI api = NinjaAPI() @api.get(&quot;/add&quot;) def add(request, a: int, b: int): return {&quot;result&quot;: a + b} urlpatterns = [path(&quot;admin/&quot;, admin.site.urls), path(&quot;api/&quot;, api.urls)] </code></pre> <p>and attempted to run the project from <code>~/dev/ninjaapidemo</code></p> <pre class="lang-bash prettyprint-override"><code>./manage.py runserver </code></pre> <p>but I get the following error</p> <pre><code>env: python: No such file or directory </code></pre> <p>Can anyone make a recommendation for what I'm doing wrong please? Thanks</p>
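<p><code>./manage.py</code> runs through the script's <code>#!/usr/bin/env python</code> shebang, and <code>env</code> never sees shell aliases, hence <code>env: python: No such file or directory</code> on a Mac that ships no bare <code>python</code> binary. Invoking the interpreter explicitly sidesteps the alias:</p> <pre class="lang-bash prettyprint-override"><code>python3 manage.py runserver
</code></pre>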
<python><django><django-ninja>
2025-05-12 10:27:03
1
668
overbyte
79,617,460
2,902,280
What happens when you call a matplotlib continuous colormap with value outside of the (0, 1) range?
<p>The classical usage for a continuous colorbar such as viridis is to use <code>cm(val)</code> with <code>val</code> bewteen 0 and 1.</p> <p>I can't figure out what's returned when you call it with an argument outside of the (0, 1) range, i.e. I'd expect to get the value corresponding to 1, but it's not the case:</p> <pre class="lang-py prettyprint-override"><code>In [34]: viridis = plt.get_cmap(&quot;viridis&quot;) In [35]: viridis(1) Out[35]: (np.float64(0.26851), np.float64(0.009605), np.float64(0.335427), np.float64(1.0)) In [36]: viridis(2) Out[36]: (np.float64(0.269944), np.float64(0.014625), np.float64(0.341379), np.float64(1.0)) In [37]: viridis(3) Out[37]: (np.float64(0.271305), np.float64(0.019942), np.float64(0.347269), np.float64(1.0)) </code></pre> <p>Similarly, what happens when you call a discrete cmap, e.g. tab10, with a float argument outside of the (0, 1) range? If it's inside [0, 1], then I found out that this is treated as <code>int(val * cm.N)</code>, but if it's outside this no longer holds.</p>
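<p>From matplotlib's <code>Colormap.__call__</code> behavior, as far as I can tell: an <em>integer</em> argument indexes the lookup table directly (so <code>viridis(1)</code>, <code>viridis(2)</code>, … step through the 256 LUT entries near the bottom of the map, matching the output above), while a <em>float</em> is interpreted on [0, 1] and anything outside is clipped to the under/over colors, which default to the two extremes:</p> <pre class="lang-py prettyprint-override"><code>viridis = plt.get_cmap('viridis')
viridis(1)                      # int: LUT entry 1 of viridis.N == 256
viridis(1.0)                    # float: the top color
viridis(2.0) == viridis(1.0)    # True: floats &gt; 1 clip to the 'over' color
viridis(-3.0) == viridis(0.0)   # True: floats &lt; 0 clip to the 'under' color
</code></pre>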
<python><matplotlib><colormap>
2025-05-12 08:24:47
1
13,258
P. Camilleri
79,617,158
1,299,026
Data access object not finding self attributes
<p>I have a data access object in Python/SQLite3 defined as:</p> <pre><code>import sqlite3 class Dao: VEHICLE_TABLE_SCHEMA = &quot;&quot;&quot; CREATE TABLE IF NOT EXISTS vehicles ( vehicle_key integer NOT NULL PRIMARY KEY AUTOINCREMENT, vehicle_description TEXT NOT NULL ) &quot;&quot;&quot; VEHICLE_INSERT_SQL = &quot;INSERT INTO vehicles (vehicle_description) VALUES (?)&quot; VEHICLE_LIST_SQL = &quot;SELECT * FROM vehicles&quot; MILAGE_TABLE_SCHEMA = &quot;&quot;&quot; CREATE TABLE IF NOT EXISTS milage ( milage_date DATE NOT NULL PRIMARY KEY, vehicle_id INTEGER NOT NULL, milage integer NOT NULL ) &quot;&quot;&quot; MILAGE_INSERT_SQL = &quot;INSERT INTO milage ( milage_date , vehicle_id , milage ) VALUES (? , ? , ? )&quot; def __init__(self, dbname = &quot;vehicles.sqlite3&quot; ): # self.conn = sqlite3.connect(dbname, autocommit=True) self.create_table_if_missing( self.MILAGE_TABLE_SCHEMA ) self.create_table_if_missing( self.VEHICLE_TABLE_SCHEMA ) self.dbname = dbname def create_table_if_missing(self , sql ): #Creates the 'vehicle' table in the database if it doesn't exist.&quot;&quot;&quot; with sqlite3.connect(self.dbname) as conn: cursor = conn.cursor() cursor.execute( sql ) def add_vehicle(self , description ): with sqlite3.connect(self.dbname) as conn: cursor = conn.cursor() cursor.execute( self.VEHICLE_INSERT_SQL , ( description , ) ) def add_milage(self , milage_date , vehicle_id , milage ): with sqlite3.connect(self.dbname) as conn: cursor = conn.cursor() cursor.execute( self.MILAGE_INSERT_SQL , ( milage_date , vehicle_id , milage ) ) def list_vehicles(self): with sqlite3.connect(self.dbname) as conn: cursor = conn.cursor() cursor.execute( self.VEHICLE_LIST_SQL ) return cursor.fetchall() </code></pre> <p>I try calling it from a class called MilageEntryFrame shown in part below:</p> <pre><code>import customtkinter as ctk from dao import Dao class MilageEntryFrame( ctk.CTkFrame ): def __init__(self, master, **kwargs): super().__init__(master, **kwargs) self.frm_milage_entry = ctk.CTkFrame(self ) self.frm_milage_entry.pack(fill=&quot;x&quot;, padx=5, pady=25) vehicle_records = Dao.list_vehicles( self ) </code></pre> <p>I get an error stating:</p> <pre><code> File &quot;D:\PycharmProjects\VehicleMaint\MilageEntryFrame.py&quot;, line 12, in __init__ vehicle_records = Dao.list_vehicles( self ) File &quot;D:\PycharmProjects\VehicleMaint\dao.py&quot;, line 57, in list_vehicles with sqlite3.connect(self.dbname) as conn: ^^^^^^^^^^^ AttributeError: 'MilageEntryFrame' object has no attribute 'dbname'. Did you mean: '_name'? </code></pre> <p>The first thing to catch my eye is it saying <strong>MilageEntryFrame</strong> has no attribute 'dbname'. This is correct, it doesn't have it, nor should it need it as it is set in the DAO class, not in the <strong>MilageEntryFrame</strong>. I'm guessing it has something to do with my passing <em>self</em> from the MilageEntryFrame class to the DAO class. But, it complains if I don't pass it. Coincidentally, I asked ChatGPT to create a DAO class to do what I'm seeking and it's response was virtually identical to my code. I've converted my code to try using <em>with</em> rather than the try/catch I originally had and that ChatGPT had as well.</p>
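<p>The call <code>Dao.list_vehicles(self)</code> passes the <em>frame</em> as <code>self</code>, so the method looks up <code>dbname</code> on <code>MilageEntryFrame</code>. The fix is to construct a <code>Dao</code> and call the method on that instance. Note also that in the posted <code>__init__</code>, <code>self.dbname</code> must be assigned <em>before</em> the table-creation calls, or the constructor raises the same <code>AttributeError</code> on <code>Dao</code> itself:</p> <pre><code>class Dao:
    def __init__(self, dbname='vehicles.sqlite3'):
        self.dbname = dbname  # set this first: the calls below read it
        self.create_table_if_missing(self.MILAGE_TABLE_SCHEMA)
        self.create_table_if_missing(self.VEHICLE_TABLE_SCHEMA)

# in MilageEntryFrame.__init__:
dao = Dao()
vehicle_records = dao.list_vehicles()
</code></pre>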
<python><python-3.x>
2025-05-12 03:48:23
1
3,054
Todd
79,617,120
1,942,868
Orderby and distinct at the same time
<p>I have a table like this</p> <pre><code>id sku 1 C 2 B 3 C 4 A </code></pre> <p>In the end I want to get the rows like this.</p> <pre><code>1 C 2 B 4 A </code></pre> <p>At first I use <code>distinct</code></p> <pre><code> queryset = self.filter_queryset(queryset.order_by('sku').distinct('sku')) </code></pre> <p>It returns the rows like this,</p> <pre><code>4 A 1 C 2 B </code></pre> <p>Now I want to sort this by id,</p> <pre><code>queryset = queryset.order_by('id') </code></pre> <p>However, it shows an error like this:</p> <pre><code>django.db.utils.ProgrammingError: SELECT DISTINCT ON expressions must match initial ORDER BY expressions LINE 1: SELECT DISTINCT ON (&quot;defapp_log&quot;.&quot;sku&quot;) &quot;d... </code></pre>
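<p>PostgreSQL requires the <code>DISTINCT ON</code> columns to lead the <code>ORDER BY</code>, so both orderings cannot live on one queryset. A common workaround (sketched against a hypothetical <code>Log</code> model, since the model name isn't shown) is to pick the surviving ids in a subquery and re-sort outside it:</p> <pre><code>first_per_sku = (
    Log.objects.order_by('sku', 'id')  # lowest id wins within each sku
               .distinct('sku')
               .values('id')
)
queryset = Log.objects.filter(id__in=first_per_sku).order_by('id')
</code></pre>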
<python><django><postgresql>
2025-05-12 02:43:27
1
12,599
whitebear
79,617,100
1,440,565
Simple typer script gives TypeError
<p>I think typer 0.15.3 (maybe earlier) is broken. I don't make this claim lightly. But if I create a very basic script, I get an error:</p> <p>pyproject.toml:</p> <pre><code>[project] name = &quot;typer-example&quot; version = &quot;0.1.0&quot; description = &quot;Add your description here&quot; readme = &quot;README.md&quot; requires-python = &quot;&gt;=3.13&quot; dependencies = [ &quot;typer&gt;=0.15.3&quot;, ] </code></pre> <p>main.py:</p> <pre class="lang-py prettyprint-override"><code>import typer def main(name: str): print(f&quot;Hello, {name}!&quot;) typer.run(main) </code></pre> <pre><code>❯ uv run python main.py # ... stack trace TypeError: TyperArgument.make_metavar() takes 1 positional argument but 2 were given </code></pre> <p>I expect default help output rather than a TypeError. Am I missing something? Or is <code>typer</code> broken as of 0.15.3?</p>
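<p>If I recall the ecosystem timing correctly, this traceback matches click 8.2.0 changing the <code>make_metavar()</code> signature (it now receives a <code>ctx</code> argument), which typer 0.15.x does not expect. So a plausible fix, until a compatible typer release is available, is pinning click below 8.2 in pyproject.toml:</p> <pre><code>dependencies = [
    &quot;typer&gt;=0.15.3&quot;,
    &quot;click&lt;8.2&quot;,  # hypothesis: keep the pre-8.2 make_metavar() signature
]
</code></pre>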
<python><typer>
2025-05-12 02:18:31
2
83,954
Code-Apprentice
79,617,084
654,019
creating Docker image for Buildozer on WSL (Ubuntu 22.04) fails
<p>I am trying to build Docker image for <a href="https://buildozer.readthedocs.io/" rel="nofollow noreferrer">Buildozer</a> as explained here: <a href="https://github.com/kivy/buildozer/tree/master?tab=readme-ov-file" rel="nofollow noreferrer">https://github.com/kivy/buildozer/tree/master?tab=readme-ov-file</a></p> <p>but I am getting this error:</p> <pre><code>sudo docker build --tag=kivy/buildozer . [+] Building 123.1s (13/13) FINISHED docker:default =&gt; [internal] load build definition from Dockerfile 0.0s =&gt; =&gt; transferring dockerfile: 2.34kB 0.0s =&gt; [internal] load metadata for docker.io/library/ubuntu:22.04 1.7s =&gt; [internal] load .dockerignore 0.1s =&gt; =&gt; transferring context: 2B 0.0s =&gt; [internal] load build context 0.3s =&gt; =&gt; transferring context: 16.17MB 0.3s =&gt; [1/9] FROM docker.io/library/ubuntu:22.04@sha256:67cadaff1dca187079fce41360d5a7eb6f7dcd3745e53c79ad5efd8563118240 4.9s =&gt; =&gt; resolve docker.io/library/ubuntu:22.04@sha256:67cadaff1dca187079fce41360d5a7eb6f7dcd3745e53c79ad5efd8563118240 0.0s =&gt; =&gt; sha256:67cadaff1dca187079fce41360d5a7eb6f7dcd3745e53c79ad5efd8563118240 6.69kB / 6.69kB 0.0s =&gt; =&gt; sha256:899ec23064539c814a4dbbf98d4baf0e384e4394ebc8638bea7bbe4cb8ef4e12 424B / 424B 0.0s =&gt; =&gt; sha256:c42dedf797ba5e7e37e744cdd998e1db046375c702d6dc8a822b422189b019bb 2.30kB / 2.30kB 0.0s =&gt; =&gt; sha256:215ed5a638430309375291c48a01872859a8dbf1331e54ba0af221918eb8ce2e 29.53MB / 29.53MB 3.2s =&gt; =&gt; extracting sha256:215ed5a638430309375291c48a01872859a8dbf1331e54ba0af221918eb8ce2e 1.3s =&gt; [2/9] RUN apt update -qq &gt; /dev/null &amp;&amp; DEBIAN_FRONTEND=noninteractive apt install -qq --yes --no-install-recommends locales 12.5s =&gt; [3/9] RUN apt update -qq &gt; /dev/null &amp;&amp; DEBIAN_FRONTEND=noninteractive apt install -qq --yes --no-install-recommends autoco 100.9s =&gt; [4/9] RUN useradd --create-home --shell /bin/bash user 0.5s =&gt; [5/9] RUN usermod -append --groups sudo user 0.6s =&gt; [6/9] RUN echo &quot;%sudo ALL=(ALL) NOPASSWD: ALL&quot; &gt;&gt; /etc/sudoers 0.7s =&gt; [7/9] WORKDIR /home/user/hostcwd 0.1s =&gt; [8/9] COPY --chown=user:user . /home/user/src 0.3s =&gt; ERROR [9/9] RUN pip3 install --user --upgrade &quot;Cython&lt;3.0&quot; wheel pip /home/user/src 0.8s ------ &gt; [9/9] RUN pip3 install --user --upgrade &quot;Cython&lt;3.0&quot; wheel pip /home/user/src: 0.718 ERROR: Directory '/home/user/src' is not installable. Neither 'setup.py' nor 'pyproject.toml' found. ------ Dockerfile:83 -------------------- 81 | 82 | # installs buildozer and dependencies 83 | &gt;&gt;&gt; RUN pip3 install --user --upgrade &quot;Cython&lt;3.0&quot; wheel pip ${SRC_DIR} 84 | 85 | ENTRYPOINT [&quot;buildozer&quot;] -------------------- ERROR: failed to solve: process &quot;/bin/sh -c pip3 install --user --upgrade \&quot;Cython&lt;3.0\&quot; wheel pip ${SRC_DIR}&quot; did not complete successfully: exit code: 1 </code></pre> <p>The build environment is: WSL with Ubuntu 22.04 Python 3.10.12 Docker version 28.1.1, build 4eba377</p> <p>To build Docker, I downloaded the <code>Dockerfile</code> from the repository and put it into the current directory. 
Then, I used the above command to build it.</p> <p>Any idea why this is not working properly?</p> <p>How can I install Buildozer in WSL (Ubuntu 22.04) to build and run a Python test application on an Android phone?</p> <h1>Edit 1</h1> <p>The content of dockerfile which I downloaded from the above mentioned site and did NOT change it is as follow:</p> <pre><code># Dockerfile for providing buildozer # # Build with: # docker build --tag=kivy/buildozer . # # Or for macOS using Docker Desktop: # # docker buildx build --platform=linux/amd64 -t kivy/buildozer . # # In order to give the container access to your current working directory # it must be mounted using the --volume option. # Run with (e.g. `buildozer --version`): # docker run \ # --volume &quot;$HOME/.buildozer&quot;:/home/user/.buildozer \ # --volume &quot;$PWD&quot;:/home/user/hostcwd \ # kivy/buildozer --version # # Or for interactive shell: # docker run --interactive --tty --rm \ # --volume &quot;$HOME/.buildozer&quot;:/home/user/.buildozer \ # --volume &quot;$PWD&quot;:/home/user/hostcwd \ # --entrypoint /bin/bash \ # kivy/buildozer # # If you get a `PermissionError` on `/home/user/.buildozer/cache`, # try updating the permissions from the host with: # sudo chown $USER -R ~/.buildozer # Or simply recreate the directory from the host with: # rm -rf ~/.buildozer &amp;&amp; mkdir ~/.buildozer FROM ubuntu:22.04 ENV USER=&quot;user&quot; ENV HOME_DIR=&quot;/home/${USER}&quot; ENV WORK_DIR=&quot;${HOME_DIR}/hostcwd&quot; \ SRC_DIR=&quot;${HOME_DIR}/src&quot; \c PATH=&quot;${HOME_DIR}/.local/bin:${PATH}&quot; # configures locale RUN apt update -qq &gt; /dev/null \ &amp;&amp; DEBIAN_FRONTEND=noninteractive apt install -qq --yes --no-install-recommends \ locales &amp;&amp; \ locale-gen en_US.UTF-8 ENV LANG=&quot;en_US.UTF-8&quot; \ LANGUAGE=&quot;en_US.UTF-8&quot; \ LC_ALL=&quot;en_US.UTF-8&quot; # system requirements to build most of the recipes RUN apt update -qq &gt; /dev/null \ &amp;&amp; DEBIAN_FRONTEND=noninteractive apt install -qq --yes --no-install-recommends \ autoconf \ automake \ build-essential \ ccache \ cmake \ gettext \ git \ libffi-dev \ libltdl-dev \ libssl-dev \ libtool \ openjdk-17-jdk \ patch \ pkg-config \ python3-pip \ python3-setuptools \ sudo \ unzip \ zip \ zlib1g-dev # prepares non root env RUN useradd --create-home --shell /bin/bash ${USER} # with sudo access and no password RUN usermod -append --groups sudo ${USER} RUN echo &quot;%sudo ALL=(ALL) NOPASSWD: ALL&quot; &gt;&gt; /etc/sudoers USER ${USER} WORKDIR ${WORK_DIR} COPY --chown=user:user . ${SRC_DIR} # installs buildozer and dependencies RUN pip3 install --user --upgrade &quot;Cython&lt;3.0&quot; wheel pip ${SRC_DIR} ENTRYPOINT [&quot;buildozer&quot;] </code></pre>
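<p>Reading the failing step: <code>COPY --chown=user:user . ${SRC_DIR}</code> followed by <code>pip3 install ... ${SRC_DIR}</code> installs <em>the build context itself</em> as the buildozer package, so the image must be built from a checkout of the buildozer repository (which contains the <code>pyproject.toml</code>/<code>setup.py</code> pip is asking for), not from a directory holding only the Dockerfile. A sketch:</p> <pre><code>git clone https://github.com/kivy/buildozer.git
cd buildozer
sudo docker build --tag=kivy/buildozer .
</code></pre>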
<python><docker><ubuntu><kivy><buildozer>
2025-05-12 01:50:12
0
18,400
mans
79,616,857
3,138,436
Desired frequency in discrete Fourier transform gets scaled by a factor equal to the sample duration
<p>I have written a python script to compute DFT of a simple sin wave having frequency 3. I have taken the following consideration for taking sample of the sin wave</p> <p><strong>sin function for test = sin( 2 * pi * 3 * t )</strong></p> <p><strong>sample_rate = 15</strong></p> <p><strong>time interval = 1/sample_rate = 1/15 = ~ 0.07 second</strong></p> <p><strong>sample_duration = 1 second (for test1) and 2 seconds (for test 2)</strong></p> <p><strong>sample_size = sample_rate * sample_duration = 15*2 = 30 samples</strong></p> <p>I run the same code for sample_duration both 1 and 2 seconds. When sample duration is 1 second, the graph produce shows the presence of frequency=3 present in the sin wave,which is correct.</p> <p><a href="https://i.sstatic.net/FY1IJqVo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FY1IJqVo.png" alt="frequency peak at 3" /></a></p> <p>But if I change the sample duration to 2 second, the graph peaks at frequency= 6, which does not present in the sin wave.But it is a factor of 2 increase of the original frequency (3*2) = 6.</p> <p><a href="https://i.sstatic.net/JNgMso2C.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JNgMso2C.png" alt="frequency peak at 6" /></a></p> <p>And if 3 second is taken as sample duration, graph peaks at 9 second.</p> <p><strong>I was thinking that taking more sample for longer duration will produce finer result, but that is clearly not the case here.</strong></p> <p>code :</p> <pre><code>from sage.all import * import matplotlib.pyplot as plt import numpy as np t = var('t') sample_rate = 15 # will take 100 sample each second interval = 1 / sample_rate # time interval between each reading sample_duration = 1 # take sample over a duration of 1 second sample_size_N = sample_rate*sample_duration #count number of touples in r array, len(r) will give sample size/ total number of sample taken over a specific duration func = sin(3*2*pi*t) time_segment_arr = [] signal_sample_arr= [] # take reading each time interval over sample_duration period for time_segment in np.arange(0,sample_duration,interval): # give discrete value of the signal over specific time interval discrete_signal_value = func(t = time_segment) # push time value into array time_segment_arr.append(time_segment) # push signal amplitude into array signal_sample_arr.append(N(discrete_signal_value)) def construct_discrete_transform_func(): s = '' k = var('k') for n in range(0,sample_size_N,1): s = s+ '+'+str((signal_sample_arr[n])* e^(-(i*2*pi*k*n)/sample_size_N)) return s[1:] #omit the forward + sign dft_func = construct_discrete_transform_func() def calculate_frequency_value(dft_func,freq_val): k = var('k') # SR converts string to sage_symbolic_ring expression &amp; fast_callable() allows to pass variable value to that expression ff = fast_callable(SR(dft_func), vars=[k]) return ff(freq_val) freq_arr = [] amplitude_arr = [] #compute frequency strength per per frequency for l in np.arange(0,sample_size_N,1): freq_value = calculate_frequency_value(dft_func,l) freq_arr.append(l) amplitude_arr.append(N(abs(freq_value))) </code></pre>
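<p>A note on the scaling (assuming the code above is otherwise as intended): DFT bin <code>k</code> corresponds to a physical frequency of <code>k / sample_duration</code> Hz, so with a 2-second window the 3 Hz tone lands in bin 6. A longer capture gives finer frequency <em>resolution</em>; it does not move the tone. Relabeling the x-axis fixes the plots:</p> <pre><code>import numpy as np

# bin index -&gt; frequency in Hz
freq_hz_arr = [k / sample_duration for k in freq_arr]

# equivalently, numpy can supply the bin frequencies directly:
# np.fft.fftfreq(sample_size_N, d=interval)
</code></pre>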
<python><signal-processing><sage><discretization>
2025-05-11 19:47:54
2
9,194
AL-zami
79,616,804
4,075,135
TypeVar as dict key type hint "has no meaning in this context"
<h1><strong>The context:</strong></h1> <p>I have a dict whose keys are arbitrary types, and whose values are Callables which take a string and return an instance of the same type as the key. For example:</p> <pre class="lang-py prettyprint-override"><code>class Hero: def __init__(self, name): self.name = name all_heroes = { &quot;deadpool&quot;: Hero(&quot;wade wilson&quot;), &quot;wolverine&quot;: Hero(&quot;james howlett&quot;) } REGISTERED_CONVERTERS = { Hero: all_heroes.get, int: int, } hero_lookup = REGISTERED_CONVERTERS[Hero] assert hero_lookup(&quot;deadpool&quot;).name == &quot;wade wilson&quot; int_lookup = REGISTERED_CONVERTERS[int] assert int_lookup(&quot;5&quot;) == 5 </code></pre> <h1><strong>The Problem:</strong></h1> <p>I want to properly type hint this dict of converter functions</p> <h1><strong>What's not Working:</strong></h1> <p>This was my first attempt:</p> <pre class="lang-py prettyprint-override"><code>from typing import Callable, TypeVar T = TypeVar(&quot;T&quot;) TypeConverter = Callable[[str], T] REGISTERED_CONVERTERS: dict[type[T], TypeConverter] = { int: int, } </code></pre> <p>Pylance is happy with the <code>T</code> in the <code>TypeConverter</code> alias, but complains about the <code>T</code> for the dict's key:</p> <pre><code>Type variable &quot;T&quot; has no meaning in this context Pylance(reportGeneralTypeIssues) (constant) T: type[T] </code></pre> <p>I thought maybe the <code>TypeConverter</code> alias was causing issues, so then I tried this:</p> <pre class="lang-py prettyprint-override"><code>T = TypeVar(&quot;T&quot;) REGISTERED_CONVERTERS: dict[type[T], Callable[[str], T]] = { int: int, } </code></pre> <p>This is seemingly worse as Pylance now raises the same complaint about both <code>T</code>s</p> <p>What am I doing wrong, or is this a Pylance problem?</p>
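<p>Module-level annotations have no generic scope, so an unbound <code>T</code> is meaningless to the checker there. A common pattern (a sketch, reusing <code>Hero</code> and <code>all_heroes</code> from above) is to type the dict loosely and recover the per-key precision in a small accessor, where <code>T</code> <em>is</em> bound to the function signature:</p> <pre class="lang-py prettyprint-override"><code>from typing import Any, Callable, TypeVar

T = TypeVar('T')

REGISTERED_CONVERTERS: dict[type, Callable[[str], Any]] = {
    Hero: all_heroes.get,
    int: int,
}

def get_converter(key: type[T]) -&gt; Callable[[str], T]:
    return REGISTERED_CONVERTERS[key]

hero_lookup = get_converter(Hero)  # inferred as Callable[[str], Hero]
</code></pre>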
<python><python-typing><pyright>
2025-05-11 18:35:56
2
721
ZachP
79,616,782
1,145,760
How to denote return type of a @classmethod ctor in python?
<pre><code>#!/usr/bin/env python # In this file: a problem with de-serializing an object. import pickle class User: @classmethod def from_bytes(cls, b: bytes) -&gt; User: obj = pickle.loads(b) assert type(obj) == cls, (type(obj), cls) return obj </code></pre> <p>fails with</p> <pre><code>Traceback (most recent call last): File &quot;/tmp/./lqlq.py&quot;, line 7, in &lt;module&gt; class User: ...&lt;4 lines&gt;... return obj File &quot;/tmp/./lqlq.py&quot;, line 9, in User def from_bytes(cls, b: bytes) -&gt; User: ^^^^ NameError: name 'User' is not defined </code></pre> <p>In C you do a forward declaration. What's the alternative in python?</p>
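<p>The class body executes before the name <code>User</code> is bound, so the annotation needs one of the standard escapes: a string literal <code>&quot;User&quot;</code>, <code>from __future__ import annotations</code>, or, on Python 3.11+, <code>typing.Self</code>, which also types subclasses correctly. A sketch:</p> <pre><code>import pickle
from typing import Self  # 3.11+; otherwise: from __future__ import annotations

class User:
    @classmethod
    def from_bytes(cls, b: bytes) -&gt; Self:
        obj = pickle.loads(b)
        assert type(obj) == cls, (type(obj), cls)
        return obj
</code></pre>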
<python><python-typing>
2025-05-11 18:14:42
1
9,246
Vorac
79,616,634
2,583,346
converting jupyter notebook (.ipynb) to HTML using nbconvert - plotly figures not showing
<p>I have the following code in a notebook cell:</p> <pre><code>import plotly.express as px fig = px.scatter(x=[1,2,3], y=[1,2,3]) fig.show() </code></pre> <p>and I'm trying to convert it to HTML, like this:</p> <pre><code>jupyter nbconvert --to html --execute try.ipynb </code></pre> <p>The HTML I get does not display the plotly figure. It does display markdown, code, and figures generated using other packages.<br /> Based on previous answers, I've tried adding:</p> <pre><code>import plotly.io as pio pio.renderers.default = &quot;notebook_connected&quot; </code></pre> <p>and also running:</p> <pre><code>jupyter nbconvert --to html --execute --template classic --embed-images try.ipynb </code></pre> <p>but nothing worked.<br /> Interestingly, if I set the default renderer to svg:</p> <pre><code>pio.renderers.default = &quot;svg&quot; </code></pre> <p>it shows in the HTML. But I want my figures to be interactive.</p> <p>Any idea what might be going wrong?<br /> Please note that the code above is just a minimal example. In practice I have multiple figures, tables, and markdown that I wish to include in a HTML report.</p>
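<p>One workaround I've seen (a sketch, not the only route): emit the figure as an HTML fragment yourself, so the interactive plotly.js payload becomes part of the cell output that nbconvert serializes regardless of renderer:</p> <pre><code>import plotly.express as px
from IPython.display import HTML

fig = px.scatter(x=[1, 2, 3], y=[1, 2, 3])
HTML(fig.to_html(include_plotlyjs='cdn', full_html=False))
</code></pre>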
<python><jupyter-notebook><plotly><jupyter><nbconvert>
2025-05-11 15:27:43
2
1,278
soungalo
79,616,351
5,783,373
Facing issue using a model hosted on HuggingFace Server and talking to it using API_KEY
<p>I am trying to create a simple langchain app on text-generation using API to communicate with models on HuggingFace servers.</p> <p>I created a “.env” file and stored by KEY in the variable: “HUGGINGFACEHUB_API_TOKEN” I also checked it, API token is valid.</p> <p>Post that, I tried running this code snippet:</p> <pre><code>from langchain_huggingface import ChatHuggingFace, HuggingFaceEndpoint from dotenv import load_dotenv load_dotenv() llm = HuggingFaceEndpoint( repo_id=&quot;TinyLlama/TinyLlama-1.1B-Chat-v1.0&quot;, task=&quot;text-generation&quot; ) model = ChatHuggingFace(llm=llm) result = model.invoke(&quot;What is the capital of India&quot;) print(result.content) </code></pre> <p>This is giving an error. I tried multiple things around it, but nothing worked.</p> <p>Here is the error log:</p> <pre class="lang-none prettyprint-override"><code>Traceback (most recent call last): File “C:\Users\SS\Desktop\Camp_langchain_models\2.ChatModels\2_chatmodel_hf_api.py”, line 13, in result = model.invoke(“What is the capital of India”) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File “C:\Users\SS\Desktop\Camp_langchain_models\venv\Lib\site-packages\langchain_core\language_models\chat_models.py”, line 370, in invoke self.generate_prompt( File “C:\Users\SS\Desktop\Camp_langchain_models\venv\Lib\site-packages\langchain_core\language_models\chat_models.py”, line 947, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File “C:\Users\SS\Desktop\Camp_langchain_models\venv\Lib\site-packages\langchain_core\language_models\chat_models.py”, line 766, in generate self._generate_with_cache( File “C:\Users\SS\Desktop\Camp_langchain_models\venv\Lib\site-packages\langchain_core\language_models\chat_models.py”, line 1012, in _generate_with_cache result = self._generate( ^^^^^^^^^^^^^^^ File “C:\Users\SS\Desktop\Camp_langchain_models\venv\Lib\site-packages\langchain_huggingface\chat_models\huggingface.py”, line 574, in generate answer = self.llm.client.chat_completion(messages=message_dicts, **params) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File “C:\Users\SS\Desktop\Camp_langchain_models\venv\Lib\site-packages\huggingface_hub\inference_client.py”, line 886, in chat_completion provider_helper = get_provider_helper( ^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\SS\Desktop\Camp_langchain_models\venv\Lib\site-packages\huggingface_hub\inference_providers_init.py&quot;, line 165, in get_provider_helper provider = next(iter(provider_mapping)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ StopIteration </code></pre> <p>I am new to it. Any guidance around this is much appreciated. Thank you.</p>
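<p>Reading the traceback, <code>get_provider_helper</code> hits <code>StopIteration</code> on an empty provider mapping, i.e. no inference provider currently serves that repo, which would make this a model-availability problem rather than a token problem. A hedged way to probe that outside langchain (assuming this diagnosis is right) is to call the hub client directly; if it fails the same way, try a model that Inference Providers do serve:</p> <pre class="lang-py prettyprint-override"><code>from huggingface_hub import InferenceClient

client = InferenceClient(model='TinyLlama/TinyLlama-1.1B-Chat-v1.0')
out = client.chat_completion(
    messages=[{'role': 'user', 'content': 'What is the capital of India'}]
)
print(out.choices[0].message.content)
</code></pre>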
<python><langchain><huggingface><huggingface-hub>
2025-05-11 09:34:46
2
345
Sri2110
79,616,310
20,589,631
Firebase admin taking an infinite time to work
<p>I recently started using firebase admin in python. I created this example script:</p> <pre class="lang-py prettyprint-override"><code>import firebase_admin from firebase_admin import credentials from firebase_admin import firestore cred = credentials.Certificate(&quot;./services.json&quot;) options = { &quot;databaseURL&quot;: 'https://not_revealing_my_url.com' } app = firebase_admin.initialize_app(cred, options) client = firestore.client(app) print(client.document(&quot;/&quot;).get()) </code></pre> <p>I already activated firestore and I placed services.json (which I generated from &quot;Service Accounts&quot; on my firebase project) in the same directory as my main.py file.</p> <p>From all sources I could find, this should've allowed me to use firestore, but for some reason the app takes an infinitely long time to respond.</p> <p>I tried looking through the stack after interrupting the script, and the only major thing I could find was:</p> <pre><code>grpc._channel._MultiThreadedRendezvous: &lt;_MultiThreadedRendezvous of RPC that terminated with: status = StatusCode.UNAVAILABLE details = &quot;failed to connect to all addresses; last error: UNKNOWN: ipv6:%5B::1%5D:8081: tcp handshaker shutdown&quot; debug_error_string = &quot;UNKNOWN:Error received from peer {grpc_message:&quot;failed to connect to all addresses; last error: UNKNOWN: ipv6:%5B::1%5D:8081: tcp handshaker shutdown&quot;, grpc_status:14, created_time:&quot;2025-05-11T08:47:32.8676384+00:00&quot;}&quot; </code></pre> <p>I am assuming this is a common issue, but I failed to find any solution online. Can someone help me out?</p> <p>EDIT: I had the firebase emulator set up from a previous job. It seems firebase_admin tried to use that emulator, which was inactive. I just had to remove it from my PATH.</p>
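<p>Matching the EDIT above: the <code>ipv6:[::1]:8081</code> address in the gRPC error looks like a local Firestore <em>emulator</em>, and the admin SDK targets the emulator whenever <code>FIRESTORE_EMULATOR_HOST</code> is set in the environment. A sketch of ruling that out before initializing the app:</p> <pre class="lang-py prettyprint-override"><code>import os

# drop a stale emulator setting so the SDK talks to production Firestore
os.environ.pop('FIRESTORE_EMULATOR_HOST', None)
</code></pre>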
<python><firebase><google-cloud-firestore><firebase-admin>
2025-05-11 08:51:55
1
391
ori raisfeld
79,616,267
3,406,207
ModuleNotFoundError: No module named 'itk'
<p>On Apple M3, Sequoia 15.4.1: I am trying to run a python script on jupyter notebook, which is requiring the package 'itk':</p> <pre><code>import itk ModuleNotFoundError: No module named 'itk' </code></pre> <p>I have installed 'itk' but somehow the system cannot seem to find it?</p> <pre><code>!python3 --version Python 3.12.0 !pip3 install itk Requirement already satisfied: itk in /Users/&lt;user&gt;/.pyenv/versions/myenv/lib/python3.12/site-packages (5.4.3) </code></pre> <p>I have tried importing the module in the command line and that seems to work fine. So it must be related to jupyter?</p>
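<p>A common cause is the Jupyter kernel running a different interpreter than the shell's <code>pip3</code>. Installing against the kernel's own <code>sys.executable</code> removes the ambiguity; a sketch to run in a notebook cell:</p> <pre><code>import sys
!{sys.executable} -m pip install itk
import itk
</code></pre>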
<python><macos><jupyter-notebook><itk>
2025-05-11 07:50:27
3
345
user3406207
79,616,249
10,470,463
How to suppress html table cell overflow when merging cells using rowspan?
<p>Using Python, I generated the html for a table. The table is a teacher timetable.</p> <p>The data comes from a <code>.csv</code>, looks like this for the first and second row:</p> <pre><code>Start period,Course,Lesson number,Day,Start time,End time,Instructor,Day number,Number of periods 1,CHEM1250,L01,Monday,8:00,9:30,John,1,3 </code></pre> <p>The first column is the class Periods, starting at 1, and the other columns are Monday to Friday</p> <p>For example, instructor John has 3 periods starting on Monday at 8am.</p> <p>So I set rowspan=&quot;3&quot; for cell Monday Period 1.</p> <p>If I display the table without borders, everything looks great. But if I set borders, the cells that are merged end up in a 7th column which has no header.</p> <p>How can I suppress that?</p> <p>This is the html for the table:</p> <pre><code>&lt;table align=&quot;center&quot; border=&quot;1&quot;&gt; &lt;br&gt; &lt;caption&gt;Timetable&lt;/caption&gt; &lt;tr&gt;&lt;th&gt;Period&lt;/th&gt;&lt;th&gt;Monday&lt;/th&gt;&lt;th&gt;Tuesday&lt;/th&gt;&lt;th&gt;Wednesday&lt;/th&gt;&lt;th&gt;Thursday&lt;/th&gt;&lt;th&gt;Friday&lt;/th&gt;&lt;/tr&gt; &lt;tr&gt;&lt;td&gt;1 (8:00-8:30)&lt;/td&gt;&lt;td rowspan=3&gt;CHEM1250 L01&lt;br&gt;Instructor: John&lt;br&gt;Room: 201&lt;br&gt;Periods: 3&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td rowspan=3&gt;BIOL1150 L01&lt;br&gt;Instructor: Michel&lt;br&gt;Room: 201&lt;br&gt;Periods: 3&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;/tr&gt; &lt;tr&gt;&lt;td&gt;2 (8:30-9:00)&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;/tr&gt; &lt;tr&gt;&lt;td&gt;3 (9:00-9:30)&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td rowspan=3 &gt;PHYS1200 L01&lt;br&gt;Instructor: Jenna&lt;br&gt;Room: 203&lt;br&gt;Periods: 3&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;/tr&gt; &lt;tr&gt;&lt;td&gt;4 (9:30-10:00)&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;/tr&gt; &lt;tr&gt;&lt;td&gt;5 (10:00-10:30)&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;/tr&gt; &lt;tr&gt;&lt;td&gt;6 (10:30-11:00)&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;/tr&gt; &lt;tr&gt;&lt;td&gt;7 (11:00-11:30)&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td rowspan=2 &gt;PHYS1200 L02&lt;br&gt;Instructor: Michel&lt;br&gt;Room: 207&lt;br&gt;Periods: 2&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;/tr&gt; &lt;tr&gt;&lt;td&gt;8 (11:30-12:00)&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;/tr&gt; &lt;tr&gt;&lt;td&gt;9 (12:00-12:30)&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;CHEM1250 L02&lt;br&gt;Instructor: John&lt;br&gt;Room: 209&lt;br&gt;Periods: 1&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;/tr&gt; &lt;tr&gt;&lt;td&gt;10 (12:30-13:00)&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;/tr&gt; &lt;tr&gt;&lt;td&gt;11 (13:00-13:30)&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td rowspan=3 &gt;BIOL1150 L01&lt;br&gt;Instructor: Alice&lt;br&gt;Room: 211&lt;br&gt;Periods: 3&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td rowspan=2 &gt;CHEM1250 L03&lt;br&gt;Instructor: Bob&lt;br&gt;Room: 211&lt;br&gt;Periods: 2&lt;/td&gt;&lt;/tr&gt; 
&lt;tr&gt;&lt;td&gt;12 (13:30-14:00)&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;/tr&gt; &lt;tr&gt;&lt;td&gt;13 (14:00-14:30)&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;/tr&gt; &lt;tr&gt;&lt;td&gt;14 (14:30-15:00)&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;/tr&gt; &lt;tr&gt;&lt;td&gt;15 (15:00-15:30)&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;/tr&gt; &lt;tr&gt;&lt;td&gt;16 (15:30-16:00)&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;/tr&gt; &lt;tr&gt;&lt;td&gt;17 (16:00-16:30)&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;/tr&gt; &lt;/table&gt; </code></pre> <p>I tried this in Vivaldi Browser and Firefox.</p>
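<p>In HTML, a cell covered by a <code>rowspan</code> from above must simply be omitted from its row; the spanned rows here still emit all five day cells, so the extras spill into a phantom seventh column once borders make them visible. A sketch of the first three corrected rows (the Python generator should skip emitting a <code>&lt;td&gt;</code> for any slot covered by an active rowspan):</p> <pre><code>&lt;tr&gt;&lt;td&gt;1 (8:00-8:30)&lt;/td&gt;&lt;td rowspan=3&gt;CHEM1250 L01 ...&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td rowspan=3&gt;BIOL1150 L01 ...&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;!-- periods 2-3: Monday and Wednesday cells omitted (still spanned) --&gt;
&lt;tr&gt;&lt;td&gt;2 (8:30-9:00)&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;3 (9:00-9:30)&lt;/td&gt;&lt;td rowspan=3&gt;PHYS1200 L01 ...&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;/tr&gt;
</code></pre>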
<python><html>
2025-05-11 07:12:58
0
511
Pedroski
79,616,129
3,138,436
One frequency is absent from the Fourier transform representation of the sum of two cosine waves of the same amplitude and phase
<p>I have wrote a general python code in Sage-math to find out the Fourier transform of the addition of two different frequency cosine wave in order to plot the frequency domain in a graph. The function :</p> <pre><code> f(t) = cos(2*pi*f1*t)+cos(2*pi*f2*t) # integrate it from -infinity to +infinity </code></pre> <p>To find the integral, I have used Sympy pacakge. Below is a sample of the resultant piecewise functions I am getting after performing integration of the Fourier transform frequency function:</p> <p><a href="https://i.sstatic.net/BHVYFQ7z.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BHVYFQ7z.png" alt="Piecewise functions" /></a></p> <p>Then in order to check which frequencies are present in the resultant function, I search for existing frequency between -10 to +10 on the Fourier transform function .</p> <p>Here <strong>f1</strong> and <strong>f2</strong> are respective frequency of the two cosine wave. When f1 = 8 and f2 = 4, I get the following graph ( both frequency (8 &amp; 4) is present in the graph as expected):</p> <p><a href="https://i.sstatic.net/XGW9AYcg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XGW9AYcg.png" alt="cos(2pi8t) + cos(2pi4t)" /></a></p> <p>But if I change any one of f1 or f2 to 7, Then only 7 is present on the graph. Here you can see that frequency 4 is absent from the graph. I have found that this issue only appears if frequency 7 is present in any one of the two cosine functions.</p> <p><a href="https://i.sstatic.net/2fQBHwgM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2fQBHwgM.png" alt="cos(2pi7t)+cos(2pi4t)" /></a></p> <p>Here is the code:</p> <pre><code>from sage.all import * import numpy as np import sympy as sy from sympy import lambdify,re, im import matplotlib.pyplot as plt t = sy.Symbol('t') f = sy.Symbol('f') # Find Fourier Transform F = sy.integrate((cos(2*pi*7*t)+cos(2*pi*4*t))*(e^(-i*2*pi*f*t)), (t,-oo,oo)) func = F.args[0][0] # extract 1st function from piecewise F #lambdify to evaluate function at many points sympy_lambda = lambdify([f],func) freq_arr = [] amplitude_arr = [] # here l (L) is frequency in x-coordinate, testing for frequency -10 to 10 for l in np.arange(-10, 10, 0.01): sympy_complex = sympy_lambda(l) # get sympy complex number # get real part of sympy complex number real_part = re(sympy_complex) # get imaginary part of sympy complex number imag_part = im(sympy_complex) # make a sage complex number using from sympy real and imaginary sage_complex = ComplexNumber(real_part, imag_part) freq_arr.append(l) amplitude_arr.append(abs(sage_complex)) </code></pre>
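<p>A hedged observation: <code>F.args[0][0]</code> keeps only the <em>first branch</em> of the Piecewise that <code>integrate</code> returns, and which branch comes first can change with the frequencies involved, so one cosine's contribution can silently disappear. Evaluating the result without cherry-picking a branch avoids that; sympy also offers a direct transform (a sketch; depending on the sympy version this may return DiracDelta terms or an unevaluated transform):</p> <pre><code>import sympy as sy

t, f = sy.symbols('t f', real=True)
expr = sy.cos(2*sy.pi*7*t) + sy.cos(2*sy.pi*4*t)
F = sy.fourier_transform(expr, t, f)
print(F)  # ideally DiracDelta spikes at |f| = 7 and |f| = 4
</code></pre>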
<python><numpy><signal-processing><sage>
2025-05-11 03:24:33
2
9,194
AL-zami
79,615,990
5,339,264
How to concatenate n rows of content to the current row in a rolling window in pandas?
<p>I'm looking to transform a dataframe containing</p> <p><code>[[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]]</code></p> <p>into <code>[[1, 2, 3, []], [4, 5, 6, [1, 2, 3, 4, 5, 6]], [7, 8, 9, [4, 5, 6, 7, 8, 9]], [10, 11, 12, [7, 8, 9, 10, 11, 12]]]</code></p> <p>So far the only working solution I've come up with is:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd import numpy as np # Create the DataFrame df = pd.DataFrame(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]])) # Initialize an empty list to store the result result = [] # Iterate over the rows in the DataFrame for i in range(len(df)): # If it's the first row, append the row with an empty list if i == 0: result.append(list(df.iloc[i]) + [[]]) # If it's not the first row, concatenate the current and previous row else: current_row = list(df.iloc[i]) previous_row = list(df.iloc[i-1]) concatenated_row = current_row + [previous_row + current_row] result.append(concatenated_row) # Print the result print(result) </code></pre> <p>Is there no built-in Pandas function that can roll a window and add the results to the current row, like the above can?</p>
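<p>As far as I know there is no built-in that concatenates raw rows in a rolling window (<code>rolling</code> aggregates to scalars), but the loop collapses to a single comprehension over the row lists; a sketch reproducing the output above:</p> <pre class="lang-py prettyprint-override"><code>rows = df.values.tolist()
result = [rows[0] + [[]]] + [
    rows[i] + [rows[i - 1] + rows[i]] for i in range(1, len(rows))
]
</code></pre>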
<python><pandas>
2025-05-10 22:35:10
4
530
Pebermynte Lars
79,615,912
15,029,316
Unexpected error: No such file or directory: 'ffprobe' while using pydub on DigitalOcean
<p>I have an app that processes audio files and converts them to 'wav' format. This works locally on my Mac, however in DigitalOcean, when recording audio, i get the below error, followed by a 500:</p> <p><code>warn(&quot;Couldn't find ffprobe or avprobe - defaulting to ffprobe, but may not work&quot;, RuntimeWarning)</code> <code>Unexpected error: [Errno 2] No such file or directory: 'ffprobe'</code></p> <p>I've tried including binary files in my code for <code>ffmpeg</code> and <code>ffprobe</code> but got the same error. ffmpeg and ffprobe binaries are in <code>/bin</code> in the root of my project:</p> <pre><code>from pydub import AudioSegment if platform.system() == &quot;Darwin&quot;: # mac pass elif platform.system() == &quot;Linux&quot;: # digitalocean) AudioSegment.converter = os.path.join(&quot;bin&quot;, &quot;ffmpeg-linux&quot;) AudioSegment.ffprobe = os.path.join(&quot;bin&quot;, &quot;ffprobe-linux&quot;) else: raise EnvironmentError( &quot;Unsupported platform. Only macOS and Linux are supported.&quot;) </code></pre>
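<p>Worth noting: the warning comes from pydub probing <code>PATH</code> for <code>ffprobe</code>, and parts of pydub shell out by bare name regardless of the attributes set above, so the most reliable fix may be installing the real packages on the server instead of shipping binaries (a sketch for a Debian/Ubuntu-based droplet or image):</p> <pre><code>apt-get update &amp;&amp; apt-get install -y ffmpeg   # provides ffmpeg and ffprobe on PATH
</code></pre>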
<python><ffmpeg><digital-ocean><pydub><audiosegment>
2025-05-10 20:54:10
1
329
Cjmaret