Dataset columns (dtype and observed minimum/maximum; string columns report length ranges):

QuestionId          int64     74.8M to 79.8M
UserId              int64     56 to 29.4M
QuestionTitle       string    lengths 15 to 150
QuestionBody        string    lengths 40 to 40.3k
Tags                string    lengths 8 to 101
CreationDate        string    dates 2022-12-10 09:42:47 to 2025-11-01 19:08:18
AnswerCount         int64     0 to 44
UserExpertiseLevel  int64     301 to 888k
UserDisplayName     string    lengths 3 to 30
79,586,784
6,368,549
Sorting Dictionary By Date
<p>I have a dictionary which contains several years of data. The dictionary is basically this:</p> <pre><code>data = { '2024-09-20': 13.37, '2024-09-21': 14.92, '2024-09-22': 15.17, '2024-09-23': 12.15 } </code></pre> <p>It contains about 10 years of data or so. What I need to do is first limit the range to just the last 5 years, and then choose the largest value.</p> <p>So, I need to sort it by the key, which is the date in DESC order, then remove anything greater than 5 years, then choose the largest value.</p>
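<p>One way to approach this, sketched under the assumption that the keys are ISO-format <code>YYYY-MM-DD</code> strings as in the example (so plain string comparison orders them chronologically) and that "last 5 years" can be approximated as 5 × 365 days:</p>
<pre><code>from datetime import date, timedelta

data = {'2024-09-20': 13.37, '2024-09-21': 14.92, '2024-09-22': 15.17, '2024-09-23': 12.15}

# ISO date strings sort the same way as the dates they represent,
# so a string comparison against the cutoff is enough.
cutoff = (date.today() - timedelta(days=5 * 365)).isoformat()
recent = {d: v for d, v in data.items() if d >= cutoff}

best_date = max(recent, key=recent.get)   # key whose value is largest
best_value = recent[best_date]
</code></pre>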
<python><sorting>
2025-04-22 15:03:42
2
853
Landon Statis
79,586,698
1,858,864
Migrate project with rpm dependencies from Centos 7 to AlmaLinux 9
<p>I have a legacy python/django project with a lot of x86_64 dependencies (~15), which are installed via rpm under Centos 7. Since Centos7 EOL happened in the previous year, I need to migrate this project to the AlmaLinux 9 (I cannot change OS, it is only AlmaLinux 8 or 9).</p> <p>Since it's a legacy project, I really want to stick to the current versions of all libraries everywhere it is possible (or take the closest one), because it's very hard to retest new versions with the project due to limited development resources.</p> <p>So I'm trying to find the easiest and most robust way to migrate all those dependencies. After thinking though different options it looks like I have these choices:</p> <ol> <li>Try to install RPMs from Centos 7 into AlmaLinux9 directly. I'm not sure if it could work, since those packages are compiled for Centos.</li> <li>Find a repo with backports with old versions for AlmaLinux. It would be great but I wasn't able to google them.</li> <li>Find the sources of all those libs and compile them for Alma9 manually. It looks like most correct and safe way, but I'm not familiar with compiling with all those flags, building rpm-s, and so on.</li> </ol> <p>So the question is: am I missing something or I really have only these ways? If so, is there some &quot;official&quot; alma9 repo with backports?</p> <p>Here are my libs and their versions:</p> <ol> <li>MySQL-python - 1.2.5-1.el7</li> <li>python-pillow - 2.9.0-2</li> <li>cracklib-python - 2.9.0-11.el7</li> <li>libxml2-python - 2.9.1-6.el7_9.7</li> <li>uwsgi - 2.0.15-1</li> <li>python-ldap - 2.4.15-2.el7</li> <li>python-memcached - 1.48-4.el7 (.noarch)</li> <li>python-msgpack 0.4.0-1</li> <li>freetype - 2.8-14.el7_9.1</li> <li>ffmpeg - 7.0.2-2</li> <li>python-markupsafe - 0.9.2-1</li> <li>python-ujson - 1.35-1</li> <li>python-pylzma - 0.4.6-1</li> <li>python-yaml - 5.4.1-1</li> <li>pyOpenSSL - 0.13.1-4.el7</li> <li>python-cryptography - 1.7.2-2.el7</li> </ol>
<python><centos7><almalinux>
2025-04-22 14:23:18
0
6,817
Paul
79,586,568
6,000,623
Stop Otel from tracing its own internal metric exporting activity
<p>I've implemented Otel for my python app sending telemetry to GCP backend. I'm seeing these trace spans for <code>https://otel-collector-xxx.asia-southeast1.run.app/opentelemetry.proto.collector.metrics.v1.MetricsService/Export</code>. I suspect these are traces from the metric export activity.</p> <p>I don't want these traces being collected and I don't want to filter at collector level, rather I want to stop collection at source. Any help is greatly appreciated.</p> <p>My Instrumention at app level is exactly this:</p> <pre><code>resource = Resource.create(attributes={ SERVICE_NAME: &quot;your-service-name&quot;}) tracerProvider = TracerProvider(resource=resource) processor = BatchSpanProcessor(OTLPSpanExporter(endpoint=&quot;&lt;traces-endpoint&gt;/v1/traces&quot;)) tracerProvider.add_span_processor(processor) trace.set_tracer_provider(tracerProvider) reader = PeriodicExportingMetricReader( OTLPMetricExporter(endpoint=&quot;&lt;traces-endpoint&gt;/v1/metrics&quot;)) meterProvider = MeterProvider(resource=resource, metric_readers=[reader]) metrics.set_meter_provider(meterProvider) </code></pre> <p>and I've also added this in my otel config:</p> <pre><code>telemetry: metrics: # disable self metrics level: none address: localhost:8888 </code></pre>
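<p>Those spans are typically produced by HTTP-client auto-instrumentation (for example the <code>requests</code> instrumentation) tracing the exporter's own POSTs, not by the exporter itself. If that is the case here, one hedged option is to exclude the collector endpoint from that instrumentation at the source; the URL fragment below is only an example and must match your collector host, and the same filter can reportedly be set through the <code>OTEL_PYTHON_REQUESTS_EXCLUDED_URLS</code> environment variable:</p>
<pre><code>from opentelemetry.instrumentation.requests import RequestsInstrumentor

# Keep instrumenting outgoing HTTP calls, but skip the collector's own
# export endpoint so the exporter's POSTs never become spans.
RequestsInstrumentor().instrument(
    excluded_urls="otel-collector-xxx.asia-southeast1.run.app"
)
</code></pre>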
<python><google-cloud-platform><google-app-engine><trace><open-telemetry>
2025-04-22 13:15:44
1
479
Deepanshu Kalra
79,586,540
10,232,932
Apache Spark left join python databricks
<p>I have three dataframes in Databricks and am trying to run a join on them (Apache Spark functions). I am mainly used to pandas dataframe joins. My current code is:</p> <pre><code>joined_df = df_1.join(df_2, df_1[&quot;RECONTRACT&quot;] == df_2[&quot;RECONTRACT&quot;], &quot;left&quot;) joined_df = joined_df.join(df_3, joined_df[&quot;RENTOBJECT&quot;] == df_3[&quot;RENTOBJECT&quot;], &quot;left&quot;) </code></pre> <p>So my goal is to join df_2 and df_3 to df_1 (both left joins), but I am getting the error:</p> <blockquote> <p>[AMBIGUOUS_REFERENCE] Reference <code>RENTOBJECT</code> is ambiguous, could be: [<code>RENTOBJECT</code>, <code>RENTOBJECT</code>, <code>RENTOBJECT</code>]. SQLSTATE: 42704</p> </blockquote> <p>Could it be that I have a duplicate column? How can I avoid the error? In pandas I am used to duplicate columns getting renamed to column_x and column_y.</p>
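<p>The ambiguity arises because <code>RENTOBJECT</code> exists in more than one of the joined dataframes, so the second join cannot tell which copy is meant. A sketch of one way out, assuming <code>df_1</code>'s columns are the ones you want to join on: alias each dataframe and qualify every reference, or drop/rename the duplicated columns before joining (Spark does not add pandas-style <code>_x</code>/<code>_y</code> suffixes automatically):</p>
<pre><code>from pyspark.sql import functions as F

joined_df = (
    df_1.alias("a")
        .join(df_2.alias("b"), F.col("a.RECONTRACT") == F.col("b.RECONTRACT"), "left")
        .join(df_3.alias("c"), F.col("a.RENTOBJECT") == F.col("c.RENTOBJECT"), "left")
)

# If you need to keep both copies of a duplicated column, rename one side first:
df_2_renamed = df_2.withColumnRenamed("RENTOBJECT", "RENTOBJECT_df2")
</code></pre>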
<python><apache-spark><databricks>
2025-04-22 13:00:50
3
6,338
PV8
79,586,299
511,436
"object has no attribute 'delay_on_commit'"
<p>Launching a Celery 5.5.1 task from a Django 4.2 view sometimes causes</p> <pre><code>'generate_report' object has no attribute 'delay_on_commit' </code></pre> <pre><code># tasks.py from celery import shared_task @shared_task def generate_report(data): # Code to generate report ... </code></pre> <pre><code># django view def testview(request): from tasks import generate_report generate_report.delay_on_commit(&quot;mytestdata&quot;) return render(request, &quot;simple_page.html&quot;) </code></pre> <p>This happens on some deployments, but not on all. And when it happens, a server (web app) reboot solves the issue. Moving the <code>generate_report</code> import to the top of the file also did not help.</p> <p>Code is deployed on pythonanywhere.com with several Celery workers running in &quot;always on&quot; tasks.</p> <p><code>dir(generate_report)</code> gave this result:</p> <pre><code>['AsyncResult', 'MaxRetriesExceededError', 'OperationalError', 'Request', 'Strategy', '__annotations__', '__bound__', '__call__', '__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__header__', '__init__', '__init_subclass__', '__le__', '__lt__', '__module__', '__name__', '__ne__', '__new__', '__qualname__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__trace__', '__v2_compat__', '__weakref__', '__wrapped__', '_app', '_backend', '_decorated', '_default_request', '_exec_options', '_get_app', '_get_exec_options', '_get_request', 'abstract', 'acks_late', 'acks_on_failure_or_timeout', 'add_around', 'add_to_chord', 'add_trail', 'after_return', 'annotate', 'app', 'apply', 'apply_async', 'backend', 'before_start', 'bind', 'chunks', 'default_retry_delay', 'delay', 'expires', 'from_config', 'ignore_result', 'map', 'max_retries', 'name', 'on_bound', 'on_failure', 'on_replace', 'on_retry', 'on_success', 'pop_request', 'priority', 'push_request', 'rate_limit', 'reject_on_worker_lost', 'replace', 'request', 'request_stack', 'resultrepr_maxsize', 'retry', 'run', 's', 'send_event', 'send_events', 'serializer', 'shadow_name', 'si', 'signature', 'signature_from_request', 'soft_time_limit', 'starmap', 'start_strategy', 'store_eager_result', 'store_errors_even_if_ignored', 'subtask', 'subtask_from_request', 'throws', 'time_limit', 'track_started', 'trail', 'typing', 'update_state'] </code></pre>
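<p><code>delay_on_commit</code> only exists on tasks whose base class is the Django-aware task class introduced in Celery 5.4, so its absence on some deployments suggests those processes ended up with tasks built on the plain base class (for example because the Celery app was configured before Django was set up). A sketch of a fallback with the same behaviour that does not depend on the task class at all, using Django's own commit hook:</p>
<pre><code>from django.db import transaction
from django.shortcuts import render

from tasks import generate_report

def testview(request):
    # Queue the task only after the surrounding transaction commits,
    # which is what delay_on_commit does under the hood.
    transaction.on_commit(lambda: generate_report.delay("mytestdata"))
    return render(request, "simple_page.html")
</code></pre>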
<python><django><celery><pythonanywhere>
2025-04-22 11:14:06
2
1,844
Davy
79,585,895
7,465,516
Can I get PyCharm to accept a Python interpreter not named 'python'?
<p>My project has an executable called <code>powerscript.exe</code>, which is a Python interpreter that does and knows some extra things. This is out of my control, I can not change this.</p> <p>From the command line I can use this as a drop-in replacement for the Python interpreter. In PyCharm I cannot. Adding this thing as a Python interpreter yields the error message:</p> <blockquote> <p><strong>Select Python Interpreter</strong> An invalid Python interpreter name 'powerscript.exe'!</p> </blockquote> <p>Which I guess is just a sanity-check on the filename that I want to deactivate or work around.</p> <p>I have read <a href="https://stackoverflow.com/questions/56815082/how-to-resolve-invalid-python-interpreter-name-python-exe-error-in-pycharm">How to resolve &quot;Invalid Python interpreter name &#39;python.exe&#39;!&quot; error in PyCharm</a>, they have the same error message but in their case it is a false positive, in my case I really do have an invalid name.</p>
<python><windows><pycharm>
2025-04-22 07:25:00
1
2,196
julaine
79,585,571
11,505,680
Python: using doctest with assert
<p>I've been using doctest for a while now, and I like it most of the time. It's annoying when I expect a number that has to fall within a certain range or a string that has to meet certain conditions. In this case, the recommended approach seems to be to encode the condition as a logical operator returning <code>True</code> or <code>False</code>, e.g.,</p> <pre class="lang-py prettyprint-override"><code>&quot;&quot;&quot; &gt;&gt;&gt; from math import pi &gt;&gt;&gt; 3.14 &lt; pi &lt; 3.15 True &quot;&quot;&quot; </code></pre> <p>But this approach leads to a lot of <code>True</code> lines in my doctests. And if a test fails, I get a message that says something like, &quot;Expected True; got False.&quot;</p> <p>But I just discovered that I can use <code>assert</code> to accomplish the same thing:</p> <pre class="lang-py prettyprint-override"><code>&quot;&quot;&quot; &gt;&gt;&gt; from math import pi &gt;&gt;&gt; assert 3.14 &lt; pi &lt; 3.15 &quot;&quot;&quot; </code></pre> <p>The input is more compact. And this way, if the assertion fails, <code>doctest</code> says, &quot;Expected nothing; got AssertionError...&quot; followed by the offending <code>assert</code> statement, which is more readable. Is there any reason not to do this?</p>
<python><doctest>
2025-04-22 01:38:38
1
645
Ilya
79,585,532
464,318
How to make "Go to definition" in VS Code go to a Python library file in the workspace, rather than a file in the virtualenv?
<p>I have a project that includes multiple, unconnected git repos, which I check out next to each other in my <code>~/dev</code> folder.</p> <p>One repo, <code>~/dev/multitenant</code>, is the &quot;main&quot; project, and there are several libraries that Multitenant imports, including <code>~/dev/airspace</code>. Both of these folders are in the VS Code Workspace I built for Multitenant.</p> <p>A copy of Airspace is installed via <code>uv</code> and Multitenant's <code>pyproject.toml</code> in <code>~/dev/multitenant/.venv</code>, which is the Python environment that's activated in my Workspace.</p> <p>One of the files in Multitenant does this:</p> <pre><code>from airspace.utils import get_block_tuple </code></pre> <p>When I perform 'Go to Definition' on <code>get_block_tuple</code> from that file, I want the the file <code>~/dev/airsapce/airspace/utils.py</code> to open. Instead, the file <code>~/dev/multitenant/.venv/lib/python3.11/site-packages/airspace/utils.py</code> is opened.</p> <p>How do I make VS Code open the checked out file in my Airspace repo, instead of the installed file in my virtualenv? That's important because the primary reason I use Go to Definition is to <em>make edits to Airspace</em>, and editing the <em>copy</em> installed in the venv is useless.</p> <p>I just switched from PyCharm, which would do this because I had associated Airspace's PyCharm project as a child of Multitenant's PyCharm project. Does VS Code have any sort of equivalent functionality? I have both folders in my VS Code Workspace, so I figured this would just work.</p> <p>I've done some previous googling on this, and learned about the VS Code setting <code>python.analysis.extraPaths</code>, but that doesn't seem to help. I also tried doing <code>uv pip install -e ../airspace</code>, which successfully installed my local Airspace repo into my venv, but VS Code just stops being able to recognize imports from Airspace after that, giving me the dreaded yellow squiggle.</p>
<python><visual-studio-code><pylance>
2025-04-22 00:42:41
1
9,240
coredumperror
79,585,341
16,611,809
Delete row with negative data in Polars, if row with same value in one column and positive data exists
<p>Lets assume I have this Polars df:</p> <pre><code>import polars as pl df = pl.DataFrame({'name': ['a', 'a', 'b', 'b'], 'found': ['yes', 'no', 'no', 'no'], 'found_in_iteration': ['1', 'not_found', 'not_found', 'not_found']}) </code></pre> <p>(This df was generated in two iterations and for a some data was found in the first, but not in the second and for b no data was found in any iteration.) No I want to keep only those rows, where data was found, but if for one name, not data was found at all, I want to keep this information that it was not found once. The latter is pretty easy doable with <code>unique()</code>, but how do I get rid of the a,no row, if an a,yes exists. The solution should be usable even with more iterations like here:</p> <pre><code>df = pl.DataFrame({'name': ['a', 'a', 'a', 'b', 'b', 'b', 'c', 'c', 'c'], 'found': ['yes', 'no', 'yes', 'no', 'no', 'no', 'no', 'yes', 'no'], 'found_in_iteration': ['1', 'not_found', '3', 'not_found', 'not_found', 'not_found', 'not_found', '2', 'not_found']}) </code></pre> <p>Here, <code>a</code> was found in two iterations, <code>b</code> was found not at all and <code>c</code> was found in one. Here for <code>a</code>, both columns with <code>found</code> == <code>yes</code> should be kept, for c, the one column with <code>found == yes</code> should be kept and for b one column with <code>found == no</code> should be kept. So this:</p> <pre><code>shape: (9, 3) ┌──────┬───────┬────────────────────┐ │ name ┆ found ┆ found_in_iteration │ │ --- ┆ --- ┆ --- │ │ str ┆ str ┆ str │ ╞══════╪═══════╪════════════════════╡ │ a ┆ yes ┆ 1 │ │ a ┆ no ┆ not_found │ │ a ┆ yes ┆ 3 │ │ b ┆ no ┆ not_found │ │ b ┆ no ┆ not_found │ │ b ┆ no ┆ not_found │ │ c ┆ no ┆ not_found │ │ c ┆ yes ┆ 2 │ │ c ┆ no ┆ not_found │ └──────┴───────┴────────────────────┘ </code></pre> <p>should be reduced to this:</p> <pre><code>shape: (9, 3) ┌──────┬───────┬────────────────────┐ │ name ┆ found ┆ found_in_iteration │ │ --- ┆ --- ┆ --- │ │ str ┆ str ┆ str │ ╞══════╪═══════╪════════════════════╡ │ a ┆ yes ┆ 1 │ │ a ┆ yes ┆ 3 │ │ b ┆ no ┆ not_found │ │ c ┆ yes ┆ 2 │ └──────┴───────┴────────────────────┘ </code></pre>
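<p>A sketch using a window expression: keep a row if it is a <code>yes</code> row, or if its <code>name</code> has no <code>yes</code> at all, then collapse the identical <code>not_found</code> rows with <code>unique</code>:</p>
<pre><code>import polars as pl

df = pl.DataFrame({
    "name": ["a", "a", "a", "b", "b", "b", "c", "c", "c"],
    "found": ["yes", "no", "yes", "no", "no", "no", "no", "yes", "no"],
    "found_in_iteration": ["1", "not_found", "3", "not_found", "not_found",
                           "not_found", "not_found", "2", "not_found"],
})

has_yes = (pl.col("found") == "yes").any().over("name")   # per-name flag, one value per row

result = (
    df.filter((pl.col("found") == "yes") | ~has_yes)   # keep yes rows, or everything for never-found names
      .unique(maintain_order=True)                     # the remaining b-rows are identical, keep one
)
</code></pre>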
<python><dataframe><python-polars>
2025-04-21 21:04:15
2
627
gernophil
79,585,340
1,241,786
Update dataframe cell based on row criteria
<p>Let's say I have the following dataframe:</p> <pre><code>+----+------------------------------------------------+-------------+----------+----------+ | | String | Substring | Result 1 | Result 2 | +----+------------------------------------------------+-------------+----------+----------+ | 0 | fooDisplay &quot;screen 1&quot; other text | Screen 1 | | | | 1 | foobar Display &amp;quot;Screen2&amp;quot; more text | GFX | | | | 2 | barDisplay &amp;quot;Screen2&amp;quot;useless text | Screen 2 | | | | 3 | Link=&quot;Screen 1&quot; | Screen 1 | | | +----+------------------------------------------------+-------------+----------+----------+ </code></pre> <p>I am trying to achieve the following:</p> <pre><code>+----+------------------------------------------------+-------------+----------+----------+ | | String | Substring | Result 1 | Result 2 | +----+------------------------------------------------+-------------+----------+----------+ | 0 | foo&quot;Display screen 1&quot; other text | Screen 1 | True | False | | 1 | foobar Display &amp;quot;Screen2&amp;quot; more text | GFX | False | False | | 2 | barDisplay &amp;quot;Screen2&amp;quot;useless text | Screen 2 | False | True | | 3 | Link=&quot;Screen 1&quot; | Screen 1 | False | False | +----+------------------------------------------------+-------------+----------+----------+ </code></pre> <p>From a pseudocode standpoint, this is what I am trying to do:</p> <pre><code>search_string = 'Display ' + row['Substring'] if row['String'].containsCaseInsensitive search_string: row['Result 1'] = True else: row['Result 1'] = False search_string2 = 'Display &amp;quot;' + row['Substring'] + '&amp;quot;' if row['String'].containsCaseInsensitive search_string2: row['Result 2'] = True else: row['Result 2'] = False </code></pre> <p>I know that I can accomplish this by iterating through each row, but I am trying to be a good Python user and avoid that as I have hundreds of thousands of rows in my dataframe, and multiple &quot;Result&quot; columns (upwards of 10) in some cases.</p> <p>I see plenty of examples online where &quot;static&quot; substrings are located within a dataframe string, however, I can't seem to find an example that uses the contents of a dataframe cell as the substring used to search another cell in the same row. Here is one such example: <a href="https://stackoverflow.com/questions/51026771/pandas-case-insensitive-in-a-string-or-case-ignore">pandas-case-insensitive-in-a-string-or-case-ignore</a></p> <p>So, how do I efficiently update dataframe rows using a combination of dynamic and static queries for a given row?</p>
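<p>There is no built-in vectorised "contains another column" test in pandas, but building the search strings as Series and comparing them pairwise in a list comprehension avoids a per-row <code>apply</code> and stays reasonably fast at a few hundred thousand rows. A sketch, assuming <code>df</code> is the dataframe shown above:</p>
<pre><code>import pandas as pd

def row_contains(haystack: pd.Series, needle: pd.Series) -> pd.Series:
    """Case-insensitive 'needle in haystack', evaluated pairwise per row."""
    return pd.Series(
        [n in h for h, n in zip(haystack.str.lower(), needle.str.lower())],
        index=haystack.index,
    )

df["Result 1"] = row_contains(df["String"], "Display " + df["Substring"])
# Further result columns work the same way: build the needle Series from any
# mix of literals and columns, then call row_contains again.
</code></pre>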
<python><pandas><dataframe>
2025-04-21 21:00:38
1
728
kubiej21
79,585,301
1,709,413
Matplotlib vertical grid lines do not match points
<p>Could you please explain why vertical grid lines not match points?</p> <p><a href="https://i.sstatic.net/BO1ONFIz.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BO1ONFIz.jpg" alt="enter image description here" /></a></p> <p>Here is my data for plot:</p> <p>{datetime.datetime(2025, 4, 15, 19, 23, 50, 658000, tzinfo=datetime.timezone.utc): 68.0, datetime.datetime(2025, 4, 16, 19, 31, 1, 367000, tzinfo=datetime.timezone.utc): 72.0, datetime.datetime(2025, 4, 17, 19, 34, 21, 507000, tzinfo=datetime.timezone.utc): 75.0, datetime.datetime(2025, 4, 18, 19, 50, 28, 446000, tzinfo=datetime.timezone.utc): 80.0, datetime.datetime(2025, 4, 19, 19, 57, 15, 393000, tzinfo=datetime.timezone.utc): 78.0, datetime.datetime(2025, 4, 20, 19, 57, 49, 60000, tzinfo=datetime.timezone.utc): 77.0, datetime.datetime(2025, 4, 21, 20, 28, 51, 127710, tzinfo=datetime.timezone.utc): 73.0}</p> <p>And here is my code:</p> <pre><code> fig, ax = plt.subplots(figsize=(12, 6)) ax.plot(df['Дата'], df['Вес'], marker='o', linestyle='-', color='royalblue', label='Вес') ax.scatter(df['Дата'], df['Вес'], color='red', zorder=5) ax.set_title('График изменения веса по дням', fontsize=16) ax.set_xlabel('Дата', fontsize=12) ax.set_ylabel('Вес (кг)', fontsize=12) ax.xaxis.set_major_locator(mdates.AutoDateLocator()) # Раз в день ax.xaxis.set_major_formatter(mdates.DateFormatter('%d.%m.%Y')) plt.setp(ax.xaxis.get_majorticklabels(), rotation=45, ha=&quot;right&quot;) ax.grid(True, linestyle='--', alpha=0.6) plt.tight_layout() </code></pre>
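<p>The grid is drawn at the major ticks, and <code>AutoDateLocator</code> places those at midnight, while the data points sit at roughly 19:00 to 20:30 UTC, so every point lands a bit to the right of its grid line. A sketch of one fix, putting the ticks exactly on the data's timestamps (<code>df['Дата']</code> as in the code above):</p>
<pre><code>import matplotlib.dates as mdates

# Ticks (and the grid drawn from them) at the actual timestamps instead of midnight.
ax.set_xticks(df['Дата'])
ax.xaxis.set_major_formatter(mdates.DateFormatter('%d.%m.%Y'))
plt.setp(ax.xaxis.get_majorticklabels(), rotation=45, ha="right")
</code></pre>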
<python><matplotlib>
2025-04-21 20:34:31
1
1,197
andrey
79,585,295
914,832
Python > yfinance > yf.Ticker(symbol) throwing error when symbol not known
<p>When using <strong>logging</strong> and <strong>yfinance</strong>, if one queries for a incorrect ticker name, they will get an error and I am struggling to shield the user from that error:</p> <p>Code which triggers the error:</p> <pre><code>ticker = yf.Ticker(&quot;QQQQQQQQQ&quot;) stock_info = ticker.info </code></pre> <p>The error:</p> <pre><code>ERROR - 404 Client Error: Not Found for url: https://query2.finance.yahoo.com/v10/finance/quoteSummary/QQQQQQQQQ?modules=financialData%2CquoteType%2CdefaultKeyStatistics%2CassetProfile%2CsummaryDetail&amp;corsDomain=finance.yahoo.com&amp;formatted=false&amp;symbol=QQQQQQQQZ1Z&amp;crumb=XXXX ERROR - Failed to fetch or update Yahoo Finance data for 'QQQQQQQQQ': 'NoneType' object has no attribute 'update' </code></pre> <p>What is the easiest way to check with Yahoo to see if a symbol is valid without triggering this error?</p> <p>I do want to keep logging set on.</p> <h2>Comment on possible duplicate</h2> <p>Thanks for the comments. I still cannot control the error. The other post is dated 2019. During the last six years, the Yahoo Finance library has changed. The errors reported back then were different from those of today.</p> <p>Code to attempt to control the error:</p> <pre><code>try: stock_info = ticker.info except: logging.info(&quot;Cannot get info, it probably does not exist&quot;) </code></pre> <p>Error:</p> <pre><code>ERROR - 404 Client Error: Not Found for url: https://query2.finance.yahoo.com/v10/finance/quoteSummary/FAKE?modules=financialData%2CquoteType%2CdefaultKeyStatistics%2CassetProfile%2CsummaryDetail&amp;corsDomain=finance.yahoo.com&amp;formatted=false&amp;symbol=FAKE&amp;crumb=GBIgTxz68Wh </code></pre>
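<p>The 404 line is not an exception escaping the <code>try</code> block: yfinance logs it through the <code>logging</code> module itself before returning, so catching the exception cannot hide it. One approach is to raise the level of yfinance's own logger (leaving the rest of the logging configuration untouched) and test the result defensively, since depending on the yfinance version <code>.info</code> for an unknown symbol either raises or comes back nearly empty; the <code>quoteType</code> check below is just an illustration of a field that known tickers tend to have:</p>
<pre><code>import logging
import yfinance as yf

# Silence yfinance's internal ERROR records without touching your own logging.
logging.getLogger("yfinance").setLevel(logging.CRITICAL)

def is_valid_symbol(symbol: str) -> bool:
    try:
        info = yf.Ticker(symbol).info
    except Exception:
        return False
    return bool(info) and info.get("quoteType") is not None
</code></pre>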
<python><logging><yfinance>
2025-04-21 20:31:36
1
341
Bogdan Ciocoiu
79,585,116
19,527,503
Plus and equal increment operator not allowed inside dictionary
<p>In the dictionary below, <code>id_count</code> gets incremented as soon as the loop begins just before the dictionary is created. I need to increment it again inside the dictionary within locations <code>id</code>. Why is the <code>+=</code> operator not allowed inside the dictionary?</p> <pre><code> for i in range(0, len(all_locations)): id_count += 1 object_sequence_count += 1 object_data_output = { &quot;id&quot;: id_count, &quot;sequence&quot;: object_sequence_count, &quot;coordinate&quot;:{ &quot;x&quot;: all_locations[i][&quot;coordinate&quot;][&quot;x&quot;], &quot;y&quot;: all_locations[i][&quot;coordinate&quot;][&quot;y&quot;], &quot;objIdentifier&quot;: all_locations[i][&quot;materialIdentifier&quot;], &quot;location&quot;:[ { &quot;id&quot;: id_count += 1 #where the error is occurring } ] } } </code></pre>
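<p><code>+=</code> is augmented assignment, which is a statement, and statements cannot appear where an expression (a value) is expected, such as inside a dict literal. Do the increment before building the dictionary and use the resulting value; a sketch keeping the apparent intent of advancing the counter twice per item:</p>
<pre><code>for i in range(len(all_locations)):
    id_count += 1
    object_sequence_count += 1
    object_id = id_count      # value for the outer "id"
    id_count += 1             # increment again for the nested location id
    object_data_output = {
        "id": object_id,
        "sequence": object_sequence_count,
        "coordinate": {
            "x": all_locations[i]["coordinate"]["x"],
            "y": all_locations[i]["coordinate"]["y"],
            "objIdentifier": all_locations[i]["materialIdentifier"],
            "location": [{"id": id_count}],
        },
    }
</code></pre>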
<python><dictionary>
2025-04-21 18:27:19
2
323
marsprogrammer
79,585,042
577,288
python 3 - multithreading pyttsx3
<pre><code>import pyttsx3 import concurrent.futures import concurrent.futures def Threads1(curr_section, index1, engine): rec_audiofile = 'output' + str(index1) + '.mp3' voices = engine.getProperty('voices') engine.setProperty('rate', 160) engine.setProperty('voice', voices[1].id) engine.save_to_file(curr_section, rec_audiofile) engine.runAndWait() return curr_section all_sections = ['The quick brown fox jumps over the lazy dog', 'the boy jumped over the moon', 'Little Miss Muffet sat on a tuffet, Eating some curds and whey', 'Hickory, dickory, dock The mouse ran up the clock.', 'One, two Buckle my shoe, Three, four Open on the door,', 'Little Bo-Peep has lost her sheep, And cant tell where to find them', 'This Little Piggy, went to market This Little Piggy, stayed home This Little Piggy, had roast beef', 'do your ears hang low? Do they wobble to and fro? Can you tie them in a knot Can you tie them in a bow', 'Hush, little baby, dont say a word, Mamas going to buy you a mockingbird', 'This old man, he played one, He played knickknack on my thumb With a knickknack patty whack, give a dog a bone'] engine = pyttsx3.init() threads_num = 4 with concurrent.futures.ThreadPoolExecutor(max_workers=threads_num) as executor: working_threads = {executor.submit(Threads1, curr_section, index1, engine): curr_section for index1, curr_section in enumerate(all_sections)} for future in concurrent.futures.as_completed(working_threads): current_result = future.result() </code></pre> <p>Hi, when I run the following code. The voice synthasizer starts speaking the words of 1 random paragraph <code>all_section[3]</code> for example ... It still saves all the threads to <code>.mp3</code> data as the code has requested. But the speaking of 1 paragraph is completely random and uncoded. What could be the reason for this?</p> <p>If anyone has a solution, I would like to change the following code as little as possible.</p> <p>Before, I had kept the code <code>engine = pyttsx3.init()</code> inside the thread function with the other <code>engine</code> ... (instead of how it is now - being piped in) ... but when I kept the <code>engine = pyttsx3.init()</code> inside the thread function ... it threw the following error.</p> <pre><code>OSError: [WinError -2147221008] CoInitialize has not been called </code></pre> <p><strong>Edit</strong></p> <p>In further attempts to solve this problem ... result in the same random voice synthesis.</p> <pre><code>import concurrent.futures import threading import pyttsx3 class TTSThread(threading.Thread): def __init__(self, curr_section, index1): threading.Thread.__init__(self) self.curr_section = curr_section self.index1 = index1 self.engine = pyttsx3.init() def run(self): self.engine.save_to_file(self.curr_section, 'file1' + str(self.index1) + '.mp3') self.engine.runAndWait() def Threads1(curr_section, index1): tts_thread = TTSThread(curr_section, index1) tts_thread.start() all_sections = ['The quick brown fox jumps over the lazy dog', 'the boy jumped over the moon', 'Little Miss Muffet sat on a tuffet, Eating some curds and whey', 'Hickory, dickory, dock The mouse ran up the clock.', 'One, two Buckle my shoe, Three, four Open on the door,', 'Little Bo-Peep has lost her sheep, And cant tell where to find them', 'This Little Piggy, went to market This Little Piggy, stayed home This Little Piggy, had roast beef', 'do your ears hang low? Do they wobble to and fro? 
Can you tie them in a knot Can you tie them in a bow', 'Hush, little baby, dont say a word, Mamas going to buy you a mockingbird', 'This old man, he played one, He played knickknack on my thumb With a knickknack patty whack, give a dog a bone'] threads_num = 4 with concurrent.futures.ThreadPoolExecutor(max_workers=threads_num) as executor: working_threads = {executor.submit(Threads1, curr_section, index1): curr_section for index1, curr_section in enumerate(all_sections)} for future in concurrent.futures.as_completed(working_threads): current_result = future.result() </code></pre> <p>Even when modifying the following google example code,</p> <p><a href="https://drive.google.com/file/d/1Vb1VrcmT_UgzudwCvJmYE3AFmrf5mo4b/view?usp=sharing" rel="nofollow noreferrer">google code example</a></p> <p>from <code>engine.speak()</code> into <code>engine.save_to_file()</code></p> <pre><code>import pyttsx3 import threading def speak_in_thread(text, index1): engine = pyttsx3.init() engine.save_to_file(text, 'file1' + str(index1) + '.mp3') engine.runAndWait() if __name__ == &quot;__main__&quot;: thread1 = threading.Thread(target=speak_in_thread, args=(&quot;Hello from thread one!&quot;, 1)) thread2 = threading.Thread(target=speak_in_thread, args=(&quot;Greetings from thread two!&quot;, 2)) thread1.start() thread2.start() thread1.join() thread2.join() </code></pre> <p>returns the following error.</p> <pre><code>OSError: [WinError -2147221008] CoInitialize has not been called </code></pre>
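<p>On Windows, pyttsx3 drives the SAPI engine over COM, and COM must be initialised in every thread that touches it; that is what <code>CoInitialize has not been called</code> means, and sharing one engine object across threads is also a likely cause of the random speech. A sketch that initialises COM per worker and gives each worker its own engine (<code>pythoncom</code> comes with pywin32, which pyttsx3 already relies on under Windows); if the driver still serialises or mixes output, a <code>ProcessPoolExecutor</code> is the more robust route:</p>
<pre><code>import concurrent.futures
import pythoncom   # pywin32
import pyttsx3

def save_section(curr_section, index1):
    pythoncom.CoInitialize()          # COM init must happen in *this* thread
    try:
        engine = pyttsx3.init()       # one engine per thread; never share it
        engine.save_to_file(curr_section, f"output{index1}.mp3")
        engine.runAndWait()
        return curr_section
    finally:
        pythoncom.CoUninitialize()

all_sections = ["The quick brown fox jumps over the lazy dog",
                "the boy jumped over the moon"]

with concurrent.futures.ThreadPoolExecutor(max_workers=4) as executor:
    futures = [executor.submit(save_section, s, i) for i, s in enumerate(all_sections)]
    for future in concurrent.futures.as_completed(futures):
        future.result()
</code></pre>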
<python><python-3.x><multithreading><pyttsx3>
2025-04-21 17:28:11
0
5,408
Rhys
79,584,914
790,474
Self-contained marimo notebook via uv
<p>I am trying to make a self-contained Python file that would use <code>uv</code> to install dependencies into the <code>uv</code> cache behind the scene, and simply launch the Marimo notebook once the script runs.</p> <p>This would make distribution much easier.</p> <p>I have worked out the following:</p> <pre class="lang-py prettyprint-override"><code>#!/usr/bin/env -S uv run -- marimo edit main.py # /// script # requires-python = &quot;&gt;=3.12&quot; # dependencies = [ # &quot;marimo&quot;, # &quot;numpy&quot;, # &quot;matplotlib&quot;, # ] # /// import marimo app = marimo.App() @app.cell def _(): import marimo as mo return mo @app.cell def _(): mo.md(&quot;## Hello, Marimo!&quot;) return </code></pre> <p>However, I get an error:</p> <pre class="lang-none prettyprint-override"><code>error: Failed to spawn: `marimo` Caused by: No such file or directory (os error 2) </code></pre> <p>Can anyone see what is the problem?</p>
<python><uv><marimo>
2025-04-21 15:50:57
1
3,120
henrikstroem
79,584,913
1,774,080
Firefox remote debugging for website in python - Select element inside an iframe
<p>I am trying to create an automation tool for scraping a site. As part of that, I am making a Python script that utilizes the Remote Debugging protocol through this library: <a href="https://github.com/jpramosi/geckordp" rel="nofollow noreferrer">https://github.com/jpramosi/geckordp</a></p> <p>My problem is, that the elements that are of interest to me, reside inside an iframe. Therefore, while elements outside of the iframe can be easily selected through a <code>querySelect</code>, that does not work for elements inside the iframe. As a workaround, I have resorted to manually reaching these elements through repeated traversing of children nodes:</p> <pre class="lang-py prettyprint-override"><code>val = walker.query_selector(val[&quot;actor&quot;], &quot;.last-non-iframe-class&quot;)['node'] # print(val) children = walker.children(val[&quot;actor&quot;]) val = children[1] children = walker.children(val[&quot;actor&quot;]) val = children[0] children = walker.children(val[&quot;actor&quot;]) val = children[1] children = walker.children(val[&quot;actor&quot;]) val = children[0] . . . </code></pre> <p>But this is too slow for my needs. Is there any way to make <code>querySelect</code> work with elements inside an iframe and cut down on the number of requests I have to make to the debugger server?</p> <p>The Firefox instance runs on a Linux machine, the Python code too. The Firefox instance is running normally (not headless).</p> <p>If that is not easy for Firefox, is there a more feasible way with Chrome Remote Debugging?</p>
<python><firefox><remote-debugging><firefox-developer-tools><queryselector>
2025-04-21 15:50:05
0
1,967
Noob Doob
79,584,485
17,902,018
Unable to torch.load due to pickling/safety error
<p>I am trying to use a pytorch model present on this link:</p> <p><a href="https://drive.google.com/drive/folders/121kucsuGxoYQu03-Jmy6VCDcPzqlG4cG" rel="nofollow noreferrer">https://drive.google.com/drive/folders/121kucsuGxoYQu03-Jmy6VCDcPzqlG4cG</a></p> <p>since it is used by this project I am trying to run:</p> <p><a href="https://github.com/amaljoseph/EndToEnd_Signature-Detection-Cleaning-Verification_System_using_YOLOv5-and-CycleGAN" rel="nofollow noreferrer">https://github.com/amaljoseph/EndToEnd_Signature-Detection-Cleaning-Verification_System_using_YOLOv5-and-CycleGAN</a></p> <p>When I do</p> <pre class="lang-py prettyprint-override"><code>torch.load(model, &quot;cpu&quot;) </code></pre> <p>I get</p> <pre><code>raise pickle.UnpicklingError(_get_wo_message(str(e))) from None _pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint. (1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source. (2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message. WeightsUnpickler error: Unsupported global: GLOBAL numpy.core.multiarray._reconstruct was not an allowed global by default. Please use `torch.serialization.add_safe_globals([_reconstruct])` or the `torch.serialization.safe_globals([_reconstruct])` context manager to allowlist this global if you trust this class/function. Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html. </code></pre> <p>I don't want to follow suggestion (1) since I don't fully trust the creator, and trying with solution (2) I would do something like:</p> <pre class="lang-py prettyprint-override"><code>from numpy.core.multiarray import _reconstruct import torch torch.serialization.add_safe_globals([_reconstruct]) torch.load(model, &quot;cpu&quot;) </code></pre> <p>or</p> <pre class="lang-py prettyprint-override"><code>with torch.serialization.safe_globals([_reconstruct]): torch.load(model, &quot;cpu&quot;) </code></pre> <p>But I get the exact same error. How can I load the model in a safe way?</p> <p>Details:</p> <ul> <li>Python version: 3.12.3</li> <li>Pytorch version: 2.6.0+cu124</li> <li>Numpy version: 2.1.3</li> <li>OS: Ubuntu 24.04.2 LTS x86_64</li> </ul>
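<p>The allowlist usually needs every global the checkpoint's pickle names, not just <code>_reconstruct</code>; <code>numpy.ndarray</code> and <code>numpy.dtype</code> are commonly required as well, and under NumPy 2.x the same function also lives under <code>numpy._core</code>, which may be the spelling the unpickler resolves. This is a hedged sketch of that next step, with <code>"model.pth"</code> standing in for the downloaded checkpoint; if the error persists, the message normally names the next global to add:</p>
<pre><code>import torch
import numpy as np
from numpy.core.multiarray import _reconstruct

safe = [_reconstruct, np.ndarray, np.dtype]
try:
    # NumPy 2.x moved the implementation; register that object too, in case the
    # unpickler resolves the new module path.
    from numpy._core.multiarray import _reconstruct as _reconstruct_core
    safe.append(_reconstruct_core)
except ImportError:
    pass

torch.serialization.add_safe_globals(safe)
state = torch.load("model.pth", map_location="cpu", weights_only=True)
</code></pre>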
<python><numpy><pytorch><deserialization><pickle>
2025-04-21 11:02:45
1
2,128
rikyeah
79,584,468
5,828,163
How to represent ranges of time in a pandas index
<p>I have a collection of user data as follows:</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th style="text-align: center;">user</th> <th style="text-align: center;">start</th> <th style="text-align: center;">end</th> </tr> </thead> <tbody> <tr> <td style="text-align: center;">John Doe</td> <td style="text-align: center;">2025-03-21 11:30:35</td> <td style="text-align: center;">2025-03-21 13:05:26</td> </tr> <tr> <td style="text-align: center;">...</td> <td style="text-align: center;">...</td> <td style="text-align: center;">...</td> </tr> <tr> <td style="text-align: center;">Jane Doe</td> <td style="text-align: center;">2023-12-31 01:02:03</td> <td style="text-align: center;">2024-01-02 03:04:05</td> </tr> </tbody> </table></div> <p>Each user has a start and end datetime of some activity. I would like to place this temporal range in the index so I can quickly query the dataframe to see which users were active during a certain date/time range like so:</p> <pre class="lang-py prettyprint-override"><code>df['2024-01-01:2024-01-31'] </code></pre> <p>Pandas has <a href="https://pandas.pydata.org/docs/reference/api/pandas.Period.html" rel="nofollow noreferrer"><code>Period</code></a> objects, but these seem to only support a specific year, day, or minute, not an arbitrary start and end datetime. Pandas also has <a href="https://pandas.pydata.org/docs/reference/api/pandas.MultiIndex.html" rel="nofollow noreferrer"><code>MultiIndex</code></a> indices, but these seem to be designed for hierarchical categorical labels, not for time ranges. Any other ideas for how to represent this time range in an index?</p>
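<p>An <code>IntervalIndex</code> can hold arbitrary start/end timestamps. Label slicing like <code>df['2024-01-01':'2024-01-31']</code> does not apply to it directly, but <code>overlaps</code> answers the "who was active during this window" query. A sketch with the two rows shown above:</p>
<pre><code>import pandas as pd

df = pd.DataFrame({
    "user": ["John Doe", "Jane Doe"],
    "start": pd.to_datetime(["2025-03-21 11:30:35", "2023-12-31 01:02:03"]),
    "end": pd.to_datetime(["2025-03-21 13:05:26", "2024-01-02 03:04:05"]),
})

df.index = pd.IntervalIndex.from_arrays(df["start"], df["end"], closed="both")

# Users active at any point during January 2024:
window = pd.Interval(pd.Timestamp("2024-01-01"), pd.Timestamp("2024-01-31"), closed="both")
active = df[df.index.overlaps(window)]
</code></pre>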
<python><pandas><time-series><date-range>
2025-04-21 10:54:10
2
2,223
Adam Stewart
79,584,454
3,225,420
Can't Align Histogram Bin Edges with Chart Even When Using Numpy histogram_bin_edges
<p>I want the histogram edges to line up with the <code>xticks</code> of my chart. Other SO and answers I've read focus on calculating the bins and passing them. I've done this yet still can't get them to align: <a href="https://i.sstatic.net/z1I2C1o5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/z1I2C1o5.png" alt="histogram whose chart and xticks do not align" /></a></p> <p>I've focused on the following lines to get this to work with no luck. In <code>my_bins</code> I tried converting it to a list (<code>tolist()</code>), and even with forcibly changing the <code>xticks</code> to match <code>my_bins</code> it doesn't align.</p> <p><code>my_bins = np.histogram_bin_edges(a=dataframe[data_cols].values.ravel(), bins='sqrt').tolist()</code></p> <p><code>axes_dict['histogram'].set_xticks(ticks=my_bins)</code></p> <p><code>axes_dict['histogram'].set_xticklabels(labels=my_bins)</code></p> <p>What should I try next?</p> <p>Here's my sample code:</p> <pre><code> from matplotlib.figure import Figure from PIL import Image import pandas as pd import numpy as np my_fig = Figure(**{'layout': 'constrained'}) mosaic = {'mosaic': [['histogram']], 'gridspec_kw': {'wspace': 0.0, 'hspace': 0.0}} axes_dict = my_fig.subplot_mosaic(**mosaic) columns = ['label', 'obs1'] data = [('', 74.03), ('', 73.995), ('', 73.988), ('', 74.002), ('', 73.992), ('', 74.009), ('', 73.995), ('', 73.985), ('', 74.008), ('', 73.998), ('', 73.994), ('', 74.004), ('', 73.983), ('', 74.006), ('', 74.012), ('', 74.0), ('', 73.994), ('', 74.006), ('', 73.984), ('', 74.0), ('', 73.988), ('', 74.004), ('', 74.01), ('', 74.015), ('', 73.982)] dataframe = pd.DataFrame(data, columns=columns) data_cols = ['obs1'] my_bins = np.histogram_bin_edges(a=dataframe[data_cols].values.ravel(), bins='sqrt').tolist() # my_bins = [73.982, 73.9916, 74.0012, 74.0108, 74.0204, 74.03] axes_dict['histogram'].hist(**{'x': [74.03, 73.995, 73.988, 74.002, 73.992, 74.009, 73.995, 73.985, 74.008, 73.998, 73.994, 74.004, 73.983, 74.006, 74.012, 74.0, 73.994, 74.006, 73.984, 74.0, 73.988, 74.004, 74.01, 74.015, 73.982], 'bins': [73.982, 73.9916, 74.0012, 74.0108, 74.0204, 74.03], 'label': '', 'color': 'C0', 'zorder': 3.0, 'alpha': 0.5, 'histtype': 'step', 'align': 'left', 'orientation': 'vertical'}) axes_dict['histogram'].set_xticks(ticks=my_bins) axes_dict['histogram'].set_xticklabels(labels=my_bins) my_fig.savefig('example_figure_for_stackoverflow.png') Image.open('example_figure_for_stackoverflow.png').show() </code></pre>
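<p>The likely culprit is <code>'align': 'left'</code> in the <code>hist</code> call: with explicit bin edges the default <code>align='mid'</code> draws each bar between its two edges, whereas <code>'left'</code> centres the bars on the left edges, shifting the whole outline half a bin to the left of the ticks you placed on the edges. A sketch of the relevant change (other keyword arguments as in the original call):</p>
<pre><code># Let the bars sit between their edges so they line up with ticks placed on my_bins.
axes_dict['histogram'].hist(
    x=dataframe[data_cols].values.ravel(),
    bins=my_bins,
    histtype='step',
    align='mid',     # or simply omit align; 'left' is what causes the offset
    color='C0',
    alpha=0.5,
    zorder=3.0,
)
axes_dict['histogram'].set_xticks(ticks=my_bins)
</code></pre>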
<python><matplotlib><histogram><xticks>
2025-04-21 10:46:59
1
1,689
Python_Learner
79,584,241
1,706,058
Overlay effect not working as expected in moviepy
<p>I'm trying to apply overlay effect (mp4 with black background) and the black color is not completely removed using the mask below</p> <pre class="lang-py prettyprint-override"><code>from moviepy.editor import VideoFileClip, CompositeVideoClip from moviepy.video.fx.all import mask_color # Load the base video and overlay video base_clip = VideoFileClip(&quot;bed_time/testing/output.mp4&quot;) overlay_clip = VideoFileClip(&quot;overlay/border_video.mp4&quot;) # Resize overlay (optional, if needed) overlay_clip = overlay_clip.resize(height=base_clip.h) overlay_clip = overlay_clip.set_duration(base_clip.duration) # Mask the black background in the overlay video overlay_masked = mask_color(overlay_clip, color=[0, 0, 0]) # Position the overlay on top of the base video # (change `pos` to adjust overlay position) composited_clip = CompositeVideoClip([base_clip, overlay_masked.set_position((&quot;center&quot;, &quot;center&quot;))]) # Write the final video to file composited_clip.write_videofile(&quot;output_video.mp4&quot;, codec=&quot;libx264&quot;, fps=24) </code></pre> <p>On the right hand side is what I'm trying to achieve and on the left the broken overlay effect I'm getting with code above. Any thoughts?</p> <p><a href="https://i.sstatic.net/UT3rhmED.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UT3rhmED.gif" alt="enter image description here" /></a></p>
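<p>With its defaults, <code>mask_color</code> only removes pixels that match the colour exactly (<code>thr=0</code>), and compressed video rarely contains perfectly pure black, so most near-black pixels survive. Raising the threshold, and optionally the stiffness <code>s</code>, usually clears the halo; the values below are starting points to tune, not known-good numbers:</p>
<pre><code>from moviepy.video.fx.all import mask_color

# thr widens the colour distance treated as background; s controls how sharply
# the mask falls off around that distance.
overlay_masked = mask_color(overlay_clip, color=[0, 0, 0], thr=100, s=5)
</code></pre>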
<python><moviepy>
2025-04-21 08:04:19
2
1,181
Devester
79,584,062
5,722,359
UserWarning: FigureCanvasAgg is non-interactive, and thus cannot be shown
<p>I am trying to show a matplotlib.pyplot figure on Python 3.10 but can't. I am aware of this <a href="https://stackoverflow.com/q/77507580/5722359">question</a> and tried their answers but is still unsuccessful. The default OS distribution is Ubuntu 24.04 using Python 3.12 as a default.</p> <p>Here is how I setup the Python 3.10 project <code>venv</code> and installed <code>numpy</code> and <code>matplotlib</code>:</p> <pre><code>$ uv init test_py310 --python 3.10 Initialized project `test-py310` at `/home/user/test_py310` $ cd test_py310/ $ uv add numpy matplotlib Using CPython 3.10.16 Creating virtual environment at: .venv Resolved 12 packages in 136ms Prepared 1 package in 1.96s Installed 11 packages in 43ms + contourpy==1.3.2 + cycler==0.12.1 + fonttools==4.57.0 + kiwisolver==1.4.8 + matplotlib==3.10.1 + numpy==2.2.5 + packaging==25.0 + pillow==11.2.1 + pyparsing==3.2.3 + python-dateutil==2.9.0.post0 + six==1.17.0 </code></pre> <p>test_matplotlib.py:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt x = np.linspace(0, 10, 100) y = np.sin(x) plt.plot(x, y, label='sin(x)', color='blue', linestyle='--') plt.show() </code></pre> <p>Error:</p> <pre><code>/home/user/Coding/test_py310/.venv/bin/python /home/user/test_py310/test_matplotlib,py /home/user/test_py310/test_matplotlib,py:7: UserWarning: FigureCanvasAgg is non-interactive, and thus cannot be shown plt.show() </code></pre> <p>Next, I tried installing <code>PyQt5</code> as shared by this <a href="https://stackoverflow.com/a/77644828/5722359">answer</a> but still encountered error.</p> <pre><code>$ uv add pyqt5 Resolved 15 packages in 89ms Installed 3 packages in 45ms + pyqt5==5.15.11 + pyqt5-qt5==5.15.16 + pyqt5-sip==12.17.0 </code></pre> <p>Running the same python script</p> <pre><code>$ /home/user/test_py310/.venv/bin/python /home/user/test_py310/test_matplotlib,py qt.qpa.plugin: Could not load the Qt platform plugin &quot;xcb&quot; in &quot;&quot; even though it was found. This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem. Available platform plugins are: eglfs, linuxfb, minimal, minimalegl, offscreen, vnc, wayland-egl, wayland, wayland-xcomposite-egl, wayland-xcomposite-glx, webgl, xcb. Aborted (core dumped) </code></pre> <p>Changing <code>import matplotlib.pyplot as plt</code> to:</p> <pre><code>import matplotlib matplotlib.use('TkAgg') import matplotlib.pyplot as plt </code></pre> <p>Gave this error:</p> <pre><code>$ /home/user/test_py310/.venv/bin/python /home/user/test_py310/test_matplotlib,py AttributeError: module '_tkinter' has no attribute '__file__'. Did you mean: '__name__'? 
The above exception was the direct cause of the following exception: ImportError: failed to load tkinter functions The above exception was the direct cause of the following exception: Traceback (most recent call last): File &quot;/home/user/test_py310/test_matplotlib,py&quot;, line 9, in &lt;module&gt; plt.plot(x, y, label='sin(x)', color='blue', linestyle='--') File &quot;/home/user/test_py310/.venv/lib/python3.10/site-packages/matplotlib/pyplot.py&quot;, line 3827, in plot return gca().plot( File &quot;/home/user/test_py310/.venv/lib/python3.10/site-packages/matplotlib/pyplot.py&quot;, line 2774, in gca return gcf().gca() File &quot;/home/user/test_py310/.venv/lib/python3.10/site-packages/matplotlib/pyplot.py&quot;, line 1108, in gcf return figure() File &quot;/home/user/test_py310/.venv/lib/python3.10/site-packages/matplotlib/pyplot.py&quot;, line 1042, in figure manager = new_figure_manager( File &quot;/home/user/test_py310/.venv/lib/python3.10/site-packages/matplotlib/pyplot.py&quot;, line 551, in new_figure_manager _warn_if_gui_out_of_main_thread() File &quot;/home/user/test_py310/.venv/lib/python3.10/site-packages/matplotlib/pyplot.py&quot;, line 528, in _warn_if_gui_out_of_main_thread canvas_class = cast(type[FigureCanvasBase], _get_backend_mod().FigureCanvas) File &quot;/home/user/test_py310/.venv/lib/python3.10/site-packages/matplotlib/pyplot.py&quot;, line 369, in _get_backend_mod switch_backend(rcParams._get(&quot;backend&quot;)) File &quot;/home/user/test_py310/.venv/lib/python3.10/site-packages/matplotlib/pyplot.py&quot;, line 425, in switch_backend module = backend_registry.load_backend_module(newbackend) File &quot;/home/user/test_py310/.venv/lib/python3.10/site-packages/matplotlib/backends/registry.py&quot;, line 317, in load_backend_module return importlib.import_module(module_name) File &quot;/home/user/.local/share/uv/python/cpython-3.10.16-linux-x86_64-gnu/lib/python3.10/importlib/__init__.py&quot;, line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1050, in _gcd_import File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1027, in _find_and_load File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1006, in _find_and_load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 688, in _load_unlocked File &quot;&lt;frozen importlib._bootstrap_external&gt;&quot;, line 883, in exec_module File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 241, in _call_with_frames_removed File &quot;/home/user/test_py310/.venv/lib/python3.10/site-packages/matplotlib/backends/backend_tkagg.py&quot;, line 1, in &lt;module&gt; from . import _backend_tk File &quot;/home/user/test_py310/.venv/lib/python3.10/site-packages/matplotlib/backends/_backend_tk.py&quot;, line 25, in &lt;module&gt; from . import _tkagg ImportError: initialization failed </code></pre> <p>Using</p> <pre><code>import matplotlib matplotlib.use('Qt5Agg') import matplotlib.pyplot as plt </code></pre> <p>gave</p> <pre><code>$ /home/user/test_py310/.venv/bin/python /home/user/test_py310/test_matplotlib,py qt.qpa.plugin: Could not load the Qt platform plugin &quot;xcb&quot; in &quot;&quot; even though it was found. This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem. 
Available platform plugins are: eglfs, linuxfb, minimal, minimalegl, offscreen, vnc, wayland-egl, wayland, wayland-xcomposite-egl, wayland-xcomposite-glx, webgl, xcb. Aborted (core dumped) </code></pre> <p>I have also removed pyqt5 and added pyqt6, and used <code>matplotlib.use('Qt6Agg')</code> but got this error:</p> <pre><code>$ /home/user/test_py310/.venv/bin/python /home/user/test_py310/test_matplotlib,py Traceback (most recent call last): File &quot;/home/user/test_py310/test_matplotlib,py&quot;, line 4, in &lt;module&gt; matplotlib.use('Qt6Agg') File &quot;/home/user/test_py310/.venv/lib/python3.10/site-packages/matplotlib/__init__.py&quot;, line 1265, in use name = rcsetup.validate_backend(backend) File &quot;/home/user/test_py310/.venv/lib/python3.10/site-packages/matplotlib/rcsetup.py&quot;, line 278, in validate_backend raise ValueError(msg) ValueError: 'Qt6Agg' is not a valid value for backend; supported values are ['gtk3agg', 'gtk3cairo', 'gtk4agg', 'gtk4cairo', 'macosx', 'nbagg', 'notebook', 'qtagg', 'qtcairo', 'qt5agg', 'qt5cairo', 'tkagg', 'tkcairo', 'webagg', 'wx', 'wxagg', 'wxcairo', 'agg', 'cairo', 'pdf', 'pgf', 'ps', 'svg', 'template'] </code></pre> <p><strong>What must I do to be able to plot a matplotlib.pyplot figure in a virtual environment that is installed with Python 3.10?</strong> Just to add, I am able show a matplotlib.pyplot figure in a separate virtual environment using Python 3.12.</p>
<python><matplotlib><uv>
2025-04-21 05:15:22
2
8,499
Sun Bear
79,584,016
1,187,304
How to pickle my custom enum class without a PicklingError?
<p>I have written an <code>Interval</code> python class with attributes for a starting point, an ending point, and a &quot;curve type&quot; that is used to interpolate between the start and the end. I only plan to support a couple specific predefined curve types, so I have put them in an Enum. Each curve type has a few properties, so I have made each enum value a namedtuple. Here is my (simplified) code so you can see what I mean:</p> <pre><code>from collections import namedtuple from enum import Enum class CurveType(namedtuple('CurveType', ['label', 'symbol', 'interp_fn']), Enum): LINEAR = 'linear', '\u002F', lambda x: x SBEND = 's-bend', '\u23B0', lambda x: x**3 / (x**3 + (1 - x)**3) class Interval: def __init__(self, start, end, curve_type=None): self.start = start self.end = end self.curve_type = curve_type if curve_type else CurveType.LINEAR def __str__(self): return f'({self.start} {self.curve_type.symbol} {self.end})' def evaluate_at(self, x): proportion = (x - self.start) / (self.end - self.start) return self.curve_type.interp_fn(proportion) </code></pre> <p>This works perfectly adequately for my needs:</p> <pre><code>i = Interval(3, 6, CurveType.SBEND) print (i) print (i.curve_type) print (i.evaluate_at(4)) </code></pre> <blockquote> <p>(3 ⎰ 6)<br /> CurveType.SBEND<br /> 0.11111111111111105</p> </blockquote> <p>Except that I'd like to be able to pickle an individual <code>Interval</code> instance, and that currently gives me an error:</p> <pre><code>import pickle with open('interval.pickle', 'wb') as file: pickle.dump(i, file) </code></pre> <blockquote> <p>Traceback (most recent call last):<br /> File &quot;/path/to/test.py&quot;, line 28, in &lt;module&gt;<br /> pickle.dump(i, file)<br /> _pickle.PicklingError: Can't pickle &lt;class '__main__.CurveType'&gt;: it's not the same object as __main__.CurveType</p> </blockquote> <p>I don't totally get what this error message is saying. I keep thinking that I understand it and then confusing myself again.</p> <p>Anyhow, how can I accomplish my goals here? I am not married to the exact approach I have shown, so long as I can:</p> <ul> <li>create <code>Interval</code> objects</li> <li>give each <code>Interval</code> a <code>CurveType</code> using syntax like <code>curve_type = CurveType.LINEAR</code></li> <li>specify multiple properties for each <code>CurveType</code> including its interpolation function</li> <li>pickle my <code>Interval</code> objects</li> </ul> <p>((I'm currently running Python 3.11 but could upgrade if there's a relevant feature in a newer version.))</p>
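<p>What the message is saying: the member's value is an instance of the anonymous namedtuple base class, which is also called <code>CurveType</code>; when pickle tries to save that class by name it finds the Enum subclass bound to <code>__main__.CurveType</code> instead, hence "not the same object". Telling pickle to store members by <em>name</em> sidesteps that, and also means the lambdas never need to be pickled. A sketch that keeps the rest of the design:</p>
<pre><code>from collections import namedtuple
from enum import Enum

class CurveType(namedtuple('CurveType', ['label', 'symbol', 'interp_fn']), Enum):
    LINEAR = 'linear', '\u002F', lambda x: x
    SBEND = 's-bend', '\u23B0', lambda x: x**3 / (x**3 + (1 - x)**3)

    def __reduce_ex__(self, protocol):
        # Pickle members by name: unpickling runs getattr(CurveType, 'SBEND'),
        # which returns the existing member, so neither the anonymous namedtuple
        # class nor the lambdas ever pass through pickle.
        return getattr, (self.__class__, self.name)
</code></pre>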
<python><enums><pickle><namedtuple>
2025-04-21 04:07:36
1
482
thecommexokid
79,583,755
3,684,931
How to build a nested adjacency list from an adjacency list and a hierarchy?
<p>I have a simple adjacency list representation of a graph like this</p> <pre><code>{ 1: [2, 3, 4], 2: [5], 3: [6, 9], 4: [3], 5: [3], 6: [7, 8], 7: [], 8: [], 9: [] } </code></pre> <p>which looks like this</p> <p><a href="https://i.sstatic.net/51Ua0bFH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/51Ua0bFH.png" alt="enter image description here" /></a></p> <p>But each node also has a &quot;type ancestry&quot; which tells us what &quot;type&quot; of node it is, and this is completely different from the hierarchy present in the adjacency list above. For example, it would tell us that &quot;node 6 is of type 'BA', and all type 'BA' nodes are type 'B'&quot;.</p> <p>For example, consider this ancestry:</p> <pre><code>{ 1: [], # no ancestry 2: ['AA', 'A'], # Read this as &quot;2 is a type AA node, and all type AA nodes are type A nodes&quot; 3: ['B'], # 3 is directly under type B 4: [], 5: ['AA', 'A'], 6: ['BA', 'B'], 7: ['BA', 'B'], 8: ['BA', 'B'], 9: ['BB', 'B'] } </code></pre> <p>which when visualized would look like this</p> <p><a href="https://i.sstatic.net/83dEzJTK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/83dEzJTK.png" alt="enter image description here" /></a></p> <p>However, instead of connecting nodes directly as per the adjacency list, we must use their &quot;representative types&quot; when available, where the representative types of the nodes would be the nodes just below the lowest common ancestor of their type ancestries. When visualized with this adjustment, it would look like this</p> <p><a href="https://i.sstatic.net/c6Se8HgY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/c6Se8HgY.png" alt="enter image description here" /></a></p> <p>So what I want to produce programmatically is a &quot;hierarchical/nested&quot; adjacency list for such a visualization, which would look like below. The main idea is to introduce a <code>subtree</code> for each key in the adjacency list (along with the <code>edges</code> field), which would in turn contain its own adjacency list and so forth (recursively).</p> <pre><code>{1: {'edges': ['A', 'B', 4], 'subgraphs': {}}, 4: {'edges': ['B'], 'subgraphs': {}}, 'A': {'edges': ['B'], 'subgraphs': {'AA': {'edges': [], 'subgraphs': {2: {'edges': [5], 'subgraphs': {}}, 5: {'edges': [], 'subgraphs': {}}}}}}, 'B': {'edges': [], 'subgraphs': {3: {'edges': ['BA', 'BB'], 'subgraphs': {}}, 'BA': {'edges': [], 'subgraphs': {6: {'edges': [7, 8], 'subgraphs': {}}, 7: {'edges': [], 'subgraphs': {}}, 8: {'edges': [], 'subgraphs': {}}}}, 'BB': {'edges': [], 'subgraphs': {9: {'edges': [], 'subgraphs': {}}}}}}} </code></pre> <p>What is an elegant way of transforming the original adjacency list + the separate &quot;ancestry&quot; map to produce such a data structure?</p>
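<p>The core step is computing, for each original edge <code>(u, v)</code>, the pair of "representative" endpoints: walk both type ancestries from the root until they first disagree; the first differing types are the representatives, and an endpoint whose ancestry is exhausted represents itself. A sketch of that helper (building the nested <code>subgraphs</code> dict is then a matter of inserting every node and type under its immediate parent type and recording each rewritten edge on its source representative):</p>
<pre><code>def representative_pair(u, v, ancestry):
    """Endpoints to connect instead of (u, v).

    ancestry[x] lists x's types from most specific to most general.
    """
    au = list(reversed(ancestry.get(u, [])))   # root-first
    av = list(reversed(ancestry.get(v, [])))
    k = 0
    for a, b in zip(au, av):                   # length of the shared prefix
        if a != b:
            break
        k += 1
    rep_u = au[k] if k != len(au) else u       # first type below the shared ancestor,
    rep_v = av[k] if k != len(av) else v       # or the node itself if none remains
    return rep_u, rep_v


ancestry = {1: [], 2: ['AA', 'A'], 3: ['B'], 4: [], 5: ['AA', 'A'],
            6: ['BA', 'B'], 7: ['BA', 'B'], 8: ['BA', 'B'], 9: ['BB', 'B']}

representative_pair(1, 2, ancestry)   # (1, 'A')
representative_pair(3, 6, ancestry)   # (3, 'BA')
representative_pair(2, 5, ancestry)   # (2, 5)
</code></pre>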
<python><data-structures><graph><nested><adjacency-list>
2025-04-20 20:59:57
2
1,984
Sachin Hosmani
79,583,668
7,361,580
Can older spaCy models be ported to future spaCy versions?
<p>The latest spaCy versions have better performance and compatibility for GPU acceleration on Apple devices, but I have an existing project that depends on spaCy 3.1.4 and some of the specific behavior of the 3.1.0 models (web lg, web trf).</p> <p>Would it be possible to port the old models from source to work with newer versions of spaCy (e.g. 3.5) so I could get the same behavior and results, or do I need to use the model that comes with the respective spaCy version?</p> <p>(This would be on a mac M1, and the advantage is that newer versions of spaCy started supporting Metal Performance Shaders.)</p>
<python><nlp><gpu><spacy>
2025-04-20 19:10:02
0
2,115
synchronizer
79,583,640
21,446,483
Keras SKLearnClassifier wrapper can't fit MNIST data
<p>I'm trying to use the SKLearnClassifier Keras wrapper to do some grid searching and cross validation using the sklearn library but I'm unable to get the model to work properly.</p> <pre class="lang-py prettyprint-override"><code>def build_model(X, y, n_neurons: List[str], learning_rate: float): model = keras.models.Sequential() model.add(keras.Input(shape=(28*28,))) model.add(keras.layers.Dense(n_neurons[0], activation=&quot;relu&quot;)) model.add(keras.layers.Dense(n_neurons[1], activation=&quot;relu&quot;)) model.add(keras.layers.Dense(10, activation=&quot;softmax&quot;)) optimizer = keras.optimizers.SGD(learning_rate=learning_rate) model.compile(loss=&quot;sparse_categorical_crossentropy&quot;, optimizer=optimizer, metrics=[&quot;accuracy&quot;]) return model sk_train = X_train.reshape((X_train.shape[0],X_train.shape[1]*X_train.shape[2])) sk_val = X_val.reshape((X_val.shape[0],X_val.shape[1]*X_val.shape[2])) model = keras.wrappers.SKLearnClassifier(model=build_model, model_kwargs={ &quot;n_neurons&quot;: [300, 100], &quot;learning_rate&quot;: 3e-4 }) model.fit(sk_train, y_train, epochs=30, validation_data=(sk_val, y_val)) </code></pre> <p>The error I get is</p> <pre class="lang-none prettyprint-override"><code>ValueError: Argument `output` must have rank (ndim) `target.ndim - 1`. Received: target.shape=(None, 10), output.shape=(None, 10) </code></pre> <p>The error message seems to be saying that it expects an output of shape (None, 10) and that it received (None, 10), which doesn't make sense to me. The model works fine if I just call the function and fit the model directly, without the wrapper:</p> <pre class="lang-py prettyprint-override"><code>dummy_model = build_model(X_train, y_train, [300, 100], learning_rate=3e-4) dummy_model.summary() dummy_model.fit(sk_train, y_train, epochs=30, validation_data=(sk_val, y_val)) </code></pre> <p>I've also tried not reshaping the data to keep the original 28*28 shape but it just makes it worse.</p>
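<p>The message is consistent with the wrapper handing the model two-dimensional one-hot targets of shape <code>(batch, 10)</code> while the model was compiled with <code>sparse_categorical_crossentropy</code>, which expects integer labels with one dimension fewer than the output. If that is what is happening, the minimal change to try is compiling with the non-sparse loss inside the builder:</p>
<pre><code>import keras

def build_model(X, y, n_neurons, learning_rate):
    model = keras.models.Sequential([
        keras.Input(shape=(28 * 28,)),
        keras.layers.Dense(n_neurons[0], activation="relu"),
        keras.layers.Dense(n_neurons[1], activation="relu"),
        keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(
        loss="categorical_crossentropy",   # one-hot targets need the non-sparse loss
        optimizer=keras.optimizers.SGD(learning_rate=learning_rate),
        metrics=["accuracy"],
    )
    return model
</code></pre>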
<python><keras><scikit-learn><mnist>
2025-04-20 18:31:51
0
332
Jesus Diaz Rivero
79,583,536
161,012
How to display rule numbers in Ruff warnings using VS Code
<p>Would it be possible to configure Ruff to display the number of the violated rule in its warnings while using Ruff with VS Code?</p> <p>In my <code>ruff.toml</code>, I am enabling everything by default:</p> <pre class="lang-toml prettyprint-override"><code>[lint] select = [&quot;ALL&quot;] </code></pre> <p>Then, I plan to remove warnings I judge superficial in my code:</p> <pre class="lang-toml prettyprint-override"><code>ignore = [&quot;PLW0603&quot;, &quot;INP001&quot;] </code></pre> <p>There are a lot of warnings at first, and I would like to see the rule number corresponding to each violation in the warning message. Is it possible to make Ruff do that?</p>
<python><visual-studio-code><ruff>
2025-04-20 16:33:49
2
2,083
Pierre Thibault
79,583,354
8,020,900
How to make Plotly figure legend be right to left instead of left to right
<p>I have this plot with horizontal legend. I want the legend title to be on the right instead of the left, and I want the bullets of each legend item to be on the right of the name instead of on the left. How can I do this? Thank you!</p> <pre><code>import plotly.express as px long_df = px.data.medals_long() fig = px.bar(long_df, x=&quot;nation&quot;, y=&quot;count&quot;, color=&quot;medal&quot;) fig.update_layout(legend=dict(orientation=&quot;h&quot;, yanchor=&quot;bottom&quot;, y=1.01, xanchor=&quot;left&quot;, x=0.35, title=&quot;:Medal&quot;)) fig.show() </code></pre>
<python><python-3.x><graph><plotly><legend>
2025-04-20 12:25:24
0
3,539
Free Palestine
79,583,142
17,729,094
Broadcasting a [B, 1] tensor to apply a shift to a specific channel in PyTorch
<p>I have a tensor <code>p</code> of shape <code>(B, 3, N)</code> in PyTorch:</p> <pre class="lang-py prettyprint-override"><code># 2 batches, 3 channels (x, y, z), 5 points p = torch.rand(2, 3, 5, requires_grad=True) &quot;&quot;&quot; p: tensor([[[0.8365, 0.0505, 0.4208, 0.7465, 0.6843], [0.9922, 0.2684, 0.6898, 0.3983, 0.4227], [0.3188, 0.2471, 0.9552, 0.5181, 0.6877]], [[0.1079, 0.7694, 0.2194, 0.7801, 0.8043], [0.8554, 0.3505, 0.4622, 0.0339, 0.7909], [0.5806, 0.7593, 0.0193, 0.5191, 0.1589]]], requires_grad=True) &quot;&quot;&quot; </code></pre> <p>And then another <code>z_shift</code> of shape <code>[B, 1]</code>:</p> <pre class="lang-py prettyprint-override"><code>z_shift = torch.tensor([[1.0], [10.0]], requires_grad=True) &quot;&quot;&quot; z_shift: tensor([[1.], [10.]], requires_grad=True) &quot;&quot;&quot; </code></pre> <p>I want to apply the appropriate z-shift of all points in each batch, leaving x and y unchanged:</p> <pre class="lang-py prettyprint-override"><code>&quot;&quot;&quot; p: tensor([[[0.8365, 0.0505, 0.4208, 0.7465, 0.6843], [0.9922, 0.2684, 0.6898, 0.3983, 0.4227], [1.3188, 1.2471, 1.9552, 1.5181, 1.6877]], [[0.1079, 0.7694, 0.2194, 0.7801, 0.8043], [0.8554, 0.3505, 0.4622, 0.0339, 0.7909], [10.5806, 10.7593, 10.0193, 10.5191, 10.1589]]]) &quot;&quot;&quot; </code></pre> <p>I managed to do it like:</p> <pre class="lang-py prettyprint-override"><code>p[:, 2, :] += z_shift </code></pre> <p>for the case where <code>requires_grad=False</code>, but this fails inside the <code>forward</code> of my <code>nn.Module</code> (which I assume is equivalent to <code>requires_grad=True</code>) with:</p> <pre><code>RuntimeError: a view of a leaf Variable that requires grad is being used in an in-place operation. </code></pre>
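<p>A minimal out-of-place sketch of the kind of broadcasted shift described above (the <code>mask</code> helper is hypothetical and this is not necessarily the idiomatic fix):</p> <pre class="lang-py prettyprint-override"><code>import torch

B, N = 2, 5
p = torch.rand(B, 3, N, requires_grad=True)
z_shift = torch.tensor([[1.0], [10.0]], requires_grad=True)

# (1, 3, 1) mask that is 1 only on the z channel; mask * z_shift.unsqueeze(-1)
# broadcasts to (B, 3, 1), and adding it to p broadcasts over the N points.
mask = torch.zeros(1, 3, 1)
mask[:, 2, :] = 1.0
p_shifted = p + mask * z_shift.unsqueeze(-1)  # out of place, so autograd is fine
</code></pre>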
<python><pytorch>
2025-04-20 07:44:45
1
954
DJDuque
79,583,131
10,242,281
XML (RDL) code broken after editing with Python (BS4), how to find error?
<p>I have been editing RDL files (essentially XML?) in Python, removing unused EmbeddedImage elements and leaving the remaining images untouched. The sample code below is a rough pseudo view of what I do. The input has this header:</p> <pre><code>&lt;?xml version=&quot;1.0&quot; encoding=&quot;utf-8&quot;?&gt; &lt;Report xmlns=&quot;http://schemas.microsoft.com/sqlserver/report..... .... ....&lt;!-- RDL definition --&gt; &lt;rd:ReportID&gt;b3c5ab10-9fe4-4d15-ae08-da203d00bc77&lt;/rd:ReportID&gt; &lt;/Report&gt; </code></pre> <p>Then I do:</p> <pre><code>with open(filepath, 'r') as f: #local drive data = f.read() Bs_data = BeautifulSoup(data, &quot;xml&quot;) .... for item in Bs_data.find_all('EmbeddedImage'): if item.get('Name') not in list_keep: item.decompose() ..... fileN = open(filepath + '__.xml', 'w') fileN.write(str(Bs_data)) </code></pre> <p>After replacing the RDL file, something goes wrong in VS, as shown in the before/after picture below. After the edit, ImageData is displayed differently (on a single line) and the closing tag no longer works (it has no red highlighting), so the whole XML is broken and I can't run the report. The ImageData in A and B is identical when compared in Notepad++, so is it about encoding? What trick can I use here? Thanks for any hints. The resulting RDL/XML passes XML validation in any tool/browser and only fails in the VS/SSRS project. Here is a handy picture to compare. The ImageData string is 100,000 characters long, so maybe there is some limitation in VS?</p> <p>Happy Easter. <a href="https://i.sstatic.net/oTeolF2A.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oTeolF2A.png" alt="enter image description here" /></a></p> <p>In VS, for the original working code you can see that max length = 1000b <a href="https://i.sstatic.net/BHFSmkAz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BHFSmkAz.png" alt="enter image description here" /></a></p>
<python><xml><visual-studio>
2025-04-20 07:31:33
2
504
Mich28
79,583,120
5,818,372
PyQtDarkTheme: Check‑mark missing for current item in QComboBox popup
<p>I'm using <em>PyQt6</em> with <em>PyQtDarkTheme</em> and have encountered a UI issue:<br /> When opening a <code>QComboBox</code> popup, the <strong>currently selected item doesn't show the usual ✔ checkmark</strong>, or it's so low-contrast that it's effectively invisible. This works correctly without <em>PyQtDarkTheme</em>, so it seems to be theme-related.</p> <p>How can I fix this so that the dropdown <strong>clearly displays the checkmark</strong> for the selected item, even with <em>PyQtDarkTheme</em> applied?</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th>Without PyQtDarkTheme (check-mark visible)</th> <th>With PyQtDarkTheme (check-mark missing)</th> </tr> </thead> <tbody> <tr> <td><a href="https://i.sstatic.net/IsnoamWk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IsnoamWk.png" alt="Check-mark visible" /></a></td> <td><a href="https://i.sstatic.net/65sfvPYB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/65sfvPYB.png" alt="Check-mark missing" /></a></td> </tr> </tbody> </table></div> <h3>Minimal reproducible example</h3> <pre class="lang-py prettyprint-override"><code># pip install pyqt6 qdarktheme import sys import qdarktheme from PyQt6.QtWidgets import QApplication, QWidget, QVBoxLayout, QComboBox class Demo(QWidget): def __init__(self): super().__init__() combo = QComboBox() combo.addItems([&quot;First option&quot;, &quot;Second option&quot;, &quot;Third option&quot;]) layout = QVBoxLayout(self) layout.addWidget(combo) self.setWindowTitle(&quot;PyQtDarkTheme – invisible check‑mark&quot;) if __name__ == &quot;__main__&quot;: app = QApplication(sys.argv) app.setStyleSheet(qdarktheme.load_stylesheet(&quot;auto&quot;)) # Apply dark theme w = Demo() w.resize(250, 80) w.show() sys.exit(app.exec()) </code></pre> <hr /> <h5>Environment:</h5> <ul> <li>Python 3.11</li> <li>PyQt6 6.8</li> <li>PyQtDarkTheme 2.1</li> <li>macOS 15</li> </ul>
<python><pyqt6><qtstylesheets><qcombobox>
2025-04-20 07:13:25
0
686
Gykonik
79,583,067
5,887,173
How to use migration commands with Litestar?
<p>I am trying to use migration commands with Litestar, but I keep encountering the error: No such command 'database'.</p> <p>I have configured my project with pyproject.toml, main.py, and models.user.py as shown below. I also use advanced-alchemy for database integration and Alembic for migrations. However, I cannot find a way to run database migration commands through Litestar.</p> <p>Here are the relevant files:</p> <p>pyproject.toml:</p> <pre><code>[tool.poetry] name = &quot;test-task&quot; version = &quot;0.1.0&quot; description = &quot;A Python web service using LiteStar and PostgreSQL.&quot; license = &quot;MIT&quot; package-mode = false [tool.poetry.dependencies] python = &quot;^3.12&quot; python-dotenv = &quot;^1&quot; litestar = { extras = [&quot;standard&quot;], version = &quot;^2&quot; } litestar-granian = &quot;^0&quot; litestar-asyncpg = &quot;^0&quot; pydantic = &quot;^1.10&quot; advanced-alchemy = &quot;^0.20&quot; msgspec = &quot;^0.18.6&quot; asyncpg = &quot;^0&quot; [tool.poetry.dev-dependencies] pytest = &quot;^7.0&quot; pytest-asyncio = &quot;^0.20&quot; alembic = &quot;^1.11.1&quot; [tool.poetry.scripts] advanced-alchemy = &quot;advanced_alchemy.cli:cli&quot; [build-system] requires = [&quot;poetry-core&gt;=1.0.0&quot;] build-backend = &quot;poetry.core.masonry.api&quot; [tool.litestar.app] path = &quot;app.py&quot; attribute = &quot;app&quot; [tool.litestar.database] connection_string = &quot;postgresql+asyncpg://postgres:postgres@db:5432/tasks_example&quot; metadata_module = &quot;models&quot; base_class = &quot;Base&quot; [tool.alembic] script_location = &quot;migrations&quot; sqlalchemy.url = &quot;postgresql+asyncpg://postgres:postgres@db:5432/tasks_example&quot; </code></pre> <p>main.py:</p> <pre><code>from litestar import Litestar from litestar.openapi.config import OpenAPIConfig from app.controllers.user_controller import UserController from config import DATABASE_URL from litestar.plugins.sqlalchemy import ( AsyncSessionConfig, SQLAlchemyAsyncConfig, SQLAlchemyPlugin, ) session_config = AsyncSessionConfig(expire_on_commit=False) sqlalchemy_config = SQLAlchemyAsyncConfig( connection_string=DATABASE_URL, before_send_handler=&quot;autocommit&quot;, session_config=session_config, create_all=True, ) alchemy = SQLAlchemyPlugin(config=sqlalchemy_config) openapi_config = OpenAPIConfig( title=&quot;Users API&quot;, version=&quot;1.0.0&quot;, description=&quot;API&quot;, ) app = Litestar( route_handlers=[UserController], openapi_config=openapi_config, plugins=[alchemy], ) if __name__ == &quot;__main__&quot;: app.run() </code></pre> <p>models.tasks:</p> <pre><code>from sqlalchemy import Column, BigInteger, String, TIMESTAMP, func from advanced_alchemy.extensions.litestar import ( base ) class BaseModel(base.UUIDBase): __abstract__ = True id = Column(BigInteger, primary_key=True, autoincrement=True) created_at = Column(TIMESTAMP(timezone=True), server_default=func.now(), nullable=False) updated_at = Column(TIMESTAMP(timezone=True), server_default=func.now(), onupdate=func.now(), nullable=False) class Task(BaseModel): __tablename__ = 'task' name = Column(String, nullable=False) </code></pre>
<python><webapi><dbmigrate><litestar>
2025-04-20 05:48:56
0
945
Alexander
79,582,933
10,461,632
How do you set the same row height for all rows in a table created with sphinx? (LaTeX)
<p>How can I modify the <code>latex_preamble</code> to force all rows in a table to have the same height? With Table 1 it's hard to spot, but if you look at Table 2, you can see the first and last row have extra space compared to the other rows.</p> <p>I tried these options, but neither of them remove the extra height. They only increase the row height.</p> <pre><code>latex_elements = { &quot;preamble&quot;: r&quot;&quot;&quot; % option 1 % \usepackage{etoolbox} % \AtBeginEnvironment{tabulary}{\setlength{\extrarowheight}{0pt}} % \renewcommand{\arraystretch}{1.5} % option 2 % \usepackage[table]{xcolor} % \renewcommand{\arraystretch}{2} % Reduces extra vertical space % \setlength{\extrarowheight}{0pt} % Avoids extra height in rows &quot;&quot;&quot; } </code></pre> <p><a href="https://i.sstatic.net/fza4EkA6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fza4EkA6.png" alt="sphinx sample" /></a></p> <p>Here's the .tex file produced by sphinx.</p> <pre><code>%% Generated by Sphinx. \def\sphinxdocclass{report} \documentclass[letterpaper,10pt,english]{sphinxmanual} \ifdefined\pdfpxdimen \let\sphinxpxdimen\pdfpxdimen\else\newdimen\sphinxpxdimen \fi \sphinxpxdimen=.75bp\relax \ifdefined\pdfimageresolution \pdfimageresolution= \numexpr \dimexpr1in\relax/\sphinxpxdimen\relax \fi %% let collapsible pdf bookmarks panel have high depth per default \PassOptionsToPackage{bookmarksdepth=5}{hyperref} \PassOptionsToPackage{booktabs}{sphinx} \PassOptionsToPackage{colorrows}{sphinx} \PassOptionsToPackage{warn}{textcomp} \usepackage[utf8]{inputenc} \ifdefined\DeclareUnicodeCharacter % support both utf8 and utf8x syntaxes \ifdefined\DeclareUnicodeCharacterAsOptional \def\sphinxDUC#1{\DeclareUnicodeCharacter{&quot;#1}} \else \let\sphinxDUC\DeclareUnicodeCharacter \fi \sphinxDUC{00A0}{\nobreakspace} \sphinxDUC{2500}{\sphinxunichar{2500}} \sphinxDUC{2502}{\sphinxunichar{2502}} \sphinxDUC{2514}{\sphinxunichar{2514}} \sphinxDUC{251C}{\sphinxunichar{251C}} \sphinxDUC{2572}{\textbackslash} \fi \usepackage{cmap} \usepackage[T1]{fontenc} \usepackage{amsmath,amssymb,amstext} \usepackage{babel} \usepackage{tgtermes} \usepackage{tgheros} \renewcommand{\ttdefault}{txtt} \usepackage[Bjarne]{fncychap} \usepackage{sphinx} \fvset{fontsize=auto} \usepackage{geometry} % Include hyperref last. \usepackage{hyperref} % Fix anchor placement for figures with captions. \usepackage{hypcap}% it must be loaded after hyperref. % Set up styles of URL: it should be placed after hyperref. 
\urlstyle{same} \usepackage{sphinxmessages} % option 1 % \usepackage{etoolbox} % \AtBeginEnvironment{tabulary}{\setlength{\extrarowheight}{0pt}} % \renewcommand{\arraystretch}{1.5} % option 2 % \usepackage[table]{xcolor} % \renewcommand{\arraystretch}{2} % Reduces extra vertical space % \setlength{\extrarowheight}{0pt} % Avoids extra height in rows \title{Custom CSV Extension} \date{Apr 20, 2025} \release{0.0.1} \author{Me} \newcommand{\sphinxlogo}{\vbox{}} \renewcommand{\releasename}{Release} \makeindex \begin{document} \ifdefined\shorthandoff \ifnum\catcode`\=\string=\active\shorthandoff{=}\fi \ifnum\catcode`\&quot;=\active\shorthandoff{&quot;}\fi \fi \pagestyle{empty} \sphinxmaketitle \pagestyle{plain} \sphinxtableofcontents \pagestyle{normal} \phantomsection\label{\detokenize{index::doc}} \begin{savenotes}\sphinxattablestart \sphinxthistablewithglobalstyle \centering \sphinxcapstartof{table} \sphinxthecaptionisattop \sphinxcaption{csv\sphinxhyphen{}table directive.}\label{\detokenize{index:id1}} \sphinxaftertopcaption \begin{tabulary}{\linewidth}[t]{TTT} \sphinxtoprule \sphinxstyletheadfamily \sphinxAtStartPar Name &amp;\sphinxstyletheadfamily \sphinxAtStartPar Score &amp;\sphinxstyletheadfamily \sphinxAtStartPar Category \\ \sphinxmidrule \sphinxtableatstartofbodyhook \sphinxAtStartPar Alice &amp; \sphinxAtStartPar 95 &amp; \sphinxAtStartPar High \\ \sphinxhline \sphinxAtStartPar Bob &amp; \sphinxAtStartPar 75 &amp; \sphinxAtStartPar Medium \\ \sphinxhline \sphinxAtStartPar Carol &amp; \sphinxAtStartPar 55 &amp; \sphinxAtStartPar Low \\ \sphinxhline \sphinxAtStartPar Dave &amp; \sphinxAtStartPar 40 &amp; \sphinxAtStartPar Low \\ \sphinxbottomrule \end{tabulary} \sphinxtableafterendhook\par \sphinxattableend\end{savenotes} \begin{savenotes}\sphinxattablestart \sphinxthistablewithglobalstyle \centering \sphinxcapstartof{table} \sphinxthecaptionisattop \sphinxcaption{colored\sphinxhyphen{}csv\sphinxhyphen{}table directive}\label{\detokenize{index:id2}} \sphinxaftertopcaption \begin{tabulary}{\linewidth}[t]{TTT} \sphinxtoprule \sphinxstyletheadfamily \sphinxAtStartPar Name &amp;\sphinxstyletheadfamily \sphinxAtStartPar Score &amp;\sphinxstyletheadfamily \sphinxAtStartPar Category \\ \sphinxmidrule \sphinxtableatstartofbodyhook \sphinxAtStartPar Alice &amp; \sphinxAtStartPar 95 &amp; \sphinxAtStartPar \cellcolor{red} High \\ \sphinxhline \sphinxAtStartPar Bob &amp; \sphinxAtStartPar 75 &amp; \sphinxAtStartPar \cellcolor{orange} Medium \\ \sphinxhline \sphinxAtStartPar Carol &amp; \sphinxAtStartPar 55 &amp; \sphinxAtStartPar \cellcolor{green} Low \\ \sphinxhline \sphinxAtStartPar Dave &amp; \sphinxAtStartPar 40 &amp; \sphinxAtStartPar \cellcolor{green} Low \\ \sphinxbottomrule \end{tabulary} \sphinxtableafterendhook\par \sphinxattableend\end{savenotes} \renewcommand{\indexname}{Index} \printindex \end{document} </code></pre> <p>If I create the table with latex instead of sphinx, I don't see that extra spacing, so sphinx has to be adding the extra spacing somehow.</p> <pre><code>\documentclass{article} \usepackage{tabulary} \usepackage[table]{xcolor} % Needed for \cellcolor \begin{document} \begin{tabulary}{\textwidth}{CCC} \hline Name &amp; Score &amp; Category \\ \hline Alice &amp; 95 &amp; High \cellcolor{red} \\ Bob &amp; 75 &amp; Medium \cellcolor{orange} \\ Carol &amp; 55 &amp; Low \cellcolor{green} \\ Dave &amp; 40 &amp; Low \cellcolor{green} \\ \hline \end{tabulary} \end{document} </code></pre> <p><a href="https://i.sstatic.net/659XC0iB.png" rel="nofollow noreferrer"><img 
src="https://i.sstatic.net/659XC0iB.png" alt="latex sample" /></a></p>
<python><latex><python-sphinx>
2025-04-20 01:00:51
0
788
Simon1
79,582,815
256,828
Locally installed Python wheel throwing ModuleNotFoundError
<p>Recently I've been trying to learn a bit more about how to build python modules and share code amongst my personal projects. This is my first attempt and I'm running into errors and I'm having a tough time finding solutions through normal searches. I'm hoping you guys can point me in the right direction and maybe provide some insight on what's causing the issue under the hood.</p> <p>I've built a super simple &quot;card&quot; project. It has the following layout...</p> <ul> <li>cards <ul> <li>src <ul> <li>_<em>init</em>_.py (Empty file)</li> <li>pycard <ul> <li>_<em>init</em>_.py</li> <li>Cards.py</li> <li>Decks.py</li> </ul> </li> </ul> </li> <li>tests <ul> <li>A collection of working unit tests</li> </ul> </li> <li>build.sh (script to build the wheel)</li> <li>setup.py</li> <li>README.md</li> </ul> </li> </ul> <p>And here's the contents of a few of the files from the project.</p> <p><strong>cards/setup.py</strong></p> <pre><code>import time from setuptools import setup, find_packages setup( name=&quot;pycard&quot;, version=f&quot;0.1.{int(time.time())}&quot;, package_dir={&quot;&quot;: &quot;src&quot;}, packages=find_packages(where=&quot;src&quot;), author=&quot;*****&quot;, python_requires=&quot;&gt;=3.12&quot;, ) </code></pre> <p><strong>cards/src/pycard/_<em>init</em>_.py</strong></p> <pre><code>from src.pycard.Cards import Card, Rank, Suit, RankSuitContext, AcesHighContext, RankSuitCard, CardContext from src.pycard.Decks import Stack, StackContext, FiftyTwoCardDeck </code></pre> <p>Everything seems to work fine within my project. Unit tests pass, etc. To build, I run...</p> <p><code>python setup.py sdist bdist_wheel</code></p> <p>This creates the build, dist, and egg directories with a dist/pycard-<em>version</em>.whl file.</p> <p>Now I create a separate project, create a new Conda environment, and run...</p> <p><code>pip install path/to/pycard-*version*.whl</code></p> <p>This command completes successfully and running <code>pip list</code> shows pycard in my package list.</p> <pre><code>(card-games) me@computer card-games % pip list Package Version ---------- -------------- pip 25.0 pycard 0.1.1745085939 setuptools 75.8.0 wheel 0.45.1 </code></pre> <p>The problem shows up when I try to import and use the package in this new project. Here's a simple example...</p> <pre><code>from pycard.Cards import RankSuitCard, Rank, Suit if __name__ == '__main__': print(RankSuitCard(Rank.ACE, Suit.SPADES)) </code></pre> <p>Executing this gives me the following error...</p> <pre><code>Traceback (most recent call last): File &quot;/Users/me/development/card-games/src/pycardgames/main.py&quot;, line 1, in &lt;module&gt; from pycard.Cards import RankSuitCard, Rank, Suit File &quot;/Users/me/miniconda3/envs/card-games/lib/python3.12/site-packages/pycard/__init__.py&quot;, line 1, in &lt;module&gt; from src.pycard.Cards import Card, Rank, Suit, RankSuitContext, AcesHighContext, RankSuitCard, CardContext ModuleNotFoundError: No module named 'src.pycard' </code></pre> <p>What's interesting is my IDE (PyCharm) seems to understand the import, letting me reference different types, click into them and view source, etc.</p> <p>My gut is it has something to do with how I'm referencing the classes in the original _<em>init</em>_.py (src.pycard.file), but most of the tutorials I found online use that pattern. 
I also haven't found many details about referencing classes in <code>__init__.py</code> and how that impacts importing them once you've built the package.</p> <p>Any help you guys can provide would be greatly appreciated.</p>
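<p>For reference, a sketch of the package-relative variant of that <code>__init__.py</code>, which should resolve the same names both in the source tree and in the installed wheel (assuming the wheel installs the package as a top-level <code>pycard</code>):</p> <pre class="lang-py prettyprint-override"><code># src/pycard/__init__.py -- hypothetical rewrite using relative imports
from .Cards import Card, Rank, Suit, RankSuitContext, AcesHighContext, RankSuitCard, CardContext
from .Decks import Stack, StackContext, FiftyTwoCardDeck
</code></pre>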
<python><python-3.x><setuptools><python-wheel>
2025-04-19 20:35:06
2
3,292
Staros
79,582,632
2,893,712
APScheduler Jitter Causes Multiple Runs
<p>I have a job that I want to run once per day at 9am +- 15min. Here is my code:</p> <pre><code>def Function(): requests.get(&quot;https://EXAMPLE.com&quot;) logger.info(&quot;Ran function&quot;) sched.add_job(Function, 'cron', hour=9, jitter=900) sched.start() try: while True: time.sleep(2) except (KeyboardInterrupt, SystemExit): sched.shutdown() </code></pre> <p>Yesterday the function ran at 8:57AM and 9:08AM. Today the function ran at 8:48AM, 8:55AM, and 9:08AM. Why does it keep running more instances with each passing day?</p> <p>Edit: Here is the log info.</p> <blockquote> <p>2025-04-19 08:48:34,412 - INFO - Running job &quot;Function (trigger: cron[hour='9'], next run at: 2025-04-19 08:55:31 PDT)&quot; (scheduled at 2025-04-19 08:48:34.407465-07:00)</p> </blockquote> <p>Why would it schedule the next run only 7 minutes later?</p>
<python><apscheduler><jitter>
2025-04-19 16:56:17
2
8,806
Bijan
79,582,396
1,306,747
Plotly date display using UNIX timestamp data in browser timezone
<p>My data comes as a Pandas DataFrame with a datetime index, that is generated from UNIX timestamps. I would like to display the data in a Plotly chart in the browser's timezone, but haven't figured out how.</p> <p>What I've tried so far:</p> <ul> <li>Use timestamps (Python <code>float</code>s): Displays timestamps as numeric values (as expected)</li> <li>Use the index without or with a UTC localization: Displays dates/times in UTC</li> <li>First convert the localized DataFrame index to the preferred timezone: Displays dates/times in the preferred timezone, independently from the browser's timezone</li> </ul> <p>I know that there are still open bugs/feature requests regarding timezone handling (central one probably <a href="https://github.com/plotly/plotly.js/issues/3870" rel="nofollow noreferrer">https://github.com/plotly/plotly.js/issues/3870</a>), but given that JavaScript itself also uses a variant of UNIX timestamps to represent dates/times internally, I would guess that just passing the numeric timestamps to Plotly and letting it convert them to the browser's date/time representation in its own timezone would seem quite easy and efficient. However, I haven't found a way of how to do this. Any ideas? Or is what I'm trying to achieve just not possible with Plotly and the timezone needs to be explicitly set on the server side?</p> <p>Here's a working example displaying what I've tried so far, with the last one being what I'm trying to achieve, but without having to manually specify the timezone:</p> <pre class="lang-py prettyprint-override"><code>import calendar import time import numpy as np import pandas as pd import plotly.graph_objects as go def ptime(ts: str) -&gt; float: return calendar.timegm(time.strptime(ts, '%Y-%m-%dT%H:%M:%SZ')) columns = [ 'timestamp', 'value', ] data = np.array([ [ptime('2024-01-01T00:00:00Z'), 0.0], [ptime('2024-01-01T01:00:00Z'), 1.0], [ptime('2024-01-01T02:00:00Z'), 2.0], [ptime('2024-01-01T03:00:00Z'), 3.0], [ptime('2024-01-01T04:00:00Z'), 4.0], [ptime('2024-01-01T05:00:00Z'), 5.0], ]) df = pd.DataFrame(data, columns=columns, index=pd.to_datetime(data[:, 0], unit='s'), dtype=float) fig = go.Figure() fig.add_trace(go.Scatter( x=df['timestamp'], y=df['value'], )) fig.update_layout( title='UNIX timestamps', margin=dict(l=10, r=10, t=40, b=10), ) fig.show(renderer='browser') fig = go.Figure() fig.add_trace(go.Scatter( x=df.index, y=df['value'], )) fig.update_layout( title='Naive Pandas DateTimeIndex', margin=dict(l=10, r=10, t=40, b=10), ) fig.show(renderer='browser') fig = go.Figure() fig.add_trace(go.Scatter( x=df.index.tz_localize('UTC'), y=df['value'], )) fig.update_layout( title='Localized (UTC) Pandas DateTimeIndex', margin=dict(l=10, r=10, t=40, b=10), ) fig.show(renderer='browser') fig = go.Figure() fig.add_trace(go.Scatter( x=df.index.tz_localize('UTC').tz_convert('US/Pacific'), y=df['value'], )) fig.update_layout( title='Localized (UTC) Pandas DateTimeIndex with converted timezone', margin=dict(l=10, r=10, t=40, b=10), ) fig.show(renderer='browser') </code></pre>
<python><plotly><timezone><unix-timestamp>
2025-04-19 12:37:08
1
989
Philipp Burch
79,582,013
395,857
Can't run a Python file in PyCharm: why?
<p>I can't run a Python file in PyCharm: why?</p> <p>The run icon is greyed out:</p> <p><img src="https://i.sstatic.net/CboOla1r.png" alt="The run button is greyed out." /></p> <p>I went to Run → Edit Configurations… to make sure the correct Python interpreter is selected. Debug, coverage, and profile are fine but I don't see run:</p> <p><img src="https://i.sstatic.net/JjzUjT2C.png" alt="Run configuration not present." /></p> <p>Strangely, only run won't run. What's the issue? I use PyCharm 2024.2.3 Pro.</p>
<python><pycharm>
2025-04-19 04:14:30
1
84,585
Franck Dernoncourt
79,582,003
1,065,197
How can I check having functions with specific return type but without return value with ruff?
<p>Ruff has a nice rule <a href="https://docs.astral.sh/ruff/rules/missing-return-type-undocumented-public-function/" rel="nofollow noreferrer">ANN201</a> that checks the return type of a function. Example:</p> <pre><code>def add(a, b): # wrong! return a + b def add(a: int, b: int) -&gt; int: # right return a + b </code></pre> <p>This also applies to functions that don't return anything, so I have to specify <code>None</code> as the return type. Example:</p> <pre><code>def log(s: str): # wrong! print(s) def log(s: str) -&gt; None: # right print(s) </code></pre> <p>I'd like to set up a check that verifies that if a function has a return type other than <code>None</code>, then it must actually return a value. Example:</p> <pre><code>def add(a: int, b: int) -&gt; int: a + b # no `return` but still right? </code></pre> <p>I understand that Python doesn't enforce this, but I'd like to have this check done with Ruff, because similar pieces of code were causing issues in my code.</p> <p>Is this possible?</p>
<python><ruff>
2025-04-19 03:55:19
1
85,878
Luiggi Mendoza
79,581,784
10,242,281
Python editing file in loop, how to keep changes after each (xml)
<p>I need to clean a few XML files of unneeded elements. I came up with this wildcard replace-with-nothing to get rid of the unneeded <code>&lt;EmbeddedImage&gt;</code> elements, but I'm missing how to keep each change across iterations of the loop; it looks like the result gets overwritten each time. I thought Python would keep it, but that's not the case. Do I need some additional logic to end up with only the single <code>Brand_1</code> in my final tree?</p> <p>I'd appreciate it if you could point out how I can fix this.</p> <pre><code>import re from bs4 import BeautifulSoup as bs filepath = &quot;C://ddd/test.xml&quot; xml = ''' &lt;Images&gt; &lt;EmbeddedImage Name=&quot;Brand_1&quot;&gt; &lt;ImageData&gt;/9j/4AAQSkZJB//2Q==&lt;/ImageData&gt; &lt;/EmbeddedImage&gt; &lt;EmbeddedImage Name=&quot;Brand__2XX&quot;&gt; &lt;ImageData&gt;/9j/4AAQSkZJB//2Q==JB//2&lt;/ImageData&gt; &lt;/EmbeddedImage&gt; &lt;EmbeddedImage Name=&quot;Brand___3XX&quot;&gt; &lt;ImageData&gt;/9j/4AAQSkZAAQSkkB//2Q=AAQSk=&lt;/ImageData&gt; &lt;/EmbeddedImage&gt; &lt;Images&gt;''' Bs_data = bs(xml,'xml') #contant = xml ?? content = xml for item in Bs_data.select ('EmbeddedImage'): #print('Name= ',item.get('Name')) if item.get('Name') != 'Brand_1': wild = '&lt;EmbeddedImage Name=&quot;'+item.get('Name')+'&quot;&gt;.*&lt;/EmbeddedImage&gt;' ##&lt;@&gt;&lt;&lt; first &lt;/ How??? content = re.sub(wild, '', content , flags=re.DOTALL) print ('------------',wild,'::::',content) print(content) # only Brand_1 </code></pre>
<python><xml>
2025-04-18 21:54:39
4
504
Mich28
79,581,767
14,278,409
Problems creating a generator factory in python
<p>I'd like to create a generator factory, i.e. a generator that yields generators, in python using a &quot;generator expression&quot; (generator equivalent of list comprehension). Here's an example:</p> <pre><code>import itertools as it gen_factory=((pow(b,a) for a in it.count(1)) for b in it.count(10,10)) </code></pre> <p>In my mind this should give the following output:</p> <pre><code>((10,100,1000,...), (20,400,8000,...), (30,900,27000,...), ...) </code></pre> <p>However, the following shows that the internal generators are getting reset:</p> <pre><code>g0 = next(gen_factory) next(g0) # 10 next(g0) # 100 g1 = next(gen_factory) next(g1) # 20 next(g0) # 8000 </code></pre> <p>So the result of the last statement is equal to <code>pow(20,3)</code> whereas I expected it to be <code>pow(10,3)</code>. It seems that calling <code>next(gen_factory)</code> alters the <code>b</code> value in <code>g0</code> (but not the internal state <code>a</code>). Ideally, previous generators shouldn't change as we split off new generators from the generator factory.</p> <p>Interestingly, I can get correct behavior by converting these to lists, here's a finite example:</p> <pre><code>finite_gen_factory = ((pow(b,a) for a in (1,2,3)) for b in (10,20,30)) [list(x) for x in finite_gen_factory] </code></pre> <p>which gives <code>[[10, 100, 1000], [20, 400, 8000], [30, 900, 27000]]</code>, but trying to maintain separate generators fails as before:</p> <pre><code>finite_gen_factory = ((pow(b,a) for a in (1,2,3)) for b in (10,20,30)) g0 = next(finite_gen_factory) g1 = next(finite_gen_factory) next(g0) # 20, should be 10. </code></pre> <p>The closest explanation, I think, is in <a href="https://stackoverflow.com/a/35964024/14278409">this answer</a>, but I'm not sure what the correct way of resolving my problem is. I thought of copying (cloning) the internal generators, but I'm not sure this is possible. Also <code>it.tee</code> probably doesn't work here. A workaround might be defining the inner generator as a class, but I really wanted a compact generator expression for this. Also, some stackoverflow answers recommended using <code>functools.partial</code> for this kind of thing but I can't see how I could use that here.</p>
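<p>A small sketch of the binding issue: moving the inner expression into a helper function makes each inner generator capture its own <code>b</code>, at the cost of not being a single expression (names hypothetical):</p> <pre class="lang-py prettyprint-override"><code>import itertools as it

def powers(b):
    # each call gets its own `b`, so advancing the outer generator
    # no longer affects inner generators that were already created
    return (pow(b, a) for a in it.count(1))

gen_factory = (powers(b) for b in it.count(10, 10))

g0 = next(gen_factory)
next(g0)  # 10
next(g0)  # 100
g1 = next(gen_factory)
next(g1)  # 20
next(g0)  # 1000
</code></pre>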
<python><generator>
2025-04-18 21:37:40
2
730
njp
79,581,727
11,515,528
Smartsheets API get_cell_history value only returned for first instance
<p>I am finding that when using <code>smartsheet_client.Cells.get_cell_history()</code> I am only getting the first instance of 'value' but the other varaibles are fine e.g. 'modifiedAt'.</p> <pre><code>import smartsheet as ss smartsheet_client = ss.Smartsheet(SMARTSHEET_ACCESS_TOKEN) SMARTSHEET_ACCESS_TOKEN = 'add yours here' sheet = smartsheet_client.Sheets.get_sheet(sheet_id=paperbased) for row in sheet.rows: cell_history = smartsheet_client.Cells.get_cell_history(paperbased, row_id=row.id, column_id=current_task_col, include_all=True).to_dict() for cell in cell_history['data']: print(f&quot;row_id: {row.id}, modified at: {cell['modifiedAt']}, value: {cell['displayValue']}&quot;) </code></pre> <p>output</p> <pre><code>row_id: 4735458228627332, modified at: 2021-01-25T10:36:28+00:00Z, value: 11k Production process complete row_id: 4735458228627332, modified at: 2021-01-25T10:35:32+00:00Z, value: 0 row_id: 4735458228627332, modified at: 2021-01-25T10:31:24+00:00Z, value: 0 row_id: 4735458228627332, modified at: 2020-12-02T10:44:46+00:00Z, value: 0 row_id: 4735458228627332, modified at: 2020-12-02T10:37:56+00:00Z, value: 0 row_id: 4735458228627332, modified at: 2020-12-02T09:57:02+00:00Z, value: 0 row_id: 4735458228627332, modified at: 2020-12-02T09:56:34+00:00Z, value: 0 row_id: 4735458228627332, modified at: 2020-11-01T13:20:30+00:00Z, value: 0 row_id: 4735458228627332, modified at: 2020-08-05T15:01:10+00:00Z, value: 0 </code></pre> <p>You can see from below there is text in the 'value' field for each row. What am I doing wrong here?</p> <p><a href="https://i.sstatic.net/rVcOa6kZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rVcOa6kZ.png" alt="enter image description here" /></a></p> <p><strong>Update</strong></p> <p>So this issue is on going to me. 
Striping that code back for another example.</p> <pre><code>response = (ss_client.Cells.get_cell_history(sheet_id=paperbased, row_id=4735458228627332, column_id=2389945628288900, include_all=True)) for hist in response.data: print(hist) </code></pre> <p><a href="https://i.sstatic.net/EI53wMZP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EI53wMZP.png" alt="enter image description here" /></a></p> <pre><code>smartsheet-python-sdk version 3.0.2 Python 3.11.8 </code></pre> <p><strong>UPDATE 2</strong></p> <p>I have tried updating to sdk=v 3.0.5 and using this code from ralhpq1:-</p> <pre><code>sheet = ss_client.Sheets.get_sheet(paperbased) for count, row in enumerate(sheet.rows): cell_history = ss_client.Cells.get_cell_history(sheet_id=paperbased, row_id=row.id, column_id=current_stage_col, include_all=True).to_dict() if cell_history.get('data') is None: print(f'no cell history in row {count}') else: for cell in cell_history['data']: print(f&quot;row_number: {count}, modified at: {cell['modifiedAt']}, value: {cell.get('value', None)}&quot;) </code></pre> <p>output</p> <pre><code>row_number: 0, modified at: 2021-01-25T10:36:28+00:00Z, value: 11k row_number: 0, modified at: 2021-01-25T10:35:32+00:00Z, value: 0.0 row_number: 0, modified at: 2021-01-25T10:31:24+00:00Z, value: 0.0 row_number: 0, modified at: 2020-12-02T10:44:46+00:00Z, value: 0.0 row_number: 0, modified at: 2020-12-02T10:37:56+00:00Z, value: 0.0 row_number: 0, modified at: 2020-12-02T09:57:02+00:00Z, value: 0.0 row_number: 0, modified at: 2020-12-02T09:56:34+00:00Z, value: 0.0 row_number: 0, modified at: 2020-11-01T13:20:30+00:00Z, value: 0.0 row_number: 0, modified at: 2020-09-07T16:13:28+00:00Z, value: 0.0 row_number: 1, modified at: 2021-01-25T10:38:34+00:00Z, value: 11k row_number: 1, modified at: 2021-01-25T10:33:36+00:00Z, value: 0.0 row_number: 1, modified at: 2021-01-25T10:31:24+00:00Z, value: 0.0 row_number: 1, modified at: 2020-12-02T10:50:12+00:00Z, value: 0.0 row_number: 1, modified at: 2020-12-02T10:45:56+00:00Z, value: 0.0 row_number: 1, modified at: 2020-11-01T13:20:30+00:00Z, value: 0.0 row_number: 1, modified at: 2020-09-07T16:13:28+00:00Z, value: 0.0 row_number: 2, modified at: 2020-12-02T09:34:28+00:00Z, value: 10a row_number: 2, modified at: 2020-11-01T13:43:24+00:00Z, value: 0.0 row_number: 3, modified at: 2021-11-18T11:42:18+00:00Z, value: 11k row_number: 3, modified at: 2021-10-22T13:44:12+00:00Z, value: 0.0 row_number: 3, modified at: 2021-10-04T08:22:19+00:00Z, value: 0.0 row_number: 3, modified at: 2020-11-01T13:43:24+00:00Z, value: 0.0 row_number: 4, modified at: 2020-12-02T09:34:28+00:00Z, value: 10a </code></pre>
<python><python-3.x><smartsheet-api>
2025-04-18 21:07:26
1
1,865
Cam
79,581,618
5,781,499
PyMuPDF - Extract table contents
<p>I try to extract the table text of a PDF: <a href="https://i.sstatic.net/65R5FitB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/65R5FitB.png" alt="enter image description here" /></a></p> <p>With the following code code i get:</p> <pre><code>page 0 of page-1-ocr.pdf Tables rowsasf 49 texysdft [['', '', 'Staatlic', 'he Fische', 'rprüfung', 'in Bayern - Prü', 'fungsfrage', 'n 2023/2024', ''], ['', '', '', '', '', '', '', '', ''], ['Sortierung', 'Richtige', 'Frage', '', 'Antwort A', '', 'Antwort B', '', 'Antwort C'], ['', '', '', '', '', '', '', '', ''], ['', 'Antwort', '', '', '', '', '', '', ''], ['', '', '', '', '', '', '', '', ''], ['1.001', 'C', 'Welche ist die artenreic', 'hste', 'die Gruppe de', 'r Lachsartigen', 'die Gruppe der', 'Barschartigen', 'die Gruppe de'], ['', '', '', '', '', '', '', '', ''], ['', '', 'Gruppe der einheimisch', 'en', '(Salmoniden)', '', '(Perciden)', '', '(Cypriniden)'], ['', '', '', '', '', '', '', '', ''], ['', '', 'Süßwasserfische?', '', '', '', '', '', ''], ['', '', '', '', '', '', '', '', ''], ['1.002', 'A', 'Welche Aussage ist rich', 'tig?', 'Der Sterlet ge', 'hört zu den', 'Der Steinbeißer', 'gehört zu den', 'Der Perlfisch'], ['', '', '', '', '', '', '', '', ''], ['', '', '', '', 'störartigen Fi', 'schen.', 'barschartigen F', 'ischen.', 'dorschartigen'], ['', '', '', '', '', '', '', '', ''], ['1.003', 'B', 'Zu welcher Gruppe der', 'Wirbeltiere', 'zu den Rundm', 'äulern', 'zu den Knochen', 'fischen', 'zu den Knorpe'], ['', '', '', '', '', '', '', '', ''], ['', '', 'gehören die meisten', '', '', '', '', '', ''], ['', '', '', '', '', '', '', '', ''], ['', '', 'einheimischen', '', '', '', '', '', ''], ['', '', '', '', '', '', '', '', ''], ['', '', 'Süßwasserfischarten?', '', '', '', '', '', ''], ['', '', '', '', '', '', '', '', ''], ['1.004', 'C', 'Welche Fischarten gehö', 'ren zu den', 'Kaulbarsch un', 'd Schrätzer', 'Renke (Felchen)', 'und Äsche', 'Strömer und'], ['', '', '', '', '', '', '', '', ''], ['', '', 'Karpfenartigen (Cyprini', 'den)?', '', '', '', '', ''], ['', '', '', '', '', '', '', '', ''], ['1.005', 'A', 'Zu welcher Tiergruppe', 'gehören die', 'zu den Rundm', 'äulern', 'zu den Knorpelf', 'ischen', 'zu den Knoche'], ['', '', '', '', '', '', '', '', ''], ['', '', 'Neunaugen?', '', '', '', '', '', ''], ['', '', '', '', '', '', '', '', ''], ['1.006', 'A', 'Welche Fischarten gehö', 'ren zu den', 'Schrätzer und', 'Streber', 'Renke (Felchen)', '', 'Elritze und Mo'], ['', '', '', '', '', '', '', '', ''], ['', '', 'Barschartigen (Perciden', ')?', '', '', '', '', ''], ['', '', '', '', '', '', '', '', ''], ['1.007', 'B', 'Die Goldorfe ist eine Va', 'riante von', 'Zährte (Seerü', 'ßling)', 'Nerfling (Aland)', '', 'Rotfeder'], ['', '', '', '', '', '', '', '', ''], ['1.008', 'B', 'Welches Merkmal wird', 'zur', 'die äußere Ge', 'stalt, insbesondere', 'die Anzahl und', 'Stellung der', 'der Laichauss'], ['', '', '', '', '', '', '', '', ''], ['', '', 'Artbestimmung bei den', '', 'die Form der', 'Kiemendeckel', 'Schlundzähne', '', ''], ['', '', '', '', '', '', '', '', ''], ['', '', 'karpfenartigen Fischen', '', '', '', '', '', ''], ['', '', '', '', '', '', '', '', ''], ['', '', '(Cypriniden) herangezo', 'gen?', '', '', '', '', ''], ['', '', '', '', '', '', '', '', ''], ['1.009', 'C', 'Wo sind Streber und Zin', 'gel', 'im Flussgebie', 't des Rheins', 'im Flussgebiet v', 'on Elbe und Rhein', 'im Flussgebiet'], ['', '', '', '', '', '', '', '', ''], ['', '', 'ursprünglich heimisch?', '', '', '', '', '', '']] </code></pre> <p>How can i improve 
the text/cols/rows detection?</p> <pre class="lang-py prettyprint-override"><code>import fitz doc = fitz.open('page-1-ocr.pdf') for page in doc: print(page) tabs = page.find_tables(strategy=&quot;text&quot;, text_tolerance=1, intersection_tolerance=50) print(&quot;Tables rowsasf&quot;, len(tabs.tables[0].rows)) print(&quot;texysdft&quot;, tabs.tables[0].extract()) #for row in tabs.tables[0].rows: # print(row.cells) </code></pre> <p>I tried to change the <a href="https://pymupdf.readthedocs.io/en/latest/page.html#Page.find_tables" rel="nofollow noreferrer"><code>find_tables()</code></a> parameters, but they do nothing.</p> <p>Can someone guide my how to get more accurate results?</p> <p>EDIT: With <code>min_words_horizontal=17, min_words_vertical=5</code> it looks a bit better:</p> <pre><code>page 0 of page-1-ocr.pdf Table rows 11 Table cols 6 texysdft [['1.001', 'C', 'Welche ist die artenreichste', 'die Gruppe der Lachsartigen', 'die Gruppe der Barschartigen', 'die Gruppe de'], ['', '', 'Gruppe der einheimischen\nSüßwasserfische?', '(Salmoniden)', '(Perciden)', '(Cypriniden)'], ['1.002', 'A', 'Welche Aussage ist richtig?', 'Der Sterlet gehört zu den', 'Der Steinbeißer gehört zu den', 'Der Perlfisch'], ['1.003', 'B', 'Zu welcher Gruppe der Wirbeltiere\ngehören die meisten\neinheimischen\nSüßwasserfischarten?', 'störartigen Fischen.\nzu den Rundmäulern', 'barschartigen Fischen.\nzu den Knochenfischen', 'dorschartigen\nzu den Knorpe'], ['1.004', 'C', 'Welche Fischarten gehören zu den', 'Kaulbarsch und Schrätzer', 'Renke (Felchen) und Äsche', 'Strömer und'], ['', '', 'Karpfenartigen (Cypriniden)?', '', '', ''], ['1.005', 'A', 'Zu welcher Tiergruppe gehören die', 'zu den Rundmäulern', 'zu den Knorpelfischen', 'zu den Knoche'], ['1.006\n1.007', 'A\nB', 'Neunaugen?\nWelche Fischarten gehören zu den\nBarschartigen (Perciden)?\nDie Goldorfe ist eine Variante von', 'Schrätzer und Streber\nZährte (Seerüßling)', 'Renke (Felchen)\nNerfling (Aland)', 'Elritze und Mo\nRotfeder'], ['1.008', 'B', 'Welches Merkmal wird zur', 'die äußere Gestalt, insbesondere', 'die Anzahl und Stellung der', 'der Laichauss'], ['', '', 'Artbestimmung bei den\nkarpfenartigen Fischen\n(Cypriniden) herangezogen?', 'die Form der Kiemendeckel', 'Schlundzähne', ''], ['1.009', 'C', 'Wo sind Streber und Zingel', 'im Flussgebiet des Rheins', 'im Flussgebiet von Elbe und Rhein', 'im Flussgebiet']] </code></pre>
<python><pdf><ocr><text-extraction><pymupdf>
2025-04-18 19:39:43
0
4,049
Marc
79,581,617
913,098
SitemapLoader(sitemap_url).load() hangs
<pre><code>from langchain_community.document_loaders import SitemapLoader def crawl(self): print(&quot;Starting crawler...&quot;) sitemap_url = &quot;https://gringo.co.il/sitemap.xml&quot; print(f&quot;[CRAWLER] Loading sitemap: {sitemap_url}&quot;) try: loader = SitemapLoader(sitemap_url) print(&quot;[CRAWLER] SitemapLoader initialized.&quot;) print(&quot;[CRAWLER] Calling loader.load()...&quot;) documents = loader.load() print(f&quot;[CRAWLER] Loaded {len(documents)} documents via LangChain.&quot;) except Exception as e: print(f&quot;[WARN] SitemapLoader failed: {e}&quot;) </code></pre> <p>This runs and prints</p> <pre><code>print(&quot;[CRAWLER] Calling loader.load()...&quot;) </code></pre> <p>and then hangs there.</p> <hr /> <p>I have no idea why it wouldn't just finish parsing; the sitemap has ~1500 pages.</p>
<python><web-crawler><langchain><rag>
2025-04-18 19:38:13
2
28,697
Gulzar
79,581,533
620,679
Find pairs of keys for rows that have at least one property in common
<p>I'm using polars with a data frame whose schema looks like this:</p> <pre><code>Schema({'CustomerID': String, 'StockCode': String, 'Total': Int64}) </code></pre> <p>interpreted as &quot;Customer <em>CustomerID</em> bought <em>Total</em> of product <em>StockCode</em>.&quot; I'm looking for an efficient way to generate all unique pairs of customer IDs such that the two customers purchased at least one of the same product. So, given:</p> <pre class="lang-py prettyprint-override"><code>import polars as pl df = pl.from_repr(&quot;&quot;&quot; ┌─────────────┬────────────┬───────┐ │ CustomerID ┆ StockCode ┆ Total │ │ --- ┆ --- ┆ --- │ │ str ┆ str ┆ f64 │ ╞═════════════╪════════════╪═══════╡ │ A ┆ 123 ┆ 45.78 │ │ A ┆ 140 ┆ 10.26 │ │ B ┆ 125 ┆ 99.62 │ │ B ┆ 128 ┆ 23.65 │ │ B ┆ 140 ┆ 92.95 │ │ C ┆ 123 ┆ 45.78 │ │ D ┆ 145 ┆ 7.58 │ └─────────────┴────────────┴───────┘ &quot;&quot;&quot;) </code></pre> <p>the algorithm should produce:</p> <pre class="lang-py prettyprint-override"><code>[ ( 'A', 'B' ), # Because of product 140 ( 'A', 'C' ) # Because of product 123 ] </code></pre> <p>I'll then use this to compute a pairwise similarity measure (e.g. dot product or cosine) and then, as efficiently as possible, produce a data frame that looks like:</p> <pre><code>Schema({'ID1': String, 'ID2': String, 'Similarity': Float64}) </code></pre> <p>likely with a threshold for minimum similarity. Doing something like:</p> <pre class="lang-py prettyprint-override"><code> from itertools import combinations row_keys = pl.Series( df.select(category_col) .unique( ).drop_nulls( ).collect() ).to_list() pairs = combinations(row_keys, 2) </code></pre> <p>generates all possible pairs, but is hugely inefficient.</p>
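<p>For scale, a rough sketch of a join-based formulation of the same pairing (assuming polars' default <code>_right</code> suffix for the joined columns), which avoids materialising every possible pair:</p> <pre class="lang-py prettyprint-override"><code># Self-join on StockCode so only customers sharing a product get paired,
# then keep each unordered pair once.
pairs = (
    df.join(df, on='StockCode')
      .filter(pl.col('CustomerID') &lt; pl.col('CustomerID_right'))
      .select(['CustomerID', 'CustomerID_right'])
      .unique()
)
</code></pre>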
<python><join><python-polars>
2025-04-18 18:20:34
3
4,041
Scott Deerwester
79,581,373
2,110,944
How to create custom component in Langflow that can stream contant into ChatOutput
<p>In Langflow, the built-in OpenAI component can stream its output into Chat Output.</p> <p>When calling this flow via the HTTP API with streaming, we receive chunks of data. These chunks can be detected by the <code>&quot;event&quot;: &quot;token&quot;</code> param in the body, and the final result by <code>&quot;event&quot;: &quot;end&quot;</code>.</p> <p>I want to create a custom component with the same behaviour, but I can't understand how to return chunks of the message with the <code>&quot;event&quot;: &quot;token&quot;</code> type in the response. My component only produces <code>&quot;event&quot;: &quot;end&quot;</code>.</p> <p>Here is the code of my component:</p> <pre class="lang-py prettyprint-override"><code># from langflow.field_typing import Data from langflow.custom import Component from langflow.io import MessageTextInput, Output from langflow.schema import Message class CustomComponent(Component): display_name = &quot;Custom Component&quot; description = &quot;Use as a template to create your own component.&quot; documentation: str = &quot;https://docs.langflow.org/components-custom-components&quot; icon = &quot;code&quot; name = &quot;CustomComponent&quot; inputs = [ MessageTextInput( name=&quot;input_value&quot;, display_name=&quot;Input Value&quot;, info=&quot;This is a custom component Input&quot;, value=&quot;Hello, World!&quot;, tool_mode=True, ), ] outputs = [ Output(display_name=&quot;Output&quot;, name=&quot;output&quot;, method=&quot;build_output&quot;), ] @staticmethod def split_into_chunks(message, chunk_size): return [message[i:i + chunk_size] for i in range(0, len(message), chunk_size)] def build_output(self) -&gt; Message: message = Message(text=self.input_value) self.status = message chunks = self.split_into_chunks(message.text, 2) for chunk in chunks: yield Message(text=chunk) </code></pre>
<python><langflow>
2025-04-18 16:00:33
0
487
theSemenov
79,581,312
8,869,570
How to sum values across dicts?
<p>I'm using Python 3.7. I have a list of dicts, and I want to create a combined dict that sums the values (grouped by key) across all the dicts in the initial list.</p> <p>So e.g.,</p> <pre><code>d1 = {1: 1.1, 2: 2.2} d2 = {1: 3.3, 4: 5.5} </code></pre> <p>The result should be:</p> <pre><code>final_dict = {1: 4.4, 2: 2.2, 4: 5.5} </code></pre> <p>I tried to do <code>dict([*d1.items(), *d2.items()])</code> but this overwrites duplicate keys instead of summing them.</p> <p>Currently, I just do</p> <pre><code>from collections import defaultdict def _sum_dicts_by_key(dicts): result = defaultdict(float) for d in dicts: for k, v in d.items(): result[k] += v return result </code></pre> <p>but I feel there should be a pythonic way to do it without explicitly looping?</p>
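<p>A compact variant that seems to fit the &quot;no explicit loop&quot; wish uses <code>collections.Counter</code>, with the caveat that <code>Counter</code> addition drops keys whose summed value is not positive (fine for the positive floats in the example):</p> <pre class="lang-py prettyprint-override"><code>from collections import Counter

dicts = [{1: 1.1, 2: 2.2}, {1: 3.3, 4: 5.5}]
final_dict = dict(sum(map(Counter, dicts), Counter()))
# sums shared keys: {1: 4.4, 2: 2.2, 4: 5.5} (up to float rounding)
</code></pre>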
<python><dictionary><python-3.7>
2025-04-18 15:20:41
2
2,328
24n8
79,581,301
299,754
How to find all references of a pytest fixture in vscode?
<p>I'm refactoring a suite of tests and pytest fixtures, and need to <strong>locate all usages of a fixture within the test suite</strong>.</p> <p>The 'Find all references' function of VSCode is what I'd normally use when refactoring a method, to find and review all places affected, but it doesn't work for fixtures. The plain text search is also not useful as some of these fixtures have very common names that pop up all over the codebase (e.g. <code>user</code>...).</p> <p>Is there an alternative or plugin that works for this?</p>
<python><visual-studio-code><testing><pytest><fixtures>
2025-04-18 15:14:56
1
6,928
Jules Olléon
79,581,277
25,874,132
What's causing this weird jump in these phase graphs of Fourier coefficients?
<p>This is from an assignment in signal processing in Python. It's all done in Python (through Google Colab) with the libraries cmath, numpy, and matplotlib alone.</p> <p>Signal a is a window signal, and I calculate its Fourier coefficients as ak. In the same vein, bk is the Fourier coefficients of signal b.</p> <p>(the following code is patched together since it's in a notebook style):</p> <pre class="lang-py prettyprint-override"><code> import numpy as np import cmath import matplotlib.pyplot as plt D=1000 j = complex(0, 1) pi = np.pi N = 2 * D + 1 # initializing signal a[n] a=np.zeros(2*D+1) for i in range(-99,100): a[i+D] = 1 # function to do DFST (either inverse or regular) def fourier_series_transform(data, D, inverse=False): j = complex(0, 1) pi = np.pi N = 2 * D + 1 # Allocate result array result = np.zeros(N, dtype=complex) if inverse: # Inverse transform: reconstruct time-domain signal from bk for n in range(-D, D + 1): for k in range(-D, D + 1): result[n + D] += data[k + D] * cmath.exp(j * 2 * pi * k * n / N) else: # Forward transform: compute bk from b[n] for k in range(-D, D + 1): for n in range(-D, D + 1): result[k + D] += (1 / N) * data[n + D] * cmath.exp(-j * 2 * pi * k * n / N) return result # getting ak from a ak = fourier_series_transform(a, D) </code></pre> <p>here's pics of a[n] and ak (which is the Dirichlet kernel):</p> <p><a href="https://i.sstatic.net/JwgrDa2C.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JwgrDa2C.png" alt="an" /></a></p> <p><a href="https://i.sstatic.net/eA8EnUqv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eA8EnUqv.png" alt="ak" /></a></p> <pre class="lang-py prettyprint-override"><code># initialization of bk bk = np.zeros(N, dtype=complex) # defining bk - n=100 - which means the signal b[n] is moved 100 units to the right for k in range(-D, D + 1): bk[k + D] = ak[k + D] * cmath.exp(-j * 2 * pi * k * 100 / N) # getting the original signal b[n] b = fourier_series_transform(bk, D, inverse=True) </code></pre> <p>bk is given to me as bk=ak<em>e^(-jk</em>2*pi/N * 100), here are pics of bk, b[n]:</p> <p><a href="https://i.sstatic.net/7AvkZI1e.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7AvkZI1e.png" alt="enter image description here" /></a></p> <p><a href="https://i.sstatic.net/ABEDdF8J.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ABEDdF8J.png" alt="enter image description here" /></a></p> <pre class="lang-py prettyprint-override"><code>### here starts the problem: #plot of all the coefficients plt.plot(np.arange(-D, D+1), ak.real, label=&quot;real part $a_k$&quot;, color='blue') plt.plot(np.arange(-D, D+1), ak.imag, label=&quot;imaginary part $a_k$&quot;, color='turquoise') plt.plot(np.arange(-D, D+1), bk.real, label=&quot;real part $b_k$&quot;, color='orange') plt.plot(np.arange(-D, D+1), bk.imag, label=&quot;imaginary part $b_k$&quot;, color='red') plt.grid(True) plt.title(&quot;plot of freq of $a_k$ and $b_k$&quot;) plt.xlim(-250, 250) #plt.ylim(-0.0006, 0.0004) plt.legend() plt.show() #phase plt.plot(np.arange(-D, D+1), np.arctan2(ak.imag, ak.real), label=&quot;phase $a_k$&quot;, color='turquoise') plt.plot(np.arange(-D, D+1), np.arctan2(bk.imag, bk.real), label=&quot;phase $b_k$&quot;, color='orange') plt.grid(True) plt.title(&quot;plot of phases of $a_k$ and $b_k$&quot;) plt.xlim(-250, 250) plt.yticks( [-pi, -pi/2, 0, pi/2, pi], [r&quot;$-\pi$&quot;, r&quot;$-\frac{\pi}{2}$&quot;, &quot;0&quot;, r&quot;$\frac{\pi}{2}$&quot;, r&quot;$\pi$&quot;] ) plt.legend() plt.show() 
</code></pre> <p>Here are the pics I get of the real and imaginary parts of ak, bk and later their phase (the imaginary part of ak is 0):</p> <p><a href="https://i.sstatic.net/fvOQII6t.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fvOQII6t.png" alt="enter image description here" /></a></p> <p><a href="https://i.sstatic.net/1mw7Gs3L.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1mw7Gs3L.png" alt="enter image description here" /></a></p> <p>As you can see, before -190 and after 190, whenever the unit digit is 1 (i.e., 841, -561, -191, ...) there is a jump which, according to my math, isn't supposed to happen. This problem persists for the rest of the sampling range (meaning [-1000, -190] and [190, 1000]).</p> <p>I would love to understand why it's happening. I don't believe it's a problem with my math, but I might be wrong about that.</p> <h2>Edit</h2> <p>Here's a zoomed pic of the jump in phase of bk:</p> <p><a href="https://i.sstatic.net/82DEELMT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/82DEELMT.png" alt="enter image description here" /></a></p>
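<p>A small diagnostic sketch (not a fix): comparing the wrapped phase with <code>np.unwrap</code> may help separate genuine discontinuities from the jumps that arctan2's wrapping into (-pi, pi] introduces:</p> <pre class="lang-py prettyprint-override"><code># compare raw (wrapped) and unwrapped phase of bk
phase_bk = np.angle(bk)  # same as np.arctan2(bk.imag, bk.real)
plt.plot(np.arange(-D, D + 1), phase_bk, label='wrapped phase $b_k$')
plt.plot(np.arange(-D, D + 1), np.unwrap(phase_bk), label='unwrapped phase $b_k$')
plt.legend()
plt.show()
</code></pre>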
<python><numpy><matplotlib><signal-processing>
2025-04-18 14:59:04
0
314
Nate3384
79,581,046
1,026
lock specific dependency versions with `uvx` (uv tool run)
<p>I can request a specific version of tool and python when using <a href="https://docs.astral.sh/uv/concepts/tools/" rel="nofollow noreferrer"><code>uv</code>'s tool functionality</a>, e.g.:</p> <pre><code>uvx --python=3.10 --from=&quot;jupyterlab@4.0.10&quot; jupyter-lab </code></pre> <p>But I don't see <strong>a way to pin dependencies to make sure subsequent invocations will not silently auto-update</strong> (possibly breaking) something. The docs say:</p> <blockquote> <p>uvx will use the latest available version of the requested tool on the first invocation. After that, uvx will use the cached version of the tool unless a different version is requested, the cache is pruned, or the cache is refreshed.</p> </blockquote> <p>...and apparently the cache can get refreshed without me asking for it explicitly, as I caught the above command suddenly reinstalling the tool instead of reusing an existing environment.</p> <p>For uv <em>projects</em> and <em>scripts</em> there's a <code>lock</code> command and related <code>--frozen</code>/<code>--locked</code> parameters, but nothing seems to be available for tools?</p> <p>I found some tangentially related issues (<a href="https://github.com/astral-sh/uv/issues/5815" rel="nofollow noreferrer">#5815</a>, <a href="https://github.com/astral-sh/uv/issues/10082" rel="nofollow noreferrer">#10082</a>), but nothing explicitly about this, so I feel I'm missing something obvious.</p>
<python><uv>
2025-04-18 12:39:24
1
32,485
Nickolay
79,580,670
16,383,578
Fast calculation of Nth generalized Fibonacci number of order K?
<p>How can I calculate Nth term in Fibonacci sequence of order K efficiently?</p> <p>For example, Tribonacci is Fibonacci order 3, Tetranacci is Fibonacci order 4, Pentanacci is Fibonacci order 5, and Hexanacci is Fibonacci order 6 et cetera.</p> <p>I define these series as follows, for order K, A<sub>0</sub> = 1, A<sub>i</sub> = 2<sup>i-1</sup> (i ∈ [1, k]), A<sub>k+1</sub> = 2<sup>k</sup> - 1, A<sub>i+1</sub> = 2A<sub>i</sub> - A<sub>i-k-1</sub> (i &gt;= k + 1).</p> <p>For example, the sequences Fibonacci to Nonanacci are:</p> <pre><code>[1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181, 6765, 10946, 17711, 28657, 46368, 75025, 121393, 196418, 317811, 514229, 832040] [1, 1, 2, 4, 7, 13, 24, 44, 81, 149, 274, 504, 927, 1705, 3136, 5768, 10609, 19513, 35890, 66012, 121415, 223317, 410744, 755476, 1389537, 2555757, 4700770, 8646064, 15902591, 29249425] [1, 1, 2, 4, 8, 15, 29, 56, 108, 208, 401, 773, 1490, 2872, 5536, 10671, 20569, 39648, 76424, 147312, 283953, 547337, 1055026, 2033628, 3919944, 7555935, 14564533, 28074040, 54114452, 104308960] [1, 1, 2, 4, 8, 16, 31, 61, 120, 236, 464, 912, 1793, 3525, 6930, 13624, 26784, 52656, 103519, 203513, 400096, 786568, 1546352, 3040048, 5976577, 11749641, 23099186, 45411804, 89277256, 175514464] [1, 1, 2, 4, 8, 16, 32, 63, 125, 248, 492, 976, 1936, 3840, 7617, 15109, 29970, 59448, 117920, 233904, 463968, 920319, 1825529, 3621088, 7182728, 14247536, 28261168, 56058368, 111196417, 220567305] [1, 1, 2, 4, 8, 16, 32, 64, 127, 253, 504, 1004, 2000, 3984, 7936, 15808, 31489, 62725, 124946, 248888, 495776, 987568, 1967200, 3918592, 7805695, 15548665, 30972384, 61695880, 122895984, 244804400] [1, 1, 2, 4, 8, 16, 32, 64, 128, 255, 509, 1016, 2028, 4048, 8080, 16128, 32192, 64256, 128257, 256005, 510994, 1019960, 2035872, 4063664, 8111200, 16190208, 32316160, 64504063, 128752121, 256993248] [1, 1, 2, 4, 8, 16, 32, 64, 128, 256, 511, 1021, 2040, 4076, 8144, 16272, 32512, 64960, 129792, 259328, 518145, 1035269, 2068498, 4132920, 8257696, 16499120, 32965728, 65866496, 131603200, 262947072] </code></pre> <p>Now, I am well aware of fast algorithms to calculate Nth Fibonacci number of order 2:</p> <pre><code>def fibonacci_fast(n: int) -&gt; int: a, b = 0, 1 bit = 1 &lt;&lt; (n.bit_length() - 1) if n else 0 while bit: a2 = a * a a, b = 2 * a * b + a2, b * b + a2 if n &amp; bit: a, b = a + b, a bit &gt;&gt;= 1 return a def matrix_mult_quad( a: int, b: int, c: int, d: int, e: int, f: int, g: int, h: int ) -&gt; tuple[int, int, int, int]: return ( a * e + b * g, a * f + b * h, c * e + d * g, c * f + d * h, ) def fibonacci_binet(n: int) -&gt; int: a, b = 1, 1 bit = 1 &lt;&lt; (n.bit_length() - 2) if n else 0 while bit: a, b = (a * a + 5 * b * b) &gt;&gt; 1, a * b if n &amp; bit: a, b = (a + 5 * b) &gt;&gt; 1, (a + b) &gt;&gt; 1 bit &gt;&gt;= 1 return b def fibonacci_matrix(n: int) -&gt; int: if not n: return 0 a, b, c, d = 1, 0, 0, 1 e, f, g, h = 1, 1, 1, 0 n -= 1 while n: if n &amp; 1: a, b, c, d = matrix_mult_quad(a, b, c, d, e, f, g, h) e, f, g, h = matrix_mult_quad(e, f, g, h, e, f, g, h) n &gt;&gt;= 1 return a </code></pre> <pre><code>In [591]: %timeit fibonacci_matrix(16384) 751 μs ± 4.74 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each) In [592]: %timeit fibonacci_binet(16384) 132 μs ± 305 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each) In [593]: %timeit fibonacci_fast(16384) 114 μs ± 966 ns per loop (mean ± std. dev. 
of 7 runs, 10,000 loops each) </code></pre> <p>But these of course only deal with Fibonacci-2 sequence, they can't be used to calculate Nth term in higher order Fibonacci sequences.</p> <p>In particular, only the cubic equation for Tribonacci: x<sup>3</sup> - x<sup>2</sup> - x - 1 = 0 and the quartic equation for Tetranacci: x<sup>4</sup> - x<sup>3</sup> - x<sup>2</sup> - x - 1 = 0 have solutions that can be found algebraically, the quintic equation x<sup>5</sup> - x<sup>4</sup> - x<sup>3</sup> - x<sup>2</sup> - x - 1 = 0 has solutions that can't be found with <code>sympy</code>, so the fast doubling method only works with Fibonacci, Tribonacci and Tetranacci.</p> <p>But I know of two ways to compute higher orders of Fibonacci sequences, the first can be used to efficiently generate all first N Fibonacci numbers of order K and has time complexity of O(n), the second is matrix exponentiation by squaring and has time complexity of O(log<sub>2</sub>n) * O(k<sup>3</sup>).</p> <p>First, we only need two numbers to get the next Fibonacci number of order K, the equation is given above, we only need one left shift and one subtraction for each term.</p> <pre><code>Matrix = list[int] def onacci_fast(n: int, order: int) -&gt; Matrix: if n &lt;= order + 1: return [1] + [1 &lt;&lt; i for i in range(n - 1)] last = n - 1 result = [1] + [1 &lt;&lt; i for i in range(order + 1)] + [0] * (last - order - 1) result[start := order + 1] -= 1 for a, b in zip(range(start, last), range(1, last - order)): result[a + 1] = (result[a] &lt;&lt; 1) - result[b] return result </code></pre> <pre><code>ONACCI_MATRICES = {} IDENTITIES = {} def onacci_matrix(n: int) -&gt; Matrix: if matrix := ONACCI_MATRICES.get(n): return matrix mat = [1] * n + [0] * (n * (n - 1)) for i in range(1, n): mat[i * n + i - 1] = 1 ONACCI_MATRICES[n] = mat return mat def onacci_pow(n: int, k: int) -&gt; np.ndarray: base = np.zeros((k, k), dtype=np.uint64) base[0] = 1 for i in range(1, k): base[i, i - 1] = 1 prod = np.zeros((k, k), dtype=np.uint64) for i in range(k): prod[i, i] = 1 return [(prod := prod @ base) for _ in range(n)] def identity_matrix(n: int) -&gt; Matrix: if matrix := IDENTITIES.get(n): return matrix result = [0] * n**2 for i in range(n): result[i * n + i] = 1 IDENTITIES[n] = result return result def mat_mult(mat_1: Matrix, mat_2: Matrix, side: int) -&gt; Matrix: # sourcery skip: use-itertools-product result = [0] * (square := side**2) for y in range(0, square, side): for x in range(side): e = mat_1[y + x] for z in range(side): result[y + z] += mat_2[x * side + z] * e return result def mat_pow(matrix: Matrix, power: int, n: int) -&gt; Matrix: result = identity_matrix(n) while power: if power &amp; 1: result = mat_mult(result, matrix, n) matrix = mat_mult(matrix, matrix, n) power &gt;&gt;= 1 return result def onacci_nth(n: int, k: int) -&gt; int: return mat_pow(onacci_matrix(k), n, k)[0] </code></pre> <pre><code>In [621]: %timeit onacci_nth(16384, 2) 822 μs ± 5.88 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each) In [622]: %timeit onacci_fast(16384, 2) 13.8 ms ± 92.7 μs per loop (mean ± std. dev. of 7 runs, 100 loops each) In [623]: %timeit onacci_fast(16384, 3) 16 ms ± 63.6 μs per loop (mean ± std. dev. of 7 runs, 100 loops each) In [624]: %timeit onacci_fast(16384, 4) 17 ms ± 66.5 μs per loop (mean ± std. dev. of 7 runs, 100 loops each) In [625]: %timeit onacci_nth(16384, 3) 4.02 ms ± 32 μs per loop (mean ± std. dev. of 7 runs, 100 loops each) In [626]: %timeit onacci_nth(16384, 4) 10.9 ms ± 71 μs per loop (mean ± std. dev. 
of 7 runs, 100 loops each) In [627]: %timeit onacci_nth(16384, 5) 22.5 ms ± 632 μs per loop (mean ± std. dev. of 7 runs, 10 loops each) In [628]: %timeit onacci_nth(16384, 6) 39.4 ms ± 314 μs per loop (mean ± std. dev. of 7 runs, 10 loops each) In [629]: %timeit onacci_fast(16384, 6) 17.5 ms ± 27.2 μs per loop (mean ± std. dev. of 7 runs, 100 loops each) In [630]: %timeit onacci_fast(16384, 7) 17.6 ms ± 115 μs per loop (mean ± std. dev. of 7 runs, 100 loops each) In [631]: %timeit onacci_nth(16384, 7) 62.7 ms ± 347 μs per loop (mean ± std. dev. of 7 runs, 10 loops each) In [632]: %timeit onacci_nth(32768, 16) 2.29 s ± 5.78 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) In [633]: %timeit onacci_fast(32768, 16) 56.2 ms ± 271 μs per loop (mean ± std. dev. of 7 runs, 10 loops each) </code></pre> <p>In the easiest case, <code>onacci_nth(16384, 2)</code> is significantly slower than <code>fibonacci_matrix</code> despite using exactly the same method, because of added overhead of using lists. And although the while loop has log<sub>2</sub>n iterations, the matrices are of size k<sup>2</sup> and for each cell k multiplications and k - 1 additions have to be performed, for a total of k<sup>3</sup> multiplications and k<sup>3</sup> - k<sup>2</sup> additions each iteration, this cost grows very quickly, so the matrix exponentiation method is outperformed by the linear iterative method very quickly, because although the iterative method has more iterations, each iteration is far cheaper.</p> <p>The matrix exponentiation uses all of the k*k matrix but we need fewer numbers. The following are the state transitions for Hexanacci:</p> <pre><code>[array([[1, 1, 1, 1, 1, 1], [1, 0, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0], [0, 0, 1, 0, 0, 0], [0, 0, 0, 1, 0, 0], [0, 0, 0, 0, 1, 0]], dtype=uint64), array([[2, 2, 2, 2, 2, 1], [1, 1, 1, 1, 1, 1], [1, 0, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0], [0, 0, 1, 0, 0, 0], [0, 0, 0, 1, 0, 0]], dtype=uint64), array([[4, 4, 4, 4, 3, 2], [2, 2, 2, 2, 2, 1], [1, 1, 1, 1, 1, 1], [1, 0, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0], [0, 0, 1, 0, 0, 0]], dtype=uint64), array([[8, 8, 8, 7, 6, 4], [4, 4, 4, 4, 3, 2], [2, 2, 2, 2, 2, 1], [1, 1, 1, 1, 1, 1], [1, 0, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0]], dtype=uint64), array([[16, 16, 15, 14, 12, 8], [ 8, 8, 8, 7, 6, 4], [ 4, 4, 4, 4, 3, 2], [ 2, 2, 2, 2, 2, 1], [ 1, 1, 1, 1, 1, 1], [ 1, 0, 0, 0, 0, 0]], dtype=uint64), array([[32, 31, 30, 28, 24, 16], [16, 16, 15, 14, 12, 8], [ 8, 8, 8, 7, 6, 4], [ 4, 4, 4, 4, 3, 2], [ 2, 2, 2, 2, 2, 1], [ 1, 1, 1, 1, 1, 1]], dtype=uint64), array([[63, 62, 60, 56, 48, 32], [32, 31, 30, 28, 24, 16], [16, 16, 15, 14, 12, 8], [ 8, 8, 8, 7, 6, 4], [ 4, 4, 4, 4, 3, 2], [ 2, 2, 2, 2, 2, 1]], dtype=uint64), array([[125, 123, 119, 111, 95, 63], [ 63, 62, 60, 56, 48, 32], [ 32, 31, 30, 28, 24, 16], [ 16, 16, 15, 14, 12, 8], [ 8, 8, 8, 7, 6, 4], [ 4, 4, 4, 4, 3, 2]], dtype=uint64), array([[248, 244, 236, 220, 188, 125], [125, 123, 119, 111, 95, 63], [ 63, 62, 60, 56, 48, 32], [ 32, 31, 30, 28, 24, 16], [ 16, 16, 15, 14, 12, 8], [ 8, 8, 8, 7, 6, 4]], dtype=uint64), array([[492, 484, 468, 436, 373, 248], [248, 244, 236, 220, 188, 125], [125, 123, 119, 111, 95, 63], [ 63, 62, 60, 56, 48, 32], [ 32, 31, 30, 28, 24, 16], [ 16, 16, 15, 14, 12, 8]], dtype=uint64)] </code></pre> <p>Now, for each state, we can shift the rows down by 1 to get the lower k - 1 rows of the next state, and the top row can be obtained by adding the first number to the second number and put in first position, adding the first number to the third number and put to 
second position, adding first to fourth put to third... put the first number to last position.</p> <p>Or in Python:</p> <pre><code>def next_onacci(arr: np.ndarray) -&gt; np.ndarray: a = arr[0, 0] return np.concatenate([[[a + b for b in arr[0, 1:]] + [a]], arr[:-1]]) </code></pre> <p>So we only need k numbers to get the next state. But we can do better, the Nth term is found by raising the matrix to the Nth and accessing <code>mat[0, 0]</code>. We can use <code>mat[0, 0] + mat[0, 1]</code> to get the next <code>mat[0, 0]</code>, or we can use <code>2 * mat[0, 0] - mat[-1, -1]</code>.</p> <p>But I have only found ways to calculate the numbers using these relationships linearly, I can't use them to do exponentiation by squaring.</p> <p>Is there a faster way to compute Nth term of higher order Fibonacci sequence?</p> <hr /> <p>Of course <code>onacci_pow</code> overflows extremely quickly, so it is absolutely useless to calculate the terms for large N. And so I don't use it to calculate terms for large N. I have implemented my own matrix multiplication and exponentiation to calculate with infinite precision.</p> <p><code>onacci_pow</code> is used to verify the correctness of my matrix multiplication for small N. <code>onacci_pow</code> is correct as long as it doesn't overflow, and I know exactly on which power it overflows.</p> <p>And for the values of N and K, assume K is between 25 and 100, and N is between 16384 and 1048576.</p>
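<p>For completeness, here is a small sketch of that same linear relationship written with a sliding window of k + 1 terms instead of the full list (it reuses <code>onacci_fast</code> from above for the seed; this is still O(n), so it does not answer the question, it only spells out the recurrence a(i) = 2*a(i-1) - a(i-k-1) that I cannot yet turn into exponentiation by squaring):</p> <pre><code>from collections import deque

def onacci_nth_linear(n: int, k: int) -&gt; int:
    # Same recurrence as onacci_fast, but only the last k + 1 terms are kept.
    seed = onacci_fast(k + 2, k)          # small seed produced by the function above
    if n &lt; len(seed):
        return seed[n]
    window = deque(seed[-(k + 1):], maxlen=k + 1)
    for _ in range(n - len(seed) + 1):
        # a(i) = 2 * a(i - 1) - a(i - k - 1): one shift and one subtraction per term
        window.append((window[-1] &lt;&lt; 1) - window[0])
    return window[-1]

# onacci_nth_linear(16384, 6) should equal onacci_fast(16385, 6)[-1]
</code></pre>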
<python><algorithm><fibonacci>
2025-04-18 08:24:13
1
3,930
Ξένη Γήινος
79,580,497
5,722,359
Error: cupy_backends.cuda.libs.cudnn.CuDNNError: cuDNN Error: CUDNN_STATUS_NOT_SUPPORTED
<p>I am trying to run <code>CuDNN</code> via <code>CuPy</code> but is experiencing the above mentioned error. <strong>How do I resolve this error?</strong> I have tried to check that all my args and kwargs are correct but still could not figure a way to overcome this issue.</p> <p>Nvidia GPU used has a compute capability of 3.0 and the linux os is installed with it highest compatible CUDA toolkit version of 11.4. To use this cuda-11.4, python version not higher than version 3.10 has to be used.</p> <p>How I set up this cupy_cuda114 project and run the python file:</p> <pre><code>$ uv init cupy_cuda114_py310 --python 3.10 Initialized project `cupy-cuda114-py310` at `/home/user/cupy_cuda114_py310` $ cd cupy_cuda114_py310/ $ uv run python --version Using CPython 3.10.16 Creating virtual environment at: .venv Python 3.10.16 $ uv add cupy-cuda114 Resolved 4 packages in 397ms Installed 3 packages in 46ms + cupy-cuda114==10.6.0 + fastrlock==0.8.3 + numpy==1.24.4 $ uv run python -m cupyx.tools.install_library --cuda 11.4 --library cudnn $ echo $CUDA_PATH /usr/local/cuda-11.4 $ sudo cp ~/.cupy/cuda_lib/11.4/cudnn/8.4.0/include/cudnn.h $CUDA_PATH/include $ sudo cp ~/.cupy/cuda_lib/11.4/cudnn/8.4.0/lib/libcudnn.so* $CUDA_PATH/lib64 $ uv run python cudnn_hello_world.py </code></pre> <p>cudnn_hello_world.py:</p> <pre><code>import cupy as cp import cupy.cuda.cudnn as cudnn print(dir(cudnn)) print(cp.__version__) print(cp.cuda.runtime.runtimeGetVersion()) print(cp.cuda.cudnn.getVersion()) def conv2d_cupy_cuda_cudnn(input2d: cp.ndarray, kernel2d: cp.ndarray, bias: cp.ndarray, pad: int = 1, stride: int =1): # Ensure input is a 2D array if input2d.ndim != 2 or kernel2d.ndim != 2 or bias.ndim != 1: raise ValueError(&quot;input2d, kernel2d, and bias must be 2D, 2D, and 1D respectively.&quot;) # Reshape input and kernel to match cuDNN's 4D tensor format input_tensor = input2d[cp.newaxis, cp.newaxis, :, :] filter_tensor = kernel2d[cp.newaxis, cp.newaxis, :, :] bias_tensor = bias # Initialize cuDNN handle = cudnn.create() # Define the input and filter tensor descriptors print(f&quot;{cudnn.createTensorDescriptor.__doc__=}&quot;) print(f&quot;{cudnn.createFilterDescriptor.__doc__=}&quot;) input_desc = cudnn.createTensorDescriptor() filter_desc = cudnn.createFilterDescriptor() output_desc = cudnn.createTensorDescriptor() n, c, h, w = input_tensor.shape k, _, kh, kw = filter_tensor.shape print(f&quot;{n=}, {c=}, {h=}, {w=}&quot;) print(f&quot;{k=}, {kh=}, {kw=}&quot;) print(f&quot;{cudnn.setTensor4dDescriptor.__doc__=}&quot;) cudnn.setTensor4dDescriptor( input_desc, cudnn.CUDNN_TENSOR_NCHW, cudnn.CUDNN_DATA_FLOAT, n, c, h, w ) print(f&quot;{cudnn.setFilter4dDescriptor_v4.__doc__=}&quot;) cudnn.setFilter4dDescriptor_v4( filter_desc, cudnn.CUDNN_DATA_FLOAT, cudnn.CUDNN_TENSOR_NCHW, k, c, kh, kw ) print(f&quot;{cudnn.CUDNN_TENSOR_NCHW.__doc__=}&quot;) print(f&quot;{cudnn.CUDNN_DATA_FLOAT.__doc__=}&quot;) # Define the convolution descriptor conv_desc = cudnn.createConvolutionDescriptor() print(f&quot;{cudnn.setConvolution2dDescriptor_v4.__doc__=}&quot;) print(f&quot;{cudnn.CUDNN_CONVOLUTION.__doc__=}&quot;) cudnn.setConvolution2dDescriptor_v4( conv_desc, pad_h=pad, pad_w=pad, u=stride, v=stride, dilation_h=1, dilation_w=1, mode=cudnn.CUDNN_CONVOLUTION # compute_type=cudnn.CUDNN_DATA_FLOAT, ) # Calculate output dimensions print(f&quot;{cudnn.getConvolution2dForwardOutputDim.__doc__=}&quot;) n, c, h, w = cudnn.getConvolution2dForwardOutputDim( conv_desc, input_desc, filter_desc,) print(f&quot;{n=}, {c=}, {h=}, 
{w=}&quot;) output_tensor = cp.empty((n, c, h, w), dtype=cp.float32) # Set the output tensor descriptor cudnn.setTensor4dDescriptor( output_desc, cudnn.CUDNN_TENSOR_NCHW, cudnn.CUDNN_DATA_FLOAT, n, c, h, w ) return output_tensor[0, 0, :, :] input2d = cp.array( [ [0.0, 1.0, 2.0, 3.0], [4.0, 5.0, 0.0, 7.0], [4.0, 0.0, 6.0, 7.0], [0.0, 1.0, 2.0, 3.0], ], dtype=cp.float32, ) kernel2d = cp.array( [[-3.0, -2.0, 1.0], [0.0, 1.0, 2.0], [-3.0, 0.0, 1.0]], dtype=cp.float32 ) bias = cp.array([2.0], dtype=cp.float32) pad = 1 stride = 1 # 2D Convolution with padding print(f&quot;{conv2d_cupy_cuda_cudnn.__name__}&quot;) cudnn_output2d = conv2d_cupy_cuda_cudnn(input2d, kernel2d, bias, pad, stride) print(f&quot;{cudnn_output2d}, shape is {cudnn_output2d.shape}&quot;) </code></pre> <p>Output:</p> <pre><code>['CTCLoss', 'CUDNN_16BIT_INDICES', 'CUDNN_32BIT_INDICES', 'CUDNN_64BIT_INDICES', 'CUDNN_8BIT_INDICES', 'CUDNN_ACTIVATION_CLIPPED_RELU', 'CUDNN_ACTIVATION_ELU', 'CUDNN_ACTIVATION_IDENTITY', 'CUDNN_ACTIVATION_RELU', 'CUDNN_ACTIVATION_SIGMOID', 'CUDNN_ACTIVATION_TANH', 'CUDNN_ADD_FEATURE_MAP', 'CUDNN_ADD_FULL_TENSOR', 'CUDNN_ADD_IMAGE', 'CUDNN_ADD_SAME_C', 'CUDNN_ADD_SAME_CHW', 'CUDNN_ADD_SAME_HW', 'CUDNN_BATCHNORM_OPS_BN', 'CUDNN_BATCHNORM_OPS_BN_ACTIVATION', 'CUDNN_BATCHNORM_OPS_BN_ADD_ACTIVATION', 'CUDNN_BATCHNORM_PER_ACTIVATION', 'CUDNN_BATCHNORM_SPATIAL', 'CUDNN_BATCHNORM_SPATIAL_PERSISTENT', 'CUDNN_BIDIRECTIONAL', 'CUDNN_BN_MIN_EPSILON', 'CUDNN_CONVOLUTION', 'CUDNN_CONVOLUTION_BWD_DATA_ALGO_0', 'CUDNN_CONVOLUTION_BWD_DATA_ALGO_1', 'CUDNN_CONVOLUTION_BWD_DATA_ALGO_FFT', 'CUDNN_CONVOLUTION_BWD_DATA_ALGO_FFT_TILING', 'CUDNN_CONVOLUTION_BWD_DATA_ALGO_WINOGRAD', 'CUDNN_CONVOLUTION_BWD_DATA_ALGO_WINOGRAD_NONFUSED', 'CUDNN_CONVOLUTION_BWD_DATA_NO_WORKSPACE', 'CUDNN_CONVOLUTION_BWD_DATA_PREFER_FASTEST', 'CUDNN_CONVOLUTION_BWD_DATA_SPECIFY_WORKSPACE_LIMIT', 'CUDNN_CONVOLUTION_BWD_FILTER_ALGO_0', 'CUDNN_CONVOLUTION_BWD_FILTER_ALGO_1', 'CUDNN_CONVOLUTION_BWD_FILTER_ALGO_3', 'CUDNN_CONVOLUTION_BWD_FILTER_ALGO_FFT', 'CUDNN_CONVOLUTION_BWD_FILTER_ALGO_WINOGRAD', 'CUDNN_CONVOLUTION_BWD_FILTER_ALGO_WINOGRAD_NONFUSED', 'CUDNN_CONVOLUTION_BWD_FILTER_NO_WORKSPACE', 'CUDNN_CONVOLUTION_BWD_FILTER_PREFER_FASTEST', 'CUDNN_CONVOLUTION_BWD_FILTER_SPECIFY_WORKSPACE_LIMIT', 'CUDNN_CONVOLUTION_FWD_ALGO_DIRECT', 'CUDNN_CONVOLUTION_FWD_ALGO_FFT', 'CUDNN_CONVOLUTION_FWD_ALGO_FFT_TILING', 'CUDNN_CONVOLUTION_FWD_ALGO_GEMM', 'CUDNN_CONVOLUTION_FWD_ALGO_IMPLICIT_GEMM', 'CUDNN_CONVOLUTION_FWD_ALGO_IMPLICIT_PRECOMP_GEMM', 'CUDNN_CONVOLUTION_FWD_ALGO_WINOGRAD', 'CUDNN_CONVOLUTION_FWD_ALGO_WINOGRAD_NONFUSED', 'CUDNN_CONVOLUTION_FWD_NO_WORKSPACE', 'CUDNN_CONVOLUTION_FWD_PREFER_FASTEST', 'CUDNN_CONVOLUTION_FWD_SPECIFY_WORKSPACE_LIMIT', 'CUDNN_CROSS_CORRELATION', 'CUDNN_CTC_LOSS_ALGO_DETERMINISTIC', 'CUDNN_CTC_LOSS_ALGO_NON_DETERMINISTIC', 'CUDNN_DATA_DOUBLE', 'CUDNN_DATA_FLOAT', 'CUDNN_DATA_HALF', 'CUDNN_DEFAULT_MATH', 'CUDNN_DETERMINISTIC', 'CUDNN_DIVNORM_PRECOMPUTED_MEANS', 'CUDNN_ERRQUERY_BLOCKING', 'CUDNN_ERRQUERY_NONBLOCKING', 'CUDNN_ERRQUERY_RAWCODE', 'CUDNN_FUSED_BN_FINALIZE_STATISTICS_INFERENCE', 'CUDNN_FUSED_BN_FINALIZE_STATISTICS_TRAINING', 'CUDNN_FUSED_CONV_SCALE_BIAS_ADD_ACTIVATION', 'CUDNN_FUSED_DACTIVATION_FORK_DBATCHNORM', 'CUDNN_FUSED_SCALE_BIAS_ACTIVATION_CONV_BNSTATS', 'CUDNN_FUSED_SCALE_BIAS_ACTIVATION_WGRAD', 'CUDNN_FUSED_SCALE_BIAS_ADD_ACTIVATION_GEN_BITMASK', 'CUDNN_GRU', 'CUDNN_LINEAR_INPUT', 'CUDNN_LRN_CROSS_CHANNEL_DIM1', 'CUDNN_LSTM', 'CUDNN_NON_DETERMINISTIC', 'CUDNN_NOT_PROPAGATE_NAN', 'CUDNN_OP_TENSOR_ADD', 
'CUDNN_OP_TENSOR_MAX', 'CUDNN_OP_TENSOR_MIN', 'CUDNN_OP_TENSOR_MUL', 'CUDNN_OP_TENSOR_NOT', 'CUDNN_OP_TENSOR_SQRT', 'CUDNN_PARAM_ACTIVATION_BITMASK_DESC', 'CUDNN_PARAM_ACTIVATION_BITMASK_PLACEHOLDER', 'CUDNN_PARAM_ACTIVATION_DESC', 'CUDNN_PARAM_BN_BIAS_PLACEHOLDER', 'CUDNN_PARAM_BN_DBIAS_PLACEHOLDER', 'CUDNN_PARAM_BN_DSCALE_PLACEHOLDER', 'CUDNN_PARAM_BN_EQBIAS_PLACEHOLDER', 'CUDNN_PARAM_BN_EQSCALEBIAS_DESC', 'CUDNN_PARAM_BN_EQSCALE_PLACEHOLDER', 'CUDNN_PARAM_BN_MODE', 'CUDNN_PARAM_BN_RUNNING_MEAN_PLACEHOLDER', 'CUDNN_PARAM_BN_RUNNING_VAR_PLACEHOLDER', 'CUDNN_PARAM_BN_SAVED_INVSTD_PLACEHOLDER', 'CUDNN_PARAM_BN_SAVED_MEAN_PLACEHOLDER', 'CUDNN_PARAM_BN_SCALEBIAS_MEANVAR_DESC', 'CUDNN_PARAM_BN_SCALE_PLACEHOLDER', 'CUDNN_PARAM_BN_Z_EQBIAS_PLACEHOLDER', 'CUDNN_PARAM_BN_Z_EQSCALEBIAS_DESC', 'CUDNN_PARAM_BN_Z_EQSCALE_PLACEHOLDER', 'CUDNN_PARAM_CONV_DESC', 'CUDNN_PARAM_DWDATA_PLACEHOLDER', 'CUDNN_PARAM_DWDESC', 'CUDNN_PARAM_DXDATA_PLACEHOLDER', 'CUDNN_PARAM_DXDESC', 'CUDNN_PARAM_DYDATA_PLACEHOLDER', 'CUDNN_PARAM_DYDESC', 'CUDNN_PARAM_DZDATA_PLACEHOLDER', 'CUDNN_PARAM_DZDESC', 'CUDNN_PARAM_WDATA_PLACEHOLDER', 'CUDNN_PARAM_WDESC', 'CUDNN_PARAM_XDATA_PLACEHOLDER', 'CUDNN_PARAM_XDESC', 'CUDNN_PARAM_YDATA_PLACEHOLDER', 'CUDNN_PARAM_YDESC', 'CUDNN_PARAM_YSQSUM_PLACEHOLDER', 'CUDNN_PARAM_YSTATS_DESC', 'CUDNN_PARAM_YSUM_PLACEHOLDER', 'CUDNN_PARAM_ZDATA_PLACEHOLDER', 'CUDNN_PARAM_ZDESC', 'CUDNN_POOLING_AVERAGE_COUNT_EXCLUDE_PADDING', 'CUDNN_POOLING_AVERAGE_COUNT_INCLUDE_PADDING', 'CUDNN_POOLING_MAX', 'CUDNN_POOLING_MAX_DETERMINISTIC', 'CUDNN_PROPAGATE_NAN', 'CUDNN_PTR_16B_ALIGNED', 'CUDNN_PTR_ACTIVATION_BITMASK', 'CUDNN_PTR_BN_BIAS', 'CUDNN_PTR_BN_DBIAS', 'CUDNN_PTR_BN_DSCALE', 'CUDNN_PTR_BN_EQBIAS', 'CUDNN_PTR_BN_EQSCALE', 'CUDNN_PTR_BN_RUNNING_MEAN', 'CUDNN_PTR_BN_RUNNING_VAR', 'CUDNN_PTR_BN_SAVED_INVSTD', 'CUDNN_PTR_BN_SAVED_MEAN', 'CUDNN_PTR_BN_SCALE', 'CUDNN_PTR_BN_Z_EQBIAS', 'CUDNN_PTR_BN_Z_EQSCALE', 'CUDNN_PTR_DWDATA', 'CUDNN_PTR_DXDATA', 'CUDNN_PTR_DYDATA', 'CUDNN_PTR_DZDATA', 'CUDNN_PTR_ELEM_ALIGNED', 'CUDNN_PTR_NULL', 'CUDNN_PTR_WDATA', 'CUDNN_PTR_WORKSPACE', 'CUDNN_PTR_XDATA', 'CUDNN_PTR_YDATA', 'CUDNN_PTR_YSQSUM', 'CUDNN_PTR_YSUM', 'CUDNN_PTR_ZDATA', 'CUDNN_REDUCE_TENSOR_ADD', 'CUDNN_REDUCE_TENSOR_AMAX', 'CUDNN_REDUCE_TENSOR_AVG', 'CUDNN_REDUCE_TENSOR_FLATTENED_INDICES', 'CUDNN_REDUCE_TENSOR_MAX', 'CUDNN_REDUCE_TENSOR_MIN', 'CUDNN_REDUCE_TENSOR_MUL', 'CUDNN_REDUCE_TENSOR_MUL_NO_ZEROS', 'CUDNN_REDUCE_TENSOR_NORM1', 'CUDNN_REDUCE_TENSOR_NORM2', 'CUDNN_REDUCE_TENSOR_NO_INDICES', 'CUDNN_RNN_ALGO_PERSIST_DYNAMIC', 'CUDNN_RNN_ALGO_PERSIST_STATIC', 'CUDNN_RNN_ALGO_STANDARD', 'CUDNN_RNN_DATA_LAYOUT_BATCH_MAJOR_UNPACKED', 'CUDNN_RNN_DATA_LAYOUT_SEQ_MAJOR_PACKED', 'CUDNN_RNN_DATA_LAYOUT_SEQ_MAJOR_UNPACKED', 'CUDNN_RNN_PADDED_IO_DISABLED', 'CUDNN_RNN_PADDED_IO_ENABLED', 'CUDNN_RNN_RELU', 'CUDNN_RNN_TANH', 'CUDNN_SAMPLER_BILINEAR', 'CUDNN_SCALAR_DOUBLE_BN_EPSILON', 'CUDNN_SCALAR_DOUBLE_BN_EXP_AVG_FACTOR', 'CUDNN_SCALAR_INT64_T_BN_ACCUMULATION_COUNT', 'CUDNN_SCALAR_SIZE_T_WORKSPACE_SIZE_IN_BYTES', 'CUDNN_SKIP_INPUT', 'CUDNN_SOFTMAX_ACCURATE', 'CUDNN_SOFTMAX_FAST', 'CUDNN_SOFTMAX_LOG', 'CUDNN_SOFTMAX_MODE_CHANNEL', 'CUDNN_SOFTMAX_MODE_INSTANCE', 'CUDNN_STATUS_RUNTIME_FP_OVERFLOW', 'CUDNN_STATUS_RUNTIME_IN_PROGRESS', 'CUDNN_STATUS_RUNTIME_PREREQUISITE_MISSING', 'CUDNN_STATUS_SUCCESS', 'CUDNN_TENSOR_NCHW', 'CUDNN_TENSOR_NHWC', 'CUDNN_TENSOR_OP_MATH', 'CUDNN_UNIDIRECTIONAL', 'CuDNNAlgoPerf', 'CuDNNError', 'RNNBackwardData', 'RNNBackwardDataEx', 'RNNBackwardWeights', 'RNNBackwardWeightsEx', 'RNNForwardInference', 
'RNNForwardInferenceEx', 'RNNForwardTraining', 'RNNForwardTrainingEx', '__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__spec__', '_environment', 'activationBackward_v4', 'activationForward_v4', 'addTensor_v3', 'available', 'batchNormalizationBackward', 'batchNormalizationBackwardEx', 'batchNormalizationForwardInference', 'batchNormalizationForwardTraining', 'batchNormalizationForwardTrainingEx', 'check_status', 'convolutionBackwardBias', 'convolutionBackwardData_v3', 'convolutionBackwardFilter_v3', 'convolutionForward', 'create', 'createActivationDescriptor', 'createCTCLossDescriptor', 'createConvolutionDescriptor', 'createDropoutDescriptor', 'createFilterDescriptor', 'createFusedOpsConstParamPack', 'createFusedOpsPlan', 'createFusedOpsVariantParamPack', 'createOpTensorDescriptor', 'createPersistentRNNPlan', 'createPoolingDescriptor', 'createRNNDataDescriptor', 'createRNNDescriptor', 'createReduceTensorDescriptor', 'createSpatialTransformerDescriptor', 'createTensorDescriptor', 'deriveBNTensorDescriptor', 'destroy', 'destroyActivationDescriptor', 'destroyCTCLossDescriptor', 'destroyConvolutionDescriptor', 'destroyDropoutDescriptor', 'destroyFilterDescriptor', 'destroyFusedOpsConstParamPack', 'destroyFusedOpsPlan', 'destroyFusedOpsVariantParamPack', 'destroyOpTensorDescriptor', 'destroyPersistentRNNPlan', 'destroyPoolingDescriptor', 'destroyRNNDataDescriptor', 'destroyRNNDescriptor', 'destroyReduceTensorDescriptor', 'destroySpatialTransformerDescriptor', 'destroyTensorDescriptor', 'dropoutBackward', 'dropoutForward', 'dropoutGetStatesSize', 'findConvolutionBackwardDataAlgorithm', 'findConvolutionBackwardDataAlgorithmEx', 'findConvolutionBackwardDataAlgorithmEx_v7', 'findConvolutionBackwardFilterAlgorithm', 'findConvolutionBackwardFilterAlgorithmEx', 'findConvolutionBackwardFilterAlgorithmEx_v7', 'findConvolutionForwardAlgorithm', 'findConvolutionForwardAlgorithmEx', 'findConvolutionForwardAlgorithmEx_v7', 'fusedOpsExecute', 'getBatchNormalizationBackwardExWorkspaceSize', 'getBatchNormalizationForwardTrainingExWorkspaceSize', 'getBatchNormalizationTrainingExReserveSpaceSize', 'getCTCLossDescriptor', 'getCTCLossWorkspaceSize', 'getConvolutionBackwardDataAlgorithm_v6', 'getConvolutionBackwardDataAlgorithm_v7', 'getConvolutionBackwardDataWorkspaceSize', 'getConvolutionBackwardFilterAlgorithm_v6', 'getConvolutionBackwardFilterAlgorithm_v7', 'getConvolutionBackwardFilterWorkspaceSize', 'getConvolutionForwardAlgorithm_v6', 'getConvolutionForwardAlgorithm_v7', 'getConvolutionForwardWorkspaceSize', 'getConvolutionGroupCount', 'getConvolutionMathType', 'getDropoutReserveSpaceSize', 'getFilterNdDescriptor', 'getFusedOpsConstParamPackAttribute', 'getFusedOpsVariantParamPackAttribute', 'getOpTensorDescriptor', 'getRNNDataDescriptor', 'getRNNLinLayerBiasParams', 'getRNNLinLayerMatrixParams', 'getRNNPaddingMode', 'getRNNParamsSize', 'getRNNTrainingReserveSize', 'getRNNWorkspaceSize', 'getReduceTensorDescriptor', 'getReductionIndicesSize', 'getReductionWorkspaceSize', 'getStream', 'getTensor4dDescriptor', 'getVersion', 'get_build_version', 'makeFusedOpsPlan', 'opTensor', 'poolingBackward', 'poolingForward', 'queryRuntimeError', 'reduceTensor', 'scaleTensor', 'setActivationDescriptor', 'setCTCLossDescriptor', 'setConvolution2dDescriptor_v4', 'setConvolution2dDescriptor_v5', 'setConvolutionGroupCount', 'setConvolutionMathType', 'setConvolutionNdDescriptor_v3', 'setDropoutDescriptor', 'setFilter4dDescriptor_v4', 'setFilterNdDescriptor_v4', 
'setFusedOpsConstParamPackAttribute', 'setFusedOpsVariantParamPackAttribute', 'setOpTensorDescriptor', 'setPersistentRNNPlan', 'setPooling2dDescriptor_v4', 'setPoolingNdDescriptor_v4', 'setRNNDataDescriptor', 'setRNNDescriptor_v5', 'setRNNDescriptor_v6', 'setRNNPaddingMode', 'setReduceTensorDescriptor', 'setSpatialTransformerDescriptor', 'setStream', 'setTensor', 'setTensor4dDescriptor', 'setTensor4dDescriptorEx', 'setTensorNdDescriptor', 'softmaxBackward', 'softmaxForward', 'spatialTfGridGeneratorBackward', 'spatialTfGridGeneratorForward', 'spatialTfSamplerBackward', 'spatialTfSamplerForward'] 10.6.0 11040 8400 conv2d_cupy_cuda_cudnn cudnn.createTensorDescriptor.__doc__='createTensorDescriptor() -&gt; size_t' cudnn.createFilterDescriptor.__doc__='createFilterDescriptor() -&gt; size_t' n=1, c=1, h=4, w=4 k=1, kh=3, kw=3 cudnn.setTensor4dDescriptor.__doc__='setTensor4dDescriptor(size_t tensorDesc, int format, int dataType, int n, int c, int h, int w)' cudnn.setFilter4dDescriptor_v4.__doc__='setFilter4dDescriptor_v4(size_t filterDesc, int dataType, int format, int k, int c, int h, int w)' cudnn.CUDNN_TENSOR_NCHW.__doc__=&quot;int([x]) -&gt; integer\nint(x, base=10) -&gt; integer\n\nConvert a number or string to an integer, or return 0 if no arguments\nare given. If x is a number, return x.__int__(). For floating point\nnumbers, this truncates towards zero.\n\nIf x is not a number or if base is given, then x must be a string,\nbytes, or bytearray instance representing an integer literal in the\ngiven base. The literal can be preceded by '+' or '-' and be surrounded\nby whitespace. The base defaults to 10. Valid bases are 0 and 2-36.\nBase 0 means to interpret the base from the string as an integer literal.\n&gt;&gt;&gt; int('0b100', base=0)\n4&quot; cudnn.CUDNN_DATA_FLOAT.__doc__=&quot;int([x]) -&gt; integer\nint(x, base=10) -&gt; integer\n\nConvert a number or string to an integer, or return 0 if no arguments\nare given. If x is a number, return x.__int__(). For floating point\nnumbers, this truncates towards zero.\n\nIf x is not a number or if base is given, then x must be a string,\nbytes, or bytearray instance representing an integer literal in the\ngiven base. The literal can be preceded by '+' or '-' and be surrounded\nby whitespace. The base defaults to 10. Valid bases are 0 and 2-36.\nBase 0 means to interpret the base from the string as an integer literal.\n&gt;&gt;&gt; int('0b100', base=0)\n4&quot; cudnn.setConvolution2dDescriptor_v4.__doc__='setConvolution2dDescriptor_v4(size_t convDesc, int pad_h, int pad_w, int u, int v, int dilation_h, int dilation_w, int mode)' cudnn.CUDNN_CONVOLUTION.__doc__=&quot;int([x]) -&gt; integer\nint(x, base=10) -&gt; integer\n\nConvert a number or string to an integer, or return 0 if no arguments\nare given. If x is a number, return x.__int__(). For floating point\nnumbers, this truncates towards zero.\n\nIf x is not a number or if base is given, then x must be a string,\nbytes, or bytearray instance representing an integer literal in the\ngiven base. The literal can be preceded by '+' or '-' and be surrounded\nby whitespace. The base defaults to 10. 
Valid bases are 0 and 2-36.\nBase 0 means to interpret the base from the string as an integer literal.\n&gt;&gt;&gt; int('0b100', base=0)\n4&quot; Traceback (most recent call last): File &quot;/home/usr/cudnn_hello_world.py&quot;, line 152, in &lt;module&gt; cudnn_output2d = conv2d_cupy_cuda_cudnn(input2d, kernel2d, bias, pad, stride) File &quot;/home/usr/cudnn_hello_world.py&quot;, line 55, in conv2d_cupy_cuda_cudnn cudnn.setConvolution2dDescriptor_v4( File &quot;cupy_backends/cuda/libs/cudnn.pyx&quot;, line 1141, in cupy_backends.cuda.libs.cudnn.setConvolution2dDescriptor_v4 File &quot;cupy_backends/cuda/libs/cudnn.pyx&quot;, line 1147, in cupy_backends.cuda.libs.cudnn.setConvolution2dDescriptor_v4 File &quot;cupy_backends/cuda/libs/cudnn.pyx&quot;, line 785, in cupy_backends.cuda.libs.cudnn.check_status cupy_backends.cuda.libs.cudnn.CuDNNError: cuDNN Error: CUDNN_STATUS_NOT_SUPPORTED </code></pre> <p>I have also created a cudnn_debug.log file and then reran the python script:</p> <pre><code>$ export CUDNN_LOGDEST_DBG=cudnn_debug.log $ export CUDNN_LOGINFO_DBG=1 $ uv run python cudnn_hello_world.py </code></pre> <p>The log msgs are :</p> <pre><code>I! CuDNN (v8400) function cudnnGetVersion() called: i! Time: 2025-04-17T15:49:42.409117 (0d+0h+0m+0s since start) i! Process=556373; Thread=556373; GPU=NULL; Handle=NULL; StreamId=NULL. I! CuDNN (v8400) function cudnnCreate() called: i! handle: location=host; addr=0x7ffce7708b78; i! Time: 2025-04-17T15:49:42.640970 (0d+0h+0m+0s since start) i! Process=556373; Thread=556373; GPU=NULL; Handle=NULL; StreamId=NULL. I! CuDNN (v8400) function cudnnCreateTensorDescriptor() called: i! Time: 2025-04-17T15:49:43.031841 (0d+0h+0m+1s since start) i! Process=556373; Thread=556373; GPU=NULL; Handle=NULL; StreamId=NULL. I! CuDNN (v8400) function cudnnCreateFilterDescriptor() called: i! Time: 2025-04-17T15:49:43.031887 (0d+0h+0m+1s since start) i! Process=556373; Thread=556373; GPU=NULL; Handle=NULL; StreamId=NULL. I! CuDNN (v8400) function cudnnCreateTensorDescriptor() called: i! Time: 2025-04-17T15:49:43.031897 (0d+0h+0m+1s since start) i! Process=556373; Thread=556373; GPU=NULL; Handle=NULL; StreamId=NULL. I! CuDNN (v8400) function cudnnSetTensor4dDescriptor() called: i! format: type=cudnnTensorFormat_t; val=CUDNN_TENSOR_NCHW (0); i! dataType: type=cudnnDataType_t; val=CUDNN_DATA_FLOAT (0); i! n: type=int; val=1; i! c: type=int; val=1; i! h: type=int; val=4; i! w: type=int; val=4; i! Time: 2025-04-17T15:49:43.031941 (0d+0h+0m+1s since start) i! Process=556373; Thread=556373; GPU=NULL; Handle=NULL; StreamId=NULL. I! CuDNN (v8400) function cudnnSetTensor4dDescriptorEx() called: i! dataType: type=cudnnDataType_t; val=CUDNN_DATA_FLOAT (0); i! n: type=int; val=1; i! c: type=int; val=1; i! h: type=int; val=4; i! w: type=int; val=4; i! nStride: type=int; val=16; i! cStride: type=int; val=16; i! hStride: type=int; val=4; i! wStride: type=int; val=1; i! Time: 2025-04-17T15:49:43.031952 (0d+0h+0m+1s since start) i! Process=556373; Thread=556373; GPU=NULL; Handle=NULL; StreamId=NULL. I! CuDNN (v8400) function cudnnSetFilter4dDescriptor() called: i! dataType: type=cudnnDataType_t; val=CUDNN_DATA_FLOAT (0); i! format: type=cudnnTensorFormat_t; val=CUDNN_TENSOR_NCHW (0); i! k: type=int; val=1; i! c: type=int; val=1; i! h: type=int; val=3; i! w: type=int; val=3; i! Time: 2025-04-17T15:49:43.031978 (0d+0h+0m+1s since start) i! Process=556373; Thread=556373; GPU=NULL; Handle=NULL; StreamId=NULL. I! CuDNN (v8400) function cudnnCreateConvolutionDescriptor() called: i! 
convDesc: location=host; addr=0x7ffce7708b88; i! Time: 2025-04-17T15:49:43.032027 (0d+0h+0m+1s since start) i! Process=556373; Thread=556373; GPU=NULL; Handle=NULL; StreamId=NULL. I! CuDNN (v8400) function cudnnGetErrorString() called: i! status: type=int; val=9; i! Time: 2025-04-17T15:49:43.032071 (0d+0h+0m+1s since start) i! Process=556373; Thread=556373; GPU=NULL; Handle=NULL; StreamId=NULL. </code></pre>
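<p>For reference, the <code>dir()</code> output above also lists <code>setConvolution2dDescriptor_v5</code>. The following is only a hedged sketch of how I would try it: I am assuming (not verifying) that the <code>_v5</code> variant takes an extra <code>computeType</code> argument like the underlying <code>cudnnSetConvolution2dDescriptor</code>, so its <code>__doc__</code> should be checked first:</p> <pre><code># Hedged sketch only: the exact _v5 signature is an assumption here and should
# be confirmed by printing its docstring before relying on it.
print(cudnn.setConvolution2dDescriptor_v5.__doc__)
cudnn.setConvolution2dDescriptor_v5(
    conv_desc,
    pad, pad,        # pad_h, pad_w
    stride, stride,  # u, v
    1, 1,            # dilation_h, dilation_w
    cudnn.CUDNN_CONVOLUTION,
    cudnn.CUDNN_DATA_FLOAT,  # assumed computeType parameter
)
</code></pre>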
<python><cupy><cudnn><uv>
2025-04-18 05:53:30
1
8,499
Sun Bear
79,580,362
5,679,985
Why does Django not hot-reload on M1 Mac?
<p>I have a Django application. Back when I was using an Intel mac, it's hot-reloading functionality was working. However, a year ago I switched to one of the new Macs and now I can't get it to hot-reload.</p> <p>Here is my settings.py:</p> <pre><code> import os import firebase_admin BASE_DIR = os.path.abspath(os.path.dirname(os.path.abspath(__file__))) SECRET_KEY_1 = '12312312' # ... some more secret keys DISABLE_SERVER_SIDE_CURSORS = True DEBUG = True AUTH_USER_MODEL = 'my_project.MyUser' ALLOWED_HOSTS = ['*'] INSTALLED_APPS = [ 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'my_project', ] MIDDLEWARE = [ 'django.middleware.gzip.GZipMiddleware', 'django.middleware.security.SecurityMiddleware', 'whitenoise.middleware.WhiteNoiseMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.common.CommonMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware', ] ROOT_URLCONF = 'urls' TEMPLATES = [ { 'BACKEND': 'django.template.backends.django.DjangoTemplates', 'DIRS': [ os.path.join(BASE_DIR, 'my_project/'), ], 'APP_DIRS': True, 'OPTIONS': { 'context_processors': [ 'django.template.context_processors.debug', 'django.template.context_processors.request', 'django.contrib.auth.context_processors.auth', 'django.contrib.messages.context_processors.messages', ], }, }, ] WSGI_APPLICATION = 'wsgi.application' DATABASES = { 'default': { 'ENGINE': 'django.db.backends.postgresql', 'NAME': 'user', 'USER': 'user', 'PASSWORD': '', 'HOST': 'localhost', 'PORT': '5432', }, 'replica': { 'ENGINE': 'django.db.backends.postgresql', 'NAME': 'user', 'USER': 'user', 'PASSWORD': '', 'HOST': 'localhost', 'PORT': '5432', }, } AUTH_PASSWORD_VALIDATORS = [ { 'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator', }, { 'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator', }, { 'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator', }, { 'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator', }, ] LANGUAGE_CODE = 'en-us' TIME_ZONE = 'UTC' USE_I18N = True USE_L10N = True USE_TZ = True STATIC_ROOT = os.path.join(BASE_DIR, 'static') STATIC_URL = '/static/' SECURE_HSTS_SECONDS = 0 SECURE_CONTENT_TYPE_NOSNIFF = True SECURE_BROWSER_XSS_FILTER = True SESSION_COOKIE_SECURE = True CSRF_COOKIE_SECURE = False X_FRAME_OPTIONS = 'DENY' CSRF_HEADER_NAME = 'HTTP_X_CSRFTOKEN' CSRF_COOKIE_NAME = 'csrftoken' SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https') SECURE_SSL_REDIRECT = os.environ.get('SECURE_SSL_REDIRECT', '') == 'True' STATICFILES_STORAGE = 'whitenoise.storage.CompressedStaticFilesStorage' FIREBASE_APP = firebase_admin.initialize_app() </code></pre> <p>When I run the server (<code>python3 manage.py runserver</code>):</p> <pre><code>Watching for file changes with StatReloader Performing system checks... April 18, 2025 - 02:29:48 Django version 3.2.13, using settings 'settings' Starting development server at http://127.0.0.1:8000/ Quit the server with CONTROL-C. </code></pre> <p>Any time I update any .py file, I expect the server to reload but it doesn't.</p>
<python><django>
2025-04-18 02:37:57
2
1,274
Human Cyborg Relations
79,580,343
13,562,186
Word to Excel using Python but preserve Word format (bullet points) and structure
<p>Script to convert word documents to excel. Works well but fails to keep structure and certain characters like bullet points.</p> <pre><code> import tkinter as tk from tkinter import filedialog import re import os import subprocess from docx import Document from docx.oxml.text.paragraph import CT_P from docx.oxml.table import CT_Tbl from docx.text.paragraph import Paragraph from docx.table import Table from docx.oxml.ns import qn from openpyxl import Workbook import difflib # Control Flags EXTRACT_PARAGRAPHS = False # Global flag: if True, non-heading paragraphs are output. EXTRACT_TABLES = True # Global flag: if True, tables are extracted. ONLY_INCLUDE_HEADINGS_WITH_DATA = True # If True, an official heading is printed only when a data block follows. CUSTOM_HEADERS = [&quot;Document Info:&quot;] # Custom header texts that are always printed immediately. # Custom section rules: # Keys are lower-case, numbering-stripped header texts. # Values are sets of allowed data types for that section. CUSTOM_SECTION_RULES = { &quot;references&quot;: {&quot;Paragraph&quot;} } def sanitize_sheet_name(name): r&quot;&quot;&quot; Sanitize sheet names for Excel: - Ensure the name is not empty (if empty, use &quot;Default&quot;) - Remove invalid characters: : \ / ? * [ or ] - Limit to 31 characters. &quot;&quot;&quot; name = name.strip() if not name: name = &quot;Default&quot; invalid_chars = r'[:\\/*?\[\]]' sanitized = re.sub(invalid_chars, &quot;&quot;, name) return sanitized[:31] def sanitize_filename(filename): r&quot;&quot;&quot; Sanitize the filename so that only valid characters remain. This function: - Strips leading/trailing whitespace. - Replaces spaces with underscores. - Removes any character that is not alphanumeric, an underscore, hyphen, or period. - Removes any trailing periods. The resulting string is used as the base of the Excel filename. &quot;&quot;&quot; filename = filename.strip() # Replace spaces with underscores. filename = filename.replace(&quot; &quot;, &quot;_&quot;) # Allow only alphanumeric characters, underscores, hyphens, and periods. filename = re.sub(r'[^A-Za-z0-9_.-]', '', filename) # Remove trailing periods. filename = filename.rstrip(&quot;.&quot;) if not filename: filename = &quot;Default&quot; return filename def strip_heading_number(text): &quot;&quot;&quot; Remove leading numbering from heading text. e.g., &quot;4.3 References&quot; becomes &quot;References&quot;. &quot;&quot;&quot; return re.sub(r'^\d+(\.\d+)*\s*', '', text) def iter_block_items(parent): &quot;&quot;&quot; Yield each paragraph and table child within *parent* in document order. &quot;&quot;&quot; if hasattr(parent, 'element'): parent_element = parent.element.body else: parent_element = parent for child in parent_element: if isinstance(child, CT_P): yield Paragraph(child, parent) elif isinstance(child, CT_Tbl): yield Table(child, parent) def get_paragraph_numbering(paragraph, numbering_counters): &quot;&quot;&quot; Return a tuple (numbering_str, ilvl) for a numbered paragraph, or (None, None) if the paragraph isn’t numbered. &quot;&quot;&quot; pPr = paragraph._p.pPr if pPr is None: return (None, None) numPr = pPr.numPr if numPr is None: return (None, None) numId = numPr.numId ilvl = numPr.ilvl if numId is None or ilvl is None: return (None, None) numId_val = int(numId.val) ilvl_val = int(ilvl.val) if numId_val not in numbering_counters: numbering_counters[numId_val] = [0] * 9 # Assuming up to 9 levels. 
counters = numbering_counters[numId_val] counters[ilvl_val] += 1 for lvl in range(ilvl_val + 1, len(counters)): counters[lvl] = 0 numbering_str = '.'.join(str(counters[lvl]) for lvl in range(ilvl_val + 1) if counters[lvl] != 0) return (numbering_str, ilvl_val) def is_official_heading(block): return block.style and block.style.name.startswith(&quot;Heading&quot;) def is_custom_header(block, threshold=0.8): &quot;&quot;&quot; Check if the paragraph text fuzzy matches any custom header. The text from the block is normalized (lowercase, stripped of trailing colon) and then compared to each entry in CUSTOM_HEADERS using difflib. If the similarity ratio is equal to or exceeds the threshold, returns True. &quot;&quot;&quot; header_text = block.text.strip().lower().rstrip(&quot;:&quot;) for custom in CUSTOM_HEADERS: custom_normalized = custom.lower().rstrip(&quot;:&quot;) similarity = difflib.SequenceMatcher(None, header_text, custom_normalized).ratio() if similarity &gt;= threshold: return True return False def extract_table_as_list(table): &quot;&quot;&quot; Extract the table content as a list of lists. Each sublist represents a row with cell texts. &quot;&quot;&quot; table_data = [] for row in table.rows: row_data = [cell.text.strip() for cell in row.cells] table_data.append(row_data) return table_data def process_document(file_path): &quot;&quot;&quot; Process a single Word document. Extract sections and blocks based on control flags and custom rules, print output to the console, export to an Excel workbook saved in the same location, and return the number of items generated (total blocks). &quot;&quot;&quot; print(f&quot;\nProcessing file: {file_path}&quot;) doc = Document(file_path) blocks = list(iter_block_items(doc)) numbering_counters = {} # Build default allowed types from global flags. default_allowed = set() if EXTRACT_PARAGRAPHS: default_allowed.add(&quot;Paragraph&quot;) if EXTRACT_TABLES: default_allowed.add(&quot;Table&quot;) current_section_rule = default_allowed.copy() current_section = None # Section header name. pending_heading = None # For ONLY_INCLUDE_HEADINGS_WITH_DATA logic. # Dictionary to hold sections and their extracted blocks. sections = {} # Local counters for the current section: local_para_count = 1 local_table_count = 1 for block in blocks: if isinstance(block, Paragraph): text = block.text.strip() if is_custom_header(block): pending_heading = None current_section = text # Use custom header text as section name. sections[current_section] = [] current_section_rule = default_allowed.copy() local_para_count = 1 local_table_count = 1 elif is_official_heading(block): num_str, _ = get_paragraph_numbering(block, numbering_counters) heading_text_with_num = f&quot;{num_str} {text}&quot; if num_str else text stripped = strip_heading_number(text).lower() if stripped in CUSTOM_SECTION_RULES: current_section_rule = CUSTOM_SECTION_RULES[stripped] heading_text_print = strip_heading_number(text) else: current_section_rule = default_allowed.copy() heading_text_print = heading_text_with_num if ONLY_INCLUDE_HEADINGS_WITH_DATA: pending_heading = heading_text_print else: current_section = heading_text_print sections[current_section] = [] pending_heading = None local_para_count = 1 local_table_count = 1 else: # Non-heading paragraph. 
if text: if &quot;Paragraph&quot; in current_section_rule: if ONLY_INCLUDE_HEADINGS_WITH_DATA and pending_heading is not None: current_section = pending_heading if current_section not in sections: sections[current_section] = [] pending_heading = None local_para_count = 1 local_table_count = 1 if current_section is None: current_section = &quot;Default&quot; sections[current_section] = [] local_para_count = 1 local_table_count = 1 label = f&quot;[Paragraph{local_para_count}]&quot; sections[current_section].append({ &quot;type&quot;: &quot;Paragraph&quot;, &quot;content&quot;: text, &quot;label&quot;: label }) local_para_count += 1 elif isinstance(block, Table): if &quot;Table&quot; in current_section_rule: if ONLY_INCLUDE_HEADINGS_WITH_DATA and pending_heading is not None: current_section = pending_heading if current_section not in sections: sections[current_section] = [] pending_heading = None local_para_count = 1 local_table_count = 1 if current_section is None: current_section = &quot;Default&quot; sections[current_section] = [] local_para_count = 1 local_table_count = 1 label = f&quot;[Table{local_table_count}]&quot; table_data = extract_table_as_list(block) sections[current_section].append({ &quot;type&quot;: &quot;Table&quot;, &quot;content&quot;: table_data, &quot;label&quot;: label }) local_table_count += 1 # (Optional) Print extracted output. print(&quot;\nExtracted Output:&quot;) for section, blocks_list in sections.items(): print(f&quot;\nSection: {section}&quot;) for block in blocks_list: if block[&quot;type&quot;] == &quot;Paragraph&quot;: print(f&quot; {block['label']} Paragraph: {block['content']}&quot;) elif block[&quot;type&quot;] == &quot;Table&quot;: print(f&quot; {block['label']} Table:&quot;) for row in block[&quot;content&quot;]: print(&quot; &quot; + &quot;\t&quot;.join(row)) # Create an Excel workbook using openpyxl. wb = Workbook() default_sheet = wb.active wb.remove(default_sheet) for section, blocks_list in sections.items(): sheet_name = sanitize_sheet_name(section) ws = wb.create_sheet(title=sheet_name) row_pointer = 1 for block in blocks_list: if block[&quot;type&quot;] == &quot;Paragraph&quot;: ws.cell(row=row_pointer, column=1, value=block[&quot;label&quot;]) ws.cell(row=row_pointer, column=2, value=block[&quot;content&quot;]) row_pointer += 1 elif block[&quot;type&quot;] == &quot;Table&quot;: table_data = block[&quot;content&quot;] first_row = True for r in table_data: if first_row: ws.cell(row=row_pointer, column=1, value=block[&quot;label&quot;]) for c_idx, cell_text in enumerate(r, start=2): ws.cell(row=row_pointer, column=c_idx, value=cell_text) first_row = False else: for c_idx, cell_text in enumerate(r, start=2): ws.cell(row=row_pointer, column=c_idx, value=cell_text) row_pointer += 1 row_pointer += 1 # Blank row after table. directory = os.path.dirname(file_path) base = os.path.splitext(os.path.basename(file_path))[0] sanitized_base = sanitize_filename(base) output_file = os.path.join(directory, f&quot;{sanitized_base}.xlsx&quot;) wb.save(output_file) print(f&quot;\nExtraction complete. Data exported to {output_file}&quot;) # Calculate total number of items (blocks) generated in this file. blocks_count = sum(len(v) for v in sections.values()) return blocks_count def main(): # Prompt the user to select a folder. 
root = tk.Tk() root.withdraw() folder = filedialog.askdirectory(title=&quot;Select a Folder Containing Word Documents&quot;) if not folder: print(&quot;No folder selected.&quot;) return # Gather all DOCX files and group them by normalized base filename (ignoring version markers). regex = re.compile(r&quot;^(?P&lt;base&gt;.+?)(?:\s*\((?P&lt;version&gt;\d+)\))?\.docx$&quot;, re.IGNORECASE) grouping = {} # key: normalized base name, value: list of (version, filename) for filename in os.listdir(folder): if filename.lower().endswith(&quot;.docx&quot;): match = regex.match(filename) if match: base = match.group(&quot;base&quot;).strip().lower() version = int(match.group(&quot;version&quot;)) if match.group(&quot;version&quot;) else 0 grouping.setdefault(base, []).append((version, filename)) total_files_found = sum(len(v) for v in grouping.values()) unique_files_selected = len(grouping) duplicates_skipped = total_files_found - unique_files_selected # Select the highest version for each base and record duplicate details. files_to_process = {} duplicate_details = {} # key: base, value: dict with 'selected' and 'duplicates' list for base, files in grouping.items(): sorted_files = sorted(files, key=lambda x: x[0], reverse=True) chosen = sorted_files[0] files_to_process[base] = chosen if len(sorted_files) &gt; 1: duplicate_details[base] = { &quot;selected&quot;: chosen, &quot;duplicates&quot;: sorted_files[1:] } files_processed = 0 total_items_generated = 0 issues = [] for base, (version, filename) in files_to_process.items(): file_path = os.path.join(folder, filename) try: items_generated = process_document(file_path) total_items_generated += items_generated files_processed += 1 except Exception as e: issues.append(f&quot;Error processing {filename}: {e}&quot;) # Final Report print(&quot;\n===== FINAL REPORT =====&quot;) print(f&quot;Total DOCX files found: {total_files_found}&quot;) print(f&quot;Unique files selected for processing: {unique_files_selected}&quot;) print(f&quot;Duplicate/skipped files: {duplicates_skipped}&quot;) if duplicate_details: print(&quot;\nDetails of duplicate groups:&quot;) for base, info in duplicate_details.items(): selected_version, selected_file = info[&quot;selected&quot;] duplicates_list = &quot;, &quot;.join(f&quot;{fn} (v{ver})&quot; for ver, fn in info[&quot;duplicates&quot;]) print(f&quot; Group '{base}': selected -&gt; {selected_file} (v{selected_version}); duplicates -&gt; {duplicates_list}&quot;) print(f&quot;\nFiles processed successfully: {files_processed}&quot;) print(f&quot;Total items generated (blocks extracted): {total_items_generated}&quot;) if issues: print(&quot;\nIssues encountered:&quot;) for issue in issues: print(f&quot; - {issue}&quot;) else: print(&quot;\nNo issues encountered.&quot;) print(&quot;========================\n&quot;) if __name__ == &quot;__main__&quot;: main() </code></pre> <p>is it possible when extracting from the word document to Preserve some of the formatting and structure of the word documents?</p> <p>For example, bullet points, new paragraphs etc, if text is in Bold it says literally ** Text **.</p> <p>At the moment the script works great to put into excel but it largely pastes the text all as one block.</p> <p>Particularly disappointing to not have bullet points and correct spacing.</p> <p>Here is a before and after example just copying and pasting from the word document:</p> <p>word <a href="https://i.sstatic.net/kERjxv9b.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kERjxv9b.png" alt="enter image 
description here" /></a></p> <p>excel <a href="https://i.sstatic.net/7A9jr0Je.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7A9jr0Je.png" alt="enter image description here" /></a></p>
<python><openpyxl><python-docx>
2025-04-18 02:04:32
1
927
Nick
79,580,309
1,609,514
How to align Pandas Periods of frequency 12 hours to start and end at 00:00 and 12:00
<p>I want to divide a large dataset into 12-hour periods of data starting/ending at midnight and noon each day.</p> <p>I was planning to use <a href="https://pandas.pydata.org/docs/reference/api/pandas.Period.html" rel="nofollow noreferrer">Pandas.Period</a> for this but I noticed that it converts an arbitrary datetime to a 12-hour period beginning in the current hour, whereas what I want is the 12-hour period starting at 00:00 or 12:00 hours.</p> <pre><code>import pandas as pd dt = pd.to_datetime(&quot;2025-04-17 18:35&quot;) current_period = dt.to_period(freq='12h') print(current_period) </code></pre> <p>Output:</p> <pre class="lang-none prettyprint-override"><code>2025-04-17 18:00 </code></pre> <p>What I want is the following period:</p> <pre class="lang-none prettyprint-override"><code>2025-04-17 12:00 </code></pre>
<python><pandas><datetime><period>
2025-04-18 01:35:05
1
11,755
Bill
79,580,250
395,857
How can I export an encoder-decoder PyTorch model into a single ONNX file?
<p>I converted the PyTorch model <a href="https://huggingface.co/Helsinki-NLP/opus-mt-fr-en" rel="nofollow noreferrer"><code>Helsinki-NLP/opus-mt-fr-en</code></a> (HuggingFace), which is an encoder-decoder model for machine translation, to ONNX using this script:</p> <pre class="lang-py prettyprint-override"><code>import os from optimum.onnxruntime import ORTModelForSeq2SeqLM from transformers import AutoTokenizer, AutoConfig hf_model_id = &quot;Helsinki-NLP/opus-mt-fr-en&quot; onnx_save_directory = &quot;./onnx_model_fr_en&quot; os.makedirs(onnx_save_directory, exist_ok=True) print(f&quot;Starting conversion for model: {hf_model_id}&quot;) print(f&quot;ONNX model will be saved to: {onnx_save_directory}&quot;) print(&quot;Loading tokenizer and config...&quot;) tokenizer = AutoTokenizer.from_pretrained(hf_model_id) config = AutoConfig.from_pretrained(hf_model_id) model = ORTModelForSeq2SeqLM.from_pretrained( hf_model_id, export=True, from_transformers=True, # Pass the loaded config explicitly during export config=config ) print(&quot;Saving ONNX model components, tokenizer and configuration...&quot;) model.save_pretrained(onnx_save_directory) tokenizer.save_pretrained(onnx_save_directory) print(&quot;-&quot; * 30) print(f&quot;Successfully converted '{hf_model_id}' to ONNX.&quot;) print(f&quot;Files saved in: {onnx_save_directory}&quot;) if os.path.exists(onnx_save_directory): print(&quot;Generated files:&quot;, os.listdir(onnx_save_directory)) else: print(&quot;Warning: Save directory not found after saving.&quot;) print(&quot;-&quot; * 30) print(&quot;Loading ONNX model and tokenizer for testing...&quot;) onnx_tokenizer = AutoTokenizer.from_pretrained(onnx_save_directory) onnx_model = ORTModelForSeq2SeqLM.from_pretrained(onnx_save_directory) french_text= &quot;je regarde la tele&quot; print(f&quot;Input (French): {french_text}&quot;) inputs = onnx_tokenizer(french_text, return_tensors=&quot;pt&quot;) # Use PyTorch tensors print(&quot;Generating translation using the ONNX model...&quot;) generated_ids = onnx_model.generate(**inputs) english_translation = onnx_tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] print(f&quot;Output (English): {english_translation}&quot;) print(&quot;--- Test complete ---&quot;) </code></pre> <p>The output folder containing the ONNX files is:</p> <pre><code>franck@server:~/tests/onnx_model_fr_en$ ls -la total 860968 drwxr-xr-x 2 franck users 4096 Apr 16 17:29 . drwxr-xr-x 5 franck users 4096 Apr 17 23:54 .. 
-rw-r--r-- 1 franck users 1360 Apr 17 04:38 config.json -rw-r--r-- 1 franck users 346250804 Apr 17 04:38 decoder_model.onnx -rw-r--r-- 1 franck users 333594274 Apr 17 04:38 decoder_with_past_model.onnx -rw-r--r-- 1 franck users 198711098 Apr 17 04:38 encoder_model.onnx -rw-r--r-- 1 franck users 288 Apr 17 04:38 generation_config.json -rw-r--r-- 1 franck users 802397 Apr 17 04:38 source.spm -rw-r--r-- 1 franck users 74 Apr 17 04:38 special_tokens_map.json -rw-r--r-- 1 franck users 778395 Apr 17 04:38 target.spm -rw-r--r-- 1 franck users 847 Apr 17 04:38 tokenizer_config.json -rw-r--r-- 1 franck users 1458196 Apr 17 04:38 vocab.json </code></pre> <p>How can I export an opus-mt-fr-en PyTorch model into a single ONNX file?</p> <p>Having several ONNX files is an issue because:</p> <ol> <li>The PyTorch model shares the embedding layer with both the encoder and the decoder, and subsequently the export script above duplicates that layer to both the <code>encoder_model.onnx</code> and <code>decoder_model.onnx</code>, which is an issue as the embedding layer is large (represents ~40% of the PyTorch model size).</li> <li>Having both a <code>decoder_model.onnx</code> and <code>decoder_with_past_model.onnx</code> duplicates many parameters.</li> </ol> <p>The total size of the three ONNX files is:</p> <ul> <li><code>decoder_model.onnx</code>: <strong>346,250,804 bytes</strong></li> <li><code>decoder_with_past_model.onnx</code>: <strong>333,594,274 bytes</strong></li> <li><code>encoder_model.onnx</code>: <strong>198,711,098 bytes</strong></li> </ul> <p><strong>Total size</strong> = 346,250,804 + 333,594,274 + 198,711,098 = <strong>878,556,176 bytes</strong>. That’s approximately <strong>837.57 MB</strong>, which is almost 3 times larger than the original PyTorch model (300 MB).</p>
<python><pytorch><huggingface-transformers><onnx><machine-translation>
2025-04-17 23:59:25
0
84,585
Franck Dernoncourt
79,580,208
13,634,560
How to add jitter to plotly.go.scatter() in python when mode="lines"
<p>I have a dataframe as such:</p> <pre><code>Starting point Walking Driving Lunch 0 8 4 Shopping 0 7 3 Coffee 0 5 2 </code></pre> <p>Where I want to draw, for each index, a green line from &quot;Starting point&quot; -&gt; &quot;Walking&quot;, and a red line from &quot;Starting point&quot; -&gt; &quot;Coffee&quot;. To do this, I loop through both the columns and the index, as such:</p> <pre><code>for column in df7.columns: for idx in df7.index: fig9.add_trace( go.Scatter( # chart # chart # chart ) ) </code></pre> <p>Which gives me the following chart of two lines, colored by cycle, overlapping. <a href="https://i.sstatic.net/6t7GmHBM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6t7GmHBM.png" alt="enter image description here" /></a></p> <p>The question is: how can I modify the code in plotly python to create jitter in the y axis for the lines of different cycles? In other words, the shorter line will be above the longer line.</p> <p>Full MRE:</p> <pre><code>df7 = pd.DataFrame({ 'Starting point': [0, 0, 0], 'Walking': [8, 7, 5], 'Biking': [4, 3, 2] }, index=['Lunch', 'Shopping', 'Coffee']) fig9 = go.Figure() color_cyc = cycle([&quot;#888888&quot;, &quot;#E2062B&quot;]) symbol_cyc = cycle([&quot;diamond&quot;, &quot;cross&quot;]) for column in df7.columns: color=next(color_cyc) for idx in df7.index: fig9.add_trace( go.Scatter( y=[idx] * len(df7.loc[idx, [&quot;Starting point&quot;, column]]), x=df7.loc[idx, [&quot;Starting point&quot;, column]], showlegend=False, mode=&quot;lines+markers&quot;, marker={ &quot;color&quot;: color, &quot;symbol&quot;: &quot;diamond&quot;, # &quot;jitter&quot;: 0.4, }, ), ) fig9 </code></pre> <p>Thanks very much.</p>
<python><plotly><jitter>
2025-04-17 22:57:28
2
341
plotmaster473
79,580,145
2,499,281
How to focus tksheet in tkinter window
<p>I am trying to focus a tksheet so I can use the arrows to move around the cells, but nothing is working. I am trying to understand <a href="https://ragardner.github.io/tksheet/DOCUMENTATION.html#set-focus-to-the-sheet" rel="nofollow noreferrer">this documentation</a>.</p> <p>I tried</p> <pre><code>def key_table(event): sheet.focus_set('table') </code></pre> <p>paired with</p> <pre><code>sheet = tksheet.Sheet(root, height=200, width=1500, auto_resize_columns=30) root.bind(&quot;&lt;F7&gt;&quot;, key_table) </code></pre> <p>but when I press F7 and try to arrow or tab around the table nothing happens. It does switch focus away from the default focus (a text box), but not to the tksheet sheet, at least not that I can tell.</p>
<python><tkinter>
2025-04-17 21:51:16
1
794
catquas
79,580,115
103,252
Proper type hint for a listify function
<p>I'm using VS Code with Pylance, and having problems correctly writing type hints for the following function. To me, the semantics seem clear, but Pylance disagrees. How can I fix this without resorting to <code>cast()</code> or <code># type ignore</code>? Pylance warnings are given in the comments.</p> <p>I am fully committed to typing in Python, but it is often a struggle.</p> <pre><code>T = TypeVar(&quot;T&quot;) def listify(item: T | list[T] | tuple[T, ...] | None) -&gt; list[T]: if item is None: return [] elif isinstance(item, list): return item # Return type, &quot;List[Unknown]* | List[T@listify]&quot;, # is partially unknown elif isinstance(item, tuple): return list(item) # Return type, &quot;List[Unknown]* | List[T@listify]&quot;, # is partially unknown else: return [item] </code></pre>
<python><python-typing>
2025-04-17 21:27:26
1
1,932
Watusimoto
79,580,057
5,292,347
Creating an epub file with a cover in Python
<p>I`m trying to create an epub file in python.The table of content and the content including images of the epub file are both fine, only the cover is not showing up. The file is generated without a cover, it's not an error of having a cover without loading the image.</p> <p>It seems there is an issue with the cover function of the library and there is a workaround :</p> <p><a href="https://github.com/aerkalov/ebooklib/issues/198" rel="nofollow noreferrer">https://github.com/aerkalov/ebooklib/issues/198</a></p> <p>Unfortunately, this workaround is not working for me. Does anyone know how to fix this issue?</p> <pre><code>import ebooklib from ebooklib import epub import os import re from pathlib import Path IMAGES_DIR = 'Images' STYLES_DIR = 'Styles' FONTS_DIR = 'Fonts' DOCUMENTS_DIR = 'Text' DEFAULT_LANG = 'en' DEFAULT_DIRECTION = 'ltr' COVER_UID = 'CoverPage_html' COVER_TITLE = 'Cover' COVER_FILE_NAME = 'cover.xhtml' IMAGE_UID = 'cover' COVER_XML = &quot;&quot;&quot; &lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot; standalone=&quot;no&quot;?&gt; &lt;html xmlns=&quot;http://www.w3.org/1999/xhtml&quot;&gt; &lt;head&gt; &lt;title&gt;%(book_title)s&lt;/title&gt; &lt;/head&gt; &lt;body id=&quot;template&quot; xml:lang=&quot;%(language)s&quot; style=&quot;margin:0px; text-align: center; vertical-align: middle; background-color:#fff;&quot;&gt; &lt;div&gt; &lt;img src=&quot;%(cover_path)s&quot; style=&quot;height:100%%; text-align:center&quot;/&gt; &lt;/div&gt; &lt;/body&gt; &lt;/html&gt; &quot;&quot;&quot; def add_cover(epub_book, cover_file_path, IMAGE_FILE_NAME,language='en', cover_title='volume-cover',): cover_path = '../{}/{}'.format(IMAGES_DIR, IMAGE_FILE_NAME) cover_data = { 'book_title': epub_book.title, 'language': language, 'cover_path': cover_path } item = epub.EpubHtml( uid=COVER_UID, title=cover_title, file_name='{}/{}'.format(DOCUMENTS_DIR, COVER_FILE_NAME), content=COVER_XML % cover_data ) epub_book.add_item(item) epub_book.spine.insert(0, item) #epub_book.guide.insert(0, { # 'type': 'cover', # 'href': '{}/{}'.format(DOCUMENTS_DIR, COVER_FILE_NAME), # 'title': item.title, #}) image_file = open(cover_file_path, 'rb') image = epub.EpubCover() image.id = IMAGE_UID image.file_name = '{}/{}'.format(IMAGES_DIR, IMAGE_FILE_NAME) image.set_content(image_file.read()) epub_book.add_item(image) image_file.close() epub_book.add_metadata( None, 'meta', '', { 'name': 'cover', 'content': IMAGE_UID } ) def MakeVolumeEpub(volume_dir,volume_name): book = epub.EpubBook() # Set metadata book.set_title('My Story - ' + volume_name) book.set_language('en') toc_list = [] chapter_list = [] cover_path = os.path.join(volume_dir,IMAGES_DIR, 'cover.jpg') cover_content_type = &quot;image/jpeg&quot; cover_name='cover.jpg' if not Path(cover_path).is_file(): cover_name='cover.jpeg' cover_path = os.path.join(volume_dir,IMAGES_DIR, cover_name) if not Path(cover_path).is_file(): cover_name='cover.png' cover_content_type = &quot;image/png&quot; cover_path = os.path.join(volume_dir,IMAGES_DIR, cover_name) # Add text content for item in os.listdir(volume_dir): item_path = os.path.join(volume_dir, item) if os.path.isdir(item_path) and item.startswith('Chapter '): temp_file_chapter_name = DOCUMENTS_DIR + '/' + item.lower().replace(&quot; &quot;, &quot;-&quot;) + '.xhtml' chapter_sub_folder = item.lower().replace(&quot; &quot;, &quot;-&quot;) chapter_content_path = os.path.join(item_path, 'content.txt') with open(chapter_content_path, 'r', encoding='utf8') as file: firstLinePosition = file.tell() chapter_title = 
file.readline() file.seek(firstLinePosition) chapter_title = re.sub(r'&lt;.*?&gt;', '', chapter_title).strip() chapter = epub.EpubHtml(title=chapter_title, file_name=temp_file_chapter_name, content=file.read()) book.add_item(chapter) chapter_list.append(chapter) toc_list.append(epub.Link(chapter.file_name, chapter.title, chapter.id)) #Add images images_directory = os.path.join(volume_dir,IMAGES_DIR) if os.path.exists(images_directory): image_files = [f for f in os.listdir(images_directory) if f.lower().endswith(('.jpg', '.jpeg', '.png', '.gif'))] for idx, image_filename in enumerate(image_files, start=1): if image_filename.startswith(&quot;cover&quot;): continue image_path = os.path.join(images_directory, image_filename) with open(image_path, 'rb') as img_file: # Create an EpubItem for the image image_item = epub.EpubItem( uid=f'{chapter_sub_folder}-image{idx}', # Unique ID for the image file_name=f'{IMAGES_DIR}/{image_filename}', # Path inside the EPUB media_type='image/jpeg' if image_filename.lower().endswith('.jpg') or image_filename.lower().endswith('.jpeg') else 'image/png', # MIME type content=img_file.read() # Content of the image ) book.add_item(image_item) book.toc = tuple(toc_list) #book.add_item(epub.EpubNcx()) book.add_item(epub.EpubNav()) book.spine = ['nav'] + chapter_list if Path(cover_path).is_file(): add_cover(book,cover_path,cover_name) print('Creating ' + volume_name) epub.write_epub(os.path.join('C:\\Users\\UserName\\Desktop\\Novels','My Story - ' + volume_name + '.epub'), book, None) </code></pre>
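<p>For comparison, the shorter built-in route would look roughly like this (a hedged sketch; the manual <code>add_cover</code> above is my attempt at the workaround from the linked issue):</p> <pre><code># Sketch only: ebooklib's built-in helper registers the cover image and,
# by default, also generates a cover page. It has to be called before write_epub.
if Path(cover_path).is_file():
    with open(cover_path, 'rb') as img:
        book.set_cover(cover_name, img.read())   # create_page=True by default
</code></pre>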
<python><epub>
2025-04-17 20:42:06
0
1,224
Carlos Siestrup
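<p>For the cover question above, a minimal sketch of the simpler built-in route, assuming ebooklib's <code>EpubBook.set_cover()</code> helper is acceptable for this use case: it adds the cover image item, writes the cover metadata entry and (by default) generates a cover page. The file names below are placeholders.</p>
<pre><code>from ebooklib import epub

def add_cover_simple(book: epub.EpubBook, cover_path: str, image_name: str = 'cover.jpg') -&gt; None:
    # set_cover() registers the image item, writes the cover metadata and,
    # with create_page=True, also generates a cover XHTML page.
    with open(cover_path, 'rb') as f:
        book.set_cover(image_name, f.read(), create_page=True)

# Usage sketch: call this before write_epub() and keep the generated cover
# page first in the spine (check the uid of the generated page in your
# installed ebooklib version).
# add_cover_simple(book, 'C:/path/to/cover.jpg')
# book.spine = ['cover', 'nav'] + chapter_list
</code></pre>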
79,579,904
25,874,132
why is there a complex argument in the array here trying to do DTFS?
<p>I have a window signal, which i calculate it's Fourier coefficients but then in the output i get a small complex value [3 order of magnitude less in the origin and the same order of magnitude in the edges of my sampling (from -1000 to 1000)] where the output should be purely real, if it was just a floating point approx error it wouldn't have the pattern of sin like i found, so what could cause this?</p> <p>It's all done in Python (through Google Colab) with the libraries cmath, numpy, and matplotlib alone.</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import cmath import matplotlib.pyplot as plt D=1000 a=np.zeros(2*D+1) for i in range(-100,100): a[i+D] = 1 </code></pre> <p>This code creates the following window signal:</p> <p><a href="https://i.sstatic.net/Z4NNhH9m.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Z4NNhH9m.png" alt="enter image description here" /></a></p> <p>and then to get the fourier coefficients:</p> <pre class="lang-py prettyprint-override"><code>j=complex(0,1) pi=np.pi N=2001 ak = np.zeros(N, dtype=complex) for k in range(-D, D + 1): for n in range(-D, D + 1): ak[k + D] += (1/N) * a[n + D] * cmath.exp(-j * 2 * pi * k * n / N) </code></pre> <p>which gives me the following image:</p> <p><a href="https://i.sstatic.net/3KqP17Pl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3KqP17Pl.png" alt="enter image description here" /></a></p> <p>when i do the math in paper i get the coeficients to be:</p> <p><a href="https://i.sstatic.net/OlRI2HG1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OlRI2HG1.png" alt="enter image description here" /></a></p> <p>where omega0 is 2*pi/N and a[n]=u(n+100)-u(n-100) (where u(n) is the Heaveside function).</p> <p>but it's weird as when i plot the imaginary part of ak i get that it's (at a very good approximation) equal to 1/N *sin(201*pi/N where N=2001 and is my entire sampling area (from -1000 to 1000 as i mentioned), which if it was just a rounding error because of floating point it wouldn't be in this form.</p> <p>here it's the plot of the imaginary part of ak, not freq of ak or bk.</p> <p><a href="https://i.sstatic.net/Cbpstqqr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Cbpstqqr.png" alt="enter image description here" /></a></p>
<python><numpy><math>
2025-04-17 18:55:48
2
314
Nate3384
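<p>For the DTFS question above, a quick vectorised cross-check of the same sum can help separate floating-point noise from a systematically non-zero imaginary part. This is just the posted double loop rewritten as one matrix-vector product, with the same window (samples n = -100 to 99, i.e. not symmetric about n = 0); nothing else is assumed.</p>
<pre><code>import numpy as np

D = 1000
N = 2 * D + 1
a = np.zeros(N)
a[D - 100 : D + 100] = 1           # same as the loop: samples n = -100 .. 99

n = np.arange(-D, D + 1)
k = np.arange(-D, D + 1)
# DTFS coefficients in one shot instead of the O(N^2) Python double loop
ak = np.exp(-2j * np.pi * np.outer(k, n) / N) @ a / N

print(np.abs(ak.imag).max())                           # size of the imaginary part
print(np.abs(ak.imag).max() / np.abs(ak.real).max())   # relative to the real part
</code></pre>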
79,579,884
2,125,392
How to tell Python to re-evaluate a parent class method's interface?
<p>Over the years, I can't count how many times I wanted to do something like this in Python:</p> <pre><code>class A: DEFAULT_MESSAGE = &quot;Unspecified Country&quot; def do_it(message=DEFAULT_MESSAGE): print(message) class B(A): DEFAULT_MESSAGE = &quot;Mexico&quot; </code></pre> <p>And have this result in:</p> <pre><code>&gt; A().do_it() Unspecified Country &gt; B().do_it() Mexico </code></pre> <p>Without having to redefine the method:</p> <pre><code>class B(A): DEFAULT_MESSAGE = &quot;Mexico&quot; def do_it(message=DEFAULT_MESSAGE): super().do_it(message) </code></pre> <p>Or do an arguably bad design pattern of setting the default <code>message</code> to <code>None</code> for the method, and instead reading the <code>DEFAULT_MESSAGE</code> in the implementation of the method.</p> <p>Is there a better, more DRY design-pattern that helps with this? Or is there some way to tell Python to re-evaluate the interfaces on parent class methods when defining a new, sub-class?</p>
<python><class>
2025-04-17 18:40:53
1
15,484
CivFan
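<p>For the default-message question above: default argument values are evaluated once, when the function is defined, so no subclass attribute can ever be picked up that way; the lookup has to happen at call time. A small hedged sketch of a reusable decorator that keeps the methods DRY (the <code>class_default</code> helper is hypothetical, not from the standard library):</p>
<pre><code>import functools

def class_default(attr_name):
    # Fill a missing `message` argument from a class attribute resolved on the
    # instance at call time, so subclass overrides are honoured.
    def decorator(func):
        @functools.wraps(func)
        def wrapper(self, message=None, *args, **kwargs):
            if message is None:
                message = getattr(self, attr_name)
            return func(self, message, *args, **kwargs)
        return wrapper
    return decorator

class A:
    DEFAULT_MESSAGE = 'Unspecified Country'

    @class_default('DEFAULT_MESSAGE')
    def do_it(self, message):
        print(message)

class B(A):
    DEFAULT_MESSAGE = 'Mexico'

A().do_it()   # Unspecified Country
B().do_it()   # Mexico
</code></pre>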
79,579,614
4,528,716
@Gtk.Template and inheritance
<p>I'm trying to make an abstract class and it's subclass in python, where subclasses can be loaded from .ui template. I've managed to make it work more or less, but still can't understand why I'm getting one warning. Let's start with code:</p> <pre><code>from abc import ABCMeta, abstractmethod from gi.repository import Gtk # Abstract class that must inherit Gtk.Box. class GtkBoxABCMeta(ABCMeta, type(Gtk.Box)): pass </code></pre> <ul> <li></li> </ul> <pre><code>from gi.repository import Gtk from .gtkboxabcmeta import GtkBoxABCMeta class MainSection(Gtk.Box, metaclass=GtkBoxABCMeta): &quot;&quot;&quot;This class is an abstract definition for Main Sections in the application.&quot;&quot;&quot; &quot;&quot;&quot;Everything that implements this will be shown as entry in side menu.&quot;&quot;&quot; __gtype_name__ = &quot;MainSection&quot; label: str icon: str @classmethod def create_section(cls) -&gt; Gtk.Widget: return cls() </code></pre> <ul> <li></li> </ul> <pre><code>from gi.repository import Gtk from .main_section import MainSection @Gtk.Template(resource_path='/com/damiandudycz/CatalystLab/main_sections/welcome_section.ui') class WelcomeSection(MainSection): __gtype_name__ = &quot;WelcomeSection&quot; label = &quot;Welcome&quot; icon = &quot;aaa&quot; </code></pre> <ul> <li></li> </ul> <pre><code>&lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot;?&gt; &lt;interface&gt; &lt;!-- Libraries --&gt; &lt;requires lib=&quot;gtk&quot; version=&quot;4.0&quot;/&gt; &lt;!-- Template --&gt; &lt;template class=&quot;WelcomeSection&quot; parent=&quot;MainSection&quot;&gt; &lt;property name=&quot;orientation&quot;&gt;vertical&lt;/property&gt; &lt;child&gt; &lt;object class=&quot;GtkLabel&quot;&gt; &lt;property name=&quot;label&quot;&gt;Hello World!&lt;/property&gt; &lt;property name=&quot;vexpand&quot;&gt;True&lt;/property&gt; &lt;/object&gt; &lt;/child&gt; &lt;/template&gt; &lt;/interface&gt; </code></pre> <p>And while this works and displays the view from .ui correctly, I'm getting this warning in logs:</p> <pre><code>Gtk-CRITICAL **: 17:31:09.733: gtk_widget_init_template: assertion 'template != NULL' failed </code></pre> <p>Would appreciate if someone could explain me why this warning is there, and how I should achieve this correctly.</p>
<python><inheritance><gtk>
2025-04-17 15:36:37
0
2,800
Damian Dudycz
79,579,581
6,829,655
Memory leak on ECS fargate sidecar container
<p>I have a Python application running on AWS ECS Fargate with a <strong>simple</strong> sidecar container. The sidecar is responsible for:</p> <ul> <li>Receiving gRPC requests from the main application(on localhost:port)</li> <li>Querying a local SQLite database to extract configuration data</li> <li>Simple transform and preparation of gRPC response</li> <li>Returning the response back to the main app</li> </ul> <p>The SQLite database is populated at container startup with records from a DynamoDB table and acts as a fast read cache.</p> <p>The setup works as expected, but I've noticed a slow and steady increase in memory usage over time, as seen in CloudWatch Container Insights.</p> <p>I’m really struggling to figure out what could be causing this memory growth. Has anyone experienced something similar or have any suggestions on what to investigate?</p>
<python><amazon-web-services><memory-leaks><amazon-ecs><aws-fargate>
2025-04-17 15:20:08
1
651
datahack
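<p>For the sidecar memory-growth question above: before blaming gRPC or SQLite, it can help to confirm whether the growth is even in Python objects. A minimal sketch using only the standard library's <code>tracemalloc</code>, exposed through whatever periodic hook the sidecar already has (timer, debug RPC, log line); the function name is a placeholder.</p>
<pre><code>import tracemalloc

tracemalloc.start(10)                      # keep 10 stack frames per allocation
_baseline = tracemalloc.take_snapshot()

def log_top_allocations(limit: int = 10) -&gt; list[str]:
    # Call periodically and compare against the baseline; entries whose size
    # keeps growing between calls point at the leaking code path.
    snapshot = tracemalloc.take_snapshot()
    stats = snapshot.compare_to(_baseline, 'lineno')
    return [str(s) for s in stats[:limit]]
</code></pre>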
79,579,568
3,343,425
task.result() throws InvalidStateError instead of CancelledError on cancelled task
<p>I built the following construct which is supposed to call a bunch of asynchronous operations and return as soon as any one of them completes or the timeout expires.</p> <pre class="lang-py prettyprint-override"><code>import asyncio async def async_operation(): while True: await asyncio.sleep(1) async def main(): task1 = None task2 = None try: task1 = asyncio.create_task(async_operation()) task2 = asyncio.create_task(async_operation()) await asyncio.wait([task1, task2], timeout=2, return_when=asyncio.FIRST_COMPLETED) finally: if task1 and not task1.done(): print(&quot;Cancelling task1!&quot;) # this is printed! task1.cancel() if task2 and not task2.done(): task2.cancel() try: # This throws: # asyncio.exceptions.InvalidStateError: Result is not set. res = task1.result() # I would expect one of the following two cases to be true: # # 1. the task completes and thus the result is set or # 2. it is cancelled in the finally block and thus calling .result() # should raise CancelledError and not InvalidStateError print(f&quot;Got result: ${res}&quot;) except asyncio.CancelledError: print(f&quot;Caught CancelledError!&quot;) asyncio.run(main()) </code></pre> <p>As pointed out in the code, it is not clear to me why trying to read the result of a cancelled task raises a <code>InvalidStateError</code> and not a <code>CancelledError</code>.</p> <p>Looking at the <a href="https://docs.python.org/3/library/asyncio-task.html#task-cancellation" rel="nofollow noreferrer">documentation</a> I'm getting a suspicion that it's because the cancellation is only scheduled and the coroutine is no longer actually being executed (in the IO loop) and thus it will never actually be cancelled.</p> <p>I would welcome an explanation of the behavior as well as a suggestions for a more idiomatic implementation.</p>
<python><python-asyncio>
2025-04-17 15:12:13
2
725
fghibellini
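<p>For the <code>InvalidStateError</code> question above, the suspicion in the question is essentially right: <code>cancel()</code> only schedules a <code>CancelledError</code> to be thrown into the coroutine; until the event loop gets a chance to run the task again, the task is not done, and <code>result()</code> on a not-yet-done task raises <code>InvalidStateError</code>. A hedged sketch of the usual shape, awaiting the cancelled tasks so they reach their final state:</p>
<pre><code>import asyncio

async def async_operation():
    while True:
        await asyncio.sleep(1)

async def main():
    task1 = asyncio.create_task(async_operation())
    task2 = asyncio.create_task(async_operation())
    await asyncio.wait([task1, task2], timeout=2,
                       return_when=asyncio.FIRST_COMPLETED)

    for task in (task1, task2):
        if not task.done():
            task.cancel()
    # Let the loop actually deliver the CancelledError; only after this await
    # are the tasks in their final (cancelled or finished) state.
    await asyncio.gather(task1, task2, return_exceptions=True)

    for task in (task1, task2):
        if task.cancelled():
            print('task was cancelled')
        else:
            print('result:', task.result())

asyncio.run(main())
</code></pre>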
79,579,457
2,864,143
Navigate to a different file from terminal
<p>I am debugging my package, written in Python, from a venv, so the repo source files (where my changes to the source appear) are different from the source files installed in the venv.</p> <p>Is it somehow possible (with some extension?) to map the paths that are included in the Python exception trace (the venv ones) to the files in the repo dir (which VS Code has opened as a folder)?</p> <p>The way I see it is as follows: I create map entries of the form:</p> <pre><code>Source: &lt;fromDir&gt;/*.py; Target: &lt;toDir&gt;/*.py </code></pre> <p>or just like this:</p> <pre><code>Source: &lt;fromDir&gt;; Target: &lt;toDir&gt; </code></pre> <p>And the VS Code &quot;Open file in editor&quot; feature would use this mapping to resolve the target file path.</p> <p>Is there something like that in 2025 in VS Code?</p>
<python><visual-studio-code><python-venv>
2025-04-17 14:11:42
0
960
Student4K
79,579,330
14,860,526
Python multiprocessing.Queue strange behaviour
<p>Hi I'm observing a strange behaviour with python multiprocessing Queue object.</p> <p>My environment:</p> <pre><code>OS: Windows 10 python: 3.13.1 </code></pre> <p>but I observed the same with:</p> <pre><code>OS: Windows 10 python: 3.12.7 </code></pre> <p>and:</p> <pre><code>OS: Windows 10 python: 3.10.14 </code></pre> <p>while I could not reproduce it in Linux Redhat.</p> <p>I have this short script:</p> <pre><code>from multiprocessing import Queue a = list(range(716)) queue: Queue = Queue() for item in a: queue.put(item) raise ValueError(f&quot;my len: {len(a)}&quot;) </code></pre> <p>If I run it, everything is ok, it raises the error and exits:</p> <pre><code>Traceback (most recent call last): File &quot;C:\Users\uXXXXXX\AppData\Roaming\JetBrains\PyCharmCE2024.3\scratches\scratch_1.py&quot;, line 7, in &lt;module&gt; raise ValueError(f&quot;my len: {len(a)}&quot;) ValueError: my len: 716 Process finished with exit code 1 </code></pre> <p>but if i change the number from 716 to 717 or any other number above it, it raises the error but doesn't exit, the script hangs there. and when I forcefully stop the script it exits with code -1</p> <pre><code>Traceback (most recent call last): File &quot;C:\Users\uXXXXXX\AppData\Roaming\JetBrains\PyCharmCE2024.3\scratches\scratch_1.py&quot;, line 7, in &lt;module&gt; raise ValueError(f&quot;my len: {len(a)}&quot;) ValueError: my len: 717 Process finished with exit code -1 </code></pre> <p>Can you please help me solve this strange behaviour? i would like it to always automatically exit with code 1</p>
<python><queue><python-multiprocessing><exit-code>
2025-04-17 13:12:58
1
642
Alberto B
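<p>For the hanging-queue question above: <code>multiprocessing.Queue</code> writes items to the underlying pipe from a background feeder thread, and the documentation states that a process which has put items on a queue will wait on exit until all buffered items have been flushed to the pipe. Once enough items are queued to fill the pipe buffer (apparently around 717 small integers here) and nobody ever reads them, that flush can never finish, so the interpreter hangs after printing the traceback. A hedged sketch of the documented escape hatches:</p>
<pre><code>from multiprocessing import Queue

queue: Queue = Queue()
for item in range(1000):
    queue.put(item)

# Option 1: tell the feeder thread not to block interpreter exit on unflushed
# data (documented as cancel_join_thread(); silently drops whatever is buffered).
queue.cancel_join_thread()

# Option 2 (alternative): drain the queue before raising, so nothing is buffered.
# while not queue.empty():
#     queue.get_nowait()

raise ValueError('my len: 1000')
</code></pre>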
79,579,181
447,426
Exponential run time behavior in simple pyspark operation
<p>My task is simple, i have a binary file that needs to be split into 8byte chunks where first 4bytes contain data (to be decoded in later step) the 2nd 4byte contain an int (time offset in ms). Some of these 8byte blocks are repeated - i want them to be removed. If at the end i have a reminder less than 8byte i mark it. My problem is the runtime behavior: 200kb -&gt; 39s 1,200kb -&gt; 2:02min 6 Mb -&gt; 49min</p> <p>Rendering my approach useless. Here is the code:</p> <pre><code>def process(self, df: DataFrame): &quot;&quot;&quot; Split byte arrays into 8-byte chunks before transforming to binary. &quot;&quot;&quot; # Define temporary column names byte_offset_col = &quot;byte_offset&quot; chunk_raw_col = &quot;chunk_raw&quot; chunk_hex_col = &quot;chunk_hex&quot; wrong_length = &quot;wrong_length&quot; # Processing df_offset_bytes = df.withColumn( byte_offset_col, f.expr(f&quot;sequence(0, length({GamaExtractionColumns.CONTENT}) - 1, {self.chunk_size})&quot;), # Generate offsets ) df_split_to_chunk_array = df_offset_bytes.withColumn( chunk_raw_col, f.expr( f&quot;transform({byte_offset_col}, x -&gt; substring({GamaExtractionColumns.CONTENT}, x + 1, {self.chunk_size}))&quot; ), # Extract chunks ) df_chunk_rows = df_split_to_chunk_array.withColumn( self.BYTE_CHUNKS, f.explode(f.col(chunk_raw_col)), # Explode into rows ) df_hex_values = df_chunk_rows.withColumn( chunk_hex_col, f.expr(f&quot;hex({self.BYTE_CHUNKS})&quot;), # Convert chunk to hex ) df_check_length = df_hex_values.withColumn( wrong_length, f.when((f.length(self.BYTE_CHUNKS) &lt; self.chunk_size), f.lit(1)).otherwise(f.lit(0)), ) df_wrong_length = df_check_length.filter(f.col(wrong_length) == 1) if not df_wrong_length.isEmpty(): self.logger.error( f&quot;Found entry with wrong lenth in file {df_wrong_length.select([GamaExtractionColumns.PATH]).first()[0]}.&quot; ) df_result = df_check_length.filter(f.col(wrong_length) == 0).drop( GamaExtractionColumns.CONTENT, byte_offset_col, chunk_raw_col, chunk_hex_col, wrong_length ) return df_result.distinct() </code></pre> <p>Removing distinct doubles the speed for small file for 6MB file i gain 9min. So it helps but i need to get rid of this exponential behavior. Any advice how to approach this - i am open to completely rework it if necessary. How to get O(n)? (IMHO this should be possible)</p> <p><strong>Update</strong>: My assumption is/was that spark may be not the best fit for small data - like small files. but at the moment it works only with small files / small data. I hope i am just using spark wrong?!</p> <p>2nd: Currently i am using <code>spark.read.format(&quot;binaryFile&quot;).option(&quot;pathGlobFilter&quot;, &quot;*.gama&quot;).load(folder)</code> to read the file. mainly it abstract the file system (local works as welss as hdfs or azure as long the driver is present) So a possible solution should not loose this nice abstraction</p>
<python><apache-spark><pyspark><time-complexity>
2025-04-17 11:48:19
3
13,125
dermoritz
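<p>For the PySpark chunking question above, two things in the posted pipeline tend to multiply work as the file grows: <code>isEmpty()</code> on the filtered frame launches a separate job that re-evaluates the whole (uncached) lineage, and <code>distinct()</code> over all columns shuffles more data than deduplicating on the chunk alone. A hedged sketch of the same chunking with one cached exploded frame; <code>content</code> and <code>path</code> stand in for the <code>GamaExtractionColumns</code> constants, and the chunk size is hard-coded to 8:</p>
<pre><code>import pyspark.sql.functions as f

chunked = (
    df.withColumn('byte_offset',
                  f.expr('sequence(0, length(content) - 1, 8)'))
      .withColumn('chunk', f.expr(
          'transform(byte_offset, x -&gt; substring(content, x + 1, 8))'))
      .select('path', f.explode('chunk').alias('chunk'))
      .cache()
)

bad_count = chunked.filter(f.length('chunk') &lt; 8).count()   # one job, reuses the cache
if bad_count:
    print('found', bad_count, 'short chunks')

result = chunked.filter(f.length('chunk') == 8).dropDuplicates(['chunk'])
</code></pre>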
79,578,969
12,156,208
How do I replace "NaN" in a pandas dataframe with a dictionary?
<p>Sample dataframe:</p> <pre class="lang-py prettyprint-override"><code>df = pd.DataFrame([[1, {'t': 1}], [2, {'t': 2}], [3, np.nan], [4, np.nan]], columns=['A', 'B']) </code></pre> <pre><code> A B 0 1 {'t': 1} 1 2 {'t': 2} 2 3 NaN 3 4 NaN </code></pre> <p>What I want:</p> <pre><code> A B 0 1 {'t': 1} 1 2 {'t': 2} 2 3 {'t': 0} 3 4 {'t': 0} </code></pre> <p>For some reason, the below does not work -</p> <pre><code>df.fillna({'B': {'t': 0}}, inplace=True) </code></pre> <p>Is there a restriction on what values can be used for fillna?</p> <p>I have this solution which works,</p> <pre><code>df['B'] = df['B'].mask(df['B'].isna(), {'t': 0}) </code></pre> <p>but I want to know if there is a better way to do this? For example, this fails if 'B' does not exist in the dataframe, whereas the fillna doesn't fail in similar case</p>
<python><pandas><dataframe>
2025-04-17 09:58:15
2
1,206
r4bb1t
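<p>For the <code>fillna</code> question above: <code>fillna</code> does not accept dicts as fill values (a dict argument is interpreted as a per-column mapping), so the <code>mask</code> approach is reasonable. One hedged alternative that also survives a missing column and gives every row its own dict object (the single shared dict used with <code>mask</code> means mutating one row's dict mutates them all):</p>
<pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame([[1, {'t': 1}], [2, {'t': 2}], [3, np.nan], [4, np.nan]],
                  columns=['A', 'B'])

if 'B' not in df.columns:          # fillna-like tolerance for a missing column
    df['B'] = np.nan

df['B'] = df['B'].apply(lambda v: v if isinstance(v, dict) else {'t': 0})
print(df)
</code></pre>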
79,578,619
11,110,509
MS Graph SDK - How to perform date range query to get emails in certain time frame?
<p>I'm using the python MS graph sdk to try to perform a query. I'm able to successfully get all of the user's emails, but I can't seem to be able to use the sdk to get the emails from a certain time range. I've searched the their repo</p> <p><a href="https://github.com/microsoftgraph/msgraph-sdk-python" rel="nofollow noreferrer">https://github.com/microsoftgraph/msgraph-sdk-python</a></p> <p>and also googled for quite a while and there doesn't seem to be any examples on how to do so.</p> <p>Anyone have an idea of how to do it?</p> <pre><code>MAIL_FOLDER_ID = &quot;inbox&quot; SELECT_FIELDS = [ &quot;id&quot;, &quot;webLink&quot;, &quot;from&quot;, &quot;toRecipients&quot;, &quot;receivedDateTime&quot;, &quot;subject&quot;, &quot;body&quot;, &quot;lastModifiedDateTime&quot;, &quot;hasAttachments&quot;, &quot;attachments&quot;, &quot;bodyPreview&quot;, ] TOP_VALUE = 1000 ORDER_BY = [&quot;receivedDateTime DESC&quot;] SCOPES = [&quot;https://graph.microsoft.com/.default&quot;] query_params = ( MessagesRequestBuilder.MessagesRequestBuilderGetQueryParameters( select=SELECT_FIELDS, top=TOP_VALUE, orderby=ORDER_BY ) ) self.request_config = ( MessagesRequestBuilder.MessagesRequestBuilderGetRequestConfiguration( query_parameters=query_params ) ) page = ( await self.graphClient .me .mail_folders.by_mail_folder_id(MAIL_FOLDER_ID) .messages.get(request_configuration=self.request_config) ) </code></pre>
<python><outlook><microsoft-graph-api><outlook-restapi>
2025-04-17 06:36:08
1
2,703
WHOATEMYNOODLES
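<p>For the Graph SDK question above, a hedged sketch: the kiota-generated query-parameter classes expose the OData <code>$filter</code> option as a <code>filter</code> field, so a date range can be expressed as a <code>receivedDateTime</code> filter. This assumes the same <code>MessagesRequestBuilder</code> import as in the question, and the dates are placeholders. Note that Graph can be picky when <code>$filter</code> and <code>$orderby</code> are combined on messages, so if the request is rejected, keep the filtered property in the <code>orderby</code> list as well.</p>
<pre><code>query_params = MessagesRequestBuilder.MessagesRequestBuilderGetQueryParameters(
    select=SELECT_FIELDS,
    top=TOP_VALUE,
    orderby=ORDER_BY,
    filter=('receivedDateTime ge 2025-01-01T00:00:00Z '
            'and receivedDateTime lt 2025-02-01T00:00:00Z'),
)
</code></pre>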
79,578,520
13,946,204
How to annotate dict-like object with key-value structure in Python?
<p>Let's say that I have a class like:</p> <pre class="lang-py prettyprint-override"><code>from typing import Any class A: def __init__(self, *args): self.items: list[int] = list(args) def __getitem__(self, k: str): v = int(k) if v in self.items: return v return None def __setitem__(self, k: str, _: Any = None): v = int(k) print(f'Value {_} is not important') if v not in self.items: self.items.append(int(k)) </code></pre> <p>Elements of the class are accessible by <code>object[element_name]</code> syntax. And it is obvious that value of element will be an integer:</p> <pre class="lang-py prettyprint-override"><code>a = A(1, 2, 3) a['4'] = 'Something' print(a['2']) # 2 print(a['4']) # 4 </code></pre> <p>Annotation for a dictionary with the same behavior will be like <code>dict[str, int]</code>. Where key is <code>str</code> and value is <code>int</code>. But how to annotate these generic types for custom objects?</p> <p>Because <code>A[str, int]</code> is not working.</p> <h2>Important note</h2> <p>It is supposed that <code>A</code> class provided in example is a third-party class that is not able to change. It is possible to set <code>B</code> class that will be a child of <code>A</code>:</p> <pre class="lang-py prettyprint-override"><code>class B(A): pass </code></pre> <p>But behavior of <code>B</code> and <code>A</code> classes should be exactly the same. Only annotations must be supported</p>
<python><python-typing>
2025-04-17 05:18:39
2
9,834
rzlvmp
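<p>For the typing question above, since <code>A</code> itself cannot be touched, one hedged option is to make <code>B</code> generic and hide checker-only overloads behind <code>TYPE_CHECKING</code>, so the runtime behaviour stays exactly that of <code>A</code>:</p>
<pre><code>from typing import TYPE_CHECKING, Any, Generic, Optional, TypeVar

K = TypeVar('K')
V = TypeVar('V')

class B(A, Generic[K, V]):          # A is the third-party class from the question
    # The stubs below are only seen by the type checker; at runtime this block
    # is skipped, so A's methods are used unchanged.
    if TYPE_CHECKING:
        def __getitem__(self, k: K) -&gt; Optional[V]: ...
        def __setitem__(self, k: K, v: Any = None) -&gt; None: ...

box: B[str, int] = B(1, 2, 3)
print(box['2'])
</code></pre>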
79,578,306
1,413,856
Implementing DKIM or SPF in a Python package
<p>(I’m a bit vague about how email authentication works, so please bear with me).</p> <p>I am writing a Python application which will send an email. This app is then packaged using PyInstaller so that it can be deployed to other non-Python users.</p> <p>I’ve got it working to some extent. The technique I use is:</p> <ul> <li>Create a message with <code>email.message.EmailMessage()</code> from the <code>email</code> module.</li> <li>Possibly include an attached file, using <code>mimetypes.guess_type()</code> from <code>mimetypes</code> to work out the mime type.</li> <li>Get the recipient’s mail server using <code>email.utils.parseaddr()</code>.</li> <li>Use <code>dns.resolver.resolve()</code> from <code>dns.resolver</code> to find the MX for the recipient.</li> <li>use <code>smtplib.SMTP</code> to build and send the email.</li> </ul> <p>All of this works for the most part. However, for recipients such as GMail addresses this fails since this isn’t authorised with either SPF or DKIM.</p> <p>I’m getting the impression that I should try to implement DKIM using something like <code>dkimpy</code>.</p> <p>The thing is that I don’t want to depend on a different MTA or DNS server, but only on what’s in the Python package.</p> <p>The question is: is it possible to implement DKIM (or SPF) purely in a Python package.</p>
<python><email><dkim>
2025-04-17 01:08:37
2
16,921
Manngo
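<p>For the DKIM/SPF question above: SPF cannot live in the package at all, since it is purely a DNS record checked against the sending IP. DKIM signing, on the other hand, can be done entirely inside the application with <code>dkimpy</code>, provided the matching public key is published under the selector in the From-domain's DNS (that part cannot be shipped with the package either). A hedged sketch, with selector and domain as placeholders:</p>
<pre><code>import dkim  # pip install dkimpy

def sign_message(msg_bytes: bytes, private_key_pem: bytes) -&gt; bytes:
    # DNS must publish the key at mail._domainkey.example.com for this to verify.
    sig = dkim.sign(
        message=msg_bytes,
        selector=b'mail',
        domain=b'example.com',
        privkey=private_key_pem,
        include_headers=[b'From', b'To', b'Subject', b'Date'],
    )
    return sig + msg_bytes   # prepend the generated DKIM-Signature header
</code></pre>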
79,577,779
714,077
How Do We Use numpy to Create a Matrix of all permutations of three separate value ranges?
<p>I want to make a pandas dataframe with three columns, such that the rows contain all permutations of the three columns' values, where each column has its own range of values. In addition, I want to sort them ascending by c1, c2, c3.</p> <p>For example, a = [0,1,2,3,4,5,6,7], b = [0,1,2], and c = [0,1]. The result I want looks like this:</p> <pre><code>c1 c2 c3 0 0 0 0 0 1 0 1 0 0 1 1 0 2 0 0 2 1 1 0 0 1 0 1 1 1 0 ... 7 2 0 7 2 1 </code></pre> <p>I keep trying to fill columns using numpy.arange, i.e., numpy.arange(0,7,1) for c1, but that doesn't easily create all the possible rows. In my example, I should end up with 8 * 3 * 2 = 48 unique rows of three values each.</p> <p>I need this to act as a mask of all possible value combinations onto which I can merge a sparse matrix of experimental data.</p> <p>Does anyone know how to do this? Recursion?</p>
<python><dataframe><numpy><permutation>
2025-04-16 17:26:48
5
961
noogrub
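<p>For the permutations question above, no recursion is needed; both <code>itertools.product</code> and a NumPy meshgrid produce the full Cartesian product already sorted by c1, c2, c3. A short sketch:</p>
<pre><code>import itertools
import numpy as np
import pandas as pd

a, b, c = range(8), range(3), range(2)

# Option 1: itertools.product yields the rows in the requested sorted order
df = pd.DataFrame(list(itertools.product(a, b, c)), columns=['c1', 'c2', 'c3'])

# Option 2: pure NumPy via a meshgrid, flattened into rows
grid = np.array(np.meshgrid(a, b, c, indexing='ij')).reshape(3, -1).T
df2 = pd.DataFrame(grid, columns=['c1', 'c2', 'c3'])

print(len(df))   # 48 = 8 * 3 * 2
</code></pre>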
79,577,540
8,964,393
Python - Trading IG - Reason: ATTACHED_ORDER_LEVEL_ERROR
<p>I have tried to place a limit order in Python (using Trading-IG).</p> <p>Here is the code:</p> <pre><code> resp = ig_service.create_working_order( currency_code=&quot;GBP&quot;, direction=&quot;BUY&quot;, epic='IX.D.SPTRD.DAILY.IP', expiry='DFB', guaranteed_stop=False, level='5445', size='1', force_open= 'true', time_in_force=&quot;GOOD_TILL_CANCELLED&quot;, order_type=&quot;LIMIT&quot;, limit_distance='6', stop_distance='2' ) print(f&quot;resp:{resp}&quot;) print(f&quot;Status:{resp['status']}&quot;) print(f&quot;Reason:{resp['reason']}&quot;) print(f&quot;dealId:{resp['dealId']}&quot;) </code></pre> <p>Now, since the level (<code>level='5445'</code>) is higher than the current price and the direction is BUY (<code>direction=&quot;BUY&quot;</code>), then I get this error:</p> <pre><code>Reason:ATTACHED_ORDER_LEVEL_ERROR </code></pre> <p>If the direction was &quot;SELL&quot;, then the order would have been placed.</p> <p>Does anyone know why?</p> <p>I know there is similar issue here: <a href="https://stackoverflow.com/questions/72937726/ig-trading-api-market-order-not-working-attached-order-level-error">IG Trading API, Market Order Not Working. ATTACHED_ORDER_LEVEL_ERROR</a></p> <p>Yet I can't figure out the solution.</p>
<python><rest><limit><trading>
2025-04-16 15:24:03
0
1,762
Giampaolo Levorato
79,577,512
6,896,234
Pytest caplog not capturing logs
<p>See my minimal example below for reproducing the issue:</p> <pre><code>import logging import logging.config def test_logging(caplog): LOGGING_CONFIG = { &quot;version&quot;: 1, &quot;disable_existing_loggers&quot;: False, &quot;formatters&quot;: { &quot;default&quot;: { &quot;format&quot;: &quot;%(asctime)s - %(name)s - %(levelname)s - %(message)s&quot;, }, }, &quot;handlers&quot;: { &quot;console&quot;: { &quot;class&quot;: &quot;logging.StreamHandler&quot;, &quot;formatter&quot;: &quot;default&quot;, &quot;level&quot;: &quot;DEBUG&quot;, }, }, &quot;loggers&quot;: { &quot;&quot;: { # root logger &quot;handlers&quot;: [&quot;console&quot;], &quot;level&quot;: &quot;DEBUG&quot;, &quot;propagate&quot;: True, }, }, } logging.config.dictConfig(LOGGING_CONFIG) logger = logging.getLogger(&quot;root_module.sub1.sub2&quot;) logger.setLevel(logging.DEBUG) assert logger.propagate is True assert logger.getEffectiveLevel() == logging.DEBUG with caplog.at_level(logging.DEBUG): logger.debug(&quot;🔥 DEBUG msg&quot;) logger.info(&quot;📘 INFO msg&quot;) logger.warning(&quot;⚠️ WARNING msg&quot;) logger.error(&quot;❌ ERROR msg&quot;) logger.critical(&quot;💀 CRITICAL msg&quot;) print(&quot;🔥 caplog.messages:&quot;, caplog.messages) # Final assertion assert any(&quot;CRITICAL&quot; in r or &quot;💀&quot; in r for r in caplog.messages) </code></pre> <p>Running <code>pytest -s</code> outputs:</p> <pre><code>tests/test_logs.py 2025-04-16 17:06:03,983 - root_module.sub1.sub2 - DEBUG - 🔥 DEBUG msg 2025-04-16 17:06:03,983 - root_module.sub1.sub2 - INFO - 📘 INFO msg 2025-04-16 17:06:03,983 - root_module.sub1.sub2 - WARNING - ⚠️ WARNING msg 2025-04-16 17:06:03,983 - root_module.sub1.sub2 - ERROR - ❌ ERROR msg 2025-04-16 17:06:03,983 - root_module.sub1.sub2 - CRITICAL - 💀 CRITICAL msg 🔥 caplog.messages: [] </code></pre> <p>Pytest version is <code>8.3.5</code>. I don't think it matters, but my <code>setup.cfg</code>:</p> <pre><code>[tool:pytest] testpaths = tests </code></pre> <p>I am expecting <code>caplog.records</code> to contain all 5 logs, but it is empty list.</p>
<python><python-3.x><pytest>
2025-04-16 15:10:06
1
2,054
Elgin Cahangirov
79,577,490
3,104,974
How to fit scaler for different subsets of rows depending on group variable and include it in a Pipeline?
<p>I have a data set like the following and want to scale the data using any of the scalers in <code>sklearn.preprocessing</code>.</p> <p>Is there an easy way to fit this scaler not over the whole data set, but per group? My current solution can't be included in a Pipeline:</p> <pre><code>import pandas as pd import numpy as np from sklearn.preprocessing import StandardScaler df = pd.DataFrame({'group': [1, 1, 1, 2, 2, 2], 'x': [1,2,3,10,20,30]}) def scale(x): # see https://stackoverflow.com/a/72408669/3104974 scaler = StandardScaler() return scaler.fit_transform(x.values[:,np.newaxis]).ravel() df['x_scaled'] = df.groupby('group').transform(scale) </code></pre> <pre><code> group x x_scaled 0 1 1 -1.224745 1 1 2 0.000000 2 1 3 1.224745 3 2 10 -1.224745 4 2 20 0.000000 5 2 30 1.224745 </code></pre>
<python><pandas><scikit-learn><data-transform><scikit-learn-pipeline>
2025-04-16 14:58:55
1
6,315
ascripter
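<p>For the per-group scaling question above, wrapping the groupby logic in a small custom transformer makes it pipeline-compatible, since a Pipeline only needs <code>fit</code>/<code>transform</code>. A hedged sketch (unseen groups at transform time are not handled):</p>
<pre><code>import pandas as pd
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

class GroupedScaler(BaseEstimator, TransformerMixin):
    def __init__(self, group_col='group'):
        self.group_col = group_col

    def fit(self, X, y=None):
        # one scaler per group value, fitted on the remaining columns
        self.scalers_ = {
            g: StandardScaler().fit(sub.drop(columns=[self.group_col]))
            for g, sub in X.groupby(self.group_col)
        }
        return self

    def transform(self, X):
        X = X.copy()
        cols = [c for c in X.columns if c != self.group_col]
        X[cols] = X[cols].astype(float)
        for g, sub in X.groupby(self.group_col):
            X.loc[sub.index, cols] = self.scalers_[g].transform(sub[cols])
        return X

df = pd.DataFrame({'group': [1, 1, 1, 2, 2, 2], 'x': [1, 2, 3, 10, 20, 30]})
pipe = Pipeline([('scale', GroupedScaler())])
print(pipe.fit_transform(df))
</code></pre>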
79,577,224
14,386,187
Redis-py not returning consistent results for match all query
<p>I'm attempting to add 100 documents to a Redis index, and then retrieving them with a match all query:</p> <pre class="lang-py prettyprint-override"><code>import uuid import redis from redis.commands.json.path import Path import redis.commands.search.aggregation as aggregations import redis.commands.search.reducers as reducers from redis.commands.search.field import TextField, NumericField, TagField from redis.commands.search.indexDefinition import IndexDefinition, IndexType from redis.commands.search.query import NumericFilter, Query r = redis.Redis(host=&quot;localhost&quot;, port=6379) user4 = { &quot;user&quot;: { &quot;name&quot;: &quot;Sarah Zamir&quot;, &quot;email&quot;: &quot;sarah.zamir@example.com&quot;, &quot;age&quot;: 30, &quot;city&quot;: &quot;Paris&quot;, } } with r.pipeline(transaction=True) as pipe: for i, u in enumerate([user4] * 100): u[&quot;user&quot;][&quot;text&quot;] = str(uuid.uuid4()) * 50 r.json().set(f&quot;user:{i}&quot;, Path.root_path(), u) pipe.execute() schema = ( TextField(&quot;$.user.name&quot;, as_name=&quot;name&quot;), TagField(&quot;$.user.city&quot;, as_name=&quot;city&quot;), NumericField(&quot;$.user.age&quot;, as_name=&quot;age&quot;), ) r.ft().create_index( schema, definition=IndexDefinition(prefix=[&quot;user:&quot;], index_type=IndexType.JSON) ) result = r.ft().search(Query(&quot;*&quot;).paging(0, 100)) print(result.total) keys = r.keys(&quot;*&quot;) print(len(keys)) r.flushall() r.close() </code></pre> <p>I expect <code>result.total</code> to return 100, but it always returns inconsistent results less than 100. Am I doing something wrong? I checked the number of keys in redis and I get the correct count there.</p>
<python><redis><redis-py>
2025-04-16 12:45:30
2
676
monopoly
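<p>For the RediSearch count question above, two observations. First, the code calls <code>r.json().set(...)</code> on the plain client inside the <code>with r.pipeline()</code> block, so the pipeline is never actually used (not the cause of the miscount, but worth noting). Second, and more likely the cause: when an index is created after the documents already exist, RediSearch scans and indexes them in the background, so a search issued immediately after <code>create_index</code> can see only the part indexed so far. A hedged sketch that polls <code>FT.INFO</code> until the background scan finishes (field names per the RediSearch docs):</p>
<pre><code>import time

def wait_for_index(client, timeout=5.0):
    deadline = time.time() + timeout
    while time.time() &lt; deadline:
        info = client.ft().info()
        # 'indexing' is 1 while the background scan runs, 'percent_indexed' &lt; 1
        if int(info.get('indexing', 0)) == 0 and float(info.get('percent_indexed', 1)) &gt;= 1:
            return
        time.sleep(0.05)

wait_for_index(r)
result = r.ft().search(Query('*').paging(0, 100))
print(result.total)
</code></pre>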
79,577,174
580,083
How to interpolate circle arc with lines on Earth surface?
<p>I have the coordinates (lat, lon) of 3 points (P0, P1, P2) on the Earth's surface. The distance between P1 and P0 is the same as the distance between P2 and P0. Therefore, one can make a circle arc from P1 to P2 with P0 as the centre. My problem is that I need to interpolate this circle arc by N lines to be able to work with it in the GeoJSON format (which does not support circles/circle arcs). Each line segment should ideally have the same length.</p> <p>Is there any library suitable for this problem? I have found some code for transformation of a circle to a polygon (such as <a href="https://polycircles.readthedocs.io/en/latest/kmls.html" rel="nofollow noreferrer">polycircles</a>), but none of them seemed to support circle arcs.</p> <p>Some simple code is here:</p> <pre><code>import geopy.distance # DME OKL: 50 05 45.12 N 14 15 56.19 E lon0 = 14.265608 lat0 = 50.095867 coords0 = (lat0, lon0) # 1st circle point: 50 03 10,23 N 014 28 30,47 E lon1 = 14.475131 lat1 = 50.052842 coords1 = (lat1, lon1) # 2nd circle point: 49 59 33,96 N 014 24 58,76 E lon2 = 14.416322 lat2 = 49.992767 coords2 = (lat2, lon2) # verification of distances: print(geopy.distance.distance(coords1, coords0).nautical) print(geopy.distance.distance(coords2, coords0).nautical) # How to interpolate circle arc between points 1 and 2? </code></pre>
<python><geojson><geo>
2025-04-16 12:22:38
1
24,034
Daniel Langr
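<p>For the circle-arc question above, one hedged approach with <code>pyproj.Geod</code> is to compute the geodesic azimuth and distance from the centre to both end points, interpolate the azimuth, and project each intermediate point back out at the (common) radius. Equal azimuth steps give segments of essentially equal length for arcs of this size, and the resulting points can be dropped straight into a GeoJSON LineString:</p>
<pre><code>import numpy as np
from pyproj import Geod

geod = Geod(ellps='WGS84')

def arc_points(lon0, lat0, lon1, lat1, lon2, lat2, n=50):
    az1, _, dist1 = geod.inv(lon0, lat0, lon1, lat1)
    az2, _, dist2 = geod.inv(lon0, lat0, lon2, lat2)
    radius = (dist1 + dist2) / 2          # should already be (almost) identical
    if az2 - az1 &gt; 180:                   # sweep the shorter way around
        az2 -= 360
    elif az2 - az1 &lt; -180:
        az2 += 360
    azimuths = np.linspace(az1, az2, n + 1)
    lons, lats, _ = geod.fwd(np.full(n + 1, lon0), np.full(n + 1, lat0),
                             azimuths, np.full(n + 1, radius))
    return list(zip(lons, lats))          # GeoJSON wants (lon, lat) order

pts = arc_points(14.265608, 50.095867, 14.475131, 50.052842, 14.416322, 49.992767)
</code></pre>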
79,576,828
16,611,809
Get a grouped sum in polars, but keep all individual rows
<p>I am breaking my head over this probably pretty simply question and I just can't find the answer anywhere. I want to create a new column with a grouped sum of another column, but I want to keep all individual rows. So, this is what the docs say:</p> <pre><code>import polars as pl df = pl.DataFrame( { &quot;a&quot;: [&quot;a&quot;, &quot;b&quot;, &quot;a&quot;, &quot;b&quot;, &quot;c&quot;], &quot;b&quot;: [1, 2, 1, 3, 3], } ) df.group_by(&quot;a&quot;).agg(pl.col(&quot;b&quot;).sum()) </code></pre> <p>The output of this would be:</p> <pre><code>shape: (3, 2) ┌─────┬─────┐ │ a ┆ b │ │ --- ┆ --- │ │ str ┆ i64 │ ╞═════╪═════╡ │ a ┆ 2 │ │ c ┆ 3 │ │ b ┆ 5 │ └─────┴─────┘ </code></pre> <p>However, what I need would be this:</p> <pre><code>shape: (5, 3) ┌─────┬─────┬────────┐ │ a ┆ b ┆ sum(b) │ │ --- ┆ --- ┆ ------ │ │ str ┆ i64 ┆ i64 │ ╞═════╪═════╪════════╡ │ a ┆ 1 ┆ 2 │ │ b ┆ 2 ┆ 5 │ │ a ┆ 1 ┆ 2 │ │ b ┆ 3 ┆ 5 │ │ c ┆ 3 ┆ 3 │ └─────┴─────┴────────┘ </code></pre> <p>I could create the sum in a separate df and then join it with the original one, but I am pretty sure, there is an easier solution.</p>
<python><dataframe><window-functions><python-polars><polars>
2025-04-16 09:29:12
1
627
gernophil
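<p>For the grouped-sum question above, this is exactly what window expressions are for; <code>.over()</code> computes the aggregate per group while keeping every row, so no join is needed:</p>
<pre><code>import polars as pl

df = pl.DataFrame({'a': ['a', 'b', 'a', 'b', 'c'], 'b': [1, 2, 1, 3, 3]})

out = df.with_columns(pl.col('b').sum().over('a').alias('b_sum'))
print(out)
</code></pre>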
79,576,352
16,383,578
Is it possible to use the closed-form of Fibonacci series to generate the Nth Fibonacci number exactly and efficiently?
<p>The closed-form of the Fibonacci series is the following:</p> <p><img src="https://i.sstatic.net/elWkx.png" alt="" /></p> <p>As you can see the expression contains square roots so we cannot use it directly to generate Nth Fibonacci number exactly, as sqrt(5) is irrational, we can only approximate it even in pure mathematics, and floating point inaccuracies introduces a whole other lot of complications that ensure if we implement the formula naively we are bound to lose accuracy.</p> <p>I want to get the Nth Fibonacci number exactly using this formula, assume N is very large (in the thousands range), and since Python's integer is unbounded and Fibonacci sequence grows extremely quickly out of the range of uint64 (at the 94th term to be exact, we start at 0th), I cannot use Numba or NumPy to vectorize this.</p> <p>Now I have implemented a function that generates Nth Fibonacci number exactly using the closed form, but it is very inefficient.</p> <p>First I will just quickly describe what I have done, we start with (1 + a)<sup>n</sup>, using the binomial theorem we get 1 + na + <sub>n</sub>C<sub>2</sub>a<sup>2</sup> + <sub>n</sub>C<sub>3</sub>a<sup>3</sup> + <sub>n</sub>C<sub>4</sub>a<sup>4</sup> ... + a<sup>n</sup>. Because the choose function is symmetric we only need to compute half of the terms.</p> <p>Now we can compute <sub>n</sub>C<sub>i</sub> from the previous term <sub>n</sub>C<sub>i-1</sub>: <sub>n</sub>C<sub>i</sub> = <sub>n</sub>C<sub>i-1</sub> * (n + 1 - i) / i, this is much faster than using factorials.</p> <p>Then we expand (1 - a)<sup>n</sup>: 1 - na + <sub>n</sub>C<sub>2</sub>a<sup>2</sup> - <sub>n</sub>C<sub>3</sub>a<sup>3</sup> + <sub>n</sub>C<sub>4</sub>a<sup>4</sup> - <sub>n</sub>C<sub>5</sub>a<sup>5</sup> + <sub>n</sub>C<sub>6</sub>a<sup>6</sup>...</p> <p>The signs alternate, now if we evaluate (1 + a)<sup>n</sup> - (1 - a)<sup>n</sup>, we find the even powers cancel out, leaving only odd powers, and since we divide by sqrt(5) we are left with only powers of 5.</p> <hr /> <p>Code:</p> <pre><code>def fibonacci(lim: int) -&gt; list[int]: a, b = 1, 1 result = [0] * lim for i in range(1, lim): result[i] = a a, b = b, a + b return result def Fibonacci_phi_even(n: int) -&gt; int: fib = 0 power = 1 a, b, c = n - 1, 2, 2 * n length = int(n / 4 + 0.5) coeffs = [1] * length for i in range(length): coeffs[i] = c fib += c * power power *= 5 c = c * (a - 1) * a // ((b + 1) * b) a -= 2 b += 2 for coeff in coeffs[length - 1 - (n // 2 &amp; 1) :: -1]: fib += coeff * power power *= 5 return fib &gt;&gt; n def Fibonacci_phi_odd(n): length = n // 2 + 1 powers = [1] * length power = 1 for i in range(length): powers[i] = power power *= 5 fib = 0 a, b, c, d = n, 1, 2, -1 i = di = length - 1 for _ in range(length): fib += c * powers[i] c = c * a // b a -= 1 b += 1 i += d * di d *= -1 di -= 1 return fib &gt;&gt; n def Fibonacci_phi(n: int) -&gt; int: if n &lt;= 2: return int(n &gt; 0) return Fibonacci_phi_odd(n) if n &amp; 1 else Fibonacci_phi_even(n) </code></pre> <p>It works:</p> <pre><code>In [377]: print(fibonacci(25)) [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181, 6765, 10946, 17711, 28657, 46368] In [378]: print([Fibonacci_phi(i) for i in range(25)]) [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181, 6765, 10946, 17711, 28657, 46368] </code></pre> <p>But it is slow:</p> <pre><code>In [379]: %timeit fibonacci(1025) 87.9 μs ± 633 ns per loop (mean ± std. dev. 
of 7 runs, 10,000 loops each) In [380]: %timeit Fibonacci_phi(1024) 699 μs ± 8.42 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each) </code></pre> <p>How can we make it faster without losing accuracy?</p>
<python><algorithm><fibonacci>
2025-04-16 03:40:19
2
3,930
Ξένη Γήινος
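<p>For the closed-form Fibonacci question above, one hedged way to keep the closed form exact is to do the arithmetic in the ring of numbers a + b*sqrt(5) with integer a and b, so sqrt(5) is never evaluated numerically. Raising 1 + sqrt(5) to the n-th power by squaring gives F(n) = b / 2^(n-1) exactly, in O(log n) big-integer multiplications instead of summing about n/2 binomial terms:</p>
<pre><code>def fib_closed_form(n: int) -&gt; int:
    if n == 0:
        return 0

    def mul(x, y):
        # (a1 + b1*r)(a2 + b2*r) with r*r = 5, kept as exact integer pairs
        a1, b1 = x
        a2, b2 = y
        return (a1 * a2 + 5 * b1 * b2, a1 * b2 + a2 * b1)

    base, acc, e = (1, 1), (1, 0), n      # base = 1 + sqrt(5)
    while e:
        if e &amp; 1:
            acc = mul(acc, base)
        base = mul(base, base)
        e &gt;&gt;= 1
    # (1+sqrt5)^n - (1-sqrt5)^n = 2*b*sqrt5, so F(n) = 2*b / 2^n = b / 2^(n-1)
    return acc[1] &gt;&gt; (n - 1)

print([fib_closed_form(i) for i in range(10)])   # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
</code></pre>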
79,576,335
13,746,021
poetry shows as "no such file or directory" in Makefile, but works fine in terminal
<p>I am experimenting with using the CPython C API and managing dependencies (e.g. setuptools) via poetry.</p> <p>To compile my code I wrote a Makefile looking something like this:</p> <pre class="lang-makefile prettyprint-override"><code>PYTHON = poetry run python build: setup.py ...some C extension files... $(PYTHON) setup.py build_ext -i </code></pre> <p>The problem is that when I go to run it, I get this error:</p> <pre class="lang-bash prettyprint-override"><code>$ make poetry run python setup.py build_ext make: poetry: No such file or directory make: *** [build] Error 1 </code></pre> <p>When I run <code>poetry</code> in my terminal, it works fine:</p> <pre><code>$ poetry --version Poetry (version 2.0.1) </code></pre> <p>Even weirder:</p> <pre class="lang-bash prettyprint-override"><code>$ type poetry poetry is /Users/rusty/.local/bin/poetry $ which poetry $ echo $? 1 </code></pre> <p>I thought that <code>which</code> and <code>type</code> would return the exact same results.</p> <p>I checked my <code>$PATH</code> and it contains the directory <code>~/.local/bin</code> (not <code>/Users/rusty/.local/bin</code>, literally <code>~/.local/bin</code> in case that helps) (where poetry is installed).</p> <p>I am on macOS 15.3.2.</p> <p>Also, <code>~/.local/bin/poetry</code> is a symlink to another file which has read and execute for everyone.</p>
<python><makefile><python-poetry>
2025-04-16 03:12:49
1
361
Rusty
79,576,316
597,234
Exceptions being hidden by asyncio Queue.join()
<p>I am using an API client supplied by a vendor (Okta) that has very poor/old examples of running with async - for example (the Python documentation says not to use <code>get_event_loop()</code>):</p> <pre class="lang-py prettyprint-override"><code>from okta.client import Client as OktaClient import asyncio async def main(): client = OktaClient() users, resp, err = await client.list_users() while True: for user in users: print(user.profile.login) # Add more properties here. if resp.has_next(): users, err = await resp.next() else: break loop = asyncio.get_event_loop() loop.run_until_complete(main()) </code></pre> <p>This works, but I need to go through the returned results and follow various links to get additional information. I created a queue using <code>asyncio</code> and I have the worker loop until the queue is empty. This also works.</p> <p>I start running into issues when I try to have more than one worker - if the code throws an exception, the workers never return.</p> <pre class="lang-py prettyprint-override"><code>async def handle_queue(name, queue: asyncio.Queue, okta_client: OktaClient): &quot;&quot;&quot;Handle queued API requests&quot;&quot;&quot; while True: log.info(&quot;Queue size: %d&quot;, queue.qsize()) api_req = await queue.get() log.info('Worker %s is handling %s', name, api_req) api_func = getattr(okta_client, f&quot;list_{api_req['endpoint']}&quot;) api_procs = getattr(sys.modules[__name__], api_req['processor']) log.info('Worker %s is handling %s with api_func %s, api_proc %s', name, api_req, api_func, api_proc) resp_data, resp, err = await api_func(**api_req['params']) log.debug(resp_data) while True: for i in resp_data: await api_proc(i, queue) if resp.has_next(): resp_data, err = await resp.next() else: break queue.task_done() async def create_workers(queue: asyncio.Queue): &quot;&quot;&quot;Reusable worker creation process&quot;&quot;&quot; log.info('Creating workers') workers = [] async with OktaClient() as okta_client: for i in range(NUM_WORKERS): log.info('Creating worker-%d', i) worker = asyncio.create_task(handle_queue(f'worker-{i}', queue, okta_client)) workers.append(worker) await queue.join() for worker in workers: worker.cancel() await asyncio.gather(*workers, return_exceptions=True) async def main(): &quot;&quot;&quot;Load Access Policies and their mappings and rules&quot;&quot;&quot; queue = asyncio.Queue() queue.put_nowait({'endpoint': 'policies', 'params': {'query_params': {'type': 'ACCESS_POLICY'}}, 'processor': 'process_policy'}) await create_workers(queue) metadata['policy_count'] = len(data) print(yaml.dump({'_metadata': metadata, 'data': data})) if __name__ == '__main__': try: asyncio.run(main()) except KeyboardInterrupt: # Hide the exception for a Ctrl-C log.info('Keyboard Interrupt') </code></pre> <p>If an exception is thrown in <code>handle_queue</code> (or any of the functions it calls), the program hangs. When I hit Ctrl-C, I get the exception along with a message <code>asyncio task exception was never retrieved</code>. 
I understand this is because <code>queue.join()</code> is waiting for <code>queue.task_done()</code> to be called as many times as <code>queue.put()</code> was called, but I don't understand why the exception isn't caught.</p> <p>I tried wrapping the work in <code>handle_queue</code> in a <code>try</code>:</p> <pre class="lang-py prettyprint-override"><code>async def handle_queue(name, queue: asyncio.Queue, okta_client: OktaClient): &quot;&quot;&quot;Handle queued API requests&quot;&quot;&quot; while True: try: # REST OF THE FUNCTION except Exception as e: queue.task_done() raise e queue.task_done() </code></pre> <p>This way, the program execution does finish, but the exception still disappears. How can I capture the exception and still allow the program to finish?</p>
<python><python-asyncio><okta-api>
2025-04-16 02:48:59
2
2,036
yakatz
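<p>For the swallowed-exception question above, the diagnosis in the question is right: a worker's exception just sits on its task while <code>queue.join()</code> waits for <code>task_done()</code> calls that will never come. The usual shape is to race the join against the workers, so a dead worker surfaces immediately. A hedged sketch to use in place of the bare <code>await queue.join()</code>:</p>
<pre><code>import asyncio

async def join_or_fail(queue: asyncio.Queue, workers: list[asyncio.Task]) -&gt; None:
    join_task = asyncio.create_task(queue.join())
    done, _ = await asyncio.wait({join_task, *workers},
                                 return_when=asyncio.FIRST_COMPLETED)
    if join_task not in done:
        join_task.cancel()
    for task in done:
        # a worker finishing at all means it raised (they loop forever otherwise)
        if task is not join_task and task.exception() is not None:
            raise task.exception()
</code></pre>
<p>After <code>join_or_fail()</code> returns or raises, the workers can be cancelled and gathered exactly as in the question.</p>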
79,576,073
2,499,281
Can you create multiple columns based on the same set of conditions in Polars?
<p>Is it possible to do something like this in Polars? Like do you need a separate when.then.otherwise for each of the 4 new varialbles, or can you use <strong>struct</strong> to create multiple new variables from one when.then.otherwise?</p> <p>Regular Python example:</p> <pre><code>if x=1 and y=3 and w=300*z and z&lt;100: tot = 300 work = 400 sie = 500 walk = 'into' else: tot = 350 work = 400*tot sie = tot/1000 walk = 'outof' </code></pre> <p>I tried to do a similar thing in Polars with <strong>struct</strong> (to create new variables a and b based on Movie variable:</p> <pre><code>import polars as pl ratings = pl.DataFrame( { &quot;Movie&quot;: [&quot;Cars&quot;, &quot;IT&quot;, &quot;ET&quot;, &quot;Cars&quot;, &quot;Up&quot;, &quot;IT&quot;, &quot;Cars&quot;, &quot;ET&quot;, &quot;Up&quot;, &quot;Cars&quot;], &quot;Theatre&quot;: [&quot;NE&quot;, &quot;ME&quot;, &quot;IL&quot;, &quot;ND&quot;, &quot;NE&quot;, &quot;SD&quot;, &quot;NE&quot;, &quot;IL&quot;, &quot;IL&quot;, &quot;NE&quot;], &quot;Avg_Rating&quot;: [4.5, 4.4, 4.6, 4.3, 4.8, 4.7, 4.5, 4.9, 4.7, 4.6], &quot;Count&quot;: [30, 27, 26, 29, 31, 28, 28, 26, 33, 28], } ) x = ratings.with_columns( pl.when(pl.col('Movie')=='Up').then(pl.struct(pl.lit(0),pl.lit(2))).otherwise(pl.struct(pl.lit(1),pl.lit(3))).struct.field(['a','b']) ) print(x) </code></pre> <p>Thanks!</p>
<python><dataframe><conditional-statements><python-polars><polars>
2025-04-15 21:50:59
1
794
catquas
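<p>For the struct question above: yes, one <code>when/then/otherwise</code> can emit several columns at once by returning a struct, but the struct fields need explicit names; <code>pl.lit(0)</code> is auto-named <code>literal</code>, so <code>.struct.field(['a', 'b'])</code> in the posted attempt asks for fields that do not exist. A hedged sketch (both branches must produce the same field names and compatible types):</p>
<pre><code>import polars as pl

ratings = pl.DataFrame({
    'Movie': ['Cars', 'IT', 'ET', 'Up'],
    'Avg_Rating': [4.5, 4.4, 4.6, 4.8],
})

x = ratings.with_columns(
    pl.when(pl.col('Movie') == 'Up')
      .then(pl.struct(pl.lit(0).alias('a'), pl.lit(2).alias('b'), pl.lit('into').alias('walk')))
      .otherwise(pl.struct(pl.lit(1).alias('a'), pl.lit(3).alias('b'), pl.lit('outof').alias('walk')))
      .struct.field(['a', 'b', 'walk'])
)
print(x)
</code></pre>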
79,575,941
10,705,248
Why does RandomForestClassifier in scikit-learn predict even on all-NaN input?
<p>I am training a random forest classifier in python <code>sklearn</code>, see code below-</p> <pre><code>from sklearn.ensemble import RandomForestClassifier rf = RandomForestClassifier(random_state=42) rf.fit(X = df.drop(&quot;AP&quot;, axis =1), y = df[&quot;AP&quot;].astype(int)) </code></pre> <p>When I predict the values using this classifier on another dataset that has <code>NaN</code> values, the model provides some output. Not even that, I tried predicting output on a row with all variables as <code>NaNs</code>, it predicted the outputs.</p> <pre><code>#making a row with all NaN values row = pd.DataFrame([np.nan] * len(rf.feature_names_in_), index=rf_corn.feature_names_in_).T rf.predict(row) </code></pre> <p>It predicts- <code>array([1])</code></p> <p>I know that RandomForestClassifier in scikit-learn does not natively support missing values. So I expected a ValueError, not a prediction.</p> <p>I can ignore the NaN rows and only predict on non-nan rows but I am concerned if there is something wrong with this classifier. Any insight will be appreciated.</p>
<python><machine-learning><scikit-learn><random-forest><missing-data>
2025-04-15 19:53:59
1
854
lsr729
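<p>For the NaN-prediction question above, nothing looks wrong with the fitted model: recent scikit-learn releases added native missing-value support to decision trees and, from 1.4 on (if the changelog is remembered correctly), to random forests, so <code>predict</code> no longer raises on NaN and simply routes missing values through the learned splits. If predictions on incomplete rows are unwanted, masking them out explicitly is straightforward; a hedged sketch with placeholder frame names:</p>
<pre><code>import pandas as pd
import sklearn

print(sklearn.__version__)        # versions from 1.4 on support NaN in RandomForest*

X_new = df_new[rf.feature_names_in_]          # df_new: data to score
complete = ~X_new.isna().any(axis=1)

preds = pd.Series(index=X_new.index, dtype='float')
preds.loc[complete] = rf.predict(X_new.loc[complete])
</code></pre>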
79,575,935
381,121
Trying to execute the DBMS_STATS procedure in a Python script
<p>I inherited a 1000+ line query the works in oracle's sql developer.<br /> When I try running the query in a python script, I get an error (An error occurred: ORA-00900: invalid SQL statement) at the line: <strong>EXEC DBMS_STATS.gather_table_stats('SSS', 'YYY')</strong>.</p> <p>I have very little experience with databases and a Google search of the error has not yielded a possible solution.</p> <p>Python script:</p> <pre><code> with open('myquery.sql', 'r') as sql_file: query = str(sql_file.read().replace(&quot;\n\n&quot;,&quot;&quot;)) sql_commands = query.split(';') for command in sql_commands: try: if command.strip() != '': print(command) cur.execute(command) connection.commit() except Exception as e: print(f&quot;An error occurred: {e}&quot;) connection.commit() </code></pre> <p>The last few lines of the query(myquery.sql):</p> <pre><code>DELETE FROM CD_SS.MM_DSD_MIG_MASTER; commit; INSERT INTO CD_SS.MM_DSD_MIG_MASTER SELECT DISTINCT PATTERN, b_ITEM_NUM, 'N', NULL, NULL, NULL FROM CD_SS.MM_DSD_MIG_PATTERNS; commit; update MM_DSD_MIG_MASTER set MIGRATION_STATUS_FLAG = 'Z' where MIGRATION_STATUS_FLAG = 'N'; commit; grant all privileges on MM_DSD_MIG_MASTER to public; grant all privileges on MM_DSD_MIG_PATTERNS to public; DBMS_STATS.gather_table_stats('CD_SS', 'MM_DSD_MIG_PATTERNS'); DBMS_STATS.gather_table_stats('CD_SS', 'MM_DSD_MIG_MASTER'); </code></pre> <p>Everything works if the DMS_STATS lines are omited but was told it has to stay.</p>
<python><oracle-database>
2025-04-15 19:50:28
2
489
ChrisJ
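<p>For the ORA-00900 question above: <code>EXEC</code> is SQL*Plus shorthand, not SQL, so a driver cannot execute it; the call has to be wrapped in an anonymous PL/SQL block or issued through <code>callproc</code>. Also note that PL/SQL blocks themselves contain semicolons, so splitting the script on <code>';'</code> will chop them apart; keeping these calls out of the .sql file (as below) is the simplest way around that. A hedged sketch:</p>
<pre><code># Option 1: anonymous PL/SQL block (no 'EXEC', no trailing slash)
cur.execute(
    'BEGIN DBMS_STATS.GATHER_TABLE_STATS(ownname =&gt; :1, tabname =&gt; :2); END;',
    ['CD_SS', 'MM_DSD_MIG_MASTER'],
)

# Option 2: let the driver build the call
cur.callproc('DBMS_STATS.GATHER_TABLE_STATS', ['CD_SS', 'MM_DSD_MIG_MASTER'])
</code></pre>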
79,575,919
11,313,748
Suggestion needed for enhancing problem formulation for constraint programming
<p>I'm trying to formulize a constraint programming solution for item allocation on shelf. The goal is simple items belonging to same brand should be kept as close as possible and should maintain a decent square or rectangular block.</p> <p>for this, the approach we have taken is we have divided the whole available space on planogram into smaller partition of smaller granularity and did the same with every individual item, for e.g. If there are 3 shelf on my planogram each of 10 cm then my partition count will be 30 considering granularity of 1.</p> <p>Once partitioned it'll look like this:</p> <p><a href="https://i.sstatic.net/9nV9FVvK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9nV9FVvK.png" alt="enter image description here" /></a></p> <p>where each (x,y) point denotes x -&gt; partition number on shelf, y -&gt; shelf number. Now, I do this also for every item as well, so if there's a item whose width is 3, there will be 3 partition where it'll be kept.</p> <p>Now, I want for all of my items belonging to same brand to be kept together and for that I defined a few constraints.</p> <ol> <li>All Items should take their required space on planogram.</li> <li>Every partition point should be assigned to one item only.</li> <li>All the partition of an item should be on the same shelf.</li> <li>All the partition of same item should be together.</li> <li>All the item partition of same brand connected to each other will have a adjacency score, I'm trying to maximize this score.</li> </ol> <p>Here, the code is working really well for small planogram with less number of items but as the size and item count is increasing, The code is searching forever and we also saw performance is also not improving better.</p> <p>Below's my code:</p> <pre><code>import cpmpy as cp import numpy as np import pandas as pd import seaborn as sns import networkx as nx import matplotlib.pyplot as plt from collections import defaultdict from GridGraph_mod import process_group from concurrent.futures import ProcessPoolExecutor, as_completed pd.set_option(&quot;display.max_rows&quot;, 500) pd.set_option(&quot;display.max_columns&quot;, 500) def create_mapping(brand_shelf_allowded_mapping, pog_df, shelf_count): remaining_brand = set(pog_df.brand_nm.unique()) - set(brand_shelf_allowded_mapping[&quot;brand_nm&quot;]) brand_shelf_allowded_mapping['brand_nm'].extend(remaining_brand) for _ in range(len(remaining_brand)): brand_shelf_allowded_mapping['shelf_allowed'].append([x for x in range(shelf_count)]) return brand_shelf_allowded_mapping def parallel_creator(key, idx_df): node_dict = {} with ProcessPoolExecutor() as executor: futures = {executor.submit(process_group, level, data): level for level, data in idx_df.groupby(key)} for future in as_completed(futures): level, var_dict = future.result() node_dict[level] = var_dict return node_dict if __name__ == &quot;__main__&quot;: bay_count = 3 granularity = 1 shelf_count = 7 section_width = 133 grid_width = int(section_width * bay_count / granularity) grid_height = shelf_count pog_df = pd.read_csv(&quot;data_temp/N3ADAA.csv&quot;) pog_df['linear'] = pog_df.linear.apply(np.ceil) pog_df['gridlinear'] = pog_df.linear//granularity G = nx.grid_2d_graph(int(section_width * bay_count/granularity),int(shelf_count)) pos = {(x, y): (x, y) for x, y in G.nodes()} products = pog_df[['tpnb', 'gridlinear']].astype(str) locations = pd.Series([str(s) for s in G.nodes()], name='location') locations = pd.concat([locations,locations.to_frame().location.str.strip(&quot;() 
&quot;).str.split(',', expand=True).rename(columns={0: 'x', 1: 'y'})], axis=1) pog_df = pd.merge(pog_df, pog_df.groupby(&quot;brand_nm&quot;, as_index=False).agg(gridlinear_brand = (&quot;gridlinear&quot;, &quot;sum&quot;)), on = &quot;brand_nm&quot;) l_c_idx = pd.merge(pog_df.reset_index(drop=True), locations, how='cross')[['tpnb', 'brand_nm', 'gridlinear', 'gridlinear_brand', 'location', 'x', 'y']] l_c_idx[&quot;x&quot;] = l_c_idx[&quot;x&quot;].astype(&quot;int32&quot;) l_c_idx[&quot;y&quot;] = l_c_idx[&quot;y&quot;].astype(&quot;int32&quot;) brand_shelf_allowded_mapping = { &quot;brand_nm&quot;: ['TESCO ESSENTIALS', 'TESCO PROFORMULA', 'PRONAMEL', 'FIXODENT', 'MACLEANS', 'ARM &amp; HAMMER', 'TEPE', 'POLIGRIP', 'STERADENT', 'PRO FORMULA', 'ORAL-B', 'COLGATE', 'LISTERINE', 'SENSODYNE','CORSODYL', 'AQUAFRESH'], &quot;shelf_allowed&quot;: [[0, 1], [0, 1], [0, 1], [0, 1], [0, 1], [0, 1], [0, 1], [0, 1], [0, 1], [0,1], [2,3,4,5], [2,3,4,5], [2,3,4,5], [0,1], [6], [6]] } brand_shelf_allowded_mapping = create_mapping(brand_shelf_allowded_mapping, pog_df, shelf_count) brand_shelf_allowded_mapping_df = pd.DataFrame(brand_shelf_allowded_mapping) brand_shelf_allowded_mapping_df = brand_shelf_allowded_mapping_df.explode(&quot;shelf_allowed&quot;) brand_shelf_allowded_mapping_df.shelf_allowed = brand_shelf_allowded_mapping_df.shelf_allowed.astype(&quot;int32&quot;) l_c_idx = pd.merge(l_c_idx, brand_shelf_allowded_mapping_df, left_on=[&quot;brand_nm&quot;, &quot;y&quot;], right_on=[&quot;brand_nm&quot;, &quot;shelf_allowed&quot;], how=&quot;inner&quot;)[l_c_idx.columns] l_c_idx['Var'] = l_c_idx.apply(lambda x: cp.boolvar(name=x['location']+'-'+str(x['tpnb'])+'-'+x[&quot;brand_nm&quot;]), axis=1) # model_initiation m = cp.Model() # 1. Every Item should take whatever space is needed by them l_c_idx.groupby('tpnb', as_index=False).agg({'Var':cp.sum, 'gridlinear': 'unique'}).apply(lambda x: m.constraints.append(x['Var']==int(float(x[&quot;gridlinear&quot;]))), axis=1) # 2. All location should take distinct items only l_c_idx.groupby('location', as_index=False).agg({'Var':cp.sum}).apply(lambda x: m.constraints.append(x['Var']&lt;=1), axis=1) # 3. All Item partitions should be on the same shelf l_c_idx[&quot;y&quot;] = l_c_idx[&quot;y&quot;].astype(&quot;int32&quot;) l_c_idx[&quot;x&quot;] = l_c_idx[&quot;x&quot;].astype(&quot;int32&quot;) shelf_var = {tpnb: cp.intvar(0, max(l_c_idx[&quot;y&quot;])) for tpnb in l_c_idx[&quot;tpnb&quot;].unique()} l_c_idx.apply(lambda row: m.constraints.append( (row['Var'] == 1).implies(row['y'] == shelf_var[row['tpnb']]) ), axis=1) # 4. 
All the item partition should be together node_p_var_dict = parallel_creator('tpnb', l_c_idx) for p in l_c_idx['tpnb'].unique(): for shelf in sorted(l_c_idx[l_c_idx.tpnb == p].y.unique()): m.constraints.append( cp.sum([(node_p_var_dict[p][(level, shelf)] != node_p_var_dict[p][(level+1, shelf)]) for level in range(section_width*bay_count - 1)]) &lt;= 2 ) m.constraints.append(node_p_var_dict[p][(0, shelf)] + node_p_var_dict[p][((section_width*bay_count)-1, shelf)] &lt;= 1) adj_var = dict() for brand in l_c_idx['brand_nm'].unique(): if brand in ['ORAL-B']: brand_neighbors = [] temp = defaultdict(dict) brand_df = l_c_idx[l_c_idx['brand_nm'] == brand].copy() brand_df['x'] = brand_df['x'].astype(int) brand_df['y'] = brand_df['y'].astype(int) for _, r in brand_df.iterrows(): temp[(int(r['x']), int(r['y']))][r['tpnb']] = r['Var'] for (x, y), curr in temp.items(): for dx, dy in [(-1,0), (1,0), (0,-1), (0,1)]: neighbor_pos = (x+dx, y+dy) if neighbor_pos in temp: neighbor = cp.any(temp[neighbor_pos].values()) brand_neighbors.append(cp.any(curr.values()) &amp; neighbor) adj_var[brand] = cp.intvar(0, len(brand_neighbors), name=f&quot;adj_score_{brand}&quot;) #print(len(brand_neighbors), int(brand_df.gridlinear_brand.unique()[0]), len(brand_neighbors)/int(brand_df.gridlinear_brand.unique()[0]) * 10) m += (adj_var[brand] == (cp.sum(brand_neighbors)//int(brand_df.gridlinear_brand.unique()[0])) * 10) #m += (adj_var[brand] == cp.sum(brand_neighbors)) #m += (cp.sum(brand_neighbors)//int(brand_df.gridlinear_brand.unique()[0])) * 10 &gt; 10 print(&quot;total constraints added = &quot;, len(m.constraints)) m.maximize(sum(adj_var.values())) print(&quot;All constraint added, now trying to solve !!!&quot;) if m.solve(time_limit=500, num_search_workers=8, log_search_progress=True): sum_ = 0 for k,v in adj_var.items(): sum_ += v.value() print(&quot;Solution found with adjacency value = &quot;, sum_) l_c_idx['value'] = l_c_idx.Var.apply(lambda x : x.value()) selected_items = l_c_idx[l_c_idx[&quot;value&quot;] == True] unique_items = selected_items[&quot;brand_nm&quot;].unique() palette = sns.color_palette(&quot;Spectral&quot;, len(unique_items)) color_map_dict = {item: palette[i] for i, item in enumerate(unique_items)} # Create grid graph G_temp = nx.grid_2d_graph(int(section_width * bay_count / granularity), int(shelf_count)) G_temp.remove_edges_from(list(G_temp.edges)) # Set positions for each node pos = {(x, y): (x, y) for x, y in G_temp.nodes()} # Assign colors based on item placement node_colors = [] for x, y in G_temp.nodes(): item_at_loc = selected_items[selected_items[&quot;location&quot;] == str((x, y))] if not item_at_loc.empty: node_colors.append(color_map_dict[item_at_loc[&quot;brand_nm&quot;].values[0]]) else: node_colors.append(&quot;white&quot;) # Plot graph plt.figure(figsize=(20, 15)) nx.draw(G_temp, pos, node_size=50, node_color=node_colors, with_labels=False) else: print(&quot;No solution found&quot;) </code></pre> <p>GridGraph_mod.py</p> <pre><code>import networkx as nx def process_group(level, data): return level, {eval(row['location']): row['Var'] for _, row in data.iterrows()} </code></pre> <p>Data: Sample</p> <pre><code>tpnb,linear,item_height,item_width,item_depth,brand_nm 61452116,16.1,3.6,15.0,3.0,AQUAFRESH 62977195,14.0,4.5,13.0,3.7,SENSODYNE 81116754,17.2,4.6,17.0,3.8,AQUAFRESH 85988423,16.0,3.6,15.0,3.0,COLGATE 85992262,9.0,14.9,3.5,3.5,TESCO PROFORMULA 85992832,9.8,18.9,4.5,2.5,TESCO PROFORMULA 86016064,11.4,23.0,10.5,2.4,TESCO ESSENTIALS 88903192,17.2,4.6,17.0,3.8,AQUAFRESH 
91256336,17.8,23.2,4.2,3.0,AQUAFRESH 51769631,19.5,4.2,19.0,3.5,MACLEANS 52098829,19.5,4.2,19.0,3.5,MACLEANS 76956089,19.7,4.3,18.9,3.8,AQUAFRESH 79360074,35.0,4.5,17.3,4.0,COLGATE 84928779,18.4,3.3,18.0,3.3,TESCO ESSENTIALS 93038799,19.4,4.5,18.7,3.7,AQUAFRESH 51164799,19.5,21.2,9.0,4.6,COLGATE 51257776,19.5,21.2,9.0,4.6,COLGATE 73385394,19.5,21.3,9.0,4.5,COLGATE 85290842,34.0,22.5,8.4,5.0,TESCO ESSENTIALS 85290856,17.4,22.1,8.5,4.8,PRO FORMULA 86006295,13.5,16.2,6.4,6.4,TESCO PROFORMULA 61489561,21.0,16.0,7.0,4.8,AQUAFRESH 74168299,8.9,23.0,4.2,2.5,AQUAFRESH 80184938,26.0,23.0,5.5,2.8,COLGATE 81116731,35.0,4.6,17.0,3.8,AQUAFRESH 84538526,17.5,3.8,16.9,1.3,AQUAFRESH 91713372,12.3,23.9,11.0,5.0,ORAL-B 92000950,6.8,21.6,6.0,1.0,ORAL-B </code></pre> <p>Please help how can I make this work for large inputs as well.</p>
<python><optimization><linear-programming><constraint-programming><cpmpy>
2025-04-15 19:36:40
1
391
Anand
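<p>For the shelf-allocation question above, the cell-per-Boolean encoding grows as items times grid cells, which is usually what hurts solve times here. A commonly used alternative is one integer start position plus one shelf variable per item, with pairwise non-overlap constraints, which keeps the model size roughly quadratic in the number of items rather than proportional to the grid. A very small hedged sketch of that idea in cpmpy (the widths and the brand-compactness objective are placeholders, not a drop-in replacement for the full model):</p>
<pre><code>import cpmpy as cp

widths = [16, 14, 17, 16]                 # rounded item widths (placeholder data)
shelf_len, n_shelves = 399, 7
n = len(widths)

shelf = cp.intvar(0, n_shelves - 1, shape=n, name='shelf')
start = cp.intvar(0, shelf_len - 1, shape=n, name='start')

m = cp.Model()
for i in range(n):
    m += start[i] + widths[i] &lt;= shelf_len
    for j in range(i + 1, n):
        # items on the same shelf must not overlap
        m += (shelf[i] != shelf[j]) | (start[i] + widths[i] &lt;= start[j]) | \
             (start[j] + widths[j] &lt;= start[i])

# toy brand-compactness objective: minimise the spread of items 0..2
brand = [0, 1, 2]
m.minimize(cp.max([start[i] for i in brand]) - cp.min([start[i] for i in brand]))
print(m.solve())
</code></pre>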
79,575,741
6,101,024
Add a new row as the average of columns
<p>Give the following dataframe:</p> <pre><code>_BETTER _SAME _WORSE ___dataset Metric 0.373802 0.816794 0.568783 Train precision 0.391304 0.865229 0.519324 Train recall 0.382353 0.840314 0.542929 Train f1-score 0.500000 1.000000 0.583333 Val precision 0.333333 1.000000 0.736842 Val recall 0.400000 1.000000 0.651163 Val f1-score 0.000000 0.000000 0.666667 Test precision 0.000000 0.000000 0.500000 Test recall 0.000000 0.000000 0.571429 Test f1-score </code></pre> <p>would like to add the followings:</p> <pre><code>_BETTER _SAME _WORSE ___dataset Metric 0.373802 0.816794 0.568783 Train precision 0.391304 0.865229 0.519324 Train recall 0.382353 0.840314 0.542929 Train f1-score 0.500000 1.000000 0.583333 Val precision 0.333333 1.000000 0.736842 Val recall 0.400000 1.000000 0.651163 Val f1-score 0.000000 0.000000 0.666667 Test precision 0.000000 0.000000 0.500000 Test recall 0.000000 0.000000 0.571429 Test f1-score mean_p_b mean_p_s mean_p_w All precision_avg mean_r_b mean_r_s mean_r_w All recall_avg mean_f1_b mean_f1_s mean_f1_w All f1_score_avg </code></pre> <p>where <code>mean_p_b</code> <code>mean_p_s</code> <code>mean_p_w</code> is obtained by the average of the precision row, w.r.t the three colums, respectively. Likewise the <code>mean_r_b</code> <code>mean_r_s</code> <code>mean_r_w</code> and <code>mean_f1_b</code> <code>mean_f1_s</code> <code>mean_f1_w</code>.</p> <p>Applying each separately:</p> <pre><code>df_avg_precision[&quot;BETTER&quot;] = (df_train_precision['_BETTER'].values + df_val_precision['_BETTER'].values + df_test_precision['_BETTER'].values)/3 df_avg_precision[&quot;Metric&quot;] = &quot;precision_avg&quot; df_avg_recall[&quot;BETTER&quot;] = (df_train_recall['_BETTER'].values + df_val_recall['_BETTER'].values + df_test_recall['_BETTER'].values)/3 df_avg_recall[&quot;Metric&quot;] = &quot;recall_avg&quot; df_avg_f1[&quot;BETTER&quot;] = (df_train_f1['_BETTER'].values + df_val_f1['_BETTER'].values + df_test_f1['_BETTER'].values)/3 df_avg_f1[&quot;Metric&quot;] = &quot;f1_avg&quot; </code></pre>
<python><pandas><dataframe>
2025-04-15 17:38:39
1
697
Carlo Allocca
79,575,684
5,423,080
PyTorchMetrics Mean Absolute Percentage Error extremely high value
<p>I am using <code>PyTorch</code> for a face recognition coursework and I have to calculate the MAPE value.</p> <p>My first attempt was with <code>torchmetrics.MeanAbsolutePercentageError</code> class, but the result doesn't make sense.</p> <p>For this reason I wrote a function to calculate it and it seems to work fine.</p> <p>Investigating a bit, it seems to me the problem is related to the presence of <code>0</code> in the truth value array, but I didn't find anything in the <code>torchmetrics</code> documentation.</p> <p>Is there a way to avoid this problem in <code>torchmetrics</code>?</p> <p>Is it possible that the <code>epsilon</code> value in the MAPE formula is not set? If this is the case, how can I give it a value?</p> <p>I am happy to use the other function, but I am curious to understand the reason of those results with <code>torchmetrics</code>.</p> <p>These are the 2 function to calculate the MAPE:</p> <pre><code>def calculate_mape_torch(preds, targets): &quot;&quot;&quot;Calculate MAPE using PyTorch method. Args: preds: array with ground truth values targets: array with predictions from model Returns: MAPE &quot;&quot;&quot; if not isinstance(preds, torch.Tensor): preds = torch.tensor(preds) if not isinstance(targets, torch.Tensor): targets = torch.tensor(targets) mape = MeanAbsolutePercentageError() return mape(preds, targets) * 100 def calculate_mape(preds, targets, epsilon=1): &quot;&quot;&quot;Calculate the Mean Absolute Percentage Error. Args: preds: array with ground truth values targets: array with predictions from model epsilon: value to avoid divide by zero problem Returns: MAPE &quot;&quot;&quot; preds_flatten = preds.flatten(&quot;F&quot;) targets_flatten = targets.flatten(&quot;F&quot;) return np.sum(np.abs(targets_flatten - preds_flatten) / np.maximum(epsilon, targets_flatten)) / len(preds_flatten) * 100 </code></pre> <p>With these values:</p> <pre><code>y_true = np.array([[1, 0, 3], [4, 5, 6]]) y_pred = np.array([[3, 2, 2], [7, 3, 6]]) </code></pre> <p>the 2 functions give the results:</p> <pre><code>&gt;&gt;&gt; calculate_mape(y_pred, y_true) 91.38888888888889 &gt;&gt;&gt; calculate_mape_torch(y_pred, y_true) tensor(28490084.) </code></pre> <p>With these values:</p> <pre><code>y_true = np.array([[1, 2, 3], [4, 5, 6]]) y_pred = np.array([[3, 2, 2], [7, 3, 6]]) </code></pre> <p>the 2 functions give the results:</p> <pre><code>&gt;&gt;&gt; calculate_mape(y_pred, y_true) 58.05555555555556 &gt;&gt;&gt; calculate_mape_torch(y_pred, y_true) tensor(58.0556) </code></pre>
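<p>For what it's worth, the huge number looks consistent with the zero target simply being divided by a very small internal epsilon rather than being skipped. This back-of-the-envelope check reproduces the same order of magnitude (the epsilon value is only my guess, reverse-engineered from the output, not something I found documented):</p> <pre><code>import numpy as np

eps = 1.17e-6  # guessed internal epsilon, not confirmed anywhere in the docs
y_true = np.array([1, 0, 3, 4, 5, 6], dtype=float)
y_pred = np.array([3, 2, 2, 7, 3, 6], dtype=float)

terms = np.abs(y_true - y_pred) / np.maximum(eps, np.abs(y_true))
print(terms.mean() * 100)  # ~2.849e7, same magnitude as the torchmetrics result
</code></pre>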
<python><pytorch><torchmetrics>
2025-04-15 17:03:25
1
412
cicciodevoto
79,575,456
1,480,131
Is a lock needed when multiple tasks push into the same asyncio Queue?
<p>Consider this example where I have 3 worker tasks that push results into a queue and a task that deals with the pushed data.</p> <pre class="lang-py prettyprint-override"><code> async def worker1(queue: asyncio.Queue): while True: res = await do_some_work(param=1) await queue.put(res) async def worker2(queue: asyncio.Queue): while True: res = await do_some_work(param=2) await queue.put(res) async def worker3(queue: asyncio.Queue): while True: res = await do_some_work(param=3) await queue.put(res) async def handle_results(queue: asyncio.Queue): while True: res = await queue.get() await handle_result(res) queue.task_done() async def main(): queue = asyncio.Queue() t1 = asyncio.create_task(worker1(queue)) t2 = asyncio.create_task(worker2(queue)) t3 = asyncio.create_task(worker3(queue)) handler = asyncio.create_task(handle_results(queue)) while True: # do some other stuff .... asyncio.run(main()) </code></pre> <p>The documentation says that <code>asyncio.Queue</code> is not thread-safe, but this should not apply here because all tasks are running in the same thread. But do I need an <code>asyncio.Lock</code> to protect the queue when I have 3 tasks that push into the same queue? Looking at the implementation in Python 3.12 (which creates a <code>putter</code> future and awaits on it before pushing into the queue) I would say <strong>no</strong>, but I'm not sure and the documentation does not mention what would happen in this case. So, is the <code>asyncio.Lock</code> in this case necessary?</p>
<python><locking><python-asyncio>
2025-04-15 14:43:50
1
13,662
Pablo
79,575,363
2,939,369
How to force numba to return a numpy type?
<p>I find this behavior quite counter-intuitive although I suppose there is a reason for it - numba automatically converts my numpy integer types directly into a python int:</p> <pre class="lang-py prettyprint-override"><code>import numba as nb import numpy as np print(f&quot;Numba version: {nb.__version__}&quot;) # 0.59.0 print(f&quot;NumPy version: {np.__version__}&quot;) # 1.23.5 # Explicitly define the signature sig = nb.uint32(nb.uint32, nb.uint32) @nb.njit(sig, cache=False) def test_fn(a, b): return a * b res = test_fn(2, 10) print(f&quot;Result value: {res}&quot;) # returns 20 print(f&quot;Result type: {type(res)}&quot;) # returns &lt;class 'int'&gt; </code></pre> <p>This is an issue as I'm using the return as an input into another njit function so I get a casting warning (and I also do unnecessary casts in-between the njit functions)</p> <p>Is there any way to force numba to give me <code>np.uint32</code> as a result instead?</p> <p>--- EDIT ---</p> <p>This is the best I've managed to do myself, however I refuse to believe this is the best implementation out there:</p> <pre class="lang-py prettyprint-override"><code># we manually define a return record and pass it as a parameter res_type = np.dtype([('res', np.uint32)]) sig = nb.void(nb.uint32, nb.uint32, nb.from_dtype(res_type)) @nb.njit(sig, cache=False) def test_fn(a:np.uint32, b:np.uint32, res: res_type): res['res'] = a * b # Call with Python ints (Numba should coerce based on signature) res = np.recarray(1, dtype=res_type)[0] res_py_in = test_fn(2, 10, res) print(f&quot;\nCalled with Python ints:&quot;) print(f&quot;Result value: {res['res']}&quot;) # 20 print(f&quot;Result type: {type(res['res'])}&quot;) # &lt;class 'numpy.uint32'&gt; </code></pre> <p>--- EDIT 2 --- as @Nin17 correctly pointed out actually returning an int object is still about 3 times quicker when called from python context, so its better to just return a simple int and cast as needed.</p>
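<p>For completeness, the only alternative I have found myself is a plain Python wrapper that casts the scalar back on the way out, so the cast overhead just moves outside the jitted function; minimal sketch below, nothing numba-specific about it:</p> <pre class="lang-py prettyprint-override"><code>import numba as nb
import numpy as np

sig = nb.uint32(nb.uint32, nb.uint32)

@nb.njit(sig, cache=False)
def test_fn(a, b):
    return a * b

def test_fn_np(a, b):
    # numba hands back a plain int; cast it into a NumPy scalar here
    return np.uint32(test_fn(a, b))

res = test_fn_np(2, 10)
print(type(res))  # &lt;class 'numpy.uint32'&gt;
</code></pre>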
<python><numba>
2025-04-15 14:09:27
2
831
Raven
79,575,221
10,755,782
How to add a progress bar to Python multiprocessing.Pool without slowing it down (TQDM is 7x slower)?
<p>I'm running a long computation using multiprocessing.Pool in Python, and I wanted to add a progress bar to track it. Naturally, I tried tqdm with pool.imap, but surprisingly, it's ~7.3x slower than using pool.map() without it.</p> <p>Here's a minimal example to reproduce the issue:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import time from multiprocessing import Pool, cpu_count from functools import partial from tqdm import tqdm def dummy_task(step_id, size=500): data = np.random.randn(size, 3) dist = np.linalg.norm(data, axis=1) return step_id, np.min(dist) if __name__ == &quot;__main__&quot;: steps = list(range(500000)) # Simulate 500,000 steps size = 500 print(&quot;Running normally...&quot;) t0 = time.time() with Pool(processes=cpu_count()) as pool: results = pool.map(partial(dummy_task, size=size), steps) t1 = time.time() pt = t1 - t0 print(f&quot;Time taken: {pt:.3f} seconds\n&quot;) print(&quot;Running with tqdm...&quot;) t2 = time.time() with Pool(processes=cpu_count()) as pool: results = list(tqdm(pool.imap(partial(dummy_task, size=size), steps), total=len(steps))) t3 = time.time() pt_tqm = t3 - t2 print(f&quot;Time taken: {pt_tqm:.3f} seconds&quot;) print(f&quot;Pool Process with TQDM is {pt_tqm/pt:.3f} times slower.&quot;) </code></pre> <p>There is a similar question <a href="https://stackoverflow.com/questions/50455516/progress-bar-slows-down-code-by-factor-of-5-using-tqdm-and-multiprocess">here</a> from six years ago. But there is no usable answer there. Here I'm looking for a way to implement a progress bar or some form of progress information without sacrificing speed. Is there any way to achieve this?</p> <p>NB: I just used imap in the tqdm example as I could not figure out how to use map with tqdm.</p>
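<p>Update: one variant I have been experimenting with (not sure it is the right fix) is giving <code>imap</code> an explicit <code>chunksize</code> so it batches work the way <code>map</code> does internally; this drops into the same <code>__main__</code> block with the same imports and <code>dummy_task</code> as above:</p> <pre class="lang-py prettyprint-override"><code>chunk = max(1, len(steps) // (cpu_count() * 8))  # rough heuristic, not tuned
with Pool(processes=cpu_count()) as pool:
    results = list(tqdm(
        pool.imap(partial(dummy_task, size=size), steps, chunksize=chunk),
        total=len(steps),
    ))
</code></pre>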
<python><multiprocessing><tqdm>
2025-04-15 13:03:19
2
660
brownser
79,575,107
2,431,627
Correct selector to target a 'Textual' SelectionList selected items' text?
<p>I have a <a href="https://textual.textualize.io/" rel="nofollow noreferrer">Textual</a> <code>SelectionList</code> in my Python console app:</p> <pre class="lang-py prettyprint-override"><code>... yield SelectionList[str]( *tuple(self.selection_items), id=&quot;mylist&quot;) ... </code></pre> <p>In my associated <code>.tcss</code> (Textual CSS), how would I target the text of a selected item? I'd like all selected items to have a different text colour.</p> <p>Interpreting the source of <code>SelectionList</code>, <code>OptionList</code> and <code>ToggleButton</code>, I've tried a variety of selectors targeting the <code>-on</code> variant of the <code>ToggleButton</code> child of the <code>SelectionList</code> component (globally and via an <code>#mylist</code> id selector), and while I can change the colour for <em>all</em> items, I'm unable to restrict it to just the selected ones.</p>
<python><tui><textual>
2025-04-15 12:08:51
1
1,973
Robin Macharg
79,575,076
2,350,145
How to fit the font-awesome icon with text in the nicegui mermaid graph nodes
<p>I am trying to create a <code>mermaid</code> graph in nicegui. I am able to fit the icon and text in the graph node. But in browser only the icon is showing up. I know it is because of the width of the node's label. How to fix it?</p> <p><a href="https://i.sstatic.net/MvUXGqpB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MvUXGqpB.png" alt="browser pic" /></a></p> <pre><code>from nicegui import ui @ui.page('/') def index_page() -&gt; None: def fa_setup(): ui.run_javascript(&quot;&quot;&quot; const link = document.createElement('link'); link.rel = 'stylesheet'; link.href = 'https://cdnjs.cloudflare.com/ajax/libs/font-awesome/6.0.0-beta3/css/all.min.css'; document.head.appendChild(link); &quot;&quot;&quot;) fa_setup() class State: def __init__(self): self.graph = &quot;&quot;&quot; graph LR; A1[&quot;fa:fa-credit-card A1&quot;]; A2[&quot;fa:fa-server A2&quot;]; A1 -- |4ms| --&gt; A2; &quot;&quot;&quot; state = State() diag = ui.mermaid(content=state.graph, config={'securityLevel': 'loose', 'theme': 'classic'}) diag.bind_content(state, 'graph') ui.run() </code></pre>
<python><nicegui>
2025-04-15 11:56:05
2
5,808
Anirban Nag 'tintinmj'
79,575,019
389,119
How to configure Playwright so that the Chrome inspector opens at the bottom instead of the right?
<p>Since I often open the inspector while debugging tests in Playwright, I'd like to know if there's a way to make it open at the bottom when pressing F12 instead of to the right. I know I can change the position manually by clicking there:</p> <p><a href="https://i.sstatic.net/SVUoo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SVUoo.png" alt="enter image description here" /></a></p> <p>But it's cumbersome to do it at every debugging session, so I'd like to adjust that programmatically.</p> <p>I know I can set some Chrome flags, but I couldn't find any to set the position of the inspector. How can I make the default position of the inspector to be at the bottom?</p>
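<p>The closest I have got so far is reusing a persistent profile, so the manual dock choice only has to be made once and is then remembered between sessions. That is not really a programmatic default, which is why I am still asking; the profile path below is just a scratch directory I made up:</p> <pre><code>from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    context = p.chromium.launch_persistent_context(
        user_data_dir='.pw-debug-profile',  # made-up path; any writable dir works
        headless=False,
        devtools=True,  # open the inspector on each new page
    )
    page = context.new_page()
    page.goto('https://example.com')
    page.pause()  # keep the session open while I poke around
</code></pre>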
<python><google-chrome><playwright><playwright-python>
2025-04-15 11:23:11
1
12,025
antoyo
79,574,989
3,719,399
How to write polars dataframe to S3 with partition_by?
<p>I am able to write a polars dataframe to S3 when <code>partition_by=None</code> using the following code:</p> <pre><code>import os import s3fs import polars as pl df = pl.DataFrame({'a': [1, 2, 3, 4], 'b': ['left', 'left', 'right', 'right']}) os.environ['AWS_CA_BUNDLE'] = r'C:\my\path\to\cert.pem' fs = s3fs.S3FileSystem( key='my-key', secret='my-secret', endpoint_url='https://my-endpoint-url.net' ) with fs.open('s3://my-bucket-name/my/path/to/file', mode='wb') as f: df.write_parquet(f) </code></pre> <p>However, when I set <code>partition_by='b'</code> using the following code:</p> <pre><code>import os import s3fs import polars as pl df = pl.DataFrame({'a': [1, 2, 3, 4], 'b': ['left', 'left', 'right', 'right']}) os.environ['AWS_CA_BUNDLE'] = r'C:\my\path\to\cert.pem' fs = s3fs.S3FileSystem( key='my-key', secret='my-secret', endpoint_url='https://my-endpoint-url.net' ) with fs.open('s3://my-bucket-name/my/path/to/file', mode='wb') as f: df.write_parquet(f, partition_by='b') </code></pre> <p>I get the following error:</p> <pre><code>TypeError: 'S3File' object cannot be converted to 'PyString' </code></pre> <p>How do I write polars dataframe to S3 with <code>partition_by='b'</code>?</p> <p>My Python environment:</p> <pre><code>python 3.9.7 polars 1.26.0 s3fs 2025.3.2 aiobotocore 2.17.0 botocore 1.35.93 boto3 1.35.93 </code></pre>
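<p>A fallback I am considering, in case <code>write_parquet</code> cannot take a file object together with <code>partition_by</code>, is to hand the Arrow table over to pyarrow and let it do the partitioned write through the same <code>s3fs</code> filesystem (sketch only, assuming pyarrow is installed):</p> <pre><code>import pyarrow.parquet as pq

table = df.to_arrow()
pq.write_to_dataset(
    table,
    root_path='my-bucket-name/my/path/to/file',
    partition_cols=['b'],
    filesystem=fs,
)
</code></pre>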
<python><amazon-s3><python-polars>
2025-04-15 11:01:52
1
988
chengcj
79,574,821
1,908,650
How do I center vertical labels in a seaborn barplot?
<p>When I add a vertical text label to a <code>seaborn</code> bar chart, the labels are offset to the left of centre, like so:</p> <p><a href="https://i.sstatic.net/AwiQwH8J.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AwiQwH8J.png" alt="enter image description here" /></a></p> <p><strong>How can I nudge the labels a little to the right so that they are nicely centred?</strong></p> <p>MWE:</p> <pre><code>import matplotlib.pyplot as plt import seaborn as sns percentages = [8.46950845e+00, 1.58712232e+01, 2.13963086e+01, 2.33865318e+01, 2.04539820e+01, 1.41358888e+01, 7.79622697e+00, 3.49245775e+00, 1.28729140e+00, 3.91327891e-01, 9.80073446e-02, 2.02461563e-02, 3.46222120e-03, 4.92413036e-04, 5.82500571e-05, 5.70562340e-06, 4.59952409e-07, 3.05322729e-08, 1.57123083e-09, 4.80936609e-11] plt.figure(figsize=(9, 5), dpi=100) ax = sns.barplot( x = range(1, len(percentages) + 1), y = percentages, legend=False, hue = range(1, len(percentages) + 1), palette=&quot;GnBu_d&quot; ) plt.text(0, percentages[0]/2, &quot;Additive&quot;, ha=&quot;center&quot;, va=&quot;center&quot;, rotation=90, fontsize=10, weight=&quot;bold&quot;) plt.text(1, percentages[1]/2, &quot;Additive x Additive&quot;, ha=&quot;center&quot;, va=&quot;center&quot;, rotation=90, weight=&quot;bold&quot;) plt.show() </code></pre>
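<p>The workaround I am currently playing with is reading the bar positions back from <code>ax.patches</code> instead of hard-coding the x coordinates, on the assumption (unverified) that the patches come back in the same order as the x values. This would replace the two <code>plt.text</code> calls and the <code>plt.show()</code> at the end of the MWE:</p> <pre><code>labels = ['Additive', 'Additive x Additive']
for bar, label in zip(ax.patches, labels):
    x = bar.get_x() + bar.get_width() / 2   # true centre of the drawn bar
    ax.text(x, bar.get_height() / 2, label, ha='center', va='center',
            rotation=90, fontsize=10, weight='bold')
plt.show()
</code></pre>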
<python><matplotlib><seaborn>
2025-04-15 09:40:40
1
9,221
Mohan
79,574,649
388,506
How to create a virtual record of a variable?
<p>I have this defined for my equipment model:</p> <pre><code>class AssetEquipment(models.Model): _name = &quot;asset.tracking&quot; _description = &quot;Module to track asset movement&quot; scrap_date = fields.Date('Scrap date') owner_id = fields.Many2one('res.users', string='Owner', default=False, tracking=True) status_id = fields.Selection([ ('0', 'Inactive'), ('1', 'Active'), ('2', 'Under Repair'), ('3', 'Scrap'), ], string='Status', default='0', store=False, compute='_compute_status') </code></pre> <p>The rules are as follows:</p> <ol> <li>If <code>owner_id</code> is blank, then the item status is <code>Inactive</code>, otherwise it is <code>Active</code>.</li> <li>If <code>scrap_date</code> is set, then the status is <code>Scrap</code> (<code>3</code>), regardless of the <code>owner_id</code> field.</li> <li>This one is unimportant for this question, but if the system detects that this equipment has maintenance in progress, then the status is <code>Under Repair</code>.</li> </ol> <p>Now, <code>_compute_status</code> is not a problem, since I can write it myself.</p> <p>The problem is that I want to make something like <code>owner_history</code> that contains the whole list of <code>owner_id</code> tracking values, with date, owner_id and status fields gathered from the tracking values, and display them in a tabular way, something like:</p> <pre><code>&lt;field name=&quot;owner_history&quot;&gt; &lt;list&gt; &lt;field name=&quot;date&quot; /&gt; &lt;field name=&quot;owner_id&quot; /&gt; &lt;field name=&quot;status&quot; /&gt; &lt;/list&gt; &lt;/field&gt; </code></pre> <p>How do I do that?</p> <p>Thank you</p>
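<p>P.S. The only rough shape I can come up with myself is a dedicated history model filled in from <code>write()</code>, roughly as below (model and field names are just placeholders, and it is written as an <code>_inherit</code> only to keep the snippet short). Is that the way to go, or can the existing tracking values of <code>owner_id</code> be reused directly?</p> <pre><code>from odoo import fields, models


class AssetOwnerHistory(models.Model):
    _name = 'asset.owner.history'
    _description = 'Owner history line'

    asset_id = fields.Many2one('asset.tracking', required=True, ondelete='cascade')
    date = fields.Date(default=fields.Date.context_today)
    owner_id = fields.Many2one('res.users')
    status = fields.Char()


class AssetEquipment(models.Model):
    _inherit = 'asset.tracking'

    owner_history_ids = fields.One2many('asset.owner.history', 'asset_id')

    def write(self, vals):
        res = super().write(vals)
        if 'owner_id' in vals or 'scrap_date' in vals:
            for rec in self:
                self.env['asset.owner.history'].create({
                    'asset_id': rec.id,
                    'owner_id': rec.owner_id.id,
                    'status': rec.status_id,  # stores the selection key ('0'..'3')
                })
        return res
</code></pre>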
<python><odoo><odoo-18>
2025-04-15 08:09:29
0
2,157
Magician
79,574,578
17,795,398
VSCode pylance not detecting matplotlib and mpi4py
<p>This is the problem in VSCode:</p> <pre><code>import matplotlib.pyplot as plt # Import &quot;matplotlib.pyplot&quot; could not be resolved from source Pylance(reportMissingModuleSource) from mpi4py import MPI # No name 'MPI' in module 'mpi4py' Pylint(E0611:no-name-in-module) </code></pre> <p>However, when I run the code, it works.</p> <p>I've searched around and people point to the selected interpreter; however:</p> <p><a href="https://i.sstatic.net/AJjt56e8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AJjt56e8.png" alt="" /></a></p> <p>In the screenshot you can see part of the output of the <code>pip show matplotlib</code> command and the selected interpreter (3.13.0). If I open the interpreter picker, I can choose between 3.13 and 3.11, with 3.13 being the selected one.</p>
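<p>To rule out a mismatch between the interpreter Pylance analyses with and the one the packages were installed into, the check I run from the VSCode integrated terminal is simply:</p> <pre><code>import sys
import matplotlib
from mpi4py import MPI

print(sys.executable)       # which interpreter is actually executing
print(matplotlib.__file__)  # where the import is resolved from
print(MPI.Get_version())    # proves mpi4py really is importable here
</code></pre> <p>Both imports succeed there, which is why I think this is a Pylance/Pylint analysis problem rather than a missing package.</p>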
<python><matplotlib><visual-studio-code><pylance><mpi4py>
2025-04-15 07:27:10
0
472
Abel Gutiérrez
79,574,571
8,899,106
Is PyPNG outdated or wrongly documented?
<p>I'm trying to save rows of pixels in python using PyPNG. First of all, i'm seeing various use of <code>png.Writer(width, height, bitdepth=8, greyscale=False)</code> with different input, from :</p> <ul> <li><a href="https://png.Writer(width,%20height,%20bitdepth=8,%20greyscale=False)" rel="nofollow noreferrer">a simple 1D array of width pixels (pixel as 3 int long array)</a></li> <li><a href="https://pypng.readthedocs.io/en/latest/index.html#custom-iterators" rel="nofollow noreferrer">a width * height 2D array (pixel as tuple of 3 int)</a></li> </ul> <p>According to the second example, this code should be a valid utilisation of writer for a 10*10 board:</p> <pre><code>self.writer = png.Writer(size=(self.board.width, self.board.height), greyscale=False, bitdepth=8) ... later self.writer.write(stream, self.board.getRowsAsRGB()) # getRowsAsRGB returns [[(0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0)], [(0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0)], [(0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0)], [(0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0)], [(0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0)], [(0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0)], [(0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0)], [(0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0)], [(0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0)], [(0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0)]] </code></pre> <p>But it returns <code>png.ProtocolError: ProtocolError: Expected 30 values but got 10 values, in row 0</code>.<br /> I assume my writer expected flat array for each row, so I tried with</p> <pre><code>[[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]] </code></pre> <p>Which unexpectedly works. I'm browsing all PyPNG doc I can find, looking for any example of this use. Is it documentation issue or I just used without notificing an experimental version of PyPNG ? Is PyPNG testing me ? Est ce que je ?</p>
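<p>For now I am working around it by flattening each row of (R, G, B) tuples myself before handing it to <code>write()</code>; a minimal shim (assuming <code>rows</code> is the nested structure my <code>getRowsAsRGB()</code> returns):</p> <pre><code>from itertools import chain

flat_rows = [list(chain.from_iterable(row)) for row in rows]
self.writer.write(stream, flat_rows)
</code></pre>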
<python><pypng>
2025-04-15 07:20:25
1
772
Portevent
79,574,506
3,554,605
Algorand: TransactionPool.Remember: TransactionPool.ingest: no pending block evaluator
<p>I am currently trying out Algorand, with the intention of self-hosting a completely new network with just a single command.</p> <p>You can try it out via</p> <pre class="lang-bash prettyprint-override"><code>$ git clone https://github.com/hermesloom/algorand-node-fly $ cd algorand-node-fly $ ./setup.sh 100 EUR </code></pre> <p>The last step of the setup script is that it is running the unit tests from test.py in that repository, particularly this one:</p> <pre class="lang-py prettyprint-override"><code> def test_04_transfer_funds(self): &quot;&quot;&quot;Test transferring funds from genesis to a new test account.&quot;&quot;&quot; try: # Create a test account to transfer funds to test_account = self.create_test_account() # Transfer funds from genesis to test account result = self.api_client.transfer( self.genesis_address, self.genesis_mnemonic, test_account[&quot;address&quot;], self.test_transfer_amount, &quot;Test transfer&quot;, ) # Check result self.assertIn(&quot;tx_id&quot;, result) self.assertIn(&quot;status&quot;, result) tx_id = result[&quot;tx_id&quot;] print(f&quot;Transfer initiated with transaction ID: {tx_id}&quot;) # If status is pending, wait briefly for confirmation if result[&quot;status&quot;] == &quot;pending&quot;: print(&quot;Transaction pending, waiting 5 seconds for confirmation...&quot;) time.sleep(5) # Verify the balance was received by checking the test account account_info = self.api_client.get_balance( test_account[&quot;address&quot;], test_account[&quot;mnemonic&quot;] ) self.assertIn(&quot;balance&quot;, account_info) balance = account_info[&quot;balance&quot;] # The test account should have the transferred amount (or potentially more) self.assertGreaterEqual( balance, self.test_transfer_amount, f&quot;Test account balance {balance} is less than transfer amount {self.test_transfer_amount}&quot;, ) print(f&quot;Test account received {balance} picoXDRs&quot;) except Exception as e: self.fail(f&quot;Failed to transfer funds and verify: {e}&quot;) </code></pre> <p>The transfer endpoint uses algosdk like this:</p> <pre class="lang-py prettyprint-override"><code>@app.route(&quot;/api/transfer&quot;, methods=[&quot;POST&quot;]) def transfer_funds(): &quot;&quot;&quot;Transfer funds from one account to another with mnemonic authentication.&quot;&quot;&quot; # Basic rate limiting client_ip = request.remote_addr if rate_limit(client_ip): return jsonify({&quot;error&quot;: &quot;Rate limit exceeded&quot;}), 429 try: data = request.get_json() sender_address = data.get(&quot;from&quot;) sender_mnemonic = data.get(&quot;mnemonic&quot;) receiver_address = data.get(&quot;to&quot;) amount = data.get(&quot;amount&quot;) note = data.get(&quot;note&quot;, &quot;&quot;) # Validate inputs if not all([sender_address, sender_mnemonic, receiver_address, amount]): return jsonify({&quot;error&quot;: &quot;Missing required fields&quot;}), 400 try: amount = int(amount) if amount &lt;= 0: raise ValueError(&quot;Amount must be positive&quot;) except ValueError: return jsonify({&quot;error&quot;: &quot;Invalid amount&quot;}), 400 # Validate mnemonic corresponds to sender address if not validate_mnemonic(sender_mnemonic, sender_address): return jsonify({&quot;error&quot;: &quot;Invalid mnemonic for sender address&quot;}), 403 # Convert mnemonic to private key sender_private_key = mnemonic.to_private_key(sender_mnemonic) # Get transaction parameters params = algod_client.suggested_params() # Create and sign transaction unsigned_txn = PaymentTxn( sender=sender_address, sp=params, 
receiver=receiver_address, amt=amount, note=note.encode() if note else None, ) signed_txn = unsigned_txn.sign(sender_private_key) # Submit transaction tx_id = algod_client.send_transaction(signed_txn) # Wait for confirmation try: wait_for_confirmation(algod_client, tx_id) return jsonify({&quot;tx_id&quot;: tx_id, &quot;status&quot;: &quot;confirmed&quot;}) except Exception as e: return jsonify({&quot;tx_id&quot;: tx_id, &quot;status&quot;: &quot;pending&quot;, &quot;error&quot;: str(e)}), 202 except Exception as e: app.logger.error(f&quot;Error transferring funds: {e}&quot;) return jsonify({&quot;error&quot;: f&quot;Failed to transfer funds: {str(e)}&quot;}), 500 </code></pre> <p>And when I run the unit test, I get this error:</p> <pre><code>FAIL: test_04_transfer_funds (__main__.AlgorandAPITest.test_04_transfer_funds) Test transferring funds from genesis to a new test account. ---------------------------------------------------------------------- Traceback (most recent call last): File &quot;/Users/nalenz/Programmierung/algorand-node-fly/local/test.py&quot;, line 222, in test_04_transfer_funds result = self.api_client.transfer( self.genesis_address, ...&lt;3 lines&gt;... &quot;Test transfer&quot;, ) File &quot;/Users/nalenz/Programmierung/algorand-node-fly/local/api_client.py&quot;, line 70, in transfer raise Exception(f&quot;Error transferring funds: {response.text}&quot;) Exception: Error transferring funds: {&quot;error&quot;:&quot;Failed to transfer funds: TransactionPool.Remember: TransactionPool.ingest: no pending block evaluator&quot;} During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;/Users/nalenz/Programmierung/algorand-node-fly/local/test.py&quot;, line 260, in test_04_transfer_funds self.fail(f&quot;Failed to transfer funds and verify: {e}&quot;) ~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ AssertionError: Failed to transfer funds and verify: Error transferring funds: {&quot;error&quot;:&quot;Failed to transfer funds: TransactionPool.Remember: TransactionPool.ingest: no pending block evaluator&quot;} </code></pre> <p>How can this &quot;no pending block evaluator&quot; issue be fixed?</p>
<python><algorand>
2025-04-15 06:37:45
0
1,429
sigalor
79,574,433
4,050,510
Cannot run PyVista/VTK inside a Huggingface multiprocessing map()
<p>The following code crashes with a forking error. It says <code>objc[81151]: +[NSResponder initialize] may have been in progress in another thread when fork() was called. We cannot safely call it or ignore it in the fork() child process. Crashing instead. Set a breakpoint on objc_initializeAfterForkError to debug.</code></p> <pre class="lang-py prettyprint-override"><code>import numpy as np import pyvista as pv import datasets data = [{&quot;coords&quot;: np.random.rand(5, 3)} for _ in range(3)] def render_point(example): plotter = pv.Plotter(off_screen=True) cloud = pv.PolyData(example[&quot;coords&quot;]) plotter.add_mesh(cloud) img = plotter.screenshot(return_img=True) return {&quot;image&quot;: img} # breaks if num_proc&gt;1 ds = datasets.Dataset.from_list(data).map(render_point, num_proc=2) </code></pre> <p>The documentation on <a href="https://huggingface.co/docs/datasets/main/en/process#multiprocessing" rel="nofollow noreferrer">https://huggingface.co/docs/datasets/main/en/process#multiprocessing</a> implies that the map() call is a fork, and it seems pyvista cannot work with forking (I'm not sure here, but that is what it seems like). However, adding</p> <pre class="lang-py prettyprint-override"><code>import multiprocess multiprocess.set_start_method(&quot;spawn&quot;) </code></pre> <p>did not solve things, as it only generated the error <code>TypeError: fork_exec() takes exactly 23 arguments (21 given)</code>.</p> <p>How can I reap the benefits of huggingface datasets parallelism with pyvista?</p>
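<p>The workaround I am currently considering (which gives up <code>map()</code>'s caching, hence the question) is to do the rendering outside of <code>datasets</code> with a spawn-based pool and only build the dataset afterwards; rough sketch replacing the last line of the snippet above, untested on macOS:</p> <pre class="lang-py prettyprint-override"><code>import multiprocessing as mp
from concurrent.futures import ProcessPoolExecutor

if __name__ == '__main__':
    ctx = mp.get_context('spawn')  # avoid fork() entirely
    with ProcessPoolExecutor(max_workers=2, mp_context=ctx) as ex:
        rendered = list(ex.map(render_point, data))
    # may need .tolist() on the arrays depending on the datasets version
    ds = datasets.Dataset.from_list(
        [{**example, **extra} for example, extra in zip(data, rendered)]
    )
</code></pre>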
<python><fork><huggingface-datasets><pyvista>
2025-04-15 05:34:54
1
4,934
LudvigH
79,574,386
16,869,946
Constructing custom loss function in lightgbm
<p>I have a pandas dataframe that records the outcome of F1 races:</p> <pre><code>data = { &quot;Race_ID&quot;: [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4], &quot;Racer_Number&quot;: [1, 2, 3, 4, 1, 2, 3, 4, 1, 2, 3, 4, 1, 2, 3, 4, 1, 2, 3, 4], &quot;Win_odds&quot;: [2.3, 4.3, 1.3, 5.0, 3.4, 1.1, 2.3, 2.0, 7.4, 3.1, 1.8, 6.0, 2.4, 5.2, 3.5, 3.0, 1.4, 3.2, 2.5, 5], &quot;Rank&quot;: [3, 2, 1, 4, 2, 1, 4, 3, 4, 2, 1, 3, 4, 2, 3, 1, 1, 2, 3, 4] } df_long = pd.DataFrame(data) </code></pre> <p>and I want to train a lightgbm classifier using the LGBMClassifier API to predict the winner of a race using Win_odds as feature. To do that, we convert the data to wide format:</p> <pre><code># Reshape into wide format (one row per race, racers as columns) df_wide = df_long.pivot( index=&quot;Race_ID&quot;, columns=&quot;Racer_Number&quot;, values=[&quot;Win_odds&quot;,&quot;Rank&quot;] ) df_wide.columns = [f&quot;{col[0]}_{col[1]}&quot; for col in df_wide.columns] df_wide = df_wide.reset_index() df_wide['target'] = df_wide.apply(lambda row: np.argmin(row['Rank_1':'Rank_4']) , axis=1) </code></pre> <p>And then we can use the columns Win_odds_1, Win_odds_2, Win_odds_3, Win_odds_4 as features and the target as our target. In multi-class classification, typically we use the (multi-class) log loss as our loss function as documented here: <a href="https://scikit-learn.org/stable/modules/model_evaluation.html#log-loss" rel="nofollow noreferrer">https://scikit-learn.org/stable/modules/model_evaluation.html#log-loss</a></p> <p><a href="https://i.sstatic.net/O9uSJUG1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/O9uSJUG1.png" alt="enter image description here" /></a></p> <p>where in our case K=4, N=5. However, I would like to construct a custom loss function along with its gradient and hessian defined by the following formula:</p> <p><a href="https://i.sstatic.net/XoyQFicg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XoyQFicg.png" alt="enter image description here" /></a></p> <p>where o_{i,k} is the Win_odds of sample i with label k and lambda is a positive constant (say 1) to determine the relative significance of the regularization term.</p> <p>To simplify notation, let</p> <p><a href="https://i.sstatic.net/jagL5BFd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jagL5BFd.png" alt="enter image description here" /></a> <a href="https://i.sstatic.net/Tzl04jJj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Tzl04jJj.png" alt="enter image description here" /></a> <a href="https://i.sstatic.net/rUbLezek.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rUbLezek.png" alt="enter image description here" /></a></p> <p>To construct the custom loss function for use in lightgbm, we need to compute the gradient and hessian and here is my attempt:</p> <p><a href="https://i.sstatic.net/ppTGY6fg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ppTGY6fg.png" alt="enter image description here" /></a> <a href="https://i.sstatic.net/o1awRrA4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/o1awRrA4.png" alt="enter image description here" /></a></p> <p>However, I am not sure if there is any way to simplify the above expression, and much less to write it in code. 
And I have no idea how to compute the hessian since I am stuck at the gradient.</p> <p>So for example the total custom loss of the above dataset is given by</p> <pre><code>\frac{-1}{5}((\log(p_{0,2})-(p_{0,0}-\frac{1}{2.3})^2-(p_{0,1}-\frac{1}{4.3})^2-(p_{0,2}-\frac{1}{1.3})^2-(p_{0,3}-\frac{1}{5})^2)+ (\log(p_{1,1})-(p_{1,0}-\frac{1}{3.4})^2-(p_{1,1}-\frac{1}{1.1})^2-(p_{1,2}-\frac{1}{2.3})^2-(p_{1,3}-\frac{1}{2})^2)+ (\log(p_{2,2})-(p_{2,0}-\frac{1}{7.4})^2-(p_{2,1}-\frac{1}{3.1})^2-(p_{2,2}-\frac{1}{1.8})^2-(p_{2,3}-\frac{1}{6})^2)+ (\log(p_{3,3})-(p_{3,0}-\frac{1}{2.4})^2-(p_{3,1}-\frac{1}{5.2})^2-(p_{3,2}-\frac{1}{3.5})^2-(p_{3,3}-\frac{1}{3})^2)+ (\log(p_{4,0})-(p_{4,0}-\frac{1}{1.4})^2-(p_{4,1}-\frac{1}{3.2})^2-(p_{4,2}-\frac{1}{2.5})^2-(p_{4,3}-\frac{1}{5})^2)) </code></pre> <p>I am not sure how to do it as my target variable is 1-dimensional with only the label of the winner, and LightGBM's API doesn't support passing additional data to the loss function directly.</p>
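<p>To make the question more concrete, this is the shape of the custom objective I have sketched so far. The odds are captured in a closure because LightGBM only passes labels and raw scores to the objective; the Hessian below is only the cross-entropy part, used as a diagonal approximation; and I have not checked whether my LightGBM version delivers the raw scores as an <code>(n_samples, n_classes)</code> array or flattened, so the reshape may need adjusting. Is the gradient of the regularisation term here correct?</p> <pre><code>import numpy as np
import lightgbm as lgb

def make_objective(win_odds, lam=1.0):
    # win_odds: (n_samples, n_classes) array of o_{i,k}; q_{i,k} = 1 / o_{i,k}
    q = 1.0 / win_odds

    def objective(y_true, y_pred):
        n, k = q.shape
        raw = y_pred.reshape(n, k)            # layout assumption, see note above
        raw = raw - raw.max(axis=1, keepdims=True)
        p = np.exp(raw)
        p /= p.sum(axis=1, keepdims=True)     # softmax probabilities p_{i,k}

        y = np.zeros_like(p)
        y[np.arange(n), y_true.astype(int)] = 1.0

        diff = p - q
        inner = (diff * p).sum(axis=1, keepdims=True)
        grad = (p - y) + 2.0 * lam * p * (diff - inner)  # d loss_i / d f_{i,k}
        hess = 2.0 * p * (1.0 - p)            # cross-entropy part only (approximation)
        return grad.reshape(y_pred.shape), hess.reshape(y_pred.shape)

    return objective

features = [f'Win_odds_{i}' for i in range(1, 5)]
win_odds_train = df_wide[features].to_numpy()
clf = lgb.LGBMClassifier(objective=make_objective(win_odds_train, lam=1.0))
clf.fit(df_wide[features], df_wide['target'])
</code></pre>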
<python><loss-function><lightgbm>
2025-04-15 04:37:08
1
592
Ishigami