Dataset schema (column name, type, and observed minimum/maximum; for string columns the bounds are string lengths):

Column               Type          Min                    Max
QuestionId           int64         74.8M                  79.8M
UserId               int64         56                     29.4M
QuestionTitle        string        15 chars               150 chars
QuestionBody         string        40 chars               40.3k chars
Tags                 string        8 chars                101 chars
CreationDate         date string   2022-12-10 09:42:47    2025-11-01 19:08:18
AnswerCount          int64         0                      44
UserExpertiseLevel   int64         301                    888k
UserDisplayName      string        3 chars                30 chars
79,670,945
285,777
Parsing question for Python with LXML and Requests (Soap)
<p>This is an example response:</p> <pre><code>&lt;soap-env:Envelope xmlns:soap-env=&quot;http://schemas.xmlsoap.org/soap/envelope/&quot;&gt; &lt;soap-env:Header /&gt; &lt;soap-env:Body&gt; &lt;api:PlexViewResponse xmlns:api=&quot;http://alu.com/plexwebapi/api&quot; Command=&quot;ACT-USER&quot; SwitchName=&quot;&quot; RequestId=&quot;ACTUSER1748031686781471283&quot; SessionId=&quot;&quot; CongestionIndicator=&quot;false&quot; Status=&quot;SUCCESS&quot;&gt; &lt;SessionId&gt;session_123456&lt;/SessionId&gt; &lt;/api:PlexViewResponse&gt; &lt;/soap-env:Body&gt; &lt;/soap-env:Envelope&gt; </code></pre> <p>I've figured out how to return sub-elements like SessionId, but I need to obtain the status from the PlexViewResponse element -&gt; Status=&quot;SUCCESS&quot;.</p> <p>I set the namespace like this:</p> <pre><code># Set Namespace nsmap = {'api': 'http://alu.com/plexwebapi/api'} # Get Session ID SessionID = com_login.findall('.//SessionId', nsmap)[0].text </code></pre> <p>I'm struggling to find the proper way to obtain that status; if the call errors, there would also be an Error=&quot;&quot; attribute.</p>
<python><lxml>
2025-06-18 15:52:00
1
409
Kelso
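For the lxml/SOAP question above, a minimal sketch of one way to read that value: Status (and, on failure, Error) are plain attributes on the api:PlexViewResponse element, so after locating that element with the same nsmap they can be read with .get(). It assumes com_login is the parsed response element as in the question.

```python
# Assumes com_login is the parsed SOAP response element and nsmap matches the question.
nsmap = {'api': 'http://alu.com/plexwebapi/api'}

response = com_login.find('.//api:PlexViewResponse', nsmap)
status = response.get('Status')   # e.g. "SUCCESS"
error = response.get('Error')     # None unless the call failed
```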
79,670,944
6,843,153
Adding text to gdoc using Google API sets random paragraph style
<p>I have the following method that adds formatted text to a data structure for it to be used in a <code>batchUpdate</code> request:</p> <pre><code> def add_text( self, text, bold=None, color=None, font_size=None, bg_color=None, italic=None, ): if text == &quot;&quot;: return else: # Force to string to further processing text = str(text) style_updates = { &quot;updateTextStyle&quot;: { &quot;range&quot;: { &quot;startIndex&quot;: self.index, &quot;endIndex&quot;: self.index + len(text), }, &quot;textStyle&quot;: { &quot;bold&quot;: False, &quot;italic&quot;: False, &quot;foregroundColor&quot;: { &quot;color&quot;: { &quot;rgbColor&quot;: { &quot;red&quot;: 0, &quot;green&quot;: 0, &quot;blue&quot;: 0, } } }, &quot;backgroundColor&quot;: {}, }, &quot;fields&quot;: &quot;bold, italic, foregroundColor, backgroundColor&quot;, } } if bold: style_updates[&quot;updateTextStyle&quot;][&quot;textStyle&quot;][&quot;bold&quot;] = bold if italic: style_updates[&quot;updateTextStyle&quot;][&quot;textStyle&quot;][&quot;italic&quot;] = italic if color: style_updates[&quot;updateTextStyle&quot;][&quot;textStyle&quot;][ &quot;foregroundColor&quot; ] = { &quot;color&quot;: { &quot;rgbColor&quot;: { &quot;red&quot;: color[0], &quot;green&quot;: color[1], &quot;blue&quot;: color[2], } } } if font_size: style_updates[&quot;updateTextStyle&quot;][&quot;textStyle&quot;][&quot;fontSize&quot;] = { &quot;magnitude&quot;: font_size, &quot;unit&quot;: &quot;PT&quot;, } style_updates[&quot;updateTextStyle&quot;][&quot;fields&quot;] += &quot;, fontSize&quot; if bg_color: style_updates[&quot;updateTextStyle&quot;][&quot;textStyle&quot;][ &quot;backgroundColor&quot; ] = { &quot;color&quot;: { &quot;rgbColor&quot;: { &quot;red&quot;: bg_color[0], &quot;green&quot;: bg_color[1], &quot;blue&quot;: bg_color[2], } } } self.updates += [ { &quot;insertText&quot;: { &quot;location&quot;: { &quot;index&quot;: self.index, }, &quot;text&quot;: text, } }, style_updates, ] self.index += len(text) </code></pre> <p>The problem I'm having is that the resulting text in the target GDoc is &quot;Heading 6&quot; styled. I don't know where that style spec came from (the previous paragraph is &quot;Heading 5&quot;, so it is not a hanging context that was taken). I also tried to append a paragraph style:</p> <pre><code> if style: style_updates[&quot;updateParagraphStyle&quot;] = { &quot;range&quot;: { &quot;startIndex&quot;: self.index, &quot;endIndex&quot;: self.index + len(text), }, &quot;paragraphStyle&quot;: {&quot;namedStyleType&quot;: style}, &quot;fields&quot;: &quot;namedStyleType&quot;, } </code></pre> <p>But it raises a Google API exception saying that it os not allowed.</p> <p>Any help would be appreciated.</p>
<python><google-api><google-docs-api>
2025-06-18 15:50:32
0
5,505
HuLu ViCa
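For the Google Docs question above, a hedged sketch of one plausible cause of the "not allowed" exception: each element of a batchUpdate requests list may contain only one request type, so updateParagraphStyle cannot be added as an extra key on the dict that already holds updateTextStyle; it has to be appended as its own request. Names (self.updates, self.index, style) follow the question; this is an assumption, not a verified fix.

```python
# Hypothetical sketch: append the paragraph style change as a separate request
# instead of adding an "updateParagraphStyle" key to the updateTextStyle dict.
if style:
    self.updates.append(
        {
            "updateParagraphStyle": {
                "range": {
                    "startIndex": self.index,
                    "endIndex": self.index + len(text),
                },
                "paragraphStyle": {"namedStyleType": style},  # e.g. "NORMAL_TEXT"
                "fields": "namedStyleType",
            }
        }
    )
```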
79,670,825
2,666,289
Index array using boolean mask with a "broadcasted" dimension?
<p>I have two arrays <code>a1</code> and <code>a2</code> of shape <code>M1xM2x...xMPx3</code> and <code>N1xN2x...xNQx3</code>, and a mask <code>m</code> (boolean array) of shape <code>M1xM2x...xMPxN1xN2x...xNQ</code> (you can assume that <code>a1</code> and <code>a2</code> are at least 2D).</p> <pre class="lang-py prettyprint-override"><code>import numpy as np np.random.seed(0) a1 = np.random.rand(4, 5, 3) a2 = np.random.rand(6, 3) m = np.random.rand(4, 5, 6) &gt;= 0.7 </code></pre> <p>I would like to obtain two arrays <code>b1</code> and <code>b2</code> of shape <code>Mx3</code> that contain the values of <code>a1</code> and <code>a2</code> where the mask <code>m</code> is true.</p> <p>My current way is to repeat <code>a1</code> and <code>a2</code> to obtain arrays of size <code>M1xM2x...xMPxN1xN2x...xNQx3</code> and then index them using <code>m</code>:</p> <pre class="lang-py prettyprint-override"><code>a1_shape, a2_shape = a1.shape[:-1], a2.shape[:-1] t_shape = a1_shape + a2_shape + (3,) a1 = a1[..., None, :].repeat(np.prod(a2_shape), axis=-2).reshape(t_shape) a2 = a2[None, ..., :].repeat(np.prod(a1_shape), axis=0).reshape(t_shape) b1, b2 = a1[m, :], a2[m, :] </code></pre> <p>This requires creating two huge arrays of size <code>M1xM2x...xMPxN1xN2x...xNQ</code> which can be problematic.</p> <p>Is there a way to obtain the same <code>b1</code> and <code>b2</code> without having to create such huge intermediate arrays?</p>
<python><numpy>
2025-06-18 14:35:52
1
38,048
Holt
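For the NumPy masking question above, a sketch of one way to avoid the repeated intermediate arrays: np.nonzero(m) yields one integer index array per mask dimension, and the first P of them index a1 while the remaining Q index a2 directly. Variable names follow the question's example.

```python
import numpy as np

np.random.seed(0)
a1 = np.random.rand(4, 5, 3)
a2 = np.random.rand(6, 3)
m = np.random.rand(4, 5, 6) >= 0.7

idx = np.nonzero(m)     # P + Q index arrays, one per mask axis
p = a1.ndim - 1         # number of leading axes of a1
b1 = a1[idx[:p]]        # shape (K, 3), K = number of True entries in m
b2 = a2[idx[p:]]        # shape (K, 3)
```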
79,670,668
6,500,048
marimo switches to global python instead of using uv venv
<p>I am running marimo with uv, as I have done with other projects, but for some reason it switches from the venv created with uv to the global version of Python, and nothing I do seems to make it run in the uv venv.</p> <p>Is there some metadata or path environment I am missing to get this working?</p> <p>Here is my <code>project.toml</code> file. Is it supposed to have the venv in there?</p> <pre class="lang-toml prettyprint-override"><code>[project] name = &quot;govenance-data&quot; version = &quot;0.1.0&quot; description = &quot;Add your description here&quot; readme = &quot;README.md&quot; requires-python = &quot;&gt;=3.11&quot; dependencies = [ &quot;xlsxwriter&gt;=3.2.5&quot;, ] </code></pre>
<python><uv><marimo>
2025-06-18 12:51:05
1
1,279
iFunction
79,670,587
24,696,572
pytorch: torch.bucketize with not-strictly increasing sequence
<p>I am working with a batched interpolation class where a typical task is to find the knot index for each evaluation point in the batch. This sounds like a job for <code>torch.bucketize</code>, but the knot vector is not strictly increasing; it has repeated values. See the example below.</p> <p>The documentation of <code>torch.bucketize</code> states for the boundary argument: <strong>&quot;...must contain a strictly increasing sequence, or the return value is undefined.&quot;</strong></p> <p>So is there a similar function that can be used instead? Or is the documentation just not precise here, and in fact only the returned indices at the left and right extremes <em>can</em> be incorrect?</p> <pre><code>import torch if __name__ == &quot;__main__&quot;: # Repeated knots at start and end (typical for splines) boundaries = torch.tensor([0.0, 0.0, 0.0, 0.0, 0.5, 1.0, 1.0, 1.0, 1.0]) # Evaluation points around boundaries and middle x_test = torch.tensor([-0.1, 0.0, 0.25, 0.5, 0.75, 1.0, 1.1], dtype=torch.float64) indices = torch.bucketize(x_test, boundaries, right=False) print(&quot;Knots: &quot;, boundaries.tolist()) print(&quot;x values: &quot;, x_test.tolist()) print(&quot;Indices: &quot;, indices.tolist()) </code></pre>
<python><pytorch>
2025-06-18 11:56:58
0
332
Mathieu
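For the torch.bucketize question above, a hedged sketch of one possible workaround: collapse the repeated knots into a strictly increasing vector with torch.unique_consecutive and bucketize against that, which sidesteps the "strictly increasing" requirement. The returned indices then refer to the deduplicated knot vector, so they may need remapping for spline-span bookkeeping.

```python
import torch

boundaries = torch.tensor([0.0, 0.0, 0.0, 0.0, 0.5, 1.0, 1.0, 1.0, 1.0])
x_test = torch.tensor([-0.1, 0.0, 0.25, 0.5, 0.75, 1.0, 1.1], dtype=torch.float64)

# Deduplicate the knot vector so it is strictly increasing, then bucketize.
unique_knots = torch.unique_consecutive(boundaries)   # tensor([0.0, 0.5, 1.0])
indices = torch.bucketize(x_test, unique_knots, right=False)

print("Knots:   ", unique_knots.tolist())
print("x values:", x_test.tolist())
print("Indices: ", indices.tolist())
```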
79,670,289
221,166
Type-hinting a dynamic, asymmetric class property
<p>I'm currently working on removing all the type errors from my Python project in VS Code.</p> <p>Assume you have a Python class that has an asymmetric property. It takes any kind of iterable and converts it into a custom list subclass with additional methods.</p> <pre class="lang-py prettyprint-override"><code>class ObservableList(list): &quot;&quot;&quot;list with events for change, insert, remove, ...&quot;&quot;&quot; # ... class MyFrame: @property def my_list(self) -&gt; ObservableList: return self._my_list @my_list.setter def my_list(self, val: typing.Iterable): self._my_list = ObservableList(val) # ... my_frame = MyFrame() </code></pre> <p>VS Code (i.e. Pyright) will correctly deduce that:</p> <ul> <li>you can set <code>my_frame.my_list</code> using any iterable, and</li> <li><code>my_frame.my_list</code> will always be an <code>ObservableList</code>.</li> </ul> <p><strong>Now</strong>, let's assume that there is no actual <code>@property</code> code. Instead, the property is implemented dynamically using <code>__setattr__</code> and <code>__getattr__</code>. (Context: We're talking about a GUI generator which provides automatic bindings.)</p> <p>I want to use a declaration on class level to tell the typechecker that this property exists, without actually spelling it out:</p> <pre class="lang-py prettyprint-override"><code>class MyFrame(AutoFrame): my_list: ??? # ... </code></pre> <p>(<code>AutoFrame</code> provides the <code>__getattr__</code>/ <code>__setattr__</code> implementation.) What can I put in place of the <code>???</code> to make this work?</p> <ul> <li>When using <code>ObservableList</code>, Pyright complains when I assign a plain list to the property.</li> <li>When using <code>Iterable</code> or <code>list</code>, Pyright complains when I access <code>ObservableList</code>-specific methods.</li> <li>Same when using <code>list | ObservableList</code>: Pyright assumes that the property could return both, and <code>list</code> misses the additional methods.</li> </ul> <p>Re: close vote: the linked <a href="https://stackoverflow.com/questions/78723101/how-to-type-hint-an-attribute-that-can-be-assigned-with-a-value-of-super-type">question's</a> answer basically boils down to going back to square one (implementing the property explicitly). The point of using <code>AutoFrame</code> is specifically to get rid of that repetitive boilerplate code. Just imagine doing this for a GUI frame with a dozen bound controls. I can live with a single added declaration line but not much more.</p>
<python><python-typing><pyright>
2025-06-18 08:44:59
2
3,203
Torben Klein
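For the Pyright question above, one pattern that may fit, sketched under the assumption that a type-only declaration is acceptable: declare the attribute as a descriptor whose __get__ returns ObservableList and whose __set__ accepts any Iterable, and hide it behind TYPE_CHECKING so the runtime still goes through AutoFrame's __getattr__/__setattr__. The ObservableAttr name is illustrative; ObservableList and AutoFrame are the question's own classes.

```python
from collections.abc import Iterable
from typing import TYPE_CHECKING

# Assumes ObservableList and AutoFrame from the question are defined/importable here.

if TYPE_CHECKING:
    class ObservableAttr:
        """Type-only asymmetric descriptor: reads as ObservableList, accepts any Iterable."""
        def __get__(self, obj, objtype=None) -> ObservableList: ...
        def __set__(self, obj, value: Iterable) -> None: ...


class MyFrame(AutoFrame):
    if TYPE_CHECKING:
        my_list = ObservableAttr()   # declaration for the checker only
    # At runtime nothing is assigned, so AutoFrame.__getattr__/__setattr__ still handle access.
```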
79,670,265
1,023,928
When importing python library, package is sometimes found, at other times not found due to location of file in the project
<p>I have the following project structure:</p> <pre class="lang-none prettyprint-override"><code>PYTHON_DEVELOPMENT [WSL: UBUNTU2504] ├── mattlibrary ├── test_bench │ └── testme.ipynb └── test.ipynb </code></pre> <p>My problem is that when I <code>import mattlibrary</code> inside <code>testme.ipynb</code> the module is not found. However, inside <code>test.ipynb</code> it imports just fine. Now, I understand why the latter works, because the cwd finds <code>mattlibrary</code>. However, I installed the package mattlibrary as an editable package in my virtual environment.</p> <p><code>sys.path</code> contains the following:</p> <pre><code>['/usr/lib/python313.zip', '/usr/lib/python3.13', '/usr/lib/python3.13/lib-dynload', '', '/home/matt/development/python_development/.venv/lib/python3.13/site-packages'] </code></pre> <p>And here are the files in my site-packages directory within the virtual environment:</p> <pre><code>-rw-r--r-- 2 matt matt 93 Jun 18 18:12 __editable__.mattlibrary-0.1.0.pth -rw-r--r-- 2 matt matt 3918 Jun 18 18:12 __editable___mattlibrary_0_1_0_finder.py </code></pre> <p>I spent the entire day tinkering with different ideas, but still struggle to import the library from any location other than above the <code>mattlibrary</code> directory structure. I strictly do not want to use sys.path.insert each time I want to import my library.</p> <p>The entire point of installing the package in the virtual environment is so that it can be imported regardless of where the importing file resides. Is my understanding correct that the culprit is the <code>''</code> in sys.path? Is this where the import aborts because it cannot locate <code>mattlibrary</code> under the cwd? Then why even add the last path <code>'/home/matt/development/python_development/.venv/lib/python3.13/site-packages'</code> if the locator never reaches it to search that directory for my package?</p> <p>What am I doing wrong? This is driving me insane...</p>
<python><import><package>
2025-06-18 08:26:34
0
7,316
Matt
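For the editable-install question above, a small diagnostic that often narrows this down quickly: an editable install is wired up through a .pth/finder hook in the venv's site-packages, which only takes effect when the interpreter treats that directory as a site directory, so the first thing to check from inside each notebook is which interpreter the Jupyter kernel is actually running. This is a diagnostic sketch, not a fix.

```python
import sys

print(sys.executable)   # should point into .../python_development/.venv/bin/
print(sys.prefix)       # should be the venv directory, not /usr
print(sys.path)         # the venv's site-packages should appear here
```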
79,670,247
6,702,598
How to tell pylint about type checks
<p>The example below does a type check inside a function in order to keep the <em>foo</em> function cleaner. However, the last <code>return values[&quot;a&quot;]</code> gets a static type check error. Pylint does not understand that <code>values[&quot;a&quot;]</code> was already tested for <code>None</code> and it's actually safe to return <code>values[&quot;a&quot;]</code> as a string.</p> <pre class="lang-py prettyprint-override"><code>from typing import TypedDict class Values(TypedDict): a: str | None b: str | None def raise_if_null(values: Values): if values[&quot;a&quot;] is None: raise ValueError(&quot;bla&quot;) if values[&quot;b&quot;] is None: raise ValueError(&quot;bla&quot;) def foo(values: Values) -&gt; str: raise_if_null(values) return values[&quot;a&quot;] </code></pre> <p>How can I make pylint aware of the type check inside the function?</p> <p>(This question must have already been answered somehow, I guess. I just cannot find the right search word for it)</p>
<python><python-typing>
2025-06-18 08:07:24
0
3,673
DarkTrick
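For the narrowing question above, a sketch of one commonly used approach: a TypeGuard (PEP 647, in typing since Python 3.10) lets the helper tell the checker what it has proven, provided the guard is used in an if. The NonNullValues name is illustrative; this narrowing is understood by Pyright and mypy, while pylint's own inference may vary.

```python
from typing import TypedDict, TypeGuard


class Values(TypedDict):
    a: str | None
    b: str | None


class NonNullValues(TypedDict):
    a: str
    b: str


def has_no_nulls(values: Values) -> TypeGuard[NonNullValues]:
    return values["a"] is not None and values["b"] is not None


def foo(values: Values) -> str:
    if not has_no_nulls(values):
        raise ValueError("bla")
    return values["a"]   # the checker now knows this is str
```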
79,670,214
2,537,394
Type hinting optional properties in python 3.10+
<p>Suppose I have a Car class with the properties <code>color</code> and <code>owner</code>. Thinking from a seller's perspective, a new car doesn't have an owner yet. Therefore I'd like to make owner optional, as with the following code:</p> <pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass from enum import Enum class Color(Enum): RED = 0 BLUE = 1 @dataclass class Person(): name: str license_id: int class Car(): def __init__(self, color: Color, owner: Person|None = None) -&gt; None: self.color: Color = color self.owner: Person|None = owner def sell_to(self, buyer: Person) -&gt; None: self.owner = buyer </code></pre> <p>Now when I try to sell this car to someone ...</p> <pre class="lang-py prettyprint-override"><code>if __name__ == &quot;__main__&quot;: new_car = Car(Color.RED) john_doe = Person(&quot;John Doe&quot;, 123) new_car.sell_to(john_doe) print(f&quot;Car sold to {new_car.owner.name}&quot;) </code></pre> <p>My language server (Pylance in VSCode) complains for <code>new_car.owner.name</code> that <code>&quot;name&quot; is not a known attribute of &quot;None&quot;</code>, despite the code printing the correct output.</p> <p>When instead of using the type-hints <code>Person | None</code> I only use <code>Person</code>, Pylance complains in the init-definition that<br /> <code>Expression of type &quot;None&quot; cannot be assigned to parameter of type &quot;owner_type&quot;</code> and<br /> <code>&quot;None&quot; is not assignable to &quot;Person&quot;</code></p> <p>How can I correctly type-hint the owner such that it is an optional property and I don't need to check if <code>new_car.owner is not None</code> every time I want to access that property?</p>
<python><python-typing><pyright>
2025-06-18 07:44:28
3
731
YPOC
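For the Optional-property question above, a minimal sketch of narrowing at the point of use with assert, which is the usual way to tell Pylance that owner is no longer None after the sale; it reuses the Car, Color, and Person definitions from the question.

```python
new_car = Car(Color.RED)
john_doe = Person("John Doe", 123)
new_car.sell_to(john_doe)

assert new_car.owner is not None   # narrows Person | None to Person for the checker
print(f"Car sold to {new_car.owner.name}")
```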
79,670,155
15,468,624
Executorch v. 0.6 -- Failed build of apple-ios example
<p>I have an issue when trying building the apple-ios example app using XCode. I am working on a MacBook Pro 2021 with M1 processor.</p> <p>When opening the XCode project I have the following two issues in LLaMaRunner:</p> <blockquote> <p>Undefined symbol: executorch::extension::llm::load_tokenizer(std::__1::basic_string&lt;char, std::__1::char_traits, std::1::allocator&gt; const&amp;, std::1::unique_ptr&lt;std::1::vector&lt;std::1::basic_string&lt;char, std::__1::char_traits, std::1::allocator&gt;, std::1::allocator&lt;std::__1::basic_string&lt;char, std::__1::char_traits, std::1::allocator&gt;&gt;&gt;, std::1::default_delete&lt;std::1::vector&lt;std::1::basic_string&lt;char, std::__1::char_traits, std::1::allocator&gt;, std::1::allocator&lt;std::__1::basic_string&lt;char, std::__1::char_traits, std::1::allocator&gt;&gt;&gt;&gt;&gt;, std::1::optional&lt;std::__1::basic_string&lt;char, std::__1::char_traits, std::__1::allocator&gt;&gt;, unsigned long, unsigned long)</p> </blockquote> <blockquote> <p>Undefined symbol: executorch::extension::llm::create_text_llm_runner(std::__1::basic_string&lt;char, std::__1::char_traits, std::1::allocator&gt; const&amp;, std::1::unique_ptr&lt;tokenizers::Tokenizer, std::__1::default_deletetokenizers::Tokenizer&gt;, std::1::optional&lt;std::1::basic_string&lt;char, std::__1::char_traits, std::__1::allocator&gt; const&gt;, float)</p> </blockquote> <p>These are the information of the system version:</p> <pre><code>PyTorch version: 2.8.0.dev20250601 Is debug build: False CUDA used to build PyTorch: None ROCM used to build PyTorch: N/A OS: macOS 15.5 (arm64) GCC version: Could not collect Clang version: 17.0.0 (clang-1700.0.13.5) CMake version: version 3.31.6 Libc version: N/A Python version: 3.12.11 (main, Jun 3 2025, 15:41:47) [Clang 17.0.0 (clang-1700.0.13.3)] (64-bit runtime) Python platform: macOS-15.5-arm64-arm-64bit Is CUDA available: False CUDA runtime version: No CUDA CUDA_MODULE_LOADING set to: N/A GPU models and configuration: No CUDA Nvidia driver version: No CUDA cuDNN version: No CUDA HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Apple M1 Pro Versions of relevant libraries: [pip3] executorch==0.7.0a0+56392aa [pip3] numpy==2.3.0 [pip3] pytorch_tokenizers==0.1.0 [pip3] torch==2.8.0.dev20250601 [pip3] torchao==0.12.0+gitbc68b11f [pip3] torchaudio==2.8.0.dev20250601 [pip3] torchdata==0.11.0 [pip3] torchsr==1.0.4 [pip3] torchtune==0.6.1 [pip3] torchvision==0.23.0.dev20250601 [conda] Could not collect </code></pre>
<python><pytorch>
2025-06-18 06:57:11
0
307
alirek
79,670,111
19,459,262
Why does expressify not show calculations that require an input?
<p>I have a Shiny app in two sections - app.py, the main app, and page.py, because my code is rather long and I don't want it all in one file where I can accidentally nuke the whole thing. The problem is, I have interactive elements. Since I'm using expressify, I think it's refusing to display values if calculations are executed in the same function. Code below:</p> <h3>app.py</h3> <pre><code>import page from shiny import ui from shiny.express import input, ui with ui.navset_tab(id=&quot;selected_navset_tab&quot;): # Homepage with ui.nav_panel(&quot;Welcome&quot;, value=&quot;page_home&quot;): with ui.card(): # Variable calculated ui.input_selectize( &quot;filler&quot;, &quot;Filler&quot;, [&quot;list&quot;, &quot;of&quot;, &quot;items&quot;, &quot;here&quot;], multiple=False ) # other Nav page.otherNav() </code></pre> <h3>page.py</h3> <pre><code>from shiny import render from shiny.express import input, ui, expressify @expressify def otherNav(): with ui.nav_panel(&quot;Page I want to put in the main app&quot;, value=&quot;other_page&quot;): ui.input_numeric(&quot;input_value&quot;, &quot;I need this inpt value!&quot;, 1, min=1, max=1000) @render.express def text_to_display(): # this doesn't display anything! x = input.input_value() x </code></pre> <p>How can I get the text to show based on the user input, while keeping the code separate in a different file?</p>
<python><shiny-reactivity><py-shiny>
2025-06-18 06:26:07
1
784
Redz
79,669,954
1,060,209
regex to match uuid4 not working with f-string
<p>Working:</p> <pre><code>#!/usr/bin/python3 import re path = &quot;/a/b/c/e72cc82c-e83a-431c-9f63-c8d80eec9307&quot; if re.match(r&quot;/a/b/c/[a-f0-9]{8}-[a-f0-9]{4}-4[a-f0-9]{3}-[89aAbB][a-f0-9]{3}-[a-f0-9]{12}$&quot;, path): print(&quot;matched&quot;) else: print(&quot;didn't match anything&quot;) </code></pre> <p>Failing:</p> <pre><code>#!/usr/bin/python3 import re PATH_PREFIX = &quot;/a/b/c&quot; path = &quot;/a/b/c/e72cc82c-e83a-431c-9f63-c8d80eec9307&quot; if re.match(rf&quot;{PATH_PREFIX}/[a-f0-9]{8}-[a-f0-9]{4}-4[a-f0-9]{3}-[89aAbB][a-f0-9]{3}-[a-f0-9]{12}$&quot;, path): print(&quot;matched&quot;) else: print(&quot;didn't match anything&quot;) </code></pre> <p>I used the f-string in python regex a few years ago, and it was working. Not sure why it is failing now.</p>
<python><uuid><f-string>
2025-06-18 02:22:17
2
4,881
Qiang Xu
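For the f-string regex question above, the likely culprit is that inside an rf-string the braces of regex quantifiers are treated as replacement fields, so {8} is formatted to the text 8 and the quantifier disappears; doubling the braces restores them. A sketch:

```python
#!/usr/bin/python3
import re

PATH_PREFIX = "/a/b/c"
path = "/a/b/c/e72cc82c-e83a-431c-9f63-c8d80eec9307"

# In an rf-string, literal regex braces must be doubled: {8} -> {{8}}
pattern = (
    rf"{PATH_PREFIX}/[a-f0-9]{{8}}-[a-f0-9]{{4}}-4[a-f0-9]{{3}}"
    rf"-[89aAbB][a-f0-9]{{3}}-[a-f0-9]{{12}}$"
)
if re.match(pattern, path):
    print("matched")
else:
    print("didn't match anything")
```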
79,669,932
10,242,281
Python how to load SQL Server table with sqlalchemy?
<p>I'm trying to load table using Python in most effective way, and learned that <code>sqlalchemy / engine</code> is the best tool for that. But I really struggle to make this construction work for SQL Server as my target, I did exactly like in <a href="https://docs.sqlalchemy.org/en/20/dialects/mssql.html#module-sqlalchemy.dialects.mssql.pyodbc" rel="nofollow noreferrer">here</a> , played with all types of <code>= &quot;DRIVER={SQL Server Native Client 10.0}</code> but still it doesn't work. For this tasks I don't have any other option like SSRS/linked servers/etc.. I created <code>df</code> with different method for easy testing. Layout of tables is identical, just different server/db. Below is my code:</p> <pre><code>import pyodbc as pb import pandas as pd import sqlalchemy from sqlalchemy.engine import URL Source_connStr = 'Driver={SQL Server};Server=SQLXTBRR;Database=Base; Trusted_Connection=True;' Source_conn = pb.connect(Source_connStr) Target_connStr= 'Driver={SQL Server};Server=SQLXTBRR;Database=Custom; Trusted_Connection=True;' query = (&quot;SELECT * FROM dbo.Cust_Clients&quot;) df = pd.read_sql(query,SQL_conn) ## &lt;=== used to create df works OK print(df) connection_url = URL.create( &quot;mssql+pyodbc&quot;, query={&quot;odbc_connect&quot;: Target_connStr} ) engine = sqlalchemy.create_engine(connection_url) df.to_sql('Cust_Clients', engine, if_exists='append', index=False) ##line 30 load into table #pd.read_sql('Cust_Clients',engine) </code></pre> <p>And here is fragment of error messages, looks like it breaks on line 30. I tried to write and read with engine, same bad result. I assume first <code>UserWarning: pandas only support SQLALchemy..</code> is not critical.</p> <pre><code>C:\PythonScripts\LoadDB_V1.py:22: UserWarning: pandas only supports SQLAlchemy connectable (engine/connection) or database string URI or sqlite3 DBAPI2 connection. Other DBAPI2 objects are not tested. Please consider using SQLAlchemy. df = pd.read_sql(query,SQL_conn) id first_name last_name ... deleted general_id consent_refused 0 111111 John Walshes ... None 14785444 0 [1 rows x 18 columns] mssql+pyodbc://?odbc_connect=Driver%3D%7BSQL+Server%7D%3BServer% SQLXTBRR%3BDatabase% Custom%3B+Trusted_Connection%3DTrue%3B Traceback (most recent call last): File &quot;C:\PythonScripts\.venv\Lib\site-packages\sqlalchemy\engine\base.py&quot;, line 1963, in _exec_single_context self.dialect.do_execute( ~~~~~~~~~~~~~~~~~~~~~~~^ …. pyodbc.Error: ('HY104', '[HY104] [Microsoft][ODBC SQL Server Driver]Invalid precision value (0) (SQLBindParameter)') The above exception was the direct cause of the following exception: Traceback (most recent call last): File &quot;C:\PythonScripts\ LoadDB _V1.py&quot;, line 30, in &lt;module&gt; df.to_sql(Cust_Clients, engine, if_exists='append', index=False) ## load into table ~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\PythonScripts\.venv\Lib\site-packages\pandas\util\_decorators.py&quot;, line 333, in wrapper return func(*args, **kwargs) </code></pre>
<python><sql-server><pyodbc>
2025-06-18 01:45:53
0
504
Mich28
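For the SQL Server load question above, a hedged sketch of two things commonly tried for the "Invalid precision value (0)" error from to_sql: switching from the legacy "SQL Server" ODBC driver to a newer one such as "ODBC Driver 17 for SQL Server" (assuming it is installed), and enabling SQLAlchemy's fast_executemany for the pyodbc dialect. This is an assumption-based suggestion, not a verified fix for this exact table.

```python
import sqlalchemy
from sqlalchemy.engine import URL

# Assumes "ODBC Driver 17 for SQL Server" is installed; df is the frame from the question.
target_connstr = (
    "Driver={ODBC Driver 17 for SQL Server};"
    "Server=SQLXTBRR;Database=Custom;Trusted_Connection=yes;"
)
connection_url = URL.create("mssql+pyodbc", query={"odbc_connect": target_connstr})
engine = sqlalchemy.create_engine(connection_url, fast_executemany=True)

df.to_sql("Cust_Clients", engine, if_exists="append", index=False)
```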
79,669,881
17,246,545
Is there a problem with the connection code to PLC?
<p>I am trying to connect to a Mitsubishi QJ71E71 PLC with Python code.</p> <p>I have already set GX_works2's open setting.</p> <p>But when I connect to the PLC with the code below, I get a timed-out error.</p> <pre class="lang-py prettyprint-override"><code>import pymcprotocol pymc3e = pymcprotocol.Type3E() # Q series pymc3e.setaccessopt(commtype = &quot;binary&quot;, network = 1, pc = 18) is_con = True while is_con: try: pymc3e.connect(&quot;192.100.13.34&quot;, 5002) print(&quot;conn succ&quot;) quit() except Exception as e: print(f&quot;conn fail -&gt; {e}&quot;) break </code></pre> <p>And below is GX_works2's open setting. <a href="https://i.sstatic.net/iA8AoFj8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iA8AoFj8.png" alt="enter image description here" /></a></p> <p><strong>Note</strong></p> <ul> <li>The connection between 192.100.13.5 and the PLC (192.100.13.34) works well. It was already set up a few years ago.</li> <li>I know that Mitsubishi PLCs need a source port when connecting. So I fixed the code so that the packet includes the source port, but the timeout error still occurs. (I checked the connection packet with Wireshark, and the source port is 33213.)</li> </ul>
<python><network-programming><server><plc><lan>
2025-06-18 00:05:58
0
389
SecY
79,669,876
2,266,881
Process detachment
<p>This may be a very basic question.</p> <p>I made a simple Python script that launches mpv with a certain URL using:</p> <pre><code>p = subprocess.Popen([&quot;/usr/bin/mpv&quot;, link], stdout=subprocess.DEVNULL, stderr=subprocess.STDOUT) </code></pre> <p>Now, I'm seeing the following situation. If I:</p> <p>A) Open a terminal and run the script -&gt; mpv opens, the script ends, I can close the terminal and mpv stays open.</p> <p>B) Call a terminal with the process using <code>/usr/bin/wezterm start -e python /script/path/here</code> -&gt; mpv opens, but right after the script finishes, the terminal closes and mpv closes too.</p> <p>I get that it may be related to how the subprocesses/child processes are handled, but I can't see what the difference is between the two situations, because in both cases the script finishes running and the terminal where it was run is closed; however, in the first case mpv stays open, and in the second case it doesn't.</p> <p>I need to figure out how to get mpv in the second case to stay open after the terminal closes.</p> <p>I'm on Arch.</p> <p>Any ideas?</p> <p>Thanks in advance!</p>
<python><linux><arch>
2025-06-17 23:50:57
1
1,594
Ghost
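For the process-detachment question above, a sketch of one common approach: the difference usually comes down to whether mpv is still in the terminal's process group when the terminal goes away, so starting it in its own session decouples it from the launching terminal regardless of how the script itself was started. The link variable follows the question.

```python
import subprocess

# start_new_session=True runs mpv in a new session (setsid), so it no longer
# belongs to the terminal's process group and is not hung up when the
# terminal that launched the script closes.
p = subprocess.Popen(
    ["/usr/bin/mpv", link],
    stdout=subprocess.DEVNULL,
    stderr=subprocess.STDOUT,
    start_new_session=True,
)
```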
79,669,473
794,329
Exhaustively check for all literals while allowing for type inference of the return type
<p>I’m trying to write a Python function that satisfies three goals simultaneously:</p> <ol> <li>Exhaustively checks all possible values of a <code>Literal</code> argument.</li> <li>Avoids linter warnings like Ruff’s &quot;too many return statements&quot;.</li> <li>Preserves precise return type inference, e.g., inferring <code>Literal[42, &quot;string&quot;]</code>.</li> </ol> <p>Consider the following two approaches:</p> <p><strong>Version 1</strong> — Match-based (good inference, bad for linters)</p> <pre class="lang-python prettyprint-override"><code>from typing import Literal, assert_never, reveal_type def foo(x: Literal[&quot;a&quot;, &quot;b&quot;]): match x: case &quot;a&quot;: return 42 case &quot;b&quot;: return &quot;string&quot; case _: assert_never(x) reveal_type(foo) # Pyright infers: (x: Literal['a', 'b']) -&gt; Literal[42, 'string'] </code></pre> <p>This version gives perfect return type inference. But linters like Ruff warn about having too many return statements.</p> <p>⸻</p> <p><strong>Version 2</strong> — Dict-based (cleaner code, poor inference)</p> <pre class="lang-python prettyprint-override"><code>def bar(x: Literal[&quot;a&quot;, &quot;b&quot;]): mapping = {&quot;a&quot;: 42, &quot;b&quot;: &quot;string&quot;} return mapping[x] reveal_type(bar) # Pyright infers: (x: Literal['a', 'b']) -&gt; Unknown </code></pre> <p>This avoids the linter warning, but now the return type is no longer inferred precisely.</p> <p>⸻</p> <p>Question:</p> <p>Is there a way to get all three at once?</p> <ul> <li>Exhaustive checking (e.g., via match or equivalent)</li> <li>A single return statement (to satisfy linters) or a way to avoid &quot;too many returns&quot; warning</li> <li>And automatic inference of a precise return type?</li> </ul> <p>Any pattern that satisfies all of these would be ideal.</p>
<python><python-typing><literals>
2025-06-17 16:29:13
2
441
Danilo Horta
79,669,389
2,893,712
Pandas Subtract One Dataframe From Another (if match)
<p>I have a pandas dataframe that has information about total attendance for schools grouped by School, District, Program, Grade, and Month #. The data looks like the following (<code>df</code>):</p> <pre><code>School District Program Grade Month Count 123 456 A 9-12 10 100 123 456 B 9-12 10 95 321 654 A 9-12 10 23 321 456 A 7-8 10 40 </code></pre> <p>Some of the counts are inflated and need to be reduced based on the data from another dataframe (<code>ToSubtract</code>):</p> <pre><code>School District Program Grade Month Count 123 456 A 9-12 10 10 321 654 A 9-12 10 8 </code></pre> <p>Both dataframes are already grouped so there will be no duplicate grouping. Subtracting <code>ToSubtract</code> from <code>df</code> will result in:</p> <pre><code>School District Program Grade Month Count X 123 456 A 9-12 10 90 * 123 456 B 9-12 10 95 321 654 A 9-12 10 15 * 321 456 A 7-8 10 40 </code></pre> <p>(<code>X</code> column to be marked with <code>*</code> to indicate value was modified).</p> <p><code>df</code> has a lot more entries for all of the other schools, districts, months, etc. I was looking into <code>df.sub()</code> but it looks like the elements have to be lined up. My other idea was to use <code>df.iterrows()</code> to go through each row of <code>df</code> and check if there is a corresponding row in <code>ToSubtract</code> but this seems very inefficient.</p> <p>What would be the best way to subtract one dataframe from another, matching several columns</p>
<python><pandas>
2025-06-17 15:41:46
2
8,806
Bijan
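For the pandas subtraction question above, a sketch of one merge-based way to do it without iterrows: left-merge on the grouping keys, subtract where a match exists, and mark the modified rows. Column names follow the question; the _sub suffix is only an intermediate column name.

```python
import pandas as pd

keys = ["School", "District", "Program", "Grade", "Month"]

merged = df.merge(ToSubtract, on=keys, how="left", suffixes=("", "_sub"))
matched = merged["Count_sub"].notna()

merged["Count"] = merged["Count"] - merged["Count_sub"].fillna(0)
merged["X"] = matched.map({True: "*", False: ""})   # flag modified rows

result = merged.drop(columns="Count_sub")
```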
79,669,262
794,329
How to associate a literal label with a type and extract it to define another type in Python?
<p>I’m trying to associate a string label with each class and then use those labels to define a <code>Literal</code> type for validation or type checking purposes. Here’s a simplified example of what I’m trying to do:</p> <pre class="lang-python prettyprint-override"><code>from typing import Literal class Apple: label = &quot;apple&quot; class Mango: label = &quot;mango&quot; # I'd like to define this type using the class labels above: type FruitName = Literal[Apple.label, Mango.label] </code></pre> <p>This doesn’t work because <code>Literal</code> expects constant string values at type-check time, but <code>Apple.label</code> and <code>Mango.label</code> are not treated as constants.</p> <p>Mypy error: <code>Parameter 1 of Literal[...] is invalid [valid-type]</code></p>
<python><python-typing>
2025-06-17 14:25:10
0
441
Danilo Horta
79,669,216
5,439,470
access denied on smbclient.register_session
<p>I am trying to connect from my Mac to an SMB server using this:</p> <pre><code>smbclient.register_session( &quot;smbserver.net&quot;, username=&quot;domain\\username&quot;, password=&quot;secret&quot;, port=445 ) </code></pre> <p>but I always get this error:</p> <pre><code>AccessDenied: Received unexpected status from the server: A process has requested access to an object but has not been granted those access rights. (3221225506) STATUS_ACCESS_DENIED: 0xc0000022 </code></pre> <p>To check that the authentication is working, I entered a wrong combination, which led to a <code>LogonFailure</code>, so that cannot be it. I also tried changing the username to <code>username@domain.net</code>, with the same <code>AccessDenied</code> error.</p> <p>Using Finder and a NiFi container running on my machine, I could establish a connection and load files.</p>
<python><python-3.x><smb>
2025-06-17 13:56:16
0
1,303
jan-seins
79,669,210
624,900
Python http library that supports trailers
<p>Do any Python http libraries support <em>both</em> request and response trailers?</p> <p>For example, some other languages have:</p> <ul> <li>libcurl: <a href="https://curl.se/libcurl/c/CURLOPT_TRAILERFUNCTION.html" rel="nofollow noreferrer">https://curl.se/libcurl/c/CURLOPT_TRAILERFUNCTION.html</a></li> <li>go: <a href="https://pkg.go.dev/net/http#Request" rel="nofollow noreferrer">https://pkg.go.dev/net/http#Request</a></li> </ul> <p>But I can't find any support at all for trailers in any of the popular Python http libraries. Does anything exist?</p>
<python><http>
2025-06-17 13:50:56
0
67,435
jterrace
79,669,095
16,527,170
sqlalchemy.exc.OperationalError: (2003, "Can't connect to MySQL server on '123@localhost' ([Errno -2] Name or service not known)")
<p>I am using python to Connect with DB.</p> <pre><code>from sqlalchemy import create_engine import pymysql db_password = &quot;abc@123&quot; db_user = &quot;test_user&quot; db_name = &quot;xyz&quot; db_host = &quot;127.0.0.1&quot; # or 127.0.0.1 # Create SQLAlchemy engine engine = create_engine(f'mysql+pymysql://{db_user}:{db_password}@{db_host}/{db_name}', pool_pre_ping=True, pool_recycle=300) </code></pre> <p>Error:</p> <pre><code>pymysql.err.OperationalError: (2003, &quot;Can't connect to MySQL server on '123@127.0.0.1' ([Errno -2] Name or service not kn own)&quot;) </code></pre> <p>Other Logs:</p> <pre><code>(lead_prediction) bitnami@my-instance-name:~/lead_prediction$ cat /etc/hosts 127.0.0.1 localhost (lead_prediction) bitnami@my-instance-name:~/lead_prediction$ netstat -tulpn | grep LISTEN (Not all processes could be identified, non-owned process info will not be shown, you would have to be root to see it all.) tcp 0 0 0.0.0.0:5355 0.0.0.0:* LISTEN - tcp 0 0 127.0.0.1:3306 0.0.0.0:* LISTEN - </code></pre> <p>Ans: Alternate way to make it work: Use urllib.parse mathod to handle special character in MySQL DB Password</p> <pre><code>from sqlalchemy import create_engine import pymysql import urllib.parse db_password = &quot;abc@123&quot; db_password = urllib.parse.quote_plus(db_password) # Parsing the password to avoid error: pwd: Jane@123, Error: '123@localhost' ([Errno -2] Name or service not known)&quot;) db_user = &quot;test_user&quot; db_name = &quot;xyz&quot; db_host = &quot;127.0.0.1&quot; # or 127.0.0.1 # Create SQLAlchemy engine engine = create_engine(f'mysql+pymysql://{db_user}:{db_password}@{db_host}/{db_name}', pool_pre_ping=True, pool_recycle=300) </code></pre>
<python><mysql><sqlalchemy><pymysql>
2025-06-17 12:46:31
0
1,077
Divyank
79,668,978
11,092,636
In the context of Key-Sharing Dictionaries, what does sys.getsizeof measure?
<p>If I understand correctly, <a href="https://peps.python.org/pep-0412/" rel="nofollow noreferrer">Key-Sharing dictionaries</a> make it so that if we have loads of instances of an object, we have a Shared Keys Table, and each instance then has a Value Array. What does <a href="https://docs.python.org/3/library/sys.html#sys.getsizeof" rel="nofollow noreferrer"><code>sys.getsizeof</code></a> measure, e.g.:</p> <pre><code>import sys class Property: def __init__(self, v0, v1, v2, v3, v4): self.a = v0 self.b = v1 self.c = v2 self.d = v3 self.e = v4 colors = Property('blue', 'orange', 'green', 'yellow', 'red') sys.getsizeof(vars(colors)) # 296 (Python 3.12.5) </code></pre> <p>What does this 296 refer to, does it refer to shared keys table + value array of the instance, or just the value array of the instance?</p> <p>If it will return the memory used by the value array, why is it that if I replace <code>blue</code> by <code>b*10000</code>, I get the same size? If it will return the memory used by the shared keys table, why is it that if I replace <code>self.b</code> by <code>self.bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb</code>, I get the same size? Is it because in both cases, I only get the size of the pointers to those strings and not the strings themselves? But then if I put an integer I should see a difference (unless they're boxed if that's a thing in Python?), and I don't?</p> <p>Also, why is it that this array is much bigger in Python 3.12.5 compared to Python 3.10.11 <strong>(104 in Python 3.10.11, 296 in Python 3.12.5)</strong>? Did the way <code>sys.getsizeof()</code> work change in between those two versions or did the implementation of dictionaries change in between those two versions?</p>
<python><cpython>
2025-06-17 11:28:55
1
720
FluidMechanics Potential Flows
79,668,663
6,930,340
How to create a heatmap from a tidy / long polars dataframe
<p>I need to create a heatmap on the basis of a tidy/long <code>pl.DataFrame</code>. Consider the following example, where I used <code>pandas</code> and <code>plotly</code> to create a heatmap.</p> <pre><code>import plotly.express as px import polars as pl tidy_df_pl = pl.DataFrame( { &quot;x&quot;: [10, 10, 10, 20, 20, 20, 30, 30, 30], &quot;y&quot;: [3, 4, 5, 3, 4, 5, 3, 4, 5], &quot;value&quot;: [5, 8, 2, 4, 10, 14, 10, 8, 9], } ) print(tidy_df_pl) shape: (9, 3) ┌─────┬─────┬───────┐ │ x ┆ y ┆ value │ │ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ i64 │ ╞═════╪═════╪═══════╡ │ 10 ┆ 3 ┆ 5 │ │ 10 ┆ 4 ┆ 8 │ │ 10 ┆ 5 ┆ 2 │ │ 20 ┆ 3 ┆ 4 │ │ 20 ┆ 4 ┆ 10 │ │ 20 ┆ 5 ┆ 14 │ │ 30 ┆ 3 ┆ 10 │ │ 30 ┆ 4 ┆ 8 │ │ 30 ┆ 5 ┆ 9 │ └─────┴─────┴───────┘ </code></pre> <p>Transforming to a wide <code>pd.DataFrame</code>:</p> <pre><code>pivot_df_pd = ( tidy_df_pl.pivot(index=&quot;x&quot;, on=&quot;y&quot;, values=&quot;value&quot;).to_pandas().set_index(&quot;x&quot;) ) print(pivot_df_pd) 3 4 5 x 10 5 8 2 20 4 10 14 30 10 8 9 </code></pre> <p>Creating the heatmap using <code>plotly</code>.</p> <pre><code>fig = px.imshow(pivot_df_pd) fig.show() </code></pre> <p><a href="https://i.sstatic.net/gYjRFTUI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gYjRFTUI.png" alt="enter image description here" /></a></p> <p>This all seems a bit cumbersome. I am looking for <code>polars</code>-only. How can I create this heatmap directly from <code>polars</code> without going through a third library?</p>
<python><dataframe><plotly><heatmap><python-polars>
2025-06-17 07:53:12
3
5,167
Andi
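For the Polars heatmap question above, a sketch that skips pandas entirely: px.imshow accepts a plain 2D array plus x/y label lists, so the pivoted Polars frame can be passed via to_numpy(). The pivot call is the same one used in the question; the column handling follows that example data.

```python
import plotly.express as px
import polars as pl

tidy_df_pl = pl.DataFrame(
    {
        "x": [10, 10, 10, 20, 20, 20, 30, 30, 30],
        "y": [3, 4, 5, 3, 4, 5, 3, 4, 5],
        "value": [5, 8, 2, 4, 10, 14, 10, 8, 9],
    }
)

pivot = tidy_df_pl.pivot(index="x", on="y", values="value").sort("x")
col_labels = [c for c in pivot.columns if c != "x"]   # the original y values as strings

fig = px.imshow(
    pivot.drop("x").to_numpy(),   # rows follow x, columns follow y
    x=col_labels,
    y=pivot["x"].to_list(),
)
fig.show()
```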
79,668,219
1,115,716
issues installing PyTorch in a new virtual environment
<p>I'm testing some models and have created virtual environments for them. One of them required <code>PyTorch</code> which I was able to install with no issues. However, for another one, I've made a new virtual environment and when installing <code>flash-attn</code>, it complains about missing <code>pytorch</code>, so I try installing it like so:</p> <pre><code>python3 -m pip install torch </code></pre> <p>Which looks like it succeeds:</p> <pre><code>Requirement already satisfied: torch in ./ATI/lib/python3.12/site-packages (2.7.1) Requirement already satisfied: filelock in ./ATI/lib/python3.12/site-packages (from torch) (3.18.0) Requirement already satisfied: typing-extensions&gt;=4.10.0 in ./ATI/lib/python3.12/site-packages (from torch) (4.14.0) Requirement already satisfied: setuptools in ./ATI/lib/python3.12/site-packages (from torch) (80.9.0) Requirement already satisfied: sympy&gt;=1.13.3 in ./ATI/lib/python3.12/site-packages (from torch) (1.14.0) Requirement already satisfied: networkx in ./ATI/lib/python3.12/site-packages (from torch) (3.5) Requirement already satisfied: jinja2 in ./ATI/lib/python3.12/site-packages (from torch) (3.1.6) Requirement already satisfied: fsspec in ./ATI/lib/python3.12/site-packages (from torch) (2025.5.1) Requirement already satisfied: nvidia-cuda-nvrtc-cu12==12.6.77 in ./ATI/lib/python3.12/site-packages (from torch) (12.6.77) Requirement already satisfied: nvidia-cuda-runtime-cu12==12.6.77 in ./ATI/lib/python3.12/site-packages (from torch) (12.6.77) Requirement already satisfied: nvidia-cuda-cupti-cu12==12.6.80 in ./ATI/lib/python3.12/site-packages (from torch) (12.6.80) Requirement already satisfied: nvidia-cudnn-cu12==9.5.1.17 in ./ATI/lib/python3.12/site-packages (from torch) (9.5.1.17) Requirement already satisfied: nvidia-cublas-cu12==12.6.4.1 in ./ATI/lib/python3.12/site-packages (from torch) (12.6.4.1) Requirement already satisfied: nvidia-cufft-cu12==11.3.0.4 in ./ATI/lib/python3.12/site-packages (from torch) (11.3.0.4) Requirement already satisfied: nvidia-curand-cu12==10.3.7.77 in ./ATI/lib/python3.12/site-packages (from torch) (10.3.7.77) Requirement already satisfied: nvidia-cusolver-cu12==11.7.1.2 in ./ATI/lib/python3.12/site-packages (from torch) (11.7.1.2) Requirement already satisfied: nvidia-cusparse-cu12==12.5.4.2 in ./ATI/lib/python3.12/site-packages (from torch) (12.5.4.2) Requirement already satisfied: nvidia-cusparselt-cu12==0.6.3 in ./ATI/lib/python3.12/site-packages (from torch) (0.6.3) Requirement already satisfied: nvidia-nccl-cu12==2.26.2 in ./ATI/lib/python3.12/site-packages (from torch) (2.26.2) Requirement already satisfied: nvidia-nvtx-cu12==12.6.77 in ./ATI/lib/python3.12/site-packages (from torch) (12.6.77) Requirement already satisfied: nvidia-nvjitlink-cu12==12.6.85 in ./ATI/lib/python3.12/site-packages (from torch) (12.6.85) Requirement already satisfied: nvidia-cufile-cu12==1.11.1.6 in ./ATI/lib/python3.12/site-packages (from torch) (1.11.1.6) Requirement already satisfied: triton==3.3.1 in ./ATI/lib/python3.12/site-packages (from torch) (3.3.1) Requirement already satisfied: mpmath&lt;1.4,&gt;=1.1.0 in ./ATI/lib/python3.12/site-packages (from sympy&gt;=1.13.3-&gt;torch) (1.3.0) Requirement already satisfied: MarkupSafe&gt;=2.0 in ./ATI/lib/python3.12/site-packages (from jinja2-&gt;torch) (3.0.2) </code></pre> <p>However, when trying to re-install <code>flash_attn</code>, it complains:</p> <pre><code>Collecting flash_attn Using cached flash_attn-2.8.0.post2.tar.gz (7.9 MB) Installing build dependencies ... 
done Getting requirements to build wheel ... error error: subprocess-exited-with-error × Getting requirements to build wheel did not run successfully. │ exit code: 1 ╰─&gt; [20 lines of output] Traceback (most recent call last): File &quot;/home/mickey/dev/ATI/ATI/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py&quot;, line 389, in &lt;module&gt; main() File &quot;/home/mickey/dev/ATI/ATI/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py&quot;, line 373, in main json_out[&quot;return_val&quot;] = hook(**hook_input[&quot;kwargs&quot;]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/home/mickey/dev/ATI/ATI/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py&quot;, line 143, in get_requires_for_build_wheel return hook(config_settings) ^^^^^^^^^^^^^^^^^^^^^ File &quot;/tmp/pip-build-env-nt9zjl19/overlay/lib/python3.12/site-packages/setuptools/build_meta.py&quot;, line 331, in get_requires_for_build_wheel return self._get_build_requires(config_settings, requirements=[]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/tmp/pip-build-env-nt9zjl19/overlay/lib/python3.12/site-packages/setuptools/build_meta.py&quot;, line 301, in _get_build_requires self.run_setup() File &quot;/tmp/pip-build-env-nt9zjl19/overlay/lib/python3.12/site-packages/setuptools/build_meta.py&quot;, line 512, in run_setup super().run_setup(setup_script=setup_script) File &quot;/tmp/pip-build-env-nt9zjl19/overlay/lib/python3.12/site-packages/setuptools/build_meta.py&quot;, line 317, in run_setup exec(code, locals()) File &quot;&lt;string&gt;&quot;, line 22, in &lt;module&gt; ModuleNotFoundError: No module named 'torch' [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: subprocess-exited-with-error × Getting requirements to build wheel did not run successfully. │ exit code: 1 ╰─&gt; See above for output. note: This error originates from a subprocess, and is likely not a problem with pip. </code></pre> <p>How do I fix this?</p>
<python><pytorch><flash-attn>
2025-06-16 20:45:57
0
1,842
easythrees
79,668,207
14,909,621
Why is contextlib._RedirectStream implemented as a reentrant context manager?
<p>While reading the code of the <a href="https://github.com/python/cpython/blob/f33a5e891a03df416dde7afa7e3bfb2ac800f5a4/Lib/contextlib.py#L393-L408" rel="nofollow noreferrer"><code>_RedirectStream</code></a> class from the <code>contextlib</code> module, I couldn't understand the purpose of defining <code>_old_targets</code> as a list:</p> <pre class="lang-py prettyprint-override"><code>class _RedirectStream(AbstractContextManager): _stream = None def __init__(self, new_target): self._new_target = new_target # We use a list of old targets to make this CM re-entrant self._old_targets = [] def __enter__(self): self._old_targets.append(getattr(sys, self._stream)) setattr(sys, self._stream, self._new_target) return self._new_target def __exit__(self, exctype, excinst, exctb): setattr(sys, self._stream, self._old_targets.pop()) </code></pre> <p>This class is used as the basis for redirecting standard streams <code>stdin</code>, <code>stdout</code> and <code>stderr</code> in the <code>sys</code> module, as in <a href="https://github.com/python/cpython/blob/f33a5e891a03df416dde7afa7e3bfb2ac800f5a4/Lib/contextlib.py#L411-L424" rel="nofollow noreferrer"><code>contextlib.redirect_stdout</code></a>. The comment states that the list is used to make the context manager <a href="https://docs.python.org/3/library/contextlib.html#reentrant-cms" rel="nofollow noreferrer">reentrant</a>. But what's the practical benefit of that in the case of redirecting input or output to a file? In what situation would this be needed?</p>
<python>
2025-06-16 20:29:21
1
7,606
Vitalizzare
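For the contextlib question above, a small example of what the stack of old targets buys: the same redirect_stdout instance can be entered while it is already active (for example in recursive code or nested helpers), and each __exit__ pops back exactly the stream that was active before the matching __enter__. With a single saved attribute instead of a list, the inner exit would clobber the saved original stream and the outer exit could never restore it.

```python
import io
from contextlib import redirect_stdout

buf = io.StringIO()
redirect = redirect_stdout(buf)   # one instance, entered twice

with redirect:
    print("outer")
    with redirect:                 # re-entered while already active
        print("inner")
    print("outer again")           # still redirected: the pop restored buf, not the console

print("captured:", buf.getvalue().splitlines())
```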
79,668,121
2,153,235
PyPlot's ylim and yticks change for no reason
<p>I am following some online code to add major yticks to a bar chart. However, I'm finding that the major yticks and the ylim changes for no reason, even though I <em>don't</em> add new major yticks:</p> <pre><code>import pandas as pd import matplotlib.pyplot as plt plt.close('all') ax = pd.Series([600,1e3,1e4],index=['A','B','C']).plot.bar() plt.yscale('log') # This causes ylim and major yticks to change if False: ax.set_yticks( ax.get_yticks(minor=False) ) </code></pre> <p>I am using:</p> <ul> <li>Spyder version: 6.0.5 (conda)</li> <li>Python version: 3.9.18 64-bit</li> <li>Qt version: 5.15.2</li> <li>PyQt5 version: 5.15.10</li> <li>Operating System: Windows-10-10.0.26100-SP0</li> </ul> <p><a href="https://i.sstatic.net/oTbelxHA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oTbelxHA.png" alt="enter image description here" /></a></p> <p><strong>Afternote 2025-06-16 15:28 ET:</strong> I found that I can force the yticks and ylim back by saving ylim (<code>ylim=plt.ylim()</code>) before calling <code>set_yticks</code>, then restoring it afterward (<code>plt.ylim(ylim)</code>), but why is this necessary at all? In other words, why does calling <code>set_yticks</code> run different heuristics for determining ylim and yticks?</p> <p><strong>ANNEX: Exploration of <code>sharey</code></strong></p> <p>A suggestion was made to set the <code>plt.subplots</code> argument <code>sharey</code> to <code>True</code>. This only makes sense if you have 2 subplots in the same figure <em>and</em> both figures have identical y-axes. <a href="https://www.reddit.com/r/dfpandas/comments/1lfg6mu" rel="nofollow noreferrer">Here</a> is an example where they don't occur in the same figure <em>and</em> the y-axes are not identical, yet the <code>yticks</code> of the 2nd graph is set based on the <em>log transformation</em> of the <code>yticks</code> from the 1st graph. I was curious to see whether <code>sharey=True</code> could somehow be made to work:</p> <pre><code>import numpy as np import pandas as pd import matplotlib.pyplot as plt from scipy.stats import weibull_min # Create 3 &quot;Bin&quot;'s of &quot;Linear&quot; data, 1000 points each bins=['Alpha','Brave','Charlie'] scales=[1,10,100] # 1 scaling of the Weibull curve per bin n_points = 1000 # Number of data points per bin shape=0.5 # Curve shape df = pd.concat([ pd.DataFrame({ 'Bin':[bins[i_bin]]*n_points , 'Linear':weibull_min(c=shape,scale=scales[i_bin]).rvs(size=n_points) }) for i_bin in range(3) ]) # Linear box plot, then yscaled as logarithmic plt.close('all') fig,(ax1,ax2) = plt.subplots(1,2,layout='constrained',sharey=True) df.boxplot('Linear',by='Bin',ax=ax1) plt.yscale('log') # Box plot of log_10 transformation of the data df['Log10'] = np.log10( df.Linear ) df.boxplot('Log10',by='Bin',ax=ax2) </code></pre> <p>This is random data so <em>exact box plots extremities may vary.</em></p> <p><a href="https://i.sstatic.net/6D8RvoBM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6D8RvoBM.png" alt="enter image description here" /></a></p> <p>So far, I haven't come up with a way to make it work. The boxes (if not the whiskesr) <a href="https://live.staticflickr.com/65535/54606886826_fce8e2cca4_c.jpg" rel="nofollow noreferrer">should look like this</a>. 
The problem is that the 2nd plot box and whiskers are calculated based on logarithmic'd data (out of necessity, as per <a href="https://www.reddit.com/r/dfpandas/comments/1lfg6mu" rel="nofollow noreferrer">here</a> and <a href="https://stackoverflow.com/questions/34608613">here</a>) and won't present correctly on a scale for un-logarithmic'd data. Hence, we certainly don't want <code>sharey=True</code>.</p> <p>Furthermore, in a more realistic usage, we want to plot the linear data and log transform it <em>only to harvest the</em> <code>yticks</code>. We then clear the axes and boxplot the log-transformed data to get properly calculated whiskers. So we don't necessarily have multiple subplots, and <code>sharey</code> is not applicable.</p>
<python><matplotlib><logarithm>
2025-06-16 19:16:13
1
1,265
user2153235
79,667,918
13,860,719
Fastest way to find the indices of two Numpy arrays that meet a condition
<p>Say I have two large numpy arrays <code>a1</code> and <code>a2</code> (each with 10000 numbers). I want to find the indices in each array that meet the condition of <code>f(x1, x2) &gt; 0</code>. To be clear, for each number in <code>a1</code> (or <code>a2</code>), if there's <strong>any</strong> number in <code>a2</code> (or <code>a1</code>) that meets the condition, then it is a valid number. For now I am using one loop over <code>a1</code> and only using numpy vectorization for calculations on <code>a2</code>:</p> <pre><code>import numpy as np a1 = np.random.rand(10000) a2 = np.random.rand(10000) def f(x1, x2): return np.exp(x2 - x1) / 1.2 - 1 valid_i1_set = set() valid_i2_set = set() for i1 in range(len(a1)): valid_i2 = np.nonzero(f(a1[i1], a2) &gt; 0)[0].tolist() valid_i2_set.update(valid_i2) if len(valid_i2) &gt; 0: valid_i1_set.add(i1) print(sorted(valid_i1_set)) print(sorted(valid_i2_set)) </code></pre> <p>I imagine there should be a faster way to do this without a for loop. Making a product array of 10000x10000 numbers also doesn't feel right. Any suggestions?</p>
<python><arrays><algorithm><numpy><performance>
2025-06-16 16:14:49
2
2,963
Shaun Han
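For the indices question above, a sketch that exploits the shape of this particular f: it increases in x2 and decreases in x1, so an a1 entry is valid exactly when it passes the test against the maximum of a2, and an a2 entry exactly when it passes against the minimum of a1. No pairwise 10000x10000 array is needed; note this shortcut only applies to monotone conditions like the example.

```python
import numpy as np

a1 = np.random.rand(10000)
a2 = np.random.rand(10000)

def f(x1, x2):
    return np.exp(x2 - x1) / 1.2 - 1

# f is monotone: increasing in x2, decreasing in x1.
valid_i1 = np.nonzero(f(a1, a2.max()) > 0)[0]
valid_i2 = np.nonzero(f(a1.min(), a2) > 0)[0]
```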
79,667,814
2,576,703
Is there a ReadSessionAsync or similar in BigQuery Storage Python API
<p>My (working) code roughly does the following:</p> <pre><code> read_session = bigquery_storage.ReadSession(...) request = bigquery_storage.CreateReadSessionRequest(...) session = client.create_read_session(request=request) iterator = [client.read_rows(stream.name).rows(session) for stream in session.streams][0] # and here I can nicely iterate over the rows... </code></pre> <p>However, I hate about the code that it is blocking.</p> <p>I discovered in the docs that a <code>BigQueryReadAsyncClient</code> has a <code>async create_read_session</code> as a drop-in replacement of the same-named call in the snippet above (<a href="https://cloud.google.com/python/docs/reference/bigquerystorage/latest/google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadAsyncClient#google_cloud_bigquery_storage_v1_services_big_query_read_BigQueryReadAsyncClient_create_read_session" rel="nofollow noreferrer">link to docs</a>). Unfortunately, according to the docs, this still returns a regular ReadSession without any async methods.</p> <p>Am I overlooking something? Is there a nice, async way to lazily iterate over the rows of a bigquery table / query result with some async buffer? Seems like a reasonably standard use case thus I wonder what I am missing.</p>
<python><google-bigquery><google-bigquery-storage-api>
2025-06-16 14:46:37
1
563
miwe
79,667,798
4,000,995
Keras AUC metric breaking on a multi-class problem
<p>I'm attempting to train a multi-class image classification model based on standardised 128x128 images. When I use &quot;accuracy&quot; alone as a metric, I have no issues. When I introduce AUC, I can't train the model.</p> <pre><code> model = tf.keras.Sequential([ tf.keras.layers.Rescaling(1./255), tf.keras.layers.Conv2D(32, 3, activation='relu'), tf.keras.layers.MaxPooling2D(), tf.keras.layers.Dropout(dropout_rate), tf.keras.layers.Conv2D(32, 3, activation='relu'), tf.keras.layers.MaxPooling2D(), tf.keras.layers.Dropout(dropout_rate), tf.keras.layers.Conv2D(32, 3, activation='relu'), tf.keras.layers.MaxPooling2D(), tf.keras.layers.Dropout(dropout_rate), tf.keras.layers.Flatten(), tf.keras.layers.Dense(num_classes, activation=&quot;softmax&quot;) ]) model.compile( optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy',keras.metrics.AUC(curve=&quot;ROC&quot;,multi_label=False,from_logits=True)]) </code></pre> <p>This is the exception thrown by Keras:</p> <pre class="lang-none prettyprint-override"><code>&gt; File &quot;C:\Users\ikhakoo\AppData\Local\Programs\Python\Python312\Lib\site-packages\keras\src\metrics\metrics_utils.py&quot;, line 272, in _update_confusion_matrix_variables_optimized &gt; File &quot;C:\Users\ikhakoo\AppData\Local\Programs\Python\Python312\Lib\site-packages\keras\src\ops\math.py&quot;, line 86, in segment_sum &gt; File &quot;C:\Users\ikhakoo\AppData\Local\Programs\Python\Python312\Lib\site-packages\keras\src\backend\tensorflow\math.py&quot;, line 23, in segment_sum &gt;data.shape = [32] does not start with segment_ids.shape = [128] [[{{node UnsortedSegmentSum}}]] [Op:__inference_multi_step_on_iterator_130104] </code></pre> <p>I am not sure what the AUC metric expects in terms of the shapes - I notice that the first number corresponds to the batch size, but both numbers change when I vary the batch size. What am I doing wrong?</p>
<python><tensorflow><keras><deep-learning><roc>
2025-06-16 14:35:52
1
1,019
Imran Khakoo
79,667,213
16,891,669
Join vs Lambda in pyspark
<p>Suppose I have the following dataframe of articles.</p> <pre class="lang-py prettyprint-override"><code>text_data = [ (1, &quot;I hav a dreem that one day&quot;), (2, &quot;Ths is a test of the emergncy broadcast systm&quot;), (3, &quot;Speling errors are commn in som text&quot;), ] text_df = spark.createDataFrame(text_data, &quot;id: int, article: string&quot;) </code></pre> <p>And a dataframe of incorrect-to-correct mappings.</p> <pre class="lang-py prettyprint-override"><code>dict_data = [ (&quot;hav&quot;, &quot;have&quot;), (&quot;dreem&quot;, &quot;dream&quot;), (&quot;Ths&quot;, &quot;This&quot;), (&quot;emergncy&quot;, &quot;emergency&quot;), (&quot;systm&quot;, &quot;system&quot;), (&quot;Speling&quot;, &quot;Spelling&quot;), (&quot;commn&quot;, &quot;common&quot;), (&quot;som&quot;, &quot;some&quot;), ] dict_df = spark.createDataFrame(dict_data, &quot;misspelled: string, correct: string&quot;) </code></pre> <p>I was trying to find the number of incorrect words in the articles given that all the incorrect words have been provided in the mapping. I have done this in two ways, one is using left join and group by, while other is using higher order functions. This is the code.</p> <ol> <li>Using join and group by to find the number of incorrect and correct words</li> </ol> <pre class="lang-py prettyprint-override"><code>( text_df .select( &quot;id&quot; , F.explode(F.split(F.col(&quot;article&quot;), &quot; &quot;)).alias(&quot;word&quot;) ) .join( dict_df , F.col('word') == dict_df['misspelled'] , 'left' ) .select( &quot;id&quot; , &quot;word&quot; , dict_df['correct'] ) .groupBy(&quot;id&quot;) .agg( F.count(F.col('word')).alias('Total') , F.count(F.when(F.col('correct').isNull(), 'isCorrect')).alias('Correct') , F.count(F.col('correct')).alias('Incorrect') ) .show() ) ''' Output +---+-----+-------+---------+ | id|Total|Correct|Incorrect| +---+-----+-------+---------+ | 1| 7| 5| 2| | 2| 9| 6| 3| | 3| 7| 4| 3| +---+-----+-------+---------+ ''' </code></pre> <ol start="2"> <li>Using Higher order functions. 
Here <strong>size</strong> of right df is <strong>1</strong>.</li> </ol> <pre class="lang-py prettyprint-override"><code>dict_data_2 = [[{ &quot;hav&quot;: &quot;have&quot;, &quot;dreem&quot;: &quot;dream&quot;, &quot;Ths&quot;: &quot;This&quot;, &quot;emergncy&quot;: &quot;emergency&quot;, &quot;systm&quot;: &quot;system&quot;, &quot;Speling&quot;: &quot;Spelling&quot;, &quot;commn&quot;: &quot;common&quot;, &quot;som&quot;: &quot;some&quot; }]] dict_df_2 = spark.createDataFrame(dict_data_2, &quot;incorrect_to_correct_mapping: map&lt;string, string&gt;&quot;) text_df = spark.createDataFrame(text_data, &quot;id: int, article: string&quot;) ( text_df .join( dict_df_2 , how = 'cross' ) .withColumns({ 'words': F.split(F.col('article'), ' ') , 'map_keys': F.map_keys('incorrect_to_correct_mapping') , 'incorrect': F.filter( F.col('words') , lambda word: F.array_contains(F.col('map_keys'), word) ) , 'correct': F.filter( F.col('words') , lambda word: ~F.array_contains(F.col('map_keys'), word) ) }) .select( &quot;id&quot; , F.array_size(&quot;words&quot;).alias(&quot;Total&quot;) , F.array_size(&quot;incorrect&quot;).alias(&quot;Incorrect&quot;) , F.array_size(&quot;correct&quot;).alias(&quot;Correct&quot;) ) .show() ) ''' Output +---+-----+---------+-------+ | id|Total|Incorrect|Correct| +---+-----+---------+-------+ | 1| 7| 2| 5| | 2| 9| 3| 6| | 3| 7| 3| 4| +---+-----+---------+-------+ ''' </code></pre> <p>I have two questions -</p> <ol> <li>Which one will be faster than the other</li> <li>Which one should be preferred when the size of either article or mapping increases? (My guess is to use first one because if mapping size increases then it will be difficult to hold the entire map in every row)</li> </ol>
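<p>As a point of comparison, here is a rough sketch (assuming the dataframes above are in scope) of the first approach with an explicit broadcast hint, which ships the small mapping to every executor once instead of materialising the whole map inside each row:</p> <pre class="lang-py prettyprint-override"><code>from pyspark.sql import functions as F

counts = (
    text_df
    .select('id', F.explode(F.split('article', ' ')).alias('word'))
    .join(F.broadcast(dict_df), F.col('word') == F.col('misspelled'), 'left')
    .groupBy('id')
    .agg(
        F.count('word').alias('Total'),
        F.count('misspelled').alias('Incorrect'),  # non-null only where a mapping matched
    )
)
counts.show()
</code></pre>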
<python><pyspark>
2025-06-16 07:42:39
1
597
Dhruv
79,667,196
633,961
Kubernetes Leader Election with Lease
<p>I created a simple example for Leader Election on Kubernetes:</p> <p><a href="https://github.com/syself/pykubeleader" rel="nofollow noreferrer">syself/pykubeleader: Simpe Python Application which uses Kubernetes Leader Election</a></p> <p>But it uses the old ConfigMap-based approach:</p> <pre class="lang-py prettyprint-override"><code>from kubernetes.leaderelection.resourcelock.configmaplock import ConfigMapLock </code></pre> <p>How do I use the new <a href="https://kubernetes.io/docs/concepts/architecture/leases/" rel="nofollow noreferrer">Lease</a>-based approach with Python?</p> <p>Background: Up to now I have written Kubernetes controllers in Go (which is the best language for that task). But for one special case, I am thinking about writing a controller in Python.</p> <p>In Go it is very easy. You create a Deployment with several replicas. Controller-Runtime code ensures that only one is the leader.</p> <p>You just need to set <a href="https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/manager#Options.LeaderElection" rel="nofollow noreferrer">LeaderElection (bool), and LeaderElectionID</a>.</p>
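<p>For illustration, here is a minimal, hedged sketch of managing a <code>Lease</code> object directly through the official client's coordination API (names and namespaces below are placeholders, and this is not a drop-in replacement for controller-runtime's election loop):</p> <pre class="lang-py prettyprint-override"><code>from kubernetes import client, config

config.load_incluster_config()  # or config.load_kube_config() outside the cluster
coord = client.CoordinationV1Api()

lease = client.V1Lease(
    metadata=client.V1ObjectMeta(name='my-controller', namespace='default'),
    spec=client.V1LeaseSpec(
        holder_identity='pod-abc',      # typically the pod name
        lease_duration_seconds=15,
    ),
)
coord.create_namespaced_lease(namespace='default', body=lease)

# Each candidate would periodically read the Lease, inspect holder_identity and
# renew_time, and call replace_namespaced_lease() to renew or take over.
</code></pre>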
<python><kubernetes><leader-election>
2025-06-16 07:25:55
1
27,605
guettli
79,667,071
219,153
Why Numpy fabs is much slower than abs?
<p>This Python 3.12.7 script with Numpy 2.2.4:</p> <pre><code>import numpy as np, timeit as ti a = np.random.rand(1000).astype(np.float32) print(f'Minimum, median and maximum execution time in us:') for fun in ('np.fabs(a)', 'np.abs(a)'): t = 10**6 * np.array(ti.repeat(stmt=fun, setup=fun, globals=globals(), number=1, repeat=999)) print(f'{fun:20} {np.amin(t):8,.3f} {np.median(t):8,.3f} {np.amax(t):8,.3f}') </code></pre> <p>produces these results on AMD Ryzen 7 3800X:</p> <pre><code>Minimum, median and maximum execution time in us: np.fabs(a) 1.813 1.843 4.929 np.abs(a) 0.781 0.811 1.463 </code></pre> <p>indicating that <code>np.fabs()</code> is more than 2x slower than <code>np.abs()</code>, despite the latter having more functionality. What is the reason?</p>
<python><numpy><performance>
2025-06-16 05:04:09
1
8,585
Paul Jurczak
79,666,955
19,459,262
How to access code in different files inside the main app?
<p>I have a rather large <code>app.py</code> file, so I'd like to take a nav panel out and store it in a separate file. I'm not sure how to access code from a different file and include it in the main app.</p> <p>The <code>app.py</code> file:</p> <pre><code>import page from shiny import reactive, render, req, ui from shiny.express import input, ui with ui.navset_hidden(id=&quot;selected_navset_tab&quot;): # Homepage with ui.nav_panel(&quot;Welcome&quot;, value=&quot;page_home&quot;): with ui.card(): # Variable calculated ui.input_selectize( &quot;filler&quot;, &quot;Filler&quot;, [&quot;list&quot;, &quot;of&quot;, &quot;items&quot;, &quot;here&quot;], multiple=False ) </code></pre> <p>The other file, <code>page.py</code>:</p> <pre><code>from shiny import reactive, render, req, ui from shiny.express import input, ui with ui.nav_panel(&quot;Page I want to put in the main app&quot;, value=&quot;other_page&quot;): @render.express def filler_text(): &quot;Filler text&quot; </code></pre> <p>How can I import the <code>other_page</code> nav panel to show as part of the navset_tab in the main <code>app.py</code> file, without actually putting it in the code?</p>
<python><py-shiny>
2025-06-16 00:33:34
2
784
Redz
79,666,944
1,483,390
How to apply a high pass filter in GIMP using a python plug-in? And gaussian blur?
<p>How do I apply a high-pass filter in a Python plug-in for GIMP? There is no high-pass filter in the procedure browser, and gegl:high-pass does nothing.</p> <p>Similarly, how do I apply a Gaussian blur?</p>
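<p>As a starting point, a hedged sketch for GIMP 2.10's Python-Fu (it assumes <code>image</code> and <code>drawable</code> are already available inside the plug-in): the Gaussian blur is exposed in the PDB as <code>plug-in-gauss</code>, and a high-pass can be approximated by subtracting a blurred copy of the layer from the original (for example via a grain-extract layer mode), since there is no dedicated high-pass procedure in the PDB.</p> <pre class="lang-py prettyprint-override"><code>from gimpfu import *

# Gaussian blur: 20 px horizontally and vertically, method 0 (IIR)
pdb.plug_in_gauss(image, drawable, 20.0, 20.0, 0)
</code></pre>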
<python><gimp><gaussianblur><highpass-filter>
2025-06-15 23:56:57
1
2,681
Luis A. Florit
79,666,115
9,353,682
How to select packaging files in pyproject.toml?
<p>I am trying to build a python package. I want the build package to contain only the necessary files.</p> <p>I make the build with following command: <code>python -m build</code></p> <p>My package files structure looks more or less like this:</p> <pre class="lang-none prettyprint-override"><code>project_root_directory ├── .gitignore ├── .github ├── pyproject.toml ├── README.rst ├── LICENSE └── mypkg ├── __init__.py └── file.py </code></pre> <p>Packaging essentials from my <code>pyproject.toml</code> file:</p> <pre class="lang-toml prettyprint-override"><code>[build-system] requires = [ &quot;setuptools &gt;= 65.5.1&quot;, &quot;setuptools-scm&quot;, &quot;wheel&quot; ] build-backend = &quot;setuptools.build_meta&quot; [tool.setuptools.package-data] &quot;*&quot; = [ &quot;README.rst&quot;, &quot;LICENSE&quot; ] [tool.setuptools.packages.find] include = [ &quot;mypkg.*&quot; ] </code></pre> <p>The problem is that both <code>.gitignore</code> and <code>.github</code> are part of <code>SOURCES.txt</code> and therefore are in <code>dist</code> files (e.g. when I unpack <code>mypkg-0.0.0.tar.gz</code> it contains <code>.github</code> directory and <code>.gitignore</code> files).</p> <p><code>SOURCES.txt</code> entries:</p> <pre class="lang-none prettyprint-override"><code>.gitignore LICENSE README.rst pyproject.toml .github/dependabot.yml .github/pull_request_template.md .github/ISSUE_TEMPLATE/99_any.md .github/workflows/testing.yml mypkg.egg-info/PKG-INFO mypkg.egg-info/SOURCES.txt mypkg.egg-info/dependency_links.txt mypkg.egg-info/requires.txt mypkg.egg-info/top_level.txt mypkg/__init__.py mypkg/file.py </code></pre> <p>I want my <code>SOURCES.txt</code> to contain following entries after build command:</p> <pre class="lang-none prettyprint-override"><code>LICENSE README.rst mypkg.egg-info/PKG-INFO mypkg.egg-info/SOURCES.txt mypkg.egg-info/dependency_links.txt mypkg.egg-info/requires.txt mypkg.egg-info/top_level.txt mypkg/__init__.py mypkg/file.py </code></pre> <p>and have my <code>dist</code> files to be build accordingly.</p>
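<p>For what it's worth, with setuptools the sdist file list is normally trimmed with a <code>MANIFEST.in</code> next to <code>pyproject.toml</code> (and note that <code>setuptools-scm</code>, once listed in <code>requires</code>, pulls every git-tracked file into the sdist, which is why <code>.github</code> and <code>.gitignore</code> show up). A sketch of the exclusions:</p> <pre class="lang-none prettyprint-override"><code># MANIFEST.in
exclude .gitignore
prune .github
</code></pre>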
<python><setuptools><pyproject.toml>
2025-06-14 19:52:15
1
722
Maciek
79,666,105
444,578
Google API Call: Request had insufficient authentication scopes
<p>I have a python script that I run manually to connect to my photo library and download images. This was working for a time, and then it stopped working. Nothing changed, just time. I then let AI try to fix it and the code has changed a lot since then. From what I can tell, this is still sound... but I get the error:</p> <p><code>googleapiclient.errors.HttpError: &lt;HttpError 403 when requesting https://photoslibrary.googleapis.com/v1/albums?fields=albums%28id%2Ctitle%29&amp;pageSize=10&amp;alt=json returned &quot;Request had insufficient authentication scopes.&quot;. Details: &quot;Request had insufficient authentication scopes.&quot;&gt;</code></p> <p><em>Note: I'm not storing tokens locally, I ripped that out to force me to get a token each time as I troubleshoot this.</em></p> <p><strong>I just created a new project in GCP</strong>, enabled Photos API, set up consent screen, downloaded the credentials.json, and still getting this bloody error.</p> <p><a href="https://i.sstatic.net/Knrqx01G.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Knrqx01G.png" alt="Google photos library API is enabled" /></a></p> <p><a href="https://i.sstatic.net/BooD3rzu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BooD3rzu.png" alt="scoped to photoslibrary readonly" /></a></p> <h3>Source code</h3> <pre class="lang-py prettyprint-override"><code>from typing import List import os from google.oauth2.credentials import Credentials from google_auth_oauthlib.flow import InstalledAppFlow from google.auth.transport.requests import Request from google.auth.transport.requests import AuthorizedSession from googleapiclient.discovery import build import requests from models.comic import Comic import logging import json logger = logging.getLogger(__name__) class GooglePhotosIngester: &quot;&quot;&quot;Handles fetching and processing images from Google Photos&quot;&quot;&quot; SCOPES = ['https://www.googleapis.com/auth/photoslibrary.readonly'] def __init__(self): # Get path to credentials directory relative to project root self.root_dir = os.path.dirname(os.path.dirname(__file__)) self.credentials_dir = os.path.join(self.root_dir, 'credentials') self.credentials_path = os.path.join(self.credentials_dir, 'google_photos_credentials.json') self.credentials = None self.service = None # Verify credentials file exists and is readable if not os.path.exists(self.credentials_path): raise FileNotFoundError(f&quot;Credentials file not found at {self.credentials_path}&quot;) try: with open(self.credentials_path, 'r') as f: creds_data = json.load(f) logger.info(f&quot;Successfully loaded credentials file. 
Client ID: {creds_data.get('installed', {}).get('client_id', 'Not found')}&quot;) except Exception as e: logger.error(f&quot;Error reading credentials file: {str(e)}&quot;) raise def authenticate(self) -&gt; None: &quot;&quot;&quot;Handles Google Photos authentication with fresh credentials each time&quot;&quot;&quot; try: logger.info(&quot;Starting authentication flow...&quot;) flow = InstalledAppFlow.from_client_secrets_file( self.credentials_path, self.SCOPES ) logger.info(&quot;Created InstalledAppFlow with scopes: %s&quot;, self.SCOPES) self.credentials = flow.run_local_server( port=0, prompt='consent', include_granted_scopes=False ) logger.info(f&quot;Token: {self.credentials.token}&quot;) logger.info(f&quot;Refresh token: {self.credentials.refresh_token}&quot;) logger.info(f&quot;Expiry: {self.credentials.expiry}&quot;) logger.info(f&quot;Scopes: {self.credentials.scopes}&quot;) try: self.credentials.refresh(Request()) except Exception as e: logger.error(f&quot;Refresh failed: {e}&quot;) # Dump the user email for clarity authed_session = AuthorizedSession(self.credentials) userinfo = authed_session.get('https://www.googleapis.com/oauth2/v1/userinfo?alt=json') logger.info(f&quot;Authenticated as: {userinfo.json()}&quot;) logger.info(&quot;Successfully obtained credentials&quot;) logger.info(&quot;Credential scopes: %s&quot;, self.credentials.scopes) logger.info(&quot;Credential has refresh token: %s&quot;, bool(self.credentials.refresh_token)) # Initialize the Photos Library service with the correct discovery URL logger.info(&quot;Building Photos Library service...&quot;) self.service = build( 'photoslibrary', 'v1', credentials=self.credentials, discoveryServiceUrl='https://photoslibrary.googleapis.com/$discovery/rest?version=v1' ) logger.info(&quot;Successfully built Photos Library service&quot;) # Test the connection by listing albums logger.info(&quot;Testing connection by listing albums...&quot;) albums = self.service.albums().list( fields='albums(id,title)', pageSize=10 ).execute() for album in albums.get('albums', []): logger.info(f&quot;Album title: {album['title']}, ID: {album['id']}&quot;) except Exception as e: logger.error(f&quot;Error during authentication: {str(e)}&quot;) if hasattr(e, 'resp'): logger.error(f&quot;Response status: {e.resp.status}&quot;) logger.error(f&quot;Response content: {e.content}&quot;) raise </code></pre> <h3>Output logs</h3> <pre><code>INFO:ingestion.google_photos:Starting authentication flow... 
INFO:ingestion.google_photos:Created InstalledAppFlow with scopes: ['https://www.googleapis.com/auth/photoslibrary.readonly'] Please visit this URL to authorize this application: https://accounts.google.com/o/oauth2/auth?response_type=code&amp;client_id=xxx&amp;redirect_uri=http%3A%2F%2Flocalhost%3A54357%2F&amp;scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fphotoslibrary.readonly&amp;state=xxx&amp;prompt=consent&amp;access_type=offline INFO:google_auth_oauthlib.flow:&quot;GET /?state=MtQFaaKY9pwXXGMrvjQO5QLf1qigbV&amp;code=4/0AUJR-x5DwOJVymCBhrL-aAwD-5OrrR1Ju9yhYDfIqyo7TgzxOh4RYnfM0leVMk2GDo5Bhg&amp;scope=https://www.googleapis.com/auth/photoslibrary.readonly HTTP/1.1&quot; 200 65 INFO:ingestion.google_photos:Token: ya29.xxx5g0175 INFO:ingestion.google_photos:Refresh token: 1//0517zxxx8w INFO:ingestion.google_photos:Expiry: 2025-06-14 20:19:21.889031 INFO:ingestion.google_photos:Scopes: ['https://www.googleapis.com/auth/photoslibrary.readonly'] INFO:ingestion.google_photos:Authenticated as: {'error': {'code': 401, 'message': 'Request is missing required authentication credential. Expected OAuth 2 access token, login cookie or other valid authentication credential. See https://developers.google.com/identity/sign-in/web/devconsole-project.', 'status': 'UNAUTHENTICATED'}} INFO:ingestion.google_photos:Successfully obtained credentials INFO:ingestion.google_photos:Credential scopes: ['https://www.googleapis.com/auth/photoslibrary.readonly'] INFO:ingestion.google_photos:Credential has refresh token: True INFO:ingestion.google_photos:Building Photos Library service... INFO:googleapiclient.discovery_cache:file_cache is only supported with oauth2client&lt;4.0.0 INFO:ingestion.google_photos:Successfully built Photos Library service INFO:ingestion.google_photos:Testing connection by listing albums... WARNING:googleapiclient.http:Encountered 403 Forbidden with reason &quot;PERMISSION_DENIED&quot; ERROR:ingestion.google_photos:Error during authentication: &lt;HttpError 403 when requesting https://photoslibrary.googleapis.com/v1/albums?fields=albums%28id%2Ctitle%29&amp;pageSize=10&amp;alt=json returned &quot;Request had insufficient authentication scopes.&quot;. 
Details: &quot;Request had insufficient authentication scopes.&quot;&gt; ERROR:ingestion.google_photos:Response status: 403 ERROR:ingestion.google_photos:Response content: b'{\n &quot;error&quot;: {\n &quot;code&quot;: 403,\n &quot;message&quot;: &quot;Request had insufficient authentication scopes.&quot;,\n &quot;status&quot;: &quot;PERMISSION_DENIED&quot;\n }\n}\n' Traceback (most recent call last): File &quot;/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/runpy.py&quot;, line 197, in _run_module_as_main return _run_code(code, main_globals, None, File &quot;/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/runpy.py&quot;, line 87, in _run_code exec(code, run_globals) File &quot;/Users/davidlozzi/.cursor/extensions/ms-python.debugpy-2025.8.0-darwin-arm64/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/__main__.py&quot;, line 71, in &lt;module&gt; cli.main() File &quot;/Users/davidlozzi/.cursor/extensions/ms-python.debugpy-2025.8.0-darwin-arm64/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py&quot;, line 501, in main run() File &quot;/Users/davidlozzi/.cursor/extensions/ms-python.debugpy-2025.8.0-darwin-arm64/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py&quot;, line 384, in run_module run_module_as_main(options.target, alter_argv=True) File &quot;/Users/davidlozzi/.cursor/extensions/ms-python.debugpy-2025.8.0-darwin-arm64/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py&quot;, line 228, in _run_module_as_main return _run_code(code, main_globals, None, &quot;__main__&quot;, mod_spec) File &quot;/Users/davidlozzi/.cursor/extensions/ms-python.debugpy-2025.8.0-darwin-arm64/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py&quot;, line 118, in _run_code exec(code, run_globals) File &quot;/Users/davidlozzi/git/sw-panels-data/ingestion/pipeline.py&quot;, line 387, in &lt;module&gt; main() File &quot;/Users/davidlozzi/git/sw-panels-data/ingestion/pipeline.py&quot;, line 384, in main pipeline.ingest_album(album_id, full_ingestion=args.full, incomplete_only=args.incomplete) File &quot;/Users/davidlozzi/git/sw-panels-data/ingestion/pipeline.py&quot;, line 308, in ingest_album all_comics = self.photos_ingester.fetch_album_images(album_id) File &quot;/Users/davidlozzi/git/sw-panels-data/ingestion/google_photos.py&quot;, line 113, in fetch_album_images self.authenticate() File &quot;/Users/davidlozzi/git/sw-panels-data/ingestion/google_photos.py&quot;, line 93, in authenticate albums = self.service.albums().list( File &quot;/Users/davidlozzi/git/sw-panels-data/.venv/lib/python3.9/site-packages/googleapiclient/_helpers.py&quot;, line 130, in positional_wrapper return wrapped(*args, **kwargs) File &quot;/Users/davidlozzi/git/sw-panels-data/.venv/lib/python3.9/site-packages/googleapiclient/http.py&quot;, line 938, in execute raise HttpError(resp, content, uri=self.uri) googleapiclient.errors.HttpError: &lt;HttpError 403 when requesting https://photoslibrary.googleapis.com/v1/albums?fields=albums%28id%2Ctitle%29&amp;pageSize=10&amp;alt=json returned &quot;Request had insufficient authentication scopes.&quot;. Details: &quot;Request had insufficient authentication scopes.&quot;&gt; </code></pre>
<python><google-cloud-platform><google-photos-api>
2025-06-14 19:31:06
1
16,001
David Lozzi
79,666,077
4,451,315
Cumulative count per group in PyArrow
<p>Say I have</p> <pre class="lang-py prettyprint-override"><code>data = {'a': [1,1,2], 'b': [4,5,6]} </code></pre> <p>and I'd like to get a cumulative count (1-indexed) per group.</p> <p>In pandas, I can do:</p> <pre><code>import pandas as pd pd.DataFrame(data).groupby('a').cumcount()+1 </code></pre> <p>How can I do this in PyArrow, starting from <code>pa.table(data)</code>, without doing any conversion to pandas?</p> <p>I'm OK with using NumPy functions (e.g. <code>np.arange</code>) to generate the new column, but I can't convert the starting pyarrow.Table to numpy (nor to pandas)</p>
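<p>One possible sketch (it leans on NumPy for the arithmetic and converts the key and index arrays to NumPy, which may or may not be acceptable here, and it assumes <code>sort_indices</code> keeps ties in their original order so the result lines up with pandas' <code>cumcount</code>): sort by the group key, compute the within-group position in sorted order, then scatter the result back to the original row order.</p> <pre class="lang-py prettyprint-override"><code>import numpy as np
import pyarrow as pa
import pyarrow.compute as pc

tbl = pa.table(data)
n = len(tbl)

order = np.asarray(pc.sort_indices(tbl['a']))            # sort by the group key
sorted_keys = pc.take(tbl['a'], order).to_numpy()

starts = np.ones(n, dtype=bool)                          # True where a new group begins
starts[1:] = sorted_keys[1:] != sorted_keys[:-1]
group_start = np.maximum.accumulate(np.where(starts, np.arange(n), 0))
cum_sorted = np.arange(n) - group_start + 1              # 1-indexed cumcount in sorted order

cumcount = np.empty(n, dtype=np.int64)
cumcount[order] = cum_sorted                             # scatter back to original order
result = tbl.append_column('cumcount', pa.array(cumcount))
</code></pre>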
<python><pyarrow>
2025-06-14 18:58:13
2
11,062
ignoring_gravity
79,665,976
459,745
Create project with specific (downgraded) Python version
<p>At work, I am running Python 3.9.21. At home, I am running Python 3.13.2 on Linux, but I also used <code>uv</code> to install Python 3.9.21 to be in sync with work.</p> <p>On my home machine, I created a new project:</p> <pre class="lang-bash prettyprint-override"><code>uv init --package --python &quot;python==3.9.21&quot; sandbox cd sandbox uv sync </code></pre> <p>At this point, I got the following error:</p> <pre><code>Using CPython 3.13.2 error: The requested interpreter resolved to Python 3.13.2, which is incompatible with the project's Python requirement: `==3.9.21` </code></pre> <p>How can I create a project with this specific version (3.9.21) of Python?</p>
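<p>A hedged sketch of one thing worth trying (assuming uv-managed interpreters are allowed on the machine): install the managed 3.9.21 and pin it for the project so that <code>uv sync</code> stops resolving to the system 3.13.2.</p> <pre class="lang-bash prettyprint-override"><code>uv python install 3.9.21   # fetch a uv-managed interpreter
cd sandbox
uv python pin 3.9.21       # writes .python-version for this project
uv sync
</code></pre>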
<python><uv>
2025-06-14 16:17:39
2
41,381
Hai Vu
79,665,724
12,439,683
How to pass any object as a classmethod's first argument - circumvent injection of class argument
<p>Passing a class, or a totally different object, as the first argument to a method is easy:</p> <pre class="lang-py prettyprint-override"><code>class Foo: def method(self): ... Foo.method(object()) # pass anything to self </code></pre> <p>I wonder, is this possible with <code>classmethod</code>s as well? I assume it is, but how can it be done?</p> <pre class="lang-py prettyprint-override"><code>class Foo: @classmethod def cls_method(cls): ... Foo.cls_method # how to pass cls=object </code></pre> <hr /> <p>Related: <a href="https://stackoverflow.com/q/28237955/12439683">hybrid / class_or_instance_method</a> decorator / descriptor, but that is not a setting I am interested in.</p>
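<p>One possible sketch: the raw function underneath the <code>classmethod</code> descriptor is reachable via <code>__func__</code>, which bypasses the automatic binding of <code>cls</code>:</p> <pre class="lang-py prettyprint-override"><code>class Foo:
    @classmethod
    def cls_method(cls):
        return cls

raw = Foo.__dict__['cls_method'].__func__  # the undecorated function
print(raw(object()))                       # first argument is whatever you pass
</code></pre>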
<python><methods><class-method><python-descriptors>
2025-06-14 09:30:05
1
5,101
Daraan
79,665,669
198,480
Using pattern matching with a class that inherits from str in Python 3.10
<p>In a parser library I maintain, I have some classes that inherit from <code>str</code> to manage parsed strings and parsed symbols. This has been working well for a long time, but with Python 3.10, someone requested being able to use <code>match</code> and <code>case</code> on these classes. I have constructed an example script that shows how this is failing:</p> <pre><code>class Binge(str): def __eq__(self, other): return self.__class__ == other.__class__ and str.__eq__(self, other) def __ne__(self, other): return not self == other # These two asserts are important for how the class is used assert Binge('a') == Binge('a') assert Binge('a') != 'a' # What does it take to then make this work? matched = False match Binge(&quot;asdf&quot;): case Binge(&quot;asdf&quot;): matched = True assert matched </code></pre> <p>If I add:</p> <pre><code>print(self.__class__, other.__class__) </code></pre> <p>in the <code>__eq__</code> function, I see this:</p> <pre><code>&lt;class '__main__.Binge'&gt; &lt;class 'str'&gt; </code></pre>
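<p>One possible workaround (not necessarily the idiomatic fix) is a class pattern with a capture name and a guard, so the comparison goes through <code>Binge.__eq__</code> with <code>Binge</code> instances on both sides:</p> <pre class="lang-py prettyprint-override"><code>matched = False
match Binge('asdf'):
    case Binge() as b if b == Binge('asdf'):
        matched = True
assert matched
</code></pre>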
<python><pattern-matching><python-3.10>
2025-06-14 07:58:29
1
4,878
Joshua D. Boyd
79,665,533
3,977,292
Why my codes trigger the SettingWithCopyWarning in pandas
<p>I'm confused by this pandas warning. I'm already using the recommended <code>.loc[,]</code> format, but the warning persists. Can someone explain why this warning is appearing?</p> <pre><code>df = pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]}) sub_df = df.loc[df.A &gt; 1] sub_df.loc[:, 'C'] = sub_df.A + sub_df.B </code></pre> <p>Here is the warning message:</p> <pre><code>&lt;ipython-input-203-8af37ac9de96&gt;:3: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_indexer,col_indexer] = value instead See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy sub_df.loc[:, 'C'] = sub_df.A + sub_df.B </code></pre>
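<p>For comparison, a small sketch of the usual way this warning is silenced (the warning is about <code>sub_df</code> possibly being a view of <code>df</code>, not about the <code>.loc</code> syntax of the assignment itself):</p> <pre class="lang-py prettyprint-override"><code>sub_df = df.loc[df.A &gt; 1].copy()   # explicit copy, so later writes are unambiguous
sub_df.loc[:, 'C'] = sub_df.A + sub_df.B
</code></pre>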
<python><pandas><warnings>
2025-06-14 03:04:26
1
315
Jiexing Wu
79,665,518
65,326
How to type hint a Python class dynamically created at runtime?
<p>I wrote a simple wrapper class to manage an sqlite database connection. I learned from <a href="https://stackoverflow.com/questions/16335772/mapping-result-rows-to-namedtuple-in-python-sqlite">another StackOverflow answer</a> how to use row_factory and recordclass so that the query results will return each row as an instance of dataclass. The dataclass is generated dynamically at runtime with a factory.</p> <pre><code>import sqlite3 import recordclass class DBConnection: def __init__(self, file_path): self.connection = sqlite3.connect(file_path) self.connection.row_factory = self.make_row_factory( recordclass.make_dataclass, fast_new=True, ) self.cursor = self.connection.cursor() def make_row_factory(self, cls_factory, **kw): def row_factory(cursor, row, cls=[None]): rf = cls[0] if rf is None: fields = [col[0] for col in cursor.description] cls[0] = cls_factory(&quot;Row&quot;, fields, **kw) return cls[0](*row) return rf(*row) return row_factory def query(self, query: str): result = self.cursor.execute(query) return result.fetchall() </code></pre> <p>How can I type hint the <code>query</code> method? <code>def query(self, query: str) -&gt; list[????]</code></p> <p>In this case the dataclass ends up being <code>db.Row</code>. You can’t type hint <code>list[db.Row]</code> because <code>db.Row</code> doesn’t exist until runtime.</p> <p>Putting hints on the <code>make_row_factory</code> and <code>row_factory</code> methods seems just as challenging.</p>
<python><python-typing>
2025-06-14 01:59:35
1
33,679
Apreche
79,665,429
395,857
Can one use DPO (direct preference optimization) of GPT via CLI or Python on Azure?
<p>Can one use DPO of GPT via CLI or Python on Azure?</p> <ul> <li><a href="https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/fine-tuning-direct-preference-optimization" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/fine-tuning-direct-preference-optimization</a> only shows how to do DPO of GPT on Azure via the web UI</li> <li><a href="https://learn.microsoft.com/en-us/azure/ai-services/openai/tutorials/fine-tune?tabs=command-line" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/ai-services/openai/tutorials/fine-tune?tabs=command-line</a> covers CLI and Python, but only SFT AFAIK</li> </ul>
<python><azure><command-line-interface><fine-tuning><gpt-4>
2025-06-13 22:22:23
1
84,585
Franck Dernoncourt
79,665,315
562,697
Using lru_cache on a class that has a classmethod
<p>I want to cache instances of a class because the constructor is complicated and repeated uses won't alter the contents. This works great. However, I want to create a classmethod to clear the cache. In the real code, <code>clear</code> will also clear file cache in addition to memory cache.</p> <p>The following is an MWE that works, except for calling the <code>clear</code></p> <pre class="lang-py prettyprint-override"><code>from functools import lru_cache @lru_cache(maxsize=None) class MyClass: def __init__(self, x): print(f&quot;Initializing with {x}&quot;) self.x = x @classmethod def clear(cls): print('clearing cache') cls.cache_clear() a = MyClass(1) b = MyClass(1) print(a is b) try: MyClass.clear() except Exception as e: print(e) # &lt;-- 'classmethod' object is not callable # MyClass.cache_clear() does work however c = MyClass(1) </code></pre>
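<p>One alternative sketch that sidesteps the problem entirely: cache a small factory function instead of decorating the class, so the class object keeps normal <code>classmethod</code> behaviour:</p> <pre class="lang-py prettyprint-override"><code>from functools import lru_cache

class MyClass:
    def __init__(self, x):
        print(f'Initializing with {x}')
        self.x = x

    @classmethod
    def clear(cls):
        print('clearing cache')
        get_instance.cache_clear()

@lru_cache(maxsize=None)
def get_instance(x):
    return MyClass(x)

a = get_instance(1)
b = get_instance(1)
print(a is b)      # True
MyClass.clear()    # no &quot;'classmethod' object is not callable&quot; here
</code></pre>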
<python><python-datamodel><python-lru-cache>
2025-06-13 19:49:55
3
11,961
steveo225
79,665,271
100,214
readthedocs failing on maturin builds
<p>I have a primary python project that has a dependency of another python lib which is a maturin wrapped rust crate.</p> <p>My 'readthedocs' script is failing on the lib using maturin with:</p> <pre class="lang-bash prettyprint-override"><code>Caused by: lock file version `4` was found, but this version of Cargo does not understand this lock file, perhaps Cargo needs to be updated? 💥 maturin failed Caused by: Cargo metadata failed. Does your crate compile with `cargo build`? Caused by: `cargo metadata` exited with an error: Error running maturin: Command '['maturin', 'pep517', 'write-dist-info', '--metadata-directory', '/tmp/pip-modern-metadata-bginyxp6', '--interpreter', '/home/docs/checkouts/readthedocs.org/user_builds/pysui/envs/latest/bin/python']' returned non-zero exit status 1. Checking for Rust toolchain.... Running `maturin pep517 write-dist-info --metadata-directory /tmp/pip-modern-metadata-bginyxp6 --interpreter /home/docs/checkouts/readthedocs.org/user_builds/pysui/envs/latest/bin/python` [end of output] </code></pre> <p>Is this a known issue, I could find nothing specific that describes a cure.</p>
<python><python-sphinx><read-the-docs><maturin>
2025-06-13 18:56:27
0
8,185
Frank C.
79,665,053
9,472,203
Plot and annotate NaN values in a seaborn heatmap
<p>I have a Seaborn heatmap based on a DataFrame containing some NaN values. The heatmap turns out to just have those cells blank. However, I would like a custom color, e.g., light grey to be drawn with an 'NaN' annotation. How can I achieve this?</p> <pre><code>import seaborn as sns import matplotlib.pyplot as plt from matplotlib.colors import LogNorm, Normalize import numpy as np import pandas as pd data = { &quot;Server (%)&quot;: [np.nan, 0.2, 2.4], &quot;Scheduler (%)&quot;: [np.nan, 2.3, 4.5] } merged_summary = pd.DataFrame(data) # Prepare annotation matrix annot = merged_summary.copy() for i in range(annot.shape[0]): for j in range(annot.shape[1]): val = annot.iat[i, j] if pd.isna(val): annot.iat[i, j] = &quot;N/A&quot; elif val &gt; 1: annot.iat[i, j] = f&quot;{val:.1f} %&quot; else: annot.iat[i, j] = f&quot;{val:.3f} %&quot; fig, ax = plt.subplots(3,1, figsize=(5,7), constrained_layout=True) cmap = plt.cm.get_cmap(&quot;coolwarm&quot;) h2 = sns.heatmap( data=merged_summary, annot=annot.values, fmt='', cmap=cmap, cbar=False, linewidths=0.5, linecolor='white', mask=merged_summary.isna(), vmin=0.1, vmax=30, norm=LogNorm(vmin=0.1, vmax=30), annot_kws={&quot;size&quot;: 8}, ax=ax[2], ) plt.show() </code></pre> <p><a href="https://i.sstatic.net/AgFFPL8J.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AgFFPL8J.png" alt="heatmap with missing N/A annotations" /></a></p>
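<p>One possible sketch (masked cells are simply not drawn, so the axes background shows through; painting that background and writing the labels manually is a common workaround):</p> <pre class="lang-py prettyprint-override"><code># placed after the sns.heatmap(...) call and before plt.show()
ax[2].set_facecolor('lightgrey')                      # colour seen through the masked cells
for i, j in zip(*np.where(merged_summary.isna().to_numpy())):
    ax[2].text(j + 0.5, i + 0.5, 'N/A',
               ha='center', va='center', fontsize=8)
</code></pre>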
<python><seaborn><heatmap>
2025-06-13 15:12:54
1
465
Mr. Discuss
79,664,914
1,145,666
Uploading a file to SharePoint using the Office365-REST-Python-Client library
<p>I want to upload a file to my SharePoint site using Python. And the <code>Office365-REST-Python-Client</code> module is apparently the way to go. The code I am using is as follows:</p> <pre><code>def upload_document_to_sharepoint(local_file_path, sharepoint_folder_path): # Authenticate with SharePoint credentials = ClientCredential(CLIENT_ID, CLIENT_SECRET) ctx = ClientContext(SHAREPOINT_SITE_URL).with_credentials(credentials) # Get the target folder target_folder = ctx.web.get_folder_by_server_relative_url(sharepoint_folder_path) file_name = os.path.basename(local_file_path) # Open the local file in binary read mode with open(local_file_path, 'rb') as file_content: uploaded_file = target_folder.files.upload(local_file_path, file_content.read()).execute_query() print(f&quot;File '{file_name}' uploaded successfully to: {uploaded_file.serverRelativeUrl}&quot;) </code></pre> <p>Which is pretty straightforward. Except it gives me the following error:</p> <blockquote> <p>office365.runtime.client_request_exception.ClientRequestException: ('-2147024891, System.UnauthorizedAccessException', 'Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED))', '403 Client Error: Forbidden for url: https://&lt;&gt;/_api/contextInfo')</p> </blockquote> <p>It is completely unclear on where and how to get the right access tokens/secrets/app-ids from SharePoint/Azure.</p> <p>I created an app secret on the <code>_layouts/15/appregnew.aspx</code> page of my SharePoint. I can then create an app (which is completely unfindable afterwards) and using <code>_lavouts/15/appinv.aspx</code> I seem to be able to trust that created token.</p> <p>Still the same error occurs. And I am completely lost on what exactly I am doing here. I am used to creating AppId, Secrets and allowing access to APIs (This is one thing Google seems to do quite well). But with SharePoint I am completely lost.</p> <p>I also tried to get this working with tokens from Azure, but even after allowing everything on SharePoint, I get a 403 error (even logging in doesn't seem to work).</p> <p>So, how do I set up access to my SharePoint to upload a file using Python?</p>
<python><rest><office365><sharepoint-online>
2025-06-13 13:20:24
0
33,757
Bart Friederichs
79,664,821
1,987,093
EmailClient sending duplicate email on ACS
<p><strong>REASON FOUND, NOT A BUG:</strong></p> <p>And found out the reason: I had a timer trigger running the code that sent the email once per 20s. The timer enumerated items to send from a table storage <em>which was shared between staging and prod</em>. And I had production and staging slots in the app =&gt; In 30% of cases both envs ran the code close enough that the table wasn't updated before both sent email.</p> <p><strong>Leaving the original in case someone bumps into same:</strong></p> <p>I'm using Azure Communication Services email sending with the azure.communication.email python package. The code that sends the email is in the Azure python function. The sending code is simple (no loops):</p> <pre><code> client = EmailClient.from_connection_string(email_conn_string) message = { ... } poller = client.begin_send(message) result = poller.result() </code></pre> <p>and according to the Azure Functions diagnostics, it gets called exactly once, and does not throw errors.</p> <p>The problem is that sometimes (up-to 30% of cases) the recipient gets the email twice as duplicate. According to the Communications Service logs (when there is duplicate email), there are two SendEmail -operations within about 200ms: one to <em>West Europe</em>, and one to <em>North Europe</em>. The ACS configuration shows &quot;Location: Global&quot; and &quot;Data Location: Europe&quot;.</p> <p>Any ideas on what causes this and how could I fix it?</p>
<python><azure-communication-services>
2025-06-13 12:06:45
0
1,329
PetriL
79,664,325
2,469,032
How to change the uv cache directory
<p>I wanted to change the cache directory in <code>uv</code>. When I followed the documentation and typed the following command:</p> <pre><code>uv cache dir --cache-dir 'E:\Program Files\uv\cache' </code></pre> <p>It does not actually change the cache directory. What might be wrong?</p>
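<p>For reference, my understanding is that <code>uv cache dir</code> only prints the location, and that <code>--cache-dir</code> applies only to the single invocation it is passed to; persisting the change goes through the <code>UV_CACHE_DIR</code> environment variable or uv's configuration file. A sketch of the config-file route, using the path from the question:</p> <pre class="lang-toml prettyprint-override"><code># uv.toml (or the [tool.uv] table in pyproject.toml)
cache-dir = 'E:\Program Files\uv\cache'
</code></pre>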
<python><uv>
2025-06-13 03:43:21
1
1,037
PingPong
79,664,298
2,307,441
How to ensure the results-section got refreshed after a button click using python and selenium
<p>I am doing web scraping with Python and Selenium on this <a href="https://search.gleif.org/#/search/currentPage=1&amp;perPage=50&amp;expertMode=true#search-form" rel="nofollow noreferrer">website</a>.</p> <p>I am entering a value into the 'Search value' textbox and clicking the 'Search now' button within a loop. The input and button click actions are functioning correctly.</p> <p>Since the search results are displayed on the same page, how can I ensure that the results section has been refreshed before I read the data, to avoid capturing outdated or stale information?</p> <blockquote> <p>I am doing this search based on the ISIN code by selecting it from the LEI Data Field.</p> </blockquote> <p>Below is my code snippet.</p> <pre class="lang-py prettyprint-override"><code>def read(value): search_textbox = WebDriverWait(driver, 30).until(EC.presence_of_element_located((By.XPATH ,&quot;//input[contains(@class,'input') and @type='text']&quot;))) search_textbox.click() time.sleep(1) search_textbox.clear() time.sleep(1) script = &quot;&quot;&quot; var input = arguments[0]; var value = arguments[1]; input.value = value; var event = new Event('input', { bubbles: true }); input.dispatchEvent(event); &quot;&quot;&quot; driver.execute_script(script, search_textbox, value) time.sleep(1) #Button search_btn = WebDriverWait(driver, 30).until(EC.element_to_be_clickable((By.XPATH,&quot;//button[contains(@class,'search-btn') and normalize-space(text())='Search now']&quot;))) search_btn.click() time.sleep(2) WebDriverWait(driver, 30).until(EC.visibility_of_element_located((By.ID, 'results-section'))) results_section = driver.find_element(By.ID, 'results-section') return results_section for value in values: data = read(value) bla bla bla..... </code></pre> <p>To me, <code>WebDriverWait(driver, 30).until(EC.visibility_of_element_located((By.ID, 'results-section')))</code> doesn't really seem to help identify whether the results-section got refreshed after clicking the search button.</p> <p>How do I ensure/wait that the results-section has refreshed after the search button click?</p> <p>I am quite new to Selenium. I would appreciate your help on this.</p> <blockquote> <p>Example values: US035240AQ30, KYG040111059 &amp; INE437A01024 (some values don't have any matching output)</p> </blockquote>
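<p>One common sketch for this situation (it assumes the page replaces the old results element rather than mutating it in place): keep a reference to the element before clicking, wait for it to become stale, then re-locate it.</p> <pre class="lang-py prettyprint-override"><code>from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

old_results = driver.find_element(By.ID, 'results-section')
search_btn.click()

# the old node going stale signals that the results were re-rendered
WebDriverWait(driver, 30).until(EC.staleness_of(old_results))
results_section = WebDriverWait(driver, 30).until(
    EC.visibility_of_element_located((By.ID, 'results-section'))
)
</code></pre>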
<python><selenium-webdriver>
2025-06-13 02:48:18
1
1,075
Roshan
79,664,217
18,649,992
Arbitrary Stencil Slicing in Numpy
<p>Is there a simple syntax for creating references to an arbitrary number of neighbouring array elements in numpy?</p> <p>The syntax is relatively straightforward when the number of neighbours is hard-coded. A stencil width of three for example is</p> <pre><code>import numpy as np x = np.arange(8) # Hard-coded stencil width of 3 x_neighbours = ( x[ :-2] x[ 1:-1] x[ 2: ] ) </code></pre> <p>However, my attempt at arbitrary width stencils is not particularly readable</p> <pre><code> nStencil = 3 x_neighbours = ( x[indexStart:indexStop] for indexStart, indexStop in zip( (None, *range(1,nStencil)), (*range(1-nStencil,0), None), ) ) </code></pre> <p>Is there a better approach?</p>
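<p>One candidate worth mentioning: <code>sliding_window_view</code> (NumPy 1.20+) exposes every stencil offset as a column of a single strided view, which reads more directly than the zipped slice bounds:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np

x = np.arange(8)
nStencil = 3

windows = np.lib.stride_tricks.sliding_window_view(x, nStencil)
x_neighbours = tuple(windows[:, k] for k in range(nStencil))
# windows[:, 0] is x[:-2], windows[:, 1] is x[1:-1], windows[:, 2] is x[2:]
</code></pre>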
<python><numpy><numpy-ndarray><numpy-slicing>
2025-06-13 00:00:53
1
440
DavidJ
79,664,199
395,857
Clicking on an example so that it directly displays the expected output without requiring the user to press Enter?
<p>I have a simple Gradio interface for machine translation:</p> <p><a href="https://i.sstatic.net/LhluJsKd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LhluJsKd.png" alt="enter image description here" /></a></p> <p>How can I change it so that clicking on an example (e.g., 'Avion blanc') directly displays the translation without requiring the user to press Enter?</p> <p>Code:</p> <pre class="lang-py prettyprint-override"><code>import gradio as gr import time def translate_sentence(src_text): start_time = time.time() translation = &quot;hello&quot; # I removed the translation model duration = (time.time() - start_time)*1000 return translation, f&quot;{duration:.0f} ms&quot; with gr.Blocks(title=&quot;French to English Translation&quot;, css=&quot;footer{display:none !important}&quot;) as demo: input_box = gr.Textbox(lines=1, label=&quot;Input (French)&quot;) output_box = gr.Textbox(lines=1, label=&quot;Translation (English)&quot;) time_box = gr.Text(label=&quot;Time Taken&quot;) translate_btn = gr.Button(&quot;Translate&quot;) gr.Examples( examples=[ [&quot;Avion blanc&quot;], [&quot;Chat sur un arbre&quot;], [&quot;Un oiseau sur l'eau&quot;] ], inputs=input_box, outputs=[output_box, time_box], fn=translate_sentence ) translate_btn.click(translate_sentence, inputs=input_box, outputs=[output_box, time_box]) input_box.submit(translate_sentence, inputs=input_box, outputs=[output_box, time_box]) if __name__ == &quot;__main__&quot;: demo.launch() </code></pre>
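<p>For reference, <code>gr.Examples</code> accepts a <code>run_on_click</code> flag (it needs <code>fn</code> and <code>outputs</code> to be set, as they already are here); a hedged sketch of the changed call:</p> <pre class="lang-py prettyprint-override"><code>gr.Examples(
    examples=[
        ['Avion blanc'],
        ['Chat sur un arbre'],
        [&quot;Un oiseau sur l'eau&quot;]
    ],
    inputs=input_box,
    outputs=[output_box, time_box],
    fn=translate_sentence,
    run_on_click=True,   # run fn immediately when an example is clicked
)
</code></pre>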
<python><gradio>
2025-06-12 23:33:11
1
84,585
Franck Dernoncourt
79,664,185
5,724,091
PCollection Objects Format for Apache Beam to write on BigQuery using CDC in Python
<p>I'm trying to write to BigQuery using Apache Beam, in python. However, I want to use the newest CDC features to write on Bigquery.</p> <p>However, I can't get the correct format of the objects in the PCollection. The <a href="https://beam.apache.org/releases/pydoc/2.61.0/apache_beam.io.gcp.bigquery.html#apache_beam.io.gcp.bigquery.WriteToBigQuery" rel="nofollow noreferrer">documentation</a> only says that, when using the &quot;use_cdc_writes&quot; option, &quot;the Beam Rows will be sent as they are to the BigQuery sink which expects a ‘record’ and ‘row_mutation_info’ properties&quot;.</p> <p>The objects in my Input PCollection are dictionaries with many fields. How can I format the objects as &quot;Bean Rows&quot; wirth &quot;record&quot; and &quot;row_mutation_info&quot; properties?</p> <pre><code>pcoll_output = (pcoll_input # | DataPoint:Dict[str,Any] | | f'{table_name}:Write to BigQuery' &gt;&gt; WriteToBigQuery( full_table_id, # Target table in BigQuery, e.g. `my-project.my_dataset.my_table_cdc` # Table creation options create_disposition=BigQueryDisposition.CREATE_NEVER, # The table should already exists primary_key=None, #type:ignore Used only when creating the table. It should not be used here. # Write Method Options write_disposition=BigQueryDisposition.WRITE_APPEND, # Used for STREAMING schema=table_config.schema.bigquery_schema_string, # Required for usage of STORAGE_WRITE_API method=WriteToBigQuery.Method.STORAGE_WRITE_API, # MUCH CHEAPER THAN THE OTHER OPTIONS! # CDC/Streaming Options use_at_least_once=True, # True is cheaper and faster but might duplicate records. use_cdc_writes=True, # Can only be True if use_at_least_once=True and method=STORAGE_WRITE_API #ignore_insert_ids=False, # True is cheaper and faster but might duplicate records even when using CDC # Test/Retry options validate=True, # Validate the data in Apache Beam before writing to BigQuery. Use for testing insert_retry_strategy=bigquery_tools.RetryStrategy.RETRY_ON_TRANSIENT_ERROR ) ) </code></pre> <p>I tried several ways of configuring the datapoints, as:</p> <ul> <li>Including a &quot;row_mutation_info&quot; field with the mutation info</li> <li>Including a &quot;record&quot; field with the remaining data of each Datapoint</li> <li>Formating both as such:</li> </ul> <pre><code>datapoint = Row( row_mutation_info=Row( mutation_type=self.compute_change_type(datapoint), change_sequence_number=self.format_sequence_number(datapoint) ), record=bigquery_tools.beam_row_from_dict(datapoint, self.schema) ) </code></pre>
<python><google-bigquery><apache-beam><apache-beam-io><change-data-capture>
2025-06-12 23:01:04
1
319
José Fonseca
79,664,132
10,083,382
Unable to use serverless connection in a PromptFlow pipeline
<p>I am running my prompt flow (<code>version 1.18.0</code>) code as a pipeline. However, it seems like that the connection that I created in prompt flow which is referred in <code>flow.dag.yaml</code> is not working through a pipeline. Same connection can be used in UI interface of promptflow without any issues. Note that the connection is accessible on workspace level as well. Serverless connection is recognized in Azure documentation as one of the connection type. Second, I want to compile the returned results from each child node and apply an aggregation function, results from aggregation function will be logged as a metric that will be displayed in experiment's metric UI.</p> <p><strong>Error:</strong></p> <pre><code>Build connection dict for connection [&quot;module&quot;: &quot;azureml_sys.parallel_run.promptflow.exceptions&quot;, &quot;name&quot;: &quot;ResolveConnectionsObjectError&quot;, &quot;message&quot;: &quot;&quot;, &quot;compliant_message&quot;: &quot;&quot;, &quot;formatted_traceback&quot;: &quot; File \&quot;/mnt/azureml/cr/j/429d583b31e44bc8a3ba60120ccc0fda/exe/wd/driver/sys_main.py\&quot;, line 94, in main\n job_starter.start()\n File \&quot;/tmp/b569cdfd-7af8-4c96-9656-f4d99ed4cbcf/prs_prod/lib/python3.8/site-packages/azureml_sys/parallel_run/job_starter.py\&quot;, line 298, in start\n self.run_tasks()\n File \&quot;/tmp/b569cdfd-7af8-4c96-9656-f4d99ed4cbcf/prs_prod/lib/python3.8/site-packages/azureml_sys/parallel_run/job_starter.py\&quot;, line 283, in run_tasks\n loop.run_until_complete(self.start_tasks())\n File \&quot;/tmp/b569cdfd-7af8-4c96-9656-f4d99ed4cbcf/prs_prod/lib/python3.8/asyncio/base_events.py\&quot;, line 616, in run_until_complete\n return future.result()\n File \&quot;/tmp/b569cdfd-7af8-4c96-9656-f4d99ed4cbcf/prs_prod/lib/python3.8/site-packages/azureml_sys/parallel_run/job_starter.py\&quot;, line 222, in start_tasks\n connections_dict[name] = connection_provider.get_connection_dict(name)\n File \&quot;/tmp/b569cdfd-7af8-4c96-9656-f4d99ed4cbcf/prs_prod/lib/python3.8/site-packages/azureml_sys/parallel_run/promptflow/connections/connection_providers.py\&quot;, line 43, in get_connection_dict\n raise ResolveConnectionsObjectError from exec&quot;, &quot;cause&quot;: [&quot;module&quot;: &quot;azureml_sys.parallel_run.promptflow.exceptions&quot;, &quot;name&quot;: &quot;UnknownConnectionType&quot;, &quot;message&quot;: &quot;Unknown connection Llama-4-Scout-17B-16E-Instruct-j category Serverless, please upgrade your promptflow sdk version and retry.&quot;, &quot;compliant_message&quot;: &quot;Unknown connection Llama-4-Scout-17B-16E-Instruct-j category Serverless, please upgrade your promptflow sdk version and retry.&quot;, &quot;formatted_traceback&quot;: &quot; File \&quot;/tmp/b569cdfd-7af8-4c96-9656-f4d99ed4cbcf/prs_prod/lib/python3.8/site-packages/azureml_sys/parallel_run/promptflow/connections/connection_providers.py\&quot;, line 40, in get_connection_dict\n return self._build_connection_dict(connection_name, conn_obj)\n File \&quot;/tmp/b569cdfd-7af8-4c96-9656-f4d99ed4cbcf/prs_prod/lib/python3.8/site-packages/azureml_sys/parallel_run/promptflow/connections/connection_providers.py\&quot;, line 142, in _build_connection_dict\n raise UnknownConnectionType(f\&quot;Unknown connection [name] category [properties.category], \&quot;&quot;, &quot;cause&quot;: null]] </code></pre> <p><strong>flow.dag.yaml</strong></p> <pre><code>$schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json id: template_standard_flow name: 
test_serverless_connection inputs: topic: type: string is_chat_input: false outputs: joke: type: string reference: ${echo.output} nodes: - name: echo type: python source: type: code path: echo.py inputs: input: ${joke.output} use_variants: false - name: joke type: llm source: type: code path: joke.jinja2 inputs: deployment_name: &quot;&quot; temperature: 1 top_p: 1 max_tokens: 256 topic: ${inputs.topic} provider: Serverless connection: Llama-4-Scout-17B-16E-Instruct-j api: chat module: promptflow.tools.openai use_variants: false node_variants: {} environment: python_requirements_txt: requirements.txt </code></pre> <p><strong>orchestrator.py</strong></p> <pre><code>import os import uuid import json import tempfile import mlflow import azureml.mlflow # Azure ML ↔ MLflow integration from azure.identity import DefaultAzureCredential from azure.ai.ml import MLClient, load_component from azure.ai.ml.constants import AssetTypes from azure.ai.ml.dsl import pipeline from azure.ai.ml import MLClient, load_component, dsl, Input # ------------------------------------------------------------------------------ # 1. Workspace &amp; MLflow Tracking Configuration # ------------------------------------------------------------------------------ subscription_id = &quot;foo&quot; resource_group = &quot;bar&quot; workspace_name = &quot;baz&quot; os.environ['subscription_id'] = subscription_id os.environ['resource_group'] = resource_group os.environ['workspace_name'] = workspace_name cred = DefaultAzureCredential() ml_client = MLClient( credential = cred, subscription_id = subscription_id, resource_group_name = resource_group, workspace_name = workspace_name, ) # get the MLflow tracking URI from your workspace tracking_uri = ml_client.workspaces.get(workspace_name).mlflow_tracking_uri mlflow.set_tracking_uri(tracking_uri) mlflow.set_experiment(&quot;test_connection_promptflow_pipeline&quot;) cluster_name = &quot;LLM-Prompt-Flow&quot; print(ml_client.compute.get(cluster_name)) # ------------------------------------------------------------------------------ # 2. Turn your Prompt Flow into a reusable Component # ------------------------------------------------------------------------------ flow_component = load_component(source=&quot;flow.dag.yaml&quot;) # ml_client.components.create_or_update(flow_component, version=&quot;1&quot;) ml_client.components.create_or_update(flow_component, version=&quot;1&quot;) # ------------------------------------------------------------------------------ # 3. 
Build the DSL Pipeline that invokes your flow component # ------------------------------------------------------------------------------ # create a tiny JSONL file for inputs sample = {&quot;topic&quot;: &quot;Programmer&quot;} tmp = tempfile.NamedTemporaryFile(mode=&quot;w&quot;, suffix=&quot;.jsonl&quot;, delete=False, encoding=&quot;utf-8&quot;) json.dump(sample, tmp); tmp.write(&quot;\n&quot;); tmp.flush(); tmp.close() eval_data = Input( type=AssetTypes.URI_FILE, path=tmp.name, mode=&quot;ro_mount&quot;, ) @pipeline() def eval_pipeline(): eval_node = flow_component( data=eval_data, topic=&quot;${data.topic}&quot; ) eval_node.compute = cluster_name eval_node.max_concurrency_per_instance = 1 eval_node.mini_batch_error_threshold = 5 pipeline_job = eval_pipeline() pipeline_job.settings.default_compute = cluster_name pipeline_job.name = f&quot;eval-{uuid.uuid4().hex[:8]}&quot; submitted = ml_client.jobs.create_or_update( pipeline_job, experiment_name=&quot;test_connection_promptflow_pipeline&quot; ) print(f&quot;▶️ Submitted pipeline job: {submitted.name}&quot;) ml_client.jobs.stream(submitted.name) </code></pre> <ol> <li><p>How to fix the connection issue?</p> </li> <li><p>Assuming 1) is fixed. How can I use MLFlow tracking to display metrics on experiment run? How to compile results from child nodes of job, perform aggregation and log the metric?</p> </li> </ol>
<python><azure><azure-machine-learning-service><azureml-python-sdk><azure-promptflow>
2025-06-12 21:42:55
0
394
Lopez
79,663,856
1,040,323
How to uninstall or upgrade CPython?
<p>I have an install of CPython on my PC - from about 2020, version 3.8.5.</p> <p>This Windows 11 laptop is about 5 years old but I acquired it 2nd hand, 2 years ago. I discovered this CPython in an attempt to upgrade to Python 3.9.2 or later, which I want to run for a specific app. The CPython has no <em>Programs and Features</em> entry.</p> <p>Apart from deleting the files, PATH, and registry entries, is there anything else I need to do to remove it?</p>
<python><installation>
2025-06-12 16:23:48
1
509
user1040323
79,663,852
1,747,834
Can I specify a different directory for transient files, when invoking Python build-module?
<p>I currently build my simple Python module with a command like:</p> <pre class="lang-bash prettyprint-override"><code>python3 -m build --wheel --outdir /tmp . </code></pre> <p>This creates the <code>/tmp/mypackage-0.1-py3-none-any.whl</code> -- as one would expect. However, it <em>also</em> creates the <code>myproject.egg-info/</code> and <code>build/</code> subdirectories in the project directory (next to the <code>pyproject.toml</code>), which I find annoying -- it should be possible for the project directory to be read-only.</p> <p>The <code>python3.11 -m build --help</code> does not list any option that would allow specifying a different path for the transient files/folders. Is there some undocumented method, perhaps?</p>
<python><setuptools><python-packaging><python-wheel>
2025-06-12 16:22:12
0
4,246
Mikhail T.
79,663,750
1,941,632
Call async code inside sync code inside async code
<p>I've found myself in a tricky situation which I can't seem to find my way out of:</p> <ol> <li>I've written an application which uses asyncio (and, in particular, aiohttp).</li> <li>In part of this application, I need to use a third-party library (in my case, weasyprint) which is <em>not</em> async.</li> <li>I want to pass a callback (here, a url fetcher) into this third-party library. This callback must obviously be sync, but I want to use my existing async code as part of this callback. (I already have an aiohttp client session I want to use, and I've implemented some client middlewares in this session which do some useful things. I don't want to have to write a second fully sync version of this code.)</li> </ol> <h2>I've tried:</h2> <ul> <li>Creating a second event loop (as a new thread) inside the callback, and running my async url fetcher code in this second loop with <code>loop.run_until_complete</code>. I then used a <code>queue.Queue</code> to block execution on the main thread until the secondary thread with the second event loop returned a result. Unfortunately, this threw errors due to trying to run tasks in the wrong event loop. It seems I can't use the client session outside the event loop where it was created.</li> <li>Running weasyprint itself in a thread (using <code>asyncio.to_thread</code>) and then trying to use <code>loop.run_until_complete</code> in the callback (with <code>loop</code> here referencing the single main event loop). Unfortunately, this failed because &quot;this event loop is already running.&quot; It didn't seem to matter that it was only <code>await</code>ing the weasyprint thread, it still wouldn't let me run another coroutine in the loop.</li> </ul> <p>I do understand that, obviously, weasyprint won't be able to take advantage of the performance benefits of using async code, but simply for the sake of not having to write the same code twice (a sync version and an async version) I'd like to find a way to <em>force</em> it to use an async callback, even though that callback in reality just runs synchronously.</p> <p>I really only have a surface-level understanding of asyncio; I don't know a lot of the lower-level details about how event loops work and all that, so I'm sure the above approaches are a bit naive, but it seems to me like there ought to be some way to accomplish this.</p> <h3>Sample code using my first attempted method</h3> <pre class="lang-py prettyprint-override"><code>def print_weasyprint(html:IOBase, resources:aiohttp.ClientSession): out = io.BytesIO() HTML( file_obj=html, url_fetcher=_make_url_fetcher(resources), ).write_pdf(out) return out def _call_async_secondary_loop(coro): q = queue.Queue(1) def secondary(): loop = asyncio.new_event_loop() q.put(loop.run_until_complete(coro)) Thread(target=secondary).start() return q.get() def _make_url_fetcher(resources:aiohttp.ClientSession): def fetcher(url, *args, **kwargs): content = _call_async_secondary_loop(_fetch(resources, url)) return {'string':content} return fetcher async def _fetch(resources, url): async with reources.get(url) as resource: return await resource.content.read() </code></pre> <p>(Here, although <code>print_weasyprint</code> is sync, but is being called from an async context in the application, so it's still running in the event loop.)</p> <h3>Sample code using my second attempted method</h3> <pre class="lang-py prettyprint-override"><code>async def print_weasyprint(html:io.IOBase, resources:aiohttp.ClientSession): loop = asyncio.get_running_loop() fetcher = 
_make_url_fetcher(resources, loop) return await asyncio.to_thread(_print_weasyprint_sync, fetcher, html, resources, out) def _print_weasyprint_sync(fetcher, html:io.IOBase, resources:aiohttp.ClientSession): out = io.BytesIO() HTML( file_obj=html, url_fetcher=fetcher, ).write_pdf(out) return out def _make_url_fetcher(resources:aiohttp.ClientSession, loop): def fetcher(url, *args, **kwargs): content, content_type = loop.run_until_complete(_fetch(resources, url)) return { 'string':content, 'mime_type':content_type, } return fetcher async def _fetch(resources:aiohttp.ClientSession, url): async with resources.get(url) as resource: return await resource.content.read(), resource.content_type </code></pre>
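<p>One sketch that builds on the second attempt (it assumes <code>_fetch</code> and the thread layout from above): because <code>_print_weasyprint_sync</code> runs in a worker thread via <code>asyncio.to_thread</code>, the sync callback can hand coroutines back to the main loop with <code>asyncio.run_coroutine_threadsafe</code> and block only the worker thread on the returned future:</p> <pre class="lang-py prettyprint-override"><code>import asyncio

def _make_url_fetcher(resources, loop):
    def fetcher(url, *args, **kwargs):
        # schedule the coroutine on the main loop from this worker thread
        future = asyncio.run_coroutine_threadsafe(_fetch(resources, url), loop)
        content, content_type = future.result()   # blocks the worker thread only
        return {'string': content, 'mime_type': content_type}
    return fetcher
</code></pre>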
<python><asynchronous><python-asyncio><aiohttp>
2025-06-12 15:01:51
2
888
DMJ
79,663,656
2,351,983
FastAPI state not isolated between concurrent users
<p>I have a FastAPI app. In <code>main.py</code>:</p> <pre><code>@asynccontextmanager async def lifespan(app: FastAPI): # Setup app.state.session_store = {} yield </code></pre> <p>In another file, I have the following function:</p> <pre><code>def get_user_state(request: Request, response: Response) -&gt; AppState: logger.debug(&quot;Session state called&quot;) session_store = request.app.state.session_store logger.debug(f&quot;Session store: {session_store}&quot;) session_id = request.headers.get(&quot;user-id&quot;) if session_id not in session_store: session_store[session_id] = AppState() return session_store[session_id] </code></pre> <p>It gets called with:</p> <pre><code>@router.post(&quot;/endpoint&quot;) async def my_function( body: MyRequest, request: Request, response: Response, user_state: AppState = Depends(get_user_state) ): </code></pre> <p>If two instances are running with different <code>user_id</code> values, I can see that the <code>session_store</code> has two different <code>AppState()</code> objects. However, if I add a variable to one of them, it gets added to both. Why??</p>
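<p>For reference, one common cause of exactly this symptom (whether it applies depends on how <code>AppState</code> is defined) is a mutable attribute declared at class level, which is shared by every instance; a minimal sketch of the difference:</p> <pre class="lang-py prettyprint-override"><code>class AppState:
    items = []            # class attribute: shared by ALL AppState instances

    def __init__(self):
        self.data = {}    # instance attribute: private to each instance

a, b = AppState(), AppState()
a.items.append('x')
print(b.items)            # ['x']  -- leaked into the other instance
a.data['k'] = 1
print(b.data)             # {}     -- properly isolated
</code></pre>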
<python><fastapi><starlette>
2025-06-12 13:59:49
0
356
luanpo1234
79,663,583
5,269,892
Pandera validation behavior for NaN failure cases
<p>Suppose we use a minimal example for a panderas dataframe-wide validation (cf. <a href="https://stackoverflow.com/a/77758888/5269892">this Stackoverflow post</a>):</p> <pre><code>import numpy as np import pandas as pd import pandera as pa dataframe = pd.DataFrame({'column_A': ['ABC company', 'BBB company', 'ABC company', 'CCC company'], 'column_B': ['1000', np.NaN, '2000', np.NaN] }) # define your dataframe level test check_AB = pa.Check( lambda df: (df['column_A'].str.contains('ABC')) &amp; (~df['column_B'].isna()), name='check_AB' ) schema = pa.DataFrameSchema( columns={ 'column_A': pa.Column(pa.String), 'column_B': pa.Column(pa.String, nullable=True) }, checks=check_AB # &lt;- dataframe wide check ) schema.validate(dataframe, lazy=True) </code></pre> <p>The NaN values do not show up among the failure cases, only the non-NaN values: <a href="https://i.sstatic.net/UxLipOED.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UxLipOED.png" alt="enter image description here" /></a></p> <p>Now we adjust the example:</p> <pre><code>dataframe = pd.DataFrame({'column_A': ['ABC company', np.nan, 'ABC company', np.nan], 'column_B': [1000, np.NaN, 2000, 4] }) # define your dataframe level test check_AB = pa.Check( lambda df: (df['column_A'].str.contains('ABC')) &amp; (df['column_B'] &gt;= 7), name='check_AB' ) </code></pre> <p>Again, only the non-NaN values are reported among the failure cases: <a href="https://i.sstatic.net/jgtNYWFd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jgtNYWFd.png" alt="enter image description here" /></a></p> <p>We can see this in the error object itself as well (see schema_context DataFrameSchema):</p> <pre><code>try: schema.validate(dataframe, lazy=True) except pa.errors.SchemaErrors as e: e2 = e; print(e2.failure_cases) </code></pre> <p><a href="https://i.sstatic.net/pGyQbUfg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pGyQbUfg.png" alt="enter image description here" /></a></p> <p>Finally, if all failure cases are NaNs, pandera does not give a validation error:</p> <pre><code>dataframe = pd.DataFrame({'column_A': ['ABC company', np.nan, 'ABC company', np.nan], 'column_B': [1000, np.NaN, 2000, np.nan] }) # define your dataframe level test check_AB = pa.Check( lambda df: (df['column_A'].str.contains('ABC')) &amp; (df['column_B'] &gt;= 7), name='check_AB' ) schema = pa.DataFrameSchema( columns={ 'column_A': pa.Column(pa.String, nullable=True), 'column_B': pa.Column(pa.Float, nullable=True) }, checks=check_AB # &lt;- dataframe wide check ) </code></pre> <p><a href="https://i.sstatic.net/4hR7Z16L.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4hR7Z16L.png" alt="enter image description here" /></a></p> <p><strong>Questions: Why does Panderas not report NaNs as failure cases? And why does an all-NaN-failure-case-situation prevent the correct triggering of the validation errors?</strong></p>
<python><pandas><dataframe><nan><pandera>
2025-06-12 13:01:02
0
1,314
silence_of_the_lambdas
79,663,456
7,376,511
Python Pydantic: Optional non-nullable field
<pre><code>from pydantic import BaseModel class MyModel(BaseModel): id: int name: str # this should be optional MyModel(id=1) </code></pre> <p>This raises a ValidationError. Setting <code>name: str | None = None</code> is unacceptable because the name cannot be null. It can be a string, or it can be unset, but it can never be null.</p> <p>How do I make this work?</p> <p>There were similar questions a while ago: <a href="https://stackoverflow.com/questions/60191270/python-pydantic-how-to-have-an-optional-field-but-if-present-required-to-con">Python Pydantic - how to have an &quot;optional&quot; field but if present required to conform to not None value?</a> and this one: <a href="https://stackoverflow.com/questions/73198957/how-to-exclude-optional-unset-values-from-a-pydantic-model-using-fastapi">How to exclude Optional unset values from a Pydantic model using FastAPI?</a> However, those questions still end up setting the field to None when it's not passed, which is not what I want.</p>
<python><python-typing><pydantic>
2025-06-12 11:32:59
2
797
Some Guy
79,663,235
7,112,039
Celery 5.5.2 randomly raises RecursionError when the AsyncResult.status property is accessed
<p>I have this simple piece of code:</p> <pre class="lang-py prettyprint-override"><code>task_id = payload.task_id task_result = AsyncResult(task_id) if task_result.status == &quot;SUCCESS&quot;: print(&quot;I am happy&quot;) </code></pre> <p>Celery configuration is pretty basic. I just set the broker and the backend to use the same RabbitMQ instance; any other setting is left at its default.</p> <p>I wasn't able to reproduce it reliably. I could only reproduce it twice in a row, using a random id (123). A few moments later, it worked normally, returning <code>PENDING</code>.</p> <p>Looking at the stack trace, I saw a loop around these instructions:</p> <pre><code>self.connection.drain_events(timeout=timeout) while not self.blocking_read(timeout): return self.on_inbound_frame(frame) callback(channel, method_sig, buf, None) return self.channels[channel_id].dispatch_method( listener(*args) self._do_revive() self.open() return self.send_method( return self.wait(wait, returns_tuple=returns_tuple) self.connection.drain_events(timeout=timeout) </code></pre>
<python><celery>
2025-06-12 09:18:55
0
303
ow-me
79,663,073
17,795,398
How to apply rotations to structured numpy arrays?
<p>I'm using structured arrays to store atoms data produced by LAMMPS (I'm using a structured array that follows its format). I need to rotate the positions:</p> <pre><code>import numpy as np transform = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=np.float64) dtype = np.dtype([(&quot;x&quot;, np.float64), (&quot;y&quot;, np.float64), (&quot;z&quot;, np.float64)]) atoms = np.array( [ (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (1.0, 1.0, 1.0), ], dtype=dtype, ) atoms[[&quot;x&quot;, &quot;y&quot;, &quot;z&quot;]] = atoms[[&quot;x&quot;, &quot;y&quot;, &quot;z&quot;]] @ transform.T </code></pre> <p>But this produces:</p> <pre class="lang-none prettyprint-override"><code>Traceback (most recent call last): File &quot;c:\Users\acgc99\Desktop\rotation.py&quot;, line 16, in &lt;module&gt; atoms[[&quot;x&quot;, &quot;y&quot;, &quot;z&quot;]] = atoms[[&quot;x&quot;, &quot;y&quot;, &quot;z&quot;]] @ transform.T ~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~ numpy._core._exceptions._UFuncNoLoopError: ufunc 'matmul' did not contain a loop with signature matching types (dtype([('x', '&lt;f8'), ('y', '&lt;f8'), ('z', '&lt;f8')]), dtype('float64')) -&gt; None </code></pre> <p>I can convert to unstructured arrays, but I guess that doing that change multiple times is not efficient when working with tens of millions of atoms.</p>
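<p>A minimal sketch of the conversion route using <code>numpy.lib.recfunctions</code> (one explicit round-trip per rotation; whether a zero-copy view is possible depends on the field layout, so this is written with a plain write-back):</p> <pre class="lang-py prettyprint-override"><code>from numpy.lib import recfunctions as rfn

# Pull the three coordinate fields out as an (N, 3) float64 array
xyz = rfn.structured_to_unstructured(atoms[['x', 'y', 'z']])

# Rotate, then write the result back into the structured array field by field
xyz = xyz @ transform.T
atoms['x'], atoms['y'], atoms['z'] = xyz[:, 0], xyz[:, 1], xyz[:, 2]
</code></pre>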
<python><arrays><numpy>
2025-06-12 07:12:58
3
472
Abel Gutiérrez
79,662,891
605,153
Deploy python application in production with or without uv
<p>I worked with Java for many years, but for my latest project I need to use Python and I'm still learning the ecosystem.</p> <p>We use uv and have a project with a bunch of dependencies. On my development machine I run <code>uv run main.py</code>. However, I was wondering how we should prepare a deployment artifact for our application. We'll use the Python and uv stack in 2 different kinds of applications:</p> <ol> <li>Run in Docker in k8s</li> <li>Run as a process</li> </ol> <p>So my question is: is it good practice to install uv on the production machine (or in Docker) and run the production application the same way I do in development, or are there other alternatives? What is the best way to run an application with multiple Python modules and many dependencies? For example, here is what I do now in Docker:</p> <p>Dockerfile:</p> <pre><code>FROM python:3.13 RUN pip3 install uv WORKDIR /app # I'll optimize the image later COPY run.sh /app COPY main.py /app COPY pyproject.toml /app COPY uv.lock /app RUN uv sync --all-groups ENTRYPOINT [&quot;/bin/bash&quot;, &quot;/app/run.sh&quot;] </code></pre> <p>My /app/run.sh looks like this:</p> <pre><code>#!/bin/bash uv run main.py </code></pre> <p>But I'm not sure this is how the apps should really be deployed, especially for the &quot;run-without-docker&quot; type of deployment - should I install uv on the prod server and run the app like this as well?</p> <p>In the Java world we used Maven to build things, but we didn't need to install it on production servers - we used it solely as a build tool...</p>
<python><docker><deployment><uv>
2025-06-12 03:54:33
2
42,919
Mark Bramnik
79,662,795
3,174,075
How can I use COINBase RESTClient
<p>I went to coinbase and registered an API key</p> <p><a href="https://docs.cdp.coinbase.com/coinbase-app/docs/getting-started" rel="nofollow noreferrer">https://docs.cdp.coinbase.com/coinbase-app/docs/getting-started</a></p> <p>[!api_defined<a href="https://i.sstatic.net/f5ZgtL6t.png" rel="nofollow noreferrer">https://cloud.coinbase.com/access/api</a>]<a href="https://i.sstatic.net/f5ZgtL6t.png" rel="nofollow noreferrer">1</a></p> <p>I put the API Key and Secret in a .env file</p> <p>these are not the real secret or api key</p> <pre><code>COINBASE_API_KEY=3398a3 COINBASE_API_SECRET=-----BEGIN EC PRIVATE KEY----- EsOY8CRow== -----END EC PRIVATE KEY----- </code></pre> <p>Requirements.txt</p> <pre><code>fastapi uvicorn[standard] # for environment variables python-dotenv # Coinbase Advanced Trade API client coinbase-advanced-py </code></pre> <p>Here is the server code:</p> <pre><code>from fastapi import FastAPI from app.endpoints import account from dotenv import load_dotenv load_dotenv() # Loads .env file into environment variables import logging logging.basicConfig( level=logging.INFO, # &lt;-- enable info-level logs format=&quot;%(asctime)s - %(name)s - %(levelname)s - %(message)s&quot; ) app = FastAPI( title=&quot;crypto-trader&quot;, description=&quot;A service for interacting with Coinbase's advanced trade API.&quot;, version=&quot;1.0.0&quot; ) app.include_router(account.router) </code></pre> <p>here is the account endpoint</p> <pre><code># app/endpoints/account.py from fastapi import APIRouter from app.services.coinbase_client import get_accounts router = APIRouter() @router.get(&quot;/account&quot;, tags=[&quot;Account&quot;]) def account_info(): return get_accounts() </code></pre> <p>here is the code that accesses the coin base account</p> <pre><code># app/services/coinbase_client.py import logging from app.core.config import COINBASE_API_KEY, COINBASE_API_SECRET logger = logging.getLogger(__name__) try: from coinbase.rest import RESTClient except ImportError as e: logger.error(&quot;Failed to import RESTClient from coinbase.rest&quot;) logger.exception(e) raise try: client = RESTClient(api_key=COINBASE_API_KEY, api_secret=COINBASE_API_SECRET, verbose=True) except Exception as e: logger.error(&quot;Failed to initialize RESTClient with provided API credentials&quot;) logger.exception(e) raise def get_accounts(): logger.info(f&quot;==================================================COINBASE_API_KEY: {COINBASE_API_KEY}&quot;) try: return client.get_accounts() except Exception as e: logger.error(&quot;Error fetching profiles from Coinbase API&quot;) logger.exception(e) # Option 1: Raise so caller can handle the error explicitly # raise # Option 2: Return an error dict (your original choice) return {&quot;error&quot;: str(e)} </code></pre> <p>when I start the server I see this, so the server &quot;runs&quot;</p> <p>[!server-runs<a href="https://i.sstatic.net/9m5d24KN.png" rel="nofollow noreferrer">enter image description here</a>]<a href="https://i.sstatic.net/9m5d24KN.png" rel="nofollow noreferrer">2</a></p> <p>however when I hit the endpoint I get this:</p> <pre><code>2025-06-12 00:05:57,379 - app.services.coinbase_client - INFO - ==================================================COINBASE_API_KEY: 3398a3 2025-06-12 00:05:57,381 - app.services.coinbase_client - ERROR - Error fetching profiles from Coinbase API 2025-06-12 00:05:57,381 - app.services.coinbase_client - ERROR - Unable to load PEM file. 
See https://cryptography.io/en/latest/faq/#why-can-t-i-import-my-pem-file for more details. MalformedFraming Are you sure you generated your key at https://cloud.coinbase.com/access/api ? Traceback (most recent call last): File &quot;/usr/local/lib/python3.11/site-packages/coinbase/jwt_generator.py&quot;, line 16, in build_jwt private_key = serialization.load_pem_private_key( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ValueError: Unable to load PEM file. See https://cryptography.io/en/latest/faq/#why-can-t-i-import-my-pem-file for more details. MalformedFraming During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;/workspace/app/services/coinbase_client.py&quot;, line 29, in get_accounts return client.get_accounts() ^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.11/site-packages/coinbase/rest/accounts.py&quot;, line 37, in get_accounts return ListAccountsResponse(self.get(endpoint, params=params, **kwargs)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.11/site-packages/coinbase/rest/rest_base.py&quot;, line 101, in get return self.prepare_and_send_request( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.11/site-packages/coinbase/rest/rest_base.py&quot;, line 199, in prepare_and_send_request headers = self.set_headers(http_method, url_path) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.11/site-packages/coinbase/rest/rest_base.py&quot;, line 255, in set_headers &quot;Authorization&quot;: f&quot;Bearer {jwt_generator.build_rest_jwt(uri, self.api_key, self.api_secret)}&quot;, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.11/site-packages/coinbase/jwt_generator.py&quot;, line 63, in build_rest_jwt return build_jwt(key_var, secret_var, uri=uri) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.11/site-packages/coinbase/jwt_generator.py&quot;, line 21, in build_jwt raise Exception( Exception: Unable to load PEM file. See https://cryptography.io/en/latest/faq/#why-can-t-i-import-my-pem-file for more details. MalformedFraming Are you sure you generated your key at https://cloud.coinbase.com/access/api ? </code></pre>
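<p>One thing worth checking (an assumption, not a confirmed fix): multi-line values like a PEM block often don't survive a plain <code>.env</code> file, so only the first line of the key may reach the process, which would explain the <code>MalformedFraming</code> error. A sketch of a common workaround is to store the key on a single line with <code>\n</code> escapes and restore the newlines before building the client:</p> <pre class="lang-py prettyprint-override"><code>import os

# .env (single line, hypothetical):
# COINBASE_API_SECRET=-----BEGIN EC PRIVATE KEY-----\nMHcCAQEE...\n-----END EC PRIVATE KEY-----\n
raw_secret = os.environ['COINBASE_API_SECRET']
COINBASE_API_SECRET = raw_secret.replace('\\n', '\n')  # turn literal \n back into real newlines
</code></pre>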
<python><coinbase-api>
2025-06-12 00:14:26
1
729
MrLister
79,662,641
3,452,708
Wrapper stripping the generic parameter of a function erases its type parameter
<p>In the following Python code, I define a generic function wrapper which takes a function of type <code>T → T</code> and replaces it by a function without arguments returning an instance of <code>Delay[T]</code>. This instance simply stores the original function so that it can be called later.</p> <pre class="lang-py prettyprint-override"><code>from collections.abc import Callable class Delay[T]: def __init__(self, wrapped: Callable[[T], T]): self.wrapped = wrapped def wrapper[T](wrapped: Callable[[T], T]) -&gt; Callable[[], Delay[T]]: def wrapping() -&gt; Delay[T]: return Delay(wrapped) return wrapping </code></pre> <p>When using this wrapper with a normal function, the type checker is happy:</p> <pre class="lang-py prettyprint-override"><code>@wrapper def fun1(arg: str) -&gt; str: return arg reveal_type(fun1) # mypy says: &quot;def () -&gt; Delay[builtins.str]&quot; reveal_type(fun1()) # mypy says: &quot;Delay[builtins.str]&quot; reveal_type(fun1().wrapped) # mypy says: &quot;def (builtins.str) -&gt; builtins.str&quot; reveal_type(fun1().wrapped(&quot;test&quot;)) # mypy says: &quot;builtins.str&quot; </code></pre> <p>However, when the wrapped function is generic, the type argument somehow gets erased:</p> <pre class="lang-py prettyprint-override"><code>@wrapper def fun2[T](arg: T) -&gt; T: return arg reveal_type(fun2) # mypy says: &quot;def () -&gt; Delay[Never]&quot; reveal_type(fun2()) # mypy says: &quot;Delay[Never]&quot; reveal_type(fun2().wrapped) # mypy says: &quot;def (Never) -&gt; Never&quot; reveal_type(fun2().wrapped(&quot;test&quot;)) # mypy says: &quot;Never&quot; </code></pre> <p>I would have expected the type checker to infer the type of <code>fun2</code> as <code>def [T] () -&gt; Delay[T]</code>, the type of <code>fun2().wrapped</code> as <code>def [T] (T) -&gt; T</code>, and the type of the last line as <code>str</code>.</p> <p>Note that pyright seems to exhibit similar behavior as mypy here.</p> <p>Is there something invalid with the type annotations in my code? Is this a known limitation of the Python type system, or a bug in mypy and pyright?</p>
<python><python-typing>
2025-06-11 20:39:44
1
495
matteodelabre
79,662,579
5,496,433
How to fill an existing numpy array with random normally distributed numbers
<p>I would like to generate many batches of random numbers. I only need access to one batch at a time. Naively, I could repeatedly call <code>np.random.randn(size)</code>, but that allocates an array each time. For performance, I would like to populate an existing array with something like <code>np.random.randn(size, out=arr)</code>, but that form of <code>randn</code> doesn't seem to exist. How can I fill an existing array with random numbers?</p>
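<p>One direction that looks promising (assuming the newer <code>Generator</code> API is acceptable): <code>Generator.standard_normal</code> takes an <code>out</code> argument, so the same buffer can be refilled for every batch.</p> <pre class="lang-py prettyprint-override"><code>import numpy as np

rng = np.random.default_rng()
batch = np.empty(1_000_000)        # allocate the buffer once

for _ in range(10):                # each batch overwrites the same memory
    rng.standard_normal(out=batch)
    # ... process `batch` here ...
</code></pre>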
<python><numpy><random>
2025-06-11 19:56:34
2
15,467
BallpointBen
79,662,555
8,533,290
Mosek Fusion - access value of variable after timeout
<p>I have an integer optimisation problem that I want to solve with Mosek-Fusion. Since it might take too long, I want to impose a timeout of 10 seconds. The program stops after 10 seconds, but how can I access the best solution that Mosek found until this point?</p> <p>When the program terminates normally, I get the solution via <code>x.level()</code>. But when it timeouts, I get an error.</p> <p>For simplicity, my MWE has a timeout of 1 second.</p> <pre><code>from mosek.fusion import * import mosek.fusion.pythonic import numpy as np import sys if __name__ == '__main__': M = Model() A = np.array(np.random.randint(-1,2, size = (128,256)), dtype = np.float64) x = M.variable(&quot;x&quot;, 256, Domain.binary()) b = np.array(np.random.randint(0,20, size = (128)), dtype = np.float64) c = np.random.random(size = 256) M.constraint(A @ x &lt;= b) M.objective(ObjectiveSense.Maximize, x.T @ c) M.setSolverParam(&quot;optimizerMaxTime&quot;, 1) M.setLogHandler(sys.stdout) M.solve() print(x.level()) </code></pre> <p>This yields the error</p> <pre><code>SolutionError: Solution status is Feasible but Optimal is expected. Reason: Accessing integer solution whose problem status is PrimalFeasible. </code></pre> <p>Sidenote: I want to iteratively add constraints to my problem, without rerunning it. As far as I know, changing to cvxpy therefore is not an option.</p> <p>How do I obtain the best feasible solution at the end of the timeout?</p>
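<p>If I read the Fusion API correctly, the relevant knob might be <code>acceptedSolutionStatus</code>, which tells the model to hand back merely feasible solutions instead of raising. A sketch, not verified against this exact version:</p> <pre class="lang-py prettyprint-override"><code>from mosek.fusion import AccSolutionStatus

# Tell Fusion that a feasible (not necessarily optimal) solution is acceptable
M.acceptedSolutionStatus(AccSolutionStatus.Feasible)

M.setSolverParam('optimizerMaxTime', 10)
M.solve()
print(x.level())   # best feasible point found before the time limit
</code></pre>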
<python><mosek>
2025-06-11 19:33:18
1
720
Hennich
79,662,405
1,773,367
Style a dataframe in notebook with indentation
<p>I have a table, which I have read from a database. I want to display the table in a jupyter notebook, showing the elements of totals and subtotals indented. I want the end result to look like this</p> <p><a href="https://i.sstatic.net/kcBqaGb8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kcBqaGb8.png" alt="enter image description here" /></a></p> <p>The data will come from the database in a format like this</p> <pre><code>from pandas import MultiIndex, DataFrame, Series index = MultiIndex.from_tuples( [ ('total', 'subtotal', 'a'), ('total', 'subtotal', 'b'), ('total', 'subtotal', 'c'), ('total', 'subtotal', 'd'), ('total', 'foo', 'foo'), ('total', 'bar', 'bar')], names=['module_1', 'module_2', 'module_3']) musd = [ 106.564338488296, 60.5686589737704, 311.695156173571, -90.3794796906721, 29.6147863260859, -49.0048344046974] # This is how the data as it is loaded from the database df = Series(musd, index=index, name='musd').to_frame() </code></pre> <p>I try to make a new dataframe from a dictionary where all subtotals and totals are added as separate rows, and the column names have the proper indentation.</p> <pre><code># Template indicating how to sum and the indentation template = DataFrame( [ (&quot;a&quot;, &quot;module_3&quot;, 4), (&quot;b&quot;, &quot;module_3&quot;, 4), (&quot;c&quot;, &quot;module_3&quot;, 4), (&quot;d&quot;, &quot;module_3&quot;, 4), (&quot;subtotal&quot;, &quot;module_2&quot;, 2), (&quot;foo&quot;, &quot;module_3&quot;, 2), (&quot;bar&quot;, &quot;module_3&quot;, 2), (&quot;total&quot;, &quot;module_1&quot;, 0)], columns=[&quot;name&quot;, &quot;sum_over&quot;, &quot;indent&quot;]) # Create dataframe where names are indentated, using space, and with the proper sums d = {f&quot;{' '*indent}{name}&quot;: sum(df.query(f&quot;{sum_over} == '{name}'&quot;)[&quot;musd&quot;]) for name, sum_over, indent in template.values} # Create dataframe from dictionary df_formated = DataFrame(d, index=[0]).T.reset_index().set_axis(['', 'musd'], axis=1) # I try to display the dataframe using the 'style' property of the dataframe df_formated.style.format(precision=1) </code></pre> <p>Which produces this</p> <p><a href="https://i.sstatic.net/fLTeLR6t.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fLTeLR6t.png" alt="enter image description here" /></a></p> <p>There is a lot of stuff missing here, lines, bold and italics etc. I hope to figure that out later. For now my biggest issue is that the columns are not indented. I have tried to add the indentation at the end of the string as well. But that does not work</p> <p>And for the record, I don't necessarily have to use ´dataframe.style´, any other solution is also welcome.</p>
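<p>One detail that may matter here (an assumption about the renderer, not a confirmed fix): the HTML that <code>Styler</code> emits collapses ordinary spaces, so leading blanks in the row labels simply vanish. A sketch that builds the labels with non-breaking spaces instead:</p> <pre class="lang-py prettyprint-override"><code>NBSP = '\u00a0'   # non-breaking space: survives HTML whitespace collapsing

d = {
    f'{NBSP * indent}{name}': sum(df.query(f&quot;{sum_over} == '{name}'&quot;)['musd'])
    for name, sum_over, indent in template.values
}

df_formated = DataFrame(d, index=[0]).T.reset_index().set_axis(['', 'musd'], axis=1)
df_formated.style.format(precision=1)
</code></pre>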
<python><dataframe><format><jupyter>
2025-06-11 17:13:47
1
2,911
mortysporty
79,662,371
1,551,832
Python 365 how to pull items from a list on sharepoint online
<p>I am trying to get the row data from a SharePoint list in Python.</p> <p>I am using the Office 365 REST Python client, and while I can get the field names, what I want after that is the row data, so that I can export it to Excel.</p> <p>Here is what I have from the example. I have been looking through the list of examples, but I'm not finding anything related to getting the field values and row data, much less moving them to Excel. How would I go about doing this?</p> <p>I have the site name and the view name.</p> <pre><code># Authentication credentials = ClientCredential(client_secret, client_id) site_url = &quot;https://company.sharepoint.com/sites/RiskAssessment&quot; list_title = &quot;Risk Assessment Workflow&quot; view_title = &quot;1.4. Assign Work&quot; ctx = ClientContext(site_url).with_credentials(credentials) list_object = ctx.web.lists.get_by_title(list_title) # 1. First request to retrieve view fields view_fields = ( list_object.views.get_by_title(view_title).view_fields.get().execute_query() ) # 2. Second request to retrieve fields fields = [ list_object.fields.get_by_internal_name_or_title(field_name).get() for field_name in view_fields ] ctx.execute_batch() # From performance perspective prefer execute_batch over execute_query here fields_json = {f.internal_name: f.title for f in fields} print(json.dumps(fields_json))</code></pre>
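<p>What I have been experimenting with (an untested sketch, so the exact calls may need adjusting) is pulling the list items after the field lookup and feeding only the view's columns to pandas for the Excel export; the output filename is hypothetical:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd

# 3. Third request: retrieve the list items themselves
items = list_object.items.get().execute_query()

# Keep only the columns that belong to the view, keyed by their display titles
rows = [{fields_json[name]: item.properties.get(name) for name in fields_json}
        for item in items]

pd.DataFrame(rows).to_excel('risk_assessment.xlsx', index=False)
</code></pre>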
<python>
2025-06-11 16:44:46
0
1,040
Gilbert V
79,662,237
5,454
How can I use web search with Gemini models in the google-generativeai package?
<p>I'm trying to utilize the <a href="https://pypi.org/project/google-generativeai/" rel="nofollow noreferrer">google-generativeai</a> Python package to call various Gemini models and also equip them with web search functionality. I can do something similar with OpenAI models and the <a href="https://pypi.org/project/openai/" rel="nofollow noreferrer">openai</a> package like this:</p> <pre class="lang-py prettyprint-override"><code>from openai import OpenAI client = OpenAI(api_key=&quot;...&quot;) response = client.responses.create( model=&quot;gpt-4o-mini&quot;, input=[{&quot;role&quot;:&quot;system&quot;,&quot;content&quot;:&quot;...&quot;}, {&quot;role&quot;:&quot;user&quot;,&quot;content&quot;:&quot;...&quot;}], tools=[{&quot;type&quot;: &quot;web_search_preview&quot;}] ) </code></pre> <p>That <code>web_search_preview</code> tool is what enables the OpenAI model to make web requests while answering your prompt. I assume that Gemini models have similar functionality, but I've been unable to find any documentation on it so far, and vibe coding hasn't helped either. Here's what I have so far:</p> <pre class="lang-py prettyprint-override"><code>from google.generativeai import configure, GenerativeModel configure(api_key=&quot;...&quot;) model = GenerativeModel(&quot;models/gemini-2.0-flash&quot;) response = model.generate_content( &quot;...&quot;, generation_config={&quot;temperature&quot;: 0.1}, # is there some tools option I can specify to enable web search? ) </code></pre>
<python><google-gemini><web-search>
2025-06-11 15:17:42
2
10,170
soapergem
79,661,884
2,123,706
filter polars dataframe using a variable
<p>I have a polars dataframe.</p> <p>I have created a complex filter and put it into a variable as a string.</p> <p>I want to use the variable to filter the polars dataframe, but receive the error: <code>TypeError: invalid predicate for filter</code>. Is there a way around this?</p> <p>MRE:</p> <pre><code>df=pd.DataFrame({'col1':[1,2,3,4,5], 'col2':[6,7,8,9,9], 'col3':[11,12,13,14,15]}) str_filter = 'col1 &gt; 3' df.query(str_filter) ### WORKS dp = pl.from_pandas(df) dp.filter(pl.col('col1')&gt;3) ### WORKS str_filter = &quot;pl.col('col1')&gt;3&quot; dp.filter(str_filter) ### ERROR TypeError: invalid predicate for `filter`: &quot;pl.col('col1')&gt;3&quot; </code></pre>
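<p>A small sketch of the direction I suspect is intended: keep the predicate as an actual <code>pl.Expr</code> object in the variable rather than a string, since <code>filter</code> accepts expressions but not query strings:</p> <pre class="lang-py prettyprint-override"><code>import polars as pl

expr_filter = pl.col('col1') &gt; 3        # build the predicate once, reuse it
dp.filter(expr_filter)                  # plays the role of df.query(), with an expression object

# More complex predicates compose the same way:
expr_filter = (pl.col('col1') &gt; 3) &amp; (pl.col('col2') &lt; 9)
dp.filter(expr_filter)
</code></pre>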
<python><dataframe><filter><python-polars>
2025-06-11 11:11:18
1
3,810
frank
79,661,870
577,288
subprocess.Popen terminate() not stopping batch file
<p>The following code is supposed to stop the batch file counting down to zero when a <code>KeyboardInterrupt</code> is raised, but it is not doing so.</p> <pre><code>import os import subprocess import time with open('test.bat', 'w', encoding='utf-8') as file: file.write('@echo off\nTIMEOUT /t 20') with open('test.txt', 'w', encoding='utf-8') as file: file.write('test file') p1 = None SW_HIDE = 1 info = subprocess.STARTUPINFO() info.dwFlags = subprocess.STARTF_USESHOWWINDOW info.wShowWindow = SW_HIDE p1 = subprocess.Popen('test.bat', startupinfo=info, creationflags=subprocess.CREATE_NEW_CONSOLE) print(f&quot;Subprocess started with PID: {p1.pid}&quot;) try: while p1.poll() is None: time.sleep(0.5) except KeyboardInterrupt as e: print('program ended') p1.terminate(); p1.wait() os.remove('test.txt') quit() print('continue code') os.remove('test.txt') </code></pre>
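<p>For what it's worth, a sketch of one workaround I have seen suggested (not verified here): kill the whole process tree, since <code>terminate()</code> only ends <code>cmd.exe</code> while the <code>TIMEOUT</code> child in the new console keeps counting:</p> <pre class="lang-py prettyprint-override"><code># Inside the KeyboardInterrupt handler, instead of p1.terminate():
subprocess.run(['taskkill', '/PID', str(p1.pid), '/T', '/F'])  # /T also kills child processes
p1.wait()
</code></pre>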
<python><subprocess><popen>
2025-06-11 10:57:49
1
5,408
Rhys
79,661,742
16,452,929
How to export plotly graphs in jupyter notebook as HTML?
<p>I am currently using PyCharm on Fedora. I have a jupyter notebook with plotly graphs. Following are my imports</p> <pre><code>import plotly.graph_objects as go fig = go.Figure( data=go.Scatter( x=simulation_results_df['expected_risk'], y=simulation_results_df['expected_return'], mode='markers', name = &quot;Simulated Portfolios&quot;, hovertemplate=&quot;Expected Volatility/Risk: %{x}&lt;br&gt;Expected Return: %{y}&lt;extra&gt;&lt;/extra&gt;&quot;, showlegend=True ) ) fig.update_layout( title=&quot;Simulated Portfolios&quot;, xaxis_title=&quot;Expected Volatility/Risk&quot;, yaxis_title=&quot;Expected Return&quot;, width=800, height=600, showlegend=False) fig.update_xaxes(showspikes=True) fig.update_yaxes(showspikes=True) fig.show() </code></pre> <p>The graph gets displayed just fine in the .ipynb file.</p> <p>I use the PyCharm option to export notebook as HTML. When I open the .html file in PyCharm browser I cannot see the graph. I have tried couple of methods such as specifying the renderer but nothing seems to work. Does this have anything to do with PyCharm? I see that some solutions have worked for others while using jupyterlab</p>
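<p>As a point of comparison, exporting a single figure directly works regardless of the notebook exporter (a sketch with a hypothetical output path; <code>include_plotlyjs='cdn'</code> keeps the file small but needs an internet connection when viewing):</p> <pre class="lang-py prettyprint-override"><code>fig.write_html('simulated_portfolios.html', include_plotlyjs='cdn', full_html=True)
</code></pre>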
<python><pycharm><plotly>
2025-06-11 09:40:02
1
517
CS1999
79,661,702
17,580,381
How to specify relevant columns with read_excel
<p>As far as I can tell, the following MRE conforms to the <a href="https://docs.pola.rs/api/python/dev/reference/api/polars.read_excel.html" rel="nofollow noreferrer">relevant documentation</a>:</p> <pre><code>import polars df = polars.read_excel( &quot;/Volumes/Spare/foo.xlsx&quot;, engine=&quot;calamine&quot;, sheet_name=&quot;natsav&quot;, read_options={&quot;header_row&quot;: 2}, columns=(1,2,4,5,6,7), # columns 0 and 3 are not needed ) print(df.head()) </code></pre> <p>The issue here is that the documentation states that for the <code>columns</code> parameter:</p> <blockquote> <p>Columns to read from the sheet; if not specified, all columns are read. Can be given as a sequence of column names or indices.</p> </blockquote> <p>Clearly, a tuple is a sequence. However, running this code results in an exception as follows:</p> <pre><code>_fastexcel.InvalidParametersError: invalid parameters: `use_columns` callable could not be called (TypeError: 'tuple' object is not callable) </code></pre> <p>Further research reveals that the required callable should return <code>bool</code>. So:</p> <pre><code>def colspec(c): print(type(c)) return True </code></pre> <p>I then change the <code>read_excel</code> call to include <code>columns=colspec</code>.</p> <p>The program now runs without exception and reveals that the parameter passed is a class of type <code>builtins.ColumnInfoNoDtype</code>.</p> <p>Unfortunately, I can't find any documentation for that type.</p> <p>Is the documentation wrong? How is one supposed to used <code>polars.read_excel</code> to load only certain specific columns?</p>
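<p>One further data point (an assumption about the underlying <code>fastexcel</code> dispatch, based on the error message): the sequence appears to be forwarded to <code>use_columns</code>, which special-cases <code>list</code> and <code>str</code> and otherwise assumes a callable, so a plain list may behave differently from a tuple:</p> <pre class="lang-py prettyprint-override"><code>import polars

df = polars.read_excel(
    '/Volumes/Spare/foo.xlsx',
    engine='calamine',
    sheet_name='natsav',
    read_options={'header_row': 2},
    columns=[1, 2, 4, 5, 6, 7],   # list instead of tuple
)
print(df.head())
</code></pre>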
<python><excel><dataframe><python-polars><polars>
2025-06-11 09:18:05
2
28,997
Ramrab
79,661,483
3,336,423
Conflicting OPENSSL versions under Linux
<p>I'm building a project linking with both QtNetwork (Qt6) and the Python library (3.8).</p> <p>At runtime, I get the error:</p> <pre><code>my_prg_bin: symbol lookup error: /lib64/libk5crypto.so.3: undefined symbol: EVP_KDF_ctrl, version OPENSSL_1_1_1b </code></pre> <p>After investigating, I could finally isolate the problem and find the root cause:</p> <ul> <li><code>ldd libQt6Network.so.6</code> shows it links to <code>libcrypto.so.1.1 =&gt; /lib64/libcrypto.so.1.1 (0x000014fdf898a000)</code></li> <li><code>ldd my_prg_bin</code> shows it links to <code>libcrypto.so.1.1 =&gt; /home/me/Python385/lin64/lib/libcrypto.so.1.1 (0x00001468b4898000)</code></li> </ul> <p>If I do <code>objdump -TC /lib64/libcrypto.so.1.1 | grep EVP_KDF</code>, I see some <code>OPENSSL_1_1_1b</code> entries. If I do <code>objdump -TC /home/me/Python385/lin64/lib/libcrypto.so.1.1 | grep EVP_KDF</code>, it's empty. Those two files are definitely not the same.</p> <p>My program does not specifically request to link with <code>libcrypto.so.1.1</code> at runtime, but I see that if I remove <code>target_link_libraries( my_prg_bin /home/me/Python385/lin64/lib/python3.8.so )</code> from its <code>CMakeLists.txt</code>, it does not link to <code>/home/me/Python385/lin64/lib/libcrypto.so.1.1</code> any more and the problem is gone.</p> <p><code>libQt6Network</code> was compiled by my system root. Python was downloaded via miniconda. Is there a way to fix these conflicting OpenSSL versions?</p> <p>I also see that, if I remove the <code>/home/me/Python385/lin64/lib/libcrypto.so.1.1</code> file from disk:</p> <ul> <li><code>ldd my_prg_bin</code> now shows it links to <code>libcrypto.so.1.1 =&gt; /lib64/libcrypto.so.1.1 (0x000014baaab6e000)</code></li> <li>No more crash occurs</li> <li><code>/home/me/Python385/lin64/bin/python</code> and my program apparently run fine (my program is able to run Python scripts without problem)</li> </ul> <p>Is this a &quot;safe&quot; workaround?</p>
<python><c++><qt><openssl><linker>
2025-06-11 06:24:34
1
21,904
jpo38
79,661,339
270,043
Pyspark aggregations optimization
<p>I have a huge dataframe with 3B rows. I'm running the PySpark code below with the Spark config.</p> <pre><code>spark = SparkSession\ .builder\ .appName(&quot;App&quot;)\ .config(&quot;spark.executor.memory&quot;,&quot;10g&quot;)\ .config(&quot;spark.executor.cores&quot;,&quot;4&quot;)\ .config(&quot;spark.executor.instances&quot;,&quot;6&quot;)\ .config(&quot;spark.sql.adaptive.enabled&quot;,&quot;true&quot;)\ .config(&quot;spark.dynamicAllocation.enabled&quot;,&quot;false&quot;)\ .enableHiveSupport()\ .getOrCreate() df = spark.read.parquet(&quot;/data&quot;) df = df.filter(col(&quot;colA&quot;).isNotNull() &amp; col(&quot;colB&quot;).isNotNull()) df = df.withColumn(&quot;colK_udf&quot;,udf_function(&quot;colK&quot;)) df_1 = df.withColumn(&quot;newCol&quot;, when((col(&quot;colA.field&quot;) == 1) &amp; (col(&quot;colB.field1&quot;) == 2), col(&quot;colA.field1&quot;)).otherwise(&quot;colB.field1&quot;)))\ ... df_1 = df1.select(...) df_agg = df_1.groupby(&quot;colA&quot;,&quot;colB&quot;,&quot;colC&quot;,&quot;colD&quot;).agg(count(*).alias(&quot;numRecords&quot;), sort_array(collected_set(&quot;colE&quot;)).alias(&quot;colE&quot;), sum(&quot;colF&quot;).alias(&quot;colF&quot;), sum(&quot;colG&quot;).alias(&quot;colG&quot;), sum(&quot;colH&quot;).alias(&quot;colH&quot;), sum(&quot;colL&quot;).alias(&quot;colL&quot;), min(&quot;colI&quot;).alias(&quot;colI&quot;), max(&quot;colJ&quot;).alias(&quot;colJ&quot;), countDistinct(&quot;colE&quot;).alias(&quot;colE&quot;), sort_array(collected_set(&quot;colP&quot;)).alias(&quot;colP&quot;), sort_array(collected_set(&quot;colQ&quot;)).alias(&quot;colQ&quot;), max(&quot;colR&quot;).alias(&quot;colR&quot;), max(&quot;colS&quot;).alias(&quot;colS&quot;) ) df_agg.count() </code></pre> <p>I tried the code on a smaller dataframe with only 100M rows, and it worked. However, when I ran it on a dataframe with 3B rows, I get the error below when executing the last <code>df_agg.count()</code>.</p> <pre><code>ERROR org.apache.spark.scheduler.TaskSchedulerImpl - Lost executor 1 on 2.2.2.2: ... The API gave the following message: Pod ephemeral local storage usage exceeds the total limit of containers 50Gi. </code></pre> <p>I have already increased the pod local usage from 30GB to 50GB, but I can't increase it indefinitely. I'll keep getting this message for other executors, and the number of failed tasks just keeps increasing.</p> <p>When I looked at the Spark UI, under the Details for the Job, I'll see <code>Input</code> going up to 50GB, and <code>Shuffle Write</code> going up to 150GB. When I simply let the program run for hours, the <code>Input</code> went up to 350GB, and <code>Shuffle Write</code> going up to 480GB, before I killed it.</p> <p>My data should be quite skewed too, with the top groups (after <code>groupby</code>) being much larger than the other groups.</p> <p>I've tried to <code>cache</code> <code>df_1</code> after <code>select</code>, but that didn't solve the problem.</p> <p>What else can I try?</p>
<python><dataframe><apache-spark><pyspark><aggregation>
2025-06-11 03:15:33
1
15,187
Rayne
79,661,336
1,403,955
Add custom label based on endpoint with prometheus_fastapi_instrumentator
<p>I am using prometheus_fastapi_instrumentator for my service. Now I am using <code>Instrumentator().instrument(app).expose(app, include_in_schema=False)</code> to get the basic metircs. But I want to add some custom labels in the metrics based on the endpoints. For example, I have some metrics like below:</p> <pre><code>http_requests_total{handler=&quot;/orders/{order_id}&quot;,method=&quot;GET&quot;,status=&quot;2xx&quot;} 99.0 http_requests_total{handler=&quot;/orders/electronic/{order_id}/items/&quot;,method=&quot;GET&quot;,status=&quot;4xx&quot;} 5.0 </code></pre> <p>Is it possible to add custom label based on the endpoint, so I could get something like below?</p> <pre><code>http_requests_total{handler=&quot;/orders/{order_id}&quot;,label=&quot;order&quot;,method=&quot;GET&quot;,status=&quot;2xx&quot;} http_requests_total{handler=&quot;/orders/electronic/{order_id}/items/&quot;,label=&quot;electronic&quot;,method=&quot;GET&quot;,status=&quot;4xx&quot;} </code></pre>
<python><fastapi><prometheus>
2025-06-11 03:13:31
0
679
wltz
79,661,148
13,944,524
How Does Connection Pooling Work In Django?
<p>If I'm not wrong, currently there are two ways to have connection pooling in Django:</p> <ul> <li>Native Connection Pooling <a href="https://docs.djangoproject.com/en/5.2/releases/5.1/#postgresql-connection-pools" rel="nofollow noreferrer">(Django 5.x)</a></li> <li>Using <a href="https://www.pgbouncer.org/" rel="nofollow noreferrer">PGBouncer</a></li> </ul> <p>I want to know that how connection pooling works behind the scene in Django.</p> <p>In FastAPI, there is one <em>&quot;permanent&quot;</em> process that handles all requests. We can have a connection pool using for example <code>asyncpg</code> driver. In its simplest form, it creates like 10 connections to the Postgresql database(using 10 unix socket connections), then when a coroutine request a connection, it gives it from the pool.</p> <pre class="lang-none prettyprint-override"><code>+-------------------------+ | +------------+ | ---------------&gt; +------------+ | | | | | | | FastAPI | Connection | | ---------------&gt; | Database | | | Pool | | | | | | (asyncpg) | | ---------------&gt; +------------+ | +------------+ | +-------------------------+ </code></pre> <p>But in Django, there is no single permanent process. If we use Gunicorn with sync workers, every worker instantiates a Django application to handle one request at a time. There is no shared memory between them.</p> <ol> <li><p>How can psycopg3 driver creates a connection pool in one worker and all other workers communicate with that?</p> </li> <li><p>I guess PGBouncer is a separate process that creates connection pool inside its memory, then all Django workers can communicate with that.</p> </li> </ol> <pre class="lang-none prettyprint-override"><code> Django worker ---\ -----\ +------------+ ---------------&gt; +------------+ -----\ | | | | Django worker ------------------&gt; | PBbouncer | ---------------&gt; | Database | -----/ | | | | -----/ +------------+ ---------------&gt; +------------+ Django worker ---/ </code></pre> <p>Am I right to say that the both ways of connection pooling in Django are in-direct and there is an extra latency for the intermediate process?</p>
<python><django><database><connection-pooling>
2025-06-10 21:39:50
1
17,004
S.B
79,661,119
836,318
pandas apply raw / numpy apply_along_axis with function that returns Optional
<p>Trying to understand behavior in Pandas <code>apply</code> with <code>raw=True</code> and underlying numpy <code>apply_along_axis</code>.</p> <p>(More context: the reason I'm using Pandas UDF with <code>raw</code> is to make a UDF within a PySpark job, with <code>@pandas_udf</code> for performance without overhead of creating <code>pd.Series</code> per row.)</p> <p>The following code will break with an error:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd from typing import Optional def func(a: int) -&gt; Optional[int]: if a % 3 == 0: return 1 if a % 3 == 1: return 0 else: return None df = pd.DataFrame([[1], [2], [3], [4], [5], [6]]) print(df.apply(lambda row: func(row[0]), axis=1, raw=True)) </code></pre> <p>Error:</p> <pre><code>TypeError: int() argument must be a string, a bytes-like object or a real number, not 'NoneType' </code></pre> <p>This is because the underlying numpy code creates a buffer as an array based on the <em>first</em> result of the function, which is dtype=int64 in this case, and can't handle saving a None to the array.</p> <p>Now, if the <em>very first</em> result from the function is <code>None</code>, numpy will create a buffer of dtype=object instead, and I get a valid result:</p> <pre><code>df = pd.DataFrame([[2], [3], [4], [5], [6]]) # output: 0 None 1 1 2 0 3 None 4 1 </code></pre> <p>Is this expected behavior -- can you just not use <code>apply</code> with <code>raw</code> if the applied Python function returns <code>Optional[int]</code>?</p>
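<p>For comparison, a variant of the same function that avoids the problem by keeping every return value a float, so the buffer numpy allocates from the first result can always hold the missing case:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd

def func_nan(a: int) -&gt; float:
    if a % 3 == 0:
        return 1.0
    if a % 3 == 1:
        return 0.0
    return np.nan   # float NaN instead of None

df = pd.DataFrame([[1], [2], [3], [4], [5], [6]])
print(df.apply(lambda row: func_nan(row[0]), axis=1, raw=True))
</code></pre>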
<python><pandas><numpy>
2025-06-10 20:55:29
1
18,970
wrschneider
79,661,090
3,821,009
Determine if value exists in previous rows
<p>I'm trying to do this:</p> <pre><code>import polars as pl df = pl.DataFrame({ 'j': [1, 2, 3, 4], 'k': [3, 1, 2, 2], }) df = df.with_row_index().with_columns([ pl.struct(['index', 'j']).map_elements( lambda x: df.slice(0, x['index'])['k'] .to_list().count(x['j']) &gt; 0, pl.Boolean ).alias('l') ]) print(df) shape: (4, 4) ┌───────┬─────┬─────┬───────┐ │ index ┆ j ┆ k ┆ l │ │ --- ┆ --- ┆ --- ┆ --- │ │ u32 ┆ i64 ┆ i64 ┆ bool │ ╞═══════╪═════╪═════╪═══════╡ │ 0 ┆ 1 ┆ 3 ┆ false │ │ 1 ┆ 2 ┆ 1 ┆ false │ │ 2 ┆ 3 ┆ 2 ┆ true │ │ 3 ┆ 4 ┆ 2 ┆ false │ └───────┴─────┴─────┴───────┘ </code></pre> <p>where each row in <code>l</code> is true if the value in <code>j</code> has been seen in previous rows in <code>k</code>.</p> <p>Is there a way to do this without escaping into python via <code>map_elements</code>?</p>
<python><python-polars>
2025-06-10 20:20:44
2
4,641
levant pied
79,661,060
2,174,845
Azure Functions: get Service Bus topic_name and subscription_name from environment or Application Settings
<p>When writing an Azure Function for handling Service Bus events in Python, one decorates the handler function with <code>@service_bus_topic_trigger</code> (<a href="https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-service-bus-trigger?tabs=python-v2%2Cisolated-process%2Cnodejs-v4%2Cextensionv5&amp;pivots=programming-language-python#example" rel="nofollow noreferrer">docs</a>):</p> <pre class="lang-py prettyprint-override"><code>@service_bus_topic_trigger(arg_name=&quot;msg&quot;, topic_name=&quot;TOPIC_NAME&quot;, subscription_name=&quot;SUBSCRIPTION_NAME&quot;, connection=&quot;CONNECTION_SETTING&quot;) def handler(msg): ... body goes here ... </code></pre> <p>The <code>connection</code> argument will typically hold a secret, and documentation encourages developers to pass an Argument Setting key rather than an explicit value. However, all the documentation examples for <code>topic_name</code> and <code>subscription_name</code> pass the values of the topic/subscription as explicit strings, not references to settings.</p> <p>The question is, does Azure support looking up topic/subscription name from application settings at runtime? Of course, we could query <code>os.environ</code> directly, but since the method and its decorator must be at module scope, this would require doing IO on module load, which is an anti-pattern.</p>
<python><azure-functions>
2025-06-10 19:49:40
1
511
MSmedberg
79,661,038
1,505,677
Unable to detect columns and rows when extracting a table
<p>I am trying to extract the information from the following but with no luck so far.</p> <p>Here is the <a href="https://drive.google.com/file/d/16hBSuoGiG7PMvP6VaXGhLVAaO6nIustp/view?usp=sharing" rel="nofollow noreferrer">link to the test PDF</a> that I am using</p> <p>I'm hoping that if I can come up with a strategy for this single page I can extrapolate that over a large collection set.</p> <pre><code>import pdfplumber import pandas as pd pdf_path = &quot;/Users/Blacky_1_2/Desktop/python/nominal_rolls/pdf/e011087784.pdf&quot; output_csv = &quot;/Users/Blacky_1_2/Desktop/python/nominal_rolls/csvs/2nd.csv&quot; # Open the PDF with pdfplumber.open(pdf_path) as pdf: p1 = pdf.pages[0] settings = { &quot;vertical_strategy&quot;: &quot;text&quot;, &quot;horizontal_strategy&quot;: &quot;text&quot;, &quot;snap_y_tolerance&quot;: 1, &quot;intersection_x_tolerance&quot;: 3, } table = p1.find_table(table_settings=settings) im = p1.to_image(resolution=150) im.reset().debug_tablefinder(settings) #table = pd.DataFrame(page_crop_new.extract_table(settings)) display(im) # show the debug image </code></pre> <p>I've tried messing around with various settings, but no matter which combination I choose the finding the table doesn't seem to be identifying the cutoff of one field with the next on a handful of the columns.</p> <p>If I use the vertical and horizontal_strategy as both &quot;text&quot; this is what the result looks like in the Jupyter Notebook:</p> <p><a href="https://i.sstatic.net/ocT3VFA4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ocT3VFA4.png" alt="enter image description here" /></a></p> <p>When I adjust the horizontal_strategy = &quot;lines&quot; in the settings it is easier to see where the separation of one column is compared to the next (which is obviously incorrect):</p> <p><a href="https://i.sstatic.net/bZyHEfSU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bZyHEfSU.png" alt="enter image description here" /></a></p> <p>I have tried to see if I could explicitly state the vertical lines as well as identify and disregard the headers, but no luck.</p> <p>I've played around with other modules (import fitz # PyMuPDF) but the regex seems to be having difficulty outside of the basic content and is also not finding the row contents cleanly.</p> <p>I realize the PDF is of an older scan and may be difficult to parse, but I feel like the quality is good enough to be able to extract the appropriate information.</p>
<python><jupyter-notebook><pdfplumber>
2025-06-10 19:25:57
0
353
M. Black
79,661,031
2,297,484
Sending Python chunks to terminal (not interactive or REPL) in VS Code
<p>I've switched to VSC over Sublime. I've figured out how to open a Terminal and then send line or selections of lines to the terminal by changing the keyboard preferences of Run Selected Text in Active Terminal to Ctrl + Enter.</p> <p><a href="https://i.sstatic.net/8Cxyc3TK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8Cxyc3TK.png" alt="enter image description here" /></a></p> <p>How do I send code chunks to the active terminal instead of the Interactive or REPL? I've installed Smart Send for Python, and now my code chunks have that nice Run Cell command, and if I hit Shift + Enter it runs. But I want to it to run in my active Terminal. If I try this, I get:</p> <p><a href="https://i.sstatic.net/bZZwkhqU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bZZwkhqU.png" alt="enter image description here" /></a></p> <p>where it tries to open an interactive terminal, which can't run Python.</p> <p><strong>Update</strong></p> <p>Here is some more clarity. I installed the Smart Execute extension. When I put the cursor on a code chunk (#%%) it sends it over to Jupyter notebook, like so, when I hit Shift+Enter:</p> <p><a href="https://i.sstatic.net/QsStfB4n.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QsStfB4n.png" alt="enter image description here" /></a></p> <p>But when I put my cursor on a single line and hit Shift + Enter, it goes nowhere. If I want to just put my cursor on a line and run it, I need to modify to use Ctrl+Enter and send to the Terminal. So I wind up with the 'Interactive' Jupyter notebook running code chunks on Shift+Enter, and a Terminal with IPython open running single lines using Ctrl+Enter.</p> <p>This is all too confusing for me to wind up with so many open terminals and shells. I don't use interactive HTML or tables in my coding. I just want to be able to hit some combination of keys and be able to send single lines, multiple highlighted lines, and code chunks to the same place.</p>
<python><visual-studio-code>
2025-06-10 19:21:14
0
1,966
Nate
79,661,028
13,971,251
Python doesn't register change in timezone after being started
<p>I have the a program running on a Raspberry Pi which, for the purpose of this question, can be boiled down to the following:</p> <pre class="lang-py prettyprint-override"><code>import datetime import subprocess #Change system timzone subprocess.run('sudo timedatectl set-timezone America/Toronto', shell=True) #Print date and time and timezone print(datetime.datetime.now()) </code></pre> <p>If I set my timezone to, let's say, Los Angeles, and then run this program, <em>even though subprocess successfully runs the command to change the system timezone</em>, the <code>print</code> command shows the time in the original timezone, as if it wasn't changed.</p> <p>Now, if run:</p> <pre class="lang-py prettyprint-override"><code>import datetime #Print date and time and timezone print(datetime.datetime.now()) </code></pre> <p>It shows the time with the changed timezone.</p> <p>Why does Python (seemingly) not register the timezone changing if it was done while Python was running (using <code>subprocess</code>), and how can I avoid this problem?</p>
<python><linux><datetime><raspberry-pi><subprocess>
2025-06-10 19:17:09
1
1,181
Kovy Jacob
79,661,002
3,294,994
Typing a callable with ParamSpec or no args (`Callable[P, Any] | Callable[[], Any]`)
<p>Here's a decorator that accepts a callable (<code>fn: Callable[P, Any]</code>) with the same signature as the function getting wrapped. It works and type checks.</p> <pre class="lang-py prettyprint-override"><code>import inspect from typing import Any, Callable, ParamSpec, TypeVar, Union P = ParamSpec(&quot;P&quot;) T = TypeVar(&quot;T&quot;) def deco1(fn: Callable[P, Any]) -&gt; Callable[[Callable[P, T]], Callable[P, T]]: def decorator(wrapped_fn: Callable[P, T]) -&gt; Callable[P, T]: def inner(*args: P.args, **kwargs: P.kwargs) -&gt; T: fn(*args, **kwargs) return wrapped_fn(*args, **kwargs) return inner return decorator def logger1(x: int, y: str) -&gt; None: print(f&quot;logger1: x={x}, y={y}&quot;) @deco1(logger1) def wrapped1(x: int, y: str) -&gt; None: ... wrapped1(1, &quot;test&quot;) # prints &quot;logger1: x=1, y=test&quot; </code></pre> <p>Here's a decorator that accepts a callable (<code>fn: Callable[[], Any]</code>) with no arguments. It also works and type checks.</p> <pre class="lang-py prettyprint-override"><code>def deco2(fn: Callable[[], Any]) -&gt; Callable[[Callable[P, T]], Callable[P, T]]: def decorator(wrapped_fn: Callable[P, T]) -&gt; Callable[P, T]: def inner(*args: P.args, **kwargs: P.kwargs) -&gt; T: fn() return wrapped_fn(*args, **kwargs) return inner return decorator def logger2() -&gt; None: print(f&quot;logger2&quot;) @deco2(logger2) def wrapped2(x: int, y: str) -&gt; None: ... wrapped2(1, &quot;test&quot;) # prints &quot;logger2&quot; </code></pre> <p>I'd like to combine <code>deco1</code> and <code>deco2</code> into a single function <code>deco3</code>. The following works at run-time, but I can't get the type checker to pass. Is this possible?</p> <pre class="lang-py prettyprint-override"><code>def deco3( fn: Callable[[], Any] | Callable[P, Any], ) -&gt; Callable[[Callable[P, T]], Callable[P, T]]: def decorator(wrapped_fn: Callable[P, T]) -&gt; Callable[P, T]: def inner(*args: P.args, **kwargs: P.kwargs) -&gt; T: if len(inspect.signature(fn).parameters): fn(*args, **kwargs) else: fn() # pyright error: Arguments for ParamSpec &quot;P@deco3&quot; are missing (reportCallIssue) return wrapped_fn(*args, **kwargs) return inner return decorator @deco3(logger1) def wrapped_with_logger1(x: int, y: str) -&gt; None: ... @deco3(logger2) # pyright error: Argument of type &quot;(x: int, y: str) -&gt; None&quot; cannot be assigned to parameter of type &quot;() -&gt; T@deco3&quot; def wrapped_with_logger2(x: int, y: str) -&gt; None: ... wrapped_with_logger1(1, &quot;test&quot;) # prints &quot;logger1: x=1, y=test&quot; wrapped_with_logger2(1, &quot;test&quot;) # prints &quot;logger2&quot; but pyright error: Expected 0 positional arguments (reportCallIssue) </code></pre>
<python><python-typing>
2025-06-10 18:54:48
1
846
obk
79,660,870
662,285
Azure AI Services for Phi-4-modal-instruct issue with audio - Audio Prompt Error: (Invalid input) invalid input error
<p>Audio Prompt Error: (Invalid input) invalid input error Code: Invalid input Message: invalid input error</p> <p>I am getting this above error while testing audio to text using Phi-4-modal-instruct in Azure AI foundry</p> <pre><code>import base64 import os from azure.ai.inference import ChatCompletionsClient from azure.ai.inference.models import SystemMessage, UserMessage, TextContentItem, ImageContentItem, AudioContentItem, InputAudio, AudioContentFormat import io from azure.core.credentials import AzureKeyCredential from azure.identity import DefaultAzureCredential from PIL import Image # Configuration ENDPOINT_URL = &quot;https://myaiservices.services.ai.azure.com/models&quot; MODEL_NAME = &quot;phi-4-multimodal-instruct&quot; AUDIO_PATH = &quot;harvard.wav&quot; # Path to a sample audio file def test_audio_prompt(client, audio_path): &quot;&quot;&quot;Test an audio input with a prompt.&quot;&quot;&quot; print(&quot;\n=== Testing Audio Prompt ===&quot;) try: # Open and encode the audio file to base64 with open(audio_path, &quot;rb&quot;) as audio_file: base64_audio = base64.b64encode(audio_file.read()).decode(&quot;utf-8&quot;) # Prepare the audio content audio_content = { &quot;type&quot;: &quot;audio&quot;, &quot;audio_data&quot;: base64_audio, &quot;mime_type&quot;: &quot;audio/wav&quot; } # Make the chat completions request response = client.complete( messages=[ SystemMessage(content=&quot;You are an AI assistant for translating and transcribing audio clips.&quot;), UserMessage(content=[ TextContentItem(text=&quot;Please print this audio snippet.&quot;), audio_content ]) ], model=MODEL_NAME ) # Print the response print(&quot;Audio Response:&quot;, response.choices[0].message.content) except FileNotFoundError: print(f&quot;Audio file not found: {audio_path}&quot;) except Exception as e: print(&quot;Audio Prompt Error:&quot;, str(e)) def main(): # Initialize the client client = ChatCompletionsClient( endpoint=ENDPOINT_URL, credential=DefaultAzureCredential(), credential_scopes=[&quot;https://cognitiveservices.azure.com/.default&quot;] ) test_audio_prompt(client, AUDIO_PATH) if __name__ == &quot;__main__&quot;: main() </code></pre>
<python><azure><azure-ai-foundry>
2025-06-10 17:03:15
1
4,564
Bokambo
79,660,754
4,096,572
scikit-build-core ignores .pyf files when building Python modules: why?
<p>I maintain <a href="https://github.com/johncoxon/tsyganenko" rel="nofollow noreferrer">a Python module here</a> and I'm having difficulty getting it to build correctly. I'm currently trying to switch from the old numpy distutils over to <code>scikit-build-core</code> on the <code>change-setup</code> branch.</p> <p>The following steps apparently successfully install the module and put a file called <code>geopack_tsyganenko.cpython-313-darwin.so</code> in my <code>site-packages</code>:</p> <pre><code>git clone https://github.com/johncoxon/tsyganenko cd tsyganenko git checkout change-setup pip install . </code></pre> <p>This results in the following in IPython:</p> <pre><code>In [1]: import geopack_tsyganenko In [2]: geopack_tsyganenko.igrf_gsw_08? Signature: geopack_tsyganenko.igrf_gsw_08(*args, **kwargs) Type: fortran String form: &lt;fortran function igrf_gsw_08&gt; Docstring: igrf_gsw_08(xgsw,ygsw,zgsw,hxgsw,hygsw,hzgsw) Wrapper for ``igrf_gsw_08``. Parameters ---------- xgsw : input float ygsw : input float zgsw : input float hxgsw : input float hygsw : input float hzgsw : input float </code></pre> <p>However, <a href="https://github.com/johncoxon/tsyganenko/blob/change-setup/src/tsyganenko/geopack.pyf" rel="nofollow noreferrer"><code>geopack.pyf</code></a> specifies that the <code>hxgsw</code>, <code>hygsw</code>, and <code>hzgsw</code> variables should be outputs, and they are not. This indicates that the build process is not seeing the <a href="https://github.com/johncoxon/tsyganenko/blob/change-setup/src/tsyganenko/geopack.pyf" rel="nofollow noreferrer"><code>geopack.pyf</code></a> file correctly.</p> <p>On the other hand, if I then run:</p> <pre><code>cd src/tsyganenko python -m numpy.f2py -c --f77flags=&quot;-w&quot; geopack.pyf geopack.f T96.f T02.f -m geopack_tsyganenko --lower </code></pre> <p>It creates <code>geopack_tsyganenko.cpython-313-darwin.so</code> in <code>src/tsyganenko</code> and then if I copy that file manually into my <code>site-packages</code> the following happens in IPython:</p> <pre><code>In [1]: import geopack_tsyganenko In [2]: geopack_tsyganenko.igrf_gsw_08? Signature: geopack_tsyganenko.igrf_gsw_08(*args, **kwargs) Type: fortran String form: &lt;fortran function igrf_gsw_08&gt; Docstring: hxgsw,hygsw,hzgsw = igrf_gsw_08(xgsw,ygsw,zgsw) Wrapper for ``igrf_gsw_08``. Parameters ---------- xgsw : input float ygsw : input float zgsw : input float Returns ------- hxgsw : float hygsw : float hzgsw : float </code></pre> <p>My question is, how do I need to change my <a href="https://github.com/johncoxon/tsyganenko/blob/change-setup/CMakeLists.txt" rel="nofollow noreferrer"><code>CMakeLists.txt</code></a> file to get the build process to take account of <a href="https://github.com/johncoxon/tsyganenko/blob/change-setup/src/tsyganenko/geopack.pyf" rel="nofollow noreferrer"><code>geopack.pyf</code></a> and compile correctly? 
At the moment, <a href="https://github.com/johncoxon/tsyganenko/blob/change-setup/CMakeLists.txt" rel="nofollow noreferrer"><code>CMakeLists.txt</code></a> looks like this:</p> <pre><code>cmake_minimum_required(VERSION 3.17.2...3.29) project(${SKBUILD_PROJECT_NAME} LANGUAGES C Fortran) find_package( Python COMPONENTS Interpreter Development.Module NumPy REQUIRED) set(CMAKE_VERBOSE_MAKEFILE ON) set(CMAKE_Fortran_FLAGS &quot;-w&quot;) # F2PY headers execute_process( COMMAND &quot;${PYTHON_EXECUTABLE}&quot; -c &quot;import numpy.f2py; print(numpy.f2py.get_include())&quot; OUTPUT_VARIABLE F2PY_INCLUDE_DIR OUTPUT_STRIP_TRAILING_WHITESPACE) add_library(fortranobject OBJECT &quot;${F2PY_INCLUDE_DIR}/fortranobject.c&quot;) target_link_libraries(fortranobject PUBLIC Python::NumPy) target_include_directories(fortranobject PUBLIC &quot;${F2PY_INCLUDE_DIR}&quot;) set_property(TARGET fortranobject PROPERTY POSITION_INDEPENDENT_CODE ON) # Define variables set(FORTRAN_PYF_FILE ${CMAKE_CURRENT_SOURCE_DIR}/src/tsyganenko/geopack.pyf) set(FORTRAN_SRC_FILE ${CMAKE_CURRENT_SOURCE_DIR}/src/tsyganenko/geopack.f) set(FORTRAN_T96_FILE ${CMAKE_CURRENT_SOURCE_DIR}/src/tsyganenko/T96.f) set(FORTRAN_T02_FILE ${CMAKE_CURRENT_SOURCE_DIR}/src/tsyganenko/T02.f) set(MODULE_NAME geopack_tsyganenko) set(F2PY_MODULE ${CMAKE_CURRENT_BINARY_DIR}/${MODULE_NAME}module.c) set(F2PY_WRAPPERS ${CMAKE_CURRENT_BINARY_DIR}/${MODULE_NAME}-f2pywrappers.f) add_custom_command( OUTPUT ${F2PY_MODULE} ${F2PY_WRAPPERS} DEPENDS ${FORTRAN_PYF_FILE} ${FORTRAN_SRC_FILE} ${FORTRAN_T96_FILE} ${FORTRAN_T02_FILE} VERBATIM COMMAND &quot;${PYTHON_EXECUTABLE}&quot; -m numpy.f2py &quot;${FORTRAN_PYF_FILE}&quot; &quot;${FORTRAN_SRC_FILE}&quot; &quot;${FORTRAN_T96_FILE}&quot; &quot;${FORTRAN_T02_FILE}&quot; -m ${MODULE_NAME} --lower ) python_add_library( geopack_tsyganenko MODULE &quot;${F2PY_MODULE}&quot; &quot;${F2PY_WRAPPERS}&quot; &quot;${FORTRAN_PYF_FILE}&quot; &quot;${FORTRAN_SRC_FILE}&quot; &quot;${FORTRAN_T96_FILE}&quot; &quot;${FORTRAN_T02_FILE}&quot; WITH_SOABI) target_link_libraries(${MODULE_NAME} PRIVATE fortranobject) install(TARGETS ${MODULE_NAME} DESTINATION .) </code></pre>
<python><numpy><cmake><f2py>
2025-06-10 15:39:16
1
605
John Coxon
79,660,435
186,202
How to use `alembic upgrade head` while requesting DB commit in between each file?
<p>While updating an ENUM TYPE with Alembic, I found myself blocked because the data migration couldn't use the new enum values without a commit between the migration files.</p> <p>I tried to force a commit inside the migration with no luck, and I finally ran alembic twice in order to work around it:</p> <pre><code>(alembic upgrade +1 &amp;&amp; alembic upgrade head) || exit 0 </code></pre> <p>This is not to my liking.</p> <p>ChatGPT suggests using a flag that doesn't exist, but it would be exactly what I'm looking for:</p> <pre><code>alembic upgrade --commit head </code></pre> <p>Do you know how I can run <code>alembic upgrade head</code> while ensuring that commits are made between each migration file?</p> <p>Refs <a href="https://stackoverflow.com/questions/65130629/new-enum-values-must-be-committed-before-they-can-be-used">New enum values must be committed before they can be used</a></p>
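<p>A sketch of one setting that sounds close to what I want, assuming the standard <code>env.py</code> layout: <code>transaction_per_migration</code>, which makes Alembic run each migration file in its own transaction instead of wrapping the whole upgrade in one:</p> <pre class="lang-py prettyprint-override"><code># env.py (sketch) -- inside run_migrations_online()
context.configure(
    connection=connection,
    target_metadata=target_metadata,
    transaction_per_migration=True,  # commit after every migration file
)
</code></pre>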
<python><postgresql><sqlalchemy><alembic>
2025-06-10 12:35:28
1
18,222
Natim
79,660,164
885,650
jax.numpy profiling: time spent in "ufunc_api.py:173(__call__)"
<p>I am analyzing my numpy/python code by running it with &quot;-m cProfile&quot;. Snakeviz shows the following as the entry with the most time spent:</p> <p>20895038 calls to <code>ufunc_api.py:173(__call__)</code>, with the majority of the execution time (tottime) spent there.</p> <p>ufunc obviously refers to <a href="https://numpy.org/doc/stable/reference/ufuncs.html" rel="nofollow noreferrer">https://numpy.org/doc/stable/reference/ufuncs.html</a></p> <p>What causes these calls and how can I hunt them down?</p> <p>Unfortunately, neither snakeviz nor gprof2dot shows the parent code calling it, possibly because I am using jax and numba in some parts of the code.</p>
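<p>One thing I am planning to try (a sketch, assuming the profile is first saved to a file with <code>python -m cProfile -o out.prof myscript.py</code>): <code>pstats</code> can print the callers of a specific entry, which may reveal the parent functions even when snakeviz does not surface them.</p> <pre><code>import pstats

# Assumes the profile was written with: python -m cProfile -o out.prof myscript.py
p = pstats.Stats('out.prof')
p.sort_stats('tottime')
# Print which functions invoke the ufunc wrapper; the argument is a filter pattern.
p.print_callers('ufunc_api.py')
</code></pre>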
<python><numpy><profiling><jax>
2025-06-10 09:30:09
1
2,721
j13r
79,659,981
11,339,315
How to use pyInstaller to package PyTorch code that includes JIT?
<p>The spec file I used is as follows, based on <a href="https://github.com/pyinstaller/pyinstaller/issues/6290" rel="nofollow noreferrer">this discussion</a>.</p> <pre><code># -*- mode: python ; coding: utf-8 -*- from PyInstaller.utils.hooks import collect_data_files datas = [] datas += collect_data_files('triton', include_py_files=True) datas += collect_data_files('torchvision', include_py_files=True) a = Analysis( ['python/sglang/launch_server.py'], pathex=[], binaries=[], datas=datas, hiddenimports=['triton'], hookspath=[], hooksconfig={}, runtime_hooks=[], excludes=[], noarchive=False, optimize=0, ) pyz = PYZ(a.pure) exe = EXE( pyz, a.scripts, [], exclude_binaries=True, name='launch_server', debug=False, bootloader_ignore_signals=False, strip=False, upx=True, console=True, disable_windowed_traceback=False, argv_emulation=False, target_arch=None, codesign_identity=None, entitlements_file=None, ) coll = COLLECT( exe, a.binaries, a.datas, strip=False, upx=True, upx_exclude=[], name='launch_server', ) </code></pre> <p>The packaging process had no errors, but the following error occurred at runtime:</p> <pre><code>Traceback (most recent call last): File &quot;sglang/launch_server.py&quot;, line 6, in &lt;module&gt; File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1027, in _find_and_load File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1006, in _find_and_load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 688, in _load_unlocked ... File &quot;PyInstaller/loader/pyimod02_importers.py&quot;, line 457, in exec_module File &quot;sglang/srt/layers/quantization/int8_utils.py&quot;, line 5, in &lt;module&gt; File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1027, in _find_and_load File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1006, in _find_and_load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 688, in _load_unlocked File &quot;PyInstaller/loader/pyimod02_importers.py&quot;, line 457, in exec_module File &quot;sglang/srt/layers/quantization/int8_kernel.py&quot;, line 21, in &lt;module&gt; File &quot;xxx/dist/launch_server/_internal/triton/runtime/jit.py&quot;, line 852, in jit return decorator(fn) File &quot;xxx/dist/launch_server/_internal/triton/runtime/jit.py&quot;, line 840, in decorator return JITFunction( File &quot;xxx/dist/launch_server/_internal/triton/runtime/jit.py&quot;, line 668, in __init__ self.starting_line_number = inspect.getsourcelines(fn)[1] File &quot;inspect.py&quot;, line 1121, in getsourcelines File &quot;inspect.py&quot;, line 958, in findsource OSError: could not get source code [PYI-195669:ERROR] Failed to execute script 'launch_server' due to unhandled exception! </code></pre> <p>It seems to be a JIT-related issue. Could it be that the JIT code in Triton, which exists as a string, was not packaged correctly? However, I have explicitly added Triton to the spec file and forced the loading of all Python files (through the <code>collect_data_files</code> interface). Are there any other methods to resolve this issue?</p> <p>The Python project I packaged is sglang, based on the Ubuntu environment, with PyInstaller version 6.14.1 and Python version 3.10.12.</p>
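<p>One direction I have not yet verified (an assumption on my part): recent PyInstaller versions accept a <code>module_collection_mode</code> argument to <code>Analysis</code>, which keeps the listed packages as plain <code>.py</code> files so that <code>inspect.getsourcelines</code> can read them at runtime. A sketch of the relevant change to the spec, to be merged with the existing <code>Analysis</code> arguments:</p> <pre><code># Relevant change in the .spec file (PyInstaller &gt;= 5.2): collect these
# packages as plain source files so inspect.getsourcelines() works at runtime.
a = Analysis(
    ['python/sglang/launch_server.py'],
    datas=datas,
    hiddenimports=['triton'],
    module_collection_mode={
        'triton': 'py',
        'sglang': 'py',   # assumption: the @triton.jit kernels live in sglang
    },
)
</code></pre>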
<python><pytorch><pyinstaller>
2025-06-10 07:17:35
1
779
Frontier_Setter
79,659,946
17,580,381
Pylint C0103 confusion
<p>Here's the MRE (<code>mre.py</code>):</p> <pre><code>&quot;&quot;&quot; MRE &quot;&quot;&quot; from functools import partial def func(_t, _c): &quot;&quot;&quot; NOOP &quot;&quot;&quot; for i in range(10): for j in range(10): t = &quot;Exit&quot; if i == 9 and j == 9 else f&quot;{i}, {j}&quot; c = partial(func, i, j) </code></pre> <p>Then:</p> <pre><code>python -m pylint mre.py ************* Module mre mre.py:16:8: C0103: Constant name &quot;t&quot; doesn't conform to UPPER_CASE naming style (invalid-name) ----------------------------------- Your code has been rated at 8.33/10 </code></pre> <p>Pylint seems to want the variable <code>t</code> (type str) to be uppercase whereas <code>c</code> (type partial) is OK. If I change <code>t</code> to <code>T</code>, Pylint gives a 10/10 score for this MRE.</p> <p>I have tried running Pylint with various <code>--*-naming-style</code> arguments to try to suppress this but to no avail.</p> <p>However, this seems inconsistent to me. Why is it OK for <code>c</code> to be lowercase and <code>t</code> not to be lowercase when the context in which they are used is the same? Sure, they're different types but is Pylint seriously suggesting that (local) variables of type str need to be declared as uppercase?</p> <p>I don't want to know how to suppress the warning, I want to understand Pylint's rationale.</p>
<python><pylint>
2025-06-10 07:01:18
0
28,997
Ramrab
79,659,677
686,334
Invalid request from Google API in python
<p>I am trying to write code to access the YouTube API in Python. I followed the sample code to get authenticated with the Google API. When I run this I get the message</p> <blockquote> <p>Access blocked: MyAPP request is invalid</p> </blockquote> <p>I have no idea why I get this. Any advice would be appreciated.</p> <pre><code>from googleapiclient.discovery import build from google_auth_oauthlib.flow import InstalledAppFlow # Replace with the path to your downloaded client_secret.json file CLIENT_SECRETS_FILE = 'client_secret.json' # Define the scopes your application needs SCOPES = ['https://www.googleapis.com/auth/youtube.readonly'] API_SERVICE_NAME = 'youtube' API_VERSION = 'v3' def get_authenticated_service(): flow = InstalledAppFlow.from_client_secrets_file(CLIENT_SECRETS_FILE, SCOPES) credentials = flow.run_local_server(open_browser=False) # Or flow.run_local_server() for web apps return build(API_SERVICE_NAME, API_VERSION, credentials = credentials) # Now you can use the authenticated service to make YouTube API calls youtube = get_authenticated_service() </code></pre>
<python><google-api>
2025-06-10 01:16:11
0
534
CrabbyPete
79,659,671
8,584,998
Long Running Python Program - Memory Usage Increasing Indefinitely
<p>I have a simple Python program designed to continuously plot data from a frequently-updated csv file, which is intended to run for months at a time. A simplified version of the program can be found below (there are additional pandas transformations done on the dataframe in my actual program than what I'm showing here, but this is the essence of the program):</p> <pre><code>import pandas import numpy as np import matplotlib.pyplot as plt from matplotlib.animation import FuncAnimation fig, ax = plt.subplots() x = np.arange(5) # x-axis: 5 bars bars = ax.bar(x, np.zeros_like(x)) ax.set_ylim(0, 100) ax.set_title(&quot;Real-Time Bar Chart&quot;) def get_data(): df = pandas.read_csv('data/ztest_data.csv', header=None) # roughly 5.39 MB CSV file with 3 columns: string, integer, integer df.columns = ['unit', 'value', 'selection'] df = df.loc[df['selection'] &lt;= 5] df = df.groupby(['selection']).agg(avg_value=('value','mean')).reset_index() df['avg_value']/= 100000000 return df def update(frame): df = get_data() new_heights = np.random.randint(0, 50, size=5) + np.array(df['avg_value']) for bar, height in zip(bars, new_heights): bar.set_height(height) return bars ani = FuncAnimation(fig, update, interval=1000*5, blit=False, cache_frame_data=False) plt.show() </code></pre> <p>I noticed that the program's memory usage continues to increase indefinitely, but I don't know why. Using <code>gc.collect()</code> at the end of the <code>update()</code> function doesn't resolve the issue. Here's an example of the memory usage as reported by Windows task manager after running for a few days:</p> <p><a href="https://i.sstatic.net/fzSDt1d6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fzSDt1d6.png" alt="enter image description here" /></a></p> <p>Considering I intend to leave this program running for months at a time, this is not ideal, although not immediately problematic. Am I doing anything wrong here? If not, do you have any recommendations for what I should do?</p> <p>I'm on Python 3.11.1 with Pandas version 1.5.2.</p> <p>Edit:</p> <p>It was requested I add the full program: I have added it below. 
The datafile can be found here: <a href="https://drive.google.com/file/d/16XRq3fXDNWcfvXTfLV2wO_stVrRd5oX7/view?usp=sharing" rel="nofollow noreferrer">https://drive.google.com/file/d/16XRq3fXDNWcfvXTfLV2wO_stVrRd5oX7/view?usp=sharing</a></p> <pre><code>import os import re import gc import glob import pandas import datetime import numpy as np import matplotlib.pyplot as mp import matplotlib.dates as md from matplotlib.animation import FuncAnimation DEBUG_SIMULATE_PLOT_UPDATE = False DEBUG_TOTAL_INTERVALS = 20 if DEBUG_SIMULATE_PLOT_UPDATE: REFRESH_EVERY = 2 else: REFRESH_EVERY = 5*60 YMIN, YMAX = (0.5, 22) AMOUNTS = [154093, 139803*1.05/1.1, 92139, 136400, 115000] mp.style.use('dark_background') active_j_nums = [1,2,3,4,5] jcols = {'J1':(1,0,0),'J2':(0,0.9,0),'J3':(0.9,0.9,0),'J4':(0,0,1),'J5':(0.3,0.3,0.3),'J6':(0.5,0,1),'J7':(1,0,1),'J8':(210/255,149/255,87/255)} def float_hours_to_hhmm(hours_float): hours = int(hours_float) minutes = int(round((hours_float - hours) * 60)) return f&quot;{hours:02d}:{minutes:02d}&quot; def get_data(ii): weekdays = ['Monday', 'Tuesday', 'Wendesday', 'Thursday', 'Friday'] datafile = 'ztest_data.csv' df = pandas.read_csv(datafile, header=None) ind = min(len(df), ii*len(df)//DEBUG_TOTAL_INTERVALS) if DEBUG_SIMULATE_PLOT_UPDATE: df = df.iloc[:ind] df = df.rename(columns={0:'name', 1:'time', 2:'selection'}) df = df.loc[df['name'] == 'HDMI_B08'] grouped = df.groupby((df['selection'] != df['selection'].shift()).cumsum(), as_index=False).agg( selection=('selection',min), mint=('time',min), maxt=('time',max), ) grouped['delta_secs'] = grouped['maxt'] - grouped['mint'] grouped['delta_mins'] = grouped['delta_secs']/60 x1 = grouped[['selection', 'mint']] x2 = grouped[['selection', 'maxt']] dff = pandas.concat([x1.rename(columns={'mint':'time'}), x2.rename(columns={'maxt':'time'})]).sort_values(['time', 'selection']) values = list(range(1,9)) for val in values: dff[f'J{val}'] = np.where(dff['selection']==val, 1, 0) def calc_cutoff(x, hr, mn): dt = datetime.datetime.fromtimestamp(x) day_start = datetime.datetime(dt.year, dt.month, dt.day, hr, mn, 0) return int(day_start.timestamp()) def calculate_valid_time(row): min_ba = min(row['maxt'], row['day_end']) max_ef = max(row['mint'], row['day_start']) return max(min_ba - max_ef, 0) grouped['day_start'] = grouped['mint'].apply(lambda ii: calc_cutoff(ii, 0, 10)) grouped['day_end'] = grouped['mint'].apply(lambda ii: calc_cutoff(ii, 11+12, 50)) grouped['valid_time_during_day'] = grouped.apply(calculate_valid_time, axis=1).astype(int) grouped['day'] = grouped['mint'].apply(lambda row: datetime.datetime.fromtimestamp(row).strftime('%A')) grouped['daysort'] = pandas.to_datetime(grouped['mint'].apply(lambda row: datetime.datetime.fromtimestamp(row).date())) for which in ['mint', 'maxt', 'day_start', 'day_end']: grouped[f'{which}_readable'] = grouped[which].apply(lambda row: datetime.datetime.fromtimestamp(row).strftime('%b %d, %Y, %I:%M:%S %p')) grouped['week_num'] = grouped['mint'].apply(lambda ii : ((vv:=datetime.datetime.fromtimestamp(ii).isocalendar())[0],vv[1])) agg_by_week = grouped.groupby(['week_num', 'selection'], as_index=False).agg( total_valid_hours = ('valid_time_during_day', lambda ii : np.sum(ii)/(60*60)), total_hours = ('delta_secs', lambda ii : np.sum(ii)/(60*60)), ) agg_by_week = grouped.groupby(['week_num', 'selection'], as_index=False).agg( total_valid_hours = ('valid_time_during_day', lambda ii : np.sum(ii)/(60*60)), total_hours = ('delta_secs', lambda ii : np.sum(ii)/(60*60)), ) week_num = 
agg_by_week['week_num'].iloc[0] missing_ids = [x for x in active_j_nums if x not in agg_by_week[&quot;selection&quot;].values] ll = len(missing_ids) new_rows = pandas.DataFrame({'week_num':[week_num for ii in range(ll)], &quot;selection&quot;: missing_ids, 'total_valid_hours':[0 for ii in range(ll)], 'total_hours':[0 for ii in range(ll)]}) agg_by_week = pandas.concat([agg_by_week, new_rows], ignore_index=True) agg_by_week = agg_by_week.loc[agg_by_week['selection'].isin(active_j_nums)].sort_values(by='selection') return agg_by_week if __name__ == '__main__': counter = 1 agg_by_week = get_data(counter) counter += 1 mp.rcParams['toolbar'] = 'None' fig, ax = mp.subplots(layout='constrained', figsize=(3.5, 6), dpi=80) ax.format_coord = lambda x, y: &quot;&quot; x = agg_by_week['selection'] heights = agg_by_week['total_hours'] bars = ax.bar(x, heights, color=list(jcols.values()), zorder=2) ymax = min(YMAX, max(YMIN, heights.max())) + 1 hline_values = [sum(heights) * pct / sum(AMOUNTS) for pct in AMOUNTS[:len(active_j_nums)]] hlines = [ax.axhline(y=v, color=jcol, linestyle='--', linewidth=1.5, zorder=3) for v, jcol in zip(hline_values, jcols.values())] ax.set_ylim(0, ymax) ax.set_yticks(np.arange(0, ymax, 0.5)) ax.grid(True, which='both', axis='y', color='gray', linewidth=0.5, zorder=1) ax2 = ax.twinx() ax2.set_ylim(ax.get_ylim()) ax2.set_yticks(ax.get_yticks()) ax2.set_yticklabels(ax.get_yticklabels()) thw = float_hours_to_hhmm(sum(heights)) title_text = &quot;Updated: {dt}\nAmount: {th}&quot; title = ax.set_title(title_text.format(dt = datetime.datetime.now().strftime('%b. %d, %Y at %I:%M %p'), th = thw), fontsize=10) def update(frame): global counter try: agg_by_week = get_data(counter) except Exception as e: return bars new_heights = agg_by_week['total_hours'] for bar, h in zip(bars, new_heights): bar.set_height(h) new_hline_values = [sum(new_heights) * pct / sum(AMOUNTS) for pct in AMOUNTS[:len(active_j_nums)]] for hline, new_val in zip(hlines, new_hline_values): hline.set_ydata([new_val]) thw = float_hours_to_hhmm(sum(new_heights)) title.set_text(title_text.format(dt = datetime.datetime.now().strftime('%b. %d, %Y at %I:%M %p'), th = thw)) new_ymax = min(YMAX, max(YMIN, new_heights.max())) + 1 ax.set_ylim(0, new_ymax) ax.set_yticks(np.arange(0, new_ymax, 0.5)) ax2.set_ylim(ax.get_ylim()) ax2.set_yticks(ax.get_yticks()) ax2.set_yticklabels(ax.get_yticklabels()) counter += 1 gc.collect() # testing if this fixes the memory growth issue (python process starts at consuuming 78 MB, over several days grows to consuming 130 MB) (2025-05-17 Update: It doesn't) return bars ani = FuncAnimation(fig, update, interval=1000*REFRESH_EVERY, blit=False, cache_frame_data=False) mp.show() </code></pre>
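<p>To at least locate where the growth comes from, I am considering instrumenting the update loop with <code>tracemalloc</code> (a sketch; how often to take a snapshot is arbitrary):</p> <pre><code>import tracemalloc

tracemalloc.start(10)                      # keep 10 frames of traceback per allocation
_baseline = tracemalloc.take_snapshot()

def report_memory_growth(top=10):
    # Call this from update() every so often to see which lines keep allocating.
    snapshot = tracemalloc.take_snapshot()
    for stat in snapshot.compare_to(_baseline, 'lineno')[:top]:
        print(stat)
</code></pre>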
<python><pandas><matplotlib><memory>
2025-06-10 00:57:50
1
1,310
EllipticalInitial
79,659,512
14,952,390
How to correctly use Psycopg3 COPY command?
<p>I have read in several places but I haven't been able to find the solution yet. I was using psycopg2 and its method copy_expert() to read a csv file and load it to a table, but I decided to switch to the newer version of the library psycopg v3.2.9. Reading their <a href="https://www.psycopg.org/psycopg3/docs/basic/copy.html#using-copy-to-and-copy-from" rel="nofollow noreferrer">docs</a> I was able to do this:</p> <pre><code>import psycopg from loguru import logger import os from dotenv import load_dotenv import pandas as pd from psycopg import sql load_dotenv() db_user: str = os.getenv(&quot;DB_USER&quot;, &quot;&quot;) db_password: str = os.getenv(&quot;DB_PASSWORD&quot;, &quot;&quot;) db_host: str = os.getenv(&quot;DB_HOST&quot;, &quot;&quot;) db_port: str = os.getenv(&quot;DB_PORT&quot;, &quot;&quot;) db_name: str = os.getenv(&quot;DB_NAME&quot;, &quot;&quot;) conn = psycopg.connect( host=db_host, dbname=db_name, port=db_port, user=db_user, password=db_password ) conn.autocommit = False file_location = &quot;my_file.csv&quot; try: with open(file_location, 'r', encoding='utf-8') as f: copy_sql = f&quot;&quot;&quot; COPY my_table (float_value, string_value_1, string_value_2, timestamp_value, str_value_index) FROM STDIN WITH (FORMAT csv, HEADER true, DELIMITER ','); &quot;&quot;&quot; with conn.cursor() as cur: cur.copy(copy_sql, f) self.conn.commit() except Exception as e: logger.error(f&quot;{e = }&quot;) conn.rollback() raise finally: con.close() </code></pre> <p>A sample from the csv is this:</p> <pre><code>&quot;float_value&quot;,&quot;string_value_1&quot;,&quot;string_value_2&quot;,&quot;timestamp_value&quot;,&quot;str_value_index&quot; &quot;1.0&quot;,&quot;asdasd&quot;,&quot;adasdbg&quot;,&quot;2025-06-06 04:00:06.982&quot;,&quot;88674ec12bffffff&quot; &quot;1.0&quot;,&quot;asfasf&quot;,&quot;nfgnfggg&quot;,&quot;2025-06-06 04:00:06.983&quot;,&quot;88674126e7ffffff&quot; </code></pre> <p>Executing that code does not raise any error but it finished instantly and it does not saves nothing on the table, the csv has 4 million rows so it takes a little while to copy on the database and it was working before with psycopg2 so I don't know what am I doing wrong.</p> <p>This is the structure from the table:</p> <pre><code>CREATE TABLE IF NOT EXISTS my_table ( float_value DOUBLE PRECISION NOT NULL, string_value_1 TEXT NOT NULL, string_value_2 TEXT NOT NULL, timestamp_value TIMESTAMPTZ NOT NULL, str_value_index TEXT NOT NULL, created_at_postgre TIMESTAMPTZ NOT NULL DEFAULT NOW(), expire_at TIMESTAMPTZ NOT NULL DEFAULT (NOW() + INTERVAL '3 minutes'), PRIMARY KEY (str_value_index, string_value_1, string_value_2, timestamp_value) ); </code></pre> <p>How can I find why my code is not copying the information from my csv to the table even though my code is based on the documentation and worked when I used psycopg2 and also works if I use the same file lo load it with psql (the cli tool).</p> <p>Edit: if I try this</p> <pre><code>with open(file_location, 'r', encoding='utf-8') as f: with cur.copy( &quot;&quot;&quot; COPY my_table (float_value, string_value_1, string_value_2, timestamp_value, str_value_index) FROM STDIN WITH (FORMAT csv, HEADER true, DELIMITER ','); &quot;&quot;&quot;, f ) as copy: while data := f.read(1024*1024): copy.write(data) conn.commit() </code></pre> <p>I get this error:</p> <pre><code> File &quot;path/to/file.py&quot;, line 90, in &lt;module&gt; with cur.copy( ^^^^^^^^^ File &quot;path/to/file.py&quot;, line 137, in __enter__ return next(self.gen) ^^^^^^^^^^^^^^ File 
&quot;path/to/file.py&quot;, line 260, in copy self._conn.wait(self._start_copy_gen(statement, params)) File &quot;path/to/file.py&quot;, line 414, in wait return waiting.wait(gen, self.pgconn.socket, interval=interval) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;psycopg_binary/_psycopg/waiting.pyx&quot;, line 213, in psycopg_binary._psycopg.wait_c File &quot;path/.venv/lib/python3.12/site-packages/psycopg/_cursor_base.py&quot;, line 392, in _start_copy_gen pgq.convert(statement, params) File &quot;path/.venv/lib/python3.12/site-packages/psycopg/_queries.py&quot;, line 257, in convert and len(vars) &lt;= MAX_CACHED_STATEMENT_PARAMS ^^^^^^^^^ TypeError: object of type '_io.TextIOWrapper' has no len() </code></pre>
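<p>For reference, this is the pattern I now believe the documentation intends (a sketch, not yet tested against my table): <code>cursor.copy()</code> takes only the statement and is used as a context manager, and the file contents are pushed through <code>copy.write()</code> rather than being passed as a parameter.</p> <pre><code>with open(file_location, 'r', encoding='utf-8') as f:
    with conn.cursor() as cur:
        with cur.copy(
            &quot;&quot;&quot;
            COPY my_table (float_value, string_value_1, string_value_2,
                           timestamp_value, str_value_index)
            FROM STDIN WITH (FORMAT csv, HEADER true, DELIMITER ',');
            &quot;&quot;&quot;
        ) as copy:
            # Stream the file through the COPY protocol in 1 MiB chunks.
            while data := f.read(1024 * 1024):
                copy.write(data)
conn.commit()
</code></pre>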
<python><postgresql><psycopg2><psycopg3>
2025-06-09 20:37:33
0
1,006
antusystem
79,659,409
2,153,235
What condition determines whether prettyprint (%pprint) prints a list vertically?
<p>In Spyder, <code>list(pd.DataFrame(range(22)).index)</code> causes row numbers to print horizontally but <code>list(pd.DataFrame(range(23)).index)</code> causes row numbers to print vertically. There is still plenty of horizontal space available. What determines the threshold for switching the orientation of the printout?</p> <p><a href="https://stackoverflow.com/questions/48713522">This Q&amp;A</a> was proposed as a duplicate, but it seems to deal with text wrapping, <em>not</em> whether a Series is printed in horizontal or vertical orientation.</p> <pre><code>&gt;&gt;&gt; list(pd.DataFrame(range(22)).index) Out[74]: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21] &gt;&gt;&gt; list(pd.DataFrame(range(23)).index) [0, 1, 2, &lt;...snip...&gt; 21, 22] </code></pre> <p>Issuing <code>pd.get_option('display.width')</code> confirms the default value of 80, but <code>pd.set_option('display.width',100)</code> does not avoid the vertical printout above, even after disabling and re-enabling <code>%pprint</code>.</p>
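<p>My current guess (unconfirmed) is that the threshold comes from IPython's pretty printer rather than pandas: the 22-item repr is 78 characters wide, the 23-item repr is 82, and IPython's plain-text formatter wraps at 79 by default. This is what I plan to test, assuming Spyder uses the standard <code>PlainTextFormatter</code>:</p> <pre><code># Inside the Spyder/IPython console:
ip = get_ipython()
formatter = ip.display_formatter.formatters['text/plain']
print(formatter.max_width)    # 79 by default, independent of pandas display.width
formatter.max_width = 200     # check whether 23 items now stay on one line
</code></pre>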
<python><pretty-print>
2025-06-09 19:11:54
0
1,265
user2153235
79,659,380
23,260,297
Create multiple rows based on column
<p>I have a column in my dataframe that is called delivery period. The delivery period is supposed to be in the format 'Month Year' (January 2025). It shows as the string &quot;2025-01-01 to 2025-12-31&quot;.</p> <p>I need to identify where this occurs and create a new row for each month with the same data. For instance:</p> <pre><code>Price Curve Text Delivery Period TEST TEST 2025-01-01 to 2025-12-31 some curve some text June 2025 some curve some text July 2025 some curve some text August 2025 some curve some text September 2025 some curve some text October 2025 some curve some text November 2025 some curve some text December 2025 </code></pre> <p>to</p> <pre><code>Price Curve Text Delivery Period TEST TEST January 2025 TEST TEST February 2025 TEST TEST March 2025 TEST TEST April 2025 TEST TEST May 2025 TEST TEST June 2025 TEST TEST July 2025 TEST TEST August 2025 TEST TEST September 2025 TEST TEST October 2025 TEST TEST November 2025 TEST TEST December 2025 some curve some text June 2025 some curve some text July 2025 some curve some text August 2025 some curve some text September 2025 some curve some text October 2025 some curve some text November 2025 some curve some text December 2025 </code></pre> <p>MRE:</p> <pre><code>data_dict_cols = { 'Price Curve': [ &quot;TEST&quot;, &quot;some curve&quot;, &quot;some curve&quot;, &quot;some curve&quot;, &quot;some curve&quot;, &quot;some curve&quot;, &quot;some curve&quot;, &quot;some curve&quot; ], 'Text': [ &quot;TEST&quot;, &quot;some text&quot;, &quot;some text&quot;, &quot;some text&quot;, &quot;some text&quot;, &quot;some text&quot;, &quot;some text&quot;, &quot;some text&quot; ], 'Delivery Period': [ &quot;2025-01-01 to 2025-12-31&quot;, &quot;June 2025&quot;, &quot;July 2025&quot;, &quot;August 2025&quot;, &quot;September 2025&quot;, &quot;October 2025&quot;, &quot;November 2025&quot;, &quot;December 2025&quot; ] } df = pd.DataFrame(data_dict_cols) </code></pre>
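<p>The rough approach I have in mind (assuming the range rows always span whole months, as in the example) is to turn the range string into a list of 'Month Year' labels and then <code>explode</code>:</p> <pre><code>def expand_period(period):
    # '2025-01-01 to 2025-12-31' -&gt; ['January 2025', ..., 'December 2025']
    if ' to ' in period:
        start, end = period.split(' to ')
        months = pd.date_range(start, end, freq='MS')   # month starts
        return [m.strftime('%B %Y') for m in months]
    return [period]

df['Delivery Period'] = df['Delivery Period'].apply(expand_period)
df = df.explode('Delivery Period', ignore_index=True)
</code></pre>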
<python><pandas>
2025-06-09 18:44:21
1
2,185
iBeMeltin
79,659,224
506,825
Calculating the closest intersection based upon latitude and longitude
<p>I have a geojson file containing the latitude and longitude coordinates of all of the streets and avenues in New York City - they're all formatted as either <code>LineString</code> and <code>MultiLineString</code> as follows:</p> <pre class="lang-json prettyprint-override"><code>{ &quot;type&quot;: &quot;FeatureCollection&quot;, &quot;features&quot;: [ { &quot;type&quot;: &quot;Feature&quot;, &quot;properties&quot;: { &quot;STATEFP&quot;: &quot;36&quot;, &quot;COUNTYFP&quot;: &quot;005&quot;, &quot;LINEARID&quot;: &quot;110391528508&quot;, &quot;FULLNAME&quot;: &quot;Longwood Ave&quot;, &quot;RTTYP&quot;: &quot;M&quot;, &quot;MTFCC&quot;: &quot;S1400&quot; }, &quot;geometry&quot;: { &quot;type&quot;: &quot;MultiLineString&quot;, &quot;coordinates&quot;: [ [ [ -73.900023, 40.818595 ], [ -73.899818, 40.81847 ] ], [ [ -73.899818, 40.81847 ], [ -73.899285, 40.818147 ], [ -73.898521, 40.817686 ], [ -73.897786, 40.817246 ], [ -73.897045, 40.816802 ], [ -73.896213, 40.816319 ], [ -73.895801, 40.816066 ], [ -73.895428, 40.815845 ], [ -73.895158, 40.815683 ], [ -73.894986, 40.815581 ], [ -73.894656, 40.815383 ], [ -73.894014, 40.814998 ], [ -73.893071, 40.814438 ], [ -73.891831, 40.813701 ], [ -73.891434, 40.813466 ], [ -73.890975, 40.813213 ] ] ] } }, { &quot;type&quot;: &quot;Feature&quot;, &quot;properties&quot;: { &quot;STATEFP&quot;: &quot;36&quot;, &quot;COUNTYFP&quot;: &quot;005&quot;, &quot;LINEARID&quot;: &quot;110391524085&quot;, &quot;FULLNAME&quot;: &quot;E 149th St&quot;, &quot;RTTYP&quot;: &quot;M&quot;, &quot;MTFCC&quot;: &quot;S1400&quot; }, &quot;geometry&quot;: { &quot;type&quot;: &quot;LineString&quot;, &quot;coordinates&quot;: [ [ -73.917625, 40.816055 ], [ -73.916771, 40.815699 ], [ -73.914954, 40.814937 ], [ -73.912954, 40.814269 ], [ -73.911812, 40.813883 ], [ -73.910948, 40.813621 ], [ -73.910019, 40.813432 ], [ -73.909075, 40.813233 ], [ -73.908159, 40.813044 ], [ -73.907245, 40.812855 ], [ -73.90633, 40.812667 ], [ -73.905413, 40.812476 ], [ -73.904466, 40.812282 ], [ -73.90418, 40.812125 ], [ -73.903417, 40.811507 ] ] } }, { &quot;type&quot;: &quot;Feature&quot;, &quot;properties&quot;: { &quot;STATEFP&quot;: &quot;36&quot;, &quot;COUNTYFP&quot;: &quot;005&quot;, &quot;LINEARID&quot;: &quot;110391528025&quot;, &quot;FULLNAME&quot;: &quot;Timpson Pl&quot;, &quot;RTTYP&quot;: &quot;M&quot;, &quot;MTFCC&quot;: &quot;S1400&quot; }, &quot;geometry&quot;: { &quot;type&quot;: &quot;LineString&quot;, &quot;coordinates&quot;: [ [ -73.906468, 40.809563 ], [ -73.906252, 40.809719 ], [ -73.90507, 40.810487 ], [ -73.903417, 40.811507 ], [ -73.901093, 40.81218 ], [ -73.899167, 40.812665 ] ] } }, </code></pre> <p>Given latitude and longitude, what would be the best way to find out the closest intersection of two streets (54th street &amp; 8th avenue, 23rd street and Park Avenue, etc.)</p> <p>I've tried the following using the Haversine formula but am not sure how to conceptualize this at a closer level.</p> <pre><code>import json from geopy.distance import geodesic from shapely.geometry import Point, shape def find_closest_point(geojson_file, target_lat, target_lon): with open(geojson_file, &quot;r&quot;) as f: geojson_data = json.load(f) target_point = (target_lat, target_lon) min_distance = float(&quot;inf&quot;) closest_feature = None for feature in geojson_data[&quot;features&quot;]: if ( feature[&quot;geometry&quot;][&quot;type&quot;] == &quot;MultiLineString&quot; or feature[&quot;geometry&quot;][&quot;type&quot;] == &quot;LineString&quot; ): coords = 
feature[&quot;geometry&quot;][&quot;coordinates&quot;] feature_point = (coords[1], coords[0]) distance = geodesic(target_point, feature_point).meters if distance &lt; min_distance: min_distance = distance closest_feature = feature return closest_feature geojson_file = &quot;nyc.geojson&quot; target_latitude = 40.748687497557995 # Times Square target_longitude = -73.98545265197755 closest_point = find_closest_point(geojson_file, target_latitude, target_longitude) if closest_point: print(&quot;Closest Feature:&quot;, closest_point) else: print(&quot;No points found in GeoJSON&quot;) </code></pre> <p><a href="https://i.sstatic.net/Fyd8TYaV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Fyd8TYaV.png" alt="enter image description here" /></a></p>
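<p>The direction I am leaning towards (a rough sketch, untested at city scale): compute actual geometric intersections between features that have different street names using shapely, then pick the intersection point closest to the target. The <code>STRtree</code> usage assumes shapely 2.x.</p> <pre><code>import json
from geopy.distance import geodesic
from shapely.geometry import shape
from shapely.strtree import STRtree

def closest_intersection(geojson_file, target_lat, target_lon):
    with open(geojson_file) as f:
        features = json.load(f)['features']

    geoms = [shape(ft['geometry']) for ft in features]
    names = [ft['properties'].get('FULLNAME') for ft in features]
    tree = STRtree(geoms)

    best = (float('inf'), None, None)   # (distance_m, (street_a, street_b), (lat, lon))
    for i, geom in enumerate(geoms):
        for j in tree.query(geom):      # features whose bounding box overlaps this one
            if j &lt;= i or names[i] == names[j]:
                continue                # skip self-pairs and segments of the same street
            crossing = geom.intersection(geoms[j])
            if crossing.is_empty:
                continue
            for pt in getattr(crossing, 'geoms', [crossing]):
                if pt.geom_type != 'Point':
                    continue
                d = geodesic((target_lat, target_lon), (pt.y, pt.x)).meters
                if d &lt; best[0]:
                    best = (d, (names[i], names[j]), (pt.y, pt.x))
    return best
</code></pre>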
<python><geolocation><geopandas><haversine>
2025-06-09 16:41:04
2
4,830
Lance
79,659,209
8,512,262
Programmatically querying/toggling Windows "Show my taskbar on all displays" via Python
<p>I have an application in which I need to (at least while it's running) show the taskbar on all Windows displays, so I'm looking for a way to programmatically toggle the Windows &quot;Show my taskbar on all displays&quot; setting via Python.</p> <p>I've been able to successfully query and toggle taskbar <em>auto-hiding</em> by using this:</p> <pre class="lang-py prettyprint-override"><code>import ctypes from ctypes import wintypes ABM_GETSTATE = 0x00000004 ABM_SETSTATE = 0x0000000A ABS_AUTOHIDE = 0x00000001 class APPBARDATA(ctypes.Structure): _fields_ = [ ('cbSize', ctypes.c_ulong), ('hWnd', wintypes.HWND), ('uCallbackMessage', ctypes.c_uint), ('uEdge', ctypes.c_uint), ('rc', wintypes.RECT), ('lParam', ctypes.c_int) ] def get_taskbar_state() -&gt; int: abd = APPBARDATA() abd.cbSize = ctypes.sizeof(APPBARDATA) return ctypes.windll.shell32.SHAppBarMessage(ABM_GETSTATE, ctypes.byref(abd)) def get_taskbar_autohide() -&gt; bool: return bool(get_taskbar_state() &amp; ABS_AUTOHIDE) def set_taskbar_autohide(auto_hide: bool) -&gt; bool: if auto_hide: new_state = get_taskbar_state() | ABS_AUTOHIDE else: new_state = get_taskbar_state() &amp; ~ABS_AUTOHIDE abd = APPBARDATA() abd.cbSize = ctypes.sizeof(APPBARDATA) abd.lParam = new_state result = ctypes.windll.shell32.SHAppBarMessage(ABM_SETSTATE, ctypes.byref(abd)) return bool(result) </code></pre> <p>But I haven't had similar luck finding a way to modify the other taskbar settings.</p> <p>So far I've tried adding this value to the Registry:</p> <pre class="lang-none prettyprint-override"><code>Computer\HKEY_CURRENT_USER\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Advanced </code></pre> <pre class="lang-none prettyprint-override"><code>MMTaskbarEnabled | REG_DWORD | 0x00000001 </code></pre> <p>But this requires either a reboot or restarting explorer.exe in order to take effect, which I don't want.</p> <p>With that off the table, I tried this:</p> <pre class="lang-py prettyprint-override"><code>from ctypes import WinDLL user32 = WinDLL('user32') HIDE = 0 SHOW = 5 hWnd = user32.FindWindowW('Shell_SecondaryTrayWnd', None) user32.ShowWindow(hWnd, SHOW) # HIDE works the same way </code></pre> <p>Which only works if &quot;Show my taskbar on all displays&quot; is already enabled, <em>and</em> doesn't gracefully resize any windows on the affected display(s) in the way that manually toggling the checkbox does.</p> <p>Additionally, neither of the above approaches seem to allow me to readily query the existing state - ideally I'd like to be able to restore whatever settings the user had before my app started.</p>
<python><ctypes><pywin32><taskbar><windows-11>
2025-06-09 16:31:32
0
7,190
JRiggles
79,658,932
7,281,675
Error while inferencing with onnx model on gpu
<pre><code>from optimum.onnxruntime import ORTModelForSequenceClassification from transformers import AutoTokenizer from optimum.pipelines import pipeline model = ORTModelForSequenceClassification.from_pretrained( &quot;t&quot;, provider=&quot;CUDAExecutionProvider&quot; ) tokenizer = AutoTokenizer.from_pretrained(&quot;tooka&quot;) pipe = pipeline(&quot;text-classification&quot;, model=model, tokenizer=tokenizer, accelerator=&quot;ort&quot;) print(pipe(&quot;This is great&quot;)) </code></pre> <p>This code throws:</p> <pre><code>RuntimeError: Error when binding input: There's no data transfer registered for copying tensors from Device:[DeviceType:1 MemoryType:0 DeviceId:0] to Device:[DeviceType:0 MemoryType:0 DeviceId:0] </code></pre> <p>It works fine on CPU. The available providers are: <code>['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']</code>. I have tried different versions of ONNX based on recommendations on the web, without success.</p>
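<p>Two things I still plan to try, neither of which I have verified: placing the pipeline on the GPU explicitly so the tokenized inputs end up on the device the IO binding expects, or keeping the CUDA provider but disabling IO binding.</p> <pre><code># Variant 1 (assumption): put the whole pipeline on the GPU so the tokenized
# inputs are moved to the same device the IO binding expects.
pipe = pipeline('text-classification', model=model, tokenizer=tokenizer,
                accelerator='ort', device='cuda:0')

# Variant 2 (assumption): keep the CUDA provider but disable IO binding.
model = ORTModelForSequenceClassification.from_pretrained(
    't', provider='CUDAExecutionProvider', use_io_binding=False
)
</code></pre>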
<python><huggingface-transformers><onnx>
2025-06-09 13:12:13
0
4,603
keramat
79,658,924
4,740,458
Why is the JAX and JAXOPT based code so slow?
<p>I am writing a <code>jax</code> and <code>jaxopt</code> based optimization. The same code takes ~3-4 seconds in <code>R</code>. I am not sure why this code takes ~90 seconds in <code>jax</code>. I am new to <code>python</code>, <code>jax</code> and <code>jaxopt</code>, any suggestions to include the code quality or speed will be much appreciated.</p> <p>Thank you to the community!</p> <p>The input file (.csv) is pasted below first.</p> <pre><code>ID,AMT,TIME,DV,WT,MDV,EVID 1,4.02,0,0.74,79.6,1,1 1,.,0.25,2.84,.,0,0 1,.,0.57,6.57,.,0,0 1,.,1.12,10.5,.,0,0 1,.,2.02,9.66,.,0,0 1,.,3.82,8.58,.,0,0 1,.,5.1,8.36,.,0,0 1,.,7.03,7.47,.,0,0 1,.,9.05,6.89,.,0,0 1,.,12.12,5.94,.,0,0 1,.,24.37,3.28,.,0,0 2,4.4,0,0,72.4,1,1 2,.,0.27,1.72,.,0,0 2,.,0.52,7.91,.,0,0 2,.,1,8.31,.,0,0 2,.,1.92,8.33,.,0,0 2,.,3.5,6.85,.,0,0 2,.,5.02,6.08,.,0,0 2,.,7.03,5.4,.,0,0 2,.,9,4.55,.,0,0 2,.,12,3.01,.,0,0 2,.,24.3,0.9,.,0,0 3,4.53,0,0,70.5,1,1 3,.,0.27,4.4,.,0,0 3,.,0.58,6.9,.,0,0 3,.,1.02,8.2,.,0,0 3,.,2.02,7.8,.,0,0 3,.,3.62,7.5,.,0,0 3,.,5.08,6.2,.,0,0 3,.,7.07,5.3,.,0,0 3,.,9,4.9,.,0,0 3,.,12.15,3.7,.,0,0 3,.,24.17,1.05,.,0,0 4,4.4,0,0,72.7,1,1 4,.,0.35,1.89,.,0,0 4,.,0.6,4.6,.,0,0 4,.,1.07,8.6,.,0,0 4,.,2.13,8.38,.,0,0 4,.,3.5,7.54,.,0,0 4,.,5.02,6.88,.,0,0 4,.,7.02,5.78,.,0,0 4,.,9.02,5.33,.,0,0 4,.,11.98,4.19,.,0,0 4,.,24.65,1.15,.,0,0 5,5.86,0,0,54.6,1,1 5,.,0.3,2.02,.,0,0 5,.,0.52,5.63,.,0,0 5,.,1,11.4,.,0,0 5,.,2.02,9.33,.,0,0 5,.,3.5,8.74,.,0,0 5,.,5.02,7.56,.,0,0 5,.,7.02,7.09,.,0,0 5,.,9.1,5.9,.,0,0 5,.,12,4.37,.,0,0 5,.,24.35,1.57,.,0,0 6,4.,0,0,80.,1,1 6,.,0.27,1.29,.,0,0 6,.,0.58,3.08,.,0,0 6,.,1.15,6.44,.,0,0 6,.,2.03,6.32,.,0,0 6,.,3.57,5.53,.,0,0 6,.,5,4.94,.,0,0 6,.,7,4.02,.,0,0 6,.,9.22,3.46,.,0,0 6,.,12.1,2.78,.,0,0 6,.,23.85,0.92,.,0,0 7,4.95,0,0.15,64.6,1,1 7,.,0.25,0.85,.,0,0 7,.,0.5,2.35,.,0,0 7,.,1.02,5.02,.,0,0 7,.,2.02,6.58,.,0,0 7,.,3.48,7.09,.,0,0 7,.,5,6.66,.,0,0 7,.,6.98,5.25,.,0,0 7,.,9,4.39,.,0,0 7,.,12.05,3.53,.,0,0 7,.,24.22,1.15,.,0,0 8,4.53,0,0,70.5,1,1 8,.,0.25,3.05,.,0,0 8,.,0.52,3.05,.,0,0 8,.,0.98,7.31,.,0,0 8,.,2.02,7.56,.,0,0 8,.,3.53,6.59,.,0,0 8,.,5.05,5.88,.,0,0 8,.,7.15,4.73,.,0,0 8,.,9.07,4.57,.,0,0 8,.,12.1,3,.,0,0 8,.,24.12,1.25,.,0,0 9,3.1,0,0,86.4,1,1 9,.,0.3,7.37,.,0,0 9,.,0.63,9.03,.,0,0 9,.,1.05,7.14,.,0,0 9,.,2.02,6.33,.,0,0 9,.,3.53,5.66,.,0,0 9,.,5.02,5.67,.,0,0 9,.,7.17,4.24,.,0,0 9,.,8.8,4.11,.,0,0 9,.,11.6,3.16,.,0,0 9,.,24.43,1.12,.,0,0 10,5.5,0,0.24,58.2,1,1 10,.,0.37,2.89,.,0,0 10,.,0.77,5.22,.,0,0 10,.,1.02,6.41,.,0,0 10,.,2.05,7.83,.,0,0 10,.,3.55,10.21,.,0,0 10,.,5.05,9.18,.,0,0 10,.,7.08,8.02,.,0,0 10,.,9.38,7.14,.,0,0 10,.,12.1,5.68,.,0,0 10,.,23.7,2.42,.,0,0 11,4.92,0,0,65.,1,1 11,.,0.25,4.86,.,0,0 11,.,0.5,7.24,.,0,0 11,.,0.98,8,.,0,0 11,.,1.98,6.81,.,0,0 11,.,3.6,5.87,.,0,0 11,.,5.02,5.22,.,0,0 11,.,7.03,4.45,.,0,0 11,.,9.03,3.62,.,0,0 11,.,12.12,2.69,.,0,0 11,.,24.08,0.86,.,0,0 12,5.3,0,0,60.5,1,1 12,.,0.25,1.25,.,0,0 12,.,0.5,3.96,.,0,0 12,.,1,7.82,.,0,0 12,.,2,9.72,.,0,0 12,.,3.52,9.75,.,0,0 12,.,5.07,8.57,.,0,0 12,.,7.07,6.59,.,0,0 12,.,9.03,6.11,.,0,0 12,.,12.05,4.57,.,0,0 12,.,24.15,1.17,.,0,0 </code></pre> <p>Code for optimization</p> <pre><code>## using LBFGSB from jaxopt for optimization import jax import jax.numpy as jnp import pandas as pd import jaxopt as jaxopt import time # import os # os.environ[&quot;JAX_ENABLE_X64&quot;] = &quot;True&quot; jax.config.update(&quot;jax_enable_x64&quot;, True) # Enable 64-bit precision # Calculate the sqrt of the inverse of a matrix @jax.jit def mat_sqrt_inv(mat): eigvals, eigvecs = 
jnp.linalg.eigh(mat) d2 = 1.0 / jnp.sqrt(jnp.abs(eigvals)) return eigvecs @ jnp.diag(d2) @ eigvecs.T # Calculate the PRED value @jax.jit def PRED(THETA, ETA, DATA): DOSE = 320.0 TIME = DATA[:, 1] # TIME values KA = THETA[0] * jnp.exp(ETA[0]) V = THETA[1] * jnp.exp(ETA[1]) K = THETA[2] * jnp.exp(ETA[2]) F = DOSE / V * KA / (KA - K) * (jnp.exp(-K * TIME) - jnp.exp(-KA * TIME)) G1 = ((DOSE / V / (KA - K) - DOSE / V * KA / (KA - K) ** 2) * (jnp.exp(-K * TIME) - jnp.exp(-KA * TIME)) + DOSE / V * KA / (KA - K) * (jnp.exp(-KA * TIME) * TIME)) * KA G2 = -(DOSE / V ** 2 * KA / (KA - K) * (jnp.exp(-K * TIME) - jnp.exp(-KA * TIME))) * V G3 = (DOSE / V * KA / (KA - K) ** 2 * (jnp.exp(-K * TIME) - jnp.exp(-KA * TIME)) - DOSE / V * KA / (KA - K) * (jnp.exp(-K * TIME) * TIME)) * K H1 = F H2 = jnp.ones_like(F) return jnp.stack([F, G1, G2, G3, H1, H2], axis=1) # Read data # Read data df = pd.read_csv(&quot;THEOPH.csv&quot;) df = df[df['EVID'] == 0] ID = df['ID'].unique() NETA = 3 NEPS = 2 selected_columns = ['ID', 'TIME', 'DV'] DATASET = jnp.array(df[selected_columns].values) _subject_data_list = [] for subject_id_val in sorted(ID): # Iterate through unique IDs in sorted order _subject_data_list.append(DATASET[DATASET[:, 0] == subject_id_val]) # Final NONMEM estimates THETA = jnp.array([2.8864797758451806, 33.693922520723312, 8.7260954693777815E-002]) OMEGA = jnp.array([ [0.88639937186299267, 0, 0], [0, 2.0903952408947064E-002, 0], [0, 0, 6.8919408612682198E-002] ]) SIGMA = jnp.array([ [9.9259732296116312E-003, 0], [0, 3.2931599977662498E-002] ]) # @jax.jit def obj_fn(params, DATASET): THETA = params[:3] OMEGA_diag = params[3:6] SIGMA_diag = params[6:8] OBJ = 0.0 for i in ID: # Select the data for the current subject DATA = _subject_data_list[i-1] # DATA = select_id(DATASET, i) ETA = jnp.zeros(NETA) FGH = PRED(THETA, ETA, DATA) Y = DATA[:, 2] # DV values Fpred = FGH[:, 0] G = FGH[:, 1:4] H = FGH[:, 4:6] OM = jnp.diag(OMEGA_diag) SG = jnp.diag(SIGMA_diag) RES = Y - Fpred COV = G @ OM @ G.T + jnp.diag(jnp.diag(H @ SG @ H.T)) # Add small value to diagonal for numerical stability COV += jnp.eye(COV.shape[0]) * 1e-8 # WRES = mat_sqrt_inv(COV) @ RES # OBJI = jnp.log(jnp.linalg.det(COV)) + WRES.T @ WRES OBJI = jnp.log(jnp.linalg.det(COV)) + RES @ jnp.linalg.solve(COV, RES) OBJ += OBJI # print(f&quot;Objective function: {OBJ}&quot;) return jnp.array(OBJ, dtype=jnp.float32) # JIT the objective function for performance obj_fn_jit = jax.jit(obj_fn) # Objective function from final NONMEM estimates # print(&quot;Objective function from final NONMEM estimates:&quot;) # print(obj_fn(jnp.concatenate([THETA, OMEGA.diagonal(), SIGMA.diagonal()]), DATASET)) print(obj_fn_jit(jnp.concatenate([THETA, OMEGA.diagonal(), SIGMA.diagonal()]), DATASET)) # Initial parameters initpar = jnp.array([ 3.0, 35.0, 0.09, # THETA 0.9, 0.02, 0.06, # OMEGA diagonal 0.01, 0.03 # SIGMA diagonal ]) # print(&quot;Objective function from initial parameters:&quot;) # print(obj_fn_jit(initpar, DATASET)) def obj_fn_grad(params, DATASET): # Compute both value and gradient of the objective function value, grad = jax.value_and_grad(obj_fn, argnums=0)(params, DATASET) return(value, grad) obj_fn_grad_jit = jax.jit(obj_fn_grad) # print(&quot;Objective function and gradient from initial parameters:&quot;) # print(obj_fn_grad_jit(initpar, DATASET)) # 2. 
Optimizer Initialization optimizer = jaxopt.LBFGSB( # fun=obj_fn_jit, # Objective function to minimize fun=obj_fn_grad_jit, # Objective function to minimize value_and_grad=True, # Return both value and gradient tol=1e-6, # Tolerance for convergence # maxiter=1000, # Maximum number of iterations # verbose=True, # Print optimization progress jit=True ) l_bound = jnp.array([ 0.0, # Lower bound for THETA[0] 0.0, # Lower bound for THETA[1] 0.0, # Lower bound for THETA[2] -10.0, # Lower bound for OMEGA[0,0] -10.0, # Lower bound for OMEGA[1,1] -10.0, # Lower bound for OMEGA[2,2] -10.0, # Lower bound for SIGMA[0,0] -10.0 # Lower bound for SIGMA[1,1] ]) u_bound = jnp.array([ 100.0, # Upper bound for THETA[0] 100.0, # Upper bound for THETA[1] 100.0, # Upper bound for THETA[2] 10.0, # Upper bound for OMEGA[0,0] 10.0, # Upper bound for OMEGA[1,1] 10.0, # Upper bound for OMEGA[2,2] 10.0, # Upper bound for SIGMA[0,0] 10.0 # Upper bound for SIGMA[1,1] ]) # 3. Step function for optimization # opt_state = optimizer.init(initpar) start_time = time.time() solution = optimizer.run(init_params=initpar, bounds=(l_bound, u_bound), DATASET=DATASET) execution_time = time.time() - start_time print(f&quot;Optimization completed in {execution_time:.4f} seconds.&quot;) print(&quot;Final parameter values:&quot;,solution.params) print(&quot;Final objective function value:&quot;, solution.state.value) ## To print the optimization progress step by step --------------------------- current_params = initpar # Initialize the optimizer's state. # DATASET and bounds are passed as they are needed by the objective function # or the LBFGSB algorithm during initialization and updates. # The fun (obj_fn_grad_jit) will be called here to get the initial value and grad. opt_state = optimizer.init_state(current_params, bounds=(l_bound, u_bound), DATASET=DATASET) max_total_updates = 1000 # Define a maximum number of total updates for our loop print_every_n_updates = 5 total_updates_done = 0 print(f&quot;Initial parameters: {current_params}&quot;) print(f&quot;Initial objective value: {opt_state.value:.6f}&quot;) if hasattr(opt_state, 'grad') and opt_state.grad is not None: print(f&quot;Initial gradient norm: {jnp.linalg.norm(opt_state.grad):.6e}&quot;) print(f&quot;Initial optimality error: {opt_state.error:.6e}&quot;) start_time = time.time() # Outer loop: controls how many times we print results for reporting_step in range(max_total_updates // print_every_n_updates + 1): if total_updates_done &gt;= max_total_updates: break # Inner loop: perform `print_every_n_updates` for _ in range(print_every_n_updates): if total_updates_done &gt;= max_total_updates: break current_params, opt_state = optimizer.update( params=current_params, state=opt_state, bounds=(l_bound, u_bound), DATASET=DATASET ) total_updates_done += 1 if opt_state.error &lt; optimizer.tol: # Check for convergence after each step break print(f&quot;\n--- After {total_updates_done} updates (Reporting step {reporting_step + 1}) ---&quot;) print(f&quot; Current parameters: {current_params}&quot;) print(f&quot; Objective value: {opt_state.value:.6f}&quot;) print(f&quot; Optimality error: {opt_state.error:.6e}&quot;) print(f&quot; Solver iterations (total LBFGSB updates): {opt_state.iter_num}&quot;) if opt_state.error &lt; optimizer.tol: print(f&quot;\nConvergence reached after {total_updates_done} updates.&quot;) break execution_time = time.time() - start_time print(f&quot;Optimization completed in {execution_time:.4f} seconds.&quot;) print(f&quot;Final parameter values: 
{solution.params}&quot;) print(f&quot;Final objective function value: {solution.state.value:.6f}&quot;) </code></pre>
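<p>One restructuring I am experimenting with (a sketch that reuses <code>PRED</code>, <code>NETA</code> and <code>_subject_data_list</code> from above, and assumes every subject has the same number of observations, which holds here with 11 samples each): replace the Python loop over subjects with <code>jax.vmap</code> over a stacked array so the traced graph stays small and the per-subject work is batched.</p> <pre><code># Stack the per-subject data into one (n_subjects, n_obs, 3) array.
STACKED = jnp.stack(_subject_data_list)        # works because every subject has 11 rows

def subject_obj(theta, omega_diag, sigma_diag, data):
    fgh = PRED(theta, jnp.zeros(NETA), data)
    y, fpred = data[:, 2], fgh[:, 0]
    g, h = fgh[:, 1:4], fgh[:, 4:6]
    res = y - fpred
    cov = g @ jnp.diag(omega_diag) @ g.T + jnp.diag(jnp.diag(h @ jnp.diag(sigma_diag) @ h.T))
    cov += jnp.eye(cov.shape[0]) * 1e-8
    _, logdet = jnp.linalg.slogdet(cov)
    return logdet + res @ jnp.linalg.solve(cov, res)

@jax.jit
def obj_fn_vmapped(params, stacked):
    theta, omega_diag, sigma_diag = params[:3], params[3:6], params[6:8]
    per_subject = jax.vmap(subject_obj, in_axes=(None, None, None, 0))(
        theta, omega_diag, sigma_diag, stacked)
    return jnp.sum(per_subject)

# e.g. obj_fn_vmapped(initpar, STACKED)
</code></pre>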
<python><jax>
2025-06-09 13:08:33
0
1,800
Satya