QuestionId          int64    74.8M to 79.8M
UserId              int64    56 to 29.4M
QuestionTitle       string   lengths 15 to 150
QuestionBody        string   lengths 40 to 40.3k
Tags                string   lengths 8 to 101
CreationDate        date     2022-12-10 09:42:47 to 2025-11-01 19:08:18
AnswerCount         int64    0 to 44
UserExpertiseLevel  int64    301 to 888k
UserDisplayName     string   lengths 3 to 30
79,607,373
12,349,734
How to exclude thinking steps when streaming from `google/gemini-2.5-flash-preview:thinking` via OpenRouter and OpenAI library?
<p>I am using the <a href="https://github.com/openai/openai-python" rel="nofollow noreferrer"><code>openai</code> Python library</a> to stream chat completions from the <a href="https://openrouter.ai/google/gemini-2.5-flash-preview:thinking" rel="nofollow noreferrer"><code>google/gemini-2.5-flash-preview:thinking</code></a> model via the <a href="https://openrouter.ai/docs/api-reference/overview" rel="nofollow noreferrer">OpenRouter API</a>.</p> <p>My goal is to get <strong>only</strong> the final assistant response, but this specific model variant (<code>:thinking</code>) prepends its internal thought process (e.g., &quot;Thinking...&quot;, &quot;Examining Request...&quot;, &quot;Locating References...&quot;) directly into the main content stream before the actual answer:</p> <p><a href="https://i.sstatic.net/iRAJ8yj8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iRAJ8yj8.png" alt="enter image description here" /></a></p> <p><strong>Example Code:</strong></p> <pre class="lang-py prettyprint-override"><code>from openai import OpenAI import os client = OpenAI( base_url=&quot;https://openrouter.ai/api/v1&quot;, api_key=os.environ.get(&quot;OPENROUTER_API_KEY&quot;), ) model_name = &quot;google/gemini-2.5-flash-preview:thinking&quot; messages_for_api = [ {&quot;role&quot;: &quot;system&quot;, &quot;content&quot;: &quot;You are a helpful assistant.&quot;}, {&quot;role&quot;: &quot;user&quot;, &quot;content&quot;: &quot;Explain gravity briefly.&quot;} ] try: response_stream = client.chat.completions.create( model=model_name, messages=messages_for_api, temperature=0.7, stream=True, # Note: Tried extra_body={'reasoning': {'exclude': True}} here, but it had no effect ) print(&quot;\nResponse Stream Chunks (Illustrative):&quot;) full_response = &quot;&quot; # Example of observed output stream structure: # Chunk: &quot;Thinking...&quot; # Chunk: &quot;Okay, the user wants to know about gravity.&quot; # ... for chunk in response_stream: if chunk.choices and chunk.choices[0].delta and chunk.choices[0].delta.content: content_part = chunk.choices[0].delta.content print(f&quot;Received Chunk Content: {content_part}&quot;) # Shows thinking steps first full_response += content_part print(f&quot;\n--- Final accumulated content (includes thinking steps) --- \n{full_response}&quot;) except Exception as e: print(f&quot;\nAn error occurred: {e}&quot;) </code></pre> <p><strong>What I've Tried:</strong></p> <ol> <li><p>Explicitly instructing the model in the system prompt <strong>not to output</strong> its thinking steps. This was ignored.</p> </li> <li><p>Passing <code>extra_body={'reasoning': {'exclude': True}}</code> in the <code>create</code> call, based on OpenRouter documentation <a href="https://openrouter.ai/docs/use-cases/reasoning-tokens" rel="nofollow noreferrer">for controlling &quot;Reasoning Tokens&quot;</a>. This also had no effect on the output from this specific model.</p> </li> </ol> <p><strong>Question:</strong></p> <p>Given that the thinking steps seem to be part of the main content stream for <code>google/gemini-2.5-flash-preview:thinking</code>, is there <strong>any way</strong> using the <code>openai</code> Python library and OpenRouter API parameters to <strong>prevent</strong> this specific model variant from outputting its <em>thinking steps?</em></p> <p>Or, is the only practical solution to achieve a clean output to switch to a different model endpoint without the <code>:thinking</code> suffix (e.g., <code>google/gemini-1.5-flash-latest</code>)?</p>
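<p>A minimal stream-filtering sketch (an assumption, not a confirmed fix: it relies on the provider exposing a separate <code>reasoning</code> field on the delta, which this particular <code>:thinking</code> variant may not do if it truly interleaves thoughts into <code>content</code>):</p> <pre class="lang-py prettyprint-override"><code>def collect_answer(response_stream):
    # Keep only answer tokens; skip any delta that carries reasoning.
    full_response = ''
    for chunk in response_stream:
        if not (chunk.choices and chunk.choices[0].delta):
            continue
        delta = chunk.choices[0].delta
        # 'reasoning' exists only for models/providers that separate
        # thinking tokens from the answer stream -- hence the getattr guard.
        if getattr(delta, 'reasoning', None):
            continue
        if delta.content:
            full_response += delta.content
    return full_response
</code></pre>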
<python><openai-api><google-gemini><openrouter>
2025-05-05 17:20:48
2
20,556
MendelG
79,607,292
6,281,366
dealing with reading/writing large files inside a k8s pod with a memory limit
<p>I have several applications that run inside a k8s pod, where the pod has a specific memory limit.</p> <p>I noticed that if I run a Python script that deals with a large file, e.g. unzipping a large example.zip, the pod memory usage really spikes. As I understand it, even if I use an unzip function that uses streaming, all the data of the large zip file will be written into the OS system cache, and the memory usage of the pod includes the page cache, so it can spike and cause OOM errors.</p> <p>When I have control over how I read the file, using Python, I can use <code>libc.posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED)</code> to drop the file from the cache and control the process so my cache doesn't explode. But if I use unzip, I have no such control.</p> <p>Is there a better, more general way to deal with large files inside a pod? Or is the only way to try to clear the cache with every application I use (posix_fadvise)?</p> <p>Edit: I was wrong; even if I read the zip file chunk by chunk, I am able to call posix_fadvise on the zip output and input file descriptors, so this solution works.</p> <p>On the other hand, I wonder if there isn't a more &quot;global&quot;, system-oriented solution rather than an application-level one?</p>
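<p>For the cases where the file is read from Python, a minimal sketch of the per-file (application-level) approach using only the standard library; it does not answer the broader question of a global, system-oriented control:</p> <pre class="lang-py prettyprint-override"><code>import os

def stream_file_without_caching(path, chunk_size=64 * 1024 * 1024):
    # Read a large file chunk by chunk, advising the kernel after each
    # chunk that the cached pages can be evicted (Linux only).
    fd = os.open(path, os.O_RDONLY)
    try:
        while True:
            chunk = os.read(fd, chunk_size)
            if not chunk:
                break
            # ... process the chunk here ...
            os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)
    finally:
        os.close(fd)
</code></pre>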
<python><linux><kubernetes><out-of-memory>
2025-05-05 16:22:01
0
827
tamirg
79,607,128
525,341
PyICU import halts Python interpreter on Windows 11 23H2
<p>I was experiencing problems similar to <a href="https://stackoverflow.com/questions/68349833/pip-cant-install-pyicu">well-known issues</a> while installing <code>PyICU</code>. By following <a href="https://stackoverflow.com/a/47444689/525341">community hints</a> I managed to install the corresponding wheel. Finally, after doing so (i.e. running <code>pip</code>), the path to the DLLs of the <code>icu</code> module was added to the system-wide <code>PATH</code> environment variable. Nevertheless, after trying to <code>import icu</code>, the interpreter seems to crash, given that the interactive session halts.</p> <pre><code>Microsoft Windows [Version 10.0.22631.4602] (c) Microsoft Corporation. All rights reserved. C:\Python313&gt; python.exe Python 3.13.3 (tags/v3.13.3:6280bb5, Apr 8 2025, 14:47:33) [MSC v.1943 64 bit (AMD64)] on win32 Type &quot;help&quot;, &quot;copyright&quot;, &quot;credits&quot; or &quot;license&quot; for more information. &gt;&gt;&gt; import icu C:\Python313&gt; </code></pre> <p>Any hints on how to troubleshoot and fix this install issue?</p> <p><strong>System Info</strong></p> <ul> <li><strong>OS Edition:</strong> Windows 11 Pro</li> <li><strong>OS Version:</strong> 23H2</li> <li><strong>OS build:</strong> 22631.4602</li> <li><strong>Arch:</strong> amd64</li> <li><strong>Python:</strong> 3.13</li> <li><strong>PyICU versions tried:</strong> <a href="https://github.com/cgohlke/pyicu-build/releases/download/v2.15/pyicu-2.15-cp313-cp313-win_amd64.whl" rel="nofollow noreferrer">v2.15</a>, <a href="https://github.com/cgohlke/pyicu-build/releases/download/v2.14/PyICU-2.14-cp313-cp313-win_amd64.whl" rel="nofollow noreferrer">v2.14</a> ... since only those list binary wheels for Python 3.13</li> </ul> <p>Disclaimer: I'm asking on Stack Overflow rather than reporting the error because the <a href="https://github.com/cgohlke/pyicu-build" rel="nofollow noreferrer">pyicu-build</a> project's issue tracker is disabled on GitHub.</p> <p>Thanks in advance for your help!</p>
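<p>Two low-effort diagnostics worth trying, sketched below under the assumption that the crash happens while loading a dependent DLL (the directory path is hypothetical; point it at wherever the wheel placed the ICU DLLs):</p> <pre class="lang-py prettyprint-override"><code>import faulthandler
import os

faulthandler.enable()  # print a native traceback if the interpreter dies hard

# Since Python 3.8, Windows no longer uses PATH to resolve the DLL
# dependencies of extension modules; directories must be registered.
os.add_dll_directory(r'C:\Python313\Lib\site-packages\icu')  # hypothetical path

import icu
print(icu.ICU_VERSION)
</code></pre>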
<python><python-3.x><icu>
2025-05-05 14:31:49
0
768
Olemis Lang
79,607,084
893,254
How do I create a 2-dimensional numpy array object with zero size?
<p>I want to create a 2-dimensional <code>numpy</code> array object to store image data.</p> <p>The way the image data will be constructed is by continually appending blocks of 2-dimensional data to this array.</p> <p>I tried this sequence of steps:</p> <ol> <li>Create an empty <code>numpy.array</code> with the correct data type for greyscale images using <code>image = numpy.array([], dtype=numpy.uint8)</code></li> <li>I tried to use <code>reshape</code> to convert it into something 2-dimensional, since supplying a list of lists <code>[[]]</code> to <code>numpy.array</code> did not seem to produce a 2-dimensional object. <code>image = numpy.reshape(image, shape=(28,28))</code> (Each image will be 28x28 pixels.)</li> <li>However, when trying to use <code>numpy.concatenate</code> to append smaller blocks of images to the output image, I encountered an error. <code>image = numpy.concatenate(image, sub_image)</code></li> </ol> <pre><code>TypeError: only integer scalar arrays can be converted to a scalar index </code></pre> <p>I am aware that using <code>concatenate</code> will repeatedly copy blocks of memory, and therefore this is not an efficient solution.</p> <p>After fixing the code:</p> <pre class="lang-py prettyprint-override"><code>numpy.concatenate((image, sub_image), axis=0) </code></pre> <pre><code>ValueError: all the input array dimensions except for the concatenation axis must match exactly, but along dimension 1, the array at index 0 has size 0 and the array at index 1 has size 28 </code></pre>
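<p>A sketch of both the direct fix and the usual idiom: give the empty array an explicit <code>(0, 28)</code> shape so the non-concatenation axis already matches, or collect the blocks in a list and stack once at the end:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np

# Zero rows, 28 columns: concatenation along axis 0 now works.
image = np.empty((0, 28), dtype=np.uint8)

sub_image = np.zeros((28, 28), dtype=np.uint8)  # stand-in block
image = np.concatenate((image, sub_image), axis=0)
print(image.shape)  # (28, 28)

# Avoids repeated copying: gather blocks, then stack once.
blocks = [np.zeros((28, 28), dtype=np.uint8) for _ in range(3)]
image = np.vstack(blocks)
print(image.shape)  # (84, 28)
</code></pre>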
<python><numpy>
2025-05-05 14:09:46
2
18,579
user2138149
79,607,054
2,361,979
Python not receiving UDP packets
<p>I have a Python application which wants to receive UDP packets like this:</p> <pre><code>with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s: s.bind((ip, port)) s.settimeout(0.1) while self.shall_run: try: data, addr = s.recvfrom(1024) self.packets.append(data) except socket.timeout: continue </code></pre> <p>With Wireshark I can observe the packets coming in from an external device, but somehow they never reach this Python code.</p> <p>It seems like Windows is dropping the packet for some reason, but I'm out of ideas why:</p> <ul> <li>Firewall settings are fine.</li> <li>The packets are smaller than MTU, just 200 bytes for example.</li> <li>IP and port are correct.</li> <li>I tried with IP &quot;0.0.0.0&quot;.</li> <li>The packets are not multicast.</li> <li>IP and UDP checksums are correct or disabled (zero).</li> <li>Adding some logging, I can see the timeout-continue loop works correctly.</li> <li>No exceptions are raised in the code, it terminates by shall_run getting set to False.</li> <li>Using scapy AsyncSniffer, I can receive the packets, but I don't want to rely on tcpdump.</li> <li>Wireshark shows TTL=64, so Windows is not dropping packets because of that. (Was a theory because there is a Hyper-V Ethernet setup)</li> </ul>
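<p>One way to split the problem in half is a loopback sender (values are hypothetical; match them to the receiver's bind address). If packets from this script arrive but the external ones do not, the receiving code is fine and the suspect is the path into Windows, e.g. the Hyper-V virtual switch or binding to the wrong interface:</p> <pre><code>import socket

ip, port = '127.0.0.1', 5005  # hypothetical; use the receiver's ip/port

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
    s.sendto(b'hello', (ip, port))
</code></pre>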
<python><windows><sockets><network-programming><udp>
2025-05-05 13:50:49
0
1,193
qznc
79,606,898
4,718,423
importlib resources files -> not a package, empty module
<p>I tend to always run the tests from inside the module folder, but this breaks the &quot;resources.files&quot; functionality, as it seems not to be able to find the module any more.</p> <p><strong>module folder</strong></p> <pre><code>my_package/ __init__.py package.py test_package.py </code></pre> <p><strong>package.py</strong></p> <pre><code>from importlib import resources def run(): print(&quot;running&quot;) resources.files(__package__) return True if __name__ == '__main__': run() </code></pre> <p><strong>test_package.py</strong></p> <pre><code>import unittest import package class TestRest(unittest.TestCase): def testSuccess(self): self.assertFalse(False) def testClassFound(self): self.assertIsNotNone(package.run()) </code></pre> <p><strong>test in module folder:</strong></p> <pre><code>$&gt; cd my_package my_package $&gt; python -m unittest test_package ... ValueError: empty module name </code></pre> <p>probably because the <code>__package__</code> attribute in the package.py file is no longer set when the test is run from inside the module folder?</p>
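<p>A sketch of the usual fix, assuming the goal is to keep <code>resources.files(__package__)</code> working: run the tests from the <em>parent</em> of the package directory, so that <code>package.py</code> is imported as part of the package and <code>__package__</code> is non-empty:</p> <pre><code>$&gt; cd ..                                # parent of my_package
$&gt; python -m unittest my_package.test_package
</code></pre> <p>with <code>test_package.py</code> using <code>from my_package import package</code> (or <code>from . import package</code>) instead of the bare <code>import package</code>.</p>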
<python><python-unittest><python-importlib>
2025-05-05 12:19:00
0
1,446
hewi
79,606,838
774,575
QListWidget drag and drop configuration: When to use the mode instead of the drag/drop properties?
<p>I'm learning how to setup drag and drop in the <a href="https://doc.qt.io/qt-6/model-view-programming.html#the-model-view-classes" rel="nofollow noreferrer">view-model framework description</a> at Qt site. When applied to convenience views (<code>QListWidget</code>, <code>QTableWidget</code>, <code>QTreeWidget</code>), <a href="https://doc.qt.io/qt-6/model-view-programming.html#using-convenience-views" rel="nofollow noreferrer">the documentation</a> uses either (in original C++ version):</p> <pre><code>listWidget-&gt;setDragEnabled(true); listWidget-&gt;viewport()-&gt;setAcceptDrops(true); </code></pre> <p>or:</p> <pre><code>listWidget-&gt;setDragDropMode(QAbstractItemView::InternalMove); </code></pre> <p>I'm unable to figure out whether <code>setDragDropMode</code> is a shortcut to set individual properties of the widget and which ones in addition of the two above, or it actually does more. I see using <code>QAbstractItemView::InternalMove</code> and <code>QAbstractItemView::DragDrop</code> both set <code>DragEnabled</code> and <code>setAcceptDrops</code> to <code>true</code>, but lead to a different behavior for the item (move vs. copy), so I know there is more behind the mode method. I would like to clarify this point in order to know when to use the mode method.</p> <p>My questions:</p> <ul> <li>Are the two approaches equivalent?</li> <li>Which properties/attributes are actually set by the mode method?</li> <li>What are the use cases for each one?</li> </ul> <p>I use Qt for Python if that matters.</p> <hr /> <p>As an example, if I create a list widget with each approach, it seems the result is perfectly equivalent (in this specific case):</p> <pre><code>from qtpy.QtWidgets import (QApplication, QWidget, QListWidget, QHBoxLayout) class Window(QWidget): texts = ['Sycamore', 'Chestnut', 'Walnut', 'Mahogany'] def __init__(self): super().__init__() lw_1 = QListWidget() lw_1.addItems(self.texts) self.print_props('When created', lw_1) lw_2 = QListWidget() lw_2.addItems(self.texts) # Comparing lw_1.setDragEnabled(True) lw_1.viewport().setAcceptDrops(True) lw_1.setDropIndicatorShown(True) self.print_props('Using properties', lw_1) # With mode = lw_2.DragDrop lw_2.setDragDropMode(mode) self.print_props(f'Using mode ({mode})', lw_2) layout = QHBoxLayout(self) layout.addWidget(lw_1) layout.addWidget(lw_2) def print_props(self, text, widget): print() print(text) print('drag:', widget.dragEnabled()) print('drop:', widget.viewport().acceptDrops()) print('mode:', widget.dragDropMode()) def main(): app = QApplication([]) window = Window() window.show() app.exec() main() </code></pre>
<python><c++><qt><drag-and-drop>
2025-05-05 11:35:34
1
7,768
mins
79,606,785
29,295,031
Select a range of data based on a selected value using Pandas
<p>I have a dataframe and I need to select a range of data based on a month value; the expected result should always show six rows in which the selected month appears in the filtered data. Here's the code:</p> <pre><code>import pandas as pd data = { &quot;function&quot;: [&quot;test1&quot;,&quot;test2&quot;,&quot;test3&quot;,&quot;test4&quot;,&quot;test5&quot;,&quot;test6&quot;,&quot;test7&quot;,&quot;test8&quot;,&quot;test9&quot;,&quot;test10&quot;,&quot;test11&quot;,&quot;test12&quot;], &quot;service&quot;: [&quot;A&quot;, &quot;B&quot;, &quot;AO&quot;, &quot;M&quot; ,&quot;A&quot;, &quot;PO&quot;, &quot;MP&quot;, &quot;YU&quot;, &quot;Z&quot;, &quot;R&quot;, &quot;E&quot;, &quot;YU&quot;], &quot;month&quot;: [&quot;January&quot;,&quot;February&quot;, &quot;March&quot;, &quot;April&quot;, &quot;May&quot;, &quot;June&quot;, &quot;July&quot;, &quot;August&quot;, &quot;September&quot;, &quot;October&quot;, &quot;November&quot;, &quot;December&quot;] } df = pd.DataFrame(data) selected_month = &quot;January&quot; selected_month_idx = df[df[&quot;month&quot;] == selected_month].index[0] six_months_indices = [i % len(df) for i in range(selected_month_idx - 2, selected_month_idx + 4)] six_months_df = df.loc[six_months_indices] # add .reset_index(drop=True) if needed print(six_months_df) </code></pre> <p>Output:</p> <pre><code> function service month 10 test11 E November 11 test12 YU December 0 test1 A January 1 test2 B February 2 test3 AO March 3 test4 M April </code></pre> <p>The issue with this code is that when I select <code>January</code>, for example, the order of the months is not good. What I expect is something like this:</p> <pre><code> function service month 0 test1 A January 1 test2 B February 2 test3 AO March 3 test4 M April 4 test5 A May 5 test6 PO June </code></pre> <p>When I select <code>December</code>, for example, the output should be:</p> <pre><code> function service month 6 test7 MP July 7 test8 YU August 8 test9 Z September 9 test10 R October 10 test11 E November 11 test12 YU December </code></pre> <p>When I select <code>October</code>, the output should be:</p> <pre><code> function service month 6 test7 MP July 7 test8 YU August 8 test9 Z September 9 test10 R October 10 test11 E November 11 test12 YU December </code></pre> <p>The order of the months displayed matters, always <code>min(month)</code> =&gt; <code>max(month)</code>; output like this is not expected:</p> <p>Output:</p> <pre><code> function service month 10 test11 E November 11 test12 YU December 0 test1 A January 1 test2 B February 2 test3 AO March 3 test4 M April </code></pre> <p>Could anyone please adjust the code above?</p> <p>Thanks</p>
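<p>Reading the expected outputs as &quot;clamp the window instead of wrapping around the year&quot;, a minimal sketch: aim at two months before through three after the selected one, then shift the window so it stays inside the twelve rows:</p> <pre><code>start = min(max(selected_month_idx - 2, 0), len(df) - 6)
six_months_df = df.iloc[start:start + 6]
print(six_months_df)
# January  -&gt; rows 0..5  (January..June)
# October  -&gt; rows 6..11 (July..December)
# December -&gt; rows 6..11 (July..December)
</code></pre>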
<python><pandas><dataframe>
2025-05-05 10:52:38
1
401
user29295031
79,606,757
3,248,423
How to invoke a Gemini fine-tuned model (via Vertex AI) in a Google ADK Agent
<p><strong>Problem Statement</strong>: I fine-tuned Google's Gemini Flash 2.0 model via GCP Vertex AI, but I am unable to invoke the endpoint of that fine-tuned model (trained on my own data) in the agent.py below. I have provided all the code I have so far:</p> <p>This is my agent.py:</p> <pre><code>import os from google.adk.agents import Agent from google.adk.tools import google_search from google.adk.tools.agent_tool import AgentTool # from .fine_tune_llama3_8b import FineTuneLlama3 # Assuming this is in the same directory # --- Set API Key --- os.environ[&quot;GOOGLE_API_KEY&quot;] = &quot;&lt;my_api_key&gt;&quot; # --- Root Agent --- root_agent = Agent( name=&quot;RootAgent&quot;, model=&quot;gemini-2.0-flash-exp&quot;, # model=&quot;gemini-2.0-flash-001&quot; description=&quot;AI Agent&quot;, instruction=f&quot;&quot;&quot; You are a helpful AI assistant. &quot;&quot;&quot;, tools=[google_search] ) # --- Root Agent for the Runner --- from google.genai import types from google.adk.sessions import InMemorySessionService from google.adk.runners import Runner # Instantiate constants APP_NAME = &quot;snack_creations_app&quot; USER_ID = &quot;12345&quot; SESSION_ID = &quot;123344&quot; # Session and Runner session_service = InMemorySessionService() session = session_service.create_session( app_name=APP_NAME, user_id=USER_ID, session_id=SESSION_ID ) runner = Runner( agent=root_agent, app_name=APP_NAME, session_service=session_service ) # Agent Interaction with Session for Context def call_agent(query): content = types.Content(role=&quot;user&quot;, parts=[types.Part(text=query)]) events = runner.run(user_id=USER_ID, session_id=SESSION_ID, new_message=content) final_response = None for event in events: if event.is_final_response(): final_response = event.content.parts[0].text print(&quot;Agent Response: &quot;, final_response) return final_response return final_response # Running Conversation print(&quot;Starting the interactive conversation (type 'quit' to exit).&quot;) while True: user_query = input(&quot;You: &quot;) if user_query.lower() == 'quit': break call_agent(user_query) print(&quot;\n&quot;) print(&quot;Conversation ended.&quot;) </code></pre> <p>This is the fine-tuned model code given by Vertex AI after fine-tuning:</p> <pre><code>from google import genai from google.genai import types import base64 def generate(): client = genai.Client( vertexai=True, project=&quot;9456734556734&quot;, location=&quot;us-central1&quot;, ) model = &quot;projects/9456734556734/locations/us-central1/endpoints/5797762931831894471&quot; contents = [ types.Content( role=&quot;user&quot;, parts=[ types.Part.from_text(text=&quot;What is the maximum temperature used in the modulated DSC protocol ?&quot;) ] ) ] generate_content_config = types.GenerateContentConfig( temperature = 0.7, top_p = 0.95, max_output_tokens = 8192, response_modalities = [&quot;TEXT&quot;], speech_config = types.SpeechConfig( voice_config = types.VoiceConfig( prebuilt_voice_config = types.PrebuiltVoiceConfig( voice_name = &quot;zephyr&quot; ) ), ), safety_settings = [types.SafetySetting( category=&quot;HARM_CATEGORY_HATE_SPEECH&quot;, threshold=&quot;OFF&quot; ),types.SafetySetting( category=&quot;HARM_CATEGORY_DANGEROUS_CONTENT&quot;, threshold=&quot;OFF&quot; ),types.SafetySetting( category=&quot;HARM_CATEGORY_SEXUALLY_EXPLICIT&quot;, threshold=&quot;OFF&quot; ),types.SafetySetting( category=&quot;HARM_CATEGORY_HARASSMENT&quot;, threshold=&quot;OFF&quot; )], ) for chunk in client.models.generate_content_stream( model = model, contents = contents, config = generate_content_config, ): print(chunk.text, end=&quot;&quot;) generate() </code></pre> <p>I tried invoking my fine-tuned model endpoint in the Agent, but it doesn't work. Is there any way I can use my model in agent.py? I am not sure whether that has been implemented by Google or not. Any help would be appreciated.</p>
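<p>A hedged workaround sketch: rather than pointing the Agent's <code>model</code> at the endpoint resource name (whether ADK accepts tuned-endpoint names directly is version-dependent), the fine-tuned model can be hidden behind a plain function tool, using only the <code>google-genai</code> calls already shown above:</p> <pre><code>from google import genai

def ask_finetuned_model(question: str) -&gt; str:
    &quot;&quot;&quot;Send a question to the fine-tuned Gemini endpoint on Vertex AI.&quot;&quot;&quot;
    client = genai.Client(vertexai=True, project='9456734556734',
                          location='us-central1')
    endpoint = 'projects/9456734556734/locations/us-central1/endpoints/5797762931831894471'
    response = client.models.generate_content(model=endpoint, contents=question)
    return response.text

# then, in the agent definition: root_agent = Agent(..., tools=[ask_finetuned_model])
</code></pre>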
<python><artificial-intelligence><google-cloud-vertex-ai><fine-tuning>
2025-05-05 10:33:55
0
653
Seth
79,606,665
9,471,909
Extract header from the first commented line in NumPy via numpy.genfromtxt
<p>My environment:</p> <pre><code>OS: Windows 11 Python version: 3.13.2 NumPy version: 2.1.3 </code></pre> <p>According to the <a href="https://numpy.org/doc/stable/user/basics.io.genfromtxt.html#the-comments-argument" rel="nofollow noreferrer">NumPy Fundamentals</a> guide describing how to use the <code>numpy.genfromtxt</code> function:</p> <blockquote> <p>The optional argument <code>comments</code> is used to define a character string that marks the beginning of a comment. By default, genfromtxt assumes <code>comments='#'</code>. The comment marker may occur anywhere on the line. Any character present after the comment marker(s) is simply ignored.</p> <p><strong>Note: There is one notable exception to this behavior: if the optional argument <code>names=True</code>, the first commented line will be examined for names.</strong></p> </blockquote> <p>To test the above-mentioned note (indicated in bold), I created the following data file, with the header as a commented line:</p> <p><strong>C:\tmp\data.txt</strong></p> <pre><code>#firstName|LastName Anthony|Quinn Harry|POTTER George|WASHINGTON </code></pre> <p>And the following program to read and print the content of the file:</p> <pre><code>with open(&quot;C:/tmp/data.txt&quot;, &quot;r&quot;, encoding=&quot;UTF-8&quot;) as fd: result = np.genfromtxt(fd, comments=&quot;#&quot;, delimiter=&quot;|&quot;, dtype=str, names=True, skip_header=0) print(f&quot;result = {result}&quot;) </code></pre> <p>But the result is not what I expected:</p> <pre><code>result = [('', '') ('', '') ('', '')] </code></pre> <p>I cannot figure out where the error is in my code, and I don't understand why the content of my data file, and in particular its header line after the comment indicator #, is not interpreted correctly.</p> <p>I'd appreciate it if you could kindly clarify.</p>
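<p>A workaround sketch that sidesteps the comment machinery entirely: read the commented header line manually, strip the <code>#</code>, and pass the names explicitly, so <code>genfromtxt</code> only ever sees the data rows:</p> <pre><code>import numpy as np

with open('C:/tmp/data.txt', 'r', encoding='UTF-8') as fd:
    # first line is the commented header: '#firstName|LastName'
    names = fd.readline().lstrip('#').strip().split('|')
    result = np.genfromtxt(fd, delimiter='|', dtype=str, names=names)

print(result['firstName'], result['LastName'])
</code></pre>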
<python><numpy><genfromtxt>
2025-05-05 09:32:54
3
1,471
user17911
79,606,651
587,587
How can I use a wx.FileDialog to select a file which is locked by another process
<p>I'm trying to use the wx.FileDialog class to select the <em>name</em> of a file. I don't want to open it. This is a minimal example of what I'm trying to do:</p> <pre><code>import wx if __name__ == '__main__': app = wx.App(redirect=False) frame = wx.Frame(None) frame.Show() dlg = wx.FileDialog(parent=frame, style=wx.FD_OPEN|wx.FD_FILE_MUST_EXIST, wildcard=&quot;Project (*.ap*)|*.ap*|All files (*.*)|*.*&quot;) dlg.ShowModal() app.MainLoop() </code></pre> <p><a href="https://i.sstatic.net/VAJKzKth.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VAJKzKth.png" alt="enter image description here" /></a></p> <p>The text is in Swedish and says &quot;File in use. Select a different name or close the file in use in another program&quot;.</p> <p>I'm trying to select the filename of a TIA Portal project file (Siemens PLC programming tool). Unfortunately it seems that TIA Portal locks the file. NOTE: I don't want to open the file. I just need the name.</p>
<python><windows><wxpython><wxwidgets>
2025-05-05 09:26:25
2
492
Anton Lahti
79,606,646
5,197,329
Using logical operators in pytest expected results
<p>I'm trying to develop pytest tests for a project, and while I'm not the most familiar with pytest, I feel like I have a fairly basic understanding.</p> <p>In this particular case I am testing some code that does route optimization, and I wish to implement a bunch of different tests to ensure that the code performs as it should.</p> <p>To help with this I have defined a dataclass which I want to use with pytest to basically specify what to expect for a given scenario.</p> <pre><code>@dataclass(slots=True) class VRPResults: &quot;&quot;&quot; A dataclass for storing the results of a VRP problem. &quot;&quot;&quot; solver_time: Optional[float] = None total_travel_time: Optional[int] = None route_lengths: Optional[list[int]] = None route_indices: Optional[list[list[int]]] = None def __post_init__(self): if self.route_lengths is not None and self.total_travel_time is None: self.total_travel_time = sum(self.route_lengths) def __eq__(self, other): if not isinstance(other, VRPResults): raise TypeError(f&quot;Cannot compare {type(self)} with {type(other)}&quot;) is_equal = True for field in fields(VRPResults): val = getattr(self, field.name) val_other = getattr(other, field.name) if val is None or val_other is None: continue if val_other != val: is_equal = False break return is_equal </code></pre> <p>The idea with this dataclass is that I can specify that for scenario 1, I know that the total travel time should be 10 minutes, while for scenario 2 I know that the route_indices should be the following [0,1,2] and the route_lengths should be [5,2,4]. So basically it is an easy framework for me to use to compare different scenarios' expected values with what my model produces. My code for using the above dataclass would look something like this:</p> <pre><code> vrp_data, vrp_results_expected = create_data_model() vrp_results_predicted = solve_vrp(vrp_data) assert vrp_results_expected == vrp_results_predicted, &quot;Solution does not match expected results&quot; </code></pre> <p>where <code>vrp_results_predicted</code> and <code>vrp_results_expected</code> are both instances of the above dataclass.</p> <p>The main problem is that this code only checks whether these parameters are equal or not. Instead I would like some way to specify how exactly it should evaluate a parameter.</p> <p>For instance, in scenario 3 I do not know what the actual best travel time is, but I would be happy with anything below 20 minutes. In order to accommodate this I'm thinking of adding additional parameters which specify the logical operator that should be used to evaluate the expressions, but I am not sure exactly how to add these logical operators in Python, and I'm wondering whether there is a better way of doing something like this. Maybe pytest has some clever tools available for this?</p>
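<p>One lightweight pattern (a sketch; <code>VRPExpectation</code> and its fields are hypothetical stand-ins for the dataclass above) is to let an expected field be either a concrete value or a predicate, i.e. any callable returning a bool. A scenario can then say &quot;anything under 20 minutes&quot; without a separate operator parameter:</p> <pre><code>from dataclasses import dataclass, fields
from typing import Callable, Optional, Union

@dataclass(slots=True)
class VRPExpectation:
    total_travel_time: Optional[Union[int, Callable[[int], bool]]] = None
    route_indices: Optional[Union[list, Callable[[list], bool]]] = None

    def matches(self, actual) -&gt; bool:
        for field in fields(self):
            expected = getattr(self, field.name)
            if expected is None:
                continue                      # unspecified: accept anything
            value = getattr(actual, field.name)
            if callable(expected):
                if not expected(value):       # predicate check
                    return False
            elif expected != value:           # plain equality check
                return False
        return True

# scenario 3: best travel time unknown, anything below 20 is acceptable
expectation = VRPExpectation(total_travel_time=lambda t: t &lt; 20)
assert expectation.matches(vrp_results_predicted)
</code></pre>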
<python><unit-testing><pytest>
2025-05-05 09:23:57
1
546
Tue
79,606,460
9,187,882
Wrapping a Python subprocess
<p>I would like to wrap an external process with a Python subprocess to add control over the input and output. I created a small wrapper program, and managed to change the input and output sent to the internal process when I need to. However, I couldn't return output without calling the inner process (see the TODO inside my code).</p> <pre><code>import asyncio import sys async def main(program: str, cmd_args): process = await asyncio.create_subprocess_exec( program, *cmd_args.split(&quot; &quot;), stdin=asyncio.subprocess.PIPE, stdout=asyncio.subprocess.PIPE, ) async def read_stdout(): while output := await process.stdout.readline(): sys.stdout.write(&quot; &quot; + output.decode(&quot;utf-8&quot;)) sys.stdout.flush() async def read_stdin(): while l := await asyncio.to_thread(sys.stdin.readline): yield l.strip() stdout_task = asyncio.create_task(read_stdout()) async for line in read_stdin(): if some_condition(line): # TODO - return output without calling the inner process pass else: process.stdin.write(line.encode(&quot;utf-8&quot;)) process.stdin.write(b&quot;\n&quot;) await process.stdin.drain() process.stdin.close() await stdout_task await process.wait() if __name__ == &quot;__main__&quot;: asyncio.run(main(sys.argv[1], sys.argv[2])) </code></pre>
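<p>At the TODO, a minimal sketch of answering locally is to write straight to the wrapper's own stdout and skip the child entirely (<code>make_local_response</code> is a hypothetical stand-in for whatever should be returned; <code>some_condition</code> is from the code above):</p> <pre><code>        # inside the `async for line in read_stdin():` loop
        if some_condition(line):
            sys.stdout.write(make_local_response(line) + '\n')
            sys.stdout.flush()
        else:
            process.stdin.write(line.encode('utf-8') + b'\n')
            await process.stdin.drain()
</code></pre>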
<python><subprocess><python-asyncio>
2025-05-05 07:04:38
1
828
Ori N
79,606,002
356,887
How to prevent keyboard event delay in a Python terminal app
<p>I'm making a python game with a terminal UI, using the <a href="https://blessed.readthedocs.io/en/latest/intro.html" rel="nofollow noreferrer">blessed</a> library. When I process a key held down, I receive the first key immediately, but then there's a delay between the first and subsequent keys received. Here is my simplified code:</p> <pre><code>from blessed import Terminal def point_generator(width, height): while True: for y in range(height): for x in range(width): yield (x, y) def main() -&gt; None: term = Terminal() with term.fullscreen(), term.cbreak(), term.hidden_cursor(): print(term.home + term.clear) point_gen = point_generator(term.width, term.height) while True: if(term.kbhit(0)): key = term.getch() point = next(point_gen) print(term.move_xy(*point) + &quot;0&quot;, end=&quot;&quot;, flush=True) if __name__ == &quot;__main__&quot;: main() </code></pre> <p>This code writes the &quot;0&quot; character to the screen in left-to-right, top-to-bottom order. When a key is held down, you can see the delay between the first and subsequent characters.</p> <p>I am running this inside a vs code linux devcontainer on Windows 11 on python 3.13.3.</p>
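<p>The initial gap is the operating system's key-repeat delay, which a terminal application cannot see past (only key-down repeats arrive, never key-up). A common workaround sketch is to treat a key as &quot;held&quot; until no repeat has been seen for some threshold and act every frame while it is held; the 0.5 s threshold is a guess to tune, and <code>point_generator</code> is the function from the question:</p> <pre><code>import time

from blessed import Terminal

def main() -&gt; None:
    term = Terminal()
    held, last_seen = False, 0.0
    with term.fullscreen(), term.cbreak(), term.hidden_cursor():
        print(term.home + term.clear)
        point_gen = point_generator(term.width, term.height)
        while True:
            if term.kbhit(0):
                term.getch()
                held, last_seen = True, time.monotonic()
            elif held and time.monotonic() - last_seen &gt; 0.5:
                held = False          # no repeat for a while: treat as released
            if held:
                point = next(point_gen)
                print(term.move_xy(*point) + '0', end='', flush=True)
            time.sleep(1 / 60)        # frame tick
</code></pre>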
<python><windows><keyboard-events>
2025-05-04 20:02:49
0
10,163
xdhmoore
79,605,930
9,626,243
How to create a simple WebRTC server-to-browser stream example in Python?
<p>I'm trying to write a simple Python server-to-browser video streamer using aiortc. For simplicity, the server and the browser are on one local network.</p> <p>The Python code:</p> <pre><code>import asyncio from aiortc import RTCPeerConnection, RTCSessionDescription, VideoStreamTrack from av import VideoFrame import numpy as np import uuid import cv2 from aiohttp import web class BouncingBallTrack(VideoStreamTrack): def __init__(self): super().__init__() self.width = 640 self.height = 480 [...] async def recv(self): [...] return new_frame async def offer(request): params = await request.json() offer = RTCSessionDescription(sdp=params[&quot;sdp&quot;], type=params[&quot;type&quot;]) pc = RTCPeerConnection() # Add local media track pc.addTrack(BouncingBallTrack()) await pc.setRemoteDescription(offer) answer = await pc.createAnswer() await pc.setLocalDescription(answer) return { &quot;sdp&quot;: pc.localDescription.sdp, &quot;type&quot;: pc.localDescription.type } if __name__ == &quot;__main__&quot;: app = web.Application() app.router.add_post(&quot;/offer&quot;, offer) app.router.add_static('/', path='static', name='static') web.run_app(app, port=8080) </code></pre> <p>The JavaScript code:</p> <pre><code>pc = new RTCPeerConnection({ iceServers: [] // No ICE servers needed for local network }); // When remote adds a track, show it in our video element pc.ontrack = event =&gt; { if (event.track.kind === 'video') { video.srcObject = event.streams[0]; } }; // Create an offer to send to the server const offer = await pc.createOffer(); await pc.setLocalDescription(offer); // Send the offer to the server and get the answer const response = await fetch('/offer', { method: 'POST', headers: { 'Content-Type': 'application/json' }, body: JSON.stringify({ sdp: pc.localDescription.sdp, type: pc.localDescription.type }) }); const answer = await response.json(); await pc.setRemoteDescription(new RTCSessionDescription(answer)); </code></pre> <p>When a client sends the offer, the server fails with <code>ValueError: None is not in list</code> on the line <code>await pc.setLocalDescription(answer)</code>. In fact, the SDP in the offer and in the answer is so strange that it cannot work. Neither the browser's nor the server's SDP contains any information about stream tracks, nor any local network address to connect to. What am I missing?</p>
<javascript><python><webrtc><aiortc>
2025-05-04 18:24:03
1
570
Vitaliy Tsirkunov
79,605,867
1,521,512
Is this a greedy algorithm and if so, why?
<p>I'm working through &quot;Competitive Programming 4&quot; by &quot;Halim et al.&quot; and in the topic &quot;Greedy algorithms&quot;, I was solving the following problem on Kattis: <a href="https://open.kattis.com/problems/fridge" rel="nofollow noreferrer">fridge</a>.</p> <blockquote> <p>The goal of the problem is, given a series of numerals, find the smallest (strictly positive) number you <strong>cannot</strong> make using each numeral at most once.</p> <p>For example: with these numerals <code>7129045863</code>, the first number you can't make is <code>11</code>. For these numerals <code>55</code> the first (positive) number you can't make is <code>1</code>.</p> </blockquote> <p>I implemented the following algorithm:</p> <pre class="lang-py prettyprint-override"><code>digits = list(map(int, input())) # counting the digits into a dictionary count_0 = 0 dict = {1:0, 2:0, 3:0, 4:0, 5:0, 6:0, 7:0, 8:0, 9:0} for digit in digits: if digit != 0: dict[digit] += 1 else: count_0 += 1 # finding the smallest digit which has the smallest count min_key = 1 min_val = dict[1] for digit, val in dict.items(): if val &lt; min_val: min_val = val min_key = digit if val == min_val and digit &lt; min_key: min_key = digit # printing the result if count_0 + 1 &lt;= min_val: result = int(&quot;1&quot;+&quot;0&quot;*(count_0+1)) else: result = int(str(min_key) * (min_val+1)) print(result) </code></pre> <p>The algorithm is based on the fact that the number of <code>0</code>'s should be treated differently than the other digits. When there are plenty of <code>0</code>'s and for example <code>2</code> has the least occurrence, then the smallest number you can't form looks like <code>22...22</code>. If <code>0</code> has the least occurrence, the smallest number you can't form looks like <code>100...0</code>.</p> <p>This algorithm works and passes the time limit test of Kattis.</p> <p>I also tried an algorithm which counts all natural numbers until a number can't be formed, but that fails due to &quot;time limit exceeded&quot;. This style of algorithm seems to me more like a greedy approach?</p> <p>So my question is: why would the algorithm above be considered greedy? (if it is to be considered a greedy algorithm). If not, is there a greedy algorithm and what would the greedy approach be?</p>
<python><algorithm><greedy>
2025-05-04 17:17:31
1
427
dietervdf
79,605,674
16,563,251
Type narrowing using is for TypeVar in generic class
<p>I am working with a generic class in Python. The corresponding <a href="https://docs.python.org/3/library/typing.html#typing.TypeVar" rel="nofollow noreferrer"><code>TypeVar</code></a> is restricted to a finite number of (invariant) types. Because it is of some importance what exact type it is at runtime, I specify the type at initialization.</p> <p>Later, I have some code where I want to narrow to certain types. Because I want to narrow to this specific type, the natural way seems to use <code>[stored_type] is [desired_type]</code>. Unfortunately, mypy as well as pyright do not perform any narrowing.</p> <p>However, if I use <code>isinstance([stored_type], type([desired_type]))</code>, it narrows as intended. Also, in a similar case where I use a <a href="https://docs.python.org/3/library/typing.html#typing.Union" rel="nofollow noreferrer">Union</a> instead, the narrowing using <code>is</code> works.</p> <p>I am currently using <code>isinstance</code> because it works for the type checker, but as far as I know <code>is</code> provides the stronger check here and is what I actually want.</p> <p>Is there a good reason for the current behaviour? Why is the narrowing with <code>is</code> less strict than with <code>isinstance</code>?</p> <pre class="lang-py prettyprint-override"><code>from typing import reveal_type class MyClass1: def __init__(self): pass class MyClass2: def __init__(self, arg: int): pass class MyGeneric[V: (MyClass1, MyClass2)]: def __init__(self, member_type: type[V]): if member_type is MyClass1: reveal_type(member_type) # type[V@MyGeneric] if isinstance(member_type, type(MyClass1)): reveal_type(member_type) # type[MyClass1]* def mymethod(self, member_type: type[MyClass1] | type[MyClass2]): if member_type is MyClass1: reveal_type(member_type) # type[MyClass1] </code></pre>
<python><python-typing><mypy><pyright>
2025-05-04 13:50:22
0
573
502E532E
79,605,658
8,388,707
MCP server gives &quot;The selected tool does not have a callable 'function'&quot; when used via AutoGen
<p>This is my running MCP server, and I have also validated it via Claude Desktop:</p> <pre><code>from typing import Dict from mcp.server.fastmcp import FastMCP # Initialize the FastMCP server with a name mcp = FastMCP(&quot;simple_weather&quot;) # Weather data for different topics WEATHER_DATA = { &quot;rain&quot;: { &quot;description&quot;: &quot;Rain is liquid water in droplets that falls from clouds&quot;, &quot;measurement&quot;: &quot;Measured in millimeters or inches&quot;, &quot;effects&quot;: &quot;Can cause flooding in heavy amounts&quot;, }, } # Define the weather info tool using the @mcp.tool decorator @mcp.tool() async def get_weather_info(topic: str) -&gt; Dict[str, str]: &quot;&quot;&quot; A tool to fetch weather-related information based on the topic. Arguments: - topic: The weather topic (rain, temperature, wind, etc.) Returns: - A dictionary containing the weather description, measurement, and effects &quot;&quot;&quot; topic = topic.lower() # Normalize topic to lowercase if topic not in WEATHER_DATA: return { &quot;error&quot;: f&quot;Unknown weather topic '{topic}'&quot;, &quot;available_topics&quot;: list(WEATHER_DATA.keys()), } return WEATHER_DATA[topic] # Run the MCP server when the script is executed if __name__ == &quot;__main__&quot;: mcp.run() </code></pre> <p><a href="https://i.sstatic.net/0g1aWBCY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0g1aWBCY.png" alt="enter image description here" /></a></p> <p>Now I want to use it via a local client. These are the things I am doing:</p> <ol> <li>running the dummy server like: <code>uv run dummy_server.py</code></li> <li>running this autogen client</li> </ol> <pre><code>import asyncio from autogen_ext.tools.mcp import McpWorkbench, StdioServerParams from autogen_ext.models.openai import OpenAIChatCompletionClient async def run_chat_with_topic(topic: str): server_params = StdioServerParams( command=&quot;python&quot;, args=[&quot;dummy_server.py&quot;], read_timeout_seconds=60, ) workbench = McpWorkbench(server_params=server_params) all_tools = await workbench.list_tools() selected_tool = next( ( tool for tool in all_tools if isinstance(tool, dict) and tool.get(&quot;name&quot;) == &quot;get_weather_info&quot; ), None, ) if not selected_tool: raise ValueError(&quot;Tool 'get_weather_info' not found on the MCP server.&quot;) # Call the selected tool's function if &quot;function&quot; in selected_tool: result = await selected_tool[&quot;function&quot;](topic=topic) else: raise ValueError(&quot;The selected tool does not have a callable 'function'.&quot;) model_client = OpenAIChatCompletionClient(model=&quot;gpt-3.5-turbo&quot;) from autogen_agentchat.agents import AssistantAgent from autogen_agentchat.conditions import MaxMessageTermination from autogen_agentchat.teams import RoundRobinGroupChat assistant_1 = AssistantAgent( name=&quot;Planner&quot;, model_client=model_client, system_message=&quot;You are an AI weather planner for India.&quot;, tools=[selected_tool], reflect_on_tool_use=True, tool_call_summary_format=&quot;[Tool]{tool_name} result : {result}&quot;, ) assistant_2 = AssistantAgent( name=&quot;Researcher&quot;, model_client=model_client, system_message=&quot;You are an AI weather researcher for India.&quot;, tools=[selected_tool], reflect_on_tool_use=True, tool_call_summary_format=&quot;[Tool]{tool_name} result : {result}&quot;, ) agents = [assistant_1, assistant_2] termination_condition = MaxMessageTermination(max_message_number=len(agents) + 1) team = RoundRobinGroupChat( agents=agents, termination_condition=termination_condition, ) return await team.run( task=[{&quot;role&quot;: &quot;user&quot;, &quot;content&quot;: f&quot;Get weather info for '{topic}'&quot;}] ) if __name__ == &quot;__main__&quot;: topic = &quot;rain&quot; asyncio.run(run_chat_with_topic(topic)) </code></pre> <p>but I am getting this error:</p> <pre><code> line 36, in run_chat_with_topic raise ValueError(&quot;The selected tool does not have a callable 'function'.&quot;) ValueError: The selected tool does not have a callable 'function'. </code></pre> <p>It seems the MCP server is giving this kind of tool definition:</p> <pre><code>{'name': 'get_weather_info', 'description': '\n A tool to fetch weather-related information based on the topic.\n\n Arguments:\n - topic: The weather topic (rain, temperature, wind, etc.) \n\n Returns:\n - A dictionary containing the weather description, measurement, and effects\n ', 'parameters': {'type': 'object', 'properties': {'topic': {'title': 'Topic', 'type': 'string'}}, 'required': ['topic'], 'additionalProperties': False}} </code></pre> <p>It seems it doesn't have any function definition like <code>'function': get_weather_info_function</code>. Can you please help me here?</p> <p>These are the versions I am using:</p> <pre><code>dependencies = [ &quot;autogen-agentchat&gt;=0.5.5&quot;, &quot;autogen-ext[openai]&gt;=0.5.5&quot;, &quot;httpx&gt;=0.28.1&quot;, &quot;mcp[cli]&gt;=1.7.1&quot;, ] </code></pre>
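<p>A sketch of the likely fix (an assumption based on autogen-ext 0.5.x): the dicts returned by <code>list_tools()</code> are just JSON schemas, so they never carry a callable. Tools are invoked through the workbench itself, and the whole workbench can be handed to the agents:</p> <pre><code>await workbench.start()

# direct invocation goes through the workbench, not the schema dict
result = await workbench.call_tool('get_weather_info', {'topic': topic})
print(result)

assistant_1 = AssistantAgent(
    name='Planner',
    model_client=model_client,
    system_message='You are an AI weather planner for India.',
    workbench=workbench,      # instead of tools=[selected_tool]
    reflect_on_tool_use=True,
)
</code></pre>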
<python><ms-autogen><model-context-protocol>
2025-05-04 13:29:29
1
1,592
Vineet
79,605,299
15,907,363
Bitunix API returns "Signature Error" (code 10007) when placing an order using aiohttp client in Python
<p>I'm trying to integrate with the Bitunix API (<a href="https://openapidoc.bitunix.com/doc/common/introduction.html" rel="nofollow noreferrer">https://openapidoc.bitunix.com/doc/common/introduction.html</a>) using Python and aiohttp. I implemented authentication and signature generation as per the documentation, but when I try to place an order, the API always responds with:</p> <pre><code>{ &quot;code&quot;: 10007, &quot;data&quot;: null, &quot;msg&quot;: &quot;Signature Error&quot; } </code></pre> <p>Code (Relevant Part)</p> <pre><code>import aiohttp, hashlib, uuid, time, json import logging logger = logging.getLogger(__name__) logger.setLevel(logging.DEBUG) class BitunixClient: def __init__(self, api_key, secret_key, base_url=&quot;https://fapi.bitunix.com&quot;): self.api_key = api_key self.secret_key = secret_key self.base_url = base_url self.session = aiohttp.ClientSession() async def place_order(self, order_data): method = &quot;POST&quot; endpoint = &quot;/api/v1/futures/trade/place_order&quot; url = f&quot;{self.base_url}{endpoint}&quot; nonce = uuid.uuid4().hex[:32] timestamp = str(int(time.time() * 1000)) # Signature step 1: sorted params (none here) query_params = &quot;&quot; # Signature step 2: sorted, no-whitespace body body = json.dumps(order_data, separators=(',', ':'), sort_keys=True) # Signature step 3: double SHA256 digest_input = f&quot;{nonce}{timestamp}{self.api_key}{query_params}{body}&quot; digest = hashlib.sha256(digest_input.encode()).hexdigest() sign_input = digest + self.secret_key sign = hashlib.sha256(sign_input.encode()).hexdigest() headers = { &quot;api-key&quot;: self.api_key, &quot;nonce&quot;: nonce, &quot;timestamp&quot;: timestamp, &quot;sign&quot;: sign, &quot;Content-Type&quot;: &quot;application/json&quot; } logger.debug(f&quot;Request URL: {url}&quot;) logger.debug(f&quot;Headers: {headers}&quot;) logger.debug(f&quot;Body: {body}&quot;) logger.debug(f&quot;Digest Input: {digest_input}&quot;) logger.debug(f&quot;Signature: {sign}&quot;) async with self.session.request(method, url, data=body, headers=headers) as resp: resp_json = await resp.json() logger.info(f&quot;Response: {resp_json}&quot;) return resp_json </code></pre> <p>Test Case</p> <pre><code>import asyncio async def test_order(): client = BitunixClient(&quot;YOUR_API_KEY&quot;, &quot;YOUR_SECRET_KEY&quot;) order_data = { &quot;symbol&quot;: &quot;BTCUSDT&quot;, &quot;side&quot;: &quot;BUY&quot;, &quot;orderType&quot;: &quot;LIMIT&quot;, &quot;qty&quot;: &quot;0.001&quot;, &quot;price&quot;: &quot;50000&quot;, &quot;tradeSide&quot;: &quot;OPEN&quot;, &quot;effect&quot;: &quot;GTC&quot; } result = await client.place_order(order_data) print(result) asyncio.run(test_order()) </code></pre> <p>❌ What I’ve Tried</p> <ul> <li>Double-checked api_key and secret_key</li> <li>Sorted and compacted body correctly with sort_keys=True</li> <li>Switched from json=data to data=body</li> <li>Ensured system time sync</li> </ul> <p>Logs (Excerpt)</p> <pre><code>Request URL: https://fapi.bitunix.com/api/v1/futures/trade/place_order Headers: {'api-key': '...', 'nonce': '...', 'timestamp': '...', 'sign': '...', 'Content-Type': 'application/json'} Body: {&quot;effect&quot;:&quot;GTC&quot;,&quot;orderType&quot;:&quot;LIMIT&quot;,&quot;price&quot;:&quot;50000&quot;,&quot;qty&quot;:&quot;0.001&quot;,&quot;side&quot;:&quot;BUY&quot;,&quot;symbol&quot;:&quot;BTCUSDT&quot;,&quot;tradeSide&quot;:&quot;OPEN&quot;} Signature: ... Response: {'code': 10007, 'data': None, 'msg': 'Signature Error'} </code></pre> <p>What am I missing in the signature generation for the Bitunix API? Is there any undocumented quirk with how the request body or headers should be constructed?</p> <p>Has anyone successfully placed an order using Bitunix's futures API and aiohttp? There are similar issues in other questions. Is it possible that the issue lies with the exchange API rather than our code?</p>
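<p>One hedged thing to rule out (an assumption about the server, not documented behavior): the signature must be computed over <em>exactly</em> the bytes that go on the wire, so any mismatch between the signed rendering and the transmitted rendering (sorted vs. insertion order, whitespace) yields code 10007. A quick check is to try both renderings, always signing the identical string that is sent (variables as in <code>place_order</code> above):</p> <pre><code>body_sorted = json.dumps(order_data, separators=(',', ':'), sort_keys=True)
body_as_is = json.dumps(order_data, separators=(',', ':'))

for body in (body_sorted, body_as_is):
    digest_input = f'{nonce}{timestamp}{api_key}{body}'   # query params empty here
    digest = hashlib.sha256(digest_input.encode()).hexdigest()
    sign = hashlib.sha256((digest + secret_key).encode()).hexdigest()
    # ... send the request with this exact body and this sign, compare responses
</code></pre>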
<python><request><signature><aiohttp>
2025-05-04 04:57:04
0
464
Milad
79,605,214
6,686,740
Frida: How to send byte[] array from JavaScript to Python
<p>I have a Frida JS script inside a Python session, and I'm trying to pass an array of bytes (from a Bitmap image) from the JavaScript environment back to the Python environment. Here is my attempt:</p> <pre class="lang-py prettyprint-override"><code>import frida import sys import os JS_SCRIPT = ''' setTimeout(function () {{ Java.perform(function () {{ // declare dependencies on necessary Java classes const File = Java.use(&quot;java.io.File&quot;); const Bitmap = Java.use(&quot;android.graphics.Bitmap&quot;); const BitmapCompressFormat = Java.use(&quot;android.graphics.Bitmap$CompressFormat&quot;); const BitmapConfig = Java.use(&quot;android.graphics.Bitmap$Config&quot;); const ByteArrayOutputStream = Java.use(&quot;java.io.ByteArrayOutputStream&quot;); // instantiate a new Bitmap object const bitmap = Bitmap.createBitmap(100, 100, BitmapConfig.ARGB_8888.value); // output bitmap to a byte stream in PNG format const stream = ByteArrayOutputStream.$new(); const saved = bitmap.compress(BitmapCompressFormat.PNG.value, 100, stream); console.log(&quot;[*] Compressed as PNG:&quot;, saved); // get byte array from byte stream const byteArray = stream.toByteArray(); console.log(&quot;[*] Byte array length:&quot;, byteArray.length); // send the byte stream to the Python layer send({{ type: &quot;bitmap&quot;, page: pageNum }}, byteArray); stream.close(); }}); }}, 1000); ''' def on_message(message, data): if message[&quot;type&quot;] == &quot;send&quot; and message[&quot;payload&quot;].get(&quot;type&quot;) == &quot;bitmap&quot;: page = message[&quot;payload&quot;].get(&quot;page&quot;) with open(OUTPUT_FILENAME, &quot;wb&quot;) as f: f.write(data) print(f&quot;[+] Saved page {page} as {OUTPUT_FILENAME}&quot;) else: print(f&quot;[?] Unknown message: {message}&quot;) def main(): device = frida.get_usb_device(timeout=5) session = device.attach(pid) script = session.create_script(JS_SCRIPT) script.on(&quot;message&quot;, on_message) script.load() device.resume(pid) if __name__ == &quot;__main__&quot;: main() </code></pre> <p>The problem happens on the call to <code>send()</code> because the second argument <code>byteArray</code> is not a pointer:</p> <p><code>Error: expected a pointer</code></p> <p>It's unclear to me how to get <code>byteArray</code> into a format that can be sent using the <code>send()</code> function, and I'm having trouble finding the solution in the Frida API docs.</p>
<javascript><python><android><frida>
2025-05-04 00:49:44
1
382
Jugdish
79,605,191
1,892,584
Running pdb on Python uv single-file scripts?
<p>So the new hotness for scripts on your path in Python is to use <code>uv</code> to create single-file scripts.</p> <p>These look like this:</p> <pre><code>#!/usr/bin/env -S uv run --script # -*- mode: python -*- # /// script # requires-python = &quot;&gt;=3.12&quot; # dependencies = [ # &quot;dominate&quot;, # &quot;python-dateutil&quot;, # ] # /// # </code></pre> <p>They work pretty well.</p> <p>The old way that you could debug scripts when they went wrong was to use <code>python3 -m pdb script</code>, so that Python dropped into a pdb post-mortem shell on errors. Is it possible to do this with uv?</p> <p>(The key thing here is that the environment for running the script is built by uv.)</p>
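<p>A sketch that keeps uv in charge of building the environment while still getting a post-mortem shell: wrap the entry point and invoke <code>pdb.post_mortem</code> on any unhandled exception, mimicking the on-error behavior of <code>python -m pdb</code> from inside the script itself:</p> <pre><code>#!/usr/bin/env -S uv run --script
# /// script
# requires-python = &quot;&gt;=3.12&quot;
# dependencies = []
# ///
import pdb
import sys
import traceback

def main():
    raise RuntimeError('boom')   # stand-in for the real script body

if __name__ == '__main__':
    try:
        main()
    except Exception:
        traceback.print_exc()
        pdb.post_mortem(sys.exc_info()[2])   # drop into the pdb shell
</code></pre>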
<python><pdb><uv>
2025-05-03 23:39:50
1
1,947
Att Righ
79,604,826
13,392,257
Why does the output say 4 seconds when the real run time is less?
<p>I have a simple decorator to measure a function's run time in seconds:</p> <pre><code>from functools import wraps from time import time def time_it(func): @wraps(func) def wrapper(*args, **kwargs): start_time = time() res = func(*args, **kwargs) print(&quot;Time: &quot;, time() - start_time) return res return wrapper @time_it def cycle(): res = 0 for i in range(1000): res += i return res cycle() </code></pre> <p>The code outputs &quot;Time: 4.4&quot;, but I can see that the function runs faster than 4 seconds.</p> <p>I also checked with the help of the time utility:</p> <pre><code>time python 0_decorators/decorator.py Time: 4.38690185546875e-05 python 0_decorators/decorator.py 0.02s user 0.02s system 65% cpu 0.059 total </code></pre> <p>Why does the output say 4 seconds when the real run time is less? (I am using Python 3.11 and macOS 12.)</p>
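<p>The trailing <code>e-05</code> in the quoted output is the clue: the value is printed in scientific notation, so the measured time is roughly 0.000044 seconds, not 4.4 seconds. Printing with fixed precision makes this unambiguous:</p> <pre><code>t = 4.38690185546875e-05
print(t)                 # 4.38690185546875e-05, i.e. 4.387 * 10**-5 seconds
print(f'Time: {t:.6f}')  # Time: 0.000044
</code></pre>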
<python><floating-point>
2025-05-03 15:00:11
2
1,708
mascai
79,604,798
893,254
How does the TensorFlow GradientTape calculation work at a low level?
<p>I have just seen some code which has sparked my interest.</p> <pre class="lang-py prettyprint-override"><code>with tf.GradientTape() as g: y = f(x) dy_dx = g.gradient(y, x) </code></pre> <p>(Code loosely taken from this <a href="https://www.tensorflow.org/api_docs/python/tf/GradientTape" rel="nofollow noreferrer">reference</a>.)</p> <p>There are two things which I think are interesting about this:</p> <ul> <li>The use of a context manager</li> <li>The fact that when the context manager has closed, the gradient value(s) are available</li> </ul> <p>What interests me is understanding the software behind what might look like &quot;magic code&quot;. You might ask questions such as</p> <ul> <li>why is there a <code>with</code> keyword involved here?</li> <li>if I evaluated <code>f(x)</code>, then how is it that a value for a gradient (which is a different function to <code>f</code>) becomes available?</li> </ul> <p>I think the context manager is fairly easy to figure out. It causes the magic dunder functions <code>__enter__</code> and <code>__exit__</code> to be triggered, and so I am fairly confident this is a convenient way of marking the beginning and end of the gradient calculation. I would <em>guess</em> this is to make the overall code more efficient: there is no point calculating gradients if they are not needed, and therefore we need a way to mark where we should start and finish gradient calculations. The API could just as well have exposed a <code>.start()</code> and <code>.stop()</code> function.</p> <p>The second point is less easy to understand. My initial assumption was that perhaps the gradients were being approximated numerically. However, after doing some research I understand this is not the case, and that something called <em>Automatic Differentiation</em> is used instead. Numerical methods suffer from numerical errors due to finite precision, so it is not particularly surprising that a potentially numerically unstable algorithm is not used here.</p> <p>What I don't understand is how to connect the dots. A context manager is used to mark the beginning and end of where some gradient calculations must be performed, and those calculations are performed using Automatic Differentiation.</p> <p>I don't think I have a particularly intuitive understanding of how Automatic Differentiation works, either. I understand that the chain rule is used, but that it is <strong>not</strong> symbolic differentiation. So what does TensorFlow do here? (Having read some references on Auto Diff, I can't see how it is in any way different to chain rule application of symbolic differentiation. So possibly I am missing something here.)</p> <ul> <li>Does it effectively have a big list of <code>if-elseif</code> statements to calculate the right function? For example, if TensorFlow sees an expression like <code>matmul(x, y)</code> does it just contain logical rules which say that <code>d_matmul_d_arg1 = arg2</code>, and <code>d_matmul_d_arg2 = arg1</code>?</li> <li>And then to actually calculate the <em>value</em> of the gradient, does it just run the calculation for <code>d_matmul_d_arg1(arg2)</code> in the same way that it would calculate the value of <code>matmul(x, y)</code> with <code>x=...</code>, <code>y=...</code> using the usual numerical algorithm for matrix multiplication?</li> </ul> <p>Basically what is it doing, and how does it work at a low level?</p>
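<p>A toy reverse-mode tape may connect the dots (an illustration of the idea, not TensorFlow's actual implementation): the context manager marks a recording phase, each primitive stores its inputs plus a <em>local numeric derivative</em> (the rule table the question guesses at), and <code>gradient()</code> sweeps the records backwards, applying the chain rule to numbers rather than to symbolic expressions:</p> <pre class="lang-py prettyprint-override"><code>class Var:
    def __init__(self, value, tape=None):
        self.value, self.tape = value, tape

    def __add__(self, other):
        out = Var(self.value + other.value, self.tape)
        if self.tape is not None:   # rule: d(a+b)/da = 1, d(a+b)/db = 1
            self.tape.records.append((out, [(self, 1.0), (other, 1.0)]))
        return out

    def __mul__(self, other):
        out = Var(self.value * other.value, self.tape)
        if self.tape is not None:   # rule: d(ab)/da = b, d(ab)/db = a
            self.tape.records.append((out, [(self, other.value), (other, self.value)]))
        return out

class Tape:
    def __enter__(self):
        self.records = []           # start recording
        return self

    def __exit__(self, *exc):
        pass                        # stop recording; records stay for gradient()

    def watch(self, value):
        return Var(value, self)

    def gradient(self, target, source):
        grads = {id(target): 1.0}
        for out, pairs in reversed(self.records):     # reverse sweep
            g = grads.get(id(out), 0.0)
            for var, local in pairs:                  # chain rule, numerically
                grads[id(var)] = grads.get(id(var), 0.0) + g * local
        return grads.get(id(source), 0.0)

with Tape() as g:
    x = g.watch(3.0)
    y = x * x + x                   # f(x) = x**2 + x
print(g.gradient(y, x))             # 7.0 == f'(3) = 2*3 + 1
</code></pre> <p>Both puzzling points fall out directly: <code>__exit__</code> only ends the recording phase, and the gradient value is available afterwards because the tape, not <code>y</code>, holds everything the backward sweep needs.</p>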
<python><tensorflow><machine-learning><neural-network>
2025-05-03 14:25:25
1
18,579
user2138149
79,604,797
1,681,681
Why does PyQt6 leave large gap between dock widgets after docking them?
<p>I'm using qtpy (wrapping PyQt6) and have a window with dock widgets like so:</p> <p><a href="https://i.sstatic.net/3LCocTlDm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3LCocTlDm.png" alt="enter image description here" /></a></p> <p>However, when I drag and re-dock a dock widget, for some unknown reason the widget redocks with a huge gap between it and the other dock widget (as indicated by the arrow below):</p> <p><a href="https://i.sstatic.net/ArKUju8Jm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ArKUju8Jm.png" alt="enter image description here" /></a> <a href="https://i.sstatic.net/YFtM9vVxm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YFtM9vVxm.png" alt="enter image description here" /></a></p> <p>Any ideas how to avoid this gap? I wonder if it's a Qt6 issue, as I don't recall seeing this behaviour in Qt4 or Qt5.</p> <p>The full minimal working example is below:</p> <pre><code>import sys from qtpy.QtCore import Qt from qtpy.QtWidgets import QApplication, QMainWindow, QDockWidget, QTextEdit, QComboBox class MainWindow(QMainWindow): def __init__(self): super().__init__() self.resize(800, 600) central_widget = QTextEdit(&quot;Main content area&quot;) self.setCentralWidget(central_widget) dock1 = QDockWidget(&quot;Dock 1&quot;, self) dock1.setWidget(QComboBox()) dock1.setAllowedAreas(Qt.RightDockWidgetArea) dock1.setMinimumWidth(200) dock2 = QDockWidget(&quot;Dock 2&quot;, self) dock2.setWidget(QComboBox()) dock2.setAllowedAreas(Qt.RightDockWidgetArea) dock2.setMinimumWidth(200) self.addDockWidget(Qt.RightDockWidgetArea, dock1) self.addDockWidget(Qt.RightDockWidgetArea, dock2) self.setDockOptions(QMainWindow.AnimatedDocks) if __name__ == &quot;__main__&quot;: app = QApplication(sys.argv) window = MainWindow() window.show() app.exec() </code></pre>
<python><qt><pyqt6><qt6><qtpy>
2025-05-03 14:25:14
0
10,346
mchen
79,604,777
1,681,681
How to fix Pycharm code inspection giving false positives with qtpy?
<p>I'm using qtpy (wrapping PyQt6) and PyCharm (plugin within IntelliJ), but the IDE is constantly showing me warnings about &quot;Unresolved attribute reference&quot; for Qt objects.</p> <p>This adds noise and obfuscates real bugs (like the one highlighted in red below).</p> <p>Is there a fix for this?</p> <p><a href="https://i.sstatic.net/tdTvieyf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tdTvieyf.png" alt="enter image description here" /></a></p>
<python><qt><pycharm><qtpy>
2025-05-03 14:06:49
0
10,346
mchen
79,604,500
15,913,281
"Pop-up" Not Closing Using Normal Click Methods
<p>I am trying to close the &quot;pop-up&quot; that appears when you click on &quot;view flight details&quot; on <a href="https://www.tui.co.uk/destinations/packages?airports%5B%5D=LBA%7CMME%7CNCL%7CMAN%7CEMA&amp;units%5B%5D=000864%3ADESTINATION%7C000910%3ADESTINATION%7C002173%3ADESTINATION%7C000952%3ADESTINATION%7C001522%3ADESTINATION%7C000312%3ADESTINATION%7C000318%3ADESTINATION%7C000327%3ADESTINATION%7C001940%3ADESTINATION%7C000800%3AREGION%7C000840%3AREGION%7CL24965%3AREGION%7C000915%3AREGION%7C002189%3AREGION%7CL24960%3AREGION%7CL00041%3AREGION&amp;when=09-08-2025&amp;until=&amp;flexibility=true&amp;monthSearch=false&amp;flexibleDays=7&amp;flexibleMonths=&amp;noOfAdults=2&amp;noOfChildren=2&amp;childrenAge=9%2C6&amp;duration=1415&amp;searchRequestType=ins&amp;searchType=search&amp;sp=true&amp;multiSelect=true&amp;room=1%7C1%7C0%7C1%7C0%7C9-2%7C1%7C0%7C1%7C0%7C6&amp;isVilla=false&amp;reqType=&amp;sortBy=totalPriceAsc&amp;nearByAirports=true#" rel="nofollow noreferrer">this page</a> using Python Selenium. For some reason, whatever I try, I can't get it to close. I've tried clicking on the x and the close button, and using CLASS_NAME and XPATH.</p> <p>Can anyone see what I am doing wrong?</p> <p>E.g.:</p> <pre><code>WebDriverWait(driver, timeout=20).until(EC.presence_of_element_located((By.CLASS_NAME, &quot;buttons__button buttons__primary buttons__fill ModalRedesign__applyButton&quot;))).click() </code></pre>
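<p><em>One thing that stands out: <code>By.CLASS_NAME</code> only accepts a single class name, so a compound value like <code>&quot;buttons__button buttons__primary ...&quot;</code> can never match. A hedged sketch using a CSS selector instead (class names taken from the snippet above; the live page's markup may differ, and the button may also sit inside an iframe or behind a cookie banner):</em></p> <pre class="lang-py prettyprint-override"><code>from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

wait = WebDriverWait(driver, timeout=20)
close_btn = wait.until(
    EC.element_to_be_clickable(
        (By.CSS_SELECTOR, '.buttons__button.ModalRedesign__applyButton')
    )
)
# A JavaScript click side-steps 'element click intercepted' overlay issues.
driver.execute_script('arguments[0].click();', close_btn)
</code></pre>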
<python><selenium-webdriver>
2025-05-03 08:34:56
1
471
Robsmith
79,604,417
6,843,103
Apicurio Schema Registry 3.0.7 - unable to update multiple versions of schema for artifactId
<p>I have Apicurio Schema Registry 3.0.7 installed, and I'm trying to use the following code to publish a v2 of schema where GROUP = default, ARTIFACT_ID = com.versa.apicurio.confluent.Employee</p> <p>here is the code :</p> <pre><code>import requests import json import urllib.parse # ───────────────────────────────────────────────────────────────────────────── # CONFIGURATION – update these values! # ───────────────────────────────────────────────────────────────────────────── KEYCLOAK_URL = &quot;https://keycloak.vkp.versa-vani.com/realms/readonly-realm&quot; CLIENT_ID = &quot;apicurio-registry&quot; CLIENT_SECRET = &quot;&lt;secret&gt;&quot; # ← your client secret here REGISTRY_URL = &quot;https://apicurio-sr.vkp.versa-vani.com/apis/registry/v3&quot; GROUP = &quot;default&quot; ARTIFACT_ID = &quot;com.versa.apicurio.confluent.Employee&quot; # ───────────────────────────────────────────────────────────────────────────── def get_token(): &quot;&quot;&quot; Use client_credentials grant to obtain a service-account token. &quot;&quot;&quot; token_url = f&quot;{KEYCLOAK_URL}/protocol/openid-connect/token&quot; data = { &quot;grant_type&quot;: &quot;client_credentials&quot; } # Use HTTP Basic auth with client_id:client_secret resp = requests.post(token_url, data=data, auth=(CLIENT_ID, CLIENT_SECRET)) resp.raise_for_status() return resp.json()[&quot;access_token&quot;] def publish_schema(token): &quot;&quot;&quot; Publish (register) an Avro schema under GROUP/artifactId. If artifact exists → create new version. If not → create artifact first time. &quot;&quot;&quot; url_check = f&quot;{REGISTRY_URL}/groups/{GROUP}/artifacts/{ARTIFACT_ID}&quot; headers = { &quot;Authorization&quot;: f&quot;Bearer {token}&quot; } check_response = requests.get(url_check, headers=headers) print(f&quot; check_response.status_code : {check_response.status_code}&quot;) schema = { &quot;type&quot;: &quot;record&quot;, &quot;name&quot;: &quot;Employee&quot;, &quot;namespace&quot;: &quot;com.versa.apicurio.confluent&quot;, &quot;fields&quot;: [ {&quot;name&quot;: &quot;id&quot;, &quot;type&quot;: &quot;int&quot;}, {&quot;name&quot;: &quot;name&quot;, &quot;type&quot;: &quot;string&quot;}, {&quot;name&quot;: &quot;salary&quot;, &quot;type&quot;: [&quot;null&quot;, &quot;float&quot;], &quot;default&quot;: None}, {&quot;name&quot;: &quot;age&quot;, &quot;type&quot;: [&quot;null&quot;, &quot;int&quot;], &quot;default&quot;: None}, { &quot;name&quot;: &quot;department&quot;, &quot;type&quot;: { &quot;type&quot;: &quot;enum&quot;, &quot;name&quot;: &quot;DepartmentEnum&quot;, &quot;symbols&quot;: [&quot;HR&quot;, &quot;ENGINEERING&quot;, &quot;SALES&quot;] } }, {&quot;name&quot;: &quot;email&quot;, &quot;type&quot;: [&quot;null&quot;, &quot;string&quot;], &quot;default&quot;: None}, {&quot;name&quot;: &quot;new_col&quot;, &quot;type&quot;: [&quot;null&quot;, &quot;string&quot;], &quot;default&quot;: None}, {&quot;name&quot;: &quot;new_col2&quot;, &quot;type&quot;: [&quot;null&quot;, &quot;string&quot;], &quot;default&quot;: None}, {&quot;name&quot;: &quot;new_col3&quot;, &quot;type&quot;: [&quot;null&quot;, &quot;string&quot;], &quot;default&quot;: None}, ] } schema_str = json.dumps(schema) # If artifact exists, upload a new version if check_response.status_code == 200: print(&quot;Artifact exists, uploading a new version...&quot;) url_publish = f&quot;{REGISTRY_URL}/groups/{GROUP}/artifacts/{ARTIFACT_ID}/versions&quot; headers_publish = { &quot;Authorization&quot;: f&quot;Bearer {token}&quot;, &quot;Content-Type&quot;: 
&quot;application/json&quot; } # For version updates in Apicurio 3.0.7, send the schema directly as content resp = requests.post( url_publish, headers=headers_publish, data=schema_str # Send the schema JSON as the request body ) print(&quot;Publish response:&quot;, resp.status_code, resp.text) elif check_response.status_code == 404: print(&quot;Artifact not found, creating new artifact...&quot;) url_publish = f&quot;{REGISTRY_URL}/groups/{GROUP}/artifacts&quot; # For creating a new artifact headers_create = { &quot;Authorization&quot;: f&quot;Bearer {token}&quot;, &quot;Content-Type&quot;: &quot;application/json&quot; } # Add X-Registry-ArtifactId header for the new artifact headers_create[&quot;X-Registry-ArtifactId&quot;] = ARTIFACT_ID # Add X-Registry-ArtifactType header headers_create[&quot;X-Registry-ArtifactType&quot;] = &quot;AVRO&quot; # Send the schema directly as the request body resp = requests.post( url_publish, headers=headers_create, data=schema_str # Send the schema JSON as the request body ) else: print(f&quot;Unexpected error while checking artifact: {check_response.status_code} - {check_response.text}&quot;) check_response.raise_for_status() return print(&quot;Publish response:&quot;, resp.status_code, resp.text) if resp.ok: print(&quot;Schema published successfully!&quot;) if 'globalId' in resp.json(): print(&quot;Registered globalId:&quot;, resp.json().get(&quot;globalId&quot;)) else: print(&quot;Error publishing schema:&quot;, resp.status_code, resp.text) resp.raise_for_status() def get_schemas_and_versions(token, artifact_id): &quot;&quot;&quot; List all artifacts and their versions with globalIds in the specified GROUP. For each version, print the schema content. &quot;&quot;&quot; url = f&quot;{REGISTRY_URL}/groups/{GROUP}/artifacts&quot; headers = { &quot;Authorization&quot;: f&quot;Bearer {token}&quot;, &quot;Accept&quot;: &quot;application/json&quot; } resp = requests.get(url, headers=headers) print(&quot;Get schemas response:&quot;, resp.status_code) resp.raise_for_status() # Print the raw response for debugging print(&quot;Raw artifacts response:&quot;, json.dumps(resp.json(), indent=2)) # Handle both possible response formats artifacts_data = resp.json() print(&quot; type(resp.json()) -&gt; &quot;, type(resp.json())) print(f&quot; artifacts_data : {artifacts_data}&quot;) if isinstance(artifacts_data, dict): artifacts = artifacts_data.get(&quot;artifacts&quot;, []) elif isinstance(artifacts_data, list): artifacts = artifacts_data else: print(&quot;Unexpected response format for artifacts list!&quot;) return print(f&quot;Found {len(artifacts)} artifacts in group `{GROUP}`:&quot;) for art in artifacts: print(f&quot; - Artifact: {art}&quot;) # Handle both dict and string artifact representations if isinstance(art, dict): artifact_id_resp = art.get('artifactId') created_by = art.get('createdBy', '&lt;unknown&gt;') else: artifact_id_resp = art created_by = '&lt;unknown&gt;' print(f&quot; artifactId : {artifact_id_resp}, created_by : {created_by}&quot;) if not artifact_id_resp: print(f&quot; - Skipping artifact with missing ID: {art}&quot;) continue if artifact_id != artifact_id_resp: continue print(f&quot; - Artifact ID: {artifact_id_resp}, createdBy: {created_by}&quot;) # Get all versions for this artifact versions_url = f&quot;{REGISTRY_URL}/groups/{GROUP}/artifacts/{urllib.parse.quote(str(artifact_id_resp), safe='')}/versions&quot; versions_resp = requests.get(versions_url, headers=headers) print(&quot;Get versions response:&quot;, versions_resp.status_code) if 
versions_resp.status_code == 200: versions_data = versions_resp.json().get('versions', []) print(f&quot;Found {len(versions_data)} versions for artifact `{artifact_id_resp}`:&quot;) print(f&quot; - Versions data: {versions_data}&quot;) version_ids = [v.get('version') for v in versions_data] print(f&quot; Versions: {version_ids}&quot;) # For each version, print the globalId and schema content for version in version_ids: # version_meta_url = f&quot;{REGISTRY_URL}/groups/{GROUP}/artifacts/{urllib.parse.quote(str(artifact_id_resp), safe='')}/versions/{version}/meta&quot; version_meta_url = f&quot;{REGISTRY_URL}/groups/{GROUP}/artifacts/{urllib.parse.quote(str(artifact_id_resp), safe='')}/versions/{version}&quot; version_meta_resp = requests.get(version_meta_url, headers=headers) print(&quot; Get version metadata response:&quot;, version_meta_resp.status_code) if version_meta_resp.status_code == 200: version_meta = version_meta_resp.json() global_id = version_meta.get('globalId', '&lt;not found&gt;') print(f&quot; - Version: {version}, globalId: {global_id}&quot;) # Fetch and print the schema content content_url = f&quot;{REGISTRY_URL}/groups/{GROUP}/artifacts/{urllib.parse.quote(str(artifact_id_resp), safe='')}/versions/{version}/content&quot; content_resp = requests.get(content_url, headers=headers) if content_resp.status_code == 200: print(f&quot; Schema content for version {version}:\n{content_resp.text}&quot;) else: print(f&quot; Failed to get schema content for version {version}: {content_resp.status_code} - {content_resp.text}&quot;) else: print(f&quot; - Failed to get version {version} metadata: {version_meta_resp.status_code} - {version_meta_resp.text}&quot;) else: print(f&quot; Failed to get versions: {versions_resp.status_code} - {versions_resp.text}&quot;) return artifacts # ───────────────────────────────────────────────────────────────────────────── if __name__ == &quot;__main__&quot;: token = get_token() print(&quot;Token acquired, length:&quot;, len(token)) print(&quot; toekn : &quot;, token) # Register your Employee schema publish_schema(token) print(&quot;calling - get_schemas_and_versions&quot;) # List existing schemas get_schemas_and_versions(token, ARTIFACT_ID) # Optionally, list all schemas # get_schemas(token) </code></pre> <p>Note - the first version of the schema gets published successfully, but the 2nd version is failing .. here is the error</p> <pre><code>check_response.status_code : 200 Artifact exists, uploading a new version... 
Publish response: 400 {&quot;detail&quot;:&quot;MissingRequiredParameterException: Request is missing a required parameter: content&quot;,&quot;title&quot;:&quot;Request is missing a required parameter: content&quot;,&quot;status&quot;:400,&quot;name&quot;:&quot;MissingRequiredParameterException&quot;} Publish response: 400 {&quot;detail&quot;:&quot;MissingRequiredParameterException: Request is missing a required parameter: content&quot;,&quot;title&quot;:&quot;Request is missing a required parameter: content&quot;,&quot;status&quot;:400,&quot;name&quot;:&quot;MissingRequiredParameterException&quot;} Error publishing schema: 400 {&quot;detail&quot;:&quot;MissingRequiredParameterException: Request is missing a required parameter: content&quot;,&quot;title&quot;:&quot;Request is missing a required parameter: content&quot;,&quot;status&quot;:400,&quot;name&quot;:&quot;MissingRequiredParameterException&quot;} Traceback (most recent call last): File &quot;/Users/karanalang/Documents/Technology/apicurio-schema-registry/helm-charts/keycloak/consolidate-tls-admin-confluent-v3.py&quot;, line 405, in &lt;module&gt; publish_schema(token) File &quot;/Users/karanalang/Documents/Technology/apicurio-schema-registry/helm-charts/keycloak/consolidate-tls-admin-confluent-v3.py&quot;, line 124, in publish_schema resp.raise_for_status() File &quot;/opt/homebrew/lib/python3.9/site-packages/requests/models.py&quot;, line 1024, in raise_for_status raise HTTPError(http_error_msg, response=self) requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://apicurio-sr.vkp.versa-vani.com/apis/registry/v3/groups/default/artifacts/com.versa.apicurio.confluent.Employee/version </code></pre> <p>The issue seems to be with this part of the code:</p> <pre><code>if check_response.status_code == 200: print(&quot;Artifact exists, uploading a new version...&quot;) url_publish = f&quot;{REGISTRY_URL}/groups/{GROUP}/artifacts/{ARTIFACT_ID}/versions&quot; headers_publish = { &quot;Authorization&quot;: f&quot;Bearer {token}&quot;, &quot;Content-Type&quot;: &quot;application/json&quot; } # For version updates in Apicurio 3.0.7, send the schema directly as content resp = requests.post( url_publish, headers=headers_publish, data=schema_str # Send the schema JSON as the request body ) print(&quot;Publish response:&quot;, resp.status_code, resp.text) </code></pre> <p>How do I debug/fix this?</p> <p>tia!</p>
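<p><em>A hedged sketch of a possible fix: the 400 complains about a missing <code>content</code> parameter, which suggests the v3 &quot;create version&quot; endpoint expects a JSON envelope rather than the raw schema string. Field names below follow the v3 <code>CreateVersion</code> shape as I understand it; verify against your server's <code>/apis/registry/v3</code> OpenAPI document:</em></p> <pre class="lang-py prettyprint-override"><code>url_publish = f'{REGISTRY_URL}/groups/{GROUP}/artifacts/{ARTIFACT_ID}/versions'
payload = {
    'content': {
        'content': schema_str,           # the Avro schema as a string
        'contentType': 'application/json',
    }
}
resp = requests.post(
    url_publish,
    headers={'Authorization': f'Bearer {token}'},
    json=payload,                        # requests sets the Content-Type header
)
print('Publish response:', resp.status_code, resp.text)
</code></pre>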
<python><apache-kafka><apicurio-registry>
2025-05-03 06:32:15
1
1,101
Karan Alang
79,604,354
405,017
TypeVar with UnionType to express lambda parameter type to Pyright
<p>I have a collection of objects which all inherit from <code>Component</code>. I'm writing a function that allows one to select a subset of objects, based on type, and pass them to a lambda. The simple case shown below works, both executing correctly at runtime and also correctly inferring lambda types in VS Code/Pyright:</p> <pre class="lang-py prettyprint-override"><code>from collections.abc import Callable from dataclasses import dataclass from types import UnionType from typing import Any, TypeVar @dataclass class Component: c: int = 0 @dataclass class Foo(Component): f: int = 0 @dataclass class Bar(Component): f: int = 0 T = TypeVar(&quot;T&quot;, bound=Component) def query( collection: list[Component], value: Callable[[T], Any], kind: type[T] = Component ) -&gt; list: return [value(item) for item in collection if isinstance(item, kind)] coll = [Foo(f=1), Bar(f=2), Component(c=3)] query(coll, lambda x: x.f, kind=Foo) #=&gt; [1] </code></pre> <p>However, I want to be able to specify a <code>UnionType</code> for selecting more than one type. The following changes work at runtime, but make Pyright cranky:</p> <pre class="lang-py prettyprint-override"><code>def query( …, kind: type[T] | UnionType = Component …, # Adding `| UnionType` to the `kind` parameter causes a problem to be shown on the `item` in `value(item)`: # # Argument of type &quot;Component&quot; cannot be assigned to parameter of type &quot;T@query&quot; #  Type &quot;Component&quot; is incompatible with type &quot;T@query&quot; query(coll, lambda x: x.f, kind=Foo | Bar) #=&gt; [1, 2] # Adding `Foo | Bar` when calling the query() function causes Pyright to think that # `x` is of type `Component`, and so a problem is shown for `.f`: # # Cannot access attribute &quot;f&quot; for class &quot;Component*&quot; #  Attribute &quot;f&quot; is unknown </code></pre> <p>How can I express this to make Pyright happy, so that I get clean static analysis and Intellisense for my code?</p> <p>Currently targeting Python 3.11+, but happy to select a more modern version if it helps.</p> <hr /> <p><em>The code above is a simplified case for clarity: <code>query()</code> does far more than shown here, uses generators and not lists, the class hierarchy and properties are significantly more interesting than shown. This is hopefully obvious, and the simplification desirable. I mention it only to forestall comments or answers that focus on the trivial setup, instead of the meat of the question.</em></p>
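<p><em>One avenue that may be worth testing (a sketch, not a verified answer, reusing the definitions above): the typing spec treats <code>type[Foo | Bar]</code> as equivalent to <code>type[Foo] | type[Bar]</code>, so Pyright can often solve <code>T</code> as <code>Foo | Bar</code> against a plain <code>type[T]</code> parameter, and <code>isinstance()</code> accepts union objects at runtime on Python 3.10+. That might make the explicit <code>UnionType</code> annotation unnecessary:</em></p> <pre class="lang-py prettyprint-override"><code>def query(
    collection: list[Component],
    value: Callable[[T], Any],
    kind: type[T] = Component,  # a Foo | Bar union object may also match type[T]
) -&gt; list:
    return [value(item) for item in collection if isinstance(item, kind)]

query(coll, lambda x: x.f, kind=Foo | Bar)  # Pyright may infer x: Foo | Bar
</code></pre>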
<python><python-typing><pyright>
2025-05-03 04:14:48
0
304,256
Phrogz
79,604,302
3,163,618
Check if Azure Blob is directory from azure.storage.blob.ContainerClient.list_blobs
<p>I am using the <a href="https://learn.microsoft.com/en-us/python/api/azure-storage-blob/azure.storage.blob.containerclient?view=azure-python#azure-storage-blob-containerclient-list-blobs" rel="nofollow noreferrer"><code>list_blobs</code></a> function where directories are also treated as blobs. So far I have been checking <code>blob.content_settings.content_type is None</code>, but I don't know if other non-directory blobs can have <code>None</code> content type (and not <code>application/octet-stream</code>). What is the canonical way to check if a blob is a directory?</p> <p><a href="https://stackoverflow.com/questions/60319339/azure-storage-sdk-check-if-blob-is-directory-in-hierarchical-storage">Azure Storage SDK - check if Blob is directory in hierarchical storage</a> Similar question for C#, but I cannot find the analogous <code>CloudBlobDirectory</code> type in python.</p> <p>The blob storage is ADLS Gen2, so it should have hierarchical namespaces. <a href="https://learn.microsoft.com/en-us/azure/storage/blobs/storage-blobs-list-python#use-a-flat-listing" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/storage/blobs/storage-blobs-list-python#use-a-flat-listing</a> the note here says directories appear as 0-length blobs?</p>
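<p><em>A hedged sketch for the ADLS Gen2 case: the directory placeholder blobs that a hierarchical-namespace account creates carry the metadata flag <code>hdi_isfolder == &quot;true&quot;</code>, so asking <code>list_blobs</code> to include metadata lets you test for that flag explicitly (connection details below are placeholders):</em></p> <pre class="lang-py prettyprint-override"><code>from azure.storage.blob import ContainerClient

container = ContainerClient.from_connection_string(
    '&lt;connection-string&gt;', 'my-container'   # placeholders
)

for blob in container.list_blobs(include=['metadata']):
    # Directory placeholder blobs on hierarchical-namespace accounts are
    # zero-length and tagged with hdi_isfolder in their metadata.
    is_directory = (blob.metadata or {}).get('hdi_isfolder') == 'true'
    print(blob.name, '(dir)' if is_directory else '(file)')
</code></pre>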
<python><azure><directory><azure-blob-storage>
2025-05-03 02:02:56
2
11,524
qwr
79,604,283
12,980,093
Palindromes and string slicing. Performance
<p>There are a lot of ways to check if a string is a palindrome. Plenty of them are listed <a href="https://stackoverflow.com/questions/17331290/how-to-check-for-palindrome-using-python-logic">here</a>.</p> <p>This question is not about &quot;how&quot; but rather about performance.</p> <p>I was assuming that <code>is_palindrome</code> should be twice as fast as <code>is_palindrome0</code> because it does <code>len/2</code> iterations in the worst case.</p> <p>However, in reality, <code>is_palindrome</code> takes more than 30 seconds to check all the strings while <code>is_palindrome0</code> gets the job done in less than half of a second!</p> <p><strong>Test-case</strong></p> <pre><code>def is_palindrome(s): for i in range(len(s) // 2): if s[i] != s[len(s)-i-1]: return 0 return 1 def is_palindrome0(s): if s == s[::-1]: return 1 else: return 0 N = 500 L = 99999 sss = '101' * L import time start = time.time() print(sum([1 for i in range(N) if is_palindrome0(sss+sss[i:])])) end = time.time() print(f'{(end - start):.2f}') start = time.time() print(sum([1 for i in range(N) if is_palindrome(sss+sss[i:])])) end = time.time() print(f'{(end - start):.2f}') </code></pre> <p><strong>Output</strong></p> <pre><code>168 0.40 168 34.20 </code></pre> <p><strong>Any ideas why string slicing is so crazy fast? How to debug further?</strong></p> <p>Apologies if I missed an answered question with an in-depth performance comparison.</p> <p><strong>UPDATE.</strong> Taking into account Frank's comment.</p> <pre><code>def is_palindrome(s): l = len(s) for i in range(l // 2): if s[i] != s[l-i-1]: return 0 return 1 def is_palindrome0(s): if s == s[::-1]: return 1 else: return 0 N = 500 L = 99999 sss = '101' * L import time start = time.time() print(sum([1 for i in range(N) if is_palindrome0(sss+sss[i:])])) end = time.time() print(f'{(end - start):.2f}') start = time.time() print(sum([1 for i in range(N) if is_palindrome(sss+sss[i:])])) end = time.time() print(f'{(end - start):.2f}') </code></pre> <p>A bit faster but still around 50x slower.</p> <pre><code>168 0.41 168 25.11 </code></pre>
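<p><em>The short explanation: <code>s == s[::-1]</code> does all of its work in C (one slice copy plus one memcmp-style comparison), while the explicit loop executes interpreted bytecode and creates a new one-character string object for every <code>s[i]</code> lookup. A middle ground that keeps the ~len/2 comparison but stays in C is to compare the first half against the reversed second half as slices:</em></p> <pre class="lang-py prettyprint-override"><code>def is_palindrome_half(s):
    half = len(s) // 2
    # s[:-half - 1:-1] is the last `half` characters, reversed, built in C.
    return 1 if s[:half] == s[:-half - 1:-1] else 0

# Sanity checks against the definitions above:
assert is_palindrome_half('abcba') == 1
assert is_palindrome_half('abccba') == 1
assert is_palindrome_half('abcda') == 0
assert is_palindrome_half('') == 1
</code></pre> <p><em>In CPython, per-character indexing in a Python-level loop is the dominant cost, not the temporary copy that slicing allocates.</em></p>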
<python><algorithm><performance>
2025-05-03 01:14:55
2
653
Slimboy Fat
79,604,252
3,840,940
Issue with pyflink api code for inserting data into sql
<p>I use the following development environment.</p> <ul> <li>Flink : 2.0</li> <li>MySQL : 8.0.4</li> <li>JDK : 17.0.2</li> </ul> <p>And I develop python flink API code, which inserts a simple data into MySQL.</p> <pre class="lang-python prettyprint-override"><code>from pyflink.common import Types from pyflink.datastream import StreamExecutionEnvironment from pyflink.datastream.connectors.jdbc import JdbcConnectionOptions, JdbcExecutionOptions, JdbcSink from pyflink.datastream.stream_execution_environment import RuntimeExecutionMode env = StreamExecutionEnvironment.get_execution_environment() env.set_runtime_mode(RuntimeExecutionMode.STREAMING) env.set_parallelism(1) env.add_jars('file:///Users/joseph/Downloads/flink-connector-jdbc-core-4.0.0-2.0.jar', 'file:///Users/joseph/Downloads/flink-connector-jdbc-mysql-4.0.0-2.0.jar', 'file:///Users/joseph/Downloads/mysql-connector-j-8.4.0.jar') type_info = Types.ROW([Types.STRING(), Types.INT()]) ds = env.from_collection([('GM', 300), ('Volvo', 400)], type_info=type_info) insert_sql = 'insert into Car (brand, price) values (?, ?)' jdbc_exe_option = JdbcExecutionOptions.builder() \ .with_batch_interval_ms(1000) \ .with_batch_size(200) \ .with_max_retries(5) \ .build() jdbc_conn_option = JdbcConnectionOptions.JdbcConnectionOptionsBuilder() \ .with_url('jdbc:mysql://localhost:3306/etlmysql') \ .with_driver_name('com.mysql.cj.jdbc.Driver') \ .with_user_name('root') \ .with_password('p@$$w0rd') \ .build() sink = JdbcSink.sink(insert_sql, type_info, jdbc_conn_option, jdbc_exe_option) ds.add_sink(sink) env.execute() env.close() </code></pre> <p>But the error message are thrown from <code>JdbcSink.sink</code> line.</p> <pre class="lang-none prettyprint-override"><code>File &quot;c:\VSCode_Workspace\flink-python\com\aaa\flink\flink-jdbc-test.py&quot;, line 32, in &lt;module&gt; sink = JdbcSink.sink(insert_sql, type_info, jdbc_conn_option, jdbc_exe_option) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\VSCode_Workspace\.venv-etl\Lib\site-packages\pyflink\datastream\connectors\jdbc.py&quot;, line 60, in sink j_builder_method = output_format_clz.getDeclaredMethod('createRowJdbcStatementBuilder', ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\VSCode_Workspace\.venv-etl\Lib\site-packages\py4j\java_gateway.py&quot;, line 1322, in __call__ return_value = get_return_value( ^^^^^^^^^^^^^^^^^ File &quot;C:\VSCode_Workspace\.venv-etl\Lib\site-packages\pyflink\util\exceptions.py&quot;, line 162, in deco return f(*a, **kw) ^^^^^^^^^^^ File &quot;C:\VSCode_Workspace\.venv-etl\Lib\site-packages\py4j\protocol.py&quot;, line 326, in get_return_value raise Py4JJavaError( py4j.protocol.Py4JJavaError: An error occurred while calling o71.getDeclaredMethod. 
: java.lang.NoSuchMethodException: org.apache.flink.connector.jdbc.internal.JdbcOutputFormat.createRowJdbcStatementBuilder([I) at java.base/java.lang.Class.getDeclaredMethod(Class.java:2675) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:568) </code></pre> <p>For your information, I attach the Java code for the same Flink MySQL connector.</p> <pre class="lang-java prettyprint-override"><code>JdbcExecutionOptions jdbcExeOption = JdbcExecutionOptions.builder() .withBatchSize(1000) .withBatchIntervalMs(200) .withMaxRetries(5) .build(); JdbcConnectionOptions jdbcConnOption = new JdbcConnectionOptions.JdbcConnectionOptionsBuilder() .withUrl(&quot;jdbc:mysql://localhost:3306/etlmysql&quot;) .withDriverName(&quot;com.mysql.cj.jdbc.Driver&quot;) .withUsername(&quot;root&quot;) .withPassword(&quot;p@$$w0rd&quot;) .build(); JdbcSink&lt;Car&gt; jdbcSink = new JdbcSinkBuilder&lt;Car&gt;() .withQueryStatement( &quot;INSERT INTO car VALUES(?,?)&quot;, (statement, car) -&gt; { statement.setString(1, car.getBrand()); statement.setInt(2, car.getPrice()); }) .withExecutionOptions(jdbcExeOption) .buildAtLeastOnce(jdbcConnOption); stream.sinkTo(jdbcSink); </code></pre> <p>As you can see, the Java <code>JdbcSink</code> connector class has a different shape from the Python <code>JdbcSink</code> connector. In the Java code, the <code>jdbcSink</code> object is built with the <code>JdbcSinkBuilder</code> class, but in Python it is not. I think these errors are due to an API version mismatch. Any ideas on how to solve these errors?</p>
<python><java><apache-flink><flink-streaming><pyflink>
2025-05-02 23:57:47
0
1,441
Joseph Hwang
79,604,226
13,413,858
Performance of list.extend slice vs islice
<p>It seems that even when <code>islice</code> would theoretically be better, in practice, it is slower than just using <code>slice</code>. So I am a bit puzzled by the difference in performance between the usage of <code>slice</code> and <code>islice</code> here:</p> <pre class="lang-py prettyprint-override"><code>from time import perf_counter from itertools import islice from random import choices from string import ascii_letters import sys test_str = &quot;&quot;.join(choices(ascii_letters, k=10 ** 7)) arr1 = [] t0 = perf_counter() for _ in range(10): arr1.extend(test_str[slice(1, sys.maxsize)]) t1 = perf_counter() print('%5.1f ms ' % ((t1 - t0) * 1e3)) arr2 = [] t0 = perf_counter() for _ in range(10): arr2.extend(islice(test_str, 1, sys.maxsize)) t1 = perf_counter() print('%5.1f ms ' % ((t1 - t0) * 1e3)) </code></pre> <pre><code>552.6 ms 786.0 ms </code></pre> <p>To justify why I believe the difference to be unexpected, here is my mental model for how the execution goes for both cases:</p> <p>Strict Slicing:</p> <ul> <li>Allocate tmp, where <code>tmp</code> is the immutable string from <code>1</code>th index to the end of the original string.</li> <li>Acquire an iterator over <code>tmp</code>.</li> <li>Create string objects for each individual character.</li> <li>Pass those string objects to be appended to the list.</li> <li>Garbage collect <code>tmp</code>.</li> </ul> <p>Lazy Slicing:</p> <ul> <li>Acquire an iterator over <code>test_str</code>, discarding the first value.</li> <li>Create string objects for each individual character.</li> <li>Pass those string objects to be appended to the list.</li> </ul> <p>I don't see how allocating the whole string object at once just to be garbage collected later leads to still better performance, especially considering the fact that both slicing methods are implemented in <code>C</code>.</p>
<python><slice><python-itertools>
2025-05-02 23:11:20
2
494
Mathias Sven
79,604,129
1,614,051
How can I annotate a function that takes a union, and returns one of the types in the union?
<p>Suppose I want to annotate this function:</p> <pre><code>def add_one(value): match value: case int(): return value + 1 case str(): return value + &quot; and one more&quot; case _: raise TypeError() </code></pre> <p>I want to tell the type checker &quot;This function can be called with an int (or subclass) or a str (or subclass). In the former case it returns an int, and in the latter a str.&quot; How can this be accomplished?</p> <p>Here are some of my failed attempts:</p> <p><strong>Attempt 1</strong></p> <pre><code>def add_one(value: int | str) -&gt; int | str: ... </code></pre> <p>This is too loose. The type checker no longer knows that the returned type is similar to the argument. Passing an <code>int</code> might return a <code>str</code>.</p> <p><strong>Attempt 2</strong></p> <pre><code>def add_one[T: int | str](value: T) -&gt; T: ... </code></pre> <p>This is incorrect. It doesn't return literally the same type as the argument. If passed an <code>IntEnum</code> it returns <code>int</code>, and for a <code>StrEnum</code> it returns <code>str</code>.</p> <p><strong>Attempt 3</strong></p> <pre><code>def add_one[T: (int, str)](value: T) -&gt; T: ... </code></pre> <p>This is better, but now I can't call <code>add_one</code> with <code>Union[int, str]</code>.</p> <p>These <a href="https://stackoverflow.com/questions/59933946/difference-between-typevart-a-b-and-typevart-bound-uniona-b">ques</a>-<a href="https://stackoverflow.com/questions/58903906/whats-the-difference-between-a-constrained-typevar-and-a-union">tions</a> talk about the differences between bounds and constraints, but I lack the brainpower to use them to solve my problem.</p> <p><strong>Attempt 4</strong></p> <pre><code>@overload def add_one(value: int) -&gt; int: ... @overload def add_one(value: str) -&gt; str: ... def add_one(value: int | str) -&gt; int | str: # Put real implementation here ... </code></pre> <p>This is the best I can do. It does the right thing, but requires me to type out the function signature for every possible type it handles. Doesn't seem like much for two types, but my real code already has 7 or 8, and I intend to add more. It also requires me to manually expand any <code>Union</code>.</p> <p>Is there some better way to tell the type checker &quot;Here's a <code>Union</code> and a <code>T</code>. Make the <code>T</code> be whatever union arg matched.&quot;?</p>
<python><python-typing>
2025-05-02 21:18:02
1
2,072
Filipp
79,604,001
11,515,528
pandas memory issue when apply list to groupby
<p>I am doing the below but getting memory issues.</p> <p>Make the frame:</p> <pre><code>data = {'link': [1,2,3,4,5,6,7], 'code': ['xx', 'xx', 'xy', '', 'aa', 'ab', 'aa'], 'Name': ['Tom', 'Tom', 'Tom', 'Tom', 'nick', 'nick', 'nick'], 'Age': [20,20,20,20, 21, 21, 21]} # Create DataFrame df = pd.DataFrame(data) print(df) </code></pre> <p>output</p> <pre><code> link code Name Age 0 1 xx Tom 20 1 2 xx Tom 20 2 3 xy Tom 20 3 4 Tom 20 4 5 aa nick 21 5 6 ab nick 21 6 7 aa nick 21 </code></pre> <p>Minimal code example that works on a subset of the data but not on the full dataset:</p> <pre><code>temp = df.groupby(['Name', 'Age'])['code'].apply(list).reset_index() pd.merge(df, temp, on=['Name', 'Age']).explode('code_y').replace(r'^\s*$', np.nan, regex=True).dropna(subset='code_y').drop_duplicates() </code></pre> <p>output <a href="https://i.sstatic.net/3BeMZ1lD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3BeMZ1lD.png" alt="enter image description here" /></a></p> <p>Error when used on the full dataset:</p> <pre><code>### MemoryError: Unable to allocate 5.34 TiB for an array with shape (733324768776,) and data type object </code></pre> <p>The apply(list) makes a big long list with duplicates. Is there a way to drop dups from the lists, or is there maybe a better way to do this?</p> <p><strong>update</strong></p> <p>Going with this code as it seems to work best.</p> <pre><code># Select relevant columns and drop duplicates directly d = df[['code', 'Name', 'Age']].replace('', pd.NA).drop_duplicates() # Perform the merge and drop rows with missing 'code_y' in one step df.merge(d, how='outer', on=['Name', 'Age']).dropna(subset=['code_y']) </code></pre>
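<p><em>For reference, a hedged alternative sketch: letting <code>groupby</code> deduplicate the codes before the merge keeps each group's list small, so the exploded frame never materialises the duplicates that blow up memory:</em></p> <pre class="lang-py prettyprint-override"><code>import pandas as pd

temp = (
    df.replace('', pd.NA)                 # blank codes become NA
      .groupby(['Name', 'Age'])['code']
      .unique()                           # distinct codes per group
      .reset_index(name='code_y')
)
out = (
    df.merge(temp, on=['Name', 'Age'])
      .explode('code_y')
      .dropna(subset=['code_y'])          # drop rows coming from NA codes
)
print(out)
</code></pre>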
<python><pandas><memory>
2025-05-02 19:15:24
2
1,865
Cam
79,603,739
638,231
pyvespa: what is an efficient way to retrieve all document ids
<p>I am using a private Vespa server.</p> <p>Given a schema with a large number of documents, what is the best way to retrieve all document IDs?</p> <p>A query to match the relevant docs would be something like:</p> <pre><code>&quot;select id from my_container where my_field contains \&quot;my_value\&quot;;&quot; </code></pre> <p>Indexing for the relevant field is enabled.</p> <p>The output size is limited by the maxHits value, which is configured on the server side.</p> <p>One can also use the <code>visit</code> method with <code>selection=my_container.my_field==\&quot;my_value\&quot;</code>.</p> <p>This works, but it is slow. In my case, sometimes only a small fraction of documents match the search criteria, and using <code>visit</code> seems to be inefficient.</p>
<python><vespa>
2025-05-02 16:11:20
0
15,053
pic11
79,603,730
2,687,427
How to type hint overload Callable and type differently?
<p>How to type hint overload the return type of a Callable and type differently and safely? The problem I'm having is that the <code>type</code> doesn't support <code>ParamSpec</code>, so I can't use that to enforce <code>args</code> and <code>kwargs</code>, in the same way as as <code>Callable</code>.</p> <pre class="lang-py prettyprint-override"><code>from collections.abc import Callable import typing_extensions as t P = t.ParamSpec(&quot;P&quot;) T_co = t.TypeVar(&quot;T_co&quot;, covariant=True) class ExpectCallable(t.Generic[P, T_co]): ... class ExpectType(ExpectCallable[P, T_co]): ... @t.overload def expect( v: type[T_co], /, *args: P.args, **kwargs: P.kwargs ) -&gt; ExpectType[P, T_co]: ... @t.overload def expect( v: Callable[P, T_co], /, *args: P.args, **kwargs: P.kwargs ) -&gt; ExpectCallable[P, T_co]: ... def expect( v: type[T_co] | Callable[P, T_co], /, *args: P.args, **kwargs: P.kwargs ) -&gt; ExpectType[P, T_co] | ExpectCallable[P, T_co]: return ExpectType() if isinstance(v, type) else ExpectCallable() class A: def __init__(self, inp: str, /) -&gt; None: ... def fn(inp: str, /) -&gt; None: ... t.assert_type(expect(A), ExpectType[[], A]) # Shouldn't be allowed t.assert_type(expect(fn, &quot;inp&quot;), ExpectCallable[[str], None]) </code></pre> <p>As you can see that <code>t.assert_type(expect(A), ExpectType[[], A])</code> passes with no errors, even though <code>A</code> is suppose to take in a <code>str</code> argument. I'm expecting <code>expect(A)</code> to require a <code>str</code> argument, similar to how <code>expect(fn, &quot;inp&quot;)</code> requires the second parameter <code>&quot;inp&quot;</code>.</p> <p>I understand that this is because <code>A</code> is a <code>type</code>, which doesn't have <code>P</code> to type the <code>P.args</code> and <code>P.kwargs</code>, but is there a way to achieve this in a type-safe way?</p> <p>Note that I haven't used <code>*args</code> and <code>**kwargs</code> in the example for simplicity, but they are meant to be used inside of the <code>Expect</code> classes.</p>
<python><python-typing><mypy>
2025-05-02 16:08:02
0
3,472
Nelson Yeung
79,603,715
3,690,024
ssl socket without certificates
<p>Is it possible to establish an SSL socket between a client and server without certificates?</p> <p>The goal here is to prevent (or at least, impede) eavesdropping on the communication between two processes. Looking for encrypted communication only - not identity verification.</p> <p>Client/server processes are short lived, on several machines, possibly non-internet connected. All machines would get the same files/code, so a &quot;private key&quot; would be common knowledge, defeating the meaning of &quot;private.&quot;</p> <h1>Code</h1> <h2>server.py</h2> <pre><code>import socket import ssl address = ('localhost', 12345) context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) context.verify_mode = ssl.CERT_NONE with socket.socket(socket.AF_INET, socket.SOCK_STREAM, 0) as unsecured_socket: unsecured_socket.bind(address) unsecured_socket.listen(5) with context.wrap_socket(unsecured_socket, server_side=True) as sock: conn, addr = sock.accept() print(f&quot;Accepted: {addr}&quot;) </code></pre> <h2>client.py</h2> <pre><code>import socket import ssl address = ('localhost', 12345) context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) context.check_hostname = False context.verify_mode = ssl.CERT_NONE with socket.create_connection(address) as unsecured_socket: with context.wrap_socket(unsecured_socket) as sock: print(sock.version()) </code></pre> <h1>Result</h1> <h2>Server error</h2> <pre class="lang-none prettyprint-override"><code>Traceback (most recent call last): File &quot;C:/experiments/server.py&quot;, line 13, in &lt;module&gt; conn, addr = sock.accept() File &quot;C:\Program Files\Python313\Lib\ssl.py&quot;, line 1418, in accept newsock = self.context.wrap_socket(newsock, File &quot;C:\Program Files\Python313\Lib\ssl.py&quot;, line 455, in wrap_socket return self.sslsocket_class._create( File &quot;C:\Program Files\Python313\Lib\ssl.py&quot;, line 1076, in _create self.do_handshake() File &quot;C:\Program Files\Python313\Lib\ssl.py&quot;, line 1372, in do_handshake self._sslobj.do_handshake() ssl.SSLError: [SSL: NO_SHARED_CIPHER] no shared cipher (_ssl.c:1028) </code></pre> <h2>Client error</h2> <pre class="lang-none prettyprint-override"><code>Traceback (most recent call last): File &quot;C:\experiments\client.py&quot;, line 11, in &lt;module&gt; with context.wrap_socket(unsecured_socket) as sock: File &quot;C:\Program Files\Python313\Lib\ssl.py&quot;, line 455, in wrap_socket return self.sslsocket_class._create( File &quot;C:\Program Files\Python313\Lib\ssl.py&quot;, line 1076, in _create self.do_handshake() File &quot;C:\Program Files\Python313\Lib\ssl.py&quot;, line 1372, in do_handshake self._sslobj.do_handshake() ssl.SSLError: [SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:1028) </code></pre>
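<p><em>Since this is Python 3.13, one certificate-free option may be TLS-PSK (pre-shared key), added to the <code>ssl</code> module in 3.13 via <code>set_psk_client_callback</code>/<code>set_psk_server_callback</code>. A hedged sketch of just the context setup (the PSK cipher suites used here need the protocol capped at TLS 1.2, and the underlying OpenSSL must have PSK support compiled in; as noted above, a key that ships with the code encrypts the channel but authenticates nobody):</em></p> <pre class="lang-py prettyprint-override"><code>import ssl

PSK = b'sixteen-byte-key'   # shared secret; 'private' only as far as the files are

# Server side
server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
server_ctx.maximum_version = ssl.TLSVersion.TLSv1_2  # PSK suites are TLS 1.2
server_ctx.set_ciphers('PSK')
server_ctx.set_psk_server_callback(lambda identity: PSK)

# Client side
client_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
client_ctx.check_hostname = False
client_ctx.verify_mode = ssl.CERT_NONE
client_ctx.maximum_version = ssl.TLSVersion.TLSv1_2
client_ctx.set_ciphers('PSK')
client_ctx.set_psk_client_callback(lambda hint: (None, PSK))
</code></pre> <p><em>The <code>wrap_socket</code> calls from the snippets above should then work unchanged with these contexts.</em></p>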
<python><python-3.x><sockets><ssl>
2025-05-02 16:00:10
1
8,747
AJNeufeld
79,603,699
14,589,610
How can I apply side effects to a function of an instance that is created in a function that is tested?
<p>I have a custom class in the file MyClass.py</p> <pre><code>class MyClass: def __init__(self): self.fetched_data = False def get_data(self): self.fetched_data = True return [1, 2, 3] </code></pre> <p>I import and use that class in the file <code>function_to_test.py</code>:</p> <pre><code>from MyClass import MyClass def do_data_calculations(): my_class = MyClass() data = my_class.get_data() print(data) print(f&quot;Data fetched: {my_class.fetched_data}&quot;) return True </code></pre> <p>This is the code I got and I cannot change it. I am writing unit tests for it using the <code>unittest</code> package, and since I cannot execute the <code>get_data</code> functions properly I have to mock them. To do this I have written this code:</p> <pre><code>from function_to_test import do_data_calculations import unittest from unittest.mock import patch def get_data_side_effect(self): self.fetched_data = True return [10, 9, 8] @patch('function_to_test.MyClass.get_data') class TestMyClass(unittest.TestCase): def test_greet(self, mock_get_data): mock_get_data.side_effect = get_data_side_effect do_data_calculations() self.assertTrue(mock_get_data.called) if __name__ == &quot;__main__&quot;: unittest.main() </code></pre> <p>running the unittest I get the error</p> <pre><code>Traceback (most recent call last): File &quot;C:\Users\user4219\AppData\Local\miniconda3\envs\azure\lib\unittest\mock.py&quot;, line 1379, in patched return func(*newargs, **newkeywargs) File &quot;c:\Users\user4219\source\projects\mockingtest\test_function.py&quot;, line 14, in test_greet do_data_calculations() File &quot;c:\Users\user4219\source\projects\mockingtest\function_to_test.py&quot;, line 5, in do_data_calculations data = my_class.get_data() File &quot;C:\Users\user4219\AppData\Local\miniconda3\envs\azure\lib\unittest\mock.py&quot;, line 1114, in call return self._mock_call(*args, **kwargs) File &quot;C:\Users\user4219\AppData\Local\miniconda3\envs\azure\lib\unittest\mock.py&quot;, line 1118, in _mock_call return self._execute_mock_call(*args, **kwargs) File &quot;C:\Users\user4219\AppData\Local\miniconda3\envs\azure\lib\unittest\mock.py&quot;, line 1179, in _execute_mock_call result = effect(*args, **kwargs) TypeError: get_data_side_effect() missing 1 required positional argument: 'self' </code></pre> <p>When I read the documentation it seems to be the case that the side effect gets the parameters with which the function is called. But when I try it it does not seem to be the case.<br /> If I omit the 'self' parameter (and the self.fetched_data = True line), It works fine and it prints the [10, 9, 8] data. When I use the @patch line at the function level, using the new=callable parameter, I get it to work, but since I have to write loads of tests patching this same function, I want to keep it at the class level (and I cannot use the new=callable line since each function has to call it with different parameters).</p> <p>How can I change this to be able to apply the side effects to the class instance?</p>
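<p><em>A hedged sketch of one way to keep the class-level patch: with <code>autospec=True</code>, <code>patch</code> builds the mock from the real unbound method, so calls made through an instance pass <code>self</code> along to the <code>side_effect</code>:</em></p> <pre class="lang-py prettyprint-override"><code>from function_to_test import do_data_calculations
import unittest
from unittest.mock import patch

def get_data_side_effect(self):
    # With autospec=True the mock is called with the instance, so this
    # side effect receives `self` just like the real method would.
    self.fetched_data = True
    return [10, 9, 8]

@patch('function_to_test.MyClass.get_data', autospec=True)
class TestMyClass(unittest.TestCase):
    def test_greet(self, mock_get_data):
        mock_get_data.side_effect = get_data_side_effect
        do_data_calculations()
        self.assertTrue(mock_get_data.called)

if __name__ == '__main__':
    unittest.main()
</code></pre>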
<python><python-unittest><python-unittest.mock>
2025-05-02 15:49:00
1
530
DwightFromTheOffice
79,603,558
2,263,683
How to set index url for uv like pip configurations
<p>When using <code>pip</code> to install Python packages, we can set the configuration so that it refers to some private repository to install packages. The use case is, for example, big companies that, for security reasons, keep all packages in their own repo and only trust those sources. In such cases, we can create a <code>pip.conf</code> file on macOS or Linux, or a <code>pip.ini</code> file on Windows, in which we can change the index url like this:</p> <pre><code>[global] index-url=https://path/to/private/repo/simple1 extra-index-url=https://path/to/private/repo/simple2 </code></pre> <p>I need to do the same thing while using <code>uv</code> as my package manager. How can I do that?</p>
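<p><em>A hedged sketch of what I have found so far (option names may vary with the uv version; check uv's settings reference): uv reads indexes from <code>uv.toml</code> (or the <code>[tool.uv]</code> tables in <code>pyproject.toml</code>) and also honours environment variables such as <code>UV_INDEX_URL</code>/<code>UV_EXTRA_INDEX_URL</code>, which mirror the pip options above:</em></p> <pre><code># uv.toml (per project) or ~/.config/uv/uv.toml (per user);
# in pyproject.toml the same tables are spelled [[tool.uv.index]].
[[index]]
url = &quot;https://path/to/private/repo/simple1&quot;
default = true

[[index]]
url = &quot;https://path/to/private/repo/simple2&quot;
</code></pre>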
<python><pip><uv>
2025-05-02 14:21:51
2
15,775
Ghasem
79,603,414
1,245,659
unexpected keyword in createsuperuser django
<p>I am working with AbstractBaseUser and BaseUserManager, and I have a problem with a required field.</p> <p>models.py</p> <pre><code>from django.db import models from django.conf import settings from django.contrib.auth.models import User, AbstractBaseUser, BaseUserManager from django.utils.timezone import timedelta, now from django.core.exceptions import ValidationError # File validation function def validate_file_type(value): allowed_types = [&quot;application/pdf&quot;, &quot;application/msword&quot;, &quot;application/vnd.openxmlformats-officedocument.wordprocessingml.document&quot;] if value.content_type not in allowed_types: raise ValidationError(&quot;Only PDF and Word documents are allowed.&quot;) class CustomUserManager(BaseUserManager): &quot;&quot;&quot;Manager for CustomUser&quot;&quot;&quot; def create_user(self, email, password=None, role=&quot;customer&quot;): if not email: raise ValueError(&quot;Users must have an email address&quot;) user = self.model(email=self.normalize_email(email), role=role) user.set_password(password) user.save(using=self._db) return user def create_superuser(self, email, password=None): user = self.create_user(email, password, role=&quot;admin&quot;) user.is_staff = True user.is_superuser = True user.save(using=self._db) return user class CustomUser(AbstractBaseUser): &quot;&quot;&quot;Custom user model using email authentication&quot;&quot;&quot; ROLE_CHOICES = [ (&quot;vendor&quot;, &quot;Vendor&quot;), (&quot;customer&quot;, &quot;Customer&quot;), (&quot;admin&quot;, &quot;Admin&quot;), ] email = models.EmailField(unique=True) role = models.CharField(max_length=10, choices=ROLE_CHOICES, default=&quot;customer&quot;) is_active = models.BooleanField(default=True) is_staff = models.BooleanField(default=False) # Required for Django admin is_superuser = models.BooleanField(default=False) # Required for superuser checks objects = CustomUserManager() # Use the custom manager USERNAME_FIELD = &quot;email&quot; # Set email as the primary login field REQUIRED_FIELDS = [&quot;role&quot;] def __str__(self): return self.email </code></pre> <p>admin.py</p> <pre><code>from django.contrib import admin from django.contrib.auth.admin import UserAdmin from django.contrib.auth import get_user_model CustomUser = get_user_model() class CustomUserAdmin(UserAdmin): &quot;&quot;&quot;Admin panel customization for CustomUser&quot;&quot;&quot; model = CustomUser list_display = (&quot;email&quot;, &quot;role&quot;, &quot;is_staff&quot;, &quot;is_active&quot;) ordering = (&quot;email&quot;,) search_fields = (&quot;email&quot;,) fieldsets = ( (None, {&quot;fields&quot;: (&quot;email&quot;, &quot;password&quot;)}), (&quot;Roles &amp; Permissions&quot;, {&quot;fields&quot;: (&quot;role&quot;, &quot;is_staff&quot;, &quot;is_superuser&quot;, &quot;is_active&quot;)}), ) add_fieldsets = ( (None, { &quot;classes&quot;: (&quot;wide&quot;,), &quot;fields&quot;: (&quot;email&quot;, &quot;role&quot;, &quot;password1&quot;, &quot;password2&quot;, &quot;is_staff&quot;, &quot;is_superuser&quot;, &quot;is_active&quot;) }), ) # Remove `groups` since it's not part of CustomUser filter_horizontal = [] # Instead of filter_horizontal = [&quot;groups&quot;] list_filter = (&quot;role&quot;, &quot;is_staff&quot;, &quot;is_active&quot;) # No 'groups' admin.site.register(CustomUser, CustomUserAdmin) </code></pre> <p>When I ran <code>python manage.py createsuperuser</code>, it asked for my email, password, and role, which is what I expected it to do. 
However, when I hit return, it said that the role is an unexpected keyword. How do I fix this?</p> <p>The Error:</p> <pre><code>Traceback (most recent call last): File &quot;/PycharmProjects/RFPoject2/manage.py&quot;, line 22, in &lt;module&gt; main() File &quot;/PycharmProjects/RFPoject2/manage.py&quot;, line 18, in main execute_from_command_line(sys.argv) File &quot;/PycharmProjects/RFPoject2/.venv/lib/python3.11/site-packages/django/core/management/__init__.py&quot;, line 442, in execute_from_command_line utility.execute() File &quot;/PycharmProjects/RFPoject2/.venv/lib/python3.11/site-packages/django/core/management/__init__.py&quot;, line 436, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File &quot;/PycharmProjects/RFPoject2/.venv/lib/python3.11/site-packages/django/core/management/base.py&quot;, line 416, in run_from_argv self.execute(*args, **cmd_options) File &quot;/PycharmProjects/RFPoject2/.venv/lib/python3.11/site-packages/django/contrib/auth/management/commands/createsuperuser.py&quot;, line 90, in execute return super().execute(*args, **options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/PycharmProjects/RFPoject2/.venv/lib/python3.11/site-packages/django/core/management/base.py&quot;, line 460, in execute output = self.handle(*args, **options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/PycharmProjects/RFPoject2/.venv/lib/python3.11/site-packages/django/contrib/auth/management/commands/createsuperuser.py&quot;, line 239, in handle self.UserModel._default_manager.db_manager(database).create_superuser( TypeError: CustomUserManager.create_superuser() got an unexpected keyword argument 'role' </code></pre> <p>Does <code>createsuperuser</code> require something special to make this work? Thanks!</p>
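<p><em>Likely cause and a hedged sketch of a fix: <code>createsuperuser</code> prompts for every entry in <code>REQUIRED_FIELDS</code> and passes them to <code>create_superuser()</code> as keyword arguments, so the manager methods need to accept <code>role</code> (and ideally <code>**extra_fields</code>):</em></p> <pre><code>class CustomUserManager(BaseUserManager):
    def create_user(self, email, password=None, role='customer', **extra_fields):
        if not email:
            raise ValueError('Users must have an email address')
        user = self.model(email=self.normalize_email(email), role=role, **extra_fields)
        user.set_password(password)
        user.save(using=self._db)
        return user

    def create_superuser(self, email, password=None, role='admin', **extra_fields):
        # Accept the 'role' keyword that createsuperuser passes in.
        user = self.create_user(email, password, role=role, **extra_fields)
        user.is_staff = True
        user.is_superuser = True
        user.save(using=self._db)
        return user
</code></pre>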
<python><django>
2025-05-02 12:49:41
2
305
arcee123
79,603,298
6,734,243
What is the oldest leap year in pandas?
<p>I'm working with a day-of-year column that ranges from 1 to 366 (to account for leap years). I need to convert this column into a date for a specific task, and I would like to set it to a year that is very unlikely to appear in my time series.</p> <p>Is there a way to set it to the oldest leap year pandas supports?</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd # here is an example where the data have already been converted to datetime objects # I still need to set the year dates = pd.Series(pd.to_datetime(['2023-05-01', '2021-12-15', '2019-07-20'])) first_leap_year = 2000 # this is where I don't know what to set new_dates = dates.apply(lambda d: d.replace(year=first_leap_year)) </code></pre>
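<p><em>For context, a sketch of what I believe the bounds are: nanosecond-resolution timestamps make <code>pd.Timestamp.min</code> fall partway through 1677, so 1677 itself is incomplete and the earliest fully representable leap year should be 1680 (divisible by 4, not a century year):</em></p> <pre class="lang-py prettyprint-override"><code>import pandas as pd

print(pd.Timestamp.min)   # 1677-09-21 00:12:43.145224193

first_leap_year = 1680    # earliest complete leap year in the Timestamp range
dates = pd.Series(pd.to_datetime(['2023-05-01', '2021-12-15', '2019-07-20']))
new_dates = dates.apply(lambda d: d.replace(year=first_leap_year))
print(new_dates)
</code></pre>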
<python><pandas>
2025-05-02 11:30:56
2
2,670
Pierrick Rambaud
79,603,293
1,100,561
Python Pandas Interpolate nan columns with respect to the column values
<p>I have data in Pandas, where the temperatures are the column headers and the COP values are in the matrix.</p> <p>If I interpolate using the pandas interpolate function, I end up with a linear interpolation that is not weighted by the temperatures in the column headers.</p> <p>For example, here is my original data: <a href="https://i.sstatic.net/YL4KHPx7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YL4KHPx7.png" alt="enter image description here" /></a></p> <p>I have added the needed columns and I need to interpolate with respect to the column header values. If I use the interpolate function, the values are interpolated but not with respect to the temperature. See the example of incorrect interpolation: <a href="https://i.sstatic.net/xFyTOtei.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xFyTOtei.png" alt="enter image description here" /></a></p> <p>Is this possible in Pandas, or should I take the data out of the dataframe and rather do it in SciPy?</p>
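<p><em>A hedged sketch of doing this inside pandas: if the column headers are converted to numbers, <code>interpolate(method='index')</code> weights by the actual header values instead of treating columns as equally spaced. Transposing first sidesteps any axis-support differences between pandas versions; the data below is made up for illustration:</em></p> <pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd

df = pd.DataFrame(
    {20: [4.1, 3.9], 25: [np.nan, np.nan], 40: [5.0, 4.8]},
    index=['unit_a', 'unit_b'],
)
df.columns = df.columns.astype(float)       # headers must be numeric

# method='index' interpolates against the numeric labels themselves.
result = df.T.interpolate(method='index').T
print(result)
# 25 is 1/4 of the way from 20 to 40, so the filled values land closer to
# the 20-degree column than a plain positional (linear) interpolation would.
</code></pre>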
<python><pandas><interpolation>
2025-05-02 11:27:25
1
445
CromeX
79,603,209
14,364,672
Privategpt listing ingested document filenames
<p>I'm new to LLMs and need to extract the file names of files that have already been ingested into PrivateGPT, which the system uses to answer questions.</p> <p>I can list the doc_ids using:</p> <pre><code> from pgpt_python.client import PrivateGPTApi client = PrivateGPTApi(base_url=&quot;http://localhost:8001&quot;) # Health print(client.health.health()) # List ingested docs for doc in client.ingestion.list_ingested().data: print(doc.doc_id) </code></pre> <p>which gives an output of:</p> <pre><code>e019a7be-3b0d-45b6-a1f6-195735a20725 8d59d127-3432-47e4-8beb-5465fdb1e72d 8df9068b-fa3c-42b5-b987-0bee211bce0a b56fb71a-fd3e-4cd9-9728-3b70ff045162 dbebc4a6-29af-4ac9-b311-0a53b74cbb4f e9f28c23-5d08-4660-aa4c-8f2389901583 68da2416-57ff-45c7-9af6-dc70175d1a15 f6880d64-434c-4527-8705-df67f19a6dfc 84e66ed4-aa0f-4058-a1b9-19d7a89d8d95 etc </code></pre> <p>However, I haven't been able to find a mechanism to list the five files that the system has ingested.</p> <p>Any help appreciated.</p>
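<p><em>A hedged sketch of something I plan to try (field names may vary by PrivateGPT version): each ingested chunk appears to carry a <code>doc_metadata</code> mapping which, in default setups, includes a <code>file_name</code> key, so collecting the distinct names across all doc_ids might give the file list:</em></p> <pre class="lang-py prettyprint-override"><code>from pgpt_python.client import PrivateGPTApi

client = PrivateGPTApi(base_url='http://localhost:8001')

file_names = set()
for doc in client.ingestion.list_ingested().data:
    meta = doc.doc_metadata or {}           # assumed attribute; verify per version
    file_names.add(meta.get('file_name', '&lt;unknown&gt;'))

for name in sorted(file_names):
    print(name)
</code></pre>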
<python><privategpt>
2025-05-02 10:32:37
1
465
steve
79,602,705
1,838,076
Why is DecisionTree using same feature and same condition twice
<p>When trying to fit <code>scikit-learn DecisionTreeClassifier</code> on my data, I am observing some weird behavior.</p> <p><code>x[54]</code> (a boolean feature) is used to break the <code>19</code> samples into <code>2</code> and <code>17</code> on the top-left node. Then the same feature with the exact same condition appears again in its <code>True</code> branch.</p> <p>This time it again has <code>True</code> and <code>False</code> branches leading to leaf nodes.</p> <p>I am using <code>gini</code> for deciding the split.</p> <p>My question is: since we are in the <code>True</code> branch, how can the same boolean feature generate non-zero entropy or impurity at all? After all, the new set can only have <code>0s</code> for that feature. So there should not be any possibility of a split.</p> <p>What am I missing?</p> <p><a href="https://i.sstatic.net/A2WyGgK8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/A2WyGgK8.png" alt="D-Tree issue" /></a></p>
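<p><em>A hedged debugging sketch (names are generic; <code>clf</code> and <code>X</code> are assumed to be the fitted classifier and training matrix): the raw split table in <code>clf.tree_</code> shows the exact thresholds, and checking the values the column actually takes usually resolves this -- if <code>x[54]</code> truly held only 0/1 in that branch, a second <code>x[54] &lt;= 0.5</code> split could not separate anything, so the plot is worth verifying against the data:</em></p> <pre class="lang-py prettyprint-override"><code>import numpy as np

def dump_splits(clf):
    t = clf.tree_
    for node in range(t.node_count):
        if t.children_left[node] != t.children_right[node]:  # internal node
            print(f'node {node}: X[{t.feature[node]}] &lt;= {t.threshold[node]:.3f}, '
                  f'samples={t.n_node_samples[node]}')

dump_splits(clf)             # exact thresholds, straight from the fitted tree
print(np.unique(X[:, 54]))   # is the feature really only 0/1?
</code></pre>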
<python><machine-learning><scikit-learn><classification><decision-tree>
2025-05-02 01:14:10
0
1,622
Krishna
79,602,585
2,153,235
Why doesn't exec() create objects in local/global scope?
<p>I'm trying to find an IPython counterpart to Spyder's <code>runfile</code>. According to <a href="https://realpython.com/python-exec" rel="nofollow noreferrer">this page</a>, &quot;exec() will execute the input code in the current scope&quot; by default. Therefore, I expect the following to create objects <code>TestFunc</code> and <code>Doggy</code> in the current scope:</p> <pre><code># Script.py #---------- def TestFunc(): print(&quot;I am TestFunc.&quot;) Doggy = &quot;Doggy&quot; </code></pre> <p>To &quot;source&quot; the code from the IPython REPL, I found the following function from <a href="https://www.tutorialspoint.com/how-can-i-make-one-python-file-run-another" rel="nofollow noreferrer">this tutorial</a>, which I paste into the REPL:</p> <pre><code>def execute_python_file_with_exec(file_path): try: with open(file_path, 'r') as file: code = file.read() exec(code) except FileNotFoundError: print(f&quot;Error: The file '{file_path}' does not exist.&quot;) except Exception as e: print(f&quot;An error occurred: {e}&quot;) </code></pre> <p>I then use it to run <code>Script.py</code> and query the local and global namespace:</p> <pre><code>execute_python_file_with_exec('Script.py') print(&quot;Locals:&quot;) for item in dir(): print( item, end=&quot;, &quot; ) print(&quot;Globals:&quot;) for item in globals(): print( item, end=&quot;, &quot; ) </code></pre> <p>Neither of the namespaces contains <code>TestFunc</code> or <code>Doggy</code>.</p> <pre><code>Locals: In, Out, _, _2, _5, _6, __, ___, __builtin__, __builtins__, __doc__, __loader__, __name__, __package__, __spec__, _dh, _i, _i1, _i2, _i3, _i4, _i5, _i6, _i7, _i8, _ih, _ii, _iii, _oh, execute_python_file_with_exec, exit, get_ipython, item, open, Globals: __name__, __doc__, __package__, __loader__, __spec__, __builtin__, __builtins__, _ih, _oh, _dh, In, Out, get_ipython, exit, quit, open, _, __, ___, _i, _ii, _iii, _i1, execute_python_file_with_exec, _i2, _2, _i3, _i4, _i5, _5, _i6, _6, _i7, item, _i8, _i9, In [10]: </code></pre> <p><em><strong>What am I misunderstanding about <code>exec()</code>?</strong></em></p> <p>I am using IPython version 8.15.0 from Anaconda. The <code>%run</code> command only works from the IPython prompt, but I'm also trying to replace the use of <code>runfile</code> within scripts. If I invoke a script using <code>%run</code> from the IPython prompt, and the script also contains <code>%run</code>, it is flagged as an error.</p> <p>I also ruled out <code>import</code>, <code>subprocess</code>, and <code>os.system()</code>, but that is starting to drift from the topic of my question. For those interested, I describe the problems with those commands <a href="https://stackoverflow.com/questions/79601104/how-to-quietly-exit-a-runfile-script#comment140387094_79601104">here</a>.</p> <p>Ideally, there would be an alternative to <code>runfile</code> that executes statements in a source file, but does so in the local scope of code (or REPL) from which <code>runfile</code> was issued. Furthermore, the alternative doesn't require a lot of busy code (like <code>runfile</code>). I realize that I'm wishing for the moon -- hoping that it exists, but prepared for the likelihood that it does not.</p> <p>I considered <em>@jasonharper's</em> approach of explicitly supplying &quot;locals&quot; and &quot;globals&quot; dictionaries as arguments to <code>execute_python_file_with_exec</code>, which then passes them to <code>exec()</code>. 
Unlike <code>globals()</code>, however, <a href="https://www.geeksforgeeks.org/python-locals-function" rel="nofollow noreferrer"><code>locals()</code> only returns a copy of the local variables</a>. Consequently, script <code>Script.py</code> will not be able to add objects to that scope. In fact, <a href="https://stackoverflow.com/a/62127536">this SO answer</a> confirms <em>@jasonharper's</em> explanation that local variables are determined at compile time, and therefore cannot be added to.</p>
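<p><em>For completeness, a sketch of the namespace-dict approach that does work for module-level names (it cannot inject into function locals, for the compile-time reason above, but the REPL's top level really is a globals dict, so passing <code>globals()</code> through does land the names there):</em></p> <pre><code>def execute_python_file_with_exec(file_path, namespace=None):
    # Module-level names created by the script land in this dict.
    ns = namespace if namespace is not None else {}
    with open(file_path) as file:
        exec(file.read(), ns)
    return ns

ns = execute_python_file_with_exec('Script.py')
ns['TestFunc']()       # the function lives in the returned dict
print(ns['Doggy'])     # so does the variable

# Passing globals() makes the names appear at the REPL's top level:
execute_python_file_with_exec('Script.py', globals())
</code></pre>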
<python><ipython>
2025-05-01 21:53:44
2
1,265
user2153235
79,602,247
2,465,116
How to call a SymForce auto-generated function with a user defined type?
<p>Suppose I have a Python function <code>Foo()</code> that I'm auto-generating to C++ with SymForce's codegen functionality. It's a complicated function with a lot of parameters, so I want to pass in a <code>Params</code> struct instead of 20+ individual scalar arguments. I would further like to use the existing <code>Params</code> struct in use by the rest of the codebase instead of copying data to the auto-generated SymForce struct.</p> <p>The <a href="https://symforce.org/tutorials/codegen_tutorial.html#Code-generation-using-implicit-functions" rel="nofollow noreferrer">Code generation using implicit functions</a> in the <a href="https://symforce.org/tutorials/codegen_tutorial.html#Codegen-Tutorial" rel="nofollow noreferrer">Codegen Tutorial</a> seems to be fairly close to what I want for part (1) of this question, but if you look at the generated types in <code>double_pendulum.lcm</code>, everything is using <code>double</code> which could be a dealbreaker if I can't replace the type. Speaking of which, it's not clear how to change out this type for my user defined <code>Params</code> struct.</p> <p>Playing around a bit, I'm able to get similar results with toy problems such as:</p> <pre class="lang-py prettyprint-override"><code>@dataclasses.dataclass class Params: a: np.float32 # Single precision is not respected. b: np.float32 def foo(params: Params): return params.a </code></pre> <p>...which auto-generates the following:</p> <pre class="lang-cpp prettyprint-override"><code>struct params_t { double a; double b; } // ... template &lt;typename Scalar&gt; Scalar Foo(const foo::params_t&amp; params) { // ... Scalar _res; _res = params.a; return _res; } // NOLINT(readability/fn_size) </code></pre> <p>Again, there's not an obvious way to replace <code>params_t</code> and, giving up on wholesale type replacement, there's not an obvious way to change the type of fields from <code>double</code> to <code>float</code>.</p> <p>The <a href="https://symforce.org/tutorials/codegen_tutorial.html#Generating-from-a-Python-function" rel="nofollow noreferrer">Generating from a Python function</a> example is able to map the <code>sf.Pose3</code> type to the templated C++ type <code>sym::Pose3&lt;Scalar&gt;</code>, so there's clearly <em>a</em> way to do this, but it appears to involve deeper spelunking into the source code.</p>
<python><c++><code-generation>
2025-05-01 16:59:55
1
488
user2465116
79,602,223
16,569,183
Long import time of pybind11 extension
<p>I have a C++ library with a Python interface, generated using pybind11 on a mac with intel processor. The library is compiled into a single shared object <code>kernel.so</code>, with 29MB of size. Importing the module from a Python script takes about 7 seconds, which makes using it annoying.</p> <p>I am assuming that the reason for this is simply the size of the module, but I don't have a reference for how big modules generally are or how long should it take to load a module of this size, so I could perfectly be wrong.</p> <p>Is there a way for me to profile an import from a shared library? Any suggestion of tools that could allow me to examine the <code>kernel.so</code> to understand why it has the size it has would also be really appreciated. So far, I have tried <code>nm</code> and <code>objdump</code>, but I haven't managed to get anything useful out of them.</p> <p>NOTE: I have already tried to split the library into multiple shared objects, but I cannot do it without refactoring code due to compiler-dependent RTTI.</p>
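<p>Two cheap ways to narrow this down, as a hedged starting point: run Python with <code>-X importtime</code> (it prints a per-module import-time tree to stderr, including the dynamic-library load for the extension), or time the import directly:</p> <pre class="lang-py prettyprint-override"><code>import time

t0 = time.perf_counter()
import kernel  # the pybind11 extension in question
print(f'import of kernel took {time.perf_counter() - t0:.2f} s')
</code></pre>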
<python><macos><shared-libraries><pybind11>
2025-05-01 16:47:43
1
313
alfonsoSR
79,602,019
1,719,931
Can't see color in Visual Studio Code
<p>I'm following &quot;Hands on Large Language Models&quot; by Alammar.</p> <p>One of the examples is:</p> <pre class="lang-py prettyprint-override"><code>from transformers import AutoModelForCausalLM, AutoTokenizer colors_list = [ '102;194;165', '252;141;98', '141;160;203', '231;138;195', '166;216;84', '255;217;47' ] def show_tokens(sentence, tokenizer_name): tokenizer = AutoTokenizer.from_pretrained(tokenizer_name) token_ids = tokenizer(sentence).input_ids for idx, t in enumerate(token_ids): print( f'\x1b[0;30;48;2;{colors_list[idx % len(colors_list)]}m' + tokenizer.decode(t) + '\x1b[0m', end=' ' ) text = &quot;&quot;&quot; English and CAPITALIZATION 🎵 鸟 show_tokens False None elif == &gt;= else: two tabs:&quot; &quot; Three tabs: &quot; &quot; 12.0*50=600 &quot;&quot;&quot; show_tokens(text, &quot;bert-base-uncased&quot;) </code></pre> <p>Which is supposed to show up as:</p> <p><a href="https://i.sstatic.net/KcstAVGy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KcstAVGy.png" alt="enter image description here" /></a></p> <p>However, in Visual Studio Code I see it as:</p> <p><a href="https://i.sstatic.net/BHEbx17z.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BHEbx17z.png" alt="enter image description here" /></a></p>
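<p>As a quick sanity check (a minimal sketch, independent of transformers): if the integrated terminal honours 24-bit ANSI codes, this prints the text on a solid green background; if it appears unstyled, the problem is the terminal, not the tokenizer code.</p> <pre class="lang-py prettyprint-override"><code>print('\x1b[0;30;48;2;102;194;165m' + 'truecolor test' + '\x1b[0m')
</code></pre>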
<python><visual-studio-code><colors><ansi-colors>
2025-05-01 14:29:31
1
5,202
robertspierre
79,602,017
16,845
Correct type annotations for generator function that yields slices of the given sequence?
<p>I'm using Python 3.13 and have this function:</p> <pre><code>def chunk(data, chunk_size: int): yield from (data[i : i + chunk_size] for i in range(0, len(data), chunk_size)) </code></pre> <p>I want to give it type annotations to indicate that it can work with <code>bytes</code>, <code>bytearray</code>, or a general <code>collections.abc.Sequence</code> of any kind, and have the return type be a <code>Generator</code> of the exact input type. I do not want the return type to be a union type of all possible inputs (e.g. <code>bytes | bytearray | Sequence[T]</code>) because that's overly-wide; I want the precise type that I happen to put in to come back out the other end. Calling <code>chunk</code> on a <code>bytes</code> should return <code>Generator[bytes]</code>, etc.</p> <p>Since <code>bytes</code> and <code>bytearray</code> both conform to <code>Sequence[T]</code>, my first attempt was this:</p> <pre><code>def chunk[T](data: Sequence[T], chunk_size: int) -&gt; Generator[Sequence[T]]: yield from (data[i : i + chunk_size] for i in range(0, len(data), chunk_size)) </code></pre> <p>But this has a covariance issue- the return type is <code>Sequence[T]</code>, not <code>bytes</code>, and pyright complains when I pass the return into a function that takes a <code>bytes</code> parameter (<code>def print_bytes(b: bytes) -&gt; None: ...</code>):</p> <pre><code>error: Argument of type &quot;Sequence[int]&quot; cannot be assigned to parameter &quot;b&quot; of type &quot;bytes&quot; in function &quot;print_bytes&quot;   &quot;Sequence[int]&quot; is not assignable to &quot;bytes&quot; (reportArgumentType) </code></pre> <p>So then I tried using a type constraint: &quot;<code>chunk</code> can take any Sequence and returns a Generator of that type.&quot;</p> <pre><code>def chunk[T: Sequence](data: T, chunk_size: int) -&gt; Generator[T]: yield from (data[i : i + chunk_size] for i in range(0, len(data), chunk_size)) </code></pre> <p>This time, pyright complains about the function itself:</p> <pre><code>error: Return type of generator function must be compatible with &quot;Generator[Sequence[Unknown], Any, Any]&quot;   &quot;Generator[Sequence[Unknown], None, Unknown]&quot; is not assignable to &quot;Generator[T@chunk, None, None]&quot;     Type parameter &quot;_YieldT_co@Generator&quot; is covariant, but &quot;Sequence[Unknown]&quot; is not a subtype of &quot;T@chunk&quot;       Type &quot;Sequence[Unknown]&quot; is not assignable to type &quot;T@chunk&quot; (reportReturnType) </code></pre> <p>I'll admit to not fully understanding the complaint here- We've established via the type constraint that <code>T</code> is a <code>Sequence</code>, but pyright doesn't like it and I'm assuming my code is at fault.</p> <p>Using <code>typing.overload</code> works:</p> <pre><code>@typing.overload def chunk[T: bytes | bytearray](data: T, chunk_size: int) -&gt; Generator[T]: ... @typing.overload def chunk[T](data: Sequence[T], chunk_size: int) -&gt; Generator[Sequence[T]]: ... def chunk(data, chunk_size: int): yield from (data[i : i + chunk_size] for i in range(0, len(data), chunk_size)) </code></pre> <p>In this case, pyright is able to pick the correct overload for all of my uses, but this feels a little silly- there's 2x as much typing code as actual implementation code!</p> <p>What are the correct type annotations for my <code>chunk</code> function that returns a <code>Generator</code> of the specific type I passed in?</p>
<python><python-typing>
2025-05-01 14:28:00
1
1,216
Charles Nicholson
79,601,906
10,006,183
How can I integrate LangSmith for observability in a multi-agent Autogen (AG2) GroupChat setup?
<p>I'm working on a document analysis service using Autogen (AG2). The service has two main agents: a reader and an analyzer. The reader splits the document into chunks and sends them to the analyzer, who takes notes. This coordination is done using a multi-agent GroupChat.</p> <p>I came to know that AG2 natively supports AgentOps for observability. But, I'd like to use LangSmith instead, as it's already integrated into my other tooling.</p> <p>Following is the code that I'm using to initiate the chat inside the <code>DocumentAnalyzer</code> class (which is the main class that manages the overall process):</p> <pre><code>class DocumentAnalyzer: def __init__(self): logger.info(&quot;Initializing DocumentAnalyzer instance.&quot;) self.analyzer = analyzer_agent.AnalyzerAgent().agent self.reader = reader_agent.ReaderAgent().agent self.initializer = autogen.UserProxyAgent( name=&quot;Init&quot;, code_execution_config=False ) logger.debug( &quot;DocumentAnalyzer initialized with AnalyzerAgent, ReaderAgent, and UserProxyAgent.&quot; ) # ... other methods ... def start(self): logger.info(&quot;Starting DocumentAnalyzer group chat.&quot;) self.groupchat = autogen.GroupChat( agents=[self.initializer, self.reader, self.analyzer], messages=[], max_round=20, speaker_selection_method=self._state_transition, ) self.manager = autogen.GroupChatManager( groupchat=self.groupchat, llm_config=settings.llm_config ) try: self.initializer.initiate_chat( self.manager, message=&quot;Analyse document_id:&quot; + str(self.document_id) ) logger.info(&quot;Initialized chat with initializer agent.&quot;) except Exception as e: logger.error(f&quot;Failed to initiate chat: {e}&quot;) logger.debug(&quot;Group chat and manager setup complete.&quot;) </code></pre> <h2>Question:</h2> <p>How can I integrate LangSmith into this AG2 GroupChat setup for observability and tracing?</p> <h2>What have I already tried?</h2> <p>I tried searching around with no good results. I tried poking around with GPT and Claude generated codes. But, since autogen internally makes the LLM calls, none of it worked.</p> <p>Is there any documentations or code that I can look into for this integration?</p> <p>Thanks!</p>
<python><openai-api><langchain><observability><langsmith>
2025-05-01 13:09:38
0
2,257
the cosmic introvert dude
79,601,886
6,029,488
Produce a heatmap plot using seaborn with specific color mapping for values within and outside a given range
<p>Consider the following basic dataframe example:</p> <pre><code> min mean max ALU -0.008 0.000 0.034 BRENT -0.017 0.000 0.023 CU -0.011 0.000 0.013 DTD_BRENT -0.011 0.000 0.019 GASOIL -0.009 0.000 0.035 GOLD -0.008 0.000 0.009 HH -0.033 -0.001 0.009 JET_CCN -0.009 0.000 0.033 </code></pre> <p>I would like to produce a heatmap plot for the above dataframe using <code>seaborn.heatmap</code>, where values ranging between <code>-0.03</code> and <code>0.03</code> will have variations (depending on the value) of the green color, and values outside this interval will have red color variations.</p> <p>While I can generally produce the plot, I am struggling with the interval-dependent color map.</p> <pre><code>plt.figure(figsize=(12, 8)) sns.heatmap(df, cmap=sns.color_palette(['red', 'green', 'red']), center= 0, annot=True) plt.title(&quot;Heatmap&quot;) plt.show() </code></pre>
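<p>One possible sketch, assuming a seaborn version that forwards a user-supplied <code>norm</code> through to <code>pcolormesh</code>: discretise the value range into bands and pick a green shade for bands inside <code>[-0.03, 0.03]</code> and a red shade outside.</p> <pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from matplotlib.colors import ListedColormap, BoundaryNorm

lo, hi = -0.03, 0.03
vmin, vmax = df.values.min(), df.values.max()
levels = np.linspace(vmin, vmax, 257)            # 256 colour bands
mids = (levels[:-1] + levels[1:]) / 2
greens, reds = plt.get_cmap('Greens'), plt.get_cmap('Reds')
# green shade scaled inside the interval, red shade scaled outside it
colors = [greens(0.3 + 0.7 * abs(m) / hi) if lo &lt;= m &lt;= hi
          else reds(min(1.0, 0.4 + 8 * (abs(m) - hi)))
          for m in mids]
plt.figure(figsize=(12, 8))
sns.heatmap(df, cmap=ListedColormap(colors),
            norm=BoundaryNorm(levels, ncolors=len(mids)), annot=True)
plt.title('Heatmap')
plt.show()
</code></pre>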
<python><pandas><seaborn><heatmap>
2025-05-01 12:54:45
1
479
Whitebeard13
79,601,832
1,709,413
How to properly import python module from another python file using importlib.import_module?
<p>I'm working on an Alembic migration GitHub action and need to import the base metadata class inside a Python migration script that lives in another directory. I can't import it directly inside the migration script the normal way, like <code>from src.database.models import BaseModel</code>, because it's dynamically provided through a migration script parameter.</p> <p>So I decided to import it using <code>importlib.import_module</code> but get the error <code>No module named 'BaseModel'</code>.</p> <p>Here is the log from the GitHub action:</p> <pre><code>Current directory: /github Content of current directory: ['LICENSE.md', '.dockerignore', '.github', '.git', 'Dockerfile', 'docker-compose.yml', 'src', 'migrations', '.gitignore', 'alembic.ini', 'pyproject.toml', 'Makefile', 'poetry.lock', 'README.md'] Parent directory: / Parent directory content: ['run', 'bin', 'sys', 'var', 'lib', 'tmp', 'sbin', 'srv', 'mnt', 'home', 'opt', 'etc', 'dev', 'proc', 'media', 'lib64', 'boot', 'usr', 'root', 'github', '.dockerenv', 'entrypoint.sh', 'check_alembic_migration.py'] ERROR applying migrations: No module named 'BaseModel' </code></pre> <p>And here is the code where I'm trying to import BaseModel:</p> <pre class="lang-py prettyprint-override"><code>print(&quot;Current directory:&quot;, os.path.dirname(os.getcwd())) print(&quot;Content of current directory:&quot;, os.listdir(os.getcwd())) print(&quot;Parent directory:&quot;, os.path.dirname(os.path.dirname(os.getcwd()))) print(&quot;Parent directory content:&quot;, os.listdir(os.path.dirname(os.path.dirname(os.getcwd())))) sys.path.append(os.path.dirname(os.getcwd())) base = importlib.import_module(&quot;BaseModel&quot;, &quot;src.database.models&quot;) </code></pre> <p><code>src</code> is my app package</p> <p><code>check_alembic_migration.py</code> is the migration script where I'm trying to do the import</p> <p>Here is the project structure:</p> <p><a href="https://i.sstatic.net/pznBlXof.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pznBlXof.png" alt="enter image description here" /></a></p> <p>Could you please help me understand how to properly import <code>BaseModel</code> from <code>src.database.models</code>?</p> <p>UPDATE: I found a solution with the help of Claude Sonnet. The problem was an incorrect path.</p> <p>This is the correct import:</p> <pre class="lang-py prettyprint-override"><code>current_path = os.getcwd() # This is already '/github' based on your logs sys.path.insert(0, current_path) # Insert at beginning of path for priority try: # Direct import of the module base_module = importlib.import_module('src.database.models.base') # Get the BaseModel class from the module base = getattr(base_module, 'BaseModel') print(f&quot;Successfully imported BaseModel from {base.__module__}&quot;) except Exception as e: print(f&quot;Import error: {e}&quot;) </code></pre>
<python><python-os><python-importlib>
2025-05-01 12:14:39
0
1,197
andrey
79,601,234
1,245,659
Ensure CustomUser is being used in Django app
<p>I have a problem with implementing a CustomUser in a Django app.</p> <p>My project is called &quot;VendorManagement&quot;.</p> <p>My app is &quot;VMP&quot;.</p> <p>In settings, I have <code>AUTH_USER_MODEL = 'VMP.CustomUser'</code> set.</p> <p>Here is my CustomUser model:</p> <pre><code>from django.db import models from django.conf import settings from django.contrib.auth.models import User, AbstractUser from django.utils.timezone import timedelta, now from django.core.exceptions import ValidationError # File validation function def validate_file_type(value): allowed_types = [&quot;application/pdf&quot;, &quot;application/msword&quot;, &quot;application/vnd.openxmlformats-officedocument.wordprocessingml.document&quot;] if value.content_type not in allowed_types: raise ValidationError(&quot;Only PDF and Word documents are allowed.&quot;) class CustomUser(AbstractUser): ROLE_CHOICES = [ ('vendor', 'Vendor'), ('customer', 'Customer'), ('admin', 'Admin'), ] role = models.CharField(max_length=20, choices=ROLE_CHOICES, default='customer') groups = models.ManyToManyField(&quot;auth.Group&quot;, related_name=&quot;custom_user_groups&quot;, blank=True) user_permissions = models.ManyToManyField(&quot;auth.Permission&quot;, related_name=&quot;custom_user_permissions&quot;, blank=True) def is_vendor(self): return self.role == &quot;vendor&quot; def is_customer(self): return self.role == &quot;customer&quot; def is_admin(self): return self.role == &quot;admin&quot; </code></pre> <p>Here are my views:</p> <pre><code>from django.contrib.auth import get_user_model from .models import * # Get custom User model CustomUser = get_user_model() class RFPViewSet(viewsets.ModelViewSet): serializer_class = RFPSerializer def get_queryset(self): &quot;&quot;&quot;Vendors see active RFPs, Customers see expired ones&quot;&quot;&quot; userid = self.request.user.id # Ensure user is fetched correctly as CustomUser user_instance = CustomUser.objects.filter(id=userid).first() if user_instance.is_vendor(): return RFP.objects.filter(is_expired=False, is_active=True) elif user_instance.is_customer(): return RFP.objects.filter(customer=user_instance, is_expired=True) return RFP.objects.none() </code></pre> <p>The problem is that while the model does include <code>is_customer()</code> and <code>is_vendor()</code>, the functions are NOT found in the ViewSet code.</p> <p>My IDE says the functions are not visible, and that the type of my variable is &quot;AbstractUser&quot; and not &quot;CustomUser&quot;.</p> <p>How do I fix this? Thanks</p>
<python><django>
2025-04-30 23:52:28
2
305
arcee123
79,601,104
2,153,235
How to quietly exit a runfile script?
<p>I am using the Spyder 6 console to invoke a Python script via the <code>runfile</code> command. After a recent Anaconda update, I found that <code>sys.exit()</code> no longer exits the script quietly. It prints:</p> <pre><code>An exception has occurred, use %tb to see the full traceback. SystemExit: 0 C:\Users\User.Name\AppData\Local\anaconda3\envs\py39\lib\site-packages\IPython\core\interactiveshell.py:3534: UserWarning: To exit: use 'exit', 'quit', or Ctrl-D. warn(&quot;To exit: use 'exit', 'quit', or Ctrl-D.&quot;, stacklevel=1) </code></pre> <p>Using <code>exit()</code> and <code>quit()</code> is equally verbose.</p> <p>I came across suggestions to use <code>sys.exit(0)</code>, but it doesn't change anything for me. I want to avoid <code>os._exit()</code> because it skips cleanup handlers.</p> <p><em><strong>How would I gracefully and quietly exit a script that is called with <code>runfile</code>?</strong></em></p> <p>Following up on <em>ticktalk's</em> comment, I ran the script outside of Spyder to see what the termination message looked like. It took a while to figure out how to do this using <a href="https://stackoverflow.com/a/79604077"><code>exec(open(&quot;FileName.py&quot;).read())</code></a>.</p> <p>I tried this alternative from IPython, i.e., issuing <code>ipython3</code> from the Anaconda prompt. The output from <code>exec</code> is the same as Spyder's above. I'm not surprised because I thought that Spyder uses IPython in the back end.</p> <p>I also tried the REPL that results from issuing <code>python</code> from the Anaconda prompt. The <code>exec</code> command yielded <em>no</em> messages when the script terminates due to <code>sys.exit()</code>.</p> <p><strong>Conclusion:</strong> The messages are due to IPython. Since I'm heavily reliant on Spyder, the only method currently to avoid them is <em>Ben A.'s</em> answer below. It requires that the whole script be wrapped in a function, which requires indentation and shortens the usable source code line width. I may still go with that.</p>
<python><exit>
2025-04-30 21:12:40
1
1,265
user2153235
79,601,098
1,264,589
How to make my AWS Bedrock RAG AI Assistant ask which product the user is asking about
<p>I added a RAG AI assistant using AWS Bedrock to our website so customers can ask questions and get answers about the four products we sell. Each product has the same two documents, like: product_A_user_guide.pdf, product_A_specs.pdf, product_B_user_guide.pdf, product_B_specs.pdf, etc.</p> <p>The products offer similar features but differ in how to perform those features. So, when asking the AI assistant a question without specifying the product it doesn’t always return the correct result. For example, if you ask a question that pertains to all four products, like “how do I run a calibration test”, it might return an answer from product A, B, C or D. How can I make my assistant smarter so that it asks the user “tell me which product you’re asking about: A, B, C or D?” if the user doesn’t specify it in their question?</p> <p>This is my code:</p> <pre><code>bedrock = boto3.client(&quot;bedrock-agent-runtime&quot;, region_name=&quot;us-west-1&quot;) def lambda_handler(event, context): model_id = &quot;amazon.titan-text-premier-v1:0&quot; question = event[&quot;queryStringParameters&quot;][&quot;question&quot;] session_id = event[&quot;queryStringParameters&quot;][&quot;session_id&quot;] if &quot;session_id&quot; in event[&quot;queryStringParameters&quot;] else None kb_id = os.environ[&quot;KNOWLEDGE_BASE_ID&quot;] region = &quot;us-west-1&quot; model_arn = f&quot;arn:aws:bedrock:{region}::foundation-model/{model_id}&quot; # Query the knowledge base response = queryKB(question, kb_id, model_arn, session_id) # Extract the generated text and session ID from the response generated_text = response[&quot;output&quot;][&quot;text&quot;].strip() session_id = response.get(&quot;sessionId&quot;, &quot;&quot;) citations = response[&quot;citations&quot;] headers = { &quot;Access-Control-Allow-Origin&quot;: &quot;*&quot;, &quot;Access-Control-Allow-Credentials&quot;: True } # Return the response in the expected format return { &quot;statusCode&quot;: 200, &quot;headers&quot;: headers, &quot;body&quot;: json.dumps({ &quot;question&quot;: question, &quot;answer&quot;: generated_text, &quot;citations&quot;: citations, &quot;sessionId&quot;: session_id }, ensure_ascii=False) } def queryKB(question, kbId, model_arn, sessionId=None): return bedrock.retrieve_and_generate( input={ &quot;text&quot;: question }, retrieveAndGenerateConfiguration={ &quot;type&quot;: &quot;KNOWLEDGE_BASE&quot;, &quot;knowledgeBaseConfiguration&quot;: { &quot;knowledgeBaseId&quot;: kbId, &quot;modelArn&quot;: model_arn } }, sessionId=sessionId if sessionId else None ) </code></pre> <p>Thanks for your help.</p>
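<p>One hedged sketch of a pre-check (the product list and the matching rule here are assumptions for illustration): intercept ambiguous questions before calling <code>retrieve_and_generate</code> and return a clarifying question instead of querying the knowledge base.</p> <pre class="lang-py prettyprint-override"><code>PRODUCTS = ['product A', 'product B', 'product C', 'product D']  # hypothetical names

def needs_clarification(question: str) -&gt; bool:
    # True when the question mentions none of the known products
    q = question.lower()
    return not any(p.lower() in q for p in PRODUCTS)

# inside lambda_handler, before calling queryKB(...):
# if needs_clarification(question):
#     return {'statusCode': 200, 'headers': headers,
#             'body': json.dumps({'answer': 'Tell me which product '
#                                 'you are asking about: A, B, C or D?'})}
</code></pre>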
<python><amazon-web-services><artificial-intelligence><amazon-bedrock>
2025-04-30 21:07:43
1
1,305
hugo
79,600,924
4,913,660
Explode nested JSON to Dataframe
<p>There are loads of answers on this topic, but for the life of me, I cannot find a solution to my issue.</p> <p>Say I have a JSON like</p> <pre><code>json_2_explode = [ { &quot;scalar&quot;: &quot;43&quot;, &quot;units&quot;: &quot;m&quot;, &quot;parameter&quot;: [{&quot;no_1&quot;: &quot;45&quot;, &quot;no_2&quot;: &quot;1038&quot;, &quot;no_3&quot;: &quot;356&quot;}], &quot;name&quot;: &quot;Foo&quot;, }, { &quot;scalar&quot;: &quot;54.1&quot;, &quot;units&quot;: &quot;s&quot;, &quot;parameter&quot;: [{&quot;no_1&quot;: &quot;78&quot;, &quot;no_2&quot;: &quot;103&quot;, &quot;no_3&quot;: &quot;356&quot;}], &quot;name&quot;: &quot;Yoo&quot;, }, { &quot;scalar&quot;: &quot;1123.1&quot;, &quot;units&quot;: &quot;Hz&quot;, &quot;parameter&quot;: [{&quot;no_1&quot;: &quot;21&quot;, &quot;no_2&quot;: &quot;43&quot;, &quot;no_3&quot;: &quot;3577&quot;}], &quot;name&quot;: &quot;Baz&quot;, }, ] </code></pre> <p>documenting some readings for attributes <code>Foo</code>, <code>Yoo</code> and <code>Baz</code>. For each I detail a number, that is, the value itself, some parameters, and the name.</p> <p>Say this JSON is a column in a dataframe,</p> <pre><code>df = pd.DataFrame(data = {'col1': [11, 9, 23, 1], 'col2': [7, 3, 1, 12], 'col_json' : [json_2_explode, json_2_explode, json_2_explode, json_2_explode]}, index=[0, 1, 2, 3]) </code></pre> <pre><code> col1 col2 col_json 0 11 7 [{'scalar': '43', 'units': 'MPa', 'parameter':... 1 9 3 [{'scalar': '43', 'units': 'MPa', 'parameter':... 2 23 1 [{'scalar': '43', 'units': 'MPa', 'parameter':... 3 1 12 [{'scalar': '43', 'units': 'MPa', 'parameter':... </code></pre> <p>The issue I have is that if I try</p> <pre><code>df = pd.json_normalize(df['col_json'].explode()) </code></pre> <p>I get</p> <pre><code> scalar units parameter name 0 43 m [{'no_1': '45', 'no_2': '1038', 'no_3': '356'}] Foo 1 54.1 s [{'no_1': '78', 'no_2': '103', 'no_3': '356'}] Yoo 2 1123.1 Hz [{'no_1': '21', 'no_2': '43', 'no_3': '3577'}] Baz 3 43 m [{'no_1': '45', 'no_2': '1038', 'no_3': '356'}] Foo 4 54.1 s [{'no_1': '78', 'no_2': '103', 'no_3': '356'}] Yoo 5 1123.1 Hz [{'no_1': '21', 'no_2': '43', 'no_3': '3577'}] Baz 6 43 m [{'no_1': '45', 'no_2': '1038', 'no_3': '356'}] Foo 7 54.1 s [{'no_1': '78', 'no_2': '103', 'no_3': '356'}] Yoo 8 1123.1 Hz [{'no_1': '21', 'no_2': '43', 'no_3': '3577'}] Baz 9 43 m [{'no_1': '45', 'no_2': '1038', 'no_3': '356'}] Foo 10 54.1 s [{'no_1': '78', 'no_2': '103', 'no_3': '356'}] Yoo 11 1123.1 Hz [{'no_1': '21', 'no_2': '43', 'no_3': '3577'}] Baz </code></pre> <p>So it is exploding each JSON into 3 rows (admittedly each JSON does contain 3 sub-dicts, so to speak). I actually would like <code>Foo</code>, <code>Yoo</code> and <code>Baz</code> to be documented in the same row, adding columns. Is there maybe a solution that doesn't involve manually manipulating/parsing rows? I would love to see one of your fancy one-liners.</p> <p><strong>EDIT</strong></p> <p>The desired outcome would look like this, column names free to assign.</p> <p>So for each row the JSON is exploded and assigned to new columns; <code>json_normalize</code> creates a new row for each element in the list of JSON, which is the undesired behaviour in the example above.</p> <pre><code>0 11 7 43 m 45 1038 356 Foo 54.1 s 78 103 356 Yoo 1123.1 Hz 21 43 3577 Baz 1 9 3 43 m 45 1038 356 Foo 54.1 s 78 103 356 Yoo 1123.1 Hz 21 43 3577 Baz 2 23 12 43 m 45 1038 356 Foo 54.1 s 78 103 356 Yoo 1123.1 Hz 21 43 3577 Baz </code></pre> <p>Note in this example each JSON list in the original dataframe is the same, but the idea here is of course to explode each JSON into its neighbouring new columns, and they could be different.</p> <p><strong>EDIT 2</strong></p> <p>Attempt to implement the <a href="https://stackoverflow.com/a/79601014/14909621">solution suggested by Jonatahn Leon</a>.</p> <p>First I use his code to define a function, to be mapped to the column containing the JSON object:</p> <pre><code>def json_flattener(json_blurp): prefix_json_2_explode = {} for d in json_blurp: prefix_json_2_explode.update({d['name'] + '_' + key: value for key, value in d.items()}) dict_flattened = (flatten(d, '.') for d in [prefix_json_2_explode]) df = pd.DataFrame(dict_flattened) return df new_cols = ['Foo_scalar', 'Foo_units', 'Foo_parameter.0.no_1', 'Foo_parameter.0.no_2', 'Foo_parameter.0.no_3', 'Foo_name', 'Yoo_scalar', 'Yoo_units', 'Yoo_parameter.0.no_1', 'Yoo_parameter.0.no_2', 'Yoo_parameter.0.no_3', 'Yoo_name', 'Baz_scalar', 'Baz_units', 'Baz_parameter.0.no_1', 'Baz_parameter.0.no_2', 'Baz_parameter.0.no_3', 'Baz_name'] df[new_cols]= df[['col_json']].apply(lambda x: json_flattener(x.iloc[0]), axis=1,result_type='expand') </code></pre> <p>But I ran into the error:</p> <pre><code>ValueError: If using all scalar values, you must pass an index </code></pre> <p>Trying to sort it out.</p>
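<p>For what it's worth, a sketch of the per-row flattening (assuming each <code>parameter</code> list holds a single dict and the <code>name</code> values are unique within a row):</p> <pre class="lang-py prettyprint-override"><code>def flatten_row(records):
    # one wide one-row frame per JSON list, columns prefixed by 'name'
    parts = []
    for rec in records:
        flat = pd.json_normalize(rec, record_path='parameter',
                                 meta=['scalar', 'units', 'name'])
        parts.append(flat.add_prefix(rec['name'] + '_').reset_index(drop=True))
    return pd.concat(parts, axis=1)

wide = pd.concat([flatten_row(v) for v in df['col_json']], ignore_index=True)
result = pd.concat([df.drop(columns='col_json'), wide], axis=1)
</code></pre>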
<python><json><pandas>
2025-04-30 18:39:52
5
414
user37292
79,600,521
9,165,100
issue with hdbscan installation
<p>I try to install hdbscan:</p> <p>pip install hdbscan</p> <p>All goes well until this message:</p> <pre><code>running build_ext building 'hdbscan._hdbscan_tree' extension creating build/temp.linux-x86_64-cpython-312/hdbscan x86_64-linux-gnu-gcc -fno-strict-overflow -Wsign-compare -DNDEBUG -g -O2 -Wall -fPIC -I/home/ahe/pyenv/include -I/usr/include/python3.12 -I/tmp/pip-build-env-szh812xv/overlay/lib/python3.12/site-packages/numpy/_core/include -c hdbscan/_hdbscan_tree.c -o build/temp.linux-x86_64-cpython-312/hdbscan/_hdbscan_tree.o hdbscan/_hdbscan_tree.c:29:10: fatal error: Python.h: No such file or directory 29 | #include &quot;Python.h&quot; | ^~~~~~~~~~ compilation terminated. error: command '/usr/bin/x86_64-linux-gnu-gcc' failed with exit code 1 [end of output] </code></pre> <p>Does anyone have a solution? I'd like to avoid using conda, as that is likely to mess up future pip installations (from experience).</p> <p>I use Ubuntu 24, Python 3.12, in a fresh venv.</p>
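<p>On Debian/Ubuntu the missing header usually comes from the matching dev package (for this interpreter, presumably <code>python3.12-dev</code> -- an assumption to verify for your setup). A quick diagnostic of where this interpreter expects its C headers, and whether <code>Python.h</code> is actually there:</p> <pre class="lang-py prettyprint-override"><code>import os, sysconfig

inc = sysconfig.get_paths()['include']
print(inc, os.path.exists(os.path.join(inc, 'Python.h')))
</code></pre>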
<python><pip><hdbscan>
2025-04-30 14:22:16
1
431
user9165100
79,600,488
1,234,721
Square API for invoice attachments 'Received multiple request parts. Please only supply zero or one `parts` of type application/json.'
<p>The new Square API versions 42+ have breaking changes. I'm trying to upgrade to v42, and I am testing in a local dev environment.</p> <p>I keep getting the following error: <code>*** square.core.api_error.ApiError: status_code: 400, body: {'errors': [{'category': 'INVALID_REQUEST_ERROR', 'code': 'INVALID_CONTENT_TYPE', 'detail': 'Received multiple request parts. Please only supply zero or one \`parts\` of type application/json.'}]}</code></p> <p>when I try to upload an ~800-byte JPEG [very grainy] in the development sandbox for the Square Invoice API using the following code:</p> <pre><code>pdf_filepath = 'local/path/to/file.jpg' idem_key = 'some-unique_key_like_square_invoice_id' f_stream = open(pdf_filepath, &quot;rb&quot;) try: # I have tried using a stream as well, still the same error invoice_pdf = SQUARE_CLIENT.invoices.create_invoice_attachment( invoice_id=square_original_invoice.id, # this also does not work # image_file=f_stream, image_file=pdf_filepath, request={ &quot;description&quot;: f&quot;Invoice-{pdf_filepath}&quot;, &quot;idempotency_key&quot;: idem_key, }, ) except ApiError as e: print(f&quot;ERROR _attach_pdf_to_vendor_payment with errors {e}&quot;) </code></pre> <p>In the online sandbox API Explorer, I get this 400 response error:</p> <pre><code>// cache-control: no-cache // content-type: application/json // date: Wed, 30 Apr 2025 13:35:06 GMT // square-version: 2025-04-16 { &quot;errors&quot;: [ { &quot;code&quot;: &quot;BAD_REQUEST&quot;, &quot;detail&quot;: &quot;Total size of all attachments exceeds Sandbox limit: 1000 bytes&quot;, &quot;category&quot;: &quot;INVALID_REQUEST_ERROR&quot; } ] } </code></pre> <p>Once I got a 900-byte JPG to upload in the API Explorer (988 bytes did not pass), but the SDK still errors using the same file.</p> <p>Here is a successful API request upload via the API Explorer:</p> <pre><code>content-length: 1267 content-type: multipart/form-data; boundary=----WebKitFormBoundaryUUID38 square-version: 2025-04-16 user-agent: SquareExplorerGateway/1.0 SquareProperty ApiExplorer ------WebKitFormBoundaryUUID38 Content-Disposition: form-data;name=&quot;request&quot; { &quot;idempotency_key&quot;: &quot;UUID-123-456-7869&quot; } ------WebKitFormBoundaryUUID38 Content-Disposition: form-data;name=&quot;file&quot;;filename=&quot;900b.jpeg&quot; Content-Type: image/jpeg (binary JPEG data) </code></pre> <p>Here is the unsuccessful API request via my Django server, using the same file:</p> <pre><code>content-length: 568 content-type: multipart/form-data; boundary=djangoUUID square-version: 2025-04-16 accept-encoding: gzip accept: */* user-agent: squareup/42.0.0.20250416 --djangoUUID Content-Disposition: form-data;name=&quot;request&quot; Content-Type: application/json;charset=utf-8 { &quot;description&quot;: &quot;Invoice-path/to/file/900b.jpeg&quot;, &quot;idempotency_key&quot;: &quot;path/to/original/file/normal-invoice.pdf&quot; } --djangoUUID Content-Disposition: form-data;name=&quot;image_file&quot; Content-Type: image/jpeg /path/to/file/900b.jpeg --djangoUUID-- </code></pre> <p>Note the successful request: <code>Content-Disposition: form-data;name=&quot;file&quot;;filename=&quot;900b.jpeg&quot;</code> <code>Content-Type: image/jpeg</code></p> <p>and the unsuccessful request: <code>Content-Disposition: form-data;name=&quot;request&quot;</code> <code>Content-Type: application/json;charset=utf-8</code> <code>Content-Disposition: form-data;name=&quot;image_file&quot;</code> <code>Content-Type: image/jpeg</code></p> <p>specifically: <code>name=&quot;image_file&quot;</code> vs 
<code>name=&quot;file&quot;</code> and <code>application/json;charset=utf-8</code></p> <p>The headers for the unsuccessful API call are:</p> <pre><code>{ &quot;date&quot;: &quot;Sat, 03 May 2025 22:47:34 GMT&quot;, &quot;content-type&quot;: &quot;application/json&quot;, &quot;transfer-encoding&quot;: &quot;chunked&quot;, &quot;connection&quot;: &quot;keep-alive&quot;, &quot;cf-ray&quot;: &quot;ab-…-WER&quot;, &quot;cf-cache-status&quot;: &quot;DYNAMIC&quot;, &quot;cache-control&quot;: &quot;no-cache&quot;, &quot;strict-transport-security&quot;: &quot;max-age=631152000; includeSubDomains; preload&quot;, &quot;x-envoy-decorator-operation&quot;: &quot;/v2/invoices/**&quot;, &quot;x-request-id&quot;: &quot;58-…-80&quot;, &quot;x-sq-dc&quot;: &quot;aws&quot;, &quot;x-sq-istio-migration-ingress-proxy&quot;: &quot;sq-envoy&quot;, &quot;x-sq-istio-migration-ingress-region&quot;: &quot;us-west-2&quot;, &quot;x-sq-region&quot;: &quot;us-west-2&quot;, &quot;vary&quot;: &quot;Accept-Encoding&quot;, &quot;server&quot;: &quot;cloudflare&quot; } </code></pre> <p>The headers from the browser network inspector:</p> <pre><code>:authority explorer-gateway.squareup.com :method POST :path /v2/invoices/inv:0-Ch…wI/attachments :scheme https accept application/json accept-encoding. gzip, deflate, br, zstd accept-language. en-US,en;q=0.9 authorization Bearer AE…ju cache-control no-cache content-length 1250 content-type. multipart/form-data; boundary=----WebKitFormBoundaryCP3GAXwMvwBUTlwU origin https://developer.squareup.com pragma no-cache priority u=1, i referer https://developer.squareup.com/ sandbox-mode true sec-ch-ua &quot;Chromium&quot;;v=&quot;136&quot;, &quot;Google Chrome&quot;;v=&quot;136&quot;, &quot;Not.A/Brand&quot;;v=&quot;99&quot; sec-ch-ua-mobile ?0 sec-ch-ua-platform &quot;macOS&quot; sec-fetch-dest empty sec-fetch-mode cors sec-fetch-site same-site square-version 2025-04-16 user-agent Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/136.0.0.0 Safari/537.36 x-square-property. ApiExplorer </code></pre> <p>Notes:</p> <p><a href="https://imgur.com/a/py3i2SU" rel="nofollow noreferrer">Here is the test image</a></p> <p>To try and send <code>zero parts</code>, if I do an API call without a <code>request</code> parameter:</p> <pre><code>invoice_pdf = SQUARE_CLIENT.invoices.create_invoice_attachment( invoice_id=self.square_original_invoice.id, image_file=pdf_filepath, ) </code></pre> <p>I get a response of <code>INVALID_REQUEST_ERROR BAD_REQUEST Bad request.</code></p> <p>If I do an API call with an empty <code>request</code> parameter:</p> <pre><code>invoice_pdf = SQUARE_CLIENT.invoices.create_invoice_attachment( invoice_id=self.square_original_invoice.id, image_file=pdf_filepath, request={}, ) </code></pre> <p>I get the same error <code>INVALID_REQUEST_ERROR INVALID_CONTENT_TYPE Received multiple request parts. 
Please only supply zero or one 'parts' of type application/json.</code></p> <p>I do remember the image parameter name (<code>file</code> vs <code>image_file</code>) <a href="https://stackoverflow.com/questions/77163726/square-api-add-image-with-python-fails">being an issue</a> with outdated docs, but the current docs show the newer parameter name <code>image_file</code> as correct.</p> <p>Related docs: <a href="https://developer.squareup.com/docs/invoices-api/attachments" rel="nofollow noreferrer">https://developer.squareup.com/docs/invoices-api/attachments</a></p> <p>This worked fine in the old pre-42 API with a minor syntax change, and I know a 1000-byte attachment limit is now imposed in the sandbox, but why can't I upload attachments in the sandbox now?</p> <p>Square's internal developer forum link: <a href="https://developer.squareup.com/forums/t/new-api-for-invoice-attachments-error-received-multiple-request-parts-please-only-supply-zero-or-one-parts-of-type-application-json/22126" rel="nofollow noreferrer">https://developer.squareup.com/forums/t/new-api-for-invoice-attachments-error-received-multiple-request-parts-please-only-supply-zero-or-one-parts-of-type-application-json/22126</a></p>
<python><square>
2025-04-30 14:07:14
1
19,295
chris Frisina
79,600,418
29,295,031
How to select a range of data in a pandas dataframe
<p>I have this pandas dataframe <code>df</code>:</p> <pre><code>import pandas as pd data = { &quot;function&quot;: [&quot;test1&quot;,&quot;test2&quot;,&quot;test3&quot;,&quot;test4&quot;,&quot;test5&quot;,&quot;test6&quot;,&quot;test7&quot;,&quot;test8&quot;,&quot;test9&quot;,&quot;test10&quot;,&quot;test11&quot;,&quot;test12&quot;, ], &quot;service&quot;: [&quot;A&quot;, &quot;B&quot;, &quot;AO&quot;, &quot;M&quot; ,&quot;A&quot;, &quot;PO&quot;, &quot;MP&quot;, &quot;YU&quot;, &quot;Z&quot;, &quot;R&quot;, &quot;E&quot;, &quot;YU&quot;], &quot;month&quot;: [&quot;January&quot;,&quot;February&quot;, &quot;March&quot;, &quot;April&quot;, &quot;May&quot;, &quot;June&quot;, &quot;July&quot;, &quot;August&quot;, &quot;September&quot;, &quot;October&quot;, &quot;November&quot;, &quot;December&quot;] } #load data into a DataFrame object: df = pd.DataFrame(data) print(df) </code></pre> <p>The result:</p> <pre><code> function service month 0 test1 A January 1 test2 B February 2 test3 AO March 3 test4 M April 4 test5 A May 5 test6 PO June 6 test7 MP July 7 test8 YU August 8 test9 Z September 9 test10 R October 10 test11 E November 11 test12 YU December </code></pre> <p>I have a slider with a variable where I can select a month; call this variable <strong>var</strong>. When I select a month in the slider, I want to filter the dataframe but <strong>always get six rows</strong>, with the selected month appearing somewhere in the filtered data (at the beginning, in the middle, or at the end).</p> <p>Could you please help?</p> <p>What I have tried:</p> <pre><code>def selectDataRange(var:str,df): if var==&quot;January&quot;: df.iloc[0: 6,] if var==&quot;February&quot;: df.iloc[1: 6,] if var==&quot;March&quot;: df.iloc[2: 6,] </code></pre> <p>I have tried this method (only for the first three months), but it doesn't work.</p>
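<p>For reference, a minimal sketch of one way to do this (it assumes the <code>month</code> values are unique): find the row position of the selected month and slice a six-row window around it, clamped to the frame's bounds.</p> <pre class="lang-py prettyprint-override"><code>def select_data_range(var, df, size=6):
    pos = df['month'].tolist().index(var)                 # row position of var
    start = min(max(pos - size // 2, 0), max(len(df) - size, 0))
    return df.iloc[start:start + size]

print(select_data_range('March', df))   # six rows containing March
</code></pre>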
<python><pandas><dataframe>
2025-04-30 13:27:29
2
401
user29295031
79,600,185
4,398,952
Shrink tkinter Text widget to text dimension
<p>I'm trying to shrink a <em>Text</em> widget to the size of the text inside. In fact, if you run the following example code:</p> <pre><code>import tkinter as tk root = tk.Tk() testText = tk.Text(master=root) testText.insert(index=&quot;1.0&quot;, chars=&quot;short string&quot;) testText.tag_configure(tagName=&quot;center&quot;, justify=&quot;center&quot;) testText.tag_add(&quot;center&quot;, &quot;1.0&quot;, tk.END) testText.grid(row=0, column=0, sticky=(tk.N, tk.E, tk.S, tk.W)) root.mainloop() </code></pre> <p>a big Text is created, way bigger than the &quot;short string&quot; inside:</p> <p><a href="https://i.sstatic.net/2qtWffM6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2qtWffM6.png" alt="Big Text widget inside example window" /></a></p> <p>I'm trying to find a way to shrink the Text widget to the size of the text it contains, in order to get a window just a little larger than the text inside it.</p> <p>I've tried to solve this by fixing the dimensions of the Text, explicitly specifying width/height and then using <code>grid_propagate(False)</code>, and by using a &quot;wrapper&quot; Frame containing the Text and then fixing its dimensions (again) by explicitly specifying width/height and using <code>grid_propagate(False)</code>, but nothing worked the way I would like. I didn't manage to get the Text shrinking and expanding dynamically based on the text inside.</p> <p>Could someone suggest me a good way to do this?</p> <p>I thought about using a label, but I have to format the text in a specific way. For example, I have to format a word in bold text and the rest of the sentence in standard text. So, as I read in <a href="https://stackoverflow.com/questions/40237671/python-tkinter-single-label-with-bold-and-normal-text">this</a> question, I decided to use a Text.</p>
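<p>A minimal sketch of one approach, with the caveat that <code>width</code>/<code>height</code> are measured in characters, so the fit is exact only for fixed-width fonts: size the widget from its own contents.</p> <pre class="lang-py prettyprint-override"><code>import tkinter as tk

def fit_to_content(text_widget):
    # Resize to the longest line and the number of lines currently held.
    lines = text_widget.get('1.0', 'end-1c').split('\n')
    text_widget.configure(width=max(len(l) for l in lines), height=len(lines))

root = tk.Tk()
testText = tk.Text(master=root)
testText.insert('1.0', 'short string')
fit_to_content(testText)
testText.grid(row=0, column=0)
root.mainloop()
</code></pre>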
<python><tkinter><text-widget>
2025-04-30 11:01:12
1
401
ela
79,600,103
10,715,700
SQLModel session handling when my reads and updates are in separate parts of the code flow
<p>I have a script that needs to query some data, process it, and then update some columns in the rows I read.</p> <p>My code looks like this:</p> <pre class="lang-py prettyprint-override"><code>def query_data(param1): with Session(engine) as session: statement = select(SampleModel).where(SampleModel.col1 == param1, SampleModel.col2.is_(None) ) data_list = session.scalars(statement).all() return data_list def save_object(data, param3): with Session(engine) as session: data.col3 = param3 session.add(data) session.commit() data_list = query_data(param1) for data in data_list: # some processing here save_object(data, param3) </code></pre> <p>When I do this, I am getting an error that says</p> <pre><code>DetachedInstanceError: Instance &lt;...&gt; is not bound to a Session; attribute refresh operation cannot proceed (Background on this error at: https://sqlalche.me/e/20/bhk3) </code></pre> <p>I am not sure how to fix this. I have tried expunging the objects after querying them in <code>query_data</code>, but it didn't fix the issue.</p>
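<p>Two hedged sketches of ways around this (the model and column names follow the anonymized code above): keep the read and the write in one session so instances stay attached, or re-attach a detached instance with <code>session.merge()</code>.</p> <pre class="lang-py prettyprint-override"><code># Option 1: one session spans the read, the processing, and the write.
def process_all(param1, param3):
    with Session(engine) as session:
        statement = select(SampleModel).where(
            SampleModel.col1 == param1, SampleModel.col2.is_(None))
        for data in session.scalars(statement).all():
            # ... processing ...
            data.col3 = param3
        session.commit()

# Option 2: keep the two-function shape and re-attach before writing.
def save_object(data, param3):
    with Session(engine) as session:
        data = session.merge(data)   # re-attach the detached instance
        data.col3 = param3
        session.commit()
</code></pre>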
<python><sqlalchemy><sqlmodel>
2025-04-30 10:17:06
1
430
BBloggsbott
79,600,062
5,599,028
Set default value for date field in portal odoo18
<p>In Odoo 18, I created a template for a portal user which inherits another template. In my custom template, I want the date field 'birthday' to be auto-filled.</p> <pre><code>&lt;t t-set=&quot;last_app&quot; t-value=&quot;request.env['hr.applicant'].search([ ('partner_id', '=', partner.id), '|', ('active', '=', False), ('active', '=', True)], order='id desc', limit=1)&quot;/&gt; &lt;xpath expr=&quot;//input[@name='birthday']&quot; position=&quot;replace&quot;&gt; &lt;input id=&quot;birthday_input&quot; type=&quot;text&quot; name=&quot;birthday&quot; class=&quot;form-control o_website_form_date&quot; t-att-value=&quot;last_app.birthday and last_app.birthday.strftime('%Y-%m-%d') or ''&quot; data-date-format=&quot;yyyy-mm-dd&quot; required=&quot;required&quot; placeholder=&quot;YYYY-MM-DD&quot;/&gt; &lt;/xpath&gt; </code></pre> <p>but I still get the wrong value <strong>01/01/1970</strong>.</p>
<python><xpath><portal><qweb><odoo-18>
2025-04-30 09:53:52
1
3,832
khelili miliana
79,600,043
893,254
How do I read a `.arrow` (Apache Arrow aka Feather V2 format) file with Python Pandas?
<p>I'm trying to read an <code>.arrow</code> format file with Python pandas.</p> <p><code>pandas</code> does not have a <code>read_arrow</code> function. However, it does have <code>read_csv</code>, <code>read_parquet</code>, and other similarly named functions.</p> <p>How can I read an Apache Arrow format file?</p> <p>I have read the documentation and know how to convert between the <code>pandas.DataFrame</code> type and the <code>pyarrow</code> <code>Table</code> type, which appears to be some kind of <code>pyarrow</code> equivalent of a <code>pandas</code> <code>DataFrame</code>. However, this may or may not be relevant information.</p> <p>What I cannot find is a way to either</p> <ul> <li>read a <code>pyarrow.Table</code> from an Arrow file</li> <li>read a <code>pandas.DataFrame</code> from an Arrow file</li> </ul>
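<p>For reference, a minimal sketch using <code>pyarrow.feather</code> (the file name is a placeholder):</p> <pre class="lang-py prettyprint-override"><code>import pyarrow.feather as feather

df = feather.read_feather('data.arrow')    # returns a pandas.DataFrame
table = feather.read_table('data.arrow')   # returns a pyarrow.Table
df2 = table.to_pandas()                    # equivalent route via a Table
</code></pre>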
<python><pandas><apache-arrow>
2025-04-30 09:46:13
1
18,579
user2138149
79,600,015
3,402,296
Worker cannot find external file in Apache Beam
<p>I have a simple function that reads from Mongo using Apache Beam:</p> <pre class="lang-py prettyprint-override"><code>def create_mongo_pipeline(p: beam.Pipeline, mongo_uri: str, db: str, coll: str, cert_file: str, gcs_bucket: str) -&gt; None: # Read from MongoDB docs = p | 'ReadMyFile' &gt;&gt; beam.io.mongodbio.ReadFromMongoDB( uri=mongo_uri, db=db, coll=coll, bucket_auto=True, extra_client_params={ &quot;tls&quot;: True, &quot;authMechanism&quot;: &quot;MONGODB-X509&quot;, &quot;tlsCertificateKeyFile&quot;: cert_file, } ) ... </code></pre> <p><code>cert_file</code> is defined in a <code>config.json</code>:</p> <pre class="lang-json prettyprint-override"><code>{ &quot;connection&quot;: { &quot;type&quot;: &quot;mongo&quot;, &quot;uri&quot;: &quot;mongodb://blablabla:27017&quot;, &quot;db&quot;: &quot;mydb&quot;, &quot;certificate&quot;: &quot;resources/mycert.pem&quot; } } </code></pre> <p>Everything runs smoothly locally (with the Direct Runner). However, as soon as I run the script in Dataflow I get the following error:</p> <pre><code>RuntimeError: FileNotFoundError: [Errno 2] No such file or directory: 'resources/mycert.pem' [while running 'ReadMyFile/Read/SDFBoundedSourceReader/ParDo(SDFBoundedSourceDoFn)/PairWithRestriction-ptransform-37'] </code></pre> <p>I have tried packaging all the code in a Docker container but I keep on getting the same error.</p> <p>Just for the sake of completeness, this is how I create the <code>PipelineOptions</code> when using the Dataflow runner:</p> <pre class="lang-py prettyprint-override"><code>beam_options = PipelineOptions( runner=&quot;DataflowRunner&quot;, project=config[&quot;project_id&quot;], job_name=config[&quot;job_name&quot;], experiments=[&quot;use_grpc_for_gcs&quot;], temp_location=f&quot;{config['gcs_bucket']}/temp&quot;, ) </code></pre>
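<p>One common pattern, sketched here with a placeholder GCS path and not specific to <code>mongodbio</code>: stage the certificate in GCS and materialise it to a local file on each worker before the Mongo client is created (e.g., from a <code>DoFn</code>'s <code>setup</code>), since the relative path only exists on the launching machine.</p> <pre class="lang-py prettyprint-override"><code>import tempfile
from apache_beam.io.filesystems import FileSystems

def localize_cert(gcs_path):
    # Copy e.g. 'gs://my-bucket/mycert.pem' to a local temp file on the worker.
    with FileSystems.open(gcs_path) as src:
        tmp = tempfile.NamedTemporaryFile(suffix='.pem', delete=False)
        tmp.write(src.read())
        tmp.close()
    return tmp.name   # pass this local path as tlsCertificateKeyFile
</code></pre>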
<python><google-cloud-platform><google-cloud-dataflow><apache-beam><apache-beam-io>
2025-04-30 09:28:13
2
576
RDGuida
79,599,972
4,062,352
When switching from pyenv to venv, what replaces pyenv's version management?
<p>The native Python package <code>venv</code> is &quot;recommended for creating virtual environments&quot;. I read that as saying that the Python Foundation wants users to switch from third party solutions like <code>pyenv</code> to native, &quot;official&quot; functionality. Comments <a href="https://stackoverflow.com/a/41573588/4062352">here</a> suggest some support for this view.</p> <p>If so, is there a native, official, platform-independent way to manage, install, uninstall, list, and activate different Python versions, in the way <code>pyenv</code> is capable of?</p>
<python><python-3.x><python-venv><pyenv>
2025-04-30 09:11:23
1
646
Roofus
79,599,726
305,043
QuerySet returns empty in django
<p>I am trying to get the book list with 3 conditions:</p> <ol> <li>Accession number (character varying,10)</li> <li>Physical location (character varying,20)</li> <li>Book status</li> </ol> <p>Based on the user input:</p> <ol> <li>Accession number should be present in the list provided by the user</li> <li>Physical location should be exact match</li> <li>Book status should be 'Published'</li> </ol> <p> </p> <pre><code>qobjects = Q() variable_column = &quot;accession_number&quot; search_type = 'in' filter_string = variable_column + '__' + search_type #Passed accessionNumber as '123','234',456' #Also tried 123,456,567 #Both did not work search_string = '['+accessionNumber+']' qcolumn = Q(**{filter_string: search_string}) qobjects.add(qcolumn, Q.AND) print('print qobjects after adding accession numbers') print(qobjects) location_column=&quot;physical_location&quot; search_type='iexact' filter_string = location_column + '__' + search_type qcolumn_location = Q(**{filter_string: location}) print('print qobjects after adding location') print(qobjects) qobjects.add(qcolumn_location,Q.AND) qcolumn_status = Q(**{'booK_status': 'PUBLISHED'}) qobjects.add(qcolumn_status, Q.AND) print('print qobjects after adding status') print(qobjects) res_set = Book.objects.filter(qobjects).order_by(location_column). \ values('id', 'title', 'cover_image_name','booK_status', 'accession_number', 'total_image_count',) print('print result set') print(res_set) </code></pre>
<python><django><django-models><django-queryset><django-q>
2025-04-30 07:09:21
1
1,731
PM.
79,599,624
988,279
FastAPI application with nginx - StaticFiles not working
<p>I've a simple FastAPI project. It is running correctly in pycharm and in the docker container. When running via nginx, the <code>StaticFiles</code> are not delivered.</p> <p>Structure is like this:</p> <pre><code>├── app │   ├── main.py │   ├── static_stuff │   │   └── styles.css │   └── templates │   └── item.html ├── Dockerfile ├── requirements.txt </code></pre> <p>main.py</p> <pre><code>from fastapi import Request, FastAPI from fastapi.responses import HTMLResponse from fastapi.staticfiles import StaticFiles from fastapi.templating import Jinja2Templates import os.path as path ROOT_PATH = path.abspath(path.join(__file__ ,&quot;../&quot;)) app = FastAPI(title=&quot;my_app&quot;, root_path='/my_app') app.mount(&quot;/static_stuff&quot;, StaticFiles(directory=f&quot;/{ROOT_PATH}/static_stuff&quot;), name=&quot;static&quot;) templates = Jinja2Templates(directory=f&quot;/{ROOT_PATH}/templates&quot;) @app.get(&quot;/items/{id}&quot;, response_class=HTMLResponse, include_in_schema=False) async def read_item(request: Request, id: str): return templates.TemplateResponse( request=request, name=&quot;item.html&quot;, context={&quot;id&quot;: id} ) </code></pre> <p>The application is running in a docker container:</p> <p>Dockerfile:</p> <pre><code>FROM python:3.13-slim WORKDIR /my_app COPY requirements.txt ./ RUN pip install --no-cache-dir -r requirements.txt COPY app ./app CMD [&quot;gunicorn&quot;, &quot;-k&quot;, &quot;uvicorn.workers.UvicornWorker&quot;, &quot;app.main:app&quot;, &quot;--bind&quot;, &quot;0.0.0.0:6543&quot;] EXPOSE 6543 </code></pre> <p>The nginx configuration looks like this:</p> <pre><code>location /my_app { proxy_pass http://my_host:6543; include proxy_params; } </code></pre> <p>When calling the nginx -&gt; http://my_host/my_app/items/5</p> <p>Everything works except the staticfiles. The styles.css is not found. What am I doing wrong?</p> <p>I tried something like this, but I had no success</p> <pre><code>location ~ /static_stuff/(.+) { proxy_pass http://my_host:6543; include proxy_params; } </code></pre>
<python><nginx><fastapi>
2025-04-30 05:49:26
2
522
saromba
79,599,437
395,857
Import onnxruntime then load_dataset causes "ImportError: /lib/x86_64-linux-gnu/libstdc++.so.6: version `GLIBCXX_3.4.29' not found": why + how to fix?
<p>Running</p> <pre class="lang-py prettyprint-override"><code>import onnxruntime as ort from datasets import load_dataset </code></pre> <p>yields the error:</p> <pre><code>(env-312) dernoncourt@pc:~/test$ python SE--test_importpb.py Traceback (most recent call last): File &quot;/home/dernoncourt/test/SE--test_importpb.py&quot;, line 2, in &lt;module&gt; from datasets import load_dataset File &quot;/opt/conda/envs/env-312/lib/python3.12/site-packages/datasets/__init__.py&quot;, line 17, in &lt;module&gt; from .arrow_dataset import Dataset File &quot;/opt/conda/envs/env-312/lib/python3.12/site-packages/datasets/arrow_dataset.py&quot;, line 57, in &lt;module&gt; import pyarrow as pa File &quot;/opt/conda/envs/env-312/lib/python3.12/site-packages/pyarrow/__init__.py&quot;, line 65, in &lt;module&gt; import pyarrow.lib as _lib ImportError: /lib/x86_64-linux-gnu/libstdc++.so.6: version `GLIBCXX_3.4.29' not found (required by /opt/conda/envs/env-312/lib/python3.12/site-packages/pyarrow/lib.cpython-312-x86_64-linux-gnu.so) (env-312) dernoncourt@pc:~/test$ </code></pre> <p>Why and how to fix? I use:</p> <ul> <li>Python 3.12.9</li> <li>Ubuntu 20.04.5 LTS</li> <li><code>datasets==3.5.0</code></li> <li><code>onnxruntime==1.21.0</code></li> </ul> <p>Running <code>from datasets import load_dataset</code> or <code>import onnxruntime as ort</code> alone works fine.</p>
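<p>A hedged workaround sketch, assuming the conflict comes from <code>onnxruntime</code> loading the older system <code>libstdc++</code> before <code>pyarrow</code> gets a chance to load the conda environment's newer one: reverse the import order.</p> <pre class="lang-py prettyprint-override"><code># Import pyarrow (via datasets) first so its libstdc++ symbols resolve
# against the newer library; then import onnxruntime.
from datasets import load_dataset
import onnxruntime as ort
</code></pre>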
<python><ubuntu><libstdc++><onnxruntime><huggingface-datasets>
2025-04-30 02:13:46
1
84,585
Franck Dernoncourt
79,599,360
14,802,285
What is the equivalent of torch.nn.Parameter(...) in julia's flux?
<p>In <code>pytorch</code> I can create a custom module as follows (this code example is taken from <a href="https://discuss.pytorch.org/t/practical-usage-of-nn-variable-and-nn-parameter/148644/2" rel="nofollow noreferrer">here</a>):</p> <pre><code>from torch import nn class MyModel(nn.Module): def __init__(self): super().__init__() self.param = nn.Parameter(torch.randn(1, 1)) def forward(self, x): x = x * self.param return x model = MyModel() print(dict(model.named_parameters())) # {'param': Parameter containing: # tensor([[0.6077]], requires_grad=True)} out = model(torch.randn(1, 1)) loss = out.mean() loss.backward() print(model.param.grad) # tensor([[-1.3033]]) </code></pre> <p>I want to be able to do the same using <code>julia</code>'s <code>flux</code>. Specifically I am interested in knowing what the equivalent of <code>nn.Parameter(torch.randn(1, 1))</code> is in <code>flux</code>.</p> <p>P.S. I am tagging both <code>python</code> and <code>julia</code> as I believe this would increase the chance of this post reaching to people who have knowledge in both languages.</p>
<python><pytorch><julia><flux.jl>
2025-04-30 00:27:31
2
3,364
bird
79,599,232
1,609,514
Why is the numerator coefficient array returned by scipy.signal.cont2discrete two dimensional?
<p>I'm converting a continuous-time dynamical system to discrete time using the function <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.cont2discrete.html" rel="nofollow noreferrer"><code>cont2discrete</code></a> from Scipy's signal processing library.</p> <pre class="lang-py prettyprint-override"><code>dt = 0.1 num, den, dt = scipy.signal.cont2discrete(([1], [5, 1]), dt) print(num, den) </code></pre> <p>Output:</p> <pre class="lang-none prettyprint-override"><code>[[0. 0.01980133]] [ 1. -0.98019867] </code></pre> <p>I then want to simulate this using <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.lfilter.html" rel="nofollow noreferrer"><code>lfilter</code></a>, but <code>lfilter</code> takes a, b as arguments which are the numerator/denominator coefficient vectors &quot;in a 1-D sequence.&quot;</p> <p>So I need to do this:</p> <pre><code>u = np.ones(50) y = scipy.signal.lfilter(num[0], den, u) </code></pre> <p>I'm just wondering why the convention is different for <code>num</code> and <code>den</code>, and for these two functions.</p>
<python><scipy><signal-processing><transfer-function>
2025-04-29 21:34:11
1
11,755
Bill
79,599,188
1,264,589
Python pg8000 query params...syntax error at or near \”$2\"
<p>Hi… I'm trying to add a parameter to my query but I get this error:</p> <pre><code>&quot;errorMessage&quot;: &quot;{'S': 'ERROR', 'V': 'ERROR', 'C': '42601', 'M': 'syntax error at or near \&quot;$2\&quot;', 'P': '3793', 'F': 'scan.l', 'L': '1146', 'R': 'scanner_yyerror'}&quot; </code></pre> <p>This works:</p> <pre><code>import pg8000 account_id = 1234 sql = &quot;&quot;&quot; SELECT * FROM samples WHERE account_id = %s AND delete_date IS NULL ORDER BY date DESC &quot;&quot;&quot; cursor.execute(sql, (account_id,)) </code></pre> <p>But this does not:</p> <pre><code>import pg8000 account_id = 1234 start_date = query_string_params['start-date'] if 'start-date' in query_string_params else None # start_date format is: '2025-02-04' filters = &quot;&quot; if start_date is not None: filters = filters + f&quot; AND DATE(sample_date) &gt;= '{start_date}'&quot; sql = &quot;&quot;&quot; SELECT * FROM samples WHERE account_id = %s AND delete_date IS NULL %s ORDER BY date DESC &quot;&quot;&quot; cursor.execute(sql, (account_id, filters)) </code></pre> <p>Any idea what I'm doing wrong?</p>
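<p>For reference, a sketch of the usual pattern: only SQL text is appended for the clause skeleton, while every value stays a bound parameter (a filter string cannot itself be substituted for a <code>%s</code> placeholder, since placeholders bind values, not SQL fragments).</p> <pre class="lang-py prettyprint-override"><code>sql = '''
    SELECT * FROM samples
    WHERE account_id = %s AND delete_date IS NULL
'''
params = [account_id]

if start_date is not None:
    sql += ' AND DATE(sample_date) &gt;= %s'
    params.append(start_date)

sql += ' ORDER BY date DESC'
cursor.execute(sql, tuple(params))
</code></pre>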
<python><postgresql><pg8000>
2025-04-29 20:50:34
1
1,305
hugo
79,599,115
20,591,261
How to Filter All Columns in a Polars DataFrame by expression?
<p>I have this example Polars DataFrame:</p> <pre class="lang-py prettyprint-override"><code>import polars as pl df = pl.DataFrame({ &quot;id&quot;: [1, 2, 3, 4, 5], &quot;variable1&quot;: [15, None, 5, 10, 20], &quot;variable2&quot;: [40, 30, 50, 10, None], }) </code></pre> <p>I'm trying to filter all columns of my dataframe using the method <code>pl.all()</code>, and I also tried using <code>pl.any_horizontal() == Condition</code>. However, I'm getting the following error:</p> <pre class="lang-py prettyprint-override"><code>ComputeError: The predicate passed to 'LazyFrame.filter' expanded to multiple expressions: col(&quot;id&quot;).is_not_null(), col(&quot;variable1&quot;).is_not_null(), col(&quot;variable2&quot;).is_not_null(), This is ambiguous. Try to combine the predicates with the 'all' or `any' expression. </code></pre> <p>Here are my attempts to address this:</p> <pre class="lang-py prettyprint-override"><code># Attempt 1: ( df .filter( pl.all().is_not_null() ) ) # Attempt 2: ( df .filter( pl.any_horizontal().is_not_null() ) ) </code></pre> <p>Desired output, but it's not scalable for bigger DataFrames:</p> <pre class="lang-py prettyprint-override"><code>( df .filter( pl.col(&quot;variable1&quot;).is_not_null(), pl.col(&quot;variable2&quot;).is_not_null() ) ) </code></pre> <pre><code>shape: (3, 3) ┌─────┬───────────┬───────────┐ │ id ┆ variable1 ┆ variable2 │ │ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ i64 │ ╞═════╪═══════════╪═══════════╡ │ 1 ┆ 15 ┆ 40 │ │ 3 ┆ 5 ┆ 50 │ │ 4 ┆ 10 ┆ 10 │ └─────┴───────────┴───────────┘ </code></pre> <p>How can I filter all columns in a scalable way without specifying each column individually?</p>
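<p>For reference, a minimal sketch (assuming a reasonably recent Polars): combine the per-column predicates with <code>pl.all_horizontal</code>, which folds them into a single expression, or use the dedicated shorthand for this particular predicate.</p> <pre class="lang-py prettyprint-override"><code>out = df.filter(pl.all_horizontal(pl.all().is_not_null()))

# for the non-null case specifically, this is equivalent:
out2 = df.drop_nulls()
</code></pre>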
<python><dataframe><python-polars>
2025-04-29 19:49:22
3
1,195
Simon
79,598,876
243,031
Python 3.8 package installed but pkg_resources is not able to find it
<p>I am using the old Python version <code>3.8.6</code>.</p> <p>I created a virtual environment in <code>/var/virtualenvs/myvenv</code> and installed <code>mypkg.bouncer.client</code>, but when I import it, it gives the error below.</p> <pre><code>Traceback (most recent call last): File &quot;/var/virtualenvs/myvenv/lib/python3.8/site-packages/mypkg/contrib/myapp/wsgi.py&quot;, line 22, in &lt;module&gt; application = get_wsgi_application() File &quot;/var/virtualenvs/myvenv/lib/python3.8/site-packages/django/core/wsgi.py&quot;, line 13, in get_wsgi_application return WSGIHandler() File &quot;/var/virtualenvs/myvenv/lib/python3.8/site-packages/django/core/handlers/wsgi.py&quot;, line 127, in __init__ self.load_middleware() File &quot;/var/virtualenvs/myvenv/lib/python3.8/site-packages/django/core/handlers/base.py&quot;, line 40, in load_middleware middleware = import_string(middleware_path) File &quot;/var/virtualenvs/myvenv/lib/python3.8/site-packages/django/utils/module_loading.py&quot;, line 17, in import_string module = import_module(module_path) File &quot;/lib/python3.8/importlib/__init__.py&quot;, line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File &quot;/var/virtualenvs/myvenv/lib/python3.8/site-packages/mypkg/contrib/myapi/authentication/middleware.py&quot;, line 14, in &lt;module&gt; from mypkg.bouncer.client import endpoints File &quot;/var/virtualenvs/myvenv/lib/python3.8/site-packages/mypkg/bouncer/client/__init__.py&quot;, line 16, in &lt;module&gt; __version__ = pkg_resources.get_distribution(&quot;mypkg.bouncer.client&quot;).version File &quot;/var/virtualenvs/myvenv/lib/python3.8/site-packages/pkg_resources/__init__.py&quot;, line 534, in get_distribution dist = get_provider(dist) File &quot;/var/virtualenvs/myvenv/lib/python3.8/site-packages/pkg_resources/__init__.py&quot;, line 417, in get_provider return working_set.find(moduleOrReq) or require(str(moduleOrReq))[0] File &quot;/var/virtualenvs/myvenv/lib/python3.8/site-packages/pkg_resources/__init__.py&quot;, line 1070, in require needed = self.resolve(parse_requirements(requirements)) File &quot;/var/virtualenvs/myvenv/lib/python3.8/site-packages/pkg_resources/__init__.py&quot;, line 897, in resolve dist = self._resolve_dist( File &quot;/var/virtualenvs/myvenv/lib/python3.8/site-packages/pkg_resources/__init__.py&quot;, line 938, in _resolve_dist raise DistributionNotFound(req, requirers) pkg_resources.DistributionNotFound: The 'mypkg.bouncer.client' distribution was not found and is required by the application </code></pre> <p>When I check <code>pip freeze</code> to make sure the package is installed, it lists the package.</p> <pre><code>[user@test-01 ~]$ /var/virtualenvs/myvenv/bin/python -m pip freeze | grep mypkg.bouncer.client mypkg.bouncer.client==0.X.Y </code></pre> <p>I also checked that the files exist in <code>lib/site-packages</code>:</p> <pre><code>[myuser@test-01 ~]$ ls -al /var/virtualenvs/myvenv/lib/python3.8/site-packages/mypkg/bouncer/client/ total 32 drwxrwxr-x 3 myuser users 136 Apr 29 11:28 . drwxrwxr-x 5 myuser users 104 Apr 29 11:28 .. -rw-rwxr-- 1 myuser users 454 Apr 29 11:28 endpoints.py -rw-rwxr-- 1 myuser users 1301 Apr 29 11:28 errors.py -rw-rwxr-- 1 myuser users 481 Apr 29 11:28 __init__.py drwxrwxr-x 2 myuser users 189 Apr 29 11:28 __pycache__ </code></pre> <p>I'm not sure why it's giving this error.</p> <p><em>EDIT</em></p> <p>I found that it's because <code>working_set.by_key</code> is not updated with <code>mypkg.bouncer.client</code>.</p> <p><code>sudo /var/virtualenvs/myenv/bin/python -c 'import pkg_resources;print(pkg_resources.working_set.by_key)'</code> does not have an entry for mypkg.</p> <p>How is this <code>by_key</code> updated?</p>
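<p>A further diagnostic I'm considering, as a minimal sketch: <code>importlib.metadata</code> ships with Python 3.8, so this checks whether the distribution metadata itself is discoverable without going through <code>pkg_resources</code>:</p> <pre><code>/var/virtualenvs/myvenv/bin/python -c &quot;from importlib.metadata import version; print(version('mypkg.bouncer.client'))&quot;
</code></pre>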
<python><import><package><python-3.8>
2025-04-29 17:12:02
0
21,411
NPatel
79,598,793
4,706,711
How can I update the UI in real time when I receive a request in FastAPI?
<p>I have this simple script:</p> <pre class="lang-py prettyprint-override"><code>import os import gradio as gr from fastapi import FastAPI, Request import uvicorn import threading from typing import List from datetime import datetime api = FastAPI() # Shared logs class Log(): def __init__(self): self._logs: List[str] = [] self.logstr=&quot;&quot; def log_message(self,msg: str): timestamp = datetime.now().strftime(&quot;%H:%M:%S&quot;) self._logs.append(f&quot;[{timestamp}] {msg}&quot;) self.logstr=&quot;\n&quot;.join(self._logs) log = Log() log_state=gr.State(log) # === FastAPI Setup === @api.post(&quot;/log&quot;) async def receive_log(request: Request): data = await request.body() msg = f&quot;API received: {data}&quot; log.log_message(msg) gr.update(value=log.logstr) return {&quot;status&quot;: &quot;logged&quot;, &quot;message&quot;: msg} def run_api(): api_port = int(os.environ.get(&quot;API_PORT&quot;, 8000)) uvicorn.run(api, host=&quot;0.0.0.0&quot;, port=api_port) # === Gradio UI === with gr.Blocks() as ui: gr.Markdown(&quot;## 📝 Incoming HTTP Requests&quot;) log_box = gr.Textbox(label=&quot;Logs&quot;, inputs=log_state, lines=20) # Trigger the refresh when the log state is updated def run_gradio(): gradio_port = int(os.environ.get(&quot;GRADIO_PORT&quot;, 7860)) ui.launch(server_port=gradio_port) # === Start Both === if __name__ == &quot;__main__&quot;: threading.Thread(target=run_api, daemon=True).start() run_gradio() </code></pre> <p>What I'm trying to achieve is to have FastAPI listening on one port and an admin panel on another one that displays the incoming requests in real time:</p> <pre><code>POST /log -&gt; FastAPI -&gt;common_log -&gt; Gradio </code></pre> <p>But I am unable to change the contents of the <code>Textbox</code> when I receive incoming requests in FastAPI. How can I do this?</p>
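<p>A minimal sketch of one direction I'm considering: polling the shared log from the UI. This assumes a recent Gradio version that provides <code>gr.Timer</code>, and is untested:</p> <pre class="lang-py prettyprint-override"><code>with gr.Blocks() as ui:
    gr.Markdown(&quot;## 📝 Incoming HTTP Requests&quot;)
    log_box = gr.Textbox(label=&quot;Logs&quot;, lines=20)
    timer = gr.Timer(1)  # tick once per second
    timer.tick(lambda: log.logstr, outputs=log_box)  # re-read the shared log on every tick
</code></pre> <p>My understanding is that <code>gr.update(...)</code> has no effect unless it is returned from a callback, so some event (here, a timer tick) has to pull the new value into the component.</p>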
<python><fastapi><gradio>
2025-04-29 16:08:50
2
10,444
Dimitrios Desyllas
79,598,781
8,753,169
How to set up docker compose with django and pdm
<p>I have a django project with pdm and docker compose and I set up the codebase volume to enable django hot reload and debugging in the container. Building with the compose config works fine, but when I try to run the server with <code>docker compose up -d</code> I hit a Python error, as if the libs were not picked up properly.</p> <p>The project has the following architecture</p> <pre><code>project/ ├── config/ │ ├── settings.py │ └── urls.py │ └── ... ├── some_django_app/ │ └── ... ├── compose.yaml ├── Dockerfile ├── README.md ├── pyproject.toml └── pdm.lock </code></pre> <p>The compose file is as follows:</p> <pre class="lang-yaml prettyprint-override"><code>services: web: build: dockerfile: Dockerfile command: pdm run python manage.py runserver 0.0.0.0:8000 ports: - 8000:8000 volumes: - .:/app env_file: - .env </code></pre> <p>My Dockerfile is as follows:</p> <pre class="lang-none prettyprint-override"><code># Use an official Python runtime as a parent image FROM python:3.13.2-slim-bullseye # Set environment variables ENV PYTHONDONTWRITEBYTECODE=1 ENV PYTHONUNBUFFERED=1 # Set the working directory in the container WORKDIR /app # Install system dependencies RUN apt-get update &amp;&amp; apt-get install -y \ build-essential \ libpq-dev \ &amp;&amp; rm -rf /var/lib/apt/lists/* # Install PDM RUN pip install --no-cache-dir pdm # Copy the project files into the container COPY . /app # Accept build argument for ENVIRONMENT ARG ENVIRONMENT=prod # Install project dependencies using PDM RUN pdm install --prod --no-lock --no-editable </code></pre> <p>Here is the trace of the error when I bring the container up:</p> <pre><code>INFO: The saved Python interpreter does not exist or broken. Trying to find another one. INFO: __pypackages__ is detected, using the PEP 582 mode Traceback (most recent call last): File &quot;/app/manage.py&quot;, line 12, in main from django.core.management import execute_from_command_line ModuleNotFoundError: No module named 'django' The above exception was the direct cause of the following exception: Traceback (most recent call last): File &quot;/app/manage.py&quot;, line 23, in &lt;module&gt; main() ~~~~^^ File &quot;/app/manage.py&quot;, line 14, in main raise ImportError( ...&lt;3 lines&gt;... ) from exc ImportError: Couldn't import Django. Are you sure it's installed and available on your PYTHONPATH environment variable? Did you forget to activate a virtual environment? </code></pre> <p>It is as if pdm couldn't pick up my local libs. When I remove the container line in my compose file and rebuild, django runs fine. What is wrong with my configuration?</p>
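<p>For what it's worth, my current suspicion is that the bind mount <code>.:/app</code> shadows whatever PDM installed into <code>/app</code> at build time (the startup log mentions falling back to PEP 582 <code>__pypackages__</code> mode). A sketch of the workaround I'm considering: an anonymous volume so the image's install directory survives the bind mount. The directory name depends on where PDM actually installed at build time, and this is untested:</p> <pre class="lang-yaml prettyprint-override"><code>    volumes:
      - .:/app
      - /app/.venv   # or /app/__pypackages__, whichever directory PDM created during the build
</code></pre>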
<python><django><docker><docker-compose><pdm>
2025-04-29 16:01:55
2
1,043
Martin Faucheux
79,598,738
2,072,516
Joined loads across multiple tables
<p>I'm stuck with joined loads over multiple models. I have a FastAPI project, and I'm using Jinja to serve some HTML pages. In said page, I need to access content joined from other tables (it looks like doing the access at render time doesn't work from inside the Jinja template? I keep getting greenlet errors). Because I'll definitely be getting said data, I'm doing joined loads on the properties mapping to the other models:</p> <pre><code>statement = statement.options( joinedload(Item.purchases), joinedload(Item.purchases.receipt), joinedload(Item.purchases.receipt.store), ) </code></pre> <p>However, I get this error:</p> <pre><code> joinedload(Item.purchases.receipt), ^^^^^^^^^^^^^^^^^^^^^^ File &quot;/app/.venv/lib/python3.12/site-packages/sqlalchemy/orm/attributes.py&quot;, line 474, in __getattr__ raise AttributeError( AttributeError: Neither 'InstrumentedAttribute' object nor 'Comparator' object associated with Item.purchases has an attribute 'receipt' </code></pre> <p>Also, given I'm joining 4 tables, I feel like I should minimize the number of columns being selected across the joins, but I don't know how I'd do that.</p>
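<p>A minimal sketch of the chained form I believe the loader options API expects: each <code>joinedload</code> continues from the previous relationship instead of being spelled as a dotted attribute path. The class names <code>Purchase</code> and <code>Receipt</code> are my guesses for the related models:</p> <pre><code>from sqlalchemy.orm import joinedload

statement = statement.options(
    joinedload(Item.purchases)
    .joinedload(Purchase.receipt)
    .joinedload(Receipt.store)
)
</code></pre> <p>For trimming columns, <code>load_only</code> can be chained onto each loader in the same way, though I haven't measured whether it helps here.</p>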
<python><sqlalchemy>
2025-04-29 15:33:35
1
3,210
Rohit
79,598,735
893,254
How to convert from Python pandas Timestamp to repeated google.protobuf.Timestamp? (Python + Google Protocol Buffers)
<p>I am trying to write some code which converts the contents of a <code>pandas.DataFrame</code> to a protobuf object which can be serialized and written to a file.</p> <p>Here is my protobuf definition.</p> <pre class="lang-none prettyprint-override"><code>syntax = &quot;proto3&quot;; import &quot;google/protobuf/timestamp.proto&quot;; message BidAskTimeseries { repeated double bid = 1; repeated double ask = 2; repeated google.protobuf.Timestamp timestamp = 3; } </code></pre> <p>It can be compiled using</p> <pre class="lang-none prettyprint-override"><code>protoc --proto_path=. --python_out=. bid_ask_timeseries.proto </code></pre> <p>The <code>pandas.DataFrame</code> has been loaded from a CSV file.</p> <pre class="lang-py prettyprint-override"><code>import pandas df = pandas.read_csv('df.csv') </code></pre> <p>The datatypes are</p> <pre class="lang-none prettyprint-override"><code>df.dtypes ask float64 bid float64 ts object </code></pre> <p>however <code>ts</code> is actually a <code>String</code>.</p> <p>The timestamp column can be converted to a <code>pandas.Timestamp</code> in the usual way</p> <pre class="lang-py prettyprint-override"><code>df['ts'] = pandas.to_datetime(df['ts']) </code></pre> <p>however, this step should be considered optional.</p> <p>I could not figure out how to create a Python protobuf object from this data.</p> <pre class="lang-py prettyprint-override"><code>import bid_ask_timeseries_pb2 from google.protobuf.timestamp import Timestamp bid_ask_timeseries = bid_ask_timeseries_pb2.BidAskTimeseries( bid=df['bid'], ask=df['ask'], timestamp=df['ts'], ) </code></pre> <p>It seemed like keeping the <code>'ts'</code> column as a <code>String</code> would be the most straightforward approach, since <code>dir(Timestamp)</code> shows that <code>Timestamp</code> has a <code>FromString</code> method. However, this produced the following error.</p> <pre><code>Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; TypeError: expected bytes, str found </code></pre> <p>Converting to <code>pandas.Timestamp</code> is preferable, since other parts of my code expect a <code>DataFrame</code> with datetime values stored as a <code>pandas.Timestamp</code> type, rather than a <code>String</code> type.</p> <p>However (and I may be wrong here) the <code>pandas.Timestamp</code> appears to be a two-element struct containing a count of number of seconds, and count of number of nanoseconds, exactly like the Linux <code>struct timespec</code> type. (If I'm wrong about this please let me know, I wasn't too sure from looking at the documentation.)</p> <ul> <li><a href="https://www.man7.org/linux/man-pages/man3/timespec.3type.html" rel="nofollow noreferrer">https://www.man7.org/linux/man-pages/man3/timespec.3type.html</a></li> </ul> <p>Assuming this is correct however, this format does seem as if it would line up with the protobuf format for a <code>Timestamp</code> which is an <code>int64</code> count of <code>seconds</code> and a <code>int32</code> count of <code>nanos</code>.</p> <ul> <li><a href="https://github.com/protocolbuffers/protobuf/blob/main/src/google/protobuf/timestamp.proto" rel="nofollow noreferrer">https://github.com/protocolbuffers/protobuf/blob/main/src/google/protobuf/timestamp.proto</a></li> </ul> <p>So it seems like there should be a relatively straightforward way to convert from the dataframe I am reading to the protobuf serialized format. But I can't quite figure out how to do it in a sensible way.
(Meaning not element-by-element at the Python level, which is likely to be extremely slow.)</p> <hr /> <p>Bit of an update...</p> <p>I found a method which is part of <code>google.protobuf.timestamp.Timestamp</code> called <code>FromDatetime</code>.</p> <p>This method is a bit strange. It takes a <code>self</code> argument, as well as a <code>datetime</code> object. This is weird, because it looks like it should be a constructor function, but isn't.</p> <p>It can be used like this:</p> <pre class="lang-py prettyprint-override"><code>timestamp = Timestamp() timestamp.FromDatetime(dt) </code></pre> <p>It's a bit odd looking, as an API.</p> <p>Moving on, now loading the dataframe, and converting the column to a <code>datetime</code></p> <pre class="lang-py prettyprint-override"><code>df = pandas.read_csv('df.csv') df['ts'] = pandas.to_datetime(df['ts']) </code></pre> <p>I tried to convert this to a list of <code>google.protobuf.timestamp.Timestamp</code> in the following way:</p> <pre class="lang-py prettyprint-override"><code>def convert(t): timestamp = Timestamp() timestamp.FromDatetime(t) return timestamp timestamp_list = list(map(convert, df['ts'])) bid_ask_timeseries = bid_ask_timeseries_pb2.BidAskTimeseries( bid=df['bid'], ask=df['ask'], timestamp=timestamp_list, ) </code></pre> <p>I would have written a lambda function to do the conversion, but it's difficult to write such a thing in Python.</p> <p>This code does not work, and produces the following error:</p> <pre><code>TypeError: 'Timestamp' object cannot be interpreted as an integer </code></pre> <p>I also tried converting the <code>list</code> to a <code>numpy.array</code>, however the error was the same.</p> <p>This error message doesn't make much sense to me. Why would the protobuf object constructor be expecting an integer type when it has been defined via the <code>.proto</code> file to expect a <code>repeated google.protobuf.Timestamp timestamp</code> type?</p>
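<p>For reference, the closest I've come to something workable is building the messages from the int64 nanosecond values rather than going through <code>datetime</code>. This is a sketch under the assumption that <code>ts</code> holds <code>datetime64[ns]</code> values, and I haven't verified it end to end:</p> <pre class="lang-py prettyprint-override"><code>from google.protobuf.timestamp_pb2 import Timestamp

ns = df['ts'].astype('int64')  # nanoseconds since the epoch
timestamps = [
    Timestamp(seconds=int(n // 10**9), nanos=int(n % 10**9))  # message fields can be set via kwargs
    for n in ns
]

bid_ask_timeseries = bid_ask_timeseries_pb2.BidAskTimeseries(
    bid=df['bid'],
    ask=df['ask'],
    timestamp=timestamps,
)
</code></pre> <p>The list comprehension is still element-by-element at the Python level, so this sidesteps the type error rather than the performance concern.</p>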
<python><pandas><protocol-buffers>
2025-04-29 15:32:54
0
18,579
user2138149
79,598,423
497,649
How to select certain row(s) by code in a DataGrid table in Python-Shiny?
<p>I created a table (&quot;DataGrid&quot;) using:</p> <pre><code>ui.output_data_frame(&quot;grid&quot;) </code></pre> <p>which I filled using:</p> <pre><code>@render.data_frame def grid(): df = ... return render.DataGrid(df, selection_mode=&quot;row&quot;) </code></pre> <p>Now, I want to change the selection using code. Is this possible in Python's version of Shiny?</p>
<python><datatable><datagrid><reactive-programming><py-shiny>
2025-04-29 12:58:48
1
640
lambruscoAcido
79,598,267
3,332,023
Typehints for dicts in Python including the default values
<p>I have a function that calls a bunch of other functions like this:</p> <pre><code>import inspect def get_default_args(func): # obtained from: https://stackoverflow.com/questions/12627118/get-a-function-arguments-default-value signature = inspect.signature(func) return { k: v.default for k, v in signature.parameters.items() if v.default is not inspect.Parameter.empty } def func_a(self, var_a:int=10, var_b:str = 'a') -&gt; float: pass def func_b(self, var_c:int=10, var_d:str = 'b') -&gt; float: pass def main_func(args_a, args_b): value_a = func_a(**args_a) value_b = func_b(**args_b) # Get the options and default values to call the functions args_a = get_default_args(func_a) args_b = get_default_args(func_b) # Edit the default values args_a = {&quot;var_b&quot;:'d'} args_b = {&quot;var_c&quot;:5} # Make the call to the main function main_func(args_a, args_b) </code></pre> <p>This works; however, when I edit the input to <code>func_a</code> I would like the IDE to show all the options this function has, similar to when I call <code>func_a</code> directly:<a href="https://i.sstatic.net/1RKGub3L.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1RKGub3L.png" alt="enter image description here" /></a></p> <p>Since <code>get_default_args</code> is executed at runtime, the IDE (like VSCode) does not recognize what options and what default values there are in <code>args_a</code>. For example:</p> <p><a href="https://i.sstatic.net/YjkuORdx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YjkuORdx.png" alt="enter image description here" /></a></p> <p>I've tried with dataclasses and TypedDicts but I have not managed to get the behavior that I'm looking for.</p>
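<p>For concreteness, this is the kind of <code>TypedDict</code> sketch I tried. It gives the IDE the key names and value types, but not the runtime default values, which is why it doesn't fully solve my problem:</p> <pre><code>from typing import TypedDict

class FuncAArgs(TypedDict, total=False):  # total=False so every key is optional
    var_a: int
    var_b: str

args_a: FuncAArgs = {&quot;var_b&quot;: 'd'}  # the IDE can now complete and type-check the keys
</code></pre>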
<python><python-typing>
2025-04-29 11:34:50
0
467
axel_ande
79,598,199
15,638,204
Chaining multiple GPT-4 Turbo API calls with function calling and dynamic memory/context injection
<p>I'm working on a modular pipeline using OpenAI's GPT-4 Turbo (via <code>/v1/chat/completions</code>) that involves multi-step prompt chains. Each step has a specific role (e.g., intent detection → function execution → summarization → natural language response). I'm running into architectural questions:</p> <p>Each step depends on the output of the previous one, and some steps involve function calling (function_call: &quot;auto&quot;), while others require injecting previous context dynamically.</p> <p>Questions:</p> <p>Context transfer:</p> <p>Is it better to pass raw JSON outputs from prior steps into the system or user prompt?</p> <p>Does injecting intermediate outputs as tool_calls or appending them to messages yield better alignment?</p> <p>Function calling flow:</p> <p>In multi-step flows, should I manually parse the function call and immediately invoke the next step, or can I nest that logic inside the tool definition for cleaner chaining?</p> <p>Is it safe to pass generated outputs back into GPT via system prompts, or does that pollute the context in subtle ways?</p> <p>Memory efficiency:</p> <p>What's the best way to trim redundant information between steps without degrading quality?</p> <p>Is there a limit to how much structured output GPT can “remember” and reuse reliably across chained requests?</p> <pre><code> response_1 = openai.ChatCompletion.create( model=&quot;gpt-4-turbo&quot;, messages=[{ &quot;role&quot;: &quot;system&quot;, &quot;content&quot;: &quot;You are an intent classifier...&quot; }, { &quot;role&quot;: &quot;user&quot;, &quot;content&quot;: user_input }] ) response_2 = openai.ChatCompletion.create( model=&quot;gpt-4-turbo&quot;, messages=[ { &quot;role&quot;: &quot;system&quot;, &quot;content&quot;: f&quot;Intent: {intent}&quot; }, { &quot;role&quot;: &quot;user&quot;, &quot;content&quot;: &quot;Proceed to handle the request using tool if needed.&quot; } ], tools=[...], tool_choice=&quot;auto&quot; ) </code></pre> <p>Would appreciate any insights, especially from people building modular AI workflows with OpenAI's API. If you’ve done this at scale, how did you avoid messy context bloating and latency issues?</p> <p>Thanks!</p>
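<p>For completeness, the glue between the two calls that I elided: <code>intent</code> comes out of the first response before being injected into the second system prompt (field access per the pre-1.0 <code>openai</code> client that the snippet uses):</p> <pre><code>intent = response_1[&quot;choices&quot;][0][&quot;message&quot;][&quot;content&quot;].strip()
</code></pre>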
<python><artificial-intelligence><openai-api>
2025-04-29 11:01:55
1
956
avocadoLambda
79,597,670
2,284,570
Lattice attack against single signature : how to modify the b1 and c1 constants in order to get the script working against smaller leaks?
<p>The following script I found <a href="https://github.com/AMOSSYS/challenges/blob/main/CRY-ME/break_schnorr.py" rel="nofollow noreferrer">here</a>. The idea <a href="https://www.amossys.fr/insights/blog-technique/cry-me-private-key-recovery-with-a-single-signature/" rel="nofollow noreferrer">explained here</a> is that <strong>if for a single signature both the high order bits of the private key and nonce are set to 0, then it’s possible to combine those 2 leaks in order to recover the secret key</strong> :</p> <pre><code># Imports assumed from the original repo: fpylll provides the lattice tools, # while CurveJac, CURVES, b0 and c0 come from the challenge's own code. import sys from base64 import b64decode as b64d from fpylll import IntegerMatrix, LLL curve = CurveJac(CURVES['wei25519']) n = curve.order b1 = 18108194425815612945 c1 = 17333513496047876729 # --------------- # LLL Attack # --------------- def construct_matrix(r, s): ell = 64 K = 2**128 M = IntegerMatrix(29, 29) # Signature equation in first column M[0, 0] = n M[27, 0] = r M[28, 0] = -s for i in range(8): M[i + 3, 0] = 2**((7 - i)*8) * 2**(3*ell) M[i + 11, 0] = 2**((7 - i)*8) * (2**(2*ell) + 2**ell) M[i + 19, 0] = 2**((7 - i)*8) # Equation X3 = c1*X0 + c0 in second column M[1, 1] = 2**ell M[28, 1] = c0 for i in range(8): M[i + 3, 1] = -2**(i*8) M[i + 19, 1] = 2**(i*8) * c1 # Equation X2 = b1*X0 + b0 M[2, 2] = 2**ell M[28, 2] = b0 for i in range(8): M[i + 11, 2] = -2**(i*8) M[i + 19, 2] = 2**(i*8) * b1 # Weights on unknown variables # To form an integer matrix, we multiply all entries by K = 2**128 for i in range(M.nrows): for j in range(M.ncols): M[i, j] = K*M[i, j] for i in range(24): M[i + 3, i + 3] = K // 2**8 M[27, 27] = 1 M[28, 28] = K return M def find_key(r, s, curve, pubkey): found = False M = construct_matrix(r, s) LLL.reduction(M) res = [] for i in range(M.nrows): privkey = M[i, -2] Q = curve.scalar_mult(privkey, curve.base) if Q[0] == pubkey[0]: found = True if Q[1] == -pubkey[1]: privkey = -privkey return privkey return None if __name__ == '__main__': argc = len(sys.argv) - 1 if argc != 2: print('Please write the public key and signature in base64') sys.exit() pubkey_bytes = b64d(sys.argv[1]) xq = int.from_bytes(pubkey_bytes[:32], 'big') yq = int.from_bytes(pubkey_bytes[32:], 'big') if not curve.is_on_curve((xq, yq)): print('This is not a valid public key.') sys.exit() pubkey = curve.field(xq), curve.field(yq) sig_bytes = b64d(sys.argv[2]) r = int.from_bytes(sig_bytes[:32], 'big') s = int.from_bytes(sig_bytes[32:], 'big') print(f'Public key: ({hex(xq)}, {hex(yq)})') privkey = find_key(r, s, curve, pubkey) if privkey is None: print('Private key not found') else: print(f'Private key: {hex(privkey)}') </code></pre> <p>This Python script works when the private key is 128 bits long and the nonce 64 bits long (with the known bits set to 0)… But what if the nonce is 128 bits long and the private key 136 bits long?<br /> What I mean is: I don’t understand how the values of the constants $b1$ and $c1$ are computed, so I don’t know how they should be modified. Does this depend on the curve?</p>
<python><cryptography><digital-signature><elliptic-curve><mathematical-lattices>
2025-04-29 05:39:51
1
3,119
user2284570
79,597,604
13,396,497
Merge more than 2 dataframes if they exist and are initialised
<p>I am trying to merge three dataframes using intersection(). How can I check that all dataframes exist and are initialised before running the intersection(), without multiple if-else check blocks? If any dataframe is not assigned, it should be skipped in the intersection(). Sometimes I am getting the error - <strong>UnboundLocalError: local variable 'df_2' referenced before assignment</strong> - because file2 does not exist.</p> <p>Or is there any other easy way to achieve the below?</p> <p>Below is my approach:</p> <pre><code>if os.path.exists(file1): df_1 = pd.read_csv(file1, header=None, names=header_1, sep=',', index_col=None) if os.path.exists(file2): df_2 = pd.read_csv(file2, header=None, names=header_2, sep=',', index_col=None) if os.path.exists(file3): df_3 = pd.read_csv(file3, header=None, names=header_3, sep=',', index_col=None) common_columns = df_1.columns.intersection(df_2.columns).intersection(df_3.columns) filtered_1 = df_1[common_columns] filtered_2 = df_2[common_columns] filtered_3 = df_3[common_columns] concatenated_df = pd.concat([filtered_1, filtered_2, filtered_3], ignore_index=True) </code></pre>
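<p>Essentially, I'm looking for something along these lines, if that's the right direction: collect only the frames that were actually read, then fold the intersection over the list (untested):</p> <pre><code>candidates = [(file1, header_1), (file2, header_2), (file3, header_3)]
dfs = [pd.read_csv(f, header=None, names=h, sep=',', index_col=None)
       for f, h in candidates if os.path.exists(f)]

if dfs:
    common_columns = dfs[0].columns
    for d in dfs[1:]:
        common_columns = common_columns.intersection(d.columns)
    concatenated_df = pd.concat([d[common_columns] for d in dfs], ignore_index=True)
</code></pre>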
<python><pandas><dataframe>
2025-04-29 04:26:20
2
347
RKIDEV
79,597,461
7,519,434
Incompatible type hints for AST node subclasses
<p>I am writing a script to perform obfuscation using the <code>ast</code> module. A decorator is used to define obfuscation functions for each AST node type. The decorator adds the functions to a dictionary, <code>Obfuscator.obfuscators</code>, which maps <code>type(node)</code> to a function that takes the node and <code>context</code> and returns a node with the same parent (<code>ast.Name</code>, for example, is a subclass of <code>ast.expr</code> and its obfuscator function can return any subclass of <code>ast.expr</code> from <code>context</code>).</p> <pre class="lang-py prettyprint-override"><code>import ast from typing import Callable, TypeVar, cast, overload Nullable = ast.stmt NonNullable = ast.expr | ast.comprehension | ast.keyword NullableT = TypeVar(&quot;NullableT&quot;, bound=Nullable) NonNullableT = TypeVar(&quot;NonNullableT&quot;, bound=NonNullable) T = TypeVar(&quot;T&quot;, bound=Nullable | NonNullable) A = TypeVar(&quot;A&quot;, bound=ast.AST) Context = dict[str, ast.expr] class Obfuscator: obfuscators: dict[type[ast.AST], Callable[[ast.AST, Context], ast.AST | None]] = {} @classmethod @overload def obfuscate(cls, *nodes: NonNullableT, context: Context = {}) -&gt; NonNullableT: ... @classmethod @overload def obfuscate( cls, *nodes: NullableT, context: Context = {} ) -&gt; NullableT | None: ... @classmethod def obfuscate(cls, *nodes: T, context: Context = {}) -&gt; T | None: context = dict(context) for node in nodes: obfuscator = cls.obfuscators.get(type(node)) if obfuscator: result = obfuscator(node, context) if result: return cast(T, result) else: return node @classmethod def obfuscate_all(cls, nodes: list[T], context: Context) -&gt; list[T]: return [ x for x in [Obfuscator.obfuscate(node, context=context) for node in nodes] if x is not None ] @classmethod def define( cls, ast_type: type[A], ): def decorator(func: Callable[[A, Context], T | None]): # this is the line that causes the pyright error cls.obfuscators[ast_type] = func return func return decorator </code></pre> <p>Some example definitions:</p> <pre class="lang-py prettyprint-override"><code>@Obfuscator.define(ast.Import) def obf_import(stmt: ast.Import, context: Context): # replace `import x` with `__import__(x)` for name in stmt.names: context[name.asname or name.name] = ast.Call( ast.Name(id=&quot;__import__&quot;), args=[ast.Constant(name.name)], keywords=[] ) @Obfuscator.define(ast.With) def obf_with(stmt: ast.With, context: Context): for with_ in stmt.items: as_var: ast.expr | None = with_.optional_vars if isinstance(as_var, ast.Name) and as_var.id: context[as_var.id] = Obfuscator.obfuscate( with_.context_expr, context=context ) return Obfuscator.obfuscate(*stmt.body, context=context) @Obfuscator.define(ast.GeneratorExp) def obf_generatorexp(expr: ast.GeneratorExp, context: Context): expr.elt = Obfuscator.obfuscate(expr.elt, context=context) expr.generators = Obfuscator.obfuscate_all(expr.generators, context=context) return expr @Obfuscator.define(ast.Name) def obf_name(expr: ast.Name, context: Context): return context.get(expr.id, expr) </code></pre> <p>This code works fine, but I cannot work out how to type hint <code>define</code>. 
Pyright currently gives one error:</p> <pre><code>Argument of type &quot;(A@define, Context) -&gt; (T@decorator | None)&quot; cannot be assigned to parameter &quot;value&quot; of type &quot;(AST, Context) -&gt; (AST | None)&quot; in function &quot;__setitem__&quot; Type &quot;(A@define, Context) -&gt; (T@decorator | None)&quot; is not assignable to type &quot;(AST, Context) -&gt; (AST | None)&quot; Parameter 1: type &quot;AST&quot; is incompatible with type &quot;A@define&quot; Type &quot;AST&quot; is not assignable to type &quot;A@define&quot; </code></pre> <p>I think this is because the type hint for <code>obfuscators</code> means the functions are expected to work for any AST node, but they only work for one.</p> <p>What is the best way to type hint <code>define</code> and <code>obfuscators</code>?</p>
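<p>One workaround I've considered is casting at the registration boundary, since the dictionary's value type can't express that the callable's argument type depends on the key, but I'm not sure it's the best option:</p> <pre class="lang-py prettyprint-override"><code>    @classmethod
    def define(cls, ast_type: type[A]):
        def decorator(func: Callable[[A, Context], T | None]):
            # The dict erases the per-key link between ast_type and func's
            # parameter, so a cast documents (rather than proves) the invariant.
            cls.obfuscators[ast_type] = cast(
                Callable[[ast.AST, Context], ast.AST | None], func
            )
            return func

        return decorator
</code></pre>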
<python><python-typing>
2025-04-29 01:19:00
0
3,989
Henry
79,597,415
1,084,684
How to create menus in GTK 4 with pygi?
<p>I've googled about this quite a bit. I've put a number of examples of what I tried <a href="https://stromberg.dnsalias.org/svn/gtk-4-menu/trunk" rel="nofollow noreferrer">https://stromberg.dnsalias.org/svn/gtk-4-menu/trunk</a> - I'm going to quote the closest one immediately below in case of link rot:</p> <pre><code>import gi gi.require_version(&quot;Gtk&quot;, &quot;4.0&quot;) from gi.repository import Gtk, Gio class ExampleApp(Gtk.Application): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self.window = None def do_activate(self): if not self.window: self.window = Gtk.ApplicationWindow(application=self, title=&quot;GMenu Example&quot;) menu_bar = self.create_menu() self.window.set_menu_bar(menu_bar) self.window.present() def create_menu(self): menu_bar = Gio.Menu() file_menu = Gio.Menu() file_menu.append(&quot;New&quot;, &quot;app.new&quot;) file_menu.append(&quot;Quit&quot;, &quot;app.quit&quot;) edit_menu = Gio.Menu() edit_menu.append(&quot;Copy&quot;, &quot;app.copy&quot;) edit_menu.append(&quot;Paste&quot;, &quot;app.paste&quot;) menu_bar.append_submenu(&quot;File&quot;, file_menu) menu_bar.append_submenu(&quot;Edit&quot;, edit_menu) # Create actions new_action = Gio.SimpleAction.new(&quot;new&quot;, None) new_action.connect(&quot;activate&quot;, self.on_new_action) self.add_action(new_action) quit_action = Gio.SimpleAction.new(&quot;quit&quot;, None) quit_action.connect(&quot;activate&quot;, self.on_quit_action) self.add_action(quit_action) copy_action = Gio.SimpleAction.new(&quot;copy&quot;, None) copy_action.connect(&quot;activate&quot;, self.on_copy_action) self.add_action(copy_action) paste_action = Gio.SimpleAction.new(&quot;paste&quot;, None) paste_action.connect(&quot;activate&quot;, self.on_paste_action) self.add_action(paste_action) return menu_bar def on_new_action(self, action, param): print(&quot;New action triggered&quot;) def on_quit_action(self, action, param): self.quit() def on_copy_action(self, action, param): print(&quot;Copy action triggered&quot;) def on_paste_action(self, action, param): print(&quot;Paste action triggered&quot;) if __name__ == &quot;__main__&quot;: app = ExampleApp(application_id=&quot;com.example.GMenuExample&quot;) app.run(None) </code></pre>
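<p>For context, the pattern I keep seeing in the GTK 4 docs is that the menubar hangs off the <code>Gtk.Application</code> rather than the window, so a sketch of what I'd expect the activate handler to look like (untested):</p> <pre><code>    def do_activate(self):
        if not self.window:
            self.window = Gtk.ApplicationWindow(application=self, title=&quot;GMenu Example&quot;)
            menu_bar = self.create_menu()
            self.set_menubar(menu_bar)           # Gtk.Application.set_menubar takes the Gio.Menu model
            self.window.set_show_menubar(True)   # ask the window to render it
        self.window.present()
</code></pre>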
<python><gtk4><pygi>
2025-04-28 23:57:55
1
7,243
dstromberg
79,597,235
5,796,711
Jinja template not populating in python
<p>I am trying to get up and running with a very simple jinja example using Python.</p> <p>My folder structure looks like this:</p> <pre><code>jinja-testing/ templates/ common/ sample_values.sql queries/ sample_query.sql build_queries.ipynb </code></pre> <p>sample_values.sql is:</p> <pre><code>('value1', 'value2', 'value3') </code></pre> <p>sample_query.sql is:</p> <pre><code>SELECT * FROM tablename WHERE col IN {% include 'common/sample_values.sql' %} </code></pre> <p>and my build_queries Jupyter notebook is this one cell:</p> <pre><code>from jinja2 import Environment, FileSystemLoader env = Environment(loader=FileSystemLoader('templates')) template = env.get_template('queries/sample_query.sql') template.render() </code></pre> <p>When I run it, the output is:</p> <pre><code>'SELECT *\nFROM tablename\nWHERE col IN ' </code></pre> <p>If I change sample_query.sql to reference fake_file.sql, it throws an error, so it must be finding sample_values.sql. But I can't figure out why it's not populating.</p> <p>I am using jinja2 v3.1.4, but for work reasons will not be able to upgrade it to 3.1.6.</p>
<python><jinja2>
2025-04-28 20:22:31
0
523
krock
79,597,213
219,153
How to compute elementwise dot product of two 2D NumPy arrays
<p>This script:</p> <pre><code>import numpy as np a = np.array([[0, 1], [1, 1], [1, 0]]) b = np.array([[1, 0], [1, 1], [1, -1]]) dot = a[:, 0]*b[:, 0] + a[:, 1]*b[:, 1] print(dot) </code></pre> <p>produces the elementwise dot product of <code>a</code> and <code>b</code>:</p> <pre><code>[0 2 1] </code></pre> <p>There must be some NumPy magic that can do the same with one function call. I tried <code>np.dot</code>, <code>np.tensordot</code> and <code>np.inner</code> with no success. Any suggestions?</p>
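<p>For reference, the shape of one-liner I'm after: a row-wise dot product can be written with <code>einsum</code>, which I believe produces the same <code>[0 2 1]</code> here:</p> <pre><code>dot = np.einsum('ij,ij-&gt;i', a, b)  # sum over j for each row i
# equivalently: (a * b).sum(axis=1)
</code></pre>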
<python><arrays><numpy>
2025-04-28 20:03:50
1
8,585
Paul Jurczak
79,597,050
1,938,096
Open Excel workbook in smaller proportions
<p>I'm using this code (an adjusted example I found somewhere) to generate an Excel file. It works as expected, with just one problem: when the file is opened, it always opens full screen, which is very annoying to the end user. Is there a way to adjust the settings of the file?</p> <pre><code>jsreport = filtered_report if isinstance(jsreport, dict): jsreport = [jsreport] df = pd.DataFrame(jsreport) # Generate the Excel file in memory output = BytesIO() with pd.ExcelWriter(output, engine='openpyxl') as writer: df.to_excel(writer, index=False) output.seek(0) # Send the file to the user for download return send_file(output, as_attachment=True, download_name=f&quot;report_{month_year}.xlsx&quot;, mimetype='application/vnd.openxmlformats-officedocument.spreadsheetml.sheet') </code></pre> <p>I was looking at these pieces of documentation, but I can't find what I'm looking for: <a href="https://pandas.pydata.org/docs/reference/api/pandas.ExcelWriter.html" rel="nofollow noreferrer">https://pandas.pydata.org/docs/reference/api/pandas.ExcelWriter.html</a> <a href="https://openpyxl.readthedocs.io/en/3.1/api/openpyxl.worksheet.dimensions.html" rel="nofollow noreferrer">https://openpyxl.readthedocs.io/en/3.1/api/openpyxl.worksheet.dimensions.html</a></p>
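<p>A sketch of the direction I'm experimenting with: openpyxl models the workbook window geometry through <code>BookView</code>, so something like this might control the initial window size (the width/height values are illustrative, and I'm unsure of the exact units the spec uses):</p> <pre><code>from openpyxl.workbook.views import BookView

with pd.ExcelWriter(output, engine='openpyxl') as writer:
    df.to_excel(writer, index=False)
    # Replace the default full-size view with a smaller window
    writer.book.views = [BookView(windowWidth=18000, windowHeight=10000)]
</code></pre>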
<python><excel>
2025-04-28 17:57:39
0
579
Gabrie
79,596,918
12,608,619
Python asyncio aiopg exception for correct combination of host and database
<p>I have a list of 155 host + databases and I want to create a simple app in jupyter notebook that runs some query on them. An example looks as follows:</p> <pre><code>SQL = &quot;&quot;&quot; SELECT 1 &quot;&quot;&quot; import asyncio import aiopg import psycopg2 import nest_asyncio nest_asyncio.apply() loop = asyncio.new_event_loop() asyncio.set_event_loop(loop) async def runQuery(host, db, sql): print(str(host), str(db)) if str(host) == 'nan': return 'No host' try: conn = await aiopg.connect(database=str(db), user='some_user', password='some_password', host=str(host), timeout = 2) print('xxxx') except (psycopg2.Error, asyncio.TimeoutError, psycopg2.OperationalError) as e: return 'Error creating connection exception' if not conn or conn.closed == 1: return 'Error creating connection' try: cur = await conn.cursor() if not cur: return 'No cursor' await cur.execute(sql) result = await cur.fetchall() await conn.close() return result except psycopg2.Error as e: await conn.close() return 'Error fetching' async def main(): tasks = [] for item in csvFile: tasks.extend([runQuery(item['host'], it, SQL) for it in item['dbs']]) results = [] i = 90 # to start with host+db number 90 d = 1 # how many to run at once while i &lt;= len(tasks): r = await asyncio.gather(*tasks[i:(i+d)]) i = i+d print(i) results.extend(r) return results results = loop.run_until_complete(main()) print(results) loop.close() </code></pre> <p>I have a very weird problem, where everything works fine except with a few combinations of host+database. There are 155 combinations in the list, and it fails at number 90 and then at 120, both of which look completely correct. The problem is that I can't catch this error, since no exception is thrown inside the function <code>runQuery</code> at the line <code>conn = await aiopg.connect...</code>, even though I know it happens during this call (and I don't want to abort the whole process, just this connection). Any idea what I can do to catch the exception and ignore it only for these single combinations of host and database? Or deal with it some other way?</p> <p>I am getting the error:</p> <pre><code>OSError: [WinError 10038] An operation was attempted on something that is not a socket </code></pre> <p>Full traceback:</p> <pre><code>--------------------------------------------------------------------------- OSError Traceback (most recent call last) Cell In[9], line 70 68 #results = await asyncio.gather(*tasks) 69 return results ---&gt; 70 results = loop.run_until_complete(main()) 71 print(results) 73 loop.close() File ~\AppData\Local\Programs\Python\Python312\Lib\site-packages\nest_asyncio.py:92, in _patch_loop.&lt;locals&gt;.run_until_complete(self, future) 90 f._log_destroy_pending = False 91 while not f.done(): ---&gt; 92 self._run_once() 93 if self._stopping: 94 break File ~\AppData\Local\Programs\Python\Python312\Lib\site-packages\nest_asyncio.py:115, in _patch_loop.&lt;locals&gt;._run_once(self) 108 heappop(scheduled) 110 timeout = ( 111 0 if ready or self._stopping 112 else min(max( 113 scheduled[0]._when - self.time(), 0), 86400) if scheduled 114 else None) --&gt; 115 event_list = self._selector.select(timeout) 116 self._process_events(event_list) 118 end_time = self.time() + self._clock_resolution File ~\AppData\Local\Programs\Python\Python312\Lib\selectors.py:323, in SelectSelector.select(self, timeout) 321 ready = [] 322 try: --&gt; 323 r, w, _ = self._select(self._readers, self._writers, [], timeout) 324 except InterruptedError: 325 return ready File ~\AppData\Local\Programs\Python\Python312\Lib\selectors.py:314, in SelectSelector._select(self, r, w, _, timeout) 313 def _select(self, r, w, _, timeout=None): --&gt; 314 r, w, x = select.select(r, w, w, timeout) 315 return r, w + x, [] OSError: [WinError 10038] An operation was attempted on something that is not a socket </code></pre>
<python><postgresql><python-asyncio><aiopg>
2025-04-28 16:36:26
0
371
Lidbey
79,596,720
13,055,818
pkgs.dev.azure.com pip install SSL Error in WSL and Docker
<p>This problem has been addressed countless times, and yet I do not think there is a magic solution, at least not one that worked for me.</p> <p>I was perfectly able to publish some private Python packages to Azure Artifacts so I can use them to build a bigger project, but no, Azure decided that I won't be allowed to work in WSL or Docker.</p> <p>On Windows, running <code>pip install --index-url https://pkgs.dev.azure.com/MY_ORG... package-name</code> works absolutely perfectly. But as soon as I switch to WSL or Docker and follow the official and unofficial docs (updating pip, installing <a href="https://pypi.org/project/artifacts-keyring/" rel="nofollow noreferrer">artifacts-keyring</a>, etc.), I stumble upon the same wonderful error:</p> <pre><code>... ERROR: Could not install packages due to an OSError: HTTPSConnectionPool(host='....blob.core.windows.net', port=443): Max retries exceeded with url:... </code></pre> <p>Quickly adding <code>--trusted-host=...blob.core.windows.net</code> works sometimes, but obviously the URL changes every few seconds, so it's not a permanent fix.</p>
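<p>For what it's worth, the workaround I'm currently testing, on the assumption that a TLS-inspecting corporate proxy is re-signing the connection: trusting the corporate root CA inside WSL/Docker instead of whitelisting hosts (Debian/Ubuntu commands; file paths are illustrative):</p> <pre><code># Install the corporate root certificate into the system trust store
sudo cp corp-root-ca.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates

# Point pip at the system CA bundle
pip config set global.cert /etc/ssl/certs/ca-certificates.crt
</code></pre>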
<python><azure-devops><pip><azure-artifacts>
2025-04-28 14:45:58
0
519
Maxime Debarbat
79,596,690
11,720,193
Identify existence of tables in BigQuery
<p>I have a text file with BigQuery table names. My requirement is to iterate through the list and search every dataset available in every Project environment (i.e. Project 'my-project-dev/test/prod') to check for the table's existence in each Project/DataSet, and output the results in a text file in the format shown below.</p> <p>Input File:</p> <pre><code>TableA TableB TableC </code></pre> <p>Output:</p> <pre><code>tablename | exists_in_prod | exists_in_test | exists_in_dev | is_stg_table | action TableA | Y | N | Y | N | DELETE TableB | N | N | Y | N | DROP TableC | Y | Y | N | Y | DELETE </code></pre> <p>A few considerations:</p> <ul> <li>There are 3 existing environments (dev/test/prod) that need to be searched, and the envs are determined by the Project Names - (E.g. 'my-project-dev', 'my-project-test', 'my-project-prod').</li> <li>The dataset names are not provided in the input, hence all datasets in any specific env need to be searched to determine the table's existence.</li> <li>A dataset is identified as &quot;staging&quot; if the dataset-name ends with '_stg', in which case the tablename will also appear as 'tablename_stg'</li> </ul> <p>Logic for &quot;action&quot; column:</p> <ul> <li>If table found in Prod, then &quot;action&quot; = 'DELETE' irrespective of whether also found in dev and/or test</li> <li>If table found in Dev and/or Test but not in Prod, then &quot;action&quot; = 'DROP'</li> <li>If table found in any '_stg' dataset then always &quot;action&quot; = 'DELETE', irrespective of whether found in Prod/dev/test</li> </ul> <p>Can someone please help write the logic in Python or using a BQ CLI script (whichever is convenient)?</p> <p>Thanks</p>
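<p>A minimal sketch of the discovery part as I understand the client library: enumerating every dataset and table per project with <code>google-cloud-bigquery</code> (env-to-project mapping and output formatting left out; untested):</p> <pre><code>from google.cloud import bigquery

def tables_by_name(project_id):
    &quot;&quot;&quot;Map table name -&gt; list of datasets containing it, for one project.&quot;&quot;&quot;
    client = bigquery.Client(project=project_id)
    found = {}
    for ds in client.list_datasets():
        for t in client.list_tables(ds.dataset_id):
            found.setdefault(t.table_id, []).append(ds.dataset_id)
    return found
</code></pre>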
<python><google-bigquery><pybigquery>
2025-04-28 14:26:39
0
895
marie20
79,596,647
1,450,343
'Incompatible types in assignment' when assigning from multiple inheritance in MyPy
<p>Here is a sample:</p> <pre><code>class A: pass class B: pass class X(A, B): pass class Y(A, B): pass temp = True z: B = Y() if temp else X() # MyPy error </code></pre> <p>The error is <em>Incompatible types in assignment (expression has type &quot;A&quot;, variable has type &quot;B&quot;)</em>.</p> <p>I could write my method like this:</p> <pre><code>def func(temp: bool): x: B = X() y: B = Y() w: B = x if temp else y </code></pre> <p>But is there a better solution?</p>
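<p>One workaround I'm aware of is the statement form, where each branch's assignment is checked against the declared type instead of mypy having to join the two branch types into a common ancestor. A sketch:</p> <pre><code>z: B
if temp:
    z = Y()  # checked against the declared type B
else:
    z = X()  # likewise
</code></pre>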
<python><python-typing><mypy>
2025-04-28 14:08:15
1
816
ModdyFire
79,596,631
29,295,031
Convert month abbreviation to full name
<p>I have this function, which converts an English month abbreviation to a French month name:</p> <pre><code>def changeMonth(month): global CurrentMonth match month: case &quot;Jan&quot;: return &quot;Janvier&quot; case &quot;Feb&quot;: return &quot;Février&quot; case &quot;Mar&quot;: return &quot;Mars&quot; case &quot;Apr&quot;: return &quot;Avril&quot; case &quot;May&quot;: return &quot;Mai&quot; case &quot;Jun&quot;: return &quot;Juin&quot; case &quot;Jul&quot;: return &quot;Juillet&quot; case &quot;Aug&quot;: return &quot;Août&quot; case &quot;Sep&quot;: return &quot;Septembre&quot; case &quot;Oct&quot;: return &quot;Octobre&quot; case &quot;Nov&quot;: return &quot;Novembre&quot; case &quot;Dec&quot;: return &quot;Décembre&quot; # If an exact match is not confirmed, this last case will be used if provided case _: return &quot;&quot; </code></pre> <p>and I have a pandas column <code>df[&quot;month&quot;] = df['ic_graph']['month'].tolist()</code>:</p> <p><a href="https://i.sstatic.net/pqhuUmfg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pqhuUmfg.png" alt="enter image description here" /></a></p> <p>Now, what I'm looking for is to pass the <code>df[&quot;month&quot;]</code> column through the <code>changeMonth</code> function so that <code>df[&quot;month&quot;]</code> displays French month names.</p> <p>By the way, I do not want to use:</p> <pre><code>&gt;&gt;&gt; import locale &gt;&gt;&gt; locale.setlocale(locale.LC_ALL, 'fr_FR') </code></pre>
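<p>What I imagine should work is mapping the function over the column element-wise, along these lines (untested):</p> <pre><code>df[&quot;month&quot;] = df[&quot;month&quot;].map(changeMonth)
</code></pre>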
<python><pandas>
2025-04-28 14:00:43
3
401
user29295031
79,596,546
1,490,418
How can I properly test swagger_auto_schema for methods, request_body, and responses in drf-yasg with pytest?
<p>I’m working on testing a Django REST Framework (DRF) CartViewSet using pytest, and I need to verify the swagger_auto_schema properties like the HTTP method, request body, and responses for different actions (e.g., add, remove, clear).</p> <p>I have the following code in my CartViewSet:</p> <pre><code>class CartViewSet(GenericViewSet, RetrieveModelMixin, ListModelMixin): # Other viewset code... @swagger_auto_schema( method=&quot;post&quot;, request_body=AddToCartSerializer, responses={ 201: openapi.Response(description=&quot;Item added successfully.&quot;), 400: openapi.Response(description=&quot;Invalid input data&quot;), }, ) @action(detail=False, methods=[&quot;post&quot;], url_path=&quot;add&quot;) def add(self, request): # Logic for adding an item to the cart pass </code></pre> <p>Now, I want to write a pytest unit test to check the following for the add method:</p> <ol> <li><strong>HTTP Method</strong>: Ensure the <code>swagger_auto_schema</code> method is <code>POST</code>.</li> <li><strong>Request Body</strong>: Ensure the correct serializer (<code>AddToCartSerializer</code>) is set for the request body.</li> <li><strong>Responses</strong>: Verify that the response status codes (<code>201</code> and <code>400</code>) and their descriptions are properly set.</li> </ol> <p>Could someone guide me on how to properly test the swagger_auto_schema properties for method, request body, and responses in pytest?</p> <p>Any help or insights would be greatly appreciated!</p>
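<p>The direction I've been exploring: drf-yasg appears to attach the overrides to the decorated method as a <code>_swagger_auto_schema</code> attribute (keyed by HTTP method when <code>method=</code> is passed). That is an internal detail, so treat this sketch as an assumption rather than a supported API:</p> <pre><code>def test_add_swagger_schema():
    overrides = CartViewSet.add._swagger_auto_schema[&quot;post&quot;]  # internal attribute; may change between versions
    assert overrides[&quot;request_body&quot;] is AddToCartSerializer
    assert set(overrides[&quot;responses&quot;]) == {201, 400}
    assert overrides[&quot;responses&quot;][201].description == &quot;Item added successfully.&quot;
</code></pre>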
<python><django><django-rest-framework><drf-yasg>
2025-04-28 13:07:59
2
770
Shervin Gharib
79,596,508
1,980,208
Why is day_size set to 32 in temporal embedding code?
<p>I am trying to understand the code for temporal embedding inside autoformer implementation using pytorch.</p> <p><a href="https://github.com/thuml/Autoformer/blob/main/layers/Embed.py" rel="nofollow noreferrer">https://github.com/thuml/Autoformer/blob/main/layers/Embed.py</a></p> <pre><code>class TemporalEmbedding(nn.Module): def __init__(self, d_model, embed_type='fixed', freq='h'): super(TemporalEmbedding, self).__init__() minute_size = 4 hour_size = 24 weekday_size = 7 day_size = 32 month_size = 13 Embed = FixedEmbedding if embed_type == 'fixed' else nn.Embedding if freq == 't': self.minute_embed = Embed(minute_size, d_model) self.hour_embed = Embed(hour_size, d_model) self.weekday_embed = Embed(weekday_size, d_model) self.day_embed = Embed(day_size, d_model) self.month_embed = Embed(month_size, d_model) def forward(self, x): x = x.long() minute_x = self.minute_embed(x[:, :, 4]) if hasattr(self, 'minute_embed') else 0. hour_x = self.hour_embed(x[:, :, 3]) weekday_x = self.weekday_embed(x[:, :, 2]) day_x = self.day_embed(x[:, :, 1]) month_x = self.month_embed(x[:, :, 0]) return hour_x + weekday_x + day_x + month_x + minute_x </code></pre> <p>Why is <code>day_size</code> set to 32 and <code>month_size</code> to 13?</p> <p>I know I should post this under the discussion tab of the above repository link but I believe it has not been active since 2021.</p> <p>I tried to research it but I am getting more confused, because multiple repos use the same values for minutes and days and month.</p> <p><a href="https://github.com/AI4Science-WestlakeU/beno/blob/main/transformer.py" rel="nofollow noreferrer">https://github.com/AI4Science-WestlakeU/beno/blob/main/transformer.py</a></p>
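<p>My working hypothesis: <code>nn.Embedding(num_embeddings, d)</code> only requires every index to be strictly less than <code>num_embeddings</code>. Day-of-month values run 1..31 and month values 1..12, so sizing the tables at 32 and 13 keeps the 1-indexed values valid (slot 0 is simply unused). A quick check of that boundary:</p> <pre><code>import torch
import torch.nn as nn

emb = nn.Embedding(32, 8)
print(emb(torch.tensor([31])).shape)  # torch.Size([1, 8]) because index 31 is valid
# nn.Embedding(31, 8) indexed with 31 would raise an IndexError instead
</code></pre>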
<python><pytorch><transformer-model>
2025-04-28 12:45:59
0
439
prem
79,596,493
8,721,169
Generate RTDB key without creating node
<p>I'm trying to create a Python function that generates a valid key in my Firebase Realtime Database (using the same logic as other keys, ensuring no collision and chronological order).</p> <p>Something like this:</p> <pre class="lang-py prettyprint-override"><code>def get_new_key() -&gt; str: return db.reference(app=firebase_app).push().key print(get_new_key()) # output: '-OOvws9uQDq9Ozacldjr' </code></pre> <p>However, doing this actually creates the node with an empty string value, as specified in <a href="https://firebase.google.com/docs/reference/admin/python/firebase_admin.db#reference" rel="nofollow noreferrer">the doc</a>. <strong>Is there any way to grab the key without adding anything to the database?</strong></p> <p>My only thought right now would be to reimplement <a href="https://gist.github.com/mikelehen/3596a30bd69384624c11" rel="nofollow noreferrer">their key generation algorithm</a> myself, but this looks like overkill and is not future-proof if the algorithm changes.</p>
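<p>The only workaround I've come up with so far is to push and immediately delete the placeholder, which does produce a key but briefly writes to the database (so listeners would still see the events). A sketch:</p> <pre class="lang-py prettyprint-override"><code>def get_new_key() -&gt; str:
    ref = db.reference(app=firebase_app).push()
    ref.delete()  # remove the empty-string placeholder right away
    return ref.key
</code></pre>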
<python><firebase><firebase-realtime-database>
2025-04-28 12:36:43
1
447
XGeffrier
79,596,399
2,276,054
How to log/display app name, host and port upon startup
<p>In my simple Flask app, I've created the following <code>.flaskenv</code> file:</p> <pre><code>FLASK_APP=simpleflask.app FLASK_RUN_HOST=127.0.0.1 FLASK_RUN_PORT=60210 </code></pre> <p>Now, I would like my app to log the following message upon startup:</p> <pre><code>Running simpleflask.app on http://127.0.0.1:60210 ... </code></pre> <p>The message should be logged just once, not once per request.</p> <p>Preferably, I would like to avoid parsing <code>.flaskenv</code> on my own, but rather use some internal Flask objects and obtain this information dynamically.</p> <p>Any idea how to achieve this?</p>
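<p>A sketch of the closest thing I've found to &quot;internal&quot;: the Flask CLI loads <code>.flaskenv</code> into the process environment via python-dotenv, so the values are readable from <code>os.environ</code> at import time without parsing the file myself (this assumes the app is started with <code>flask run</code>):</p> <pre><code>import os

host = os.environ.get(&quot;FLASK_RUN_HOST&quot;, &quot;127.0.0.1&quot;)
port = os.environ.get(&quot;FLASK_RUN_PORT&quot;, &quot;5000&quot;)
app.logger.info(&quot;Running %s on http://%s:%s ...&quot;, os.environ.get(&quot;FLASK_APP&quot;), host, port)
</code></pre>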
<python><flask>
2025-04-28 11:40:22
2
681
Leszek Pachura
79,596,231
7,959,614
Create all possible outcomes from numpy matrices that show disjoints and subsets
<p>Let's assume there is an event and within this event there are multiple outcomes that can co-exist. An example would be a tennis game where A plays against B and B serves. A few outcomes are possible:</p> <ul> <li>A wins</li> <li>B wins</li> <li>A wins with 60-0</li> <li>A wins with 60-15</li> <li>B scores an ace</li> </ul> <p>The following relationships are evident:</p> <p><code>A wins with 60-0</code> and <code>A wins with 60-15</code> are both subsets of <code>A wins</code>. <code>A wins</code> and <code>B wins</code> are disjoint. The same applies to <code>A wins with 60-0</code> and <code>A wins with 60-15</code>, and to <code>A wins with 60-0</code> and <code>B scores an ace</code>.</p> <p>I create a matrix that shows whether the <em>m</em>-th column and <em>n</em>-th row are disjoint or not. Basically, this matrix shows which events are compatible with each other.</p> <pre><code>import numpy as np D = np.array([ [0, 1, 0, 0, 0], [1, 0, 1, 1, 0], [0, 1, 0, 1, 1], [0, 1, 0, 0, 0], [0, 0, 1, 0, 0] ]) </code></pre> <p>So, the second row represents the outcome that <code>B</code> wins. In this case all other outcomes except <code>B scores an ace</code> are impossible, and are therefore marked 1. The last row represents <code>B scores an ace</code>, which is possible in every outcome except <code>A wins with 60-0</code>.</p> <p>In addition, I have a matrix, <code>S</code>, that implies which events are subsets of each other.</p> <pre><code>S = np.array([ [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [1, 0, 0, 0, 0], [1, 0, 0, 0, 0], [0, 0, 0, 0, 0], ]) </code></pre> <p>Both <code>A wins with 60-0</code> and <code>A wins with 60-15</code> are subsets of <code>A wins</code>.</p> <p>I want to create a matrix with all the possible outcomes. In this particular case the matrix would look as follows:</p> <pre><code>X = np.array([ [1, 0, 0, 0, 0], # A wins with 60-30, B scores no ace [1, 0, 1, 0, 0], # A wins with 60-0 [1, 0, 0, 1, 0], # A wins with 60-15, B scores no ace [1, 0, 0, 1, 1], # A wins with 60-15, B scores ace [1, 0, 0, 0, 1], # A wins with 60-30, B scores ace [0, 1, 0, 0, 0], # B wins with 15-60, B scores no ace [0, 1, 0, 0, 1] # B wins with 0-60, B scores ace ]) </code></pre> <p>There are a lot more possible outcomes within a tennis match, so I would like to create a function where the variables <code>D</code> and <code>S</code> would be the input and <code>X</code> would be the output.</p> <p>My attempt so far looks as follows:</p> <pre><code>import numpy as np from itertools import product D = np.array([ [0, 1, 0, 0, 0], [1, 0, 1, 1, 0], [0, 1, 0, 1, 1], [0, 1, 0, 0, 0], [0, 0, 1, 0, 0] ]) S = np.array([ [0, 0, 1, 1, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], ]) def combinations(n: int, D: np.ndarray, S: np.ndarray): combs = list(product([1, 0], repeat=n)) def disjoint(comb): for (i, j) in zip(*np.nonzero(D)): if comb[i] == 1 and comb[j] == 1: return True def subset(comb): for (i, j) in zip(*np.nonzero(S)): if np.sign(comb[i]) != np.sign(comb[j]): return True combs = [comb for comb in combs if not disjoint(comb)] # combs = [comb for comb in combs if not subset(comb)] return np.array(combs) n = 5 c = combinations(n, D, S) print(c) </code></pre> <p>The output is partially correct.</p> <pre><code>[[1 0 1 0 0] [1 0 0 1 1] [1 0 0 1 0] [1 0 0 0 1] [1 0 0 0 0] [0 1 0 0 1] [0 1 0 0 0] [0 0 1 0 0] [0 0 0 1 1] [0 0 0 1 0] [0 0 0 0 1] [0 0 0 0 0]] </code></pre> <p>How can I write the subset-filter in such a way that the wrong combinations are excluded?</p>
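<p>For reference, my current guess at the subset filter: if <code>S[i, j] == 1</code> means outcome <em>i</em> is a subset of outcome <em>j</em>, the constraint should be the implication &quot;if <em>i</em> happens then <em>j</em> happens&quot;, not a sign comparison. I'm not yet sure this alone reproduces <code>X</code>, since the orientation of <code>S</code> differs between my two snippets above:</p> <pre><code>def subset(comb):
    # violated when the subset occurs without its superset
    for (i, j) in zip(*np.nonzero(S)):
        if comb[i] == 1 and comb[j] == 0:
            return True
</code></pre>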
<python><numpy><matrix><python-itertools>
2025-04-28 10:17:26
1
406
HJA24
79,596,178
11,720,193
Checking for Table Existence in BigQuery using Python or BQ CLI (shell-script)
<p>I have a simple text file containing a list of BigQuery tables (TableA, TableB, etc.), for each of which the following is required:</p> <ul> <li>Search the table in all datasets present in all envs (projects)</li> <li>If a table is present in Prod (determined by the project-name) in any dataset, and is also present (in any dataset) in Dev and Test, then truncate/delete its contents in the Dev and Test env - Do not DROP it</li> <li>If the table is not present in Prod, but present in Dev and Test, then DROP the table from the Dev and Test env</li> <li>However, if the table belongs to any staging dataset (identified as '*_stg') irrespective of the env, then just delete/truncate it in Dev and Test - do not DROP it</li> </ul> <p>Assume the following examples-<br /> Project Names: 'my-project-dev', 'my-project-test', 'my-project-prod'<br /> Dataset Names: 'my_dataset_hist', 'my_dataset_curr', 'my_dataset_stg'</p> <p>Please note that the datasets the tables belong to are not specified in the input list and hence the tables need to be searched in ALL datasets (for any specific project env)<br /> Currently the above is performed manually in BigQuery; however, I am trying to automate it in Python or in a shell script using the bq CLI.</p> <p>Can someone please help write the code in Python or the BQ CLI?</p> <p>Thanks.</p>
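<p>A sketch of the existence check with the bq CLI, as far as I understand it: <code>bq show</code> exits non-zero when the table doesn't exist, so looping over datasets should work (the field names for the JSON listing are my best guess; untested):</p> <pre><code>#!/bin/bash
project=&quot;my-project-dev&quot;
table=&quot;TableA&quot;

for ds in $(bq ls --project_id=&quot;$project&quot; --format=json | jq -r '.[].datasetReference.datasetId'); do
  if bq show --project_id=&quot;$project&quot; &quot;${ds}.${table}&quot; &gt;/dev/null 2&gt;&amp;1; then
    echo &quot;${table} exists in ${project}:${ds}&quot;
  fi
done
</code></pre>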
<python><google-bigquery>
2025-04-28 09:48:11
0
895
marie20
79,596,162
8,020,900
Dash loading component: how to have spinner appear in the correct place on initial load
<p>On initial load, the spinner is very high up on the page (I guess because no figure has been created yet). On subsequent loads (whenever a new option is selected from the dropdown), the spinner is in the correct place where the figure should be displayed. Any idea how I can make it also be in the correct place on initial load?</p> <pre><code>import time import plotly.express as px import dash_mantine_components as dmc import dash from dash import dcc, callback, Output, Input dash.register_page(module=__name__) layout = dmc.MantineProvider( children=[ dmc.Select( id=&quot;dropdown&quot;, data=[&quot;bar&quot;, &quot;scatter&quot;, &quot;none&quot;], value=&quot;bar&quot;, ), dcc.Loading( type=&quot;graph&quot;, children=dmc.Box(id=&quot;my_figure&quot;) ) ] ) @callback( Output(&quot;my_figure&quot;, &quot;children&quot;), Input(&quot;dropdown&quot;, &quot;value&quot;) ) def show_fig(value): if value == &quot;bar&quot;: time.sleep(5) return dcc.Graph(figure=px.bar()) elif value == &quot;scatter&quot;: time.sleep(5) return dcc.Graph(figure=px.scatter()) else: return [] </code></pre>
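<p>One thing I'm experimenting with: reserving vertical space for the placeholder so the overlay spinner has a correctly sized target to centre in before the first figure exists (the 450px is just a guess at a typical figure height):</p> <pre><code>dcc.Loading(
    type=&quot;graph&quot;,
    children=dmc.Box(id=&quot;my_figure&quot;, style={&quot;minHeight&quot;: &quot;450px&quot;}),
)
</code></pre>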
<python><plotly-dash><mantine>
2025-04-28 09:37:02
1
3,539
Free Palestine
79,596,151
8,020,900
How to add text to Dash Loading (dcc.Loading) component
<p>I'd like to use one of the built-in spinners in <code>dcc.Loading</code> but add some text underneath the spinner/loader while it's active.</p> <pre><code>import time import plotly.express as px import dash_mantine_components as dmc import dash from dash import dcc, callback, Output, Input dash.register_page(module=__name__) layout = dmc.MantineProvider( children=[ dmc.Select( id=&quot;dropdown&quot;, data=[&quot;bar&quot;, &quot;scatter&quot;, &quot;none&quot;], value=&quot;bar&quot;, ), dcc.Loading( type=&quot;graph&quot;, children=dmc.Box(id=&quot;my_figure&quot;) ) ] ) @callback( Output(&quot;my_figure&quot;, &quot;children&quot;), Input(&quot;dropdown&quot;, &quot;value&quot;) ) def show_fig(value): if value == &quot;bar&quot;: time.sleep(5) return dcc.Graph(figure=px.bar()) elif value == &quot;scatter&quot;: time.sleep(5) return dcc.Graph(figure=px.scatter()) else: return [] </code></pre> <p>This code will only show the &quot;graph&quot; spinner without any text. Can I add text to the built-in spinner without using the <code>custom_spinner</code> argument? Or do I have to recreate the built-in spinner, add text to it, and then pass it to <code>custom_spinner</code>?</p>
<python><plotly-dash><mantine>
2025-04-28 09:29:59
1
3,539
Free Palestine