QuestionId
int64 74.8M
79.8M
| UserId
int64 56
29.4M
| QuestionTitle
stringlengths 15
150
| QuestionBody
stringlengths 40
40.3k
| Tags
stringlengths 8
101
| CreationDate
stringdate 2022-12-10 09:42:47
2025-11-01 19:08:18
| AnswerCount
int64 0
44
| UserExpertiseLevel
int64 301
888k
| UserDisplayName
stringlengths 3
30
⌀ |
|---|---|---|---|---|---|---|---|---|
79,554,831
| 4,159,193
|
Django Model: unexpected keyword arguments in constructor
|
<p>I have a Django Model Kunde</p>
<pre><code>from django.db import models
class Kunde(models.Model):
Kundennummer = models.IntegerField(),
Vorname = models.CharField(max_length=200),
Nachname = models.CharField(max_length=200)
</code></pre>
<p>I open the Django shell with the command <code>python manage.py shell</code></p>
<p>I do</p>
<pre><code>from kundenliste.models import Kunde;
</code></pre>
<p>and then</p>
<pre><code>Kunde.objects.all()
</code></pre>
<p>This gives me</p>
<pre><code><QuerySet []>
</code></pre>
<p>Now I would like to insert a new customer</p>
<p>But</p>
<pre><code>k1 = Kunde(Kundennummer=1,Vorname="Florian",Nachname="Ingerl");
</code></pre>
<p>gives me the error</p>
<pre><code>Traceback (most recent call last):
File "<console>", line 1, in <module>
File "C:\Users\imelf\Documents\NachhilfeInfoUni\MaxPython\env_site\Lib\site-packages\django\db\models\base.py", line 569, in __init__
raise TypeError(
TypeError: Kunde() got unexpected keyword arguments: 'Kundennummer', 'Vorname'
</code></pre>
<p>What do I do wrong?</p>
|
<python><django><django-models>
|
2025-04-04 08:26:43
| 2
| 546
|
flori10
|
79,554,808
| 9,525,238
|
PySide6 QImage conversion to PyQtGraph ndarray
|
<p>I cannot generate a QImage with some text on it, keep it in memory, and then successfully display it on a pyqtgraph.ImageItem which needs it as an np.ndarray</p>
<pre class="lang-py prettyprint-override"><code>import sys
import numpy as np
import pyqtgraph as pg
from PySide6.QtWidgets import QApplication, QMainWindow
from PySide6.QtGui import QImage, QPainter, QPen, QFont
from PySide6.QtCore import QSize, QRectF, Qt
import qimage2ndarray
class ImageItemExample(QMainWindow):
def __init__(self):
super().__init__()
self.setWindowTitle('PyQtGraph ImageItem Example')
# Create a GraphicsLayoutWidget
self.graphics_layout = pg.GraphicsLayoutWidget()
self.setCentralWidget(self.graphics_layout)
# Add a PlotItem
self.plot_item = self.graphics_layout.addPlot()
# Create an ImageItem
self.image_item = pg.ImageItem()
self.plot_item.addItem(self.image_item)
# Create a QImage
image = QImage(QSize(400, 300), QImage.Format.Format_RGB32)
painter = QPainter(image)
painter.fillRect(QRectF(0, 0, 400, 300), Qt.GlobalColor.black)
painter.setPen(QPen(Qt.GlobalColor.white))
painter.setFont(QFont("Courier", 30, QFont.Weight.Bold))
painter.drawText(QRectF(0, 0, 400, 300), Qt.AlignmentFlag.AlignCenter, "Missing Frame!")
painter.end()
image.save("example_image.png") # < picture 1
# Convert QImage to NumPy array
image_data = qimage2ndarray.rgb_view(image) # < picture 2
# image_data = qimage2ndarray.raw_view(image) # < picture 3
# image_data = qimage2ndarray.recarray_view(image) # doesn't work at all
# Set the image data
self.image_item.setImage(image_data)
if __name__ == '__main__':
app = QApplication(sys.argv)
window = ImageItemExample()
window.show()
app.exec()
</code></pre>
<p>The picture that gets generated is this:</p>
<p><a href="https://i.sstatic.net/Um7HulwE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Um7HulwE.png" alt="enter image description here" /></a></p>
<p>But the pictures that get generated by the qimage2ndarray look like this:</p>
<p><a href="https://i.sstatic.net/C1kM9Jrk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/C1kM9Jrk.png" alt="enter image description here" /></a></p>
<p>or this:</p>
<p><a href="https://i.sstatic.net/1KEhO5u3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1KEhO5u3.png" alt="enter image description here" /></a></p>
<p>or plainly don't work, as in the case of <em>recarray_view</em></p>
<p>What am I missing? copilot is useless</p>
<hr />
<p>EDIT: As the comments suggested, the result is inconclusive (looks like a memory issue?). In the end I found a different solution without using qimage2nndarray:</p>
<pre class="lang-py prettyprint-override"><code>def createImage(text, width=400, height=300, asNdArray=False) -> [QImage, np.ndarray]:
image = QImage(QSize(width, height), QImage.Format.Format_ARGB32)
painter = QPainter(image)
painter.fillRect(QRectF(0, 0, width, height), Qt.GlobalColor.black)
painter.setPen(QPen(Qt.GlobalColor.white))
painter.setFont(QFont("Courier", 30, QFont.Weight.Bold))
painter.drawText(QRectF(0, 0, width, height), Qt.AlignmentFlag.AlignCenter, text)
painter.end()
if asNdArray:
ptr = image.constBits()
image = np.array(ptr).reshape(image.height(), image.width(), 4)
return image
</code></pre>
|
<python><numpy-ndarray><pyside6><qimage>
|
2025-04-04 08:13:30
| 0
| 413
|
Andrei M.
|
79,554,765
| 110,963
|
Python logging when forking processes
|
<p>I implemented a custom <code>logging.StreamHandler</code> that stores logs on S3. It works fine in a single process, but I cannot handle scenarios where processes are forked. I'm on Linux and only need to support that platform. The general approach is this:</p>
<pre><code>class S3Handler(logging.StreamHandler):
def __init__(self):
super().__init__(io.StringIO())
# create unique target location and counter
# counter for rotating file names
def emit(self, record):
# emit record
# if buffer is large enough: store data to file
def close(self):
# store buffer to file
</code></pre>
<p>This works like a charm for a single process. I configure the logger in a config file and use <code>logging.fileConfig</code> to set everything up. In my code I use</p>
<pre><code>LOG = logging.getLogger(__name__)
</code></pre>
<p>everywhere on the module level. But my project uses <a href="https://luigi.readthedocs.io/en/stable/" rel="nofollow noreferrer">Luigi</a> as a library, which forks subprocesses and things become a bit strange:</p>
<p>The different processes use the same "copy" of the handler, so they use the same unique target location and will overwrite each others data. This makes sense, but while figuring it out I also saw unexpected closing of handlers, which I could not really pin down.</p>
<p>There is very little information about logging in the context of forking, so I dig into the source code of the logging library. I found <code>_at_fork_reinit()</code> which I overwrote. My implementation of it creates a new unique target location and a fresh <code>io.StringIO</code> buffer.</p>
<p>This approach works fine in the sense that data is not overwritten anymore. I get output for the individual subprocess in individual files. But log messages are now "multiplied": I see logs from the main process showing up in logs from the subprocesses.</p>
<p>I have not idea how this is possible, because no file handles are involved. Everything happens in memory and that should be isolated between processes. Any hint how Python logging is supposed to work and to be used in the context of a fork would be highly appreciated.</p>
|
<python><logging><fork><luigi>
|
2025-04-04 07:48:20
| 1
| 15,684
|
Achim
|
79,554,450
| 9,597,296
|
How to setup supabase sever client properly
|
<p>In my backend(FastAPI) I'm using supabase to send OTPs, update tables. for this I have this server client.</p>
<pre class="lang-py prettyprint-override"><code>from supabase import create_client, Client
from app.core.config import settings
class SupabaseClient:
def __init__(self):
supabase_url = settings.supabase_url
supabase_key = settings.supabase_service_role_key
if not supabase_url or not supabase_key:
raise ValueError("Missing Supabase credentials. Set SUPABASE_URL and SUPABASE_SERVICE_ROLE_KEY")
self.client: Client = create_client(supabase_url, supabase_key)
supabase = SupabaseClient().client
</code></pre>
<p>but when i use this client to insert into tables it gives me permission denied error. even after disabling row level security.</p>
<pre class="lang-py prettyprint-override"><code>class ProjectService:
@staticmethod
async def new_project(authContext: Tuple[str, Optional[User]]) -> dict:
token, user = authContext
print(f"user: {user}")
try:
project_data = {}
new_project = {
"user_id": user.id,
"last_modified": datetime.now(timezone.utc).isoformat(),
"project_data": json.dumps(project_data)
}
response = supabase.table("projects").insert(new_project).execute()
print(f"response: {response}")
return SuccessResponseModel(
status="success",
message="OTP sent to your email. Please check your inbox.",
data={}
).model_dump()
except Exception as e:
print('Error in transcription:', e)
error_message = str(e)
error_code = "TRANSCRIBE_ERROR"
raise HTTPException(
status_code=400,
detail={
"status": "error",
"message": error_message,
"error_code": error_code,
}
)
</code></pre>
<p>what could be the issue. is there any proper way to define the supabase client and use it.</p>
|
<python><fastapi><supabase><supabase-py>
|
2025-04-04 04:14:58
| 0
| 4,581
|
Nipun Ravisara
|
79,554,434
| 11,084,338
|
How to replace nested for loops with apply from dataframe in python
|
<p>I have a dataframe and a list below.</p>
<pre><code>import pandas as pd
my_df = pd.DataFrame({'fruits': ['apple', 'banana', 'cherry', 'durian'],
'check': [False, False, False, False]})
my_list = ['pp', 'ana', 'ra', 'cj', 'up', 'down', 'pri']
>>> my_df
fruits check
0 apple False
1 banana False
2 cherry False
3 durian False
</code></pre>
<p>I can make a result with nested for loops.</p>
<pre><code>for fruit in my_df['fruits']:
for v in my_list:
if v in fruit:
my_df.loc[my_df['fruits']==fruit, 'check'] = True
>>> my_df
fruits check
0 apple True
1 banana True
2 cherry False
3 durian False
</code></pre>
<p>I tried below.</p>
<pre><code>my_df['fruits'].apply(lambda x: True for i in my_list if i in x)
</code></pre>
<p>but, it spat out <code>Type Error: 'generator' object is not cllable</code></p>
<p>I want to remove nested for loops and replace these with apply function.
How can i do this?</p>
|
<python><pandas><dataframe><for-loop>
|
2025-04-04 03:57:33
| 1
| 326
|
GH KIM
|
79,554,331
| 5,924,264
|
Inner join producing duplicative columns
|
<p>Not really well versed with python/sql joins in general, but it looks like this inner join is producing 2 identical columns of <code>my_id, mod_id</code> which have identical values?</p>
<pre><code> df = (
lhs_df[LHS_COLS]
.merge(
rhs_df, left_on="my_id", right_on="mod_id", suffixes=("_lhs", "_rhs"), how="inner"
)
.merge(final_df[COLS], on="my_id", how="left")
.rename(
columns={
"my_id": "id_fixed",
"mod_id": "id_mod",
}
)
)
</code></pre>
<p>I don't understand what the point of this is instead of dropping one of the columns and have downstream consumers that used <code>id_mod</code> use <code>id_fixed</code> instead. Am I misunderstanding how the inner join here works?</p>
|
<python><pandas><inner-join>
|
2025-04-04 02:03:41
| 0
| 2,502
|
roulette01
|
79,554,329
| 5,964,034
|
How to create a Docker file that iterates over files in a folder and executes a particular command (mri_synthstrip)
|
<p>There is this program mri_synthstrip which inputs a brain MRI file and outputs the same brain MRI file excluding the skull. This is a basic initial step for preprocessing brain MRIs for research.</p>
<p>The program is wonderful and the syntax to skull strip a single MRI is simple if you install the program in your computer:</p>
<pre class="lang-bash prettyprint-override"><code>mri_synthstrip -i input.nii.gz -o stripped.nii.gz
</code></pre>
<p>The authors are very nice and they even created a Docker image that you can pull from Dockerhub with docker pull freesurfer/synthstrip</p>
<p>in which case the syntax to skull strip a brain MRI is</p>
<pre class="lang-bash prettyprint-override"><code>docker run -v C:/Users/ch134560/Desktop/MRI_files://data freesurfer/synthstrip -i //data/input.nii.gz -o //data/stripped.nii.gz
</code></pre>
<p>What I am trying to do is to create a Docker image that runs through all the MRI files in a directory and skull strips them with synthstrip. I thought this would be easy.</p>
<p>My multistage Dockerfile looks like this, but it does not work because it does not recognize the command in the loop</p>
<pre class="lang-bash prettyprint-override"><code>freesurfer/synthstrip -i "$file" -o "$file"
</code></pre>
<p>or</p>
<pre class="lang-bash prettyprint-override"><code>mri_synthstrip -i "$file" -o "$file"
</code></pre>
<p>I guess that I do not know how to call a particular command within a unix loop.</p>
<pre class="lang-bash prettyprint-override"><code>#########STAGE 1: CREATE THE MOUNT AND COPY THE MRIs TO BE SKULL-STRIPPED#########
# Use ubuntu as base image
FROM ubuntu AS prepare
# Set working directory
WORKDIR /app
# Copy all MRIs
COPY . .
#########STAGE 2: USE SYNTHSTRIP#########
FROM freesurfer/synthstrip
# Set working directory
WORKDIR /app
# Copy all MRIs
COPY --from=prepare . .
# Create a loop that runs through the MRIs
RUN for file in .; do freesurfer/synthstrip -i "$file" -o "$file"; done
ENTRYPOINT ["ls"]
</code></pre>
<p>Other option I thought about is to create a multistage build in which first I import the synthstrip program and then I loop through the MRI files and call synthstrip, but again I do not know exactly how to call the command to run synthstrip from python.</p>
<p>Dockerfile</p>
<pre class="lang-bash prettyprint-override"><code>#########STAGE 1: CREATE THE SYNTHSTRIP CONTAINER###########
# Use synthstrip AS base image
FROM freesurfer/synthstrip as builder
# Set working directory
WORKDIR /app
#########STAGE 2: CREATE THE ANACONDA CONTAINER###########
# Use miniconda as base image
FROM continuumio/miniconda3
# Set working directory
WORKDIR /app
# Copy the initial image
COPY --from=builder . .
# Create the environment
COPY environment.yml .
RUN conda env create -f environment.yml
# Make run commands use the new environment
SHELL ["conda", "run", "-n", "app", "bin/bash", "-c"]
# Code to run when the container is started
COPY main.py .
ENTRYPOINT ["conda", "run", "--no-capture-output", "-n", "app", "python", "main.py"]
</code></pre>
<p>main.py file</p>
<pre class="lang-py prettyprint-override"><code>import os
import subprocess
for MRI in os.listdir('./file_with_MRIs'):
print(MRI)
proc = subprocess.Popen(['docker run freesurfer/synthstrip -i ', MRI, ' -o', MRI],
stdin = subprocess.PIPE,
stdout = subprocess.PIPE,
stderr = subprocess.PIPE
)
(out, err) = proc.communicate()
print (out)
</code></pre>
<p>Other option is to create a smaller Docker image that loops through the MRI files and just calls the synthstrip Docker image for each of those and collects the output, but I am not sure how to do that.</p>
|
<python><docker>
|
2025-04-04 02:01:25
| 2
| 771
|
Ivan
|
79,554,216
| 268,581
|
Year comparison
|
<h1>Dataframe</h1>
<p>Here's a dataframe which has U.S. Treasury General Account deposits from taxes (month to date).</p>
<pre class="lang-none prettyprint-override"><code>>>> df
record_date transaction_mtd_amt
0 2005-10-03 18777
1 2005-10-04 21586
2 2005-10-05 29910
3 2005-10-06 32291
4 2005-10-07 37696
... ... ...
4892 2025-03-26 373897
4893 2025-03-27 381036
4894 2025-03-28 395097
4895 2025-03-31 429273
4896 2025-04-01 28706
[4897 rows x 2 columns]
</code></pre>
<h1>Chart</h1>
<p>Here's a simple Python program that puts up the chart in streamlit.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import streamlit as st
import plotly.express as px
url = 'https://gist.githubusercontent.com/dharmatech/945da7821d53fac3d7e55192703f26b1/raw/4eb662b5da5898999313a1450670fd169ce6d055/tga-deposits-taxes.csv'
@st.cache_data
def load_data():
return pd.read_csv(url)
df = load_data()
fig = px.line(df, x='record_date', y='transaction_mtd_amt')
st.plotly_chart(fig)
</code></pre>
<p><a href="https://i.sstatic.net/eNnGulvI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eNnGulvI.png" alt="enter image description here" /></a></p>
<h1>Year comparison</h1>
<p>I'd like to compare years and show them side-by-side on the chart.</p>
<p>Here's one approach:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import streamlit as st
import plotly.express as px
url = 'https://gist.githubusercontent.com/dharmatech/945da7821d53fac3d7e55192703f26b1/raw/4eb662b5da5898999313a1450670fd169ce6d055/tga-deposits-taxes.csv'
@st.cache_data
def load_data():
return pd.read_csv(url)
df = load_data()
df['record_date'] = pd.to_datetime(df['record_date'])
tbl = df.copy()
# ----------------------------------------------------------------------
tbl['year'] = tbl['record_date'].dt.year
tbl['record_date_'] = tbl['record_date'].apply(lambda x: x.replace(year=2000))
# ----------------------------------------------------------------------
fig = px.line(tbl, x='record_date_', y='transaction_mtd_amt', color='year')
st.plotly_chart(fig)
</code></pre>
<p>Here's what the dataframe looks like:</p>
<pre class="lang-none prettyprint-override"><code>>>> tbl
record_date transaction_mtd_amt year record_date_
0 2005-10-03 18777 2005 2000-10-03
1 2005-10-04 21586 2005 2000-10-04
2 2005-10-05 29910 2005 2000-10-05
3 2005-10-06 32291 2005 2000-10-06
4 2005-10-07 37696 2005 2000-10-07
... ... ... ... ...
4892 2025-03-26 373897 2025 2000-03-26
4893 2025-03-27 381036 2025 2000-03-27
4894 2025-03-28 395097 2025 2000-03-28
4895 2025-03-31 429273 2025 2000-03-31
4896 2025-04-01 28706 2025 2000-04-01
[4897 rows x 4 columns]
</code></pre>
<p>Here's what the chart looks like:</p>
<p><a href="https://i.sstatic.net/bZ6hWSzU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bZ6hWSzU.png" alt="enter image description here" /></a></p>
<p>Isolated to just 2024 and 2025:</p>
<p><a href="https://i.sstatic.net/pziFzN9f.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pziFzN9f.png" alt="enter image description here" /></a></p>
<h1>Approach</h1>
<p>The two key lines in implementing this approach are these:</p>
<pre class="lang-py prettyprint-override"><code>tbl['year'] = tbl['record_date'].dt.year
tbl['record_date_'] = tbl['record_date'].apply(lambda x: x.replace(year=2000))
</code></pre>
<h1>Questions</h1>
<p>While this seems to work, I'm wondering if this is the idiomatic and recommended way to go about this in pandas.</p>
|
<python><pandas><streamlit>
|
2025-04-04 00:09:43
| 1
| 9,709
|
dharmatech
|
79,554,185
| 1,227,058
|
Python scraper not returning value to Excel
|
<p>I have a Python webscraper that pulls a specific value every 1 second. The target website it AJAXed, so I'm not hitting it with too many requests.</p>
<p>This is the Python code:</p>
<pre><code>import time
import logging
import sys
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
# Suppress all unnecessary logs
logging.basicConfig(level=logging.CRITICAL) # Suppress logs lower than CRITICAL
logging.getLogger("selenium").setLevel(logging.CRITICAL) # Suppress Selenium logs
# Specify the path to your chromedriver
chromedriver_path = 'C:/GoldScraper/chromedriver.exe'
# Set up Chrome options
options = Options()
options.add_argument("--headless") # Run in headless mode
options.add_argument("--no-sandbox")
options.add_argument("--disable-dev-shm-usage")
options.add_argument("--disable-gpu")
options.add_argument("start-maximized")
options.add_argument("--log-level=3") # Suppress logs
# Suppress WebGL and other errors
options.add_argument("--disable-software-rasterizer")
options.add_argument("--disable-extensions")
options.add_argument("--disable-logging")
# Create a Service object
service = Service(chromedriver_path)
driver = webdriver.Chrome(service=service, options=options)
# Navigate to the website
driver.get('https://www.cnbc.com/quotes/XAU=')
# Print waiting message once
print("Waiting for price element...")
try:
# Wait for the gold price element to be visible
gold_price_element = WebDriverWait(driver, 10).until(
EC.visibility_of_element_located((By.CLASS_NAME, "QuoteStrip-lastPrice"))
)
print("Price element found. Fetching prices...")
while True:
# Extract and strip commas, convert to float, and format to 2 decimal places
gold_price = round(float(gold_price_element.text.replace(",", "")), 2) # Remove commas, convert to float, and round to 2 decimal places
print(f"{gold_price:.2f}") # Print with 2 decimal places
sys.stdout.flush() # Ensure output is immediately available to VBA
# Wait for 1 second before next fetch
time.sleep(1)
except KeyboardInterrupt:
print("\nScript stopped by user.")
except Exception as e:
print(f"Error extracting gold price: {e}")
sys.stdout.flush() # Ensure error messages are flushed too
finally:
# Ensure the browser closes
driver.quit()
print("Driver closed.")
</code></pre>
<p>I need to have the VBA macro capture the <code>gold_price</code> variable each second and display the value in a named cell called "Live_AU". This is the VBA code I have come up with:</p>
<pre><code>Sub StartScraping()
Dim objShell As Object
Dim pythonScriptPath As String
Dim pythonExePath As String
Dim price As String
' Define the path to your Python executable and script
pythonExePath = "C:\python313\python.exe" ' Adjust this if needed
pythonScriptPath = "C:\GoldScraper\goldscraper.py" ' Adjust this if needed
' Run the Python script and capture the output
Set objShell = CreateObject("WScript.Shell")
' Capture the output from the Python script (output redirected to text)
Dim output As String
output = objShell.Exec(pythonExePath & " " & pythonScriptPath).StdOut.ReadAll ' Use ReadAll to capture everything
' Debug: Show the price output from Python
MsgBox "Python Output: " & output ' Display the output for debugging
' Check if price is valid and update the Live_AU cell
If IsNumeric(output) Then
Range("Live_AU").Value = output
Else
MsgBox "No valid output from Python. Output: " & output ' Show the captured output if invalid
End If
End Sub
</code></pre>
<p>This returns nothing or an empty string, I can't tell. Can anyone help debug this for me so that the correct value is displayed in the required Excel cell?</p>
|
<python><excel><vba><web-scraping>
|
2025-04-03 23:29:58
| 1
| 1,136
|
Matteo
|
79,554,175
| 16,118,686
|
Can't interact with checkboxes and button
|
<p>In this URL: <a href="http://estatisticas.cetip.com.br/astec/series_v05/paginas/lum_web_v05_template_informacoes_di.asp?str_Modulo=completo&int_Idioma=2&int_Titulo=6&int_NivelBD=2/" rel="nofollow noreferrer">http://estatisticas.cetip.com.br/astec/series_v05/paginas/lum_web_v05_template_informacoes_di.asp?str_Modulo=completo&int_Idioma=2&int_Titulo=6&int_NivelBD=2/</a></p>
<p>I'm trying to click the checkboxes and calculate button, however, I am unable to.</p>
<p>I have tried the following:</p>
<pre><code># checkbox
WebDriverWait(driver,5).until(EC.element_to_be_clickable((By.XPATH,'//input[@name="chk_M1"]'))).click()
# button
driver.find_element_by_xpath('//a[@class="button" and contains(text(),"Calculate")]')
</code></pre>
<p>I can properly select the elements when I search in the elements when inspecting.
I tried waiting, but still can't select.</p>
<p>Checkbox:
<a href="https://i.sstatic.net/oTrQbUJA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oTrQbUJA.png" alt="enter image description here" /></a></p>
<p>Button:
<a href="https://i.sstatic.net/BOKDT6ez.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BOKDT6ez.png" alt="enter image description here" /></a></p>
<p>Error:</p>
<pre><code>selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"xpath","selector":"//a[@class="button" and contains(text(),"Calculate")]"}
</code></pre>
|
<python><selenium-webdriver><selenium-chromedriver>
|
2025-04-03 23:12:19
| 2
| 11,256
|
NightEye
|
79,554,077
| 4,463,825
|
Python Resource Allocation of Devices
|
<p>I am looking for some optimization algorithm suggestion on resource allocation. Preferably in Python.</p>
<p>I have a given number of total resources, and I have to conduct a certain number of Experiments with these resources. Each experiment takes a given set of days, and different number of resources.</p>
<p>Seeking for ways of attacking this optimization algorithms, where the output is an optimized way of assigning the resources to each Experiment, so the total time to do the experiments does not go over a max limit.</p>
<p>If it is three to four, I should have written on a sheet of paper and replicated in a code. But with total Experiments greater than 50, I am struggling to come up with an algorithm. I am Ok, if you just share an article, that you believe would work for this problem.</p>
<p>I am presently using linprog but getting a constraint error, which I am not able to figure out.</p>
<pre><code>import pandas as pd
from scipy.optimize import linprog
# # Define experiment data
# data = {
# "Experiment": ["Experiment A", "Experiment B", "Experiment C", "Experiment D"],
# "Unit A Required": [10, 20, 15, 10],
# "Unit B Required": [5, 10, 10, 5],
# "Days": [5, 3, 7, 2]
# }#
# # Create a DataFrame
# df = pd.DataFrame(data)
df = pd.read_excel('Book3.xlsx')
# Define total available units of each type
total_brazos = 11
total_z2_rack = 3
# Objective function: Minimize the total days
objective = df["Working Days"].tolist()
# Constraints: Allocated units must not exceed available units of Unit A and Unit B
constraints = [
{"type": "eq", "fun": lambda x: sum(x[:len(df)]) - total_brazos}, # Total allocated Unit A
{"type": "eq", "fun": lambda x: sum(x[len(df):]) - total_z2_rack} # Total allocated Unit B
]
# Combine bounds for Unit A and Unit B allocations
bounds_a = [(0, unit) for unit in df["Brazos Req"]]
bounds_b = [(0, unit) for unit in df["Z2 Rack Req"]]
bounds = bounds_a + bounds_b
# Solve the optimization problem
result = linprog(
c=objective, # Coefficients in the objective function
bounds=bounds, # Allocation bounds
constraints = constraints, # Constraints
method='highs' # Use the HiGHS solver
)
# Display the results
if result.success:
allocated_brazos = result.x[:len(df)]
allocated_z2_rack = result.x[len(df):]
df["Brazos Allocated"] = allocated_brazos
df["Z2 Rack Allocated"] = allocated_z2_rack
df["Used Days"] = df["Brazos Allocated"] * df["Working Days"] / df["Brazos Req"]
print("Optimized Allocation:")
print(df)
print(f"Total Days Used: {sum(df['Used Days'])}")
else:
print("Optimization failed!")
</code></pre>
<p><a href="https://limewire.com/d/GHbDl#MARxplvnLH" rel="nofollow noreferrer">Excel used above</a></p>
|
<python><numpy><optimization>
|
2025-04-03 21:48:26
| 0
| 993
|
Jesh Kundem
|
79,554,068
| 202,385
|
argo workflows: using jsonpath in when expression
|
<p>In the below argo workflow specification</p>
<ul>
<li>I am generating a list of booleans from <code>generate-booleans</code> step.</li>
<li>I want to conditionally execute <code>step1</code> if the 0th element of the list is true.</li>
</ul>
<p>I have tried many combinations of the expression in the when but I am unable to get a condition that executes the step1 if 0th element is true. Either the condition is not satisfied or there is an issue with token with a hint of add quotes around "$" or "{{".</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
generateName: sprig-boolean-condition-
spec:
entrypoint: main
serviceAccountName: default-editor
templates:
- name: main
steps:
- - name: generate-booleans
template: boolean-generator
- - name: step1
template: task-a
when: "\"{{=jsonpath(tasks.generate-booleans.outputs.parameters.boolean-list \"$[0]\")}}\" == \"true\""
- name: boolean-generator
script:
image: alpine
command: [python]
source: |
import json
with open('/tmp/output.txt', 'w') as f:
json.dump([True, False], f)
outputs:
parameters:
- name: boolean-list
valueFrom:
path: /tmp/output.txt
- name: task-a
container:
image: alpine
command: [sh, -c, "echo 'Task A executed'"]
</code></pre>
<p>Could you please let me know the correct syntax?
ps: I have tried fromJson, sprig expressions but failed.</p>
|
<python><argo-workflows>
|
2025-04-03 21:44:37
| 0
| 396
|
vijayvammi
|
79,553,855
| 18,108,767
|
Overlaping subplots vertically stacked
|
<p>While reading a paper for my thesis I encountered <a href="https://www.nature.com/articles/s41467-024-45469-8/figures/5" rel="nofollow noreferrer">this graph (b)</a>:
<a href="https://i.sstatic.net/mr6fNyDs.webp" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mr6fNyDs.webp" alt="Image from the paper" /></a></p>
<p>I've tried to recreate the second graph which is the one I would like to use for my results:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from matplotlib.gridspec import GridSpec
years = np.linspace(1300, 2000, 700)
np.random.seed(42)
delta_13C = np.cumsum(np.random.normal(0, 0.1, 700))
delta_13C = delta_13C - np.mean(delta_13C)
delta_18O = np.cumsum(np.random.normal(0, 0.08, 700))
delta_18O = delta_18O - np.mean(delta_18O)
temp_anomaly = np.cumsum(np.random.normal(0, 0.03, 700))
temp_anomaly = temp_anomaly - np.mean(temp_anomaly)
temp_anomaly[-100:] += np.linspace(0, 1.5, 100)
plt.style.use('default')
plt.rcParams['font.size'] = 12
plt.rcParams['axes.linewidth'] = 1.5
plt.rcParams['axes.labelsize'] = 14
fig = plt.figure(figsize=(10, 8))
gs = GridSpec(3, 1, height_ratios=[1, 1, 1], hspace=0.2)
ax1 = fig.add_subplot(gs[0])
ax1.plot(years, delta_13C, color='green', linewidth=1.0)
ax1.set_ylabel('First', color='green', labelpad=10)
ax1.tick_params(axis='y', colors='green')
ax1.set_xlim(1300, 2000)
ax1.set_ylim(-4, 4)
ax1.xaxis.set_visible(False)
ax1.spines['top'].set_visible(False)
ax1.spines['bottom'].set_visible(False)
ax1.spines['right'].set_visible(False)
ax1.spines['left'].set_color('green')
ax2 = fig.add_subplot(gs[1])
ax2.plot(years, delta_18O, color='blue', linewidth=1.0)
ax2.yaxis.tick_right()
ax2.yaxis.set_label_position("right")
ax2.set_ylabel('Second', color='blue', labelpad=10)
ax2.tick_params(axis='y', colors='blue')
ax2.set_xlim(1300, 2000)
ax2.set_ylim(-3, 3)
ax2.xaxis.set_visible(False)
ax2.spines['top'].set_visible(False)
ax2.spines['bottom'].set_visible(False)
ax2.spines['left'].set_visible(False)
ax2.spines['right'].set_color('blue')
ax3 = fig.add_subplot(gs[2])
ax3.plot(years, temp_anomaly, color='gray', linewidth=1.0)
ax3.set_ylabel('Third', color='black', labelpad=10)
ax3.set_xlim(1300, 2000)
ax3.set_ylim(-1.0, 1.5)
ax3.set_xlabel('Year (CE)')
ax3.spines['top'].set_visible(False)
ax3.spines['right'].set_visible(False)
plt.show()
</code></pre>
<p>But the result is a bit different:
<a href="https://i.sstatic.net/yk5WKDq0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yk5WKDq0.png" alt="My graph" /></a></p>
<p>How can I bring the subplots closer together without blocking each other? As you can see in the graphic in the reference paper, the lines of the subplots almost touch each other.</p>
|
<python><matplotlib>
|
2025-04-03 19:39:20
| 1
| 351
|
John
|
79,553,686
| 4,721,937
|
How to flatten a mapping constructed from a tagged scalar using ruamel.yaml
|
<p>My aim is to create a YAML loader that can construct mappings from tagged scalars.
Here is a stripped-down version of the loader which constructs an object containing names from a scalar tagged <code>!fullname</code>.</p>
<pre class="lang-py prettyprint-override"><code>import ruamel.yaml
class MyLoader(ruamel.yaml.YAML):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.constructor.add_constructor("!fullname", self._fullname_constructor)
@staticmethod
def _fullname_constructor(constructor, node):
value = constructor.construct_scalar(node)
first, *middle, last = value.split()
return {
"first_name": first,
"middle_names": middle,
"last_name": last
}
myyaml = MyLoader()
</code></pre>
<p>The loader can successfully substitute objects for tagged scalars i.e.</p>
<pre class="lang-py prettyprint-override"><code>>>> myyaml.load("""
- !fullname Albus Percival Wulfric Brian Dumbledore
- !fullname Severus Snape""")
[
{'first_name': 'Albus', 'middle_names': ['Percival', 'Wulfric', 'Brian'], 'last_name': 'Dumbledore'},
{'first_name': 'Severus', 'middle_names': [], 'last_name': 'Snape'}
]
</code></pre>
<p>However, the construction fails when I try to merge the constructed mapping into an enclosing object</p>
<pre class="lang-py prettyprint-override"><code>>>> yaml.load("""
id: 0
<<: !fullname Albus Percival Wulfric Brian Dumbledore""")
ruamel.yaml.constructor.ConstructorError: while constructing a mapping (...)
expected a mapping or list of mappings for merging, but found scalar
</code></pre>
<p>My understanding is that the type of the node is still a <code>ScalarNode</code>, so the constructor is unable to process it even though it ultimately resolves to a mapping.
How to modify my code, such that <code>!fullname {scalar}</code> can be merged into the object?</p>
|
<python><yaml><ruamel.yaml>
|
2025-04-03 18:06:12
| 1
| 2,965
|
warownia1
|
79,553,578
| 5,722,359
|
How to overcome slow down seen in numpy.dot of larger input sizes during convolution?
|
<p>I noticed a strange phenomenon while benchmarking the performances of my <code>conv2d</code> functions that were created with <code>NumPy</code>. As the input size increases, there could be a input size threshold that would cause a step slow down to the performance of the <code>conv2d</code> function. See this diagram where 1000 reruns per input length was done. Other than the first two methods, the latter fives methods all showed a significant slow after exceeding an input length of ~230 pixels.</p>
<p><a href="https://i.sstatic.net/GjArGZQE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GjArGZQE.png" alt="Runtimes" /></a></p>
<p>After some investigation, I discovered that this phenomenon was unique to my <code>conv2d</code> functions that used the <code>numpy.dot()</code> method: the former 2 <code>conv2d</code> functions used the <code>np.multiply()</code> and <code>np.sum()</code> methods while the latter 5 <code>conv2d</code> functions used the <code>numpy.dot()</code> method.</p>
<p>Questions:</p>
<ol>
<li>Can you explain the reason for this slow down? Is this due to hardware limitation or some NumPy settings?</li>
<li>Is there any ways to circumvent this slow down?</li>
</ol>
<p>Below is a script to demonstrate the issue:</p>
<pre><code>import numpy as np
from timeit import timeit
import matplotlib.pyplot as plt
def conv2d_np_as_strided_2d(inp: np.ndarray, ker: np.ndarray, pad: int, stride: int) -> np.ndarray:
hi, wi = inp.shape
hk, wk = ker.shape
ho = (hi + 2 * pad - hk) // stride + 1
wo = (wi + 2 * pad - wk) // stride + 1
if pad > 0:
inp = np.pad(inp, ((pad, pad), (pad, pad),), mode="constant", constant_values=0.0,)
patches = np.lib.stride_tricks.as_strided(
inp, shape=(ho, wo, hk, wk),
strides=(inp.strides[0] * stride, inp.strides[1] * stride, inp.strides[0], inp.strides[1],),
writeable=False,
)
return np.dot(patches.reshape(ho * wo, ker.size), ker.flatten().T).reshape(ho, wo)
def get_func_average_runtime(rng, func, input_sizes, ksize, pad, stride, num):
runtimes = np.zeros(len(input_sizes), dtype=np.float32)
for n, isize in enumerate(input_sizes):
inp = rng.random((isize, isize)).astype(np.float32)
ker = rng.random((ksize, ksize)).astype(np.float32)
runtimes[n] = timeit(lambda: func(inp, ker, pad, stride), number=num)
return func.__name__, runtimes / num
def benchmark_conv2d():
number = 30
input_sizes = tuple(i for i in range(10, 302, 2))
rng = np.random.default_rng()
func_name, result = get_func_average_runtime(
rng, conv2d_np_as_strided_2d, input_sizes, 3, 1, 1, number,
)
plt.plot(input_sizes, result, label=func_name)
plt.xlabel("Input Size")
plt.ylabel("Average Runtime (seconds)")
plt.title("Average Runtime vs Array Size")
plt.legend()
plt.grid(True)
plt.show()
benchmark_conv2d()
</code></pre>
<p><a href="https://i.sstatic.net/nSlyKb9P.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nSlyKb9P.png" alt="issue" /></a></p>
<p><a href="https://i.sstatic.net/19BFtKu3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/19BFtKu3.png" alt="issue_max_input_size_2000" /></a></p>
<p><a href="https://i.sstatic.net/vDJyZro7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vDJyZro7.png" alt="issue_max_input_size_2000_ver2" /></a></p>
<pre><code>$ uname -srv
Linux 6.11.0-21-generic #21~24.04.1-Ubuntu SMP PREEMPT_DYNAMIC Mon Feb 24 16:52:15 UTC 2
$ uv run python --version
Python 3.12.3
$ uv tree
Resolved 18 packages in 1ms
test v0.1.0
├── matplotlib v3.10.1
│ ├── contourpy v1.3.1
│ │ └── numpy v2.2.4
│ ├── cycler v0.12.1
│ ├── fonttools v4.56.0
│ ├── kiwisolver v1.4.8
│ ├── numpy v2.2.4
│ ├── packaging v24.2
│ ├── pillow v11.2.0
│ ├── pyparsing v3.2.3
│ └── python-dateutil v2.9.0.post0
│ └── six v1.17.0
├── numpy v2.2.4
└── scikit-image v0.25.2
├── imageio v2.37.0
│ ├── numpy v2.2.4
│ └── pillow v11.2.0
├── lazy-loader v0.4
│ └── packaging v24.2
├── networkx v3.4.2
├── numpy v2.2.4
├── packaging v24.2
├── pillow v11.2.0
├── scipy v1.15.2
│ └── numpy v2.2.4
└── tifffile v2025.3.30
└── numpy v2.2.4
$ lscpu | grep name
Model name: Intel(R) Core(TM) i9-7960X CPU @ 2.80GHz
</code></pre>
<p>Output from <code>np.show_config()</code>:</p>
<pre><code>{
"Compilers": {
"c": {
"name": "gcc",
"linker": "ld.bfd",
"version": "10.2.1",
"commands": "cc"
},
"cython": {
"name": "cython",
"linker": "cython",
"version": "3.0.12",
"commands": "cython"
},
"c++": {
"name": "gcc",
"linker": "ld.bfd",
"version": "10.2.1",
"commands": "c++"
}
},
"Machine Information": {
"host": {
"cpu": "x86_64",
"family": "x86_64",
"endian": "little",
"system": "linux"
},
"build": {
"cpu": "x86_64",
"family": "x86_64",
"endian": "little",
"system": "linux"
}
},
"Build Dependencies": {
"blas": {
"name": "scipy-openblas",
"found": true,
"version": "0.3.28",
"detection method": "pkgconfig",
"include directory": "/opt/_internal/cpython-3.12.7/lib/python3.12/site-packages/scipy_openblas64/include",
"lib directory": "/opt/_internal/cpython-3.12.7/lib/python3.12/site-packages/scipy_openblas64/lib",
"openblas configuration": "OpenBLAS 0.3.28 USE64BITINT DYNAMIC_ARCH NO_AFFINITY Haswell MAX_THREADS=64",
"pc file directory": "/project/.openblas"
},
"lapack": {
"name": "scipy-openblas",
"found": true,
"version": "0.3.28",
"detection method": "pkgconfig",
"include directory": "/opt/_internal/cpython-3.12.7/lib/python3.12/site-packages/scipy_openblas64/include",
"lib directory": "/opt/_internal/cpython-3.12.7/lib/python3.12/site-packages/scipy_openblas64/lib",
"openblas configuration": "OpenBLAS 0.3.28 USE64BITINT DYNAMIC_ARCH NO_AFFINITY Haswell MAX_THREADS=64",
"pc file directory": "/project/.openblas"
}
},
"Python Information": {
"path": "/tmp/build-env-p680qjv9/bin/python",
"version": "3.12"
},
"SIMD Extensions": {
"baseline": [
"SSE",
"SSE2",
"SSE3"
],
"found": [
"SSSE3",
"SSE41",
"POPCNT",
"SSE42",
"AVX",
"F16C",
"FMA3",
"AVX2",
"AVX512F",
"AVX512CD",
"AVX512_SKX"
],
"not found": [
"AVX512_KNL",
"AVX512_KNM",
"AVX512_CLX",
"AVX512_CNL",
"AVX512_ICL"
]
}
}
</code></pre>
|
<python><numpy><runtime>
|
2025-04-03 17:03:20
| 0
| 8,499
|
Sun Bear
|
79,553,576
| 11,062,613
|
Ensure uniform group sizes using NumPy
|
<p>I have a function that ensures uniform sizes for grouped data by padding missing values with a fill_value. The function currently uses a for loop to populate the padded array.</p>
<p>Is there a better way to generate the padded array and get rid of the for loop using NumPy's builtin abilities (Edit: better in regards to performance and readability)?</p>
<p>Here is the function:</p>
<pre><code>def ensure_uniform_groups(
groups: np.ndarray,
values: np.ndarray,
fill_value: np.number = np.nan) -> tuple[np.ndarray, np.ndarray]:
"""
Ensure uniform group lengths by padding each group to the same size.
Args:
groups : np.ndarray
1D array of group identifiers, assumed to be consecutive.
values : np.ndarray
1D/2D array of values corresponding to the group identifiers.
fill_value : np.number, optional
Value to use for padding groups. Default is np.nan.
Returns:
tuple[np.ndarray, np.ndarray]
A tuple containing uniform groups with padded values.
"""
# set common type
dtype = np.result_type(fill_value, values)
# derive group infos
n = groups.size
mask = np.r_[True, groups[:-1] != groups[1:]]
starts = np.arange(n)[mask]
ends = np.r_[starts[1:] - 1, n-1]
sizes = ends - starts + 1
max_size = np.max(sizes)
# check if data is uniform already
if np.all(sizes == max_size):
return groups, values
# generate uniform arrays
unique_groups = groups[starts]
full_groups = np.repeat(unique_groups, max_size)
full_values = np.full((full_groups.shape[0], values.shape[1]), fill_value=fill_value, dtype=dtype)
for i, (ia, ie) in enumerate(np.column_stack([starts, ends+1])):
ua = i * max_size
ue = ua + ie-ia
full_values[ua:ue] = values[ia:ie]
return full_groups, full_values
</code></pre>
<p>Here is an example:</p>
<pre><code>groups = np.array([1, 1, 1, 2, 2, 3]) # size by group should be 3
values = np.column_stack([groups*10, groups*100])
fill_value = np.nan
ugroups, uvalues = ensure_uniform_groups(groups, values, fill_value)
out = np.vstack([ugroups, uvalues.T])
print(out)
# [[ 1. 1. 1. 2. 2. 2. 3. 3. 3.]
# [ 10. 10. 10. 20. 20. nan 30. nan nan]
# [100. 100. 100. 200. 200. nan 300. nan nan]]
</code></pre>
<p>Edit: Here is a benchmark which could be used to define "better" in regards to performance:</p>
<pre><code>from timeit import timeit
runs = 10
groups = np.sort(np.random.randint(1, 100, 100_000))
values = np.random.rand(groups.size, 2)
baseline = timeit(lambda: ensure_uniform_groups(groups, values), number=runs)
time_better = timeit(lambda: ensure_uniform_groups_better(groups, values), number=runs)
print("Ratio compared to baseline (>1 is faster)")
print(f"ensure_uniform_groups_better: {baseline/time_better:.2f}")
</code></pre>
|
<python><numpy>
|
2025-04-03 17:02:42
| 2
| 423
|
Olibarer
|
79,553,519
| 12,670,598
|
Optimizing K_ij-free Subgraph Detection in a Bounded-Degree Graph
|
<p>I am working with an undirected graph G where the maximum degree is bounded by a constant d. My goal is to check whether G contains a complete bipartite subgraph K_{i,j} as a subgraph, for small values of i, j (specifically, i, j < 8).</p>
<p>I currently use the following brute-force approach to detect a K_{i,j} subgraph:</p>
<pre class="lang-py prettyprint-override"><code>nodes = [u for u in self.g if len(self.g[u]) >= j]
set_combinations = combinations(nodes, i)
for A in set_combinations:
common_neighbors = set(self.g)
for u in A:
common_neighbors &= self.g[u]
if len(common_neighbors) >= j:
return False # K_ij found
return True
</code></pre>
<p>This method works correctly but becomes slow as the graph size increases. The main bottleneck seems to be:</p>
<ul>
<li>Enumerating all subsets of i nodes (O(n^i) complexity).</li>
<li>Intersecting neighbor sets iteratively, which can be costly for large graphs.</li>
</ul>
<p>Given that the maximum degree is bounded and i, j are small, I wonder if there are more efficient approaches.</p>
|
<python><algorithm><graph>
|
2025-04-03 16:31:04
| 1
| 439
|
Fabian Stiewe
|
79,553,487
| 3,542,535
|
Build aggregated network graph from Pandas dataframe containing a column with list of nodes for each row?
|
<p>I'm dipping my toes into network visualizations in Python. I have a dataframe like the following:</p>
<pre><code>| user | nodes |
| -----| ---------|
| A | [0, 1, 3]|
| B | [1, 2, 4]|
| C | [0, 3] |
|... | |
</code></pre>
<p>Is there a way to easily plot a network graph (NetworkX?) from data that contains the list of nodes on each row? The presence of a node in a row would increase the prominence of that node on the graph (or the prominence/weight of the edge in the relationship between two nodes).</p>
<p><a href="https://i.sstatic.net/bZsrCGcU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bZsrCGcU.png" alt="enter image description here" /></a></p>
<p>I assume some transformation would be required to get the data into the appropriate format for NetworkX (or similar) to be able to create the graph relationships.</p>
<p>Thanks!</p>
|
<python><python-3.x><pandas><graph><networkx>
|
2025-04-03 16:17:33
| 1
| 413
|
alpacafondue
|
79,553,451
| 12,131,013
|
Generating a discrete polar surface map in cartesian coordinates
|
<p>I would like to generate a surface plot with discrete arc-shaped cells on a 2D cartesian plane. I am able to get decent results by plotting a 3D surface plot (using <code>plot_surface</code>) and viewing it from the top, but matplotlib can be a bit finicky with 3D, so I'd prefer to do it in 2D. I can also get similar results using <code>pcolormesh</code> on a polar plot, but again, I want a 2D cartesian plane. How can I do this in matplotlib?</p>
<p>MRE:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
r = np.linspace(2, 5, 25)
theta = np.linspace(0, np.pi, 25)
R, Theta = np.meshgrid(r, theta)
X = R*np.cos(Theta)
Y = r*np.sin(Theta)
U = R*np.cos(Theta)*np.exp(R*Theta/500)
fig, ax = plt.subplots(figsize=(8,6), subplot_kw={"projection":"3d"})
surf = ax.plot_surface(X, Y, U, cmap="viridis", rstride=1, cstride=1)
ax.view_init(elev=90, azim=-90)
ax.set_proj_type("ortho")
ax.zaxis.line.set_lw(0.)
ax.set_zticks([])
ax.set_aspect("equalxy")
fig.colorbar(surf, shrink=0.5, aspect=5)
fig.tight_layout()
fig, ax = plt.subplots(figsize=(8,6), subplot_kw={"projection":"polar"})
ax.pcolor(Theta, R, U, shading="nearest")
ax.set_xlim(0, np.pi)
ax.grid(False)
fig.tight_layout()
</code></pre>
<p>3D plot version:</p>
<p><a href="https://i.sstatic.net/Mm7csDpB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Mm7csDpB.png" alt="" /></a></p>
<p>2D polar plot version:</p>
<p><a href="https://i.sstatic.net/Z0VHO7mS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Z0VHO7mS.png" alt="" /></a></p>
|
<python><matplotlib><polar-coordinates><cartesian-coordinates>
|
2025-04-03 16:02:50
| 1
| 9,583
|
jared
|
79,553,048
| 2,859,614
|
How do I get a human readable string out of a Snowpark API session object?
|
<p>I am writing a stored procedure with Snowflake's Snowpark API, using Python as the language. I want my procedure to log every time it executes a DML statement. When I write procedures that construct the SQL strings and then execute them via <code>session.call(SQLString)</code>, this is easy - I can log the SQL string I constructed. This approach looks something like this in a JavaScript procedure:</p>
<pre class="lang-js prettyprint-override"><code>const EXAMPLE_QUERY = `CALL EXAMPLE_PROCEDURE('${ARG1}', '${ARG2}', '${ARG3}');`;
let exampleStatement = snowflake.createStatement({sqlText: EXAMPLE_QUERY});
log(EXAMPLE_QUERY, "info", `Exit code: ${exitCode}`);
</code></pre>
<p>The <code>EXAMPLE_QUERY</code> is itself a string that I can log directly, and it tells me exactly what DML statement is executed when I read through the logs. Very informative and convenient.</p>
<p>Right now I am trying to use the Snowpark <a href="https://docs.snowflake.com/en/developer-guide/snowpark/reference/python/latest/snowpark/session" rel="nofollow noreferrer"><code>session</code></a> object and methods, so I don't have a SQL string I construct in the procedure. What can I get from my <code>session</code> object to put in my log table? It does not need to be an SQL statement exactly, but I would like to know what is happening to the data. The last line of the below example procedure is the problem I am trying to solve.</p>
<pre class="lang-python prettyprint-override"><code> CREATE OR REPLACE PROCEDURE EXAMPLE()
RETURNS VARIANT
LANGUAGE PYTHON
RUNTIME_VERSION = '3.9'
PACKAGES = ('snowflake-snowpark-python')
HANDLER = 'run'
AS
$$
def run(session):
PROCEDURE_NAME = 'EXAMPLE'
def log(execution, level, message):
try:
logging_function = 'EXAMPLE_DB.EXAMPLE_SCHEMA.LOGGING' # This UDF already lives in the database
session.call(logging_function, PROCEDURE_NAME, execution, level, message)
except Exception as e:
return {"status": "ERROR", "message": "Error in the logging function: "+str(e)}
try:
log(f'CALL {PROCEDURE_NAME}({ADMIN_DB}, {BUILD_DB}, {BUILD_SCHEMA}, {BUILDS_TO_KEEP})', 'info', 'Begin run')
example_table = "EXAMPLE_DB.EXAMPLE_SCHEMA.EXAMPLE_TABLE"
get_tables_query = session.table(example_table).select('EXAMPLE_COLUMN').distinct()
example_output = [row['EXAMPLE_COLUMN'] for row in get_tables_query.collect()]
log(get_tables_query, 'info', 'Unique tables returned: '+str(len(target_tables))) # Does not work!
</code></pre>
|
<python><logging><snowflake-cloud-data-platform>
|
2025-04-03 13:16:02
| 1
| 3,026
|
MackM
|
79,552,893
| 4,596,414
|
Ignore all printed messages to avoid "No results. Previous SQL was not a query."
|
<p>I am trying to execute a SQL script on SQL Server in Python using pyodbc and the driver ODBC 18 (Windows OS). The script produces a dataset with some rows in output and, during the execution, it generates also many debug informations (with <code>PRINT</code> or <code>RAISERROR('...', 0, 1)</code>) that can't be disabled.
Unfortunately, executing the <em>fetchall()</em> method on the returning cursor, I receive the error <code>pyodbc.ProgrammingError: No results. Previous SQL was not a query.</code>.</p>
<p>I found that the SQL script could run successfully. It's just the python part that interprets all kinds of message as an error. The following simple script produces the error above:</p>
<pre><code>RAISERROR('msg', 0, 1)
SELECT 1 AS col
</code></pre>
<p>Changing the <code>RAISERROR</code> with <code>PRINT</code> the error will be also raised.
By just <strong>removing</strong> all <code>RAISERROR</code> and <code>PRINT</code>, the previous script runs successfully. So, the root cause is the simple print of a message. Some messages could be disabled (e.g. through <code>SET ANSI_WARNINGS OFF</code> or <code>SET NOCOUNT ON</code>) but sometimes the info messages are not wanted to be hidden or disabled.</p>
<p>The question is: there is a way to let pyodbc <strong>ignore</strong> all messages? Are all kind of messages interpreted as error by the driver?
I looked for a driver option in the MS ODBC Driver documentation and also in the pyodbc docs, but found nothing.</p>
|
<python><sql-server><pyodbc><ignore><raiserror>
|
2025-04-03 12:11:50
| 0
| 309
|
Armando Contestabile
|
79,552,806
| 7,991,581
|
Print peewee model as json with foreign_keys instead of related model
|
<p>I'm currently exploring peewee ORM and in order to make some tests I've defined two simple databases with three tables as follow</p>
<p><a href="https://i.sstatic.net/itdrmrSj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/itdrmrSj.png" alt="enter image description here" /></a></p>
<p>I then defined the peewee models into a single file</p>
<pre><code>from playhouse.shortcuts import model_to_dict
import peewee as pw
import json
public_database = pw.MySQLDatabase(None)
private_database = pw.MySQLDatabase(None)
class CommonModel(pw.Model):
def json(self, recurse=False):
return model_to_dict(self, recurse=recurse)
def __str__(self):
return json.dumps(self.json())
class PublicDatabaseModel(CommonModel):
class Meta:
database = public_database
class PrivateDatabaseModel(CommonModel):
class Meta:
database = private_database
class User(PrivateDatabaseModel):
class Meta:
db_table = "users"
id = pw.AutoField(primary_key=True)
firstname = pw.CharField()
lastname = pw.CharField()
email = pw.CharField()
class Character(PublicDatabaseModel):
class Meta:
db_table = "characters"
id = pw.AutoField(primary_key=True)
name = pw.CharField()
user = pw.ForeignKeyField(User)
class Item(PublicDatabaseModel):
class Meta:
db_table = "items"
id = pw.AutoField(primary_key=True)
name = pw.CharField()
character = pw.ForeignKeyField(Character)
</code></pre>
<p>I can then define database connections into a testing file</p>
<pre><code>import peewee_models as models
models.private_database.init(
"test_database_private",
host="localhost",
user="root",
password="password",
port=3306
)
models.public_database.init(
"test_database_public",
host="localhost",
user="root",
password="password",
port=3306
)
characters = models.Character.select()
for character in characters:
print("-------------------")
print(f"Character : {character}")
print(f"Character.id : {character.id}")
print(f"Character.name : {character.name}")
print(f"Character.user_id : {character.user_id}")
print(f"Character.user : {character.user}")
</code></pre>
<p>The issue I have is that I can't find a way to render models as JSON with the foreign_key name as a key, the key name is always the name of the associated relationship</p>
<pre><code>-------------------
Character : {"id": 1, "name": "User1_Char1", "user": 1}
Character.id : 1
Character.name : User1_Char1
Character.user_id : 1
Character.user : {"id": 1, "firstname": "user1_firstname", "lastname": "user1_lastname", "email": "user1@yopmail.com"}
-------------------
Character : {"id": 2, "name": "User1_Char2", "user": 1}
Character.id : 2
Character.name : User1_Char2
Character.user_id : 1
Character.user : {"id": 1, "firstname": "user1_firstname", "lastname": "user1_lastname", "email": "user1@yopmail.com"}
</code></pre>
<p>For example I'd like the <code>Character</code> class JSON to be <code>{"id": 2, "name": "User1_Char2", "user_id": 1}</code> instead of <code>{"id": 2, "name": "User1_Char2", "user": 1}</code> but I can't find a way to achieve it.</p>
<p>I tried to change several <code>ForeignKeyField</code> class parameters such as <code>object_id_name</code> and <code>related_name</code> but nothing seems to change anything, even though the <code>user_id</code> attribute does exist in the class instance.</p>
<p>Does anyone know how to have the foreign_key name instead of the relationship name as JSON key ?</p>
|
<python><orm><relationship><peewee>
|
2025-04-03 11:32:12
| 0
| 924
|
Arkaik
|
79,552,738
| 1,875,459
|
Create a legend taking into account both the size and color of a scatter plot
|
<p>I am plotting a dataset using a scatter plot in Python, and I am encoding the data both in color and size. I'd like for the legend to represent this.</p>
<p>I am aware of <a href="https://matplotlib.org/api/collections_api.html#matplotlib.collections.PathCollection.legend_elements" rel="nofollow noreferrer"><code>.legend_elements(prop='sizes')</code></a> but I can have either colors or sizes but not both at the same time. I found a way of changing the marker color when using <code>prop='sizes'</code> with th <code>color</code> argument, but that's not really what I intend to do (they are all the same color).</p>
<p>Here is a MWE:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import numpy as np
import pylab as pl
time = pd.DataFrame(np.random.rand(10))
intensity = pd.DataFrame(np.random.randint(1,5,10))
df = pd.concat([time, intensity], axis=1)
size = intensity.apply(lambda x: 10*x**2)
fig, ax = pl.subplots()
scat = ax.scatter(time, intensity, c=intensity, s=size)
lgd = ax.legend(*scat.legend_elements(prop="sizes", num=3, \
fmt="{x:.1f}", func=lambda s: np.sqrt(s/10)), \
title="intensity")
</code></pre>
<p>and I'd like to have the markers color-coded too.</p>
<p>Any help or hint would be appreciated!</p>
|
<python><matplotlib><legend><scatter-plot>
|
2025-04-03 10:59:33
| 1
| 846
|
MBR
|
79,552,701
| 2,179,994
|
Python socket server recv() hangs
|
<p>I want to establish a simple workflow between server and client to send and receive some - currently unknown amount of - text data:</p>
<ol start="0">
<li>server is running, listening</li>
<li>client starts, is set up for communication</li>
<li>client sends text data to server</li>
<li>server receives the message in chunks</li>
<li>server uses it to perform some tasks</li>
<li>server returns the results in chunks</li>
<li>client receives all data</li>
<li>connection is closed, server goes on listening</li>
</ol>
<p>I did some research read up on blocking and non-blocking sockets - I'm totally fine with the blocking version - and came up with the following code.
What works are points 0 - 2: setup, starting and sending the message.</p>
<p>At 3. server receives the data, but then waits indefinitely instead of recognizing that no more data is coming - <code>if not chunk:</code> never actually executes although the full data is transferred so the while loop is never broken. I tried to change the condition to e.g. match the double curly braces at the end of the last chunk, but no success.</p>
<p>Now, my understanding is that server should realize that no more chunks of data is coming, at least this is what all the examples suggested. Note, the real data is longer than the example so transferring in chunks is essential.</p>
<p>What am I doing wrong?</p>
<pre><code>import threading
import socket
import json
import time
class RWF_server:
def start_server(self):
host_address = 'localhost'
port = 9999
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
server.bind((host_address, port))
except OSError:
raise
print(f"Could not bind to {host_address}:{port}, is a server already running here?")
server.listen(5)
print(f"Server listening on {host_address}:{port}")
while True:
conn, addr = server.accept()
print(f'Connected by {addr}')
data = ""
while True:
chunk = conn.recv(1024).decode('utf-8')
print('Server got:', chunk)
if not chunk:
break
data += chunk
if not data:
continue # Continue to the next connection if no data was received
message = json.loads(data)
print(f'Received full message: {message}, getting to work')
# Simulate processing
time.sleep(3)
print('Task is done')
response = {'status': 'success', 'message': 'Data received'}
response_data = json.dumps(response)
conn.sendall(response_data.encode('utf-8'))
conn.close()
class RWF_client:
def start_client(self):
content = {2: 'path_to_a_file\\__batch_B3310.bat',
'data': {'FY_phase': 0.0,
'MY_phase': 0.0,
'head_chamfer_legth': 0.1,
'head_radius': 2900.0,
'head_thickness': 13.0,
'head_thickness_at_nozzle': 13.0, }}
print('The following content is sent to the server: {}'.format(content))
with socket.socket() as sock:
try:
sock.connect(('localhost', 9999))
except (ConnectionRefusedError, TimeoutError):
raise
sock.sendall(json.dumps(content).encode('utf-8'))
print('Content sent to the server')
response = ""
while True:
chunk = sock.recv(1024).decode('utf-8')
print('Client getting response: {}'.format(chunk))
if not chunk:
break
response += chunk
print(f'Received response: {json.loads(response)}')
response = json.loads(response)
return response
server_thread = threading.Thread(target=RWF_server().start_server)
server_thread.start()
client_thread = threading.Thread(target=RWF_client().start_client)
client_thread.start()
server_thread.join()
client_thread.join()
</code></pre>
|
<python><sockets>
|
2025-04-03 10:44:25
| 0
| 2,074
|
jake77
|
79,552,670
| 6,029,488
|
Convert a column containing a single value to row in python pandas
|
<p>Consider the following dataframe example:</p>
<pre><code>maturity_date simulation simulated_price realized_price
30/06/2010 1 0.539333333 0.611
30/06/2010 2 0.544 0.611
30/06/2010 3 0.789666667 0.611
30/06/2010 4 0.190333333 0.611
30/06/2010 5 0.413666667 0.611
</code></pre>
<p>Apart from setting aside the value of the last column and concatenating, is there any other way to adjust the dataframe such that the last column becomes row?</p>
<p>Here is the desired output:</p>
<pre><code>maturity_date simulation simulated_price
30/06/2010 1 0.539333333
30/06/2010 2 0.544
30/06/2010 3 0.789666667
30/06/2010 4 0.190333333
30/06/2010 5 0.413666667
30/06/2010 realized_price 0.611
</code></pre>
|
<python><pandas>
|
2025-04-03 10:28:53
| 1
| 479
|
Whitebeard13
|
79,552,668
| 12,493,545
|
Tensorflow runs out of range during fitting: Local rendezvous is aborting with status: OUT_OF_RANGE: End of sequence
|
<p>I have a dataset containing semantic vectors of length 384. I group them to windows each containing a 100. I batch them to batch sizes of 32. However, I get an error when fitting the model. I feel like it might be because it creates 48 32-sized batches out of 1515 windows making the last batch incomplete given that <code>1515/32=47,34375</code>, but I don't know whether that should really cause an issue - I never had the problem in the past.</p>
<p>These are my fit parameters:</p>
<pre class="lang-py prettyprint-override"><code>model.fit(training_data, epochs=50, validation_data=validation_data)
</code></pre>
<p>so explicitly, I don't set the steps which should mean it's none that should result in tensorflow running through the whole training_data on each epoch.</p>
<p>Is the fix for this to somehow drop the incomplete batch or to fill it up with values somehow? If so, could you provide an example how to drop it?</p>
<h2>Full error message</h2>
<pre class="lang-none prettyprint-override"><code>2025-04-03 12:20:03 [INFO]: Training (151508, 2)
2025-04-03 12:20:03 [INFO]: Validation (26514, 2)
2025-04-03 12:20:03 [INFO]: Test (22728, 2)
I0000 00:00:1743675604.435011 6880 cuda_executor.cc:1015] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355
2025-04-03 12:20:04.435109: W tensorflow/core/common_runtime/gpu/gpu_device.cc:2343] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...
2025-04-03 12:20:04.838231: I tensorflow/core/framework/local_rendezvous.cc:404] Local rendezvous is aborting with status: OUT_OF_RANGE: End of sequence
Number of windows created: 1515
Input shape: (32, 100, 384)
Label shape: (32, 100, 384)
2025-04-03 12:20:07.074530: I tensorflow/core/framework/local_rendezvous.cc:404] Local rendezvous is aborting with status: OUT_OF_RANGE: End of sequence
Counted training batches: 48
Epoch 1/50
2025-04-03 12:20:09.798596: E tensorflow/core/util/util.cc:131] oneDNN supports DT_INT32 only on platforms with AVX-512. Falling back to the default Eigen-based implementation if present.
47/Unknown 5s 48ms/step - loss: 0.00172025-04-03 12:20:12.186355: I tensorflow/core/framework/local_rendezvous.cc:404] Local rendezvous is aborting with status: OUT_OF_RANGE: End of sequence
[[{{node IteratorGetNext}}]]
/home/xaver/Documents/Repos/logAnomalyDetection/venv/lib/python3.12/site-packages/keras/src/trainers/epoch_iterator.py:151: UserWarning: Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least `steps_per_epoch * epochs` batches. You may need to use the `.repeat()` function when building your dataset.
self._interrupted_warning()
</code></pre>
<h1>Code</h1>
<pre class="lang-py prettyprint-override"><code>"""
Semi-supervised log anomaly detection using sentence vector embeddings and a sliding window
"""
import logging
import pickle
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
import tensorflow as tf
from tensorflow.keras import layers, models, Input
from sklearn.metrics import (
confusion_matrix, matthews_corrcoef, precision_score, recall_score,
f1_score, roc_auc_score, average_precision_score
)
from tensorflow.python.client import device_lib
device_lib.list_local_devices()
WINDOW_SIZE = 100
BATCH_SIZE = 32
# Configure logging
logging.basicConfig(
format="%(asctime)s [%(levelname)s]: %(message)s",
level=logging.INFO,
datefmt="%Y-%m-%d %H:%M:%S"
)
# Step 1: Load and prepare data
input_pickle = "outputs/parsed_openstack_logs_with_embeddings_final.pickle"
with open(input_pickle, "rb") as handle:
df = pickle.load(handle)
logging.info("DataFrame loaded successfully.")
# embeddings will be the input and output since AE tries to create the representation from the latent space
# label contains whether the data is anomalous (1) or normal (0).
logging.debug(f"{df.columns}")
logging.debug(f"\n{df.head()}")
# split data so that training data contains no anomalies, validation and testing contain an equal number of normal and abnormal entries
embedding_length = len(df["embeddings"][0])
print("Length of embedding vector:", embedding_length)
normal_data = df[df['label'] == 0]
test_data_abnormal = df[df['label'] == 1]
abnormal_normal_ratio = len(test_data_abnormal) / len(normal_data)
logging.info(f"N normal: {len(normal_data)}; N abnormal: {len(test_data_abnormal)} -- Ratio: {abnormal_normal_ratio}")
training_data, rest_data = train_test_split(normal_data, train_size=0.8, shuffle=False)
validation_data, test_data_normal = train_test_split(rest_data, test_size=0.3, shuffle=False)
test_data_normal = test_data_normal.head(len(test_data_abnormal))
test_data_abnormal = test_data_abnormal.head(len(test_data_normal))
test_data = pd.concat([test_data_normal, test_data_abnormal])
def create_window_tf_dataset(dataset):
# transforms the embeddings into a windowed dataset
embeddings_list = dataset["embeddings"].tolist()
embeddings_array = np.array(embeddings_list)
embedding_tensor = tf.convert_to_tensor(embeddings_array)
df_tensor = tf.convert_to_tensor(embedding_tensor)
tensor_dataset = tf.data.Dataset.from_tensor_slices(df_tensor)
windowed_dataset = tensor_dataset.window(WINDOW_SIZE, shift=WINDOW_SIZE, drop_remainder=True)
windowed_dataset = windowed_dataset.flat_map(lambda window: window.batch(WINDOW_SIZE))
return windowed_dataset
logging.info(f"Training {training_data.shape}")
logging.info(f"Validation {validation_data.shape}")
logging.info(f"Test {test_data.shape}")
num_windows = sum(1 for _ in create_window_tf_dataset(training_data))
print(f"Number of windows created: {num_windows}") # 1515
training_data = create_window_tf_dataset(training_data).map(lambda window: (window, window)) # training data is normal
validation_data = create_window_tf_dataset(validation_data).map(lambda window: (window, window)) # training data is normal
test_data_normal = create_window_tf_dataset(test_data_normal).map(lambda window: (window, window)) # normal training data is normal
test_data_abnormal = create_window_tf_dataset(test_data_abnormal).map(lambda window: (window, window)) # abnormal training data is abnormal
# group normal and abnormal data
test_data = test_data_normal.concatenate(test_data_abnormal)
training_data = training_data.batch(BATCH_SIZE)
validation_data = validation_data.batch(BATCH_SIZE)
test_data = test_data.batch(BATCH_SIZE)
for x, y in training_data.take(1):
print("Input shape:", x.shape) # (32, 100, 384)
print("Label shape:", y.shape) # (32, 100, 384)
train_batches = sum(1 for _ in training_data)
print(f"Counted training batches: {train_batches}") # 48
# Build the Autoencoder model
model = models.Sequential([
Input(shape=(WINDOW_SIZE, embedding_length)),
layers.LSTM(64, return_sequences=True),
layers.LSTM(32, return_sequences=False),
layers.RepeatVector(WINDOW_SIZE),
layers.LSTM(32, return_sequences=True),
layers.LSTM(64, return_sequences=True),
layers.TimeDistributed(layers.Dense(embedding_length))
])
# Compile the model
model.compile(optimizer='adam', loss='mse')
# Train the model
model.fit(training_data, epochs=50, validation_data=validation_data) # error occurs here
# [... evaluation ...]
</code></pre>
<h1>Additional Info</h1>
<p>My tensorflow version is <code>2.17.0</code>.</p>
|
<python><tensorflow><keras><tf.keras>
|
2025-04-03 10:28:29
| 1
| 1,133
|
Natan
|
79,552,490
| 2,451,238
|
Can I pin the 'global' conda environment in Snakemake?
|
<p>When injecting into the Python environment running <code>snakemake</code> using the</p>
<pre class="lang-none prettyprint-override"><code>conda:
"envs/global.yaml"
</code></pre>
<p>directive as described <a href="https://snakemake.readthedocs.io/en/stable/snakefiles/deployment.html#global-workflow-dependencies" rel="nofollow noreferrer">here</a>, can I pin the exact environment using a <code>envs/global.<platform>.pin.txt</code> file as described <a href="https://snakemake.readthedocs.io/en/stable/snakefiles/deployment.html#freezing-environments-to-exactly-pinned-packages" rel="nofollow noreferrer">here</a>?</p>
<hr />
<p>Context:</p>
<p>When adding <code>envs/foo.{platform}.pin.txt</code> for an environment activated via</p>
<pre class="lang-none prettyprint-override"><code>rule foo:
conda: "envs/foo.yaml"
...
</code></pre>
<p>to a specific rule as described <a href="https://snakemake.readthedocs.io/en/stable/snakefiles/deployment.html#integrated-package-management" rel="nofollow noreferrer">here</a>, running</p>
<pre class="lang-bash prettyprint-override"><code>snakemake -n1 foo
</code></pre>
<p>prints the following informative messages:</p>
<pre><code>...
Creating conda environment <path>/envs/foo.yaml...
Using pinnings from <path>/envs/foo.{platform}.pin.txt.
...
</code></pre>
<p>However, for the 'global' environment (see above) there is no such message to indicate whether <code>envs/global.<platform>.pin.txt</code> is being considered or not.</p>
|
<python><conda><snakemake>
|
2025-04-03 09:14:38
| 0
| 1,894
|
mschilli
|
79,552,479
| 224,285
|
Polars out of core sorting and memory usage
|
<p>From what I understand this is a main use case for Polars: being able to process a dataset that is larger than RAM, using disk space if necessary. Yet I am unable to achieve this in a Kubernetes environment. To replicate locally I tried launching a docker container with a low memory limit:</p>
<pre><code>docker run -it --memory=500m --rm -v `pwd`:/app python:3.12 /bin/bash
# pip install polars==1.26.0
</code></pre>
<p>I checked that it set up the memory limit in cgroups for the current process.
Then I ran a script that loads a moderately large dataframe (23M parquet file, 158M uncompressed), using <code>scan_parquet</code>, performs a sort, and outputs the head:</p>
<pre><code>source = "parquet/central_west.df"
df = pl.scan_parquet(source, low_memory=True)
query = df.sort("station_code").head()
print(query.collect(engine="streaming"))
</code></pre>
<p>This leads to the process getting killed.
It works with a smaller dataframe, or a larger limit.
Is polars not reading the limit correctly? Or not able to work with that low of a limit? I understand the "new" streaming engine is still in beta, so I tried the same script with version 1.22.0 of polars, but the result was the same. This seems like a very simple and common use case so I hope I am just missing a configuration trick.</p>
<p>On a hunch and based on a similar question I tried setting POLARS_IDEAL_MORSEL_SIZE=100, but that made no difference, and I feel like I am grasping at straws here.</p>
|
<python><kubernetes><python-polars><cgroups><polars>
|
2025-04-03 09:11:11
| 0
| 1,319
|
Nicolas Galler
|
79,551,855
| 1,946,796
|
How to get attributes values from an xml file (CFDI - SAT Mexico)
|
<p>I have a file where i can not read the attributes since the tag, looks different than usual, i have tried to use <code>lxml</code> as follow:</p>
<pre class="lang-py prettyprint-override"><code>from lxml import etree
xDoc = etree.parse(xmlFile)
folioFiscal = xDoc.find('tfd:TimbreFiscalDigital')
print(folioFiscal)
</code></pre>
<p>the xml file is as follow:</p>
<pre class="lang-xml prettyprint-override"><code><?xml version="1.0" encoding="utf-8"?>
<cfdi:Comprobante xsi:schemaLocation="http://www.sat.gob.mx/cfd/4 http://www.sat.gob.mx/sitio_internet/cfd/4/cfdv40.xsd" Version="4.0" Serie="AA" Folio="147" Fecha="2025-01-01T17:51:12" Sello="[alfanumeric_values]" FormaPago="03" NoCertificado="[numeric_values]" Certificado="[alfanumeric_values]" SubTotal="###.00" Moneda="MXN" Total="###.00" TipoDeComprobante="I" Exportacion="01" MetodoPago="PUE" LugarExpedicion="#####" xmlns:cfdi="http://www.sat.gob.mx/cfd/4" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<cfdi:Emisor Rfc="[A-Z]{4}[0-9]{6}[A-Z]{3}" Nombre="[alfanumeric_values]" RegimenFiscal="###" />
<cfdi:Receptor Rfc="[A-Z]{4}[0-9]{6}[A-Z]{3}" Nombre="[alfanumeric_values]" DomicilioFiscalReceptor="#####" RegimenFiscalReceptor="###" UsoCFDI="D01" />
<cfdi:Conceptos>
<cfdi:Concepto ClaveProdServ="########" Cantidad="1.00" ClaveUnidad="ACT" Unidad="Actividad" Descripcion="Honorarios Medicos" ValorUnitario="1100.00" Importe="1100.00" ObjetoImp="02">
<cfdi:Impuestos>
<cfdi:Traslados>
<cfdi:Traslado Base="1100.00" Impuesto="002" TipoFactor="Exento" />
</cfdi:Traslados>
</cfdi:Impuestos>
</cfdi:Concepto>
</cfdi:Conceptos>
<cfdi:Impuestos>
<cfdi:Traslados>
<cfdi:Traslado Base="1100.00" Impuesto="002" TipoFactor="Exento" />
</cfdi:Traslados>
</cfdi:Impuestos>
<cfdi:Complemento>
<tfd:TimbreFiscalDigital xmlns:tfd="http://www.sat.gob.mx/TimbreFiscalDigital" xsi:schemaLocation="http://www.sat.gob.mx/TimbreFiscalDigital http://www.sat.gob.mx/sitio_internet/cfd/TimbreFiscalDigital/TimbreFiscalDigitalv11.xsd" Version="1.1" UUID="[A-Z0-9]{8}-[A-Z0-9]{4}-[A-Z0-9]{4}-[A-Z0-9]{4}-[A-Z0-9]{12}" FechaTimbrado="2025-01-01T17:51:34" RfcProvCertif="SAT[A-Z0-9]{8}" SelloCFD="[alfanumeric_values]" NoCertificadoSAT="[0-9]{20}" SelloSAT="[alfanumeric_values]" />
</cfdi:Complemento>
</cfdi:Comprobante>
</code></pre>
<p>I want to obtain the values in UUID attribute that looks like <code>[A-Z0-9]{8}-[A-Z0-9]{4}-[A-Z0-9]{4}-[A-Z0-9]{4}-[A-Z0-9]{12}</code>.</p>
|
<python><xml>
|
2025-04-03 02:03:17
| 3
| 620
|
Rafa Barragan
|
79,551,824
| 3,225,420
|
How Can I Match Percentile Results in Postgres and Pandas?
|
<p>I am making calculations in the database and want to validate results against <code>Pandas</code>' calculations.</p>
<p>I want to calculate the 25th, 50th and 75th percentile. The end use is a statistical calculation so <code>percentile_cont()</code> is the <code>Postgres</code> function I'm using.</p>
<p>My query:</p>
<pre class="lang-sql prettyprint-override"><code>SELECT percentile_cont(0.25) WITHIN GROUP (order by obs1) as q1
, percentile_cont(0.5) WITHIN GROUP (order by obs1) as med
, percentile_cont(0.75) WITHIN GROUP (order by obs1) as q3
FROM my_table;
</code></pre>
<p>It returns <code>q1</code>=<code>73.99</code>, <code>med</code>=<code>74.0</code> and <code>q3</code>=<code>74.0085</code> as the result.</p>
<p>However, when I do the calculations in <code>Pandas</code> the <code>q3</code> value is different. I searched and the consensus online is the <code>interpolation='linear'</code> argument in <code>Pandas</code> causes the calculation method to match that of <code>percentile_cont()</code>.</p>
<pre><code>dataframe = pandas.read_sql(sql='SELECT obs1 FROM my_table', con='my_connection_info')
dataframe = dataframe.sort_values(by='obs1')
percentiles = dataframe.quantile(q=[0.25, 0.5, 0.75],
axis=0,
numeric_only=False,
interpolation='linear',
method='single')
</code></pre>
<p>The results are:</p>
<pre><code> obs1
0.25 73.992
0.50 74.000
0.75 74.006
</code></pre>
<p>I'm confused because only the 75th percentile is off. <code>q1</code> and <code>q2</code> look like a rounding issue, but <code>q3</code> is simply off.</p>
<p>When I do my calculation method (25 sorted values * 0.75 = 18.75 position) and look at the sorted values below, the 18th and 19th position (index values 13 and 17) are both <code>74.006</code>.</p>
<p>I do not see how <code>Postgres</code> gets the <code>q3</code> result that it does, nor how to get the results from <code>Postgres</code> and <code>Pandas</code> to match for testing purposes.</p>
<p>Update 1 per answer suggestions from @Jose Luis Dioncio:</p>
<ol>
<li>Added <code>::numeric</code>, no change to the results.</li>
<li>The column datatype is <code>float8</code></li>
<li>Confirmed <code>count</code> of data; <code>SELECT COUNT (obs1) FROM my_table;</code> = <code>25</code> and <code>len(dataframe)</code> = <code>25</code>.</li>
<li>End use is dictating the use <code>percentile_cont()</code>. As a test I tried <code>percentile_disc()</code> instead and it returned <code>74.009</code>. Still strikes me as a strange result.</li>
<li>Confirming the ordering inside the <code>Postgres</code> and <code>Pandas</code> methods is beyond me, but based on what I can observe it looks look it is working because <code>q1</code> and <code>q2</code> match.</li>
</ol>
<p>What should I try next?</p>
<p>Data directly from my database using <code>DBeaver</code> interface and <code>SELECT obs1 FROM my_table;</code>:</p>
<pre><code> obs1
24 73.982
12 73.983
18 73.984
7 73.985
2 73.988
20 73.988
4 73.992
10 73.994
16 73.994
1 73.995
6 73.995
9 73.998
15 74.000
19 74.000
3 74.002
11 74.004
21 74.004
13 74.006
17 74.006
8 74.008
5 74.009
22 74.010
14 74.012
23 74.015
0 74.030
</code></pre>
|
<python><pandas><postgresql><percentile><percentile-cont>
|
2025-04-03 01:38:09
| 1
| 1,689
|
Python_Learner
|
79,551,744
| 2,824,791
|
Problems uploading certain python packages to Azure Automation Account
|
<p>Certain python packages fail to upload to an Azure Automation Account.</p>
<p>I am trying to upload wheels for:<br />
azure-mgmt-authorization version 3.0.0 or 4.0.0</p>
<p>For 4.0.0, I checked the setup.py file and found these dependencies:</p>
<pre><code>install_requires=[
"isodate<1.0.0,>=0.6.1",
"azure-common~=1.1",
"azure-mgmt-core>=1.3.2,<2.0.0",
"typing-extensions>=4.3.0; python_version<'3.8.0'",
],
</code></pre>
<p>I was able to create wheels and upload these packages.</p>
<p><strong>Upload Failure:</strong>
The activity log shows:</p>
<p>Create or Update an Azure Automation Python 3 package->Accepted<br />
Create or Update an Azure Automation Python 3 package -> Started</p>
<p>It never shows Failed or Succeeded as expected.</p>
<p>In the Python packages blade, I can see the Package Status as Failed, with an error of:</p>
<pre><code>Last modified
4/2/2025, 6:59:29 PM
Package version
Size
0 KB
Status
Failed
Runtime version
3.8
Error
Orchestrator.Shared.AsyncModuleImport.ModuleImportException: An internal error occurred during module import while attempting to store module content. This could be a transitory network condition - please try the operation again. at Orchestrator.Activities.SetModuleActivity.ExecuteInternal(CodeActivityContext context, Byte[] moduleContent, String moduleName, ModuleLanguage moduleLanguage, Guid moduleVersionId, String modulePath) at Orchestrator.Activities.SetModuleActivity.Execute(CodeActivityContext context) at System.Activities.CodeActivity.InternalExecute(ActivityInstance instance, ActivityExecutor executor, BookmarkManager bookmarkManager) at System.Activities.Runtime.ActivityExecutor.ExecuteActivityWorkItem.ExecuteBody(ActivityExecutor executor, BookmarkManager bookmarkManager, Location resultLocation)
</code></pre>
|
<python><azure><azure-runbook>
|
2025-04-02 23:32:26
| 1
| 5,096
|
jlo-gmail
|
79,551,690
| 1,185,157
|
drawing from OpenCV fillConvexPoly() does not match the input polygon
|
<p>I'm trying to follow the solution detailed at <a href="https://stackoverflow.com/questions/30901019/extracting-polygon-given-coordinates-from-an-image-using-opencv">this question</a> to prepare a dataset to train a CRNN for HTR (Handwritten Text Recognition). I'm using eScriptorium to adjust text segmentation and transcription, exporting in ALTO format (one XML with text region coordinates for each image) and parsing the ALTO XML to grab the text image regions and export them individually to create a dataset.</p>
<p>The problem I'm finding is that I have the region defined at eScriptorium, like this:</p>
<p><a href="https://i.sstatic.net/UmA0HSKE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UmA0HSKE.png" alt="Image text region detected in eScriptorium and adjusted manually" /></a></p>
<p>But when I apply this code from the selected solution for the above linked question:</p>
<pre><code># Initialize mask
mask = np.zeros((img.shape[0], img.shape[1]))
# Create mask that defines the polygon of points
cv2.fillConvexPoly(mask, pts, 1)
mask = mask > 0 # To convert to Boolean
# Create output image (untranslated)
out = np.zeros_like(img)
out[mask] = img[mask]
</code></pre>
<p>and display the image I get some parts of the text region <em>filled</em>:</p>
<p><a href="https://i.sstatic.net/zXfIO15n.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zXfIO15n.png" alt="Image region resulting of above Python code" /></a></p>
<p>As you can see, some areas that should be inside the mask are <em>filled</em> and, therefore, the image pixels in them are not copied. I've made sure the pixels that make the polygon are correctly parsed and handed to OpenCV to build the mask. I can't find the reason why those areas are filled and I wonder if anyone got into a similar problem and managed to find out the reason or how to avoid it.</p>
<p>TIA</p>
|
<python><opencv><image-processing><computer-vision><polygon>
|
2025-04-02 22:27:12
| 1
| 1,187
|
Ricardo Palomares Martínez
|
79,551,521
| 10,461,632
|
How can I adjust alignment in a sphinx PDF table?
|
<p>I want to control the vertical and horizontal alignment of the cells in a table generated with sphinx and texlive, and I want to be able to force a new line in my csv file. I'm using <code>python==3.13.2</code> and <code>sphinx==8.2.3</code></p>
<p>I was able to figure out the horizontal alignment, but I haven't been able to figure out the vertical alignment or how to add a new line in the csv file. I've tried adding <code>\newline</code> (and variations with more slashes) to try and force the column labeled "column header" to be split across multiple rows. For example, <code>column\\\\newlineheader</code> in the csv file didn't work.</p>
<p>table.csv:</p>
<pre><code>run,column header,really long center column header,really long left column header,really long left column header,really long right column header,small
1,1,1,1,1,1,1
2,2,2,2,2,2,2
3,3,3,3,3,3,3
4,4,4,4,4,4,4
5,5,5,5,5,5,5
</code></pre>
<p>Sphinx syntax:</p>
<pre><code>.. tabularcolumns:: |>{\centering\arraybackslash}\X{1}{12}|>{\raggedleft\arraybackslash}\X{2}{12}|>{\centering\arraybackslash}\X{2}{12}|>{\raggedright\arraybackslash}\X{2}{12}|>{\raggedright\arraybackslash}\X{2}{12}|>{\raggedleft\arraybackslash}\X{2}{12}|>{\center\arraybackslash}\X{1}{12}|
.. csv-table:: My CSV Table with X{}
:file: _static/table.csv
:header-rows: 1
:align: center
</code></pre>
<p>The above code produces the image below. What I want is for the column "column header" to be split across multiple rows like "column\nheader" and all of the cells should be vertically centered. For some reason it looks like the "small" column isn't top aligned, but the rest are.</p>
<p><a href="https://i.sstatic.net/CY2ju0rk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CY2ju0rk.png" alt="sample table" /></a></p>
|
<python><latex><python-sphinx><pdflatex>
|
2025-04-02 20:29:27
| 0
| 788
|
Simon1
|
79,551,481
| 13,640,372
|
minio object Transforms with Object Lambda The security token included in the request is invalid
|
<p>I want to use MinIO Object Transforms, but I got an error when trying to download a presigned URL. I followed their documentation exactly, as shown in this <a href="https://min.io/docs/minio/linux/developers/transforms-with-object-lambda.html#" rel="nofollow noreferrer">link</a>.</p>
<p>I also watched this <a href="https://youtu.be/K0NqnK5BtFY?si=sQmyLZG3tm3uQA6h" rel="nofollow noreferrer">video</a> and rechecked everything. What should I do?</p>
<p>I got this error:</p>
<pre><code><?xml version="1.0" encoding="UTF-8"?>\n
<Error>
<Code>InvalidTokenId</Code>
<Message>The security token included in the request is invalid</Message>
<Key>000bd20a46790a88ff79cc3d4f5d21e0d6e121b6dc9fe10c76271f379af5fb2f.webp</Key>
<BucketName>place</BucketName>
<Resource>/place/000bd20a46790a88ff79cc3d4f5d21e0d6e121b6dc9fe10c76271f379af5fb2f.webp</Resource>
<RequestId>183298B2B91FD9C9</RequestId>
<HostId>dd9025bab4ad464b049177c95eb6ebf374d3b3fd1af9251148b658df7ac2e3e8</HostId>
</Error>
</code></pre>
<p>here is client code:</p>
<pre><code>minio_endpoint = '127.0.0.1:90'
minio_access_key = 'access'
minio_secret_key = 'secret'
# Initialize a Minio client
s3Client = Minio(minio_endpoint, access_key=minio_access_key, secret_key=minio_secret_key, secure=False)
# Set lambda function target via `lambdaArn`
lambda_arn = 'arn:minio:s3-object-lambda::resize-image:webhook'
# Generate presigned GET URL with lambda function
bucket_name = 'place'
object_name = '000bd20a46790a88ff79cc3d4f5d21e0d6e121b6dc9fe10c76271f379af5fb2f.webp'
expiration = timedelta(seconds=1000) # Expiration time in seconds
req_params = {'lambdaArn': lambda_arn , "width": "10"}
presigned_url = s3Client.presigned_get_object(bucket_name, object_name, expires=expiration, extra_query_params= req_params )
print(presigned_url)
# Use the presigned URL to retrieve the data using requests
response = requests.get(presigned_url)
if response.status_code == 200:
content = response.content
print("Transformed data:\n")
else:
print("Failed to download the data. Status code:", response.status_code, "Reason:", response.reason)
print("\n",response.content)
</code></pre>
<p>here is my lambda handler function</p>
<pre><code>package main
import (
"context"
"fmt"
"log"
"net/url"
"time"
"github.com/minio/minio-go/v7"
"github.com/minio/minio-go/v7/pkg/credentials"
)
func main() {
// Connect to the MinIO deployment
s3Client, err := minio.New("localhost:90", &minio.Options{
Creds: credentials.NewStaticV4("access", "secret", ""),
Secure: false,
})
if err != nil {
log.Fatalln(err)
}
// Set the Lambda function target using its ARN
reqParams := make(url.Values)
reqParams.Set("lambdaArn", "arn:minio:s3-object-lambda::resize-image:webhook")
// Generate a presigned url to access the original object
presignedURL, err := s3Client.PresignedGetObject(context.Background(), "place", "testobject", time.Duration(1000)*time.Second, reqParams)
if err != nil {
log.Fatalln(err)
}
// Print the URL to stdout
fmt.Println(presignedURL)
}
</code></pre>
|
<python><go><amazon-s3><minio>
|
2025-04-02 20:07:45
| 0
| 681
|
mohammadmahdi Talachi
|
79,550,825
| 14,818,796
|
How to properly handle SIGTERM in Python when stopping an Airflow task using "Mark as Failed"?
|
<p>I am running a long-running Python script inside an Apache Airflow DAG using BashOperator. When I manually stop the task using "Mark as Failed" or "Mark as Success", Airflow sends a SIGTERM signal to the process.</p>
<p>However, I am unable to properly catch SIGTERM and execute cleanup logic before the process is forcefully terminated.</p>
<p>I tried using <code>signal.signal()</code> to handle SIGTERM:</p>
<pre><code>import sys
import time
import signal
def handle_exit(signum, frame):
print("[ALERT] Received SIGTERM! Stopping...")
sys.exit(1)
signal.signal(signal.SIGINT, handle_exit) # To handle Ctrl+C
signal.signal(signal.SIGTERM, handle_exit) # To handle Airflow termination
while True:
print("[INFO] Running...")
time.sleep(2)
</code></pre>
<p>Still, the [ALERT] Received SIGTERM! message does not appear in the logs, meaning the signal isn't caught before termination.</p>
<p>Here is the airflow log:</p>
<pre><code>[2025-04-02, 19:16:15 IST] {taskinstance.py:1157} INFO - Dependencies all met for dep_context=non-requeueable deps ti=<TaskInstance: dag_dependency_check_parent.stop_check manual__2025-04-02T13:46:06.768233+00:00 [queued]>
[2025-04-02, 19:16:15 IST] {taskinstance.py:1157} INFO - Dependencies all met for dep_context=requeueable deps ti=<TaskInstance: dag_dependency_check_parent.stop_check manual__2025-04-02T13:46:06.768233+00:00 [queued]>
[2025-04-02, 19:16:15 IST] {taskinstance.py:1359} INFO - Starting attempt 1 of 1
[2025-04-02, 19:16:15 IST] {taskinstance.py:1380} INFO - Executing <Task(BashOperator): stop_check> on 2025-04-02 13:46:06.768233+00:00
[2025-04-02, 19:16:15 IST] {standard_task_runner.py:57} INFO - Started process 4615 to run task
[2025-04-02, 19:16:15 IST] {standard_task_runner.py:84} INFO - Running: ['airflow', 'tasks', 'run', 'dag_dependency_check_parent', 'stop_check', 'manual__2025-04-02T13:46:06.768233+00:00', '--job-id', '27296', '--raw', '--subdir', 'DAGS_FOLDER/dag_dependency_check_parent.py', '--cfg-path', '/tmp/tmpxut1ufvt']
[2025-04-02, 19:16:15 IST] {standard_task_runner.py:85} INFO - Job 27296: Subtask stop_check
[2025-04-02, 19:16:15 IST] {task_command.py:415} INFO - Running <TaskInstance: dag_dependency_check_parent.stop_check manual__2025-04-02T13:46:06.768233+00:00 [running]> on host a9c63f287b52
[2025-04-02, 19:16:15 IST] {taskinstance.py:1660} INFO - Exporting env vars: AIRFLOW_CTX_DAG_OWNER='arudsekabs' AIRFLOW_CTX_DAG_ID='dag_dependency_check_parent' AIRFLOW_CTX_TASK_ID='stop_check' AIRFLOW_CTX_EXECUTION_DATE='2025-04-02T13:46:06.768233+00:00' AIRFLOW_CTX_TRY_NUMBER='1' AIRFLOW_CTX_DAG_RUN_ID='manual__2025-04-02T13:46:06.768233+00:00'
[2025-04-02, 19:16:15 IST] {subprocess.py:63} INFO - Tmp dir root location: /tmp
[2025-04-02, 19:16:15 IST] {subprocess.py:75} INFO - Running command: ['/usr/bin/bash', '-c', 'python3 -u /opt/airflow/jobs/arudsekabs_stop_test.py']
[2025-04-02, 19:16:15 IST] {subprocess.py:86} INFO - Output:
[2025-04-02, 19:16:15 IST] {subprocess.py:93} INFO - [INFO] Running...
[2025-04-02, 19:16:17 IST] {subprocess.py:93} INFO - [INFO] Running...
[2025-04-02, 19:16:19 IST] {subprocess.py:93} INFO - [INFO] Running...
[2025-04-02, 19:16:21 IST] {subprocess.py:93} INFO - [INFO] Running...
[2025-04-02, 19:16:23 IST] {subprocess.py:93} INFO - [INFO] Running...
[2025-04-02, 19:16:25 IST] {subprocess.py:93} INFO - [INFO] Running...
</code></pre>
|
<python><airflow><sigterm>
|
2025-04-02 13:49:47
| 0
| 1,001
|
Arud Seka Berne S
|
79,550,761
| 1,159,548
|
Cognite Data Fusion Python SDK - how to delete edges that lead to a node with a specific target_edge_id?
|
<p>I am using the <a href="https://cognite-sdk-python.readthedocs-hosted.com/en/latest/data_modeling.html#delete-instances" rel="nofollow noreferrer">Cognite Data Fusion Python SDK</a>.</p>
<p>While deleting a node, e.g....</p>
<pre class="lang-py prettyprint-override"><code>client.data_modeling.instances.delete(NodeId(space='some-space', external_id=deleted_node_id))
</code></pre>
<p>I want to also delete the edge(s) from the parent in order to avoid leaving orphan edges in the database.</p>
<p><strong>Problem</strong>: how can I look-up such edges?</p>
<p>I know how to get all edges (and then filter by the <code>end_node</code> attribute), but that would be very inefficient:</p>
<pre class="lang-py prettyprint-override"><code>edges = client.data_modeling.instances.list(
instance_type="edge", limit=-1)
</code></pre>
<p>How could I construct the <code>Filter</code> for the <code>end_node</code> (to match the <code>deleted_node_id</code>) and ideally the <code>type</code> (to equal <code>Child</code> type)? E.g.</p>
<pre class="lang-py prettyprint-override"><code>edges = client.data_modeling.instances.list(
instance_type="edge",
filter=And(Equal(???), Equal(???)),
limit=-1)
</code></pre>
<p>Because then I could just delete all those edges with one <code>client.data_modeling.instances.delete(edges)</code> call.</p>
|
<python>
|
2025-04-02 13:25:28
| 2
| 1,561
|
Jan Dolejsi
|
79,550,724
| 4,451,521
|
Error in calculating bbox coordinate Y when uploading a background image
|
<p>I am writing a script that loads an image and we can draw bboxes on it by pressing the key B and then dragging the mouse.
However for some reason the calculation of the bboxes coordinates (specially the Y coordinate) gets wrong.</p>
<p>I have managed to get a minimal reproducible code. Here it is</p>
<pre><code>import sys
import os
from PyQt6.QtWidgets import (
QApplication,
QGraphicsView,
QGraphicsScene,
QGraphicsRectItem,
QPushButton,
QVBoxLayout,
QWidget,
QHBoxLayout,
QCheckBox,
QFileDialog,
)
from PyQt6.QtGui import QPen, QPixmap, QPainter, QColor
from PyQt6.QtCore import Qt, QEvent, QRectF, QPointF
class RectangleItem(QGraphicsRectItem):
def __init__(self, rect, parent=None):
super().__init__(rect, parent)
self.setPen(QPen(Qt.GlobalColor.green, 1, Qt.PenStyle.DashLine))
self.setFlags(
QGraphicsRectItem.GraphicsItemFlag.ItemIsMovable
| QGraphicsRectItem.GraphicsItemFlag.ItemIsSelectable
| QGraphicsRectItem.GraphicsItemFlag.ItemIsFocusable
)
class StickerApp(QWidget):
def __init__(self):
super().__init__()
self.setWindowTitle("Bounding Box Debugger")
self.resize(800, 600)
self.view = QGraphicsView()
self.scene = QGraphicsScene()
self.view.setScene(self.scene)
self.view.viewport().installEventFilter(self)
self.save_bboxes_checkbox = QCheckBox("Save Bounding Boxes")
self.save_bboxes_checkbox.setChecked(False)
self.save_button = QPushButton("Save Image")
self.save_button.clicked.connect(self.save_image)
button_layout = QVBoxLayout()
button_layout.addWidget(self.save_bboxes_checkbox)
button_layout.addWidget(self.save_button)
main_layout = QHBoxLayout()
main_layout.addWidget(self.view)
main_layout.addLayout(button_layout)
self.setLayout(main_layout)
self.current_mode = "select" # 'select' or 'draw'
self.drawing = False
self.start_pos = None
self.current_rect = None
self.bg_pixmap_item = None
self.load_background_image()
# self.draw_triangle() #<--- THIS SEEMS TO WORK!
def load_background_image(self):
file_path = "./HarpAkademileriTunnel.jpg"
if file_path:
self.input_folder = os.path.dirname(file_path)
self.loaded_image_path = file_path
pixmap = QPixmap(file_path)
if self.bg_pixmap_item:
self.scene.removeItem(self.bg_pixmap_item)
self.bg_pixmap_item = self.scene.addPixmap(pixmap)
self.bg_pixmap_item.setZValue(-1)
def draw_triangle(self):
pixmap = QPixmap(800, 600)
pixmap.fill(Qt.GlobalColor.white)
painter = QPainter(pixmap)
painter.setBrush(QColor(0, 0, 255))
painter.drawEllipse(QPointF(300, 400), 50, 50)
painter.setBrush(QColor(255, 0, 0)) # Red triangle
painter.drawPolygon(
[
QPointF(200, 100), # Top vertex
QPointF(100, 300), # Bottom left
QPointF(300, 300), # Bottom right
]
)
painter.end()
self.bg_pixmap_item = self.scene.addPixmap(pixmap)
self.bg_pixmap_item.setZValue(-1) # Keep background at bottom
def eventFilter(self, obj, event):
if obj is self.view.viewport() and self.current_mode == "draw":
if (
event.type() == QEvent.Type.MouseButtonPress
and event.button() == Qt.MouseButton.LeftButton
):
self.start_pos = self.view.mapToScene(event.pos())
self.drawing = True
self.current_rect = RectangleItem(
QRectF(self.start_pos, self.start_pos)
)
self.scene.addItem(self.current_rect)
return True
elif event.type() == QEvent.Type.MouseMove and self.drawing:
end_pos = self.view.mapToScene(event.pos())
self.current_rect.setRect(QRectF(self.start_pos, end_pos).normalized())
return True
elif event.type() == QEvent.Type.MouseButtonRelease and self.drawing:
self.drawing = False
return True
return super().eventFilter(obj, event)
def keyPressEvent(self, event):
if event.key() == Qt.Key.Key_B:
self.current_mode = "draw" if self.current_mode == "select" else "select"
self.view.viewport().setCursor(
Qt.CursorShape.CrossCursor
if self.current_mode == "draw"
else Qt.CursorShape.ArrowCursor
)
elif event.key() == Qt.Key.Key_Delete:
for item in self.scene.selectedItems():
self.scene.removeItem(item)
else:
super().keyPressEvent(event)
def save_image(self):
file_path, _ = QFileDialog.getSaveFileName(
self, "Save Image", "output.png", "PNG Files (*.png)"
)
if file_path:
image = self.view.grab()
image_width, image_height = image.width(), image.height()
print(f"Image w h {image_width} {image_height}")
image.save(file_path)
if self.save_bboxes_checkbox.isChecked():
bbox_data = []
scene_rect = self.scene.sceneRect()
scene_width, scene_height = scene_rect.width(), scene_rect.height()
print(f"Scene w h {scene_width} {scene_height}")
for item in self.scene.items():
if isinstance(item, RectangleItem):
rect = item.rect()
# norm_x = rect.x() / scene_width
# norm_y = rect.y() / scene_height
norm_x = (rect.x() + rect.width() / 2) / scene_width
norm_y = (rect.y() + rect.height() / 2) / scene_height
norm_w = rect.width() / scene_width
norm_h = rect.height() / scene_height
bbox_data.append(
f"0 {norm_x:.6f} {norm_y:.6f} {norm_w:.6f} {norm_h:.6f}"
)
x_center = (rect.x() + rect.width() / 2) / image_width
y_center = (rect.y() + rect.height() / 2) / image_height
norm_width = rect.width() / image_width
norm_height = rect.height() / image_height
bbox_data.append(
f"1 {x_center:.6f} {y_center:.6f} {norm_width:.6f} {norm_height:.6f}"
)
with open(file_path.replace(".png", ".txt"), "w") as f:
f.write("\n".join(bbox_data))
if __name__ == "__main__":
app = QApplication(sys.argv)
window = StickerApp()
window.show()
sys.exit(app.exec())
</code></pre>
<p>To run it, just run python on it (I use a image, so you must replace this with a jpg you have) , then press B and draw the image. check the box and save the image.</p>
<p>If you do this, you will get an image and a text file. Then with this simple script</p>
<pre><code>import cv2
image_path = "./debu2.png"
txt_path = "./debu2.txt"
colors = [ (0, 255, 0),(255, 0, 0)]
image = cv2.imread(image_path)
if image is None:
print("Error: Could not load image")
exit()
img_height, img_width = image.shape[:2]
print(f"({img_width},{img_height})")
# Read and process bounding box file
with open(txt_path, 'r') as f:
for line in f:
parts = line.strip().split()
if len(parts) != 5:
continue
# Parse values
class_id = parts[0]
c = int (class_id)
x_center = float(parts[1]) * img_width
y_center = float(parts[2]) * img_height
width = float(parts[3]) * img_width
height = float(parts[4]) * img_height
# Convert from center coordinates to corner coordinates
x1 = int(x_center - width/2)
y1 = int(y_center - height/2)
x2 = int(x_center + width/2)
y2 = int(y_center + height/2)
print(f"{x1},{y1},{x2},{y2}")
# Draw rectangle and label
cv2.rectangle(image, (x1, y1), (x2, y2), colors[c], 2)
cv2.putText(image, class_id, (x1, y1-5),
cv2.FONT_HERSHEY_SIMPLEX, 0.5, colors[c], 2)
# Display result
cv2.imshow("Bounding Box Visualization", image)
cv2.waitKey(0)
cv2.destroyAllWindows()
</code></pre>
<p>I get</p>
<p><a href="https://i.sstatic.net/wKmaTSY8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wKmaTSY8.png" alt="enter image description here" /></a></p>
<p>The results are the blue ones. You obviously see it is wrong.</p>
<p>However if in the first script you comment the <code>self.load_background_image()</code> and uncomment the <code>self.draw_triangle() </code> and run the same, you will see that this time it works.</p>
<p><a href="https://i.sstatic.net/5I3JULHO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5I3JULHO.png" alt="enter image description here" /></a></p>
<p>As you can see the blue bboxes work here.</p>
<p>How can I correct the part where an image is loaded?</p>
<p>The original image is</p>
<p><a href="https://i.sstatic.net/U1C9PeED.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/U1C9PeED.jpg" alt="enter image description here" /></a></p>
<p>I can see clearly that the script is modifying the size of the images which is undesirable. How can I correct this script</p>
|
<python><pyqt>
|
2025-04-02 13:10:53
| 0
| 10,576
|
KansaiRobot
|
79,550,639
| 4,247,599
|
pandas 2.2.3: `astype(str)` returns `np.str_`
|
<p>Latest pandas version casts types into <code>np</code> types. To cast a series of integer to strings I thought <code>astype(str)</code> would have been enough:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import numpy as np
list_of_str = list(pd.Series([1234, 123, 345]).to_frame()[0].unique().astype(str))
list_of_str
</code></pre>
<p>returns <code>[np.str_('1234'), np.str_('123'), np.str_('345')]</code>.</p>
<p>And also</p>
<pre class="lang-py prettyprint-override"><code>list_of_str = list(pd.Series([1234, 123, 345]).to_frame()[0].unique().astype(np.str_))
list_of_str
</code></pre>
<p>returns <code>[np.str_('1234'), np.str_('123'), np.str_('345')]</code>.</p>
<p>Is there an efficient way to cast to python type string, that does not require list comprehension, like:</p>
<pre class="lang-py prettyprint-override"><code>list_of_str = [str(s) for s in list_of_str]
list_of_str
</code></pre>
<p>that finally returns <code>['1234', '123', '345']</code>
?</p>
|
<python><pandas>
|
2025-04-02 12:36:45
| 2
| 4,299
|
SeF
|
79,550,627
| 3,801,839
|
How do I select a recursive entity in sqlalchemy
|
<p>I am saving a graph to a database. It has a structure like:</p>
<pre class="lang-py prettyprint-override"><code>class SuperNode(Base):
__tablename__ = "supernodes"
id = Column(Integer, primary_key=True)
name = Column(String, nullable=False)
nodes = relationship("Node", back_populates="supernode")
class Node(Base):
__tablename__ = "nodes"
id = Column(Integer, primary_key=True)
name = Column(String, nullable=False)
supernode_id = Column(Integer, ForeignKey("supernodes.id"), nullable=False)
sub_node_id = Column(Integer, ForeignKey("supernodes.id"), nullable=True)
supernode = relationship("SuperNode", back_populates="nodes")
sub_node = relationship("SuperNode", foreign_keys=[sub_node_id])
children = relationship(
"Node",
secondary=association_table,
primaryjoin=id == association_table.c.parent_id,
secondaryjoin=id == association_table.c.child_id,
backref="parents"
)
association_table = Table(
"parent_child_association",
Base.metadata,
Column("parent_id", Integer, ForeignKey("nodes.id"), primary_key=True),
Column("child_id", Integer, ForeignKey("nodes.id"), primary_key=True),
)
</code></pre>
<p>FYI: I am using sqlalchemy 2.0 with aiosqlite</p>
<p>Now, I have trouble creating a statement that will get be a SuperNode by id, and then the entire hierarchy of <code>nodes</code> their <code>sub_node</code>s the <code>nodes</code> of those... etc. as well as all children of every node, their subnodes their nodes, their children, etc. so I get everything in the hierarchy of a SuperNode when I select it. I've been playing with CTE's, but with that I can end up getting tonnes of rows, and I have to rebuild the structure (which is acceptable as plan B, but I still have issues getting <code>SuperNode.nodes.sub_node.nodes</code> and so on), the other option would be using selectinload, but that will get me only a fixed number of layers, which is just not good enough.</p>
<p>CTE's seems like a way to do this, perhaps the best way? But I can't quite seem to understand how to use them in a case like this, and I have not been able to find any good examples.</p>
<p>Just a small EDIT: I have a solution that uses recursion in python coupled with lazyloading to do this, but I do <em>not</em> want to used lazyloading. Any sort of depth and it takes seconds to make a query.</p>
<p>So if you can tell me how, what, and why, I would be very happy indeed :D</p>
|
<python><sqlalchemy><aiosqlite>
|
2025-04-02 12:28:26
| 1
| 3,223
|
rasmus91
|
79,550,255
| 10,202,292
|
Pandas - custom duplicate comparison method
|
<p>I have browsed multiple other related questions about Pandas, and read documentation for <code>groupby</code> and <code>duplicated</code>, but I cannot seem to find a way to fit all the pieces together in Pandas. I could do this by iterating over the rows in my CSV multiple times and doing pairwise comparison, but it seems like I should be able to do it in Pandas.</p>
<p>Essentially, I would like to define my own function for comparing two rows and deciding if they are a 'duplicate'. If two rows are 'duplicates' of each other, then merge by keeping columns 0 and 1 of the first row, and use <code>sum()</code> on the rest of the columns.</p>
<p>I have a CSV file that has entries something like the following:</p>
<pre><code> English Spanish Count AF ... # other numerical columns
0 'hello' 'hola' 23 0
1 'helo' 'hola' 2 1
2 'hello' 'hola_a' 1 0
3 'hallo' 'a_aureola' 1 1
...
</code></pre>
<p>I would like to consider rows as 'duplicated' based on these criteria:</p>
<ul>
<li>If the Levenshtein string edit distance between two rows' English entries is below a threshold, and the Spanish is an exact match, then they are duplicates
<ul>
<li>e.g. rows 0 and 1 have an English edit distance of 1, and the Spanish is an exact match</li>
</ul>
</li>
<li>If the English of two rows is an exact match, and the Spanish entries have a non-zero overlap when split into lists on <code>_</code>, then they are duplicates
<ul>
<li>e.g. rows 0 and 2 match in English, and row0 <code>'hola'.split('_') -> ['hola']</code> has an overlap with row2 <code>'hola_a'.split('_') -> ['hola','a']</code> since both lists have <code>'hola'</code></li>
</ul>
</li>
<li><strong>However</strong>, if neither the English nor the Spanish is an exact match, then they ar enot duplicates
<ul>
<li>e.g. row 2 and row 3 should not be counted as duplicates, even though the English edit distance is 1, and the Spanish lists overlap when split on <code>_</code></li>
</ul>
</li>
</ul>
<p>The duplicates should then be merged, with the non-language column entries added up. (I think I could figure this part out with <code>groupby</code>, the main difficulty is getting the duplicates.) The final output should look like:</p>
<pre><code> English Spanish Count AF ... # other numerical columns
0 'hello' 'hola' 26 1
1 'hallo' 'a_aureola' 1 1
</code></pre>
|
<python><pandas>
|
2025-04-02 09:31:05
| 1
| 403
|
tigerninjaman
|
79,550,092
| 8,622,814
|
Time limit exceeded on Leetcode 128 even for optimal time complexity
|
<p>I was attempting LeetCode question <a href="https://leetcode.com/problems/longest-consecutive-sequence/description/" rel="noreferrer">128. Longest Consecutive Sequence</a>:</p>
<blockquote>
<p>Given an unsorted array of integers <code>nums</code>, return <em>the length of the longest consecutive elements sequence</em>.</p>
<p>You must write an algorithm that runs in <code>O(n)</code> time.</p>
<h3>Example 1:</h3>
<p><strong>Input:</strong> <code>nums = [100,4,200,1,3,2]</code><br>
<strong>Output:</strong> <code>4</code><br>
<strong>Explanation:</strong> The longest consecutive elements sequence is <code>[1, 2, 3, 4]</code>. Therefore its length is <code>4</code>.</p>
<h3>Constraints:</h3>
<ul>
<li><code>0 <= nums.length <= 10<sup>5</sup></code></li>
<li><code>-10<sup>9</sup> <= nums[i] <= 10<sup>9</sup></code></li>
</ul>
</blockquote>
<p>My first attempt did sorting followed by counting. This has O(nlogn) time complexity, but it surprisingly gave me 93.93% percentile for time complexity (40ms).</p>
<p>I then re-read the question and realised the answer must be in O(n) time complexity. So I wrote the following code:</p>
<pre><code>def longestConsecutive(self, nums: List[int]) -> int:
s = set(nums)
longest_streak = 0
for num in nums:
if (num - 1) not in s:
current_streak = 1
while (num + 1) in s:
num += 1
current_streak += 1
longest_streak = max(longest_streak, current_streak)
return longest_streak
</code></pre>
<p>(I know, it's not a great practice to reuse num variable name in the nested loop, but that's beside the point of the question, I have tested using a separate variable as well with the same result as below)</p>
<p>While this should theoretically be O(n) time complexity, faster than my first solution, this actually resulted in time limit exceeded for a couple of the cases and my code was rejected.</p>
<p>I eventually submitted a passing solution after referring to the solution</p>
<pre><code>class Solution:
def longestConsecutive(self, nums: List[int]) -> int:
nums = set(nums)
longest_streak = 0
for num in nums:
if (num - 1) not in nums:
next_num = num + 1
while next_num in nums:
next_num += 1
longest_streak = max(longest_streak, next_num - num)
return longest_streak
</code></pre>
<p>where I identified 2 key differences:</p>
<ol>
<li>I reassigned nums to a set in-place instead of a new variable</li>
<li>I used next_num instead of keeping a current_streak variable</li>
</ol>
<p>However, both of these changes does not seem like it should have significant impact on the runtime, enough to cross the line between time-limit exceeded into a passing solution. To puzzle me even more, this O(n) solution still performed worse than my sorting solution, ranking only at 75.73% percentile (46 ms).</p>
<p>So my questions are:</p>
<ol>
<li>Why does a O(nlogn) algorithm perform faster than O(n) in practice?</li>
<li>Why is my first O(n) algorithm so slow that it reached time-limit exceeded while my second algorithm with minimal changes could pass?</li>
</ol>
|
<python><python-3.x><algorithm><sorting><time-complexity>
|
2025-04-02 08:28:26
| 3
| 2,018
|
Samson
|
79,549,881
| 17,729,094
|
How to resample a dataset to achieve a uniform distribution
|
<p>I have a dataset with a schema like:</p>
<pre class="lang-py prettyprint-override"><code>df = pl.DataFrame(
{
"target": [
[1.0, 1.0, 0.0],
[1.0, 1.0, 0.1],
[1.0, 1.0, 0.2],
[1.0, 1.0, 0.8],
[1.0, 1.0, 0.9],
[1.0, 1.0, 1.0],
],
"feature": ["a", "b", "c", "d", "e", "f"],
},
schema={
"target": pl.Array(pl.Float32, 3),
"feature": pl.String,
},
)
</code></pre>
<p>If I make a histogram of the target-z values it looks like:
<a href="https://i.sstatic.net/0bMhF5HC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0bMhF5HC.png" alt="original" /></a></p>
<p>I want to resample the data so its flat along z.</p>
<p>I managed to do it in a hacky-many-steps way (also very slow). I was wondering if people could suggest a cleaner (and more efficient) way?</p>
<p>What I am doing is:</p>
<ol>
<li>Find the bin edges of said histogram:</li>
</ol>
<pre class="lang-py prettyprint-override"><code>bins = 2 # Use e.g. 100 or larger in reality
z = df.select(z=pl.col("target").arr.get(2))
z_min = z.min()
z_max = z.max()
breaks = np.linspace(z_min, z_max, num=bins+1)
</code></pre>
<ol start="2">
<li>Find how many counts are in the bin with the fewest counts:</li>
</ol>
<pre class="lang-py prettyprint-override"><code>counts = (
df.with_columns(bin=pl.col("target").arr.get(2).cut(breaks))
.with_columns(counter=pl.int_range(pl.len()).over("bin"))
.group_by("bin")
.agg(pl.col("counter").max())
.filter(pl.col("counter") > 0) # <- Nasty way of filtering the (-inf, min] bin
.select(pl.col("counter").min())
).item()
</code></pre>
<ol start="3">
<li>Choose only "count" elements on each bin:</li>
</ol>
<pre class="lang-py prettyprint-override"><code>df = (
df.with_columns(bin=pl.col("target").arr.get(2).cut(breaks))
.with_columns(counter=pl.int_range(pl.len()).over("bin"))
.filter(pl.col("counter") <= counts)
.select("target", "feature")
)
</code></pre>
<p>This gives me:
<a href="https://i.sstatic.net/tCTfPUyf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tCTfPUyf.png" alt="flat" /></a></p>
<p>Do people have any suggestions?</p>
|
<python><dataframe><python-polars>
|
2025-04-02 06:46:59
| 1
| 954
|
DJDuque
|
79,549,858
| 1,719,931
|
Mirror SQL Server to DuckDB local database with SQLAlchemy: constraint not implemented
|
<p>I'm trying to mirror a remote SQL Server database into a local DuckDB database.</p>
<p>Here is my code:</p>
<pre class="lang-py prettyprint-override"><code># Connect to remote db
username = "USERNAME"
hostip = "IPADD"
port = "PORT"
dbname = "DBNAME"
user_domain = "USER_DOMAIN"
# See: https://stackoverflow.com/a/58858572/1719931
eng_str = rf"mssql+pymssql://{user_domain}\{username}:{password}@{hostip}/{dbname}"
# Establish connection
engine_remote = create_engine(eng_str, echo=False)
# Test that connection to remote database works
with Session(engine_remote) as session:
print(session.execute(text("SELECT 'hello world'")).fetchall())
# Local db
dbfp = Path("localdb.duckdb")
engine_local = create_engine(f"duckdb:///{dbfp}", echo=False)
# To reflect the metadata of a database with ORM we need the "Automap" extension
# Automap is an extension to the sqlalchemy.ext.declarative system which automatically generates
# mapped classes and relationships from a database schema,
# typically though not necessarily one which is reflected.
# See: https://docs.sqlalchemy.org/en/20/orm/extensions/automap.html
Base = automap_base()
# See ORM documentation on intercepting column definitions: https://docs.sqlalchemy.org/en/20/orm/extensions/automap.html#intercepting-column-definitions
@event.listens_for(Base.metadata, "column_reflect")
def genericize_datatypes(inspector, tablename, column_dict):
# Convert dialect specific column types to SQLAlchemy agnostic types
# See: https://stackoverflow.com/questions/79496414/convert-tinyint-to-int-when-mirroring-microsoft-sql-server-to-local-sqlite-with
# See Core documentation on reflecting with database-agnostic types: https://docs.sqlalchemy.org/en/20/core/reflection.html#reflecting-with-database-agnostic-types
old_type = column_dict['type']
column_dict["type"] = column_dict["type"].as_generic()
# We have to remove collation when mirroring a Microsoft SQL server into SQLite
# See: https://stackoverflow.com/a/59328211/1719931
if getattr(column_dict["type"], "collation", None) is not None:
column_dict["type"].collation = None
# Print debug info
if not isinstance(column_dict['type'], type(old_type)):
print(f"Genericizing `{column_dict['name']}` of type `{str(old_type)}` into `{column_dict['type']}`")
# Load Base with remote DB metadata
Base.prepare(autoload_with=engine_remote)
Base.metadata.create_all(engine_local)
</code></pre>
<p>But I'm getting this error:</p>
<pre class="lang-none prettyprint-override"><code>---------------------------------------------------------------------------
NotImplementedException Traceback (most recent call last)
File PYTHONPATH\Lib\site-packages\sqlalchemy\engine\base.py:1964, in Connection._exec_single_context(self, dialect, context, statement, parameters)
1963 if not evt_handled:
-> 1964 self.dialect.do_execute(
1965 cursor, str_statement, effective_parameters, context
1966 )
1968 if self._has_events or self.engine._has_events:
File PYTHONPATH\Lib\site-packages\sqlalchemy\engine\default.py:942, in DefaultDialect.do_execute(self, cursor, statement, parameters, context)
941 def do_execute(self, cursor, statement, parameters, context=None):
--> 942 cursor.execute(statement, parameters)
File PYTHONPATH\Lib\site-packages\duckdb_engine\__init__.py:150, in CursorWrapper.execute(self, statement, parameters, context)
149 else:
--> 150 self.__c.execute(statement, parameters)
151 except RuntimeError as e:
NotImplementedException: Not implemented Error: Constraint not implemented!
The above exception was the direct cause of the following exception:
NotSupportedError Traceback (most recent call last)
Cell In[25], line 1
----> 1 Base.metadata.create_all(engine_local)
File PYTHONPATH\Lib\site-packages\sqlalchemy\sql\schema.py:5907, in MetaData.create_all(self, bind, tables, checkfirst)
5883 def create_all(
5884 self,
5885 bind: _CreateDropBind,
5886 tables: Optional[_typing_Sequence[Table]] = None,
5887 checkfirst: bool = True,
5888 ) -> None:
5889 """Create all tables stored in this metadata.
5890
5891 Conditional by default, will not attempt to recreate tables already
(...) 5905
5906 """
-> 5907 bind._run_ddl_visitor(
5908 ddl.SchemaGenerator, self, checkfirst=checkfirst, tables=tables
5909 )
File PYTHONPATH\Lib\site-packages\sqlalchemy\engine\base.py:3249, in Engine._run_ddl_visitor(self, visitorcallable, element, **kwargs)
3242 def _run_ddl_visitor(
3243 self,
3244 visitorcallable: Type[Union[SchemaGenerator, SchemaDropper]],
3245 element: SchemaItem,
3246 **kwargs: Any,
3247 ) -> None:
3248 with self.begin() as conn:
-> 3249 conn._run_ddl_visitor(visitorcallable, element, **kwargs)
File PYTHONPATH\Lib\site-packages\sqlalchemy\engine\base.py:2456, in Connection._run_ddl_visitor(self, visitorcallable, element, **kwargs)
2444 def _run_ddl_visitor(
2445 self,
2446 visitorcallable: Type[Union[SchemaGenerator, SchemaDropper]],
2447 element: SchemaItem,
2448 **kwargs: Any,
2449 ) -> None:
2450 """run a DDL visitor.
2451
2452 This method is only here so that the MockConnection can change the
2453 options given to the visitor so that "checkfirst" is skipped.
2454
2455 """
-> 2456 visitorcallable(self.dialect, self, **kwargs).traverse_single(element)
File PYTHONPATH\Lib\site-packages\sqlalchemy\sql\visitors.py:664, in ExternalTraversal.traverse_single(self, obj, **kw)
662 meth = getattr(v, "visit_%s" % obj.__visit_name__, None)
663 if meth:
--> 664 return meth(obj, **kw)
File PYTHONPATH\Lib\site-packages\sqlalchemy\sql\ddl.py:978, in SchemaGenerator.visit_metadata(self, metadata)
976 for table, fkcs in collection:
977 if table is not None:
--> 978 self.traverse_single(
979 table,
980 create_ok=True,
981 include_foreign_key_constraints=fkcs,
982 _is_metadata_operation=True,
983 )
984 else:
985 for fkc in fkcs:
File PYTHONPATH\Lib\site-packages\sqlalchemy\sql\visitors.py:664, in ExternalTraversal.traverse_single(self, obj, **kw)
662 meth = getattr(v, "visit_%s" % obj.__visit_name__, None)
663 if meth:
--> 664 return meth(obj, **kw)
File PYTHONPATH\Lib\site-packages\sqlalchemy\sql\ddl.py:1016, in SchemaGenerator.visit_table(self, table, create_ok, include_foreign_key_constraints, _is_metadata_operation)
1007 if not self.dialect.supports_alter:
1008 # e.g., don't omit any foreign key constraints
1009 include_foreign_key_constraints = None
1011 CreateTable(
1012 table,
1013 include_foreign_key_constraints=(
1014 include_foreign_key_constraints
1015 ),
-> 1016 )._invoke_with(self.connection)
1018 if hasattr(table, "indexes"):
1019 for index in table.indexes:
File PYTHONPATH\Lib\site-packages\sqlalchemy\sql\ddl.py:314, in ExecutableDDLElement._invoke_with(self, bind)
312 def _invoke_with(self, bind):
313 if self._should_execute(self.target, bind):
--> 314 return bind.execute(self)
File PYTHONPATH\Lib\site-packages\sqlalchemy\engine\base.py:1416, in Connection.execute(self, statement, parameters, execution_options)
1414 raise exc.ObjectNotExecutableError(statement) from err
1415 else:
-> 1416 return meth(
1417 self,
1418 distilled_parameters,
1419 execution_options or NO_OPTIONS,
1420 )
File PYTHONPATH\Lib\site-packages\sqlalchemy\sql\ddl.py:180, in ExecutableDDLElement._execute_on_connection(self, connection, distilled_params, execution_options)
177 def _execute_on_connection(
178 self, connection, distilled_params, execution_options
179 ):
--> 180 return connection._execute_ddl(
181 self, distilled_params, execution_options
182 )
File PYTHONPATH\Lib\site-packages\sqlalchemy\engine\base.py:1527, in Connection._execute_ddl(self, ddl, distilled_parameters, execution_options)
1522 dialect = self.dialect
1524 compiled = ddl.compile(
1525 dialect=dialect, schema_translate_map=schema_translate_map
1526 )
-> 1527 ret = self._execute_context(
1528 dialect,
1529 dialect.execution_ctx_cls._init_ddl,
1530 compiled,
1531 None,
1532 exec_opts,
1533 compiled,
1534 )
1535 if self._has_events or self.engine._has_events:
1536 self.dispatch.after_execute(
1537 self,
1538 ddl,
(...) 1542 ret,
1543 )
File PYTHONPATH\Lib\site-packages\sqlalchemy\engine\base.py:1843, in Connection._execute_context(self, dialect, constructor, statement, parameters, execution_options, *args, **kw)
1841 return self._exec_insertmany_context(dialect, context)
1842 else:
-> 1843 return self._exec_single_context(
1844 dialect, context, statement, parameters
1845 )
File PYTHONPATH\Lib\site-packages\sqlalchemy\engine\base.py:1983, in Connection._exec_single_context(self, dialect, context, statement, parameters)
1980 result = context._setup_result_proxy()
1982 except BaseException as e:
-> 1983 self._handle_dbapi_exception(
1984 e, str_statement, effective_parameters, cursor, context
1985 )
1987 return result
File PYTHONPATH\Lib\site-packages\sqlalchemy\engine\base.py:2352, in Connection._handle_dbapi_exception(self, e, statement, parameters, cursor, context, is_sub_exec)
2350 elif should_wrap:
2351 assert sqlalchemy_exception is not None
-> 2352 raise sqlalchemy_exception.with_traceback(exc_info[2]) from e
2353 else:
2354 assert exc_info[1] is not None
File PYTHONPATH\Lib\site-packages\sqlalchemy\engine\base.py:1964, in Connection._exec_single_context(self, dialect, context, statement, parameters)
1962 break
1963 if not evt_handled:
-> 1964 self.dialect.do_execute(
1965 cursor, str_statement, effective_parameters, context
1966 )
1968 if self._has_events or self.engine._has_events:
1969 self.dispatch.after_cursor_execute(
1970 self,
1971 cursor,
(...) 1975 context.executemany,
1976 )
File PYTHONPATH\Lib\site-packages\sqlalchemy\engine\default.py:942, in DefaultDialect.do_execute(self, cursor, statement, parameters, context)
941 def do_execute(self, cursor, statement, parameters, context=None):
--> 942 cursor.execute(statement, parameters)
File PYTHONPATH\Lib\site-packages\duckdb_engine\__init__.py:150, in CursorWrapper.execute(self, statement, parameters, context)
148 self.__c.execute(statement)
149 else:
--> 150 self.__c.execute(statement, parameters)
151 except RuntimeError as e:
152 if e.args[0].startswith("Not implemented Error"):
NotSupportedError: (duckdb.duckdb.NotImplementedException) Not implemented Error: Constraint not implemented!
[SQL:
CREATE TABLE sysdiagrams (
name VARCHAR(128) NOT NULL,
principal_id INTEGER NOT NULL,
diagram_id INTEGER GENERATED BY DEFAULT AS IDENTITY (INCREMENT BY 1 START WITH 1),
version INTEGER,
definition BYTEA,
CONSTRAINT "PK__sysdiagr__C2B05B613750874A" PRIMARY KEY (diagram_id)
)
]
(Background on this error at: https://sqlalche.me/e/20/tw8g)
</code></pre>
<p>It looks like the primary key constraint is not implemented in DuckDB?</p>
<p>I used to use SQLite and it worked, but now I'm trying to switch to DuckDB.</p>
|
<python><sql-server><sqlalchemy><duckdb><database-mirroring>
|
2025-04-02 06:36:27
| 0
| 5,202
|
robertspierre
|
79,549,767
| 1,306,020
|
Create a new file, moving an existing file out of the way if needed
|
<p>What is the best way to create a new file in Python, moving if needed an existing file with the same name to a different path?</p>
<p>While you could do</p>
<pre><code>if os.path.exists(name_of_file):
os.move(name_of_file, backup_name)
f = open(name_of_file, "w")
</code></pre>
<p>that has TOCTOU issues (e.g. multiple processes could try and create the file at the same time). Can I avoid those issues using only the standard library (preferably), or is there a package which handles this.</p>
<p>You can assume a POSIX file system.</p>
|
<python><file><unix><posix><atomic>
|
2025-04-02 05:40:49
| 2
| 565
|
James Tocknell
|
79,549,451
| 18,091,627
|
Problems downloading files with Selenium + Scrapy
|
<p>This code is supposed to download some documents it most locate within series of given links.
While is does seemingly locate the link of the pdf file, its failing to download it. What might be the problem?</p>
<pre><code>class DownloaderSpider(scrapy.Spider):
def __init__(self, *args, **kwargs):
super(DownloaderSpider, self).__init__(*args, **kwargs)
# Configure Chrome WebDriver with download preferences
options = webdriver.ChromeOptions()
prefs = {
"download.default_directory": "c:\\Users\\marti\\Downloads\\Web Scraper\\downloads",
"download.prompt_for_download": False,
"plugins.always_open_pdf_externally": True
}
options.add_experimental_option("prefs", prefs)
self.driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()), options=options)
def parse(self, response):
query = response.meta['query']
summary = response.meta['summary']
date = response.meta['date']
deadline = response.meta['deadline']
pdf_link = None
self.driver.get(response.url)
# Wait until the document is fully loaded
WebDriverWait(self.driver, 15).until(
lambda driver: driver.execute_script("return document.readyState") == "complete"
)
if not self.wait_for_stability():
self.log("Page did not stabilize in time.")
return
response = HtmlResponse(url=self.driver.current_url, body=self.driver.page_source, encoding='utf-8')
elements = response.xpath("//tr | //div[not(div)]")
self.log(f"Found {len(elements)} elements containing text.")
best_match = None
highest_score = 0
for element in elements:
element_text = element.xpath("string(.)").get().strip()
score = fuzz.partial_ratio(summary.lower(), element_text.lower())
# Accept element if it contains a matching date (or deadline) in any format
if score > highest_score and (self.contains_matching_date(element_text, date) or self.contains_matching_date(element_text, deadline)):
highest_score = score
best_match = element
if best_match and highest_score >= 0: # Adjust threshold as needed
self.log(f"Best match found with score {highest_score}")
pdf_link = best_match.xpath(".//a[contains(@href, '.pdf')]/@href").get()
if pdf_link:
self.log(f"Found PDF link: {pdf_link}")
if pdf_link:
pdf_link = response.urljoin(pdf_link)
try:
# Use Selenium to click the PDF link and trigger the download
pdf_element = WebDriverWait(self.driver, 10).until(
EC.element_to_be_clickable((By.XPATH, f"//a[contains(@href, '{pdf_link.split('/')[-1]}')]"))
)
pdf_element.click()
# Wait for the file to appear in the download directory
download_dir = "c:\\Users\\marti\\Downloads\\Web Scraper\\downloads"
local_filename = query.replace(' ', '_') + ".pdf"
local_filepath = os.path.join(download_dir, local_filename)
timeout = 30 # seconds
start_time = time.time()
while not os.path.exists(local_filepath):
if time.time() - start_time > timeout:
raise Exception("Download timed out.")
time.sleep(1)
self.log(f"Downloaded file {local_filepath}")
except Exception as e:
self.log(f"Failed to download file from {pdf_link}: {e}")
else:
self.log("No direct PDF link found, checking for next page link.")
next_page = best_match.xpath(".//a/@href").get() if best_match else None
if next_page:
next_page = response.urljoin(next_page)
self.log(f"Following next page link: {next_page}")
yield scrapy.Request(next_page, self.parse_next_page, meta={'query': query})
def parse_next_page(self, response):
query = response.meta['query']
self.driver.get(response.url)
WebDriverWait(self.driver, 15).until(
lambda driver: driver.execute_script("return document.readyState") == "complete"
)
if not self.wait_for_stability():
self.log("Page did not stabilize in time.")
return
response = HtmlResponse(url=self.driver.current_url, body=self.driver.page_source, encoding='utf-8')
pdf_link = response.xpath("//a[contains(@href, '.pdf')]/@href").get()
if pdf_link:
pdf_link = response.urljoin(pdf_link)
try:
# Use Selenium to click the PDF link and trigger the download
pdf_element = WebDriverWait(self.driver, 10).until(
EC.element_to_be_clickable((By.XPATH, f"//a[contains(@href, '{pdf_link.split('/')[-1]}')]"))
)
pdf_element.click()
# Wait for the file to appear in the download directory
download_dir = "c:\\Users\\marti\\Downloads\\Web Scraper\\downloads"
local_filename = query.replace(' ', '_') + ".pdf"
local_filepath = os.path.join(download_dir, local_filename)
timeout = 30 # seconds
start_time = time.time()
while not os.path.exists(local_filepath):
if time.time() - start_time > timeout:
raise Exception("Download timed out.")
time.sleep(1)
self.log(f"Downloaded file {local_filepath}")
except Exception as e:
self.log(f"Failed to download file from {pdf_link}: {e}")
else:
self.log("No PDF link found on next page.")
</code></pre>
<p>The pipeline goes like this: the code creates and configures the selenium driver, navigates through the links in the .csv document. Locates the most likely entry in the page,
Here is the logs corresponding to the download, if its not a download link navigate to the new page and locate the first download link, then downloads it (or tries to):</p>
<pre><code>2025-03-29 17:31:14 [scrapy.downloadermiddlewares.retry] ERROR: Gave up retrying <GET https://oshanarc.gov.na/procurement> (failed 6 times): [<twisted.python.failure.Failure twisted.internet.error.ConnectionLost: Connection to the other side was lost in a non-clean fashion: Connection lost.>]
2025-03-29 17:31:14 [scrapy.core.scraper] ERROR: Error downloading <GET https://oshanarc.gov.na/procurement>
Traceback (most recent call last):
File "C:\Users\marti\Downloads\Web Scraper\venv\Lib\site-packages\twisted\internet\defer.py", line 2013, in _inlineCallbacks
result = context.run(
cast(Failure, result).throwExceptionIntoGenerator, gen
)
File "C:\Users\marti\Downloads\Web Scraper\venv\Lib\site-packages\twisted\python\failure.py", line 467, in throwExceptionIntoGenerator
return g.throw(self.value.with_traceback(self.tb))
~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\marti\Downloads\Web Scraper\venv\Lib\site-packages\scrapy\core\downloader\middleware.py", line 68, in process_request
return (yield download_func(request, spider))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
twisted.web._newclient.ResponseNeverReceived: [<twisted.python.failure.Failure twisted.internet.error.ConnectionLost: Connection to the other side was lost in a non-clean fashion: Connection lost.>]
2025-03-29 17:31:14 [scrapy.core.engine] INFO: Closing spider (finished)
</code></pre>
<p>Any idea of what might be the isuue?</p>
|
<python><selenium-webdriver><web-scraping><scrapy>
|
2025-04-02 01:03:59
| 0
| 371
|
42WaysToAnswerThat
|
79,549,115
| 16,383,578
|
How to correctly implement the multiplication function used by GHASH function in AES-256-GCM?
|
<p>I am trying to implement AES-256-GCM in Python without using any external libraries, and I have implemented encryption and decryption in AES-256-ECB, AES-256-CBC, AES-256-CTR, and AES-256-OFB modes, and I also implemented Base64 encoding and decoding plus JSON serialization and deserialization, all are working properly and I have verified the results rigorously against various sources.</p>
<p>You can see an earlier version of my work <a href="https://stackoverflow.com/a/79543577/16383578">here</a>, I have implemented the code <a href="https://en.wikipedia.org/wiki/Advanced_Encryption_Standard" rel="nofollow noreferrer">all</a> <a href="https://en.wikipedia.org/wiki/AES_key_schedule" rel="nofollow noreferrer">by</a> <a href="https://medium.com/@chenfelix/advanced-encryption-standard-cipher-5189e5638b81" rel="nofollow noreferrer">myself</a> <a href="https://legacy.cryptool.org/en/cto/aes-step-by-step" rel="nofollow noreferrer">according</a> <a href="https://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.197-upd1.pdf" rel="nofollow noreferrer">to</a> <a href="https://csrc.nist.gov/projects/cryptographic-standards-and-guidelines/example-values" rel="nofollow noreferrer">various</a> <a href="https://en.wikipedia.org/wiki/Base64" rel="nofollow noreferrer">sources</a>.</p>
<p>I plan to add AES-256-CFB and AES-256-PCBC modes, it is extremely trivial for me to implement them at this point, plus I have found <a href="https://github.com/boppreh/aes/blob/master/aes.py" rel="nofollow noreferrer">this</a>, but I am not adding those modes because I am currently trying to implement <a href="https://en.wikipedia.org/wiki/Galois/Counter_Mode" rel="nofollow noreferrer">AES-256-GCM</a> which is my original goal, but I can't implement it. Currently there is no way for me to verify if my results were right.</p>
<p>I have found this <a href="https://nvlpubs.nist.gov/nistpubs/legacy/sp/nistspecialpublication800-38d.pdf" rel="nofollow noreferrer">specification</a> and <a href="https://csrc.nist.rip/groups/ST/toolkit/BCM/documents/proposedmodes/gcm/gcm-spec.pdf" rel="nofollow noreferrer">this</a> and <a href="https://csrc.nist.rip/groups/ST/toolkit/BCM/documents/proposedmodes/gcm/gcm-revised-spec.pdf" rel="nofollow noreferrer">this</a>.</p>
<p>A direct implementation of the algorithm for multiplication used in GHASH function is:</p>
<pre><code>GHASH_POLY = 0xE1000000000000000000000000000000
def ghash_mul(a: int, b: int) -> int:
c = 0
for i in range(127, -1, -1):
if a >> i:
c ^= b
if b & 1:
b = (b >> 1) ^ GHASH_POLY
else:
b >>= 1
return c
</code></pre>
<p>I have found <a href="https://github.com/bozhu/AES-GCM-Python/blob/master/aes_gcm.py" rel="nofollow noreferrer">this</a>, it is easy for me to adapt the code to my needs, but I can't verify its correctness.</p>
<p>Specifically, the multiplication function it uses is the following:</p>
<pre><code>def ghash_mult(x, y):
res = 0
for i in range(127, -1, -1):
res ^= x * ((y >> i) & 1)
x = (x >> 1) ^ ((x & 1) * GHASH_POLY)
return res
</code></pre>
<p>I got rid of the comment and assert blocks as they weren't necessary.</p>
<p>For some inputs, my function agrees with the function from the linked repository, but the outputs aren't the same for all inputs:</p>
<pre><code>In [354]: ghash_mul(255, 255)
Out[354]: 184305769290552241502318497701996581888
In [355]: ghash_mult(255, 255)
Out[355]: 184305769290552241502318497701996581888
In [356]: ghash_mult(12345, 255)
Out[356]: 176413478065579303506952143281585670830
In [357]: ghash_mul(12345, 255)
Out[357]: 184305769290552241502318497702000014804
In [358]: ghash_mul(MAX128, MAX128)
Out[358]: 324345477096475565862205004031948663466
In [359]: ghash_mult(MAX128, MAX128)
Out[359]: 324345477096475565862205004031948663466
In [360]: ghash_mult(999, 999)
Out[360]: 184305769290552241502318497701997396837
In [361]: ghash_mul(999, 999)
Out[361]: 237807196120895105386696731878281261212
</code></pre>
<p>Which one is correct? Or are they both wrong? How can I correctly implement the multiplication function used in GHASH?</p>
|
<python><encryption><cryptography><aes>
|
2025-04-01 20:21:29
| 1
| 3,930
|
Ξένη Γήινος
|
79,549,110
| 4,907,639
|
Huggingface tokenizer: 'str' object has no attribute 'size'
|
<p>I am trying to extract the hidden states of a transformer model:</p>
<pre><code>from transformers import AutoModel
import torch
from transformers import AutoTokenizer
model_ckpt = "distilbert-base-uncased"
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = AutoTokenizer.from_pretrained(model_ckpt)
model = AutoModel.from_pretrained(model_ckpt).to(device)
from datasets import load_dataset
emotions = load_dataset("emotion", ignore_verifications=True)
# tokenize data
def tokenize(batch):
return tokenizer(batch["text"], padding=True, truncation=True)
emotions_encoded = emotions.map(tokenize, batched=True, batch_size=None)
def extract_hidden_states(batch):
inputs = {k:v.to(device) for k,v in batch.items()
if k in tokenizer.model_input_names}
with torch.no_grad():
last_hidden_state = model(*inputs).last_hidden_state
return{"hidden_state": last_hidden_state[:,0].cpu().numpy()}
# convert input_ids and attention_mask columns to "torch" format
emotions_encoded.set_format("torch", columns=["input_ids", "attention_mask", "label"])
# extract hidden states
emotions_hidden = emotions_encoded.map(extract_hidden_states, batched=True)
</code></pre>
<p>However, on running the last line I get the error <code>'str' object has no attribute 'size'</code></p>
<p>I've tried deprecating the <code>transformers</code> package but that didn't fix it. Some posts online indicate it may have to do with the <code>transformer</code> package will return a dictionary by default, but I don't know how to work around that.</p>
<p>Full error:</p>
<pre><code>AttributeError Traceback (most recent call last)
Cell In[8], line 5
2 emotions_encoded.set_format("torch", columns=["input_ids", "attention_mask", "label"])
4 # extract hidden states
----> 5 emotions_hidden = emotions_encoded.map(extract_hidden_states, batched=True)
File ~\Anaconda3\envs\ml\lib\site-packages\datasets\dataset_dict.py:851, in DatasetDict.map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_names, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, desc)
848 if cache_file_names is None:
849 cache_file_names = {k: None for k in self}
850 return DatasetDict(
--> 851 {
852 k: dataset.map(
853 function=function,
854 with_indices=with_indices,
855 with_rank=with_rank,
856 input_columns=input_columns,
857 batched=batched,
858 batch_size=batch_size,
859 drop_last_batch=drop_last_batch,
860 remove_columns=remove_columns,
861 keep_in_memory=keep_in_memory,
862 load_from_cache_file=load_from_cache_file,
863 cache_file_name=cache_file_names[k],
864 writer_batch_size=writer_batch_size,
865 features=features,
866 disable_nullable=disable_nullable,
867 fn_kwargs=fn_kwargs,
868 num_proc=num_proc,
869 desc=desc,
870 )
871 for k, dataset in self.items()
872 }
873 )
File ~\Anaconda3\envs\ml\lib\site-packages\datasets\dataset_dict.py:852, in <dictcomp>(.0)
848 if cache_file_names is None:
849 cache_file_names = {k: None for k in self}
850 return DatasetDict(
851 {
--> 852 k: dataset.map(
853 function=function,
854 with_indices=with_indices,
855 with_rank=with_rank,
856 input_columns=input_columns,
857 batched=batched,
858 batch_size=batch_size,
859 drop_last_batch=drop_last_batch,
860 remove_columns=remove_columns,
861 keep_in_memory=keep_in_memory,
862 load_from_cache_file=load_from_cache_file,
863 cache_file_name=cache_file_names[k],
864 writer_batch_size=writer_batch_size,
865 features=features,
866 disable_nullable=disable_nullable,
867 fn_kwargs=fn_kwargs,
868 num_proc=num_proc,
869 desc=desc,
870 )
871 for k, dataset in self.items()
872 }
873 )
File ~\Anaconda3\envs\ml\lib\site-packages\datasets\arrow_dataset.py:578, in transmit_tasks.<locals>.wrapper(*args, **kwargs)
576 self: "Dataset" = kwargs.pop("self")
577 # apply actual function
--> 578 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
579 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
580 for dataset in datasets:
581 # Remove task templates if a column mapping of the template is no longer valid
File ~\Anaconda3\envs\ml\lib\site-packages\datasets\arrow_dataset.py:543, in transmit_format.<locals>.wrapper(*args, **kwargs)
536 self_format = {
537 "type": self._format_type,
538 "format_kwargs": self._format_kwargs,
539 "columns": self._format_columns,
540 "output_all_columns": self._output_all_columns,
541 }
542 # apply actual function
--> 543 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
544 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
545 # re-apply format to the output
File ~\Anaconda3\envs\ml\lib\site-packages\datasets\arrow_dataset.py:3073, in Dataset.map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
3065 if transformed_dataset is None:
3066 with logging.tqdm(
3067 disable=not logging.is_progress_bar_enabled(),
3068 unit=" examples",
(...)
3071 desc=desc or "Map",
3072 ) as pbar:
-> 3073 for rank, done, content in Dataset._map_single(**dataset_kwargs):
3074 if done:
3075 shards_done += 1
File ~\Anaconda3\envs\ml\lib\site-packages\datasets\arrow_dataset.py:3449, in Dataset._map_single(shard, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset)
3445 indices = list(
3446 range(*(slice(i, i + batch_size).indices(shard.num_rows)))
3447 ) # Something simpler?
3448 try:
-> 3449 batch = apply_function_on_filtered_inputs(
3450 batch,
3451 indices,
3452 check_same_num_examples=len(shard.list_indexes()) > 0,
3453 offset=offset,
3454 )
3455 except NumExamplesMismatchError:
3456 raise DatasetTransformationNotAllowedError(
3457 "Using `.map` in batched mode on a dataset with attached indexes is allowed only if it doesn't create or remove existing examples. You can first run `.drop_index() to remove your index and then re-add it."
3458 ) from None
File ~\Anaconda3\envs\ml\lib\site-packages\datasets\arrow_dataset.py:3330, in Dataset._map_single.<locals>.apply_function_on_filtered_inputs(pa_inputs, indices, check_same_num_examples, offset)
3328 if with_rank:
3329 additional_args += (rank,)
-> 3330 processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
3331 if isinstance(processed_inputs, LazyDict):
3332 processed_inputs = {
3333 k: v for k, v in processed_inputs.data.items() if k not in processed_inputs.keys_to_format
3334 }
Cell In[7], line 6, in extract_hidden_states(batch)
3 inputs = {k:v.to(device) for k,v in batch.items()
4 if k in tokenizer.model_input_names}
5 with torch.no_grad():
----> 6 last_hidden_state = model(*inputs).last_hidden_state
7 return{"hidden_state": last_hidden_state[:,0].cpu().numpy()}
File ~\Anaconda3\envs\ml\lib\site-packages\torch\nn\modules\module.py:1511, in Module._wrapped_call_impl(self, *args, **kwargs)
1509 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1510 else:
-> 1511 return self._call_impl(*args, **kwargs)
File ~\Anaconda3\envs\ml\lib\site-packages\torch\nn\modules\module.py:1520, in Module._call_impl(self, *args, **kwargs)
1515 # If we don't have any hooks, we want to skip the rest of the logic in
1516 # this function, and just call forward.
1517 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1518 or _global_backward_pre_hooks or _global_backward_hooks
1519 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1520 return forward_call(*args, **kwargs)
1522 try:
1523 result = None
File ~\Anaconda3\envs\ml\lib\site-packages\transformers\models\distilbert\modeling_distilbert.py:593, in DistilBertModel.forward(self, input_ids, attention_mask, head_mask, inputs_embeds, output_attentions, output_hidden_states, return_dict)
591 elif input_ids is not None:
592 self.warn_if_padding_and_no_attention_mask(input_ids, attention_mask)
--> 593 input_shape = input_ids.size()
594 elif inputs_embeds is not None:
595 input_shape = inputs_embeds.size()[:-1]
AttributeError: 'str' object has no attribute 'size'
</code></pre>
|
<python><pytorch><huggingface-transformers>
|
2025-04-01 20:19:35
| 1
| 2,109
|
coolhand
|
79,548,908
| 7,124,155
|
How can I apply a JSON->PySpark nested dataframe as a mapping to another dataframe?
|
<p>I have a JSON like this:</p>
<pre><code>{"main":{"honda":1,"toyota":2,"BMW":5,"Fiat":4}}
</code></pre>
<p>I import into PySpark like this:</p>
<pre><code>car_map = spark.read.json('s3_path/car_map.json')
</code></pre>
<p>Now I have a dataframe:</p>
<p><a href="https://i.sstatic.net/ZF7OU4mS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZF7OU4mS.png" alt="enter image description here" /></a></p>
<p>Given an existing dataframe:</p>
<pre><code>data = [(1, 'BMW'),
(2, 'Ford'),
(3, 'honda'),
(4, 'Cadillac'),
(5, 'Fiat')]
df = spark.createDataFrame(data, ["ID", "car"])
+---+--------+
| ID| car|
+---+--------+
| 1| BMW|
| 2| Ford|
| 3| honda|
| 4|Cadillac|
| 5| Fiat|
+---+--------+
</code></pre>
<p>How can I apply the mapping in car_map to df, creating a new column "x"? For example, if df.car is in car_map.main, then set x to the number. Else, set x to 99.</p>
<p>The result should be like so:</p>
<pre><code>+---+--------+---+
| ID| car| x|
+---+--------+---+
| 1| BMW| 5|
| 2| Ford| 99|
| 3| honda| 1|
| 4|Cadillac| 99|
| 5| Fiat| 4|
+---+--------+---+
</code></pre>
<p>If there are other transformations to make this easier, I'm open. For example UDF, dictionary, array, explode, etc.</p>
|
<python><dataframe><pyspark><databricks>
|
2025-04-01 18:35:49
| 1
| 1,329
|
Chuck
|
79,548,872
| 2,072,516
|
Avoiding multiple sessions in integration test
|
<p>I'm workgin on a FastAPI app, using SQLAlchemy for the ORM, writing tests in pytest. I've got the following integration test:</p>
<pre class="lang-py prettyprint-override"><code>async def test_list_items_success(authed_client, db_session):
await db_session.execute(insert(Item), [{"name": "test1"}, {"name": "test2"}])
await db_session.commit()
response = await authed_client.get("/items")
assert response.status_code == 200
response_body = response.json()
assert len(response_body["data"]["items"]) == 2
</code></pre>
<p>Here, if I don't do the commit in the second line of the test, it doesn't work, which makes sense, because the code in the client is working within a different session. But if I commit, then the code is in the db for the duration of the tests (only because at the begining of the tests, i drop all and recreate). I'm not sure how to resolve this?</p>
<p>I have these fixtures to set up my sessions:</p>
<pre class="lang-py prettyprint-override"><code>@pytest.fixture(scope="function", autouse=True)
async def session_override(app, db_connection):
async def get_db_override():
async with session_manager.session() as session:
yield session
app.dependency_overrides[get_db_session] = get_db_override
@pytest.fixture(scope="function")
async def db_session(db_connection):
async with session_manager.session() as session:
yield session
</code></pre>
|
<python><sqlalchemy><integration-testing>
|
2025-04-01 18:13:37
| 0
| 3,210
|
Rohit
|
79,548,754
| 6,936,582
|
Reshape 4D array to 2D
|
<p>I have the array</p>
<pre><code>import numpy as np
a1 = [["a1", "a2"],
["a3", "a4"],
["a5", "a6"],
["a7", "a8"]]
b1 = [["b1", "b2"],
["b3", "b4"],
["b5", "b6"],
["b7","b8"]]
c1 = [["c1", "c2"],
["c3", "c4"],
["c5", "c6"],
["c7","c8"]]
arr = np.array([a1, b1, c1])
#arr.shape
#(3, 4, 2)
</code></pre>
<p>Which I want to reshape to a 2D array:</p>
<pre><code>["a1","b1","c1"],
["a2","b2","c2"],
...,
["a8","b8","c8"]
</code></pre>
<p>I've tried different things like:</p>
<pre><code># arr.reshape((8,3))
# array([['a1', 'a2', 'a3'],
# ['a4', 'a5', 'a6'],
# ['a7', 'a8', 'b1'],
# ['b2', 'b3', 'b4'],
# ['b5', 'b6', 'b7'],
# ['b8', 'c1', 'c2'],
# ['c3', 'c4', 'c5'],
# ['c6', 'c7', 'c8']])
#arr.T.reshape(8,3)
# array([['a1', 'b1', 'c1'],
# ['a3', 'b3', 'c3'],
# ['a5', 'b5', 'c5'],
# ['a7', 'b7', 'c7'],
# ['a2', 'b2', 'c2'],
# ['a4', 'b4', 'c4'],
# ['a6', 'b6', 'c6'],
# ['a8', 'b8', 'c8']]
# arr.ravel().reshape(8,3)
# array([['a1', 'a2', 'a3'],
# ['a4', 'a5', 'a6'],
# ['a7', 'a8', 'b1'],
# ['b2', 'b3', 'b4'],
# ['b5', 'b6', 'b7'],
# ['b8', 'c1', 'c2'],
# ['c3', 'c4', 'c5'],
# ['c6', 'c7', 'c8']])
</code></pre>
|
<python><numpy>
|
2025-04-01 13:19:44
| 5
| 2,220
|
Bera
|
79,548,632
| 8,837,496
|
Transitive file dependencies break pipenv / Pipfile.lock
|
<p>To avoid running a private a pypi server, we store our wheels on a file server and reference them in Pipfiles using like <code>foo = { file = "\\\\server\\foo-0.1.20-py3-none-any.whl" }</code></p>
<p>This works, but we now want to create wheels that require other local packages. To this end we have a build script that creates an <code>install_requires</code> list for the setup.py, looking like this:</p>
<pre><code>install_requires=[
...
'foo @ file://server/foo-0.1.20-py3-none-any.whl',
...]
</code></pre>
<p>Installing the resulting wheel with <code>pip</code> works fine and installs the required dependencies as expected.</p>
<p>However, trying to list a wheel that contains transitive dependencies in a Pipfile fails:
When running <code>pipenv lock</code>, the resulting Pipfile.lock contains all transitive dependencies, but they are all empty:</p>
<pre><code>"foo": {},
</code></pre>
<p>Subsequently, <code>pipenv install</code> fails:</p>
<pre><code>ERROR: Could not find a version that satisfies the requirement foo (from versions: none)
ERROR: No matching distribution found for foo
ERROR: Couldn't install package: {}
Package installation failed...
</code></pre>
<p>Am I doing something wrong? Is there any way to get this to work?</p>
|
<python><setuptools><pipenv>
|
2025-04-01 12:26:13
| 0
| 331
|
sAm_vdP
|
79,548,517
| 9,473,420
|
How to use RedMon for generating multiple outputs? TSPL and PDF
|
<p>I have printer TSC TE 210. I created virtual printer on a RedMon port and installed TSC driver on it. I am able to generate printfile.prn file using redmon (redirecting output to python, that will edit the data and create .prn file), that has TSPL commands it it, e.g.:</p>
<pre><code>SIZE 97.6 mm, 50 mm
GAP 3 mm, 0 mm
DIRECTION 0,0
REFERENCE 0,0
OFFSET 0 mm
SET PEEL OFF
SET CUTTER OFF
SET PARTIAL_CUTTER OFF
SET TEAR ON
CLS
BITMAP 361,170,50,32,1,˙đ˙˙˙˙˙˙˙đ˙˙ü ˙˙˙˙˙˙
TEXT 586,152,"2",180,1,1,1,"1234AA"
PRINT 1,1
</code></pre>
<p>Everything is good. I can edit the TSPL commands, add something etc, and then send it to real printer using <code>copy printfile.prn /B "TSC TE210"</code> (on windows)</p>
<p>It works, because the generated .prn file has TSPL commands, and I am also sending TSPL commands to the printer for real printing.</p>
<p>The problem is, I would like to create also PDF.
I am able to do it! Also use redmon, but instead of forwarding it to python script, I used PostScript, that can indeed generate the pdf file. This virtual printer used PostScript driver.</p>
<p>However, how to generate both files (.prn having TSPL), and .pdf file in the same time?
I want to be super user friendly, print it once, and generate TSPL and pdf in the same time.</p>
<p>What is limiting me? The thing I mentioned - using first approach, the virtual printer has to have TSC driver installed, that is why it generated TSPL commands. Second virtual printer has PS driver installed, that is why it generated PostScript file, that can be changed into PDF. But I can not have both driver installed on the same RedMon virtual printer.</p>
<p>Any idea how to do it? I know I can create a script, that just take TSPL commands from the first approach, and somehow create PDF step by step. But that is really much work.</p>
<p>I can probably create just PDF using PostScript, and try to work only with PDF, when everything is edited, try to print PDF on thermal printer, but thermal printers really dont like PDF, since the size of labels is not A4. And I would love to keep the native TSPL format for easier edits.</p>
<p>Any solution, where I can use same data to create TSPL and PDF in one run?</p>
<p>TLTR: I can create TSPL or PDF files using RedMon, but can not do both in the same run. Any trick that could do it?</p>
<p>EDIT 2.4.2025:
I was able to create python script that will visualize the TSPL bitmap using plt library. Thanks to the @K J answer, I understood that I have to scale the width by 8. However, only one of the 4 bitmaps looks good. Also I dont know how to make them visible together:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from PIL import Image
import re
bitmap_parameters = []
bitmap_lines = []
with open("TSPL.prn", "rb") as f:
for line in f.readlines():
if line.startswith(b"BITMAP"):
match = re.match(br'BITMAP (\d{1,5}),(\d{1,5}),(\d{1,5}),(\d{1,5}),(\d{1,5}),', line)
if match:
bitmap_parameters.append(match.groups())
bitmap_lines.append(
re.sub(br'BITMAP \d{1,5},\d{1,5},\d{1,5},\d{1,5},\d{1,5},', b'', line)
)
fig, axes = plt.subplots(1, len(bitmap_lines))
if len(bitmap_lines) == 1:
axes = [axes]
for i in range(len(bitmap_lines)):
bitmap_bytes = bitmap_lines[0]
width, height = int(bitmap_parameters[i][2]), int(bitmap_parameters[i][3])
width *= 8
bitmap = np.frombuffer(bitmap_bytes, dtype=np.uint8)
bitmap = np.unpackbits(bitmap)[:width * height]
try:
bitmap = bitmap.reshape((height, width))
except:
print("error")
continue
axes[i].imshow(bitmap, cmap="gray", interpolation="nearest")
axes[i].axis("off")
plt.show()
</code></pre>
<p>Result:
<a href="https://i.sstatic.net/jyVxsmVF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jyVxsmVF.png" alt="enter image description here" /></a></p>
|
<python><pdf><printing><tsc><redmon>
|
2025-04-01 11:37:39
| 1
| 387
|
victory
|
79,548,494
| 12,057,138
|
Locust distributed mode: "Unknown message type '_distributor_request' from worker.." warning halts users
|
<p>I am using Locus in distributed mode with 1 master and n workers to run a load test.</p>
<p>I am also using the locust-plugins library to utilize its Distributor utility. My goal is to preallocate one unique resource per user (from a list of “items”) and ensure each virtual user gets a unique item during the test. Here's a simplified outline of my setup:</p>
<p>Master node: Generates a list of items (one for each user) at test start. It then broadcasts this list to all workers using a custom message.
Workers: Upon receiving the items message, each worker stores the list and prepares to use it for its users. I use a Distributor instance on each worker to serve the models to the user tasks.</p>
<p>Master sending the resource list (at test start):</p>
<p>@events.test_start.add_listener</p>
<pre><code>def on_test_start(environment, **kwargs):
if environment.runner is not None and environment.runner.master:
# Master generates the list of resources for all users
models = generate_items(total_user_count)
# Send the list to all workers
environment.runner.send_message("items_models", {"items_models": models})
</code></pre>
<p>Worker receiving and storing the list:</p>
<pre><code>items_models_dist = None
def set_items_models(env, msg, **kwargs):
global items_models_dist
received_list = msg.data["items_models"]
env.logger.info(f"Worker received {len(received_list)} items models. Proceeding...")
items_models_dist = Distributor(env, iter(received_list))
@events.init.add_listener
def on_worker_init(environment, **kwargs):
if isinstance(environment.runner, runners.WorkerRunner):
environment.runner.register_message("items_models", set_items_models)
</code></pre>
<p>User task using the Distributor:</p>
<pre><code>class MyUser(HttpUser):
@task
def use_model(self):
model = next(items_models_dist)
</code></pre>
<p>When I hit <strong><code>model = next(items_models_dist)</code></strong>, I am getting:
"<em><strong>Unknown message type received from worker worker.some.id (index 0): _distributor_request</strong></em>"
and the user halts.</p>
<p>I did not explicitly call or register any handler for "_distributor_request" in my code. I assumed the plugin would take care of it. The warning and lack of user spawn suggests something is fundamentally wrong with how I’m using the plugin in distributed mode.
Help?</p>
<p>Locust version: 2.32.3
Python version: 3.12.3
locust-plugins version: 4.5.3</p>
<p>I am running in distribution mode on one machine with command line:
locust -f test.py --processes -1</p>
|
<python><load-testing><locust>
|
2025-04-01 11:25:43
| 1
| 688
|
PloniStacker
|
79,548,408
| 5,722,359
|
How to show a figure by matplotlib which is added by uv onto a tmpfs/ramdisk?
|
<p>I want to plot a <code>matplotlib</code> figure in a python virtual environment located in a <code>tmpfs</code> folder that was created by <code>uv</code> but can't.</p>
<p>This is what I did:</p>
<p>Step 1:</p>
<pre><code>$ mkdir /dev/shm/test
$ cd /dev/shm/test
$ uv init --python 3.13 --cache-dir .cache
$ uv add matplotlib
</code></pre>
<p>Note: I used <code>--cache-dir .cache</code> to resolve the <a href="https://docs.astral.sh/uv/concepts/cache/#cache-directory" rel="nofollow noreferrer">cache issue</a> related to using <code>uv</code> to add package(s) in a folder that is located in a filesystem different to the filesystem where <code>uv</code> is installed in.</p>
<blockquote>
<p>It is important for performance for the cache directory to be located
on the same file system as the Python environment uv is operating on.
Otherwise, uv will not be able to link files from the cache into the
environment and will instead need to fallback to slow copy operations.</p>
</blockquote>
<p>Step 2: I create a python script with path <code>/dev/shm/test/main.py</code>. It contains:</p>
<pre><code>import matplotlib.pyplot as plt
def main():
plt.figure()
x = [2, 4, 6, 8]
y = [10, 3, 20, 4]
plt.plot(x, y)
plt.show()
if __name__ == "__main__":
main()
</code></pre>
<p>Step 3: Run <code>main.py</code> from within virtual environment in the tmpfs/ramdisk.</p>
<pre><code>$ source .venv/bin/activate
(test) $ python main.py
/dev/shm/test/main.py:15: UserWarning: FigureCanvasAgg is non-interactive, and thus cannot be shown
plt.show()
</code></pre>
<p>The <code>matplotlib</code> figure just won't plot. I don't know how to resolve this issue.</p>
<p><strong>Update:</strong></p>
<p>Per @furas comments, I tried these approaches:</p>
<pre><code>$ uv add PyQt5 --cache-dir .cache
</code></pre>
<p>and</p>
<pre><code>$ uv add PyQt6 --cache-dir .cache
</code></pre>
<p>separately. However, they still can't resolve the issue.</p>
<pre><code>(test) $ python main.py
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.
Available platform plugins are: eglfs, linuxfb, minimal, minimalegl, offscreen, vnc, wayland-egl, wayland, wayland-xcomposite-egl, wayland-xcomposite-glx, webgl, xcb.
Aborted (core dumped)
</code></pre>
<p>I also noticed that <code>tkinter</code> won't work using the <code>uv</code> installed python.</p>
<pre><code>(test) $ python -m tkinter
[xcb] Unknown sequence number while appending request
[xcb] You called XInitThreads, this is not your fault
[xcb] Aborting, sorry about that.
python: ../../src/xcb_io.c:157: append_pending_request: Assertion `!xcb_xlib_unknown_seq_number' failed.
Aborted (core dumped)
</code></pre>
|
<python><matplotlib><ramdisk><tmpfs><uv>
|
2025-04-01 10:47:44
| 2
| 8,499
|
Sun Bear
|
79,548,332
| 1,211,072
|
sqlAlchemy to_sql with azure function returning collation error, but work without issues if executing just the python file
|
<p>I have written a python method to form a pandas df and upsert it into sqlserver. This method works perfectly fine when I try to execute the python file alone. But it throws collation exception when I try to run via Azure functions.</p>
<p>Here is my code:</p>
<pre><code>import pandas as pd
from sqlalchemy import create_engine, types, inspect, MetaData, Table
eng = sqlAlchemy_getConnection(sql_alchemy_connection_url)
inspector = inspect(eng)
target_table = "my_users"
get_colss = inspector.get_columns(target_table,"dbo")
dtype_mapping = {
column['name']: column['type']
for column in get_colss
}
src_df.to_sql(temp_table,sqlAlchemy_conn,schema="dbo",if_exists='replace',index=False,dtype=dtype_mapping)
</code></pre>
<p>Error when trying to execute from Azure function:</p>
<blockquote>
<p>Exception user_name (VARCHAR(100) COLLATE
"SQL_Latin1_General_CP1_CI_AS") not a string</p>
</blockquote>
<p>Drivers:</p>
<ul>
<li>ODBC Driver 17 for SQL Server</li>
</ul>
<p>What could be the cause of this issue?</p>
|
<python><sql-server><pandas><sqlalchemy><azure-functions>
|
2025-04-01 10:18:02
| 1
| 1,365
|
Ramaraju.d
|
79,548,304
| 14,731,895
|
How to get the alignment in Python Codes accurately in Visual Studio
|
<p>I am trying to write this code in Python in Visual Studio. For some reason I cant get the code to align automatically.</p>
<p>I wrote this code, based on Syntax, the code is not wrong but I cant get Python to create the <em>Secondary_Key</em> and based on my understanding its because the <code>.assign</code> doesnt align.</p>
<p>So can someone help with understand why this is happening and how to get this resolved.</p>
<p>The code I wrote this this.</p>
<pre><code>Final_df_With_Key = (Final_df
.assign(Primary_Key = lambda df : df["Item-Color"] + "-" + df["DC"] +
"-" + df["Subsidiary"] + "-" + df["Year"].astype(str) + "-" + df["Month"].astype(str) + "-" + df["B2B"]).assign(Secondary_Key = lambda df : df["Item-Color"] + "-" + df["DC"])
.loc[:,['Primary_Key'] + [col for col in Final_df.columns if col != 'Primary_Key']])
</code></pre>
<p>When I press enter before the <code>.assign</code> for the secondary key, the alignment becomes this.</p>
<pre><code>Final_df_With_Key = (Final_df
.assign(Primary_Key = lambda df : df["Item-Color"] + "-" + df["DC"] +
"-" + df["Subsidiary"] + "-" + df["Year"].astype(str) + "-" + df["Month"].astype(str) + "-" + df["B2B"])
.assign(Secondary_Key = lambda df : df["Item-Color"] + "-" + df["DC"])
.loc[:,['Primary_Key'] + [col for col in Final_df.columns if col != 'Primary_Key']])
</code></pre>
<p>I tried pressing tab and backspace but this wont automatically wont align, unless you press spacebar to get it to align.</p>
<p>Please let me know you opinion on this.</p>
|
<python><dataframe><visual-studio>
|
2025-04-01 10:06:55
| 1
| 363
|
Mr Pool
|
79,548,243
| 1,142,881
|
How to do a groupby on a cxvpy Variable? is there a way using pandas?
|
<p>I would like to define a loss function for the CVXPy optimization that minimizes differences from the reference grouped target:</p>
<pre><code>import cvxpy as cp
import pandas as pd
# toy example for demonstration purpose
target = pd.DataFrame(data={'a': ['X', 'X', 'Y', 'Z', 'Z'], 'b': [1]*5})
w = cp.Variable(target.shape[0])
beta = cp.Variable(target.shape[0])
def loss_func(w, beta):
x = pd.DataFrame(data={'a': target['a'], 'b': w @ beta}).groupby('a')['b'].sum()
y = target.groupby('a')['b'].sum()
return cp.norm2(x - y)**2 # <<<<<<<<<<<<<< ValueError: setting an array element with a sequence.
</code></pre>
<p>but this gives me the following error</p>
<p><code>ValueError: setting an array element with a sequence.</code></p>
<p>What would be the way to cover this use-case using CVXPy?</p>
|
<python><pandas><cvxpy>
|
2025-04-01 09:39:55
| 1
| 14,469
|
SkyWalker
|
79,548,219
| 284,696
|
Using `setattr` to decorate functions from another module
|
<p>I'm having trouble applying a decorator to an imported function. Suppose I have the following <code>foo.py</code> module:</p>
<pre class="lang-py prettyprint-override"><code># contents of foo.py
def bar():
return "hello"
</code></pre>
<p>I now want to import <code>bar</code> from it and apply the <code>lru_cache</code> decorator, so in my main file I do this:</p>
<pre class="lang-py prettyprint-override"><code>from functools import lru_cache
import foo
def main():
# caching foo.bar works fine
for _ in range(2):
if hasattr(foo.bar, "cache_info"):
print("bar is already cached")
else:
print("caching bar")
setattr(foo, "bar", lru_cache(foo.bar))
if __name__ == "__main__":
main()
</code></pre>
<p>which produces, as expected, the following output:</p>
<pre><code>caching bar
bar is already cached
</code></pre>
<p>However, I don't want to import the whole <code>foo</code> module, so now I only import <code>bar</code> and rely on <code>sys.modules</code> to find out where <code>bar</code> comes from. Then <code>main.py</code> becomes:</p>
<pre class="lang-py prettyprint-override"><code>from functools import lru_cache
import sys
from foo import bar
def main():
for _ in range(2):
if hasattr(bar, "cache_info"):
print("bar is already cached")
else:
print("caching bar")
setattr(sys.modules[__name__], "bar", lru_cache(bar))
if __name__ == "__main__":
main()
</code></pre>
<p>Again, this produces the wanted output:</p>
<pre><code>caching bar
bar is already cached
</code></pre>
<p>But now I want my decorating process to be reusable in several modules, so I write it as a function in file <code>deco.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>from functools import lru_cache
import sys
def mydecorator(somefunction):
for _ in range(2):
if hasattr(somefunction, "cache_info"):
print(f"{somefunction.__name__} is already cached")
else:
print(f"caching {somefunction.__name__}")
setattr(sys.modules[somefunction.__module__], somefunction.__name__, lru_cache(somefunction))
</code></pre>
<p>And so <code>main.py</code> becomes:</p>
<pre class="lang-py prettyprint-override"><code>import sys
from foo import bar
from deco import mydecorator
def main():
mydecorator(bar)
if __name__ == "__main__":
main()
</code></pre>
<p>This time the decoration fails; the output becomes:</p>
<pre><code>caching bar
caching bar # <- wrong; should be cached already
</code></pre>
<p>How do I correct my code?</p>
|
<python><import><setattr>
|
2025-04-01 09:28:18
| 1
| 2,826
|
Anthony Labarre
|
79,548,185
| 1,720,199
|
Propagate assertion type check to caller
|
<p>I want to assert the type of an object in a function without having to return it, but I can't find how to do it.</p>
<p>For example:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Optional, TypeVar
T = TypeVar('T')
def assert_non_null(_object: Optional[T]):
assert _object is not None
def f(a: Optional[float]):
assert_non_null(a)
b: float = a # error because "a" can be null
</code></pre>
<p>Whereas this works (but its not what I'm looking for):</p>
<pre class="lang-py prettyprint-override"><code>from typing import Optional, TypeVar
T = TypeVar('T')
def extract_non_null(_object: Optional[T]):
assert _object is not None
return _object
def g(a: Optional[float]):
a = extract_non_null(a)
b: float = a # its fine because the function returns the type T (float)
</code></pre>
<p>Is there a way to add type annotation on side effects to make sure calling <code>assert_non_null</code> has the same effect as calling <code>assert is not None</code> directly?</p>
|
<python><python-typing>
|
2025-04-01 09:14:27
| 0
| 11,323
|
cglacet
|
79,548,159
| 2,881,414
|
Proper way of building a package with tox
|
<p>I'm using tox and wonder what is the intended way to build my package for distribution. I see that tox is building the package as part of the <a href="https://tox.wiki/en/latest/user_guide.html#system-overview" rel="nofollow noreferrer">normal execution</a>. It even provides a <a href="https://tox.wiki/en/latest/upgrading.html#packaging-configuration-and-inheritance" rel="nofollow noreferrer">dedicated <code>pkgenv</code> section</a>, but does not explain much how to use it.</p>
<p>There seem to be two options on how to build a package:</p>
<ol>
<li><strong>Explicitly</strong>, using a testenv:</li>
</ol>
<pre class="lang-ini prettyprint-override"><code>[testenv:build]
commands = python3 -m build
</code></pre>
<ol start="2">
<li><strong>Implicitly</strong>, relying on the package being created in <code>.tox/.pkg/dist</code> during the test process</li>
</ol>
<p>Option (1) seems odd, given that there's a <code>pkgenv</code> available which would not be used in that case, but Option (2) cannot be called directly and is only created as a side effect of calling any of the targets.</p>
<p>Does anyone which one it is? And what is the purpose of <code>pkgenv</code>? Can one use it like:</p>
<pre class="lang-ini prettyprint-override"><code>[pkgenv:build]
commands = python3 -m build
</code></pre>
|
<python><python-3.x><tox>
|
2025-04-01 09:03:05
| 0
| 17,530
|
Bastian Venthur
|
79,548,083
| 16,312,980
|
sqlite table does not exist within gradio blocks or GradioUI even after creating said table
|
<p>I am trying this out: <a href="https://huggingface.co/docs/smolagents/examples/text_to_sql" rel="nofollow noreferrer">https://huggingface.co/docs/smolagents/examples/text_to_sql</a> in my hf space as a pro user.
For some reason <code>GradioUI(agent).launch()</code> can't detect the sqlite tables. even though the prints in the tool function returns the correct engine.</p>
<pre><code>@tool
def sql_engine_tool(query: str) -> str:
"""
Allows you to perform SQL queries on the table. Returns a string representation of the result.
The table is named 'receipts'. Its description is as follows:
Columns:
- receipt_id: INTEGER
- customer_name: VARCHAR(16)
- price: FLOAT
- tip: FLOAT
Args:
query: The query to perform. This should be correct SQL.
"""
output = ""
print("debug sql_engine_tool")
print(engine)
with engine.connect() as con:
print(con.connection)
print(metadata_objects.tables.keys())
result = con.execute(
text(
"SELECT name FROM sqlite_master WHERE type='table' AND name='receipts'"
)
)
print("tables available:", result.fetchone())
rows = con.execute(text(query))
for row in rows:
output += "\n" + str(row)
return output
def init_db(engine):
metadata_obj = MetaData()
def insert_rows_into_table(rows, table, engine=engine):
for row in rows:
stmt = insert(table).values(**row)
with engine.begin() as connection:
connection.execute(stmt)
table_name = "receipts"
receipts = Table(
table_name,
metadata_obj,
Column("receipt_id", Integer, primary_key=True),
Column("customer_name", String(16), primary_key=True),
Column("price", Float),
Column("tip", Float),
)
metadata_obj.create_all(engine)
rows = [
{"receipt_id": 1, "customer_name": "Alan Payne", "price": 12.06, "tip": 1.20},
{"receipt_id": 2, "customer_name": "Alex Mason", "price": 23.86, "tip": 0.24},
{
"receipt_id": 3,
"customer_name": "Woodrow Wilson",
"price": 53.43,
"tip": 5.43,
},
{
"receipt_id": 4,
"customer_name": "Margaret James",
"price": 21.11,
"tip": 1.00,
},
]
insert_rows_into_table(rows, receipts)
with engine.begin() as conn:
print("SELECT test", conn.execute(text("SELECT * FROM receipts")).fetchall())
print("init_db debug")
print(engine)
print()
return engine, metadata_obj
if __name__ == "__main__":
engine = create_engine("sqlite:///:memory:")
engine, metadata_objects = init_db(engine)
model = HfApiModel(
model_id="meta-llama/Meta-Llama-3.1-8B-Instruct",
token=os.getenv("my_first_agents_hf_tokens"),
)
agent = CodeAgent(
tools=[sql_engine_tool],
# system_prompt="""
# You are a text to sql converter
# """,
model=model,
max_steps=1,
verbosity_level=1,
)
# agent.run("What is the average each customer paid?")
GradioUI(agent).launch()
</code></pre>
<p>I may need to just use <code>gr.blocks</code> instead and reimplement some things. I am not the most familiar with this library, this will be tricky for me.</p>
<p>Log messages:</p>
<pre class="lang-none prettyprint-override"><code>debug sql_engine_tool
Engine(sqlite:///:memory:)
<sqlalchemy.pool.base._ConnectionFairy object at 0x7f9228250ee0>
dict_keys(['receipts'])
tables available: None
Code execution failed at line 'customer_total = sql_engine_tool(engine=engine,
query=query)' due to: OperationalError: (sqlite3.OperationalError) no such
table: receipts
</code></pre>
<p>I have tried <code>gr.Blocks()</code>, <code>stream_to_gradio()</code>, they are not working. if I directly use the tool function to <code>SELECT * FROM receipts</code>, it works.</p>
|
<python><agent><gradio>
|
2025-04-01 08:26:07
| 1
| 426
|
Ryan
|
79,548,070
| 3,053,806
|
Why does Python pass an instance to a class attribute that is a function?
|
<p>I define a class attribute and give it a function. When I call this function, the instance is passed on as the first argument (as if it's an instance function call with a <code>self</code>).
I would not expect an instance to care that the attribute <code>fn</code> is a function, let alone for it to pass itself as an argument. Why does it do that?</p>
<pre><code>class BStatic:
fn = None
@staticmethod
def static_fn(*args, **kwargs):
print(args, kwargs)
BStatic.fn = BStatic.static_fn
bs = BStatic()
bs.fn()
# (<__main__.BStatic object at 0x0000022D63214590>,) {}
BStatic.fn()
# () {}
</code></pre>
<p>In another example, I assigned a class attribute to a function defined on a module.
When I call this function on an instance of a class, the instance is again passed as the first argument.</p>
<pre><code>class B:
fn = None
def instance_fn(self, *args, **kwargs):
print(self, args, kwargs)
def fn(*args, **kwargs):
print(args, kwargs)
B.fn = fn
B.fn()
# () {}
b = B()
b.fn()
# (<__main__.B object at 0x0000022D643D23F0>,) {}
</code></pre>
<p>This is not the case if I assign the class attribute on an instance.</p>
<pre><code>b.fn = fn
b.fn()
# () {}
</code></pre>
|
<python><python-internals>
|
2025-04-01 08:21:08
| 1
| 821
|
msrc
|
79,547,850
| 12,645,056
|
Why is the bounding box not aligned to the square?
|
<pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle
def generate_square_image(size, square_size, noise_level=0.0):
"""
Generates an image with a white square in the center.
Args:
size (int): The size of the image (size x size).
square_size (int): The size of the square.
noise_level (float): Standard deviation of Gaussian noise.
Returns:
numpy.ndarray: The image as a numpy array.
numpy.ndarray: The mask.
tuple: Bounding box (x_min, y_min, width, height).
"""
# create mask
mask = np.zeros((size, size))
start = (size - square_size) // 2
end = start + square_size
mask[start:end, start:end] = 1
# create bounding box
bbox = (start, start, square_size, square_size)
# create noisy image
img = mask.copy()
if noise_level > 0:
noise = np.random.normal(0, noise_level, img.shape)
img = np.clip(img + noise, 0, 1)
return img, mask, bbox
# Example usage:
size = 100
square_size = 40
img, mask, bbox = generate_square_image(size, square_size, noise_level=0.1)
# Plot the image
fig, ax = plt.subplots(1, 3, figsize=(15, 5))
ax[0].imshow(img, cmap='gray')
ax[0].set_title('Generated Image')
ax[1].imshow(mask, cmap='gray')
ax[1].set_title('Mask')
# Display the bounding box overlayed on the image
ax[2].imshow(img, cmap='gray')
x, y, width, height = bbox
# The key fix: in matplotlib, the Rectangle coordinates start at the bottom-left corner
# But imshow displays arrays with the origin at the top-left corner
rect = Rectangle((x, y), width, height, linewidth=2, edgecolor='r', facecolor='none')
ax[2].add_patch(rect)
ax[2].set_title('Image with Bounding Box')
# Ensure origin is set to 'upper' to match imshow defaults
for a in ax:
a.set_ylim([size, 0]) # Reverse y-axis to match array indexing
plt.tight_layout()
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/LR2RBued.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LR2RBued.png" alt="enter image description here" /></a></p>
<p><strong>Question:</strong> What is the right code to align the box properly?</p>
<p>It seems to be the most straight forward approach to create one?</p>
<p>As you can see, I have already tried prompting this to work, but even that fix (which seems to be the one thing to explore here, which is the difference in coordinate systems) does not seem to work either.</p>
|
<python><numpy><matplotlib>
|
2025-04-01 06:18:42
| 1
| 325
|
Academic
|
79,547,849
| 1,838,076
|
Session state is not respected when the options are not identical
|
<p>In the example, <code>Radio 1</code> and <code>Radio 2</code> start with <code>Option 2</code> as default value set via <code>st.session_state</code>.</p>
<p>When I change the valid options (via button callback),</p>
<p><code>Radio 2</code> retains the value while <code>Radio 1</code> resets itself to the first value in the list of options.</p>
<p>Both the option sets will always have <code>Option 2</code> as valid value.</p>
<p>I am trying to understnad why the difference in these bhaviors?</p>
<p>Issue is same with selectbox, pills etc. Just easily understandable with radio.</p>
<p>What am I missing.</p>
<p>Here is the code</p>
<pre class="lang-py prettyprint-override"><code>import streamlit as st
options = {
'sel1': [
['Option 1', 'Option 2'],
['Option 1', 'Option 2', 'Option 3'],
['Option 1', 'Option 2', 'Option 3', 'Option 4']
],
'sel2': [
['Option 1', 'Option 2', 'Option 3', 'Option 4'],
['Option 1', 'Option 2', 'Option 3', 'Option 4'],
['Option 1', 'Option 2', 'Option 3', 'Option 4']
]
}
def showRadio(stCol, radioId):
with stCol:
st.metric(
f'Radio {radioId}',
st.radio(
label='With different options across runs',
options=options[f'sel{radioId}'][st.session_state.idx],
index=0,
key=f'radio{radioId}'
)
)
def updateDIdx():
'Update the session_state.idx upon button click in callback'
st.session_state.idx = (st.session_state.idx + 1) % 3
def showButton(stCol):
with stCol:
st.button('Change Options', on_click=updateDIdx)
st.metric('Options Index: ', st.session_state.idx)
def radio():
'''The main function'''
st.set_page_config(page_title="Radio Example")
# Set the session_state on first run
if 'idx' not in st.session_state:
st.session_state.idx = 0
if 'radio1' not in st.session_state:
st.session_state.radio1 = 'Option 2'
if 'radio2' not in st.session_state:
st.session_state.radio2 = 'Option 2'
cols = st.columns(3)
showRadio(cols.pop(0), 1)
showRadio(cols.pop(0), 2)
showButton(cols.pop(0))
if __name__ == '__main__':
radio()
</code></pre>
<p>Here is the sample output
<a href="https://global.discourse-cdn.com/streamlit/original/3X/0/2/0215eb2552601f41b502deb62c1273c900f9a5e4.gif" rel="nofollow noreferrer">https://global.discourse-cdn.com/streamlit/original/3X/0/2/0215eb2552601f41b502deb62c1273c900f9a5e4.gif</a></p>
<p>Streamlit version: 1.44.0
Python: 3.12</p>
|
<python><streamlit>
|
2025-04-01 06:18:34
| 0
| 1,622
|
Krishna
|
79,547,780
| 6,734,243
|
How to add a dash component as a dash-leaflet custom control?
|
<p>I'm slowly discovering the <code>dash-leaflet</code> interface coming from ipyleaflet widget lib and there are still some tasks that I don't manage to achieve.</p>
<p>Here I would like to add a dash componenent (in this case a dropdown select) and place it on the map as control so it fits consistently with the leaflet layout when the map is resized.</p>
<p>Here is a map:</p>
<pre class="lang-py prettyprint-override"><code>import dash_leaflet as dl
from dash import Dash
colorscale = ['red', 'yellow', 'green', 'blue', 'purple'] # rainbow
app = Dash()
app.layout = dl.Map(
[dl.TileLayer()],
center=[56, 10],
zoom=6,
style={'height': '100vh'},
)
if __name__ == "__main__":
app.run_server()
</code></pre>
<p>And here is the little select that I would like to place on it:</p>
<pre class="lang-py prettyprint-override"><code>from dash import dcc
dropdown = dcc.Dropdown(
component_label="toto-label",
id="toto-id",
options=[
{"label": "NDVI (8)", "value": "ndvi8"},
{"label": "NDVI (16)", "value": "ndvi16"},
],
value="ndvi16",
)
</code></pre>
<p><a href="https://i.sstatic.net/JpnvQFx2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JpnvQFx2.png" alt="enter image description here" /></a></p>
<h2>EDIT</h2>
<p>to clarify exactly what I'm looking for, I want to mimick what the <a href="https://ipyleaflet.readthedocs.io/en/latest/controls/widget_control.html" rel="nofollow noreferrer"><code>WidgetControl</code></a> is doing in ipyleaflet i.e. placing the component in the controls hierarchy with something like "topleft" and without creating dedicated css styling.</p>
|
<python><plotly-dash><dash-leaflet>
|
2025-04-01 05:36:54
| 1
| 2,670
|
Pierrick Rambaud
|
79,547,639
| 17,729,094
|
Larger-than-memory dataset with polars
|
<p>I have a parquet file with a dataset that looks like:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.LazyFrame(
{
"target": [
[1.0, 2.0],
[3.0, 4.0],
],
"point_cloud": [
[
[7.0, 8.0],
[9.0, 10.0],
],
[
[9.0, 10.0],
],
],
},
schema={
"target": pl.Array(pl.Float32, 2),
"point_cloud": pl.List(pl.Array(pl.Float32, 2)),
},
)
</code></pre>
<p>The file has 4 million rows and is 20 GB (doesn't fit in RAM).</p>
<p>I am trying to get the size of point clouds like:</p>
<pre class="lang-py prettyprint-override"><code>df = (
pl.scan_parquet(dataset).select(size=pl.col("point_cloud").list.len()).collect()
)
</code></pre>
<p>But my program runs out of memory and dies. I have tried changing <code>collect(engine="streaming")</code>, but the result is the same.</p>
<p>I am puzzled because when I try to get e.g. the <code>x</code> coordinate of all targets, it works OK (and is super fast):</p>
<pre class="lang-py prettyprint-override"><code>df = pl.scan_parquet(dataset).select(x=pl.col("target").arr.get(0)).collect()
</code></pre>
<p>Can I get some help with this?
Thanks</p>
<p><strong>EDIT</strong>
This is a plot with the distribution of the length of each list (produced by running the same code in a computer with enough RAM to fit the entire dataset).
<a href="https://i.sstatic.net/FyYgM4RV.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FyYgM4RV.jpg" alt="hist" /></a></p>
|
<python><dataframe><parquet><python-polars>
|
2025-04-01 02:54:10
| 1
| 954
|
DJDuque
|
79,547,588
| 7,535,975
|
uv pip install fails for multiple packages
|
<p>I'm having some weird behavior when trying to install multiple packages with <code>uv</code> as follows</p>
<pre><code>uv pip install --system spatialpandas \
easydev \
colormap \
colorcet \
duckdb \
dask_geopandas \
hydrotools \
sidecar \
dataretrieval \
google-cloud-bigquery
</code></pre>
<p>This results in the following error</p>
<pre><code>Using Python 3.11.8 environment at: /srv/conda/envs/notebook
Resolved 140 packages in 384ms
× Failed to build `numba==0.53.1`
├─▶ The build backend returned an error
╰─▶ Call to `setuptools.build_meta:__legacy__.build_wheel` failed (exit status: 1)
[stderr]
Traceback (most recent call last):
File "<string>", line 14, in <module>
File "/home/jovyan/.cache/uv/builds-v0/.tmpBarfZb/lib/python3.11/site-packages/setuptools/build_meta.py", line 334, in get_requires_for_build_wheel
return self._get_build_requires(config_settings, requirements=[])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jovyan/.cache/uv/builds-v0/.tmpBarfZb/lib/python3.11/site-packages/setuptools/build_meta.py", line 304, in _get_build_requires
self.run_setup()
File "/home/jovyan/.cache/uv/builds-v0/.tmpBarfZb/lib/python3.11/site-packages/setuptools/build_meta.py", line 522, in run_setup
super().run_setup(setup_script=setup_script)
File "/home/jovyan/.cache/uv/builds-v0/.tmpBarfZb/lib/python3.11/site-packages/setuptools/build_meta.py", line 320, in run_setup
exec(code, locals())
File "<string>", line 50, in <module>
File "<string>", line 47, in _guard_py_ver
RuntimeError: Cannot install on Python version 3.11.8; only versions >=3.6,<3.10 are supported.
hint: This usually indicates a problem with the package or the build environment.
help: `numba` (v0.53.1) was included because `spatialpandas` (v0.5.0) depends on `numba`
</code></pre>
<p>However, installing all the packages individually in the same order <code>uv pip install --system <package></code> works without any issue. If above is a dependency error it should happen for individual installation as well?</p>
|
<python><pip><uv>
|
2025-04-01 02:07:06
| 1
| 377
|
F Baig
|
79,547,573
| 1,939,638
|
Alteryx workflow with single python node failing
|
<p>I have an Alteryx Workflow that runs a notebook. When I run the notebook by itself, it works just fine. When I run the workflow, I receive this error</p>
<pre><code> Python (1) Failed to run Python command. Make sure you have all the packages you need to run the command.: The system cannot find the file specified. (2)
</code></pre>
<p>It seems like the error is occurring before it even attempt to run the notebook.</p>
|
<python><jupyter-notebook><alteryx>
|
2025-04-01 01:45:54
| 0
| 5,191
|
Matt Cremeens
|
79,547,567
| 1,245,659
|
Creating a child table record, when a new parent table record is created
|
<p>I have this model:</p>
<pre><code>from django.db import models
from django.contrib.auth.models import User
from django.templatetags.static import static
from simple_history.models import HistoricalRecords
from treebeard.mp_tree import MP_Node
from . import constants
from datetime import datetime
class Profile(models. Model):
# Managed fields
user = models.OneToOneField(User, related_name="profile", on_delete=models.CASCADE)
memberId = models.CharField(unique=True, max_length=15, null=False, blank=False, default=GenerateFA)
avatar = models.ImageField(upload_to="static/MCARS/img/members", null=True, blank=True)
birthday = models.DateField(null=True, blank=True)
gender = models.CharField(max_length=10, choices=constants.GENDER_CHOICES, null=True, blank=True)
invited = models.BooleanField(default=False)
registered = models.BooleanField(default=False)
height = models.PositiveSmallIntegerField(null=True, blank=True)
phone = models.CharField(max_length=32, null=True, blank=True)
address = models.CharField(max_length=255, null=True, blank=True)
number = models.CharField(max_length=32, null=True, blank=True)
city = models.CharField(max_length=50, null=True, blank=True)
zip = models.CharField(max_length=30, null=True, blank=True)
@property
def get_avatar(self):
return self.avatar.url if self.avatar else static('static/MCARS/img/avatars/default.jpg')
def save(self, **kwargs):
if not self.pk:
super(Profile, self).save(**kwargs)
rank = Rank.objects.create(user=self, longrank='o1', shortrank='o1', branch='r')
rank.save()
else:
super(Profile, self).save(**kwargs)
def __str__(self):
rank = Rank.objects.get(user=self.user.profile).get_longrank_display()
return rank + " " + self.user.first_name + " " + self.user.last_name + "(" + self.memberId + ")"
class Rank (models.Model):
user = models.ForeignKey(Profile, related_name="Rank", on_delete=models.CASCADE)
longrank = models.CharField(max_length=5, null=True, blank=True, choices=constants.long_rank)
shortrank = models.CharField(max_length=5, null=True, blank=True, choices=constants.short_rank)
branch = models.CharField(max_length=5, null=True, blank=True, choices=constants.branch)
image = models.ImageField(upload_to="static/MCARS/img/ranks", null=True, blank=True)
history = HistoricalRecords()
def save(self, **kwargs):
self.shortrank = self.longrank
self.image = 'static/MCARS/img/ranks/' + self.branch[0] + '-' + self.longrank + '.png'
super(Rank, self).save(**kwargs)
def __str__(self):
return self.get_longrank_display() + ' (' + self.get_shortrank_display() + ') ' + self.user.user.first_name + ' ' + self.user.user.last_name
class Command (MP_Node):
CO = models.ForeignKey(User, related_name="user", on_delete=models.CASCADE)
ship = models.ImageField(upload_to="static/MCARS/img/ships", null=True, blank=True)
seal = models.ImageField(upload_to="static/MCARS/img/flag", null=True, blank=True)
Type = models.CharField(max_length=10, choices=constants.CMD_TYPE)
name = models.CharField(max_length=255, null=True, blank=True)
address = models.CharField(max_length=255, null=True, blank=True)
number = models.CharField(max_length=32, null=True, blank=True)
city = models.CharField(max_length=50, null=True, blank=True)
zip = models.CharField(max_length=30, null=True, blank=True)
hull = models.CharField(max_length=255, null=True, blank=True)
Commissioned = models.BooleanField(default=True)
node_order_by = ['name']
def save(self, **kwargs):
if not self.pk:
super(Command, self).save(**kwargs)
CommandPositions = CommandPositions.objects.create(command=self, name="CO", responsibility='Commanding Officer')
CommandPositions.save()
else:
super(Command, self).save(**kwargs)
def isCO(self, user):
return self.CO == user
def __str__(self):
rank = Rank.objects.get(user=self.CO.profile).get_longrank_display()
return self.name + ' (' + self.hull + ') ' + rank + " " + self.CO.first_name + " " + self.CO.last_name + " Commanding"
class CommandPositions(models.Model):
Command = models.ForeignKey(Command, related_name="Positions", on_delete=models.CASCADE)
name = models.CharField(max_length=255, null=True, blank=True)
responsibility = models.TextField(null=True, blank=True)
class Assignment (models.Model):
user = models.ForeignKey(User, on_delete=models.CASCADE)
Position = models.ForeignKey(Command, on_delete=models.CASCADE)
assigned_dt = models.DateTimeField(auto_now_add=True)
class Meta:
verbose_name_plural = "Assignments"
</code></pre>
<p>In this solution, when I create a new <code>Profile</code>, a new <code>Rank</code> is created. This works. However, when I create a new <code>Command</code>, a new <code>CommandPositions</code> record is needed. I tried it the same way as the <code>Profile</code>/<code>Rank</code> relationship was done, but it's throwing an error saying that it's not there.</p>
<p>How Come <code>Profile</code>/<code>Rank</code> worked, but <code>Command</code>/<code>CommandPositions</code> is saying I cannot do that?</p>
<p>when I create a <code>Command</code>, this error comes up:</p>
<pre><code> File "/Users/evancutler/PycharmProjects/MCARS/MCARS/models.py", line 98, in save
CommandPositions = CommandPositions.objects.create(command=self, name="CO", responsibility='Commanding Officer')
^^^^^^^^^^^^^^^^
Exception Type: UnboundLocalError at /admin/MCARS/command/add/
Exception Value: cannot access local variable 'CommandPositions' where it is not associated with a value
</code></pre>
<p>This error does not occur when doing <code>Profile</code>/<code>Rank</code>, but it does here.</p>
<p>Any wisdom would be greatly appreciated.
Thanks!</p>
|
<python><django><django-models>
|
2025-04-01 01:37:16
| 1
| 305
|
arcee123
|
79,547,465
| 5,256,563
|
Parallel processing with arguments to modify
|
<p>I need to parallelize a function that modifies values of one of the arguments. For example modify this simple code:</p>
<pre><code>def Test(i, a, length):
if i % 2 == 0:
for x in range(0, length, 2):
a[x] = 2
else:
for x in range(1, length, 2):
a[x] = 1
a0 = numpy.ndarray(shape=10, dtype=numpy.uint8)
Test(0, a, a.shape[0])
Test(1, a, a.shape[0])
print(a)
</code></pre>
<p>I tried to use <code>joblib</code> using <code>Parallel(n_jobs=2)(delayed(Test)(i, a, 10) for i in range(2))</code> and <code>multiprocessing.Pool</code> with:</p>
<pre><code>with multiprocessing.Pool(processes=2) as Pool:
data = [(0, a, a.shape[0]), (1, a, a.shape[0])]
Pool.starmap(Test, data)
</code></pre>
<p>But both solutions do not work because they fork the data, so the parameters do not get modified.</p>
<p>What would be my options to do such parallelization?</p>
|
<python><python-3.x><parallel-processing><multiprocessing><joblib>
|
2025-04-01 00:11:13
| 1
| 5,967
|
FiReTiTi
|
79,546,867
| 491,605
|
cancel S3 Download in python
|
<p>I have a client app, that downloads potentially big files from S3.
I want to give the user the option to cancel that download, without stopping/killing the whole app. Doing some research, that seems to be an suprisingly different task:</p>
<ul>
<li><code>boto3</code> does not support canceling S3 downloads :(.</li>
<li>I could start a process (using Python multiprocessing) - and kill it to cancel the download. But I fear that would probably leave large chunks already in memory but not saved to the file "lost".</li>
<li>I could use some http library and "manual" download the file. If that library supports canceling, that could work. But I also did not find another http library, that supports canceling get requests in progress.</li>
</ul>
<p>So my questions, aimed at archiving cancelable downloads from S3.</p>
<ul>
<li>Am I right, that running the download in a separate process and killing it could drop larger chunks already download (or other problems)?</li>
<li>Did I miss a python http library that supports canceling?</li>
<li>Is there another way, to cancel S3 downloads in Python that I may have missed?</li>
</ul>
|
<python><amazon-web-services><amazon-s3>
|
2025-03-31 17:18:21
| 1
| 7,811
|
Nathan
|
79,546,809
| 6,275,400
|
Animate Google Earth Engine Image
|
<p>Hi StackOverflow Community,</p>
<p>I have year-over-year images that I have pulled from Google Earth Image, and I want to animate them so that it's easier to analyze.</p>
<p>I tried to use Folium, which has a plugin that enables the animation with a slider. (i.e. <a href="https://python-visualization.github.io/folium/latest/user_guide/plugins/timestamped_geojson.html" rel="nofollow noreferrer">TimestampedGeo</a>). This is the code:</p>
<pre><code>import ee
import folium
import folium.plugins as fplugins
"""
Sources:
- https://colab.research.google.com/github/giswqs/qgis-earthengine-examples/blob/master/Folium/ee-api-folium-setup.ipynb#scrollTo=3qB3W9rDfoma
- https://python-visualization.github.io/folium/latest/user_guide/plugins/timestamped_geojson.html
"""
ee.Authenticate()
ee.Initialize()
dem = ee.Image('USGS/SRTMGL1_003')
m = folium.Map(
location=(chosen_lat, chosen_lon),
zoom_start=9,
height=300
)
features = [
{
"type": "Feature",
"geometry": dem.updateMask(dem.gt(0)).getMapId({}),
"properties": {"times": ["2017-06-02T00:30:00", "2017-06-02T00:40:00"]}
}
]
fplugins.TimestampedGeoJson(
data={
"type": "FeatureCollection",
"features": features,
},
period="PT1M",
add_last_point=True,
).add_to(m)
m
</code></pre>
<p>The above code keeps giving this error message:</p>
<pre><code>TypeError Traceback (most recent call last)
hex_cell_0195d7ee-7e8b-7aa1-b548-572c1c39aad2.py in <module>
32 }
33 ]
---> 34
35 fplugins.TimestampedGeoJson(
36 data={
~/.cache/pypoetry/virtualenvs/python-kernel-OtKFaj5M-py3.9/lib/python3.9/site-packages/folium/plugins/timestamped_geo_json.py in __init__(self, data, transition_time, loop, auto_play, add_last_point, period, min_speed, max_speed, loop_button, date_options, time_slider_drag_update, duration, speed_slider)
203 elif type(data) is dict:
204 self.embed = True
--> 205 self.data = json.dumps(data)
206 else:
207 self.embed = False
/usr/local/lib/python3.9/json/__init__.py in dumps(obj, skipkeys, ensure_ascii, check_circular, allow_nan, cls, indent, separators, default, sort_keys, **kw)
229 cls is None and indent is None and separators is None and
230 default is None and not sort_keys and not kw):
--> 231 return _default_encoder.encode(obj)
232 if cls is None:
233 cls = JSONEncoder
/usr/local/lib/python3.9/json/encoder.py in encode(self, o)
197 # exceptions aren't as detailed. The list call should be roughly
198 # equivalent to the PySequence_Fast that ''.join() would do.
--> 199 chunks = self.iterencode(o, _one_shot=True)
200 if not isinstance(chunks, (list, tuple)):
201 chunks = list(chunks)
/usr/local/lib/python3.9/json/encoder.py in iterencode(self, o, _one_shot)
255 self.key_separator, self.item_separator, self.sort_keys,
256 self.skipkeys, _one_shot)
--> 257 return _iterencode(o, 0)
258
259 def _make_iterencode(markers, _default, _encoder, _indent, _floatstr,
/usr/local/lib/python3.9/json/encoder.py in default(self, o)
177
178 """
--> 179 raise TypeError(f'Object of type {o.__class__.__name__} '
180 f'is not JSON serializable')
181
TypeError: Object of type TileFetcher is not JSON serializable
</code></pre>
<p>Does anyone know how to fix this, or whether it's possible for Folium to animate images? I'm also open to trying any Python libraries that can achieve the same.</p>
|
<python><folium><google-earth-engine><folium-plugins>
|
2025-03-31 16:49:33
| 0
| 1,361
|
Fxs7576
|
79,546,777
| 1,420,553
|
Saving and Loading Images for Tensorflow Lite
|
<p>Planning on using Tensorflow Lite for image classification.</p>
<p>To test the model, using Fashion-MNIST database to create the model following a Tensorflow basic example from their website. Created and saved the model and want to test it in TF Lite.</p>
<p>I want to save an image from the mentioned database to predict its class in TF Lite. This is the code to check the images (some of them from Google search):</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.datasets import mnist
import tensorflow as tf
import base64
import io
from os.path import dirname, join
from PIL import Image
from matplotlib.pyplot import imread
fashion_mnist = tf.keras.datasets.fashion_mnist
(train_X, train_y), (test_X, test_y) = fashion_mnist.load_data()
# Function to check image format
def check_mnist_image_format(image_array):
print(f"Image {index} shape: {image_array.shape}")
print(f"Image {index} data type: {image_array.dtype}")
print(f"Image {index} min pixel value: {np.min(image_array)}")
print(f"Image {index} max pixel value: {np.max(image_array)}")
# Check the format of the first training image
check_mnist_image_format(train_X[0], 0)
filename = join(dirname("/home/gachaconr/tf/"), 'image.unit8')
with open(filename, 'wb') as file:
file.write(train_X[0])
print("image saved")
# Open the image file
image_array = imread(filename)
check_mnist_image_format(image_array)
</code></pre>
<p>Saving the image with extension .unit8 since that the format of the image database given by the properties given by the function <em>check_mnist_image_format</em></p>
<p>The image is saved as expected but the function imread cannot read it. This is the error:</p>
<pre><code>Traceback (most recent call last):
File "/home/gachaconr/tf/imagemnistdatacheck.py", line 40, in <module>
image_array = imread(filename)
^^^^^^^^^^^^^^^^
File "/home/gachaconr/tf/lib/python3.12/site-packages/matplotlib/pyplot.py", line 2607, in imread
return matplotlib.image.imread(fname, format)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/gachaconr/tf/lib/python3.12/site-packages/matplotlib/image.py", line 1512, in imread
with img_open(fname) as image:
^^^^^^^^^^^^^^^
File "/home/gachaconr/tf/lib/python3.12/site-packages/PIL/Image.py", line 3532, in open
raise UnidentifiedImageError(msg)
PIL.UnidentifiedImageError: cannot identify image file '/home/gachaconr/tf/image.unit8'
</code></pre>
<p>How can I accomplish the mentioned task? What am I doing wrong?</p>
|
<python><tensorflow><deep-learning><python-imaging-library><tensorflow-lite>
|
2025-03-31 16:33:41
| 2
| 369
|
gus
|
79,546,776
| 11,156,131
|
dbutils secret issue for container url
|
<p>I am trying to access a blob storage like this and it works</p>
<pre><code>from azure.storage.blob import ContainerClient
storage_account_name ='storage_account_name'
storage_container_name = 'storage_container_name'
sas_token = 'sv=<>&si=<>&sr=<>&sig=<>'
account_url = "https://"+storage_account_name+".blob.core.windows.net/"+storage_container_name+"?"+sas_token
container_client = ContainerClient.from_container_url(container_url=account_url)
blobs = container_client.list_blobs()
for blob in blobs:
print(blob)
</code></pre>
<p>But if I get the secret through dbutils and try to pass it through, it does not.</p>
<pre><code>from azure.storage.blob import ContainerClient
storage_account_name ='storage_account_name'
storage_container_name = 'storage_container_name'
sas_token = dbutils.secrets.get(scope=secret_scope,key='some_key')
account_url = "https://"+storage_account_name+".blob.core.windows.net/"+storage_container_name+"?"+sas_token
container_client = ContainerClient.from_container_url(container_url=account_url)
blobs = container_client.list_blobs()
for blob in blobs:
print(blob)
</code></pre>
<p>Is it possible to keep on using dbutils and make it work</p>
|
<python><azure-blob-storage><databricks><azure-storage><dbutils>
|
2025-03-31 16:32:46
| 1
| 4,376
|
smpa01
|
79,546,738
| 7,233,155
|
MyPy fails to narrow type on `isinstance`
|
<p>I have a large codebase which has been working successfully with <code>mypy</code> for many months. Recent additions have started to create spurious and seemingly unconnected issues.</p>
<p>For example I have recently got the following error, without modifying <code>VObj</code> at all - just adding objects in the same module.</p>
<pre class="lang-py prettyprint-override"><code>def func(vol: float | VObj) -> None:
if isinstance(vol, VObj):
x: float = vol[0] / 100.0
else:
x = vol /100.0 # mypy: "error: Unsupported operand types for / ("VObj" and "float") [operator]"
...
</code></pre>
<p>At the error line it is clear that <code>vol</code> cannot be <code>VObj</code> but <code>mypy</code> can no longer detect this.</p>
<p>Is there something that I am missing?</p>
|
<python><python-typing><mypy>
|
2025-03-31 16:16:52
| 1
| 4,801
|
Attack68
|
79,546,549
| 18,499,990
|
Anaconda Python3 virtual environment unable to recognise local modules
|
<p>I'm reproducing a project code and trying to execute it locally.</p>
<p>It uses Python 3.6 and old packages dating to 2017 and PIP struggles to install them and returned error codes which were 40 pages long.</p>
<p>My friend suggested me to use Anaconda to setup a virtual environment and use conda environment to install those old packages and python 3.6 and it was a success.</p>
<p>I cloned the code from github and tried running it as per the instructions in the readme and I've been encountering errors with almost every import statement that calls a local package.</p>
<p>These are my folder contents (not the full list but only showed enough contents for error debugging):</p>
<pre><code><repository-root>/
├─ data/
│ ├─ dataset_2017/
│ │ ├─ dataset_2017_8_formatted_macrosremoved/
│ │ └─ libtoolingfeatures_for_public_testing/
│ └─ ClassificationModels/
├─ evaluations/
│ ├─ blackbox/
│ │ └─ attack/
│ ├─ tutorial_classification.py
│ ├─ tutorial_classification_2.py
│ └─ learning/
│ ├─ post_learning_steps/
│ └─ rf_usenix/
├─ classification/
├─ featureextractionV2/
| |- __init__.py
├─ rnn_css/
├─ UnitTests/
└─ PyProject/
</code></pre>
<p>For Example, if I want to run a classification test, I use</p>
<pre class="lang-bash prettyprint-override"><code>conda activate myenv
cd .\evaluations\
python.exe tutorial_classification.py
</code></pre>
<p>It returns this error:</p>
<pre class="lang-bash prettyprint-override"><code>python.exe tutorial_classification.py
Traceback (most recent call last):
File "tutorial_classification.py", line 33, in <module>
from featureextractionV2.StyloFeaturesProxy import StyloFeaturesProxy
ModuleNotFoundError: No module named 'featureextractionV2'
</code></pre>
<p>You can notice that featureextractionV2 does have an <strong>init</strong>.py file yet it doesn't recognise it as a module</p>
<p>The same thing happens when I try to run other tests in other folders and running those files returns the same error saying that no module named evasion</p>
<p>For more details</p>
<p>I'm running Windows 11 and I use PyCharm Community as my editor.</p>
|
<python><python-3.x><anaconda><conda><python-module>
|
2025-03-31 14:51:23
| 1
| 371
|
SpaciousCoder78
|
79,546,240
| 6,101,024
|
Shuffle a dataset w.r.t a column value
|
<p>I have the following Dataframe, which contains, among others, UserID and rank_group as attribute:</p>
<pre><code> UserID Col2 Col3 rank_group
0 1 2 3 1
1 1 5 6 1
...
20 1 8 9 2
21 1 11 12 2
...
45 1 14 15 3
46 1 17 18 3
47 2 2 3 1
48 2 5 6 1
...
60 2 8 9 2
61 2 11 12 2
...
70 2 14 15 3
71 2 17 18 3
</code></pre>
<p>The dataframe has got an UserID, and for each user, it has rows with rank_group 1 on the top, followed by the rows with rank_group 2, etc. In other words, rank_group follows a specific progressive order, 1,2,3,4,etc</p>
<p>I would like to shuffle the order of the Dataframe's rows such that rank_group follow a random one. For example, if we compute the rank_group from 1 to n for each user, we should obtain after shuffling, the dataset reflecting any permutation from 1 to n.</p>
<p>I tried df.sample(frac=1) but it does not take into account the rank_group block but it mixes any row with any row. It is not what I am looking for. In my case, it has to maintain the same order within a fixed rank_group. Also, looked into the np.random.permutation, same issue here. Any help?</p>
|
<python><pandas><dataframe><numpy><shuffle>
|
2025-03-31 12:34:27
| 1
| 697
|
Carlo Allocca
|
79,546,174
| 2,050,262
|
Fabric PySpark Notebook Parsing JSON string with double quote escape chars
|
<p>I am working in a PySpark notebook that gets its input parameter as a JSON String from a Pipeline and the notebook need to process the string further. Below is the example string that the notebook gets as input and example code that try to process the string and fails because the Details section has text enclosed with " (<code>"Internal Server Error"</code>) and I am not sure how do I parse these unescaped " chars inside the message.</p>
<p>I am able to parse <code>\n</code> <code>\t</code> and <code>\r</code> but stuck with <code>\"</code>.
I can't even directly do replace on the entire file as the element names has to be enclosed with ".</p>
<p>Please suggest how can I parse these types of messages that are having text enclosed with ".</p>
<p>Here is the example JSON string that the notebook received and plan to perse.</p>
<pre><code>#Input String Received from Pipeline
var_str_log_data = "[{\"ApplicationName\": \"APP\",\"WorkspaceId\": \"0addb382-fa0d-4ce1-9c3c-95e32b957955\",\"Environment\": \"DEV\",\"Level\": \"ERROR\",\"Severity\": \"SEV 4\",\"Component\": \"PL_APP_Ingest\",\"Operation\": \"ETL\",\"Run_Id\": \"88276e4f-4e60-47ec-a6d6-dd9e475d2c3a\",\"SessionId\": \"\",\"Message\": \"[DEV] - APP - Pipeline Execution Failed - 20250329\",\"Status\": \"Success\",\"Details\": \"Pipeline Name: PL_APP_Ingest\\nRun Id: 88257e4f-4e60-34ec-a6d6-dd9e475d2c3a\\nError Message: Notebook execution failed at Notebook service with http status code - '200', please check the Run logs on Notebook, additional details - 'Error name - Py4JJavaError, Error value - An error occurred while calling o4628.load.\n: java.util.concurrent.ExecutionException: Operation failed: \"Internal Server Error\", 500, HEAD, http://msit-onelake.dfs.fabric.microsoft.com/MYWORKSPACE/dataverse_name_cds2_workspace_unq60feaf930cbf236eb2519b27f2d6b.Lakehouse/Tables/app_riskparcel/_delta_log?upn=false&action=getStatus&timeout=90\n at com.google.common.util.concurrent.AbstractFuture$Sync.getValue(AbstractFuture.java:306)\n at com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:293)\n at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3599)\n at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2379)\n ... 37 more\n' : \\nExecution URL: https://msit.powerbi.com/workloads/data-pipeline/artifacts/workspaces/0addb122-fa0d-4ce1-9c3c-95e25b957955/pipelines/PL_APP_Ingest/88223e4f-4e60-47ec-a6d6-dd9e475d2c3a?experience=power-bi\\nApp Name: APP\",\"CorrelationId\": \"\",\"User\": \"System\"}]"
#Example Code Try to Process the JSON provided a String
var_str_log_data = re.sub(r'(?<!\\)\n', r'\\n', var_str_log_data)
var_str_log_data = re.sub(r'(?<!\\)\t', r'\\t', var_str_log_data)
var_str_log_data = re.sub(r'(?<!\\)\r', r'\\r', var_str_log_data)
print(f"Log Data Format and Content: {type(var_str_log_data)} - {var_str_log_data}.")
try:
data = json.loads(var_str_log_data)
print("JSON loaded successfully")
except json.JSONDecodeError as e:
error_pos = e.pos
print("Invalid JSON:", e)
print(f"Character at position {error_pos}: {repr(var_str_log_data[error_pos-1])}")
start = max(0, error_pos - 20)
end = min(len(var_str_log_data), error_pos + 20)
print("Surrounding text:", repr(var_str_log_data[start:end]))
</code></pre>
<p>Thanks,
Prabhat</p>
|
<python><pyspark><jupyter-notebook>
|
2025-03-31 12:04:26
| 1
| 4,296
|
Prabhat
|
79,545,985
| 2,919,585
|
Find correct root of parametrized function given solution for one set of parameters
|
<p>Let's say I have a function <code>foo(x, a, b)</code> and I want to find a specific one of its (potentially multiple) roots <code>x0</code>, i.e. a value <code>x0</code> such that <code>foo(x0, a, b) == 0</code>. I know that for <code>(a, b) == (0, 0)</code> the root I want is <code>x0 == 0</code> and that the function changes continuously with <code>a</code> and <code>b</code>, so I can "follow" the root from <code>(0, 0)</code> to the desired <code>(a, b)</code>.</p>
<p>Here's an example function.</p>
<pre class="lang-py prettyprint-override"><code>def foo(x, a, b):
return (1 + a) * np.sin(a + b - x) - x
</code></pre>
<p><a href="https://i.sstatic.net/nS01uJFP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nS01uJFP.png" alt="example function plot" /></a></p>
<p>For <code>(a, b) == (0, 0)</code> I want to the root at <code>0</code>, for <code>(2, 0)</code> I want the one near <code>1.5</code> and for <code>(2, 1)</code> I want the one near <code>2.2</code>.</p>
<p>Now, this problem seems like one that may be common enough to have a prepared, fast solver in <code>scipy</code> or another standard package (or tools to easily and efficiently construct one). However, I don't know what terms to search for to find it (or verify that it doesn't exist). <strong>Is there a ready-made tool for this? What is this kind of problem called?</strong></p>
<hr />
<p>Clarifications:</p>
<ul>
<li>Depending on <code>(a, b)</code>, the "correct" root may disappear (e.g. for <code>(1, 3)</code> in the example). When this happens, returning <code>nan</code> is the preferred behavior, though this is not super important.</li>
<li>By "fast" I mostly mean that <em>many</em> parameter sets <code>(a, b)</code> can be quickly solved, not just a single one. I will go on calculating the root for a lot of different parameters, e.g. for plotting and integrating over them.</li>
</ul>
<hr />
<p>Here's a quickly put together reference implementation that does pretty much what I want for the example above (and creates the plot). It's not very fast, of course, which somewhat limits me in my actual application.</p>
<pre><code>import functools
import numpy as np
from scipy.optimize import root_scalar
from matplotlib import pyplot as plt
def foo(x, a, b):
return (1 + a) * np.sin(a + b - x) - x
fig, ax = plt.subplots()
ax.grid()
ax.set_xlabel("x")
ax.set_ylabel("foo")
x = np.linspace(-np.pi, np.pi, 201)
ax.plot(x, foo(x, 0, 0), label="(a, b) = (0, 0)")
ax.plot(x, foo(x, 2, 0), label="(a, b) = (2, 0)")
ax.plot(x, foo(x, 2, 1), label="(a, b) = (2, 1)")
ax.legend()
plt.show()
# Semi-bodged solver for reference:
def _dfoo(x, a, b):
return -(1 + a) * np.cos(a + b - x) - 1
def _solve_fooroot(guess, a, b):
if np.isnan(guess):
return np.nan
# Determine limits for finding the root.
# Allow for slightly larger limits to account for numerical imprecision.
maxlim = 1.1 * (1 + a)
y0 = foo(guess, a, b)
if y0 == 0:
return guess
dy0 = _dfoo(guess, a, b)
estimate = -y0 / dy0
# Too small estimates are no good.
if np.abs(estimate) < 1e-2 * maxlim:
estimate = np.sign(estimate) * 1e-2 * maxlim
for lim in np.arange(guess, guess + 10 * estimate, 1e-1 * estimate):
if np.sign(foo(lim, a, b)) != np.sign(y0):
bracket = sorted([guess, lim])
break
else:
return np.nan
sol = root_scalar(foo, (a, b), bracket=bracket)
return sol.root
@functools.cache
def _fooroot(an, astep, bn, bstep):
if an == 0:
if bn == 0:
return 0
guessan, guessbn = an, bn - 1
else:
guessan, guessbn = an - 1, bn
# Avoid reaching maximum recursion depth.
for thisbn in range(0, guessbn, 100):
_fooroot(0, astep, thisbn, bstep)
for thisan in range(0, guessan, 100):
_fooroot(thisan, astep, guessbn, bstep)
guess = _fooroot(guessan, astep, guessbn, bstep)
return _solve_fooroot(guess, an * astep, bn * bstep)
@np.vectorize
def fooroot(a, b):
astep = (-1 if a < 0 else 1) * 1e-2
bstep = (-1 if b < 0 else 1) * 1e-2
guess = _fooroot(int(a / astep), astep, int(b / bstep), bstep)
return _solve_fooroot(guess, a, b)
print(fooroot(0, 0))
print(fooroot(2, 0))
print(fooroot(2, 1))
fig, ax = plt.subplots()
ax.grid()
ax.set_xlabel("b")
ax.set_ylabel("fooroot(a, b)")
b = np.linspace(-3, 3, 201)
for a in [0, 0.2, 0.5]:
ax.plot(b, fooroot(a, b), label=f"a = {a}")
ax.legend()
plt.show()
fig, ax = plt.subplots()
ax.grid()
ax.set_xlabel("a")
ax.set_ylabel("b")
a = np.linspace(-1, 1, 201)
b = np.linspace(-3.5, 3.5, 201)
aa, bb = np.meshgrid(a, b)
pcm = ax.pcolormesh(aa, bb, fooroot(aa, bb))
fig.colorbar(pcm, label="fooroot(a, b)")
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/QCC6fWnZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QCC6fWnZ.png" alt="reference solution plot" /></a></p>
<p><a href="https://i.sstatic.net/QsxLBMEn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QsxLBMEn.png" alt="reference solution colormesh" /></a></p>
|
<python><scipy><numerical-methods><scipy-optimize>
|
2025-03-31 10:14:17
| 2
| 571
|
schtandard
|
79,545,926
| 4,999,991
|
Python pyserial can't access com0com virtual serial ports even though they appear in Device Manager and com0com GUI
|
<p>I've set up virtual COM ports with com0com (COM20 and COM21), but I'm having trouble accessing them through Python's pyserial.</p>
<h3>What I've tried</h3>
<p>I can successfully create the port pair using com0com's <code>setupc.exe</code>:</p>
<pre><code>> setupc.exe install PortName=COM20 PortName=COM21
</code></pre>
<p>The virtual port pair is visible in the com0com GUI:</p>
<p><a href="https://i.sstatic.net/M6J0Pugp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/M6J0Pugp.png" alt="enter image description here" /></a></p>
<p>and they appear under <code>com0com - serial port emulators</code> in device manager:</p>
<p><a href="https://i.sstatic.net/nNHDTSPN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nNHDTSPN.png" alt="enter image description here" /></a></p>
<p>But when I try to access these ports with pyserial, I get a FileNotFoundError:</p>
<pre class="lang-py prettyprint-override"><code>import serial
import serial.tools.list_ports
# List available ports
print("Available COM ports:")
for port in serial.tools.list_ports.comports():
print(f" {port.device}: {port.description}")
# Try to open the virtual port
try:
ser = serial.Serial('COM20', 9600, timeout=1)
print("Successfully opened COM20")
ser.close()
except Exception as e:
print(f"Error: {e}")
</code></pre>
<p>Output:</p>
<pre><code>Available COM ports:
COM7: Standard Serial over Bluetooth link (COM7)
COM3: Intel(R) Active Management Technology - SOL (COM3)
COM6: Standard Serial over Bluetooth link (COM6)
Error: could not open port 'COM20': FileNotFoundError(2, 'The system cannot find the file specified.', None, 2)
</code></pre>
<h3>Environment</h3>
<ul>
<li>Windows 10</li>
<li>Python 3.12.2</li>
<li>pyserial 3.5</li>
<li>com0com 3.0.0.0</li>
<li>Running script with administrator privileges</li>
</ul>
<h3>Questions</h3>
<ol>
<li>Why aren't the com0com virtual ports appearing in the pyserial port list?</li>
<li>How can I make pyserial recognize and use com0com virtual ports?</li>
<li>Is there a delay or extra step needed after creating ports before they're accessible?</li>
</ol>
<p>I've already tried:</p>
<ul>
<li>Running as administrator</li>
<li>Creating ports with higher numbers (COM20, COM21)</li>
<li>Verifying the ports in Device Manager and com0com GUI</li>
<li>Restarting the computer</li>
</ul>
<p>Any insights or alternative approaches would be greatly appreciated.</p>
|
<python><windows><serial-port><pyserial><com0com>
|
2025-03-31 09:37:35
| 0
| 14,347
|
Foad S. Farimani
|
79,545,895
| 8,144,957
|
Image upload corruption with SeaweedFS S3 API
|
<h2>Problem Description</h2>
<p>I'm experiencing an issue where images uploaded through Django (using boto3) to SeaweedFS's S3 API are corrupted, while uploads through S3 Browser desktop app work correctly. The uploaded files are 55 bytes larger than the original and contain a <code>Content-Encoding: aws-chunked</code> header, making the images unopenable.</p>
<h2>Environment Setup</h2>
<ul>
<li><strong>Storage</strong>: SeaweedFS with S3 API</li>
<li><strong>Proxy</strong>: nginx (handling SSL)</li>
<li><strong>Framework</strong>: Django</li>
<li><strong>Storage Client</strong>: boto3</li>
<li><strong>Upload Method</strong>: Using Django's storage backend with PrivateStorage</li>
</ul>
<h2>Issue Details</h2>
<ol>
<li><p>When uploading through S3 Browser desktop app:</p>
<ul>
<li>File size matches original</li>
<li>Image opens correctly</li>
<li>No corruption issues</li>
</ul>
</li>
<li><p>When uploading through Django/boto3:</p>
<ul>
<li>File size increases by 55 bytes</li>
<li>Response includes <code>Content-Encoding: aws-chunked</code></li>
<li>Image becomes corrupted and unopenable</li>
<li>First bytes contain unexpected data (<code>100000</code>)</li>
<li>Last bytes end with <code>.x-amz-checksum-</code></li>
</ul>
</li>
</ol>
<h2>Example of Corrupted File</h2>
<pre><code>Original file size: 12345 bytes
Uploaded file size: 12400 bytes (+55 bytes)
First bytes: 100000...
Last bytes: ...x-amz-checksum-crc32:SJJ2UA==
</code></pre>
<h2>Attempted Solutions</h2>
<ol>
<li>Tried different upload methods:
<pre class="lang-py prettyprint-override"><code># Method 1: Using ContentFile
storage.save(path, ContentFile(file_content))
# Method 2: Using Django File object
storage.save(path, File(file))
# Method 3: Direct boto3 upload
client.upload_fileobj(f, bucket_name, path)
</code></pre>
</li>
</ol>
<h2>Questions</h2>
<ol>
<li>Is this a known issue with SeaweedFS's S3 API implementation?</li>
<li>Is there a way to disable the <code>aws-chunked</code> encoding in boto3?</li>
<li>Are there specific headers or configurations needed in the nginx proxy to handle binary uploads correctly?</li>
</ol>
<h2>Additional Information</h2>
<ul>
<li>Django version: 5.1.7</li>
<li>boto3 version: 1.37.23</li>
</ul>
<p>Any help or insights would be greatly appreciated 🙏!</p>
<p><a href="https://i.sstatic.net/82XdRFCT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/82XdRFCT.png" alt="enter image description here" /></a></p>
<h1>Sample code I tried:</h1>
<pre class="lang-py prettyprint-override"><code>import boto3
AWS_ACCESS_KEY_ID=''
AWS_SECRET_ACCESS_KEY=''
API_URL=''
bucket_name = 'sample-bucket'
s3 = boto3.client('s3',
aws_access_key_id=AWS_ACCESS_KEY_ID,
aws_secret_access_key=AWS_SECRET_ACCESS_KEY,
endpoint_url=API_URL)
testfile = r"image.png"
s3.upload_file(testfile, bucket_name, 'sample.png', ExtraArgs={'ContentType': 'image/png'})
</code></pre>
|
<python><django><boto3><seaweedfs>
|
2025-03-31 09:20:25
| 1
| 328
|
Nguyễn Anh Bình
|
79,545,706
| 3,676,517
|
What is the difference between an xarray Dataset and a DataArray?
|
<p>According to <a href="https://docs.xarray.dev/en/stable/user-guide/data-structures.html#dataarray" rel="nofollow noreferrer">the xarray documentation</a>:</p>
<blockquote>
<p><code>xarray.DataArray</code> is xarray’s implementation of a labeled, multi-dimensional array.</p>
</blockquote>
<p>To me, this means a <code>DataArray</code> is just <code>numpy</code> with labels. However, xarray provides a second data structure, called <code>Dataset</code>, <a href="https://docs.xarray.dev/en/stable/user-guide/data-structures.html#dataset" rel="nofollow noreferrer">whose documentation is</a></p>
<blockquote>
<p><code>xarray.Dataset</code> is xarray’s multi-dimensional equivalent of a <code>DataFrame</code>.</p>
</blockquote>
<p>Since a Pandas <code>DataFrame</code> is just a 2D numpy array, with labels, can't I still use a <code>DataArray</code> instead? In fact, two <code>DataArray</code> like</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import xarray as xr
foo = xr.DataArray(
data=np.zeros((4, 3)),
coords={
"x": np.arange(4),
"y": np.arange(3),
})
bar = xr.DataArray(
data=np.zeros((4, 3)),
coords={
"x": np.arange(4),
"y": np.arange(3),
})
</code></pre>
<p>can be grouped together using a string coordinate</p>
<pre class="lang-py prettyprint-override"><code>da = xr.DataArray(
data=np.zeros((4, 3, 2)),
coords={
"x": np.arange(4),
"y": np.arange(3),
"key": ["foo", "bar"],
})
</code></pre>
<p>The second part of the Dataset documentation is a little clearer</p>
<blockquote>
<p>[a <code>Dataset</code>] is a dict-like container of labeled arrays (<code>DataArray</code> objects) with aligned dimensions.</p>
</blockquote>
<p>However, from the examples in the documentation it appears that some of the <code>Dataset</code> usage can be perfectly covered by a <code>DataArray</code>.</p>
<p>Is there any other difference that I am missing?</p>
|
<python><python-xarray>
|
2025-03-31 07:34:26
| 1
| 1,113
|
Jommy
|
79,545,545
| 9,078,142
|
Does comparing the substring of two strings create new strings in Python?
|
<p>If I do <code>string_a[i:i + k] == string_b[j:j + k]</code>, is this inefficient because slicing creates a new string because strings are immutable? Or does Python optimize this?</p>
<p>Is it possible that using a for loop and comparing character by character could be more efficient?</p>
<p>I'm not sure that stackoverflow.com/questions/49950747/… is a duplicate because the answers there do not mention slicing a string using [] and then comparing. I already looked at that question before, but still wasn't sure about the answer.</p>
|
<python><algorithm>
|
2025-03-31 05:31:44
| 1
| 495
|
dan dan
|
79,545,452
| 12,096,670
|
SAS if-then Statement to Python Vectorized Approach - My Values are Not Matching Up
|
<p>I am trying to replicate some values in a dataset. The original data that I am running my code against for verification purposes has two categories across multiple groups, like so:</p>
<pre><code>grp5
0 3941
1 459
grp6
0 4120
1 280
grp7
0 4300
1 100
</code></pre>
<p>The original code that was used to create the categories across groups was written in SAS, a straightforward if-then macro statement, see below:</p>
<pre><code>%macro E
IF 4 <= n <= 5
THEN grp5 = 1;
ELSE IF 2 <= n <= 3
THEN grp6 = 1;
ELSE grp7 = 1;
%mend E
</code></pre>
<p>With my Python code I should also get the same number of cases in each category across groups, however, there are some discrepancies between the values I am getting and I'm not sure why.
Below is my Python script and the values I am getting.</p>
<pre><code># Initialize columns
df['grp5'] = 0
df['grp6'] = 0
df['grp7'] = 0
# create boolean conditions
cond5 = df['n'].between(4, 5)
cond6 = df['n'].between(2, 3)
# Apply conditions
df.loc[cond5, 'grp5'] = 1
df.loc[~cond5 & cond6, 'grp6'] = 1
df.loc[~cond5 & ~cond6, 'grp7'] = 1
grp5
0 3878
1 522
grp6
0 2437
1 1963
grp7
0 2485
1 1915
</code></pre>
|
<python><if-statement><conditional-statements><sas>
|
2025-03-31 03:36:44
| 0
| 845
|
GSA
|
79,545,386
| 327,026
|
Inverse groupby to assign parent dataframe?
|
<p>I have irregular 3D point data that looks something like this:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd
xx, yy = np.meshgrid(
np.linspace(-50, 50, 101),
np.linspace(-50, 50, 101),
)
rng = np.random.default_rng(12345)
xx += rng.normal(size=101 * 101).reshape((101, 101))
yy += rng.normal(size=101 * 101).reshape((101, 101))
df3d = pd.DataFrame({
"X": np.broadcast_to(xx, (11, 101, 101)).T.flatten(),
"Y": np.broadcast_to(yy, (11, 101, 101)).T.flatten(),
"Z": np.broadcast_to(np.arange(11, dtype=float), (101, 101, 11)).flatten(),
})
</code></pre>
<p><code>df3d</code></p>
<pre><code> X Y Z
0 -51.423825 -51.287428 0.0
1 -51.423825 -51.287428 1.0
2 -51.423825 -51.287428 2.0
3 -51.423825 -51.287428 3.0
4 -51.423825 -51.287428 4.0
... ... ...
112206 51.593733 50.465087 6.0
112207 51.593733 50.465087 7.0
112208 51.593733 50.465087 8.0
112209 51.593733 50.465087 9.0
112210 51.593733 50.465087 10.0
[112211 rows x 3 columns]
</code></pre>
<p>With my analysis, I need to group these into 2D locations with 1 or more Z measures (it's not always 11 for my real-world data):</p>
<pre class="lang-py prettyprint-override"><code>gb2d = df3d.groupby(["X", "Y"])
df2d = gb2d["Z"].count().to_frame("count")
df2d["Zmin"] = gb2d["Z"].min()
df2d["Zmax"] = gb2d["Z"].max()
</code></pre>
<p><code>df2d.reset_index()</code></p>
<pre><code> X Y count Zmin Zmax
0 -51.995857 -49.653017 11 0.0 10.0
1 -51.939229 24.073164 11 0.0 10.0
2 -51.740996 -5.415639 11 0.0 10.0
3 -51.645503 21.830189 11 0.0 10.0
4 -51.639759 -42.850923 11 0.0 10.0
... ... ... ... ...
10196 51.593733 50.465087 11 0.0 10.0
10197 51.905789 37.538099 11 0.0 10.0
10198 51.989935 -32.464752 11 0.0 10.0
10199 52.530599 -40.110744 11 0.0 10.0
10200 52.902015 -6.111877 11 0.0 10.0
[10201 rows x 5 columns]
</code></pre>
<p><strong>Question:</strong> How would I assign the integer index from df2d (shown above) back to the parent df3d frame?</p>
<p>My best attempt works, but does not scale well with larger frames. E.g.:</p>
<pre class="lang-py prettyprint-override"><code>idx2d = pd.Series(np.arange(len(df2d)), index=df2d.index)
df3d["idx2d"] = idx2d.loc[df3d[["X", "Y"]].to_records(index=False).tolist()].values
</code></pre>
<p>works for this sample size, but takes up beyond my 32 GB RAM with my real-world data of 24 million points. What's a better way that won't eat up all of my RAM?</p>
|
<python><pandas><group-by>
|
2025-03-31 02:01:19
| 1
| 44,290
|
Mike T
|
79,545,334
| 3,423,768
|
How to avoid redundant manual assignment of environment variables in Django settings?
|
<p>In my Django project, I store configuration variables in a .env file for security and flexibility. However, every time I introduce a new environment variable, I have to define it in two places: <code>.env</code> and <code>settings.py</code>.</p>
<p>As the project grows and the number of environment variables increases, <code>settings.py</code> become filled with redundant redefinition of what's already in <code>.env</code>.</p>
<p>Is there a way to automatically load all <code>.env</code> variables into Django's settings without manually reassigning each one? Ideally, I want any new variable added to <code>.env</code> to be instantly available from <code>settings</code> module without extra code.</p>
<p>What I can think of is something like:</p>
<pre class="lang-py prettyprint-override"><code>from dotenv import dotenv_values
env_variables = dotenv_values(".envs")
globals().update(env_variables)
</code></pre>
<p>Or even something a little bit better to handle values of type list.</p>
<pre class="lang-py prettyprint-override"><code>for key, value in env_variables.items():
globals()[key] = value.split(",") if "," in value else value
# Ensure ALLOWED_HOSTS is always a list
ALLOWED_HOSTS = ALLOWED_HOSTS if isinstance(ALLOWED_HOSTS, list) else [ALLOWED_HOSTS]
</code></pre>
<p>But I do not like to mess around with <code>globals()</code>. Also, I'm not sure if this kind of variable management is a good idea, or if I should stick to my previous approach for better readability and to maintain a reference to each environment variable in the code.</p>
|
<python><python-3.x><django>
|
2025-03-31 00:51:45
| 0
| 2,928
|
Ravexina
|
79,545,103
| 2,856,552
|
Plotting date on x-axis from integers converted to dates
|
<p><a href="https://i.sstatic.net/tdu1Elyf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tdu1Elyf.png" alt="Final output" /></a>In my python code, reproduced below, where I sought assistance here <a href="https://stackoverflow.com/questions/79542474/results-of-assistance-with-a-simple-python-line-plot">Results of Assistance with a simple python line plot</a>, I am plotting days as integers against cumulative rainfall values.
With assistance from here <a href="https://stackoverflow.com/questions/42873653/convert-integer-to-dates-in-python">Convert integer to dates in python</a>, I have been able to convert the day numbers to dates from 1st October. In my data generation, the number of days from 1st October to 31st March is taken as 183 (including leap year).
In converting from integer to date, print(days) gives only 182 dates (29Feb excluded). In the plot function, I have used</p>
<pre><code>plt.plot(days, df['Mean'], color='b', label='Mean') #days being integers converted to dates.
</code></pre>
<p>I get the error</p>
<pre><code>ValueError: x and y must have same first dimension, but have shapes (1,) and (183,)
</code></pre>
<p>However, plotting with the function</p>
<pre><code>plt.plot(df['Date'], df['Mean'], color='b', label='Mean') #Works fine
</code></pre>
<p>works fine.</p>
<p>The code</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import datetime
df = pd.read_csv('../RnOut/RAINFALL/Cumul/CHIPAT.txt')
fig, ax = plt.subplots(figsize=(10, 10))
x = np.array([min(df['Date']), max(df['Date'])])
y = np.array([min(df['Mean']), max(df['Mean'])])
#Convert day numbers to date
#***************************
days = {}
for i in range(0, max(df['Date'])):
days[i] = (datetime.datetime(2000, 10, 1) + datetime.timedelta(days=i)).strftime("%d %b")
ax.set_title('Cumulative Rainfall: Oct-Mar\n *****************************')
plt.xticks(np.arange(0, max(df['Date']),step=10))
plt.xlabel("Days From 01 Oct")
plt.ylabel("Cumulative Rainfall (mm)")
plt.plot(days, df['Mean'], color='b', label='Mean')
plt.show()
</code></pre>
<p>Please assist, how do I use the generated dates?</p>
|
<python>
|
2025-03-30 20:06:04
| 1
| 1,594
|
Zilore Mumba
|
79,545,102
| 2,233,683
|
how to properly refresh access token for google pubsub_v1.SubscriberClient
|
<p>In my fast api server, it subscribes to the google realtime developer notifications.</p>
<p>python version 3.12.7
"google-cloud-pubsub (>=2.29.0,<3.0.0)"</p>
<p>The subscriber can be successfully initialized as below:</p>
<pre><code> # self.credentials is a dict from google service account credential json file
self.subscriber = pubsub_v1.SubscriberClient.from_service_account_info(
self.credentials)
</code></pre>
<p>and used like this:</p>
<pre><code> self._streaming_pull_future = self.subscriber.subscribe(
self.subscription_path,
callback=callback,
flow_control=flow_control,
)
logger.info(f"Listening for messages on {self.subscription_path}")
while self._running:
try:
if self._streaming_pull_future.done():
break
await asyncio.sleep(1)
except asyncio.CancelledError:
break
</code></pre>
<p>The subscriber client can initialized, successfully pulling and ack the messages. However, after about 30 minute, it throw out the error:</p>
<pre><code>Error in notification worker: 401 Request had invalid authentication credentials. Expected OAuth 2 access token, login cookie or other valid authentication credential. See https://developers.google.com/identity/sign-in/web/devconsole-project. [reason: "ACCESS_TOKEN_EXPIRED"
domain: "googleapis.com"
metadata {
key: "service"
value: "pubsub.googleapis.com"
}
metadata {
key: "method"
value: "google.pubsub.v1.Subscriber.StreamingPull"
}
]
</code></pre>
<p>The error indicates the access token has timed out.
I have several questions:
Is there a way to initialize the subscriber client so that it can automatically refresh the token itself?
If not, what is the good practice that developer explicitly refresh the token, pass it to the subscriber?</p>
|
<python><google-cloud-platform><google-play>
|
2025-03-30 20:06:02
| 0
| 3,090
|
GeauxEric
|
79,545,081
| 3,604,745
|
Why does YOLO download a nano v11 model (yolov11n) when given the extra larger v12 weights (yolov12x)?
|
<p>I run the following YOLO code with extra large v12 weights, yet it seems to download the yolov11n model for some reason and uses that. I'm trying to understand this behavior, as I don't see it in the <a href="https://docs.ultralytics.com/modes/train/#train-settings" rel="nofollow noreferrer">documentation</a>.</p>
<pre><code>from ultralytics import YOLO
import torch
torch.cuda.empty_cache()
model = YOLO("/content/yolo12x.pt") # also tried yolo12l.pt
mydata = '/content/yolo_model_definition.yaml'
results = model.train(
amp=True,
batch=256,
cache='disk',
cos_lr=True,
data=mydata,
deterministic=False,
dropout=0.1,
epochs=100,
exist_ok=True,
fraction=1.0,
freeze=[0, 1, 2, 3, 4],
imgsz=224,
lr0=0.001,
lrf=0.0001,
mask_ratio=4,
multi_scale=True,
nbs=512,
optimizer='auto',
patience=15,
plots=True,
pretrained=True,
project='aihardware',
name='fine_tune_run',
rect=False,
resume=False,
seed=42,
val=True,
verbose=True,
weight_decay=0.0005,
warmup_bias_lr=0.1,
warmup_epochs=3.0,
warmup_momentum=0.8,
close_mosaic=10,
cls=1.0,
pose=0,
overlap_mask=True
)
print(results)
</code></pre>
<p>Here's some of the output:</p>
<pre><code>Freezing layer 'model.4.m.1.m.1.cv1.bn.weight'
Freezing layer 'model.4.m.1.m.1.cv1.bn.bias'
Freezing layer 'model.4.m.1.m.1.cv2.conv.weight'
Freezing layer 'model.4.m.1.m.1.cv2.bn.weight'
Freezing layer 'model.4.m.1.m.1.cv2.bn.bias'
Freezing layer 'model.21.dfl.conv.weight'
AMP: running Automatic Mixed Precision (AMP) checks...
Downloading https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11n.pt to 'yolo11n.pt'...
100%|██████████| 5.35M/5.35M [00:00<00:00, 396MB/s]
AMP: checks passed ✅
train: Scanning /content/new/labels.cache... 739 images, 84 backgrounds, 0 corrupt: 100%|██████████| 739/739 [00:00<?, ?it/s]
train: Caching images (0.1GB Disk): 100%|██████████| 739/739 [00:00<00:00, 59497.67it/s]
albumentations: Blur(p=0.01, blur_limit=(3, 7)), MedianBlur(p=0.01, blur_limit=(3, 7)), ToGray(p=0.01, num_output_channels=3, method='weighted_average'), CLAHE(p=0.01, clip_limit=(1.0, 4.0), tile_grid_size=(8, 8))
val: Scanning /content/val/labels... 141 images, 16 backgrounds, 0 corrupt: 100%|██████████| 141/141 [00:00<00:00, 998.33it/s]val: New cache created: /content/val/labels.cache
val: Caching images (0.0GB Disk): 100%|██████████| 141/141 [00:00<00:00, 5131.07it/s]
Plotting labels to aihardware/fine_tune_run/labels.jpg...
optimizer: 'optimizer=auto' found, ignoring 'lr0=0.001' and 'momentum=0.937' and determining best 'optimizer', 'lr0' and 'momentum' automatically...
optimizer: AdamW(lr=0.000833, momentum=0.9) with parameter groups 205 weight(decay=0.0), 214 weight(decay=0.0005), 211 bias(decay=0.0)
TensorBoard: model graph visualization added ✅
Image sizes 224 train, 224 val
Using 8 dataloader workers
Logging results to aihardware/fine_tune_run
Starting training for 100 epochs...
</code></pre>
<p>Of course, the part of that I'm concerned about is:</p>
<blockquote>
<pre><code>Downloading https://github.com/ultralytics/assets/releases/download/
v8.3.0/yolo11n.pt to 'yolo11n.pt'...
100%|██████████| 5.35M/5.35M [00:00<00:00, 396MB/s]
</code></pre>
</blockquote>
<p>I have also tried changing <code>pretrained</code> from <code>True</code> to a string path to the weights (not that it should need it, since the weights were specified in the creation of the model object). The problem does not happen if I use the YOLO CLI instead of Python:</p>
<pre><code>!yolo train model=/content/yolo12l.pt \
data=/content/yolo_model_definition.yaml \
epochs=100 \
batch=32 \
imgsz=224 \
lr0=0.001 \
lrf=0.0001 \
weight_decay=0.0005 \
dropout=0.1 \
multi_scale=True \
optimizer=auto \
project=aihardware \
name=fine_tune_run \
exist_ok=True \
cos_lr=True \
cache=disk \
val=True \
plots=True \
seed=42 \
warmup_epochs=3.0 \
warmup_bias_lr=0.1 \
warmup_momentum=0.8 \
patience=15 \
cls=1.0 \
mask_ratio=4 \
close_mosaic=10 \
overlap_mask=True \
freeze=0,1,2,3,4 \
device=0 \
amp=True \
fraction=1.0 \
verbose=True
</code></pre>
<p>The code above uses the v12 weights and does not download v11. However, <code>ultralytics</code> seems to be the current version, 8.3.99, so I don't understand the behavior in Python.</p>
<p>Here are the checks:</p>
<pre><code>Ultralytics 8.3.99 🚀 Python-3.11.11 torch-2.6.0+cu124 CUDA:0 (NVIDIA A100-SXM4-40GB, 40507MiB)
Setup complete ✅ (12 CPUs, 83.5 GB RAM, 42.9/235.7 GB disk)
OS Linux-6.1.85+-x86_64-with-glibc2.35
Environment Colab
Python 3.11.11
Install pip
Path /usr/local/lib/python3.11/dist-packages/ultralytics
RAM 83.48 GB
Disk 42.9/235.7 GB
CPU Intel Xeon 2.20GHz
CPU count 12
GPU NVIDIA A100-SXM4-40GB, 40507MiB
GPU count 1
CUDA 12.4
numpy ✅ 2.0.2<=2.1.1,>=1.23.0
matplotlib ✅ 3.10.0>=3.3.0
opencv-python ✅ 4.11.0.86>=4.6.0
pillow ✅ 11.1.0>=7.1.2
pyyaml ✅ 6.0.2>=5.3.1
requests ✅ 2.32.3>=2.23.0
scipy ✅ 1.14.1>=1.4.1
torch ✅ 2.6.0+cu124>=1.8.0
torch ✅ 2.6.0+cu124!=2.4.0,>=1.8.0; sys_platform == "win32"
torchvision ✅ 0.21.0+cu124>=0.9.0
tqdm ✅ 4.67.1>=4.64.0
psutil ✅ 5.9.5
py-cpuinfo ✅ 9.0.0
pandas ✅ 2.2.2>=1.1.4
seaborn ✅ 0.13.2>=0.11.0
ultralytics-thop ✅ 2.0.14>=2.0.0
{'OS': 'Linux-6.1.85+-x86_64-with-glibc2.35',
'Environment': 'Colab',
'Python': '3.11.11',
'Install': 'pip',
'Path': '/usr/local/lib/python3.11/dist-packages/ultralytics',
'RAM': '83.48 GB',
'Disk': '42.9/235.7 GB',
'CPU': 'Intel Xeon 2.20GHz',
'CPU count': 12,
'GPU': 'NVIDIA A100-SXM4-40GB, 40507MiB',
'GPU count': 1,
'CUDA': '12.4',
'Package Info': {'numpy': '✅ 2.0.2<=2.1.1,>=1.23.0',
'matplotlib': '✅ 3.10.0>=3.3.0',
'opencv-python': '✅ 4.11.0.86>=4.6.0',
'pillow': '✅ 11.1.0>=7.1.2',
'pyyaml': '✅ 6.0.2>=5.3.1',
'requests': '✅ 2.32.3>=2.23.0',
'scipy': '✅ 1.14.1>=1.4.1',
'torch': '✅ 2.6.0+cu124!=2.4.0,>=1.8.0; sys_platform == "win32"',
'torchvision': '✅ 0.21.0+cu124>=0.9.0',
'tqdm': '✅ 4.67.1>=4.64.0',
'psutil': '✅ 5.9.5',
'py-cpuinfo': '✅ 9.0.0',
'pandas': '✅ 2.2.2>=1.1.4',
'seaborn': '✅ 0.13.2>=0.11.0',
'ultralytics-thop': '✅ 2.0.14>=2.0.0'}}
</code></pre>
|
<python><machine-learning><yolo><yolov11>
|
2025-03-30 19:43:01
| 0
| 23,531
|
Hack-R
|
79,544,866
| 2,648,551
|
How to automatically update extra column in sqlalchemy association table?
|
<p>I have a <a href="https://www.postgresql.org/" rel="nofollow noreferrer">postgresql database</a> setup with <a href="https://www.sqlalchemy.org/" rel="nofollow noreferrer">sqlalchemy</a>, <a href="https://docs.sqlalchemy.org/en/20/orm/inheritance.html#joined-table-inheritance" rel="nofollow noreferrer">joined table inheritance</a> and a <a href="https://docs.sqlalchemy.org/en/20/orm/basic_relationships.html#many-to-many" rel="nofollow noreferrer">many-to-many association table</a> with an extra column <code>position</code>, i.e. <code>assessments</code> and <code>tasks</code> are related via the <code>assessments_tasks</code> association table and this table has the extra column.</p>
<p>If I now want to add an assessment, which can have multiple tasks like e.g. one <code>primer</code> and two <code>exercises</code>, I want that the position in the association table is automatically written according to the position in the <code>tasks</code> list.</p>
<pre class="lang-py prettyprint-override"><code>DbAssessment(tasks=[DbPrimer(), DbExercise(), DbExercise()])
</code></pre>
<p>This should insert one entry in <code>DbAssessment</code>, one entry in <code>DbPrimer</code>, two entries in <code>DbExercise</code>, three entries in <code>DbTask</code> and these three tasks should be in the <code>DbAssessmentsTasks</code> with following positions (automatically calculated):</p>
<pre class="lang-none prettyprint-override"><code>--------------------------------------------
| position | assessment_id | task_id |
--------------------------------------------
| 1 | assessment.id | primer_1.id |
| 2 | assessment.id | exercise_1.id |
| 3 | assessment.id | exercise_2.id |
--------------------------------------------
</code></pre>
<p>The problem is, that I am stuck at getting the positions automatically calculated and inserted in the association table. I want that the business logic is completely in the hand of sqlalchemy or postgresql. I tried using an <a href="https://docs.sqlalchemy.org/en/20/orm/extensions/associationproxy.html#module-sqlalchemy.ext.associationproxy" rel="nofollow noreferrer">AssociationProxy</a>, but I have problems to understand how the toy example can be used in my case and also how to get the <a href="https://docs.sqlalchemy.org/en/20/orm/extensions/associationproxy.html#creation-of-new-values" rel="nofollow noreferrer">creator</a> working. I hope that I can get at least some help which gets me in the right direction.</p>
<p>Here is a minimal example, which can be run out-of-the-box after installing <code>sqlalchemy testcontainers psycopg2</code>:</p>
<pre class="lang-py prettyprint-override"><code>from uuid import UUID
from sqlalchemy import ForeignKey, Integer, String, UniqueConstraint, Uuid, create_engine, text
from sqlalchemy.orm import (
DeclarativeBase, Mapped, declared_attr, mapped_column, relationship, sessionmaker
)
from testcontainers.postgres import PostgresContainer
class DbBase(DeclarativeBase):
id: Mapped[UUID] = mapped_column(Uuid, primary_key=True, server_default=text("gen_random_uuid()"))
class DbAssessment(DbBase):
__tablename__ = "assessments"
tasks: Mapped[list["DbTask"]] = relationship(secondary="assessment_tasks", back_populates="assessments")
class DbTask(DbBase):
__tablename__ = "tasks"
__mapper_args__ = {"polymorphic_on": "task_type", "polymorphic_identity": "task"}
assessments: Mapped[list["DbAssessment"]] = relationship(secondary="assessment_tasks", back_populates="tasks")
task_type: Mapped[str] = mapped_column(String(length=8), nullable=False)
class DbPrimer(DbTask):
__tablename__ = "primers"
__mapper_args__ = {"polymorphic_identity": "primer"}
id: Mapped[UUID] = mapped_column(ForeignKey("tasks.id"), primary_key=True, nullable=False)
class DbExercise(DbTask):
__tablename__ = "exercises"
__mapper_args__ = {"polymorphic_identity": "exercise"}
id: Mapped[UUID] = mapped_column(ForeignKey("tasks.id"), primary_key=True, nullable=False)
class DbAssessmentsTasks(DbBase):
__tablename__ = "assessment_tasks"
__table_args__ = (UniqueConstraint("assessment_id", "position"),)
@declared_attr
def id(cls): return None
position: Mapped[int] = mapped_column(Integer, nullable=False)
assessment_id: Mapped[UUID] = mapped_column(ForeignKey("assessments.id"), primary_key=True)
task_id: Mapped[UUID] = mapped_column(ForeignKey("tasks.id"), primary_key=True)
if __name__ == "__main__":
with PostgresContainer("postgres:latest") as postgres:
engine = create_engine(postgres.get_connection_url())
DbBase.metadata.create_all(bind=engine)
db_assessment = DbAssessment(
tasks=[DbPrimer(), DbExercise(), DbExercise()]
)
with sessionmaker(bind=engine)() as session:
session.add(db_assessment)
session.commit()
</code></pre>
<p>☝️should throw the expected error:</p>
<blockquote>
<p>sqlalchemy.exc.IntegrityError: (psycopg2.errors.NotNullViolation) null value in column "position" of relation "assessment_tasks" violates not-null constraint</p>
</blockquote>
<p>How can I instruct sqlalchemy or postgresql to automatically insert the calculated positions (probably something similar to <code>[p for p, _ in enumerate(db_assessment.tasks, start=1)]</code>) into the many-to-many association table?</p>
|
<python><postgresql><sqlalchemy><many-to-many>
|
2025-03-30 16:23:12
| 2
| 4,850
|
colidyre
|
79,544,819
| 6,394,092
|
Conversion of model weights from old Keras version to Pytorch
|
<p>I want to transfer pretrained weights from an old project on github : <a href="https://github.com/ajgallego/staff-lines-removal" rel="nofollow noreferrer">https://github.com/ajgallego/staff-lines-removal</a></p>
<p>The original Keras model code is:</p>
<pre><code>def get_keras_autoencoder(self, input_size=256, nb_filter=96, k_size=5):
input_img = Input(shape=(1, input_size, input_size))
conv1 = Convolution2D(nb_filter, k_size, k_size, activation='relu', border_mode='same', name='conv1')(input_img)
maxp1 = MaxPooling2D((2, 2), border_mode='same', name='maxp1')(conv1)
conv2 = Convolution2D(nb_filter, k_size, k_size, activation='relu', border_mode='same', name='conv2')(maxp1)
maxp2 = MaxPooling2D((2, 2), border_mode='same', name='maxp2')(conv2)
conv3 = Convolution2D(nb_filter, k_size, k_size, activation='relu', border_mode='same', name='conv3')(maxp2)
encoder = MaxPooling2D((2, 2), border_mode='same', name='encoder')(conv3)
conv4 = Convolution2D(nb_filter, k_size, k_size, activation='relu', border_mode='same', name='conv4')(encoder)
upsa1 = UpSampling2D((2, 2), name='upsa1')(conv4)
conv4 = Convolution2D(nb_filter, k_size, k_size, activation='relu', border_mode='same', name='conv5')(upsa1)
upsa2 = UpSampling2D((2, 2), name='upsa2')(conv4)
conv5 = Convolution2D(nb_filter, k_size, k_size, activation='relu', border_mode='same', name='conv6')(upsa2)
upsa3 = UpSampling2D((2, 2), name='upsa3')(conv5)
decoder = Convolution2D(1, k_size, k_size, activation='sigmoid', border_mode='same')(upsa3)
autoencoder = Model(input_img, decoder)
return autoencoder
</code></pre>
<p>The ported PyTorch code I've done is:</p>
<pre><code>class SAEModel(nn.Module):
def __init__(self):
super(SAEModel, self).__init__()
# encoder
self.conv1 = nn.Conv2d( 1, 96, kernel_size=5, stride=1, padding=2)
self.maxp1 = nn.MaxPool2d(kernel_size=2, stride=2, padding=0)
self.conv2 = nn.Conv2d(96, 96, kernel_size=5, stride=1, padding=2)
self.maxp2 = nn.MaxPool2d(kernel_size=2, stride=2, padding=0)
self.conv3 = nn.Conv2d(96, 96, kernel_size=5, stride=1, padding=2)
self.encod = nn.MaxPool2d(kernel_size=2, stride=2, padding=0)
# decoder
self.conv4 = nn.Conv2d(96, 96, kernel_size=5, stride=1, padding=2)
self.upsa1 = nn.Upsample(scale_factor=2)
self.conv5 = nn.Conv2d(96, 96, kernel_size=5, stride=1, padding=2)
self.upsa2 = nn.Upsample(scale_factor=2)
self.conv6 = nn.Conv2d(96, 96, kernel_size=5, stride=1, padding=2)
self.upsa3 = nn.Upsample(scale_factor=2)
self.decod = nn.Conv2d(96, 1, kernel_size=5, stride=1, padding=2)
def forward(self, x):
# encoder activations
x = F.relu(self.conv1(x))
x = self.maxp1(x)
x = F.relu(self.conv2(x))
x = self.maxp2(x)
x = F.relu(self.conv3(x))
x = self.encod(x)
# decoder activations
x = F.relu(self.conv4(x))
x = self.upsa1(x)
x = F.relu(self.conv5(x))
x = self.upsa2(x)
x = F.relu(self.conv6(x))
x = self.upsa3(x)
x = F.sigmoid(self.decod(x))
return x
</code></pre>
<p>The summary dump of the Keras model:</p>
<pre><code>Layer (type) Output Shape Param # Connected to
====================================================================================================
input_1 (InputLayer) (None, 1, 256, 256) 0
____________________________________________________________________________________________________
conv1 (Convolution2D) (None, 1, 256, 96) 2496 input_1[0][0]
____________________________________________________________________________________________________
maxp1 (MaxPooling2D) (None, 1, 128, 96) 0 conv1[0][0]
____________________________________________________________________________________________________
conv2 (Convolution2D) (None, 1, 128, 96) 230496 maxp1[0][0]
____________________________________________________________________________________________________
maxp2 (MaxPooling2D) (None, 1, 64, 96) 0 conv2[0][0]
____________________________________________________________________________________________________
conv3 (Convolution2D) (None, 1, 64, 96) 230496 maxp2[0][0]
____________________________________________________________________________________________________
encoder (MaxPooling2D) (None, 1, 32, 96) 0 conv3[0][0]
____________________________________________________________________________________________________
conv4 (Convolution2D) (None, 1, 32, 96) 230496 encoder[0][0]
____________________________________________________________________________________________________
upsa1 (UpSampling2D) (None, 2, 64, 96) 0 conv4[0][0]
____________________________________________________________________________________________________
conv5 (Convolution2D) (None, 2, 64, 96) 230496 upsa1[0][0]
____________________________________________________________________________________________________
upsa2 (UpSampling2D) (None, 4, 128, 96) 0 conv5[0][0]
____________________________________________________________________________________________________
conv6 (Convolution2D) (None, 4, 128, 96) 230496 upsa2[0][0]
____________________________________________________________________________________________________
upsa3 (UpSampling2D) (None, 8, 256, 96) 0 conv6[0][0]
____________________________________________________________________________________________________
convolution2d_1 (Convolution2D) (None, 8, 256, 1) 2401 upsa3[0][0]
====================================================================================================
Total params: 1,157,377
Trainable params: 1,157,377
Non-trainable params: 0
</code></pre>
<p>And the one for the ported PyTorch mode :</p>
<pre><code>----------------------------------------------------------------
Layer (type) Output Shape Param #
================================================================
Conv2d-1 [-1, 96, 256, 256] 2,496
MaxPool2d-2 [-1, 96, 128, 128] 0
Conv2d-3 [-1, 96, 128, 128] 230,496
MaxPool2d-4 [-1, 96, 64, 64] 0
Conv2d-5 [-1, 96, 64, 64] 230,496
MaxPool2d-6 [-1, 96, 32, 32] 0
Conv2d-7 [-1, 96, 32, 32] 230,496
Upsample-8 [-1, 96, 64, 64] 0
Conv2d-9 [-1, 96, 64, 64] 230,496
Upsample-10 [-1, 96, 128, 128] 0
Conv2d-11 [-1, 96, 128, 128] 230,496
Upsample-12 [-1, 96, 256, 256] 0
Conv2d-13 [-1, 1, 256, 256] 2,401
================================================================
Total params: 1,157,377
Trainable params: 1,157,377
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.25
Forward/backward pass size (MB): 158.00
Params size (MB): 4.42
Estimated Total Size (MB): 162.67
</code></pre>
<p>And finally the code to convert the weights :</p>
<pre class="lang-python prettyprint-override"><code>import os
import argparse
from pathlib import Path
from keras.layers import Input, Convolution2D, MaxPooling2D, UpSampling2D
from keras.models import Model
from keras.models import load_model
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchsummary import summary
class Musica:
def decompose_path(self, path):
return Path(path).parent, Path(path).stem, Path(path).suffix
def convert_keras_weights(self, weights_path):
autoencoder = self.get_keras_autoencoder()
autoencoder.load_weights(weights_path)
directory, filename, extension = self.decompose_path(weights_path)
autoencoder.save(os.path.join(directory, filename + '_model_included' + extension))
def convert_keras_to_torch(self, full_model_path, print_models: bool = False, print_weights: bool = False):
keras_model = load_model(full_model_path)
if print_models:
print(keras_model.summary())
weights = keras_model.get_weights()
torch_model = SAEModel()
if print_models:
summary(torch_model.cuda(), (1, 256, 256))
if print_weights:
print(f'Keras Weight {weights[0].shape} and Pytorch Weight {torch_model.conv1.weight.shape}')
print(f'Keras Weight {weights[1].shape} and Pytorch Weight {torch_model.conv1.bias.shape}')
print(f'Keras Weight {weights[2].shape} and Pytorch Weight {torch_model.conv2.weight.shape}')
print(f'Keras Weight {weights[3].shape} and Pytorch Weight {torch_model.conv2.bias.shape}')
print(f'Keras Weight {weights[4].shape} and Pytorch Weight {torch_model.conv3.weight.shape}')
print(f'Keras Weight {weights[5].shape} and Pytorch Weight {torch_model.conv3.bias.shape}')
print(f'Keras Weight {weights[6].shape} and Pytorch Weight {torch_model.conv4.weight.shape}')
print(f'Keras Weight {weights[7].shape} and Pytorch Weight {torch_model.conv4.bias.shape}')
print(f'Keras Weight {weights[8].shape} and Pytorch Weight {torch_model.conv5.weight.shape}')
print(f'Keras Weight {weights[9].shape} and Pytorch Weight {torch_model.conv5.bias.shape}')
print(f'Keras Weight {weights[10].shape} and Pytorch Weight {torch_model.conv6.weight.shape}')
print(f'Keras Weight {weights[11].shape} and Pytorch Weight {torch_model.conv6.bias.shape}')
print(f'Keras Weight {weights[12].shape} and Pytorch Weight {torch_model.decod.weight.shape}')
print(f'Keras Weight {weights[13].shape} and Pytorch Weight {torch_model.decod.bias.shape}')
# load keras weights into torch model
torch_model.conv1.weight.data = torch.from_numpy(weights[0])
torch_model.conv1.bias.data = torch.from_numpy(weights[1])
torch_model.conv2.weight.data = torch.from_numpy(weights[2])
torch_model.conv2.bias.data = torch.from_numpy(weights[3])
torch_model.conv3.weight.data = torch.from_numpy(weights[4])
torch_model.conv3.bias.data = torch.from_numpy(weights[5])
torch_model.conv4.weight.data = torch.from_numpy(weights[6])
torch_model.conv4.bias.data = torch.from_numpy(weights[7])
torch_model.conv5.weight.data = torch.from_numpy(weights[8])
torch_model.conv5.bias.data = torch.from_numpy(weights[9])
torch_model.conv6.weight.data = torch.from_numpy(weights[10])
torch_model.conv6.bias.data = torch.from_numpy(weights[11])
torch_model.decod.weight.data = torch.from_numpy(weights[12])
torch_model.decod.bias.data = torch.from_numpy(weights[13])
directory, filename, extension = self.decompose_path(full_model_path)
export_path = os.path.join(directory, filename + '_torch' + '.pth')
torch.save(torch_model.state_dict(), export_path)
def get_keras_autoencoder(self, input_size=256, nb_filter=96, k_size=5):
input_img = Input(shape=(1, input_size, input_size))
conv1 = Convolution2D(nb_filter, k_size, k_size, activation='relu', border_mode='same', name='conv1')(input_img)
maxp1 = MaxPooling2D((2, 2), border_mode='same', name='maxp1')(conv1)
conv2 = Convolution2D(nb_filter, k_size, k_size, activation='relu', border_mode='same', name='conv2')(maxp1)
maxp2 = MaxPooling2D((2, 2), border_mode='same', name='maxp2')(conv2)
conv3 = Convolution2D(nb_filter, k_size, k_size, activation='relu', border_mode='same', name='conv3')(maxp2)
encoder = MaxPooling2D((2, 2), border_mode='same', name='encoder')(conv3)
conv4 = Convolution2D(nb_filter, k_size, k_size, activation='relu', border_mode='same', name='conv4')(encoder)
upsa1 = UpSampling2D((2, 2), name='upsa1')(conv4)
conv4 = Convolution2D(nb_filter, k_size, k_size, activation='relu', border_mode='same', name='conv5')(upsa1)
upsa2 = UpSampling2D((2, 2), name='upsa2')(conv4)
conv5 = Convolution2D(nb_filter, k_size, k_size, activation='relu', border_mode='same', name='conv6')(upsa2)
upsa3 = UpSampling2D((2, 2), name='upsa3')(conv5)
decoder = Convolution2D(1, k_size, k_size, activation='sigmoid', border_mode='same')(upsa3)
autoencoder = Model(input_img, decoder)
return autoencoder
</code></pre>
<p>The weights transpose dump:</p>
<pre><code>Keras Weight (96, 1, 5, 5) and Pytorch Weight torch.Size([96, 1, 5, 5])
Keras Weight (96,) and Pytorch Weight torch.Size([96])
Keras Weight (96, 96, 5, 5) and Pytorch Weight torch.Size([96, 96, 5, 5])
Keras Weight (96,) and Pytorch Weight torch.Size([96])
Keras Weight (96, 96, 5, 5) and Pytorch Weight torch.Size([96, 96, 5, 5])
Keras Weight (96,) and Pytorch Weight torch.Size([96])
Keras Weight (96, 96, 5, 5) and Pytorch Weight torch.Size([96, 96, 5, 5])
Keras Weight (96,) and Pytorch Weight torch.Size([96])
Keras Weight (96, 96, 5, 5) and Pytorch Weight torch.Size([96, 96, 5, 5])
Keras Weight (96,) and Pytorch Weight torch.Size([96])
Keras Weight (96, 96, 5, 5) and Pytorch Weight torch.Size([96, 96, 5, 5])
Keras Weight (96,) and Pytorch Weight torch.Size([96])
Keras Weight (1, 96, 5, 5) and Pytorch Weight torch.Size([1, 96, 5, 5])
Keras Weight (1,) and Pytorch Weight torch.Size([1])
</code></pre>
<p>All work well and I run the pytorch model using patched image (into 256x256 blocks). Unfortunatly, the model fail to do the job (sheet music staff-line remover) and got a slightly padded output (few pixels to right/bottom). I suspect a problem with the <code>padding='same'</code> that introduce some padding divergence.</p>
<p>Here is the input image :</p>
<p><a href="https://i.sstatic.net/UQcmJZED.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UQcmJZED.png" alt="input page" /></a></p>
<p>Here is the same image binarized :</p>
<p><a href="https://i.sstatic.net/v8ur6V5o.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/v8ur6V5o.png" alt="binarized" /></a></p>
<p>Here is the output image :</p>
<p><a href="https://i.sstatic.net/LRPuNr5d.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LRPuNr5d.png" alt="output image" /></a></p>
<p>I precise that the code of the patching to process the image work well and well tested before, so the horizontal and vertical lines are not a patching problem but the result of the bad processing of the patch inside the model.</p>
|
<python><machine-learning><keras><pytorch><computer-vision>
|
2025-03-30 15:41:21
| 0
| 373
|
MaxC2
|
79,544,458
| 4,061,339
|
Sending request from React to FastAPI causes a CORS policy error
|
<p>When I send a fetch request from the frontend server (React) to the backend server (FastAPI), an error occurs:</p>
<blockquote>
<p>localhost/:1 Access to fetch at 'http://localhost:8000/predict/' from
origin 'http://localhost:3000' has been blocked by CORS policy:
Response to preflight request doesn't pass access control check: No
'Access-Control-Allow-Origin' header is present on the requested
resource. If an opaque response serves your needs, set the request's
mode to 'no-cors' to fetch the resource with CORS disabled.</p>
</blockquote>
<p>Environment:</p>
<ul>
<li>Windows 11</li>
<li>React 19.1.0</li>
<li>uvicorn 0.34.0</li>
<li>Python 3.12.4</li>
</ul>
<p>Frontend:</p>
<pre class="lang-typescript prettyprint-override"><code>import React from "react";
import { useForm } from "react-hook-form";
import { useMutation } from "@tanstack/react-query";
interface FormSchema {
mean_radius: number;
mean_texture: number;
mean_perimeter: number;
mean_area: number;
}
interface PredictionResponse {
prediction: number;
}
const Frontend = () => {
const { register, handleSubmit } = useForm<FormSchema>({
defaultValues: {
mean_radius: 0,
mean_texture: 0,
mean_perimeter: 0,
mean_area: 0,
},
});
// Mutation for sending data to the backend
const mutation = useMutation<PredictionResponse, Error, FormSchema>({
mutationFn: async (formData: FormSchema) => {
const response = await fetch("http://localhost:8000/predict/", {
method: "POST",
credentials: 'include',
headers: {
"Content-Type": "application/json",
},
body: JSON.stringify(formData),
});
if (!response.ok) {
throw new Error("Failed to fetch");
}
return response.json();
}
});
const onSubmit = (data: FormSchema) => {
mutation.mutate(data);
};
return (
<div>
<h1>React + FastAPI Example</h1>
<form onSubmit={handleSubmit(onSubmit)}>
<div>
<label>Mean Radius:</label>
<input type="number" {...register("mean_radius")} />
</div>
<div>
<label>Mean Texture:</label>
<input type="number" {...register("mean_texture")} />
</div>
<div>
<label>Mean Perimeter:</label>
<input type="number" {...register("mean_perimeter")} />
</div>
<div>
<label>Mean Area:</label>
<input type="number" {...register("mean_area")} />
</div>
<button type="submit">Predict</button>
</form>
{/* Display loading, error, or success states */}
{mutation.isError && (
<p>Error occurred: {(mutation.error as Error).message}</p>
)}
{mutation.isSuccess && (
<p>Prediction Result: {mutation.data?.prediction}</p>
)}
</div>
);
};
export default Frontend;
</code></pre>
<p>Backend:</p>
<pre class="lang-py prettyprint-override"><code>from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
from pydantic import BaseModel
app = FastAPI()
# Define the data schema
class TestData(BaseModel):
mean_radius: float
mean_texture: float
mean_perimeter: float
mean_area: float
# Configure CORS settings
origins = [
"http://localhost:3000", # React frontend
"http://127.0.0.1:3000"
]
app.add_middleware(
CORSMiddleware,
allow_origins=origins, # Allow requests from specific origins
allow_credentials=True, # Allow cookies and credentials if needed
allow_methods=["*"], # Allow all HTTP methods (GET, POST, etc.)
allow_headers=["*"], # Allow all headers
)
@app.post("/predict/")
def do_predict(data: TestData):
# Example processing logic (replace with actual logic)
result = data.mean_radius + data.mean_texture + data.mean_perimeter + data.mean_area
return {"prediction": result}
</code></pre>
<p>How I started servers:</p>
<pre class="lang-bash prettyprint-override"><code>npm start
</code></pre>
<pre class="lang-bash prettyprint-override"><code>uvicorn maintest:app --reload --host 0.0.0.0 --port 8000
</code></pre>
<p>I checked:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/77031847/sending-request-from-react-to-fastapi-causes-origin-http-localhost5173-has-b">Sending request from React to FastAPI causes "origin http://localhost:5173 has been blocked by CORS policy" error</a></li>
<li><a href="https://stackoverflow.com/questions/70353729/access-from-origin-https-example-com-has-been-blocked-even-though-ive-allow">Access from origin 'https://example.com' has been blocked even though I've allowed https://example.com/</a></li>
<li><a href="https://stackoverflow.com/questions/71802652/react-not-showing-post-response-from-fastapi-backend-application/71805329#71805329">React not showing POST response from FastAPI backend application</a></li>
<li><a href="https://stackoverflow.com/questions/74088761/sending-post-request-to-fastapi-app-running-on-localhost-using-javascript-fetch/74106637#74106637">Sending POST request to FastAPI app running on localhost using JavaScript Fetch API</a></li>
<li><a href="https://stackoverflow.com/questions/73962743/fastapi-is-not-returning-cookies-to-react-frontend/73963905#73963905">FastAPI is not returning cookies to React frontend</a></li>
<li><a href="https://stackoverflow.com/questions/73547776/how-to-redirect-from-one-domain-to-another-and-set-cookies-or-headers-for-the-ot/73599289#73599289">How to redirect from one domain to another and set cookies or headers for the other domain?</a></li>
<li><a href="https://stackoverflow.com/questions/75040507/how-to-access-fastapi-backend-from-a-different-machine-ip-on-the-same-local-netw/75041731#75041731">How to access FastAPI backend from a different machine/IP on the same local network?</a></li>
<li><a href="https://stackoverflow.com/questions/75048244/fastapi-how-to-enable-cors-only-for-specific-endpoints/75048778#75048778">FastAPI: How to enable CORS only for specific endpoints?</a></li>
</ul>
<p>python log</p>
<pre class="lang-bash prettyprint-override"><code>INFO: Stopping reloader process [64148]
PS E:\workspace\Python\tmp\front-back-end\backend> uvicorn maintest:app --reload --host 0.0.0.0 --port 8000
INFO: Will watch for changes in these directories: ['E:\\workspace\\Python\\tmp\\front-back-end\\backend']
INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
INFO: Started reloader process [33768] using WatchFiles
ERROR: Error loading ASGI app. Attribute "app" not found in module "maintest".
WARNING: WatchFiles detected changes in 'maintest.py'. Reloading...
INFO: Started server process [73580]
INFO: Waiting for application startup.
INFO: Application startup complete.
</code></pre>
|
<python><reactjs><typescript><cors><fastapi>
|
2025-03-30 10:38:10
| 3
| 3,094
|
dixhom
|
79,544,423
| 3,231,250
|
Fastest way to search 5k rows inside of 100m row pair-wise dataframe
|
<p>I am not sure title is well describing the problem but I will explain it step by step.</p>
<p>I have a correlation matrix of genes (10k x 10k)
I convert this correlation matrix to pairwise dataframe (upper triangle) (around 100m row)</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>gene1</th>
<th>gene2</th>
<th>score</th>
</tr>
</thead>
<tbody>
<tr>
<td>Gene3450</td>
<td>Gene9123</td>
<td>0.999706</td>
</tr>
<tr>
<td>Gene5219</td>
<td>Gene9161</td>
<td>0.999691</td>
</tr>
<tr>
<td>Gene27</td>
<td>Gene6467</td>
<td>0.999646</td>
</tr>
<tr>
<td>Gene3255</td>
<td>Gene4865</td>
<td>0.999636</td>
</tr>
<tr>
<td>Gene2512</td>
<td>Gene5730</td>
<td>0.999605</td>
</tr>
<tr>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
</tbody>
</table></div>
<p>Then I have gold-standard TERMS table around 5k rows and columns are ID and used_genes</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>id</th>
<th>name</th>
<th>used_genes</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Complex 1</td>
<td>[Gene3629, Gene8048, Gene9660, Gene4180, Gene1...]</td>
</tr>
<tr>
<td>2</td>
<td>Complex 2</td>
<td>[Gene3944, Gene931, Gene3769, Gene7523, Gene61...]</td>
</tr>
<tr>
<td>3</td>
<td>Complex 3</td>
<td>[Gene8236, Gene934, Gene5902, Gene165, Gene664...]</td>
</tr>
<tr>
<td>4</td>
<td>Complex 4</td>
<td>[Gene2399, Gene2236, Gene8932, Gene6670, Gene2...]</td>
</tr>
<tr>
<td>5</td>
<td>Complex 5</td>
<td>[Gene3860, Gene5792, Gene9214, Gene7174, Gene3...]</td>
</tr>
</tbody>
</table></div>
<p>What I do:</p>
<ul>
<li><p>I iterate of each Gold-standard complex row.</p>
</li>
<li><p>Convert used_gene list to pairwise, like geneA-geneB, geneA-geneC
etc.</p>
</li>
<li><p>Check those complex-row gene pairs in the stacked correlation pairs.</p>
</li>
<li><p>If they are exist I put column TP=1, if not TP=0.</p>
</li>
<li><p>Based on the TP counts I calculate precision, recall, and area under
the curve score.</p>
</li>
</ul>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>name</th>
<th>used_genes</th>
<th>auc_score</th>
</tr>
</thead>
<tbody>
<tr>
<td>Multisubunit ACTR coactivator complex</td>
<td>[CREBBP, KAT2B, NCOA3, EP300]</td>
<td>0.001695</td>
</tr>
<tr>
<td>Condensin I complex</td>
<td>[SMC4, NCAPH, SMC2, NCAPG, NCAPD2]</td>
<td>0.009233</td>
</tr>
<tr>
<td>BLOC-2 (biogenesis of lysosome-related organel...)</td>
<td>[HPS3, HPS5, HPS6]</td>
<td>0.000529</td>
</tr>
<tr>
<td>NCOR complex</td>
<td>[TBL1XR1, NCOR1, TBL1X, GPS2, HDAC3, CORO2A]</td>
<td>0.000839</td>
</tr>
<tr>
<td>BLOC-1 (biogenesis of lysosome-related organel...)</td>
<td>[DTNBP1, SNAPIN, BLOC1S6, BLOC1S1, BLOC1S5, BL...]</td>
<td>0.002227</td>
</tr>
</tbody>
</table></div>
<p>So, in the end, for each of gold-standard row, I have PR-AUC score.</p>
<p>I will share my function below, with 100m stacked df, and 5k terms It takes around 25 minutes, and I am trying to find a way to reduce the time.</p>
<p>PS: for the calculation of PR-AUC part, I have compiled C++ code, I just give the ordered TP numbers as a input to C++ function and return the score, still runtime is the same. I guess the problem is iteration part.</p>
<pre><code>from sklearn import metrics
def compute_per_complex_pr(corr_df, terms_df):
pairwise_df = binary(corr_df)
pairwise_df = quick_sort(pairwise_df).reset_index(drop=True)
# Precompute a mapping from each gene to the row indices in the pairwise DataFrame where it appears.
gene_to_pair_indices = {}
for i, (gene_a, gene_b) in enumerate(zip(pairwise_df["gene1"], pairwise_df["gene2"])):
gene_to_pair_indices.setdefault(gene_a, []).append(i)
gene_to_pair_indices.setdefault(gene_b, []).append(i)
# Initialize AUC scores (one for each complex) with NaNs.
auc_scores = np.full(len(terms_df), np.nan)
# Loop over each gene complex
for idx, row in terms_df.iterrows():
gene_set = set(row.used_genes)
# Collect all row indices in the pairwise data where either gene belongs to the complex.
candidate_indices = set()
for gene in gene_set:
candidate_indices.update(gene_to_pair_indices.get(gene, []))
candidate_indices = sorted(candidate_indices)
if not candidate_indices:
continue
# Select only the relevant pairwise comparisons.
sub_df = pairwise_df.loc[candidate_indices]
# A prediction is 1 if both genes in the pair are in the complex; otherwise 0.
predictions = (sub_df["gene1"].isin(gene_set) & sub_df["gene2"].isin(gene_set)).astype(int)
if predictions.sum() == 0:
continue
# Compute cumulative true positives and derive precision and recall.
true_positive_cumsum = predictions.cumsum()
precision = true_positive_cumsum / (np.arange(len(predictions)) + 1)
recall = true_positive_cumsum / true_positive_cumsum.iloc[-1]
if len(recall) < 2 or recall.iloc[-1] == 0:
continue
auc_scores[idx] = metrics.auc(recall, precision)
# Add the computed AUC scores to the terms DataFrame.
terms_df["auc_score"] = auc_scores
return terms_df
def binary(corr):
stack = corr.stack().rename_axis(index=['gene1', 'gene2']).reset_index(name='score')
stack = drop_mirror_pairs(stack)
return stack
def quick_sort(df, ascending=False):
order = 1 if ascending else -1
sorted_df = df.iloc[np.argsort(order * df["score"].values)].reset_index(drop=True)
return sorted_df
def drop_mirror_pairs(df):
gene_pairs = np.sort(df[["gene1", "gene2"]].to_numpy(), axis=1)
df.loc[:, ["gene1", "gene2"]] = gene_pairs
df = df.loc[~df.duplicated(subset=["gene1", "gene2"], keep="first")]
return df
</code></pre>
<p>for dummy data (corr matrix, terms_df)</p>
<pre><code>import numpy as np
import pandas as pd
# Set a random seed for reproducibility
np.random.seed(0)
# -------------------------------
# Create the 10,000 x 10,000 correlation matrix
# -------------------------------
num_genes = 10000
genes = [f"Gene{i}" for i in range(num_genes)]
rand_matrix = np.random.uniform(-1, 1, (num_genes, num_genes))
corr_matrix = (rand_matrix + rand_matrix.T) / 2
np.fill_diagonal(corr_matrix, 1.0)
corr_df = pd.DataFrame(corr_matrix, index=genes, columns=genes)
num_terms = 5000
terms_list = []
for i in range(1, num_terms + 1):
# Randomly choose a number of genes between 10 and 40 for this term
n_genes = np.random.randint(10, 41)
used_genes = np.random.choice(genes, size=n_genes, replace=False).tolist()
term = {
"id": i,
"name": f"Complex {i}",
"used_genes": used_genes
}
terms_list.append(term)
terms_df = pd.DataFrame(terms_list)
# Display sample outputs (for verification, you might want to show the first few rows)
print("Correlation Matrix Sample:")
print(corr_df.iloc[:5, :5]) # print a 5x5 sample
print("\nTerms DataFrame Sample:")
print(terms_df.head())
</code></pre>
<p>to run the function <code>compute_per_complex_pr(corr_df, terms_df)</code></p>
|
<python><pandas><numba>
|
2025-03-30 10:07:27
| 1
| 1,120
|
Yasir
|
79,544,253
| 8,157,102
|
Image size inconsistency between GitHub and PyPI in README.md
|
<p>I created some simple console games in Python(<a href="https://github.com/kamyarmg/oyna" rel="nofollow noreferrer">Oyna Project</a>) and took screenshots of each game to showcase them in the <a href="https://github.com/kamyarmg/oyna/blob/main/README.md" rel="nofollow noreferrer">README.md</a> file. I wanted to display these images in a table format both on GitHub and on PyPI.</p>
<p>On GitHub, everything looks fine — the images are well-aligned, and their sizes are consistent. But when I push <a href="https://pypi.org/project/oyna/" rel="nofollow noreferrer">this library(Oyna) to PyPI</a>, the image sizes look uneven and unbalanced. The problem is that the images themselves are not the same size, and on PyPI, some images appear much smaller or larger than others, making the table look messy.</p>
<p>I've tried adjusting the image sizes using HTML tags and Markdown syntax, but nothing seems to work correctly on PyPI.</p>
<p>How can I make the images show up consistently and evenly on PyPI just like they do on GitHub, even if their original sizes are different?</p>
<p>My code to display the table:</p>
<pre class="lang-md prettyprint-override"><code><table>
<tr>
<td><a href="https://github.com/kamyarmg/oyna/tree/main/src/oyna/sudoku/"> Sudoku </a> </br><img src="https://raw.githubusercontent.com/kamyarmg/oyna/refs/heads/main/docs/images/sudoku.png" alt="Sudoku" style="width:250px;"/> </td>
<td><a href="https://github.com/kamyarmg/oyna/tree/main/src/oyna/twenty_forty_eight_2048/">2048</a> </br><img src="https://raw.githubusercontent.com/kamyarmg/oyna/refs/heads/main/docs/images/2048.png" alt="2048" style="width:250px;"/> </td>
<td><a href="https://github.com/kamyarmg/oyna/tree/main/src/oyna/matching/">Matching</a> </br><img src="https://raw.githubusercontent.com/kamyarmg/oyna/refs/heads/main/docs/images/matching.png" alt="Matching" style="width:250px;"/> </td>
</tr>
</table>
</code></pre>
<p>Github README.md Image Table:
<a href="https://i.sstatic.net/oJHbZeBA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oJHbZeBA.png" alt="Github README.md Image Table" /></a></p>
<p>Pypi Image Table:</p>
<p><a href="https://i.sstatic.net/WTDv3hwX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WTDv3hwX.png" alt="enter image description here" /></a></p>
|
<python><html><css><pypi><markup>
|
2025-03-30 06:37:58
| 1
| 961
|
kamyarmg
|
79,544,085
| 1,204,556
|
Python Reactivex and OpenAI -- Stream hangs?
|
<p>Please see my code block below. In order to run it, I do the following. When I run the following code, things print as expected, and the process exits as expected.</p>
<pre><code>stream = await client.create_completion(...)
stream.subscribe(print) # works perfectly
</code></pre>
<p>All the <code>print</code> commands run as expected. Everything looks great, as long as I am doing things in a reactive way.</p>
<p>However, when I do the following, it hangs indefinitely:</p>
<pre class="lang-py prettyprint-override"><code>stream = await client.create_completion(...)
stream.pipe(ops.to_iterable()).run()` # hangs :(
</code></pre>
<p>Why is this? What am I doing wrong?</p>
<p>Thanks in advance for your help.</p>
<pre class="lang-py prettyprint-override"><code>from openai import AsyncOpenAI, AsyncStream
from openai.types.chat import ChatCompletionChunk
from openai.types.chat.chat_completion_chunk import Choice
import reactivex as rx
from app.ai.bridge.chat.chat_completion_types import ChatRequest, ToolConfig
from app.ai.bridge.chat.drivers.openai_chat_driver_reactive_mappers import (
map_message_to_openai,
map_toolconfig_to_openai,
)
import asyncio
class OpenaiClientReactive:
def __init__(self, openai_client: AsyncOpenAI) -> None:
self.api = openai_client
async def create_completion(
self, chat_request: ChatRequest, tool_config: ToolConfig | None = None
) -> rx.Observable[Choice]:
stream: rx.subject.ReplaySubject[Choice] = rx.subject.ReplaySubject()
async def do_stream() -> None:
async_stream: AsyncStream[ChatCompletionChunk] = await self._stream_openai(
chat_request, tool_config
)
max_variants_expected = chat_request.options.num_variants
num_indexes_completed = 0
async for chunk in async_stream:
for choice in chunk.choices:
if choice.finish_reason:
num_indexes_completed += 1
stream.on_next(choice)
if max_variants_expected == num_indexes_completed:
# If all indexes are complete, we can complete the stream
print("All indexes complete")
break
await async_stream.close()
stream.on_completed()
asyncio.create_task(do_stream())
return stream
async def _stream_openai(
self,
chat_request: ChatRequest,
tool_config: ToolConfig | None = None,
) -> AsyncStream[ChatCompletionChunk]:
mapped_messages = mapped_messages = [
map_message_to_openai(message) for message in chat_request.context.messages
]
if tool_config:
return await self.api.chat.completions.create(
# TODO: Move this to database driven configuration, since it's an LLM.
model="gpt-3.5-turbo",
messages=mapped_messages,
stream=True,
n=chat_request.options.num_variants,
tools=map_toolconfig_to_openai(tool_config),
)
else:
return await self.api.chat.completions.create(
# TODO: Move this to database driven configuration, since it's an LLM.
model="gpt-3.5-turbo",
messages=mapped_messages,
stream=True,
n=chat_request.options.num_variants,
)
</code></pre>
|
<python><stream><observable><python-asyncio><reactivex>
|
2025-03-30 01:05:24
| 0
| 5,084
|
Monarch Wadia
|
79,544,072
| 12,156,208
|
Is there a way to pre-select options for certain columns in streamlit.data_editor?
|
<p>I am trying to render a streamlit data_editor object but in a way where the user can only select set options for each column.</p>
<p>For eg:</p>
<pre><code> data = pd.DataFrame(columns=["Stock", "Option"])
request = st.data_editor(data, num_rows="dynamic", use_container_width=True)
</code></pre>
<p><a href="https://i.sstatic.net/tvUvkkyf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tvUvkkyf.png" alt="enter image description here" /></a></p>
<p>What I'd like to do is restrict the values someone can enter for "Option", so for each row they simply have a drop-box or something similar to select from. Is that possible?</p>
|
<python><python-3.x><streamlit>
|
2025-03-30 00:42:57
| 0
| 1,206
|
r4bb1t
|
79,543,756
| 242,042
|
Can I use asyncio.Event.wait() instead of gRPC await server.wait_for_termination()
|
<p>The typical approach for gRPC AsyncIO is</p>
<pre class="lang-py prettyprint-override"><code>await server.start()
try:
await server.wait_for_termination()
except:
...
</code></pre>
<p>But I was wondering rather than dealing with it via a hard stop because <code>wait_for_termination</code> just blocks forever until canceled if I can use asyncio.Event.wait() so I can trigger the termination of the server more cleanly.</p>
<p>Looking at the code for wait_for_termination it goes down to some C bindings so I am not sure if it may be doing anything else special or if asyncio.Event.wait() which I presume would use asyncio loop implementation more effectively (like Windows has it's own loop implementation that's different from UNIX).</p>
<p>As for my integration tests, them seem to pass so I am.</p>
|
<python><python-asyncio><grpc><grpc-python>
|
2025-03-29 19:00:30
| 0
| 43,097
|
Archimedes Trajano
|
79,543,653
| 500,584
|
How do I use uv run with os.exec?
|
<p>I'm running into a problem trying to use <code>uv run</code> with Python's <code>os.exec</code> variants. Any advice on how to get this to work?</p>
<p>Bash, Ubuntu WSL, <code>uv run python</code>, <code>execlp</code>, <code>uv 0.6.5</code>:</p>
<pre><code>$ uv run python
Python 3.12.9 (main, Feb 12 2025, 14:50:50) [Clang 19.1.6 ] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> os.execlp("uv", "run", "python")
Manage Python versions and installations
Usage: run python [OPTIONS] <COMMAND>
(rest of the help text here)
</code></pre>
<p>Powershell, Windows 11, <code>uv run main.py</code>, <code>execv</code>, <code>uv 0.6.10 (f2a2d982b 2025-03-25)</code>:</p>
<pre><code>PS > uv run python
Python 3.12.9 (main, Mar 17 2025, 21:06:20) [MSC v.1943 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> os.execv(r"C:\Users\username\.local\bin\uv.exe", ["run", "main.py"])
PS > error: unrecognized subcommand 'main.py'
Usage: run [OPTIONS] <COMMAND>
</code></pre>
|
<python><os.execl>
|
2025-03-29 17:38:52
| 1
| 178,002
|
agf
|
79,543,394
| 189,418
|
How to load PEP-723-style venvs in vscode automatically
|
<p>When writing Python scripts, I have started using the pattern from PEP-723 with uv to create self contained scripts that uv can run directly.</p>
<p>For example, here's the metadata in one of my Python scripts, which instructs uv to create a venv with <code>duckdb</code> and <code>pandas</code> to run the script:</p>
<pre><code># /// script
# dependencies = [
# "duckdb>=1.2.0",
# "pandas",
# ]
# ///
</code></pre>
<p>The issue I'm facing is that when I edit these scripts in vscode, the venv is not automatically created and pylance complains that it can't find the packages the script is using.</p>
<p>How can I automatically create such venvs in vscode so that every time I open a script like this I get pylance to use the same venv uv will create to run the script?</p>
|
<python><visual-studio-code><uv>
|
2025-03-29 14:39:13
| 1
| 8,426
|
foglerit
|
79,543,279
| 3,545,273
|
How to find the possible installation paths for a script from Python?
|
<h3>Context</h3>
<p>I am working on a Python + Pyside6 application. I would like to only archive the source <code>.ui</code> files that are use to define the User Interface in Qt and have them compiled into <code>.py</code> files at build time.</p>
<h3>Current research</h3>
<p><code>Pyside6</code> comes with a tool that automatically compiles the <code>.ui</code> files into <code>.py</code> files, and that tool (<code>pyside6-project</code>) is automatically installed in a script directory when you install the <code>pyside6</code> package.</p>
<p>The <a href="https://hatch.pypa.io/1.13/config/build/" rel="nofollow noreferrer">hatchling</a> build backend provide a nice plugin environment which allows to easily execute a Python script at build time.</p>
<h3>Problem:</h3>
<p>In my dev box, I consistently use a virtual environment (venv) per project, and I install all the dependencies for the project in that venv, so I know that the required script will be in the path. But I also know that another user could have installed the pysyde6 package in any other possible place, and that it could even not be in the current path. And I could only find in the <code>sys</code> module the path of the Python interpreter and the installation prefix.</p>
<h3>Question:</h3>
<p>How can I know for a Python script all the possible places where the <code>pyside6-project</code> program could have been installed?</p>
|
<python>
|
2025-03-29 12:58:28
| 1
| 149,980
|
Serge Ballesta
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.