QuestionId
int64 74.8M
79.8M
| UserId
int64 56
29.4M
| QuestionTitle
stringlengths 15
150
| QuestionBody
stringlengths 40
40.3k
| Tags
stringlengths 8
101
| CreationDate
stringdate 2022-12-10 09:42:47
2025-11-01 19:08:18
| AnswerCount
int64 0
44
| UserExpertiseLevel
int64 301
888k
| UserDisplayName
stringlengths 3
30
⌀ |
|---|---|---|---|---|---|---|---|---|
79,543,228
| 9,680,534
|
(fields.E331) Field specifies a many-to-many relation through model, which has not been installed
|
<p>When I run <code>makemigrations</code> I get the error</p>
<blockquote>
<p>teams.Team.members: (fields.E331) Field specifies a many-to-many relation through model 'TeamMember', which has not been installed.</p>
</blockquote>
<pre class="lang-py prettyprint-override"><code>from django.db import models
from django.conf import settings
from common import TimestampMixin
from users.models import User
class Team(models.Model, TimestampMixin):
name = models.CharField(max_length=100)
owner = models.ForeignKey(
User,
related_name='owned_teams',
on_delete=models.CASCADE
)
members = models.ManyToManyField(
User,
through='TeamMember',
related_name='teams'
)
def __str__(self):
return self.name
class TeamMember(models.Model, TimestampMixin):
user = models.ForeignKey(
User,
on_delete=models.CASCADE
)
team = models.ForeignKey(
Team,
on_delete=models.CASCADE
)
def __str__(self):
return f"{self.user} in {self.team}"
</code></pre>
<p>I don't get why it's happening because the 'teams' app is installed and both Team and TeamMember is in the same file. Any ideas?</p>
|
<python><django>
|
2025-03-29 12:20:40
| 1
| 9,087
|
Avin Kavish
|
79,543,099
| 5,722,359
|
What is the difference between "Run Python File" vs "Run Python File in Terminal" vs "Python File in Dedicated Terminal" in VS Code?
|
<p>I am new to VS Code. When I do <kbd>Ctrl</kbd>+<kbd>K</kbd>+<kbd>S</kbd> and type in <code>run python</code>, I noticed that there are three ways of running a Python file:</p>
<ol>
<li>Run Python File</li>
<li>Run Python File in Terminal</li>
<li>Run Python File in Dedicated Terminal</li>
</ol>
<p>May I know what is the difference between these three options? When should I use each of these options?</p>
|
<python><visual-studio-code>
|
2025-03-29 10:21:53
| 1
| 8,499
|
Sun Bear
|
79,542,988
| 4,977,821
|
Fastest and cleaner ways to insert, update, delete 2-dimensional array
|
<p>I'm newbie in python language and have some prolem here. I have a list of name and score for example:</p>
<pre><code>score = [['Andy', 80], ['John', 70], ['Diana', 100]]
</code></pre>
<p>I want to Delete or Update score based on it's name. So I can update score if name 'Andy' to 70 or Delete data where name is 'John'. Then the data will be like this</p>
<pre><code>score = [['Andy', 70], ['Diana', 100]]
</code></pre>
<p>Please help me with this, thanks in advance</p>
|
<python><arrays><list>
|
2025-03-29 08:45:36
| 1
| 367
|
Tommy Sayugo
|
79,542,824
| 6,024,753
|
avoiding Python overhead in solve_ivp LSODA
|
<p>I'm using the <code>solve_ivp</code> module in scipy. The LSODA option offers a Python wrapper to a Fortran integration package. The issue is that the Python wrapper integrates using LSODA timestep-by-timestep. So, there can be a lot of Python overhead to the integration if there are many timesteps.</p>
<p>I would like to avoid this overhead by integrating it all at once. Any thoughts on how to do this?</p>
<p>Looking into the scipy source code, I see <a href="https://github.com/scipy/scipy/blob/v1.15.2/scipy/integrate/_ivp/lsoda.py#L0-L1" rel="nofollow noreferrer">that the Python wrapper feeds LSODA</a> a flag <code>itask = 5</code> which requests integration by one timestep, and I see that there is an <a href="https://computing.llnl.gov/sites/default/files/ODEPACK_pub2_u113855.pdf" rel="nofollow noreferrer">alternate flag in the LSODA documentation</a> for <code>itask = 1</code> indicating integration all the way up to a final time (as many timesteps as are required). The latter seems better for my purposes because it avoids significant Python overhead. The problem is, when I just change the value of <code>itask</code> in the scipy source code, everything breaks. I think the Python wrapper is built for one timestep at a time...</p>
<p>Any suggestions?</p>
<p>If going through a modified version of the Python wrapper is too difficult, I'd also be happy to call the lsoda routine directly myself. Just don't really know how to do that--I've only ever gone through <code>solve_ivp</code>.</p>
|
<python><scipy><numerical-integration>
|
2025-03-29 05:24:02
| 0
| 453
|
billbert
|
79,542,806
| 267,265
|
Stable fluid simulation with Python and Taichi has strange drifting
|
<p>I'm trying to implement Jos Stam's stable fuild in paper: <a href="https://graphics.cs.cmu.edu/nsp/course/15-464/Spring11/papers/StamFluidforGames.pdf" rel="nofollow noreferrer">https://graphics.cs.cmu.edu/nsp/course/15-464/Spring11/papers/StamFluidforGames.pdf</a>
I want to do it in 2D but do wraparound when you hit the borders. I kind of managed to get he first stages working but as soon as I add a static velocity field the densities start drifting diagonally (down-left). If you try it you can add density with right mouse button. Can anybody see what the issue could be in the code? I'm thinking it could be related to the bilinear interpolation (but not sure, the velocity field is not causing it, the move seems to be a miscalculation):</p>
<pre><code>############################
# etapa2.py
# Etapa 2: Densidad con Difusión (Jacobi) + Advección (Semi-Lagrangiana)
# Modificado para usar FieldPair para la velocidad y kernel de inicialización en cero
############################
import numpy as np
import taichi as ti
##################################
# Inicializar Taichi
##################################
arch = ti.vulkan if ti._lib.core.with_vulkan() else ti.cuda
ti.init(arch=arch)
##################################
# FieldPair para double buffering
##################################
class FieldPair:
def __init__(self, current_field, next_field):
self.cur = current_field
self.nxt = next_field
def swap(self):
self.cur, self.nxt = self.nxt, self.cur
##################################
# Parámetros de la simulación
##################################
res = 512
h = 1.0 / res
dt = 0.03
# Coeficiente de difusión
k = 0.00003
# Número de iteraciones Jacobi
JACOBI_ITERS = 32
# Parámetros de fuente de densidad
s_dens = 10.0
s_radius = res / 15.0
# Umbral para considerar velocidad "nula"
epsilon = 1e-6
##################################
# Campos Taichi
##################################
density_1 = ti.field(float, shape=(res, res))
density_2 = ti.field(float, shape=(res, res))
dens = FieldPair(density_1, density_2)
velocity_1 = ti.Vector.field(2, dtype=ti.f32, shape=(res, res))
velocity_2 = ti.Vector.field(2, dtype=ti.f32, shape=(res, res))
vel = FieldPair(velocity_1, velocity_2)
############################
# Inicializa un patron de tercios el campo de velocidad
############################
@ti.kernel
def set_velocity_pattern(v: ti.template()):
for i, j in v.cur:
# Calculate the third of the window
third_height = res / 3.0
vx = 0.0
vy = 0.0
if i < third_height: # Upper third
vx = 1.0 # Velocity to the right (positive x-direction)
vy = 0.0 # No vertical velocity
elif i < 2 * third_height: # Middle third
vx = -1.0 # Velocity to the left (negative x-direction)
vy = 0.0 # No vertical velocity
else: # Bottom third
vx = 1.0 # Velocity to the right (positive x-direction)
vy = 0.0 # No vertical velocity
v.cur[i, j] = ti.Vector([vx, vy])
v.nxt[i, j] = ti.Vector([vx, vy])
##################################
# set_velocity_vortex
# - Inicializa un vórtice simple
##################################
@ti.kernel
def set_velocity_vortex(v: ti.template()):
for i, j in v.cur:
x = i + 0.5
y = j + 0.5
cx = res * 0.5
cy = res * 0.5
dx = x - cx
dy = y - cy
# Campo tangencial
scale = 2e-1
vx = -dy * scale
vy = dx * scale
v.cur[i, j] = ti.Vector([vx, vy])
v.nxt[i, j] = ti.Vector([vx, vy])
##################################
# add_sources
# - Inyecta densidad alrededor del ratón
##################################
@ti.kernel
def add_density(dens_field: ti.template(), input_data: ti.types.ndarray()):
for i, j in dens_field:
densidad = input_data[2] * s_dens
mx, my = input_data[0], input_data[1]
# Centro de la celda
cx = i + 0.5
cy = j + 0.5
# Distancia al ratón
d2 = (cx - mx) ** 2 + (cy - my) ** 2
dens_field[i, j] += dt * densidad * ti.exp(-6.0 * d2 / s_radius**2)
##################################
# diffuse_jacobi
# - Usa iteraciones Jacobi para difundir
##################################
@ti.kernel
def diffuse_jacobi(dens_in: ti.template(), dens_out: ti.template(), diff: float):
a = diff * dt / (h * h)
for _ in ti.static(range(JACOBI_ITERS)):
# 1) Jacobi
for i, j in dens_in:
i_left = (i - 1 + res) % res
i_right = (i + 1) % res
j_down = (j - 1 + res) % res
j_up = (j + 1) % res
dens_out[i, j] = (
dens_in[i, j]
+ a
* (
dens_in[i_left, j]
+ dens_in[i_right, j]
+ dens_in[i, j_down]
+ dens_in[i, j_up]
)
) / (1.0 + 4.0 * a)
# 2) Copiar dens_out -> dens_in
for i, j in dens_in:
dens_in[i, j] = dens_out[i, j]
##################################
# bilerp
# - Interpolación bilineal de dens_in
# - Asume x,y en float con "wrap" en [0,res)
##################################
@ti.func
def bilerp(dens_in, x, y):
# Indices base
x0 = int(ti.floor(x)) % res
y0 = int(ti.floor(y)) % res
# Indices vecinos
x1 = (x0 + 1) % res
y1 = (y0 + 1) % res
# Distancias fraccionarias
sx = x - ti.floor(x)
sy = y - ti.floor(y)
# Valores en esquinas
d00 = dens_in[x0, y0]
d01 = dens_in[x0, y1]
d10 = dens_in[x1, y0]
d11 = dens_in[x1, y1]
# Interpolación
return (
d00 * (1 - sx) * (1 - sy)
+ d10 * sx * (1 - sy)
+ d01 * (1 - sx) * sy
+ d11 * sx * sy
)
# ##################################
# # advect
# # - Advección Semi-Lagrangiana
# # dens_in -> dens_out usando velocidad (vel)
# ##################################
# @ti.kernel
# def advect(dens_in: ti.template(), dens_out: ti.template(), vel: ti.template()):
# for i, j in dens_in:
# # Centro de la celda
# x = i + 0.5
# y = j + 0.5
# # Velocidad en (i,j)
# vx, vy = vel[i, j]
# # Posición anterior (backtrace)
# # Nota: asumiendo velocity con "cells per frame" o similar
# # factor dt ya está contemplado en velocity scale
# x_old = x - vx * dt
# y_old = y - vy * dt
# # Hacemos bilinear interpolation
# dens_out[i, j] = bilerp(dens_in, x_old, y_old)
@ti.kernel
def advect_density(dens_in: ti.template(), dens_out: ti.template(), vel: ti.template()):
for i, j in dens_in:
# Velocidad
vx, vy = vel[i, j]
# Si la magnitud es muy pequeña, nos saltamos la interpolación
if abs(vx) + abs(vy) < epsilon:
# Usamos índice entero => exactamente dens_in[i,j]
dens_out[i, j] = dens_in[i, j]
else:
# Semi-Lagrangian normal
x = i + 0.5
y = j + 0.5
x_old = x - vx * dt
y_old = y - vy * dt
dens_out[i, j] = bilerp(dens_in, x_old, y_old)
##################################
# init
##################################
def init():
dens.cur.fill(0)
dens.nxt.fill(0)
vel.cur.fill(0)
vel.nxt.fill(0)
# Inicializar la velocidad con un campo estatico
#set_velocity_vortex(vel)
set_velocity_pattern(vel)
##################################
# step
# - add_sources -> diffuse -> advect
##################################
def step(input_data):
# 1) añadir densidad
add_density(dens.cur, input_data)
# 2) difundir con Jacobi
diffuse_jacobi(dens.cur, dens.nxt, k)
dens.swap() # Swap después de la difusión para que dens.cur tenga el resultado
# 3) advect con velocity
advect_density(dens.cur, dens.nxt, vel.cur) # Usamos vel.cur para la advección
dens.swap() # Swap después de la advección para que dens.cur tenga el resultado
##################################
# main
##################################
def main():
# Ventana Taichi
window = ti.ui.Window("Etapa 2: Diffusion + Advection", (res, res), vsync=True)
canvas = window.get_canvas()
paused = False
# Inicializar todo
init()
while window.running:
# input_data = (mx, my, active)
input_data = np.zeros(3, dtype=np.float32)
# Controles
if window.get_event(ti.ui.PRESS):
e = window.event
if e.key == ti.ui.ESCAPE:
break
elif e.key == "r":
paused = False
init()
elif e.key == "p":
paused = not paused
# Ratón
if window.is_pressed(ti.ui.RMB):
mouse_xy = window.get_cursor_pos()
input_data[0] = mouse_xy[0] * res
input_data[1] = mouse_xy[1] * res
input_data[2] = 1.0
# Simulación
if not paused:
step(input_data)
# Render
canvas.set_image(dens.cur)
window.show()
# Si se llama directamente el script
if __name__ == "__main__":
main()
</code></pre>
<p>Any hints appreciated!</p>
|
<python><fluid-dynamics><taichi>
|
2025-03-29 04:54:38
| 1
| 10,454
|
German
|
79,542,742
| 16,383,578
|
How can I correctly implement AES-256-ECB from scratch in Python?
|
<p>I am using Windows 11 and I plan to implement the code in C++. As you might know, building C++ libraries on Windows is very complicated, so I want to make sure it uses the least amount of dependencies possible.</p>
<p>For more context about the larger project this will be a part of, see <a href="https://superuser.com/questions/1886136/how-can-i-make-a-windows-computer-use-ethernet-to-connect-to-one-network-and-wi">this</a>.</p>
<p>I have decided to implement a load-balancing ShadowSocks5 proxy in C++ from scratch. This is a programming challenge, a learning project, and a practical project all in one.</p>
<p>I decided to start from the easiest problem, the encryption I need to use is AES-256-GCM. GCM stands for <a href="https://en.wikipedia.org/wiki/Galois/Counter_Mode" rel="nofollow noreferrer">Galois/Counter_Mode</a>, I haven't figured out how to implement it from reading the Wikipedia article. But the Wikipedia article on <a href="https://en.wikipedia.org/wiki/Advanced_Encryption_Standard" rel="nofollow noreferrer">Advanced Encryption Standard</a> is very helpful and is one of the primary references I used in implementing this. Another Wikipedia article I referenced is <a href="https://en.wikipedia.org/wiki/AES_key_schedule" rel="nofollow noreferrer">AES key schedule</a>. I got the values for SBOX and RSBOX from this <a href="https://medium.com/@chenfelix/advanced-encryption-standard-cipher-5189e5638b81" rel="nofollow noreferrer">article</a></p>
<p>Now, here is the implementation, I wrote it all by myself, an effort that took two days:</p>
<pre class="lang-py prettyprint-override"><code>import json
with open("D:/AES_256.json", "r") as f:
AES_256 = json.load(f)
MAX_256 = (1 << 256) - 1
SBOX = AES_256["SBOX"]
RCON = AES_256["RCON"]
OCTUPLE = (
(0, 4),
(4, 8),
(8, 12),
(12, 16),
(16, 20),
(20, 24),
(24, 28),
(28, 32),
)
SEXAGESY = (
(0, 1, 2, 3),
(4, 5, 6, 7),
(8, 9, 10, 11),
(12, 13, 14, 15),
(16, 17, 18, 19),
(20, 21, 22, 23),
(24, 25, 26, 27),
(28, 29, 30, 31),
(32, 33, 34, 35),
(36, 37, 38, 39),
(40, 41, 42, 43),
(44, 45, 46, 47),
(48, 49, 50, 51),
(52, 53, 54, 55),
(56, 57, 58, 59),
)
HEXMAT = (0, 4, 8, 12, 1, 5, 9, 13, 2, 6, 10, 14, 3, 7, 11, 15)
SHIFT = (0, 1, 2, 3, 5, 6, 7, 4, 10, 11, 8, 9, 15, 12, 13, 14)
COLMAT = (
((2, 0), (1, 4), (1, 8), (3, 12)),
((2, 1), (1, 5), (1, 9), (3, 13)),
((2, 2), (1, 6), (1, 10), (3, 14)),
((2, 3), (1, 7), (1, 11), (3, 15)),
((3, 0), (2, 4), (1, 8), (1, 12)),
((3, 1), (2, 5), (1, 9), (1, 13)),
((3, 2), (2, 6), (1, 10), (1, 14)),
((3, 3), (2, 7), (1, 11), (1, 15)),
((1, 0), (3, 4), (2, 8), (1, 12)),
((1, 1), (3, 5), (2, 9), (1, 13)),
((1, 2), (3, 6), (2, 10), (1, 14)),
((1, 3), (3, 7), (2, 11), (1, 15)),
((1, 0), (1, 4), (3, 8), (2, 12)),
((1, 1), (1, 5), (3, 9), (2, 13)),
((1, 2), (1, 6), (3, 10), (2, 14)),
((1, 3), (1, 7), (3, 11), (2, 15)),
)
def add_round_key(state: list, key: list) -> list:
return [a ^ b for a, b in zip(state, key)]
def state_matrix(data: list) -> list:
return [data[i] for i in HEXMAT]
def sub_bytes(state: list) -> list:
return [SBOX[i] for i in state]
def shift_rows(state: list) -> list:
return [state[i] for i in SHIFT]
def rot8(byte: int, x: int) -> int:
x &= 7
return (byte << x | byte >> (8 - x)) & 0xFF
def quadword(quad: bytes) -> int:
a, b, c, d = quad
return (a << 24) | (b << 16) | (c << 8) | d
def rot_word(word: int) -> int:
return (word << 8 | word >> 24) & 0xFFFFFFFF
def sub_word(word: int) -> int:
return (
(SBOX[(word >> 24) & 0xFF] << 24)
| (SBOX[(word >> 16) & 0xFF] << 16)
| (SBOX[(word >> 8) & 0xFF] << 8)
| SBOX[word & 0xFF]
)
def galois_mult(x: int, y: int) -> int:
p = 0
while x and y:
if y & 1:
p ^= x
if x & 0x80:
x = (x << 1) ^ 0x11B
else:
x <<= 1
y >>= 1
return p
def mix_columns(state: list) -> list:
result = [0] * 16
for e, row in zip(state, COLMAT):
for mult, i in row:
result[i] ^= galois_mult(e, mult)
return state_matrix(result)
def key_matrix(key: list) -> list:
mat = []
for row in SEXAGESY:
line = []
for col in row:
n = key[col]
line.extend([n >> 24 & 0xFF, n >> 16 & 0xFF, n >> 8 & 0xFF, n & 0xFF])
mat.append(state_matrix(line))
return mat
def derive_key(password: bytes) -> list:
keys = [quadword(password[a:b]) for a, b in OCTUPLE]
result = keys.copy()
last = result[7]
for i in range(8, 60):
if not i & 7:
last = sub_word(rot_word(last)) ^ (RCON[i // 8] << 24)
elif i & 7 == 4:
last = sub_word(last)
key = result[i - 8] ^ last
result.append(key)
last = key
return key_matrix(result)
def aes_256_cipher(data: bytes, password: bytes) -> list:
state = add_round_key(state_matrix(data), password[0])
for i in range(1, 14):
state = add_round_key(mix_columns(shift_rows(sub_bytes(state))), password[i])
return state_matrix(add_round_key(shift_rows(sub_bytes(state)), password[14]))
def get_padded_data(data: bytes | str) -> bytes:
if isinstance(data, str):
data = data.encode("utf8")
if not isinstance(data, bytes):
raise ValueError("argument data must be bytes or str")
return data + b"\x00" * (16 - len(data) % 16)
def get_key(password: bytes | int | str) -> list:
if isinstance(password, int):
if password < 0 or password > MAX_256:
raise ValueError("argument password must be between 0 and 2^256-1")
password = password.to_bytes(32, "big")
if isinstance(password, str):
password = "".join(i for i in password if i.isalnum()).encode("utf8")
if len(password) > 32:
raise ValueError("argument password must be 32 bytes or less")
if not isinstance(password, bytes):
raise ValueError("argument password must be bytes | int | str")
return derive_key(password.rjust(32, b"\x00"))
def ecb_encrypt(data: bytes | str, password: bytes | str) -> str:
data = get_padded_data(data)
key = get_key(password)
blocks = [aes_256_cipher(data[i : i + 16], key) for i in range(0, len(data), 16)]
return "".join(f"{e:02x}" for block in blocks for e in block)
</code></pre>
<p>AES_256.json</p>
<pre class="lang-json prettyprint-override"><code>{
"SBOX": [
99 , 124, 119, 123, 242, 107, 111, 197, 48 , 1 , 103, 43 , 254, 215, 171, 118,
202, 130, 201, 125, 250, 89 , 71 , 240, 173, 212, 162, 175, 156, 164, 114, 192,
183, 253, 147, 38 , 54 , 63 , 247, 204, 52 , 165, 229, 241, 113, 216, 49 , 21 ,
4 , 199, 35 , 195, 24 , 150, 5 , 154, 7 , 18 , 128, 226, 235, 39 , 178, 117,
9 , 131, 44 , 26 , 27 , 110, 90 , 160, 82 , 59 , 214, 179, 41 , 227, 47 , 132,
83 , 209, 0 , 237, 32 , 252, 177, 91 , 106, 203, 190, 57 , 74 , 76 , 88 , 207,
208, 239, 170, 251, 67 , 77 , 51 , 133, 69 , 249, 2 , 127, 80 , 60 , 159, 168,
81 , 163, 64 , 143, 146, 157, 56 , 245, 188, 182, 218, 33 , 16 , 255, 243, 210,
205, 12 , 19 , 236, 95 , 151, 68 , 23 , 196, 167, 126, 61 , 100, 93 , 25 , 115,
96 , 129, 79 , 220, 34 , 42 , 144, 136, 70 , 238, 184, 20 , 222, 94 , 11 , 219,
224, 50 , 58 , 10 , 73 , 6 , 36 , 92 , 194, 211, 172, 98 , 145, 149, 228, 121,
231, 200, 55 , 109, 141, 213, 78 , 169, 108, 86 , 244, 234, 101, 122, 174, 8 ,
186, 120, 37 , 46 , 28 , 166, 180, 198, 232, 221, 116, 31 , 75 , 189, 139, 138,
112, 62 , 181, 102, 72 , 3 , 246, 14 , 97 , 53 , 87 , 185, 134, 193, 29 , 158,
225, 248, 152, 17 , 105, 217, 142, 148, 155, 30 , 135, 233, 206, 85 , 40 , 223,
140, 161, 137, 13 , 191, 230, 66 , 104, 65 , 153, 45 , 15 , 176, 84 , 187, 22
],
"RCON": [0, 1, 2, 4, 8, 16, 32, 64, 128, 27, 54]
}
</code></pre>
<p>It doesn't raise exceptions, but I have absolutely no idea what I am doing.</p>
<p>This is an example output:</p>
<pre><code>In [428]: ecb_encrypt('6a84867cd77e12ad07ea1be895c53fa3', '0'*32)
Out[428]: 'b981b1853c16fbb6adc7cf4a01c9c57b94a3e5ce608239660c324b01400ebdd5d45a5452d22fed94b7ca9d916ac47736'
</code></pre>
<p>I have used this <a href="https://anycript.com/crypto" rel="nofollow noreferrer">website</a> to check the correctness of the output, with the same key and plaintext, AES-256-ECB mode, the cipher text in hex is:</p>
<pre><code>e07beff38697f04e7adbc971adc2a9135f60746178fcd0f1b3040e4d15c920ad0318e084e1666e699891c78f8aa98960
</code></pre>
<p>It isn't what my code outputs as you can see.</p>
<p>Why isn't my code working properly?</p>
|
<python><encryption><aes>
|
2025-03-29 03:08:17
| 1
| 3,930
|
Ξένη Γήινος
|
79,542,733
| 10,262,805
|
Why do we reshape key, query, and value tensors in multi-head attention?
|
<p>In my PyTorch implementation of multi-head attention, i have those in <code>__init__()</code></p>
<pre><code>class MultiHeadAttentionLayer(nn.Module):
def __init__(self,d_in,d_out,context_length,dropout,num_heads,use_bias=False):
super().__init__()
self.d_out=d_out
self.num_heads=num_heads
# In multi-head attention, the output dimension (d_out) is split across multiple attention heads.
# Each head processes a portion of the total output dimensions independently before being concatenated back together.
self.head_dim=d_out//num_heads
self.query_weight = nn.Linear(d_in, d_out, bias=use_bias)
self.key_weight = nn.Linear(d_in, d_out, bias=use_bias)
self.value_weight = nn.Linear(d_in, d_out, bias=use_bias)
</code></pre>
<p>this is the forward method</p>
<pre><code>def forward(self,x):
batch_size,sequence_length,d_in=x.shape
keys=self.key_weight(x)
queries=self.query_weight(x)
values=self.value_weight(x)
# RESHAPING
# .view() is a PyTorch tensor method that reshapes a tensor without changing its underlying data. It returns a new tensor with the same data but in a different shape.
keys=keys.view(batch_size,sequence_length,self.num_heads,self.head_dim)
values=values.view(batch_size,sequence_length,self.num_heads,self.head_dim)
queries=queries.view(batch_size,sequence_length,self.num_heads,self.head_dim)
</code></pre>
<p>I understand that <code>d_out</code> is split across multiple attention heads, but I'm not entirely sure why this reshaping is necessary. How does adding <code>num_heads</code> as a new dimension affect the computation of attention, and what would happen if we skipped this step and kept the shape as "batch_size,sequence_length,d_in"</p>
|
<python><pytorch><neural-network><tensor><multihead-attention>
|
2025-03-29 02:58:57
| 2
| 50,924
|
Yilmaz
|
79,542,721
| 14,121,186
|
How can I create a closed loop curve in Manim?
|
<p>I'm trying to make a curved shape that passes through a set of points and forms a closed loop. I was able to create a VMobject with the points and use <code>.make_smooth()</code> to form the curve, but the first point has a sharp angle in it. Is there a way to make the entire curve smooth?</p>
<p>Here's the script:</p>
<pre class="lang-py prettyprint-override"><code>from manim import *
class Help(Scene):
def construct(self):
points = ((-3, 2, 0), (-1, 0, 0), (-4, -2, 0), (-5, 0, 0))
redCurve = VMobject(color=RED)
redCurve.set_points_as_corners(points)
redCurve.close_path()
redCurve.make_smooth()
self.play(Create(redCurve))
self.wait(3)
</code></pre>
<p><a href="https://i.sstatic.net/6HQs4SsB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6HQs4SsB.png" alt="Closed loop with a sharp angle in it" /></a></p>
<p>Thanks!</p>
|
<python><manim>
|
2025-03-29 02:37:57
| 1
| 873
|
Caleb Keller
|
79,542,528
| 22,146,392
|
How to strip quotes from CSV table?
|
<p>I'm using the <a href="https://pandas.pydata.org/docs/reference/index.html" rel="nofollow noreferrer">pandas library</a> to convert CSVs to other data types. My CSV has the fields quoted, like this:</p>
<pre class="lang-none prettyprint-override"><code>"Version", "Date", "Code", "Description", "Tracking Number", "Author"
"0.1", "22AUG2022", , "Initial Draft", "NR", "Sarah Marshall"
"0.2", "23SEP2022", "D", "Update 1", "N-1234", "Bill Walter"
"0.3", "09MAY2023", "A\, C", "New update.", "N-1235", "George Orwell"
</code></pre>
<p>The problem is that when I read the CSV with <code>pandas.read_csv('myfile.csv')</code>, the quotes are included in the values:</p>
<pre><code> Version "Date" "Code" "Description" "Tracking Number" "Author"
0 0.1 "22AUG2022" "Initial Draft" "NR" "Sarah Marshall"
1 0.2 "23SEP2022" "D" "Update 1" "N-1234" "Bill Walter"
2 0.3 "09MAY2023" "A, C" "New update." "N-1235" "George Orwell"
</code></pre>
<p>So these quotes are included when converting to HTML:</p>
<pre class="lang-html prettyprint-override"><code><table>
<thead>
<tr style="text-align: right">
<th></th>
<th>Version</th>
<th>"Date"</th>
<th>"Code"</th>
<th>"Description"</th>
<th>"Tracking Number"</th>
<th>"Author"</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0.1</td>
<td>"22AUG2022"</td>
<td></td>
<td>"Initial Draft"</td>
<td>"NR"</td>
<td>"Sarah Marshall"</td>
</tr>
...
</code></pre>
<p>I've tried <code>quoting=csv.QUOTE_NONE</code>, but it didn't fix it--in fact, it actually added quotes to the <code>Version</code> column.</p>
<p>I found <a href="https://stackoverflow.com/questions/31281699/python-strip-every-double-quote-from-csv">this question</a>--the answers essentially says to strip out any quotes in post processing.</p>
<p>I can of course loop through the CSV rows and strip out the leading/trailing quotes for each of the values, but since quoted values are common with CSV I'd expect that there would be a parameter or easy way to enable/disable the quotes in the rendered output but I can't find something like that.</p>
<p>Is there a way to accomplish this?</p>
|
<python><pandas><csv>
|
2025-03-28 22:38:03
| 4
| 1,116
|
jeremywat
|
79,542,515
| 7,085,818
|
Comments on columns in Postgres table is Null
|
<p>I added comments in my postgres table using this method, statement ran successfully.</p>
<pre><code>COMMENT ON TABLE statements IS 'Table storing bank transaction data imported from statements sheet';
COMMENT ON COLUMN statements.id IS 'Unique identifier for each transaction';
COMMENT ON COLUMN statements.customer_id IS 'Reference to the customer, alphanumeric ID';
</code></pre>
<p>I am using this query to get comments on the table</p>
<pre><code>SELECT
column_name,
data_type,
col_description(('public.' || 'statements')::regclass, ordinal_position) AS column_comment
FROM
information_schema.columns
WHERE
table_schema = 'public'
AND table_name = 'statements';
</code></pre>
<p>Here is the result
<a href="https://i.sstatic.net/Qs41Zw0n.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Qs41Zw0n.png" alt="result of query" /></a></p>
<p>What am I missing, how to add comments properly? or how to retreive info properly?</p>
<p>I tried <a href="https://stackoverflow.com/questions/56066028/why-cant-i-see-comment-on-column">this</a> but it didnt work, error says <code>user_col_comments</code> doesnt exist</p>
<p>Here is my client code</p>
<pre><code>from sqlalchemy import create_engine, text
connection_string = f"postgresql://{postgres_username}:{postgres_password}@{postgres_host}:{postgres_port}/{postgres_database}"
engine = create_engine(connection_string)
def execute_sql_from_file(engine, file_path):
"""
Execute SQL queries from a file.
Args:
engine: SQLAlchemy engine object
file_path: Path to the SQL file
Returns:
List of results from executed queries
"""
# Read the SQL file
with open(file_path, "r") as file:
sql_queries = file.read()
# Split the queries if there are multiple
queries = sql_queries.split(";")
# Store results
results = []
# Execute each query individually
with engine.connect() as connection:
for query in queries:
query = query.strip()
if query: # Ensure the query is not empty
print(f"Executing query: {query}")
result = connection.execute(text(query))
# If the query returns data (SELECT), store it in results
if result.returns_rows:
results.append(result.fetchall())
print("Query executed successfully")
return results
</code></pre>
|
<python><postgresql><sqlalchemy>
|
2025-03-28 22:29:12
| 1
| 610
|
Kundan
|
79,542,328
| 4,869,375
|
Predicting `re` regexp memory consumption
|
<p>I have a large (gigabyte) file where an S-expression appears, and I want to skip to the end of the S-expression. The depth of the S-expression is limited to 2, so I tried using a Python regexp (<code>b'\\((?:[^()]|\\((?:[^()]|)*\\))*\\)'</code>). This turned out to consume too much RAM, and digging deeper I found that memory consumption of moderately complex regexps seems highly unpredictable if the match is large. For instance, the following four equivalent regexps all match a full ten megabyte string. The fourth one (arguably the most complex one) uses a reasonable amount (30M) of RAM, whereas the others consume one gigabyte:</p>
<pre class="lang-py prettyprint-override"><code>import re
dot = b'[' + b''.join(b'\\x%02x' % (c,) for c in range(256)) + b']'
assert re.match(b'(?:.|.)*', b'a'*10000000).end() > 1000000
assert re.match(b'(?:.|.|a)*', b'a'*10000000).end() > 1000000
assert re.match(b'(?:%s|%s)*' % (dot,dot), b'a'*10000000).end() > 1000000
assert re.match(b'(?:%s|%s|a)*' % (dot,dot), b'a'*10000000).end() > 1000000
</code></pre>
<p>(using Python 3.12.3)</p>
<p>Is there a reasonable way to predict if the performance of a Python regexp can scale? And in particular, are there some design principles I can follow if I want to avoid performance pitfalls?</p>
<p>(This question is specifically about the <code>re</code> module, because I prefer to use standard Python libraries; I suspect this would not be an issue if I switched to a third-party lib like <code>regex</code>)</p>
|
<python><python-3.x><python-re>
|
2025-03-28 20:05:06
| 1
| 910
|
Erik Carstensen
|
79,542,324
| 4,907,639
|
Constrain Llama3.2-vision output to a list of options
|
<p>I have several images of animals in the same directory as the script. How can I modify the following script to process an image but force the output to only be a single selection from a list:</p>
<pre><code>from pathlib import Path
import base64
import requests
def encode_image_to_base64(image_path):
"""Convert an image file to base64 string."""
return base64.b64encode(image_path.read_bytes()).decode('utf-8')
def extract_text_from_image(image_path):
"""Send image to local Llama API and get text description."""
base64_image = encode_image_to_base64(image_path)
payload = {
"model": "llama3.2-vision",
"stream": False,
"messages": [
{
"role": "user",
"content": (
"With just one word, classify this image into one of these exact categories:\n"
"- dog\n"
"- cat\n"
"- butterfly\n"
),
"images": [base64_image]
}
]
}
response = requests.post(
"http://localhost:11434/api/chat",
json=payload,
headers={"Content-Type": "application/json"}
)
return response.json().get('message', {}).get('content', 'No text extracted')
def process_directory():
"""Process all images in current directory and create text files."""
for image_path in Path('.').glob('*'):
if image_path.suffix.lower() in {'.png', '.jpg', '.jpeg', '.gif', '.bmp', '.webp'}:
print(f"\nProcessing {image_path}...")
text = extract_text_from_image(image_path)
image_path.with_suffix('.txt').write_text(text, encoding='utf-8')
print(f"Created {image_path.with_suffix('.txt')}")
process_directory()
</code></pre>
<p>However, despite different prompt engineering, I get some answers that will do more than just select from a list. For example, it may occassionally output "<em>From the image, there is a winged insect, therefore my guess is "butterfly." ANSWER: Butterfly.</em>" If I define the list as <code>allowed_options = ['dog', 'cat', 'butterfly']</code> I only want it to output a single string from that list and nothing else.</p>
|
<python><large-language-model><ollama>
|
2025-03-28 20:01:06
| 1
| 2,109
|
coolhand
|
79,542,118
| 4,470,126
|
Convert dataframe column if its datetime to longType, otherwise keep the same column
|
<p>I have a dataframe with columns <strong>entry_transaction, created_date, updated_date, transaction_date</strong>.</p>
<p><code>Created_date</code>, <code>Updated_date</code> are strings with <code>YYYY-MM-dd HH:mm:ss.SSS</code>, and transaction_day is in long milliseconds. I need to loop through the columns and if a column is in datetime format, convert to long, else keep the default value, and also apply datatype from a schema file.</p>
<p>Following is the code, while am able to convert the datetime to long format, am unable to get the default value for transaction_date.</p>
<p>There are 50 plus tables, and each table has more than 50 columns. so I need to use for loop and apply this change dynamically by checking if the column is a datetime column.</p>
<p>Here is the sample code for reproducing:</p>
<pre><code>from pyspark.sql import SparkSession
spark = SparkSession.builder.appName("example").getOrCreate()
data = [("2025-03-12 18:47:33.943", "1735862400000", "2025-03-12 18:47:33.943", "2025-03-12 18:47:33.943"), ("2025-03-12 10:47:33.943", "1735862400000", "2025-03-12 12:47:33.943", "2025-03-12 16:47:33.943"), ("2025-03-01 18:47:33.943", "1735862400000", "2025-03-04 18:47:33.943", "2025-03-12 18:47:33.943")]
columns = ["entry_transaction", "transaction_date", "creation_date", "updated_date"]
df = spark.createDataFrame(data, columns)
df.show()
df2 = df.select("entry_transaction","transaction_date", "creation_date", "updated_date")
schema = StructType([StructField('entry_transaction', LongType(), True), StructField('transaction_date', StringType(), True), StructField('creation_date', LongType(), True), StructField('updated_date', LongType(), True)])
for field in schema.fields:
print(field.name, field.dataType)
df2 = df2.withColumn(
field.name,
unix_timestamp(col(field.name).cast(TimestampType())) * 1000000 if to_date(col(field.name), "yyyy-MM-dd").isNotNull else df2[field.name])
sorted_columns = sorted(df2.columns)
df_reordered = df2.select(sorted_columns)
display(df_reordered)
</code></pre>
<p>Expected output:
<a href="https://i.sstatic.net/oFaVhOA4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oFaVhOA4.png" alt="enter image description here" /></a></p>
<p>But am getting NULL for transaction_date
<a href="https://i.sstatic.net/izBEP5j8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/izBEP5j8.png" alt="enter image description here" /></a></p>
<p>Please help how to get the proper dataframe. Thanks,</p>
|
<python><pyspark><databricks><azure-databricks>
|
2025-03-28 17:57:01
| 1
| 3,213
|
Yuva
|
79,542,086
| 5,054,175
|
SessionNotCreatedException when launching Selenium ChromeDriver with FastAPI Slack bot on AWS Lightsail
|
<p>I have a FastAPI server with a Slack chatbot that launches Selenium for automating certain tasks. However, I'm encountering an issue when initializing the ChromeDriver with the <code>--user-data-dir</code> argument:</p>
<pre class="lang-py prettyprint-override"><code>user_data_dir = tempfile.mkdtemp()
chrome_options.add_argument(f"--user-data-dir={user_data_dir}")
# Initializing the driver
driver = webdriver.Chrome(options=chrome_options)
</code></pre>
<p>When I run this, it fails with the following error:</p>
<pre><code>selenium.common.exceptions.SessionNotCreatedException: Message: session not created: probably user data directory is already in use, please specify a unique value for --user-data-dir argument, or don't use --user-data-dir
</code></pre>
<p>When I connect on ssh to the server it works without doing <strong>nothing</strong>.</p>
<p>I tried change the path:</p>
<pre><code>chrome_options.add_argument("--user-data-dir=/home/ubuntu/mytmp/")
</code></pre>
<p>I tried static path</p>
<pre><code>chrome_options.add_argument("--user-data-dir=/home/ubuntu/mytmp/")
</code></pre>
<p>And I have always the same behaviour.</p>
<p>I use AWS Lightsail with Ubuntu 22.04.5 LTS</p>
|
<python><amazon-web-services><ubuntu><selenium-webdriver><selenium-chromedriver>
|
2025-03-28 17:39:17
| 1
| 1,000
|
miorey
|
79,542,042
| 648,345
|
Python Flask WSGI failure with deprecated imp module
|
<p>When I attempt to deploy a Flask app on a shared hosting site, using cPanel, the deployment fails with this message: "ModuleNotFoundError: No module named 'imp'."</p>
<p>As other posts have indicated, the <code>imp</code> module has been removed from Python and a different module, <code>importlib</code>, should be used instead. However, the file named <code>passenger_wsgi.py</code> is generated automatically any edited version will always be overwritten. It is this file that needs to use the <code>imp</code> module.</p>
<p>How can I deploy my Flask app on shared hosting, using cPanel to set up the app?</p>
<p>.</p>
|
<python><flask><wsgi><imp>
|
2025-03-28 17:17:38
| 1
| 637
|
macloo
|
79,541,975
| 967,621
|
Fix a specific rule in ruff
|
<p>How do I fix only specific rule violations in <a href="https://docs.astral.sh/ruff/" rel="nofollow noreferrer"><code>ruff</code></a>?</p>
<p>For example, I need to fix only rule <code>F401</code>.</p>
|
<python><ruff>
|
2025-03-28 16:47:33
| 1
| 12,712
|
Timur Shtatland
|
79,541,781
| 967,621
|
Prevent conda from using the defaults channel in `conda update conda`
|
<p>Conda apparently is trying to use the <code>defaults</code> channel when I run</p>
<pre class="lang-sh prettyprint-override"><code>conda update conda
</code></pre>
<p>or</p>
<pre class="lang-sh prettyprint-override"><code>conda create --name python python pyaml pytest
</code></pre>
<p>The output is:</p>
<pre><code>Channels:
- conda-forge
- bioconda
- nodefaults
- defaults
Platform: osx-arm64
...
</code></pre>
<p>I do <strong>not</strong> want to use the <code>defaults</code> channel. My <code>~/.condarc</code> file is:</p>
<pre class="lang-yaml prettyprint-override"><code>channels:
- conda-forge
- bioconda
- nodefaults
auto_activate_base: false
</code></pre>
<p>I am using <code>conda 25.3.1</code> running on macOS Sequoia 15.3.2.</p>
<h3>See also:</h3>
<p>These related posts did not solve my issue:</p>
<ul>
<li><a href="https://stackoverflow.com/q/67695893/967621">How do I completely purge and disable the default channel in Anaconda and switch to conda-forge?</a></li>
<li><a href="https://stackoverflow.com/q/77572223/967621">Forbid usage of defaults channel in Conda</a></li>
</ul>
|
<python><conda><miniconda>
|
2025-03-28 15:06:20
| 1
| 12,712
|
Timur Shtatland
|
79,541,776
| 11,462,274
|
Open a webpage popup window in Microsoft Edge browser using Python without Selenium
|
<p>I can't use <code>Selenium</code> or any other type of option that controls the browser, because I need to use it on sites that have strict restrictions against bots and automations, so I need to use the browser itself as it is originally.</p>
<p>I currently use the following pattern code to open websites:</p>
<pre class="lang-python prettyprint-override"><code>import webbrowser
url = f"https://www.google.com/"
webbrowser.get("C:/Program Files (x86)/Microsoft/Edge/Application/msedge.exe %s").open(url)
</code></pre>
<p>With the web page already open in my browser, I have two extensions that I click to open the same window but as a pop-up, each extension has a different pop-up style, but both do not take up as much space as the browser's own options at the top of the page:</p>
<p><a href="https://i.sstatic.net/XIpXbSVc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XIpXbSVc.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/tQ0c26yf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tQ0c26yf.png" alt="enter image description here" /></a></p>
<p>I would like to know if there is any way to specify in the Python code that I want to open as a pop-up or a normal tab so that I don't have to manually convert them using browser extensions.</p>
|
<python><microsoft-edge><python-webbrowser>
|
2025-03-28 15:00:47
| 0
| 2,222
|
Digital Farmer
|
79,541,683
| 11,313,748
|
Getting Infeasibility while solving constraint programming for shelf space allocation problem
|
<p>I'm trying to allocate shelf space to items on planogram using constraint programming. It is a big problem and I'm trying to implement piece by piece. At first we're just trying to place items on shelf.</p>
<p>The strategy is dividing whole planogram into multiple sections. For example if my planogram width is 10 cm with 3 shelfs and chosen granalarity is 1 then the total avialable space is 30 grids.</p>
<p><a href="https://i.sstatic.net/rEJn8dZk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rEJn8dZk.png" alt="enter image description here" /></a></p>
<p>Now, let's say I have an item of 6 cm then accordingly, it'll take 6 grid on planogram. there can be multiple such products also, so I have defined 4 constraints to arrange items properly:</p>
<ol>
<li>All items should take exactly the number of grids equal to their length.</li>
<li>All grids must be assigned to one item only.</li>
<li>All the partition of item must be on one shelf.</li>
<li>All the partition of item must be together.</li>
</ol>
<p>This approach is working perfectly for smaller configuration like the one below:</p>
<p><a href="https://i.sstatic.net/Gsc6XVyQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Gsc6XVyQ.png" alt="enter image description here" /></a></p>
<p>here, tpnb is item number and linear is space required.</p>
<p>Now, sample planogram configuration:</p>
<pre><code>
granularity = 1,
shelf_count = 3,
section_width = 10
</code></pre>
<p>The image attached above is the solution that I got after running the code. but however this is not scaling well for larger planograms, like:</p>
<pre><code> granularity = 1
shelf_count = 7
section_width = 133
</code></pre>
<p>total item count: 57</p>
<p>What I'm getting is :</p>
<pre><code>
Status: ExitStatus.UNSATISFIABLE (59.21191700000001 seconds)
No solution found.
</code></pre>
<p>I tried so many times with modifying the constraints n all but I'm unable to figure out what is causing this issue why it is failing for large config or what is the root cause of infeasibility here.</p>
<p>Sharing my code below:</p>
<pre><code>
pog_df['linear'] = pog_df.linear.apply(np.ceil)
pog_df['gridlinear'] = pog_df.linear//granularity
G = nx.grid_2d_graph(int(section_width * bay_count/granularity),int(shelf_count))
# Define the positions of the nodes for a planar layout
pos = {(x, y): (x, y) for x, y in G.nodes()}
# # Draw the graph
plt.figure(figsize=(8, 4))
nx.draw(G, pos, node_size=0.07)#, with_labels=True, node_size=7, node_color="lightblue", font_weight="bold")
products = pog_df[['tpnb', 'gridlinear']].astype(str)
locations = pd.Series([str(s) for s in G.nodes()], name='location')
locations = pd.concat([locations,locations.to_frame().location.str.strip("() ").str.split(',', expand=True).rename(columns={0: 'x', 1: 'y'})], axis=1)
l_p_idx = pd.merge(products.reset_index(),
locations,
how='cross')[['tpnb', 'gridlinear', 'location', 'x', 'y']]
n_location = len(locations)
n_products = pog_df.shape[0]
# create decision variables
l_p_idx['Var'] = l_p_idx.apply(lambda x: cp.boolvar(name=x['location']+'-'+x['tpnb']), axis=1)
m = cp.Model()
l_p_idx.groupby('tpnb', as_index=False).agg({'Var':cp.sum, 'gridlinear': 'unique'}).apply(lambda x: m.constraints.append(x['Var']==int(float(x["gridlinear"]))), axis=1)
l_p_idx.groupby('location', as_index=False).agg({'Var':cp.sum}).apply(lambda x: m.constraints.append(x['Var']<=1), axis=1)
l_p_idx["y"] = l_p_idx["y"].astype("int32")
shelf_var = {tpnb: cp.intvar(0, max(l_p_idx["y"])) for tpnb in l_p_idx["tpnb"].unique()}
l_p_idx.apply(lambda row: m.constraints.append(
(row['Var'] == 1).implies(row['y'] == shelf_var[row['tpnb']])
), axis=1)
def process_group(level, data):
return level, {eval(row['location']): row['Var'] for _, row in data.iterrows()}
def parallel_creator(key, idx_df):
node_dict = {}
with ProcessPoolExecutor() as executor:
futures = {executor.submit(process_group, level, data): level for level, data in idx_df.groupby(key)}
for future in as_completed(futures):
level, var_dict = future.result()
node_dict[level] = var_dict
return node_dict
node_p_var_dict = parallel_creator( 'tpnb', l_p_idx)
for p in products.tpnb.values:
for shelf in range(shelf_count):
m.constraints.append(
cp.sum([(node_p_var_dict[str(p)][(level, shelf)] != node_p_var_dict[str(p)][(level+1, shelf)])
for level in range(section_width - 1)]) <= 1
)
hassol = m.solve()
print("Status:", m.status())
</code></pre>
|
<python><linear-programming><or-tools><constraint-programming><cpmpy>
|
2025-03-28 14:24:05
| 1
| 391
|
Anand
|
79,541,446
| 13,280,838
|
Does Snowflake ODBC Driver Support fast_executemany - Issue with varchar(max) columns
|
<p><strong>Scenario:</strong></p>
<p>I am trying to create a simple Python script that utilizes <code>pyodbc</code> and works with various datasources say sqlserver, azuresql, snowflake etc. Basically any source that supports ODBC connections. We have an issue when trying to load from SQL Server to Snowflake. The source contains a column whose datatype is <code>varchar(max)</code>.</p>
<p>Here are the issues/questions encountered.</p>
<p><strong>Questions:</strong></p>
<ol>
<li>Does Snowflake ODBC Driver support <code>fast_executemany</code> ? Not able to find documentation that supports this.</li>
<li>If <code>fast_executemany</code> is set to <code>True</code>, I am getting a <code>MemoryError</code>. Of course I have seen various issues and articles that discusses this, but none of the approaches tried seem to fix this. For example have tried both <code>snow_cursor.setinputsizes([(pyodbc.SQL_WVARCHAR, 0, 0)])</code> and <code>snow_cursor.setinputsizes([(pyodbc.SQL_WVARCHAR, 16777216, 0)])</code>. Both are failing.</li>
<li>If <code>fast_executemany</code> is set to False, the records are getting inserted one by one which is painfully slow.</li>
</ol>
<p>What would be the right approach to fix the issue?</p>
<p><strong>Sample Code:</strong></p>
<pre class="lang-py prettyprint-override"><code>import pyodbc
# Snowflake connection parameters
print("Starting script execution...")
# Snowflake connection parameters
conn_params = {
'DRIVER': 'SnowflakeDSIIDriver',
'SERVER': '<account>.snowflakecomputing.com',
'DATABASE': '<database>',
'SCHEMA': '<schema>',
'WAREHOUSE': '<warehouse>',
'ROLE': '<role>',
'AUTHENTICATOR': 'snowflake_jwt',
'PRIV_KEY_FILE': '<key_file_path>',
'PRIV_KEY_FILE_PWD': '<key_password>',
'UID': '<username>',
'CLIENT_SESSION_KEEP_ALIVE': 'TRUE'
}
print("Connection parameters defined...")
# SQL Server connection parameters
sql_params = {
'DRIVER': '{ODBC Driver 18 for SQL Server}',
'SERVER': '<server>',
'DATABASE': '<database>',
'INSTANCE': '<instance>',
'ENCRYPT': 'yes',
'TRUSTSERVERCERTIFICATE': 'yes',
'CONNECTION_TIMEOUT': '30',
'UID': '<username>',
'PWD': '<password>'
}
# Create connection strings
snow_conn_str = ';'.join([f"{k}={v}" for k, v in conn_params.items()])
sql_conn_str = ';'.join([f"{k}={v}" for k, v in sql_params.items()])
try:
# Connect to SQL Server
sql_conn = pyodbc.connect(sql_conn_str)
sql_cursor = sql_conn.cursor()
# Connect to Snowflake
snow_conn = pyodbc.connect(snow_conn_str)
snow_cursor = snow_conn.cursor()
snow_cursor.fast_executemany = False #True
# snow_cursor.setinputsizes([(pyodbc.SQL_WVARCHAR, 0, 0)])
snow_cursor.setinputsizes([(pyodbc.SQL_WVARCHAR, 16777216, 0)])
# Prepare insert query
insert_query = """
INSERT INTO SNOWFLAKE_TABLE
(COL_01, COL_02, COL_03, COL_04,
COL_05, COL_06, COL_07, COL_08, COL_09)
VALUES (?,?,?,?,?,?,?,?,?)
"""
# Source query
source_query = "SELECT top 1000 * FROM <source_table> with (nolock)"
sql_cursor.execute(source_query)
print("SQL query executed successfully")
batch_size = 1000
total_rows = 0
while True:
# Fetch batch of rows
rows = sql_cursor.fetchmany(batch_size)
print(f"Fetched {len(rows) if rows else 0} rows from SQL Server")
if not rows:
break
# Insert batch into Snowflake
snow_cursor.executemany(insert_query, rows)
print(f"Executed batch insert of {len(rows)} rows to Snowflake")
snow_conn.commit()
print("Committed changes to Snowflake")
total_rows += len(rows)
print(f"Inserted {len(rows)} rows. Total rows processed: {total_rows}")
print(f"Successfully completed. Total rows inserted: {total_rows}")
except pyodbc.Error as e:
print(f"ODBC Error: {str(e)}")
import traceback
print(traceback.format_exc())
raise # Re-raise to see full error chain
except Exception as f:
print(f"Unexpected error: {str(f)}")
import traceback
print(traceback.format_exc())
raise # Re-raise to see full error chain
finally:
# Close all connections
for cursor in [sql_cursor, snow_cursor]:
if cursor in locals():
cursor.close()
for conn in [sql_conn, snow_conn]:
if conn in locals():
conn.close()
</code></pre>
|
<python><snowflake-cloud-data-platform><odbc><pyodbc>
|
2025-03-28 12:36:40
| 0
| 669
|
rainingdistros
|
79,541,287
| 6,212,999
|
How to write type hints for recursive function computing depth in Python?
|
<p>I wrote the following function in Python 3.12:</p>
<pre><code># pyright: strict
from collections.abc import Iterable, Mapping
from typing import Any
type Nested = Any | Mapping[str, Nested] | Iterable[Nested]
def _get_max_depth(obj: Nested) -> int:
if isinstance(obj, Mapping):
return max([0] + [_get_max_depth(val) for val in obj.values()]) + 1
elif isinstance(obj, Iterable) and not isinstance(obj, str):
return max([0] + [_get_max_depth(elt) for elt in obj]) + 1
else:
return 0
</code></pre>
<p>However, pyright 1.1.398 complains that:</p>
<pre><code>demo-pyright.py
demo-pyright.py:11:42 - error: Argument type is partially unknown
Argument corresponds to parameter "obj" in function "_get_max_depth"
Argument type is "Any | Mapping[str, Any | ... | Iterable[Nested]] | Iterable[Any | Mapping[str, Nested] | ...] | Unknown" (reportUnknownArgumentType)
demo-pyright.py:11:51 - error: Type of "val" is partially unknown
Type of "val" is "Any | Mapping[str, Nested] | Iterable[Nested] | Unknown" (reportUnknownVariableType)
demo-pyright.py:13:42 - error: Argument type is partially unknown
Argument corresponds to parameter "obj" in function "_get_max_depth"
Argument type is "Unknown | Any | Mapping[str, Any | ... | Iterable[Nested]] | Iterable[Any | Mapping[str, Nested] | ...]" (reportUnknownArgumentType)
demo-pyright.py:13:51 - error: Type of "elt" is partially unknown
Type of "elt" is "Unknown | Any | Mapping[str, Nested] | Iterable[Nested]" (reportUnknownVariableType)
4 errors, 0 warnings, 0 informations
</code></pre>
<p>To silence pyright, I could use the following workarounds:</p>
<ul>
<li>Change <code>Any</code> to concrete types, such as <code>int | str</code>.</li>
<li>Replace <code>Mapping</code> with <code>dict</code>.</li>
<li>Replace <code>Iterable</code> with <code>list</code>.</li>
</ul>
<p>However, I am not satisfied with these workarounds since the function should be generic.</p>
<p>Is there a way to write type hints that satisfy pyright in strict mode in this case?</p>
|
<python><python-typing><pyright>
|
2025-03-28 11:22:55
| 2
| 405
|
Marcin Barczyński
|
79,541,208
| 189,247
|
Printing numpy matrices horizontally on the console
|
<p>Numpy has many nice features for formatted output, but something I miss is the ability to print more than one array/matrix on the same line. What I mean is easiest to explain with an example. Given the following code:</p>
<pre><code>A = np.random.randint(10, size = (4, 4))
B = np.random.randint(10, size = (4, 4))
niceprint(A, "*", B, "=", A @ B)
</code></pre>
<p>How can you implement <code>niceprint</code> so that it prints the following on the console?</p>
<pre><code>[[7 1 0 4] [[8 6 2 6] [[ 80 45 54 83]
[8 1 5 8] * [0 3 8 5] = [147 91 123 140]
[3 7 2 4] [7 8 7 3] [ 62 55 108 95]
[8 8 2 8]] [6 0 8 9]] [126 88 158 166]]
</code></pre>
|
<python><numpy><matrix><pretty-print>
|
2025-03-28 10:44:59
| 2
| 20,695
|
Gaslight Deceive Subvert
|
79,541,197
| 355,401
|
Migrating Python sync code to asyncio - difference between asyncio.run vs asyncio.Runner
|
<p>I'm working on migrating a Python 3 codebase from complete sync to partially asyncio.<br />
The reason it is partial is because the part of the service that read messages is not compatible with asyncio, yet so it has to remain something like this:</p>
<pre class="lang-py prettyprint-override"><code>for message in read_messages():
handle_message(message)
</code></pre>
<p>I'm converting <code>handle_message</code> to <code>handle_message_async</code> which is defined as async function and everything inside of it will be asyncio compatible. For now I want to continue handle messages one by one and be able to use asyncio inside the handling of a single message.</p>
<p>My question is what is the difference between those two options:</p>
<ol>
<li><code>asyncio.run</code></li>
</ol>
<pre class="lang-py prettyprint-override"><code>for message in read_messages():
asyncio.run(handle_message_async(message))
</code></pre>
<ol start="2">
<li><code>asyncio.Runner()</code></li>
</ol>
<pre class="lang-py prettyprint-override"><code>with asyncio.Runner() as async_runner:
for message in read_messages():
async_runner.run(handle_message_async(message))
</code></pre>
<p>Is there a difference in term of setup and teardown that maybe happening in <code>asyncio.run</code>?</p>
<p>Is there a difference in how exceptions will be raised back to the sync part of the code?</p>
|
<python><python-asyncio>
|
2025-03-28 10:38:16
| 1
| 11,544
|
Ido Ran
|
79,541,027
| 577,288
|
python - how to center tkinter fileDialog window
|
<p>I would like to open an audio file in python using a tkinter fileDialog. How can I center the fileDialog window?</p>
<pre><code>import tkinter as tk
from tkinter import filedialog
root = tk.Tk()
root.withdraw()
root.attributes('-topmost', True)
audio1 = "*.mp3 *.wav"
videos1 = "*.mp4 *.mkv"
file_path = filedialog.askopenfilename(initialdir=os.getcwd(), title="Select file",filetypes=[("audio files", audio1), ("video files", videos1)])
file_name = os.path.basename(file_path)
</code></pre>
<p>I would also like to keep the <code>root</code> hidden ... with the current code <code>root.withdraw()</code></p>
<p>And one more thing, is it possible to have the fileDialog window fall to the background (behind another window), when I select another window to be the active window? I'm currently using, <code>root.attributes('-topmost', True)</code> ... to force my <code>fileDialog</code> to open ontop of all other windows, but if I want to browse some other windows while it is still open ... then I cannot.</p>
|
<python><tkinter>
|
2025-03-28 09:30:39
| 1
| 5,408
|
Rhys
|
79,540,913
| 7,465,516
|
How do I write out an anchor using ruamel.yaml?
|
<p><strong>I want to create data in python and write it to a file as a yaml-document using anchors and merges in the output.</strong></p>
<p>I think this could be possible with ruamel YAML, because, as described in <a href="https://yaml.dev/doc/ruamel.yaml/example/#top" rel="nofollow noreferrer">ruamels official examples</a>:</p>
<ul>
<li>it can load yaml-documents with anchors and merges transparently (transparent in the sense that access to the data is the same whether it is defined through anchors or not)</li>
<li>it can round-trip yaml-files while preserving anchors (so loading a yaml-file which includes anchors can be read and rewritten to disk and still have anchors)</li>
</ul>
<p>This means ruamel.yaml must have an internal representation of the yaml-data that includes and understands anchors (unlike PyYAML, which only reads anchors but does not preserve them). There does not seem to be a documented way to create such anchors from python code.</p>
<p>The most minimal file I want to be able to create would look like this:</p>
<pre class="lang-yaml prettyprint-override"><code>base: &ANCHOR
x: 1
object:
<<: *ANCHOR
y: 2
</code></pre>
|
<python><yaml><ruamel.yaml>
|
2025-03-28 08:41:31
| 1
| 2,196
|
julaine
|
79,540,796
| 10,395,747
|
How to reuse a function from a python module into another module?
|
<p>My directory structure is like this :-</p>
<pre><code> project
- code
- betacode
- naming
- test
- test.py
- test1.py
</code></pre>
<p>So i have a func within test.py</p>
<pre><code> def give_date():
return "1991-01-01"
</code></pre>
<p>And i want to call this function within test1.py
i have tried the below imports</p>
<pre><code> import betacode.naming.test.test
from test.test import give_date
</code></pre>
<p>Nothing seems to be working and throwing error as</p>
<pre><code> ModuleNotFound : No module named "betacode"
</code></pre>
<p>or sometimes
ModuleNotFound : No module named "test"</p>
<p>I have read the concept of absolute path and relative path and I think I am doing right but still its not resolving the path. Please help.</p>
<p>test.py and test1.py are two modules within a directory test</p>
|
<python><import>
|
2025-03-28 07:36:39
| 2
| 758
|
Aviator
|
79,540,631
| 9,768,643
|
Implement Pagination In Graphql instead of Graphene Python
|
<p>here is my Gateway.py for Gateway Model</p>
<pre><code>from datetime import datetime
import graphene
from models.v0100.util import get_query_types, stop_timestamps_w_timezone
from sqlalchemy.orm import joinedload
from graphene_sqlalchemy import SQLAlchemyObjectType
from graphene_sqlalchemy_filter import FilterSet
from . import SCHEMA_VERSION
from .core import DataDevice
from .orm import BuildingModel, ElectricMeterModel, GatewayModel, ElectricMeterPanelModel
from .util import apply_dynamic_eager_loading, get_nested_models, extract_pagination_args
class Gateway(SQLAlchemyObjectType):
class Meta:
model = GatewayModel
interfaces = (graphene.relay.Node, DataDevice)
exclude_fields = (
'_building', '_building_id',
'_electric_meter_panel', '_electric_meter_panel_id',
'_electric_meters', '_giai', '_name', '_native_id', '_serial_num',
'_label', '_short_name', '_kit', '_singufm_asset_id', 'operational_status', '_id'
)
schemaVersion = graphene.String(description="The version of the schema for this object")
def resolve_schemaVersion(self, info) -> graphene.String:
return SCHEMA_VERSION
building = graphene.Field(f'models.{SCHEMA_VERSION}.building.Building', required=True, description="The building that the Gateway is connected to")
def resolve_building(self, info):
if self._building_id is None:
return None
if self._building:
return self._building
electricMeterPanel = graphene.Field(f'models.{SCHEMA_VERSION}.electric_meter_panel.ElectricMeterPanel', required=False, description="The electric meter panel that the Gateway is connected to")
def resolve_electricMeterPanel(self, info):
if self._electric_meter_panel_id is None:
return None
if self._electric_meter_panel:
return self._electric_meter_panel
electricMeters = graphene.List(f'models.{SCHEMA_VERSION}.electric_meter.ElectricMeter', required=False, description="The Electric Meters that the Gateway is connected to")
def resolve_electricMeters(self, info):
return self._electric_meters or []
@classmethod
def query_resolver(cls, info, **kwargs):
pasof = kwargs.pop('asof', datetime.utcnow())
stop_timestamps_w_timezone(pasof)
if not 'asof' in info.variable_values:
info.variable_values['asof'] = pasof
filters = kwargs.pop('filters', None)
query = info.context['session'].query(GatewayModel).params({"p_asof": pasof})
if filters:
query = cls.Filter.filter(info, query, filters=filters)
# type_names = get_query_types(info)
print(f"Generated Query: {query}")
# Apply eager loading for nested models
models_names = get_nested_models(info)
query = apply_dynamic_eager_loading(query, GatewayModel, models_names)
data = extract_pagination_args(kwargs,query)
print(f"Fetched data from query: {data}")
return data
class Filter(FilterSet):
class Meta:
name = 'GatewayFilter'
model = GatewayModel
fields = {
'giai': [FilterSet.EQ]
, 'nativeId': [FilterSet.EQ, FilterSet.ILIKE]
, 'kit': [FilterSet.EQ, FilterSet.ILIKE]
, 'serialNum': [FilterSet.EQ, FilterSet.ILIKE]
, 'isOperational': [FilterSet.EQ]
, 'operationalStatus': [FilterSet.EQ]
}
</code></pre>
<p>and here is my extract_pagination_args :</p>
<pre><code> def decode_cursor(cursor):
"""Decode the cursor value from base64 and extract the offset."""
if cursor and cursor.lower() != "none":
decoded = base64.b64decode(cursor).decode("utf-8")
return int(decoded.split(":")[-1])
return None
def extract_pagination_args(kwargs, query):
first = kwargs.pop('first', None)
last = kwargs.pop('last', None)
after_cursor = decode_cursor(kwargs.pop('after', None))
end_cursor = decode_cursor(kwargs.pop('before', None))
print("offset",after_cursor)
total_count = query.count()
if first:
first =first+1
if last:
last = last+1
if after_cursor:
query = query.offset(after_cursor)
if first:
query = query.limit(first)
else:
query = query.limit(total_count - after_cursor)
elif end_cursor:
reverse_offset = max(0, total_count - after_cursor)
query = query.offset(reverse_offset)
if last:
query = query.limit(last)
else:
query = query.limit(total_count - reverse_offset)
elif first :
# Forward pagination without cursor
query = query.limit(first)
elif last:
# Fetch last N items without cursor
start_offset = max(0, total_count - last)
query = query.offset(start_offset).limit(last)
return query.all()
</code></pre>
<p>while querying the first 10 am getting proper value but using that end cursor if i query the first:10 , after:"Cursor"
am getting empty as graphene pagination again filter the same
how to over come this issue</p>
<p>Any help would be appriciated</p>
|
<python><graphql><graphene-python>
|
2025-03-28 05:54:20
| 0
| 836
|
abhi krishnan
|
79,540,627
| 6,699,447
|
Why does pathlib.Path.glob function in Python3.13 return map object instead of a generator?
|
<p>I was playing around with <code>Path</code> objects and found an interesting behaviour. When testing with python3.11, <code>Path.glob</code> returns a <code>generator</code> object.</p>
<pre class="lang-py prettyprint-override"><code>>>> from pathlib import Path
>>>
>>> cwd = Path.cwd()
>>> cwd.glob("*")
<generator object Path.glob at 0x100b5e680>
>>> import sys;sys.version
'3.11.6 (v3.11.6:8b6ee5ba3b, Oct 2 2023, 11:18:21) [Clang 13.0.0 (clang-1300.0.29.30)]'
</code></pre>
<p>I tested the same code with Python3.13, but this time it returns a <code>map</code> object.</p>
<pre class="lang-py prettyprint-override"><code>>>> from pathlib import Path
>>>
>>> cwd = Path.cwd()
>>> cwd.glob("*")
<map object at 0x101867940>
>>> import sys;sys.version
'3.13.0 (v3.13.0:60403a5409f, Oct 7 2024, 00:37:40) [Clang 15.0.0 (clang-1500.3.9.4)]'
</code></pre>
<p>I know the result can be obtained by wrapping it in a list, but I am curios to understand why this change was made. My initial thought was that it might be because of the performance reasons. But a quick test shows that python3.13 version slower than python3.11 on my system.</p>
<p>Here is the benchmark result:</p>
<pre class="lang-bash prettyprint-override"><code>$ python3.11 -m timeit -n 20000 'from pathlib import Path;cwd=Path.cwd();r=list(cwd.glob("*.py"))'
20000 loops, best of 5: 72.9 usec per loop
$ python3.13 -m timeit -n 20000 'from pathlib import Path;cwd=Path.cwd();r=list(cwd.glob("*.py"))'
20000 loops, best of 5: 75.1 usec per loop
</code></pre>
|
<python><glob><pathlib><python-3.11><python-3.13>
|
2025-03-28 05:52:11
| 1
| 25,841
|
user459872
|
79,540,595
| 4,710,828
|
Unable to suppress scientific notation in datarame, which in turn is causing pyodbc.ProgrammingError "Error converting data type nvarchar to numeric."
|
<p>Below is what I'm trying to accomplish:</p>
<ol>
<li>Get data from API</li>
<li>Clean the data</li>
<li>Insert the data into SQL Server</li>
</ol>
<p>Issue that I'm facing:</p>
<ul>
<li>I convert the data that I receive from api to a pandas dataframe</li>
</ul>
<pre class="lang-py prettyprint-override"><code>response = requests.get(API_SCHEME_DATA, headers=headers, data=payload, timeout=20)
if response.status_code == 200:
data = response.json()["Data"]
df = pd.DataFrame(data, dtype=str)
</code></pre>
<ul>
<li>Now I, clean this data in a very simple step</li>
</ul>
<pre class="lang-py prettyprint-override"><code># bigint list contains column names that I can treat as numeric
for col in bigint_list:
df[col] = pd.to_numeric(df[col], errors='coerce')
df[col] = df[col].fillna(0.0)
df = df.astype(str)
df = df.where(pd.notnull(df), None)
</code></pre>
<ul>
<li>I then use pyodbc connection to insert dataframe into SQL Server</li>
</ul>
<pre class="lang-py prettyprint-override"><code># Prepare columns and placeholders for the SQL query
columns = [f"[{col}]" for col in df.columns]
placeholders = ', '.join(['?' for _ in range(len(df.columns))])
# SQL Insert query
insert_data_query = f"""
INSERT INTO {table_name} ({', '.join(columns)})
VALUES ({placeholders})
"""
# Convert the dataframe rows to a list of tuples for insertion
rows = [tuple(x) for x in df.replace({None: None}).values]
# Execute the insert query for multiple rows
connection_cursor.executemany(insert_data_query, rows)
connection_cursor.connection.commit()
</code></pre>
<ul>
<li>While inserting I get the below error:</li>
</ul>
<pre><code>pyodbc.ProgrammingError: ('42000', '[42000] [Microsoft][ODBC SQL Server Driver][SQL Server]Error converting data type nvarchar to numeric. (8114) (SQLExecDirectW)')
</code></pre>
<ul>
<li>I believe, I have narrowed down the culprit to a parameter coming in the api</li>
</ul>
<pre><code>"CurrentValue": "0.00002"
</code></pre>
<ul>
<li>While inserting this parameter, the parameter value has been changed to scientific notation, and I can see the value being changed to <code>'2e-05'</code>. The column that I'm inserting in the SQL server is of <code>DECIMAL</code> type. And I believe this is why I'm getting the error.</li>
</ul>
<p>What I have done so far to resolve:</p>
<ul>
<li>I have tried to suppress the scientific notation by:</li>
</ul>
<pre class="lang-py prettyprint-override"><code>pd.options.display.float_format = '{:.8f}'.format
</code></pre>
<p>and also tried to round off the columns like below during cleaning step:</p>
<pre class="lang-py prettyprint-override"><code># bigint list contains column names that I can treat as numeric
for col in bigint_list:
df[col] = pd.to_numeric(df[col], errors='coerce')
df[col] = df[col].fillna(0.0)
df[col] = df[col].round(10)
df = df.astype(str)
df = df.where(pd.notnull(df), None)
</code></pre>
<p>However, nothing seems to work and I'm still seeing the value converted to scientific notation. Any help would be appreciated.</p>
|
<python><sql-server><pandas><dataframe><scientific-notation>
|
2025-03-28 05:28:32
| 1
| 373
|
sdawar
|
79,540,481
| 10,964,685
|
Spatial clustering with two separate datasets
|
<p>I'm hoping to get some advice on approaching a clustering problem. I have two separate spatial datasets, being real data and modelled data. The real data contains a binary output (0,1), which is displayed in the figure below. The green scatter points represent 1 and red is 0.</p>
<p>Is it possible to assign a probability (of being 1 attributed to real data) to the modelled data based on the binary output of the real data? On face value, a nearest neighbour approach makes the most sense but are there any other approaches that may be applicable?</p>
<pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from itertools import cycle
real_data = pd.DataFrame(np.random.randint(0,100,size=(100, 2)), columns=['x1','y1'])
outcome = cycle([0,1])
real_data['Outcome'] = [next(outcome) for fruit in range(len(real_data))]
colors = np.where(real_data["Outcome"]==1,'green','red')
model_data = pd.DataFrame(np.random.randint(-25,125,size=(1000, 2)), columns=['x2','y2'])
fig, ax = plt.subplots()
ax.scatter(model_data['x2'], model_data['y2'], alpha=0.5, color = 'blue', zorder = -1)
ax.scatter(real_data['x1'], real_data['y1'], color = colors, s = 100, zorder = 0)
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/2JBG93M6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2JBG93M6.png" alt="enter image description here" /></a></p>
|
<python><cluster-analysis><scatter-plot>
|
2025-03-28 03:48:49
| 1
| 392
|
jonboy
|
79,540,471
| 2,825,403
|
Filter sequences of same values in a particular column of Polars df and leave only the first occurence
|
<p>I have a very large Polars LazyFrame (if collected it would be tens of millions records). I have information recorded for a specific piece of equipment taken every second and some location flag that is either 1 or 0.</p>
<p>When I have sequences where the location flag is equal to 1, I need to filter out and only leave the latest one but this must be done per equipment id.</p>
<p>I cannot use UDFs since this is a performance-critical piece of code and should ideally stay withing Polars expression syntax.</p>
<p>For a simple case where I have only a single equipment id, I can do it relatively easily by shifting the time data 1 row and filter out the records where there's a big gap:</p>
<pre><code>df_test = pl.DataFrame(
{
'time': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13],
'equipment': [0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
'loc': [0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1]
}
)
df_test.filter(pl.col('loc') == 1).with_columns((pl.col('time') - pl.col('time').shift(1)).alias('time_diff')).filter(pl.col('time_diff') > 1)
</code></pre>
<p>This gives me sort of a correct result, but the problem is that out of 3 sequences of 1s, I only keep 2, the first one gets lost. I can probably live with that, but ideally want to not lose any data.</p>
<p>In a standard case, there will be multiple equipment types and once again, the same approach works but again, for both types, I only keep 2 out of 3 sequences.</p>
<pre><code>df_test = pl.DataFrame(
{
'time': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13,],
'equipment': [0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2],
'loc': [0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0]
}
)
</code></pre>
<p>Is there a better way to do this?</p>
|
<python><python-polars>
|
2025-03-28 03:40:33
| 2
| 4,474
|
NotAName
|
79,540,321
| 6,227,035
|
Google Calendar API on EC2 (AWS) - webbrowser.Error: could not locate runnable browser
|
<p>I am developing a Django App where a user can access his Google Calendar using this script</p>
<pre><code>credentials = None
token = os.path.join(folder, "token.pickle")
if os.path.exists(token):
with open(token, 'rb') as tk:
credentials = pickle.load(tk)
if not credentials or not credentials.valid:
if credentials and credentials.expired and credentials.refresh_token:
credentials.refresh(Request())
else:
credentials = os.path.join(folder, "credentials.json")
flow = InstalledAppFlow.from_client_secrets_file( credentials, scopes=['openid', 'https://www.googleapis.com/auth/calendar'] )
flow.authorization_url(
# Recommended, enable offline access so that you can refresh an access token without
# re-prompting the user for permission. Recommended for web server apps.
access_type='offline',
# Optional, enable incremental authorization. Recommended as a best practice.
include_granted_scopes='true',
# # Optional, if your application knows which user is trying to authenticate, it can use this
# # parameter to provide a hint to the Google Authentication Server.
login_hint=useremail,
# Optional, set prompt to 'consent' will prompt the user for consent
prompt='consent')
flow.run_local_server()
credentials = flow.credentials
with open(token, 'wb') as tk:
pickle.dump(credentials, tk)
service = build('calendar', 'v3', credentials = credentials)
</code></pre>
<p>When I test it on my local machine everything works fine. However, I run it on a EC2 instance on AWS I get this error:</p>
<pre><code>flow.run_local_server()
File "/home/ubuntu/webapp/lib/python3.12/site-packages/google_auth_oauthlib/flow.py", line 447, in run_local_server
webbrowser.get(browser).open(auth_url, new=1, autoraise=True)
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/webbrowser.py", line 66, in get
raise Error("could not locate runnable browser")
webbrowser.Error: could not locate runnable browser
</code></pre>
<p>It is breaking on the line <code>flow.run_local_server()</code>. Any idea of what I am doing wrong?</p>
<p>Thank you!!</p>
|
<python><django><amazon-web-services><amazon-ec2>
|
2025-03-28 01:10:52
| 0
| 1,974
|
Sim81
|
79,540,264
| 10,550,998
|
Install pplpy in a conda environment on Ubuntu 24.04
|
<p>I would like to install <a href="https://pypi.org/project/pplpy/" rel="nofollow noreferrer">pplpy</a> (ideally version <code>0.8.7</code>) in a conda environment.</p>
<p>What I have tried so far:</p>
<ul>
<li>Different versions of pplpy.</li>
<li>Install PPL from source</li>
<li>Set the <code>LD_LIBRARY</code> and <code>CPATH</code> environment variables so that they are found in conda.</li>
</ul>
<p>The error I keep getting is:</p>
<pre><code>/home/user/.miniconda3/envs/env/compiler_compat/ld: skipping incompatible /lib/x86_64-linux-gnu/libmvec.so.1 when searching for /lib/x86_64-linux-gnu/libmvec.so.1
</code></pre>
<p>The dependencies seem not to be found, although they are available on my system, and I set the <code>LD_LIBRARY_PATH</code> and <code>CPATH</code> to the right location.
I am out of ideas what I could try to get this working.</p>
|
<python><pip><conda><ppl>
|
2025-03-28 00:09:26
| 0
| 1,372
|
cherrywoods
|
79,540,127
| 3,174,075
|
How can I run DeepFace in a docker container on a mac?
|
<p>I am trying to install deepface in docker container to do image comparison and I am close to getting it to run... however, I think I am probably missing dependencies... but cannot figure out which ones I need.</p>
<p>In my main.py... If I comment <em>from deepface import DeepFace</em> the app starts up and runs... when I uncomment it hangs</p>
<p><em><strong>NOTE</strong></em>: when I say <em>run the application</em> it means I use this to start the app: <code>uvicorn app.main:app --host 0.0.0.0 --port 8000 --reload</code></p>
<p><em><strong>System:</strong></em></p>
<p>Hardware:</p>
<p>running <code>uname -m</code> gives arm64</p>
<p>MacBook Air</p>
<p><a href="https://i.sstatic.net/bCgrRIUr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bCgrRIUr.png" alt="enter image description here" /></a></p>
<p>Software:</p>
<p>Docker desktop</p>
<pre><code>Server: Docker Desktop 4.39.0 (184744)
Engine:
Version: 28.0.1
API version: 1.48 (minimum version 1.24)
Go version: go1.23.6
Git commit: bbd0a17
Built: Wed Feb 26 10:40:57 2025
OS/Arch: linux/arm64
Experimental: false
containerd:
Version: 1.7.25
GitCommit: bcc810d6b9066471b0b6fa75f557a15a1cbf31bb
runc:
Version: 1.2.4
GitCommit: v1.2.4-0-g6c52b3f
docker-init:
Version: 0.19.0
GitCommit: de40ad0
</code></pre>
<p>VS Code - latest version</p>
<p><em><strong>Files in project</strong></em></p>
<p><strong>.devcontainer/devcontainer.json</strong></p>
<pre><code>{
"name": "AI Devcontainer",
"dockerComposeFile": "../docker-compose.yml",
"service": "rp",
"workspaceFolder": "/usr/src/app",
"postCreateCommand": "pip install -r requirements.txt"
}
</code></pre>
<p><strong>./docker-compose.yml</strong></p>
<pre><code>services:
rp:
container_name: ai
platform: linux/amd64
build: .
command: sleep infinity
networks:
- RP
volumes:
- .:/usr/src/app
ports:
- '8000:8000'
networks:
RP:
external: true
</code></pre>
<p><strong>./Dockerfile</strong></p>
<pre><code># Use the official Python image from the DockerHub
FROM python:3.9-slim
# Set the working directory in the container
WORKDIR /app
# Install system dependencies required for DeepFace
# Update package list and install dependencies
RUN apt update && apt install -y \
tzdata \
libgl1-mesa-glx \
libegl1-mesa \
libxrandr2 \
libxss1 \
libxcursor1 \
libxcomposite1 \
libasound2 \
libxi6 \
libxtst6 \
curl \
ffmpeg \
git \
nano \
gnupg2 \
libsm6 \
wget \
unzip \
libxcb-icccm4 \
libxkbcommon-x11-0 \
libxcb-keysyms1 \
libxcb-render0 \
libxcb-render-util0 \
libxcb-image0 \
python3 \
python3-pip
# Install required Qt and X11 libraries
RUN apt update && apt install -y \
libx11-xcb1 \
libxcb1 \
libxcomposite1 \
libxkbcommon-x11-0 \
libxkbcommon0 \
libxcb-cursor0 \
libxcb-shape0 \
libxcb-shm0 \
libxcb-sync1 \
libxcb-xfixes0 \
libxcb-xinerama0 \
libxcb-xinput0 \
libxcb-xkb1
# Upgrade pip and install required Python packages
RUN python -m pip install --upgrade pip
RUN python -m pip install \
onnxruntime==1.15.1 \
numpy==1.21.6 \
h5py \
numexpr \
protobuf==3.20.2 \
opencv-python==4.8.0.74 \
opencv-contrib-python==4.8.0.74 \
pyqt6==6.5.1 \
onnx==1.14.0 \
torch==1.13.1 \
torchvision==0.14.1
# Install dependencies
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
# Copy the rest of the application code into the container
COPY . .
# Expose port 8000 for the FastAPI app
EXPOSE 8000
# Command to run FastAPI with Uvicorn
CMD ["sleep", "infinity"]
</code></pre>
<p><strong>./requirements.txt</strong></p>
<pre><code>fastapi
uvicorn
requests
deepface
flask
numpy
pandas
tensorflow-cpu
gunicorn
pillow
opencv-python
</code></pre>
<p>I load the devcontainer in VS Code, the container builds, then I run the application, it spits out this and then hangs</p>
<pre><code>root@1f6928173e34:/usr/src/app# uvicorn app.main:app --host 0.0.0.0 --port 8000 --reload
INFO: Will watch for changes in these directories: ['/usr/src/app']
INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
INFO: Started reloader process [1104] using StatReload
INFO:numexpr.utils:NumExpr defaulting to 8 threads.
</code></pre>
<p>main.py</p>
<pre><code>import logging
logging.basicConfig(level=logging.DEBUG)
from fastapi import FastAPI, HTTPException
import uvicorn
import requests
import numpy as np
import os
import socket
import base64
from typing import List, Dict, Union
#
# THIS LINE CAUSES THE ERROR
#
from deepface import DeepFace
app = FastAPI(title="RP AI Documentation", docs_url="/api-docs")
@app.on_event("startup")
async def startup_event():
print("Starting up...")
# Simulate any heavy lifting during startup if needed
print("Startup finished!")
if __name__ == "__main__":
print("Starting app...")
uvicorn.run(app, host="0.0.0.0", port=8000)
print("App started!")
# Function to perform DeepFace comparison
def compare_faces_with_deepface(img_path1: str, img_path2: str) -> bool:
try:
result = DeepFace.verify(img_path1, img_path2)
return result["verified"]
except Exception as e:
raise HTTPException(status_code=500, detail=f"Error during face comparison: {str(e)}")
# Function to decode images and save them as temporary files
def get_temp_image_paths(images: List[Dict[str, Union[str, int]]]) -> List[str]:
temp_image_paths = []
for img in images:
try:
print(f"Processing img[{img['id']}], type: {img['imagetype']}")
img_data = base64.b64decode(img["filedata"])
temp_path = f"/tmp/{img['id']}.jpg"
with open(temp_path, "wb") as f:
f.write(img_data)
temp_image_paths.append(temp_path)
except Exception as e:
raise HTTPException(status_code=400, detail=f"Error processing image: {str(e)}")
return temp_image_paths
# Function to process the images and perform the comparison
def process_and_compare_images(images: List[Dict[str, Union[str, int]]]) -> bool:
if len(images) < 2:
raise HTTPException(status_code=400, detail="At least two images are required for comparison.")
temp_image_paths = get_temp_image_paths(images)
if len(temp_image_paths) < 2:
raise HTTPException(status_code=400, detail="Not enough valid faces found for comparison.")
match = compare_faces_with_deepface(temp_image_paths[0], temp_image_paths[1])
# Clean up temporary files
for path in temp_image_paths:
os.remove(path)
return match
@app.post("/compare", tags=["AI"])
def compare(images: List[Dict[str, Union[str, int]]]):
try:
# Call the function to process images and compare
match_result = process_and_compare_images(images)
# If match is None or an unexpected outcome, handle it gracefully
if match_result is None:
return {"match_results": "No comparison was made."}
return {"match_results": match_result}
except HTTPException as e:
raise e
except Exception as e:
raise HTTPException(status_code=500, detail=f"Unexpected error: {str(e)}")
</code></pre>
<p>I also looked up this:</p>
<p>Use the Latest TensorFlow Version with the <code>tensorflow-cpu</code> Package</p>
<p>Ensure you’re installing the latest TensorFlow CPU version, which should automatically be compiled without AVX support.</p>
<p><em><strong>TRY 1</strong></em></p>
<p>Put this in Dockerfile</p>
<pre><code># Use the official Python image from the DockerHub
FROM python:3.9-slim
FROM serengil/deepface
</code></pre>
<p>Run the application, got this:</p>
<pre><code>INFO: Will watch for changes in these directories: ['/usr/src/app']
INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
INFO: Started reloader process [521] using StatReload
INFO: Started server process [523]
INFO: Waiting for application startup.
Starting up...
Startup finished!
</code></pre>
<p>Uncommented this line <code>from deepface import DeepFace</code> in main.py</p>
<p>got this:</p>
<pre><code>WARNING: StatReload detected changes in 'app/main.py'. Reloading...
INFO: Shutting down
INFO: Waiting for application shutdown.
INFO: Application shutdown complete.
INFO: Finished server process [523]
The TensorFlow library was compiled to use AVX instructions, but these aren't available on your machine.
</code></pre>
<p><em><strong>TRY 2</strong></em></p>
<p>After reading comments from Gwang-Jin Kim, I modified the following files:</p>
<p><strong>Dockerfile</strong></p>
<pre><code># Use only this docker image
FROM serengil/deepface
# Set the working directory in the container
WORKDIR /app
# Install dependencies
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
# Copy the rest of the application code into the container
COPY . .
# Expose port 8000 for the FastAPI app
EXPOSE 8000
# Command to run FastAPI with Uvicorn
CMD ["sleep", "infinity"]
</code></pre>
<p>I removed the amd64 from docker-compose, as I thought that might be causing the problem since my computer is based on an M3 chip</p>
<p><strong>docker-compose.yml</strong></p>
<pre><code>services:
rp:
container_name: ai
build: .
command: sleep infinity
networks:
- RP
volumes:
- .:/usr/src/app
ports:
- '8000:8000'
networks:
RP:
external: true
</code></pre>
<p><strong>requirements.txt</strong></p>
<pre><code>fastapi
uvicorn
requests
numpy
</code></pre>
<p>Rebuilt the container, ran the application and still am getting this:</p>
<pre><code>root@0dd0c7ae8862:/usr/src/app# uvicorn app.main:app --host 0.0.0.0 --port 8000 --reload
INFO: Will watch for changes in these directories: ['/usr/src/app']
INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
INFO: Started reloader process [540] using StatReload
The TensorFlow library was compiled to use AVX instructions, but these aren't available on your machine.
</code></pre>
<p><em><strong>TRY 3</strong></em></p>
<p>I re-read the answer provided and then cloned the repo
<code>git clone https://github.com/serengil/deepface.git</code></p>
<p>I made the following changes to these files:</p>
<p><strong>Dockerfile</strong></p>
<pre><code># base image
FROM python:3.8.12
LABEL org.opencontainers.image.source https://github.com/serengil/deepface
# -----------------------------------
# create required folder
RUN mkdir -p /app && chown -R 1001:0 /app
RUN mkdir /app/deepface
# -----------------------------------
# switch to application directory
WORKDIR /app
# -----------------------------------
# update image os
# Install system dependencies
RUN apt-get update && apt-get install -y \
ffmpeg \
libsm6 \
libxext6 \
libhdf5-dev \
&& rm -rf /var/lib/apt/lists/*
# -----------------------------------
# Copy required files from repo into image
COPY ./deepface /app/deepface
# even though we will use local requirements, this one is required to perform install deepface from source code
COPY ./requirements.txt /app/requirements.txt
COPY ./requirements_local /app/requirements_local.txt
COPY ./package_info.json /app/
COPY ./setup.py /app/
COPY ./README.md /app/
#
# ***** NOT USING THIS ******
#
#COPY ./entrypoint.sh /app/deepface/api/src/entrypoint.sh
# -----------------------------------
# if you plan to use a GPU, you should install the 'tensorflow-gpu' package
# RUN pip install --trusted-host pypi.org --trusted-host pypi.python.org --trusted-host=files.pythonhosted.org tensorflow-gpu
# if you plan to use face anti-spoofing, then activate this line
# RUN pip install --trusted-host pypi.org --trusted-host pypi.python.org --trusted-host=files.pythonhosted.org torch==2.1.2
# -----------------------------------
# install deepface from pypi release (might be out-of-date)
# RUN pip install --trusted-host pypi.org --trusted-host pypi.python.org --trusted-host=files.pythonhosted.org deepface
# -----------------------------------
# install dependencies - deepface with these dependency versions is working
RUN pip install --trusted-host pypi.org --trusted-host pypi.python.org --trusted-host=files.pythonhosted.org -r /app/requirements_local.txt
# install deepface from source code (always up-to-date)
RUN pip install --trusted-host pypi.org --trusted-host pypi.python.org --trusted-host=files.pythonhosted.org -e .
# -----------------------------------
# some packages are optional in deepface. activate if your task depends on one.
# RUN pip install --trusted-host pypi.org --trusted-host pypi.python.org --trusted-host=files.pythonhosted.org cmake==3.24.1.1
# RUN pip install --trusted-host pypi.org --trusted-host pypi.python.org --trusted-host=files.pythonhosted.org dlib==19.20.0
# RUN pip install --trusted-host pypi.org --trusted-host pypi.python.org --trusted-host=files.pythonhosted.org lightgbm==2.3.1
# -----------------------------------
# environment variables
ENV PYTHONUNBUFFERED=1
# -----------------------------------
# run the app (re-configure port if necessary)
WORKDIR /app/deepface/api/src
# Expose port 8000 for the FastAPI app
EXPOSE 8000
# I am not using the entrypoint.sh file... instead I run the app
# as I described at the top of this question
# ENTRYPOINT [ "sh", "entrypoint.sh" ]
CMD ["sleep", "infinity"]
</code></pre>
<p><strong>requirements_additional.txt</strong></p>
<pre><code>opencv-contrib-python>=4.3.0.36
mediapipe>=0.8.7.3
dlib>=19.20.0
ultralytics>=8.0.122
facenet-pytorch>=2.5.3
torch>=2.1.2
insightface>=0.7.3
onnxruntime>=1.9.0
tf-keras
typing-extensions
pydantic
albumentations
</code></pre>
<p><strong>requirements_local</strong></p>
<pre><code>numpy==1.22.3
pandas==2.0.3
Pillow==9.0.0
opencv-python==4.9.0.80
tensorflow==2.13.1
keras==2.13.1
</code></pre>
<p>requirements.txt</p>
<pre><code>fastapi
uvicorn
requests>=2.27.1
numpy>=1.14.0
pandas>=0.23.4
gdown>=3.10.1
tqdm>=4.30.0
Pillow>=5.2.0
opencv-python>=4.5.5.64
# tensorflow>=1.9.0. # I tried this
tensorflow-cpu>=1.9.0. # and this and both hang
keras>=2.2.0
Flask>=1.1.2
flask_cors>=4.0.1
mtcnn>=0.1.0
retina-face>=0.0.14
fire>=0.4.0
gunicorn>=20.1.0
</code></pre>
<p>I copied all the deepface source code into my project</p>
<p><a href="https://i.sstatic.net/Cbyao14r.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Cbyao14r.png" alt="enter image description here" /></a></p>
<p>Rebuilt the container, Ran the application with main.py commenting out <code># from deepface import DeepFace</code></p>
<pre><code>uvicorn app.main:app --host 0.0.0.0 --port 8000 --reload
INFO: Will watch for changes in these directories: ['/usr/src/app']
INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
INFO: Started reloader process [3983] using StatReload
INFO: Started server process [3985]
INFO: Waiting for application startup.
Starting up...
Startup finished!
INFO: Application startup complete.
</code></pre>
<p>It loads in swagger ( and I can hit the endpoint )</p>
<p><a href="https://i.sstatic.net/DaHsQ6p4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DaHsQ6p4.png" alt="enter image description here" /></a></p>
<p>Then I UNCOMMENTED <code># from deepface import DeepFace</code> in main.py. Uvicorn detected the change and then nothing...</p>
<pre><code>WARNING: StatReload detected changes in 'app/main.py'. Reloading...
INFO: Shutting down
INFO: Waiting for application shutdown.
INFO: Application shutdown complete.
INFO: Finished server process [3985]
</code></pre>
<p>when I CTRL-C the process and restart it also hangs</p>
<pre><code>^CINFO: Stopping reloader process [3983]
uvicorn app.main:app --host 0.0.0.0 --port 8000 --reload
INFO: Will watch for changes in these directories: ['/usr/src/app']
INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
INFO: Started reloader process [5188] using StatReload
</code></pre>
<p>I do see a AMD64 in Docker desktop - could that be an issue?</p>
<p><a href="https://i.sstatic.net/OlBt0l01.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OlBt0l01.png" alt="enter image description here" /></a></p>
|
<python><image><deepface>
|
2025-03-27 22:05:32
| 1
| 729
|
MrLister
|
79,540,025
| 6,100,177
|
How to materialize Polars expression into Series?
|
<p>Working with a single series instead of a dataframe, how can I materialize an expression on that series? For instance, when I run</p>
<pre class="lang-py prettyprint-override"><code>time = pl.Series([pl.datetime(2025, 3, 27)])
(time + pl.duration(minutes=5)).to_numpy()
</code></pre>
<p>I get</p>
<pre><code>AttributeError Traceback (most recent call last)
Cell In[46], line 2
1 time = pl.Series([pl.datetime(2025, 3, 27)])
----> 2 (time + pl.duration(minutes=5)).to_numpy()
AttributeError: 'Expr' object has no attribute 'to_numpy'
</code></pre>
|
<python><python-polars>
|
2025-03-27 20:55:28
| 1
| 1,058
|
mwlon
|
79,539,902
| 1,445,660
|
return 401 from lambda non-proxy api gateway and return headers
|
<p>How do I return 401 to the client, based on the statusCode returned from the lambda? (non-proxy integration).
How do I return the headers from the <code>headers</code> field? (for 401 and 200)</p>
<pre><code>return {
"statusCode": 401,
"headers": {"myHeader": 'abcd "efg"'}
}
</code></pre>
<p>with this cdk:</p>
<pre><code>unauthorized_integration_response = _api_gw.IntegrationResponse(
status_code="401",
response_parameters=response_integration_parameters,
selection_pattern='.*statusCode:401.*',
)
response_parameters = {
"method.response.header.Access-Control-Allow-Headers": True,
"method.response.header.Access-Control-Allow-Origin": True,
"method.response.header.Access-Control-Allow-Methods": True,
"method.response.header.Strict-Transport-Security": True,
"method.response.header.Content-Security-Policy": True,
"method.response.header.X-Frame-Options": True,
}
unauthorized_method_response = _api_gw.MethodResponse(
status_code="401", response_parameters=response_parameters
)
myresource.add_method(
"GET",
_api_gw.LambdaIntegration(
lambda_func,
proxy=False,
request_templates=default_request_template,
integration_responses=[
default_integration_response,
unauthorized_integration_response,
],
),
method_responses=[default_method_response, unauthorized_method_response],
)
</code></pre>
|
<python><python-3.x><aws-lambda><aws-api-gateway>
|
2025-03-27 19:51:16
| 0
| 1,396
|
Rony Tesler
|
79,539,856
| 6,365,949
|
AWS Glue 5.0 "Installation of Python modules timed out after 10 minutes"
|
<p>I have an AWS Glue 5.0 job where I am specifying <code>--additional-python-modules s3://my-dev/other-dependencies /MyPackage-0.1.1-py3-none-any.whl</code> in my job options.
My glue job itself is just a <code>print("hello")</code> job, because when I save and run this glue job, it runs for 10 minutes 7 seconds in AWS< then fails with this error</p>
<pre><code>LAUNCH ERROR | Glue bootstrap failed. Please refer logs for details. caused by LAUNCH ERROR | Installation of Python modules timed out after 10 minutesPlease refer logs for details.
</code></pre>
<p>my packages setup.py does use thee libraires:</p>
<pre><code>from setuptools import setup, find_packages
setup(
name="...",
version="0.1.1",
packages=find_packages(),
install_requires=[
'dask[array]',
'zarr',
'scipy',
'scikit-image',
'bioio',
'bioio-tifffile',
'tifffile',
'opencv-python',
'torch',
'pyyaml',
'xmltodict'
],
entry_points={...},
)
</code></pre>
<p>Is there anyway I can tell AWS with a flag in an AWS Glue 5.0 job that I need more then a 10 minute install timeout?</p>
|
<python><amazon-web-services><aws-glue>
|
2025-03-27 19:27:35
| 0
| 1,582
|
Martin
|
79,539,642
| 6,409,943
|
I updated to Spyder 6.05, and my kernel is now not loading
|
<p>I have been regularly updating my Spyder via the automatic updater (it is on a Windows), and today I updated Spyder to the newest version, and got the following error in the Console saying that it could not load the Konsole:</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\user1\AppData\Local\spyder\Scripts\mamba‑script.py", line 6, in
from mamba.mamba import main
File "C:\Users\user1\AppData\Local\spyder\Lib\site‑packages\mamba\mamba.py", line 46, in
from mamba import repoquery as repoquery_api
File "C:\Users\user1\AppData\Local\spyder\Lib\site‑packages\mamba\repoquery.py", line 12, in
def _repoquery(query_type, q, pool, fmt=api.QueryFormat.JSON):
^^^^^^^^^^^^^^^
AttributeError: module 'libmambapy' has no attribute 'QueryFormat'
</code></pre>
<p>Is this a bug in Spyder that I should report, or should I be updating my Python version?</p>
<p>Do I just reinstall Spyder (I have restarted the computer to no avail)? Spyder was working fine before the update. Thank you so much.</p>
|
<python><spyder>
|
2025-03-27 17:34:19
| 1
| 427
|
A. N. Other
|
79,539,574
| 1,697,288
|
SQLAlchemy mssql: Drop/Create Table on the Fly
|
<p>I've created a SQLAlchemy (2.0.39), Python (3.10.11) definition on the fly:</p>
<pre><code> dynamic_class = type(table_name, (Base,), attributes)
</code></pre>
<p>This is working, now I want to drop any existing table and recreate it using the current definition (and where the problems kick in)</p>
<pre><code> log.info(f"Dropping Table: mygrc.{table_name}")
dynamic_class.__table__.drop(dbEngine, checkfirst=True)
log.info(f"Creating Table: mygrc.{table_name}")
Base.metadata.create_all(dbEngine, tables=[dynamic_class.__table__])
</code></pre>
<p>Both the drop and create statements give me errors:</p>
<blockquote>
<p>No results. Previous SQL was not a query.</p>
</blockquote>
<p>If I try to pass the query direct to execute, it doesn't error but it doesn't execute the command either (even with a commit)</p>
<pre><code> with dbEngine.connect() as connection:
create_table_statement = str(dynamic_class.__table__.compile(dbEngine))
connection.execute(text(create_table_statement))
</code></pre>
<p>create_table_statement gives me a valid create statement.</p>
<p>Any ideas?</p>
<p>Track trace:</p>
<pre><code>Traceback (most recent call last):
File "c:\Users\username\Documents\Coding\azure-pipeline\.venv\lib\site-packages\sqlalchemy\engine\base.py", line 1980, in _exec_single_context
result = context._setup_result_proxy()
File "c:\Users\username\Documents\Coding\azure-pipeline\.venv\lib\site-packages\sqlalchemy\engine\default.py", line 1838, in _setup_result_proxy
strategy = _cursor.BufferedRowCursorFetchStrategy(
File "c:\Users\username\Documents\Coding\azure-pipeline\.venv\lib\site-packages\sqlalchemy\engine\cursor.py", line 1188, in __init__
self._rowbuffer = collections.deque(dbapi_cursor.fetchmany(1))
pyodbc.ProgrammingError: No results. Previous SQL was not a query.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\username\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Users\username\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code
exec(code, run_globals)
File "c:\Users\username\.vscode\extensions\ms-python.debugpy-2025.4.1-win32-x64\bundled\libs\debugpy\__main__.py", line 71, in <module>
cli.main()
File "c:\Users\username\.vscode\extensions\ms-python.debugpy-2025.4.1-win32-x64\bundled\libs\debugpy/..\debugpy\server\cli.py", line 501, in main
run()
File "c:\Users\username\.vscode\extensions\ms-python.debugpy-2025.4.1-win32-x64\bundled\libs\debugpy/..\debugpy\server\cli.py", line 351, in run_file
runpy.run_path(target, run_name="__main__")
File "c:\Users\username\.vscode\extensions\ms-python.debugpy-2025.4.1-win32-x64\bundled\libs\debugpy\_vendored\pydevd\_pydevd_bundle\pydevd_runpy.py", line 310, in run_path
return _run_module_code(code, init_globals, run_name, pkg_name=pkg_name, script_name=fname)
File "c:\Users\username\.vscode\extensions\ms-python.debugpy-2025.4.1-win32-x64\bundled\libs\debugpy\_vendored\pydevd\_pydevd_bundle\pydevd_runpy.py", line 127, in _run_module_code
_run_code(code, mod_globals, init_globals, mod_name, mod_spec, pkg_name, script_name)
File "c:\Users\username\.vscode\extensions\ms-python.debugpy-2025.4.1-win32-x64\bundled\libs\debugpy\_vendored\pydevd\_pydevd_bundle\pydevd_runpy.py", line 118, in _run_code
exec(code, run_globals)
File "C:\Users\username\Documents\Coding\azure-pipeline\mygrcreport.py", line 94, in <module>
mygrc_3ps_inherent_risk, primary_key = mygrcreport_db.dynamic_table(schema, "mygrc_3ps_inherent_risk", dbEngine)
File "\mygrcreport_db.py", line 81, in dynamic_table
dynamic_class.__table__.drop(dbEngine, checkfirst=True)
File "\.venv\lib\site-packages\sqlalchemy\sql\schema.py", line 1301, in drop
bind._run_ddl_visitor(ddl.SchemaDropper, self, checkfirst=checkfirst)
File "\.venv\lib\site-packages\sqlalchemy\engine\base.py", line 3249, in _run_ddl_visitor
conn._run_ddl_visitor(visitorcallable, element, **kwargs)
File "\.venv\lib\site-packages\sqlalchemy\engine\base.py", line 2456, in _run_ddl_visitor
visitorcallable(self.dialect, self, **kwargs).traverse_single(element)
File "\.venv\lib\site-packages\sqlalchemy\sql\visitors.py", line 664, in traverse_single
return meth(obj, **kw)
File "\.venv\lib\site-packages\sqlalchemy\sql\ddl.py", line 1203, in visit_table
DropTable(table)._invoke_with(self.connection)
File "\.venv\lib\site-packages\sqlalchemy\sql\ddl.py", line 314, in _invoke_with
return bind.execute(self)
File "\.venv\lib\site-packages\sqlalchemy\engine\base.py", line 1416, in execute
return meth(
File "\.venv\lib\site-packages\sqlalchemy\sql\ddl.py", line 180, in _execute_on_connection
return connection._execute_ddl(
File "\.venv\lib\site-packages\sqlalchemy\engine\base.py", line 1527, in _execute_ddl
ret = self._execute_context(
File "\.venv\lib\site-packages\sqlalchemy\engine\base.py", line 1843, in _execute_context
return self._exec_single_context(
File "\.venv\lib\site-packages\sqlalchemy\engine\base.py", line 1983, in _exec_single_context
self._handle_dbapi_exception(
File "\.venv\lib\site-packages\sqlalchemy\engine\base.py", line 2352, in _handle_dbapi_exception
raise sqlalchemy_exception.with_traceback(exc_info[2]) from e
File "\.venv\lib\site-packages\sqlalchemy\engine\base.py", line 1980, in _exec_single_context
result = context._setup_result_proxy()
File "\.venv\lib\site-packages\sqlalchemy\engine\default.py", line 1838, in _setup_result_proxy
strategy = _cursor.BufferedRowCursorFetchStrategy(
File "\.venv\lib\site-packages\sqlalchemy\engine\cursor.py", line 1188, in __init__
self._rowbuffer = collections.deque(dbapi_cursor.fetchmany(1))
sqlalchemy.exc.ProgrammingError: (pyodbc.ProgrammingError) No results. Previous SQL was not a query.
[SQL:
DROP TABLE mygrc.mygrc_3ps_inherent_risk]
</code></pre>
<p>I set the code to:</p>
<pre><code>logging.getLogger('sqlalchemy.engine').setLevel(logging.DEBUG)
log.info(f"Dropping Table: mygrc.{table_name}")
with dbEngine.connect().execution_options(isolation_level='AUTOCOMMIT') as connection:
connection.execute(text(f'DROP TABLE IF EXISTS mygrc.{table_name}'))
</code></pre>
<p>It still didn't work, it looks like it's still trying to do a rollback even with auto commit enabled, no errors no exceptions thrown and notice the raw sql is blank, same thing happens with the create and my user can run the command without issue (and the table exists), if I issue the create statement manually to execute it does the same:</p>
<pre><code>025-03-28 10:07:56,802 - INFO - Dropping Table: mygrc.mygrc_3ps_inherent_risk
2025-03-28 10:07:58,279 INFO sqlalchemy.engine.Engine SELECT CAST(SERVERPROPERTY('ProductVersion') AS VARCHAR)
2025-03-28 10:07:58,279 - INFO - SELECT CAST(SERVERPROPERTY('ProductVersion') AS VARCHAR)
2025-03-28 10:07:58,279 INFO sqlalchemy.engine.Engine [raw sql] ()
2025-03-28 10:07:58,279 - INFO - [raw sql] ()
2025-03-28 10:07:58,416 INFO sqlalchemy.engine.Engine SELECT schema_name()
2025-03-28 10:07:58,416 - INFO - SELECT schema_name()
2025-03-28 10:07:58,416 INFO sqlalchemy.engine.Engine [generated in 0.00073s] ()
2025-03-28 10:07:58,416 - INFO - [generated in 0.00073s] ()
2025-03-28 10:07:58,822 INFO sqlalchemy.engine.Engine SELECT CAST('test max support' AS NVARCHAR(max))
2025-03-28 10:07:58,822 - INFO - SELECT CAST('test max support' AS NVARCHAR(max))
2025-03-28 10:07:58,823 INFO sqlalchemy.engine.Engine [generated in 0.00052s] ()
2025-03-28 10:07:58,823 - INFO - [generated in 0.00052s] ()
2025-03-28 10:07:58,958 INFO sqlalchemy.engine.Engine SELECT 1 FROM fn_listextendedproperty(default, default, default, default, default, default, default)
2025-03-28 10:07:58,958 - INFO - SELECT 1 FROM fn_listextendedproperty(default, default, default, default, default, default, default)
2025-03-28 10:07:58,959 INFO sqlalchemy.engine.Engine [generated in 0.00053s] ()
2025-03-28 10:07:58,959 - INFO - [generated in 0.00053s] ()
2025-03-28 10:08:35,716 INFO sqlalchemy.engine.Engine BEGIN (implicit; DBAPI should not BEGIN due to autocommit mode)
2025-03-28 10:08:35,716 - INFO - BEGIN (implicit; DBAPI should not BEGIN due to autocommit mode)
2025-03-28 10:08:35,716 INFO sqlalchemy.engine.Engine DROP TABLE IF EXISTS mygrc.mygrc_3ps_inherent_risk
2025-03-28 10:08:35,716 - INFO - DROP TABLE IF EXISTS mygrc.mygrc_3ps_inherent_risk
2025-03-28 10:08:35,717 INFO sqlalchemy.engine.Engine [generated in 0.00132s] ()
2025-03-28 10:08:35,717 - INFO - [generated in 0.00132s] ()
2025-03-28 10:08:43,022 INFO sqlalchemy.engine.Engine ROLLBACK using DBAPI connection.rollback(), DBAPI should ignore due to autocommit mode
2025-03-28 10:08:43,022 - INFO - ROLLBACK using DBAPI connection.rollback(), DBAPI should ignore due to autocommit mode
</code></pre>
|
<python><sql-server><sqlalchemy>
|
2025-03-27 17:04:15
| 1
| 463
|
trevrobwhite
|
79,539,458
| 3,120,501
|
How to prevent 3D Matplotlib axis extension past set_[x/y/z]lim()?
|
<p>I have an issue where the 3D axis limits set in Matplotlib are not respected in the plot, as shown by the minimal working example below. It can be seen that the extent of the plot goes beyond zero, where the 2D plane is plotted, despite x- and y- axis limits of zero being set in the code. This seems like a trivial issue but it is causing considerable problems for my plot as I'd like to project datapoints onto the sides of the plot and the parallax is causing values to be interpreted incorrectly.</p>
<p>Is there any way to prevent Matplotlib from drawing the axes past the set limits? I've seen similar issues, but these seem to be related to the clipping of data past the axes limits, rather than the plotting of the axes themselves.</p>
<pre><code>import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle
from mpl_toolkits.mplot3d import art3d
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('z')
b1 = Rectangle((0, 0), 10, 10, zorder=0, fc='lightgrey', ec='darkgrey')
ax.add_patch(b1)
art3d.patch_2d_to_3d(b1, z=0, zdir='x')
b2 = Rectangle((0, 0), 10, 10, zorder=0, fc='lightgrey', ec='darkgrey')
ax.add_patch(b2)
art3d.patch_2d_to_3d(b2, z=0, zdir='y')
# Scale plot
ax.autoscale(False)
ax.set_xlim(0, 10)
ax.set_ylim(0, 10)
ax.set_zlim(0, 10)
ax.set_aspect('equal')
</code></pre>
<p><a href="https://i.sstatic.net/2HRjurM6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2HRjurM6.png" alt="2D boxes" /></a></p>
|
<python><matplotlib><matplotlib-3d>
|
2025-03-27 16:14:15
| 0
| 528
|
LordCat
|
79,539,387
| 7,483,211
|
browser-use agent stuck in loop at Step 1
|
<p>I'm using the quickstart code for the <code>browser-use</code> library in Python. However, the agent gets stuck in an infinite loop, repeatedly logging <code>INFO [agent] 📍 Step 1</code> without progressing or providing clear errors.</p>
<p>I've added <code>BROWSER_USE_LOGGING_LEVEL=debug</code> but the extra log lines don't seem to provide any extra hints.</p>
|
<python><browser-use>
|
2025-03-27 15:51:04
| 1
| 10,272
|
Cornelius Roemer
|
79,539,046
| 785,494
|
How to use Jupyter notebooks or IPython for development without polluting the venv?
|
<p>Best practice for Python is to use <code>venv</code> to isolate to the imports you really need. I use <code>python -m venv</code>.</p>
<p>For development, it's very convenient to use IPython and notebooks for exploring code interactively. However, these need to be installed into the venv to be used. That defeats the purpose of the venv.</p>
<p>I can make two venvs, one for usage and one <code>venv-ipy</code> for interactive exploration, but that's hard to manage, and need to be synchornized.</p>
<p>Is there a solution or practice?</p>
|
<python><jupyter-notebook><ipython><development-environment><python-venv>
|
2025-03-27 13:36:54
| 5
| 9,357
|
SRobertJames
|
79,539,001
| 8,622,053
|
How can I use a local Python package during development while keeping a Git dependency in pyproject.toml?
|
<p>I'm developing two Python projects:</p>
<ul>
<li>xxx: A Python library I maintain, hosted on GitHub.</li>
<li>yyy: An open-source project that depends on xxx.</li>
</ul>
<p>In yyy's pyproject.toml, I declare the dependency as:</p>
<pre><code>[tool.uv.sources]
xxx = { git = "git+https://github.com/myuser/xxx.git" }
</code></pre>
<p>This works well for others who clone yyy — they get the correct version of xxx from GitHub.
However during development, I want to use a local copy of xxx (e.g., ~/xxx) so I can test changes in yyy without pushing to GitHub each time.</p>
<p>I'd like a solution that:</p>
<ul>
<li>Lets me use a local path for xxx during development.</li>
<li>Doesn’t require me to change pyproject.toml before every commit.</li>
<li>Keeps yyy portable and installable via GitHub for others.</li>
</ul>
<p>Is there a clean way to override the Git-based dependency locally (perhaps with a .gitignored file or config), while keeping pyproject.toml untouched?</p>
<p>I'm using uv as my Python package manager.</p>
<p>Any best practices or workflows for this setup?</p>
|
<python><dependency-management><pyproject.toml><uv>
|
2025-03-27 13:19:51
| 2
| 318
|
Daniel Chin
|
79,538,993
| 3,973,269
|
Azure Function app (python) deployment from github actions gives module not found
|
<p>I have the following github workflow which builds and deploys according to expectation.
The only thing is that when I run a function, I get the following error message:</p>
<pre><code>Result: Failure Exception: ModuleNotFoundError: No module named 'azure.storage'. Cannot find module. Please check the requirements.txt file for the missing module. For more info, please refer the troubleshooting guide: https://aka.ms/functions-modulenotfound.
</code></pre>
<p>The workflow file that I have:</p>
<pre><code># Docs for the Azure Web Apps Deploy action: https://github.com/azure/functions-action
# More GitHub Actions for Azure: https://github.com/Azure/actions
# More info on Python, GitHub Actions, and Azure Functions: https://aka.ms/python-webapps-actions
name: Build and deploy Python project to Azure Function App - func-myinstance-dev-001
on:
push:
branches:
- production
workflow_dispatch:
env:
AZURE_FUNCTIONAPP_PACKAGE_PATH: '.' # set this to the path to your web app project, defaults to the repository root
PYTHON_VERSION: '3.11' # set this to the python version to use (supports 3.6, 3.7, 3.8)
jobs:
build:
runs-on: ubuntu-latest
permissions:
contents: read #This is required for actions/checkout
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Setup Python version
uses: actions/setup-python@v5
with:
python-version: ${{ env.PYTHON_VERSION }}
- name: Create and start virtual environment
run: |
python -m venv venv
source venv/bin/activate
- name: Install dependencies
run: pip install -r requirements.txt
# Optional: Add step to run tests here
- name: Zip artifact for deployment
run: zip release.zip ./* -r
- name: Upload artifact for deployment job
uses: actions/upload-artifact@v4
with:
name: python-app
path: |
release.zip
!venv/
deploy:
runs-on: ubuntu-latest
needs: build
steps:
- name: Download artifact from build job
uses: actions/download-artifact@v4
with:
name: python-app
- name: Unzip artifact for deployment
run: unzip release.zip
- name: 'Deploy to Azure Functions'
uses: Azure/functions-action@v1
id: deploy-to-function
with:
app-name: 'func-myinstance-dev-001'
slot-name: 'Production'
package: ${{ env.AZURE_FUNCTIONAPP_PACKAGE_PATH }}
publish-profile: ${{ secrets.AZUREAPPSERVICE_PUBLISHPROFILE_xxx }}
</code></pre>
<p>I also tried the vs code extension and when I hit deploy to the same function app. The error is not there any longer.</p>
<p>I hit deploy in vs code from the same folder as where I push my code from.
In both cases all the same functions appear in the function app.</p>
<p>Yet somehow it seems like in the github actions case, somehow the requirements.txt is not installed?</p>
<p>my requirements.txt:</p>
<pre><code>azure-functions
azure-functions-durable
azure-storage-blob
azure-storage-queue
azure-cosmos
mysql-connector-python
pandas
sendgrid
azure-eventhub
azure-communication-email
cffi
incoming
certifi
</code></pre>
<p>Python version: 3.11</p>
|
<python><azure><azure-functions><github-actions>
|
2025-03-27 13:18:02
| 1
| 569
|
Mart
|
79,538,804
| 3,062,183
|
Using explicit mocks for Typer CLI tests
|
<p>I want to write unit-tests for a Typer based CLI, that will run the CLI with different inputs and validate correct argument parsing and input validation.
The point isn't to test Typer itself, but rather to test my code's integration with Typer.</p>
<p>In addition, my CLI has some logic which I would like to mock during these tests (for example, HTTP API calls). Obviously, I want to make sure that tests that check input validation don't run real API calls.</p>
<p>Here is a simplified version of how my code might look like, where <code>DataFetcher</code> is some class defined elsewhere in the project:</p>
<pre class="lang-py prettyprint-override"><code>import typer
app = typer.Typer()
@app.command()
def fetch_data(user_id: str):
with DataFetcher() as data_fetcher:
result = data_fetcher.fetch_data(user_id)
typer.echo(result)
if __name__ == "__main__":
app()
</code></pre>
<p>In this case, are there ways to explicitly mock DataFetcher (such as with the built-in <code>mock</code> library?)
Ideally, I'd want to use explicit mocking rather than patching/monkeypatching.</p>
|
<python><mocking><typer>
|
2025-03-27 12:11:06
| 0
| 1,142
|
Dean Gurvitz
|
79,538,656
| 25,413,271
|
Python Asyncio Condition: Why DeadLock appears?
|
<p>I learn python asyncio and now struggling with Conditions. I wrote a simple code:</p>
<pre><code>import asyncio
async def monitor(condition: asyncio.Condition):
current_task_name = asyncio.current_task().get_name()
print(f'Current task {current_task_name} started')
async with condition:
await condition.wait()
print(f'Condition lock released for task {current_task_name}, start doing smthg')
await asyncio.sleep(2)
print(f'Current task {current_task_name} finished')
async def setter(condition: asyncio.Condition):
print(f'Setter starts')
async with condition:
await asyncio.sleep(1)
condition.notify_all() # Пробуждаем все задачи
print('Setter finished')
print('Setter unlocked')
async def main():
cond = asyncio.Condition()
async with asyncio.TaskGroup() as tg:
tg.create_task(setter(cond))
[tg.create_task(monitor(cond)) for _ in range(3)]
# tg.create_task(setter(cond))
asyncio.run(main())
</code></pre>
<p>But the code catches a DeadLock after <code>Setter unlocked</code> message. I cant realise why.
But if I swap task creation order in <code>main</code> and create monitor tasks prior to creation setter task- all good.</p>
<p>please, I am a very new in asyncio, so please, in simple words...</p>
|
<python><python-asyncio>
|
2025-03-27 11:09:01
| 1
| 439
|
IzaeDA
|
79,538,611
| 3,225,420
|
How can I concatenate columns without quotation marks in the results while avoiding SQL injection attacks using psycopg?
|
<p>I want to concatenate with results like <code>concat_columns</code> here:</p>
<p><a href="https://i.sstatic.net/4goxNALj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4goxNALj.png" alt="Results formatted properly" /></a></p>
<p>In the example code below the results have quotation marks: <code>Houston-Alice-"salary"</code> I do not want this, I want the results like this: <code>Houston-Alice-salary</code></p>
<p>I've tried variations on the <code>"'{}'".format(data_col)</code> to no avail. <code>data_col</code>, <code>"{}".format(data_col)</code>, and <code>'{}'.format(data_col)</code> all return <code>Houston-Alice-77321</code>.</p>
<p>The column names are user supplied so I need to use methods that prevent SQL injection attacks.</p>
<p>What should I try next?</p>
<p>Here is my example code:</p>
<pre><code>import psycopg
import pandas as pd
def so_question(columns, group_by_columns, table):
"""
example for SO
"""
table = psycopg.sql.Composable.as_string(psycopg.sql.Identifier(table))
columns = [psycopg.sql.Composable.as_string(psycopg.sql.Identifier(col)) for col in columns]
group_by_columns = [psycopg.sql.Composable.as_string(psycopg.sql.Identifier(col)) for col in group_by_columns]
for data_col in columns:
group_by = ''
# check if there are grouping columns
if len(group_by_columns) > 0:
concat_columns = " ,'-',".join(group_by_columns) + " ,'-'," + "'{}'".format(data_col)
group_by_clause_group_by = " ,".join(group_by_columns) + " ," + data_col
# CTE generation:
sql_statement = f"""
WITH sql_cte as (
SELECT
CONCAT({concat_columns}) as concat_columns
,{data_col} as value
from {table}, unnest(array[{data_col}]) AS my_col
group by
{group_by_clause_group_by}
)
SELECT * FROM sql_cte
"""
return sql_statement
def execute_sql_get_dataframe(sql):
"""
Creates (and closes) db connection, gets requested sql data , returns as Pandas DataFrame.
Args:
sql (string): sql query to execute.
Returns:
Pandas DataFrame of sql query results.
"""
try:
# print(sql_statement_to_execute)
# create db connection, get connection and cursor object
db_connection_cursor = db_connection_cursor_object()
# execute query
db_connection_cursor['cursor'].execute(sql)
# get results into tuples
tuples_list = db_connection_cursor['cursor'].fetchall()
# get column names: https://www.geeksforgeeks.org/get-column-names-from-postgresql-table-using-psycopg2/
column_names = [desc[0] for desc in db_connection_cursor['cursor'].description]
db_connection_cursor['connection'].close()
# create df from results
df_from_sql_query = pd.DataFrame(tuples_list, columns=column_names)
return df_from_sql_query
except Exception as exc:
log.exception(f'sql_statement_to_execute:\n {sql}', exc_info=True)
log.exception(msg=f'Exception: {exc}', exc_info=True)
data_columns = ['salary']
group_by_columns_in_order_of_grouping = ['city', 'name']
_sql = so_question(columns=data_columns,
group_by_columns=group_by_columns_in_order_of_grouping,
table='random_data')
dataframe = execute_sql_get_dataframe(sql=_sql)
print(dataframe)
</code></pre>
<p>Here's the code for data generation:</p>
<pre class="lang-py prettyprint-override"><code>import psycopg
# Connect to the database
conn = psycopg.connect("dbname=mydatabase user=myuser password=mypassword")
cur = conn.cursor()
# Create the sample table
cur.execute("""
CREATE TABLE sample_table (
city VARCHAR(50),
name VARCHAR(50),
salary INTEGER
)
""")
# Insert sample data into the table
cur.execute("""
INSERT INTO sample_table (city, name, salary) VALUES
('New York', 'Alice', 70000),
('Los Angeles', 'Bob', 80000),
('Chicago', 'Charlie', 75000),
('Houston', 'Eve', 71758),
('Phoenix', 'Dave', 68000)
""")
# Commit the transaction
conn.commit()
# Close the cursor and connection
cur.close()
conn.close()
print("Sample table created and data inserted successfully.")
</code></pre>
|
<python><python-3.x><postgresql><psycopg3>
|
2025-03-27 10:52:48
| 2
| 1,689
|
Python_Learner
|
79,538,469
| 8,621,823
|
Is with concurrent.futures.Executor blocking?
|
<p>Why</p>
<ul>
<li>when using separate <code>with</code>, the 2nd executor is blocked until all tasks from 1st executor is done</li>
<li>when using a single compound <code>with</code>, the 2nd executor can proceed while the 1st executor is still working</li>
</ul>
<p>This is confusing because i thought <code>executor.submit</code> returns a future and does not block.
It seems like the context manager is blocking.</p>
<p>Is this true, and are there official references mentioning this behaviour of individual vs compound context managers?</p>
<p><strong>Separate context managers</strong></p>
<pre><code>import time
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor
def task(name, delay):
print(f"{time.time():.3f} {name} started")
time.sleep(delay)
print(f"{time.time():.3f} {name} finished")
if __name__ == "__main__":
print(f"{time.time():.3f} Main started")
with ThreadPoolExecutor(max_workers=1) as thread_pool:
thread_pool.submit(task, "Thread Task", 0.2)
with ProcessPoolExecutor(max_workers=1) as process_pool:
process_pool.submit(task, "Process Task", 0.1)
print(f"{time.time():.3f} Main ended")
</code></pre>
<p>Output:</p>
<pre><code>1743068624.365 Main started
1743068624.365 Thread Task started
1743068624.566 Thread Task finished
1743068624.571 Process Task started
1743068624.671 Process Task finished
1743068624.673 Main ended
</code></pre>
<p>Notice Thread Task must finish before Process Task can start.
I have ran the code numerous times and still don't see below patterns.</p>
<p><strong>Single context manager</strong></p>
<pre><code>import time
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor
def task(name, delay):
print(f"{time.time():.3f} {name} started")
time.sleep(delay)
print(f"{time.time():.3f} {name} finished")
if __name__ == "__main__":
print(f"{time.time():.3f} Main started")
with ThreadPoolExecutor(max_workers=1) as thread_pool, ProcessPoolExecutor(max_workers=1) as process_pool:
thread_pool.submit(task, "Thread Task", 0.2)
process_pool.submit(task, "Process Task", 0.1)
print(f"{time.time():.3f} Main ended")
</code></pre>
<p>Output</p>
<pre><code>1743068722.440 Main started
1743068722.441 Thread Task started
1743068722.443 Process Task started
1743068722.544 Process Task finished
1743068722.641 Thread Task finished
1743068722.641 Main ended
</code></pre>
<p>With a compound context manager, Process Task can start before Thread Task finishes no matter their delay ratios.</p>
<p>Delay of Process Task is designed to be shorter than delay of Thread Task to additionally show that Process Task can finish before Thread Task finishes with a compound context manager</p>
<p>Please point out if this interpretation is erroneous.</p>
|
<python><multithreading><multiprocessing><concurrent.futures><contextmanager>
|
2025-03-27 09:52:35
| 1
| 517
|
Han Qi
|
79,538,354
| 2,176,819
|
How to ignore forced obsolete syntax under Ruff v0.11
|
<p>I'm maintaining a code that is used under Python versions 3.12 and 3.13. Ruff from version 0.11 started to complain:</p>
<pre><code>SyntaxError: Cannot reuse outer quote character in f-strings on Python 3.9 (syntax was added in Python 3.12)
</code></pre>
<p>Not a problem, as the code was not intended to be compatible with earlier versions. But how can ignore these warnings from my <code>pyproject.toml</code>?</p>
<p>Strangely Ruff's error message doesn't refer to the rule number, and I haven't found the rule number by reading trough the rules on Ruff's manual either. AI's suggested me to try E999 -- didn't work.</p>
<p>I doubt that Ruff devs would force me to use obsolete syntax, but without any rule number to ignore I have to choose from the following:</p>
<ul>
<li>downgrade my code to an older syntax</li>
<li>use an outdated version of Ruff</li>
</ul>
|
<python><ruff>
|
2025-03-27 09:13:50
| 0
| 403
|
gridranger
|
79,538,343
| 5,168,534
|
Remove Duplicate Dictionaries from a list based on certain numeric value
|
<p>I have a list of dictionary values which has duplicate name values. I would like to remove only the one whose value is less for the same name.
For example, I have an input like below.</p>
<pre><code>[{'name': 'x', 'value': 0.6479667110413355},
{'name': 'x', 'value': 1.0},
{'name': 'y', 'value': 0.9413355},
{'name': 'y', 'value': 0.9}]
</code></pre>
<p>Hence, compare all the values of x and keep only that dictionary that has the highest values of x. Same goes for y.
What I would like to have output is</p>
<pre><code>[{'name': 'x', 'value': 1.0},
{'name': 'y', 'value': 0.9413355}]
</code></pre>
<p>I was trying on the lines like this:</p>
<pre><code>input_list = [{'name': 'x', 'value': 0.6479667110413355}, {'name': 'x', 'value': 1.0},{'name': 'y', 'value': 0.9413355}, {'name': 'y', 'value': 0.9}]
results = defaultdict(list)
for element in input_list:
for key, value in element.items():
results[key].append(value)
for item_key, item_value in results.items():
results[item_key] = max(item_value)
print(results)
</code></pre>
<p>Output</p>
<pre><code>defaultdict(<class 'list'>, {'name': 'y', 'value': 1.0})
</code></pre>
<p>I am missing something here. Getting only value of y and not x
Can someone help on this.</p>
|
<python><python-3.x><dictionary>
|
2025-03-27 09:09:59
| 2
| 311
|
anshuk_pal
|
79,538,310
| 8,472,781
|
Nondeterministic behaviour of openpyxl
|
<p>I have a Python script, that basically looks like this:</p>
<pre><code>import mypackage
# this function generates always the same pandas.DataFrame
df = mypackage.create_the_dataframe()
# write the DataFrame to xlsx and csv
df.to_excel("the_dataframe_as.xlsx", index=False, engine="openpyxl")
df.to_csv("the_dataframe_as.csv", index=False)
</code></pre>
<p>I was trying to write a test for the <code>create_the_dataframe</code> function. So I checked the hash of the resulting xlsx and csv files and found that for two different runs of the script, the hash and file size of the resulting xlsx file changes. The hash for the csv remains the same.</p>
<p>Although I can live with this, I am very curious to understand why this is the case?</p>
|
<python><python-3.x><pandas><openpyxl>
|
2025-03-27 08:56:51
| 1
| 586
|
d4tm4x
|
79,538,217
| 1,256,529
|
Typing recursive type annotation
|
<p>I have a function which takes a value, a type annotation, and validates that the value is of the type.</p>
<p>It only takes a certain subset of type annotations: <code>int</code>, <code>float</code>, <code>str</code>, <code>bool</code>, a <code>list</code> of any of these, or a (potentially nested) <code>list</code> of any of these.</p>
<pre><code>def validate(value: Any, type_: ValidatableAnnotation):
...
</code></pre>
<p>I have no problem implementing the function, however, I have a problem typing it.</p>
<p>I'm trying to define the type <code>ValidatableAnnotation</code> to allow any level of recursion in the list type e.g. <code>list[int]</code>, <code>list[list[int]]</code> etc. ad infinitum.</p>
<p>My best attempt:</p>
<pre><code>ValidatableAnnotation = (
Type[str] | Type[int] | Type[float] | Type[bool] | Type[list["ValidatableAnnotation"]]
)
</code></pre>
<p>However, while it works for the atomic values, it doesn't work for lists:</p>
<pre><code>validate([1], list[int]) # Incompatible types in assignment (expression has type "type[list[int]]", variable has type "ValidatableAnnotation")
</code></pre>
<p>Is the Python (3.10) type system capable of doing what I want here, and if so how do I do it?</p>
<p>Edit: similarity has been pointed out to <a href="https://stackoverflow.com/questions/63420857/whats-the-correct-type-hint-for-a-type-hint">What's the correct type hint for a type hint?</a>. However, it is possible that an answer is available at a lower level of generality than at a higher level. I'm not asking for a general answer to "how to define a type hint for a type hint", but for a specific answer to "how to recursively define a type hint for a <code>list</code> type hint. This specific scenario may have a solution even if the more general question doesn't.</p>
|
<python><python-typing>
|
2025-03-27 08:12:16
| 1
| 3,817
|
samfrances
|
79,538,081
| 3,050,164
|
Second client connect() on unix domain socket succeeds even when listen() set to zero and server only accepts a single connection
|
<p>I am writing a Python 3-based server/client program using a Unix-domain socket. I am trying to build the server such that only a single client can connect to the server and subsequent connect() calls should return connection refused error until the connected client disconnects.</p>
<p>The client behaves exactly as expected when netcat (<code>nc</code>) is used as the server. But for my Python server code even accepts only a single <code>accept()</code> call on a socket that has been configured with <code>listen(0)</code>. Multiple clients successfully connect without error.</p>
<p>Behaviour with netcat (<code>nc</code>):</p>
<pre class="lang-bash prettyprint-override"><code>$ nc -lU /tmp/uds0 &
[1] 387992
$
$ nc -U /tmp/uds0 &
[2] 388009
$
$ nc -U /tmp/uds0
Ncat: Connection refused.
</code></pre>
<p>Unexpected behaviour with python code:</p>
<pre><code>$ cat uds-server.py
#! /usr/bin/python3
import socket
import os
import sys
SOCKET_PATH = "/tmp/uds0"
# Create the socket
server_socket = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
# Bind the socket to the file path
server_socket.bind(SOCKET_PATH)
# Listen - handle number of queued connections
server_socket.listen(0)
# accept a single client and proceed
(client_socket, client_address) = server_socket.accept()
print("New client connected, waiting for messages")
msg = client_socket.recv(1023)
print(msg.decode())
</code></pre>
<pre class="lang-bash prettyprint-override"><code>$
$
$ ./uds-server.py &
[5] 388954
$
$ nc -U /tmp/uds0 &
[6] 389017
$ New client connected, waiting for messages
$
$ nc -U /tmp/uds0
</code></pre>
<p>How can I make the Python server behave the same way as netcat?</p>
|
<python><linux><unix-socket>
|
2025-03-27 07:10:30
| 0
| 590
|
Anirban
|
79,538,005
| 20,240,835
|
snakemake public remote s3 without secret key
|
<p>I am using snakemake with rule need visit public read-only s3 storage</p>
<pre><code># test
$ aws s3 ls 1000genomes/phase1/phase1.exome.alignment.index.bas.gz --no-sign-request
2012-05-01 03:58:41 423691 phase1.exome.alignment.index.bas.gz
$ aws s3 cp s3://1000genomes/phase1/phase1.exome.alignment.index.bas.gz ./ --no-sign-request
download: s3://1000genomes/phase1/phase1.exome.alignment.index.bas.gz to ./phase1.exome.alignment.index.bas.gz
$ ls phase1.exome.alignment.index.bas.gz
phase1.exome.alignment.index.bas.gz
# public visit ok
</code></pre>
<pre><code># snakemake
storage:
provider="s3",
max_requests_per_second=10
rule download_data:
input:
vcf=lambda wildcards: storage.s3("s3://1000genomes/phase1/phase1.exome.alignment.index.bas.gz")
output:
'output/data/phase1.exome.alignment.index.bas.gz'
shell:
# just a example here, aws s3 cp can instead, but we have more complex process
"cp {input.vcf} {output}"
</code></pre>
<pre><code>snakemake -s /path/to/snakemake.smk
The following required arguments are missing for plugin s3: --storage-s3-access-key (or environment variable SNAKEMAKE_STORAGE_S3_ACCESS_KEY), --storage-s3-secret-key (or environment variable SNAKEMAKE_STORAGE_S3_SECRET_KEY).
</code></pre>
<p>I treid to set <code>SNAKEMAKE_STORAGE_S3_ACCESS_KEY</code> and <code>SNAKEMAKE_STORAGE_S3_SECRET_KEY</code> environment to ''.</p>
<pre><code>export SNAKEMAKE_STORAGE_S3_ACCESS_KEY=''
export SNAKEMAKE_STORAGE_S3_SECRET_KEY=''
</code></pre>
<p>but its seems not work.</p>
<pre><code>snakemake -s /path/to/snakemake.smk
Failed to check existence of s3://1000genomes/phase1/phase1.exome.alignment.index.bas.gz
RuntimeError: 34 (AWS_ERROR_INVALID_ARGUMENT): An invalid argument was passed to a function.
</code></pre>
|
<python><amazon-s3><storage><snakemake>
|
2025-03-27 06:28:53
| 0
| 689
|
zhang
|
79,537,989
| 6,862,601
|
Do we need to have a separate getter while using @property?
|
<p>I have these two methods inside a class:</p>
<pre><code>@property
def config_dict(self):
return self._config_dict
@config_dict.getter
def config_dict(self):
return self._config_dict
</code></pre>
<p>Do we really need the getter? Isn't it superfluous?</p>
|
<python>
|
2025-03-27 06:18:30
| 1
| 43,763
|
codeforester
|
79,537,934
| 1,397,843
|
Dot notation access in pd.Series() but with priority on elements in the index
|
<p>My aim is to implement a class that inherits from <code>pd.Series</code> class and acts as a container object. This object will hold varied objects as its elements in the form of</p>
<pre><code>container = Container(
a = 12,
b = [12, 10, 20],
c = 'string',
name='this value',
)
</code></pre>
<p>I will be accessing these elements via a dot notation:</p>
<pre><code>print(container.a) # Output: 12
print(container.b) # Output: [12, 10, 20]
print(container.c) # Output: string
print(container.name) # Output: None
</code></pre>
<p>I have tried the following implementation:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
class Container(pd.Series):
def __init__(self, **kwargs):
super().__init__(data=kwargs)
</code></pre>
<p>This works to a large extent.</p>
<p>However, there would be special cases. If <code>index</code> of one of the elements in the container is also a property in <code>pd.Series()</code> then the property would be returned. In the case of above example, <code>container.name</code> would return <code>None</code> value as it returns the pd.Series().name property of the series. I want container.name to return <code>'this value'</code>.</p>
<p>So, maybe I need to overerite the <code>__getattribute__</code> or <code>__getattr__</code> to make this work.</p>
<p>I have tried the following:</p>
<pre><code>def __getattr__(self, item):
if item in self:
return self[item]
else:
raise AttributeError(f"'{self.__class__.__name__}' object has no attribute '{item}'")
def __getattribute__(self, item):
try:
# Try to get an item using dot notation when it's available in the series itself
value = super().__getattribute__('__getitem__')(item)
if item in self:
return value
except (KeyError, AttributeError):
pass
# Fallback to the default __getattribute__
return super().__getattribute__(item)
</code></pre>
<p>However, this would always return <code>RecursionError: maximum recursion depth exceeded</code>. I have tried different ways.</p>
<p>Note that I like to inherit from <code>pd.Series()</code> to have access to all other functionalities of the series.</p>
|
<python><pandas><overriding>
|
2025-03-27 05:54:53
| 1
| 386
|
Amin.A
|
79,537,816
| 200,304
|
Editing two dependent Python projects simultaneously using Visual Studio Code
|
<p>I have a project <code>mylib</code> (a Python library) and a project <code>myapp</code> (an application that uses <code>mylib</code>). The two code bases live in unrelated directories. I'm trying to find a setup by which I can edit code in both simultaneously with support from Visual Studio Code.</p>
<p>My plan had been:</p>
<ul>
<li>create a single Visual Studio Code workspace</li>
<li>add the root directories of both projects to that workspace</li>
<li>create a venv somewhere else</li>
<li><code>venv/bin/pip install -e</code> both projects into that venv</li>
<li>select the python interpreter from the venv on the workspace level in Visual Studio Code.</li>
</ul>
<p>However, Visual Studio Code shows the wiggly lines for symbols from the library that are used in the application, saying the imports cannot be found. When I run <code>venv/bin/python</code>, it can import the library symbols just fine.</p>
<p>Two questions:</p>
<ol>
<li>Why can't Visual Studio Code import the packages that the venv's python can?</li>
<li>More importantly: what's a good setup for editing two projects simultaneously using Visual Studio Code, I am not stuck on what I tried ...</li>
</ol>
|
<python><visual-studio-code>
|
2025-03-27 04:16:32
| 1
| 3,245
|
Johannes Ernst
|
79,537,654
| 875,806
|
Python typing for method that adds a property to some other type
|
<p>I'm trying to define types to indicate that a method returns the same type with additional property/properties.</p>
<pre class="lang-py prettyprint-override"><code>class BaseModel(Protocol):
@classmethod
def model_validate(cls, db_entry) -> Self: ...
T = TypeVar('T', bound=BaseModel)
class _HasDatabaseId(Protocol):
id: str
class FromDatabase(_HasDatabaseId, Generic[T]):
pass
def from_database(model: Type[T], db_entry) -> FromDatabase[T]:
obj = model.model_validate(db_entry)
setattr(obj, 'id', db_entry.id)
return obj
# example usage
result: FromDatabase[SomeModel] = from_database(SomeModel, db_entry)
print(result.name) # assume `name` exists on `SomeModel`
</code></pre>
<p>mypy flags with</p>
<pre><code>print(result.name) error: "FromDatabase[SomeModel]" has no attribute "name" [attr-defined]
return obj error: Incompatible return value type (got "T", expected "FromDatabase[T]") [return-value]
</code></pre>
<p>If I change <code>FromDatabase</code></p>
<pre><code>class FromDatabase(_HasDatabaseId, T):
pass
</code></pre>
<p>mypy reports these errors:</p>
<pre><code>class FromDatabase(_HasDatabaseId, T): error: "T" is a type variable and only valid in type context [misc]
class FromDatabase(_HasDatabaseId, T): error: Invalid base class "T" [misc]
return obj error: Incompatible return value type (got "T", expected "FromDatabase[T]") [return-value]
</code></pre>
<p>I'm fine with relying on duck-typing compatibility (instead of returning a literal <code>FromDB[SomeModel]</code>), and suppressing the "incompatible return value" error on <code>return obj</code> (if I substitute specific <code>Model</code> types instead of the generic, this works). I'm not sure how to resolve the other error(s).</p>
<p>Edit: Both of the marked duplicate and the question mentioned by @sterliakov are about using class decorators to add to a class, where as this is about how to annotate types of a function to indicate the return value is the type of the argument + some field(s). Even if the answer has the same underlying explanation, I would argue it is a different enough <em>question</em> to not be considered a duplicate.</p>
|
<python><python-typing><mypy>
|
2025-03-27 01:49:00
| 0
| 4,388
|
CoatedMoose
|
79,537,648
| 2,063,900
|
How to get the session data in POS screen in odoo 18
|
<p>I make some user option and I want to read it from POS screen in version 18</p>
<p>this is my code</p>
<pre><code>class PosSession(models.Model):
_inherit = 'pos.session'
def _load_pos_data(self, data):
res = super(PosSession, self)._load_pos_data(data)
res['hide_product_information'] = self.env.user.hide_product_information
return res
</code></pre>
<p>and I want to read it from JavaScript level from the product screen
with this code</p>
<pre><code>patch(ProductScreen.prototype, {
async onProductInfoClick(product) {
// here
}
})
</code></pre>
<p>Please can any one help me in this,</p>
|
<javascript><python><odoo-18><odoo-owl>
|
2025-03-27 01:44:10
| 1
| 361
|
ahmed mohamady
|
79,537,560
| 280,002
|
Flasgger schema definition doesn't show in swagger-ui when using openapi 3.0.2
|
<p>I have an endpoint defined as such:</p>
<pre class="lang-py prettyprint-override"><code>def process_document():
"""
Process document
---
definitions:
JobSubmissionResponse:
type: object
properties:
job_id:
type: string
description: Job ID
responses:
500:
description: Fatal Error Occurred
200:
description: Job submitted successfully
schema:
type: object
properties:
job_id:
type: string
description: Job ID
"""
job_id = uuid.uuid4()
return jsonify({
"job_id": job_id,
})
</code></pre>
<p>When I use Open API 3.0.2, as such:</p>
<pre class="lang-py prettyprint-override"><code>
app.config['SWAGGER'] = {
'title': 'MyApp',
'openapi': '3.0.2'
}
</code></pre>
<p>The Swagger-UI does not display the schema within the return type:</p>
<p><a href="https://i.sstatic.net/bCRW1qUr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bCRW1qUr.png" alt="no-schema-screenshot" /></a></p>
<p>If I do not specify OpenAPI 3.0.2, it works fine:</p>
<p><a href="https://i.sstatic.net/9QOc9GiK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9QOc9GiK.png" alt="schema-screenshot" /></a></p>
<p>Was that method of referencing a schema deprecated in OpenAPI 3.0.2?</p>
<p>thank you</p>
|
<python><flask><openapi><swagger-ui><flasgger>
|
2025-03-27 00:21:24
| 0
| 1,301
|
alessandro ferrucci
|
79,537,461
| 16,563,251
|
Test that unittest.Mock was called with some specified and some unspecified arguments
|
<p>We can check if a <a href="https://docs.python.org/3/library/unittest.mock.html#the-mock-class" rel="nofollow noreferrer"><code>unittest.mock.Mock</code></a> <a href="https://docs.python.org/3/library/unittest.mock.html#unittest.mock.Mock.assert_any_call" rel="nofollow noreferrer">has any call</a> with some specified arguments.
I now want to test that some of the arguments are correct, while I do not know about the other ones.
Is there some intended way to do so?</p>
<p>I could manually iterate over <a href="https://docs.python.org/3/library/unittest.mock.html#unittest.mock.Mock.mock_calls" rel="nofollow noreferrer"><code>Mock.mock_calls</code></a>, but is there a more compact way?</p>
<pre class="lang-py prettyprint-override"><code>import random
from unittest import mock
def mymethod(a, handler):
handler(a, random.random())
def test_mymethod():
handler = mock.Mock()
mymethod(5, handler)
handler.assert_any_call(5, [anything]) # How do I achieve this?
</code></pre>
|
<python><python-unittest><python-unittest.mock>
|
2025-03-26 23:02:53
| 1
| 573
|
502E532E
|
79,537,428
| 219,153
|
Can I multiply these Numpy arrays without creating an intermediary array?
|
<p>This script:</p>
<pre><code>import numpy as np
a = np.linspace(-2.5, 2.5, 6, endpoint=True)
b = np.vstack((a, a)).T
c = np.array([2, 1])
print(b*c)
</code></pre>
<p>produces:</p>
<pre><code>[[-5. -2.5]
[-3. -1.5]
[-1. -0.5]
[ 1. 0.5]
[ 3. 1.5]
[ 5. 2.5]]
</code></pre>
<p>which is my desired output. Can it be produced directly with from <code>a</code> and <code>c</code>? Trying <code>a*c</code> and <code>c*a</code> fails due to <code>ValueError: operands could not be broadcast together</code> error. Trying different dimensions of <code>c</code>, e.g. <code>[[2, 1]]</code> fails, too.</p>
<hr />
<p>I found that the simplest way to go is to define <code>a</code> differently, i.e. <code>a = np.linspace([-2.5, -2.5], [2.5, 2.5], 6, endpoint=True)</code>. I can now write <code>a*c</code>, <code>a*(2, 1)</code>, etc. Since it changes my initial post, all answers below remain valid.</p>
|
<python><arrays><numpy>
|
2025-03-26 22:42:55
| 5
| 8,585
|
Paul Jurczak
|
79,537,217
| 2,893,712
|
Nodriver Get Page Status Code and Refresh
|
<p>I am using nodriver to automate a website login. Sometimes the page will return a 403 error but refreshing the page will solve the issue. I am trying to add logic in my code to do this. Here is my code currently:</p>
<pre><code>import nodriver as uc
browser = await uc.start()
async def myhandler(event: uc.cdp.network.ResponseReceived):
print(event.response.status)
browser.main_tab.add_handler(uc.cdp.network.ResponseReceived, myhandler)
page = await browser.get("https://google.com")
</code></pre>
<p>This returns a bunch of HTTP status codes for every request that was made when trying to access google. How do I check the HTTP status code of the entire page and refresh (with form resubmission) if 403 was returned?</p>
|
<python><undetected-chromedriver><nodriver>
|
2025-03-26 20:28:57
| 0
| 8,806
|
Bijan
|
79,537,198
| 1,875,932
|
When using venv, packages I'm building suddenly disappear when others are installed
|
<p>This is more of a where in pip could this be happening, not a question about the repos/packages.</p>
<p>I'm building several pytorch related repos using their setup.py scripts. For example, <a href="https://github.com/pytorch/vision/blob/main/setup.py" rel="nofollow noreferrer">https://github.com/pytorch/vision/blob/main/setup.py</a>.</p>
<p>The setup.py of each should create the necessary files under <code>/root/venv/lib/python3.10/site-packages</code>, and they do. I see the folders post-build.</p>
<p>However, I'm noticing some odd situation where packageA builds, shows under <code>pip list</code>, but then packageB builds and wipes out packageA from the list. The folder for packageA still exists under site-packages, which is the issue I need some help understanding.</p>
<p>Here is the <code>pip list</code> showing <code>xformers</code>, while <code>torchvision</code> is still building:</p>
<pre><code>root@844722c6e751:~# pip list
Package Version
------------ -------------------------
bitsandbytes 0.45.2
einops 0.8.1
filelock 3.18.0
fsspec 2025.3.0
Jinja2 3.1.6
MarkupSafe 3.0.2
mpmath 1.3.0
networkx 3.4.2
pillow 11.1.0
pip 22.0.2
sympy 1.13.1
torch 2.6.0
xformers 0.0.30+1298453c.d20250326
</code></pre>
<p>A second or two after <code>torchvision</code> finishes, the <code>xformers</code> disappears from the list.</p>
<p>You can see the <code>xformer</code> folder is still there and looks complete too:</p>
<pre><code>root@fc2813de016d:~# ll wooly-client-venv/lib/python3.10/site-packages/
total 396
drwxr-xr-x 1 1000 1000 4096 Mar 26 15:34 ./
drwxr-xr-x 1 1000 1000 4096 Mar 26 14:58 ../
drwxr-xr-x 4 root root 4096 Mar 26 15:33 MarkupSafe-3.0.2-py3.10-linux-aarch64.egg/
drwxr-xr-x 2 root root 4096 Mar 26 15:24 MarkupSafe-3.0.2.dist-info/
drwxr-xr-x 2 root root 4096 Mar 26 14:58 PyYAML-6.0.2.dist-info/
drwxr-xr-x 1 root root 4096 Mar 26 15:24 __pycache__/
drwxr-xr-x 3 1000 1000 4096 Mar 26 14:58 _distutils_hack/
drwxr-xr-x 3 root root 4096 Mar 26 14:58 _yaml/
drwxr-xr-x 9 root root 4096 Mar 26 15:24 bitsandbytes/
drwxr-xr-x 3 root root 4096 Mar 26 15:24 bitsandbytes-0.45.2.dist-info/
-rw-r--r-- 1 1000 1000 152 Mar 26 14:58 distutils-precedence.pth
-rw-r--r-- 1 root root 95 Mar 26 15:34 easy-install.pth
drwxr-xr-x 3 root root 4096 Mar 26 15:24 filelock/
. . .
drwxr-xr-x 43 root root 4096 Mar 26 15:24 sympy/
drwxr-xr-x 5 root root 4096 Mar 26 15:33 sympy-1.13.1-py3.10.egg/
drwxr-xr-x 2 root root 4096 Mar 26 15:24 sympy-1.13.1.dist-info/
drwxr-xr-x 63 root root 4096 Mar 26 15:24 torch/
drwxr-xr-x 2 root root 4096 Mar 26 15:24 torch-2.6.0a0+gitd398019.egg-info/
drwxr-xr-x 11 root root 4096 Mar 26 15:24 torchgen/
drwxr-xr-x 4 root root 4096 Mar 26 15:34 torchvision-0.21.0+7af6987-py3.10-linux-aarch64.egg/
drwxr-xr-x 3 root root 4096 Mar 26 14:58 typing_extensions-4.13.0.dist-info/
-rw-r--r-- 1 root root 171956 Mar 26 14:58 typing_extensions.py
drwxr-xr-x 5 root root 4096 Mar 26 14:58 wheel/
drwxr-xr-x 2 root root 4096 Mar 26 14:58 wheel-0.45.1.dist-info/
drwxr-xr-x 4 root root 4096 Mar 26 15:33 xformers-0.0.30+1298453c.d20250326-py3.10-linux-aarch64.egg/
drwxr-xr-x 3 root root 4096 Mar 26 14:58 yaml/
</code></pre>
<p>Some things to note:</p>
<ul>
<li>I'm running both setup.py around the same time and they finish relatively at the same time too. Maybe a collision inside of pip?</li>
<li>If I run sequentially, it's fine. There is no collision/removal of formers.</li>
</ul>
<p>What could cause the packages to disappear after torchvision setup finishes if I don't see anything in the torchvision output/script that is removing packages from being visible under <code>pip list</code> (which would not show the package folder under site-packages)?</p>
|
<python><python-3.x><pip><virtualenv><python-venv>
|
2025-03-26 20:22:17
| 1
| 793
|
NorseGaud
|
79,537,094
| 2,299,692
|
Failure to authenticate a SharePoint connection with AuthenticationContext
|
<p>I have the following python code in my Matillion PythonScript component:</p>
<pre class="lang-py prettyprint-override"><code>from office365.sharepoint.client_context import ClientContext
from office365.runtime.auth.authentication_context import AuthenticationContext
authority_url = “https://login.microsoftonline.us/<tenant_id>”
site_url = “https://mysite.sharepoint.us/sites/myfolder”
auth_ctx = AuthenticationContext(authority_url)
if auth_ctx.acquire_token_for_app(client_id, client_secret):
ctx = ClientContext(site_url, auth_ctx)
web = ctx.web
ctx.load(web)
ctx.execute_query()
print("Web Title:", web.properties["Title"])
else:
raise ValueError(“Failed to acquire token for the given client credentials.”)
</code></pre>
<p>It errors out at <code>ctx.execute_query()</code> with the following error message</p>
<p><code>ValueError: {“error”:“invalid_request”,“error_description”:"AADSTS900023: Specified tenant identifier ‘none’ is neither a valid DNS name, nor a valid external domain.</code></p>
<p>Is there a way to trouble shoot the cause of this error, since the error message is not very informative?</p>
|
<python><sharepoint><office365><etl><matillion>
|
2025-03-26 19:26:41
| 0
| 1,938
|
David Makovoz
|
79,537,070
| 2,072,516
|
Using alembic cli to build test database
|
<p>I'm attempting to build a pytest fixture that drops my test database, recreates it, and then runs my alembic migrations. When I run it, I get errors that my relationships don't exist, which seems to indicate alembic never ran:</p>
<pre><code>@pytest.fixture(scope="session", autouse=True)
def setup_database():
database_url = f"postgresql+psycopg://{configs.DATABASE_USER}:{configs.DATABASE_PASSWORD}@{configs.DATABASE_HOST}"
engine: Engine = create_engine(database_url)
with engine.connect().execution_options(isolation_level="AUTOCOMMIT") as connection:
connection.execute(
text(f"DROP DATABASE IF EXISTS {configs.DATABASE_DATABASE}_test")
)
connection.execute(text(f"CREATE DATABASE {configs.DATABASE_DATABASE}_test"))
alembic_cfg = AlembicConfig()
alembic_cfg.attributes["connection"] = connection
alembic_cfg.set_main_option("script_location", "/app/src/alembic")
alembic_cfg.set_main_option(
"sqlalchemy.url", f"{database_url}/{configs.DATABASE_DATABASE}_test"
)
alembic_command.upgrade(alembic_cfg, "head")
yield
connection.execute(text(f"DROP DATABASE {configs.DATABASE_DATABASE}_test"))
</code></pre>
<p>So I took the alembic code and moved it to a script, watching the database via a gui.</p>
<pre><code>from alembic import command as alembic_command
from alembic.config import Config as AlembicConfig
from app.configs import configs
def main():
database_url = f"postgresql+psycopg://{configs.DATABASE_USER}:{configs.DATABASE_PASSWORD}@{configs.DATABASE_HOST}/{configs.DATABASE_DATABASE}_test"
alembic_cfg = AlembicConfig("/app/src/alembic.ini")
alembic_cfg.set_main_option("sqlalchemy.url", database_url)
alembic_command.upgrade(alembic_cfg, "head")
main()
</code></pre>
<p>And I get an output:</p>
<pre><code>/app/src # python app/test_alembic.py
INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO [alembic.runtime.migration] Will assume transactional DDL.
/app/src #
</code></pre>
<p>Which seems to indicate alembic ran, but couldn't find any migrations to run. If I don't pass the ini file (using only <code>alembic_cfg.set_main_option("script_location", "/app/src/alembic")</code>), I get no output. I set up <a href="https://gist.github.com/rohitsodhia/af3e4b20bdd6ad14cea47914996a7036" rel="nofollow noreferrer">a gist</a> with my alembic.ini and env.py files, which I feel are where the problem is.</p>
|
<python><database><testing><alembic>
|
2025-03-26 19:16:03
| 0
| 3,210
|
Rohit
|
79,537,058
| 22,285,621
|
Getting Authentication failed error while connecting with MS Fabric data Warehouse Using Node js and Python
|
<p>First I ahve used Node.js and tedious but it didn't work becase tedious library can't connect to fabric dwh because fabric has a bit different security layers and protocols, that tedious so far do not have.
Now I have used ODBC library but got Authentication Error</p>
<blockquote>
<p>Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Could not login because the authentication failed.</p>
</blockquote>
<p><a href="https://i.sstatic.net/lct8Fd9F.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lct8Fd9F.png" alt="error" /></a></p>
<p>I am using this code in node js</p>
<pre><code>require('dotenv').config();
const express = require('express');
const odbc = require('odbc');
const app = express();
const port = process.env.PORT || 3000;
// Connection pool configuration
const poolConfig = {
connectionString: `
Driver={ODBC Driver 18 for SQL Server};
Server=${process.env.DB_SERVER};
Database=${process.env.DB_NAME};
UID=${process.env.AZURE_APP_CLIENT_ID};
PWD=${process.env.AZURE_APP_SECRET};
Authentication=ActiveDirectoryServicePrincipal;
Encrypt=yes;
TrustServerCertificate=no;
Connection Timeout=30;
`.replace(/\n\s+/g, ' '), // Clean up whitespace
initialSize: 5,
maxSize: 20
};
// Create connection pool
let pool;
(async () => {
try {
pool = await odbc.pool(poolConfig);
console.log('Connection pool created successfully');
} catch (error) {
console.error('Pool creation failed:', error);
process.exit(1);
}
})();
// Basic health check
app.get('/', (req, res) => {
res.send('Fabric DWH API is running');
});
// Query endpoint
app.get('/api/query', async (req, res) => {
try {
const query = req.query.sql || 'SELECT TOP 10 * FROM INFORMATION_SCHEMA.TABLES';
const connection = await pool.connect();
const result = await connection.query(query);
await connection.close();
res.json(result);
} catch (error) {
console.error('Query error:', error);
res.status(500).json({ error: error.message });
}
});
app.listen(port, () => {
console.log(`Server running on http://localhost:${port}`);
});
</code></pre>
<p>using python</p>
<pre><code>import pyodbc
import struct
from itertools import chain, repeat
from azure.identity import ClientSecretCredential
def connect_to_fabric():
tenant_id = "your-tenant-id"
client_id = "your-client-id"
client_secret = "your-client-secret"
database_server = "your-server.datawarehouse.fabric.microsoft.com,1433"
database_name = "your-database-name"
credential = ClientSecretCredential(
tenant_id=tenant_id,
client_id=client_id,
client_secret=client_secret
)
token = credential.get_token("https://database.windows.net/.default").token
token_bytes = token.encode("UTF-16-LE")
encoded = bytes(chain.from_iterable(zip(token_bytes, repeat(0))))
token_struct = struct.pack(f'<I{len(encoded)}s', len(encoded), encoded)
SQL_COPT_SS_ACCESS_TOKEN = 1256
connection_string = f"""
Driver={{ODBC Driver 18 for SQL Server}};
Server={database_server};
Database={database_name};
Encrypt=yes;
TrustServerCertificate=no;
Connection Timeout=45;
"""
try:
print("🔹 Connecting to Fabric SQL Warehouse...")
conn = pyodbc.connect(connection_string, attrs_before={SQL_COPT_SS_ACCESS_TOKEN: token_struct})
cursor = conn.cursor()
cursor.execute("SELECT 1 AS test")
row = cursor.fetchone()
print(f"✅ Connected! Query Result: {row[0]}")
conn.close()
print("🔹 Connection Closed.")
except Exception as e:
print(f"❌ Connection Failed: {e}")
if __name__ == "__main__":
connect_to_fabric()
</code></pre>
<p>In python I also got the same error</p>
<p><a href="https://i.sstatic.net/9QXpRHOK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9QXpRHOK.png" alt="python error" /></a></p>
<p>Additional resources: <a href="https://community.fabric.microsoft.com/t5/Issues/ERROR-Connect-to-SQL-Endpoint-using-Node-js-and-tedious-npm/idi-p/3763188#feedback-success" rel="nofollow noreferrer">Issue On Microsoft fabric community</a></p>
|
<python><sql><node.js><microsoft-fabric>
|
2025-03-26 19:11:18
| 1
| 988
|
M Junaid
|
79,536,999
| 4,897,557
|
Files written by Colab not appearing in finder
|
<p>I have written a function to download certain files to both a local directory and a google drive directory. The files do not appear in Finder, nor via an "ls -la" command in Terminal.<br />
When I run a Terminal Listing within my colab notebook, the files are listed in the expected directory structure:</p>
<pre><code>import os
for root, dirs, files in os.walk("/Users/glenn/Documents/SEC Edgar Data"):
print("Folder:", root)
for file in files:
print(" ", file)
</code></pre>
<p>COLAB OUTPUT SAMPLE:</p>
<pre><code>older: /Users/xxxxx/Documents/SEC Edgar Data
Comerica_10K.html
test_file.txt
Folder: /Users/xxxxx/Documents/SEC Edgar Data/sec-edgar-filings
Folder: /Users/xxxxx/Documents/SEC Edgar Data/sec-edgar-filings/WTFC
Folder: /Users/xxxxx/Documents/SEC Edgar Data/sec-edgar-filings/WTFC/10-K
Folder: /Users/xxxxx/Documents/SEC Edgar Data/sec-edgar-filings/WTFC/10-K/0001015328-25-000093
full-submission.txt
Folder: /Users/xxxxx/Documents/SEC Edgar Data/sec-edgar-filings/WTFC/10-K/0000948572-00-000021
</code></pre>
<p>I have tried many different debugging options, including looking for hidden files and restarting the finder. Still, no files appear.</p>
<p>Can anybody help me further debug?</p>
|
<python><google-colaboratory>
|
2025-03-26 18:39:41
| 1
| 2,495
|
GPB
|
79,536,827
| 610,375
|
How can I specify console_scripts that are to be installed for a "variant"?
|
<p>The <a href="https://setuptools.pypa.io/en/latest/userguide/dependency_management.html#optional-dependencies" rel="nofollow noreferrer">Setuptools documentation</a> describes how to specify "optional dependencies" for what it calls a "variant" of a Python package.</p>
<p>In <code>pyproject.toml</code>, we can enter something like the following:</p>
<pre class="lang-ini prettyprint-override"><code>[project]
name = "mypkg"
# ...
[project.optional-dependencies]
PDF = ["ReportLab>=1.2", "RXP"]
</code></pre>
<p>This enables installation like:</p>
<pre class="lang-bash prettyprint-override"><code>$ pip install 'mypkg[PDF]'
</code></pre>
<p>to include the optional dependencies.</p>
<p>I would like to install some scripts only if the <code>PDF</code> variant is selected.
For an unconditional installation we can do this to make <code>my-script</code> available after installation:</p>
<pre class="lang-ini prettyprint-override"><code>[project.scripts]
my-script = "mypkg.scripts.my_script:main"
my-pdf-script = "mypkg.scripts.my_pdf_script:main"
</code></pre>
<p>How can I specify that <code>my-pdf-script</code> should be installed only if the <code>PDF</code> variant is requested?</p>
|
<python><setuptools>
|
2025-03-26 17:38:44
| 0
| 509
|
sappjw
|
79,536,740
| 889,053
|
In python: oracledb.connect simply hangs, even when the local client succeeds. Why?
|
<p>I am trying to connect to an Oracle DB using the <code>oracledb</code> connection library and when I try it It simply hangs on <code>with oracledb.connect</code> even when following the <a href="https://python-oracledb.readthedocs.io/en/latest/user_guide/connection_handling.html" rel="nofollow noreferrer">documentation</a>.</p>
<p>Here is the sample code. I would expect it to output the system time</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3
import os
import oracledb
un = os.environ["USER"]
pw = os.environ["PASSWORD"]
tns_entry = os.environ["TNS_ENTRY"]
config_dir = f'{os.environ["PWD"]}/wallets/test/'
print("USERNAME:", un)
print("PASSWORD:", pw)
print("TNS_ENTRY:", tns_entry)
print("TNS_ADMIN:", config_dir)
with open(config_dir+"/tnsnames.ora","r") as f:
print(f.read())
with open(config_dir+"/sqlnet.ora","r") as f:
print(f.read())
with oracledb.connect(
user=un,
password=pw,
dsn=tns_entry,
config_dir=config_dir,
wallet_location=config_dir
) as connection:
with connection.cursor() as cursor:
sql = "select systimestamp from dual"
for (r,) in cursor.execute(sql):
print(r)
</code></pre>
<p>Here is the output:</p>
<pre><code>-> % ./dbtest.py
USERNAME: sreapp
PASSWORD: ***
TNS_ENTRY: medium
TNS_ADMIN: /Users/cbongior/dev/oracle/fleetman/wallets/test/
medium = (description= (retry_count=2)(retry_delay=3)(address=(https_proxy=100.92.6.113)(https_proxy_port=3128)(protocol=tcps)(port=1522)(host=zxqgy1nz.adb.us-ashburn-1.oraclecloud.com))(connect_data=(service_name=gea38c0017d364a_sreadwt1_medium.adb.oraclecloud.com))(security=(ssl_server_dn_match=no)))
SQLNET.USE_HTTPS_PROXY=on
WALLET_LOCATION = (SOURCE = (METHOD = file) (METHOD_DATA = (DIRECTORY="$TNS_ADMIN")))
SSL_SERVER_DN_MATCH=no
</code></pre>
<p>and in the console, effectively the same steps (confirming setting are valid):</p>
<pre><code>-> % echo $USER
sreapp
(.venv) cbongior@cbongior-mac [16:21:49] [~/dev/oracle/fleetman] [main *]
-> % echo $PASSWORD
****
(.venv) cbongior@cbongior-mac [16:21:54] [~/dev/oracle/fleetman] [main *]
-> % echo $TNS_ENTRY
medium
(.venv) cbongior@cbongior-mac [16:22:05] [~/dev/oracle/fleetman] [main *]
-> % echo $TNS_ADMIN
/Users/cbongior/dev/oracle/fleetman/wallets/test
(.venv) cbongior@cbongior-mac [16:22:21] [~/dev/oracle/fleetman] [main *]
-> % echo $TNS_ADMIN
(.venv) cbongior@cbongior-mac [16:22:32] [~/dev/oracle/fleetman] [main *]
-> % cat $TNS_ADMIN/tnsnames.ora
medium = (description= (retry_count=20)(retry_delay=3)(address=(https_proxy=100.92.6.113)(https_proxy_port=3128)(protocol=tcps)(port=1522)(host=zxqgy1nz.adb.us-ashburn-1.oraclecloud.com))(connect_data=(service_name=gea38c0017d364a_sreadwt1_medium.adb.oraclecloud.com))(security=(ssl_server_dn_match=no)))
[main *]
-> % cat $TNS_ADMIN/sqlnet.ora
SQLNET.USE_HTTPS_PROXY=on
WALLET_LOCATION = (SOURCE = (METHOD = file) (METHOD_DATA = (DIRECTORY="$TNS_ADMIN")))
SSL_SERVER_DN_MATCH=no%
(.venv) cbongior@cbongior-mac [16:22:40] [~/dev/oracle/fleetman] [main *]
-> % sqlplus "${USER}/${PASSWORD}@${TNS_ENTRY}" <<< "select systimestamp from dual;
exit;"
SQL*Plus: Release 19.0.0.0.0 - Production on Tue Mar 25 16:21:01 2025
Version 19.8.0.0.0
Copyright (c) 1982, 2020, Oracle. All rights reserved.
Last Successful login time: Tue Mar 25 2025 16:19:30 -07:00
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.27.0.1.0
SQL>
SYSTIMESTAMP
---------------------------------------------------------------------------
25-MAR-25 11.21.04.273828 PM +00:00
SQL> Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.27.0.1.0
(.venv) cbongior@cbongior-mac [16:21:05] [~/dev/oracle/fleetman] [main *]
</code></pre>
<p>When I move those code to a host behind our firewall (removing the proxy settings), it works immediately:</p>
<pre><code>|main ↑7 S:4 U:3 ?:6 ✗| → dbtest.py
/home/cbongior/.local/lib/python3.6/site-packages/oracledb/__init__.py:39: UserWarning: Python 3.6 is no longer supported by the Python core team. Therefore, support for it is deprecated in python-oracledb and will be removed in a future release
warnings.warn(message)
USERNAME: sreapp
PASSWORD: ****
TNS_ENTRY: medium
TNS_ADMIN: /home/cbongior/dev/oracle/fleetman/wallets/test/
medium = (description= (retry_count=2)(retry_delay=3)(address=(protocol=tcps)(port=1522)(host=zxqgy1nz.adb.us-ashburn-1.oraclecloud.com))(connect_data=(service_name=gea38c0017d364a_sreadwt1_medium.adb.oraclecloud.com))(security=(ssl_server_dn_match=no)))
SQLNET.USE_HTTPS_PROXY=off
WALLET_LOCATION = (SOURCE = (METHOD = file) (METHOD_DATA = (DIRECTORY="$TNS_ADMIN")))
SSL_SERVER_DN_MATCH=no
2025-03-27 00:13:06.250068
</code></pre>
<p>It's somehow related to the proxy settings in the <code>tnsnames.ora</code>. I have explicitly set the proxy settings in the connect string, to no luck</p>
|
<python><python-oracledb>
|
2025-03-26 17:02:03
| 1
| 5,751
|
Christian Bongiorno
|
79,536,730
| 2,110,463
|
mypy complains for static variable
|
<p>mypy (v.1.15.0) complains with the following message <code>Access to generic instance variables via class is ambiguous</code> for the following code:</p>
<pre><code>from typing import Self
class A:
B: Self
A.B = A()
</code></pre>
<p>If I remove <code>B: Self</code>, then is says <code>"type[A]" has no attribute "B"</code>.</p>
<p>How make mypy happy? You can play with this here: <a href="https://mypy-play.net/?mypy=1.15.0&python=3.12&gist=7e2b4813378a163ed9b89bd75ca129c2" rel="nofollow noreferrer">mypy playground</a></p>
|
<python><python-typing><mypy>
|
2025-03-26 16:59:22
| 1
| 2,225
|
PinkFloyd
|
79,536,716
| 2,287,458
|
Expand Struct columns into rows in Polars
|
<p>Say we have this dataframe:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.DataFrame({'EU': {'size': 10, 'GDP': 80},
'US': {'size': 100, 'GDP': 800},
'AS': {'size': 80, 'GDP': 500}})
</code></pre>
<pre><code>shape: (1, 3)
┌───────────┬───────────┬───────────┐
│ EU ┆ US ┆ AS │
│ --- ┆ --- ┆ --- │
│ struct[2] ┆ struct[2] ┆ struct[2] │
╞═══════════╪═══════════╪═══════════╡
│ {10,80} ┆ {100,800} ┆ {80,500} │
└───────────┴───────────┴───────────┘
</code></pre>
<p>I am looking for a function like <code>df.expand_structs(column_name='metric')</code> that gives</p>
<pre class="lang-py prettyprint-override"><code>shape: (2, 4)
┌────────┬─────┬─────┬─────┐
│ metric ┆ EU ┆ US ┆ AS │
│ --- ┆ --- ┆ --- ┆ --- │
│ str ┆ i64 ┆ i64 ┆ i64 │
╞════════╪═════╪═════╪═════╡
│ size ┆ 10 ┆ 100 ┆ 80 │
│ GBP ┆ 80 ┆ 800 ┆ 500 │
└────────┴─────┴─────┴─────┘
</code></pre>
<p>I've tried other functions like <code>unnest</code>, <code>explode</code> but no luck. Any help appreciated!</p>
|
<python><dataframe><python-polars><unpivot>
|
2025-03-26 16:53:00
| 5
| 3,591
|
Phil-ZXX
|
79,536,694
| 8,963,682
|
Alembic Autogenerate Incorrectly Trying to Drop alembic_version Table
|
<h2>Problem</h2>
<p>I'm facing a strange issue with Alembic's autogenerate command when setting up the initial database migration for a PostgreSQL database with a public schema.</p>
<p>When running <code>alembic revision --autogenerate -m "Initial schema setup"</code> against a database containing only an empty <code>public.alembic_version</code> table, the generated migration script incorrectly includes:</p>
<ul>
<li><code>op.drop_table('alembic_version')</code> near the end of the <code>upgrade()</code> function</li>
<li><code>op.create_table('alembic_version', ...)</code> in the <code>downgrade()</code> function</li>
</ul>
<p>This causes subsequent <code>alembic upgrade head</code> to fail with:</p>
<pre><code>sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedTable) relation "public.alembic_version" does not exist
[SQL: INSERT INTO public.alembic_version (version_num) VALUES ('xxxxxxxx') ...]
</code></pre>
<p>This happens because the <code>upgrade()</code> function executes the erroneous <code>op.drop_table('alembic_version')</code> before Alembic attempts to record the migration's success by inserting the revision hash into that same table.</p>
<h2>Configuration</h2>
<p>Here's my <code>env.py</code> setup (relevant portions):</p>
<pre class="lang-py prettyprint-override"><code># --- Target Metadata and Schema ---
target_metadata = frontend_metadata # Use the imported metadata
target_schema = getattr(settings.FRONTEND_DB, 'SCHEMA', None) or 'public'
def run_migrations_offline() -> None:
"""Run migrations in 'offline' mode."""
url = get_url()
context.configure(
url=url,
target_metadata=target_metadata,
literal_binds=True,
dialect_opts={"paramstyle": "named"},
include_schemas=True,
version_table_schema=target_schema,
)
with context.begin_transaction():
context.run_migrations()
def run_migrations_online() -> None:
"""Run migrations in 'online' mode."""
configuration = config.get_section(config.config_ini_section, {})
configuration["sqlalchemy.url"] = get_url()
connectable = engine_from_config(
configuration,
prefix="sqlalchemy.",
poolclass=pool.NullPool,
)
with connectable.connect() as connection:
context.configure(
connection=connection,
target_metadata=target_metadata,
include_schemas=True,
version_table_schema=target_schema,
compare_type=True,
compare_server_default=True
)
with context.begin_transaction():
context.run_migrations()
</code></pre>
<p>I'm using SQLAlchemy with PostgreSQL. The database models are defined with a more sophisticated setup:</p>
<p>In <code>database.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>class DatabaseConnection:
def __init__(self):
# Create metadata with proper schema
self.frontend_metadata = MetaData(schema=settings.FRONTEND_DB.SCHEMA)
# Create bases - these are the definitive base classes for models
self.FrontendBase = declarative_base(metadata=self.frontend_metadata)
# ... other initialization code ...
# Create global database instance
db = DatabaseConnection()
# Export commonly used items
FrontendBase = db.FrontendBase
frontend_metadata = db.frontend_metadata
__all__ = [
"db",
"FrontendBase",
"frontend_metadata",
# ... other exports ...
]
</code></pre>
<p>Then in <code>base.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>from src.services.vamos.database.database import FrontendBase
class BaseModel(FrontendBase):
"""Base model for all SQLAlchemy models."""
__abstract__ = True
# Model fields...
</code></pre>
<h2>What I've Verified</h2>
<ol>
<li><code>alembic_version</code> is <strong>not</strong> defined in my SQLAlchemy models/metadata</li>
<li>I confirmed <code>target_metadata</code> does not contain any definition for <code>alembic_version</code></li>
<li>Both <code>target_schema</code> and <code>version_table_schema</code> correctly resolve to 'public'</li>
<li>If I manually delete the <code>op.drop_table('alembic_version')</code> line from the generated script, the migration works perfectly</li>
</ol>
<h2>Questions</h2>
<ol>
<li>Is this a known issue with Alembic? Should it be ignoring the <code>alembic_version</code> table by default?</li>
<li>Is there something wrong with my configuration that's causing this behavior?</li>
<li>What's the proper way to handle this without manually editing the generated migration script?</li>
</ol>
<h2>Environment</h2>
<ul>
<li>Alembic version: 1.15.1</li>
<li>SQLAlchemy version: 2.0.39</li>
<li>Python: 3.10.11</li>
<li>Database: PostgreSQL 17.2 (AWS RDS)</li>
<li>OS: Windows</li>
</ul>
<p>Any help would be appreciated - I'd like to understand if I'm doing something wrong or if this is indeed a bug with Alembic.</p>
|
<python><database><postgresql><amazon-rds><alembic>
|
2025-03-26 16:42:34
| 1
| 617
|
NoNam4
|
79,536,571
| 8,954,691
|
Why does mypy not complain when overloading an abstract method's concrete implementation in the child class?
|
<p>I am trying to wrap my head around the <code>@overload</code> operator, generics and the Liskov Substitution Principle. I usually make my abstract classes generic on some typevar <code>T</code> which is followed by concrete implementations of child classes.</p>
<p>For example please consider the following code:</p>
<pre class="lang-py prettyprint-override"><code>from typing import TypeVar
E = TypeVar("E") # entity
class AbstractRepository(ABC, Generic[E]):
@abstractmethod
async def update(self, entity: E) -> bool:
raise NotImplementedError
class IndexRepository(AbstractRepository[str]):
@overload
async def update(self, index: str) -> bool:
...
@overload
async def update(self, index: list[str]) -> bool:
...
async def update(self, index: Union[str, list[str]]) -> bool:
# implementation here
return stuff
</code></pre>
<p>I expect mypy to complain about the Liskov Substitution Principle, but it does not, even though the implementation of the <code>update</code> method the parameter <code>index</code> has a type <code>Union[str, list[str]]</code> and the child class <code>IndexRepository(AbstractRepository[str])</code>'s signature defines <code>T=str</code>.</p>
<p>Is this because of the <code>@overload</code> operator, and why is LSP not violated here?</p>
|
<python><python-typing><mypy><liskov-substitution-principle>
|
2025-03-26 13:28:14
| 1
| 703
|
Siddhant Tandon
|
79,536,473
| 2,806,338
|
Python, search from where import is call
|
<p>For managing transition from an old import to a new import, I search to log from where my module is used.</p>
<p>By example, my old is name m_old.py. With a simple print in file m_old.py, I display a message "used of old module". So each time there is "import m_old.py", I have a message "used of old module" in execution ?</p>
<p>If it was a function, I will use module inspect and traceback to log in execution from where this function is call, by exemple "call of old function from file.py line XXX".</p>
<p>Is-it possible to display a message like : "this old import is call from file.py line XXX" ?</p>
|
<python>
|
2025-03-26 12:51:06
| 2
| 739
|
Emmanuel DUMAS
|
79,536,439
| 2,322,961
|
How to use an index URL in pip install when both the package and index are stored in Azure Storage
|
<p>I have stored both a Python package (<code>mypackage.whl</code>) and an <code>index.html</code> file (listing the package) in an Azure Storage Account (Blob Storage). I want to use <code>pip install</code> with --index-url to install the package directly from Azure Storage. However, <code>pip</code> does not recognize the Azure Storage URL as a valid package index.</p>
<p>My Azure Storage container (<code>mycontainer</code>) contains:
<code>https://mystorageaccount.blob.core.windows.net/mycontainer/index.html https://mystorageaccount.blob.core.windows.net/mycontainer/mypackage-1.0.0-py3-none-any.whl</code></p>
<p>I attempted to install the package using:
<code>pip install mypackage --index-url=https://mystorageaccount.blob.core.windows.net/mycontainer/</code></p>
<p>But I got the error:
<code>ERROR: Could not find a version that satisfies the requirement mypackage (from versions: none)</code></p>
<p><strong>Questions:</strong></p>
<ol>
<li>How can I configure the index.html file so that pip recognizes it as a valid package index?</li>
<li>Are there alternative ways to host a pip-compatible package index in Azure Storage?</li>
</ol>
<p>Note that the storage container is public and I do not want to use <code>Azure Artifacts</code>—only Azure Blob Storage as <code>Azure artifacts</code> supports only package uploading.</p>
<p><strong>index.html looks like:</strong></p>
<p><a href="https://i.sstatic.net/DoCiqt4E.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DoCiqt4E.png" alt="enter image description here" /></a></p>
<p><strong>When I click on package 1, it lists all whl files.</strong></p>
<p><a href="https://i.sstatic.net/GZd8D0QE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GZd8D0QE.png" alt="enter image description here" /></a></p>
|
<python><azure><pip><azure-blob-storage>
|
2025-03-26 12:39:47
| 1
| 428
|
Jayesh Tanna
|
79,536,363
| 1,719,931
|
Logging operation results in pandas (equivalent of STATA/tidylog)
|
<p>When I do an operation in STATA, for example removing duplicated rows, it will tell me the number of rows removed, for instance:</p>
<pre><code>. sysuse auto.dta
(1978 automobile data)
. drop if mpg<15
(8 observations deleted)
. drop if rep78==.
(4 observations deleted)
</code></pre>
<p>For the tidyverse, the package <a href="https://github.com/elbersb/tidylog" rel="nofollow noreferrer">tidylog</a> implements a similar feature, providing feedback on the operation (e.g. for a join, number of joined and unjoined rows, for a filter, nubmer of removed rows, etc.), with the little disadvantage that you will lose the autocompletion of your editor, as it wraps tidyverse functions with definitions like <code>filter(...)</code>, to accomodate for the fact that the tidyverse upstream definition could change over time.</p>
<p>Is there something similar for pandas?</p>
<p>I found <a href="https://github.com/eyaltrabelsi/pandas-log" rel="nofollow noreferrer">pandas-log</a> but seems abandoned.</p>
<p><a href="https://stackoverflow.com/questions/71729310/how-to-print-the-number-of-observations-dropped-by-tidyverses-functions-like-fi">Related question for R</a>.</p>
|
<python><pandas>
|
2025-03-26 12:13:53
| 2
| 5,202
|
robertspierre
|
79,536,321
| 2,521,423
|
PySide6 signal not emitting properly when passed to decorator
|
<p>I have this decorator in python 3.11 that sends a signal encoding the arguments to a function call that it decorates, here:</p>
<pre><code>def register_action(signal=None):
def decorator(func):
def wrapper(self, *args, **kwargs): # self is the first argument for instance methods
result = func(self, *args, **kwargs)
history = {'function': func.__name__,
'args': args,
'kwargs': kwargs
}
try:
signal.emit(history, False)
except Exception as e:
self.logger.info(f'Unable to log tab action: {str(e)}')
return result
return wrapper
return decorator
</code></pre>
<p>The signal in question is defined in a class at the top, like so:</p>
<pre><code>class MetadataView(MetaView):
logger = logging.getLogger(__name__)
update_tab_action_history = Signal(object, bool)
...
</code></pre>
<p>and the decorator is applied like this:</p>
<pre><code>@register_action(signal=update_tab_action_history)
def _overlay_plot(self, parameters):
...
</code></pre>
<p>However, when this is called, I get the following error:</p>
<pre><code>'PySide6.QtCore.Signal' object has no attribute 'emit'
</code></pre>
<p>I am reasonably certain that <code>PySide6.QtCore.Signal</code> does in fact have an <code>emit</code> attribute, so something else is going wrong... Any suggestions would be appreciated.</p>
<p>The MetaView class from which my class inherits inherits from QWidget.</p>
|
<python><signals><python-decorators><pyside6><python-logging>
|
2025-03-26 11:55:14
| 1
| 1,488
|
KBriggs
|
79,536,212
| 14,498,767
|
How to send CSRF token using Django API and a Flutter-web frontend ? HeaderDisallowedByPreflightResponse
|
<p>I have a python/django web API with a single endpoint, let's call it <code>/api/v1/form</code>. That API is called from a Flutter-web frontend application. I currently use the following configuration that disables CSRF token verification, and it works :</p>
<p><strong>requirements.txt</strong></p>
<pre class="lang-none prettyprint-override"><code>Django==5.1.7
django-cors-headers==4.7.0
</code></pre>
<p><strong>webserver/settings.py</strong></p>
<pre class="lang-py prettyprint-override"><code>...
ALLOWED_HOSTS = ["localhost"]
CORS_ALLOWED_ORIGINS = ["http://localhost:8001"] # flutter dev port
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'corsheaders',
'webserver',
]
...
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'corsheaders.middleware.CorsMiddleware',
'django.middleware.common.CommonMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
...
</code></pre>
<p><strong>webserver/urls.py</strong></p>
<pre class="lang-py prettyprint-override"><code>from django.urls import path
import webserver.views
urlpatterns = [
path('api/v1/form', webserver.views.api_v1_form, name="api_v1_form"),
]
...
</code></pre>
<p><strong>webserver/views.py</strong></p>
<pre class="lang-py prettyprint-override"><code>from django.http import HttpResponse, HttpResponseBadRequest
def api_v1_form(request):
if request.method == "POST":
process_request(request.body)
return HttpResponse("request successfully processed.")
return HttpResponseBadRequest("Expected a POST request")
</code></pre>
<p><strong>flutter/lib/page_form.dart</strong></p>
<pre class="lang-dart prettyprint-override"><code>Future<int> sendForm(MyData data) async {
final response = await http.post(
Uri.parse("http://localhost:8000/api/v1/form"),
body: data.toString(),
);
return response.statusCode;
}
</code></pre>
<p>Here is what I don't understand : if I disable to the CORS package in order to simply use a vanilla Django server, then I find myself capable of sending requests to the API but unable to receive an answer. Why is that the case ?</p>
<p>The following is the configuration used to get the CSRF token and use it in the requests.</p>
<p><strong>settings.py</strong></p>
<pre class="lang-py prettyprint-override"><code>ALLOWED_HOSTS = ["localhost"]
CSRF_TRUSTED_ORIGINS = ["http://localhost:8001", "http://localhost:8000"]
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'webserver',
]
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.middleware.common.CommonMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
</code></pre>
<p>With only the settings.py changed, I get a <code>403 forbidden</code> answer.</p>
<p><strong>views.py</strong></p>
<pre class="lang-py prettyprint-override"><code>from django.views.decorators.csrf import ensure_csrf_cookie
@ensure_csrf_cookie
def api_v1_form(request):
... # unchanged
</code></pre>
<p>The answer is still <code>403 forbidden</code></p>
<p>Lastly, I tried to add a <code>/api/v1/token</code> API to send the token and reuse it in the request.</p>
<p><strong>views.py</strong></p>
<pre class="lang-py prettyprint-override"><code>def api_v1_token(request):
if request.method == "GET":
return HttpResponse(get_token(request))
return HttpResponseBadRequest()
@ensure_csrf_cookie
def api_v1_form(request):
if request.method == "OPTIONS": # POST -> OPTIONS
# ...
</code></pre>
<p><strong>page_form.dart</strong></p>
<pre class="lang-dart prettyprint-override"><code>Future<int> envoyerFormulaire(MyData data) async {
final tokenResponse = await http.get(
Uri.parse("$apiUrlBase/token"),
);
final token = tokenResponse.body.toString();
Uri uriPost = Uri.parse("$apiUrlBase/form");
final dataResponse = await http.post(
uriPost,
body: data.toString(),
headers: {
"X-CSRFToken": token,
}
);
return dataResponse.statusCode;
}
</code></pre>
<p>However, using the inspector, I get an error indicating the <code>Access-Control-Allow-Origin</code> header is missing when trying to get the token. So I add</p>
<p><strong>views.py</strong></p>
<pre class="lang-py prettyprint-override"><code>def api_v1_token(request):
if request.method == "GET":
response = HttpResponse(get_token(request))
response["Access-Control-Allow-Origin"] = "http://localhost:8001"
return response
return HttpResponseBadRequest()
def api_v1_form(request: HttpResponse):
if request.method == "OPTIONS":
# ...
response["Access-Control-Allow-Origin"] = "http://localhost:8001"
return response
</code></pre>
<p>Now I can get the token, however the POST request outputs and error <code>PreflightMissingAllowOriginHeader</code>. I get that I am supposed to add a header, but did I not already add <code>Access-Control-Allow-Origin</code> on all requests ?</p>
<p>Moreover I see that for the POST request there are two requests : preflight and fetch, preflight returns <code>500 internal</code> error because django tries to parse the content of the empty request. That I can fix by adding a filter</p>
<p><strong>views.py</strong></p>
<pre class="lang-py prettyprint-override"><code>if request.method == "OPTIONS": # preflight
res = HttpResponse()
res["Access-Control-Allow-Origin"] = "http://localhost:8001"
return res
if request.method == "GET":
# normal processing after that
</code></pre>
<p>And now preflight returns <code>200 OK</code>. But I am left with a CORS error on the fetch request (<code>HeaderDisallowedByPreflightResponse</code>).</p>
<p>Now, I am stuck. I think there is something I don't understand about CORS headers and requests : am I missing headers ? with what values ? and is the app expected to process the data from the preflight and return the result in a subsequent GET request instead of a single POST request ? Is it the expected behavior ?</p>
|
<python><django><flutter><csrf><django-csrf>
|
2025-03-26 11:06:27
| 0
| 549
|
SpaceBurger
|
79,535,943
| 7,599,215
|
How serve JS locally with folium JSCSSMixin class?
|
<p>I have modified ja/css of Leaflet.Measure plugin, and I want to use them in folium.</p>
<p>So, I took <a href="https://github.com/python-visualization/folium/blob/main/folium/plugins/measure_control.py" rel="nofollow noreferrer">https://github.com/python-visualization/folium/blob/main/folium/plugins/measure_control.py</a> as an example:</p>
<pre class="lang-py prettyprint-override"><code>from jinja2 import Template
from folium.elements import JSCSSMixin
from folium.utilities import parse_options
class Measure(JSCSSMixin):
_template = Template(
"""
{% macro script(this, kwargs) %}
var {{ this.get_name() }} = new L.control.measure(
{{ this.options|tojson }});
{{this._parent.get_name()}}.addControl({{this.get_name()}});
{% endmacro %}
"""
) # noqa
default_js = [
(
"leaflet_measure_js",
"https://cdn.jsdelivr.net/gh/banderlog/Leaflet.Measure@refs/heads/master/src/leaflet.measure.js"
)
]
default_css = [
(
"leaflet_measure_css",
"https://cdn.jsdelivr.net/gh/banderlog/Leaflet.Measure@refs/heads/master/src/leaflet.measure.css"
)
]
def __init__(
self,
**kwargs
):
super().__init__()
self._name = "Measure"
self.options = parse_options(
**kwargs
)
</code></pre>
<p>It works, but I want somehow put local links and serve them locally (like file:///) because I do not want to use external providers.</p>
<p>I know how to do it with flask/jinja2, but I have no idea how to do it with folium/branca</p>
|
<python><folium><folium-plugins>
|
2025-03-26 09:31:42
| 1
| 2,563
|
banderlog013
|
79,535,914
| 7,431,005
|
matplotlib latex text color
|
<p>I want to partially color some LaTeX text in my plot using the <code>\textcolor{}</code> command.
I tried the following, but the text still appears black:</p>
<pre><code>import matplotlib
matplotlib.use('TKAgg')
from matplotlib import rc
rc('text',usetex=True)
rc('text.latex', preamble=r'\usepackage{color}')
import matplotlib.pyplot as plt
if __name__ == "__main__":
fig = plt.figure()
fig.text(0.5, 0.5, r'black \textcolor{red}{red}')
plt.show()
</code></pre>
<p>Changing the backend to pc as done <a href="https://stackoverflow.com/questions/9169052/partial-coloring-of-text">here</a> does not help.
Any suggestions on how to do it?</p>
|
<python><matplotlib><latex>
|
2025-03-26 09:19:47
| 1
| 4,667
|
user7431005
|
79,535,841
| 11,829,002
|
Add a button to jump to location on the map
|
<p>I have a <code>folium.Map</code>, and I'd like to add buttons in the exported HTML file, so that when I click on them, it would change the view to the locations selected.</p>
<p>Here is the Python code to generate the HTML map, with two buttons:</p>
<pre class="lang-py prettyprint-override"><code>import folium
m = folium.Map(location=[48.8566, 2.3522], zoom_start=10)
# Add buttons to jump to Paris and London
buttons_html = """
<div style="position: fixed; top: 10px; left: 50px; z-index: 1000;">
<button onclick="jumpToCity(48.8566, 2.3522, 12)">Paris</button>
<button onclick="jumpToCity(51.5074, -0.1278, 12)">London</button>
</div>
<script>
function jumpToCity(lat, lng, zoom) {
var map = document.querySelector('.leaflet-container')._leaflet_map;
map.setView([lat, lng], zoom);
}
</script>
"""
m.get_root().html.add_child(folium.Element(buttons_html))
m.save('output.html')
</code></pre>
<p>After some digging, in the generated HTML file, I found that the object of the map is generated with a name composed of a hash that changes every time I run the code:</p>
<pre class="lang-js prettyprint-override"><code>var map_931983127ff2ad4bbeb40856ea0710c9 = L.map(
"map_931983127ff2ad4bbeb40856ea0710c9",
{
center: [48.8566, 2.3522],
crs: L.CRS.EPSG3857,
zoom: 10,
zoomControl: true,
preferCanvas: false,
}
);
</code></pre>
<p>If I manually change the command <code>map.setView(...)</code> to <code>map_931983127ff2ad4bbeb40856ea0710c9.setView(...)</code> in the HTML, the buttons work well.</p>
<p>Is there a way to set this automatically? A solution (but an ugly one) would be to include in the Python a small script after the export that would update the HTML file (using regexp or something like that).</p>
|
<python><html><folium>
|
2025-03-26 08:47:31
| 0
| 398
|
Thomas
|
79,535,789
| 149,900
|
SaltStack custom state: How to download file from "salt:" URI
|
<p>I'm trying to write a custom SaltStack state. It will be used like this:</p>
<pre class="lang-yaml prettyprint-override"><code>Example of my.state usage:
my.state:
- name: /some/path
- source: salt://path/to/dsl/file
</code></pre>
<p>This state is planned to grab a <code>file</code> from the Salt Master file server, transfer it to the minion, then execute the file using a program in the minion, analyze the output, delete the downloaded <code>file</code>, and return a proper state dict based on success/failure of the program that executes the DSL file.</p>
<p>My question:</p>
<ul>
<li>How to write the custom state so it's able to download from <code>salt://</code> URI ?</li>
</ul>
<p>Note: I do not envision the need to download the file from anywhere else but the Salt Master.</p>
<p><em>Edit:</em> In the interest of full disclosure of facts, "success/failure" is gleaned by analyzing the output of the program, not a simple exitcode analysis. Just FYI.</p>
|
<python><salt-project>
|
2025-03-26 08:23:29
| 1
| 6,951
|
pepoluan
|
79,535,226
| 28,063,240
|
How to use csv module to make a generator of lines of a csv?
|
<pre class="lang-py prettyprint-override"><code>import csv
from typing import Generator
def get_csv_lines(rows: list[list[str]]) -> Generator[str, None, None]:
"""
Generates lines for a CSV file from a list of rows.
Args:
rows (list[list[str]]): A list where each element is a list of strings representing a row
in the CSV. All rows must have the same number of items (columns)
to ensure a correctly formatted CSV.
Yields:
str: A line of the CSV.
Examples:
>>> rows = [["Name", "Age", "City"], ["Alice", "30", "New York"], ["Bob", "25", "Los Angeles"]]
>>> list(get_csv_lines(rows))
['Name,Age,City', 'Alice,30,New York', 'Bob,25,Los Angeles']
"""
???
</code></pre>
<p>How can I implement such a function?</p>
<p>I only know how to write a CSV to a string:</p>
<pre class="lang-py prettyprint-override"><code>import io
string_out = io.StringIO()
writer = csv.writer(string_out)
for row in rows:
writer.writerow(row)
print(string_out.getvalue())
</code></pre>
<p>But this means the whole CSV has to be in memory at once (in the string). I would prefer to use an API like</p>
<pre class="lang-py prettyprint-override"><code>for line in get_csv_lines(rows):
print(line)
</code></pre>
<p>That doesn't need to buffer all of the lines of the entire CSV in memory, and that might be able to take CSV writer arguments</p>
<pre class="lang-py prettyprint-override"><code>for line in get_csv_lines(rows, dialect="excel"):
print(line)
</code></pre>
|
<python><python-3.x><csv>
|
2025-03-26 01:40:55
| 2
| 404
|
Nils
|
79,535,147
| 1,613,983
|
How do I avoid CPU for loops when inputs are a function of previous outputs?
|
<p>I have a simple reinforcement model in <code>pytorch</code> that makes decisions about where it wants to be and receives its current position (e.g. from a GPS) as an input. For the sake of this example let's just assume it can just move anywhere at any point (ie. it can just specify its position). In the real example we'd either output a movement or the activation would augment the desired position to what is achievable.</p>
<pre><code>class Traveller(torch.nn.Module):
def __init__(
self,
num_inputs: int,
hidden_size: int,
num_layers: int,
dropout: float,
):
super(DeepStatefulMVO, self).__init__()
# The first LSTM layer
self.lstm_layers = torch.nn.LSTM(
input_size=num_inputs + 2,
hidden_size=hidden_size,
num_layers=num_layers,
dropout=dropout,
batch_first=True
)
self.output_layer = torch.nn.Linear(
in_features=hidden_size,
out_features=2
)
self.activation = torch.nn.Tanh()
def forward(
self,
inputs: torch.Tensor, # this is some independent data that is known ahead of time, e.g. a clock, the wind, lighting conditions, etc.
) -> torch.Tensor:
assert (inputs.shape[1] == 1), "window should always be 1 since we're not using sequences"
hidden_state: Optional[Tuple[torch.Tensor, torch.Tensor]] = None
prev_position = torch.zeros(
(1, 1, 2), device=inputs.device, dtype=inputs.dtype
)
positions_list = []
for t in range(inputs.size(0)):
current_inputs = inputs[t : t + 1]
input_data = torch.cat(
[
prev_position,
current_inputs
],
dim=2,
)
lstm_out, hidden_state = self.lstm_layers(input_data, hidden_state)
out = self.output_layer(lstm_out)
position = self.activation(out)
prev_position = position
positions_list.append(position)
positions = torch.cat(positions_list, dim=0).squeeze(dim=1)
assert isinstance(positions, torch.Tensor)
assert positions.shape == (inputs.shape[0], inputs.shape[2])
return positions
</code></pre>
<p>The output is a set of positions that are fed into a utility function which computes some loss (with a gradient).</p>
<p>This code works fine, but the issue is that the <code>for</code> loop is orchestrated from the CPU. Previously (before I had the <code>positions</code> vector being used as an input), I could just pass in an entire batch of data with a large sequence dimension, and everything would be computed in one step, but the LSTM layers wouldn't know their current position, so the model was unable to optimise the loss effectively. Moving to this approach improved training convergence drastically but reduced GPU utilisation from about 95% to 20%.</p>
<p>How can I retain this functionality where the input is a function of the previous output, without resorting to CPU-bound for-loops?</p>
|
<python><pytorch>
|
2025-03-26 00:21:43
| 0
| 23,470
|
quant
|
79,535,064
| 2,140,971
|
How to specificy custom serialization code for a specific query parameter when generating a python wrapper with openapi-generator-cli?
|
<p>In a OpenAPI v3 specification file, one query parameter expects a comma-separated list of strings:</p>
<pre class="lang-yaml prettyprint-override"><code>paths:
/content:
get:
description: Returns data for all given ids
parameters:
- description: Comma-separated list of ids
in: query
name: ids
required: true
schema:
type: string
</code></pre>
<p>The generated code looks roughly like this:</p>
<pre class="lang-py prettyprint-override"><code>@validate_call
def content_get(
self,
ids: Annotated[StrictStr]
):
_param = self._content_get_serialize(
ids=ids,
# ...
)
def _content_get_serialize(self, ids):
# ...
if object_set is not None:
_query_params.append(('ids', ids))
# ....
</code></pre>
<p>I would like to generate a wrapper that:</p>
<ul>
<li>takes an <code>Iterable[StrictStr]</code> instead of a <code>StrictStr</code></li>
<li>builds the comma-separated list under the hood, like <code>_query_params.append(('ids', ','.join(ids)))</code></li>
</ul>
<p>Ideally I would specify this <strong>during</strong> generation, and not by tweaking the generated code, so if I need to regenerate the whole wrapper from scratch I don't have to add all my tweaks again.</p>
|
<python><openapi><openapi-generator><openapi-generator-cli>
|
2025-03-25 23:21:28
| 0
| 753
|
Charles
|
79,534,956
| 11,233,365
|
Installing Rust-dependent Python packages by passing a custom `crates.io` mirror through `pip`
|
<p>I am trying to install <code>maturin</code> and other Python packages that depend on Rust backends on a firewalled PC that can only see a server that my team manages. We have previously been able to set up API endpoints that the PC can poke in order to install Python <code>pip</code> packages, but we have thus far been unable to apply the same tactic to those that depend on Rust.</p>
<p><code>pip</code> downloads the packages and wheels just fine, but during the build stage, the <code>build_rust</code> backend still tries to download the package index and crates from <code>crates.io</code>. I have created API endpoints on the server to mirror the <code>crates.io</code> index and sparse package repository (it works when I run things with <code>cargo</code> directly), but I have not been able to point the Rust-dependent packages installed by <code>pip</code> to it.</p>
<p>Using <code>maturin</code> as an example, these are the installation arguments I have tried so far:</p>
<ol>
<li>Setting <code>CARGO_HOME</code> and <code>CARGO_REGISTRIES_DEFAULT</code></li>
</ol>
<pre><code># This is in an MSYS2 environment, so Unix-like language but Windows paths
$ export CARGO_HOME='C:\Users\username\.cargo'
$ export CARGO_REGISTRIES_DEFAULT=my-mirror
$ pip install maturin --index-url http://hostname:8000/pypi --trusted-host hostname:8000
</code></pre>
<ol start="2">
<li>Passing <code>CARGO_HOME</code> and <code>CARGO_REGISTRIES_DEFAULT</code> as part of the pip install command</li>
</ol>
<pre><code>$ CARGO_HOME='C:\Users\username\.cargo' CARGO_REGISTRIES_DEFAULT=my-mirror pip install maturin --index -url ... --trusted-host ...
</code></pre>
<p>In both these cases, <code>maturin</code> is unable to see my global <code>cargo</code> config.</p>
<p>Is it possible to pass my global <code>cargo</code> config through <code>pip</code> so that it looks for crates from my mirror instead?</p>
|
<python><pip><rust-cargo><mirror><maturin>
|
2025-03-25 21:57:44
| 1
| 301
|
TheEponymousProgrammer
|
79,534,809
| 11,062,613
|
How to overload numpy.atleast_2d in Numba to include non-ndarray data types?
|
<p>Numba does not have an implementation for numpy.atleast_1d() and numpy.atleast_2d() which includes other data types than arrays.
I've attempted to overload atleast_2d, but I'm having trouble handling 2D reflected and typed lists correctly. I need help with the proper type checks for 2D reflected and typed lists to prevent errors. It's confusng to me.</p>
<p>Does anyone know the correct way to check and handle 2D lists in Numba's overload operation?</p>
<p>Here is my attempt:</p>
<pre><code>import numpy as np
from numba import njit, types, typed
from numba.extending import overload
from numba.core.errors import TypingError
@overload(np.atleast_2d)
def ovl_atleast_2d(a):
'''
Implement np.atleast_2d.
Example:
@njit
def use_atleast_2d(a):
return np.atleast_2d(a).ndim
print(f"Result for scalar: {use_atleast_2d(1.)} ndim")
print(f"Result for array 0D: {use_atleast_2d(np.array(1.))} ndim")
print(f"Result for array 1D: {use_atleast_2d(np.ones((3,)))} ndim")
print(f"Result for array 2D: {use_atleast_2d(np.ones((1,1)))} ndim")
print(f"Result for tuple 1D: {use_atleast_2d((1,2,3))} ndim")
print(f"Result for tuple 2D: {use_atleast_2d(((1,2,3), (4,5,6)))} ndim")
print(f"Result for pylist 1D: {use_atleast_2d([1,2,3])} ndim")
# print(f"Result for pylist 2D: {use_atleast_2d([[1,2,3], [4,5,6]])} ndim")
# => TypeError: cannot reflect element of reflected container: reflected list(reflected list(int64)<iv=None>)<iv=None>
print(f"Result for nblist 1D: {use_atleast_2d(typed.List([1,2,3]))} ndim")
nblist2d = typed.List([typed.List([1,2,3]), typed.List([4,5,6])])
# print(f"Result for tlist 2D: {use_atleast_2d(nblist2d)} ndim")
# => KeyError: 'Can only index numba types with slices with no start or stop, got 0.'
# Expected output:
# Result for scalar: 2 ndim
# Result for array 0D: 2 ndim
# Result for array 1D: 2 ndim
# Result for array 2D: 2 ndim
# Result for tuple 1D: 2 ndim
# Result for tuple 2D: 2 ndim
# Result for pylist 1D: 2 ndim
# Result for pylist 2D: 2 ndim (Error)
# Result for nblist 1D: 2 ndim
# Result for nblist 2D: 2 ndim (Error)
'''
if isinstance(a, types.Array):
if a.ndim == 0:
return lambda a: np.array([[a]])
elif a.ndim == 1:
return lambda a: a[np.newaxis, :]
else:
return lambda a: a
elif isinstance(a, (types.Number, types.Boolean)):
return lambda a: np.array([[a]])
elif isinstance(a, (types.Sequence, types.Tuple)):
# For a Python sequence or tuple, first convert to an array then ensure it is at least 2-D.
return lambda a: np.atleast_2d(np.array(a))
elif isinstance(a, types.containers.ListType):
# 1D-list
if isinstance(a[0].dtype, (types.Number, types.Boolean)):
target_dtype = a[0].dtype
def impl(a):
ret = np.empty((len(a), 1), dtype=target_dtype)
for i, v in enumerate(a):
ret[i, 0] = v
return ret
return impl
# 2D-list
elif isinstance(a[0].dtype, types.containers.ListType):
if isinstance(a[0][0].dtype, (types.Number, types.Boolean)):
target_dtype = a[0][0].dtype
def impl(a):
nrows = len(a)
ncols = len(a[0])
ret = np.empty((nrows, ncols), dtype=target_dtype)
for i in range(nrows):
for k in range(ncols):
ret[i, k] = a[i][k]
return ret
return impl
else:
raise TypingError("Argument can't be converted into ndarray.")
</code></pre>
<p>Here is the source code for the atleast implementations:
<a href="https://github.com/numba/numba/blob/c21aa9273ef4298392695b1f4613d29456b53e5c/numba/np/arrayobj.py#L5780" rel="nofollow noreferrer">https://github.com/numba/numba/blob/c21aa9273ef4298392695b1f4613d29456b53e5c/numba/np/arrayobj.py#L5780</a></p>
<p>Edit:
I have found a remark within the related function asarray:</p>
<pre><code># Nested lists cannot be unpacked, therefore only single lists are
# permitted and these conform to Sequence and can be unpacked along on
# the same path as Tuple.
</code></pre>
<p><a href="https://github.com/numba/numba/blob/c21aa9273ef4298392695b1f4613d29456b53e5c/numba/np/new_arraymath.py#L4278" rel="nofollow noreferrer">https://github.com/numba/numba/blob/c21aa9273ef4298392695b1f4613d29456b53e5c/numba/np/new_arraymath.py#L4278</a></p>
<p>I assume if nested lists cannot be unpacked in asarray, it's similar for atleast_2D.</p>
|
<python><numpy><numba>
|
2025-03-25 20:32:03
| 0
| 423
|
Olibarer
|
79,534,761
| 5,413,581
|
GLMNET gives different solutions for lasso-based algorithms when I expect the same solution
|
<p>There are many ways to implement an adaptive lasso model. One of them would be to:</p>
<ol>
<li>Solve an unpenalized regression model and extract the <code>betas</code></li>
<li>Define <code>w=1/abs(beta)</code></li>
<li>Use these weights in an adaptive lasso model and obtain <code>beta2</code></li>
</ol>
<p>Another approach would be to repeat step 1 and then do:</p>
<ol>
<li><code>X2 = X*beta</code> (this being column-wise product)</li>
<li>Solve a lasso model using X2 and obtain <code>beta3</code></li>
<li>Re-scale the obtained coefficients as <code>beta3 = beta3 * beta</code></li>
</ol>
<p>These two approaches are mathematically equivalent, and I have verified using python libraries like <code>asgl</code> that both approaches produce the same solution. However, if I try to compare them using <code>glmnet</code> I obtain very different results.</p>
<p>My questions are: why? how can I obtain the same solutions? I provide here the python implementation (which produces the same solution with both approaches) and the R implementation using glment (that dont)</p>
<p>Python implementation</p>
<pre><code>from sklearn.datasets import make_regression
from asgl import Regressor
import numpy as np
X, y = make_regression(n_samples=1000, n_features=10, n_informative=5, bias=10, noise=5, random_state=42)
model = Regressor(model='lm', penalization=None)
model.fit(X, y)
beta = model.coef_
X2 = X * beta
model = Regressor(model='lm', penalization='lasso', lambda1=0.1)
model.fit(X2, y)
beta2 = model.coef_ * beta
model = Regressor(model='lm', penalization='alasso', lambda1=0.1, individual_weights=1/np.abs(beta))
model.fit(X, y)
beta3 = model.coef_
# beta2 and beta3 are very similar
</code></pre>
<p>R implementation</p>
<pre><code>library(glmnet)
set.seed(42)
n <- 1000 # Number of samples
p <- 10 # Number of features
# Generate synthetic data
X <- matrix(rnorm(n * p), nrow = n)
true_coef <- rep(0, p)
true_coef[1:5] <- rnorm(5)
y <- X %*% true_coef + 10 + rnorm(n, sd = 5)
# Step 1: fit the model you want to use for your weight estimation
lm_model <- lm(y ~ X)
beta <- coef(lm_model)[-1] # Remove intercept
# Step 2: Create X2 by multiplying X with beta
X2 <- sweep(X, 2, beta, "*")
lambda1 = 0.1
# Step 3: Fit lasso regression on X2
lasso_model <- glmnet(X2, y, alpha = 1, lambda = lambda1, standardize = FALSE)
beta2 <- as.vector(coef(lasso_model, s = lambda1)[-1])
# Multiply back by beta
beta2 <- unname(beta2 * beta)
# Step 4: Fit adaptive lasso on X
weights <- 1/(abs(beta)+1e-8)
# Fit adaptive lasso using penalty.factor for the weights
adaptive_lasso_model <- glmnet(X, y, alpha = 1, lambda = lambda1, penalty.factor = weights, standardize = FALSE)
beta3 <- as.vector(coef(adaptive_lasso_model, s = lambda1)[-1])
# beta2 and beta3 are very different
</code></pre>
|
<python><r><glmnet><lasso-regression>
|
2025-03-25 20:12:30
| 0
| 769
|
Álvaro Méndez Civieta
|
79,534,562
| 6,439,229
|
PyQt 6.8 blank button bug?
|
<p><strong>Edit:</strong> This bug is fixed in PyQt 6.9</p>
<p>After upgrading from PyQt 6.7 to 6.8, I noticed a strange thing in my app:<br />
Some, not all, of the disabled <code>QPushButtons</code> were blank (white and no text).</p>
<p>It took some time to figure out the conditions for this bug.<br />
This was needed to reproduce:</p>
<ul>
<li>The OS theme must be dark</li>
<li>The buttons need to be inside a <code>QSplitter</code></li>
<li>The style must be <code>fusion</code> or <code>windowsvista</code></li>
<li>(<em>The style must be set after the initiation of the main window</em>)<br />
edit: Previously 'windows11' style must have been set.</li>
</ul>
<p>A MRE:</p>
<pre><code>from PyQt6.QtWidgets import QApplication, QWidget, QPushButton, QVBoxLayout, QSplitter, QHBoxLayout, QLineEdit
from PyQt6.QtCore import Qt
class Main(QWidget):
def __init__(self):
super().__init__()
pb1 = QPushButton('pb1')
pb2 = QPushButton('pb2')
pb2.setEnabled(False)
pb1.clicked.connect(lambda: pb2.setDisabled(pb2.isEnabled()))
random_widget = QLineEdit()
btn_widg = QWidget()
btn_lay = QVBoxLayout(btn_widg)
btn_lay.addWidget(pb1)
btn_lay.addWidget(pb2)
tot_lay = QHBoxLayout(self)
# tot_lay.addWidget(random_widget)
# tot_lay.addWidget(btn_widg)
split = QSplitter(Qt.Orientation.Horizontal)
split.addWidget(random_widget)
split.addWidget(btn_widg)
tot_lay.addWidget(split)
app = QApplication([])
# app.setStyle('windows')
window = Main()
app.setStyle('fusion')
window.show()
app.exec()
</code></pre>
<p>And this is how it looks:<br />
<a href="https://i.sstatic.net/LRuDKmEd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LRuDKmEd.png" alt="example" /></a></p>
<p>There's a simple workaround:<br />
Setting any style before initiating the main window will show the disabled buttons normally, even if 'fusion' is set after that.</p>
<p>Maybe someone can test this on a non-windows OS.<br />
Should this be reported to the Qt peoples?<br />
and if so, how does one do that?</p>
|
<python><pyqt><pyqt6>
|
2025-03-25 18:27:06
| 0
| 1,016
|
mahkitah
|
79,534,333
| 6,843,153
|
Make stramlit-aggrid height dynamic
|
<p>I have a streamlit-aggrid grid and I want the height of it to be as much as rows are in the grid, but I haven't found how to set the grid, the only documental reference I have found is <a href="https://streamlit-aggrid.readthedocs.io/en/docs/AgGrid.html" rel="nofollow noreferrer">this</a>, and it says the <code>height</code> argument is defaulted to 400 and you can set it, but there's no way to set it to be dynamic.</p>
<p>How can I do that?</p>
|
<python><ag-grid><streamlit>
|
2025-03-25 16:43:08
| 0
| 5,505
|
HuLu ViCa
|
79,534,318
| 778,533
|
dask: looping over groupby groups efficiently
|
<p>Example DataFrame:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import dask.dataframe as dd
data = {
'A': [1, 2, 1, 3, 2, 1],
'B': ['x', 'y', 'x', 'y', 'x', 'y'],
'C': [10, 20, 30, 40, 50, 60]
}
pd_df = pd.DataFrame(data)
ddf = dd.from_pandas(pd_df, npartitions=2)
</code></pre>
<p>I am working with Dask DataFrames and need to perform a groupby operation efficiently without loading everything into memory or computing multiple times. Here are two inefficient solutions I've tried:</p>
<ol>
<li>Loading everything into memory:</li>
</ol>
<pre class="lang-py prettyprint-override"><code>grouped = ddf.compute().groupby('groupby_column')
for name, group in grouped:
# Process each group
</code></pre>
<p>This approach loads the entire DataFrame into memory, which defeats the purpose of using Dask.</p>
<ol start="2">
<li>Computing twice:</li>
</ol>
<pre class="lang-py prettyprint-override"><code>for name in set(ddf['groupby_column'].unique().compute()):
group = ddf[ddf['groupby_column'].eq(name)].compute()
# Process each group
</code></pre>
<p>This approach computes the DataFrame twice, which is inefficient.</p>
<p>Question:</p>
<p>How can I efficiently perform a groupby operation on a Dask DataFrame without loading everything into memory or computing multiple times? Is there an updated best practice for this in 2025?</p>
<p>Any help or updated best practices would be greatly appreciated!</p>
|
<python><dataframe><group-by><dask><dask-dataframe>
|
2025-03-25 16:37:19
| 0
| 9,652
|
tommy.carstensen
|
79,534,138
| 5,422,354
|
How to compute the analytical leave-one-out cross-validation score of a polynomial chaos decomposition in Python?
|
<p>I have a function y = g(x1, ..., xd) where x1, ..., xd are the inputs and y is the output of the function g. I have a multidimensional multivariate sample X and the corresponding output sample Y from the function g. I have computed the polynomial chaos expansion of this function. How to estimate the Q2 leave-one-out cross-validation score using an analytical method?</p>
<p>For example, consider the following sample based on the Ishigami function. The next script computes the coefficients of the PCE using a degree 8 polynomial. How to compute the Q2 score without actually performing the leave-one-out validation?</p>
<pre class="lang-py prettyprint-override"><code>import openturns as ot
import openturns.experimental as otexp
from openturns.usecases import ishigami_function
im = ishigami_function.IshigamiModel()
sampleSize = 500
inputTrain = im.inputDistribution.getSample(sampleSize)
outputTrain = im.model(inputTrain)
multivariateBasis = ot.OrthogonalProductPolynomialFactory([im.X1, im.X2, im.X3])
selectionAlgorithm = ot.PenalizedLeastSquaresAlgorithmFactory()
projectionStrategy = ot.LeastSquaresStrategy(inputTrain, outputTrain, selectionAlgorithm)
totalDegree = 8
enumerateFunction = multivariateBasis.getEnumerateFunction()
basisSize = enumerateFunction.getBasisSizeFromTotalDegree(totalDegree)
adaptiveStrategy = ot.FixedStrategy(multivariateBasis, basisSize)
chaosalgo = ot.FunctionalChaosAlgorithm(
inputTrain, outputTrain, im.inputDistribution, adaptiveStrategy, projectionStrategy
)
chaosalgo.run()
result = chaosalgo.getResult()
</code></pre>
|
<python><openturns>
|
2025-03-25 15:28:13
| 1
| 1,161
|
Michael Baudin
|
79,534,021
| 1,588,847
|
Modify function args with mypy plugin (for lazy partial functions)
|
<p>I have a python decorator. All functions it decorates must accept a <code>magic</code> argument.
If the <code>magic</code> argument is supplied, the decorated function is evaluated immediately and returned. If the <code>magic</code> argument is not supplied, a partial function is returned instead, allowing for lazy evaluation later.</p>
<p>I am writing a mypy plugin to type the decorated functions. I need to do two things:</p>
<ol>
<li>Change the decorated function's return type to <code>PartialFunction</code> if the magic argument is not supplied.</li>
<li>Prevent "Missing positional argument" for <code>magic</code> when a PartialFunction is returned</li>
</ol>
<p>I can do 1 easily enough with a <code>get_function_hook</code> callback.</p>
<p>However 2 is more difficult. If I use a signature callback (via <code>get_function_signature_hook</code>) it changes the function signature for <em>all</em> invocations (not just the ones without the magic argument). I've tried changing args in a <code>get_function_hook</code> callback (with <code>.copy_modified()</code> and creating new <code>Instance</code> objects) but without success.</p>
<p>Is this possible? And if so, how should I approach this? Examples or links to documentation very welcome!</p>
<p>Below a toy example:</p>
<p><code>mypy.ini</code></p>
<pre><code>[mypy]
plugins = partial_plugin
</code></pre>
<p>This the mypy plugin:
<code>partial_plugin.py</code></p>
<pre><code>from collections.abc import Callable
from mypy.plugin import FunctionContext, Plugin
from mypy.types import Type
def _partial_function_hook_callback(ctx: FunctionContext) -> Type:
if "magic" in ctx.callee_arg_names:
magic_index = ctx.callee_arg_names.index("magic")
if not ctx.args[magic_index]: # If magic not supplied, return PartialFunction type
return ctx.api.named_type("my_partial.PartialFunction")
return ctx.default_return_type
class PartialFunctionPlugin(Plugin):
def get_function_hook(self, fullname: str) -> Callable[[FunctionContext], Type] | None:
"""Return a hook for the given function name if it matches 'my_partial'."""
if fullname.startswith("my_partial."):
return _partial_function_hook_callback
return None
def plugin(version: str) -> type[PartialFunctionPlugin]:
"""Entry point for the plugin."""
return PartialFunctionPlugin
</code></pre>
<p>The decorator and examples to type-check :
<code>my_partial.py</code></p>
<pre><code>import functools
from collections.abc import Callable
from typing import Any, ParamSpec, TypeVar, reveal_type
T = TypeVar("T")
P = ParamSpec("P")
class PartialFunction:
def __init__(self, func: Callable[P, T], *args: Any, **kwargs: Any) -> None:
self.func = func
self.args = args
self.kwargs = kwargs
def __call__(self, *args: Any, **kwargs: Any) -> Any:
# Combine the args and kwargs with the stored ones
combined_args = self.args + args
combined_kwargs = {**self.kwargs, **kwargs}
return self.func(*combined_args, **combined_kwargs)
def partial_decorator(func: Callable[P, T]) -> Callable[P, T]:
def decorator_inner(*args: Any, **kwargs: Any) -> Any:
if "magic" in kwargs:
# If the magic argument is passed, evaluate immediately
return func(*args, **kwargs)
# Otherwise, we want to return a partial function
return PartialFunction(func, *args, **kwargs)
return functools.update_wrapper(decorator_inner, func)
#####################
@partial_decorator
def concat(x: str, magic: str | None) -> str:
return f"{x} {magic}"
foo = concat("hello", magic="world") # gives "hello world"
reveal_type(foo) # Should be type 'str'
print(foo)
foo_partial = concat("hello") # `magic` not supplied, returns a partial function
reveal_type(foo_partial) # Should be type 'PartialFunction'
print(foo_partial(magic="everyone")) # gives "hello everyone"
</code></pre>
<p>With the above three files in the same directory I can run it through mypy with <code>python -m mypy my_partial.py</code></p>
<p>Doing that gives this result:</p>
<pre><code>my_partial.py:42: note: Revealed type is "builtins.str"
my_partial.py:45: error: Missing positional argument "magic" in call to "concat" [call-arg]
my_partial.py:46: note: Revealed type is "my_partial.PartialFunction"
</code></pre>
<p>The two revealed types are good. It is the "error" I want to fix.
Thanks!</p>
|
<python><mypy>
|
2025-03-25 14:42:59
| 0
| 2,124
|
Jetpac
|
79,533,927
| 539,490
|
Elegantly handling python type hint for Shapely polygon.exterior.coords
|
<p>I'm using Python 3.12 and I would like to return <code>polygon.exterior.coords</code> as a type of <code>list[tuple[float, float]]</code> so that I can correctly enforce typing later in the program. I'm wondering if there is a more elegant solution:</p>
<pre class="lang-py prettyprint-override"><code>from shapely import Polygon
polygon = Polygon([(0, 0), (1, 0), (1, 1), (0, 1)])
data = polygon.exterior.coords
# This raises a run time error: IndexError: tuple index out of range
data[0][2]
</code></pre>
<p>I would like to do:</p>
<pre class="lang-py prettyprint-override"><code>data: list[tuple[float, float]] = list(polygon.exterior.coords)
# This now correctly shows a type hint error: "Index 2 is out of range for type tuple[float, float]"
data[0][2]
</code></pre>
<p>However the line <code>data: list[tuple[float, float]] = list(polygon.exterior.coords)</code> then shows a type hint error of:</p>
<pre><code>Type "list[tuple[float, ...]]" is not assignable to declared type "list[tuple[float, float]]"
"list[tuple[float, ...]]" is not assignable to "list[tuple[float, float]]"
Type parameter "_T@list" is invariant, but "tuple[float, ...]" is not the same as "tuple[float, float]"
Consider switching from "list" to "Sequence" which is covariant
</code></pre>
<p>I can solve this by using this code:</p>
<pre class="lang-py prettyprint-override"><code>data = type_safe_list(polygon.exterior.coords)
def type_safe_list(geometry: Polygon) -> list[tuple[float, float]]:
coords: list[tuple[float, float]] = []
for point in clipped_geometry.exterior.coords:
x, y = point
coords.append((x, y))
return coords
</code></pre>
<p>But perhaps there's a more elegant solution that doesn't incur a run time penalty? (not that performance is currently an issue for this part of the code). Also I'm actually trying to handle other types like MultiPolygon so the actual implementation I currently have is:</p>
<pre class="lang-py prettyprint-override"><code>def type_safe_list(geometry: Polygon | MultiPolygon) -> list[tuple[float, float]] | list[list[tuple[float, float]]]:
if isinstance(geometry, Polygon):
coords: list[tuple[float, float]] = []
for point in geometry.exterior.coords:
x, y = point
coords.append((x, y))
return coords
elif isinstance(geometry, MultiPolygon):
multi_coords: list[list[tuple[float, float]]] = []
for poly in geometry.geoms:
coords: list[tuple[float, float]] = []
for point in poly.exterior.coords:
x, y = point
coords.append((x, y))
multi_coords.append(coords)
return multi_coords
else:
raise NotImplementedError(f"Unhandled type: {type(geometry)}")
return []
</code></pre>
|
<python><python-typing><shapely>
|
2025-03-25 14:07:02
| 2
| 29,009
|
AJP
|
79,533,867
| 9,112,151
|
How to display both application/json and application/octet-stream content types in Swagger UI autodocs in FastAPI?
|
<p>I have an endpoint that can return JSON or a file (xlsx):</p>
<pre class="lang-py prettyprint-override"><code>class ReportItemSerializer(BaseModel):
id: int
werks: Annotated[WerkSerializerWithNetwork, Field(validation_alias="werk")]
plu: Annotated[str, Field(description="Название PLU")]
start_date: Annotated[date, None, Field(description="Дата начала периода в формате")]
end_date: Annotated[date | None, Field(description="Дата окончания периода")]
anfmenge: Annotated[int | None, Field(des cription="Запас на начало периода")]
endmenge: Annotated[int | None, Field(description="Остаток по PLU в разрезе поставщика на конец периода")]
soll: Annotated[int, None, Field(description="Поступило за период")]
haben: Annotated[
int | None, Field(description="Количество по PLU, которое было возвращено поставщику за указанный период")]
model_config = ConfigDict(from_attributes=True)
class PaginatedReportItemsSerializer(BaseModel):
count: int
results: list[ReportItemSerializer]
@router.get(
"/orders/report",
responses={
200: {
"description": "Return report",
"content": {
"application/json": {
"schema": PaginatedReportItemsSerializer.schema(ref_template="#/components/schemas/{model}")
},
"application/octet-stream": {}
},
},
}
)
async def get_report():
pass
</code></pre>
<p>With this I have a problem with swagger. With configuration above there is problem:</p>
<p><a href="https://i.sstatic.net/MdYlgCpB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MdYlgCpB.png" alt="enter image description here" /></a></p>
<p>Nested <code>ReportItemSerializer</code> is not being displayed (<code>string</code> is instead displayed).</p>
<p>How can I fix it?</p>
|
<python><swagger><fastapi><openapi><swagger-ui>
|
2025-03-25 13:48:18
| 1
| 1,019
|
Альберт Александров
|
79,533,856
| 8,240,910
|
FastAPI StreamingResponse not sending message without putting sleep
|
<p>I am creating simple proxy for streaming, It works but I need to put sleep, without sleep it only sending last message, I tried to sleep 0, and other approaches but not working.</p>
<pre class="lang-py prettyprint-override"><code>import logging
import yaml
import httpx
import json
from fastapi import FastAPI, Request, Response, HTTPException
from fastapi.responses import StreamingResponse
import asyncio
import time
logger = logging.getLogger("proxy")
handler = logging.StreamHandler()
logger.addHandler(handler)
class Upstream:
def __init__(self, url, cls):
self.url = url
self.cls = cls
class Config:
def __init__(self, log_level, upstreams: list = []):
self.log_level = log_level
self.upstream = []
if log_level == "development":
logger.setLevel(logging.DEBUG)
else:
logger.setLevel(logging.INFO)
for upstream in upstreams:
self.upstream.append(Upstream(upstream.get("url"), upstream.get("cls")))
@staticmethod
def load_config(config_file):
with open(config_file, "r") as f:
yml = yaml.safe_load(f)
return Config(yml.get("log_level"), yml.get("upstreams"))
def pick_random_upstream(self) -> Upstream:
return self.upstream[0]
cfg = Config.load_config("config.yaml")
app = FastAPI()
@app.api_route("/{full_path:path}", methods=["GET", "POST", "PUT", "DELETE", "PATCH"])
async def proxy(request: Request, full_path: str):
body = await request.body()
try:
req_body = json.loads(body)
except json.JSONDecodeError:
raise HTTPException(status_code=400, detail="invalid request body")
model = req_body.get("model")
if not model:
raise HTTPException(status_code=400, detail="model is required")
logger.debug(f"model: {model} - full_path: {full_path} - body: {req_body}")
upstream = cfg.pick_random_upstream()
upstream_url = upstream.url
async with httpx.AsyncClient() as client:
# Use streaming=True to enable line-by-line reading
upstream_response = await client.request(
request.method,
f"{upstream_url}/{full_path}",
content=body,
headers=request.headers,
extensions={"trace_request": True, "trace_response": True}
)
# More direct streaming approach
async def direct_stream_generator():
try:
async for line in upstream_response.aiter_lines():
if line.strip():
yield f"data: {line}\n\n"
await asyncio.sleep(0.01)
except Exception as e:
print(f"Streaming error: {e}")
yield f"data: {{\"error\": \"{str(e)}\"}}\n\n"
finally:
# Explicitly close the response
await upstream_response.aclose()
# Check for streaming content type
content_type = upstream_response.headers.get("Content-Type", "")
if "text/event-stream" in content_type or "stream" in content_type.lower():
return StreamingResponse(
direct_stream_generator(),
media_type="text/event-stream",
headers=upstream_response.headers
)
else:
# Fallback to full response if not a stream
response_data = await upstream_response.aread()
return Response(
content=response_data,
status_code=upstream_response.status_code,
headers=upstream_response.headers
)
if __name__ == "__main__":
import uvicorn
uvicorn.run(app, host="0.0.0.0", port=8000)
</code></pre>
<p>In above code I am basically calling the model with streaming method, receiving data and again sending back to client. if i remove sleep it only sends last message, but if i add small delay it works well. I think flushing issue.</p>
<p>I want to remove sleep, If anybody help, thanks.</p>
|
<python><streaming>
|
2025-03-25 13:42:22
| 0
| 712
|
Bhautik Chudasama
|
79,533,301
| 9,622,249
|
Accessing an element inside an iframe using nodriver
|
<p>I have recently switched from selenium to nodriver for speed and stealth reasons. I am having trouble accessing elements inside an iframe even though material on this site and elsewhere says that 'Using nodriver, you don't need to switch to iframe to interact with it. nodriver's search methods automatically switch to iframe if the searched text/element is not found.'</p>
<p>My experience is quite different. Despite researching every thing I can find and many attempts, I cannot seem to access an embedded element. In the code below I gain access to a website and advance to the page with the iframe. I then successfully access an element on the page (demo = await page.find...), but outside the iframe (to confirm that the page is correct). The attempt to find the element (address_line_1 = await page.find...) then fails and throws off a 'Time ran out while waiting for text' error.</p>
<pre><code>async def b_main_line_processing():
driver = await uc.start() # maximize=True) # Start a new Chrome instance
await driver.main_tab.maximize()
page = await driver.get(passed_website, new_tab=False, new_window=False) # Open website
# Enter Log In Id & Password
field_to_find = await page.find('//*[@id="p_lt_PageContent_Login_LoginControl_ctl00_Login1_UserName"]')
await field_to_find.send_keys(name)
password_entry = await page.find('//*[@id="p_lt_PageContent_Login_LoginControl_ctl00_Login1_Password"]')
await password_entry.send_keys(password)
# Log In Button Press
button_to_press = await page.find('//*[@id="p_lt_PageContent_Login_LoginControl_ctl00_Login1_LoginButton"]')
await button_to_press.click()
# Advance to iframe page - Open up drop down and make selection
option = await page.find('//*[@id="p_lt_Header_MyProfilePages_lblMemberName"]')
await option.click()
profile = await page.find('// *[ @ id = "p_lt_Header_MyProfilePages_lnkEditProfile"]')
await profile.click()
# Arrived on iframe page; find option above iframe to prove we are on the right page
demo = await page.find("/html/body/form/div[4]/header/div/div[2]/nav/ul/li[8]/a")
print("button found")
# That worked so look for element embedded in the iframe
try:
sleep(2)
address_line_1 = await page.find\
("/html/body/form/div[4]/div[2]/div/div/div/div[2]/ng-form/div[1]/div[5]/div/div[1]")
except Exception as e:
print("Iframe error:", e)
driver.stop()
a_main_processor()
</code></pre>
<p>Relevant portion of html:</p>
<pre><code>> <iframe id="module" class="moduleIframe" src="/CMSModules/CHO/Roster/Roster.aspx#/edit/2137597" frameborder="0" scrolling="no" style="height: 5458px;"></iframe>Blockquoteenter code here
</code></pre>
<p>Any ideas greatly appreciated.</p>
|
<python><web-scraping><iframe><nodriver>
|
2025-03-25 10:03:30
| 0
| 352
|
Stephen Smith
|
79,533,209
| 5,547,553
|
Simplest way to convert aggregated data to visualize in polars
|
<p>Suppose I have aggregated the mean and the median of some value over 3 months, like:</p>
<pre><code>df = (data.group_by('month_code').agg(pl.col('value').mean().alias('avg'),
pd.col('value').median().alias('med')
)
.sort('month_code')
.collect()
)
</code></pre>
<p>Resulting in something like:</p>
<pre><code>df = pd.DataFrame({'month': ['M202412','M202501','M202502'],
'avg': [0.037824, 0.03616, 0.038919],
'med': [0.01381, 0.013028, 0.014843]
})
</code></pre>
<p>And I'd like to visualize it, so should convert to the format:</p>
<pre><code>df_ = pd.DataFrame({'month': ['M202412','M202501','M202502']*2,
'type': ['avg','avg','avg','med','med','med'],
'value': [0.037824, 0.03616, 0.038919, 0.01381, 0.013028, 0.014843],
})
</code></pre>
<p>Which is then easy to visualize:</p>
<pre><code>df_.plot.line(x='month',y='value',color='type').properties(width=400, height=350, title='avg and med')
</code></pre>
<p>What is the simplest way to convert df to df_ above?</p>
|
<python><dataframe><python-polars><altair><unpivot>
|
2025-03-25 09:28:14
| 2
| 1,174
|
lmocsi
|
79,533,113
| 3,813,371
|
Apps aren't loaded yet
|
<p>I'm using Visual Studio 2022 as the IDE. It has 2 projects:</p>
<ol>
<li>CLI - A Python Application</li>
<li>Common - A Blank Django Project</li>
</ol>
<p>The classes are created in Common > app > models.py.<br />
<code>app</code> is a <code>Django app</code></p>
<p><a href="https://i.sstatic.net/oTOMgucA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oTOMgucA.png" alt="enter image description here" /></a></p>
<p><code>CLI</code> is the startup project, and <code>cli.py</code> is its startup file.<br />
<code>Common</code> project is referenced in <code>CLI</code>.</p>
<p><a href="https://i.sstatic.net/9nXab11K.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9nXab11K.png" alt="enter image description here" /></a></p>
<p>When I Start the CLI app in debug mode, there is an error for the <code>Question</code> class.</p>
<p><a href="https://i.sstatic.net/YFDlNZGx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YFDlNZGx.png" alt="enter image description here" /></a></p>
<p><strong>Call Stack:</strong></p>
<pre><code> Message=Apps aren't loaded yet.
Source=D:\Projects\Internal\LLRT\src\Common\app\models.py
StackTrace: File
"D:\Projects\Internal\LLRT\src\Common\app\models.py", line 4, in
<module> (Current frame)
q_id = models.IntegerField()
...<7 lines>...
correct_option_num = models.IntegerField() File "D:\Projects\Internal\LLRT\src\CLI\cli.py", line 4, in <module>
> from app.models import Option, Question, Answered_Question django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet.
</code></pre>
|
<python><django><visual-studio><django-models><visual-studio-2022>
|
2025-03-25 08:49:39
| 2
| 2,345
|
sukesh
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.