| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
79,647,636
| 2,307,570
|
How to get formula of matrix product from formulas of matrices?
|
<p>I have formulas defining matrices.<br>
The result I want is the formula defining their matrix product.</p>
<p>At no point do I want actual matrices. Those shown below are just for illustration.</p>
<p>The examples I use are Pascal's triangle <strong>A</strong> and <a href="https://oeis.org/A038207" rel="nofollow noreferrer">its matrix product with itself</a> <strong>B</strong>.<br>
(The element-wise product <strong>C</strong> is also shown, but not what I want.)</p>
<p>I want to calculate <code>sym_b</code> from <code>sym_a</code>:<br>
<code>sym_b = some_magic(sym_a, sym_a)</code></p>
<p>In the code below I have written <code>sym_b</code> myself.</p>
<pre class="lang-py prettyprint-override"><code>import sympy as sp
def formula_to_matrix(formula, size):
    matrix = sp.zeros(size, size)
    for _n in range(size):
        for _k in range(_n + 1):
            matrix[_n, _k] = formula.subs(n, _n).subs(k, _k).doit()
    return matrix

n = sp.Symbol('n', integer=True, nonnegative=True)
k = sp.Symbol('k', integer=True, nonnegative=True)
i = sp.Symbol('i', integer=True, nonnegative=True)
# initial matrix A #########################################################
sym_a = sp.binomial(n, k)
mat_a = sp.Matrix([
[1, 0, 0, 0, 0],
[1, 1, 0, 0, 0],
[1, 2, 1, 0, 0],
[1, 3, 3, 1, 0],
[1, 4, 6, 4, 1]
])
assert formula_to_matrix(sym_a, 5) == mat_a
# matrix product B (correct) ###############################################
sym_b = sp.Sum(
sp.binomial(n, i) * sp.binomial(i, k),
(i, 0, n)
)
mat_b = sp.Matrix([
[ 1, 0, 0, 0, 0],
[ 2, 1, 0, 0, 0],
[ 4, 4, 1, 0, 0],
[ 8, 12, 6, 1, 0],
[16, 32, 24, 8, 1]
])
assert formula_to_matrix(sym_b, 5) == mat_b
assert mat_b == mat_a * mat_a # For matrices the * is matrix multiplication.
# element-wise product C (wrong) ###########################################
sym_c = sym_a * sym_a # For formulas the * is element-wise multiplication.
mat_c = sp.Matrix([
[1, 0, 0, 0, 0],
[1, 1, 0, 0, 0],
[1, 4, 1, 0, 0],
[1, 9, 9, 1, 0],
[1, 16, 36, 16, 1]
])
assert formula_to_matrix(sym_c, 5) == mat_c
</code></pre>
|
<python><sympy><matrix-multiplication><symbolic-math>
|
2025-06-01 14:35:33
| 2
| 1,209
|
Watchduck
|
79,647,627
| 2,718,067
|
grid_search.fit() reports an exception with the RAPIDS API
|
<p>I get exceptions from the code below.
My development environment is: Windows 10, WSL2, RAPIDS 24.02.
Please help me check it; I have been puzzled for a long time.</p>
<blockquote>
<p>Implicit conversion to a host NumPy array via <strong>array</strong> is not allowed, To explicitly construct a GPU matrix, consider using .to_cupy()
To explicitly construct a host matrix, consider using .to_numpy().</p>
</blockquote>
<pre><code>import cudf
import cupy as cp
import numpy as np
from cuml.datasets import make_classification
from cuml.svm import SVC
from cuml.preprocessing import StandardScaler
from cuml.model_selection import GridSearchCV
# Set seeds
cp.random.seed(42)
np.random.seed(42)
# Generate synthetic classification dataset (CuPy format)
X_cupy, y_cupy = make_classification(n_samples=500, n_features=10,
n_informative=5, n_classes=2,
random_state=42)
# Convert to cuDF directly without using .asnumpy()
X_cudf = cudf.DataFrame({f'feat_{i}': X_cupy[:, i] for i in range(X_cupy.shape[1])})
y_cudf = cudf.Series(y_cupy)
# Use cuML StandardScaler (keeps everything on GPU)
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X_cudf) # stays cuDF
# Define cuML SVC
svc = SVC(class_weight='balanced')
# Define parameter grid
param_grid = {
'C': [0.1, 1.0],
'gamma': ['scale', 0.1],
'kernel': ['rbf']
}
# cuML GridSearchCV — must use cuDF input only
grid_search = GridSearchCV(
estimator=svc,
param_grid=param_grid,
cv=3,
scoring='accuracy',
verbose=2
)
# Fit the grid search using cuDF inputs only
grid_search.fit(X_scaled, y_cudf)
# Print best result
print("✅ Best Parameters:", grid_search.best_params_)
print("✅ Best Score:", grid_search.best_score_)
</code></pre>
|
<python><rapids>
|
2025-06-01 14:29:22
| 0
| 1,663
|
user2718067
|
79,647,350
| 3,170,530
|
Looking for an efficient JAX function to reconstruct an image from patches
|
<p>I have a set of images in (c, h, w) jax arrays. These arrays have been converted to (patch_index, patch_dim) arrays where patch_dim == c * h * w.</p>
<p>I am trying to reconstruct the original images from the patches. Here is vanilla python code that works:</p>
<pre class="lang-py prettyprint-override"><code>kernel = jnp.ones((PATCH_DIM, IMG_CHANNELS, PATCH_HEIGHT, PATCH_WIDTH), dtype=jnp.float32)
def fwd(x):
    xcv = lax.conv_general_dilated_patches(x, (PATCH_HEIGHT, PATCH_WIDTH), (PATCH_HEIGHT, PATCH_WIDTH), padding='VALID')
    # return channels last
    return jnp.transpose(xcv, [0,2,3,1])

patches = fwd(bfrc)
patch_reshaped_pn_c_h_w = patch_reshaped_ph_pw_c_h_w = jnp.reshape(patches, (V_PATCHES, H_PATCHES, IMG_CHANNELS, PATCH_HEIGHT, PATCH_WIDTH))
# V_PATCHES == IMG_HEIGHT // PATCH_HEIGHT
# H_PATCHES == IMG_WIDTH // PATCH_WIDTH

reconstructed = np.zeros(EXPECTED_IMG_SHAPE)
for vpatch in range(0, patch_reshaped_ph_pw_c_h_w.shape[0]):
    for hpatch in range(0, patch_reshaped_ph_pw_c_h_w.shape[1]):
        for ch in range(0, patch_reshaped_ph_pw_c_h_w.shape[2]):
            for prow in range(0, patch_reshaped_ph_pw_c_h_w.shape[3]):
                for pcol in range(0, patch_reshaped_ph_pw_c_h_w.shape[4]):
                    row = vpatch * PATCH_HEIGHT + prow
                    col = hpatch * PATCH_WIDTH + pcol
                    reconstructed[0, ch, row, col] = patch_reshaped_ph_pw_c_h_w[vpatch, hpatch, ch, prow, pcol]
# This assert passes
assert jnp.max(jnp.abs(reconstructed - bfrc[0])) == 0
</code></pre>
<p>Of course this vanilla python code is very inefficient. How can I convert the for loops into efficient JAX code?</p>
|
<python><machine-learning><deep-learning><computer-vision><jax>
|
2025-06-01 08:49:34
| 1
| 448
|
user3170530
|
79,647,470
| 3,904,031
|
Is there a way to turn off Python's "Did you mean...?" guessing feature appended to error messages?
|
<p>Since I upgraded my Python (to 3.12.2), error messages have become more difficult for me to read.</p>
<p>I understand that this is a popular feature for many people, but for me it is not. It gives me more text to read through, and it causes me to lose my (coding) train of thought because it makes me shift to conversational mode. Once I have identified the part that's talking to me as if the error message were a person, I can then block it out mentally and visually, and only then find the actual error message.</p>
<p>Is there a way to turn off Python's "Did you mean...?" guessing feature appended to error messages?</p>
<p>In this case the guess is particularly "lame": <code>np</code> or <code>numpy</code> is a library I'd imported, so it would not be an argument of <code>range()</code>, whereas <code>nx</code> (defined just two lines earlier) would have been an obvious (and correct) guess.</p>
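<p>(For context, a minimal sketch of the kind of code that triggers this, with names matching the description above: <code>np</code> is an imported library, <code>nx</code> is defined just before the loop, and <code>n</code> is the typo:)</p>
<pre><code>import numpy as np

nx = 10
for i in range(n):   # typo: should have been nx
    pass
</code></pre>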
<pre><code> for i in range(n):
^
NameError: name 'n' is not defined. Did you mean: 'np'?
</code></pre>
|
<python><python-3.x><runtime-error>
|
2025-06-01 08:36:59
| 1
| 3,835
|
uhoh
|
79,647,185
| 15,072,863
|
Llama_cookbook: why are labels not shifted for CausalLM?
|
<p>I'm studying the <a href="https://github.com/meta-llama/llama-cookbook" rel="nofollow noreferrer">llama_cookbook</a> repo, in particular their <a href="https://github.com/meta-llama/llama-cookbook/blob/main/getting-started/finetuning/quickstart_peft_finetuning.ipynb" rel="nofollow noreferrer">finetuning example</a>.
This example uses <code>LlamaForCausalLM</code> model and <code>samsum_dataset</code> (input: dialog, output: summary). Now, looking at <a href="https://github.com/meta-llama/llama-cookbook/blob/main/src/llama_cookbook/datasets/samsum_dataset.py#L38" rel="nofollow noreferrer">how they process the dataset</a>, if we look at the "labels" part:</p>
<ul>
<li>For prompt tokens, every label is <code>-100</code></li>
<li>For summary tokens, we have <code>labels[i] = input_ids[i]</code></li>
</ul>
<p>Code:</p>
<pre><code>prompt = tokenizer.encode(tokenizer.bos_token + sample["prompt"], add_special_tokens=False)
summary = tokenizer.encode(sample["summary"] + tokenizer.eos_token, add_special_tokens=False)
sample = {
"input_ids": prompt + summary,
"attention_mask" : [1] * (len(prompt) + len(summary)),
"labels": [-100] * len(prompt) + summary,
}
</code></pre>
<p>I also see this when printing the actual samples with the Dataloader they create (Note: if you want to reproduce the behavior, you'll have to change the path to the dataset to <code>knkarthick/samsum</code> in the package internals; or maybe use a different dataset):</p>
<pre><code>batch = next(iter(train_dataloader))
print(batch['input_ids'][0][35:40])
print(batch['labels'][0][35:40])
Output:
tensor([19791, 512, 32, 36645, 41778])
tensor([ -100, -100, 32, 36645, 41778])
</code></pre>
<p>Why is the unshifted labeling of the summary part correct? I thought that for <code>CausalLM</code> we must have <code>labels[i] = input_ids[i + 1]</code> for every label we want to predict.</p>
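<p>(To make my expectation concrete, a tiny sketch of the shift I thought was required; the token ids are made up:)</p>
<pre><code>input_ids       = [100, 101, 102, 103]
# what I expected as labels (shifted left by one, last position masked):
expected_labels = [101, 102, 103, -100]
# what the cookbook dataset actually builds (for the summary tokens):
actual_labels   = [100, 101, 102, 103]
</code></pre>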
|
<python><large-language-model><llama><attention-model><fine-tuning>
|
2025-06-01 04:29:02
| 1
| 340
|
Dmitry
|
79,647,131
| 219,153
|
How to split NumPy array with dimensionality reduction?
|
<p>This script:</p>
<pre><code>import numpy as np
a = np.arange(8).reshape(2, 2, 2)
b = np.split(a, 2)
print(b[0].shape)
</code></pre>
<p>produces:</p>
<pre><code>(1, 2, 2)
</code></pre>
<p>I would like to split array <code>a</code> into a list of constituent subarrays with shape <code>(2, 2)</code>, reducing their dimension in the process. Instead, I'm getting a list of subarrays of <code>(1, 2, 2)</code> shape. Is there a way to get what I want without removing the extra dimension in an additional step?</p>
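<p>(To make the desired result concrete, this is the output I'm after, written out by hand:)</p>
<pre><code>b = [a[0], a[1]]   # two arrays, each of shape (2, 2)
print(b[0].shape)  # (2, 2)
</code></pre>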
|
<python><arrays><numpy>
|
2025-06-01 02:42:10
| 2
| 8,585
|
Paul Jurczak
|
79,647,006
| 13,706,389
|
Can't close psycopg ConnectionPool
|
<p>I tried <a href="https://www.psycopg.org/psycopg3/docs/advanced/pool.html#basic-connection-pool-usage" rel="nofollow noreferrer">this example</a> from the psycopg documentation:</p>
<pre class="lang-py prettyprint-override"><code>from psycopg import conninfo
from psycopg_pool import ConnectionPool
from config import DB_HOST, DB_NAME, DB_PASSWORD, DB_PORT, DB_USER
with ConnectionPool(
    conninfo=conninfo.make_conninfo(
        dbname=DB_NAME,
        user=DB_USER,
        password=DB_PASSWORD,
        host=DB_HOST,
        port=DB_PORT,
    )
) as pool:
    with pool.connection() as conn:
        conn.execute("SELECT 1")
        with conn.cursor() as cur:
            cur.execute("SELECT 1")
<p>This code executes immediately without any errors but the script keeps running for a while and prints the following warnings:</p>
<pre><code>couldn't stop thread 'pool-1-worker-0' within 5.0 seconds
hint: you can try to call 'close()' explicitly or to use the pool as context manager
couldn't stop thread 'pool-1-worker-1' within 5.0 seconds
hint: you can try to call 'close()' explicitly or to use the pool as context manager
couldn't stop thread 'pool-1-worker-2' within 5.0 seconds
hint: you can try to call 'close()' explicitly or to use the pool as context manager
couldn't stop thread 'pool-1-scheduler' within 5.0 seconds
hint: you can try to call 'close()' explicitly or to use the pool as context manager
</code></pre>
<p>It finally finishes after showing this warning for the 4th time. I am using the pool as a context manager; I also tried adding <code>pool.close()</code> at the end, but with no different outcome. What am I doing wrong?</p>
<p>I'm running this script in a normal <code>.py</code> script:</p>
<ul>
<li>python 3.13.3</li>
<li>psycopg 3.2.9</li>
<li>psycopg-pool 3.2.6</li>
</ul>
<p>The database is running in a local Docker container (postgres:latest, so Postgres 17 I believe).</p>
<p>The log statements for the queries above look like this:</p>
<pre><code>2025-06-02 16:26:57.780 UTC [86] LOG: statement: BEGIN
2025-06-02 16:26:57.780 UTC [86] LOG: statement: SELECT 1
2025-06-02 16:26:57.781 UTC [86] LOG: statement: COMMIT
2025-06-02 16:26:57.782 UTC [86] LOG: statement: BEGIN
2025-06-02 16:26:57.782 UTC [86] LOG: statement: SELECT EXISTS (SELECT 1 FROM pg_type WHERE typname = 'event_car');
2025-06-02 16:26:57.783 UTC [86] LOG: statement: COMMIT
2025-06-02 16:26:57.783 UTC [86] LOG: statement: BEGIN
2025-06-02 16:26:57.784 UTC [86] LOG: statement: SELECT signal_name, signal_id FROM Signal
2025-06-02 16:26:57.785 UTC [86] LOG: statement: COMMIT
2025-06-02 16:26:57.799 UTC [88] LOG: statement: BEGIN
2025-06-02 16:26:57.799 UTC [88] LOG: statement: SELECT 1
2025-06-02 16:26:57.800 UTC [88] LOG: statement: SELECT 1
2025-06-02 16:26:57.800 UTC [88] LOG: statement: COMMIT
</code></pre>
|
<python><psycopg2><psycopg3>
|
2025-05-31 21:22:13
| 0
| 684
|
debsim
|
79,646,981
| 12,415,855
|
Opening site only possible in non-headless mode?
|
<p>I try to run the code below in headless mode.
In non-headless mode everything works fine.</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.action_chains import ActionChains
options = Options()
# options.add_argument('--headless=new')
options.add_argument("--window-size=1920x1080")
options.add_argument("start-maximized")
options.add_argument('--no-sandbox')
options.add_argument('--disable-gpu')
options.add_argument('--log-level=3')
options.add_experimental_option("prefs", {"profile.default_content_setting_values.notifications": 1})
options.add_experimental_option("excludeSwitches", ["enable-automation"])
options.add_experimental_option('excludeSwitches', ['enable-logging'])
options.add_experimental_option('useAutomationExtension', False)
options.add_argument('--disable-blink-features=AutomationControlled')
srv=Service()
driver = webdriver.Chrome (service=srv, options=options)
waitWD = WebDriverWait (driver, 5)
actions = ActionChains(driver)
driver.get ("https://www.bing.com/maps")
waitWD.until(EC.element_to_be_clickable((By.XPATH, '//div[@id="bnp_btn_accept"]//a'))).click()
waitWD.until(EC.presence_of_element_located((By.XPATH,'//input[@id="maps_sb"]'))).send_keys(f"hairdresser")
actions = actions.send_keys(Keys.ENTER)
actions.perform()
input("Press!")
</code></pre>
<p>But when running the code in headless mode I get this error:</p>
<pre><code>(selenium) C:\DEV\Fiverr2025\TEMPLATES>python test.py
DevTools listening on ws://127.0.0.1:55519/devtools/browser/e278bbb3-064d-4f9c-a76d-aa92833450cd
[39712:27412:0531/224848.940:ERROR:gles2_cmd_decoder_passthrough.cc(1081)] [GroupMarkerNotSet(crbug.com/242999)!:A07023009C250000]Automatic fallback to software WebGL has been deprecated. Please use the --enable-unsafe-swiftshader flag to opt in to lower security guarantees for trusted content.
[39712:27412:0531/224849.039:ERROR:gles2_cmd_decoder_passthrough.cc(1081)] [GroupMarkerNotSet(crbug.com/242999)!:A0A023009C250000]Automatic fallback to software WebGL has been deprecated. Please use the --enable-unsafe-swiftshader flag to opt in to lower security guarantees for trusted content.
[39712:27412:0531/224849.091:ERROR:gl_utils.cc(431)] [.WebGL-0xd5c0240ce00]GL Driver Message (OpenGL, Performance, GL_CLOSE_PATH_NV, High): GPU stall due to ReadPixels
[39712:27412:0531/224849.131:ERROR:gl_utils.cc(431)] [.WebGL-0xd5c0240ce00]GL Driver Message (OpenGL, Performance, GL_CLOSE_PATH_NV, High): GPU stall due to ReadPixels
[39712:27412:0531/224849.248:ERROR:gl_utils.cc(431)] [.WebGL-0xd5c0240ce00]GL Driver Message (OpenGL, Performance, GL_CLOSE_PATH_NV, High): GPU stall due to ReadPixels
[39712:27412:0531/224849.302:ERROR:gl_utils.cc(431)] [.WebGL-0xd5c0240ce00]GL Driver Message (OpenGL, Performance, GL_CLOSE_PATH_NV, High): GPU stall due to ReadPixels (this message will no longer repeat)
Traceback (most recent call last):
File "C:\DEV\Fiverr2025\TEMPLATES\test.py", line 29, in <module>
waitWD.until(EC.element_to_be_clickable((By.XPATH, '//div[@id="bnp_btn_accept"]//a'))).click()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\DEV\.venv\selenium\Lib\site-packages\selenium\webdriver\support\wait.py", line 138, in until
raise TimeoutException(message, screen, stacktrace)
selenium.common.exceptions.TimeoutException: Message:
</code></pre>
<p>Why is this not working in headless mode?</p>
|
<python><selenium-webdriver><headless>
|
2025-05-31 20:52:27
| 1
| 1,515
|
Rapid1898
|
79,646,918
| 16,383,578
|
How to efficiently compute the exact value of prime counting function for n?
|
<p>I want to know the exact count of prime numbers no greater than n, without finding all these prime numbers first.</p>
<p>I know of a very efficient method to find all primes no greater than n: a Sieve of Eratosthenes with wheel factorization. I use a 2310-based wheel to skip all multiples of 2, 3, 5, 7, 11, so I only need to process 480 numbers out of every 2310, skipping 61/77 of all candidates. I only mark composites that are coprime to 2310, I only sieve up to the square root, and I mark all multiples of 2, 3, 5, 7, 11 exactly once, so my implementation is much, much faster than the naive Sieve of Eratosthenes found in online tutorials.</p>
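<p>(A quick sanity check of those numbers: 2310 = 2·3·5·7·11 has exactly 480 residues coprime to it, so the wheel skips 1830/2310 = 61/77 of all candidates.)</p>
<pre><code>from math import gcd
print(len([r for r in range(2310) if gcd(r, 2310) == 1]))  # 480
</code></pre>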
<p>But even with my fast wheel-factorized Sieve of Eratosthenes, computing all primes up to n is slow for large n. I have instead implemented one way to get the exact count of primes no greater than n without generating all the primes first.</p>
<p>I have implemented the <a href="https://en.wikipedia.org/wiki/Meissel%E2%80%93Lehmer_algorithm" rel="nofollow noreferrer">Meissel-Lehmer algorithm</a>, basing my implementation on this <a href="https://stackoverflow.com/a/62875264/16383578">answer</a>; I didn't implement the method from the other answer because it was way too complex.</p>
<p>I didn't use the recursive combinatorial phi(n, k) function because that is way too inefficient. Instead, phi(n, k) here means the count of all numbers no greater than n that are coprime to all of the first k primes (coprime meaning the two numbers share no common divisor greater than 1); in other words, phi(n, k) is the count of numbers that are not multiples of any of the first k primes, and it can be efficiently calculated using my optimized prime sieve. As a tiny example, phi(10, 2) counts the numbers up to 10 not divisible by 2 or 3, i.e. {1, 5, 7}, so phi(10, 2) = 3.</p>
<p>First, the combinatorial phi function implementation:</p>
<pre><code>from bisect import bisect
class Wheel_Sieve:
    def __init__(self):
        self.primes = [2, 3, 5, 7, 11]
        self.composites = {}
        self.spoke = 0
        self.coprime = 13
        self.count = 5

    def prime_range(self, limit: int) -> list:
        self.sieve_by_value(limit)
        return self.primes[: bisect(self.primes, limit)]

    def prime_count(self, limit: int) -> int:
        self.sieve_by_value(limit)
        return bisect(self.primes, limit)

    def first_primes(self, count: int) -> list:
        self.sieve_by_count(count)
        return self.primes[:count]

    def sieve_by_count(self, limit: int) -> None:
        while self.count < limit:
            self.do_sieving(self.coprime)
            self.coprime += WHEEL5[self.spoke]
            self.spoke += 1
            self.spoke -= 480 * (self.spoke == 480)

    def sieve_by_value(self, limit: int) -> None:
        while self.coprime <= limit:
            self.do_sieving(self.coprime)
            self.coprime += WHEEL5[self.spoke]
            self.spoke += 1
            self.spoke -= 480 * (self.spoke == 480)

    def do_sieving(self, coprime: int) -> None:
        duo = self.composites.pop(coprime, None)
        if duo is None:
            self.primes.append(coprime)
            self.composites[coprime * coprime] = (self.spoke, coprime)
            self.count += 1
        else:
            self.mark_composite(duo)

    def mark_composite(self, duo):
        j, prime = duo
        multiple = self.coprime + prime * WHEEL5[j]
        j += 1
        j -= 480 * (j == 480)
        while multiple in self.composites:
            multiple += prime * WHEEL5[j]
            j += 1
            j -= 480 * (j == 480)
        self.composites[multiple] = (j, prime)


SIEVE = Wheel_Sieve()
PHI_MEMO = {}


def phi_memo(n: int, k: int) -> int:
    SIEVE.first_primes(k)
    if result := PHI_MEMO.get((n, k)):
        return result
    if not k:
        return n
    if not n:
        return 0
    j = k - 1
    result = phi_memo(n, j) - phi_memo(n // SIEVE.primes[j], j)
    PHI_MEMO[(n, k)] = result
    return result
</code></pre>
<p>And my efficient algorithm:</p>
<pre><code>import numba as nb
import numpy as np
# fmt: off
FOREWHEEL5 = np.array((
(4 , 2 ), (9 , 6 ), (25 , 30 ), (35 , 30 ), (49 , 210 ), (77 , 210 ),
(91 , 210 ), (119 , 210 ), (133 , 210 ), (161 , 210 ), (203 , 210 ), (217 , 210 ),
(121 , 2310), (143 , 2310), (187 , 2310), (209 , 2310), (253 , 2310), (319 , 2310),
(341 , 2310), (407 , 2310), (451 , 2310), (473 , 2310), (517 , 2310), (583 , 2310),
(649 , 2310), (671 , 2310), (737 , 2310), (781 , 2310), (803 , 2310), (869 , 2310),
(913 , 2310), (979 , 2310), (1067, 2310), (1111, 2310), (1133, 2310), (1177, 2310),
(1199, 2310), (1243, 2310), (1331, 2310), (1397, 2310), (1441, 2310), (1507, 2310),
(1529, 2310), (1573, 2310), (1639, 2310), (1661, 2310), (1727, 2310), (1793, 2310),
(1837, 2310), (1859, 2310), (1903, 2310), (1969, 2310), (1991, 2310), (2057, 2310),
(2101, 2310), (2123, 2310), (2167, 2310), (2189, 2310), (2299, 2310), (2321, 2310)
), dtype=np.uint64)
WHEEL5 = np.array([
4 , 2 , 4 , 6 , 2 , 6 , 4 , 2 , 4 , 6 , 6 , 2 , 6 , 4 , 2 , 6 , 4 , 6 , 8 , 4 , 2 , 4 , 2 , 4 ,
14, 4 , 6 , 2 , 10, 2 , 6 , 6 , 4 , 2 , 4 , 6 , 2 , 10, 2 , 4 , 2 , 12, 10, 2 , 4 , 2 , 4 , 6 ,
2 , 6 , 4 , 6 , 6 , 6 , 2 , 6 , 4 , 2 , 6 , 4 , 6 , 8 , 4 , 2 , 4 , 6 , 8 , 6 , 10, 2 , 4 , 6 ,
2 , 6 , 6 , 4 , 2 , 4 , 6 , 2 , 6 , 4 , 2 , 6 , 10, 2 , 10, 2 , 4 , 2 , 4 , 6 , 8 , 4 , 2 , 4 ,
12, 2 , 6 , 4 , 2 , 6 , 4 , 6 , 12, 2 , 4 , 2 , 4 , 8 , 6 , 4 , 6 , 2 , 4 , 6 , 2 , 6 , 10, 2 ,
4 , 6 , 2 , 6 , 4 , 2 , 4 , 2 , 10, 2 , 10, 2 , 4 , 6 , 6 , 2 , 6 , 6 , 4 , 6 , 6 , 2 , 6 , 4 ,
2 , 6 , 4 , 6 , 8 , 4 , 2 , 6 , 4 , 8 , 6 , 4 , 6 , 2 , 4 , 6 , 8 , 6 , 4 , 2 , 10, 2 , 6 , 4 ,
2 , 4 , 2 , 10, 2 , 10, 2 , 4 , 2 , 4 , 8 , 6 , 4 , 2 , 4 , 6 , 6 , 2 , 6 , 4 , 8 , 4 , 6 , 8 ,
4 , 2 , 4 , 2 , 4 , 8 , 6 , 4 , 6 , 6 , 6 , 2 , 6 , 6 , 4 , 2 , 4 , 6 , 2 , 6 , 4 , 2 , 4 , 2 ,
10, 2 , 10, 2 , 6 , 4 , 6 , 2 , 6 , 4 , 2 , 4 , 6 , 6 , 8 , 4 , 2 , 6 , 10, 8 , 4 , 2 , 4 , 2 ,
4 , 8 , 10, 6 , 2 , 4 , 8 , 6 , 6 , 4 , 2 , 4 , 6 , 2 , 6 , 4 , 6 , 2 , 10, 2 , 10, 2 , 4 , 2 ,
4 , 6 , 2 , 6 , 4 , 2 , 4 , 6 , 6 , 2 , 6 , 6 , 6 , 4 , 6 , 8 , 4 , 2 , 4 , 2 , 4 , 8 , 6 , 4 ,
8 , 4 , 6 , 2 , 6 , 6 , 4 , 2 , 4 , 6 , 8 , 4 , 2 , 4 , 2 , 10, 2 , 10, 2 , 4 , 2 , 4 , 6 , 2 ,
10, 2 , 4 , 6 , 8 , 6 , 4 , 2 , 6 , 4 , 6 , 8 , 4 , 6 , 2 , 4 , 8 , 6 , 4 , 6 , 2 , 4 , 6 , 2 ,
6 , 6 , 4 , 6 , 6 , 2 , 6 , 6 , 4 , 2 , 10, 2 , 10, 2 , 4 , 2 , 4 , 6 , 2 , 6 , 4 , 2 , 10, 6 ,
2 , 6 , 4 , 2 , 6 , 4 , 6 , 8 , 4 , 2 , 4 , 2 , 12, 6 , 4 , 6 , 2 , 4 , 6 , 2 , 12, 4 , 2 , 4 ,
8 , 6 , 4 , 2 , 4 , 2 , 10, 2 , 10, 6 , 2 , 4 , 6 , 2 , 6 , 4 , 2 , 4 , 6 , 6 , 2 , 6 , 4 , 2 ,
10, 6 , 8 , 6 , 4 , 2 , 4 , 8 , 6 , 4 , 6 , 2 , 4 , 6 , 2 , 6 , 6 , 6 , 4 , 6 , 2 , 6 , 4 , 2 ,
4 , 2 , 10, 12, 2 , 4 , 2 , 10, 2 , 6 , 4 , 2 , 4 , 6 , 6 , 2 , 10, 2 , 6 , 4 , 14, 4 , 2 , 4 ,
2 , 4 , 8 , 6 , 4 , 6 , 2 , 4 , 6 , 2 , 6 , 6 , 4 , 2 , 4 , 6 , 2 , 6 , 4 , 2 , 4 , 12, 2 , 12
], dtype=np.uint64)
# fmt: on
@nb.njit(cache=True)
def phi(n: int, limit: int) -> int:
    sieve = np.full(n + 1, True, dtype=np.uint8)
    sieve[:2] = False
    start, step = FOREWHEEL5[0]
    i = 0
    j = 1
    while start <= n and i < 60:
        sieve[start::step] = False
        start, step = FOREWHEEL5[j]
        j += j != 59
        i += 1
    count = 5
    last = 11
    k = 13
    i = 0
    while count < limit and k <= n:
        if sieve[k]:
            count += 1
            j = i
            multiple = k * k
            while multiple <= n:
                sieve[multiple] = False
                multiple += k * WHEEL5[j]
                j += 1
                j -= 480 * (j == 480)
            last = k
        k += WHEEL5[i]
        i += 1
        i -= 480 * (i == 480)
    return np.sum(sieve[last + 1 :]) + 1
</code></pre>
<p>My code is a thousand times faster:</p>
<pre><code>In [364]: phi(65536, 1024)
Out[364]: 5519
In [365]: phi_memo(65536, 1024)
Out[365]: np.uint64(5519)
In [366]: %timeit PHI_MEMO.clear(); phi_memo(65536, 1024)
88.3 ms ± 371 μs per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [367]: %timeit phi(65536, 1024)
91.5 μs ± 384 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
</code></pre>
<p>I have implemented the prime counting function using my phi function; it is indeed more efficient than sieving all the primes, but not by very much:</p>
<pre><code>@nb.njit(cache=True)
def prime_sieve(limit: int) -> np.ndarray:
    is_prime = np.full(limit + 1, True, dtype=np.uint8)
    is_prime[:2] = False
    start, step = FOREWHEEL5[0]
    i = 0
    j = 1
    while start <= limit and i < 60:
        is_prime[start::step] = False
        start, step = FOREWHEEL5[j]
        j += j != 59
        i += 1
    top = limit**0.5
    low = int(top)
    top = low + (top > low)
    k = 13
    i = 0
    while k <= top:
        if is_prime[k]:
            j = i
            multiple = k * k
            while multiple <= limit:
                is_prime[multiple] = False
                multiple += k * WHEEL5[j]
                j += 1
                j -= 480 * (j == 480)
        k += WHEEL5[i]
        i += 1
        i -= 480 * (i == 480)
    return is_prime


@nb.njit(cache=True)
def prime_sieve_with_pi(limit: int) -> tuple[np.ndarray, np.ndarray]:
    is_prime = prime_sieve(limit)
    return np.flatnonzero(is_prime), np.cumsum(is_prime.astype(np.uint64))


@nb.njit(cache=True)
def get_primes(limit: int) -> np.ndarray:
    return np.flatnonzero(prime_sieve(limit))


@nb.njit(cache=True)
def count_primes(limit: int) -> int:
    return np.sum(prime_sieve(limit).astype(np.uint64))


def prime_count(limit: int) -> int:
    sqrt = int(limit**0.5)
    cbrt = int(limit ** (1 / 3))
    primes, counts = prime_sieve_with_pi(limit // cbrt + 3)
    low = counts[cbrt + 1]
    high = counts[sqrt + 1]
    return (
        phi(limit, low)
        + low
        - sum(counts[limit // primes[i]] - i for i in range(low, high))
        - 1
    )
</code></pre>
<pre><code>In [369]: prime_sieve_with_pi(10**8)
Out[369]:
(array([ 2, 3, 5, ..., 99999959, 99999971, 99999989]),
array([ 0, 0, 1, ..., 5761455, 5761455, 5761455],
dtype=uint64))
In [370]: prime_count(10**8)
Out[370]: np.uint64(5761455)
In [371]: %timeit prime_count(10**8)
316 ms ± 2.64 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [372]: %timeit prime_sieve_with_pi(10**8)
963 ms ± 8.19 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [373]: %timeit prime_sieve(10**8)
377 ms ± 2.09 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [374]: %timeit count_primes(10**8)
637 ms ± 5.74 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [375]: %timeit get_primes(10**8)
502 ms ± 2.77 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [376]: count_primes(10**8)
Out[376]: 5761455
</code></pre>
<p>How can I compute the exact value of prime counting function much faster than sieving?</p>
|
<python><algorithm><primes><number-theory>
|
2025-05-31 19:14:13
| 0
| 3,930
|
Ξένη Γήινος
|
79,646,778
| 948,866
|
How do you extract key/value from a function in a dict comprehension?
|
<p>I have a function <code>foo</code> that generates a key and value from a string. I want to use that function in a dict comprehension, but haven't figured out how to extract multiple values from a single iteration of the function.</p>
<p>The catch is that <code>foo</code> could be expensive or stateful or executed remotely, so I can't call it more than once per iteration.</p>
<p>Example:</p>
<pre><code>def foo(s):  # simple placeholder for an expensive function
    return s[0], s[1:]

print([foo(s) for s in ['abcd', 'efg', 'hi']])
print({foo(s) for s in ['abcd', 'efg', 'hi']})
print({foo(s)[0]: foo(s)[1] for s in ['abcd', 'efg', 'hi']})
</code></pre>
<p>Output:</p>
<pre><code>[('a', 'bcd'), ('e', 'fg'), ('h', 'i')] # List comprehension works
{('a', 'bcd'), ('e', 'fg'), ('h', 'i')} # Set comprehension works
{'a': 'bcd', 'e': 'fg', 'h': 'i'} # Desired dict output, but function is called twice
</code></pre>
<p>How can I extract k: v from a single invocation of the function?</p>
|
<python><dictionary-comprehension>
|
2025-05-31 16:16:18
| 4
| 3,967
|
Dave
|
79,646,439
| 9,818,388
|
Youtube video upload - issues with special characters
|
<p>I am not able to resolve issues with special characters in the title and description. I'm using the official Google example code to upload videos to YouTube, like the example below:</p>
<pre><code>python upload_video.py --file="DU.mp4" --title="Rozporządzenie" --description="Zapraszamy do odsłuchania nowej publikacji dziennika ustaw opublikowanego na stronie sejmu.\n\nTytuł: DU/2025/695 - Rozporzaogon;dzenie Ministra Rolnictwa i Rozwoju Wsi" --keywords="" --category="25" --privacyStatus="private" --noauth_local_webserver
</code></pre>
<p>Acc. to this doc: <a href="https://developers.google.com/youtube/v3/guides/uploading_a_video" rel="nofollow noreferrer">https://developers.google.com/youtube/v3/guides/uploading_a_video</a></p>
<p>I am simply using Polish special characters, and during the upload process YouTube returns them like below:
<a href="https://i.sstatic.net/6LW8cWBM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6LW8cWBM.png" alt="enter image description here" /></a></p>
<p>I was able to find info in the Google documentation that by default the title and description are encoded in UTF-8; this is visible here: <a href="https://i.sstatic.net/BHpnc6Oz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BHpnc6Oz.png" alt="enter image description here" /></a></p>
<p>Acc to this doc: <a href="https://developers.google.com/youtube/v3/docs/videos#resource" rel="nofollow noreferrer">https://developers.google.com/youtube/v3/docs/videos#resource</a></p>
<p>How can I resolve this issue? What am I doing wrong?</p>
|
<python><youtube><upload><youtube-api>
|
2025-05-31 08:26:36
| 1
| 2,859
|
Szelek
|
79,646,367
| 806,160
|
Django + Celery + PySpark inside Docker raises SystemExit: 1 and NoSuchFileException when creating SparkSession
|
<p>I'm running a Django application that uses Celery tasks and PySpark inside a Docker container. One of my Celery tasks calls a function that initializes a SparkSession using getOrCreate(). However, when this happens, the worker exits unexpectedly with a SystemExit: 1 and a NoSuchFileException.</p>
<p>Here is the relevant part of the stack trace:</p>
<pre class="lang-py prettyprint-override"><code>SystemExit: 1
[INFO] Worker exiting (pid: 66009)
...
WARN NativeCodeLoader: Unable to load native-hadoop library for your platform...
WARN DependencyUtils: Local jar /...antlr4-4.9.3.jar does not exist, skipping.
...
Exception in thread "main" java.nio.file.NoSuchFileException: /tmp/tmpagg4d47k/connection8081375827469483762.info
...
[ERROR] Worker (pid:66009) was sent SIGKILL! Perhaps out of memory?
</code></pre>
<p>How can I solve this problem?</p>
|
<python><django><pyspark>
|
2025-05-31 06:06:10
| 1
| 1,423
|
Tavakoli
|
79,646,161
| 4,907,188
|
cannot open shared object file: No such file or directory with pybind11
|
<p>I have a python script <code>myscript.py</code>:</p>
<pre><code>import pydmy as tp
</code></pre>
<p><code>pydmy</code> is my pybind11 wrapper dynamic library. <code>pydmy</code> will load another C++ library <code>libabc.so</code>. <code>libabc.so</code> is under the same directory as <code>pydmy</code>.</p>
<ol>
<li>In windows, everything is fine.</li>
<li>In linux, <code>libabc.so: cannot open shared object file: No such file or directory</code></li>
</ol>
<p>I was told I need to add <code>-Wl,-rpath,$$ORIGIN</code> to the makefile of <code>pydmy</code>, but it does not work.</p>
|
<python><dll><pybind11>
|
2025-05-30 22:32:02
| 0
| 626
|
Taitai
|
79,646,075
| 499,990
|
difflib.SequenceMatcher suddenly returns different similarity ratio without code or environment changes
|
<p>We're using Python’s difflib.SequenceMatcher to compare strings in a production system. Here's the simplified relevant code:</p>
<pre><code>from difflib import SequenceMatcher
similarity = SequenceMatcher(
None,
normalized_transcript,
normalized_expected
).ratio()
</code></pre>
<p>Until 4:10 PM UTC today, the above code was returning a similarity ratio above our internal threshold for a specific string comparison.</p>
<p>After that time, without any change in our code, server configuration, or environment, the same comparison started returning a lower similarity score, failing the threshold check.</p>
<p><strong>Some key facts:</strong></p>
<ul>
<li><p>The behavior changed consistently across all environments: both development and production.</p>
</li>
<li><p>Servers run a mix of Windows (dev) and Unix (dev + prod), so this is not likely to be an OS-specific issue.</p>
</li>
<li><p>There were no code deployments, no dependency changes, and no environment variable alterations.</p>
</li>
<li><p>We are aware that SequenceMatcher works entirely locally—there are no third-party requests or models involved.</p>
</li>
<li><p>We’ve validated that the inputs to SequenceMatcher are identical to previous values (confirmed via logs).</p>
</li>
</ul>
<p><strong>Important detail:</strong> the string normalized_transcript comes from OpenAI API completions. That’s the only potentially "variable" external component in the system. However, the strings in question are very short, and we’ve historically seen consistent outputs from OpenAI for this prompt setup.</p>
<p>This behavior is baffling. Is there any known edge case, maybe time-sensitive internal optimization, or anything else that could explain this sudden change in behavior from SequenceMatcher?</p>
|
<python><openai-api><difflib><sequencematcher>
|
2025-05-30 20:45:53
| 1
| 16,482
|
MatterGoal
|
79,645,815
| 14,179,793
|
How to connect to Postgres docker container via python
|
<p>I am trying to use the postgresql docker container to create some integration tests but trying to connect fails. I added a password because if I do not I get a different error about it requiring a password.</p>
<pre><code>import psycopg2
db = psycopg2.connect(dbname='pytest', user='pytest', password='pytest', host='0.0.0.0', port='5432')
</code></pre>
<p>Error:</p>
<pre><code>Traceback (most recent call last):
File "/home/<user_name>/<directory_name>/temp.py", line 26, in <module>
db = psycopg2.connect(dbname='pytest', user='pytest', password='pytest', host='0.0.0.0', port='5432')
File "/home/<user_name>/anaconda3/envs/gdg/lib/python3.10/site-packages/psycopg2/__init__.py", line 122, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: connection to server at "0.0.0.0", port 5432 failed: FATAL: password authentication failed for user "pytest"
connection to server at "0.0.0.0", port 5432 failed: FATAL: password authentication failed for user "pytest"
</code></pre>
<p>Compose File:</p>
<pre><code>services:
  psql_db:
    image: postgres
    restart: always
    shm_size: 128mb
    environment:
      POSTGRES_USER: pytest
      POSTGRES_PASSWORD: pytest
      POSTGRES_HOST_AUTH_METHOD: trust
</code></pre>
<p>pg_hba.conf of container:</p>
<pre><code># TYPE DATABASE USER ADDRESS METHOD
# "local" is for Unix domain socket connections only
local all all trust
# IPv4 local connections:
host all all 127.0.0.1/32 trust
# IPv6 local connections:
host all all ::1/128 trust
# Allow replication connections from localhost, by a user with the
# replication privilege.
local replication all trust
host replication all 127.0.0.1/32 trust
host replication all ::1/128 trust
# warning trust is enabled for all connections
# see https://www.postgresql.org/docs/17/auth-trust.html
host all all all trust
</code></pre>
<p>What am I missing?</p>
|
<python><postgresql><docker-compose>
|
2025-05-30 16:39:09
| 1
| 898
|
Cogito Ergo Sum
|
79,645,772
| 1,246,366
|
TCP messages starting with a REST verb are not sent
|
<p>I am making a very strange observation while using python sockets to send TCP data:</p>
<p>When sending a TCP packet over a new connection that starts with "PUT ", "POST ", "DELETE " or "GET ", the messages are only sent much later, when the socket is closed.</p>
<p>Some details:</p>
<ul>
<li>Sending anything else (at least all the messages I could think of) works fine (it is sent immediately)</li>
<li>Sending "PUT " etc. as a second message (and something else first) works fine.</li>
<li>Sending something else after the "PUT " message is also not sent.</li>
<li>The space is necessary, i.e. "POST" works fine. The data after the space does not change anything, i.e. "POST some more data" is also not sent immediately.</li>
<li>The socket.sendall call returns immediately, but the TCP packet is not sent until the socket is closed. This is confirmed with a Wireshark log.</li>
<li>I tried the TCP_NODELAY sockopt without success.</li>
</ul>
<pre><code>import socket
from time import sleep
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(("198.18.2.2", 8764))
sock.sendall(b"PUT ")
sleep(10)
</code></pre>
<p>This leads to the "PUT " only being sent 10 s later. See the Wireshark log (captured on the sending side):</p>
<pre><code>No Time Source Destination Len Info
1 0.000000 198.18.2.1 198.18.2.2 TCP 66 59465 → 8764 [SYN] Seq=0 Win=64240 Len=0 MSS=1460 WS=256 SACK_PERM
2 0.000813 198.18.2.2 198.18.2.1 TCP 60 8764 → 59465 [SYN, ACK] Seq=0 Ack=1 Win=512 Len=0 MSS=1460
3 0.000976 198.18.2.1 198.18.2.2 TCP 54 59465 → 8764 [ACK] Seq=1 Ack=1 Win=64240 Len=0
4 10.003259 198.18.2.1 198.18.2.2 TCP 58 "PUT " (0x50 0x55 0x54 0x20)
5 10.003331 198.18.2.1 198.18.2.2 TCP 54 59465 → 8764 [FIN, ACK] Seq=5 Ack=1 Win=64240 Len=0
6 10.004227 198.18.2.2 198.18.2.1 TCP 60 8764 → 59465 [ACK] Seq=1 Ack=5 Win=508 Len=0
7 10.011112 198.18.2.2 198.18.2.1 TCP 60 8764 → 59465 [FIN, ACK] Seq=1 Ack=6 Win=508 Len=0
8 10.011318 198.18.2.1 198.18.2.2 TCP 54 59465 → 8764 [ACK] Seq=6 Ack=2 Win=64240 Len=0
</code></pre>
<p>I am out of ideas where to check for the root cause of this. Apparently some REST verbs are treated differently. By Python? Windows? TCP?</p>
<p><strong>Updates</strong> (trying suggestions):</p>
<ul>
<li>Sending a full HTTP message <code>"PUT /file.txt HTTP/1.1\r\nHost: 198.18.2.2\r\nConnection: close\r\nContent-Length: 0\r\n\r\n"</code> goes through immediately</li>
<li>Putting the server on localhost and sending these specific messages also works (they are sent and received immediately)</li>
<li>Yes, I agree that there might be some mechanism that intercept HTTP-like messages and holds them until "complete". I am in a corporate environment so security software / firewall is a likely candidate and the focus of further investigations.</li>
<li>The devices are connected via LAN: PC (running client) <-> Linux Router <-> µC (running server).</li>
</ul>
|
<python><sockets>
|
2025-05-30 16:01:21
| 0
| 368
|
Sloothword
|
79,645,726
| 1,361,752
|
How to disable automatic groupby widget in hvplot?
|
<p><code>hvplot</code> has a <code>groupby</code> parameter that lets you pick what variables to group the output by. This results in a widget you use to select the data subset you want to plot.</p>
<p>If groupby is not specified, <code>hvplot</code> often infers what variables you want to group by. For example:</p>
<pre class="lang-py prettyprint-override"><code>ds=xr.Dataset({
'A':(['x','y','z'], np.arange(3*4*5).reshape(3,4,5)),
'B':(['x','y','z'],2*np.arange(3*4*5).reshape(3,4,5)),
},
coords={
'x':range(3),
'y':range(4),
'z':range(5)
}
)
ds.hvplot.scatter('A','B')
</code></pre>
<p>Even though <code>groupby</code> is not specified, hvplot assumes you want <code>groupby=['x','y','z']</code>. So, this creates a widget with sliders for x, y, and z. Since only one point exists for every combination, the resulting scatter plots only have a single point, which isn't very informative!</p>
<p>Is there a way to force <code>hvplot</code> to plot all values in a single plot? Put another way, is there a way to tell it to not infer a default value for the <code>groupby</code> parameter, and not create a widget? In the above example, this would mean producing a scatter plot of all A and B values in the dataset.</p>
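<p>(To make the desired output concrete, here is a sketch of the kind of single flattened plot I mean; this manual pandas round-trip is exactly the extra step I'd like <code>hvplot</code> to handle for me:)</p>
<pre class="lang-py prettyprint-override"><code>import hvplot.pandas  # noqa: registers the .hvplot accessor on DataFrames
df = ds.to_dataframe().reset_index()  # flattens x, y, z into ordinary columns
df.hvplot.scatter('A', 'B')           # one scatter with all 60 points, no widget
</code></pre>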
|
<python><python-xarray><hvplot>
|
2025-05-30 15:31:55
| 1
| 4,167
|
Caleb
|
79,645,489
| 4,281,664
|
Pylance is not recognizing "Methods:" section in a Python class docstring
|
<p>Consider this Python class with docstring:</p>
<pre><code>class TestClass:
    '''
    Summary line of docstring

    Additional lines of docstring.

    Attributes:
        attr1 (int): this is the desc of a public attribute
        attr2 (bool): another desc for a second attribute

    Methods:
        calculate
        calculate()
        calculate(int)
    '''
</code></pre>
<p>I used the <a href="https://github.com/google/styleguide/blob/gh-pages/pyguide.md#38-comments-and-docstrings" rel="nofollow noreferrer">Google style</a> format for docstrings, with <code>Attributes</code> and <code>Methods</code> sections.</p>
<p>But, in Visual Studio Code, when hovering the class name, the docstring is rendered that way:
<a href="https://i.sstatic.net/H3UcQCtO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/H3UcQCtO.png" alt="docstring rendering in VSCode" /></a></p>
<p>So, the Attributes section header is recognized and its elements are highlighted, but the Methods section header isn't. Testing with a method docstring, all section headers I know (<code>Args</code>, <code>Returns</code>, <code>Raises</code>) are also recognized and highlighted.</p>
<p>So, is there anything I can do in VSCode for the <code>Methods</code> header to be recognized and highlighted in the docstring rendering view? Maybe I'm using the wrong header?</p>
|
<python><visual-studio-code><docstring><pylance>
|
2025-05-30 12:49:53
| 0
| 421
|
Alberto Jiménez
|
79,645,306
| 1,860,805
|
How to stop converting into complex
|
<p>At the end of this calculation the result automatically turns into a complex type in Python 3.</p>
<pre><code>$ cat test.py
#!/usr/bin/python
curryield =-100.0
frequency = 2
coupons = 4
days_to_next = -4.0
days_period = 180.0
part1 = (100.0/((1+curryield/frequency)**(coupons-1+(days_to_next/days_period))))
print("Part 1:{} type:{}".format(part1,type(part1)))
</code></pre>
<p>I want to keep <code>part1</code> as a float and stop it converting into a complex type.</p>
<p>When I try to convert it back to a float like this</p>
<pre><code>part2 = float(part1)
print("Part 2:{} type:{}".format(part2,type(part2)))
</code></pre>
<p>but it throws an exception</p>
<pre><code>Traceback (most recent call last):
  File "./test.py", line 12, in <module>
    part2 = float(part1)
TypeError: can't convert complex to float
</code></pre>
<p>I know I can convert only the real part <code>part1.real</code> or the imaginary part <code>part1.imag</code> to a float, but then I lose precision. I want to keep <code>part1</code> as a float at the end of the above calculation.</p>
|
<python>
|
2025-05-30 10:31:37
| 1
| 523
|
Ramanan T
|
79,645,288
| 11,738,400
|
How to efficiently retrieve xy-coordinates from image
|
<p>I have an image <code>img</code> with 1000 rows and columns each.<br />
Now I would like to consider each pixel as x- and y-coordinates and extract the respective value.</p>
<p>An illustrated example of what I want to achieve:</p>
<p><a href="https://i.sstatic.net/Z4Iba79m.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Z4Iba79m.png" alt="" /></a></p>
<p>In theory, the code snippet below should work. But it is super slow (I stopped the execution after some time).</p>
<pre><code>img = np.random.rand(1000,1000)
xy = np.array([(x, y) for x in range(img.shape[1]) for y in range(img.shape[0])])
xy = np.c_[xy, np.zeros(xy.shape[0])]
for i in range(img.shape[0]):
    for j in range(img.shape[1]):
        xy[np.logical_and(xy[:,1] == i, xy[:,0] == j), 2] = img[i,j]
</code></pre>
<p>Is there a faster way (e.g. some numpy magic) to go from one table to the other?</p>
<p>Thanks in advance!</p>
|
<python><numpy><performance>
|
2025-05-30 10:20:25
| 1
| 612
|
mri
|
79,645,187
| 15,175,627
|
How can I check if `src` and `dst` are the same when writing with `rasterio.open()`
|
<p>I want to read a geotiff, reproject, and write to a new geotiff based on the code from the <code>rasterio</code> docs (<a href="https://rasterio.readthedocs.io/en/stable/topics/reproject.html" rel="nofollow noreferrer">https://rasterio.readthedocs.io/en/stable/topics/reproject.html</a>) (the actual transformation is of course more complex than shown below).</p>
<p>There are a few potential problems that I need to deal with:</p>
<ol>
<li>The output file might already exist; in this case I want to overwrite it</li>
</ol>
<pre class="lang-py prettyprint-override"><code>try:
    os.remove(fname)
except FileNotFoundError:
    pass
</code></pre>
<ol start="2">
<li>The output file might be opened in another program. Ideally I would like to overwrite it regardless, but I understand this is difficult/impossible to do, so I catch this as a PermissionError:</li>
</ol>
<pre class="lang-py prettyprint-override"><code>try:
    write_reprojected(...)
except (PermissionError, OSError):
    print("output_file is opened elsewhere, close and try again")
</code></pre>
<p><strong>3. However, if the input and output files are accidentally specified to be the same, I would like a different behaviour of the program.</strong> I am trying to catch this with</p>
<pre class="lang-py prettyprint-override"><code>if os.path.abspath(src.files[0]) == os.path.abspath(fname):
    # raise MyCustomException()
    print("Trying to write file to its old location")
</code></pre>
<p>but I'm not sure if the string comparison is the best way.</p>
<p><strong>Question:</strong> Is there a better way to check if <code>src</code> and <code>dst</code> are the same when writing with <code>rasterio.open()</code>? And also, is there a way to remove/overwrite a file that is used by a different program?</p>
<pre class="lang-py prettyprint-override"><code>
import os

import numpy as np
import rasterio
from rasterio.warp import (aligned_target, calculate_default_transform,
                           reproject, Resampling)


def write_reprojected(src, crs, transform, width, height, fname):
    # raise Exception if file is open for reading
    if os.path.abspath(src.files[0]) == os.path.abspath(fname):
        # raise MyCustomException()
        print("Trying to write file to its old location")

    kwargs = src.meta.copy()
    kwargs.update({
        'crs': crs,
        'transform': transform,
        'width': width,
        'height': height})

    # overwrite file if it already exists
    try:
        os.remove(fname)
    except FileNotFoundError:
        pass

    with rasterio.open(fname, 'w', **kwargs) as dst:
        for i in range(1, src.count + 1):
            reproject(
                source=rasterio.band(src, i),
                destination=rasterio.band(dst, i),
                src_transform=src.transform,
                src_crs=src.crs,
                dst_transform=transform,
                dst_crs=crs,
                resampling=Resampling.bilinear)


input_file = r'old_folder\old_file.tif'
output_folder = 'new_folder'
output_file = 'new_file.tif'

try:
    os.makedirs(output_folder)
except FileExistsError:
    pass

with rasterio.open(input_file, 'r') as src:
    transform, width, height = aligned_target(
        src.transform, src.width, src.height, [round(r) for r in src.res])
    try:
        write_reprojected(src, src.crs, transform, width, height,
                          os.path.join(output_folder, output_file))
    except (PermissionError, OSError):
        print("output_file is opened elsewhere, close and try again")
</code></pre>
|
<python><writefile><rasterio>
|
2025-05-30 09:16:12
| 0
| 511
|
konstanze
|
79,645,144
| 5,320,906
|
How can I extend an annotated type
|
<p>Using Python 3.10, I have a base type definition that I use in many models:</p>
<pre class="lang-py prettyprint-override"><code>Alpha = Annotated[str, Field(pattern=r'[A-Za-z]')]
</code></pre>
<p>I want to create additional specialised types based on this type, for example by adding a <code>max_length</code> constraint, or additional validators or serialisers. I want to do this in a declarative way, ideally using <code>Annotated</code> as the type definitions will be auto-generated into a module for import.</p>
<p>This doesn't work, because it produces a nested annotation, but I'd like something like it:</p>
<pre class="lang-py prettyprint-override"><code>ShortAlpha = Annotated[Alpha, BeforeValidator(lambda x: len(x) <= 5)]
</code></pre>
<p>So attributes typed as <code>ShortAlpha</code> would have the combined metadata of both <code>Alpha</code> and <code>ShortAlpha</code>.</p>
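<p>(For context, a minimal sketch of how I intend to use these types; <code>Item</code> is just an illustrative model, not part of the generated module:)</p>
<pre class="lang-py prettyprint-override"><code>from pydantic import BaseModel

class Item(BaseModel):
    code: Alpha              # pattern constraint only
    short_code: ShortAlpha   # pattern constraint plus the additional metadata
</code></pre>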
<p>Is there a way to extend <code>Annotated</code> type definitions without having to explicitly dig around in the original type's internals?</p>
|
<python><python-typing><pydantic><pydantic-v2>
|
2025-05-30 08:36:38
| 0
| 56,990
|
snakecharmerb
|
79,645,053
| 5,320,591
|
The Type Point in geojson_pydantic is not hashable
|
<p>I'm using Python 3.11 and I am struggling a lot to understand this.</p>
<p>When I try to access the Swagger documentation of my service, on localhost, I get this error:</p>
<pre><code>... in generate_definitions
    definitions_remapping = self._build_definitions_remapping()
  File "/app/.venv/lib/python3.11/site-packages/pydantic/json_schema.py", line 2129, in _build_definitions_remapping
    return _DefinitionsRemapping.from_prioritized_choices(
  File "/app/.venv/lib/python3.11/site-packages/pydantic/json_schema.py", line 177, in from_prioritized_choices
    schemas_for_alternatives[defs_ref] = _deduplicate_schemas(schemas_for_alternatives[defs_ref])
  File "/app/.venv/lib/python3.11/site-packages/pydantic/json_schema.py", line 2235, in _deduplicate_schemas
    return list({_make_json_hashable(schema): schema for schema in schemas}.values())
  File "/app/.venv/lib/python3.11/site-packages/pydantic/json_schema.py", line 2235, in <dictcomp>
    return list({_make_json_hashable(schema): schema for schema in schemas}.values())
  File "/app/.venv/lib/python3.11/site-packages/pydantic/_internal/_model_construction.py", line 429, in hash_func
    return hash(getter(self.__dict__))
TypeError: unhashable type: 'Point'
</code></pre>
<p>So this <strong>TypeError: unhashable type: 'Point'</strong> <strong>is the cause when generating the Swagger docs.</strong></p>
<p>This is the screenshot when I try to read that docs page:</p>
<p><a href="https://i.sstatic.net/vzJghyo7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vzJghyo7.png" alt="Swagger API error" /></a></p>
<p>so, the "<strong>Point</strong>" class comes from <strong>geojson_pydantic</strong>.</p>
<p>I have to say that I had the same error in another class of <strong>my code</strong>, and I solved that error by writing:</p>
<pre><code>class Config:
    frozen = True
</code></pre>
<p>so:</p>
<pre><code>class MyClass(BaseModel):
    class Config:
        frozen = True
</code></pre>
<p>I can't understand how to identify the exact class or model that is not "hashable" and that is causing the Swagger documentation not to load.</p>
<p>The version of <strong>geojson_pydantic</strong> I am using is <strong>1.2.0</strong>.</p>
<p>Any clue?</p>
|
<python><swagger><geojson><pydantic>
|
2025-05-30 07:26:58
| 0
| 1,546
|
RobyB
|
79,645,043
| 595,305
|
limiting value for int in pyqtSignal?
|
<p><strong>NOT A DUPLICATE AS THIS INVOLVES A COMPLETELY SILENT ERROR</strong></p>
<p>I have a signal which looks like this:</p>
<pre><code>add_history_row_signal = QtCore.pyqtSignal(int, int)
</code></pre>
<p>The first int is a key for a dict, to retrieve a value. And in fact this int is the object id of a given object (i.e. <code>id(my_object)</code>).</p>
<p>The connected method gets the int value ... but it is wrong, and the key is then not found in the dict.</p>
<p><a href="https://stackoverflow.com/questions/10762809/in-pyside-why-does-emiting-an-integer-0x7fffffff-result-in-overflowerror-af">This question</a> is similar, but this situation, with PyQt5, is a <strong>completely silent</strong> error, far more dangerous, and arguably something the developers of PyQt should do something about (at the very least document it).</p>
<p>However, if I make the int used as a key smaller (losing the guarantee of uniqueness), it works OK:</p>
<pre><code>he_id_val = id(history_entry) % 10000
...
self.he_id_to_row_map[he_id_val] = [date_item, entry_item]
self.add_history_row_signal.emit(he_id_val, len(history_list))
</code></pre>
<p>... these 2 ints then come through OK to the connected method</p>
<p>I could only think of 2 explanations for this: either there is a maximum value for the int key of a dict in Python (this doesn't appear to be the case as far as I can tell), or a pyqtSignal appears to impose some sort of "MAX INT" limitation.</p>
<p>I have searched on this and found nothing. Obviously Qt is written in C++, and this max int value appears to be 2147483647. A typical value from <code>id(x)</code> in Python code is 1715560796288, so larger.</p>
<p>Is there any known way to specify that a PyQt signal is using a C++ long rather than a C++ int?</p>
<p>An obvious workaround is to convert this id value to a string. Just wondered if I can avoid this.</p>
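<p>(For completeness, the string workaround I mean would look roughly like this, reusing the names from the snippet above:)</p>
<pre><code>add_history_row_signal = QtCore.pyqtSignal(str, int)
...
he_id_val = str(id(history_entry))
self.he_id_to_row_map[he_id_val] = [date_item, entry_item]
self.add_history_row_signal.emit(he_id_val, len(history_list))
</code></pre>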
<p>If my hypothesis is correct it's a bit of a gotcha: it'd be nice if it were at least documented somewhere in the PyQt documentation (unless it already is), not least because it is a potential silent failure. In fact, assuming you can create a signal in C++ Qt with a <code>C++ long</code> parameter, mightn't it have been safer when developing PyQt to have made things so that every parameter (at least in signals) stated as <code>Python int</code> got translated to <code>C++ long</code>?</p>
|
<python><c++><qt><pyqt><signals-slots>
|
2025-05-30 07:18:46
| 0
| 16,076
|
mike rodent
|
79,645,029
| 12,520,740
|
Running pythonpy installed via pipx gives: ImportError
|
<h2>The problem</h2>
<p>I recently tried installing pythonpy using <code>pipx</code>, but whenever I run any command using <code>py</code>, I see the following traceback:</p>
<pre class="lang-bash prettyprint-override"><code>$ py '2-3'
Traceback (most recent call last):
File "/home/m/.local/bin/py", line 5, in <module>
from pythonpy.__main__ import main
File "/home/m/.local/share/pipx/venvs/pythonpy/lib/python3.12/site-packages/pythonpy/__main__.py", line 16, in <module>
from collections import Iterable
ImportError: cannot import name 'Iterable' from 'collections' (/usr/lib/python3.12/collections/__init__.py)
</code></pre>
<h2>Installation</h2>
<p>How I installed <code>py</code></p>
<pre><code>$ sudo apt install pipx
$ pipx install pythonpy
$ pipx ensurepath
</code></pre>
<h2>Additional information</h2>
<p><code>pipx list</code> output:</p>
<pre class="lang-bash prettyprint-override"><code>venvs are in /home/m/.local/share/pipx/venvs
apps are exposed on your $PATH at /home/m/.local/bin
manual pages are exposed at /home/m/.local/share/man
package pythonpy 0.4.11, installed using Python 3.12.3
- find_pycompletion.sh
- py
- py3
- py3.1
- pycompleter
- pycompleter3
- pycompleter3.1
</code></pre>
<p>Versions:</p>
<ul>
<li><strong>Python version:</strong> <code>Python 3.12.3</code></li>
<li><strong>pipx version:</strong> <code>pipx 1.4.3</code></li>
<li><strong>OS:</strong> Ubuntu 24.04.02 LTS</li>
</ul>
<p>Note that <code>sudo apt install python3-py</code> also gives an unsatisfying result by causing <code>py</code> to throw SyntaxErrors. The answer provided in this Stack Overflow question <a href="https://stackoverflow.com/questions/79220412/fresh-install-of-pythonpy-gives-syntaxwarnings">link</a> is not intended to work on the newer versions of Ubuntu, since the new <code>pip</code> will give <code>error: externally-managed-environment</code> when installing packages.</p>
<h2>Question</h2>
<p>How can I make sure that this <code>pipx</code>-installed <code>py</code> command works without error?</p>
|
<python><python-3.x><pipx><python-py>
|
2025-05-30 07:05:40
| 1
| 1,156
|
melvio
|
79,644,972
| 5,657,705
|
Getting attribute error in Django BaseCommand - Check
|
<p>I am working on a tutorial project. The same code works for the instructor but doesn't work for me.</p>
<p>I have a file for custom commands:</p>
<pre><code>import time
from psycopg2 import OperationalError as Psycopg2OpError
from django.db.utils import OperationalError
from django.core.management.base import BaseCommand, CommandError
class Command(BaseCommand):
    def handle(self, *args, **options):
        self.stdout.write('waiting for database...')
        db_up = False
        while db_up is False:
            try:
                self.check(databases=['default'])
                db_up = True
            except (Psycopg2OpError, OperationalError):
                self.stdout.write("Database unavailable, waiting 1 second...")
                time.sleep(1)
        self.stdout.write(self.style.SUCCESS('Database available!'))
</code></pre>
<p>And I am writing test case for the same in a file called <code>test_command.py</code> which is below:</p>
<pre><code>from unittest.mock import patch
from psycopg2 import OperationalError as Psycopg2OpError
from django.core.management import call_command
from django.db.utils import OperationalError
from django.test import SimpleTestCase
@patch('core.management.commands.wait_for_db.Command.Check')
class CommandTests(SimpleTestCase):
    def test_wait_for_db_ready(self, patched_check):
        """test waiting for db if db ready"""
        patched_check.return_value = True
        call_command('wait_for_db')
        patched_check.assert_called_once_with(databases=['default'])

    @patch('time.sleep')
    def test_wait_for_db_delay(self, patched_sleep, patched_check):
        """test waiting for db when getting operational error"""
        patched_check.side_effect = [Psycopg2OpError] * 2 + \
            [OperationalError] * 3 + [True]
        call_command('wait_for_db')
        self.assertEqual(patched_check.call_count, 6)
        patched_check.assert_called_with(databases=['default'])
</code></pre>
<p>When I run the tests, I get an error message:</p>
<blockquote>
<p>AttributeError: <class 'core.management.commands.wait_for_db.Command'> does not have the attribute 'Check'</p>
</blockquote>
<p>I am unable to resolve the error.</p>
<p>The structure of the files:</p>
<p><a href="https://i.sstatic.net/Knl8lvLG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Knl8lvLG.png" alt="enter image description here" /></a></p>
|
<python><django><django-testing><django-tests>
|
2025-05-30 06:06:01
| 0
| 1,113
|
The Bat
|
79,644,894
| 2,289,710
|
Python doctests for a colored text output
|
<p>Is it possible to write a docstring test for a function that prints colored text to the command line? I want to test only the content, ignoring the color, or to somehow add the information on color to the docstring. In the example below the test fails, but it should not.</p>
<p><strong>Example</strong></p>
<pre><code>from colorama import Fore
def print_result():
"""Prints out the result.
>>> print_result()
Hello world
"""
print('Hello world')
def print_result_in_color():
"""Prints out the result.
>>> print_result_in_color()
Hello world
"""
print(Fore.GREEN + 'Hello world' + Fore.RESET)
if __name__ == '__main__':
import doctest
doctest.testmod()
</code></pre>
<p><strong>Output</strong></p>
<pre><code>Failed example:
print_result_in_color()
Expected:
Hello world
Got:
Hello world
</code></pre>
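<p>For reference, the two outputs only look identical; the colored one carries invisible ANSI escape sequences. A small sketch to make the difference visible (the exact escape codes depend on the platform and colorama version):</p>
<pre><code>from colorama import Fore

print(repr('Hello world'))
print(repr(Fore.GREEN + 'Hello world' + Fore.RESET))
# e.g. '\x1b[32mHello world\x1b[39m' -- the escape codes are what make the doctest fail
</code></pre>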
|
<python><command-line><colors><docstring><doctest>
|
2025-05-30 04:22:26
| 1
| 3,894
|
freude
|
79,644,830
| 10,461,632
|
Password hashes do not match with bcrypt (python)
|
<p>I am trying to compare two passwords using bcrypt. The hashed password is stored in the database as a string. When I compare the two hashed passwords (the one from the database and the one from the user), I make sure they are both encoded, but the passwords still do not match.</p>
<p>Why do the hashed passwords not match when I store it in a database? Is it because the salt is different than when the hash password was originally generated? If so, how am I supposed to check that a password matches what is stored in a database?</p>
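<p>To illustrate the salt question: hashing the same password twice with a fresh <code>gensalt()</code> gives two different hashes (a small sketch, separate from the full code below):</p>
<pre><code>import bcrypt

pw = b"my_secure_password"
print(bcrypt.hashpw(pw, bcrypt.gensalt()))  # different salt ...
print(bcrypt.hashpw(pw, bcrypt.gensalt()))  # ... so a different hash each call
</code></pre>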
<p>code:</p>
<pre><code>import os
import bcrypt
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker
Base = declarative_base()
os.remove("users.db") if os.path.exists("users.db") else None
engine = create_engine("sqlite:///users.db")
Session = sessionmaker(bind=engine)
session = Session()
class User(Base):
__tablename__ = "users"
id = Column(Integer, primary_key=True)
username = Column(String(50), unique=True, nullable=False)
password_hash = Column(String(100), nullable=False)
Base.metadata.create_all(engine)
def hash_password(password: str) -> str:
hashed = bcrypt.hashpw(password.encode("utf-8"), bcrypt.gensalt())
return hashed.decode("utf-8")
def create_user(username: str, password: str) -> None:
hashed_password = hash_password(password)
user = User(username=username, password_hash=hashed_password)
session.add(user)
session.commit()
print(f"User '{username}' created.")
print(f'password: {password}')
print(f'password_hash: {hashed_password}')
print()
def verify_password(username: str, password_to_check: str) -> bool:
user = session.query(User).filter_by(username=username).first()
if not user:
print("User not found.")
return False
match = bcrypt.checkpw(
hash_password(password_to_check).encode('utf-8'), user.password_hash.encode("utf-8")
)
if match:
print("Password match!")
else:
print("Password does not match.")
print('password_to_check:', password_to_check)
print('password_to_check (encode):', hash_password(password_to_check).encode('utf-8'))
print('user.password_hash:', user.password_hash)
print('user.password_hash (encode):', user.password_hash.encode('utf-8'))
return match
# Create a user
create_user("alice", "my_secure_password")
# Try to verify password
verify_password("alice", "my_secure_password") # Should match
print()
verify_password("alice", "wrong_password") # Should not match
print()
</code></pre>
<p>output:</p>
<pre><code>User 'alice' created.
password: my_secure_password
password_hash: $2b$12$Q3NmS7nQ.hvg7NnUk8bONO3.tVHmB0IC758BBKXXzT2omFuBp4xd2
Password does not match.
password_to_check: my_secure_password
password_to_check (encode): b'$2b$12$veGsFll.mjNjK10GXYYLqe9at/9FN.2Ue12T.l9BedjbHIQmEVWDe'
user.password_hash: $2b$12$Q3NmS7nQ.hvg7NnUk8bONO3.tVHmB0IC758BBKXXzT2omFuBp4xd2
user.password_hash (encode): b'$2b$12$Q3NmS7nQ.hvg7NnUk8bONO3.tVHmB0IC758BBKXXzT2omFuBp4xd2'
Password does not match.
password_to_check: wrong_password
password_to_check (encode): b'$2b$12$x6CdcXKwfIjgwcAWiAkU8.1ukRrKUVmgmBFY5Sw438odzK6X7D4fG'
user.password_hash: $2b$12$Q3NmS7nQ.hvg7NnUk8bONO3.tVHmB0IC758BBKXXzT2omFuBp4xd2
user.password_hash (encode): b'$2b$12$Q3NmS7nQ.hvg7NnUk8bONO3.tVHmB0IC758BBKXXzT2omFuBp4xd2'
</code></pre>
<p>Even in this most basic example, they still do not match:</p>
<pre><code>match = bcrypt.checkpw(
hash_password('my_secure_password').encode('utf-8'),
hash_password('my_secure_password').encode("utf-8"))
> False
</code></pre>
|
<python><python-3.x><bcrypt>
|
2025-05-30 02:57:04
| 1
| 788
|
Simon1
|
79,644,734
| 44,330
|
Why does inspect fail to get source file for classes in a dynamically-imported module?
|
<p><code>inspect.getsource()</code> and <code>inspect.getsourcefile()</code> can access source info for a function, but not for a class, when they are in a module that is imported dynamically with <code>importlib</code>.</p>
<p>Here are two files, <code>thing1.py</code> and <code>thing2.py</code>:</p>
<ul>
<li><p><code>thing1.py</code></p>
<pre><code>import inspect
import os
import importlib.util
dir_here = os.path.dirname(__file__)
spec = importlib.util.spec_from_file_location("thing2",
os.path.join(dir_here, "thing2.py"))
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)
print(module.foo(3))
print(module.Bar().inc(3))
print("module source file:", inspect.getsourcefile(module))
for attr in ['foo','Bar']:
print("%s source: %s" % (attr, inspect.getsourcefile(getattr(module, attr))))
print(inspect.getsource(getattr(module, attr)))
</code></pre>
</li>
<li><p><code>thing2.py</code></p>
<pre><code>def foo(x):
return x+1
class Bar(object):
def inc(self, x):
return x+1
</code></pre>
</li>
</ul>
<p>If I run <code>thing1.py</code> here's what I get:</p>
<pre><code>> python c:\tmp\python\test2\thing1.py
4
4
module source file: c:\tmp\python\test2\thing2.py
foo source: c:\tmp\python\test2\thing2.py
def foo(x):
return x+1
Traceback (most recent call last):
File "c:\tmp\python\test2\thing1.py", line 16, in <module>
print("%s source: %s" % (attr, inspect.getsourcefile(getattr(module, attr))))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jason\.conda\envs\py3datalab\Lib\inspect.py", line 940, in getsourcefile
filename = getfile(object)
^^^^^^^^^^^^^^^
File "C:\Users\jason\.conda\envs\py3datalab\Lib\inspect.py", line 909, in getfile
raise TypeError('{!r} is a built-in class'.format(object))
TypeError: <class 'thing2.Bar'> is a built-in class
</code></pre>
<p>I'm using Python 3.11.4.</p>
<p>Am I missing something during my import step that tells Python how to get source info for classes?</p>
|
<python><python-importlib><python-inspect>
|
2025-05-29 23:33:53
| 1
| 190,447
|
Jason S
|
79,644,621
| 7,479,675
|
Is it possible to use browser-use with undetected-chromedriver?
|
<p>I’m trying to drive an automated web search with the browser-use library, but I need to swap out the default Playwright for an undetected-chromedriver instance so that I can interact with pages guarded by reCAPTCHA (e.g. <a href="https://recaptcha-demo.appspot.com/recaptcha-v2-invisible.php" rel="nofollow noreferrer">https://recaptcha-demo.appspot.com/recaptcha-v2-invisible.php</a>).</p>
<p>undetected-chromedriver: <a href="https://github.com/ultrafunkamsterdam/undetected-chromedriver" rel="nofollow noreferrer">https://github.com/ultrafunkamsterdam/undetected-chromedriver</a>
browser-use: <a href="https://github.com/browser-use/browser-use" rel="nofollow noreferrer">https://github.com/browser-use/browser-use</a></p>
<p>When I replace the browser in my script with an undetected_chromedriver.Chrome(...), I see failures in the agent steps and eventually get:</p>
<pre class="lang-none prettyprint-override"><code>ERROR [agent] ❌ Stopping due to 3 consecutive failures
None
INFO [uc] ensuring close
Exception ignored in: <function _ProactorBasePipeTransport.__del__ at 0x0000024128D00FE0>
Traceback (most recent call last):
File "C:\Program Files\Python311\Lib\asyncio\proactor_events.py", line 116, in __del__
_warn(f"unclosed transport {self!r}", ResourceWarning, source=self)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python311\Lib\asyncio\proactor_events.py", line 80, in __repr__
info.append(f'fd={self._sock.fileno()}')
^^^^^^^^^^^^^^^^^^^
</code></pre>
<p>Below is a simplified version of my script:</p>
<pre class="lang-none prettyprint-override"><code>import os
from dotenv import load_dotenv
from langchain_google_genai import ChatGoogleGenerativeAI
from browser_use import Agent, BrowserSession, Controller
from pydantic import BaseModel
import undetected_chromedriver as uc
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
load_dotenv()
# Initialize Gemini LLM
llm = ChatGoogleGenerativeAI(
model=os.getenv("GEMINI_MODEL_NAME"),
api_key=os.getenv("GEMINI_API_KEY"),
)
# Launch undetected Chrome
driver = uc.Chrome(headless=False)
browser_session = BrowserSession(driver=driver)
class ResponseResult(BaseModel):
response: str
controller = Controller(output_model=ResponseResult)
async def main():
try:
driver.get('https://recaptcha-demo.appspot.com/recaptcha-v2-invisible.php')
WebDriverWait(driver, 20).until(EC.presence_of_element_located((By.TAG_NAME, 'body')))
await asyncio.sleep(2)
agent = Agent(
task="Find the text input fields, enter any data, send, and extract the response",
llm=llm,
browser_session=browser_session,
initial_actions=[]
)
result = await agent.run()
print(result.final_result())
except Exception as e:
print(f"Error during execution: {e!s}")
finally:
driver.quit()
await browser_session.close()
if __name__ == "__main__":
asyncio.run(main())
</code></pre>
<p>Expected behavior:
The agent should be able to open the invisible reCAPTCHA demo page, perform the actions, and return the extracted response text.</p>
<p>Actual behavior:
The agent fails at Step 1 immediately, logging consecutive failures and then an unclosed transport warning from asyncio.</p>
<p>Environment:
Python Python 3.11.1
undetected-chromedriver 3.5.5
browser-use: 0.2.5
OS: Windows 10</p>
|
<python><browser-use>
|
2025-05-29 20:56:07
| 0
| 392
|
Oleksandr Myronchuk
|
79,644,068
| 4,235,960
|
Meta (Facebook) API - products - start_date and "Date of last edit" field
|
<p>I have been reading the following page of Meta (Facebook):
<a href="https://developers.facebook.com/docs/marketing-api/reference/product-item/" rel="nofollow noreferrer">https://developers.facebook.com/docs/marketing-api/reference/product-item/</a>
this is the page for products.</p>
<p>There is a field <code>start_date string Date when the product started to exist </code></p>
<p>When I write Python code to get this field for products that I have added to a Facebook catalog through the Facebook Commerce Manager, it returns no value. I tried it with the Graph Explorer as well, and the same issue happens there.</p>
<p>Furthermore, in the Commerce Manager there is a field "date of last edit", but I can't seem to find a way to get this field with code either.</p>
<p><a href="https://i.sstatic.net/0kHtgW9C.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0kHtgW9C.png" alt="enter image description here" /></a></p>
<p>Can you help me get these two fields using code?</p>
<p>Here is the code I currently have:</p>
<pre><code>def fetch_products_from_catalog(access_token: str, catalog_id: str):
try:
print('Fetching products for catalog id: ', catalog_id)
url = f"https://graph.facebook.com/v22.0/{catalog_id}/products"
params = {
"access_token": access_token,
"fields": "id,retailer_id,name,price,sale_price,category,start_date"
}
products = []
while url:
response = requests.get(url, params=params)
response.raise_for_status()
data = response.json()
products.extend(data.get("data", []))
url = data.get("paging", {}).get("next")
params = {} # Only needed for the first request
print('Products: ', products)
return products
except Exception as e:
raise Exception(f"Error fetching products from catalog: {e}")
</code></pre>
|
<python><facebook><facebook-graph-api><facebook-marketing-api>
|
2025-05-29 14:23:46
| 0
| 3,315
|
adrCoder
|
79,643,973
| 13,946,204
|
How to check if an object is an instance of dict_itemiterator in Python?
|
<p>I have a library that is returning a <code>dict_itemiterator</code> object.</p>
<p>Since I'm working with multiple types of objects, I need to know the actual type of my object.</p>
<p>So I found that it is possible to get <code>dict_itemiterator</code> using this code:</p>
<pre class="lang-py prettyprint-override"><code># of course, not me creating this object in real code, it comes
# from 3rd-party library, I just added it to make a reproducible example
my_tricky_object = iter({}.items())
dict_itemiterator_type = type(iter({}.items()))
if isinstance(my_tricky_object, dict_itemiterator_type):
print('Do stuff here...')
</code></pre>
<p>The problem is I don't know how to import this <code>dict_itemiterator</code> type without using this ambiguous construction: <code>type(iter({}.items()))</code></p>
<p>Of course it is possible to use <code>Iterable</code>:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Iterable
isinstance(my_tricky_object, Iterable)
</code></pre>
<p>But I'm working with different iterable types and want to be sure that <code>my_tricky_object</code> is specifically an instance of a key/value iterator.</p>
<p>The question is: what is the most correct <code>isinstance</code> check here, or how can I import exactly the <code>dict_itemiterator</code> type?</p>
|
<python>
|
2025-05-29 13:29:55
| 1
| 9,834
|
rzlvmp
|
79,643,242
| 3,814,008
|
How do I create thread-safe asyncio background tasks?
|
<p>I have an event loop running in a separate thread from the main thread, and I would like to add a background task to this event loop from the main thread using the <code>asyncio.create_task</code> function.</p>
<p>I am able to achieve this using the <code>asyncio.run_coroutine_threadsafe</code> function, but that ends up returning a <code>concurrent.futures._base.Future</code> object as opposed to the <code>asyncio.Task</code> object that the <code>asyncio.create_task</code> function returns.</p>
<p>I would really like to have a Task object returned as I'm making use of the "name" attribute of the Task object in my application logic. Is there any way to implement this?</p>
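<p>A minimal sketch of the setup described (assumed structure: the loop runs in a worker thread and the coroutine is submitted from the main thread):</p>
<pre><code>import asyncio
import threading

async def background_job():
    await asyncio.sleep(1)
    return "done"

loop = asyncio.new_event_loop()
threading.Thread(target=loop.run_forever, daemon=True).start()

# Returns a concurrent.futures.Future, not an asyncio.Task with a name
future = asyncio.run_coroutine_threadsafe(background_job(), loop)
print(type(future), future.result())
</code></pre>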
|
<python><multithreading><python-asyncio><event-loop>
|
2025-05-29 03:25:24
| 3
| 703
|
Yamen Alghrer
|
79,643,162
| 3,036,367
|
Non-integral optimizer parameters in scipy.stats.fit
|
<p>I am trying to fit a distribution to some data using <code>scipy.stats.fit</code>. I specifically need the <code>stats.nbinom</code> distribution to be fit using a non-integer value for parameter <code>n</code>.</p>
<p>I've made this minimal example of what I'd like to achieve. I can't see what I've done incorrectly.</p>
<p>This is a modified version of the example <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.fit.html#scipy.stats.fit" rel="nofollow noreferrer">here</a></p>
<pre><code>from scipy import stats
from scipy.optimize import differential_evolution
import numpy as np
rng = np.random.default_rng()
def optimizer(fun, bounds, *, integrality):
return differential_evolution(fun, bounds, strategy='best2bin',
rng=rng, integrality=[False, False, False])
data = [0,0,0,1,2,4,11]
bounds = [(1.5, 1.55), (0, 1)]
res4 = stats.fit(stats.nbinom, data, bounds, optimizer=optimizer)
print(res4.params)
</code></pre>
<p>This returns:</p>
<pre><code>ValueError: There are no integer values for `n` on the interval defined by the user-provided bounds and the domain of the distribution.
</code></pre>
<p>EDIT:</p>
<p>Here's an example to show that the negative binomial distribution will accept non-integer <code>n</code>.</p>
<pre><code>from scipy import stats
import matplotlib.pyplot as plt
n = 0.4
p = 0.1
x = np.arange(stats.nbinom.ppf(0.01, n, p),
stats.nbinom.ppf(0.99, n, p))
plt.bar(x, stats.nbinom.pmf(x, n, p))
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/GPL2Kn2Q.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GPL2Kn2Q.png" alt="plot of pmf with non-integer n" /></a></p>
|
<python><scipy><scipy.stats>
|
2025-05-29 01:00:04
| 1
| 2,020
|
JasTonAChair
|
79,643,014
| 2,953,544
|
Can Pixi use locally installed Python libraries?
|
<p>I'm managing a Python project with Pixi, because I have both Conda and Pypi dependencies. I need to install arcpy, which is ArcGIS's Python API. It's a closed-source library, and it is only available from the Python installation that comes with ArcGIS Pro, a desktop program.</p>
<p>So, my question is, can pixi add a dependency (arcpy) from a local folder? I cannot see a way to do so from the docs.</p>
<p>Alternately, could I tell pixi to use the Python executable from my ArcGIS Python installation?</p>
|
<python><arcgis><arcpy><pixi-package-manager>
|
2025-05-28 21:20:46
| 1
| 468
|
AJSmyth
|
79,642,975
| 6,856,019
|
Discord py slash command only visible to certain people
|
<p>I've seen a few Discord bots that have 'hidden' commands available only to server owners. I have searched the internet for how to do this, but so far it's eluded me.</p>
<p>I know I can add checks, but this doesn't hide the command from the slash command list.</p>
<pre><code>/leaderboard
View leaderboard
/leave
Tell bot to exit channel
/listaddons
List extensions
/load
Load extension
/rank
Get users rank card
/reload
Reload extension
</code></pre>
<p>This is what I would like to do:</p>
<pre><code>/changemessage
(ADMIN) Change the cat appear and cought messages
/changetimings
(ADMIN) Change the cat appear timings
</code></pre>
<p>Those commands in the second example will not show to anyone without admin privileges.</p>
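<p>For reference, the check-based approach I mentioned looks roughly like the sketch below (assuming discord.py 2.x app commands); it blocks execution for non-admins but still leaves the command visible in the list:</p>
<pre><code>import discord
from discord import app_commands

@app_commands.command(name="changetimings", description="(ADMIN) Change the cat appear timings")
@app_commands.checks.has_permissions(administrator=True)
async def changetimings(interaction: discord.Interaction):
    # Only runs for admins, but every member still sees the command
    await interaction.response.send_message("Timings changed", ephemeral=True)
</code></pre>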
<p>Has anyone managed to reproduce this behaviour using the <code>Discord py</code> library?</p>
|
<python><discord><discord.py>
|
2025-05-28 20:36:47
| 1
| 551
|
Mike
|
79,642,829
| 15,006,061
|
Python urllib.request.urlopen with Bearer authentication in redirected request
|
<p>The following command successfully downloads an artifact file from a GitHub workflow run:</p>
<pre><code>curl -L -H "Authorization: Bearer ghp_XXXX" -o arti.zip \
https://api.github.com/repos/OWNER/REPO/actions/artifacts/ID/zip
</code></pre>
<p>The following Python code fails, with the same URL and authentication token:</p>
<pre><code>import os, urllib.request
req = urllib.request.Request('https://api.github.com/repos/OWNER/REPO/actions/artifacts/ID/zip')
req.add_header('Authorization', 'Bearer ghp_XXXX')
with urllib.request.urlopen(req) as input:
with open('arti.zip', 'wb') as output:
output.write(input.read())
</code></pre>
<p>The Python exception is raised in <code>urlopen()</code>:</p>
<pre><code>urllib.error.HTTPError: HTTP Error 403: Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.
</code></pre>
<p>What is wrong in the Python code?</p>
<p>The request is redirected to another host and we can suspect that the authorization is not redirected as well. However, the <a href="https://docs.python.org/3/library/urllib.request.html#urllib.request.Request.add_header" rel="nofollow noreferrer">Python doc</a> for <code>Request.add_header()</code> says that "<em>headers added using this method are also added to redirected requests</em>". So it should work.</p>
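<p>To check where the request actually goes, here is a small diagnostic sketch with a redirect handler that stops and prints the target (diagnostic only, not a fix):</p>
<pre><code>import urllib.request

class PrintRedirect(urllib.request.HTTPRedirectHandler):
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        print("redirected to:", newurl)
        return None  # stop here instead of following the redirect

opener = urllib.request.build_opener(PrintRedirect())
# opener.open(req) now raises HTTPError at the redirect, after printing the target URL
</code></pre>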
<p>The same code with any public URL and no authentication works. I have seen many over-complicated examples online and on SO, each one explaining that some other Python library should be used, etc. Isn't there a simple way to use <code>urllib</code> with a Bearer Authorization?</p>
|
<python><urllib><bearer-token>
|
2025-05-28 18:42:16
| 1
| 867
|
Thierry Lelegard
|
79,642,757
| 3,357,935
|
How do I make pandas.read_csv() parse one column as datetime while treating all others as strings?
|
<p>I am using <a href="https://pandas.pydata.org/docs/reference/api/pandas.read_csv.html" rel="nofollow noreferrer"><code>pandas.read_csv</code></a> to load a CSV file. I want to have my file read mostly as-is to avoid any <a href="https://stackoverflow.com/q/16988526/3357935">automatic data type conversions</a>, except for a specific column (<code>send_date</code>) I want parsed as a datetime.</p>
<p>The reason I want most columns read as strings or objects is to preserve data like zip codes with leading zeros (04321) and Boolean-like values (<code>true</code>, <code>false</code>, <code>unknown</code>) that are stored as strings.</p>
<h2>Problem</h2>
<p>Using <code>read_csv</code> without specifying <code>dtype</code> causes unwanted type conversions.</p>
<pre><code>df = pandas.read_csv("test.csv", parse_dates=['send_date'])
# name: Madeline (type: object) - correct
# zip_code: 4321 (type: int64) - wrong (missing leading 0)
# send_date: 2025-04-13 00:00:00 (type: datetime64[ns]) - correct
# is_customer: True (type: bool) - wrong (not a string)
</code></pre>
<p>Using <code>dtype=object</code> correctly preserves <code>zip_code</code> and <code>is_customer</code> as string-like values, but it prevents <code>send_date</code> from being set to type <code>datetime64[ns]</code>.</p>
<pre><code>df = pandas.read_csv("test.csv", dtype=object, parse_dates=['send_date'])
# name: Madeline (type: object) - correct
# zip_code: 04321 (type: object) - correct
# send_date: 2025-04-13 00:00:00 (type: object) - wrong (not datetime)
# is_customer: true (type: object) - correct
</code></pre>
<p>Manually setting the <code>dtype</code> for <code>send_date</code> to <code>datetime64</code> raises an error.</p>
<pre><code>df = pandas.read_csv("test.csv", dtype={"send_date":"datetime64"}, parse_dates=['send_date'])
# TypeError: the dtype datetime64 is not supported for parsing, pass this column using parse_dates instead
</code></pre>
<p>Setting <code>dtype=str</code> causes <code>send_date</code> to be interpreted as an integer timestamp.</p>
<pre><code>df = pandas.read_csv("test.csv", dtype=str, parse_dates=['send_date'])
# name: Madeline (type: object) - correct
# zip_code: 04321 (type: object) - correct
# send_date: 1744502400000000000 (type: object) - wrong (not a date)
# is_customer: true (type: object) - correct
</code></pre>
<h2>Sample Data (test.csv)</h2>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>name</th>
<th>zip_code</th>
<th>send_date</th>
<th>is_customer</th>
</tr>
</thead>
<tbody>
<tr>
<td>Madeline</td>
<td>04321</td>
<td>2025-04-13</td>
<td>true</td>
</tr>
<tr>
<td>Theo</td>
<td>32255</td>
<td>2025-04-08</td>
<td>true</td>
</tr>
<tr>
<td>Granny</td>
<td>84564</td>
<td>2025-04-15</td>
<td>false</td>
</tr>
</tbody>
</table></div>
<h2>Desired output</h2>
<ul>
<li>name: Madeline (type: object)</li>
<li>zip_code: 04321 (type: object)</li>
<li>send_date: 2025-04-13 00:00:00 (type: datetime64[ns])</li>
<li>is_customer: true (type: object)</li>
</ul>
<h2>Attempted Code</h2>
<pre class="lang-py prettyprint-override"><code>import pandas
def print_first_row_value_and_dtype(df: pandas.DataFrame):
row = df.iloc[0]
for col in df.columns:
print(f"{col}: {row[col]} (type: {df[col].dtype})")
filename = 'test.csv'
df = pandas.read_csv(filename, parse_dates=['send_date'])
print_first_row_value_and_dtype(df)
df = pandas.read_csv(filename, dtype=object, parse_dates=['send_date'])
print_first_row_value_and_dtype(df)
df = pandas.read_csv(filename, dtype=str, parse_dates=['send_date'])
print_first_row_value_and_dtype(df)
dtypes = {"name":"object", "zip_code":"object", "send_date":"datetime64", "is_customer":"object"}
df = pandas.read_csv(filename, dtype=dtypes, parse_dates=['send_date']) # raises TypeError
</code></pre>
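<p>For comparison, a two-step workaround sketch (read everything as strings, then convert the one column afterwards); the question below asks whether this can be done in a single call:</p>
<pre class="lang-py prettyprint-override"><code>df = pandas.read_csv(filename, dtype=str)               # no parse_dates here
df['send_date'] = pandas.to_datetime(df['send_date'])   # convert the one column afterwards
print_first_row_value_and_dtype(df)
</code></pre>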
<h2>Question</h2>
<p>How can I make <code>pandas.read_csv()</code> parse one column (<code>send_date</code>) as a datetime while treating all other columns as strings or objects to avoid unwanted data type conversions?</p>
|
<python><python-3.x><pandas><dataframe><dtype>
|
2025-05-28 17:50:21
| 1
| 27,724
|
Stevoisiak
|
79,642,719
| 44,330
|
Why make cls a positional-only argument in __init_subclass__?
|
<p>The <a href="https://docs.python.org/3/reference/datamodel.html#object.__init_subclass__" rel="nofollow noreferrer">docs for the <code>__init_subclass__</code> hook</a> (see <a href="https://peps.python.org/pep-0487/#proposal" rel="nofollow noreferrer">PEP-0487</a> for background) give this example:</p>
<pre><code>class Philosopher:
def __init_subclass__(cls, /, default_name, **kwargs):
super().__init_subclass__(**kwargs)
cls.default_name = default_name
class AustralianPhilosopher(Philosopher, default_name="Bruce"):
pass
</code></pre>
<p>The <code>/</code> in the method signature threw me for a loop (difficult to search for secondary uses of operators) but I finally ran across the concept of <a href="https://docs.python.org/3.10/glossary.html#term-argument" rel="nofollow noreferrer">positional-only parameters</a>.</p>
<p>Why is it used here, in <code>__init_subclass__</code>? Does this mean that all class methods with <code>cls</code> or <code>self</code> as initial arguments should be using <code>/</code>? I thought Python had a "we're-all-adults-here" philosophy where class designers don't have to be bulletproof against malicious use.</p>
|
<python><parameters><positional-argument>
|
2025-05-28 17:21:16
| 1
| 190,447
|
Jason S
|
79,642,441
| 1,900,950
|
Explicitly stating the type of a loop variable
|
<p>I'm quite new to Python (and its duck typing), but am currently working on a Python project that needs serious refactoring. In a <code>for</code> loop, I need to discriminate between items of a list based on their type, and call functions defined in the derived class. What is the Python way of doing this? Consider this:</p>
<pre><code>from typing import List
class A:
def __init__(self):
pass
class B(A):
def my_function(self):
print("I'm of type B.")
collection: List[A] = [A(), B(), A(), B(), B()]
# select only the B items and call their specific function
for object in [item for item in collection if isinstance(item, B)]:
object.my_function() # <- no help from the IDE here
</code></pre>
<p>In that last line, <code>object</code> is expected to be of type <code>A</code>. Since <a href="https://stackoverflow.com/a/41641489/1900950">type hinting the loop variable is not allowed</a>, I could introduce a more specific variable, because I already did the type checking:</p>
<pre><code>for object in [item for item in collection if isinstance(item, B)]
b: B = object # a "sort-of" cast
b.my_function() # <- IDE recognizes b to be of type B
</code></pre>
<p>I'd like to only use the means provided by Python itself, not PyCharm or similar tools. Is there a better way of achieving type hinting support besides introducing a type-hinted extra variable?</p>
|
<python><python-typing>
|
2025-05-28 14:31:29
| 0
| 346
|
hschmauder
|
79,642,222
| 10,886,283
|
How to compute the power spectral density of a vector-valued process without mirroring the autocorrelation function?
|
<p>I'm simulating a 2D Ornstein-Uhlenbeck process (Langevin equation for velocity), and I'm interested in computing the power spectral density (PSD) of the vector-valued velocity process.</p>
<p>Following the Wiener–Khinchin theorem, I compute the velocity autocorrelation function (VACF) defined as:</p>
<p>$$
R(\tau) = \frac{1}{T - \tau} \int_0^{T - \tau} \langle \mathbf{v}(t) \cdot \mathbf{v}(t + \tau) \rangle \, dt
$$</p>
<p>Then I apply the Fast Fourier Transform (FFT) to obtain the PSD. However, since VACF is only computed for positive lags $\tau \geq 0$, I need to mirror the VACF manually to make it symmetric before applying <code>np.fft.fft</code>.</p>
<p>That mirroring step feels like a workaround, and I'm wondering:</p>
<ul>
<li>Is there a correct or more natural way to compute the PSD of a vector-valued process without having to explicitly clone the VACF?</li>
<li>Or alternatively, is there a shortcut that computes the PSD directly from time series without computing the VACF at all?</li>
</ul>
<p>Below is my working example. I calculate the VACF as the average dot product over time for all trajectories, and then mirror it to compute the PSD via FFT.</p>
<p><strong>1. Simulation of 2D Langevin trajectories</strong></p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import scipy.signal
np.random.seed(0)
n = 10_000 # Number of time steps
dt = .1 # Time step size (delta t)
dim = 2 # Dimension of the velocity vector (2D: x and y)
N = 10 # Ensemble size
gamma = 1 # Damping coefficient in Langevin equation
sigma = 1 # Noise intensity
dW = np.random.normal(scale=sigma * np.sqrt(dt), size=(n, dim, N)) # Gaussian noise
v = np.zeros((n, dim, N)) # Velocity - shape (time, dimension, trajectories)
r = np.zeros((n, dim, N)) # Position - shape (time, dimension, trajectories)
# Integrate the Langevin equation using the Euler-Maruyama method
for i in range(n - 1):
v[i + 1] = v[i] - gamma * v[i] * dt + dW[i] # Update velocity: damping + noise
r[i + 1] = r[i] + v[i] * dt # Update position: Euler integration of velocity
</code></pre>
<p><strong>2. VACF and PSD computation</strong></p>
<pre><code># Computes the VACF: ⟨v(t) · v(t + τ)⟩
def manual_vacf(v, lag):
vacf = [] # VACF for the ensemble
for i in range(v.shape[-1]): # Loop over trajectories
v_ = v[..., i] # Select trajectory i (shape: [time, dim])
vacf_ = np.empty(lag) # VACF of current trajectory
for lag_ in range(1, lag + 1): # Loop over lag values
v1, v2 = v_[:-lag_], v_[lag_:] # Vectors at t and t+lag
v1v2 = (v1 - v1.mean(axis=0)) * (v2 - v2.mean(axis=0)) # Remove mean and take element wise product
v1_dot_v2 = np.sum(v1v2, axis=1) # Dot product v(t) · v(t + τ)
vacf_[lag_ - 1] = np.mean(v1_dot_v2) # Time average
vacf.append(vacf_)
return np.mean(vacf, axis=0) # Ensemble average
# Mirrors the VACF and computes the PSD using the Wiener–Khinchin theorem
def psd_wiener_khinchin(vacf, lag, dt):
vacf_mirror = np.hstack([np.flip(vacf), vacf]) # Extend VACF to negative lags
ft = np.fft.fft(vacf_mirror)[:lag] # FFT and keep only non-negative frequencies
freq = np.fft.fftfreq(2 * lag, dt)[:lag] # Frequency values
psd = np.abs(ft) * dt # PSD (magnitude × dt)
return psd, freq
# === vacf ===
lag_time = 4
lag = int(lag_time / dt)
tau = dt * np.arange(lag)
vacf_manual = manual_vacf(v, lag)
tau_theory = dt * np.arange(-lag, lag)
vacf_theory = theoretical_vacf(tau_theory, gamma, sigma)
# === psd ===
psd_manual, freq_manual = psd_wiener_khinchin(vacf_manual, lag, dt)
psd_theory = theoretical_psd(freq_manual, gamma, sigma)
</code></pre>
<p><strong>3. Plotting</strong></p>
<pre><code>fig, axs = plt.subplots(1, 3, figsize=(10, 2))
# first 3 simulated trajectories
for i in range(3):
axs[0].plot(*r[..., i].T)
axs[0].axis('equal')
axs[0].set_xlabel('x')
axs[0].set_ylabel('y')
# vacf
axs[1].plot(tau, vacf_manual)
axs[1].plot(tau_theory, vacf_theory)
axs[1].set_xlim(-lag_time, lag_time)
axs[1].set_xlabel('lag time')
axs[1].set_ylabel('VACF')
# psd
axs[2].plot(freq_manual, psd_manual, label='simulation')
axs[2].plot(freq_manual, psd_theory, label='theory')
axs[2].set_yscale('log')
axs[2].set_xscale('log')
axs[2].set_xlabel('frequency')
axs[2].set_ylabel('PSD')
axs[2].legend()
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/65WMZG4B.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/65WMZG4B.png" alt="enter image description here" /></a></p>
|
<python><scipy><fft><autocorrelation><spectral-density>
|
2025-05-28 12:36:01
| 1
| 509
|
alpelito7
|
79,641,847
| 1,866,038
|
Reducing PostgreSQL CPU load using sqlalchemy
|
<p>I am using a PostgreSQL database and a big pandas DataFrame (~3500 records) that has to be uploaded into one of the tables in the database.</p>
<p>First, I have to test for existing records in the database and, afterwards, upload non existing ones.</p>
<p>The problem is that the CPU gets heavily loaded and sometimes my old "server" crashes.</p>
<p>So I divided the queries into smaller ones.</p>
<p>In the case of the uploading, I use the <code>df.to_sql()</code> method with the option <code>chunksize=150</code>, so that the upload takes place in chunks of 150 records.</p>
<p>In the case of the testing for existing records, I use the column in the dataframe that is defined as primary key in the database table (it is of type datetime, or Timestamp). So I have a long <code>where</code> clause like this:</p>
<pre class="lang-sql prettyprint-override"><code>SELECT datetime FROM my_table
WHERE datetime = datetime_row1_df
OR datetime = datetime_row2_df
...
OR datetime = datetime_row3500_df
</code></pre>
<p>In order to reduce the CPU load, I divided this query into smaller ones, so that the first one tests for the existence in the database of the dataframe rows from 1 to 150, the second query tests for the existence of the dataframe rows from 151 to 300, and so on.</p>
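<p>Roughly, the chunked check looks like the sketch below (with assumed <code>engine</code> and <code>df</code> names; I show it with <code>ANY(...)</code> instead of the long OR chain, relying on the psycopg2 driver adapting a Python list to an array):</p>
<pre class="lang-py prettyprint-override"><code>from sqlalchemy import text

chunk_size = 150
timestamps = df['datetime'].tolist()
existing = set()
with engine.connect() as conn:
    for start in range(0, len(timestamps), chunk_size):
        chunk = timestamps[start:start + chunk_size]
        rows = conn.execute(
            text("SELECT datetime FROM my_table WHERE datetime = ANY(:ts)"),
            {"ts": chunk},
        )
        existing.update(r[0] for r in rows)
</code></pre>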
<p>So I have two questions:</p>
<ul>
<li>Regarding the <code>df.to_sql()</code> method, is it a good method to reduce the CPU load?</li>
<li>Regarding the partition of the <code>where</code> clause in the SQL query, I'm wondering if what I'm doing is putting even more strain on the CPU. I think that with the first chunk, the server seeks the entire datatable, with the second one, again an entire search, and so on. Would it be better to keep it as a single and long query? If the answer is affirmative, what alternatives do I have here in order to reduce CPU load for this query?</li>
</ul>
|
<python><pandas><postgresql><sqlalchemy>
|
2025-05-28 08:55:50
| 2
| 517
|
Antonio Serrano
|
79,641,660
| 2,829,863
|
Merge table cells in a column if the text of the cells is the same using python-docx library
|
<p>I am using python and the <a href="https://python-docx.readthedocs.io/en/latest/index.html" rel="nofollow noreferrer">python-docx</a> library to create tables in a docx file. I want to merge cell values in a column if the value in the previous row is the same as the current value. I also want to remove duplicate values in the merged cells. How to do this?</p>
<p><a href="https://i.sstatic.net/HawiW6Oy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HawiW6Oym.png" alt="explanation in picture" /></a></p>
|
<python><docx>
|
2025-05-28 06:54:25
| 1
| 787
|
Comrade Che
|
79,641,614
| 9,826,710
|
ModuleNotFoundError: No module named 'NSKeyedUnarchiver'
|
<p>I struggle with the error</p>
<pre><code>ModuleNotFoundError: No module named 'NSKeyedUnarchiver'
</code></pre>
<p>which raises on import, after installing the module <a href="https://pypi.org/project/NSKeyedUnArchiver/" rel="nofollow noreferrer"><code>NSKeyedUnArchiver</code></a> via <code>pip</code>.</p>
<p>Project setup (MacOS):</p>
<pre><code>$ python -m venv env
$ source env/bin/activate
$ which python
/Users/Abid/Entwicklung/python/test-nskeyedunarchiver/env/bin/python
$ which pip
/Users/Abid/Entwicklung/python/test-nskeyedunarchiver/env/bin/pip
$ pip install NSKeyedUnArchiver
$ pip freeze
NSKeyedUnArchiver==1.5.1
$ python -c "from nskeyedunarchiver import NSKeyedUnArchiver"
Traceback (most recent call last):
File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'nskeyedunarchiver'
$ python -c "import NSKeyedUnArchiver"
Traceback (most recent call last):
File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'NSKeyedUnArchiver'
</code></pre>
<p>Any ideas how to solve this?</p>
|
<python><python-3.x><pip><modulenotfounderror><nskeyedunarchiver>
|
2025-05-28 06:23:03
| 1
| 700
|
Abid
|
79,641,592
| 11,188,210
|
Hash function converted from JS to python hangs up on large strings
|
<p>I've been using this cyrb53 hash function in js from this answer:
<a href="https://stackoverflow.com/questions/7616461/generate-a-hash-from-string-in-javascript">Generate a Hash from string in Javascript</a></p>
<p>The idea is to create hashes from image strings like this:</p>
<pre><code>data:image/png;base64,iVBORI12FAZ6k2AAAAABJRU5ErkJggg=
</code></pre>
<p>I'm moving my project over to Python (my first Python project) and already have thousands of hashes defined using this algorithm, so for consistency I need to keep using that same function in Python. It'll work with the example string above, but on a full string of image data it hangs up. Whereas JS took milliseconds, Python sits for several minutes without completing.</p>
<p>Here's the JS function:</p>
<pre><code>const cyrb53 = (str, seed = 0) => {
let h1 = 0xdeadbeef ^ seed, h2 = 0x41c6ce57 ^ seed;
for(let i = 0, ch; i < str.length; i++) {
ch = str.charCodeAt(i);
h1 = Math.imul(h1 ^ ch, 2654435761);
h2 = Math.imul(h2 ^ ch, 1597334677);
}
h1 = Math.imul(h1 ^ (h1 >>> 16), 2246822507);
h1 ^= Math.imul(h2 ^ (h2 >>> 13), 3266489909);
h2 = Math.imul(h2 ^ (h2 >>> 16), 2246822507);
h2 ^= Math.imul(h1 ^ (h1 >>> 13), 3266489909);
return 4294967296 * (2097151 & h2) + (h1 >>> 0);
};
</code></pre>
<p>And here's my python conversion:</p>
<pre><code>def unsigned_right_shift(n, shift):
# Create a mask to simulate a 32-bit unsigned integer
mask = 0xFFFFFFFF
# Apply the mask to ensure the number is treated as unsigned
n &= mask
# Perform the right shift
result = n >> shift
return result
def cyrb53x(str, seed=0):
h1 = 0xdeadbeef ^ seed
h2 = 0x41c6ce57 ^ seed
for ch in str:
h1 = (h1 ^ ord(ch)) * 2654435761
h2 = (h2 ^ ord(ch)) * 1597334677
h1 = (h1 ^ unsigned_right_shift(h1 , 16)) * 2246822507
h1 ^= (h2 ^ unsigned_right_shift(h2 , 13)) * 3266489909
h2 = (h2 ^ unsigned_right_shift(h2 , 16)) * 2246822507
h2 ^= (h1 ^ unsigned_right_shift(h1 , 13)) * 3266489909
return 4294967296 * (2097151 & h2) + (h1 & 0xFFFFFFFF)
</code></pre>
<p>The strings passed to the function are very large, as they are base64-encoded images. Is this just a limitation of Python performance? With the short example input above, the result is 85900107939316 for both JS and Python, so the function does technically work. For one of my real-world examples, the string is over 1M characters.</p>
|
<python>
|
2025-05-28 06:08:44
| 1
| 2,052
|
Phaelax
|
79,641,447
| 8,278,075
|
Running Uvicorn on Mac command line results in error uvicorn: error: unrecognized arguments
|
<p>When I run my FastAPI app with the Uvicorn command, the command line args are not recognized:</p>
<pre class="lang-bash prettyprint-override"><code>(env) mydir$ uvicorn main:app --port 8000 --host 0.0.0.0 --reload
...
uvicorn: error: unrecognized arguments: main:app --port 8000 --host 0.0.0.0 --reload
...
</code></pre>
<p>I'm making sure I'm using the <code>uvicorn</code> in my venv by confirming with <code>which uvicorn</code>.</p>
<p>Running uvicorn 0.34.2 with CPython 3.11.2 on Darwin.</p>
<h2>What I tried, same result</h2>
<p>Uninstalling and reinstalling the entire venv.</p>
<p>Uninstalling and reinstalling only uvicorn.</p>
<p>Using Uvicorn with command line args:</p>
<pre class="lang-py prettyprint-override"><code>(env) mydir$ uvicorn main:app --port 8000 --host 0.0.0.0 --reload
INFO: Will watch for changes in these directories: ['/Users/XXX/Documents/mydir']
INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
INFO: Started reloader process [7067] using StatReload
usage: uvicorn [-h] [--logs-dir LOGS_DIR]
uvicorn: error: unrecognized arguments: main:app --port 8000 --host 0.0.0.0 --reload
</code></pre>
<p>Using Python module:</p>
<pre class="lang-py prettyprint-override"><code>(env) mydir$ python -m uvicorn main:app --port 8000 --host 0.0.0.0 --reload
INFO: Will watch for changes in these directories: ['/Users/XXX/Documents/mydir']
INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
INFO: Started reloader process [6489] using StatReload
usage: __main__.py [-h] [--logs-dir LOGS_DIR]
__main__.py: error: unrecognized arguments: main:app --port 8000 --host 0.0.0.0 --reload
</code></pre>
<p>Using Uvicorn without args:</p>
<pre class="lang-py prettyprint-override"><code>(env) mydir$ uvicorn main:app
usage: uvicorn [-h] [--logs-dir LOGS_DIR]
uvicorn: error: unrecognized arguments: main:app
(env) mydir$
</code></pre>
<h2>What I tried that works</h2>
<p>Running the Python main module directly, but it has no auto-reload option.</p>
<pre class="lang-py prettyprint-override"><code>python3 -m main
</code></pre>
<p>Hardcoding the command-line args in <code>main.py</code>. But I don't want to retain these settings for production.</p>
<pre class="lang-py prettyprint-override"><code>if __name__ == "__main__":
uvicorn.run(
"main:app",
host="0.0.0.0",
port=8000,
reload=True,
)
</code></pre>
|
<python><fastapi><argparse><uvicorn>
|
2025-05-28 03:08:25
| 2
| 3,365
|
engineer-x
|
79,641,403
| 188,331
|
Using the evaluate library to compute BertScore only uses 1 (busy) GPU
|
<p>I'm using <code>evaluate</code> library to evaluate the <code>BertScore</code>. Here are my codes:</p>
<pre><code>import evaluate
import numpy as np

bertscore = evaluate.load("bertscore")
bertscore_result = bertscore.compute(predictions=[sentence], references=references_sentence)
bertscore_avg = np.mean(bertscore_result["f1"])
print("BertScore:", bertscore_avg)
</code></pre>
<p>I am using two GeForce RTX 4090 GPUs, each with 24GB of RAM. Here is the output of <code>nvidia-smi</code>:</p>
<pre><code>Wed May 28 00:11:09 2025
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.183.01 Driver Version: 535.183.01 CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA GeForce RTX 4090 Off | 00000000:81:00.0 Off | Off |
| 0% 31C P8 21W / 450W | 14MiB / 24564MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
| 1 NVIDIA GeForce RTX 4090 Off | 00000000:C1:00.0 Off | Off |
| 30% 28C P8 30W / 450W | 24004MiB / 24564MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
| 0 N/A N/A 1720 G /usr/lib/xorg/Xorg 4MiB |
| 1 N/A N/A 1720 G /usr/lib/xorg/Xorg 15MiB |
| 1 N/A N/A 1896 G /usr/bin/gnome-shell 10MiB |
| 1 N/A N/A 619637 C /opt/tljh/user/bin/python 23962MiB |
+---------------------------------------------------------------------------------------+
</code></pre>
<p>As you can see the GPU unit 1 is almost occupied, but the GPU unit 0 is idle. However, the code insists using GPU unit 1 and results in:</p>
<blockquote>
<p>RuntimeError: CUDA error: out of memory</p>
</blockquote>
<p>Of course I can kill the other GPU-intensive process, but I would like to explore whether the <code>evaluate</code> library is smart enough to utilize the idle GPU unit.</p>
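<p>For reference, an environment-level workaround sketch (it assumes the idle unit is index 0, as in the <code>nvidia-smi</code> output above, and it pins the whole process rather than letting <code>evaluate</code> pick a GPU by itself):</p>
<pre><code>import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"   # must be set before torch / evaluate initialise CUDA

import evaluate
bertscore = evaluate.load("bertscore")
</code></pre>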
<p>Is this ever possible? Thanks in advance.</p>
|
<python><tensorflow><gpu><huggingface-evaluate>
|
2025-05-28 01:53:18
| 1
| 54,395
|
Raptor
|
79,641,325
| 1,754,273
|
win32com pywintypes.datetime in Pandas DataFrame
|
<p>I am using <code>win32com</code> to pull some data out of large excel files and am running into an issue with <code>pywintypes.datetime</code> and subsequent Pandas DataFrame creation. <code>DataBody = tbl.DataBodyRange()</code> gives a large tuple which I convert into a numpy array using <code>DataBody_array = np.array(DataBody)</code>.</p>
<p>The issue is that the <code>.DataBodyRange()</code> from <code>win32com</code> gets the time data column as <code>pywintypes.datetime</code> and when I try and get this array into a pandas dataframe directly, there is an <code>AttributeError: 'NoneType' object has no attribute 'total_seconds'</code></p>
<p>Based on my understanding, <code>pywintypes.datetime</code> are COM objects representing date/time in windows, but when working with NumPy, these should get converted to <code>numpy.datetime64</code>. <strong>What would be the best method to convert these objects from <code>pywintypes.datetime</code> to <code>numpy.datetime64</code>?</strong> Since the excel tables contain a large amount of data I would like to avoid iteration, I was thinking of using something like <code>np.where</code> but I am struggling to get it to do the conversion/replacements.</p>
<p>Below is a full sample of the code and an image of the very simple excel table for the example.</p>
<p><strong>Python Code</strong></p>
<pre><code>import win32com.client as MyWinCOM
import numpy as np
import pandas as pd
# Open excel instance
xl = MyWinCOM.gencache.EnsureDispatch('Excel.Application')
xl.Visible = True
# Open workbook
wb = xl.Workbooks.Open('Test Excel.xlsx')
# Get worksheet object
ws = wb.Worksheets('Sheet1')
# Get table object
tbl = ws.ListObjects('Table1')
# Get the table headers, convert to list
ColumnNames = tbl.HeaderRowRange()
ColumnNamesList = list(np.array(ColumnNames))
# Get the data body range, convert to np array
DataBody = tbl.DataBodyRange()
DataBody_array = np.array(DataBody)
# Print databody tuple
for item in DataBody:
print (item)
# Print databody array
for item in DataBody_array:
print (item)
# Attempt to put into dataframe
Data_df = pd.DataFrame(DataBody_array, columns=ColumnNamesList)
print (Data_df)
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>('a', pywintypes.datetime(2022, 6, 22, 0, 0, tzinfo=TimeZoneInfo('GMT Standard Time', True)), 1.0)
('b', pywintypes.datetime(2023, 10, 18, 0, 0, tzinfo=TimeZoneInfo('GMT Standard Time', True)), None)
('c', pywintypes.datetime(2022, 6, 21, 0, 0, tzinfo=TimeZoneInfo('GMT Standard Time', True)), 3.0)
['a'
pywintypes.datetime(2022, 6, 22, 0, 0, tzinfo=TimeZoneInfo('GMT Standard Time', True))
1.0]
['b'
pywintypes.datetime(2023, 10, 18, 0, 0, tzinfo=TimeZoneInfo('GMT Standard Time', True))
None]
['c'
pywintypes.datetime(2022, 6, 21, 0, 0, tzinfo=TimeZoneInfo('GMT Standard Time', True))
3.0]
</code></pre>
<p><strong>Excel File/Table</strong>
<a href="https://i.sstatic.net/byohECUr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/byohECUr.png" alt="Test Excel.xlsx" /></a></p>
<p><strong>Traceback</strong></p>
<pre><code>Traceback (most recent call last):
File "***.py", line 32, in <module>
print (Data_df)
File "C:\Program Files\Python39\lib\site-packages\pandas\core\frame.py", line 1011, in __repr__
return self.to_string(**repr_params)
File "C:\Program Files\Python39\lib\site-packages\pandas\core\frame.py", line 1192, in to_string
return fmt.DataFrameRenderer(formatter).to_string(
File "C:\Program Files\Python39\lib\site-packages\pandas\io\formats\format.py", line 1128, in to_string
string = string_formatter.to_string()
File "C:\Program Files\Python39\lib\site-packages\pandas\io\formats\string.py", line 25, in to_string
text = self._get_string_representation()
File "C:\Program Files\Python39\lib\site-packages\pandas\io\formats\string.py", line 40, in _get_string_representation
strcols = self._get_strcols()
File "C:\Program Files\Python39\lib\site-packages\pandas\io\formats\string.py", line 31, in _get_strcols
strcols = self.fmt.get_strcols()
File "C:\Program Files\Python39\lib\site-packages\pandas\io\formats\format.py", line 611, in get_strcols
strcols = self._get_strcols_without_index()
File "C:\Program Files\Python39\lib\site-packages\pandas\io\formats\format.py", line 875, in _get_strcols_without_index
fmt_values = self.format_col(i)
File "C:\Program Files\Python39\lib\site-packages\pandas\io\formats\format.py", line 889, in format_col
return format_array(
File "C:\Program Files\Python39\lib\site-packages\pandas\io\formats\format.py", line 1316, in format_array
return fmt_obj.get_result()
File "C:\Program Files\Python39\lib\site-packages\pandas\io\formats\format.py", line 1347, in get_result
fmt_values = self._format_strings()
File "C:\Program Files\Python39\lib\site-packages\pandas\io\formats\format.py", line 1810, in _format_strings
values = self.values.astype(object)
File "C:\Program Files\Python39\lib\site-packages\pandas\core\arrays\datetimes.py", line 666, in astype
return dtl.DatetimeLikeArrayMixin.astype(self, dtype, copy)
File "C:\Program Files\Python39\lib\site-packages\pandas\core\arrays\datetimelike.py", line 415, in astype
converted = ints_to_pydatetime(
File "pandas\_libs\tslibs\vectorized.pyx", line 158, in pandas._libs.tslibs.vectorized.ints_to_pydatetime
File "pandas\_libs\tslibs\timezones.pyx", line 266, in pandas._libs.tslibs.timezones.get_dst_info
AttributeError: 'NoneType' object has no attribute 'total_seconds'
</code></pre>
|
<python><pandas><numpy><win32com>
|
2025-05-27 23:26:08
| 1
| 5,554
|
Radical Edward
|
79,641,126
| 17,411,406
|
ModelViewSet does not override DEFAULT_PERMISSION_CLASSES
|
<p>Hello, I'm working on making all URLs require the user to be authenticated, but I want some URLs to be accessible publicly, so I use <code>permission_classes = [AllowAny]</code> and <code>authentication_classes = ([])</code> to override the default configuration. This works in an APIView but not in <code>viewsets.ModelViewSet</code>. Why?</p>
<p>settings.py</p>
<pre><code>REST_FRAMEWORK = {
'DEFAULT_PERMISSION_CLASSES': [
'rest_framework.permissions.IsAuthenticated',
],
'DEFAULT_FILTER_BACKENDS': [
'django_filters.rest_framework.DjangoFilterBackend',
],
'DEFAULT_AUTHENTICATION_CLASSES': [
'rest_framework_simplejwt.authentication.JWTAuthentication',
],
'DATETIME_FORMAT': "%Y-%m-%d %H:%M:%S",
}
</code></pre>
<p>views.py</p>
<pre><code>class ToolsListViewSet(viewsets.ModelViewSet):
serializer_class = ToolsListSerializer
permission_classes = [AllowAny]
authentication_classes = ([])
pagination_class = None
def get_queryset(self):
return Tools.objects.filter(is_active=True)
</code></pre>
<p>error</p>
<pre><code>{
"detail": "Authentication credentials were not provided."
}
</code></pre>
|
<python><django><django-rest-framework>
|
2025-05-27 19:56:23
| 2
| 307
|
urek mazino
|
79,641,119
| 13,860,719
|
How to broadcast operation to Numpy array of objects?
|
<p>Say I have a Numpy array of 500 lists with random sizes ranging from 0 to 9:</p>
<pre><code>import numpy as np
a = np.array([[i for i in range(np.random.randint(10))] for _ in range(500)], dtype=object)
</code></pre>
<p>Now I want to append a value <code>100</code> to indices <code>[0,10,20,30,40,50]</code>, I tried to apply a function to each list in the array:</p>
<pre><code>func = np.vectorize(lambda x: x + [100])
a[[0,10,20,30,40,50]] = func(a[[0,10,20,30,40,50]])
</code></pre>
<p>but I get <code>ValueError: setting an array element with a sequence.</code></p>
<p>Is there any way I can broadcast operations to all objects (with different sizes) in a Numpy array? In my case I usually have up to ~50,000 indices. Using a normal for loop would be too slow. I'm thinking maybe converting the array to a sparse matrix with equal sizes of rows if it's more efficient that way?</p>
|
<python><arrays><list><numpy><array-broadcasting>
|
2025-05-27 19:51:52
| 1
| 2,963
|
Shaun Han
|
79,641,078
| 4,659,442
|
Pylance type error supplying SQLAlchemy NVARCHAR length
|
<p>Pylance is raising a type error when I include a length for the <code>NVARCHAR</code> column in the below script:</p>
<pre class="lang-py prettyprint-override"><code>import urllib.parse
import uuid
import pandas as pd
from sqlalchemy import create_engine
from sqlalchemy.types import NVARCHAR
from sqlalchemy.dialects.mssql import UNIQUEIDENTIFIER
# CREATE DATABASE CONNECTION
connection = create_engine(
f"mssql+pyodbc:///?odbc_connect={
urllib.parse.quote_plus(
'DRIVER=' + <driver details here> + ';'
'SERVER=' + <server details here> + ';'
'DATABASE=' + <db details here> + ';'
'UID=' + <uid details here> + ';'
'PWD=' + <pwd details here> + ';'
'Authentication=' + <auth details here>
)
}"
)
# CREATE DATAFRAME
df = pd.DataFrame(
columns=['id', 'name'],
data=[
[str(uuid.uuid4()), 'hello'],
[str(uuid.uuid4()), 'world'],
[str(uuid.uuid4()), 'foo'],
[str(uuid.uuid4()), 'bar'],
[str(uuid.uuid4()), 'baz'],
]
)
# SAVE DATA
df.to_sql(
'test',
con=connection,
dtype={
'id': UNIQUEIDENTIFIER,
'name': NVARCHAR(length=1024),
},
index=False,
)
</code></pre>
<p>Specifically, it raises <code>Argument of type "dict[str, type[UNIQUEIDENTIFIER[_UUID_RETURN@UNIQUEIDENTIFIER]] | NVARCHAR]" cannot be assigned to parameter "dtype" of type "DtypeArg | None" in function "to_sql"</code>.</p>
<p>If I remove the <code>length</code> parameter on <code>NVARCHAR</code> the error goes away, but I need to be able to specify the length.</p>
<p>Am I doing something wrong or is this a bug in Pyright/Pylance?</p>
|
<python><pandas><sqlalchemy><python-typing><pyright>
|
2025-05-27 19:13:03
| 0
| 727
|
philipnye
|
79,640,701
| 5,322,739
|
How can I specify Python type hint for a tuple whose first element is Callable and subsequent arguments are Any?
|
<p>How can I specify a Python type hint for a tuple whose first element is <code>Callable</code> and subsequent elements are <code>Any</code>?</p>
<p>I tried <code>var: tuple[Callable, Any, ...]</code>, but got the following warning in VS Code:</p>
<p><code>"..." is allowed only as the second of two arguments</code></p>
<p>I'm using Python 12.10.3 on Ubuntu 24.04.</p>
|
<python><python-typing>
|
2025-05-27 14:44:03
| 0
| 532
|
Geoff Alexander
|
79,640,638
| 29,295,031
|
How to pass RunnableConfig as an arg when calling the LangChain invoke method
|
<p>I'm trying to make a RAG application using LangChain and Streamlit. I'm trying to manage the chat history within this app, but I came across an unexpected issue, which I will explain:</p>
<pre><code>from langchain.schema.runnable import RunnableConfig
from langchain.callbacks.tracers.run_collector import RunCollectorCallbackHandler
def create_full_chain(retriever, groq_api_key=None, chat_memory=ChatMessageHistory()):
model = get_model("Groq", groq_api_key=groq_api_key)
# model = get_model()
system_prompt = """my own promt.
Context: {context}
Question: """
prompt = ChatPromptTemplate.from_messages(
[
("system", system_prompt),
("human", "{question}"),
]
)
rag_chain = make_rag_chain(model, retriever,rag_prompt=prompt) #add by youssef rag_prompt=prompt
chain = create_memory_chain(model, rag_chain, chat_memory)
return chain
def ask_question(chain, query,config=None):
response = chain.invoke(
{"question": query},
config={"configurable": {"session_id": "foo"},},
config=config
)
return response
</code></pre>
<p>Here is where I create a RunnableConfig:</p>
<pre><code>run_collector = RunCollectorCallbackHandler()
runnable_config = RunnableConfig(
callbacks=[run_collector],
tags=["Streamlit Chat"],
)
</code></pre>
<p>Now, when the user asks a question, this method executes:</p>
<pre><code>response = ask_question(chain, prompt,config=runnable_config)
</code></pre>
<p>Now my issue is that I have to pass two args with the same name, <code>config</code>, to the <code>invoke</code> method: <code>config={"configurable": {"session_id": "foo"}}</code> is managed by the memory (chat history, <code>RunnableWithMessageHistory</code>), and to be able to pass <code>runnable_config</code> I also have to pass it as <code>config=runnable_config</code> inside the <code>invoke</code> method. Does anyone have any idea how to resolve this issue, please?
Thanks</p>
|
<python><streamlit><langchain><langsmith>
|
2025-05-27 14:00:19
| 1
| 401
|
user29295031
|
79,640,542
| 13,860,719
|
Fastest way to find the least amount of subsets that sum up to the total set in Python
|
<p>Say I have a dictionary of sets like this:</p>
<pre><code>d = {'a': {1,2,8}, 'b': {3,1,2,6}, 'c': {0,4,1,2}, 'd': {9}, 'e': {2,5},
'f': {4,8}, 'g': {0,9}, 'h': {7,2,3}, 'i': {5,6,3}, 'j': {4,6,8}}
</code></pre>
<p>Each set represents a subset of a total set <code>s = set(range(10))</code>. I would like an efficient algorithm to find the <strong>least</strong> amount of keys that make up the whole set, and return an empty list if it's not possible by any combinations of keys. If there are many possible combinations that have the least amount of keys to sum up to the whole set, I only need one combination and it can be any one of those.</p>
<p>So far I am using an exhaustive approach that checks all possible combinations and then take the combination that has the least amount of keys.</p>
<pre><code>import copy
def append_combinations(combo, keys):
for i in range(len(keys)):
new_combo = copy.copy(combo)
new_combo.append(keys[i])
new_keys = keys[i+1:]
if {n for k in new_combo for n in d[k]} == s:
valid_combos.append(new_combo)
append_combinations(new_combo, new_keys)
valid_combos = []
combo = []
keys = sorted(d.keys())
append_combinations(combo, keys)
sorted_combos = sorted(valid_combos, key=lambda x: len(x))
print(sorted_combos[0])
# ['a', 'c', 'd', 'h', 'i']
</code></pre>
<p>However, this becomes very expensive when the dictionary has many keys (in practice I will have around 100 keys). Any suggestions for a faster algorithm?</p>
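<p>For comparison, a greedy heuristic is often used as a fast baseline for this kind of covering problem. To be clear, the sketch below is only an approximation and is not guaranteed to return the minimum number of keys, so it is not a full answer to the question above:</p>
<pre><code>def greedy_cover(d, s):
    """Repeatedly pick the key covering the most still-uncovered elements."""
    uncovered, chosen = set(s), []
    while uncovered:
        best = max(d, key=lambda k: len(d[k].intersection(uncovered)))
        newly_covered = d[best].intersection(uncovered)
        if not newly_covered:
            return []  # the remaining elements cannot be covered at all
        chosen.append(best)
        uncovered.difference_update(newly_covered)
    return chosen

d = {'a': {1,2,8}, 'b': {3,1,2,6}, 'c': {0,4,1,2}, 'd': {9}, 'e': {2,5},
     'f': {4,8}, 'g': {0,9}, 'h': {7,2,3}, 'i': {5,6,3}, 'j': {4,6,8}}
s = set(range(10))
print(greedy_cover(d, s))  # a valid cover, but possibly larger than the optimum
</code></pre>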
|
<python><algorithm><performance><dictionary><set>
|
2025-05-27 13:02:34
| 3
| 2,963
|
Shaun Han
|
79,640,494
| 1,779,895
|
Inexplicit random state in statsmodels holtwinters exponential smoothing?
|
<p>I am getting different results locally and in Databricks, most probably from the fitting of <code>statsmodels.tsa.holtwinters.ExponentialSmoothing</code>. Based on the <a href="https://www.statsmodels.org/stable/generated/statsmodels.tsa.holtwinters.ExponentialSmoothing.html" rel="nofollow noreferrer">documentation</a>, one can't directly pass a random state, but that doesn't guarantee that some implicit random state isn't used down the road, potentially causing the discrepancy.</p>
<p>Can you help me investigate? Any suggestions are welcome.</p>
<p>My ideas include trying</p>
<pre class="lang-py prettyprint-override"><code>np.random.seed(42)
random.seed(42)
os.environ['PYTHONHASHSEED'] = '42'
</code></pre>
<p>but it looks a bit ugly.</p>
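<p>As a starting point for the investigation, a minimal determinism check could look like the sketch below (the series, frequency and model settings are made-up placeholders, not my real data):</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Fit the same model twice on identical data and compare the results; any
# difference would point at a hidden source of randomness.
y = pd.Series(np.sin(np.arange(48) / 6.0) + 10.0,
              index=pd.date_range("2020-01-01", periods=48, freq="MS"))

fits = [
    ExponentialSmoothing(y, trend="add", seasonal="add", seasonal_periods=12).fit()
    for _ in range(2)
]
print(fits[0].sse == fits[1].sse)
print((fits[0].fittedvalues == fits[1].fittedvalues).all())
</code></pre>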
|
<python><statsmodels><random-seed><holtwinters><exponential-smoothing>
|
2025-05-27 12:37:25
| 0
| 817
|
Veliko
|
79,640,369
| 5,452,365
|
How to initialize a dataframe without inferring the schema in polars?
|
<p>The current <code>pl.DataFrame(iterable, infer_schema_length=None)</code> and <code>pl.from_dicts(iterable, infer_schema_length=None)</code> are unreliable, and the documentation is also ambiguous (it reads: "The maximum number of rows to scan for schema inference. If set to None, the full data may be scanned (this is slow).")</p>
<p>For my example it is not scanning the full data and hence not including all the columns.</p>
<p>Is there any way in polars to reliably read the iterable, either over its full length or without inferring the schema?</p>
<p>Pandas infers the schema correctly (it reads all the rows).</p>
<p>Currently I'm using</p>
<p><code>return pl.from_dicts(iterable, infer_schema_length=999999999)</code></p>
<p>but there should be a better way.</p>
|
<python><dataframe><python-polars>
|
2025-05-27 11:20:45
| 1
| 11,652
|
Rahul
|
79,640,192
| 4,083,037
|
PNG image to vector graphic conversion method with recognition of 1px lines?
|
<p>I am trying to find a way to convert a .png image with 1px lines to a vector file. The issue I am having is that my approaches are very inconsistent due to the thin lines I am trying to recognize.</p>
<p>One idea was to use potrace, but I get very blobby results.</p>
<p>The other was to use OpenCV, with the result that OpenCV does not recognize all lines.</p>
<p>An example file I am trying to convert is this:</p>
<p><a href="https://geoservices.bayern.de/od/wms/alkis/v1/parzellarkarte?SERVICE=WMS&VERSION=1.3.0&REQUEST=GetMap&LAYERS=by_alkis_parzellarkarte_umr_schwarz&STYLES=&FORMAT=image/jpeg&CRS=EPSG:4326&BBOX=48.140000,11.560000,48.145000,11.565000&WIDTH=2048&HEIGHT=2048" rel="nofollow noreferrer">https://geoservices.bayern.de/od/wms/alkis/v1/parzellarkarte?SERVICE=WMS&VERSION=1.3.0&REQUEST=GetMap&LAYERS=by_alkis_parzellarkarte_umr_schwarz&STYLES=&FORMAT=image/jpeg&CRS=EPSG:4326&BBOX=48.140000,11.560000,48.145000,11.565000&WIDTH=2048&HEIGHT=2048</a></p>
<p>Is there any other way to convert this to a vector file automatically?</p>
<p>I gave OpenCV another try and got much further:</p>
<pre class="lang-py prettyprint-override"><code>import sys
import cv2
import numpy as np
import svgwrite
import os
def raster_to_svg(input_path, output_path=None, simplify=False):
img = cv2.imread(input_path, cv2.IMREAD_GRAYSCALE)
if img is None:
print(f"Fehler: Bild konnte nicht geladen werden: {input_path}")
return
inv = cv2.bitwise_not(img)
_, thresh = cv2.threshold(inv, 127, 255, cv2.THRESH_BINARY)
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
height, width = img.shape
if not output_path:
output_path = os.path.splitext(input_path)[0] + "_white_areas.svg"
dwg = svgwrite.Drawing(output_path, size=(width, height))
for contour in contours:
if len(contour) < 3:
            continue  # not a valid polygon
points = [(int(p[0][0]), int(p[0][1])) for p in contour]
if simplify:
            epsilon = 1.0  # tolerance for polygon simplification
approx = cv2.approxPolyDP(contour, epsilon, True)
points = [(int(p[0][0]), int(p[0][1])) for p in approx]
dwg.add(dwg.polygon(points=points, fill='white', stroke='black', stroke_width=0.1))
dwg.save()
print(f"SVG gespeichert unter: {output_path}")
if __name__ == "__main__":
if len(sys.argv) < 2:
print("Verwendung: python raster_to_svg.py bild.png [output.svg]")
else:
input_file = sys.argv[1]
output_file = sys.argv[2] if len(sys.argv) > 2 else None
raster_to_svg(input_file, output_file)
</code></pre>
<p>But now I am struggling with the way the pixels are represented.
Source:
<a href="https://i.sstatic.net/oTmNx3vA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oTmNx3vA.png" alt="enter image description here" /></a></p>
<p>result:
<a href="https://i.sstatic.net/tCzp010y.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tCzp010y.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/tC40RVby.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tC40RVby.png" alt="enter image description here" /></a></p>
<p>Any idea how I could turn these into straight lines?</p>
<p>Thanks a lot!!</p>
|
<python><opencv><vector-graphics><potrace>
|
2025-05-27 09:35:54
| 1
| 746
|
pcace
|
79,639,750
| 2,780,906
|
How can I assign iterables to columns in pandas dataframes?
|
<p>I have a dataframe containing rows which describe financial stocks. The following is a simplified version:</p>
<pre><code>df = pd.DataFrame(
{
"stockprice": [100, 103, 240],
"Characteristic1": [1, 3, 3],
"Characteristic2": [5, 7, 1],
"Characteristic3": [1, 4, 6],
},
index=["Company A", "Company B", "Company C"],
)
# stockprice Characteristic1 Characteristic2 Characteristic3
# Company A 100 1 5 1
# Company B 103 3 7 4
# Company C 240 3 1 6
</code></pre>
<p>I would like to add a column where each cell contains a long dictionary generated from some of these characteristics - a series of cashflows. Later I will want to do some calculations on this generated dictionary.</p>
<p>Here is a sample function which generates the dictionary, and then the assign function to put it into my dataframe:</p>
<pre><code>def cashflow_series(ch1=1, ch2=2):
return {0: ch1, 0.5: ch2, 1: 7, 2: 8, 3: 9}
df.assign(
cashflows=lambda x: cashflow_series(
ch1=x["Characteristic1"], ch2=x["Characteristic3"]
)
)
</code></pre>
<p>This returns</p>
<pre><code> stockprice Characteristic1 Characteristic2 Characteristic3 cashflows
Company A 100 1 5 1 NaN
Company B 103 3 7 4 NaN
Company C 240 3 1 6 NaN
</code></pre>
<p>How can I fix this?</p>
<p>I want the new column 'cashflows' to contain a dictionary for each row, not a NaN.</p>
<p>I want something like this:</p>
<pre><code> stockprice Characteristic1 Characteristic2 Characteristic3 cashflows
Company A 100 1 5 1 {0:1,..3:9}
Company B 103 3 7 4 {0:3,..3:9}
Company C 240 3 1 6 {0:3,..3:9}
</code></pre>
|
<python><pandas><dataframe>
|
2025-05-27 02:50:30
| 2
| 397
|
Tim
|
79,639,578
| 10,262,805
|
Why does adding token and positional embeddings in transformers work?
|
<p>In transformer models, I've noticed that token embeddings and positional embeddings are added together before being passed into the attention layers:</p>
<pre><code>import torch
import torch.nn as nn
class TransformerModel(nn.Module):
def __init__(self,vocab_size,emb_dim,context_length,dropout_rate):
super().__init__()
self.tok_emb = nn.Embedding(vocab_size, emb_dim)
        self.pos_emb = nn.Embedding(context_length, emb_dim)
self.drop_emb = nn.Dropout(dropout_rate)
def forward(self,in_idx):
batch_size,seq_len=in_idx.shape
'''
You add token and positional embeddings element-wise to get a combined representation of:
- What the token is (meaning)
- Where it is (position)
For Example: "The cat sat."
tok_emb["The"] might give you the concept of "the"
pos_emb[0] tells you it's the first word
Combined: the model knows it's "the" at the start of the sentence.
'''
tok_embeds=self.tok_emb(in_idx)
pos_embeds=self.pos_emb(torch.arange(seq_len,device=in_idx.device))
x=tok_embeds + pos_embeds
</code></pre>
<p>But both are just vectors, and addition is elementwise, so how does this preserve both the word's identity and its position? Why does simple addition work so well in practice?</p>
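<p>A toy numerical illustration of the usual intuition (this is my own sketch, not taken from the model above): when the embedding dimension is large, a random "token" vector and a random "position" vector are nearly orthogonal, so their sum still carries both pieces of information and a linear readout can recover either one:</p>
<pre><code>import torch

torch.manual_seed(0)
emb_dim = 512  # made-up dimension for the illustration

tok = torch.randn(emb_dim)   # stands in for a token embedding
pos = torch.randn(emb_dim)   # stands in for a positional embedding
x = tok + pos                # the element-wise sum used in the model

# Project the sum back onto each component's direction; both coefficients come
# out close to 1 because the cross term between tok and pos is small relative
# to their squared norms.
print((torch.dot(x, tok) / tok.norm()**2).item())
print((torch.dot(x, pos) / pos.norm()**2).item())
</code></pre>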
|
<python><torch><large-language-model><embedding><transformer-model>
|
2025-05-26 21:21:11
| 0
| 50,924
|
Yilmaz
|
79,639,543
| 2,153,235
|
Test for multiple variables in locals().keys() fails
|
<p>I am testing the existence of local variables <code>LatLim</code> and <code>LongLim</code> at the Spyder console:</p>
<pre><code>>>> LatLim = (36.033333,36.2333)
>>> LongLim = (-5.5333,-5.25)
>>> locals().keys()
dict_keys(['__name__', '__builtin__', ... 'LatLim', 'LongLim', ... ])
</code></pre>
<p>Each variable tests fine by itself:</p>
<pre><code>>>> 'LatLim' in locals().keys()
True
>>> 'LongLim' in locals().keys()
True
</code></pre>
<p>The OR'ing of the test should be <code>True</code> but is <code>False</code>:</p>
<pre><code>>>> any( key in locals().keys()
for key in ('LatLim','LongLim') )
False
</code></pre>
<p>If I first save the keys view to a variable, the test succeeds:</p>
<pre><code>>>> LocalsKeys = locals().keys()
>>> any( key in LocalsKeys for key in ('LatLim','LongLim') )
True
</code></pre>
<p>Why does the OR'ing fail when checking <code>locals().keys()</code> directly?</p>
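<p>For what it's worth, the same pattern behaves the same way in a plain CPython script, so a minimal reproduction outside Spyder (my own sketch) looks like this:</p>
<pre><code>x = 1

# Checking the name directly in the module scope:
print('x' in locals())                               # True

# The same check written inside a generator expression:
print(any('x' in locals() for _ in range(1)))        # False

# Saving the keys view first, then checking inside the generator expression:
local_keys = locals().keys()
print(any('x' in local_keys for _ in range(1)))      # True
</code></pre>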
|
<python><dictionary><local-variables>
|
2025-05-26 20:31:24
| 0
| 1,265
|
user2153235
|
79,639,410
| 12,468,387
|
Low RPS when performance testing django website
|
<p>I have code like this that caches a page for 60 minutes:</p>
<pre><code>import os
import time
from django.conf import settings
from django.core.cache import cache
from django.core.mail import send_mail
from django.contrib import messages
from django.http import FileResponse, Http404, HttpResponse
from django.shortcuts import render
from django.utils.translation import get_language, gettext as _
from apps.newProduct.models import Product, Variants, Category
from apps.vendor.models import UserWishList, Vendor
from apps.ordering.models import ShopCart
from apps.blog.models import Post
from apps.cart.cart import Cart
# Cache timeout for common data
CACHE_TIMEOUT_COMMON = 900 # 15 minutes
def cache_anonymous_page(timeout=CACHE_TIMEOUT_COMMON):
from functools import wraps
from django.utils.cache import _generate_cache_header_key
def decorator(view):
@wraps(view)
def wrapper(request, *args, **kw):
if request.user.is_authenticated:
return view(request, *args, **kw)
lang = get_language() # i18n
curr = request.session.get('currency', '')
country = request.session.get('country', '')
cache_key = f"{view.__module__}.{view.__name__}:{lang}:{curr}:{country}"
resp = cache.get(cache_key)
if resp is not None:
return HttpResponse(resp)
response = view(request, *args, **kw)
if response.status_code == 200:
cache.set(cache_key, response.content, timeout)
return response
return wrapper
return decorator
def get_cached_products(cache_key, queryset, timeout=CACHE_TIMEOUT_COMMON):
lang = get_language()
full_key = f"{cache_key}:{lang}"
data = cache.get(full_key)
if data is None:
data = list(queryset)
cache.set(full_key, data, timeout)
return data
def get_cached_product_variants(product_list, cache_key='product_variants', timeout=CACHE_TIMEOUT_COMMON):
lang = get_language()
full_key = f"{cache_key}:{lang}"
data = cache.get(full_key)
if data is None:
data = []
for product in product_list:
if product.is_variant:
data.extend(product.get_variant)
cache.set(full_key, data, timeout)
return data
def get_all_cached_data():
featured_products = get_cached_products(
'featured_products',
Product.objects.filter(status=True, visible=True, is_featured=True)
.exclude(image='')
.only('id','title','slug','image')[:8]
)
popular_products = get_cached_products(
'popular_products',
Product.objects.filter(status=True, visible=True)
.exclude(image='')
.order_by('-num_visits')
.only('id','title','slug','image')[:4]
)
recently_viewed_products = get_cached_products(
'recently_viewed_products',
Product.objects.filter(status=True, visible=True)
.exclude(image='')
.order_by('-last_visit')
.only('id','title','slug','image')[:5]
)
variants = get_cached_products(
'variants',
Variants.objects.filter(status=True)
.select_related('product')
.only('id','product','price','status')
)
product_list = get_cached_products(
'product_list',
Product.objects.filter(status=True, visible=True)
.prefetch_related('product_variant')
)
return featured_products, popular_products, recently_viewed_products, variants, product_list
def get_cart_info(user, request):
if user.is_anonymous:
return {}, 0, [], 0, []
cart = Cart(request)
wishlist = list(UserWishList.objects.filter(user=user).select_related('product'))
shopcart_qs = ShopCart.objects.filter(user=user).select_related('product','variant')
shopcart = list(shopcart_qs)
products_in_cart = [item.product.id for item in shopcart if item.product]
total = cart.get_cart_cost()
comparing = len(request.session.get('comparing', []))
compare_var = len(request.session.get('comparing_variants', []))
total_compare = comparing + compare_var
if len(cart) == 0:
shopcart = []
return {
'cart': cart,
'wishlist': wishlist,
'shopcart': shopcart,
'products_in_cart': products_in_cart,
}, total, wishlist, total_compare, shopcart
@cache_anonymous_page(3600)
def frontpage(request):
featured_products, popular_products, recently_viewed_products, variants, product_list = get_all_cached_data()
var = get_cached_product_variants(product_list)
cart_ctx, total, wishlist, total_compare, shopcart = get_cart_info(request.user, request)
context = {
'featured_products': featured_products,
'popular_products': popular_products,
'recently_viewed_products': recently_viewed_products,
'variants': variants,
'var': var,
**cart_ctx,
'subtotal': total,
'total_compare': total_compare,
}
return render(request, 'core/frontpage.html', context)
</code></pre>
<p>I installed django debug toolbar and it shows ~40 ms for a cached frontpage.
My server has 2 CPUs. When I try performance testing using locust I get around 3 RPS.
I thought I would get around 2 CPUs * (1000/40) ~ 50 RPS.</p>
<p>I run my server using this command inside docker container:</p>
<pre><code>gunicorn main.wsgi:application
-k gevent
--workers 6
--bind 0.0.0.0:8080
--worker-connections 1000
--timeout 120
</code></pre>
<p>Also, I use psycopg2 with psycogreen;
wsgi.py starts with this:</p>
<pre><code>from psycogreen.gevent import patch_psycopg
patch_psycopg()
</code></pre>
<p>What am I doing wrong? Why can't it handle more RPS?</p>
<ol>
<li><p>I'm using Redis like this:</p>
<pre><code>CACHES = {
    'default': {
        'BACKEND': 'django_redis.cache.RedisCache',
        'LOCATION': 'redis://supapp_redis_1:6379/1',
        'OPTIONS': {
            'CLIENT_CLASS': 'django_redis.client.DefaultClient',
        }
    }
}
</code></pre>
</li>
<li><p>I'm trying to test with anonymous users because this is the first page every user will see. My locustfile.py looks like this:</p>
<pre><code>from locust import HttpUser, task, between

class WebsiteUser(HttpUser):
    def index(self):
        self.client.get("/")
</code></pre>
</li>
<li><p>Gunicorn is using 2 CPUs. When I use htop and test with locust I can see 100% usage on both CPUs.</p>
</li>
<li><p>My testsite.com.conf has these lines:</p>
<pre><code># Serve static files
location /static/ {
    alias /opt/supapp/data/Shop/static/;
}

# Serve media files
location /media/ {
    alias /opt/supapp/data/Shop/media/;
}
</code></pre>
</li>
</ol>
<p>I use nginx for static content</p>
<p>Unfortunately I still get 3 RPS with 2 CPUs.</p>
<p>PS: I use Django templates for my frontend (SSR)</p>
|
<python><django><performance><gunicorn><wsgi>
|
2025-05-26 18:11:31
| 1
| 449
|
Denzel
|
79,639,365
| 4,375,983
|
Trouble with installing uv subpackages within monorepo
|
<h2>Context</h2>
<p>I'm setting up a Python monorepo using uv for dependency management. When I run <code>uv sync</code> in the root directory, only the root package's dependencies are installed - none of the subpackages' dependencies are being installed.</p>
<p>I'm building a data platform monorepo using Python 3.12 and uv for dependency management. The project has several components:</p>
<ul>
<li>A core library (<code>engine-core</code>)</li>
<li>An orchestrator for data pipelines (<code>orchestrator</code>)</li>
<li>A dbt project for data modeling (<code>dbt_forge</code>)</li>
</ul>
<h2>Project Structure</h2>
<p>Here's the (simplified) project structure :</p>
<pre class="lang-none prettyprint-override"><code>custom-data-forge/
├── pyproject.toml
└── packages/
├── engine-core/ # Python library with importable modules
│ └── pyproject.toml
├── orchestrator/ # Airflow DAGs and pipeline code
│ └── pyproject.toml
└── modeling/
└── dbt_forge/ # dbt models and ClickHouse transformations
└── pyproject.toml
</code></pre>
<h2>Configuration</h2>
<p>Root <code>pyproject.toml</code>:</p>
<pre class="lang-ini prettyprint-override"><code>[project]
name = "custom-data-forge"
version = "0.1.0"
requires-python = ">=3.12"
dependencies = ["pandas", "sqlalchemy", "tabulate"]
[tool.uv.workspace]
members = [
"packages/engine-core",
"packages/orchestrator",
"packages/modeling/dbt_forge",
]
</code></pre>
<p>Subpackage <code>pyproject.toml</code> files:</p>
<pre class="lang-ini prettyprint-override"><code># packages/engine-core/pyproject.toml
[project]
name = "engine-core"
version = "0.1.0"
requires-python = ">=3.12"
dependencies = ["dbt-core", "dbt-clickhouse"]
# packages/orchestrator/pyproject.toml
[project]
name = "orchestrator"
version = "0.1.0"
requires-python = ">=3.12"
dependencies = ["engine-core", "apache-airflow"]
# packages/modeling/dbt_forge/pyproject.toml
[project]
name = "dbt_forge"
version = "0.1.0"
requires-python = ">=3.12"
dependencies = ["dbt-core", "dbt-clickhouse"]
</code></pre>
<h2>The Problem</h2>
<p>When I run <code>uv sync</code> in the root directory:</p>
<ul>
<li>Only the root package's dependencies (<code>pandas</code>, <code>sqlalchemy</code>, etc.) are installed</li>
<li>None of the subpackages' dependencies are installed (<code>dbt-core</code>, <code>dbt-clickhouse</code>, <code>apache-airflow</code>)</li>
</ul>
<h2>What I've Tried</h2>
<ol>
<li>Added <code>[tool.uv] package = true</code> to the subpackages' <code>pyproject.toml</code> files</li>
<li>Verified the workspace configuration in the root <code>pyproject.toml</code></li>
<li>Made sure all <code>[build-system]</code> sections are properly configured</li>
<li>Deleted <code>uv.lock</code> and tried again</li>
</ol>
<h2>Questions</h2>
<ol>
<li>Is this the expected behavior for uv workspaces? Shouldn't <code>uv sync</code> install dependencies from all workspace members?</li>
<li>What's the correct way to configure uv to install dependencies from subpackages?</li>
<li>Do I need to make the subpackages "real" Python packages (with <code>__init__.py</code> files) for uv to recognize them?</li>
</ol>
<h2>Environment</h2>
<ul>
<li>Python 3.12</li>
<li>uv 0.6.16</li>
<li>Linux 5.15.167.4-microsoft-standard-WSL2</li>
</ul>
|
<python><uv>
|
2025-05-26 17:30:34
| 1
| 2,811
|
Imad
|
79,639,259
| 9,915,497
|
Using PVLIB to computer weather adjusted expected generation
|
<p>I am attempting to use PVLIB to compute weather-adjusted expected generation hourly over a 24-hour period.</p>
<p>However, the function I have used is not working well and I am trying to figure out why. The data being used is from the previous day. The input to the function has GHI, Air Temp, and Wind Temp at a sampling rate of one sample per minute (so 60 samples per hour).</p>
<p>The other items defined are:</p>
<ul>
<li>latitude</li>
<li>longitude</li>
<li>time zone</li>
<li>DC module parameters</li>
<li>inverters parameters</li>
</ul>
<p>The pvlib_inverter_pvwatts() function is being used to compute the output, but it is returning values that are way below the actual power generation (MWh).</p>
<p>Are there any examples for this specific use case?</p>
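<p>For context, this is roughly the kind of chain I am trying to reproduce. The sketch below is only an outline I put together from documented pvlib functions, with made-up coordinates, tilt, pdc0 and temperature-coefficient values; it is not my actual function and may well differ from it:</p>
<pre><code>import pandas as pd
import pvlib

# Minute-level inputs for one day (placeholder constants instead of real data).
times = pd.date_range("2024-06-01", periods=24 * 60, freq="1min", tz="Etc/GMT+5")
weather = pd.DataFrame({"ghi": 600.0, "temp_air": 25.0, "wind_speed": 2.0}, index=times)

location = pvlib.location.Location(latitude=35.0, longitude=-80.0)
solpos = location.get_solarposition(times)

# Decompose GHI into DNI/DHI, then transpose to the plane of array (fixed tilt assumed).
erbs = pvlib.irradiance.erbs(weather["ghi"], solpos["zenith"], times)
poa = pvlib.irradiance.get_total_irradiance(
    surface_tilt=25, surface_azimuth=180,
    solar_zenith=solpos["apparent_zenith"], solar_azimuth=solpos["azimuth"],
    dni=erbs["dni"], ghi=weather["ghi"], dhi=erbs["dhi"],
)

# Cell temperature, then DC and AC power with the PVWatts models.
cell_temp = pvlib.temperature.faiman(poa["poa_global"], weather["temp_air"], weather["wind_speed"])
pdc = pvlib.pvsystem.pvwatts_dc(poa["poa_global"], cell_temp, pdc0=1_000_000, gamma_pdc=-0.004)
pac = pvlib.inverter.pvwatts(pdc, pdc0=1_050_000)

# Average the 1-minute AC power (W) per hour and convert to MWh per hour.
expected_mwh = pac.resample("60min").mean() / 1e6
print(expected_mwh.head())
</code></pre>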
|
<python><pvlib><solar>
|
2025-05-26 15:57:46
| 0
| 415
|
Kris
|
79,639,089
| 497,649
|
How to add tooltips to the column headers of a DataGrid table?
|
<p>Given the UI component as defined in</p>
<p><a href="https://shiny.posit.co/py/components/outputs/data-grid/" rel="nofollow noreferrer">https://shiny.posit.co/py/components/outputs/data-grid/</a></p>
<p>how can I add a tooltip to each column header, or to a subset of them?</p>
|
<python><tooltip><py-shiny>
|
2025-05-26 14:19:40
| 1
| 640
|
lambruscoAcido
|
79,639,054
| 1,430,550
|
Access states as plain Python value for serialization
|
<p>I am using the Python reflex package to build a frontend (<a href="https://github.com/reflex-dev/reflex" rel="nofollow noreferrer">https://github.com/reflex-dev/reflex</a>).
I am trying to serialize the state values on a button click, which does not seem to work as expected. The values seem to be encapsulated in something non-serializable.</p>
<p>How do I get access to the "raw" value of a state?</p>
<blockquote>
<p>TypeError: Object of type StringCastedVar is not JSON serializable</p>
</blockquote>
<pre><code>class StateA(rx.State):
value: str = "Hello"
class StateB(rx.State):
value: str = "World"
class ConfirmButton(rx.State):
did_confirm: bool = False
@rx.event
def confirm(self):
self.did_confirm = True
import json
with open("x.json", mode='w') as h:
h.write(json.dumps({
"word1": StateA.value,
"word2": StateB.value,
}))
</code></pre>
|
<python><python-reflex>
|
2025-05-26 13:52:10
| 1
| 2,791
|
toobee
|
79,639,008
| 18,220,526
|
Byte Shifting Error White Reading Safetensor File in C
|
<p>I'm trying to read a safetensors file in C. As I read in Hugging Face's documentation, I'm taking the first 8 bytes of the file as the header size:</p>
<pre class="lang-c prettyprint-override"><code>...
uint64_t header_size = 0;
int read_header_size = fread(&header_size, sizeof(uint64_t), 1, fp);
...
</code></pre>
<p>After that I'm reading the header:</p>
<pre class="lang-c prettyprint-override"><code>...
char *header_str = MALLOC(header_size + 1);
int read_header_str = fread(header_str, 1, header_size, fp);
header_str[header_size] = '\0';
...
</code></pre>
<p>I'm parsing the header with cJSON (<code>safe_tensor->header = cJSON_Parse(header_str);</code>) and there is no error. Parsing the header works fine.</p>
<pre class="lang-json prettyprint-override"><code>{
"conv2d/bias": {
"dtype": "F32",
"shape": [32],
"data_offsets": [0, 128]
},
"conv2d/kernel": {
"dtype": "F32",
...
...
</code></pre>
<p>Then I'm iterating over each header item and reading its bytes at the running offset:</p>
<pre class="lang-c prettyprint-override"><code>uint64_t tensor_idx = 0;
uint64_t tensor_offset = 8 + header_size;
cJSON_ArrayForEach(tensor_item, safe_tensor->header) {
if(strcmp(tensor_item->string, "__metadata__") == 0) continue;
cJSON* dtype_json = cJSON_GetObjectItem(tensor_item, "dtype");
cJSON* shape_json = cJSON_GetObjectItem(tensor_item, "shape");
...
...
fseek(fp, tensor_offset, SEEK_SET);
int read_tensor_data = fread(safe_tensor->data[tensor_idx]->data, data_size, num_elements, fp);
tensor_offset += data_size * num_elements;
tensor_idx++;
</code></pre>
<p>data_size is 4 because of F32, and I'm calculating num_elements from the shape (multiplying all the elements of shape_json).</p>
<p>This was working with safetensors files I found in Hugging Face models and with datasets I generated. But when I try to export a Keras model as safetensors, it doesn't work:</p>
<pre class="lang-py prettyprint-override"><code>def get_layer_parameter_names(layer):
"""Returns parameters based on layer name."""
layer_type = layer.__class__.__name__
param_names = []
# Dense & Conv
if layer_type in ["Dense", "Conv1D", "Conv2D", "Conv3D"]:
param_names = ["kernel", "bias"]
# Normalization
elif layer_type == "BatchNormalization":
param_names = ["gamma", "beta", "moving_mean", "moving_variance"]
elif layer_type == "LayerNormalization":
param_names = ["gamma", "beta"]
...
...
else:
weights = layer.get_weights()
param_names = [f"weight_{i}" for i in range(len(weights))] if weights else []
return param_names
def save_keras_model_to_safetensors(model, output_path):
tensors = {}
all_layer_names = []
for layer in model.layers:
all_layer_names.append(layer.name)
layer_weights = layer.get_weights()
param_names = get_layer_parameter_names(layer)
for name, weight in zip(param_names, layer_weights):
key = f"{layer.name}/{name}"
tensors[key] = np.array(weight, dtype=np.float32)
save_file(tensors, output_path, metadata=None)
</code></pre>
<p>The safetensors library handles export and import with this code very well, and it actually exports all the parameters of the model. But when I try to import the safetensors file that I exported with this function, I get a problem.</p>
<p>It reads the header exactly correctly. Then it reads the first tensor exactly correctly. But when it comes to the second tensor, every value is shifted one decimal place to the right. Likewise, in the 3rd tensor, values are shifted two places to the right (one more place than the previous tensor). For example:</p>
<pre><code>Tensor[0][0] = 0.16358605 // expected: 0.16358605
Tensor[0][1] = -0.087356716 // expected: -0.087356716
Tensor[1][0] = -0.40413815 // expected: -4.04138158
Tensor[1][1] = 0.13498262 // expected: 1.34982622
Tensor[2][0] = -0.00154723 // expected: -1.54723778
Tensor[2][1] = 0.00372836 // expected: 3.72836853
</code></pre>
<p>What could my error be? Do you have any idea?</p>
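<p>To narrow it down, I can cross-check the file from Python. This is just a diagnostic sketch (the file name is a placeholder): it prints the <code>data_offsets</code> from the header next to the byte ranges that a purely sequential read in header order would use, so any disagreement between the two shows up directly:</p>
<pre class="lang-py prettyprint-override"><code>import json

with open("model.safetensors", "rb") as fp:  # placeholder path
    header_size = int.from_bytes(fp.read(8), "little")
    header = json.loads(fp.read(header_size))

running = 0
for name, info in header.items():
    if name == "__metadata__":
        continue
    start, end = info["data_offsets"]
    length = end - start
    print(f"{name}: header offsets [{start}, {end}) "
          f"vs sequential-read assumption [{running}, {running + length})")
    running += length
</code></pre>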
|
<python><c><keras><io><safe-tensors>
|
2025-05-26 13:24:41
| 0
| 342
|
Ömer Faruk Demirel
|
79,638,943
| 15,163,418
|
How to globally configure just on Windows to run and edit scripts using short aliases with automatic interpreter detection?
|
<p>I'm trying to globally configure <a href="https://github.com/casey/just" rel="nofollow noreferrer"><code>just</code></a> on my Windows machine to simplify running and editing scripts from anywhere using short aliases.</p>
<h3>Folder Structure</h3>
<p>I have a folder <code>D:\Scripts</code> with subfolders containing Python and JavaScript scripts:</p>
<pre><code>D:\Scripts\
├── mvsl\
│ ├── find-duplicate-images.py
│ └── kinopoisk.ru-downloader.js
└── comics\
├── rename_cbz.py
└── zip_up.py
</code></pre>
<h3>What I Want</h3>
<p>I'd like to define simple aliases like this in a global <code>justfile</code>:</p>
<pre class="lang-none prettyprint-override"><code>mvsl_finddups = "mvsl/find-duplicate-images.py"
</code></pre>
<p>Then be able to run commands like:</p>
<pre class="lang-bash prettyprint-override"><code>just run mvsl.finddups # Runs the script using the correct interpreter
just edit mvsl.finddups # Opens the script in VSCode
</code></pre>
<p>Requirements:</p>
<ul>
<li><p>I want to run <code>just</code> from any folder (i.e., global config).</p>
</li>
<li><p>The <code>run</code> command should detect the correct interpreter based on file extension:</p>
<ul>
<li><code>.py</code> → Python</li>
<li><code>.js</code> → Node.js</li>
</ul>
</li>
<li><p>The <code>edit</code> command should open the corresponding file in VSCode.</p>
</li>
<li><p>I only want to specify the folder and short name, and <code>just</code> should handle the rest (paths, interpreter, etc).</p>
</li>
</ul>
<hr />
<h3>What I Tried</h3>
<p>I tried defining a global <code>justfile</code> like this:</p>
<pre class="lang-none prettyprint-override"><code># Use PowerShell on Windows
set windows-shell := ["powershell.exe", "-NoLogo", "-Command"]
# Base script directory
script_dir := "D:/Scripts/"
# Script mapping (short aliases)
mvsl_finddups := "mvsl/find-duplicate-images.py"
mvsl_kinopoisk := "mvsl/kinopoisk.ru-downloader.js"
# Run a script by alias
run alias:
script := {{alias}}
path := script_dir + script
if path.endswith(".py") {
python {{path}}
} else if path.endswith(".js") {
node {{path}}
} else {
echo "Unsupported script type: {{path}}"
}
# Edit a script by alias
edit alias:
code {{script_dir}}{{alias}}
</code></pre>
<p>But when I run:</p>
<pre class="lang-bash prettyprint-override"><code>just run mvsl_finddups
</code></pre>
<p>it doesn't work as expected, and it's hard to manage path separators and aliases this way. Also, my earlier attempt with:</p>
<pre class="lang-none prettyprint-override"><code>run subfolder script_name:
python "{{ script_dir }}{{ subfolder }}/{{ script_name }}"
</code></pre>
<p>led to issues where <code>mvsl</code> was passed literally as the folder name instead of resolving to <code>movie-stills/</code>.</p>
<hr />
<h3>Question</h3>
<p><strong>How can I globally configure a <code>justfile</code> on Windows to:</strong></p>
<ul>
<li>Use short aliases like <code>mvsl.finddups</code></li>
<li>Automatically resolve to full paths under <code>D:/Scripts/</code></li>
<li>Use the correct interpreter based on file extension</li>
<li>Support commands like <code>just run alias</code> and <code>just edit alias</code>?</li>
</ul>
<p>If possible, I'd like to avoid manually typing full paths or file extensions every time and keep my configuration DRY and declarative.</p>
|
<python><powershell><just>
|
2025-05-26 12:53:27
| 1
| 541
|
Raghavan Vidhyasagar
|
79,638,927
| 6,473,092
|
Alphalens get_clean_factor_and_forward_returns throws Length mismatch: Expected axis has 34 elements, new values have 36 elements
|
<p>I'm using alphalens.utils.get_clean_factor_and_forward_returns() to compute forward returns from a factor series and price DataFrame. But I'm hitting the following error:</p>
<pre><code>ValueError: Length mismatch: Expected axis has 34 elements, new values have 36 elements
</code></pre>
<p><strong>My Input Data:</strong></p>
<ul>
<li><p><strong>factor_series:</strong> A Pandas Series with a MultiIndex of (date, asset), total of 37 rows
Example:</p>
<pre><code>date        asset
2007-01-01  AAPL    0.503
            AMZN    0.941
2008-01-01  AAPL    0.900
            AMZN    0.993
...
2025-01-01  AMZN    1.000
Name: factor, dtype: float64
</code></pre>
</li>
<li><p><strong>prices:</strong> A DataFrame with DatetimeIndex and tickers as columns.<br />
Shape: (64 dates, 2 tickers — AAPL and AMZN)<br />
Note: AAPL is missing data on the last date 2025-01-01.</p>
</li>
</ul>
<p>Here’s a minimal version of my code that reproduces the issue:</p>
<pre><code>import pandas as pd
from alphalens.utils import get_clean_factor_and_forward_returns
# Load factor data (CSV: date, asset, factor)
factor_df = pd.read_csv("factor_series_Item1A_positive.csv")
prices_df = pd.read_csv("prices_filtered_Item1A_positive.csv", index_col=0)
# Convert to proper datetime format
factor_df['date'] = pd.to_datetime(factor_df['date'])
prices_df.index = pd.to_datetime(prices_df.index)
# Convert factor DataFrame to MultiIndex Series
factor_series = factor_df.set_index(['date', 'asset'])['factor']
factor_series.name = 'factor'
# Align assets
common_assets = factor_series.index.get_level_values("asset").unique().intersection(prices_df.columns)
factor_series = factor_series[factor_series.index.get_level_values("asset").isin(common_assets)]
prices_df = prices_df[common_assets]
# Align dates
common_dates = factor_series.index.get_level_values("date").unique().intersection(prices_df.index)
factor_series = factor_series[factor_series.index.get_level_values("date").isin(common_dates)]
prices_df = prices_df.loc[common_dates]
# Remove dates that can't have forward returns (e.g., last date)
last_valid_date = prices_df.index.max()
factor_series = factor_series[factor_series.index.get_level_values("date") < last_valid_date]
# Attempt to compute forward returns
data = get_clean_factor_and_forward_returns(
factor_series,
prices_df,
quantiles=5,
bins=None,
periods=[1]
)
</code></pre>
<p><strong>What I've Tried:</strong></p>
<ul>
<li><p>Verified that all dates and tickers in factor_series are in prices</p>
</li>
<li><p>Filtered factor_series to exclude the last date (where forward returns aren't possible)</p>
</li>
<li><p>Ensured both inputs are properly sorted and have no NaNs</p>
</li>
<li><p>Even after filtering to 36 rows in factor_series, I still get this error</p>
</li>
</ul>
<p><strong>Question:</strong></p>
<p>Where exactly in get_clean_factor_and_forward_returns() does this length mismatch occur?<br />
What specifically is mismatching — quantile bin assignment, forward return calculation, or an internal reindex?<br />
How can I fix the issue?</p>
|
<python><time-series><multi-index><valueerror><quantitative-finance>
|
2025-05-26 12:41:01
| 0
| 394
|
Aboriginal
|
79,638,800
| 10,846,467
|
How Ray async actors handle calls to sync methods
|
<p>I'm working with Ray async actors and I want to understand exactly what happens—at a deep technical level—when a synchronous method is called on such an actor.</p>
<p>I know that calling a synchronous method will block, but am looking for clarity on how exactly this blocking occurs within the actor's architecture.</p>
<p>As I see it, there are two possibilities:</p>
<ol>
<li><p>The event loop in the actor thread is preempted</p>
<ul>
<li>The Ray async actor runs on a thread that hosts an asyncio event loop.</li>
<li>When the synchronous method is called, Ray's dispatcher invokes the sync method directly on that same thread, effectively pausing the event loop and running the sync method.</li>
<li>Once the sync method finishes, the event loop resumes.</li>
<li>This means the sync method blocks the thread by interrupting the event loop's execution.</li>
</ul>
</li>
<li><p>The event loop itself executes the synchronous method</p>
<ul>
<li>The synchronous method is somehow wrapped as a coroutine/Future/Task and queued into the event loop's task queue.</li>
<li>The event loop picks it up and executes it within its normal flow.</li>
<li>The sync method’s blocking behavior causes the event loop itself to freeze until the sync method completes.</li>
<li>This means the blocking happens "inside" the event loop rather than as an external interruption.</li>
</ul>
</li>
</ol>
<p>I'm looking for either a precise step-by-step explanation of what happens, or a reference to docs or the right parts of the source code.</p>
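<p>In case it helps frame the question, this is the kind of minimal probe I have in mind (my own sketch, making no claims about Ray internals): an async actor with one coroutine method and one plain sync method, where timing the two calls together should reveal whether the sync call stalls the event loop:</p>
<pre><code>import asyncio
import time
import ray

ray.init()

@ray.remote
class Probe:
    async def ticker(self, n_ticks):
        # Cooperative coroutine: only makes progress while the event loop runs.
        for _ in range(n_ticks):
            await asyncio.sleep(0.1)
        return "ticker finished at %.2f" % time.monotonic()

    def blocking(self, seconds):
        # Plain sync method with no awaits.
        time.sleep(seconds)
        return "blocking finished at %.2f" % time.monotonic()

probe = Probe.remote()
refs = [probe.ticker.remote(20), probe.blocking.remote(1.0)]
print(ray.get(refs))  # the relative finish times hint at whether the loop was stalled
</code></pre>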
|
<python><multithreading><python-asyncio><distributed-computing><ray>
|
2025-05-26 11:00:13
| 2
| 893
|
hegash
|
79,638,787
| 16,405,935
|
Plotly dashboard logout after refreshing page
|
<p>I'm trying to make a dashboard app with Login and Register screens and a Dashboard with a navbar and tabs. With the code below everything works well, but I have a problem: if I refresh the page, the dashboard returns to the login screen.</p>
<pre><code>from dash import Dash, dcc, html, Input, Output, State, ctx
import dash_mantine_components as dmc
import pandas as pd
import dash
from dash_ag_grid import AgGrid
import plotly.express as px
import os
from datetime import datetime
from pandas.tseries.offsets import BDay
from datetime import datetime, date, timedelta
import base64
import io
import plotly.graph_objects as go
import dash_ag_grid as dag
import numpy as np
excel_file = 'users2.xlsx'
# Create the file if it doesn't exist yet
if not os.path.exists(excel_file):
pd.DataFrame(columns=["username", "password", "email", "code", "branch", "created_at"]).to_excel(excel_file, index=False)
app = dash.Dash(__name__, suppress_callback_exceptions=True)
def t(lang, key):
translations = {
"vi": {
"login": "Đăng nhập",
"register": "Đăng ký",
"username": "Tên đăng nhập",
"password": "Mật khẩu",
"confirm": "Nhập lại mật khẩu",
"email": "Email",
"code": "Mã nhân viên",
"branch": "Chi nhánh",
"no_account": "Chưa có tài khoản?",
"register_now": "Đăng ký ngay",
"has_account": "Đã có tài khoản?",
"create_account": "Tạo tài khoản",
"success": "Đăng ký thành công. Vui lòng đăng nhập.",
"income": "Thu nhập"
},
"en": {
"login": "Login",
"register": "Register",
"username": "Username",
"password": "Password",
"confirm": "Confirm Password",
"email": "Email",
"code": "Employee Code",
"branch": "Branch",
"no_account": "No account?",
"register_now": "Register now",
"has_account": "Already have an account?",
"create_account": "Create Account",
"success": "Successfully registered. Please login.",
"income": "income"
},
"ko": {
"login": "로그인",
"register": "회원가입",
"username": "사용자 이름",
"password": "비밀번호",
"confirm": "비밀번호 확인",
"email": "이메일",
"code": "사원 번호",
"branch": "지점",
"no_account": "계정이 없으신가요?",
"register_now": "지금 등록하세요",
"has_account": "이미 계정이 있으신가요?",
"create_account": "계정 만들기",
"success": "성공적으로 등록되었습니다. 로그인하세요.",
"income": "수익성"
}
}
return translations.get(lang, translations["vi"]).get(key, key)
def login_screen(lang):
return dmc.Container([
dmc.Select(
id="language-select",
label="Ngôn ngữ / Language / 언어",
data=[
{"label": "Tiếng Việt", "value": "vi"},
{"label": "English", "value": "en"},
{"label": "한국어", "value": "ko"}
],
value=lang,
style={"marginBottom": 20}
),
dmc.Title(t(lang, "login"), order=2, align="center", mb=20),
dmc.TextInput(id="login-username", label=t(lang, "username")),
dmc.PasswordInput(id="login-password", label=t(lang, "password"), mt=10),
dmc.Button(t(lang, "login"), id="login-submit", mt=20),
dmc.Text(id="alert-text", color="red", size="sm", mt=10),
dmc.Text(t(lang, "no_account"), span=True),
dmc.Button(t(lang, "register_now"), id="go-register", variant="subtle", compact=True)
], style={"maxWidth": 400, "margin": "auto", "paddingTop": 100})
def register_screen(lang):
return dmc.Container([
dmc.Title(t(lang, "register"), order=2, align="center", mb=20),
dmc.TextInput(id="register-username", label=t(lang, "username")),
dmc.PasswordInput(id="register-password", label=t(lang, "password"), mt=10),
dmc.PasswordInput(id="register-confirm", label=t(lang, "confirm"), mt=10),
dmc.TextInput(id="register-email", label=t(lang, "email"), mt=10),
dmc.TextInput(id="register-code", label=t(lang, "code"), mt=10),
dmc.TextInput(id="register-branch", label=t(lang, "branch"), mt=10),
dmc.Button(t(lang, "create_account"), id="register-submit", fullWidth=True, mt=20),
dmc.Text(id="alert-text", color="red", size="sm", mt=10),
dmc.Text(t(lang, "has_account"), span=True),
dmc.Button(t(lang, "login"), id="go-login", variant="subtle", compact=True)
], style={"maxWidth": 400, "margin": "auto", "paddingTop": 100})
def full_dashboard_layout(lang):
return dmc.Container([
dmc.Group([
dmc.Switch(id="theme-toggle", label="Dark mode", size="md", offLabel="☀️", onLabel="🌙"),
dmc.Button("Signout", id="sign-out", variant="light", color="red", size="sm")
], position="apart", mb=20),
html.Div([
html.Div(id="sidebar", children=[
dmc.Stack([
dmc.NavLink(label=t(lang, "income"), id="nav-1-btn"),
dmc.NavLink(label="Nav 2", id="nav-2-btn"),
dmc.NavLink(label="Nav 3", id="nav-3-btn"),
])
], style={
"width": "150px",
"minHeight": "100vh",
"borderRight": "1px solid #eee",
"padding": "10px"
}),
html.Div(id="content-area", style={"flex": 1, "padding": "20px"})
], style={"display": "flex"})
], fluid = True)
app.layout = dmc.Container([
dcc.Location(id="url", refresh=False),
dcc.Store(id="auth-status", data="login", storage_type = "local"),
dcc.Store(id="selected-lang", data="vi"),
dcc.Store(id="alert-store", data={}),
dcc.Store(id="theme-store", data="light"),
dmc.MantineProvider(
id="theme-provider",
theme={"colorScheme": "light"},
withGlobalStyles=True,
withNormalizeCSS=True,
children=html.Div(id="app-screen")
),
dmc.Text(id="alert-text", color="red", size="sm", mt=10)
], fluid = True)
@app.callback(Output("app-screen", "children"),
Input("auth-status", "data"),
State("selected-lang", "data"))
def render_app(status, lang):
if status == "login": return login_screen(lang)
if status == "register": return register_screen(lang)
if status == "authenticated": return full_dashboard_layout(lang)
@app.callback(
Output("auth-status", "data", allow_duplicate=True),
Input("go-login", "n_clicks"), prevent_initial_call=True, prevent_initial_callbacks=True)
def back_to_login(go_login): return "login"
@app.callback(
Output("auth-status", "data"),
Output("alert-store", "data", allow_duplicate=True),
Input("go-register", "n_clicks"),
Input("login-submit", "n_clicks"),
State("login-username", "value"),
State("login-password", "value"),
prevent_initial_call=True
)
def login_flow(go_reg, login, username, password):
if ctx.triggered_id == "go-register":
return "register", {}
if not username or not password:
return dash.no_update, {}
path = "users2.xlsx"
try:
df = pd.read_excel(path)
if "username" not in df.columns or "password" not in df.columns:
return dash.no_update, {"message": "Invalid data"}
except:
return dash.no_update, {"message": "Cannot read users"}
match = df[(df["username"] == username) & (df["password"] == password)]
if match.empty:
return dash.no_update, {"message": "Sai tên đăng nhập hoặc mật khẩu"}
return "authenticated", {}
@app.callback(
Output("auth-status", "data", allow_duplicate=True),
Output("alert-store", "data", allow_duplicate=True),
Input("register-submit", "n_clicks"),
State("register-username", "value"),
State("register-password", "value"),
State("register-confirm", "value"),
State("register-email", "value"),
State("register-code", "value"),
State("register-branch", "value"),
prevent_initial_call=True
)
def handle_register(_, username, password, confirm, email, code, branch):
if not any([username, password, confirm, email, code, branch]):
return dash.no_update, {}
if not all([username, password, confirm, email, code, branch]):
return dash.no_update, {"message": "Vui lòng nhập đủ thông tin"}
if password != confirm:
return dash.no_update, {"message": "Mật khẩu xác nhận không khớp"}
if not (email.endswith("@wooribank.com") or email.endswith("@woori.com.vn")):
return dash.no_update, {"message": "Email không hợp lệ"}
path = "users2.xlsx"
try:
df = pd.read_excel(path)
except:
df = pd.DataFrame(columns=["username", "password", "email", "code", "branch", "created_at"])
if any(df["username"] == username) or any(df["email"] == email) or any(df["code"] == code):
return dash.no_update, {"message": "Tài khoản/email/mã đã tồn tại"}
df = pd.concat([df, pd.DataFrame([{
"username": username,
"password": password,
"email": email,
"code": code,
"branch": branch,
"created_at": datetime.now().strftime("%Y-%m-%d %H:%M:%S")
}])], ignore_index=True)
df.to_excel(path, index=False)
return "register", {"message": "Đăng ký thành công"}
@app.callback(Output("alert-text", "children"),
Input("alert-store", "data"))
def show_alert(data):
return data.get("message", "") if isinstance(data, dict) else ""
@app.callback(Output("alert-text", "color"),
Input("auth-status", "data"),
Input("alert-store", "data"))
def alert_color(status, data):
if status == "register" and data.get("message") == "Đăng ký thành công":
return "green"
return "red"
@app.callback(
Output("app-screen", "children", allow_duplicate=True),
Input("selected-lang", "data"),
State("auth-status", "data"),
prevent_initial_call=True
)
def update_lang_change(lang, status):
if status == "login":
return login_screen(lang)
elif status == "register":
return register_screen(lang)
return dash.no_update
@app.callback(Output("theme-store", "data"),
Input("theme-toggle", "checked"), prevent_initial_call=True)
def toggle_theme(checked): return "dark" if checked else "light"
@app.callback(Output("theme-provider", "theme"),
Input("theme-store", "data"))
def update_theme(scheme): return {"colorScheme": scheme}
@app.callback(Output("selected-lang", "data"),
Input("language-select", "value"))
def set_language(value): return value
@app.callback(
Output("content-area", "children"),
Input("nav-1-btn", "n_clicks"),
Input("nav-2-btn", "n_clicks"),
Input("nav-3-btn", "n_clicks"),
Input("auth-status", "data"),
State("theme-store", "data"),
State("selected-lang", "data")
)
def update_content(n1, n2, n3, auth_status, theme, lang):
triggered = ctx.triggered_id
if triggered in ["nav-1-btn","nav-2-btn","nav-3-btn"]:
return generate_tabs(triggered.replace("-btn",""), theme, lang)
if auth_status:
return generate_tabs("nav-1", theme, lang)
return dmc.Text(t(lang, "choose_menu"))
@app.callback(
Output("auth-status", "data", allow_duplicate=True),
Input("sign-out", "n_clicks"),
prevent_initial_call=True,
prevent_initial_callbacks=True
)
def logout(n):
return "login"
def generate_tabs(nav_id, theme, lang):
if nav_id == "nav-1":
return dmc.Tabs(
id="nav-1-tabs",
value="tab-1",
children=[
dmc.TabsList([
dmc.Tab(t(lang, "tab1"), value="tab-1"),
dmc.Tab(t(lang, "tab2"), value="tab-2"),
dmc.Tab(t(lang, "tab3"), value="tab-3"),
]),
dmc.TabsPanel(id="nav-1-tab-content", value="tab-1"),
dmc.TabsPanel(dmc.Text(t(lang, "tab2_content")), value="tab-2"),
dmc.TabsPanel(dmc.Text(t(lang, "tab3_content")), value="tab-3"),
]
)
elif nav_id == "nav-2":
return dmc.Tabs(
id="nav-2-tabs",
value="tab-a",
children=[
dmc.TabsList([
dmc.Tab("Bộ lọc", value="tab-a"),
dmc.Tab("Lịch", value="tab-b"),
]),
dmc.TabsPanel([
dmc.Select(label="Chi nhánh", data=["Hà Nội", "HCM"]),
dmc.DatePicker()
], value="tab-a"),
dmc.TabsPanel(dmc.Text("Lịch hoạt động ở đây..."), value="tab-b"),
]
)
elif nav_id == "nav-3":
return dmc.Tabs(
id="nav-3-tabs",
value="tab-x",
children=[
dmc.TabsList([
dmc.Tab("Thông báo", value="tab-x"),
dmc.Tab("Người dùng", value="tab-y"),
dmc.Tab("Vai trò", value="tab-z"),
dmc.Tab("Báo cáo", value="tab-w")
]),
dmc.TabsPanel(dmc.Text("Thông báo hệ thống"), value="tab-x"),
dmc.TabsPanel(dmc.Text("Danh sách người dùng"), value="tab-y"),
dmc.TabsPanel(dmc.Text("Quản lý vai trò"), value="tab-z"),
dmc.TabsPanel(dmc.Text("Báo cáo tổng hợp"), value="tab-w"),
]
)
return dmc.Text("Không có nội dung.")
def blank_fig(is_dark):
#fig = go.Figure(go.Bar(x=[], y=[]))
layout = go.Layout(
paper_bgcolor= "#1A1B1E" if is_dark else "#ffffff",
plot_bgcolor= "#1A1B1E" if is_dark else "#ffffff",
font=dict(color="#ffffff" if is_dark else "#000000"),
xaxis=dict(visible=True),
yaxis=dict(visible=True),
annotations=[
dict(
text="Không có dữ liệu",
x='0.5', y='0.5',
font=dict(size=20, color = "#ffffff" if is_dark else "#000000"),
showarrow = False
)
]
)
return go.Figure(data=[], layout=layout)
def blank_fig2(is_dark):
fig2 = go.Figure(go.Bar(x=[], y=[]))
fig2.update_layout({'paper_bgcolor':'rgba(0,0,0,0)','plot_bgcolor':'rgba(0,0,0,0)'},
margin = dict(l=10,r=10,t=10, b=10),font_color='#ffffff' if is_dark else "#000000", height=200)
fig2.update_xaxes(showline=False,showgrid=False,exponentformat="none",separatethousands=True)
fig2.update_yaxes(showline=False,showgrid=False,exponentformat="none",separatethousands=True)
return fig2
def get_dates(start_date, num_days=365):
return [(start_date + timedelta(days=i)).isoformat()
for i in range(num_days)
if (start_date + timedelta(days=i)).weekday()==4 or (start_date + timedelta(days=i)).weekday()==5]
@app.callback(
Output("nav-1-tab-content", "children"),
Input("nav-1-tabs", "value"),
Input("theme-store", "data"),
State("selected-lang", "data")
)
def nav1_tabs(tab, theme, lang):
is_dark = theme == "dark"
if tab == "tab-1":
return dmc.LoadingOverlay([
dmc.Container([
dmc.Grid([
dmc.Col([
dmc.DatePicker(id='datepickersingle_1',
value=[],
minDate=date(2023,12,29),
disabledDates = get_dates(date.today()),
style={'width': 200})
], span='auto'),
dmc.Col([
dmc.DatePicker(id='datepickersingle_2',
value=[],
minDate=date(2023,12,29),
disabledDates = get_dates(date.today()),
style={'width': 200})
], span='auto'),
dmc.Col([
dmc.DatePicker(id='datepickersingle_3',
value=[],
minDate=date(2023,12,29),
disabledDates = get_dates(date.today()),
style={'width': 200})
], span='auto'),
dmc.Col([
dmc.DatePicker(id='datepickersingle_4',
value=(datetime.today() - BDay(1)).date(),
minDate=date(2023,12,29),
disabledDates = get_dates(date.today()),
style={'width': 200})
], span='auto'),
dmc.Col([
dmc.NumberInput(id='input_1',
label='KPI Exchange Rate',
value=[],
precision=4,
style={'width': 200})
], span='auto'),
dmc.Col([
dmc.Select(label = "Select branch",
id='dropdown_1',
data = ['001+','100','101','102','103','200','201','202','300','301',
'302','303','304','500','700','701','702','703', '203'],
placeholder = 'Please select branch',
value = '001+', clearable=True)
], span='auto'),
dmc.Col([
dmc.Button('Submit', color='gray', id='btn_1')
], span='auto')
], align='flex-start', justify = 'center', style={'margin-top':10}),
], fluid=True)
], loaderProps={'variant':"dot", 'color':'blue', 'fullscreen':True})
return dash.no_update
if __name__ == "__main__":
app.run_server(debug=False, port = 1522, jupyter_mode="external")
</code></pre>
<p>I'm guessing this happens because the login screen and the Dashboard share the same URL and are not separated by href, but I'm not sure how to fix it.</p>
|
<python><plotly><plotly-dash>
|
2025-05-26 10:49:39
| 0
| 1,793
|
hoa tran
|
79,638,749
| 607,407
|
How do I tell tensorflow to throw an error if I am trying to do a non-differentiable operation on a variable?
|
<p>I am learning tensorflow and spent a good amount of time trying to find what is causing this error:</p>
<pre><code>No gradients provided for any variable.
</code></pre>
<p>In the end I tracked it down to using <code>argmax</code> at the very start of my gradient calculation.</p>
<p>Is there some sort of guard or wrapper that can be put on my tensors and will throw at the moment when I lose the differentiability? This should be transitive of course, so if I do this:</p>
<pre><code> raw_output = my_model((input_data))
tf.make_sure_all_steps_are_differentiable(raw_output)
# this is differentiable of course, although I am not sure if it's correct in this context
multiplied = raw_output * 5
# This is not differentiable and should raise
# Non differentiable operation attempted on tensor [name]
# or something like that
# The guard should have been transferred to the new tensor during previous operation
argmaxed = tf.argmax(multiplied, other, arguments)
</code></pre>
<p>Does TensorFlow have a helper like that? I could find a lot of questions about what's differentiable and what isn't; this would help find the exact moment you use a non-differentiable operation.</p>
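<p>To be clear about the kind of check I am doing by hand today (not the automatic guard I am asking for), this sketch reproduces the situation: the gradient silently comes back as <code>None</code> after an <code>argmax</code>, and I only find out at the very end:</p>
<pre><code>import tensorflow as tf

x = tf.Variable([[0.1, 0.9], [0.8, 0.2]])

with tf.GradientTape() as tape:
    multiplied = x * 5.0                        # differentiable
    argmaxed = tf.argmax(multiplied, axis=1)    # integer output: gradient is lost here
    loss = tf.reduce_sum(tf.cast(argmaxed, tf.float32))

grad = tape.gradient(loss, x)
print(grad)  # None, with no indication of where differentiability was lost
</code></pre>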
|
<python><tensorflow><neural-network><gradient-descent>
|
2025-05-26 10:24:19
| 0
| 53,877
|
Tomáš Zato
|
79,638,747
| 5,443,120
|
How to join/map a polars dataframe to a dict?
|
<p>I have a polars dataframe, and a dictionary. I want to map a column in the dataframe to the keys of the dictionary, and then add the corresponding values as a new column.</p>
<pre><code>import polars as pl
my_dict = {
'a': 1,
'b': 2,
}
my_df = pl.DataFrame({'region': ['a', 'b', 'a']})
</code></pre>
<pre><code>┌────────┐
│ region │
│ --- │
│ str │
╞════════╡
│ a │
│ b │
│ a │
└────────┘
</code></pre>
<p>I want to end up with:</p>
<pre><code>pl.DataFrame({'region': ['a', 'b', 'a'], 'city': [1, 2, 1]})
</code></pre>
<pre><code>┌────────┬──────┐
│ region ┆ city │
│ --- ┆ --- │
│ str ┆ i64 │
╞════════╪══════╡
│ a ┆ 1 │
│ b ┆ 2 │
│ a ┆ 1 │
└────────┴──────┘
</code></pre>
<p>How can I do this?</p>
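<p>For reference, one straightforward sketch I have considered (not necessarily the idiomatic polars way) is to turn the dict into a two-column DataFrame and left-join it on <code>region</code>:</p>
<pre><code>import polars as pl

my_dict = {"a": 1, "b": 2}
my_df = pl.DataFrame({"region": ["a", "b", "a"]})

# Build a lookup frame from the dict, then join it onto the original frame.
mapping = pl.DataFrame({"region": list(my_dict.keys()), "city": list(my_dict.values())})
print(my_df.join(mapping, on="region", how="left"))
</code></pre>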
|
<python><dataframe><dictionary><python-polars><polars>
|
2025-05-26 10:22:07
| 1
| 4,421
|
falsePockets
|
79,638,659
| 247,696
|
What tools use the .python-version file?
|
<p>Heroku recommmends including a file named <code>.python-version</code> in the project directory, with contents that specify a Python version, like this:</p>
<pre><code>$ cat .python-version
3.13
</code></pre>
<p><a href="https://devcenter.heroku.com/articles/python-runtimes" rel="nofollow noreferrer">Heroku's documentation</a> says:</p>
<blockquote>
<p>we recommend switching to a <code>.python-version</code> file [...], since it’s more widely supported by other tooling in the Python ecosystem.</p>
</blockquote>
<p>My question is: which tooling supports the <code>.python-version</code> file?</p>
|
<python>
|
2025-05-26 09:29:02
| 1
| 153,921
|
Flimm
|
79,638,455
| 12,045,291
|
Torch: how to insert a tensor into another tensor at certain index
|
<p>I have a padded tensor <strong>X</strong> with shape <em>(B, T1, C)</em> and a padded tensor <strong>Y</strong> with shape <em>(B, T2, C)</em>, and I also know the sample lengths <strong>L</strong> for <strong>X</strong>. I want to insert the samples of <strong>X</strong> into <strong>Y</strong> at a certain index <em>I</em> and pad at the end.</p>
<p>For example, if <em>I = 5</em>, the goal of the transformation is something like:</p>
<pre><code>inputs = []
for i in range(X.shape[0]):
input = torch.cat([Y[i][0:5],
X[i][:L[i]],
Y[i][5:],
torch.zeros(max(L) - L[i], Y.shape[2])],
dim=0)
inputs.append(input)
outputs = torch.stack(inputs, dim=0)
</code></pre>
<p>I want to know how to do this with tensor operations instead of the for loop, which is too slow for training.</p>
|
<python><indexing><pytorch><tensor>
|
2025-05-26 07:06:05
| 1
| 412
|
junhuizh
|
79,638,441
| 172,131
|
Suds throwing exception on "type not found"
|
<p>We are using <a href="https://github.com/suds-community/suds" rel="nofollow noreferrer">SUDS</a> v1.1.1 and recently started getting a "Type not found" exception because the response to a request contains a field not in the WSDL. I tried initializing the client as follows, but the exception is still occurring:</p>
<pre><code>client = Client(spec_path, faults=False)
client.set_options(allowUnknownMessageParts=True, extraArgumentErrors=False)
</code></pre>
<p>Is there any other way to get suds to ignore the unknown field without throwing an exception please?</p>
<p><strong>Updates</strong></p>
<p><strong>Error Generated by Library</strong></p>
<blockquote>
<p>Type not found: ‘accountNumber'</p>
</blockquote>
<p><strong>XML Response</strong></p>
<pre><code><soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"><soap:Body><ns2:getTransactionResponse xmlns:ns2="http://soap.api.controller.web.payjar.com/"><return><basket><amountInCents>500</amountInCents><currencyCode>ZAR</currencyCode><description>ABC Retailer</description></basket><displayMessage>Successful</displayMessage><merchantReference>114ff7d7-3a6d-44e1-bff7-dfjkheuiier789</merchantReference><payUReference>dfskjskjdfhjkksk</payUReference><paymentMethodsUsed xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="ns2:capitecPay"><accountNumber>12347383643</accountNumber><amountInCents>500</amountInCents><beneficiaryStatementDescription>CAPITEC1748237482597</beneficiaryStatementDescription><transactionId>a8f64064-39f2-11f0-9fde-ejnsduinsiuefnf</transactionId></paymentMethodsUsed><resultCode>00</resultCode><resultMessage>Successful</resultMessage><successful>true</successful><transactionState>SUCCESSFUL</transactionState><transactionType>PAYMENT</transactionType></return></ns2:getTransactionResponse></soap:Body></soap:Envelope>
</code></pre>
<p><strong>wsdl</strong></p>
<p>Can be downloaded <a href="https://www.dropbox.com/scl/fi/796vbsbkicz5jkm2hc21s/PayUProduction.wsdl?rlkey=eizisbnqk01nnxuydfq9byfan&st=1zrbmih8&dl=0" rel="nofollow noreferrer">here</a>.</p>
|
<python><django><suds>
|
2025-05-26 06:57:03
| 1
| 20,218
|
RunLoop
|
79,638,347
| 607,407
|
How to convert discrete choices (of a color) from neural network output to colors directly in tensorflow?
|
<p>Context: I am learning to use tensorflow and want to do a simple experiment where I provide a neural network with 4 color choices for each pixel. The network should learn to pick the best colors from choices available for each pixel to represent the image it is given. Basically, a dithering neural network.</p>
<p>I need to map the network's output, which comes in the shape of <code>(width*height, 4)</code>, to an RGB image. That will be compared to a blurred reference image.</p>
<p>I have the following code which I think does what I want, but over <code>numpy</code> matrices:</p>
<pre><code>def convert_to_image(output_matrix, color_choices):
"""Maps network outputs to RGB colors using predefined choices."""
# Converts (height*width, 4-floats) to (height*width) array of 0-3 indices
chosen_colors = np.argmax(output_matrix, axis=1)
    # Uses color_choices to match each index to a pixel color
mapped_image = color_choices[np.arange(color_choices.shape[0]), chosen_colors] * 255
mapped_image = mapped_image.reshape((INPUT_SHAPE[0], INPUT_SHAPE[1], 3)) # Reshape to (height, width, 3)
return mapped_image
</code></pre>
<p>However I soon learned I cannot use this for reinforcement learning, because tensorflow will not be able to calculate what it should change in the network.</p>
<p>So here is my tensorflow attempt and its output:</p>
<pre><code>def convert_to_image_tf(output_matrices, color_choices_batch):
"""Maps network outputs to RGB colors using predefined choices in TensorFlow."""
batch_size = output_matrices.shape[0]
print("output_matrices dimensions: " + str(output_matrices.shape))
print("color_choices_batch dimensions: " + str(color_choices_batch.shape))
chosen_colors = tf.argmax(output_matrices, axis=2, output_type=tf.int32)
print("chosen_colors shape: " + str(chosen_colors.shape))
#batch_indices = tf.range(batch_size)[:, tf.newaxis] # Shape: (batch_size, 1)
#print("batch_indices shape: " + str(chosen_colors.shape))
#chosen_colors_expanded = tf.stack([batch_indices, chosen_colors], axis=-1) # Shape: (batch_size, width*height, 2)
#print("chosen_colors_expanded shape: " + str(chosen_colors_expanded.shape))
batch_size, num_pixels, num_choices_per_px, num_colors = color_choices_batch.shape # Extract dimensions
# Use batch-wise advanced indexing
mapped_image = color_choices[np.arange(batch_size)[:, None], np.arange(num_pixels), chosen_colors] * 255
#mapped_image = tf.gather_nd(color_choices_batch, chosen_colors) * 255 # Shape: (batch_size, width*height, 3)
# mapped_image = tf.gather(color_choices_batch, chosen_colors, batch_dims=0) * 255
print("mapped_image shape: " + str(mapped_image.shape))
mapped_image = tf.reshape(mapped_image, (INPUT_SHAPE[0], INPUT_SHAPE[1], 3)) # Reshape to (height, width, 3)
</code></pre>
<p>Output:</p>
<pre><code>output_matrices dimensions: (32, 1024, 4)
color_choices_batch dimensions: (32, 1024, 4, 3)
chosen_colors shape: (32, 1024)
Traceback (most recent call last):
File "xxx\python_tests\dither_test_1.py", line 229, in <module>
loss = train_step(color_decider, image_batch, color_choices_batch)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "xxx\python_tests\dither_test_1.py", line 209, in train_step
generated_images = convert_to_image_tf(raw_outputs, color_choices_batch)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "xxx\python_tests\dither_test_1.py", line 182, in convert_to_image_tf
mapped_image = color_choices[np.arange(batch_size)[:, None], np.arange(num_pixels), chosen_colors] * 255
~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
IndexError: index 4 is out of bounds for axis 1 with size 4
</code></pre>
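<p>To clarify what I am after, here is a rough sketch (illustration only, not code from my program) of how I imagine the batched lookup could be expressed with <code>tf.gather</code>, plus a "soft" weighted variant that I assume would keep gradients flowing for training:</p>
<pre><code>import tensorflow as tf

def convert_to_image_sketch(output_matrices, color_choices_batch):
    # Hard selection: pick the arg-max choice per pixel, shape (batch, pixels).
    chosen = tf.argmax(output_matrices, axis=2, output_type=tf.int32)
    # Look up the chosen color per (batch, pixel): result shape (batch, pixels, 3).
    hard = tf.gather(color_choices_batch, chosen, batch_dims=2) * 255

    # Soft selection: weight the four choices by softmax(outputs) instead of argmax,
    # which stays differentiable; result shape (batch, pixels, 3).
    weights = tf.nn.softmax(output_matrices, axis=2)
    soft = tf.einsum("bpc,bpcd->bpd", weights, color_choices_batch) * 255
    return hard, soft
</code></pre>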
<p>The full program is below. Note that there seems to be a recent issue with TensorFlow imports, so you might need to change those:</p>
<pre><code>import tensorflow as tf
import tensorflow.python.keras as keras
import tensorflow.python.keras.layers as layers
import tensorflow.python.keras.losses as losses
import tensorflow.python.keras.optimizers as optimizers
import numpy as np
import os
import random
import cv2
# Define input shape
INPUT_SHAPE = (32, 32, 3) # Example: 32x32 RGB images
NUM_COLORS = 4 # The four color choices per pixel
# Second network: Color decision maker
def build_color_decider():
# First input: Matrix with 4 values per pixel
color_choices = keras.Input(shape=(INPUT_SHAPE[0] * INPUT_SHAPE[1], 4, 3), name="ColorChoices")
# Second input: Normal RGB image
image_data = keras.Input(shape=(INPUT_SHAPE[0], INPUT_SHAPE[1], 3), name="ImageData")
# Process color choices matrix
x1 = layers.Conv2D(32, (3,3), activation="relu")(color_choices)
x1 = layers.Flatten()(x1)
# Process image data matrix
x2 = layers.Conv2D(32, (3,3), activation="relu")(image_data)
x2 = layers.Flatten()(x2)
merged = layers.Concatenate()([x1, x2])
merged = layers.Dense(128, activation="relu")(merged)
# One number per pixel, representing the chosen color
output = layers.Dense(INPUT_SHAPE[0] * INPUT_SHAPE[1]*4, activation='linear')(merged) # Example output layer
output = layers.Reshape((INPUT_SHAPE[0] * INPUT_SHAPE[1], 4))(output)
model = keras.Model(inputs=[color_choices, image_data], outputs=output)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
# model = keras.Sequential([
# layers.Input(shape=(INPUT_SHAPE[0], INPUT_SHAPE[1], 4)),
# layers.Conv2D(INPUT_SHAPE[0], (3,3), activation='relu', input_shape=INPUT_SHAPE),
# layers.Conv2D(64, (3,3), activation='relu'),
# layers.Flatten(),
# layers.Dense(1024, activation='relu'),
# layers.Dense(INPUT_SHAPE[0] * INPUT_SHAPE[1] * 3, activation='linear') # Final RGB values
# ])
return model
image_dirs = ["ENTER IMAGE FOLDER HERE"] # Example folders
def load_random_image():
"""Loads a random image from one of the folders."""
chosen_dir = random.choice(image_dirs) # Pick a random folder
images = os.listdir(chosen_dir) # List images
img_path = os.path.join(chosen_dir, random.choice(images)) # Pick a random image
return img_path
def preprocess_image(img_path, downsample_factor=2, target_size=(32, 32)):
"""Loads, downsamples, and crops the image."""
img = cv2.imread(img_path) # Read image
# Calculate new size so that smallest dimension is same as target size
h, w, _ = img.shape
if h < w:
new_h = target_size[0]
new_w = int(w * (new_h / h))
else:
new_w = target_size[0]
new_h = int(h * (new_w / w))
img = cv2.resize(img, (new_w, new_h)) # Resize to maintain aspect ratio
#img = cv2.resize(img, (64, 64)) # Downsample (adjust size)
# Random Crop (Center Crop for simplicity)
crop_size = target_size[0] # Assume square crop
h, w, _ = img.shape
start_x = (w - crop_size) // 2
start_y = (h - crop_size) // 2
img = img[start_y:start_y+crop_size, start_x:start_x+crop_size] # Crop
img = img / 255.0 # Normalize
return img
def data_generator(batch_size=32):
"""Yields batches of processed images."""
while True:
batch_images = []
for _ in range(batch_size):
img_path = load_random_image()
img = preprocess_image(img_path)
batch_images.append(img)
yield np.array(batch_images)
# Create the models
color_decider = build_color_decider()
color_map = {
"cyan": [0, 255, 255], # RGB for Cyan
"magenta": [255, 0, 255], # RGB for Magenta
"yellow": [255, 255, 0], # RGB for Yellow
"black": [0, 0, 0], # RGB for Black
"white": [255, 255, 255], # RGB for White
"green": [0, 255, 0], # RGB for Green
"red": [255, 0, 0], # RGB for Red
"blue": [0, 0, 255] # RGB for Blue
}
# normalize color map values to [0, 1] range
color_map = {k: np.array(v) / 255.0 for k, v in color_map.items()}
pixel_option_1 = np.array([
color_map["cyan"],
color_map["magenta"],
color_map["yellow"],
color_map["black"]
])
pixel_option_2 = np.array([
color_map["blue"],
color_map["black"],
color_map["red"],
color_map["white"]
])
# first run with preset color choices - CMYK
color_choices = np.array([
pixel_option_1 if xcolor%2==0 else pixel_option_2 for xcolor in range(INPUT_SHAPE[0]*INPUT_SHAPE[1])
])
def data_generator_with_color_choices(batch_size=32):
"""Yields batches of processed images with color choices."""
images_generator = data_generator(batch_size)
while True:
batch_color_choices = [color_choices for _ in range(batch_size)]
yield np.array(batch_color_choices), np.array(next(images_generator))
def convert_to_image_tf(output_matrices, color_choices_batch):
"""Maps network outputs to RGB colors using predefined choices in TensorFlow."""
batch_size = output_matrices.shape[0]
print("output_matrices dimensions: " + str(output_matrices.shape))
print("color_choices_batch dimensions: " + str(color_choices_batch.shape))
chosen_colors = tf.argmax(output_matrices, axis=2, output_type=tf.int32)
print("chosen_colors shape: " + str(chosen_colors.shape))
#batch_indices = tf.range(batch_size)[:, tf.newaxis] # Shape: (batch_size, 1)
#print("batch_indices shape: " + str(chosen_colors.shape))
#chosen_colors_expanded = tf.stack([batch_indices, chosen_colors], axis=-1) # Shape: (batch_size, width*height, 2)
#print("chosen_colors_expanded shape: " + str(chosen_colors_expanded.shape))
batch_size, num_pixels, num_choices_per_px, num_colors = color_choices_batch.shape # Extract dimensions
# Use batch-wise advanced indexing
mapped_image = color_choices[np.arange(batch_size)[:, None], np.arange(num_pixels), chosen_colors] * 255
#mapped_image = tf.gather_nd(color_choices_batch, chosen_colors) * 255 # Shape: (batch_size, width*height, 3)
# mapped_image = tf.gather(color_choices_batch, chosen_colors, batch_dims=0) * 255
print("mapped_image shape: " + str(mapped_image.shape))
mapped_image = tf.reshape(mapped_image, (INPUT_SHAPE[0], INPUT_SHAPE[1], 3)) # Reshape to (height, width, 3)
return mapped_image
def blur_image(image):
"""Applies Gaussian blur to smooth the image."""
return cv2.GaussianBlur(image, (5, 5), 0) # Kernel size of (5,5)
train_data = tf.data.Dataset.from_generator(
lambda: data_generator_with_color_choices(32),
output_types=(tf.float32, tf.float32), # Two inputs: color choices & image data
output_shapes=((None, INPUT_SHAPE[0]*INPUT_SHAPE[1], 4, 3), (None, INPUT_SHAPE[0], INPUT_SHAPE[1], 3)) # First is color choices, second is image data
)
loss_fn = tf.keras.losses.MeanSquaredError()
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
# Custom training loop
def train_step(model, image_batch, color_choices_batch):
with tf.GradientTape() as tape:
batch_size = image_batch.shape[0]
raw_outputs = model((color_choices_batch, image_batch)) # Get predictions
generated_images = convert_to_image_tf(raw_outputs, color_choices_batch)
blurred_generated_images = tf.nn.avg_pool2d(generated_images, ksize=5, strides=1, padding="SAME")
blurred_original_images = tf.nn.avg_pool2d(image_batch, ksize=5, strides=1, padding="SAME")
# Compute loss
loss = loss_fn(blurred_original_images, blurred_generated_images)
# Compute and apply gradients
grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
return loss.numpy()
# Training loop
for epoch in range(300):
for step, (color_choices_batch, image_batch) in enumerate(train_data.take(100)):
loss = train_step(color_decider, image_batch, color_choices_batch)
print(f"Epoch {epoch}, Step {step}, Loss: {loss}")
</code></pre>
|
<python><numpy><tensorflow><keras><neural-network>
|
2025-05-26 05:29:41
| 0
| 53,877
|
Tomáš Zato
|
79,638,169
| 3,599,283
|
osxphoto photo.export(testDIR) does not output anything
|
<p><strong>NOTE</strong>: Description changed. The previous problem was a "photodb.export() is null" issue. Export now seems to happen per photo.export(), but it does not seem to output any image.</p>
<hr />
<p>Why is photo.export() not working?</p>
<p>Using <a href="https://pypi.org/project/osxphotos/" rel="nofollow noreferrer"><code>osxphotos</code></a> (Python) to organize photos. The code below gets a list of all photos and their information. The problem is that photo.export(destDIR) is not working. No error is given, but no file is written.</p>
<p>The output says several photos were exported, but none exist.</p>
<p>Here is the code:</p>
<pre><code>phList = [ph for ph in po.db.photos()]
print("Photos found:", len(phList))
DIRexp = os.path.join(Path.cwd().parent,'testExport')
i = 0
for ph in phList:
i += 1
bn = ph.original_filename
fn = ph.filename
if not ph.ismissing:
#print("Photo ismissing", bn, fn)
continue
else:
ph.export(DIRexp)
print("Photo exported:", bn, fn)
tmpFN = os.path.join(DIRexp, ph.original_filename)
if os.path.exists(tmpFN):
print("Export OK", tmpFN)
break
else:
if (i % 400) == 0 :
print("Count", i)
continue
</code></pre>
|
<python><iphone><image><photo>
|
2025-05-26 00:26:36
| 0
| 1,267
|
frankr6591
|
79,638,103
| 10,262,805
|
Why is attention scaled by sqrt(d_k) in Transformer architectures?
|
<p>I have this code in transformer model:</p>
<pre class="lang-py prettyprint-override"><code>keys = x @ W_key
queries = x @ W_query
values = x @ W_value
attention_scores = queries @ keys.T
# keys.shape[-1]**0.5: used to scale the attention scores before applying the softmax
attn_weights = torch.softmax(attention_scores / keys.shape[-1]**0.5, dim=-1)
</code></pre>
<p>I understand that <code>keys.shape[-1]**0.5</code> helps stabilize training by preventing large dot products from pushing softmax into sharp regions. But what’s the statistical or mathematical reasoning for choosing this value? And instead of a fixed constant, why isn't a learnable parameter used here? Wouldn’t that allow the model to adapt better?</p>
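<p>From what I have read, the usual argument is statistical: if the components of a query and a key are roughly independent with zero mean and unit variance, their dot product over <code>d_k</code> dimensions has variance <code>d_k</code>, so dividing by <code>sqrt(d_k)</code> brings the score variance back to about 1. A quick numerical check (illustration only):</p>
<pre class="lang-py prettyprint-override"><code>import torch

d_k = 512
q = torch.randn(10_000, d_k)  # zero-mean, unit-variance components
k = torch.randn(10_000, d_k)
scores = (q * k).sum(dim=-1)

print(scores.var())               # roughly d_k (~512)
print((scores / d_k**0.5).var())  # roughly 1 after scaling
</code></pre>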
|
<python><torch><transformer-model><attention-model>
|
2025-05-25 21:48:12
| 0
| 50,924
|
Yilmaz
|
79,638,044
| 19,293,506
|
How to insert Text+ clips on specific video tracks using DaVinci Resolve Python API?
|
<p>I'm working on a DaVinci Resolve automation project where I want to create a dialogue-style video. The idea is to overlay lines of dialog (generated audio and corresponding text) using <strong>Text+ titles</strong> programmatically through the Resolve scripting API.</p>
<h3>What I'm trying to do:</h3>
<ul>
<li>I have a JSON file that looks like this:</li>
</ul>
<pre class="lang-json prettyprint-override"><code>[
{
"sequence": 1,
"response": "Hi",
"voiceId": 2
},
{
"sequence": 2,
"response": "How are you?",
"voiceId": 3
},
{
"sequence": 3,
"response": "I'm good!",
"voiceId": 4
},
...
]
</code></pre>
<p><a href="https://i.sstatic.net/oJ1b9DUA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oJ1b9DUA.png" alt="enter image description here" /></a></p>
<ul>
<li>For every <strong>3 audio clips</strong>, I want to add <strong>3 Text+ titles</strong> — one per clip — to <strong>Video Tracks 2, 3, and 4</strong>.</li>
<li>Each Text+ should:
<ul>
<li>Appear when its corresponding audio starts.</li>
<li>Remain visible until the <strong>last</strong> audio in the group ends.</li>
<li>Be editable manually later on in the Resolve interface (so Compound Clips or subtitles are not suitable).</li>
</ul>
</li>
</ul>
<h3>What’s not working:</h3>
<p>When using:</p>
<pre class="lang-py prettyprint-override"><code>timeline.SetCurrentTimecode(...)
timeline.InsertFusionTitleIntoTimeline("Text+")
clip.SetProperty("TrackIndex", 2)
</code></pre>
<p><a href="https://i.sstatic.net/HA4uDmOy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HA4uDmOy.png" alt="enter image description here" /></a></p>
<ul>
<li>The inserted Text+ clips always go to <strong>V1</strong>.</li>
<li>Trying to move them to V2/V3/V4 with <code>.SetProperty("TrackIndex", X)</code> doesn't seem to work.</li>
<li>There is <strong>no clear way to control</strong> the target video track during <code>InsertFusionTitleIntoTimeline</code>.</li>
</ul>
<hr />
<h3>Question:</h3>
<p><strong>How can I programmatically insert Text+ clips to Video Track 2, 3, and 4, using Python API in DaVinci Resolve, and set their duration and text accordingly?</strong></p>
<p>Any workaround or hacky approach that avoids Compound Clips would be helpful!</p>
|
<python><python-3.x><davinci-resolve>
|
2025-05-25 20:29:14
| 1
| 631
|
kawa
|
79,637,884
| 1,719,931
|
Pylance doesn't like calling `[]` on a dict-inherited class
|
<p>PyAlex has the class <a href="https://github.com/J535D165/pyalex/blob/1517794516571b45ae345a842e6dcdf55ff291dd/pyalex/api.py#L277" rel="nofollow noreferrer">OpenAlexEntity</a>:</p>
<pre class="lang-py prettyprint-override"><code>class OpenAlexEntity(dict):
"""Base class for OpenAlex entities."""
pass
</code></pre>
<p>and <a href="https://github.com/J535D165/pyalex/blob/1517794516571b45ae345a842e6dcdf55ff291dd/pyalex/api.py#L942" rel="nofollow noreferrer">Institution</a>:</p>
<pre class="lang-py prettyprint-override"><code>class Institution(OpenAlexEntity):
"""Class representing an institution entity in OpenAlex."""
pass
</code></pre>
<p>So the following code correctly works</p>
<pre class="lang-py prettyprint-override"><code>from pyalex import Institutions
insts = Institutions().search("MIT").get()
inst = insts[0]
print(type(inst)) # prints "Institution"
print(inst["display_name"]) # prints "Massachusetts Institute of Technology"
</code></pre>
<p>However, PyLance underlines it in red showing the following error:</p>
<pre class="lang-none prettyprint-override"><code>No overloads for "__getitem__" match the provided argumentsPylancereportCallIssue
builtins.pyi(1062, 9): Overload 2 is the closest match
</code></pre>
<p><a href="https://i.sstatic.net/4aEniWzL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4aEniWzL.png" alt="enter image description here" /></a></p>
<p>How can I fix this?</p>
|
<python><python-typing><pyright><pyalex>
|
2025-05-25 16:54:30
| 2
| 5,202
|
robertspierre
|
79,637,820
| 12,102,460
|
Dynamically parsing API response into the correct types with msgspec
|
<p>Below is a toy example I am unable to get working. How do I properly configure msgspec so it will automatically select the correct types (structs)? I can't change the JSON input, as it is something that I don't have any control over. I provided 2 examples of how that JSON may look: <code>image</code> or <code>video</code></p>
<pre><code>import json
import msgspec
from typing import Union
class Image(msgspec.Struct):
format: str
class Video(msgspec.Struct):
format: str
MediaType = Union[Image, Video]
class Media(msgspec.Struct):
width: int
height: int
mediaType: MediaType
DECODER = msgspec.json.Decoder(Media)
image = {"width": 100,
"height": 200,
"mediaType": {
"Image": {
"format": "jpg"
}
}
}
video = {"width": 100,
"height": 200,
"mediaType": {
"Video": {
"format": "mp4"
}
}
}
image_str = json.dumps(image)
video_str = json.dumps(video)
image_str_decoded = DECODER.decode(image_str)
video_str_decoded = DECODER.decode(video_str)
print(image_str_decoded)
print(video_str_decoded)
</code></pre>
<p>What I am trying to achieve:</p>
<pre><code>image_str_decoded(width=100, height=200, mediaType=Image(format=jpg))
video_str_decoded(width=100, height=200, mediaType=Video(format=mp4))
</code></pre>
|
<python><msgspec>
|
2025-05-25 15:37:31
| 1
| 1,135
|
miran80
|
79,637,562
| 29,295,031
|
How to remove the <think> tag in reasoning model DeepSeek responses
|
<p>I'm using the DeepSeek LLM for my AI app, and the responses include its thinking process.</p>
<p>It looks something like this:
<a href="https://i.sstatic.net/Yjcc6R9x.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Yjcc6R9x.png" alt="enter image description here" /></a></p>
<p>the code generate the response :</p>
<pre><code># Generate a new response if last message is not from assistant
if st.session_state.messages[-1]["role"] != "assistant":
with st.chat_message("assistant"):
with st.spinner("Thinking..."):
response = ask_question(qa, prompt)
print(response.content)
st.markdown(response.content)
message = {"role": "assistant", "content": response.content}
st.session_state.messages.append(message)
</code></pre>
<p>The responses of the AI assistant have content in <code>&lt;think&gt;</code> tags.</p>
<p>The actual answer is provided after the tags.</p>
<p>Do you have any idea how to remove the tag and its content?</p>
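<p>To illustrate the kind of post-processing I have in mind (assuming the reasoning really is wrapped in literal <code>&lt;think&gt;...&lt;/think&gt;</code> tags), something like this is what I am picturing, but I am not sure it is the right approach:</p>
<pre><code>import re

def strip_think(text: str) -> str:
    # Drop everything between <think> and </think>, including the tags themselves.
    return re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()

answer = strip_think(response.content)
st.markdown(answer)
</code></pre>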
|
<python><langchain><large-language-model><deepseek>
|
2025-05-25 10:13:06
| 2
| 401
|
user29295031
|
79,637,492
| 5,276,890
|
Use tkinter in an existing window
|
<p>I have an existing open window, created by SDL 2 in C/C++. I would like the process to call a python script (using Boost/Python) to add some GUI elements to it.</p>
<p>Here's a non-working example:</p>
<pre class="lang-py prettyprint-override"><code>import sys
import sdl2
import sdl2.ext
import tkinter as tk
from tkinter import *
# Initialize SDL2
sdl2.ext.init()
# Set up the window
window_width = 800
window_height = 600
window_title = "SDL2 Window"
window = sdl2.ext.Window(window_title, size=(window_width, window_height))
window.show()
# Get SDL handle for the window
wminfo = sdl2.SDL_SysWMinfo()
sdl2.SDL_VERSION(wminfo.version)
window_id = sdl2.SDL_GetWindowWMInfo(window.window, wminfo)
print(wminfo.info.win.window)
#tkinter
# The tkinter code examples mentioned below go here
</code></pre>
<p>Looking at <a href="https://docs.python.org/3/library/tkinter.html#tkinter.Tk" rel="nofollow noreferrer">https://docs.python.org/3/library/tkinter.html#tkinter.Tk</a>, I see that</p>
<blockquote>
<p><em>use</em><br />
Specifies the <em>id</em> of the window in which to embed the application, instead of it being created as an independent toplevel window.</p>
</blockquote>
<p>However, it also adds</p>
<blockquote>
<p>Note that on some platforms this will only work correctly if <em>id</em> refers to a Tk frame or toplevel that has its -container option enabled.</p>
</blockquote>
<p>Running the following prints a window id similar to <code>window_id</code> above.</p>
<pre class="lang-py prettyprint-override"><code>root = tk.Tk()
tk_id = root.winfo_id()
</code></pre>
<p>Running <code>child = tk.Tk(use=str(wminfo.info.win.window))</code> causes python to exit without an exception. VS Code writes <code>Server[1] disconnected unexpectedly</code> but I don't think this is the X11 server but rather the debugger.</p>
<p>Running <code>root = tk.Tk(); child = tk.Tk(use=str(tk_id))</code> causes python to exit as well.</p>
<p>Can this be done at all? Am I missing something simple here?</p>
|
<python><tkinter><sdl-2>
|
2025-05-25 08:18:01
| 2
| 1,729
|
Roy Falk
|
79,637,340
| 1,034,974
|
obtaining column from another shard using jax.lax.gather()
|
<p>I'm trying to distribute a large computation to multiple shards using jax. I have 2x2 tiles of shards. Each shard has a 3x3 array (with distinct integers), e.g. in shard (0,0):</p>
<pre><code>x = jnp.array([[40, 54, 70], [14, 3, 94], [44, 54, 88]])
</code></pre>
<p>and in shard (1,0):</p>
<pre><code>x = jnp.array([[50, 46, 22], [ 4, 83, 21], [91, 32, 68]])
</code></pre>
<p>I wanted to use <code>lax.gather()</code> to obtain the first column from the sibling shard, so that shard (0,0) will get [50, 4, 91] and the other shard will get [40, 14, 44]. Then I can use the column from the other shard to update the first column in the local shard to keep the minimum values.</p>
<p>Another possible approach is to use pmin, but I did not find a working example that could be used for this purpose.</p>
<p>My experimental code:</p>
<pre><code>first_column = lax.gather(
operand=x,
start_indices=jnp.array([0]),
dimension_numbers=jax.lax.GatherDimensionNumbers(
offset_dims=(0,),
start_index_map=(1,),
collapsed_slice_dims=()),
slice_sizes=jnp.array([x.shape[0]]))
</code></pre>
<p>returns error:</p>
<blockquote>
<p>TypeError: Shapes must be 1D sequences of concrete values of integer type, got On TFRT_CPU_0 at mesh coordinates (i, j,) = (0, 0):
[3]</p>
</blockquote>
<p>EDIT:
I think I found a workaround using all_gather():</p>
<pre><code>j_coord = lax.axis_index('j')
column = lax.cond(j_coord == 0, lambda _: x[:,-1], lambda _: x[:,0], operand=None)
column_gathered = lax.all_gather(column, axis_name="j", tiled=False)
column_gathered = jnp.minimum(column_gathered[0,:], column_gathered[1,:])
x = lax.cond(j_coord == 0,
lambda _: x.at[:,-1].set(column_gathered),
lambda _: x.at[:,0].set(column_gathered), operand=None)
</code></pre>
<p>However, I'm concerned that all_gather() sends two columns instead of one. Is there a way to make the calculation of minimum among shards more efficient than this?</p>
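<p>For comparison, this is the single-column exchange I imagine could be done with <code>lax.ppermute</code> instead of <code>all_gather()</code> (an untested sketch on my part, reusing <code>column</code>, <code>j_coord</code> and <code>x</code> from the workaround above):</p>
<pre><code># Send my boundary column to the sibling shard on the "j" axis and receive theirs.
neighbour_column = lax.ppermute(column, axis_name="j", perm=[(0, 1), (1, 0)])
column_min = jnp.minimum(column, neighbour_column)
x = lax.cond(j_coord == 0,
             lambda _: x.at[:, -1].set(column_min),
             lambda _: x.at[:, 0].set(column_min), operand=None)
</code></pre>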
|
<python><mesh><jax>
|
2025-05-25 03:22:24
| 0
| 348
|
Terry
|
79,637,323
| 3,636,292
|
Python in MS excel: show the output dataframe
|
<p>I have a dataframe which I filtered and stored. I can use the filtered dataframe to do calculations, but I cannot use it as a table in Excel.</p>
<p>I want to show the filtered dataframe in the Excel file. How do I do that?</p>
<p>Example:</p>
<pre><code>df_original = xl("G4:H12", headers=True)
condition = df_original["example"].isin(example_list)
df_filtered = df_original[condition]
</code></pre>
<p>I want to show <code>df_filtered</code> in Excel.</p>
|
<python><excel>
|
2025-05-25 02:29:30
| 1
| 1,714
|
Neeraj Hanumante
|
79,637,322
| 388,506
|
How to dynamically adjust the selection?
|
<p>I've been trying to do this, but as I've browsed the answers, most of these problems ended up with a deprecated solution, or simply no solution.</p>
<pre class="lang-py prettyprint-override"><code>job_id = fields.Many2one('inw_asset.job_orders', string='Service request', required=True, store=True)
owner_id = fields.Many2one('res.users', string='Asset Owner', required=True, default=lambda self: self.env.user, readonly=True, store=True)
person_in_charge = fields.Many2one('res.users', string='Engineer assigned', required=True, store=True)
report_type = fields.Selection(selection=lambda self: self.get_report_type_list(), required=True, default='i')
def get_report_type_list(self):
user = self.env.user
if self.env.user.has_group('inw_asset.administrator'):
return [
('i', 'Information'),
('p', 'Progress'),
('r', 'Completion Report'),
('d', 'Delayed')
]
elif self.env.user == self.owner_id:
return [
('i', 'Information'),
('r', 'Completion Report'),
]
        elif self.env.user == self.person_in_charge:
return [
('i', 'Information'),
('p', 'Progress'),
('d', 'Delayed')
]
else:
return [
('i', 'Information')
]
</code></pre>
<p>I tried this method, but it ended up showing only the <code>i</code> option. I found some solutions telling me to use a domain, but it was not working, and when I searched why, it was noted that the method was deprecated.</p>
<p><code>job_id</code> is linked to the service report requested by the asset holder.
Is there a non-deprecated way to restrict which options the user can pick? Or using a transient record? Or anything else?</p>
|
<python><odoo><odoo-18>
|
2025-05-25 02:17:18
| 1
| 2,157
|
Magician
|
79,637,243
| 2,386,113
|
How to display an image in a GUI using Trame, Vuetify
|
<p>I am new to <a href="https://kitware.github.io/trame/examples/" rel="nofollow noreferrer">Trame and Vuetify</a>. I need to show an image in the GUI. The GUI would run on a localhost. My MWE is given below:</p>
<pre><code>from trame.app import get_server
from trame.widgets import html, vuetify, vtk as vtk_widgets
from trame.ui.vuetify import SinglePageLayout
import os
# -----------------------------------------------------------------------------
# Trame setup
# -----------------------------------------------------------------------------
server = get_server(client_type="vue2")
state, ctrl = server.state, server.controller
#################
state.trame__title = "Test"
with SinglePageLayout(server) as layout:
layout.icon.click = "$refs.view.resetCamera()"
with layout.content:
with vuetify.VContainer(fluid=True, classes="pa-0 fill-height d-flex flex-row"):
with vuetify.VCol(style="max-width: 20vw; min-width: 200px;"):
with vuetify.VContainer():
print("Image exists:", os.path.exists("assets/my_test_image.png"))
print("Image exists:", os.path.exists("assets/my_test_image.png"))
html.Img(
src="/assets/my_test_image.png",
style=(
"max-width: 100%; "
"margin-top: 20px; "
"display: block;"
)
)
# -----------------------------------------------------------------------------
# Main
# -----------------------------------------------------------------------------
if __name__ == "__main__":
server.start()
</code></pre>
<p><strong>Question:</strong> In the program above, I want to display an image called <code>my_test_image.png</code> which is already present in the <strong>assets</strong> folder. The program runs but the image is not displayed (rather, a broken image icon is displayed). As a sanity check, I have put in the <code>print</code> statement to check if the image exists, and I obtained <code>**Image exists: True**</code>. How do I display the image in the GUI?</p>
|
<python><vue-component><vuetify.js><trame>
|
2025-05-24 23:17:18
| 0
| 5,777
|
skm
|
79,637,226
| 82,287
|
Has handling of escape characters in quoted strings changed between Python 3.10 and 3.13?
|
<p>I have a program that works under Python 3.10.2 Windows that contains a line like this:</p>
<pre><code>outfilename = outpathname+"\cfile.txt"
</code></pre>
<p><code>outfilename</code> ends up looking like
<code>c:\some\path\cfile.txt</code></p>
<p>This code runs clean on 3.10.2.</p>
<p>I moved this code to another Windows system running Python 3.13.3</p>
<p>It generates syntax warnings:</p>
<pre><code>C:\PythonCode\something.py:184: SyntaxWarning: invalid escape sequence '\c'
outfilename = outpathname+"\cfile.txt"
</code></pre>
<p>I didn't find any relevant details searching on "escape" in <a href="https://docs.python.org/3/whatsnew/changelog.html" rel="nofollow noreferrer">3.13 changes</a>.
Is this a significant change? Should I have been using <code>"\\c"</code> to escape the <code>"\"</code> in the quoted string all along and now a warning has been added?</p>
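<p>For reference, these are the variants I am considering to avoid the warning (illustration only):</p>
<pre><code>import os

outfilename = outpathname + "\\cfile.txt"             # escaped backslash
outfilename = outpathname + r"\cfile.txt"             # raw string
outfilename = os.path.join(outpathname, "cfile.txt")  # platform-aware join
</code></pre>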
|
<python><escaping>
|
2025-05-24 22:41:10
| 0
| 2,033
|
tim11g
|
79,637,061
| 784,804
|
Creating nested JSON objects from a flat Python object using Marshmallow and SQLAlchemy
|
<p>Given a simple model (in reality SQLAlchemy):</p>
<pre class="lang-py prettyprint-override"><code>class Thing():
name: str
attr1: int
attr2: str
</code></pre>
<p>How would I write the associated Marshmallow Schema class such that it produces JSON like this - ideally <em>without</em> using fields.Method, so that I can do things like <code>fields.Integer(as_string=True)</code> where needed?</p>
<pre class="lang-json prettyprint-override"><code>{
"name": "Name",
"attributes": {
"attr1": "1",
"attr2": "attr2"
}
}
</code></pre>
<p>As it stands I've done something like this, but I'd prefer a non-custom-method way if possible. Having to coerce into str(), for example, feels like I'm missing out on the features of marshmallow ... I'm also unsure how deserialising will work.</p>
<pre class="lang-py prettyprint-override"><code>attributes = fields.Method("build_attributes")
def build_attributes(self, obj):
return { "attr1": str(obj.attr1), "attr2": obj.attr2 }
</code></pre>
<p>Updated:</p>
<p>Another way I found was to use <code>@pre_load</code> to <code>data.pop()</code> the nested attributes and add them to the root (flatten), and <code>@post_dump</code> to create the attributes key in the root and <code>data.pop()</code> the attributes into it (unflatten). However, this doesn't seem very 'marshmallow-y', but I think it still allows standard Marshmallow validation on the flattened Schema?</p>
<p>Is this the best way?</p>
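<p>For context, this is the <code>fields.Nested</code> plus <code>@pre_dump</code>/<code>@post_load</code> sketch I have been experimenting with (an assumption on my part, with the field names taken from the example above), which at least lets me keep <code>fields.Integer(as_string=True)</code>:</p>
<pre class="lang-py prettyprint-override"><code>from marshmallow import Schema, fields, post_load, pre_dump

class AttributesSchema(Schema):
    attr1 = fields.Integer(as_string=True)
    attr2 = fields.String()

class ThingSchema(Schema):
    name = fields.String()
    attributes = fields.Nested(AttributesSchema)

    @pre_dump
    def nest(self, obj, **kwargs):
        # Wrap the flat object so the nested field can read attr1/attr2 off it.
        return {"name": obj.name, "attributes": obj}

    @post_load
    def flatten(self, data, **kwargs):
        # Merge the nested dict back into a flat structure on load.
        return {"name": data["name"], **data.get("attributes", {})}
</code></pre>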
|
<python><marshmallow>
|
2025-05-24 18:22:20
| 1
| 431
|
Umm
|
79,636,956
| 17,411,406
|
setup dj_rest_auth and all allauth not working
|
<p>Hello, I'm trying to set up dj_rest_auth and allauth with a custom user model for login for my Next.js app, but the backend part does not seem to be working.</p>
<p>Besides it not working, I get this warning:</p>
<pre><code>/usr/local/lib/python3.12/site-packages/dj_rest_auth/registration/serializers.py:228: UserWarning: app_settings.USERNAME_REQUIRED is deprecated, use: app_settings.SIGNUP_FIELDS['username']['required']
required=allauth_account_settings.USERNAME_REQUIRED,
/usr/local/lib/python3.12/site-packages/dj_rest_auth/registration/serializers.py:230: UserWarning: app_settings.EMAIL_REQUIRED is deprecated, use: app_settings.SIGNUP_FIELDS['email']['required']
email = serializers.EmailField(required=allauth_account_settings.EMAIL_REQUIRED)
/usr/local/lib/python3.12/site-packages/dj_rest_auth/registration/serializers.py:288: UserWarning: app_settings.EMAIL_REQUIRED is deprecated, use: app_settings.SIGNUP_FIELDS['email']['required']
email = serializers.EmailField(required=allauth_account_settings.EMAIL_REQUIRED)
No changes detected
# python manage.py migrate
/usr/local/lib/python3.12/site-packages/dj_rest_auth/registration/serializers.py:228: UserWarning: app_settings.USERNAME_REQUIRED is deprecated, use: app_settings.SIGNUP_FIELDS['username']['required']
required=allauth_account_settings.USERNAME_REQUIRED,
/usr/local/lib/python3.12/site-packages/dj_rest_auth/registration/serializers.py:230: UserWarning: app_settings.EMAIL_REQUIRED is deprecated, use: app_settings.SIGNUP_FIELDS['email']['required']
email = serializers.EmailField(required=allauth_account_settings.EMAIL_REQUIRED)
/usr/local/lib/python3.12/site-packages/dj_rest_auth/registration/serializers.py:288: UserWarning: app_settings.EMAIL_REQUIRED is deprecated, use: app_settings.SIGNUP_FIELDS['email']['required']
email = serializers.EmailField(required=allauth_account_settings.EMAIL_REQUIRED)
</code></pre>
<p>Versions I use:</p>
<pre><code>Django==5.2.1
django-cors-headers==4.3.1
djangorestframework==3.16.0
dj-rest-auth==7.0.1
django-allauth==65.8.1
djangorestframework_simplejwt==5.5.0
psycopg2-binary==2.9.10
python-dotenv==1.0.1
Pillow==11.2.1
gunicorn==23.0.0
whitenoise==6.9.0
redis==5.2.1
requests==2.32.3
</code></pre>
<p>models.py</p>
<pre><code>import uuid
from django.db import models
from django.utils import timezone
from django.contrib.auth.models import AbstractUser, PermissionsMixin, UserManager
# Create your models here.
class MyUserManager(UserManager):
def _create_user(self, name, email, password=None , **extra_fields):
if not email:
raise ValueError('Users must have an email address')
email = self.normalize_email(email=email)
user = self.model(email=email , name=name, **extra_fields)
user.set_password(password)
user.save(using=self.db)
return user
def create_user(self, name=None, email=None, password=None, **extra_fields):
extra_fields.setdefault('is_staff',False)
extra_fields.setdefault('is_superuser',False)
return self._create_user(name=name,email=email,password=password,**extra_fields)
def create_superuser(self, name=None, email=None, password=None, **extra_fields):
extra_fields.setdefault('is_staff',True)
extra_fields.setdefault('is_superuser',True)
return self._create_user(name=name,email=email,password=password,**extra_fields)
class Users(AbstractUser, PermissionsMixin):
id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False)
first_name = None
last_name = None
username = None
name = models.CharField(max_length=255)
email = models.EmailField(unique=True)
is_active = models.BooleanField(default=True)
is_superuser = models.BooleanField(default=False)
is_staff = models.BooleanField(default=False)
avatar = models.ImageField(upload_to='avatars/', null=True, blank=True)
date_joined = models.DateTimeField(default=timezone.now)
last_login = models.DateTimeField(blank=True, null=True)
USERNAME_FIELD = 'email'
EMAIL_FIELD = 'email'
REQUIRED_FIELDS = []
objects = MyUserManager()
def __str__(self):
return self.email
</code></pre>
<p>settings.py</p>
<pre><code>
import os
from pathlib import Path
from datetime import timedelta
from dotenv import load_dotenv
load_dotenv()
SITE_ID = 1
BASE_DIR = Path(__file__).resolve().parent.parent
SECRET_KEY = os.environ.get('DJANGO_SECRET_KEY')
DEBUG = os.environ.get('DEBUG','False') == 'True'
ALLOWED_HOSTS = os.environ.get("DJANGO_ALLOWED_HOSTS","127.0.0.1").split(",")
CSRF_TRUSTED_ORIGINS = os.environ.get("DJANGO_CSRF_TRUSTED_ORIGINS","*").split(",")
CORS_ALLOW_CREDENTIALS = True
if DEBUG : CORS_ALLOW_ALL_ORIGINS = True
else : CORS_ALLOWED_ORIGINS = os.environ.get("DJANGO_CORS_ALLOWED_ORIGINS","*").split(",")
AUTH_USER_MODEL = "Users_app.Users"
WEBSITE_URL = os.environ.get("WEBSITE_URL","http://localhost:8000")
ACCOUNT_USER_MODEL_USERNAME_FIELD = None
ACCOUNT_LOGIN_METHODS = {'email'}
ACCOUNT_SIGNUP_FIELDS = ['email*','name*', 'password1*', 'password2*']
ACCOUNT_EMAIL_VERIFICATION = "none"
SIMPLE_JWT = {
"ACCESS_TOKEN_LIFETIME": timedelta(minutes=60),
"REFRESH_TOKEN_LIFETIME": timedelta(days=7),
"ROTATE_REFRESH_TOKENS": False,
"BLACKLIST_AFTER_ROTATION": False,
"UPDATE_LAST_LOGIN": True,
"SIGNING_KEY": SECRET_KEY,
"ALGORITHM": "HS512"
}
REST_AUTH = {
'USE_JWT': True,
'JWT_AUTH_COOKIE': 'access_token',
'JWT_AUTH_REFRESH_COOKIE': 'refresh_token',
}
INSTALLED_APPS = [
'unfold',
'unfold.contrib.filters',
'unfold.contrib.forms',
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'rest_framework',
'django_filters',
'rest_framework.authtoken',
'allauth',
'allauth.account',
'allauth.socialaccount',
'dj_rest_auth',
'dj_rest_auth.registration',
'corsheaders',
'tasks',
'Users_app',
]
MIDDLEWARE = [
'corsheaders.middleware.CorsMiddleware',
"allauth.account.middleware.AccountMiddleware",
'django.middleware.security.SecurityMiddleware',
'whitenoise.middleware.WhiteNoiseMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
AUTHENTICATION_BACKENDS = [
'allauth.account.auth_backends.AuthenticationBackend',
'django.contrib.auth.backends.ModelBackend',
]
ROOT_URLCONF = 'backend.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'backend.wsgi.application'
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql',
'HOST': os.environ.get('DATABASE_HOST'),
'NAME': os.environ.get('DATABASE_NAME'),
'USER': os.environ.get('DATABASE_USERNAME'),
'PORT': os.environ.get('DATABASE_PORT'),
'PASSWORD':os.environ.get('DATABASE_PASSWORD'),
}
}
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
# Internationalization
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_TZ = True
STATIC_URL = 'static/'
STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles')
STATICFILES_STORAGE = 'whitenoise.storage.CompressedManifestStaticFilesStorage'
# Media files
MEDIA_URL = 'media/'
MEDIA_ROOT = os.path.join(BASE_DIR, 'media')
DEFAULT_AUTO_FIELD = 'django.db.models.BigAutoField'
REST_FRAMEWORK = {
'DEFAULT_PERMISSION_CLASSES': [
'rest_framework.permissions.IsAuthenticated',
],
'DEFAULT_FILTER_BACKENDS': [
'django_filters.rest_framework.DjangoFilterBackend',
],
'DEFAULT_AUTHENTICATION_CLASSES': [
'dj_rest_auth.jwt_auth.JWTCookieAuthentication',
'rest_framework.authentication.SessionAuthentication',
],
'DEFAULT_PAGINATION_CLASS': 'rest_framework.pagination.PageNumberPagination',
'PAGE_SIZE': 50,
'DATETIME_FORMAT': "%Y-%m-%d %H:%M:%S",
}
</code></pre>
<p>Please, what is the issue?
And what are the stable versions to use if these ones are buggy?</p>
|
<python><django><dj-rest-auth>
|
2025-05-24 16:10:01
| 1
| 307
|
urek mazino
|
79,636,897
| 29,295,031
|
UserWarning: Importing verbose from langchain root module is no longer supported
|
<p>I'm using <code>langchain==0.1.9</code> with Streamlit and Python <code>3.11</code>. When I try to run the Streamlit app I get this error:</p>
<pre><code>C:\\rag-groq-strteamlit\RAG-PROD\rag\Lib\site-packages\langchain\__init__.py:29: UserWarning: Importing verbose from langchain root module is no longer supported. Please use langchain.globals.set_verbose() / langchain.globals.get_verbose() instead.
</code></pre>
<p>Then Streamlit stops working. Do you have any idea how I can resolve this issue, please?</p>
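<p>For what it is worth, my understanding of the replacement API the warning points at is the following (though I am not sure the warning itself is what breaks Streamlit):</p>
<pre><code>from langchain.globals import set_verbose, get_verbose

set_verbose(False)   # replaces the old `langchain.verbose = False`
print(get_verbose())
</code></pre>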
|
<python><streamlit><langchain>
|
2025-05-24 15:09:30
| 0
| 401
|
user29295031
|
79,636,672
| 14,909,621
|
If you add _metadata to a custom subclass of Series, the sequence name is lost when indexing
|
<p><kbd>pandas 2.2.3</kbd></p>
<p>Let’s consider two variants of defining a custom subtype of <code>pandas.Series</code>. In the first one, no <a href="https://pandas.pydata.org/pandas-docs/stable/development/extending.html#define-original-properties" rel="nofollow noreferrer">custom properties</a> are added, while in the second one, custom metadata is included:</p>
<pre><code>import pandas as pd
class MySeries(pd.Series):
@property
def _constructor(self):
return MySeries
seq = MySeries([*'abc'], name='data')
print(f'''Case without _metadata:
{isinstance(seq[0:1], MySeries) = }
{isinstance(seq[[0, 1]], MySeries) = }
{seq[0:1].name = }
{seq[[0, 1]].name = }
''')
class MySeries(pd.Series):
_metadata = ['property']
@property
def _constructor(self):
return MySeries
seq = MySeries([*'abc'], name='data')
seq.property = 'MyProperty'
print(f'''Case with defined _metadata:
{isinstance(seq[0:1], MySeries) = }
{isinstance(seq[[0, 1]], MySeries) = }
{seq[0:1].name = }
{seq[[0, 1]].name = }
{getattr(seq[0:1], 'property', 'NA') = }
{getattr(seq[[0, 1]], 'property', 'NA') = }
''')
</code></pre>
<p>The output will be:</p>
<pre class="lang-none prettyprint-override"><code>Case without _metadata:
isinstance(seq[0:1], MySeries) = True
isinstance(seq[[0, 1]], MySeries) = True
seq[0:1].name = 'data'
seq[[0, 1]].name = 'data'
Case with defined _metadata:
isinstance(seq[0:1], MySeries) = True
isinstance(seq[[0, 1]], MySeries) = True
seq[0:1].name = 'data'
seq[[0, 1]].name = None <<< Problematic result of indexing
getattr(seq[0:1], 'property', 'NA') = 'MyProperty'
getattr(seq[[0, 1]], 'property', 'NA') = 'MyProperty'
</code></pre>
<p>So, if <code>_metadata</code> is defined, the sequence name is preserved when slicing, but <strong>lost when indexing with a list</strong>, whereas without <code>_metadata</code> the name is preserved in both cases.</p>
<p><strong>Question:</strong>
What needs to be done to preserve the name of a custom <code>Series</code> subclass with added metadata when indexing?</p>
|
<python><pandas>
|
2025-05-24 10:30:33
| 1
| 7,606
|
Vitalizzare
|
79,636,582
| 3,427,866
|
Python lxml xpath not working on some elements
|
<p>I'm having trouble extracting a specific element's text from a SOAP response. Other elements seem to work fine.</p>
<p>I have tried the following:</p>
<pre><code>Python 3.13.3 (main, Apr 8 2025, 13:54:08) [Clang 16.0.0 (clang-1600.0.26.6)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from lxml import etree
>>> xml = '''<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
... <soap:Body>
... <soap:Fault>
... <soap:Code>
... <soap:Value>soap:Sender</soap:Value>
... <soap:Subcode>
... <soap:Value xmlns:ns1="http://docs.oasis-open.org/wss/oasis-wss-wssecurity-secext-1.1.xsd">
... ns1:unauthorized
... </soap:Value>
... </soap:Subcode>
... </soap:Code>
... <soap:Reason>
... <soap:Text xml:lang="en">AccessResult: result: Access Denied | AuthenticationAsked: true |
... ErrorCode: IDP_ERROR:
... 137 | ErrorReason: null</soap:Text>
... </soap:Reason>
... <soap:Detail>
... <WebServiceFault xmlns="http://www.taleo.com/ws/integration/toolkit/2005/07">
... <code>SystemError</code>
... <message>AccessResult: result: Access Denied | AuthenticationAsked: true | ErrorCode:
... IDP_ERROR: 137 |
... ErrorReason: null</message>
... </WebServiceFault>
... </soap:Detail>
... </soap:Fault>
... </soap:Body>
... </soap:Envelope>'''
>>> root = etree.fromstring(xml)
>>> print(root)
<Element {http://www.w3.org/2003/05/soap-envelope}Envelope at 0x1032c4680>
>>> ns = { 'soap':'http://www.w3.org/2003/05/soap-envelope', 'ns1':'http://docs.oasis-open.org/wss/oasis-wss-wssecurity-secext-1.1.xsd"' }
>>> print(root.xpath('//soap:Subcode/soap:Value',namespaces=ns)[0].text)
ns1:unauthorized
>>> print(root.xpath('//soap:Reason/soap:Text',namespaces=ns)[0].text)
AccessResult: result: Access Denied | AuthenticationAsked: true |
ErrorCode: IDP_ERROR:
137 | ErrorReason: null
>>> print(root.xpath('//soap:Detail/WebServiceFault/message',namespaces=ns)[0].text)
Traceback (most recent call last):
File "<python-input-7>", line 1, in <module>
print(root.xpath('//soap:Detail/WebServiceFault/message',namespaces=ns)[0].text)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^
IndexError: list index out of range
</code></pre>
<p>For some reason, I can't get the message element's text.</p>
<p>Appreciate any help.</p>
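<p>One thing I suspect (but would like confirmed) is that <code>WebServiceFault</code> and <code>message</code> live in the default namespace <code>http://www.taleo.com/ws/integration/toolkit/2005/07</code>, so they would need their own prefix in the XPath, something like:</p>
<pre><code>ns = {
    'soap': 'http://www.w3.org/2003/05/soap-envelope',
    'ws': 'http://www.taleo.com/ws/integration/toolkit/2005/07',
}
print(root.xpath('//soap:Detail/ws:WebServiceFault/ws:message', namespaces=ns)[0].text)
</code></pre>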
|
<python><xml><lxml><elementtree>
|
2025-05-24 08:48:52
| 2
| 1,743
|
ads
|
79,636,426
| 2,233,608
|
How to type a generic callable in python
|
<p>I am trying to do something like this:</p>
<pre class="lang-py prettyprint-override"><code>from collections.abc import Callable, Coroutine
from typing import Any, Generic, TypeVar
CRT = TypeVar("CRT", bound=Any)
class Command(Generic[CRT]): pass
CommandHandler = Callable[[Command[CRT]], Coroutine[Any, Any, CRT]]
class MyCommand(Command[int]): pass
async def my_command_handler(command: MyCommand) -> int:
return 42
async def process_command(command: Command[CRT], handler: CommandHandler[CRT]) -> CRT:
return await handler(command)
async def main() -> None:
my_command = MyCommand()
await process_command(my_command, my_command_handler)
</code></pre>
<p>But I get the following mypy error on my <code>await process_command(my_command, my_command_handler)</code> line</p>
<pre class="lang-none prettyprint-override"><code>Argument 2 to "process_command" has incompatible type "Callable[[MyCommand], Coroutine[Any, Any, int]]"; expected "Callable[[Command[int]], Coroutine[Any, Any, int]]"Mypyarg-type
(function) def my_command_handler(command: MyCommand) -> CoroutineType[Any, Any, int]
</code></pre>
<p>Which I don't understand because <code>MyCommand</code> <em>is</em> a <code>Command[int]</code>.</p>
<p>How do I make mypy happy here?</p>
<p>My end goal is to be able to have different command classes, where each command class defines the expected result of the command, and then type some function that takes both the command class and a handler for the command class, and ensure that the handler takes an instance of the command as a parameter and returns the correct result defined by the command.</p>
<p>If I change <code>my_command_handler</code> this:</p>
<pre class="lang-py prettyprint-override"><code>async def my_command_handler(command: Command[int]) -> int:
if not isinstance(command, MyCommand):
raise TypeError("Expected MyCommand")
return 42
</code></pre>
<p>It type checks fine, but I'm not a fan of having to do the extra type check in the function to get proper typing inside the function.</p>
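<p>One direction I have been sketching (not yet verified against mypy) is to make <code>process_command</code> generic over the concrete command type rather than over <code>Command[CRT]</code>, reusing <code>Command</code> and <code>CRT</code> from above:</p>
<pre class="lang-py prettyprint-override"><code>from collections.abc import Callable, Coroutine
from typing import Any, TypeVar

CmdT = TypeVar("CmdT", bound=Command[Any])

async def process_command(
    command: CmdT,
    handler: Callable[[CmdT], Coroutine[Any, Any, CRT]],
) -> CRT:
    # CmdT should be inferred as MyCommand and CRT as int from the handler's signature,
    # though this loses the link between the command's type parameter and CRT.
    return await handler(command)
</code></pre>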
|
<python><python-typing><mypy>
|
2025-05-24 04:09:36
| 2
| 1,178
|
niltz
|
79,636,345
| 1,135,775
|
Finding shifts required between two arrays elements
|
<p>I have two arrays, and I am trying to find the displacement for each element to make them match. An example:</p>
<pre><code>a = np.array([1, 0, 2, 0, 0, 1, 0, 2, 0, 0, 1, 0, 2, 0, 0 ] )
b = np.array([2, 0, 0, 1, 0, 2, 0, 0, 1, 0, 2, 0, 0, 1, 0] )
# expected
c = np.array ([3, 0, 2, 1, 0, 2, 0, 2, 1, 0, 2, 0,2, 1, 0 ])
</code></pre>
<p>The expected results: in array <code>a</code>, the first element is 1, and the closest match in <code>b</code> is 3 steps away. The second element in <code>a</code> is 0 and it is a match in <code>b</code>, so zero steps away. The third element is 2, and the closest in <code>b</code> is <code>b[0]</code>, which is 2 steps away from <code>a[2]=2</code> ... and so on.</p>
<p>I tried to come up with a function to solve this:</p>
<pre class="lang-py prettyprint-override"><code>def find_shifts_required(x, y):
assert len(x) == len(y)
result = np.zeros(len(x))
labels = np.unique(x)
for lbl in labels:
x_pos = np.where(x == lbl)
y_pos = np.where(y == lbl) #.min())
result[x_pos] = np.subtract(x_pos , y_pos )
_shifts = np.absolute(result).astype(int)
return _shifts
d = find_shifts_required(a,b)
print("expected: " , c)
print("actual: " , d)
</code></pre>
<p>Which gives me:</p>
<pre><code>expected: [3 0 2 1 0 2 0 2 1 0 2 0 2 1 0]
actual: [3 0 2 1 0 3 0 2 1 0 3 0 2 1 0]
</code></pre>
<p>I am wondering if this problem has been solved somewhere (scipy: I wasn't able to find a name) or a better implementation is available.</p>
<p>By better I mean a working solution, as this one does not give the correct results. A simpler, faster one would also be great.</p>
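<p>To make the intent concrete, this is the brute-force "nearest matching position per value" computation I am trying to express (a sketch that assumes every value in <code>a</code> also occurs in <code>b</code>, and is quadratic per value):</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

def nearest_match_shifts(x, y):
    """For every x[i], the distance to the closest position j with y[j] == x[i]."""
    result = np.zeros(len(x), dtype=int)
    for lbl in np.unique(x):
        x_pos = np.where(x == lbl)[0]
        y_pos = np.where(y == lbl)[0]
        # pairwise distances between matching positions, keep the closest per x position
        result[x_pos] = np.abs(x_pos[:, None] - y_pos[None, :]).min(axis=1)
    return result

a = np.array([1, 0, 2, 0, 0, 1, 0, 2, 0, 0, 1, 0, 2, 0, 0])
b = np.array([2, 0, 0, 1, 0, 2, 0, 0, 1, 0, 2, 0, 0, 1, 0])
print(nearest_match_shifts(a, b))  # [3 0 2 1 0 2 0 2 1 0 2 0 2 1 0]
</code></pre>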
|
<python><arrays><numpy><scipy>
|
2025-05-24 01:26:21
| 4
| 739
|
Mansour
|
79,636,250
| 2,662,901
|
QueueListener breaks logging level filter?
|
<p>It appears that <code>logging.handlers.QueueHandler</code>/<code>.QueueListener</code> is breaking the <code>.setLevel</code> of an attached <code>logging.FileHandler</code> in Python 3.12.9 on Windows.</p>
<p>Running the following minimal example results in both INFO and WARNING messages getting into the <code>Test.log</code> file even though the <code>FileHandler</code> instance had <code>.setLevel(logging.WARNING)</code>.</p>
<p>Am I doing something wrong here, or is this a bug?</p>
<pre class="lang-py prettyprint-override"><code>import logging
import logging.handlers
from multiprocessing import Queue
file_logger = logging.FileHandler('Test.log', encoding='utf-8')
file_logger.setLevel(logging.WARNING)
file_logger.setFormatter(logging.Formatter('%(levelname)s: %(message)s'))
log_queue = Queue()
queue_listener = logging.handlers.QueueListener(log_queue, file_logger)
queue_listener.start()
root_logger = logging.getLogger()
for handler in root_logger.handlers[:]:
root_logger.removeHandler(handler)
queue_handler = logging.handlers.QueueHandler(log_queue)
root_logger.addHandler(queue_handler)
root_logger.setLevel(logging.DEBUG)
logger = logging.getLogger("TestLogger")
logger.info("This is INFO")
logger.warning("This is WARNING")
queue_listener.stop()
logging.shutdown()
</code></pre>
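<p>The only related knob I have found in the docs is the <code>respect_handler_level</code> flag on <code>QueueListener</code> (it defaults to <code>False</code>); I assume the intended usage would be:</p>
<pre class="lang-py prettyprint-override"><code>queue_listener = logging.handlers.QueueListener(
    log_queue, file_logger, respect_handler_level=True
)
</code></pre>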
|
<python><logging><python-multiprocessing>
|
2025-05-23 22:06:01
| 1
| 3,497
|
feetwet
|
79,636,095
| 9,818,388
|
Coqui-ai TTS in Jenkins job
|
<p>I'm trying to use Coqui-ai TTS in a Jenkins job, and I'm getting the error below:</p>
<pre><code>C:\ProgramData\Jenkins\.jenkins\workspace\Dziennik\scripts>python tts.py --input_text_file_path DU/2025/658
Traceback (most recent call last):
File "C:\ProgramData\Jenkins\.jenkins\workspace\Dziennik\scripts\tts.py", line 34, in <module>
api = TTS(model_name="tts_models/pl/mai_female/vits")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Ja\AppData\Local\Programs\Python\Python311\Lib\site-packages\TTS\api.py", line 64, in __init__
self.manager = ModelManager(models_file=self.get_models_file_path(), progress_bar=progress_bar, verbose=False)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Ja\AppData\Local\Programs\Python\Python311\Lib\site-packages\TTS\utils\manage.py", line 51, in __init__
self.output_prefix = get_user_data_dir("tts")
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Ja\AppData\Local\Programs\Python\Python311\Lib\site-packages\TTS\utils\generic_utils.py", line 139, in get_user_data_dir
key = winreg.OpenKey(
^^^^^^^^^^^^^^^
FileNotFoundError: [WinError 2] Nie można odnaleźć określonego pliku
</code></pre>
<p>This is the code which I use:</p>
<pre><code>from TTS.api import TTS
import torch
import io
import argparse
# arg parser
parser = argparse.ArgumentParser()
parser.add_argument('--input_text_file_path', dest='input_text_file_path',
type=str, help='Add input_text_file_path')
args = parser.parse_args()
# Save the original torch.load function
_original_torch_load = torch.load
# Define a new function that forces weights_only=False
def custom_torch_load(*args, **kwargs):
if "weights_only" not in kwargs:
kwargs["weights_only"] = False
return _original_torch_load(*args, **kwargs)
# Override torch.load globally
torch.load = custom_torch_load
def import_text_file(file_name):
with io.open(f"{file_name}.txt", 'r', encoding="utf-8") as file:
return file.read().replace('\n', '')
# TTS with on the fly voice conversion
api = TTS(model_name="tts_models/pl/mai_female/vits")
api.tts_to_file(
text=import_text_file(args.input_text_file_path.replace('/', '_')),
speaker_wav="target/speaker.wav",
file_path=f"{args.input_text_file_path.replace('/', '_')}.wav"
)
</code></pre>
<p>At the same time, the code works normally when I trigger it from the cmd; the issue only shows up when I use Jenkins.</p>
<p>I have not been able to find any proper workaround for this.</p>
|
<python><jenkins><artificial-intelligence><text-to-speech><coqui>
|
2025-05-23 19:25:25
| 1
| 2,859
|
Szelek
|
79,635,993
| 11,515,528
|
Pandas insert row if value is missing
|
<p>Simple one, but my tired brain isn't letting me see the solution. I cannot pivot because there are repeated values in 'number'.</p>
<pre><code>pd.DataFrame({'KeyID':[1,1,1,1,2,2,2,3,3,3], 'number':['a','a','c','d','a','b','c','a','b','c']})
</code></pre>
<p>I need each KeyID to have a 'd'; if it does not, then a row with NaN needs to be added. Both KeyID 2 and 3 are missing a 'd', so each needs a blank row added.</p>
<p><a href="https://i.sstatic.net/lPoGGk9F.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lPoGGk9F.png" alt="enter image description here" /></a></p>
|
<python><pandas>
|
2025-05-23 17:41:48
| 3
| 1,865
|
Cam
|
79,635,896
| 3,696,153
|
How to mimic Python's dict.items() functionality in a custom class
|
<p>In Python, I am creating a quasi-dictionary like thing (custom class that looks or acts like a dictionary in many ways, but is not a <code>dict</code> subclass).</p>
<p>I would like to support the standard <code>foo.items()</code> found in a dictionary</p>
<p>But what I do not know is the <code>self.__magic__()</code> method name that I need to implement for my custom class.</p>
<p>I know, for example, <code>__setattr__()</code> and <code>__getattr__()</code>, and I see that a <code>dict</code> has a <code>__getitem__()</code> method, but the docs say it requires a <em>key</em> - what key is going to be used here? I would have expected an index, or a yield-like method that returns the name/value tuple, but I do not see that.</p>
<p>An example of doing this would also be helpful. Google only seems to find how to extend the <code>dict</code>, and that is not what I am doing.</p>
<p>Do I just implement my own <code>class.items()</code> function as a generator?</p>
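<p>The direction I am leaning towards is deriving from <code>collections.abc.Mapping</code>, which (as I understand it) supplies <code>items()</code>, <code>keys()</code> and <code>values()</code> once the three abstract methods are defined - a sketch of what I mean:</p>
<pre><code>from collections.abc import Mapping

class MyMapping(Mapping):
    def __init__(self, data):
        self._data = dict(data)

    def __getitem__(self, key):
        return self._data[key]

    def __iter__(self):
        return iter(self._data)

    def __len__(self):
        return len(self._data)

m = MyMapping({"a": 1, "b": 2})
for name, value in m.items():  # items()/keys()/values() come from Mapping
    print(name, value)
</code></pre>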
|
<python><dictionary>
|
2025-05-23 16:26:04
| 0
| 798
|
user3696153
|
79,635,887
| 11,609,834
|
Why is python trace not completely written to a log, and how to fix?
|
<p>I am running a VM on Google Cloud Compute Engine with the following (MRE) script:</p>
<pre><code>#!/bin/bash
set -e
exec > /var/log/startup-script.log 2>&1
source /opt/env/bin/activate
python3 main.py
</code></pre>
<p>In some cases, the python script has failed with a long trace. When it does, the trace is truncated in the <code>startup-script.log</code> file. I can read the start of the trace, but not the end. This is making it difficult to debug the issue, obviously.</p>
<p>My suspicion is that when the trace is long, the end of the Python process occurs before the trace is fully captured and written. I could be wrong about this suspicion, and corrections are welcome. In any event, is there a way for me to ensure that the trace is fully captured and written to the log file so that I can figure out why the python script failed?</p>
|
<python><bash><google-cloud-compute-engine>
|
2025-05-23 16:20:38
| 1
| 1,013
|
philosofool
|