QuestionId (int64, 74.8M–79.8M)
| UserId (int64, 56–29.4M)
| QuestionTitle (string, 15–150 chars)
| QuestionBody (string, 40–40.3k chars)
| Tags (string, 8–101 chars)
| CreationDate (string date, 2022-12-10 09:42:47 – 2025-11-01 19:08:18)
| AnswerCount (int64, 0–44)
| UserExpertiseLevel (int64, 301–888k)
| UserDisplayName (string, 3–30 chars)
|
|---|---|---|---|---|---|---|---|---|
79,565,542
| 58,347
|
Using typing.assert_never() with abstract base classes to ensure full coverage
|
<p>I have an abstract base class <code>ParserNode</code>, with a bunch of concrete subclasses (<code>StartTag</code>, <code>RawText</code>, etc).</p>
<p>I have a function that takes a ParserNode and does different things based on which subtype it is. I'd like to ensure that mypy will complain if I ever add a new <code>ParserNode</code> subclass without amending this function to handle the new type as well.</p>
<p>Naively, I'd write this as:</p>
<pre class="lang-py prettyprint-override"><code>import typing
def doTheThing(node: ParserNode) -> None:
if isinstance(node, StartTag):
...
elif isinstance(node, RawText):
...
else:
typing.assert_never(node)
</code></pre>
<p>However, this doesn't work - mypy immediately complains that I'm not handling the <code>ParserNode</code> type. But that's an abstract base class, declared with <code>metaclass=ABCMeta</code> and with an <code>@abstractmethod</code>, so it's impossible for a variable to have it as its concrete type.</p>
<p>So far, the only workaround I've found for this is to manually declare a union of subtypes and use that, like:</p>
<pre class="lang-py prettyprint-override"><code>import typing
ParserNodeSubs: typing.TypeAlias = "StartTag | RawText"
def doTheThing(node: ParserNode) -> None:
node = typing.cast(ParserNodeSubs, node)
if isinstance(node, StartTag):
...
elif isinstance(node, RawText):
...
else:
typing.assert_never(node)
</code></pre>
<p>This works as I want it to, but it's annoyingly verbose for a situation that I <em>believe</em> shouldn't require it: I have to manually maintain the <code>ParserNodeSubs</code> list (though only in one spot) and then manually cast variables when I need them. Is there a better way to declare these classes that makes it work automatically?</p>
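<p>For contrast, here is a minimal sketch of the behaviour I'm after, using a plain union of two hypothetical classes instead of an ABC (mypy narrows a plain union fully, so <code>assert_never</code> is accepted there):</p>
<pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass
from typing import assert_never

@dataclass
class Circle:
    r: float

@dataclass
class Square:
    side: float

Shape = Circle | Square  # a plain union, no abstract base class

def area(s: Shape) -> float:
    if isinstance(s, Circle):
        return 3.14159 * s.r ** 2
    elif isinstance(s, Square):
        return s.side ** 2
    else:
        # mypy flags this call if any union member is left unhandled above
        assert_never(s)
</code></pre>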
|
<python><python-typing><mypy>
|
2025-04-10 00:30:31
| 1
| 18,453
|
Tab Atkins-Bittner
|
79,565,521
| 8,284,452
|
What is the data_vars argument of xarray.open_mfdataset doing?
|
<p>I have two datasets with identical dimension names and shapes, and I am trying to use <code>xarray.open_mfdataset()</code> to merge them into one dataset when opening them. You can drop this code into your own IDE if you want to play with it:</p>
<pre class="lang-py prettyprint-override"><code>import xarray as xr
import numpy as np
# Create dataset one and export it somewhere on your computer
state = ['California', 'Texas', 'New York']
time_mn = np.arange(1, 61)
time_yr = np.arange(1, 6)
ds1 = xr.Dataset(
data_vars={
'prcp_mn': (['state', 'time_mn'], np.random.rand(len(state), len(time_mn))),
'prcp_yr': (['state', 'time_yr'], np.random.rand(len(state), len(time_yr))),
'temp_mn': (['state', 'time_mn'], np.random.rand(len(state), len(time_mn))),
'temp_yr': (['state', 'time_yr'], np.random.rand(len(state), len(time_yr))),
},
coords={
'state': (['state'], state),
'time_mn': (['time_mn'], time_mn),
'time_yr': (['time_yr'], time_yr)
}
)
ds1.to_netcdf('/Users/foo/Downloads/somefolder/ds1.nc')
# Create second dataset
time_mn = np.arange(61, 121)
time_yr = np.arange(6, 11)
ds2 = xr.Dataset(
data_vars={
'prcp_mn': (['state', 'time_mn'], np.random.rand(len(state), len(time_mn))),
'prcp_yr': (['state', 'time_yr'], np.random.rand(len(state), len(time_yr))),
'temp_mn': (['state', 'time_mn'], np.random.rand(len(state), len(time_mn))),
'temp_yr': (['state', 'time_yr'], np.random.rand(len(state), len(time_yr))),
},
coords={
'state': (['state'], state),
'time_mn': (['time_mn'], time_mn),
'time_yr': (['time_yr'], time_yr)
}
)
ds2.to_netcdf('/Users/foo/Downloads/somefolder/ds2.nc')
</code></pre>
<p>Printing each dataset individually yields this:</p>
<pre class="lang-py prettyprint-override"><code>ds1 = xr.open_dataset('ds1.nc')
print(ds1)
<xarray.Dataset> Size: 4kB
Dimensions: (state: 3, time_mn: 60, time_yr: 5)
Coordinates:
* state (state) <U10 120B 'California' 'Texas' 'New York'
* time_mn (time_mn) int64 480B 1 2 3 4 5 6 7 8 9 ... 53 54 55 56 57 58 59 60
* time_yr (time_yr) int64 40B 1 2 3 4 5
Data variables:
prcp_mn (state, time_mn) float64 1kB 0.5697 0.6228 0.2913 ... 0.8979 0.4984
prcp_yr (state, time_yr) float64 120B 0.8213 0.08581 ... 0.3644 0.02278
temp_mn (state, time_mn) float64 1kB 0.09875 0.7982 ... 0.9709 0.3221
temp_yr (state, time_yr) float64 120B 0.513 0.6703 ... 0.7104 0.2327
ds2 = xr.open_dataset('ds2.nc')
print(ds2)
<xarray.Dataset> Size: 4kB
Dimensions: (state: 3, time_mn: 60, time_yr: 5)
Coordinates:
* state (state) <U10 120B 'California' 'Texas' 'New York'
* time_mn (time_mn) int64 480B 61 62 63 64 65 66 ... 115 116 117 118 119 120
* time_yr (time_yr) int64 40B 6 7 8 9 10
Data variables:
prcp_mn (state, time_mn) float64 1kB 0.9125 0.6096 0.3521 ... 0.7349 0.4133
prcp_yr (state, time_yr) float64 120B 0.3651 0.5097 ... 0.2814 0.9877
temp_mn (state, time_mn) float64 1kB 0.7658 0.922 0.2113 ... 0.7582 0.4592
temp_yr (state, time_yr) float64 120B 0.2494 0.7272 ... 0.8682 0.08946
</code></pre>
<p>I expected that, when using <code>open_mfdataset()</code> to concat & merge the two datasets into one, the data variables and their associated dimensions would stay consistent, since they are identically structured and named between the two datasets. This is what I expected:</p>
<pre class="lang-py prettyprint-override"><code>Data variables:
prcp_mn (state, time_mn) ....
prcp_yr (state, time_yr) ....
temp_mn (state, time_mn) ....
temp_yr (state, time_yr) ....
</code></pre>
<p>Instead this is what I got:</p>
<pre class="lang-py prettyprint-override"><code>Data variables:
prcp_mn (time_yr, state, time_mn) ....
prcp_yr (time_mn, state, time_yr) ....
temp_mn (time_yr, state, time_mn) ....
temp_yr (time_mn, state, time_yr) ....
</code></pre>
<p>After playing with the arguments of <code>open_mfdataset()</code>, I discovered that setting <code>data_vars='minimal'</code> or <code>data_vars='different'</code> both yield the result I was expecting:</p>
<pre class="lang-py prettyprint-override"><code>ds = xr.open_mfdataset(paths=['ds1.nc', 'ds2.nc'], data_vars='minimal', decode_times=False)
print(ds)
<xarray.Dataset> Size: 7kB
Dimensions: (state: 3, time_mn: 120, time_yr: 10)
Coordinates:
* state (state) <U10 120B 'California' 'Texas' 'New York'
* time_mn (time_mn) int64 960B 1 2 3 4 5 6 7 ... 114 115 116 117 118 119 120
* time_yr (time_yr) int64 80B 1 2 3 4 5 6 7 8 9 10
Data variables:
prcp_mn (state, time_mn) float64 3kB dask.array<chunksize=(3, 60), meta=np.ndarray>
prcp_yr (state, time_yr) float64 240B dask.array<chunksize=(3, 5), meta=np.ndarray>
temp_mn (state, time_mn) float64 3kB dask.array<chunksize=(3, 60), meta=np.ndarray>
temp_yr (state, time_yr) float64 240B dask.array<chunksize=(3, 5), meta=np.ndarray>
</code></pre>
<p>If I supply the <code>data_vars</code> argument with a list of data variable names, it concatenates the "extra" dimension to just those specific data variables:</p>
<pre class="lang-py prettyprint-override"><code>ds = xr.open_mfdataset(paths=['ds1.nc', 'ds2.nc'], data_vars=['prcp_mn', 'prcp_yr'], decode_times=False)
print(ds)
<xarray.Dataset> Size: 36kB
Dimensions: (state: 3, time_mn: 120, time_yr: 10)
Coordinates:
* state (state) <U10 120B 'California' 'Texas' 'New York'
* time_mn (time_mn) int64 960B 1 2 3 4 5 6 7 ... 114 115 116 117 118 119 120
* time_yr (time_yr) int64 80B 1 2 3 4 5 6 7 8 9 10
Data variables:
prcp_mn (time_yr, state, time_mn) float64 29kB dask.array<chunksize=(5, 3, 60), meta=np.ndarray>
prcp_yr (time_mn, state, time_yr) float64 29kB dask.array<chunksize=(60, 3, 5), meta=np.ndarray>
temp_mn (state, time_mn) float64 3kB dask.array<chunksize=(3, 60), meta=np.ndarray>
temp_yr (state, time_yr) float64 240B dask.array<chunksize=(3, 5), meta=np.ndarray>
</code></pre>
<p>My question is: what does the <code>data_vars</code> argument really mean? I don't understand why the default value (<code>all</code>) doesn't yield what I was expecting. When the <a href="https://docs.xarray.dev/en/stable/generated/xarray.open_mfdataset.html" rel="nofollow noreferrer">documentation</a> refers to "data variables", to me that means <code>prcp_mn, prcp_yr, temp_mn, and temp_yr</code>, not the <em>dimensions</em> of the data variables - which really seems to be what is being concatenated when using the default <code>data_vars='all'</code> or when specifying specific data variable names. Can someone give me some code examples that show what the values <code>minimal</code> and <code>different</code> of the <code>data_vars</code> argument are doing?</p>
|
<python><python-xarray><netcdf>
|
2025-04-10 00:09:21
| 0
| 686
|
MKF
|
79,565,427
| 1,267,780
|
Pydantic CLIApp/CLISubCommand with Env Vars
|
<p>I am currently building a CLI using <a href="https://docs.pydantic.dev/latest/concepts/pydantic_settings/" rel="nofollow noreferrer">Pydantic</a>. One of the sub-commands has 2 parameters I would like to load via an env variable. I was able to load the environment variables on their own when I instantiated the class directly, but now that it is a <code>CliSubCommand</code> I am having issues loading the two parameters via env variables. I was also able to get the CLI working when the <code>AwsStsGcp</code> class was inheriting from <code>BaseModel</code>, by providing the values via the command line. Pydantic is throwing an error saying the two AWS flags are required. They are located in the aws_sts_gcp.py model: <code>aws_access_key</code> and <code>aws_secret_key</code>.</p>
<p>I also don't want it to be a mandatory global config, since it is currently only relevant for one command.</p>
<p>root_tool.py</p>
<pre><code>from pydantic import BaseModel
from pydantic_settings import CliSubCommand, CliApp

from python_tools.gcp.models.aws_sts_gcp import AwsStsGcp


class DummyCommand(BaseModel):
    project_id: str

    def cli_cmd(self) -> None:
        print(f'This is a dummy command.{self.project_id}"')


class Tool(BaseModel):
    aws_sts: CliSubCommand[AwsStsGcp]
    dummy: CliSubCommand[DummyCommand]

    def cli_cmd(self) -> None:
        CliApp.run_subcommand(self)
</code></pre>
<p>aws_sts_gcp.py</p>
<pre><code>from pydantic import SecretStr
from pydantic_settings import BaseSettings, SettingsConfigDict

from python_tools.gcp.aws_gcs_transfer import create_aws_transfer_job


class AwsStsGcp(BaseSettings, cli_parse_args=True):
    model_config = SettingsConfigDict(env_file='.env', env_file_encoding='utf-8')

    destination_bucket: str
    src_bucket: str
    manifest_path: str | None = None
    aws_access_key: SecretStr
    aws_secret_key: SecretStr
    tranfer_name: str
    project_id: str

    def cli_cmd(self) -> None:
        create_aws_transfer_job(self)
</code></pre>
<p>cli_runner.py</p>
<pre><code>from python_tools.gcp.models.root_tool import Tool
from pydantic_settings import CliApp
CliApp.run(Tool)
</code></pre>
<p>aws_gcs_transfer.py</p>
<pre><code>from google.cloud.storage_transfer_v1 import (
    StorageTransferServiceClient,
    TransferJob,
    TransferSpec,
    TransferManifest,
    AwsS3Data,
    AwsAccessKey,
    GcsData,
    RunTransferJobRequest,
    CreateTransferJobRequest
)
#from python_tools.gcp.models.aws_sts_gcp import AwsStsGcp
from python_tools.consts import timestr
from python_tools.logging import logger
import time


def create_aws_transfer_job(transfer_details) -> None:
    s3_config = None
    transfer_manifest = None
    client = StorageTransferServiceClient()
    s3_config = AwsS3Data(
        bucket_name=transfer_details.src_bucket,
        aws_access_key=AwsAccessKey(
            access_key_id=transfer_details.aws_access_key.get_secret_value(),
            secret_access_key=transfer_details.aws_secret_key.get_secret_value()
        )
    )
    gcs_dest = GcsData(bucket_name=transfer_details.destination_bucket)
    if transfer_details.manifest_path is not None:
        transfer_manifest = TransferManifest(location=transfer_details.manifest_path)
    sts_spec = TransferSpec(gcs_data_sink=gcs_dest, aws_s3_data_source=s3_config, transfer_manifest=transfer_manifest)
    timestamp = time.strftime(timestr)
    name = f"transferJobs/{transfer_details.tranfer_name}-{timestamp}"
    description = "Automated STS Job created from Python Tools."
    sts_job = TransferJob(
        project_id=transfer_details.project_id,
        name=name,
        description=description,
        transfer_spec=sts_spec,
        status=TransferJob.Status.ENABLED,
    )
    job_request = CreateTransferJobRequest(transfer_job=sts_job)
    logger.info(f"Starting Transfer Job for Job ID: {name}")
    transfer_request = RunTransferJobRequest(project_id=transfer_details.project_id, job_name=name)
    client.create_transfer_job(request=job_request)
    client.run_transfer_job(request=transfer_request)
</code></pre>
<p>.env</p>
<pre><code>AWS_ACCESS_KEY = "test"
AWS_SECRET_KEY = "test"
</code></pre>
<p>I have also tried using BaseSettings at the root like this.</p>
<pre><code>from pydantic import BaseModel
from pydantic_settings import CliSubCommand, CliApp, BaseSettings, SettingsConfigDict

from python_tools.gcp.models.aws_sts_gcp import AwsStsGcp


class DummyCommand(BaseModel):
    project_id: str

    def cli_cmd(self) -> None:
        print(f'This is a dummy command.{self.project_id}"')


class Tool(BaseSettings, cli_parse_args=True):
    model_config = SettingsConfigDict(env_file='.env', env_file_encoding='utf-8', extra='ignore')

    aws_sts: CliSubCommand[AwsStsGcp]
    dummy: CliSubCommand[DummyCommand]

    def cli_cmd(self) -> None:
        CliApp.run_subcommand(self)


from pydantic import SecretStr, BaseModel, Field

from python_tools.gcp.aws_gcs_transfer import create_aws_transfer_job


class AwsStsGcp(BaseModel):
    destination_bucket: str
    src_bucket: str
    manifest_path: str | None = None
    aws_access_key: SecretStr = Field(alias='AWS_ACCESS_KEY', env="AWS_ACCESS_KEY")
    aws_secret_key: SecretStr = Field(alias='AWS_SECRET_KEY', env="AWS_SECRET_KEY")
    tranfer_name: str
    project_id: str

    def cli_cmd(self) -> None:
        create_aws_transfer_job(self)
</code></pre>
|
<python><pydantic><pydantic-settings>
|
2025-04-09 22:30:14
| 1
| 3,695
|
CodyK
|
79,565,407
| 2,289,986
|
How to read 10-K XBRL file using python?
|
<p>What I tried: I could save a 10-K XBRL file in text format and read it line by line to extract various sections.</p>
<p>Requirement: I want to separate text from tables from 10-K.</p>
<p>Problem: For this I need to use the HTML, and it has a complex structure. Extracting text and tables seems complex with BeautifulSoup/XML etc. I tried various Python packages but none of them meet the requirement of extracting tables and text separately.</p>
<p>Question: How do I extract the text and the tables between two <code>&lt;div&gt;</code>s separately?</p>
<p>The last screenshot shows the text representation of the HTML. The highlighted portion is the table.</p>
<p><img src="https://i.sstatic.net/8dBS10TK.png" alt="There are 737 divs in document1" />
<a href="https://i.sstatic.net/TXLvwGJj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TXLvwGJj.png" alt="__text has the text" /></a>
<a href="https://i.sstatic.net/EDWycMaZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EDWycMaZ.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/TMAlNb7J.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TMAlNb7J.png" alt="Text_Representation_of_html" /></a></p>
|
<python><xbrl>
|
2025-04-09 22:10:08
| 2
| 524
|
Chandra
|
79,565,269
| 5,166,365
|
OpenSCAD for Mac (M1 + Rosetta 2), brew, This application failed to start because it could not find or load the Qt platform plugin "cocoa" in ""
|
<p>So, I downloaded openscad via <code>brew install openscad</code>, but when I run the openscad CLI I get:</p>
<pre><code>This application failed to start because it could not find or load the Qt platform plugin "cocoa"
in "".
</code></pre>
<p>I saw some bug reports indicating that this may be a known bug, but I was wondering if there are workarounds.</p>
<p>I heard that maybe openscad depends on python?</p>
<p>Anyway this is the result of <code>which python3</code></p>
<p><code>/opt/homebrew/bin/python3</code></p>
<p>Is there any package I can install (in a virtual environment) or conda, that can fix the issue?</p>
|
<python><macos><qt><homebrew><openscad>
|
2025-04-09 20:21:39
| 1
| 1,010
|
Michael Sohnen
|
79,565,204
| 17,729,094
|
Transpose 2d Array elements in a column
|
<p>I have a dataframe like:</p>
<pre><code># /// script
# requires-python = ">=3.13"
# dependencies = [
#     "numpy",
#     "polars",
# ]
# ///
import numpy as np
import polars as pl

n_rows = 5
a = np.random.uniform(size=n_rows).astype(np.float32)
b = np.random.uniform(size=(n_rows, 800, 3)).astype(np.float32)

df = pl.DataFrame(
    {
        "a": a,
        "b": b
    },
    schema={
        "a": pl.Float32,
        "b": pl.Array(pl.Float32, (800, 3))
    }
)
print(df)
"""
shape: (5, 2)
┌──────────┬─────────────────────────────────┐
│ a        ┆ b                               │
│ ---      ┆ ---                             │
│ f32      ┆ array[f32, (800, 3)]            │
╞══════════╪═════════════════════════════════╡
│ 0.667222 ┆ [[0.958246, 0.358944, 0.221115… │
│ 0.023049 ┆ [[0.514581, 0.48279, 0.998772]… │
│ 0.294279 ┆ [[0.10559, 0.017365, 0.236783]… │
│ 0.024168 ┆ [[0.487084, 0.438834, 0.589524… │
│ 0.259044 ┆ [[0.355924, 0.947524, 0.63777]… │
└──────────┴─────────────────────────────────┘
"""
</code></pre>
<p>And I want to transpose the 2d arrays in column <code>b</code> to be <code>array[f32, (3, 800)]</code> instead.</p>
<p>Actually, my column is <code>list[array[f32, 3]]</code>, and I turn it into a 2d array with <code>.list.to_array(800)</code>, in case there is a cleaner way than going through this.</p>
<p>I can do it by hand with <code>map_elements</code> and numpy, but my real dataframe has > 4 million rows, and <code>map_elements</code> (which, as mentioned in <a href="https://docs.pola.rs/api/python/stable/reference/expressions/api/polars.Expr.map_elements.html#polars.Expr.map_elements" rel="nofollow noreferrer">the docs</a>, is super slow) is taking forever just to do that transpose.</p>
<p>Is there a faster way of doing this?</p>
|
<python><python-polars><polars>
|
2025-04-09 19:31:12
| 2
| 954
|
DJDuque
|
79,565,121
| 4,606,149
|
TensorFlow `next()` hangs in infinite loop after `take(1)`
|
<p>When using TensorFlow's <code>tf.data.Dataset.take(1)</code> to get the first element of a dataset and then attempting to retrieve this element using <code>iter()</code> and <code>next()</code>, the <code>next()</code> call enters an infinite loop if the <code>pandas</code> library is imported <em>before</em> <code>tensorflow</code>.</p>
<p>This behavior has been observed in both VS Code Jupyter Notebook and standard Jupyter Notebook environments, suggesting an interaction at the Python runtime level.</p>
<p>Code to reproduce:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import tensorflow as tf
# Assume your original dataset looks something like this
def generator():
for i in range(5):
yield (tf.random.uniform(shape=(2, 7, 11)), tf.random.uniform(shape=(2, 1, 1)))
original_dataset = tf.data.Dataset.from_generator(
generator,
output_signature=(
tf.TensorSpec(shape=(None, 7, 11), dtype=tf.float32),
tf.TensorSpec(shape=(None, 1, 1), dtype=tf.float32)
)
)
# take the first element
first_element_ds = original_dataset.take(1)
iterator = iter(first_element_ds)
# get the first element with next() will run into an infinite loop
first_element_tuple = next(iterator)
tensor1, tensor2 = first_element_tuple
print("Tensor 1:", tensor1.numpy().shape)
print("Tensor 2:", tensor2.numpy().shape)
</code></pre>
<p>Environment:</p>
<ul>
<li>Pandas Version: 2.2.2</li>
<li>TensorFlow Version: 2.17.0</li>
<li>Python Version: 3.11.9</li>
<li>NumPy Version: 1.26.4</li>
<li>Jupyter Core Version: 5.7.2</li>
<li>Operating System: macOS 13.7.4</li>
<li>Virtual Environment: conda 24.11.0, cPython 3.11.5</li>
</ul>
<p>Question:</p>
<p>Why does the execution of <code>next()</code> run into an infinite loop?</p>
|
<python><pandas><tensorflow>
|
2025-04-09 18:47:48
| 1
| 1,511
|
J.E.K
|
79,565,063
| 1,938,552
|
Does pyopencl transfer arrays to host memory implicitly?
|
<p>I have AMD GPU. I'm using pyopencl. I have a context and a queue. Then I created an array:</p>
<pre><code>import pyopencl
import pyopencl.array
ctx = pyopencl.create_some_context(interactive=False)
queue = pyopencl.CommandQueue(ctx)
array = pyopencl.array.empty(queue, [10], dtype=float)
print(array)
</code></pre>
<p>The print statement is showing some contents. Has pyopencl transferred data from GPU memory to print those values? If not, where do they come from?</p>
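<p>For comparison, this is what an explicit device-to-host transfer looks like with the same array object - a minimal sketch, just to contrast with whatever <code>print()</code> is doing:</p>
<pre><code>import numpy as np

host_copy = array.get()              # explicit copy into a new numpy array on the host
host_buf = np.empty(10, dtype=float)
pyopencl.enqueue_copy(queue, host_buf, array.data)  # lower-level explicit copy into an existing buffer
print(host_copy, host_buf)
</code></pre>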
|
<python><gpu><opencl><pyopencl>
|
2025-04-09 18:21:56
| 1
| 1,059
|
haael
|
79,564,589
| 16,383,578
|
How to find all grid points that correspond to non-reduced fractions in a square?
|
<p>Given a positive integer N, we can label all grid points in the square N x N, starting at 1, the total number of grid points is N x N, and the grid points are <code>list(itertools.product(range(1, N + 1), repeat=2))</code>.</p>
<p>Now, I want to find all tuples <code>(x, y)</code> that satisfy the condition that x/y is a non-reduced fraction. The following is a brute-force implementation that is guaranteed to be correct, but it is very inefficient:</p>
<pre><code>import math
from itertools import product


def find_complex_points(lim: int) -> list[tuple[int, int]]:
    return [
        (x, y)
        for x, y in product(range(1, lim + 1), repeat=2)
        if math.gcd(x, y) > 1
    ]
</code></pre>
<p>Now the next function is slightly smarter, but it generates duplicates, and as a result it is only slightly faster:</p>
<pre><code>def find_complex_points_1(lim: int) -> set[tuple[int, int]]:
    lim += 1
    return {
        (x, y)
        for mult in range(2, lim)
        for x, y in product(range(mult, lim, mult), repeat=2)
    }
</code></pre>
<pre><code>In [255]: %timeit find_complex_points(1024)
233 ms ± 4.44 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

In [256]: %timeit find_complex_points_1(1024)
194 ms ± 1.9 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
</code></pre>
<p>Is there a better way to accomplish this?</p>
<p>(My goal is simple: I want to create a NumPy 2D array of uint8 type with shape (N, N), fill it with 255, and set pixel (x, y) to 0 if (x+1)/(y+1) is a non-reduced fraction.)</p>
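<p>(For reference, a minimal vectorized sketch of that mask, assuming the <code>.outer</code> method of the <code>np.gcd</code> ufunc is acceptable - this is only meant to illustrate the goal, not presented as a final answer:)</p>
<pre><code>import numpy as np

def non_reduced_mask(n: int) -> np.ndarray:
    idx = np.arange(1, n + 1)
    return np.gcd.outer(idx, idx) > 1   # True where gcd(x, y) > 1

img = np.where(non_reduced_mask(1024), 0, 255).astype(np.uint8)
</code></pre>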
<hr />
<p>I have devised a method that is smarter than both my previous ones by a wide margin and also tremendously faster, but it still generates duplicates. I have opted not to use a <code>set</code> here so that you can copy-paste the code as is, run some tests, and see the exact output in the order it is generated:</p>
<pre><code>def find_complex_points_2(lim: int) -> list[tuple[int, int]]:
    stack = dict.fromkeys(range(lim, 1, -1))
    lim += 1
    points = []
    while stack:
        x, _ = stack.popitem()
        points.append((x, x))
        mults = []
        for y in range(x * 2, lim, x):
            stack.pop(y, None)
            mults.append(y)
            points.extend([(x, y), (y, x)])
        for i, x in enumerate(mults):
            points.append((x, x))
            for y in mults[i + 1:]:
                points.extend([(x, y), (y, x)])
    return points
</code></pre>
<pre><code>In [292]: sorted(set(find_complex_points_2(1024))) == find_complex_points(1024)
Out[292]: True

In [293]: %timeit find_complex_points_2(1024)
58.9 ms ± 580 μs per loop (mean ± std. dev. of 7 runs, 10 loops each)

In [294]: %timeit find_complex_points(1024)
226 ms ± 3.24 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
</code></pre>
<hr />
<p>To clarify, the output of <code>find_complex_points_2(10)</code> is:</p>
<pre><code>In [287]: find_complex_points_2(10)
Out[287]:
[(2, 2),
(2, 4),
(4, 2),
(2, 6),
(6, 2),
(2, 8),
(8, 2),
(2, 10),
(10, 2),
(4, 4),
(4, 6),
(6, 4),
(4, 8),
(8, 4),
(4, 10),
(10, 4),
(6, 6),
(6, 8),
(8, 6),
(6, 10),
(10, 6),
(8, 8),
(8, 10),
(10, 8),
(10, 10),
(3, 3),
(3, 6),
(6, 3),
(3, 9),
(9, 3),
(6, 6),
(6, 9),
(9, 6),
(9, 9),
(5, 5),
(5, 10),
(10, 5),
(10, 10),
(7, 7)]
</code></pre>
<p>As you can see, <code>(10, 10)</code> shows up twice. I want to avoid redundant computations.</p>
<p>This also happens in <code>find_complex_points_1</code>: if I don't use a set, many duplicates are included, because the method inevitably generates them repeatedly. Using a set still does the unnecessary computation; it just doesn't collect the duplicates.</p>
<p>And no, I actually want the coordinates to be replaced by the sum of all numbers before it, so N is replaced by (N<sup>2</sup> + N) / 2.</p>
<hr />
<p>I just implemented the image generation to better illustrate what I want:</p>
<pre><code>import numpy as np
import numba as nb


@nb.njit(cache=True)
def resize_img(img: np.ndarray, h_scale: int, w_scale: int) -> np.ndarray:
    height, width = img.shape
    result = np.empty((height, h_scale, width, w_scale), np.uint8)
    result[...] = img[:, None, :, None]
    return result.reshape((height * h_scale, width * w_scale))


def find_composite_points(lim: int) -> set[tuple[int, int]]:
    stack = dict.fromkeys(range(lim, 1, -1))
    lim += 1
    points = set()
    while stack:
        x, _ = stack.popitem()
        points.add((x, x))
        mults = []
        for y in range(x * 2, lim, x):
            stack.pop(y, None)
            mults.append(y)
            points.update([(x, y), (y, x)])
        for i, x in enumerate(mults):
            points.add((x, x))
            for y in mults[i + 1 :]:
                points.update([(x, y), (y, x)])
    return points


def natural_sum(n: int) -> int:
    return (n + 1) * n // 2


def composite_image(lim: int, scale: int) -> np.ndarray:
    length = natural_sum(lim)
    img = np.full((length, length), 255, dtype=np.uint8)
    for x, y in find_composite_points(lim):
        x1, y1 = natural_sum(x - 1), natural_sum(y - 1)
        img[x1 : x1 + x, y1 : y1 + y] = 0
    return resize_img(img, scale, scale)
</code></pre>
<p><code>composite_image(12, 12)</code></p>
<p><a href="https://i.sstatic.net/BlIKcrzu.png" rel="noreferrer"><img src="https://i.sstatic.net/BlIKcrzu.png" alt="enter image description here" /></a></p>
|
<python><algorithm><math><number-theory>
|
2025-04-09 14:19:47
| 4
| 3,930
|
Ξένη Γήινος
|
79,564,452
| 5,168,534
|
Format String based on List of Dictionaries
|
<p>I have a string which needs to be substituted with values from a list of dictionaries.
The size of the list can vary. Based on the size of the list, the string will be repeated, and the data from the list will be substituted accordingly.</p>
<p>e.g. string is like</p>
<pre><code>str_template = '''
Part 1: {}
Part 2:
{}'''
</code></pre>
<p>Let's say the input list is:</p>
<pre><code>l2 = [{'Val1': "c", 'Val2': "d"},{'Val1': "e", 'Val2': "f"}]
</code></pre>
<p><code>l2</code> is the input.</p>
<pre><code>
def repeating(template, n):
    return '\n\n'.join([template] * n)


def test_formatting(listval):
    size = len(listval)
    final_template = repeating(str_template, size)
    # This would print
    print(final_template)
</code></pre>
<pre><code>Part 1: {}
Part 2:
{}
Part 1: {}
Part 2:
{}
</code></pre>
<p>Hence, there are 4 places to be substituted. How do I go through the list of dictionaries dynamically and substitute the values?</p>
<p>I am sure there is a one-liner for this that I am missing.</p>
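<p>To clarify, this is the shape of the one-liner I am imagining - a sketch that assumes <code>Val1</code> fills the first placeholder and <code>Val2</code> the second:</p>
<pre><code>result = '\n\n'.join(str_template.format(d['Val1'], d['Val2']) for d in l2)
print(result)
</code></pre>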
|
<python>
|
2025-04-09 13:23:41
| 2
| 311
|
anshuk_pal
|
79,564,236
| 2,281,751
|
Can olmocr Run on Two 12 GB Titan X GPUs?
|
<p>I'm trying to run olmocr (<a href="https://github.com/allenai/olmocr" rel="nofollow noreferrer">https://github.com/allenai/olmocr</a>) locally, which requires a GPU with 20 GB RAM. I have two Titan X GPUs (12 GB each). When I run it, I get:</p>
<pre><code>ERROR:olmocr.check:Torch was not able to find a GPU with at least 20 GB of RAM.
</code></pre>
<p>Can I combine their memory or use multi-GPU support to make this work? The code is editable (installed with <code>pip install -e .[gpu]</code>).</p>
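<p>For context, this is roughly the kind of check behind that message - a minimal sketch of how torch reports memory per device (each GPU is reported separately, so two 12 GB cards do not appear as a single 24 GB device):</p>
<pre><code>import torch

for i in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(i)
    print(i, props.name, round(props.total_memory / 2**30, 1), "GiB")
</code></pre>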
|
<python><pytorch><gpu><ocr>
|
2025-04-09 11:38:51
| 0
| 756
|
Dandelion
|
79,564,149
| 2,894,535
|
Automatically encode/decode ctypes argument and return value
|
<p>I have a shared library exposing a function:</p>
<pre class="lang-c prettyprint-override"><code>const char* foo(const char* s);
</code></pre>
<p>To call it from Python using ctypes, I can do the following:</p>
<pre class="lang-py prettyprint-override"><code>import ctypes
foo = ctypes.cdll.LoadLibrary('foo.so')
foo.foo.argtypes = [ctypes.c_char_p]
foo.foo.restype = ctypes.c_char_p
print(foo.foo('Hello World!'.encode('utf8')).decode('utf8'))
</code></pre>
<p>Is there a way to embed the <code>encode/decode('utf8')</code> inside the type itself? I want to do the following:</p>
<pre class="lang-py prettyprint-override"><code>foo.foo.argtypes = [c_utf8_p]
foo.foo.restype = c_utf8_p
print(foo.foo('Hello World!'))
</code></pre>
<p>I was able to handle the <code>argtypes</code> with following:</p>
<pre class="lang-py prettyprint-override"><code>class c_utf8_p:
@classmethod
def from_param(cls, obj: str):
return obj.encode('utf8')
</code></pre>
<p>But I cannot figure out the return type. The <code>restype</code> docs say that assigning it anything that is not a <code>ctypes</code> type is deprecated, and that it truncates the value to <code>int</code>, which destroys 64-bit pointers. I would like to avoid using <code>errcheck</code>, since logically I am not checking any errors, and it would require setting two attributes for every function.</p>
|
<python><dll><ctypes>
|
2025-04-09 10:52:04
| 1
| 3,116
|
Dominik Kaszewski
|
79,563,745
| 2,148,420
|
Databricks merge issue with null values
|
<p>I'm merging two delta tables with databricks and one struct in the target delta table isn't merged as expected.</p>
<p>On the source delta table I have data as follow:</p>
<pre class="lang-json prettyprint-override"><code>{
id: '123',
artist: {
song: {
id: null,
name: null,
country: null
}
}
}
</code></pre>
<p>My goal is to extract the <code>song</code>, and set it directly to <code>null</code> if its <code>id</code> is null.
To do so I do a select on the source table before merging:</p>
<pre class="lang-py prettyprint-override"><code>self.spark.read.option("mode", "PERMISSIVE")
.json(pattern, schema=artistSchema)
.select(
col('id'),
when((col("artist.song").isNull() | col("artist.song.id").isNull()), lit(None)
.otherwise(col("artist.song")).alias("song")
)
</code></pre>
<p>When testing the select on its own, it gives me the expected data:</p>
<pre class="lang-json prettyprint-override"><code>{
id: '123',
song: null
}
</code></pre>
<p>The issue arise when doing the merge afterwards using this (supposedly valid) data. Instead of <code>{song: null}</code> it merges with <code>{song: {id: null, name: null, country: null}}</code></p>
<p>Here is how I do my merge:</p>
<pre class="lang-py prettyprint-override"><code>deltaTable = DeltaTable.forPath(self.spark, deltaPath)
deltaTable.alias("existing") \
.merge(
dataDf.alias("updates"), # dataDf is the "selected" input
"existing.id = updates.id")
.whenMatchedUpdateAll("(updates.song IS NOT DISTINCT FROM existing.song) OR (updates.song IS NOT NULL AND updates.song.id != existing.song.id OR updates.song.name != existing.song.name OR updates.song.country != existing.song.country)")
.whenNotMatchedInsertAll()
.execute()
</code></pre>
<p>Since I can test the select on its own and it seems to provide valid data, I can only see an issue with the merge itself.
What could cause this modification of the <code>song</code> struct?</p>
|
<python><pyspark><merge><databricks>
|
2025-04-09 07:48:25
| 1
| 1,758
|
Yabada
|
79,563,650
| 3,381,215
|
ImportError: Error importing numpy: you should not try to import numpy from its source directory
|
<p>I'm starting a new Python project, I made the following pyproject.toml file (using poetry):</p>
<pre class="lang-ini prettyprint-override"><code>[tool.poetry]
name = "snaqs"
version = "3.0.0"
description = "A minimal rewrite of Philips' snaqs for DCDC Tx"
authors = ["Freek van Hemert <freek@qodon.bio>"]
readme = "README.md"
[tool.poetry.dependencies]
python = "^3.12"
pandas = "^2.2.3"
numpy = "^2.2.4"
[tool.poetry.group.dev.dependencies]
jupyterlab = "^4.4.0"
[tool.pytest.ini_options]
minversion = "6.0"
addopts = "-ra -q"
testpaths = [
"tests",
]
[tool.poetry.group.test.dependencies]
pytest = "^8.3.5"
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
</code></pre>
<p>I then do <code>poetry install</code></p>
<p>Usually I would then start testing, but to keep it minimal I just open a Python shell with <code>python3</code>. Now, when I <code>import numpy as np</code>, I get this error:</p>
<pre class="lang-bash prettyprint-override"><code>$ python3
Python 3.12.8 (main, Dec 3 2024, 18:42:41) [GCC 13.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy as np
Traceback (most recent call last):
File "/home/freek/.cache/pypoetry/virtualenvs/snaqs-LIQmBdDd-py3.12/lib/python3.12/site-packages/numpy/_core/__init__.py", line 23, in <module>
from . import multiarray
File "/home/freek/.cache/pypoetry/virtualenvs/snaqs-LIQmBdDd-py3.12/lib/python3.12/site-packages/numpy/_core/multiarray.py", line 10, in <module>
from . import overrides
File "/home/freek/.cache/pypoetry/virtualenvs/snaqs-LIQmBdDd-py3.12/lib/python3.12/site-packages/numpy/_core/overrides.py", line 7, in <module>
from numpy._core._multiarray_umath import (
ImportError: libstdc++.so.6: cannot open shared object file: No such file or directory
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/freek/.cache/pypoetry/virtualenvs/snaqs-LIQmBdDd-py3.12/lib/python3.12/site-packages/numpy/__init__.py", line 114, in <module>
from numpy.__config__ import show_config
File "/home/freek/.cache/pypoetry/virtualenvs/snaqs-LIQmBdDd-py3.12/lib/python3.12/site-packages/numpy/__config__.py", line 4, in <module>
from numpy._core._multiarray_umath import (
File "/home/freek/.cache/pypoetry/virtualenvs/snaqs-LIQmBdDd-py3.12/lib/python3.12/site-packages/numpy/_core/__init__.py", line 49, in <module>
raise ImportError(msg)
ImportError:
IMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE!
Importing the numpy C-extensions failed. This error can happen for
many reasons, often due to issues with your setup or how NumPy was
installed.
We have compiled some common reasons and troubleshooting tips at:
https://numpy.org/devdocs/user/troubleshooting-importerror.html
Please note and check the following:
* The Python version is: Python3.12 from "/home/freek/.cache/pypoetry/virtualenvs/snaqs-LIQmBdDd-py3.12/bin/python3"
* The NumPy version is: "2.2.4"
and make sure that they are the versions you expect.
Please carefully study the documentation linked above for further help.
Original error was: libstdc++.so.6: cannot open shared object file: No such file or directory
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/freek/.cache/pypoetry/virtualenvs/snaqs-LIQmBdDd-py3.12/lib/python3.12/site-packages/numpy/__init__.py", line 119, in <module>
raise ImportError(msg) from e
ImportError: Error importing numpy: you should not try to import numpy from
its source directory; please exit the numpy source tree, and relaunch
your python interpreter from there.
</code></pre>
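<p>(Reading that chain, the first exception is the real failure - <code>libstdc++.so.6: cannot open shared object file</code> - and the "source directory" message is just the final ImportError numpy raises after that. A quick check from the same interpreter, as a sketch:)</p>
<pre class="lang-py prettyprint-override"><code>import ctypes.util

# None here would be consistent with the loader not finding libstdc++ at all
print(ctypes.util.find_library("stdc++"))
</code></pre>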
<p>It has been popping up for me on various occasions. It may be good to mention that I use NixOS and installed poetry using this <code>shell.nix</code> file:</p>
<pre class="lang-hs prettyprint-override"><code>{ pkgs ? import <nixpkgs> {} }:
pkgs.mkShell {
buildInputs = [
pkgs.python3
pkgs.poetry
];
}
</code></pre>
<p>There are a lot of reports out there with a lot of advice, e.g.:</p>
<ul>
<li><a href="https://www.pythonfixing.com/2023/11/fixed-explain-why-numpy-should-not-be.html" rel="nofollow noreferrer">https://www.pythonfixing.com/2023/11/fixed-explain-why-numpy-should-not-be.html</a></li>
<li><a href="https://stackoverflow.com/questions/14570011/explain-why-numpy-should-not-be-imported-from-source-directory">Explain why numpy should not be imported from source directory</a></li>
</ul>
<p>Some more info:</p>
<pre class="lang-bash prettyprint-override"><code>$ which python3
/home/freek/.cache/pypoetry/virtualenvs/snaqs-LIQmBdDd-py3.12/bin/python3
snaqs-py3.12
</code></pre>
<p>(I believe that snaqs-py3.12 is the name of my poetry virtual env; confusingly, it is added to the output of any command. I would love to get rid of it because it breaks, e.g., loops over the output of <code>ls</code>.)</p>
<p>I have tried all the tips, but nothing works for me. Any help would be appreciated.</p>
<p>Thanx.</p>
<p>Edit: When I run a script, it becomes clearer that it is actually pandas trying to import numpy that causes the error:</p>
<pre class="lang-bash prettyprint-override"><code>$ python pcs_interface.py -d
Traceback (most recent call last):
File "/home/freek/projects/snaqs3/src/snaqs/utils/pcs_interface.py", line 4, in <module>
import pandas as pd
File "/home/freek/.cache/pypoetry/virtualenvs/snaqs-LIQmBdDd-py3.12/lib/python3.12/site-packages/pandas/__init__.py", line 19, in <module>
raise ImportError(
ImportError: Unable to import required dependencies:
numpy: Error importing numpy: you should not try to import numpy from
its source directory; please exit the numpy source tree, and relaunch
your python interpreter from there.
</code></pre>
|
<python><numpy><python-poetry><nix>
|
2025-04-09 06:51:15
| 1
| 1,199
|
Freek
|
79,563,224
| 12,702,027
|
Python narrow class type variable in method
|
<p>I'm trying to emulate Rust's</p>
<pre class="lang-rs prettyprint-override"><code>impl<T> Trait for Struct<T> where T: Bound
</code></pre>
<p>(assuming <code>Struct</code> does not bound its type variable, as is <a href="https://stackoverflow.com/questions/49229332/should-trait-bounds-be-duplicated-in-struct-and-impl">generally recommended</a>)</p>
<p>My idea of doing this in Python is as follows:</p>
<pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass
@dataclass
class A[T]:
data: T
def m[C: int](self: "A[C]") -> int:
return self.data * 2
a1 = A(1)
a2 = A("hi")
a1.m()
a2.m()
</code></pre>
<p>Pylance correctly flags <code>a2.m()</code> as an error, so this apparently works. Is this the best I can do in Python? In particular I'm concerned with how this might interact with subtyping. For example:</p>
<pre class="lang-py prettyprint-override"><code>@dataclass
class B[T](A[T]):
def m[C: str](self: "A[C]") -> int:
return len(self.data)
b = B(1)
l: list[A[int]] = [b]
a = l[0] # Somehow gets casts to A
a.m() # Typechecks, but errors
</code></pre>
<p>Edit: rephrased my concern regarding subtyping and provided an example. I realized that the string annotation <code>"A[C]"</code> is not relevant.</p>
|
<python><python-typing>
|
2025-04-08 23:48:48
| 1
| 386
|
Jason
|
79,563,158
| 9,102,437
|
"User directory already in use" Selenium Python
|
<p>I am aware that there are other questions like this one on the site, but I have found that they are trying to do something different, and their solutions are not applicable to my case. I want to run a few instances of Chrome simultaneously, which works fine right now, but I would also like to save the data from each of the instances. To test this, I have run this minimal example:</p>
<pre class="lang-py prettyprint-override"><code>from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
import os
import time
profile_path = os.path.join(os.getcwd(), f"chrome_profiles/{time.strftime('%m.%d.%Y_%H.%M.%S')}")
if not os.path.isdir(profile_path):
os.makedirs(profile_path)
chrome_options = Options()
chrome_options.add_argument(f"--user-data-dir={profile_path}")
chrome_options.add_argument(f"--profile-directory=Default")
service = Service()
driver = webdriver.Chrome(service=service, options=chrome_options)
driver.get("https://www.google.com")
</code></pre>
<p>This gives out the following error:</p>
<pre><code>SessionNotCreatedException: Message: session not created: probably user data directory is already in use, please specify a unique value for --user-data-dir argument, or don't use --user-data-dir
</code></pre>
<p>This should be impossible, since I literally used the current time to create the user folder. But the folder gets created and then Chrome crashes. Mind you, I run only one instance.</p>
<p>What is the issue here?</p>
<p>Edit: added suggestions from @JeffC to the code</p>
|
<python><python-3.x><selenium-webdriver>
|
2025-04-08 22:31:22
| 1
| 772
|
user9102437
|
79,563,063
| 7,340,304
|
Different number of __init__ calls for almost the same __new__ implementation
|
<p>My goal is to create a factory class that will produce instances of slightly different classes based on the <code>real_base</code> parameter. For simplicity I pass the new base class directly; in reality the base class would be determined from the input parameter with some more complex logic.
The problem is that if I use the same class as the base, I get a different set of methods being invoked compared to using another class as the base.</p>
<pre class="lang-py prettyprint-override"><code>class SomeClass:
def __init__(self, *args, **kwargs):
print(f"{self.__class__.__name__}.__init__ got {args=}, {kwargs=}")
def __new__(cls, real_base=None, *args, **kwargs):
print(f"{cls.__name__}({__class__.__name__}).__new__ got {real_base=}, {args=}, {kwargs=}")
if not real_base: # If there is not real_base - act like regular class
return super().__new__(cls, *args, **kwargs)
# Otherwise - act like class factory
new_field_class = type(cls.__name__ + 'Custom', (real_base, ), {})
print(f"{cls.__name__} factory call")
res = new_field_class()
print(f"{cls.__name__} factory call returned {res} {res.__class__.mro()}")
return res
class OtherClass:
def __init__(self, *args, **kwargs):
print(f"{self.__class__.__name__} __init__ got {args=}, {kwargs=}")
def __new__(cls, *args, **kwargs):
print(f"{cls.__name__}({__class__.__name__}) __new__ got {args=}, {kwargs=}")
return super().__new__(cls, *args, **kwargs)
</code></pre>
<p>Here the main logic is in <code>SomeClass.__new__</code>, while the rest of the methods just use <code>print</code> to show whether they are getting called.</p>
<p>If I use these classes as follows:</p>
<pre class="lang-py prettyprint-override"><code>SomeClass(OtherClass)
print()
SomeClass(SomeClass)
</code></pre>
<p>I get the following output:</p>
<pre><code>SomeClass(SomeClass).__new__ got real_base=<class '__main__.OtherClass'>, args=(), kwargs={}
SomeClass factory call
SomeClassCustom(OtherClass) __new__ got args=(), kwargs={}
SomeClassCustom __init__ got args=(), kwargs={}
SomeClass factory call returned <__main__.SomeClassCustom object at 0x104fcfa10> [<class '__main__.SomeClassCustom'>, <class '__main__.OtherClass'>, <class 'object'>]
SomeClass(SomeClass).__new__ got real_base=<class '__main__.SomeClass'>, args=(), kwargs={}
SomeClass factory call
SomeClassCustom(SomeClass).__new__ got real_base=None, args=(), kwargs={}
SomeClassCustom.__init__ got args=(), kwargs={}
SomeClass factory call returned <__main__.SomeClassCustom object at 0x104fcfa40> [<class '__main__.SomeClassCustom'>, <class '__main__.SomeClass'>, <class 'object'>]
SomeClassCustom.__init__ got args=(<class '__main__.SomeClass'>,), kwargs={}
</code></pre>
<p>I understand why the 3rd line is different - it literally calls a different <code>__new__</code> implementation.<br />
What I don't understand is: <strong>why is there an extra <code>__init__</code> call in the second example</strong>?</p>
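<p>For reference, here is a minimal sketch of the rule I believe is relevant (per the data model docs, <code>__init__</code> is only invoked when <code>__new__</code> returns an instance of <code>cls</code>):</p>
<pre class="lang-py prettyprint-override"><code>class A:
    def __new__(cls):
        print("A.__new__")
        return super().__new__(cls)

    def __init__(self):
        print("A.__init__")


class B:
    def __new__(cls):
        print("B.__new__")
        return A()  # not an instance of B, so B.__init__ will NOT run

    def __init__(self):
        print("B.__init__")


B()  # prints: B.__new__, A.__new__, A.__init__ (from the inner A() call) - no B.__init__
</code></pre>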
|
<python><python-3.x><oop><init>
|
2025-04-08 21:13:26
| 0
| 591
|
Bohdan
|
79,563,051
| 3,817,456
|
What is the difference between numpy.atan and numpy.arctan?
|
<p>When looking for the 2-pi range version of <code>np.atan</code> (which turns out to be <code>np.atan2</code>), I found that there's an <code>np.arctan</code> as well - is there some difference between <code>np.arctan</code> and <code>np.atan</code>? The test below doesn't seem to show any difference between <code>atan</code> and <code>arctan</code>, or between <code>atan2</code> and <code>arctan2</code>:</p>
<pre><code>import numpy as np

for i in np.arange(0, 2*np.pi, .1):
    x = np.cos(i)
    y = np.sin(i)
    th = np.atan(y/x)
    th2 = np.atan2(y, x)
    th3 = np.arctan(y/x)
    th4 = np.arctan2(y, x)
    print(f'atan {th} atan2 {th2} arctan {th3} arctan2 {th4}')
</code></pre>
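<p>(A quick identity check, as a sketch - this assumes NumPy >= 2.0, where <code>np.atan</code> exists at all:)</p>
<pre><code>print(np.atan is np.arctan)    # do the two names point at the same ufunc object?
print(np.atan2 is np.arctan2)
</code></pre>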
|
<python><numpy>
|
2025-04-08 21:03:34
| 2
| 6,150
|
jeremy_rutman
|
79,562,820
| 9,715,816
|
Django with a single db raises "the current database router prevents this relation" when intializing model
|
<p>In django I have the following database models:</p>
<ul>
<li><code>ParentA</code></li>
<li><code>ParentB</code></li>
<li><code>Connector</code> which has a foreign key that references the model <code>ParentA</code> and a foreign key that references the model <code>ParentB</code></li>
</ul>
<p>The <code>Connector</code> model:</p>
<pre><code>from django.db import models


class Connector(models.Model):
    class Meta:
        ordering = ["parent_a"]
        unique_together = (("parent_a", "parent_b"),)

    parent_a = models.ForeignKey(
        ParentA,
        on_delete=models.CASCADE,
        related_name="connected_bs",
    )
    parent_b = models.ForeignKey(
        ParentB,
        on_delete=models.CASCADE,
        related_name="connected_as",
    )
</code></pre>
<p>In my code, each time a <code>ParentA</code> object gets created, I find which existing <code>ParentB</code> objects can be connected with it and I create the <code>Connector</code> objects with the code that follows. The function that creates the <code>Connector</code> objects is a method of <code>ParentA</code>, so <code>self</code> here is a <code>ParentA</code> object.</p>
<pre><code>from django.db import models


class ParentA(models.Model):
    ...

    def create_connectors(self):
        matching_p_b_objects = self.__get_matching_parent_b_objects()
        new_matching_p_b_objects = matching_p_b_objects.filter(~Q(connected_as__parent_a=self))
        Connector.objects.bulk_create(
            [
                Connector(parent_a=self, parent_b=parent_b)
                for parent_b in new_matching_p_b_objects
            ]
        )
</code></pre>
<p>The function is called all the time and it mostly works, but sometimes it randomly throws the following error:</p>
<pre><code> File "/code/backend/app/models/parent_a.py", line 989, in create_connectors
[
File "/code/backend/app/models/parent_a.py", line 990, in <listcomp>
Connector(parent_a=self, parent_b=parent_b)
File "/usr/local/lib/python3.10/site-packages/django/db/models/base.py", line 541, in __init__
_setattr(self, field.name, rel_obj)
File "/usr/local/lib/python3.10/site-packages/django/db/models/fields/related_descriptors.py", line 254, in __set__
raise ValueError(
ValueError: Cannot assign "<ParentA: 340459>": the current database router prevents this relation.
</code></pre>
<p>I am using a single <code>postgres 17</code> database with <code>python3.10</code> and <code>django==4.0.3</code></p>
<p>Also, although it might be unrelated, in a different place in the code I get <code>ValueError: save() prohibited to prevent data loss due to unsaved related object 'parent_a'.</code> although I am explicitly calling <code>parent_a.save()</code> in a previous step.</p>
|
<python><django>
|
2025-04-08 18:40:25
| 1
| 2,019
|
Charalamm
|
79,562,760
| 2,266,881
|
Read logs from cloud run function in python
|
<p>As the title says: is it possible to read the logs from a Cloud Run function using Python, and if so, how?</p>
<p>My guess is that, somehow, it can be done with the logging module from <code>google.cloud</code>, but it seems to be intended for writing to the logging console only.</p>
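<p>A minimal sketch of what I imagine, using the <code>google-cloud-logging</code> client (the project and service name are placeholders):</p>
<pre><code>from google.cloud import logging as gcl

client = gcl.Client(project="my-project")  # placeholder project id
log_filter = (
    'resource.type="cloud_run_revision" '
    'AND resource.labels.service_name="my-service"'  # placeholder service name
)
for entry in client.list_entries(filter_=log_filter, order_by=gcl.DESCENDING):
    print(entry.timestamp, entry.payload)
</code></pre>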
|
<python><google-cloud-platform>
|
2025-04-08 18:05:36
| 1
| 1,594
|
Ghost
|
79,562,726
| 14,833,503
|
IRFs Appear Reversed in RBC Model Simulation - Problem with Recursive Shock Application?
|
<p>I'm trying to simulate a fairly standard RBC model. The equations used in the code are all correct based on the chosen specification, so my question relates more to the implementation side.</p>
<p>Specifically, I want to use the technology shock generated in z_hat:</p>
<pre><code># Generate the autoregressive process for the technology shock:
for t in range(1, T):
    # Each period the shock decays by the factor rho_z, mimicking an AR(1) process.
    z_hat[t] = rho_z * z_hat[t - 1]
</code></pre>
<p>to influence the other model equations recursively. The responses of the model variables are captured in the for loop that follows.</p>
<p>The issue is this: when I run the code as it is, the impulse response functions (IRFs) appear to be flipped. I only get the expected graph if I reverse the order of the calculated IRFs for each variable.</p>
<p>Has anyone encountered a similar issue or can spot what might be going wrong in how the IRFs are being generated or stored? I attached the code which created the IRFs and the full code.</p>
<p>Thanks in advance!</p>
<p><strong>Important part of the code:</strong></p>
<pre><code># Generate the autoregressive process for the technology shock:
for t in range(1, T):
    # Each period the shock decays by the factor rho_z, mimicking an AR(1) process.
    z_hat[t] = rho_z * z_hat[t - 1]

# Initialize arrays to store the log-deviations from steady state for various variables.
# The deviations represent percentage deviations from the benchmark steady state.
c_hat = np.zeros(T+1)  # Consumption log-deviation, extra period for terminal condition.
i_hat = np.zeros(T)    # Investment log-deviation.
k_hat = np.zeros(T+1)  # Capital log-deviation, extra period due to accumulation.
y_hat = np.zeros(T)    # Output log-deviation.
r_hat = np.zeros(T)    # Interest rate log-deviation.
w_hat = np.zeros(T)    # Wage log-deviation.
l_hat = np.zeros(T)    # Labor log-deviation.

# Terminal condition is assumed for consumption: c_T = 0 (i.e., no deviation at the final period).
# Set initial conditions: assume no deviation in capital and consumption at time t = 0.
k_hat[0] = 0
c_hat[0] = 0

# Loop over each time period to simulate the dynamic responses (Impulse Response Functions - IRFs)
for t in range(T):
    # 1. Factor Prices from Marginal Products:
    #    The wage (w_hat) is determined as the sum of the technology shock and contributions from
    #    capital and labor deviations scaled by the capital share.
    w_hat[t] = z_hat[t] + alpha * k_hat[t] - alpha * l_hat[t]

    #    The rental rate (r_hat) is calculated using the marginal product conditions;
    #    it is affected by the deviation of capital and labor.
    r_hat[t] = (alpha - 1) * k_hat[t] + z_hat[t] + (1 - alpha) * l_hat[t]

    # 2. Labor Supply from the Intratemporal Condition:
    #    The log-deviation of labor (l_hat) is updated based on the difference between wages and consumption,
    #    scaled by the ratio ((1-L_ss)/L_ss), which captures the elasticity in this specification.
    l_hat[t] = (w_hat[t] - c_hat[t]) * ((1 - L_ss) / L_ss)

    # 3. Output Computation:
    #    Output is determined by combining the contributions of capital and labor deviations weighted by alpha and
    #    (1-alpha), respectively, then adding the technology shock.
    #    This update is performed after updating the labor deviation.
    y_hat[t] = alpha * k_hat[t] + (1 - alpha) * l_hat[t] + z_hat[t]

    # 4. Investment from the Resource Constraint:
    #    Investment log-deviation is calculated using the resource constraint where output minus the scaled consumption
    #    is divided by the share of investment in output.
    i_hat[t] = (y_hat[t] - phi_c * c_hat[t]) / phi_i

    # 5. Capital Accumulation:
    #    The next period's capital log-deviation is determined by the current capital after depreciation and the
    #    new investment weighted by delta, representing the adjustment process.
    k_hat[t + 1] = (1 - delta) * k_hat[t] + delta * i_hat[t]

    # 6. Euler Equation (Consumption Dynamics):
    #    According to the Euler equation in a log-linearized setting, the change in the consumption deviation
    #    equals the product of the steady state return factor (beta*q_ss) and the rental rate deviation.
    c_hat[t + 1] = c_hat[t] + (beta * q_ss) * r_hat[t]
</code></pre>
<p><strong>Full code:</strong></p>
<pre><code>###############################################################################
# Loading Required Packages
###############################################################################
# Import the numpy package for efficient numerical computing and array manipulation.
import numpy as np
# Import sympy for symbolic mathematics (e.g., symbolic algebra, calculus).
import sympy as sp
# Import matplotlib's pyplot for plotting and creating visualizations.
import matplotlib.pyplot as plt
# Set the seed for NumPy's random number generator to ensure that any random processes produce
# the same output every time the code is run (this aids in reproducibility).
np.random.seed(42) # optional, for reproducibility
# Import the math module for accessing standard mathematical functions.
import math
###############################################################################
# End of package loading
###############################################################################
###############################################################################
###############################################################################
# 1% Shock in Technology: Set up the DSGE model environment and steady state.
###############################################################################
###############################################################################
# Starting parameter values for our DSGE model:
# beta: Discount factor, reflecting how future payoffs are weighted relative to present ones.
beta = 0.96
# delta: Depreciation rate of capital per period.
delta = 0.1
# p: Capital's share of output (alternatively represented as alpha in many models).
p = 0.36
# Lss: Steady-state (or initial) labor input; here, representing a small fraction for the population.
Lss = 0.01 # Population starting value
# z: Technology level; initially set to a small value (here equivalent to a 1% initial shock).
z = 0.01
# Define functions to compute steady state values using the model's equations.
# These functions calculate key economic quantities as functions of the parameters and z.
def K_fun(beta, delta, p, Lss, z):
    """
    Computes the steady state level of Capital (K) using the model's formula.
    The expression is derived from equating marginal products and other steady state conditions.
    """
    global K_expr
    # The expression involves the technology term z, labor Lss and other parameters.
    K_expr = (p * Lss**(1 - p) * z / (1/beta - 1 + delta))**(1/(1 - p))
    return K_expr


def q_fun(beta, delta, p, Lss, z):
    """
    Computes the steady state marginal product of capital (q) which is related
    to the return on capital.
    """
    global q_expr
    # The function inverts part of the K_fun expression and rescales it by z, p and Lss.
    q_expr = z * p * (p * Lss**(1 - p) * z / (1/beta - 1 + delta))**(-1) * Lss**(1 - p)
    return q_expr


def w_fun(beta, delta, p, Lss, z):
    """
    Computes the steady state wage (w) as given by the marginal product of labor.
    """
    global w_expr
    # Note the use of the exponent p/(1-p) in the wage formula, capturing the relative power
    # of capital's effect adjusted by the labor share.
    w_expr = z * (1 - p) * Lss**(-p) * (p * Lss**(1 - p) * z / (1/beta - 1 + delta))**(p/(1 - p))
    return w_expr


def Y_fun(beta, delta, p, Lss, z):
    """
    Computes the steady state level of Output (Y) using the production function.
    This function assumes a Cobb-Douglas production function depending on K and L.
    """
    global Y_expr
    # K_expr is used here, and it is assumed to be computed before using K_fun.
    Y_expr = z * (K_expr)**p * Lss**(1 - p)
    return Y_expr


def I_fun(beta, delta, p, Lss, z):
    """
    Computes the level of Investment (I) in the steady state.
    Investment is proportional to the capital stock through the depreciation rate.
    """
    global I_expr
    I_expr = delta * K_expr
    return I_expr


def C_fun(beta, delta, p, Lss, z):
    """
    Computes steady state Consumption (C) as the residual of output after investment.
    """
    global C_expr
    C_expr = Y_expr - I_expr
    return C_expr


def L_fun(beta, delta, p, Lss, z):
    """
    Provides the value of steady state labor. Here it is set to L_ss (previously defined),
    implying fixed labor input in the model.
    """
    global L_expr
    L_expr = Lss
    return L_expr
# ---------------------------
# 2. Define the steady state equations (as expressions) in terms of z
# ---------------------------
# Compute the steady state values using the defined functions.
# These are the benchmark values around which log-linearized deviations are computed.
K_val = K_fun(beta, delta, p, Lss, z)
q_val = q_fun(beta, delta, p, Lss, z)
w_val = w_fun(beta, delta, p, Lss, z)
Y_val = Y_fun(beta, delta, p, Lss, z)
I_val = I_fun(beta, delta, p, Lss, z)
C_val = C_fun(beta, delta, p, Lss, z)
L_val = L_fun(beta, delta, p, Lss, z)
# The above assignments are immediately repeated to store the steady state as "_ss" variables,
# which might be used later to make the notation clearer that these are steady-state benchmarks.
K_ss = K_fun(beta, delta, p, Lss, z)
q_ss = q_fun(beta, delta, p, Lss, z)
w_ss = w_fun(beta, delta, p, Lss, z)
Y_ss = Y_fun(beta, delta, p, Lss, z)
I_ss = I_fun(beta, delta, p, Lss, z)
C_ss = C_fun(beta, delta, p, Lss, z)
L_ss = L_fun(beta, delta, p, Lss, z)
# Print the steady state values to verify the baseline of the model. These are calculated at z=1.
print("Steady State Values (at z=1):")
print("K_ss =", K_val)
print("q_ss =", q_val)
print("w_ss =", w_val)
print("Y_ss =", Y_val)
print("I_ss =", I_val)
print("C_ss =", C_val)
print("L_ss =", L_val)
# Define additional parameters for the simulation:
alpha = p # alpha is set equal to capital share, which in some literature uses alpha.
T = 20 # T defines the total number of time periods for the IRF simulation.
rho_z = 0.95 # Autoregressive parameter for the technology shock process.
# From the steady state, compute additional model parameters:
# r_ss is the steady state real interest rate derived from the Euler equation.
r_ss = 1 / beta - 1 + delta
# phi_c and phi_i are ratios of consumption and investment to output in the steady state,
# and are used to scale the IRF responses.
phi_c = C_ss / Y_ss
phi_i = I_ss / Y_ss
# Initialize the technology shock process:
# z_hat is an array representing the shock process over T periods.
z_hat = np.zeros(T)
z_hat[0] = 1 # At time 0, we introduce a 1% technology shock.
# Generate the autoregressive process for the technology shock:
for t in range(1, T):
# Each period the shock decays by the factor rho_z, mimicking an AR(1) process.
z_hat[t] = rho_z * z_hat[t - 1]
# Initialize arrays to store the log-deviations from steady state for various variables.
# The deviations represent percentage deviations from the benchmark steady state.
c_hat = np.zeros(T+1) # Consumption log-deviation, extra period for terminal condition.
i_hat = np.zeros(T) # Investment log-deviation.
k_hat = np.zeros(T+1) # Capital log-deviation, extra period due to accumulation.
y_hat = np.zeros(T) # Output log-deviation.
r_hat = np.zeros(T) # Interest rate log-deviation.
w_hat = np.zeros(T) # Wage log-deviation.
l_hat = np.zeros(T) # Labor log-deviation.
# Terminal condition is assumed for consumption: c_T = 0 (i.e., no deviation at the final period).
# Set initial conditions: assume no deviation in capital and consumption at time t = 0.
k_hat[0] = 0
c_hat[0] = 0
# Loop over each time period to simulate the dynamic responses (Impulse Response Functions - IRFs)
for t in range(T):
# 1. Factor Prices from Marginal Products:
# The wage (w_hat) is determined as the sum of the technology shock and contributions from
# capital and labor deviations scaled by the capital share.
w_hat[t] = z_hat[t] + alpha * k_hat[t] - alpha * l_hat[t]
# The rental rate (r_hat) is calculated using the marginal product conditions;
# it is affected by the deviation of capital and labor.
r_hat[t] = (alpha - 1) * k_hat[t] + z_hat[t] + (1 - alpha) * l_hat[t]
# 2. Labor Supply from the Intratemporal Condition:
# The log-deviation of labor (l_hat) is updated based on the difference between wages and consumption,
# scaled by the ratio ((1-L_ss)/L_ss), which captures the elasticity in this specification.
l_hat[t] = (w_hat[t] - c_hat[t]) * ((1 - L_ss) / L_ss)
# 3. Output Computation:
# Output is determined by combining the contributions of capital and labor deviations weighted by alpha and
# (1-alpha), respectively, then adding the technology shock.
# This update is performed after updating the labor deviation.
y_hat[t] = alpha * k_hat[t] + (1 - alpha) * l_hat[t] + z_hat[t]
# 4. Investment from the Resource Constraint:
# Investment log-deviation is calculated using the resource constraint where output minus the scaled consumption
# is divided by the share of investment in output.
i_hat[t] = (y_hat[t] - phi_c * c_hat[t]) / phi_i
# 5. Capital Accumulation:
# The next period's capital log-deviation is determined by the current capital after depreciation and the
# new investment weighted by delta, representing the adjustment process.
k_hat[t + 1] = (1 - delta) * k_hat[t] + delta * i_hat[t]
# 6. Euler Equation (Consumption Dynamics):
# According to the Euler equation in a log-linearized setting, the change in the consumption deviation
# equals the product of the steady state return factor (beta*q_ss) and the rental rate deviation.
c_hat[t + 1] = c_hat[t] + (beta * q_ss) * r_hat[t]
# The consumption and capital paths are reversed to align the time axis for plotting,
# recognizing that c(t+1) and k(t+1) are plotted rather than c(t) and k(t).
c_hat_aligned = np.insert(c_hat[::-1][:-1], 0, 0.0) # Insert a zero at the beginning after reversing.
k_hat_aligned = np.insert(k_hat[::-1][:-1], 0, 0.0) # Similarly for capital.
# Prepare the variables for plotting by converting log-deviations into percentage deviations.
# The dictionary "variables" maps variable names to their corresponding reversed IRF paths.
variables = {
'Output (Y)': y_hat[::-1],
'Capital (K)': k_hat_aligned,
'Investment (I)': i_hat[::-1],
'Consumption (C)': c_hat_aligned,
'Wage (w)': w_hat[::-1],
'Interest Rate (r)': r_hat[::-1],
'Labor (L)': l_hat[::-1],
}
# Set up the plot size for the IRFs.
plt.figure(figsize=(12, 6))
# Plot each variable's deviation over time.
for name, var in variables.items():
# Create a time array that matches the length of the variable series.
time_var = np.arange(len(var))
plt.plot(time_var, var, label=name)
# Also plot the technology shock (z_hat) for reference.
plt.plot(np.arange(len(z_hat)), z_hat, label='Technology (z)', linewidth=1, color='black')
# Add a horizontal reference line at zero to clearly see deviations from the steady state.
plt.axhline(0, color='gray', linestyle='--')
# Add plot title and axis labels.
plt.title("IRFs from Log-Linearized DSGE Model (Shock at t = 0)")
plt.xlabel("Time")
plt.ylabel("% Deviation from Steady State")
# Add a legend to distinguish between different variables.
plt.legend()
# Display grid lines for improved readability.
plt.grid(True)
# Adjust the layout to ensure nothing is clipped.
plt.tight_layout()
# Render the final plot.
plt.show()
</code></pre>
|
<python><optimization><dynamic><economics>
|
2025-04-08 17:40:12
| 0
| 405
|
Joe94
|
79,562,711
| 10,634,126
|
Issues with personal Microsoft Sharepoint upload from Python
|
<p>I am trying to use a Python script to upload a CSV file to a SharePoint personal directory, with a URL like:</p>
<pre><code>https://{tenant}-my.sharepoint.com/personal/{user}_{tenant}_com/Documents/{path}
</code></pre>
<p>I have configured SharePoint API access with a <code>client_id</code> and <code>client_secret</code> in the main tenant domain (using <code>_layouts/15/appregnew.aspx</code> and <code>_layouts/15/appinv.aspx</code>), and granted these permissions as admin.</p>
<p>I am unclear whether I can use these to upload a file to a personal directory like above.</p>
<pre><code>from office365.runtime.auth.client_credential import ClientCredential
from office365.sharepoint.client_context import ClientContext
import os
site_url = "" # IS THIS THE TENANT-MY.SHAREPOINT.COM URL LIKE ABOVE?
client_id = ""
client_secret = ""
file_path = "path/to/your/local/file.csv"
folder_url = "Shared Documents" # IS THIS THE TENANT-MY.SHAREPOINT.COM PATH LIKE ABOVE?
try:
    credentials = ClientCredential(client_id, client_secret)
    ctx = ClientContext(site_url).with_credentials(credentials)
    with open(file_path, "rb") as f:
        file_content = f.read()
    target_folder = ctx.web.get_folder_by_server_relative_url(folder_url)
    file_name = os.path.basename(file_path)
    uploaded_file = target_folder.upload_file(file_name, file_content).execute_query()
except Exception as e:
    # added so the snippet is complete; surfaces auth/permission errors
    print(f"Upload failed: {e}")
</code></pre>
<p><strong>UPDATE</strong></p>
<p>I have successfully connected as follows:</p>
<pre><code>client_id = ""
client_secret = ""
url = "https://{tenant}-my.sharepoint.com/personal/{user}_{tenant}_com/"
ctx_auth = AuthenticationContext(url)
if ctx_auth.acquire_token_for_app(client_id, client_secret):
    ctx = ClientContext(url, ctx_auth)
    web = ctx.web
    ctx.load(web)
    ctx.execute_query()
    print("Web title: {0}".format(web.properties["Title"]))
</code></pre>
<p>This outputs:</p>
<pre><code>>> Web title: {#correct user name#}
</code></pre>
<p>Now, I'm trying to interact with OneDrive via this method, but I am unable to read any files or write...</p>
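<p>A minimal sketch of the upload step, reusing the same calls as the first snippet together with the app-only context from the update above (the server-relative folder path is an assumption; OneDrive personal sites normally expose the library as <code>Documents</code> under the personal site path):</p>
<pre class="lang-py prettyprint-override"><code>import os

folder_url = "/personal/{user}_{tenant}_com/Documents"  # assumed server-relative path of the OneDrive library

with open(file_path, "rb") as f:
    file_content = f.read()

target_folder = ctx.web.get_folder_by_server_relative_url(folder_url)
target_folder.upload_file(os.path.basename(file_path), file_content).execute_query()
</code></pre>
<p>If this fails with an access-denied error, the app registration most likely has no permission on the personal (OneDrive) site collection; that grant is separate from the one on the main tenant site.</p>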
|
<python><sharepoint><office365>
|
2025-04-08 17:27:55
| 1
| 909
|
OJT
|
79,562,663
| 16,563,251
|
Use importlib.resources.files with no argument
|
<p>I want to use <code>importlib.resources.files</code> to access a file from a module.
According to the <a href="https://docs.python.org/3/library/importlib.resources.html#importlib.resources.files" rel="nofollow noreferrer">docs</a>,</p>
<blockquote>
<p>If the anchor is omitted, the callerโs module is used.</p>
</blockquote>
<p>So I would assume something like</p>
<pre class="lang-py prettyprint-override"><code>import importlib.resources
importlib.resources.files() # (+ some further calls afterwards to access the actual file)
</code></pre>
<p>should refer to the callers module. Instead, I get an error:</p>
<pre><code>Traceback (most recent call last):
[...]
importlib.resources.files()
~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "/usr/lib/python3.13/importlib/resources/_common.py", line 45, in wrapper
return func()
File "/usr/lib/python3.13/importlib/resources/_common.py", line 56, in files
return from_package(resolve(anchor))
File "/usr/lib/python3.13/importlib/resources/_common.py", line 116, in from_package
reader = spec.loader.get_resource_reader(spec.name)
^^^^^^^^^
File "/usr/lib/python3.13/importlib/resources/_adapters.py", line 17, in __getattr__
return getattr(self.spec, name)
AttributeError: 'NoneType' object has no attribute 'name'
</code></pre>
<p>This only happens when calling it from a file.
Creating an <code>__init__.py</code> in the same directory does not help.</p>
<p>Using the same code from the python shell gives me</p>
<pre class="lang-py prettyprint-override"><code>PosixPath('/usr/lib/python3.13/_pyrepl')
</code></pre>
<p>which sounds about right (but is not very helpful for actual use).</p>
<p>What is happening here?
How is this intended to be used?</p>
<p>EDIT:
There is also a <a href="https://stackoverflow.com/questions/58883423/how-to-reference-the-current-package-for-use-with-importlib-resources">similar question</a> that wants to achieve the same thing. The accepted solution using <code>__package__</code> does not work for me, as <code>__package__</code> seems to be <code>None</code> too.</p>
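<p>A workaround sketch (an assumption about the setup, not an explanation of the behaviour): when a file is run as a plain script, <code>__spec__</code>/<code>__package__</code> are not set, so <code>files()</code> has no anchor to resolve; falling back to the script's directory keeps the same access pattern working both installed and standalone. The file name <code>data.txt</code> is hypothetical.</p>
<pre class="lang-py prettyprint-override"><code>import importlib.resources
import pathlib

def package_root():
    """Anchor for data files: the package when available, else the script's directory."""
    if __package__:  # running as part of an importable package
        return importlib.resources.files(__package__)
    return pathlib.Path(__file__).parent  # plain-script fallback

data = (package_root() / "data.txt").read_text()
</code></pre>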
|
<python><python-module><python-importlib>
|
2025-04-08 17:00:27
| 0
| 573
|
502E532E
|
79,562,461
| 6,335,342
|
V1Meta Authenticate Using SSO
|
<p>In the Python SDK README, the example for instantiating a V1Meta instance looks like:</p>
<pre><code>with V1Meta(
instance_url = 'https://...',
username = 'admin',
password = 'admin'
) as v1:
</code></pre>
<p>What would that look like if rather than authenticating with username and password, I need to use the company's SSO?</p>
|
<python><authentication><versionone>
|
2025-04-08 15:22:34
| 0
| 1,605
|
steverb
|
79,562,411
| 8,543,025
|
MNE/Matplotlib Visualization Causes EasyGUI-QT to Crash
|
<p>I'm facing an issue combining <code>mne</code>, <code>matplotlib</code> and <code>easygui-qt</code> and I'm not sure how to find out where the issue is exactly.</p>
<p>I'm trying to show the user an <code>mne</code>-generated figure (preferably interactive) and an <code>easygui-qt</code>-generated modal (see example below). I want the user to be able to play with the figure as much as they want, and eventually provide input through the pop-up modal. The problem is that when generating the figure, <code>matplotlib</code> turns on <em>interactive mode</em>, which for some reason causes <code>easygui-qt</code> to <strong>quit</strong> (edited: the script quits, not crashes).</p>
<p>For example, in the following code, no matter how I tried to turn off the interactive mode or prevent the figure from being rendered (see the <code>#</code> comments), the <code>input2</code> line causes the script to quit as soon as the user provides an input (i.e., after clicking one of the buttons), and never reaches the <code>print</code> line.</p>
<pre><code>import matplotlib
import mne
from mne.datasets import sample
from mne.io import read_raw_fif
import easygui_qt.easygui_qt as gui
import matplotlib.pyplot as plt
matplotlib.use('Qt5Agg')
mne.viz.set_browser_backend('qt') # or "matplotlib"
fname = sample.data_path() / "MEG" / "sample" / "sample_audvis_raw.fif"
raw = read_raw_fif(fname, preload=False, verbose=False)
raw.crop(0, 60).pick("eeg")
raw.load_data(verbose=False)
input1 = gui.get_continue_or_cancel(title="Input 1", message="", continue_button_text="T", cancel_button_text="F")
fig = raw.plot(verbose=False, show=True) # can set show=False with same downstream effect
# plt.ioff() # this has no effect
# matplotlib.interactive(False) # this has no effect
input2 = gui.get_continue_or_cancel(title="Input 2", message="", continue_button_text="T", cancel_button_text="F")
print(f"In 1: {input1}\t\tIn 2: {input2}")
</code></pre>
<p>Thanks for the help :)</p>
<p>Package versions:</p>
<pre><code>matplotlib:3.8.4
PyQt5: 5.15.11
easygui-qt: 0.9.3
mne: 1.9.0
mne-qt-browser: 0.6.3
</code></pre>
<p>And other system info:</p>
<pre><code>>>> mne.sys_info()
Platform Windows-11-10.0.26100-SP0
Python 3.12.3 (tags/v3.12.3:f6650f9, Apr 9 2024, 14:05:25) [MSC v.1938 64 bit (AMD64)]
Executable C:\Path\To\venv\Scripts\python.exe
CPU 13th Gen Intel(R) Core(TM) i7-13700 (24 cores)
Memory 63.7 GiB
Core
+ mne 1.9.0 (latest release)
+ numpy 2.1.1 (OpenBLAS 0.3.27 with 24 threads)
+ scipy 1.15.1
+ matplotlib 3.8.4Backend QtAgg is interactive backend. Turning interactive mode on.
(backend=QtAgg)
Numerical (optional)
+ sklearn 1.6.1
+ pandas 2.2.3
+ h5py 3.12.1
- unavailable numba, nibabel, nilearn, dipy, openmeeg, cupy, h5io
Visualization (optional)
+ qtpy 2.4.3 (PyQt6=6.8.2)
+ pyqtgraph 0.13.7
+ mne-qt-browser 0.6.3
- unavailable pyvista, pyvistaqt, vtk, ipympl, ipywidgets, trame_client, trame_server, trame_vtk, trame_vuetify
Ecosystem (optional)
+ mne-icalabel 0.7.0
- unavailable mne-bids, mne-nirs, mne-features, mne-connectivity, mne-bids-pipeline, neo, eeglabio, edfio, mffpy, pybv
</code></pre>
|
<python><qt><matplotlib>
|
2025-04-08 14:59:11
| 0
| 593
|
Jon Nir
|
79,562,296
| 865,169
|
Why can Pandas weekday DateOffset only move the date forward?
|
<p>I am trying to find the last of a weekday in a month. For example, let us say the last Sunday in October.</p>
<p>I try to do like this:</p>
<pre><code>pd.Timestamp("2025-10-31") - pd.DateOffset(weekday=6)
</code></pre>
<p>The resulting date is <code>Timestamp('2025-11-02 00:00:00')</code>, i.e. the result is a later date despite the minus. The result is also identical if I add instead of subtract.</p>
<p>This is in contrast to:</p>
<pre><code>>>> pd.Timestamp("2025-10-31") - pd.DateOffset(days=2)
Timestamp('2025-10-29 00:00:00')
>>> pd.Timestamp("2025-10-31") - pd.DateOffset(day=2)
Timestamp('2025-10-02 00:00:00')
</code></pre>
<p>which result in an earlier date as I expect, so the behaviour is just different for 'weekday' than for other offsets.</p>
<p>If I attempt to use the <code>DateOffset.rollback</code> method instead, it simply does not change the date:</p>
<pre><code>>>> pd.DateOffset(weekday=6).rollback(pd.Timestamp("2025-10-31"))
Timestamp('2025-10-31 00:00:00')
</code></pre>
<p>I cannot find anything in the DateOffset or DateOffset.rollback documentation that describes this.</p>
<p>What can I do to make this work?</p>
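<p>As a sketch of a possible workaround (not an explanation of the <code>DateOffset(weekday=...)</code> behaviour): the anchored <code>Week</code> offset does move backwards when subtracted, and pandas also ships a <code>LastWeekOfMonth</code> offset whose <code>rollback</code> lands on the last given weekday of the month. The commented results are what I would expect; please verify against your pandas version.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd

ts = pd.Timestamp("2025-10-31")

# Previous Sunday relative to ts (note: if ts were itself a Sunday, this goes back a full week)
print(ts - pd.offsets.Week(weekday=6))                     # 2025-10-26

# Last Sunday of ts's month via the dedicated anchored offset
print(pd.offsets.LastWeekOfMonth(weekday=6).rollback(ts))  # 2025-10-26
</code></pre>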
|
<python><pandas><datetime>
|
2025-04-08 14:13:57
| 2
| 1,372
|
Thomas Arildsen
|
79,562,207
| 8,563,165
|
Flask-smorest get rid of "Default error response" from OpenAPI page
|
<p>The method doesn't return error responses, and I'd like to remove the default error response from the generated OpenAPI page.</p>
<pre class="lang-py prettyprint-override"><code>from flask.views import MethodView
from flask_smorest import Blueprint
from schemas.items import ItemsSchema
blp = Blueprint("api", "items", url_prefix="/items", description="Operations on items")
@blp.route('')
class ItemsView(MethodView):
@blp.response(status_code=200, schema=ITEMS_SCHEMA)
def get(self):
items = Item.query.all()
return ITEMS_SCHEMA.dump(items), 200
</code></pre>
<p><a href="https://i.sstatic.net/KnIJAkyG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KnIJAkyG.png" alt="enter image description here" /></a></p>
|
<python><flask><flask-smorest>
|
2025-04-08 13:41:18
| 1
| 848
|
akpp
|
79,562,059
| 17,040,989
|
how to randomly add a list of sequences into a text body
|
<p>This is one of my first tasks with actual Python code.</p>
<p>What I need to do is to import a set of (FASTA) sequences, select those with a length between 400 and 500 characters (base pairs), and randomly pick 100 of those to be added into another (FASTA genome) text body, again at random.</p>
<p>One thing to consider is that I should not have any of those 100 sequences be added within another, so that they can eventually be queried for. In fact, it would be ideal to know also the position where each sequence has been added.</p>
<p>This is my go at the problem, again apologies for the limited coding skills, but I'm no expert with the language. Any help is much appreciated! I'm mostly struggling with the last part, as I managed to add a sequence in another, but I cannot do so for a list object, <em>see</em> below for code and error.</p>
<pre><code>###library import
from Bio import SeqIO
import inspect
import random
###sequences handling
input_file = open("hs_sequences.txt")
my_dict = SeqIO.to_dict(SeqIO.parse(input_file, "fasta"))
#my_dict
###compute sequence length
l = []
for i in my_dict.values():
l.append(len(i))
#l
###select sequences based on range-length estimates of abundance
seq_of_choice = [seq for seq in s if 400 < len(seq) < 500]
###import FASTA
def fasta_reader(filename):
from Bio.SeqIO.FastaIO import FastaIterator
with open(filename) as handle:
for record in FastaIterator(handle):
yield record
def custom_print(string):
counter=0
res=""
for char in string:
if counter==60:
print(res)
counter=0
res=""
continue
res+=char
counter+=1
for entry in fasta_reader("hg37_chr1-1000l.fna"):
print(str(entry.id))
custom_print(str(entry.seq))
body = str(entry.seq)
###import full genome
#example_full = SeqIO.index("hg37_23-only.fna", "fasta")
#example_full
###randomly selects 100 sequences and adds them to the FASTA
def insert (source_str, insert_str, pos):
return source_str[:pos] + insert_str + source_str[pos:]
i = 1
hundred_seqs = []
while i <= 100:
hundred_seqs.append(random.choice(seq_of_choice))
i += 1
#hundred_seqs
custom_print(insert(body, hundred_seqs, random.randint(0, len(body))))
</code></pre>
<p>In this case the error is</p>
<blockquote>
<p>TypeError: can only concatenate str (not "list") to str</p>
</blockquote>
<p>but, of course, it works if I use <code>hundred_seqs[1]</code></p>
<p>I can provide a link to the files if necessary, the .txt is quite small but the FASTA genome isn't...</p>
<p><strong>EDIT</strong></p>
<p>Working code to add 100 strings from an original set (of a specified length) within a body of text, randomly and without overlapping. It seems to be doing what is expected; however, I wish to print out where each of those 100 sequences is placed in the body of text. Thanks in advance!</p>
<pre><code>###randomly selects 100 strings and adds them to the FASTA
def insert(source_str, insert_str, pos):
    return source_str[:pos] + insert_str + source_str[pos:]

def get_string_text(genome, all_strings):
    string_of_choice = [string for string in all_strings if 400 < len(string) < 500]
    hundred_strings = random.sample(string_of_choice, k=100)
    text_of_strings = []
    for k in range(len(hundred_strings)):
        text_of_strings.append(str(hundred_strings[k].seq))
    single_string = ",".join(text_of_strings)
    new_genome = insert(genome, single_string, random.randint(0, len(genome)))
    return new_genome

big_genome = get_string_text(body, s)
#len(big_genome)
#custom_print(big_genome)
chr1_retros = "\n".join([head, big_genome])
#len(chr1_retros)
#print(chr1_retros)
</code></pre>
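<p>A minimal sketch of one way to record the insertion coordinates while also guaranteeing that no inserted sequence lands inside another: pick one distinct cut point per sequence in the original genome, then rebuild the text piecewise while accumulating the offset added by earlier insertions. Plain strings are assumed; the commented usage lines reuse the names from the snippets above.</p>
<pre class="lang-py prettyprint-override"><code>import random

def insert_with_positions(genome, inserts):
    """Insert every string at its own random spot in `genome` (never inside
    another insert) and return the new text plus the (start, end) coordinates
    of each insert in the final text."""
    # One distinct cut point per insert, chosen in the original genome's coordinates.
    cuts = sorted(random.sample(range(len(genome) + 1), k=len(inserts)))
    pieces, positions, offset, prev = [], [], 0, 0
    for cut, ins in zip(cuts, inserts):
        pieces.append(genome[prev:cut])
        start = cut + offset                       # position in the final, longer text
        positions.append((start, start + len(ins)))
        pieces.append(ins)
        offset += len(ins)
        prev = cut
    pieces.append(genome[prev:])
    return "".join(pieces), positions

# Hypothetical usage with the objects defined above:
# big_genome, where = insert_with_positions(body, [str(rec.seq) for rec in hundred_strings])
# for rec, (start, end) in zip(hundred_strings, where):
#     print(rec.id, start, end)
</code></pre>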
|
<python><string><bioinformatics><biopython><fasta>
|
2025-04-08 12:37:31
| 2
| 403
|
Matteo
|
79,561,979
| 5,980,655
|
regex replace numbers between to characters
|
<p>I have a string <code>'manual__2025-04-08T11:37:13.757109+00:00'</code> and I want <code>'manual__2025-04-08T11_37_13_00_00'</code></p>
<p>I know how to substitute the <code>:</code> and <code>+</code> using</p>
<pre><code>'manual__2025-04-08T11:37:13.757109+00:00'.replace(':','_').replace('+','_')
</code></pre>
<p>but I also want to get rid of the numbers between the two characters '.' and '+'.</p>
<p>I'm using Python.</p>
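<p>One possible sketch: drop the fractional seconds together with the <code>+</code> in a single substitution, then replace the remaining <code>:</code> characters.</p>
<pre class="lang-py prettyprint-override"><code>import re

s = 'manual__2025-04-08T11:37:13.757109+00:00'
result = re.sub(r'\.\d+\+', '_', s).replace(':', '_')
print(result)  # manual__2025-04-08T11_37_13_00_00
</code></pre>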
|
<python><regex>
|
2025-04-08 12:09:26
| 5
| 1,035
|
Ale
|
79,561,920
| 2,094,708
|
Find shortest orthogonal path between two points in a 2D plane, through specified channels?
|
<p>How can I find the shortest orthogonal path between two points in a 2D plane?
The path should pass through channels whose coordinates are specified as a union of orthogonal rectangles. The points can be assumed to be at least touching the channels.
Is there any Python module which can do this?</p>
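<p>I am not aware of a module dedicated to exactly this, but as a sketch of the usual approach: rasterize the channel rectangles onto an integer grid and run a 4-connected BFS, which yields a shortest orthogonal (Manhattan) path. The rectangle format <code>(x0, y0, x1, y1)</code> in grid units is an assumption for illustration.</p>
<pre class="lang-py prettyprint-override"><code>from collections import deque

def shortest_orthogonal_path(rects, start, goal):
    # Cells covered by the union of the channel rectangles (inclusive bounds).
    free = {(x, y)
            for x0, y0, x1, y1 in rects
            for x in range(x0, x1 + 1)
            for y in range(y0, y1 + 1)}
    free.update({start, goal})
    prev = {start: None}
    queue = deque([start])
    while queue:
        p = queue.popleft()
        if p == goal:
            path = []
            while p is not None:
                path.append(p)
                p = prev[p]
            return path[::-1]
        x, y = p
        for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if n in free and n not in prev:
                prev[n] = p
                queue.append(n)
    return None  # no orthogonal path through the channels
</code></pre>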
|
<python><python-3.x>
|
2025-04-08 11:44:56
| 1
| 2,283
|
Sidharth C. Nadhan
|
79,561,697
| 2,883,209
|
Missing libraries (libpango) in Azure Python Function App 4.1037.1.1
|
<p>Good morning all</p>
<p>I was hoping someone might be able to shed some light, or even just know where Microsoft publishes the release notes for their Azure Function App runtimes.</p>
<p>We tried to go live with a new function app about three weeks ago; when we created the new Python ~4 function app and it tried to come up, it complained that libpango was missing:</p>
<pre><code>Result: Failure
Exception: OSError: cannot load library 'libpango-1.0-0': libpango-1.0-0: cannot open shared object file: No such file or directory. Additionally, ctypes.util.find_library() did not manage to locate a library called 'libpango-1.0-0'
</code></pre>
<p>We use libpango for generating PDFs (via WeasyPrint), so its absence kills the whole thing; at first we thought it might be an issue with requirements.txt.</p>
<p>After some investigation we figured out that the issue was that libpango (and libpangocairo and libpangoft2) were missing from the production instance, but not our test instance; it is a system library, so nothing we control. If you want to check, just run:</p>
<pre><code>dpkg -l | grep libpango
</code></pre>
<p>And on preprod we get the list</p>
<pre><code>ii libpango-1.0-0:amd64 1.46.2-3 amd64 Layout and rendering of internationalized text
ii libpangocairo-1.0-0:amd64 1.46.2-3 amd64 Layout and rendering of internationalized text
ii libpangoft2-1.0-0:amd64 1.46.2-3 amd64 Layout and rendering of internationalized text
</code></pre>
<p>After some more investigation, we figured out that it looks like these libraries are in runtime version 4.1036.2.2 (which we happen to have on preprod), but not in 4.1037.1.1, and Monday three weeks ago, the default runtime for ~4 seems to have been 4.1037.1.1.</p>
<p><a href="https://i.sstatic.net/9Oor3iKN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9Oor3iKN.png" alt="Function app overview" /></a></p>
<p><em>Strangely enough, it looks like if I create a new instance today (or at least last week), the default version is now 4.1036.2.2 (which has libpango), so potentially Microsoft has pulled 4.1037.1.1, or I am missing something. (also, when 4.1037.1.1 for premium apps would be missing libpango, elastic version did not, did not collect runtime on the elastic app alas)</em></p>
<p>Does anyone know where I can find release notes on the runtimes? I really would like to know why they pulled these libraries.</p>
<p>Failing that, does anyone know if there is a way to set the minor version of runtime at build time?</p>
<p>=================== Edit ========================
@Pravallika KV</p>
<p>I tried your solution but I can't get that to work; I tried setting FUNCTIONS_EXTENSION_VERSION; am I missing something?</p>
<p>I created a brand new app, did not load any code, and got version 4.1037.1.1</p>
<p><a href="https://i.sstatic.net/gwmEGdwI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gwmEGdwI.png" alt="enter image description here" /></a></p>
<p>Then, I went and changed FUNCTIONS_EXTENSTION_VERSION</p>
<p><a href="https://i.sstatic.net/3RjxMhlD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3RjxMhlD.png" alt="enter image description here" /></a></p>
<p>And now it refuses to come up again</p>
<p><a href="https://i.sstatic.net/BMhFUlzu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BMhFUlzu.png" alt="enter image description here" /></a>
Am I missing something?</p>
|
<python><azure-functions>
|
2025-04-08 09:47:13
| 1
| 1,244
|
vrghost
|
79,561,367
| 7,791,963
|
How to disable robot framework automatically logging KubeLibrary response to DEBUG in my python keyword?
|
<p>I am utilizing robot framework's <a href="https://github.com/devopsspiral/KubeLibrary/tree/master/src/KubeLibrary" rel="nofollow noreferrer">KubeLibrary</a> to interact with my k8s cluster.</p>
<p>By default, any function call and its response is automatically logged at DEBUG level in Robot Framework, meaning that the response will be visible in the log.html that Robot Framework generates.</p>
<p>I want to disable this logging for some parts of my code or remove it afterwards somehow, e.g. I don't want the secrets to be part of the log.html regardless of log level.</p>
<p>Current implementation which includes response in log.html when viewing DEBUG.</p>
<pre><code>
from KubeLibrary import KubeLibrary
...
def some_keyword():
    instance.kube = KubeLibrary(kube_config=instance.kubeconfig)
    secrets_matching_secret_name = instance.kube.get_secrets_in_namespace(
        name_pattern=f"^{secret_name}$",
        namespace=namespace
    )
</code></pre>
<p>Wishful skeleton code for how it could be done</p>
<pre><code>
instance.kube = KubeLibrary(kube_config=instance.kubeconfig)
instance.kube.disable_debug_log()
secrets_matching_secret_name = instance.kube.get_secrets_in_namespace(
name_pattern=f"^{secret_name}$",
namespace=namespace
)
instance.kube.enable_debug_log()
</code></pre>
<p>Is there any way I can disable it? Or somehow filter it out from log.html using rebot or similar?</p>
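<p>One sketch that might work without touching KubeLibrary itself: temporarily raise Robot Framework's log level around the sensitive call via <code>BuiltIn.set_log_level</code>, which returns the previous level so it can be restored; messages below the active level are not written to output.xml, so they never reach log.html.</p>
<pre class="lang-py prettyprint-override"><code>from robot.libraries.BuiltIn import BuiltIn

def get_secret_quietly(instance, secret_name, namespace):
    previous_level = BuiltIn().set_log_level("NONE")   # suppress keyword logging
    try:
        return instance.kube.get_secrets_in_namespace(
            name_pattern=f"^{secret_name}$",
            namespace=namespace,
        )
    finally:
        BuiltIn().set_log_level(previous_level)        # restore the original level
</code></pre>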
|
<python><robotframework>
|
2025-04-08 06:47:48
| 1
| 697
|
Kspr
|
79,561,195
| 11,084,338
|
Drop rows with all zeros in a Polars DataFrame
|
<p>I can use the <code>drop_nans()</code> function to remove rows with some or all columns set to <code>nan</code>.</p>
<p>Is there an equivalent function for dropping rows with all columns having value 0?</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.DataFrame({"a":[0, 0, 0, 0, 30],
"b":[0, 0, 0, 0, 40],
"c":[0, 0, 0, 0, 50]})
</code></pre>
<pre><code>shape: (5, 3)
┌─────┬─────┬─────┐
│ a   │ b   │ c   │
│ --- │ --- │ --- │
│ i64 │ i64 │ i64 │
╞═════╪═════╪═════╡
│ 0   │ 0   │ 0   │
│ 0   │ 0   │ 0   │
│ 0   │ 0   │ 0   │
│ 0   │ 0   │ 0   │
│ 30  │ 40  │ 50  │
└─────┴─────┴─────┘
</code></pre>
<p>In this example, I would like to drop the first 4 rows from the dataframe.</p>
<pre><code>shape: (1, 3)
┌─────┬─────┬─────┐
│ a   │ b   │ c   │
│ --- │ --- │ --- │
│ i64 │ i64 │ i64 │
╞═════╪═════╪═════╡
│ 30  │ 40  │ 50  │
└─────┴─────┴─────┘
</code></pre>
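<p>As far as I know there is no dedicated <code>drop_*</code> method for this, but a filter expression that keeps any row with at least one non-zero value is a straightforward sketch (recent Polars versions):</p>
<pre class="lang-py prettyprint-override"><code>out = df.filter(~pl.all_horizontal(pl.all() == 0))
# equivalently:
out = df.filter(pl.any_horizontal(pl.all() != 0))
print(out)
</code></pre>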
|
<python><dataframe><filter><python-polars>
|
2025-04-08 04:46:27
| 1
| 326
|
GH KIM
|
79,561,147
| 11,084,338
|
Change column type in Polars DataFrame
|
<p>I have a Polars DataFrame below.</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.DataFrame({"a":["1.2", "2.3", "5.4"],
"b":["0.4", "0.03", "0.12"],
"c":["AA", "BB", "CC"]})
</code></pre>
<pre><code>┌─────┬──────┬─────┐
│ a   │ b    │ c   │
│ --- │ ---  │ --- │
│ str │ str  │ str │
╞═════╪══════╪═════╡
│ 1.2 │ 0.4  │ AA  │
│ 2.3 │ 0.03 │ BB  │
│ 5.4 │ 0.12 │ CC  │
└─────┴──────┴─────┘
</code></pre>
<p>How can I convert the columns to specific types?
In this case, I want to convert columns <code>a</code> and <code>b</code> into floats.</p>
<p>I expect below.</p>
<pre><code>shape: (3, 3)
┌─────┬──────┬─────┐
│ a   │ b    │ c   │
│ --- │ ---  │ --- │
│ f64 │ f64  │ str │
╞═════╪══════╪═════╡
│ 1.2 │ 0.4  │ AA  │
│ 2.3 │ 0.03 │ BB  │
│ 5.4 │ 0.12 │ CC  │
└─────┴──────┴─────┘
</code></pre>
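<p>A sketch with <code>cast</code> inside <code>with_columns</code> (newer Polars versions also accept a column-to-dtype mapping via <code>df.cast({...})</code>):</p>
<pre class="lang-py prettyprint-override"><code>out = df.with_columns(
    pl.col("a").cast(pl.Float64),
    pl.col("b").cast(pl.Float64),
)
print(out)
</code></pre>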
|
<python><dataframe><python-polars>
|
2025-04-08 04:01:48
| 1
| 326
|
GH KIM
|
79,561,013
| 11,098,908
|
Is it OK to not use the value of the index 'i' inside a for loop?
|
<p>Would it be frowned upon if the index variable <code>i</code> were not used inside a <code>for</code> loop? I have never come across code that didn't use the value of the index while iterating through the loop.</p>
<pre><code>def questionable():
    for i in range(3):
        print('Is this OK?') # (or do something more complicated)

# as opposed to:
def proper():
    for i in range(3):
        print(i) # (or do something for which the value of 'i' is necessary)
</code></pre>
<p>What's a more Pythonic way to rewrite the function <code>questionable</code>, that is, to repeatedly do something without using the iteration variable?</p>
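<p>For reference, the usual convention is simply to name the unused loop variable <code>_</code>:</p>
<pre class="lang-py prettyprint-override"><code>def questionable():
    for _ in range(3):  # "_" signals that the loop variable is intentionally unused
        print('Is this OK?')
</code></pre>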
|
<python><for-loop>
|
2025-04-08 01:34:33
| 2
| 1,306
|
Nemo
|
79,560,990
| 15,828,895
|
Pylance slows my VSCode's autocomplete to as much as 5-10 sec of wait for the dropdown
|
<p>I discovered that Pylance was the source of my VSCode's autocomplete's slowness after following <a href="https://stackoverflow.com/questions/51874486/visual-studio-code-intellisense-is-very-slow-is-there-anything-i-can-do">Visual Studio Code Intellisense is very slow - Is there anything I can do?</a> and disabling all my extensions, then enabling them one by one. Pylance was the culprit.</p>
<p>Now I have <a href="https://stackoverflow.com/questions/71290916/vs-code-pylance-works-slow-with-much-delay">VS Code Pylance works slow with much delay</a> telling me that the solution is "Try moving your code to its own folder and opening that up".</p>
<p>But what does that mean? My project is small! The work of one developer maybe five hours a day for the past three months. Tiny!</p>
<p>My hypothesis was that Pylance was scanning virtual environments. Mine is <code>.venv</code>. But no, it's not. See <code>settings.json</code>:</p>
<pre><code>{
"python.formatting.provider": "autopep8",
"editor.formatOnSave": true,
"python.analysis.diagnosticMode": "workspace",
"python.analysis.exclude": [
"**/.venv/**",
"**/alembic/**",
"**/logs/**",
"**/.pytest_cache/**",
"**/DSScanner.egg-info/**"
]
}
</code></pre>
<p>So I have not been scanning virtual envs as I suspected.</p>
<p>My original project structure was</p>
<pre><code>surveillance # project root
..src
....db
....service
....util
..tests
..server.py
..readme.md
</code></pre>
<p>Claude suggested to me that perhaps Pylance was struggling reading all my relative imports. So, after putting it off for over a month, I redid my layouts to use absolute imports:</p>
<pre><code>surveillance # project root
..surveillance
....src
..tests
..readme.md
</code></pre>
<p>This did not help me</p>
<p>My <code>pyrightconfig.json</code>:</p>
<pre><code>{
"include": ["surveillance", ""],
"exclude": [
".venv",
".pytest_cache",
"**/__pycache__",
"deskSense.egg-info"
],
"executionEnvironments": [
{
"root": ".",
"extraPaths": ["."]
}
],
"typeCheckingMode": "basic",
"useLibraryCodeForTypes": false
}
</code></pre>
<p>It's so slow, I must fix it. Can anyone suggest what is meant by "Try moving your code to its own folder"? Perhaps the problem is my monorepo? But I struggle to believe this: why would Pylance read my frontend folder and other repos? They're tiny.</p>
<p>Any other solution is welcome, of course.</p>
|
<python><visual-studio-code><pylance>
|
2025-04-08 00:50:11
| 1
| 2,198
|
plutownium
|
79,560,716
| 5,203,069
|
Dockerized Django app - `gunicorn: command not found` on AWS deployment
|
<p>I have a Dockerized Django app that is deployed on an AWS t3a.micro EC2 instance - after some recent updates to the Debian and PostgreSQL images that I'm using in the Dockerfile, the app is suddenly failing to start running successfully when it hits AWS due to an issue with <code>gunicorn</code> -</p>
<pre><code>pyenv: gunicorn: command not found
</code></pre>
<p>This problem did not occur previously and, if I simply deploy a build based on the previous git commit (prior to making these updates), the app deploys and runs just fine. Furthermore, when I run the app locally via <code>docker-compose up</code> and bash into the container, I can run <code>gunicorn</code> commands no problem, so I'm not quite sure what the issue is - <code>gunicorn</code> is included in the <code>requirements.txt</code> file.</p>
<p>Maybe I'll need to provide more info, but for starters, here is the current setup of the <code>Dockerfile</code> and <code>requirements.txt</code> files - I'll note the changes that have been made below them -</p>
<p><code>Dockerfile</code></p>
<pre><code>FROM andrejreznik/python-gdal:py3.11.10-gdal3.6.2
ENV PYTHONUNBUFFERED 1
RUN pip install --upgrade pip
RUN apt-get update
RUN apt-get upgrade -y
RUN apt-get install postgresql-15 postgresql-server-dev-15 -y
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
ARG ALLOWED_HOSTS
ARG AWS_ACCESS_KEY_ID
ARG AWS_S3_CUSTOM_DOMAIN
ARG AWS_SECRET_ACCESS_KEY
ARG AWS_STORAGE_BUCKET_NAME
ARG AWS_SES_ACCESS_KEY_ID
ARG AWS_SES_SECRET_ACCESS_KEY
ARG CORS_ALLOWED_ORIGINS
ARG DATABASE_HOST
ARG DATABASE_NAME
ARG DATABASE_PASSWORD
ARG DATABASE_PORT
ARG DATABASE_USER
ARG SECRET_KEY
ENV ALLOWED_HOSTS=${ALLOWED_HOSTS}
ENV AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
ENV AWS_S3_CUSTOM_DOMAIN=${AWS_S3_CUSTOM_DOMAIN}
ENV AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
ENV AWS_STORAGE_BUCKET_NAME=${AWS_STORAGE_BUCKET_NAME}
ENV AWS_SES_ACCESS_KEY_ID=${AWS_SES_ACCESS_KEY_ID}
ENV AWS_SES_SECRET_ACCESS_KEY=${AWS_SES_SECRET_ACCESS_KEY}
ENV CORS_ALLOWED_ORIGINS=${CORS_ALLOWED_ORIGINS}
ENV DATABASE_HOST=${DATABASE_HOST}
ENV DATABASE_NAME=${DATABASE_NAME}
ENV DATABASE_PASSWORD=${DATABASE_PASSWORD}
ENV DATABASE_PORT=${DATABASE_PORT}
ENV DATABASE_USER=${DATABASE_USER}
ENV SECRET_KEY=${SECRET_KEY}
EXPOSE 8000
RUN ["python", "manage.py", "collectstatic", "--noinput"]
RUN ["python", "manage.py", "migrate"]
CMD ["gunicorn", "--workers=4", "--bind=:8000", "my_app_backend.wsgi"]
</code></pre>
<p><code>requirements.txt</code></p>
<pre><code>boto3==1.16.18
django==3.1
django-ckeditor==6.0.0
django-cors-headers==3.5.0
django-filter==2.4.0
django-import-export>=2.5.0
django-ses==1.0.3
django-simple-history==2.11.0
django-storages==1.10.1
djangorestframework==3.11.1
djangorestframework-gis==0.16
dj-database-url==0.5.0
gunicorn==20.0.4
Pillow==7.2.0
psycopg2-binary==2.9.10
mysql-connector-python>=8.0.21
timezonefinder>=4.2.0
pytz>=2020.1
</code></pre>
<p>Between the last working release and this one, I've made the following changes to the <code>Dockerfile</code> -</p>
<ul>
<li>updated Debian version from <code>FROM andrejreznik/python-gdal:stable</code> to <code>FROM andrejreznik/python-gdal:py3.11.10-gdal3.6.2</code></li>
<li>added an explicit PostgreSQL version via <code>RUN apt-get install postgresql-15 postgresql-server-dev-15 -y</code></li>
</ul>
<p>And in <code>requirements.txt</code> -</p>
<ul>
<li>updated <code>psycopg2</code> dependency from <code>psycopg2==2.8.5</code> to <code>psycopg2-binary==2.9.10</code></li>
</ul>
<p>Other than these changes, these files are exactly the same.</p>
<p>So far, I have experimented with various combinations of explicitly adding <code>gunicorn</code> to the <code>Dockerfile</code> via</p>
<pre><code>RUN apt-get install gunicorn -y
</code></pre>
<p>and updating <code>gunicorn</code> in the <code>requirements.txt</code> to the latest version, but nothing has helped. For some reason, the <code>gunicorn</code> command is not becoming available in the deployed environment, but I don't understand why.</p>
<p>I also added a few commands to the <code>Dockerfile</code> to get some insight into the <code>gunicorn</code> installation -</p>
<pre><code>#15 [11/17] RUN whereis gunicorn
#15 sha256:b84e89b409fb5702af7f9ad1edce8817a23b733674211baae2edfd6e5baf1397
#15 0.213 gunicorn: /usr/local/pyenv/shims/gunicorn
#15 DONE 0.2s
#16 [12/17] RUN which gunicorn
#16 sha256:0eb044442dfa9b6a75308ac8500fc2e75d4aa6685cfb7e044508a61d92d5b6dc
#16 0.224 /usr/local/pyenv/shims/gunicorn
#16 DONE 0.2s
#17 [13/17] RUN dpkg --get-selections | grep '^g'
#17 sha256:8a4451313e003399ca21a7e9a61c923e8573b869465e64a3dcd5ea581fbc019b
#17 0.238 gcc install
#17 0.238 gcc-12 install
#17 0.238 gcc-12-base:amd64 install
#17 0.238 git install
#17 0.238 git-man install
#17 0.238 gpgv install
#17 0.238 grep install
#17 0.238 gzip install
#17 DONE 0.3s
#18 [14/17] RUN pip list | grep 'gunicorn'
#18 sha256:5391b3d15711450cd56c2e85ae4db5926644f6f616c5584570af60ae9cc4a1d9
#18 3.398 gunicorn 20.0.4
#18 DONE 3.4s
</code></pre>
<p>Please let me know if there is any other info I can provide that will help diagnose/resolve this problem.</p>
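<p>One thing that may be worth trying, since the two image tags appear to differ in how the pyenv shims are set up: start gunicorn through the interpreter instead of relying on the shim being resolvable at runtime (gunicorn can be launched with <code>python -m</code>). This is a sketch, not a confirmed fix for this image.</p>
<pre><code>CMD ["python", "-m", "gunicorn", "--workers=4", "--bind=:8000", "my_app_backend.wsgi"]
</code></pre>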
|
<python><django><amazon-web-services><docker><gunicorn>
|
2025-04-07 20:13:47
| 0
| 8,640
|
skwidbreth
|
79,560,688
| 7,959,614
|
Transform list of dictionaries into nested dictionary
|
<p>I have the following list of dictionaries</p>
<pre><code>l = [{'u': 1, 'v': 2, 'k': [1, 1, 1, 1, 1]},
{'u': 1, 'v': 3, 'k': [2, 2, 2, 2, 2]},
{'u': 2, 'v': 3, 'k': [3, 3, 3, 3, 3]},
{'u': 1, 'v': 4, 'k': [4, 4, 4, 4, 4]},
{'u': 2, 'v': 5, 'k': [5, 5, 5, 5, 5]}]
</code></pre>
<p>I want to created a nested dictionary of this:</p>
<pre><code>nested_d = {
'1': {'2': [1, 1, 1, 1, 1],
'3': [2, 2, 2, 2, 2],
'4': [4, 4, 4, 4, 4]},
'2': {'3': [3, 3, 3, 3, 3],
'5': [5, 5, 5, 5, 5]}
}
</code></pre>
<p>How can I achieve this easily (without a for loop / in a one-liner)?</p>
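<p>A sketch without an explicit <code>for</code> statement, using <code>itertools.groupby</code> (the input must be sorted by <code>'u'</code>; note that the expected output above shows string keys, so wrap <code>u</code> and <code>d['v']</code> with <code>str()</code> if that is really intended):</p>
<pre class="lang-py prettyprint-override"><code>from itertools import groupby

nested_d = {
    u: {d['v']: d['k'] for d in grp}
    for u, grp in groupby(sorted(l, key=lambda d: d['u']), key=lambda d: d['u'])
}
print(nested_d)
# {1: {2: [1, 1, 1, 1, 1], 3: [2, 2, 2, 2, 2], 4: [4, 4, 4, 4, 4]},
#  2: {3: [3, 3, 3, 3, 3], 5: [5, 5, 5, 5, 5]}}
</code></pre>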
|
<python><dictionary>
|
2025-04-07 19:57:49
| 3
| 406
|
HJA24
|
79,560,656
| 3,777,717
|
How does Python represent slices in various cases?
|
<p>Consider</p>
<pre><code>l = [1, 2, 3, 4, 5]
print(l is l[:])
t = (1, 2, 3, 4, 5)
print(t is t[:])
print(t[1:] is t[1:])
print('aa'[1:] is 'aa'[1:])
print('aaa'[1:] is 'aaa'[1:])
</code></pre>
<p>The result is, somewhat surprisingly, <code>False</code>, <code>True</code>, <code>False</code>, <code>True</code>, <code>False</code>.</p>
<p>Additionally, if I create objects of types <code>list</code>, <code>tuple</code> and <code>str</code> with large lengths and then create large numbers of slices <code>[1:]</code> of each, only with <code>str</code> is it efficient in terms of time and memory, even though tuples are also immutable and could, just like strings, be represented without copying the range specified by a slice, just by indexing into the contiguous memory where the data is already stored.</p>
<p>Why does CPython behave this way? Is it an implementation thing, or are all implementations required to follow the same choices?</p>
|
<python><data-structures><heap-memory><implementation>
|
2025-04-07 19:40:43
| 0
| 1,201
|
ByteEater
|
79,560,599
| 20,102,061
|
Algorithm for detecting full loop when iterating over a list
|
<p>Assignment:</p>
<blockquote>
<p>Write a funcion <code>cycle_sublist(lst, start, step)</code> where:</p>
<ul>
<li><code>lst</code> is a list</li>
<li><code>start</code> is number that satisfies: <code>0 <= start < len(lst)</code></li>
<li><code>step</code> is the amount we increase your index each iteration</li>
</ul>
<p><strong>without using</strong>: slicing, importing, list comprehension, built-in functions like <code>map</code> and <code>filter</code>.</p>
<p>The function works in this way:
We start to iterate over the list of items when we get back to start or cross it again.
So for example:</p>
<pre><code>cycle_sublist([1], 0, 2) -> [1]
cycle_sublist([6, 5, 4, 3], 0, 2) -> [6, 4]
cycle_sublist([7, 6, 5, 4, 3], 3, 1) -> [4, 3, 7, 6, 5]
cycle_sublist([4, 3, 2, 5, 1, 6, 9], 2, 2) -> [2, 1, 9, 3]
cycle_sublist([4, 3, 2, 5, 1, 6, 9], 5, 3) -> [6, 3, 1]
</code></pre>
</blockquote>
<p>My problem is detecting when I have completed a cycle. I tried to:</p>
<ul>
<li>Check my previous step and current steps and check it against start. The problem is there are some cases where it fails.</li>
<li>Count my steps and checking if I had crossed the start.</li>
</ul>
<p>None of those worked.</p>
<p>Here is my code - with the missing logic for detecting the cycle:</p>
<pre><code>def cycle_sublist(lst, start, step):
    index = start
    length = len(lst)
    cycle_complete = False
    res = []
    while True:
        index = index % length if index >= length else index
        if ...:
            cycle_complete = True
        if cycle_complete and index >= start:
            break
        res.append(lst[index])
        index += step
    return res
</code></pre>
<p>If you can I'd like to ask you to answer with the algorithm to detect the cycle only so I can write the code myself.</p>
|
<python><list><algorithm><loops>
|
2025-04-07 19:02:29
| 2
| 402
|
David
|
79,560,569
| 10,634,126
|
Issue uploading CSV to SharePoint from Python
|
<p>I am trying to upload a CSV of a Pandas DataFrame generated by a Python script to Microsoft SharePoint / OneDrive.</p>
<p>I cannot figure out how to get past the following error when I try to connect to upload a file:</p>
<pre><code>import pandas as pd
from office365.sharepoint.client_context import ClientContext
from office365.runtime.auth.authentication_context import AuthenticationContext
df = pd.DataFrame()
site_url = "https://{tenant id: str = ########-####-####-####-############ ???}.sharepoint.com/sites/{site name: str = 'tktktk' ???}"
username = "{user}@{site name}.com"
password = "{tktktk}"
auth_ctx = AuthenticationContext(site_url)
auth_ctx.acquire_token_for_user(username, password)
ctx = ClientContext(site_url, auth_ctx)
web = ctx.web
ctx.load(web)
try:
    ctx.execute_query()
    print(f"Connection to SharePoint site '{web.properties['Title']}' successful!")
except Exception as e:
    print(f"Error connecting to SharePoint: {e}")
</code></pre>
<p>Output:</p>
<blockquote>
<p>ValueError: Cannot get binary security token for from <a href="https://login.microsoftonline.com/extSTS.srf" rel="nofollow noreferrer">https://login.microsoftonline.com/extSTS.srf</a></p>
</blockquote>
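<p>That particular <code>ValueError</code> typically shows up when legacy username/password sign-in is blocked for the account (MFA, conditional access, or disabled legacy authentication). A sketch of the app-only alternative the same library offers, assuming a client id/secret registered for the site:</p>
<pre class="lang-py prettyprint-override"><code>from office365.runtime.auth.client_credential import ClientCredential
from office365.sharepoint.client_context import ClientContext

ctx = ClientContext(site_url).with_credentials(ClientCredential(client_id, client_secret))
web = ctx.web
ctx.load(web)
ctx.execute_query()
print(web.properties["Title"])
</code></pre>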
|
<python><sharepoint><onedrive>
|
2025-04-07 18:39:31
| 0
| 909
|
OJT
|
79,560,413
| 1,501,073
|
how to use xsd with included and imported files with python
|
<p>I have a base.xsd file which includes an enums.xsd. Both files are in the same directory.</p>
<pre class="lang-xml prettyprint-override"><code><xs:include id="enums" schemaLocation="enums.xsd"/>
</code></pre>
<p>The enums.xsd imports:</p>
<pre class="lang-xml prettyprint-override"><code><xs:import
namespace="http://www.wipo.int/standards/XMLSchema/Common/0"
schemaLocation="ST96AnnexIIISchemas-V0-11/Common/0/Basic/ExtendedWIPOST3CodeType-V0-3.xsd"/>
<xs:import
namespace="http://www.wipo.int/standards/XMLSchema/Common/0"
schemaLocation="ST96AnnexIIISchemas-V0-11/Common/0/Basic/ExtendedISOLanguageCodeType-V0-4.xsd"/>
<xs:import
namespace="http://www.wipo.int/standards/XMLSchema/Common/0"
schemaLocation="ST96AnnexIIISchemas-V0-11/Common/0/Basic/ExtendedISOCurrencyCodeType-V0-1.xsd"/>
</code></pre>
<p>These files include some other xsd files.</p>
<p>After all this base.xsd defines</p>
<pre class="lang-xml prettyprint-override"><code><xs:simpleType name="tCountryCodes">
<xs:union memberTypes="com:ExtendedWIPOST3CodeType" />
</xs:simpleType>
<xs:simpleType name="tLangs">
<xs:union memberTypes="com:ExtendedISOLanguageCodeType" />
</xs:simpleType>
<xs:simpleType name="tCurrencyCodes">
<xs:union memberTypes="com:ExtendedISOCurrencyCodeType" />
</xs:simpleType>
</code></pre>
<p>When I use the following Python code, <code>schema = xmlschema.XMLSchema(xsd_path)</code>, I get the following error:</p>
<blockquote>
<p>Error: unknown type '{http://www.wipo.int/standards/XMLSchema/Common/0}ExtendedISOLanguageCodeType'</p>
</blockquote>
<p>which is defined in <code>ST96AnnexIIISchemas-V0-11/Common/0/Basic/ExtendedISOLanguageCodeType-V0-4.xsd</code> as:</p>
<pre class="lang-xml prettyprint-override"><code><?xml version="1.0" encoding="UTF-8"?>
<xsd:schema
xmlns:xsd="http://www.w3.org/2001/XMLSchema"
xmlns:com="http://www.wipo.int/standards/XMLSchema/Common/0"
targetNamespace="http://www.wipo.int/standards/XMLSchema/Common/0"
elementFormDefault="qualified"
attributeFormDefault="qualified" version="0.4">
<xsd:include schemaLocation="ISOLanguageCodeType-V0-1.xsd"/>
<xsd:include schemaLocation="ISOFormerLanguageCodeType-V0-1.xsd"/>
<xsd:simpleType name="ExtendedISOLanguageCodeType">
<xsd:union memberTypes="com:ISOLanguageCodeType com:ISOFormerLanguageCodeType"/>
</xsd:simpleType>
</xsd:schema>
</code></pre>
<p>As you can notice, there is no issue with <code>com:ExtendedWIPOST3CodeType</code>, whose definition precedes <code>com:ExtendedISOLanguageCodeType</code>.</p>
<p>The <code>xs:schema</code> of base.xsd comprises <code>xmlns:com="http://www.wipo.int/standards/XMLSchema/Common/0"</code>. I checked all the xmlns and targetNamespace declarations.</p>
<p>If I change base.xsd to:</p>
<pre class="lang-xml prettyprint-override"><code><xs:import
namespace="http://www.wipo.int/standards/XMLSchema/Common/0"
schemaLocation="ST96AnnexIIISchemas-V0-11/Common/0/Basic/ExtendedWIPOST3CodeType-V0-3.xsd"/>
<xs:import
namespace="http://www.wipo.int/standards/XMLSchema/Common/0"
schemaLocation="ST96AnnexIIISchemas-V0-11/Common/0/Basic/ExtendedISOLanguageCodeType-V0-4.xsd"/>
<xs:import
namespace="http://www.wipo.int/standards/XMLSchema/Common/0"
schemaLocation="ST96AnnexIIISchemas-V0-11/Common/0/Basic/ExtendedISOCurrencyCodeType-V0-1.xsd"/>
<xs:simpleType name="tCountryCodes">
<xs:union memberTypes="com:ExtendedWIPOST3CodeType"/>
</xs:simpleType>
<xs:simpleType name="tLangs">
<!--
<xs:union memberTypes="com:ExtendedISOLanguageCodeType"/>
-->
<xs:restriction base="xs:string"/>
</xs:simpleType>
<xs:simpleType name="tCurrencyCodes">
<!--
<xs:union memberTypes="com:ExtendedISOCurrencyCodeType"/>
-->
<xs:restriction base="xs:string"/>
</xs:simpleType>
</code></pre>
<p>The xsd is well loaded and used.</p>
<p>The exact same xsd is accepted by dotnet 8 code.</p>
<p>What can I do from here to avoid the error?</p>
|
<python><xsd><xsd-validation>
|
2025-04-07 16:50:24
| 0
| 7,830
|
tschmit
|
79,560,285
| 9,173,710
|
Logging in PySide6 GUI with rich.logging RichHandler and QTextEdit HTML text, causes spacing and alignment issues
|
<p>I want to show the application log on the GUI in some way.
I am using my own class, which inherits from both <code>QTextEdit</code> and <code>logging.Handler</code>. It is added to logging as a handler at init.</p>
<p>If I insert the text as plain text into the widget, it prints fine; the line spacing is just fine. For that I simply do:</p>
<pre class="lang-py prettyprint-override"><code>class QTextEditLogger(QTextEdit, logging.Handler):
def __init__(self, parent, level=NOTSET):
QTextEdit.__init__(self, parent)
logging.Handler.__init(self, level=level)
def emit(self, record):
msg = self.format(record)
self.append(msg)
</code></pre>
<p>When I try to use rich.RichHandler formatting to get colored text and insert that as HTML into the QTextEdit, the color works fine; however, the text is not properly spaced/aligned.</p>
<p><a href="https://i.sstatic.net/7AHyqtqe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7AHyqtqe.png" alt="enter image description here" /></a></p>
<p>As you can see, the line spacing is too large, and the word wrap looks really weird, even though wrapping is turned off for the widget and in the console.</p>
<p>Here is the MRE. I add a RichHandler as a member of my class, plus a Console object that logs to <code>os.devnull</code> and records.
I override the emit function: when the handler gets called, I first dispatch the RichHandler emit, then grab the console text via <code>export_html()</code>, which in turn is added to the text edit via <code>insertHtml()</code>. I even tried to set the console size to the widget width, but that didn't really help.</p>
<p>How can I remove the space between the lines and fix the indent from the first line?</p>
<pre class="lang-py prettyprint-override"><code>import sys
import os
from time import sleep
import logging
from logging import Handler, NOTSET
from rich.logging import RichHandler
from rich.console import Console
from PySide6.QtWidgets import QTextEdit, QApplication, QMainWindow
from PySide6.QtGui import QTextCursor, QTextOption
class QTextEditLogger(QTextEdit, Handler):
"""A QTextEdit logger that uses RichHandler to format log messages."""
def __init__(self, parent=None, level=NOTSET):
QTextEdit.__init__(self, parent)
Handler.__init__(self,level=level)
self.console = Console(file=open(os.devnull, "wt"), record=True,width=42, height=12, soft_wrap=False)
self.rich_handler = RichHandler(show_time=False, show_path=False, show_level=True, markup=True, console=self.console, log_time_format="[%X]", level=self.level)
self.rich_handler.setLevel(self.level)
QTextEdit.setWordWrapMode(self, QTextOption.WrapMode.NoWrap)
self.setAcceptRichText(True)
self.setReadOnly(True)
def showEvent(self, arg__1):
self.console.width = self.width()//self.fontMetrics().averageCharWidth() # Approximate character width
self.console.height = self.height()//self.fontMetrics().height() # Approximate character height
return super().showEvent(arg__1)
def emit(self, record) -> None:
"""Override the emit method to handle log records."""
self.rich_handler.emit(record)
html = self.console.export_html(clear=True, inline_styles=True)
self.insertHtml(html)
self.verticalScrollBar().setSliderPosition(self.verticalScrollBar().maximum())
c = self.textCursor()
c.movePosition(QTextCursor.End)
self.setTextCursor(c)
class MainWindow(QMainWindow):
def __init__(self):
super().__init__()
self.setWindowTitle("QTextEdit Example")
# Create a QTextEdit widget
self.text_edit = QTextEditLogger(self)
self.setCentralWidget(self.text_edit)
if __name__ == "__main__":
app = QApplication(sys.argv)
window = MainWindow()
window.show()
# Set up logging
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
logger.addHandler(window.text_edit)
logger.info("This is an info message.")
sleep(.5)
logger.warning("This is a warning message.")
sleep(.5)
for i in range(10):
logger.debug(f"This is a debug message {i}.")
logger.error("This is an error message.")
sys.exit(app.exec_())
</code></pre>
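<p>One direction that may help (a sketch, not verified against this exact widget): <code>export_html()</code> by default wraps the output in a complete HTML page whose body and pre-block styling QTextEdit turns into extra margins and odd wrapping. The <code>code_format</code> parameter lets you export only the highlighted text in a minimal wrapper, for example in the <code>emit</code> override:</p>
<pre class="lang-py prettyprint-override"><code>    def emit(self, record) -> None:
        self.rich_handler.emit(record)
        # Export just the console markup in a zero-margin <pre> instead of a full HTML document.
        html = self.console.export_html(
            clear=True,
            inline_styles=True,
            code_format='<pre style="margin:0; line-height:1.0; white-space:pre">{code}</pre>',
        )
        self.insertHtml(html)
        self.verticalScrollBar().setSliderPosition(self.verticalScrollBar().maximum())
</code></pre>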
|
<python><logging><pyside6><rich>
|
2025-04-07 15:36:23
| 1
| 1,215
|
Raphael
|
79,560,135
| 14,385,099
|
Creating new rows in a dataframe based on previous values
|
<p>I have a dataframe that looks like this:</p>
<pre><code>test = pd.DataFrame(
{'onset': [1,3,18,33,35,50],
'duration': [2,15,15,2,15,15],
'type': ['Instr', 'Remember', 'SocTestString', 'Rating', 'SelfTestString', 'XXX']
}
)
</code></pre>
<p>I want to create a new dataframe such that when <code>type</code> contains "TestString",</p>
<ul>
<li>two new rows are created below that row, such that the row is now split into three rows with (for example) SocTestString_1, SocTestString_2, SocTestString_3</li>
<li>for those three rows, change duration columns to the value 5</li>
<li>for those three rows, also change the onset column such that it is the onset value of the previous row + 5</li>
</ul>
<p>The final dataframe should look like this:</p>
<pre><code>test_final = pd.DataFrame(
{'onset': [1,3,18,23,28,33,35,40,45,50],
'duration': [2,15,5,5,5,2,5,5,5,15],
'type': ['Instr', 'Remember', 'SocTestString_1', 'SocTestString_2', 'SocTestString_3', 'Rating', 'SelfTestString_1', 'SelfTestString_2', 'SelfTestString_3', 'XXX']
})
</code></pre>
<p>How may I accomplish this?</p>
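<p>A straightforward sketch that rebuilds the frame row by row (column names as in the example above):</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd

def split_test_strings(df, n=3, dur=5):
    rows = []
    for _, r in df.iterrows():
        if "TestString" in r["type"]:
            for i in range(n):
                rows.append({"onset": r["onset"] + i * dur,
                             "duration": dur,
                             "type": f"{r['type']}_{i + 1}"})
        else:
            rows.append(r.to_dict())
    return pd.DataFrame(rows)

test_final = split_test_strings(test)
</code></pre>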
|
<python><pandas>
|
2025-04-07 14:22:41
| 1
| 753
|
jo_
|
79,560,075
| 16,563,251
|
Exclude methods consisting of a single pass statement from coverage reports in python
|
<p>In my class, I have some methods that can be overridden by subclasses, but do not need to be.</p>
<p>I like to test my project and generate a coverage report using <a href="https://coverage.readthedocs.io/" rel="nofollow noreferrer">coverage.py</a>.
Because the method of the superclass does not do anything, it is not really necessary to test it.
But this means it shows up in my coverage report as uncovered, which I want to avoid.
How do I resolve this?</p>
<p>For abstract methods, this is rather straight-forward by just excluding them from the coverage report based on their decorator (<a href="https://stackoverflow.com/questions/9202723/excluding-abstractproperties-from-coverage-reports">See this answer</a> and <a href="https://coverage.readthedocs.io/en/latest/excluding.html#advanced-exclusion" rel="nofollow noreferrer">the coverage.py docs</a>).
But as far as I can tell, there is no such annotation for a method that can be optionally overridden.</p>
<p>Excluding every <code>pass</code> statement would be an option, but as this can be used for other purposes, I would prefer a more specific solution.
Replacing the pass statement by something else (e.g. a doc-string, as suggested in <a href="https://stackoverflow.com/a/19275908/16563251">some answers</a>) is not something I want to do, because I feel it gives some tool influence over my code style, instead of adapting itself, and I think <code>pass</code> is best for readability here.</p>
<pre class="lang-py prettyprint-override"><code>class MySuperclass:
def hello(self):
print("Hello")
self._hello_printed()
# A child class is allowed to override this if needed, but does not have to
def _hello_printed(self):
pass
class MyChild1(MySuperclass):
def _hello_printed(self):
print("World")
class MyChild2(MySuperclass):
# Some other child class stuff
pass
</code></pre>
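<p>One possible direction, mirroring the <code>@abstractmethod</code> trick linked above: mark the optional hooks with a no-op decorator of your own and add that decorator to coverage's exclusion patterns (for example an <code>@overridable</code> entry in <code>exclude_lines</code>/<code>exclude_also</code>), so only these specific <code>pass</code> bodies are excluded. This is a sketch of the idea, not an officially supported marker.</p>
<pre class="lang-py prettyprint-override"><code>def overridable(func):
    """Marks a hook that subclasses may override; has no runtime effect."""
    return func

class MySuperclass:
    def hello(self):
        print("Hello")
        self._hello_printed()

    @overridable
    def _hello_printed(self):
        pass
</code></pre>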
|
<python><abstract-class><python-class><coverage.py>
|
2025-04-07 13:58:10
| 0
| 573
|
502E532E
|
79,559,952
| 11,062,613
|
Howto efficiently apply a gufunc to a 2D region of a Polars DataFrame
|
<p>Both Polars and Numba are fantastic libraries that complement each other pretty well. There are some limitations when using Numba-compiled functions in Polars:</p>
<ul>
<li>Arrow columns must be converted to NumPy arrays (and then converted back).</li>
<li>NumPy/Numba does not support missing data.</li>
</ul>
<p>I'm trying to apply a Numba-compiled gufunc to a 2D region of a Polars DataFrame with two approaches:</p>
<ol>
<li>Using map_batches with a custom lambda function to apply the gufunc.</li>
<li>Converting the entire DataFrame to a NumPy array, applying the gufunc on the whole array, and then concatenating the result back with the original DataFrame. A nice feature here is that you can utilize the axis keyword to do horizontal or vertical computations.</li>
</ol>
<p>Polars map_batches is designed to apply a custom Python function to a sequence of Series; it doesn't natively support applying a function to a 2D region out of the box (afaik).</p>
<p>Is there a more efficient or graceful way to apply a gufunc over a 2D region of a Polars DataFrame?</p>
<p>Here is an example:</p>
<pre><code>import numpy as np
import numba as nb
import polars as pl
@nb.guvectorize([f"void(i8[:], i8, i8[:])"], "(n),()->(n)")
def foo(arr: np.ndarray, value: float, out: np.ndarray = None) -> None:
total = arr.dtype.type(0)
for i, ai in enumerate(arr):
total += ai + value
out[i] = total
def apply_foo_batches(df: pl.DataFrame) -> pl.DataFrame:
"""Apply the gufunc via map_batches."""
return (
df.with_columns(
pl.struct(
pl.col("*").map_batches(lambda a: foo(a, 1, axis=0), return_dtype=pl.Int64))
.alias("foo"))
.with_columns(
pl.col("foo").name.prefix_fields("foo.")).unnest("foo")
)
def apply_foo_concat(df: pl.DataFrame) -> pl.DataFrame:
"""Apply the gufunc by converting to a NumPy array and concatenating the results."""
return (
pl.concat([
df,
pl.DataFrame(data=foo(df.to_numpy(), 1, axis=0),
schema=[f"foo.{c}" for c in df.columns])],
how='horizontal')
)
</code></pre>
<p>Example usage:</p>
<pre><code>df = pl.DataFrame({"val_1": [1, 1, 1], "val_2": [10, 10, 10]})
print(df.pipe(apply_foo_batches))
print(df.pipe(apply_foo_concat))
# ┌───────┬───────┬───────────┬───────────┐
# │ val_1 │ val_2 │ foo.val_1 │ foo.val_2 │
# │ ---   │ ---   │ ---       │ ---       │
# │ i64   │ i64   │ i64       │ i64       │
# ╞═══════╪═══════╪═══════════╪═══════════╡
# │ 1     │ 10    │ 2         │ 11        │
# │ 1     │ 10    │ 4         │ 22        │
# │ 1     │ 10    │ 6         │ 33        │
# └───────┴───────┴───────────┴───────────┘
</code></pre>
<p>And some benchmarks for different DataFrame sizes (i7-5500U, Linux):</p>
<pre><code>shape = (1_000, 50)
data = np.random.randint(1, 9, size=shape)
df = pl.DataFrame(data, schema=[f'c{i}' for i in range(shape[1])])
%timeit df.pipe(apply_foo_batches)
%timeit df.pipe(apply_foo_concat)
# 10.7 ms ยฑ 40.9 ฮผs per loop (mean ยฑ std. dev. of 7 runs, 100 loops each)
# 486 ฮผs ยฑ 2.74 ฮผs per loop (mean ยฑ std. dev. of 7 runs, 1,000 loops each)
shape = (10_000, 50)
data = np.random.randint(1, 9, size=shape)
df = pl.DataFrame(data, schema=[f'c{i}' for i in range(shape[1])])
%timeit df.pipe(apply_foo_batches)
%timeit df.pipe(apply_foo_concat)
# 11.3 ms ยฑ 57.5 ฮผs per loop (mean ยฑ std. dev. of 7 runs, 100 loops each)
# 1.85 ms ยฑ 80.6 ฮผs per loop (mean ยฑ std. dev. of 7 runs, 1,000 loops each)
</code></pre>
|
<python><numpy><python-polars><numba><polars>
|
2025-04-07 12:57:01
| 0
| 423
|
Olibarer
|
79,559,899
| 91,401
|
Solve the kinematic equations for aiming a simulated turret with velocity and acceleration
|
<p>I am working on a problem for a simulated game. I want my AI to be able to aim at a moving enemy target with a given starting location, starting velocity, and constant acceleration.</p>
<p>The position of the enemy is given by</p>
<pre><code>p_e(t) = s_e + v_e * t + 0.5 * a_e * t ** 2
</code></pre>
<p>and the position of a fired projectile is given by</p>
<pre><code>v_b(theta) = spd_b * (sin(theta), cos(theta))
p_b(theta, t) = v_b(theta) * t
</code></pre>
<p>The task is to find the angle <code>theta</code> such that there is some <code>t</code> where <code>p_e(t) == p_b(theta, t)</code>, if any such <code>theta</code> exists given the values of the various parameters.</p>
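<p>Spelled out per component (using the same variable names as the sympy code below), the system I am trying to solve is:</p>
<pre><code>bspd * sin(theta) * t = esx + evx * t + 0.5 * eax * t ** 2
bspd * cos(theta) * t = esy + evy * t + 0.5 * eay * t ** 2
</code></pre>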
<p>I solved the 0-acceleration version of this using Sympy. Sympy's output seemed logically correct and tested correct in the simulation I'm using. However, when I tried to solve the version with acceleration, I had trouble:</p>
<pre><code>from sympy import *
# Define variables.
#
# Note: we do not define variables for the bullet's starting position, because
# we are going to treat the player ship as a fixed frame of reference.
# Therefore, esx, esy, evx, and evy need to be modified by the controlled
# ship's relative speed and position.
esx, esy, evx, evy, eax, eay = symbols('esx esy evx evy eax eay')
bspd, theta = symbols('bspd theta')
bvx, bvy = bspd * sin(theta), bspd * cos(theta)
t = symbols('t', positive=True)
# Define the x and y positions of the bullet and the enemy.
ex = esx + evx * t + 0.5 * eax * t ** 2
ey = esy + evy * t + 0.5 * eay * t ** 2
bx = bvx * t
by = bvy * t
# Solve for the intersection time in each dimension.
tix = solve(Eq(ex, bx), t)
tiy = solve(Eq(ey, by), t)
print(tix)
print(tiy)
# Set the per-dimension intersection times equal to one another
# and solve for theta.
print(solve(Eq(tix[0], tiy[0]), theta, check=False))
</code></pre>
<p>The output of this program is:</p>
<pre><code>[(bspd*sin(theta) - evx - 1.4142135623731*sqrt(0.5*bspd**2*sin(theta)**2 - bspd*evx*sin(theta) - eax*esx + 0.5*evx**2))/eax, (bspd*sin(theta) - evx + 1.4142135623731*sqrt(0.5*bspd**2*sin(theta)**2 - bspd*evx*sin(theta) - eax*esx + 0.5*evx**2))/eax]
[(bspd*cos(theta) - evy - 1.4142135623731*sqrt(0.5*bspd**2*cos(theta)**2 - bspd*evy*cos(theta) - eay*esy + 0.5*evy**2))/eay, (bspd*cos(theta) - evy + 1.4142135623731*sqrt(0.5*bspd**2*cos(theta)**2 - bspd*evy*cos(theta) - eay*esy + 0.5*evy**2))/eay]
[-oo*I, oo*I]
</code></pre>
<p>When I run this code, the final print outputs that the solution contains infinity times an imaginary number. Now <code>tix</code> and <code>tiy</code> each have two solutions, but I get this result no matter which combination of solutions I use.</p>
<p>I am not a mathematics person, which is largely why I am trying to use sympy to solve this. Is it really the case that adding the acceleration term makes it impossible to solve this problem? Or have I done something wrong in my use of sympy? If I have done something wrong in my use of sympy, please let me know what it is and how to fix it.</p>
|
<python><sympy>
|
2025-04-07 12:33:49
| 2
| 384
|
James Aguilar
|
79,559,870
| 2,043,014
|
Monitoring Actual Bytes Written to Flash After SQLite Insert Operations
|
<p>I am currently working on a project where I need to monitor the actual bytes written to flash storage after performing insert operations in an SQLite database. I've simplified my approach to the following code snippet:</p>
<pre class="lang-py prettyprint-override"><code>def get_sectors_written_from_stat():
with open("/sys/block/sda/stat", "r") as f:
fields = f.read().strip().split()
return int(fields[6]) # 7th field (index 6)
def log_stuff(i, previous_sectors_wrtn):
sectors_wrtn_after = self.get_sectors_written_from_stat()
diff_sectors_wrtn = sectors_wrtn_after - previous_sectors_wrtn
self.log(
f"Sectors written: {sectors_wrtn_after}, Differential: {diff_sectors_wrtn}",
)
def insert_data(self):
data = "x" * self.DATA_LENGTH
self.c.execute("INSERT INTO wear_test (data) VALUES (?)", (data,))
self.conn.commit()
def insert_and_iostat():
sectors_wrtn_before = self.get_sectors_written_from_stat()
for i in range(self.NUM_OF_INSERT_OPERATIONS):
self.insert_data()
self.sqlite_utilities.perform_checkpoint()
log_stuff(i, sectors_wrtn_before)
def run_test():
self.insert_and_iostat()
</code></pre>
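<p>For reference, my working assumption (which may itself be wrong) is that the "sectors written" field in /sys/block/sda/stat is reported in 512-byte units regardless of the device's physical sector size, so I convert it to bytes like this:</p>
<pre class="lang-py prettyprint-override"><code>SECTOR_SIZE = 512  # assumption: the stat file counts sectors in 512-byte units

def sectors_to_bytes(sectors: int) -> int:
    return sectors * SECTOR_SIZE
</code></pre>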
<ol>
<li>Is using the output of <code>iostat</code> before and after the insert operations an ideal way to determine the actual bytes written to flash storage?</li>
<li>What other aspects should I monitor, considering that file system caching may influence the results?</li>
<li>Is there a more straightforward method to ask the SQLite library to estimate the amount of flash data written for the operations performed?</li>
</ol>
<p>I appreciate any insights or suggestions on how to improve my monitoring strategy!</p>
|
<python><linux><database><sqlite><iostat>
|
2025-04-07 12:24:13
| 0
| 671
|
user12345
|
79,559,711
| 1,023,390
|
Matplotlib logit scale tick number formatting
|
<p>When using the <code>log</code> scale with <code>matplotlib</code>, we can set globally with (<a href="https://stackoverflow.com/a/72693296/1023390">see this answer</a>)</p>
<pre><code>import matplotlib.pyplot as plt
plt.rcParams['axes.formatter.min_exponent'] = 3
</code></pre>
<p>so that ticks on logarithmic axes are shown in exponential form only for x<1.e-3 and x>1.e3, while the ones in between are just 0.001, 0.01, 0.1, 1, 10, 100, and 1000.</p>
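<p>For context, this is the log-scale behaviour I am referring to (a minimal sketch; the exact cutoff follows the <code>min_exponent</code> setting above):</p>
<pre><code>import matplotlib.pyplot as plt
plt.rcParams['axes.formatter.min_exponent'] = 3
fig, ax = plt.subplots()
ax.set_xscale('log')
ax.set_xlim(1e-4, 1e4)
plt.show()  # in-range ticks appear as 0.001, 0.01, ..., 1000, with exponent notation outside that range
</code></pre>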
<p>How can I obtain the equivalent behavior with the <a href="https://matplotlib.org/stable/gallery/scales/logit_demo.html" rel="nofollow noreferrer"><code>logit</code></a> scale of <code>matplotlib</code>, such that the labels become 0.001, 0.01, 0.1, 0.5, 0.9, 0.99, 0.999?</p>
|
<python><matplotlib><xticks>
|
2025-04-07 11:07:37
| 1
| 45,824
|
Walter
|
79,559,664
| 3,840,530
|
How to write a string containing binary representation of data to file?
|
<p>I am trying to write a binary string as binary data to a file, but my function seems to have a problem with the int conversion. I am reading a text file and manipulating it, which gives me a string representation of the data (say, binary_string below). I want to write this data to a file as bits. I found the function below, which seems to work, but it fails if the binary_string becomes large.</p>
<p>The error is : <code>OverflowError: int too big to convert</code> which makes sense.</p>
<p>I wish to be able to write any arbitrary length to a file. Can someone tell me how I should fix the code below to write arbitrarily large files? What would be an efficient way to write this?</p>
<pre><code>import io
binary_string = "10101010100010100101101010101010"
binary_number = int(binary_string, 2)
byte_buffer = binary_number.to_bytes(binary_number.bit_length() // 8, byteorder='big')
buffer = io.BytesIO(byte_buffer)
with open("output_file.bin", "wb") as file:
file.write(buffer.getvalue())
</code></pre>
<p>Thanks</p>
|
<python><file><io><binary><byte>
|
2025-04-07 10:46:48
| 0
| 302
|
user3840530
|
79,559,492
| 5,868,293
|
Get analytical equation of RF regressor model
|
<p>I have the following dataset:</p>
<pre><code> X1 X2 X3 y
0 0.548814 0.715189 0.602763 0.264556
1 0.544883 0.423655 0.645894 0.774234
2 0.437587 0.891773 0.963663 0.456150
3 0.383442 0.791725 0.528895 0.568434
4 0.568045 0.925597 0.071036 0.018790
5 0.087129 0.020218 0.832620 0.617635
6 0.778157 0.870012 0.978618 0.612096
7 0.799159 0.461479 0.780529 0.616934
8 0.118274 0.639921 0.143353 0.943748
9 0.944669 0.521848 0.414662 0.681820
</code></pre>
<p>Which is generated using this code:</p>
<pre><code>import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
# Create a sample dataset with 10 observations and 3 X-variables, and a target y-variable
np.random.seed(0)
X = np.random.rand(10, 3)
y = np.random.rand(10)
# Convert to DataFrame for better visualization
df = pd.DataFrame(X, columns=['X1', 'X2', 'X3'])
df['y'] = y
</code></pre>
<p>I can fit a linear regression model using:</p>
<pre><code>model = LinearRegression()
model.fit(X, y)
</code></pre>
<p>and get the estimated coefficients</p>
<pre><code>coefficients = model.coef_
intercept = model.intercept_
print("\nEstimated coefficients:", coefficients)
print("Intercept:", intercept)
</code></pre>
<p>This will return:</p>
<pre><code>Estimated coefficients: [-0.06965376 -0.39155857 0.05822415]
Intercept: 0.8021881697754355
</code></pre>
<p>Then if I have a new observation I can do the prediction using the model I have trained:</p>
<pre><code>new_observation = np.array([[0.5, 0.2, 0.8]])
prediction = model.predict(new_observation)
print("\nPrediction for the new observation:", prediction)
</code></pre>
<p>This will return the value <code>0.73562889</code>.</p>
<p>The predicted value is coming from the analytical form of the model:
<code>0.8021881697754355 - 0.5*0.06965376 -0.39155857*0.2 + 0.05822415*0.8</code></p>
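<p>For completeness, the same arithmetic written as a dot product (assuming the variables from the snippets above are still in scope):</p>
<pre><code>manual_prediction = intercept + new_observation @ coefficients
print(manual_prediction)  # [0.73562889], same as model.predict(new_observation)
</code></pre>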
<p><em>Task</em></p>
<p>I want to get the <strong>analytical form</strong> of a random forest regressor model, instead of a linear regression. How can I get the analytical form of a random forest regressor when the model is trained with the following code:</p>
<pre><code>from sklearn.ensemble import RandomForestRegressor

rf_model = RandomForestRegressor(n_estimators=100, random_state=0)
rf_model.fit(X, y)
</code></pre>
|
<python><machine-learning><scikit-learn><regression><random-forest>
|
2025-04-07 09:27:58
| 0
| 4,512
|
quant
|
79,559,259
| 1,228,765
|
Why does Ray attempt to install ray wheels from ray-wheels.s3-us-west-2.amazonaws.com?
|
<p>When submitting a Ray job using a conda runtime env (<code>runtime_env = {"conda": "environment.yml"}</code>), Ray attempts to install the <code>ray</code> wheel from <code>ray-wheels.s3-us-west-2.amazonaws.com</code> even when I configure pip to use a different Python package index (<code>--index-url</code>). Example <code>environment.yml</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>name: test
channels:
- https://conda.anaconda.org/conda-forge/
- nodefaults
dependencies:
- pandas=2.2.3
- pip
- pip:
- --index-url https://artifactory.company.com/artifactory/api/pypi/pypi-remote/simple/
- ray==2.20.0
</code></pre>
<p>Ray seems to ignore the provided configuration and attempts to install the <code>ray</code> wheel from <code>ray-wheels.s3-us-west-2.amazonaws.com</code>. Is there a configuration setting to change this behavior, or how can <code>ray</code> be installed from a custom Python package index without having to go out to the Internet?</p>
|
<python><conda><pypi><ray>
|
2025-04-07 07:03:10
| 0
| 2,351
|
Martin Studer
|
79,559,164
| 5,959,593
|
When is typing.cast required?
|
<p>Here's the code</p>
<pre class="lang-py prettyprint-override"><code># args: argparse.Namespace
source: str = args.source
...
# subparsers: argparse._SubParsersAction
parser = subparsers.add_parser("sub")  # "sub" is just a placeholder subcommand name
</code></pre>
<p>In the first line of code, <code>: str</code> is enough to tell Pylance that <code>source</code> is a string, although <code>args.source</code> is <code>Any</code>.</p>
<p>However, in the second line, when I add <code>: argparse.ArgumentParser</code>, the type of <code>parser</code> is still <code>Any | argparse.ArgumentParser</code>. I have to manually use <code>cast(argparse.ArgumentParser, ...)</code> to get the desired effect, since <code>add_parser()</code> is typed as returning <code>Any</code>.</p>
<p>Why is that the case? Both are assignments to me: one from a field, one from the return value of a function.</p>
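<p>For reference, the explicit cast I end up writing looks like this (a minimal sketch; <code>"sub"</code> is just a placeholder subcommand name):</p>
<pre class="lang-py prettyprint-override"><code>from typing import cast
import argparse

parser = cast(argparse.ArgumentParser, subparsers.add_parser("sub"))
</code></pre>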
|
<python><python-typing><pyright>
|
2025-04-07 05:55:20
| 1
| 1,100
|
Minh Nghĩa
|
79,558,939
| 11,084,338
|
Using a list of values to select rows from Polars DataFrame
|
<p>I have a Polars DataFrame below:</p>
<pre><code>import polars as pl
df = pl.DataFrame({"a":[1, 2, 3], "b":[4, 3, 2]})
>>> df
a b
i64 i64
1 4
2 3
3 2
</code></pre>
<p>I can subset based on a specific value:</p>
<pre><code>x = df[df["a"] == 3]
>>> x
a b
i64 i64
3 2
</code></pre>
<p>But how can I subset based on a list of values? Something like this:</p>
<pre><code>list_of_values = [1, 3]
y = df[df["a"] in list_of_values]
</code></pre>
<p>To get:</p>
<pre><code> a b
i64 i64
1 4
3 2
</code></pre>
|
<python><dataframe><list><python-polars><polars>
|
2025-04-07 01:38:11
| 1
| 326
|
GH KIM
|
79,558,911
| 11,084,338
|
How to remove a prefix from all column names in a Polars DataFrame?
|
<p>I have a Polars DataFrame like</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.from_repr("""
┌─────┬─────┬─────┬─────┐
│ #a  │ #b  │ #c  │ #d  │
│ --- │ --- │ --- │ --- │
│ i64 │ i64 │ i64 │ i64 │
╞═════╪═════╪═════╪═════╡
│ 1   │ 2   │ 3   │ 4   │
└─────┴─────┴─────┴─────┘
""")
</code></pre>
<p>I want to change the column labels from
<code>['#a', '#b', '#c', '#d']</code></p>
<p>to <code>['a', 'b', 'c', 'd']</code></p>
<pre><code>┌─────┬─────┬─────┬─────┐
│ a   │ b   │ c   │ d   │
│ --- │ --- │ --- │ --- │
│ i64 │ i64 │ i64 │ i64 │
╞═════╪═════╪═════╪═════╡
│ 1   │ 2   │ 3   │ 4   │
└─────┴─────┴─────┴─────┘
</code></pre>
|
<python><dataframe><python-polars>
|
2025-04-07 01:04:25
| 3
| 326
|
GH KIM
|
79,558,863
| 9,951,273
|
Set SwaggerUI Access Token on App Startup
|
<p>I have a FastAPI app below.</p>
<pre><code>from fastapi import Depends, FastAPI, Security
from fastapi.security import HTTPAuthorizationCredentials, HTTPBearer
async def verify(token: HTTPAuthorizationCredentials | None = Depends(HTTPBearer())):
# VERIFY TOKEN
return token
app = FastAPI(
swagger_ui_parameters={"preauthorizeApiKey": {"HTTPBearer": "demo_key_for_testing"}}
)
@app.get("/api/private")
def private(auth_result: str = Security(verify)):
"""A valid access token is required to access this route"""
return auth_result
</code></pre>
<p>Looking at SwaggerUI docs generated from this code, I'm given the following authorization form to set an access token.</p>
<p><a href="https://i.sstatic.net/AJgLjTm8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AJgLjTm8.png" alt="SwaggerUI authorization form" /></a></p>
<p>I'd like to pre-populate this form with an access token.</p>
<p>To do this, Swagger has a <a href="https://swagger.io/docs/open-source-tools/swagger-ui/usage/configuration/" rel="nofollow noreferrer">config parameter</a>, <code>preauthorizeApiKey</code>, which should be configurable through FastAPI's <code>swagger_ui_parameters</code> param. However, there are no docs on implementation details for this config setting.</p>
<p>How can I programmatically set a value in my app's authorization form?</p>
<p>My attempt above does not work as desired.</p>
|
<python><fastapi><swagger-ui>
|
2025-04-06 23:53:15
| 0
| 1,777
|
Matt
|
79,558,818
| 11,462,274
|
Open a popup in Edge browser using subprocess and fixed text in the window bar name which is originally defined by the value of document.title in html
|
<p>To open a popup I use subprocess:</p>
<pre class="lang-python prettyprint-override"><code>import subprocess
EDGE_PATCH = "C:/Program Files (x86)/Microsoft/Edge/Application/msedge.exe"
url = "https://www.google.com/"
subprocess.Popen([EDGE_PATCH, f'--app={url}'])
</code></pre>
<p>But I noticed that, for the website I use, the window name is always <code>Leitor LiveVideo</code> instead of the name of the event, even though the windows show broadcasts of different events. This really bothers me when I want to open one in full screen, because it takes a while to figure out which window I need to maximize:</p>
<p><a href="https://i.sstatic.net/0b6irlmC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0b6irlmC.png" alt="original popup" /></a></p>
<p>I tried to create a local HTML file as a base and load the URL inside an iframe, but the page is all white; the site seems to be blocked from being placed inside an iframe:</p>
<p>HTML:</p>
<pre class="lang-html prettyprint-override"><code><!DOCTYPE html>
<html>
<head>
<title>Placeholder</title>
<style>
body { margin: 0; padding: 0; overflow: hidden; }
iframe { width: 100%; height: 100vh; border: none; }
</style>
</head>
<body>
<iframe id="content" src=""></iframe>
<script>
const urlParams = new URLSearchParams(window.location.search);
const home_name = urlParams.get('home');
const away_name = urlParams.get('away');
const url = urlParams.get('url');
document.title = `${home_name} x ${away_name}`;
document.getElementById('content').src = url;
</script>
</body>
</html>
</code></pre>
<p>Python:</p>
<pre class="lang-none prettyprint-override"><code>import subprocess
import urllib.parse
EDGE_PATCH = "C:/Program Files (x86)/Microsoft/Edge/Application/msedge.exe"
wrapper_path = r"C:\Users\Computador\Desktop\tests\wrapper.html"
home_name = "Team A"
away_name = "Team B"
original_url = "https://google.com/" # url example
encoded_url = urllib.parse.quote(original_url, safe=':/')
url = f"{wrapper_path}?home={home_name}&away={away_name}&url={encoded_url}"
subprocess.Popen([EDGE_PATCH, f'--app={url}'])
</code></pre>
<p>Now I can set the title but it's useless because the page doesn't open:</p>
<p><a href="https://i.sstatic.net/et6UkCvI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/et6UkCvI.png" alt="popup edited" /></a></p>
<p>Is there any way I can set this value in a fixed way when creating the pop-up using <code>subprocess</code> with <code>--app</code>, or any other way to achieve this result without causing the access blocks that happen when trying to place the site in a local <code>html</code> <code>iframe</code> and the like?</p>
<p>I want to open it as I normally do; I just want to change this text, and nothing else, so that the window has a descriptive name.</p>
|
<python><html><subprocess>
|
2025-04-06 22:31:31
| 0
| 2,222
|
Digital Farmer
|
79,558,781
| 6,068,294
|
ipywidgets widgets not launching
|
<p>I'm trying to write a Python 3 program that uses widgets to get input:</p>
<pre><code>import ipywidgets as widgets
from IPython.display import display, clear_output
def button_clicked(b):
with output:
clear_output()
print("Button clicked!")
button = widgets.Button(description="Click Me")
output = widgets.Output()
button.on_click(button_clicked)
display(button, output)
# Event Loop (for standalone Python scripts)
try:
while True:
pass
except KeyboardInterrupt:
with output:
print("Ended")
</code></pre>
<p>but whatever I try, the <code>ipywidgets</code> package just writes "Button(description='Click Me', style=ButtonStyle()) Output()" to the screen and no button is shown.</p>
<p>The code is just an attempt to make a reproducible example.
I'm on a Mac, Python version 3.12.7, running in conda.</p>
|
<python><ipywidgets>
|
2025-04-06 21:49:47
| 0
| 8,176
|
ClimateUnboxed
|
79,558,657
| 7,959,614
|
Apply 1d-mask on numpy 3d-array
|
<p>I have the following 3d-<code>numpy.ndarray</code>:</p>
<pre><code>import numpy as np
X = np.array([
[[0.0, 0.4, 0.6, 0.0, 0.0],
[0.6, 0.0, 0.0, 0.0, 0.0],
[0.4, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.6, 0.0, 1.0, 0.0],
[0.0, 0.0, 0.4, 0.0, 1.0]],
[[0.1, 0.5, 0.4, 0.0, 0.0],
[0.6, 0.0, 0.0, 0.0, 0.0],
[0.2, 0.0, 0.0, 0.0, 0.0],
[0.1, 0.6, 0.0, 1.0, 0.0],
[0.0, 0.0, 0.4, 0.0, 1.0]]
])
</code></pre>
<p>I want a new array where all the rows and columns whose diagonal entry is equal to 1 are dropped.</p>
<pre><code>idx = np.diag(X[0]) == 1 # for my implementation it is sufficient to look at X[0]
</code></pre>
<p>Important to note is that <code>X.shape[1] == X.shape[2]</code>, so I try to use the mask as follows</p>
<pre><code>Y = X[:, ~idx, ~idx]
</code></pre>
<p>The above returns something different from my desired output:</p>
<blockquote>
<pre><code> [[0.0, 0.4, 0.6],
[0.6, 0.0, 0.0],
[0.4, 0.0, 0.0]],
[[0.1, 0.5, 0.4],
[0.6, 0.0, 0.0],
[0.2, 0.0, 0.0]]
</code></pre>
</blockquote>
<p>Please advise.</p>
|
<python><numpy>
|
2025-04-06 19:36:37
| 1
| 406
|
HJA24
|
79,558,410
| 5,722,359
|
How to prevent ruff from formatting arguments of a function into separate lines?
|
<p>I have a function like so:</p>
<pre><code>def get_foo(a: object, b: tuple, c: int,) -> dict:
.....
</code></pre>
<p>When I do <code>$ ruff format myfile.py</code>, my function is changed to</p>
<pre><code>def get_foo(
a: object,
b: tuple,
c: int,
) -> dict:
....
</code></pre>
<p>How do I stop this behaviour?</p>
<p><strong>Update:</strong> @STerliakov I implemented your <a href="https://stackoverflow.com/a/79558653/5722359">solution</a> but received this warning.</p>
<pre><code>$ ruff format test.py
warning: The isort option `isort.split-on-trailing-comma` is incompatible with the formatter `format.skip-magic-trailing-comma=true` option. To avoid unexpected behavior, we recommend either setting `isort.split-on-trailing-comma=false` or `format.skip-magic-trailing-comma=false`.
1 file reformatted
</code></pre>
<p>I can't locate <code>isort.split-on-trailing-comma</code> to set it to false to avoid the conflict. How do I fix this issue?</p>
|
<python><ruff>
|
2025-04-06 15:42:15
| 2
| 8,499
|
Sun Bear
|
79,558,403
| 4,996,797
|
How to change the list[str] annotation so that it accepts list[LiteralString] too?
|
<p>I have a function that takes a list of <code>str</code> as an input</p>
<pre class="lang-py prettyprint-override"><code>def function(names: list[str]) -> None:
for name in names:
print(name)
</code></pre>
<p>If I generate the list using the <code>split()</code> function, I end up with an object of type <code>list[LiteralString]</code>, so mypy marks it as an error:</p>
<pre class="lang-py prettyprint-override"><code>names = "Jรผrgen Klopp".split()
function(names) # Argument of type "list[LiteralString]" cannot be assigned to parameter "names" of type "list[str]" in function "function"
</code></pre>
<p>I would like to know how <code>LiteralString</code> is expected to be used with lists.</p>
<p>I can build myself a type like</p>
<pre class="lang-py prettyprint-override"><code>StrListType = list[str] | list[LiteralString]
</code></pre>
<p>but this workaround is not needed when using objects that are not lists, e.g.:</p>
<pre class="lang-py prettyprint-override"><code>def fn(name: str):
print(name)
names = "Jรผrgen Klopp".split()
given_name: LiteralString = names[0]
fn(given_name) # No problems
</code></pre>
|
<python><python-typing>
|
2025-04-06 15:39:30
| 1
| 408
|
Paweł Wójcik
|
79,558,301
| 1,549,736
|
Why does my Python test run fail when invoked via Tox, despite running just fine when invoked manually?
|
<p>When I invoke my Python package testing in the usual way, via Tox, it's failing:</p>
<pre class="lang-bash prettyprint-override"><code>$ python -I -m tox run -e py310-lin
py310-lin: install_deps> python -I -m pip install pytest pytest-cov pytest-xdist typing_extensions PyAMI/dist/pyibis_ami-7.2.2-py3-none-any.whl
.pkg: install_requires> python -I -m pip install 'setuptools>=77.0'
.pkg: _optional_hooks> python /home/dbanas/.venv/pybert-dev/lib/python3.10/site-packages/pyproject_api/_backend.py True setuptools.build_meta
.pkg: get_requires_for_build_wheel> python /home/dbanas/.venv/pybert-dev/lib/python3.10/site-packages/pyproject_api/_backend.py True setuptools.build_meta
.pkg: build_wheel> python /home/dbanas/.venv/pybert-dev/lib/python3.10/site-packages/pyproject_api/_backend.py True setuptools.build_meta
py310-lin: install_package_deps> python -I -m pip install kiwisolver 'pyibis-ami>=7.2.2' 'pyside6<6.7' 'pyyaml>=6' 'scikit-rf>=0.29' typing_extensions
py310-lin: install_package> python -I -m pip install --force-reinstall --no-deps /home/dbanas/prj/PyBERT/.tox/.tmp/package/1/pipbert-7.2.2-py3-none-any.whl
py310-lin: commands[0]> python -m pytest --basetemp=/home/dbanas/prj/PyBERT/.tox/py310-lin/tmp -vv --cov=pybert --cov-report=html --cov-report=term-missing tests
py310-lin: exit -6 (8.83 seconds) /home/dbanas/prj/PyBERT> python -m pytest --basetemp=/home/dbanas/prj/PyBERT/.tox/py310-lin/tmp -vv --cov=pybert --cov-report=html --cov-report=term-missing tests pid=1351
py310-lin: FAIL code -6 (38.56=setup[29.73]+cmd[8.83] seconds)
evaluation failed :( (38.62 seconds)
</code></pre>
<p>However, I'm able to run it manually, using the same Tox test environment, just fine:</p>
<pre class="lang-bash prettyprint-override"><code>$ . .tox/py310-lin/bin/activate
(py310-lin)
dbanas@Dell-XPS-15:~/prj/PyBERT
$ python -m pytest --basetemp=/home/dbanas/prj/PyBERT/.tox/py310-lin/tmp -vv --cov=pybert --cov-report=html --cov-report=term-missing tests
=============================================================================== test session starts ================================================================================
platform linux -- Python 3.10.12, pytest-8.3.5, pluggy-1.5.0 -- /home/dbanas/prj/PyBERT/.tox/py310-lin/bin/python
cachedir: .pytest_cache
rootdir: /home/dbanas/prj/PyBERT
configfile: pyproject.toml
plugins: xdist-3.6.1, cov-6.1.1
collected 63 items
tests/test_basic.py::TestBasic::test_status PASSED [ 1%]
tests/test_basic.py::TestBasic::test_perf PASSED [ 3%]
tests/test_basic.py::TestBasic::test_ber PASSED [ 4%]
tests/test_basic.py::TestBasic::test_dly PASSED [ 6%]
tests/test_basic.py::TestBasic::test_isi PASSED [ 7%]
tests/test_basic.py::TestBasic::test_dcd PASSED [ 9%]
tests/test_basic.py::TestBasic::test_pj PASSED [ 11%]
{Many more passed tests omitted.}
=================================================================== 63 passed, 28 warnings in 102.85s (0:01:42) ====================================================================
</code></pre>
<p><strong>So, why is this failing under Tox control?</strong></p>
|
<python><testing><tox>
|
2025-04-06 14:00:15
| 0
| 2,018
|
David Banas
|
79,558,297
| 10,024,860
|
Mock asyncio.sleep to be faster in unittest
|
<p>I want to mock <code>asyncio.sleep</code> to shorten the delay, e.g. by a factor of 10, to speed up my tests while also trying to surface any possible race conditions or other bugs as a crude sanity check. However, since <code>sut.py</code> and <code>test.py</code> both use <code>import asyncio</code>, I cannot figure out how to patch sleep only in <code>sut.py</code> and not in <code>test.py</code>. For example, if I run the following:</p>
<pre><code>import asyncio
import pytest
from unittest.mock import Mock, AsyncMock, patch
@pytest.mark.asyncio
@patch("sut.asyncio.sleep")
async def test(mock_asyncio_sleep):
print(asyncio.sleep)
</code></pre>
<p>It prints an AsyncMock instance rather than the actual sleep function.</p>
<p>The issue is that I then cannot simulate the sleep in <code>test.py</code>:</p>
<pre><code>@pytest.mark.asyncio
@patch("sut.asyncio.sleep")
async def test(mock_asyncio_sleep):
async def fast_sleep(t):
await asyncio.sleep(t / 10)
mock_asyncio_sleep.side_effect = fast_sleep
</code></pre>
|
<python><unit-testing><pytest><python-asyncio><python-unittest>
|
2025-04-06 13:58:43
| 1
| 491
|
Joe C.
|
79,558,159
| 633,001
|
Can't exit program - win32gui PumpMessages
|
<p>I have code that needs to detect when a new media device is attached / detached.</p>
<p>I have copied some code based on pywin32 to do so:</p>
<pre><code>import win32api, win32con, win32gui
from ctypes import *
from threading import Thread
#
# Device change events (WM_DEVICECHANGE wParam)
#
DBT_DEVICEARRIVAL = 0x8000
DBT_DEVICEQUERYREMOVE = 0x8001
DBT_DEVICEQUERYREMOVEFAILED = 0x8002
DBT_DEVICEMOVEPENDING = 0x8003
DBT_DEVICEREMOVECOMPLETE = 0x8004
DBT_DEVICETYPESSPECIFIC = 0x8005
DBT_CONFIGCHANGED = 0x0018
#
# type of device in DEV_BROADCAST_HDR
#
DBT_DEVTYP_OEM = 0x00000000
DBT_DEVTYP_DEVNODE = 0x00000001
DBT_DEVTYP_VOLUME = 0x00000002
DBT_DEVTYPE_PORT = 0x00000003
DBT_DEVTYPE_NET = 0x00000004
#
# media types in DBT_DEVTYP_VOLUME
#
DBTF_MEDIA = 0x0001
DBTF_NET = 0x0002
WORD = c_ushort
DWORD = c_ulong
class DEV_BROADCAST_HDR(Structure):
_fields_ = [
("dbch_size", DWORD),
("dbch_devicetype", DWORD),
("dbch_reserved", DWORD)
]
class DEV_BROADCAST_VOLUME(Structure):
_fields_ = [
("dbcv_size", DWORD),
("dbcv_devicetype", DWORD),
("dbcv_reserved", DWORD),
("dbcv_unitmask", DWORD),
("dbcv_flags", WORD)
]
def drive_from_mask(mask):
n_drive = 0
while 1:
if (mask & (2 ** n_drive)):
return n_drive
else:
n_drive += 1
class Notification:
def __init__(self):
message_map = {
win32con.WM_DEVICECHANGE: self.onDeviceChange
}
wc = win32gui.WNDCLASS()
hinst = wc.hInstance = win32api.GetModuleHandle(None)
wc.lpszClassName = "DeviceChangeDemo"
wc.style = win32con.CS_VREDRAW | win32con.CS_HREDRAW
wc.hCursor = win32gui.LoadCursor(0, win32con.IDC_ARROW)
wc.hbrBackground = win32con.COLOR_WINDOW
wc.lpfnWndProc = message_map
classAtom = win32gui.RegisterClass(wc)
style = win32con.WS_OVERLAPPED | win32con.WS_SYSMENU
self.hwnd = win32gui.CreateWindow(
classAtom,
"Device Change Demo",
style,
0, 0,
win32con.CW_USEDEFAULT, win32con.CW_USEDEFAULT,
0, 0,
hinst, None
)
def register_callbacks(self, device_arrival, device_removal):
self.dvc_arr = device_arrival
self.dvc_rem = device_removal
def onDeviceChange(self, hwnd, msg, wparam, lparam):
#
# WM_DEVICECHANGE:
# wParam - type of change: arrival, removal etc.
# lParam - what's changed?
# if it's a volume then...
# lParam - what's changed more exactly
#
dev_broadcast_hdr = DEV_BROADCAST_HDR.from_address(lparam)
if wparam == DBT_DEVICEARRIVAL:
if dev_broadcast_hdr.dbch_devicetype == DBT_DEVTYP_VOLUME:
dev_broadcast_volume = DEV_BROADCAST_VOLUME.from_address(lparam)
drive_letter = drive_from_mask(dev_broadcast_volume.dbcv_unitmask)
self.dvc_arr(chr(ord("A") + drive_letter))
if wparam == DBT_DEVICEREMOVECOMPLETE:
self.dvc_rem()
return 1
def listener_thread(clb_arr, clb_rem):
w = Notification()
w.register_callbacks(clb_arr, clb_rem)
win32gui.PumpMessages()
def create_listener(clb_arr, clb_rem):
thread = Thread(target = listener_thread, args=(clb_arr, clb_rem))
thread.start()
return thread
if __name__ == '__main__':
print("Starting listening")
w = Notification()
win32gui.PumpMessages()
</code></pre>
<p>Basically I spawn this Notification() class in its own thread, and register some callbacks for when a device is connected / disconnected. But I cannot get out of the <code>win32gui.PumpMessages</code>.
Code to exit this:</p>
<pre><code>t = create_listener(callback_arrival, callback_removal)
print("Trying to shut down")
handle_window = win32gui.FindWindow(None, "Device Change Demo")
win32gui.SendMessage(handle_window, win32con.WM_QUIT, 0, 0)
print("Sent WM_QUIT message to " + str(handle_window))
print("Sent message")
t.join()
print("Thread joined")
print("Destroyed windows")
print("Exiting...")
</code></pre>
<p>This results in the output:</p>
<pre><code>my handle is 2294256
Trying to shut down
Sent WM_QUIT message to 2294256
Sent message
</code></pre>
<p>The thread is never joined.
My callbacks work, but PumpMessages() just never stops blocking. The handle I am sending the message to is the same as the handle from the window creation. Am I doing the sending wrong? Should I be sending it to a different handle?</p>
<p>Code reference: <a href="https://timgolden.me.uk/pywin32-docs/win32gui__PumpMessages_meth.html" rel="nofollow noreferrer">https://timgolden.me.uk/pywin32-docs/win32gui__PumpMessages_meth.html</a></p>
|
<python><pywin32>
|
2025-04-06 11:52:24
| 0
| 3,519
|
SinisterMJ
|
79,558,133
| 21,540,734
|
How to stop system tray Icons from grouping together
|
<p>I have a couple different projects that I am using pystray for, and these icons are grouping together when moving them in the system tray.</p>
<p>I've found that this is the case for anything using the WinAPI, whether it be pywin32 or something coded by hand using ctypes to implement the API, like pystray. Is there any way to stop this from happening?</p>
<p><a href="https://i.sstatic.net/26HqooXM.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/26HqooXM.gif" alt="An example of my problem" /></a></p>
<p>I created my own system tray icon with PyWin32 with the same result, and I've also tried running it as a <code>multiprocessing.Process</code> also without any luck.</p>
|
<python><winapi><system-tray><pystray>
|
2025-04-06 11:29:30
| 0
| 425
|
phpjunkie
|
79,558,107
| 4,483,861
|
matplotlib.pyplot slow over SSH in terminal (fast in VS Code Remote-SSH)
|
<p>I have this script on a server:</p>
<pre><code>import matplotlib.pyplot as plt
import matplotlib as mpl
import numpy as np
from time import time
bknd = mpl.get_backend()
x = np.random.random(100)*10
y = np.sin(x)
t0 = time()
plt.plot(x,y,'.')
t1 = time()
print(f"{bknd} {t1-t0:.1f} seconds")
plt.show()
</code></pre>
<p>If I run it in the terminal of VS Code using Remote-SSH I get</p>
<pre><code>TkAgg 1.7 seconds
</code></pre>
<p>while using SSH in either iTerm or the regular Mac Terminal I get</p>
<pre><code>TkAgg 7.3 seconds
</code></pre>
<p>So the time to create a figure is much faster in VS Code, while the backend is the same. I have gotten similar results from a Linux machine before, so it shouldn't be Mac related.</p>
<p>Using other graphical programs like <code>feh</code> for images, or <code>mupdf</code> for PDFs, there is no apparent speed difference.</p>
<p>Another thing I noticed: if I set my <code>.ssh/config</code> file to use <code>controlMaster auto</code> (sharing the SSH connection) and initialize the connection by first <code>ssh</code>ing with iTerm and then connecting with VS Code, the timing in VS Code becomes the same as in iTerm (around 7 seconds). If I create the connection by first using VS Code, the timings remain the same as with separate connections. So VS Code is doing something smart when setting up the connection that only affects its own terminals.</p>
<p>How does VS Code get faster plots, and can it be replicated in other terminals?</p>
|
<python><matplotlib><visual-studio-code><terminal><x11-forwarding>
|
2025-04-06 11:02:18
| 0
| 2,649
|
Jonatan Öström
|
79,558,025
| 11,062,613
|
Efficient and readable way to get N-dimensional index array in C-order using NumPy
|
<p>When I need to generate an N-dimensional index array in C-order, I've tried a few different NumPy approaches.</p>
<p>The fastest for larger arrays but less readable:</p>
<pre><code>np.stack(np.meshgrid(*[np.arange(i, dtype=dtype) for i in sizes], indexing="ij"), axis=-1).reshape(-1, len(sizes))
</code></pre>
<p>More readable with good performance:</p>
<pre><code>np.ascontiguousarray(np.indices(sizes, dtype=dtype).reshape(len(sizes), -1).T)
</code></pre>
<p>Here I'm not sure if the ascontiguousarray copy is actually necessary, or if there's a better way to make sure the result is in C-order without forcing a copy.</p>
<p>Most readable, but by far the slowest:</p>
<pre><code>np.vstack([*np.ndindex(sizes)], dtype=dtype)
</code></pre>
<p>The iterator conversion is quite slow for larger arrays.</p>
<p>Is there a more straightforward and readable built-in NumPy way to achieve this that matches the performance of np.meshgrid or np.indices?
If not, can either the meshgrid or indices approaches be optimized to avoid unnecessary memory copies (like ascontiguousarray) while still making sure the array is C-contiguous?</p>
<p>Example:</p>
<pre><code>sizes = (3, 1, 2)
idx = np.ascontiguousarray(np.indices(sizes).reshape(len(sizes), -1).T)
print(idx)
print(f"C_CONTIGUOUS: {idx.flags['C_CONTIGUOUS']}")
# [[0 0 0]
# [0 0 1]
# [1 0 0]
# [1 0 1]
# [2 0 0]
# [2 0 1]]
# C_CONTIGUOUS: True
</code></pre>
|
<python><arrays><numpy><numba>
|
2025-04-06 09:31:27
| 1
| 423
|
Olibarer
|
79,557,952
| 2,371,765
|
pyproject.toml related error while installing spacy library
|
<p>I get the following error while installing the spacy library in Python 3.13.0. The pip version is 25.0.1. Can someone help? Thank you.</p>
<p>(I made sure to install numpy, scipy, preshed, Pyrebase4 based on responses to similar questions, and upgraded setuptools.)</p>
<pre><code>Preparing metadata (pyproject.toml): started
Preparing metadata (pyproject.toml): finished with status 'error'
error: subprocess-exited-with-error
Preparing metadata (pyproject.toml) did not run successfully.
exit code: 1
[21 lines of output]
+ C:\Users\xxx\AppData\Local\Programs\Python\Python313\python.exe C:\Users\xxx\AppData\Local\Temp\pip-install-53nq9zf7\numpy_789b6c3f53f44d9090d843a9d70df4b0\vendored-meson\meson\meson.py setup C:\Users\xxx\AppData\Local\Temp\pip-install-53nq9zf7\numpy_789b6c3f53f44d9090d843a9d70df4b0 C:\Users\xxx\AppData\Local\Temp\pip-install-53nq9zf7\numpy_789b6c3f53f44d9090d843a9d70df4b0\.mesonpy-h0rens75 -Dbuildtype=release -Db_ndebug=if-release -Db_vscrt=md --native-file=C:\Users\xxx\AppData\Local\Temp\pip-install-53nq9zf7\numpy_789b6c3f53f44d9090d843a9d70df4b0\.mesonpy-h0rens75\meson-python-native-file.ini
The Meson build system
Version: 1.4.99
Source dir: C:\Users\xxx\AppData\Local\Temp\pip-install-53nq9zf7\numpy_789b6c3f53f44d9090d843a9d70df4b0
Build dir: C:\Users\xxx\AppData\Local\Temp\pip-install-53nq9zf7\numpy_789b6c3f53f44d9090d843a9d70df4b0\.mesonpy-h0rens75
Build type: native build
Project name: NumPy
Project version: 2.0.2
WARNING: Failed to activate VS environment: Could not find C:\Program Files (x86)\Microsoft Visual Studio\Installer\vswhere.exe
..\meson.build:1:0: ERROR: Unknown compiler(s): [['icl'], ['cl'], ['cc'], ['gcc'], ['clang'], ['clang-cl'], ['pgcc']]
The following exception(s) were encountered:
Running `icl ""` gave "[WinError 2] The system cannot find the file specified"
Running `cl /?` gave "[WinError 2] The system cannot find the file specified"
Running `cc --version` gave "[WinError 2] The system cannot find the file specified"
Running `gcc --version` gave "[WinError 2] The system cannot find the file specified"
Running `clang --version` gave "[WinError 2] The system cannot find the file specified"
Running `clang-cl /?` gave "[WinError 2] The system cannot find the file specified"
Running `pgcc --version` gave "[WinError 2] The system cannot find the file specified"
</code></pre>
<p>Please see the latest update in the comments section below. For now, I am also copying here the comment I made below.</p>
<p>I referred to stackoverflow.com/questions/77666734/… to address the above issue. But now I see another error down the line.</p>
<pre><code>spacy/matcher/levenshtein.c(4514): error C2198: 'int _PyLong_AsByteArray(PyLongObject *,unsigned char *,size_t,int,int,int)': too few arguments for call
</code></pre>
|
<python><spacy>
|
2025-04-06 08:13:01
| 1
| 458
|
user17144
|
79,557,819
| 16,383,578
|
How to efficiently calculate the fraction (valid UTF8 byte sequence of length N)/(total N-byte sequences)?
|
<p>This will be a long post. It has absolutely nothing to do with homework; I am just curious. This won't have immediate practical benefits, but it is like pursuing pure science: you never know what you will get.</p>
<p>I am trying to calculate the value of the total number of valid UTF8 sequences of length N divided by the total number of N-byte sequences. I want an exact, fully reduced fraction (which I represent as the integer pair <code>(numerator, denominator)</code>).</p>
<p>The total number of N-byte sequences is easy to calculate: it is just 256<sup>N</sup>, or <code>1<<8*N</code> in Python. The number of valid UTF8 sequences is trickier, but I know how to calculate it.</p>
<p>First, a UTF8 character can be encoded as a byte sequence of length <code>(1, 2, 3, 4)</code>; the bit patterns of UTF8 encoding are the following:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>Code_Range</th>
<th>Byte_Length</th>
<th>Bit_Pattern</th>
<th>Data_Bits</th>
</tr>
</thead>
<tbody>
<tr>
<td>U+0000..007F</td>
<td>1 byte</td>
<td>0xxxxxxx</td>
<td>7 bits</td>
</tr>
<tr>
<td>U+0080..07FF</td>
<td>2 bytes</td>
<td>110xxxxx 10xxxxxx</td>
<td>11 bits</td>
</tr>
<tr>
<td>U+0800..FFFF</td>
<td>3 bytes</td>
<td>1110xxxx 10xxxxxx 10xxxxxx</td>
<td>16 bits</td>
</tr>
<tr>
<td>U+10000..10FFFF</td>
<td>4 bytes</td>
<td>11110xxx 10xxxxxx 10xxxxxx 10xxxxxx</td>
<td>21 bits</td>
</tr>
</tbody>
</table></div>
<p>Now UTF8 encoding doesn't use code points in <code>range(0xd800, 0xe000)</code>; the following will raise an exception in Python:</p>
<pre><code>In [530]: chr(0xd800).encode()
---------------------------------------------------------------------------
UnicodeEncodeError Traceback (most recent call last)
Cell In[530], line 1
----> 1 chr(0xd800).encode()
UnicodeEncodeError: 'utf-8' codec can't encode character '\ud800' in position 0: surrogates not allowed
</code></pre>
<p>These code points are used as surrogate pairs in UTF16 and are invalid in UTF8. There are other reserved characters in Unicode, but those are still valid UTF8:</p>
<pre><code>In [531]: chr(0xe000).encode()
Out[531]: b'\xee\x80\x80'
</code></pre>
<p>Now for a given byte sequence of length N, the chance of it being a valid N-byte UTF8 encoding of a single character is:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>First_Byte_Condition</th>
<th>Byte_Length</th>
<th>Data_Bits</th>
<th>Bit_Chunks</th>
<th>Valid_Range</th>
<th>Total</th>
<th>Proportion</th>
</tr>
</thead>
<tbody>
<tr>
<td>(fb & 0x80) == 0x00</td>
<td>1 byte</td>
<td>7</td>
<td>7,</td>
<td>0x00..0x7f</td>
<td>2<sup>7</sup> = 128</td>
<td>128/2<sup>8</sup> = 1/2</td>
</tr>
<tr>
<td>(fb & 0xe0) == 0xc0</td>
<td>2 bytes</td>
<td>11</td>
<td>5, 6</td>
<td>0xc0..0xd7</td>
<td>24*2<sup>6</sup> = 1536</td>
<td>1536/2<sup>16</sup> = 3/128</td>
</tr>
<tr>
<td>(fb & 0xf0) == 0xe0</td>
<td>3 bytes</td>
<td>16</td>
<td>4, 6, 6</td>
<td>0xe0..0xef</td>
<td>2<sup>16</sup> = 65536</td>
<td>65536/2<sup>24</sup> = 1/256</td>
</tr>
<tr>
<td>(fb & 0xf8) == 0xf0</td>
<td>4 bytes</td>
<td>21</td>
<td>3, 6, 6, 6</td>
<td>0xf0..0xf4</td>
<td>0x110000 - 0x10000 = 1048576</td>
<td>1048576/2<sup>32</sup> = 1/4096</td>
</tr>
</tbody>
</table></div>
<p>I have written a UTF8 encoder and decoder as a programming challenge, and I reimplemented it as a UTF8 sequence validator. Using absolute brute force, I was able to find the value of (valid UTF8 of length N)/(byte sequences of length N) for N = 1, 2, 3:</p>
<pre><code>from itertools import product
# Maps each allowed first byte to (initial code bits, number of continuation bytes).
UTF8_PATTERN = {
    **{i: (i, 0) for i in range(0x00, 0x80)},          # 0xxxxxxx: 1-byte characters
    **{i: (i - 0xC0, 1) for i in range(0xC0, 0xD8)},   # 110xxxxx: 2-byte lead bytes (0xd8-0xdf deliberately excluded)
    **{i: (i - 0xE0, 2) for i in range(0xE0, 0xF0)},   # 1110xxxx: 3-byte lead bytes
    **{i: (i - 0xF0, 3) for i in range(0xF0, 0xF5)},   # 11110xxx: 4-byte lead bytes
}
class UTF8_Validator:
def __init__(self, data: bytes) -> None:
self.data = memoryview(data)
self.length = len(data)
self.pos = 0
    def _validate_char(self, pos: int) -> bool:
data = self.data
if not (pattern := UTF8_PATTERN.get(data[pos])):
return False
code = pattern[0]
length = self.length
for _ in range(pattern[1]):
pos += 1
if pos == length or ((byte := data[pos]) & 0xC0) != 0x80:
return False
code = (code << 6) | (byte & 0x3F)
if code >= 0x110000:
return False
self.pos = pos + 1
return True
def validate(self) -> bool:
length = self.length
while self.pos < length:
if not self._validate_char(self.pos):
return False
return True
def get_valid_UTF8_proportion(length: int) -> tuple[int, int]:
count = sum(
1
for data in product(range(256), repeat=length)
if UTF8_Validator(bytes(data)).validate()
)
trailing = (count & -count).bit_length() - 1
return count >> trailing, 1 << 8 * length - trailing
</code></pre>
<pre><code>In [527]: get_valid_UTF8_proportion(1)
Out[527]: (1, 2)
In [528]: get_valid_UTF8_proportion(2)
Out[528]: (35, 128)
In [529]: get_valid_UTF8_proportion(3)
Out[529]: (39, 256)
</code></pre>
<p>For the case N = 3, it takes a very long time.</p>
<p>But that is okay: I have implemented an optimized solution using dynamic programming that lets me calculate the result for N in the range of thousands, though it is still using brute force at its core.</p>
<p>First, an N-byte sequence is a valid UTF8 sequence exactly when it is a concatenation of valid UTF8 characters. A single Unicode character can be encoded as either 1, 2, 3 or 4 bytes, so to solve the problem we first need to find all the ways the number N can be represented as an ordered sum of the numbers 1 to 4 (strictly a composition, though I call it an integer partition below).</p>
<p>Integer partition:</p>
<pre><code>def partitions_quad(number: int) -> list[tuple[int, ...]]:
result = []
stack = [(number, ())]
while stack:
remaining, path = stack.pop()
if not remaining:
result.append(path)
else:
stack.extend(
(remaining - i, path + (i,)) for i in range(min(remaining, 4), 0, -1)
)
return result
</code></pre>
<p>Example output:</p>
<pre><code>In [557]: partitions_quad(6)
Out[557]:
[(1, 1, 1, 1, 1, 1),
(1, 1, 1, 1, 2),
(1, 1, 1, 2, 1),
(1, 1, 1, 3),
(1, 1, 2, 1, 1),
(1, 1, 2, 2),
(1, 1, 3, 1),
(1, 1, 4),
(1, 2, 1, 1, 1),
(1, 2, 1, 2),
(1, 2, 2, 1),
(1, 2, 3),
(1, 3, 1, 1),
(1, 3, 2),
(1, 4, 1),
(2, 1, 1, 1, 1),
(2, 1, 1, 2),
(2, 1, 2, 1),
(2, 1, 3),
(2, 2, 1, 1),
(2, 2, 2),
(2, 3, 1),
(2, 4),
(3, 1, 1, 1),
(3, 1, 2),
(3, 2, 1),
(3, 3),
(4, 1, 1),
(4, 2)]
</code></pre>
<p>I call a tuple like <code>(3, 2, 1)</code> a run, and the probability of a random 6-byte sequence being a valid UTF8 sequence is the sum of the probabilities of all these runs. To get a run's probability, we first need to replace each number in the run with its corresponding probability from the list <code>[(1, 2), (3, 128), (1, 256), (1, 4096)]</code>. I have guessed, and experimentally confirmed, that the probability of a run is the product of its sub-probabilities.</p>
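<p>For example, the run <code>(3, 2, 1)</code> contributes the following probability (using the proportions from the table above):</p>
<pre><code>from fractions import Fraction
p_run = Fraction(1, 256) * Fraction(3, 128) * Fraction(1, 2)
print(p_run)  # 3/65536
</code></pre>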
<p>Using dynamic programming, bit-hacking, and some basic arithmetic I was able to implement the completely optimized bruteforce solution:</p>
<pre><code>VALID_UTF8 = [(1, 2), (3, 128), (1, 256), (1, 4096)]
def reduced_add(num1: int, den1: int, num2: int, den2: int) -> tuple[int, int]:
if den1 > den2:
num1, den1, num2, den2 = num2, den2, num1, den1
num = (num1 << den2.bit_length() - den1.bit_length()) + num2
return num, den2
def valid_UTF8_chance(length: int) -> tuple[int, int]:
stack = [(1, 1)]
for i in range(1, length + 1):
total_num, total_den = 0, 1
for char_len in range(1, min(i + 1, 5)):
last_num, last_den = stack[-char_len]
run_num, run_den = VALID_UTF8[char_len - 1]
total_num, total_den = reduced_add(
total_num, total_den, last_num * run_num, last_den * run_den
)
stack.append((total_num, total_den))
stack = stack[-4:]
num, den = stack[-1]
shift = min((num & -num).bit_length() - 1, (den & -den).bit_length() - 1)
return num >> shift, den >> shift
</code></pre>
<pre><code>In [561]: [valid_UTF8_chance(i) for i in range(1, 26)]
Out[561]:
[(1, 2),
(35, 128),
(39, 256),
(1389, 16384),
(1545, 32768),
(54995, 2097152),
(61175, 4194304),
(2177581, 268435456),
(2422281, 536870912),
(86223315, 34359738368),
(95912439, 68719476736),
(3414091309, 4398046511104),
(3797741065, 8796093022208),
(135184079315, 562949953421312),
(150375043575, 1125899906842624),
(5352737711661, 72057594037927936),
(5954237885961, 144115188075855872),
(211946563197395, 9223372036854775808),
(235763514741239, 18446744073709551616),
(8392218724509229, 1180591620717411303424),
(9335272783474185, 2361183241434822606848),
(332297603969210835, 151115727451828646838272),
(369638695103107575, 302231454903657293676544),
(13157628659176285741, 19342813113834066795298816),
(14636183439588716041, 38685626227668133590597632)]
</code></pre>
<p>And it uses very little memory and it is reasonably fast:</p>
<pre><code>In [562]: %timeit valid_UTF8_chance(16)
35.8 ฮผs ยฑ 994 ns per loop (mean ยฑ std. dev. of 7 runs, 10,000 loops each)
In [563]: %timeit valid_UTF8_chance(32)
79.5 ฮผs ยฑ 652 ns per loop (mean ยฑ std. dev. of 7 runs, 10,000 loops each)
In [564]: %timeit valid_UTF8_chance(64)
175 ฮผs ยฑ 1.08 ฮผs per loop (mean ยฑ std. dev. of 7 runs, 10,000 loops each)
In [565]: %timeit valid_UTF8_chance(128)
371 ฮผs ยฑ 7.84 ฮผs per loop (mean ยฑ std. dev. of 7 runs, 1,000 loops each)
In [566]: %timeit valid_UTF8_chance(256)
818 ฮผs ยฑ 19.8 ฮผs per loop (mean ยฑ std. dev. of 7 runs, 1,000 loops each)
In [567]: %timeit valid_UTF8_chance(512)
1.73 ms ยฑ 25.9 ฮผs per loop (mean ยฑ std. dev. of 7 runs, 1,000 loops each)
In [568]: %timeit valid_UTF8_chance(1024)
3.96 ms ยฑ 102 ฮผs per loop (mean ยฑ std. dev. of 7 runs, 100 loops each)
</code></pre>
<p>However, it is still inefficient, because it is essentially doing these calculations, and there are lots of repeating patterns in them:</p>
<pre><code>def valid_UTF8_chance_expansion(length: int) -> str:
segments = partitions_quad(length)
return " +\n".join(
" * ".join(("{0}/{1}".format(*VALID_UTF8[s - 1]) for s in segment))
for segment in segments
)
print(",\n\n".join(valid_UTF8_chance_expansion(i) for i in range(1, 7)))
</code></pre>
<p>But my math is very rusty and I can't devise a better algorithm. Is there a better way?</p>
<hr />
<p>Okay, so my understanding of the UTF8 encoding was wrong, and consequently so was my implementation of UTF8 encoding and decoding, hence the wrong numbers above.</p>
<p>There are 0x80 - 0x00 = 128 one-byte UTF8 characters.</p>
<p>There are 0x800 - 0x80 = 1920 two-byte UTF8 characters. 512 of them have their first byte in <code>range(0xd8, 0xe0)</code>, which are also the first bytes used by surrogate pairs in UTF16. I excluded them because I thought every two-byte sequence whose first byte is in that range must be part of a surrogate pair; I was wrong.</p>
<p>There are 0x10000 - 0x800 = 63488 Unicode code points that correspond to a three-byte sequence, but this range contains <code>range(0xd800, 0xe000)</code>, which are the surrogate code points. Here we should exclude them: subtracting 2048 from 63488 gives 61440.</p>
<p>And finally, there are indeed 0x110000 - 0x10000 = 1048576 four-byte UTF8 characters.</p>
<hr />
<p>I feel I need to add this. My original function <code>valid_UTF8_chance</code> uses the correct method, but the initial values were wrong. Using <code>VALID_UTF8 = [(1, 2), (15, 512), (15, 4096), (1, 4096)]</code> I got the correct numbers. I have re-implemented my UTF8 encoding and decoding functions and used <code>itertools.product(range(256), repeat=length)</code> to check all byte sequences of a given length; using absolute brute force and running the code on PyPy3, I was able to verify the results for lengths 1, 2, 3, 4 in under 2 hours.</p>
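<p>As a quick sanity check of the corrected constants (this is my own counting, so treat it as an assumption): a valid 2-byte sequence is either two 1-byte (ASCII) characters or a single 2-byte character, and both ways of computing the proportion agree:</p>
<pre><code>from fractions import Fraction
two_ascii = 128 * 128   # both bytes are 1-byte characters
one_char = 0x800 - 0x80 # a single 2-byte UTF8 character, 1920 of them
print(Fraction(two_ascii + one_char, 256 ** 2))  # 143/512
print(Fraction(1, 2) ** 2 + Fraction(15, 512))   # 143/512
</code></pre>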
<p>I am currently working on implementing a brute-force check in C++ with threading and asynchronous functions to verify the numbers for the length = 5 case; I think I need to utilize the GPU for this.</p>
|
<python><algorithm><math><utf-8><combinatorics>
|
2025-04-06 05:22:16
| 1
| 3,930
|
Ξένη Γήινος
|
79,557,769
| 3,057,743
|
Wagtail default template field in_preview_panel is fine in dev but fails in production
|
<p>In the default template from Wagtail even mentioned online <a href="https://docs.wagtail.org/en/stable/tutorial/style_your_site.html" rel="nofollow noreferrer">here</a>, there is a part:</p>
<pre><code>{# Force all links in the live preview panel to be opened in a new tab #}
{% if request.in_preview_panel %}
<base target="_blank">
{% endif %}
</code></pre>
<p>I extend this base template in my other templates.</p>
<p>This is fine in dev, but when running the website in production, I face with this error:</p>
<pre><code>Exception while resolving variable 'in_preview_panel' in template 'home/home_page.html'.
Traceback (most recent call last):
File "/home/myusername/.local/lib/python3.12/site-packages/django/template/base.py", line 880, in _resolve_lookup
current = current[bit]
~~~~~~~^^^^^
TypeError: 'WSGIRequest' object is not subscriptable
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/myusername/.local/lib/python3.12/site-packages/django/template/base.py", line 890, in _resolve_lookup
current = getattr(current, bit)
^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'WSGIRequest' object has no attribute 'in_preview_panel'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/myusername/.local/lib/python3.12/site-packages/django/template/base.py", line 896, in _resolve_lookup
current = current[int(bit)]
^^^^^^^^
ValueError: invalid literal for int() with base 10: 'in_preview_panel'
During handling of the above exception, another exception occurred:
...
</code></pre>
<p>I have no clue what <code>in_preview_panel</code> is or why it behaves differently in dev and prod.</p>
<p>If I remove it, this particular error is resolved. I still have other problems in my code, but I do not want to remove this blindly or risk impacting the Wagtail admin. Instead, I want to understand what is happening.</p>
|
<python><django><wagtail>
|
2025-04-06 03:37:26
| 1
| 1,404
|
barej
|
79,557,694
| 7,822,387
|
saving statsmodel to adls blob storage
|
<p>I currently have a model fitted using the statsmodels OLS formula API, and I am trying to save this model to ADLS blob storage. '/mnt/outputs/' is a mount point I have created, and I am able to read and write other files in this directory.</p>
<pre><code>import statsmodels.formula.api as smf
fit = smf.ols(formula=f"Pressure ~ {cat_vars_int} + Speed + dose_time:Speed + Speed:log_curr_speed_time", data=df_train).fit()
path = f'/mnt/outputs/Models/20240406_M2.pickle'
fit.save(path)
</code></pre>
<p>However, I am getting this error when saving. I am trying to write a new file, not read an existing one, so I am not sure why I am getting this error. Any help would be great, thanks!</p>
<pre><code>FileNotFoundError: [Errno 2] No such file or directory: '/mnt/outputs/Models/20240406_M2.pickle'
</code></pre>
|
<python><azure><statsmodels><azure-data-lake-gen2>
|
2025-04-06 01:02:55
| 1
| 311
|
J. Doe
|
79,557,602
| 2,241,653
|
Python Azure Function doesn't show functions after importing packages
|
<p>I have an Azure Function based on Python 3.12 (same issue when I downgrade to Python 3.11). It worked fine until I imported <code>azure.identity</code> and <code>azure.keyvault.secrets</code>. Since I added them, the functions are no longer shown in my Azure Function. When I remove them, the functions come back. Loading the packages from <code>requirements.txt</code> is fine; the issue only happens when I add</p>
<pre><code>from azure.keyvault.secrets import SecretClient
from azure.identity import DefaultAzureCredential
</code></pre>
<p>What could be the issue that leads to that?</p>
<p><strong>Project Structure</strong></p>
<p>function_app.py</p>
<pre><code>import azure.functions as func
from http_blueprint import bp_http
from timer_blueprint import bp_timer
app = func.FunctionApp()
app.register_functions(bp_http)
app.register_functions(bp_timer)
</code></pre>
<p>http_blueprint.py</p>
<pre><code>import logging
import azure.functions as func
bp_http = func.Blueprint()
@bp_http.route(route="default_template")
def default_template(req: func.HttpRequest) -> func.HttpResponse:
...
</code></pre>
<p>timer_blueprint.py</p>
<pre><code>import os
import requests
import logging
import datetime
import azure.functions as func
from azure.keyvault.secrets import SecretClient
from azure.identity import DefaultAzureCredential
#same issue when libaries would be imported like this
#import azure.identity
#import azure.keyvault.secrets
bp_timer = func.Blueprint()
@bp_timer.timer_trigger(schedule="0 * * * * *",
arg_name="mytimer",
run_on_startup=True)
def exec_timer(mytimer: func.TimerRequest) -> None:
...
</code></pre>
<p>azure_pipelines.yml (based on the sample; build and deploy at the moment back to back - will be splitted if everything works)</p>
<pre><code>pool:
vmImage: ubuntu-latest
strategy:
matrix:
Python312:
python.version: '3.12'
steps:
- task: UsePythonVersion@0
inputs:
versionSpec: '$(python.version)'
displayName: 'Use Python $(python.version)'
- script: |
python -m pip install -U pip
python -m pip install --target="./.python_packages/lib/site-packages" --upgrade -r requirements.txt
displayName: 'Install dependencies'
- task: ArchiveFiles@2
displayName: 'Archive files'
inputs:
rootFolderOrFile: $(System.DefaultWorkingDirectory)
includeRootFolder: false
archiveType: zip
archiveFile: $(Build.ArtifactStagingDirectory)/$(Build.BuildId)-$(python.version).zip
replaceExistingArchive: true
- task: PublishBuildArtifacts@1
inputs:
PathtoPublish: '$(Build.ArtifactStagingDirectory)'
ArtifactName: 'drop'
publishLocation: 'Container'
- task: AzureFunctionApp@2
inputs:
connectedServiceNameARM: 'Deploy'
appType: 'functionAppLinux'
appName: 'appName'
package: '$(Build.ArtifactStagingDirectory)/$(Build.BuildId)-$(python.version).zip'
runtimeStack: 'PYTHON|3.12'
deploymentMethod: 'zipDeploy'
</code></pre>
<p>requirements.txt</p>
<pre><code>azure-functions==1.21.3
discord.py==2.5.2
discord_interactions==0.4.0
Flask==3.1.0
requests==2.32.3
azure-identity==1.21.0
azure-keyvault-secrets==4.9.0
cryptography==43.0.3
</code></pre>
|
<python><azure><azure-functions>
|
2025-04-05 22:47:58
| 3
| 1,525
|
mburm
|
79,557,468
| 2,856,552
|
xtick labels not showing on python line plot
|
<p>My Python code for departures vs. years, below, works fine for a bar plot. I would like to have year tick-mark labels on the x-axis. Currently I get the line plot, but no years labelled on the x-axis; for the bar plot the years are labelled fine.</p>
<p>This is not a multi-plot (subplots): I plot the line plot while the bar plotting is commented out, and vice versa. Why does one work and the other does not?</p>
<p><a href="https://i.sstatic.net/IYmEwdlW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IYmEwdlW.png" alt="sample image" /></a></p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
df = pd.read_csv('../RnOut/RAINFALL/Deps/CHIPAT.txt', sep=' ', skipinitialspace=True )
fig, ax = plt.subplots(figsize=(10, 10))
#Drop rows with missing values
#*****************************
df_new = df.drop(df[df['Dep'] <= -99.].index)
#set xtick labels
#****************
xticks=np.arange(df_new['Year'].iloc[0], df_new['Year'].iloc[-1], 5)
xlabels = [f'{x:d}' for x in xticks]
ax.set_xticks(xticks, labels=xlabels)
plt.xticks(xticks, labels=xlabels)
#Draw line plot
#**************
#df_new['Dep'].plot(ax=ax)
#Draw bar plot, red for negative bars
#************************************
colors = ['g' if e >= 0 else 'r' for e in df_new['Dep']]
plt.bar(df_new['Year'],df_new['Dep'], color=colors,edgecolor='black')
#Set titles
#**********
ax.set_title('Rainfall Departures: Oct-Mar\n ---------------------------------------')
plt.xlabel("Years")
plt.ylabel("Rainfall Departure (Std Dev)")
ax.grid()
plt.savefig('ChipatDep.png')
plt.show()
</code></pre>
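<p>For comparison, a sketch of the line plot drawn explicitly against the <code>Year</code> column (my assumption being that pandas' <code>Series.plot</code> draws <code>Dep</code> against the DataFrame index, so the year tick positions would fall outside the plotted x-range):</p>
<pre><code>#Draw line plot against Year explicitly instead of df_new['Dep'].plot(ax=ax)
ax.plot(df_new['Year'], df_new['Dep'])
</code></pre>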
<p>my sample data</p>
<pre><code>Year Dep
1945 -0.9
1946 0.9
1947 0.6
1948 -0.7
1949 1.2
1950 -0.9
1951 0.9
1952 0.1
1953 -1.0
1954 1.3
1955 -0.3
</code></pre>
|
<python><matplotlib>
|
2025-04-05 20:04:33
| 1
| 1,594
|
Zilore Mumba
|
79,557,430
| 11,462,274
|
Using the free-proxy library with requests to access general https websites
|
<p>When requesting a proxy, it delivers an HTTP proxy, which currently seems unusable to me because the vast majority of sites use HTTPS; this causes the request to be made from my own IP instead of through the collected proxy, as we can see here:</p>
<pre class="lang-python prettyprint-override"><code>from fake_useragent import UserAgent
from fp.fp import FreeProxy
import requests
def pack_requests(the_url, timeout=5):
ua = UserAgent()
proxy = FreeProxy(rand=True, timeout=timeout).get()
protocol = "https" if proxy.startswith("https") else "http"
pack_header = {
'User-Agent': ua.random
}
pack_response = requests.get(the_url, headers=pack_header, proxies={protocol: proxy}, timeout=timeout)
return pack_response
print(pack_requests('https://geolocation-db.com/json').text)
</code></pre>
<p>In the most recent version of <a href="https://github.com/jundymek/free-proxy" rel="nofollow noreferrer">free-proxy</a> (<em>1.1.3</em>) the <code>https=True</code> option raises an error, so I installed version <em>1.1.2</em>, which works perfectly with the option mentioned above. The only thing I need to do is define the protocol manually, because the proxy comes with the <code>http</code> protocol when I print the return value of <code>FreeProxy(...).get()</code>:</p>
<pre class="lang-python prettyprint-override"><code>from fake_useragent import UserAgent
from fp.fp import FreeProxy
import requests
def pack_requests(the_url, timeout=5):
ua = UserAgent()
proxy = FreeProxy(rand=True, https=True, timeout=timeout).get()
pack_header = {
'User-Agent': ua.random
}
pack_response = requests.get(the_url, headers=pack_header, proxies={"http": proxy, "https": proxy}, timeout=timeout)
return pack_response
print(pack_requests('https://geolocation-db.com/json').text)
</code></pre>
<p>The problem is that, with this approach, for websites in general (search sites, news websites, and most others, including my own website, which I tested just to see the result), the request fails with either a proxy connection error or an SSL certificate error.</p>
<p>Is there a way to get good results using these two libraries (<em>free-proxy</em> and <em>requests</em>) together, at least for basic needs, or has using these free proxy lists become obsolete and, let's say, "impossible" for getting good results?</p>
|
<python><web-scraping><python-requests><proxy>
|
2025-04-05 19:28:48
| 0
| 2,222
|
Digital Farmer
|
79,557,207
| 89,706
|
Python type-hint for attribute or property?
|
<p>I'm adding type hints for an existing large codebase, and am faced with a situation like:</p>
<pre class="lang-py prettyprint-override"><code>from abc import ABC
class C: ...
class Base(ABC): ...
class Child1(Base):
my_awesome_field: C
class Child2(Base):
@property
def my_awesome_field(self) -> C: ...
</code></pre>
<p>i.e., for some classes in a hierarchy <code>my_awesome_field</code> is implemented as an attribute and for some as a property. I'm looking for some way to inform functions getting a <code>Base</code> they can safely use <code>my_awesome_field</code>:</p>
<pre class="lang-py prettyprint-override"><code>def func(b: Base):
maf = b.my_awesome_field
...
</code></pre>
<p><em>AND</em> hopefully respect existing initializers in the code like -</p>
<pre class="lang-py prettyprint-override"><code> c1 = Child1(my_awesome_field=<whatever>)
</code></pre>
<p>If I specify a protocol like</p>
<pre class="lang-py prettyprint-override"><code>class HasAwesomeField(Protocol):
@property
def my_awesome_field(self) -> C: ...
class Base(ABC, HasAwesomeField): ...
</code></pre>
<p>implementations of <code>Base</code> are restricted to properties. If I change <code>Child1</code>'s implementation to a property:</p>
<pre class="lang-py prettyprint-override"><code>class Child1(Base):
_my_awesome_field: C
@property
def my_awesome_field(self) -> C:
        return self._my_awesome_field
</code></pre>
<p>constructs like <code>Child1(my_awesome_field=<whatever>)</code> break.</p>
<p>Is what I'm looking for possible? I was under the impression Python's typing is structural, but this seems to be a gap. Hopefully I'm missing something.</p>
<p>PS. property-setters omitted for brevity.</p>
|
<python><python-typing><pyright>
|
2025-04-05 15:47:59
| 1
| 16,435
|
Ofek Shilon
|
79,557,068
| 6,038,082
|
How to show a better image view using tkinter?
|
<p>In my Python tkinter GUI, I am showing several nodes which are inter-dependent and at various levels. <br>
When I click on a node, it opens a pop-up which displays the nodes that are one level up and one level down with respect to the clicked node. <br><br>
However, if there are too many nodes the view gets cluttered and the arrows pointing to the nodes merge together, resulting in a very untidy view.
In the example below, if you click on a node at Level 3, you will see what I am talking about.<br><br>
Is there a better library or way to show this representation? For example, gitk shows the commits of a git repo across several branches pictorially; likewise, can this be drawn in a tidier fashion so that the view does not overlap when there are more nodes?
<br> <br>
Below is my code:</p>
<pre><code>import tkinter as tk
from tkinter import ttk, filedialog, messagebox
from PIL import Image, ImageDraw, ImageTk
def build_gui(root, level_dict, deps_dict):
main_paned_window = ttk.PanedWindow(root, orient=tk.VERTICAL)
main_paned_window.pack(fill=tk.BOTH, expand=True)
top_frame = tk.Frame(main_paned_window)
main_paned_window.add(top_frame, weight=3)
bottom_panel = ttk.PanedWindow(main_paned_window, orient=tk.HORIZONTAL)
main_paned_window.add(bottom_panel, weight=1)
canvas = tk.Canvas(top_frame, bg="white")
h_scrollbar = tk.Scrollbar(top_frame, orient=tk.HORIZONTAL, command=canvas.xview)
v_scrollbar = tk.Scrollbar(top_frame, orient=tk.VERTICAL, command=canvas.yview)
scrollable_frame = tk.Frame(canvas)
scrollable_frame.bind(
"<Configure>",
lambda e: canvas.configure(
scrollregion=canvas.bbox("all")
)
)
canvas.create_window((0, 0), window=scrollable_frame, anchor="nw")
canvas.configure(xscrollcommand=h_scrollbar.set, yscrollcommand=v_scrollbar.set)
h_scrollbar.pack(side=tk.BOTTOM, fill=tk.X)
v_scrollbar.pack(side=tk.RIGHT, fill=tk.Y)
canvas.pack(side=tk.LEFT, fill=tk.BOTH, expand=True)
right_panel = tk.Frame(bottom_panel, width=200, bg="lightgray")
bottom_panel.add(right_panel, weight=1)
left_paned_window = tk.Frame(bottom_panel, bg="lightgray")
bottom_panel.add(left_paned_window, weight=3)
key_values_frame = tk.Frame(right_panel, bg="lightgray")
key_values_frame.pack(side=tk.TOP, fill=tk.BOTH, expand=True)
key_values_frame.params = {}
control_frame = tk.Frame(right_panel, bg="lightgray")
control_frame.pack(side=tk.TOP, fill=tk.X)
type_label = tk.Label(control_frame, text="Types:", bg="lightgray")
type_label.pack(side=tk.LEFT, padx=5, pady=5)
type_options = ["os", "mem", "cpu"]
type_menu = ttk.Combobox(control_frame, textvariable= "type_var", values=type_options)
type_menu.pack(side=tk.LEFT, padx=5, pady=5)
type_menu.bind("<<ComboboxSelected>>", on_type_change)
browse_button = tk.Button(control_frame, text="Pick File", command=browse_file)
browse_button.pack(side=tk.LEFT, padx=5, pady=5)
draw_object(root, scrollable_frame, left_paned_window, key_values_frame, level_dict, deps_dict)
def on_type_change(type):
print (f"Type changed to {type}\n")
def browse_file():
ini_file = filedialog.askopenfilename(filetypes=[("Textfiles", "*.txt")])
def draw_object(root, scrollable_frame,left_paned_window, key_values_frame, level_dict, deps_dict):
for widget in scrollable_frame.winfo_children():
widget.destroy()
level_frames = {}
levels = {}
for section, level in level_dict.items():
if level not in levels:
levels[level] = []
levels[level].append(section)
colors = ["lightblue", "lightgreen", "lightyellow", "lightpink", "lightgray"]
for level, nodes in sorted(levels.items()):
level_frame = tk.Frame(scrollable_frame, bg=colors[level % len(colors)], bd=2, relief=tk.SOLID)
level_frame.pack(fill=tk.X, padx=10, pady=5)
level_frames[level] = level_frame
level_label = tk.Label(level_frame, text=f"Level {level}", bg=colors[level % len(colors)], font=("Arial", 12, "bold"), anchor="w")
level_label.pack(side=tk.TOP, fill=tk.X)
for node in nodes:
draw_nodebox(root,left_paned_window, key_values_frame, level_dict, deps_dict, level_frame, node)
def draw_nodebox(root, left_paned_window, key_values_frame, level_dict, deps_dict, parent, node):
level = level_dict.get(node, 0)
label = f'{node}'
color = 'skyblue'
fg_color = 'navyblue'
node_label = tk.Label(parent, text=label, bg=color, fg=fg_color, font=("Arial", 12, "bold"), bd=1, relief=tk.SOLID, padx=5, pady=5)
node_label.pack(side=tk.LEFT, padx=5, pady=5)
node_label.bind("<Button-3>", lambda event, node=node: show_menu(root, left_paned_window, key_values_frame, event, level_dict, node))
node_label.bind("<Button-1>", lambda event, node=node: open_hierarchy_popup(level_dict, deps_dict, node))
def open_hierarchy_popup(level_dict, deps_dict, node):
popup = tk.Toplevel(root)
popup.title(f"Dependencies for {node}")
canvas = tk.Canvas(popup, width=400, height=300, bg="white")
canvas.pack(side=tk.LEFT, fill=tk.BOTH, expand=True)
h_scrollbar = tk.Scrollbar(popup, orient=tk.HORIZONTAL, command=canvas.xview)
h_scrollbar.pack(side=tk.BOTTOM, fill=tk.X)
v_scrollbar = tk.Scrollbar(popup, orient=tk.VERTICAL, command=canvas.yview)
v_scrollbar.pack(side=tk.RIGHT, fill=tk.Y)
canvas.configure(xscrollcommand=h_scrollbar.set, yscrollcommand=v_scrollbar.set)
scrollable_frame = tk.Frame(canvas)
canvas.create_window((0, 0), window=scrollable_frame, anchor="nw")
scrollable_frame.bind(
"<Configure>",
lambda e: canvas.configure(
scrollregion=canvas.bbox("all")
)
)
draw_dependencies(level_dict, deps_dict, canvas, node)
def draw_dependencies(level_dict, deps_dict, canvas, node):
previous_deps = deps_dict.get(node, [])
next_deps = [dep for dep, deps in deps_dict.items() if node in deps]
draw_node_with_level(level_dict, deps_dict, canvas, node, 150, 100)
for i, dep in enumerate(previous_deps):
draw_node_with_level(level_dict, deps_dict, canvas, dep, 50, 50 + i * 40)
draw_arrow(canvas, 70, 50 + i * 40, 130, 100)
for i, dep in enumerate(next_deps):
draw_node_with_level(level_dict, deps_dict, canvas, dep, 250, 50 + i * 40)
draw_arrow(canvas, 170, 100, 230, 50 + i * 40)
def draw_node_with_level(level_dict, deps_dict, canvas, node, x, y):
level = level_dict.get(node, "N/A")
canvas.create_oval(x - 20, y - 20, x + 20, y + 20, fill="lightblue")
canvas.create_text(x, y, text=f"{node} ({level})", font=("Arial", 12, "bold"))
def draw_arrow(canvas, x1, y1, x2, y2):
canvas.create_line(x1, y1, x2, y2, arrow=tk.LAST)
def show_menu(root, left_paned_window, key_values_frame, event, level_dict, node):
context_menu = tk.Menu(root, tearoff=0)
context_menu.add_command(label="Show Key Values", command=lambda: show_key_values(key_values_frame,node, readonly=True))
context_menu.add_command(label="Show Levels", command=lambda: show_levels(left_paned_window, level_dict,node))
context_menu.post(event.x_root, event.y_root)
root.bind("<Button-1>", lambda e: context_menu.unpost())
def show_key_values(key_values_frame, node, readonly):
current_node = node
for widget in key_values_frame.winfo_children():
widget.destroy()
selected_node_label = tk.Label(key_values_frame, text=f"Stage: {node}", font=("Arial", 12, "bold"))
selected_node_label.pack(anchor="w")
params = {"Key1":"Value1", "Key2":"Value2"}
for param, value in params.items():
param_frame = tk.Frame(key_values_frame)
param_frame.pack(fill=tk.X, pady=2)
param_label = tk.Label(param_frame, text=f"{param}:")
param_label.pack(side=tk.LEFT)
param_entry = tk.Entry(param_frame)
param_entry.insert(0, value)
param_entry.pack(side=tk.LEFT, fill=tk.X, expand=True)
def show_levels(left_paned_window, level_dict, node):
for widget in left_paned_window.winfo_children():
widget.destroy()
level = level_dict.get(node, "N/A")
selected_node_label = tk.Label(left_paned_window, text=f"Stage: {node} ({level})", font=("Arial", 12, "bold"), bg="lightgray")
selected_node_label.pack(anchor="w")
title = "up down levels"
title_label = tk.Label(left_paned_window, text=title, font=("Arial", 12, "bold"), bg="lightgray")
title_label.pack(anchor="w")
if __name__ == '__main__':
deps_dict = {'delegate_process:MEM': [], 'memory_saver': [], 'performance_tracker': ['memory_saver'], 'refresh_rate': ['memory_saver'], 'startup_procs': ['performance_tracker', 'refresh_rate'], 'active_procs': ['scheduling', 'counter', 'computing', 'lightweight_jobs', 'hypervisor' , 'startup_procs', 'segfaults', 'lookaside_buffer', 'deadlock_avoider', 'pagination', 'mutex_checker', 'os_load', 'wait_and_exit', 'app_mgmt', 'virtualization', 'thread_mgmt', 'concurrent_jobs', 'inter-proc-communication', 'shared_mem_jobs', 'distributed_sys_jobs', 'zombie_proc', 'orphan_proc', 'nohup_proc'], 'pages': ['active_procs'], 'active_procs_list': ['pages'], 'essential_backups': ['active_procs_list'], 'shutdown_jobs': ['essential_backups'], 'bigdata_ml': ['active_procs_list']}
level_dict = {'delegate_process:MEM': 0, 'memory_saver': 0, 'performance_tracker': 1, 'refresh_rate': 1, 'startup_procs': 2, 'active_procs': 3, 'pages': 4, 'segfaults': 2, 'lookaside_buffer': 2, 'deadlock_avoider':2, 'pagination':2 , 'scheduling':2, 'counter':2, 'computing':2, 'lightweight_jobs':2, 'hypervisor':2, 'mutex_checker': 2 , 'os_load':2, 'wait_and_exit':2 , 'zombie_proc':2, 'orphan_proc':2, 'nohup_proc':2 , 'app_mgmt':2, 'virtualization':2, 'thread_mgmt':2, 'concurrent_jobs':2, 'inter-proc-communication':2, 'shared_mem_jobs':2, 'distributed_sys_jobs':2, 'active_procs_list': 5, 'essential_backups': 6, 'shutdown_jobs': 7, 'bigdata_ml': 6}
root = tk.Tk()
root.geometry("600x500")
build_gui(root, level_dict, deps_dict)
root.mainloop()
</code></pre>
|
<python><tkinter>
|
2025-04-05 14:04:22
| 0
| 1,014
|
A.G.Progm.Enthusiast
|
79,556,969
| 7,483,211
|
Pytensor compilation failed during linking stage on macOS
|
<p>When trying to run a simple PyMC example on ARM macOS 15.4, using a fresh conda-forge conda environment, the run fails with a compilation error: <code>pytensor.link.c.exceptions.CompileError: Compilation failed</code></p>
<pre class="lang-py prettyprint-override"><code>import pymc as pm
with pm.Model() as model:
alpha = pm.Normal('alpha', mu=0, sigma=1)
pm.Normal('Est', mu=alpha, sigma=1, observed=0)
pm.sample()
</code></pre>
<p>Relevant logs:</p>
<pre class="lang-none prettyprint-override"><code>pytensor.link.c.exceptions.CompileError: Compilation failed (return status=1):
/opt/homebrew/Cellar/micromamba/2.0.8/envs/chyby/bin/clang++ -dynamiclib -g -O3 -fno-math-errno -Wno-unused-label -Wno-unused-variable -Wno-write-strings -Wno-c++11-narrowing -fno-exceptions -fno-unwind-tables -fno-asynchronous-unwind-tables -DNPY_NO_DEPRECATED_API=NPY_1_7_API_VERSION -fPIC -undefined dynamic_lookup -ld64 -I/opt/homebrew/Cellar/micromamba/2.0.8/envs/chyby/lib/python3.13/site-packages/numpy/_core/include -I/opt/homebrew/Cellar/micromamba/2.0.8/envs/chyby/include/python3.13 -I/opt/homebrew/Cellar/micromamba/2.0.8/envs/chyby/lib/python3.13/site-packages/pytensor/link/c/c_code -L/opt/homebrew/Cellar/micromamba/2.0.8/envs/chyby/lib -fvisibility=hidden -o /Users/cr/.pytensor/compiledir_macOS-15.4-arm64-arm-64bit-Mach-O-arm-3.13.2-64/tmphttorckn/m25516502502c211c050fc4b6a6d2e1108dee165ceb9a6715069ad05cf2e6f5c3.so /Users/cr/.pytensor/compiledir_macOS-15.4-arm64-arm-64bit-Mach-O-arm-3.13.2-64/tmphttorckn/mod.cpp
ld: -lto_library library filename must be 'libLTO.dylib'
clang++: error: linker command failed with exit code 1 (use -v to see invocation)
Apply node that caused the error: Subtensor{i}([1 3], -1)
Toposort index: 0
Inputs types: [TensorType(int64, shape=(2,)), ScalarType(int64)]
Backtrace when the node is created (use PyTensor flag traceback__limit=N to make it longer):
File "/Users/cr/code/autotracer_chyby/bayesian/latent.py", line 149, in <module>
trace = pm.sample(1000, tune=2000, chains=2, cores=1, target_accept=0.95)
File "/opt/homebrew/Cellar/micromamba/2.0.8/envs/chyby/lib/python3.13/site-packages/pymc/sampling/mcmc.py", line 789, in sample
provided_steps, selected_steps = assign_step_methods(model, step, methods=pm.STEP_METHODS)
File "/opt/homebrew/Cellar/micromamba/2.0.8/envs/chyby/lib/python3.13/site-packages/pymc/sampling/mcmc.py", line 261, in assign_step_methods
selected = max(
File "/opt/homebrew/Cellar/micromamba/2.0.8/envs/chyby/lib/python3.13/site-packages/pymc/sampling/mcmc.py", line 263, in <lambda>
key=lambda method, var=rv_var, has_gradient=has_gradient: method._competence( # type: ignore[misc]
File "/opt/homebrew/Cellar/micromamba/2.0.8/envs/chyby/lib/python3.13/site-packages/pymc/step_methods/compound.py", line 219, in _competence
competences.append(cls.competence(var))
File "/opt/homebrew/Cellar/micromamba/2.0.8/envs/chyby/lib/python3.13/site-packages/pymc/step_methods/metropolis.py", line 533, in competence
k = var.owner.inputs[-1].shape[-1].eval()
HINT: Use a linker other than the C linker to print the inputs' shapes and strides.
HINT: Use the PyTensor flag `exception_verbosity=high` for a debug print-out and storage map footprint of this Apply node.
</code></pre>
<p>How do I fix this?</p>
|
<python><macos><pymc><pytensor>
|
2025-04-05 12:43:12
| 1
| 10,272
|
Cornelius Roemer
|
79,556,867
| 1,581,090
|
How can I create a GIF image using Pillow and imageio with Python?
|
<p>I have the following code that creates 60 frames using Pillow in Python 3.11.10, which I want to use to create a GIF image (repeating endlessly), with a duration per frame of 0.1 seconds. The first four frames should show a red square, and the rest of the time (almost six seconds) it should be basically black.</p>
<p>However, the created GIF image only seems to contain two frames, one frame with the red square and one without. How do I create the GIF image properly?</p>
<pre><code>from PIL import Image, ImageDraw
import imageio
import numpy as np
# Colors
red = (255, 0, 0)
black = (0, 0, 0)
grey = (30, 30, 30)
frames = []
framerate = 10
frame_count = 60
for i in range(frame_count):
# Create an image with black background
img = Image.new('RGB', (55, 50), black)
draw = ImageDraw.Draw(img)
# Draw rectangle
draw.rectangle((10, 20, 45, 30), fill=red if i<5 else grey)
# Append frame to the list
frames.append(np.array(img))
# Save frames as a GIF
imageio.mimsave("test.gif", frames, duration=1./framerate, loop=0, plugin='pillow')
</code></pre>
<p>Here is the created GIF image:</p>
<p><a href="https://i.sstatic.net/iVJaPTBj.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iVJaPTBj.gif" alt="Enter image description here" /></a></p>
<p>Expected behavior: 0.4 seconds showing that red square, then 5.6 seconds showing basically black.</p>
<p>With <a href="https://stackoverflow.com/questions/79556867/how-can-i-create-a-gif-image-using-pillow-and-imageio-with-python/79556898#79556898">Grismar's suggestion</a>:</p>
<p><a href="https://i.sstatic.net/3KBObpBl.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3KBObpBl.gif" alt="Enter image description here" /></a></p>
|
<python><python-imaging-library><gif>
|
2025-04-05 11:02:26
| 1
| 45,023
|
Alex
|
79,556,755
| 465,159
|
How to create "dynamic" Literal types from dataclass members
|
<p>How do I add support for custom smart type completion in Python?</p>
<p>Let's say I have a dataclass:</p>
<pre><code>@dataclass
class MyData:
mine: str
yours: str
def col(self, value: str):
return "The column name is: " + value
</code></pre>
<p>I want the <code>value</code> argument in <code>col</code> to be only one of <code>"mine"</code> or <code>"yours"</code>; that is, only be one of the dataclass member names. And I want the editor to understand this and autocomplete it, so that if I do <code>data.col('mi...')</code> it would autocomplete to 'mine'.</p>
<p>This works if I manually specify the type:</p>
<pre><code>def col(self, value: Literal['mine', 'yours']):
...
</code></pre>
<p>But ideally one would not replicate all the field names in the <code>Literal</code> type definition. Also, I would like to generate more value types (e.g. adding the '_now' suffix: <code>Literal['mine', 'yours', 'mine_now', 'yours_now']</code>).</p>
<p>In summary, I would like to start from a dataclass, add a bunch of column names (statically - only thing required is the data class definition) and use those column names in a <code>Literal</code> type hint, so that editors and static type checkers can understand it.</p>
<p>Is it possible to do this?</p>
|
<python><python-typing><python-dataclasses>
|
2025-04-05 09:03:32
| 1
| 5,474
|
Ant
|
79,556,720
| 7,590,783
|
Azure functions get multi file uploads
|
<p>I am trying to count the total number of files uploaded to an Azure Function (Python). I tried uploading multiple files via Postman, but the Azure Function always reads only the first file and the count is always 1. Why is that? Please help.</p>
<pre><code>for input_file in req.files.values():
filename = input_file.filename
logging.info('Filename: %s' % filename)
</code></pre>
|
<python><azure-functions>
|
2025-04-05 08:31:12
| 1
| 639
|
Svj
|
79,556,656
| 10,440,128
|
Get the maximum frequency of an audio spectrum
|
<p>I want to detect the cutoff frequency of the AAC audio encoder used to compress an M4A audio file.</p>
<p>This cutoff frequency (or maximum frequency) is an indicator of audio quality:
high-quality audio has a cutoff around 20 kHz (fullband),
medium-quality audio around 14 kHz (superwideband),
low-quality audio around 7 kHz (wideband),
and super-low-quality audio around 3 kHz (narrowband).
See also: <a href="https://en.wikipedia.org/wiki/Voice_frequency" rel="nofollow noreferrer">voice frequency</a></p>
<p>Example spectrum of a 2-hour movie, generated with <code>sox</code>, with a maximum frequency around 19.6 kHz:</p>
<p><a href="https://i.sstatic.net/kpfmFAb8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kpfmFAb8.png" alt="audio spectrum with maximum frequency around 19.6KHz" /></a></p>
<p>The program should ignore noise below a certain loudness, for example -80dB.</p>
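<p>For reference, a minimal sketch of the thresholding idea on a single chunk, using <code>scipy.signal.welch</code> (assumptions: <code>samples</code> is a mono float array, <code>sr</code> is the sample rate, and the -80 dB threshold is taken relative to the strongest bin):</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from scipy.signal import welch

def cutoff_khz(samples, sr, threshold_db=-80.0):
    # Power spectral density of the chunk
    freqs, psd = welch(samples, fs=sr, nperseg=4096)
    psd_db = 10 * np.log10(psd + 1e-20)
    # Keep bins within threshold_db of the loudest bin, take the highest such frequency
    above = psd_db > (psd_db.max() + threshold_db)
    return freqs[above].max() / 1000 if above.any() else 0.0
</code></pre>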
<p>Here is a <a href="https://github.com/milahu/random/blob/master/ffmpeg/audio-cutoff.py" rel="nofollow noreferrer">Python script</a> generated by deepseek.com, but it returns 0.2 kHz instead of 19.6 kHz.</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3
# get the maximum frequency
# of an audio spectrum
# as an indicator
# of the actual audio quality
# generated by deepseek.com
# prompt
"""
create a python script
to detect the maximum frequency
in an m4a audio file.
that maximum frequency is produced
by the lowpass filter
of the aac audio encoder.
high-quality audio
has a maximum frequency
around 20 KHz (fullband),
low-quality audio
has a maximum frequency
around 3 KHz (narrowband).
use ffmpeg to decode the audio
to pcm
in chunks of 10 seconds.
for each chunk:
detect the local maximum,
print the local maximum
and the chunk time
with the format
f"t={t}sec f={f}KHz",
update the global maximum.
to detect the local maximum,
remove the noise floor
around -110dB,
then find the maximum frequency
in the spectrum.
accept some command line options:
--ss n:
pass as "-ss n" to ffmpeg.
--to n:
pass as "-to n" to ffmpeg.
both -ss and -to args
must come before the -i arg
for ffmpeg input seeking.
print all frequencies in KHz.
add a shebang line before the script,
spaced by an empty line.
do not recode the audio with ffmpeg.
use ffprobe to get the input samplerate,
usually 48KHz or 44.1KHz.
create a python class,
so we dont have to pass all parameters to functions.
add a command line option to select the audio track id, by default zero.
"""
#!/usr/bin/env python3
import argparse
import numpy as np
import subprocess
import sys
from tempfile import NamedTemporaryFile
class AudioAnalyzer:
def __init__(self, input_file, audio_track=0, start_time=None, end_time=None):
self.input_file = input_file
self.audio_track = audio_track
self.start_time = start_time
self.end_time = end_time
self.sample_rate = self._get_sample_rate()
self.global_max_freq = 0
self.global_max_time = 0
def _get_sample_rate(self):
cmd = [
'ffprobe',
'-v', 'error',
'-select_streams', f'a:{self.audio_track}',
'-show_entries', 'stream=sample_rate',
'-of', 'default=noprint_wrappers=1:nokey=1',
self.input_file
]
result = subprocess.run(cmd, capture_output=True, text=True)
return float(result.stdout.strip())
def _get_ffmpeg_command(self):
cmd = [
'ffmpeg',
'-hide_banner',
'-loglevel', 'error',
]
if self.start_time is not None:
cmd.extend(['-ss', str(self.start_time)])
if self.end_time is not None:
cmd.extend(['-to', str(self.end_time)])
cmd.extend([
'-i', self.input_file,
'-map', f'0:a:{self.audio_track}',
'-ac', '1', # convert to mono
'-f', 'f32le', # 32-bit float PCM
'-'
])
return cmd
def analyze(self, chunk_size=10):
ffmpeg_cmd = self._get_ffmpeg_command()
with subprocess.Popen(ffmpeg_cmd, stdout=subprocess.PIPE) as process:
chunk_samples = int(chunk_size * self.sample_rate)
bytes_per_sample = 4 # 32-bit float
chunk_bytes = chunk_samples * bytes_per_sample
current_time = self.start_time if self.start_time is not None else 0
while True:
raw_data = process.stdout.read(chunk_bytes)
if not raw_data:
break
samples = np.frombuffer(raw_data, dtype=np.float32)
if len(samples) == 0:
continue
local_max_freq = self._analyze_chunk(samples)
print(f"t={current_time:.1f}sec f={local_max_freq:.1f}KHz")
if local_max_freq > self.global_max_freq:
self.global_max_freq = local_max_freq
self.global_max_time = current_time
current_time += chunk_size
def _analyze_chunk(self, samples):
# Apply Hanning window
window = np.hanning(len(samples))
windowed_samples = samples * window
# Compute FFT
fft = np.fft.rfft(windowed_samples)
magnitudes = np.abs(fft)
# Convert to dB
eps = 1e-10 # avoid log(0)
magnitudes_db = 20 * np.log10(magnitudes + eps)
# Frequency bins
freqs = np.fft.rfftfreq(len(samples), 1.0 / self.sample_rate) / 1000 # in KHz
# Remove noise floor (-110dB)
threshold = -110
valid_indices = magnitudes_db > threshold
valid_freqs = freqs[valid_indices]
valid_magnitudes = magnitudes_db[valid_indices]
if len(valid_freqs) == 0:
return 0
# Find frequency with maximum magnitude
max_idx = np.argmax(valid_magnitudes)
max_freq = valid_freqs[max_idx]
return max_freq
def main():
parser = argparse.ArgumentParser(description='Detect maximum frequency in audio file')
parser.add_argument('input_file', help='Input audio file (m4a)')
parser.add_argument('--ss', type=float, help='Start time in seconds')
parser.add_argument('--to', type=float, help='End time in seconds')
parser.add_argument('--track', type=int, default=0, help='Audio track ID (default: 0)')
args = parser.parse_args()
analyzer = AudioAnalyzer(
input_file=args.input_file,
audio_track=args.track,
start_time=args.ss,
end_time=args.to
)
print(f"Analyzing audio file: {args.input_file}")
print(f"Sample rate: {analyzer.sample_rate/1000:.1f} KHz")
print(f"Audio track: {args.track}")
if args.ss is not None:
print(f"Start time: {args.ss} sec")
if args.to is not None:
print(f"End time: {args.to} sec")
print("---")
analyzer.analyze()
print("---")
print(f"Global maximum: t={analyzer.global_max_time:.1f}sec f={analyzer.global_max_freq:.1f}KHz")
if analyzer.global_max_freq > 15:
print("Quality: Fullband (high quality)")
elif analyzer.global_max_freq > 5:
print("Quality: Wideband (medium quality)")
else:
print("Quality: Narrowband (low quality)")
if __name__ == '__main__':
main()
</code></pre>
<p>Similar question:
<a href="https://dsp.stackexchange.com/questions/79133/how-to-find-the-max-frequency-at-a-certain-db-in-a-fft-signal">How to find the max frequency at a certain db in a fft signal</a></p>
<hr />
<p>Here is an example PSD indicating fullband quality, with a PSD drop-off around 20 kHz.</p>
<p><a href="https://i.sstatic.net/f5SpH3V6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/f5SpH3V6.png" alt="psd plot" /></a></p>
|
<python><audio><ffmpeg><fft><spectrogram>
|
2025-04-05 06:58:05
| 1
| 3,764
|
milahu
|
79,556,594
| 3,967,334
|
uv's [project.scripts] won't activate the environment in venv
|
<p>I have a uv project with the following <code>pyproject.toml</code>:</p>
<pre class="lang-ini prettyprint-override"><code>[project]
name = "trapallada"
version = "0.1.0"
description = "Add your description here"
readme = "README.md"
requires-python = ">=3.12"
dependencies = [
"pandas>=2.2.3",
"plotly>=6.0.1",
<more stuff>
]
[tool.uv]
package = true
[project.scripts]
"trap_download_obs.py" = "trapallada.xsa.util:cl_download_observations"
</code></pre>
<p>(the <code>package = true</code> is meant to package the code in directory <code>trapallada</code>, which is inside the project directory)</p>
<p>When I do <code>uv pip install -e .</code>, the script "trap_download_obs.py" is created in <code>.venv/bin</code> but can't actually be run since, seemingly, the venv environment is not activated. The content of the script is</p>
<pre class="lang-py prettyprint-override"><code>#!/export/usuarios01/mvazquez/Sync/git/trapallada/.venv/bin/python3
# -*- coding: utf-8 -*-
import sys
from trapallada.xsa.util import cl_download_observations
if __name__ == "__main__":
if sys.argv[0].endswith("-script.pyw"):
sys.argv[0] = sys.argv[0][:-11]
elif sys.argv[0].endswith(".exe"):
sys.argv[0] = sys.argv[0][:-4]
sys.exit(cl_download_observations())
</code></pre>
<p>and the error</p>
<pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
File "/export/usuarios01/mvazquez/Sync/git/trapallada/.venv/bin/trap_download_obs.py", line 4, in <module>
from trapallada.xsa.util import cl_download_observations
ModuleNotFoundError: No module named 'trapallada'
</code></pre>
<p>If I activate the environment myself, then <code>from trapallada.xsa.util import cl_download_observations</code> will run.</p>
<p>My project's directory is named <code>trapallada</code>, same as the directory inside it containing the python code to be <em>packaged</em>. I guess the important bit here (related to the command) is that I have a module <code>trapallada/xsa/util.py</code>, that contains the function <code>cl_download_observations</code>.</p>
<p>I'm trying to run the script from the project's root directory, by first doing</p>
<pre class="lang-bash prettyprint-override"><code>source .venv/bin/activate
</code></pre>
<p>and then running <code>trap_download_obs.py</code>.</p>
<p>Any clue?</p>
|
<python><pyproject.toml><uv>
|
2025-04-05 05:38:28
| 2
| 1,479
|
manu
|
79,556,592
| 17,729,094
|
How to repeat and truncate Polars list elements to a fixed length
|
<p>I have data that looks like:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
lf = pl.LazyFrame(
{
"points": [
[
[1.0, 2.0],
],
[
[3.0, 4.0],
[5.0, 6.0],
],
[
[7.0, 8.0],
[9.0, 10.0],
[11.0, 12.0],
],
],
"other": ["foo", "bar", "baz"],
},
schema={
"points": pl.List(pl.Array(pl.Float32, 2)),
"other": pl.String,
},
)
</code></pre>
<p>And I want to make all lists have the same number of elements.
If a list currently has more elements than I need, it should be truncated.
If it has fewer than I need, it should repeat itself in order until it has enough.</p>
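<p>To illustrate with the toy data above, for <code>target_length = 3</code> the expected "points" column would be (repetition wraps around in order, truncation simply cuts):</p>
<pre><code>[[1.0, 2.0], [1.0, 2.0], [1.0, 2.0]]      # repeated to reach 3
[[3.0, 4.0], [5.0, 6.0], [3.0, 4.0]]      # wraps around in order
[[7.0, 8.0], [9.0, 10.0], [11.0, 12.0]]   # already 3, unchanged
</code></pre>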
<p>I managed to get it working, but I feel I am jumping through hoops. Is there a cleaner way of doing this? Maybe with <code>gather</code>?</p>
<pre class="lang-py prettyprint-override"><code>target_length = 3
result = (
lf.with_columns(
needed=pl.lit(target_length).truediv(pl.col("points").list.len()).ceil()
)
.with_columns(
pl.col("points")
.repeat_by("needed")
.list.eval(pl.element().explode())
.list.head(target_length)
)
.drop("needed")
)
</code></pre>
<p><strong>EDIT</strong></p>
<p>The method above works for toy examples, but when I try to use it in my real dataset, it fails with:</p>
<pre><code>pyo3_runtime.PanicException: Polars' maximum length reached. Consider installing 'polars-u64-idx'.
</code></pre>
<p>I haven't been able to make an MRE for this, but my data has 4 million rows, and the "points" list on each row has between 1 and 8000 elements (and I'm trying to pad/truncate to 800 elements). These all seem pretty small; I don't see how the maximum <code>u32</code> length is reached.</p>
<p>I appreciate any alternative approaches I can try.</p>
<p>The closest I have (which doesn't panic) is the following, but it doesn't pad by repeating the list in order; it just repeats the last element:</p>
<pre class="lang-py prettyprint-override"><code>target_length = 3
result = (
lf.with_columns(
pl.col("points")
.list.gather(
pl.int_range(target_length),
null_on_oob=True,
)
.list.eval(pl.element().forward_fill())
)
.drop("needed")
)
</code></pre>
|
<python><dataframe><list><python-polars>
|
2025-04-05 05:34:27
| 1
| 954
|
DJDuque
|
79,556,482
| 616,728
|
How to validate search terms when using embedding to look for objects in images
|
<p>I have a search on my site that does both traditional full-text search and searches using embeddings. So, for example, when you search 'red balloon' I want both the text and image results. The problem is that not all search terms make sense for object detection (like, say, 'William', or an identifier like a driver's license number). I know there are libraries that will tell me if a word is a noun, but is there anything that tells me if a phrase is searchable? Something like this:</p>
<ul>
<li>Red Apple YES</li>
<li>Big Idea No</li>
<li>Driver's License YES</li>
<li>Suspended License No</li>
</ul>
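<p>To make concrete the kind of noun check mentioned above (and why it is not enough on its own), here is a minimal sketch using spaCy, assuming the <code>en_core_web_sm</code> model is installed:</p>
<pre><code>import spacy

# Assumes: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

def is_noun_phrase(phrase: str) -> bool:
    doc = nlp(phrase)
    # True if the whole phrase is a single noun chunk; this only checks grammar,
    # not whether the phrase names something visually detectable.
    return any(chunk.text == doc.text for chunk in doc.noun_chunks)

print(is_noun_phrase("red apple"))  # True
print(is_noun_phrase("big idea"))   # True as well, so grammar alone is not sufficient
</code></pre>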
|
<python><pytorch><knn><embedding>
|
2025-04-05 02:15:16
| 1
| 2,748
|
Frank Conry
|
79,556,459
| 1,078,556
|
Moving elements of a python dictionary to another one under certain condition using less code
|
<p>OK, so this is more of a code "optimization" exercise.</p>
<p>I have to move all elements from a Python dictionary to another one, under a certain condition, while emptying the source dict at the same time. (Whether I find matching elements or not, the source dict must be empty by the end.)</p>
<p>Let's have:</p>
<pre><code>pie={"A1":{2000,2001,2002},"A2":{2003,2004,2005},"A3":{2000,2004,2007}} ; slices={}
condition_check=2000
</code></pre>
<p>As I said, I need to move the elements to the destination dict only when they meet a certain condition. The match is made on keys and/or values while emptying the source dict, and thus I started with this code (in this case the condition is on values):</p>
<pre><code>while bool(pie):
temp = pie.popitem()
if (condition_check in temp[1]): slices.update({temp[0]:temp[1]})
</code></pre>
<p>Now the question is: is there a way to do without the <code>temp = pie.popitem()</code> line? (I thought perhaps it would be possible using an assignment expression to put it within the <code>if</code> condition maybe?)</p>
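<p>For clarity, the rough shape I had in mind with the assignment expression is the following sketch (I am asking whether something like this is possible and considered good form):</p>
<pre><code>while pie:
    if condition_check in (temp := pie.popitem())[1]:
        slices.update({temp[0]: temp[1]})
</code></pre>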
<p>P.S.
I know I could go with</p>
<pre><code>for s in [{k:v} for k,v in pie.items() if (condition_check in v)]: slices.update(s)
pie.clear()
</code></pre>
<p>or, alternatively</p>
<pre><code>s = {k:v for k,v in pie.items() if (condition_check in v)} ; slices.update(s)
pie.clear()
</code></pre>
<p>Finally, I think <code>slices.update({k:v for k,v in pie.items() if (condition_check in v)}) ; pie.clear()</code> could suit well</p>
<p>Yet, I'd still like to optimize the first code in such a way as to "move/merge" the <code>temp = pie.popitem()</code> assignment inside the <code>if</code> condition somehow...
And I'm just curious to know whether it is possible to achieve something like that anyway, tbh.</p>
<p>EDIT:</p>
<p>As @KellyBundy noted in the comments: indeed, the thing to really ensure here is that the source dictionary is empty by the end before going onward. Within the while loop this is done step by step, using <code>popitem()</code>. But a <code>clear()</code> as the very first instruction immediately after the comprehension is also fine for this.</p>
|
<python><dictionary><optimization><one-liner>
|
2025-04-05 01:39:46
| 1
| 1,529
|
danicotra
|
79,556,449
| 1,245,659
|
Django ModelForm ensuring FK integrity without using it in the form
|
<p>I have a User Profile model with a Model Form:</p>
<pre><code>class Profile(models.Model):
# Managed fields
user = models.OneToOneField(User, related_name="profile", on_delete=models.CASCADE)
memberId = models.CharField(unique=True, max_length=15, null=False, blank=False, default=GenerateFA)
bio = models.TextField(null=True, blank=True)
avatar = models.ImageField(upload_to="static/MCARS/img/members", null=True, blank=True)
birthday = models.DateField(null=True, blank=True)
gender = models.CharField(max_length=10, choices=constants.GENDER_CHOICES, null=True, blank=True)
invited = models.BooleanField(default=False)
registered = models.BooleanField(default=False)
height = models.PositiveSmallIntegerField(null=True, blank=True)
phone = models.CharField(max_length=32, null=True, blank=True)
address = models.CharField(max_length=255, null=True, blank=True)
number = models.CharField(max_length=32, null=True, blank=True)
city = models.CharField(max_length=50, null=True, blank=True)
state = models.CharField(max_length=50, null=True, blank=True)
zip = models.CharField(max_length=30, null=True, blank=True)
facebook = models.URLField(null=True, blank=True)
Twitter = models.URLField(null=True, blank=True)
LinkedIn = models.URLField(null=True, blank=True)
Instagram = models.URLField(null=True, blank=True)
Snapchat = models.URLField(null=True, blank=True)
website = models.URLField(null=True, blank=True)
class UserProfileForm(ModelForm):
class Meta:
model = Profile
exclude = ('user','memberId','invited', 'registered')
</code></pre>
<p>I don't want <code>user</code> in the form, but I need to remember who the user is when saving back to the model. How do I ensure that? Without it, Django throws an FK integrity error.
This is what I have in the view:</p>
<pre><code>@login_required()
def Profile(request, memberid=None):
if memberid:
user = User.objects.select_related('profile').get(id=memberid)
else:
user = User.objects.select_related('profile').get(id=request.user.id)
errors = None
if request.method == 'POST':
print('found me')
data = request.POST
form = UserProfileForm(data)
form.user = user
if form.is_valid():
form.save()
else:
print('form is invalid')
errors = form.errors
context = {
'pagetitle': 'MCARS - Service Record',
'user': user,
'rank': Rank.objects.get(user=user.id),
'assignments': Assignment.objects.filter(user=user.id),
'form': UserProfileForm(),
}
if errors:
context['errors'] = errors
return render(request, template_name='MCARS/pages/ServiceRecord.html', context=context)
</code></pre>
<p>Thanks!</p>
|
<python><django>
|
2025-04-05 01:22:02
| 1
| 305
|
arcee123
|
79,556,360
| 1,609,514
|
Pytest fixture is changing the instance returned by another fixture
|
<p>I'm very baffled and a little concerned to discover the following behaviour where I have two tests and two fixtures.</p>
<pre class="lang-py prettyprint-override"><code>import pytest
@pytest.fixture
def new_object():
return list()
@pytest.fixture
def a_string(new_object):
# Change this instance of the object
new_object.append(1)
return "a string"
def test_1(new_object):
assert len(new_object) == 0
def test_2(a_string, new_object):
assert len(new_object) == 0
</code></pre>
<p>The first test passes but the second one fails.</p>
<pre class="lang-none prettyprint-override"><code> def test_2(a_string, new_object):
> assert len(new_object) == 0
E assert 1 == 0
E + where 1 = len([1])
tests/test_pytest_list.py:21: AssertionError
================================================ short test summary info =================================================
FAILED tests/test_pytest_list.py::test_2 - assert 1 == 0
============================================== 1 failed, 1 passed in 0.36s ===============================================
</code></pre>
<p>I expected fixtures to pass new instances of an object (unless specified otherwise), not the same object that some other fixture has modified.</p>
<p>According to the <a href="https://docs.pytest.org/en/stable/how-to/fixtures.html#scope-sharing-fixtures-across-classes-modules-packages-or-session" rel="nofollow noreferrer">documentation about the scope of fixtures</a> it says:</p>
<blockquote>
<p>the default is to invoke once per test function</p>
</blockquote>
<p>Does another fixture not qualify as a function?</p>
<p><strong>UPDATE</strong></p>
<p>Based on the comments I now understand the issue, although I still think it's a dangerous behaviour for a unit-testing tool.</p>
<p>Here's another invalid use of fixtures which a naive person like myself might not realize is wrong:</p>
<pre class="lang-py prettyprint-override"><code>@pytest.fixture
def new_object():
"""I want to test instances of this class"""
return list()
@pytest.fixture
def case_1(new_object):
new_object.append(1)
return new_object
@pytest.fixture
def case_2(new_object):
new_object.append(2)
return new_object
def test_cases(case_1, case_2):
assert sum(case_1) + sum(case_2) == 3 # fails: assert (3 + 3) == 3
</code></pre>
|
<python><pytest><fixtures>
|
2025-04-04 23:16:13
| 2
| 11,755
|
Bill
|
79,556,268
| 629,960
|
MCP Python SDK. How to authorise a client with Bearer header with SSE?
|
<p>I am building an MCP server application to connect some services to an LLM. I use the MCP Python SDK <a href="https://github.com/modelcontextprotocol/python-sdk" rel="nofollow noreferrer">https://github.com/modelcontextprotocol/python-sdk</a>.
One of the things I want to implement is authorisation of a user with a token.</p>
<p><a href="https://i.sstatic.net/CU2VUUer.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CU2VUUer.png" alt="MCP inspector" /></a></p>
<p>I see it must be possible somehow.</p>
<p>Most tutorials about MCP cover the STDIO kind of server. Mine will be SSE.</p>
<p>Here is my code:</p>
<pre><code>from mcp.server.fastmcp import FastMCP
from fastapi import FastAPI, Request, Depends, HTTPException
app = FastAPI()
mcp = FastMCP("SMB Share Server")
@mcp.tool()
def create_folder(parent_path: str, name: str) -> str:
"""Create new subfolder in the specified path"""
return f"Folder {name} created in {parent_path}"
app.mount("/", mcp.sse_app())
</code></pre>
<p>How can I read the Authorization header in case it is sent by the client?</p>
<p>I tried the usual FastAPI approaches (setting a dependency, adding <code>request: Request</code> to the arguments) but this doesn't work.</p>
<p>Is there a way?</p>
|
<python><fastapi><large-language-model>
|
2025-04-04 21:17:58
| 1
| 2,113
|
Roman Gelembjuk
|
79,556,229
| 16,563,251
|
Type hint return type of abstract method to be any instance of parent class
|
<p>How can I type hint that the return type of a method of some abstract class is some instance of this class?</p>
<p>My intuitive answer is that</p>
<pre><code>@abstractmethod
def mymethod() -> typing.Self:
pass
</code></pre>
<p>should be the correct way (as suggested in <a href="https://stackoverflow.com/questions/33533148/how-do-i-type-hint-a-method-with-the-type-of-the-enclosing-class">this post</a> for example).
But when I now subclass from this class, the return type is restricted to the type of the child class.
What is the correct way to type hint here, so every subclass of the parent class is allowed?</p>
<p>Example code:</p>
<pre class="lang-py prettyprint-override"><code>from abc import ABC, abstractmethod
from typing import Self, override
import random
class DiceResult(ABC):
@abstractmethod
def reroll(self) -> Self:
pass
class NaturalTwenty(DiceResult):
@override
def reroll(self) -> DiceResult: # Type checkers report an error here
return random.choice([NaturalTwenty(), NaturalOne()])
class NaturalOne(DiceResult):
@override
def reroll(self) -> DiceResult: # Type checkers report an error here
return random.choice([NaturalTwenty(), NaturalOne()])
</code></pre>
<p><a href="https://mypy-play.net/?mypy=latest&python=3.12&gist=9380d4c570381152ccfb5d87ba837ce6" rel="nofollow noreferrer">Mypy playground</a>
<a href="https://basedpyright.com/?typeCheckingMode=all&code=GYJw9gtgBAhgRgYygSwgBzCALlAggIQGEAaWOAZyxBgSwgFMsALMAEwChRIosBPNZADsA5inSYcAZXoAbYKTAA3eiBDJW9dqgzYo1Qa0jt2CGTHLkoAEWQJ6AJXrkArjKwAKAoQCUALnZQgVAAAvCU1LQMzGwBQRrAeipgMjLu5LLA3lAAtAB8UNJy-kElUGjm5MamFVAAcjBYztQyACoA7vSCfO42do4ubn6xgcFKKmoaw1DxieApaRlZeda2Dk6uWL5QAMRQLfz0UAhM9AgA1iqWIPQ6ODCCUOOYUCfXUyXXjSAP%2BoYQAHTHMCrdwAbXqXxgrQ6XV47m8pAhTShAHlBPR4QBdbxVMwWOoNZEyNEY3prAZYIYlUbKVTqTQlGbXOapdJyJb5Mn9DZbXb7NCHY6nC4gK43CSwB5PEAvFQM0qBT5NH73P6Algg8GE5rtTrdBEEyHE9FY7xAA" rel="nofollow noreferrer">Basedpyright playground</a></p>
|
<python><python-typing>
|
2025-04-04 20:50:29
| 2
| 573
|
502E532E
|
79,556,196
| 6,051,639
|
Tried to update conda, now I've ruined my base env
|
<p>I am using Windows 11 with an Intel(R) Xeon(R) Gold 5215 CPU.</p>
<p>I tried to update fiona in my Python 3.12 environment and was told to update conda, but following the conda instructions just resulted in the same prompt to update conda, with the same instructions:</p>
<pre><code>(p312) C:\Users\wesk>conda update fiona
Channels:
- conda-forge
Platform: win-64
Collecting package metadata (repodata.json): done
Solving environment: done
==> WARNING: A newer version of conda exists. <==
current version: 24.11.3
latest version: 25.3.1
Please update conda by running
$ conda update -n base -c conda-forge conda
# All requested packages already installed.
(p312) C:\Users\wesk>conda update -n base -c conda-forge conda
Channels:
- conda-forge
Platform: win-64
Collecting package metadata (repodata.json): done
Solving environment: done
==> WARNING: A newer version of conda exists. <==
current version: 24.11.3
latest version: 25.3.1
Please update conda by running
$ conda update -n base -c conda-forge conda
# All requested packages already installed.
</code></pre>
<p>I found a suggestion on stackoverflow to do <code>conda update python</code> which seems to have ruined everything! Not recommended:</p>
<pre><code>(base) C:\Users\wesk>conda update python
Channels:
- conda-forge
Platform: win-64
Collecting package metadata (repodata.json): done
Solving environment: done
==> WARNING: A newer version of conda exists. <==
current version: 24.11.3
latest version: 25.3.1
Please update conda by running
$ conda update -n base -c conda-forge conda
## Package Plan ##
environment location: C:\Users\wesk\AppData\Local\miniforge3
added / updated specs:
- python
The following packages will be downloaded:
package | build
---------------------------|-----------------
liblzma-5.8.1 | h2466b09_0 102 KB conda-forge
libsqlite-3.49.1 | h67fdade_2 1.0 MB conda-forge
python-3.12.9 |h3f84c4b_1_cpython 15.2 MB conda-forge
------------------------------------------------------------
Total: 16.3 MB
The following packages will be UPDATED:
liblzma 5.6.3-h2466b09_1 --> 5.8.1-h2466b09_0
libsqlite 3.48.0-h67fdade_0 --> 3.49.1-h67fdade_2
python 3.12.8-h3f84c4b_1_cpython --> 3.12.9-h3f84c4b_1_cpython
Proceed ([y]/n)? y
Downloading and Extracting Packages:
Preparing transaction: done
Verifying transaction: done
Executing transaction: done
Traceback (most recent call last):
File "C:\Users\wesk\AppData\Local\miniforge3\Lib\site-packages\conda\exception_handler.py", line 18, in __call__
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\wesk\AppData\Local\miniforge3\Lib\site-packages\conda\cli\main.py", line 73, in main_sourced
from ..base.context import context
File "C:\Users\wesk\AppData\Local\miniforge3\Lib\site-packages\conda\base\context.py", line 32, in <module>
from ..common._os.linux import linux_get_libc_version
File "C:\Users\wesk\AppData\Local\miniforge3\Lib\site-packages\conda\common\_os\__init__.py", line 8, in <module>
from .windows import get_free_space_on_windows as get_free_space
File "C:\Users\wesk\AppData\Local\miniforge3\Lib\site-packages\conda\common\_os\windows.py", line 11, in <module>
from ctypes import (
File "C:\Users\wesk\AppData\Local\miniforge3\Lib\ctypes\__init__.py", line 8, in <module>
from _ctypes import Union, Structure, Array
ImportError: DLL load failed while importing _ctypes: The specified procedure could not be found.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\wesk\AppData\Local\miniforge3\Scripts\conda-script.py", line 12, in <module>
sys.exit(main())
^^^^^^
File "C:\Users\wesk\AppData\Local\miniforge3\Lib\site-packages\conda\cli\main.py", line 105, in main
return conda_exception_handler(main, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\wesk\AppData\Local\miniforge3\Lib\site-packages\conda\exception_handler.py", line 386, in conda_exception_handler
return_value = exception_handler(func, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\wesk\AppData\Local\miniforge3\Lib\site-packages\conda\exception_handler.py", line 21, in __call__
return self.handle_exception(exc_val, exc_tb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\wesk\AppData\Local\miniforge3\Lib\site-packages\conda\exception_handler.py", line 52, in handle_exception
from .exceptions import (
File "C:\Users\wesk\AppData\Local\miniforge3\Lib\site-packages\conda\exceptions.py", line 31, in <module>
from .models.channel import Channel
File "C:\Users\wesk\AppData\Local\miniforge3\Lib\site-packages\conda\models\channel.py", line 26, in <module>
from ..base.context import context
File "C:\Users\wesk\AppData\Local\miniforge3\Lib\site-packages\conda\base\context.py", line 32, in <module>
from ..common._os.linux import linux_get_libc_version
File "C:\Users\wesk\AppData\Local\miniforge3\Lib\site-packages\conda\common\_os\__init__.py", line 8, in <module>
from .windows import get_free_space_on_windows as get_free_space
File "C:\Users\wesk\AppData\Local\miniforge3\Lib\site-packages\conda\common\_os\windows.py", line 11, in <module>
from ctypes import (
File "C:\Users\wesk\AppData\Local\miniforge3\Lib\ctypes\__init__.py", line 8, in <module>
from _ctypes import Union, Structure, Array
ImportError: DLL load failed while importing _ctypes: The specified procedure could not be found.
</code></pre>
<p>Now I can't seem to do anything: I can't activate other environments, can't do <code>conda list</code>, etc. Where do I go from here, now that I have apparently screwed up everything?</p>
<p>Cheers</p>
|
<python><conda>
|
2025-04-04 20:30:33
| 0
| 424
|
Wesley Kitlasten
|
79,556,028
| 8,830,612
|
Azure ML Train Test Valid split for image data
|
<p>I have annotated a couple of hundred pictures with bounding boxes in Azure ML Studio from a Labeling Project.</p>
<p>I have exported the annotations in ML Table and COCO JSON format; both are available in Azure ML as data assets.</p>
<p>How do I split the data into train, test and validation sets? I want to use the annotations and images for training a computer vision model.</p>
<p>What is the best approach and how to do it?</p>
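<p>For context, a minimal sketch of what a plain random split of the COCO export could look like (assuming the standard COCO layout with <code>images</code>, <code>annotations</code> and <code>categories</code> keys, and a hypothetical filename; whether this plays well with Azure ML data assets is exactly what I am asking about):</p>
<pre><code>import json
import random

with open("coco_annotations.json") as f:  # hypothetical name of the exported file
    coco = json.load(f)

random.seed(0)
images = list(coco["images"])
random.shuffle(images)

n = len(images)
splits = {
    "train": images[: int(0.7 * n)],
    "valid": images[int(0.7 * n): int(0.85 * n)],
    "test": images[int(0.85 * n):],
}

for name, imgs in splits.items():
    ids = {img["id"] for img in imgs}
    subset = {
        "images": imgs,
        "annotations": [a for a in coco["annotations"] if a["image_id"] in ids],
        "categories": coco.get("categories", []),
    }
    with open(f"coco_{name}.json", "w") as out:
        json.dump(subset, out)
</code></pre>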
|
<python><azure><azure-machine-learning-service>
|
2025-04-04 18:16:05
| 1
| 518
|
default_settings
|
79,556,023
| 24,108
|
Have a dataclass extend an ABC with an abstract property
|
<p>I want to define an abstract base class that has a property on it. I then want to have a dataclass extend that base class and have the abstract property be one of the dataclass's fields.</p>
<pre><code>from abc import ABC, abstractmethod
from dataclasses import dataclass
class Base(ABC):
@property
@abstractmethod
def prop(self) -> str: ...
@dataclass(kw_only=True)
class DataClass(Base):
prop: str
d = DataClass(prop="foo")
</code></pre>
<p>This errors with:</p>
<pre><code>TypeError: Can't instantiate abstract class DataClass with abstract method prop
</code></pre>
|
<python><python-dataclasses><abstract-base-class>
|
2025-04-04 18:13:26
| 1
| 15,040
|
John Oxley
|
79,555,896
| 2,501,622
|
Python script locked by thread
|
<p>I would like this Python 3.10 script (where the <code>pynput</code> code is partially based on <a href="https://stackoverflow.com/a/43106497">this answer</a>) to enter the <code>while</code> loop and at the same time monitor the keys pressed on the keyboard. When <code>q</code> is pressed, I would like it to end.</p>
<p>(I do not know threads very well, but the <code>while</code> loop should probably run in the main thread and the keyboard monitor should run in a child, concurrent thread.)</p>
<pre><code>#!/usr/bin/python3
import threading
import sys
from pynput import keyboard
def on_key_press(key):
try:
k = key.char
except:
k = key.name
if k in ['q']:
exit_time = True
exit_time = False
print("Press q to close.")
keyboard_listener = keyboard.Listener(on_press=on_key_press)
keyboard_listener.start()
keyboard_listener.join()
while not exit_time:
sleep(1)
print("Goodbye")
sys.exit(0)
</code></pre>
<p>It instead gets locked in an endless wait after <code>keyboard_listener.start()</code>. I don't know if <code>keyboard_listener.join()</code> doesn't run at all, or if it causes the program to lock.</p>
<p>However, the <code>while</code> loop is not run. If I end the program with Ctrl+C:</p>
<pre><code>^CTraceback (most recent call last):
File "/my/source/./file.py", line 22, in <module>
keyboard_listener.join()
File "/my/.local/lib/python3.10/site-packages/pynput/_util/__init__.py", line 295, in join
super(AbstractListener, self).join(timeout, *args)
File "/usr/lib/python3.10/threading.py", line 1096, in join
self._wait_for_tstate_lock()
File "/usr/lib/python3.10/threading.py", line 1116, in _wait_for_tstate_lock
if lock.acquire(block, timeout):
KeyboardInterrupt
</code></pre>
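<p>For reference, this is roughly the structure I am trying to achieve (a sketch, assuming a module-level flag plus <code>global</code> is an acceptable way to share state with the listener thread); I mainly want to understand why the version above blocks:</p>
<pre><code>#!/usr/bin/python3
import sys
from time import sleep
from pynput import keyboard

exit_time = False  # flag polled by the main loop


def on_key_press(key):
    global exit_time
    try:
        k = key.char
    except AttributeError:
        k = key.name
    if k == 'q':
        exit_time = True


print("Press q to close.")
keyboard_listener = keyboard.Listener(on_press=on_key_press)
keyboard_listener.start()  # listener runs in its own thread; no join() here

while not exit_time:
    sleep(1)

keyboard_listener.stop()
print("Goodbye")
sys.exit(0)
</code></pre>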
|
<python><multithreading><keyboard><python-multithreading><python-3.10>
|
2025-04-04 16:50:11
| 1
| 1,544
|
BowPark
|
79,555,604
| 967,621
|
Run Ruff in Emacs
|
<p>How can I run Ruff in Emacs? I need to enable 2 commands on the current buffer:</p>
<ul>
<li><code>ruff check --select ALL current_buffer</code> โ bind to <code>M-x ruff-check</code></li>
<li><code>ruff check --select ALL --fix current_buffer</code> โ bind to <code>M-x ruff-fix</code></li>
</ul>
<p>I can run each of these commands with a file argument on the command line, and <code>ruff</code> is in my <code>$PATH</code>.</p>
<p>I tried the solutions from here: <a href="https://docs.astral.sh/ruff/editors/setup/#emacs" rel="nofollow noreferrer">Setup | Ruff</a>, and have the following lines in <code>~/.emacs</code>, but they are not running <code>ruff</code> on save as expected. Besides, I really want to enable the two distinct commands above, rather than running <code>ruff</code> on save.</p>
<pre class="lang-lisp prettyprint-override"><code>(add-hook 'python-mode-hook 'eglot-ensure)
(with-eval-after-load 'eglot
(add-to-list 'eglot-server-programs
'(python-mode . ("ruff" "server")))
(add-hook 'after-save-hook 'eglot-format))
(require 'ruff-format)
(add-hook 'python-mode-hook 'ruff-format-on-save-mode)
</code></pre>
|
<python><emacs><ruff>
|
2025-04-04 14:38:46
| 1
| 12,712
|
Timur Shtatland
|
79,555,397
| 13,330,435
|
Python's analog of xtregar
|
<p>I'm new to Python. I would like to know whether there is a package that does the same thing as Stata's <code>xtregar</code> or R's <code>panelAR</code>.</p>
<p>I would like to estimate the following regression:</p>
<pre><code>Y_{i,t} = \alpha + \beta X_{i,t-1} + \eta_i + \nu_t + u_{i,t}
where u_{i,t} = \rho u_{i,t-1} + \omega_{i,t}
</code></pre>
<p>That is, the errors are possibly heteroskedastic and equicorrelated within individuals, in the sense of Baltagi and Wu (1999) or Peterson (2007).</p>
<p>I would appreciate any help.</p>
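<p>For what it's worth, the closest I have managed on my own is a two-way fixed-effects regression with clustered standard errors via <code>statsmodels</code> (a sketch with made-up column names and synthetic data; it does not implement the Baltagi and Wu AR(1) error structure, which is exactly the part I am missing):</p>
<pre><code>import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Tiny synthetic panel: 10 entities x 20 periods (column names are made up).
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "entity": np.repeat(np.arange(10), 20),
    "time": np.tile(np.arange(20), 10),
    "x": rng.normal(size=200),
})
df["y"] = 0.5 * df["x"] + rng.normal(size=200)

df = df.sort_values(["entity", "time"])
df["x_lag"] = df.groupby("entity")["x"].shift(1)
dfc = df.dropna(subset=["x_lag"])

# Two-way fixed effects (entity and time dummies), errors clustered by entity;
# this does NOT apply the Baltagi-Wu AR(1) transformation.
model = smf.ols("y ~ x_lag + C(entity) + C(time)", data=dfc)
res = model.fit(cov_type="cluster", cov_kwds={"groups": dfc["entity"]})
print(res.params["x_lag"])
</code></pre>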
|
<python><linear-regression>
|
2025-04-04 13:06:06
| 0
| 317
|
jorgep
|
79,555,255
| 25,413,271
|
Numpy- strange behaviour of __setitem__ of array
|
<p>Say we have an array:</p>
<pre><code>a = np.array([
[11, 12, 13],
[21, 22, 23],
[31, 32, 33],
[41, 42, 43]
])
a[[1, 3], [0, 2]] = 0
</code></pre>
<p>So we want to set the elements in columns 0 and 2 to zero, in both row 1 and row 3.
But what we get is:</p>
<pre><code>[[11 12 13]
[ 0 22 23]
[31 32 33]
[41 42 0]]
</code></pre>
<p>Why not:</p>
<pre><code>[[11 12 13]
[ 0 22 0]
[31 32 33]
[0 42 0]]
</code></pre>
<p>?</p>
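<p>For what it's worth, I can get the result I expected with <code>np.ix_</code>, so the question is really about why plain integer-array indexing behaves differently:</p>
<pre><code>a[np.ix_([1, 3], [0, 2])] = 0  # zeroes the full 2x2 cross-product of rows and columns
</code></pre>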
|
<python><arrays><numpy>
|
2025-04-04 11:59:37
| 2
| 439
|
IzaeDA
|
79,555,174
| 6,498,753
|
Conversion from ECEF (spherical Earth) to geodetic coordinates
|
<p>The ECEF (geocentric) coordinate system is generally referred to a reference ellipsoid (e.g. WGS84). The conversion to geodetic coordinates is then straightforward in Python:</p>
<pre><code>geocentric_crs = {"proj":'geocent', "ellps":'WGS84', "datum":'WGS84'}
geodetic_crs = {"proj": "latlong", "datum": "WGS84"}
transformer = pyproj.Transformer.from_crs(geocentric_crs, geodetic_crs, always_xy=True)
lons, lats, alts = transformer.transform(x, y, z, radians=False)
</code></pre>
<p>But what if the ECEF coordinates have been calculated for a spherical Earth instead? Is it OK to still use the code above and correct only the alts, according to:</p>
<pre><code>radius = np.sqrt(x**2 + y**2 + z**2)
alts = radius - 6371000  # Assuming a fixed radius of 6371 km
</code></pre>
<p>?</p>
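<p>For comparison, this is the purely spherical conversion I would write by hand (a sketch, assuming a perfectly spherical Earth with a 6371 km radius and the same <code>x, y, z</code> arrays as above):</p>
<pre><code>import numpy as np

R = 6371000.0  # assumed sphere radius in metres

radius = np.sqrt(x**2 + y**2 + z**2)
lats = np.degrees(np.arcsin(z / radius))  # geocentric latitude equals geodetic on a sphere
lons = np.degrees(np.arctan2(y, x))
alts = radius - R
</code></pre>
<p>My doubt is whether mixing the spherical ECEF coordinates with the WGS84-based <code>pyproj</code> transform above introduces errors in the latitudes as well, not just in the altitudes.</p>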
|
<python><coordinate-systems><coordinate-transformation><pyproj>
|
2025-04-04 11:22:52
| 0
| 461
|
Roland
|
79,555,053
| 6,029,488
|
Group by and apply multiple custom functions on multiple columns in python pandas
|
<p>Consider the following dataframe example:</p>
<pre><code>id date hrz tenor 1 2 3 4
AAA 16/03/2010 2 6m 0.54 0.54 0.78 0.19
AAA 30/03/2010 2 6m 0.05 0.67 0.20 0.03
AAA 13/04/2010 2 6m 0.64 0.32 0.13 0.20
AAA 27/04/2010 2 6m 0.99 0.53 0.38 0.97
AAA 11/05/2010 2 6m 0.46 0.90 0.11 0.14
AAA 25/05/2010 2 6m 0.41 0.06 0.96 0.31
AAA 08/06/2010 2 6m 0.19 0.73 0.58 0.80
AAA 22/06/2010 2 6m 0.40 0.95 0.14 0.56
AAA 06/07/2010 2 6m 0.22 0.74 0.85 0.94
AAA 20/07/2010 2 6m 0.34 0.17 0.03 0.77
AAA 03/08/2010 2 6m 0.13 0.32 0.39 0.95
AAA 16/03/2010 2 1y 0.54 0.54 0.78 0.19
AAA 30/03/2010 2 1y 0.05 0.67 0.20 0.03
AAA 13/04/2010 2 1y 0.64 0.32 0.13 0.20
AAA 27/04/2010 2 1y 0.99 0.53 0.38 0.97
AAA 11/05/2010 2 1y 0.46 0.90 0.11 0.14
AAA 25/05/2010 2 1y 0.41 0.06 0.96 0.31
AAA 08/06/2010 2 1y 0.19 0.73 0.58 0.80
AAA 22/06/2010 2 1y 0.40 0.95 0.14 0.56
AAA 06/07/2010 2 1y 0.22 0.74 0.85 0.94
AAA 20/07/2010 2 1y 0.34 0.17 0.03 0.77
AAA 03/08/2010 2 1y 0.13 0.32 0.39 0.95
</code></pre>
<p>How can I <code>groupby</code> the variables <code>id</code>, <code>hrz</code> and <code>tenor</code> and apply the following custom functions across the dates?</p>
<pre><code> def ks_test(x):
return scipy.stats.kstest(np.sort(x), 'uniform')[0]
def cvm_test(x):
n = len(x)
i = np.arange(1, n + 1)
x = np.sort(x)
w2 = (1 / (12 * n)) + np.sum((x - ((2 * i - 1) / (2 * n))) ** 2)
return w2
</code></pre>
<p>The desired output is the following dataframe (the numbers shown are just example values):</p>
<pre><code>id hrz tenor test 1 2 3 4
AAA 2 6m ks_test 0.04 0.06 0.02 0.03
AAA 2 6m cvm_test 0.09 0.17 0.03 0.05
AAA 2 1y ks_test 0.04 0.06 0.02 0.03
AAA 2 1y cvm_test 0.09 0.17 0.03 0.05
</code></pre>
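<p>The closest I have got is <code>agg</code> with a list of functions and then stacking the function level into the index (a sketch, assuming the value columns are literally named <code>1</code> to <code>4</code>); I am not sure this is the cleanest approach:</p>
<pre><code>value_cols = [1, 2, 3, 4]  # or ['1', '2', '3', '4'] depending on how the columns are typed

out = (
    df.groupby(['id', 'hrz', 'tenor'])[value_cols]
      .agg([ks_test, cvm_test])   # MultiIndex columns: (value column, test name)
      .stack(level=1)             # move the test name into the index
      .rename_axis(['id', 'hrz', 'tenor', 'test'])
      .reset_index()
)
</code></pre>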
|
<python><pandas><group-by><apply>
|
2025-04-04 10:22:33
| 2
| 479
|
Whitebeard13
|
79,554,857
| 18,876,759
|
Concrete RTPExtension for Scapy
|
<p>Scapy defines a RTP packet and a RTPExtension</p>
<p>The RTPExtension is defined as follows:</p>
<pre class="lang-py prettyprint-override"><code>class RTPExtension(Packet):
name = "RTP extension"
fields_desc = [ShortField("header_id", 0),
FieldLenField("header_len", None, count_of="header", fmt="H"), # noqa: E501
FieldListField('header', [], IntField("hdr", 0), count_from=lambda pkt: pkt.header_len)] # noqa: E501
</code></pre>
<p>I have a specific implementation of such an extension and want to be able to parse its header.</p>
<p>The header length is always one, and its content is a list of 4-bit enum fields. It has a fixed <code>header_id</code>, let's say 0xABC.</p>
<p>How can I implement this so that scapy parses/dissects the packet correctly?</p>
<p>I've tried</p>
<pre class="lang-py prettyprint-override"><code>class MyRTPExtensionHeader(Packet):
name = "My RTP extension header"
fields_desc = [
BitEnumField("field_a", 0, 4, enum_a),
BitEnumField("field_b", 0, 4, enum_b),
BitEnumField("field_c", 0, 4, enum_c),
...
]
bind_layers(RTPExtension, MyRTPExtensionHeader, header=0xABC)
RTP(...) / RTPExtension(...) / MyRTPExtensionHeader()
</code></pre>
<p>But this way, the data of MyRTPExtensionHeader ends up in the payload field of RTPExtension, and the length field of RTPExtension is not set correctly. Dissection also does not work as expected.</p>
<p>So I tried to re-implement the RTP extension and use it directly on top of RTP:</p>
<pre class="lang-py prettyprint-override"><code>class MyRTPExtension(Packet):
name = "My RTP extension header"
fields_desc = [
# Implement the fields from RTPExtension
ShortField("header_id", 0xABC), # Header field set to my fixed header_id value
FieldLenField("header_len", 1), # fixed length or count all remaining fields' length
# My fields
BitEnumField("field_a", 0, 4, enum_a),
BitEnumField("field_b", 0, 4, enum_b),
BitEnumField("field_c", 0, 4, enum_c),
...
]
bind_layers(RTP, MyRTPExtension, extension=1)
RTP(...) / MyRTPExtension()
</code></pre>
<p>This way I can build my packets correctly. But when I load an RTP packet from a byte string, the packet type is not correct:
it always uses the generic RTPExtension. I need to somehow tell Scapy to use MyRTPExtension if the header_id inside the extension is 0xABC, and the generic RTPExtension otherwise.
How can I achieve this?</p>
<h2>EDIT</h2>
<p>The only solution I found so far is to patch <code>RTP.guess_payload_class</code>. Is there some more elegant/straightforward way?</p>
<pre class="lang-py prettyprint-override"><code>def rtp_modified_guess_payload_class(self, pkt):
if self.extension:
if pkt.startswith(b'\x01g'):
return MyRTPExtension
else:
return RTPExtension
RTP.guess_payload_class = rtp_modified_guess_payload_class
</code></pre>
|
<python><scapy>
|
2025-04-04 08:42:13
| 0
| 468
|
slarag
|
79,554,839
| 1,574,952
|
Python tab completion triggers attribute access
|
<p>I'm running into an issue with tab completion in Python. I have two classes, one serving as a backend that is responsible for managing i/o of a large data collection, and a second that serves as a UI for access to that data. The backend implements a lazy-loading approach where expensive i/o operations are delayed until the relevant class attribute is accessed. The backend also provides a static listing of the available data. The UI class redirects attribute access that it finds in the listing of available data to the backend class.</p>
<p>Now I want to support tab completion. If I do nothing, the attributes for backend data fields are not listed on the UI, as expected. My thinking was to simply implement <code>__dir__</code> on the UI and return the list of available data (a list of strings). A full implementation should supplement this with other relevant attributes, but that's not needed to illustrate the problem that I run into.</p>
<p>When I implement <code>__dir__</code> in this way, fire up an interpreter, create a UI object and try tab completion, for some reason something in the interpreter seems to take it upon itself to check that the attributes that I've listed exist... by accessing them! This triggers an immediate read of <em>all</em> available data fields, causing tab completion to lag for a few minutes and many GB of memory to be consumed. I do get the desired listing in tab completion, eventually...</p>
<p>Here's a simplified example illustrating the issue. If I do <code>d = DataUI(); d.<tab></code> in my interpreter, it triggers <code>READING DATA</code>.</p>
<pre><code>class DataBackend(object):
    def __init__(self):
        self.available_attrs = ("a", )
        self._a = None
        return

    @property
    def a(self):
        if self._a is None:
            print("READING DATA")
            self._a = 1
        return self._a


class DataUI(object):
    def __init__(self):
        self._data_backend = DataBackend()
        return

    def __getattribute__(self, attr):
        backend = object.__getattribute__(self, "_data_backend")
        if attr in backend.available_attrs:
            return getattr(backend, attr)
        try:
            return object.__getattribute__(self, attr)
        except AttributeError:
            return getattr(backend, attr)

    def __dir__(self):
        return self._data_backend.available_attrs
</code></pre>
<p>I had initially assumed that <code>dir()</code> was responsible for the unwanted behaviour, but I can do <code>d = DataUI(); dir(d)</code> without triggering <code>READING DATA</code> and get the expected list of attributes, in this case <code>['a']</code>.</p>
<p>I've been trying this on <code>Python 3.13.2 (main, Feb 4 2025, 00:00:00) [GCC 14.2.1 20250110 (Red Hat 14.2.1-7)] on linux</code>; the interpreter supports tab completion.</p>
<p>Is there a way to avoid attribute access during tab completion, or do I have to code around it, for example by temporarily disabling i/o operations while completion is running?</p>
<hr />
<p>Some further information: I've obtained a traceback when triggering tab completion via <code>traceback.format_stack()</code>:</p>
<pre><code>File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/usr/lib64/python3.13/_pyrepl/__main__.py", line 6, in <module>
__pyrepl_interactive_console()
File "/usr/lib64/python3.13/_pyrepl/main.py", line 59, in interactive_console
run_multiline_interactive_console(console)
File "/usr/lib64/python3.13/_pyrepl/simple_interact.py", line 143, in run_multiline_interactive_console
statement = multiline_input(more_lines, ps1, ps2)
File "/usr/lib64/python3.13/_pyrepl/readline.py", line 389, in multiline_input
return reader.readline()
File "/usr/lib64/python3.13/_pyrepl/reader.py", line 802, in readline
self.handle1()
File "/usr/lib64/python3.13/_pyrepl/reader.py", line 785, in handle1
self.do_cmd(cmd)
File "/usr/lib64/python3.13/_pyrepl/reader.py", line 710, in do_cmd
command.do()
File "/usr/lib64/python3.13/_pyrepl/completing_reader.py", line 175, in do
r.cmpltn_menu_choices = r.get_completions(stem)
File "/usr/lib64/python3.13/_pyrepl/readline.py", line 152, in get_completions
next = function(stem, state)
File "/usr/lib64/python3.13/rlcompleter.py", line 94, in complete
self.matches = self.attr_matches(text)
File "/usr/lib64/python3.13/rlcompleter.py", line 191, in attr_matches
if (value := getattr(thisobject, word, None)) is not None:
File "/home/txwx36/tab_completion.py", line 30, in __getattribute__
return getattr(backend, attr)
File "/home/txwx36/tab_completion.py", line 14, in a
for line in traceback.format_stack():
</code></pre>
<p>The offending part seems to be</p>
<pre><code>File "/usr/lib64/python3.13/rlcompleter.py", line 191, in attr_matches
if (value := getattr(thisobject, word, None)) is not None:
</code></pre>
<p>In that file I find:</p>
<pre><code>- The evaluation of the NAME.NAME... form may cause arbitrary application
defined code to be executed if an object with a __getattr__ hook is found.
Since it is the responsibility of the application (or the user) to enable this
feature, I consider this an acceptable risk. More complicated expressions
(e.g. function calls or indexing operations) are *not* evaluated.
</code></pre>
<p>It seems like this is (almost?) expected behaviour. It claims to evaluate the expression up to the last dot, but it also accesses the attributes. Still not sure what to do to avoid this, though.</p>
<p>Also in that file:</p>
<pre><code> while True:
for word in words:
if (word[:n] == attr and
not (noprefix and word[:n+1] == noprefix)):
match = "%s.%s" % (expr, word)
if isinstance(getattr(type(thisobject), word, None),
property):
# bpo-44752: thisobject.word is a method decorated by
# `@property`. What follows applies a postfix if
# thisobject.word is callable, but know we know that
# this is not callable (because it is a property).
# Also, getattr(thisobject, word) will evaluate the
# property method, which is not desirable.
matches.append(match)
continue
if (value := getattr(thisobject, word, None)) is not None:
matches.append(self._callable_postfix(value, match))
else:
matches.append(match)
</code></pre>
<p><code>property</code> attributes are not supposed to be evaluated as per the comment, but because the attribute is a <code>property</code> of the backend class, in my case it's not being recognized as a property of the UI class. Maybe this can provide a workaround...</p>
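<p>To test that idea, the following sketch (assuming <code>available_attrs</code> can be exposed as a class-level attribute on the backend) installs a forwarding <code>property</code> for each data field on the UI class itself, so that rlcompleter's property check applies to <code>DataUI</code> and the value is never evaluated during completion:</p>
<pre><code>class DataBackend(object):
    available_attrs = ("a",)  # assumed to be known at class definition time

    def __init__(self):
        self._a = None

    @property
    def a(self):
        if self._a is None:
            print("READING DATA")
            self._a = 1
        return self._a


class DataUI(object):
    def __init__(self):
        self._data_backend = DataBackend()

    def __dir__(self):
        return list(DataBackend.available_attrs)


def _make_forwarder(name):
    # A property on DataUI itself: rlcompleter sees it via getattr(type(obj), word)
    # and appends the match without evaluating the value (the bpo-44752 branch).
    return property(lambda self: getattr(self._data_backend, name))


for _name in DataBackend.available_attrs:
    setattr(DataUI, _name, _make_forwarder(_name))
</code></pre>
<p>With this, I'd expect <code>d = DataUI(); d.&lt;tab&gt;</code> to list <code>a</code> without triggering <code>READING DATA</code>, but it requires knowing the attribute list at class-definition time, which I would rather avoid.</p>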
|
<python><lazy-loading><magic-methods><tab-completion>
|
2025-04-04 08:30:41
| 1
| 364
|
Kyle
|