QuestionId
int64 74.8M
79.8M
| UserId
int64 56
29.4M
| QuestionTitle
stringlengths 15
150
| QuestionBody
stringlengths 40
40.3k
| Tags
stringlengths 8
101
| CreationDate
stringdate 2022-12-10 09:42:47
2025-11-01 19:08:18
| AnswerCount
int64 0
44
| UserExpertiseLevel
int64 301
888k
| UserDisplayName
stringlengths 3
30
⌀ |
|---|---|---|---|---|---|---|---|---|
79,596,118
| 6,893,983
|
Safely extract uploaded ZIP files in Python
|
<p>I'm working on a python REST API that allows users to upload ZIP files. Before extracting them, I want to protect against common vulnerabilities, especially Zip bombs. Is there a way (ideally based on standard libraries like zipfile) to safely validate and extract ZIP uploads in Python?</p>
<p>I looked into third-party libraries like <a href="https://github.com/tonyrla/DefuseZip/tree/main" rel="nofollow noreferrer">defusedzip</a> or <a href="https://pypi.org/project/SecureZip/" rel="nofollow noreferrer">python-securezip</a> but they seem outdated and not maintained.</p>
<p>I also checked this <a href="https://stackoverflow.com/questions/10060069/safely-extract-zip-or-tar-using-python">related Stackoverflow question</a>, but I could not find anything that mentions the protection against Zip bombs.</p>
|
<python><zip><python-zipfile>
|
2025-04-28 09:10:00
| 3
| 994
|
sevic
|
79,595,959
| 4,041,117
|
Matplotlib figure with 2 animation subplots: how to update both
|
<p>I'm trying to vizualize simulation results with a figure containing 2 subplots using matplotlib pyplot. Both should contain animation: one uses netgraph library (it's a graph with nodes showing flows of the network) and the other should plot a line graph of another 2 variables (to keep it simple here lets use: sin(x) & cos(x) where both should be updated at each time period--just like the graph). I have an update function to update the graph, but I'm unsure, how to update the line plot at the same time. I would appreciate any suggestions.</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
from netgraph import Graph
# Simulate a dynamic network with
total_frames = 21
total_nodes = 5
NODE_LABELS = {0: 'A', 1: 'B', 2: 'C', 3: 'D', 4: 'E'}
NODE_POS = {0: (0.0, 0.5), 1: (0.65, 0.25), 2: (0.7, 0.5), 3: (0.5, 0.75), 4: (0.25, 0.25)}
adjacency_matrix = np.random.rand(total_nodes, total_nodes) < 0.25
weight_matrix = np.random.randn(total_frames, total_nodes, total_nodes)
# Normalise the weights, such that they are on the interval [0, 1].
vmin, vmax = -2, 2
weight_matrix[weight_matrix<vmin] = vmin
weight_matrix[weight_matrix>vmax] = vmax
weight_matrix -= vmin
weight_matrix /= vmax - vmin
cmap = plt.cm.RdGy
def annotate_axes(fig):
for i, ax in enumerate(fig.axes):
ax.tick_params(labelbottom=False, labelleft=False)
fig = plt.figure(figsize=(11, 6))
ax1 = plt.subplot2grid((6, 11), (0, 0), rowspan=6, colspan=5)
ax2 = plt.subplot2grid((6, 11), (1, 6), rowspan=2, colspan=5)
annotate_axes(fig)
title1 = ax1.set_title('Simulation viz', x=0.25, y=1.25)
title2 = ax2.set_title('Flow @t', x=0.15, y=1.25)
g = Graph(adjacency_matrix, node_labels=NODE_LABELS,
node_layout = NODE_POS, edge_cmap=cmap, arrows=True, ax=ax1)
def update(ii):
artists = []
for jj, kk in zip(*np.where(adjacency_matrix)):
w = weight_matrix[ii, jj, kk]
artist = g.edge_artists[(jj, kk)]
artist.set_facecolor(cmap(w))
artist.update_width(0.03 * np.abs(w-0.5))
artists.append(artist)
return artists
animation = FuncAnimation(fig, update, frames=total_frames, interval=200, blit=True, repeat=False)
plt.show()
</code></pre>
|
<python><matplotlib><animation><subplot><netgraph>
|
2025-04-28 07:12:53
| 1
| 481
|
carpediem
|
79,595,864
| 436,287
|
Python time.strftime gives different results for %Z and %z
|
<p>I'm getting some strange behavior when I pass a UTC time struct to python's <a href="https://docs.python.org/3/library/time.html#time.strftime" rel="nofollow noreferrer"><code>time.strftime</code></a>. Using <code>%z</code> seems to always give me my local offset rather than 0:</p>
<pre class="lang-py prettyprint-override"><code>>>> import time
>>> a = time.gmtime()
>>> a
time.struct_time(tm_year=2025, tm_mon=4, tm_mday=28, tm_hour=5, tm_min=13, tm_sec=45, tm_wday=0, tm_yday=118, tm_isdst=0)
>>> a.tm_zone
'UTC'
>>> a.tm_gmtoff
0
>>> time.strftime("%a, %d %b %Y %I:%M %p %Z", a)
'Mon, 28 Apr 2025 05:13 AM UTC'
>>> time.strftime("%a, %d %b %Y %I:%M %p %z", a)
'Mon, 28 Apr 2025 05:13 AM -0500'
</code></pre>
<p>The variable <code>a</code> is in UTC, so I'd expect <code>%Z</code> to give me <code>UTC</code> (which it does), and <code>%z</code> to give me <code>+0000</code>, but it doesn't. It prints out my local tz offset. And it's not even adjusting the time for that offset (it's still 5:13). And it's not even the right offset, we're in daylight saving time right now, so it should be <code>-0400</code>.</p>
<p>I'm using Python 3.13.2 installed from homebrew on macos 15.3.2. But I also just checked the system <code>python3</code> from Apple, which is 3.9.6, and it exhibits the same behavior.</p>
<p>The command-line <code>date</code> utility seems to always have <code>%Z</code> and <code>%z</code> agree with each other:</p>
<pre class="lang-bash prettyprint-override"><code>$ date '+%a, %d %b %Y %I:%M %p %Z'
Mon, 28 Apr 2025 01:39 AM EDT
$ date '+%a, %d %b %Y %I:%M %p %z'
Mon, 28 Apr 2025 01:39 AM -0400
# And using UTC:
$ TZ=UTC date '+%a, %d %b %Y %I:%M %p %Z'
Mon, 28 Apr 2025 05:39 AM UTC
$ TZ=UTC date '+%a, %d %b %Y %I:%M %p %z'
Mon, 28 Apr 2025 05:39 AM +0000
</code></pre>
<p>Is this known? intentional? documented? I saw some other questions on stackoverflow with similar questions, but none were exactly like what I'm seeing (specifically <a href="https://stackoverflow.com/questions/32353015/python-time-strftime-z-is-always-zero-instead-of-timezone-offset">python time.strftime %z is always zero instead of timezone offset</a> where <code>%z</code> was always giving 0 when they expected a non-zero offset, which is the opposite of what I'm seeing). Is there a way to get <code>time.strftime</code> to always format a time with the timezone of the struct passed to it?</p>
|
<python><macos><time><strftime>
|
2025-04-28 05:53:15
| 3
| 8,517
|
onlynone
|
79,595,840
| 15,416,614
|
Why doesn't multiprocessing.Process.start() in Python guarantee that the process has started?
|
<p>Here is a code to demo my question:</p>
<pre><code>from multiprocessing import Process
def worker():
print("Worker running")
if __name__ == "__main__":
p = Process(target=worker)
p.start()
input("1...")
input("2...")
p.join()
</code></pre>
<p>Note, ran on Python 3.13, Windows x64.</p>
<p>And the output I got is (after inputting <code>Enter</code> twice):</p>
<pre><code>1...
2...
Worker running
Process finished with exit code 0
</code></pre>
<p>From the output, we can see the process actually initialized and started to run after the 2nd input. While I thought <code>start()</code> should block and guarantee the child process is fully initialized.</p>
<p>Is this a normal behavior of Python multiprocessing?</p>
<p>Because if Threading is used here instead, this issue seldom occur. I always get the thread run before the line <code>input("1...")</code>.</p>
<p>May I ask, if <code>Process.start()</code> doesn't guarantee the process is fully-started, how should we code to ensure the child process is actually running before proceeding in the parent?</p>
|
<python><python-3.x><multiprocessing><python-multiprocessing>
|
2025-04-28 05:16:02
| 1
| 387
|
Gordon Hui
|
79,595,836
| 4,352,047
|
Generating key - value map from aggregates
|
<p>I have raw data that appears like this:</p>
<pre><code>┌─────────┬────────┬─────────────────────┐
│ price │ size │ timestamp │
│ float │ uint16 │ timestamp │
├─────────┼────────┼─────────────────────┤
│ 1697.0 │ 11 │ 2009-09-27 18:00:00 │
│ 1697.0 │ 5 │ 2009-09-27 18:00:00 │
│ 1697.0 │ 5 │ 2009-09-27 18:00:00 │
│ 1697.0 │ 5 │ 2009-09-27 18:00:00 │
│ 1697.0 │ 5 │ 2009-09-27 18:00:00 │
│ 1697.0 │ 4 │ 2009-09-27 18:00:00 │
│ 1697.0 │ 1 │ 2009-09-27 18:00:00 │
│ 1697.0 │ 1 │ 2009-09-27 18:00:00 │
│ 1697.0 │ 1 │ 2009-09-27 18:00:00 │
│ 1697.5 │ 3 │ 2009-09-27 18:00:00 │
│ 1697.5 │ 2 │ 2009-09-27 18:00:00 │
│ 1697.0 │ 1 │ 2009-09-27 18:00:00 │
│ 1698.0 │ 1 │ 2009-09-27 18:00:01 │
│ 1698.25 │ 1 │ 2009-09-27 18:00:01 │
│ 1698.25 │ 10 │ 2009-09-27 18:00:02 │
│ 1698.25 │ 4 │ 2009-09-27 18:00:02 │
│ 1697.25 │ 6 │ 2009-09-27 18:00:02 │
│ 1697.25 │ 2 │ 2009-09-27 18:00:02 │
│ 1697.0 │ 28 │ 2009-09-27 18:00:02 │
│ 1697.25 │ 6 │ 2009-09-27 18:00:03 │
├─────────┴────────┴─────────────────────┤
│ 20 rows 3 columns │
</code></pre>
<p>Using DuckDB, I wanted to create histograms for each timestamp, both the price and size.</p>
<p>My attempt:</p>
<pre><code> vp = conn.query(f"""
SET enable_progress_bar = true;
SELECT
timestamp,
histogram(price)
FROM 'data/tickdata.parquet'
GROUP BY timestamp
ORDER BY timestamp
""")
</code></pre>
<p>This produces the following:</p>
<pre><code>┌─────────────────────┬─────────────────────────────────────────────────────────────────┐
│ timestamp │ histogram(price) │
│ timestamp │ map(float, ubigint) │
├─────────────────────┼─────────────────────────────────────────────────────────────────┤
│ 2009-09-27 18:00:00 │ {1697.0=10, 1697.5=2} │
│ 2009-09-27 18:00:01 │ {1698.0=1, 1698.25=1} │
│ 2009-09-27 18:00:02 │ {1697.0=1, 1697.25=2, 1698.25=2} │
│ 2009-09-27 18:00:03 │ {1696.0=2, 1696.5=2, 1697.0=2, 1697.25=1} │
│ 2009-09-27 18:00:04 │ {1696.0=2, 1696.25=2, 1696.75=1, 1697.0=1, 1697.25=3, 1697.5=1}
</code></pre>
<p>At first glance, it "appears correct", <strong>however, the "values" associated with each key are not the SUM of the size but the COUNTs</strong> of the size. What I would expect to see:</p>
<pre><code>┌─────────────────────┬─────────────────────────────────────────────────────────────────┐
│ timestamp │ histogram(price) │
│ timestamp │ map(float, ubigint) │
├─────────────────────┼─────────────────────────────────────────────────────────────────┤
│ 2009-09-27 18:00:00 │ {1697.0=39, 1697.5=5} │
│ 2009-09-27 18:00:01 │ {1698.0=1, 1698.25=1} │
│ 2009-09-27 18:00:02 │ {1697.0=28, 1697.25=8, 1698.25=14}
</code></pre>
<p><strong>Alternatively: I am able to generate the following table, but unsure if there is a way I can map it into the above example?</strong></p>
<pre><code>┌─────────────────────┬─────────┬───────────┐
│ timestamp │ price │ sum(size) │
│ timestamp │ float │ int128 │
├─────────────────────┼─────────┼───────────┤
│ 2009-09-27 18:00:00 │ 1697.0 │ 39 │
│ 2009-09-27 18:00:00 │ 1697.5 │ 5 │
│ 2009-09-27 18:00:01 │ 1698.0 │ 1 │
│ 2009-09-27 18:00:01 │ 1698.25 │ 1 │
│ 2009-09-27 18:00:02 │ 1698.25 │ 14 │
│ 2009-09-27 18:00:02 │ 1697.25 │ 8 │
│ 2009-09-27 18:00:02 │ 1697.0 │ 28 │
</code></pre>
|
<python><sql><duckdb>
|
2025-04-28 05:09:08
| 1
| 379
|
Deftness
|
79,595,835
| 6,455,731
|
Connection pooling with httpx.Client
|
<p>The <a href="https://www.python-httpx.org/advanced/clients/" rel="nofollow noreferrer">clients</a> section in the <code>httpx</code> docs mention connection pooling and generally recommend the use of <code>httpx.Client</code>.</p>
<p>I cannot read from the docs or from anywhere else however, if connection pooling is simply tied to an <code>httpx.Client</code> instance or if calling <code>open</code>/<code>close</code> or entering/exiting a client context manager affects connection pooling.</p>
<p>E.g. if connection pooling was simply tied to a client instance the following should be able to benefit from pooling:</p>
<pre class="lang-py prettyprint-override"><code>shared_client = httpx.Client()
with shared_client as client:
# do stuff
with shared_client as client:
# do more stuff
</code></pre>
<p>If connection pooling was affected by <code>close</code> or exiting a client context, the above would not be able to utilize pooling.</p>
<p>I would appreciate any help on this.</p>
<h2>edit</h2>
<p>The above example violates a very fundamental restriction in httpx clients, that is that a client cannot be reopened. Sorry, I should have tried running something like this before posting.</p>
<p>My actual use case is that I would like to be able to allow users of a class that uses an <code>httpx.Client</code>/<code>httpx.AsyncClient</code> internally to provide the client themselves, in which case they would be able to reuse the client but are also responsible for closing it.</p>
<p>The following is a rough idea for accomplishing this:</p>
<pre class="lang-py prettyprint-override"><code>import warnings
import httpx
warnings.simplefilter("always")
class _ClientWrapper:
def __init__(self, client: httpx.Client) -> None:
self.client = client
def __getattr__(self, value):
return getattr(self.client, value)
def __enter__(self):
return self.client.__enter__()
def __exit__(self, exc_type, exc_value, traceback):
if not self.client.is_closed:
warnings.warn(f"httpx.Client instance '{self.client}' is still open. ")
class Getter:
def __init__(self, client: httpx.Client | None = None) -> None:
self.client = httpx.Client() if client is None else _ClientWrapper(client)
def get(self, url: str) -> httpx.Response:
with self.client:
response = self.client.get(url)
response.raise_for_status()
return response
client = httpx.Client()
getter = Getter(client=client)
response = getter.get("https://www.example.com")
print(response) # 200
print(getter.client.is_closed) # False
client.close()
print(getter.client.is_closed) # True
</code></pre>
<p>The idea of the wrapper is to delegate all attribute access to a client component but overwrite <code>__exit__</code> to just warn and not actually close the client; so if the client is provided, users are responsible for managing/closing the client.</p>
<p>Another option to achieve this would of course be subclassing, but with that users would need to use my subclassed <code>httpx.Client</code> and not any client.</p>
|
<python><connection-pooling><httpx>
|
2025-04-28 05:08:17
| 1
| 964
|
lupl
|
79,595,804
| 10,704,286
|
Custom Shaping in pandas for Excel Output
|
<p>I have a dataset with world population (For clarity, countries are limmited to Brazil, Canada, Denmark):</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
world = pd.read_csv("../data/worldstats.csv")
cond = world["country"].isin(["Brazil","Canada","Denmark"])
world = world[cond]
world = world.set_index(["year","country"]).sort_index().head(10)
world
</code></pre>
<p><a href="https://i.sstatic.net/LPYk6udr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LPYk6udr.png" alt="enter image description here" /></a></p>
<p>I want to reshapa my dataframe so the output would look like this</p>
<pre class="lang-none prettyprint-override"><code>
year 1960
country Population GDP
Brazil 72493585.0 1.516557e+10
Canada 17909009.0 4.109345e+10
Denmark 4579603.0 6.248947e+09
year 1961
country Population GDP
Brazil 74706888.0 1.523685e+10
Canada 18271000.0 4.076797e+10
Denmark 4611687.0 6.933842e+09
Year 1962
country Population GDP
Brazil 77007549.0 1.992629e+10
Canada 18614000.0 4.197885e+10
Denmark 4647727.0 7.812968e+09
</code></pre>
<p>I've look into and tried functions such as <code>stack()</code>, <code>transpose()</code>, <code>pivot()</code> and <code>pivot_table()</code>, not I'm unable to come up with a code that will generate the output that I want.</p>
<p>I can use loops to generate the output but I want to export my output to an Excel spreadsheet. What would be the best way to it?</p>
<p>Thank you.</p>
|
<python><pandas>
|
2025-04-28 04:31:29
| 1
| 1,081
|
Demeter P. Chen
|
79,595,772
| 17,729,094
|
Modify list of arrays in place
|
<p>I have a df like:</p>
<pre class="lang-py prettyprint-override"><code># /// script
# requires-python = ">=3.13"
# dependencies = [
# "polars",
# ]
# ///
import polars as pl
df = pl.DataFrame(
{
"points": [
[
[1.0, 2.0],
],
[
[3.0, 4.0],
[5.0, 6.0],
],
[
[7.0, 8.0],
[9.0, 10.0],
[11.0, 12.0],
],
],
},
schema={
"points": pl.List(pl.Array(pl.Float32, 2)),
},
)
"""
shape: (3, 1)
┌─────────────────────────────────┐
│ points │
│ --- │
│ list[array[f32, 2]] │
╞═════════════════════════════════╡
│ [[1.0, 2.0]] │
│ [[3.0, 4.0], [5.0, 6.0]] │
│ [[7.0, 8.0], [9.0, 10.0], [11.… │
└─────────────────────────────────┘
"""
</code></pre>
<p>Each point represents an <code>(x, y)</code> pair. How can I divide all <code>x</code> by 2, and all <code>y</code> by 4.</p>
<pre><code>shape: (3, 1)
┌─────────────────────────────────┐
│ points │
│ --- │
│ list[array[f32, 2]] │
╞═════════════════════════════════╡
│ [[0.5, 0.5]] │
│ [[1.5, 1.0], [2.5, 1.5]] │
│ [[3.5, 2.0], [4.5, 2.5], [5.5,… │
└─────────────────────────────────┘
</code></pre>
|
<python><dataframe><python-polars><polars>
|
2025-04-28 03:55:20
| 1
| 954
|
DJDuque
|
79,595,753
| 5,118,421
|
sql alchemy generates integer instead of int for sql lite
|
<p>Python Sql Alchemy generates table with VARCHAR for sql lite instead of INTEGER so select for sql lite ordered by with alphabet number</p>
<p>Given City Table in Sql lite generated from sql alchemy:</p>
<pre><code>class City(Base):
__tablename__ = "city"
id: Mapped[int] = mapped_column(Integer, primary_key=True, index=True)
city_name: Mapped[str] = mapped_column(String, index=True)
</code></pre>
<p>It generates:
<a href="https://i.sstatic.net/5QxGDzHO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5QxGDzHO.png" alt="enter image description here" /></a></p>
<p>Data:</p>
<pre><code>15470 Paris
100567 Paris
</code></pre>
<p>Query:</p>
<pre><code>select(City).where(City.city_name == city_name.upper()).order_by(City.id).limit(1)
SELECT city.id, city.city_name
FROM city
WHERE city.city_name = :city_name_1 ORDER BY city.id
LIMIT :param_1
</code></pre>
<p>returns city id with 100567 instead of 15470.</p>
<p>Do I need to provide example with sql insert as well?</p>
|
<python><sqlite><sqlalchemy>
|
2025-04-28 03:25:52
| 2
| 1,407
|
Irina
|
79,595,678
| 785,494
|
How can I store ids in Python without paying the 28-byte-per-int price?
|
<p>My Python code stores millions of ids in various data structures, in order to implement a classic algorithm. The run time is good, but the memory usage is awful.</p>
<p>These ids are <code>int</code>s. I assume that since Python ints start at 28 bytes and grow, there's a huge price there. Since they're just opaque ids, not actually mathematical object, I could get by with just 4 bytes for them.</p>
<p>Is there a way to store ids in Python that won't use the full 28 bytes? E.g., do I need to put them as both keys and values to dicts?</p>
<p>Note: The common solution of using something like BumPy won't work here, because it's not a contiguous array. It's keys and values into a dict, and dicts of dicts, etc.</p>
<p>I'm also amenable to other Python interpreters that are less memory hungry for ints.</p>
|
<python><memory><data-structures><space-complexity><python-internals>
|
2025-04-28 01:35:56
| 1
| 9,357
|
SRobertJames
|
79,595,515
| 1,394,353
|
urlparse/urlsplit and urlunparse, what's the Pythonic way to do this?
|
<p>The background (but is not a Django-only question) is that the Django test server does not return a scheme or netloc in its response and request urls.</p>
<p>I get <code>/foo/bar</code> for example, and I want to end up with <code>http://localhost:8000/foo/bar</code>.</p>
<p><code>urllib.parse.urlparse</code> (but not so much <code>urllib.parse.urlsplit</code>) makes gathering the relevant bits of information, from the test url and my known server address, easy. What seems more complicated than necessary is recomposing a new url with the scheme and netloc added via <a href="https://docs.python.org/3/library/urllib.parse.html#urllib.parse.urlunparse" rel="nofollow noreferrer">urllib.parse.urlcompose</a> which wants positional arguments, but does not document what they are, nor support named arguments. Meanwhile, the parsing functions return immutable tuples...</p>
<pre><code>def urlunparse(components):
"""Put a parsed URL back together again. This may result in a ..."""
</code></pre>
<p>I did get it working, see code below, but it looks really kludgy, around the part where I need to first transform the parse tuples into lists and then modify the list at the needed index position.</p>
<p>Is there a more Pythonic way?</p>
<h4>sample code:</h4>
<pre class="lang-py prettyprint-override"><code>
from urllib.parse import urlsplit, parse_qs, urlunparse, urlparse, urlencode, ParseResult, SplitResult
server_at_ = "http://localhost:8000"
url_in = "/foo/bar" # this comes from Django test framework I want to change this to "http://localhost:8000/foo/bar"
from_server = urlparse(server_at_)
print(" scheme and netloc from server:",from_server)
print(f"{url_in=}")
from_urlparse = urlparse(url_in)
print(" missing scheme and netloc:",from_urlparse)
#this works
print("I can rebuild it unchanged :",urlunparse(from_urlparse))
#however, using the modern urlsplit doesnt work (I didn't know about urlunsplit when asking)
try:
print("using urlsplit", urlunparse(urlsplit(url_in)))
#pragma: no cover pylint: disable=unused-variable
except (Exception,) as e:
print("no luck with urlsplit though:", e)
#let's modify the urlparse results to add the scheme and netloc
try:
from_urlparse.scheme = from_server.scheme
from_urlparse.netloc = from_server.netloc
new_url = urlunparse(from_urlparse)
except (Exception,) as e:
print("can't modify tuples:", e)
# UGGGH, this works, but is there a better way?
parts = [v for v in from_urlparse]
parts[0] = from_server.scheme
parts[1] = from_server.netloc
print("finally:",urlunparse(parts))
</code></pre>
<h4>sample output:</h4>
<pre><code> scheme and netloc from server: ParseResult(scheme='http', netloc='localhost:8000', path='', params='', query='', fragment='')
url_in='/foo/bar'
missing scheme and netloc: ParseResult(scheme='', netloc='', path='/foo/bar', params='', query='', fragment='')
I can rebuild it unchanged : /foo/bar
no luck with urlsplit though: not enough values to unpack (expected 7, got 6)
can't modify tuples: can't set attribute
finally: http://localhost:8000/foo/bar
</code></pre>
|
<python><url>
|
2025-04-27 21:26:15
| 1
| 12,224
|
JL Peyret
|
79,595,462
| 2,009,594
|
How to pass a byte buffer from python to C++
|
<p>I am creating a library in C++ with Python bindings.</p>
<p>The library takes buffer created in Python code and processes it in several ways. On my way to achieve that, I used the example below generated by Google's Gemini as starter for that part of the code. But I am getting an error:</p>
<p>Here is the C++ code:</p>
<pre class="lang-cpp prettyprint-override"><code>#include <pybind11/pybind11.h>
struct MyStruct {
int value;
double* data;
int size;
// Constructor
MyStruct(int val, int sz) : value(val), size(sz) {
data = new double[sz];
for (int i = 0; i < sz; ++i) {
data[i] = 0.0; // Initialize data to 0
}
}
// Destructor to free allocated memory
~MyStruct() {
delete[] data;
}
};
PYBIND11_MODULE(example, m) {
pybind11::class_<MyStruct>(m, "MyStruct")
.def(pybind11::init<int, int>())
.def_readwrite("value", &MyStruct::value)
.def_readwrite("data", &MyStruct::data)
.def_readwrite("size", &MyStruct::size);
}
</code></pre>
<p>Here is the Python code:</p>
<pre class="lang-py prettyprint-override"><code>import example
my_struct = example.MyStruct(10, 5)
print(f"Value: {my_struct.value}")
print(f"Size: {my_struct.size}")
my_struct.value = 20
print(f"Modified Value: {my_struct.value}")
# Access and modify data through the pointer (be careful!)
for i in range(my_struct.size):
my_struct.data[i] = i * 2.0
print("Data:", [my_struct.data[i] for i in range(my_struct.size)])
</code></pre>
<p>And this is the error I am getting:</p>
<pre class="lang-none prettyprint-override"><code>Value: 10
Size: 5
Modified Value: 20
Traceback (most recent call last):
File "./test.py", line 13, in <module>
my_struct.data[i] = i * 2.0
~~~~~~~~~~~~~~^^^
TypeError: 'float' object does not support item assignment
</code></pre>
<p>So, how to fix this error. And my final goal is really to pass a byte buffer from python to C++ code, if there is any better way to do it, please let me know.</p>
|
<python><c++><pybind11>
|
2025-04-27 20:21:58
| 1
| 6,929
|
feeling_lonely
|
79,595,383
| 5,123,111
|
How to further decrease financial data size?
|
<p>I’ve been working on compressing tick data and have made some progress, but I’m looking for ways to further optimize file sizes. Currently, I use delta encoding followed by saving the data in Parquet format with ZSTD compression, and I’ve achieved a reduction from 150MB to 66MB over 4 months of data, but it still feels like it will balloon as more data accumulates.</p>
<p>Here's the relevant code I’m using:</p>
<pre><code>def apply_delta_encoding(df: pd.DataFrame) -> pd.DataFrame:
df = df.copy()
# Convert datetime index to Unix timestamp in milliseconds
df['timestamp'] = df.index.astype('int64') // 1_000_000
# Keep the first row unchanged for delta encoding
for col in df.columns:
if col != 'timestamp': # Skip timestamp column
df[col] = df[col].diff().fillna(df[col].iloc[0]).astype("float32")
return df
</code></pre>
<p>For saving, I’m using the following, with the maximum allowed compression level:</p>
<pre><code>df.to_parquet(self.file_path, index=False, compression='zstd', compression_level=22)
</code></pre>
<p>I already experimented with the various compression algorithms (hdf5_blosc, hdf5_gzip, feather_lz4, parquet_lz4, parquet_snappy, parquet_zstd, feather_zstd, parquet_gzip, parquet_brotli) and concluded that zstd is the most storage friendly for my data.</p>
<p>Sample data:</p>
<pre><code> bid ask
datetime
2025-03-27 00:00:00.034 86752.601562 86839.500000
2025-03-27 00:00:01.155 86760.468750 86847.390625
2025-03-27 00:00:01.357 86758.992188 86845.914062
2025-03-27 00:00:09.518 86749.804688 86836.703125
2025-03-27 00:00:09.782 86741.601562 86828.500000
</code></pre>
<p>I apply delta encoding before ZSTD compression to the Parquet file. While the results are decent (I went from ~150 MB down to the current 66 MB), I’m still looking for strategies or libraries to achieve further file size reduction before things get out of hand as more data is accumulated. If I were to drop datetime index altogether, purely with delta encoding I would have ~98% further reduction but unfortunately, I shouldn't drop the time information.</p>
<p>Are there any tricks or tools I should explore? Any advanced techniques to help further drop the size?</p>
|
<python><compression><zstd>
|
2025-04-27 18:55:23
| 0
| 1,369
|
lazarea
|
79,595,283
| 14,492,001
|
How to properly extract all duplicated rows with a condition in a Polars DataFrame?
|
<p>Given a polars dataframe, I want to extract all duplicated rows while also applying an additional filter condition, for example:</p>
<pre><code>import polars as pl
df = pl.DataFrame({
"name": ["Alice", "Bob", "Alice", "David", "Eve", "Bob", "Frank"],
"city": ["NY", "LA", "NY", "SF", "LA", "LA", "NY"],
"age": [25, 30, 25, 35, 28, 30, 40]
})
# Trying this:
df.filter((df.is_duplicated()) & (pl.col("city") == "NY")) # error
</code></pre>
<p>However, this results in an error:</p>
<blockquote>
<p>SchemaError: cannot unpack series of type <code>object</code> into <code>bool</code></p>
</blockquote>
<p>Which alludes that <code>df.is_duplicated()</code> returns a series of type <code>object</code>, but in reality, it's a <code>Boolean</code> Series.</p>
<p>Surprisingly, <em>reordering</em> the predicates by placing the expression first makes it work (<em>but why?</em>):<br>
<code>df.filter((pl.col("city") == "NY") & (df.is_duplicated())) # works!</code> correctly outputs:</p>
<pre><code>shape: (2, 3)
┌───────┬──────┬─────┐
│ name ┆ city ┆ age │
│ --- ┆ --- ┆ --- │
│ str ┆ str ┆ i64 │
╞═══════╪══════╪═════╡
│ Alice ┆ NY ┆ 25 │
│ Alice ┆ NY ┆ 25 │
└───────┴──────┴─────┘
</code></pre>
<p>I understand that the optimal approach when filtering for duplicates based on a subset of columns is to use <code>pl.struct</code>, like:<br>
<code>df.filter((pl.struct(df.columns).is_duplicated()) & (pl.col("city") == "NY")) # works</code><br>Which works fine with the additional filter condition.</p>
<p>However, I'm intentionally not using <code>pl.struct</code> because my real dataframe has 40 columns, and I want to check for duplicated rows based on all the columns except three, so I did the following:<br>
<code>df.filter(df.drop("col1", "col2", "col3").is_duplicated())</code>
Which works fine and is much more convenient than writing all 37 columns in a <code>pl.struct</code>. However, this breaks when adding an additional filter condition to the right, <em>but not to the left</em>:</p>
<pre><code>df.filter(
(df.drop("col1", "col2", "col3").is_duplicated()) & (pl.col("col5") == "something")
) # breaks!
df.filter(
(pl.col("col5") == "something") & (df.drop("col1", "col2", "col3").is_duplicated())
) # works!
</code></pre>
<p><em><strong>Why</strong></em> does the ordering of predicates (Series & Expression vs Expression & Series) matter inside <code>.filter()</code> in this case?
Is this intended behavior in Polars, or a bug?</p>
|
<python><dataframe><python-polars>
|
2025-04-27 17:21:44
| 1
| 1,444
|
Omar AlSuwaidi
|
79,595,277
| 774,133
|
Stratification fails in train_test_split
|
<p>Please consider the following code:</p>
<pre class="lang-py prettyprint-override"><code>
import pandas as pd
from sklearn.model_selection import train_test_split
# step 1
ids = list(range(1000))
label = 500 * [1.0] + 500 * [0.0]
df = pd.DataFrame({"id": ids, "label": label})
# step 2
train_p = 0.8
val_p = 0.1
test_p = 0.1
# step 3
n_train = int(len(df) * train_p)
n_val = int(len(df) * val_p)
n_test = len(df) - n_train - n_val
print("* Step 3")
print("train:", n_train)
print("val:", n_val)
print("test:", n_test)
print()
# step 4
train_ids, test_ids = train_test_split(df["id"], stratify=df.label, test_size=n_test, random_state=42)
# step 5
print("* Step 5. First split")
print( df.loc[df.id.isin(train_ids), "label"].value_counts() )
print( df.loc[df.id.isin(test_ids), "label"].value_counts() )
print()
# step 6
train_ids, val_ids = train_test_split(train_ids, stratify=df.loc[df.id.isin(train_ids), "label"], test_size=n_val, random_state=42)
# step 7
train_df = df[df["id"].isin(train_ids)]
val_df = df[df["id"].isin(val_ids)]
test_df = df[df["id"].isin(test_ids)]
# step 8
print("* Step 8. Final split")
print("train:", train_df["label"].value_counts())
print("val:", val_df["label"].value_counts())
print("test:", test_df["label"].value_counts())
</code></pre>
<p>with output:</p>
<pre><code>* Step 3
train: 800
val: 100
test: 100
* Step 5. First split
label
1.0 450
0.0 450
Name: count, dtype: int64
label
1.0 50
0.0 50
Name: count, dtype: int64
* Step 8. Final split
train: label
0.0 404
1.0 396
Name: count, dtype: int64
val: label
1.0 54
0.0 46
Name: count, dtype: int64
test: label
1.0 50
0.0 50
Name: count, dtype: int64
</code></pre>
<ol>
<li>Create a Dataframe with 1000 elements perfectly balanced between class 1 and 0 (positive and negative);</li>
<li>Define the ratio of examples that should go into the training, validation and test partitions. I would like 800 examples in the training split, 100 examples in each one of the other two.</li>
<li>Compute the sizes of the three partitions and print their values.</li>
<li>Perform the first split to get the test set, stratified on <code>label</code>.</li>
<li>Print label stats of the first split. The two partitions are still balanced.</li>
<li>Perform the second splitting into training and validation, stratified on <code>label</code>.</li>
<li>Select examples</li>
<li>Print label stats.</li>
</ol>
<p>As you can see the second split at step 6 does not produce a balanced split (stats printed at step 8). After the first split, the examples (output at step 5) are still balanced and it would be possible to perform a second split keeping a perfect class balance.</p>
<p>What am I doing wrong?</p>
|
<python><scikit-learn>
|
2025-04-27 17:17:11
| 1
| 3,234
|
Antonio Sesto
|
79,595,257
| 843,458
|
tkinter layout problem, pack does not place the elements as expected
|
<p>I want to have three elements.
The first two aligned left and the third below spanning the remaining space in x and y.</p>
<p>This code however does not realized that
import ttkbootstrap as tb</p>
<pre><code>class App(tb.Window):
def __init__(self):
# Initialize window with superhero theme
super().__init__(themename="superhero")
# Configure the root window
self.title('test layout')
self.geometry('800x600')
tb.Button(self, text='first').pack(padx=10, pady=10, side = tb.LEFT)
tb.Button(self, text='second').pack(padx=10, pady=10, side = tb.LEFT)
tb.Label(self, text="filled", borderwidth=2, relief="groove").pack(expand=True, side=tb.BOTTOM)
if __name__ == "__main__":
app = App()
app.mainloop()
</code></pre>
<p>This looks like
<a href="https://i.sstatic.net/lQmaIdi9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lQmaIdi9.png" alt="enter image description here" /></a></p>
<p>What is the typical solution?</p>
|
<python><tkinter><layout>
|
2025-04-27 16:44:48
| 1
| 3,516
|
Matthias Pospiech
|
79,595,225
| 9,646,203
|
How python MRO works in hierarchical and multiple inheritance
|
<p>So far it's making me confuse, how below code to understand for the given result. Can someone explain below two parts of output how actually flow is going on. First part giving output as 'C' and second part is 'A'</p>
<pre><code>class A:
def fun1(self):
print("A")
class B(A):
def fun1(self):
print('B')
class C(A):
def fun1(self):
print("C")
class D(B, C):
def fun1(self):
super(B, self).fun1()
obj = D()
obj.fun1() **# How output will be 'C' in this case ?**
</code></pre>
<pre><code>output : C
</code></pre>
<hr />
<pre><code>class A:
def fun1(self):
print("A")
class B(A):
def fun1(self):
print("B")
class C(A):
def fun1(self):
print("C")
class D(B, C):
def fun1(self):
super(C, self).fun1()
obj = D()
obj.fun1() **# How output will be 'A' in this case ?**
</code></pre>
<pre><code>output : A
</code></pre>
|
<python>
|
2025-04-27 16:12:39
| 1
| 615
|
Saisiva A
|
79,595,214
| 2,522,892
|
LinkedIn API: Authorization Error When Posting on Behalf of Organization
|
<p>I’m developing an application that integrates with the LinkedIn API to post content on behalf of our organization. Despite being listed as a super admin and having the following OAuth 2.0 scopes:</p>
<p><a href="https://i.sstatic.net/O9ynpKg1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/O9ynpKg1.png" alt="enter image description here" /></a>
• w_organization_social
• rw_organization_admin
• r_organization_social </p>
<p>I’m encountering the following error when attempting to create a post:</p>
<p><code>com.linkedin.content.common.exception.BadRequestResponseException: denied by [resource: organizationUgcAuthorizations => responseStatus: DENIED]</code></p>
<pre><code>'description' ='Write to post is denied due to lack of permission'
'input' ={}
'code' ='AUTH_DELEGATION_DENIED'
</code></pre>
<p>Additionally, the error code returned is AUTH_DELEGATION_DENIED, with the description: “Write to post is denied due to lack of permission.”</p>
<p>Steps Taken:</p>
<pre><code>1. Confirmed that my LinkedIn application has the necessary scopes mentioned above.
2. Verified that I’m listed as a super admin for the organization.
3. Ensured that the access token is valid and includes the required scopes.
4. Attempted to post using the endpoint: https://api.linkedin.com/v2/ugcPosts. 
</code></pre>
<p>Are there additional permissions or configurations required to post on behalf of an organization using the LinkedIn API? Any insights or suggestions would be greatly appreciated.</p>
<p>Code :</p>
<p>organization_url = urn:li:organization:<company_id></p>
<p>⸻</p>
<pre><code>def post_it(self, title: str, content: str, image_url: str = None, scheduled_time: datetime.datetime = None):
if not self.access_token or not self.organization_urn:
raise Exception("Access token and organization URN must be set before posting content.")
post_data = {
"author": self.organization_urn,
"lifecycleState": "PUBLISHED",
"specificContent": {
"com.linkedin.ugc.ShareContent": {
"shareCommentary": {
"text": f"{title}\n\n{content}"
},
"shareMediaCategory": "NONE"
}
},
"visibility": {
"com.linkedin.ugc.MemberNetworkVisibility": "PUBLIC"
}
}
if image_url:
# First register the media asset
media_register_data = {
"registerUploadRequest": {
"recipes": ["urn:li:digitalmediaRecipe:feedshare-image"],
"owner": self.organization_urn,
"serviceRelationships": [{
"relationshipType": "OWNER",
"identifier": "urn:li:userGeneratedContent"
}]
}
}
# Register the media asset
media_register_response = self.client.action(
resource_path="/assets",
action_name="registerUpload",
action_params=media_register_data,
access_token=self.access_token
)
# Get the upload URL and asset URN
upload_url = media_register_response.value.get('uploadMechanism').get('com.linkedin.digitalmedia.uploading.MediaUploadHttpRequest').get('uploadUrl')
asset_urn = media_register_response.value.get('asset')
# Upload the image to the provided URL
image_data = requests.get(image_url).content
upload_response = requests.put(
upload_url,
data=image_data,
headers={"Content-Type": "image/jpeg"}
)
if upload_response.status_code != 201:
raise Exception(f"Failed to upload image: {upload_response.text}")
# Update post data with the media asset
post_data["specificContent"]["com.linkedin.ugc.ShareContent"]["shareMediaCategory"] = "IMAGE"
post_data["specificContent"]["com.linkedin.ugc.ShareContent"]["media"] = [{
"status": "READY",
"description": {
"text": content
},
"media": asset_urn,
"title": {
"text": title or content[:100]
}
}]
if scheduled_time:
post_data["lifecycleState"] = "DRAFT"
post_data["scheduledTime"] = int(scheduled_time.timestamp() * 1000)
response = self.client.create(
resource_path="/ugcPosts",
entity=post_data,
access_token=self.access_token
)
return response
</code></pre>
|
<python><linkedin-api>
|
2025-04-27 15:52:28
| 0
| 575
|
Lucky
|
79,595,168
| 14,947,895
|
Numpy IO seems to have an 2GB overhead in StorNex (cvfs) File System
|
<p><strong>TL;DR:</strong></p>
<ul>
<li>Initally I thought the numpy load functions doubles memory usage at peak</li>
<li>after some additional tests it seems like the underlying file system (StorNex [cfvs]) leads to a size-independent 2GB overhead</li>
<li>jump to EDIT 4 for the current status.</li>
</ul>
<hr />
<hr />
<p>I thought <a href="https://numpy.org/doc/stable/reference/generated/numpy.load.html" rel="nofollow noreferrer"><code>numpy.load()</code></a> writes directly into the array data memory.</p>
<p>However, the results of the profiling functions (I profiled with memray and mprof) seems a bit strange to me...</p>
<p>My array is 2GB big. I would have expected a peak memory of 2GB for the loading function. I could also imagine, that 4GB peak-wise used as the array might be loaded into a buffer and than written to the data memory.<br />
Nevertheless, the timing in the profiling seems odd. After loading the program should wait for a second and print the time. Nevertheless, the memory size increases after this one second?</p>
<p>I sampled at 0.001s for mprof and 0.1ms for memray.</p>
<pre class="lang-py prettyprint-override"><code>import time
from datetime import datetime as dt
import memray
import numpy as np
def main():
time.sleep(1)
print(dt.now())
m_ = np.load(
os.path.join(os.path.expanduser('~'), 'test_data/max_compression_chunk.npy'),
mmap_mode=None,
)
print(dt.now())
time.sleep(1)
print(dt.now())
if __name__ == "__main__":
with memray.Tracker("memray_test.bin", memory_interval_ms=0.1, follow_fork=True, native_traces=True):
main()
</code></pre>
<p>Attached are also the mprof and memray results:</p>
<p><a href="https://i.sstatic.net/YFAfsIbx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YFAfsIbx.png" alt="mprof results" /></a></p>
<p><a href="https://i.sstatic.net/0bTQA0EC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0bTQA0EC.png" alt="memray results" /></a></p>
<p>Could anyone explain to me, what is going on? Or give me a hint what to check next?</p>
<hr />
<p><strong>Edit 1</strong>:</p>
<ul>
<li>I use an array of shape <code>(268435451,)</code> with data type <code>float64</code></li>
<li>the test was done on a machine with: Rocky Linux 8.10 (Green Obsidian)</li>
<li>I use: CPython: 3.13.2, Numpy: 2.2.4</li>
</ul>
<p>Attached is the icicle graphical overview... It seems, that there the max allocation there is 2GB (1 time); Other functions have more allocations, but I unfortunately I am still learning to interprete this graphs...</p>
<p><a href="https://i.sstatic.net/BM91o3zu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BM91o3zu.png" alt="icicles graph of memray" /></a></p>
<hr />
<p><strong>Edit 2</strong></p>
<p><strong>Interestingly, it seems that numpy.fromfile does indeed work better.</strong></p>
<p>(The code is the same as above, just, using <code>fromfile</code> instead of <code>load</code>.)</p>
<p><a href="https://i.sstatic.net/9QNEQZfK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9QNEQZfK.png" alt="memray for fromfile" /></a></p>
<p><a href="https://i.sstatic.net/TMfUYVjJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TMfUYVjJ.png" alt="mprof for fromfile" /></a></p>
<p><a href="https://i.sstatic.net/3Gm9Azsl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3Gm9Azsl.png" alt="memray graph fromfile 2" /></a></p>
<p>The mprof output is what I expected (and also for the np.load function). However, there seems to be a discrepancy between the memray output and the mplot output.</p>
<hr />
<p><strong>EDIT 3</strong></p>
<p>After reinstalling my environment and creating a new test array, it seems that the timing relationships are now as expected.</p>
<p><em>I appreciate the help so far and please excuse the effort!</em></p>
<p>However, could you tell me if the performance is now what I would expect? <strong>Especially if the peak memory is expected to be 4GB (twice the array size)?</strong></p>
<p><a href="https://i.sstatic.net/TMWHa69J.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TMWHa69J.png" alt="memray output new" /></a></p>
<hr />
<p><strong>EDIT 4</strong></p>
<p><strong>It appears, that the behavior is not depending on the file size, but seems to but on the file system from what the array is loaded.</strong></p>
<p>What I did:</p>
<ol>
<li>checked increasing file sizes (however, due to some other restrictions, I changed the location of the directory from where the array is loaded.)
<ul>
<li>everything worked as one would expected</li>
</ul>
</li>
<li>The size of the array came from the maximum chunk size some tested compressors would work on. So I thought maybe it could have something to do, that the size is in the vicinity of this size. Therefore, I increased slowly the size to cross this boarder...
<ul>
<li>everything worked as one would expected</li>
</ul>
</li>
<li>Than I changed back to the original path...
<ul>
<li>the overhead appeared again... in both variations.</li>
</ul>
</li>
</ol>
<p><strong>Working: beegfs.</strong><br />
<strong>Overhead of roughly 2GB (const for increasing data sizes): cvfs</strong></p>
<p>Of course, the difference between the two file locations could be due to something else, but the different file systems seem to be the most obvious reason.</p>
<p><strong>BeeGFS</strong></p>
<p><a href="https://i.sstatic.net/JpYn1Ye2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JpYn1Ye2.png" alt="large beegfs" /></a>
<a href="https://i.sstatic.net/LROhhoFd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LROhhoFd.png" alt="steps beegfs" /></a></p>
<p><strong>cvfs</strong></p>
<p><a href="https://i.sstatic.net/OlJTwFy1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OlJTwFy1.png" alt="large cvfs" /></a>
<a href="https://i.sstatic.net/BHwKKHvz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BHwKKHvz.png" alt="steps cvfs" /></a></p>
<p>I shared the code and also the images again in the <a href="https://gist.github.com/fschwar4/768f86cba465fd3a613334516b0192df" rel="nofollow noreferrer">gist here</a></p>
|
<python><numpy><io><numpy-ndarray><memprof>
|
2025-04-27 15:10:26
| 0
| 496
|
Helmut
|
79,595,116
| 7,916,257
|
log-log time series plot with event markers
|
<p>I want to create a log-log plot showing the growth of publications over time (papers/year), similar to scientific figures like Fig. 1a in "A Century of Physics" (Nature Physics 2015).</p>
<p><strong>Requirements:</strong></p>
<ol>
<li>Y-axis (number of papers) on a log scale (10²–10⁶) with ticks at each power of 10.</li>
<li>X-axis (year) with decade ticks (e.g., 1960, 1970, 1980...).</li>
<li>Multiple vertical dashed lines indicating important trend changes.</li>
<li>Serif font, light dashed grid, no top/right spines.</li>
<li>Line colors and styles similar to journal figures (e.g., matplotlib, Seaborn).</li>
<li>A panel label (e.g., "Web of Science") at the top left.</li>
</ol>
<p>How can I best style my matplotlib plot to match this?</p>
<p><strong>The following is what I did (in Python):</strong></p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3
"""
replicate_figure1a_misinformation.py
Count the yearly number of WoS publications (total) versus
the number of misinformation publications (from combined_journals.xlsx)
from 1960 to 2024, and plot both curves (log‐scale) to reproduce
“Figure 1 | The Evolution of Physics” analogous to Sinatra et al.
"""
import os
import gzip
import xml.etree.ElementTree as ET
from lxml import etree
import pandas as pd
import matplotlib.pyplot as plt
# 1) Configuration
# ──────────────────────────────────────────────────────────
# Path to your combined misinformation Excel file
MISINFO_XLSX = 'combined_journals.xlsx'
# Root directory containing files like 1960.xml.gz … 2024.xml.gz
WOS_DIR = '/Users/DataSets/Documents/DataSets/WoS/merged_yearly'
# Years to include in the plot
START_YEAR = 1960
END_YEAR = 2024
YEARS = list(range(START_YEAR, END_YEAR + 1))
# 2) Load misinformation paper counts by year
# ──────────────────────────────────────────────────────────
misinfo_df = pd.read_excel(MISINFO_XLSX, dtype={'UT (Unique WOS ID)': str,
'Publication Year': int,
'WoS Categories': str})
# ensure only years in our window
misinfo_df = misinfo_df[
(misinfo_df['Publication Year'] >= START_YEAR) &
(misinfo_df['Publication Year'] <= END_YEAR)
]
# count unique UT per year
mis_counts = (
misinfo_df
.drop_duplicates(subset='UT (Unique WOS ID)')
.groupby('Publication Year')
.size()
.reindex(YEARS, fill_value=0)
.to_dict()
)
print(f"Loaded misinformation counts for {len(mis_counts)} years")
# 3) Parse WoS XML to get total counts per year
# ──────────────────────────────────────────────────────────
total_counts = {yr: 0 for yr in YEARS}
for yr in YEARS:
gz_path = os.path.join(WOS_DIR, f'{yr}.xml.gz')
if not os.path.exists(gz_path):
continue
with gzip.open(gz_path, 'rb') as f:
# directly enable recovery and huge_tree here
for _, rec in etree.iterparse(
f,
events=('end',),
tag='REC',
recover=True,
huge_tree=True
):
total_counts[yr] += 1
rec.clear()
# 4) Build DataFrame and plot
# ──────────────────────────────────────────────────────────
df_counts = pd.DataFrame({
'Year': YEARS,
'WoS total': [total_counts[yr] for yr in YEARS],
'Misinformation': [mis_counts.get(yr, 0) for yr in YEARS],
})
df_counts.set_index('Year', inplace=True)
plt.figure(figsize=(10, 6))
# plt.plot(df_counts.index, df_counts['WoS total'], label='WoS total papers')
plt.plot(df_counts.index, df_counts['Misinformation'], label='Web of Science')
plt.yscale('log')
plt.xlabel('Year')
plt.ylabel('Number of papers')
plt.title('The Evolution of Misinformation (1960–2024)')
plt.legend()
plt.grid(True, which='both', linestyle='--', alpha=0.3)
plt.tight_layout()
# To save the figure instead of showing:
plt.savefig('figure1a_misinformation.png', dpi=600)
# Or to display:
# plt.show()
</code></pre>
<p>Attached is my plot and the plot of the article I am trying to replicate exactly same.</p>
<p><a href="https://i.sstatic.net/Daln7Oc4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Daln7Oc4.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/xAxvkXiI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xAxvkXiI.png" alt="enter image description here" /></a></p>
|
<python><matplotlib><plot>
|
2025-04-27 14:13:09
| 1
| 919
|
Joe
|
79,594,983
| 1,593,077
|
Why does np.fromfile fail when reading from a pipe?
|
<p>In a Python script, I've written:</p>
<pre><code># etc. etc.
input_file = args.input_file_path or sys.stdin
arr = numpy.fromfile(input_file, dtype=numpy.dtype('f32'))
</code></pre>
<p>when I run the script, I get:</p>
<pre><code>$ cat nums.fp32.bin | ./myscript
File "./myscript", line 123, in main
arr = numpy.fromfile(input_file, dtype=numpy.dtype('f32'))
OSError: obtaining file position failed
</code></pre>
<p>why does NumPy need the file position? And - can I circumvent this somehow?</p>
|
<python><numpy><file-io><pipe>
|
2025-04-27 11:42:32
| 1
| 137,004
|
einpoklum
|
79,594,796
| 12,645,782
|
Edit Google Drive and Sheets Files (Airflow Google Provider): Insufficient credentials
|
<p>I am trying to modify a Google Sheets file and a CSV file in Google Drive via the Apache Airflow Google Provider:</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame(data)
csv_data = df.to_csv(index=True)
gcs_hook = GCSHook(gcp_conn_id=GOOGLE_CONNECTION)
gcs_hook.upload(
bucket_name=GOOGLE_CLOUD_BUCKET_NAME,
object_name=csv_name,
data=csv_data,
)
sheets_hook = GSheetsHook(gcp_conn_id=GOOGLE_CONNECTION)
sheets_hook.clear(GOOGLE_SHEET_ID, range_=sheet_name)
sheets_hook.append_values(
GOOGLE_SHEET_ID,
range_=sheet_name,
values=[df.columns.tolist()] + df.values.tolist(),
)
</code></pre>
<p>The string <code>GOOGLE_CONNECTION</code> is just the name of the <code>google</code> connection that I've defined using the Apache Airflow GUI. This connection points to a <code>credentials.json</code> file (the file exists and is discovered), which I obtained in the following way:</p>
<ul>
<li>created a Google Cloud Project</li>
<li>enabled the Google Drive and Google Sheets APIs</li>
<li>created a service account and enabled the <code>Editor</code> role</li>
<li>created a key for this service account</li>
<li>exported the key to <code>credentials.json</code></li>
<li>set up <code>Editor</code> rights for the service account email via the Drive GUI using the <code>Share</code> option</li>
</ul>
<p>Nevertheless, the operations are unsuccessful:</p>
<pre><code>[2025-04-27, 10:09:26] ERROR - Task failed with exception: source="task"
HttpError: <HttpError 403 when requesting https://sheets.googleapis.com/v4/spreadsheets/1qeNnfd74h6EgjAZkYPWKb3BzYGW5ZZ8h9UyuGvL3_PQ/values/countries:clear?alt=json returned "Request had insufficient authentication scopes.". Details: "[{'@type': 'type.googleapis.com/google.rpc.ErrorInfo', 'reason': 'ACCESS_TOKEN_SCOPE_INSUFFICIENT', 'domain': 'googleapis.com', 'metadata': {'method': 'google.apps.sheets.v4.SpreadsheetsService.ClearValues', 'service': 'sheets.googleapis.com'}}]">
File "/home/airflow/.local/lib/python3.12/site-packages/airflow/sdk/execution_time/task_runner.py", line 825 in run
File "/home/airflow/.local/lib/python3.12/site-packages/airflow/sdk/execution_time/task_runner.py", line 1088 in _execute_task
File "/home/airflow/.local/lib/python3.12/site-packages/airflow/sdk/bases/operator.py", line 408 in wrapper
File "/home/airflow/.local/lib/python3.12/site-packages/airflow/providers/standard/operators/python.py", line 212 in execute
File "/home/airflow/.local/lib/python3.12/site-packages/airflow/providers/standard/operators/python.py", line 235 in execute_callable
File "/home/airflow/.local/lib/python3.12/site-packages/airflow/sdk/execution_time/callback_runner.py", line 81 in run
File "/opt/airflow/dags/ingest_and_save.py", line 139 in export_to_gsheet
File "/opt/airflow/dags/ingest_and_save.py", line 127 in write_table_to_Google
File "/home/airflow/.local/lib/python3.12/site-packages/airflow/providers/google/suite/hooks/sheets.py", line 335 in clear
File "/home/airflow/.local/lib/python3.12/site-packages/googleapiclient/_helpers.py", line 130 in positional_wrapper
File "/home/airflow/.local/lib/python3.12/site-packages/googleapiclient/http.py", line 938 in execute
</code></pre>
<p>I searched for an option in the Google Cloud API to set the credentials scope, but couldn't find one. I tried manually appending the last two fields to the <code>credentials.json</code>:</p>
<pre class="lang-json prettyprint-override"><code>{
"type": "service_account",
"project_id": "<my project id>",
"private_key_id": "<my private key id>",
"private_key": "<my private key>",
"client_email": "<my client service account email>",
"client_id": "<my client id>",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://oauth2.googleapis.com/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_x509_cert_url": "<client cert url>",
"universe_domain": "googleapis.com",
"extra__google_cloud_platform__scopes": ["https://www.googleapis.com/auth/spreadsheets", "https://www.googleapis.com/auth/drive"],
"scope": ["https://www.googleapis.com/auth/spreadsheets", "https://www.googleapis.com/auth/drive"]
}
</code></pre>
<p>However, the error is still the same. What am I doing wrong here?</p>
|
<python><google-cloud-platform><google-drive-api><airflow><google-sheets-api>
|
2025-04-27 07:27:04
| 1
| 620
|
Kotaka Danski
|
79,594,793
| 6,141,238
|
How do I change a global constant of a loaded module so that other global variables of the loaded module that depend on it reflect the change?
|
<p>This question may have a simple answer. I have a near-trivial module0.py:</p>
<pre><code>a = 1
b = a + 3
</code></pre>
<p>I import this module into a script and redefine module0's global constant <code>a</code>:</p>
<pre><code>import module0 as m0
m0.a = 2
print(m0.b)
</code></pre>
<p>However, when I run this code, it prints 4 rather than 5. Is there a way to import module0 and change the value of <code>m0.a</code> so that all variables such as <code>m0.b</code> in the global namespace of the imported module are (in some sense) automatically updated with the new value of <code>m0.a</code>?</p>
|
<python><import><namespaces><global-variables><python-import>
|
2025-04-27 07:23:08
| 0
| 427
|
SapereAude
|
79,594,723
| 843,458
|
pyhton TTKBootstrap Menu - style not applied
|
<p>I tried TTKBootstrap instead of tkinter and it looks great. However the menu has not been adapted.</p>
<p>I do the following: Create a Windows based on <code>tb.Window</code></p>
<pre><code>import ttkbootstrap as tb
class App(tb.Window):
def __init__(self):
super().__init__()
# configure the root window
self.title('test ttkbootstrap')
self.geometry('800x600')
style = tb.Style(theme='superhero')
</code></pre>
<p>and create a menu in the class constructor</p>
<pre><code> menubar = tb.Menu(self)
self.config(menu=menubar)
# file Menu
filemenu = tb.Menu(menubar, tearoff=False)
filemenu.add_command(
label="Open...",
command=OpenFile
)
</code></pre>
<p>It works, but looks like the style is not applied.</p>
<p><a href="https://i.sstatic.net/19f6Mdp3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/19f6Mdp3.png" alt="enter image description here" /></a></p>
<p>Here the full code:</p>
<pre><code>from tkinter import filedialog
import ttkbootstrap as tb
def OpenFile():
filename = filedialog.askopenfilename(initialdir = "/",title = "Select file",filetypes = (("jpeg files","*.jpg"),("all files","*.*")))
print(filename)
def About():
print("This is a simple example of a menu")
class App(tb.Window):
def __init__(self):
super().__init__()
# configure the root window
self.title('test ttkbootstrap')
self.geometry('800x600')
style = tb.Style(theme='superhero')
self.tabList = []
# tabular Widgets
tabControl = tb.Notebook(self, style='lefttab.TNotebook')
tabControl.pack(pady=20)
self.tabList.append(tb.Frame(tabControl))
tabControl.add(self.tabList[-1], text ='First Tab')
self.tabList.append(tb.Frame(tabControl))
tabControl.add(self.tabList[-1], text ='Second Tab')
tabControl.pack(expand=1, fill="both")
# label
self.label = tb.Label(self.tabList[0], text='Hello, First Tab!')
self.label.pack()
# button
self.button = tb.Button(self.tabList[1], text='Click Me')
self.button.pack()
# Initialize and define the menu including functions to be called.
menubar = tb.Menu(self)
self.config(menu=menubar)
# file Menu
filemenu = tb.Menu(menubar, tearoff=False)
filemenu.add_command(
label="Open...",
command=OpenFile
)
filemenu.add_separator()
filemenu.add_command(
label='Exit',
command=self.destroy,
)
menubar.add_cascade(
label="File",
menu=filemenu,
underline=0
)
# help Menu
helpmenu = tb.Menu(menubar, tearoff=False)
helpmenu.add_command(label="About...", command=About)
menubar.add_cascade(label="Help", menu=helpmenu)
if __name__ == "__main__":
app = App()
app.mainloop()
</code></pre>
|
<python><tkinter><ttkbootstrap>
|
2025-04-27 06:04:01
| 1
| 3,516
|
Matthias Pospiech
|
79,594,401
| 14,551,796
|
Why am I getting errors with discord.ActionRow in discord.py?
|
<p>I'm trying to create a game using buttons in Discord with <code>discord.py</code>, and I'm using <code>discord.ActionRow</code>, but it's giving me errors. Here's the function for context:</p>
<pre class="lang-py prettyprint-override"><code>async def create_game_board(self, view, callback):
buttons = []
for i in range(3):
row_buttons = []
for j in range(3):
button = discord.ui.Button(label="\u200b", style=discord.ButtonStyle.gray, custom_id=f"{i}_{j}")
button.callback = callback
row_buttons.append(button)
buttons.append(row_buttons)
print(len(buttons[0]))
view.add_item(discord.ActionRow(*buttons[0]))
view.add_item(discord.ActionRow(*buttons[1]))
view.add_item(discord.ActionRow(*buttons[2]))
return buttons
</code></pre>
<p>My problem lies in this code snippet</p>
<pre class="lang-py prettyprint-override"><code>view.add_item(discord.ActionRow(*buttons[0]))
view.add_item(discord.ActionRow(*buttons[1]))
view.add_item(discord.ActionRow(*buttons[2]))
</code></pre>
<p>But it results in this error:</p>
<pre class="lang-py prettyprint-override"><code>File "Removed for StackOverflow", line 111, in create_game_board
view.add_item(discord.ActionRow(*buttons[0]))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: ActionRow.__init__() takes 2 positional arguments but 4 were given
</code></pre>
<p>I've also tried doing <code>discord.ActionRow(components=buttons[0])</code> but that results in errors too.</p>
|
<python><discord.py>
|
2025-04-26 21:02:23
| 1
| 455
|
benz
|
79,594,266
| 9,651,461
|
How do I get the size in bytes of a WriteableBuffer?
|
<p>Previous question: <a href="https://stackoverflow.com/questions/79594143/how-do-i-add-a-type-hint-for-a-writeablebuffer-parameter/79594171#79594171">How do I add a type hint for a WriteableBuffer parameter?</a></p>
<p>I'm trying to implement the <code>readinto()</code> method of a <code>RawIOBase</code> subclass with correct type hints. My implementation relies on the number of bytes in the supplied buffer, and I'm not sure how to get that.</p>
<p>The version I have so far looks like this:</p>
<pre class="lang-py prettyprint-override"><code>from io import RawIOBase
from typing import TYPE_CHECKING
if TYPE_CHECKING:
from _typeshed import WriteableBuffer
class Reader(RawIOBase):
def readinto(self, buf: WriteableBuffer):
n = len(buf) # <-- this is not allowed
# ... do something with n ...
return super().readinto(buf)
</code></pre>
<p>This has two problems. First, <code>pyright</code> complains about calling <code>len()</code> on <code>buf</code>:</p>
<pre><code>io.py:9:19 - error: Argument of type "WriteableBuffer" cannot be assigned to parameter "obj" of type "Sized" in function "len"
"Buffer" is incompatible with protocol "Sized"
"__len__" is not present (reportArgumentType)
1 error, 0 warnings, 0 informations
</code></pre>
<p>WriteableBuffers seems to be an alias for <code>Buffer</code> (<a href="https://github.com/python/typeshed/blob/8a6bfb06981a9bab7c74472f62d6132f501c4a0e/stdlib/_typeshed/__init__.pyi#L282" rel="nofollow noreferrer">source</a>), though it used to be <a href="https://github.com/python/typeshed/blob/536f783a826c1b91e6e1e17380083438c3364efc/stdlib/_typeshed/__init__.pyi#L196" rel="nofollow noreferrer">defined like this</a>:</p>
<pre><code>WriteableBuffer: TypeAlias = bytearray | memoryview | array.array[Any] | mmap.mmap | ctypes._CData # stable
</code></pre>
<p>Which sheds some light on the problem: all of these definitely have a defined length, except <code>ctypes._CData</code>, where only some subclasses define <code>__len__</code> (e.g., arrays do, primitive values like int don't). But it seems the conclusion is that a WriteableBuffer just doesn't have a length, which is weird, because how should you write to it if you don't know its size?</p>
<p>Second, <code>len(buf)</code> isn't always the number of bytes in the underlying buffer; the length of a numpy array is its number of rows, for example.</p>
<p>Is there a way to fix this code? Or should I just add a <code># type: ignore</code> to the lines involving <code>len(buf)</code>?</p>
|
<python><python-typing><pyright>
|
2025-04-26 18:16:37
| 2
| 1,194
|
Maks Verver
|
79,594,143
| 9,651,461
|
How do I add a type hint for a WriteableBuffer parameter?
|
<p>I'm trying to add a parameter type to the <code>readinto()</code> method declared in a custom class that derives from <code>RawIOBase</code>, like this:</p>
<pre class="lang-py prettyprint-override"><code>from io import RawIOBase
class Reader(RawIOBase):
def readinto(self, buf: bytearray) -> int:
pass # actual implementation omitted
</code></pre>
<p>But pyright complains:</p>
<pre class="lang-bash prettyprint-override"><code>io.py:6:9 - error: Method "readinto" overrides class "_RawIOBase" in an incompatible manner
Parameter 2 type mismatch: base parameter is type "WriteableBuffer", override parameter is type "bytearray"
"Buffer" is not assignable to "bytearray" (reportIncompatibleMethodOverride)
1 error, 0 warnings, 0 informations
</code></pre>
<p>How do I fix this? Note: I know I can remove the type hint entirely. I want to assign it the correct type instead.</p>
<p>I'm using Python 3.13.3 and pyright 1.1.400.</p>
|
<python><python-typing><pyright>
|
2025-04-26 16:17:20
| 1
| 1,194
|
Maks Verver
|
79,594,108
| 1,908,650
|
How can I stop a matplotlib table overlapping a graph?
|
<p>The MWE below produces a plot like this:</p>
<p><a href="https://i.sstatic.net/lGiuB189.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lGiuB189.png" alt="enter image description here" /></a></p>
<p>The row labels, <code>X</code>, <code>Y</code>, <code>Z</code>, overlap the right hand side of the bar chart in an ugly fashion. I'd like them moved further to the right, leaving a small margin between the chart and the table. The <a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.table.html" rel="nofollow noreferrer">documentation</a> for matplotlib.pyplot.table doesn't have any <code>loc</code> options that would allow this. The <code>bbox</code> argument might allow it, but it would seem to require trial and error with different bounding boxes to get that working.</p>
<p>Is there a way to lay this out more cleanly?</p>
<pre><code>import matplotlib.pyplot as plt
import seaborn as sns
plt.figure(figsize=(9, 5), dpi=300)
ax = sns.barplot(
x = range(1, 3),
y = [15,30],
legend=False,
)
plt.table(cellText=['A','B','C'], rowLabels=['X','Y','Z'], colWidths=[0.15], loc='right')
plt.show()
</code></pre>
|
<python><matplotlib>
|
2025-04-26 15:42:54
| 3
| 9,221
|
Mohan
|
79,593,965
| 16,813,096
|
How to get list of all the available capture devices in python without using any external module? (Windows)
|
<p>I want to retrieve a list of available webcams on Windows, without relying on external libraries such as OpenCV, PyGrabber, or Pygame.</p>
<p>Although I found a code snippet that accomplishes this task, but it uses WMIC. Unfortunately, when I tested it on another Windows device, I encountered an error stating <code>'wmic' is not recognized as an internal or external command</code></p>
<pre class="lang-py prettyprint-override"><code>import subprocess
def get_webcams_windows():
try:
# Execute the WMIC command to get a list of video capture devices
result = subprocess.check_output(
'wmic path win32_pnpentity where "Description like \'%Video%\'" get Name',
shell=True,
text=True
)
webcams = result.strip().split('\n')[1:] # Skip the header
return [webcam.strip() for webcam in webcams if webcam.strip()] # Filter out empty lines
except Exception:
return []
webcam_list = get_webcams_windows()
print(webcam_list)
</code></pre>
<p>Is there any other efficient method?</p>
|
<python><python-3.x><webcam-capture>
|
2025-04-26 13:25:17
| 2
| 582
|
Akascape
|
79,593,938
| 10,966,844
|
While testing airflow task with pytest, I got an error
|
<p>While testing airflow with pytest, i got an Error.</p>
<pre><code># tests/conftest.py
import datetime
import pytest
from airflow.models import DAG
@pytest.fixture
def test_dag():
return DAG(
"test_dag",
default_args={
"owner": "airflow",
"start_date": datetime.datetime(2025, 4, 5),
"end_date": datetime.datetime(2025, 4, 6)
},
schedule=datetime.timedelta(days=1)
)
</code></pre>
<pre><code># tests/test_instance_context.py
import datetime
from airflow.models import BaseOperator
from airflow.models.dag import DAG
from airflow.utils import timezone
class SampleDAG(BaseOperator):
template_fields = ("_start_date", "_end_date")
def __init__(self, start_date, end_date, **kwargs):
super().__init__(**kwargs)
self._start_date = start_date
self._end_date = end_date
def execute(self, context):
context["ti"].xcom_push(key="start_date", value=self.start_date)
context["ti"].xcom_push(key="end_date", value=self.end_date)
return context
def test_execute(test_dag: DAG):
task = SampleDAG(
task_id="test",
start_date="{{ prev_ds }}",
end_date="{{ ds }}",
dag=test_dag
)
task.run(
start_date=test_dag.default_args["start_date"],
end_date=test_dag.default_args["end_date"]
)
expected_start_date = datetime.datetime(2025, 4, 5, tzinfo=timezone.utc)
expected_end_date = datetime.datetime(2025, 4, 6, tzinfo=timezone.utc)
assert task.start_date == expected_start_date
assert task.end_date == expected_end_date
</code></pre>
<p>Test code is passed, but I got an issue here.</p>
<pre><code>tests/test_instance_context.py [2025-04-26T12:51:18.289+0000] {taskinstance.py:2604} INFO - Dependencies not met for <TaskInstance: test_dag.test manual__2025-04-05T00:00:00+00:00 [failed]>, dependency 'Task Instance State' FAILED: Task is in the 'failed' state.
[2025-04-26T12:51:18.303+0000] {taskinstance.py:2604} INFO - Dependencies not met for <TaskInstance: test_dag.test manual__2025-04-06T00:00:00+00:00 [failed]>, dependency 'Task Instance State' FAILED: Task is in the 'failed' state.
.
</code></pre>
<p>I want to test task.run to see difference between <code>task.run</code> and <code>task.execute</code>.
when I passed jinja variables, then airflow automatically rendering the variables by run method.</p>
<p>So, I want to see prev_ds, ds, start_date, end_date is successfully rendered.
But I got an error above..</p>
|
<python><airflow><pytest>
|
2025-04-26 13:00:04
| 1
| 343
|
hhk
|
79,593,911
| 3,555,115
|
Pair all elements leaving alternate element in a list and form a new list of lists
|
<p>I have a list which I would like to have unique element pairs generated for every alternate element</p>
<pre><code>List = [ L1, L2, L3, L4, L5, L6, L7,L8 ]
List_in_pairs = [[L1,L3], [L2,L4], [L5,L7],[L6,L8]]
</code></pre>
<p>I tried</p>
<pre><code>list_in_pairs = list(zip(List[::1], List[2::1]))
</code></pre>
<p>but that doesn't seem to generate pairs as above fashion. Any insights?</p>
|
<python>
|
2025-04-26 12:27:53
| 3
| 750
|
user3555115
|
79,593,886
| 774,575
|
How to use Model-View with a QCheckBox?
|
<p>How to use the model-view approach with a checkbox? The view is expected to display the model status but when the user clicks on the view, the checkbox status is actually changed before it is told by the model. For example the sequence should be:</p>
<ul>
<li>Current model state is <code>unchecked</code> and CB state is <code>unchecked</code></li>
<li>User clicks the CB to switch its state to <code>checked</code> (but not directly)</li>
<li>To have the state changed, the view asks the model to change its state to <code>checked</code></li>
<li>The model updates its state to <code>checked</code></li>
<li>The view sees the model change and sets the CB state to <code>checked</code>, doing what is done usually by the user click when not using the Model-View concept.</li>
</ul>
<p>However this doesn't happen because between steps 2 and 3 the CB state switches to <code>checked</code> and the view actually asks the model to switch to the other state, <code>unchecked</code>.</p>
<pre><code>from qtpy.QtCore import QObject, Signal
from qtpy.QtWidgets import QApplication, QMainWindow, QWidget, QCheckBox, QHBoxLayout
class MainWindow(QMainWindow):
def __init__(self):
super().__init__()
self.model = Model()
self.view = View(self.model)
self.setCentralWidget(self.view)
class View(QWidget):
user_request = Signal(bool)
def __init__(self, model):
super().__init__()
self.model = model
self.cb = QCheckBox('Click Me')
layout = QHBoxLayout(self)
layout.addWidget(self.cb)
self.cb.clicked.connect(self.cb_clicked)
self.cb.stateChanged.connect(self.cb_changed)
self.model.updated.connect(self.update_from_model)
self.user_request.connect(self.model.request_update)
def cb_clicked(self):
current_state = self.cb.isChecked()
desired = not current_state
print('User wants the state to be', desired)
self.user_request.emit(desired)
def cb_changed(self, state):
states = ['unchecked', 'tristate', 'checked']
print('CB has been updated to', states[state], '\n')
def update_from_model(self, state):
print('View aligns CB on model state', state)
self.cb.setChecked(state)
class Model(QObject):
updated = Signal(bool)
def __init__(self):
super().__init__()
self.state = False
self.updated.emit(self.state)
def data(self):
return self.state
def request_update(self, checked):
self.state = checked
print('Model sets its state to', checked)
self.updated.emit(checked)
def main():
app = QApplication([])
window = MainWindow()
window.show()
app.exec()
if __name__ == '__main__':
main()
</code></pre>
|
<python><model><qt5><pyside2><qcheckbox>
|
2025-04-26 11:46:56
| 1
| 7,768
|
mins
|
79,593,818
| 577,288
|
concurrent.futures not showing thread completion
|
<pre><code>import concurrent.futures
import random
import pdb
# Analysis of text packet
def Threads1(curr_section, index1):
words = open('test.txt', 'r', encoding='utf-8', errors='ignore').read().replace('"', '').split()
longest_recorded = []
for ii1 in words:
test1 = random.randint(1, 1000)
if test1 > 900: break
else: longest_recorded.append(ii1)
perc = (index1 / max1) * 100
print('In: ' + str([index1, str(int(perc))+'%']))
return [index1, longest_recorded]
# Split text into packets
max1 = 20; count_Done = 0; ranger = [None for ii in range(0,max1)]
print(str(int((count_Done / max1) * 100)) + '%')
with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor:
working_threads = {executor.submit(Threads1, curr_section, index1): curr_section for index1, curr_section in enumerate(ranger)}
for future in concurrent.futures.as_completed(working_threads):
count_Done += 1; current_result = future.result()
# Write to disk (random)
text1 = ''
for ii in range(500, random.randint(2000, 5000)): text1 += 'a'
with open('temp_Publish.txt', 'w', encoding='utf-8') as file: file.write(text1 + '\n')
# Write to disk (random)
text2 = ''
for ii in range(500, random.randint(2000, 5000)): text2 += 'a'
with open('threads.txt', 'w', encoding='utf-8') as file: file.write(text2)
print('Out: ' + str([current_result[0], str(int((count_Done / max1) * 100)) + '%']))
</code></pre>
<p>This part of the code, goes from 30% to 100% instantly.</p>
<pre><code>print('Out: ' + str([current_result[0], str(int((count_Done / max1) * 100)) + '%']))
</code></pre>
<p><strong>Output</strong></p>
<pre><code>0%
In: [0, '0%']
In: [1, '5%']
In: [2, '10%']
In: [3, '15%']
In: [4, '20%']
Out: [0, '5%']
In: [5, '25%']
In: [6, '30%']
Out: [1, '10%']
In: [7, '35%']
In: [8, '40%']
In: [9, '45%']Out: [2, '15%']
In: [10, '50%']
In: [11, '55%']
Out: [3, '20%']
In: [12, '60%']
In: [13, '65%']
In: [14, '70%']
Out: [4, '25%']In: [15, '75%']
In: [16, '80%']
In: [17, '85%']
Out: [5, '30%']In: [18, '90%']
In: [19, '95%']
Out: [6, '35%']
Out: [7, '40%']
Out: [8, '45%']
Out: [9, '50%']
Out: [10, '55%']
Out: [11, '60%']
Out: [12, '65%']
Out: [13, '70%']
Out: [14, '75%']
Out: [15, '80%']
Out: [16, '85%']
Out: [17, '90%']
Out: [18, '95%']
Out: [19, '100%']
</code></pre>
<p>The above output is suppose to look like this.</p>
<pre><code>0%
In: [0, '0%']
Out: [0, '5%']
In: [1, '5%']
Out: [1, '10%']
In: [2, '10%']
Out: [2, '15%']
In: [3, '15%']
Out: [3, '20%']
In: [4, '20%']
Out: [4, '25%']
In: [5, '25%']
Out: [5, '30%']
In: [6, '30%']
Out: [6, '35%']
In: [7, '35%']
Out: [7, '40%']
In: [8, '40%']
Out: [8, '45%']
In: [9, '45%']
Out: [9, '50%']
In: [10, '50%']
Out: [10, '55%']
In: [11, '55%']
Out: [11, '60%']
In: [12, '60%']
Out: [12, '65%']
In: [13, '65%']
Out: [13, '70%']
In: [14, '70%']
Out: [14, '75%']
In: [15, '75%']
Out: [15, '80%']
In: [16, '80%']
Out: [16, '85%']
In: [17, '85%']
Out: [17, '90%']
In: [18, '90%']
Out: [18, '95%']
In: [19, '95%']
Out: [19, '100%']
</code></pre>
<p>But the only way I can get it to look like this ... <strong>is by removing this part of the code.</strong></p>
<pre><code># Write to disk (random)
text1 = ''
for ii in range(500, random.randint(2000, 5000)): text1 += 'a'
with open('temp_Publish.txt', 'w', encoding='utf-8') as file: file.write(text1 + '\n')
# Write to disk (random)
text2 = ''
for ii in range(500, random.randint(2000, 5000)): text2 += 'a'
with open('threads.txt', 'w', encoding='utf-8') as file: file.write(text2)
</code></pre>
<p>How can I keep this write to disk code ... while also keeping the progress bar in sync with thread complete?</p>
|
<python><multithreading><concurrent.futures>
|
2025-04-26 10:22:48
| 2
| 5,408
|
Rhys
|
79,593,659
| 13,538,030
|
Check whether one string can be formed by another string in Python
|
<p>I want to check whether one string can be formed by another string, e.g., in the example below, I want to check how many strings in the list <code>targets</code> can be formed by string <code>chars</code>. Each character in <code>chars</code> can only be used once.</p>
<pre><code>targets = ["cat","bt","hat","tree"], chars = "atach"
</code></pre>
<p>My code is as follows:</p>
<pre><code> ans = 0
chars_freq = Counter(chars)
for word in targets:
word_freq = Counter(word)
for char in word:
if word_freq[char] > chars_freq[char]:
break
ans += 1
return ans
</code></pre>
<p>For the example the answer should be <code>2</code>, but mine gets <code>4</code>.</p>
|
<python><string>
|
2025-04-26 06:59:59
| 2
| 384
|
Sophia
|
79,593,506
| 1,614,051
|
How can I annotate a function that takes a tuple of types and returns an object of one of those types?
|
<p>I want to annotate a function that essentially does this:</p>
<pre><code>def safe_convert(value: Any, allowed_types: tuple):
if isinstance(value, allowed_types):
return value
raise TypeError()
</code></pre>
<p>Now, my intention is for <code>allowed_types</code> to be a tuple of type objects, and to tell the type checker that the result is a <code>Union</code> of those types, but I can't figure out how to do that. My closest attempt is to write something like this:</p>
<pre><code>def safe_convert[*Ts](value: Any, allowed_types: tuple[*type[Ts]]) -> Union[*Ts]:
...
</code></pre>
<p>But this produces an error: <code>**error: "type[Ts]" cannot be unpacked (must be tuple or TypeVarTuple) [valid-type]**</code></p>
<p>For reference, the single argument version works just fine:</p>
<pre><code>def safe_convert_single[T](value: Any, allowed_type: type[T]) -> T:
if isinstance(value, allowed_type):
return value
raise TypeError()
</code></pre>
<p>Any ideas? I'm really struggling here.</p>
<p>Edit: Here's a slightly expanded version of what I want to write.</p>
<pre><code>class AllTheThings:
def __init__(self):
self._things = dict[str, Any]()
def put(self, key: str, value: Any):
self._things[key] = value
def safe_get(self, key: str, allowed_types: tuple):
value = self._things[key]
if isinstance(value, allowed_types)
return value
raise TypeError()
things = AllTheThings()
thigns.put("foo", "a")
things.put("bar", 42)
things.put("baz", {})
thing = things.safe_get("foo", (int, str))
reveal_type(thing) # Should be Union[int, str]
</code></pre>
<p>I want type checkers to treat this more or less like</p>
<pre><code>assert isinstance((value := self._things[key]), allowed_types)
</code></pre>
|
<python><python-typing>
|
2025-04-26 01:47:36
| 1
| 2,072
|
Filipp
|
79,593,505
| 12,921,500
|
Can't close cookie pop up on website with selenium webdriver
|
<p>I am trying to use selenium to click the Accept all or Reject all button on a cookie pop up for the the website <a href="https://autotrader.co.uk" rel="nofollow noreferrer">autotrader.co.uk</a>, but I cannot get it to make the pop up disappear for some reason.</p>
<p>This is the pop up:</p>
<p><a href="https://i.sstatic.net/JlHDaV2C.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JlHDaV2C.jpg" alt="enter image description here" /></a></p>
<p>and here is the html:</p>
<pre><code><button title="Reject All" aria-label="Reject All" class="message-component message-button no-children focusable sp_choice_type_13" style="opacity: 1; padding: 10px 5px; margin: 10px 5px; border-width: 2px; border-color: rgb(5, 52, 255); border-radius: 5px; border-style: solid; font-size: 14px; font-weight: 400; color: rgb(255, 255, 255); font-family: arial, helvetica, sans-serif; width: calc(35% - 20px); background: rgb(5, 52, 255);">Reject All</button>
</code></pre>
<p><a href="https://i.sstatic.net/2Xhue5M6.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2Xhue5M6.jpg" alt="enter image description here" /></a></p>
<p>The code I have tried is the following:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import time
path_to_driver = r"C:\path_to_project\chromedriver.exe"
service = Service(executable_path=path_to_driver)
driver = webdriver.Chrome(service=service)
driver.get("https://www.autotrader.co.uk")
time.sleep(5)
WebDriverWait(driver, 15).until(EC.element_to_be_clickable((By.CLASS_NAME, 'message-component message-button no-children focusable sp_choice_type_13'))).click()
time.sleep(10)
driver.quit()
</code></pre>
<p>Can anyone help here?</p>
|
<python><selenium-webdriver><web-scraping><selenium-chromedriver>
|
2025-04-26 01:47:28
| 2
| 785
|
teeeeee
|
79,593,499
| 7,076,615
|
astor, ast, astunparse do not preserve lineno, astmonkey puts \ (backslash) everywhere
|
<p>Here is the output run on a very simple urls.py from a test Django project:</p>
<pre><code>from django.contrib import admin
from django.urls import path
urlpatterns = [\
path('test_path1/', views.test_view1, name='test_name1'), \
path('test_path2/', views.test_view1, name='test_name1'), path('admin/', admin.site.test1.test2.urls, name='admin')]
</code></pre>
<p>See the last few lines. Astmonkey almost gets it right, but put in \ everywhere!</p>
<p>Here's my code:</p>
<pre><code>from ast import (Name, Call, Load, Attribute, Constant, Expr,
NodeTransformer, parse, unparse, dump, keyword, fix_missing_locations)
from base_injector import BaseInjector
from astmonkey import visitors
import astor
class UrlEntry:
def __init__(self, path, view, name):
self._path = path
self._view = view
self._name = name
@property
def path(self):
return self._path
@property
def view(self):
return self._view
@property
def name(self):
return self._name
class DjangoUrlInjector(BaseInjector):
def __init__(self, filename, urls, index=0):
self._filename = filename,
self._urls = urls
self._index = index
super().__init__()
def visit_Assign(self, node):
target = node.targets[0]
if target.id == 'urlpatterns':
patterns = node.value
index = self._index
index %= len(patterns.elts)
# Determine lineno of existing urlpatterns
if len(patterns.elts) == 0:
line_number = node.lineno + 1
else:
line_number = patterns.elts[index].lineno
for url in self._urls:
path_call = Call(func=Name(id='path', ctx=Load()),
args=[Constant(value=url.path),
self._dot_str_to_attrib(url.view)],
keywords=[keyword(arg='name', value=Constant(value=url.name))])
path_call.lineno = line_number
line_number += 1
patterns.elts.insert(index, path_call)
index += 1
# Adjust remaining line number past where we've inserted:
if index < len(patterns.elts[index:]):
for existing in patterns.elts:
existing.lineno = line_number
line_number += 1
node = self.generic_visit(node)
return node
if __name__ == '__main__':
urls_filename = 'test/django_project/django_project/urls.py'
url_injector = DjangoUrlInjector(
urls_filename,
[UrlEntry('test_path1/', 'views.test_view1', 'test_name1'),
UrlEntry('test_path2/', 'views.test_view1', 'test_name1')]
)
import os
os.chdir('../..')
ast_tree = astor.parsefile(urls_filename)
ast_tree = url_injector.visit(ast_tree)
result_code = visitors.to_source(ast_tree)
print(result_code)
`
</code></pre>
<p>Why is it so difficult to preserve line numbers and or inject some code. I thought knowing the AST would be beneficial, but it's not. It's as if I'd get further using regexes...</p>
<p>What I've tried is all combinations of ast, astor, astunparse, astmonkey.</p>
<p>What I'm expecting is that lineno's mean something. To the first 3 they don't. Then for the last one, astmonkey, it decides to randomly put in backslashes.</p>
<p>What I'm looking for is a code fix of ast / astor that preserves lineno's. Or a fix of astmonkey that gets rid of backslashes. OR, an alternative library that actually does what it advertises.</p>
|
<python><parsing><abstract-syntax-tree><code-injection><line-numbers>
|
2025-04-26 01:37:00
| 0
| 643
|
Daniel Donnelly
|
79,593,281
| 813,951
|
How to prevent unescaped program arguments
|
<p>This test program parses a single text argument, which could be a URL:</p>
<pre><code>#!/usr/bin/env python
import argparse
def parse_input():
parser = argparse.ArgumentParser(
description='Program description'
)
parser.add_argument('input_text', type=str, help='A string')
args = parser.parse_args()
return args.input_text
if __name__ == '__main__':
text = parse_input()
print(f"Input text: \"{text}\"")
</code></pre>
<p>And of course a URL could have a querystring.</p>
<p>If the program is called with a quoted argument, nothing bad happens:</p>
<pre><code>$ ./test.py "sometext?somearg=someval&somearg2=someval2"
Input text: "sometext?somearg=someval&somearg2=someval2"
</code></pre>
<p>But if someone forgets to quote the argument, this happens:</p>
<pre><code>$ ./test.py sometext?somearg=someval&somearg2=someval2
[1] 5174
$ Input text: "sometext?somearg=someval"
^C
[1]+ Done ./test.py sometext?somearg=someval
</code></pre>
<p>The console waited forever so I had to press Ctrl+C at line 3. I don't really understand what was going on with this wait, but I think that because of the <code>&</code> character in the unescaped argument, Bash executes <code>./test.py sometext?somearg=someval</code> in background, and immediately executes <code>somearg2=someval2</code> in foreground, which just assigns a session variable in the shell. For every additional querystring parameter bash would launch them as commands in background. Now imagine if by chance one of those potentially many querystring parameter had no value (as in <code>&somearg3</code>), which are <a href="https://stackoverflow.com/questions/4557387/is-a-url-query-parameter-valid-if-it-has-no-value">perfectly valid</a>. Also if by chance that unvalued parameter name was the same as some binary in the system, very bad things could happen.</p>
<p>Can we do something to prevent the user from inadvertedly entering an unquoted argument? So far <code>argparse</code> gives me the exact same result regardless of whether the argument was quoted or unquoted.</p>
|
<python><bash><escaping><code-injection><shebang>
|
2025-04-25 20:39:55
| 1
| 28,229
|
Mister Smith
|
79,593,227
| 226,499
|
How to detect MIME type from a file buffer in Python, especially for legacy Office formats like .xls, .doc, .ppt?
|
<p>I'm building a general-purpose Python library for text extraction that should support input from either:</p>
<ul>
<li>A file path (e.g., str pointing to a local file),</li>
</ul>
<p>or</p>
<ul>
<li>A file-like object (e.g., BytesIO stream, such as you'd get from a
web upload or in-memory operation)</li>
</ul>
<p>To decide which extraction function to call, I need to detect the MIME type of the file.</p>
<p>🧪 <strong>Current approach</strong>
I'm currently using python-magic, which wraps libmagic, like this:</p>
<pre><code>import magic
file_input.seek(0)
mime_type = magic.Magic(mime=True).from_buffer(file_input.read(2048))
file_input.seek(0)
</code></pre>
<p>This works well for modern formats like PDF, DOCX, XLSX, etc., but not for legacy Microsoft Office formats such as .xls, .doc, .ppt.</p>
<p>For those, the detected MIME type is always:</p>
<pre><code>application/x-ole-storage
</code></pre>
<p>Which means I can't distinguish between .doc and .xls from a buffer, unless I already know the original filename or extension — which may not be available (e.g., if the file is being streamed or uploaded).</p>
<p>🧩 <strong>What I’ve tried or considered</strong></p>
<ul>
<li>magic.from_file(path) works well — but obviously requires a real file
path</li>
<li>filetype is lightweight, but doesn't support .xls/.doc</li>
<li>mimetypes.guess_type() is extension-based — not usable with buffers</li>
<li>Using heuristics: try parsing with xlrd, or looking for streams
inside the OLE2 structure
<ul>
<li>This works, but is fragile and a bit ugly</li>
</ul>
</li>
<li>Embedding extension in metadata (e.g., as a side-channel) — not ideal
for generic libraries</li>
</ul>
<p>❓ <strong>The Question</strong>
Is there a better way to detect the MIME type of a file given only a BytesIO buffer, especially for legacy formats like .xls, .doc, and .ppt?</p>
<p>Or put differently:</p>
<p>How do you build a clean, extensible MIME type detection strategy that works even in streaming or upload-based contexts, and doesn't depend on file extensions?</p>
<p>🧠 <strong>Related discussions</strong>
I’m aware of this classic and widely referenced post: 👉 <a href="https://stackoverflow.com/questions/43580/how-to-find-the-mime-type-of-a-file-in-python">How to find the MIME type of a file in Python</a></p>
<p>But none of the answers there fully address the specific case of distinguishing OLE2 formats when working with in-memory buffers only.</p>
<p>Thanks in advance for any insight — I'm hoping for a clean and reliable solution to integrate into an open-source Python package! 🚀</p>
|
<python><mime-types><file-type><python-magic><libmagic>
|
2025-04-25 19:52:41
| 1
| 606
|
GBBL
|
79,592,979
| 12,550,791
|
Assert a logger writes to stdout
|
<p>I'm trying to assert the fact that my loggers writes to stdout, but I can't get it to work. I ran the logger in a python file to make sure it outputs something in the standard output.</p>
<p>So far, I can assert with pytest that the message is logged:</p>
<pre class="lang-py prettyprint-override"><code>import logging
import sys
formatter = logging.Formatter("%(message)s")
stdout_handler = logging.StreamHandler(sys.stdout)
stdout_handler.setFormatter(formatter)
logger = logging.getLogger("test")
def set_verbose(verbose: bool):
if verbose:
logger.setLevel(logging.DEBUG)
logger.removeHandler(stdout_handler)
logger.addHandler(stdout_handler)
else:
logger.setLevel(logging.CRITICAL)
logger.removeHandler(stdout_handler)
def test_set_verbose_log(caplog):
set_verbose(True)
logger.info("test-2-")
assert "test-2-" in caplog.text
set_verbose(False)
logger.info("test-3-")
assert "test-3-" not in caplog.text
set_verbose(True)
logger.info("test-4-")
assert "test-4-" in caplog.text
set_verbose(False)
logger.info("test-5-")
assert "test-5-" not in caplog.text
def test_set_verbose_sys(capsys):
set_verbose(True)
logger.info("test-6-")
assert "test-6-" in capsys.readouterr().out
set_verbose(False)
logger.info("test-7-")
assert "test-7-" not in capsys.readouterr().out
set_verbose(True)
logger.info("test-8-")
assert "test-8-" in capsys.readouterr().out
set_verbose(False)
logger.info("test-9-")
assert "test-9-" in capsys.readouterr().out
def test_set_verbose_fd(capfd):
set_verbose(True)
logger.info("test-10-")
assert "test-10-" in capfd.readouterr().out
set_verbose(False)
logger.info("test-11-")
assert "test-11-" not in capfd.readouterr().out
</code></pre>
<p>and the pytest output is</p>
<pre><code>tests/test_test.py .FF [100%]
tests/test_test.py .FF [100%]
====================================================================== FAILURES ======================================================================
________________________________________________________________ test_set_verbose_sys ________________________________________________________________
capsys = <_pytest.capture.CaptureFixture object at 0x106a6c590>
setup_logging = (<Logger test (DEBUG)>, <function setup_logging.<locals>.set_verbose at 0x106a80ea0>)
def test_set_verbose_sys(capsys, setup_logging):
logger, set_verbose = setup_logging
set_verbose(True)
logger.info("test-6-")
> assert "test-6-" in capsys.readouterr().out
E AssertionError: assert 'test-6-' in ''
E + where '' = CaptureResult(out='', err='').out
E + where CaptureResult(out='', err='') = readouterr()
E + where readouterr = <_pytest.capture.CaptureFixture object at 0x106a6c590>.readouterr
tests/test_test.py:53: AssertionError
---------------------------------------------------------------- Captured stdout call ----------------------------------------------------------------
test-6-
----------------------------------------------------------------- Captured log call ------------------------------------------------------------------
INFO test:test_test.py:52 test-6-
________________________________________________________________ test_set_verbose_fd _________________________________________________________________
capfd = <_pytest.capture.CaptureFixture object at 0x106a4cb90>
setup_logging = (<Logger test (DEBUG)>, <function setup_logging.<locals>.set_verbose at 0x106aa42c0>)
def test_set_verbose_fd(capfd, setup_logging):
logger, set_verbose = setup_logging
set_verbose(True)
logger.info("test-10-")
> assert "test-10-" in capfd.readouterr().out
E AssertionError: assert 'test-10-' in ''
E + where '' = CaptureResult(out='', err='').out
E + where CaptureResult(out='', err='') = readouterr()
E + where readouterr = <_pytest.capture.CaptureFixture object at 0x106a4cb90>.readouterr
tests/test_test.py:73: AssertionError
---------------------------------------------------------------- Captured stdout call ----------------------------------------------------------------
test-10-
----------------------------------------------------------------- Captured log call ------------------------------------------------------------------
INFO test:test_test.py:72 test-10-
============================================================== short test summary info ===============================================================
FAILED tests/test_test.py::test_set_verbose_sys - AssertionError: assert 'test-6-' in ''
FAILED tests/test_test.py::test_set_verbose_fd - AssertionError: assert 'test-10-' in ''
============================================================ 2 failed, 1 passed in 0.05s =============================================================
</code></pre>
<p>but when I try to run (approximately) the same thing in a file, I can see <code>test-2-</code> being outputted in the standard output.</p>
<pre class="lang-py prettyprint-override"><code>import logging
import sys
stdout_handler = logging.StreamHandler(sys.stdout)
formatter = logging.Formatter("%(message)s")
stdout_handler.setFormatter(formatter)
logger = logging.getLogger("test")
logger.addHandler(stdout_handler)
def set_verbose(verbose: bool):
if verbose:
logger.setLevel(logging.DEBUG)
if not logger.hasHandlers():
logger.addHandler(stdout_handler)
else:
logger.setLevel(logging.CRITICAL)
logger.removeHandler(stdout_handler)
logger.info("test-1-")
set_verbose(True)
logger.info("test-2-")
set_verbose(False)
logger.info("test-3-")
</code></pre>
<p>Thank you for your time.</p>
|
<python><logging><pytest><python-logging>
|
2025-04-25 16:47:59
| 0
| 391
|
Marco Bresson
|
79,592,693
| 13,061,449
|
Why does pd.to_datetime('2025175', format='%Y%W%w') and pd.Timestamp.fromisocalendar(2025, 17, 5) gives different output?
|
<p>Why does <code>pd.to_datetime('2025175', format='%Y%W%w')</code> and <code>pd.Timestamp.fromisocalendar(2025, 17, 5)</code> gives different output?</p>
<p>I expected to obtain <code>Timestamp('2025-04-25 00:00:00')</code> for both cases.
But the first approach resulted on a Friday one week ahead.</p>
<p>Minimum example</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
friday_datetime = pd.to_datetime('2025175', format='%Y%W%w')
friday_timestamp = pd.Timestamp.fromisocalendar(2025, 17, 5)
assert friday_datetime == friday_timestamp, (friday_datetime, friday_timestamp)
</code></pre>
<p>Output:</p>
<pre class="lang-py prettyprint-override"><code>assert friday_datetime == friday_timestamp, (friday_datetime, friday_timestamp)
AssertionError: (Timestamp('2025-05-02 00:00:00'), Timestamp('2025-04-25 00:00:00'))
</code></pre>
|
<python><pandas><datetime><timestamp>
|
2025-04-25 14:10:53
| 1
| 315
|
viniciusrf1992
|
79,592,652
| 16,383,578
|
Why using a bigger wheel doesn't make my wheel factorization prime sieve faster?
|
<p>I assume you all know what prime numbers are and what Sieve of Eratosthenes is, so I won't waste time explaining them.</p>
<p>Now, all prime numbers except 2 are odd numbers, so we only need to check odd numbers, this is very obvious, but this is worth mentioning because this simple optimization halved the candidates we need to check, and so we only need to mark odd multiples of prime numbers other than 2.</p>
<p>Another simple optimization is to only check numbers up to square root of limit, and yet another simple optimization is to mark as composite start at the square of the prime number, as the reasons for these should be obvious I won't explain why, though I can't think of further optimizations regarding the marking of composites.</p>
<p>But we can narrow down the search space further, all prime numbers except 2, 3, 5 must be congruent to <code>[1, 7, 11, 13, 17, 19, 23, 29]</code> % 30. This is evident from the nature of modulus. There are only 30 possibilities, if the modulus is even then the number is a multiple of 2, and the other possibilities mean the number is either a multiple of 3, a multiple of 5 or both. In other words all prime numbers must be coprime to 30 except 2, 3, 5.</p>
<p>In Python, this is:</p>
<pre><code>wheel3 = [i for i in range(1, 30) if math.gcd(i, 30) == 1]
# [1, 7, 11, 13, 17, 19, 23, 29]
</code></pre>
<p>Now we calculate the difference between consecutive pairs of elements, starting at 7, and 1 comes immediately after 29 because of nature of modulo operation, for example, <code>31 % 30 == 1</code>, and so the difference between them is 2.</p>
<p>We obtain the following: <code>[4, 2, 4, 2, 4, 6, 2, 6]</code>.</p>
<p>Now, out of every 30 numbers we only need to check 8 numbers, we skip 22 numbers. This is a significant improvement from the previous optimization of only bruteforcing odd numbers, we needed to process 15 numbers out of every 30 numbers if we process all odd numbers, now we have 7 numbers less, the search space is narrowed to 4/15 which is 0.2666...</p>
<p>We can optimize this further by using a bigger wheel, using 2 * 3 * 5 * 7 = 210 as basis, all prime numbers starting at 11 must be coprime to 210.</p>
<pre><code>wheel4 = [i for i in range(1, 210) if math.gcd(i, 210) == 1]
wheel4 == [
1 , 11 , 13 , 17 , 19 , 23 ,
29 , 31 , 37 , 41 , 43 , 47 ,
53 , 59 , 61 , 67 , 71 , 73 ,
79 , 83 , 89 , 97 , 101, 103,
107, 109, 113, 121, 127, 131,
137, 139, 143, 149, 151, 157,
163, 167, 169, 173, 179, 181,
187, 191, 193, 197, 199, 209
]
</code></pre>
<p>And the list of index changes is:</p>
<pre><code>[
2 , 4 , 2 , 4 , 6 , 2 ,
6 , 4 , 2 , 4 , 6 , 6 ,
2 , 6 , 4 , 2 , 6 , 4 ,
6 , 8 , 4 , 2 , 4 , 2 ,
4 , 8 , 6 , 4 , 6 , 2 ,
4 , 6 , 2 , 6 , 6 , 4 ,
2 , 4 , 6 , 2 , 6 , 4 ,
2 , 4 , 2 , 10, 2 , 10
]
</code></pre>
<p>Now out of every 210 numbers we only need to process 48 numbers, down from the previous 56 numbers, we need to process 8 numbers less, we narrowed the search space down to 8/35 which is 0.22857142857142857... and less than a quarter.</p>
<p>So I expect the version using the 210-based wheel to take only 6/7 or 85.71% of the time the 30-based wheel version takes to execute, but that isn't so.</p>
<pre><code>import math
import numpy as np
import numba as nb
@nb.njit(cache=True)
def prime_wheel_sieve(n: int) -> np.ndarray:
wheel = [4, 2, 4, 2, 4, 6, 2, 6]
primes = np.ones(n + 1, dtype=np.uint8)
primes[:2] = False
for square, step in ((4, 2), (9, 6), (25, 10)):
primes[square::step] = False
k = 7
lim = int(math.sqrt(n) + 0.5)
i = 0
while k <= lim:
if primes[k]:
primes[k**2 :: 2 * k] = False
k += wheel[i]
i = (i + 1) & 7
return np.nonzero(primes)[0]
# fmt: off
WHEEL4 = np.array([
2 , 4 , 2 , 4 , 6 , 2 ,
6 , 4 , 2 , 4 , 6 , 6 ,
2 , 6 , 4 , 2 , 6 , 4 ,
6 , 8 , 4 , 2 , 4 , 2 ,
4 , 8 , 6 , 4 , 6 , 2 ,
4 , 6 , 2 , 6 , 6 , 4 ,
2 , 4 , 6 , 2 , 6 , 4 ,
2 , 4 , 2 , 10, 2 , 10
], dtype=np.uint8)
# fmt: on
@nb.njit(cache=True)
def prime_wheel_sieve4(n: int) -> np.ndarray:
primes = np.ones(n + 1, dtype=np.uint8)
primes[:2] = False
for square, step in ((4, 2), (9, 6), (25, 10), (49, 14)):
primes[square::step] = False
k = 11
lim = int(math.sqrt(n) + 0.5)
i = 0
while k <= lim:
if primes[k]:
primes[k**2 :: 2 * k] = False
k += WHEEL4[i]
i = (i + 1) % 48
return np.nonzero(primes)[0]
</code></pre>
<pre><code>In [549]: np.array_equal(prime_wheel_sieve(65536), prime_wheel_sieve4(65536))
Out[549]: True
In [550]: %timeit prime_wheel_sieve(65536)
161 μs ± 1.13 μs per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
In [551]: %timeit prime_wheel_sieve4(65536)
163 μs ± 1.79 μs per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
In [552]: %timeit prime_wheel_sieve4(131072)
330 μs ± 10.6 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [553]: %timeit prime_wheel_sieve(131072)
328 μs ± 7.4 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [554]: %timeit prime_wheel_sieve4(262144)
680 μs ± 14.3 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [555]: %timeit prime_wheel_sieve(262144)
669 μs ± 7.79 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [556]: %timeit prime_wheel_sieve(524288)
1.44 ms ± 16.2 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [557]: %timeit prime_wheel_sieve4(524288)
1.48 ms ± 13.4 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [558]: %timeit prime_wheel_sieve4(1048576)
3.25 ms ± 81.3 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [559]: %timeit prime_wheel_sieve(1048576)
3.23 ms ± 115 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [560]: %timeit prime_wheel_sieve(2097152)
7.08 ms ± 80.9 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [561]: %timeit prime_wheel_sieve4(2097152)
7.1 ms ± 85.9 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [562]: %timeit prime_wheel_sieve4(4194304)
14.8 ms ± 120 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [563]: %timeit prime_wheel_sieve(4194304)
14.2 ms ± 145 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [564]: %timeit prime_wheel_sieve(8388608)
39.4 ms ± 1.44 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [565]: %timeit prime_wheel_sieve4(8388608)
41.7 ms ± 2.56 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
</code></pre>
<p>According to my tests, using a bigger wheel makes it slower not faster, why is it this case? Theoretically speaking using a bigger wheel narrows the search space, so it shouldn't cause increase in execution time, why does using a bigger wheel slow down the code?</p>
<hr />
<p>Okay, using the scientific method I controlled the differences between the two functions so that the only difference between them, the only quantity that can affect the performance, is the wheel used.</p>
<p>I made the first function use modulo operator instead of bitwise-AND, and I made the second function use a local list just like the first function. I wanted to separate code and data but whatever.</p>
<pre><code>@nb.njit(cache=True)
def prime_wheel_sieve3(n: int) -> np.ndarray:
wheel = [4, 2, 4, 2, 4, 6, 2, 6]
primes = np.ones(n + 1, dtype=np.uint8)
primes[:2] = False
for square, step in ((4, 2), (9, 6), (25, 10)):
primes[square::step] = False
k = 7
lim = int(math.sqrt(n) + 0.5)
i = 0
while k <= lim:
if primes[k]:
primes[k**2 :: 2 * k] = False
k += wheel[i]
i = (i + 1) % 8
return np.nonzero(primes)[0]
@nb.njit(cache=True)
def prime_wheel_sieve4_1(n: int) -> np.ndarray:
# fmt: off
wheel = [
2 , 4 , 2 , 4 , 6 , 2 ,
6 , 4 , 2 , 4 , 6 , 6 ,
2 , 6 , 4 , 2 , 6 , 4 ,
6 , 8 , 4 , 2 , 4 , 2 ,
4 , 8 , 6 , 4 , 6 , 2 ,
4 , 6 , 2 , 6 , 6 , 4 ,
2 , 4 , 6 , 2 , 6 , 4 ,
2 , 4 , 2 , 10, 2 , 10
]
# fmt: on
primes = np.ones(n + 1, dtype=np.uint8)
primes[:2] = False
for square, step in ((4, 2), (9, 6), (25, 10), (49, 14)):
primes[square::step] = False
k = 11
lim = int(math.sqrt(n) + 0.5)
i = 0
while k <= lim:
if primes[k]:
primes[k**2 :: 2 * k] = False
k += wheel[i]
i = (i + 1) % 48
return np.nonzero(primes)[0]
</code></pre>
<p>I had to add the formatting comments to prevent Black formatter from messing up my formatted table in VS Code, and of course that doesn't affect performance at all.</p>
<p>The only differences between the functions are the initial value of k, the primes that had to be processed before rolling the wheel proper, and of course the size of the wheel (and the wheel itself). These had to be different (because they use different wheels) but everything else aren't changed.</p>
<pre><code>In [679]: np.array_equal(prime_wheel_sieve3(65536), prime_wheel_sieve4_1(65536))
Out[679]: True
In [680]: %timeit prime_wheel_sieve3(65536)
162 μs ± 2.27 μs per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
In [681]: %timeit prime_wheel_sieve4_1(65536)
158 μs ± 1.83 μs per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
In [682]: %timeit prime_wheel_sieve3(131072)
326 μs ± 7.91 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [683]: %timeit prime_wheel_sieve4_1(131072)
322 μs ± 8.89 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [684]: %timeit prime_wheel_sieve3(262144)
659 μs ± 7.74 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [685]: %timeit prime_wheel_sieve4_1(262144)
655 μs ± 12.2 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [686]: %timeit prime_wheel_sieve3(524288)
1.45 ms ± 14.1 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [687]: %timeit prime_wheel_sieve4_1(524288)
1.42 ms ± 8.13 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [688]: %timeit prime_wheel_sieve3(1048576)
3.2 ms ± 68.4 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [689]: %timeit prime_wheel_sieve4_1(1048576)
3.2 ms ± 199 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
</code></pre>
<p>Now interestingly <code>prime_wheel_sieve4_1</code> performs a little bit faster than <code>prime_wheel_sieve3</code>, but it is only a tiny little bit. The speed up is insignificant, but I know the execution time of code is stochastic in nature, so <code>prime_wheel_sieve4_1</code> performs consistently faster than <code>prime_wheel_sieve3</code> by a little bit is statistically significant. Though I haven't tested much, this doesn't exclude the possibility of coincidence.</p>
<p>But according to my theory, I should see 14.29% decrease in execution time, not basically no improvement. My tests made my case stronger.</p>
<p>So why does using a bigger wheel not speed up the code?</p>
|
<python><sieve-of-eratosthenes><wheel-factorization>
|
2025-04-25 13:46:58
| 1
| 3,930
|
Ξένη Γήινος
|
79,592,549
| 7,394,414
|
Cannot interpret 'dtype('int64')' in NLP Python Code from adashofdata
|
<p>I am trying to run the NLP project shared at <a href="https://github.com/adashofdata/nlp-in-python-tutorial" rel="nofollow noreferrer">https://github.com/adashofdata/nlp-in-python-tutorial</a>, but I am encountering an issue with the code in 4-Topic-Modeling.ipynb. I am running the code on Google Colab and experiencing an error when trying to load the .pkl (pickle) file into the data variable.</p>
<p>I am looking for suggestions or solutions regarding this issue. I tried pickle, cloudpickle, joblib.</p>
<p>If there are alternative ways to convert the .pkl file to another format and successfully load it into data, I am open to those suggestions as well.</p>
<p>Thank you!</p>
<pre><code>import pickle
import pandas as pd
import numpy as np
from google.colab import drive
import joblib
# Mount Google Drive
drive.mount('/content/drive')
# File path
file_path = '/content/drive/MyDrive/Colab Notebooks/pynb/pickle/'
# Load the file using pickle
try:
with open(file_path + 'dtm_stop.pkl', 'rb') as f:
data = joblib.load(file_path + 'dtm_stop.pkl') # Load using joblib
# Check the type of the loaded data
print("Data type:", type(data))
# If the data type is numpy ndarray, try to convert the dtype
if isinstance(data, np.ndarray):
print("Data loaded as numpy ndarray")
data = data.astype(np.float64, errors='ignore') # Ignore erroneous values
elif isinstance(data, pd.DataFrame):
print("Data loaded as DataFrame")
data = data.apply(pd.to_numeric, errors='coerce') # Convert erroneous values to numeric
else:
print("Data is not a DataFrame or numpy array, data type:", type(data))
except Exception as e:
print(f"Error loading with pickle: {e}") # Print the error message in more detail
raise e
# Let's see the loaded data
print(data)
</code></pre>
<p>And I took this error messages:</p>
<pre><code>Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount("/content/drive", force_remount=True).
Error loading with pickle: Cannot interpret 'dtype('int64')' as a data type
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-22-285aaeee691b> in <cell line: 0>()
30 except Exception as e:
31 print(f"Error loading with pickle: {e}") # Print the error message in more detail
---> 32 raise e
33
34 # Let's see the loaded data
5 frames
/usr/local/lib/python3.11/dist-packages/numpy/_core/numeric.py in _frombuffer(buf, dtype, shape, order)
TypeError: Cannot interpret 'dtype('int64')' as a data type
</code></pre>
|
<python><nlp><google-colaboratory>
|
2025-04-25 12:48:00
| 1
| 579
|
mymiracl
|
79,592,530
| 1,635,523
|
tkinter.Listbox has a "shadow selection" beside the proper selection. How to sync both?
|
<p>I built and populated a <code>tkinter.Listbox</code>. Now I have events that will select
the item at index <code>index</code>. Like so:</p>
<pre class="lang-py prettyprint-override"><code>listbox.selection_clear(0, tk.END)
listbox.select_set(index)
</code></pre>
<p>And it works in that the entry with index <code>index</code> is in fact selected.</p>
<p>However, when using 'tab' keys to move to other widgets that also have the
power to select items in that listbox, and then returning to the listbox,
there is a <strong>shadow selection</strong>, that appears not to be the anchor <em>(at least, <code>listbox.selection_anchor(index)</code>
did not solve this issue for me)</em> on the selection that was active, when I last
left focus on the <code>listbox</code>. Using 'up' and 'down' keys will take control of the
active selection. However, they will start not at the proper selection (<code>010_adf</code> in below example), but on that
shadow (<code>007_adf</code>) that I can only specify closer by providing this screenshot:</p>
<p><a href="https://i.sstatic.net/boEW5eUr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/boEW5eUr.png" alt="enter image description here" /></a></p>
<p><strong>Fig:</strong> The "shadow" in question is around entry <code>007_adf</code>. The proper selection is <code>010.adf</code>. How to sync the shadow to the proper selection?</p>
|
<python><python-3.x><tkinter>
|
2025-04-25 12:34:41
| 2
| 1,061
|
Markus-Hermann
|
79,592,525
| 3,840,551
|
Static typing for database schema models that handle foreign keys/dbrefs as `external_document: ExternalDocumentModel | DbRef`
|
<p>We're using Python (3.12) with Pydantic models to represent schemas for our MongoDB collections, which we then instantiate with <code>SomeModel.model_validate(<results of pymongo query>)</code>. We define relationships between collections using dbrefs; but we don't have an elegant way to handle these in a type-safe way in python.</p>
<p>To handle lazy loading (avoiding fetching the entire object graph on every query) and situations where a referenced document might be deleted, these fields are typed using <code>Union[RelatedModel, DbRef]</code>.</p>
<p><strong>Example Models:</strong></p>
<pre class="lang-py prettyprint-override"><code>import uuid
from typing import Union, Optional, TypeAlias
from pydantic import BaseModel, Field, EmailStr
from bson.dbref import DbRef
class User(BaseModel):
id: uuid.UUID = Field(default_factory=uuid.uuid4)
email: EmailStr
class Post(BaseModel):
id: uuid.UUID = Field(default_factory=uuid.uuid4)
title: str
author: Union[User, DBRef]
# --- Simulation of data loading (e.g., using Motor/Beanie) ---
async def load_post_from_db(post_id: uuid.UUID, fetch_author: bool = False) -> Post:
... # if fetch_author is false, author is left as a reference, if not, the referenced object is loaded
</code></pre>
<p><strong>The Problem:</strong></p>
<p>When working with a <code>Post</code> object, Pyright (or MyPy) correctly flags potential errors if we try to access attributes specific to the <code>User</code> model on the <code>post.author</code> field, because it could be a <code>DBRefUUID</code> dictionary/object at runtime.</p>
<pre class="lang-py prettyprint-override"><code>async def get_author_email(post_id: uuid.UUID) -> Optional[EmailStr]:
# Assume fetch_author=True was used here for this example
post = await load_post_from_db(post_id, fetch_author=True)
# Type Error from Pyright/MyPy:
# error: Item "email" of "Union[User, DbRef]" has no attribute "email"
# return post.author.email # <-- This line causes the error
# Workaround:
if isinstance(post.author, User):
return post.author.email # Type checker is happy inside this block
return None
# Even if we *know* fetch_author=True was used, the type checker doesn't.
</code></pre>
<p>This forces us to use runtime checks (<code>isinstance</code>) or assertions (<code>assert isinstance(post.author, User)</code>) frequently, purely to satisfy the type checker, even when our application logic guarantees the field <em>should</em> be resolved. This adds verbosity and couples type safety concerns tightly with runtime checks. Using <code># type: ignore</code> feels like avoiding the problem.</p>
<p><strong>Question:</strong>
I think this pattern is pretty common and not limited to mongodb - a similar scenario likely happens for relational databases/ORMs. Is there an elegant way to do this?</p>
<p>Unfortunately python typing isn't as powerful as e.g. TypeScript and thus doesn't allow derived types that would allow us to express something like <code>type PostWithAuthor</code> that is identical to <code>Post</code> except <code>author</code> has type <code>Author</code> rather than <code>Author | DbRef</code> - if that was the case we could use overloads/typeguards to make ad-hoc types that allow us to tell the typesystem which fields are <em>known</em> to contain DbRefs vs. the actual types during the execution flow.</p>
<p>I don't think this is actually solvable in python, so is there a better approach to this whole thing?</p>
|
<python><pymongo><python-typing><pydantic>
|
2025-04-25 12:31:22
| 0
| 1,529
|
Gloomy
|
79,592,071
| 11,829,002
|
Doxygen docstring displayable in vscode
|
<p>I have a Python code, and I recently discovered Doxygen which generates the documentation automatically from the source code.
If I understood correctly, to make the generated code well detected by Doxygen, the doc string should be like this :</p>
<pre class="lang-py prettyprint-override"><code>def f(self, x, y):
"""!Compute the sum
@param x first number
@param y second number
@return sum of x and y
"""
return x + y
</code></pre>
<p>But with Vscode, the snippet, when I pointto this method, is no longer displayed correctly (cf Fig. 1)</p>
<p>When I enter the docstring in this format</p>
<pre class="lang-py prettyprint-override"><code>"""Compute the sum
Parameters
----------
x : int
first number
y : int
second number
Returns
-------
int
sum of x and y
"""
</code></pre>
<p>it is correctly displayed in VScode (Fig. 2)</p>
<p><a href="https://i.sstatic.net/87bOW0TK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/87bOW0TK.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/MZVhmwpB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MZVhmwpB.png" alt="enter image description here" /></a></p>
<p>But with this second method, Doxygen no longer recognizes the parameters, and in the generated HTML page, the docstring is just displayed in verbatim mode (like with three ` in markdown)</p>
<p>Is there a way to combine both ways? So that Doxygen correctly catches the parameters as well as VSCode?</p>
|
<python><visual-studio-code><doxygen><docstring>
|
2025-04-25 07:59:28
| 1
| 398
|
Thomas
|
79,592,065
| 12,415,855
|
403-response when doing a python request?
|
<p>i try to do a get request using the follwing code -</p>
<p>(i have taken the get response from this site:
<a href="https://www.marinetraffic.com/en/ais/home/centerx:-15.7/centery:25.1/zoom:3" rel="nofollow noreferrer">https://www.marinetraffic.com/en/ais/home/centerx:-15.7/centery:25.1/zoom:3</a>
copied it with right click and "Copy as cURL Bash" in the Network Tab of Chrome
and converted it using <a href="https://curlconverter.com/python/" rel="nofollow noreferrer">https://curlconverter.com/python/</a>)</p>
<pre><code>import requests
cookies = {
'__cf_bm': '0ZwKxO2ya.H7zWswqxfQsS72t7uWAnPvcWR72oTPgZo-1745566904-1.0.1.1-V6w98vJXaZd8Dq6ctvKkOikRo5XdMJlYY0ROodABBkuXFHQ3PpuCbCseetBZNuwkIrMZXg1.7G1Xw1B.5hyqqhU1Gb7HIifoiaCdfhh5GLQ',
'_cfuvid': 'BCmzr2QSh7cfUL9D7AS.Y3J0DadifB63Nk_DjJdnU_M-1745566904509-0.0.1.1-604800000',
'cf_clearance': 'Rt7dTDVgSuVxSh9AjhJIqI3qQnvoE6w1T3s0FFsB9E4-1745566904-1.2.1.1-zf2VBp3dh0u4KfN7MfNQ_GexuhddHBlx30bbEGIlof9ByIebnoyiO00kFsyV0ABGpEk1Vq3SK5sLH6V8aQ6_EIHHeik14Hx2CVrmrunyyNvD9D18Yc1rDAUFuageLWREiNPDULxYSgiEin_sUk9fSOv56RZqe3U6E7trgFClAjYllL4QDRnoPkSF10VBCX8ZiMn9a2cL82r5ChCcLI3HjzZfLcKq.yXdXwODotlux64.GXLGA5pDk6Ixr4BJ6M6Cs1zsteb0n6ODJvYmRzun.d_uvcH.YkYI3BBNoPi8_wfmqWFLDqBFJSwg9uzn4472CBlTuhPDsfykKmMCFIIhn2ZZtsttcBa9QX5czSGX5Qk',
'usprivacy': '1N--',
'vTo': '1',
'euconsent-v2': 'CQQbCUAQQbCUAAKA1AENDgCsAP_AAEPAAAwIg1NX_H__bW9r8X7_aft0eY1P9_j77sQxBhfJE-4F3LvW_JwXx2E5NF36tqoKmRoEu3ZBIUNlHJHUTVmwaogVryHsakWcoTNKJ6BkkFMRM2dYCF5vm4tjeQKY5_p_d3fx2D-t_dv839zzz8VHn3e5fue0-PCdU5-9Dfn9fRfb-9IP9_78v8v8_l_rk2_eT13_pcvr_D--f_87_XW-9_cAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAEQagCzDQuIAuyJCQm0DCKBACIKwgIoEAAAAJA0QEALgwKdgYBLrCRACBFAAcEAIQAUZAAgAAEgAQiACQIoEAAEAgEAAIAEAgEADAwADgAtBAIAAQHQMUwoAFAsIEiMiIUwIQoEggJbKBBKCoQVwgCLDAigERsFAAgCQEVgACAsXgMASAlYkECXUG0AABAAgFFKFQik_MAQ4Jmy1V4om0AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIAAACAA.f_gAAAAAAAAA',
'addtl_consent': '1~43.3.9.6.9.13.6.4.15.9.5.2.11.8.1.3.2.10.33.4.15.17.2.9.20.7.20.5.20.7.2.2.1.4.40.4.14.9.13.8.9.6.6.9.41.5.3.1.27.1.17.10.9.1.8.6.2.8.3.4.146.65.1.17.1.18.25.35.5.18.9.7.21.20.2.4.18.24.4.9.6.5.2.14.25.3.2.2.8.28.8.6.3.10.4.20.2.17.10.11.1.3.22.16.2.6.8.6.11.6.5.33.11.19.28.12.1.5.2.17.9.6.40.17.4.9.15.8.7.3.12.7.2.4.1.19.13.22.13.2.6.8.10.1.4.15.2.4.9.4.5.4.7.13.5.15.17.4.14.10.15.2.5.6.2.2.1.2.14.7.4.8.2.9.10.18.12.13.2.18.1.1.3.1.1.9.7.2.16.5.19.8.4.8.5.4.8.4.4.2.14.2.13.4.2.6.9.6.3.2.2.3.7.9.10.11.9.19.8.3.3.1.2.3.9.19.26.3.10.17.3.4.6.3.3.3.4.1.7.11.4.1.11.6.1.10.13.3.2.2.4.3.2.2.7.15.7.14.4.3.4.5.4.3.2.2.5.5.3.9.7.9.1.5.3.7.10.11.1.3.1.1.2.1.3.2.6.1.12.8.1.3.1.1.2.2.7.7.1.4.3.6.1.2.1.4.1.1.4.1.1.2.1.8.1.7.4.3.3.3.5.3.15.1.15.10.28.1.2.2.12.3.4.1.6.3.4.7.1.3.1.4.1.5.3.1.3.4.1.5.2.3.1.2.2.6.2.1.2.2.2.4.1.1.1.2.2.1.1.1.1.2.1.1.1.2.2.1.1.2.1.2.1.7.1.7.1.1.1.1.2.1.4.2.1.1.9.1.6.2.1.6.2.3.2.1.1.1.2.5.2.4.1.1.2.2.1.1.7.1.2.2.1.2.1.2.3.1.1.2.4.1.1.1.9.6.4.5.9.1.2.3.1.4.3.2.2.3.1.1.1.1.12.1.3.1.1.2.2.1.6.3.3.5.2.7.1.1.2.5.1.9.5.1.3.1.8.4.5.1.9.1.1.1.2.1.1.1.4.2.13.1.1.3.1.2.2.3.1.2.1.1.1.2.1.3.1.1.1.1.2.4.1.5.1.2.4.3.10.2.9.7.2.2.1.3.3.1.6.1.2.5.1.1.2.6.4.2.1.200.200.100.300.400.100.100.100.400.1700.304.596.100.1000.800.500.400.200.200.500.1300.801.99.303.99.104.95.1399.1100.100.4302.1798.2100.600.200.100.800.900.100.200.700.100.800.2900.1100.600.400.2200.2300.400.1101.899.2100.100.100',
}
headers = {
'accept': '*/*',
'accept-language': 'en-GB,en;q=0.9,de-AT;q=0.8,de;q=0.7,en-US;q=0.6',
'priority': 'u=1, i',
'referer': 'https://www.marinetraffic.com/en/ais/home/centerx:-12.1/centery:25.0/zoom:4',
'sec-ch-ua': '"Google Chrome";v="135", "Not-A.Brand";v="8", "Chromium";v="135"',
'sec-ch-ua-mobile': '?0',
'sec-ch-ua-platform': '"Windows"',
'sec-fetch-dest': 'empty',
'sec-fetch-mode': 'cors',
'sec-fetch-site': 'same-origin',
'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/135.0.0.0 Safari/537.36',
'vessel-image': '00ca8c234cef4e61fa99ee23afb75bb85903',
'x-newrelic-id': 'undefined',
'x-requested-with': 'XMLHttpRequest',
'cookie': '__cf_bm=0ZwKxO2ya.H7zWswqxfQsS72t7uWAnPvcWR72oTPgZo-1745566904-1.0.1.1-V6w98vJXaZd8Dq6ctvKkOikRo5XdMJlYY0ROodABBkuXFHQ3PpuCbCseetBZNuwkIrMZXg1.7G1Xw1B.5hyqqhU1Gb7HIifoiaCdfhh5GLQ; _cfuvid=BCmzr2QSh7cfUL9D7AS.Y3J0DadifB63Nk_DjJdnU_M-1745566904509-0.0.1.1-604800000; cf_clearance=Rt7dTDVgSuVxSh9AjhJIqI3qQnvoE6w1T3s0FFsB9E4-1745566904-1.2.1.1-zf2VBp3dh0u4KfN7MfNQ_GexuhddHBlx30bbEGIlof9ByIebnoyiO00kFsyV0ABGpEk1Vq3SK5sLH6V8aQ6_EIHHeik14Hx2CVrmrunyyNvD9D18Yc1rDAUFuageLWREiNPDULxYSgiEin_sUk9fSOv56RZqe3U6E7trgFClAjYllL4QDRnoPkSF10VBCX8ZiMn9a2cL82r5ChCcLI3HjzZfLcKq.yXdXwODotlux64.GXLGA5pDk6Ixr4BJ6M6Cs1zsteb0n6ODJvYmRzun.d_uvcH.YkYI3BBNoPi8_wfmqWFLDqBFJSwg9uzn4472CBlTuhPDsfykKmMCFIIhn2ZZtsttcBa9QX5czSGX5Qk; usprivacy=1N--; vTo=1; euconsent-v2=CQQbCUAQQbCUAAKA1AENDgCsAP_AAEPAAAwIg1NX_H__bW9r8X7_aft0eY1P9_j77sQxBhfJE-4F3LvW_JwXx2E5NF36tqoKmRoEu3ZBIUNlHJHUTVmwaogVryHsakWcoTNKJ6BkkFMRM2dYCF5vm4tjeQKY5_p_d3fx2D-t_dv839zzz8VHn3e5fue0-PCdU5-9Dfn9fRfb-9IP9_78v8v8_l_rk2_eT13_pcvr_D--f_87_XW-9_cAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAEQagCzDQuIAuyJCQm0DCKBACIKwgIoEAAAAJA0QEALgwKdgYBLrCRACBFAAcEAIQAUZAAgAAEgAQiACQIoEAAEAgEAAIAEAgEADAwADgAtBAIAAQHQMUwoAFAsIEiMiIUwIQoEggJbKBBKCoQVwgCLDAigERsFAAgCQEVgACAsXgMASAlYkECXUG0AABAAgFFKFQik_MAQ4Jmy1V4om0AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIAAACAA.f_gAAAAAAAAA; addtl_consent=1~43.3.9.6.9.13.6.4.15.9.5.2.11.8.1.3.2.10.33.4.15.17.2.9.20.7.20.5.20.7.2.2.1.4.40.4.14.9.13.8.9.6.6.9.41.5.3.1.27.1.17.10.9.1.8.6.2.8.3.4.146.65.1.17.1.18.25.35.5.18.9.7.21.20.2.4.18.24.4.9.6.5.2.14.25.3.2.2.8.28.8.6.3.10.4.20.2.17.10.11.1.3.22.16.2.6.8.6.11.6.5.33.11.19.28.12.1.5.2.17.9.6.40.17.4.9.15.8.7.3.12.7.2.4.1.19.13.22.13.2.6.8.10.1.4.15.2.4.9.4.5.4.7.13.5.15.17.4.14.10.15.2.5.6.2.2.1.2.14.7.4.8.2.9.10.18.12.13.2.18.1.1.3.1.1.9.7.2.16.5.19.8.4.8.5.4.8.4.4.2.14.2.13.4.2.6.9.6.3.2.2.3.7.9.10.11.9.19.8.3.3.1.2.3.9.19.26.3.10.17.3.4.6.3.3.3.4.1.7.11.4.1.11.6.1.10.13.3.2.2.4.3.2.2.7.15.7.14.4.3.4.5.4.3.2.2.5.5.3.9.7.9.1.5.3.7.10.11.1.3.1.1.2.1.3.2.6.1.12.8.1.3.1.1.2.2.7.7.1.4.3.6.1.2.1.4.1.1.4.1.1.2.1.8.1.7.4.3.3.3.5.3.15.1.15.10.28.1.2.2.12.3.4.1.6.3.4.7.1.3.1.4.1.5.3.1.3.4.1.5.2.3.1.2.2.6.2.1.2.2.2.4.1.1.1.2.2.1.1.1.1.2.1.1.1.2.2.1.1.2.1.2.1.7.1.7.1.1.1.1.2.1.4.2.1.1.9.1.6.2.1.6.2.3.2.1.1.1.2.5.2.4.1.1.2.2.1.1.7.1.2.2.1.2.1.2.3.1.1.2.4.1.1.1.9.6.4.5.9.1.2.3.1.4.3.2.2.3.1.1.1.1.12.1.3.1.1.2.2.1.6.3.3.5.2.7.1.1.2.5.1.9.5.1.3.1.8.4.5.1.9.1.1.1.2.1.1.1.4.2.13.1.1.3.1.2.2.3.1.2.1.1.1.2.1.3.1.1.1.1.2.4.1.5.1.2.4.3.10.2.9.7.2.2.1.3.3.1.6.1.2.5.1.1.2.6.4.2.1.200.200.100.300.400.100.100.100.400.1700.304.596.100.1000.800.500.400.200.200.500.1300.801.99.303.99.104.95.1399.1100.100.4302.1798.2100.600.200.100.800.900.100.200.700.100.800.2900.1100.600.400.2200.2300.400.1101.899.2100.100.100',
}
resp = requests.get(
'https://www.marinetraffic.com/getData/get_data_json_4/z:3/X:1/Y:0/station:0',
cookies=cookies,
headers=headers,
)
print(resp.status_code)
</code></pre>
<p>When i open the link</p>
<p><a href="https://www.marinetraffic.com/getData/get_data_json_4/z:3/X:1/Y:0/station:0" rel="nofollow noreferrer">https://www.marinetraffic.com/getData/get_data_json_4/z:3/X:1/Y:0/station:0</a></p>
<p>in the browser i get a result</p>
<pre><code>{
"type": 1,
"data": {
"rows": [
{
"LAT": "65.39167",
"LON": "-22.175043",
"SPEED": "6",
"COURSE": "148",
"HEADING": "97",
"ELAPSED": "14",
"DESTINATION": "REYKHOLAR",
"FLAG": "IS",
"LENGTH": "38",
"SHIPNAME": "GRETTIR",
"SHIPTYPE": "2",
"SHIP_ID": "293921",
"WIDTH": "9",
"L_FORE": "8",
"W_LEFT": "4",
"DWT": "206",
"GT_SHIPTYPE": "43"
},
{
"LAT": "66.998337",
"LON": "-21.540001",
"SPEED": null,
"COURSE": null,
"HEADING": null,
"ELAPSED": "14",
"DESTINATION": "",
"FLAG": "IS",
"LENGTH": "2",
"ROT": "0",
"SHIPNAME": "HBII STRAUMM",
"SHIPTYPE": "1",
"SHIP_ID": "3953648",
"WIDTH": "2",
"L_FORE": "1",
"W_LEFT": "1"
},
...
</code></pre>
<p>How can i do the request using python without a 403-error?</p>
|
<python><curl><request>
|
2025-04-25 07:55:39
| 1
| 1,515
|
Rapid1898
|
79,591,974
| 15,157,684
|
Error Running OCR with Qwen2.5-VL in Colab
|
<p>I am trying to run the OCR functionality of Qwen2.5-VL by following the tutorial provided in this notebook: <a href="https://github.com/QwenLM/Qwen2.5-VL/blob/main/cookbooks/ocr.ipynb?spm=a2ty_o01.29997173.0.0.4f11c921W6BADP&file=ocr.ipynb" rel="nofollow noreferrer">OCR Tutorial Notebook</a></p>
<p>However, I am encountering an error when attempting to execute the code. Here are the details:</p>
<p>Steps Taken:</p>
<ul>
<li>I opened the notebook in Google Colab using this link: Colab Notebook</li>
<li>I followed the instructions in the notebook to set up the environment
and run the OCR example.</li>
</ul>
<p>Error Encountered:</p>
<pre><code>text: <|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
Extract the key-value information in the format:{"company": "", "date": "", "address": "", "total": ""}<|vision_start|><|image_pad|><|vision_end|><|im_end|>
<|im_start|>assistant
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-24-176a5a9fc203> in <cell line: 0>()
6
7 ## Use a local HuggingFace model to inference.
----> 8 response = inference(image_path, prompt)
9 display(Markdown(response))
27 frames
/usr/local/lib/python3.11/dist-packages/triton/backends/nvidia/driver.py in __call__(self, *args, **kwargs)
442
443 def __call__(self, *args, **kwargs):
--> 444 self.launch(*args, **kwargs)
445
446
ValueError: Pointer argument (at 0) cannot be accessed from Triton (cpu tensor?)
</code></pre>
<p><a href="https://i.sstatic.net/feiLoj6t.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/feiLoj6t.png" alt="enter image description here" /></a></p>
<p>Additional Information:</p>
<p>Environment : Google Colab</p>
<p>Python Version : Default Colab version</p>
<p>Google Colab: <a href="https://colab.research.google.com/drive/1JR1Abv9ORIQZWcjm5-xdFM4zJo6hdp51?usp=sharing" rel="nofollow noreferrer">https://colab.research.google.com/drive/1JR1Abv9ORIQZWcjm5-xdFM4zJo6hdp51?usp=sharing</a></p>
|
<python><gpu><ocr><large-language-model>
|
2025-04-25 07:05:18
| 0
| 1,951
|
JS3
|
79,591,938
| 11,770,390
|
Why does pip install fail due to project layout when installing dependencies?
|
<p>Upon running <code>python -m pip install .</code> I get the following error:</p>
<pre class="lang-none prettyprint-override"><code>$ python -m pip install .
Processing /home/user/dev/report-sender/report_broadcaster
Installing build dependencies ... done
Getting requirements to build wheel ... error
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> [14 lines of output]
error: Multiple top-level packages discovered in a flat-layout: ['config', 'notifiers'].
To avoid accidental inclusion of unwanted files or directories,
setuptools will not proceed with this build.
If you are trying to create a single distribution with multiple packages
on purpose, you should not rely on automatic discovery.
Instead, consider the following options:
1. set up custom discovery (`find` directive with `include` or `exclude`)
2. use a `src-layout`
3. explicitly set `py_modules` or `packages` with a list of names
To find more information, look for "package discovery" on setuptools docs.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
</code></pre>
<p>So it seems like the requirements to build some wheel were not satisfied, and they were not satisfied because my project didn't have a proper layout (it suggests me to switch to source layout).</p>
<p>How does installing dependencies have anything to do with my project layout?</p>
<p>This is my <code>pyproject.toml</code>:</p>
<pre class="lang-toml prettyprint-override"><code>[project]
name = "report-broadcaster"
version = "0.1.0"
description = "Add your description here"
readme = "README.md"
requires-python = ">=3.11"
dependencies = [
"cryptography~=41.0",
"jsonschema~=4.23",
"pika~=1.3",
"psycopg2-binary~=2.9",
"pydantic~=2.0",
"python-dateutil~=2.9",
"pytz>=2025.1",
"requests~=2.31",
"shapely~=2.0",
"sqlalchemy~=2.0",
]
[tool.pytest.ini_options]
pythonpath = [
"."
]
</code></pre>
|
<python><pip><dependencies><pyproject.toml>
|
2025-04-25 06:45:35
| 1
| 5,344
|
glades
|
79,591,859
| 1,909,927
|
Nixos - Flask - ModuleNotFoundError
|
<p>I wanted to add a form to my flask-based website, but I got the following error message:</p>
<pre><code>Apr 24 21:18:04 nixos uwsgi[2261]: from flask_wtf import FlaskForm
Apr 24 21:18:04 nixos uwsgi[2261]: ModuleNotFoundError: No module named 'flask_wtf'
</code></pre>
<p>Here are the installed packages part of my the configuration.nix file:</p>
<pre><code> environment.systemPackages = with pkgs; [
# vim # Do not forget to add an editor to edit configuration.nix! The Nano editor is also installed by default.
git
wget
pkgs.notepadqq
pkgs.nginx
pkgs.postgresql_16
pkgs.pgloader
pkgs.libgcc
pkgs.python312
pkgs.python312Packages.virtualenv
pkgs.uwsgi
pkgs.mariadb_114
pkgs.kdePackages.okular
pkgs.libreoffice-qt-fresh
pkgs.spotify
pkgs.fwupd
];
</code></pre>
<p>I intended to install any further python packages in virtualenv, but flask_wtf seems to be not recognised.</p>
<p>Running the command "pip freeze" outside of my virtualenv gives the following response:</p>
<pre><code>[user@nixos:~]$ pip freeze
The program 'pip' is not in your PATH. It is provided by several packages.
You can make it available in an ephemeral shell by typing one of the following:
nix-shell -p cope
nix-shell -p python311Packages.pip
nix-shell -p python312Packages.pip
</code></pre>
<p>Running the command "pip freeze" in the virtualenv gives the following response:</p>
<pre><code>(folder_env)
[user@nixos:/var/www/folder/folder_env/bin]$ pip freeze
blinker==1.9.0
click==8.1.8
Flask==3.1.0
Flask-WTF==1.2.2
itsdangerous==2.2.0
Jinja2==3.1.6
MarkupSafe==3.0.2
psycopg2-binary==2.9.10
setuptools==78.1.0
Werkzeug==3.1.3
wheel==0.45.1
WTForms==3.2.1
</code></pre>
<p>As you see, Flask-WTF is installed here.
Why is flask_wtf not found if it is installed? How should I resolve this?</p>
<p>Thank you for your advice.</p>
|
<python><flask><pip><nixos>
|
2025-04-25 05:30:58
| 1
| 783
|
picibucor
|
79,591,819
| 2,552,290
|
Why does a==b call b.__eq__ when derived from list and tuple with missing override?
|
<h2>Background</h2>
<p>I am writing math utility classes <code>ListVector</code> and <code>TupleVector</code>,
inheriting from <code>list</code> and <code>tuple</code> respectively:</p>
<pre><code>class ListVector(list):
...
class TupleVector(tuple):
...
</code></pre>
<p>(Aside: I'm not necessarily claiming this is really a good idea; in fact,
I'm well aware that, arguably, I should not do this,
since my intended relationships are logically "has-a" rather than "is-a",
and inappropriately making it "is-a" is dangerous since, if I'm not careful,
undesired behavior will leak from the base classes into my
classes, e.g. the behavior of operators +, +=, *, *=, ==, and
<a href="https://treyhunner.com/2019/04/why-you-shouldnt-inherit-from-list-and-dict-in-python/" rel="nofollow noreferrer">other gotchas described here</a>.);
one possible advantage of using "is-a" anyway is that I expect there will be
little-to-no overhead in terms of memory and for getting/setting the
i'th element, compared to using <code>list</code> and <code>tuple</code> directly.)</p>
<p>I want to support comparison of <code>ListVector</code> against <code>TupleVector</code> using == and !=;
e.g. I want this to succeed:</p>
<pre><code> assert ListVector((1,2,3)) == TupleVector((1,2,3))
</code></pre>
<p>Note that that's different from the base class behavior:</p>
<pre><code> assert list((1,2,3)) != tuple((1,2,3))
</code></pre>
<p>i.e.</p>
<pre><code> assert [1,2,3] != (1,2,3)
</code></pre>
<p>Therefore I'll need to override the <code>__eq__</code> and <code>__ne__</code> methods
in both of my vector classes.</p>
<h2>The problem</h2>
<p>I implemented the overrides for <code>__eq__</code> and <code>__ne__</code> in my <code>TupleVector</code> class,
but I initially forgot to implement them in my <code>ListVector</code> class.</p>
<p>No problem so far: I'm doing TDD, so my unit test should catch that mistake and force me to fix it.</p>
<p>But the unit test assertion that's supposed to catch the mistake unexpectedly succeeds:</p>
<pre><code> assert ListVector((1,2,3)) == TupleVector((1,2,3)) # unexpectedly succeeds!
</code></pre>
<p><strong>Expected behavior:</strong> since I forgot to override <code>__eq__</code> and <code>__ne__</code> in ListVector,
I expect the == call to fall through to <code>list.__eq__</code>, which should return <code>False</code>,
and so the assertion should fail.</p>
<p><strong>Actual behavior:</strong> it calls reflected <code>TupleVector.__eq__</code> instead,
which returns <code>True</code>, and so the assertion succeeds!</p>
<h2>The question</h2>
<p>So my question is: why is it calling reflected <code>TupleVector.__eq__</code> instead of (non-reflected) <code>list.__eq__</code>?</p>
<p>According to the rules described <a href="https://stackoverflow.com/questions/3588776/how-is-eq-handled-in-python-and-in-what-order#answer-12984987">here</a>
(which is taken from <a href="https://eev.ee/blog/2012/03/24/python-faq-equality/" rel="nofollow noreferrer">this faq</a>),
I think it should call <code>list.__eq__</code>.
Specifically, it looks to me like the 2nd clause applies:</p>
<blockquote>
<p>If <code>type(a)</code> has overridden <code>__eq__</code> (that is, <code>type(a).__eq__</code> isn’t <code>object.__eq__</code>), then the result is <code>a.__eq__(b)</code>." where my <code>type(a)</code> and <code>type(b)</code> are <code>ListVector</code> and <code>TupleVector</code> respectively.</p>
</blockquote>
<p>My reading of the <a href="https://docs.python.org/3/reference/datamodel.html#object.__eq__" rel="nofollow noreferrer">documentation</a> also seems to lead to the same conclusion as the faq (that is, the left operand's method, i.e. <code>list.__eq__</code>, should be called):</p>
<blockquote>
<p>If the operands are of different types, and the right operand’s type is a direct or indirect subclass of the left operand’s type, the reflected method of the right operand has priority, <strong>otherwise the left operand’s method has priority</strong>.</p>
</blockquote>
<p><strong>Here is the code:</strong></p>
<pre><code>#!/usr/bin/python3
class ListVector(list):
# OOPS! Forgot to implement __eq__ and __ne__ for ListVector
# ...
pass
class TupleVector(tuple):
def __eq__(self, other):
print("TupleVector.__eq__ called")
# strict=True so comparing Vectors of unequal length will throw
return all(x==y for x,y in zip(self,other, strict=True))
def __ne__(self, other):
return not self.__eq__(other)
# ...
# Unit test
assert repr(ListVector((1,2,3))) == "[1, 2, 3]" # succeeds as expected
assert repr(TupleVector((1,2,3))) == "(1, 2, 3)" # succeeds as expected
assert TupleVector((1,2,3)) == ListVector((1,2,3)) # emits "TupleVector.__eq__ called" and succeeds, as expected
assert ListVector((1,2,3)) == TupleVector((1,2,3)) # WTF: unexpectedly emits "TupleVector.__eq__ called" and succeeds!
# Confirm that the condition "type(a).__eq__ isn’t object.__eq__", mentioned
# in the decision procedure in the FAQ, holds:
assert ListVector.__eq__ is list.__eq__ # because I forgot to override that
assert ListVector.__eq__ is not object.__eq__ # because list.__eq__ is not object.__eq__
assert TupleVector.__eq__ is not tuple.__eq__ # because I remembered that override
assert TupleVector.__eq__ is not object.__eq__ # definitely not
</code></pre>
<p>The (surprising) output is:</p>
<pre><code>TupleVector.__eq__ called
TupleVector.__eq__ called
</code></pre>
<p>I expected that, instead, "<code>TupleVector.__eq__ called</code>" should be emitted
only once instead of twice,
and the "<code>assert ListVector((1,2,3)) == TupleVector((1,2,3))</code>" should fail.</p>
|
<python><inheritance>
|
2025-04-25 04:45:49
| 2
| 5,611
|
Don Hatch
|
79,591,781
| 563,299
|
Python: partial TypedDict for a wrapper function (mypy)
|
<p>Consider the file below named <code>test.py</code>, which gives an error in mypy (though it does work correctly). Although greatly simplified here, I want to create a wrapper function that changes the default value of the function that it wraps, but otherwise passes along all kwargs to that wrapped function. I want to continue to use the <code>TypedDict</code> because the signature of that function may change in the future (e.g. it may someday get a <code>d</code> keyword argument and I don't want to update my library just because of that). Is there a good way to do this? In the actual use case, <code>method1</code> has many arguments to it, so I really don't want to duplicate the signature.</p>
<pre class="lang-py prettyprint-override"><code>from typing import TypedDict, Unpack
# Method1Kwargs and method1 is defined in another library
class Method1Kwargs(TypedDict):
a: NotRequired[int]
b: NotRequired[float]
c: NotRequired[str]
def method1(*, a: int = 0, b: float = 10, c: str = 'c'):
print(a, b, c)
# in another library, we want to define method 2, which is a wrapper of method1
# except that we want to change the default value of the keyword argument "a"
def method2(*, a: int = 1, **kwargs: Unpack[Method1Kwargs]):
method1(a=a, **kwargs)
method1() # prints: 0 10 c
method2() # prints: 1 10 c
method2(a=5) # prints: 5 10 c
</code></pre>
<p>Although this works correctly, when run through mypy it generates the following error:</p>
<pre class="lang-bash prettyprint-override"><code>$ mypy test.py
test.py:17: error: Overlap between argument names and ** TypedDict items: "a" [misc]
Found 1 error in 1 file (checked 1 source file)
</code></pre>
<p>On option to make this work would be to define a new <code>TypedDict</code> that does not include the "a" argument, as shown below. But now if the signature of <code>method1</code> changes (which is defined in an upstream library), I need to update <code>Method2Kwargs</code> to keep it in sync.</p>
<pre class="lang-py prettyprint-override"><code>from typing import NotRequired, TypedDict, Unpack
class Method1Kwargs(TypedDict):
a: NotRequired[int]
b: NotRequired[float]
c: NotRequired[str]
def method1(*, a: int = 0, b: float = 10, c: str = 'c'):
print(a, b, c)
class Method2Kwargs(TypedDict):
b: NotRequired[float]
c: NotRequired[str]
def method2(*, a: int = 1, **kwargs: Unpack[Method2Kwargs]):
method1(a=a, **kwargs)
</code></pre>
<p>Another option would be to not use the <code>TypedDict</code> at all, but this has the same issue as the previous one where the signatures can easily get out of sync (again, recall that <code>method1</code> is actually in a different library). Even worse, if the default value in the base library changes (e.g. the default for "b" changes to 20), then that won't be reflected in <code>method2</code>.</p>
<pre class="lang-py prettyprint-override"><code>from typing import NotRequired, TypedDict
def method1(*, a: int = 0, b: float = 10, c: str = 'c'):
print(a, b, c)
def method2(*, a: int = 1, b: float = 10, c: str = 'c'):
method1(a=a, b=b, c=c)
</code></pre>
<p>I guess what I would ideally like is a way to have a "partial" <code>TypedDict</code> where 1 or more arguments are dropped from it. So I could do something like <code>def method2(*, a: int = 1, **kwargs: Unpack[Method1Kwargs,Skip[a]]):</code> such that <code>mypy</code> would know that <code>a</code> does not actually have overlap. Surely there is some canonical way to do this, but I haven't been able to find it online or figure it out without duplicating the <code>method1</code> signature.</p>
|
<python><python-typing><mypy><typeddict>
|
2025-04-25 03:54:37
| 0
| 2,612
|
Scott B
|
79,591,713
| 24,271,353
|
Splitting the time dimension of nc data using xarray
|
<p>Now I have a time<em>lon</em>lat 3D data where time is recorded as year, month and day. I need to split time in the form of year*month+day. So that the data becomes 4 dimensional. How should I do this?</p>
<p>I have given a simple data below:</p>
<pre class="lang-py prettyprint-override"><code>import xarray as xr
import numpy as np
import pandas as pd
time = pd.date_range("2000-01-01", "2001-12-31", freq="D")
time = time[~((time.month == 2) & (time.day == 29))]
lon = np.linspace(100, 110, 5)
lat = np.linspace(30, 35, 4)
data = np.random.rand(len(time), len(lon), len(lat))
da = xr.DataArray(
data,
coords={"time": time, "lon": lon, "lat": lat},
dims=["time", "lon", "lat"],
name="pr"
)
</code></pre>
<p>except dim:</p>
<p>year: 2000, 2001</p>
<p>monthly: 01-01, 01-02,...12-31</p>
<p>lon: ...</p>
<p>lat: ...</p>
<hr />
<p>One additional question:
Why is <code>.first</code> and <code>.last</code> reporting errors? How should I use them?</p>
<pre><code>da.assign_coords(year = da.time.dt.year, monthday = da.time.dt.strftime("%m-%d")).groupby(['year', 'monthday']).first()
da.assign_coords(year = da.time.dt.year, monthday = da.time.dt.strftime("%m-%d")).groupby(['year', 'monthday']).last()
</code></pre>
|
<python><pandas><python-xarray>
|
2025-04-25 02:30:01
| 2
| 586
|
Breeze
|
79,591,637
| 166,601
|
How do I make a script accessible across all directories within a python monorepo using uv?
|
<p>I have a python monorepo with a top-level <code>pyproject.toml</code>. The monorepo contains a projects directory each with its own python package and <code>pyproject.toml</code> file. Some of these projects define scripts:</p>
<pre><code>[project.script]
aaa = "aaa.something:main"
</code></pre>
<p>Within the project directory, I can run the script with <code>uv run aaa</code>. Outside of the project directory the script cannot be found.</p>
<p>What can I do to make the project script accessible with <code>uv run</code> across the workspace?</p>
|
<python><uv>
|
2025-04-25 00:43:17
| 2
| 3,953
|
jbcoe
|
79,591,629
| 11,609,834
|
Torch tensor dataloader shape issue
|
<p>I have a simple application of <code>torch.DataLoader</code> that gets a nice performance boost. It's created by the <code>tensor_loader</code> in the following example.</p>
<pre><code>from torch.utils.data import DataLoader, TensorDataset, BatchSampler, RandomSampler
import numpy as np
import pandas as pd
def tensor_loader(dataset: TensorDataset, batch_size: int):
return DataLoader(
dataset=dataset,
sampler=BatchSampler(
sampler=RandomSampler(dataset), # randomly sample indexes, same as shuffle=True
batch_size=batch_size,
drop_last=True
)
)
dataset = TensorDataset(torch.tensor(np.random.random(1_000_000).reshape(-1, 10), dtype=torch.float32))
start = pd.Timestamp.now()
for i in tensor_loader(dataset, 4096):
i[0]
end = pd.Timestamp.now()
print(end - start)
assert i[0].shape == torch.Size([1, 4096, 10])
start = pd.Timestamp.now()
simple_loader = DataLoader(dataset=dataset, batch_size=4096)
for i in simple_loader:
pass
end = pd.Timestamp.now()
print(end - start)
assert next(iter(simple_loader))[0].shape == torch.Size([4096, 10])
</code></pre>
<p>However, the difference in the shapes is a little annoying: the <code>data_loader</code> adds an exterior dimension that I don't want. It means the the two loaders can't be substituted for one another, which would mean a lot of niggling changes to existing code to substitute the <code>tensor_loader</code> for the existing one.</p>
<p>Obviously, I can subclass the <code>DataLoader</code> to drop the outer dimension, but this feels more complicated than it should be. Is there are way to create something like the above <code>data_loader</code>'s return value that will produce the shape of the <code>simpler_loader</code> when iterating?</p>
|
<python><pytorch><pytorch-dataloader>
|
2025-04-25 00:31:13
| 1
| 1,013
|
philosofool
|
79,591,487
| 2,526,586
|
Roll back a commit?
|
<p><em>Note: This question focuses on web apps utilising MySQL's transactions - commit and rollback. Even though the code samples below use Python, the problem itself is not limited to the choice of programming language building the web app.</em></p>
<p>Imagine I have two files: <code>main.py</code> and <code>my_dao.py</code>. <code>main.py</code> acts as the starting point of my application and it simply triggers the methods in class <code>MyDAO</code>:</p>
<p><code>main.py</code></p>
<pre><code>import my_dao
MyDAO.run_a()
...
MyDAO.run_b()
...
</code></pre>
<p><code>my_dao.py</code> defines a DAO-like class with two methods, each does something to the database:</p>
<pre><code>import mysql.connector
class MyDAO:
conn = mysql.connector.connect(...)
@classmethod
def run_a(cls):
try:
do_something_1()
cursor = cls.cursor()
cursor.execute('Query A 1')
cursor.execute('Query A 2')
do_something_2()
cursor.close()
conn.commit()
except Error as e:
conn.rollback()
log(...)
@classmethod
def run_b(cls):
try:
do_something_1()
cursor = cls.cursor()
cursor.execute('Query B 1')
# calling cls.run_a() here
cls.run_a()
cursor.execute('Query B 2')
do_something_2()
cursor.close()
conn.commit()
except Error as e:
conn.rollback()
log(...)
</code></pre>
<p>As you can see, both methods have their own <code>commit</code>s and <code>rollback</code>s. <code>run_a()</code> basically runs a bunch of queries and then commit. <code>run_b()</code> is similar except that it calls <code>run_a()</code> in between its queries.</p>
<hr />
<p><strong>Problem</strong></p>
<p>If everything works, this seems fine. However, if <code>run_b()</code> fails after successfully running <code>run_a()</code> inside, this would cause a problem because <code>run_a()</code> has already committed and no matter how <code>run_b()</code> <code>rollback</code>s, it will not <code>rollback</code> to the point before <code>run_b()</code> was called.</p>
<p>I understand that MySQL doesn't support nested transactions. How can I redesign the above so that <code>run_b()</code> can rollback successfully including the commit used by <code>run_a()</code> within it?</p>
<hr />
<p><em>My thoughts:</em></p>
<p>Not sure if the above is a bad design, but I have wrapped each method with try...except and commit/rollback where needed so that each method can be called independently outside the class.</p>
<p>I am aware of <code>savepoint</code> but I think rewriting the above using <code>savepoint</code> would be quite messy, and <code>run_a()</code> would also lose its 'independentness' as it doesn't know whether it should commit within the method itself.</p>
<p>Alternatively, I have also thought of extracting the inner part of <code>run_a()</code> into a common function, but it looks quite clumsy to me:</p>
<pre><code>import mysql.connector
class MyDAO:
conn = mysql.connector.connect(...)
@classmethod
def _real_run_a(cls, cursor):
cursor.execute('Query A 1')
cursor.execute('Query A 2')
@classmethod
def run_a(cls):
try:
do_something_1()
cursor = cls.cursor()
cls._real_run_a(cursor)
do_something_2()
cursor.close()
conn.commit()
except Error as e:
conn.rollback()
log(...)
@classmethod
def run_b(cls):
try:
do_something_1()
cursor = cls.cursor()
cursor.execute('Query B 1')
cls._real_run_a(cursor)
cursor.execute('Query B 2')
do_something_2()
cursor.close()
conn.commit()
except Error as e:
conn.rollback()
log(...)
</code></pre>
|
<python><mysql><transactions><rollback>
|
2025-04-24 21:33:57
| 1
| 1,342
|
user2526586
|
79,591,467
| 13,634,560
|
plotly python: go.chloroplethmap location auto zoom
|
<p>I cannot seem to find a similar question here on SO, but please direct me there if such a question already exists.</p>
<p>I have a function that plots values on a map, as below. The graph works fine. Note that it is the <strong><code>go.Chloroplethmap()</code></strong> function, as opposed to the <code>go.Chloropleth</code> or other similar functions with Plotly Express. I know that there was a recent transition within Plotly for the map features, which may be why it's so difficult to find an answer.</p>
<p>I would like the map to auto-zoom to a certain location. Here is what I have tried:</p>
<pre><code> go.Choroplethmap(
geojson=states_uts,
locations=india['state_ut'],
z=india[feature],
featureidkey="properties.name",
colorscale=px.colors.sequential.Reds,
marker={"line": {"width": 0.001, "color": "white"}}
)
).update_layout({
"geo": {
# "scope": "asia",
# "center": {"lon":78.9629, "lat": 20.5937},
# "projection_type": "mercator",
# "fitbounds": "locations",
}
}).update_geos({
# "scope": "asia",
# "fitbounds": "locations",
# "projection_type": "mercator",
# "center": {"lon":78.9629, "lat": 20.5937},
})
</code></pre>
<p>None of the commented out lines was able to effectively zoom in on the region. Does anyone have a hint as to how to do so, after the recent Plotly map transition?</p>
|
<python><plotly>
|
2025-04-24 21:12:33
| 1
| 341
|
plotmaster473
|
79,591,459
| 301,081
|
pip building wheel for wxPython on Ubuntu
|
<p>I have tried to install wxPython on Python versions 3.10, 3.12, and 3.13, and they all fail with much the same error.
I've installed as many of the required packages as are necessary.
Does anyone have any ideas as to what the problem is?</p>
<pre><code>2025-04-24T15:50:35,367 make[1]: *** [Makefile:28693: coredll_sound_sdl.o] Error 1
2025-04-24T15:50:35,367 make[1]: *** Waiting for unfinished jobs....
2025-04-24T15:50:40,070 make[1]: Leaving directory '/tmp/pip-install-sb8y98_e/wxpython_07aa987c368f479ba95852e8f4be3929/build/wxbld/gtk3'
2025-04-24T15:50:40,071 Error building
2025-04-24T15:50:40,071 ERROR: failed building wxWidgets
2025-04-24T15:50:40,073 Traceback (most recent call last):
2025-04-24T15:50:40,073 File "/tmp/pip-install-sb8y98_e/wxpython_07aa987c368f479ba95852e8f4be3929/build.py", line 1607, in cmd_build_wx
2025-04-24T15:50:40,073 wxbuild.main(wxDir(), build_options)
2025-04-24T15:50:40,073 ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^
2025-04-24T15:50:40,073 File "/tmp/pip-install-sb8y98_e/wxpython_07aa987c368f479ba95852e8f4be3929/buildtools/build_wxwidgets.py", line 505, in main
2025-04-24T15:50:40,073 exitIfError(wxBuilder.build(dir=buildDir, options=args), "Error building")
2025-04-24T15:50:40,074 ~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-04-24T15:50:40,074 File "/tmp/pip-install-sb8y98_e/wxpython_07aa987c368f479ba95852e8f4be3929/buildtools/build_wxwidgets.py", line 70, in exitIfError
2025-04-24T15:50:40,074 raise builder.BuildError(msg)
2025-04-24T15:50:40,074 buildtools.builder.BuildError: Error building
2025-04-24T15:50:40,074 Finished command: build_wx (7m52.940s)
2025-04-24T15:50:40,074 Finished command: build (7m52.940s)
2025-04-24T15:50:40,130 WARNING: Building this way assumes that all generated files have been
2025-04-24T15:50:40,130 generated already. If that is not the case then use build.py directly
2025-04-24T15:50:40,130 to generate the source and perform the build stage. You can use
2025-04-24T15:50:40,130 --skip-build with the bdist_* or install commands to avoid this
2025-04-24T15:50:40,130 message and the wxWidgets and Phoenix build steps in the future.
2025-04-24T15:50:40,130 "/home/cnobile/.virtualenvs/nc3.13/bin/python" -u build.py build
2025-04-24T15:50:40,131 Command '"/home/cnobile/.virtualenvs/nc3.13/bin/python" -u build.py build' failed with exit code 1.
2025-04-24T15:50:40,197 ERROR: Building wheel for wxPython (pyproject.toml) exited with 1
2025-04-24T15:50:40,202 [bold magenta]full command[/]: [blue]/home/cnobile/.virtualenvs/nc3.13/bin/python /home/cnobile/.virtualenvs/nc3.13/lib/python3.13/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py build_wheel /tmp/tmp3i6rdxss[/]
2025-04-24T15:50:40,202 [bold magenta]cwd[/]: /tmp/pip-install-sb8y98_e/wxpython_07aa987c368f479ba95852e8f4be3929
2025-04-24T15:50:40,203 ERROR: Failed building wheel for wxPython
2025-04-24T15:50:40,203 Failed to build wxPython
2025-04-24T15:50:40,266 Remote version of pip: 25.0.1
2025-04-24T15:50:40,266 Local version of pip: 25.0.1
2025-04-24T15:50:40,268 Was pip installed by pip? True
2025-04-24T15:50:40,269 ERROR: Failed to build installable wheels for some pyproject.toml based projects (wxPython)
2025-04-24T15:50:40,269 Exception information:
2025-04-24T15:50:40,269 Traceback (most recent call last):
2025-04-24T15:50:40,269 File "/home/cnobile/.virtualenvs/nc3.13/lib/python3.13/site-packages/pip/_internal/cli/base_command.py", line 106, in _run_wrapper
2025-04-24T15:50:40,269 status = _inner_run()
2025-04-24T15:50:40,269 File "/home/cnobile/.virtualenvs/nc3.13/lib/python3.13/site-packages/pip/_internal/cli/base_command.py", line 97, in _inner_run
2025-04-24T15:50:40,269 return self.run(options, args)
2025-04-24T15:50:40,269 ~~~~~~~~^^^^^^^^^^^^^^^
2025-04-24T15:50:40,269 File "/home/cnobile/.virtualenvs/nc3.13/lib/python3.13/site-packages/pip/_internal/cli/req_command.py", line 67, in wrapper
2025-04-24T15:50:40,269 return func(self, options, args)
2025-04-24T15:50:40,269 File "/home/cnobile/.virtualenvs/nc3.13/lib/python3.13/site-packages/pip/_internal/commands/install.py", line 435, in run
2025-04-24T15:50:40,269 raise InstallationError(
2025-04-24T15:50:40,269 ...<4 lines>...
2025-04-24T15:50:40,269 )
2025-04-24T15:50:40,269 pip._internal.exceptions.InstallationError: Failed to build installable wheels for some pyproject.toml based projects (wxPython)
2025-04-24T15:50:40,270 Removed build tracker: '/tmp/pip-build-tracker-2k4mpfgl'
</code></pre>
|
<python><python-3.x><pip><wxpython>
|
2025-04-24 21:03:45
| 0
| 435
|
cnobile
|
79,591,253
| 2,410,605
|
I can run an RPA exe by going to the server directory and double clicking it, but get an error using a command line
|
<p>I've built an RPA that runs fine as a .py. I turned it into an .exe and it ran fine that way as well. Next I moved it to the production server that it will run from and using File Explorer, double clicked on the exe and again it ran fine.</p>
<p>The job is scheduled to run from a SQL Server Job Scheduler, and when it's ran through the scheduler, that's when we get the error:</p>
<p><a href="https://i.sstatic.net/fzMIhiH6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fzMIhiH6.png" alt="Unhandled Exception in Script" /></a></p>
<p>But the file token.json IS there.</p>
<p>The Job Scheduler setup is this:</p>
<p><a href="https://i.sstatic.net/F3H8hJVo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/F3H8hJVo.png" alt="Job Scheduler" /></a></p>
<p>Below is the excerpt from the log file created. The first <strong>StartOfJob</strong> is the result of running it from the SQL Job Scheduler, you can see in the log that it's looking for the token and that's when the error message above pops up. The second <strong>StartOfJob</strong> is what I got after opening a command prompt and running it myself from a command line, it worked.</p>
<p>2025-04-24 09:38:31,280 ===========S T A R T O F J O B=====================
2025-04-24 09:38:31,280 Step 1. Gather Credentials
2025-04-24 09:38:31,280 Looking for token.json in \\LocationOfServices
2025-04-24 14:02:46,845 ==========S T A R T O F J O B======================
2025-04-24 14:02:46,847 Step 1. Gather Credentials
2025-04-24 14:02:46,848 Looking for token.json in \\LocationOfServices
2025-04-24 14:02:46,876 token.json exists, use token to establish identity
2025-04-24 14:02:47,135 token.json exists but is expired, refresh and retry
2025-04-24 14:02:47,141 token.json has now been created</p>
<p>I don't have access to test from our SQL Server, so I know a big difference is that I'm running it as a mapped shared drive from my computer, where as the SQL Server job scheduler is running it directly.</p>
<p>My DBAs are running it from the Job Scheduler using a service account that has All-Access/Admin privileges on the server.</p>
<p>Below is the code for the py in case that's relevant. Other than that I'm not sure what else to include to help, but if anybody wants to offer suggestions on what else is needed I will be happy to provide.</p>
<p>Thanks in advance for any help you can provide in figuring this out!</p>
<pre><code>import os
import logging
import pysftp
import requests
import sys
from google.auth.transport.requests import Request
from google.oauth2.credentials import Credentials
from google_auth_oauthlib.flow import InstalledAppFlow
from googleapiclient.discovery import build
from googleapiclient.errors import HttpError
#local defs
download_path = "\\\<ServerName>\\LocationOfServices"
SCOPES = ["https://www.googleapis.com/auth/spreadsheets"]
SPREADSHEET_ID = "<SpreadsheetID>"
OUT_DIR = 'output/'
OUT_FILENAME = "jcpsLocationOfServices.csv"
#Set up logging
logger = logging.getLogger(__name__)
fileHandler = logging.FileHandler(os.path.join(download_path, "logs.txt"))
formatter = logging.Formatter("%(asctime)s %(message)s")
fileHandler.setFormatter(formatter)
logger.addHandler(fileHandler)
logger.setLevel(logging.INFO)
logger.info("=====================S T A R T O F J O B=================================")
#Main function to loc ingo Google Sheet
def main():
logger.info("Step 1. Gather Credentials")
credentials = None
logger.info(f" Looking for token.json in {download_path}")
if os.path.exists(os.path.join(download_path, "token.json")):
credentials = Credentials.from_authorized_user_file("token.json", SCOPES)
logger.info(" token.json exists, use token to establish identity")
else:
logger.info(" Cannot find token.json")
if not credentials or not credentials.valid:
if credentials and credentials.expired and credentials.refresh_token:
credentials.refresh(Request())
logger.info(" token.json exists but is expired, refresh and retry")
else:
flow = InstalledAppFlow.from_client_secrets_file("credentials.json", SCOPES)
credentials = flow.run_local_server(port=0)
logger.info(" token.json does not exist, use the credentials.json file to contact Google and create it")
with open(os.path.join(download_path, "token.json"), "w") as token:
token.write(credentials.to_json())
logger.info(" token.json has now been created")
try:
logger.info("Step 2. Send Google Sheet URL to the API and set up usage permissions")
#if Google Sheet contains more than one tab, the tab id is the "gid=" at the end of the URL
url = f'https://docs.google.com/spreadsheets/export?id=<SpreadsheetID>&exportFormat=csv&gid=<GridID>'
response = requests.get(url, headers = {'Authorization': 'Bearer ' + credentials.token})
if response.status_code == 200:
<ftp data>
else:
logger.info(f' *****ERROR***** downloading Google Sheet: {response.status_code}')
except HttpError as error:
logger.info(f'*****ERROR***** {error}')
print(error)
if __name__ == "__main__":
main()
logger.info("Step 4. Exit program - CONGRATS!!")
sys.exit(0); ## success
</code></pre>
<p><strong>ADDENDUM:</strong></p>
<p>The link to the post that solved my issue was a tremendous help and got me pointed in the right direction, but it wasn't 100% what my issue was.</p>
<p>If you look at these two lines in my code, the second line was the problem:</p>
<pre><code>if os.path.exists(os.path.join(download_path, "token.json")):
credentials = Credentials.from_authorized_user_file("token.json", SCOPES)
</code></pre>
<p>In the first line I'm looking for the existence of the file by passing in the file path, so it was finding it. But then on the second line, I'm trying to open the file WITHOUT supplying the file path, so it was trying to look for the file in the temporary folder and not finding it. Once I added the file path to the second line, it started working. I'm not sure if I would have ever caught that, though, without reading about a temporary directory being created. So a huge thanks to @sevC_10 and @furas for providing links and explaining about the temporary folder that gets created!!</p>
|
<python><command-line><sql-server-job>
|
2025-04-24 18:21:10
| 0
| 657
|
JimmyG
|
79,591,152
| 9,465,029
|
Root mean square linearisation for linear programming
|
<p>I am trying to linearise the function root mean square to use it in a linear optimisation or Mixed integer linear optimisation. Any idea how I could do this? For instance with the example below, if I wanted to maximize P*100, the model would give P=10, Q = 0 and S=10.</p>
<p>Many thanks</p>
<pre><code>import numpy as np
import pulp
S = np.sqrt(P**2 + Q**2)
model = pulp.LpProblem("Linearise RMS", pulp.LpMaximize)
P = pulp.LpVariable("P", lowBound=-10, upBound=10 ,cat="Continuous")
Q = pulp.LpVariable("Q", lowBound=-10, upBound=10 ,cat="Continuous")
S = pulp.LpVariable("S", lowBound= 0, upBound=10 ,cat="Continuous")
objective_function = P*100
model.setObjective(objective_function)
cbc_solver = PULP_CBC_CMD(options=['ratioGap=0.02'])
result = model.solve(solver=cbc_solver)
</code></pre>
|
<python><optimization><pulp>
|
2025-04-24 17:08:24
| 1
| 631
|
Peslier53
|
79,590,866
| 16,611,809
|
How to make a reactive event silent for a specific function?
|
<p>I have the app at the bottom. Now, I have this preset field where I can select from 3 options (+ the option <code>changed</code>). What I want to be able to set the input_option with the preset field. But I also want to be able to change it manually. If I change the input_option manually the preset field should switch to <code>changed</code>. The problem is, if I set the option with the preset field, this automatically triggers the second function and sets input_preset back to <code>changed</code>. But this should only happen, if I manually change it, not if it is changed by the first reactive function. is that somehow possible? I tried a little with <code>reactive.isolate()</code>, but this does not seem to have any effect.</p>
<pre><code>from shiny import App, ui, reactive
app_ui = ui.page_fillable(
ui.layout_sidebar(
ui.sidebar(
ui.input_select("input_preset", "input_preset", choices=["A", "B", "C", "changed"]),
ui.input_text("input_option", "input_option", value=''),
)
)
)
def server(input, output, session):
@reactive.effect
@reactive.event(input.input_preset)
def _():
if input.input_preset() != 'changed':
# with reactive.isolate():
ui.update_text("input_option", value=str(input.input_preset()))
@reactive.effect
@reactive.event(input.input_option)
def _():
ui.update_select("input_preset", selected='changed')
app = App(app_ui, server)
</code></pre>
|
<python><py-shiny>
|
2025-04-24 14:31:24
| 1
| 627
|
gernophil
|
79,590,719
| 3,854,191
|
How to display inactive inherit_children_ids lines in ir.ui.view's form?
|
<p>In the model <code>ir.ui.view</code> in odoo 18, i want to get the inactive lines of the one2Many-field: <code>inherit_children_ids</code> as permanently displayed.</p>
<p>I have tried to customize using a custom module (see below) but it does not change anything: the inactive line of <code>inherit_children_ids</code> disappears after having clicked on the Save button of the current (parent) view.</p>
<pre class="lang-py prettyprint-override"><code># -*- coding: utf-8 -*-
from odoo import api, fields, models, tools
class IrUiViewZsb(models.Model):
_inherit = 'ir.ui.view'
# override from addon : base>ir_ui_view.py: add domain="[('active', 'in', [False,True])]"
inherit_children_ids = fields.One2many('ir.ui.view', 'inherit_id', domain="[('active', 'in', [False,True])]", string='Views which inherit from this one')
# override active = True from addon : base>ir_ui_view.py
@api.model
def _get_inheriting_views_domain(self):
""" Return a domain to filter the sub-views to inherit from. """
return [('active', 'in', [False,True])]
</code></pre>
<p>I have tried another approach by extending the xml view ("<code>base.view_view_form</code>") using <code>domain</code> and <code>context</code> but it does not help neither:</p>
<pre class="lang-xml prettyprint-override"><code><field name="inherit_children_ids" position="attributes">
<attribute name="domain">[]</attribute>
<attribute name="context">{'active_test': False}</attribute>
</field>
</code></pre>
<p><a href="https://i.sstatic.net/fb1J4P6t.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fb1J4P6t.jpg" alt="enter image description here" /></a></p>
|
<python><xml><view><one2many><odoo-18>
|
2025-04-24 13:28:05
| 1
| 1,677
|
S Bonnet
|
79,590,570
| 2,123,706
|
save table in python as image
|
<p>I want to save a dataframe as a table in python.</p>
<p>Following <a href="https://plotly.com/python/table/" rel="nofollow noreferrer">https://plotly.com/python/table/</a>, I can save a table as an image, but only the first few rows, not the entire dataset.</p>
<p><a href="https://i.sstatic.net/E4Kc3H9Z.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/E4Kc3H9Z.png" alt="enter image description here" /></a></p>
<p>Is there a way to save all the rows?</p>
<p>code used:</p>
<pre><code>df = pd.read_csv('https://raw.githubusercontent.com/plotly/datasets/master/2014_usa_states.csv')
fig = go.Figure(data=[go.Table(
header=dict(values=list(df.columns),
fill_color='paleturquoise',
align='left'),
cells=dict(values=[df.Rank, df.State, df.Postal, df.Population],
fill_color='lavender',
align='left'))
])
fig.write_image('image.png',scale=10)
</code></pre>
|
<python><plotly><save-image>
|
2025-04-24 12:18:36
| 1
| 3,810
|
frank
|
79,590,554
| 538,256
|
python pipes deprecated - interact with mpg123 server
|
<p>I wrote some years ago an iTunes-replacing program in Python, with mpg123 accessed by a pipe (from the manual: <code>-R, --remote: Activate generic control interface. mpg123 will then read and execute commands from stdin. Basic usage is ``load <filename>'' to play some file</code>).</p>
<p>I dont have available a machine with the old python & pipes still working so it's hard to give you an example, but it was something like this:</p>
<pre><code>import subprocess,pipes
mpgpipe = "/tmp/mpg123remote.cmd"
mpgout = "/tmp/mpg123.out"
fmpgout=open(mpgout,"w")
mpg123proc = subprocess.Popen(["/usr/bin/mpg123", "-vR", "--fifo", mpgpipe ], stdin=subprocess.PIPE, stdout=fmpgout)
t = pipes.Template()
t.append("echo load \"%s\"" % "test.mp3", '--')
f = t.open(mpgpipe, 'w')
f.close()
</code></pre>
<p>Recently I started to get a warning <code>DeprecationWarning: 'pipes' is deprecated and slated for removal in Python 3.13</code> - and now it happened: the pipes library isnt available anymore.</p>
<p>This is the test code I'm using, trying to access mpg123 by a named pipe:</p>
<pre><code>import os, io, subprocess
mpgpipe = "/tmp/mpg123remote.cmd"
try:
os.mkfifo(mpgpipe)
print("Named pipe created successfully!")
except FileExistsError:
print("Named pipe already exists!")
except OSError as e:
print(f"Named pipe creation failed: {e}")
fmpgout=open("/tmp/mpg123.out","w")
mpg123proc = subprocess.Popen(["/usr/bin/mpg123", "-a", "default", "-vR", "--fifo", mpgpipe ], stdout=fmpgout)
pipfd = os.open(mpgpipe, os.O_RDWR | os.O_NONBLOCK)
songname = "test.mp3"
command=f"load \"{songname}\"\n"
with io.FileIO(pipfd, 'wb') as f: f.write(command.encode())
</code></pre>
<p>but it doesnt work: no audio, no error, and this text output :</p>
<pre><code>High Performance MPEG 1.0/2.0/2.5 Audio Player for Layers 1, 2 and 3
version 1.32.9; written and copyright by Michael Hipp and others
free software (LGPL) without any warranty but with best wishes
Decoder: x86-64 (AVX)
Trying output module: alsa, device: default
</code></pre>
<p>(of course the audio device is ok and listening to test.mp3 with mpg123 from the terminal works perfectly...)</p>
<p>any idea???</p>
|
<python><pipe><mpg123>
|
2025-04-24 12:10:36
| 1
| 4,004
|
alessandro
|
79,590,536
| 1,593,077
|
What Python exception should I raise when an input file has the wrong size?
|
<p>I'm writing a Python script which reads some input from a file. The input size can be calculated using other information the script has, and the file must have exactly that. Now, if I check and figure out that the file size is not as expected - what kind of exception should I raise?</p>
<p>I had a look at the <a href="https://docs.python.org/3/library/exceptions.html#exception-hierarchy" rel="nofollow noreferrer">hierarchy of exception types</a> in the official Python docs but could not decide which one would be the best fit. Is it a <code>ValueError</code>? <code>AssertionError</code>? Maybe Something else? Or maybe just <code>Exception</code>?</p>
|
<python><exception><error-handling><file-io>
|
2025-04-24 12:02:26
| 1
| 137,004
|
einpoklum
|
79,590,189
| 5,786,649
|
Closing a session in Flask, SQLAlchemy to delete the temporary file
|
<p>I know there are similar questions out there, but none answered my problem.</p>
<p>I am using a basic test setup in Flask using SQLAlchemy, that is a slightly modified version of the Flask-tutorial tests. My problem is that the temporary file created by the app-fixture cannot be closed. This is the fixture:</p>
<pre class="lang-py prettyprint-override"><code>import os
import tempfile
import pytest
from flask.testing import FlaskClient
# custom modules: create_app is the function factory, db is flask_sqlalchemy.SQLAlchemy(model_class=Base)
from flaskr import create_app
from flaskr.models import db
@pytest.fixture
def app():
# creates and opens a temporary file
db_fd, db_path = tempfile.mkstemp(dir="test")
# call the factory function, override DATABASE path so it points to the temporary
# path instead of the instance folder
app = create_app(
{
"TESTING": True,
"SQLALCHEMY_DATABASE_URI": f"sqlite:///{db_path}",
}
)
with app.test_client() as test_client:
with app.app_context():
db.create_all()
yield app
with app.test_client() as test_client:
with app.app_context():
db.drop_all()
db.session.remove()
os.close(db_fd)
os.unlink(db_path)
</code></pre>
<p><code>os.unlink()</code> always fails, because the file is still used by another process. A typical usage in a test would be</p>
<pre class="lang-py prettyprint-override"><code>def test_example(app):
with app.app_context():
assert db.session.execute(stmt) ...
</code></pre>
<p>I could probably just use the same file for testing the database every time, then delete it at the start of the <code>app()</code>-fixture, but I am worried that the cause of the unterminated session connection might also cause other problems, so I would like to understand and fix it.</p>
|
<python><flask><sqlalchemy>
|
2025-04-24 08:57:24
| 2
| 543
|
Lukas
|
79,590,090
| 8,291,840
|
model.predict hangs in celery/uwsgi
|
<pre class="lang-py prettyprint-override"><code>import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
from apps.common.utils.error_handling import suppress_callable_to_sentry
from django.conf import settings
from threading import Lock
MODEL_PATH = settings.BASE_DIR / "apps/core/utils/nsfw_detector/nsfw.299x299.h5"
model = tf.keras.models.load_model(MODEL_PATH, custom_objects={"KerasLayer": hub.KerasLayer}, compile=False)
IMAGE_DIM = 299
TOTAL_THRESHOLD = 0.9
INDIVIDUAL_THRESHOLD = 0.7
predict_lock = Lock()
@suppress_callable_to_sentry(Exception, return_value=False)
def is_nsfw(image):
if image.mode == "RGBA":
image = image.convert("RGB")
image = image.resize((IMAGE_DIM, IMAGE_DIM))
image = np.array(image) / 255.0
image = np.expand_dims(image, axis=0)
with predict_lock:
preds = model.predict(image)[0]
categories = ["drawings", "hentai", "neutral", "porn", "sexy"]
probabilities = {cat: float(pred) for cat, pred in zip(categories, preds)}
individual_nsfw_prob = max(probabilities["porn"], probabilities["hentai"], probabilities["sexy"])
total_nsfw_prob = probabilities["porn"] + probabilities["hentai"] + probabilities["sexy"]
return (individual_nsfw_prob > INDIVIDUAL_THRESHOLD) or (total_nsfw_prob > TOTAL_THRESHOLD)
</code></pre>
<p>This works from python shell and django shell but stucks at predict part in uwsgi and in celery, anyone got any idea why that might be happening?</p>
<p>I put a bunch of breakpoints and the problem is at the prediction itself, in shell it returns in ~100ms but it hangs in uwsgi and celery for 10+ minutes (didn't try for longer as I think it is obvious it won't return)</p>
<p>Tried it with and without the lock, same result</p>
|
<python><django><tensorflow><keras><celery>
|
2025-04-24 08:10:45
| 0
| 3,042
|
Işık Kaplan
|
79,589,899
| 8,913,338
|
How to resume halted core to work with pylink
|
<p>I am using the <code>pylink</code> library for accessing JLINK.</p>
<p>How can I from <code>pylink</code> run the processor when it halted (similar to the vscode debugger continue command).</p>
<p>I know that there is <a href="https://pylink.readthedocs.io/en/latest/pylink.html#pylink.jlink.JLink.reset" rel="nofollow noreferrer">reset</a> command in <code>pylink</code> but this will make my core to start from its entry point according to the docs instead of resuming from current halted point.</p>
|
<python><arm><segger-jlink>
|
2025-04-24 05:52:29
| 1
| 511
|
arye
|
79,589,800
| 10,445,333
|
Find corresponding date of max value in a rolling window of each partition
|
<p>Sample code:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
from datetime import date
from random import randint
df = pl.DataFrame({
"category": [cat for cat in ["A", "B"] for _ in range(1, 32)],
"date": [date(2025, 1, i) for _ in ["A", "B"] for i in range(1, 32)],
"value": [randint(1, 50) for _ in ["A", "B"] for _ in range(1, 32)]
})
</code></pre>
<p>I know how to find the maximum value in a rolling window in each partition by following:</p>
<pre class="lang-py prettyprint-override"><code>df.with_columns(
value_max=pl.col("value").rolling_max_by("date", window_size="5d", closed="both").over(pl.col("category"))
)
</code></pre>
<pre><code>shape: (62, 4)
┌──────────┬────────────┬───────┬───────────┐
│ category ┆ date ┆ value ┆ value_max │
│ --- ┆ --- ┆ --- ┆ --- │
│ str ┆ date ┆ i64 ┆ i64 │
╞══════════╪════════════╪═══════╪═══════════╡
│ A ┆ 2025-01-01 ┆ 42 ┆ 42 │
│ A ┆ 2025-01-02 ┆ 18 ┆ 42 │
│ A ┆ 2025-01-03 ┆ 33 ┆ 42 │
│ A ┆ 2025-01-04 ┆ 35 ┆ 42 │
│ A ┆ 2025-01-05 ┆ 46 ┆ 46 │
│ … ┆ … ┆ … ┆ … │
│ B ┆ 2025-01-27 ┆ 49 ┆ 49 │
│ B ┆ 2025-01-28 ┆ 22 ┆ 49 │
│ B ┆ 2025-01-29 ┆ 49 ┆ 49 │
│ B ┆ 2025-01-30 ┆ 32 ┆ 49 │
│ B ┆ 2025-01-31 ┆ 25 ┆ 49 │
└──────────┴────────────┴───────┴───────────┘
</code></pre>
<p>I also want to create another column and find the corresponding date of the <code>value_max</code>. For example, for category <code>A</code>, the value of this new column in date <code>2025-01-01</code> / <code>2025-01-02</code> / <code>2025-01-03</code> / <code>2025-01-04</code> is <code>2025-01-01</code>, and <code>2025-01-05</code> when date <code>2025-01-05</code>.</p>
<p>I currently use a self-join with condition to get the date. Is there another way to do that without using joining? Also, is the new logic achievable by using SQL?</p>
|
<python><sql><dataframe><window-functions><python-polars>
|
2025-04-24 03:59:03
| 1
| 2,333
|
Jonathan
|
79,589,689
| 11,462,274
|
RSS Memory and Virtual Memory do not decrease even after killing a playwright instance and creating a completely new one
|
<p>I'm doing continuous iteration tests with playwright on a website and I noticed that RSS Memory and Virtual Memory are gradually increasing and no direct method is managing to dissipate the memory consumption. I've already tried every 100 iterations:</p>
<ol>
<li>Refresh the current page to avoid generating memory waste
accumulation (didn't work)</li>
<li>Continue work in a new tab and close the previous one that had
memory waste accumulation (didn't work)</li>
<li>Destroy the current instance of playwright and start a completely
new one (didn't work)</li>
</ol>
<blockquote>
<p>I've tested it for a few hours and it's becoming unfeasible because
there comes a point where it consumes so much memory that it becomes
impossible to run on a server.</p>
</blockquote>
<p>The current test I did by destroying playwright and starting a completely new one:</p>
<pre class="lang-python prettyprint-override"><code>from playwright.sync_api import sync_playwright
from urllib.parse import urlparse
import time
import psutil
import os
from threading import Thread
def create_proxy(proxy_string:str) -> Dict[str, str]:
try:
proxy_ip, proxy_port, proxy_user, proxy_pass = proxy_string.split(":")
proxies = {
"http": f"http://{proxy_user}:{proxy_pass}@{proxy_ip}:{proxy_port}",
"https": f"http://{proxy_user}:{proxy_pass}@{proxy_ip}:{proxy_port}",
}
return proxies
except ValueError:
raise ValueError("Invalid Proxy format. Expected: IP:port:user:password")
class PlaywrightBrowser:
def __init__(self, proxy_string):
self.proxies = create_proxy(proxy_string)
proxy_url = self.proxies['http']
parsed = urlparse(proxy_url)
self.server = f"{parsed.scheme}://{parsed.hostname}:{parsed.port}"
self.username = parsed.username
self.password = parsed.password
self.playwright = sync_playwright().start()
self.browser = self.playwright.chromium.launch(
proxy={
"server": self.server,
"username": self.username,
"password": self.password
}
)
self.context = self.browser.new_context()
def new_page(self):
return self.context.new_page()
def close(self):
if self.context:
self.context.close()
if self.browser:
self.browser.close()
if self.playwright:
self.playwright.stop()
def setup_page(page, url):
page.goto(url, timeout=60000)
page.wait_for_selector("#result", timeout=30000)
def monitor_dice(proxy_string, url):
try:
pb = PlaywrightBrowser(proxy_string)
page = pb.new_page()
setup_page(page, url)
print("Monitoring dice result. Press Ctrl+C to stop.")
iteration_count = 0
while True:
page.wait_for_function("document.querySelector('#result').textContent.length > 0", timeout=15000)
result = page.query_selector("#result").text_content().strip()
print(f"Dice result: {result}")
iteration_count += 1
if iteration_count % 100 == 0:
print("Restarting Playwright to free memory...")
page.close()
pb.close()
pb = PlaywrightBrowser(proxy_string)
page = pb.new_page()
setup_page(page, url)
time.sleep(1)
except KeyboardInterrupt:
print("\nMonitoring interrupted by user.")
except Exception as e:
print(f"Error: {e}")
raise
finally:
if 'pb' in locals():
pb.close()
print("Execution finished.")
def monitor_resources():
process = psutil.Process(os.getpid())
while True:
cpu_percent = process.cpu_percent(interval=1)
memory_info = process.memory_info()
memory_rss = memory_info.rss / 1024 / 1024 # Memory in MB
memory_vms = memory_info.vms / 1024 / 1024 # Virtual memory in MB
print(f"CPU: {cpu_percent}% | RSS Memory: {memory_rss:.2f} MB | Virtual Memory: {memory_vms:.2f} MB")
time.sleep(1)
if __name__ == "__main__":
monitor_thread = Thread(target=monitor_resources, daemon=True)
monitor_thread.start()
proxy_string = "111.222.33.44:55555:AAAAAAAA:BBBBBBBB" # Replace with your proxy
url = "http://olympus.realpython.org/dice"
monitor_dice(proxy_string, url)
</code></pre>
|
<python><playwright><playwright-python><psutil>
|
2025-04-24 01:27:39
| 0
| 2,222
|
Digital Farmer
|
79,589,564
| 1,413,856
|
Is it possible to limit attributes in a Python sub class using __slots__?
|
<p>One use of <code>__slots__</code> in Python is to disallow new attributes:</p>
<pre class="lang-py prettyprint-override"><code>class Thing:
__slots__ = 'a', 'b'
thing = Thing()
thing.c = 'hello' # error
</code></pre>
<p>However, this doesn’t work if a class inherits from another slotless class:</p>
<pre class="lang-py prettyprint-override"><code>class Whatever:
pass
class Thing(Whatever):
__slots__ = 'a', 'b'
thing = Thing()
thing.c = 'hello' # ok
</code></pre>
<p>That’s because it also inherits the <code>__dict__</code> from its parent which allows additional attributes.</p>
<p>Is there any way of blocking the <code>__dict__</code> from being inherited?</p>
<p>It seems to me that this would allow a sub class to be less generic that its parent, so it’s surprising that it doesn’t work this way naturally.</p>
<p><strong>Comment</strong></p>
<p>OK, the question arises as whether this would violate the <a href="https://en.wikipedia.org/wiki/Liskov_substitution_principle" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Liskov_substitution_principle</a> . This, in turn buys into a bigger discussion on inheritance.</p>
<p>Most books would, for example, suggest that a circle is an ellipse so a Circle class should inherit from an Ellipse class. However, since a circle is more restrictive, this would violate the Liskov Substitution Principle in that a sub class should not do <em>less</em> than the parent class.</p>
<p>In this case, I’m not sure about whether it applies here. Python has no access modifiers, so object data is already over-exposed. Further, without <code>__slots__</code> Python objects are pretty promiscuous about adding additional attributes, and I’m not sure that’s really part of the intended discussion.</p>
|
<python><inheritance><subclass>
|
2025-04-23 22:12:02
| 1
| 16,921
|
Manngo
|
79,589,524
| 4,045,275
|
Apply different aggregate functions to different columns of a pandas dataframe, and run a pivot/crosstab?
|
<h2>The issue</h2>
<p>In SQL it is very easy to apply different aggregate functions to different columns, e.g. :</p>
<pre><code>select item, sum(a) as [sum of a], avg(b) as [avg of b], min(c) as [min of c]
</code></pre>
<p>In Python, not so much. For a simple groupby, this <a href="https://stackoverflow.com/questions/66195952/pythonic-way-to-apply-different-aggregate-functions-to-different-columns-of-a-pa/66197013">answer</a> provides an elegant and pythonic way to do it.</p>
<h2>Desired output</h2>
<p>The answer linked above shows calculations ( sum of a, weighted avg of b, etc) for each city.
Now I need to add another dimension - let's call it colour - and show the intersection / pivot / crosstab of city and colour. I want to create the two tables below:</p>
<p><a href="https://i.sstatic.net/YjBXhUtx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YjBXhUtx.png" alt="enter image description here" /></a></p>
<h2>What I have tried (with a toy example)</h2>
<p>Using <code>groupby()</code> and <code>unstack()</code> gives me Version 2. I am not sure how to obtain version 1 of the screenshot above. See toy example below.</p>
<pre><code>import pandas as pd
import numpy as np
np.random.seed(100) # so results are always the same
df = pd.DataFrame(columns =['a','b','c','d'], data = np.random.rand(300,4))
df['city'] = np.repeat(['London','New York','Buenos Aires'], 100)
df['colour'] = np.random.choice(['green','red'],300)
def func(x, df):
# func() gets called within a lambda function; x is the row, df is the entire table
s_dict = {}
s_dict['sum of a'] = x['a'].sum()
s_dict['% of a'] = x['a'].sum() / df['a'].sum() if df['a'].sum() !=0 else np.nan
s_dict['avg of b'] = x['b'].mean()
s_dict['weighted avg of a, weighted by b'] = ( x['a'] * x['b']).sum() / x['b'].sum() if x['b'].sum() >0 else np.nan
s_dict['sum of c'] = x['c'].sum()
s_dict['sum of d'] = x['d'].sum()
return pd.Series( s_dict )
out = df.groupby(['city','colour']).apply(lambda x: func(x,df)).unstack()
</code></pre>
<h2>Note on potential duplicates</h2>
<p>The answer linked above does not address the case of an additional dimension.</p>
<p>Most of the examples I have seen with pivot() and crosstab() do not contain custom functions, but more basic functions like count, sum, average. There are examples with custom functions, but not similar to my case, e.g. operating on one field only, not 2 (see <a href="https://stackoverflow.com/questions/67304225/in-a-pandas-pivot-table-how-do-i-define-a-function-for-a-subset-of-data">here</a> ) or without the bi-dimensional aspect (repeat the calculation for each colour) - e.g. <a href="https://stackoverflow.com/questions/45440895/pandas-crosstab-with-own-function">here</a>.</p>
|
<python><pandas><dataframe><group-by>
|
2025-04-23 21:31:33
| 1
| 9,100
|
Pythonista anonymous
|
79,589,289
| 4,841,248
|
Is it good practice to override an abstract method with more specialized signature?
|
<p><em>Background information below the question.</em></p>
<p>In Python 3, I can define a class with an abstract method and implement it in a derived class using a more specialized signature. I know this works, but like many things work in many programming languages, it may not be good practice. So is it?</p>
<pre class="lang-py prettyprint-override"><code>from abc import ABC, abstractmethod
class Base(ABC):
@abstractmethod
def foo(self, *args, **kwargs):
raise NotImplementedError()
class Derived(Base):
def foo(self, a, b, *args, **kwargs):
print(f"Derived.foo(a={a}, b={b}, args={args}, kwargs={kwargs})")
d = Derived()
d.foo(1, 2, 3, "bar", baz="baz")
# output:
# Derived.foo(a=1, b=2, args=(3, 'bar'), kwargs={'baz': 'baz'})
</code></pre>
<p><strong>Is this good or bad practice?</strong></p>
<hr />
<p>More information as promised.</p>
<p>I have an interface that defines an abstract method. It returns some sort of handle. Specialized implementations must always be able to return a sort of default handle if the method is called without any extra arguments. However, they may define certain flags to tweak the handle to the use case of the caller. In this case, the caller is also the one that instantiated the specialized implementation and knows about these flags. Generic code operating only on the interface or the handles does not know about these flags but does not need to.</p>
<pre class="lang-py prettyprint-override"><code>from abc import ABC, abstractmethod
class Manager(ABC):
@abstractmethod
def connect(self, *args, **kwargs):
raise NotImplementedError()
class DefaultManager(Manager):
def connect(self, *, thread_safe: bool = False):
if thread_safe:
return ThreadSafeHandle()
else:
return DefaultHandle()
</code></pre>
<p>It is specific to my use case that a <code>Manager</code> implementation may want to issue different implementations of handles specific to the use case of the caller. Managers are defined in one place in my code and callers may or may not have specialized needs, such as thread safety in the example, for the managers they use.</p>
|
<python><python-3.x><inheritance><overriding>
|
2025-04-23 18:37:27
| 2
| 2,473
|
Maarten Bamelis
|
79,589,222
| 150,510
|
How do I make torch.ones(...) work inside a traced wrapper model during symbolic_trace()?
|
<p>Thanks for giving this a read...</p>
<p>I am getting going with PyTorch. I’m building a tool that wraps HuggingFace models in a custom WrappedModel so I can trace their execution using torch.fx.symbolic_trace. The goal is to analyze the traced graph and detect certain ops like float32 usage.</p>
<p>To do this, I:</p>
<ul>
<li>Wrap the model in a subclass of torch.nn.Module.</li>
<li>Run a forward() pass with dummy input_ids.</li>
<li>Call symbolic_trace(wrapped_model) or fall back to torch.jit.trace().</li>
</ul>
<p><strong>What’s going wrong:</strong></p>
<p>I consistently see:</p>
<blockquote>
<p>Forward pass failed in WrappedModel — slice indices must be integers
or None or have an index method</p>
</blockquote>
<p>And ultimately:</p>
<blockquote>
<p>Rule run failed: ‘method’ object is not iterable</p>
</blockquote>
<p><strong>Likely problematic code:</strong></p>
<pre><code>class WrappedModel(torch.nn.Module):
def __init__(self, model):
super().__init__()
self.model = model
def forward(self, input_ids):
try:
batch_size = input_ids.size(0)
seq_len = input_ids.size(1)
# This line fails during symbolic tracing
attention_mask = torch.ones((batch_size, seq_len), dtype=torch.int64)
output = self.model(input_ids=input_ids, attention_mask=attention_mask)
except Exception as e:
logging.warning(f"TRACE ERROR inside wrapped forward: {e}")
return torch.zeros(1, 1)
if hasattr(output, "last_hidden_state"):
return output.last_hidden_state
elif hasattr(output, "logits"):
return output.logits
return output
</code></pre>
<p><strong>What I have already tried:</strong></p>
<ul>
<li>Using input_ids.size(0) instead of input_ids.shape[0]</li>
<li>Making sure the dummy input has fixed dimensions: torch.randint(0, 1000, (1, 10))</li>
<li>Hardcoding the mask shape (e.g., torch.ones((1, 10), dtype=torch.int64))</li>
<li>Falling back to torch.jit.trace — same error during forward Switching between BertModel and BertForSequenceClassification</li>
</ul>
<p><strong>What (I think) I am asking for:</strong></p>
<p>How do I make torch.ones(...) work inside a traced wrapper model during symbolic_trace()?</p>
<p>Thank you in advance for any guidance.</p>
|
<python><pytorch><huggingface-transformers><torch>
|
2025-04-23 17:57:16
| 1
| 5,358
|
Peter
|
79,589,185
| 1,450,294
|
Python interpreter doesn't honour .inputrc readline settings when run from a venv
|
<p>My <code>.inputrc</code> file, which simply contains <code>set editing-mode vi</code>, means that when I use Bash and some other interpretive environments, I can use Vi editor keys. This also works when I run the system Python interpreter. But when I run the Python interpreter in a venv (Python virtual environment), those key bindings don't work.</p>
<p><strong>Update</strong>: When I run <code>python3.13</code> without a virtual environment, the keybindings work, so it does seem to be a venv-specific issue. I also tried installing the venv with <code>--system-site-packages</code>, but that didn't make a difference.</p>
<p>I've actually been using Python, Linux, venv and Vi key bindings for decades, and I don't remember running up against this before a month or two ago, so I wonder if something changed in Python 3.13, or the way venv works starting with that version.</p>
<p><strong>Question:</strong> How can I set up my venv so it honours my <code>.inputrc</code> file, or what else do I need to do so it works?</p>
<p>My setup:</p>
<ul>
<li>system Python 3.10.12</li>
<li>venv Python 3.13.3</li>
<li>bash 5.1.16</li>
<li>readline 8.0.1 (from <code>print /x (int) rl_readline_version</code> in <code>gdb bash</code>)</li>
<li>Linux Mint 21.3 Cinnamon</li>
</ul>
<p>I create and activate the venv with the following commands:</p>
<pre class="lang-bash prettyprint-override"><code>python3.13 -m venv venv
. venv/bin/activate
</code></pre>
<p>I run the Python in the venv just by typing <code>python</code>, and the system python by typing <code>python3</code>. My <code>.inputrc</code> file is in my home directory.</p>
<p>A curious thing is that the two Python versions share the same command history: Commands I type in the system Python appear when I arrow through the venv Python, and vice-versa, except that in the system Python I can use <kbd>Esc</kbd><kbd>j</kbd> and <kbd>Esc</kbd><kbd>k</kbd>, which is what I want.</p>
|
<python><linux><readline><key-bindings><venv>
|
2025-04-23 17:38:54
| 1
| 7,291
|
Michael Scheper
|
79,589,116
| 1,631,159
|
Request to create a note in Google Keep returns invalid argument
|
<p>I'm trying to create notes in Google Keep using API, here is the python script:</p>
<pre><code>import sys
from google.oauth2.service_account import Credentials
from googleapiclient.discovery import build
from googleapiclient.errors import HttpError
# Define the required scopes for Google Sheets and Keep
SCOPES = [
'https://www.googleapis.com/auth/keep',
]
# Path to your service account JSON key file
SERVICE_ACCOUNT_FILE = 'service_file.json' # Replace with your file path
def authenticate():
"""Authenticates with Google Sheets and Keep using a service account."""
try:
creds = Credentials.from_service_account_file(
SERVICE_ACCOUNT_FILE, scopes=SCOPES
)
return creds
except Exception as e:
print(f"Error during authentication: {e}")
return None
def create_keep_note(creds):
"""Creates a note in Google Keep.
Args:
creds: The authenticated credentials.
"""
try:
service = build('keep', 'v1', credentials=creds)
note = {
"title": "title",
"body": {
"text": {
"text": "body"
}
}
}
service.notes().create(body=note).execute()
print(f"Note '{title}' created in Google Keep.")
except HttpError as e:
print(f"Error creating Keep note: {e}")
except Exception as e:
print(f"An unexpected error occurred: {e}")
if __name__ == '__main__':
creds = authenticate()
if not creds:
sys.exit("Authentication failed. Exiting.")
create_keep_note(creds)
print("Completed processing.") #Added to confirm completion
</code></pre>
<p>but I'm getting strange error:</p>
<blockquote>
<p>Error creating Keep note: <HttpError 400 when requesting <a href="https://keep.googleapis.com/v1/notes?alt=json" rel="nofollow noreferrer">https://keep.googleapis.com/v1/notes?alt=json</a> returned "Request contains an invalid argument.". Details: "Request contains an invalid argument."></p>
</blockquote>
<p>I'm following <a href="https://developers.google.com/workspace/keep/api/reference/rest/v1/notes/create" rel="nofollow noreferrer">Google Keep API</a> so not sure what's wrong with my request.</p>
|
<python><google-keep-api>
|
2025-04-23 16:57:25
| 1
| 769
|
Rami Sedhom
|
79,589,019
| 395,857
|
How to avoid auto-scroll when add a lot of text in a `TextArea` in Gradio?
|
<p>In one adds a lot of text in a <code>TextArea</code> in Gradio, it auto-scrolls to the end of the text. Example:</p>
<pre><code>import gradio as gr
def show_text():
return '\n'.join([f'test{i}' for i in range(50)]) # Simulated long text
with gr.Blocks() as demo:
text_area = gr.TextArea(label="Output", lines=10, max_lines=20)
btn = gr.Button("Show Text")
btn.click(fn=show_text, outputs=text_area)
demo.launch()
</code></pre>
<p>It auto-scrolls to the end of the text when I click on "Show Text":</p>
<p><a href="https://i.sstatic.net/82HX8rAT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/82HX8rAT.png" alt="enter image description here" /></a></p>
<p>Instead, I want:</p>
<p><a href="https://i.sstatic.net/zu7gLU5n.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zu7gLU5n.png" alt="enter image description here" /></a></p>
<p>How to avoid auto-scroll when add a lot of text in a <code>TextArea</code> in Gradio?</p>
|
<python><gradio>
|
2025-04-23 15:55:38
| 1
| 84,585
|
Franck Dernoncourt
|
79,589,015
| 4,662,490
|
How to use a xmlrpc.client.ServerProxy object in package function?
|
<p>I am working with the Odoo XML-RPC API and <strong>I want to reuse a connection</strong> (ServerProxy object) across multiple functions in my package.</p>
<p><strong>Current Setup</strong></p>
<p>I establish the connection like this:</p>
<pre><code>import xmlrpc.client
import pprint
# Demo connection to Odoo
info = xmlrpc.client.ServerProxy('https://demo.odoo.com/start').start()
url, db, username, password = info['host'], info['database'], info['user'], info['password']
print(f"Connecting to Odoo at {url}, database: {db}, user: {username}")
# Real data for actual operations
from my_package.core.config import odoo_url, odoo_db, odoo_username, odoo_password
common = xmlrpc.client.ServerProxy(f'{odoo_url}/xmlrpc/2/common')
uid = common.authenticate(odoo_db, odoo_username, odoo_password, {})
models = xmlrpc.client.ServerProxy(f'{odoo_url}/xmlrpc/2/object')
pprint.pprint(common.version())
print('uid =', uid)
print('models =', models)
</code></pre>
<p><strong>output</strong>:</p>
<pre><code>Connecting to Odoo at https://demo4.odoo.com, database: demo_saas-182_6bcd3971f542_1745421877, user: admin
{'protocol_version': 1,
'server_serie': '17.0',
'server_version': '17.0+e',
'server_version_info': [17, 0, 0, 'final', 0, 'e']}
uid = 21
models = <ServerProxy for dtsc.odoo.com/xmlrpc/2/object>
</code></pre>
<p><strong>What I Want to Achieve</strong></p>
<p>I want to connect to the Odoo server once and reuse the models object in different functions across my package, so I avoid reconnecting every time.</p>
<p>Instead of doing this in each function:</p>
<pre><code># Reconnect every time (current approach)
common = xmlrpc.client.ServerProxy(f'{odoo_url}/xmlrpc/2/common')
uid = common.authenticate(odoo_db, odoo_username, odoo_password, {})
models = xmlrpc.client.ServerProxy(f'{odoo_url}/xmlrpc/2/object')
# Do something
</code></pre>
<p>I want to do something like this:</p>
<pre><code># my_package/core/my_file.py
def my_function(models):
# Use the existing ServerProxy object
information = do_something_with(models)
return information
...
# main.py
from my_package.core.my_file import my_function
update_information = my_function(models) # Pass the connected models object
</code></pre>
<p><strong>My Question</strong></p>
<p>How can I best structure my code to pass the ServerProxy object (or other related objects like uid) into my functions for reuse?</p>
<p>Should I pass multiple objects (models, uid, etc.) into each function, or is there a better way to encapsulate this connection logic (e.g., using a class or context manager)?</p>
<p>Would you use a <strong>class</strong> or a <strong>context manager</strong>?</p>
<p>Any best practices for this kind of setup with Odoo’s XML-RPC API would be appreciated!</p>
<p>thanks!</p>
|
<python><odoo><xml-rpc>
|
2025-04-23 15:53:24
| 1
| 423
|
Marco Di Gennaro
|
79,588,998
| 6,439,229
|
How to prevent error on shutdown with Logging Handler / QObject?
|
<p>In order to show logging messages in a PyQt GUI, I'm using a custom logging handler that sends the logRecord as a <code>pyqtSignal</code>.<br />
This handler inherits from both <code>QObject</code> and <code>logging.Handler</code>.</p>
<p>This works as it should but on shutdown there's this error:</p>
<blockquote>
<pre><code> File "C:\Program Files\Python313\Lib\logging\__init__.py", line 2242, in shutdown
if getattr(h, 'flushOnClose', True):
RuntimeError: wrapped C/C++ object of type Log2Qt has been deleted
</code></pre>
</blockquote>
<p>My interpretation is that logging tries to close the handler but because the handler is also a QObject, Qt has already deleted it.<br />
But when you connect the <code>aboutToQuit</code> signal to a function that removes the handler from the logger, the error still occurs.</p>
<p>Here's a MRE:</p>
<pre><code>import logging
from PyQt6.QtWidgets import QApplication, QWidget
from PyQt6.QtCore import QObject, pyqtSignal
class Log2Qt(QObject, logging.Handler):
log_forward = pyqtSignal(logging.LogRecord)
def emit(self, record):
self.log_forward.emit(record)
logger = logging.getLogger(__name__)
handler = Log2Qt()
logger.addHandler(handler)
def closing():
# handler.close()
logger.removeHandler(handler)
print(logger.handlers)
app = QApplication([])
app.aboutToQuit.connect(closing)
win = QWidget()
win.show()
app.exec()
</code></pre>
<p>The print from <code>closing()</code> shows that logger has no more handlers, so why does logging still try to close the handler when it's already removed?<br />
And how could you prevent the error from occuring?</p>
|
<python><pyqt><python-logging>
|
2025-04-23 15:43:43
| 1
| 1,016
|
mahkitah
|
79,588,983
| 12,415,855
|
Parse XML file using selenium and bs4?
|
<p>i try to parse a xml-file using the following code:</p>
<pre><code>import time
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
options = Options()
# options.add_argument('--headless=new')
options.add_argument("start-maximized")
options.add_argument('--log-level=3')
options.add_experimental_option("prefs", {"profile.default_content_setting_values.notifications": 1})
options.add_experimental_option("excludeSwitches", ["enable-automation"])
options.add_experimental_option('excludeSwitches', ['enable-logging'])
options.add_experimental_option('useAutomationExtension', False)
options.add_argument('--disable-blink-features=AutomationControlled')
srv=Service()
driver = webdriver.Chrome (service=srv, options=options)
# driver.minimize_window()
waitWD = WebDriverWait (driver, 10)
wLink = "https://projects.propublica.org/nonprofits/organizations/830370609"
driver.get(wLink)
driver.execute_script("arguments[0].click();", waitWD.until(EC.element_to_be_clickable((By.XPATH, '(//a[text()="XML"])[1]'))))
driver.switch_to.window(driver.window_handles[1])
time.sleep(3)
print(driver.current_url)
soup = BeautifulSoup (driver.page_source, 'lxml')
worker = soup.find("PhoneNum")
print(worker)
</code></pre>
<p>But as you can see in the result i am for exmaple not able to parse the element "PhoneNum"</p>
<pre><code>(selenium) C:\DEV\Fiverr2025\TRY\austibn>python test.py
https://pp-990-xml.s3.us-east-1.amazonaws.com/202403189349311780_public.xml?response-content-disposition=inline&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIA266MJEJYTM5WAG5Y%2F20250423%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20250423T152903Z&X-Amz-Expires=1800&X-Amz-SignedHeaders=host&X-Amz-Signature=9743a63b41a906fac65c397a2bba7208938ca5b865f1e5a33c4f711769c815a4
None
</code></pre>
<p>How can i parse the xml-file from this site?</p>
|
<python><xml><selenium-webdriver><beautifulsoup>
|
2025-04-23 15:32:38
| 3
| 1,515
|
Rapid1898
|
79,588,915
| 8,587,712
|
pandas column based on multiple values from other columns
|
<p>I have a dataframe</p>
<pre><code>df = pd.DataFrame(data={
'a':[1,2,3,4,1,2,3,4,5],
'b':[1,4,2,2,1,2,1,1,2],
'c':[1000, 10, 500, 100,100, 10, 500, 100, 10]
})
</code></pre>
<p>which looks like</p>
<pre><code> a b c
0 1 1 1000
1 2 4 10
2 3 2 500
3 4 2 100
4 1 1 100
5 2 2 10
6 3 1 500
7 4 1 100
8 5 2 10
</code></pre>
<p>I am trying to perform an operation on column c to create a new column d based on the values in columns a and b, in this case the sum of c for the unique pairs of a and b. For example, the first entry would be the sum of column c for the rows which a=1 and b=1 (in this case, 1000+100=1100). Next would be the sum of c for a=2 and b=4, etc. How can I do this without looping over the rows individually? I know that <code>groupby()</code> can do something similar but the actual function I am trying to apply is more complicated than just <code>sum()</code> and I need to keep the original DataFrame.</p>
|
<python><pandas>
|
2025-04-23 14:59:41
| 0
| 313
|
Nikko Cleri
|
79,588,882
| 6,563,305
|
Simple pythonic way to check if a list of dict is a subset of another list of dict?
|
<p>Is there a simple pythonic way to check if a list of dictionaries is a subset of another list of dictionaries? This can be done via a <code>for</code> loop by checking each item. I'm hoping there's a faster way with built-in methods.</p>
<p>i.e.</p>
<pre><code>maybe_subset = [
{'Key': 'apple', 'Value': '1234'},
{'Key': 'orange', 'Value': '2431'},
{'Key': 'banana', 'Value': '9999'}
]
maybe_superset = [
{'Key': 'orange', 'Value': '2431'},
{'Key': 'banana', 'Value': '9999'},
{'Key': 'creator', 'Value': 'JOHNSMITH'},
{'Key': 'apple', 'Value': '1234'}
]
def subset(a,b):
something
</code></pre>
<p>so running <code>subset(maybe_subset,maybe_superset)</code> should return true</p>
<p><code>issubset</code> doesn't work since it's a list of dict which is unhashable.</p>
<p>edit: I should clarify I am trying to minimize memory and time usage since this would be running on server-less and billed. This small example isn't an issue but if I'm running it on a large set of data I wish there was a faster way which is why I'm asking. The difference will amount to a material amount.</p>
|
<python>
|
2025-04-23 14:43:43
| 3
| 617
|
Kent Wong
|
79,588,678
| 17,580,381
|
Optimum selection mechanism when choosing relevant rows from a dataframe
|
<p>I have a large Excel spreadsheet. I'm only interested in certain columns. Furthermore, I'm only interested in rows where specific columns meet certain criteria.</p>
<p>The following works:</p>
<pre><code>import pandas as pd
import warnings
# this suppresses the openpyxl warning that we're seeing
warnings.filterwarnings("ignore", category=UserWarning, module="openpyxl")
# These are the columns we're interested in
COLUMNS = [
"A",
"B",
"C"
]
# the source file
XL = "source.xlsx"
# sheet name in the source file
SHEET = "Sheet1"
# the output file
OUTPUT = "target.xlsx"
# the sheet name to be used in the output file
OUTSHEET = "Sheet1"
# This loads the entire spreadsheet into a pandas dataframe
df = pd.read_excel(XL, sheet_name=SHEET, usecols=COLUMNS).dropna()
# this replaces the original dataframe with rows where A contains "FOO"
df = df[df["A"].str.contains(r"\bFOO\b", regex=True)]
# now isolate those rows where the B contains "BAR"
df = df[df["B"].str.contains(r"\bBAR\b", regex=True)]
# output to the new spreadsheet
df.to_excel(OUTPUT, sheet_name=OUTSHEET, index=False)
</code></pre>
<p>This works. However, I can't help thinking that there might be a better way to manage the selection criteria especially if / when they get more complex.</p>
<p>Or is it a case of "step-by-step" is good?</p>
|
<python><pandas><openpyxl>
|
2025-04-23 12:58:07
| 1
| 28,997
|
Ramrab
|
79,588,248
| 1,636,349
|
Using fork & exec in Python
|
<p>I wrote a little test program to play around with <code>fork()</code> and <code>exec()</code> in Python, and I got some puzzling results. The program just forks a new process to run whatever is given as the command-line parameters in <code>argv</code>. Here is my code:</p>
<pre><code>import sys
import os
print("Start: " + str(os.getpid()))
sys.stdout.flush()
if len(sys.argv) > 1:
pid = os.fork()
if pid == 0:
os.execvp(sys.argv[1], sys.argv[1:])
elif pid > 0:
print("Fork: " + str(pid))
sys.stdout.flush()
x = os.wait()
if x[0] != pid: # NEEDED
os.waitpid(pid,0) # NEEDED
print("End: " + str(x[0]))
else:
print("Fork failed", file=sys.stderr)
sys.exit(1)
else:
print("End")
sys.stdout.flush()
</code></pre>
<p>I run this with the following command:</p>
<pre><code>python test.py python test.py ps
</code></pre>
<p>In other words, fork a process which runs another copy of the test program which forks yet another process which runs <code>ps</code>. The output I get looks like this:</p>
<pre><code>Start: 999773
Fork: 999777
Start: 999777
Fork: 999781
PID TTY TIME CMD
999773 ? 00:00:00 python3
999777 ? 00:00:00 python3
999781 ? 00:00:00 ps
End: 999781
End: 999774
</code></pre>
<p>So, process 999773 creates process 999777 which creates process 999781, which runs <code>ps</code> and shows processes 999773, 999777 amd 999781. Process 999781 terminates, so the parent process (999777) prints a message saying that 999781 terminated and then exits. Process 999773 then prints a message saying that process 999774 terminated. Where did 999774 come from?</p>
<p>Then if I remove the two lines with the comment <code>NEEDED</code>, so that it waits for any process to terminate before terminating itself, I get the following output:</p>
<pre><code>Start: 1000822
Fork: 1000830
End: 1000824
Python path configuration:
PYTHONHOME = (not set)
PYTHONPATH = (not set)
program name = '/usr/bin/python3'
isolated = 0
environment = 1
user site = 1
import site = 1
sys._base_executable = '/usr/bin/python3'
sys.base_prefix = '/usr'
sys.base_exec_prefix = '/usr'
sys.executable = '/usr/bin/python3'
sys.prefix = '/usr'
sys.exec_prefix = '/usr'
sys.path = [
'/usr/lib/python38.zip',
'/usr/lib/python3.8',
'/usr/lib/python3.8/lib-dynload',
]
Fatal Python error: init_fs_encoding: failed to get the Python codec of the filesystem encoding
Python runtime state: core initialized
ModuleNotFoundError: No module named 'encodings'
Current thread 0x00007f27b84f7740 (most recent call first):
<no Python frame>
</code></pre>
<p>Can anyone explain what on earth is going on here?</p>
|
<python><fork>
|
2025-04-23 09:19:33
| 0
| 548
|
user1636349
|
79,588,208
| 1,785,448
|
Why does `strftime("%Y")` not yield a 4-digit year for dates < 1000 AD in Python's datetime module on Linux?
|
<p>I am puzzled by an inconsistency when calling <code>.strftime()</code> for dates which are pre-1000 AD, using Python's <code>datetime</code> module.</p>
<p>Take the following example:</p>
<pre class="lang-py prettyprint-override"><code>import datetime
old_date = datetime.date(year=33, month=3, day=28) # 28th March 33AD
old_date.isoformat()
>>> "0033-03-28" # Fine!
old_date.strftime("%Y-%m-%d")
>>> "33-03-28" # Woah - where did my leading zeros go?
# And even worse
datetime.datetime.strptime(old_date.strftime("%Y-%m-%d"), "%Y-%m-%d")
>>>
...
File "<input>", line 1, in <module>
File "/usr/lib/python3.12/_strptime.py", line 554, in _strptime_datetime
tt, fraction, gmtoff_fraction = _strptime(data_string, format)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/_strptime.py", line 333, in _strptime
raise ValueError("time data %r does not match format %r" %
ValueError: time data '33-03-28' does not match format '%Y-%m-%d'
</code></pre>
<p>The <a href="https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior" rel="noreferrer">documentation</a> shows examples of <code>%Y</code> yielding zero-padded years. Even using <code>%G</code>, which is documented to be an ISO-8601 4-digit year, is showing only two digits.</p>
<p>This caused a problem in an application where a user can enter a date, and if they type in an old date the exception above would arise when trying to convert a date-string back into a date.</p>
<p>Presumably there is something in my local configuration which is causing this, as this seems too obvious to be a bug in Python. I'm using Python 3.12 on Ubuntu 24.04.</p>
|
<python><python-datetime>
|
2025-04-23 08:54:05
| 2
| 730
|
Melipone
|
79,588,066
| 7,032,878
|
Can I access a SharePoint using Python, when I have username and password with MFA?
|
<p>Is there any mean to access a SharePoint using an interactive Python script? The conditions are the following:</p>
<ul>
<li>I have username/password for a user that can access the SharePoint online;</li>
<li>There is MFA enabled;</li>
<li>I can't register a new application on Azure, I have to stick only to the user.</li>
</ul>
<p>I had a look to the Office365-REST-Python-Client library, but it seems like it is impossibile to do this without using a client_id (= application registered on Azure) or disabling MFA?</p>
|
<python><sharepoint>
|
2025-04-23 07:29:33
| 0
| 627
|
espogian
|
79,587,895
| 10,470,463
|
Should a tkinter button always have a variable as a name?
|
<p>I can make a tkinter button like this:</p>
<pre><code>button1 = ttk.Button(root, text = "Button 1", command=lambda: button_click(button1))
</code></pre>
<p>or in a loop like this:</p>
<pre><code>tk.Button(root, text=f"Button {i}", command=lambda x=i: button_click(x)).pack()
</code></pre>
<p>But the second button does not have a variable name like button1. Therefore, I can't access it by name:</p>
<pre><code>def button_click(button):
button_text = button.cget('text')
ttk.Label(root, text = button_text).pack()
</code></pre>
<p>Or does the second button actually have a tkinter default name when created in a loop like this?</p>
|
<python><tkinter><tkinter-button>
|
2025-04-23 05:26:23
| 1
| 511
|
Pedroski
|
79,587,773
| 826,112
|
Python file behaviour different when run from different IDE's
|
<p>A colleague and I were reviewing some student submissions. He likes using IDLE, while I use PyCharm. The student developed their code in PyCharm.
A simplified example of the students work is:</p>
<pre><code>file = open('test_file.txt','w')
file.write('This is a test file.')
print('Completed')
exit()
</code></pre>
<p>The student has made an error in their file handling and created a situation where the file is left open after the code is completed.</p>
<p>When run from within PyCharm the file.write is completed and the file is updated as expected. When run from within IDLE the file.write appears not to be completed and the file is empty. When run from the command line (btw, we all use macbooks) the file.write is completed and the file has the line of text. One more clue is that when run within IDLE after 'Completed' is output, there is a system dialog that states, "Your program is still running! Do you want to kill it?". This warning does not appear when run from the command line or within PyCharm.</p>
<p>We are trying to understand the difference between these behaviours given that we believe they are all doing the same process of invoking the same interpreter.</p>
|
<python>
|
2025-04-23 03:51:09
| 1
| 536
|
Andrew H
|
79,587,751
| 2,402,098
|
MSGraph ImmutableId of a message changes when draft is sent
|
<p>I am trying to keep track of emails sent using the MSGraph API. If an email is sent using the <a href="https://learn.microsoft.com/en-us/graph/api/message-reply?view=graph-rest-1.0&tabs=http" rel="nofollow noreferrer">Reply</a> endpoint, it understandably does not return the <code>messageId</code> property that I need in the response body. Searching around on the internet suggested that if I need to keep track of a message's <code>messageId</code>, that I should:</p>
<ul>
<li>use <a href="https://learn.microsoft.com/en-us/graph/outlook-immutable-id" rel="nofollow noreferrer">immutableIds</a></li>
<li>create a draft using the <a href="https://learn.microsoft.com/en-us/graph/api/message-createreply?view=graph-rest-1.0&tabs=http" rel="nofollow noreferrer">create draft to reply</a> endpoint</li>
<li>record the <code>immutableId</code> of the draft</li>
<li>send the draft using the <a href="https://learn.microsoft.com/en-us/graph/api/message-send?view=graph-rest-1.0&tabs=http" rel="nofollow noreferrer">send</a> endpoint</li>
<li>I should then be able to use the recorded <code>messageId</code> to retrieve the email message in question and do whatever I need to do</li>
</ul>
<p>Per the previously linked documentation for <code>immutableIds</code></p>
<blockquote>
<p>An item's immutable ID won't change so long as the item stays in the
same mailbox. That means that immutable ID will NOT change if the item
is moved to a different folder in the mailbox.</p>
</blockquote>
<p>My experience directly contradicts this quote. Unless I am misunderstanding how this all works? I was under the impression that creating a draft instantiates a <code>Message</code> object, and sending the draft essentially alters and moves that <code>Message</code> object to the <code>sentItems</code> folder, which should not change the <code>messageId</code>. However, I am observing that the <code>messageId</code> does in fact change after being sent.</p>
<p>Is there a way to reliably get the <code>messageId</code> of a message after it has been sent?</p>
|
<python><microsoft-graph-api>
|
2025-04-23 03:11:04
| 0
| 342
|
DrS
|
79,587,488
| 1,164,295
|
Deal Design by Contract to prove a Python script with decorators
|
<p>I have a dockerized <a href="https://deal.readthedocs.io/index.html" rel="nofollow noreferrer">Deal</a> that I can successfully run using</p>
<pre class="lang-bash prettyprint-override"><code>docker run -it -v `pwd`:/scratch -w /scratch --rm deal python3 -m deal prove deal_demo_cat.py
</code></pre>
<p>which produces</p>
<pre><code>deal_demo_cat.py
cat
proved! post-condition, post-condition, post-condition
</code></pre>
<p>The code (copied from <a href="https://deal.readthedocs.io/basic/motivation.html" rel="nofollow noreferrer">https://deal.readthedocs.io/basic/motivation.html</a>) in <code>deal_demo_cat.py</code> is</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3
import deal
@deal.ensure(lambda left, right, result: result.startswith(left))
@deal.ensure(lambda left, right, result: result.endswith(right))
@deal.ensure(lambda left, right, result: len(result) == len(left) + len(right))
def cat(left: str, right: str) -> str:
return left + right
</code></pre>
<p>That works. However, when I write my function in <code>add_one.py</code></p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3
import deal
@deal.pre(lambda number: number<1E100) # Precondition must be true before the function is executed.
@deal.pre(lambda number: number>-1E100)
@deal.ensure(lambda number, result: result==number+1)
def add_one_to_arg(number: int | float) -> int | float:
"""
do the addition
"""
return number + 1
</code></pre>
<p>and then run</p>
<pre class="lang-bash prettyprint-override"><code>docker run -it -v `pwd`:/scratch -w /scratch --rm deal python3 -m deal prove add_one.py
</code></pre>
<p>I get no output to the terminal. My question is why I am not getting any output. Is something missing from my script?</p>
<p>For reproducibility, here's the Dockerfile used to containerize Deal:</p>
<pre class="lang-bash prettyprint-override"><code>FROM phusion/baseimage:jammy-1.0.2@sha256:1584de70d2f34df8e2e21d2f59aa7b5ee75f3fd5e26c4f13155137b2d5478745
# Use baseimage-docker's init system
CMD ["/sbin/my_init"]
# TODO: pin the apt package versions
# Update and install packages
RUN apt update && apt -y upgrade && apt -y install \
python3 \
python3-pip
WORKDIR /opt/
RUN python3 -m pip install --user 'deal[all]'
# RUN python3 -m pip install deal-solver
RUN python3 -m pip install pytest
# https://github.com/timothycrosley/hypothesis-auto
# https://timothycrosley.github.io/hypothesis-auto/
RUN python3 -m pip install hypothesis-auto
RUN python3 -m pip install mypy
# as of April 2025 typeguard 4.2.2 is available, but the `CallMemo` feature isn't available after 4.0
RUN python3 -m pip install typeguard==3.0.2
</code></pre>
<p>and then I use</p>
<pre class="lang-bash prettyprint-override"><code>docker build -f Dockerfile -t deal .
</code></pre>
|
<python><design-by-contract>
|
2025-04-22 21:42:26
| 0
| 631
|
Ben
|
79,587,464
| 10,441,038
|
Fastest way to convert results from tuple of tuples to 2D numpy.array
|
<p>I'm training my AI model with a huge data set, so that it's impossible to preload all the data into memory at the beginning. I'm currently using psycopg2 to load data from a Postgresql DB during training.</p>
<p>I need to convert the sample data into numpy.ndarry, while psycopg2 returns data in a tuple of tuples, where every inner tuples carry a row of data, and the outter tuple carries all.</p>
<p>This piece of codes(loading data) is the hotest spot of entire training processing, so I want to make it as fast as possible.</p>
<p>My current codes like this:</p>
<pre><code>rows = cursor.fetchall() # rows is a tuple of tuples
features = [np.array( r, dtype=np.float32 ) for r in rows]
features = np.stack( features, axis=0 ) # final shape of output is (128row, 78column)
</code></pre>
<p>I'm wondering wether or not there is a faster way to convert a tuple of tuples into a 2D numpy array?</p>
<p>Thanks!</p>
|
<python><postgresql><numpy><tuples><psycopg2>
|
2025-04-22 21:27:14
| 2
| 2,165
|
Leon
|
79,587,407
| 984,532
|
Read data from sheet1 and output filtered data on sheet2
|
<p>Is it possible?
Or it seems that each sheet is a separate environment?</p>
<p>CONTEXT:
A clean way to read 200 rows of data (and 30+ columns) is using something like</p>
<p><code>df=xl("A:BS", headers=True)</code></p>
<p>So a user wants a filtered view of my data on sheet2.
e.g., <code>df[df['project'] =='bench']</code></p>
<p>(and despite that one can use Excel filter but prefers not to.</p>
<p>E.g., rewrite in python-in-excel R markdown logic that parses input from excel as html page with 30+ analyses and TOC and graphs and headings)</p>
|
<python><excel>
|
2025-04-22 20:39:26
| 1
| 12,034
|
userJT
|
79,587,376
| 2,276,054
|
How to pass Python interpreter parameters -O/-OO to a console-script? ([project.scripts])
|
<p>In <code>pyproject.toml</code>, I have the following section:</p>
<pre><code>[project.scripts]
work = "mypackage.worker_app:main"
</code></pre>
<p>This means that after activating virtual environment, I can simply type <code>work</code>, and this will execute method <code>main()</code> from class <code>worker_app</code> from module <code>mypackage</code>.</p>
<p>However, what about Python interpreter parameters related to optimization, such as <code>-O</code> (remove asserts) and <code>-OO</code> (remove docstrings)? Can I somehow pass them when running <code>work</code>? Or define them in <code>pyproject.toml</code>? Or do console-scripts always run with with full debug?</p>
|
<python><pyproject.toml>
|
2025-04-22 20:11:04
| 0
| 681
|
Leszek Pachura
|
79,587,372
| 12,871,587
|
How to perform a nearest match join without reusing rows from the right DataFrame?
|
<p>I'm working with two Polars DataFrames and want to join them based on the nearest match of a numeric column, similar to how join_asof works with strategy="nearest". However, I’d like to ensure that each row from the right DataFrame is used at most once, meaning once it's matched to a row from the left, it cannot be reused.</p>
<p>A simplified example.. I have two dataframes:</p>
<pre><code>df1 = pl.DataFrame({
"letters": ["A1", "B1", "C1", "D1", "E1"],
"numbers": [100, 200, 220, 400, 500]
})
df2 = pl.DataFrame({
"letters": ["A2", "B2", "C2", "D2", "E2"],
"numbers": [101, 201, 301, 401, 501]
})
print(df1)
shape: (5, 2)
┌─────────┬─────────┐
│ letters ┆ numbers │
│ --- ┆ --- │
│ str ┆ i64 │
╞═════════╪═════════╡
│ A1 ┆ 100 │
│ B1 ┆ 200 │
│ C1 ┆ 220 │
│ D1 ┆ 400 │
│ E1 ┆ 500 │
└─────────┴─────────┘
print(df2)
shape: (5, 2)
┌─────────┬─────────┐
│ letters ┆ numbers │
│ --- ┆ --- │
│ str ┆ i64 │
╞═════════╪═════════╡
│ A2 ┆ 101 │
│ B2 ┆ 201 │
│ C2 ┆ 301 │
│ D2 ┆ 401 │
│ E2 ┆ 501 │
└─────────┴─────────┘
joined_df = (
df1
.join_asof(
other=df2,
on="numbers",
strategy="nearest",
coalesce=False,
)
)
print(joined_df)
shape: (5, 4)
┌─────────┬─────────┬───────────────┬───────────────┐
│ letters ┆ numbers ┆ letters_right ┆ numbers_right │
│ --- ┆ --- ┆ --- ┆ --- │
│ str ┆ i64 ┆ str ┆ i64 │
╞═════════╪═════════╪═══════════════╪═══════════════╡
│ A1 ┆ 100 ┆ A2 ┆ 101 │
│ B1 ┆ 200 ┆ B2 ┆ 201 │
│ C1 ┆ 220 ┆ B2 ┆ 201 │
│ D1 ┆ 400 ┆ D2 ┆ 401 │
│ E1 ┆ 500 ┆ E2 ┆ 501 │
└─────────┴─────────┴───────────────┴───────────────┘
</code></pre>
<p>As you can see, "B2" from df2 is matched twice (once to "B1" and once to "C1"). But I’d like each row from df2 to be used only once. If a row is already matched, I want to skip it and move to the next closest unmatched row. How can I do this efficiently?</p>
|
<python><python-polars>
|
2025-04-22 20:05:55
| 1
| 713
|
miroslaavi
|
79,587,363
| 3,124,150
|
Format np.float64 without leading digits
|
<p>I need to format <code>np.float64</code> floating values without leading digits before the dot, for example <code>-2.40366982307</code> as <code>-.240366982307E+01</code>, in python.<br />
This is to allow me to write in RINEX 3.03 the values with 4X, 4D19.12 formats. I have tried <code>f"{x:.12E}"</code> but it always has a leading 1 for numbers greater than 1. I have also tried <code>np.format_float_positional</code> and <code>np.format_float_scientific</code> but those don't have restrictions on leading digits.</p>
|
<python><numpy><string-formatting>
|
2025-04-22 19:56:16
| 1
| 947
|
EmmanuelMess
|
79,587,261
| 9,669,142
|
Python - stress test all 24 CPU cores
|
<p>I want to understand better how to work with multiprocessing in Python, and as a test, I wanted to set all cores in the CPU at 100%. First, I tested it with a CPU that has 8 cores, and all cores went to 100%, so that worked. Then I tested it with a CPU that has 24 cores, but for some reason it only sets 8 cores to 100% and I don't know why. There might be a system limitation or there is a limitation on the amount of parrallel processes, but I have no idea what it is exactly and how I can work around it.</p>
<p><strong>System information:</strong></p>
<ul>
<li>PC 1: i7-9700F, 8 cores, Windows 11 Home version 24H2</li>
<li>PC 2: i7-12850HX, 24 cores, Windows 10 Enterprise version 22H2</li>
</ul>
<p>Is there a way to work around the limitations and set 16 or all 24 cores to 100%?</p>
<p><strong>Code:</strong></p>
<pre class="lang-python prettyprint-override"><code>import multiprocessing
import time
def get_cpu_core_count():
return multiprocessing.cpu_count()
def stress_core():
while True:
pass
if __name__ == '__main__':
num_cores = int(input("Enter the number of CPU cores to stress: "))
processes = []
for _ in range(num_cores):
p = multiprocessing.Process(target=stress_core)
p.start()
processes.append(p)
try:
while True:
time.sleep(1)
except KeyboardInterrupt:
for p in processes:
p.terminate()
p.join()
</code></pre>
<p><strong>EDIT:</strong></p>
<p>image of the 24 cores with only 8 at 100%</p>
<p><a href="https://i.sstatic.net/4aejt6ZL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4aejt6ZL.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/ozeen4A4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ozeen4A4.png" alt="enter image description here" /></a></p>
|
<python><multiprocessing>
|
2025-04-22 18:56:04
| 0
| 567
|
Fish1996
|
79,587,218
| 547,231
|
Array slicing inside `jax.lax.while_loop yields error` "array boolean indices must be concrete"
|
<p>Please consider the following toy example, which mimics what I'm trying to achieve in my real-world application:</p>
<pre><code>import flax
import jax
def op(x):
return x - 1
@flax.struct.dataclass
class adaptive_state:
i: jax.numpy.ndarray
key: jax.random.PRNGKey
def loop(carry):
state, active_mask = carry
key, step_key = jax.random.split(state.key)
cond = jax.random.bernoulli(step_key, .5, state.i[active_mask].shape)
i_next = op(state.i[active_mask])
return state.replace(
i = state.i.at[active_mask].set(jax.numpy.where(cond, i_next, state.i[active_mask])),
key = key
), active_mask.at[active_mask].set(jax.numpy.where(cond, i_next > 0, active_mask[active_mask]))
init_state = adaptive_state(jax.numpy.array([47, 11, 815, 3], dtype = int), jax.random.PRNGKey(0))
state = jax.lax.while_loop(lambda carry: jax.numpy.any(carry[1]), loop, (init_state, jax.numpy.ones(init_state.i.shape[0], dtype = bool)))[0]
</code></pre>
<p>In my real-world application <code>op</code> is a computationally expensive task which is why I don't want to perform it on already "completely processed" (i.e. "inactive") elements. In they toy example the task is simply to reduce the elements by <code>1</code> until they are <code>0</code>. Also: I do <em>NOT</em> want to use something like <code>jax.vmap</code> to perform <code>op</code> on individual active elements separately, since my real-world <code>op</code> is already optimized for batch processing.</p>
<p>Is there any way to achieve what I'm looking for? The current code yields the error <code>IndexError("Array boolean indices must be concrete.")</code>. It works as expected when I replace <code>jax.lax.while_loop</code> by a dummy <code>while_loop</code> like</p>
<pre><code>def while_loop(cond_fun, body_fun, init_val):
val = init_val
while cond_fun(val):
val = body_fun(val)
return val
</code></pre>
|
<python><python-3.8><jax>
|
2025-04-22 18:26:56
| 2
| 18,343
|
0xbadf00d
|
79,587,213
| 8,741,781
|
Why is Gunicorn creating its socket with the wrong group even though the parent directory has the setgid bit?
|
<p>I'm running Gunicorn under <code>Supervisor</code> as a non-root user (<code>webapps</code>) and trying to create a Unix domain socket at <code>/run/webapps/gunicorn.sock</code>. I want the socket to be group-owned by <code>webapp_sockets</code> so that nginx (running as <code>www-data</code> and added to <code>webapp_sockets</code>) can access it.</p>
<p>I've set up the socket directory like this:</p>
<pre><code>sudo chown webapps:webapp_sockets /run/webapps
sudo chmod 2770 /run/webapps
</code></pre>
<p>The directory looks like this:</p>
<pre><code>drwxrws--- webapps webapp_sockets /run/webapps
</code></pre>
<p>Gunicorn is started under Supervisor as:</p>
<pre><code>user=webapps
</code></pre>
<p>In <code>gunicorn.conf.py</code>:</p>
<pre><code>bind = 'unix:/run/webapps/gunicorn.sock'
pidfile = '/run/webapps/gunicorn.pid'
user = 'webapps'
# umask = 0o007
</code></pre>
<p>After a clean start (and deletion of the old <code>.sock</code>), the socket is created like this:</p>
<pre><code>sudo ls -l /run/webapps
total 8
-rw-r--r-- 1 webapps webapp_sockets 7 Apr 22 11:32 celerybeat.pid
-rw-r--r-- 1 webapps webapp_sockets 7 Apr 22 13:46 gunicorn.pid
srwxrwx--- 1 webapps webapps 0 Apr 22 13:46 gunicorn.sock
</code></pre>
<p>Even though the directory is setgid (2770), the user webapps is a member of webapp_sockets and the .pid files created in the same dir do have the correct group.</p>
<p>I've also tried specifying the <code>group = 'webapp_sockets'</code> in <code>gunicorn.conf.py</code> however I get the following error:</p>
<pre><code>Exception in worker process
Traceback (most recent call last):
File "/webapps/proj/venv/lib/python3.10/site-packages/gunicorn/arbiter.py", line 589, in spawn_worker
worker.init_process()
File "/webapps/proj/venv/lib/python3.10/site-packages/gunicorn/workers/base.py", line 97, in init_process
util.set_owner_process(self.cfg.uid, self.cfg.gid,
File "/webapps/proj/venv/lib/python3.10/site-packages/gunicorn/util.py", line 146, in set_owner_process
os.setgid(gid)
PermissionError: [Errno 1] Operation not permitted
</code></pre>
<p>Currently using the latest release <code>gunicorn==23.0.0</code></p>
|
<python><gunicorn><supervisord>
|
2025-04-22 18:25:17
| 1
| 6,137
|
bdoubleu
|
79,587,044
| 4,907,639
|
Setting up a mixed-integer program in python
|
<p>I am trying to figure out if it is possible to configure a specific problem as a mixed-integer program. I think I am able to structure it as a continuous non-linear optimization problem, but would like to see if a MIP works better.</p>
<p>The basic idea is that there are several systems comprised of multiple elements. Over time, the elements degrade and certain actions/decisions can be applied at periodic intervals. Each decision choice has a different cost and impact on element condition. The goal is to define the optimal set of decisions for each element at periodic intervals. However, the decisions must not allow an overall system to degrade below a minimally acceptable level, or to cost more than a budgeted amount for each period.</p>
<p>Below is an example of how an example could be set up and how a single decision matrix would be evaluated:</p>
<pre><code># 1. define parameters like # of systes, decision options, decision periods, types etc.
# 2. define constraints
# 3. model how the elements degrade over time, and costs for decisions
# 4. model how elements will impact system metrics
# 5. structure as MIP <- HOW TO DO THIS??:
import random
from random import choices
import pandas as pd
# define parameters
#
decision_options = list(range(0,4)) # defines number of decision options
periods = list(range(0,6)) # defines number of decision periods
system_types = ['A', 'B', 'C'] # type used for condition constraints
# say we have 100 elements, each assigned to a system
element_ids = list(range(0, 100))
# create initialized element condition values
condition = choices(range(60,101), k=len(element_ids))
# assign types for each item
system_type = choices(system_types, k=len(element_ids))
# assign value for each group
value = random.sample(range(500, 20000), len(element_ids))
# create a dataframe with element, system type, condition, and element value information
df = pd.DataFrame({'Element_ID': element_ids, 'System_Type': system_type, 'Condition': condition, "Value": value})
df
# 2. define constraints
#
# create a dict where each type has a minimum allowable condition
vals= [60, 50, 40] # System = A has a limit of 60, System B has a limit of 50...
min_condition = dict(zip(system_types, vals))
# budget constraint for each period
max_budget = [200000] * len(periods)
# 3. model costs and degradation over time
# create a function that sets a element cost based on decision
def decision_cost(decision, value):
cost = 0
match decision:
case 0:
cost = 0 # do nothing
case 1:
cost = value / 10 # do a little
case 2:
cost = value / 5 # do a little more
case 3:
cost = value / 2 # do a lot
return(cost)
# create a function that sets a element condition based on decision
def decision_result(decision, condition):
match decision:
case 0:
condition = condition # no improvement
case 1:
condition = min(condition*1.1, 100) # a little improvement
case 2:
condition = min(condition*1.2, 100) # a little more improvement
case 3:
condition = min(condition*1.5, 100) # a lot of improvement
return(condition)
# model element degradation
# element loses 10% at each period
def degrade_element(condition):
new_condition = round(condition *0.9,0)
return(new_condition)
# 4. model how elements will impact system metrics
# these are to be compared to the min_condition constraints
#
# system condition is the weighted-and-summed condition of the constituent elements
def system_condition(df):
system_types = sorted(df['System_Type'].unique())
system_condition = [0] * len(system_types)
for i in range(len(system_types)):
system_data = df[df['System_Type'] == system_types[i]]
system_condition[i] = (system_data['Condition'] * system_data['Value']).sum() / system_data['Value'].sum()
return(system_condition)
def period_costs(new_df, periods):
column_names = [f'Period_{p}' for p in periods]
period_sums = []
for col in column_names:
period_sums.append(new_df[col].sum())
return(period_sums)
system_condition(df)
# create a sample decision matrix:
# row = element
# column = period
# cell value = decision
import numpy as np
# randomly initialize a decision matrix
decision_matrix = np.random.randint(0, len(decision_options), size=(len(element_ids), max(periods)+1))
# example evaluation of a decision matrix
# get cost and result dataframes for system/element results
system_cost_df = system_result_df = pd.DataFrame(index=range(len(system_types)), columns=range(0, len(periods)))
element_cost_df = element_result_df = pd.DataFrame(index=range(len(element_ids)), columns=range(0, len(periods)))
def evaluate_decision_matrix(df, decision_matrix, periods):
new_df = df
#new_df['Cost'] = 0
# create a column to collect the cost for each period
column_names = [f'Period_{p}' for p in periods]
new_df = new_df.assign(**{col: 0 for col in column_names})
# for each period
for i in range(0, len(periods)):
# for each element
for j in range(0, len(element_ids)):
# execute decision
decision = decision_matrix[j,i]
element_value = new_df.iloc[j]['Value']
#element_cost_df.loc[j,i] = decision_cost(decision, element_value)
#cost = decision_cost(decision, element_value)
new_df.loc[j, column_names[i]] = decision_cost(decision, element_value)
# impact condition with decision
current_condition = new_df.iloc[j]['Condition']
updated_condition = decision_result(decision, current_condition)
new_df.loc[j, 'Condition'] = updated_condition
# degrade element for next period
degraded_condition = degrade_element(new_df.iloc[j]['Condition'])
new_df.loc[j, 'Condition'] = degraded_condition
return(new_df)
# evaluate decision matrix
new_df = evaluate_decision_matrix(df, decision_matrix, periods)
# calculate system condition
system_condition_values = system_condition(new_df)
# check if system condition constraints are met
# returns true if system condition constraints are met
condition_check = [x > y for x, y in zip(system_condition_values, vals)]
print(condition_check)
# check if budget constaint is met
costs = period_costs(new_df, periods)
budget_check = [x < y for x, y in zip(costs, max_budget)]
print(budget_check)
</code></pre>
<p>As an example, the <code>print(condition_check)</code> command displays <code>[True, True, True]</code> indicating the system condition constraints are satisfied. But <code>print(budget_check)</code> shows <code>[True, False, False, False, True, False]</code> indicated 4 of the period budget constraints have been broken.</p>
<p>Is it possible to structure this as an MIP in python and, if so, how would that be done?</p>
|
<python><optimization><mixed-integer-programming>
|
2025-04-22 16:41:43
| 1
| 2,109
|
coolhand
|
79,586,806
| 6,282,576
|
Using pytest and mongoengine, data is created in the main database instead of a test one
|
<p>I've installed these packages:</p>
<pre><code>python -m pip install pytest pytest-django
</code></pre>
<p>And created a fixture:</p>
<pre class="lang-py prettyprint-override"><code># core/services/tests/fixtures/checkout.py
import pytest
from bson import ObjectId
from datetime import datetime
from core.models.src.checkout import Checkout
@pytest.fixture(scope="session")
def checkout(mongo_db):
checkout = Checkout(
user_id=59,
amount=35_641,
)
checkout.save()
return checkout
</code></pre>
<p>and imported it in the <code>conftest.py</code> in the same directory:</p>
<pre class="lang-py prettyprint-override"><code># core/service/tests/conftest.py
from core.service.tests.fixtures.checkout import *
</code></pre>
<p>Here's how I connect to the test database:</p>
<pre class="lang-py prettyprint-override"><code># conftest.py
import pytest
from mongoengine import connect, disconnect, connection
@pytest.fixture(scope="session", autouse=True)
def mongo_db():
connect(
db="db",
name="testdb",
alias="test_db",
host="mongodb://localhost:27017/",
serverSelectionTimeoutMS=5000,
)
connection._connections.clear()
yield
disconnect()
</code></pre>
<p>And this is my actual test:</p>
<pre class="lang-py prettyprint-override"><code>import json
import pytest
from core.service.checkout import a_function
def test_a_function(checkout):
assert checkout.value is False
response = a_function(id=checkout.id, value=True)
assert response.status_code == 200
response_data = json.loads(response.content.decode("UTF-8"))
assert response_data.get("success", None) is True
checkout.reload()
assert checkout.value is True
</code></pre>
<p>But every time I run <code>pytest</code>, a new record is created in the main database. How can I fix this to use a test database?</p>
|
<python><django><mongodb><pytest><mongoengine>
|
2025-04-22 15:12:32
| 0
| 4,313
|
Amir Shabani
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.