| QuestionId (int64) | UserId (int64) | QuestionTitle (string) | QuestionBody (string) | Tags (string) | CreationDate (string date) | AnswerCount (int64) | UserExpertiseLevel (int64) | UserDisplayName (string) |
|---|---|---|---|---|---|---|---|---|
79,434,549
| 1,940,534
|
trying to pull a value out of xml using xml.etree.ElementTree
|
<p>I have the following xml:</p>
<pre><code><multi-routing-engine-results>
<multi-routing-engine-item>
<re-name>node0</re-name>
<source-resource-usage-pool-information>
<resource-usage-entry style="all-pat-pool">
<resource-usage-pool-name>pool_192_168_1_113</resource-usage-pool-name>
<resource-usage-total-pool-num>57</resource-usage-total-pool-num>
<resource-usage-port-ol-factor>1</resource-usage-port-ol-factor>
<resource-usage-peak-usage>21%</resource-usage-peak-usage>
<resource-usage-peak-date-time seconds="1736272550">2025-01-07 11:55:50 CST</resource-usage-peak-date-time>
<resource-usage-total-address>1</resource-usage-total-address>
<resource-usage-total-used>7741</resource-usage-total-used>
<resource-usage-total-avail>56771</resource-usage-total-avail>
<resource-usage-total-total>64512</resource-usage-total-total>
<resource-usage-total-usage>11%</resource-usage-total-usage>
</resource-usage-entry>
<resource-usage-entry style="all-pat-pool">
<resource-usage-pool-name>pool_192_168_1_114</resource-usage-pool-name>
<resource-usage-total-pool-num>57</resource-usage-total-pool-num>
<resource-usage-port-ol-factor>1</resource-usage-port-ol-factor>
<resource-usage-peak-usage>8%</resource-usage-peak-usage>
<resource-usage-peak-date-time seconds="1734024324">2024-12-12 11:25:24 CST</resource-usage-peak-date-time>
<resource-usage-total-address>1</resource-usage-total-address>
<resource-usage-total-used>2520</resource-usage-total-used>
<resource-usage-total-avail>61992</resource-usage-total-avail>
<resource-usage-total-total>64512</resource-usage-total-total>
<resource-usage-total-usage>3%</resource-usage-total-usage>
</resource-usage-entry>
<resource-usage-entry style="all-pat-pool">
<resource-usage-pool-name>gos_src_pool_207_195_114_101</resource-usage-pool-name>
<resource-usage-total-pool-num>57</resource-usage-total-pool-num>
<resource-usage-port-ol-factor>1</resource-usage-port-ol-factor>
<resource-usage-peak-usage>32%</resource-usage-peak-usage>
<resource-usage-peak-date-time seconds="1733419670">2024-12-05 11:27:50 CST</resource-usage-peak-date-time>
<resource-usage-total-address>1</resource-usage-total-address>
<resource-usage-total-used>15115</resource-usage-total-used>
<resource-usage-total-avail>49397</resource-usage-total-avail>
<resource-usage-total-total>64512</resource-usage-total-total>
<resource-usage-total-usage>23%</resource-usage-total-usage>
</resource-usage-entry>
<resource-usage-entry style="all-pat-pool">
<resource-usage-pool-name>gos_src_pool_207_195_114_103</resource-usage-pool-name>
<resource-usage-total-pool-num>57</resource-usage-total-pool-num>
<resource-usage-port-ol-factor>1</resource-usage-port-ol-factor>
<resource-usage-peak-usage>19%</resource-usage-peak-usage>
<resource-usage-peak-date-time seconds="1739283197">2025-02-11 08:13:17 CST</resource-usage-peak-date-time>
<resource-usage-total-address>1</resource-usage-total-address>
<resource-usage-total-used>7920</resource-usage-total-used>
<resource-usage-total-avail>56592</resource-usage-total-avail>
<resource-usage-total-total>64512</resource-usage-total-total>
<resource-usage-total-usage>12%</resource-usage-total-usage>
</resource-usage-entry>
</source-resource-usage-pool-information>
</multi-routing-engine-item>
</multi-routing-engine-results>
</code></pre>
<p>Then I tried to parse out all the pool_192_168_1_ values, but this fails and returns nothing. Any ideas?</p>
<pre><code>outputxml1 = etree.tostring(rpc_xml, encoding='unicode')
root = ET.fromstring(outputxml1)
x = 1
for item in root.findall('./multi-routing-engine-results/multi-routing-engine-item/re-name/source-resource-usage-pool-information/resource-usage-pool-name'):
    print(str(item))
    x = x + 1
</code></pre>
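<p>For reference, a minimal sketch of one way to pull the pool names out with ElementTree (assuming <code>root</code> is parsed as above; note that its root element is already <code>&lt;multi-routing-engine-results&gt;</code>, so the search path should not repeat that tag):</p>
<pre class="lang-py prettyprint-override"><code>import xml.etree.ElementTree as ET

root = ET.fromstring(outputxml1)  # root element is &lt;multi-routing-engine-results&gt;
pool_names = []
for entry in root.iter('resource-usage-entry'):
    name = entry.findtext('resource-usage-pool-name')
    if name and name.startswith('pool_192_168_1_'):
        pool_names.append(name)
print(pool_names)
</code></pre>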
|
<python><xml><elementtree>
|
2025-02-12 21:49:49
| 2
| 1,217
|
robm
|
79,434,486
| 1,324,631
|
How to apply regular expressions to practical non-text state machine logic?
|
<p>I'm trying to implement some logic within a system of microcontroller IRQs and event callbacks, that fundamentally require the use of a complex state machine in order to behave correctly.</p>
<p>Regular expressions, as a concept, are a method of compactly and efficiently representing and reasoning about finite state machines of all sorts. They are sometimes misused and when they are they make code harder to reason about rather than easier -- when possible, it's usually better to factor out semantically-meaningful control variables and/or encode the state into the program's control flow -- but regexes are a ubiquitous tool in the programmer's toolbox for a reason.</p>
<p>Unfortunately, non-text use cases are not well supported by most regular expression libraries. It's possible to write hacks that project the sequence of events <em>to</em> a text string and use text-regex tools to process it, but it's always ugly:</p>
<pre class="lang-py prettyprint-override"><code>import io
import re
from machine import Pin

a = Pin("GPIO1", mode=Pin.IN)
b = Pin("GPIO2", mode=Pin.IN)
x = Pin("GPIO3", mode=Pin.OUT)
y = Pin("GPIO4", mode=Pin.OUT)

inputs = {id(p): s for p, s in [
    (a, "a"),
    (b, "b"),
]}
outputs = [(p, re.compile(s)) for p, s in [
    (x, r".*A(a.*){3,}"),
    (y, r".*(A[^a]*?B|B[^b]*?A)[^ab]*"),
]]

history = io.StringIO()

def isr(pin):
    name = inputs[id(pin)]
    if pin.value():
        name = name.upper()
    history.write(name)
    for out, out_re in outputs:
        out.value(bool(out_re.fullmatch(history.getvalue())))

a.irq(isr, trigger=Pin.IRQ_RISING | Pin.IRQ_FALLING)
b.irq(isr, trigger=Pin.IRQ_RISING | Pin.IRQ_FALLING)
</code></pre>
<p>(Of course, you'd never use this for counting three pulses or doing a logical AND like I'm demonstrating here -- these are just toy examples to demonstrate the idea.)</p>
<p>Among other problems, this approach:</p>
<ul>
<li>requires an arbitrarily-growing memory buffer to append to,</li>
<li>has no latency upper bound (completely inappropriate for an IRQ),</li>
<li>might require allocating new memory (often not even possible to do safely in an IRQ),</li>
<li>and requires some sort of kludge to define text encodings for each event.</li>
</ul>
<p>This just gets more and more awkward as you try to implement things like labeled states or asynchronous delay transitions. There <em>are</em> further hacks, massaging exactly how you construct the text string representing the system's history, but they are awful in both senses of the word.</p>
<p><strong>What methods or existing libraries are there that <em>do</em> serve this type of non-text application for regular expressions?</strong></p>
<hr />
<p>Internally, regular expressions are usually compiled into some sort of state machine in order to evaluate them efficiently. There's no theoretical reason that the same compiled state machine inside e.g. a <code>re.Pattern</code> object couldn't be manually initialized and fed a single symbol at a time to step it from state to state and query its state for output.</p>
<p>Unfortunately, most regular expression libraries keep their state machines as an internal implementation detail and don't provide any kind of API for interactive access; and given the performance requirements of text processing, in Python this isn't something you can reach by touching a few "private" underscore members -- you'd need to go through the C API to get at the underlying struct.</p>
<p><strong>How can I compile a regular expression into an interactive state machine object rather than just a black box?</strong></p>
<p>If there are libraries that do this, I'm primarily interested in either pure Python, or C code that I could easily write a MicroPython binding for. Libraries in other environments would still be useful to know about, though -- if I do need to implement this myself, they'd still be a good source of insights or design decisions worth borrowing.</p>
<hr />
<p>For the moment, I expect I'll have to write my own library code, with my own regex processing algorithm implementations, and probably a distinct regular expression syntax (though ideally, still with a spelling reminiscent of Perl-style text regexes).</p>
<p>Given the need for each state machine step to be constant-time (in at least the O(1) sense, but ideally also in the absolute sense of defending from timing side channel attacks), I'm leaning towards the McNaughton–Yamada–Thompson algorithm for evaluating a regex as a nondeterministic finite automata, with state activations encoded as a bit vector. If the required epsilon closures are all fully precomputed, any transition can be fully evaluated with a fixed number of bitwise operations (i.e. it's just a GF(2) matrix multiply using the appropriate transition matrix).</p>
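<p>For concreteness, a minimal sketch of that representation with a tiny hand-built transition table (the symbols, masks, and table here are made-up examples, and a plain loop over state bits stands in for the precomputed bitwise/GF(2) step described above):</p>
<pre class="lang-py prettyprint-override"><code># States 0..2; TRANSITIONS[symbol][i] is the bitmask of successor states
# reachable from state i on that symbol (epsilon closures assumed folded in).
TRANSITIONS = {
    "A": [0b010, 0b010, 0b000],   # on "A": state 0 -> {1}, state 1 -> {1}
    "B": [0b000, 0b100, 0b100],   # on "B": state 1 -> {2}, state 2 -> {2}
}
START = 0b001       # start with only state 0 active
ACCEPTING = 0b100   # accept whenever state 2 is active

def step(active: int, symbol: str) -> int:
    """Advance every active state by one event symbol (no allocation, bounded work)."""
    nxt = 0
    for i, successors in enumerate(TRANSITIONS[symbol]):
        if active & (1 << i):
            nxt |= successors
    return nxt

state = START
for event in ["A", "B", "B"]:
    state = step(state, event)
print(bool(state & ACCEPTING))  # True: an "A" followed by "B"s reaches the accepting state
</code></pre>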
<p>This approach seems to have an advantage in the way each state bit has a clear association with a particular index of the regular expression. With this, one can imagine adding the ability to label these within the regex (i.e. similar in spirit to labeling the "group capture" outputs produced by text-regex libraries), and get additional semantic use from a single expression.</p>
<p>There are also limitations to the approach, though, as the NFA state representation is quite sparse -- it needs O(n) bits of storage for an n-length regex. Allocating timer resources for time-delay transitions in that representation wouldn't be especially simple, nor would resolving conflicting delays when compiling the NFA to its power-set DFA and running it that way.</p>
<p>There are other algorithms too, e.g. Aho-Sethi-Ullman or Brzozowski derivatives, but it's difficult to evaluate their practical trade-offs here without investing the engineering effort to actually implement them all. Most descriptions are either "theory of computation" works that neglect pragmatic details, or focus on the text-processing application in a way that makes their conclusions useless here.</p>
<p><strong>What other algorithms should I consider here for evaluating non-text regular expressions? What considerations should I examine for picking these algorithms?</strong></p>
|
<python><c><regex><interrupt><computation-theory>
|
2025-02-12 21:24:13
| 0
| 4,212
|
AJMansfield
|
79,434,379
| 2,978,125
|
How to distribute multiple variations of a package on PyPI?
|
<p>I have a Python package with two variations: a CPU-only build and a GPU-enabled build. I would like to distribute both versions on PyPI, where users could maybe do something like the following (it doesn't have to be this exact syntax; as long as it's reasonably easy for users to work with, it's OK):</p>
<p><code>pip install mypkg-cpu</code> for the CPU-only build,</p>
<p><code>pip install mypkg</code> for the GPU-enabled build.</p>
<p>Users must be able to have the same <code>import</code> statements in Python regardless of which version they have downloaded.</p>
<p>I've thought about maybe using versioning for this purpose. At least in the past, PyTorch seemed to take this approach: <code>pip install torch==1.9.0+cpu</code>. However, I'm worried about a case where another package starts depending on my package, and in their requirements, they have something like <code>mypackage>=1.23</code> -- would that dependency be satisfied by <code>mypackage==1.23+cpu</code>?</p>
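<p>One way to sanity-check that last question locally (a small sketch using the <code>packaging</code> library, which implements PEP 440 version matching; the version numbers are just the example from above):</p>
<pre class="lang-py prettyprint-override"><code>from packaging.specifiers import SpecifierSet
from packaging.version import Version

spec = SpecifierSet(">=1.23")
print(spec.contains(Version("1.23")))      # plain release
print(spec.contains(Version("1.23+cpu")))  # release with a local version label
</code></pre>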
|
<python><pip><pypi>
|
2025-02-12 20:32:55
| 0
| 484
|
user2978125
|
79,434,251
| 25,625,672
|
Split zip-nonzip submodules within one cx_freeze package
|
<p>I am (basically successfully) using cx_freeze to package a Python application that requires Pandas. Pandas is the only package I need that cannot go into cx_freeze's library zip. Specifically, I need to exclude this submodule:</p>
<pre class="lang-ini prettyprint-override"><code>[tool.cxfreeze.build_exe]
packages = ["pandas"]
zip_exclude_packages = ["pandas.io.formats.templates"]
zip_include_packages = ["*"]
</code></pre>
<p>The reason is that Pandas has run-time dependencies on <code>.tpl</code> templates on the filesystem. Fine.</p>
<p>The biggest problem currently is that cx_freeze seems to <strong>ignore</strong> that I've asked for only a single submodule to be excluded from the zip, and instead keeps Pandas out of the zip entirely. This results in a much larger distributable, since none of the pandas package lands in <code>library.zip</code>.</p>
<p>I have two closely-related questions:</p>
<ul>
<li>How can I tell cx_freeze to leave <code>lib/pandas/io/formats/templates</code> on the filesystem but e.g. move <code>/lib/pandas/core</code> to the zip?</li>
<li>How can I tell cx_freeze to move all of the other Pandas dependencies to the zip? These include <code>lib/pandas.libs</code> and a long list of <code>lib/numpy...cp312-win_amd64.pyd</code>.</li>
</ul>
|
<python><cx-freeze><software-distribution>
|
2025-02-12 19:35:51
| 0
| 601
|
avigt
|
79,434,242
| 5,921,344
|
Merging Two DataFrames with Overlapping Components While Preserving Original Order
|
<p>I have two dataframes like below. Some of the <code>area1_name</code> and <code>area2_name</code> overlap, and I'm trying to combine the two area names into one long list.</p>
<pre><code>df1 = pd.DataFrame({'area1_index': [0,1,2,3,4,5], 'area1_name': ['AL','AK','AZ','AR','CA','CO']})
df2 = pd.DataFrame({'area2_index': [0,1,2,3,4,5,6], 'area2_name': ['MN','AL','CT','TX','AK','AR','CA']})
</code></pre>
<p>What I want eventually is this:</p>
<pre><code>from numpy import nan
final = pd.DataFrame({'area1_index': [nan,0,nan,nan,1,2,3,4,5], 'area1_name': [nan,'AL',nan,nan,'AK','AZ','AR','CA','CO'], 'area2_index': [0,1,2,3,4,nan,5,6,nan], 'area2_name':['MN','AL','CT','TX','AK',nan,'AR','CA',nan]})
</code></pre>
<p>My first thought was to identify the overlapping area names, join the overlapping dataframe and the missing dataframe, like below:</p>
<pre><code>df1_df2_overlap = pd.DataFrame({'area1_index': [0,1,3,4], 'area2_index': [1,4,5,6], 'area1_name': ['AL','AK','AR','CA']})
df2_missing = pd.DataFrame({'area2_index': [0,2,3], 'area2_name': ['MN','CT','TX']})
df3 = pd.merge(df1, df2, "outer")
df4 = pd.merge(df3, df2_missing, "outer")
</code></pre>
<p>But this sorts everything by <code>area2_index</code>. I tried adding <code>.sort_values(by=['seq2_index', 'seq1_index'])</code> too, but had the same result. How can I order this the way I want? Or is there a better way to combine <code>df1</code> and <code>df2</code> without having to identify the overlapping/missing components first?</p>
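<p>A sketch of one possible approach (assuming the desired order is df2's order, with df1-only rows slotted in right after their nearest preceding match): outer-merge on the names, then build a sort key by forward-filling <code>area2_index</code> along df1's order.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd

df1 = pd.DataFrame({'area1_index': [0,1,2,3,4,5], 'area1_name': ['AL','AK','AZ','AR','CA','CO']})
df2 = pd.DataFrame({'area2_index': [0,1,2,3,4,5,6], 'area2_name': ['MN','AL','CT','TX','AK','AR','CA']})

# line up overlapping names with an outer merge
merged = pd.merge(df1, df2, how='outer', left_on='area1_name', right_on='area2_name')

# order key: follow area2_index, giving df1-only rows the key of the preceding
# df1 row by forward-filling along df1's order
# (note: assumes df1's first row has a match in df2, which is true here)
merged = merged.sort_values('area1_index')
merged['order_key'] = merged['area2_index'].ffill()
final = (merged.sort_values(['order_key', 'area1_index'])
               .drop(columns='order_key')
               .reset_index(drop=True))
print(final)
</code></pre>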
|
<python><pandas>
|
2025-02-12 19:33:07
| 3
| 375
|
Jen
|
79,434,176
| 7,897,042
|
Type hinting of subclass
|
<p>Given a subclass that overrides attributes and methods, is there a convention in Python for type hinting explicit parameters (in the case where you forgo <code>*args</code> and/or <code>**kwargs</code>) and return types that are already type hinted in the base class? That is, is there an expectation that subclasses redundantly type hint overridden attributes and methods?</p>
<p>For example:</p>
<pre class="lang-py prettyprint-override"><code>class Base:
    def __init__(self, name: str) -> None:
        self.name = name

    def get_greeting(self) -> str:
        raise NotImplementedError


class Derived(Base):
    # is there a convention related to including the return type hint of
    # this overridden method since it's already included in the base class?
    def get_greeting(self) -> str:
        return f"My name is {self.name}."
</code></pre>
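<p>For what it's worth, one pattern (not an official convention) is to repeat the annotations and mark the method with <code>@override</code> (<code>typing.override</code> in Python 3.12+, or <code>typing_extensions</code> before that), since some type checkers, e.g. mypy by default, treat a completely unannotated <code>def</code> as dynamically typed rather than inheriting the base class's hints:</p>
<pre class="lang-py prettyprint-override"><code>from typing_extensions import override  # or: from typing import override (Python 3.12+)

class Derived(Base):
    @override
    def get_greeting(self) -> str:  # re-annotated so the body stays fully type checked
        return f"My name is {self.name}."
</code></pre>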
|
<python>
|
2025-02-12 19:02:41
| 1
| 384
|
mptrossbach
|
79,433,905
| 2,925,620
|
pyqtgraph: Show x axis labels in the form DD HH:MM:SS of data representing timedeltas with DateAxisItem
|
<p>I am trying to use pyqtgraph's <code>DateAxisItem</code> for relative times on the x axis, i.e., durations (which can be between a few hours and several days). I know I could simply use an ordinary <code>AxisItem</code> and show the durations of type <code>datetime.timedelta</code> in seconds (see the MWE below). However, I want the x axis labels to have a more convenient (adaptable) format "DD HH:MM:SS" (e.g. "00 23:31:29") instead of large floats (e.g. 6132689.0). How could I achieve that? Or is <code>AxisItem</code> more suitable for this? If so, how can I adapt the x labels to this format?</p>
<pre><code>from datetime import datetime
import pyqtgraph as pg
app = pg.mkQApp("DateAxisItem Example")
# Create a plot with a date-time axis
w = pg.PlotWidget(axisItems = {'bottom': pg.DateAxisItem()})
w.showGrid(x=True, y=True)
x_ref = datetime(2024, 1, 1, 12, 10, 22)
x_absolute = [datetime(2024, 2, 11, 9, 36, 50),
datetime(2024, 3, 12, 11, 41, 51),
datetime(2024, 4, 13, 16, 51, 51),
datetime(2024, 5, 14, 18, 58, 52)]
x_relative = [(value - x_ref).total_seconds() for value in x_absolute]
ydata = [10, 9, 12, 11]
# the x axis shows absolute time values instead of relative
w.plot(x_relative, ydata)
w.setWindowTitle('pyqtgraph example: DateAxisItem')
w.show()
if __name__ == '__main__':
    pg.exec()
</code></pre>
<p><strong>EDIT:</strong>
If I use <code>AxisItem</code>, I would need to fix overlapping labels, and the ticks in between are missing, which I am not sure is a good path:</p>
<pre><code>from datetime import datetime, timedelta
import pyqtgraph as pg
app = pg.mkQApp("DateAxisItem Example")
ax = pg.AxisItem(orientation="bottom")
w = pg.PlotWidget(axisItems = {'bottom': ax})
w.showGrid(x=True, y=True)
x_ref = datetime(2024, 1, 1, 12, 10, 22)
x_absolute = [datetime(2024, 2, 11, 9, 36, 50),
datetime(2024, 3, 12, 11, 41, 51),
datetime(2024, 4, 13, 16, 51, 51),
datetime(2024, 5, 14, 18, 58, 52)]
x_relative = [(value - x_ref).total_seconds() for value in x_absolute]
dx = [(D, str(timedelta(seconds=D)).replace(" days,", "d")) for D in x_relative]
ax.setTicks([dx, []])
ydata = [10, 9, 12, 11]
w.plot(x_relative, ydata)
w.setWindowTitle('pyqtgraph example: DateAxisItem')
w.show()
if __name__ == '__main__':
    pg.exec()
</code></pre>
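<p>One possible direction (a sketch, not tested against every pyqtgraph version): subclass <code>AxisItem</code> and override its <code>tickStrings</code> method, so the tick positions are still chosen automatically but each tick value (in seconds) is rendered as "DD HH:MM:SS":</p>
<pre class="lang-py prettyprint-override"><code>from datetime import timedelta
import pyqtgraph as pg

class TimedeltaAxisItem(pg.AxisItem):
    """Render tick values (seconds) as 'DD HH:MM:SS'."""
    def tickStrings(self, values, scale, spacing):
        labels = []
        for v in values:
            total = int(v)
            days, rem = divmod(total, 86400)
            hours, rem = divmod(rem, 3600)
            minutes, seconds = divmod(rem, 60)
            labels.append(f"{days:02d} {hours:02d}:{minutes:02d}:{seconds:02d}")
        return labels

# usage: w = pg.PlotWidget(axisItems={'bottom': TimedeltaAxisItem(orientation='bottom')})
</code></pre>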
|
<python><python-datetime><timedelta><pyqtgraph>
|
2025-02-12 17:08:32
| 0
| 357
|
emma
|
79,433,640
| 28,904,207
|
Python: Deep Q Learning agent doesn't seem to learn
|
<p>I am using <strong>gymnasium</strong> and <strong>torch</strong>. I initially created a custom environment by following the <a href="https://gymnasium.farama.org/introduction/create_custom_env/" rel="nofollow noreferrer">official gymnasium's guide</a>: it shows you how to build a NxN <code>Box</code> where the agent needs to reach a randomly placed target by moving right, up, left or down.
I modified it so that the agent needs to reach <strong>multiple targets</strong> in order to finish the episode. From the guide you can see that the state of the environment is a dictionary containing the agent's and the target's coordinates; I changed the space type from <code>Box</code> to <a href="https://gymnasium.farama.org/api/spaces/composite/#sequence" rel="nofollow noreferrer"><code>Sequence</code></a>, which allows me to specify an indefinite number of observations:</p>
<pre><code>self.observation_space = gym.spaces.Dict(
    {
        "agent": gym.spaces.Box(0, size-1, shape=(2,), dtype=int),
        "targets": gym.spaces.Sequence(
            gym.spaces.Box(0, size-1, shape=(2,), dtype=int)  # element type
        )
    })
</code></pre>
<p>I took the agent's code from <a href="https://github.com/vivianhylee/general-OpenAI-gym-agent/blob/master/DQNAgent.py" rel="nofollow noreferrer">here</a> and adapted it to my environment. Most importantly, I've replaced the <code>replay()</code> method with <code>optimize()</code>, which I took from <a href="https://github.com/johnnycode8/gym_solutions/blob/main/frozen_lake_dql.py" rel="nofollow noreferrer">this github repository</a> (shown in <a href="https://www.youtube.com/watch?v=EUrWGTCGzlA" rel="nofollow noreferrer">this video</a>). The code is originally for gymnasium's <strong><code>FrozenLake</code> environment</strong>, where there are holes you can fall into and a single reward to get to, but I've adapted it to my situation. Another important change is the DQN class (Deep Q Network): the original one has 2 layers, but mine has 3 with 128 neurons. Here is its definition:</p>
<pre><code>class DQN(nn.Module):
    def __init__(self, n_input, n_output):
        super(DQN, self).__init__()
        self.layer1 = nn.Linear(n_input, 128)
        self.layer2 = nn.Linear(128, 128)
        self.layer3 = nn.Linear(128, n_output)

    def forward(self, x):
        x = torch.relu(self.layer1(x))
        x = torch.relu(self.layer2(x))
        res = self.layer3(x)
        return res
</code></pre>
<p>However, the neural networks don't seem to learn, and the results are pretty inconsistent: when training <em>(general case)</em>, the steps taken in the first 300 episodes vary from 100 to 600, then it continues hitting <code>MAX_STEPS</code> (which I've set to 1000), and the final ~100 episodes are completed with 100 or so steps. To me, it doesn't look like it's learning at all. It all just seems random.</p>
<p>Shown below here is the code used in the <code>optimize()</code> function inside the agent, which is responsible for updating the neural networks.
I added some comments to help you <em>(and me)</em> figure out what the different sections do.
Also, here are some clarifications about the rest of the code in the program:</p>
<ul>
<li>The 2 neural networks are <strong>synchronized every 10 episodes</strong></li>
<li>The optimize function gets <strong>executed once per episode</strong></li>
<li>In my environment, unlike FrozenLake, <strong>there is no way the agent can lose: termination means victory,</strong> and truncation happens during the training loop after hitting <code>MAX_STEPS</code>; at that point it breaks the cycle and resets the environment</li>
<li>The state of my environment is represented by the agent location and the targets' locations
<ul>
<li>The tensor that is passed to the neural network is the combination of all the coordinates, e.g. with 2 targets: <code>[0, 0, 2, 3, 1, 6]</code> (the first 2 are [x,y] coords of the agent, the next 2 are the first target's,...)</li>
<li>When a target gets collected, its coordinates in the tensor are replaced by <code>-1</code> as I cannot change the number of input neurons</li>
</ul>
</li>
<li><em>(See code below)</em> when the for loop gets to an experience with <code>terminated</code> set to True, it knows <strong>the action that has been made led to victory,</strong> and it assigns a higher priority.</li>
</ul>
<p>Code:</p>
<pre><code>def optimize(self, ever_won):
    # checks if it has enough experience and has won at least once
    if (self.batch_size > len(self.memory)) or (not ever_won):
        return

    mini_batch = self.memory.sample(self.batch_size)  # takes a sample from its memory

    current_q_list = []
    target_q_list = []

    for state, reward, action, new_state, terminated in mini_batch:  # cycles the experiences
        # if the experience led to a victory
        if terminated:
            target = torch.FloatTensor([5])  # I assign a higher priority to the action that made it win
        # otherwise, the q value is calculated
        else:
            with torch.no_grad():
                target = torch.FloatTensor(
                    reward + self.gamma * self.target_dqn(self.state_to_tensor(new_state)).max()
                )

        # get the q values from policy_dqn (main network)
        current_q = self.policy_dqn(self.state_to_tensor(state))
        current_q_list.append(current_q)

        # q values from target_dqn
        target_q = self.target_dqn(self.state_to_tensor(state))
        # adjust the q value of the action
        target_q[action] = target
        target_q_list.append(target_q)

    # Compute loss for the whole minibatch
    loss = self.loss_fn(torch.stack(current_q_list), torch.stack(target_q_list))
    # saving loss for later plotting
    self.losses.append(loss.item())  # self.losses = list()

    # Optimize the model
    self.optimizer.zero_grad()
    loss.backward()
    self.optimizer.step()

    # epsilon decay
    self.epsilon = max(self.epsilon_min, self.epsilon * self.epsilon_decay)
</code></pre>
<p><strong>Edit 12 feb 2025:</strong> I've tried plotting loss using <code>matplotlib.pyplot</code> and the graph below is the result. <code>loss</code> gets saved in a list in <code>optimize()</code>'s for loop <em>(see the line I've added at the end of the code above)</em>. As you can see there are <strong>many spikes</strong>; I guess that's from when during the training the model keeps hitting <code>MAX_STEPS</code>.</p>
<p><a href="https://i.sstatic.net/GsCWHF2Q.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GsCWHF2Q.png" alt="screenshot of loss graph" /></a></p>
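<p>(For comparison with the fixed target of 5 above: the usual DQN bootstrap target keeps the actual reward and simply drops the bootstrap term at terminal states. A minimal sketch, with names of my own choosing rather than from the code above.)</p>
<pre class="lang-py prettyprint-override"><code>import torch

def dqn_target(reward: float, next_q_values: torch.Tensor, gamma: float, terminated: bool) -> torch.Tensor:
    """Standard DQN target: r + gamma * max_a Q(s', a), with no bootstrap at terminal states."""
    bootstrap = torch.tensor(0.0) if terminated else next_q_values.detach().max()
    return torch.as_tensor(reward, dtype=torch.float32) + gamma * bootstrap

print(dqn_target(1.0, torch.tensor([0.2, 0.7, 0.1]), gamma=0.9, terminated=False))  # tensor(1.6300)
</code></pre>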
|
<python><deep-learning><pytorch><agent><gymnasium>
|
2025-02-12 15:35:52
| 0
| 305
|
tommat208
|
79,433,460
| 865,169
|
Can I define a Pandas DataFrame GroupBy aggregation involving multiple columns?
|
<p>I have experimented and read the documentation for <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.DataFrameGroupBy.aggregate.html" rel="nofollow noreferrer">DataFrameGroupBy.aggregate</a>, but it is not clear to me whether, and how, I can define an aggregation that works on multiple columns. It seems to me that aggregations specified with keyword-argument assignment to new columns only work on single columns as input.</p>
<p>If I have a simple data frame like:</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({"value1": range(8), "value2": range(7, -1, -1), "Sample": [1, 2]*4, "Year": list(range(2023, 2027))*2})
</code></pre>
<p>I would like to be able to do something like:</p>
<pre class="lang-py prettyprint-override"><code>df.groupby(["Sample", "Year"]).agg(agg1 = lambda group: (group["value1"] * group["value2"]).mean())
</code></pre>
<p>where I was hoping my callable would be given each entire group as a sub-DataFrame, but this does not work.</p>
<p>Is it possible to do this kind of aggregation and how?</p>
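<p>A sketch of two workarounds that seem to fit this (neither uses the named-aggregation callable on multiple columns directly): precompute the combined column and use named aggregation on it, or fall back to <code>GroupBy.apply</code>, which does hand the callable each group as a sub-DataFrame.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd

df = pd.DataFrame({"value1": range(8), "value2": range(7, -1, -1),
                   "Sample": [1, 2] * 4, "Year": list(range(2023, 2027)) * 2})

# Option 1: precompute the product column, then use named aggregation on it.
out1 = (df.assign(prod=df["value1"] * df["value2"])
          .groupby(["Sample", "Year"], as_index=False)
          .agg(agg1=("prod", "mean")))

# Option 2: apply() passes each group as a DataFrame, at some performance cost.
out2 = df.groupby(["Sample", "Year"]).apply(
    lambda g: (g["value1"] * g["value2"]).mean()
).rename("agg1").reset_index()

print(out1)
</code></pre>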
|
<python><pandas><dataframe>
|
2025-02-12 14:36:53
| 1
| 1,372
|
Thomas Arildsen
|
79,433,458
| 6,681,932
|
lightgbm force variables to be in splits
|
<p>I'm trying to find a way to train a LightGBM model while forcing some features to be used in the splits, i.e. "to be in the feature importance", so that the predictions are affected by these variables.</p>
<p>Here is an example of the modeling code with a useless variable (it is constant), but the idea is that there could be a variable that is important from a business perspective yet does not end up among the features used.</p>
<pre><code>from lightgbm import LGBMRegressor
import pandas as pd
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Generate a random regression dataset
X, y = make_regression(n_samples=1000, n_features=10, noise=0.9, random_state=42)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]

# Convert to a DataFrame for readability
X = pd.DataFrame(X, columns=feature_names)

# Add a useless feature
X["useless_feature_1"] = 1

# Split the data into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Define the LGBMRegressor model
model = LGBMRegressor(
    objective="regression",
    metric="rmse",
    random_state=1,
    n_estimators=100
)

# Train the model
model.fit(X_train, y_train, eval_set=[(X_test, y_test)])

# Predictions and evaluation
y_pred = model.predict(X_test)
rmse = np.sqrt(mean_squared_error(y_test, y_pred))
print(f"Test RMSE: {rmse:.4f}")

# Feature importance
importance = pd.DataFrame({
    "feature": X.columns,
    "importance": model.feature_importances_
}).sort_values(by="importance", ascending=False)

print("\nFeature Importance:")
print(importance)
<p>Expected solution: there is probably some workaround, but the most interesting one would be one that uses some parameter of the regressor or its fit method.</p>
|
<python><machine-learning><lightgbm><ensemble-learning>
|
2025-02-12 14:36:26
| 1
| 478
|
PeCaDe
|
79,433,308
| 11,460,896
|
The thread running under multiprocessing.Process does not update its instance attributes
|
<p>I want to run several class instances in parallel and update the instance attributes using Redis. On each class instance, I run a thread in the background to listen to redis for changes. When I run <code>set value 9</code> in redis, the thread detects the change, but the self.value attribute is not updated.</p>
<pre class="lang-py prettyprint-override"><code>import time
import threading

import redis


class Foo:
    def __init__(self, name: str):
        self.name = name
        self.redis_client = redis.Redis(host="localhost", port=6379)
        self.value = 5
        self.update_thread = threading.Thread(target=self.update, daemon=True)
        self.update_thread.start()

    def update(self):
        while True:
            new_value = int(self.redis_client.get("value"))
            if new_value != self.value:
                print("new value detected {}".format(new_value))
                self.value = new_value
            time.sleep(2)

    def run(self):
        while True:
            print("{}: Value: {}".format(self.name, self.value))
            time.sleep(2)


if __name__ == "__main__":
    import multiprocessing

    foo1 = Foo(name="foo1")
    foo2 = Foo(name="foo2")

    process1 = multiprocessing.Process(target=foo1.run)
    process2 = multiprocessing.Process(target=foo2.run)

    process1.start()
    process2.start()
</code></pre>
<p>console output:</p>
<pre><code>foo1: Value: 5
foo2: Value: 5
new value detected 9
new value detected 9
foo1: Value: 5
foo2: Value: 5
</code></pre>
<p>When I research the subject, I come across the information that "each process has its own memory space". But since I do not share data between processes in this case, I cannot understand why the data in the object instances cannot be preserved.</p>
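<p>A minimal sketch of one fix (Redis replaced with a counter stand-in): the thread started in <code>__init__</code> only ever runs in the parent process, while <code>run()</code> executes on a copy of the object inside the child, so the child's <code>self.value</code> never changes. Starting the listener thread inside <code>run()</code>, i.e. inside the child process, makes both loops share the same object:</p>
<pre class="lang-py prettyprint-override"><code>import multiprocessing
import threading
import time

class Foo:
    def __init__(self, name: str):
        self.name = name
        self.value = 5
        # nothing started here: a thread created in the parent process
        # does not exist in the child created by multiprocessing.Process

    def _updater(self):
        while True:
            self.value += 1  # stand-in for reading the value from Redis
            time.sleep(2)

    def run(self):
        # started inside the child process, so it updates the same object
        # that the loop below is printing
        threading.Thread(target=self._updater, daemon=True).start()
        while True:
            print(f"{self.name}: Value: {self.value}")
            time.sleep(2)

if __name__ == "__main__":
    p = multiprocessing.Process(target=Foo(name="foo1").run)
    p.start()
    p.join()
</code></pre>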
|
<python><multithreading><memory-management><parallel-processing><multiprocessing>
|
2025-02-12 13:50:27
| 1
| 307
|
birdalugur
|
79,433,213
| 2,920,612
|
Display icon in Status Bar and SysTray
|
<p>I'm creating an app in Python that should run in the background, executing tasks over a period of time.</p>
<p>To help the user get the current version of the app and even to close it, I thought about placing an icon in SysTray (Windows/Linux) and in the Menu Bar (macOS).</p>
<p>I even managed to make it display, but none of the code after that is executed in the application.</p>
<p>What could be wrong?</p>
<p><strong>main.py</strong></p>
<pre><code>from libs.systray import show_tray


def main():
    """Main loop of the agent."""
    show_tray()
    while True:
        eventlog = getapplog()
        print(json.dumps(eventlog, indent=4))
        send_data(eventlog)
        time.sleep(INTERVAL)


if __name__ == "__main__":
    main()
</code></pre>
<p><strong>systray.py</strong></p>
<pre><code>import sys
import platform
import os


def show_tray():
    if platform.system() == "Darwin":
        run_macos_app()
    else:
        run_windows_app()


if platform.system() == "Darwin":
    from Cocoa import (
        NSApplication, NSStatusBar, NSMenu, NSMenuItem, NSObject, NSImage, NSApp
    )
    from PyObjCTools.AppHelper import runEventLoop

    class AppDelegate(NSObject):
        def applicationDidFinishLaunching_(self, notification):
            NSApp.setActivationPolicy_(2)  # NSApplicationActivationPolicyProhibited
            self.status_item = NSStatusBar.systemStatusBar().statusItemWithLength_(-1)
            icon_path = os.path.abspath("./assets/macos-icon.pdf")
            icon_image = NSImage.alloc().initWithContentsOfFile_(icon_path)
            if icon_image:
                self.status_item.button().setImage_(icon_image)
            else:
                print(f"Error: {icon_path}")
            self.menu = NSMenu()
            self.menu.addItem_(NSMenuItem.alloc().initWithTitle_action_keyEquivalent_("Quit", "terminate:", "q"))
            self.status_item.setMenu_(self.menu)

    def run_macos_app():
        app = NSApplication.sharedApplication()
        delegate = AppDelegate.alloc().init()
        app.setDelegate_(delegate)
        runEventLoop()

elif platform.system() != "Darwin":
    import pystray
    from PIL import Image
    from pystray import MenuItem as Item, Icon

    def exit_app(icon, item):
        icon.stop()

    def run_windows_app():
        icon_path = os.path.abspath(r"assets\windows-icon.png")
        if not os.path.exists(icon_path):
            print(f"Error: {icon_path}")
            sys.exit(1)
        image = Image.open(icon_path)
        menu = (Item('Quit', exit_app),)
        tray_icon = Icon("my_tray_app", image, menu=menu)
        tray_icon.run()
</code></pre>
<p>To test my app, I am temporarily running it via CLI.</p>
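<p>A sketch of one likely cause and fix (hedged, since I can only go by the code shown): both pystray's <code>icon.run()</code> and <code>runEventLoop()</code> block, so <code>show_tray()</code> never returns and the <code>while True</code> loop in <code>main()</code> is never reached. Keeping the tray event loop on the main thread and moving the periodic work into a background thread is one way around that:</p>
<pre class="lang-py prettyprint-override"><code>import threading
import time

from libs.systray import show_tray

def worker():
    while True:
        # ... collect and send the event log here ...
        time.sleep(60)

def main():
    threading.Thread(target=worker, daemon=True).start()
    show_tray()  # blocks inside the tray / menu bar event loop

if __name__ == "__main__":
    main()
</code></pre>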
|
<python>
|
2025-02-12 13:18:47
| 1
| 779
|
Tom
|
79,433,165
| 6,877,252
|
Gradient computation with PyTorch autograd with 1st and 2nd order derivatives does not work
|
<p>I am having a weird issue with PyTorch's autograd functionality when implementing a custom loss calculation on a second order differential equation. In the code below, predictions of the neural network are checked if they satisfy a second order differential equation. This works fine. However, when I want to calculate the gradient of the loss with respect to the predictions, I get an error indicating that there seems to be no connection between loss and <code>u</code> in the computational graph.</p>
<blockquote>
<p>RuntimeError: One of the differentiated Tensors appears to not have
been used in the graph. Set <code>allow_unused=True</code> if this is the desired
behavior.</p>
</blockquote>
<p>This doesn't make sense, because the loss directly depends on, and is calculated from, derivatives that originate from <code>u</code>. Differentiating the loss with respect to <code>u_xx</code> and <code>u_t</code> works; differentiating with respect to <code>u_x</code> does NOT. We verified that <code>.requires_grad</code> is set to <code>True</code> for all variables (<code>X</code>, <code>u</code>, <code>u_d</code>, <code>u_x</code>, <code>u_t</code>, <code>u_xx</code>).</p>
<p>Why does this happen, and how to fix this?</p>
<p><strong>Main code:</strong></p>
<pre><code># Ensure X requires gradients
X.requires_grad_(True)
# Get model predictions
u = self.pinn(X)
# Compute first-order gradients (∂u/∂x and ∂u/∂t)
u_d = torch.autograd.grad(
u,
X,
grad_outputs=torch.ones_like(u),
retain_graph=True,
create_graph=True, # Allow higher-order differentiation
)[0]
# Extract derivatives
u_x, u_t = u_d[:, 0], u_d[:, 1] # ∂u/∂x and ∂u/∂t
# Compute second-order derivative ∂²u/∂x²
u_xx = torch.autograd.grad(
u_x,
X,
grad_outputs=torch.ones_like(u_x),
retain_graph=True,
create_graph=True,
)[0][:, 0]
# Diffusion equation (∂u/∂t = κ * ∂²u/∂x²)
loss = nn.functional.mse_loss(u_t, self.kappa * u_xx)
## THIS FAILS
# Compute ∂loss/∂u
loss_u = torch.autograd.grad(
loss,
u,
grad_outputs=torch.ones_like(loss),
retain_graph=True,
create_graph=True,
)[0]
# Return error on diffusion equation
return loss
</code></pre>
<p><strong>Model:</strong></p>
<pre><code>==========================================================================================
Layer (type:depth-idx) Output Shape Param #
==========================================================================================
Sequential [1, 1] --
├─Linear: 1-1 [1, 50] 150
├─Tanh: 1-2 [1, 50] --
├─Linear: 1-3 [1, 50] 2,550
├─Tanh: 1-4 [1, 50] --
├─Linear: 1-5 [1, 50] 2,550
├─Tanh: 1-6 [1, 50] --
├─Linear: 1-7 [1, 50] 2,550
├─Tanh: 1-8 [1, 50] --
├─Linear: 1-9 [1, 1] 51
==========================================================================================
Total params: 7,851
Trainable params: 7,851
Non-trainable params: 0
Total mult-adds (M): 0.01
==========================================================================================
Input size (MB): 0.00
Forward/backward pass size (MB): 0.00
Params size (MB): 0.03
Estimated Total Size (MB): 0.03
==========================================================================================
</code></pre>
<p><strong>What we have already tried:</strong></p>
<p>Reverted to an older PyTorch version (tested on 2.5.0, and 1.13.1). Same issue.</p>
<p>Putting <code>.requires_grad_(True)</code> after every variable assignment. This did not help.</p>
<p>We also tried to replace the tensor slicing by multiplying with zero/one vectors, without results. We thought this slicing might disturb the computational graph, breaking the connection to <code>u</code>.</p>
<pre><code># Extract derivatives
u_x, u_t = u_d[:, 0], u_d[:, 1] # ∂u/∂x and ∂u/∂t
# Extract derivatives alternative
u_x = torch.sum(
torch.reshape(torch.tensor([1, 0], device=u_d.device), [1, -1]) * u_d,
dim=1,
keepdim=True,
)
u_t = ...
</code></pre>
<p>Thanks for your help!</p>
|
<python><math><pytorch><autograd>
|
2025-02-12 13:05:04
| 1
| 1,759
|
Wout Rombouts
|
79,433,136
| 902,769
|
Synapse notebook script runs fine but stops on time-out and gets stuck queued in pipelines
|
<p>I have a python script in a notebook that performs a parquet file schema correction. It is working fine, runs in less than 10 seconds, depending on the number of files to process. Not exactly fine, more on this later below.</p>
<p>Now I need to run it in a pipeline that correct dataset schemas and then send the data to another place. So I used a Notebook activity and linked it to the notebook I already have and configure the pool in the same way.</p>
<p><a href="https://i.sstatic.net/GPkwvN2Q.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GPkwvN2Q.png" alt="enter image description here" /></a></p>
<p>Running the pipeline takes more than 30 minutes, and the Spark application appears to be stuck in the QUEUED state. I guess it will time out at some point; I actually didn't let it time out, since 30 minutes for a script that runs in under 10 seconds is a clear sign of something going wrong.</p>
<p>I ran the notebook directly again, and it moves through the states just fine. However, even when the script finishes (the last print line of code runs), it still shows as running on the Apache Spark applications page, and it keeps running until it is stopped because of a timeout:</p>
<blockquote>
<p>Error details This application failed due to the total number of
errors: 1. Error code 1 LIVY_JOB_TIMED_OUT</p>
<p>Message Job failed during run time with state=[dead].</p>
<p>Source Unknown</p>
</blockquote>
<p>The last cell code is as follows:</p>
<pre><code># Usage
schema_path = f"{blob_relative_path}/person.schema.parquet" # Example path
file_paths = [
f"{blob_relative_path}/person.0.parquet",
f"{blob_relative_path}/person.1.parquet",
f"{blob_relative_path}/person.2.parquet",
f"{blob_relative_path}/person.3.parquet"
]
print(f"reading schema template file...")
schema_df = read_parquet(schema_path)
print(f"This schema will be used as the schema template for the rest of the files")
print(f"Starting standardization")
for path in file_paths:
    df = read_parquet(path)
    print(f"file {path} loaded")
    df = standardize_schema(df, schema_df)
    print(f"file {path} standardized")
    df.info()
    write_parquet(df, path)
print(f"All files are standardized.")
</code></pre>
<p>I don't know what needs to be done for the job to finish when the script completes and to prevent the application from timing out, or whether this is the expected behaviour (to keep running after script completion until the application times out). Could that have something to do with the pipeline being stuck in the queued state? How can I move forward and make the notebook run correctly both on its own and in the pipeline?</p>
|
<python><azure-synapse><spark-notebook><azure-synapse-pipeline>
|
2025-02-12 12:56:40
| 1
| 1,175
|
Ricker Silva
|
79,433,015
| 29,295,031
|
How to make streamlit session_state updated when I select a point on plotly_chart
|
<p>I have this use case where I have a plotlychart :</p>
<pre><code>import streamlit as st
import plotly.express as px

if "var" not in st.session_state:
    st.session_state["var"] = 0

col3, col4 = st.columns(2)


def plot():
    df = px.data.iris()
    fig = px.scatter(
        df,
        x="sepal_width",
        y="sepal_length",
        color="species",
        size="petal_length",
        hover_data=["petal_width"],
    )
    event = st.plotly_chart(fig, key="iris", on_select="rerun")
    event.selection
    if len(event.selection.points) != 0:
        st.session_state["var"] = event.selection.points[0]["x"]
        # st.rerun()


with col3:
    st.write(st.session_state["var"])
with col4:
    plot()
</code></pre>
<p>My issue is that when I select a point on the graph, st.session_state["var"] should logically be updated right away, but that is not the case here. I have tried to use st.rerun(), but it enters an infinite loop. Do you have any idea how to make st.session_state["var"] update immediately when I select a point?
I want to keep the order of the columns as it is.
Thanks</p>
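<p>A sketch of one possible approach (assuming <code>on_select</code> also accepts a callback and that the chart's state is stored in <code>st.session_state</code> under its <code>key</code>): update the value in a callback, which runs before the script reruns, so <code>col3</code> already sees the new value even though it is rendered first.</p>
<pre class="lang-py prettyprint-override"><code>import streamlit as st
import plotly.express as px

if "var" not in st.session_state:
    st.session_state["var"] = 0

def on_point_select():
    # the chart's state should be available under its key before the rerun
    points = st.session_state["iris"].selection.points
    if points:
        st.session_state["var"] = points[0]["x"]

col3, col4 = st.columns(2)

def plot():
    df = px.data.iris()
    fig = px.scatter(
        df,
        x="sepal_width",
        y="sepal_length",
        color="species",
        size="petal_length",
        hover_data=["petal_width"],
    )
    st.plotly_chart(fig, key="iris", on_select=on_point_select)

with col3:
    st.write(st.session_state["var"])
with col4:
    plot()
</code></pre>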
|
<python><plotly><streamlit>
|
2025-02-12 12:18:44
| 0
| 401
|
user29295031
|
79,433,014
| 5,118,421
|
What is the difference between ConfigDict and dict?
|
<p>What is the difference between <code>ConfigDict</code> and <code>dict</code>?</p>
<pre class="lang-py prettyprint-override"><code>ConfigDict: TypeAlias = dict[str, Union[str, list[str]]]
</code></pre>
<p>What are the advantages of using <code>ConfigDict</code>?</p>
<p><a href="https://github.com/pytest-dev/pytest/pull/13193/files#diff-f1d27932fbd9530086080aa8df367309881fe90f204cdd69102ba59758644761" rel="nofollow noreferrer">https://github.com/pytest-dev/pytest/pull/13193/files#diff-f1d27932fbd9530086080aa8df367309881fe90f204cdd69102ba59758644761</a></p>
|
<python><dictionary><pytest>
|
2025-02-12 12:18:34
| 1
| 1,407
|
Irina
|
79,432,925
| 2,176,819
|
Load UWP icon programmatically
|
<p>I'm writing a <a href="https://github.com/gridranger/progman.py" rel="nofollow noreferrer">Python-based launcher</a> that is automatically populated from Windows' Start menu. I plan to extend it to include UWP apps. Listing and launching them isn't hard, but accessing their icons seems to be tricky.</p>
<p>As the apps reside under <code>\program files\windowsapps\</code>, their icons seem to be inaccessible due to a permissions issue. I don't plan to run my program elevated.</p>
<p>I have the current leads:</p>
<ul>
<li>Their icons are parsed and cached in the iconcache*.db under <code>%homepath%\AppData\Local\Microsoft\Windows\Explorer</code> (errors of displaying their icons can be fixed by removing these cache files), but I didn't find any solutions to read the image data from there.</li>
<li>Their icons might be fetched from the Windows Store also (but I would prefer an offline solution).</li>
<li>I'm also open to any other solution ideas to get their icon data.</li>
</ul>
|
<python><windows><uwp><icons>
|
2025-02-12 11:43:09
| 1
| 403
|
gridranger
|
79,432,856
| 884,463
|
When I create an array of Numpy floats, I get an array of Python floats
|
<p>The code:</p>
<pre class="lang-py prettyprint-override"><code>import sys
import numpy as np

print(f"We are using Python {sys.version}", file=sys.stderr)
print(f"We are using numpy version {np.__version__}", file=sys.stderr)  # 2.2.1


def find_non_numpy_floats(x: any) -> bool:
    if not (isinstance(x, np.float64)):
        print(f"Found non-numpy.float64: {x} of type {type(x)}", file=sys.stderr)
        return False
    else:
        return True


w: np.ndarray = np.zeros((2, 2), dtype=np.float64)
np.vectorize(lambda x: find_non_numpy_floats(x))(w)
assert (np.all(np.vectorize(lambda x: isinstance(x, np.float64))(w))), "try to keep using the numpy floats"
</code></pre>
<p>I'm expecting <a href="https://numpy.org/doc/stable/reference/generated/numpy.zeros.html" rel="nofollow noreferrer">Numpy.zeros</a> to generate an array of Numpy <code>float64</code>, which are not the same as Python <code>float</code> if I understand correctly (IEEE 64-bit floats vs something Python specific?)</p>
<p>However the above results in:</p>
<pre class="lang-none prettyprint-override"><code>We are using Python 3.13.1 (main, Dec 9 2024, 00:00:00) [GCC 14.2.1 20240912 (Red Hat 14.2.1-3)]
We are using numpy version 2.2.1
Found non-numpy.float64: 0.0 of type <class 'float'>
Found non-numpy.float64: 0.0 of type <class 'float'>
Found non-numpy.float64: 0.0 of type <class 'float'>
Found non-numpy.float64: 0.0 of type <class 'float'>
</code></pre>
<p>and an assertion error.</p>
<p>Why is that and how can I fix this (and should I want to?)</p>
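<p>For what it's worth, a quick check (mine, not part of the original code) suggests the array itself does hold <code>float64</code> values; the plain-<code>float</code> elements only show up in how they are handed to the vectorized callable:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

w = np.zeros((2, 2), dtype=np.float64)
print(w.dtype)                      # float64: the buffer stores IEEE 754 doubles
print(type(w[0, 0]))                # &lt;class 'numpy.float64'&gt; when indexing directly
print(isinstance(w[0, 0], float))   # True: np.float64 is a subclass of Python float
</code></pre>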
|
<python><numpy><type-conversion>
|
2025-02-12 11:17:16
| 2
| 15,375
|
David Tonhofer
|
79,432,808
| 1,150,683
|
Understanding Wikipedia titles batching API
|
<p>With the MediaWiki API we can <a href="https://www.mediawiki.org/w/api.php?action=help&modules=query" rel="nofollow noreferrer">query</a> the Wikipedia API. One of the fields is <code>titles</code> where one <em>or more</em> titles can be queried at the same time. Batching them together is recommended in high load scenarios to avoid multiple consecutive requests. Multiple titles should be separated by a pipe <code>|</code> character.</p>
<p>I am using the Wikipedia API to find "translations" of categories. Let's say I have an English category "Antiquity", I want to find the corresponding category in a different language. That is possible by querying the API for the prop <code>langlinks</code>.</p>
<p>I find that, indeed, I can find such one-on-one mappings of an English category if I do not use batching, but if I <em>do</em> use batching, I do not always get all of the results back. To illustrate, I have a list of English categories, and at each iteration I process one item more than before (starting with only one). With batching, it becomes clear that with larger lists (still well within the max. limit of 50 imposed by the API), the earlier categories are lost and not included anymore. When not using batching (batch size=1), this issue does not occur.</p>
<pre class="lang-py prettyprint-override"><code>import requests


def get_translated_category(category_titles: str | list[str], target_lang: str, batch_size: int = 50) -> list[str]:
    """Fetch the translated equivalent of a Wikipedia category."""
    endpoint = "https://en.wikipedia.org/w/api.php"

    if isinstance(category_titles, str):
        category_titles = [category_titles]
    category_titles = [f"Category:{title}" for title in category_titles]

    translated_categories = {}
    # API is limited to 50 titles per request
    for start_idx in range(0, len(category_titles), batch_size):
        end_idx = start_idx + batch_size
        batch_titles = category_titles[start_idx:end_idx]

        params = {
            "action": "query",
            "format": "json",
            "prop": "langlinks",
            "titles": "|".join(batch_titles),
            "lllimit": "max"
        }

        response = requests.get(endpoint, params=params)
        data = response.json()
        pages = data.get("query", {}).get("pages", {})

        for page_data in pages.values():
            title = page_data["title"].split(":")[-1]
            if title in translated_categories:
                print("We already found this category title!")
            langlinks = page_data.get("langlinks", [])
            for link in langlinks:
                if link["lang"] == target_lang:
                    translated_categories[title] = link["*"].split(":")[-1]

    return translated_categories


if __name__ == "__main__":
    english_categories: list[str] = [
        "Classical antiquity",
        "Late antiquity",
        "Latin-language literature",
        "Roman Kingdom",
        "Roman Republic",
        "Roman Empire",
        "Byzantine Empire",
        "Latin language",
        "Ancient Greek",
        "Ancient Greece",
        "Ancient Greek literature",
        "Medieval history of Greece",
    ]

    print("Batch size 50 (default)")
    for idx in range(len(english_categories)):
        categories = english_categories[:idx+1]
        latin_categories = get_translated_category(categories, "la")
        print(latin_categories)

    print()
    print("Batch size 1 (no batching)")
    for idx in range(len(english_categories)):
        categories = english_categories[:idx+1]
        latin_categories = get_translated_category(categories, "la", batch_size=1)
        print(latin_categories)
</code></pre>
<p>The output of the code above is:</p>
<pre><code>Batch size 50 (default)
{'Classical antiquity': 'Res classicae'}
{'Classical antiquity': 'Res classicae', 'Late antiquity': 'Antiquitas Posterior'}
{'Classical antiquity': 'Res classicae', 'Late antiquity': 'Antiquitas Posterior', 'Latin-language literature': 'Litterae Latinae'}
{'Classical antiquity': 'Res classicae', 'Late antiquity': 'Antiquitas Posterior', 'Latin-language literature': 'Litterae Latinae'}
{'Classical antiquity': 'Res classicae', 'Late antiquity': 'Antiquitas Posterior', 'Latin-language literature': 'Litterae Latinae'}
{'Classical antiquity': 'Res classicae', 'Late antiquity': 'Antiquitas Posterior', 'Latin-language literature': 'Litterae Latinae', 'Roman Empire': 'Imperium Romanum'}
{'Byzantine Empire': 'Imperium Byzantinum', 'Late antiquity': 'Antiquitas Posterior', 'Latin-language literature': 'Litterae Latinae', 'Roman Empire': 'Imperium Romanum'}
{'Byzantine Empire': 'Imperium Byzantinum', 'Late antiquity': 'Antiquitas Posterior', 'Latin language': 'Lingua Latina', 'Roman Empire': 'Imperium Romanum'}
{'Byzantine Empire': 'Imperium Byzantinum', 'Late antiquity': 'Antiquitas Posterior', 'Latin language': 'Lingua Latina', 'Roman Empire': 'Imperium Romanum'}
{'Ancient Greece': 'Graecia antiqua', 'Byzantine Empire': 'Imperium Byzantinum', 'Late antiquity': 'Antiquitas Posterior', 'Roman Empire': 'Imperium Romanum'}
{'Ancient Greece': 'Graecia antiqua', 'Byzantine Empire': 'Imperium Byzantinum', 'Late antiquity': 'Antiquitas Posterior', 'Roman Empire': 'Imperium Romanum'}
{'Ancient Greece': 'Graecia antiqua', 'Byzantine Empire': 'Imperium Byzantinum', 'Late antiquity': 'Antiquitas Posterior', 'Roman Empire': 'Imperium Romanum'}
Batch size 1 (no batching)
{'Classical antiquity': 'Res classicae'}
{'Classical antiquity': 'Res classicae', 'Late antiquity': 'Antiquitas Posterior'}
{'Classical antiquity': 'Res classicae', 'Late antiquity': 'Antiquitas Posterior', 'Latin-language literature': 'Litterae Latinae'}
{'Classical antiquity': 'Res classicae', 'Late antiquity': 'Antiquitas Posterior', 'Latin-language literature': 'Litterae Latinae'}
{'Classical antiquity': 'Res classicae', 'Late antiquity': 'Antiquitas Posterior', 'Latin-language literature': 'Litterae Latinae'}
{'Classical antiquity': 'Res classicae', 'Late antiquity': 'Antiquitas Posterior', 'Latin-language literature': 'Litterae Latinae', 'Roman Empire': 'Imperium Romanum'}
{'Classical antiquity': 'Res classicae', 'Late antiquity': 'Antiquitas Posterior', 'Latin-language literature': 'Litterae Latinae', 'Roman Empire': 'Imperium Romanum', 'Byzantine Empire': 'Imperium Byzantinum'}
{'Classical antiquity': 'Res classicae', 'Late antiquity': 'Antiquitas Posterior', 'Latin-language literature': 'Litterae Latinae', 'Roman Empire': 'Imperium Romanum', 'Byzantine Empire': 'Imperium Byzantinum', 'Latin language': 'Lingua Latina'}
{'Classical antiquity': 'Res classicae', 'Late antiquity': 'Antiquitas Posterior', 'Latin-language literature': 'Litterae Latinae', 'Roman Empire': 'Imperium Romanum', 'Byzantine Empire': 'Imperium Byzantinum', 'Latin language': 'Lingua Latina', 'Ancient Greek': 'Lingua Graeca antiqua'}
{'Classical antiquity': 'Res classicae', 'Late antiquity': 'Antiquitas Posterior', 'Latin-language literature': 'Litterae Latinae', 'Roman Empire': 'Imperium Romanum', 'Byzantine Empire': 'Imperium Byzantinum', 'Latin language': 'Lingua Latina', 'Ancient Greek': 'Lingua Graeca antiqua', 'Ancient Greece': 'Graecia antiqua'}
{'Classical antiquity': 'Res classicae', 'Late antiquity': 'Antiquitas Posterior', 'Latin-language literature': 'Litterae Latinae', 'Roman Empire': 'Imperium Romanum', 'Byzantine Empire': 'Imperium Byzantinum', 'Latin language': 'Lingua Latina', 'Ancient Greek': 'Lingua Graeca antiqua', 'Ancient Greece': 'Graecia antiqua', 'Ancient Greek literature': 'Litterae Graecae antiquae'}
{'Classical antiquity': 'Res classicae', 'Late antiquity': 'Antiquitas Posterior', 'Latin-language literature': 'Litterae Latinae', 'Roman Empire': 'Imperium Romanum', 'Byzantine Empire': 'Imperium Byzantinum', 'Latin language': 'Lingua Latina', 'Ancient Greek': 'Lingua Graeca antiqua', 'Ancient Greece': 'Graecia antiqua', 'Ancient Greek literature': 'Litterae Graecae antiquae'}
</code></pre>
<p>It should be immediately clear that there is a difference between batching and not using batching and, more worrisome, that using batching leads to some items being discarded. I thought that perhaps this would be the case if categories are merged in Latin and have the same name, so the API resolves to only returning one of them, but as far as I can tell that is not the case.</p>
<p>How can I ensure that batching my requests (titles) together, I get the same results as firing individual requests with the Wikipedia API?</p>
<p>EDIT: after further investigation it would seem that the API does return results for all categories (the <code>pages</code> variable), but for some reason the corresponding languages (<code>langlinks</code>) are not the same.</p>
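<p>One thing that seems worth ruling out (a sketch, not the code above): with batched titles, the <code>langlinks</code> prop results can be split across responses, so the <code>continue</code> block returned by the API has to be followed and the per-page fragments merged; otherwise some pages can come back without all of their langlinks.</p>
<pre class="lang-py prettyprint-override"><code>import requests

def query_pages(endpoint: str, params: dict) -> dict:
    """Collect query results for one batched request, following API continuation."""
    pages: dict = {}
    params = dict(params)
    while True:
        data = requests.get(endpoint, params=params).json()
        for page_id, page_data in data.get("query", {}).get("pages", {}).items():
            merged = pages.setdefault(page_id, {"title": page_data["title"], "langlinks": []})
            merged["langlinks"].extend(page_data.get("langlinks", []))
        if "continue" not in data:
            return pages
        params.update(data["continue"])  # e.g. llcontinue for langlinks
</code></pre>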
|
<python><python-requests><wikipedia-api><mediawiki-api><batching>
|
2025-02-12 11:02:00
| 1
| 28,776
|
Bram Vanroy
|
79,432,772
| 16,906,505
|
sql alchemy constructor typesafety
|
<p>In order to have type safety in SQLAlchemy, do I need to define custom constructors for each model? With the current setup there is no type safety at all when creating an object:</p>
<pre><code>from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column


class Base(DeclarativeBase):
    pass


class User(Base):
    __tablename__ = "user"

    id: Mapped[int] = mapped_column(primary_key=True)
    name: Mapped[str]
    fullname: Mapped[str]


user = User(namew="Alice", fullname="Alice Smith")
</code></pre>
<p><a href="https://i.sstatic.net/65aFJTcB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/65aFJTcB.png" alt="enter image description here" /></a></p>
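<p>If SQLAlchemy 2.0 is an option, one approach (a sketch, not the only one) is the <code>MappedAsDataclass</code> mixin, which generates a real, typed <code>__init__</code> from the mapped columns, so a type checker can flag a misspelled or wrongly typed keyword argument:</p>
<pre class="lang-py prettyprint-override"><code>from sqlalchemy.orm import DeclarativeBase, Mapped, MappedAsDataclass, mapped_column


class Base(MappedAsDataclass, DeclarativeBase):
    pass


class User(Base):
    __tablename__ = "user"

    id: Mapped[int] = mapped_column(primary_key=True, init=False)
    name: Mapped[str]
    fullname: Mapped[str]


user = User(name="Alice", fullname="Alice Smith")  # a typo such as namew= is now flagged
</code></pre>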
|
<python><sqlalchemy><python-typing>
|
2025-02-12 10:49:49
| 1
| 500
|
RheinmetallSkorpion
|
79,432,485
| 5,137,645
|
list the files in sagemaker estimator or processor
|
<p>Is there a way to have the SageMaker estimator or processor print out a list of the files that it has in a certain directory, without introducing custom scripts to them? Or, if a custom script has to be introduced, how do I do that while still running the original transfer_learning.py script?</p>
|
<python><amazon-sagemaker>
|
2025-02-12 09:12:16
| 0
| 606
|
Nikita Belooussov
|
79,432,237
| 1,319,998
|
Can a KeyboardInterrupt be raised during the handling of a KeyboardInterrupt?
|
<p>If we have the following code:</p>
<pre class="lang-py prettyprint-override"><code>try:
    long_running_thing()
finally:
    # Some cleanup
</code></pre>
<p>And if we're just considering the <code>finally</code> block running because the user pressed CTRL+C and so KeyboardInterrupt was raised in <code>long_running_thing</code>... if they press CTRL+C <em>again</em> while in the finally block, could that be interrupted and another KeyboardInterrupt raised?</p>
<p>(And then again... during handling of <em>that</em> KeyboardInterrupt, can there be another KeyboardInterrupt raised?)</p>
<p>My aim here is to have robust handling of CTRL+C/KeyboardInterrupt, including multiple CTRL+Cs/KeyboardInterrupts in quick succession</p>
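<p>Not a full answer, but one way to shield the cleanup itself from further CTRL+C presses is to temporarily install <code>SIG_IGN</code> while the <code>finally</code> block runs (main thread only; <code>long_running_thing</code> and <code>do_cleanup</code> below are stand-ins):</p>
<pre class="lang-py prettyprint-override"><code>import signal
import time

def long_running_thing():
    time.sleep(60)  # stand-in for the real work

def do_cleanup():
    time.sleep(2)   # stand-in for the real cleanup

try:
    long_running_thing()
finally:
    # ignore further CTRL+C while cleaning up, then restore the old handler
    previous = signal.signal(signal.SIGINT, signal.SIG_IGN)
    try:
        do_cleanup()
    finally:
        signal.signal(signal.SIGINT, previous)
</code></pre>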
|
<python><exception><signals>
|
2025-02-12 07:15:49
| 2
| 27,302
|
Michal Charemza
|
79,432,047
| 2,955,827
|
How to call helper functions from task.external_python in dag without set sys.path everytime?
|
<p>I have a helper function that needs to run in every task to initialize the environment.</p>
<p>The problem is that a task cannot access any variable outside of its own scope, so I have to add the same code to the beginning of every task, like:</p>
<pre class="lang-py prettyprint-override"><code>@task.external_python(
    python=v_python_path,
    retries=3,
)
def init_env():
    ### this part have to be added for every task
    import sys
    sys.path.append("/opt/airflow/dags")
    ####
    from my_module import my_function
</code></pre>
<p>Is there a way to make this cleaner and more graceful?</p>
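<p>One possible alternative (a sketch based on standard <code>.pth</code> behaviour, not an Airflow feature): put a <code>.pth</code> file into the external interpreter's site-packages once, so <code>/opt/airflow/dags</code> is always on its <code>sys.path</code> and the per-task boilerplate disappears. The interpreter path below is just a placeholder.</p>
<pre class="lang-py prettyprint-override"><code>import pathlib
import subprocess

v_python_path = "/opt/airflow/venvs/task_env/bin/python"  # placeholder: the DAG's external interpreter

# ask the external interpreter where its site-packages lives
site_packages = subprocess.check_output(
    [v_python_path, "-c", "import sysconfig; print(sysconfig.get_paths()['purelib'])"],
    text=True,
).strip()

# one-time setup: every future run of that interpreter will see /opt/airflow/dags
pathlib.Path(site_packages, "airflow_dags.pth").write_text("/opt/airflow/dags\n")
</code></pre>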
|
<python><airflow>
|
2025-02-12 05:37:55
| 0
| 3,295
|
PaleNeutron
|
79,431,967
| 7,619,353
|
Python How to convert complex nested classes into dictionary
|
<p>I have a couple of classes that contain basic types like str, int, float, list and dict, but also other classes that contain similar types. Essentially I have nested layers of objects. The data is converted this way so it can be represented as objects and manipulated later. Once all the processing is done, I want to export the data into a dictionary so it can be stored as a JSON object in a database. Is this common? Are there libraries that help you do this?</p>
<p>So far I was going to implement a function that converts the top-level class and its nested objects into a nested dictionary output. I'm worried I will get a stack overflow from the recursion I will be using.</p>
<pre><code>def getObjectAsDict(self):
    d = {}
    for name_of_attr in dir(some_class):
        if name_of_attr.startswith("_"):
            continue
        value_of_attr = getattr(some_class, name_of_attr)
        if isinstance(value_of_attr, str):
            pass
        elif isinstance(value_of_attr, int):
            pass
        elif isinstance(value_of_attr, float):
            pass
        elif isinstance(value_of_attr, bool):
            pass
        elif isinstance(value_of_attr, list):
            for idx, item in enumerate(value_of_attr):
                # some recursion logic here
        elif isinstance(value_of_attr, dict):
            for key, value in value_of_attr.items():
                # some recursion logic here
        elif isinstance(value_of_attr, ComplexObject):
            value_of_attr = value_of_attr.getObjectAsDict()
            # some recursion logic here
        else:
            continue
        d[name_of_attr] = value_of_attr
</code></pre>
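<p>If the classes can be written as dataclasses, this recursion already exists in the standard library: <code>dataclasses.asdict()</code> walks nested dataclasses, lists and dicts for you. A small sketch with made-up classes:</p>
<pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass, asdict, field

@dataclass
class Address:
    city: str
    zip_code: str

@dataclass
class Person:
    name: str
    age: int
    addresses: list[Address] = field(default_factory=list)

p = Person("Ada", 36, [Address("London", "N1")])
print(asdict(p))
# {'name': 'Ada', 'age': 36, 'addresses': [{'city': 'London', 'zip_code': 'N1'}]}
</code></pre>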
|
<python><python-3.x><dictionary><recursion>
|
2025-02-12 04:43:23
| 2
| 1,840
|
tyleax
|
79,431,910
| 1,084,875
|
Can't send a NumPy array larger than 2 GB with ZeroMQ
|
<p>I'm using the Python code shown below to serialize and send a NumPy array from the client to the server using ZeroMQ. I noticed that when the NumPy array is larger than 2 GB the client seems to stall when sending the array. For example, in the <code>client.py</code> code shown below, if you use <code>n = 17000</code> the client will stall after creating the array. I ran this code on a MacBook Pro laptop that has 32 GB of memory so there should be plenty of RAM available for the message. Is there a limit to the size of a NumPy array that I can send with ZeroMQ? If there is a limit, then how would I send an array that exceeds the size limit?</p>
<p>Client code that creates NumPy array (client.py)</p>
<pre class="lang-py prettyprint-override"><code>import sys
import numpy as np
import zmq
class Client:
"""Client for sending/receiving messages."""
def __init__(self, address="tcp://localhost:5555"):
context = zmq.Context()
socket = context.socket(zmq.REQ)
socket.connect(address)
self.socket = socket
def send_array(self, array: np.ndarray):
md = {"dtype": str(array.dtype), "shape": array.shape}
self.socket.send_json(md, zmq.SNDMORE) # send metadata
self.socket.send(array, copy=False) # send NumPy array data
def recv_message(self):
reply = self.socket.recv_string()
print("Received reply:", reply)
def main():
# Create array
n = 16000 # 8000 is 500 MB, 11500 is 1 GB, 16000 is 2 GB, 17000 fails to send
x = np.random.rand(n, n)
print(f"Array shape: {x.shape}")
print(f"First three elements: {x[0, 0:3]}")
print(f"Size of array data: {x.nbytes} bytes, {x.nbytes / 1000**2} MB")
print(f"Size of array object: {sys.getsizeof(x)} bytes, {x.nbytes / 1000**2} MB")
# Create client and send array
client = Client()
client.send_array(x)
client.recv_message()
if __name__ == "__main__":
main()
</code></pre>
<p>Server code that receives NumPy array (server.py)</p>
<pre class="lang-py prettyprint-override"><code>from typing import Any
import zmq
import numpy as np
class Server:
"""Server for receiving/sending messages."""
def __init__(self, address="tcp://localhost:5555"):
context = zmq.Context()
socket = context.socket(zmq.REP)
socket.bind(address)
self.socket = socket
print("Server started, waiting for array...")
def _recv_array(self):
md: Any = self.socket.recv_json() # receive metadata
msg: Any = self.socket.recv(copy=False) # receive NumPy array data
array = np.frombuffer(msg, dtype=md["dtype"]) # reconstruct the NumPy array
return array.reshape(md["shape"])
def run(self):
"""Run the server."""
while True:
# Receive the NumPy array
array = self._recv_array()
print("Received array with shape:", array.shape)
print(f"First three elements: {array[0, 0:3]}")
# Send a confirmation reply
self.socket.send_string("Array received")
def main():
server = Server()
server.run()
if __name__ == "__main__":
main()
</code></pre>
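<p>For reference, if the stall turns out to be a per-frame size limit around 2 GiB (an assumption, not a confirmed diagnosis), a workaround sketch is to split the array bytes across several frames of one multipart message and reassemble them on the server:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import zmq

CHUNK = 1 << 30  # 1 GiB per frame, comfortably below 2 GiB

def send_array_chunked(socket: zmq.Socket, array: np.ndarray) -> None:
    md = {"dtype": str(array.dtype), "shape": array.shape}
    socket.send_json(md, zmq.SNDMORE)
    buf = memoryview(array).cast("B")          # flat byte view, no copy (needs C-contiguous data)
    for start in range(0, len(buf), CHUNK):
        part = buf[start:start + CHUNK]
        more = 0 if start + CHUNK >= len(buf) else zmq.SNDMORE
        socket.send(part, more, copy=False)

def recv_array_chunked(socket: zmq.Socket) -> np.ndarray:
    md = socket.recv_json()                    # first frame: metadata
    parts = socket.recv_multipart()            # remaining frames of the same message
    data = b"".join(parts)
    return np.frombuffer(data, dtype=md["dtype"]).reshape(md["shape"])
</code></pre>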
|
<python><numpy><zeromq><pyzmq>
|
2025-02-12 03:59:27
| 1
| 9,246
|
wigging
|
79,431,607
| 2,077,386
|
Python type annotation for unique value representing 'not undefined'
|
<p>There is an idiom in Python to do something like,</p>
<pre><code>UNDEFINED = object()
def do_something(value=UNDEFINED):
if value is UNDEFINED:
do1()
    elif value is None:
do2()
else:
do3()
</code></pre>
<p>The point being that <code>None</code> may be a valid value and <code>UNDEFINED</code> causes some <em>other</em> default behavior.</p>
<p>The mypy signature would then be:</p>
<pre><code>def do_something(value: int | None | object = UNDEFINED):
...
</code></pre>
<p>Which seems kind of pointless since everything is an object.</p>
<p><code>Literal[UNDEFINED]</code> isn't valid, either.</p>
<p>The best I can come up with so far is:</p>
<pre><code>class _Undefined:
pass
UNDEFINED = _Undefined()
</code></pre>
<p>and</p>
<pre><code>def do_something(value: int | None | _Undefined = UNDEFINED):
...
</code></pre>
<p>It's close, but introduces corner cases of someone passing <code>_Undefined()</code> (making a new instance) and means I'd be better off using <code>isinstance</code> rather than <code>value is UNDEFINED</code> as the old idiom encourages.</p>
<p>Is there a new idiom emerging to do this with mypy and type annotations?</p>
<p>edit:</p>
<p>@drooze points out <a href="https://stackoverflow.com/questions/57959664/handling-conditional-logic-sentinel-value-with-mypy/">Handling conditional logic + sentinel value with mypy</a> ... which does seem to address this!</p>
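<p>For reference, one pattern that mypy understands is a single-member <code>Enum</code> sentinel combined with <code>Literal</code> (PEP 586 allows enum members in <code>Literal</code>), which keeps the <code>is UNDEFINED</code> idiom and narrows correctly (<code>do1</code>/<code>do2</code>/<code>do3</code> are the placeholders from the snippet above):</p>
<pre class="lang-py prettyprint-override"><code>from enum import Enum
from typing import Literal, Union

class _Sentinel(Enum):
    UNDEFINED = object()

UNDEFINED = _Sentinel.UNDEFINED

def do_something(value: Union[int, None, Literal[_Sentinel.UNDEFINED]] = UNDEFINED) -> None:
    if value is UNDEFINED:      # mypy narrows on identity checks against enum literals
        do1()
    elif value is None:
        do2()
    else:
        do3()
</code></pre>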
|
<python><python-typing><mypy>
|
2025-02-12 00:14:05
| 0
| 7,133
|
rrauenza
|
79,431,510
| 2,882,380
|
Python in Excel: How to print output dataframe to a tab
|
<p>I am experimenting on the new Python in Excel feature. I wrote the following lines in a cell.</p>
<pre><code>import pandas as pd
input_df = pd.DataFrame(xl("A6:ASH38", headers=True))
input_header = list(input_df.columns.values)
label_col = input_header[:14]
value_col = input_header[14:]
output_df = input_df.groupby(label_col)[value_col].sum().reset_index()
output_df.to_csv('test.csv', index=False)
</code></pre>
<p>However, I cannot see the test.csv file being generated. Moreover, how can I print the output_df to a new tab in the spreadsheet?</p>
|
<python><excel>
|
2025-02-11 22:53:41
| 0
| 1,231
|
LaTeXFan
|
79,431,483
| 15,842
|
Polars selectors for columns that are Nested Types
|
<p>Some Polars operations, such as <code>.sort()</code>, fail when passed a column with a nested type.</p>
<p>This (sensible) choice about sort means I cannot use my usual sorting pattern of <code>df.sort(pl.all())</code>.</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
NESTED_TYPES = [
pl.List,
pl.Array,
pl.Object,
pl.Struct
]
pl.exclude(NESTED_TYPES)
</code></pre>
<p>Result:</p>
<pre><code>*.exclude([Dtype(List(Null)), Dtype(Array(Null, 0)), Dtype(Object("object", None)), Dtype(Struct([]))])
</code></pre>
<p>Is there a way to select (or exclude) only nested types?</p>
<p>The <a href="https://docs.pola.rs/api/python/stable/reference/selectors.html" rel="nofollow noreferrer">Selectors Documentation</a> has many ideas but nothing seems right for this.</p>
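<p>For reference, a schema-based sketch (assuming a recent Polars where <code>df.schema</code> maps names to DataType instances) that keeps only non-nested columns for sorting:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl

NESTED_TYPES = (pl.List, pl.Array, pl.Struct, pl.Object)

df = pl.DataFrame({"a": [2, 1], "b": [[1], [2]]})   # toy frame with one nested column

non_nested = [name for name, dtype in df.schema.items()
              if not isinstance(dtype, NESTED_TYPES)]
print(df.sort(non_nested))                           # sorts by "a" only
</code></pre>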
|
<python><python-polars>
|
2025-02-11 22:38:45
| 1
| 21,402
|
Gregg Lind
|
79,431,447
| 4,913,592
|
Gradient Lost When Constructing a Matrix in Mitsuba 3
|
<p>I am trying to optimize the absolute scale of an area light in Mitsuba 3.</p>
<p>I'm doing a simple toy example where I start with a scene like this:</p>
<p><a href="https://i.sstatic.net/oXy670A4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oXy670A4.png" alt="enter image description here" /></a></p>
<p>And my target image is like this:</p>
<p><a href="https://i.sstatic.net/6tfREkBM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6tfREkBM.png" alt="enter image description here" /></a></p>
<p>My loss is simply MSE between the two images.</p>
<p>I was able to get this to work by using a latent variable (a single float) that is turned into a uniform scaling matrix. However, since this was iteratively multiplicative, the scale had a compounding exponential effect, which made convergence difficult (by the time the scale value got big enough, the latent variable representing the scale factor was so much larger than 1 that a linear learning-rate adjustment couldn't bring the multiplier back to 1 in time). It worked, though, if I stopped optimization before the optimized light got bigger than the reference light. Here's an MWE to illustrate:</p>
<pre class="lang-py prettyprint-override"><code>import drjit as dr
import mitsuba as mi

mi.set_variant("llvm_ad_rgb")  # an AD-capable variant is needed for gradients

scene = mi.load_file("my_scene.xml", integrator='prb')
params = mi.traverse(scene)
params.update()
# Generate a reference image (area light at correct size)
reference = mi.render(scene, params, spp=1024)
# Disturb the scale by a factor of 0.5
def multiplicative_resize(matrix, factor):
"""Applies local scale around the centroid (multiplicative)."""
centroid = mi.Vector3f(matrix[0, 3], matrix[1, 3], matrix[2, 3])
to_origin = mi.Transform4f.translate(-centroid)
scale_tf = mi.Transform4f.scale(factor)
from_origin = mi.Transform4f.translate(centroid)
return from_origin @ scale_tf @ to_origin @ mi.Transform4f(matrix)
light_key = "Light.to_world"
original_matrix = params[light_key].matrix
params[light_key] = multiplicative_resize(original_matrix, 0.5)
params.update()
optimizer = mi.ad.Adam(lr=0.01)
optimizer["latent_scale_factor"] = mi.Float(1.001) # couldn't be exactly 1 otherwise there would be no gradient
for it in range(10):
# Render
image = mi.render(scene, params, spp=1024)
loss = dr.mean(dr.sqr(image - reference))
dr.backward(loss)
optimizer.step()
# Clamp
scale_val = dr.clamp(optimizer["latent_scale_factor"], 0.1, 2.0)
optimizer["latent_scale_factor"] = scale_val
# Update the transform multiplicatively
current_matrix = params[light_key].matrix
new_matrix = multiplicative_resize(current_matrix, scale_val)
params[light_key] = mi.Transform4f(new_matrix)
params.update()
print(f"[Multiplicative] Iter {it}, loss = {loss[0]:.6f}")
</code></pre>
<p>So, I figured that setting the absolute scale would make optimization easier. I tried to get the optimizer to learn how to match the absolute scale of the area light to a reference image:</p>
<pre class="lang-py prettyprint-override"><code>def set_area_light_scale(matrix, scale_value):
"""
Sets the (0,0) and (1,1) entries to `scale_value`
directly, ignoring previous scale.
"""
# Suppose for a rectangular area light, the matrix
# defaults to something like:
# [ 2 0 0 x ]
# [ 0 2 0 y ]
# [ 0 0 2 z ]
# [ 0 0 0 1 ]
# Forcibly place `scale_value` in positions (0,0) and (1,1).
new_mat = dr.llvm.ad.Matrix4f(
scale_value, matrix[0,1], matrix[0,2], matrix[0,3],
matrix[1,0], scale_value, matrix[1,2], matrix[1,3],
matrix[2,0], matrix[2,1], matrix[2,2], matrix[2,3],
matrix[3,0], matrix[3,1], matrix[3,2], matrix[3,3]
)
return mi.Transform4f(new_mat)
scene = mi.load_file("my_scene.xml", integrator='prb')
params = mi.traverse(scene)
params.update()
reference = mi.render(scene, params, spp=1024)
# Disturb the scale absolutely (initially 0.5, e.g.)
light_key = "Light.to_world"
original_matrix = params[light_key].matrix
params[light_key] = set_area_light_scale(original_matrix, 0.5)
params.update()
optimizer = mi.ad.Adam(lr=0.01)
optimizer["latent_scale_factor"] = mi.Float(0.5)
for it in range(10):
image = mi.render(scene, params, spp=1024)
loss = dr.mean(dr.sqr(image - reference))
dr.backward(loss)
optimizer.step()
# clamp
scale_val = dr.clamp(optimizer["latent_scale_factor"], 0.1, 2.0)
optimizer["latent_scale_factor"] = scale_val
# forcibly set the scale
current_tf = params[light_key].matrix
params[light_key] = set_area_light_scale(current_tf, scale_val)
params.update()
print(f"[Absolute Scale] Iter {it}, loss = {loss[0]:.6f}")
</code></pre>
<p>The new function properly updates the size of the area light, as I would expect. My issue, however, is that now the loss is exactly the same every iteration (it's not going down). Am I breaking the gradient somewhere? What am I doing wrong?</p>
<p><strong>TL;DR:</strong></p>
<ul>
<li>The multiplicative scale transforms approach is properly working with gradient flows, but the change is exponential over the iterations.</li>
<li>The absolute scale approach yields exactly the same loss every iteration</li>
<li>I’d like to figure out why the gradient w.r.t. a newly assigned absolute scale doesn’t reduce the loss, and how to fix it. Thanks!</li>
</ul>
|
<python><machine-learning><graphics><3d><mitsuba-renderer>
|
2025-02-11 22:10:15
| 1
| 351
|
Anson Savage
|
79,431,380
| 11,551,386
|
polars python - list version of search_sorted / filter and sort in one function
|
<p>I am trying to make a function that takes a <code>df</code> and filters and re-arranges rows where the elements of a specified column <code>c</code> are explicitly specified by a list <code>l</code>. The filtering aspect works as expected, but the sorting is not what I expect.</p>
<p>The <code>search_sorted</code> method shows a list of values as being permitted</p>
<p><a href="https://docs.pola.rs/api/python/dev/reference/expressions/api/polars.Expr.search_sorted.html" rel="nofollow noreferrer">https://docs.pola.rs/api/python/dev/reference/expressions/api/polars.Expr.search_sorted.html</a></p>
<p><a href="https://docs.pola.rs/api/python/stable/reference/series/api/polars.Series.search_sorted.html" rel="nofollow noreferrer">https://docs.pola.rs/api/python/stable/reference/series/api/polars.Series.search_sorted.html</a></p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.DataFrame(
{
"a": [1, 2, 3],
"b": [6.0, 5.0, 4.0],
"c": ["z", "x", "y"],
}
)
def filter_sort_explicit(df, c, l):
"""
    a function that filters a [df] on [c]olumn by explicitly specifying the order of the values (in that column) in a [list]
"""
return df.filter(pl.col(c).is_in(l)).sort(pl.col(c).search_sorted(l))
for order in ('y x z',
'z x y',
'x z',
'y x',
'z y'):
order = order.split()
print(filter_sort_explicit(df, 'c', order)['c'].to_list() == order)
## should all print true, but that's not the case.
for order in ('y x z', 'z x y', 'x y z'):
order = order.split()
print(order, df.sort(pl.col('c').search_sorted(order, side='left'))['c'].to_list())
</code></pre>
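<p>For reference, a sketch of an alternative that sidesteps <code>search_sorted</code> entirely (assuming <code>with_row_index</code>, called <code>with_row_count</code> in older Polars): join the explicit order in as an index column and sort on it.</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl

df = pl.DataFrame({"a": [1, 2, 3], "b": [6.0, 5.0, 4.0], "c": ["z", "x", "y"]})

def filter_sort_explicit(df: pl.DataFrame, c: str, order: list[str]) -> pl.DataFrame:
    order_df = pl.DataFrame({c: order}).with_row_index("order_idx")
    # the inner join drops values not in `order`; sorting by the index restores the requested order
    return df.join(order_df, on=c, how="inner").sort("order_idx").drop("order_idx")

print(filter_sort_explicit(df, "c", ["y", "x", "z"])["c"].to_list())  # ['y', 'x', 'z']
</code></pre>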
|
<python><python-polars>
|
2025-02-11 21:37:13
| 1
| 344
|
likethevegetable
|
79,431,296
| 265,521
|
Cross-platform command to activate venv
|
<p>Is there a way to activate a Python venv with one command that works on Windows and Unix?</p>
<p>The standard commands are:</p>
<ul>
<li>Windows: <code>venv\Scripts\Activate.ps1</code></li>
<li>Unix: <code>source venv/bin/activate</code></li>
</ul>
<p>(There's a handy table of all the options <a href="https://docs.python.org/3/library/venv.html#how-venvs-work" rel="nofollow noreferrer">here</a>.)</p>
<p>This is pretty annoying in CI though. Is there a clever way to activate the venv with one command? Doesn't matter if it starts a sub-shell. Something like <code>python3 -m venv activate path/to/venv</code>?</p>
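<p>For reference, activation is mostly a convenience for interactive shells; in CI it is usually enough to call the venv's interpreter (or the scripts next to it) by path. A minimal helper sketch:</p>
<pre class="lang-py prettyprint-override"><code>import sys
from pathlib import Path

def venv_python(venv_dir: str) -> Path:
    """Return the path of the venv's interpreter on either platform."""
    sub = "Scripts/python.exe" if sys.platform == "win32" else "bin/python"
    return Path(venv_dir) / sub

# e.g. subprocess.run([str(venv_python("venv")), "-m", "pytest"]) behaves as if activated
</code></pre>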
|
<python><python-venv>
|
2025-02-11 20:59:00
| 0
| 98,971
|
Timmmm
|
79,431,220
| 782,392
|
Why does TextIOWrapper.seek() not use the buffer?
|
<p>I just noticed that whenever I use <code>seek()</code> on a <code>TextIOWrapper</code> object, the performance decreases noticeably.</p>
<p>The following code opens a text file (it should be between 10 and 50 MB in size), reads one line, and then calls seek with the position recorded before that line was read. Then it reads another line.</p>
<p>I'd expect this code to only read once from disk. The whole file fits into the buffer.</p>
<p>However, with a file of size of 25 MB, this reads a total of 1.2 GB from disk. If I remove the call to <code>seek()</code> the file is only read once. Why doesn't this work with <code>seek()</code>?</p>
<pre class="lang-py prettyprint-override"><code>input("Press Enter to start...")
with open('file.txt', 'r', 50 * 1024 * 1024, 'utf-8', newline='\n') as file:
while True:
pos = file.tell()
l1 = file.readline()
if not l1:
break
file.seek(pos)
l2 = file.readline()
input("Press Enter to exit...")
</code></pre>
|
<python><python-3.x>
|
2025-02-11 20:19:38
| 2
| 2,674
|
T3rm1
|
79,431,134
| 1,492,613
|
torch cannot add NamedTuple class to safe_globals
|
<p>I define some named tuple like this:</p>
<pre><code>class checkpoint_t(NamedTuple):
epoch: int
model_state_dict: Dict[str, Any]
optimizer_state_dict: Dict[str, Any]
model_name: str | None = None
</code></pre>
<p>However, after saving, I cannot load this namedtuple via</p>
<pre><code>import torch
from train import checkpoint_t
with torch.serialization.safe_globals([checkpoint_t]):
print("safe globals: ", torch.serialization.get_safe_globals())
checkpoint: checkpoint_t = torch.load(parsed_args.checkpoint, weights_only=True)
</code></pre>
<p>it's still saying:</p>
<blockquote>
<p>WeightsUnpickler error: Unsupported global: GLOBAL <code>__main__.checkpoint_t</code> was not an allowed global by default. Please use <code>torch.serialization.add_safe_globals([checkpoint_t])</code> or the <code>torch.serialization.safe_globals([checkpoint_t])</code> context manager to allowlist this global if you trust this class/function.</p>
</blockquote>
<p>Any idea why, and how to fix this?</p>
<p><strong>Update about why:</strong>
The module train.py has an <code>if __name__ == "__main__"</code> block, so the module can be executed as <code>python -m package.subpackage.train</code>. If one runs it like this instead of using the exposed console_scripts entry point, the train module's name becomes <code>__main__</code>.</p>
|
<python><pytorch>
|
2025-02-11 19:25:40
| 1
| 8,402
|
Wang
|
79,431,023
| 513,904
|
Tracing CustomMaskWarning to its source
|
<p>I am working with an old TensorFlow (2.7) and the Java TensorFlow bindings, but the answer may still help others tracing similar issues. My model produces this warning when saved in the 2.7 save() format:</p>
<pre><code>CustomMaskWarning: Custom mask layers require a config and must override get_config. When loading, the custom mask layer must be passed to the custom_objects argument.
</code></pre>
<p>Originally I imagined this came from the metric and loss function, since the model cannot save and load these, presumably because of some sort of serialization issue.</p>
<pre><code>def masked_mae(y_true, y_pred):
# Mask NaN values, replace by 0
y_true = tf.where(tf.math.is_nan(y_true), y_pred, y_true)
# Calculate absolute differences
absolute_differences = tf.abs(y_true - y_pred)
# Compute the mean, ignoring potential NaN values (if any remain after replacement)
mae = tf.reduce_mean(absolute_differences)
return mae
</code></pre>
<p>I don't know how to deliver custom_objects in Java, and I'm done training, so I tried compiling the model with loss=None and metrics=None and saving it that way. The model can be loaded with this change, but the warning has not gone away. Is there a way to trace which layer is causing this and why? Is it just a nuisance? There are no obvious uses of masks.</p>
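<p>For reference, one generic way to trace any warning to its source is to escalate just that warning to an exception, so saving fails with a full traceback at the layer that triggers it (a sketch; the message regex only needs to match part of the warning text):</p>
<pre class="lang-py prettyprint-override"><code>import warnings

# Turn only this warning into an error; everything else keeps warning as usual.
warnings.filterwarnings("error", message=".*Custom mask layers require a config.*")

model.save("saved_model_dir")   # now raises, and the traceback shows which layer warned
</code></pre>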
|
<python><java><tensorflow>
|
2025-02-11 18:41:07
| 1
| 1,473
|
Eli S
|
79,430,893
| 20,302,906
|
How to assert an http.client.HTTPSConnection.request
|
<p>My code makes HTTP requests to the GitLab API to perform different actions such as creating projects, branches, milestones, etc. I'm not using external modules like <a href="https://requests.readthedocs.io/en/latest/" rel="nofollow noreferrer">requests</a> because I want to keep my project free of dependencies, and I haven't found the need to import any so far. That said, I'm trying to assert what my <a href="https://docs.python.org/3/library/http.client.html" rel="nofollow noreferrer">HTTPSConnection</a> requests with a mock in my tests, in this way:</p>
<p><em>gitlab_requests_tests.py</em></p>
<pre><code>def test_project_creation(self):
connection = http.client.HTTPSConnection("gitlab.com")
r = urllib.request.Request(
method="POST",
url="https://gitlab.com/api/v4/projects",
headers={
"PRIVATE-TOKEN": os.environ.get("ACCESS_TOKEN"),
"name": Path.cwd().name
},
)
glab_requests.create_project(connection)
with patch("http.client.HTTPSConnection.request") as https_mock:
https_mock.assert_called_with(r)
</code></pre>
<p>Which tests this code:</p>
<p><em>gitlab_requests.py</em></p>
<pre><code>def create_project(connection: http.client.HTTPSConnection):
header={
"PRIVATE-TOKEN": os.getenv("ACCESS_TOKEN", default="*"),
"name": Path.cwd().name
}
if re.search(r"[^\w-]", os.getenv("ACCESS_TOKEN", default="*")):
raise GlabRequestException("Invalid Access token format")
connection.request("POST", "/api/v4/projects", headers=header)
</code></pre>
<p>I know that asserting my request with <code>urllib.request.Request</code> isn't the right way to do it, because it creates a different request object from the one my source code produces.</p>
<p>How can I assert my request? What am I missing/doing wrong?</p>
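<p>For reference, a sketch of one straightforward approach that mirrors the test above: hand <code>create_project</code> a <code>MagicMock</code> standing in for the connection, then assert on the exact positional and keyword arguments the source code passes to <code>request</code> (rather than building a separate <code>urllib.request.Request</code>). The module alias <code>glab_requests</code> is assumed from the post.</p>
<pre class="lang-py prettyprint-override"><code>import http.client
import os
from pathlib import Path
from unittest.mock import MagicMock

import gitlab_requests as glab_requests  # module under test, import name assumed

def test_project_creation(self):
    connection = MagicMock(spec=http.client.HTTPSConnection)  # records calls, makes no network I/O
    glab_requests.create_project(connection)
    connection.request.assert_called_once_with(
        "POST",
        "/api/v4/projects",
        headers={
            "PRIVATE-TOKEN": os.getenv("ACCESS_TOKEN", default="*"),
            "name": Path.cwd().name,
        },
    )
</code></pre>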
|
<python><http><mocking><python-unittest>
|
2025-02-11 17:43:45
| 2
| 367
|
wavesinaroom
|
79,430,769
| 696,264
|
MySQL 5.5.33: (5010, "Authentication plugin 'mysql_old_password' couldn't be found in restricted_auth plugin list.")
|
<p>When I try to connect to a remote MySQL database via the Python library MySQLdb, I get the following error.</p>
<p><code>(5010, "Authentication plugin 'mysql_old_password' couldn't be found in restricted_auth plugin list.")</code></p>
<p><strong>Python Code</strong></p>
<pre><code>import MySQLdb
try:
mydb = MySQLdb.connect(
host = "test.corp.com",
user = "testuser",
passwd = "test2025",
db = "testingdb"
)
print("Database connection successful.")
except MySQLdb.Error as err:
print (err)
</code></pre>
<p>I couldn't find a solution anywhere. Please share one if you know it. Thanks in advance.</p>
|
<python><mysql><mysql-python>
|
2025-02-11 17:01:09
| 0
| 1,321
|
Madhan
|
79,430,731
| 12,115,498
|
Unable to launch Jupyter Notebook or update Anaconda
|
<p>I've been writing Python programs in Jupyter notebooks on my Windows 11 computer without any problems for the last few months, but today everything is suddenly not working: I tried to launch Jupyter Notebook from a <a href="https://medium.com/@msc.noguez/launch-jupyter-notebook-with-a-shortcut-windows-10-725e2b07b69e" rel="nofollow noreferrer">desktop shortcut</a> that I created, but nothing happened. (I double click the shortcut, wait, and...nothing happens.) Then I tried to launch
Anaconda using the green Anaconda shortcut on my desktop toolbar. Again, I click the shortcut, wait, and nothing happens. Having failed to launch Jupyter notebook, I tried launching the Spyder editor, but same story--nothing happens when I double click the desktop shortcut. Finally, I went into the Anaconda prompt and tried updating Anaconda with <code>conda update conda</code>, but got this error message:</p>
<pre><code>(base) C:\Users\mbarm>conda update conda
Error loading anaconda_anon_usage: No module named 'anaconda_anon_usage'
C:\Users\mbarm\anaconda3\lib\site-packages\_distutils_hack\__init__.py:33: UserWarning: Setuptools is replacing distutils.
warnings.warn("Setuptools is replacing distutils.")
# >>>>>>>>>>>>>>>>>>>>>> ERROR REPORT <<<<<<<<<<<<<<<<<<<<<<
Traceback (most recent call last):
File "C:\Users\mbarm\anaconda3\lib\site-packages\conda\exceptions.py", line 1129, in __call__
return func(*args, **kwargs)
File "C:\Users\mbarm\anaconda3\lib\site-packages\conda\cli\main.py", line 86, in main_subshell
exit_code = do_call(args, p)
File "C:\Users\mbarm\anaconda3\lib\site-packages\conda\cli\conda_argparse.py", line 91, in do_call
module = import_module(relative_mod, __name__.rsplit('.', 1)[0])
File "C:\Users\mbarm\anaconda3\lib\importlib\__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 850, in exec_module
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
File "C:\Users\mbarm\anaconda3\lib\site-packages\conda\cli\main_update.py", line 10, in <module>
from ..notices import notices
File "C:\Users\mbarm\anaconda3\lib\site-packages\conda\notices\__init__.py", line 4, in <module>
from .core import notices # noqa: F401
File "C:\Users\mbarm\anaconda3\lib\site-packages\conda\notices\core.py", line 15, in <module>
from . import http
File "C:\Users\mbarm\anaconda3\lib\site-packages\conda\notices\http.py", line 9, in <module>
import requests
ModuleNotFoundError: No module named 'requests'
`$ C:\Users\mbarm\anaconda3\Scripts\conda-script.py update conda`
environment variables:
CIO_TEST=<not set>
CONDA_DEFAULT_ENV=base
CONDA_EXE=C:\Users\mbarm\anaconda3\condabin\..\Scripts\conda.exe
CONDA_EXES="C:\Users\mbarm\anaconda3\condabin\..\Scripts\conda.exe"
CONDA_PREFIX=C:\Users\mbarm\anaconda3
CONDA_PROMPT_MODIFIER=(base)
CONDA_PYTHON_EXE=C:\Users\mbarm\anaconda3\python.exe
CONDA_ROOT=C:\Users\mbarm\anaconda3
CONDA_SHLVL=1
CURL_CA_BUNDLE=<not set>
HOMEPATH=\Users\mbarm
PATH=C:\Users\mbarm\anaconda3;C:\Users\mbarm\anaconda3\Library\mingw-w64\bi
n;C:\Users\mbarm\anaconda3\Library\usr\bin;C:\Users\mbarm\anaconda3\Li
brary\bin;C:\Users\mbarm\anaconda3\Scripts;C:\Users\mbarm\anaconda3\bi
n;C:\Users\mbarm\anaconda3\condabin;C:\WINDOWS\system32;C:\WINDOWS;C:\
WINDOWS\System32\Wbem;C:\WINDOWS\System32\WindowsPowerShell\v1.0;C:\WI
NDOWS\System32\OpenSSH;C:\Program Files\MATLAB\R2024b\bin;C:\Users\mba
rm\AppData\Local\Microsoft\WindowsApps;C:\Users\mbarm\AppData\Local\Pr
ograms\MiKTeX\miktex\bin\x64;.;C:\Users\mbarm\AppData\Local\Programs\J
ulia-1.10.0\bin
PSMODULEPATH=C:\Program Files\WindowsPowerShell\Modules;C:\WINDOWS\system32\Windows
PowerShell\v1.0\Modules
REQUESTS_CA_BUNDLE=<not set>
SSL_CERT_FILE=<not set>
active environment : base
active env location : C:\Users\mbarm\anaconda3
shell level : 1
user config file : C:\Users\mbarm\.condarc
populated config files : C:\Users\mbarm\.condarc
conda version : 22.9.0
conda-build version : 3.22.0
python version : 3.9.13.final.0
virtual packages : __win=0=0
__archspec=1=x86_64
base environment : C:\Users\mbarm\anaconda3 (writable)
conda av data dir : C:\Users\mbarm\anaconda3\etc\conda
conda av metadata url : None
channel URLs : https://repo.anaconda.com/pkgs/main/win-64
https://repo.anaconda.com/pkgs/main/noarch
https://repo.anaconda.com/pkgs/r/win-64
https://repo.anaconda.com/pkgs/r/noarch
https://repo.anaconda.com/pkgs/msys2/win-64
https://repo.anaconda.com/pkgs/msys2/noarch
package cache : C:\Users\mbarm\anaconda3\pkgs
C:\Users\mbarm\.conda\pkgs
C:\Users\mbarm\AppData\Local\conda\conda\pkgs
envs directories : C:\Users\mbarm\anaconda3\envs
C:\Users\mbarm\.conda\envs
C:\Users\mbarm\AppData\Local\conda\conda\envs
platform : win-64
user-agent : conda/22.9.0 requests/2.28.1 CPython/3.9.13 Windows/10 Windows/10.0.26100
administrator : False
netrc file : None
offline mode : False
An unexpected error has occurred. Conda has prepared the above report.
If submitted, this report will be used by core maintainers to improve
future releases of conda.
Would you like conda to send this report to the core maintainers? [y/N]: y
Upload did not complete.
</code></pre>
<p>I have tried restarting my computer and repeating the above steps, but still nothing launches. What's going wrong? Any suggestions would be greatly appreciated.</p>
|
<python><jupyter-notebook><anaconda><conda><spyder>
|
2025-02-11 16:45:43
| 0
| 783
|
Leonidas
|
79,430,573
| 8,188,435
|
How to make xarray.DataArray.to_zarr readable by napari?
|
<p>I have big TIFF-arrays that I want to save with xarray and view in napari. However, napari seems unable to read the zarr-format produced by xarray. Is there a way that I can specify the arguments of xarray.DataArray.to_zarr to make it readable by napari?</p>
<p>I know that dask.array.to_zarr is an option but I would prefer to use xarray.</p>
<p>Minimal reproducible example:</p>
<pre><code>import os
import numpy as np
import xarray as xa
arr = np.random.randint(0, 2**16-1, size=(100, 400, 400))
coords = {'z': np.arange(0, 100), 'y': np.arange(0, 400), 'x': np.arange(0, 400)}
da = xa.DataArray(arr, dims=['z', 'y', 'x'], coords=coords)
dir_save = os.getcwd()
path_save = os.path.join(dir_save, 'test.zarr')
da.to_zarr(path_save)
</code></pre>
<p>The traceback of the error that I get when I drag-and-drop it into napari is longer than the allowed length of posts, but maybe this part is a clue?</p>
<blockquote>
<p>File
C:\ProgramData\anaconda3\envs\napari_env\lib\site-packages\napari\layers\image_image_utils.py:94,
in guess_multiscale(data=[dask.array<from-zarr, shape=(100, 400, 400),
dty...hunksize=(25, 100, 100), chunktype=numpy.ndarray>,
dask.array<from-zarr, shape=(400,), dtype=int64, chunksize=(400,),
chunktype=numpy.ndarray>, dask.array<from-zarr, shape=(400,),
dtype=int64, chunksize=(400,), chunktype=numpy.ndarray>,
dask.array<from-zarr, shape=(100,), dtype=int64, chunksize=(100,),
chunktype=numpy.ndarray>])
93 if not consistent:
---> 94 raise ValueError(
trans = <napari.utils.translations.TranslationBundle object at 0x000002492857E5E0>
sizes = [16000000, 400, 400, 100]
95 trans._(
96 'Input data should be an array-like object, or a sequence of arrays of decreasing size. Got arrays in incorrect order,
sizes: {sizes}',
97 deferred=True,
98 sizes=sizes,
99 )
100 )
102 return True, MultiScaleData(data)</p>
<p>ValueError: Input data should be an array-like object, or a sequence
of arrays of decreasing size. Got arrays in incorrect order, sizes:
[16000000, 400, 400, 100]</p>
</blockquote>
<p>Package versions:</p>
<ul>
<li>conda-forge napari 0.5.5 hd8ed1ab_0</li>
<li>conda-forge napari-base 0.5.5 pyh9208f05_0</li>
<li>conda-forge napari-console 0.1.3 pyh73487a3_0</li>
<li>conda-forge napari-plugin-engine 0.2.0 pyha07c04f_3</li>
<li>conda-forge napari-plugin-manager 0.1.4 pyha07c04f_0</li>
<li>conda-forge napari-svg 0.2.1 pyha07c04f_0</li>
<li>conda-forge xarray 2025.1.1 pyhd8ed1ab_0</li>
</ul>
|
<python><python-3.x><python-xarray><zarr><python-napari>
|
2025-02-11 15:53:41
| 1
| 371
|
user8188435
|
79,430,458
| 11,402,025
|
Run sam applications using podman instead of docker on mac Ventura
|
<pre><code>podman version
Client: Podman Engine
Version: 5.3.2
API Version: 5.3.2
Go Version: go1.23.5
Built: Tue Jan 21 13:41:34 2025
OS/Arch: darwin/arm64
Server: Podman Engine
Version: 5.3.2
API Version: 5.3.2
Go Version: go1.23.4
Built: Tue Jan 21 19:00:00 2025
OS/Arch: linux/arm64
</code></pre>
<pre><code>$docker info
Client:
Context: podman
Debug Mode: false
Plugins:
buildx: Docker Buildx (Docker Inc., v0.9.1)
compose: Docker Compose (Docker Inc., v2.10.2)
extension: Manages Docker extensions (Docker Inc., v0.2.9)
sbom: View the packaged-based Software Bill Of Materials (SBOM) for an image (Anchore Inc., 0.6.0)
scan: Docker Scan (Docker Inc., v0.19.0)
Server:
ERROR: Cannot connect to the Docker daemon at unix:///Users/username/.local/share/containers/podman/machine/podman.sock. Is the docker daemon running?
</code></pre>
<p>When I try to run <code>sam local start-api</code>, podman is running but I still get the error:
"Running AWS SAM projects locally requires Docker. Have you got it installed and running?"</p>
|
<python><docker><podman><sam><podman-compose>
|
2025-02-11 15:16:14
| 0
| 1,712
|
Tanu
|
79,430,379
| 9,140
|
How can I get uv to Git pull dependencies on uv sync?
|
<p>In uv, I want to add a Git repository as a dependency, which is easy in itself, but when there is a new commit in the dependency's Git repository, I want <code>uv sync</code> to pull it. Basically, do a Git pull on the dependency's Git repository and then apply the changed code for the dependency to my current virtual env.</p>
<p>Here is what I tried:</p>
<pre class="lang-none prettyprint-override"><code>uv add git+https://github.com/PowerInsight/quantstats.git
</code></pre>
<p>Then I make a commit on <a href="https://github.com/PowerInsight/quantstats.git" rel="nofollow noreferrer">https://github.com/PowerInsight/quantstats.git</a> in a Python file. Then I run this in the repository that references <a href="https://github.com/PowerInsight/quantstats.git" rel="nofollow noreferrer">https://github.com/PowerInsight/quantstats.git</a></p>
<pre class="lang-none prettyprint-override"><code>uv sync
</code></pre>
<p>The file I changed in the Git repository never gets updated in my <em>.venv</em> folder for the referenced dependency: <code>.venv\Lib\site-packages\quantstats</code></p>
<p>Then tried the same thing with a specific branch:</p>
<pre class="lang-none prettyprint-override"><code>uv add git+https://github.com/PowerInsight/quantstats.git --branch main
</code></pre>
<p>It is the same problem; it does not get updated on commit.</p>
<p>Then I tried adding this to both the <code>pyproject.toml</code> of my main project and the dependency Git repository:</p>
<pre class="lang-ini prettyprint-override"><code>[tool.uv]
cache-keys = [{ git = { commit = true } }]
</code></pre>
<p>I also tried setting this package to always get reinstalled:</p>
<pre class="lang-ini prettyprint-override"><code>[tool.uv]
reinstall-package = ["quantstats"]
</code></pre>
<p>What do I need to do for uv to pull from the Git repository dependency on any commit?</p>
|
<python><uv>
|
2025-02-11 14:47:44
| 1
| 5,498
|
EtienneT
|
79,430,288
| 2,951,942
|
install ansible in official jenkins docker container without rebuild?
|
<p>I've been setting up the official Jenkins docker container in my lab, and I'm installing Ansible (and also Python if possible) as plugins.</p>
<p>I've installed the Ansible plugin and I need to set up/install the binary.
I'm trying to use the "Install plugin via script" functionality, which I think builds and installs at container runtime.</p>
<p>Any idea how I can achieve this?</p>
<p>I have seen many guides for rebuilding the container with Ansible installed inside, but is there any advice on how to use the function to install Ansible executable via the plugin?</p>
<p>I'm not looking to build a new container.</p>
|
<python><docker><jenkins><ansible><jenkins-plugins>
|
2025-02-11 14:18:24
| 2
| 571
|
Kareem
|
79,430,185
| 7,959,614
|
Generate all paths that consists of specified number of visits of nodes / edges
|
<p>In a graph/chain there are 3 different states: <code>ST</code>, <code>GRC_i</code> and <code>GRC_j</code>.
The following edges between the states exists:</p>
<pre><code>EDGES = [
# source, target, name
('ST', 'GRC_i', 'TDL_i'),
('ST', 'GRC_j', 'TDL_j'),
('GRC_i', 'GRC_j', 'RVL_j'),
('GRC_j', 'GRC_i', 'RVL_i'),
('GRC_j', 'ST', 'SUL_i'),
('GRC_i', 'ST', 'SUL_j'),
]
</code></pre>
<p><a href="https://i.sstatic.net/pBpI2qEf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pBpI2qEf.png" alt="enter image description here" /></a></p>
<p>The values for <code>TDL_i</code>, <code>TDL_j</code>, <code>RVL_i</code> and <code>RVL_j</code> are known.
The chain always starts in <code>ST</code> and the final state is always known.</p>
<p>I want to infer <code>SUL_i</code> and <code>SUL_j</code> based on possible paths that satisfy the known information.</p>
<p>For example if we have the following information:</p>
<pre><code>RVL_i = 2
RVL_j = 1
TDL_i = 0
TDL_j = 2
</code></pre>
<p>and the final position is <code>GRC_i</code> there are two paths that satisfy this criteria:</p>
<blockquote>
<ol>
<li>ST -> TDL_j -> GRC_j -> RVL_i -> GRC_i -> RVL_j -> GRC_j -> SUL_i
-> ST -> TDL_j -> GRC_j -> RVL_i -> GRC_i</li>
<li>ST -> TDL_j -> GRC_j -> SUL_i -> ST -> TDL_j -> GRC_j -> RVL_i -> GRC_i -> RVL_j -> GRC_j ->
RVL_i -> GRC_i</li>
</ol>
</blockquote>
<p>Because both paths imply that <code>SUL_i = 1</code> and <code>SUL_j = 0</code> we conclude that this is the case.</p>
<p>The following relationships are evident:</p>
<ul>
<li>The number of visits to <code>ST</code> is equal to <code>SUL_i + SUL_j + 1</code></li>
<li>The number of visits to <code>GRC_i</code> is equal to <code>TDL_i + RVL_i</code></li>
<li>The number of visits to <code>GRC_j</code> is equal to <code>TDL_j + RVL_j</code></li>
<li>The upper-bound of <code>SUL_i</code> is the number of visits to <code>GRC_j</code></li>
<li>The upper-bound of <code>SUL_j</code> is the number of visits to <code>GRC_i</code></li>
<li>The maximum total number of steps is <code>2 * (TDL_i + TDL_j + RVL_i + RVL_j)</code></li>
</ul>
<p>I was thinking to solve this as a mixed-integer program.</p>
<pre><code>import networkx as nx
import gurobipy as grb
from gurobipy import GRB
from typing import Literal
def get_SUL(TDL_i: int, TDL_j: int, RVL_i: int, RVL_j: int, final_state: Literal['ST', 'GRC_i', 'GRC_j']):
G = nx.DiGraph()
G.add_edges_from([
('ST', 'GRC_i'),
('ST', 'GRC_j'),
('GRC_i', 'GRC_j'),
('GRC_j', 'GRC_i'),
('GRC_j', 'ST'),
('GRC_i', 'ST')
])
n_actions = len(list(G.edges()))
n_states = len(list(G.nodes()))
    min_N = TDL_i + TDL_j + RVL_i + RVL_j
    max_N = 2 * (TDL_i + TDL_j + RVL_i + RVL_j)
for N in range(min_N, max_N + 1):
m = grb.Model()
SUL_i = m.addVar(lb=0, ub=TDL_j + RVL_j)
SUL_j = m.addVar(lb=0, ub=TDL_i + RVL_i)
# actions
actions = m.addMVar((n_actions, N), vtype=GRB.BINARY)
m.addConstr(actions[0,:].sum() == TDL_i)
m.addConstr(actions[1,:].sum() == TDL_j)
m.addConstr(actions[2,:].sum() == RVL_i)
m.addConstr(actions[3,:].sum() == RVL_j)
m.addConstr(actions[4,:].sum() == SUL_i)
m.addConstr(actions[5,:].sum() == SUL_j)
m.addConstrs(actions[:,n].sum() == 1 for n in range(N))
# states
states = m.addMVar((n_states, N), vtype=GRB.BINARY)
        m.addConstr(states[0,:].sum() == SUL_i + SUL_j + 1)
        m.addConstr(states[1,:].sum() == TDL_i + RVL_i)
        m.addConstr(states[2,:].sum() == TDL_j + RVL_j)
m.addConstr(states[0,0] == 1)
if final_state == 'ST':
m.addConstr(states[0,-1] == 1)
m.addConstr(states[1,-1] == 0)
m.addConstr(states[2,-1] == 0)
elif final_state == 'GRC_i':
m.addConstr(states[0,-1] == 0)
m.addConstr(states[1,-1] == 1)
m.addConstr(states[2,-1] == 0)
else:
m.addConstr(states[0,-1] == 0)
m.addConstr(states[1,-1] == 0)
m.addConstr(states[2,-1] == 1)
m.addConstrs(actions[:,n].sum() == 1 for n in range(N))
# additional constraints
</code></pre>
<p>How do I impose that the action and state variables are in agreement with each other? For example, the first action can only be <code>TDL_i</code> or <code>TDL_j</code> because we start in <code>ST</code>.
I can obtain the adjacency matrix using <code>nx.to_numpy_array(G)</code>, but how should I incorporate this into the model?</p>
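<p>For reference, a sketch of one way to encode the linking, continuing the model above (treating <code>actions[:, n]</code> as the transition taken out of step <code>n</code>; the row ordering is an assumption): an action may only fire from its source state, it forces the next step's state to be its target, and exactly one state is occupied at every step.</p>
<pre class="lang-py prettyprint-override"><code>node_idx = {'ST': 0, 'GRC_i': 1, 'GRC_j': 2}
edge_list = [('ST', 'GRC_i'), ('ST', 'GRC_j'), ('GRC_i', 'GRC_j'),
             ('GRC_j', 'GRC_i'), ('GRC_j', 'ST'), ('GRC_i', 'ST')]  # same order as the `actions` rows

for n in range(N):
    m.addConstr(states[:, n].sum() == 1)                              # exactly one state per step
    for e, (src, dst) in enumerate(edge_list):
        m.addConstr(actions[e, n] <= states[node_idx[src], n])        # fire only from the source state
        if n + 1 < N:
            m.addConstr(actions[e, n] <= states[node_idx[dst], n + 1])  # and land in the target state
</code></pre>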
|
<python><networkx><mixed-integer-programming>
|
2025-02-11 13:43:22
| 2
| 406
|
HJA24
|
79,429,531
| 14,282,714
|
Select the first and last row per group in Polars dataframe
|
<p>I'm trying to use a <code>polars</code> dataframe where I would like to select the <code>first</code> and <code>last</code> row per group. Here is a simple example selecting the first row per group:</p>
<pre><code>import polars as pl
df = pl.DataFrame(
{
"a": [1, 2, 2, 3, 4, 5],
"b": [0.5, 0.5, 4, 10, 14, 13],
"c": [True, True, True, False, False, True],
"d": ["Apple", "Apple", "Apple", "Banana", "Banana", "Banana"],
}
)
result = df.group_by("d", maintain_order=True).first()
print(result)
</code></pre>
<p>Output:</p>
<pre><code>shape: (2, 4)
┌────────┬─────┬──────┬───────┐
│ d ┆ a ┆ b ┆ c │
│ --- ┆ --- ┆ --- ┆ --- │
│ str ┆ i64 ┆ f64 ┆ bool │
╞════════╪═════╪══════╪═══════╡
│ Apple ┆ 1 ┆ 0.5 ┆ true │
│ Banana ┆ 3 ┆ 10.0 ┆ false │
└────────┴─────┴──────┴───────┘
</code></pre>
<p>This works well, and we can use <code>.last</code> to do it for the last row. But how can we combine these in one <code>group_by</code>?</p>
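<p>For reference, a sketch of one way to get both in a single <code>group_by</code> (reusing <code>df</code> from the snippet above; <code>Expr.name.suffix</code> assumes a recent Polars):</p>
<pre class="lang-py prettyprint-override"><code>result = df.group_by("d", maintain_order=True).agg(
    pl.all().first().name.suffix("_first"),
    pl.all().last().name.suffix("_last"),
)
print(result)
</code></pre>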
|
<python><dataframe><python-polars>
|
2025-02-11 09:57:15
| 3
| 42,724
|
Quinten
|
79,429,313
| 25,413,271
|
Python: Numpy shape typing
|
<p>I have a class with one method:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import numpy.typing as npt
import pyvista as pv
from typing import Tuple, TypeVar, Literal
class SurfaceTriangleMesh:
int_type = TypeVar("int_type", bound=int)
flt_type = TypeVar("flt_type", bound=float)
int_Shape2D = Tuple[int_type, int_type]
flt_Shape2D = Tuple[flt_type, flt_type]
int_Shape2DType = TypeVar("int_Shape2DType", bound=int_Shape2D)
flt_Shape2DType = TypeVar("flt_Shape2DType", bound=flt_Shape2D)
def __init__(self, mesh: pv.PolyData):
self.mesh = mesh
def add_new_points(self, points_to_add: npt.NDArray[flt_Shape2DType]):
"""
Add new points to surface mesh from attribute self.mesh
:param points_to_add: points-to-add array of shape [N * 3] of type float
:return: new surface mesh of type pyvista.PolyData
"""
pass
</code></pre>
<p>In the method <code>add_new_points</code> I want to pass a numpy array of shape <code>[*, 3]</code>, and I want to check the shape of this array before proceeding with operations. Is there any way of "automatic" shape assertion?<br />
I.e., can I use some classes or third-party libraries that let me declare the shape and dtype of a numpy array such that they are asserted automatically, so I don't need to write the assertions myself?</p>
<p>Or do I need to write assertions manually?</p>
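<p>For reference, static type annotations alone will not enforce array shapes at runtime; absent a third-party checker, a small hand-rolled validation at the method boundary is the simplest option. A sketch:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import numpy.typing as npt

def ensure_points_n_by_3(points: npt.NDArray[np.floating]) -> npt.NDArray[np.floating]:
    """Raise early if the array is not a float array of shape (N, 3)."""
    points = np.asarray(points)
    if points.ndim != 2 or points.shape[1] != 3:
        raise ValueError(f"expected shape (N, 3), got {points.shape}")
    if not np.issubdtype(points.dtype, np.floating):
        raise TypeError(f"expected a float dtype, got {points.dtype}")
    return points
</code></pre>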
|
<python><numpy><python-typing>
|
2025-02-11 08:28:13
| 0
| 439
|
IzaeDA
|
79,429,020
| 21,127,400
|
Is there a Python interpreter with type checking (with annotations) support?
|
<p>I use <code>mypy</code> to check type-related errors before running my code.
It's really good, but it cannot check at runtime.</p>
<p>But in some projects I use dynamic variables, and mypy cannot check them.</p>
<p>I know about <code>pydantic</code>, but that means extra code.</p>
<p>I need an interpreter that runs the same code, with type checking.</p>
<p>And I know <code>mypyc</code> too, but I need an interpreter, not a compiler.</p>
<h2>EDIT:</h2>
<p>I checked <a href="https://stackoverflow.com/q/43646823/21127400">that</a> question, but I'm looking for an interpreter. I have some projects already written, and I won't be able to refactor them. So I need an interpreter, not a library.</p>
<p>And I'm asking "Is there any...", not what you would recommend. I searched and couldn't find one. I don't think this question is opinion-based.</p>
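<p>For reference, one hedged option that avoids refactoring the project itself (assuming the third-party <code>typeguard</code> package): its import hook instruments annotated functions when their modules are imported, so a small launcher is the only new code.</p>
<pre class="lang-py prettyprint-override"><code># run_checked.py -- hypothetical launcher; the package name "myproject" is a placeholder
from typeguard import install_import_hook

install_import_hook("myproject")   # must run before the package is imported

import myproject.main
myproject.main.run()               # annotated calls inside myproject now raise on type mismatches
</code></pre>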
|
<python><python-typing><mypy>
|
2025-02-11 06:06:04
| 0
| 402
|
0x01010
|
79,428,873
| 1,149,534
|
problems uploading a Kivy app to WebHostPython
|
<p>I'm a total newbie with Python and Kivy. I like the way I can create and test a Kivy app using VS. I have an account at WebHostPython and want to upload my simple Kivy app (which runs in VS) to my account at WebHostPython and run it from my browser. Has anyone done this? I'm having no luck.</p>
|
<python><kivy>
|
2025-02-11 04:08:24
| 1
| 507
|
bobonwhidbey
|
79,428,677
| 8,800,836
|
Batch make_smoothing_spline in scipy
|
<p>In <code>scipy</code>, the function <a href="https://docs.scipy.org/doc/scipy-1.15.1/reference/generated/scipy.interpolate.make_interp_spline.html#scipy.interpolate.make_interp_spline" rel="nofollow noreferrer"><code>scipy.interpolate.make_interp_spline()</code></a> can be batched since its <code>x</code> argument must be one-dimensional with shape <code>(m,)</code> and its <code>y</code> argument can have shape <code>(m, ...)</code>.</p>
<p>However, the function <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.make_smoothing_spline.html#scipy.interpolate.make_smoothing_spline" rel="nofollow noreferrer"><code>scipy.interpolate.make_smoothing_spline()</code></a> only accepts a <code>y</code> argument of shape <code>(m,)</code>.</p>
<p>Is there a simple way to batch the behavior of <code>make_smoothing_spline()</code> so it has the same behavior as <code>make_interp_spline()</code>?</p>
<p>I was thinking of using <code>numpy.vectorize()</code>, but here I'm not batching operations on an array, I need a single function as output.</p>
<p>I guess I could just implement a loop and make a nested list of splines, but I was wondering if there would be a neater way.</p>
<p>Probably some combination of decorators but I'm twisting my brain in knots...</p>
<p>EDIT: Developers seem to be aware of this issue <a href="https://github.com/scipy/scipy/issues/22118#issuecomment-2639755668" rel="nofollow noreferrer">here</a>.</p>
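<p>For reference, a plain-loop sketch that wraps the per-column splines behind one callable, mimicking the batched interface of <code>make_interp_spline</code>:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from scipy.interpolate import make_smoothing_spline

def make_smoothing_splines(x, y, lam=None):
    """x has shape (m,), y has shape (m, ...); returns a callable evaluating all splines."""
    y2d = np.reshape(y, (y.shape[0], -1))                       # flatten trailing dims
    splines = [make_smoothing_spline(x, col, lam=lam) for col in y2d.T]

    def evaluate(xnew):
        out = np.stack([s(xnew) for s in splines], axis=-1)     # (..., n_splines)
        return out.reshape(np.shape(xnew) + y.shape[1:])        # restore trailing dims

    return evaluate

# usage: f = make_smoothing_splines(x, y); ynew = f(xnew)
</code></pre>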
|
<python><scipy><interpolation><spline>
|
2025-02-11 01:28:29
| 1
| 539
|
Ben
|
79,428,658
| 1,245,659
|
create a profile record when creating a user in Django
|
<p>I'm trying to solve this error:</p>
<pre><code>TypeError: Profile.save() got an unexpected keyword argument 'force_insert'
</code></pre>
<p>signals.py</p>
<pre><code>from django.db.models.signals import post_save
from django.dispatch import receiver
from django.contrib.auth.models import User
from .models import Profile
@receiver(post_save, sender=User)
def create_user_profile(sender, instance, created, **kwargs):
if created:
Profile.objects.create(user=instance)
@receiver(post_save, sender=User)
def save_user_profile(sender, instance, **kwargs):
instance.profile.save()
</code></pre>
<p>model.py</p>
<pre><code>class Profile(models.Model):
# Managed fields
user = models.OneToOneField(User, related_name="profile", on_delete=models.CASCADE)
memberId = models.CharField(unique=True, max_length=15, null=False, blank=False, default=GenerateFA)
</code></pre>
<p>The objective is to create a profile record when a user is created.<br />
Appreciate any wisdom on this.</p>
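<p>For reference, that traceback usually means a custom <code>save()</code> somewhere on <code>Profile</code> (or a mixin) that doesn't accept Django's internal arguments; the model shown above doesn't override <code>save()</code>, so if one exists elsewhere, a sketch of the expected signature is:</p>
<pre class="lang-py prettyprint-override"><code>class Profile(models.Model):
    user = models.OneToOneField(User, related_name="profile", on_delete=models.CASCADE)
    memberId = models.CharField(unique=True, max_length=15, default=GenerateFA)

    def save(self, *args, **kwargs):
        # custom logic here
        super().save(*args, **kwargs)   # forwards force_insert, using, update_fields, ...
</code></pre>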
|
<python><django>
|
2025-02-11 01:10:35
| 1
| 305
|
arcee123
|
79,428,655
| 26,428
|
Where in a MySQL statement can I use pyformat variables/parameters?
|
<p>Here's an example that doesn't work:</p>
<pre class="lang-none prettyprint-override"><code>stmt = '''
SELECT `date_new`, `event`
FROM `events`
WHERE `date_new` > TIMESTAMP(DATE_SUB(NOW(), INTERVAL %(days)s day))
'''
args = {'days': 30}
c.execute(stmt, args)
</code></pre>
<p>which gives me an error: <code>ValueError: Could not process parameters</code>.</p>
<p>A query that includes something like this:</p>
<p><code>WHERE quantity = %(qty)s</code></p>
<p>works just fine.</p>
<p>I did find this: <a href="https://stackoverflow.com/a/44315579/26428">What parts of a SQL query are allowed to be parameterized?</a> But that leaves the question in the specific example above: What about the argument to <code>INTERVAL</code> makes it not work?</p>
<p>I have found lots of documentation about the basics of using pyformat variables/parameters. I think part of the problem I'm having finding something is that it involves the intersection of Python and MySQL via mysql.connector.</p>
<p>I'd like a pointer to some documentation about what parts of queries allow their use and what ones don't, specifically for MySQL.</p>
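<p>For reference, a sketch of a workaround that sidesteps the question of where parameters are allowed: compute the cutoff timestamp in Python and bind that single value, which is a plain comparison the connector definitely supports.</p>
<pre class="lang-py prettyprint-override"><code>from datetime import datetime, timedelta

stmt = '''
    SELECT `date_new`, `event`
    FROM `events`
    WHERE `date_new` > %(cutoff)s
'''
c.execute(stmt, {'cutoff': datetime.now() - timedelta(days=30)})
</code></pre>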
|
<python><parameters><mysql-connector-python>
|
2025-02-11 01:08:51
| 0
| 363,512
|
Dennis Williamson
|
79,428,650
| 13,746,021
|
`map` causing infinite loop in Python 3
|
<p>I have the following code:</p>
<pre><code>def my_zip(*iterables):
iterators = tuple(map(iter, iterables))
while True:
yield tuple(map(next, iterators))
</code></pre>
<p>When <code>my_zip</code> is called, it just creates an infinite loop and never terminates. If I insert a print statement, it is revealed that <code>my_zip</code> is infinitely yielding empty tuples!</p>
<p>My expectation was that something inside <code>my_zip</code> would eventually raise <code>StopIteration</code>.</p>
<p>However, the (supposedly behaviorally) equivalent code with a generator expression instead works fine:</p>
<pre><code>def my_genexp_zip(*iterables):
iterators = tuple(iter(it) for it in iterables)
while True:
try:
yield tuple(next(it) for it in iterators)
except:
print("exception caught!")
return
</code></pre>
<p>Why is the function with <code>map</code> not behaving as expected? (Or, if it is expected behavior, how could I modify its behavior to match that of the function using the generator expression?)</p>
<p>I am testing with the following code:</p>
<pre><code>print(list(my_genexp_zip(range(5), range(0, 10, 2))))
print(list(my_zip(range(5), range(0, 10, 2))))
</code></pre>
|
<python><python-3.x><generator>
|
2025-02-11 01:05:49
| 2
| 361
|
Rusty
|
79,428,431
| 1,419,711
|
VSCode rename symbol ignores references in other files
|
<p>I have a class attribute <code>account_values</code> that occurs multiple times in class <code>MyClass</code>, all in the same file <em>file1.py</em>. It is also referenced multiple times in <em>file2.py</em>. All the references in <em>file1.py</em> are in class method definitions and are of the form <code>self.account_value</code>. The references in <em>file2.py</em> are of the form <code>instance.account_value</code>. If I hit F2 on an occurrence in <em>file1.py</em> I can rename all the occurrences of the attribute, but the occurrences are unchanged in <em>file2.py</em>. If I go to <em>file2.py</em> in the editor and hit F2, I get a message that the symbol cannot be renamed. (I can just type a new name in place, however). The files are in a workspace, in the same folder on the file system. Language server is Pylance.</p>
<p>Is there any way to get renaming across the entire workspace in cases like this?</p>
|
<python><visual-studio-code><refactoring>
|
2025-02-10 22:10:18
| 0
| 791
|
Llaves
|
79,428,043
| 1,247,136
|
Why is `ast.Assign.targets` a list?
|
<p><code>ast.Assign.targets</code> is a list</p>
<pre class="lang-py prettyprint-override"><code>a, b = c, d
</code></pre>
<p>Yields following AST:</p>
<pre class="lang-py prettyprint-override"><code>Assign(
targets=[
Tuple(
elts=[
Name(id='a', ctx=Store()),
Name(id='b', ctx=Store())],
ctx=Store())],
value=Tuple(
elts=[
Name(id='c', ctx=Load()),
Name(id='d', ctx=Load())],
ctx=Load()))
</code></pre>
<p>Under what conditions would <code>targets</code> be a list with multiple elements instead of a single <code>Tuple</code>?</p>
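<p>For reference, chained assignment is one case that produces multiple elements, while tuple unpacking stays a single <code>Tuple</code> target:</p>
<pre class="lang-py prettyprint-override"><code>import ast

print(ast.dump(ast.parse("a = b = 1").body[0], indent=2))
# Assign(
#   targets=[
#     Name(id='a', ctx=Store()),
#     Name(id='b', ctx=Store())],
#   value=Constant(value=1))
</code></pre>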
|
<python><python-ast>
|
2025-02-10 18:57:35
| 1
| 1,626
|
mr0re1
|
79,428,023
| 1,643,257
|
Can I use inline script dependencies to a `project.scripts` endpoint?
|
<p>I have a Python project that uses <code>uv</code> for building and running.</p>
<p>Under <code>src/module/cli/my_script.py</code> I have a script like this:</p>
<pre><code>#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.12"
# dependencies = [
# "numpy",
# ]
# ///
import numpy
def main():
print(numpy.__path__) # just prove that we've loaded it
if __name__ == "__main__":
main()
</code></pre>
<p>My <code>pyproject.toml</code> defines this:</p>
<pre><code>[project.scripts]
my_script = "module.cli.my_script:main"
</code></pre>
<p>If I run just the script, since I have <code>uv</code> in the shebang line, it automatically parses and installs numpy and it runs correctly, but when I install the project and use <code>my_script</code>, it directly runs the method <code>main()</code> and the inline dependencies aren't used.</p>
<p>Is there any way to make a project with multiple scripts, each potentially with its own dependencies, while also being able to install them together (with something like <code>uv install my-project</code>)?
All I need is for <code>install</code> to symlink the scripts into the user's bin folder as part of the installation, but I couldn't find a standard way to declare that.</p>
|
<python><uv>
|
2025-02-10 18:45:38
| 0
| 3,000
|
Zach Moshe
|
79,427,979
| 12,115,498
|
Importing one Google Colab notebook into another
|
<p>I am using Google Colab for the first time, and I am having trouble importing one Colab notebook into another. I have created two new notebooks in my Colab Notebooks folder titled <code>NB1.ipynb</code> and <code>NB2.ipynb</code>. I wish to import the latter from the former, i.e. I want to access all of the functions and variables in NB2.ipynb from NB1.ipynb. I have tried the following code as suggested by Gemini:</p>
<pre><code>import nbimporter
import sys
sys.path.append('/content/drive/MyDrive/Colab Notebooks')
from NB2 import *
</code></pre>
<p>This produced the following error message:</p>
<pre><code>ModuleNotFoundError Traceback (most recent call last)
<ipython-input-15-a10fc15ee9a7> in <cell line: 0>()
1 # Import the desired notebook as a module
----> 2 from NB2 import *
3
4 # Now you can access variables, functions, and classes defined in NB2
ModuleNotFoundError: No module named 'NB2'
</code></pre>
<p>I'm a bit confused as to why Colab can't find NB2.ipynb, as both files are in the same folder. What's the correct way to import NB2.ipynb?</p>
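<p>For reference, a sketch of the usual checklist (assuming Drive is where the notebooks live): mount Drive first so the appended path actually exists, then let <code>nbimporter</code> resolve the <code>.ipynb</code> file as a module.</p>
<pre class="lang-py prettyprint-override"><code>from google.colab import drive
drive.mount('/content/drive')          # without this, the Drive path does not exist in the VM

import sys
sys.path.append('/content/drive/MyDrive/Colab Notebooks')

import nbimporter                       # enables importing .ipynb files as modules
import NB2                              # now resolvable; `from NB2 import *` also works
</code></pre>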
|
<python><jupyter-notebook><google-colaboratory>
|
2025-02-10 18:24:45
| 3
| 783
|
Leonidas
|
79,427,974
| 1,473,517
|
Can `spawn` be made as memory efficient as `fork` with multiprocessing?
|
<p>I am on Linux and have working multiprocessing code that uses fork. Here is a MWE version:</p>
<pre><code>from multiprocessing import Pool
from time import perf_counter as now
import numpy as np
def make_func():
n = 20000
np.random.seed(6)
start = now()
M = np.random.rand(n, n)
return lambda x, y: M[x, x] + M[y, y]
class ParallelProcessor:
def __init__(self):
pass
def process_task(self, args):
"""Unpack arguments internally"""
index, integer_arg = args
print(f(index, integer_arg))
def run_parallel(self, tasks, num_cores=None):
"""Simplified parallel execution without partial"""
num_cores = num_cores
task_args = [(idx, val) for idx, val in enumerate(tasks)]
start = now()
global f
f = make_func()
print(f"************** {now() - start} seconds to make f")
start = now()
with Pool(num_cores) as pool:
results = pool.map( self.process_task, task_args)
print(f"************** {now() - start} seconds to run all jobs")
return results
if __name__ == "__main__":
processor = ParallelProcessor()
processor.run_parallel(tasks=[1, 2, 3, 4, 5, 6], num_cores=6)
</code></pre>
<p>This relies on the fact that the global <code>f</code> is shared with the workers via fork, and since the numpy array is not modified by them, it is not copied.</p>
<p>In 3.14, multiprocessing on Linux will move away from <code>fork</code> as the default, so in anticipation I made a version using <code>spawn</code>.</p>
<pre><code>from multiprocessing import Pool, RawArray, set_start_method
from time import perf_counter as now
import numpy as np
def init_worker(shared_array_base, shape):
"""Initializer function to set up shared memory for each worker"""
global M
M = np.frombuffer(shared_array_base, dtype=np.float64).reshape(shape)
def worker_task(args):
"""Worker function that reconstructs f using shared memory"""
index, integer_arg = args
result = M[index, index] + M[integer_arg, integer_arg]
print(result)
return result
class ParallelProcessor:
def __init__(self):
pass
def run_parallel(self, tasks, num_cores=None):
"""Run tasks in parallel using spawn and shared memory"""
set_start_method('spawn', force=True) # Ensure 'spawn' is used
n = 20000
shape = (n, n)
# Use 'd' for double-precision float (float64) instead of np.float64
shared_array_base = RawArray('d', n * n)
M_local = np.frombuffer(shared_array_base, dtype=np.float64).reshape(shape)
# Initialize the array in the main process
np.random.seed(7)
start = now()
M_local[:] = np.random.rand(n, n)
print(f"************** {now() - start} seconds to make M")
# Prepare arguments for worker tasks
task_args = [(idx, val) for idx, val in enumerate(tasks)]
start = now()
with Pool(num_cores, initializer=init_worker, initargs=(shared_array_base, shape)) as pool:
results = pool.map(worker_task, task_args)
print(f"************** {now() - start} seconds to run all jobs")
return results
if __name__ == "__main__":
processor = ParallelProcessor()
processor.run_parallel(tasks=[1, 2, 3, 4, 5, 6], num_cores=6)
</code></pre>
<p>Unfortunately, not only is this slower, it also seems to use much more memory. Maybe it is making a copy of the numpy array?</p>
<p>Is there a way to make the <code>spawn</code> version as efficient as the <code>fork</code> version?</p>
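<p>For reference, a sketch using <code>multiprocessing.shared_memory</code> (Python 3.8+), which is designed to work with <code>spawn</code>: only the block's name is pickled to the workers, so the 3.2 GB array itself is never copied (note that filling it from <code>np.random.rand</code> still allocates one temporary array of the same size in the parent).</p>
<pre class="lang-py prettyprint-override"><code>from multiprocessing import Pool, shared_memory, set_start_method
import numpy as np

def init_worker(shm_name, shape):
    global M, _shm
    _shm = shared_memory.SharedMemory(name=shm_name)   # attach by name, nothing is copied
    M = np.ndarray(shape, dtype=np.float64, buffer=_shm.buf)

def worker_task(args):
    i, j = args
    return M[i, i] + M[j, j]

if __name__ == "__main__":
    set_start_method("spawn", force=True)
    n = 20000
    shm = shared_memory.SharedMemory(create=True, size=n * n * 8)
    M = np.ndarray((n, n), dtype=np.float64, buffer=shm.buf)
    np.random.seed(6)
    M[:] = np.random.rand(n, n)                         # the only full-size copy lives in shared memory
    try:
        with Pool(6, initializer=init_worker, initargs=(shm.name, (n, n))) as pool:
            print(pool.map(worker_task, [(i, v) for i, v in enumerate([1, 2, 3, 4, 5, 6])]))
    finally:
        shm.close()
        shm.unlink()
</code></pre>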
|
<python><multiprocessing>
|
2025-02-10 18:23:19
| 1
| 21,513
|
Simd
|
79,427,869
| 15,842
|
Default filter expression to "match anything"
|
<p>What kind of polars expression (<code>pl.Expr</code>) might be used in a filter context that will <em>match anything</em> including nulls?</p>
<p>Use case: type hinting and helper functions that should return a <code>polars.Expr</code>.</p>
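<p>For reference, a sketch of the kind of default meant here: <code>pl.lit(True)</code> evaluates to true for every row, so it keeps everything, including rows that contain nulls in other columns.</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl

def apply_filter(df: pl.DataFrame, predicate: pl.Expr = pl.lit(True)) -> pl.DataFrame:
    return df.filter(predicate)

df = pl.DataFrame({"a": [1, None, 3]})
print(apply_filter(df))                      # all three rows survive the default predicate
print(apply_filter(df, pl.col("a") > 1))     # only the row with a == 3
</code></pre>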
|
<python><python-polars>
|
2025-02-10 17:36:23
| 1
| 21,402
|
Gregg Lind
|
79,427,742
| 407,762
|
How do I add missing imports to a Python module using ast-grep?
|
<p>I have the following <a href="https://ast-grep.github.io/" rel="nofollow noreferrer"><code>ast-grep</code></a> rule to replace some HTTP status integer literals in my code with constants:</p>
<pre><code>id: replace_http_status_with_constants
language: python
rule:
all:
- regex: "^(200|201|204|302|400|401|403|404|500|502|503|504)$"
- any:
- pattern: "$STATUS"
kind: integer
inside:
kind: "comparison_operator"
- pattern: "$STATUS"
kind: integer
inside:
kind: "keyword_argument"
has:
kind: identifier
regex: "status.*"
- pattern: "$STATUS"
kind: integer
inside:
kind: "assignment"
has:
stopBy: end
kind: identifier
regex: "status*"
- pattern: "$STATUS"
kind: "integer"
inside:
kind: "decorator"
stopBy:
kind: decorator
has:
kind: "identifier"
regex: "pytest|extend_schema"
stopBy: end
rewriters:
- id: HTTP_200_OK
rule:
regex: "200"
fix: "HTTP_200_OK"
- id: HTTP_201_CREATED
rule:
regex: "201"
fix: "HTTP_201_CREATED"
- id: HTTP_204_NO_CONTENT
rule:
regex: "204"
fix: "HTTP_204_NO_CONTENT"
- id: HTTP_302_FOUND
rule:
regex: "302"
fix: "HTTP_302_FOUND"
- id: HTTP_400_BAD_REQUEST
rule:
regex: "400"
fix: "HTTP_400_BAD_REQUEST"
- id: HTTP_401_UNAUTHORIZED
rule:
regex: "401"
fix: "HTTP_401_UNAUTHORIZED"
- id: HTTP_403_FORBIDDEN
rule:
regex: "403"
fix: "HTTP_403_FORBIDDEN"
- id: HTTP_404_NOT_FOUND
rule:
regex: "404"
fix: "HTTP_404_NOT_FOUND"
- id: HTTP_500_INTERNAL_SERVER_ERROR
rule:
regex: "500"
fix: "HTTP_500_INTERNAL_SERVER_ERROR"
- id: HTTP_502_BAD_GATEWAY
rule:
regex: "502"
fix: "HTTP_502_BAD_GATEWAY"
- id: HTTP_503_SERVICE_UNAVAILABLE
rule:
regex: "503"
fix: "HTTP_503_SERVICE_UNAVAILABLE"
- id: HTTP_504_GATEWAY_TIMEOUT
rule:
regex: "504"
fix: "HTTP_504_GATEWAY_TIMEOUT"
transform:
NEW_STATUS:
rewrite:
source: "$STATUS"
rewriters:
- HTTP_200_OK
- HTTP_201_CREATED
- HTTP_204_NO_CONTENT
- HTTP_302_FOUND
- HTTP_400_BAD_REQUEST
- HTTP_401_UNAUTHORIZED
- HTTP_403_FORBIDDEN
- HTTP_404_NOT_FOUND
- HTTP_500_INTERNAL_SERVER_ERROR
- HTTP_502_BAD_GATEWAY
- HTTP_503_SERVICE_UNAVAILABLE
- HTTP_504_GATEWAY_TIMEOUT
fix: $NEW_STATUS
</code></pre>
<p>Is there a way to make it so I can also create any missing imports for these constants?</p>
<p>So far I've come up with this, but it can't really include <em>only</em> missing imports</p>
<pre><code>id: ensure_http_status_imports
language: python
rule:
kind: "module"
pattern: $MODULE
all:
- regex: "HTTP_200_OK|HTTP_201_CREATED|HTTP_204_NO_CONTENT|HTTP_302_FOUND|HTTP_400_BAD_REQUEST|HTTP_401_UNAUTHORIZED|HTTP_403_FORBIDDEN|HTTP_404_NOT_FOUND|HTTP_500_INTERNAL_SERVER_ERROR|HTTP_502_BAD_GATEWAY|HTTP_503_SERVICE_UNAVAILABLE|HTTP_504_GATEWAY_TIMEOUT"
- has:
pattern: $IDENTIFIER
stopBy: end
kind: identifier
not:
inside:
kind: attribute
- not:
has:
pattern: $IDENTIFIER
stopBy: end
kind: "import_from_statement"
has:
kind: identifier
stopBy: end
fix: |
from rest_framework.status import (
HTTP_200_OK,
HTTP_201_CREATED,
HTTP_204_NO_CONTENT,
HTTP_302_FOUND,
HTTP_400_BAD_REQUEST,
HTTP_401_UNAUTHORIZED,
HTTP_403_FORBIDDEN,
HTTP_404_NOT_FOUND,
HTTP_500_INTERNAL_SERVER_ERROR,
HTTP_502_BAD_GATEWAY,
HTTP_503_SERVICE_UNAVAILABLE,
HTTP_504_GATEWAY_TIMEOUT,
)
$MODULE
</code></pre>
|
<python><ast-grep>
|
2025-02-10 16:45:27
| 1
| 3,138
|
armonge
|
79,427,599
| 1,914,781
|
Simplify replace statement with regex
|
<p>I would like to replace <code>[</code> with <code>{</code> and <code>]</code> with <code>}</code>. The code below works, but I wish to simplify it to a single line (maybe with a regex?).</p>
<pre><code>s = """
[[0,0],[0,1],[1,2],[2,1]]
"""
s = s.replace("[","{")
s = s.replace("]","}")
print(s)
</code></pre>
<p>output:</p>
<pre><code>{{0,0},{0,1},{1,2},{2,1}}
</code></pre>
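<p>For comparison, two one-line variants that produce the same output on the sample string (a sketch, not benchmarked): <code>str.translate</code> maps both characters in a single pass, and <code>re.sub</code> with a callback handles both brackets in one expression.</p>
<pre><code>import re

s = "[[0,0],[0,1],[1,2],[2,1]]"
# single-pass character mapping, no regex needed
print(s.translate(str.maketrans("[]", "{}")))
# regex alternative: one substitution handling both brackets
print(re.sub(r"[\[\]]", lambda m: "{" if m.group() == "[" else "}", s))
</code></pre>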
|
<python>
|
2025-02-10 15:43:39
| 0
| 9,011
|
lucky1928
|
79,427,446
| 8,800,836
|
Interdependent bounds in scipy minimize
|
<p>Say I have a function <code>f(x,y,z)</code> I want to minimize using <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html" rel="nofollow noreferrer"><code>scipy.optimize.minimize</code></a>. I want to minimize it subject to the constraint <code>x < y < z</code>.</p>
<p>I don't think I can use the bounds argument to do this because it does not accept variable-dependent bounds (am I wrong?)</p>
<p>Another option is to redefine my function so that it is large when the inequality is not satisfied:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
f = lambda x,y,z, : x**2 + y**2 + z**2
def new_f(x,y,z):
if x < y < z:
res = f(x,y,z)
else:
res = np.inf
return res
</code></pre>
<p>and that should work fine for optimizers that are not gradient-based, at least.</p>
<p>But I was wondering if there were opinions about the most proper and robust way to do this.</p>
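<p>For reference, a minimal sketch of the constraint-based alternative I am comparing against, assuming the strict inequality can be approximated with a small margin <code>eps</code>:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from scipy.optimize import minimize

f = lambda v: v[0]**2 + v[1]**2 + v[2]**2

eps = 1e-9  # small margin so x < y < z is not reduced to equality
cons = [
    {"type": "ineq", "fun": lambda v: v[1] - v[0] - eps},  # y - x >= eps
    {"type": "ineq", "fun": lambda v: v[2] - v[1] - eps},  # z - y >= eps
]
res = minimize(f, x0=np.array([0.0, 1.0, 2.0]), constraints=cons, method="SLSQP")
print(res.x)
</code></pre>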
|
<python><scipy><constraints><scipy-optimize-minimize><fitbounds>
|
2025-02-10 14:49:59
| 0
| 539
|
Ben
|
79,427,445
| 16,383,578
|
How to use Python to install fonts on Windows without reboot?
|
<p>I have seen some partial duplicates on the same topic, but all of those I have seen require a reboot for the installation to take effect.</p>
<p>I am trying to download different fonts and install them in batch; double-clicking the fonts one by one and clicking Install works, but that is of course very slow.</p>
<p>I have observed that installing a font is just copying the font to the Windows Fonts directory and updating the registry.</p>
<p>I used the following code to simulate the process in batch:</p>
<pre><code>from fontTools.ttLib import TTFont
import winreg
import os
def list_fonts(folder):
result = {}
for file in os.scandir(folder):
if file.is_file() and file.name.endswith((".otf", ".ttf")):
path = file.path.replace("\\", "/")
name = TTFont(path)["name"]
result.setdefault(name.getDebugName(1), {})[name.getDebugName(2)] = (
name.getDebugName(4),
path,
)
return result
HKLM = winreg.ConnectRegistry(None, winreg.HKEY_LOCAL_MACHINE)
FONTS_KEY = winreg.OpenKey(
HKLM,
r"SOFTWARE\Microsoft\Windows NT\CurrentVersion\Fonts",
0,
winreg.KEY_ALL_ACCESS,
)
def install_fonts(folder):
for styles in list_fonts(folder).values():
for font_name, path in styles.values():
name = path.split("/")[-1]
with (
open(path, "rb") as reader,
open(f"C:/Windows/Fonts/{name}", "wb") as writer,
):
writer.write(reader.read())
winreg.SetValueEx(
FONTS_KEY,
font_name + " (TrueType)" * name.endswith(".ttf"),
0,
winreg.REG_SZ,
name,
)
</code></pre>
<p>It works, but it has a big problem: the fonts aren't registered as installed until the computer is restarted. I am downloading and installing on the fly, so this process isn't ideal.</p>
<p>So how can I force the font installation update without rebooting?</p>
<hr />
<p>This is just an example, download <a href="https://www.1001fonts.com/noto-sans-font.html" rel="nofollow noreferrer">Noto Sans</a>, extract the archive, and try to use my code to install the fonts.</p>
<p>After installation, refreshing C:\Windows\Fonts doesn't make the fonts show up; they only appear there after rebooting.</p>
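<p>For context, a sketch of the direction I have seen suggested elsewhere (untested on my side): register the file for the current session with <code>AddFontResourceW</code> and then broadcast <code>WM_FONTCHANGE</code> so running applications refresh their font tables.</p>
<pre><code>import ctypes
from ctypes import wintypes

GDI32 = ctypes.WinDLL("gdi32", use_last_error=True)
USER32 = ctypes.WinDLL("user32", use_last_error=True)

HWND_BROADCAST = 0xFFFF
WM_FONTCHANGE = 0x001D
SMTO_ABORTIFHUNG = 0x0002

def notify_font_installed(font_path):
    # Make the font available to the current session (returns 0 on failure)
    if GDI32.AddFontResourceW(font_path) == 0:
        raise OSError(f"AddFontResourceW failed for {font_path}")
    # Tell top-level windows that the font table changed
    result = wintypes.DWORD()
    USER32.SendMessageTimeoutW(
        HWND_BROADCAST, WM_FONTCHANGE, 0, 0,
        SMTO_ABORTIFHUNG, 1000, ctypes.byref(result),
    )
</code></pre>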
|
<python><windows><fonts>
|
2025-02-10 14:49:56
| 2
| 3,930
|
Ξένη Γήινος
|
79,427,260
| 1,957,820
|
python systemd timer every 2.5 hrs past the hour, but it runs every 2 hrs
|
<p>I am forced/bound to use Python 2.7. I have created a systemd timer (and a .service unit of course) where the timer should run (24/7) every 2.5 hours past the hour (so starting at 00:00, 02:30, 05:00, 07:30, 10:00, etc.). With the source below the timer runs, but (according to systemctl list-timers) it unfortunately runs every 2 hours.</p>
<p>What is wrong with this systemd timer file:</p>
<pre><code>[Unit]
Description=Run every 2:30 Hours
[Timer]
OnCalendar=00/2:30
Persistent=true
[Install]
WantedBy=timers.target
</code></pre>
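<p>For what it's worth, a sketch of the workaround I am considering, assuming the schedule is meant to restart from midnight every day (systemd accepts multiple <code>OnCalendar=</code> lines): full hours at 00, 05, 10, 15, 20 and half past at 02, 07, 12, 17, 22.</p>
<pre><code>[Timer]
OnCalendar=*-*-* 00,05,10,15,20:00:00
OnCalendar=*-*-* 02,07,12,17,22:30:00
Persistent=true
</code></pre>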
|
<python><timer><systemd>
|
2025-02-10 13:45:13
| 1
| 438
|
ni_hao
|
79,427,227
| 8,189,936
|
Http-only cross-site cookie not being added to browser
|
<p>I am sending my <em>refresh</em> and <em>access</em> tokens as <em>http-only</em> cookies to the frontend of my Next.js application. When I log the response headers in the console of my frontend, I am able to get the cookies. However, they are not being added to the browser.</p>
<p>My FastAPI CORS middleware looks like this:</p>
<pre class="lang-py prettyprint-override"><code>origins = [
config.FRONTEND_ORIGIN
]
app.add_middleware(
CORSMiddleware,
allow_origins=origins,
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"]
)
</code></pre>
<p>The endpoint that sends the response is as follows:</p>
<pre class="lang-py prettyprint-override"><code>@auth_router.post("/login", response_model=SuccessLoginResponse, status_code=status.HTTP_200_OK)
async def login(
response: Response,
login_data: LoginRequest,
request: Request,
session: AsyncSession = Depends(get_session)
):
IS_PRODUCTION = config.ENV == "production"
auth_service = get_auth_service(session)
device_info = request.headers.get("User-Agent", "Unknown Device")
try:
tokens = await auth_service.login(login_data, device_info)
# Set HTTP-only cookies in the response
response.set_cookie(
key="refresh_token",
value=tokens.refresh_token,
httponly=True,
max_age=7 * 24 * 60 * 60, # 7 days
secure=False, # Only set to True in production
samesite="none",
)
response.set_cookie(
key="access_token",
value=f"Bearer {tokens.access_token}",
httponly=True,
max_age=15 * 60, # 15 minutes
secure=False, # Only set to True in production
samesite="none"
)
return {
"success": True,
"message": "Login successful"
}
except UnauthorizedException as e:
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED, detail=str(e)) from e
except Exception as e:
print(e)
raise ValidationException(
detail={
"message": "Validation error",
"errors": str(e),
"documentation_url": "https://api.example.com/docs"
}
) from e
</code></pre>
<p>Log from my frontend looks like this:</p>
<pre><code>Object [AxiosHeaders] {
date: 'Mon, 10 Feb 2025 13:47:16 GMT',
server: 'uvicorn',
'content-length': '45',
'content-type': 'application/json',
'set-cookie': [
'refresh_token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOjM5LCJleHAiOjE3Mzk4MDAwMzZ9.YnELWecBRiLIDuuZS_RUtfwfdRN--GuL7B5XjvGojKY; HttpOnly; Max-Age=604800; Path=/; SameSite=none',
'access_token="Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOjM5LCJleHAiOjE3MzkxOTcwMzd9.3eNjdMx88ax9SpWgcyMkaw3sJCteVfrdUqv7jxTfZVU"; HttpOnly; Max-Age=900; Path=/; SameSite=none'
]
}
</code></pre>
<p>This is the server action for making the request:</p>
<pre class="lang-js prettyprint-override"><code>export async function login(
formData: FormData
): Promise<{success: boolean; message: string}> {
const username = String(formData.get("username"));
const password = String(formData.get("password"));
try {
const response = await axios.post(
`${API_URL}/auth/login`,
{username, password},
{
withCredentials: true,
headers: {
"Content-Type": "application/json",
},
}
);
console.log(response.headers);
if (response.status !== 200) {
throw new Error(response.data?.message || "Login failed");
}
console.log("Login successful");
return {success: true, message: "Login successful"};
} catch (error) {
if (axios.isAxiosError(error)) {
console.error("Login error:", error.response?.data || error.message);
throw new Error(error.response?.data?.message || "Login failed");
} else {
console.error("Unexpected error:", error);
throw new Error("An unexpected error occurred");
}
}
return redirect("/dashboard");
}
</code></pre>
|
<python><reactjs><cookies><fastapi><cross-domain>
|
2025-02-10 13:33:51
| 2
| 1,631
|
David Essien
|
79,427,188
| 6,792,327
|
Firebase Cloud Functions Python - Loading service account path for different env
|
<p>I am attempting to deploy my Python Firebase Cloud Functions, but have not been able to do so because I'm using the same code base to deploy to different environments. I have set up different Firebase projects for <code>dev</code> and <code>prd</code>, and I would need to pass the different <code>SERVICE_ACCOUNT_PATH</code> to <code>credentials.Certificate</code>.</p>
<p>I've tried both methods <a href="https://firebase.google.com/docs/functions/config-env?gen=2nd#python" rel="nofollow noreferrer">here</a>, but have not been able to succeed. The challenge is that whenever I try to deploy, the Firebase CLI throws errors stating that SERVICE_ACCOUNT_PATH is returning either None or an empty path.</p>
<p>Code:</p>
<pre><code>from firebase_functions import https_fn
from firebase_admin import credentials, initialize_app, storage, firestore
from dotenv import load_dotenv,find_dotenv
import os
from firebase_functions.params import StringParam
# Method 1 - using parameterized configuration
SERVICE_ACCOUNT_PATH = StringParam("SERVICE_ACCOUNT_PATH")
cred = credentials.Certificate(SERVICE_ACCOUNT_PATH.value)
# Method 2 - using dotenv
load_dotenv(find_dotenv())
SERVICE_ACCOUNT_PATH_OS = os.getenv("SERVICE_ACCOUNT_PATH_OS")
cr = credentials.Certificate(SERVICE_ACCOUNT_PATH_OS)
app = initialize_app()
db = firestore.client()
@https_fn.on_request()
def on_request_example(req: https_fn.Request) -> https_fn.Response:
return https_fn.Response("Hello world!")
</code></pre>
<p>In method 1, I got the error:
<code>FileNotFoundError: [Errno 2] No such file or directory: ''</code></p>
<p>While in method 2, I got the error suggesting that <code>SERVICE_ACCOUNT_PATH_OS</code> is <code>None</code>.</p>
<p>What can I do? I can't hardcode the path because the service account file names are different for <code>dev</code> and <code>prd</code> environment.</p>
|
<python><firebase><google-cloud-functions>
|
2025-02-10 13:15:37
| 0
| 2,947
|
Koh
|
79,427,136
| 16,389,095
|
How to list all app/software/programs installed on Windows 11 using a Python script
|
<p>I am developing a Python script to list all installed apps on my Windows 11 system. I would like to replicate the results displayed in <em>Settings > Apps > Installed apps</em>. Optionally, I would like to list apps installed not only for my user but for the entire system.
Surfing the web, I found these four commands:</p>
<pre><code>cmd1 = "Get-WmiObject -Class Win32_Product | Select-Object Name, Version, Architecture | Sort-Object Name"
cmd2 = "Get-AppxPackage | Select-Object Name, Version, Architecture | Sort-Object Name"
path = f"HKLM:\\Software\\Microsoft\\Windows\\CurrentVersion\\Uninstall\\*"
cmd3 = f"Get-ItemProperty -Path {path} | Select-Object DisplayName, DisplayVersion, Architecture | Sort-Object Name"
path = f"HKLM:\\Software\\Wow6432Node\\Microsoft\\Windows\\CurrentVersion\\Uninstall\\*"
cmd4 = f"Get-ItemProperty -Path {path} | Select-Object DisplayName, DisplayVersion, Architecture | Sort-Object Name"
</code></pre>
<p>I developed a simple script to run these four commands sequentially:</p>
<pre><code>def list_installed_apps(cmd: str):
# Run the PowerShell command and capture output
result = subprocess.run(["powershell", "-Command", cmd], capture_output=True, text=True)
# Split the result into lines and filter out unwanted lines
installed_apps = result.stdout.splitlines()
return installed_apps
##############################
# CMD 1
##############################
cmd = "Get-WmiObject -Class Win32_Product | Select-Object Name, Version, Architecture | Sort-Object Name"
apps_1 = list_installed_apps(cmd)
# Find the index of the header line and column label boundaries
header_index = [i for i, app in enumerate(apps_1) if app.find("Version") != -1][0]
header = apps_1[header_index]
bound_1 = header.find("Version")
bound_2 = header.find("Architecture")
# Remove the header and any empty lines
apps_1 = [app.strip() for app in apps_1 if app.strip() and "Name" not in app]
apps_1 = apps_1[1:] #removes the line of -------
apps_1_ordered = [
{"name": app[:bound_1], "version": app[bound_1:bound_2], "architecture": app[bound_2:]}
for app in apps_1
]
##############################
# CMD 2
##############################
cmd = "Get-AppxPackage | Select-Object Name, Version, Architecture | Sort-Object Name"
apps_2 = list_installed_apps(cmd)
# Find the index of the header line and column label boundaries
header_index = [i for i, app in enumerate(apps_2) if app.find("Version") != -1][0]
header = apps_2[header_index]
bound_1 = header.find("Version")
bound_2 = header.find("Architecture")
# Remove the header and any empty lines
apps_2 = [app.strip() for app in apps_2 if app.strip() and "Name" not in app]
apps_2 = apps_2[1:] #removes the line of -------
apps_2_ordered = [
{"name": app[:bound_1], "version": app[bound_1:bound_2], "architecture": app[bound_2:]}
for app in apps_2
]
##############################
# CMD 3
##############################
path = f"HKLM:\\Software\\Microsoft\\Windows\\CurrentVersion\\Uninstall\\*"
cmd = f"Get-ItemProperty -Path {path} | Select-Object DisplayName, DisplayVersion, Architecture | Sort-Object Name"
apps_3 = list_installed_apps(cmd)
# Find the index of the header line and column label boundaries
header_index = [i for i, app in enumerate(apps_3) if app.find("DisplayVersion") != -1][0]
header = apps_3[header_index]
bound_1 = header.find("DisplayVersion")
bound_2 = header.find("Architecture")
# Remove the header and any empty lines
apps_3 = [app.strip() for app in apps_3 if app.strip() and "DisplayName" not in app]
apps_3 = apps_3[1:] #removes the line of -------
apps_3_ordered = [
{"name": app[:bound_1], "version": app[bound_1:bound_2], "architecture": app[bound_2:]}
for app in apps_3
]
##############################
# CMD 4
##############################
path = f"HKLM:\\Software\\Wow6432Node\\Microsoft\\Windows\\CurrentVersion\\Uninstall\\*"
cmd = f"Get-ItemProperty -Path {path} | Select-Object DisplayName, DisplayVersion, Architecture | Sort-Object Name"
apps_4 = list_installed_apps(cmd)
# Find the index of the header line and column label boundaries
header = apps_4[1]
bound_1 = header.find("DisplayVersion")
bound_2 = header.find("Architecture")
# Remove the header and any empty lines
apps_4 = [app.strip() for app in apps_4 if app.strip() and "DisplayName" not in app]
apps_4 = apps_4[1:] #removes the line of -------
apps_4_ordered = [
{"name": app[:bound_1], "version": app[bound_1:bound_2], "architecture": app[bound_2:]}
for app in apps_4
]
##############################
# SUM RESULTS OF ALL COMMANDS AND SAVE
##############################
all_apps = apps_1_ordered + apps_2_ordered + apps_3_ordered + apps_4_ordered
fileName = "" # SET YOUR NAME
with open(fileName, "w") as f:
for app in all_apps:
f.write(f"{app['name']} --- {app['version']} --- {app['architecture']}\n")
</code></pre>
<p>Unfortunately, the file content doesn't completely match the apps listed in Settings. How can I obtain that list?</p>
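<p>One direction I am considering (a sketch, not fully validated) is to let PowerShell serialize the objects with <code>ConvertTo-Json</code> and parse that in Python instead of splitting fixed-width columns, and to also query the per-user <code>HKCU</code> uninstall key:</p>
<pre><code>import json
import subprocess

def query_powershell_json(cmd: str):
    # ConvertTo-Json gives structured output, avoiding fixed-width parsing
    result = subprocess.run(
        ["powershell", "-Command", cmd + " | ConvertTo-Json"],
        capture_output=True, text=True,
    )
    data = json.loads(result.stdout) if result.stdout.strip() else []
    return data if isinstance(data, list) else [data]  # single object -> list

path = r"HKCU:\Software\Microsoft\Windows\CurrentVersion\Uninstall\*"
for app in query_powershell_json(
    f"Get-ItemProperty -Path {path} | Select-Object DisplayName, DisplayVersion"
):
    print(app.get("DisplayName"), "---", app.get("DisplayVersion"))
</code></pre>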
|
<python><windows><powershell><subprocess><system-administration>
|
2025-02-10 12:54:11
| 1
| 421
|
eljamba
|
79,427,039
| 13,219,123
|
Installing PySpark on Mac with pipenv
|
<p>I have a setup where I use pipenv for my virtual environments. I am using a MacBook.</p>
<p>I have done the following:</p>
<ul>
<li>Installed openJDK 11 using brew</li>
<li>Set paths in the <code>.zprofile</code> folder:</li>
</ul>
<pre><code>eval "$(/opt/homebrew/bin/brew shellenv)"
export JAVA_HOME="/opt/homebrew/opt/openjdk@11"
export PYSPARK_PYTHON="/opt/homebrew/opt/python@3.10/bin/python3.10"
export PATH="/opt/homebrew/opt/python@3.10/bin:$PATH"
</code></pre>
<ul>
<li>Install <code>pipenv</code> using <code>pip</code>.</li>
<li>Install all dependencies using <code>pipenv install -e .</code>, here pyspark version <code>3.3.0</code> is included.</li>
<li>Activate the shell <code>pipenv shell</code></li>
<li>I use VS code, when I run all tests they pass as expected.</li>
</ul>
<p>However, when I then run <code>pyspark</code> in the terminal in the activated shell environment, I get an error:</p>
<pre><code>/Users/<user_name>/.local/share/virtualenvs/<repo_name>/bin/pyspark: line 24: /bin/load-spark-env.sh: No such file or directory
/Users/<user_name>/.local/share/virtualenvs/<repo_name>/bin/pyspark: line 68: /bin/spark-submit: No such file or directory
/Users/<user_name>/.local/share/virtualenvs/<repo_name>/bin/pyspark: line 68: exec: /bin/spark-submit: cannot execute: No such file or directory
</code></pre>
<p>When I follow the path <code>/Users/<user_name>/.local/share/virtualenvs/<repo_name>/bin/pyspark</code> there is no <code>pyspark</code> folder, but in the path <code>/Users/<user_name>/.local/share/virtualenvs/<repo_name>/bin/</code> the files that cause the errors (<code>load-spark-env.sh</code> and <code>spark-submit</code>) are located. What am I doing wrong here?</p>
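<p>For reference, one workaround I am considering (a sketch, not verified as the proper fix) is to point <code>SPARK_HOME</code> at the <code>pyspark</code> package installed inside the virtualenv, so the launcher scripts resolve <code>load-spark-env.sh</code> and <code>spark-submit</code> relative to it:</p>
<pre><code>export SPARK_HOME="$(python -c 'import pyspark, os; print(os.path.dirname(pyspark.__file__))')"
export PATH="$SPARK_HOME/bin:$PATH"
</code></pre>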
|
<python><macos><apache-spark><pyspark>
|
2025-02-10 12:15:33
| 1
| 353
|
andKaae
|
79,426,997
| 7,304,502
|
Issue with YAML formatting when modifying multiline fields in Python script
|
<p>I'm trying to write a Python script to read a YAML file, modify certain fields, and then rewrite it. However, I'm facing an issue with multiline fields. The original YAML file contains multiline strings, but after running my script, the formatting changes, and I see escaped characters instead of the expected format.</p>
<p>Here's the code I'm using:</p>
<pre class="lang-py prettyprint-override"><code>import json
import sys
import yaml
import logging
from datetime import datetime
# Configure logging
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
def update_yaml_files(files, field, author_name, author_email):
logging.info("Starting update_yaml_files function")
files_path = json.loads(files)
logging.info(f"Parsed JSON input: {files_path}")
for file in files_path:
logging.info(f"Processing file: {file}")
try:
with open(file, 'r', encoding='utf-8') as f:
data = yaml.load(f, Loader=yaml.FullLoader)
logging.info(f"Loaded YAML data from {file}: {data}")
data['approvals'][field]['name'] = author_name
data['approvals'][field]['email'] = author_email
logging.info(f"Updated name and email for field '{field}'")
if 'approvaldate' in data['approvals'][field]:
data['approvals'][field]['approvaldate'] = datetime.now().strftime("%d-%b-%Y")
logging.info(f"Updated approval date for field '{field}'")
print(f"Updated data with approval date for {file}: {data}")
with open(file, 'w', encoding='utf-8') as f:
yaml.safe_dump(data, f, default_flow_style=False, sort_keys=False, allow_unicode=True, indent=2, width=4096)
logging.info(f"Successfully wrote updated data to {file}")
except Exception as e:
logging.error(f"Error processing file {file}: {e}")
if __name__ == "__main__":
if len(sys.argv) != 5:
logging.error("Invalid number of arguments. Expected 4 arguments.")
sys.exit(1)
files = sys.argv[1]
field = sys.argv[2]
author_name = sys.argv[3]
author_email = sys.argv[4]
logging.info("Script started with arguments: "f"files={files}, field={field}, author_name={author_name}, author_email={author_email}")
update_yaml_files(files, field, author_name, author_email)
logging.info("Script finished")
</code></pre>
<p>Original YAML:</p>
<pre class="lang-yaml prettyprint-override"><code>type: Microsoft.KeyVault/vaults
apiVersion: 2022-07-01
name: Azure MS KeyVault Definition
version: 1.0.0
servicename: keyvault
description: This document describes about the configuration management standard for Azure Key Vault
targetcsp: azure
approvals:
author:
name: msbauthorname
email: msbauthoremail
approver:
name: msbapprovername
email: msbapproveremail
approvaldate: approvaldate
reviewer:
name: msbreviewername
email: msbrevieweremail
approvaldate: reviewdate
assumptions: |
- The version of Azure Key Vault is that of the time of writing as of October 2024
- Users of this guide have a working knowledge of Azure, Azure Key Vault, basic network knowledge of TCP/IP, and basic system administration skills. In addition, some familiarity with Microsoft PowerShell is important for those deploying the service to the cloud.
- The settings outlined in this document are thoroughly tested before installing them on an operational production network.
control:
control1:
validation: |
To check the SKU for an Azure Key Vault using the Azure Portal, you can follow these steps:
1. **Sign In to Azure Portal**:
- Go to the [Azure Portal](https://portal.azure.com) and sign in with your Azure account credentials.
2. **Navigate to Key Vaults**:
- On the left-hand side menu, click on "All services."
- In the "All services" search box, type in "Key Vaults" and select it from the dropdown menu.
3. **Select Your Key Vault**:
- From the list of Key Vaults, click on the name of the Key Vault that you want to upgrade.
4. **Check the Pricing Tier**:
- Once inside the selected Key Vault, find the "Sku (Pricing tier)" under the "Essentials" section in the "Overview" panel.
remediation: |
Azure Key Vault's pricing tier (SKU) can't be changed directly through the Azure Portal's user interface after the Key Vault has been created.
</code></pre>
<p>YAML after running the script:</p>
<pre class="lang-yaml prettyprint-override"><code>type: Microsoft.KeyVault/vaults
apiVersion: 2022-07-01
name: Azure MS KeyVault Definition
version: 1.0.0
servicename: keyvault
description: This document describes about the configuration management standard for Azure Key Vault
targetcsp: azure
approvals:
author:
name: <author name>
email: <author e-mail>
approver:
name: msbapprovername
email: msbapproveremail
approvaldate: approvaldate
reviewer:
name: msbreviewername
email: msbrevieweremail
approvaldate: reviewdate
assumptions: '- The version of Azure Key Vault is that of the time of writing as of October 2024
- Users of this guide have a working knowledge of Azure, Azure Key Vault, basic network knowledge of TCP/IP, and basic system administration skills. In addition, some familiarity with Microsoft PowerShell is important for those deploying the service to the cloud.
- The settings outlined in this document are thoroughly tested before installing them on an operational production network.'
controls:
control1:
validation: "To check the SKU for an Azure Key Vault using the Azure Portal, you can follow these steps:\n1. **Sign In to Azure Portal**:\n - Go to the [Azure Portal](https://portal.azure.com) and sign in with your Azure account credentials.\n2. **Navigate to Key Vaults**:\n - On the left-hand side menu, click on \"All services.\"\n - In the \"All services\" search box, type in \"Key Vaults\" and select it from the dropdown menu.\n3. **Select Your Key Vault**:\n - From the list of Key Vaults, click on the name of the Key Vault that you want to upgrade.\n4. **Check the Pricing Tier**:\n - Once inside the selected Key Vault, find the \"Sku (Pricing tier)\" under the \"Essentials\" section in the \"Overview\" panel.\n"
remediation: 'Azure Key Vault''s pricing tier (SKU) can''t be changed directly through the Azure Portal''s user interface after the Key Vault has been created.
'
</code></pre>
<p>I suspect that the problem arises during the conversion between YAML and the corresponding object / dictionary. Instead of directly opening the file as a text file and performing a simple replace operation (e.g., <code>replace("original text", "new text")</code>), I would prefer to find a more robust solution.</p>
<p>How can I modify the script to maintain the original formatting of multiline fields in the YAML file?</p>
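<p>One direction I am considering (sketched below, not yet validated) is switching from PyYAML to <code>ruamel.yaml</code> in its default round-trip mode, which is designed to preserve block scalars, quoting and key order. The variables <code>file</code>, <code>field</code>, <code>author_name</code> and <code>author_email</code> are the same ones used in my script above:</p>
<pre class="lang-py prettyprint-override"><code>from ruamel.yaml import YAML

yaml_rt = YAML()              # round-trip mode keeps the original formatting
yaml_rt.preserve_quotes = True

with open(file, "r", encoding="utf-8") as f:
    data = yaml_rt.load(f)

data["approvals"][field]["name"] = author_name
data["approvals"][field]["email"] = author_email

with open(file, "w", encoding="utf-8") as f:
    yaml_rt.dump(data, f)
</code></pre>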
|
<python><yaml>
|
2025-02-10 11:56:27
| 1
| 669
|
delucaezequiel
|
79,426,847
| 5,722,359
|
How to express the dot product of 3 dimensions arrays with numpy?
|
<p>How do I do the following dot product in 3 dimensions with numpy?</p>
<p><a href="https://i.sstatic.net/EfgTHtZP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EfgTHtZP.png" alt="enter image description here" /></a></p>
<p>I tried:</p>
<pre><code>x = np.array([[[-1, 2, -4]], [[-1, 2, -4]]])
W = np.array([[[2, -4, 3], [-3, -4, 3]],
[[2, -4, 3], [-3, -4, 3]]])
y = np.dot(W, x.transpose())
</code></pre>
<p>but received this error message:</p>
<pre><code> y = np.dot(W, x)
ValueError: shapes (2,2,3) and (2,1,3) not aligned: 3 (dim 2) != 1 (dim 1)
</code></pre>
<p>It's 2 dimensions equivalent is:</p>
<pre><code>x = np.array([-1, 2, -4])
W = np.array([[2, -4, 3],
[-3, -4, 3]])
y = np.dot(W,x)
print(f'{y=}')
</code></pre>
<p>which will return:</p>
<pre><code>y=array([-22, -17])
</code></pre>
<p>Also, <code>y = np.dot(W,x.transpose())</code> will return the same answer.</p>
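<p>In case it clarifies what I am after: assuming the 3-D case is just a batch of the 2-D products above, the batched result I expect would come from something like this sketch:</p>
<pre><code>import numpy as np

x = np.array([[[-1, 2, -4]], [[-1, 2, -4]]])          # shape (2, 1, 3)
W = np.array([[[2, -4, 3], [-3, -4, 3]],
              [[2, -4, 3], [-3, -4, 3]]])             # shape (2, 2, 3)

# Batched matrix product: for each batch b, W[b] @ x[b].T
y = np.matmul(W, x.transpose(0, 2, 1))                # shape (2, 2, 1)
print(y.squeeze(-1))                                  # [[-22 -17] [-22 -17]]

# Same contraction spelled out with einsum (sum over the shared last axis)
y2 = np.einsum('bij,bkj->bik', W, x)
print(y2.squeeze(-1))
</code></pre>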
|
<python><numpy>
|
2025-02-10 10:54:27
| 5
| 8,499
|
Sun Bear
|
79,426,845
| 15,485
|
In pandas, a groupby followed by boxplot gives KeyError: "None of [Index(['A', 1], dtype='object')] are in the [index]"
|
<p>This very simple script gives error <code>KeyError: "None of [Index(['A', 1], dtype='object')] are in the [index]"</code>:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
L1 = ['A','A','A','A','B','B','B','B']
L2 = [1,1,2,2,1,1,2,2]
V = [9.8,9.9,10,10.1,19.8,19.9,20,20.1]
df = pd.DataFrame({'L1':L1,'L2':L2,'V':V})
print(df)
df.groupby(['L1','L2']).boxplot(column='V')
plt.show()
</code></pre>
<p>So my dataframe is:</p>
<pre><code> L1 L2 V
0 A 1 9.8
1 A 1 9.9
2 A 2 10.0
3 A 2 10.1
4 B 1 19.8
5 B 1 19.9
6 B 2 20.0
7 B 2 20.1
</code></pre>
<p>and I would expect a plot with four boxplots showing the values of V; the labels of the boxplots should be A/1, A/2, B/1, B/2.</p>
<p>I had a look at <a href="https://stackoverflow.com/questions/55652574/how-to-solve-keyerror-unone-of-index-dtype-object-are-in-the-colum">How To Solve KeyError: u"None of [Index([..], dtype='object')] are in the [columns]"</a> but I was not able to fix my error, AI tools are not helping me either.</p>
<p>What am I not understanding?</p>
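<p>For context, the layout I am after looks like what <code>DataFrame.boxplot</code> with <code>by=</code> produces (a sketch; I am unsure whether the groupby-then-boxplot form is supposed to support the same thing):</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt

df = pd.DataFrame({'L1': ['A','A','A','A','B','B','B','B'],
                   'L2': [1,1,2,2,1,1,2,2],
                   'V':  [9.8,9.9,10,10.1,19.8,19.9,20,20.1]})

# One axes with four boxes labelled (A, 1), (A, 2), (B, 1), (B, 2)
df.boxplot(column='V', by=['L1', 'L2'])
plt.show()
</code></pre>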
|
<python><pandas><group-by><boxplot>
|
2025-02-10 10:52:25
| 2
| 18,835
|
Alessandro Jacopson
|
79,426,669
| 9,827,719
|
Python create LimaCharlie Adapter of type "JSON > Events received through LimaCharlie webhooks" gives: Api failure (400): {'error': 'missing data'}
|
<p>I am trying to create a LimaCharlie Adapter of type "JSON > Events received through LimaCharlie webhooks" with Python.</p>
<p>This is my code:</p>
<pre><code>import limacharlie
# My variables
lc_oid = "123"
lc_api_key = "456"
adapter_name = "fw-cisco-meraki-tromso"
adapter_secret = "vo8oW2KPwabUZL"
# Create LC Manager
manager = limacharlie.Manager(oid=lc_oid, secret_api_key=lc_api_key)
# Create a Hive
hive = limacharlie.Hive(manager, "cloud_sensor")
# Create adapter
try:
# Create LC Adapter
hive.set(limacharlie.HiveRecord(
recordName=adapter_name,
data={
"tags": ["firewalls"],
"secret": adapter_secret
}
))
except Exception as e:
# Failed to create adapter
print(f"Error could not create adapter {adapter_name}: {e}")
</code></pre>
<p>I get this error:</p>
<pre><code>Api failure (400): {'error': 'missing data'}
</code></pre>
<p>When I try to read the documentation at <a href="https://github.com/refractionPOINT/python-limacharlie/blob/master/limacharlie/Hive.py#L90" rel="nofollow noreferrer">https://github.com/refractionPOINT/python-limacharlie/blob/master/limacharlie/Hive.py#L90</a> I see that the <code>HiveRecord</code> object takes in
<code>recordName</code> and <code>data</code>. Why does it give me an error when I am sending in both?</p>
<p>From the documentation:</p>
<pre><code>class HiveRecord( object ):
def __init__( self, recordName, data, api = None ):
self._api = api
self.name = recordName
self.arl = None
self.data = data.get( 'data', None )
if self.data is not None and not isinstance( self.data, dict ):
self.data = json.loads( self.data )
self.expiry = data.get( 'usr_mtd', {} ).get( 'expiry', None )
self.enabled = data.get( 'usr_mtd', {} ).get( 'enabled', None )
self.tags = data.get( 'usr_mtd', {} ).get( 'tags', None )
self.comment = data.get( 'usr_mtd', {} ).get( 'comment', None )
self.etag = data.get( 'sys_mtd', {} ).get( 'etag', None )
self.createdAt = data.get( 'sys_mtd', {} ).get( 'created_at', None )
self.createdBy = data.get( 'sys_mtd', {} ).get( 'created_by', None )
self.guid = data.get( 'sys_mtd', {} ).get( 'guid', None )
self.lastAuthor = data.get( 'sys_mtd', {} ).get( 'last_author', None )
self.lastModified = data.get( 'sys_mtd', {} ).get( 'last_mod', None )
self.lastError = data.get( 'sys_mtd', {} ).get( 'last_error', None )
self.lastErrorTime = data.get( 'sys_mtd', {} ).get( 'last_error_ts', None )
</code></pre>
<p>If this helps anyone (and doesn't create more confusion), here is the LimaCharlie GUI when I create new adapters manually:</p>
<p><a href="https://i.sstatic.net/itTHwlwj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/itTHwlwj.png" alt="enter image description here" /></a></p>
|
<python>
|
2025-02-10 09:53:51
| 0
| 1,400
|
Europa
|
79,426,664
| 6,230,282
|
`subprocess.run` invokes an executable (not script) using the python interpreter when it's not necessary
|
<p>Related: <a href="https://stackoverflow.com/q/78045053/6230282">After pakage a python to a exe with pyinstaller, why it can not run on a new computer?</a></p>
<p>I'm packaging a command line application with a thin GUI wrapper written in Python using <code>pyinstaller</code>. I received feedback from a user that it's not running. Specifically, the error message says (partially redacted):</p>
<pre><code>Fatal error in launcher: Unable to create process using '"C:\Users\DannyNiu\source\repos\AutoSub\whisper-root\Scripts\python.exe" "C:\Users\DannyNiu\source\repos\AutoSub-Done\dist\main-batch\_internal\whisper-root\Scripts\whisper.exe" ...
</code></pre>
<p>I've got a venv in "whisper-root", and the error message seem to suggest that the executable file ("whisper.exe") is being run by the Python interpreter. The venv was in the folder "AutoSub", which I renamed to "AutoSub-Done" to induce the error.</p>
<p>Here's how I've been invoking the executable:</p>
<pre><code>...
base = os.path.dirname(__file__)
...
whisper = base+"/whisper-root/Scripts/whisper.exe"
...
subprocess.run([whisper, ...arguments...], executable = whisper)
...
</code></pre>
<p>What am I supposed to do?</p>
|
<python><subprocess><pyinstaller>
|
2025-02-10 09:51:13
| 1
| 1,641
|
DannyNiu
|
79,426,384
| 15,412,256
|
Docstring for Polars API register namespace
|
<p>An example Python Polars API UDF name registration:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
from pydantic import ValidationError, validate_call
@pl.api.register_dataframe_namespace("test")
class TestOps:
def __init__(self, df: pl.DataFrame) -> None:
self._df = df
@validate_call
def add_lit_col(self, lit_int: int = 5) -> pl.DataFrame:
""" Add a literal integer defined by `lit_int` to the dataframe
"""
return self._df.with_columns(pl.lit(lit_int))
pl.DataFrame(
data=["aaa", "bbb", "ccc", "ddd", "eee", "fff"],
).test.add_lit_col("foo")
</code></pre>
<p>The <code>pydantic</code> validation works, and when I print out the docstring for the <code>add_lit_col</code> method it also shows correctly:</p>
<pre class="lang-py prettyprint-override"><code>TestOps.add_lit_col.__doc__
out:
' Add a literal integer defined by `lit_int` to the dataframe\n '
</code></pre>
<p>But when I hover over the namespace <code>test</code> registered on <code>pl.DataFrame</code> as well as the method <code>add_lit_col</code>, the editor does not show the docstring and parameters: <code>(function) add_lit_col: Any</code></p>
|
<python><python-polars><docstring>
|
2025-02-10 07:47:20
| 0
| 649
|
Kevin Li
|
79,426,312
| 6,036,058
|
Should I generate an SBOM for a Python library with bounded version ranges?
|
<p>I'm maintaining a Python library and considering whether I should generate a Software Bill of Materials (SBOM) for it. However, my pyproject.toml defines dependencies using bounded version ranges (e.g., >=1.0, <=2.0), meaning that the exact versions used can vary depending on the environment.</p>
<p>My main questions are:</p>
<ol>
<li><p>Is it necessary to generate an SBOM for a library when dependencies are defined with version ranges instead of exact versions?</p>
</li>
<li><p>How should I generate the SBOM? Should it be generated without resolving exact versions (only listing direct dependencies as specified in pyproject.toml)? Or should it include resolved exact versions along with transitive dependencies?</p>
</li>
</ol>
|
<python><dependencies><pyproject.toml><package-management><sbom>
|
2025-02-10 07:07:04
| 2
| 667
|
Florian Vuillemot
|
79,426,206
| 11,748,924
|
Understanding YOLOv11 Tensor Ouput Shape for Post-Processing
|
<p>I tried exporting a YOLOv11 model to TensorFlow, and it said:</p>
<pre><code>'yolo11n.pt' with input shape (1, 3, 640, 640) BCHW and output shape(s) (1, 84, 8400) (5.4 MB)
</code></pre>
<p>Now I have this model summary in Keras 3:</p>
<pre><code>Model: "functional_1"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━┓
┃ Layer (type) ┃ Output Shape ┃ Param # ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━┩
│ input_layer_4 (InputLayer) │ (None, 640, 640, 3) │ 0 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ tfsm_layer_8 (TFSMLayer) │ (1, 84, 8400) │ 0 │
└──────────────────────────────────────┴─────────────────────────────┴─────────────────┘
Total params: 0 (0.00 B)
Trainable params: 0 (0.00 B)
Non-trainable params: 0 (0.00 B)
</code></pre>
<p>The input shape is clear, but the output shape is not. According to <a href="https://github.com/ultralytics/ultralytics/issues/16238#issuecomment-2348695179" rel="nofollow noreferrer">this</a>:</p>
<blockquote>
<p>The output shapes for YOLOv8n and YOLOv8n-seg models represent different components. For YOLOv8n, the shape (1, 84, 8400) includes 80 classes and 4 bounding box parameters. For YOLOv8n-seg, the first output (1, 116, 8400) includes 80 classes, 4 parameters, and 32 mask coefficients, while the second output (1, 32, 160, 160) represents the prototype masks.</p>
</blockquote>
<p>I tried to infer and manually post processing from ChatGPT source code:</p>
<pre><code># output: (84, 8400) | image: (640, 640, 3)
# Extract bounding box coordinates (first 4 values)
boxes = output[:4, :].T # Shape: (8400, 4)
# Extract confidence scores (5th value)
confidences = output[4, :] # Shape: (8400,)
# Convert (center x, center y, width, height) → (x1, y1, x2, y2)
boxes[:, 0] -= boxes[:, 2] / 2 # x1 = x_center - width/2
boxes[:, 1] -= boxes[:, 3] / 2 # y1 = y_center - height/2
boxes[:, 2] += boxes[:, 0] # x2 = x1 + width
boxes[:, 3] += boxes[:, 1] # y2 = y1 + height
# Filter by confidence threshold (adjust for debugging)
threshold = 0.1
indices = np.where(confidences > threshold)[0]
filtered_boxes = boxes[indices]
filtered_confidences = confidences[indices]
# Draw raw bounding boxes
for i in range(len(filtered_boxes)):
x1, y1, x2, y2 = map(int, filtered_boxes[i])
# Ensure coordinates are within image bounds
x1, y1 = max(0, x1), max(0, y1)
x2, y2 = min(image.shape[1], x2), min(image.shape[0], y2)
# Draw bounding box
cv2.rectangle(image, (x1, y1), (x2, y2), (255, 0, 0), 2) # Red box
# Display confidence score (for debugging)
cv2.putText(image, f"{filtered_confidences[i]:.2f}", (x1, y1 - 10),
cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 0, 0), 2)
# Show the image
plt.figure(figsize=(10, 6))
plt.imshow(image)
plt.axis("off")
plt.show()
</code></pre>
<p>And here is the output:</p>
<p><a href="https://i.sstatic.net/VJVix7th.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VJVix7th.png" alt="enter image description here" /></a></p>
<p>I am not sure if my post-processing implementation is correct, since I have no idea how to interpret the tensor output shape.</p>
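<p>Based on the quoted comment (4 box parameters plus 80 class scores, with no separate objectness), my current understanding is that row 4 is only the score of class 0, so the confidence should instead be the maximum over rows 4:84. A sketch of that interpretation, reusing the <code>output</code> array from above:</p>
<pre><code># output: (84, 8400) -> 4 box parameters followed by 80 class scores
boxes = output[:4, :].T                  # (8400, 4) as (cx, cy, w, h)
class_scores = output[4:, :]             # (80, 8400)
class_ids = class_scores.argmax(axis=0)  # best class per candidate
confidences = class_scores.max(axis=0)   # its score, used for thresholding

keep = confidences > 0.25
boxes, class_ids, confidences = boxes[keep], class_ids[keep], confidences[keep]

# Convert (cx, cy, w, h) -> (x1, y1, x2, y2)
boxes[:, 0] -= boxes[:, 2] / 2
boxes[:, 1] -= boxes[:, 3] / 2
boxes[:, 2] += boxes[:, 0]
boxes[:, 3] += boxes[:, 1]
# Overlapping candidates would still need NMS, e.g. cv2.dnn.NMSBoxes.
</code></pre>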
|
<python><numpy><tensorflow><yolo><ultralytics>
|
2025-02-10 06:12:00
| 1
| 1,252
|
Muhammad Ikhwan Perwira
|
79,426,130
| 6,702,598
|
How to create a class that runs business logic upon a query?
|
<p>I'd like to create a class/object that I can use for querying, that contains business logic.</p>
<p>Constraints:</p>
<ul>
<li>Ideally that class/object is not the same one that is responsible for table creation.</li>
<li>It's possible to use the class inside a query</li>
<li>Alembic should not get confused.</li>
<li>SQLAlchemy Version: 1.4 and 2.x.</li>
</ul>
<p>How do I do that? Is that even possible?</p>
<h3>Use Case</h3>
<p>My database table has two columns: <code>value_a</code> and <code>show_value_a</code>. <code>show_value_a</code> specifies if the value is supposed to be shown on the UI or not. Currently, all processes that query <code>value_a</code> have to check if <code>show_value_a</code> is <code>True</code>; If not, the value of <code>value_a</code> will be masked (i.e. set to <code>None</code>) upon returning.</p>
<p>Masking the value is easy to forget. Also, each process has their own specific query (with their specific JOINs), so it's ineffective to do this in some kind of pattern form.</p>
<h3>Example</h3>
<p>Table definition:</p>
<pre class="lang-py prettyprint-override"><code>from sqlalchemy import Column, String, Boolean
class MyTable(Base):
__tablename__ = "mytable"
valueA = Column("value_a", String(60), nullable=False)
showValueA = Column("show_value_a", Boolean, nullable=False)
</code></pre>
<p>Data:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>value_a</th>
<th>show_value_a</th>
</tr>
</thead>
<tbody>
<tr>
<td>"A"</td>
<td>True</td>
</tr>
<tr>
<td>"B"</td>
<td>False</td>
</tr>
<tr>
<td>"C"</td>
<td>True</td>
</tr>
</tbody>
</table></div>
<p>Query I'd like to do:</p>
<pre class="lang-py prettyprint-override"><code>values = session.query(MyTable.valueA).all()
# returns ["A", None, "C"]
</code></pre>
<p>Querying the field will intrinsically check if <code>show_value_a</code> is <code>True</code>. If it is, the value is returned. If not, <code>None</code> is returned.</p>
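<p>For illustration, one direction I have been considering (only a sketch, and it does live on the mapped class, so it only partly satisfies the first constraint) is a <code>column_property</code> that applies the masking in SQL with a <code>CASE</code> expression:</p>
<pre class="lang-py prettyprint-override"><code>from sqlalchemy import Boolean, Column, String, case
from sqlalchemy.orm import column_property

class MyTable(Base):
    __tablename__ = "mytable"

    valueA = Column("value_a", String(60), nullable=False)
    showValueA = Column("show_value_a", Boolean, nullable=False)

    # Masked view of value_a: NULL whenever show_value_a is false
    maskedValueA = column_property(
        case((showValueA.is_(True), valueA), else_=None)
    )

# values = session.query(MyTable.maskedValueA).all()  # -> ["A", None, "C"]
</code></pre>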
|
<python><sqlalchemy><alembic>
|
2025-02-10 05:18:55
| 3
| 3,673
|
DarkTrick
|
79,426,062
| 6,162,679
|
`sklearn.metrics.r2_score` is giving wrong R2 value?
|
<p>I notice that <code>sklearn.metrics.r2_score</code> is giving a wrong R2 value.</p>
<pre><code>from sklearn.metrics import r2_score
r2_score(y_true=[2,4,3,34,23], y_pred=[21,12,3,11,17]) # -0.17
r2_score(y_pred=[21,12,3,11,17], y_true=[2,4,3,34,23]) # -4.36
</code></pre>
<p>However, the true R2 value should be 0.002 according to the <code>rsq</code> function in Excel. R2 should be between 0 and 1. Also, switching the order of "y_true" and "y_pred" should not affect the final result. How can I fix this issue?</p>
<p>Also,</p>
<blockquote>
<p>In simple linear regression (one predictor), the coefficient of determination is numerically equal to the square of the Pearson correlation coefficient.</p>
</blockquote>
<p>I wonder why <code>sklearn.metrics.r2_score</code> is different from the <code>squared Pearson correlation coefficient</code> in this case?</p>
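<p>For reference, the squared Pearson correlation (which is what Excel's <code>RSQ</code> computes) can be reproduced like this, next to <code>r2_score</code>; the two quantities only coincide when the predictions come from a fitted simple linear regression:</p>
<pre><code>import numpy as np
from sklearn.metrics import r2_score

y_true = [2, 4, 3, 34, 23]
y_pred = [21, 12, 3, 11, 17]

# Coefficient of determination: 1 - SS_res / SS_tot (can be negative)
print(r2_score(y_true, y_pred))           # about -0.17

# Squared Pearson correlation, which is what Excel's RSQ returns
r = np.corrcoef(y_true, y_pred)[0, 1]
print(r ** 2)                             # about 0.002
</code></pre>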
|
<python><scikit-learn>
|
2025-02-10 04:12:06
| 1
| 922
|
Yang Yang
|
79,426,034
| 2,128,232
|
Why can't one change the size of a PyQt checkbox?
|
<p>The following code runs, but the checkbox is tiny (it should be large). Why doesn't this work as expected? (One can use styling to resize the text, but that's not what I'm trying to do here). I'll be grateful for any suggestions.</p>
<pre class="lang-py prettyprint-override"><code>import sys
from PyQt5.QtWidgets import (
QApplication,
QCheckBox,
QVBoxLayout,
QWidget,
)
class Window(QWidget):
def __init__(self, parent=None):
super().__init__(parent)
# Create a Checkbox with a text label:
chkBox = QCheckBox(text="Check box if you want this option.")
# Make the checkbox large (has no effect):
chkBox.setStyleSheet('QCheckBox::indicator {width:24px; height:24px}')
layout= QVBoxLayout()
layout.addWidget(chkBox)
self.setLayout(layout)
if __name__ == "__main__":
app = QApplication(sys.argv)
window = Window()
window.show()
sys.exit(app.exec())
</code></pre>
|
<python><checkbox><pyqt>
|
2025-02-10 03:28:19
| 1
| 566
|
Phillip M. Feldman
|
79,425,937
| 948,866
|
How do I modify a list value in a for loop
|
<p>In <a href="https://stackoverflow.com/questions/4081217/how-to-modify-list-entries-during-for-loop">How to modify list entries during for loop?</a>, the general recommendation is that it can be unsafe so don't do it unless you know it's safe. In comments under the first answer @martineau says:</p>
<blockquote>
<p>It [the loop variable] doesn't make a copies. It just assigns the loop variable name to
successive elements or value of the thing being iterated-upon.</p>
</blockquote>
<p>That is the behavior I expect and want, but I'm not getting it. I want to remove trailing <code>None</code>s from each value in the list. My loop variable is modified correctly but the list elements remain unchanged.</p>
<p>How can I get the loop variable <code>fv</code> to be a pointer to the rows of list <code>foo</code>, not a copy of the row?</p>
<p>Extra credit: my code is clunky and non-Pythonic, so a solution using comprehensions or slices instead of for loops would be preferred.</p>
<pre class="lang-py prettyprint-override"><code>foo = [
['a', 'b', None, None],
['c', None, 'e', None],
['f', None, None, None]
]
desired = [
['a', 'b'],
['c', None, 'e'],
['f']
]
for fv in foo:
for v in range(len(fv) -1, 0, -1):
if fv[v] == None:
fv = fv[:v]
print(f' {v:2} {fv}')
else:
break
print(foo)
</code></pre>
<p>The output is:</p>
<pre class="lang-py prettyprint-override"><code> 3 ['a', 'b', None]
2 ['a', 'b']
3 ['c', None, 'e']
3 ['f', None, None]
2 ['f', None]
1 ['f']
[['a', 'b', None, None], ['c', None, 'e', None], ['f', None, None, None]]
</code></pre>
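<p>For reference, two sketches of the kind of solution I am after: the first mutates each row in place (so <code>fv</code> really does act on the rows of <code>foo</code>), the second rebuilds the rows with a comprehension.</p>
<pre class="lang-py prettyprint-override"><code># In place: fv is the same list object as the row, so pop() changes foo
for fv in foo:
    while fv and fv[-1] is None:
        fv.pop()

# Comprehension alternative: rebuild each row without its trailing Nones
def strip_trailing_nones(row):
    end = len(row)
    while end and row[end - 1] is None:
        end -= 1
    return row[:end]

foo = [strip_trailing_nones(row) for row in foo]
</code></pre>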
|
<python><list><for-loop>
|
2025-02-10 01:25:15
| 2
| 3,967
|
Dave
|
79,425,910
| 6,395,555
|
Python Line profiler not working in VSCode
|
<p>I've installed line_profiler 4.1.3 from conda and I'm using a jupyter notebook in VSCode.</p>
<p>When I run the following cells</p>
<hr />
<pre><code>%load_ext line_profiler
</code></pre>
<hr />
<pre><code>def test_fn():
x = 2+2
return x
</code></pre>
<hr />
<pre><code>lprun -f test_fn test_fn()
</code></pre>
<hr />
<p>I receive this output:</p>
<pre><code>Timer unit: 1e-07 s
Total time: 3.8e-06 s
Could not find file C:\Users\ <user>\AppData\Local\Temp\ipykernel_9076\3402019769.py
Are you sure you are running this program from the same directory
that you ran the profiler from?
Continuing without the function's contents.
Line # Hits Time Per Hit % Time Line Contents
==============================================================
1
2 1 18.0 18.0 47.4
3 1 20.0 20.0 52.6
</code></pre>
<p>For any function I attempt to line-profile I get the same warning, and I don't see any of the function's contents.</p>
<p>This can be worked around by doing</p>
<hr />
<pre><code>%load_ext line_profiler
</code></pre>
<hr />
<pre><code>%%writefile test.py
def test_fn()
...
</code></pre>
<hr />
<pre><code>from test import test as t
lprun -f t t()
</code></pre>
<hr />
<p>Then I get the correct output.</p>
|
<python><visual-studio-code><line-profiler>
|
2025-02-10 00:44:50
| 0
| 363
|
alessandro
|
79,425,829
| 5,231,665
|
emit Broadcast with flask_socketio
|
<p>I have a Flask server running, and sockets are connecting successfully. I want to run a service that monitors stuff on my server, such as GPU usage with <code>GPUMonitorService</code>, and then broadcast that data to all connected sockets. I create the GPU service and pass in the socketio object.
I have an <code>on.connect</code> handler that works as expected. I used these docs for reference <a href="https://flask-socketio.readthedocs.io/en/latest/getting_started.html#broadcasting" rel="nofollow noreferrer">https://flask-socketio.readthedocs.io/en/latest/getting_started.html#broadcasting</a></p>
<pre><code>from flask import Flask, request, jsonify, send_file
from flask_socketio import SocketIO
from services.GPU_Monit import GPUMonitorService
socketio = SocketIO(...)
gpu_service = GPUMonitorService(socketio)
@socketio.on("connect")
def handle_connect():
client_id = request.sid # Get the unique socket ID of the client
print(f"Client connected: {client_id}")
# Emit the new client's ID to all connected clients
socketio.emit( # <---this is working
"new_connection",
{"socket_id": client_id},
)
</code></pre>
<p>//GPU_Monit.py</p>
<pre><code>from flask_socketio import emit
class GPUMonitorService:
def __init__(self, socketio):
self.socketio = socketio
self.thread = threading.Thread(target=self._monitoring_task, daemon=True)
self.thread.start()
def _monitoring_task(self):
while True:
try:
gpu_info = self.get_detailed_gpu_info()
if gpu_info:
#This line does run, but the connected sockets are not getting any messages
emit("gpu_update", gpu_info, broadcast=True)
except Exception as e:
print(f"Error in GPU monitoring: {e}")
time.sleep(10)
</code></pre>
<p>Any thoughts would be appreciated. I'm more familiar with sockets on NodeJS, and Python has some weird things with threads and special async handling that I'm not aware of. Thanks!</p>
<p>EDIT: I'm getting an error:
<code>'Working outside of request context.This typically means that you attempted to use functionality that needed an active HTTP request. Consult the documentation on testing for information about how to avoid this problem.'</code></p>
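<p>For reference, a sketch of the direction I am looking at (untested): emitting through the <code>SocketIO</code> instance itself, which as far as I understand needs no request context and broadcasts to all clients when called from the server, and letting Flask-SocketIO manage the background task. <code>get_detailed_gpu_info</code> is the same method as in my service above.</p>
<pre><code>class GPUMonitorService:
    def __init__(self, socketio):
        self.socketio = socketio
        # Let Flask-SocketIO run the loop so it cooperates with the
        # configured async mode (eventlet / gevent / threading)
        self.socketio.start_background_task(self._monitoring_task)

    def _monitoring_task(self):
        while True:
            try:
                gpu_info = self.get_detailed_gpu_info()  # existing method
                if gpu_info:
                    # Server-level emit: no request context required,
                    # sent to all connected clients by default
                    self.socketio.emit("gpu_update", gpu_info)
            except Exception as e:
                print(f"Error in GPU monitoring: {e}")
            self.socketio.sleep(10)
</code></pre>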
|
<python><flask><flask-socketio>
|
2025-02-09 23:06:01
| 1
| 2,062
|
Barrard
|
79,425,585
| 3,213,204
|
Copy image to clipboard in python and Qt6 in linux
|
<h3>General overview</h3>
<p>The goal is pretty simple:</p>
<ol>
<li>copy image in the clipboard</li>
<li>ideally using a solution that does not depend on a graphical library, or Qt if one is necessary</li>
<li>working on Unices, or ideally an OS-independent solution</li>
<li>With just python tools, no subprocess to call an external no python command</li>
</ol>
<h3>Working tries</h3>
<p>I was able to do it in several ways, but none fulfilled all four of these constraints.</p>
<h4>The Gtk solution</h4>
<pre class="lang-py prettyprint-override"><code>import gi
gi.require_version("Gtk", "3.0")
from gi.repository import Gtk, Gdk, GdkPixbuf
from PIL import Image
def copy_image_to_clipboard(image_path):
# Load the image with Pillow
image = Image.open(image_path)
rgba_image = image.convert("RGBA")
data = rgba_image.tobytes()
# Creating a GTK compatible pixbuf
pixbuf = GdkPixbuf.Pixbuf.new_from_data(
data,
GdkPixbuf.Colorspace.RGB,
True, # has_alpha
8, # bits_per_sample
image.width,
image.height,
image.width * 4
)
# Get the system clipboard
clipboard = Gtk.Clipboard.get(Gdk.SELECTION_CLIPBOARD)
clipboard.set_image(pixbuf)
clipboard.store() # Ensure the image persists after the script ends
# Example
copy_image_to_clipboard("image.png")
Gtk.main()
</code></pre>
<p>This one works, but it doesn’t satisfied the 2d constraint cause it use Gtk tools and not Qt or a non-graphical library.</p>
<h4>The <code>xclip</code> with subprocess solution</h4>
<pre class="lang-py prettyprint-override"><code>import subprocess
def copy_image_to_clipboard(image_path):
subprocess.run(["xclip", "-selection", "clipboard", "-t", "image/png", "-i", image_path])
print(f"✅ Image copiée dans le presse-papier : {image_path}")
copy_image_to_clipboard("/tmp/image.png")
</code></pre>
<p>This other one also works, but it depends on the <code>xclip</code> command and is OS-related, so it violates the 3rd and 4th constraints.</p>
<h4>The unworking Qt trie</h4>
<pre class="lang-py prettyprint-override"><code>import sys
from PyQt6 import QtCore, QtGui
from PIL import Image
def copy_image_to_clipboard(image_path):
# Check if the image exists
if not QtCore.QFile.exists(image_path):
print(f"Error: the file {image_path} was not found.")
return
# Create the Qt application if it doesn't already exist
if QtGui.QGuiApplication.instance() is None:
app = QtGui.QGuiApplication(sys.argv)
# Load the image with PIL and convert it to QImage
image = Image.open(image_path).convert("RGBA")
qimage = QtGui.QImage(image.tobytes(), image.width, image.height, QtGui.QImage.Format.Format_RGBA8888)
# Check if the conversion was successful
if qimage.isNull():
print("Error: the conversion of the image to QImage failed.")
return
# Copy the image to the clipboard
clipboard = QtGui.QGuiApplication.clipboard()
clipboard.setImage(qimage)
print(f"✅ Image copied to clipboard: {image_path}")
# Example usage
copy_image_to_clipboard("/tmp/image.png") # Replace with your image path
</code></pre>
<p>This one is pure Qt (2), it seems not to be OS-related (3), and it uses only Python tools (4), but it doesn't copy anything (the 1st constraint), so it's useless.</p>
<h2>The question</h2>
<p>How can I copy an image to the clipboard using just Qt (possibly together with other libraries for image processing or clipboard management, but not a full graphical toolkit like Gtk)?</p>
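<p>For completeness, a variant of the Qt attempt I am also evaluating (a sketch; I have not confirmed whether it behaves differently): keep a reference to the application object and pump its events after <code>setImage</code>, since on X11 the clipboard contents normally survive the process only if a clipboard manager takes ownership of them.</p>
<pre class="lang-py prettyprint-override"><code>import sys
from PyQt6 import QtGui
from PIL import Image

def copy_image_to_clipboard(image_path):
    # Reuse an existing application or create one, and keep a reference to it
    app = QtGui.QGuiApplication.instance() or QtGui.QGuiApplication(sys.argv)

    image = Image.open(image_path).convert("RGBA")
    qimage = QtGui.QImage(image.tobytes(), image.width, image.height,
                          QtGui.QImage.Format.Format_RGBA8888)
    QtGui.QGuiApplication.clipboard().setImage(qimage)

    # Let Qt announce the new clipboard ownership; on X11 the data only
    # outlives the script if a clipboard manager then takes it over
    app.processEvents()
    return app

app = copy_image_to_clipboard("/tmp/image.png")
</code></pre>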
|
<python><clipboard><qt6>
|
2025-02-09 19:36:46
| 0
| 321
|
fauve
|
79,425,204
| 1,473,517
|
Why should I pass a function using initializer and can I use shared memory instead?
|
<p>Take this MWE:</p>
<pre><code>from multiprocessing import Pool
from time import perf_counter as now
import numpy as np
def make_func():
n = 20000
np.random.seed(7)
M = np.random.rand(n, n)
return lambda x, y: M[x, x] + M[y, y]
class ParallelProcessor:
def __init__(self):
pass
def process_task(self, args):
"""Unpack arguments internally"""
index, integer_arg = args
print(f(index, integer_arg))
def run_parallel(self, tasks, num_cores=None):
"""Simplified parallel execution without partial"""
num_cores = num_cores
task_args = [(idx, val) for idx, val in enumerate(tasks)]
start = now()
global f
f = make_func()
print(f"************** {now() - start} seconds to make f")
start = now()
with Pool(num_cores) as pool:
results = pool.map( self.process_task, task_args)
print(f"************** {now() - start} seconds to run all jobs")
return results
if __name__ == "__main__":
processor = ParallelProcessor()
processor.run_parallel(tasks=[1, 2, 3, 4, 5], num_cores=2)
</code></pre>
<p>I have declared <code>f</code> to be global. I think that means that a copy of the large numpy array will be made in each worker.</p>
<p>Alternatively I could use initializer with:</p>
<pre><code>from multiprocessing import Pool
from time import perf_counter as now
import time
import os
import numpy as np
def make_func():
n = 20000
np.random.seed(7)
M = np.random.rand(n, n)
return lambda x, y: M[x, x] + M[y, y]
def init_worker():
global f
f = make_func()
class ParallelProcessor:
def __init__(self):
pass
def process_task(self, args):
"""Unpack arguments internally"""
index, integer_arg = args
print(f(index, integer_arg))
def run_parallel(self, tasks, num_cores=None):
"""Parallel execution with proper initialization"""
num_cores = num_cores or len(os.sched_getaffinity(0))
task_args = [(idx, val) for idx, val in enumerate(tasks)]
start = now()
with Pool(num_cores, initializer=init_worker) as pool:
results = pool.map(self.process_task, task_args)
print(f"************** {now() - start} seconds to run all jobs")
return results
if __name__ == "__main__":
processor = ParallelProcessor()
processor.run_parallel(tasks=[1, 2, 3, 4, 5], num_cores=2)
</code></pre>
<p>I am told this is better style, but I can't see what the advantage is. I am not sure why <code>f</code> has to be declared global in <code>init_worker</code>. In any case, a copy of the large numpy array still ends up in each worker. Overall it also seems to be slower.</p>
<p>I am using Linux.</p>
<hr />
<p>Ideally I would like not to make a copy of the array at each worker. Is there a fast way to use shared memory to avoid that?</p>
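<p>For reference, this is the kind of sharing I have in mind, sketched with <code>multiprocessing.shared_memory</code> (Python 3.8+): the workers attach to the block by name, so no copy of the array is serialized to them.</p>
<pre><code>import numpy as np
from multiprocessing import Pool, shared_memory

def init_worker(shm_name, shape, dtype):
    global M, _shm
    _shm = shared_memory.SharedMemory(name=shm_name)  # attach, do not copy
    M = np.ndarray(shape, dtype=dtype, buffer=_shm.buf)

def task(args):
    x, y = args
    return M[x, x] + M[y, y]

if __name__ == "__main__":
    n = 20000
    shm = shared_memory.SharedMemory(create=True, size=n * n * 8)
    M = np.ndarray((n, n), dtype=np.float64, buffer=shm.buf)
    np.random.seed(7)
    M[:] = np.random.rand(n, n)

    with Pool(2, initializer=init_worker,
              initargs=(shm.name, (n, n), np.float64)) as pool:
        print(pool.map(task, [(0, 1), (2, 3)]))

    shm.close()
    shm.unlink()
</code></pre>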
|
<python><multiprocessing>
|
2025-02-09 16:01:43
| 2
| 21,513
|
Simd
|
79,425,183
| 405,017
|
Processing a list of zero or more checkboxes in FastHTML
|
<p>With <a href="https://www.fastht.ml/" rel="nofollow noreferrer">FastHTML</a> I'm writing a route to process a list of same-named checkboxes representing filenames in a <a href="https://picocss.com/docs/dropdown" rel="nofollow noreferrer">Pico CSS Dropdown</a> passed in a query string by HTMX:</p>
<pre class="lang-py prettyprint-override"><code>@rt.get("/files")
def files_selected(my_files: list[pathlib.Path]):
print("\n".join(f.name for f in my_files))
</code></pre>
<p>This works nicely most of the time:</p>
<ul>
<li>HTMX makes a request for <code>/files?my_files=foo.txt&my_files=bar.txt</code></li>
<li>FastHTML looks at the method signature and ~magically converts this into a list of <code>pathlib.Path</code> instances for me and exposes as the local variable <code>my_files</code>.</li>
</ul>
<p>However, when no checkboxes are passed—when HTMX requests just <code>/files</code> with no query string—FastHTML throws HTTP 400 with the response <code>Missing required field: my_files</code>.</p>
<p>I tried switching the function signature to…</p>
<pre class="lang-py prettyprint-override"><code>def files_selected(my_files: list[pathlib.Path] | None = None):
</code></pre>
<p>…but that breaks the magic; passing a single file results in<br />
<code>my_files=['f', 'o', 'o', '.', 't', 'x', 't']</code>.</p>
<p>I've tried changing the function to…</p>
<pre class="lang-py prettyprint-override"><code>def files_selected(req: fh.starlette.Request):
my_files = req.query_params["my_files"]
</code></pre>
<p>…but this only returns one file (as a string) when multiple are supplied.</p>
<p>What's the right way to accept optional parameters in the query string?</p>
|
<python><htmx><starlette><fasthtml>
|
2025-02-09 15:49:48
| 1
| 304,256
|
Phrogz
|
79,425,037
| 11,635,654
|
orbax save/restore with 8 devices
|
<p>I have set up a snippet on Colab <a href="https://colab.research.google.com/drive/1clg6ZKorhfFQUuY44oYjyoW8rJ1FCe1B?usp=sharing" rel="nofollow noreferrer">here</a>
with</p>
<pre><code>jax.__version__ # 0.4.33 9Feb2025
orbax.checkpoint.__version__ # 0.6.4 9Feb2025
</code></pre>
<p>It is quite difficult to follow the flax/orbax changes for saving/restoring a (simple) model, even when following the "latest" documentation of these two packages.
I have managed to cook something up, but I was wondering if I'm doing the right thing using 8 TPUs on Colab; for instance, it seems that one can only save a single instance of the model among the 8 existing ones (i.e. the use of <code>flax.jax_utils.unreplicate</code> seems necessary</p>
<pre class="lang-py prettyprint-override"><code>ckpt = {'model': flax.jax_utils.unreplicate(model_state)}
</code></pre>
<p>)
At restoration in the same environment after</p>
<pre class="lang-py prettyprint-override"><code>target={'model': abstract_state} # a Training State quite dummy
chpt_restored = checkpoint_manager.restore(checkpoint_manager.latest_step(), items=target)
</code></pre>
<p>one restores 8 versions using</p>
<pre class="lang-py prettyprint-override"><code>new_model_state = flax.jax_utils.replicate(chpt_restored['model'])
</code></pre>
<p>but these are 8 replicated versions of the model from the same instance.</p>
<p>It may be intended to behave like that, but I am wondering how one can resume a first training session and continue the training, since one may want to use a single instance in the second training session.
I hope that I have been clear. Any comment on the Colab snippet is welcome.</p>
|
<python><jax><flax>
|
2025-02-09 14:12:29
| 0
| 402
|
Jean-Eric
|
79,425,025
| 893,254
|
Kafka Consumer group session timed out without a successful response from the group coordinator: revoking assignment and rejoining group
|
<p>The title had to be shortened a bit.</p>
<p>The full error message is more like this:</p>
<pre><code>Kafka Consumer group session timed out (in join-state steady) after X ms without a successful response from the group coordinator: revoking assignment and rejoining group
</code></pre>
<p>What causes this?</p>
<h1>Context</h1>
<p>I have a simple python application running which is intended to verify a data migration process.</p>
<ul>
<li>Data was migrated (copied) from one Kafka cluster to another</li>
<li>The process spawns two consumers, one for each cluster, and reads events sequentially</li>
<li>It verifies that the consumed data from each consumer is the same</li>
</ul>
<p>Here are some more detailed log lines.</p>
<pre><code>%5|1739052194.708|REQTMOUT|consumer2.topicname#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator/1: Timed out OffsetCommitRequest in flight (after 158ms, timeout #0)
%5|1739052194.708|REQTMOUT|consumer2.topicname#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator/1: Timed out OffsetCommitRequest in flight (after 158ms, timeout #1)
%5|1739052194.708|REQTMOUT|consumer2.topicname#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator/1: Timed out OffsetCommitRequest in flight (after 158ms, timeout #2)
%5|1739052194.708|REQTMOUT|consumer2.topicname#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator/1: Timed out OffsetCommitRequest in flight (after 158ms, timeout #3)
%5|1739052194.708|REQTMOUT|consumer2.topicname#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator/1: Timed out OffsetCommitRequest in flight (after 158ms, timeout #4)
%4|1739052194.834|REQTMOUT|consumer2.topicname#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator/1: Timed out 2216 in-flight, 0 retry-queued, 15394 out-queue, 1 partially-sent requests
%3|1739052194.835|FAIL|consumer2.topicname#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator: 192.168.0.2:9092: 17610 request(s) timed out: disconnect (average rtt 159.627ms) (after 100166ms in state UP)
%4|1739052205.056|SESSTMOUT|consumer2.topicname#consumer-2| [thrd:main]: Consumer group session timed out (in join-state steady) after 45000 ms without a successful response from the group coordinator (broker 1, last error was Local: Timed out in queue): revoking assignment and rejoining group
%4|1739052206.391|REQTMOUT|consumer2.topicname#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator/1: Timed out 0 in-flight, 0 retry-queued, 1 out-queue, 0 partially-sent requests
%3|1739052206.392|FAIL|consumer2.topicname#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator: 192.168.0.2:9092: 1 request(s) timed out: disconnect (average rtt 156.853ms) (after 9708ms in state UP)
%4|1739052212.861|REQTMOUT|kafka_topic_data_verify_consumer1_rightmove.property_data#consumer-1| [thrd:GroupCoordinator]: GroupCoordinator/3: Timed out 0 in-flight, 0 retry-queued, 1 out-queue, 0 partially-sent requests
%3|1739052212.862|FAIL|kafka_topic_data_verify_consumer1_rightmove.property_data#consumer-1| [thrd:GroupCoordinator]: GroupCoordinator: 192.168.0.3:9092: 1 request(s) timed out: disconnect (average rtt 146.758ms) (after 118194ms in state UP)
</code></pre>
<p>The application is written in Python, although this is unlikely to be significant.</p>
<p>What is strange is that this code is very similar to the code which was used to migrate the topic data in the first place. That earlier code had a single consumer and a single producer; after each event was read, the producer was flushed and then the consumer's commit function was called.</p>
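<p>For reference, the consumers are created with confluent-kafka (the log lines above are librdkafka's); a simplified sketch of the setup, with placeholder values for anything not visible in the logs:</p>
<pre><code>from confluent_kafka import Consumer

# Placeholder config: broker address and topic name come from the logs above,
# the rest is illustrative. session.timeout.ms is left at the librdkafka default
# of 45000 ms, matching the SESSTMOUT line.
conf = {
    "bootstrap.servers": "192.168.0.2:9092",
    "group.id": "consumer2.topicname",
    "enable.auto.commit": False,
    "auto.offset.reset": "earliest",
}

consumer2 = Consumer(conf)
consumer2.subscribe(["topicname"])

while True:
    msg = consumer2.poll(timeout=1.0)
    if msg is None:
        continue
    # ... compare the payload against the message read from consumer1 ...
    consumer2.commit(message=msg, asynchronous=False)  # synchronous commit per event
</code></pre>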
|
<python><apache-kafka>
|
2025-02-09 14:02:02
| 1
| 18,579
|
user2138149
|
79,424,951
| 11,439,134
|
Cursor not always updating in Tkinter
|
<p>I'm trying to change the cursor in my Tkinter app when the mouse moves. As a simplified example:</p>
<pre><code>import tkinter as tk
root = tk.Tk()
root.geometry("500x200")
frame2 = tk.Frame(root)
frame = tk.Canvas(frame2)
def on_mouse_motion(event):
if event.x > 200:
frame2.config(cursor="xterm")
else:
frame2.config(cursor="crosshair")
frame.bind("<Motion>", on_mouse_motion)
frame.pack(expand=True, fill="both")
frame2.pack(expand=True, fill="both")
root.mainloop()
</code></pre>
<p>However, the cursor updates the first one or two times and then stops changing.</p>
<p>I've noticed that it works if I change the cursor on the Canvas and not on its container, but because of limitations with the widget I'm working with (a non-standard widget that faces the same problem), that's not possible. I've tried using <code>update</code> and <code>update_idletasks</code>, but to no avail. The only thing that seems to work is putting a <code>print()</code> in the callback, which is kind of weird and not a great solution.</p>
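<p>One variant I still plan to try is only reconfiguring when the cursor actually needs to change, in case the repeated <code>config</code> calls are the problem (untested sketch):</p>
<pre><code>current_cursor = None

def on_mouse_motion(event):
    global current_cursor
    wanted = "xterm" if event.x > 200 else "crosshair"
    if wanted != current_cursor:        # only touch the widget when the value changes
        frame2.config(cursor=wanted)
        current_cursor = wanted
</code></pre>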
<p>I was wondering if anyone has encountered this issue in the past and knows how to work around/fix it?</p>
|
<python><tkinter>
|
2025-02-09 13:22:19
| 1
| 1,058
|
Andereoo
|
79,424,829
| 367,824
|
How to strip indents (not just lines) from inner Jinja2 blocks?
|
<p>Using Python’s Jinja, I’m concerned with both code readability and correct output.
Here is my Jinja template:</p>
<pre><code>import textwrap
import jinja2

M3U_TEMPLATE = jinja2.Template(
textwrap.dedent("""\
#EXTM3U
{% for item in playlist %}
#EXTALB:{{ item.strAlbum }} ({{ item.release }})
#EXTART:{{ item.strAlbumArtists }}
#EXTINF:{{ item.iDuration }},{{ item.strArtists }} - {{ item.strTitle }}
{{ item.path }}
{% endfor %}
""")
)
</code></pre>
<p>Python’s <code>textwrap.dedent()</code> takes care of removing most of the indentation from the text. But I also want to remove the indentation from the block of text inside the <code>{% for %}</code> loop. I want this kind of result:</p>
<pre><code>#EXTM3U
#EXTALB:Offramp (1982)
#EXTART:Pat Metheny Group
#EXTINF:408,Pat Metheny Group - James
/media/Jazz, Fusion etc/Pat Metheny Group/1982 • Offramp/06 James.m4a
#EXTALB:Blue Moon (1961)
#EXTART:The Marcels
#EXTINF:133,The Marcels - Blue Moon
/media/Pop/The Marcels/1961 • Blue Moon/01 Blue Moon.m4a
</code></pre>
<p>But I’m getting this:</p>
<pre><code>#EXTM3U
#EXTALB:Offramp (1982)
#EXTART:Pat Metheny Group
#EXTINF:408,Pat Metheny Group - James
/media/Jazz, Fusion etc/Pat Metheny Group/1982 • Offramp/06 James.m4a
#EXTALB:Blue Moon (1961)
#EXTART:The Marcels
#EXTINF:133,The Marcels - Blue Moon
/media/Pop/The Marcels/1961 • Blue Moon/01 Blue Moon.m4a
</code></pre>
<p>I want the item block’s indentation in the code, but not in the final result. How can I get rid of it? I can’t find a Jinja example that covers my case.</p>
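<p>For completeness: I am aware of Jinja's whitespace-control options, but as far as I can tell <code>trim_blocks</code>/<code>lstrip_blocks</code> (and the <code>{%-</code>/<code>-%}</code> modifiers) only affect the block tags themselves, not the leading spaces of ordinary text lines such as <code>#EXTALB:…</code>. This is the kind of variant I have in mind (same template as above, options passed to <code>Template</code>):</p>
<pre><code>M3U_TEMPLATE = jinja2.Template(
    textwrap.dedent("""\
        #EXTM3U
        {% for item in playlist %}
            #EXTALB:{{ item.strAlbum }} ({{ item.release }})
            #EXTART:{{ item.strAlbumArtists }}
            #EXTINF:{{ item.iDuration }},{{ item.strArtists }} - {{ item.strTitle }}
            {{ item.path }}
        {% endfor %}
        """),
    trim_blocks=True,     # removes the newline right after a {% ... %} tag
    lstrip_blocks=True,   # strips leading whitespace before a {% ... %} tag only
)
</code></pre>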
|
<python><jinja2>
|
2025-02-09 11:35:21
| 4
| 330
|
avibrazil
|
79,424,554
| 9,495,110
|
How to draw straight lines from one edge of a blob to the other in opencv
|
<p>I have a blob of irregular shapes like this:</p>
<p><a href="https://i.sstatic.net/B0Ujjszu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/B0Ujjszu.png" alt="blob" /></a></p>
<p>Now I want to draw lines from the outer edge to the inner edge of the blob like this:</p>
<p><a href="https://i.sstatic.net/Z4NZijsm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Z4NZijsm.png" alt="line drawn" /></a></p>
<p>I have used <code>cv2.connectedComponents()</code> on the result of Canny edge detection to get the two edges.</p>
<pre class="lang-py prettyprint-override"><code>import cv2
import numpy as np

originalimage = cv2.imread(R"blob.png")
edge = cv2.Canny(originalimage, 100, 200)
def get_islands(img):
n, labels = cv2.connectedComponents(img.astype(np.uint8))
islands = [(labels == i).astype(np.uint8) for i in range(1, n)]
return islands
edge_sets = get_islands(edge)
inner = edge_sets[1]
outer = edge_sets[0]
</code></pre>
<p>Now I want to draw lines from the outer edge to the inner edge. I have tried calculating the slope at each point of the outer edge from its nearest neighbors, but it fails for corner points. First, I created a graph of neighboring pixels.</p>
<pre class="lang-py prettyprint-override"><code>from collections import defaultdict

def create_graph(binary_image):
graph = defaultdict(list)
height, width = binary_image.shape
all_points = np.nonzero(binary_image)
all_points= sorted(list(map(list, [*zip(*all_points)]))) #Transpose the List
for x, y in all_points:
for dx in range(-1, 2, 1):
for dy in range(-1, 2, 1):
if dx == 0 and dy == 0:
continue
if (x+dx>=0 and y+dy>=0 and x+dx < height and y+dy < width) and binary_image[x+dx][y+dy]:
graph[(x, y)].append((x+dx, y+dy))
return graph
outer_graph = create_graph(edge_sets[0])
inner_graph = create_graph(edge_sets[1])
</code></pre>
<p>Then I calculated the slope of the outer edge at every point from its neighboring pixels and drew a line perpendicular to it.</p>
<pre class="lang-py prettyprint-override"><code>def get_slope(graph, point):
neighbours = graph[point]
slopes = []
eps = 1e-6
for neighbour in neighbours:
if neighbour[0] != point[0]: # Vertical Line
slopes.append((neighbour[1] - point[1]) / (neighbour[0] - point[0] + eps))
if slopes:
return np.mean(slopes)
else:
return 0
lines = []
for point in outer_graph:
slope = get_slope(outer_graph, point)
if slope == 0:
continue
perpedicular_slope = -1/slope
for inner_point in inner_graph:
if abs(get_slope(inner_graph, inner_point) - perpedicular_slope) < 0.01 \
and is_line_in_image(point, inner_point, originalimage):
lines.append([point, inner_point])
</code></pre>
<p>But it fails at the points where the edge is not smooth, and the lines deviate a lot.</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt

lines = np.array(lines)
line_image = cv2.cvtColor(originalimage, cv2.COLOR_GRAY2RGB)
for i, line in enumerate(lines):
if i % 100 == 0:
line_image = cv2.line(line_image, tuple(line[0][::-1]), tuple(line[1][::-1]), (255, 0, 0), 3)
plt.imshow(line_image)
</code></pre>
<p>Output:</p>
<p><a href="https://i.sstatic.net/g63sY9Iz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/g63sY9Iz.png" alt="output" /></a></p>
<p>How can I improve my algorithm to get the expected output? Is my approach correct?</p>
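<p>One alternative I have been considering (but have not fully worked out) is to skip the slope matching entirely and pair every outer edge point with its nearest inner edge point using a KD-tree; a rough sketch, reusing <code>outer</code> and <code>inner</code> from above:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from scipy.spatial import cKDTree

# (row, col) coordinates of the edge pixels
outer_pts = np.column_stack(np.nonzero(outer))
inner_pts = np.column_stack(np.nonzero(inner))

tree = cKDTree(inner_pts)           # index the inner edge
_, idx = tree.query(outer_pts)      # nearest inner point for each outer point

lines = [[tuple(o), tuple(inner_pts[i])] for o, i in zip(outer_pts, idx)]
</code></pre>
<p>But I am not sure whether nearest-neighbour pairs stay roughly perpendicular to the edges, which is what I actually want, so I would still appreciate feedback on the slope-based approach above.</p>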
|
<python><opencv><image-processing><computer-vision><geometry>
|
2025-02-09 07:41:20
| 1
| 1,027
|
let me down slowly
|
79,424,281
| 1,245,659
|
how to refer back to the model from utils.py in a Django object
|
<p>I'm trying to create a field in a Django model that is forced to have a unique value. I'm using utils.py to generate the value.</p>
<p>The error I get is:</p>
<pre><code>File "/Users/evancutler/PycharmProjects/DjangoProject1/MCARS/utils.py", line 2, in <module>
from MCARS.models import Profile
ImportError: cannot import name 'Profile' from partially initialized module 'MCARS.models' (most likely due to a circular import) (/Users/evancutler/PycharmProjects/DjangoProject1/MCARS/models.py)
</code></pre>
<p>Here's my model:</p>
<pre><code>class Profile(models.Model):
# Managed fields
user = models.OneToOneField(User, related_name="profile", on_delete=models.CASCADE)
memberid = models.CharField(
max_length = 10,
blank=True,
editable=True,
unique=True,
        default=utils.create_new_ref_number  # pass the callable, not its result, so a new id is generated per row
)
avatar = models.ImageField(upload_to="static/mcars/img/avatars", null=True, blank=True)
birthday = models.DateField(null=True, blank=True)
gender = models.CharField(max_length=10, choices=constants.GENDER_CHOICES, null=True, blank=True)
invited = models.BooleanField(default=False)
registered = models.BooleanField(default=False)
height = models.PositiveSmallIntegerField(null=True, blank=True)
phone = models.CharField(max_length=32, null=True, blank=True)
address = models.CharField(max_length=255, null=True, blank=True)
number = models.CharField(max_length=32, null=True, blank=True)
city = models.CharField(max_length=50, null=True, blank=True)
zip = models.CharField(max_length=30, null=True, blank=True)
</code></pre>
<p>here's my utils:</p>
<pre><code>import random
from MCARS.models import Profile
def create_new_ref_number():
not_unique = True
while not_unique:
unique_ref = random.randint(1000000000, 9999999999)
if not Profile.objects.filter(memberid=unique_ref):
not_unique = False
return str(unique_ref)
</code></pre>
<p>How do I tell utils.py to refer back to the model to check whether the value is unique and, if not, to try again?</p>
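<p>Is a lazy import inside the function, so the model is only looked up at call time, the right direction? Something like this untested sketch using <code>django.apps</code>:</p>
<pre><code>import random


def create_new_ref_number():
    # imported at call time, so models.py can import utils.py without a cycle
    from django.apps import apps
    Profile = apps.get_model("MCARS", "Profile")

    while True:
        unique_ref = str(random.randint(1000000000, 9999999999))
        if not Profile.objects.filter(memberid=unique_ref).exists():
            return unique_ref
</code></pre>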
<p>Thanks!</p>
|
<python><django>
|
2025-02-09 02:59:47
| 1
| 305
|
arcee123
|
79,423,875
| 6,151,828
|
Equivalent of Pythons selection by multiindex level (especially columns) in Julia
|
<p>My understanding is that Julia DataFrames do not support MultiIndexing, which generally does not pose many problems, but translating some pythonic habits to Julia is difficult. I wonder how one could load the data and subselect features by columns, as in the example below.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd
#generating sample data
nsmpls = 10
smpls = [f'smpl{j}' for j in range(nsmpls)]
nfeats = 5
feats = [f'feat{j}' for j in range(nfeats)]
data = np.random.rand(nfeats, nsmpls)
countries = ['France'] * 2 + ['UK'] * 3 + ['US'] * 5
df = pd.DataFrame(data, index=feats, columns=pd.MultiIndex.from_tuples(zip(countries, smpls)))
df.to_csv('./data.tsv', sep='\t')
#---------------------------------------------------------------------
#loading dataset
df = pd.read_csv('./data.tsv', sep='\t', index_col=0, header=[0,1])
#extracting subset
dg = df.xs('France', level=0, axis=1)
print(dg.shape)
#iterating
for country, group in df.groupby(level=0, axis=1):
print('#samples: {}'.format(group.shape[1]))
</code></pre>
|
<python><dataframe><julia><multi-index>
|
2025-02-08 20:31:54
| 1
| 803
|
Roger V.
|
79,423,661
| 1,574,054
|
Encrypt a file using a TPM with tpm2_pytss
|
<p>I am new to TPMs and want to construct a minimal example encrypting and decrypting a file (here, for simplicity, represented just by a <code>bytes</code> object). I want everything to be non-persistent, so that the encryption/decryption only works until reboot. Furthermore, I don't want the symmetric encryption key (AES in this case) to leave the TPM.</p>
<p>This is what I have so far:</p>
<pre><code>from tpm2_pytss import *
with ESAPI() as esapi:
primary = esapi.create_primary(
in_sensitive=None,
in_public="rsa2048",
primary_handle=ESYS_TR.NULL
)
primary_handle = primary[0]
symmetric = esapi.create(primary_handle, None, "aes128cfb")
# Question: Can I construct this directly inside the TPM?
# Here it looks like I am importing a key into the TPM that was
# previously exported from it?
key_handle = esapi.load(primary_handle, symmetric[0], symmetric[1])
data = b"0123"
buff, iv_out = esapi.encrypt_decrypt(
key_handle,
decrypt=False,
mode=TPM2_ALG.AES,
iv_in=(b'1' * 8),
in_data=data
)
print(buff)
print(iv_out)
</code></pre>
<p>Please also note the question(s) in the code above.</p>
<p>In this form, the example causes this output followed by the exception:</p>
<pre><code>WARNING:esys:src/tss2-esys/api/Esys_EncryptDecrypt.c:328:Esys_EncryptDecrypt_Finish() Received TPM Error
ERROR:esys:src/tss2-esys/api/Esys_EncryptDecrypt.c:110:Esys_EncryptDecrypt() Esys Finish ErrorCode (0x00000143)
Traceback (most recent call last)
[...]
buff, iv_out = esapi.encrypt_decrypt(
^^^^^^^^^^^^^^^^^^^^^^
[...]
tpm2_pytss.TSS2_Exception.TSS2_Exception: tpm:error(2.0): command code not supported
</code></pre>
<p>How can I fix this and complete the example? Pointers to helpful resources on this are also welcome; Google was not exactly helpful here.</p>
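<p>One thing I still want to try: the TPM spec also defines <code>TPM2_EncryptDecrypt2</code>, which some TPMs support even when the original <code>TPM2_EncryptDecrypt</code> command is not available. If tpm2_pytss exposes it like the other ESAPI calls (presumably as <code>encrypt_decrypt_2</code>; I have not verified the exact name or argument order), the call would look roughly like this:</p>
<pre><code># Assumed to mirror esapi.encrypt_decrypt(); method name and keywords not verified.
buff, iv_out = esapi.encrypt_decrypt_2(
    key_handle,
    in_data=data,
    decrypt=False,
    mode=TPM2_ALG.AES,
    iv_in=(b'0' * 16),  # AES uses a 16-byte IV; the 8-byte IV above may be its own problem
)
print(buff, iv_out)
</code></pre>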
|
<python><python-3.x><encryption><tpm><tpm-2.0>
|
2025-02-08 17:51:09
| 0
| 4,589
|
HerpDerpington
|
79,423,573
| 7,483,509
|
A parent class that allows dataclass style annotations that works with nested class hierarchy
|
<p>I would like to write a class that works similarly to <code>dataclass</code>, but works with a nested inheritance hierarchy. It would also write the data to a dictionary rather than directly to the class instance.</p>
<p>Currently I have something that works only one level of subclassing deep:</p>
<pre class="lang-py prettyprint-override"><code>from typing import get_type_hints


class Data:
metadata: dict
def __init_subclass__(cls):
hints = get_type_hints(cls)
defaults = {k: v for k, v in cls.__dict__.items() if not k.startswith("__")}
def __init__(self, metadata=FrozenDict(), **kwargs):
metadata = dict(metadata)
# Collect attributes from kwargs or defaults
for key, typ in hints.items():
if key == "metadata":
continue
if key in kwargs:
metadata[key] = kwargs[key]
elif key in defaults:
metadata[key] = defaults[key]
else:
raise TypeError(f"Missing required argument: '{key}'")
self.metadata = metadata
cls.__init__ = __init__ # Override constructor dynamically
def __repr__(self):
if not hasattr(self, "__cached_repr"):
self.__cached_repr = f"{cls.__name__}(metadata={repr(self.metadata)})"
return self.__cached_repr
cls.__repr__ = __repr__
def __str__(self):
if not hasattr(self, "__cached_str"):
argstr = ",".join(
f"{k}={repr(v)}" for k, v in self.metadata.items() if k in hints
)
self.__cached_str = f"{cls.__name__}({argstr})"
return self.__cached_str
cls.__str__ = __str__
</code></pre>
<p>Example usage:</p>
<pre class="lang-py prettyprint-override"><code>class MyData(Data):
x: int
y: float = 3.14 # Default value
label: str = "default"
class NestedData(MyData):
extra: str = "nested"
# ✅ Works for direct subclass
n1 = MyData(x=10)
# ❌ Does not work for nested subclass
n2 = NestedData(x=10)
</code></pre>
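<p>I suspect the problem is that <code>defaults</code> is collected from <code>cls.__dict__</code> only, so <code>NestedData</code> never sees the defaults for <code>y</code> and <code>label</code> that live on <code>MyData</code>. A variant I am considering walks the MRO instead (sketch of just the changed part of <code>__init_subclass__</code>):</p>
<pre class="lang-py prettyprint-override"><code>def __init_subclass__(cls):
    hints = get_type_hints(cls)

    # gather defaults from every ancestor, nearest class winning,
    # so NestedData also picks up y=3.14 and label="default" from MyData
    defaults = {}
    for klass in reversed(cls.__mro__):
        for k, v in vars(klass).items():
            if k in hints and not k.startswith("__"):
                defaults[k] = v

    # ... rest of __init_subclass__ unchanged ...
</code></pre>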
|
<python><inheritance>
|
2025-02-08 16:52:10
| 0
| 1,109
|
Nick Skywalker
|
79,423,383
| 16,815,358
|
NumbaPerformanceWarning about contiguous arrays, although both arrays are already contiguous
|
<p>I am having a problem removing this warning before publishing a package on PyPI.</p>
<p>As a summary, this is the function that I am using to speed up the <code>np.dot()</code> function:</p>
<pre><code>import numba as nb
import numpy as np

@nb.jit(nb.float64[:,:](nb.float64[:,:], nb.float64[:,:]), nopython=True)
def fastDot(X, Y):
return np.dot(X, Y)
</code></pre>
<p>The aim is to use this function to multiply a matrix of lagged signals with the eigenvectors; it could also be any other matrix:</p>
<pre><code># Compute principal components
PC = fastDot(X, eigenVectors)
</code></pre>
<p>This is where I get the following warning:</p>
<pre><code>NumbaPerformanceWarning: np.dot() is faster on contiguous arrays, called on (Array(float64, 2, 'A', False, aligned=True), Array(float64, 2, 'A', False, aligned=True))
return np.dot(X, Y)
</code></pre>
<p>I have also used this line just before the <code>fastDot()</code> call:</p>
<pre><code>eigenVectors, X = np.ascontiguousarray(eigenVectors), np.ascontiguousarray(X)
</code></pre>
<p>Still no success.</p>
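<p>One thing I suspect (but am not sure about) is that the explicit signature itself is to blame: <code>nb.float64[:,:]</code> declares an any-layout (<code>'A'</code>) array, which is exactly what the warning prints, so calling <code>np.ascontiguousarray()</code> beforehand cannot change the declared type. A variant that declares the inputs as C-contiguous would look like this:</p>
<pre><code>import numba as nb
import numpy as np

# float64[:, ::1] declares a C-contiguous 2-D array; the plain float64[:, :]
# in the original signature means "any layout", which is what triggers the warning
@nb.jit(nb.float64[:, :](nb.float64[:, ::1], nb.float64[:, ::1]), nopython=True)
def fastDot(X, Y):
    return np.dot(X, Y)
</code></pre>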
<p>I know it's not a huge problem, but I would like to get rid of this warning properly, rather than just suppressing it via the <code>warnings</code> module.</p>
<p>Can someone please help me in:</p>
<ol>
<li>Understanding why this is happening</li>
<li>How can I remove this?</li>
</ol>
<p>Thank you so much in advance!</p>
|
<python><numba><scientific-computing>
|
2025-02-08 14:47:07
| 1
| 2,784
|
Tino D
|
79,423,247
| 5,430,790
|
How to color excel cells with specific values in dataframe Python
|
<p>I wrote some code to write my dataframe, called df3, into an Excel file.</p>
<p>The code is working fine, but now I'd like to change the background color of all cells based on their value.</p>
<p>I tried the solution below; I don't get any errors, but I also don't get any colored cells.</p>
<p>My code:</p>
<pre><code>def cond_formatting(x):
if x == 'OK':
return 'background-color: lightgreen'
elif x == 'NO':
return 'background-color: red'
else:
return None
print(pd.merge(df, df2, left_on='uniquefield', right_on='uniquefield2', how='left').drop('uniquefield2', axis=1))
df3 = df.merge(df2, left_on='uniquefield', right_on='uniquefield2', how='left').drop(['uniquefield2', 'tournament2', 'home2', 'away2', 'result2'], axis=1)
df3 = df3[["home","away","scorehome","scoreaway","best_bets","oddtwo","oddthree","htresult","shresult","result","over05ht","over15ht","over05sh","over15sh","over05","over15","over25","over35","over45","goal","esito","tournament","uniquefield"]]
df3 = df3.sort_values('best_bets')
df3.style.applymap(cond_formatting)
# determining the name of the file
file_name = camp + '_Last_20' + '.xlsx'
# saving the excel
df3.to_excel(file_name, freeze_panes=(1, 0))
print('Tournament is written to Excel File successfully.')
</code></pre>
<p>As I said, the code is working but all cell background colors stay white (no colors).
Any suggestions?</p>
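<p>The only other idea I have is that <code>df3.style.applymap(...)</code> returns a new Styler object which I'm throwing away, so the styling never reaches the file. Would keeping that object and exporting it instead of the plain DataFrame be the right fix? Something like this (untested):</p>
<pre><code>styled = df3.style.applymap(cond_formatting)   # keep the returned Styler
styled.to_excel(file_name, freeze_panes=(1, 0))
</code></pre>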
<p>Thanks for your help</p>
|
<python><excel><pandas>
|
2025-02-08 13:19:35
| 2
| 319
|
Marci
|
79,423,207
| 352,290
|
Close button not enabled while rendering an image using OpenCV
|
<p>I don't see the window's close button on a Mac machine, although this code works fine on a Windows machine, when rendering an image with OpenCV's <code>imshow</code> function.</p>
<pre><code>def detect_objects_in_image(image_path: str):
frame = cv2.imread(image_path)
if frame is None:
typer.echo(f"Error: Unable to load image {image_path}.")
return
results = model(frame, imgsz=640, conf=0.5)
for result in results:
for box in result.boxes:
x1, y1, x2, y2 = map(int, box.xyxy[0])
class_id, confidence = int(box.cls[0]), float(box.conf[0])
cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
cv2.putText(frame, f"Class {class_id}: {confidence:.2f}", (x1, y1 - 10),
cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)
cv2.imshow("Image Analysis", frame)
cv2.waitKey(0)
cv2.destroyAllWindows()
</code></pre>
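<p>The only workaround I have thought of so far is to poll the window state instead of blocking forever in <code>waitKey(0)</code>, so the loop ends on a key press or when the window is closed; I have not been able to verify it on the Mac yet:</p>
<pre><code>cv2.imshow("Image Analysis", frame)
# poll instead of blocking: exit on any key press or when the window is closed
while cv2.getWindowProperty("Image Analysis", cv2.WND_PROP_VISIBLE) >= 1:
    if cv2.waitKey(100) != -1:
        break
cv2.destroyAllWindows()
</code></pre>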
|
<python><macos><opencv>
|
2025-02-08 12:53:03
| 0
| 1,360
|
user352290
|
79,423,122
| 8,354,581
|
RuleBasedCollator rule ignored
|
<p>I'm trying to use the ICU RuleBasedCollator in Python.
In my code I specify a rule whereby "ä" should sort before "a" as a secondary (accent) difference:</p>
<pre><code>from icu import RuleBasedCollator
l=["a","ä"]
rbc = RuleBasedCollator('\n&ä<<a')
sorted(l, key=rbc.getSortKey)
</code></pre>
<p>However, the output of the <code>sorted</code> is:</p>
<p>['a', 'ä']</p>
<p>I expected: ['ä','a']
What did I do wrong?</p>
<p>Many thanks</p>
|
<python><collation><icu>
|
2025-02-08 11:51:02
| 1
| 379
|
korppu73
|
79,422,511
| 15,547,292
|
python ctypes: reading a string array
|
<p>With python ctypes, how can I read a NUL-terminated array of NUL-terminated strings, e.g. ghostscript's <code>gs_error_names</code> ?</p>
<p>I know how to get the first value:</p>
<pre class="lang-py prettyprint-override"><code>from ctypes import *
from ctypes.util import find_library
gs = CDLL(find_library("gs"))
print(c_char_p.in_dll(gs, 'gs_error_names').value)
</code></pre>
<p>I also know how to get a fixed number of values:</p>
<pre class="lang-py prettyprint-override"><code>print(list((c_char_p * 10).in_dll(gs, 'gs_error_names')))
</code></pre>
<p>But how can I read all values until the end of the array?</p>
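<p>The closest I have come so far is taking the address of the first element and casting it to a <code>POINTER(c_char_p)</code>, then walking the array until the terminating <code>NULL</code> comes back as <code>None</code>, but I am not sure this is safe or idiomatic:</p>
<pre class="lang-py prettyprint-override"><code># reuses gs and the star-import from above
first = c_char_p.in_dll(gs, 'gs_error_names')       # first element of the array
arr = cast(addressof(first), POINTER(c_char_p))     # pointer to the start of the array

names = []
i = 0
while arr[i] is not None:                           # NULL terminator reads back as None
    names.append(arr[i])
    i += 1
print(names)
</code></pre>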
|
<python><ctypes>
|
2025-02-08 01:51:32
| 2
| 2,520
|
mara004
|
79,422,490
| 10,124,658
|
Python split() function :: Need to split "int_32\n' " so that I get int_32 alone
|
<p>Need to split "int_32\n' " so that I get int_32 alone.</p>
<p>I tried</p>
<pre><code>x = "int_32\n' "
x.split("\n")
</code></pre>
<p>I also tried</p>
<pre><code>x = "int_32\n' "
x.splitlines()
</code></pre>
<p>Neither yields the required output, which is int_32.</p>
<p>Instead they yield int_32\n'</p>
<p>The \n is what is creating the issue. Is there any way I can do this?</p>
|
<python><arrays><string><split>
|
2025-02-08 01:28:14
| 3
| 707
|
Software Fan
|
79,422,452
| 3,251,645
|
Model returns NaNs when using ModernBERT instead of Roberta
|
<p>I'm trying to fine-tune ModernBERT for a classification task. For this I had some old code written using PyTorch that I've used to fine-tune BERT and, more recently, Roberta with no issues. But when I swap Roberta out for ModernBERT, I suddenly get NaNs in my output tensor from the very first batch. This means my loss is also NaN and training can't happen. I did some research and found the problem could be exploding gradients and/or NaNs in the input, neither of which is true in my case. So I'm looking for some help with what might be going wrong.</p>
<p>Here's my code for the training loop. I'd also appreciate some feedback on it.</p>
<pre><code>for epoch_i in range(0, EPOCHS):
total_train_loss = 0
model.train()
for step, batch in enumerate(train_dataloader):
b_input_ids = batch[0].to(device)
b_input_mask = batch[1].to(device)
b_labels = batch[2].to(device)
model.zero_grad()
output = model(
b_input_ids,
token_type_ids=None,
attention_mask=b_input_mask,
labels=b_labels,
)
loss = output[0]
total_train_loss += loss.item()
loss.backward()
optimizer.step()
scheduler.step()
avg_train_loss = total_train_loss / len(train_dataloader)
training_time = format_time(time.time() - t0)
t0 = time.time()
model.eval()
total_eval_accuracy = 0
total_eval_loss = 0
nb_eval_steps = 0
for batch in validation_dataloader:
b_input_ids = batch[0].to(device)
b_input_mask = batch[1].to(device)
b_labels = batch[2].to(device)
with torch.no_grad():
output = model(
b_input_ids,
token_type_ids=None,
attention_mask=b_input_mask,
labels=b_labels,
)
loss = output[0]
logits = output[1]
# loss = loss_fn(output.logits, b_labels.squeeze())
total_eval_loss += loss.item()
label_ids = b_labels.to("cpu").numpy()
total_eval_accuracy += f1_score(
torch.argmax(logits, dim=1).cpu().numpy(),
label_ids,
average="weighted",
)
avg_val_accuracy = total_eval_accuracy / len(validation_dataloader)
print(" F1 Score: {0:.2f}".format(avg_val_accuracy))
avg_val_loss = total_eval_loss / len(validation_dataloader)
</code></pre>
<p>Here's how I'm setting up the model:</p>
<pre><code>from transformers import AutoConfig, AutoModelForSequenceClassification, AutoTokenizer

config = AutoConfig.from_pretrained("answerdotai/ModernBERT-base")
config.num_labels = 2
tokenizer = AutoTokenizer.from_pretrained("answerdotai/ModernBERT-base")
model = AutoModelForSequenceClassification.from_pretrained("answerdotai/ModernBERT-base", config=config)
</code></pre>
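<p>For reference, this is roughly how I confirmed that the NaNs are already present after the first forward pass (same batch variables as in the loop above):</p>
<pre><code>import torch

with torch.no_grad():
    out = model(b_input_ids, attention_mask=b_input_mask, labels=b_labels)

print(torch.isnan(out.logits).any())  # already True on the very first batch
print(out.loss)                       # consequently NaN as well
</code></pre>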
|
<python><pytorch><fine-tuning>
|
2025-02-08 00:56:47
| 0
| 2,649
|
Amol Borkar
|
79,422,370
| 10,985,257
|
rich.Progress nested with correct time
|
<p>I am trying to implement a nested progress bar which resets the inner bars.</p>
<p>The following works for displaying the progress as I expected:</p>
<pre class="lang-py prettyprint-override"><code>import time
from rich.progress import Progress
with Progress() as progress:
task1 = progress.add_task("[red]Downloading...", total=2)
task2 = progress.add_task("[green]Processing...", total=2)
task3 = progress.add_task("[cyan]Cooking...", total=200)
for i in range(2):
progress.update(task2, completed=0)
for j in range(2):
progress.update(task3, completed=0)
for k in range(200):
progress.update(task3, advance=1)
time.sleep(0.01)
else:
progress.update(task2, advance=1)
else:
progress.update(task1, advance=1)
</code></pre>
<p>The result looks like this:</p>
<pre><code>Downloading... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% 0:00:00
Processing... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% 0:00:00
Cooking... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% 0:00:00
</code></pre>
<p>What puzzles me is the time column. While the timer for <code>task3</code> ticks down in the first loop, the timers for <code>task1</code> and <code>task2</code> start with <code>-:--:--</code>, which I expected, but in the second iteration they display <code>0:00:00</code> instead of the cumulative time of the inner loops.</p>
<p>Additionally, the time sticks at <code>0:00:00</code> after running once.</p>
<p>I've tested a bit further and realized that if I increase the totals of <code>task1</code> and <code>task2</code>, they do at least seem to calculate their estimated time, but it still sticks at <code>0:00:00</code> after the first loop.</p>
<p>Do I need to reset the timer the same way I reset the <code>completed</code> count?</p>
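<p>I have also seen that <code>Progress</code> has a <code>reset()</code> method; would replacing my <code>update(..., completed=0)</code> calls with something like this be the intended way to restart a task, including its timer?</p>
<pre class="lang-py prettyprint-override"><code>for i in range(2):
    progress.reset(task2)          # resets completed count, elapsed time and speed estimate
    for j in range(2):
        progress.reset(task3)
        for k in range(200):
            progress.update(task3, advance=1)
            time.sleep(0.01)
        progress.update(task2, advance=1)
    progress.update(task1, advance=1)
</code></pre>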
|
<python><progress-bar><rich>
|
2025-02-07 23:28:00
| 0
| 1,066
|
MaKaNu
|
79,422,227
| 2,955,541
|
Make PyFFTW Faster Than SciPy Convolve
|
<p>I have a simple function that performs a sliding dot product using an overlap-add convolution approach:</p>
<pre><code>import numpy as np
from scipy.signal import oaconvolve
import pyfftw
import os
def scipy_sliding_dot(A, B):
m = A.shape[0]
n = B.shape[0]
Ar = np.flipud(A) # Reverse/flip A
AB = oaconvolve(Ar, B)
return AB.real[m - 1 : n]
</code></pre>
<p>For reference, this is the same thing as doing:</p>
<pre><code>def naive_sliding_dot(A, B):
m = len(A)
n = len(B)
l = n - m + 1
out = np.empty(l)
for i in range(l):
out[i] = np.dot(A, B[i:i+m])
return out
</code></pre>
<p>When I initialize two random (always-real, never complex) arrays:</p>
<pre><code>A = np.random.rand(2**6)
B = np.random.rand(2**20)
</code></pre>
<p>and then time <code>scipy_sliding_dot</code> with:</p>
<pre><code>%timeit scipy_sliding_dot(A, B)
</code></pre>
<p>I get:</p>
<pre><code>6.39 ms ± 38.2 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
</code></pre>
<p>I then attempt to speed this up with multi-threaded <code>pyfftw</code>:</p>
<pre><code>class pyfftw_sliding_dot(object):
# Based on https://stackoverflow.com/a/30615425/2955541
def __init__(self, A, B, threads=1):
shape = (np.array(A.shape) + np.array(B.shape))-1
self.rfft_A_obj = pyfftw.builders.rfft(A, n=shape, threads=threads)
self.rfft_B_obj = pyfftw.builders.rfft(B, n=shape, threads=threads)
self.irfft_obj = pyfftw.builders.irfft(self.rfft_A_obj.output_array, n=shape, threads=threads)
def __call__(self, A, B):
m = A.shape[0]
n = B.shape[0]
Ar = np.flipud(A) # Reverse/flip A
rfft_padded_A = self.rfft_A_obj(Ar)
rfft_padded_B = self.rfft_B_obj(B)
return self.irfft_obj(np.multiply(rfft_padded_A, rfft_padded_B)).real[m - 1 : n]
</code></pre>
<p>Then, I test the performance with:</p>
<pre><code>n_threads = os.cpu_count()
obj = pyfftw_sliding_dot(A, B, n_threads)
%timeit obj(A, B)
</code></pre>
<p>and get:</p>
<pre><code>33 ms ± 347 μs per loop (mean ± std. dev. of 7 runs, 10 loops each)
</code></pre>
<p>which means that multi-threaded <code>pyfftw</code> is ~5x slower than <code>scipy</code>. I've pored over the <a href="https://hgomersall.github.io/pyFFTW/pyfftw/builders/builders.html" rel="nofollow noreferrer">builders documentation</a> and played around with all of the "additional arguments" (e.g., <code>planner_effort</code>, <code>overwrite_input</code>, etc.) but the <code>pyfftw</code> performance does not change.</p>
<p>What am I doing wrong with <code>pyfftw</code> and how can I make it faster than <code>scipy</code>?</p>
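<p>One more thing I plan to test is whether the padded transform length matters: <code>oaconvolve</code> chops the long signal into many small FFTs, while my class does a single FFT of length <code>len(A) + len(B) - 1</code>, which is generally not a "nice" FFT size. A variant that pads to a fast length would look like this (sketch; <code>pyfftw_sliding_dot_fast</code> is just a name I made up):</p>
<pre><code>from scipy.fft import next_fast_len

class pyfftw_sliding_dot_fast(object):
    def __init__(self, A, B, threads=1):
        n = A.shape[0] + B.shape[0] - 1
        self.nfft = next_fast_len(n)  # round up to a fast (smooth) FFT length
        self.rfft_A_obj = pyfftw.builders.rfft(A, n=self.nfft, threads=threads)
        self.rfft_B_obj = pyfftw.builders.rfft(B, n=self.nfft, threads=threads)
        self.irfft_obj = pyfftw.builders.irfft(
            self.rfft_A_obj.output_array, n=self.nfft, threads=threads
        )

    def __call__(self, A, B):
        m, n = A.shape[0], B.shape[0]
        Ar = np.flipud(A)  # Reverse/flip A
        out = self.irfft_obj(self.rfft_A_obj(Ar) * self.rfft_B_obj(B))
        return out[m - 1 : n]
</code></pre>
<p>Even if that helps, I would still like to understand why the straightforward single-FFT port is ~5x slower than <code>oaconvolve</code>.</p>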
|
<python><numpy><scipy><fftw><pyfftw>
|
2025-02-07 21:47:49
| 0
| 6,989
|
slaw
|