| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
79,468,638
| 10,964,685
|
plotly histogram with filtering toolbar - python
|
<p>I'm trying to generate a plotly histogram that can be filtered by the values in a df column. In <code>dataset</code> below, I want to be able to filter on the values in <code>item</code> to show the distribution of <code>value</code> for each unique item.</p>
<p>However, all I am able to display is a single value, and I can't change the selection to show the results for different items.</p>
<pre><code>dataset = pd.DataFrame(
{'item': ['A', 'A', 'B', 'C', 'D', 'A', 'A', 'B', 'C', 'D', 'A', 'A', 'B', 'C', 'D'],
'value': [27, 33, 3, 26, 8, 27, 3, 13, 16, 2, 27, 17, 12, 6, 28],
}
)
continuous_vars = ["item"]
fig = go.Figure()
fig.add_trace(
go.Histogram(x = dataset.item,
xbins=go.histogram.XBins(size=5), # Change the bin size
marker=go.histogram.Marker(color="orange"), # Change the color
)
)
buttons = []
for col in continuous_vars:
buttons.append(dict(method='restyle',
label=col,
visible=True,
args=[{"x":[dataset[col]],
"type":'histogram'}, [0]],
)
)
updatemenu = []
your_menu = dict()
updatemenu.append(your_menu)
updatemenu[0]['buttons'] = buttons
updatemenu[0]['direction'] = 'down'
updatemenu[0]['showactive'] = True
fig.update_layout(showlegend=False, updatemenus=updatemenu)
fig.show()
</code></pre>
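<p>For reference, a minimal sketch of one possible direction (hypothetical, assuming the goal is one histogram of <code>value</code> per unique <code>item</code>, switched via the dropdown):</p>
<pre><code>import pandas as pd
import plotly.graph_objects as go

dataset = pd.DataFrame(
    {'item': ['A', 'A', 'B', 'C', 'D', 'A', 'A', 'B', 'C', 'D', 'A', 'A', 'B', 'C', 'D'],
     'value': [27, 33, 3, 26, 8, 27, 3, 13, 16, 2, 27, 17, 12, 6, 28],
     }
)

items = sorted(dataset['item'].unique())
fig = go.Figure()
for i, it in enumerate(items):
    # One trace per item; only the first is visible initially
    fig.add_trace(go.Histogram(x=dataset.loc[dataset['item'] == it, 'value'],
                               visible=(i == 0), name=it))

buttons = [dict(method='update', label=it,
                args=[{'visible': [j == i for j in range(len(items))]}])
           for i, it in enumerate(items)]
fig.update_layout(showlegend=False,
                  updatemenus=[dict(buttons=buttons, direction='down', showactive=True)])
fig.show()
</code></pre>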
|
<python><plotly>
|
2025-02-26 05:41:08
| 1
| 392
|
jonboy
|
79,468,372
| 2,031,794
|
ValueError: Must have equal len keys and value when setting with an iterable when setting value on group
|
<p>I have a pandas dataframe with a timestamp index. I am grouping to get hourly values, and after a series of operations on the values of each hour I need to write the results back to the original DF:</p>
<pre><code>for name, group in df.groupby(pd.Grouper(freq="1H")):
if group.shape[0] > 0:
results = some_function(group) # Operations on the group, returns a list of labels same length of the group
df.loc[group.index, 'results'] = results
</code></pre>
<p>I am getting the error <code>ValueError: Must have equal len keys and value when setting with an iterable</code> but it only happens after many successful iterations (hours) in the for loop. Any ideas?</p>
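<p>A minimal sketch of the kind of index-aligned assignment I have been experimenting with (with a hypothetical stand-in for <code>some_function</code>), in case it clarifies the setup:</p>
<pre><code>import pandas as pd

# Hypothetical stand-in for some_function: returns one label per row of the group
def some_function(group):
    return ['high' if v > group['value'].mean() else 'low' for v in group['value']]

idx = pd.date_range('2025-01-01', periods=6, freq='30min')
df = pd.DataFrame({'value': [1, 5, 2, 8, 3, 9]}, index=idx)

for name, group in df.groupby(pd.Grouper(freq="1H")):
    if group.shape[0] > 0:
        results = some_function(group)
        # Wrapping the list in a Series indexed like the group keeps lengths aligned
        df.loc[group.index, 'results'] = pd.Series(results, index=group.index)
print(df)
</code></pre>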
|
<python><pandas><group-by>
|
2025-02-26 02:41:19
| 1
| 1,326
|
Hakim
|
79,468,240
| 24,718
|
How can I send an alexa notification via proactive events ask_sdk_core python?
|
<p>I'm trying to send a notification from my skill to Alexa using proactive events through the <code>ask_sdk_model</code> python interface.</p>
<p>At the moment I'm trying to test with the Alexa developer console.
I've retrieved my client id/secret from the Alexa developer console, and managed to determine the user_id of the developer console.</p>
<p>I have the following, but it's giving me an ambiguous <code>"Bad Request"</code> error:</p>
<pre class="lang-py prettyprint-override"><code>from ask_sdk_core.api_client import DefaultApiClient
from ask_sdk_core.serialize import DefaultSerializer
from ask_sdk_model.services import ApiConfiguration, AuthenticationConfiguration
from ask_sdk_model.services.proactive_events import ProactiveEventsServiceClient, Event, CreateProactiveEventRequest, RelevantAudience, RelevantAudienceType, SkillStage
ALEXA_CLIENT_ID = "XXX"
ALEXA_CLIENT_SECRET = "YYY"
def notify_alexa_device(user_id: int):
api_configuration = ApiConfiguration(serializer=DefaultSerializer(), api_client=DefaultApiClient())
authentication_configuration = AuthenticationConfiguration(
client_id=ALEXA_CLIENT_ID,
client_secret=ALEXA_CLIENT_SECRET,
)
client = ProactiveEventsServiceClient(
api_configuration,
authentication_configuration
)
notification_event = Event(
name="AMAZON.Occasion.Updated",
payload={
"state": {"confirmationStatus": "CONFIRMED"},
"occasion": {
"occasionType": "APPOINTMENT",
"subject": "localizedattribute:subject",
"provider": {"name": "localizedattribute:providerName"},
"bookingTime": "2018-11-20T19:16:31Z",
"broker": {"name": "localizedattribute:brokerName"},
},
}
)
localized_attributes = [
{"locale": "en-US", "providerName": "Provider Name", "subject": "Subject", "brokerName": "Broker Name"},
{"locale": "ja-JP", "providerName": "プロバイダー名", "subject": "件名", "brokerName": "ブローカー名"},
]
event_request = CreateProactiveEventRequest(
event=notification_event,
localized_attributes=localized_attributes,
relevant_audience=RelevantAudience(object_type=RelevantAudienceType.Unicast, payload=[str(user_id)]),
)
client.create_proactive_event(
create_proactive_event_request=event_request,
stage=SkillStage.DEVELOPMENT,
)
</code></pre>
<p>Any idea what is causing the "Bad Request", and how I can initiate a proactive event with <code>ask_sdk_model</code>?</p>
<p><em>Exception</em>:</p>
<pre><code> File "visitors/functions.py", line 62, in notify_alexa_device
client.create_proactive_event(
File "7qDsB67C/lib/python3.12/site-packages/ask_sdk_model/services/proactive_events/proactive_events_service_client.py", line 118, in create_proactive_event
access_token = self._lwa_service_client.get_access_token_for_scope(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "7qDsB67C/lib/python3.12/site-packages/ask_sdk_model/services/lwa/lwa_client.py", line 111, in get_access_token_for_scope
return self._get_access_token(scope)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "7qDsB67C/lib/python3.12/site-packages/ask_sdk_model/services/lwa/lwa_client.py", line 155, in _get_access_token
lwa_response = self._generate_access_token(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "7qDsB67C/lib/python3.12/site-packages/ask_sdk_model/services/lwa/lwa_client.py", line 234, in _generate_access_token
api_response = self.invoke(method="POST", endpoint=endpoint,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/lib/python3.12/site-packages/ask_sdk_model/services/base_service_client.py", line 151, in invoke
raise ServiceException(message=exception_metadata.message,
ask_sdk_model.services.service_exception.ServiceException: Bad Reques
</code></pre>
|
<python><alexa-skills-kit><ask-sdk>
|
2025-02-26 00:46:21
| 0
| 44,070
|
monkut
|
79,468,013
| 2,940,920
|
How to run DeepSeek model locally
|
<p>I am trying to run DeepSeek locally according to their instructions but it does not work with some silly error (I will show it later).
This is what I am doing.</p>
<ol>
<li>Download the smallest model (3.5GB) from here <a href="https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B" rel="nofollow noreferrer">https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B</a></li>
<li>Follow the steps from here: <a href="https://github.com/deepseek-ai/DeepSeek-V3?tab=readme-ov-file#6-how-to-run-locally" rel="nofollow noreferrer">https://github.com/deepseek-ai/DeepSeek-V3?tab=readme-ov-file#6-how-to-run-locally</a></li>
</ol>
<p>2.1 Get this project
<a href="https://github.com/deepseek-ai/DeepSeek-V3.git" rel="nofollow noreferrer">https://github.com/deepseek-ai/DeepSeek-V3.git</a></p>
<p>2.2 Run docker container like this with pre-created volume to put the model</p>
<pre><code>docker run --gpus all -it --name deepseek01 --rm --mount source=deepseekv3,target=/root/deepseekv3 python:3.10-slim bash
</code></pre>
<p>I am using python:3.10-slim because here (<a href="https://github.com/deepseek-ai/DeepSeek-V3?tab=readme-ov-file#6-how-to-run-locally" rel="nofollow noreferrer">https://github.com/deepseek-ai/DeepSeek-V3?tab=readme-ov-file#6-how-to-run-locally</a>) it is written
"<em>Linux with Python 3.10 only. Mac and Windows are not supported</em>."</p>
<p>2.3 Install the latest updates: <code>apt-get update</code></p>
<p>2.4 get this file <a href="https://github.com/deepseek-ai/DeepSeek-V3/blob/main/inference/requirements.txt" rel="nofollow noreferrer">https://github.com/deepseek-ai/DeepSeek-V3/blob/main/inference/requirements.txt</a> and install the requirements</p>
<pre><code>pip install -r requirements.txt
</code></pre>
<p>2.5 Copy the model to the volume mounted to the docker container: these 5 files from here <a href="https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B" rel="nofollow noreferrer">https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B</a></p>
<pre><code>config.json
generation_config.json
model.safetensors
tokenizer.json
tokenizer_config.json
</code></pre>
<p>2.6 Convert the model as it is written here <a href="https://github.com/deepseek-ai/DeepSeek-V3?tab=readme-ov-file#model-weights-conversion" rel="nofollow noreferrer">https://github.com/deepseek-ai/DeepSeek-V3?tab=readme-ov-file#model-weights-conversion</a> by this command</p>
<pre><code>python convert.py --hf-ckpt-path /root/deepseekv3/source_model --save-path /root/deepseekv3/converted_model --n-experts 256 --model-parallel 16
</code></pre>
<p>In this step (converting the model) I got this error</p>
<pre><code>Traceback (most recent call last):
File "/root/deepseekv3/inference/convert.py", line 96, in <module>
main(args.hf_ckpt_path, args.save_path, args.n_experts, args.model_parallel)
File "/root/deepseekv3/inference/convert.py", line 63, in main
assert key in mapping
AssertionError
</code></pre>
<p>So, basically, the next steps do not make sense, as this is the essential step.</p>
<p>My questions:</p>
<ol>
<li>What am I doing wrong?</li>
<li>There are some videos on YouTube where deepseek was installed with ollama. Is it really required? Should I be able to run it without it like they described here <a href="https://github.com/deepseek-ai/DeepSeek-V3?tab=readme-ov-file#6-how-to-run-locally" rel="nofollow noreferrer">https://github.com/deepseek-ai/DeepSeek-V3?tab=readme-ov-file#6-how-to-run-locally</a>?</li>
</ol>
<p>UPDATE 1</p>
<p>In order to debug a bit I added these 2 lines.</p>
<pre><code>print("Missing key:", key)
print("Available keys:", list(mapping.keys()))
</code></pre>
<p>Missing keys were identified as these:</p>
<pre><code>embed_tokens
input_layernorm
down_proj
gate_proj
up_proj
post_attention_layernorm
k_proj
</code></pre>
<p>Although all of them do exist inside the model.safetensors file.</p>
<p>Also, @Hans Kilian mentioned in the comments that I might have put a file that is not needed into the source_model folder.
I checked line 11 in convert.py, and some of the keys there do not exist inside model.safetensors, but the logging reports different keys.</p>
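<p>For completeness, a small sketch of one way to list the tensor names stored in <code>model.safetensors</code> so they can be compared against the mapping keys in <code>convert.py</code> (the path is assumed to match step 2.5):</p>
<pre><code>from safetensors import safe_open

# Print every tensor name stored in the checkpoint file
with safe_open("/root/deepseekv3/source_model/model.safetensors", framework="pt") as f:
    for key in f.keys():
        print(key)
</code></pre>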
|
<python><docker><machine-learning><large-language-model><deepseek>
|
2025-02-25 22:14:20
| 1
| 618
|
Zlelik
|
79,467,972
| 13,151,915
|
Calling Multiple Functions on Multiple Rows of a Tensor in Parallel
|
<pre><code>import torch
x = torch.ones(3, 3)
factors = [lambda x: 2*x, lambda x: 3*x, lambda x: 4*x]
indices = torch.tensor([0, 1, 2])
def multiply_row_by_factor(row, idx):
return factors[idx](row)
result = torch.vmap(multiply_row_by_factor, in_dims=(0, 0))(x, indices)
# Original Tensor
# tensor([[1., 1., 1.],
# [1., 1., 1.],
# [1., 1., 1.]])
# Desired Result
# tensor([[2., 2., 2.],
# [3., 3., 3.],
# [4., 4., 4.]])
</code></pre>
<p>As the title states, I am looking for a way to call multiple functions on multiple rows of a tensor. I show a minimally reproducible example for simplicity. I am aware that vmap is only meant to be called with one function. I am just using it here as an example to communicate what I am trying to do. This particular approach doesn't work due to the fact that idx is a BatchedTensor. The functions here are lambdas, but in reality, my functions are composed of complex transformations that I would prefer not to decompose to get this to work.</p>
<p>Is there any way to achieve something like this? Something cleaner than pytorch streams? Specifically calling the functions in parallel?</p>
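<p>For illustration, here is a sketch of what does work when the per-row functions reduce to something batchable (plain scaling here, unlike my real functions), which may clarify what I am after:</p>
<pre><code>import torch

x = torch.ones(3, 3)
indices = torch.tensor([0, 1, 2])
scales = torch.tensor([2.0, 3.0, 4.0])

# vmap over rows and per-row scalars instead of over Python functions
result = torch.vmap(lambda row, s: s * row, in_dims=(0, 0))(x, scales[indices])
print(result)
# tensor([[2., 2., 2.],
#         [3., 3., 3.],
#         [4., 4., 4.]])
</code></pre>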
|
<python><pytorch><transformation><tensor>
|
2025-02-25 21:53:31
| 2
| 412
|
gfdb
|
79,467,962
| 15,559,986
|
Combining multiple selections in Altair with the 'empty' property
|
<p>Consider the following MRE:</p>
<pre><code>import altair as alt
import pandas as pd

# create data
df = pd.DataFrame()
df['x'] = [0, 1, 2] * 3
df['y'] = [0] * 3 + [1] * 3 + [2] * 3
# create selections
click_select = alt.selection_point(name='_click_select_', toggle='true', encodings=['x','y'], empty=False, clear=False)
box_select = alt.selection_interval(name='_box_select_', empty=False,
on="[mousedown[event.altKey], mouseup] > mousemove",
translate="[mousedown[event.altKey], mouseup] > mousemove!")
# initialize
base = alt.Chart(df).mark_point().encode(x='x', y='y')
base = base.encode(color=alt.when(click_select | box_select).then(alt.value('red')).otherwise(alt.value('blue')))
base = base.add_params(click_select, box_select)
</code></pre>
<p>This is a scatterplot with 9 points. I would expect that these points are BLUE to start, and then will be RED when selected by either clicking on them directly or enclosing them in the box select while holding alt.</p>
<p>However, the result is a plot where all points are RED to start. Only once BOTH the click_select has clicked a point and box_select has created a box does the plot appear as intended, with selected points in red and all others in blue.</p>
<p>My instinct tells me this has something to do with the "empty" parameter, as the behavior looks like all points "pass" the "click_select | box_select" logical combination even when nothing is selected. However, since I put empty=False for both selections, I thought both should evaluate as False and therefore the 'or' should be False as well, triggering the 'otherwise' condition. I've tested changing the "empty" parameter on both selections AND as an additional argument to when() but it seems to have no effect. And even then, why would it require that both selections be active to work, when it is an 'or' condition?</p>
<p>How should I achieve my desired behavior, and what am I missing about "empty"?</p>
|
<python><altair>
|
2025-02-25 21:47:56
| 1
| 304
|
fitzme
|
79,467,944
| 9,893,725
|
Comparing dataframes
|
<p>The goal is to compare two pandas dataframes considering a margin of error.</p>
<p>To reproduce the issue:</p>
<h3>Importing pandas</h3>
<pre><code>import pandas as pd
</code></pre>
<h3>Case one - same data dataframes</h3>
<pre><code>df1 = pd.DataFrame({"A": [1,1,1], "B": [2,2,2], "C": [3,3,3]})
df2 = pd.DataFrame({"A": [1,1,1], "B": [2,2,2], "C": [3,3,3]})
print(df1.compare(df2, result_names=('df1', 'df2')))
# The result is an empty dataframe
Empty DataFrame
Columns: []
Index: []
</code></pre>
<h3>Case two - different data dataframes</h3>
<pre><code>df1 = pd.DataFrame({"A": [1,1,1], "B": [2,2,2], "C": [3,3,3]})
df2 = pd.DataFrame({"A": [1,1,1], "B": [2,2.2,2], "C": [3,3,3]}) # Note that the second B value is 2.2
print(df1.compare(df2, result_names=('df1', 'df2')))
# The result is a dataframe showing differences
B
df1 df2
1 2.0 2.2
</code></pre>
<p>The issue is that I want it to only consider differences greater than 0.5.</p>
<h3>How I achieved it.</h3>
<pre><code>threshold = 0.5
df3 = df1.melt().reset_index().merge(df2.melt().reset_index(), on="index")
df3["diff"] = (df3["value_x"] - df3["value_y"]).abs()
print(df3.loc[df3["diff"] > threshold])
# The result is an empty dataframe
Empty DataFrame
Columns: [index, variable_x, value_x, variable_y, value_y, diff]
Index: []
</code></pre>
<p>Is there a better way to do this? It takes a lot of time for a huge DF.</p>
<p>A few notes:</p>
<ul>
<li>This is only a reproducible example.</li>
<li>I am open to using other libraries such as NumPy (a small sketch along those lines follows below).</li>
</ul>
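<p>A small NumPy-based sketch of the kind of vectorised check I have in mind (assuming both frames share the same shape, index and columns, as in the example above):</p>
<pre><code>import numpy as np
import pandas as pd

threshold = 0.5
df1 = pd.DataFrame({"A": [1, 1, 1], "B": [2, 2, 2], "C": [3, 3, 3]})
df2 = pd.DataFrame({"A": [1, 1, 1], "B": [2, 2.2, 2], "C": [3, 3, 3]})

# Boolean mask of cells whose absolute difference exceeds the threshold
mask = (df1 - df2).abs() > threshold
rows, cols = np.where(mask)
for r, c in zip(rows, cols):
    print(df1.index[r], df1.columns[c], df1.iat[r, c], df2.iat[r, c])
</code></pre>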
|
<python><pandas><numpy>
|
2025-02-25 21:39:03
| 3
| 899
|
Paulo Marques
|
79,467,759
| 4,470,052
|
How to perform two session.write_pandas transactionally on snowflake?
|
<p>When adding df1 and df2 into two different tables, I want to ensure that df1 doesn't write if df2 fails or any of the intermediate steps fail. But this doesn't work when using the "write_pandas" function to write a pandas DF to a table.</p>
<pre><code>session.sql("commit").collect()
session.write_pandas(df,table1,db,schema,auto_create_table=True)
raise Exception("transaction Test")
session.write_pandas(df,table2,db,schema,auto_create_table=True)
session.sql("commit").collect()
</code></pre>
<p>This should basically be able to roll back the table1 insert after the exception is raised, but it doesn't: we see the dataframe rows appended to table1. Direct SQL statements do work:</p>
<pre><code>session.sql("commit").collect()
session.sql("insert into DB.SC.T1('col1','col2') values (99,'ninenine')")
raise Exception("transaction Test")
session.sql("insert into DB.SC.T2('col1','col2') values (88,'eightyeight')")
session.sql("commit").collect()
</code></pre>
<p>So here the first table doesn't get new values when it fails before committing. Is there a way to get the same transactional behaviour with <code>write_pandas</code>?</p>
|
<python><pandas><snowflake-cloud-data-platform>
|
2025-02-25 20:06:41
| 1
| 692
|
Flyn Sequeira
|
79,467,683
| 5,049,813
|
What is the correct type hint for `os.path.dirname`?
|
<p>When I hover over the function, Pylance tells me the type of the parameter it takes is <code>PathLike[AnyStr@dirname]</code>. What does this mean? Is it just a Union of string and PathLike?</p>
<p>Right now my code looks like this:</p>
<pre class="lang-py prettyprint-override"><code>def make_path(filename: os.PathLike) -> os.PathLike:
"""
Ensures that the directory for the given filename exists, creating it if necessary.
Args:
filename (os.PathLike): The path to the file for which the directory should be created.
Returns:
os.PathLike: The original filename.
"""
dirname = os.path.dirname(filename)
if dirname:
os.makedirs(os.path.dirname(filename), exist_ok=True)
return filename
</code></pre>
<p>But it's giving an error for taking in strings, which didn't happen when I didn't have filename typed.</p>
<p><strong>How can I appropriately type <code>filename</code> in the code above?</strong></p>
|
<python><python-typing>
|
2025-02-25 19:27:15
| 0
| 5,220
|
Pro Q
|
79,467,589
| 918,417
|
Relative imports in local python module
|
<p>Our application started as a simple FastAPI application; while working on it, we used relative paths for the imports.</p>
<p>Now we need to create another application that extends the existing one. So our new code structure looks like this (there are many more files, this is just to show the issue):</p>
<pre><code>project/
setup.py
__init__.py
core/ # The original application
__init__.py
src/
__init__.py
api/
__init__.py
api.py
routes/
__init__.py
router1.py
core_plus/ # The extending application
__init__.py
src/
api/
main.py
api.py
</code></pre>
<p>I do manage to install <code>core</code> to <code>core-plus</code> using <code>pip install -e ..</code> from <code>core_plus</code> directory. However, when importing something from core (for example <code>core.src.api.api</code>) it fails to run the internal imports (<code>import routes.router1</code>)</p>
<pre class="lang-py prettyprint-override"><code># project/core_plus/src/api/api.py
from core.src.api.api import routers
....
# project/core/src/api/api.py
import routes.router1 as router1
routers = [router1]
....
</code></pre>
<p>I am looking for a way to make it work that won't involve editing all imports in <code>core</code>, and that won't be "hackish" on the other hand (for example adjusting <code>sys.path</code>, which may cause ambiguity if there are files/dirs with the same names in both core/cloud).</p>
<p>Any suggestions?</p>
|
<python>
|
2025-02-25 18:45:27
| 0
| 1,791
|
Matan Kadosh
|
79,467,524
| 11,342,139
|
Unable to read the full response coming over rs-485 from a Modbus device
|
<p>I have been trying to communicate with a <code>I-7000</code> IO module device that supports Modbus. In general I am able to successfully send commands such as writing to a coil or reading a coil but I face one issue - I am unable to read the response coming from the IO module device, at least not every time. I am using <code>pyserial</code> and I expect 6 bytes returned with a specific order but I fail to read the message correctly. Each message has a device id, function code, coil address, value and CRC message at the end. Most of the time I am able to only read the CRC message at the end and sometimes I am able to read some of the other bytes before that. Rarely I am able to read the whole message.</p>
<p>Here I set up this simple code to debug my problem; it is not my main code, but the issue I face is the same.</p>
<pre><code>import serial
import time
import crcmod.predefined
# Function to calculate Modbus RTU CRC
def calculate_rs_485_crc(message):
crc16 = crcmod.predefined.mkCrcFun("modbus")
return crc16(message)
port = "/dev/ttyS0"
baudrate = 115200
timeout = 1
ser = serial.Serial(
port,
baudrate,
timeout=timeout,
parity=serial.PARITY_NONE,
)
# Construct Modbus RTU command
message = "020100060001"
command = bytes.fromhex(message)
crc = calculate_rs_485_crc(command)
command_with_crc = command + crc.to_bytes(2, "little")
print(f"Sent: {command_with_crc.hex()}")
ser.write(command_with_crc)
byteData = ser.read_until(b"\xcc")
print(f"byteData: {byteData}")
ser.close()
</code></pre>
<p>I have checked the baudrate and the port; they are okay. Whatever timeout I set, the result is the same. I tried putting a <code>time.sleep(0.5)</code> between the write and read commands, but this actually makes it worse because then I am unable to receive any data at all. Having no sleep time at least lets me receive some parts of the message.</p>
<p>This is my terminal where I see what is printed. Sometimes receive the end of the message (CRC bytes):<br />
<code>Sent: 0201000600011df8</code><br />
<code>byteData: b'Q\xcc'</code></p>
<p>And sometimes I can see the actually expected message. Or sometimes also something in between.<br />
<code>Sent: 0201000600011df8</code><br />
<code>byteData: b'\x02\x01\x01\x00Q\xcc'</code></p>
<p>Also, you can currently see me using <code>read_until</code> in the code, but I also tried <code>readline</code> and plain <code>read</code> with the number of bytes I expect, and none of it helped; same result. I am not sure why this inconsistency happens. Most of my reading online points to timing issues or hardware issues, but it is definitely not a hardware issue, because there are other tools provided by the manufacturer that I used to talk to this device and they worked fine. If it is a timing issue, I don't understand how, because if I sleep to make sure every piece of the message has arrived, then I get nothing.</p>
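<p>One variant I still plan to try (a sketch, not a confirmed fix): clear any stale input before writing and read the exact frame length instead of <code>read_until</code>:</p>
<pre><code>import serial

EXPECTED_LEN = 6  # device id, function code, byte count, data, CRC low, CRC high
command_with_crc = bytes.fromhex("0201000600011df8")  # same frame as above

ser = serial.Serial("/dev/ttyS0", 115200, timeout=1, parity=serial.PARITY_NONE)
ser.reset_input_buffer()           # drop anything left over from a previous exchange
ser.write(command_with_crc)
response = ser.read(EXPECTED_LEN)  # blocks until 6 bytes arrive or the timeout elapses
print(f"byteData: {response}")
ser.close()
</code></pre>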
<p>I hope you can help me out.</p>
<p>EDIT: Added <code>mbpoll</code> result:</p>
<pre><code>user@localhost:~$ sudo mbpoll -m rtu -b 115200 -d 8 -s 1 -p none /dev/ttyS0 -a 2 -r 1 -1 -t 0
mbpoll 1.0-0 - FieldTalk(tm) Modbus(R) Master Simulator
Copyright © 2015-2019 Pascal JEAN, https://github.com/epsilonrt/mbpoll
This program comes with ABSOLUTELY NO WARRANTY.
This is free software, and you are welcome to redistribute it
under certain conditions; type 'mbpoll -w' for details.
Protocol configuration: Modbus RTU
Slave configuration...: address = [2]
start reference = 1, count = 1
Communication.........: /dev/ttyS0, 115200-8E1
t/o 1.00 s, poll rate 1000 ms
Data type.............: discrete output (coil)
-- Polling slave 2...
Read discrete output (coil) failed: Connection timed out
</code></pre>
<p>EDIT #2: Imported a function</p>
<p>I have this other code which does pretty much the same thing, only it is in a function. I called this function in the code you can see above, and then everything works fine. I am not really sure why. But that is still not a good solution, because I create two serial port instances and both write and read:</p>
<pre><code>import serial
import time
import logging
import os
import crcmod.predefined
logger = logging.getLogger(__name__)
def send_over_rs485(message: str, max_attempts=3):
attempts = 0
while attempts < max_attempts:
try:
ser = serial.Serial("/dev/ttyS0", 115200, timeout=0.3)
command = bytes.fromhex(message)
crc = calculate_rs_485_crc(command)
command_with_crc = command + crc.to_bytes(2, "little")
ser.write(command_with_crc)
time.sleep(0.2)
if ser.in_waiting:
response = ser.readline()
return response.hex()
else:
# print("No response available in rs-485 serial buffer.")
logging.debug("No response available rs-485 in serial buffer.")
except serial.SerialException as e:
logging.error(f"Serial communication error: {e}")
except OSError as e:
logging.error(f"OS-level communication error: {e}")
except Exception as e:
logging.error(f"Unexpected error: {e}")
finally:
if "ser" in locals() and ser.is_open:
ser.close()
attempts += 1
logging.warning(f"Reached maximum attempts without success: {attempts}")
return None
# Function to calculate Modbus RTU CRC
def calculate_rs_485_crc(message):
crc16 = crcmod.predefined.mkCrcFun("modbus")
return crc16(message)
</code></pre>
<p>So I import this function and call it in my code above like this:</p>
<pre><code>import serial
import time
import crcmod.predefined
# NOTE: IMPORTED FUNCTION HERE
from serial_protocol import send_over_rs485 # NOTE: it worked only because of this
# Function to calculate Modbus RTU CRC
def calculate_rs_485_crc(message):
crc16 = crcmod.predefined.mkCrcFun("modbus")
return crc16(message)
port = "/dev/ttyS0"
baudrate = 115200
timeout = 1
ser = serial.Serial(
port,
baudrate,
timeout=timeout,
parity=serial.PARITY_NONE,
)
# Construct Modbus RTU command
message = "020100060001"
# NOTE: CALLED FUNCTION HERE
# NOTE: This somehow makes it work:
send_over_rs485(message)
command = bytes.fromhex(message)
crc = calculate_rs_485_crc(command)
command_with_crc = command + crc.to_bytes(2, "little")
print(f"Sent: {command_with_crc.hex()}")
ser.write(command_with_crc)
byteData = ser.read_until(b"\xcc")
print(f"byteData: {byteData}")
ser.close()
</code></pre>
<p>With this change, which I do not understand, I get the full response. Hopefully you can help me understand why.</p>
<p>Response:</p>
<pre><code>Sent: 0201000600011df8
in waiting false 0
byteData: b'\x02\x01\x01\x00Q\xcc'
</code></pre>
<p>EDIT #3: Actually it was not <code>send_over_rs485</code> that made it work. I can make it work if I create the serial port, write, then read, then close the port, then create the port again and write and read again; the second time it works. I do not understand why. Here is the updated code:</p>
<pre><code>import serial
import time
import crcmod.predefined
# Function to calculate Modbus RTU CRC
def calculate_rs_485_crc(message):
crc16 = crcmod.predefined.mkCrcFun("modbus")
return crc16(message)
def setup_serial_port() -> serial.Serial:
port = "/dev/ttyS0"
baudrate = 115200
timeout = 0.3
ser = serial.Serial(
port=port,
baudrate=baudrate ,
bytesize=serial.EIGHTBITS,
parity=serial.PARITY_NONE,
stopbits=serial.STOPBITS_ONE,
timeout=timeout
)
return ser
# Construct Modbus RTU command
message = "020100060001"
command = bytes.fromhex(message)
crc = calculate_rs_485_crc(command)
command_with_crc = command + crc.to_bytes(2, "little")
ser = setup_serial_port()
print(f"Sent: {command_with_crc}")
ser.write(command_with_crc)
time.sleep(0.2)
byteData = ser.read(6)
print(f"byteData: {byteData}")
ser.close()
ser = setup_serial_port()
print(f"Sent: {command_with_crc}")
ser.write(command_with_crc)
time.sleep(0.5) # Short delay to ensure the response is available
if ser.in_waiting:
print(f"in waiting {ser.in_waiting}")
else:
print(f"in waiting false {ser.in_waiting}")
byteData = ser.read(6)
print(f"byteData: {byteData}")
ser.close()
</code></pre>
|
<python><serial-port><pyserial><modbus><rs485>
|
2025-02-25 18:15:42
| 2
| 1,046
|
Angel Hadzhiev
|
79,467,440
| 9,112,151
|
What are the right values for cafile, capath, cadata?
|
<p>There is code for SSL context creation:</p>
<pre><code>import ssl
ssl_context = ssl.create_default_context(cafile=..., capath=..., cadata=...)
</code></pre>
<p>Also there is a docker container with certs:</p>
<pre><code>$ pwd
/app/cert
$ ls
ca.pem cert.pem key.pem
</code></pre>
<p>What should the values be for <code>cafile</code>, <code>capath</code>, <code>cadata</code>?</p>
<p>I tried <code>cafile="/app/cert/ca.pem"</code> and <code>capath="/app/cert"</code> but it gives me the error:</p>
<pre><code>[SSL: SSLV3_ALERT_BAD_CERTIFICATE] sslv3 alert bad certificate (_ssl.c:1002)
</code></pre>
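<p>For context, this is the variant I was about to try next (a sketch based on my assumption that the server might also require the client certificate and key, which <code>cafile</code> alone does not provide):</p>
<pre><code>import ssl

ssl_context = ssl.create_default_context(cafile="/app/cert/ca.pem")
# Also present the client certificate and key to the server
ssl_context.load_cert_chain(certfile="/app/cert/cert.pem", keyfile="/app/cert/key.pem")
</code></pre>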
|
<python><ssl><openssl>
|
2025-02-25 17:41:02
| 0
| 1,019
|
Альберт Александров
|
79,467,265
| 12,881,307
|
Openpyxl unable to save workbook with file path
|
<p>I want to append data from a Pandas <code>DataFrame</code> to both an existing excel sheet and an excel table (<code>ListObject</code>). I'm using <code>openpyxl</code> to do so.</p>
<p>I'm now writing the code for writing to a sheet. I have the following:</p>
<pre class="lang-py prettyprint-override"><code>def _append_to_excel_sheet(self
, data_to_write: pd.DataFrame
, excel_file: str
, sheet_name: str
, **kwargs
) -> bool:
try:
if Path(excel_file).exists():
# Load existing workbook
self.logger.debug(f"Appending {len(data_to_write)} rows to sheet {sheet_name} in {excel_file}")
with open (excel_file, "rb") as f:
wb = load_workbook(f
, read_only=False
, keep_vba=True
, data_only=False
, keep_links=True
, rich_text=True
)
self.logger.debug(wb.sheetnames)
ws = wb[sheet_name] if sheet_name in wb.sheetnames else wb.create_sheet(sheet_name)
# Find last row with data
last_row = ws.max_row
# Write new data
for idx, row in enumerate(data_to_write.values):
self.logger.debug(f"Appending row: {row} to row: {last_row + idx + 1}")
for col_idx, value in enumerate(row, 1):
ws.cell(row=last_row + idx + 1, column=col_idx, value=value)
self.logger.debug(f"New range: {ws.cell(row=last_row + 1, column=1).coordinate}:{ws.cell(row=last_row + len(data_to_write), column=len(data_to_write.columns)).coordinate}")
self.logger.debug(f"Saving to file {excel_file}")
wb.save(excel_file)
else:
# Create new file if it doesn't exist
self.logger.debug(f"Creating new file {excel_file} and writing {len(data_to_write)} rows to sheet {sheet_name}")
with pd.ExcelWriter(excel_file, engine='openpyxl') as writer:
data_to_write.to_excel(
writer,
sheet_name=sheet_name,
index=False
)
except Exception as e:
self.logger.error(f"Failed to write to Excel: {str(e)}")
raise
finally:
wb.close()
</code></pre>
<p>When I run this code, the <code>logger</code> object reaches the <code>self.logger.debug(f"Saving to file {excel_file}")</code> line without raising any exception. Furthermore, I never see an <code>Exception</code> raised: <code>self.logger.error(f"Failed to write to Excel: {str(e)}")</code> is unreachable as far as my tests are concerned.</p>
<p>I have looked at <a href="https://openpyxl.readthedocs.io/en/3.1.3/tutorial.html" rel="nofollow noreferrer">openpyxl's documentation</a> and several answers to similar questions in SO (<a href="https://stackoverflow.com/questions/29901558/writing-data-into-excel-sheet-using-openpyxl-isnt-working">this one</a> and <a href="https://stackoverflow.com/questions/56657204/cant-write-data-into-excel-sheet-using-openpyxl-workbook">this one</a> primarily) but haven't been able to find what I'm doing wrong in my code. The <code>str</code> path I pass to the function is absolute.</p>
<p>I know I can <a href="https://stackoverflow.com/questions/38074678/append-existing-excel-sheet-with-new-dataframe-using-python-pandas">use pandas to append a DataFrame to an existing sheet</a>, and this would be an ideal solution but I also want this functionality with Excel tables.</p>
<p>Is there a way to enable verbose mode to see what <code>openpyxl</code> is doing behind the scenes? Am I missing an edge case in which saving a workbook with the same name is forbidden? What alternatives can I look into if I'm unable to fix the issue I have?</p>
<hr />
<p><em><strong>Edit</strong></em></p>
<p>To add more context on how I'm calling the code, the function is a method of a class called <code>ExcelOutputHandler</code>. I'm calling it from a <code>Unittest.TestCase</code> as follows:</p>
<pre class="lang-py prettyprint-override"><code>from datetime import datetime
from pathlib import Path
import sys
import unittest
from src.core.types import City # this is a NamedTuple
from src.output_handler import ExcelOutputHandler
import logging
class test_output_handler(unittest.TestCase):
@classmethod
def setUpClass(cls) -> None:
cls.logger = logging.getLogger()
cls.logger.setLevel(logging.DEBUG)
cls.logger.addHandler(logging.StreamHandler(sys.stdout))
cls.test_dir = Path(__file__).parent
cls.xl_testfile = cls.test_dir / f"./output_history/{datetime.now().strftime("%Y-%m-%d %H-%M-%S")}.xlsx"
with open(cls.test_dir / "./test_worksheet.xlsx", "rb") as template_testfile:
with open(cls.xl_testfile, "wb+") as testfile:
testfile.write(template_testfile.read())
@classmethod
def tearDownClass(cls) -> None:
pass
def setUp(self) -> None:
pass
def tearDown(self) -> None:
pass
def test_write_to_sheet_overlay(self) -> None:
handler = ExcelOutputHandler(self.logger)
data = [
City('London', 'UK', 'EU', 'Rainy', 50, 'S', 5)
, City('Paris', 'FR', 'EU', 'Sunny', 10, 'A', 6)
, City('Berlin', 'DE', 'EU', 'Cold', 20, 'A', 3)
, City('Brussels', 'BE', 'EU', 'Cold', 10, 'B', 6)
, City('Lisbon', 'PT', 'EU', 'Sunny', 20, 'S+', 7)
, City('Oslo', 'NW', 'EU', 'Cold', 10, 'S', 3)
, City('Vienna', 'AT', 'EU', 'Cold', 10, 'A+', 8)
]
handler._append_to_excel_sheet(data, str(self.xl_testfile), "Sheet2")
handler._append_to_excel_sheet(data, str(self.xl_testfile), "Sheet2")
# Verify the results
import openpyxl
wb = openpyxl.load_workbook(str(self.xl_testfile))
ws = wb["Sheet2"]
# Get the number of rows with data (excluding header)
row_count = sum(1 for row in ws.iter_rows(min_row=2) if any(cell.value for cell in row))
# Assert we have twice the number of data rows
self.assertEqual(row_count, len(data) * 2,
f"Expected {len(data) * 2} rows but found {row_count}")
wb.close()
</code></pre>
<p>In the test, I expect the excel file to have 14 data rows, but it always fails with <code>AssertionError: 0 != 14 : Expected 14 rows but found 0</code>. From <a href="https://stackoverflow.com/questions/23667610/what-is-the-difference-between-setup-and-setupclass-in-python-unittest">this answer</a> I assume the code called to set up the <code>xl_testfile</code> will copy the <code>test_worksheet</code> file and then allow the tests to access it.</p>
|
<python><pandas><openpyxl>
|
2025-02-25 16:34:57
| 1
| 316
|
Pollastre
|
79,467,167
| 4,543,870
|
Python jsonschema with definitions
|
<p>I'm validating a JSON document against a schema, but the schema uses "definitions", as in the example below. On schema validation sites everything is OK, but in code with jsonschema I get the error:</p>
<pre><code>PointerToNowhere: '/definitions/ipv4_pattern' does not exist inside {}
</code></pre>
<p>Does jsonschema not work with definitions?</p>
<pre class="lang-py prettyprint-override"><code>import jsonschema
def main():
str_json_yaml= {
"version": 1,
"group": "infrastructure",
"service": "monitoring",
"tech": "172.21.101.155"}
str_json_schema = {
"$schema": "http://json-schema.org/draft-04/schema#",
"type": "object",
"properties": {
"version": {
"type": "integer"
},
"group": {
"type": "string"
},
"service": {
"type": "string"
},
"tech": { "$ref": "#/definitions/ipv4_pattern" }
},
"definitions": {
"ipv4_pattern": {"type": "string",
"pattern": "^(?=\\d+\\.\\d+\\.\\d+\\.\\d+$)(?:(?:25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9][0-9]|[0-9])\\.?){4}$"
}
}
}
validator = jsonschema.Draft202012Validator({})
obj_return = validator.evolve(schema=str_json_schema)
lst_errors = []
for error in obj_return.iter_errors(str_json_yaml):
lst_errors.append('Item: ' + error.json_path + ' / Error: ' + error.message)
if len(lst_errors) > 0:
print(*lst_errors, sep='\n')
return False
else:
return True
if __name__ == "__main__":
print(main())
</code></pre>
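<p>For comparison, a stripped-down sketch of a variant I also considered, where the validator is constructed directly from the schema instead of evolving an empty one (reusing <code>str_json_schema</code> and <code>str_json_yaml</code> from above):</p>
<pre class="lang-py prettyprint-override"><code>validator = jsonschema.Draft202012Validator(str_json_schema)
for error in validator.iter_errors(str_json_yaml):
    print('Item: ' + error.json_path + ' / Error: ' + error.message)
</code></pre>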
|
<python><jsonschema>
|
2025-02-25 16:07:33
| 3
| 423
|
devops-db
|
79,467,071
| 9,640,238
|
Adjacency matrix not square error from square dataframe with networkx
|
<p>I have code that aims to generate a graph from an adjacency matrix from a table correlating workers with their manager. The source is a table with two columns (Worker, manager). It still works perfectly from a small mock data set, but fails unexpectedly with the real data:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import networkx as nx
# Read input
df = pd.read_csv("org.csv")
# Create the input adjacency matrix
am = pd.DataFrame(0, columns=df["Worker"], index=df["Worker"])
# This way, it is impossible that the dataframe is not square,
# or that index and columns don't match
# Fill the matrix
for ix, row in df.iterrows():
am.at[row["manager"], row["Worker"]] = 1
# At this point, am.shape returns a square dataframe (2825,2825)
# Generate the graph
G = nx.from_pandas_adjacency(am, create_using=nx.DiGraph)
</code></pre>
<p>This returns: <code>NetworkXError: Adjacency matrix not square: nx,ny=(2825, 2829)</code></p>
<p>And indeed, the dimensions reported in the error are not the same as in those of the input dataframe <code>am</code>.</p>
<p>Does anyone have an idea of what happens in <code>from_pandas_adjacency</code> that could lead to this mismatch?</p>
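<p>For reference, one hypothesis I want to rule out is that duplicate or stray labels in the <code>Worker</code> column leave the index and columns slightly mismatched by the time the frame reaches networkx. A sketch of a variant that builds the matrix from the unique set of names in both columns (an assumption, not verified against the real data):</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import networkx as nx

df = pd.read_csv("org.csv")
# Use every unique name appearing in either column as both index and columns
nodes = pd.Index(df["Worker"]).union(pd.Index(df["manager"]))
am = pd.DataFrame(0, columns=nodes, index=nodes)
for _, row in df.iterrows():
    am.at[row["manager"], row["Worker"]] = 1

G = nx.from_pandas_adjacency(am, create_using=nx.DiGraph)
</code></pre>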
|
<python><pandas><dataframe><networkx><graph-theory>
|
2025-02-25 15:27:18
| 2
| 2,690
|
mrgou
|
79,466,868
| 14,282,714
|
Merged cells return NaN in load_workbook openpyxl
|
<p>I need to read an excel file using <code>load_workbook</code> from <code>openpyxl</code>. Some cells are merged in this file, which results in NaN values for some of the merged cells. I would like all merged cells to have the same value as in the file. Here I tried to create a reproducible example from this simple excel file:</p>
<p><a href="https://i.sstatic.net/Jf9Vfxc2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Jf9Vfxc2.png" alt="enter image description here" /></a></p>
<p>Code:</p>
<pre><code>import pandas as pd
import numpy as np
from openpyxl import load_workbook
workbook = load_workbook("example.xlsx", data_only=True)
sheet = workbook.active
# Create dataframe
max_row = sheet.max_row
max_col = sheet.max_column
df = pd.DataFrame(np.nan, index=range(1, max_row + 1), columns=range(1, max_col + 1))
# Iterate through all cells in the sheet
for row in sheet.iter_rows():
for cell in row:
df.at[cell.row, cell.column] = cell.value
print(df)
</code></pre>
<p>Output:</p>
<pre><code> 1 2 3 4 5
1 merged NaN long_merged NaN NaN
2 1 4.0 A NaN NaN
3 2 5.0 B NaN NaN
4 3 NaN None NaN NaN
</code></pre>
<p>As you can see it returns some NaN to the merged cells. My expected output should be like this:</p>
<pre><code> 1 2 3 4 5
1 merged merged long_merged long_merged long_merged
2 1 4.0 A NaN NaN
3 2 5.0 B NaN NaN
4 3 NaN None NaN NaN
</code></pre>
<p>So I was wondering how we can assign the same value to all cells of a merged range, as in the expected output?</p>
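<p>One possible direction (a sketch, appended after the loop above): copy the top-left value of each merged range into every cell of that range using <code>sheet.merged_cells.ranges</code>:</p>
<pre><code># Fill every cell of each merged range with the range's top-left value
for merged in sheet.merged_cells.ranges:
    top_left = sheet.cell(merged.min_row, merged.min_col).value
    for r in range(merged.min_row, merged.max_row + 1):
        for c in range(merged.min_col, merged.max_col + 1):
            df.at[r, c] = top_left
print(df)
</code></pre>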
|
<python><openpyxl>
|
2025-02-25 14:22:13
| 1
| 42,724
|
Quinten
|
79,466,568
| 4,403,732
|
How to create a literal TypeGuard in Python?
|
<p>I'd like to have a <code>TypeGuard</code> of property names in Python. The source values come from a 3rd-party source, so I need to validate them effectively.</p>
<p>To simplify, let's say I'm making an app listing people's dogs. Each dog has just 3 properties:</p>
<ol>
<li><code>name</code></li>
<li><code>birth_date</code></li>
<li><code>favorite_food</code></li>
</ol>
<p>Now, here's how my <code>TypeGuard</code> looks so far:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Literal, TypeGuard
type DogPropertyName = Literal["name", "birth_date", "favorite_food"]
def is_dog_property_name(value: str) -> TypeGuard[DogPropertyName]:
return value in ["name", "birth_date", "favorite_food"]
</code></pre>
<p>The problem is that I needed to repeat the correct values. I also tried to unpack a tuple, but it's not allowed:</p>
<pre class="lang-py prettyprint-override"><code>dog_property_names = ("name", "birth_date", "favorite_food")
type DogPropertyName = Literal[*dog_property_names]
# Error: Unpacked arguments cannot be used in this context
def is_dog_property_name(value: str) -> TypeGuard[DogPropertyName]:
return value in dog_property_names
</code></pre>
<p>Can I use Literal <code>TypeGuard</code> like this?</p>
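<p>One direction that seems to avoid the repetition (a sketch using <code>typing.get_args</code> with a plain alias instead of the <code>type</code> statement), though I am not sure it is the intended pattern:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Literal, TypeGuard, get_args

DogPropertyName = Literal["name", "birth_date", "favorite_food"]

def is_dog_property_name(value: str) -> TypeGuard[DogPropertyName]:
    # get_args recovers ("name", "birth_date", "favorite_food") at runtime
    return value in get_args(DogPropertyName)
</code></pre>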
|
<python><python-typing><typeguards>
|
2025-02-25 12:33:42
| 0
| 23,062
|
Robo Robok
|
79,466,513
| 353,337
|
Indent, but leave empty lines as-is
|
<p>I would like to use <a href="https://github.com/Instagram/LibCST" rel="nofollow noreferrer">LibCST</a> for indenting code. Wrapping a body in a <code>cst.IndentedBlock</code> works, but leaves whitespace on empty lines.</p>
<p>MWE:</p>
<pre class="lang-py prettyprint-override"><code>import libcst as cst
s = cst.parse_module("""\
a = 1
b = 2
""")
ib = cst.IndentedBlock(s.body)
print(cst.Module([ib]).code)
</code></pre>
<pre class="lang-py prettyprint-override"><code> a = 1
# here!
b = 2
</code></pre>
<p>I'd like to preserve the empty lines in the output.</p>
<p>How can I avoid this?</p>
|
<python><libcst>
|
2025-02-25 12:13:20
| 1
| 59,565
|
Nico Schlömer
|
79,466,367
| 9,793,316
|
Applying `vmap` to a model with `torch.utils.checkpoint.checkpoint`
|
<p>I need to use <code>vmap</code> to calculate gradient with respect to a model that uses <code>torch.utils.checkpoint.checkpoint</code>.</p>
<p>I got the following error</p>
<blockquote>
<p>RuntimeError: You tried to vmap over _NoopSaveInputs, but it does not
have vmap support. Please override and implement the vmap staticmethod
or set generate_vmap_rule=True. For more details, please see
<a href="https://pytorch.org/docs/master/notes/extending.func.html" rel="nofollow noreferrer">https://pytorch.org/docs/master/notes/extending.func.html</a></p>
</blockquote>
<p>After following the note given in the error, I tried adding <code>generate_vmap_rule = True</code> under the <code>_NoopSaveInputs</code> class, and I got the following error instead</p>
<blockquote>
<p>RuntimeError: torch.func transforms don't yet support saved tensor hooks. Please open an issue with your use case.</p>
</blockquote>
<p>It seems I need to define the vmap staticmethod manually, but I am not sure how to do it.</p>
<p>I found a <a href="https://github.com/pytorch/pytorch/issues/122729" rel="nofollow noreferrer">github issue</a> asking the same question, but there is still no update a year later.</p>
<p>How can I make <code>vmap</code> work with <code>torch.utils.checkpoint.checkpoint</code>?</p>
<p>Here's the minimal code to reproduce my problem</p>
<pre class="lang-py prettyprint-override"><code>import torch
import torch.nn as nn
import torch.optim as optim
import random
from torch.func import vmap
from torch.utils.checkpoint import checkpoint
# from torch.autograd.function import once_differentiable
# Set a fixed seed for reproducibility
seed = 42
torch.manual_seed(seed)
random.seed(seed)
if torch.cuda.is_available():
torch.cuda.manual_seed(seed)
torch.cuda.manual_seed_all(seed) # For multi-GPU setups
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
# Define a simple Transformer model
class SimpleTransformer(nn.Module):
def __init__(self, input_dim, model_dim, num_heads, num_layers, output_dim):
super(SimpleTransformer, self).__init__()
self.embedding = nn.Embedding(input_dim, model_dim)
self.transformer = nn.Transformer(d_model=model_dim, nhead=num_heads, num_encoder_layers=num_layers)
self.fc_out = nn.Linear(model_dim, output_dim)
def forward(self, src, tgt):
src = self.embedding(src)
tgt = self.embedding(tgt)
# output = self.transformer(src, tgt)
output = checkpoint(self.transformer, src, tgt, use_reentrant=False)
return self.fc_out(output)
def compute_loss(model,
weights,
buffers,
src,
tgt,
):
# Forward pass
output = torch.func.functional_call(
model, (weights, buffers), args=(src.unsqueeze(1), tgt[:-1].unsqueeze(1))
)
# Reshape output and target for loss calculation
output = output.view(-1, output_dim)
tgt = tgt[1:].view(-1) # Shift target sequence by one
loss = criterion_mean(output, tgt)
print(loss.shape)
return loss
# Hyperparameters
input_dim = 1000 # Vocabulary size
model_dim = 512 # Embedding dimension
num_heads = 8 # Number of attention heads
num_layers = 6 # Number of transformer layers
output_dim = 1000 # Output vocabulary size
seq_length = 10 # Sequence length
# Initialize model, loss function, and optimizer
model = SimpleTransformer(input_dim, model_dim, num_heads, num_layers, output_dim)
criterion_mean = nn.CrossEntropyLoss()
criterion_raw = nn.CrossEntropyLoss(reduction='none')
optimizer = optim.Adam(model.parameters(), lr=0.001)
# Dummy data (batch_size = 2, seq_length = 10)
src = torch.randint(0, input_dim, (seq_length, 2)) # Source sequence
tgt_raw = torch.randint(0, output_dim, (seq_length, 2)) # Target sequence
# Print src and tgt to verify they are the same each time
print("Source sequence (src):")
print("Target sequence (tgt):")
# Forward pass
output = model(src, tgt_raw[:-1, :]) # Exclude the last token in the target sequence
# Reshape output and target for loss calculation
output = output.view(-1, output_dim)
tgt = tgt_raw[1:, :].view(-1) # Shift target sequence by one
weights = dict(model.named_parameters())
buffers = dict(model.named_buffers())
grads_fn = torch.func.grad(compute_loss, has_aux=False, argnums=1)
gs = vmap(grads_fn,
in_dims=(None, None, None, 1, 1),
randomness='different')(
model,
weights,
buffers,
src,
tgt_raw
)
</code></pre>
|
<python><pytorch>
|
2025-02-25 11:15:38
| 0
| 3,053
|
Raven Cheuk
|
79,466,300
| 16,906,505
|
Playwright exception: no description
|
<p><a href="https://i.sstatic.net/TSko5dJj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TSko5dJj.png" alt="exception" /></a></p>
<p>I have a strange error with no description when I try to use async Playwright.</p>
<p>This is a FastAPI app and I am on Windows.</p>
<p>In the docs there is a mention that Playwright is <code>incompatible with SelectorEventLoop of asyncio on Windows</code>, but I am not sure what that means or whether it is related to my problem.</p>
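<p>One thing I am considering (a sketch based on that docs note, assuming the SelectorEventLoop really is the issue) is forcing the Proactor event loop policy on Windows before anything else starts:</p>
<pre class="lang-py prettyprint-override"><code>import sys
import asyncio

if sys.platform == "win32":
    # Assumption: Playwright's async API needs subprocess support,
    # which the Selector event loop on Windows does not provide
    asyncio.set_event_loop_policy(asyncio.WindowsProactorEventLoopPolicy())
</code></pre>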
|
<python><python-asyncio><fastapi><playwright-python>
|
2025-02-25 10:59:16
| 0
| 500
|
RheinmetallSkorpion
|
79,466,178
| 3,070,181
|
How to place Tkinter PanedWindow sash correctly on a Notebook tab?
|
<p>Issue with PanedWindow Sash Placement in a Notebook Tab.</p>
<p>I am trying to create a Notebook tab that contains a PanedWindow with four panes.</p>
<p>However, when I attempt to position the sashes manually, the third sash does not appear at the expected position.</p>
<p><strong>Expected Behavior:</strong></p>
<p>The three sashes should be placed at positions 100, 657, and 1000.</p>
<p><strong>Observed Behavior:</strong></p>
<ul>
<li>The first two sashes are placed correctly at 100 and 657.</li>
<li>The third sash appears at 663 instead of 1000.</li>
</ul>
<p>Here is my code:</p>
<pre class="lang-py prettyprint-override"><code>
import tkinter as tk
from tkinter import ttk
class Main():
def __init__(self) -> None:
self.root = tk.Tk()
self.root.geometry('1100x400')
self.root.protocol("WM_DELETE_WINDOW", self.dismiss)
self.root.columnconfigure(0, weight=1)
self.root.rowconfigure(0, weight=1)
notebook = self._notebook(self.root)
notebook.grid(row=0, column=0, sticky=tk.NSEW)
# Debugging button to retry sash placement
button = tk.Button(self.root, text='...', command=self.click)
button.grid(row=1, column=0)
self.root.mainloop()
def _notebook(self, master: tk.Frame) -> ttk.Notebook:
notebook = ttk.Notebook(master)
self.test_tab = TestFrame(notebook)
notebook.add(self.test_tab.master_frame, text='Test')
return notebook
def dismiss(self, *args) -> None:
print([self.test_tab.master_frame.sash_coord(index) for index in range(3)])
self.root.destroy()
def click(self, *args) -> None:
# Attempt to reposition sashes (1 moves, 2 doesn't)
sashes = [(100, 1), (500, 1), (1000, 1)]
for index, sash in enumerate(sashes):
self.test_tab.master_frame.sash_place(index, sash[0], 0)
class TestFrame():
def __init__(self, master) -> None:
self.master_frame = self._master_frame(master)
self.master_frame.grid(row=0, column=0, sticky=tk.EW)
# Expected sash positions
sashes = [(100, 1), (657, 1), (1000, 1)]
for index, sash in enumerate(sashes):
self.master_frame.sash_place(index, sash[0], 0)
def _master_frame(self, master) -> ttk.PanedWindow:
frame = tk.PanedWindow(master, orient=tk.HORIZONTAL)
for _ in range(4):
listbox = tk.Listbox(frame)
frame.add(listbox)
return frame
if __name__ == '__main__':
Main()
</code></pre>
<p>Even after the window is created, on clicking the button, the sash will not get positioned correctly.</p>
<p>System Information:</p>
<pre><code>OS: Manjaro Linux
KDE Plasma Version: 6.2.5
KDE Frameworks Version: 6.10.0
Qt Version: 6.8.2
Python Version: 3.13.1
tk.TkVersion: 8.6
</code></pre>
<p>Any insights into why the third sash isn't placed correctly would be greatly appreciated!</p>
|
<python><tkinter>
|
2025-02-25 10:27:26
| 1
| 3,841
|
Psionman
|
79,465,862
| 1,835,603
|
WebSocket Server in Python Not Receiving All Messages While Node.js Does
|
<p>I am running a WebSocket server behind Nginx, and I have two different implementations: one in Node.js and one in Python. The Node.js WebSocket server receives all messages correctly, but the Python server <strong>misses one of the messages</strong>.</p>
<h3><strong>WebSocket Message Flow:</strong></h3>
<ol>
<li>Client opens WebSocket: <code>/my-app/14515/</code></li>
<li>Client sends message: <code>{ "type": "my-app:react_api:editor", "url_kwargs": {...} }</code></li>
<li>Client sends message: <code>{ "type": "my-app:react_api:problem", "url_kwargs": {...} }</code></li>
<li>Client opens another WebSocket: <code>/editor/14515/</code></li>
</ol>
<p>When using <strong>Node.js</strong>, both messages (<code>my-app:react_api:editor</code> and <code>my-app:react_api:problem</code>) are received. But when using <strong>Python</strong>, only the first message (<code>my-app:react_api:editor</code>) is logged, and the second one is missing.</p>
<hr />
<h3><strong>Nginx Configuration:</strong></h3>
<pre class="lang-none prettyprint-override"><code>location /ws/ {
proxy_pass http://127.0.0.1:8387/;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_redirect off;
proxy_buffering off;
proxy_cache off;
proxy_read_timeout 3600s;
proxy_send_timeout 3600s;
include proxy_params;
proxy_set_header X-Forwarded-Host $server_name;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
</code></pre>
<hr />
<h3><strong>Node.js WebSocket Server (Works Fine)</strong></h3>
<pre class="lang-js prettyprint-override"><code>const WebSocket = require('ws');
const wss = new WebSocket.Server({ port: 8387 });
wss.on('connection', (ws, req) => {
const clientIP = req.socket.remoteAddress;
const clientURL = req.url || '/';
console.log(`✅ New connection from ${clientIP}, URL: ${clientURL}`);
ws.send("👋 Welcome to the WebSocket server!");
ws.on('message', (message) => {
console.log(`📩 [${clientURL}] Message from ${clientIP}: ${message}`);
ws.send(`📢 Echo: ${message}`);
});
ws.on('close', (code, reason) => {
console.log(`🔴 [${clientURL}] Connection closed: Code ${code}, Reason: ${reason}`);
});
ws.on('error', (err) => {
console.error(`❌ [${clientURL}] WebSocket error: ${err.message}`);
});
});
console.log("🚀 WebSocket server running on ws://localhost:8387");
</code></pre>
<h3><strong>Python WebSocket Server (Missing Second Message)</strong></h3>
<pre class="lang-py prettyprint-override"><code>import asyncio
import websockets
async def handle_connection(websocket, path):
client_ip = websocket.remote_address[0]
client_url = path if path else "/"
print(f"✅ New connection from {client_ip}, URL: {client_url}")
await websocket.send("👋 Welcome to the WebSocket server!")
try:
async for message in websocket:
print(f"📩 [{client_url}] Message from {client_ip}: {message}")
await websocket.send(f"📢 Echo: {message}")
except websockets.exceptions.ConnectionClosed as e:
print(f"🔴 [{client_url}] Connection closed: Code {e.code}, Reason: {e.reason}")
except Exception as e:
print(f"❌ [{client_url}] WebSocket error: {str(e)}")
async def main():
async with websockets.serve(handle_connection, "localhost", 8387):
await asyncio.Future() # Run forever
print("🚀 WebSocket server running on ws://localhost:8387")
asyncio.run(main())
</code></pre>
<hr />
<h3><strong>Observed Logs</strong></h3>
<h4><strong>Node.js Logs (Works as Expected)</strong></h4>
<pre><code>🚀 WebSocket server running on ws://localhost:8387
✅ New connection from ::ffff:127.0.0.1, URL: /my-app/14515/
✅ New connection from ::ffff:127.0.0.1, URL: /editor/14515/
📩 [/my-app/14515/] Message from ::ffff:127.0.0.1: {"type":"my-app:react_api:editor",...}
📩 [/my-app/14515/] Message from ::ffff:127.0.0.1: {"type":"my-app:react_api:problem",...}
</code></pre>
<h4><strong>Python Logs (Missing <code>my-app:react_api:problem</code>)</strong></h4>
<pre><code>🚀 WebSocket server running on ws://localhost:8387
✅ New connection from 127.0.0.1, URL: /my-app/14515/
📩 [/my-app/14515/] Message from 127.0.0.1: {"type":"my-app:react_api:editor",...}
✅ New connection from 127.0.0.1, URL: /editor/14515/
</code></pre>
<hr />
<h3><strong>What I’ve Tried So Far</strong></h3>
<ol>
<li>✅ <strong>Confirmed the client sends both messages</strong> by logging before sending.</li>
<li>✅ <strong>Logged <code>path</code> in the Python server</strong> to ensure it receives the correct WebSocket connection.</li>
<li>✅ <strong>Tried reading messages differently in Python</strong>, using:
<pre class="lang-py prettyprint-override"><code>while True:
message = await websocket.recv()
</code></pre>
</li>
<li>✅ <strong>Tried adding a small delay (<code>await asyncio.sleep(0.01)</code>)</strong> in Python after receiving a message.</li>
</ol>
<hr />
<h3><strong>Question</strong></h3>
<p>Why does the Python WebSocket server <strong>miss the second message</strong>, while the Node.js implementation works fine? How can I ensure Python properly receives and processes all WebSocket messages?</p>
|
<python><websocket><python-asyncio>
|
2025-02-25 08:27:54
| 0
| 523
|
MKM
|
79,465,837
| 16,869,946
|
Creating new columns for the estimated "utilities" and predicted probabilities in xlogit
|
<p>I was trying to use the <code>xlogit</code> package in python to fit a mixed logit model:</p>
<p><a href="https://xlogit.readthedocs.io/en/latest/notebooks/mixed_logit_model.html#Car-Dataset" rel="nofollow noreferrer">https://xlogit.readthedocs.io/en/latest/notebooks/mixed_logit_model.html#Car-Dataset</a></p>
<pre><code>import pandas as pd
import numpy as np
from xlogit import MixedLogit
df = pd.read_csv("https://raw.github.com/arteagac/xlogit/master/examples/data/car100_long.csv")
df['price'] = -df['price']/10000
df['opcost'] = -df['opcost']
varnames = ['hiperf', 'medhiperf', 'price', 'opcost', 'range', 'ev', 'hybrid']
model = MixedLogit()
model.fit(X=df[varnames],
y=df['choice'],
varnames=varnames,
alts=df['alt'],
ids=df['choice_id'],
panels=df['person_id'],
randvars = {'price': 'ln', 'opcost': 'n',
'range': 'ln', 'ev':'n', 'hybrid': 'n'},
n_draws = 100)
model.summary()
</code></pre>
<p>And I would like to create two new columns <code>utility</code> and <code>predicted_probability</code> for the original dataset which returns the estimated "utility" of each alternative (i.e. the V_n^j in the documentation: <a href="https://arteagac.github.io/xlogit.pdf" rel="nofollow noreferrer">https://arteagac.github.io/xlogit.pdf</a>) and the predicted probability. The <code>xlogit</code> package provides a <code>predict</code> function that can return probabilities of choosing each alternative for each individual, however, I don't know how to compute the estimated utility quickly using the fitted model.</p>
|
<python><pandas>
|
2025-02-25 08:16:39
| 0
| 592
|
Ishigami
|
79,465,836
| 12,935,622
|
local package shows in pip list but not recognized when imported
|
<p>I have file structure as follows</p>
<pre class="lang-none prettyprint-override"><code>- project
- LLaVA-NeXT
- llava
- __init__.py
- ...
- playground
- models
- script.py
</code></pre>
<p>In <code>script.py</code>, I did <code>from llava.model.builder import load_pretrained_model</code> but I get <code>ModuleNotFoundError: No module named 'llava.mm_utils'; 'llava' is not a package</code>. I installed LLaVA-NeXT locally as an editable package following <a href="https://github.com/LLaVA-VL/LLaVA-NeXT" rel="nofollow noreferrer">this</a> and it shows up in <code>pip list</code> as <code>llava 1.7.0.dev0 /home/username/project/LLaVA-NeXT</code>. What am I doing wrong? How do I make the import work for the script?</p>
|
<python><pip>
|
2025-02-25 08:16:22
| 1
| 1,191
|
guckmalmensch
|
79,465,662
| 17,328
|
Rendering streamed Markdown chunks in IPython/Jupyter
|
<p>The context is that I'm streaming LLM tokens from a model, and they're in Markdown, so I want to repeatedly append to the rendered Markdown.</p>
<p>This is roughly the code I'm using with bare text:</p>
<pre class="lang-py prettyprint-override"><code>async for chunk in response.receive():
print(chunk.text, end='')
</code></pre>
<p>Which outputs:</p>
<pre><code># Document heading
Intro text
* A bullet point
* Another bullet point
</code></pre>
<p>But I want to render the markdown:</p>
<pre class="lang-py prettyprint-override"><code>from IPython import display, Markdown
async for chunk in response.receive():
display(Markdown(chunk.text))
</code></pre>
<p>Since this outputs a markdown <em>block</em> with each call, there are breaks between each chunk (but with occasional formatting):</p>
<pre><code>Document
heading
Intro
text
*
A
bullet point
*
Another
bullet point
</code></pre>
<p>Is there a way to do this naturally with the <code>IPython</code> or other library? Or do I need to manually buffer and re-render the response?</p>
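<p>A minimal sketch of the buffering approach: keep one output area and re-render it in place with a display handle, accumulating the streamed text, instead of emitting a new Markdown block per chunk.</p>
<pre class="lang-py prettyprint-override"><code>from IPython.display import display, Markdown

handle = display(Markdown(""), display_id=True)  # one output area to update in place
buffer = ""
async for chunk in response.receive():           # `response` as in the snippet above
    buffer += chunk.text
    handle.update(Markdown(buffer))              # re-render the accumulated Markdown
</code></pre>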
|
<python><jupyter-notebook><markdown><ipython>
|
2025-02-25 07:07:46
| 0
| 8,365
|
Mark McDonald
|
79,465,562
| 11,277,878
|
Pybind11 - Multiple definition of PyInit_module_name
|
<p>Solved! - Please check the <a href="https://stackoverflow.com/a/79465870/11277878">answer</a>.</p>
<p>I wrote a library where headers and python bindings are auto-generated. For example <code>dummy_bind.cpp</code> for <code>dummy_message.h</code> and each <code>_bind.cpp</code> file has <code>PYBIND11_MODULE</code> call in it for their specific class. There are dozens of other _bind.cpp files for other headers. What should be the module name for each file when calling the <code>PYBIND11_MODULE</code> like:</p>
<pre><code>PYBIND11_MODULE(protocol_name, m)
{
/// …
}
</code></pre>
<p>If I use <code>protocol_name</code> in each <code>PYBIND11_MODULE(protocol_name, m)</code> call, compilation fails with a multiple-definition error like <code>multiple definition of PyInit_protocol_name</code>. If I generate a unique module name for each message, like <code>PYBIND11_MODULE(protocol_name_dummy, m)</code>, the extension compiles, but then I would have to import each module one by one, which is not viable.</p>
<p>Should I do all exports inside a single <code>PYBIND11_MODULE</code> call? Thanks in advance.</p>
|
<python><c++><pybind11>
|
2025-02-25 06:14:39
| 2
| 614
|
no more sigsegv
|
79,465,553
| 13,097,857
|
How to limit google maps auto complete to only one country?
|
<p>I have tried everything to make my search input only give autocomplete options from the Dominican Republic, but I have not been able to achieve it. Does anyone here know how to do this?</p>
<pre class="lang-html prettyprint-override"><code><input
type="text"
id="searchInput"
class="form-control"
placeholder="Search for a location..."
>
</code></pre>
<pre class="lang-js prettyprint-override"><code>let autocomplete;
const input = document.getElementById('searchInput');
autocomplete = new google.maps.places.Autocomplete(input, {
types: ['(cities)', 'geocode'], // Allow both city and address searches
componentRestrictions: { country: 'do' }
});
// Handle place selection
autocomplete.addListener('place_changed', function() {
const place = autocomplete.getPlace();
if (!place.geometry) {
return;
}
});
</code></pre>
|
<javascript><python><django><google-maps-api-3><maps>
|
2025-02-25 06:09:24
| 1
| 302
|
Sebastian Nin
|
79,465,547
| 705,086
|
How to set up same SSLContext in Python 3.8 through 3.13?
|
<p>My code so far:</p>
<pre class="lang-py prettyprint-override"><code> def _ssl_context(self, ca: str | None) -> ssl.SSLContext:
# NOTE: ssl.create_default_context() doesn't allow setting the context.protocol in a way
# that's the same across Python 3.8 and 3.10 onwards. Whip the context up by hand.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.minimum_version = ssl.TLSVersion.TLSv1_3
context.set_alpn_protocols(['http/1.1'])
context.verify_flags |= ssl.VERIFY_X509_STRICT
if (partial_chain := getattr(ssl, "VERIFY_X509_PARTIAL_CHAIN", None)):
# Available starting from Python 3.10. The partial chain flag allows trusting the
# intermediate CAs in the CA list without the matching root CA
context.verify_flags |= partial_chain
if ca is not None:
context.load_verify_locations(cadata=ca)
else:
context.load_default_certs()
return context
</code></pre>
<p>I'm in charge of a library (not an app) which needs to send something out over HTTPS and I'm using pure Python <code>urllib</code> to do so.</p>
<p>I want one codebase to support <strong>Python 3.8 through 3.13</strong>.</p>
<p>I want "Modern" settings, like the Mozilla SSL configuration generator <a href="https://ssl-config.mozilla.org/" rel="nofollow noreferrer">https://ssl-config.mozilla.org/</a> except for the client side of the HTTPS connection.</p>
<p>Am I missing something?</p>
<p>Is this the right way to go about it?</p>
|
<python><security><ssl><python-3.8><tls1.3>
|
2025-02-25 06:05:59
| 1
| 11,856
|
Dima Tisnek
|
79,465,431
| 16,382,765
|
Get only user email address with Google OAuth in Python
|
<p>I am working on a <code>Python</code> web backend. And I have successfully implemented <code>Google OAuth</code> functionality using the following page.</p>
<p><a href="https://developers.google.com/identity/protocols/oauth2/web-server#python" rel="nofollow noreferrer">https://developers.google.com/identity/protocols/oauth2/web-server#python</a></p>
<p>Now I just want to know the email address of the logged-in user.</p>
<p>However, I could not find a function to get only the email address in the <code>google-auth-oauthlib</code> library used in the above page.</p>
<p><a href="https://github.com/googleapis/google-auth-library-python-oauthlib" rel="nofollow noreferrer">https://github.com/googleapis/google-auth-library-python-oauthlib</a></p>
<p>Instead, I added permissions for other APIs that I don't need (e.g. gmail-readonly) and used the function of those APIs to get the user's email address. Of course, I don't like this.</p>
<p>So this time, I imported the <code>requests</code> library in my code and successfully got the user's email address using the <code>REST API</code>.
However, if I was going to use <code>REST API</code>, I would have implemented all the <code>OAuth</code> processes as <code>REST API</code> instead of the <code>google-auth-oauthlib</code> library.</p>
<p>However, this seems strange no matter how I look at it. It should be possible using only the <code>google-auth-oauthlib</code> library and without any unnecessary additional permissions; I just can't figure out how. Please help.</p>
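<p>A hedged sketch of one route: request only the <code>openid</code> and <code>email</code> scopes in the flow and decode the returned ID token with google-auth, so no extra API scopes are needed. Whether <code>credentials.id_token</code> is populated depends on the scopes granted; <code>CLIENT_ID</code> here is a placeholder.</p>
<pre class="lang-py prettyprint-override"><code>from google.oauth2 import id_token
from google.auth.transport import requests as google_requests

credentials = flow.credentials  # flow from google_auth_oauthlib, as in the linked guide
claims = id_token.verify_oauth2_token(
    credentials.id_token, google_requests.Request(), audience=CLIENT_ID
)
user_email = claims.get("email")
</code></pre>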
|
<python><rest><google-oauth>
|
2025-02-25 04:41:07
| 1
| 523
|
enoeht
|
79,465,328
| 1,273,987
|
Arrays of size 0 in NumPy
|
<p>I need to work with arrays that can have zeros in their shapes. However, I am encountering an issue. Here's an example:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
arr = np.array([[]])
assert arr.shape == (1,0)
arr.reshape((1,0)) # No problem (nothing changes)
arr.reshape((-1,0)) # ValueError: cannot reshape array of size 0 into shape (0)
</code></pre>
<p>I always thought that <code>-1</code> for a reshape operation means the product of all the remaining dimensions, i.e., 1 in this case. Is this a bug, or am I not understanding how this should work?</p>
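<p>For what it's worth, a small illustration of why the error appears: with a 0 in the target shape, the size of a <code>-1</code> axis cannot be inferred (any value would satisfy <code>n * 0 == 0</code>), so that axis has to be given explicitly.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

arr = np.array([[]])                 # shape (1, 0)
print(arr.reshape((1, 0)).shape)     # (1, 0): explicit sizes are fine
print(arr.reshape((5, 0)).shape)     # (5, 0): any row count "fits" a size-0 array
# arr.reshape((-1, 0))               # raises: -1 is ambiguous when another dim is 0
</code></pre>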
|
<python><arrays><numpy><numpy-ndarray>
|
2025-02-25 03:28:26
| 1
| 2,105
|
Ziofil
|
79,465,264
| 2,000,548
|
How to cancel a Ray job submitted to a Ray cluster?
|
<p>I have a long-running Ray job.</p>
<p>main.py</p>
<pre class="lang-py prettyprint-override"><code>import time
import ray
@ray.remote
def square(n: int) -> int:
time.sleep(50000000)
return n * n
@ray.remote
def sum_list(numbers: list[int]) -> int:
return sum(numbers)
if __name__ == "__main__":
ray.init()
numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
squared_tasks = [square.remote(n) for n in numbers]
squared_results: list[int] = ray.get(squared_tasks)
print(f"{squared_results = }")
sum_task = sum_list.remote(squared_results)
total_sum = ray.get(sum_task)
print(f"{total_sum = }")
ray.shutdown()
</code></pre>
<p>submit.py</p>
<pre><code>from ray.job_submission import JobSubmissionClient
client = JobSubmissionClient("https://ray.example.com")
client.submit_job(
entrypoint="python src/main.py",
runtime_env={
"working_dir": "./",
},
)
</code></pre>
<p>I submitted this job by calling <code>python src/submit.py</code>.</p>
<p>How to cancel this Ray job?</p>
<p>I saw a question asked <a href="https://discuss.ray.io/t/how-to-stop-the-driver-jobs-from-ray-cluster/12545/3" rel="nofollow noreferrer">here</a> for over a year, but there is no answer.</p>
<p>Thanks!</p>
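<p>A hedged sketch: <code>JobSubmissionClient.submit_job</code> returns a submission ID, and the same client can stop the job with it (the <code>ray job stop</code> CLI does the same thing).</p>
<pre class="lang-py prettyprint-override"><code>from ray.job_submission import JobSubmissionClient

client = JobSubmissionClient("https://ray.example.com")
job_id = client.submit_job(
    entrypoint="python src/main.py",
    runtime_env={"working_dir": "./"},
)

# later, to cancel the running job:
client.stop_job(job_id)
# or from a shell: ray job stop <job_id> --address https://ray.example.com
</code></pre>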
|
<python><ray>
|
2025-02-25 02:27:12
| 1
| 50,638
|
Hongbo Miao
|
79,465,204
| 1,914,781
|
plotly - remove holiday from stock plot not work as expected
|
<p>Since 2025-02-17 is a holiday, it should be removed from the plot. <code>rangebreaks</code> looks like exactly the right tool for this case, but it generates a very weird graph.</p>
<p>full code:</p>
<pre><code>#!/usr/bin/env python3
import yfinance as yf
import plotly.graph_objs as go
from plotly.subplots import make_subplots
import pandas as pd
import datetime
STK="AMZN"
end_time = datetime.datetime.strptime("2025-02-19","%Y-%m-%d")
start_time = end_time - datetime.timedelta(days=7)
df = yf.Ticker(STK).history(start=start_time,end=end_time,interval="30m")
fig = make_subplots(rows=1, cols=1,
vertical_spacing=0.01,
#shared_xaxes=True
)
trace = go.Candlestick(
x=df.index,
open=df["Open"],
high=df["High"],
low=df["Low"],
close=df["Close"],
)
fig.add_trace(trace)
fig.update_xaxes(
rangebreaks=[
dict(bounds=["sat", "mon"]),
dict(values=["2025-02-17","2025-12-25"]),
dict(bounds=[16, 9.5], pattern="hour")
]
)
fig.show()
</code></pre>
<p>Output:
<a href="https://i.sstatic.net/ZNAO6cmS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZNAO6cmS.png" alt="enter image description here" /></a></p>
<p>If comment out the second line in <code>rangebreaks</code>, output graph as below:</p>
<p><a href="https://i.sstatic.net/YFhIwPRx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YFhIwPRx.png" alt="enter image description here" /></a></p>
<p>The expected output should cut out the range of 2025-02-17:</p>
<p><a href="https://i.sstatic.net/f5naEOL6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/f5naEOL6.png" alt="enter image description here" /></a></p>
|
<python><plotly>
|
2025-02-25 01:30:34
| 0
| 9,011
|
lucky1928
|
79,465,038
| 2,072,516
|
Narrowing typing on a subclass
|
<p>I have a Pydantic class to represent how my general API response should look:</p>
<pre><code>class RootDataResponseSchema(BaseModel):
status: Literal["success", "fail"]
data: BaseModel
</code></pre>
<p>I went with <code>BaseModel</code> for data, as the idea would be I'd subclass <code>RootDataResponseSchema</code> and add more specificity to specific endpoint responses, such as:</p>
<pre><code>class NewItemResponse(RootDataResponseSchema):
status: Literal["success"] = "success"
data: NewItemResponseData
</code></pre>
<p>The problem I've run into seems to only affect the IDE (in my case VSCode), so I assume it is a notice from Pylance, which tells me that both <code>status</code> and <code>data</code> have changed from the root class. It does mention it's irrelevant, but since a subclass instance does count as the superclass for comparison purposes, is there a mechanism for narrowing the scope of type hints in Python? Or is this just an artifact of loosely typed Python?</p>
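<p>A hedged sketch of one alternative (assuming Pydantic v2-style generic models): parameterize the root response with a type variable instead of overriding the field's type in the subclass, which keeps the checker happy for <code>data</code>.</p>
<pre class="lang-py prettyprint-override"><code>from typing import Generic, Literal, TypeVar
from pydantic import BaseModel

DataT = TypeVar("DataT", bound=BaseModel)

class RootDataResponseSchema(BaseModel, Generic[DataT]):
    status: Literal["success", "fail"]
    data: DataT

class NewItemResponseData(BaseModel):
    item_id: int  # illustrative field, not from the original question

class NewItemResponse(RootDataResponseSchema[NewItemResponseData]):
    status: Literal["success"] = "success"
</code></pre>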
|
<python><python-typing><pydantic>
|
2025-02-24 23:21:12
| 2
| 3,210
|
Rohit
|
79,465,001
| 849,076
|
Find corner of annotation with background
|
<p>I am trying to find the corners of the box produced by adding a matplotlib annotation with background color. This is what I have tried:</p>
<pre><code>a = ax.annotate(
label,
xy=(x, y),
backgroundcolor=(0.69, 0.6, 0.65, 0.65),
color="#444444",
fontsize=14,
xytext=(0, 0),
textcoords="offset pixels",
)
# draw figure to be able to find actual corners
ax.get_figure().canvas.draw()
points = a.get_window_extent().get_points() # ...or get_tightbbox(), doesn't really matter here
corner_ur = points[1] # upper right
corner_lr = [points[1][0], points[0][1]] # lower right
... and so on
</code></pre>
<p>However, the bbox returned by <code>a.get_tightbbox()</code> is the bbox of the text, not the whole padded area with background fill. How can I find the corners of <em>the whole padded box</em>?</p>
<p>This image shows in green the result of plotting <code>corner_ur</code> and <code>corner_lr</code>:</p>
<p><a href="https://i.sstatic.net/TMULfAnJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TMULfAnJ.png" alt="An annotation with bg color, and two dots well inside the colored area" /></a></p>
<p>The points I get are at the corners of the text's bbox, but not of the whole filled area. How do I get those? (Or, alternatively, how do I find the padding used, to calculate them?)</p>
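<p>A hedged sketch (based on the assumption that <code>backgroundcolor</code> is drawn via the text's bbox patch): query the patch rather than the text for its window extent after drawing.</p>
<pre class="lang-py prettyprint-override"><code>fig = ax.get_figure()
fig.canvas.draw()

patch = a.get_bbox_patch()          # the padded background patch behind the text
bbox = patch.get_window_extent()    # extent of the filled area, after the draw
(x0, y0), (x1, y1) = bbox.get_points()
corner_ur = (x1, y1)                # upper right of the filled area
corner_lr = (x1, y0)                # lower right of the filled area
</code></pre>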
|
<python><matplotlib>
|
2025-02-24 22:54:19
| 1
| 8,641
|
leo
|
79,464,768
| 15,524,510
|
Catboost error when loading pool from disk
|
<p>I am creating a catboost pool from a pandas dataframe (columns have strings as names, not sure if that's relevant) and then quantizing it and saving to disk using this code:</p>
<pre><code>import catboost as cb
import numpy as np
import pandas as pd
data = {
'label': np.random.randint(0, 5, 100), # Random integers 0-4 for label
'feature1': np.random.randn(100), # Random normal distribution
'feature2': np.random.randn(100),
'feature3': np.random.randn(100),
'feature4': np.random.randn(100),
'feature5': np.random.randn(100),
'feature6': np.random.randn(100),
'feature7': np.random.randn(100),
'feature8': np.random.randn(100),
'feature9': np.random.randn(100)
}
train = pd.DataFrame(data)
factors = train.columns.values.tolist()[1:]
pool = cb.Pool(data = train[factors], label = train['label'])
pool.quantize()
pool.save('cbpool')
pool2 = cb.Pool('cbpool')
</code></pre>
<p>This will yield the following error on the last line:</p>
<pre><code>---------------------------------------------------------------------------
CatBoostError Traceback (most recent call last)
<ipython-input-10-440a13636ec9> in <cell line: 0>()
----> 1 pool = cb.Pool('/content/drive/MyDrive/temp/traincb'+endtrain.strftime("%Y%m%d"))
1 frames
/usr/local/lib/python3.11/dist-packages/catboost/core.py in __init__(self, data, label, cat_features, text_features, embedding_features, embedding_features_data, column_description, pairs, graph, delimiter, has_header, ignore_csv_quoting, weight, group_id, group_weight, subgroup_id, pairs_weight, baseline, timestamp, feature_names, feature_tags, thread_count, log_cout, log_cerr, data_can_be_none)
785 "feature_names should have None or string or pathlib.Path type when the pool is read from the file."
786 )
--> 787 self._read(data, column_description, pairs, graph, feature_names, delimiter, has_header, ignore_csv_quoting, thread_count)
788 else:
789 if isinstance(data, FeaturesData):
/usr/local/lib/python3.11/dist-packages/catboost/core.py in _read(self, pool_file, column_description, pairs, graph, feature_names_path, delimiter, has_header, ignore_csv_quoting, thread_count, quantization_params, log_cout, log_cerr)
1334 item = ''
1335 self._check_thread_count(thread_count)
-> 1336 self._read_pool(
1337 pool_file,
1338 column_description,
_catboost.pyx in _catboost._PoolBase._read_pool()
_catboost.pyx in _catboost._PoolBase._read_pool()
CatBoostError: library/cpp/string_utils/csv/csv.cpp:30: RFC4180 violation: quotation mark must be in the escaped string only
</code></pre>
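<p>A hedged sketch of a possible fix: a pool saved with <code>save()</code> is written in CatBoost's quantized format, so loading it back needs the <code>quantized://</code> scheme rather than being parsed as a delimited text file (which is what triggers the CSV/RFC4180 error).</p>
<pre class="lang-py prettyprint-override"><code>pool2 = cb.Pool("quantized://cbpool")
</code></pre>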
|
<python><pandas><save><catboost>
|
2025-02-24 20:48:33
| 1
| 363
|
helloimgeorgia
|
79,464,713
| 2,438,371
|
Turtle - screen.update() not working with copy method?
|
<p>I have a list with several turtle instances. I want to create a copy of one object and let this one move forward.</p>
<p>This works fine. But when I disable screen updates and do them manually in my code, it gives me a different result and I don't see the turtle at its new position, even though I can tell that it has moved there.</p>
<p>Here is an example. This produces the expected output. But as soon as I disable <code>tracer</code> and do a manual <code>update</code> it behaves weirdly.</p>
<pre class="lang-py prettyprint-override"><code>from turtle import Screen, Turtle
from copy import copy
screen = Screen()
screen.setup(width=250, height=250)
#screen.tracer(0)
turtles = [Turtle(), Turtle()]
for i in range(10):
#screen.update()
new_turtle = copy(turtles[0])
new_turtle.forward(10)
turtles.append(new_turtle)
screen.exitonclick()
</code></pre>
|
<python><turtle-graphics><python-turtle>
|
2025-02-24 20:21:25
| 1
| 659
|
Denny Crane
|
79,464,633
| 2,077,648
|
Copying certain cells of an excel file to fix the report in Python
|
<p>In the table below, how can we copy the TempX column cells for each test from the "long" row (Partition column) to the corresponding test rows?<br></p>
<p>For example, when we filter the Scenario column on cell A1.results.0.test1, it has two rows, one for each partition type;<br>
here we need to make sure that both rows have the same Temp1 to Temp5 cell values as those present in the "long" row (Partition column).</p>
<ul>
<li><p>Input Table
<a href="https://i.sstatic.net/wixVF5sY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wixVF5sY.png" alt="enter image description here" /></a></p>
</li>
<li><p>Expected Result Output Table
<a href="https://i.sstatic.net/ykO7SYX0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ykO7SYX0.png" alt="enter image description here" /></a>
Please suggest how can we copy the cells for each test</p>
</li>
</ul>
<pre><code>import openpyxl as op
import os
import datetime
input_file_path = r"C:\deleteme_flat"
result_filename = "Report_Summary_" + datetime.datetime.now().strftime('%Y-%m-%d_%H-%M-%S') + ".xlsx"
result_file_path = os.path.join(input_file_path, result_filename)
results = list()
def job_summary_gen():
# this is the extension you want to detect
extension = '.xlsx'
file_list = []
i = 0
new_wb = op.Workbook()
# sheet name update
new_wb['Sheet'].title = "Summary"
# wb2 = op.load_workbook(result_file_path)
ws2 = new_wb['Summary']
# If there are any rows already populated in the destination sheet start at next row otherwise start at row 1
if ws2.max_row == 1:
new_row = ws2.max_row
else:
new_row = ws2.max_row + 1
for root, dirs_list, files_list in os.walk(input_file_path):
for file_name in files_list:
if os.path.splitext(file_name)[-1] == extension and file_name == 'testReport.xlsx':
file_name_path = os.path.join(root, file_name)
print(file_name_path) # This is the full path of the filter file
file_list.append(file_name_path)
file_dir = os.path.dirname(file_name_path)
folder_name = os.path.basename(file_dir)
print(folder_name)
wb1 = op.load_workbook(file_name_path)
ws1 = wb1['Summary']
# read flat Report
for cell in ws1.iter_rows(min_col=3, max_col=3):
# print(cell[0].value)
# creating the header
if 'Partition' in cell[0].value.lower() and i < 1:
# print("header workbook"),
# Add 'File #' to first cell in destination row using row number as #
ws2.cell(row=new_row, column=1).value = "File Name"
for x in range(2, ws1.max_column + 1):
# Read each cell from col 1 to last used col
cell_value = ws1.cell(row=cell[0].row, column=x)
# Write last read cell to next empty row
ws2.cell(row=new_row, column=x).value = cell_value.value
# Increment to next unused row
new_row += 1
i += 1
# search for 'long' lower case against the cell value converted to lower case
if 'Partition' not in cell[0].value.lower() and 'long' in cell[0].value.lower():
# Add 'File name' to first cell in destination row using row number as #
ws2.cell(row=new_row, column=1).value = folder_name
# Copy cells Temp1 to Temp5
for x in range(2, ws1.max_column + 1):
# Logic to copy cells Temp1 to Temp5 from the long row to test row
# Increment to next unused row
new_row += 1
new_wb.save(result_file_path)
# create_report()
job_summary_gen()
</code></pre>
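<p>A hedged pandas-based sketch of just the copy step (the column names <code>Scenario</code>, <code>Partition</code> and <code>Temp1</code> to <code>Temp5</code> are assumptions based on the screenshots): take the Temp columns from each Scenario's "long" row and broadcast them to every row of that Scenario.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd

df = pd.read_excel(file_name_path, sheet_name="Summary")
temp_cols = ["Temp1", "Temp2", "Temp3", "Temp4", "Temp5"]

# one row per Scenario holding the Temp values from its "long" partition row
long_rows = df.loc[df["Partition"].str.contains("long", case=False, na=False),
                   ["Scenario"] + temp_cols]

# drop the per-row Temp values and re-attach the "long" values to every row
filled = df.drop(columns=temp_cols).merge(long_rows, on="Scenario", how="left")
filled.to_excel(result_file_path, index=False)
</code></pre>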
|
<python><python-3.x><pandas><dataframe><openpyxl>
|
2025-02-24 19:36:57
| 2
| 967
|
user2077648
|
79,464,608
| 8,887,483
|
How do I correctly order conv1d weights in PyTorch when using groups for per-batch unique filters?
|
<p>I'm working on a PyTorch project where I have a special parameter that learns conv1d weights. After obtaining these weights, I apply a multiplying factor that can vary for each item in the batch. My goal is to use the groups argument in the convolution operation so that each batch item gets its own unique filter.</p>
<p>I know that if I have a time series input with shape B[atch], C[hannels], T[ime], and I create a weight with shape C_out,C,K_size, then I could repeat that weight B times to get a B*C_out,C,K_size weight and apply those weights to the input with functional conv1d.</p>
<p>The issue is that I would need to fold the batch dim into the channel dim, but I'm not clear on how to do that. Specifically, I'm uncertain about how to order the folded channels: if I have them arranged as either (C × B) or (B × C) (where B is the number of batch items and C is the number of channels per group) and set groups=B, which ordering ensures that the channels remain properly grouped without intermixing between batches?</p>
<p>Additionally, I've had difficulty finding the relevant parts of the PyTorch source code that handle this ordering, and the docs don't seem to specify. Any insights on the correct weight arrangement or pointers to the relevant source code would be greatly appreciated.</p>
<p>Below is just some dummy example code</p>
<pre><code>import torch
import torch.nn.functional as F
# Define input tensor with shape (B, C, T)
B, C, T, C_out, K_size = 2, 3, 5, 4, 3 # Batch, Channels, Time, Output Channels, Kernel Size
x = torch.randn(B, C, T) # (B, C, T)
# Define a filter with shape (C_out, C, K_size) for a single batch item
base_weight = torch.randn(C_out, C, K_size) # (C_out, C, K_size)
# Repeat the filter across the batch to create a (B, C_out, C, K_size) weight
weight = base_weight.repeat(B, 1, 1, 1) # (B, C_out, C, K_size)
# Reshape the weight for grouped convolution (B*C_out, C, K_size)
weight = weight.view(B * C_out, C, K_size)
# Reshape input by folding batch into channels (1, B*C, T)
x = x.view(1, B * C, T)
# OR Reshape input by folding batch into channels (1, C*B, T)
x = x.permute(1,0,2) #B,C,T - >C,B,T
x = x.view(1, C * B, T)
# Apply grouped convolution with groups=B
y = F.conv1d(x, weight, groups=B)
</code></pre>
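<p>A small self-contained check (not from the original code) suggesting the batch-major ordering is the right one: with <code>groups=B</code>, group <code>i</code> of the weight (rows <code>i*C_out:(i+1)*C_out</code>) is convolved with input channels <code>i*C:(i+1)*C</code>, so folding <code>(B, C, T)</code> directly into <code>(1, B*C, T)</code>, without the permute, should match a per-item convolution.</p>
<pre class="lang-py prettyprint-override"><code>import torch
import torch.nn.functional as F

B, C, T, C_out, K = 2, 3, 5, 4, 3
x = torch.randn(B, C, T)
w = torch.randn(C_out, C, K)

# grouped convolution with the batch folded into channels, batch-major
grouped = F.conv1d(x.reshape(1, B * C, T), w.repeat(B, 1, 1), groups=B)
grouped = grouped.reshape(B, C_out, -1)

# reference: apply the same filter to each batch item separately
per_item = torch.stack([F.conv1d(x[i:i + 1], w)[0] for i in range(B)])

print(torch.allclose(grouped, per_item))  # expected: True
</code></pre>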
|
<python><pytorch>
|
2025-02-24 19:21:32
| 0
| 605
|
Sami Wood
|
79,464,593
| 2,893,712
|
Python Requests Disable zstd Encoding
|
<p>My Synology DS418play recently updated to the <a href="https://www.synology.com/en-nz/releaseNote/DSM?model=DS418play#ver_69057-7" rel="nofollow noreferrer">latest version of DSM7</a> that is available. I noticed that a lot of the python scripts that I had have started returning weirdly encoded data. Here is an example of the code:</p>
<pre><code>requests.get("https://www.23andmedatasettlement.com/").content
</code></pre>
<p>returns</p>
<blockquote>
<p>b'(\xb5/\xfd\x04X\x1c&\x00\xb6\xbd\xba50\x8b&\x0f\xc0\xc0@\xc7\xb000P\x15\x7fkd\xe1\x8eRJ\x1d\xa4MC\x8bw;\xacv/Ln\x804E\xe7i\xf2\xff\x00U\x11Y\xd9n\x98C\xbe\xcc\xa0\x8ce\x15\xb1\x00\xab\x00\xa5\x00\xd5\xbf\xda\xd8<em>Kl\xa7\x8ds(\x8aK\xb06|\x97\x9a{Tk\x154T\xa7d+\xed?\x15<\xa7?\xdfy\x12z\xe4\x9c\xb5\x1e\xae\xbb\xfb\xad\xf5p\x0f\x82\x05\xc6#\x12\x99\x98\xe8~kA\xd8\x98\xb2\xfa\x83\x87\xeb\xa7\xa8\xf4\x91\xa6E"\x11\x08%W</em>iZI\xf8T\x94\x9c!\x8dM\xa5\x8f\xdc\x83 \xd1\x16\x18\xbd1\x1f\xac\xf5p\xceS\xf2%\xf3l-m\x10T\xfa\xa8%\xb84\x08[\xf60\xb1i\x9aZ\x93\xdc\xffH\xb5:`\xd1\x1a\x85\xd5\xce\x9f\xb9B|i\xc8\xc3 ......'</p>
</blockquote>
<p>and it looks like this is because the default request header is <code>'Accept-Encoding': 'gzip, deflate, br, zstd'</code>.</p>
<p>Running <code> requests.get("https://www.23andmedatasettlement.com/", headers={'Accept-Encoding': 'deflate'}).content</code> returns the proper data. I am trying to avoid changing each of my python requests to explicitly set <code>Accept-Encoding</code>. Is there a way to prevent requests from using zstd compression? The output breaks a lot of my scripts.</p>
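<p>A hedged workaround sketch: keep a shared <code>Session</code> whose default <code>Accept-Encoding</code> omits zstd, so individual calls don't each need an explicit header.</p>
<pre class="lang-py prettyprint-override"><code>import requests

session = requests.Session()
session.headers["Accept-Encoding"] = "gzip, deflate"  # drop br/zstd from the defaults

html = session.get("https://www.23andmedatasettlement.com/").content
</code></pre>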
|
<python><python-requests><zstd>
|
2025-02-24 19:15:10
| 2
| 8,806
|
Bijan
|
79,464,425
| 9,110,646
|
How to full join / merge two frames with polars while updating left with right values?
|
<p>So I have two CSVs which I load as polars frames:</p>
<p><strong>left:</strong></p>
<pre class="lang-py prettyprint-override"><code>left_csv = b"""
track_name,type,yield,group
8CEB45v1,corn,0.146957,A
A188v2,corn,0.86308,A
B73v6,corn,0.326076,A
CI6621v1,sweetcorn,0.0357792,A
CML103v1,sweetcorn,0.510464,A
""".strip()
</code></pre>
<p><strong>right:</strong></p>
<pre class="lang-py prettyprint-override"><code>right_csv = b"""
track_name,type,yield,group
8CEB45v1,corn,0.999,A
B1234,pepper,1,B
B1235,pepper,2,B
""".strip()
</code></pre>
<p><strong>my code so far:</strong></p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
left = pl.read_csv(left_csv)
right = pl.read_csv(right_csv)
matching_columns = list(set(left.columns) & set(right.columns))  # I do this since I sometimes join frames that do not have a 100% column match; in that case I simply want to add the new columns to the full frame.
full = left.join(
right,
on=matching_columns,
how="full",
coalesce=True,
maintain_order="left",
)
full
</code></pre>
<p><strong>my result:</strong></p>
<pre><code>shape: (8, 4)
┌────────────┬───────────┬───────────┬───────┐
│ track_name ┆ type ┆ yield ┆ group │
│ --- ┆ --- ┆ --- ┆ --- │
│ str ┆ str ┆ f64 ┆ str │
╞════════════╪═══════════╪═══════════╪═══════╡
│ 8CEB45v1 ┆ corn ┆ 0.146957 ┆ A │
│ A188v2 ┆ corn ┆ 0.86308 ┆ A │
│ B73v6 ┆ corn ┆ 0.326076 ┆ A │
│ CI6621v1 ┆ sweetcorn ┆ 0.0357792 ┆ A │
│ CML103v1 ┆ sweetcorn ┆ 0.510464 ┆ A │
│ B1234 ┆ pepper ┆ 1.0 ┆ B │
│ B1235 ┆ pepper ┆ 2.0 ┆ B │
│ 8CEB45v1 ┆ corn ┆ 0.999 ┆ A │
└────────────┴───────────┴───────────┴───────┘
</code></pre>
<p><strong>my desired output: (<code>yield</code> of <code>8CEB45v1</code> from <code>right</code> updates value of <code>left</code>)</strong></p>
<pre><code>shape: (7, 4)
┌────────────┬───────────┬───────────┬───────┐
│ track_name ┆ type ┆ yield ┆ group │
│ --- ┆ --- ┆ --- ┆ --- │
│ str ┆ str ┆ f64 ┆ str │
╞════════════╪═══════════╪═══════════╪═══════╡
│ 8CEB45v1 ┆ corn ┆ 0.999 ┆ A │
│ A188v2 ┆ corn ┆ 0.86308 ┆ A │
│ B73v6 ┆ corn ┆ 0.326076 ┆ A │
│ CI6621v1 ┆ sweetcorn ┆ 0.0357792 ┆ A │
│ CML103v1 ┆ sweetcorn ┆ 0.510464 ┆ A │
│ B1234 ┆ pepper ┆ 1.0 ┆ B │
│ B1235 ┆ pepper ┆ 2.0 ┆ B │
└────────────┴───────────┴───────────┴───────┘
</code></pre>
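<p>A hedged sketch (requires a reasonably recent Polars): <code>DataFrame.update</code> with <code>how="full"</code> keeps the left rows in order, overwrites matching values with those from the right frame, and appends the unmatched right rows.</p>
<pre class="lang-py prettyprint-override"><code>full = left.update(right, on="track_name", how="full")
print(full)
</code></pre>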
|
<python><dataframe><python-polars>
|
2025-02-24 17:51:39
| 1
| 423
|
Pm740
|
79,464,391
| 18,950
|
Django/Celery SQLite database locked on concurrent access
|
<p>I have a local Django 5.1/Celery 5.4 project that is using SQLite. I am the only user.</p>
<p>Certain model saves trigger a Celery task that queries (<code>SELECT</code>) for the updated record (using the Django ORM), then runs an API call to update a remote record based on the local data, and then runs another <code>UPDATE</code> locally. The task wraps all this inside of <code>with transaction.atomic():</code>.</p>
<p>(The Celery worker is configured to run tasks in serial.)</p>
<p>While this task is running, any attempts to write to the database result in a "database is locked" OperationalError.</p>
<p>I have configured Django/SQLite with the latest "production-ready" settings:</p>
<pre class="lang-py prettyprint-override"><code>DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': DB_DIR / 'db.sqlite3',
'OPTIONS': {
'init_command': """
PRAGMA foreign_keys=ON;
PRAGMA journal_mode=WAL;
PRAGMA synchronous=NORMAL;
PRAGMA busy_timeout = 5000;
PRAGMA temp_store = MEMORY;
PRAGMA mmap_size=134217728;
PRAGMA journal_size_limit=67108864;
PRAGMA cache_size=2000;
""",
'transaction_mode': 'IMMEDIATE',
'timeout': 20,
},
},
}
</code></pre>
<p>I was under the impression that with these settings, concurrent access was possible. "SQLite in Production" is the latest hotness, and these settings, especially the new-to-Django 5.1 <code>'transaction_mode': 'IMMEDIATE'</code> in OPTIONS, would allow writes to queue. What am I missing?</p>
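<p>A hedged sketch of one mitigation (<code>MyModel</code> and <code>call_remote_api</code> are placeholders): with <code>IMMEDIATE</code> transactions the write lock is held for the whole <code>atomic()</code> block, so keeping the slow API call outside the transaction shrinks the lock window to just the final UPDATE.</p>
<pre class="lang-py prettyprint-override"><code>from django.db import transaction
from django.utils import timezone

# MyModel and call_remote_api stand in for the real model and API client
def sync_record(record_id):
    record = MyModel.objects.get(pk=record_id)   # plain read, no write lock held
    result = call_remote_api(record)             # slow network call, outside atomic()
    with transaction.atomic():                   # short-lived write transaction
        MyModel.objects.filter(pk=record_id).update(synced_at=timezone.now())
</code></pre>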
|
<python><django><sqlite>
|
2025-02-24 17:32:46
| 2
| 12,651
|
David Eyk
|
79,464,353
| 2,006,674
|
How to prevent duplicate code for exception handling in Python
|
<p>What is a Pythonic way to not have duplicated code for exception handling?</p>
<p>In a real code base it is not <code>2 / 0</code> but reading from or writing to an async socket, and I have more exceptions to catch: ConnectionResetError, BrokenPipeError, and TimeoutError.</p>
<p>I do not want to duplicate the exception-handling code, so I would appreciate an example. What is a Pythonic way to do it?</p>
<p>This is the current solution that I have:</p>
<pre><code>def exception_handling_1(e):
    print(f'{type(e)=} {e=}')
    if isinstance(e, ZeroDivisionError):
        print("Caught a ZeroDivisionError")
    else:
        raise e

try:
    2 / 0
except Exception as e:
    exception_handling_1(e)
</code></pre>
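<p>A hedged sketch of one way to write the handling only once, as a reusable context manager wrapped around each socket read/write site:</p>
<pre class="lang-py prettyprint-override"><code>from contextlib import contextmanager

@contextmanager
def handle_socket_errors():
    try:
        yield
    except (ConnectionResetError, BrokenPipeError, TimeoutError) as e:
        print(f'{type(e)=} {e=}')   # the shared handling lives in one place

with handle_socket_errors():
    raise ConnectionResetError("stand-in for a failing socket read")
</code></pre>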
|
<python><exception><dry>
|
2025-02-24 17:16:24
| 1
| 7,392
|
WebOrCode
|
79,464,314
| 5,269,892
|
Pandas astype becomes in-place operation for data loaded from pickle files
|
<p>Pandas <code>astype()</code> appears to unexpectedly switch to performing in-place operations after loading data from a pickle file. Concretely, for <code>astype(str)</code>, the data type of the input dataframe values is modified. What is causing this behavior?</p>
<p>Pandas version: 2.0.3</p>
<p><strong>Minimal example:</strong></p>
<pre><code>import pandas as pd
import numpy as np
# create a test dataframe
df = pd.DataFrame({'col1': ['hi']*10 + [False]*20 + [np.nan]*30})
# print the data types of the cells, before and after casting to string
print(pd.unique([type(elem) for elem in df['col1'].values]))
_ = df.astype(str)
print(pd.unique([type(elem) for elem in df['col1'].values]))
# store the dataframe as pkl and directly load it again
outpath = 'C:/Dokumente/my_test_df.pkl'
df.to_pickle(outpath)
df2 = pd.read_pickle(outpath)
# print the data types of the cells, before and after casting to string
print(pd.unique([type(elem) for elem in df2['col1'].values]))
_ = df2.astype(str)
print(pd.unique([type(elem) for elem in df2['col1'].values]))
</code></pre>
<p><strong>Output:</strong></p>
<p><a href="https://i.sstatic.net/TMZX6icJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TMZX6icJ.png" alt="enter image description here" /></a></p>
|
<python><pandas><type-conversion><pickle>
|
2025-02-24 16:59:46
| 1
| 1,314
|
silence_of_the_lambdas
|
79,464,311
| 7,437,143
|
Urwid example with input validation per keypress on a datetime entry?
|
<h2>Context</h2>
<p>Suppose one would like two, modular, input boxes in which one can enter a date in format <code>yyyy-mm-dd</code> and optionally the time in format: <code>HH-MM</code>, like:</p>
<pre class="lang-bash prettyprint-override"><code>Please enter the date in format YYYY-MM-DD, (and optionally the time in format HH-MM )`
<highlighed input box when selected>
</code></pre>
<p>That:</p>
<ol>
<li>Directly shows a warning/red text (perhaps at the top right of the screen), that says: <code>invalid character entered: "_"</code> if the user tries to type an invalid character, like <code>a</code>.</li>
<li>Has a (customizable) shortcut for help, like <code>ctrl+h</code>, which opens a help screen a bit like <a href="https://stackoverflow.com/q/58319414/7437143">this</a> example.</li>
<li>allows moving to the next box using either the <code>enter</code> button</li>
<li>Allows selecting the auto-completed suggestion using the <code>tab</code> button</li>
<li>Allows selecting the digits using up and down arrows, and going to the next/previous digit using the left, right arrow, whilst only showing valid options, e.g. if february as month is selected, it only allows selecting 1 to 28/29 (instead of showing 1 to 30/31).</li>
<li>Highlights the text box as red if the user selected an invalid date. E.g. selecting 2024-02-29 and then going down on the date one year, e.g. 2023-02-29. (Because 2023 is not a leap year).</li>
</ol>
<h2>Question</h2>
<p>How could one build that in urwid?</p>
<h2>Attempt</h2>
<p>I am experiencing some difficulties in implementing the features 1, 2, 3, 4 and 6 in the MWE below:</p>
<pre class="lang-py prettyprint-override"><code>import urwid
import datetime
import calendar
class DateTimeEdit(urwid.Edit):
# ... (the DateTimeEdit class from the previous response) ...
def __init__(self, caption, date_only=False, **kwargs):
super().__init__(caption, **kwargs)
self.date_only = date_only
self.error_text = urwid.Text("")
self.help_text = urwid.Text("")
self.date_parts = [4, 2, 2] # year, month, day
self.time_parts = [2, 2] # hour, minute
self.current_part = 0
self.date_values = [None, None, None]
self.time_values = [None, None]
self.date_separator = "-"
self.time_separator = ":"
def valid_char(self, ch):
if ch.isdigit() or ch == self.date_separator or (not self.date_only and ch == self.time_separator):
return True
else:
self.error_text.set_text(f"invalid character entered: '{ch}'")
return False
def keypress(self, size, key):
if key == 'ctrl h':
self.show_help()
return None
if key == 'enter':
return 'enter' # Signal to move to the next box
if key == 'tab':
# Implement suggestion selection here
return None
if key == 'left':
self.move_to_previous_part()
return None
if key == 'right':
self.move_to_next_part()
return None
if key == 'up' or key == 'down':
self.adjust_value(key)
return None
result = super().keypress(size, key)
if result:
self.error_text.set_text("") # Clear error on valid input
self.update_values()
return result
def update_values(self):
text = self.get_edit_text()
if self.date_only:
parts = text.split(self.date_separator)
for i, part in enumerate(parts):
if part:
try:
self.date_values[i] = int(part)
except ValueError:
self.date_values[i] = None
else:
date_time_parts = text.split(" ")
if len(date_time_parts) > 0:
date_parts = date_time_parts[0].split(self.date_separator)
for i, part in enumerate(date_parts):
if part:
try:
self.date_values[i] = int(part)
except ValueError:
self.date_values[i] = None
if len(date_time_parts) > 1:
time_parts = date_time_parts[1].split(self.time_separator)
for i, part in enumerate(time_parts):
if part:
try:
self.time_values[i] = int(part)
except ValueError:
self.time_values[i] = None
def move_to_next_part(self):
if self.date_only:
if self.current_part < len(self.date_parts) - 1:
self.current_part += 1
else:
if self.current_part < len(self.date_parts) + len(self.time_parts) - 1:
self.current_part += 1
self.update_cursor()
def move_to_previous_part(self):
if self.current_part > 0:
self.current_part -= 1
self.update_cursor()
def update_cursor(self):
# Calculate cursor position based on current_part
cursor_pos = 0
if self.date_only:
for i in range(self.current_part):
cursor_pos += self.date_parts[i] + 1
else:
if self.current_part < len(self.date_parts):
for i in range(self.current_part):
cursor_pos += self.date_parts[i] + 1
else:
cursor_pos = sum(self.date_parts) + 1
for i in range(self.current_part - len(self.date_parts)):
cursor_pos += self.time_parts[i] + 1
self.set_edit_pos(cursor_pos)
def adjust_value(self, direction):
if self.date_only:
if self.current_part == 0:
self.adjust_year(direction)
elif self.current_part == 1:
self.adjust_month(direction)
elif self.current_part == 2:
self.adjust_day(direction)
else:
if self.current_part == 0:
self.adjust_year(direction)
elif self.current_part == 1:
self.adjust_month(direction)
elif self.current_part == 2:
self.adjust_day(direction)
elif self.current_part == 3:
self.adjust_hour(direction)
elif self.current_part == 4:
self.adjust_minute(direction)
self.update_text()
def adjust_year(self, direction):
if self.date_values[0] is None:
self.date_values[0] = 2024 # Default year
if direction == 'up':
self.date_values[0] += 1
elif direction == 'down':
self.date_values[0] -= 1
def adjust_month(self, direction):
if self.date_values[1] is None:
self.date_values[1] = 1
if direction == 'up':
self.date_values[1] = (self.date_values[1] % 12) + 1
elif direction == 'down':
self.date_values[1] = (self.date_values[1] - 2) % 12 + 1
def adjust_day(self, direction):
if self.date_values[2] is None:
self.date_values[2] = 1
max_days = self.get_max_days()
if direction == 'up':
self.date_values[2] = (self.date_values[2] % max_days) + 1
elif direction == 'down':
self.date_values[2] = (self.date_values[2] - 2) % max_days + 1
def adjust_hour(self, direction):
if self.time_values[0] is None:
self.time_values[0] = 0
if direction == 'up':
self.time_values[0] = (self.time_values[0] + 1) % 24
elif direction == 'down':
self.time_values[0] = (self.time_values[0] - 1) % 24
def adjust_minute(self, direction):
if self.time_values[1] is None:
self.time_values[1] = 0
if direction == 'up':
self.time_values[1] = (self.time_values[1] + 1) % 60
elif direction == 'down':
self.time_values[1] = (self.time_values[1] - 1) % 60
def get_max_days(self):
if self.date_values[0] is None or self.date_values[1] is None:
return 31
try:
_, max_days = calendar.monthrange(self.date_values[0], self.date_values[1])
return max_days
except ValueError:
return 31
def update_text(self):
date_str = self.date_separator.join(map(lambda x: str(x).zfill(2) if x is not None else "00", self.date_values))
if self.date_only:
self.set_edit_text(date_str)
else:
time_str = self.time_separator.join(map(lambda x: str(x).zfill(2) if x is not None else "00", self.time_values))
self.set_edit_text(date_str + " " + time_str)
# (DateTimeEdit class code from previous responses)
def main():
date_edit = DateTimeEdit("Date (YYYY-MM-DD): ", date_only=True)
datetime_edit = DateTimeEdit("Date & Time (YYYY-MM-DD HH:MM): ")
error_display = urwid.Text("")
date_edit.error_text = error_display
datetime_edit.error_text = error_display
pile = urwid.Pile([
urwid.Text("Enter Date and/or Time:"),
date_edit,
datetime_edit,
error_display,
])
fill = urwid.Filler(pile, 'top')
loop = urwid.MainLoop(fill)
loop.run()
if __name__ == "__main__":
main()
</code></pre>
<h2>Output</h2>
<p>Below is an example output that contains an invalid date that is not corrected (first 2024-02-29 is entered, and then the year is moved down by 1).</p>
<p><a href="https://i.sstatic.net/xYQOxliI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xYQOxliI.png" alt="enter image description here" /></a></p>
|
<python><tui><urwid>
|
2025-02-24 16:57:08
| 0
| 2,887
|
a.t.
|
79,464,199
| 7,091,135
|
How to implement batch resource using SimPy
|
<p>I spent some time looking for a solution to this kind of simple problem but still could not find it. The problem is to simulate a process where some events can be processed concurrently by a limited number of resources, while other events need to be aggregated first and then processed as a batch. Here is a simple code example of a ferry transporting people:</p>
<pre><code>import simpy
import matplotlib.pyplot as plt
class Person:
def __init__(self, name):
self.name = name
self.timeline = []
def log(self, event, time):
self.timeline.append((event, time))
class BatchResource(simpy.Resource):
def __init__(self, env, capacity, batch_size, process_time):
super().__init__(env, capacity)
self.batch_size = batch_size
self.process_time = process_time
self.batch_queue = []
def request(self):
req = super().request()
self.batch_queue.append(req)
# Start processing only if a full batch is ready
if len(self.batch_queue) >= self.batch_size:
self._env.process(self.process_batch())
return req
def process_batch(self):
batch = self.batch_queue[: self.batch_size] # Take the batch
self.batch_queue = self.batch_queue[self.batch_size :] # Remove from queue
print(f"[{self._env.now}] Ferry is full! Departing with {len(batch)} people.")
yield self._env.timeout(self.process_time) # Simulate crossing time
print(f"[{self._env.now}] Ferry has crossed the river.")
# Release all passengers in the batch
for req in batch:
if not req.triggered: # Ensure it hasn't already been granted
req.succeed()
def person(env, person, ferry, casses_before, casses_after):
# Process before crossing the river
with casses_before.request() as request:
yield request
person.log("start_before", env.now)
yield env.timeout(1)
person.log("end_before", env.now)
# Wait for the ferry (only departs when full)
with ferry.request() as request:
yield request
person.log("start_wait", env.now)
yield env.timeout(1)
person.log("end_wait", env.now)
# Process after crossing the river
with casses_after.request() as request:
yield request
person.log("start_after", env.now)
yield env.timeout(1)
person.log("end_after", env.now)
def setup(env, num_people, capacity, people):
casses_before = simpy.Resource(env, capacity=3)
casses_after = simpy.Resource(env, capacity=3)
ferry = BatchResource(env, capacity=capacity, batch_size=capacity, process_time=5)
for i in range(num_people):
person_instance = Person(f"Person {i}")
people.append(person_instance)
env.process(person(env, person_instance, ferry, casses_before, casses_after))
yield env.timeout(0.5) # New person arrives every 0.5 time units
# Setup and start the simulation
env = simpy.Environment()
people = []
env.process(setup(env, num_people=30, capacity=10, people=people))
env.run(until=50)
# Plot Gantt-like chart
fig, ax = plt.subplots(figsize=(10, 8))
colors = {"before": "tab:blue", "wait": "tab:orange", "after": "tab:green"}
for i, person in enumerate(people):
for event, time in person.timeline:
if "start" in event:
start_time = time
elif "end" in event:
end_time = time
stage = event.split("_")[1]
ax.broken_barh(
[(start_time, end_time - start_time)],
(i * 10, 9),
facecolors=(colors[stage]),
)
ax.set_yticks([i * 10 + 5 for i in range(len(people))])
ax.set_yticklabels([person.name for person in people])
ax.set_xlabel("Time")
ax.set_ylabel("Person")
ax.grid(True)
plt.show()
</code></pre>
<p>It results in the following process gantt:
<a href="https://i.sstatic.net/JpVSax92.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JpVSax92.png" alt="enter image description here" /></a></p>
<p>But I am looking for the case where the ferry (orange blocks) waits until it is completely full of people and only then does its work.</p>
<p>Can someone please suggest the way to implement it?</p>
<p>Thanks!</p>
|
<python><simpy>
|
2025-02-24 16:08:05
| 1
| 359
|
Alexander Lyapin
|
79,464,131
| 266,254
|
bind parameters by array index
|
<p>I'm trying to migrate from the <code>cx_Oracle 8.1.0</code> library to <code>oracledb 2.5.1</code>. Previously, the following query was working:</p>
<pre class="lang-sql prettyprint-override"><code>update command_center set
E_ID = :1,
E_ASPECTS = :2,
E_CREATED_TIMESTAMP = :3
WHERE e_id = :1
</code></pre>
<pre class="lang-py prettyprint-override"><code>def bulk_insert_or_update_data_to_targets(self, sql, target_data, id_index=0):
for target_connections, data in target_data:
for con in target_connections:
cursor = con.cursor()
cursor.executemany(sql, data, batcherrors=True)
if cursor.getbatcherrors():
self.logger.debug("data: " + str(data))
cursor.close()
</code></pre>
<p>but currently I have following exception:</p>
<pre><code>oracledb.exceptions.DatabaseError: DPY-4009: 4 positional bind values are required but 3 were provided
</code></pre>
<p>Is there any chance this could work the same way it worked in <code>cx_Oracle</code> before ... which, I guess, retrieved the bind value by array index?</p>
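<p>A hedged sketch of the usual workaround: switch to named bind variables, which can be referenced more than once in the statement, and pass dictionaries to <code>executemany</code>.</p>
<pre class="lang-py prettyprint-override"><code>sql = """
    update command_center set
        E_ID = :e_id,
        E_ASPECTS = :e_aspects,
        E_CREATED_TIMESTAMP = :e_created
    where e_id = :e_id
"""
rows = [{"e_id": r[0], "e_aspects": r[1], "e_created": r[2]} for r in data]
cursor.executemany(sql, rows, batcherrors=True)
</code></pre>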
|
<python><cx-oracle><python-oracledb>
|
2025-02-24 15:47:24
| 2
| 5,056
|
bilak
|
79,463,940
| 11,092,636
|
Visual Studio Code not letting me choose Virtual environment Python
|
<p>This is a bug I cannot wrap my head around. I should first mention that everything works perfectly on PyCharm notebook Professional Edition (but I prefer VSCode's way of using notebooks).</p>
<p>When I click on <code>Select Kernel</code>, I only see my two base Python environments. Both work. I would however like to use one of my virtual environments. So I used the <code>>Python: Select Interpreter</code> option:
<a href="https://i.sstatic.net/BKT3rIzu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BKT3rIzu.png" alt="enter image description here" /></a></p>
<p>Here, it's easy for me to find the <code>path/to/python.exe</code> of my virtual environment (Python 3.12.5) by clicking on <code>Enter interpreter path</code>:
<a href="https://i.sstatic.net/8x0gNFTK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8x0gNFTK.png" alt="enter image description here" /></a></p>
<p>However, when I click and locate <code>path/to/python.exe</code>, nothing happens.
<a href="https://i.sstatic.net/GsjWLXnQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GsjWLXnQ.png" alt="enter image description here" /></a></p>
<p>There is still the <code>Select Kernel</code> button saying that I have not selected any Python environment for my notebook.</p>
<p>Things I did:</p>
<ul>
<li>I restarted my computer.</li>
<li>I uninstalled VSCode, and reinstalled VSCode.</li>
<li>I looked for Windows Updates.</li>
<li>I looked for VSCode Updates. None were found. I'm on 1.97.2.</li>
<li>I installed <code>ipykernel</code> to my virtual environment but it was already installed.</li>
<li>I made sure I had the <code>Python</code> extension enabled on VSCode.</li>
<li>I made sure I had the <code>jupyter</code> extension enabled on VSCode.</li>
<li>I made sure the base environments worked as expected when choosing them in <code>Select Kernel</code>. They did.</li>
<li>I made sure everything worked with my virtual environments in PyCharm Professional. They did.</li>
<li>I used <code>python3 -m ipykernel install --user --name=projectname</code>.</li>
</ul>
|
<python><visual-studio-code>
|
2025-02-24 14:52:45
| 5
| 720
|
FluidMechanics Potential Flows
|
79,463,478
| 589,352
|
Testing a flask-WTForms app that uses csrf token (pytest)
|
<p>I have a flask app based on the tutorial here <a href="https://flask.palletsprojects.com/en/stable/tutorial/" rel="nofollow noreferrer">https://flask.palletsprojects.com/en/stable/tutorial/</a>. The tutorial comes with a test suite but having adapted the app to use WTForms the tests that post form data stopped working. I can see that's because none of the forms validate as they don't include a CSRF token. How can I add the token to the post data being sent? I came across this rather old gist showing how to do it but it appears to be out of date for the version of Werkzeug I'm using, as it refers to <code>self.cookie_jar</code> which has been removed -- <a href="https://gist.github.com/singingwolfboy/2fca1de64950d5dfed72" rel="nofollow noreferrer">https://gist.github.com/singingwolfboy/2fca1de64950d5dfed72</a>.</p>
<p>I'm using Flask==3.1.0
flask_wtf==1.2.2
pytest==8.3.4
selenium==4.29.0
Werkzeug==3.1.3
WTForms==3.2.1</p>
<p>tests/conftest.py</p>
<pre><code>import os
import tempfile
import pytest
from flaskr import create_app
from flaskr.db import get_db, init_db
with open(os.path.join(os.path.dirname(__file__), 'data.sql'), 'rb') as f:
_data_sql = f.read().decode('utf8')
@pytest.fixture
def app():
db_fd, db_path = tempfile.mkstemp()
app = create_app({
'TESTING': True,
'DATABASE': db_path,
})
with app.app_context():
init_db()
get_db().executescript(_data_sql)
yield app
os.close(db_fd)
os.unlink(db_path)
@pytest.fixture
def client(app):
return app.test_client()
@pytest.fixture
def runner(app):
return app.test_cli_runner()
</code></pre>
<p>tests/test_auth.py</p>
<pre><code>import pytest
from flask import g, session
from flaskr.db import get_db
def test_register(client, app):
assert client.get('/auth/register').status_code == 200
response = client.post_csrf(
'/auth/register', data={'username': 'a', 'password': 'a'}
, follow_redirects=True)
print(f'{response.status=}')
print(f'{response.response=}')
for x in response.response:
print(x)
assert response.headers["Location"] == "/auth/login"
with app.app_context():
assert get_db().execute(
"SELECT * FROM user WHERE username = 'a'",
).fetchone() is not None
</code></pre>
<p>Output from the test:</p>
<pre><code>$ pytest tests/test_auth.py
=========================================== FAILURES
============================================
_________________________________________ test_register
_________________________________________
client = <FlaskClient <Flask 'flaskr'>>, app = <Flask 'flaskr'>
def test_register(client, app):
assert client.get('/auth/register').status_code == 200
response = client.post(
'/auth/register', data={'username': 'a', 'password': 'a'}
, follow_redirects=True)
print(f'{response.status=}')
print(f'{response.response=}')
for x in response.response:
print(x)
> assert response.headers["Location"] == "/auth/login"
tests/test_auth.py:15:
_ _ _ _ _ _ _ _ _ _ _ _ _ _
venv/lib/python3.11/site-
packages/werkzeug/datastructures/headers.py:83: in __getitem__
return self._get_key(key)
_ _ _ _ _ _ _ _ _ _ _ _ _
self = Headers([('Content-Type', 'text/html; charset=utf-8'),
('Content-Length', '960'), ('Vary', 'Cookie')])
key = 'Location'
def _get_key(self, key: str) -> str:
ikey = key.lower()
for k, v in self._list:
if k.lower() == ikey:
return v
> raise BadRequestKeyError(key)
E werkzeug.exceptions.BadRequestKeyError: 400 Bad Request: The browser (or proxy) sent a request that this server could not understand.
venv/lib/python3.11/site-packages/werkzeug/datastructures/headers.py:97: BadRequestKeyError
</code></pre>
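<p>A hedged sketch of the simplest route for unit tests: Flask-WTF reads <code>WTF_CSRF_ENABLED</code> from the app config, so turning it off in the test app factory lets the existing posts validate without a token.</p>
<pre class="lang-py prettyprint-override"><code>app = create_app({
    'TESTING': True,
    'DATABASE': db_path,
    'WTF_CSRF_ENABLED': False,   # Flask-WTF skips CSRF validation in tests
})
</code></pre>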
|
<python><flask><pytest><csrf><flask-wtforms>
|
2025-02-24 12:09:15
| 0
| 1,937
|
jaybee
|
79,463,465
| 1,450,343
|
use pydantic settings cli to parse partial json from file
|
<p>I want to use pydantic-settings for the following scenario: I have two ways to pass parameters to my application, either by providing a JSON file or by command-line arguments, e.g.</p>
<pre><code>myapp --config myconfig.json --logger DEBUG
</code></pre>
<p>The command line arguments should override the definitions in the json file.</p>
<p>My json file is nested.</p>
<p>Is it possible?</p>
<p>I thought of pre-parsing the args, reading the file with pydantic, and then making it the "default" for the CLI, but I couldn't find a way to do it.</p>
|
<python><command-line-interface><pydantic><pydantic-settings>
|
2025-02-24 12:02:53
| 0
| 816
|
ModdyFire
|
79,463,344
| 2,883,209
|
Dynamically adding columns to panda dataframes
|
<p>Trying to use pandas to create a data frame that contains the total time spent on each task over a period of time, then add a column for each week in the data, dynamically identifying which weeks to add.</p>
<p>It feels like the kind of task that pandas should excel at, but (more likely due to my lack of expertise) I end up doing it in a very ugly way, so just asking if there is a better way.</p>
<p>So let's start with the setup: I generated a data frame that has four weeks, five tasks, and a random number of hours added per task.</p>
<pre><code>import pandas as pd
import numpy as np
ramdomizer=np.random.default_rng(2025)
hours_array = ramdomizer.integers(2, 8, 15)
df = pd.DataFrame({
"Week": [ 1, 1, 1, 1, 1, 2, 2, 2, 3, 3, 3, 3, 4, 4, 4],
"Task": [ "Task 1", "Task 2", "Task 3", "Task 4", "Task 5", "Task 1", "Task 2", "Task 4", "Task 5", "Task 3", "Task 1", "Task 3", "Task 5", "Task 1", "Task 4"],
"Hours": hours_array
})
df
</code></pre>
<p>This gives me a nice dataframe that looks like:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th style="text-align: left;"></th>
<th style="text-align: left;">Week</th>
<th style="text-align: left;">Task</th>
<th style="text-align: left;">Hours</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">0</td>
<td style="text-align: left;">1</td>
<td style="text-align: left;">Task 1</td>
<td style="text-align: left;">4</td>
</tr>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: left;">1</td>
<td style="text-align: left;">Task 2</td>
<td style="text-align: left;">7</td>
</tr>
<tr>
<td style="text-align: left;">2</td>
<td style="text-align: left;">1</td>
<td style="text-align: left;">Task 3</td>
<td style="text-align: left;">7</td>
</tr>
<tr>
<td style="text-align: left;">3</td>
<td style="text-align: left;">1</td>
<td style="text-align: left;">Task 4</td>
<td style="text-align: left;">4</td>
</tr>
<tr>
<td style="text-align: left;">4</td>
<td style="text-align: left;">1</td>
<td style="text-align: left;">Task 5</td>
<td style="text-align: left;">7</td>
</tr>
<tr>
<td style="text-align: left;">5</td>
<td style="text-align: left;">2</td>
<td style="text-align: left;">Task 1</td>
<td style="text-align: left;">6</td>
</tr>
<tr>
<td style="text-align: left;">6</td>
<td style="text-align: left;">2</td>
<td style="text-align: left;">Task 2</td>
<td style="text-align: left;">5</td>
</tr>
<tr>
<td style="text-align: left;">7</td>
<td style="text-align: left;">2</td>
<td style="text-align: left;">Task 4</td>
<td style="text-align: left;">7</td>
</tr>
<tr>
<td style="text-align: left;">8</td>
<td style="text-align: left;">3</td>
<td style="text-align: left;">Task 5</td>
<td style="text-align: left;">6</td>
</tr>
<tr>
<td style="text-align: left;">9</td>
<td style="text-align: left;">3</td>
<td style="text-align: left;">Task 3</td>
<td style="text-align: left;">7</td>
</tr>
<tr>
<td style="text-align: left;">10</td>
<td style="text-align: left;">3</td>
<td style="text-align: left;">Task 1</td>
<td style="text-align: left;">4</td>
</tr>
<tr>
<td style="text-align: left;">11</td>
<td style="text-align: left;">3</td>
<td style="text-align: left;">Task 3</td>
<td style="text-align: left;">2</td>
</tr>
<tr>
<td style="text-align: left;">12</td>
<td style="text-align: left;">4</td>
<td style="text-align: left;">Task 5</td>
<td style="text-align: left;">4</td>
</tr>
<tr>
<td style="text-align: left;">13</td>
<td style="text-align: left;">4</td>
<td style="text-align: left;">Task 1</td>
<td style="text-align: left;">3</td>
</tr>
<tr>
<td style="text-align: left;">14</td>
<td style="text-align: left;">4</td>
<td style="text-align: left;">Task 4</td>
<td style="text-align: left;">4</td>
</tr>
</tbody>
</table></div>
<p>And getting the total is easy enough</p>
<pre><code>summary_df = df.groupby("Task")["Hours"].sum()
summary_df
</code></pre>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th style="text-align: left;"><br/>Task</th>
<th style="text-align: left;">Hours<br/></th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">Task 1</td>
<td style="text-align: left;">17</td>
</tr>
<tr>
<td style="text-align: left;">Task 2</td>
<td style="text-align: left;">12</td>
</tr>
<tr>
<td style="text-align: left;">Task 3</td>
<td style="text-align: left;">16</td>
</tr>
<tr>
<td style="text-align: left;">Task 4</td>
<td style="text-align: left;">15</td>
</tr>
<tr>
<td style="text-align: left;">Task 5</td>
<td style="text-align: left;">17</td>
</tr>
</tbody>
</table></div>
<p>Still nice and straightforward, but now I want to add the per-week columns, and I felt the right way would be to do something like:</p>
<pre><code>for week, week_data in df.groupby("Week"):
print(f"Processing week {week}")
summary_df[f"Week {week}"] = (week_data.loc[
week_data['Task'].eq[summary_df['Task']],
['Hours']].sum())
</code></pre>
<p>But I just could not get this approach to work, so I ended up building a dict with all the data and making it into a data frame:</p>
<pre><code>week_columns={}
print(summary_df.index)
for week, week_data in df.groupby("Week"):
current_week = []
print("---------------------------------------------------")
print(f"Processing week {week}")
for task in summary_df.index:
mask = week_data['Task'].eq(task)
task_hours = week_data.loc[mask, ["Hours"]].sum().iloc[0]
print(f"Found {mask.sum()} entries for {task} in week {week} totaling {task_hours} ")
current_week.append(task_hours)
week_columns[f"Week {week}"] = current_week
print(week_columns)
new_df = pd.DataFrame(
index= summary_df.index,
data= week_columns
)
new_df
</code></pre>
<p>It is VERY verbose, but it does do the trick,</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th style="text-align: left;"><br/>Task</th>
<th style="text-align: left;">Week 1<br/></th>
<th style="text-align: left;">Week 2<br/></th>
<th style="text-align: left;">Week 3<br/></th>
<th style="text-align: left;">Week 4<br/></th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">Task 1</td>
<td style="text-align: left;">4</td>
<td style="text-align: left;">6</td>
<td style="text-align: left;">4</td>
<td style="text-align: left;">3</td>
</tr>
<tr>
<td style="text-align: left;">Task 2</td>
<td style="text-align: left;">7</td>
<td style="text-align: left;">5</td>
<td style="text-align: left;">0</td>
<td style="text-align: left;">0</td>
</tr>
<tr>
<td style="text-align: left;">Task 3</td>
<td style="text-align: left;">7</td>
<td style="text-align: left;">0</td>
<td style="text-align: left;">9</td>
<td style="text-align: left;">0</td>
</tr>
<tr>
<td style="text-align: left;">Task 4</td>
<td style="text-align: left;">4</td>
<td style="text-align: left;">7</td>
<td style="text-align: left;">0</td>
<td style="text-align: left;">4</td>
</tr>
<tr>
<td style="text-align: left;">Task 5</td>
<td style="text-align: left;">7</td>
<td style="text-align: left;">0</td>
<td style="text-align: left;">6</td>
<td style="text-align: left;">4</td>
</tr>
</tbody>
</table></div>
<p>And I can then concat the two dataframes,</p>
<pre><code>concat_df = pd.concat([summary_df, new_df], axis=1)
concat_df
</code></pre>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th style="text-align: left;"><br/>Task</th>
<th style="text-align: left;">Hours<br/></th>
<th style="text-align: left;">Week 1<br/></th>
<th style="text-align: left;">Week 2<br/></th>
<th style="text-align: left;">Week 3<br/></th>
<th style="text-align: left;">Week 4<br/></th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">Task 1</td>
<td style="text-align: left;">17</td>
<td style="text-align: left;">4</td>
<td style="text-align: left;">6</td>
<td style="text-align: left;">4</td>
<td style="text-align: left;">3</td>
</tr>
<tr>
<td style="text-align: left;">Task 2</td>
<td style="text-align: left;">12</td>
<td style="text-align: left;">7</td>
<td style="text-align: left;">5</td>
<td style="text-align: left;">0</td>
<td style="text-align: left;">0</td>
</tr>
<tr>
<td style="text-align: left;">Task 3</td>
<td style="text-align: left;">16</td>
<td style="text-align: left;">7</td>
<td style="text-align: left;">0</td>
<td style="text-align: left;">9</td>
<td style="text-align: left;">0</td>
</tr>
<tr>
<td style="text-align: left;">Task 4</td>
<td style="text-align: left;">15</td>
<td style="text-align: left;">4</td>
<td style="text-align: left;">7</td>
<td style="text-align: left;">0</td>
<td style="text-align: left;">4</td>
</tr>
<tr>
<td style="text-align: left;">Task 5</td>
<td style="text-align: left;">17</td>
<td style="text-align: left;">7</td>
<td style="text-align: left;">0</td>
<td style="text-align: left;">6</td>
<td style="text-align: left;">4</td>
</tr>
</tbody>
</table></div>
<p>but it seems convoluted. Is there a simpler, more "pandas mastery" way?
Any suggestions or suggested reading are welcome :)</p>
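<p>For what it's worth, my best guess at a more idiomatic route is <code>pivot_table</code>, but I haven't managed to convince myself it's right; a rough sketch of what I mean (same column names as above):</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd

# rough sketch of the pivot_table idea: one column per week, filled with 0 where a task has no hours
weekly = df.pivot_table(index="Task", columns="Week", values="Hours",
                        aggfunc="sum", fill_value=0)
weekly.columns = [f"Week {w}" for w in weekly.columns]

# put the overall total in front of the per-week columns
result = pd.concat([df.groupby("Task")["Hours"].sum(), weekly], axis=1)
</code></pre>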
|
<python><pandas>
|
2025-02-24 11:21:03
| 0
| 1,244
|
vrghost
|
79,463,169
| 2,509,085
|
Create nested lists based on split of characters
|
<p>I have a list made of strings, correctly cleaned (<code>split(',')</code> can be safely used) and correctly sorted by number. As a small example:</p>
<pre><code>l = ['C1', 'C1,C2', 'C2,C3', 'C3,C4', 'C4', 'C5', 'C5,C6', 'C6,C7', 'C7,C8', 'C8', 'C10', 'C10,C11', 'C11,C12', 'C12,C13', 'C13']
</code></pre>
<p>What I'm trying to achieve is to split the list into sublists, each starting and ending with a <em>single</em> string, that is:</p>
<pre><code>[
['C1', 'C1,C2', 'C2,C3', 'C3,C4', 'C4'],
['C5', 'C5,C6', 'C6,C7', 'C7,C8', 'C8'],
['C10', 'C10,C11', 'C11,C12', 'C12,C13', 'C13']
]
</code></pre>
<p>I thought of adding some logic like the following code, but I'm not sure if I'm on the right track:</p>
<pre><code>tl = []
for i in l:
# just get the variable
val = i
tl.append(val)
# split by ,
val_split = len(i.split(','))
# check if the value is the first element of the list (C1)
if val == l[0]:
print(1, val)
    # check if the split gives more than 1 element (C1,C2)
elif val_split > 1:
print(2, val)
    # check if the split of the string is equal to 1 element (C4)
elif val_split == 1:
        # here the code should compare if the string is equal to the last value of the nested list. If yes, go with the next value (C5)
if val != tl[-1]:
print(3, val)
else:
print(4, val)
</code></pre>
|
<python><list><split>
|
2025-02-24 10:18:49
| 6
| 4,991
|
matteo
|
79,463,096
| 3,189,980
|
python logging use two different loggers with different formatting
|
<p>I am new to python!</p>
<p>I would like to setup two different loggers, both logging to stdout, but with different formats.</p>
<p>What I tried:</p>
<pre><code>import logging
logging.basicConfig(
level=logging.NOTSET,
format='### %(levelname)-8s - %(message)s'
)
v_obj_CONSOLE = logging.StreamHandler()
v_obj_CONSOLE.setLevel(logging.NOTSET)
v_obj_FORMATTER = logging.Formatter(fmt='### %(name)s - %(asctime)s - %(levelname)-8s - %(message)s', datefmt='%d-%m-%Y %H:%M')
v_obj_CONSOLE.setFormatter(v_obj_FORMATTER)
logger1 = logging.getLogger('mylogger.1')
logger1.addHandler(v_obj_CONSOLE)
print()
print("from logging:")
logging.log(logging.INFO, 'started')
print()
logging.debug("test debug")
logging.info("test INFO")
logging.warning("test warning")
logging.error("test error")
logging.critical("test critical")
print()
print("from logger1:")
logger1.log(logging.INFO, 'started')
print()
logger1.debug("test debug")
logger1.info("test INFO")
logger1.warning("test warning")
logger1.error("test error")
logger1.critical("test critical")
</code></pre>
<p>Result is:</p>
<pre><code>from logging:
### INFO - started
### DEBUG - test debug
### INFO - test INFO
### WARNING - test warning
### ERROR - test error
### CRITICAL - test critical
from logger1:
### mylogger.1 - 24-02-2025 10:51 - INFO - started
### INFO - started
### mylogger.1 - 24-02-2025 10:51 - DEBUG - test debug
### DEBUG - test debug
### mylogger.1 - 24-02-2025 10:51 - INFO - test INFO
### INFO - test INFO
### mylogger.1 - 24-02-2025 10:51 - WARNING - test warning
### WARNING - test warning
### mylogger.1 - 24-02-2025 10:51 - ERROR - test error
### ERROR - test error
### mylogger.1 - 24-02-2025 10:51 - CRITICAL - test critical
### CRITICAL - test critical
</code></pre>
<p>As you see, for "logger1" I get the messages twice.</p>
<p>How can I avoid this and get the messages for "logger1" only in the second format I defined?</p>
<p><strong>Update</strong> based on answer of @Hippolytehh</p>
<p>I changed in the proposed solution to :</p>
<pre><code>logger1.setLevel(logging.NOTSET)
logger2.setLevel(logging.NOTSET)
</code></pre>
<p>complete code:</p>
<pre><code>import logging
logger1 = logging.getLogger('logger1')
logger2 = logging.getLogger('mylogger.1')
handler1 = logging.StreamHandler()
handler2 = logging.StreamHandler()
formatter1 = logging.Formatter(fmt= '### %(levelname)-8s - %(message)s')
formatter2 = logging.Formatter(fmt='### %(name)s - %(asctime)s - %(levelname)-8s - %(message)s', datefmt='%d-%m-%Y %H:%M')
handler1.setFormatter(formatter1)
handler2.setFormatter(formatter2)
logger1.addHandler(handler1)
logger2.addHandler(handler2)
logger1.setLevel(logging.NOTSET)
logger2.setLevel(logging.NOTSET)
print()
print("from logging:")
logging.log(logging.INFO, 'started')
print()
logging.debug("test debug")
logging.info("test INFO")
logging.warning("test warning")
logging.error("test error")
logging.critical("test critical")
print()
print("from logger1:")
logger1.log(logging.INFO, 'started')
print()
logger1.debug("test debug")
logger1.info("test INFO")
logger1.warning("test warning")
logger1.error("test error")
logger1.critical("test critical")
print()
print("from logger2:")
logger2.log(logging.INFO, 'started')
print()
logger2.debug("test debug")
logger2.info("test INFO")
logger2.warning("test warning")
logger2.error("test error")
logger2.critical("test critical")
</code></pre>
<p>and got as a result:</p>
<pre><code>from logging:
WARNING:root:test warning
ERROR:root:test error
CRITICAL:root:test critical
from logger1:
### WARNING - test warning
WARNING:logger1:test warning
### ERROR - test error
ERROR:logger1:test error
### CRITICAL - test critical
CRITICAL:logger1:test critical
from logger2:
### mylogger.1 - 24-02-2025 13:17 - WARNING - test warning
WARNING:mylogger.1:test warning
### mylogger.1 - 24-02-2025 13:17 - ERROR - test error
ERROR:mylogger.1:test error
### mylogger.1 - 24-02-2025 13:17 - CRITICAL - test critical
CRITICAL:mylogger.1:test critical
</code></pre>
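<p>From reading around, I suspect the duplicate lines come from the messages also propagating up to the root logger; is something along these lines the right direction (an untested guess on my side)?</p>
<pre class="lang-py prettyprint-override"><code># untested guess: stop the records from also reaching the root logger's handler
logger1.propagate = False
logger2.propagate = False
</code></pre>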
|
<python><logging><formatter>
|
2025-02-24 09:59:24
| 1
| 357
|
averlon
|
79,463,058
| 8,137,301
|
How to pass multiple inputs to a Python script in macOS Shortcuts?
|
<p>I’ve been using a Python script to perform multiple find/replace actions at once. It has 3 inputs right inside the code, in a form like this:</p>
<pre><code>def main():
    text_passage = """
    (Here be huge blocks of text)
    """
    to_replace_input = """
    (Here be large strings to look for, each on a new line)
    """
    replacements_input = """
    (Here be the corresponding replacements, each on a new line)
    """
</code></pre>
<p>I’ve just been copy/pasting these 3 things right into the code of the script, saving, and running the script. It works fine, but I would like to make the experience feel a little bit cleaner/faster with some simple UI elements. And since I already use macOS Shortcuts for many quick tools, I was hoping I could make a Shortcut for this.</p>
<p>But I can’t figure out how to pass three different Shortcut inputs to the script. I’m using 3 of the <strong>ASK FOR INPUT</strong> actions and one <strong>RUN SHELL SCRIPT</strong> action.</p>
<p>I tried using ChatGPT to modify the Python script for me to use in this way, but I am getting constant errors relating to input not being passed to the script.</p>
<p><a href="https://i.sstatic.net/TMuMKdJj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TMuMKdJj.png" alt="enter image description here" /></a></p>
<p>Is there some special way I need to get all 3 inputs communicating to the <strong>RUN SHELL SCRIPT</strong> action?</p>
<p>I should note that I am not a programmer or anything so it’s also possible I’ve just screwed up the Python script and don’t know what I’m doing. The original method I mentioned still works so this isn’t life or death but it would be nice to understand better and get this working as a Shortcut. Thanks!</p>
|
<python><macos><shortcut>
|
2025-02-24 09:49:34
| 1
| 307
|
Crabgrass0899
|
79,462,962
| 2,201,385
|
PyCharm cannot connect to local postgresql database
|
<p>I'm trying to configure PyCharm's Data Source tool to use a locally running postgres database (not in a container) and it simply fails with the (not very helpful) message "[08001] The connection attempt failed."</p>
<p>I can connect to it in the shell with the command <code>psql -hlocalhost -p5432 -Umac1 lib1</code>. Also the code I wrote in PyCharm has no problem accessing it, with <code>psycopg.connect("dbname=lib1")</code>.</p>
<p>I added the data source in the "Data Sources and Drivers" dialog and edited the information accordingly. For "Authentication" I chose "User & Password", set "User" to "mac1", and "Password" to what I set in psql with "ALTER USER mac1 WITH PASSWORD". The URL I get is <code>jdbc:postgresql://localhost:5432/lib1</code>, which seems correct to me. But PyCharm fails to connect. I also tried turning on/off SSL but that didn't help.</p>
<p>The postgresql server version I'm using is 16.3, and PyCharm is using postgresql driver ver 42.7.3.</p>
|
<python><postgresql><pycharm>
|
2025-02-24 09:14:18
| 0
| 581
|
Tony Beta Lambda
|
79,462,913
| 200,783
|
In Python, how can I find the class in which a method is defined?
|
<p>If I have a class hierarchy in Python (we can assume there's no multiple inheritance), how can I find the class in which a method is defined? For example:</p>
<pre><code>class C:
def f(): pass
class D(C):
def f(): pass
class E(D):
pass
</code></pre>
<p>If I call <code>E().f()</code>, this calls the method <code>f</code> that's defined in class <code>D</code>. Is there any way to get this information without calling the method, i.e:</p>
<pre><code>assert get_defining_class(E, f) == D
</code></pre>
<p>One idea is to search each class in <code>E.__mro__</code> for <code>'f'</code> - would that work?</p>
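<p>A rough sketch of that idea (passing the method name as a string, since the name is all I have):</p>
<pre class="lang-py prettyprint-override"><code>def get_defining_class(cls, method_name):
    # walk the MRO and return the first class whose own namespace defines the method
    for klass in cls.__mro__:
        if method_name in vars(klass):
            return klass
    return None

assert get_defining_class(E, 'f') is D
</code></pre>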
<p>(This sounds like a duplicate of <a href="https://stackoverflow.com/q/25748075/200783">Find class in which a method is defined</a>, but that question is about finding the defining class from within <code>f</code> itself.)</p>
|
<python><class><oop><methods>
|
2025-02-24 08:54:41
| 1
| 14,493
|
user200783
|
79,462,736
| 4,489,082
|
Error while creating Neo4jGraph object using langchain
|
<p>I am trying to create a Neo4jGraph object using the following code. When I run the code, I receive an error. Kindly help me fix this error. I have verified that the environment variables are read correctly. I am running this on my local Ubuntu machine.</p>
<p>Code -</p>
<pre><code>from langchain_neo4j import Neo4jGraph
import os
from dotenv import load_dotenv
load_dotenv()
neo4j_uri = os.getenv('NEO4J_URI')
print(f"Neo4j URI: {neo4j_uri}")
graph = Neo4jGraph(url= neo4j_uri,username = os.getenv('NEO4J_USERNAME'),password = os.getenv('NEO4J_PASSWORD'))
</code></pre>
<p>Output of the code-</p>
<pre><code>Neo4j URI: neo4j+s://53XXXe0e.databases.neo4j.io
---------------------------------------------------------------------------
gaierror Traceback (most recent call last)
File ~/python3.10_venv/lib/python3.10/site-packages/neo4j/_async_compat/network/_util.py:144, in NetworkUtil._dns_resolver(address, family)
143 try:
--> 144 info = NetworkUtil.get_address_info(
145 address.host,
146 address.port,
147 family=family,
148 type=socket.SOCK_STREAM,
149 )
150 except OSError as e:
File ~/python3.10_venv/lib/python3.10/site-packages/neo4j/_async_compat/network/_util.py:130, in NetworkUtil.get_address_info(host, port, family, type, proto, flags)
128 @staticmethod
129 def get_address_info(host, port, *, family=0, type=0, proto=0, flags=0):
--> 130 return socket.getaddrinfo(host, port, family, type, proto, flags)
File /usr/lib/python3.10/socket.py:967, in getaddrinfo(host, port, family, type, proto, flags)
966 addrlist = []
--> 967 for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
968 af, socktype, proto, canonname, sa = res
gaierror: [Errno -2] Name or service not known
The above exception was the direct cause of the following exception:
ValueError Traceback (most recent call last)
Cell In[1], line 6
4 from dotenv import load_dotenv
5 load_dotenv()
----> 6 graph = Neo4jGraph(
7 url = os.getenv('NEO4J_URI'),
8 username = os.getenv('NEO4J_USERNAME'),
9 password = os.getenv('NEO4J_PASSWORD')
10 )
File ~/python3.10_venv/lib/python3.10/site-packages/langchain_neo4j/graphs/neo4j_graph.py:379, in Neo4jGraph.__init__(self, url, username, password, database, timeout, sanitize, refresh_schema, driver_config, enhanced_schema)
377 # Verify connection
378 try:
--> 379 self._driver.verify_connectivity()
380 except neo4j.exceptions.ConfigurationError:
381 raise ValueError(
382 "Could not connect to Neo4j database. "
383 "Please ensure that the driver config is correct"
384 )
File ~/python3.10_venv/lib/python3.10/site-packages/neo4j/_sync/driver.py:1081, in Driver.verify_connectivity(self, **config)
1074 experimental_warn(
1075 "All configuration key-word arguments to "
1076 "verify_connectivity() are experimental. They might be "
1077 "changed or removed in any future version without prior "
1078 "notice."
1079 )
1080 session_config = self._read_session_config(config)
-> 1081 self._get_server_info(session_config)
File ~/python3.10_venv/lib/python3.10/site-packages/neo4j/_sync/driver.py:1296, in Driver._get_server_info(self, session_config)
1294 def _get_server_info(self, session_config) -> ServerInfo:
1295 with self._session(session_config) as session:
-> 1296 return session._get_server_info()
File ~/python3.10_venv/lib/python3.10/site-packages/neo4j/_sync/work/session.py:183, in Session._get_server_info(self)
181 def _get_server_info(self):
182 assert not self._connection
--> 183 self._connect(READ_ACCESS, liveness_check_timeout=0)
184 server_info = self._connection.server_info
185 self._disconnect()
File ~/python3.10_venv/lib/python3.10/site-packages/neo4j/_sync/work/session.py:136, in Session._connect(self, access_mode, **acquire_kwargs)
134 access_mode = self._config.default_access_mode
135 try:
--> 136 super()._connect(
137 access_mode, auth=self._config.auth, **acquire_kwargs
138 )
139 except asyncio.CancelledError:
140 self._handle_cancellation(message="_connect")
File ~/python3.10_venv/lib/python3.10/site-packages/neo4j/_sync/work/workspace.py:167, in Workspace._connect(self, access_mode, auth, **acquire_kwargs)
159 else:
160 # This is the first time we open a connection to a server in a
161 # cluster environment for this session without explicitly
(...)
164 # we shall use this database explicitly for all subsequent
165 # actions within this session.
166 log.debug("[#0000] _: <WORKSPACE> resolve home database")
--> 167 self._pool.update_routing_table(
168 database=self._config.database,
169 imp_user=self._config.impersonated_user,
170 bookmarks=self._get_bookmarks(),
171 auth=auth,
172 acquisition_timeout=acquisition_timeout,
173 database_callback=self._set_cached_database,
174 )
175 acquire_kwargs_ = {
176 "access_mode": access_mode,
177 "timeout": acquisition_timeout,
(...)
181 "liveness_check_timeout": None,
182 }
183 acquire_kwargs_.update(acquire_kwargs)
File ~/python3.10_venv/lib/python3.10/site-packages/neo4j/_sync/io/_pool.py:900, in Neo4jPool.update_routing_table(self, database, imp_user, bookmarks, auth, acquisition_timeout, database_callback)
890 existing_routers = set(routing_table.routers)
892 prefer_initial_routing_address = self.routing_tables[
893 database
894 ].initialized_without_writers
896 if (
897 prefer_initial_routing_address
898 and
899 # TODO: Test this state
--> 900 self._update_routing_table_from(
901 self.address,
902 database=database,
903 imp_user=imp_user,
904 bookmarks=bookmarks,
905 auth=auth,
906 acquisition_timeout=acquisition_timeout,
907 database_callback=database_callback,
908 )
909 ):
910 # Why is only the first initial routing address used?
911 return
912 if self._update_routing_table_from(
913 *(existing_routers - {self.address}),
914 database=database,
(...)
919 database_callback=database_callback,
920 ):
File ~/python3.10_venv/lib/python3.10/site-packages/neo4j/_sync/io/_pool.py:830, in Neo4jPool._update_routing_table_from(self, database, imp_user, bookmarks, auth, acquisition_timeout, database_callback, *routers)
822 log.debug(
823 (
824 "[#0000] _: <POOL> "
(...)
827 ", ".join(map(repr, routers)),
828 )
829 for router in routers:
--> 830 for address in NetworkUtil.resolve_address(
831 router, resolver=self.pool_config.resolver
832 ):
833 new_routing_table = self.fetch_routing_table(
834 address=address,
835 acquisition_timeout=acquisition_timeout,
(...)
839 auth=auth,
840 )
841 if new_routing_table is not None:
File ~/python3.10_venv/lib/python3.10/site-packages/neo4j/_async_compat/network/_util.py:196, in NetworkUtil.resolve_address(address, family, resolver)
194 yield address_dns_resolved
195 else:
--> 196 for address_dns_resolved in NetworkUtil._dns_resolver(
197 address, family=family
198 ):
199 log.debug(
200 "[#0000] _: <RESOLVE> dns resolver out: %s",
201 address_dns_resolved,
202 )
203 yield address_dns_resolved
File ~/python3.10_venv/lib/python3.10/site-packages/neo4j/_async_compat/network/_util.py:151, in NetworkUtil._dns_resolver(address, family)
144 info = NetworkUtil.get_address_info(
145 address.host,
146 address.port,
147 family=family,
148 type=socket.SOCK_STREAM,
149 )
150 except OSError as e:
--> 151 raise ValueError(f"Cannot resolve address {address}") from e
152 return _resolved_addresses_from_info(info, address._host_name)
ValueError: Cannot resolve address 53XXXe0e.databases.neo4j.io:7687
</code></pre>
<p>Clearing the DNS cache with <code>sudo resolvectl flush-caches</code> did not help. At this point I am clueless about what could be wrong.</p>
|
<python><neo4j><langchain>
|
2025-02-24 07:38:46
| 2
| 793
|
pkj
|
79,462,563
| 8,813,473
|
Not able to retain ' in the xml ElementTree write in python
|
<p>After parsing an XML string whose attribute value contains <code>&apos;</code> entities, I am not able to retain the <code>&apos;</code> occurrences when I write it to a file or output it as a string. I know the final XML is still valid, but from a diff perspective I want to retain the <code>&apos;</code> values. Any tips?</p>
<p>Here is a sample code,</p>
<pre><code>import xml.etree.ElementTree as ETree
xml_str = '''<?xml version="1.0" encoding="UTF-8"?>
<testItem name="SomeName" description="Change to &apos;&apos;Calculated&apos;&apos; in the Diagram tab">
</testItem>
'''
root = ETree.fromstring(xml_str)
tree = ETree.ElementTree(root)
ETree.tostring(root, encoding='utf-8')
# tree.write('output.xml') # this also doesn't work, writes single quotes to the file
</code></pre>
<p>I get the below output,</p>
<p><code>b'<testItem name="SomeName" description="Change to \'\'Calculated\'\' in the Diagram tab">\n</testItem>'</code></p>
|
<python><xml>
|
2025-02-24 05:52:39
| 1
| 410
|
Amit Tendulkar
|
79,462,501
| 10,787,403
|
IterableDataset not supported on GRPOTrainer
|
<p>The following program crashes upon execution</p>
<pre class="lang-py prettyprint-override"><code>from datasets import IterableDataset, Dataset
from trl import GRPOConfig, GRPOTrainer
prompts = ["Hi", "Hello"]
def data_generator():
while True:
for s in prompts:
yield {"prompt" : s}
dataset = IterableDataset.from_generator(data_generator)
training_args = GRPOConfig(
output_dir= "tmp",
max_steps = 1000,
)
trainer = GRPOTrainer(
model="facebook/opt-350m",
reward_funcs=lambda prompts,completions, **kwargs: [1]*8,
train_dataset=dataset,
args=training_args,
)
trainer.train()
</code></pre>
<p>Causes the following trace:</p>
<pre><code>Traceback (most recent call last):
File "/home/pietro/Documents/Code/CS234/starter_code/trl_testing.py", line 24, in <module>
trainer.train()
File "/home/pietro/.conda/envs/cs234_3/lib/python3.9/site-packages/transformers/trainer.py", line 2241, in train
return inner_training_loop(
File "/home/pietro/.conda/envs/cs234_3/lib/python3.9/site-packages/transformers/trainer.py", line 2500, in _inner_training_loop
batch_samples, num_items_in_batch = self.get_batch_samples(epoch_iterator, num_batches)
File "/home/pietro/.conda/envs/cs234_3/lib/python3.9/site-packages/transformers/trainer.py", line 5180, in get_batch_samples
batch_samples += [next(epoch_iterator)]
File "/home/pietro/.conda/envs/cs234_3/lib/python3.9/site-packages/accelerate/data_loader.py", line 856, in __iter__
next_batch, next_batch_info = self._fetch_batches(main_iterator)
File "/home/pietro/.conda/envs/cs234_3/lib/python3.9/site-packages/accelerate/data_loader.py", line 812, in _fetch_batches
batch = concatenate(batches, dim=0)
File "/home/pietro/.conda/envs/cs234_3/lib/python3.9/site-packages/accelerate/utils/operations.py", line 615, in concatenate
return honor_type(data[0], (concatenate([d[i] for d in data], dim=dim) for i in range(len(data[0]))))
File "/home/pietro/.conda/envs/cs234_3/lib/python3.9/site-packages/accelerate/utils/operations.py", line 81, in honor_type
return type(obj)(generator)
File "/home/pietro/.conda/envs/cs234_3/lib/python3.9/site-packages/accelerate/utils/operations.py", line 615, in <genexpr>
return honor_type(data[0], (concatenate([d[i] for d in data], dim=dim) for i in range(len(data[0]))))
File "/home/pietro/.conda/envs/cs234_3/lib/python3.9/site-packages/accelerate/utils/operations.py", line 617, in concatenate
return type(data[0])({k: concatenate([d[k] for d in data], dim=dim) for k in data[0].keys()})
File "/home/pietro/.conda/envs/cs234_3/lib/python3.9/site-packages/accelerate/utils/operations.py", line 617, in <dictcomp>
return type(data[0])({k: concatenate([d[k] for d in data], dim=dim) for k in data[0].keys()})
File "/home/pietro/.conda/envs/cs234_3/lib/python3.9/site-packages/accelerate/utils/operations.py", line 619, in concatenate
raise TypeError(f"Can only concatenate tensors but got {type(data[0])}")
TypeError: Can only concatenate tensors but got <class 'str'>
</code></pre>
<p>However, replacing the <code>IterableDataset</code> as done below with an analogous <code>Dataset</code> fixes the issue:</p>
<pre class="lang-py prettyprint-override"><code>from datasets import IterableDataset, Dataset
from trl import GRPOConfig, GRPOTrainer
prompts = ["Hi", "Hello"]
dataset = Dataset.from_dict({"prompt" : prompts})
training_args = GRPOConfig(
output_dir= "tmp",
max_steps = 1000,
)
trainer = GRPOTrainer(
model="facebook/opt-350m",
reward_funcs=lambda prompts,completions, **kwargs: [1]*8,
train_dataset=dataset,
args=training_args,
)
trainer.train()
</code></pre>
<p>This has been reproduced on 2 very different systems, so unlikely that this is the cause.
Am I missing something?</p>
|
<python><machine-learning><huggingface-datasets><huggingface-trainer>
|
2025-02-24 05:08:12
| 0
| 376
|
PMM
|
79,462,383
| 4,330,537
|
Trying to use chrome with seleniumbase and uc=true option
|
<p>I am trying to scrape a site that has a Cloudflare bot check. I currently use</p>
<pre><code>import undetected_chromedriver as uc
</code></pre>
<p>and portable CHROME.EXE</p>
<p>However, this does not seem to get me around the bot check, so now I am going to try</p>
<pre><code>from seleniumbase import SB
</code></pre>
<p>but using</p>
<pre><code>SB(uc=True,agent=user_agent_cycle,binary_location=chromedriver_path) as sb:
</code></pre>
<p>At the with statement it freezes and all I get is:</p>
<pre><code>PS D:\code\Arcgis\FissionStaking2> python .\testbaseuc3.py
<itertools.cycle object at 0x0000027605C89D80>
</code></pre>
<p>here is the full code:</p>
<pre><code>from seleniumbase import SB
chromedriver_path = "C:\\temp\\GoogleChromePortable64-132\\App\\Chrome-bin\\chrome.exe"
import random
import itertools
def user_agent_rotator(user_agent_list):
# shuffle the User Agent list
random.shuffle(user_agent_list)
# rotate the shuffle to ensure all User Agents are used
return itertools.cycle(user_agent_list)
# define Chrome options
#options = SB.ChromeOptions()
# create a User Agent list
user_agents = [
"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/127.0.0.0 Safari/537.36",
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/127.0.0.0 Safari/537.36",
"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/127.0.0.0 Safari/537.36",
# ... add more User Agents
]
# initialize a generator for the User Agent rotator
user_agent_cycle = user_agent_rotator(user_agents)
print(user_agent_cycle)
with SB(uc=True,agent=user_agent_cycle,binary_location=chromedriver_path) as sb:
print(1)
sb.open("https://google.com/ncr")
print(2)
sb.type('[title="Search"]', "SeleniumBase GitHub page\n")
print(3)
sb.click('[href*="github.com/seleniumbase/"]')
sb.save_screenshot_to_logs() # ./latest_logs/
print(sb.get_page_title())
</code></pre>
|
<python><web-scraping><seleniumbase>
|
2025-02-24 03:19:41
| 1
| 835
|
RobM
|
79,462,354
| 2,537,486
|
How to access secondary axis properties in Matplotlib?
|
<p>My goal is to plot two sets of data against two different y axes, at the left and right spines, with two different colors. In addition, I want the following elements colored with the same two colors on each side:</p>
<ul>
<li>left and right vertical spine</li>
<li>ticks (major and minor)</li>
<li>tick labels</li>
</ul>
<p>I was unable to achieve this after searching for a while.
It is easy to plot the data points correctly, either using the <code>secondary-y</code> option in <code>pandas.plot</code>, or by defining a secondary y axis with Matplotlib's <code>twinx</code>.</p>
<p>Let me give a complete code example.</p>
<pre><code>import matplotlib.pyplot as plt
import matplotlib as mpl
import pandas as pd
df = pd.DataFrame(data={'Series A':[1,1.25,1.5,1.75,2],'Series B':[11,10.75,10.5,10.25,10]})
fig1,ax = plt.subplots()
# Plot Series A on ax
df['Series A'].plot(ax=ax,style='o-',lw=0.5,legend=False,color='b',ylim=(0,3));
ax.set_xlabel('x')
ax.set_ylabel('Y values')
# Generate secondary axis
ax2 = ax.twinx()
# Plot Series B on ax2
df['Series B'].plot(ax=ax2,style='^-',legend=False,lw=0.5,color='g',ylim=(9,12));
plt.show();
</code></pre>
<p>At this point, my plot has the two sets of data plotted correctly. Now on to colorizing. I add the following two lines:</p>
<pre><code>ax.tick_params(axis='y', colors='b',which='both')
ax2.tick_params(axis='y', colors='g',which='both')
</code></pre>
<p>And this gets almost what I want: ticks and tick labels are colorized. But the spines aren't.</p>
<p>I tried <code>ax.spines['left'].set_color('orange')</code> but that does not work. Why?</p>
<p>So I have two questions:</p>
<ul>
<li>what is the accessor for the secondary axis parameters (for example, if I want to change ticks, etc.)?</li>
<li>how can I get the spines colorized?</li>
</ul>
|
<python><matplotlib>
|
2025-02-24 02:46:19
| 1
| 1,749
|
germ
|
79,462,214
| 986,533
|
How can I reuse a function in NiceGUI slots without client-side code duplication?
|
<p>NiceGUI documentation on <a href="https://nicegui.io/documentation/table#conditional_formatting" rel="nofollow noreferrer">conditional formatting in Table</a> suggests the following:</p>
<pre class="lang-py prettyprint-override"><code>table.add_slot('body-cell-age', '''
<q-td key="age" :props="props">
<q-badge :color="props.value < 21 ? 'red' : 'green'">
{{ props.value }}
</q-badge>
</q-td>
''')
</code></pre>
<p>This has a very simple logic, namely <code>props.value < 21 ? 'red' : 'green'</code></p>
<p>I do want to put way more advanced formatting logic here, and reuse it across the entire app - in many slots of many different components. I do not want to just inline the same code snippet for each slot as this will result in a lot of extra code duplication on the client side. Instead, I want to define a function once, and call it from each slot.</p>
<p>How can I do it?</p>
<p>I tried attaching a function to <code>window</code> by adding <code><script></code> with <a href="https://nicegui.io/documentation/section_pages_routing#add_html_to_the_page" rel="nofollow noreferrer"><code>ui.add_head_html</code></a> but it was not recognized in my slot.</p>
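<p>Roughly what that attempt looked like (<code>badge_color</code> is just a placeholder name I made up; it is not picked up inside the slot):</p>
<pre class="lang-py prettyprint-override"><code># attempt: define a global JS helper once, then call it from the slot template
ui.add_head_html('''
<script>
  window.badge_color = (value) => value < 21 ? 'red' : 'green';
</script>
''')

table.add_slot('body-cell-age', '''
    <q-td key="age" :props="props">
        <q-badge :color="badge_color(props.value)">
            {{ props.value }}
        </q-badge>
    </q-td>
''')
</code></pre>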
<p>I tried using <a href="https://github.com/zauberzeug/nicegui/tree/main/examples/custom_vue_component" rel="nofollow noreferrer">custom VUE component</a> for my slot definition but then it doesn't appear to populate <code>props</code>.</p>
|
<javascript><python><nicegui>
|
2025-02-24 00:29:34
| 1
| 3,630
|
Konrad Jamrozik
|
79,462,149
| 2,057,516
|
Why does this sorted key lambda encounter an IndexError
|
<p>I thought I understood the <code>key</code> argument for the <code>sorted</code> function, but for the life of me, I cannot figure out why it appears to be applying the tuple index to the string inside the tuple instead of to the tuple. What is going on here? I'm running Python 3.9.</p>
<p>I narrowed it down to this toy example:</p>
<pre class="lang-py prettyprint-override"><code>sorted([val[0] for val in [('C', 'C')]], key=lambda tpl: tpl[1])
</code></pre>
<p>Error message:</p>
<pre class="lang-py prettyprint-override"><code>In [11]: sorted([val[0] for val in [('C', 'C')]], key=lambda tpl: tpl[1])
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-11-53d87bf797aa> in <module>
----> 1 sorted([val[0] for val in [('C', 'C')]], key=lambda tpl: tpl[1])
<ipython-input-11-53d87bf797aa> in <lambda>(tpl)
----> 1 sorted([val[0] for val in [('C', 'C')]], key=lambda tpl: tpl[1])
IndexError: string index out of range
</code></pre>
<p>If I don't supply the <code>key</code> argument, it works just fine:</p>
<pre class="lang-py prettyprint-override"><code>In [12]: sorted([val[0] for val in [('C', 'C')]])
Out[12]: ['C']
</code></pre>
<p>I just want to sort by the second item in the tuple.</p>
|
<python><python-3.x><sorting><indexing><tuples>
|
2025-02-23 23:19:01
| 1
| 1,225
|
hepcat72
|
79,461,783
| 170,614
|
pandas resample and candlestick chart
|
<p>I'm trying to figure out how to create a candlestick chart with the data from the pandas resample method. But let's start with this simple example:</p>
<pre><code>import pandas as pd
from datetime import datetime
from bokeh.plotting import figure, show
data = {
'time': [
datetime(2025, 1, 1),
datetime(2025, 1, 2),
datetime(2025, 1, 3),
],
'open': [10, 40, 20],
'close': [40, 20, 30],
'low': [ 5, 10, 20],
'high': [40, 50, 35],
}
df = pd.DataFrame(data)
inc = df.close > df.open
dec = df.open > df.close
p = figure()
p.segment(df.index, df.high, df.index, df.low, color="black")
p.vbar(df.index[dec], 0.6, df.open[dec], df.close[dec], color="#eb3c40")
p.vbar(df.index[inc], 0.6, df.open[inc], df.close[inc], fill_color="white",
line_color="#49a3a3", line_width=2)
show(p)
</code></pre>
<p>This gives me the expected chart:</p>
<p><a href="https://i.sstatic.net/TpI1qLTJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TpI1qLTJ.png" alt="candlestick chart" /></a></p>
<p>But let's say that I want to create the same chart, but I have data in a different format. In this format I don't have explicit open/close/low/high values; I just have an array of dates and values, like here:</p>
<pre><code>data = {
'time': [
datetime(2025, 1, 1, 6),
datetime(2025, 1, 1, 10),
datetime(2025, 1, 1, 14),
datetime(2025, 1, 1, 18),
datetime(2025, 1, 2, 6),
datetime(2025, 1, 2, 10),
datetime(2025, 1, 2, 14),
datetime(2025, 1, 2, 18),
datetime(2025, 1, 3, 6),
datetime(2025, 1, 3, 10),
datetime(2025, 1, 3, 14),
datetime(2025, 1, 3, 18),
],
'price': [
10, 5, 40, 40,
40, 10, 50, 20,
20, 20, 35, 30,
]
}
</code></pre>
<p>I know that I can use the resample method to group these values by day.</p>
<pre><code>df = pd.DataFrame(data)
resampler = df.resample('D', on='time', kind='period')
</code></pre>
<p>Now I can access "open" as <code>resampler.first()</code>, "close" as <code>resampler.last()</code>, "low" as <code>resampler.min()</code> and "high" as <code>resampler.max()</code>. So it looks like I have all I need, but I'm struggling to put it together to draw a chart. How can I do it? What will be the equivalent of <code>df.index</code> here?</p>
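<p>To make the question concrete, this is the per-day frame I think I can build from those aggregates (just a sketch; I dropped <code>kind='period'</code> to keep it short):</p>
<pre class="lang-py prettyprint-override"><code>resampler = df.resample('D', on='time')['price']
ohlc = pd.DataFrame({
    'open': resampler.first(),
    'close': resampler.last(),
    'low': resampler.min(),
    'high': resampler.max(),
})
# ohlc.index now holds the dates - is that what should replace df.index?
</code></pre>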
|
<python><pandas><bokeh><resample>
|
2025-02-23 18:44:29
| 1
| 2,370
|
Adam
|
79,461,655
| 1,047,788
|
How do I approximate Scala's call-by-name in Python?
|
<p>Given the example from <a href="https://docs.scala-lang.org/tour/by-name-parameters.html" rel="nofollow noreferrer">https://docs.scala-lang.org/tour/by-name-parameters.html</a> that implements a while-loop</p>
<pre class="lang-scala prettyprint-override"><code>def whileLoop(condition: => Boolean)(body: => Unit): Unit =
if condition then
body
whileLoop(condition)(body)
var i = 2
whileLoop (i > 0) {
println(i)
i -= 1
} // prints 2 1
</code></pre>
<p>How can I implement my own if-function in Python?</p>
<p>In Scala, I could do</p>
<pre class="lang-scala prettyprint-override"><code>def when[T, U](condition: Boolean, ifTrue: => T, ifFalse: => U): T | U =
if condition then ifTrue else ifFalse
val v: Vector[Int] = Vector()
println(when(v.size > 0, v(0), 0))
</code></pre>
<p>But in Python, it does not seem possible to get call-by-name, which is what Scala does by declaring a function parameter with <code>=></code>.</p>
<pre class="lang-py prettyprint-override"><code>from typing import Any, reveal_type
def when[T, U](condition: Any, if_true: T, if_false: U) -> T | U:
return if_true if condition else if_false
a: list[int] = []
result = when(a, a[0], 0)
reveal_type(result)
</code></pre>
<p>The above typechecks in mypy, but does not run, giving</p>
<pre><code>Traceback (most recent call last):
File "scratch_10.py", line 7, in <module>
result = when(a, a[0], 0)
~^^^
IndexError: list index out of range
</code></pre>
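<p>The only workaround I can think of is making both branches explicit thunks, which clutters every call site; a sketch of what I mean:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Any, Callable

def when[T, U](condition: Any, if_true: Callable[[], T], if_false: Callable[[], U]) -> T | U:
    # neither branch is evaluated until it is actually chosen
    return if_true() if condition else if_false()

a: list[int] = []
print(when(a, lambda: a[0], lambda: 0))  # prints 0, no IndexError
</code></pre>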
<p>The real-world motivation for wanting to write such <code>when</code> function in Python is for me <a href="https://anthonyhawkins.medium.com/is-python-the-perfect-json-yaml-templating-engine-c5c1b32418f6" rel="nofollow noreferrer">https://anthonyhawkins.medium.com/is-python-the-perfect-json-yaml-templating-engine-c5c1b32418f6</a>.</p>
|
<python><python-3.x><scala><callbyname>
|
2025-02-23 17:30:54
| 2
| 29,820
|
user7610
|
79,461,582
| 12,369,910
|
Environment variable in conda is not aligning with os.environ, possibly because of un-escaped dollar-sign
|
<p>I am trying to set an environment variable <code>SOME_VAR</code> in a conda environment. The value begins with a <code>$</code> character. Initially I didn't properly single-quote the variable value, which might have caused a silent error in powershell because that character has special meaning. Later I thought I'd fixed the problem by re-assigning the variable properly, executing the following commands in anaconda powershell</p>
<pre><code>conda env config vars set SOME_VAR='$ValueOfTheVariable!'
conda env config vars list -n my_env
</code></pre>
<p>Conda then properly displays the full <code>SOME_VAR=$ValueOfTheVariable!</code>. But Spyder, loading that environment, only sees the following:</p>
<pre><code>In [4]: os.environ.get('SOME_VAR')
Out[4]: '!'
</code></pre>
<p>Specifically, anaconda displays the whole variable, but Spyder and <code>os</code> only see the trailing exclamation point. All other environment variables are in agreement.</p>
<p>Things I've tried:</p>
<ul>
<li>checking <a href="https://docs.conda.io/projects/conda/en/stable/commands/env/config/vars/index.html" rel="nofollow noreferrer">the conda documentation</a></li>
<li>deactivating / reloading environment in powershell</li>
<li>restarting Spyder</li>
<li>removing the environment variable in powershell and reassigning it there</li>
<li>removing the environment variable in <code>os</code>, then restarting Spyder (the offending version reappears)</li>
<li>Unsetting in powershell, then setting with double-quotes using a backtick before the <code>$</code> (per <a href="https://stackoverflow.com/questions/17452401/escaping-dollar-signs-in-powershell-path-is-not-working">the second answer here</a>)</li>
</ul>
<p>Even after unsetting the variable in powershell, the variable persists in <code>os.environ</code>, making me think it's looking somewhere else and finding that version first before getting to my conda environment. But I never set that variable anywhere besides conda.</p>
|
<python><powershell><environment-variables><conda><spyder>
|
2025-02-23 16:49:35
| 2
| 360
|
Eli Johnson
|
79,461,435
| 14,358,689
|
How do I connect my python api to enable login to my wordpress website
|
<p>I am creating an API that would allow users of my WordPress website to also log in from a mobile device (I am making use of FastAPI), but I cannot figure out how to hash the password from Python so that it is compatible with WordPress.</p>
<p>I am making use of <code>passlib.phpass</code>.</p>
<p>I have access to my <code>wp-config.php</code> file and I can see some fields such as:</p>
<pre><code>define( 'AUTH_KEY', '***' );
define( 'SECURE_AUTH_KEY', '***' );
define( 'LOGGED_IN_KEY', '***' );
define( 'NONCE_KEY', '***' );
define( 'AUTH_SALT', '***' );
define( 'SECURE_AUTH_SALT', '***' );
define( 'LOGGED_IN_SALT', '***' );
define( 'NONCE_SALT', '***' );
</code></pre>
<p>How can I set up the password hashing so it is compatible with the one WordPress uses?</p>
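<p>For reference, this is the direction I'm going with passlib (as far as I understand, WordPress keeps the hash in the <code>wp_users.user_pass</code> column, so that is what I would compare against):</p>
<pre class="lang-py prettyprint-override"><code>from passlib.hash import phpass

# create a portable ($P$...) hash of the kind WordPress stores in wp_users.user_pass
new_hash = phpass.hash("plaintext-password")

# check a login attempt against a stored hash (here the one we just made)
print(phpass.verify("plaintext-password", new_hash))  # True
</code></pre>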
<p>I'll really appreciate any help on how to tackle this issue.</p>
|
<python><wordpress><wordpress-rest-api><password-hash>
|
2025-02-23 15:03:23
| 0
| 367
|
Anthony Aniobi
|
79,461,432
| 10,722,752
|
Getting ModuleNotFoundError: No module named 'mltable' error while executing command job in Azure ML SDK v2
|
<p>I am trying to run a command job that would read in a Data Asset and perform some preprocessing for further ML tasks.</p>
<p>To read in the data in the <code>.py</code> file I used the code in the <code>consume</code> section as below:</p>
<pre><code>def get_data():
ml_client = MLClient.from_config(credential=DefaultAzureCredential())
data_asset = ml_client.data.get("--data--", version="2")
tbl = mltable.load(f'azureml:/{data_asset.id}')
return tbl.to_pandas_dataframe()
</code></pre>
<p>For above code to work, have used <code>mltable</code> import as below:</p>
<pre><code>import mltable
</code></pre>
<p>When I run the above command in my <code>.ipynb</code> notebook cell, it works fine.</p>
<p>But then I try to execute the above <code>py</code> file using a command job:</p>
<pre><code>from azure.ai.ml import command

# configure job
job = command(
code="./folder",
command="python --script--.py",
environment="AzureML-sklearn-0.24-ubuntu18.04-py37-cpu@latest",
compute="--computename--",
display_name="....",
experiment_name="...."
)
</code></pre>
<p>and when I execute it, I am getting:</p>
<pre><code>No module named 'mltable'
</code></pre>
<p>I am working on <code>Python 3.10 - SDK v2</code></p>
<p>Can someone please help me with this.</p>
|
<python><azure><azure-machine-learning-service><azureml-python-sdk>
|
2025-02-23 15:02:48
| 1
| 11,560
|
Karthik S
|
79,461,242
| 1,823,068
|
How to convert float columns without decimal into int columns in Polars?
|
<p>The following <code>pandas</code> code removes all the <code>.0</code> decimal precision if I have a float column with 1.0, 2.0, 3.0 values:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df = pd.DataFrame({
"date": ["2025-01-01", "2025-01-02"],
"a": [1.0, 2.0],
"c": [1.0, 2.1],
})
print(df)
columns = df.columns.difference(["date"])
df[columns] = df[columns].map(lambda x: int(x) if x.is_integer() else x)
print(df)
</code></pre>
<p>The output of the above code:</p>
<pre class="lang-py prettyprint-override"><code> date a c
0 2025-01-01 1.0 1.0
1 2025-01-02 2.0 2.1
date a c
0 2025-01-01 1 1.0
1 2025-01-02 2 2.1
</code></pre>
<p>How can I do it using Polars?</p>
|
<python><dataframe><casting><python-polars>
|
2025-02-23 12:49:28
| 1
| 401
|
Nyssance
|
79,461,133
| 3,727,050
|
Python list Get Item not O(1)?
|
<p>I am trying to optimize a piece of code and I saw a lot of time spent on picking a random element out of a list.</p>
<p>Using the following <code>timeit</code>, I am seeing that Get Item against a larger list can take ~3x longer, while generating the random index is constant time:</p>
<pre><code>import timeit
setup = f"""
import random
test_10m = [x for x in range(10000000)]
test_1m = [x for x in range(1000000)]
test_10k = [x for x in range(10000)]
test_1k = [x for x in range(1000)]
"""
print(timeit.timeit('test_10m[int(random.random()*10000000)]', number=1000000, setup=setup))
print(timeit.timeit('test_1m[int(random.random()*1000000)]', number=1000000, setup=setup))
print(timeit.timeit('test_10k[int(random.random()*10000)]', number=1000000, setup=setup))
print(timeit.timeit('test_1k[int(random.random()*1000)]', number=1000000, setup=setup))
print(timeit.timeit('int(random.random()*10000000)', number=1000000, setup=setup))
print(timeit.timeit('int(random.random()*1000000)', number=1000000, setup=setup))
print(timeit.timeit('int(random.random()*10000)', number=1000000, setup=setup))
print(timeit.timeit('int(random.random()*1000)', number=1000000, setup=setup))
</code></pre>
<p>Example output (Python 3.8.6):</p>
<pre><code>0.7138307300047018
0.5209437379962765
0.2407058280077763
0.22641731501789764
0.21460772102000192
0.21099105197936296
0.20940051099751145
0.21421014302177355
</code></pre>
<p>I am getting similar results in Python 3.9.21.</p>
<p>Questions:</p>
<ul>
<li><p>How does <a href="https://wiki.python.org/moin/TimeComplexity" rel="nofollow noreferrer">the wiki</a> claim O(1) for Get Item? Ie. What am I missing?</p>
</li>
<li><p>Is there a faster way to get a random value out of a large list (large>=100000 for now)</p>
</li>
</ul>
|
<python><list>
|
2025-02-23 11:14:36
| 1
| 5,730
|
urban
|
79,460,515
| 2,280,091
|
How to validate only part of a document with Cerberus?
|
<p>I use Cerberus to validate a loaded configuration file. The user may make changes to that configuration within the program, however, and when they do, I’d like to validate <strong>only that change</strong> with Cerberus – not validate the whole document in vain, when I know that the rest hasn’t changed.</p>
<p>Take the following minimal example:</p>
<pre class="lang-py prettyprint-override"><code>import cerberus
SCHEMA = {
'a': {'type': 'string'},
'b': {'type': 'integer', 'allowed': (39, 93)}
}
v = cerberus.Validator(SCHEMA)
config = {
'a': 'Wow!',
'b': 39
}
print(v.validate(config), v.errors) # So far, so good.
config['b'] = 93 # Then the user changes only one property…
# How do I now validate *only* config['b'], without validating config['a']?
</code></pre>
<p>I considered using <a href="https://docs.python-cerberus.org/api.html#cerberus.Validator._get_child_validator" rel="nofollow noreferrer"><code>cerberus.Validator._get_child_validator</code></a>, which by <em>name</em> would appear to do exactly what I ask, but apparently it still checks the whole document:</p>
<pre class="lang-py prettyprint-override"><code>child = v._get_child_validator(document_crumb='b')
# This is wrong, but I don’t want to validate it – it’s just to illustrate that it *is* being validated.
config['a'] = None
print(child.validate(config), v.errors) # False, {'a': ['null value not allowed']}
</code></pre>
<p>How do I <strong>validate only the part that has changed?</strong></p>
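<p>The closest I've got is validating a one-key sub-document, but I don't know whether that is the intended way to do it:</p>
<pre class="lang-py prettyprint-override"><code>changed = {'b': config['b']}
print(v.validate(changed), v.errors)  # only the 'b' rules are checked
</code></pre>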
|
<python><cerberus>
|
2025-02-23 00:46:22
| 0
| 1,519
|
obskyr
|
79,460,500
| 4,913,592
|
Differentiable Environment Map Failing Backpropagation in Mitsuba 3.6.4
|
<p>I'm trying to set up a scene in Mitsuba 3 where I optimize an environment map parameter, <code>emitter.data</code>. This seems like it should be possible given their <a href="https://mitsuba.readthedocs.io/en/stable/src/inverse_rendering/caustics_optimization.html" rel="nofollow noreferrer">caustics optimization tutorial</a> and the fact that they do exactly this in <a href="https://mitsuba2.readthedocs.io/en/latest/src/inverse_rendering/advanced.html?" rel="nofollow noreferrer">Mitsuba 2</a>.</p>
<p>I've confirmed that manually changing my environment map's bitmap, <code>emitter.data</code>, can properly change the lighting of the rendered image as expected, but I get an error during backprop stating that the loss does not depend on this parameter.</p>
<hr />
<p><strong>Minimum Example</strong><br />
Below is a short script showing the issue. All it does is create a small, uniform environment map, load a basic scene, attempt a render, then compute a dummy loss against itself (just to test gradient flow).</p>
<pre class="lang-py prettyprint-override"><code>import mitsuba as mi
import drjit as dr
import numpy as np
mi.set_variant('llvm_ad_rgb') # I also tried 'cuda_ad_rgb' with the same result
# Create a uniform environment map
env_width, env_height = 256, 128
env_data = mi.Bitmap(np.full((env_height, env_width, 3), 0.5, dtype=np.float32))
# Define a minimal scene that uses the above environment map
scene_dict = {
"type": "scene",
"emitter": {
"type": "envmap",
"bitmap": env_data, # The parameter in question
"scale": 1.0
},
"integrator": {
"type": "path",
"max_depth": 4
},
"sensor": {
"type": "perspective",
"fov": 45,
"to_world": mi.ScalarTransform4f().look_at(
origin=[0, 0, 5],
target=[0, 0, 0],
up=[0, 1, 0]
),
"film": {
"type": "hdrfilm",
"width": 256,
"height": 256
},
"sampler": {
"type": "independent",
"sample_count": 16
}
},
}
scene = mi.load_dict(scene_dict)
# Access and enable gradient tracking on the environment map data
params = mi.traverse(scene)
params.keep(['emitter.data'])
dr.enable_grad(params['emitter.data'])
# Render (forward pass)
image = mi.render(scene, spp=16)
# Create a dummy reference image with the same dimensions as the rendered image
dummy_reference = mi.Bitmap(np.full((256, 256, 3), 0.5, dtype=np.float32))
# Compute a dummy loss for testing backprop
loss = dr.mean(dr.square(image - dummy_reference))
# Attempt backprop - should now properly depend on input variables
dr.backward(loss)
</code></pre>
<p><strong>Error Message</strong></p>
<pre><code>RuntimeError: drjit.backward_from(): the argument does not depend on the input variable(s) being differentiated...
</code></pre>
<hr />
<p><strong>Observations</strong></p>
<ul>
<li>The forward pass definitely depends on <code>emitter.data</code> (e.g., if I multiply it by 0.1 before rendering, the scene darkens).</li>
<li>Backprop, however, claims the final loss has no dependency on <code>emitter.data</code>.</li>
</ul>
<hr />
<p><strong>Questions</strong></p>
<ul>
<li>Has anyone else run into this issue when differentiating environment maps in Mitsuba 3?</li>
<li>Am I missing something, or is this a bug?</li>
</ul>
<p>Thanks!</p>
|
<python><graphics><3d><rendering><mitsuba-renderer>
|
2025-02-23 00:17:58
| 1
| 351
|
Anson Savage
|
79,460,447
| 3,624,171
|
How do I write a query like (A or B) and C in Polars?
|
<p>I expected to keep rows where either a or b is 0.0 (not NaN) and c is 0.0. The Polars documentation said to use | as "or" and & as "and". I believe I have the logic right:
(((a not NaN) or (b not NaN)) and (c not NaN))</p>
<p>However, the output is wrong.</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
import numpy as np
df = pl.DataFrame(
data={
"a": [0.0, 0.0, 0.0, 0.0, np.nan, np.nan, np.nan],
"b": [0.0, 0.0, np.nan, np.nan, 0.0, 0.0, np.nan],
"c": [0.0, np.nan, 0.0, np.nan, 0.0, np.nan, np.nan]
}
)
df.with_columns(
((pl.col('a').is_not_nan() | pl.col('b').is_not_nan())
& pl.col('c').is_not_nan()).alias('Keep'))
df_actual = df.filter(pl.col("Keep") is True)
print("df\n", df)
print("df_expect\n", df_expect)
print("df_actual\n", df_actual)
</code></pre>
<p>df</p>
<pre><code> shape: (7, 3)
┌─────┬─────┬─────┐
│ a ┆ b ┆ c │
│ --- ┆ --- ┆ --- │
│ f64 ┆ f64 ┆ f64 │
╞═════╪═════╪═════╡
│ 0.0 ┆ 0.0 ┆ 0.0 │
│ 0.0 ┆ 0.0 ┆ NaN │
│ 0.0 ┆ NaN ┆ 0.0 │
│ 0.0 ┆ NaN ┆ NaN │
│ NaN ┆ 0.0 ┆ 0.0 │
│ NaN ┆ 0.0 ┆ NaN │
│ NaN ┆ NaN ┆ NaN │
└─────┴─────┴─────┘
</code></pre>
<p>df_expect</p>
<pre><code> shape: (3, 3)
┌─────┬─────┬─────┐
│ a ┆ b ┆ c │
│ --- ┆ --- ┆ --- │
│ f64 ┆ f64 ┆ f64 │
╞═════╪═════╪═════╡
│ 0.0 ┆ NaN ┆ 0.0 │
│ NaN ┆ 0.0 ┆ 0.0 │
│ 0.0 ┆ 0.0 ┆ 0.0 │
└─────┴─────┴─────┘
</code></pre>
<p>df_actual</p>
<pre><code> shape: (0, 3)
┌─────┬─────┬─────┐
│ a ┆ b ┆ c │
│ --- ┆ --- ┆ --- │
│ f64 ┆ f64 ┆ f64 │
╞═════╪═════╪═════╡
└─────┴─────┴─────┘
</code></pre>
|
<python><dataframe><filter><python-polars><polars>
|
2025-02-22 23:14:17
| 1
| 432
|
Steve Maguire
|
79,460,405
| 1,092,125
|
Executable Flask app server locally: change Debug mode, is it possible?
|
<p>If I have received an executable file that runs a Flask app server locally, can I somehow set Debug mode to on?
I know it is a Flask app, and I see in CMD:</p>
<pre><code> * Serving Flask app '09WEB'
* Debug mode: off
WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
* Running on ...
</code></pre>
<p>Can I somehow change Debug mode to on for the executable (not a <code>py</code> script)?</p>
|
<python><flask><debug-mode>
|
2025-02-22 22:30:23
| 0
| 5,385
|
renathy
|
79,460,246
| 420,409
|
Dynamically add subcommands in typer
|
<p>In a typer command line interface with subcommands, is it possible to dynamically (during run time) change the subcommands and the parameters they take?</p>
<p>Consider the following script as an example:</p>
<pre class="lang-py prettyprint-override"><code>import typer
from typing import Annotated, Optional
local_users_app = typer.Typer()
@local_users_app.command("define")
def local_define(address : Annotated[str, typer.Argument()]):
print("local define")
remote_users_app = typer.Typer()
@remote_users_app.command("define")
def remote_define(url: Annotated[str, typer.Argument()],
platform: Annotated[str, typer.Argument()]):
print("remote define")
def user_callback(ctx: typer.Context,
username : Annotated[str, typer.Argument()]):
typer.echo(f"user name: {username}")
# if is_local(username):
# use local commands and interface
# else:
# use remote commands and interface
app = typer.Typer()
users_app = typer.Typer(callback=user_callback)
app.add_typer(users_app, name="user")
@users_app.command()
def define():
print(" ! define")
pass
if __name__=="__main__":
app()
</code></pre>
<p>I would like to call it with the following parameters:</p>
<p><code>user local_user define address "an address"</code></p>
<p>or</p>
<p><code>user remote_user define url some_url platform some_platform</code></p>
<p>So in this (artificial) example, I have a <code>user</code> command that has a callback which queries the user name. The app is able to determine whether it is a remote or a local user, so I don't want to specify this explicitly. Then I want typer to query and parse the following parameters based on which type the given user is.</p>
<p>In a more general view, this is an example that occurs if I have derived classes (remote_user, local_user, derived from user) that I would like to be able to configure/initialize/use with their respective parameter set.</p>
<p>Can that be done and if so, how?</p>
<p>If this cannot be done, are there alternative ways to get a clean command-line interface?</p>
|
<python><typer>
|
2025-02-22 20:24:46
| 0
| 2,429
|
Joma
|
79,460,096
| 12,990,915
|
How to GPU-accelerate PDE solvers in Python?
|
<p>I have been working on a small Python package to solve a class of PDEs using <code>scipy.integrate.solve_ivp</code>. As discretizations are made finer, runtime becomes a bottleneck—especially when I need to solve the PDE for a large number of different initial conditions.</p>
<p>I would like to make use of GPU acceleration to speed things up, but I am unsure of how to integrate GPU-based computations into my current implementation. <a href="https://colab.research.google.com/drive/1-6i1ZAs_SzHzIPVgy-2ZBhQmNMz7KyZe?usp=sharing" rel="nofollow noreferrer">Here</a> is an example of my implementation on Google Colab. In the notebook, I also tried using CuPy to transfer data to the GPU, perform a forward step, and then transfer back to the CPU, but the transfer overhead was too large.</p>
<p>Would I have to rewrite the solvers in something like CuPy/JAX to make use of GPUs?</p>
|
<python><scipy><scientific-computing><cupy><scipy-integrate>
|
2025-02-22 18:31:07
| 1
| 383
|
user572780
|
79,460,071
| 1,089,715
|
Is there a way to disable "Just my code" debugging in Visual Studio 2022 for Python?
|
<p>In Visual Studio 2022, I went to "Debug" => "Options" => "General" and unchecked "Enable Just My Code"</p>
<p>I also went to "Debug" => "Options" => "Python" => "Debugging" and checked "Enable debugging of the Python standard library".</p>
<p>I also created "launch.json", and put <code>"justMyCode": false</code> inside <code>configuration</code>.</p>
<p>But nothing helps, Visual Studio 2022 simply ignores that, and when I try to step into third-party code, I see this: "Frame skipped from debugging during step-in".</p>
<p>Is there any way to enforce Visual Studio to debug also third-party code in Python projects?</p>
|
<python><visual-studio><debugging><visual-studio-2022><visual-studio-debugging>
|
2025-02-22 18:16:39
| 0
| 4,843
|
Vitalii
|
79,459,880
| 11,062,613
|
How can I iterate over all columns using pl.all() in Polars?
|
<p>I've written a custom function in Polars to generate a horizontal forward/backward fill list of expressions. The function accepts an iterable of expressions (or column names) to determine the order of filling.
I want to use all columns via pl.all() as the default. The problem is that pl.all() returns a single expression rather than an iterable, so trying to reverse or iterate over it leads to a TypeError.</p>
<p>Is there a way to convert between single expressions and iterables of expressions?
Any suggestions or workarounds are greatly appreciated!</p>
<p>Here is the function:</p>
<pre><code>from typing import Iterable
from polars._typing import IntoExpr
import polars as pl
def fill_horizontal(exprs: Iterable[IntoExpr], forward: bool = True) -> list[pl.Expr]:
"""Generate a horizontal forward/backward fill list of expressions."""
# exprs = exprs or pl.all() # use all columns as default
cols = [col for col in reversed(exprs)] if forward else exprs
return [pl.coalesce(cols[i:]) for i in range(0, len(cols) - 1)]
</code></pre>
<p>Here is an example:</p>
<pre><code>df = pl.DataFrame({
"col1": [1, None, 2],
"col2": [1, 2, None],
"col3": [None, None, 3]})
print(df)
# shape: (3, 3)
# ┌──────┬──────┬──────┐
# │ col1 ┆ col2 ┆ col3 │
# │ --- ┆ --- ┆ --- │
# │ i64 ┆ i64 ┆ i64 │
# ╞══════╪══════╪══════╡
# │ 1 ┆ 1 ┆ null │
# │ null ┆ 2 ┆ null │
# │ 2 ┆ null ┆ 3 │
# └──────┴──────┴──────┘
print('forward_fill')
print(df.with_columns(fill_horizontal(df.columns, forward=True)))
# shape: (3, 3)
# ┌──────┬──────┬──────┐
# │ col1 ┆ col2 ┆ col3 │
# │ --- ┆ --- ┆ --- │
# │ i64 ┆ i64 ┆ i64 │
# ╞══════╪══════╪══════╡
# │ 1 ┆ 1 ┆ 1 │
# │ null ┆ 2 ┆ 2 │
# │ 2 ┆ 2 ┆ 3 │
# └──────┴──────┴──────┘
print('backward_fill')
print(df.with_columns(fill_horizontal(df.columns, forward=False)))
# shape: (3, 3)
# ┌──────┬──────┬──────┐
# │ col1 ┆ col2 ┆ col3 │
# │ --- ┆ --- ┆ --- │
# │ i64 ┆ i64 ┆ i64 │
# ╞══════╪══════╪══════╡
# │ 1 ┆ 1 ┆ null │
# │ 2 ┆ 2 ┆ null │
# │ 2 ┆ 3 ┆ 3 │
# └──────┴──────┴──────┘
</code></pre>
<p>Edit: Merging @Henry Harbeck's answer and @jqurious's comment seems to be an imperfect but sufficient solution for now.</p>
<pre><code>def fill_horizontal(
exprs: Iterable[IntoExpr] | None = None,
*,
forward: bool = True,
ncols: int = 1000) -> pl.Expr:
"""Generate a horizontal forward/backward fill expression."""
if exprs is None:
# if forward is false, ncols has to be defined with the present number of cols or more
cols = pl.all() if forward else pl.nth(range(ncols, -1, -1))
else:
cols = exprs if forward else reversed(exprs)
return pl.cum_reduce(lambda s1, s2: pl.coalesce(s2, s1), cols).struct.unnest()
</code></pre>
|
<python><dataframe><python-polars>
|
2025-02-22 15:59:56
| 1
| 423
|
Olibarer
|
79,459,847
| 11,475,651
|
ModuleNotFound Error right after I globally install a Python package. What am I missing here?
|
<p>I'm running Python 3.12.3 on Ubuntu 24.04. I have been trying to install a few different packages to determine timezones from longitude and latitude coordinates, and every time I've tried installing packages the last few days, I simply cannot get python to find/use that package. Below is a screenshot.</p>
<p><a href="https://i.sstatic.net/JfXA8Mc2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JfXA8Mc2.png" alt="enter image description here" /></a></p>
<p>To show this in code, here is what I am doing:</p>
<pre class="lang-none prettyprint-override"><code>Installing to existing venv 'geocoder'
installed package geocoder 1.38.1, installed using Python 3.12.3
These apps are now globally available
- geocode
done! ✨ 🌟 ✨
python3
Python 3.12.3 (main, Feb 4 2025, 14:48:35) [GCC 13.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import geocoder
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'geocoder'
>>>
</code></pre>
<p>I would appreciate any pointers to figure out what I am doing wrong.</p>
|
<python><python-3.x><pip>
|
2025-02-22 15:38:39
| 1
| 317
|
Abed
|
79,459,841
| 1,251,549
|
How can I use @multimethod and Self together in Python?
|
<p>Consider the code:</p>
<pre><code>from enum import Enum, auto
from typing import Self
from typing import List
from multipledispatch import dispatch
from multimethod import multimethod
class Name:
__id: list[str]
def __new__(self, id: list[str]):
self.__id = id
# ...
@multimethod
def prefix(self, other: Self) -> Self:
return self.prefix(other.__id)
# EXCEPTION HERE
@multimethod
def prefix(self, prefix: List[str]) -> Self:
return Name(prefix + self.__id)
@multimethod
def postfix(self, postfix: List[str]) -> Self:
return Name(self.__id + postfix)
</code></pre>
<p>Error stack is</p>
<pre><code>Traceback (most recent call last):
File "myfolder\app.py", line 6, in <module>
from my_package import MyClass
File "myfolder\__init__.py", line 13, in <module>
class Name:
File "myfolder\__init__.py", line 50, in Name
@multimethod
^^^^^^^^^^^
File "myfolder\.venv\Lib\site-packages\multimethod\__init__.py", line 257, in __init__
self[signature.from_hints(func)] = func
~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "myfolder\.venv\Lib\site-packages\multimethod\__init__.py", line 299, in __setitem__
parents = types.parents = self.parents(types)
^^^^^^^^^^^^^^^^^^^
File "myfolder\.venv\Lib\site-packages\multimethod\__init__.py", line 282, in parents
parents = {key for key in list(self) if isinstance(key, signature) and key < types}
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "myfolder\.venv\Lib\site-packages\multimethod\__init__.py", line 282, in <setcomp>
parents = {key for key in list(self) if isinstance(key, signature) and key < types}
^^^^^^^^^^^
File "myfolder\.venv\Lib\site-packages\multimethod\__init__.py", line 221, in __lt__
return self != other and self <= other
^^^^^^^^^^^^^
File "myfolder\.venv\Lib\site-packages\multimethod\__init__.py", line 218, in __le__
return self.required <= len(other) and all(map(issubclass, other, self))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\work\tools\Python\Python3.11.3\Lib\typing.py", line 466, in __subclasscheck__
raise TypeError(f"{self} cannot be used with issubclass()")
TypeError: typing.Self cannot be used with issubclass()
</code></pre>
<p>Why did this happen? Can I use the <code>Self</code> return type with <code>@multimethod</code>?</p>
<p>Python version 3.11.3</p>
|
<python><self><multimethod>
|
2025-02-22 15:31:07
| 1
| 33,944
|
Cherry
|
79,459,837
| 179,014
|
Why does calling curl solve my SSL issues in Python on Windows
|
<p>This is a pretty bizarre issue. I have created a fresh Windows 11 VM using VMware Fusion on Mac. I installed VMware Tools and Python on it</p>
<pre><code>winget install -e --id Python.Python.3.11
</code></pre>
<p>I start python from the Powershell terminal and try to connect to Google for testing:</p>
<pre><code>>>> import urllib.request
>>> urllib.request.urlopen("https://www.google.com").status
</code></pre>
<p>The result is an error:</p>
<pre><code>urllib.error.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1006)>
</code></pre>
<p>Now I figured out that to fix the error I need to exit python and type into the terminal</p>
<pre><code>curl https://www.google.com
</code></pre>
<p>This briefly flashes a message in the terminal:</p>
<p><a href="https://i.sstatic.net/fs8n3S6t.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fs8n3S6t.png" alt="enter image description here" /></a></p>
<p>Afterwards my SSL issues are resolved forever. The code above will happily return the expected HTTP status code 200</p>
<pre><code>>>> urllib.request.urlopen("https://www.google.com").status
200
</code></pre>
<p>So why is that? Why does calling <code>curl</code> fix my SSL issues in Python on Windows?</p>
<p>PS. Actually I found another answer on stackoverflow, which suggested to call curl to fix network issues: <a href="https://stackoverflow.com/a/78372621/179014">https://stackoverflow.com/a/78372621/179014</a> . But no explanation is given, why that works.</p>
<p>UPDATE: I just learned that calling <code>curl</code> on Windows Powershell is actually calling a fake-curl (see <a href="https://stackoverflow.com/questions/69261782/why-does-the-same-curl-command-output-different-things-in-windows-and-linux">Why does the same curl command output different things in windows and linux?</a>) . However independent of calling <code>curl</code> or <code>curl.exe</code> the result is the same. It fixes my SSL issues permanently.</p>
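<p>In case it helps with diagnosing, here is a small check I have been running before and after the <code>curl</code> call, to see whether the set of root certificates Python can see actually changes (I have not drawn conclusions from the numbers yet):</p>
<pre><code>import ssl

ctx = ssl.create_default_context()
# counts of certificates currently loaded into the default verification store
print(ctx.cert_store_stats())
</code></pre>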
|
<python><windows><ssl><curl><ssl-certificate>
|
2025-02-22 15:27:26
| 2
| 11,858
|
asmaier
|
79,459,741
| 9,815,697
|
pyo3 multiple `wrap` found
|
<p>I am trying to return a <code>HashMap<String, Vec<Vec<f64>>></code> from Rust to Python.</p>
<p>This is the function in PyO3 wrapper:</p>
<pre class="lang-rust prettyprint-override"><code> pub fn predict_intervals<'py>(
&self,
py: Python<'py>,
flat_data: PyReadonlyArray1<f64>,
rows: usize,
cols: usize,
parallel: Option<bool>,
) -> PyResult<Bound<'py, PyDict<String, PyArray2<f64>>>> {
let flat_data = flat_data.as_slice()?;
let data = Matrix::new(flat_data, rows, cols);
let parallel = parallel.unwrap_or(true);
let predictions: HashMap<String, Vec<Vec<f64>>> = self.booster.predict_intervals(&data, parallel); // Result<YourPredictionType, YourErrorType>
let py_dict: PyDict<String, PyArray2<f64>> = PyDict::new(py);
for (key, value) in predictions?.iter() {
let py_array: PyArray2<f64> = PyArray2::from_array(py, value); // Convert each value to PyArray2
py_dict.set_item(key, py_array)?; // Add to the dictionary
}
Ok(py_dict.into_py_dict_bound(py))
}
</code></pre>
<p>But, I keep getting the following error:</p>
<pre><code>error[E0034]: multiple applicable items in scope
--> src\booster.rs:312:10
|
312 | ) -> PyResult<Bound<'py, PyDict<String, PyArray2<f64>>>> {
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ multiple `wrap` found
|
= note: candidate #1 is defined in an impl for the type `IntoPyConverter<Result<T, E>>`
= note: candidate #2 is defined in an impl for the type `IntoPyConverter<T>`
= note: candidate #3 is defined in an impl for the type `IntoPyObjectConverter<Result<T, E>>`
= note: candidate #4 is defined in an impl for the type `IntoPyObjectConverter<T>`
= note: and 2 others
</code></pre>
<p>How can I solve this problem?</p>
|
<python><rust><pyo3><maturin>
|
2025-02-22 14:19:09
| 0
| 1,182
|
Mutlu Simsek
|
79,459,449
| 4,913,660
|
Subplots with Plotly ternary plots
|
<p>I think I am totally lost when it comes to <code>Plotly Express</code>, <code>Plotly_figure_factory</code> etc.</p>
<p>I cannot figure out how to make subplots with Plotly ternary plots.</p>
<p>First of all, I tried using Plotly Graph Objects: for example, I take the ternary plot from <a href="https://plotly.com/python/ternary-plots/" rel="nofollow noreferrer">ternary scatter plot with Plotly Graph Objects</a> and try to make subplots as detailed <a href="https://plotly.com/python/subplots/" rel="nofollow noreferrer">here</a>. Putting it all together I end up with:</p>
<pre><code>import plotly.graph_objects as go
from plotly.subplots import make_subplots
rawData = [
{'journalist':75,'developer':25,'designer':0,'label':'point 1'},
{'journalist':70,'developer':10,'designer':20,'label':'point 2'},
{'journalist':75,'developer':20,'designer':5,'label':'point 3'},
{'journalist':5,'developer':60,'designer':35,'label':'point 4'},
{'journalist':10,'developer':80,'designer':10,'label':'point 5'},
{'journalist':10,'developer':90,'designer':0,'label':'point 6'},
{'journalist':20,'developer':70,'designer':10,'label':'point 7'},
{'journalist':10,'developer':20,'designer':70,'label':'point 8'},
{'journalist':15,'developer':5,'designer':80,'label':'point 9'},
{'journalist':10,'developer':10,'designer':80,'label':'point 10'},
{'journalist':20,'developer':10,'designer':70,'label':'point 11'},
];
def makeAxis(title, tickangle):
return {
'title': {'text': title, 'font': { 'size': 20}},
'tickangle': tickangle,
'tickfont': { 'size': 15 },
'tickcolor': 'rgba(0,0,0,0)',
'ticklen': 5,
'showline': True,
'showgrid': True
}
fig = go.Figure(go.Scatterternary({
'mode': 'markers',
'a': [i for i in map(lambda x: x['journalist'], rawData)],
'b': [i for i in map(lambda x: x['developer'], rawData)],
'c': [i for i in map(lambda x: x['designer'], rawData)],
'text': [i for i in map(lambda x: x['label'], rawData)],
'marker': {
'symbol': 100,
'color': '#DB7365',
'size': 14,
'line': { 'width': 2 }
}
}))
fig.update_layout({
'ternary': {
'sum': 100,
'aaxis': makeAxis('Journalist', 0),
'baxis': makeAxis('<br>Developer', 45),
'caxis': makeAxis('<br>Designer', -45)
},
'annotations': [{
'showarrow': False,
'text': 'Simple Ternary Plot with Markers',
'x': 0.5,
'y': 1.3,
'font': { 'size': 15 }
}]
})
fig.show()
my_ternary_plot = go.Scatterternary({
'mode': 'markers',
'a': [i for i in map(lambda x: x['journalist'], rawData)],
'b': [i for i in map(lambda x: x['developer'], rawData)],
'c': [i for i in map(lambda x: x['designer'], rawData)],
'text': [i for i in map(lambda x: x['label'], rawData)],
'marker': {
'symbol': 100,
'color': '#DB7365',
'size': 14,
'line': { 'width': 2 }
}})
figure = make_subplots(rows=1, cols=2)
figure.add_trace(my_ternary_plot,
row=1, col=1
)
figure.add_trace(
my_ternary_plot,
row=1, col=2
)
figure.update_layout(height=600, width=800, title_text="Side By Side Subplots")
figure.show()
</code></pre>
<p>But I get the error :</p>
<blockquote>
<p>ValueError: Trace type 'scatterternary' is not compatible with subplot type 'xy'<br />
at grid position (1, 1)</p>
<p>See the docstring for the specs argument to <code>plotly.subplots.make_subplots</code> for more information on subplot types</p>
</blockquote>
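<p>Following that hint about the <code>specs</code> argument, here is a minimal sketch of what I think it is asking for (with made-up data, so I am not sure this is the intended way, and it does not cover the <code>plotly.express</code> case below):</p>
<pre><code>from plotly.subplots import make_subplots
import plotly.graph_objects as go

# declare both cells as ternary subplots so scatterternary traces are accepted
figure = make_subplots(rows=1, cols=2,
                       specs=[[{"type": "ternary"}, {"type": "ternary"}]])

trace = go.Scatterternary(mode="markers", a=[75, 70, 5], b=[25, 10, 60], c=[0, 20, 35])
figure.add_trace(trace, row=1, col=1)
figure.add_trace(trace, row=1, col=2)
figure.update_layout(height=600, width=800, title_text="Side By Side Ternary Subplots")
figure.show()
</code></pre>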
<p>On top of this, I cannot find references to how to get subplots using plotly express plots, e.g. an object like this :</p>
<pre><code>data_rand = pd.DataFrame(data=np.random.uniform(0,1,size=((20,4))), columns = ["a","b", "c", "d"])
data_rand["size"] = 35
import plotly.graph_objects as go
import plotly.express as px
plot_ter = px.scatter_ternary(data_rand, a="a", b = "b", c="c",
color="d", size="size")
fig.show()
</code></pre>
<p>I would like to have subplots that contain not only <code>plotly.express</code> plots but also other <code>matplotlib</code> plots, and I cannot find any hint anywhere. Is there not a simple way to pass, say, ax objects around, as with <code>matplotlib</code> figures?</p>
|
<python><matplotlib><plotly>
|
2025-02-22 10:36:01
| 1
| 414
|
user37292
|
79,459,350
| 2,457,483
|
Cloud Run Fails to Import Custom Package from Artifact Registry (Terraform Deployment)
|
<p>I'm deploying a Cloud Run function using Terraform, and the function depends on a custom Python package that we uploaded to Google Artifact Registry using Twine. However, the deployment fails with the following error: <code>ModuleNotFoundError: No module named 'google.cloud.logging'</code></p>
<pre><code>Traceback (most recent call last):
File "/layers/google.python.pip/pip/bin/functions-framework", line 8, in <module>
sys.exit(_cli())
File "/layers/google.python.pip/pip/lib/python3.9/site-packages/click/core.py", line 1161, in __call__
return self.main(*args, **kwargs)
...
File "/workspace/main.py", line 1, in <module>
from ddsgcloud.utils import cloud_helpers as ch
File "/layers/google.python.pip/pip/lib/python3.9/site-packages/ddsgcloud/utils/cloud_helpers.py", line 1, in <module>
import google.cloud.logging
ModuleNotFoundError: No module named 'google.cloud.logging'
</code></pre>
<p><strong>Setup</strong>:
The function runs Python 3.9 and is deployed using Terraform.
The custom package is stored in Artifact Registry (ddsgbq is the repository, not the package name).</p>
<p>I uploaded the package using twine:</p>
<pre><code>twine upload --repository-url https://us-central1-python.pkg.dev/dev-di-dashboards-data/ddsgbq/ dist/*
</code></pre>
<p><code>requirements.txt</code> contains:</p>
<pre><code>###### core dependencies
click
google-api-core
google-auth
google-cloud-appengine-logging
google-cloud-audit-log
google-cloud-bigquery
google-cloud-bigquery-storage
google-cloud-logging
google-cloud-core
pandas
pandas-gbq
ipython
jupyter
jupyterlab
jupyterlab-git
pyarrow
numpy
twine
keyrings.google-artifactregistry-auth
tabulate
google
itables
protobuf
google-cloud-logging
google-cloud-pubsub
jira
python-dateutil
</code></pre>
<p>Terraform deployment:</p>
<pre><code>resource "google_cloudfunctions2_function" "cloud_function_v2" {
name = "dds-inovapp"
description = "Function for inovapp operations"
runtime = "python39"
service_config {
service_account_email = "dss-ops-1@dev-di-dashboards-data.iam.gserviceaccount.com"
}
}
</code></pre>
<p><strong>What I Suspect:</strong></p>
<ul>
<li>The custom package is not being installed correctly from Artifact
Registry, causing dependencies like google.cloud.logging to be
missing.</li>
<li>There might be an authentication issue preventing Cloud Run from
pulling private packages.</li>
</ul>
<p>What I’ve Tried:</p>
<ul>
<li>Ensured requirements.txt correctly references the package from Artifact Registry.</li>
<li>Verified service account has Artifact Registry Reader permissions.</li>
<li>Deployed with gcloud builds submit --no-cache to force a fresh dependency install.</li>
<li>Checked Cloud Build logs—it seems the package is not being installed.</li>
<li>Added an explicit pip install command in the entry script.</li>
</ul>
<p><strong>Questions</strong>:</p>
<ul>
<li>How can I verify if the custom package is actually being installed during deployment?</li>
<li>How do I ensure Cloud Run has the correct permissions to install private packages from Artifact Registry?</li>
<li>Is there a way to debug why dependencies might be missing inside Cloud Run?
Any insights would be greatly appreciated!</li>
</ul>
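<p>For the first question (verifying whether the custom package is actually installed), the only idea I have so far is to log the installed distributions at startup and check the Cloud Run logs. A small sketch of that, placed at the very top of <code>main.py</code>:</p>
<pre><code>import importlib.metadata

# log every installed distribution so the runtime logs show whether
# ddsgcloud and google-cloud-logging actually made it into the image
names = sorted(d.metadata["Name"] for d in importlib.metadata.distributions() if d.metadata["Name"])
print(names)
</code></pre>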
|
<python><terraform><google-cloud-functions><google-cloud-run><google-artifact-registry>
|
2025-02-22 09:31:28
| 0
| 5,411
|
Daniel
|
79,459,289
| 72,437
|
Comparing WhisperX and Faster-Whisper on RunPod: Speed, Accuracy, and Optimization
|
<p>Recently, I compared the performance of WhisperX and Faster-Whisper on RunPod's server using the following code snippet.</p>
<h1>WhisperX</h1>
<pre><code>model = whisperx.load_model(
"large-v3", "cuda"
)
def run_whisperx_job(job):
start_time = time.time()
job_input = job['input']
url = job_input.get('url', "")
print(f"🚧 Loading audio from {url}...")
audio = whisperx.load_audio(url)
print("✅ Audio loaded")
print("Transcribing...")
result = model.transcribe(audio, batch_size=16)
end_time = time.time()
time_s = (end_time - start_time)
print(f"🎉 Transcription done: {time_s:.2f} s")
#print(result)
# For easy migration, we are following the output format of runpod's
# official faster whisper.
# https://github.com/runpod-workers/worker-faster_whisper/blob/main/src/predict.py#L111
output = {
'detected_language' : result['language'],
'segments' : result['segments']
}
return output
</code></pre>
<h1>Faster-whisper</h1>
<pre><code># Load Faster-Whisper model
model = WhisperModel("large-v3", device="cuda", compute_type="float16")
def run_faster_whisper_job(job):
start_time = time.time()
job_input = job['input']
url = job_input.get('url', "")
print(f"🚧 Downloading audio from {url}...")
audio_path = download_files_from_urls(job['id'], [url])[0]
print("✅ Audio downloaded")
print("Transcribing...")
segments, info = model.transcribe(audio_path, beam_size=5)
output_segments = []
for segment in segments:
output_segments.append({
"start": segment.start,
"end": segment.end,
"text": segment.text
})
end_time = time.time()
time_s = (end_time - start_time)
print(f"🎉 Transcription done: {time_s:.2f} s")
output = {
'detected_language': info.language,
'segments': output_segments
}
# ✅ Safely delete the file after transcription
try:
if os.path.exists(audio_path):
os.remove(audio_path) # Using os.remove()
print(f"🗑️ Deleted {audio_path}")
else:
print("⚠️ File not found, skipping deletion")
except Exception as e:
print(f"❌ Error deleting file: {e}")
rp_cleanup.clean(['input_objects'])
return output
</code></pre>
<h1>General Findings</h1>
<ul>
<li>WhisperX is significantly faster than Faster-Whisper.</li>
<li>WhisperX can process long-duration audio (3 hours), whereas Faster-Whisper encounters unknown runtime errors. My guess is that Faster-Whisper requires more GPU/memory resources to complete the job.</li>
</ul>
<h1>Accuracy Observations</h1>
<ul>
<li>WhisperX is less accurate than Faster-Whisper.</li>
<li>WhisperX has more missing words than Faster-Whisper.</li>
</ul>
<h1>Optimization Questions</h1>
<p>I was wondering what parameters in WhisperX I can experiment with or fine-tune in order to:</p>
<ul>
<li>Improve accuracy</li>
<li>Reduce missing words</li>
<li>Without significantly increasing processing time</li>
</ul>
|
<python><speech-to-text><openai-whisper>
|
2025-02-22 08:47:47
| 0
| 42,256
|
Cheok Yan Cheng
|
79,459,091
| 5,404,620
|
VS Code Pylance does not show parameter-specific hints from a module
|
<p>Actual behaviour: the docstring is missing…
<a href="https://i.sstatic.net/UHdGHvED.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UHdGHvED.png" alt="enter image description here" /></a></p>
<p>Expected behaviour: the parameter-specific docstring is shown (imagine this in my code instead, not just in the module itself as seen below)
<a href="https://i.sstatic.net/GsjNce6Q.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GsjNce6Q.png" alt="enter image description here" /></a></p>
<hr />
<p>Pylance does find the general function though… and shows info of all parameters.
<a href="https://i.sstatic.net/OlzqUwt1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OlzqUwt1.png" alt="enter image description here" /></a></p>
|
<python><visual-studio-code><pylance>
|
2025-02-22 04:38:35
| 0
| 2,887
|
Adler
|
79,459,070
| 2,602,410
|
Remove duplicated definitions of similar arguments in a series of Python functions
|
<p>I have the following code,</p>
<pre class="lang-py prettyprint-override"><code>class Label(QLabel):
def __init__(self, *args,
width:int|None=None,
height:int|None=None,
minWidth:int|None=None,
minHeight:int|None=None,
maxWidth:int|None=None,
maxHeight:int|None=None,
stretchFactor:int=0,
align:str|None=None,
color:str|None=None,
bgColor:str|None=None,
border:str|None=None,
fontSize:int|None=None,
fontWeight:str|None=None,
fontFamily:str|None=None,
padding:str|None=None,
hExpanding=False,
vExpanding=False,
name:str|None=None,
styleSheet:str='',
labelImg:str|None=None,
**kwargs):
super().__init__(*args, **kwargs)
ss(self, width=width, height=height, minWidth=minWidth, minHeight=minHeight,
maxWidth=maxWidth, maxHeight=maxHeight,
stretchFactor=stretchFactor, align=align, color=color, bgColor=bgColor,
border=border, fontSize=fontSize, fontWeight=fontWeight,
fontFamily=fontFamily, padding=padding, hExpanding=hExpanding,
vExpanding=vExpanding, name=name, styleSheet=styleSheet)
if labelImg is not None:
self.setPixmap(QtGui.QPixmap(labelImg))
class Button(QPushButton):
def __init__(self, *args,
width:int|None=None,
height:int|None=None,
minWidth:int|None=None,
minHeight:int|None=None,
maxWidth:int|None=None,
maxHeight:int|None=None,
stretchFactor:int=0,
align:str|None=None,
color:str|None=None,
bgColor:str|None=None,
border:str|None=None,
fontSize:int|None=None,
fontWeight:str|None=None,
fontFamily:str|None=None,
padding:str|None=None,
hExpanding=False,
vExpanding=False,
name:str|None=None,
styleSheet:str='',
onClick:Callable|None=None,
**kwargs):
super().__init__(*args, **kwargs)
ss(self, width=width, height=height, minWidth=minWidth, minHeight=minHeight,
maxWidth=maxWidth, maxHeight=maxHeight,
stretchFactor=stretchFactor, align=align, color=color, bgColor=bgColor,
border=border, fontSize=fontSize, fontWeight=fontWeight,
fontFamily=fontFamily, padding=padding, hExpanding=hExpanding,
vExpanding=vExpanding, name=name, styleSheet=styleSheet)
self.clicked.connect(onClick)
</code></pre>
<p>, and many other similar self-defined widgets like that.</p>
<p>As you can see, many of the initial args are the same, which leads to a lot of duplicated code. I want to define them just once.</p>
<p>But I also want IDEs, like VS Code, to be able to inspect the args of those widget classes, so that when I type something like <code>Label(wid</code>, they can give auto-complete suggestions.</p>
<p>How to do that?</p>
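<p>One direction I have looked at, but am not sure is the right one, is collecting the shared style arguments into a <code>TypedDict</code> and typing <code>**kwargs</code> with <code>typing.Unpack</code> (PEP 692), so the IDE can still expand them. A rough sketch with only a few of the parameters, assuming <code>ss</code> keeps its own defaults for anything not passed (it reuses the QLabel/QtGui/ss names already imported in my module):</p>
<pre><code>from typing import TypedDict, Unpack  # Unpack on **kwargs needs Python 3.12+ or typing_extensions

class StyleKwargs(TypedDict, total=False):
    width: int | None
    height: int | None
    color: str | None
    fontSize: int | None
    # ... and so on for the rest of the shared style arguments

class Label(QLabel):
    def __init__(self, *args, labelImg: str | None = None, **style: Unpack[StyleKwargs]):
        super().__init__(*args)
        ss(self, **style)  # ss falls back to its own defaults for omitted keys
        if labelImg is not None:
            self.setPixmap(QtGui.QPixmap(labelImg))
</code></pre>
<p>I do not know whether VS Code / Pylance actually expands these in the completion popup, which is part of what I am asking.</p>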
|
<python>
|
2025-02-22 04:13:06
| 1
| 1,820
|
Jcyrss
|
79,458,938
| 3,462,494
|
Why does an integer inside a generator function swallow the object of send()?
|
<p>I am not trying to achieve anything -- apart from learning how generator functions and coroutines work on a brick level, which I am not really getting yet, despite lots of reading....</p>
<pre><code>$cat test.py
#No integer
def eee():
num = yield
print(f"First num: {num}")
num = yield
print(f"Second num: {num}")
num = yield
print(f"Third num: {num}")
#integer
def ddd():
yield 100
num = yield
print(f"First num: {num}")
num = yield
print(f"Second num: {num}")
num = yield
print(f"Third num: {num}")
e=eee()
e.send(None)
e.send(1)
e.send(2)
try:
e.send(3)
except StopIteration as e:
print(f'Done with e: {e}\n')
d=ddd()
print(d.send(None))
d.send(1)
d.send(2)
d.send(3)
$python3 test.py
First num: 1
Second num: 2
Third num: 3
Done with e:
100
First num: 2
Second num: 3
</code></pre>
<p>Why does <code>d</code> swallow the num 1?</p>
|
<python><generator>
|
2025-02-22 01:21:56
| 1
| 3,615
|
jouell
|
79,458,703
| 10,065,473
|
How can I load a pretrained transformers model that was manually downloaded?
|
<p>I am unable to download huggingface models through the Python functions due to SSL certificate errors. Perhaps it's due to my company firewall.</p>
<p>I am able to download the contents of a huggingface model repo through a browser to a folder. I'm trying to load this model from disk using <code>TFPreTrainedModel.from_pretrained()</code> and <code>AutoTokenizer.from_pretrained()</code>. From the docs, it seems this is a valid option.</p>
<p>I'm receiving an error message that isn't useful <code>AttributeError: 'NoneType' object has no attribute 'from_pretrained'</code>. Appreciate any help!</p>
<p>Example repo:</p>
<p><a href="https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment-latest/tree/main" rel="nofollow noreferrer">https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment-latest/tree/main</a></p>
<p><strong>Code</strong></p>
<pre><code>from transformers import pipeline, TFPreTrainedModel, AutoTokenizer
import os
dir = "./models/twitter-roberta-base-sentiment-latest/"
print(os.listdir(dir)) # confirm the folder contents
model = TFPreTrainedModel.from_pretrained(dir)
tokenizer = AutoTokenizer.from_pretrained(dir)
analyze = pipeline(task="sentiment-analysis", model=model, tokenizer=tokenizer)
print(analyze("this is good"))
print(analyze("this is bad"))
</code></pre>
<p><strong>Output</strong></p>
<pre><code>2025-02-21 16:40:05.896448: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2025-02-21 16:40:06.653841: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
WARNING:tensorflow:From C:\Users\xxxxx\.pyenv\pyenv-win\versions\3.12.8\Lib\site-packages\tf_keras\src\losses.py:2976: The name tf.losses.sparse_softmax_cross_entropy is deprecated. Please use tf.compat.v1.losses.sparse_softmax_cross_entropy instead.
['config.json', 'gitattributes', 'merges.txt', 'pytorch_model.bin', 'README.md', 'special_tokens_map.json', 'tf_model.h5', 'vocab.json']
Traceback (most recent call last):
File "C:\Users\xxxxx\OneDrive - DuPont\Python Projects\huggingface\sentiment.py", line 8, in <module>
model = TFPreTrainedModel.from_pretrained(dir)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xxxxx\.pyenv\pyenv-win\versions\3.12.8\Lib\site-packages\transformers\modeling_tf_utils.py", line 2726, in from_pretrained
config, model_kwargs = cls.config_class.from_pretrained(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'from_pretrained'
</code></pre>
<p><strong>Docs</strong></p>
<p><a href="https://huggingface.co/docs/transformers/v4.49.0/en/main_classes/model#transformers.TFPreTrainedModel" rel="nofollow noreferrer">https://huggingface.co/docs/transformers/v4.49.0/en/main_classes/model#transformers.TFPreTrainedModel</a></p>
<pre class="lang-none prettyprint-override"><code>pretrained_model_name_or_path (str, optional) — Can be either:
A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
*A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.*
A path or url to a PyTorch state_dict save file (e.g, ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model in a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
None if you are both providing the configuration and state dictionary (resp. with keyword arguments config and state_dict).
</code></pre>
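<p>For what it's worth, I also wondered whether I should be loading through one of the Auto classes rather than <code>TFPreTrainedModel</code> directly. A sketch of that variant (I have not been able to confirm that it avoids the error):</p>
<pre><code>from transformers import AutoTokenizer, TFAutoModelForSequenceClassification, pipeline

dir = "./models/twitter-roberta-base-sentiment-latest/"

model = TFAutoModelForSequenceClassification.from_pretrained(dir)
tokenizer = AutoTokenizer.from_pretrained(dir)

analyze = pipeline(task="sentiment-analysis", model=model, tokenizer=tokenizer)
print(analyze("this is good"))
</code></pre>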
|
<python><huggingface-transformers>
|
2025-02-21 21:54:38
| 2
| 2,492
|
Arthur
|
79,458,586
| 2,130,515
|
pydantic-settings custom_resources does not work as expected
|
<pre><code>import boto3
from pydantic_settings import BaseSettings, SettingsConfigDict

class DummyConfig(BaseSettings):
PARAM1: str
PARAM2: str
API_KEY: str
model_config = SettingsConfigDict(
env_prefix="dummy_", env_file=".env", env_file_encoding="utf-8"
)
@classmethod
def customise_sources(
cls,
init_settings,
env_settings,
file_secret_settings):
return (
init_settings,
env_settings,
get_aws_prams_store,
file_secret_settings,
)
def get_aws_prams_store(param_name: str = "/dev/dummy-service/API_KEY"):
session = boto3.session.Session(
profile_name='my-account', region_name="us-east-1")
client = session.client("ssm")
response = client.get_parameters(
Names=[param_name],
WithDecryption=True
)
params = {param["Name"].split("/")[-1]: param["Value"]
for param in response["Parameters"]}
return params
</code></pre>
<p><em><strong>.env content:</strong></em></p>
<pre><code>dummy_PARAM1=value1
dummy_PARAM2=value1
</code></pre>
<p>I tested <code>get_aws_prams_store</code> outside of the config class and it works well.</p>
<pre><code>config= DummyConfig()
</code></pre>
<p>I got an error <strong>API_KEY Field required [type=missing, .....]</strong></p>
<p>I set API_KEY=None just to pass the test.</p>
<pre><code>print("PARAM1", config.PARAM1) #value1
print("PARAM2", config.PARAM2) # value2
print("API_KEY", config.API_KEY) # None
</code></pre>
<p>I am not sure what I am doing wrong.</p>
|
<python><fastapi><pydantic><aws-parameter-store><pydantic-v2>
|
2025-02-21 20:43:34
| 1
| 1,790
|
LearnToGrow
|
79,458,570
| 825,227
|
Create a categorical column from a datetime column in Python
|
<p>I have a dataframe with a column of datetime type that I'd like to create a 2nd categorical column based upon. The second column should consist of string identifiers using the following logic:</p>
<pre><code>ON: [16:00-09:30)
FH: [09:30-10:00)
M: [10:00-15:00)
SLH: [15:00-15:30)
LH: [15:30-16:00)
</code></pre>
<p>For instance, for dataframe <code>d</code>:</p>
<pre><code>date_rng = pd.date_range(start='2024-01-01', periods=48, freq='30min')
d = pd.DataFrame({'datetime': date_rng})
</code></pre>
<p>I'd like the output below:</p>
<pre><code>d.head(40)
Out[186]:
datetime part
0 2024-01-01 00:00:00 ON
1 2024-01-01 00:30:00 ON
2 2024-01-01 01:00:00 ON
3 2024-01-01 01:30:00 ON
4 2024-01-01 02:00:00 ON
5 2024-01-01 02:30:00 ON
6 2024-01-01 03:00:00 ON
7 2024-01-01 03:30:00 ON
8 2024-01-01 04:00:00 ON
9 2024-01-01 04:30:00 ON
10 2024-01-01 05:00:00 ON
11 2024-01-01 05:30:00 ON
12 2024-01-01 06:00:00 ON
13 2024-01-01 06:30:00 ON
14 2024-01-01 07:00:00 ON
15 2024-01-01 07:30:00 ON
16 2024-01-01 08:00:00 ON
17 2024-01-01 08:30:00 ON
18 2024-01-01 09:00:00 FH
19 2024-01-01 09:30:00 FH
20 2024-01-01 10:00:00 M
21 2024-01-01 10:30:00 M
22 2024-01-01 11:00:00 M
23 2024-01-01 11:30:00 M
24 2024-01-01 12:00:00 M
25 2024-01-01 12:30:00 M
26 2024-01-01 13:00:00 M
27 2024-01-01 13:30:00 M
28 2024-01-01 14:00:00 M
29 2024-01-01 14:30:00 M
30 2024-01-01 15:00:00 SLH
31 2024-01-01 15:30:00 LH
32 2024-01-01 16:00:00 ON
33 2024-01-01 16:30:00 ON
34 2024-01-01 17:00:00 ON
35 2024-01-01 17:30:00 ON
36 2024-01-01 18:00:00 ON
37 2024-01-01 18:30:00 ON
38 2024-01-01 19:00:00 ON
39 2024-01-01 19:30:00 ON
</code></pre>
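<p>For reference, the direction I was considering is mapping the time of day with <code>np.select</code>, following the interval rules at the top literally. A minimal sketch:</p>
<pre><code>from datetime import time

import numpy as np
import pandas as pd

date_rng = pd.date_range(start='2024-01-01', periods=48, freq='30min')
d = pd.DataFrame({'datetime': date_rng})

tod = d['datetime'].dt.time  # time-of-day as datetime.time objects
conditions = [
    (tod >= time(9, 30)) & (tod < time(10, 0)),   # FH
    (tod >= time(10, 0)) & (tod < time(15, 0)),   # M
    (tod >= time(15, 0)) & (tod < time(15, 30)),  # SLH
    (tod >= time(15, 30)) & (tod < time(16, 0)),  # LH
]
d['part'] = np.select(conditions, ['FH', 'M', 'SLH', 'LH'], default='ON')
print(d.head(40))
</code></pre>
<p>I suspect there is a cleaner way with something like <code>pd.cut</code>, which is why I am asking.</p>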
|
<python><dataframe><mapping><categorical-data>
|
2025-02-21 20:34:13
| 2
| 1,702
|
Chris
|
79,458,453
| 2,408,629
|
Python function deepcopy does not copy gym environment LunarLanderContinuous-v2 correctly
|
<p>As the code and its output show, the deepcopy of this environment does not copy aspects of the environment such as the 'action_space' and the 'continuous' attribute.</p>
<p>How can this be resolved?</p>
<pre><code>import gym, copy
env = gym.make('LunarLanderContinuous-v2')
print(env.action_space)
print(env.continuous)
env1 = copy.deepcopy(env)
# env1.continuous = True
print(env1.action_space)
print(env1.continuous)
</code></pre>
<blockquote>
<p>action_space Box(-1.0, 1.0, (2,), float32)</p>
</blockquote>
<blockquote>
<p>continuous True</p>
</blockquote>
<blockquote>
<p>action_space Discrete(4)</p>
</blockquote>
<blockquote>
<p>continuous False</p>
</blockquote>
|
<python><deep-copy><openai-gym>
|
2025-02-21 19:28:44
| 1
| 461
|
PerceptualRobotics
|
79,458,157
| 9,669,142
|
Get database table names and restore database from BAK-file
|
<p>I have several BAK-files locally. Each BAK-file has only one database, whose name may differ, so I made a piece of code in Python that is able to obtain the name of the database from the BAK-file. I have SQL Server and SQL Server Management Studio (SSMS) installed. This code works and I tested it with one public BAK-file: StackOverflowMini.</p>
<pre><code>from sqlalchemy import create_engine, text
driver = 'ODBC Driver 17 for SQL Server'
server = r'.\SQLEXPRESS'
file_path_bak = r'C:\Users\user\Downloads\folder\StackOverflowMini.bak'
# Create engine
master_engine = create_engine(f'mssql+pyodbc://{server}/master?driver={driver}&trusted_connection=yes')
with master_engine.connect() as conn:
result = conn.execute(text(f"RESTORE FILELISTONLY FROM DISK = '{file_path_bak}'"))
database_name = result.fetchone()[0]
# Close the connection with the master database
master_engine.dispose()
</code></pre>
<p>Now I want to add code that is able to list the tables from the database and eventually save the tables to CSV. However, before I can get there I get an error and I have no idea how to solve it.</p>
<p>The full code (so the working code + the added code for the list of tables) is:</p>
<pre><code>from sqlalchemy import create_engine, text
driver = 'ODBC Driver 17 for SQL Server'
server = r'.\SQLEXPRESS'
file_path_bak = r'C:\Users\ganes\Downloads\New folder (3)\StackOverflowMini_database.bak'
# Create engine
master_engine = create_engine(f'mssql+pyodbc://{server}/master?driver={driver}&trusted_connection=yes')
with master_engine.connect() as conn:
result = conn.execute(text(f"RESTORE FILELISTONLY FROM DISK = '{file_path_bak}'"))
database_name = result.fetchone()[0]
# Close the connection with the master database
master_engine.dispose()
# Create new engine with the database
db_engine = create_engine(f'mssql+pyodbc://{server}/{database_name}?driver={driver}&trusted_connection=yes')
with db_engine.connect() as conn:
result = conn.execute(
text("""
SELECT TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE'
"""))
list_table_names = [row[0] for row in result.fetchall()]
db_engine.dispose()
</code></pre>
<p>Here I get the following error:</p>
<pre class="lang-none prettyprint-override"><code>InterfaceError: (pyodbc.InterfaceError) ('28000', '
[28000][Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Login failed for user \'DEKSTOP\\user\'. (18456) (SQLDriverConnect);
[28000][Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Cannot open database "StackOverflowMini" requested by the login. The login failed. (4060);
[28000][Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Login failed for user \'DEKSTOP\\user\'. (18456);
[28000][Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Cannot open database "StackOverflowMini" requested by the login. The login failed. (4060)
')
(Background on this error at: https://sqlalche.me/e/20/rvf5)
</code></pre>
<p>First, I have no idea why I get this error, since a login should not be needed (as far as I know). Second, because of that, I have no idea how to solve it.</p>
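<p>One thing I started wondering while debugging is whether I actually need to run a <code>RESTORE DATABASE</code> first, since so far I only run <code>RESTORE FILELISTONLY</code>. A rough, untested sketch of that step (it probably also needs <code>WITH MOVE</code> clauses for the data/log file paths):</p>
<pre><code># reuses master_engine, text, database_name and file_path_bak from the script above;
# RESTORE cannot run inside a transaction, hence the AUTOCOMMIT execution option
with master_engine.connect().execution_options(isolation_level="AUTOCOMMIT") as conn:
    conn.execute(text(
        f"RESTORE DATABASE [{database_name}] FROM DISK = '{file_path_bak}' WITH REPLACE"
    ))
</code></pre>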
<p>Does anyone know how to solve this?</p>
|
<python><python-3.x><sql-server><odbc>
|
2025-02-21 17:08:06
| 0
| 567
|
Fish1996
|
79,458,134
| 6,566,916
|
Attach PyCharm Community to debugpy inside container
|
<p>I am running a <code>debugpy</code> listener inside a Docker container. I am using docker compose to run the containers. This is my config</p>
<pre class="lang-py prettyprint-override"><code>import debugpy
debugpy.listen(("0.0.0.0", 5680))
_logger.info("Starting debug server at port 5680")
debugpy.wait_for_client() # Execution will pause here until PyCharm connects
_logger.info("Debugger attached. Resuming execution.")
</code></pre>
<p>and my docker-compose.yml</p>
<pre class="lang-yaml prettyprint-override"><code> kafka-integration:
image: web:${TAG:-dev}
build:
context: ../../../
dockerfile: deployments/web.dockerfile
command: [ "python", "-m", "myproject.kafka.upload_channel_consumer" ]
depends_on:
kafka:
condition: service_healthy
env_file:
- ./env/dev.env
ports:
- "5680:5680"
</code></pre>
<p>I am using PyCharm community edition. How am I supposed to attach PyCharm to the debugpy listener inside the container? I think I can only use Run > Attach to Process... I have tried:</p>
<ol>
<li>Checking for the PID inside the container and connecting to it. This produces:</li>
</ol>
<pre><code>error: attach failed: tried to attach to process as user 'x' and process is running as user 'root'
</code></pre>
<ol start="2">
<li>Searching for "docker compose" in the list under "Attach to process..." and connect to them. I get</li>
</ol>
<pre><code>error: attach failed: attach failed (Not allowed to attach to process. Look in the console messages (Console.app), near the debugserver entries, when the attach failed. The subsystem that denied the attach permission will likely have logged an informative message about why it was denied.)
</code></pre>
<p>How am I supposed to attach to debugpy inside the container?</p>
|
<python><docker><pycharm><debugpy>
|
2025-02-21 17:00:21
| 0
| 850
|
Niko
|
79,457,881
| 10,006,235
|
Create column from other columns created within same `with_columns` context
|
<p>Here, column "AB" is just being created and at the same time is being used as input to create column "ABC". This fails.</p>
<pre><code>df = df.with_columns(
(pl.col("A")+pl.col("B")).alias("AB"),
(pl.col("AB")+pl.col("C")).alias("ABC")
)
</code></pre>
<p>The only way to achieve the desired result is a second call to <code>with_columns</code>.</p>
<pre><code>df1 = df.with_columns(
(pl.col("A")+pl.col("B")).alias("AB")
)
df2 = df1.with_columns(
(pl.col("AB")+pl.col("C")).alias("ABC")
)
</code></pre>
|
<python><python-polars>
|
2025-02-21 15:31:16
| 2
| 474
|
Nip
|
79,457,847
| 1,695,672
|
Understanding an instance of Python's struct.unpack
|
<p>I found sample code for interrogating NTP servers on <a href="https://www.mattcrampton.com/blog/query_an_ntp_server_from_python/" rel="nofollow noreferrer">https://www.mattcrampton.com/blog/query_an_ntp_server_from_python/</a>. The code is brief and well-written, but I don't understand the use of <code>struct.unpack</code>. This is the core code:</p>
<pre><code>client = socket.socket(AF_INET,SOCK_DGRAM)
client.sendto(msg.encode('utf-8'),address)
msg,address = client.recvfrom(buf)
t = struct.unpack("!12I",msg)[10]
</code></pre>
<p>It returns an integer value (seconds from 1900-01-01) but I have two questions:</p>
<ul>
<li>How does the unpack work in this instance? How did he arrive at <code>"!12I"</code> to do the decoding?</li>
<li>Is it possible to get a floating point value from the remote server?</li>
</ul>
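<p>For context, my current reading of the format string is that <code>!</code> selects network (big-endian) byte order and <code>12I</code> reads twelve unsigned 32-bit integers, i.e. the 48 bytes of a bare NTP packet, with index 10 being the integer seconds of the transmit timestamp, but I would like that confirmed. A small sketch of how I have been checking, using a fabricated packet:</p>
<pre><code>import struct

print(struct.calcsize("!12I"))  # 48 -- a bare NTP reply is 48 bytes = 12 big-endian uint32 words

# fabricated 48-byte "packet": zeros except the transmit timestamp (words 10 and 11)
fake_reply = bytes(40) + struct.pack("!II", 3913056000, 2147483648)

fields = struct.unpack("!12I", fake_reply)
seconds = fields[10]            # integer seconds since 1900-01-01
fraction = fields[11] / 2**32   # fractional seconds, if this is really how sub-second data is sent
print(seconds + fraction)       # 3913056000.5
</code></pre>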
|
<python><ntp>
|
2025-02-21 15:21:17
| 1
| 443
|
Philip Kearns
|
79,457,647
| 12,034,466
|
Python saving audio streamed to node input as speedup and pitchshifted .wav
|
<p>I am trying to create a ComfyUI node that takes an image sequence and audio from the respective node inputs ("IMAGE", "AUDIO") and saves them to an uncompressed AVI, but for some reason the saved audio is sped up and pitch-shifted.</p>
<p>This is the relevant code that I use for saving temporary wav file:</p>
<pre><code>a_waveform = audio['waveform']
temp_audio_path = self.get_unique_filename(output_dir, filename_prefix, "wav")
waveform_data = a_waveform.cpu().numpy()
sample_rate = audio['sample_rate']
write(temp_audio_path, sample_rate, waveform_data)
</code></pre>
|
<python><audio>
|
2025-02-21 14:11:25
| 0
| 513
|
qraqatit
|
79,457,564
| 13,421,357
|
Selecting and deselecting Plotly subplots?
|
<p>I am generating several plotly subplots in a grid. I'd like the user to be able to select one or two (or even three) subplots and examine them in detail. Is this possible? Currently, I am generating rows of plots that are large enough to be examined in detail, but that makes comparison across plots difficult (a lot of scrolling up and down).</p>
<p>I am open to using Dash, but as far as I understand I cannot export visuals running on Dash to static/HTML files, which is a necessity.</p>
|
<python><plotly>
|
2025-02-21 13:43:01
| 0
| 847
|
hainabaraka
|
79,457,450
| 3,825,495
|
Python - Can't Side-Load Minor Versions (Windows 10)
|
<p>I had 3.12.1 installed (c:/python/python_3_12_1), and ran the installer for 3.12.8, installing to its own folder (c:/python/python_3_12_8). However, when I went to use any applications created with 3.12.1, it wouldn't run them. Looking in the 3_12_1 folder, the installer had removed nearly all of the files.</p>
<p>I hadn't seen this before with Python versions, but it may just be the first time I've attempted to have two versions under the same 3.x. Is this expected behavior?</p>
<p>Desired behavior is to have 3.12.1 installed as well as 3.12.8 to allow for application testing prior to upgrade.</p>
|
<python><windows>
|
2025-02-21 13:06:45
| 2
| 616
|
Michael James
|
79,457,387
| 3,288,787
|
The type initializer for 'Microsoft.ML.Transforms.TimeSeries.FftUtils' threw an exception. in ML.Net
|
<pre><code> int forecastSize = 144;
var dataPoints = 480;
List<PCIDataPoint> pciData = dataPoints.Select(x => new PCIDataPoint { Value = (float)x }).ToList();
IDataView dataView = mlContext.Data.LoadFromEnumerable(pciData);
// Define SSA Forecasting Model
var pipeline = mlContext.Forecasting.ForecastBySsa(
outputColumnName: "PCIForecastedValues", // Explicitly set the output column
inputColumnName: nameof(PCIDataPoint.Value), // Correct input column
windowSize: forecastSize, // Lookback window size
seriesLength: dataPoints.Count, // Entire series length
trainSize: dataPoints.Count, // Training data size
horizon: forecastSize, // Number of predictions
confidenceLevel: 0.95f, // Confidence level
confidenceLowerBoundColumn: "ConfidenceLowerBound",
confidenceUpperBoundColumn: "ConfidenceUpperBound"
);
// Train the model
var model = pipeline.Fit(dataView);
</code></pre>
<p>I am getting an Incorrect Format error. Is there anything wrong in my code? My series length is 480 and my horizon length is 144. Is that the correct way to provide the details for getting the forecast values?</p>
|
<python><machine-learning><ml.net>
|
2025-02-21 12:39:40
| 0
| 1,256
|
Srinivas Ch
|
79,457,363
| 3,084,842
|
Python matplotlib tight_layout spacing for subplots
|
<p>I'm using matplotlib to plot my data in a 4x2 grid of subplots. The <a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.tight_layout.html" rel="nofollow noreferrer"><code>matplotlib.pyplot.tight_layout</code></a> automatically fits the subplots, legend, and text labels into a figure that I can save as png. However, when the legend is extremely long, <code>tight_layout</code> seems to add extra horizontal space to some subplots.</p>
<p>What is the most efficient way to avoid this extra space? The <a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.subplots_adjust.html" rel="nofollow noreferrer"><code>subplots_adjust</code></a> function looks promising, but there's a lot of trial-and-error to adjust everything and I'm hoping to find a quicker automated solution using <code>tight_layout</code>. My MWE is:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
t = np.linspace(-5,5,100)
x1 = np.sin(t)
x2 = np.cos(t)
x3 = np.sin(2*t)
x4 = np.cos(2*t)
x5 = 2*np.sin(t)
x6 = 2*np.cos(t)
x7 = np.sin(0.5*t)
x8 = np.cos(0.5*t)
fig, ax = plt.subplots(nrows=4, ncols=2, figsize=(10, 7))
for r in range(4):
for c in range(2):
ax[r,c].plot(t,x1,label='preliminary 1')
ax[r,c].plot(t,x2,label='preliminary 2')
ax[r,c].plot(t,x3,label='trial 1, result 1')
ax[r,c].plot(t,x4,label='trial 1, result 2')
ax[r,c].plot(t,x5,label='trial 1, result 6')
ax[r,c].plot(t,x6,label='trial 4, result 1')
ax[r,c].plot(t,x7,label='trial 12, result 2')
ax[r,c].plot(t,x8,label='trial 15, result 2')
ax[0,1].legend(loc='best', bbox_to_anchor = (0.3, -1.1, 1.2, 2))
plt.tight_layout()
plt.savefig('myfig.png')
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/e8Ttcziv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/e8Ttcziv.png" alt="graph" /></a></p>
|
<python><matplotlib>
|
2025-02-21 12:30:14
| 2
| 3,997
|
Medulla Oblongata
|