| QuestionId int64 74.8M–79.8M | UserId int64 56–29.4M | QuestionTitle stringlengths 15–150 | QuestionBody stringlengths 40–40.3k | Tags stringlengths 8–101 | CreationDate stringdate 2022-12-10 09:42:47 – 2025-11-01 19:08:18 | AnswerCount int64 0–44 | UserExpertiseLevel int64 301–888k | UserDisplayName stringlengths 3–30 |
|---|---|---|---|---|---|---|---|---|
79,359,253
| 13,259,162
|
Jupyter gives 404 errors on Debian and I'm unable to open any notebook
|
<p>I'm currently unable to use jupyter on my Dell laptop that runs Debian 12. I'm running jupyter lab in a python virtualenv.</p>
<p>When I run <code>jupyter lab</code> I first get the following error message that is recurrent:</p>
<pre><code>[W 2025-01-15 18:44:52.720 ServerApp] 404 GET /api/collaboration/room/JupyterLab:globalAwareness (3eb6b7033dd547fbbe52293f4e9e484f@127.0.0.1) 39.53ms referer=None
</code></pre>
<p>The jupyter menu shows just fine in my firefox browser.</p>
<p>Then when I try to create a new notebook, I get the following 404 error:</p>
<pre><code>[W 2025-01-15 18:45:03.095 ServerApp] 404 PUT /api/collaboration/session/Untitled.ipynb?1736963103090 (3eb6b7033dd547fbbe52293f4e9e484f@127.0.0.1) 1.33ms referer=http://localhost:8888/lab
</code></pre>
<p>And the opened notebook doesn't load:
<a href="https://i.sstatic.net/J0vfGf2C.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/J0vfGf2C.png" alt="new notebook not loading" /></a></p>
<p>Here is some console output:</p>
<pre><code>$ jupyter lab
[I 2025-01-15 18:44:09.733 ServerApp] jupyter_lsp | extension was successfully linked.
[I 2025-01-15 18:44:09.736 ServerApp] jupyter_server_terminals | extension was successfully linked.
[I 2025-01-15 18:44:09.740 ServerApp] jupyterlab | extension was successfully linked.
[I 2025-01-15 18:44:09.899 ServerApp] notebook_shim | extension was successfully linked.
[I 2025-01-15 18:44:09.909 ServerApp] notebook_shim | extension was successfully loaded.
[I 2025-01-15 18:44:09.910 ServerApp] jupyter_lsp | extension was successfully loaded.
[I 2025-01-15 18:44:09.911 ServerApp] jupyter_server_terminals | extension was successfully loaded.
[I 2025-01-15 18:44:09.911 LabApp] JupyterLab extension loaded from /home/noe/Documents/univ/TER/venv/lib/python3.11/site-packages/jupyterlab
[I 2025-01-15 18:44:09.912 LabApp] JupyterLab application directory is /home/noe/Documents/univ/TER/venv/share/jupyter/lab
[I 2025-01-15 18:44:09.912 LabApp] Extension Manager is 'pypi'.
[I 2025-01-15 18:44:09.991 ServerApp] jupyterlab | extension was successfully loaded.
[I 2025-01-15 18:44:09.991 ServerApp] Serving notebooks from local directory: /home/noe/Documents/univ/TER/venv
[I 2025-01-15 18:44:09.991 ServerApp] Jupyter Server 2.15.0 is running at:
[I 2025-01-15 18:44:09.991 ServerApp] http://localhost:8888/lab?token=6b54a68b3230f79b6f719f36cdfd9e3dbafcdcba3bc270e7
[I 2025-01-15 18:44:09.991 ServerApp] http://127.0.0.1:8888/lab?token=6b54a68b3230f79b6f719f36cdfd9e3dbafcdcba3bc270e7
[I 2025-01-15 18:44:09.991 ServerApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[C 2025-01-15 18:44:10.009 ServerApp]
To access the server, open this file in a browser:
file:///home/noe/.local/share/jupyter/runtime/jpserver-27699-open.html
Or copy and paste one of these URLs:
http://localhost:8888/lab?token=6b54a68b3230f79b6f719f36cdfd9e3dbafcdcba3bc270e7
http://127.0.0.1:8888/lab?token=6b54a68b3230f79b6f719f36cdfd9e3dbafcdcba3bc270e7
[I 2025-01-15 18:44:10.561 ServerApp] Skipped non-installed server(s): bash-language-server, dockerfile-language-server-nodejs, javascript-typescript-langserver, jedi-language-server, julia-language-server, pyright, python-language-server, python-lsp-server, r-languageserver, sql-language-server, texlab, typescript-language-server, unified-language-server, vscode-css-languageserver-bin, vscode-html-languageserver-bin, vscode-json-languageserver-bin, yaml-language-server
[W 2025-01-15 18:44:12.203 LabApp] Could not determine jupyterlab build status without nodejs
[W 2025-01-15 18:44:52.720 ServerApp] 404 GET /api/collaboration/room/JupyterLab:globalAwareness (3eb6b7033dd547fbbe52293f4e9e484f@127.0.0.1) 39.53ms referer=None
[I 2025-01-15 18:45:03.006 ServerApp] Creating new notebook in
[W 2025-01-15 18:45:03.095 ServerApp] 404 PUT /api/collaboration/session/Untitled.ipynb?1736963103090 (3eb6b7033dd547fbbe52293f4e9e484f@127.0.0.1) 1.33ms referer=http://localhost:8888/lab
[W 2025-01-15 18:45:52.743 ServerApp] 404 GET /api/collaboration/room/JupyterLab:globalAwareness (3eb6b7033dd547fbbe52293f4e9e484f@127.0.0.1) 1.03ms referer=None
[W 2025-01-15 18:46:52.756 ServerApp] 404 GET /api/collaboration/room/JupyterLab:globalAwareness (3eb6b7033dd547fbbe52293f4e9e484f@127.0.0.1) 3.77ms referer=None
[W 2025-01-15 18:47:52.971 ServerApp] 404 GET /api/collaboration/room/JupyterLab:globalAwareness (3eb6b7033dd547fbbe52293f4e9e484f@127.0.0.1) 3.24ms referer=None
[W 2025-01-15 18:48:52.983 ServerApp] 404 GET /api/collaboration/room/JupyterLab:globalAwareness (3eb6b7033dd547fbbe52293f4e9e484f@127.0.0.1) 3.23ms referer=None
[W 2025-01-15 18:49:53.027 ServerApp] 404 GET /api/collaboration/room/JupyterLab:globalAwareness (3eb6b7033dd547fbbe52293f4e9e484f@127.0.0.1) 3.63ms referer=None
</code></pre>
<p>Jupyter version:</p>
<pre><code>$ jupyter --version
Selected Jupyter core packages...
IPython : 8.31.0
ipykernel : 6.29.5
ipywidgets : not installed
jupyter_client : 8.6.3
jupyter_core : 5.7.2
jupyter_server : 2.15.0
jupyterlab : 4.3.4
nbclient : 0.10.2
nbconvert : 7.16.5
nbformat : 5.10.4
notebook : not installed
qtconsole : not installed
traitlets : 5.14.3
</code></pre>
<p>I get the same error when I run <code>jupyter-notebook</code> outside of a virtualenv, and I'm still unable to run any notebook.</p>
<p>I don't think the problem comes from the Jupyter installation, because I get the same result on multiple installations. I was also able to run JupyterLab in a Docker container.</p>
|
<python><jupyter><virtualenv><http-status-code-404>
|
2025-01-15 17:57:28
| 1
| 309
|
Noé Mastrorillo
|
79,359,228
| 2,531,068
|
Matplotlib bar graph incoherent behavior when using bottom and height parameters
|
<p>I am trying to plot a bar chart where, for each day used as the X axis, we see the activity between different periods of time as a bar going from the start time of the activity to its end time. So the Y axis goes from 0 to 24, and, for example, I could have a bar from 1AM to 2AM and a second bar from 3PM to 5PM.</p>
<p>I have used a matplotlib bar graph with the bottom and height parameters to make this work, and to a certain extent it does work. When I have very little data, everything is displayed correctly, but when I have dozens of activities, somehow the bars come out wrong.</p>
<p>The code is the following (it comes from Power BI, and I've never used Pandas, so I just converted straight to Python lists):</p>
<pre class="lang-py prettyprint-override"><code>import os, uuid, matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot
import pandas
import datetime
import matplotlib.ticker
import matplotlib.dates
import numpy
dataset = pandas.read_csv('input_df_3a3333a0-fd5d-4630-8707-5fc23cb0b326.csv')
matplotlib.pyplot.figure(figsize=(5.55555555555556,4.16666666666667), dpi=72)
test_date_time = dataset.to_numpy().tolist()
test_date_time = list(filter(lambda x: type(x[0]) is not float, test_date_time))
test_date_time = [(datetime.datetime.fromisoformat(x[0]), datetime.datetime.fromisoformat(x[1])) for x in test_date_time]
test_date_time = sorted(test_date_time, key=lambda x:x[0])
values = {"days": [], "bottom": [], "height": []}
# ### KIND OF WORKING
for (test_start, test_end) in test_date_time:
values["days"].append(test_start.date())
values["bottom"].append(test_start.time().hour + (test_start.time().minute / 60) + (test_start.time().second / 3600))
values["height"].append((test_end - test_start).total_seconds() / 3600)
matplotlib.pyplot.bar(x=values["days"], height=values["height"], bottom=values["bottom"])
matplotlib.pyplot.show()
</code></pre>
<p>I have checked the values dictionary and it looks pretty good to me. But when I plot it, I often get values far exceeding 24, like this for example:</p>
<p><a href="https://i.sstatic.net/GPMUIgcQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GPMUIgcQ.png" alt="Bar graph showing the issue" /></a></p>
<p>The problem is, if I check through the debugger, the last height value for April 4th is 1.44 hours and the bottom value is 19.01 hours, so I should have a bar going from 19.01 to 20.45 and that's it, which is not at all what I get.</p>
<p>I have looked at <a href="https://stackoverflow.com/questions/51505291/timeline-bar-graph-using-python-and-matplotlib">Timeline bar graph using python and matplotlib</a>, but I'm just curious why this is happening. Example data can be found here: <a href="https://filebin.net/rivcvi6d9v92sywk" rel="nofollow noreferrer">https://filebin.net/rivcvi6d9v92sywk</a></p>
|
<python><matplotlib>
|
2025-01-15 17:46:22
| 1
| 609
|
Loufylouf
|
79,359,213
| 2,731,076
|
Efficient parsing and processing of millions of json objects in Python
|
<p>I have some working code that I need to improve the run time on dramatically and I am pretty lost. Essentially, I will get zip folders containing tens of thousands of json files, each containing roughly 1,000 json messages. There are about 15 different types of json objects interspersed in each of these files and some of those objects have lists of dictionaries inside of them while others are pretty simple. I need to read in all the data, parse the objects and pull out the relevant information, and then pass that parsed data back and insert it into a different program using an API for a third party software (kind of a wrapper around a proprietary implementation of SQL).</p>
<p>So I have code that does all of that. The problem is it takes around 4-5 hours to run each time and I need to get that closer to 30 minutes.</p>
<p>My current code relies heavily on asyncio. I use that to get some concurrency, particularly while reading the json files. I have also started to profile my code and have so far moved to using orjson to read in the data from each file, and rewrote each of my parser functions in Cython to get some improvements on that side as well. However, I use asyncio queues to pass stuff back and forth, and my profiler shows a lot of time is spent just in the <code>queue.get</code> and <code>queue.put</code> calls. I also looked into msgspec to improve reading in the json data, and while that was faster, it became slower when I had to send the <code>msgspec.Struct</code> objects into my Cython code and use them instead of just a dictionary.</p>
<p>So I was just hoping for some general help on how to improve this process. I have read about multiprocessing, both with multiprocessing pools and concurrent.futures, but both of those turned out to be slower than my current implementation. I was thinking maybe I need to change how I pass stuff through the queues, so I passed the full json data for each file instead of each individual message (about 1,000 documents each), but that didn't help.</p>
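<p>To illustrate what I mean by passing the full file contents instead of individual messages, the loader change was roughly the following (a simplified, hypothetical sketch of the same idea as in the pseudo-code further down):</p>
<pre><code>async def load_json_batched(file_path, queue):
    async with aiofiles.open(file_path, mode="rb") as f:
        data = await f.read()
    json_data = await asyncio.to_thread(orjson.loads, data)
    # one put per file (~1,000 messages) instead of one put per message;
    # the consumer then iterates over the whole batch it receives
    await queue.put(json_data)
</code></pre>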
<p>I have read so many SO questions/answers but it seems like a lot of people have very uniform json data (not 15 different message types). I looked into batching but I don't fully understand how that changes things - that was what I was doing using concurrent.futures but again it actually took longer.</p>
<p>Overall I would like to keep it as queues because in the future I would like to run this same process on streaming data, so that part would just take the place of the json reading and instead each message received over the stream would be put into the queue and everything else would work the same.</p>
<p>Some pseudo-code is included below.</p>
<p>main.py</p>
<pre><code>import asyncio
from glob import glob

import aiofiles
import orjson

from parser_dispatcher import ParserDispatcher
from sql_dispatcher import SqlDispatcher


async def load_json(file_path, queue):
    async with aiofiles.open(file_path, mode="rb") as f:
        data = await f.read()
    json_data = await asyncio.to_thread(orjson.loads, data)
    for msg in json_data:
        await queue.put(msg)


async def load_all_json_files(base_path, queue):
    file_list = glob(f"{base_path}/*.json")
    tasks = [load_json(file_path, queue) for file_path in file_list]
    await asyncio.gather(*tasks)
    await queue.put(None)  # sentinel to end the processing


async def main():
    base_path = r"\path\to\json\folder"
    parser_queue = asyncio.Queue()
    sql_queue = asyncio.Queue()
    parser_dispatch = ParserDispatcher()
    sql_dispatch = SqlDispatcher()
    load_task = load_all_json_files(base_path, parser_queue)
    parser_task = parser_dispatch.process_queue(parser_queue, sql_queue)
    sql_task = sql_dispatch.process_queue(sql_queue)
    await asyncio.gather(load_task, parser_task, sql_task)


if __name__ == "__main__":
    asyncio.run(main())
</code></pre>
<p>parser_dispatcher.py</p>
<pre><code>import asyncio

import message_parsers as mp


class ParserDispatcher:
    def __init__(self):
        self.parsers = {
            ("1", "2", "3"): mp.parser1,
            # ... etc
        }  # this is a dictionary where keys are tuples and values are the parser functions

    def dispatch(self, msg):
        parser_key = (msg.get("type"), msg.get("source"), msg.get("channel"))
        parser = self.parsers.get(parser_key)
        if parser:
            new_msg = parser(msg)
        else:
            new_msg = []
        return new_msg

    async def process_queue(self, parser_queue, sql_queue):
        while True:
            msg = await parser_queue.get()
            if msg is None:
                await sql_queue.put(None)
                parser_queue.task_done()
                break  # stop once the end-of-stream sentinel arrives
            parsed_messages = self.dispatch(msg)
            for parsed_message in parsed_messages:
                await sql_queue.put(parsed_message)
</code></pre>
<p>sql_dispatcher.py</p>
<pre><code>import asyncio

import proprietarySqlLibrary as sql


class SqlDispatcher:
    def __init__(self):
        # do all the connections to the DB in here
        ...

    async def process_queue(self, sql_queue):
        while True:
            msg = await sql_queue.get()
            # then go through and add this data to the DB
            # this part is also relatively slow but I'm focusing on the first half for now
            # since I don't have control over the DB stuff
</code></pre>
|
<python><json><python-asyncio><python-multiprocessing>
|
2025-01-15 17:38:39
| 2
| 813
|
user2731076
|
79,359,116
| 7,906,206
|
Issues with pip install pyodbc SyntaxError: invalid syntax
|
<p>I am trying to run pip install pyodbc using the Visual Studio Code terminal and it says syntax error. I am doing exactly as the Microsoft documentation describes here:</p>
<p><a href="https://learn.microsoft.com/en-us/sql/connect/python/pyodbc/step-1-configure-development-environment-for-pyodbc-python-development?view=sql-server-ver16&tabs=windows" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/sql/connect/python/pyodbc/step-1-configure-development-environment-for-pyodbc-python-development?view=sql-server-ver16&tabs=windows</a></p>
<p>I am running it as:</p>
<p><a href="https://i.sstatic.net/j6TTZaFd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/j6TTZaFd.png" alt="enter image description here" /></a></p>
<p>But its failing with the following error:</p>
<pre><code> File "c:\Users\Documents\DBA_Projects\pip_install_pyodbc.py", line 1
pip install pyodbc
^^^^^^^
SyntaxError: invalid syntax
PS C:\Users\>
</code></pre>
<p>My machine has Windows 11. What could be the problem? Or how do I run it successfully?</p>
|
<python>
|
2025-01-15 17:03:21
| 1
| 1,216
|
Immortal
|
79,358,829
| 1,540,967
|
How can I make "ruff check" assume a specific Python version for allowed syntax?
|
<p>I am on Linux, created a Python 3.9 venv, installed ruff in the venv, wrote this code:</p>
<pre class="lang-py prettyprint-override"><code>def process_data(data: list[int]) -> str:
    match data:
        case []:
            return "No data"
        case [first, *_] if (average := lambda: sum(data) / len(data)) and average() > 50:
            return f"Data average is high: {average():.2f}, starting with {first}"
        case _:
            return f"Processed {len(data)} items."
</code></pre>
<p>The <code>match</code> syntax is not available in Python 3.9, so running <code>ruff check</code>, I would expect an error. I have tried to set <code>project.requires-python</code> and <code>ruff.target-version</code> but the latter seems to be used only for the formatter according to the <a href="https://docs.astral.sh/ruff/settings/#target-version" rel="nofollow noreferrer">docs</a>. What am I missing?</p>
|
<python><ruff>
|
2025-01-15 15:34:07
| 1
| 389
|
Sebastian Elsner
|
79,358,781
| 1,358,603
|
`OSError: [Errno 24] Too many open files` with python3.12.3 multiprocessing on ubuntu 24.04 terminal
|
<p>The following code always gives me the error
<code>OSError: [Errno 24] Too many open files</code>
once it reaches 508 created processes, which means every process leaves two file descriptors open, and then I reach the system's limit:</p>
<pre><code>from multiprocessing import Process
def do_job(task):
    print("Task no " + str(task))

def main():
    number_of_processes = 1000
    processes = []
    for i in range(number_of_processes):
        p = Process(target=do_job, args=(i,))
        processes.append(p)

    # creating processes
    for p in processes:
        p.start()
        p.join()

    return True

if __name__ == "__main__":
    main()
</code></pre>
<p>If I run the same code in the VS Code terminal, it completes with no issue. I have looked at many similar online threads but have yet to find a working solution.</p>
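<p>For reference, the descriptor limit I seem to be hitting can be inspected (and, per process, raised) from Python itself; this is just a diagnostic sketch and assumes Linux, where the <code>resource</code> module is available:</p>
<pre><code>import resource

# current soft/hard limits on open file descriptors for this process
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(soft, hard)

# the soft limit can be raised up to the hard limit for the current process
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
</code></pre>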
|
<python><ubuntu><multiprocessing>
|
2025-01-15 15:21:28
| 1
| 757
|
Kyriakos
|
79,358,688
| 788,349
|
Expensive django calculation needs to run on model change
|
<p>I am working on a Django application that has a model with a field that needs to be recalculated when the model changes. The calculation itself is relatively expensive (1+ seconds), especially when the field is recalculated for a large queryset. It is important that this field is recalculated immediately following the model change, because it determines the workflow the user needs to follow, and the field value is propagated to the UI for the user to see. At a minimum, the calculation needs to be guaranteed to run after an update, or there needs to be some way to indicate that the calculation is stale.</p>
<p>The current implementation, which was not an issue when the calculation was less complicated/expensive (before the changes requested by the client), is as follows:</p>
<ul>
<li>user makes edit in UI and model is updated</li>
<li>API receives changes and applies changes to model</li>
<li>model .save() is overridden so the field is calculated and set</li>
</ul>
<p>With this implementation we also disabled the queryset manager on the model, to prevent the use of .update() and ensure that the field is always recalculated on .save(). A sketch of this setup is below.</p>
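<p>A minimal sketch of that current setup (the model, field, and helper names here are made up purely for illustration):</p>
<pre><code>from django.db import models

def expensive_calculation(instance):
    # placeholder for the real 1+ second computation
    ...

class Order(models.Model):  # hypothetical model
    workflow_state = models.CharField(max_length=50, editable=False, blank=True)

    def save(self, *args, **kwargs):
        # recalculated synchronously on every save, which is what blocks the caller
        self.workflow_state = expensive_calculation(self)
        super().save(*args, **kwargs)
</code></pre>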
<p>I am curious how others would implement something like this as efficiently as possible, without relying on the blocking save to finish.</p>
|
<python><django>
|
2025-01-15 14:58:36
| 1
| 513
|
acmisiti
|
79,358,613
| 7,007,547
|
Additional Python Files in azure Function
|
<p>I have a Python function <code>function_app.py</code> (triggered every minute) in Azure, like this:</p>
<pre><code>import azure.functions as func
import logging
app = func.FunctionApp()
@app.timer_trigger(schedule="0 */1 * * * *", arg_name="myTimer", run_on_startup=False,
                   use_monitor=False)
def zsdbi(myTimer: func.TimerRequest) -> None:
    if myTimer.past_due:
        logging.info('The timer is past due!')
    logging.info('09 Python timer trigger function executed.')
</code></pre>
<p>On the same level in the filesystem I have a file <code>newconfig.py</code>, like this:</p>
<pre><code>class MyConfig:
    def __init__(self, ftp_host=None, ftp_username=None, ftp_password=None,
                 sharepoint_url=None, sharepoint_clientid=None, sharepoint_clientsecret=None, azure=False):
        self._ftp_host = ftp_host
        self._ftp_username = ftp_username
        self._ftp_password = ftp_password
        self._sharepoint_url = sharepoint_url
        self._sharepoint_clientid = sharepoint_clientid
        self._sharepoint_clientsecret = sharepoint_clientsecret
</code></pre>
<p>When I try to import <code>newconfig.py</code> in <code>function_app.py</code> like this:</p>
<pre><code>import azure.functions as func
import datetime
import json
import logging
import newconfig # This results in Error
app = func.FunctionApp()
@app.timer_trigger(schedule="0 */1 * * * *", arg_name="myTimer", run_on_startup=False,
use_monitor=False)
</code></pre>
<p>The function is not running anymore, which I assume is caused by an error during the import. How can I add additional Python files, not available as public packages, to my Azure Function?</p>
<p><strong>Edit 1</strong></p>
<p>The function is deployed via github actions as :</p>
<pre><code>steps:
  - name: Checkout
    uses: actions/checkout@v4
  - name: 'Azure Login'
    uses: azure/login@v2
    with:
      client-id: ${{ env.ARM_CLIENT_ID }}
      subscription-id: ${{ env.ARM_SUBSCRIPTION_ID }}
      tenant-id: ${{ env.ARM_TENANT_ID }}
  - name: Set up Python
    uses: actions/setup-python@v4
    with:
      python-version: '3.9'
  - name: Install dependencies
    run: |
      python -m pip install --upgrade pip
      pip install -r requirements.txt
  - name: 'Download Azure Function publishing profile'
    env:
      AZURE_SUBSCRIPTION_ID: ${{ env.ARM_SUBSCRIPTION_ID }}
      FUNCTION_APP_RESOURCE_GROUP: ${{ env.ARM_RESOURCE_GROUP_NAME }}
      FUNCTION_APP_NAME: fa-${{ env.APP_NAME }}
    run: |
      echo "FUNCTION_APP_PUB_PROFILE=$(az functionapp deployment list-publishing-profiles --subscription $AZURE_SUBSCRIPTION_ID --resource-group $FUNCTION_APP_RESOURCE_GROUP --name $FUNCTION_APP_NAME --xml)" >> $GITHUB_ENV
  - name: "Deploy function"
    uses: Azure/functions-action@v1
    with:
      app-name: fa-${{ env.APP_NAME }}
      publish-profile: ${{ env.FUNCTION_APP_PUB_PROFILE }}
      package: src
</code></pre>
<p><code>src</code> contains all the Python files.</p>
|
<python><azure><azure-functions>
|
2025-01-15 14:35:07
| 2
| 1,140
|
mbieren
|
79,358,567
| 1,520,991
|
Polars - Replace letter in string with uppercase letter
|
<p>Is there any way in Polars to replace the character just after the <code>_</code> with its uppercase version using a regex replace? So far I have achieved it using <a href="https://docs.pola.rs/api/python/version/0.19/reference/expressions/api/polars.Expr.map_elements.html#polars.Expr.map_elements" rel="nofollow noreferrer">polars.Expr.map_elements</a>.</p>
<p>Is there any alternative using native expression API?</p>
<pre class="lang-py prettyprint-override"><code>import re
import polars as pl
# Initialize
df = pl.DataFrame(
    {
        "id": [
            "accessible_bidding_strategy.id",
            "accessible_bidding_strategy.name",
            "accessible_bidding_strategy.owner_customer_id",
        ]
    }
)

# Transform
df = df.with_columns(
    pl.col("id")
    .map_elements(
        lambda val: re.sub(r"_\w", lambda match: match.group(0)[1].upper(), val),
        return_dtype=pl.String,
    )
    .alias("parsed_id")
)
print(df)
</code></pre>
<h3>Output</h3>
<pre><code>shape: (3, 2)
┌────────────────────────────────────────────────┬────────────────────────────────────────────┐
│ id                                             ┆ parsed_id                                  │
│ ---                                            ┆ ---                                        │
│ str                                            ┆ str                                        │
╞════════════════════════════════════════════════╪════════════════════════════════════════════╡
│ accessible_bidding_strategy.id                 ┆ accessibleBiddingStrategy.id               │
│ accessible_bidding_strategy.name               ┆ accessibleBiddingStrategy.name             │
│ accessible_bidding_strategy.owner_customer_id  ┆ accessibleBiddingStrategy.ownerCustomerId  │
└────────────────────────────────────────────────┴────────────────────────────────────────────┘
</code></pre>
|
<python><string><python-polars><camelcasing><snakecasing>
|
2025-01-15 14:22:43
| 3
| 3,125
|
dikesh
|
79,358,525
| 10,958,326
|
How to resolve vscode python REPL ModuleNotFoundError
|
<p>In VS Code I'm encountering a <code>ModuleNotFoundError</code> when trying to import a module (<code>src</code>) located in my project workspace folder in a Python REPL. Here's the situation:</p>
<ol>
<li><p>When I run:</p>
<pre class="lang-py prettyprint-override"><code>import os
os.getcwd()
</code></pre>
<p>I get the correct workspace folder path.</p>
</li>
<li><p>However, trying to import the module:</p>
<pre class="lang-py prettyprint-override"><code>import src
</code></pre>
<p>results in a <code>ModuleNotFoundError</code>.</p>
</li>
</ol>
<h3>Steps I've Tried</h3>
<ul>
<li>I updated the <code>.env</code> file with the following:
<pre><code>PYTHONPATH=${workspaceFolder}
</code></pre>
</li>
<li>Reloaded and restarted VS Code.</li>
<li>Confirmed that VS Code is using the correct <code>.env</code> file.</li>
</ul>
<p>I am aware of <a href="https://stackoverflow.com/questions/72707064/vs-code-does-not-find-own-python-module-in-workspace">VS Code does not find own python module in Workspace</a></p>
<p>but I find the proposed solutions very inconvenient and it seems to me that this is somehting that must be easily configurable and I am just doing somehting essential wrong</p>
<h3>Observations</h3>
<p>running:</p>
<pre class="lang-py prettyprint-override"><code>from dotenv import load_dotenv
load_dotenv()
</code></pre>
<p>does not load the environment file</p>
<p>When I check:</p>
<pre class="lang-py prettyprint-override"><code>from dotenv import find_dotenv
find_dotenv()
</code></pre>
<p>it points to:</p>
<pre><code>/.../.vscode-server/extensions/ms-python.python-2024.22.2-linux-x64/python_files/.env
</code></pre>
<p>This is not the <code>.env</code> file in my workspace folder.</p>
<h3>Temporary Workaround</h3>
<p>Running the following before importing my module works:</p>
<pre class="lang-py prettyprint-override"><code>import sys
sys.path.append(os.getcwd())
</code></pre>
<p>But I know this is not a proper long-term solution.</p>
<h3>Additional Context</h3>
<p>I am working in a remote SSH environment, which might be relevant.</p>
<h3>Question</h3>
<p>How can I configure my environment so that the Python REPL in VS Code correctly recognizes modules in my workspace folder without needing to manually adjust <code>sys.path</code>? Coming from PyCharm, I am very used to working with a REPL workflow, and I am quite irritated that it does not work out of the box as it does in PyCharm. Am I approaching something wrong here?</p>
|
<python><visual-studio-code><read-eval-print-loop>
|
2025-01-15 14:10:28
| 1
| 390
|
algebruh
|
79,358,216
| 4,875,641
|
Python v3.13 has broken Email delivery due to an SSL change
|
<p>I have only seen one online report of this to date, but perhaps others have run into this and found a way around it. Email delivery using the Python email package has worked in prior versions of Python, but fails in version 3.13. The problem is not with the email package but with SSL.</p>
<p>It appears that version 3.13 of Python has changed the requirement for one of the settings in SSL certificates. If your email server's certificates do not have this new setting, your email deliveries will fail with an SSL error.</p>
<p>There is a field called Basic Constraints in each certificate, which was ignored in prior versions of Python, but now Python expects that field to be marked Critical. If it is not, SSL declares the handshake a failed verification and throws an exception.</p>
<p>I reviewed the certificate that my email server is using for SSL, and this field is set to False, not Critical. And since the hosting provider is unable to change a certificate for individual clients, it would be necessary to purchase a custom certificate and install it on the server. This is costly for each domain being used for email delivery.</p>
<p>Have others run into this issue and gotten around it without having to purchase a custom certificate because of the setting their email service provider has chosen? Certainly one way around it is to ignore SSL verification, but that does not seem wise.</p>
<p>I am surprised that there is not a setting in Python to accept the prior level of certificate verification that v3.12 supported. But we can run the same software on different versions of Python and see the failure only on v3.13.</p>
<pre><code>import smtplib
import ssl

EmailServer = 'SomeDomain.com'
EmailPort = 465

context = ssl.create_default_context()
with smtplib.SMTP_SSL(EmailServer, EmailPort, context=context) as server:
    server.login(EmailServerUsername, EmailServerPassword)
    server.send_message(msg)
</code></pre>
<p>In this code segment in V3.13.1, an exception is thrown at the "with smtplib.SMTP_SSL" line and the server.login is not reached.</p>
<p>The exception thrown is: Emailout exception [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: Basic Constraints of CA cert not marked critical (_ssl.c:1018)</p>
<hr />
<p>SOLUTION: Reading the Python 3.13 release notes, I found that a flag is now set in SSL that requires a stricter SSL certificate in order to pass verification. Turning off this 'strict' flag returns SSL to the same level of verification as in v3.12. So the following line is added after the <code>context = ssl.create_default_context()</code> statement.</p>
<pre><code>context.verify_flags &= ~ssl.VERIFY_X509_STRICT
</code></pre>
<p>This turns off the STRICT flag so that the Basic Constraints field does not have to be marked Critical to pass verification.</p>
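<p>Put together with the original snippet, the working version looks roughly like this (server, credentials, and message are placeholders):</p>
<pre><code>import smtplib
import ssl
from email.message import EmailMessage

EmailServer = 'SomeDomain.com'
EmailPort = 465
EmailServerUsername = 'user@SomeDomain.com'   # placeholder
EmailServerPassword = 'password'              # placeholder

msg = EmailMessage()
msg['From'] = EmailServerUsername
msg['To'] = 'recipient@example.com'
msg['Subject'] = 'Test'
msg.set_content('Test message')

context = ssl.create_default_context()
# relax the new Python 3.13 strict verification back to the 3.12 behaviour
context.verify_flags &= ~ssl.VERIFY_X509_STRICT

with smtplib.SMTP_SSL(EmailServer, EmailPort, context=context) as server:
    server.login(EmailServerUsername, EmailServerPassword)
    server.send_message(msg)
</code></pre>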
<p>Perhaps the title is not exactly correct, but its purpose was to get attention and to suggest how to fix the particular issue being faced. It isn't so much that email is broken as that v3.13 increased the level of SSL security compared to v3.12. But with that title, one might not realize the implication of such a change unless they ran into an email delivery issue as I did. The first comments posted did in fact point to the solution. Unfortunately, my hosting provider does not provide certificates as strict as v3.13 requires.</p>
|
<python><email><ssl>
|
2025-01-15 12:28:54
| 1
| 377
|
Jay Mosk
|
79,358,206
| 2,114,932
|
get value from current row in rolling window
|
<p>Given the following data structure</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.DataFrame(
{
"order_id": ["o01", "o02", "o03", "o04", "o10", "o11", "o12", "o13"],
"customer_id": ["ca", "ca", "ca", "ca", "cb", "cb", "cb", "cb"],
"date": [
"2024-04-03",
"2024-04-04",
"2024-04-04",
"2024-04-11",
"2024-04-02",
"2024-04-02",
"2024-04-03",
"2024-05-13",
],
},
schema_overrides={"date": pl.Date},
)
</code></pre>
<p>I would like to do some calculations over a rolling window. For that I would like to get a value from the current row (of a column that is not part of the window definition (i.e. partition or frame)), e.g. <code>order_id</code> in the following example, as well as a row index per frame (not partition).</p>
<p>So far I have (the <code>orders</code> column is just an illustration of abovementioned "calculation").</p>
<pre class="lang-py prettyprint-override"><code>(
df.sort("customer_id", "date")
.rolling(
index_column="date",
period="1w",
offset="0d",
closed="left",
group_by="customer_id",
)
.agg(
frame_index=pl.int_range(pl.len()).first(),
current_order_id=pl.col("order_id").first(),
orders=pl.col("order_id"),
)
)
</code></pre>
<pre><code>customer_id date frame_index current_order_id orders
str date i64 str list[str]
"ca" 2024-04-03 0 "o01" ["o01", "o02", "o03"]
"ca" 2024-04-04 0 "o02" ["o02", "o03"]
"ca" 2024-04-04 0 "o02" ["o02", "o03"]
"ca" 2024-04-11 0 "o04" ["o04"]
"cb" 2024-04-02 0 "o10" ["o10", "o11", "o12"]
"cb" 2024-04-02 0 "o10" ["o10", "o11", "o12"]
"cb" 2024-04-03 0 "o12" ["o12"]
"cb" 2024-05-13 0 "o13" ["o13"]
</code></pre>
<p>But I would like to have (note the differences in <code>frame_index</code> and <code>current_order_id</code> in the 3rd and 6th row).</p>
<pre><code>customer_id date frame_index current_order_id orders
str date i64 str list[str]
"ca" 2024-04-03 0 "o01" ["o01", "o02", "o03"]
"ca" 2024-04-04 0 "o02" ["o02", "o03"]
"ca" 2024-04-04 1 "o03" ["o02", "o03"]
"ca" 2024-04-11 0 "o04" ["o04"]
"cb" 2024-04-02 0 "o10" ["o10", "o11", "o12"]
"cb" 2024-04-02 1 "o11" ["o10", "o11", "o12"]
"cb" 2024-04-03 0 "o12" ["o12"]
"cb" 2024-05-13 0 "o13" ["o13"]
</code></pre>
<p>It seems to me that I am missing a <code>current_row()</code> or <code>nth()</code> expression, but there are probably other clever ways to achieve what I want with polars?</p>
<p>UPDATE: I just noticed that one can add a column from the original dataframe with <code>with_column(df.select())</code>, see my answer below.</p>
<p>So let's assume that I want to use a value from the current row in the <code>agg</code> step, e.g. to add or subtract it from a group mean or something.</p>
|
<python><python-polars>
|
2025-01-15 12:26:44
| 2
| 1,817
|
dpprdan
|
79,358,078
| 2,287,458
|
Take unique values horizontally across a Polars DataFrame to create a new string column
|
<p>I have this dataframe:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.from_repr("""shape: (4, 3)
┌──────┬──────┐
│ ccy1 ┆ ccy2 │
│ ---  ┆ ---  │
│ str  ┆ str  │
╞══════╪══════╡
│ USD  ┆ USD  │
│ EUR  ┆ USD  │
│ EUR  ┆ EUR  │
│ USD  ┆ JPY  │
└──────┴──────┘
""")
</code></pre>
<p>I want to create a third column which contains a comma-separated string of the unique values across columns <code>ccy1</code> and <code>ccy2</code>. Something like</p>
<pre class="lang-py prettyprint-override"><code>df.with_columns(pl.struct('ccy1', 'ccy2')
.map_elements(lambda x: ','.join(sorted(set(x.values()))),
return_dtype=pl.String)
.alias('ccys'))
</code></pre>
<p>which gives</p>
<pre><code>shape: (4, 3)
┌──────┬──────┬─────────┐
│ ccy1 ┆ ccy2 ┆ ccys    │
│ ---  ┆ ---  ┆ ---     │
│ str  ┆ str  ┆ str     │
╞══════╪══════╪═════════╡
│ USD  ┆ USD  ┆ USD     │
│ EUR  ┆ USD  ┆ EUR,USD │
│ EUR  ┆ EUR  ┆ EUR     │
│ USD  ┆ JPY  ┆ JPY,USD │
└──────┴──────┴─────────┘
</code></pre>
<p>But I am looking for a more natural <code>polars</code> way (i.e. vectorized). Any help appreciated.</p>
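<p>One expression-only idea (an untested sketch on my side, assuming a reasonably recent Polars version with the <code>list</code> namespace) would be something like:</p>
<pre class="lang-py prettyprint-override"><code>df.with_columns(
    pl.concat_list("ccy1", "ccy2")
    .list.unique()
    .list.sort()
    .list.join(",")
    .alias("ccys")
)
</code></pre>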
|
<python><dataframe><python-polars>
|
2025-01-15 11:38:37
| 0
| 3,591
|
Phil-ZXX
|
79,357,992
| 2,018,094
|
Crashing matplotlib with special chars in strings
|
<p>matplotlib crashes when a Pandas Series contains specific special chars in a string, in my case <code>$$</code>.</p>
<pre><code>import matplotlib.pyplot as plt
import random
import pandas as pd
list_= 'abcdefghijklmnopqrstuvwxyz'
l = pd.Series()
for i in range(0,100):
    l[i] = random.choice(list_)
l[50] = '$$'
l.value_counts(normalize=False).plot(kind='bar')
plt.show()
</code></pre>
<p>This code will crash due to the <code>l[50] = '$$'</code> line.</p>
<p>Question: am I expected to clean such strings beforehand, or is it a bug in matplotlib?</p>
<p>I'm fairly new to using python for data science, so bear with my naive approach.<br />
Thanks</p>
<p>EDIT: thanks to @mozway and @chrslg for their answers, <code>$$</code> is indeed interpreted as Latex by matplotlib and it raises an error because there's nothing between the two <code>$</code> signs.</p>
<p>Having no control over the data I'm plotting, I've opted to deactivate the parsing of Latex by matplotlib like so:<br />
<code>plt.rcParams['text.parse_math'] = False</code></p>
|
<python><pandas><matplotlib><special-characters>
|
2025-01-15 11:13:04
| 2
| 1,012
|
Jb Drucker
|
79,357,784
| 5,211,833
|
Generate a PDF from Excel through Python, without opening Excel
|
<p>We generate PDFs from Excel through Python. We currently do this using the Win32com library as per <a href="https://stackoverflow.com/a/66422276/5211833">this Stack Overflow answer</a>. In summary, we explicitly call <code>o = win32com.client.Dispatch("Excel.Application")</code>, opening an actual Excel instance (visible in the Task Manager), call Excel's PDF printing capability, <code>wb.ActiveSheet.ExportAsFixedFormat(0, PATH_TO_PDF)</code>, and finally close the Excel instance with <code>o.Quit()</code>. The main downside here is that if Excel is already open in the system, the <code>Quit()</code> call causes Excel to open its system dialogue asking to save changes, ignore changes and close, or abort. Though we can catch this error in Python, we seem unable to close the Excel instance after this happens. Relaunching the Python program doesn't help, since the old Excel instance still lives. We need to kill it through the Task Manager in order to be able to run the program again.</p>
<p>Our generated XLSX file has a fixed size, A1:G50. It contains 3 images (2 logos and the result graph as saved to a PNG file by matplotlib) and otherwise text and numbers, of which some are formatted bold.</p>
<p>An MRE:</p>
<pre><code>import win32com.client

def save_as_pdf(ExcelInstance, path_to_pdf):
    wb_path = r'~/path_to_xlsx/workbook.xlsx'
    wb = ExcelInstance.Workbooks.Open(wb_path)
    print_area = 'A1:G50'
    ws = wb.Worksheets[0]
    ws.PageSetup.Zoom = False
    ws.PageSetup.FitToPagesTall = 1
    ws.PageSetup.FitToPagesWide = 1
    ws.PageSetup.PrintArea = print_area
    wb.WorkSheets([1]).Select()
    wb.ActiveSheet.ExportAsFixedFormat(0, path_to_pdf)
    wb.Close(False)

o = win32com.client.Dispatch("Excel.Application")
o.Visible = False
save_as_pdf(o, path_to_pdf)
o.Quit()
del o
</code></pre>
<p>To reproduce the issue, run the above program with an Excel instance (on any random workbook) open already.</p>
<p>Circumventing this problem would be good, since having to tell users to close Excel through the Task Manager isn't practicable. Additionally, having multiple users generating reports on a centralised system causes trouble if two or more calls open up Excel instances simultaneously.</p>
<p>We found Python solutions such as <a href="https://www.e-iceblue.com/Introduce/xls-for-python.html" rel="nofollow noreferrer">Spire.XLS</a> or <a href="https://www.reportlab.com/" rel="nofollow noreferrer">ReportLab</a> to generate PDFs from Excel documents, but these are rather expensive third-party libraries. Given we do have a valid Excel license, we'd prefer utilising that.</p>
<p>How can we automatically generate PDF documents from Excel without explicitly having to open Excel?</p>
|
<python><excel><pdf-generation>
|
2025-01-15 10:08:54
| 1
| 18,235
|
Adriaan
|
79,357,654
| 6,930,340
|
Polars write_excel: rotate some header columns
|
<p>When using <code>pl.write_excel</code>, I am looking for a possibility to rotate SOME header columns by 90°.</p>
<p>I am applying a bunch of input arguments provided by <code>pl.write_excel</code> in order to style the exported dataframe. Among others, I am using <code>header_format</code>.</p>
<pre class="lang-py prettyprint-override"><code>df.write_excel(header_format={"rotation": 90}
</code></pre>
<p>This will rotate all header columns by 90°. However, I am looking for a way to rotate only the last three header columns.</p>
|
<python><excel><python-polars><xlsxwriter><polars>
|
2025-01-15 09:24:45
| 1
| 5,167
|
Andi
|
79,357,541
| 2,000,548
|
CUDA_ERROR_NO_DEVICE in WSL2
|
<p>I am trying to run cuML in WSL2 in Windows.</p>
<ul>
<li>Ubuntu 22.04</li>
<li>amd64</li>
<li>NVIDIA RTX A2000 8GB Laptop GPU</li>
<li>CUDA 12.6</li>
</ul>
<p><strong>main.py</strong></p>
<pre class="lang-py prettyprint-override"><code>import logging
import cudf
from cuml.ensemble import RandomForestClassifier
from cuml.preprocessing import StandardScaler
from sklearn.datasets import load_iris
from sklearn.metrics import accuracy_score, classification_report
from sklearn.model_selection import train_test_split
logger = logging.getLogger(__name__)
def main() -> None:
# Load the iris dataset
iris = load_iris()
x = iris.data
y = iris.target
# Split the data
x_train, x_test, y_train, y_test = train_test_split(
x, y, test_size=0.2, random_state=42
)
# Convert to cuDF DataFrames
X_train_cudf = cudf.DataFrame(x_train)
X_test_cudf = cudf.DataFrame(x_test)
y_train_cudf = cudf.Series(y_train)
y_test_cudf = cudf.Series(y_test)
# Scale the features
scaler = StandardScaler()
x_train_scaled = scaler.fit_transform(X_train_cudf)
x_test_scaled = scaler.transform(X_test_cudf)
# Create and train the model
rf_classifier = RandomForestClassifier(n_estimators=100, random_state=42)
rf_classifier.fit(x_train_scaled, y_train_cudf)
# Make predictions
y_pred_cudf = rf_classifier.predict(x_test_scaled)
# Convert predictions back to CPU for evaluation
y_pred = y_pred_cudf.values_host
y_test = y_test_cudf.values_host
# Print results
logger.info("cuML Results:")
logger.info(f"Accuracy: {accuracy_score(y_test, y_pred):.4f}")
logger.info("\nClassification Report:")
logger.info(classification_report(y_test, y_pred, target_names=iris.target_names))
if __name__ == "__main__":
logging.basicConfig(
level=logging.INFO,
format="%(asctime)s - %(levelname)s - %(message)s",
)
main()
</code></pre>
<p><strong>pyproject.toml</strong></p>
<pre class="lang-ini prettyprint-override"><code>[project]
name = "hm-cuml"
version = "1.0.0"
requires-python = "~=3.12.0"
dependencies = [
"cudf-cu12==24.12.0",
"cuml-cu12==24.12.0",
"scikit-learn==1.6.1",
]
</code></pre>
<p>Currently, <code>uv run main.py</code> gives error:</p>
<pre class="lang-bash prettyprint-override"><code>hm-cuml/.venv/lib/python3.12/site-packages/cudf/utils/_ptxcompiler.py:64: UserWarning: Error getting driver and runtime versions:
stdout:
stderr:
Traceback (most recent call last):
File "hm-cuml/.venv/lib/python3.12/site-packages/numba_cuda/numba/cuda/cudadrv/driver.py", line 254, in ensure_initialized
self.cuInit(0)
File "hm-cuml/.venv/lib/python3.12/site-packages/numba_cuda/numba/cuda/cudadrv/driver.py", line 304, in safe_cuda_api_call
self._check_ctypes_error(fname, retcode)
File "hm-cuml/.venv/lib/python3.12/site-packages/numba_cuda/numba/cuda/cudadrv/driver.py", line 372, in _check_ctypes_error
raise CudaAPIError(retcode, msg)
numba.cuda.cudadrv.driver.CudaAPIError: [100] Call to cuInit results in CUDA_ERROR_NO_DEVICE
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<string>", line 4, in <module>
File "hm-cuml/.venv/lib/python3.12/site-packages/numba_cuda/numba/cuda/cudadrv/driver.py", line 269, in __getattr__
self.ensure_initialized()
File "hm-cuml/.venv/lib/python3.12/site-packages/numba_cuda/numba/cuda/cudadrv/driver.py", line 258, in ensure_initialized
raise CudaSupportError(f"Error at driver init: {description}")
numba.cuda.cudadrv.error.CudaSupportError: Error at driver init: Call to cuInit results in CUDA_ERROR_NO_DEVICE (100)
Not patching Numba
warnings.warn(msg, UserWarning)
Traceback (most recent call last):
File "hm-cuml/.venv/lib/python3.12/site-packages/numba_cuda/numba/cuda/cudadrv/driver.py", line 254, in ensure_initialized
self.cuInit(0)
File "hm-cuml/.venv/lib/python3.12/site-packages/numba_cuda/numba/cuda/cudadrv/driver.py", line 304, in safe_cuda_api_call
self._check_ctypes_error(fname, retcode)
File "hm-cuml/.venv/lib/python3.12/site-packages/numba_cuda/numba/cuda/cudadrv/driver.py", line 372, in _check_ctypes_error
raise CudaAPIError(retcode, msg)
numba.cuda.cudadrv.driver.CudaAPIError: [100] Call to cuInit results in CUDA_ERROR_NO_DEVICE
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "hm-cuml/src/main.py", line 58, in <module>
main()
File "hm-cuml/src/main.py", line 32, in main
x_train_scaled = scaler.fit_transform(X_train_cudf)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "hm-cuml/.venv/lib/python3.12/site-packages/cuml/_thirdparty/sklearn/utils/skl_dependencies.py", line 162, in fit_transform
return self.fit(X, **fit_params).transform(X)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "hm-cuml/.venv/lib/python3.12/site-packages/cuml/internals/api_decorators.py", line 188, in wrapper
ret = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "hm-cuml/.venv/lib/python3.12/site-packages/cuml/_thirdparty/sklearn/preprocessing/_data.py", line 678, in fit
return self.partial_fit(X, y)
^^^^^^^^^^^^^^^^^^^^^^
File "hm-cuml/.venv/lib/python3.12/site-packages/cuml/internals/api_decorators.py", line 188, in wrapper
ret = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "hm-cuml/.venv/lib/python3.12/site-packages/cuml/_thirdparty/sklearn/preprocessing/_data.py", line 707, in partial_fit
X = self._validate_data(X, accept_sparse=('csr', 'csc'),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "hm-cuml/.venv/lib/python3.12/site-packages/cuml/_thirdparty/sklearn/utils/skl_dependencies.py", line 111, in _validate_data
X = check_array(X, **check_params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "hm-cuml/.venv/lib/python3.12/site-packages/cuml/thirdparty_adapters/adapters.py", line 322, in check_array
X, n_rows, n_cols, dtype = input_to_cupy_array(
^^^^^^^^^^^^^^^^^^^^
File "hm-cuml/.venv/lib/python3.12/site-packages/nvtx/nvtx.py", line 116, in inner
result = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "hm-cuml/.venv/lib/python3.12/site-packages/cuml/internals/input_utils.py", line 465, in input_to_cupy_array
X = X.values
^^^^^^^^
File "hm-cuml/.venv/lib/python3.12/site-packages/cudf/utils/performance_tracking.py", line 51, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "hm-cuml/.venv/lib/python3.12/site-packages/cudf/core/frame.py", line 420, in values
return self.to_cupy()
^^^^^^^^^^^^^^
File "hm-cuml/.venv/lib/python3.12/site-packages/cudf/utils/performance_tracking.py", line 51, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "hm-cuml/.venv/lib/python3.12/site-packages/cudf/core/frame.py", line 542, in to_cupy
return self._to_array(
^^^^^^^^^^^^^^^
File "hm-cuml/.venv/lib/python3.12/site-packages/cudf/utils/performance_tracking.py", line 51, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "hm-cuml/.venv/lib/python3.12/site-packages/cudf/core/frame.py", line 507, in _to_array
matrix[:, i] = to_array(col, dtype)
^^^^^^^^^^^^^^^^^^^^
File "hm-cuml/.venv/lib/python3.12/site-packages/cudf/core/frame.py", line 471, in to_array
array = get_array(col)
^^^^^^^^^^^^^^
File "hm-cuml/.venv/lib/python3.12/site-packages/cudf/core/frame.py", line 543, in <lambda>
lambda col: col.values,
^^^^^^^^^^
File "hm-cuml/.venv/lib/python3.12/site-packages/cudf/core/column/column.py", line 233, in values
return cupy.asarray(self.data_array_view(mode="write"))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "hm-cuml/.venv/lib/python3.12/site-packages/cudf/core/column/column.py", line 135, in data_array_view
return cuda.as_cuda_array(obj).view(self.dtype)
^^^^^^^^^^^^^^^^^^^^^^^
File "hm-cuml/.venv/lib/python3.12/site-packages/numba_cuda/numba/cuda/api.py", line 76, in as_cuda_array
return from_cuda_array_interface(obj.__cuda_array_interface__,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "hm-cuml/.venv/lib/python3.12/site-packages/numba_cuda/numba/cuda/cudadrv/devices.py", line 231, in _require_cuda_context
with _runtime.ensure_context():
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hongbo-miao/.local/share/uv/python/cpython-3.12.8-linux-x86_64-gnu/lib/python3.12/contextlib.py", line 137, in __enter__
return next(self.gen)
^^^^^^^^^^^^^^
File "hm-cuml/.venv/lib/python3.12/site-packages/numba_cuda/numba/cuda/cudadrv/devices.py", line 121, in ensure_context
with driver.get_active_context():
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "hm-cuml/.venv/lib/python3.12/site-packages/numba_cuda/numba/cuda/cudadrv/driver.py", line 472, in __enter__
driver.cuCtxGetCurrent(byref(hctx))
^^^^^^^^^^^^^^^^^^^^^^
File "hm-cuml/.venv/lib/python3.12/site-packages/numba_cuda/numba/cuda/cudadrv/driver.py", line 269, in __getattr__
self.ensure_initialized()
File "hm-cuml/.venv/lib/python3.12/site-packages/numba_cuda/numba/cuda/cudadrv/driver.py", line 258, in ensure_initialized
raise CudaSupportError(f"Error at driver init: {description}")
numba.cuda.cudadrv.error.CudaSupportError: Error at driver init: Call to cuInit results in CUDA_ERROR_NO_DEVICE (100)
</code></pre>
<p>I saw a lot of people running into the same issue here: <a href="https://forums.developer.nvidia.com/t/installation-on-wsl2-windows-11-problem-cant-see-gpu/237895/9" rel="nofollow noreferrer">https://forums.developer.nvidia.com/t/installation-on-wsl2-windows-11-problem-cant-see-gpu/237895/9</a>
but no solution.</p>
<p>Any guidance would be appreciated!</p>
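<p>For what it's worth, here is a quick way to check whether the CUDA driver is visible at all from inside WSL2, independent of cuDF/cuML (a diagnostic sketch that only relies on the Numba CUDA bindings already pulled in, as the traceback shows):</p>
<pre class="lang-py prettyprint-override"><code>from numba import cuda

print(cuda.is_available())  # False here means the driver is not visible to the process
if cuda.is_available():
    cuda.detect()           # prints the devices the CUDA driver reports
</code></pre>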
|
<python><cuda><cuml><wsl2>
|
2025-01-15 08:43:30
| 1
| 50,638
|
Hongbo Miao
|
79,357,409
| 7,945,506
|
How to patch a conditionally imported module in pytest?
|
<p>I have Python code that works locally and when run on Databricks. For saving the results, a different function is used depending on where the code is run.</p>
<p>On Databricks, several things are automatically initialized when running a notebook; among these is the <code>pyspark</code> library.</p>
<p>Therefore, in order to make my code also work locally, I import <code>pyspark</code> like so:</p>
<pre><code>if "DATABRICKS_RUNTIME_VERSION" in os.environ:
    from pyspark.sql import functions as F

def save_results_to_databricks(...):
    # do stuff like F.col("relevant")
</code></pre>
<p>Now, when writing a test, I would patch <code>F</code> like so:</p>
<pre><code>class TestSaveResultsToDatabricks(unittest.TestCase):
@patch("path.to.module.F")
def test_save_results_to_databricks(self, MockFunctions):
# test stuff
</code></pre>
<p>However, this will throw an error:</p>
<blockquote>
<p>AttributeError: does not have the attribute 'F'</p>
</blockquote>
<p>So, how do I patch a function that is not available locally?</p>
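<p>For reference, <code>unittest.mock.patch</code> does accept a <code>create=True</code> argument that lets it set an attribute that does not currently exist on the target module; a sketch of what that would look like here (I am not sure whether this is the idiomatic answer, which is part of what I am asking):</p>
<pre><code>import unittest
from unittest.mock import patch

class TestSaveResultsToDatabricks(unittest.TestCase):
    # create=True lets patch add the attribute even though the conditional import never ran
    @patch("path.to.module.F", create=True)
    def test_save_results_to_databricks(self, MockFunctions):
        ...  # test stuff
</code></pre>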
|
<python><mocking><databricks><python-unittest>
|
2025-01-15 07:48:31
| 0
| 613
|
Julian
|
79,356,820
| 7,447,542
|
SSIS Process Task Return Value
|
<p>I have a Python script saved on server1 which returns code 0 on success and 1 on failure, and I'm calling this Python script from server2 via an SSIS Execute Process Task.</p>
<p><a href="https://i.sstatic.net/LhCfKWcd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LhCfKWcd.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/jyf78WiF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jyf78WiF.png" alt="enter image description here" /></a></p>
<p>How can I capture that return code of 0 or 1 from the Python script in the Execute Process Task, for further processing of the pipeline?</p>
|
<python><ssis><sql-server-data-tools>
|
2025-01-15 01:23:21
| 0
| 424
|
Nikhil Ravindran
|
79,356,622
| 886,895
|
Access files in the directory running the conan create command
|
<p>I'm trying to create a Conan (v2.11) package from a zip archive to store in a GitLab Conan registry. However, in my conanfile.py I can't seem to copy or access files from the directory where I'm running Conan, as it appears to have set the current directory to somewhere within the .conan2/p/ cache directory structure.</p>
<p>These diagnostics:</p>
<pre><code>def generate(self):
print(f"Recipe path: {self.recipe_folder}")
print(f"Source path: {self.source_folder}")
print(f"Build path: {self.build_folder}")
print(f"Dest/Package path: {self.package_folder}")
current_directory = os.getcwd()
print(f"Current_directory: {current_directory}")
</code></pre>
<p>have this output:</p>
<pre><code>mypack/1.7.0@jim/test: Calling generate()
mypack/1.7.0@jim/test: Generators folder: C:\Users\jim\.conan2\p\b\mypack4aacdfed668f5\b
Recipe path: C:\Users\jim\.conan2\p\badl5fe3d4cc2e7b1\e
Source path: C:\Users\jim\.conan2\p\b\badl4aacdfed668f5\b
Build path: C:\Users\jim\.conan2\p\b\badl4aacdfed668f5\b
Dest/Package path: C:\Users\jim\.conan2\p\b\badl4aacdfed668f5\p
Current_directory: C:\Users\jim\.conan2\p\b\badl4aacdfed668f5\b
</code></pre>
<p>Here is my conanfile.py:</p>
<pre><code>from conan import ConanFile
from conan.tools.files import copy
class MyPackConan(ConanFile):
name = "mypack"
version = "1.7.0"
exports_sources = "."
...
def build(self):
self.run("ant package", cwd=self.recipe_folder)
def package(self):
copy(self, pattern="dist/*.zip", src=".", dst=self.package_folder, keep_path=False)
def package_id(self):
self.info.requires.full_recipe_mode()
</code></pre>
<p>I was hoping that <code>exports_sources = "."</code> would give me access to the files of interest by copying them to the recipe, source, or build directories, but that did not happen. Any help greatly appreciated.</p>
|
<python><c++><package-managers><conan>
|
2025-01-14 22:59:08
| 1
| 1,926
|
simgineer
|
79,356,278
| 1,737,830
|
Merging lists of dictionaries based on nested list values
|
<p>I'm struggling to create a new list based on two input lists. Here's an example:</p>
<pre class="lang-py prettyprint-override"><code>data_1 = [
{
"title": "System", "priority": "medium", "subtitle": "mason",
"files": [
{"name": "mason", "path": "/tmp/mason/mason.json"},
{"name": "mason", "path": "/tmp/mason/build.json"}
]},
{
"title": "System", "priority": "medium", "subtitle": "kylie",
"files": [
{"name": "kylie", "path": "/tmp/kylie/build.tar"},
{"name": "kylie", "path": "/tmp/kylie/kylie.json"}
]}
]
data_2 = [
{
"title": "System", "priority": "medium",
"files": [
{"name": "build", "path": "/tmp/kylie/build.tar"},
{"name": "kylie", "path": "/tmp/kylie/kylie.json"},
{"name": "mason", "path": "/tmp/mason/mason.json"},
{"name": "build", "path": "/tmp/mason/build.json"}
]}
]
merged = [
{
"title": "System", "priority": "medium", "subtitle": "mason",
"files": [
{"name": "mason", "path": "/tmp/mason/mason.json"},
{"name": "build", "path": "/tmp/mason/build.json"}
]},
{
"title": "System", "priority": "medium", "subtitle": "kylie",
"files": [
{"name": "build", "path": "/tmp/kylie/build.tar"},
{"name": "kylie", "path": "/tmp/kylie/kylie.json"}
]}
]
</code></pre>
<p>Basically, it would be OK to keep the data from the <code>data_1</code> list and just replace <code>files.name</code> with the names coming from the <code>data_2</code> list for entries having the same <code>path</code> (see the sketch at the end of this question). Or to multiply the elements in <code>data_2</code>, assigning a <code>subtitle</code> and deleting all <code>data_2.files</code> items that don't match the paths specified in the <code>data_1</code> list.</p>
<p>Anyway, I started doing something like:</p>
<pre class="lang-py prettyprint-override"><code>for d1 in data_1:
    for d2 in data_2:
        d1_key = (d1.get("title"), d1.get("priority"))
        d2_key = (d2.get("title"), d2.get("priority"))
        if d1_key == d2_key:
            for df in d1.get("files"):
                if any(df.get("path") in entry.get("path") for entry in d2.get("files")):
                    print("Help! There must be easier way!")
</code></pre>
<p>And I don't think this approach is clear enough to be worth taking further.</p>
<p>Would you suggest any other way to get the <code>merged</code> list created?</p>
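<p>To make the first idea above concrete (replacing <code>files.name</code> by matching on <code>path</code>), a rough sketch of what I mean:</p>
<pre class="lang-py prettyprint-override"><code># build a path -> name lookup from data_2, then rewrite the names in data_1
name_by_path = {
    f["path"]: f["name"]
    for entry in data_2
    for f in entry["files"]
}

merged = [
    {
        **d1,
        "files": [
            {**f, "name": name_by_path.get(f["path"], f["name"])}
            for f in d1["files"]
        ],
    }
    for d1 in data_1
]
</code></pre>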
|
<python><python-3.x>
|
2025-01-14 20:13:39
| 2
| 2,368
|
AbreQueVoy
|
79,356,143
| 8,547,986
|
How to narrow types in python with Enum
|
<p>In python, consider the following example</p>
<pre class="lang-py prettyprint-override"><code>from enum import StrEnum
from typing import Literal, overload
class A(StrEnum):
X = "X"
Y = "Y"
class X: ...
class Y: ...
@overload
def enum_to_cls(var: Literal[A.X]) -> type[X]: ...
@overload
def enum_to_cls(var: Literal[A.Y]) -> type[Y]: ...
def enum_to_cls(var: A) -> type[X] | type[Y]:
match var:
case A.X:
return X
case A.Y:
return Y
case _:
raise ValueError(f"Unknown enum value: {var}")
</code></pre>
<p>When I attempt to call <code>enum_to_cls</code>, I get a type error, with the following case:</p>
<pre class="lang-py prettyprint-override"><code>selected_enum = random.choice([x for x in A])
enum_to_cls(selected_enum)
# Argument of type "A" cannot be assigned to parameter "var" of type "Literal[A.Y]" in
# function "enum_to_cls"
# "A" is not assignable to type "Literal[A.Y]" [reportArgumentType]
</code></pre>
<p>I understand the error and it makes sense, but I wanted to know if there is any way to avoid it. I know I can avoid this error by creating a branch for each enum case, but then I am back to square one of why I wanted to create the function <code>enum_to_cls</code> in the first place.</p>
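<p>One pattern that should address this (an assumption on my part, not verified against every type checker) is to add a final catch-all overload that accepts the plain enum type, so calls with a value only known as <code>A</code> still match:</p>
<pre class="lang-py prettyprint-override"><code>@overload
def enum_to_cls(var: Literal[A.X]) -> type[X]: ...
@overload
def enum_to_cls(var: Literal[A.Y]) -> type[Y]: ...
@overload
def enum_to_cls(var: A) -> type[X] | type[Y]: ...  # catch-all for values only typed as A
def enum_to_cls(var: A) -> type[X] | type[Y]:
    ...  # implementation unchanged
</code></pre>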
|
<python><python-typing>
|
2025-01-14 19:15:37
| 1
| 1,923
|
monte
|
79,356,141
| 2,813,687
|
How to properly type callbacks each accepting different subtypes of an overarching type
|
<p>Right now I have this code:</p>
<pre><code>import abc
from typing import Callable, Union

class A(abc.ABC):
    pass

class B(A):
    pass

class C(A):
    pass

def fn_b(b: B) -> None:
    pass

def fn_c(c: C) -> None:
    pass

d: dict[type[A], Union[Callable[[A], None], Callable[[B], None]]] = {
    B: fn_b,
    C: fn_c,
}
</code></pre>
<p>Now this is simplified, in practice I can have dozens of subclasses of <code>A</code>. New classes may be added over time, and key/value pairs could be added/changed in <code>d</code>.</p>
<p>Is there a better way to type <code>d</code> than a massive <code>Union</code>? An option would be <code>dict[type[A], Callable[..., None]]</code> but obviously this doesn't convey that the <code>Callable</code>s should only accept <code>A</code> subclasses.</p>
<p>The <code>fn_*</code> functions are meant to be decoupled from the <code>B</code>, <code>C</code> and so on classes. So moving them to methods won't solve the issue.</p>
<p>Edit: InSync's answer below (<a href="https://mypy-play.net/?mypy=1.14.1&python=3.13&flags=strict&gist=0f823cdb10315d751283c134ce0537d1" rel="nofollow noreferrer">https://mypy-play.net/?mypy=1.14.1&python=3.13&flags=strict&gist=0f823cdb10315d751283c134ce0537d1</a>) does work <em>however</em> it looks like quite a bit of overhead to type this. Is there something simpler and more concise?</p>
|
<python><python-typing>
|
2025-01-14 19:15:17
| 0
| 1,018
|
foo
|
79,356,109
| 10,634,126
|
Python selenium (firefox/geckodriver) not working in Docker Linux/Ubuntu container
|
<p>I am trying to run a Linux/Ubuntu/Debian Docker container that will run web scrapers using Python Selenium; I am driver-agnostic, but I am starting with trying to run a Firefox/Geckodriver scrape.</p>
<p>My Dockerfile is currently as follows:</p>
<pre><code>FROM ubuntu:jammy
WORKDIR /testdir
COPY requirements.txt requirements.txt
# install python, git, etc., all requirements.txt and selenium - THIS WORKS SO FAR
RUN : \
&& apt update \
&& DEBIAN_FRONTEND=noninteractive apt install \
-y \
--no-install-recommends \
python3-pip \
git \
ssh \
wget \
gnupg \
curl \
firefox \
&& pip3 install -r requirements.txt \
&& pip3 install selenium \
&& :
# install geckodriver - THIS WORKS SO FAR (RETRIEVES LATEST VERSION)
RUN : \
&& GECKODRIVER_VERSION=`curl -sL -I "https://github.com/mozilla/geckodriver/releases/latest" | grep -i "location:" | awk '{print $2}' | grep -o "v[0-9]\+.[0-9]\+.[0-9]\+"` \
&& wget https://github.com/mozilla/geckodriver/releases/download/$GECKODRIVER_VERSION/geckodriver-$GECKODRIVER_VERSION-linux64.tar.gz \
&& tar -zxf geckodriver-$GECKODRIVER_VERSION-linux64.tar.gz -C /usr/local/bin \
&& chmod +x /usr/local/bin/geckodriver \
&& rm geckodriver-$GECKODRIVER_VERSION-linux64.tar.gz \
# install firefox - THIS FAILS
&& FIREFOX_SETUP=firefox-setup.tar.bz2 \
&& apt-get purge firefox \
&& wget -O $FIREFOX_SETUP "https://download.mozilla.org/?product=firefox-latest&os=linux64" \
&& tar xjf $FIREFOX_SETUP -C /opt/ \
&& ln -s /opt/firefox/firefox /usr/bin/firefox \
&& rm $FIREFOX_SETUP \
&& :
</code></pre>
<p>The final lines of the error output from running this Dockerfile are as follows:</p>
<pre><code>[+] Running 0/1nt to continue? [Y/n] Abort.
✘ Service scrape-dev  Building                                     4.9s
failed to solve: process "/bin/sh -c : && GECKODRIVER_VERSION=`curl -sL -I \"https://github.com/mozilla/geckodriver/releases/latest\" | grep -i \"location:\" | awk '{print $2}' | grep -o \"v[0-9]\\+.[0-9]\\+.[0-9]\\+\"` && wget https://github.com/mozilla/geckodriver/releases/download/$GECKODRIVER_VERSION/geckodriver-$GECKODRIVER_VERSION-linux64.tar.gz && tar -zxf geckodriver-$GECKODRIVER_VERSION-linux64.tar.gz -C /usr/local/bin && chmod +x /usr/local/bin/geckodriver && rm geckodriver-$GECKODRIVER_VERSION-linux64.tar.gz && FIREFOX_SETUP=firefox-setup.tar.bz2 && apt-get purge firefox && wget -O $FIREFOX_SETUP \"https://download.mozilla.org/?product=firefox-latest&os=linux64\" && tar xjf $FIREFOX_SETUP -C /opt/ && ln -s /opt/firefox/firefox /usr/bin/firefox && rm $FIREFOX_SETUP && :" did not complete successfully: exit code: 1
</code></pre>
<p>I am running this Dockerfile from a docker-compose.yaml file, which is as follows:</p>
<pre><code># Version: "3.9"
services:
scrape-dev:
build:
context: .
dockerfile: pipeline/dockerfiles/Dockerfile
container_name: scrape-dev
image: scrape-dev
# Volumes below bind local paths to container paths.
# Note: individual SSH directory paths must be specified in individual .env files.
volumes:
- ./:/rtci
# - ${_SSH_PATH}:/root/.ssh
- "~/.gitconfig:/etc/gitconfig"
command: tail -F anything
profiles:
- scrape-dev
</code></pre>
<p>I run it with the following command from the terminal:</p>
<pre><code>> docker compose up scrape-dev -d --build
</code></pre>
<p>Is there a way to see what the error is here, other than exit code?</p>
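<p>For context, the closest I have come to surfacing the real failure is rebuilding with plain progress output (assuming the flag behaves the same when called through compose):</p>
<pre><code>docker compose build --progress=plain scrape-dev
</code></pre>
<p>Splitting the long <code>RUN</code> into separate steps would presumably also isolate which command exits non-zero, at the cost of extra layers.</p>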
|
<python><linux><docker><selenium-webdriver><firefox>
|
2025-01-14 19:01:34
| 0
| 909
|
OJT
|
79,355,881
| 19,356,117
|
How to eliminate the loop in my code when calculating EWMA?
|
<p>I'm calculating EWMA values for array of streamflow, and code is like below:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
import numpy as np
streamflow_data = np.arange(0, 20, 1)
adaptive_alphas = np.concatenate([np.repeat(0.3, 10), np.repeat(0.6, 10)])
streamflow_series = pl.Series(streamflow_data)
ewma_data = np.zeros_like(streamflow_data)
for i in range(1, len(streamflow_series)):
current_alpha = adaptive_alphas[i]
ewma_data[i] = streamflow_series[:i+1].ewm_mean(alpha=current_alpha)[-1]
</code></pre>
<pre><code># When set dtype of ewma_data to float when initial it, output is like this
Output: [0 0.58823529 1.23287671 1.93051717 2.67678771 3.46668163, 4.29488309 5.1560635 6.04512113 6.95735309 9.33379473 10.33353466, 11.33342058 12.33337091 13.33334944 14.33334021 15.33333625 16.33333457, 17.33333386 18.33333355]
# When I don't point dtype of ewma_data and dtype of streamflow_data is int, output will be floored
Output: [0 0 1 1 2 3 4 5 6 6 9 10 11 12 13 14 15 16 17 18]
</code></pre>
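<p>For comparison, with a single constant alpha no loop is needed at all; this one-call version is the behaviour I would like to reproduce for the adaptive alphas (it does not handle them itself):</p>
<pre class="lang-py prettyprint-override"><code># Constant-alpha baseline: one vectorised call, no Python loop.
ewma_constant = streamflow_series.ewm_mean(alpha=0.3)
print(ewma_constant)
</code></pre>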
<p>But when the length of <code>streamflow_data</code> is very large (such as >100,000), this code becomes very slow.</p>
<p>So how can I eliminate the <code>for</code> loop in my code without changing its result?</p>
<p>Thanks in advance for any reply.</p>
|
<python><python-3.x><algorithm><numpy><python-polars>
|
2025-01-14 17:39:15
| 4
| 1,115
|
forestbat
|
79,355,866
| 2,801,187
|
Optimizing The Exact Prime Number Theorem
|
<p>For example, given this sequence of the first 499 primes, can you predict the next prime?</p>
<pre><code>2,3,5,7,...,3541,3547,3557,3559
</code></pre>
<p>The 500th prime is <code>3571</code>.</p>
<hr />
<h2>Prime Number Theorem</h2>
<p>The <a href="https://en.wikipedia.org/wiki/Prime_number_theorem" rel="nofollow noreferrer">Prime Number Theorem</a> (PNT) provides an approximation for the n-th prime:</p>
<p><a href="https://i.sstatic.net/VC771TOt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VC771TOt.png" alt="approx" /></a></p>
<p>Computing <code>p_500 ≈ 3107</code> takes <strong>microseconds</strong>!</p>
<hr />
<h2>Exact Prime Number Theorem</h2>
<p>My experimental <a href="https://math.stackexchange.com/questions/5021014/predicting-the-next-prime-using-only-previous-primes?noredirect=1&lq=1">Exact Prime Number Theorem</a> (EPNT) computes the exact n-th prime:</p>
<p><a href="https://i.sstatic.net/cWEYHudg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cWEYHudg.png" alt="fast" /></a></p>
<p>Computing <code>p_500 = 3571</code> takes <strong>25 minutes</strong>!</p>
<hr />
<h2>Question</h2>
<p>So far, the EPNT correctly predicts the <a href="https://prime-numbers.info/list/first-500-primes" rel="nofollow noreferrer">first 500 primes</a>.</p>
<p>Unfortunately, numerically verifying the formula for higher primes is <strong>extremely</strong> slow!</p>
<p>Are there any optimization tips to improve the EPNT computational speed? Perhaps</p>
<ul>
<li>Do not use Python</li>
<li>Add multiple threads</li>
<li>Implement a faster math precision library</li>
<li>Modify the decimal precision mp.dps at runtime</li>
<li>Use a math computing engine like WolframAlpha</li>
</ul>
<p>Here's the current Python code:</p>
<pre><code>import time
from mpmath import ceil, ln, mp, mpf, exp, fsum, power, zeta
from sympy import symbols, Eq, pprint, prime
N=500 # <--- Compute the N-th prime.
mp.dps = 20000
primes = []
def vengy_prime(k):
# Compute the k-th prime deterministically
s = ceil(k * ln(k * ln(k)))
# Determine the dynamic Rosser (1941) upper bound
N = int(ceil(k * (ln(k) + ln(ln(k)))))
# Compute finite summation to N
print(f"Computing {N} zeta terms ...")
start_time = time.time()
sum_N = fsum([1 / power(mpf(n), s) for n in range(1, N)])
end_time = time.time()
print(f"Time taken: {end_time - start_time:.6f} seconds")
# Compute the product term involving the first k-1 primes
print(f"Computing product of {k-1} previous primes ...")
start_time = time.time()
prod = exp(fsum([ln(1 - power(p, -s)) for p in primes[:k-1]]))
end_time = time.time()
print(f"Time taken: {end_time - start_time:.6f} seconds")
# Compute next prime p_k
p_k=ceil((1 - 1 / (sum_N * prod)) ** (-1 / s))
return p_k
# Generate the previous known k-1 primes
print("\nListing", N-1, "known primes:")
for k in range(1, N):
p = prime(k)
primes.append(p)
print(primes)
primes.append(vengy_prime(N))
pprint(Eq(symbols(f'p_{N}'), int(primes[-1])))
</code></pre>
<h2>Update</h2>
<p>Wow! Running JΓ©rΓ΄me Richard's new optimized code only took 10 seconds!</p>
<pre><code>Computing 4021 zeta terms ...
Time taken: 7.968423 seconds
Computing product of 499 previous primes ...
Time taken: 1.960771 seconds
p₅₀₀ = 3571
</code></pre>
<p>The old code timings were 1486 seconds:</p>
<pre><code>Computing 4021 zeta terms ...
Time taken: 1173.899538 seconds
Computing product of 499 previous primes ...
Time taken: 313.833039 seconds
p₅₀₀ = 3571
</code></pre>
<p>The optimized code computed the 4000th prime in 45 minutes: <code>N = 4000, precision = 700000</code></p>
<p><a href="https://i.sstatic.net/fz8GlxX6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fz8GlxX6.png" alt="p_4000" /></a></p>
|
<python><performance><optimization><primes>
|
2025-01-14 17:30:45
| 1
| 2,457
|
vengy
|
79,355,830
| 373,091
|
Python SQLAlchemy mapping column names to attributes and vice versa
|
<p>I have a problem where I need to access a MS SQL DB, so naturally its naming convention is different: TableName(Id, ColumnName1, ColumnName2, ABBREVIATED, ...)</p>
<p>The way I got the model constructed in Python:</p>
<pre><code>class TableName(Base):
__tablename__ = 'TableName'
id = Column('Id', BigInteger, primary_key=True, autoincrement=True)
order_id = Column('OrderId', BigInteger, ForeignKey('Orders.Id'), nullable=False)
column_name1 = Column('ColumnName1', String(50), nullable=True)
column_name2 = Column('ColumnName2', String(50), nullable=True)
abbreviated= Column('ABBREVIATED', String(50), nullable=False)
...
</code></pre>
<p>Then, I have a Repository:</p>
<pre><code>class TableNameRepository:
def __init__(self, connection_string: str):
self.engine = create_engine(connection_string)
self.Session = sessionmaker(bind=self.engine)
def get_entry(self, order_id: int, column_name1: str) -> Optional[TableName]:
with self.Session() as session:
return session.query(TableName).filter_by(
order_id=order_id,
column_name1=column_name1
).first()
</code></pre>
<p>This works, however, now to my actual problem:</p>
<p>I have a template text file with placeholders using the SQL column names, meaning there are <code>Id, ColumnName1, ColumnName2, ABBREVIATED</code> strings and I need to replace them with values from the model.</p>
<p>Here is how I tried to approach this:</p>
<pre><code>retrieved_table_entity = self.repository.get_entry(order_id, something)
attr_to_column = {
column.key: column.name
for column in retrieved_table_entity.__table__.columns
}
</code></pre>
<p>The problem is that this doesn't work: it still creates keys that look like:
<code>{'Id': 'Id', 'OrderId': 'OrderId', 'ColumnName1': 'ColumnName1', ...}</code></p>
<p>And so, the next step which is:</p>
<pre><code> for attr_name, column_name in attr_to_column.items():
value = getattr(retrieved_table_entity, attr_name, None)
logging.debug(f"Accessing {attr_name}: {value}")
reg_dict[column_name] = '' if value is None else str(value)
</code></pre>
<p>This doesn't work because the keys are not the Python attribute names. In debug-mode testing, if I manually call <code>getattr(retrieved_table_entity, 'id')</code>, it returns the value.</p>
<p>To summarize:
I need a way to map so that I have a key/value dictionary like:</p>
<pre><code>{
Id: 1
OrderID: 123
ColumnName1: 'SomeValue'
}
</code></pre>
<p>And then I could simply do something like <code>self.template.format(**keys_to_values_dict)</code></p>
<p>I have tried consulting LLMs, but they keep circling around without suggesting a way to do this. I don't want to manually map the model again in this service function.</p>
<p>I also reviewed the Mapping API and can't find anything relevant:
<a href="https://docs.sqlalchemy.org/en/14/orm/mapping_api.html" rel="nofollow noreferrer">https://docs.sqlalchemy.org/en/14/orm/mapping_api.html</a></p>
<p>For reference, my current dependencies:</p>
<pre><code>sqlalchemy==2.0.36
pymssql==2.2.11
</code></pre>
|
<python><sqlalchemy>
|
2025-01-14 17:17:13
| 1
| 2,849
|
Carmageddon
|
79,355,720
| 700,663
|
Trying to Install Python via Visual Studio Installer. Python seems to be in limbo. I don't know if it's installed or not
|
<p>I used to be able to install python.</p>
<p><a href="https://i.sstatic.net/JTCsMA2C.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JTCsMA2C.png" alt="enter image description here" /></a></p>
<p>See.</p>
<p>Just click Python 3 64 bit and installed.</p>
<p>Now, no matter how many times I did that, Python simply didn't get installed.</p>
<p>Or did it? If I click Add Environment I get this:</p>
<p><a href="https://i.sstatic.net/kZeJPQ9b.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kZeJPQ9b.png" alt="enter image description here" /></a></p>
<p>So it looks like it's already installed. But where? How do I run the interpreter?</p>
<p>If I press Alt I get this message:</p>
<pre><code>>>> 2+1
No interpreters cannot be started because the path to the interpreter has not been configured.
Please update the environment in Tools->Options->Python Tools->Environment Options
</code></pre>
<p>There is no Tools -> Options -> Python Tools option. There is Tools -> Options -> Python, but nothing where I can set the location of the Python interpreter.</p>
<p>If I try to create a virtual environment I get this:
<a href="https://i.sstatic.net/ojrYNSA4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ojrYNSA4.png" alt="enter image description here" /></a></p>
<p>As if python is not installed.</p>
<p>Annoyed, I installed python on C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python39_64</p>
<p>But how do I let Visual Studio know that I want to use that python interpreter?</p>
<pre><code>Current interactive window is disconnected.
</code></pre>
<p><a href="https://i.sstatic.net/2qGxZ9M6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2qGxZ9M6.png" alt="enter image description here" /></a>
The Visual Studio Installer looks like it's installing something, but it installs nothing. Python should be installed at</p>
<pre><code>C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python39_64\
</code></pre>
<p>Nothing got installed there.</p>
<p>Basically I used to have two Python installations: one from the app store and another from Visual Studio.</p>
<p>I tried to get rid of the one from the app store but couldn't, so I downloaded the Microsoft uninstaller.</p>
<p><a href="https://support.microsoft.com/en-au/topic/fix-problems-that-block-programs-from-being-installed-or-removed-cca7d1b6-65a9-3d98-426b-e9f927e1eb4d#articleFooterSupportBridge=communityBridge%EF%BF%BC%EF%BF%BCHowever" rel="nofollow noreferrer">https://support.microsoft.com/en-au/topic/fix-problems-that-block-programs-from-being-installed-or-removed-cca7d1b6-65a9-3d98-426b-e9f927e1eb4d#articleFooterSupportBridge=communityBridge%EF%BF%BC%EF%BF%BCHowever</a>,</p>
<p>It turns out the uninstaller uninstalled Visual Studio's Python.</p>
<p>Later I managed to remove the Python from the app store.</p>
<p>Then I wanted to reinstall Visual Studio's Python.</p>
<p>I couldn't. I tried again.</p>
<p>At first I saw that directory</p>
<pre><code>C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python39_64\
</code></pre>
<p>It had a bunch of files, but python.exe was not there.</p>
<p>After uninstalling and reinstalling several times, Python is not installed at all. The checkbox for Python 3 64-bit is always unchecked. I can check it and press Modify, and it will be unchecked again after I am done.</p>
<p>Note: I open the Visual Studio Installer as administrator.</p>
|
<python><installation><visual-studio-2022>
|
2025-01-14 16:43:29
| 1
| 33,271
|
user4951
|
79,355,648
| 4,637,141
|
AttributeError: Cannot set attribute. from marshmallow with peewee alias
|
<p>I have this peewee query</p>
<pre><code>transaction_base = (
Transaction.select(
(Transaction.amount - Transaction.amount_refunded).alias('net_amount'),
Transaction.created,
)
.where(Transaction.merchant_id == merchant_account.id)
.order_by(Transaction.created.desc())
)
</code></pre>
<p>and then this serialization using marshmallow</p>
<pre><code> schema = TransactionCSVSchema(many=True)
serialized_data = schema.dump(transaction_base)
</code></pre>
<p>and this is my marshmallow class</p>
<pre><code>class TransactionCSVSchema(Schema):
PaymentDate = fields.DateTime(attribute="created")
NetAmount = fields.Float(attribute="net_amount")
</code></pre>
<p>and it gives this error</p>
<pre><code>Traceback (most recent call last):
File "/var/task/falcon/api.py", line 269, in __call__
responder(req, resp, **params)
File "/var/task/payments/resources/base_resource.py", line 105, in on_get
return self.on_get_list(req, resp, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/var/task/peewee.py", line 3088, in inner
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/var/task/payments/resources/transaction.py", line 264, in on_get_list
serialized_data = schema.dump(transaction_base)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/var/task/marshmallow/schema.py", line 549, in dump
result = self._serialize(processed_obj, many=many)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/var/task/marshmallow/schema.py", line 511, in _serialize
return [
^
File "/var/task/marshmallow/schema.py", line 511, in <listcomp>
return [
^
File "/var/task/peewee.py", line 4588, in next
self.cursor_wrapper.iterate()
File "/var/task/peewee.py", line 4508, in iterate
result = self.process_row(row)
^^^^^^^^^^^^^^^^^^^^^
File "/var/task/peewee.py", line 7768, in process_row
obj = self.constructor(__no_default__=1, **data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/var/task/peewee.py", line 6510, in __init__
setattr(self, k, kwargs[k])
File "/var/task/playhouse/hybrid.py", line 35, in __set__
raise AttributeError('Cannot set attribute.')
AttributeError: Cannot set attribute.
</code></pre>
<p>this is the query it makes</p>
<pre><code>('SELECT (`t1`.`amount` - `t1`.`amount_refunded`) AS
`net_amount`, `t1`.`created` FROM `transaction` AS `t1`
WHERE (`t1`.`merchant_id` = %s) ORDER BY `t1`.`created` DESC', [130])
</code></pre>
<p>My guess is that <code>net_amount</code> isn't aliased to <code>t1</code> like the rest of the columns, and marshmallow might be expecting that. If I remove <code>.alias('net_amount'),</code> from the query and from the class, it works and produces this query:</p>
<pre><code>('SELECT (`t1`.`amount` - `t1`.`amount_refunded`)
</code></pre>
<p>but then my schema can't access it.</p>
|
<python><peewee><marshmallow>
|
2025-01-14 16:15:35
| 0
| 1,074
|
Jrow
|
79,355,625
| 17,837,614
|
Passing StringIO object to TextIOWrapper object is giving mypy error
|
<p>I wasn't getting any Mypy error with Mypy 0.812.
But after upgrading Mypy to the latest version (1.14.1), I am getting the error below.
<code>error: Argument 1 to "_redirect_stdout" has incompatible type "StringIO"; expected "TextIOWrapper[_WrappedBuffer]" [arg-type]</code></p>
<p>Please help me to understand</p>
<ul>
<li>the error</li>
<li>whats changed in mypy that is raising the error and</li>
<li>the possible ways to resolve it.
Thanks!</li>
</ul>
<pre class="lang-py prettyprint-override"><code>@contextlib.contextmanager
def _redirect_stdout(
new_target: io.TextIOWrapper
) -> Generator[io.TextIOWrapper, None, None]:
old_target = sys.stdout
sys.stdout = new_target
try:
yield new_target
finally:
sys.stdout = old_target
target_stdout = io.StringIO()
with _redirect_stdout(target_stdout):
print(['These', 'are', 'sample', 'strings.'])
</code></pre>
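<p>For context, this is the looser annotation I have been experimenting with; my assumption is that <code>typing.TextIO</code> covers both <code>StringIO</code> and the real <code>sys.stdout</code>, but I am not sure it is the intended fix:</p>
<pre class="lang-py prettyprint-override"><code>import contextlib
import io
import sys
from typing import Generator, TextIO

@contextlib.contextmanager
def _redirect_stdout_textio(new_target: TextIO) -> Generator[TextIO, None, None]:
    # Same logic as above, annotated with the broader TextIO type.
    old_target = sys.stdout
    sys.stdout = new_target
    try:
        yield new_target
    finally:
        sys.stdout = old_target

with _redirect_stdout_textio(io.StringIO()) as buf:
    print(['These', 'are', 'sample', 'strings.'])
</code></pre>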
|
<python><python-typing><mypy>
|
2025-01-14 16:10:12
| 1
| 405
|
Sujay
|
79,355,372
| 20,591,261
|
How to get the day / month name of a column in polars
|
<p>I have a polars dataframe <code>df</code> which has a datetime column <em>date</em>. I'm trying to get the name of the day and month of that column.</p>
<p>Consider the following example.</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
from datetime import datetime
df = pl.DataFrame({
"date": [datetime(2024, 10, 1), datetime(2024, 11, 2)]
})
</code></pre>
<p>I was hoping that I could pass a parameter to <code>month()</code> or <code>weekday()</code> to get the desired format.</p>
<pre class="lang-py prettyprint-override"><code>df.with_columns(
pl.col("date").dt.month().alias("Month"),
pl.col("date").dt.weekday().alias("Day")
)
</code></pre>
<pre><code>shape: (2, 3)
┌─────────────────────┬───────┬─────┐
│ date                │ Month │ Day │
│ ---                 │ ---   │ --- │
│ datetime[μs]        │ i8    │ i8  │
╞═════════════════════╪═══════╪═════╡
│ 2024-10-01 00:00:00 │ 10    │ 2   │
│ 2024-11-02 00:00:00 │ 11    │ 6   │
└─────────────────────┴───────┴─────┘
</code></pre>
<p>However, this does not seem to be the case. My desired output looks as follows.</p>
<pre><code>shape: (2, 3)
┌─────────────────────┬───────┬──────────┐
│ date                │ Month │ Day      │
│ ---                 │ ---   │ ---      │
│ datetime[μs]        │ str   │ str      │
╞═════════════════════╪═══════╪══════════╡
│ 2024-10-01 00:00:00 │ Oct   │ Tuesday  │
│ 2024-11-02 00:00:00 │ Nov   │ Saturday │
└─────────────────────┴───────┴──────────┘
</code></pre>
<p>How can I extract the day and month name from the <em>date</em> column?</p>
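<p>For reference, the closest I have found so far is going through strftime-style format codes (assuming <code>dt.strftime</code> and the <code>%b</code>/<code>%A</code> codes behave here as I expect); I am not sure it is the idiomatic way:</p>
<pre class="lang-py prettyprint-override"><code># Format the datetime column as abbreviated month name and full weekday name.
df.with_columns(
    pl.col("date").dt.strftime("%b").alias("Month"),
    pl.col("date").dt.strftime("%A").alias("Day"),
)
</code></pre>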
|
<python><datetime><python-polars>
|
2025-01-14 14:39:50
| 1
| 1,195
|
Simon
|
79,355,047
| 19,024,379
|
Multiprocessing with tkinter progress bar, minimal example
|
<p>I'm looking for a way to track a multiprocessing task with a Tkinter progress bar. This is something that can be done very straightforwardly with <code>tqdm</code> for display in the terminal.</p>
<p>Instead of using <code>tqdm</code> I'd like to use <code>ttk.Progressbar</code>, but in all the attempts I have made at this, the tasks block when trying to update the progress bar (e.g. using update_idletasks and similar). Below is a template of the kind of solution I'm looking for:</p>
<pre class="lang-py prettyprint-override"><code>import time
from multiprocessing import Pool
from tqdm import tqdm
import tkinter as tk
import tkinter.ttk as ttk
def task(x):
time.sleep(0.1)
return x * x
def start_task():
num_processes = 12
num_tasks = 100
with Pool(processes=num_processes) as pool:
with tqdm(total=num_tasks, desc="Processing") as pbar:
def update_progress(_):
# <Insert update to tk progress bar here>
pbar.update(1)
for i in range(num_tasks):
pool.apply_async(task, args=(i,), callback=update_progress)
pool.close()
pool.join()
if __name__ == "__main__":
root = tk.Tk()
root.title("Task Progress")
progress_bar = ttk.Progressbar(root, maximum=100, length=300)
progress_bar.pack(pady=20)
button = tk.Button(text="Start", command=start_task)
button.pack(fill="x", padx=10, pady=10)
root.mainloop()
</code></pre>
<p>In the solution I'd also like to get the output of the task (in this case a list of x*x).</p>
<p>If another multiprocessing structure would work better please feel free to adjust (pool just seemed the simplest for demonstration).</p>
<p>This is a question that has been asked on Stack Overflow before, but all the previous answers I've found have not been minimal examples and I didn't find them very helpful.</p>
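<p>For clarity, this is the mechanism I suspect is needed (my assumption): workers report progress through a queue, and the Tk main loop drains it with <code>after()</code> so widgets are only touched from the main thread. The sketch below uses a plain thread instead of the pool, purely to show the polling part:</p>
<pre class="lang-py prettyprint-override"><code>import queue
import threading
import time
import tkinter as tk
import tkinter.ttk as ttk

progress_queue: "queue.Queue[int]" = queue.Queue()

def worker() -> None:
    # stand-in for the pool: report each finished unit of work
    for i in range(100):
        time.sleep(0.05)
        progress_queue.put(i)

def poll() -> None:
    # runs in the Tk main thread; safe to touch widgets here
    while not progress_queue.empty():
        progress_queue.get_nowait()
        bar.step(1)
    root.after(50, poll)

root = tk.Tk()
bar = ttk.Progressbar(root, maximum=100, length=300)
bar.pack(pady=20)
threading.Thread(target=worker, daemon=True).start()
poll()
root.mainloop()
</code></pre>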
|
<python><tkinter><progress-bar><python-multithreading>
|
2025-01-14 12:50:02
| 1
| 1,172
|
Mark
|
79,354,732
| 9,869,695
|
aiohttp post request h11._util.LocalProtocolError: can't handle event type ConnectionClosed when role=SERVER and state=SEND_BODY
|
<ul>
<li>I'm using aiohttp==3.10.10 with python 3.8</li>
<li>For context I'm running a fastAPI app and on every request the app receives it creates an async background task to send a post request to another external API</li>
<li>I create a shared aiohttp.ClientSession for the background tasks (I need this for connection pooling to the other external API)</li>
<li>It mostly works, but I am seeing some errors about the connection closing while a message is in flight</li>
</ul>
<pre><code>"ERROR", "name": "asyncio", "lineno": 1707, "message": "Exception in callback H11Protocol.timeout_keep_alive_handler()
handle: <TimerHandle when=423.140775385 H11Protocol.timeout_keep_alive_handler()>", "exc_info": "Traceback (most recent call last):
File "/usr/local/lib/python3.8/asyncio/events.py", line 81, in _run
self._context.run(self._callback, *self._args)
File "/usr/local/lib/python3.8/site-packages/uvicorn/protocols/http/h11_impl.py", line 383, in timeout_keep_alive_handler
self.conn.send(event)
File "/usr/local/lib/python3.8/site-packages/h11/_connection.py", line 512, in send
data_list = self.send_with_data_passthrough(event)
File "/usr/local/lib/python3.8/site-packages/h11/_connection.py", line 537, in send_with_data_passthrough
self._process_event(self.our_role, event)
File "/usr/local/lib/python3.8/site-packages/h11/_connection.py", line 272, in _process_event
self._cstate.process_event(role, type(event), server_switch_event)
File "/usr/local/lib/python3.8/site-packages/h11/_state.py", line 293, in process_event
self._fire_event_triggered_transitions(role, _event_type)
File "/usr/local/lib/python3.8/site-packages/h11/_state.py", line 311, in _fire_event_triggered_transitions
raise LocalProtocolError(
h11._util.LocalProtocolError: can't handle event type ConnectionClosed when role=SERVER and state=SEND_BODY"}
</code></pre>
<p>Has anyone come across this before, or does anyone know a fix for it?</p>
|
<python><fastapi><aiohttp>
|
2025-01-14 10:45:15
| 0
| 1,514
|
Arran Duff
|
79,354,646
| 2,342,292
|
pydantic v2 - aggregate errors when using mix of native and custom types with Annotated
|
<p>I have the following code:</p>
<pre class="lang-py prettyprint-override"><code>from .consts import BASE_URL, HOSTNAME_PATTERN
class GenericParseException(Exception):
pass
def validate_hostname(value: str) -> str:
if value.startswith("http://") or value.startswith("https://"):
raise GenericParseException("base_url should not include the protocol (http:// or https://)")
if not re.match(HOSTNAME_PATTERN, value):
raise GenericParseException("Invalid hostname format.")
return value
HostnameType = Annotated[str, AfterValidator(validate_hostname)]
class Config(BaseModel):
api_token: str
base_url: HostnameType = BASE_URL
class ConfigParser:
def __init__(self, logger: LoggerInterface):
self.logger = logger
def parse(self, config: dict[str, Any]) -> Config:
errors: List[str] = []
try:
model = Config(**config)
return model
except ValidationError as e:
errors.extend(e.errors())
except GenericParseException as e:
errors.append(f"Generic validation error: {e}")
except Exception as e:
errors.append(f"Unexpected error: {e}")
finally:
if errors:
self.logger.error({
'action': 'validation',
'status': 'failed',
'errors': errors
})
raise ConfigParseException("; ".join(str(err) for err in errors))
</code></pre>
<p>I am trying to aggregate the errors, without success.
I've tried with both <code>AfterValidator</code> and <code>BeforeValidator</code>.</p>
<p>The expected output should be something like:</p>
<pre><code>api_token - required field
base_url - base_url should not include the protocol (http:// or https://)
</code></pre>
<p>I'm only getting the base_url error.</p>
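<p>For comparison, the only variant I can think of is having the validator raise <code>ValueError</code> instead of my custom exception (my understanding is that pydantic only aggregates <code>ValueError</code>/<code>AssertionError</code> raised inside validators, but I may be wrong):</p>
<pre class="lang-py prettyprint-override"><code>def validate_hostname(value: str) -> str:
    # hypothetical variant: ValueError is collected into the ValidationError
    # together with the missing api_token error
    if value.startswith(("http://", "https://")):
        raise ValueError("base_url should not include the protocol (http:// or https://)")
    if not re.match(HOSTNAME_PATTERN, value):
        raise ValueError("Invalid hostname format.")
    return value
</code></pre>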
<p>Any ideas?</p>
|
<python><python-3.x><pydantic><pydantic-v2>
|
2025-01-14 10:16:51
| 0
| 932
|
EldadT
|
79,354,638
| 1,202,172
|
Remove leading space from Shell GPT code output?
|
<p>I'm using <a href="https://github.com/TheR1D/shell_gpt/tree/main/sgpt" rel="nofollow noreferrer">Shell GPT</a> to send queries to GPT from Linux terminal. One problem - it prints a space before each line of code it gives, for example:</p>
<pre><code> #!/bin/bash
# Get the current date and time
current_date_time=$(date '+%Y-%m-%d %H:%M:%S')
</code></pre>
<p>This leading space is very annoying when copy-pasting the code.</p>
<p>What needs to be modified in the <a href="https://github.com/TheR1D/shell_gpt/tree/main/sgpt" rel="nofollow noreferrer">Shell GPT source code</a> to fix this?</p>
|
<python><openai-api><chatgpt-api>
|
2025-01-14 10:14:20
| 1
| 8,684
|
Danijel
|
79,354,624
| 7,959,614
|
Use Gurobi to create networkx.Graph that has highest edge connectivity
|
<p>I have the following graph <code>G</code></p>
<p><a href="https://i.sstatic.net/bZz0pTvU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bZz0pTvU.png" alt="enter image description here" /></a></p>
<p><code>G</code> is created using the following code</p>
<pre><code>import networkx as nx
import matplotlib.pyplot as plt
G = nx.hoffman_singleton_graph()
pos = nx.spring_layout(G)
nx.draw(G, pos=pos)
nx.draw_networkx_labels(G=G, pos=pos)
plt.show()
</code></pre>
<p><code>G</code> consists of 50 nodes. I only want to include 25 nodes. In addition, I only want to include the nodes (and edges) that maximize the connectivity between node <code>A</code> (=5) and node <code>B</code> (=20).</p>
<p>I wrote the following code:</p>
<pre><code>import numpy as np
import gurobipy as grb
from networkx.algorithms.connectivity import local_edge_connectivity
A = 5
B = 20
nodes = list(G.nodes)
n_nodes = len(nodes)
edges = nx.to_numpy_array(G, nodelist=nodes)
thresh_nodes = 25
model = grb.Model()
F = model.addMVar(n_nodes, vtype=grb.GRB.BINARY)
model.addConstr(F.sum() == thresh_nodes)
model.addConstr(F[nodes.index(A)] == 1)
model.addConstr(F[nodes.index(B)] == 1)
E = F * edges
model.setObjective(local_edge_connectivity(nx.from_numpy_array(A=E), A, B), grb.GRB.MAXIMIZE)
model.optimize()
</code></pre>
<p>This results in an error because <em>nx.from_numpy_array()</em> cannot deal with the datatype of <code>E</code>. How do I create a temporary <code>np.ndarray</code> of the .X-values ({0, 1}) of <code>E</code> and use this to determine the solution(s)?</p>
|
<python><networkx><graph-theory><mathematical-optimization><gurobi>
|
2025-01-14 10:07:26
| 1
| 406
|
HJA24
|
79,354,528
| 9,815,792
|
What value is given to constant in Scipy splantider?
|
<p>I'm using <code>BSpline</code>s from SciPy and computing their antiderivatives using the <code>antiderivative()</code> method of the <code>BSpline</code> object. The <code>antiderivative()</code> method computes the antiderivative through a call to the function <a href="https://docs.scipy.org/doc/scipy-1.15.0/reference/generated/scipy.interpolate.splantider.html" rel="nofollow noreferrer"><code>splantider()</code></a>. My code works fine and does what it needs to do using the splines and their antiderivatives. However, I keep wondering what value is given to the constant C, i.e. the standard + C in indefinite integration, that arises when computing the antiderivative using these methods.</p>
<p>Given the results of experiments I've done with my code, I would assume the value is 0. I've tried looking around the SciPy source code of these methods as well as the work of <em>De Boor (2001)</em>, but didn't find anything. I'm also unsure whether I can check this programmatically, so any tips there are helpful.</p>
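<p>For concreteness, the sort of programmatic check I had in mind looks like this (assuming that evaluating the antiderivative at the left end of the interval would reveal a non-zero constant):</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from scipy.interpolate import make_interp_spline

x = np.linspace(0.0, 1.0, 20)
spl = make_interp_spline(x, np.sin(2 * np.pi * x))  # a BSpline
anti = spl.antiderivative()

# If the constant were non-zero, this value would not be ~0.0
print(anti(x[0]))
</code></pre>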
<p><strong>So my question is:</strong></p>
<p>What value is given to the added constant in the antiderivative in SciPy when computing a <code>BSpline</code> antiderivative using the <code>antiderivative()</code> method?</p>
|
<python><scipy><spline>
|
2025-01-14 09:34:43
| 0
| 409
|
Joppe De Jonghe
|
79,354,459
| 25,413,271
|
Apply operation to all elements in matrix skipping numpy.nan
|
<p>I have an array filled with data only in the lower triangle; the rest is np.nan. I want to do some operations on this matrix, more precisely on the data elements, not the NaNs, because I expect skipping the NaN elements in a vectorized operation to be much quicker.</p>
<p>I have two test arrays:</p>
<pre><code>arr = np.array([
[1.111, 2.222, 3.333, 4.444, 5.555],
[6.666, 7.777, 8.888, 9.999, 10.10],
[11.11, 12.12, 13.13, 14.14, 15.15],
[16.16, 17.17, 18.18, 19.19, 20.20],
[21.21, 22.22, 23.23, 24.24, 25.25]
])
arr_nans = np.array([
[np.nan, np.nan, np.nan, np.nan, np.nan],
[6.666, np.nan, np.nan, np.nan, np.nan],
[11.11, 12.12, np.nan, np.nan, np.nan],
[16.16, 17.17, 18.18, np.nan, np.nan],
[21.21, 22.22, 23.23, 24.24, np.nan]
])
</code></pre>
<p>That's how I test them:</p>
<pre><code>test = timeit.timeit('arr * 5 / 2.123', globals=globals(), number=1000)
test_nans = timeit.timeit('arr_nans * 5 / 2.123', globals=globals(), number=1000)
masked_arr_nans = np.ma.array(arr_nans, mask=np.isnan(arr_nans))
test_masked_nans = timeit.timeit('masked_arr_nans * 5 / 2.123', globals=globals(), number=1000)
print(test) # 0.0017232997342944145s
print(test_nans) # 0.0017070993781089783s
print(test_masked_nans) # 0.052730199880898s
</code></pre>
<p>I have created a masked array <code>masked_arr_nans</code> and masked all NaNs. But this way is far slower than the first two, and I don't understand why.</p>
<p>The main question is: what is the quickest way to operate on arrays like <code>arr_nans</code> that contain a lot of NaNs? There is probably a quicker approach than the ones I mentioned.</p>
<p>Side question: why does the masked array work so much slower?</p>
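<p>For reference, one alternative I have been considering is pulling out only the finite values as a flat 1-D array and scattering the results back (a sketch, not benchmarked):</p>
<pre><code>mask = ~np.isnan(arr_nans)
result = np.full_like(arr_nans, np.nan)
# operate only on the valid (lower-triangle) entries
result[mask] = arr_nans[mask] * 5 / 2.123
</code></pre>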
|
<python><numpy><matrix>
|
2025-01-14 09:04:16
| 1
| 439
|
IzaeDA
|
79,354,405
| 12,011,020
|
Polars Schema: TypeError: dtypes must be fully-specified, got: Datetime
|
<p>Hi, I want to define a Polars schema.</p>
<p>It works fine without a datetime dtype.
However, it fails with <code>pl.Datetime</code>.</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
testing_schema: pl.Schema = pl.Schema(
{
"date": pl.Datetime,
"some_int": pl.Int64,
"some_str": pl.Utf8,
"some_cost": pl.Float64,
},
)
</code></pre>
<p>The error:</p>
<pre class="lang-py prettyprint-override"><code># .../lib/python3.11/site-packages/polars/schema.py", line 47, in _check_dtype
# raise TypeError(msg)
# TypeError: dtypes must be fully-specified, got: Datetime
</code></pre>
<p>Unfortunately I do not have any idea how to fix it.</p>
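<p>For reference, the variant I was about to try next is instantiating the dtype with an explicit time unit; this is an assumption on my part that an instantiated <code>pl.Datetime</code> counts as "fully-specified":</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl

testing_schema = pl.Schema(
    {
        "date": pl.Datetime(time_unit="us"),  # instantiated, not the bare class
        "some_int": pl.Int64,
        "some_str": pl.Utf8,
        "some_cost": pl.Float64,
    },
)
</code></pre>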
|
<python><dataframe><python-polars>
|
2025-01-14 08:43:19
| 1
| 491
|
SysRIP
|
79,354,384
| 20,895,654
|
Convert Base64 ImageStream in .NET WinForms to images with Python
|
<p>I have a very big WinForms application in the company I'm at and I want to extract all the images that are contained in it. I already wrote a program in python to extract basic images that are of type <code>Image</code> and <code>Icon</code>, but I struggle to turn an <code>ImageStream</code> base64 string into its images.</p>
<p>Here is a <code>.resx</code> file snipped with an actual base64 encoded string I want to convert:</p>
<pre><code>...
<data name="imageList.ImageStream" mimetype="application/x-microsoft.net.object.binary.base64">
<value>
AAEAAAD/////AQAAAAAAAAAMAgAAAFdTeXN0ZW0uV2luZG93cy5Gb3JtcywgVmVyc2lvbj00LjAuMC4w
LCBDdWx0dXJlPW5ldXRyYWwsIFB1YmxpY0tleVRva2VuPWI3N2E1YzU2MTkzNGUwODkFAQAAACZTeXN0
ZW0uV2luZG93cy5Gb3Jtcy5JbWFnZUxpc3RTdHJlYW1lcgEAAAAERGF0YQcCAgAAAAkDAAAADwMAAADO
CwAAAk1TRnQBSQFMAgEBBgEAAagBAgGoAQIBEAEAARABAAT/AQkBAAj/AUIBTQE2AQQGAAE2AQQCAAEo
AwABQAMAASADAAEBAQABCAYAAQgYAAGAAgABgAMAAoABAAGAAwABgAEAAYABAAKAAgADwAEAAcAB3AHA
AQAB8AHKAaYBAAEzBQABMwEAATMBAAEzAQACMwIAAxYBAAMcAQADIgEAAykBAANVAQADTQEAA0IBAAM5
AQABgAF8Af8BAAJQAf8BAAGTAQAB1gEAAf8B7AHMAQABxgHWAe8BAAHWAucBAAGQAakBrQIAAf8BMwMA
AWYDAAGZAwABzAIAATMDAAIzAgABMwFmAgABMwGZAgABMwHMAgABMwH/AgABZgMAAWYBMwIAAmYCAAFm
AZkCAAFmAcwCAAFmAf8CAAGZAwABmQEzAgABmQFmAgACmQIAAZkBzAIAAZkB/wIAAcwDAAHMATMCAAHM
AWYCAAHMAZkCAALMAgABzAH/AgAB/wFmAgAB/wGZAgAB/wHMAQABMwH/AgAB/wEAATMBAAEzAQABZgEA
ATMBAAGZAQABMwEAAcwBAAEzAQAB/wEAAf8BMwIAAzMBAAIzAWYBAAIzAZkBAAIzAcwBAAIzAf8BAAEz
AWYCAAEzAWYBMwEAATMCZgEAATMBZgGZAQABMwFmAcwBAAEzAWYB/wEAATMBmQIAATMBmQEzAQABMwGZ
AWYBAAEzApkBAAEzAZkBzAEAATMBmQH/AQABMwHMAgABMwHMATMBAAEzAcwBZgEAATMBzAGZAQABMwLM
AQABMwHMAf8BAAEzAf8BMwEAATMB/wFmAQABMwH/AZkBAAEzAf8BzAEAATMC/wEAAWYDAAFmAQABMwEA
AWYBAAFmAQABZgEAAZkBAAFmAQABzAEAAWYBAAH/AQABZgEzAgABZgIzAQABZgEzAWYBAAFmATMBmQEA
AWYBMwHMAQABZgEzAf8BAAJmAgACZgEzAQADZgEAAmYBmQEAAmYBzAEAAWYBmQIAAWYBmQEzAQABZgGZ
AWYBAAFmApkBAAFmAZkBzAEAAWYBmQH/AQABZgHMAgABZgHMATMBAAFmAcwBmQEAAWYCzAEAAWYBzAH/
AQABZgH/AgABZgH/ATMBAAFmAf8BmQEAAWYB/wHMAQABzAEAAf8BAAH/AQABzAEAApkCAAGZATMBmQEA
AZkBAAGZAQABmQEAAcwBAAGZAwABmQIzAQABmQEAAWYBAAGZATMBzAEAAZkBAAH/AQABmQFmAgABmQFm
ATMBAAGZATMBZgEAAZkBZgGZAQABmQFmAcwBAAGZATMB/wEAApkBMwEAApkBZgEAA5kBAAKZAcwBAAKZ
Af8BAAGZAcwCAAGZAcwBMwEAAWYBzAFmAQABmQHMAZkBAAGZAswBAAGZAcwB/wEAAZkB/wIAAZkB/wEz
AQABmQHMAWYBAAGZAf8BmQEAAZkB/wHMAQABmQL/AQABzAMAAZkBAAEzAQABzAEAAWYBAAHMAQABmQEA
AcwBAAHMAQABmQEzAgABzAIzAQABzAEzAWYBAAHMATMBmQEAAcwBMwHMAQABzAEzAf8BAAHMAWYCAAHM
AWYBMwEAAZkCZgEAAcwBZgGZAQABzAFmAcwBAAGZAWYB/wEAAcwBmQIAAcwBmQEzAQABzAGZAWYBAAHM
ApkBAAHMAZkBzAEAAcwBmQH/AQACzAIAAswBMwEAAswBZgEAAswBmQEAA8wBAALMAf8BAAHMAf8CAAHM
Af8BMwEAAZkB/wFmAQABzAH/AZkBAAHMAf8BzAEAAcwC/wEAAcwBAAEzAQAB/wEAAWYBAAH/AQABmQEA
AcwBMwIAAf8CMwEAAf8BMwFmAQAB/wEzAZkBAAH/ATMBzAEAAf8BMwH/AQAB/wFmAgAB/wFmATMBAAHM
AmYBAAH/AWYBmQEAAf8BZgHMAQABzAFmAf8BAAH/AZkCAAH/AZkBMwEAAf8BmQFmAQAB/wKZAQAB/wGZ
AcwBAAH/AZkB/wEAAf8BzAIAAf8BzAEzAQAB/wHMAWYBAAH/AcwBmQEAAf8CzAEAAf8BzAH/AQAC/wEz
AQABzAH/AWYBAAL/AZkBAAL/AcwBAAJmAf8BAAFmAf8BZgEAAWYC/wEAAf8CZgEAAf8BZgH/AQAC/wFm
AQABIQEAAaUBAANfAQADdwEAA4YBAAOWAQADywEAA7IBAAPXAQAD3QEAA+MBAAPqAQAD8QEAA/gBAAHw
AfsB/wEAAaQCoAEAA4ADAAH/AgAB/wMAAv8BAAH/AwAB/wEAAf8BAAL/AgAD/wMAAuwSAAH/AfQC/ycA
AfkCAQHsBQAB+QHsCQAB7AHqAgAB6gHsJgAB+QMBAewDAAH5AgEB7AcAAfQBFAHqAgAB6gEUAfQlAAH5
BAEB7AEAAfkEAQHsBgAB7wEUAeoCAAHqARQB7yYAAfkEAQHsBQEB7AUAAfQCFAHqAgAB6gIUAfQmAAH5
CAEB7AYAAe8CFAHqAgAB6gIUAe8nAAH5BgEB7AYAAfQDFAHqAgAB6gEUARMBFAH0JwAFAQHsBwAB8gES
AhQB6gIAAeoCFAESAfInAAH5BAEB7AoAAQ4EAAIOKAAB+QUBAewJAAH3BgAB9ycAAfkDAQHsAwEB7AsA
AkMpAAH5AwEB7AEAAfkDAQHsCgACQykAAfkCAQHsAwAB+QMBAewJAAJDKgAB+QEBBQAB+QMBCQACQzIA
AfkBAQH5iAABMAI3ATABAzoAATABNwL7ATcBMAEDCAACMy4AATABNwT7ATcBMAEDBgADMy0AATABNwb7
ATcBMAEDBQABMwL6ATMQAAJTGQABMAE3AQAB+wEAAfsDAAH7ATcBMAEDAwABMwP6AjMOAARTCQALBgMA
ATABNwH7AQAB+wEAAvsBAAH7AQMB+wE3ATABAwEAAjME+gEzDQACUwKhAlMJAAn8AwABMAE3AvsBAAED
AQAF+wEDAfsBNwEwAQABMwb6ATMLAAJTBKECUwoAAv4BBgH+AQYFAAE3AfsBAwH7AgAD+wEDAgAD+wE3
AQAD+gKeAvoCMwkAAlMGoQJTCQAC/gEGAf4BBgUAAaAC+wEDAgABAwEAAQMDAAP7AV4BAAGeAfoBngIA
AZ4C+gEzCQACUwahAlMJAAL+AQYB/gEGBQABwwGgAfsCAwIAAQMC+wEDA/sBXgHDAgABngQAAZ4C+gEz
CQACUwShAlMKAAL+AQYB/gEGBgABwwGgAgMCAAIDBPsBXgHDCQABngH6AjMJAAJTAqECUwsAAv4BBgH+
AQYHAAHDAaAC+wEDAgABAwL7AV4BwwsAAZ4B+gIzCQAEUwwAAv4BBgH+AQYIAAHDAaAC+wIAAvsBXgHD
DQABngH6AjMJAAJTDQAC/gEGAf4BBgkAAcMBoAT7AV4Bww8AAZ4B+gIzFwAC/gEGAf4BBgoAAcMBoAL7
AV4BwxEAAZ4B+gEzJwABwwGgAV4BwxMAAZ4B+hEAAUIBTQE+BwABPgMAASgDAAFAAwABIAMAAQEBAAEB
BgABARYAA/8BAAHPAf8B/AE/BAABhwHPAfkBnwQAAYMBhwHxAY8EAAGBAQMB8QGPBAABwAEDAeEBhwQA
AeABBwHhAYcEAAHwAQ8BwQGDBAAB+AEfAcEBgwQAAfgBHwGAAQEEAAHwAR8BgAEBBAAB4AEPAYABAQQA
AcEBBwGAAQEEAAHDAYMBgAEBBAAB5wHDAYABAQQAAf8B4wGAAQEEAAT/BAAC/wH8AR8E/wHqAasB+AEP
AfMD/wHqAasB8AEHAeMD/wHiASMB4AEDAeEB/wH+AX8B4gEjAcABAQHAAf8B/AE/AeABAwGAAQABgAH/
AfgBHwHwAQcCAAGAAX8B8AEPAfgBHwIAAYABPwHgAQcB9AEfAgABjAE/AeABBwH0AR8CAAHeAR8B8AEP
AewBHwGAAQEB/wEPAfgBHwHsAR8BwAEDAf8BhwH8AT8B9AEfAeABBwH/AcMB/gF/AfQBHwHwAQ8B/wHh
Av8B+AEfAfgBHwH/AfEE/wH8AT8B/wH5Av8L
</value>
</data>
...
</code></pre>
<p>As with <code>Image</code> and <code>Icon</code>, I know that all the required data must somehow be encoded in the base64 value; I just don't understand how.</p>
<p>I tried:</p>
<ul>
<li>Searching around in the source code of the classes that use an <code>ImageStream</code></li>
<li>Searching online, but found literally no question remotely similar to mine</li>
<li>Read the documentation, but obviously that wouldn't go in that much depth either</li>
</ul>
<p>So, given I have a string with the base64-encoded value <code>s</code>, how would I go about decoding it into all its contained images and outputting them as files? In addition, how would it be possible to get each image's name (or any other metadata, for that matter)?</p>
<p>Image with data about the ImageStream <code>imageList</code>; all images have exactly the same metadata besides the name:
<a href="https://i.sstatic.net/AuUJgk8J.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AuUJgk8J.png" alt="Image with data about the ImageStream ImageList" /></a></p>
|
<python><.net><decoding><resx><imagestream>
|
2025-01-14 08:32:03
| 0
| 346
|
JoniKauf
|
79,354,186
| 2,210,825
|
What null hypothesis is rejected by the value of statsmodels RLM
|
<p>I am using statsmodels to perform a robust linear regression on some data I have. I wanted to understand whether the slope of my regression is significantly different from 0. In <code>scipy</code> I do this by looking at the p-value.</p>
<p>In statsmodels, is the same null hypothesis being tested? I can get the p-value as returned by <code>RLMResult.pvalues</code>, but I couldn't find documentation indicating what is being tested.</p>
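<p>For concreteness, this is the comparison I am making (made-up data, purely to show which numbers I mean):</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import scipy.stats as stats
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 2.0 * x + rng.normal(size=100)

# scipy: p-value for the null hypothesis "slope == 0"
print(stats.linregress(x, y).pvalue)

# statsmodels RLM: the per-coefficient p-values I am asking about
res = sm.RLM(y, sm.add_constant(x)).fit()
print(res.pvalues)
</code></pre>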
|
<python><scipy><statistics><linear-regression><statsmodels>
|
2025-01-14 06:54:49
| 0
| 1,458
|
donkey
|
79,353,843
| 2,132,478
|
stable_baselines3: why the reward does not match comparing ep_info_buffer vs evaluation?
|
<p>I was working with the stable_baselines3 library when I found something that I did not expect.</p>
<p>Here is a simple piece of code to reproduce the issue:</p>
<pre><code>import gymnasium as gym
from stable_baselines3 import DQN
env = gym.make("CartPole-v1")
model = DQN("MlpPolicy", env, verbose=0, stats_window_size=100_000)
model.learn(total_timesteps=100_000)
</code></pre>
<p>Taking a look at the last episode reward:</p>
<pre><code>print(model.ep_info_buffer[-1])
</code></pre>
<blockquote>
<p>{'r': 409.0, 'l': 409, 't': 54.87983}</p>
</blockquote>
<p>But if I evaluate the model, with the following code:</p>
<pre><code>obs, info = env.reset()
total_reward = 0
while True:
action, _states = model.predict(obs, deterministic=True)
obs, reward, terminated, truncated, info = env.step(action)
total_reward = total_reward + reward
if terminated or truncated:
obs, info = env.reset()
break
print("total_reward {}".format(total_reward))
</code></pre>
<blockquote>
<p>total_reward 196.0</p>
</blockquote>
<p>I get a different reward, which I did not expect.</p>
<p>I expected to get the same 409 as in model.ep_info_buffer[-1].</p>
<p>Why the difference? Is .ep_info_buffer something different from the reward per episode?</p>
|
<python><machine-learning><reinforcement-learning><stable-baselines><dqn>
|
2025-01-14 02:14:32
| 1
| 575
|
Alfonso_MA
|
79,353,817
| 3,476,463
|
move email from one yahoo folder to another
|
<p>I'm running Python 3.10 on Ubuntu Desktop 22. I'm running the code below, which tries to move an email from one Yahoo Mail folder to another. It does this by copying the email to the new folder and then marking it for deletion in the old folder. When I run the code I get the error message below. Does anyone see what the issue is, and can you suggest how to fix it?</p>
<p>code:</p>
<pre><code>import imaplib
import email
from email.header import decode_header
from datetime import datetime, timedelta
import pandas as pd
# Login credentials
user, password = "username@yahoo.com", "xxxxxx"
imap_url = 'imap.mail.yahoo.com'
# Connect to the mail server
my_mail = imaplib.IMAP4_SSL(imap_url)
my_mail.login(user, password)
# move emails from one folder to another
# import imaplib
# Define the target folder to move emails to
target_folder = '"test011325"'
# Ensure the target folder exists
try:
my_mail.select(target_folder)
except imaplib.IMAP4.error:
print(f"Error: Target folder {target_folder} does not exist.")
my_mail.logout()
exit()
# Testing
mail_id_list = [b'8176']
# Move each email from mail_id_list
for mail_id in mail_id_list:
# Ensure the mail ID is in string format
mail_id_str = mail_id.decode()
# Copy the email to the target folder
copy_response = my_mail.copy(mail_id_str, target_folder)
if copy_response[0] != 'OK':
print(f"Failed to copy email with ID {mail_id_str}")
continue
# Mark the email for deletion in the original folder
store_response = my_mail.store(mail_id_str, '+FLAGS', '\\Deleted')
if store_response[0] != 'OK':
print(f"Failed to mark email with ID {mail_id_str} for deletion")
continue
# Permanently delete emails marked as \\Deleted
expunge_response = my_mail.expunge()
if expunge_response[0] == 'OK':
print("Emails moved successfully.")
else:
print("Failed to expunge emails.")
# Logout
my_mail.logout()
</code></pre>
<p>error:</p>
<pre><code>---------------------------------------------------------------------------
error Traceback (most recent call last)
<ipython-input-39-99fcc828d9f2> in <module>
15
16 # Mark the email for deletion in the original folder
---> 17 store_response = my_mail.store(mail_id_str, '+FLAGS', '\\\\Deleted')
18 if store_response[0] != 'OK':
19 print(f"Failed to mark email with ID {mail_id_str} for deletion")
~/anaconda3/envs/web_scrape_etl/lib/python3.7/imaplib.py in store(self, message_set, command, flags)
838 if (flags[0],flags[-1]) != ('(',')'):
839 flags = '(%s)' % flags # Avoid quoting the flags
--> 840 typ, dat = self._simple_command('STORE', message_set, command, flags)
841 return self._untagged_response(typ, dat, 'FETCH')
842
~/anaconda3/envs/web_scrape_etl/lib/python3.7/imaplib.py in _simple_command(self, name, *args)
1194 def _simple_command(self, name, *args):
1195
-> 1196 return self._command_complete(name, self._command(name, *args))
1197
1198
~/anaconda3/envs/web_scrape_etl/lib/python3.7/imaplib.py in _command_complete(self, name, tag)
1025 self._check_bye()
1026 if typ == 'BAD':
-> 1027 raise self.error('%s command error: %s %s' % (name, typ, data))
1028 return typ, data
1029
error: STORE command error: BAD [b'[CLIENTBUG] STORE Command arguments invalid']
</code></pre>
<p>update:</p>
<p>I added more detailed debugging as suggested below.</p>
<p>code:</p>
<pre><code>import imaplib
import email
from email.header import decode_header
from datetime import datetime, timedelta
import pandas as pd
# Login credentials
user, password = "username@yahoo.com", "xxxxxx"
imap_url = 'imap.mail.yahoo.com'
# Connect to the mail server
my_mail = imaplib.IMAP4_SSL(imap_url)
my_mail.login(user, password)
# Set the debug level (higher values produce more output)
my_mail.debug = 4
# move emails from one folder to another
# import imaplib
# Define the target folder to move emails to
target_folder = '"test011325"'
# Ensure the target folder exists
try:
my_mail.select(target_folder)
except imaplib.IMAP4.error:
print(f"Error: Target folder {target_folder} does not exist.")
my_mail.logout()
exit()
# Testing
mail_id_list = [b'8176']
# Move each email from mail_id_list
for mail_id in mail_id_list:
# Ensure the mail ID is in string format
mail_id_str = mail_id.decode()
# Copy the email to the target folder
copy_response = my_mail.copy(mail_id_str, target_folder)
if copy_response[0] != 'OK':
print(f"Failed to copy email with ID {mail_id_str}")
continue
# Mark the email for deletion in the original folder
store_response = my_mail.store(mail_id_str, '+FLAGS', '\\Deleted')
if store_response[0] != 'OK':
print(f"Failed to mark email with ID {mail_id_str} for deletion")
continue
# Permanently delete emails marked as \\Deleted
expunge_response = my_mail.expunge()
if expunge_response[0] == 'OK':
print("Emails moved successfully.")
else:
print("Failed to expunge emails.")
# Logout
my_mail.logout()
</code></pre>
<p>error:</p>
<pre><code>46:43.54 > b'LPBI2 SELECT "test011325"'
46:44.29 < b'* 0 EXISTS'
46:44.29 < b'* 0 RECENT'
46:44.29 < b'* OK [UIDVALIDITY 1736816455] UIDs valid'
46:44.29 < b'* OK [UIDNEXT 1] Predicted next UID'
46:44.29 < b'* FLAGS (\\Answered \\Deleted \\Draft \\Flagged \\Seen $Forwarded $Junk $NotJunk)'
46:44.29 < b'* OK [PERMANENTFLAGS (\\Answered \\Deleted \\Draft \\Flagged \\Seen $Forwarded $Junk $NotJunk)] Permanent flags'
46:44.29 < b'* OK [HIGHESTMODSEQ 1]'
46:44.30 < b'* OK [MAILBOXID (150)] Ok'
46:44.57 < b'LPBI2 OK [READ-WRITE] SELECT completed; now in selected state'
46:44.57 > b'LPBI3 COPY 8176 "test011325"'
46:44.98 < b'LPBI3 OK COPY completed'
46:44.98 > b'LPBI4 STORE 8176 +FLAGS (\\Deleted)'
46:49.28 < b'LPBI4 BAD [CLIENTBUG] STORE Bad sequence in the command'
---------------------------------------------------------------------------
error Traceback (most recent call last)
<ipython-input-44-3e34cf0dc485> in <module>
53
54 # Mark the email for deletion in the original folder
---> 55 store_response = my_mail.store(mail_id_str, '+FLAGS', '\\Deleted')
56 if store_response[0] != 'OK':
57 print(f"Failed to mark email with ID {mail_id_str} for deletion")
~/anaconda3/envs/web_scrape_etl/lib/python3.7/imaplib.py in store(self, message_set, command, flags)
838 if (flags[0],flags[-1]) != ('(',')'):
839 flags = '(%s)' % flags # Avoid quoting the flags
--> 840 typ, dat = self._simple_command('STORE', message_set, command, flags)
841 return self._untagged_response(typ, dat, 'FETCH')
842
~/anaconda3/envs/web_scrape_etl/lib/python3.7/imaplib.py in _simple_command(self, name, *args)
1194 def _simple_command(self, name, *args):
1195
-> 1196 return self._command_complete(name, self._command(name, *args))
1197
1198
~/anaconda3/envs/web_scrape_etl/lib/python3.7/imaplib.py in _command_complete(self, name, tag)
1025 self._check_bye()
1026 if typ == 'BAD':
-> 1027 raise self.error('%s command error: %s %s' % (name, typ, data))
1028 return typ, data
1029
error: STORE command error: BAD [b'[CLIENTBUG] STORE Bad sequence in the command']
</code></pre>
<h1>Update</h1>
<p>This worked:</p>
<pre><code>import imaplib
import email
from email.header import decode_header
from datetime import datetime, timedelta
# Login credentials
user, password = "username@yahoo.com", "xxxxxx"
imap_url = 'imap.mail.yahoo.com'
# Connect to the mail server
my_mail = imaplib.IMAP4_SSL(imap_url)
my_mail.login(user, password)
# Select the mailbox folder source folder
folder_name = '"username2012"'
my_mail.select(folder_name)
six_months_ago = (datetime.now() - timedelta(days=180)).strftime('%d-%b-%Y')
# Search for emails received since 6 months ago
_, data = my_mail.uid('SEARCH', None, 'SINCE', six_months_ago)
mail_id_list = data[0].split()
# Testing with the first email
if mail_id_list:
mail_id = mail_id_list[0] # Use UID
# Fetch the email by UID
typ, data = my_mail.uid('FETCH', mail_id, '(RFC822)')
if typ == 'OK' and data:
raw_email = data[0][1]
msg = email.message_from_bytes(raw_email)
# Extract and decode the subject line
raw_subject = msg['Subject']
if raw_subject:
decoded_subject, encoding = decode_header(raw_subject)[0]
if isinstance(decoded_subject, bytes):
decoded_subject = decoded_subject.decode(encoding or 'utf-8')
else:
decoded_subject = 'NA'
# Validation: Print the subject and UID of the email to copy
print('Subject:', decoded_subject)
print('UID:', mail_id.decode('utf-8'))
# Destination folder to copy email to
target_folder = '"test011325"'
# Copy the email to the target folder
copy_response = my_mail.uid("COPY", mail_id, target_folder)
if copy_response[0] == 'OK':
print(f"Email successfully copied to {target_folder}")
else:
print(f"Failed to copy email to {target_folder}: {copy_response}")
else:
print("Failed to fetch the email.")
else:
print("No emails found in the source folder.")
# Logout
my_mail.logout()
</code></pre>
|
<python><email><imaplib><yahoo-mail>
|
2025-01-14 01:52:42
| 0
| 4,615
|
user3476463
|
79,353,789
| 871,096
|
Workaround for jsbeautifier yielding invalid negative numbers for JSON
|
<p>I am working with Python 3.10.12 on a Linux (Ubuntu) system. I have installed the package <code>jsbeautifier</code> version 1.15.1.</p>
<p><code>jsbeautifier.beautify</code> outputs invalid JSON for lists of numbers which include negative numbers; sometimes there is a line break between the minus sign and the rest of the number. That is invalid JSON, which requires that there be no whitespace between the minus sign and the rest of the number. This is a known bug in <code>jsbeautifier</code>, see: <a href="https://github.com/beautifier/js-beautify/issues/2251" rel="nofollow noreferrer">https://github.com/beautifier/js-beautify/issues/2251</a></p>
<p>Does anyone know a workaround for this bug? I looked at the source code for <code>jsbeautifier</code> but I don't see how to convince it to keep the minus sign together with the rest of the number. I also don't see any easy way to define a subclass and redefine a function or something in order to change the behavior.</p>
<p>I know I can postprocess whatever is returned to join minus signs with the rest of the number. I am hoping to avoid that except as a last resort.</p>
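<p>For completeness, the post-processing I would rather avoid looks roughly like this (a sketch; the regex is my own guess at the failure pattern, and <code>raw_json_text</code> is a hypothetical input string):</p>
<pre class="lang-py prettyprint-override"><code>import re

import jsbeautifier

pretty = jsbeautifier.beautify(raw_json_text)
# Re-join a minus sign that ended up alone at the end of a line
# with the digits that follow on the next line.
pretty = re.sub(r"-\s*\n\s*(?=\d)", "-", pretty)
</code></pre>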
|
<python><json>
|
2025-01-14 01:17:19
| 0
| 17,677
|
Robert Dodier
|
79,353,713
| 9,632,470
|
Jupyter Command Not Recognized in MacOS
|
<p>I am attempting to install Jupyter on my MacBook Air (OS Sonoma 14.7) but am having difficulties. Below I will list the details of my situation, reference several related questions, and point out why they do not help me. I am very much a novice with OS / installation / path issues / advanced use of the zsh terminal.</p>
<p>Thank you very much for any help.</p>
<ul>
<li><p>I have Python 3.9 installed</p>
<p>murray@Murrays-MacBook-Air ~ % python3 --version
Python 3.9.0</p>
</li>
<li><p>I (think) I have latest pip installed</p>
<p>murray@Murrays-MacBook-Air ~ % pip3 --version
pip 24.3.1 from /Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/pip (python 3.9)</p>
</li>
<li><p>I installed Jupyter using the command</p>
<p>pip3 install jupyter</p>
</li>
<li><p>The installation output gives some warnings:</p>
<p>WARNING: Ignoring invalid distribution - (/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages)
WARNING: Ignoring invalid distribution -ip (/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages)
WARNING: Ignoring invalid distribution - (/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages)
WARNING: Ignoring invalid distribution -ip (/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages)
WARNING: Ignoring invalid distribution - (/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages)
WARNING: Ignoring invalid distribution -ip (/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages)</p>
</li>
<li><p>Calling Jupyter does not work</p>
<p>murray@Murrays-MacBook-Air ~ % jupyter lab</p>
<p>zsh: command not found: jupyter</p>
<p>murray@Murrays-MacBook-Air ~ % jupyter</p>
<p>zsh: command not found: jupyter</p>
<p>murray@Murrays-MacBook-Air ~ % jupyter-lab</p>
<p>zsh: command not found: jupyter-lab</p>
</li>
<li><p>Calling Jupyter in this way does work (the notebook opens):</p>
<p>murray@Murrays-MacBook-Air ~ % python3 -m jupyter lab</p>
</li>
<li><p>Asking terminal which Jupyter is installed does not work:</p>
<p>murray@Murrays-MacBook-Air ~ % which -a jupyter
jupyter not found</p>
<p>murray@Murrays-MacBook-Air ~ % which -a jupyter lab
jupyter not found
lab not found</p>
<p>murray@Murrays-MacBook-Air ~ % which -a jupyterlab
jupyterlab not found</p>
<p>murray@Murrays-MacBook-Air ~ % which -a jupyter-lab
jupyter-lab not found</p>
</li>
</ul>
<p>I have seen <a href="https://stackoverflow.com/questions/35029029/jupyter-notebook-command-does-not-work-on-mac">this</a> very similar question, but the main answer is to use the command python3 -m jupyterlab... however I would like to discover the root cause of why simply jupyterlab does not work. I am also trying to set up Jupyter to use with R as described <a href="https://stackoverflow.com/questions/57870575/install-and-run-r-kernel-for-jupyter-notebook">here</a>, but I don't want to troubleshoot that error until my main installation is working completely.</p>
<p>Thanks!</p>
|
<python><terminal><jupyter-lab>
|
2025-01-14 00:05:11
| 1
| 441
|
Prince M
|
79,353,665
| 8,229,029
|
How to create an array of length n in R where each element has two elements (example given)
|
<p>I would like to create an array in R that looks like the following array made in Python. This may be a very simple question, but it's giving me trouble!</p>
<pre><code>array([[19358, 19388],
[19389, 19416],
[19417, 19447],
[19448, 19477],
[19478, 19508],
[19509, 19538],
[19539, 19569],
[19570, 19600],
[19601, 19630],
[19631, 19661],
[19662, 19691],
[19692, 19722]])
</code></pre>
<p>EDIT: To add some context, I am trying to put a variable of depths (lower and upper values of layers in the atmosphere) into a netcdf dimension using the ncdf4 package in R. It seems like giving this type of array is needed in order to do this.</p>
<p><a href="https://nordatanet.github.io/NetCDF_in_Python_from_beginner_to_pro/09_cells_and_cell_methods.html" rel="nofollow noreferrer">https://nordatanet.github.io/NetCDF_in_Python_from_beginner_to_pro/09_cells_and_cell_methods.html</a></p>
|
<python><r><arrays><netcdf>
|
2025-01-13 23:28:11
| 2
| 1,214
|
user8229029
|
79,353,603
| 5,852,964
|
Changing multiple variables in Python
|
<p>Say I have three groups of variables a1, a2, a3 , b1, b2, b3, c1, c2, c3 and I want to change the first in each group using a loop rather than individual assignments.<br />
Putting each group in a list doesn't work:</p>
<pre><code>a1, a2, a3 = 1, 2, 3
b1, b2, b3 = 4, 5, 6
c1, c2, c3 = 7, 8, 9
ls = [[a1, a2, a3], [b1, b2, b3], [c1, c2, c3]]
for i in range(len(ls)):
ls[i][0] = 100
print(ls)
print(a1, b1, c1)
</code></pre>
<p>Output:</p>
<pre class="lang-none prettyprint-override"><code>[[100, 2, 3], [100, 5, 6], [100, 8, 9]]
1 4 7
</code></pre>
<p>I guess that I'm getting into "copy" vs "deep copy" territory here. How can I do this?</p>
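<p>For context, my current understanding of why this happens (happy to be corrected): the inner lists end up holding the int objects themselves, with no link back to the names <code>a1</code>, <code>b1</code>, <code>c1</code>, so rebinding a list slot never touches those names:</p>
<pre><code>a1 = 1
ls = [[a1, 2, 3]]
ls[0][0] = 100  # rebinds the list slot only
print(ls)       # [[100, 2, 3]]
print(a1)       # 1, the name a1 still points at the original int
</code></pre>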
|
<python><list><variable-assignment>
|
2025-01-13 22:42:38
| 1
| 331
|
Patrick Dennis
|
79,353,446
| 1,438,045
|
Fabric 2 - how to configure a default host?
|
<p>I have some Fabric tasks in my <code>fabfile.py</code> such as:</p>
<pre><code>@task
def my_task(c):
c.run("hostname")
</code></pre>
<p>which work fine if I run them like this:</p>
<pre><code>fab -H user@host my_task
</code></pre>
<p>But I would like to run them with a default host, so I don't need to type in the host every time. Like this:</p>
<pre><code>fab my_task
</code></pre>
<p>(Such as I used to do in Fabric 1)</p>
<p>I read somewhere to use a <code>fabric.yml</code> file</p>
<pre><code>default:
user: user
host: host
</code></pre>
<p>But it doesn't seem to be working and doing <code>fab my_task</code> runs the task in my local machine.</p>
<p>Also, I don't want to explicitly define the hosts in my fabfile as I would like to have reusable tasks to share between projects.</p>
|
<python><fabric>
|
2025-01-13 21:11:46
| 1
| 1,922
|
Martin Massera
|
79,353,295
| 662,285
|
No overload variant of subprocess.run matches argument types list[str], dict[str,object]
|
<p>Below is my Python code where I am trying to pass <code>kubeseal_cmd</code> and <code>run_options</code> to the <code>subprocess.run</code> method, but it's giving me the error</p>
<blockquote>
<p>No overload variant of subprocess.run matches argument types list[str], dict[str,object].</p>
</blockquote>
<p>What am I missing? I am using Python 3.12.</p>
<pre><code>kubeseal_path = "/var/tmp/workspace/file.txt"
secret = yaml.safe_load(secret_File.read().encode("utf-8"))
cert_file = "/var/tmp/workspace/file123.txt"
kubeseal_cmd = [
kubeseal_path,
"--cert",
cert_file,
"--format=yaml",
"</dev/stdin>"
]
run_options = {
"env": {},
"stdout": subprocess.PIPE,
"check": True,
"input": yaml.dump(secret).encode(),
}
sealed_secret = yaml.safe_load(subprocess.run(kubeseal_cmd, **run_options).stdout.decode().strip())
</code></pre>
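<p>For reference, spelling the options out as keyword arguments (instead of unpacking a dict) is the variant I was trying to avoid; I include it only to show what I mean, as a sketch rather than a fix I'm sure about:</p>
<pre><code># Same call with explicit keyword arguments; here the type of each keyword
# is known statically, so an overload of subprocess.run can be matched.
proc = subprocess.run(
    kubeseal_cmd,
    env={},
    stdout=subprocess.PIPE,
    check=True,
    input=yaml.dump(secret).encode(),
)
sealed_secret = yaml.safe_load(proc.stdout.decode().strip())
</code></pre>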
|
<python><python-typing><mypy>
|
2025-01-13 19:51:46
| 1
| 4,564
|
Bokambo
|
79,353,281
| 7,531,433
|
Type aliasing Python's `typing.Annotated` type
|
<p>Assuming I have some generic type <code>Foo[T]</code>, how can I create an alias <code>AFoo[T, x] = Annotated[Foo[T], x]</code> or something similar, which works with pyright but also lets me extract the metadata and type-hint <code>x</code> and <code>T</code>?</p>
<p>What I've tried so far:</p>
<ul>
<li>Using an actual type alias</li>
</ul>
<pre><code>T = TypeVar("T")
AFoo = Annotated[Foo[T], Any]
</code></pre>
<ul>
<li>Using <code>__class_getitem__</code></li>
</ul>
<pre><code>T = TypeVar("T")
class AFoo(Generic[T]):
def __class_getitem__(cls, params: tuple[type[T], Any]):
t, x = params
return Annotated[Foo[t], x]
</code></pre>
<p>Both approaches seem to fail because pyright seems to expect that <code>AFoo[str, "bar"]</code> is equivalent to <code>Foo[str, "bar"]</code>.</p>
|
<python><python-typing><pyright>
|
2025-01-13 19:38:08
| 0
| 709
|
tierriminator
|
79,353,192
| 1,788,656
|
Getting into the definition of the Pandas.Series super().reindex
|
<p>I am interested in seeing the source code of <code>pandas.Series.reindex</code>, so I jumped to the source using the link on the <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.reindex.html" rel="nofollow noreferrer">documentation page</a>, yet I found a call to another function, <a href="https://github.com/pandas-dev/pandas/blob/v2.2.3/pandas/core/series.py#L5136-L5161" rel="nofollow noreferrer">super().reindex</a>, and I have not been able to find out where that function is defined.
How can I get to the definition that <code>pandas.Series.reindex</code> actually uses?</p>
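<p>This is the kind of probe I've been using so far to see which base class provides an implementation (just a sketch with <code>inspect</code>):</p>
<pre><code>import inspect
import pandas as pd

# Walk Series' method resolution order and report every class that defines
# its own reindex, together with the file it lives in.
for cls in pd.Series.__mro__:
    if "reindex" in vars(cls):
        print(cls.__name__, "->", inspect.getsourcefile(cls))
</code></pre>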
|
<python><pandas><super>
|
2025-01-13 18:54:12
| 1
| 725
|
Kernel
|
79,353,092
| 4,721,937
|
How to parametrize class-scoped fixtures with inputs and expected outputs?
|
<p>I'm writing tests for my flask application endpoints. The first endpoint takes a multipart request and stores provided files on the server. The second endpoint retrieves the metadata of the stored files.</p>
<p>I want to create extensible tests for these endpoints, but I don't know how to properly arrange fixtures and parameters. Assume that posting file is an expensive operation which I want to perform only once per class. Here is a simplified code for what I want to achieve:</p>
<pre class="lang-py prettyprint-override"><code>@pytest.mark.parametrize(
("title", "content_type", "expected_label", "expected_media_type"),
[
("My Title", "text/plain; charset=UTF-8", "My Title", "text/plain"),
# remaining test parameters ...
]
)
class TestUploadFile:
@pytest.fixture(scope="class")
def input_file_id(self, app_client, title, content_type):
r = app_client.post(
"/upload", data={"file": (open("example.txt", "rb"), title, content_type}
)
return r.json["file"]. # response contains file id
@pytest.fixture(scope="class")
def file_resource(self, app_client, input_file_id):
return app_client.get(f"/files/{input_file_id}")
def test_file_exists(self, file_resource, expected_label, expected_media_type):
assert file_resource.status_code == 200
def test_file_has_label(self, file_resource, expected_label, expected_media_type):
assert file_resource.json["label"] == expected_title
def test_file_has_media_type(self, file_resource, expected_label, expected_media_type):
assert file_resource.json["mediaType"] == expected_media_type
</code></pre>
<p>Here are the issues and questions I have:</p>
<ul>
<li>Tests raise <code>ScopeMismatch: You tried to access the function scoped fixture input_file_id with a class scoped request object.</code>. Apparently, <code>input_file_id</code> has a <em>function</em> scope, even though I set it to <em>class</em>.</li>
<li>I can change the scopes to <em>function</em>, but I want to upload the file once per dataset, not for every test function.</li>
<li>I heard it's good practice to have only one assertion per test, but is that true in this case?</li>
<li>I have to repeat <code>expected_label</code> and <code>expected_media_type</code> arguments in tests that don't use them. How to avoid it?</li>
</ul>
<p>There is a similar question already answered <a href="https://stackoverflow.com/questions/76256695/pytest-combine-class-level-parametrization-with-class-scoped-fixtures">pytest: combine class level parametrization with class scoped fixtures</a> but I could not apply it to my case, because my parametrization also include expected values.</p>
|
<python><flask><pytest>
|
2025-01-13 18:08:39
| 0
| 2,965
|
warownia1
|
79,352,963
| 1,128,648
|
Upload file to onedrive personal using python in non-interactive way
|
<p>I have seen some older methods using the OneDrive SDK, but it seems those are not working now. This is one of the methods I found after some research, but it is not working either.</p>
<pre><code>import msal
import requests
# Azure AD app credentials
client_id = 'xxx9846xxxx'
client_secret = 'xxxTdxxx'
tenant_id = 'xxx-xxxx'
# Authority URL for your tenant
authority = f'https://login.microsoftonline.com/{tenant_id}'
# Scopes needed for OneDrive file operations
scopes = ['https://graph.microsoft.com/.default']
# Initialize the MSAL ConfidentialClientApplication
app = msal.ConfidentialClientApplication(
client_id,
client_credential=client_secret,
authority=authority
)
# Get the access token
token_response = app.acquire_token_for_client(scopes=scopes)
access_token = token_response.get('access_token')
if not access_token:
raise Exception("Could not acquire access token")
# Define the file to upload
file_path = 'C:/test.csv'
file_name = file_path.split('/')[-1]  # file_name was missing from the snippet; derived here so the code runs
# Microsoft Graph API endpoint for OneDrive (using Application Permissions)
upload_url = 'https://graph.microsoft.com/v1.0/me/drive/root:/Documents/' + file_name + ':/content'
# Open the file in binary mode
with open(file_path, 'rb') as file:
file_content = file.read()
# Make the PUT request to upload the file
headers = {
'Authorization': f'Bearer {access_token}',
'Content-Type': 'application/octet-stream'
}
response = requests.put(upload_url, headers=headers, data=file_content)
# Check if the file upload was successful
if response.status_code == 201:
print(f'File uploaded successfully to OneDrive: {file_name}')
else:
print(f'Error uploading file: {response.status_code}, {response.text}')
</code></pre>
<p>I am getting the below error:</p>
<blockquote>
<p>Error uploading file: 400,
{"error":{"code":"BadRequest","message":"/me request is only valid
with delegated authentication
flow.","innerError":{"date":"2025-01-13T17:06:35","request-id":"5959d049-9ad7-4ced-b6fc-00ddddd242","client-request-id":"5959d049-9ad7-4ced-b6fc-0079ddddd"}}}</p>
</blockquote>
<p>How can I resolve this error? Or is there any alternative way to upload the files?</p>
<p>Update:</p>
<p>I have created a new app registration which support "Accounts in any organizational directory (Any Microsoft Entra ID tenant - Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox)"</p>
<p>These are the API permissions I have added:</p>
<p><a href="https://i.sstatic.net/ZAsPixmS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZAsPixmS.png" alt="enter image description here" /></a></p>
<p>updated code :</p>
<pre><code>import requests
# Replace with your details
client_id = 'xxxxx'
client_secret = 'xxxx'
tenant_id = '052c8b5b-xxxx'
filename = 'C:/test.csv'
onedrive_folder = 'CloudOnly/test'
user_id = 'xxxx-e7d1-44dc-a846-5exxxx'
# Get the access token
url = f'https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token'
data = {
'grant_type': 'client_credentials',
'client_id': client_id,
'client_secret': client_secret,
'scope': 'https://graph.microsoft.com/.default'
}
response = requests.post(url, data=data)
token = response.json().get('access_token')
# Upload the file to OneDrive
headers = {
'Authorization': f'Bearer {token}',
'Content-Type': 'application/octet-stream'
}
file_content = open(filename, 'rb').read()
upload_url = f'https://graph.microsoft.com/v1.0/users/{user_id}/drive/root:/{onedrive_folder}/{filename.split("/")[-1]}:/content'
upload_response = requests.put(upload_url, headers=headers, data=file_content)
if upload_response.status_code == 201:
print('File uploaded successfully!')
else:
print('Error uploading file:', upload_response.json())
</code></pre>
<p>But this gives me error:</p>
<blockquote>
<p>Error uploading file: {'error': {'code': 'BadRequest', 'message':
'Tenant does not have a SPO license.', 'innerError': {'date':
'2025-01-14T02:15:47', 'request-id': 'cf70193e-1723-44db-9f5e-xxxxx',
'client-request-id': 'cf70193e-1723-44db-9f5e-xxxx'}}}</p>
</blockquote>
|
<python><azure><file-upload><onedrive>
|
2025-01-13 17:09:04
| 1
| 1,746
|
acr
|
79,352,669
| 1,672,429
|
How/why are {2,3,10} and {x,3,10} with x=2 ordered differently?
|
<p>Sets are unordered, or rather their order is an implementation detail. I'm interested in that detail. And I saw a case that surprised me:</p>
<pre><code>print({2, 3, 10})
x = 2
print({x, 3, 10})
</code></pre>
<p>Output (<a href="https://ato.pxeger.com/run?1=m72soLIkIz9vwYKlpSVpuhY3kwqKMvNKNKqNdBSMdRQMDWo1uSoUbBWMuKDiFQhxrszcgvyiEoXiymKoLJClV5ZaVJyZn6eJJARUl5Oam5pXklgCkoHYBLUQZjEA" rel="noreferrer">Attempt This Online!</a>):</p>
<pre><code>{3, 10, 2}
{10, 2, 3}
</code></pre>
<p>Despite identical elements written in identical order, they get ordered differently. How does that happen, and is that done intentionally for some reason, e.g., for optimizing lookup speed?</p>
<p>My <code>sys.version</code> and <code>sys.implementation</code>:</p>
<pre><code>3.13.0 (main, Nov 9 2024, 10:04:25) [GCC 14.2.1 20240910]
namespace(name='cpython', cache_tag='cpython-313', version=sys.version_info(major=3, minor=13, micro=0, releaselevel='final', serial=0), hexversion=51183856, _multiarch='x86_64-linux-gnu')
</code></pre>
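<p>In case it helps, this is how I've been comparing the two cases so far, by looking at the compiled bytecode (just a diagnostic; I don't know yet what to conclude from it):</p>
<pre><code>import dis

# Compare how CPython builds the two set displays.
dis.dis("{2, 3, 10}")
dis.dis("x = 2\nprint({x, 3, 10})")
</code></pre>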
|
<python><cpython><python-internals>
|
2025-01-13 15:24:07
| 1
| 28,999
|
Stefan Pochmann
|
79,352,565
| 1,788,656
|
Vectorizing three nested loops that calculate the daily mean of hourly data
|
<p>Is there a way to vectorize the following triple-nested loop that calculates the daily mean of hourly data? The function below loops first over the years, then months, and finally over days. It also checks the last month and day to ensure that the loop does not go beyond the last month or day of the data.</p>
<pre><code>def hourly2daily(my_var,my_periods):
import pandas as pd
import numpy as np
import sys
print('######### Daily2monthly function ##################')
Frs_year =my_periods[0].year
Frs_month =my_periods[0].month
Frs_day =my_periods[0].day
Frs_hour =my_periods[0].hour
Last_year =my_periods[-1].year
Last_month =my_periods[-1].month
Last_day =my_periods[-1].day
Last_hour =my_periods[-1].hour
print('First year is '+str(Frs_year) +'\n'+\
'First months is '+str(Frs_month)+'\n'+\
'First day is '+str(Frs_day)+'\n'+\
'First hour is '+str(Frs_hour))
print(' ')
print('Last year is '+str(Last_year)+'\n'+\
'Last months is '+str(Last_month)+'\n'+\
'Last day is '+str(Last_day)+'\n'+\
'Last hour is '+str(Last_hour))
Frs = str(Frs_year)+'/'+str(Frs_month)+'/'+str(Frs_day)+' '+str(Frs_hour)+":00"
Lst = str(Last_year)+'/'+str(Last_month)+'/'+str(Last_day)+' '+str(Last_hour)+":00"
my_daily_time=pd.date_range(Frs,Lst,freq='D')
## END of the data_range tricks ###########
nt_days=len(my_daily_time)
nd=np.ndim(my_var)
if (nd == 1): # only time series
var_mean=np.full((nt_days),np.nan)
if (nd == 2): # e.g., time, lat or lon or lev
n1=np.shape(my_var)[1]
var_mean=np.full((nt_days,n1),np.nan)
if (nd == 3): # e.g., time, lat, lon
n1=np.shape(my_var)[1]
n2=np.shape(my_var)[2]
var_mean=np.full((nt_days,n1,n2),np.nan)
if (nd == 4): # e.g., time, lat , lon, lev
n1=np.shape(my_var)[1]
n2=np.shape(my_var)[2]
n3=np.shape(my_var)[3]
var_mean=np.full((nt_days,n1,n2,n3),np.nan)
end_mm=12
k=0
####### loop over years ################
for yy in np.arange(Frs_year,Last_year+1):
print('working on the '+str(yy))
# in case the last month is NOT 12
if (yy == Last_year):
end_mm=Last_month
print('The last month is '+str(end_mm))
## Loop over months ################
for mm in np.arange(1,end_mm+1):
end_day=pd.Period(str(yy)+'-'+str(mm)).days_in_month
# in case the last day is not at the end of the month.
if ((yy == Last_year) & (mm == Last_month)):
end_day=Last_day
#### loop over days ###############
for dd in np.arange(1,end_day+1):
print(str(yy)+'-'+str(mm)+'-'+str(dd))
#list all days of the month and year.
I=np.where((my_periods.year == yy) &\
(my_periods.month == mm) &\
(my_periods.day == dd ))[0]
print(I)
# if there is a discontinuity in time.
# I will be empty and then you have to quit.
# you have first to reindex the data.
if len(I) == 0 :
print('Warning time shift here >>')
print('Check the continuity of your time sequence')
sys.exit()
var_mean[k,...]=np.nanmean(my_var[I,...],0)
k=k+1
return var_mean,my_daily_time
</code></pre>
<p>Here is, perhaps, an easy and quick way to call this function.
Note that you may be asked to install Pooch.</p>
<pre><code>import numpy as np
import xarray as xr
x = xr.tutorial.load_dataset("air_temperature")
time = x['time'] # reading the time
period=time.to_index().to_period('h')
bb0,bb1=hourly2daily(x['air'],period)
</code></pre>
<p>I am aware that there is another way to implement this; for example, I can do the previous calculation in a single loop as shown below, but it won't help for data with discontinuities in time.</p>
<pre><code>daily_tem2m = np.full((int(len_time/24),len_lat,len_lon),np.nan,float)
counter=0
timemm=[]
for i in np.arange(0,len_time,24):
print(period[i])
timemm.append(period[i])
daily_tem2m[counter,:,:]=np.nanmean(cleaned_tem2m_celsius.data[i:i+24,:,:],0)
counter=counter+1
</code></pre>
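<p>For reference, this is roughly the kind of one-liner I am hoping can replace the nested loops for the regular case (an untested sketch using xarray's resample; it obviously does not cover the time-continuity check my function does):</p>
<pre><code>import xarray as xr

x = xr.tutorial.load_dataset("air_temperature")
# Daily mean along the time dimension; NaNs are skipped by default.
daily = x["air"].resample(time="1D").mean()
</code></pre>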
|
<python><python-3.x><pandas><vectorization>
|
2025-01-13 14:50:57
| 2
| 725
|
Kernel
|
79,352,553
| 15,307,950
|
BeautifulSoup prettify changes content, not just layout
|
<p>BeautifulSoup <code>prettify()</code> modifies significant whitespace even if the attribute <code>xml:space</code> is set to <code>"preserve"</code>.</p>
<p>Example xml file with significant whitespace:</p>
<pre><code><svg viewBox="0 0 160 50" xmlns="http://www.w3.org/2000/svg">
<text y="20" xml:space="default"> Default spacing</text>
<text y="40" xml:space="preserve"> <tspan>reserved spacing</tspan></text>
</svg>
</code></pre>
<p>Code:</p>
<pre><code>from bs4 import BeautifulSoup
xml_string_with_significant_whitespace ='''
<svg viewBox="0 0 160 50" xmlns="http://www.w3.org/2000/svg">
<text y="20" xml:space="default"> Default spacing</text>
<text y="40" xml:space="preserve"> <tspan>reserved spacing</tspan></text>
</svg>
'''
soup = BeautifulSoup(xml_string_with_significant_whitespace, "xml")
# no modifications made
print(soup.prettify()) # modifies significant whitespace
# print(str(soup)) # doesn't modify significant whitespace
</code></pre>
<p>Output:</p>
<pre><code><svg viewBox="0 0 160 50" xmlns="http://www.w3.org/2000/svg">
<text xml:space="default" y="20">
Default spacing
</text>
<text xml:space="preserve" y="40">
<tspan>
reserved spacing
</tspan>
</text>
</svg>
</code></pre>
<p>Text will be moved due to modified whitespace.</p>
<p>How do I prevent <code>prettify()</code> from changing the meaning of the xml file, instead of just changing the layout?</p>
|
<python><xml><svg><beautifulsoup>
|
2025-01-13 14:46:13
| 2
| 726
|
elechris
|
79,352,276
| 25,413,271
|
Apply function for lower triangle of 2-d array
|
<p>I have an array:</p>
<pre><code>U = np.array([3, 5, 7, 9, 11])
</code></pre>
<p>I want to get a result like:</p>
<pre><code>result = np.array([
[ np.nan, np.nan, np.nan, np.nan, np.nan],
[U[0] - U[1], np.nan, np.nan, np.nan, np.nan],
[U[0] - U[2], U[1] - U[2], np.nan, np.nan, np.nan],
[U[0] - U[3], U[1] - U[3], U[2] - U[3], np.nan, np.nan],
[U[0] - U[4], U[1] - U[4], U[2] - U[4], U[3] - U[4], np.nan]
])
</code></pre>
<p>I can use <code>np.tril_indices(4, k=-1)</code> to get indices of lower triangle without diagonal, but what is next?</p>
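<p>To make the question concrete, this is roughly what I imagine the next step looks like, though I'm not sure it is the idiomatic way (sketch):</p>
<pre><code>import numpy as np

U = np.array([3, 5, 7, 9, 11])
result = np.full((U.size, U.size), np.nan)
i, j = np.tril_indices(U.size, k=-1)   # row/column indices below the diagonal
result[i, j] = U[j] - U[i]             # element (i, j) = U[j] - U[i]
</code></pre>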
|
<python><arrays><numpy><matrix>
|
2025-01-13 13:06:00
| 2
| 439
|
IzaeDA
|
79,352,069
| 315,168
|
Detecting if Python code is run under Visual Studio Code (or other) Python debugger
|
<p>I have Python and Pandas code <a href="https://github.com/tradingstrategy-ai/getting-started/blob/master/notebooks/single-backtest/base-meme-index.ipynb" rel="nofollow noreferrer">which does complex Pandas series calculations for thousands of series and runs these calculations parallel for speedup</a> using multiprocessing.</p>
<p>It seems the Visual Studio Code Python debugger (or any Python debugger) plus multiprocessing is a very difficult combination to manage. Breakpoints do not trigger, and if they trigger, the debugger does not attach and the application just hangs.</p>
<p>I'd like to change my library code so that if a Python debugger is detected, like pdb or ipdb, it runs all calculation code using a single-threaded main loop instead of multiprocessing, so that the debugger can attach correctly.</p>
<p>How can I detect the presence of Visual Studio Code debugger (or any other Python debugger) inside Python?</p>
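<p>The only idea I have so far is the standard-library trace-hook check below; I don't know whether it covers the VS Code debugger specifically, so treat it as a starting point rather than an answer:</p>
<pre><code>import sys

def debugger_active() -> bool:
    # pdb and other sys.settrace-based debuggers install a global trace function.
    return sys.gettrace() is not None
</code></pre>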
|
<python><visual-studio-code><pdb>
|
2025-01-13 11:42:24
| 0
| 84,872
|
Mikko Ohtamaa
|
79,351,991
| 20,298,890
|
Azure authentication failure in spawned processes from multiprocessing pool
|
<p>I have a very specific issue with the combination of multiprocessing + azure authentication and could not find a valuable open issue, so here I go:</p>
<p>I am working on a python program that uses <strong>multiprocessing.Pool</strong> to accelerate computations.</p>
<p>The data is read from and uploaded to <strong>Azure storage</strong> resources. The credentials are retrieved from the configuration created by a run of <code>az login</code> beforehand.</p>
<p>All runs on a docker image using ubuntu, so <strong>POSIX</strong> system.</p>
<p>The program works, but sometimes <em>hangs</em> on some data. I did some research and found this probably has to do with a deadlock issue caused by the <em>fork</em> strategy of process instantiation (see here: <a href="https://pythonspeed.com/articles/python-multiprocessing/" rel="nofollow noreferrer">https://pythonspeed.com/articles/python-multiprocessing/</a>)</p>
<p>So I switched to the <em>spawn</em> method (<em>note: I also tried forkserver and got the same issue</em>), and the corresponding section of my code looks like this :</p>
<pre class="lang-py prettyprint-override"><code>import multiprocessing as mp
with mp.get_context("spawn").Pool() as pool:
for batch_result in pool.imap(self.thread_apply, point_batches):
results.extend(batch_result)
</code></pre>
<p>where <code>self.thread_apply</code> is the method doing the computation and <code>point_batches</code> is a numpy array.</p>
<p><strong>But then, trying this new setup, after my processes are generated in the pool I get authentication errors:</strong></p>
<pre><code>azure.core.exceptions.ClientAuthenticationError: DefaultAzureCredential failed to retrieve a token from the included credentials.
Attempted credentials:
EnvironmentCredential: EnvironmentCredential authentication unavailable. Environment variables are not fully configured.
Visit https://aka.ms/azsdk/python/identity/environmentcredential/troubleshoot to troubleshoot this issue.
ManagedIdentityCredential: ManagedIdentityCredential authentication unavailable, no response from the IMDS endpoint.
SharedTokenCacheCredential: SharedTokenCacheCredential authentication unavailable. No accounts were found in the cache.
AzureCliCredential: Failed to invoke the Azure CLI
AzurePowerShellCredential: PowerShell is not installed
AzureDeveloperCliCredential: Azure Developer CLI could not be found. Please visit https://aka.ms/azure-dev for installation instructions and then,once installed, authenticate to your Azure account using 'azd auth login'.
To mitigate this issue, please refer to the troubleshooting guidelines here at https://aka.ms/azsdk/python/identity/defaultazurecredential/troubleshoot.
</code></pre>
<p><strong>Note:</strong> When trying authentication without the <code>az login</code> configuration and instead using the client id, tenant id and client secret as env vars, the authentication was successful, but I still want to be able to run this program with the <code>az login</code> authentication tool.</p>
<p>Thanks for help :)</p>
|
<python><azure><python-multiprocessing>
|
2025-01-13 11:12:33
| 0
| 503
|
marting
|
79,351,922
| 2,457,483
|
Can't Access Spotify Song Features Despite Successfully Generating Token and Being Logged I
|
<p>I am trying to access audio features of a song using the Spotify Web API, but I keep getting a 403 Forbidden error even though:</p>
<ul>
<li>I am successfully logging in using OAuth.</li>
<li>I can retrieve a valid access token using
sp.auth_manager.get_access_token().</li>
</ul>
<p>The token has the correct scopes (user-library-read and playlist-read-private).
Here is the code I am using to retrieve the song features:</p>
<pre><code>import requests
# Get the cached token, or request a new one if it doesn't exist
token_info = sp.auth_manager.get_cached_token()
if token_info:
access_token = token_info['access_token']
else:
# If no cached token exists, initiate the OAuth flow
token_info = sp.auth_manager.get_access_token()
access_token = token_info['access_token']
print(f" access_token = {access_token}")
url = "https://api.spotify.com/v1/audio-features"
headers = {
"Authorization": f"Bearer {access_token}" # Use the token you've gotten from OAuth
}
params = {
"ids": "7qiZfU4dY1lWllzX7mPBI3" # Replace with a valid track ID
}
response = requests.get(url, headers=headers, params=params)
if response.status_code == 200:
print(response.json())
else:
print(f"Error {response.status_code}: {response.text}")
</code></pre>
<p>Despite the token being valid and the song ID being correct, I am receiving the following error:</p>
<pre><code>access_token = *****
Error 403: {
"error" : {
"status" : 403
}
}
</code></pre>
<p>What I have already tried:</p>
<ul>
<li>Verified the token and checked the scopes are correct
(user-library-read, playlist-read-private).</li>
<li>Confirmed the track ID exists and is accessible. Printed the response
body to check for more details (still just a 403 error).</li>
<li>Ensured the token hasn't expired (using a cached token).</li>
<li>Verified that I am able to make other API calls (e.g., search for
tracks)</li>
</ul>
<p>Any ideas on what might be causing this issue? How can I fix the 403 error when trying to access song features?</p>
|
<python><oauth><spotify><spotify-app>
|
2025-01-13 10:46:13
| 0
| 5,411
|
Daniel
|
79,351,880
| 19,155,645
|
RAG on Mac (M3) with langchain (RetrievalQA): code runs indefinitely
|
<p>I'm trying to run a RAG system on my Mac M3 Pro (18 GB RAM) using langchain and <code>Llama-3.2-3B-Instruct</code> in a Jupyter notebook (the vector storage is Milvus).</p>
<p>When I invoke <code>RetrievalQA.from_chain_type</code>, the cell runs indefinitely (at least 15 minutes; I did not let it run longer...).</p>
<pre><code>from langchain.chains import RetrievalQA
qa_chain = RetrievalQA.from_chain_type(
llm=llm,
retriever=retriever,
return_source_documents=True, # (optional)
chain_type_kwargs={"prompt": prompt}
)
response = qa_chain.invoke({"query": question})
</code></pre>
<p>Can you help me resolve this, please?</p>
<p>The llm, retriever and prompt are as below:</p>
<pre><code>from langchain.llms.base import LLM
from typing import List, Dict
from pydantic import PrivateAttr
class HuggingFaceLLM(LLM):
# Define pipeline as a private attribute
_pipeline: any = PrivateAttr()
def __init__(self, pipeline):
super().__init__()
self._pipeline = pipeline
def _call(self, prompt: str, stop: List[str] = None) -> str:
# Generate text using the Hugging Face pipeline
# response = self._pipeline(prompt, max_length=512, num_return_sequences=1)
response = self._pipeline(prompt, num_return_sequences=1)
return response[0]["generated_text"]
@property
def _identifying_params(self):
return {"name": "HuggingFaceLLM"}
@property
def _llm_type(self):
return "custom"
llm = HuggingFaceLLM(pipeline=llm_pipeline)
</code></pre>
<p>llm pipeline:</p>
<pre><code>from langchain.prompts import PromptTemplate
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name = "meta-llama/Llama-3.2-3B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name, use_auth_token=hf_token)
model = AutoModelForCausalLM.from_pretrained(model_name, use_auth_token=hf_token)
llm_pipeline = pipeline( "text-generation",
model=model,
tokenizer=tokenizer,
device=0,
max_new_tokens=256,
temperature=0.7,
top_p=0.9,
truncation=True,
)
</code></pre>
<p>prompt:</p>
<pre><code>prompt_template = """
You are a helpful assistant. Use the following context to answer the question concisely.
If you do not know the answer from the context, please state so and do not search for an answer elsewhere.
Context:
{context}
Question:
{question}
Answer:
"""
prompt = PromptTemplate(
input_variables=["context", "question"],
template=prompt_template
)
</code></pre>
<p>Retriever:</p>
<pre><code>class MilvusRetriever(BaseRetriever, BaseModel):
collection: any
embedding_function: Callable[[str], np.ndarray]
text_field: str
vector_field: str
top_k: int = 5
def get_relevant_documents(self, query: str) -> List[Dict]:
query_embedding = self.embedding_function(query)
search_params = {"metric_type": "IP", "params": {"nprobe": 10}}
results = self.collection.search(
data=[query_embedding],
anns_field=self.vector_field,
param=search_params,
limit=self.top_k,
output_fields=[self.text_field]
)
documents = []
for hit in results[0]:
documents.append(
Document(
page_content=hit.entity.get(self.text_field),
metadata={"score": hit.distance}
)
)
return documents
async def aget_relevant_documents(self, query: str) -> List[Dict]:
"""Asynchronous version of get_relevant_documents."""
return self.get_relevant_documents(query)
retriever = MilvusRetriever(
collection=collection,
embedding_function=embed_model.embed_query,
text_field="text",
vector_field="embedding",
top_k=5
)
</code></pre>
<p>I am also checking that the Mac GPUs are on:</p>
<pre><code>import torch
if torch.backends.mps.is_available():
print("MPS is available!")
</code></pre>
<hr />
<p><b>Edit 1</b>: As recommended here, I tried adding <code>verbose</code>:</p>
<pre><code>qa_chain = RetrievalQA.from_chain_type(
llm=llm,
retriever=retriever,
return_source_documents=True, # (optional)
# return_source_documents=False, # (optional)
verbose=True,
chain_type_kwargs={
"verbose": True,
"prompt": prompt
}
)
</code></pre>
<p>Now the output is:</p>
<pre><code>> Entering new RetrievalQA chain...
> Entering new StuffDocumentsChain chain...
> Entering new LLMChain chain...
Prompt after formatting:
<MY PROMPT>
Context:
<some context from my data, seems like this is done ok.>
Question:
<MY QUESTION>
Answer:
</code></pre>
<p>(and still stuck here)</p>
|
<python><macos><gpu><langchain><retrieval-augmented-generation>
|
2025-01-13 10:32:04
| 0
| 512
|
ArieAI
|
79,351,811
| 2,071,807
|
Pydantic restrict string to literal values and convert to lower case
|
<p>How can I restrict the value of a string field to certain values in a case insensitive way?</p>
<pre class="lang-py prettyprint-override"><code>>>> Animal(species="antelope")
>>> Animal(species="AnTeLoPe")
>>> Animal(species="frog") # ValidationError: `species must be antelope or zebra`
</code></pre>
<p>I know that you can restrict a Pydantic field to certain values:</p>
<pre class="lang-py prettyprint-override"><code>import pydantic
from typing import Literal
class Animal(pydantic.BaseModel):
species: Literal["antelope", "zebra"]
</code></pre>
<p>And I know that you can convert input data to lowercase:</p>
<pre class="lang-py prettyprint-override"><code>class Animal(pydantic.BaseModel):
species: pydantic.constr(to_lower=True)
</code></pre>
<p>But I can't see how I would combine these two features so I don't have to spell out the upper-case and lower-case versions of my set of literals:</p>
<pre class="lang-py prettyprint-override"><code>class Animal(pydantic.BaseModel):
species: Literal["antelope", "Antelope"] # ...etc
</code></pre>
<p>I thought using <code>str_to_lower</code> would help, but it doesn't work:</p>
<pre class="lang-py prettyprint-override"><code>class Animal(pydantic.BaseModel):
model_config = pydantic.ConfigDict(str_to_lower=True)
species: Literal["antelope", "zebra"]
Animal(species="AnTeLope") # ValidationError
</code></pre>
<p>I can see a way out of this with regex, but it seems like this is such a common use-case that using regex seems like an inconvenient workaround (what if my allowed values have got regex-ish characters? then I have to escape them)</p>
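<p>The closest thing I've come up with so far is attaching a lowercasing validator via <code>Annotated</code>, which seems to work in a quick test, but I'm not sure it's the intended pattern (Pydantic v2 sketch):</p>
<pre><code>from typing import Annotated, Literal
import pydantic
from pydantic import BeforeValidator

# Lowercase the input before the Literal check runs.
Species = Annotated[
    Literal["antelope", "zebra"],
    BeforeValidator(lambda v: v.lower() if isinstance(v, str) else v),
]

class Animal(pydantic.BaseModel):
    species: Species
</code></pre>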
|
<python><pydantic>
|
2025-01-13 10:08:18
| 3
| 79,775
|
LondonRob
|
79,351,658
| 13,040,314
|
Binary file base64 encode in js and decode in python
|
<p>I want to save an Excel file as JSON in a database. It is then downloaded by the Python backend later on to recreate the Excel file.</p>
<p>My frontend to save excel file as json:</p>
<pre><code> const fileReader: FileReader = new FileReader();
fileReader.onloadend = (_x) => {
const input: any = {
name: file.name,
content: {
author: 'username',
excelFile: fileReader.result,
},
};
httprequest({params: input}).subscribe();
};
fileReader.readAsDataURL(file);
</code></pre>
<p>Python backend to create excel file from json:</p>
<pre><code>data = api_client.get_file_details(id)
decoded_excel = base64.b64decode(data["content"]["excelFile"])
with open('example.xlsx', "wb") as f:
f.write(decoded_excel)
</code></pre>
<p>Unfortunately the Python decoding does not work. It gives the error <code>Error: Invalid base64-encoded string: number of data characters (587469) cannot be 1 more than a multiple of 4</code>. How can I solve this issue?</p>
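<p>For illustration, here is the sort of prefix-stripping I've seen suggested elsewhere, though I haven't confirmed it applies here: <code>readAsDataURL</code> produces a full data URL ("data:...;base64,&lt;payload&gt;"), so the prefix would have to be removed before decoding (sketch):</p>
<pre><code>import base64

raw = data["content"]["excelFile"]
# FileReader.readAsDataURL returns something like "data:application/vnd...;base64,UEsDB..."
payload = raw.split(",", 1)[1] if raw.startswith("data:") else raw
decoded_excel = base64.b64decode(payload)
with open("example.xlsx", "wb") as f:
    f.write(decoded_excel)
</code></pre>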
|
<javascript><python><json><encoding><decoding>
|
2025-01-13 09:09:07
| 1
| 325
|
StaticName
|
79,351,120
| 3,765,883
|
Python3: how can I print value of 'tools.staticdir.dir' to console?
|
<p>I'm working my way through Python Cherrypy tutorials. Tut06 demonstrates serving static content with the following example</p>
<pre><code>import os, os.path
import random
import string
import cherrypy
class StringGenerator(object):
@cherrypy.expose
def index(self):
return """<html>
<head>
<link href="/static/css/style.css" rel="stylesheet">
</head>
<body>
<form method="get" action="generate">
<input type="text" value="8" name="length" />
<button type="submit">Give it now!</button>
</form>
</body>
</html>"""
@cherrypy.expose
def generate(self, length=8):
some_string = ''.join(random.sample(string.hexdigits, int(length)))
cherrypy.session['mystring'] = some_string
return some_string
@cherrypy.expose
def display(self):
return cherrypy.session['mystring']
if __name__ == '__main__':
conf = {
'/': {
'tools.sessions.on': True,
'tools.staticdir.root': os.path.abspath(os.getcwd())
},
'/static': {
'tools.staticdir.on': True,
'tools.staticdir.dir': './public'
}
}
cherrypy.quickstart(StringGenerator(), '/', conf)
</code></pre>
<p>using the following style.css file in ./public/css/</p>
<pre><code>body{
background-color: blue;
}
</code></pre>
<p>This works fine on my Linux box (shows a blue background), but fails on my Win11 box (doesn't show the blue background). I suspect there is an issue with the way that Windows treats file paths (\ vs /), but I can't figure out a way to examine the result of the conf = {....}, specifically the value of:</p>
<pre><code>'tools.staticdir.dir': './public'
</code></pre>
<p>so I can compare the results from my windows and Linux boxes side by side. Obviously I'm pretty new to Cherrypy, so any help would be appreciated.</p>
|
<python><cherrypy>
|
2025-01-13 03:23:55
| 0
| 327
|
user3765883
|
79,351,075
| 869,809
|
how can I generate a python environments's minimum requirements?
|
<p>I keep coming up with different ways to do this and it ends up being a PITA no matter how I do it and I cannot be the only one trying to deal with this.</p>
<p>Say that I do this:</p>
<pre><code>virtualenv .venv
alias tp='./.venv/bin/python3'
tp -m pip install AAA
tp -m pip install BBB
tp -m pip install CCC
tp -m pip freeze
</code></pre>
<p>Installing each of AAA, BBB, and CCC installed a bunch of other things. The freeze result ends up looking like:</p>
<pre><code>5D68A4E
AAA
A864534
BBB
EE715F8
F9556BC
CCC
G5B409B
K4F6047
</code></pre>
<p>What I want to know is this. I want to have my requirements.txt contain only AAA, BBB, and CCC. But I do not want to have to do some separate thing every time I do an install to have this happen.</p>
<p>Is there a way to get freeze to give me only the "primary" installs and not the ones that were installed as a side effect? It would be good to be able to generate this after the fact. Is it possible?</p>
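<p>If I'm reading the pip docs right, <code>pip list --not-required</code> lists only packages that nothing else depends on, which sounds close to what I want. The rough equivalent in Python (my own sketch, with naive requirement-name parsing and no name normalization) would be:</p>
<pre><code>import re
from importlib.metadata import distributions

dists = list(distributions())
required = set()
for dist in dists:
    for req in dist.requires or []:
        # Crude parse: take the bare project name before any extras/specifiers/markers.
        required.add(re.split(r"[\s;\[\]()<>=!~]", req, maxsplit=1)[0].lower())

top_level = sorted(d.metadata["Name"] for d in dists
                   if d.metadata["Name"].lower() not in required)
print("\n".join(top_level))
</code></pre>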
|
<python><installation><pip>
|
2025-01-13 02:38:24
| 1
| 3,616
|
Ray Kiddy
|
79,350,990
| 20,295,949
|
How can I scrape data from a website into a CSV file using Python Playwright (or alternatives) while avoiding access errors and improving speed?
|
<p>I'm trying to scrape data from this website using Python and Playwright, but I'm encountering a few issues. The browser runs in non-headless mode, and the process is very slow. When I tried other approaches, like using requests and BeautifulSoup, I ran into access issues, including 403 Forbidden and 404 Not Found errors. My goal is to scrape all pages efficiently and save the data into a CSV file.</p>
<p>Here's the code I'm currently using:</p>
<pre><code>import asyncio
from playwright.async_api import async_playwright
import pandas as pd
from io import StringIO
URL = "https://www.coingecko.com/en/coins/1/markets/spot"
async def fetch_page(page, url):
print(f"Fetching: {url}")
await page.goto(url)
await asyncio.sleep(5)
return await page.content()
async def scrape_all_pages(url, max_pages=10):
async with async_playwright() as p:
browser = await p.chromium.launch(headless=False, slow_mo=2000)
context = await browser.new_context(viewport={"width": 1280, "height": 900})
page = await context.new_page()
markets = []
for page_num in range(1, max_pages + 1):
html = await fetch_page(page, f"{url}?page={page_num}")
dfs = pd.read_html(StringIO(html)) # Parse tables
markets.extend(dfs)
await page.close()
await context.close()
await browser.close()
return pd.concat(markets, ignore_index=True)
def run_async(coro):
try:
loop = asyncio.get_running_loop()
except RuntimeError:
loop = None
if loop and loop.is_running():
return asyncio.create_task(coro)
else:
return asyncio.run(coro)
async def main():
max_pages = 10
df = await scrape_all_pages(URL, max_pages)
df = df.dropna(how='all')
print(df)
run_async(main())
</code></pre>
<p>The primary issues are the slow speed of scraping and the access errors when using alternatives to Playwright. I'm looking for advice on how to improve this approach, whether it's by optimizing the current code, handling access restrictions like user-agent spoofing or proxies, or switching to a different library entirely. Any suggestions on how to make the process faster and more reliable would be greatly appreciated. Thank you.</p>
|
<python><selenium-webdriver><beautifulsoup><python-requests><playwright>
|
2025-01-13 01:09:24
| 1
| 319
|
HamidBee
|
79,350,912
| 14,923,227
|
Nearest neighbor interpolation
|
<p>Say that I have an array:</p>
<pre><code>arr = np.arange(4).reshape(2,2)
</code></pre>
<p>The array <code>arr</code> contains the elements</p>
<pre><code>array([[0, 1],
[2, 3]])
</code></pre>
<p>I want to increase the resolution of the array in such a way that the following is achieved:</p>
<pre><code>np.array([[0, 0, 1, 1],
          [0, 0, 1, 1],
          [2, 2, 3, 3],
          [2, 2, 3, 3]])
</code></pre>
<p>what is this operation called? Nearest-neighbor interpolation?</p>
<p>It is possible to get my desired output with the following</p>
<pre><code>np.concat(np.repeat(arr,4).reshape(-1,2,2,2), axis=-1).reshape(4,4)
</code></pre>
<p>Is there a more general way of doing this for any kind of matrix?</p>
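<p>For comparison, the most general form I've found so far is repeating along each axis (or a Kronecker product), though I'd still like to know the proper name for the operation:</p>
<pre><code>import numpy as np

arr = np.arange(4).reshape(2, 2)
# Repeat each element twice along both axes (works for any 2-D array and factor).
upscaled = np.repeat(np.repeat(arr, 2, axis=0), 2, axis=1)
# Equivalent via a Kronecker product with a block of ones:
upscaled_kron = np.kron(arr, np.ones((2, 2), dtype=arr.dtype))
</code></pre>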
|
<python><numpy>
|
2025-01-12 23:51:01
| 1
| 3,418
|
Kevin
|
79,350,895
| 2,927,848
|
Sql Alchemy 2.x not parsing ORM objects
|
<p>I'm building a little sqlite database to manage some web scraping I'm doing. I have queries like this.</p>
<pre><code>async def get_by_title(session: AsyncSession, title: str) -> Optional[Manga]:
query = select(Manga).where(Manga.title == title).limit(1)
result = await session.scalar(query)
return result
</code></pre>
<p>My issue is that all my selects, whether I use session.execute, session.scalar, or session.scalars, return a tuple object of the row, and if I use scalar/one/first etc. I only get the very first item of the object (so the id). This doesn't seem correct based on the documentation. What am I missing?</p>
<pre><code>from .base import Base
from datetime import datetime
from sqlalchemy import func
from sqlalchemy.orm import Mapped, mapped_column
from sqlalchemy.dialects.sqlite import INTEGER, TEXT, DATETIME
class Manga(Base):
__tablename__ = 'manga'
manga_id: Mapped[int] = mapped_column(INTEGER,primary_key=True,autoincrement=True)
title: Mapped[str] = mapped_column(TEXT,nullable=False)
language: Mapped[str] = mapped_column(TEXT,nullable=False)
path_key: Mapped[str] = mapped_column(TEXT,nullable=False)
created_at: Mapped[datetime] = mapped_column(DATETIME,nullable=False,default=func.now())
</code></pre>
<pre><code>from sqlalchemy.orm import DeclarativeBase
class Base(DeclarativeBase):
pass
</code></pre>
<p>Note: if I iterate over the rows I get a tuple with all the appropriate information</p>
<pre><code>async def print_rows(session: AsyncSession, title: str):
query = select(Manga).where(Manga.title == title)
result = await session.execute(query)
for row in result:
print(row) # (manga_id,title,language,path_key,created_at) all as expected
</code></pre>
|
<python><sqlalchemy>
|
2025-01-12 23:21:53
| 1
| 2,317
|
Chase R Lewis
|
79,350,850
| 327,026
|
How to get legacy NumPy repr in Sphinx docs examples?
|
<p>This question considers <a href="https://numpy.org/neps/nep-0051-scalar-representation.html" rel="nofollow noreferrer">NEP 51</a>, which changed NumPy's string representation. This document describes some potential backward compatibility issues:</p>
<blockquote>
<p>An exception to this are downstream libraries with documentation and especially documentation testing. Since the representation of many values will change, in many cases the documentation will have to be updated...</p>
<p>It may be necessary to adopt tools for doctest testing to allow approximate value checking for the new representation.</p>
</blockquote>
<p>However, looking at the documentation, I don't see any changes to the output of the examples.</p>
<p>Looking at NumPy's docs, e.g. <a href="https://numpy.org/doc/stable/reference/generated/numpy.sin.html" rel="nofollow noreferrer"><code>numpy.sin</code></a>:</p>
<blockquote>
<p>Print sine of one angle:</p>
<pre class="lang-none prettyprint-override"><code>>>> np.sin(np.pi/2.)
1.0
</code></pre>
</blockquote>
<p>But with the latest NumPy, this example shows <code>np.float64(1.0)</code>.</p>
<p>And similar with SciPy's docs, e.g. <a href="https://scipy.github.io/devdocs/reference/generated/scipy.special.erfinv.html" rel="nofollow noreferrer"><code>scipy.special.erfinv</code></a>:</p>
<blockquote>
<pre class="lang-none prettyprint-override"><code>>>> from scipy.special import erfinv, erf
>>> erfinv(0.5)
0.4769362762044699
</code></pre>
</blockquote>
<p>But with the latest NumPy, this example shows <code>np.float64(0.4769362762044699)</code>.</p>
<p>I'm aware of <a href="https://numpy.org/doc/stable/reference/generated/numpy.set_printoptions.html" rel="nofollow noreferrer"><code>numpy.set_printoptions</code></a> to change this default:</p>
<pre class="lang-none prettyprint-override"><code>>>> np.set_printoptions(legacy="1.25")
>>> np.sin(np.pi/2.)
1.0
>>> erfinv(0.5)
0.4769362762044699
</code></pre>
<p>however, I don't see this being used for either NumPy or SciPy's documentation.</p>
<p>How are these Sphinx docs configured to show the legacy outputs?</p>
<p>For modules with examples, how would <a href="https://docs.pytest.org/en/stable/how-to/doctest.html" rel="nofollow noreferrer">pytest doctests</a> be run to pass?
<br />(I.e. using <code>pytest --doctest-modules mymod</code>)</p>
|
<python><pytest><python-sphinx><repr><numpy-2.x>
|
2025-01-12 22:44:23
| 1
| 44,290
|
Mike T
|
79,350,811
| 3,045,351
|
Python code running across multiple venvs where different versions of the same package are stored
|
<p>I have a situation where I have one set of Python scripts running in a virtual environment, calling another set of scripts running in a different virtual environment. This is to handle package conflicts etc.</p>
<p>There is a core system (in this case the ComfyUI back end) running in a venv called 'comfyui', and a slave venv in which a particular software package, CogVideoX, is running (a venv called 'cogvideox').</p>
<p>The slave venv has a .pth file linking it to the 'comfyui' venv as the CogVideoX code uses some shared packages stored in this venv.</p>
<p>I am activating my venvs in my Python scripts using the below:</p>
<p>ComfyUI venv:</p>
<pre><code>activate_this_file = "/usr/local/lib/python3.10/virtual-environments/comfyui/bin/activate_this.py"
exec(open(activate_this_file).read(), {'__file__': activate_this_file})
</code></pre>
<p>CogVideoX venv:</p>
<pre><code>activate_this_file = "/usr/local/lib/python3.10/virtual-environments/cogvideox/bin/activate_this.py"
exec(open(activate_this_file).read(), {'__file__': activate_this_file})
</code></pre>
<p>...the issue I'm having is that the 'comfyui' venv has a version of diffusers shared by lots of different node components in their own venvs. However, CogVideoX has its own version of diffusers.</p>
<p>When the ComfyUI back end is importing the CogVideoX code, it seems to be ignoring my attempt at having the CogVideoX code running in a different venv and trying to use the Diffusers installation in the ComfyUI venv.</p>
<p>Is there a way to stop this from occurring and have my CogVideoX code recognise that when I do an <code>import diffusers</code> statement, it is meant to use its own venv?</p>
|
<python><python-3.x><python-venv>
|
2025-01-12 22:16:42
| 0
| 4,190
|
gdogg371
|
79,350,780
| 1,624,552
|
cannot connect to vpn using python-openvpn-client api
|
<p>I have a Python program in which I am using the python-openvpn-client API to connect to a VPN server using an .ovpn configuration file.</p>
<p>I have installed python 3.13.1 from the official website. Then I have created a virtual python environment to use in my python project.</p>
<p>I have successfully installed the <a href="https://pypi.org/project/python-openvpn-client/" rel="nofollow noreferrer">python-openvpn-client</a> (version 0.0.1) package which says it requires python >= 3.9. I have used below command to install it in my virtual environment:</p>
<pre><code>pip install python-openvpn-client
</code></pre>
<p>I installed OpenVpn from <a href="https://openvpn.net/community-downloads/" rel="nofollow noreferrer">here</a>. I am using version OpenVPN 2.6.12 - Released 18 July 2024.</p>
<p>Then I do below:</p>
<pre><code>from openvpnclient import OpenVPNClient
from time import sleep  # needed for the polling loop below
def connect_to_vpn(config_path):
vpn = OpenVPNClient(config_path)
try:
vpn.connect()
while not vpn.is_connected():
print(f"Status: {vpn.status}")
sleep(2)
print("Connection successfully established.")
return vpn
except Exception as e:
print(f"Error connecting to VPN server: {e}")
return None
</code></pre>
<p>config_path is the full path to an .ovpn file.</p>
<p>When executing the line of code <code>vpn.connect()</code> an exception is thrown:</p>
<pre><code>module 'signal' has no attribute 'SIGUSR1'
</code></pre>
<p>If I import the same .ovpn file from OpenVPN app and connect to the the VPN server, then is working but not from my python program.</p>
<p>My platform is Windows 10 Pro.</p>
<p>So what am I doing wrong?</p>
|
<python><openvpn>
|
2025-01-12 21:52:48
| 2
| 10,752
|
Willy
|
79,350,773
| 54,873
|
Using xlwings, is there a way to collapse all the groups on a sheet?
|
<p>I'm using <code>xlwings</code> with python to copy one excel sheet to another. As part of the copying, however, I'd like to collapse all the grouped columns.</p>
<p>This is different than hiding the relevant columns and rows; I want them to be non-hidden, but just have the outlines collapsed to level 0!</p>
<p>And to be clear, I also don't want to create new outlines; I just want all the existing ones to be collapsed.</p>
<p>Is there a simple command to do that? Cannot find it via google. (if the answer is to group specific ranges - the answer also would need to list how to find the ranges that are available for grouping!).</p>
<p>Edited to add: The solution must work on MacOS. The answer proposed by @grismar below fails on a Mac:</p>
<pre><code> (Pdb) sheet.api.Outline
*** AttributeError: Unknown property, element or command: 'Outline'
</code></pre>
|
<python><macos><xlwings>
|
2025-01-12 21:49:24
| 2
| 10,076
|
YGA
|
79,350,584
| 6,594,668
|
Pythonic way to conditionally replace values in a numpy array based on neighbour's values
|
<p>I am processing a greyscale (8 bit) black-and-white image in Python with OpenCV.</p>
<p>The image (2D numpy array with shape (100,200) ) contains pixels of only 2 colors - black (0) and white (255).</p>
<p>Here is my Python code:</p>
<pre><code>img = cv2.imread(filename) # 2 dims, pixel type: np.uint8 (and only 2 possible values: 0 or 255)
rows,cols = img.shape # always (100,200)
for i in range(1,rows-1):
for j in range(1,cols-1):
if ((img[i,j-1] == 255 and img[i,j+1] == 255) or (img[i-1,j] == 255 and img[i+1,j] == 255)):
img[i,j] = 255
</code></pre>
<p>This code is too slow. How can I rewrite it in Python to make it run faster?</p>
<p><strong>UPDATE:</strong></p>
<p>After some investigation I have discovered that my code may be rewritten like this - without any change in resulting effect:</p>
<pre><code>img = cv2.imread(filename) # 2 dims, pixel type: np.uint8 (and only 2 possible values: 0 or 255)
rows,cols = img.shape # always (100,200)
for i in range(1,rows-1):
for j in range(1,cols-1):
if (img[i,j-1] == 255 and img[i,j+1] == 255):
img[i,j] = 255
for i in range(1,rows-1):
for j in range(1,cols-1):
if (img[i-1,j] == 255 and img[i+1,j] == 255):
img[i,j] = 255
</code></pre>
<p>According to many comments, there is no need to produce both replacements in a single run, 2 runs are perfectly fine.</p>
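<p>For the record, this is the slicing-based rewrite of the two runs that I'm currently experimenting with; I haven't fully verified that it is identical to the loops, so take it as a sketch:</p>
<pre><code># Horizontal pass: set inner pixels whose left and right neighbours are both white.
mask_h = (img[1:-1, :-2] == 255) & (img[1:-1, 2:] == 255)
img[1:-1, 1:-1][mask_h] = 255
# Vertical pass on the updated image: both the upper and lower neighbours are white.
mask_v = (img[:-2, 1:-1] == 255) & (img[2:, 1:-1] == 255)
img[1:-1, 1:-1][mask_v] = 255
</code></pre>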
|
<python><numpy><opencv><neighbours>
|
2025-01-12 19:39:02
| 3
| 2,406
|
prograils
|
79,350,545
| 9,128,863
|
sympy plot does't draw points
|
<p>I use the following code with the sympy lib to illustrate the contour lines of a function and some points on the graph:</p>
<pre><code>from sympy.plotting.plot import plot_contour
import sympy as sy
x_1, x_2 = sy.symbols('x_1 x_2')
y = sy.sqrt((x_1 - 3) ** 2 + (x_1 - 2) ** 2) + sy.sqrt((x_2 - x_1) ** 2 + x_1 ** 2) + sy.sqrt(
(4 - x_2) ** 2 + 4)
extremums = [(1.3, 0.7), (1.499, 1.3), (1.818650245666504, 2.461580276489258),
(1.9787960052490234, 3.2805638313293457)]
p = plot_contour(y, (x_1, 0, 5), (x_2, 0, 5), markers=[{'args': extremums,
'color': "red", 'marker': "x", 'ms': 6}]
)
</code></pre>
<p>It draws the expected function lines, but does not draw the points in the "extremums" array. In particular, I don't see the (1.3, 0.7) point.</p>
<p><a href="https://i.sstatic.net/3Kk3sjcl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3Kk3sjcl.png" alt="lines" /></a></p>
|
<python><plot><sympy>
|
2025-01-12 19:17:21
| 0
| 1,424
|
Jelly
|
79,350,430
| 13,968,392
|
Check if all values of Polars DataFrame are True
|
<p>How can I check if all values of a polars DataFrame, containing only boolean columns, are True?<br />
Example <code>df</code>:</p>
<pre><code>df = pl.DataFrame({"a": [True, True, None],
"b": [True, True, True],
})
</code></pre>
<p>The reason for my question is that sometimes I want to check if all values of a <code>df</code> fulfill a condition, like in the following:</p>
<pre><code>df = pl.DataFrame({"a": [1, 2, None],
"b": [4, 5, 6],
}).select(pl.all() >= 1)
</code></pre>
<p>By the way, I didn't expect that <code>.select(pl.all() >= 1)</code> keeps the <code>null</code> (None) in last row of column "a", maybe that's worth noting.</p>
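<p>The workaround I'm currently using goes through NumPy, which feels clumsy (and it treats nulls as False, which may or may not be what one wants):</p>
<pre><code>import polars as pl

df = pl.DataFrame({"a": [True, True, None],
                   "b": [True, True, True]})
# Treat nulls as False, then reduce everything to a single Python bool.
all_true = bool(df.fill_null(False).to_numpy().all())
print(all_true)  # False for this df because of the null in "a"
</code></pre>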
|
<python><dataframe><conditional-statements><python-polars><boolean-logic>
|
2025-01-12 18:07:30
| 2
| 2,117
|
mouwsy
|
79,350,396
| 21,440,243
|
How can I fix the error "Java gateway process exited before sending its port number." in Python when running PySpark?
|
<p>I have some code I'm trying to run with PySpark in Python 3.13, here's the least amount of code needed to reproduce my problem:</p>
<pre><code>from pyspark.sql import SparkSession
pyspark = SparkSession.builder.master("local[8]").appName("example").getOrCreate()
</code></pre>
<p>Whenever I run it, I get this error:</p>
<pre><code>Traceback (most recent call last):
File "C:/Users/Example/test.py", line 6, in <module>
.getOrCreate()
File "C:\Users\Example\AppData\Roaming\Python\Python313\site-packages\pyspark\sql\session.py", line 497, in getOrCreate
sc = SparkContext.getOrCreate(sparkConf)
File "C:\Users\Example\AppData\Roaming\Python\Python313\site-packages\pyspark\context.py", line 515, in getOrCreate
SparkContext(conf=conf or SparkConf())
File "C:\Users\Example\AppData\Roaming\Python\Python313\site-packages\pyspark\context.py", line 201, in __init__
SparkContext._ensure_initialized(self, gateway=gateway, conf=conf)
File "C:\Users\Example\AppData\Roaming\Python\Python313\site-packages\pyspark\context.py", line 436, in _ensure_initialized
SparkContext._gateway = gateway or launch_gateway(conf)
File "C:\Users\Example\AppData\Roaming\Python\Python313\site-packages\pyspark\java_gateway.py", line 107, in launch_gateway
raise PySparkRuntimeError(
pyspark.errors.exceptions.base.PySparkRuntimeError: [JAVA_GATEWAY_EXITED] Java gateway process exited before sending its port number.
</code></pre>
<p>I am expecting it to not give any errors. I am on Windows 11 using PySpark 3.5.4 (confirmed with <code>pip show pyspark</code> on Command Prompt) on Python 3.13 and have Java 17.0.13 installed (confirmed with <code>java -version</code> on Command Prompt) as well. I tried uninstalling all other versions of Python and Java. I have <code>JAVA_HOME</code> and <code>SPARK_HOME</code> added as a System Variable and added Java and PySpark to the Path variable. I tried increasing Java and PySpark memory allocation. I tried disabling my antivirus. I tried adjusting the log4j configuration by changing <code>pyspark = SparkSession.builder.master("local[8]").appName("example").getOrCreate()</code> to:</p>
<pre><code>spark = SparkSession.builder \
.master("local[8]") \
.appName("test") \
.config("spark.sql.warehouse.dir", "file:///C:/tmp") \
.getOrCreate()
</code></pre>
<p>I tried adding logging by replacing <code>pyspark = SparkSession.builder.master("local[8]").appName("example").getOrCreate()</code> with:</p>
<pre><code>spark = SparkSession.builder \
.appName("example") \
.config("spark.ui.showConsoleProgress", "false") \
.config("spark.driver.extraJavaOptions", "-Dlog4j.debug") \
.getOrCreate()
</code></pre>
<p>but got nothing else.
I tried uninstalling and reinstalling PySpark, Python, and Java as well as removing other versions of them I had installed. For PySpark I tried both <code>pip uninstall pyspark</code> and <code>pip uninstall pyspark py4j</code> (and then changed the uninstall to install to reinstall it, of course). None of these things or anything suggested in the questions below stopped the error or changed it whatsoever. What is happening and how can I fix it so it doesn't give me an error?</p>
<hr />
<sub>
This is not a duplicate of the following posts for the reasons listed below:
<p><br> </p>
<ul>
<li><a href="https://stackoverflow.com/questions/59533884/exception-java-gateway-process-exited-before-sending-its-port-number-pyspark">Exception: Java gateway process exited before sending its port number pyspark</a>, <a href="https://stackoverflow.com/questions/72763025/java-error-java-gateway-process-exited-before-sending-its-port-number">Java error Java gateway process exited before sending its port number</a>, or <a href="https://stackoverflow.com/questions/55292779/pyspark-error-java-gateway-process-exited-before-sending-its-port-number">Pyspark error: Java gateway process exited before sending its port number</a> because they have a different traceback and the suggested solution(s) doesn't work.</li>
<li><a href="https://stackoverflow.com/questions/62756531/init-exception-java-gateway-process-exited-before-sending-its-port-number">init Exception: Java gateway process exited before sending its port number</a> because I don't have Hadoop and I am on a compatible version of Java.</li>
<li><a href="https://stackoverflow.com/questions/31841509/pyspark-exception-java-gateway-process-exited-before-sending-the-driver-its-p">PySpark: "Exception: Java gateway process exited before sending the driver its port number"</a> or <a href="https://stackoverflow.com/questions/77280183/pysparkruntimeerror-java-gateway-exited-java-gateway-process-exited-before-se">PySparkRuntimeError: [JAVA_GATEWAY_EXITED] Java gateway process exited before sending its port number</a> because I'm not on a Macbook or Linux and none of the solutions work.</li>
<li><a href="https://stackoverflow.com/questions/68243342/error-with-pyspark-java-gateway-process-exited-before-sending-its-port-number">Error with pyspark "Java gateway process exited before sending its port number"</a> because I don't have Hadoop, the user encounters the error when they install not when they run, and because they are experiencing other errors I'm not.</li>
<li><a href="https://stackoverflow.com/questions/70907105/structured-streaming-kafka-runtimeerror-java-gateway-process-exited-before-s">Structured Streaming + Kafka: RuntimeError: Java gateway process exited before sending its port number + Failed to find data source: kafka</a> because I don't have kafka.</li>
<li><a href="https://stackoverflow.com/questions/70548399/creating-sparkcontext-on-google-colab-gives-runtimeerror-java-gateway-process">Creating sparkContext on Google Colab gives: `RuntimeError: Java gateway process exited before sending its port number`</a> because I'm not on Google Colab.</li>
<li><a href="https://stackoverflow.com/questions/43863569/exception-java-gateway-process-exited-before-sending-the-driver-its-port-number">Exception: Java gateway process exited before sending the driver its port number while creating a Spark Session in Python</a> or <a href="https://stackoverflow.com/questions/60230124/pyspark-sql-utils-analysisexception-failed-to-find-data-source-kafka">pyspark.sql.utils.AnalysisException: Failed to find data source: kafka</a> because those questions have a different error.
</sub></li>
</ul>
|
<python><java><pyspark><runtime-error>
|
2025-01-12 17:41:46
| 0
| 1,302
|
Starship
|
79,350,359
| 2,962,444
|
Implementation of += for Python lists as related to argument passing
|
<p>I am reviewing a Python workbook for an author who claims the following:</p>
<p><em>Immutable parameters like integers, strings, or tuples are passed by value,
and any changes to these parameters within the function do not change their
respective values outside the function.
Mutable objects, like lists and dictionaries, are passed by reference, and any
changes to them are seen outside the function.</em></p>
<p>I tried to explain that (to my understanding) <strong>all</strong> arguments are passed the<br />
same way: as a reference to an object.</p>
<p>Python doesn't look to see if that object being referenced is mutable or immutable.<br />
And, in particular, if the parameter name holding the reference to the passed-in<br />
argument is assigned a new value that just means the reference to the original object is lost.</p>
<p>The author's example of Python 'doing the right thing' is given with these two functions:</p>
<pre><code>def f(i)
i += 1
x = 10
f(x)
print(x) # prints 10 "even though parameter i was modified in f()"
def g(l):
l += [8, 9]
y = [1, 2]
g(y)
print(y) # prints [1, 2, 8, 9] "because l is mutable and was modified in g()"
</code></pre>
<p>In emailing with him he says</p>
<p>in <code>f()</code> assigning to <code>i</code> <em>doesn't</em> change the argument value<br />
but in <code>g()</code> assigning to <code>l</code> <em>does</em> change the argument value<br />
Therefore "Python must be passing them differently"</p>
<p>My response was to rewrite <code>g()</code> as what appears to be an equivalent function<br />
and show that it <em>does not</em> have the same behavior</p>
<pre><code>def h(l):
l = l + [8, 9] # on the surface appears to do the same thing as += operator ...
y = [1, 2]
h(y)
print(y) # prints [1, 2] because assigning to parameter `l` just discards
         # the reference to the passed-in object
</code></pre>
<p>So in preparing for the question "then why do <code>g()</code> and <code>h()</code> behave differently?"<br />
My assumption is that the <code>__iadd__(self, other)</code> method that implements<br />
the <code>+=</code> for lists, if it were written in Python rather than C, would essentially be:</p>
<pre><code>def __iadd__(self, other):
self.extend(other)
</code></pre>
<p>And so the more functionally equivalent rewrite of his <code>g()</code> would be:</p>
<pre><code>def h(l):
l.extend([8, 9])
y = [1, 2]
h(y)
print(y) # prints [1, 2, 8, 9] because the *original* list is modified using the ref
</code></pre>
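<p>(Not from the workbook - just a quick identity check I ran to convince myself that <code>+=</code> keeps the same list object while <code>+</code> builds a new one:)</p>
<pre><code>l = [1, 2]
alias = l
l += [8, 9]
print(alias, alias is l)  # [1, 2, 8, 9] True  -- same object, mutated in place
l = l + [10]
print(alias, alias is l)  # [1, 2, 8, 9] False -- `l` now names a brand-new list
</code></pre>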
<p>OK, sorry for the long wind-up, but here are the questions:</p>
<ol>
<li><p>It does seem a little strange that <code>l += [8, 9]</code> and <code>l = l + [8, 9]</code>
have different behaviors. Is there a general principle in Python that explains why that's OK?</p>
</li>
<li><p>Is the <code>+=</code> operator indeed implemented as something like <code>extend(self, other)</code>, or is there something else going on that I'm missing?</p>
</li>
</ol>
<p>Thanks!</p>
|
<python><list><parameter-passing>
|
2025-01-12 17:25:20
| 0
| 672
|
quizdog
|
79,349,992
| 8,618,818
|
How am I supposed to get just the close price and date from this data returned by yfinance?
|
<p>I am fetching daily data for the S&amp;P index and I want to insert the close price and the date into my database. However, I am new to Python and have no idea how to navigate this data structure. This is what <code>print(spxHistoricalData)</code> returns:</p>
<pre><code>Price Close High Low Open Volume
Ticker ^GSPC ^GSPC ^GSPC ^GSPC ^GSPC
Date
2000-01-03 1455.219971 1478.000000 1438.359985 1469.250000 931800000
2000-01-04 1399.420044 1455.219971 1397.430054 1455.219971 1009000000
2000-01-05 1402.109985 1413.270020 1377.680054 1399.420044 1085500000
2000-01-06 1403.449951 1411.900024 1392.099976 1402.109985 1092300000
2000-01-07 1441.469971 1441.469971 1400.729980 1403.449951 1225200000
</code></pre>
<p>Now I tried doing this:</p>
<pre><code>print(spxHistoricalData[["Date", "Close"]])
</code></pre>
<p>But I just get an error <code>KeyError: "['Date'] not in index"</code></p>
<p>Currently my code looks like this:</p>
<pre><code>import yfinance as yf
import mysql.connector
import pandas as pd
spxHistoricalData = yf.download("^GSPC", start="2000-01-01", end="2025-01-01")
print(spxHistoricalData)
print(spxHistoricalData[["Date", "Close"]])
connection = mysql.connector.connect(
host="127.0.0.1",
port=3306,
user="root",
password="password",
database="bitcoin_and_sp500_price_prediction"
)
cursor = connection.cursor()
values = [(row["Date"], row["Close"]) for index, row in spxHistoricalData.iterrows()]
print(values)
# Use executemany for batch insertion
try:
sql_query = "INSERT INTO spx_historical_data (date, close_price) VALUES (%s, %s)"
cursor.executemany(sql_query, values)
connection.commit()
print("All data inserted successfully using batch insertion.")
except mysql.connector.Error as err:
print(f"Error: {err}")
finally:
cursor.close()
connection.close()
</code></pre>
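<p>In case it helps, a quick inspection sketch (not part of my script above) should show whether the columns are a two-level MultiIndex and whether the date lives in the index rather than in a <code>Date</code> column:</p>
<pre><code>print(spxHistoricalData.columns)  # e.g. MultiIndex with ('Close', '^GSPC'), ('High', '^GSPC'), ...
print(spxHistoricalData.index)    # e.g. DatetimeIndex of trading dates
</code></pre>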
|
<python><yfinance>
|
2025-01-12 13:46:16
| 2
| 5,810
|
Onyx
|
79,349,541
| 4,451,315
|
DuckDB Python Relational API equivalent of `select sum(a) filter (where b>1)`
|
<p>If I have</p>
<pre class="lang-py prettyprint-override"><code>rel = duckdb.sql('select * from values (1, 4), (1, 2), (2, 3), (2, 4) df(a, b)')
</code></pre>
<p>I'd like to do the equivalent of</p>
<pre class="lang-py prettyprint-override"><code>
In [9]: duckdb.sql('select sum(a) filter (where b>1) from rel')
Out[9]:
┌───────────────────────────────┐
│ sum(a) FILTER (WHERE (b > 1)) │
│            int128             │
├───────────────────────────────┤
│               6               │
└───────────────────────────────┘
</code></pre>
<p>but using the Python Relational API</p>
<p>I've tried</p>
<pre class="lang-py prettyprint-override"><code>rel.select(duckdb.FunctionExpression('sum', duckdb.FunctionExpression('filter', duckdb.ColumnExpression('a'), duckdb.ColumnExpression('b')>1)))
</code></pre>
<p>but get</p>
<pre class="lang-none prettyprint-override"><code>---------------------------------------------------------------------------
BinderException Traceback (most recent call last)
Cell In[10], line 1
----> 1 rel.select(duckdb.FunctionExpression('sum', duckdb.FunctionExpression('filter', duckdb.ColumnExpression('a'), duckdb.ColumnExpression('b')>1)))
BinderException: Binder Error: No function matches the given name and argument types 'filter(INTEGER, BOOLEAN)'. You might need to add explicit type casts.
Candidate functions:
filter(ANY[], LAMBDA) -> ANY[]
</code></pre>
<p>How is it meant to be written?</p>
|
<python><duckdb>
|
2025-01-12 08:02:00
| 2
| 11,062
|
ignoring_gravity
|
79,349,238
| 654,142
|
Deleting Personal Emails from Gmail Efficiently
|
<p>My Gmail reached the 15 GB limit, so I archived old emails with Thunderbird and I want to delete all the emails that are older than two weeks. Using the GUI is cumbersome because it seems like I can only delete 100 at a time. So I thought I could create a test app and add my Google Account as a test user to speed up the deletion. I created OAuth credentials in Google Cloud Console and wrote a python script that is called locally from a python virtual environment. Maybe I have to publish the app to have the scope to delete, and I would prefer not to publish the app. Here is the code:</p>
<pre><code>from googleapiclient.discovery import build
from google_auth_oauthlib.flow import InstalledAppFlow
from google.auth.transport.requests import Request
import pickle
import os
from datetime import datetime, timedelta
import logging
# Set up logging
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
# Define scopes for Gmail API
SCOPES = ['https://www.googleapis.com/auth/gmail.modify']
def main():
creds = None
# Load existing credentials
if os.path.exists('token.pickle'):
with open('token.pickle', 'rb') as token:
creds = pickle.load(token)
# Authenticate if no valid credentials
if not creds or not creds.valid:
if creds and creds.expired and creds.refresh_token:
creds.refresh(Request())
else:
flow = InstalledAppFlow.from_client_secrets_file(
'credentials.json', SCOPES)
creds = flow.run_local_server(port=0)
with open('token.pickle', 'wb') as token:
pickle.dump(creds, token)
# Verify granted scopes
verify_scopes(creds)
service = build('gmail', 'v1', credentials=creds)
delete_all_emails_with_buffer(service)
def verify_scopes(creds):
"""Verify the scopes granted in the OAuth token."""
if creds and creds.scopes:
logging.info(f"Granted Scopes: {creds.scopes}")
required_scopes = set(SCOPES)
granted_scopes = set(creds.scopes)
if required_scopes.issubset(granted_scopes):
logging.info("All required scopes are granted.")
else:
missing_scopes = required_scopes - granted_scopes
logging.error(f"Missing required scopes: {missing_scopes}")
else:
logging.error("No scopes found in the token. Authentication might have failed.")
def delete_all_emails_with_buffer(service, dry_run=False):
try:
buffer_date = (datetime.now() - timedelta(days=14)).strftime('%Y/%m/%d')
query = f'label:inbox before:{buffer_date}'
messages = []
response = service.users().messages().list(userId='me', q=query).execute()
if 'messages' in response:
messages.extend(response['messages'])
while 'nextPageToken' in response:
response = service.users().messages().list(userId='me', q=query,
pageToken=response['nextPageToken']).execute()
messages.extend(response['messages'])
logging.info(f"Found {len(messages)} emails to delete.")
if dry_run:
logging.info("Dry run mode: No emails will be deleted.")
for msg in messages[:10]: # Show a sample of emails
try:
email_details = service.users().messages().get(userId='me', id=msg['id']).execute()
logging.info(f"Email ID: {msg['id']}, Subject: {email_details.get('snippet')}")
except Exception as e:
logging.error(f"Failed to fetch email details for ID {msg['id']}: {e}")
else:
logging.info("Deleting emails...")
for msg in messages:
try:
service.users().messages().delete(userId='me', id=msg['id']).execute()
logging.info(f"Deleted email ID: {msg['id']}")
except Exception as e:
logging.error(f"Failed to delete email ID {msg['id']}: {e}")
except Exception as e:
logging.error(f"Error during email deletion process: {e}")
if __name__ == '__main__':
main()
</code></pre>
<p>I noticed that when the authentication flow starts it doesn't include delete, and when I look at security for third-party apps in Google Account it doesn't list delete either. This is the kind of output I see in terminal:</p>
<pre><code>2025-01-11 17:45:25,628 - WARNING - Encountered 403 Forbidden with reason "insufficientPermissions"
2025-01-11 17:45:25,628 - ERROR - Failed to delete email ID 194080c9712fd742: <HttpError 403 when requesting https://gmail.googleapis.com/gmail/v1/users/me/messages/194080c9712fd742? returned "Request had insufficient authentication scopes.". Details: "[{'message': 'Insufficient Permission', 'domain': 'global', 'reason': 'insufficientPermissions'}]">
</code></pre>
|
<python><gmail-api><google-workspace><google-api-python-client>
|
2025-01-12 02:20:27
| 1
| 711
|
Paul Lewallen
|
79,349,147
| 427,942
|
Included App URLs not showing in DRF browsable API
|
<p>I'm using the Django Rest Framework to provide an API, which works great.</p>
<p>All URLs needed for routers and other views are held in the root urls.py.
To better handle the growing number of routes, I tried to move routes from apps into their related app folders - as one would do with pure Django.</p>
<pre><code># urls.py
from django.contrib import admin
from django.urls import include, path
from rest_framework.routers import DefaultRouter
from rest_framework.authtoken import views as authviews  # used by the tokenauth route below
import core.views
router = DefaultRouter()
router.register(r'core/settings', core.views.SettingsViewSet, basename='settings')
router.register(r'core/organization', core.views.OrgViewSet, basename='org')
urlpatterns = [
path('api/', include(router.urls)),
path('api/een/', include('een.urls')),
path('admin/', admin.site.urls),
path('', include('rest_framework.urls', namespace='rest_framework')),
path('api/tokenauth/', authviews.obtain_auth_token),
]
</code></pre>
<pre><code># een/urls.py
from django.urls import path, include
from rest_framework import routers
from . import views
app_name = 'een'
router = routers.DefaultRouter()
router.register(
r'cvs',
views.EENSettingsViewSet,
basename='een-cvs',
)
urlpatterns = [
path('', include(router.urls)),
]
</code></pre>
<p>Everything shown here is working as expected, but the included URLs are not shown in the browsable API. They are reachable and working, but they are not listed.
I do use drf-spectacular, which correctly picks up even the included app urls.</p>
<p>I've tried several different combinations, different URL orderings, etc. - with no luck.
What am I overlooking? Or is this a general limitation of DRF, meaning I should keep everything in the root urls.py if I want the full browsable API?</p>
|
<python><django><django-rest-framework>
|
2025-01-12 00:26:57
| 1
| 1,568
|
normic
|
79,349,133
| 12,828,249
|
How to architect external events and Django models?
|
<p>I'm building a django backend app and I have a few different models in my app. One of these models is a <code>Driver</code> and I want to do the following when calling the <code>create-driver</code> endpoint:</p>
<ol>
<li>create the driver in database</li>
<li>create a user account for the driver in my B2C directory</li>
<li>send an email to the driver on successful creation</li>
<li>send a notification to admins on successful creation</li>
</ol>
<p>Operations 1 and 2 should either succeed or fail together (an atomic transaction).</p>
<p>I was initially handling 1 and 2 in the <code>Driver</code> model's <code>save</code> method like this:</p>
<pre><code>class Driver(models.Model):
#...
def save(self, *args, **kwargs):
if self.pk is None:
# Creating a new driver
if self.b2c_id is None:
graph_client = MicrosoftGraph()
try:
with transaction.atomic():
user_info = B2CUserInfoSchema(
display_name=f"{self.first_name} {self.last_name}",
first_name=self.first_name,
last_name=self.last_name,
email=self.email,
phone_number=self.phone,
custom_attributes={
"Role": UserRole.DRIVER.value,
},
)
res = graph_client.create_user(user_info) # create B2C user
self.b2c_id = res.id
super().save(*args, **kwargs) # create driver in database
except Exception as e:
raise e
else:
# Updating an existing driver
            super().save(*args, **kwargs)
</code></pre>
<p>This was working perfectly fine, but I didn't like mixing responsibilities here and adding the B2C user creation logic to my Driver's <code>save</code> method. I like to keep the <code>save</code> method simple and focused on creating a database record.</p>
<p>I tried updating the architecture and started using controllers and event dispatchers to handle this. My architecture now looks like this:</p>
<pre><code>class Driver(models.Model):
# ...
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.domain_events = []
def save(self, *args, **kwargs):
if self.pk is None:
self.domain_events.append(EntityEvent(self, EntityEventType.CREATE))
else:
self.domain_events.append(EntityEvent(self, EntityEventType.UPDATE))
super().save(*args, **kwargs)
class EntityEventType(Enum):
CREATE = "create"
UPDATE = "update"
class EntityEvent:
def __init__(self, db_entity: models.Model, event_type: EntityEventType):
self.db_entity = db_entity
self.event_type = event_type
class EntityEventDispatcher:
def __init__(self, b2c_entity_service: B2CEntityService):
self._b2c_entity_service = b2c_entity_service
def dispatch(self, events: list[EntityEvent]):
for event in events:
match event.event_type:
case EntityEventType.CREATE:
self._b2c_entity_service.create_entity(event.db_entity)
case EntityEventType.UPDATE:
self._b2c_entity_service.update_entity(event.db_entity)
class EntityController:
def __init__(self, db_entity: models.Model):
self._db_entity = db_entity
self._event_dispatcher = EntityEventDispatcher(
B2CEntityFactory.from_entity_type(type(db_entity))
)
def _dispatch_events(self):
self._event_dispatcher.dispatch(self._db_entity.domain_events)
@transaction.atomic
def create_entity(self):
self._db_entity.save()
self._dispatch_events()
return self._db_entity
@transaction.atomic
def update_entity(self):
self._db_entity.save()
self._dispatch_events()
return self._db_entity
@transaction.atomic
def delete_entity(self):
self._db_entity.delete()
self._dispatch_events()
return self._db_entity
class DriverController(EntityController):
def __init__(self, driver: Driver):
super().__init__(driver)
</code></pre>
<p>As you can see, I'm now using dispatchers and controllers, and I keep the logic separated. This is how I create drivers now:</p>
<pre><code>def create_driver(request):
data = json.loads(request.body)
driver_controller = DriverController(Driver(**data))
driver = driver_controller.create_entity()
</code></pre>
<p>Obviously, more code was added and a few classes were created in the 2nd approach but the benefit I gained was separating the logic between the database models and external dependencies.</p>
<p>Now I should be able to easily add my external dependencies such as sending emails and notifications by simply adding events in my <code>DriverController</code> and send them to my dispatcher to handle the events.</p>
<p>I'd like to know whether this is a valid approach or whether I'm overcomplicating my solution. What are the pros and cons of this approach?</p>
<p>Thanks</p>
|
<python><django><architecture><domain-driven-design>
|
2025-01-12 00:11:08
| 0
| 2,249
|
Morez
|
79,349,081
| 3,696,153
|
vscode, python - step through code being 'exec'ed
|
<p>So I have a python app that reads/loads a file that is sort of like a plugin for the main application.</p>
<p>I would really like to be able to debug (step into) the code from VSCODE - which I'm using as an IDE</p>
<p>I've tried a number of things, described here (and related posts)</p>
<p><a href="https://stackoverflow.com/questions/70643167/how-to-debug-python-code-running-inside-exec">How to debug python code running inside exec()</a></p>
<p>These are not working. I do end up in the pdb shell in the terminal window, but that is not really what I was hoping for.</p>
<p>Why? What I have is a large directory (forest) of various data directories. Each subdirectory contains a body of data, and I want to have a "customization script" that I can execute for that subdirectory - sort of like a plugin.</p>
<p>So I get to the point where I have the name of the file, I can read the file and I want to "exec()" the contents (string) of that file, and I really want to step into the script being executed.</p>
<p>how can I do this?</p>
<p>EDIT (Jan 13):
I was asked for a minimal reproducible example, so I am adding one.</p>
<pre class="lang-py prettyprint-override"><code># Normally this code would come from a file I open and read
# In this example I have it as a multi-line string.
code='''
for x in range(0,100):
print("Hello %d" % x)
'''
# I want to step into the above code using some IDE (ANY IDE)
# I currently use VSCODE but another is just as good.
exec( code )
</code></pre>
<p>Launch the above in an IDE and try to step through the 'code'</p>
|
<python><visual-studio-code><plugins><vscode-debugger>
|
2025-01-11 23:17:32
| 0
| 798
|
user3696153
|
79,348,814
| 9,182,405
|
How do I include a custom component in a spaCy training pipeline using the CLI?
|
<p>I'm trying to implement a simple custom component in my spaCy training pipeline. I'm using the spaCy CLI for training, which means I'm directing the pipeline configuration through the config.cfg file, though I do have scripts for generating and annotating training and evaluation data. The custom component I've created is stateless, which by all accounts means it should not need a factory and can just use the <code>@Language.component()</code> decorator. The component simply takes the doc object and adds a classification to it based on the number of occurrences of any of a given set of named entities. If there are more than one of any given entity type, the classification score is 1.0; if not, it is 0.0.</p>
<p>Here is the code for the function:</p>
<pre class="lang-py prettyprint-override"><code># custom_classifier.py
from spacy.language import Language
@Language.component("custom_classifier")
def custom_classifier(doc):
entity_types = ["ENTITY1", "ENTITY2", "ENTITY3"]
entity_counts = [sum(1 for ent in doc.ents if ent.label_ == entity_type) for entity_type in entity_types]
if any(count > 1 for count in entity_counts):
doc.cats["MULTIPLE"] = 1.0
else:
doc.cats["MULTIPLE"] = 0.0
return doc
</code></pre>
<p>I'm then attempting to use this custom component in my training pipeline by adding it to my config.cfg definition, like so:</p>
<pre class="lang-none prettyprint-override"><code>[nlp]
lang = "en"
pipeline = ["tok2vec","ner","custom_classifier"]
batch_size = 1000
...
[components]
...
[components.custom_classifier]
source = "custom_classifier.custom_classifier"
after = "ner"
</code></pre>
<p>No matter what I do, when I attempt to run <code>spacy train config.cfg</code>, I end up with some version of this error message:</p>
<pre class="lang-bash prettyprint-override"><code>OSError: [E050] Can't find model 'custom_classifier.custom_classifier'. It doesn't seem to be a Python package or a valid path to a data directory.
</code></pre>
<p>So far I've tried various solutions, none of which have worked. Those include:</p>
<ul>
<li>Trying every conceivable permutation of the module path I could think of.</li>
<li>Converting the component function to a factory and modifying both the decorator and the config accordingly.</li>
<li>Registering the function using the <code>@spacy.registry</code> decorator.</li>
</ul>
<p>I have been combing through the documentation and I simply can't figure out what I'm doing wrong, although I'll concede that the documentation appears to assume that training using custom components is being done in code rather than with the CLI and does not get into detail about how to declare custom components using the config.cfg file. I feel like there's either something simple I'm just overlooking, or else that I'm fundamentally misunderstanding how these custom components are meant to work.</p>
|
<python><machine-learning><nlp><spacy-3>
|
2025-01-11 20:14:52
| 0
| 656
|
Dumas.DED
|
79,348,677
| 10,985,257
|
Calculating CrossEntropyLoss over sparse tensors
|
<p>While the following works as expected:</p>
<pre class="lang-py prettyprint-override"><code>import torch
from torch.nn import CrossEntropyLoss
loss = CrossEntropyLoss()
T1 = torch.randn(10, 20, 4, 4) # The BatchSize is a 10 times repeated tensor (1,20,4,4)
T2 = torch.randint(1,20, (10, 4, 4))
loss(T1, T2)
</code></pre>
<p>the same approach with sparse tensors does not seem to be implemented at the moment:</p>
<pre class="lang-py prettyprint-override"><code>import torch
from torch.nn import CrossEntropyLoss
loss = CrossEntropyLoss()
T1 = torch.randn(10, 20, 4, 4)
T2 = torch.randint(1,20, (10, 4, 4))
T1_sparse = T1.to_sparse_coo()
T2_sparse = T2.to_sparse_coo()
loss(T1_sparse, T2_sparse)
</code></pre>
<p>My "BatchSize" is actually way bigger (around 3000), but my GT-Data is very thin per layer, so I thought maybe this is possible with sparse tensors.</p>
<p>At the moment I bring the sparse <code>to_dense</code> but this is exploding my memory.</p>
<p>So I am looking for a way to calculate CrossEntropy with sparse tensors, maybe by utilizing the indices and values.</p>
|
<python><pytorch><sparse-matrix>
|
2025-01-11 18:40:10
| 0
| 1,066
|
MaKaNu
|
79,348,454
| 9,999,861
|
Visual Studio Code does not find python unit tests in sub folders
|
<p>I am setting up a Python project in Visual Studio Code. My folder and file structure looks as follows:</p>
<ul>
<li>Base project folder
<ul>
<li>source_folder
<ul>
<li>package1_folder
<ul>
<li>module1.py</li>
</ul>
</li>
<li>package2_folder
<ul>
<li>module2.py</li>
</ul>
</li>
</ul>
</li>
<li>source_folder_unittests
<ul>
<li>package1_unittests
<ul>
<li>module1_test.py</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<p>Sadly, the unittest discovery does not see the unit tests inside 'module1_test.py'. As soon as I move the module1_test.py file one folder up, it works:</p>
<ul>
<li>Base project folder
<ul>
<li>source_folder
<ul>
<li>package1_folder
<ul>
<li>module1.py</li>
</ul>
</li>
<li>package2_folder
<ul>
<li>module2.py</li>
</ul>
</li>
</ul>
</li>
<li>source_folder_unittests
<ul>
<li>module1_test.py</li>
</ul>
</li>
</ul>
</li>
</ul>
<p>This is my settings.json file:</p>
<pre><code>{
"python.testing.unittestArgs": [
"-v",
"-s",
"./source_folder_unittests/",
"-p",
"*_test.py"
],
"python.testing.pytestEnabled": false,
"python.testing.unittestEnabled": true,
}
</code></pre>
<p>Does VS Code not recognize unit tests in subfolders? Or am I missing something I have to do? I already added the source_folder to the <code>PYTHONPATH</code> environment variable.</p>
|
<python><visual-studio-code><python-unittest><test-discovery>
|
2025-01-11 16:20:25
| 1
| 507
|
Blackbriar
|
79,348,399
| 15,100,030
|
Twilio: return Gather after client presses star
|
<p>I'm trying to create a fallback Gather for when the interpreter hangs up from the conference while the client is still on the line. I have to return the Gather to the client to ask whether they want another interpreter or to disconnect.</p>
<p>Everything works perfectly, but when the client presses star, the Gather says only the first part of the sentence and then the call disconnects.</p>
<pre class="lang-py prettyprint-override"><code>
def handle_incoming_call(self, call_sid, from_number, language_code):
try:
with transaction.atomic():
next_interpreter = CallSelector.get_next_interpreter(language_code)
conference_name = f"conferenceΩ{call_sid}"
conference = CallSelector.create_conference(
language=next_interpreter.language,
interpreter=next_interpreter.interpreter,
conference_name=conference_name,
)
response = VoiceResponse()
response.say("Please wait while we connect you with an interpreter")
dial = Dial(
hangup_on_star=True,
action=f"{settings.BASE_URL}/api/calls/webhook/interpreter_leave/{conference.conference_id}/",
)
dial.conference(
conference_name,
start_conference_on_enter=True,
end_conference_on_exit=False,
# record=True,
# recording_status_callback=f"{settings.BASE_URL}/api/calls/webhook/recording/{conference.conference_id}/",
status_callback=f"{settings.BASE_URL}/api/calls/webhook/conference-status/{conference.conference_id}/",
status_callback_event="start end join leave announcement",
status_callback_method="POST",
beep=True,
participant_label="client",
)
response.append(dial)
self._add_client_to_conference(conference, from_number, call_sid),
self._call_interpreter(conference, next_interpreter.interpreter),
return response
except Exception as e:
logger.error(f"Call handling failed: {str(e)}", exc_info=True)
return self._generate_no_interpreter_twiml()
</code></pre>
<h1>And here is the callback <code>interpreter_leave</code></h1>
<pre class="lang-py prettyprint-override"><code>def handel_interpreter_leave(self, conference_id: str):
try:
print("Interpreter leave handling started")
conference = CallSelector.get_conference_by_conference_id(conference_id)
response = VoiceResponse()
gather = Gather(
num_digits=1,
action=f"{settings.BASE_URL}/api/calls/webhook/client_choice/{conference_id}/",
method="POST",
timeout=10,
input="dtmf",
)
gather.say("Press 1 to connect with a new interpreter, or press 2 to end the call.")
response.append(gather)
return response
except Exception as e:
logger.error(f"Interpreter leave handling failed: {str(e)}", exc_info=True)
raise
</code></pre>
<p>Before the action call, I have a function that runs after the interpreter leaves to tell the client to press the star key.</p>
<pre class="lang-py prettyprint-override"><code>def handle_interpreter_hangup(self, conference_id: str, sid: str):
try:
print("Interpreter hangup handling started")
conference = CallSelector.get_conference_by_conference_id(conference_id)
self.client.conferences.get(conference.conference_sid).update(
announce_url=f"{settings.BASE_URL}/api/calls/webhook/interpreter_leave_redirect/{conference_id}/"
)
except Exception as e:
logger.error(f"Interpreter hangup handling failed: {str(e)}", exc_info=True)
raise
</code></pre>
<p>Everything works as expected, but when it comes to the Gather, the TwiML says only "Press 1 to".</p>
<p>Also note that when the <code>time_limit</code> attribute is added to the first Dial(), the Gather works.</p>
|
<python><django><twilio>
|
2025-01-11 15:40:48
| 0
| 698
|
Elabbasy00
|
79,348,313
| 7,920,004
|
pytest throwing "module not found" in Databricks notebook
|
<p>I'm recreating the test scenario below but keep getting the following error:</p>
<p><a href="https://learn.microsoft.com/en-us/azure/databricks/notebooks/testing" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/databricks/notebooks/testing</a></p>
<pre><code>ImportError while importing test module '/Workspace/python_tests/dummy_test.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
/usr/lib/python3.12/importlib/__init__.py:90: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
E ModuleNotFoundError: No module named 'dummy_test'
=========================== short test summary info ============================
ERROR dummy_test.py
!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!
=============================== 1 error in 0.45s ===============================
</code></pre>
<p>This is the structure of my <code>/Workspace/python_tests</code> directory and each file. All 3 files reside in the same directory:</p>
<ol>
<li><code>dummy.py</code></li>
</ol>
<pre><code>import pyspark
from pyspark.sql import SparkSession
from pyspark.sql.functions import col
# Because this file is not a Databricks notebook, you
# must create a Spark session. Databricks notebooks
# create a Spark session for you by default.
spark = SparkSession.builder.appName('integrity-tests').getOrCreate()
def dummy():
return "I am a dummy"
</code></pre>
<ol start="2">
<li><code>dummy_test.py</code></li>
</ol>
<pre><code># import pytest
import pyspark
from dummy import *
def test_dummy():
assert dummy() == "I am a dummy"
</code></pre>
<ol start="3">
<li><code>pyTestRunner</code> (notebook)</li>
</ol>
<pre><code>%pip install pytest
import pytest
import sys
# Skip writing pyc files on a readonly filesystem.
sys.dont_write_bytecode = True
# Run pytest.
retcode = pytest.main([".", "-v", "-p", "no:cacheprovider"])
# Fail the cell execution if there are any test failures.
assert retcode == 0, "The pytest invocation failed. See the log for details."
</code></pre>
|
<python><pytest><databricks>
|
2025-01-11 14:48:20
| 1
| 1,509
|
marcin2x4
|
79,348,304
| 6,583,936
|
Ipywidgets Controller not working properly, fields empty
|
<p>I want to use my Xbox controller in a Jupyter environment. I initialized the controller; the widget said to press any button, and then the widget started displaying all of my controls, as in the screenshot. <a href="https://i.sstatic.net/MAvWXIpB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MAvWXIpB.png" alt="enter image description here" /></a></p>
<p>However, when I tried to subscribe with the <code>observe</code> method, it didn't notify at all. I tried looking into the <code>axes</code>, <code>buttons</code>, and <code>connected</code> fields, but they were empty and False. Perhaps I am using the wrong interface, or maybe the widget supports only one observer, since all actions are displayed properly on the widget display. I attach my code below.</p>
<pre><code>#%%
import ipywidgets.widgets as widgets
from IPython.display import display
c = widgets.Controller(index=0)
display(c)
#%%
def on_change(change):
print("Axes:", c.axes)
print("Buttons:", c.buttons)
c.observe(on_change, names=['axes', 'buttons'])
</code></pre>
|
<python><jupyter-notebook><ipython><ipywidgets>
|
2025-01-11 14:42:38
| 1
| 320
|
mcstarioni
|
79,348,281
| 3,008,410
|
Spark JDBC table to Dataframe no partitionCol to use
|
<p>I have a MySQL RDBMS table (3 million rows, only 209K returned) like this that I need Python to load into a Spark dataframe. The issue is that I need to load it concurrently because it is REALLY slow (1.5 hours minimum), but as you can see I have no way to set the "upperBound" and "lowerBound" that JDBC needs. So my question is how to load this table concurrently. I can't change the table, and I can't find an example of such a table being loaded into a dataframe with concurrency.</p>
<p>Please let me know if I am being dense about this, but I just haven't come across this issue before.</p>
<p>The USERJSON is a long-character JSON string.</p>
<pre><code>+------------------------------------+-------------------------------------+-------------------------------------+---------+-------------+---------------------+-----------------+------------------------+
|SESSIONID |PARTID |USERID |USERNAME |ACTIVE_FLAG |LOGINTIMESTAMP |LOGOUTTIMESTAMP |USERJSON |
+------------------------------------+-------------------------------------+-------------------------------------+---------+-------------+---------------------+-----------------+------------------------+
|00000123-e63b-4b65-a47a-c84620ae4d20|d6ee09a5-1a16-4a0a-9e2a-3b9ffd9cf1d0 |null |null |1 |2024-09-25 08:43:44 |null |null |
|000012e8-8baf-4adc-bb1e-4c3aead53e60|d6ee09a5-1a16-4a0a-9e2a-3b9ffd9cf1d0 |2ab6e89b-1dc0-e8a2-ad32-87296434b69a |null |1 |2023-09-22 00:00:00 |null |[2,620 CHARACTER_JSON] |
|000022b4-ad4a-4cef-8285-e65d35b7b106|c59ba81c-5e2f-4760-bf44-24432f1e76fc |252ea556-7eb1-336e-bec5-36df57b8ecee |null |1 |2023-12-23 11:20:34 |null |[2,554 CHARACTER_JSON] |
|000034d2-5607-472d-a8d3-ecf81c76a4cf|d6ee09a5-1a16-4a0a-9e2a-3b9ffd9cf1d0 |da192ec4-97ef-34dc-70d2-3b7b17fd6dcc |null |1 |2023-06-19 00:00:00 |null |[2,526 CHARACTER_JSON] |
+------------------------------------+-------------------------------------+-------------------------------------+---------+-------------+---------------------+-----------------+------------------------+
df_session = spark.read \
.format("jdbc") \
.option("url", "jdbc:mysql://127.0.0.1:3317/sesdb?useSSL=false") \
.option("driver", "com.mysql.jdbc.Driver") \
.option("user", "spark") \
.option("password", "[PASS]") \
.option("query", "select * from sesdb.session where PARTID IN('c59ba81c-5e2f-4760-bf44-24432f1e76fc', '992f6369-bf10-4b2e-bd97-b7c99ec4d6f9', 'd6ee09a5-1a16-4a0a-9e2a-3b9ffd9cf1d0')") \
.load()
</code></pre>
<p>EDIT:</p>
<p>Hi @Jonathan yes I tried that but the problem with that is I don't need them by LOGINTIMESTAMP. I need to get all records, regardless of LOGINTIMESTAMP. But this code below gets me closer.</p>
<pre><code>.option("dbtable", "sesdb.session") \
.option("numPartitions", 200) \
.option("fetchsize", 5000) \
.option("partitionColumn", "LOGINTIMESTAMP") \
.option("lowerBound", "2018-07-06 00:00:00") \
.option("upperBound", "2025-07-20 00:00:00") \
</code></pre>
|
<python><apache-spark><pyspark>
|
2025-01-11 14:28:39
| 0
| 856
|
user3008410
|
79,348,233
| 1,473,517
|
Why is the output from tqdm+concurrent.futures such a mess and what can be done about it?
|
<p>I want to run processes in parallel and show their progress. I have this code:</p>
<pre><code>from math import factorial
from decimal import Decimal, getcontext
from concurrent.futures import ThreadPoolExecutor, as_completed
from tqdm import tqdm
import time
def calc(n_digits, pos, total):
# number of iterations
n = int(n_digits + 1 / 14.181647462725477)
n = n if n >= 1 else 1
# set the number of digits for our numbers
getcontext().prec = n_digits + 1
t = Decimal(0)
pi = Decimal(0)
deno = Decimal(0)
for k in tqdm(range(n), position=pos, desc=f"Job {pos + 1} of {total}", leave=True, dynamic_ncols=True):
t = ((-1) ** k) * (factorial(6 * k)) * (13591409 + 545140134 * k)
deno = factorial(3 * k) * (factorial(k) ** 3) * (640320 ** (3 * k))
pi += Decimal(t) / Decimal(deno)
pi = pi * Decimal(12) / Decimal(640320 ** Decimal(1.5))
pi = 1 / pi
# no need to round
return pi
def parallel_with_concurrent_futures():
# Define the number of threads to use
n_threads = 3
# Define the tasks (e.g., compute first 100, 200, 300, 400 digits of pi)
tasks = [1200, 1700, 900, 1400] # Edit to make code for longer
# Create a list of tqdm objects to manage progress bars
progress_bars = [tqdm(total=int(task + 1 / 14.181647462725477), position=pos, desc=f"Job {pos + 1} of {len(tasks)}", leave=True, dynamic_ncols=True) for pos, task in enumerate(tasks)]
# Run tasks in parallel
with ThreadPoolExecutor(max_workers=n_threads) as executor:
futures = {executor.submit(calc, n, pos, len(tasks)): pos for pos, n in enumerate(tasks)}
for future in as_completed(futures):
pos = futures[future]
progress_bars[pos].close() # Close the progress bar when the job is done
try:
result = future.result()
# Optionally, you can print the result here if needed
# print(f"Job {pos + 1} of {len(tasks)} completed with result: {result}")
except Exception as e:
print(f"Job {pos + 1} of {len(tasks)} failed with error: {e}")
if __name__ == "__main__":
parallel_with_concurrent_futures()
</code></pre>
<p>However, when I run it I see the output gets gradually worse. It starts well with:</p>
<p><a href="https://i.sstatic.net/BN2YQrzu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BN2YQrzu.png" alt="enter image description here" /></a></p>
<p>That is exactly how I want it. But then after the first process finishes I get:</p>
<p><a href="https://i.sstatic.net/f5X4ZaP6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/f5X4ZaP6.png" alt="enter image description here" /></a></p>
<p>The problem has now started. Then later it shows:</p>
<p><a href="https://i.sstatic.net/M69whNip.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/M69whNip.png" alt="enter image description here" /></a></p>
<p>This is even worse. And then when it terminates it shows:</p>
<p><a href="https://i.sstatic.net/I13vFjWk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/I13vFjWk.png" alt="enter image description here" /></a></p>
<p>How can I change the code so it still uses 3 cores in parallel but I only see something like the first of the pictures above? I don't mind using other modules than concurrent and tqdm if that is the right thing to do.</p>
|
<python><concurrent.futures><tqdm>
|
2025-01-11 13:55:51
| 1
| 21,513
|
Simd
|
79,348,168
| 1,142,881
|
How to pass two column vectors instead of one to a rolling window?
|
<p>I'm computing technical indicators on a rolling basis to avoid any look-ahead bias, for example for model training and back-testing. To that end I would like to compute the indicator <a href="https://technical-analysis-library-in-python.readthedocs.io/en/latest/ta.html?highlight=forceindexindicator#ta.volume.ForceIndexIndicator" rel="nofollow noreferrer"><code>ForceIndexIndicator</code></a> using the TA Python project. However, this needs two inputs instead of one: close and volume, and I can't get hold of both in my rolling-apply pipeline:</p>
<pre><code>import pandas as pd
import ta
...
df.columns = ['close', 'volume']
df['force_index_close'] = (
df.rolling(window=window)
.apply(
lambda x: ta.volume.ForceIndexIndicator(
close=x['close'],
volume=x['volume'],
window=13,
fillna=True)
.force_index().iloc[-1]))
</code></pre>
<p>I get the error <code>KeyError: 'close'</code> because apply gets one column at a time and not both simultaneously as needed.</p>
|
<python><pandas>
|
2025-01-11 13:22:19
| 2
| 14,469
|
SkyWalker
|
79,347,968
| 8,425,824
|
What are the differences between the various Python base 85 functions?
|
<p>Python has three sets of base 85 functions, in the <a href="https://docs.python.org/3/library/base64.html" rel="nofollow noreferrer"><code>base64</code></a> module.</p>
<ul>
<li><a href="https://docs.python.org/3/library/base64.html#base64.a85encode" rel="nofollow noreferrer"><code>base64.a85encode</code></a> and <a href="https://docs.python.org/3/library/base64.html#base64.a85decode" rel="nofollow noreferrer"><code>base64.a85decode</code></a> (Ascii85)</li>
<li><a href="https://docs.python.org/3/library/base64.html#base64.b85encode" rel="nofollow noreferrer"><code>base64.b85encode</code></a> and <a href="https://docs.python.org/3/library/base64.html#base64.b85decode" rel="nofollow noreferrer"><code>base64.b85decode</code></a> (Base85)</li>
<li><a href="https://docs.python.org/3/library/base64.html#base64.z85encode" rel="nofollow noreferrer"><code>base64.z85encode</code></a> and <a href="https://docs.python.org/3/library/base64.html#base64.z85decode" rel="nofollow noreferrer"><code>base64.z85decode</code></a> (Z85)</li>
</ul>
<p>These all convert <code>bytes</code> objects to and from strings composed of 85(ish) printable ASCII characters.</p>
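<p>For concreteness, these are the calls I mean (the same input bytes pushed through each codec; the outputs differ):</p>
<pre><code>import base64

data = b"Hello, world!"
print(base64.a85encode(data))  # Ascii85
print(base64.b85encode(data))  # Base85
print(base64.z85encode(data))  # Z85
</code></pre>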
<p>What are their differences?</p>
|
<python><ascii85><base85>
|
2025-01-11 11:25:42
| 2
| 4,486
|
LeopardShark
|
79,347,483
| 865,883
|
VSCode Python Debugger Not Using the Correct Python Binary with launch.json "justMyCode=false"
|
<p>I have a small project where I'd like to use launch.json with a pipenv-created environment, but VSCode appears to be using an invalid Python binary location. This happens when I set "justMyCode" to "false". Setting it to true works fine.</p>
<p>Confusingly, when running the command "Python Debugger: Debug Python File" via the run icon, the correct environment appears to run.</p>
<p>The problem arises with the launch.json configuration I've defined. I get an error "File Not Found: /Users/me/Library/Python/3.13". Strangely, this path is not defined in either the launch.json or the user settings.
I've even explicitly updated both with the following:</p>
<p>User Settings.json</p>
<pre class="lang-json prettyprint-override"><code> "python.defaultInterpreterPath": "/Users/me/.local/share/virtualenvs/testproject-vamZemFG/bin/python",
</code></pre>
<p>launch.json</p>
<pre class="lang-json prettyprint-override"><code>{
"version": "0.2.0",
"configurations": [
{
"name": "Python Debugger: All Code",
"type": "debugpy",
"request": "launch",
"program": "${file}",
"console": "integratedTerminal",
"envFile": "${workspaceFolder}/.env",
"justMyCode": false,
"python": "/Users/me/.local/share/virtualenvs/testproject-vamZemFG/bin/python",
}
]
}
</code></pre>
<p>Here is a screenshot of the error I get
<a href="https://i.sstatic.net/CbStQKjr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CbStQKjr.png" alt="Error" /></a></p>
|
<python><visual-studio-code><vscode-debugger>
|
2025-01-11 03:20:07
| 0
| 9,567
|
funseiki
|
79,347,405
| 3,614,460
|
How to keep matplotlib subplots from overlapping?
|
<p>Code:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
maxRow = 23
fig, axes = plt.subplots(maxRow, 1)
for x in range(maxRow):
axes[x].plot(1,1)
plt.show()
</code></pre>
<p>Running it shows all the plots overlapping, like this.
<a href="https://i.sstatic.net/DAuhdX4E.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DAuhdX4E.png" alt="enter image description here" /></a></p>
<p>Can I dynamically space the graphs so they are not overlapping?</p>
<p><strong>Edit 1</strong>: Something that looks like this but with 20 more below it.
<a href="https://i.sstatic.net/Jf3exAm2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Jf3exAm2.png" alt="enter image description here" /></a></p>
|
<python><matplotlib>
|
2025-01-11 02:08:20
| 1
| 442
|
binary_assemble
|
79,347,389
| 5,404,620
|
Pandas adds "." + digit to the header of a csv
|
<p>I would like to import a CSV file with headers into pandas. Somehow, pandas appends ".7" to the last header's name.</p>
<p>The last header in the csv contains a "<code>?</code>" as the last character (on purpose, the question mark is part of the last column's header)</p>
<pre><code>EXAMPLE XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX;XXXXXXXXXXXXXXXXXXXXXX;XXXXXXXXXXXXXXXXXXXXXX;XXXXXXXX?
;;;
</code></pre>
<p>(in reality there are about 400 columns…)</p>
<pre><code>import pandas as pd
file_path = 'rawdata.csv'
df = pd.read_csv(file_path, sep=';')
print(df.head(1))
</code></pre>
<p>Output</p>
<pre><code> EXAMPLE XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX XXXXXXXXXXXXXXXXXXXXXX \
0 NaN NaN
XXXXXXXXXXXXXXXXXXXXXX XXXXXXXX?.7
0 NaN NaN
</code></pre>
<p>As you can see, the last header ends with a "?", but pandas added .7.</p>
<p>I have tried other ways of outputting the header names and it shows with .7 as well.</p>
<p>During testing I once saw a .4, but I can't reproduce the .4 case…</p>
<p>Any other special symbols instead of <code>?</code> behave as expected.
<code>?</code> at the end of any other line behaves as expected as well.</p>
<p>I have saved the original csv with UTF-8 (no BOM) and \n (LF) line terminator via VS Code. I have checked the file for any invisible characters visually ("Render Control Characters" setting turned on in VS Code)</p>
|
<python><pandas><dataframe><csv>
|
2025-01-11 01:58:53
| 1
| 2,887
|
Adler
|
79,347,041
| 1,018,226
|
Process a large separated file with different Python readers
|
<p>I want to handle text files which contain multiple documents in different formats. The documents are <a href="https://yaml.org/spec/1.2.2/#example-two-documents-in-a-stream" rel="nofollow noreferrer">separated by three dashes, as in YAML</a>.</p>
<pre class="lang-yaml prettyprint-override"><code>Example: Here we have some YAML code
PartOne: It is the first of three parts
---
Column1,Column2,Column3
The,second,part
is,a,CSV
---
[ "The third and last part",
"is some JSON"
]
</code></pre>
<p>There are Python modules to easily parse all of the components. They usually read from file objects. So first, the components would need to be split apart. That could be done by reading the whole file, splitting the components apart and then wrapping them in <em>StringIO</em> again to make them act like a file object.</p>
<pre class="lang-py prettyprint-override"><code>import pathlib, io, yaml, csv, json
partone, parttwo, partthree = pathlib.Path("file").read_text().split("\n---\n")
print(yaml.load(io.StringIO(partone)))
print(tuple(csv.reader(io.StringIO(parttwo))))
print(json.load(io.StringIO(partthree)))
</code></pre>
<p>This approach, however, requires reading and keeping in memory the whole file, or at least one whole component. This is excessive, especially for big components. So I am searching for an alternative that can process the file in a streaming manner while stopping at the separators.</p>
<p>Optimally I would imagine an iterator of file objects that can be read in order.</p>
<pre class="lang-py prettyprint-override"><code>with open("file") as file:
components = splitfile(file, "\n---\n")
print(yaml.load(next(components)))
print(tuple(csv.reader(next(components))))
print(json.load(next(components)))
</code></pre>
<p>Or even more compact would be a reusable file object, that reports each separator as an intermediate end of file.</p>
<pre class="lang-py prettyprint-override"><code>with splitfile(open("file"), "\n---\n") as file:
print(yaml.load(file))
print(tuple(csv.reader(file)))
print(json.load(file))
</code></pre>
<p>I thought about implementing the latter as a file object wrapper. But handling all the edge cases turned out to be quite complex - especially when a separator is only partially read, e.g. because of <a href="https://docs.python.org/3/library/io.html#io.TextIOBase.read" rel="nofollow noreferrer">the <em>size</em> argument to the <em>read</em> or <em>readline</em> methods</a>.</p>
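<p>For illustration, the closest simple thing I can picture is a generator that still buffers one document at a time (so it does not actually solve the memory concern, but it shows the splitting I mean - <code>iter_documents</code> is just a made-up name):</p>
<pre class="lang-py prettyprint-override"><code>def iter_documents(file, sep="---"):
    """Yield each document as a list of lines, splitting on separator lines."""
    doc = []
    for line in file:
        if line.rstrip("\n") == sep:
            yield doc
            doc = []
        else:
            doc.append(line)
    yield doc
</code></pre>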
<p>Is there a Python library or recipe that can help me make such a <code>splitfile</code> function?</p>
|
<python><parsing><yaml><stringio>
|
2025-01-10 21:12:46
| 1
| 2,574
|
XZS
|
79,347,027
| 20,295,949
|
How to Prevent Twitter Blocking Page Refreshes While Scraping Tweets Using Selenium or Alternatives?
|
<p>I'm trying to scrape new tweets from a Twitter account using Selenium (I'm not sure if selenium is the best way to do this). My script logs into Twitter, navigates to the user's profile, and captures the latest tweets. While Selenium works to an extent, I've encountered issues with bot detection and page blocking after multiple refreshes.</p>
<p>Although I've written the code using Selenium, I'm happy to explore other methods (like BeautifulSoup, Scrapy, or any other Python library) if they can achieve my goal more effectively and minimize detection.</p>
<p><strong>My Goal</strong></p>
<p>The script should:</p>
<ol>
<li>Log in to Twitter automatically.</li>
<li>Navigate to a specific user's profile (in this case, Fabrizio Romano: <a href="https://twitter.com/FabrizioRomano" rel="nofollow noreferrer">https://twitter.com/FabrizioRomano</a>).</li>
<li>Capture and print the latest tweets from the page.</li>
<li>Avoid printing the same tweet multiple times, even if it reappears after newer tweets are deleted.</li>
<li>Exclude pinned tweets and reposts (retweets).</li>
</ol>
<p>Here's the script I wrote using Selenium:</p>
<pre><code>import undetected_chromedriver as uc
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import configparser
import time
import random
class TwitterScraper:
def __init__(self, proxy=None, user_agent=None):
# Configure undetected ChromeDriver with optional proxy and user-agent
options = uc.ChromeOptions()
if proxy:
options.add_argument(f"--proxy-server={proxy}")
if user_agent:
options.add_argument(f"user-agent={user_agent}")
options.add_argument("--headless")
options.add_argument("--disable-blink-features=AutomationControlled")
options.add_argument("--disable-infobars")
options.add_argument("--disable-extensions")
options.add_argument("--disable-gpu")
options.add_argument("--no-sandbox")
options.add_argument("--disable-dev-shm-usage")
options.add_argument("--lang=en-US")
options.add_argument("--window-size=1920,1080")
self.driver = uc.Chrome(options=options)
self.printed_tweets = set() # Track printed tweets
def login(self):
try:
print("Navigating to Twitter login page...")
# Load credentials from Account.ini
config = configparser.ConfigParser()
config.read(r"C:\Users\Gaming\Documents\Python Tweets\Account.ini")
email = config.get("x", "email", fallback=None)
username_value = config.get("x", "username", fallback=None)
password_value = config.get("x", "password", fallback=None)
if not email or not password_value or not username_value:
raise ValueError("Email, username, or password missing in Account.ini")
self.driver.get("https://twitter.com/i/flow/login")
time.sleep(3) # Wait for the page to load
# Enter email
email_field = WebDriverWait(self.driver, 15).until(
EC.visibility_of_element_located((By.CSS_SELECTOR, 'input[autocomplete="username"]'))
)
email_field.send_keys(email)
email_field.send_keys("\n")
time.sleep(3)
# Enter username if prompted
try:
username_field = WebDriverWait(self.driver, 5).until(
EC.visibility_of_element_located((By.CSS_SELECTOR, 'input[data-testid="ocfEnterTextTextInput"]'))
)
username_field.send_keys(username_value)
username_field.send_keys("\n")
time.sleep(3)
except:
print("No additional username prompt detected.")
# Enter password
password_field = WebDriverWait(self.driver, 15).until(
EC.visibility_of_element_located((By.CSS_SELECTOR, 'input[name="password"]'))
)
password_field.send_keys(password_value)
password_field.send_keys("\n")
time.sleep(5)
print("Login successful.")
except Exception as e:
print(f"Error during login: {e}")
self.restart()
def navigate_to_page(self, username):
try:
print(f"Navigating to @{username}'s Twitter page...")
user_url = f"https://twitter.com/{username}"
self.driver.get(user_url)
time.sleep(random.uniform(3, 5))
except Exception as e:
print(f"Error navigating to @{username}'s page: {e}")
def get_recent_tweets(self, num_tweets=3):
try:
WebDriverWait(self.driver, 10).until(
EC.presence_of_element_located((By.CSS_SELECTOR, 'article[role="article"]'))
)
tweets = self.driver.find_elements(By.CSS_SELECTOR, 'article[role="article"]')
recent_tweets = []
for tweet in tweets:
# Skip pinned tweets and retweets
if tweet.find_elements(By.CSS_SELECTOR, 'svg[aria-label="Pinned Tweet"]') or \
tweet.find_elements(By.CSS_SELECTOR, 'svg[aria-label="Retweet"]'):
continue
# Fetch tweet text
try:
tweet_text = tweet.find_element(By.CSS_SELECTOR, 'div[data-testid="tweetText"]').text.strip()
except:
continue
# Fetch timestamp
try:
time_element = tweet.find_element(By.XPATH, './/time')
timestamp = time_element.get_attribute("datetime")
except:
continue
recent_tweets.append((timestamp, tweet_text))
if len(recent_tweets) >= num_tweets:
break
return recent_tweets
except Exception as e:
print(f"Error fetching tweets: {e}")
return []
def start(self, username):
self.login()
self.navigate_to_page(username)
while True:
recent_tweets = self.get_recent_tweets(num_tweets=3)
if recent_tweets:
# Get the most recent tweet (first one in the list)
newest_time, newest_tweet = recent_tweets[0]
if newest_tweet not in self.printed_tweets:
print(f"[{newest_time}] {newest_tweet}")
self.printed_tweets.add(newest_tweet)
time.sleep(5) # Refresh every 5 seconds
self.driver.refresh()
def restart(self):
print("Restarting browser...")
self.driver.quit()
self.__init__() # Reinitialize the driver
def quit(self):
self.driver.quit()
# Usage
if __name__ == "__main__":
scraper = TwitterScraper(proxy=None, user_agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64)")
try:
scraper.start(username="FabrizioRomano")
except KeyboardInterrupt:
print("Exiting...")
scraper.quit()
</code></pre>
<p><strong>Current Issues</strong></p>
<p>My script scans the first three tweets after every refresh but cannot reliably exclude pinned tweets and retweets using CSS selectors alone. This results in previously printed tweets reappearing when newer tweets are deleted. I want the script to ensure that no tweet is printed more than once, even if it reappears later. For example, if the initial tweets are Tweet1, Tweet2, and Tweet3, the script should print Tweet1. After a refresh, if the tweets change to Tweet2, Tweet4, and Tweet5, the script should print Tweet2. If another refresh shows Tweet1, Tweet4, and Tweet5, the script should not reprint Tweet1. Additionally, after refreshing the page several times, Twitter blocks my view of the user's posts and displays the message: "Something went wrong. Try reloading." (See attached screenshot).</p>
<p><a href="https://i.sstatic.net/DdZ563j4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DdZ563j4.png" alt="enter image description here" /></a></p>
<p>What I've Tried</p>
<ul>
<li><strong>Login Automation:</strong> The login script works fine, and I can navigate to the user's page.</li>
<li><strong>Tweet Scanning:</strong> My script scans the first three tweets because I couldn't find a reliable way to skip pinned tweets and retweets programmatically.</li>
<li><strong>Avoiding the Twitter API:</strong> I've avoided the API due to its free tier limitations and cost.</li>
<li><strong>Minimizing Bot Detection:</strong> I've added delays, randomized pauses, and tried various Selenium options to make the bot less detectable, but the issue persists after multiple page refreshes.</li>
</ul>
<p><strong>What I Need Help With</strong></p>
<ul>
<li><p>Better Logic to Skip Pinned and Retweeted Tweets:
Is there a reliable way to identify and exclude these tweets, ideally without relying solely on CSS selectors?</p>
</li>
<li><p>Preventing Page Blocks by Twitter:
How can I prevent Twitter from blocking my view of the user's posts after several refreshes?</p>
</li>
<li><p>Are there adjustments I can make to my Selenium script?
Should I use proxies, rotate user agents, or implement other techniques to minimize detection?</p>
</li>
<li><p>Exploring Alternatives to Selenium:
Would tools like BeautifulSoup, Scrapy, or other Python libraries be better suited for scraping tweets without detection? I'm open to switching methods if there's a more robust solution.</p>
</li>
<li><p>General Advice on Avoiding Detection:
What are the best practices for building a scraper that avoids detection when interacting with sites like Twitter?</p>
</li>
</ul>
<p><strong>Additional Context</strong></p>
<ul>
<li><p>I'm using Selenium for this project but am open to exploring alternatives. My main objective is to scrape and print tweets reliably, ensuring no duplicates, while avoiding detection or blocking.</p>
</li>
<li><p>I understand Twitter's terms of service regarding scraping and aim to use this responsibly.</p>
</li>
</ul>
<p>Here is the code I made for rotating</p>
<p>Any guidance on improving the current approach or switching to a better method would be greatly appreciated!</p>
|
<python><selenium-webdriver><web-scraping>
|
2025-01-10 21:06:43
| 1
| 319
|
HamidBee
|
79,346,879
| 533,104
|
Selenium - Attempting to invoke "next" button on search results
|
<p>I'm using Selenium to scrape the results from this game search page.</p>
<p><a href="https://www.igt.com/products-and-services/gaming/games#gs:category=1" rel="nofollow noreferrer">https://www.igt.com/products-and-services/gaming/games#gs:category=1</a></p>
<p>I am able to load the page, set my search parameters, and the search executes automatically. I then wait for the results and I can parse the results on that page. However I am unable to advance to the next page.</p>
<p>The next page button looks like this.</p>
<pre><code><a href="javascript:__doPostBack('ctl00$ContentPlaceHolder1$main_0$GameSearch4A$ResultsBottomUnorderedlistdatapager$ctl02$ctl00','')">βΊ</a>
</code></pre>
<p>I can find the anchor element but when I call element.click() I get this error message.</p>
<p>ElementClickInterceptedException: Message: element click intercepted: Element is not clickable at point (661, 1075)</p>
<p>I also tried executing the JavaScript directly, and got this error:</p>
<p>JavascriptException: Message: javascript error: 'caller', 'callee', and 'arguments' properties may not be accessed on strict mode functions or the arguments objects for calls to them</p>
<p>The first problem with click seems like I'm just doing something wrong but not sure what. I don't understand the second error message.</p>
<p>Any help?</p>
<p>Update based on JeffC's answer.</p>
<p>This is the exact code I'm running. I'm using Google Colab so I made slight changes to load the driver from google-colab-selenium.</p>
<pre><code>%pip install -q google-colab-selenium
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import google_colab_selenium as gs
driver = gs.Chrome()
url = 'https://www.igt.com/products-and-services/gaming/games#gs:category=1'
# driver = webdriver.Chrome()
driver.get(url)
wait = WebDriverWait(driver, 10)
wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, "div.preload_back")))
wait.until(EC.invisibility_of_element_located((By.CSS_SELECTOR, "div.preload_back")))
wait.until(EC.element_to_be_clickable((By.XPATH, "//a[text()='›']"))).click()
</code></pre>
<p>The error message is:</p>
<pre><code>ElementClickInterceptedException: Message: element click intercepted: Element is not clickable at point (661, 839)
(Session info: chrome=131.0.6778.264)
</code></pre>
|
<python><selenium-webdriver>
|
2025-01-10 19:56:22
| 2
| 2,005
|
dakamojo
|
79,346,871
| 6,067,528
|
Memory leak explanation for Python service making lots of outbound calls with `aiohttp`
|
<p>I have an issue with ever-increasing memory in a Python-based service that uses Celery tasks. These tasks make a ton of outbound calls using something like the below:</p>
<pre><code>async with aiohttp.ClientSession() as session:
async with session.request(
method, url, json=data, auth=self.basic_auth, headers=headers
) as resp:
resp.raise_for_status()
return await resp.json()
</code></pre>
<p>I have used <code>memray</code> to produce a memory profile of my service, which suggests an issue with many, ever-increasing allocations in <code>asyncio.sslproto</code> over time - it increases by a few MB per minute.</p>
<p><a href="https://i.sstatic.net/89XccOTK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/89XccOTK.png" alt="enter image description here" /></a></p>
<p>Does anyone have any suggestions as to how I should configure my client class to avoid this issue? My assumption is that the creation of many SSL contexts has a huge overhead. It's quite unclear to me how this is happening, though, because I'm using the context manager protocol to handle resource management, so I would assume the memory is released upon exiting.</p>
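<p>To make the question concrete, this is roughly the change I am considering (just a sketch with made-up names such as <code>ApiClient</code>, not what is currently deployed): keep one long-lived <code>ClientSession</code> on the client class instead of creating a new one per call, so the connector and its SSL contexts get reused.</p>
<pre><code>import aiohttp

class ApiClient:
    def __init__(self):
        self._session = None  # one shared session for the lifetime of the client

    async def _get_session(self):
        # Lazily create a single ClientSession; aiohttp then reuses its
        # connector (and the SSL contexts it holds) across requests.
        if self._session is None or self._session.closed:
            self._session = aiohttp.ClientSession()
        return self._session

    async def request_json(self, method, url, data=None, headers=None):
        session = await self._get_session()
        async with session.request(method, url, json=data, headers=headers) as resp:
            resp.raise_for_status()
            return await resp.json()

    async def close(self):
        # must be awaited once when the worker shuts down
        if self._session is not None and not self._session.closed:
            await self._session.close()
</code></pre>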
<p>Can anyone help me understand why this is happening?</p>
|
<python><memory-leaks>
|
2025-01-10 19:53:33
| 0
| 1,313
|
Sam Comber
|