| QuestionId (int64, 74.8M to 79.8M) | UserId (int64, 56 to 29.4M) | QuestionTitle (string, 15 to 150 chars) | QuestionBody (string, 40 to 40.3k chars) | Tags (string, 8 to 101 chars) | CreationDate (2022-12-10 09:42:47 to 2025-11-01 19:08:18) | AnswerCount (int64, 0 to 44) | UserExpertiseLevel (int64, 301 to 888k) | UserDisplayName (string, 3 to 30 chars, nullable ⌀) |
|---|---|---|---|---|---|---|---|---|
79,635,883
| 6,479,666
|
Telethon session not authorized with docker container
|
<p>I have a Python script inside a Docker container. I mount a folder with .session files as a volume into the container.</p>
<p>When I connect to telegram locally, with jupyter, it works perfectly fine:</p>
<pre class="lang-py prettyprint-override"><code>messages = []
async with TelegramClient("../volumes/telethon_sessions/session_1", API_ID, API_HASH) as client:
async for message in client.iter_messages(CHANNEL_USERNAME, limit=100):
messages.append(message)
print(len(messages)) # 61
</code></pre>
<pre class="lang-py prettyprint-override"><code>from telethon.sync import TelegramClient
from telethon.sessions import StringSession
client = TelegramClient("../volumes/telethon_sessions/session_1", API_ID, API_HASH)
await client.connect()
auth = await client.is_user_authorized()
print(auth) # True
await client.disconnect()
</code></pre>
<p>When I run the container in interactive mode, I can see the .session file is mounted. Permissions seem to be sufficient. Env variables are correct.</p>
<pre><code># ls -l sessions
total 64
-rw-r--r-- 1 root root 28672 May 23 18:49 session_1.session
-rw-r--r-- 1 root root 28672 May 23 15:25 session_name.session
</code></pre>
<p>I checked whether the session file can be reached from code in interactive mode - yes. Only <code>TELETHON_SESSION_PATH</code> is shown below, but <code>TELEGRAM_API_ID</code> and <code>TELEGRAM_API_HASH</code> are also present and correct.</p>
<pre><code>>>> import os
>>> os.getenv("TELETHON_SESSION_PATH")
'/code/sessions/session_1'
>>> os.path.exists(os.getenv("TELETHON_SESSION_PATH") + ".session")
True
</code></pre>
<p>Here is my script. I used to use a context manager, but switched to plain <code>.connect</code> because it's easier to debug. For the same reason, I've added SQLiteSession.</p>
<pre class="lang-py prettyprint-override"><code> session_path = os.environ["TELETHON_SESSION_PATH"]
api_id = int(os.environ["TELEGRAM_API_ID"])
api_hash = os.environ["TELEGRAM_API_HASH"]
session = SQLiteSession(session_path)
client = TelegramClient(session, api_id, api_hash)
client.connect()
print("Authorized?", client.is_user_authorized())
string = StringSession.save(client.session)
print("StringSession:", string)
while True:
offset_date = start_date
n = 0
for message in client.iter_messages(
</code></pre>
<p>This script fails</p>
<pre><code>2025-05-23 17:40:08 INFO:telethon.network.mtprotosender:Connecting to 149.154.167.51:443/TcpFull...
2025-05-23 17:40:10 INFO:telethon.network.mtprotosender:Connection to 149.154.167.51:443/TcpFull complete!
2025-05-23 17:40:10 Traceback (most recent call last):
2025-05-23 17:40:10 File "<frozen runpy>", line 198, in _run_module_as_main
2025-05-23 17:40:10 File "<frozen runpy>", line 88, in _run_code
2025-05-23 17:40:10 File "/code/miner_text/main.py", line 54, in <module>
2025-05-23 17:40:10 main()
2025-05-23 17:40:10 ~~~~^^
2025-05-23 17:40:10 File "/code/miner_text/main.py", line 48, in main
2025-05-23 17:40:10 miner.save_old_posts()
2025-05-23 17:40:10 ~~~~~~~~~~~~~~~~~~~~^^
2025-05-23 17:40:10 File "/code/miner_text/miner_text.py", line 44, in save_old_posts
2025-05-23 17:40:10 for post_url, post_time in self._driver.get_post_urls_before(
2025-05-23 17:40:10 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
2025-05-23 17:40:10 earliest_timestamp
2025-05-23 17:40:10 ^^^^^^^^^^^^^^^^^^
2025-05-23 17:40:10 ):
2025-05-23 17:40:10 ^
2025-05-23 17:40:10 File "/code/miner_text/drivers/telegram.py", line 101, in get_post_urls_before
2025-05-23 17:40:10 for url, message_date in self._get_messages(start_date=date):
2025-05-23 17:40:10 ~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
2025-05-23 17:40:10 File "/code/miner_text/drivers/telegram.py", line 70, in _get_messages
2025-05-23 17:40:10 for message in client.iter_messages(
2025-05-23 17:40:10 ~~~~~~~~~~~~~~~~~~~~^
2025-05-23 17:40:10 self.channel_id, limit=TELEGRAM_FETCH_LIMIT, offset_date=offset_date
2025-05-23 17:40:10 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-05-23 17:40:10 ):
2025-05-23 17:40:10 ^
2025-05-23 17:40:10 File "/usr/local/lib/python3.13/site-packages/telethon/requestiter.py", line 87, in __next__
2025-05-23 17:40:10 return self.client.loop.run_until_complete(self.__anext__())
2025-05-23 17:40:10 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^
2025-05-23 17:40:10 File "/usr/local/lib/python3.13/asyncio/base_events.py", line 719, in run_until_complete
2025-05-23 17:40:10 return future.result()
2025-05-23 17:40:10 ~~~~~~~~~~~~~^^
2025-05-23 17:40:10 File "/usr/local/lib/python3.13/site-packages/telethon/requestiter.py", line 58, in __anext__
2025-05-23 17:40:10 if await self._init(**self.kwargs):
2025-05-23 17:40:10 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-05-23 17:40:10 File "/usr/local/lib/python3.13/site-packages/telethon/client/messages.py", line 27, in _init
2025-05-23 17:40:10 self.entity = await self.client.get_input_entity(entity)
2025-05-23 17:40:10 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-05-23 17:40:10 File "/usr/local/lib/python3.13/site-packages/telethon/client/users.py", line 445, in get_input_entity
2025-05-23 17:40:10 await self._get_entity_from_string(peer))
2025-05-23 17:40:10 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-05-23 17:40:10 File "/usr/local/lib/python3.13/site-packages/telethon/client/users.py", line 561, in _get_entity_from_string
2025-05-23 17:40:10 result = await self(
2025-05-23 17:40:10 ^^^^^^^^^^^
2025-05-23 17:40:10 functions.contacts.ResolveUsernameRequest(username))
2025-05-23 17:40:10 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-05-23 17:40:10 File "/usr/local/lib/python3.13/site-packages/telethon/client/users.py", line 30, in __call__
2025-05-23 17:40:10 return await self._call(self._sender, request, ordered=ordered)
2025-05-23 17:40:10 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-05-23 17:40:10 File "/usr/local/lib/python3.13/site-packages/telethon/client/users.py", line 92, in _call
2025-05-23 17:40:10 result = await future
2025-05-23 17:40:10 ^^^^^^^^^^^^
2025-05-23 17:40:10 telethon.errors.rpcerrorlist.AuthKeyUnregisteredError: The key is not registered in the system (caused by ResolveUsernameRequest)
</code></pre>
<p>And when I try to connect from container in the interactive mode it also fails to authorize:</p>
<pre><code>>>> session_path = os.getenv("TELETHON_SESSION_PATH")
>>> api_id = os.getenv("TELEGRAM_API_ID")
>>> api_hash = os.getenv("TELEGRAM_API_HASH")
>>> from telethon.sync import TelegramClient
>>> client = TelegramClient(session_path, api_id, api_hash)
>>> client.connect()
>>> client.is_user_authorized()
False
</code></pre>
<p>I tried creating a new session file and using different styles in the script (with and without a context manager), but it keeps failing.</p>
<p>EDIT:</p>
<p>I've added some diagnostic code both into jupyter and container:</p>
<pre class="lang-py prettyprint-override"><code>print("Session file", client.session.filename)
session_string = StringSession.save(client.session)
print("Session hashed", md5(session_string.encode()).hexdigest())
</code></pre>
<p>With jupyter I got:</p>
<pre><code>Session file ../volumes/telethon_sessions/session_1.session
Session hashed 24781ed97ab257ed6a23af5152eaf2fd
</code></pre>
<p>and with container:</p>
<pre><code>Session file /code/sessions/session_1.session
Session hashed e11e563357dd3bcf46d42af910698d0a
</code></pre>
<p>While the file seems to be the same, just mounted in different systems, the hash differs. I'm no expert on Telethon and initially assumed it could vary because the session file stores info about the connections it was used for, but I don't know for sure. Might it be the case that the same file is decoded differently on different systems?</p>
<p>UPDATE:</p>
<p>It seems so: I've created a new session file inside the container, and my script can connect while using it. The file is created in the volumes folder, as expected, but I can't connect with it from outside the container.</p>
<p>So it seems that the same file is being encoded/decoded differently between Windows and the Docker container (<code>python:3.13-slim</code>), which is extremely inconvenient.</p>
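<p>A minimal sketch of exporting the file-based session to a portable <code>StringSession</code>, which sidesteps the per-OS file format entirely (assumes <code>API_ID</code>/<code>API_HASH</code> as above; treat it as an illustration, not a confirmed fix):</p>
<pre class="lang-py prettyprint-override"><code>from telethon.sync import TelegramClient
from telethon.sessions import StringSession

# One-off export: turn the existing .session file into a portable string
with TelegramClient("../volumes/telethon_sessions/session_1", API_ID, API_HASH) as client:
    portable = StringSession.save(client.session)
    print(portable)  # store it, e.g. in an environment variable

# Anywhere else (e.g. inside the container): rebuild the client from the string
with TelegramClient(StringSession(portable), API_ID, API_HASH) as client:
    print(client.is_user_authorized())
</code></pre>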
|
<python><docker><telethon>
|
2025-05-23 16:17:45
| 0
| 567
|
Michail Highkhan
|
79,635,401
| 7,002,525
|
Google Cloud and project ID from google.auth.default() in docker container
|
<p>I'm playing Whac-A-Mole with Google Cloud authentication. Sorry for the lengthy question but since I have no idea where the problem is, I'm trying to give enough context.</p>
<p>With Python, calling <a href="https://googleapis.dev/python/google-auth/latest/reference/google.auth.html#google.auth.default" rel="nofollow noreferrer"><code>google.auth.default()</code></a>, I can retrieve the ID of my Google Cloud project outside of a docker container but not within.</p>
<p>I've set my project:</p>
<pre><code>$ gcloud config set project my-project
Updated property [core/project].
</code></pre>
<p>I've set the "Quota project":</p>
<pre><code>$ gcloud auth application-default set-quota-project my-project
Credentials saved to file: [/home/.../.config/gcloud/application_default_credentials.json]
These credentials will be used by any library that requests Application Default Credentials (ADC).
Quota project "my-project" was added to ADC which can be used by Google client libraries for billing and quota. Note that some services may still bill the project owning the resource.
</code></pre>
<p>I've set the "billing/quota_project":</p>
<pre><code>$ gcloud config set billing/quota_project my-project
Updated property [billing/quota_project].
</code></pre>
<p>Then, in the docs for <a href="https://cloud.google.com/docs/authentication/set-up-adc-containerized-environment" rel="nofollow noreferrer">a containerized development environment</a>, it says:</p>
<blockquote>
<p>To test your containerized application on your local workstation, you can configure your container to authenticate with your <a href="https://cloud.google.com/docs/authentication/application-default-credentials#personal" rel="nofollow noreferrer">local ADC file</a>. For more information, see <a href="https://cloud.google.com/docs/authentication/set-up-adc-local-dev-environment#google-idp" rel="nofollow noreferrer">Configure ADC with your Google Account</a>.</p>
</blockquote>
<p>I'm testing my containerized application on my local workstation, so I've created local authentication credentials for my user account:</p>
<pre><code>$ gcloud auth application-default login
Your browser has been opened to visit:
...
Credentials saved to file: [/home/.../.config/gcloud/application_default_credentials.json]
These credentials will be used by any library that requests Application Default Credentials (ADC).
Quota project "chatbot-smart" was added to ADC which can be used by Google client libraries for billing and quota. Note that some services may still bill the project owning the resource.
</code></pre>
<p>Also, based on <a href="https://stackoverflow.com/a/46143083/7002525">this SO answer</a>, since I'm using docker-compose, I added the config folder to the yaml:</p>
<pre><code>volumes:
- ~/.config/gcloud:/root/.config/gcloud
</code></pre>
<p>Now, based on <a href="https://googleapis.dev/python/google-api-core/latest/auth.html#user-accounts-3-legged-oauth-2-0-with-a-refresh-token" rel="nofollow noreferrer">examples</a>:</p>
<blockquote>
<p>The simplest way to use credentials from a user account is via Application Default Credentials using <code>gcloud auth application-default login</code> (as mentioned above) and <strong>google.auth.default()</strong>:</p>
</blockquote>
<pre><code>import google.auth
credentials, project = google.auth.default()
</code></pre>
<p>Outside of a container, I get both the credentials and the project ID, but inside a container this returns None for the project ID, as documented <a href="https://googleapis.dev/python/google-auth/latest/reference/google.auth.html#google.auth.default" rel="nofollow noreferrer">here</a>:</p>
<blockquote>
<p>Project ID may be None, which indicates that the Project ID could not be ascertained from the environment.</p>
</blockquote>
<p>The question is, how can I get the Project ID ascertained from the environment inside a docker container?</p>
<p>I found <a href="https://github.com/googleapis/google-auth-library-python/issues/383" rel="nofollow noreferrer">this Github issue</a> with the same problem, but the solution leads eventually to the Google documentation, where I've already been wandering for some time:</p>
<blockquote>
<p>On a second read, I'm don't think this particular flow is supported for getting a default project.</p>
<p>Please see <a href="https://hub.docker.com/r/google/cloud-sdk/" rel="nofollow noreferrer">https://hub.docker.com/r/google/cloud-sdk/</a> and <a href="https://github.com/GoogleCloudPlatform/cloud-sdk-docker" rel="nofollow noreferrer">https://github.com/GoogleCloudPlatform/cloud-sdk-docker</a> for authenticating inside a docker image.</p>
<p>Thanks!</p>
</blockquote>
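<p>For reference, a small diagnostic sketch that can be run inside the container to see what ADC actually picks up; the explicit <code>GOOGLE_CLOUD_PROJECT</code> environment variable shown here is an assumption about one possible setup, not part of the original configuration:</p>
<pre><code>import os
import google.auth

# Is the mounted ADC file visible inside the container?
adc_path = os.path.expanduser("~/.config/gcloud/application_default_credentials.json")
print("ADC file exists:", os.path.exists(adc_path))

# google.auth.default() also falls back to this variable for the project ID,
# e.g. set GOOGLE_CLOUD_PROJECT=my-project in the docker-compose environment
print("GOOGLE_CLOUD_PROJECT:", os.getenv("GOOGLE_CLOUD_PROJECT"))

credentials, project = google.auth.default()
print("Project resolved by ADC:", project)
</code></pre>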
|
<python><docker><authentication><google-cloud-platform><gcloud>
|
2025-05-23 11:13:26
| 0
| 706
|
teppo
|
79,635,390
| 4,764,604
|
PyTorch and Streamlit interaction error: "Tried to instantiate class '__path__._path', but it does not exist!"
|
<p>I'm developing an application that uses Streamlit and PyTorch (through the huggingface <code>transformers</code> library) to analyze system requirements with a language model. However, I'm encountering the following recurring error:</p>
<pre class="lang-py prettyprint-override"><code>Examining the path of torch.classes raised:
Traceback (most recent call last):
File "/home/ubuntu/.local/lib/python3.12/site-packages/streamlit/web/bootstrap.py", line 347, in run
if asyncio.get_running_loop().is_running():
^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: no running event loop
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/ubuntu/.local/lib/python3.12/site-packages/streamlit/watcher/local_sources_watcher.py", line 217, in get_module_paths
potential_paths = extract_paths(module)
^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/.local/lib/python3.12/site-packages/streamlit/watcher/local_sources_watcher.py", line 210, in <lambda>
lambda m: list(m.__path__._path),
^^^^^^^^^^^^^^^^
File "/home/ubuntu/.local/lib/python3.12/site-packages/torch/_classes.py", line 13, in __getattr__
proxy = torch._C._get_custom_class_python_wrapper(self.name, attr)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Tried to instantiate class '__path__._path', but it does not exist! Ensure that it is registered via torch::class_
</code></pre>
<p>The application works correctly, but these errors appear constantly in the terminal. I've already tried:</p>
<ol>
<li>Using lazy imports for PyTorch modules</li>
<li>Setting environment variables:
<pre class="lang-bash prettyprint-override"><code>export TOKENIZERS_PARALLELISM=false
export PYTHONWARNINGS=ignore::FutureWarning
</code></pre>
</li>
<li>Using a custom startup script</li>
<li>Moving the model loading to a separate function with lazy initialization</li>
</ol>
<p>My code for loading the model is structured like this:</p>
<pre class="lang-py prettyprint-override"><code>import re
import json
import numpy as np
from typing import Dict, List, Optional
import os
# Désactiver les avertissements de PyTorch avec Streamlit
os.environ["TOKENIZERS_PARALLELISM"] = "false"
# Chargement différé pour éviter les conflits entre PyTorch et Streamlit
def _import_transformers():
try:
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
return AutoModelForCausalLM, AutoTokenizer, pipeline
except ImportError:
print("Transformers n'est pas installé. Mode simulation activé.")
return None, None, None
# Configuration globale
MODEL_NAME = "TinyLlama/TinyLlama-1.1B-Chat-v1.0" # Modèle plus léger et rapide
MAX_NEW_TOKENS = 512
DEVICE = "cpu" # Utiliser "cuda" si vous avez un GPU
# Initialiser le LLM une seule fois (lazy loading)
_llm = None
_tokenizer = None
_model_loading_attempted = False
def _get_llm():
"""Initialise et retourne le modèle LLM (lazy loading)"""
global _llm, _tokenizer, _model_loading_attempted
# Si on a déjà essayé de charger le modèle sans succès, ne pas réessayer
if _model_loading_attempted and _llm is None:
return None, None
# Si le modèle n'est pas encore chargé, le charger
if _llm is None:
try:
print("Chargement du modèle LLM... (première utilisation)")
# Import des modules nécessaires seulement quand on en a besoin
AutoModelForCausalLM, AutoTokenizer, _ = _import_transformers()
if AutoModelForCausalLM is None:
_model_loading_attempted = True
return None, None
_tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
_llm = AutoModelForCausalLM.from_pretrained(
MODEL_NAME,
device_map=DEVICE,
torch_dtype="auto",
low_cpu_mem_usage=True, # Réduire l'utilisation de la mémoire CPU
# Ajouter des options pour réduire l'utilisation de mémoire si nécessaire :
# load_in_8bit=True, # Pour la quantification 8-bit
# Trust remote code est nécessaire pour certains modèles
trust_remote_code=True
)
# Vérifier si le GPU est disponible
if DEVICE == "cuda":
import torch
if torch.cuda.is_available():
print(f"Device set to use {torch.cuda.get_device_name(0)}")
else:
print("CUDA demandé mais non disponible, utilisation du CPU à la place")
print(f"Modèle {MODEL_NAME} chargé avec succès!")
_model_loading_attempted = True
except Exception as e:
print(f"Erreur lors du chargement du modèle: {e}")
print("Utilisation du mode de simulation à la place.")
_model_loading_attempted = True
return None, None
return _llm, _tokenizer
</code></pre>
<p>Environment:</p>
<ul>
<li>Python 3.12</li>
<li>Streamlit 1.45.1</li>
<li>PyTorch (latest version installed)</li>
<li>Transformers (latest version)</li>
<li>Ubuntu 22.04</li>
<li>CUDA 12.2</li>
<li>tornado <em>not</em> installed</li>
</ul>
<p>How can I eliminate these compatibility errors between Streamlit's hot-reloading system and PyTorch? It seems to be an issue related to Streamlit's introspection of modules loaded in memory.</p>
<p>Is there any configuration in Streamlit to disable inspection of certain modules, or any way to make PyTorch more compatible with this type of introspection?</p>
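<p>Two workarounds commonly suggested for this Streamlit/torch watcher clash (both are assumptions to verify against your versions, not confirmed fixes): stub out <code>torch.classes.__path__</code> so the watcher has nothing to inspect, or disable Streamlit's file watcher entirely.</p>
<pre class="lang-py prettyprint-override"><code>import torch

# Workaround 1: give the watcher an empty path list so it no longer
# triggers torch's lazy __getattr__ on torch.classes.__path__._path
torch.classes.__path__ = []

# Workaround 2 (alternative): disable the watcher in .streamlit/config.toml
#   [server]
#   fileWatcherType = "none"
# or start the app with: streamlit run app.py --server.fileWatcherType none
</code></pre>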
|
<python><machine-learning><pytorch><huggingface-transformers><streamlit>
|
2025-05-23 11:06:14
| 1
| 3,396
|
Revolucion for Monica
|
79,635,263
| 6,643,185
|
CrewAI Deployment Issue with Permisson in AWS Lambda
|
<p>Good day all,
I have a <strong>FastAPI</strong> application whose APIs connect to my <strong>CrewAI</strong> agents. It works as expected locally. The CrewAI agent uses CrewAI's <strong>PDFSearchTool</strong>, which creates a vector DB of the connected PDF, so the code creates a '<strong>db\</strong>' folder next to <em>main.py</em>.</p>
<p>Now I am trying to deploy this application to AWS Lambda. When I deployed my container to Lambda, I started getting exceptions, and I realized Lambda only allows creating folders inside '<strong>/tmp</strong>'. To change where the data is created I added a few lines to the code, but I'm still getting the same error.</p>
<p>The added codes are:</p>
<pre class="lang-py prettyprint-override"><code>#in main.py:
# Override default config directory
import os
os.environ["MEM0_DIR"] = "/tmp/mem0_config"
os.environ["CONFIG_DIR"] = "/tmp/embedchain"
os.environ["EMBEDCHAIN_CONFIG_DIR"] = "/tmp/.embedchain"
os.environ["CHROMA_DB_DIR"] = "/tmp/db"
</code></pre>
<pre class="lang-py prettyprint-override"><code>#in crew.py
import os
from chromadb.config import Settings
chroma_settings = Settings(
chroma_db_impl="chromadb.db.impl.sqlite",
persist_directory="/tmp/db" # This is the correct field name
)
os.environ["CHROMA_DB_DIR"] = "/tmp/db"
os.makedirs("/tmp/db", exist_ok=True)
from app.config.config import Config
from crewai_tools import PDFSearchTool
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task
from app.utils.helper.agent_settings import get_verbose_config
from embedchain import App
embedchain_app = App(config={"chroma_settings": chroma_settings})
pdf_file = Config.get_env_path("PDF_PATH")
@CrewBase
class SomeCrew():
agents_config = 'config/agents.yaml'
tasks_config = 'config/tasks.yaml'
# Get the information from environment
setup_verbose = get_verbose_config()
pdf_tools = PDFSearchTool(pdf=pdf_file, app=embedchain_app)
@agent
def some_agent(self) -> Agent:
return Agent(
config=self.agents_config['some_agent'],
tools=[
self.pdf_tools
],
verbose=self.setup_verbose
)
@task
def some_task(self) -> Task:
....
@crew
def crew(self) -> Crew:
....
</code></pre>
<p>And the error I am still getting is,</p>
<pre><code>{
"errorMessage": "Unable to generate pydantic-core schema for <class 'crewai_tools.tools.stagehand_tool.stagehand_tool.AvailableModel'>. Set `arbitrary_types_allowed=True` in the model_config to ignore this error or implement `__get_pydantic_core_schema__` on your type to fully support it.\n\nIf you got this error by calling handler(<some type>) within `__get_pydantic_core_schema__` then you likely need to call `handler.generate_schema(<some type>)` since we do not call `__get_pydantic_core_schema__` on `<some type>` otherwise to avoid infinite recursion.\n\nFor further information visit https://errors.pydantic.dev/2.11/u/schema-for-unknown-type",
"errorType": "PydanticSchemaGenerationError",
"requestId": "",
"stackTrace": [
" File \"/var/lang/lib/python3.12/importlib/__init__.py\", line 90, in import_module\n return _bootstrap._gcd_import(name[level:], package, level)\n",
" File \"<frozen importlib._bootstrap>\", line 1387, in _gcd_import\n",
" File \"<frozen importlib._bootstrap>\", line 1360, in _find_and_load\n",
" File \"<frozen importlib._bootstrap>\", line 1331, in _find_and_load_unlocked\n",
" File \"<frozen importlib._bootstrap>\", line 935, in _load_unlocked\n",
" File \"<frozen importlib._bootstrap_external>\", line 999, in exec_module\n",
" File \"<frozen importlib._bootstrap>\", line 488, in _call_with_frames_removed\n",
" File \"/var/task/main.py\", line 23, in <module>\n from app.api.agent.router import router as agent_router\n",
" File \"/var/task/app/api/agent/router.py\", line 2, in <module>\n from app.api.agent.some_agent import router as some_agent_router\n",
" File \"/var/task/app/api/agent/some_agent.py\", line 4, in <module>\n from app.agents.somaeagent.some_crew import SomeCrew\n",
" File \"/var/task/app/agents/something/some_crew.py\", line 3, in <module>\n from crewai_tools import PDFSearchTool\n",
" File \"/var/lang/lib/python3.12/site-packages/crewai_tools/__init__.py\", line 9, in <module>\n from .tools import (\n",
" File \"/var/lang/lib/python3.12/site-packages/crewai_tools/tools/__init__.py\", line 70, in <module>\n from .stagehand_tool.stagehand_tool import StagehandTool\n",
" File \"/var/lang/lib/python3.12/site-packages/crewai_tools/tools/stagehand_tool/__init__.py\", line 1, in <module>\n from .stagehand_tool import StagehandTool\n",
" File \"/var/lang/lib/python3.12/site-packages/crewai_tools/tools/stagehand_tool/stagehand_tool.py\", line 90, in <module>\n class StagehandTool(BaseTool):\n",
" File \"/var/lang/lib/python3.12/site-packages/pydantic/_internal/_model_construction.py\", line 237, in __new__\n complete_model_class(\n",
" File \"/var/lang/lib/python3.12/site-packages/pydantic/_internal/_model_construction.py\", line 597, in complete_model_class\n schema = gen_schema.generate_schema(cls)\n",
" File \"/var/lang/lib/python3.12/site-packages/pydantic/_internal/_generate_schema.py\", line 706, in generate_schema\n schema = self._generate_schema_inner(obj)\n",
" File \"/var/lang/lib/python3.12/site-packages/pydantic/_internal/_generate_schema.py\", line 999, in _generate_schema_inner\n return self._model_schema(obj)\n",
" File \"/var/lang/lib/python3.12/site-packages/pydantic/_internal/_generate_schema.py\", line 832, in _model_schema\n {k: self._generate_md_field_schema(k, v, decorators) for k, v in fields.items()},\n",
" File \"/var/lang/lib/python3.12/site-packages/pydantic/_internal/_generate_schema.py\", line 1201, in _generate_md_field_schema\n common_field = self._common_field_schema(name, field_info, decorators)\n",
" File \"/var/lang/lib/python3.12/site-packages/pydantic/_internal/_generate_schema.py\", line 1367, in _common_field_schema\n schema = self._apply_annotations(\n",
" File \"/var/lang/lib/python3.12/site-packages/pydantic/_internal/_generate_schema.py\", line 2279, in _apply_annotations\n schema = get_inner_schema(source_type)\n",
" File \"/var/lang/lib/python3.12/site-packages/pydantic/_internal/_schema_generation_shared.py\", line 83, in __call__\n schema = self._handler(source_type)\n",
" File \"/var/lang/lib/python3.12/site-packages/pydantic/_internal/_generate_schema.py\", line 2261, in inner_handler\n schema = self._generate_schema_inner(obj)\n",
" File \"/var/lang/lib/python3.12/site-packages/pydantic/_internal/_generate_schema.py\", line 1004, in _generate_schema_inner\n return self.match_type(obj)\n",
" File \"/var/lang/lib/python3.12/site-packages/pydantic/_internal/_generate_schema.py\", line 1118, in match_type\n return self._match_generic_type(obj, origin)\n",
" File \"/var/lang/lib/python3.12/site-packages/pydantic/_internal/_generate_schema.py\", line 1141, in _match_generic_type\n return self._union_schema(obj)\n",
" File \"/var/lang/lib/python3.12/site-packages/pydantic/_internal/_generate_schema.py\", line 1429, in _union_schema\n choices.append(self.generate_schema(arg))\n",
" File \"/var/lang/lib/python3.12/site-packages/pydantic/_internal/_generate_schema.py\", line 706, in generate_schema\n schema = self._generate_schema_inner(obj)\n",
" File \"/var/lang/lib/python3.12/site-packages/pydantic/_internal/_generate_schema.py\", line 1004, in _generate_schema_inner\n return self.match_type(obj)\n",
" File \"/var/lang/lib/python3.12/site-packages/pydantic/_internal/_generate_schema.py\", line 1122, in match_type\n return self._unknown_type_schema(obj)\n",
" File \"/var/lang/lib/python3.12/site-packages/pydantic/_internal/_generate_schema.py\", line 634, in _unknown_type_schema\n raise PydanticSchemaGenerationError(\n"
]
}
</code></pre>
<p>Can anyone please help me to resolve the issue?</p>
|
<python><amazon-web-services><aws-lambda><fastapi><crewai>
|
2025-05-23 09:41:26
| 0
| 619
|
Prifulnath
|
79,635,201
| 12,415,855
|
Using Proxies with Selenium / add_to_capabilities error?
|
<p>I have been using Selenium with proxies for a long time with the following code:</p>
<pre><code>import os
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.common.proxy import Proxy, ProxyType
from selenium.webdriver.support.ui import WebDriverWait
print(f"Checking Browser driver...")
os.environ['WDM_LOG'] = '0'
options = Options()
options.add_argument("start-maximized")
options.add_experimental_option("prefs", {"profile.default_content_setting_values.notifications": 1})
options.add_experimental_option("excludeSwitches", ["enable-automation"])
options.add_experimental_option('excludeSwitches', ['enable-logging'])
options.add_experimental_option('useAutomationExtension', False)
options.add_argument('--disable-blink-features=AutomationControlled')
srv=Service()
HOSTNAME = "196.247.168.34"
PORT = "12345"
prox = Proxy()
prox.proxy_type = ProxyType.MANUAL
prox.http_proxy = prox.ssl_proxy = f"{HOSTNAME}:{PORT}"
capabilities = webdriver.DesiredCapabilities.CHROME
prox.add_to_capabilities(capabilities)
driver = webdriver.Chrome (service=srv, options=options, desired_capabilities=capabilities)
waitWD = WebDriverWait (driver, 5)
driver.get ("https://whatismyipaddress.com/")
input("Press!")#
</code></pre>
<p>But now I get this error when running the code:</p>
<pre><code>(selenium) C:\DEVNEU\Python-Diverses\Selenium>python test.py
Checking Browser driver...
Traceback (most recent call last):
File "C:\DEVNEU\Python-Diverses\Selenium\test.py", line 25, in <module>
prox.add_to_capabilities(capabilities)
^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'Proxy' object has no attribute 'add_to_capabilities'. Did you mean: 'to_capabilities'?
</code></pre>
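<p>For context, <code>Proxy.add_to_capabilities</code> appears to have been removed in recent Selenium 4 releases; the <code>Proxy</code> object is attached through the options instead. A minimal sketch (this only addresses the attribute error, not proxy authentication):</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.proxy import Proxy, ProxyType

prox = Proxy()
prox.proxy_type = ProxyType.MANUAL
prox.http_proxy = prox.ssl_proxy = "196.247.168.34:12345"

options = Options()
options.proxy = prox  # replaces prox.add_to_capabilities(capabilities)

driver = webdriver.Chrome(options=options)
</code></pre>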
<p>It's an authenticated proxy with a username and password, so I can't simply use this in the options:</p>
<pre><code>options.add_argument(f"--proxy-server={proxy}")
</code></pre>
<p>How can I use my authenticated proxy with Selenium without using any additional libraries like seleniumbase or seleniumwire?</p>
|
<python><selenium-webdriver>
|
2025-05-23 09:11:33
| 0
| 1,515
|
Rapid1898
|
79,635,098
| 13,392,257
|
undetected_chromedriver error: Too many open files
|
<p>I am running this Selenium script on my Linux server. After 10 hours I see the error:</p>
<blockquote>
<p>Too many open files</p>
</blockquote>
<p>My code:</p>
<pre><code>logging.basicConfig(
level=logging.INFO,
format="%(asctime)s [%(levelname)s] %(message)s",
handlers=[
logging.FileHandler("my_parser.log"),
logging.StreamHandler()
]
)
logger = logging.getLogger('ParserLogger')
def get_random_chrome_user_agent():
user_agent = UserAgent(browsers='chrome', os='windows', platforms='pc')
return user_agent.random
def get_chromedriver(proxy_data: dict, USE_GUI=False):
chrome_options = uc.ChromeOptions()
if not USE_GUI:
chrome_options.add_argument('--headless')
chrome_options.add_argument('--no-sandbox')
chrome_options.add_argument('--disable-dev-shm-usage')
chrome_options.add_argument('--start-maximized')
chrome_options.add_argument('--disable-blink-features=AutomationControlled')
chrome_options.add_argument("user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36")
manifest_json = """
{
"version": "1.0.0",
"manifest_version": 2,
"name": "Chrome Proxy",
"permissions": [
"proxy",
"tabs",
"unlimitedStorage",
"storage",
"<all_urls>",
"webRequest",
"webRequestBlocking"
],
"background": {
"scripts": ["background.js"]
},
"minimum_chrome_version":"22.0.0"
}
"""
background_js = """
var config = {
mode: "fixed_servers",
rules: {
singleProxy: {
scheme: "http",
host: "%s",
port: parseInt(%s)
},
bypassList: ["localhost"]
}
};
chrome.proxy.settings.set({value: config, scope: "regular"}, function() {});
function callbackFn(details) {
return {
authCredentials: {
username: "%s",
password: "%s"
}
};
}
chrome.webRequest.onAuthRequired.addListener(
callbackFn,
{urls: ["<all_urls>"]},
['blocking']
);
""" % (proxy_data["proxy"], proxy_data["proxy_port"], proxy_data["proxy_login"], proxy_data["proxy_password"])
if proxy_data:
pluginfile = 'proxy_auth_plugin.zip'
with zipfile.ZipFile(pluginfile, 'w') as zp:
zp.writestr("manifest.json", manifest_json)
zp.writestr("background.js", background_js)
zp.close()
chrome_options.add_extension(pluginfile)
driver = uc.Chrome(chrome_options=chrome_options)
# Apply stealth mode for Russian Linux user
stealth(driver,
languages=["ru-RU", "ru", "en-US", "en"],
vendor="Google Inc.",
platform="Linux x86_64",
webgl_vendor="Intel Inc.",
renderer="Intel Open Source Technology Center Mesa DRI Intel(R) UHD Graphics 620 (KBL GT2)",
fix_hairline=True,
)
return driver
class YandexParser():
def __init__(self, proxy_data, USE_GUI=False, SAVE_SCRIN=False):
# Use native headless mode for Chrome
print(f"USE_GUI={USE_GUI} SAVE_SCRIN={SAVE_SCRIN}")
self.SAVE_SCRIN = SAVE_SCRIN
self.proxy_data = proxy_data
self.USE_GUI = USE_GUI
self.driver = None
self.display = None
self._initialize_driver()
def _initialize_driver(self):
if platform.system() == 'Darwin': # macOS
self.chrome_options = uc.ChromeOptions()
self.chrome_options.add_argument('--headless=new')
self.chrome_options.add_argument('--disable-gpu')
self.chrome_options.add_argument('--no-sandbox')
self.chrome_options.add_argument('--disable-dev-shm-usage')
self.driver = get_chromedriver(proxy_data=self.proxy_data, USE_GUI=self.USE_GUI)
else: # Linux
from pyvirtualdisplay import Display
self.display = Display(visible=0, size=(800, 600))
self.display.start()
self.driver = get_chromedriver(proxy_data=self.proxy_data, USE_GUI=self.USE_GUI)
self.driver.execute_cdp_cmd('Page.addScriptToEvaluateOnNewDocument', {
'source': '''
Object.defineProperty(navigator, 'webdriver', {get: () => undefined});
'''
})
self.driver.get("https://ya.ru/")
time.sleep(random.uniform(1.5, 3.5))
def __enter__(self):
return self
def __exit__(self, exc_type, exc_val, exc_tb):
self.close()
def close(self):
try:
if self.driver:
self.driver.quit()
self.driver = None
except Exception as e:
logger.error(f"Error closing driver: {e}")
try:
if self.display:
self.display.stop()
self.display = None
except Exception as e:
logger.error(f"Error stopping display: {e}")
def check_captcha(self):
""" Try to overcome isBot button"""
cur_time = str(datetime.now()).replace(' ', '_')
if "showcaptcha" in self.driver.current_url:
logger.info("Captcha found")
if self.SAVE_SCRIN:
self.driver.save_screenshot(f'screens/img_captcha_{cur_time}.png')
raw_button = self.driver.find_elements(By.XPATH, "//input[@class='CheckboxCaptcha-Button']")
if raw_button:
raw_button[0].click()
logger.info("Button clicked")
time.sleep(15)
if self.SAVE_SCRIN:
self.driver.save_screenshot(f'screens/img_captcha_afterclick_{cur_time}.png')
elif self.SAVE_SCRIN:
self.driver.save_screenshot(f'screens/img_{cur_time}.png')
def parse(self, film_name: str):
logger.info(f"Start parse {film_name}")
result_urls = []
try:
self.driver.get(f"https://ya.ru/search/?text={film_name}&lr=213&search_source=yaru_desktop_common&search_domain=yaru")
self.check_captcha()
for i in range(1, 5):
result_urls.extend(self.parse_page(page_id=i))
self.get_next_page()
self.check_captcha()
# Human-like random delay
time.sleep(random.uniform(2, 5))
except Exception:
logger.error(f"Exception in {traceback.format_exc()}")
finally:
logger.info(f"Found {len(result_urls)} for film {film_name}: {result_urls}")
def parse_page(self, page_id):
res = []
urls_raw = self.driver.find_elements(By.XPATH, value='//a[@class="Link Link_theme_normal OrganicTitle-Link organic__url link"]')
for url_raw in urls_raw:
href = url_raw.get_attribute("href")
if href and "yabs.yandex.ru" not in href:
res.append(href)
logger.info(f"Found {len(res)} urls on page {page_id}")
return res
def get_next_page(self):
next_link_raw = self.driver.find_elements(By.XPATH, '//div[@class="Pager-ListItem Pager-ListItem_type_next"]')
if next_link_raw:
next_link_raw[0].click()
# Human-like random delay
time.sleep(random.uniform(3, 6))
if __name__ == "__main__":
proxies = [
# Proxy datya
]
films = ["Терминатор смотреть", "Саша Таня смотреть", "Джон Уик смотреть онлайн"]
idx = 0
while True:
try:
with YandexParser(USE_GUI=False, proxy_data=proxies[idx % 2]) as parser:
film = films[idx]
idx = (idx + 1) % len(films)
parser.parse(film)
time.sleep(random.uniform(8, 15))
except Exception as e:
logger.error(f"Exception {traceback.format_exc()}")
time.sleep(1) # Add delay after error to prevent rapid retries
</code></pre>
<p>Full trace:</p>
<pre><code>[ERROR] Exception Traceback (most recent call last):
File "/home/myuser/yandex-urls-fetcher/03_selenium_stels.py", line 259, in <module>
with YandexParser(USE_GUI=False, proxy_data=proxies[idx % 2]) as parser:
File "/home/myuser/yandex-urls-fetcher/03_selenium_stels.py", line 139, in __init__
self._initialize_driver()
File "/home/myuser/yandex-urls-fetcher/03_selenium_stels.py", line 151, in _initialize_driver
self.display = Display(visible=0, size=(800, 600))
File "/home/myuser/yandex-urls-fetcher/venv/lib/python3.9/site-packages/pyvirtualdisplay/display.py", line 54, in __init__
self._obj = cls(
File "/home/myuser/yandex-urls-fetcher/venv/lib/python3.9/site-packages/pyvirtualdisplay/xvfb.py", line 44, in __init__
AbstractDisplay.__init__(
File "/home/myuser/yandex-urls-fetcher/venv/lib/python3.9/site-packages/pyvirtualdisplay/abstractdisplay.py", line 85, in __init__
helptext = get_helptext(program)
File "/home/myuser/yandex-urls-fetcher/venv/lib/python3.9/site-packages/pyvirtualdisplay/util.py", line 13, in get_helptext
p = subprocess.Popen(
File "/usr/lib/python3.9/subprocess.py", line 951, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "/usr/lib/python3.9/subprocess.py", line 1722, in _execute_child
errpipe_read, errpipe_write = os.pipe()
OSError: [Errno 24] Too many open files
</code></pre>
<hr />
<p><strong>Update 1</strong> -- I created the <code>Display</code> instance only once, but it didn't help (I still get the same error).</p>
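<p>A small diagnostic sketch that may help locate the leak: log the number of open file descriptors of the parser process on each loop iteration; if it only ever grows, something created per <code>YandexParser</code> instance is not being released. The helper name below is just an illustration:</p>
<pre><code>import os

def count_open_fds() -> int:
    # On Linux, every open descriptor of this process shows up in /proc/self/fd
    return len(os.listdir("/proc/self/fd"))

# e.g. at the end of each while-True iteration:
# logger.info(f"Open file descriptors: {count_open_fds()}")
</code></pre>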
|
<python><selenium-webdriver><undetected-chromedriver>
|
2025-05-23 08:11:46
| 2
| 1,708
|
mascai
|
79,635,073
| 388,506
|
How to compute on load?
|
<p>I have a model like this:</p>
<pre class="lang-py prettyprint-override"><code># Calculated Fields
request = fields.Many2one('job.request', string='Request', required=True)
e_name = fields.Char('Nama Asset', store=False, compute="_compute_initial")
@api.depends('request')
def _compute_initial(self):
for record in self:
r = record.request
record.e_name = r.equipment_id.name
</code></pre>
<p>I need to assign <code>e_name</code> a value taken from its parent (the equipment on the "job.request" record), for display purposes only. The problem is that <code>_compute_initial</code> doesn't get executed: even though I've put the <code>e_name</code> field on the form, the value stays empty.</p>
<p>So, how do I get <code>_compute_initial</code> to run and fill in the <code>e_name</code> field?</p>
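<p>For comparison, a related field is a common alternative for a display-only value pulled from a parent record; whether it fits here depends on the rest of the model, so treat the sketch below as an assumption:</p>
<pre class="lang-py prettyprint-override"><code># Alternative sketch: a related field follows request.equipment_id.name
# automatically, with no compute method needed (display-only, not stored)
e_name = fields.Char('Nama Asset', related='request.equipment_id.name', store=False, readonly=True)
</code></pre>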
|
<python><odoo><odoo-18>
|
2025-05-23 07:55:03
| 1
| 2,157
|
Magician
|
79,634,952
| 1,367,705
|
JS seems not to work in Python / Selenium / Google Chrome - page is not loading
|
<p>I would like to automate checking domains on <a href="https://haveibeensquatted.com/" rel="nofollow noreferrer">https://haveibeensquatted.com/</a>. I wrote a simple script in Python Selenium, but the page seems to have problems loading results under Selenium, whereas it all works normally when I just click and wait in a regular browser. What is going on here? I do not understand why it's happening. Here's the code:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from webdriver_manager.chrome import ChromeDriverManager
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException
import csv
import time
# === CONFIGURATION ===
INPUT_DOMAINS = [
"example.com",
"nonexistent-domain-xyz123.org",
# add more domains here
]
OUTPUT_CSV = "squatted_results.csv"
URL = "https://haveibeensquatted.com/"
# === SET UP SELENIUM WITH CHROME ===
options = webdriver.ChromeOptions()
service = Service(ChromeDriverManager().install())
driver = webdriver.Chrome(service=service, options=options)
wait = WebDriverWait(driver, 20)
results = []
# XPaths
RESULTS_READY_XPATH = "/html/body/main/div[3]/header/div/div/div[2]/div[2]/span[2]"
ITEMS_XPATH = "/html/body/main/div[1]/div[3]/div/div/div/form/following-sibling::div//ul/li"
NO_SQUAT_XPATH = "//div[contains(text(),'No squatted permutations')]"
TABLE_CELL_XPATH = "/html/body/main/div[3]/div/div/div[2]/div/table/tbody/tr/td"
try:
    for domain in INPUT_DOMAINS:
        # Load fresh page for each domain
        driver.get(URL)
        # Enter domain
        input_xpath = "/html/body/main/div[1]/div[3]/div/div/div/form/div[2]/input"
        btn_xpath = "/html/body/main/div[1]/div[3]/div/div/div/form/button"
        input_box = wait.until(EC.element_to_be_clickable((By.XPATH, input_xpath)))
        input_box.clear()
        input_box.send_keys(domain)
        driver.find_element(By.XPATH, btn_xpath).click()
        time.sleep(1000)
finally:
    driver.quit()

# Save results to CSV
with open(OUTPUT_CSV, "w", newline='', encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["domain", "status"])
    writer.writerows(results)
print(f"Done! Results saved to '{OUTPUT_CSV}'")
</code></pre>
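<p>As a side note on the structure above: the script already imports <code>WebDriverWait</code>/<code>expected_conditions</code> and defines <code>RESULTS_READY_XPATH</code>, so the fixed <code>time.sleep(1000)</code> could be replaced by an explicit wait. The snippet below is only a sketch of that idea (the status strings are made up); it does not address why the page itself fails to load under Selenium.</p>
<pre><code># Sketch: wait for the results element instead of sleeping for a fixed time
try:
    wait.until(EC.presence_of_element_located((By.XPATH, RESULTS_READY_XPATH)))
    results.append((domain, "results loaded"))
except TimeoutException:
    results.append((domain, "timed out"))
</code></pre>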
|
<python><selenium-webdriver><selenium-chromedriver>
|
2025-05-23 06:28:35
| 2
| 2,620
|
mazix
|
79,634,585
| 9,576,988
|
Removing diacritics in Python
|
<p>Referencing the book Fluent Python, I've been trying to normalize characters and toss out their diacritics. I am facing an issue where some diacritics, like ´ and ˜, still stick around and combine with the following characters (see image).</p>
<pre class="lang-py prettyprint-override"><code>import unicodedata
import string
def make_ascii(txt):
txt = txt.strip()
# necessary for the loop below
# NFD - normalization form decomposed
txt = unicodedata.normalize("NFKD", txt) # à -> a\u1234
# é -> e à -> a å -> a
latin_base = False
preserve = []
for c in txt:
if unicodedata.combining(c) and latin_base:
continue # ignore diacritic on Latin base char
preserve.append(c)
# if it isn't a combining char, it's a new base char
if not unicodedata.combining(c):
latin_base = c in string.ascii_letters
txt = "".join(preserve)
txt = txt.casefold() # B -> b ß -> ss
return txt
p1 = """´2´µEst陯‚™∑£´®´®´®†¥¨¨¨ˆøπåß∂ƒ©˙∆˚¬…ç√∫˜µεφυρος,÷rocafe\u0301'caféééßß½ààà'…A㊷cafe\N{COMBINING ACUTE ACCENT}4²'"""
print()
print(p1)
p2 = make_ascii(p1)
print(p2)
print()
# ´2´µEst陯‚™∑£´®´®´®†¥¨¨¨ˆøπåß∂ƒ©˙∆˚¬…ç√∫˜µεφυρος,÷rocafé'caféééßß½ààà'…A㊷café4²'
# ́2 ́μestetmæ‚tm∑£ ́® ́® ́®†¥ ̈ ̈ ̈ˆøπass∂ƒ© ̇∆ ̊¬...c√∫ ̃μεφυροσ,÷rocafe'cafeeessss1⁄2aaa'...a42cafe42'
</code></pre>
<p><a href="https://i.sstatic.net/DaG5Fh94.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DaG5Fh94.png" alt="enter image description here" /></a></p>
|
<python><unicode><diacritics><unicode-normalization>
|
2025-05-22 21:57:41
| 0
| 594
|
scrollout
|
79,634,531
| 686,334
|
How can I fix Error EOF occurred in violation of protocol (_ssl.c:2393) with AWS IOT and MQTT
|
<p>I am running Python to send MQTT messages to an AWS IoT server. It runs on a Raspberry Pi and sends a message each time a button is pressed. It works for a while, but eventually a message is sent that is not received by the MQTT Test Client. If I hit the button again and it tries to send a message, I get the error:</p>
<p>ERROR - Error EOF occurred in violation of protocol (_ssl.c:2393) trying to send message:EOF occurred in violation of protocol (_ssl.c:2393)</p>
<pre><code>import os
import sys
import glob
import json
import requests
import threading
import subprocess
from awscrt import mqtt
from awsiot import mqtt_connection_builder
import config
from log import log
cmd = "cat /proc/cpuinfo | grep Serial | cut -d ' ' -f 2 | md5sum | cut -d ' ' -f 1 | tail -c 6"
result = subprocess.check_output(cmd, shell=True)
client_id = result.decode('utf-8').strip('\n')
log.info(f"Serial number: {result} Client ID:{client_id}")
received_count = 0
received_all_event = threading.Event()
# Callback when connection is accidentally lost.
def on_connection_interrupted(connection, error, **kwargs):
print("Connection interrupted. error: {}".format(error))
# Callback when an interrupted connection is re-established.
def on_connection_resumed(connection, return_code, session_present, **kwargs):
print("Connection resumed. return_code: {} session_present: {}".format(return_code, session_present))
if return_code == mqtt.ConnectReturnCode.ACCEPTED and not session_present:
print("Session did not persist. Resubscribing to existing topics...")
resubscribe_future, _ = connection.resubscribe_existing_topics()
# Cannot synchronously wait for resubscribe result because we're on the connection's event-loop thread,
# evaluate result with a callback instead.
resubscribe_future.add_done_callback(on_resubscribe_complete)
def on_resubscribe_complete(resubscribe_future):
resubscribe_results = resubscribe_future.result()
print("Resubscribe results: {}".format(resubscribe_results))
for topic, qos in resubscribe_results['topics']:
if qos is None:
sys.exit("Server rejected resubscribe to topic: {}".format(topic))
# Callback when the subscribed topic receives a message
def on_message_received(topic, payload, dup, qos, retain, **kwargs):
print("Received message from topic '{}': {}".format(topic, payload))
global received_count
received_count += 1
received_all_event.set()
# Callback when the connection successfully connects
def on_connection_success(connection, callback_data):
print("Connection Successful with return code: {} session present: {}". \
format(callback_data.return_code, callback_data.session_present))
# Callback when a connection attempt fails
def on_connection_failure(connection, callback_data):
print("Connection failed with error code: {}".format(callback_data.error))
# Callback when a connection has been disconnected or shutdown successfully
def on_connection_closed(connection, callback_data):
print("Connection closed")
class AWS:
def __init__(self, certificate_id, client_id, preface='product'):
"""
Initialize MQTT
"""
# Create a MQTT connection from the command line data
ca = os.path.join(config.CERT_DIR, 'AmazonRootCA1.pem')
certfile=os.path.join(config.CERT_DIR, f"{certificate_id}-certificate.pem.crt")
keyfile =os.path.join(config.CERT_DIR, f"{certificate_id}-private.pem.key")
self.mqtt_connection = mqtt_connection_builder.mtls_from_path(
endpoint=config.endpoint,
port=8883,
cert_filepath=certfile,
pri_key_filepath=keyfile,
ca_filepath=ca,
on_connection_interrupted=on_connection_interrupted,
on_connection_resumed=on_connection_resumed,
client_id=f'{preface}_{client_id}',
clean_session=False,
keep_alive_secs=30,
on_connection_success=on_connection_success,
on_connection_failure=on_connection_failure,
on_connection_closed=on_connection_closed)
def connect(self,):
self.connect_future = self.mqtt_connection.connect()
self.connect_future.result()
log.info("Connected!")
def subscribe(self, topic, callback):
self.subscriber, packet_id = self.mqtt_connection.subscribe(topic=topic,
qos=mqtt.QoS.AT_LEAST_ONCE,
callback=callback)
subscribe_result = self.subscriber.result()
log.info("Subscribed with {}".format(str(subscribe_result['qos'])))
def publish(self, topic, message):
result = self.mqtt_connection.publish(topic=topic,
payload=message,
qos=mqtt.QoS.AT_LEAST_ONCE)
log.info(f"Published:{message} Topic:{topic} Result:{result}")
log.info(self.mqtt_connection.get_stats())
def disconnect(self):
disconnect_future = self.mqtt_connection.disconnect()
log.info(f"Disconnect:{disconnect_future.result()}")
</code></pre>
<p>I am not sure what's happening or how to fix it. If anyone has any suggestions I'd appreciate it.</p>
|
<python><aws-iot>
|
2025-05-22 21:00:13
| 1
| 534
|
CrabbyPete
|
79,634,492
| 1,477,064
|
Autograph / tf.function produce bad Tensorboard Graph
|
<p>When using TensorBoard with the following code, the graph generated from tf.function is not fully connected.</p>
<p>I backported the code to tensorflow V1, and it shows the expected graph.</p>
<p>Am I doing something wrong with tf.function? Is there any way to better annotate the operations?</p>
<hr />
<p>Unrelated code context:</p>
<p>I'm doing a Monte Carlo simulation. Given N elements, form K pairs of those elements minimizing the difference. I'm brute-forcing it, testing all possible combinations C(N, 2K).</p>
<hr />
<pre><code>@tf.function
def compute_min_of_max_diff(
    samples: tf.Tensor,        # [batch, num_candidates]
    combs: tf.Tensor,          # [num_combinations_chunk, elements]
    original_size: tf.Tensor   # scalar
) -> tf.Tensor:
    sorted_s = tf.sort(samples, axis=1)               # [batch, num_candidates]
    gathered = tf.gather(sorted_s, combs, axis=1)     # [batch, chunk, elements]
    diffs = gathered[..., 1::2] - gathered[..., ::2]  # [batch, chunk, num_pairs]
    max_per_comb = tf.reduce_max(diffs, axis=-1)
    return tf.reduce_min(max_per_comb[..., :original_size], axis=-1)
</code></pre>
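<p>The snippets don't show how the V2 graph was exported; a typical <code>tf.function</code> tracing setup looks like the sketch below, so the tensor names and log directory there are assumptions about the actual logging code:</p>
<pre><code>import tensorflow as tf

writer = tf.summary.create_file_writer("logs/graph")

tf.summary.trace_on(graph=True)
# one traced call with concrete tensors
_ = compute_min_of_max_diff(samples, combs, original_size)
with writer.as_default():
    tf.summary.trace_export(name="compute_min_of_max_diff", step=0)
</code></pre>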
<hr />
<pre><code>import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
samples_ph = tf.placeholder(tf.float32)
combs_ph = tf.placeholder(tf.int32)
original_size_ph = tf.placeholder(tf.int32)
sorted_s = tf.sort(samples_ph, axis=1)
gathered = tf.gather(sorted_s, combs_ph, axis=1)
diffs = gathered[..., 1::2] - gathered[..., ::2]
max_per_comb = tf.reduce_max(diffs, axis=-1)
batch_min = tf.reduce_min(max_per_comb[...,:original_size_ph], axis=-1)
model = batch_min
</code></pre>
<hr />
<p>TensorFlow V2 Graph
<a href="https://i.sstatic.net/2fGqo79M.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2fGqo79M.png" alt="V2 Graph" /></a></p>
<p>Legacy TensorFlow Graph
<a href="https://i.sstatic.net/nujaicFP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nujaicFP.png" alt="V1 Graph" /></a></p>
|
<python><tensorflow><tensorboard><tensorflow-autograph>
|
2025-05-22 20:30:10
| 0
| 4,849
|
xvan
|
79,634,346
| 808,151
|
Preserve line breaks in XML attributes when parsing with lxml
|
<p>I'm trying to batch-process a couple of XML files through a python script, with the XML files having line breaks in some of their attributes like so:</p>
<pre class="lang-xml prettyprint-override"><code><?xml version='1.0' encoding='UTF-8'?>
<xml>
<test value="This is
a test
with line breaks
"/>
</xml>
</code></pre>
<p>However, I noticed that line breaks within attributes are removed when parsing this file. For example, the following script:</p>
<pre class="lang-py prettyprint-override"><code>import lxml.etree as ET
with open("input.xml", "r", encoding="utf-8") as f:
    source = ET.parse(f)
root = source.getroot()
dest = ET.ElementTree(root)
dest.write("output.xml", encoding="utf-8", xml_declaration=True)
</code></pre>
<p>would produce the following output file:</p>
<pre class="lang-xml prettyprint-override"><code><?xml version='1.0' encoding='UTF-8'?>
<xml>
<test value="This is a test with line breaks "/>
</xml>
</code></pre>
<p>While this seems to be in line with W3Cs recommendations <a href="https://stackoverflow.com/a/8188290/808151">as per this related answer</a>, is there a way to use <code>xml.etree</code> or <code>lxml.etree</code> for modifying the XML file without removing those line breaks?</p>
|
<python><lxml><elementtree>
|
2025-05-22 18:35:09
| 1
| 12,702
|
Tim Meyer
|
79,634,313
| 3,357,935
|
How do I delete a password set using Python's keyring library?
|
<p>I used the <code>keyring</code> library in Python to store a password needed to logon to an external service.</p>
<pre><code>import keyring
keyring.set_password("service_name", "username", "my_password")
password = keyring.get_password("service_name", "username")
</code></pre>
<p>I no longer need that password saved and would like to delete it. Is there a function to delete credentials set with the <code>keyring</code> library?</p>
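<p>For reference, a minimal sketch of the deletion counterpart in the same library; <code>keyring.errors.PasswordDeleteError</code> is raised when no matching credential exists:</p>
<pre><code>import keyring
from keyring.errors import PasswordDeleteError

try:
    keyring.delete_password("service_name", "username")
except PasswordDeleteError:
    print("No stored credential for that service/username")
</code></pre>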
|
<python><python-3.x><python-keyring>
|
2025-05-22 18:11:31
| 1
| 27,724
|
Stevoisiak
|
79,634,153
| 10,331,351
|
opening privatekey from string rather than file in python 3?
|
<p>I found some python code to read a private key from a file to connect to Snowflake and it works fine</p>
<pre><code>with open("rsa_key_1.p8", "rb") as key:
p_key= serialization.load_pem_private_key(
key.read(),
password=None,
backend=default_backend()
)
</code></pre>
<p>However, I would like to use the actual string rather than reading from a file, so I tried replacing <em>key.read()</em> with the actual string:</p>
<pre><code>with open("rsa_key_1.p8", "rb") as key:
p_key= serialization.load_pem_private_key(
'MIIE...rZ+x8ZcfhhLj+lyWLhDfA8feg/vYlV+gLDKsZfQLN9JDRMGgffN3EE7vM9WBO/msB8fpo5g==',
password=None,
backend=default_backend()
)
</code></pre>
<p>But this gives me this error</p>
<blockquote>
<p>TypeError: from_buffer() cannot return the address of a unicode object</p>
</blockquote>
<p>Could this be a generic python error?</p>
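<p>A sketch of what the call expects: <code>load_pem_private_key</code> wants <em>bytes</em>, and the PEM text must keep its <code>BEGIN</code>/<code>END</code> framing lines. The truncated key material below is just the placeholder from the snippet above, and the exact header depends on the key format:</p>
<pre><code>pem_str = (
    "-----BEGIN PRIVATE KEY-----\n"
    "MIIE...rZ+x8ZcfhhLj+lyWLhDfA8feg/vYlV+gLDKsZfQLN9JDRMGgffN3EE7vM9WBO/msB8fpo5g==\n"
    "-----END PRIVATE KEY-----\n"
)
p_key = serialization.load_pem_private_key(
    pem_str.encode("utf-8"),  # bytes, not str
    password=None,
    backend=default_backend()
)
</code></pre>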
|
<python><snowflake-cloud-data-platform>
|
2025-05-22 16:17:32
| 1
| 3,795
|
Eric Mamet
|
79,634,056
| 16,389,095
|
How to build a generic user control in Python Flet
|
<p>I'm trying to develop a custom user control in Flet (version==0.28.2). The page contains a button_click event that implements the control:</p>
<pre><code>def button_on_click(e):
    page_content.controls = [Create_New_Db_UI()]
    page.update()
</code></pre>
<p>The user control is defined by the class:</p>
<pre><code>class Create_New_Db_UI(ft.Control):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)

    def _get_control_name(self):
        return "Create_New_Db_UI"

    def build(self):
        return ft.Column(
            [
                ft.Text("Create New Database", size=20, weight=ft.FontWeight.BOLD),
                ft.Divider(),
                ft.Text("Test", size=16,),
            ],
        )
</code></pre>
<p>After running the code, the UI contains a red rectangle with the message "Unknown control: Create_New_Db_UI", so the control is not shown.
In a previous version of Flet (version==0.21.2), I successfully used the class definition:</p>
<pre><code>class Create_New_Db_Ui(ft.UserControl)
</code></pre>
<p>which is unavailable in this version. Of course, I can write</p>
<pre><code>class Create_New_Db_UI(ft.Column):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)

    def build(self):
        self.controls = [
            ft.Text("Create New Database", size=20, weight=ft.FontWeight.BOLD),
            ft.Divider(),
            ft.Text("Test", size=16,),
        ]
        return self
</code></pre>
<p>But how can I define a new generic user custom control? Which is the correct class from which my control must inherit?</p>
|
<python><flutter><flet>
|
2025-05-22 15:13:54
| 0
| 421
|
eljamba
|
79,634,001
| 215,487
|
Why does "pip list" not like broken pipes on the bash command line?
|
<p>I'm using a bash terminal.</p>
<p>I just wanted to peek at the first few names of installed python packages:</p>
<pre><code>pip list | head -n5
</code></pre>
<p>And got an error message:</p>
<pre><code>Package Version
------------------ ----------------
attrs 21.2.0
Automat 20.2.0
Babel 2.8.0
ERROR: Pipe to stdout was broken
Exception ignored in: <_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'>
BrokenPipeError: [Errno 32] Broken pipe
</code></pre>
<p>Is something broken? If I'm going to capture that information programmatically, I definitely don't want an error message included.</p>
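<p>If the goal is programmatic capture, one sketch that avoids the broken pipe entirely is reading <code>pip list --format=json</code> from Python; the flag is one of pip's documented output formats:</p>
<pre><code>import json
import subprocess
import sys

out = subprocess.run(
    [sys.executable, "-m", "pip", "list", "--format=json"],
    capture_output=True, text=True, check=True,
)
packages = json.loads(out.stdout)
print([p["name"] for p in packages[:5]])
</code></pre>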
|
<python><bash><pip><pipe>
|
2025-05-22 14:47:50
| 2
| 11,401
|
Christopher Bottoms
|
79,633,998
| 1,060,420
|
Remove resource from event where caller is not the organizer using Google Calendar API?
|
<p>I'm trying to remove resources (conference rooms) that have declined events. It works as expected for events where the caller is the event organizer, but not if another person is the organizer. The event <em>is</em> modifiable and I can manually remove the room using the Google Calendar web interface.</p>
<p>This is the code using the Python <code>google-api-client</code> (v2.169.0):</p>
<pre class="lang-py prettyprint-override"><code>service = build("calendar", "v3", credentials=<CREDS>)
event = service.events().get(calendarId="primary", eventId=event_id).execute
new_attendees = [
a
for a in event.get("attendees", [])
if not (a.get("resource") and a.get("responseStatus") == "declined")
]
event["attendees"] = new_attendees
service.events().update(calendarId="primary", eventId=event_id, body=event, sendUpdates="none").execute()
</code></pre>
<p>What am I doing wrong here?</p>
|
<python><google-calendar-api><google-api-python-client>
|
2025-05-22 14:46:53
| 1
| 1,773
|
Allan Beaufour
|
79,633,868
| 1,422,096
|
matplotlib.animation.FuncAnimation lagging when resizing the plot window (more than 5 seconds)
|
<p>The code below generates random data, and displays it in realtime with Matplotlib. The sliders allow the user to change the y-axis range. All of this works.</p>
<p><strong>Problem</strong>: when resizing or moving the window, there is a performance problem: the plot lags for about 5 seconds (sometimes it comes back and then lags again), sometimes even closer to 10 seconds (on an average i5 computer).</p>
<p>How to troubleshoot this in <code>matplotlib.animation</code>?</p>
<p><a href="https://i.sstatic.net/2ygHVxM6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2ygHVxM6.png" alt="enter image description here" /></a></p>
<pre><code>import numpy as np, threading, time
import matplotlib.animation as animation, matplotlib.pyplot as plt, matplotlib.widgets as widgets

class Test():
    def generate(self):  # generate random data
        while True:
            self.realtime_y = {i: np.random.rand(100) for i in range(4)}
            self.realtime_m = {i: np.random.rand(100) for i in range(4)}
            time.sleep(0.030)

    def start_gen(self):
        threading.Thread(target=self.generate).start()

    def realtime_visualization(self):
        fig = plt.figure("Test", figsize=(10, 6))
        ax1, ax2, ax3, ax4 = fig.subplots(4)
        ax1.set_ylim(Y_AXIS_RANGE)
        ax2.set_ylim(Y_AXIS_RANGE)
        sliders = [widgets.Slider(ax3, f"Ymin", Y_AXIS_RANGE[0], Y_AXIS_RANGE[1], valinit=Y_AXIS_RANGE[0]),
                   widgets.Slider(ax4, f"Ymax", Y_AXIS_RANGE[0], Y_AXIS_RANGE[1], valinit=Y_AXIS_RANGE[1])]
        sliders[0].on_changed(lambda v: [ax1.set_ylim(bottom=v), ax2.set_ylim(bottom=v)])
        sliders[1].on_changed(lambda v: [ax1.set_ylim(top=v), ax2.set_ylim(top=v)])
        l1 = {}
        l2 = {}
        for i, c in enumerate(["k", "r", "g", "b"]):
            l1[i], *_ = ax1.plot(self.realtime_y[i], color=c)
            l2[i], *_ = ax2.plot(self.realtime_m[i], color=c)

        def func(n):
            for i in range(4):
                l1[i].set_ydata(self.realtime_y[i])
                l2[i].set_ydata(self.realtime_m[i])

        self.realtime_animation = animation.FuncAnimation(fig, func, frames=None, interval=30, blit=False, cache_frame_data=False)
        plt.show()

Y_AXIS_RANGE = [-3, 3]
t = Test()
t.start_gen()
t.realtime_visualization()
</code></pre>
<p>Obviously with <code>interval=500</code> the problem is less visible, but the animation has a lower frame rate, which is not always acceptable (e.g. if we have fast-moving data).</p>
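<p>For reference, a minimal self-contained sketch of a blitting variant I have considered (a toy single-line plot, not the full application above), in case that is relevant to the troubleshooting:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation

fig, ax = plt.subplots()
x = np.arange(100)
(line,) = ax.plot(x, np.random.rand(100))
ax.set_ylim(-3, 3)

def update(_):
    line.set_ydata(np.random.randn(100))
    return (line,)  # blit=True requires returning the changed artists

ani = animation.FuncAnimation(fig, update, interval=30, blit=True, cache_frame_data=False)
plt.show()
</code></pre>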
|
<python><matplotlib><real-time><matplotlib-animation><real-time-data>
|
2025-05-22 13:33:38
| 2
| 47,388
|
Basj
|
79,633,834
| 1,786,137
|
Calling ImageTk.PhotoImage() in a thread causing deadlock
|
<p>I am working on a camera application where I need to get images from the camera buffer and display them live in a Tk label:</p>
<pre class="lang-py prettyprint-override"><code>def display_thread_run(self):
while self.is_acquiring:
image_data = self.camera.get_next_image()
# --- some image conversion methods omitted ---
pil_image = Image.fromarray(image_data)
photo = ImageTk.PhotoImage(image=pil_image)
self.image_frame.after(0, self._create_photo_and_update_gui, photo)
time.sleep(0.01)
</code></pre>
<p>When the application exits, I need to close the camera stream, join the thread and <strong>at the very end</strong>, release camera related resources:</p>
<pre class="lang-py prettyprint-override"><code>def stop_and_cleanup(self):
camera.stop_acquisition()
self.is_acquiring = False
if self.display_thread and self.display_thread.is_alive():
self.display_thread.join()
camera.release()
logger.info("Display resources released")
</code></pre>
<p>However, after I click the close window button (cross button), <code>photo = ImageTk.PhotoImage(image=pil_image)</code> causes deadlock and the thread won't join.</p>
<p>Setting <code>thread.daemon=True</code> and removing <code>display_thread.join()</code> won't work because <code>camera.release()</code> must be called after the thread is properly cleaned.</p>
<p>I'm using:</p>
<ul>
<li>Python 3.10.11</li>
<li>pillow 11.2.1</li>
</ul>
<p>A minimum code example:</p>
<pre class="lang-py prettyprint-override"><code>import tkinter as tk
from PIL import Image, ImageTk
import threading
import time
IMG_PATH = 'test_img.jpg'
class App:
def __init__(self, root):
self.root = root
self.label = tk.Label(root)
self.label.pack()
self.running = True
self.img = Image.open(IMG_PATH)
self.display_thread= threading.Thread(target=self.update_image_loop, daemon=True)
self.display_thread.start()
self.root.protocol("WM_DELETE_WINDOW", self.on_close)
def update_image_loop(self):
while self.running:
tk_img = ImageTk.PhotoImage(image=self.img)
self.label.after(0, self.set_image, tk_img)
time.sleep(0.001)
def set_image(self, tk_img):
self.label.img = tk_img # keep reference
self.label.config(image=tk_img)
def on_close(self):
print("on_close")
self.running = False
self.display_thread.join()
self.root.destroy()
if __name__ == '__main__':
root = tk.Tk()
app = App(root)
root.mainloop()
</code></pre>
|
<python><tkinter><python-imaging-library>
|
2025-05-22 13:13:30
| 1
| 3,863
|
Anthony
|
79,633,804
| 447,426
|
how to run test in databricks default python bundle?
|
<p>I created a databricks bundle "processing" with <code>databricks bundle init</code> using the default python template. I am able to "upload and run" <code>main.py</code>
<a href="https://i.sstatic.net/65OUVaxB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/65OUVaxB.png" alt="enter image description here" /></a></p>
<p>result:</p>
<pre><code>5/22/2025, 2:48:43 PM - Uploading assets to databricks workspace...
5/22/2025, 2:48:45 PM - Creating execution context on cluster 0508-q3425-jqxbhz37 ...
5/22/2025, 2:49:00 PM - Running src/processing/main.py ...
+--------------------+---------------------+-------------+-----------+----------+-----------+
|tpep_pickup_datetime|tpep_dropoff_datetime|trip_distance|fare_amount|pickup_zip|dropoff_zip|
+--------------------+---------------------+-------------+-----------+----------+-----------+
| 2016-02-13 21:47:53| 2016-02-13 21:57:15| 1.4| 8.0| 10103| 10110|
| 2016-02-13 18:29:09| 2016-02-13 18:37:23| 1.31| 7.5| 10023| 10023|
| 2016-02-06 19:40:58| 2016-02-06 19:52:32| 1.8| 9.5| 10001| 10018|
| 2016-02-12 19:06:43| 2016-02-12 19:20:54| 2.3| 11.5| 10044| 10111|
| 2016-02-23 10:27:56| 2016-02-23 10:58:33| 2.6| 18.5| 10199| 10022|
+--------------------+---------------------+-------------+-----------+----------+-----------+
only showing top 5 rows
5/22/2025, 2:49:17 PM - Done (took 33866ms)
</code></pre>
<p>But how do I run the test <code>main_test.py</code> that was generated along with main.py?
If I "upload and run" it, I get</p>
<pre><code>5/22/2025, 2:53:14 PM - Uploading assets to databricks workspace...
5/22/2025, 2:53:16 PM - Creating execution context on cluster 0508-q3425-jqxbhz37 ...
5/22/2025, 2:53:22 PM - Running tests/main_test.py ...
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
File <frozen runpy>:286, in run_path(path_name, init_globals, run_name)
File <frozen runpy>:98, in _run_module_code(code, init_globals, mod_name, mod_spec, pkg_name, script_name)
File <frozen runpy>:88, in _run_code(code, run_globals, init_globals, mod_name, mod_spec, pkg_name, script_name)
File /home/moritz/workspace/processing/tests/main_test.py:1
----> 1 from processing.main import get_taxis, get_spark
3 def test_main():
4 taxis = get_taxis(get_spark())
main_test.py:1
ModuleNotFoundError: No module named 'processing'
5/22/2025, 2:53:26 PM - Done (took 12080ms)
</code></pre>
<p>I did not change any file. I just set up VS Code Databricks Connect and the Databricks CLI (authentication) - this is why main.py executes fine.</p>
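<p>The only generic workaround I can think of is adding the <code>src</code> folder to <code>sys.path</code> at the top of the test file, roughly as in this sketch (assuming the default bundle layout with the package under <code>src/</code>), but that feels like a hack:</p>
<pre class="lang-py prettyprint-override"><code>import sys
from pathlib import Path

# make src/ importable when running tests/main_test.py directly
sys.path.insert(0, str(Path(__file__).resolve().parent.parent / "src"))

from processing.main import get_taxis, get_spark
</code></pre>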
|
<python><visual-studio-code><databricks><azure-databricks>
|
2025-05-22 12:57:29
| 0
| 13,125
|
dermoritz
|
79,633,652
| 3,084,842
|
Fill in elements of large matrix with same value
|
<p>I have a large matrix where I want to assign the same value(s) to many matrix elements. Is there a more efficient way to do this than iterating over each row and column using for loops?</p>
<p>Here is an example:</p>
<pre><code>import numpy as np
a = 0.5
b = 0.6
M = np.zeros((16,16)) # empty matrix
np.fill_diagonal(M, 0.9) # diagonal elements
for jj in range(0,16): # rows
for kk in range(0,16): # columns
if jj == 0:
if (kk == 1) | (kk == 3):
M[jj,kk] = a
if jj == 3:
if (kk == 0) | (kk == 2):
M[jj,kk] = b
if jj == 5:
if (kk == 4) | (kk == 6):
M[jj,kk] = a
# etc
print(M)
</code></pre>
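<p>For illustration, this sketch shows the kind of vectorised assignment I suspect exists, using explicit row/column index arrays (the indices are just the ones from my example above):</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

a = 0.5
b = 0.6
M = np.zeros((16, 16))
np.fill_diagonal(M, 0.9)

# assign the same value to many elements at once via index arrays
rows_a, cols_a = [0, 0, 5, 5], [1, 3, 4, 6]
M[rows_a, cols_a] = a
rows_b, cols_b = [3, 3], [0, 2]
M[rows_b, cols_b] = b
</code></pre>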
|
<python><numpy>
|
2025-05-22 11:34:10
| 2
| 3,997
|
Medulla Oblongata
|
79,633,608
| 11,063,709
|
How to select between using a `jax.lax.scan` vs a `for` loop when using JAX?
|
<p>I am a JAX beginner. Someone experienced with JAX told me that if we have repeated calls to a <code>scan</code>/<code>for</code> loop (e.g. when these are themselves wrapped by another <code>for</code> loop), it might be better to leave the loop as a <code>for</code> instead of converting it to a <code>scan</code>: the <code>for</code> loop is unrolled completely and only pays a one-time, large compilation cost, while the <code>scan</code> is not unrolled by default, so even though its compilation cost is small, the cost of repeatedly running the rolled loop can end up making the <code>scan</code> more expensive than the <code>for</code>. This did not strike me immediately when I started writing my code, but it made sense upon thinking about it.</p>
<p>So, I tested this assumption using code based on the following pseudo-code (the full code is really long and I hope these relevant parts I provide here are easier to understand):</p>
<pre><code>for i in range(num_train_steps): # Currently fixed to be a for loop
for j in range(num_env_steps): # Currently fixed to be a for loop
act()
def act():
for k in range(num_algo_iters): # Currently playing around with making this one either a scan or a for loop
jax.lax.scan(rollout_func) # Currently fixed to be a scan
</code></pre>
<p>The only loop in the above code that I tested switching between <code>scan</code> and <code>for</code> was the <code>k</code> loop. I then varied <code>num_env_steps</code> over 1, 100, 1000 and 10000 to see whether increasing the number of times <code>act()</code> (and thus the <code>k</code> loop) was executed made a difference to the timing. (The testing was done with 5 iterations for the k loop and 2 iterations for the innermost scan, although these are variable in general, if that matters.)
The times taken for <code>act()</code> for the different repeats were 1.5, 11.3, 99.0, 956.2 seconds for the <code>scan</code> version and 5.1, 14.5, 103.6, 972.7 seconds for the <code>for</code> version. So the <code>for</code> version never ended up faster for the number of repeats I tried.</p>
<p>So, now I am wondering if for any number of repeats (i.e. <code>num_env_steps</code>), the unrolling of the <code>for</code> actually makes the program faster than with <code>scan</code>.
My questions:</p>
<ol>
<li>Would maybe increasing the repeats even more by setting <code>num_env_steps</code> to 100k or 1 million make it faster or can we always just replace a <code>for</code> with a <code>scan</code>? I have this question because I wonder if I am trying to over-optimise my code by converting every <code>for</code> to a <code>scan</code>.</li>
<li>If I set <code>unroll = True</code> for the <code>scan</code>, would it <em>then</em> always be fine to replace all <code>for</code>s with <code>scan</code>s and expect speed-ups?</li>
<li>Is there a rule of thumb that can help me decide when to use <code>for</code> and when to use <code>scan</code> if I am only interested in such speed-ups?</li>
</ol>
<p><code>act</code> was jitted by the way.</p>
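<p>To make the comparison concrete, here is a toy sketch of the two loop styles I have in mind (a simple cumulative sum, not the actual rollout code):</p>
<pre class="lang-py prettyprint-override"><code>import jax
import jax.numpy as jnp

@jax.jit
def sum_scan(xs):
    # rolled loop: one compiled step body, executed sequentially
    def step(carry, x):
        new_carry = carry + x
        return new_carry, new_carry
    total, _ = jax.lax.scan(step, jnp.zeros(()), xs)
    return total

@jax.jit
def sum_for(xs):
    # unrolled loop: every iteration is traced into the compiled program
    total = jnp.zeros(())
    for i in range(xs.shape[0]):
        total = total + xs[i]
    return total

xs = jnp.arange(5.0)
print(sum_scan(xs), sum_for(xs))  # both 10.0
</code></pre>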
|
<python><jax>
|
2025-05-22 11:03:11
| 1
| 1,442
|
Warm_Duscher
|
79,633,258
| 29,295,031
|
How to make plotly text bold using scatter?
|
<p>I'm trying to make a graph using the <code>plotly</code> library and I want to make some text <strong>bold</strong>. Here's the code used:</p>
<pre class="lang-py prettyprint-override"><code>import plotly.express as px
import pandas as pd
data = {
"lib_acte":["test 98lop1", "test9665 opp1", "test QSDFR1", "test ABBE1", "testtest21","test23"],
"x":[12.6, 10.8, -1, -15.2, -10.4, 1.6],
"y":[15, 5, 44, -11, -35, -19],
"circle_size":[375, 112.5, 60,210, 202.5, 195],
"color":["green", "green", "green", "red", "red", "red"],
"textfont":["normal", "normal", "normal", "bold", "bold", "bold"],
}
#load data into a DataFrame object:
df = pd.DataFrame(data)
fig = px.scatter(
df,
x="x",
y="y",
color="color",
size='circle_size',
text="lib_acte",
hover_name="lib_acte",
color_discrete_map={"red": "red", "green": "green"},
title="chart"
)
fig.update_traces(textposition='middle right', textfont_size=14, textfont_color='black', textfont_family="Inter", hoverinfo="skip")
newnames = {'red':'red title', 'green': 'green title'}
fig.update_layout(
{
'yaxis': {
"range": [-200, 200],
'zerolinewidth': 2,
"zerolinecolor": "red",
"tick0": -200,
"dtick":45,
},
'xaxis': {
"range": [-200, 200],
'zerolinewidth': 2,
"zerolinecolor": "gray",
"tick0": -200,
"dtick": 45,
# "scaleanchor": 'y'
},
"height": 800,
}
)
fig.add_scatter(
x=[0, 0, -200, -200],
y=[0, 200, 200, 0],
fill="toself",
fillcolor="gray",
zorder=-1,
mode="markers",
marker_color="rgba(0,0,0,0)",
showlegend=False,
hoverinfo="skip"
)
fig.add_scatter(
x=[0, 0, 200, 200],
y=[0, -200, -200, 0],
fill="toself",
fillcolor="yellow",
zorder=-1,
mode="markers",
marker_color="rgba(0,0,0,0)",
showlegend=False,
hoverinfo="skip"
)
fig.update_layout(
paper_bgcolor="#F1F2F6",
)
fig.show()
</code></pre>
<p>and here's the output of the above code:
<a href="https://i.sstatic.net/9BLQjlKN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9BLQjlKN.png" alt="output" /></a></p>
<p>What I'm trying to do is to make <strong>"test ABBE1", "testtest21","test23"</strong> bold on the graph. Could anyone please explain how to do that?</p>
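<p>The only direction I can think of is wrapping the selected labels in HTML bold tags before plotting, as in the sketch below, but I am not sure that is the idiomatic plotly way:</p>
<pre class="lang-py prettyprint-override"><code># build a label column with <b></b> around the entries flagged as bold
df["label"] = [
    f"<b>{t}</b>" if w == "bold" else t
    for t, w in zip(df["lib_acte"], df["textfont"])
]
# then pass text="label" to px.scatter instead of text="lib_acte"
</code></pre>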
|
<python><pandas><dataframe><plotly>
|
2025-05-22 07:18:10
| 3
| 401
|
user29295031
|
79,633,161
| 1,879,101
|
specialbots.UploadRobot().run(): looks like I cannot catch an exception?
|
<p>In <a href="https://gitlab.com/vitaly-zdanevich/upload-to-commons-with-categories-from-iptc/-/blob/master/upload_to_commons_with_categories_from_iptc.py" rel="nofollow noreferrer">my script</a>, I want to catch the case when a user uploads a file to Commons that already exists.</p>
<p>My logs:</p>
<pre><code>Uploading file to commons:commons...
WARNING: API error fileexists-no-change: The upload is an exact duplicate of the current version of [[:File:005a 309 Пацевичи, 21-05-2004.jpg]].
ERROR: Upload error:
Traceback (most recent call last):
File "/home/vitaly/p/upload_to_commons_with_categories_from_iptc/myenv/lib/python3.12/site-packages/pywikibot/specialbots/_upload.py", line 422, in upload_file
success = imagepage.upload(file_url,
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/vitaly/p/upload_to_commons_with_categories_from_iptc/myenv/lib/python3.12/site-packages/pywikibot/page/_filepage.py", line 325, in upload
return self.site.upload(self, source_filename=filename, source_url=url,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/vitaly/p/upload_to_commons_with_categories_from_iptc/myenv/lib/python3.12/site-packages/pywikibot/site/_decorators.py", line 93, in callee
return fn(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/vitaly/p/upload_to_commons_with_categories_from_iptc/myenv/lib/python3.12/site-packages/pywikibot/site/_apisite.py", line 3029, in upload
return Uploader(self, filepage, **kwargs).upload()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/vitaly/p/upload_to_commons_with_categories_from_iptc/myenv/lib/python3.12/site-packages/pywikibot/site/_upload.py", line 134, in upload
return self._upload(self.ignore_warnings, self.report_success)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/vitaly/p/upload_to_commons_with_categories_from_iptc/myenv/lib/python3.12/site-packages/pywikibot/site/_upload.py", line 435, in _upload
return self.submit(final_request, result, data.get('result'),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/vitaly/p/upload_to_commons_with_categories_from_iptc/myenv/lib/python3.12/site-packages/pywikibot/site/_upload.py", line 455, in submit
result = request.submit()['upload']
^^^^^^^^^^^^^^^^
File "/home/vitaly/p/upload_to_commons_with_categories_from_iptc/myenv/lib/python3.12/site-packages/pywikibot/data/api/_requests.py", line 1144, in submit
raise pywikibot.exceptions.APIError(**error)
pywikibot.exceptions.APIError: fileexists-no-change: The upload is an exact duplicate of the current version of [[:File:005a 309 Пацевичи, 21-05-2004.jpg]].
[stasherrors: [{'message': 'uploadstash-exception', 'params': ['UploadStashBadPathException', "Path doesn't exist."], 'code': 'uploadstash-exception', 'type': 'error'}];
servedby: mw-api-ext.eqiad.main-64bb4cbfc5-c2h2c;
help: See https://commons.wikimedia.org/w/api.php for API usage. Subscribe to the mediawiki-api-announce mailing list at &lt;https://lists.wikimedia.org/postorius/lists/mediawiki-api-announce.lists.wikimedia.org/&gt
; for notice of API deprecations and breaking changes.]
1 read operation
Script terminated successfully.
</code></pre>
<p>I tried <code>try/except</code>:</p>
<pre><code>bot = UploadRobot(
sys.argv[1],
description=description,
keep_filename=True,
ignore_warning=True,
target_site=pywikibot.Site('commons', 'commons'),
summary=summary,
always=True
)
try:
bot.run()
# except pywikibot.exceptions.APIError as e:
# sys.exit('ERROR from Wikimedia API')
except Exception as e:
print('xxxxxxxxxxxxxx')
</code></pre>
<p>but the exception is not caught.</p>
<p>Also tried <code>post_processor</code>:</p>
<pre><code>def post(file_url, filename):
if filename is None:
# upload failed (duplicate or other error)
print(f"⚠️ Skipped {file_url!r}: already exists or error")
else:
print(f"✅ Uploaded as {filename!r}")
bot = UploadRobot(
sys.argv[1],
description=description,
keep_filename=True,
ignore_warning=True,
target_site=pywikibot.Site('commons', 'commons'),
summary=summary,
always=True,
post_processor=post
)
</code></pre>
<p>but nothing is printed from my <code>post()</code> either.</p>
<p>Also I tried</p>
<pre><code>def global_exception_handler(exctype, value, traceback):
print('xxxxxxxxx')
sys.excepthook = global_exception_handler
threading.excepthook = global_exception_handler
</code></pre>
<p>Related documentation <a href="https://doc.wikimedia.org/pywikibot/master/api_ref/pywikibot.specialbots.html#specialbots.UploadRobot.run" rel="nofollow noreferrer">https://doc.wikimedia.org/pywikibot/master/api_ref/pywikibot.specialbots.html#specialbots.UploadRobot.run</a></p>
|
<python><pywikibot>
|
2025-05-22 06:01:00
| 1
| 15,411
|
Vitaly Zdanevich
|
79,633,071
| 4,755,229
|
How do I color 3d triangular mesh using another scalar in matplotlib?
|
<p>I have 3d vertex position data <code>(x,y,z)</code>, scalar values at each vertex <code>v</code>, and vertex indices for each triangle <code>tri</code>.</p>
<p>If I were to draw the surface geometry itself, I could use <code>plot_trisurf</code>. If I were to draw a 2D projected plot with each triangle colored according to the vertex scalar, then I could use <code>tripcolor</code> with one of the coordinates omitted. However, it seems oddly unclear how to draw the 3D surface with each face colored according to the vertex scalar rather than the z coordinate value.</p>
<p>How do I do this? If it is not within matplotlib's ability, is there a python tool that can do this?</p>
|
<python><matplotlib><matplotlib-3d>
|
2025-05-22 04:15:30
| 1
| 498
|
Hojin Cho
|
79,633,000
| 270,043
|
Number of completed tasks exceeds total tasks in PySpark
|
<p>I have the following (simplified) code running in PySpark.</p>
<p><code>df</code> is a PySpark dataframe with 2B rows. I keep hitting resource issues when running such a large dataframe, so I'm trying to get a subset of the dataframe to work on.</p>
<pre><code>df_10M = df.limit(10000000)
df_a = df_10M.withColumn(...).withColumn(...).withColumn(...).withColumn(...)
df_a = df_a.drop("oldColA", "oldColB")
df_a.show(10)
</code></pre>
<p>When I run this on Jupyter Notebook, I eventually see</p>
<p><code>[Stage 4:====================(1572 + 12 / 1122)]</code></p>
<p><code>1122</code> should be the total number of tasks, <code>1572</code> should be the number of completed tasks, and <code>12</code> is the number of running tasks. If I leave the code running, eventually, I hit the <code>Pod ephemeral local storage usage</code> limit.</p>
<p>My question is how can the number of completed tasks be larger than the total number of tasks?</p>
|
<python><pyspark>
|
2025-05-22 02:37:30
| 1
| 15,187
|
Rayne
|
79,632,792
| 5,450,919
|
Error Connecting to Oracle Data Warehouse DW2_DEV: Unsupported Password Verifier Type in Python-oracledb Thin Mode
|
<h3>Description:</h3>
<p>I am attempting to connect to the DW2_DEV database within our Oracle data warehouse using Python 3.11 and the oracledb library version 3.1.1. While I can successfully connect to the DW3_UAT database using a different set of credentials, I encounter an error when trying to connect to DW2_DEV.</p>
<p>The connection to DW3_UAT works flawlessly, and I am able to run a query that lists the total number of tables:</p>
<pre><code>Connected to Oracle Database DW3_UAT!
1861 Tables Found
</code></pre>
<p>However, when attempting to connect to DW2_DEV, I receive the following error:</p>
<pre><code>Error connecting to database: DPY-3015: password verifier type 0x939 is not supported by python-oracledb in thin mode
</code></pre>
<p>Here's what I've tried so far:</p>
<ol>
<li>Consulted with our database administrators to ensure that the username, password, port, host name, and SID are all correct for DW2_DEV.</li>
<li>Verified that there are no firewall issues and confirmed connectivity via Telnet.</li>
<li>Attempted to force a thick mode connection, but have not been successful.</li>
</ol>
<p>Below is my code for connecting to the database:</p>
<pre class="lang-py prettyprint-override"><code>import oracledb
import getpass
import json
import os
def credentials():
filename = 'creds.json'
with open(filename, 'r') as file:
data = json.load(file)
credentials = {}
for key, value in data.items():
credential = {
'username': value['username'],
'password': value['password'],
'sid': value['sid'],
'hostname': value['hostname'],
'port': value['port']
}
credentials[value['sid']] = credential
return credentials
# Main function
def main(sid):
# Load credentials
creds = credentials()
if not creds:
print("Failed to load credentials. Exiting.")
return
# Select connection
selected_conn = creds[sid]
# Get connection details
username = selected_conn["username"]
hostname = selected_conn["hostname"]
port = int(selected_conn["port"])
password = selected_conn["password"]
try:
# Create a DSN (Data Source Name)
dsn = oracledb.makedsn(hostname, port, sid=sid)
# Create a connection
connection = oracledb.connect(user=username, password=password, dsn=dsn)
print(f"Connected to Oracle Database {sid}!")
# Your database operations here
cursor = connection.cursor()
# Execute query to list tables
cursor.execute("SELECT table_name FROM all_tables")
# Fetch all results
tables = cursor.fetchall()
# Print the tables
print(f"{len(tables)} Tables Found")
cursor.close()
connection.close()
except Exception as e:
print(f"Error connecting to database: {e}")
if __name__ == "__main__":
main('DW2_DEV')
</code></pre>
<p>I've been unable to resolve this error and would greatly appreciate any suggestions or insights from the community on how to address this issue and successfully connect to the DW2_DEV database.</p>
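<p>For completeness, my attempt at forcing thick mode was roughly the following sketch (the Instant Client path here is a placeholder, not my real one):</p>
<pre class="lang-py prettyprint-override"><code>import oracledb

# switch python-oracledb to thick mode; lib_dir points at a (hypothetical) Instant Client install
oracledb.init_oracle_client(lib_dir=r"C:\oracle\instantclient_21_13")
</code></pre>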
|
<python><oracle-database><python-oracledb>
|
2025-05-21 21:35:56
| 0
| 629
|
b8con
|
79,632,701
| 2,514,130
|
How to save xls file in pandas 2.0+
|
<p>Is there still a way to save an xls file in modern pandas (i.e. since <a href="https://github.com/pandas-dev/pandas/pull/49296" rel="nofollow noreferrer">they dropped xlwt</a>)? For example, I can save an xlsx file like so:</p>
<pre><code>import pandas as pd
def create_excel_files():
# Create first sheet data
df1 = pd.DataFrame(
{
"A": ["This is the first sentence.", "This is the second sentence."],
"B": ["This is a third sentence.", "This is a fourth sentence."],
}
)
# Create second sheet data
df2 = pd.DataFrame(
{
"A": ["This is from sheet 2.", "Another sentence from sheet 2."],
"B": ["More content here.", "Final sentence from sheet 2."],
}
)
# Create XLSX file
with pd.ExcelWriter("example.xlsx") as writer:
df1.to_excel(writer, sheet_name="Sheet1", index=False)
df2.to_excel(writer, sheet_name="Sheet2", index=False)
print("Created example.xlsx")
</code></pre>
<p>but if I try to do this:</p>
<pre><code>with pd.ExcelWriter("example.xls", engine='xlwt') as writer:
df1.to_excel(writer, sheet_name="Sheet1", index=False)
df2.to_excel(writer, sheet_name="Sheet2", index=False)
print("Created example.xls")
</code></pre>
<p>I get this error: <code>ValueError: No engine for filetype: 'xls'</code></p>
<p>If I try this:</p>
<pre><code>with pd.ExcelWriter("example.xls") as writer:
df1.to_excel(writer, sheet_name="Sheet1", index=False)
df2.to_excel(writer, sheet_name="Sheet2", index=False)
print("Created example.xls")
</code></pre>
<p>I get this error: <code>ValueError: No engine for filetype: 'xls'</code></p>
|
<python><pandas><xls>
|
2025-05-21 20:09:46
| 2
| 5,573
|
jss367
|
79,632,535
| 11,657,578
|
python core module to create 1 log file for each call
|
<p>I have the below code for logging in a core_replication.py file.
This file is called by multiple Python scripts (say table1.py, table2.py), each of which replicates a single table from source to target.
core_replication.py takes care of creating a unique log file for each call. Each call can be a direct call to core_replication.py or an import of core_replication.py into table1.py, table2.py, etc. Many of these calls can happen in parallel.</p>
<p>How do I make sure I create log files named core_replication_pid_timestamp.log, core_replication_table1_pid_timestamp.log, etc.?</p>
<p>I am concerned that parallel calls can cause a log message to be written to the wrong log file.</p>
<pre><code>import logging
import os
import sys
import time
from pathlib import Path
# ----------------------------------------------------------------------------------------------------
# Log Function
# ----------------------------------------------------------------------------------------------------
# define module-level logger (aka child logger)
logger = logging.getLogger(__name__)
project_name= 'tsmr' #Adjust as needed. Give a project name to uniquely identify tables.
script_start_time= time.strftime('%Y%m%d_%H%M%S',time.localtime())
script_pid= os.getpid() #pid of caller is probably pid for core also, unless called as subprocess.
def logger_setup_fn():
# check caller script basename
this_script_basename= Path(__file__).stem
this_script_dir= Path(__file__).parent
try:
caller_script_basename= Path(sys.argv[0]).stem
except:
caller_script_basename= Path(__file__).stem
# create logs dir if not exists, based on this script location.
if this_script_dir.name=='modules':
log_file_dir= this_script_dir.parent.joinpath('logs')
else:
log_file_dir= this_script_dir.joinpath('logs')
log_file_dir.mkdir(exist_ok=True)
# construct log file full name
if caller_script_basename != this_script_basename:
log_file_basename= f'{this_script_basename}_{caller_script_basename}_{project_name}_{script_pid}_{script_start_time}.log'
else:
log_file_basename= f'{this_script_basename}_{project_name}_{script_pid}_{script_start_time}.log'
log_file_fullname= log_file_dir.joinpath(log_file_basename)
# define root logger (once) (aka parent logger)
if not logging.getLogger().hasHandlers():
logging.basicConfig(
filename= log_file_fullname,
filemode= 'a',
level= logging.INFO, #CRITICAL, ERROR, WARNING, INFO, DEBUG
format='%(asctime)s.%(msecs)03d/%(name)s/%(levelname)s//%(message)s',
datefmt='%Y%m%d_%H%M%S'
)
# redirect print results to logger
class StreamToLoggerCl: #followed suffix Cl for classes
        def write(self,message): #standard interface. do not suffix _mt for this method
if message.strip():
logger.info(message.strip())
        def flush(self): pass #standard interface. do not suffix _mt for this method
sys.stdout= StreamToLoggerCl()
sys.stderr= StreamToLoggerCl()
# notify log file name to stdout once (exception for redirection class above)
sys.__stdout__.write(f'PFB log file: {log_file_fullname.resolve()}')
# check this script depth
this_script_depth= len(this_script_dir.resolve().parts) -1
min_depth= 3
if this_script_depth < min_depth:
        print(f'ERROR! core script is located too close to root directory. Maintain a minimum depth of {min_depth} subdirectories from root.')
sys.exit(1)
else: pass
# Call logger_setup_fn() at import
logger_setup_fn()
#...other code including main() etc.
</code></pre>
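<p>For reference, one direction I was considering is giving each call its own handler instead of configuring the shared root logger, roughly as in this sketch (I am not sure it fully removes the risk):</p>
<pre class="lang-py prettyprint-override"><code>import logging

def make_call_logger(log_file_fullname):
    # one logger + one FileHandler per call, instead of the shared root logger
    call_logger = logging.getLogger(f'core_replication.{log_file_fullname}')
    handler = logging.FileHandler(log_file_fullname, mode='a')
    handler.setFormatter(logging.Formatter(
        '%(asctime)s.%(msecs)03d/%(name)s/%(levelname)s//%(message)s',
        datefmt='%Y%m%d_%H%M%S'))
    call_logger.addHandler(handler)
    call_logger.setLevel(logging.INFO)
    call_logger.propagate = False  # keep messages out of other files
    return call_logger
</code></pre>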
|
<python>
|
2025-05-21 17:59:32
| 0
| 325
|
Srinivasarao Kotipatruni
|
79,632,404
| 1,204,527
|
Django annotate with ExtractMonth and ExtractYear doesn't extract year
|
<p>I have this model:</p>
<pre class="lang-py prettyprint-override"><code>class KeyAccessLog(models.Model):
key = models.ForeignKey(
Key, related_name="access_logs", on_delete=models.CASCADE
)
path = models.CharField(max_length=255)
method = models.CharField(max_length=10)
ip_address = models.GenericIPAddressField()
    created = models.DateTimeField(auto_now_add=True)
class Meta:
verbose_name = "Key Access Log"
verbose_name_plural = "Key Access Logs"
ordering = ("-pk",)
        indexes = [
            models.Index(
                "key",
                ExtractMonth("created"),
                ExtractYear("created"),
                name="key_month_year_idx",
            ),
        ]
def __str__(self):
return f"{self.key} - {self.path} - {self.method}"
</code></pre>
<p>My point is to use the declared index when filtering by <code>key</code>, <code>month</code>, and <code>year</code>, but the query generated by the ORM does not extract the year the way it does the month.</p>
<pre class="lang-py prettyprint-override"><code>dt_now = timezone.now()
qs = key.access_logs.filter(created__month=dt_now.month, created__year=dt_now.year)
print(qs.query)
</code></pre>
<p>gives me:</p>
<pre class="lang-sql prettyprint-override"><code>SELECT
"public_keyaccesslog"."id",
"public_keyaccesslog"."key_id",
"public_keyaccesslog"."path",
"public_keyaccesslog"."method",
"public_keyaccesslog"."ip_address",
"public_keyaccesslog"."created"
FROM
"public_keyaccesslog"
WHERE
(
"public_keyaccesslog"."key_id" = 1
AND EXTRACT(
MONTH
FROM
"public_keyaccesslog"."created" AT TIME ZONE 'UTC'
) = 5
AND "public_keyaccesslog"."created" BETWEEN '2025-01-01 00:00:00+00:00' AND '2025-12-31 23:59:59.999999+00:00'
)
ORDER BY
"public_keyaccesslog"."id" DESC
</code></pre>
<p>Now I tried explicitly to annotate extracted month and year:</p>
<pre class="lang-py prettyprint-override"><code>dt_now = timezone.now()
qs = key.access_logs.annotate(
month=ExtractMonth("created"),
year=ExtractYear("created")
).filter(month=dt_now.month, year=dt_now.year)
print(qs.query)
</code></pre>
<p>this gives me almost the same SQL:</p>
<pre class="lang-sql prettyprint-override"><code>SELECT
"public_keyaccesslog"."id",
"public_keyaccesslog"."key_id",
"public_keyaccesslog"."path",
"public_keyaccesslog"."method",
"public_keyaccesslog"."ip_address",
"public_keyaccesslog"."created",
EXTRACT(
MONTH
FROM
"public_keyaccesslog"."created" AT TIME ZONE 'UTC'
) AS "month",
EXTRACT(
YEAR
FROM
"public_keyaccesslog"."created" AT TIME ZONE 'UTC'
) AS "year"
FROM
"public_keyaccesslog"
WHERE
(
"public_keyaccesslog"."key_id" = 1
AND EXTRACT(
MONTH
FROM
"public_keyaccesslog"."created" AT TIME ZONE 'UTC'
) = 5
AND "public_keyaccesslog"."created" BETWEEN '2025-01-01 00:00:00+00:00' AND '2025-12-31 23:59:59.999999+00:00'
)
ORDER BY
"public_keyaccesslog"."id" DESC
</code></pre>
<p>The year is never filtered by the extracted value the way the month is, so my index doesn't make any sense.</p>
<p>I would expect it to be:</p>
<pre class="lang-sql prettyprint-override"><code>SELECT
"public_keyaccesslog"."id",
"public_keyaccesslog"."key_id",
"public_keyaccesslog"."path",
"public_keyaccesslog"."method",
"public_keyaccesslog"."ip_address",
"public_keyaccesslog"."created",
EXTRACT(
MONTH
FROM
"public_keyaccesslog"."created" AT TIME ZONE 'UTC'
) AS "month",
EXTRACT(
YEAR
FROM
"public_keyaccesslog"."created" AT TIME ZONE 'UTC'
) AS "year"
FROM
"public_keyaccesslog"
WHERE
(
"public_keyaccesslog"."key_id" = 1
AND EXTRACT(
MONTH
FROM
"public_keyaccesslog"."created" AT TIME ZONE 'UTC'
) = 5
AND EXTRACT(
YEAR
FROM
"public_keyaccesslog"."created" AT TIME ZONE 'UTC'
) = 2025
)
ORDER BY
"public_keyaccesslog"."id" DESC
</code></pre>
<p>Any solution for this?</p>
|
<python><django><django-orm>
|
2025-05-21 16:28:23
| 1
| 4,381
|
Mirza Delic
|
79,632,374
| 2,514,130
|
How to get full, raw, serialized prompt that is sent to LLM using Instructor
|
<p>My question is specifically for use with the Python <a href="https://python.useinstructor.com/" rel="nofollow noreferrer">Instructor</a> library. Is there a way to get the full, raw, serialized prompt that is sent to the LLM? I saw that this has come up before in <a href="https://github.com/567-labs/instructor/issues/930" rel="nofollow noreferrer">this issue</a>. The response says to use hooks, but I can't get the JSON object from them. Here's my code:</p>
<pre><code>import pprint
import instructor
from google import genai
from pydantic import BaseModel, Field
genai_client = genai.Client(location="us-central1", project=my_project)
client = instructor.from_genai(genai_client, mode=instructor.Mode.GENAI_STRUCTURED_OUTPUTS)
def log_completion_kwargs(*args, **kwargs):
pprint.pprint({"args": args, "kwargs": kwargs})
client.on("completion:kwargs", log_completion_kwargs)
class SimpleReview(BaseModel):
review: str = Field(..., description="The review text")
sentiment: str = Field(..., description="The sentiment of the review")
score: float = Field(..., ge=0, le=1, description="The confidence score of the sentiment")
prompt = """\
Here is a product review:
"I bought this phone yesterday and the battery life is phenomenal, \
but the camera is mediocre at best."
Return a JSON object that matches the provided schema.
"""
resp = client.chat.completions.create(
model="gemini-2.5-pro-preview-05-06",
messages=[{"role": "user", "content": prompt}],
response_model=SimpleReview,
)
print(resp)
</code></pre>
<p>I get this from the hook:</p>
<pre><code>{'args': (),
'kwargs': {'config': GenerateContentConfig(http_options=None, system_instruction='', temperature=None, top_p=None, top_k=None, candidate_count=None, max_output_tokens=None, stop_sequences=None, response_logprobs=None, logprobs=None, presence_penalty=None, frequency_penalty=None, seed=None, response_mime_type='application/json', response_schema=<class '__main__.SimpleReview'>, routing_config=None, model_selection_config=None, safety_settings=None, tools=None, tool_config=None, labels=None, cached_content=None, response_modalities=None, media_resolution=None, speech_config=None, audio_timestamp=None, automatic_function_calling=None, thinking_config=None),
'contents': [Content(parts=[Part(video_metadata=None, thought=None, inline_data=None, code_execution_result=None, executable_code=None, file_data=None, function_call=None, function_response=None, text='Here is a product review:\n\n"I bought this phone yesterday and the battery life is phenomenal, but the camera is mediocre at best."\n\nReturn a JSON object that matches the provided schema.\n')], role='user')],
'model': 'gemini-2.5-pro-preview-05-06'}}
</code></pre>
<p>The problem is, it's still passing a Python class here: <code>__main__.SimpleReview</code>. I want to see what it looks like when it arrives at the LLM. Is there a way to do this?</p>
|
<python><pydantic><large-language-model><google-gemini>
|
2025-05-21 16:12:15
| 0
| 5,573
|
jss367
|
79,632,332
| 11,188,140
|
Efficient way to replace list elements with COMPUTED values
|
<p>Consider a 2d python list (ie: lst2) and a matrix that gives the value of each adjacent pair of letters (ie: value_matrix).</p>
<pre><code>lst2 = [['acba', 'ca'], ['babab', 'ca', 'bc']]
value_matrix = [['aa', 0], ['ab', 3], ['ac', 5], ['ba', 2], ['bb', 0], ['bc', 9], ['ca', 7], ['cb', 4], ['cc', 0]]
</code></pre>
<p>I want to replace each element of lst2 with an integer derived by summing the values of its adjacent pairs. For example, 'acba' sums the values 5 (from 'ac') + 4 (from 'cb') + 2 (from 'ba') = 11. <strong>I'm sure a dictionary can be used, but I don't know how to incorporate the summing of adjacent pair values.</strong></p>
<p>When completed, the result should be:</p>
<pre><code>new_lst2 = [[11, 7], [10, 7, 9]]
</code></pre>
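<p>For a single string, this is the kind of adjacent-pair lookup I have in mind (a sketch using a dictionary built from value_matrix); what I can't see is how to apply it cleanly across the nested list:</p>
<pre class="lang-py prettyprint-override"><code>values = dict(value_matrix)  # e.g. values['ac'] == 5

word = 'acba'
total = sum(values[p + q] for p, q in zip(word, word[1:]))
print(total)  # 11
</code></pre>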
|
<python><list>
|
2025-05-21 15:42:10
| 1
| 746
|
user109387
|
79,632,223
| 2,074,831
|
Selecting a specific column of a select query
|
<p>I defined a <code>Usr</code> sqlalchemy table:</p>
<pre class="lang-py prettyprint-override"><code>from sqlalchemy import Column, Integer, String, select
from sqlalchemy.orm import declarative_base
Base = declarative_base()
class Usr(Base): # user table
__tablename__ = "usr"
usr_id = Column(Integer, primary_key=True)
usr_name = Column(String)
usr_age = Column(Integer)
</code></pre>
<p>Let's say we have two queries selecting users by their age or by their name:</p>
<pre class="lang-py prettyprint-override"><code>q1 = select(Usr).where(Usr.usr_name.like("%Bob%"))
q2 = select(Usr).where(Usr.usr_age > 12)
</code></pre>
<p>Now I would like to construct a third query returning users that are returned either by <code>q1</code> or by <code>q2</code>.</p>
<p>This obviously doesn't work since <code>q1</code> and <code>q2</code> return a 3-tuple:</p>
<pre class="lang-py prettyprint-override"><code>q3 = select(Usr).where(Usr.usr_id.in_(q1) | Usr.usr_id.in_(q2))
</code></pre>
<p>Basically I would need something that returns only a specific column of a select. But I'm stuck here. Any suggestions?</p>
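<p>For example, I imagine something along the lines of this sketch (selecting only the id column), but I don't know if this is the intended way:</p>
<pre class="lang-py prettyprint-override"><code># select a single column instead of the whole entity
q1_ids = select(Usr.usr_id).where(Usr.usr_name.like("%Bob%"))
q2_ids = select(Usr.usr_id).where(Usr.usr_age > 12)

q3 = select(Usr).where(Usr.usr_id.in_(q1_ids) | Usr.usr_id.in_(q2_ids))
</code></pre>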
|
<python><sqlalchemy>
|
2025-05-21 14:40:19
| 2
| 3,974
|
sevan
|
79,632,156
| 18,744,117
|
How to mark a class as abstract in python (no abstract methods and in a mypy compatible, reusable way)?
|
<p>I'm trying to make it impossible to instantiate a class directly, without it having any unimplemented abstract methods.</p>
<p>Based on other solutions online, a class should have something along the lines of:</p>
<pre class="lang-py prettyprint-override"><code>class Example:
def __new__(cls,*args,**kwargs):
if cls is Example:
raise TypeError("...")
return super().__new__(cls,*args,**kwargs)
</code></pre>
<p>I'm trying to move this snippet to a separate place such that each such class does not have to repeat this code.</p>
<pre class="lang-py prettyprint-override"><code>C = TypeVar("C")
def abstract(cls:Type[C])->Type[C]:
class Abstract(cls):
def __new__(cls, *args:Any, **kwargs:Any)->"Abstract":
if cls is Abstract:
msg = "Abstract class {} cannot be instantiated".format(cls.__name__)
raise TypeError(msg)
            return cast(Abstract, super().__new__(cls, *args, **kwargs))
return Abstract
</code></pre>
<p>This is my attempt, and it might be incorrect.</p>
<p>But <code>mypy</code> complains:</p>
<blockquote>
<p>error: Invalid base class "cls"</p>
</blockquote>
<p>How can I have some reusable way (such as a decorator) to achieve what I want whilst passing <code>mypy --strict</code>?</p>
<p>Context:</p>
<p>This is in the context of a <code>pyside6</code> application where I'm subclassing <code>QEvent</code>, to have some additional extra properties. The base class defining these properties (getters) has a default implementation, yet I would like to prevent it from being initialized directly as it is not (and should not) be registered to the Qt event system. (I have a couple more such classes with different default values for convenience)</p>
|
<python><python-typing><mypy>
|
2025-05-21 14:01:51
| 1
| 683
|
Sam Coutteau
|
79,632,061
| 3,760,519
|
What do I need to do in pydantic to support a FastAPI endpoint involving a sparse matrix?
|
<p>I am aware of apparently similar questions, such as</p>
<ul>
<li><a href="https://stackoverflow.com/questions/76937581/defining-custom-types-in-pydantic-v2">Defining custom types in Pydantic v2</a></li>
<li><a href="https://stackoverflow.com/questions/77100890/pydantic-v2-custom-type-validators-with-info">Pydantic v2 custom type validators with info</a></li>
</ul>
<p>These questions do not cover my specific needs.</p>
<p>I need to make a web API endpoint using the Python package FastAPI. I'd like to do this with pydantic data models. My endpoint needs to communicate a sparse matrix. My core logic uses scipy for some linear algebra. Pydantic does not support scipy types. I do not want to convert to some dense representation, as this is wasteful.</p>
<p>I could convert to some custom sparse representation that makes use of pure base types. But I think it would be nicer to instead implement all the "things" (not sure what word to use here) that pydantic needs to support the sparse matrix directly.</p>
<p>From what I can tell by reading the <a href="https://docs.pydantic.dev/latest/concepts/types/#adding-validation-and-serialization" rel="nofollow noreferrer">documentation</a>, I need to implement a bunch of "things" like validators and serializers. But the documentation does not seem to tell me what is the minimum set of "things" I need to implement in order for this to work with FastAPI and frankly I could do with more examples than the one they give.</p>
<p>I have come up with the following transformer, but I'm not sure if this is even useful for directly supporting a <code>lil_matrix</code>.</p>
<pre><code>from typing import Any

from scipy.sparse import lil_matrix

def sparse_to_coo_dict(mat: lil_matrix) -> dict[str, Any]:
coo = mat.tocoo()
return {
"format": "coo",
"shape": coo.shape,
"row": coo.row.tolist(),
"col": coo.col.tolist(),
"data": coo.data.tolist(),
}
</code></pre>
<p>For sure I could use this to translate my stuff into base types over the wire. But I don't need to worry about the pydantic "things" to do that. So I am assuming there is some fancier way involving specific features of pydantic than just converting to base types.</p>
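<p>As a point of comparison, a plain-base-types model wrapping that dict would presumably look something like this sketch (my own naming, nothing pydantic-specific):</p>
<pre class="lang-py prettyprint-override"><code>from pydantic import BaseModel

class CooMatrix(BaseModel):
    # plain base types mirroring the dict returned by sparse_to_coo_dict
    format: str = "coo"
    shape: tuple[int, int]
    row: list[int]
    col: list[int]
    data: list[float]
</code></pre>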
<p>So my question is: what exactly do I need to do to support a <code>lil_matrix</code> in pydantic to use with FastAPI?</p>
|
<python><scipy><fastapi><pydantic>
|
2025-05-21 13:17:58
| 1
| 2,406
|
Chechy Levas
|
79,632,060
| 11,167,163
|
Why does pct_change() still raise FutureWarning after using .ffill() in pandas?
|
<p>I'm trying to compute percentage changes in a pandas Series, while handling NaN values correctly and avoiding any <code>FutureWarning</code>. However, I'm stuck between two conflicting warnings.</p>
<pre><code>sub_df[f"Δ {col}"] = sub_df[col].ffill().pct_change()
</code></pre>
<p>This seems like the proper way to handle the deprecation of internal fill behavior in pct_change() — I explicitly call .ffill() before pct_change().</p>
<p>Problem is that despite that, I still get this warning:</p>
<blockquote>
<p>FutureWarning: The default fill_method='pad' in Series.pct_change is
deprecated and will be removed in a future version. Call ffill before
calling pct_change to retain current behavior and silence this
warning.</p>
</blockquote>
<p>It seems contradictory, because I already am calling .ffill().</p>
<p>So I tried the alternative:</p>
<pre><code>sub_df[f"Δ {col}"] = sub_df[col].ffill().pct_change(fill_method=None)
</code></pre>
<p>This now raises a different warning:</p>
<blockquote>
<p>FutureWarning: The 'fill_method' and 'limit' keywords in
Series.pct_change are deprecated and will be removed in a future
version. Call ffill before calling pct_change instead.</p>
</blockquote>
<p>Calling <code>.ffill().pct_change()</code> raises a warning that suggests I should call <code>.ffill()</code> — which I already do.</p>
<p>Setting <code>fill_method=None</code> to suppress that warning raises another <code>FutureWarning</code> that fill_method itself is deprecated.</p>
<p>How can I use <code>pct_change()</code> in a way that:</p>
<ol>
<li>handles missing values as before (with forward-fill),</li>
<li>avoids any <code>FutureWarning</code>, and aligns with future pandas behavior?</li>
</ol>
<p>Is this a bug in pandas, or am I misunderstanding something?</p>
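<p>For reference, a manual equivalent I am considering as a fallback (a sketch, assuming a one-period change on the forward-filled series is all I need from <code>pct_change()</code>):</p>
<pre class="lang-py prettyprint-override"><code># forward-fill once, then compute the one-period percentage change by hand
filled = sub_df[col].ffill()
sub_df[f"Δ {col}"] = filled / filled.shift(1) - 1
</code></pre>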
|
<python><pandas><dataframe>
|
2025-05-21 13:17:51
| 1
| 4,464
|
TourEiffel
|
79,631,903
| 10,192,256
|
Reducing number of identical BLAS DLLs in pyinstaller generated distributable
|
<p>I created an executable with <code>pyinstaller</code> and noticed that even after some size reduction tricks (creating a custom environment, using OpenBLAS instead of MKL) the package comes out quite big. When looking into the <code>_internal</code> directory I found that the same DLL has been copied there four times. I used WinMerge to verify that indeed the files are binary identical.</p>
<pre class="lang-bash prettyprint-override"><code>dir /Os
[...]
08-May-25 12:07 7,280,128 python313.dll
08-May-25 12:07 27,951,616 liblapack.dll
08-May-25 12:07 27,951,616 openblas.dll
08-May-25 12:07 27,951,616 libcblas.dll
08-May-25 12:07 27,951,616 libblas.dll
145 File(s) 161,490,356 bytes
</code></pre>
<p>Out of a total of 247MB for the package those libraries make up 106MB.</p>
<p>How can I avoid having the same copy multiple times? Can I tell <code>pyinstaller</code>? Can I avoid it during python package installation in the environment?</p>
<p>Creating symbolic links with <code>mklink</code> is not an option on my Windows installation (<code>You do not have sufficient privilege to perform this operation.</code>), so any solution that targets the root cause would be appreciated.</p>
<p>Digging into the <code>conda</code>-created environment reveals that the multiple copies of the same library are already present in the environment. <code>numpy</code> links against all of them making it impossible to untie the dependency for the existing build.</p>
|
<python><conda><pyinstaller><openblas>
|
2025-05-21 11:47:02
| 1
| 1,908
|
André
|
79,631,902
| 2,829,863
|
Setting font size for text and tables using styles approach with python-docx library
|
<p>I am using the <a href="https://python-docx.readthedocs.io/en/latest/" rel="nofollow noreferrer">python-docx</a> library to create a docx file that contains text and a table right after it. I want to set the text size to 12 points and the table size to 9 points using <strong>styles</strong>. But the size of all the text, including the text in the table cells, is set to 12. I know I can use another <code>loop based</code> approach to solve this problem, but I would like to use styles. Is this possible?</p>
<p>Style approach (it doesn't work):</p>
<p><a href="https://i.sstatic.net/I478jFWk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/I478jFWkm.png" alt="result 1" /></a></p>
<pre class="lang-py prettyprint-override"><code>import random
from docx import Document
from docx.shared import Pt
doc = Document()
normal_style = doc.styles['Normal']
normal_style.font.size = Pt(12) # SETTING FONT SIZE 12 FOR TEXT
paragraph = doc.add_paragraph("This is some introductory text.", style='Normal')
num_rows = 5
num_cols = 4
table = doc.add_table(rows=num_rows, cols=num_cols)
for row in table.rows:
for cell in row.cells:
cell.text = str(random.randint(1, 100))
table.style = 'TableGrid'
table_grid_style = doc.styles['TableGrid']
table_grid_style.font.size = Pt(9) # SETTING FONT SIZE 9 FOR TABLES!
doc.save('random_table_with_intro_text.docx')
</code></pre>
<p>The other, <code>loop-based</code> approach works fine, but I dislike it:</p>
<p><a href="https://i.sstatic.net/MVyRigpB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MVyRigpBm.png" alt="results 2" /></a></p>
<pre><code>import random
from docx import Document
from docx.shared import Pt
doc = Document()
paragraph = doc.add_paragraph("This is some introductory text.")
run = paragraph.runs[0]
run.font.size = Pt(12)
num_rows = 5
num_cols = 4
table = doc.add_table(rows=num_rows, cols=num_cols)
for row in table.rows:
for cell in row.cells:
paragraph = cell.paragraphs[0]
run = paragraph.add_run(str(random.randint(1, 100)))
run.font.size = Pt(9)
table.style = 'TableGrid'
doc.save('random_table_with_intro_text.docx')
</code></pre>
<p>Also, if I remove the code that adds the text above the table, it works (maybe I'm missing something):</p>
<pre class="lang-py prettyprint-override"><code>import random
from docx import Document
from docx.shared import Pt
doc = Document()
num_rows = 5
num_cols = 4
table = doc.add_table(rows=num_rows, cols=num_cols)
for row in table.rows:
for cell in row.cells:
cell.text = str(random.randint(1, 100))
table.style = 'TableGrid'
table_grid_style = doc.styles['TableGrid']
table_grid_style.font.size = Pt(9) # SETTING FONT SIZE 9 FOR TABLES!
doc.save('random_table_with_intro_text.docx')
</code></pre>
|
<python><docx><python-docx>
|
2025-05-21 11:45:56
| 0
| 787
|
Comrade Che
|
79,631,853
| 10,232,932
|
Suppress / Do not show Uploading artifacts widget in databricks
|
<p>I am running a machine learning algorithm in Databricks in a Python notebook. How can I suppress / hide the uploading-artifacts widgets in Databricks? They are costing too much memory and I do not want to show them. I would expect that there is an option so that they are not shown at all.</p>
<p><a href="https://i.sstatic.net/2LAQHHM6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2LAQHHM6.png" alt="Uploading artifacts" /></a></p>
|
<python><databricks>
|
2025-05-21 11:12:48
| 0
| 6,338
|
PV8
|
79,631,743
| 2,819,689
|
How to replace sh? Error: No module named 'fcntl'
|
<p>I am running code on Windows and I have this line</p>
<pre><code>pip = sh.Command(os.path.join(sys.exec_prefix,"bin","pip"))
</code></pre>
<p>got</p>
<pre><code>No module named 'fcntl'
</code></pre>
<p>I tried to replace it this way</p>
<pre><code>def pip(*args):
result = subprocess.run(
[sys.executable, "-m", "pip"] + list(args),
capture_output=True,
text=True
)
if result.returncode != 0:
raise Exception(f"Pip command failed: {result.stderr}")
return result.stdout.strip()
</code></pre>
<p>I got this error</p>
<pre><code> File "C:\Applics\Python 3.9\lib\subprocess.py", line 505, in run
with Popen(*popenargs, **kwargs) as process:
File "C:\Applics\Python 3.9\lib\subprocess.py", line 951, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "C:\Applics\Python 3.9\lib\subprocess.py", line 1360, in _execute_child
args = list2cmdline(args)
File "C:\Applics\Python 3.9\lib\subprocess.py", line 565, in list2cmdline
for arg in map(os.fsdecode, seq):
File "C:\Applics\Python 3.9\lib\os.py", line 822, in fsdecode
filename = fspath(filename)
</code></pre>
<p>when I run</p>
<pre><code>pip("install",tuple("--upgrade"))
</code></pre>
<p>I tried this on Ubuntu and it works fine.</p>
<pre><code>def run_pip(*args):
"""Run pip with the given arguments"""
cmd = [sys.executable, "-m", "pip"] + list(args)
return subprocess.run(cmd, capture_output=True, text=True)
result = run_pip("list")
print(result.stdout)
</code></pre>
<p>It looks like I do not understand the Windows directory structure.</p>
<p>What else could work?</p>
|
<python><windows>
|
2025-05-21 10:00:51
| 1
| 2,874
|
MikiBelavista
|
79,631,370
| 9,393,952
|
Apify Update breaks python debugging capabilities
|
<p>I'm working with apify for web scraping and I recently updated from <code>apify-cli</code> <code>0.21.6</code> to <code>0.21.7</code><br />
I use the python SDK and for debugging I use <code>pdb.set_trace()</code> or <code>breakpoint()</code><br />
After updating apify-cli, the code still stops at the breakpoints but I can't interact with it through the terminal. It shows the <code>(Pdb)</code> console prompt but doesn't register my input. It freezes, rendering the debug process useless. For the moment I solved it by downgrading to <code>0.21.6</code>, but that's not a real solution.
Has anyone else encountered this issue with <code>apify-cli v0.21.7</code> and Python's pdb or <code>breakpoint()</code>?<br />
Are there any known workarounds or configurations that can resolve this terminal freezing issue, short of downgrading <code>apify-cli</code>?</p>
|
<python><web-scraping><apify>
|
2025-05-21 01:19:18
| 0
| 826
|
Cristobal Sarome
|
79,631,318
| 3,427,866
|
How to access model method in enumerate of list in Python?
|
<p>I'm new to python and I've managed to populate a list with instances of my model.</p>
<pre><code># requisition.py
from pydantic import BaseModel
from app.schemas.job_information import JobInformation
class Requisition(BaseModel):
# Base fields (common between internal and external)
index: int = 0
contest_number: int = 0
open_date: str = None
close_date: str = ""
job_information: JobInformation = None
def set_index(self, index):
self.index = index
def set_contest_number(self, contest_number):
self.contest_number = contest_number
def set_open_date(self, open_date):
self.open_date = open_date
def set_close_date(self, close_date):
self.close_date = close_date
def set_job_information(self, job_information):
self.job_information = job_information
</code></pre>
<p>Then in my main.py</p>
<pre><code>@router.get("/")
async def get_sourcing_requests():
requisitions = []
pageindex = 1
pagecount, reqs = get_sourcing_requests_by_page_index(pageindex)
requisitions.append(reqs)
while pageindex < pagecount:
pageindex += 1
pagecount, reqs = get_sourcing_requests_by_page_index(pageindex)
requisitions.append(reqs)
for i, req in enumerate(requisitions):
req.set_index(int(i))
return requisitions
</code></pre>
<p>But I keep getting the error</p>
<blockquote>
<p>AttributeError: 'list' object has no attribute 'set_index'</p>
</blockquote>
<p>EDIT: Adding code for get_sourcing_request_by_page_index</p>
<pre><code>def get_sourcing_requests_by_page_index(pageindex):
url = "https://cp0.taleo.net/enterprise/soap?ServiceName=FindService"
e = "http://www.taleo.com/ws/art750/2006/12"
itk = "http://www.taleo.com/ws/integration/toolkit/2005/07"
soapenv = "http://www.w3.org/2003/05/soap-envelope"
ns = {"e": e, "itk":itk, "soapenv":soapenv}
un = os.environ.get("USERNAME")
pw = os.environ.get("PASSWORD")
filename = "app/files/request-findPartialEntities-SourcingRequest.xml"
pagecount = 1
pageindex
requisitions = []
try:
xml = etree.parse(filename)
root = xml.getroot()
attrs = root.xpath("//soapenv:Body/itk:findPartialEntities/itk:attributes",namespaces=ns)[0]
pientry = etree.Element(f"{{{itk}}}entry")
pikey = etree.Element(f"{{{itk}}}key")
pikey.text = "pageindex"
pival = etree.Element(f"{{{itk}}}value")
pival.text = str(pageindex)
pientry.append(pikey)
pientry.append(pival)
attrs.append(pientry)
req = etree.tostring(root, encoding="unicode", pretty_print=True)
except FileNotFoundError:
print(f"Error: File not found: {filename}")
return 0, None
cl = str(sys.getsizeof(req))
hd = {"Content-Type": "application/xml", "Content-Length": cl}
response = requests.post(url, data=req, headers=hd, auth=(un, pw))
root = etree.fromstring(response.text)
for soap_env in root:
for soap_body in soap_env:
for req_resp in soap_body:
pagecount = int(req_resp.attrib['pageCount'])
for i, entity in enumerate(req_resp, start=0):
contest_number = entity.xpath(
'//*[local-name()="ContestNumber"]', namespaces=ns
)[i].text
last_modified_date = entity.xpath(
"//e:Requisition/e:Requisition/e:JobInformation/e:JobInformation/e:LastModifiedDate",
namespaces=ns,
)[i].text
title = entity.xpath(
"//e:Requisition/e:Requisition/e:JobInformation/e:JobInformation/e:Title/e:value",
namespaces=ns,
)[i].text
description = entity.xpath(
"//e:Requisition/e:Requisition/e:JobInformation/e:JobInformation/e:DescriptionInternalHTML/e:value",
namespaces=ns,
)[i].text
qualification = entity.xpath(
"//e:Requisition/e:Requisition/e:JobInformation/e:JobInformation/e:InternalQualificationHTML/e:value",
namespaces=ns,
)[i].text
open_date = entity.xpath("//e:OpenDate", namespaces=ns)[i].text
if hasattr(entity, "e:CloseDate"):
close_date = entity.xpath("//e:CloseDate", namespaces=ns)[
i
].text
else:
close_date = ""
job_information = JobInformation()
job_information.set_title(title)
job_information.set_last_modified_date(last_modified_date)
# job_information.set_description(description)
# job_information.set_qualification(qualification)
requisition = Requisition()
requisition.set_contest_number(int(contest_number))
requisition.set_open_date(open_date)
requisition.set_close_date(close_date)
requisition.set_job_information(job_information)
requisitions.append(requisition)
return pagecount, requisitions
</code></pre>
<p>I'm guessing req is being treated as a list object instead of a Requisition object, but I'm not really sure.</p>
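<p>For reference, a small sketch of the <code>append</code> vs <code>extend</code> difference I suspect is at play when the item added is itself a list:</p>
<pre class="lang-py prettyprint-override"><code>outer = []
outer.append([1, 2])   # outer == [[1, 2]]  -> the element is a list
outer2 = []
outer2.extend([1, 2])  # outer2 == [1, 2]   -> the elements are the items themselves
</code></pre>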
<p>Appreciate any help.</p>
<p>Using Python 3.13.3.</p>
|
<python><list><pydantic>
|
2025-05-20 23:57:14
| 2
| 1,743
|
ads
|
79,631,317
| 16,100,017
|
How do you change the position (column and row) of a PyQt QWidget inside a QTableWidget?
|
<p>I want to move a QWidget from one position to another. I tried the following:</p>
<pre><code>import sys
from PySide6.QtWidgets import (
QApplication, QMainWindow, QTableWidget, QLabel, QPushButton, QVBoxLayout, QWidget
)
class MainWindow(QMainWindow):
def __init__(self):
super().__init__()
self.table = QTableWidget(5, 2) # 5 rows, 2 columns
self.table.setHorizontalHeaderLabels(["Column 1", "Column 2"])
# Add a QLabel to cell (4, 0)
label = QLabel("Test")
self.table.setCellWidget(4, 0, label)
# Add a button to trigger the move
self.button = QPushButton("Move widget")
self.button.clicked.connect(self.move_widget)
layout = QVBoxLayout()
layout.addWidget(self.table)
layout.addWidget(self.button)
container = QWidget()
container.setLayout(layout)
self.setCentralWidget(container)
def move_widget(self):
"""Move the widget from cell (4,0) to (3,0)."""
widget = self.table.cellWidget(4, 0)
self.table.removeCellWidget(4, 0)
self.table.setCellWidget(3, 0, widget)
widget.show()
# Debug: Verify the move
print(f"Cell (3,0) now has: {self.table.cellWidget(3, 0)}")
print(f"Cell (4,0) now has: {self.table.cellWidget(4, 0)}")
if __name__ == "__main__":
app = QApplication(sys.argv)
window = MainWindow()
window.show()
sys.exit(app.exec())
</code></pre>
<p>This has the following output:</p>
<pre><code>Cell (3,0) now has: <PySide6.QtWidgets.QLabel(0x26d65f7fc00) at 0x0000026D67071380>
Cell (4,0) now has: None
Process finished with exit code -1073741819 (0xC0000005)
</code></pre>
<p>It seems to me that upon rendering, the QLabel is removed. Is there any way to prevent this from happening?</p>
|
<python><pyqt><pyqt5><pyside><pyside6>
|
2025-05-20 23:54:51
| 1
| 647
|
Rik
|
79,631,313
| 949,664
|
Updated from Django 3.2 to 5.2, now I'm getting "Obj matching query does not exist"
|
<p>I'm using Django in a non-Django project (Postgresql 17 under the hood) purely to make my unit tests easier. I've defined all my models like this:</p>
<pre class="lang-py prettyprint-override"><code>class Bar(models.Model):
internal_type = models.TextField()
class Meta:
managed = False
db_table = 'bar'
class Foo(models.Model):
bar = models.ForeignKey('Bar', models.CASCADE, db_column='bar', related_name='foo')
class Meta:
managed = False
db_table = 'foo'
</code></pre>
<p>This looks funky, but it was working perfectly fine on Django 3.2. It allowed me to write tests like this:</p>
<pre class="lang-py prettyprint-override"><code> def test_foo_bar(self):
from django_app import models
self.cursor.execute("INSERT INTO bar(internal_type) VALUES ('test') RETURNING id;")
self.cursor.execute("INSERT INTO foo(bar) VALUES (%s)", (self.cursor.fetchone()[0],))
self.cursor.connection.commit()
from django.db import connection
connection.force_debug_cursor = True
a = models.Foo.objects.all()
b = models.Bar.objects.all()
assert a[0].bar_id == b[0].id # this succeeds
print(a[0].bar) # this raises an ObjectDoesNotExist error
</code></pre>
<p>This test passes with no issues on Django 3.2, but fails on that last line on 5.2. How can I work around this? It seems like Django is using some stricter transaction isolation behind the scenes?</p>
<p>I've tried committing Django's cursor before, closing its connection and forcing it to re-open, etc. But nothing has worked.</p>
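<p>For completeness, roughly what those attempts looked like (illustrative only, not something I expect to be the real fix):</p>
<pre class="lang-py prettyprint-override"><code>from django.db import connection

# end whatever transaction Django has open and drop the connection,
# so the next ORM query opens a fresh one (and, presumably, a fresh snapshot)
connection.commit()
connection.close()
</code></pre>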
<p>Has my luck run out on this pattern of testing?</p>
<p>Edit: I should also note that I'm <em>not</em> using Django's testing framework. I'm using <code>unittest.TestCase</code> with custom setup functions. Also, this code works fine with Django 3.2.</p>
<p>Another edit:</p>
<p>The schema isn't too interesting for these tables:</p>
<pre><code>create table if not exists bar (
id bigserial primary KEY,
internal_type text
);
create table if not exists foo (
id bigserial primary KEY,
bar bigint references bar(id)
);
</code></pre>
<p>Edit 3:
Here are the query logs for the above unit test:</p>
<pre><code>(0.000) SELECT "foo"."id", "foo"."bar" FROM "foo" LIMIT 1; args=(); alias=default
(0.000) SELECT "bar"."id", "bar"."internal_type" FROM "bar" LIMIT 1; args=(); alias=default
(0.000) SELECT "foo"."id", "foo"."bar" FROM "foo" LIMIT 1; args=(); alias=default
</code></pre>
|
<python><django><postgresql>
|
2025-05-20 23:47:30
| 0
| 1,128
|
sagargp
|
79,631,280
| 2,003,686
|
Counting objects in YOLO11 defining several lines
|
<p>I'm using an <a href="https://docs.ultralytics.com/guides/object-counting/" rel="nofollow noreferrer">example from Ultralytics (YOLOv11)</a> to count apples on a conveyor belt. It defines a line across the frame and counts how many apples cross that line using the <strong>ObjectCounter</strong> solution provided by Ultralytics.</p>
<p>Here is the original code:</p>
<pre><code>import cv2
from ultralytics import solutions
cap = cv2.VideoCapture("path/to/video.mp4")
assert cap.isOpened(), "Error reading video file"
region_points = [(20, 400), (1080, 400)] # line counting
# Video writer
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
video_writer = cv2.VideoWriter("object_counting_output.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
# Initialize object counter object
counter = solutions.ObjectCounter(
show=True, # display the output
region=region_points, # pass region points
model="yolo11n.pt", # model="yolo11n-obb.pt" for object counting with OBB model.
# classes=[0, 2], # count specific classes i.e. person and car with COCO pretrained model.
# tracker="botsort.yaml", # choose trackers i.e "bytetrack.yaml"
)
# Process video
while cap.isOpened():
success, im0 = cap.read()
if not success:
print("Video frame is empty or processing is complete.")
break
results = counter(im0)
# print(results) # access the output
video_writer.write(results.plot_im) # write the processed frame.
cap.release()
video_writer.release()
cv2.destroyAllWindows() # destroy all opened windows
</code></pre>
<p>Now I’d like to adapt this code to count cars on a road with two lanes. The idea is to draw two separate lines, one for each lane, and count how many cars cross each line independently.</p>
<p>At first, I tried creating two separate ObjectCounter instances, each with its own line (rough sketch below). However, this approach is very inefficient, since it runs inference separately for each counter, even though both are analyzing the same frame.</p>
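<p>Roughly what that first attempt looked like (a minimal sketch; the file name and line coordinates are placeholders):</p>
<pre><code>import cv2
from ultralytics import solutions

cap = cv2.VideoCapture("trafficCam.mp4")

# one ObjectCounter per lane, each with its own counting line
counter_lane1 = solutions.ObjectCounter(region=[(140, 450), (500, 450)], model="yolo11s.pt", classes=[2], show=False)
counter_lane2 = solutions.ObjectCounter(region=[(550, 450), (720, 450)], model="yolo11s.pt", classes=[2], show=False)

while cap.isOpened():
    success, im0 = cap.read()
    if not success:
        break
    # each counter runs its own inference on the same frame -> roughly twice the work
    results1 = counter_lane1(im0)
    results2 = counter_lane2(im0.copy())

cap.release()
</code></pre>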
<p>Is there a way to define multiple counting lines in a single ObjectCounter instance?</p>
<p>I also came across <a href="https://docs.ultralytics.com/es/guides/region-counting/#real-world-applications" rel="nofollow noreferrer">another example on the Ultralytics website using YOLOv11</a>, where they define multiple regions in a dictionary and pass that dictionary to a single instance of the <strong>RegionCounter</strong> class.</p>
<p>However, it seems the <strong>RegionCounter</strong> class only works with <strong>polygons</strong> and requires four points, so it wouldn’t be suitable for working with <strong>lines</strong>. If I define a dictionary of lines and pass it to RegionCounter:</p>
<pre><code>#!pip install ultralytics
import ultralytics
import cv2
from ultralytics import solutions
ultralytics.checks()
cap = cv2.VideoCapture("trafficCam.mp4")
#cap = cv2.VideoCapture("traffic_cam_short.mp4")
assert cap.isOpened(), "Error reading video file"
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH,
cv2.CAP_PROP_FRAME_HEIGHT,
cv2.CAP_PROP_FPS))
# Video writer
video_writer = cv2.VideoWriter("counting.avi",
cv2.VideoWriter_fourcc(*"mp4v"),
fps, (w, h))
region_lineas = {
"linea1": [(140, 450), (500, 450)], # diccionario de líneas para contar los coches en cada carril
"linea2": [(550, 450), (720, 450)],
}
print(region_lineas)
# Initialize region counter object
counter = solutions.RegionCounter(
show=True, # display the frame
region=region_lineas, # pass region points
model="yolo11s.pt", # model for counting in regions i.e yolo11s.pt
classes=[2], # If you want to count specific classes i.e person and car with COCO pretrained model.
show_in=True, # Display in counts
#show_out=True, # Display out counts
tracker="bytetrack.yaml", # Enable tracking
line_width=2, # Adjust the line width for bounding boxes and text display
)
# Process video
while cap.isOpened():
success, im0 = cap.read()
if not success:
print("Video frame is empty or video processing has been successfully completed.")
break
results = counter(im0) # count the objects
    # Show the image in real time
    cv2.imshow("Cuenta de coches por carril", im0)
    # Save the processed video
video_writer.write(results.plot_im)
cap.release() # Release the capture
video_writer.release()
cv2.destroyAllWindows()
</code></pre>
<p>I get an <strong>error</strong>:</p>
<pre><code>ValueError: A linearring requires at least 4 coordinates.
</code></pre>
<p>Passing the dictionary of lines to an <strong>ObjectCounter</strong> instead of to a <strong>RegionCounter</strong> also doesn’t work: ObjectCounter expects a single line (as a list of two points) or a polygonal region (at least 3 points), not a dictionary or multiple lines at once.</p>
<p><strong>My question:</strong>
Is there a way to define multiple counting lines in a single ObjectCounter instance?
If not, what would be the most efficient way to count objects crossing multiple lines in the same frame without repeating inference?</p>
|
<python><computer-vision><yolo><ultralytics><yolov11>
|
2025-05-20 23:05:40
| 0
| 1,940
|
rodrunner
|
79,631,048
| 2,971,574
|
Different behaviour of databricks-connect vs. pyspark when creating DataFrames
|
<p>I've got a project where I develop an ETL pipeline in VS Code using the Databricks extension, which is based on the Python package "databricks-connect". To run unit tests on functions I use pytest as well as the VS Code extension "Testing". In GitLab I want to run those unit tests as well when specific merge requests are created, to check whether all tests pass. Within GitLab my tests run in a Docker container where I set up a local pyspark environment that uses pyspark instead of databricks-connect (the two packages cannot coexist).</p>
<p>That's how I create my spark session that is used to run the tests in both "worlds" (VS Code + databricks-connect vs. Docker + pyspark):</p>
<pre><code>from pyspark.sql import SparkSession
def create_spark_session() -> SparkSession:
# Within VS Code where databricks connect is installed
# the try block "works" whereas within my Docker container
# I end up in the except block.
try:
from databricks.connect import DatabricksSession
DEV_CONFIG = parse_config_file()
return DatabricksSession.builder.remote(
host=DEV_CONFIG.host,
token=DEV_CONFIG.token,
cluster_id=DEV_CONFIG.cluster_id,
).getOrCreate()
# from databricks.sdk.runtime import spark
# return spark
except ImportError:
return SparkSession.builder.master("local[4]").config("spark.databricks.clusterUsageTags.clusterAllTags", "{}").getOrCreate()
</code></pre>
<p>The issue is that there seems to be a difference between databricks-connect and plain pyspark when it comes to creating DataFrames.</p>
<p>For example the code</p>
<pre><code>df_test = spark_session.createDataFrame(
[
"123", # string
123456, # integer
12345678912345, # integer with 14 digits
],
["my_str_column"],
)
</code></pre>
<p>works perfectly using VS Code and databricks-connect but within my Docker container where pyspark is used it runs into the error "pyspark.errors.exceptions.base.PySparkTypeError: [CANNOT_INFER_SCHEMA_FOR_TYPE] Can not infer schema for type: <code>str</code>."</p>
<p>Also the code</p>
<pre><code>from pyspark.sql.types import StructType, StructField, FloatType

data = [
# one and only row
(102),
]
schema = StructType(
[
StructField("my_float_column", FloatType()),
]
)
df = spark_session.createDataFrame(data, schema)
</code></pre>
<p>works perfectly using VS Code and databricks-connect but within my Docker container where pyspark is used it runs into the error "pyspark.errors.exceptions.base.PySparkTypeError: [CANNOT_ACCEPT_OBJECT_IN_TYPE] <code>FloatType()</code> can not accept object <code>102</code> in type <code>int</code>."</p>
<p>Why do the two packages behave differently? And what's the best way to deal with this? By the way: within a Databricks notebook both snippets run fine.</p>
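<p>A sketch of the more explicit variant I'm considering as a workaround (hard-coded values, just to illustrate the idea of removing all inference):</p>
<pre><code>from pyspark.sql.types import StructType, StructField, StringType

# rows as 1-tuples and an explicit schema, so neither backend has to infer anything
df_test = spark_session.createDataFrame(
    [("123",), ("123456",), ("12345678912345",)],
    StructType([StructField("my_str_column", StringType())]),
)
</code></pre>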
|
<python><pyspark><databricks><azure-databricks><databricks-connect>
|
2025-05-20 19:22:16
| 1
| 555
|
the_economist
|
79,631,026
| 1,719,931
|
_metadata properties do not work with pyjanitor
|
<p><a href="https://pandas.pydata.org/pandas-docs/stable/development/extending.html#define-original-properties" rel="nofollow noreferrer">_metadata original properties</a> are not pased to pyjanitor manipulation results</p>
<p>Take the following MWE:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import janitor # noqa: F401
import pandas_flavor as pf
# See: https://pandas.pydata.org/pandas-docs/stable/development/extending.html#define-original-properties
class MyDataFrame(pd.DataFrame):
# normal properties
_metadata = ["myvar"]
@property
def _constructor(self):
return MyDataFrame
@pf.register_dataframe_method
def regvar(self):
obj = MyDataFrame(self)
obj.myvar = 2
return obj
@pf.register_dataframe_method
def printvar(self):
print(self.myvar)
return self
df = pd.DataFrame(
{
"Year": [1999, 2000, 2004, 1999, 2004],
"Taxon": [
"Saccharina",
"Saccharina",
"Saccharina",
"Agarum",
"Agarum",
],
"Abundance": [4, 5, 2, 1, 8],
}
)
</code></pre>
<p>Now:</p>
<pre><code>df2 = df.regvar().query("Taxon=='Saccharina'").printvar()
</code></pre>
<p>This correctly returns <code>2</code>.</p>
<p>However:</p>
<pre><code>index = pd.Index(range(1999,2005),name='Year')
df2 = df.regvar().complete(index, "Taxon", sort=True).printvar()
</code></pre>
<p>Returns an Exception:</p>
<pre><code>---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_4412\627945022.py in ?()
39
40 df2 = df.regvar().query("Taxon=='Saccharina'").printvar()
41
42 index = pd.Index(range(1999,2005),name='Year')
---> 43 df2 = df.regvar().complete(index, "Taxon", sort=True).printvar()
HOME\venvs\base\Lib\site-packages\pandas_flavor\register.py in ?(self, *args, **kwargs)
160 object: The result of calling of the method.
161 """
162 global method_call_ctx_factory
163 if method_call_ctx_factory is None:
--> 164 return method(self._obj, *args, **kwargs)
165
166 return handle_pandas_extension_call(
167 method, method_signature, self._obj, args, kwargs
~\AppData\Local\Temp\ipykernel_4412\627945022.py in ?(self)
21 @pf.register_dataframe_method
22 def printvar(self):
---> 23 print(self.myvar)
24 return self
HOME\venvs\base\Lib\site-packages\pandas\core\generic.py in ?(self, name)
6295 and name not in self._accessors
6296 and self._info_axis._can_hold_identifiers_and_holds_name(name)
6297 ):
6298 return self[name]
-> 6299 return object.__getattribute__(self, name)
AttributeError: 'DataFrame' object has no attribute 'myvar'
</code></pre>
|
<python><pandas><pyjanitor>
|
2025-05-20 19:09:59
| 2
| 5,202
|
robertspierre
|
79,630,930
| 17,902,018
|
Unsloth doesn't find Llama.cpp to convert fine-tuned LLM to GGUF
|
<p>I am executing this notebook from the Unsloth docs on an Azure VM:</p>
<p><a href="https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3_(8B)-Ollama.ipynb" rel="nofollow noreferrer">https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3_(8B)-Ollama.ipynb</a></p>
<p>Where in the end they save the model to GGUF format after fine-tuning like this:</p>
<pre class="lang-py prettyprint-override"><code>model.save_pretrained_gguf("model", tokenizer, quantization_method="q4_k_m") # or any other quantization
</code></pre>
<p>I get logs</p>
<pre><code>...long list of layer quantization logs like
INFO:hf-to-gguf:blk.0.ffn_down.weight, torch.bfloat16 --> BF16, shape = {14336, 4096}
INFO:hf-to-gguf:blk.0.ffn_gate.weight, torch.bfloat16 --> BF16, shape = {4096, 14336}
INFO:hf-to-gguf:blk.0.ffn_up.weight, torch.bfloat16 --> BF16, shape = {4096, 14336}
...
INFO:hf-to-gguf:Set model quantization version
INFO:hf-to-gguf:Set model tokenizer
...
</code></pre>
<p>and then the error</p>
<pre><code>File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/unsloth/save.py:1212, in save_to_gguf(model_type, model_dtype, is_sentencepiece, model_directory, quantization_method, first_conversion, _run_installer)
RuntimeError: Unsloth: Quantization failed for /afh/projects/test_project-5477c8e6-ac7d-4117-9d2b-0bbd54c12c6a/shared/Users/Riccardo.Rorato/model/unsloth.BF16.gguf
You might have to compile llama.cpp yourself, then run this again.
You do not need to close this Python program. Run the following commands in a new terminal:
You must run this in the same folder as you're saving your model.
git clone --recursive https://github.com/ggerganov/llama.cpp
cd llama.cpp && make clean && make all -j
Once that's done, redo the quantization.
</code></pre>
<p>Needless to say, I have cloned and built <code>llama.cpp</code> (also following the updated guide <a href="https://github.com/ggml-org/llama.cpp/blob/master/docs/build.md#cpu-build" rel="nofollow noreferrer">here</a>). I have also tried older commits that still had the previous build options, and moved the <code>llama-quantize</code> binary from <code>llama.cpp/build/bin</code> to the notebook's folder, but nothing changed.</p>
<p>I am able to get a GGUF file regardless by launching the llama.cpp's quantization script manually like this:</p>
<pre><code>python3 llama.cpp/convert_lora_to_gguf.py my_model
</code></pre>
<p>However, I cannot figure out why Unsloth does not recognize it.</p>
<p>Details:</p>
<ul>
<li>Running on a AzureML Standard_NC24ads_A100_v4 VM.</li>
<li>Model I started with: 'unsloth/Meta-Llama-3.1-8B-Instruct'</li>
<li>unsloth version: '2025.5.6'</li>
</ul>
|
<python><huggingface-transformers><large-language-model><fine-tuning><llamacpp>
|
2025-05-20 17:43:16
| 1
| 2,128
|
rikyeah
|
79,630,857
| 5,273,805
|
Runtime warning in sklearn KMeans
|
<p>I am running k-means using sklearn but have been getting runtime warnings. Can you please explain what's happening? Below is sample code for reproducibility:</p>
<pre><code>import pandas as pd
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
col1 = np.random.normal(loc=0, scale=1, size=1000)
col2 = np.random.normal(loc=1, scale=1, size=1000)
col3 = np.random.normal(loc=2, scale=4, size=1000)
col4 = np.random.normal(loc=3, scale=3, size=1000)
df = pd.DataFrame(list(zip(col1, col2, col3, col4)))
scaler = StandardScaler()
scaled_df = scaler.fit_transform(df)
kmeans = KMeans(n_clusters=2, init='k-means++', max_iter=300, random_state=0)
kmeans.fit(df)
</code></pre>
<p>The warnings are:</p>
<pre><code>miniconda3/lib/python3.13/site-packages/sklearn/utils/extmath.py:203: RuntimeWarning: divide by zero encountered in matmul
ret = a @ b
miniconda3/lib/python3.13/site-packages/sklearn/utils/extmath.py:203: RuntimeWarning: overflow encountered in matmul
ret = a @ b
miniconda3/lib/python3.13/site-packages/sklearn/utils/extmath.py:203: RuntimeWarning: invalid value encountered in matmul
ret = a @ b
miniconda3/lib/python3.13/site-packages/sklearn/cluster/_kmeans.py:237: RuntimeWarning: divide by zero encountered in matmul
current_pot = closest_dist_sq @ sample_weight
miniconda3/lib/python3.13/site-packages/sklearn/cluster/_kmeans.py:237: RuntimeWarning: overflow encountered in matmul
current_pot = closest_dist_sq @ sample_weight
miniconda3/lib/python3.13/site-packages/sklearn/cluster/_kmeans.py:237: RuntimeWarning: invalid value encountered in matmul
current_pot = closest_dist_sq @ sample_weight
</code></pre>
|
<python><scikit-learn><k-means>
|
2025-05-20 16:53:24
| 2
| 349
|
useryk
|
79,630,782
| 2,410,605
|
How to prevent a non-link web element from opening in a new tab
|
<p>I'm using Python Selenium to find an element, click it, and have the new page stay open in the same window instead of opening in a new tab. I'm trying to be careful with how I word this because, from what I can tell, it's not an anchor or URL. The HTML is:</p>
<pre><code><div class="prevent-select">
<div></div>
<span class="tyl-list-item__text menuTitleClass" tabindex="0" data-testid="hub__tylermenu__node__title" text="Employee Job/Salary" aria-describedby="tcw-tooltip-56">Employee Job/Salary</span>
<tcw-tooltip id="tcw-tooltip-56" style="border: 0px; clip: rect(0px, 0px, 0px, 0px); height: 1px; margin: -1px; overflow: hidden; padding: 0px; position: absolute; width: 1px; outline: 0px; appearance: none;">Enterprise ERP&gt;Human Capital Management&gt;Payroll&gt;Employee Maintenance&gt;Employee Job/Salary</tcw-tooltip>
</div>
</code></pre>
<p>My initial code (which IS opening it in a new tab) looks like:</p>
<pre><code>url_jobSalary = WebDriverWait(browser, 15).until(EC.element_to_be_clickable((By.XPATH, "//*[contains(text(), 'Employee Job/Salary')]"))).click()
</code></pre>
<p>Since researching I've tried these techniques below, none of which are working:</p>
<p>Technique #1</p>
<pre><code>#this technique failed because get_jobSalary is returned as nothing, I guess it cannot find an href attribute
url_jobSalary = WebDriverWait(browser, 30).until(EC.presence_of_element_located((By.XPATH, "//*[contains(text(), 'Employee Job/Salary')]")))
get_jobSalary = url_jobSalary.get_attribute('href')
browser.get(get_jobSalary)
</code></pre>
<p>Technique #2</p>
<pre><code>#this one opens in a new tab still
url_jobSalary = WebDriverWait(browser, 30).until(EC.presence_of_element_located((By.XPATH, "//*[contains(text(), 'Employee Job/Salary')]")))
browser.execute_script("arguments[0].target='_self';", url_jobSalary)
url_jobSalary.click()
</code></pre>
<p>Technique #3</p>
<pre><code># Set chrome options -- I found a post that suggested adding the --profile-directory=default argument would work,
# but it still opens in a new tab
options = Options()
options.add_argument('--profile-directory=Default') #prevent element from opening in new tab or window
options.add_experimental_option("prefs", {
"download.default_directory" : download_path
})
options.headless = False
browser = webdriver.Chrome(options=options)
</code></pre>
<p>Finally, I found this <a href="https://stackoverflow.com/questions/60172620/how-to-prevent-opening-a-new-tab-during-scraping-with-puppeteer-when-clicking-a">post</a> which feels like it's on the right track, because the page I'm scraping uses those ng- attributes, but I don't know how to write the code snippet the answer shows.</p>
<p>Below is a screenshot of the web element when I added it to a watch; maybe this will help:</p>
<p><a href="https://i.sstatic.net/A2qyz2p8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/A2qyz2p8.png" alt="screenshot of web element" /></a></p>
|
<python><selenium-webdriver><web-scraping>
|
2025-05-20 15:53:32
| 0
| 657
|
JimmyG
|
79,630,774
| 186,202
|
How to monitor temporal Workflows and Activities using Sentry?
|
<p>We are using Sentry to monitor our production bugs, and since we picked <a href="https://temporal.io/" rel="nofollow noreferrer">Temporal</a> to run our background tasks (workflows and activities) we don't have Sentry logging anymore.</p>
<p>Is it possible to configure our worker to set up the Sentry SDK and forward errors from workflows and activities to Sentry?</p>
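<p>For context, this is roughly how our worker is started today (a minimal sketch using the <code>temporalio</code> Python SDK; the task queue name, DSN, workflow and activity are placeholders):</p>
<pre class="lang-py prettyprint-override"><code>import asyncio
from datetime import timedelta

import sentry_sdk
from temporalio import activity, workflow
from temporalio.client import Client
from temporalio.worker import Worker


@activity.defn
async def my_activity(name: str) -> str:
    return f"Hello, {name}"


@workflow.defn
class MyWorkflow:
    @workflow.run
    async def run(self, name: str) -> str:
        return await workflow.execute_activity(
            my_activity, name, start_to_close_timeout=timedelta(seconds=10)
        )


async def main():
    # Sentry is initialised in the worker process, but exceptions raised inside
    # workflows and activities never seem to reach it
    sentry_sdk.init(dsn="https://public@example.ingest.sentry.io/0")

    client = await Client.connect("localhost:7233")
    worker = Worker(
        client,
        task_queue="background-tasks",
        workflows=[MyWorkflow],
        activities=[my_activity],
    )
    await worker.run()


if __name__ == "__main__":
    asyncio.run(main())
</code></pre>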
|
<python><exception><sentry><temporal><temporal-workflow>
|
2025-05-20 15:48:54
| 1
| 18,222
|
Natim
|
79,630,721
| 1,507,014
|
Why does VertexClustering.membership return a copy?
|
<p>Working with igraph in a project, we encounter serious timing issues. Running cProfile on the worst function, we realize that this code takes a lot of time:</p>
<pre><code>[ partition.membership[x.index] if x.index < len(partition.membership) else -1 for x in G.vs ]
</code></pre>
<p>where <code>G</code> is a <code>Graph</code>.</p>
<p>Looking at the documentation, it appears that membership is a property returning a copy of the membership:</p>
<pre><code>@property
def membership(self):
"""Returns the membership vector."""
return self._membership[:]
</code></pre>
<p>Is there a reason for returning an explicit copy and not just <code>self._membership</code>?</p>
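<p>A sketch of the obvious mitigation on our side, taking the copy once outside the comprehension (assuming the membership cannot change while we iterate):</p>
<pre><code>membership = partition.membership  # one copy instead of one copy per vertex
labels = [
    membership[v.index] if v.index < len(membership) else -1
    for v in G.vs
]
</code></pre>
<p>That helps, but it still feels like the copy itself is unnecessary, hence the question.</p>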
|
<python><igraph>
|
2025-05-20 15:11:23
| 1
| 5,006
|
Bentoy13
|
79,630,693
| 2,123,706
|
How to apply a pandas filter if the filter requirement is an element in a list
|
<p>I have a dataframe, and a list of potentially different filters.</p>
<p>Is there a way to apply a filter that comes from a list so that the dataframe is filtered accordingly?</p>
<p>MRE:</p>
<p>In this example, the <code>df</code> is filtered based on the conditions set out in <code>filter_ls[0]</code>, i.e. I want to run <code>df[df.col1>3]</code>. The filter used depends on earlier logic, which specifies which element to take from <code>filter_ls</code>, i.e. element 0.</p>
<pre><code>import pandas as pd

df=pd.DataFrame({'col1':[1,2,3,4,5], 'col2':[6,7,8,9,9], 'col3':[11,12,13,14,15]})
filter_ls=['df[df.col1>3]', 'df[(df.col1<4) & (df.col2<8)]', 'df[(df.col2>7) & (df.col3>14)]']
filter_ls[0]
</code></pre>
<p>which returns <code>KeyError: 'df.col1>3'</code></p>
|
<python><pandas><dataframe><list><filter>
|
2025-05-20 15:02:12
| 1
| 3,810
|
frank
|
79,630,643
| 2,097,820
|
Why composing a SQLAlchemy select doesn't work
|
<p>Consider following snippet:</p>
<pre class="lang-py prettyprint-override"><code>stmt = db.select(Product).where(Product.id == 1)
print(stmt)
stmt = db.select(Product)
stmt.where(Product.id == 1)
print(stmt)
</code></pre>
<p>I'd expect both statements to be equal, but instead I got:</p>
<pre><code>SELECT product.id, product.code, product."desc", product.unit, product.cost_price, product.sale_price
FROM product
WHERE product.id = :id_1
SELECT product.id, product.code, product."desc", product.unit, product.cost_price, product.sale_price
FROM product
</code></pre>
<p>The last <code>stmt</code> doesn't include the where clause.</p>
<p>Why does this happen?</p>
|
<python><sqlalchemy><orm>
|
2025-05-20 14:24:43
| 1
| 2,617
|
Victor Aurélio
|
79,630,495
| 2,074,831
|
Typing sqlalchemy where clauses
|
<p>Following this doc:</p>
<p><a href="https://docs.sqlalchemy.org/en/20/orm/extensions/mypy.html" rel="nofollow noreferrer">https://docs.sqlalchemy.org/en/20/orm/extensions/mypy.html</a></p>
<p>I tried to type-check my <code>test.py</code> file:</p>
<pre class="lang-py prettyprint-override"><code>from sqlalchemy import Column, Integer, String, select
from sqlalchemy.orm import declarative_base
Base = declarative_base()
class Usr(Base):
__tablename__ = "usr"
id = Column(Integer, primary_key=True)
name = Column(String)
stmt = select(
Usr.name
).where(
Usr.id == "test" # comparing an int and an str
)
</code></pre>
<p>using the following command:</p>
<pre><code>mypy --strict --config-file mypy.ini test.py
</code></pre>
<p>where <code>mypy.ini</code> contains:</p>
<pre><code>[mypy]
plugins = sqlalchemy.ext.mypy.plugin
</code></pre>
<p>Mypy does not raise any error on the where clause (<code>Usr.id == "test"</code>).</p>
<p>Is there a way to make mypy complain with some <code>Non-overlapping equality check</code> error, or is there a workaround for this case?</p>
|
<python><sqlalchemy><python-typing><mypy>
|
2025-05-20 13:00:47
| 1
| 3,974
|
sevan
|
79,630,444
| 1,310,032
|
How to get a moving average (or max) in time period using Django ORM
|
<p>I am trying to work out if it's possible/practical to use the Django ORM to get the highest value in an arbitrary timebox out of the database.</p>
<p>Imagine a restaurant orders ingredients every day, we might have a simple model that looks like:</p>
<pre><code>class Order(models.Model):
date = models.DateField()
ingredient = models.CharField()
quantity = models.IntegerField()
</code></pre>
<p>I am then able to get the total quantity ordered each week:</p>
<pre><code>Order.objects.filter(date__gte=start_date, date__lt=end_date)
.annotate(date=TruncWeek("date"))
.values("ingredient", "date")
.annotate(total=Sum("quantity"))
.order_by("ingredient")
</code></pre>
<p>But now I want to figure out the maximum quantity of each ingredient that has been ordered in any 7 (or X) consecutive days, across the filtered date range.</p>
<p>Is it possible to do this in the ORM?</p>
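<p>To make the goal concrete, this is roughly the computation I'm after, expressed outside the ORM with pandas (illustrative only; the 7-day window is hard-coded):</p>
<pre><code>import pandas as pd

# pull the filtered orders out and compute the best 7 consecutive days per ingredient
rows = Order.objects.filter(date__gte=start_date, date__lt=end_date).values(
    "date", "ingredient", "quantity"
)
df = pd.DataFrame(list(rows))
df["date"] = pd.to_datetime(df["date"])

def rolling_week_max(group: pd.Series) -> float:
    daily = group.resample("D").sum()    # one row per calendar day, missing days = 0
    return daily.rolling(7).sum().max()  # best total over any 7 consecutive days

result = (
    df.set_index("date")
      .groupby("ingredient")["quantity"]
      .apply(rolling_week_max)
)
</code></pre>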
|
<python><django><django-orm>
|
2025-05-20 12:29:13
| 1
| 1,199
|
David Downes
|
79,630,419
| 696,034
|
SeleniumBase usage
|
<p>We have a long-running process that creates RemoteWebDriver and works with Chrome browser; it is destroyed (.close() and .quit()) when needed.</p>
<p>Now we're planning to migrate to SeleniumBase, and I cannot find a way to destroy the driver manually when needed, because most (if not all) of the examples follow the pattern of <code>with SB(...) as sb</code>, where the scope is contained within the block.</p>
<p>I suppose I can create a SeleniumBase object and store it in a global variable for our long-running process, but how do I destroy it and re-create it later? Re-creation is not the problem; the question is how to destroy it manually.</p>
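<p>For reference, the documented pattern I mean looks like this (sketch; the URL is a placeholder), where the driver's lifetime is tied to the <code>with</code> block:</p>
<pre><code>from seleniumbase import SB

with SB() as sb:                      # driver is created here
    sb.open("https://example.com")
    # ... do the work ...
# driver is automatically quit when the block exits
</code></pre>
<p>What I need instead is to keep that object alive across many operations in a long-running process and tear it down explicitly, on demand.</p>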
|
<python><seleniumbase>
|
2025-05-20 12:16:24
| 1
| 7,316
|
Daniel Protopopov
|
79,630,126
| 3,213,204
|
Setting a persistent clipboard with multiple targets on X11
|
<h3>General overview</h3>
<p>I am trying to set an image in the clipboard with several Linux <a href="https://man.archlinux.org/man/extra/xclip/xclip.1.en#t" rel="nofollow noreferrer">targets</a>. For example, depending on the context, the same image could be pasted as PNG, JPG, or any other format.</p>
<h3>The problem</h3>
<p>The big problem I am facing is that I haven't found a way to have, at the same time, i) many targets and ii) a persistent mode.</p>
<p>With simple solutions such as xclip, I can set one target at a time, and the last one overwrites the previous one.
However, with Qt's modules, I can get the desired behavior, but it ends when I close the Qt application, and the clipboard goes back to the situation it was in before Qt.</p>
<h3>My tries</h3>
<h4>1. Xclip</h4>
<pre><code>xclip -selection clipboard -t image/png -i image.png
xclip -selection clipboard -t image/jpg -i image.jpg
</code></pre>
<p>When I inspect the clipboard situation, I just find the last one:</p>
<pre><code>xclip -selection clipboard -t TARGETS -o
TARGETS
image/jpg
</code></pre>
<p>Well, basically, Xclip <a href="https://github.com/astrand/xclip/issues/32" rel="nofollow noreferrer">doesn’t</a> support it.</p>
<h4>2. Using Qt</h4>
<pre><code>import base64
from PyQt5.QtWidgets import QApplication
from PyQt5.QtGui import QImage, QClipboard
from PyQt5.QtCore import QMimeData, QByteArray, QBuffer, QIODevice
import sys
# Example: load an image in base64 (from a real file here for testing)
with open("image.png", "rb") as f:
base64_data = base64.b64encode(f.read()).decode()
# Decode base64 → bytes
image_bytes = base64.b64decode(base64_data)
# Load image into a QImage
qimage = QImage.fromData(image_bytes)
if qimage.isNull():
raise ValueError("Invalid image")
# Start Qt
app = QApplication([])
# Create a QMimeData
mime_data = QMimeData()
# Add native image
mime_data.setImageData(qimage)
# Add image/png format
png_bytes = QByteArray()
buffer = QBuffer(png_bytes)
buffer.open(QIODevice.WriteOnly)
qimage.save(buffer, "PNG")
buffer.close()
mime_data.setData("image/png", png_bytes)
# Add image/jpeg format
jpeg_bytes = QByteArray()
buffer = QBuffer(jpeg_bytes)
buffer.open(QIODevice.WriteOnly)
qimage.save(buffer, "JPEG")
buffer.close()
mime_data.setData("image/jpeg", jpeg_bytes)
# Copy to clipboard
QApplication.clipboard().setMimeData(mime_data)
# QApplication.clipboard().setMimeData(mime_data, QClipboard.Clipboard)
print("Image copied to clipboard with multiple formats.")
# Optional: keep app alive a bit so clipboard survives
app.processEvents()
input("Press Enter to quit...")
</code></pre>
<p>I verify with Xclip and it works:</p>
<pre><code>% xclip -selection clipboard -t TARGETS -o
application/x-qt-image
image/png
image/jpeg
image/avif
image/bmp
image/bw
image/cur
image/eps
image/epsf
image/epsi
image/heic
image/heif
image/icns
image/ico
image/jpg
image/jxl
image/pbm
BITMAP
image/pcx
image/pgm
image/pic
image/ppm
PIXMAP
image/rgb
image/rgba
image/sgi
image/tga
image/tif
image/tiff
image/wbmp
image/webp
image/xbm
image/xpm
TARGETS
MULTIPLE
TIMESTAMP
SAVE_TARGETS
</code></pre>
<p>But it only works while the app is running. When I press <kbd>Enter</kbd>, the clipboard changes.</p>
<h3>Some observations</h3>
<p>A lot of software <a href="https://unix.stackexchange.com/questions/375002/xclip-image-binary-contents-pasted-into-text-fields">does this</a>. Look at what happens when you copy an image or text from your browser, or when you copy files from your file manager. So it doesn't seem to be anything particularly exceptional.</p>
<h3>The question</h3>
<p>How can I get the same behavior as the Qt solution, but persistently, without depending on the application still running?</p>
|
<python><qt><pyqt><clipboard><x11>
|
2025-05-20 09:00:04
| 1
| 321
|
fauve
|
79,630,089
| 5,958,323
|
How to display a legend when plotting a GeoDataFrame
|
<p>I have a GeoDataFrame I want to plot. The plot itself works fine, but somehow I cannot easily add a legend for it. I have tried a number of alternatives and checked solutions from googling and an LLM, but I do not understand why this does not work.</p>
<p>Code:</p>
<pre class="lang-py prettyprint-override"><code>import geopandas as gpd
from shapely.geometry import box, Polygon, LineString
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
bounding_box = [9.454, 80.4, 12, 80.88]
polygon = box(*bounding_box)
gdf = gpd.GeoDataFrame(geometry=[polygon])
plot_obj = gdf.plot(ax=ax, edgecolor='red', facecolor='none', linewidth=2, label="user bbox query")
# plt.legend() # does not work
# ax.legend(handles=[plot_obj], labels=["test"]) # does not work
ax.legend(handles=[plot_obj]) # does not work
plt.xlabel('Longitude')
plt.ylabel('Latitude')
plt.show()
</code></pre>
<p>Result:</p>
<p><a href="https://i.sstatic.net/fcDURj6t.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fcDURj6t.png" alt="legend is empty" /></a></p>
<p>I get a warning:</p>
<pre><code><python-input-0>:16: UserWarning: Legend does not support handles for Axes instances.
A proxy artist may be used instead.
See: https://matplotlib.org/stable/users/explain/axes/legend_guide.html#controlling-the-legend-entries
ax.legend(handles=[plot_obj]) # does not wor
</code></pre>
<p>But somehow I am not able to take advantage of it to make things work (I tried several ways to plot the legend from "handles", see the different attempts, but none work).</p>
<p>I am certainly missing something - any pointer to how this can be done simply? :)</p>
|
<python><matplotlib><geometry><legend><geopandas>
|
2025-05-20 08:34:01
| 1
| 9,379
|
Zorglub29
|
79,630,032
| 5,437,493
|
Defining a type that is both a Protocol and a Pydantic BaseModel
|
<p>I need to define a type that is both a <code>Protocol</code> and a <code>BaseModel</code>.</p>
<p><strong>In detail:</strong><br />
The model <code>Foo</code> has a field <code>data</code> that can receive any model that has an <code>id</code> attribute (let's call the type <code>HasId</code>).</p>
<pre class="lang-py prettyprint-override"><code>class Foo(BaseModel):
data: HasId
...
</code></pre>
<p>If <code>Protocol</code> supported inheriting from other base classes I would have defined <code>HasId</code> like this:</p>
<pre class="lang-py prettyprint-override"><code>class HasId(BaseModel, Protocol):
id: str | UUID
</code></pre>
<p>Or, using the <code>Intersection</code> solution proposed <a href="https://github.com/CarliJoy/intersection_examples/issues/53" rel="nofollow noreferrer">here</a>, I would have defined it like this:</p>
<pre class="lang-py prettyprint-override"><code>class HasIdProtocol(Protocol):
id: str | UUID
type HasId = Intersection[HasIdProtocol, BaseModel]
# OR
type HasId = HasIdProtocol & BaseModel
</code></pre>
<p>Is there a way I can still achieve this functionality currently?</p>
|
<python><python-typing><pydantic>
|
2025-05-20 08:00:33
| 0
| 839
|
Tom Gringauz
|
79,629,981
| 72,437
|
Issues Getting Ghibli-Style Image Output with Gemini API - What Am I Missing?
|
<p>I'm starting a venture in AI-powered image applications, and as a learning project, I'm developing a small app that transforms input images into <strong>Ghibli-style animation art</strong>.</p>
<p>To experiment with different APIs, I tried the following models under the Gemini ecosystem:</p>
<ol>
<li><strong><code>gemini-2.0-flash-exp-image-generation</code></strong> – Poor results.</li>
<li><strong>Uploading an image directly to Gemini Advanced 2.5 Pro (via chat interface)</strong> – Also produces unsatisfactory output.</li>
<li><strong><code>imagen-3.0-generate-002</code></strong> – Seems like this model doesn't support image input?</li>
</ol>
<p><a href="https://i.sstatic.net/266o0RqM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/266o0RqM.png" alt="enter image description here" /></a></p>
<p>In contrast, when I upload the <strong>same image</strong> to <strong>ChatGPT (via the image upload feature)</strong> and use the simple prompt:</p>
<blockquote>
<p><em>Turn this image into Ghibli-style animation art</em></p>
</blockquote>
<p>…the output is <strong>significantly better</strong> and aligns more closely with the style I’m aiming for.</p>
<h2><a href="https://i.sstatic.net/2jrYtSM6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2jrYtSM6.png" alt="enter image description here" /></a></h2>
<h3>Question:</h3>
<p>Am I missing something when using the <strong>Gemini API</strong> for this kind of task?</p>
<p>Here’s the code I used to test <code>gemini-2.0-flash-exp-image-generation</code>:</p>
<pre class="lang-py prettyprint-override"><code>from PIL import Image
from io import BytesIO
from google import genai
from google.genai.types import GenerateContentConfig
client = genai.Client(api_key=API_KEY)
image = Image.open("a.jpg")
prompt = "Turn this image into Ghibli-style animation art"
response = client.models.generate_content(
model='gemini-2.0-flash-exp-image-generation',
contents=[prompt, image],
config=GenerateContentConfig(
response_modalities=['Text', 'Image']
)
)
for part in response.candidates[0].content.parts:
if part.text:
print(part.text)
elif part.inline_data:
result_image = Image.open(BytesIO(part.inline_data.data))
result_image.save('output.jpg')
result_image.show()
</code></pre>
<p>Any tips, corrections, or best practices would be appreciated!</p>
<p>Thanks in advance.</p>
|
<python><artificial-intelligence><openai-api><google-gemini>
|
2025-05-20 07:33:02
| 1
| 42,256
|
Cheok Yan Cheng
|
79,629,963
| 15,358,800
|
How can I explode the string in sublists and fill into individual sublist
|
<p>I have a list like this.</p>
<pre><code>[["Hi, this is Tesa form the sales departmet. Iam working here from"],[""],[""]]
</code></pre>
<p>How can I explode the string inside the first sublist across all the sublists, preserving whole words? Like this:</p>
<pre><code>[["Hi, this is Tesa."],["form the sales departmet"],["Iam working here from"]]
</code></pre>
<p>Things to consider:</p>
<ol>
<li>The number of sublists can be <code>n</code>, but all of them except the first will always be empty.</li>
<li>Words should not be broken up. There is no need to preserve the sentence meaning.</li>
</ol>
<p>What I have:</p>
<pre><code>def explode_string(array, split_positions=None):
original_text = array[0][0]
num_sublists = len(array)
if split_positions is None:
chunk_size = len(original_text) // num_sublists
split_positions = [chunk_size * i for i in range(1, num_sublists)]
parts = []
start = 0
for pos in split_positions:
parts.append(original_text[start:pos])
start = pos
parts.append(original_text[start:])
result = [[part] for part in parts]
return result
original_array = [
["Hi, this is Tesa form the sales departmet. Iam working here from"],
[""],
[""]
]
result = explode_string(original_array, [15, 39])
print(result)
</code></pre>
<p>The output I am getting:</p>
<pre><code>[['Hi, this is Tes'], ['a form the sales departm'], ['et. Iam working here from']]
</code></pre>
<p>The dataset is very long, so I'm looking for suggestions for an easier approach to tackle this.</p>
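<p>A rough sketch of the word-preserving split I have in mind, in case it clarifies the requirement (it just distributes whole words as evenly as possible):</p>
<pre><code>def explode_preserving_words(array):
    words = array[0][0].split()
    n = len(array)
    per_chunk = -(-len(words) // n)  # ceiling division
    chunks = [
        " ".join(words[i:i + per_chunk])
        for i in range(0, len(words), per_chunk)
    ]
    chunks += [""] * (n - len(chunks))  # pad if there are fewer chunks than sublists
    return [[chunk] for chunk in chunks]

print(explode_preserving_words(original_array))
# [['Hi, this is Tesa'], ['form the sales departmet.'], ['Iam working here from']]
</code></pre>
<p>I'm not sure this is efficient enough for a very long dataset, which is why I'm asking for better approaches.</p>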
|
<python><list>
|
2025-05-20 07:23:04
| 2
| 4,891
|
Bhargav
|
79,629,943
| 17,580,381
|
Unable to install spacy on MacOS 15.5 (M2) with Python 3.13.3
|
<p>Having created a new venv I am attempting to install spacy strictly in accordance with the <a href="https://spacy.io/usage#pip" rel="nofollow noreferrer">documentation</a></p>
<p>Specifically:</p>
<pre><code>pip install -U pip setuptools wheel
pip install -U 'spacy[apple]'
</code></pre>
<p>This fails with a huge error output that seems to stem from:</p>
<pre><code>In file included from ../numpy/_core/src/umath/string_ufuncs.cpp:20:
../numpy/_core/src/umath/string_fastsearch.h:132:5: error: no type named 'ptrdiff_t' in namespace 'std'; did you mean simply 'ptrdiff_t'?
</code></pre>
<p>My guess is that numpy 2.2.6 is incompatible, but I can't find anything in the spaCy documentation to help me.</p>
<p>My Xcode environment is fully up-to-date.</p>
<p>I have seen other posts on Stack Overflow about installation issues with spaCy, but they're rather old and some of the "solutions" are clearly ridiculous.</p>
|
<python><spacy>
|
2025-05-20 07:10:13
| 1
| 28,997
|
Ramrab
|
79,629,806
| 3,163,618
|
Sequence of function iterations
|
<p>Is there a way to abuse assignment expressions or functional tools to generate the sequence x, f(x), f(f(x)), ... in one line?</p>
<p>Here are some contrived examples to demonstrate:</p>
<pre><code>def iter(x, f, lim=10):
for _ in range(lim):
yield x
x = f(x)
iter(1, lambda x: (2*x)%99)
</code></pre>
<p>(This makes one extra function call that goes unused. Ideally this is avoided.)</p>
<p>Another weird idea I had is "One-argument accumulate", even uglier. The idea is to use the binary function but ignore the list elements! It's not a good use of accumulate.</p>
<pre><code>from itertools import accumulate
list(accumulate([None]*10, lambda x,y:2*x, initial=1))
</code></pre>
|
<python><iteration><sequence><python-assignment-expression>
|
2025-05-20 05:06:18
| 5
| 11,524
|
qwr
|
79,629,787
| 1,273,751
|
What is the currently recommended way to install Pytorch with CUDA enabled using conda?
|
<p>The official PyTorch website used to have an installation option using conda (see the screenshot in this answer: <a href="https://stackoverflow.com/a/51229368/1273751">https://stackoverflow.com/a/51229368/1273751</a>)</p>
<p>But currently no <code>conda</code> option is available:
<a href="https://i.sstatic.net/19ODdWY3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/19ODdWY3.png" alt="printscreen pytorch website" /></a></p>
<p>Nevertheless, pytorch is still available on conda-forge: <a href="https://anaconda.org/conda-forge/pytorch" rel="nofollow noreferrer">https://anaconda.org/conda-forge/pytorch</a></p>
<p>Is there a currently recommended way to install PyTorch with CUDA enabled using conda, and if so, what is it?</p>
|
<python><pytorch><anaconda><conda>
|
2025-05-20 04:37:31
| 1
| 2,645
|
Homero Esmeraldo
|
79,629,693
| 1,719,931
|
Iterate over an object that is not an iterator
|
<p>Why can I iterate over an object that is not an iterator?</p>
<pre><code>>>> import spacy
>>> nlp = spacy.load("en_core_web_sm")
>>> doc = nlp("Berlin looks like a nice city")
>>> next(doc)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'spacy.tokens.doc.Doc' object is not an iterator
>>> for token in doc:
... print(token)
...
Berlin
looks
like
a
nice
city
</code></pre>
|
<python><iterator>
|
2025-05-20 02:10:47
| 2
| 5,202
|
robertspierre
|
79,629,689
| 1,503,005
|
How to freeze Python build tools for repeatable builds?
|
<p>TL;DR: When <code>pip install</code> builds and installs wheels, how do I determine which versions of build tool packages (e.g. Cython) it used? How do I force it to use the same versions of those packages in the future? Besides <code>pip</code>, <code>wheel</code>, and <code>cython</code>, what packages do I need to care about in the first place?</p>
<p>I maintain a project which uses Docker containers running Python. These containers use an image which is based on the <code>python:3.11.2-slim-bullseye</code> official image, with the various Python packages and non-Python tools we require installed. One of our concerns is making our builds repeatable and consistent, and we've taken a number of measures to that end:</p>
<ul>
<li>Our base image, rather than being the <code>3.11.2-slim-bullseye</code> tag, is a specific hash, so even if a new version of that tag is released, the image we are building from will be the same</li>
<li>Before installing our Python packages, we make sure to install a specific version of <code>pip</code> and <code>wheel</code></li>
<li>Our <code>requirements.txt</code> file lists not only what packages we require, but the exact versions to install, and the versions of their dependencies to use</li>
</ul>
<p>Despite all this, we recently suffered a broken build: a package, <a href="https://pypi.org/project/metrohash/" rel="nofollow noreferrer">metrohash</a>, started to fail during the compilation step. After investigation, we discovered the problem:</p>
<ul>
<li>The metrohash module was written using <a href="https://cython.org/" rel="nofollow noreferrer">Cython</a>, a package that allows for writing code in a sort of hybrid of Python & C or C++. Code in this format looks mostly like ordinary Python, but Cython can transform it into C or C++; the compiled code can then be called as if it were the Python module it resembled</li>
<li>The metrohash code used the Python 2 type 'long' to represent data that would, in the C++ code, be stored in an actual <code>long int</code></li>
<li>When <code>metrohash</code> was installed, pip (or possibly wheel) would automatically download the latest version of Cython in order to transform and build the code</li>
<li>The latest version of Cython, 3.1.0, released just a week or so ago, dropped support for a bunch of deprecated Python 2 features, including the Python <code>long</code> type</li>
<li>Thus we were <a href="https://github.com/escherba/python-metrohash/issues/29" rel="nofollow noreferrer">getting an error</a> when trying to transform the code into compilable form, so the build failed</li>
</ul>
<p>We managed to find a workaround for <em>that</em> issue, but the fact that it happened in the first place revealed a serious weakness in our build system: we were implicitly depending on a package, Cython, whose version we were not controlling. Obviously, I want to change that and make the version used consistent, but I have no idea how:</p>
<ul>
<li>I went back and checked the logs from the last successful build, and <em>nothing</em> in <code>pip</code>'s output mentions <code>cython</code> at all, let alone what particular version it is using. Nor does <code>cython</code> show up in the output of <code>pip freeze --all</code>, either before or after the install of <code>metrohash</code>. So how do I check what version is actually being used?</li>
<li>Once I know what version we are currently using, how do I ensure that particular version continues to be used in the future? I could try installing the version in question in a separate <code>pip install</code> step before the main install, but given the package doesn't show up in the output of <code>pip freeze --all</code>, I'm not at all sure it would actually <em>use</em> the installed package for the build (see the sketch after this list).</li>
<li>Most importantly, are there any <em>other</em> packages which may be implicitly used in the install process like this? How can I tell?</li>
</ul>
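<p>To illustrate the second point, this is the kind of thing I have in mind (a sketch only; I am assuming pip's isolated build environment is what pulls in the latest Cython, and that disabling it would make my pre-installed, pinned version the one that gets used):</p>
<pre><code># pin the build tool ourselves, then stop pip from creating its own
# isolated build environment so the pinned Cython is actually used
pip install "Cython==0.29.37" wheel
pip install --no-build-isolation -r requirements.txt
</code></pre>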
|
<python><pip><build><cython><python-wheel>
|
2025-05-20 02:02:34
| 0
| 635
|
macdjord
|
79,629,626
| 10,069,542
|
Open edX Tutor Plugin: ModuleNotFoundError for custom OAuth2 backend despite proper installation
|
<p>I'm developing a custom Tutor plugin for Open edX that implements a WordPress OAuth2 authentication backend. Despite following the Tutor plugin development guidelines and ensuring proper installation (cf <a href="https://github.com/cookiecutter-openedx/edx-oauth2-wordpress-backend" rel="nofollow noreferrer">https://github.com/cookiecutter-openedx/edx-oauth2-wordpress-backend</a>), I'm encountering a <code>ModuleNotFoundError</code> when Django tries to load the module.</p>
<p>Here is the structure of my plugin:</p>
<pre><code>edx-oauth2-wordpress-backend/
├── setup.py
├── requirements.txt
└── edx_oauth2_wordpress_backend/
├── __init__.py
├── plugin.py
└── wp_oauth.py
</code></pre>
<h3>wp_oauth.py</h3>
<pre><code>"""
Custom WordPress OAuth2 backend for Open edX.
"""
from social_core.backends.oauth import BaseOAuth2
class WordPressOAuth2(BaseOAuth2):
"""
Custom WordPress OAuth2 backend for Open edX.
This class implements the WordPress OAuth2 authentication.
"""
name = "wordpress-oauth"
# Base URL of your WordPress site (without trailing slash)
BASE_URL = "https://your-wordpress-site.com"
# OAuth endpoints
PATH = "wp-json/moserver/"
AUTHORIZATION_ENDPOINT = "authorize"
TOKEN_ENDPOINT = "token"
USERINFO_ENDPOINT = "resource"
def get_user_details(self, response):
"""
Extract user details from the OAuth response.
"""
return {
'username': response.get('username', ''),
'email': response.get('email', ''),
'fullname': response.get('name', ''),
'first_name': response.get('first_name', ''),
'last_name': response.get('last_name', ''),
}
def user_data(self, access_token, *args, **kwargs):
"""
Get user data from the OAuth provider.
"""
url = f"{self.BASE_URL}/{self.PATH}{self.USERINFO_ENDPOINT}"
return self.get_json(url, headers={'Authorization': f'Bearer {access_token}'})
</code></pre>
<h3>setup.py</h3>
<pre><code>from setuptools import setup, find_packages
setup(
name="edx-oauth2-wordpress-backend",
version="0.1.0",
packages=find_packages(),
install_requires=[
"tutor>=16.0.0",
"social-auth-core>=4.3.0",
"social-auth-app-django>=5.0.0",
],
entry_points={
"tutor.plugin.v1": [
"wordpress-oauth = edx_oauth2_wordpress_backend.plugin"
]
},
python_requires=">=3.8",
)
</code></pre>
<h3>requirements.txt</h3>
<pre><code>edx-oauth2-wordpress-backend>=1.0.8
</code></pre>
<h3>plugin.py</h3>
<pre><code>from tutor import hooks
# Plugin configuration
hooks.Filters.CONFIG_DEFAULTS.add_items([
("WPOAUTH_BACKEND_BASE_URL", ""),
("WPOAUTH_BACKEND_CLIENT_ID", ""),
("WPOAUTH_BACKEND_CLIENT_SECRET", ""),
])
# Add environment patches
hooks.Filters.ENV_PATCHES.add_item(
(
"lms-env",
"""
# WordPress OAuth Backend Settings
WPOAUTH_BACKEND_BASE_URL: "{{ WPOAUTH_BACKEND_BASE_URL }}"
WPOAUTH_BACKEND_CLIENT_ID: "{{ WPOAUTH_BACKEND_CLIENT_ID }}"
WPOAUTH_BACKEND_CLIENT_SECRET: "{{ WPOAUTH_BACKEND_CLIENT_SECRET }}"
ADDL_INSTALLED_APPS: ["edx_oauth2_wordpress_backend"]
THIRD_PARTY_AUTH_BACKENDS: ["edx_oauth2_wordpress_backend.wp_oauth.WordPressOAuth2"]
ENABLE_REQUIRE_THIRD_PARTY_AUTH: true
"""
)
)
# Install the package in the Open edX environment
hooks.Filters.ENV_PATCHES.add_item(
(
"lms-dockerfile-post-install",
"""
# Install WordPress OAuth backend
COPY edx-oauth2-wordpress-backend /openedx/edx-oauth2-wordpress-backend
RUN cd /openedx/edx-oauth2-wordpress-backend && \
pip install -e . && \
pip install social-auth-core>=4.3.0 social-auth-app-django>=5.0.0
"""
)
)
# Add the package to requirements
hooks.Filters.ENV_PATCHES.add_item(
(
"lms-requirements",
"""
edx-oauth2-wordpress-backend @ file:///openedx/edx-oauth2-wordpress-backend
"""
)
)
</code></pre>
<h3>Installation Steps</h3>
<ol>
<li><p>Install the plugin:</p>
<pre><code>cd edx-oauth2-wordpress-backend
pip install -e .
tutor plugins enable wordpress-oauth
</code></pre>
</li>
<li><p>Configure the plugin:</p>
<pre><code>tutor config save --set WPOAUTH_BACKEND_BASE_URL="https://your-wordpress-site.com" \
    --set WPOAUTH_BACKEND_CLIENT_ID="your-client-id" \
    --set WPOAUTH_BACKEND_CLIENT_SECRET="your-client-secret"
</code></pre>
</li>
<li><p>Rebuild and restart:</p>
<pre><code>tutor config save
tutor images build openedx
tutor local launch
</code></pre>
</li>
</ol>
<h3>Error Message</h3>
<pre><code>Loading settings lms.envs.tutor.production
Traceback (most recent call last):
File "/openedx/edx-platform/./manage.py", line 103, in <module>
startup.run()
File "/openedx/edx-platform/lms/startup.py", line 20, in run
django.setup()
File "/openedx/venv/lib/python3.11/site-packages/django/__init__.py", line 24, in setup
apps.populate(settings.INSTALLED_APPS)
File "/openedx/venv/lib/python3.11/site-packages/django/apps/registry.py", line 91, in populate
app_config = AppConfig.create(entry)
File "/openedx/venv/lib/python3.11/site-packages/django/apps/config.py", line 193, in create
import_module(entry)
File "/opt/pyenv/versions/3.11.8/lib/python3.11/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
ModuleNotFoundError: No module named 'edx_oauth2_wordpress_backend'
</code></pre>
<h3>What I've Tried</h3>
<ol>
<li>Verified the package structure and all necessary files are present</li>
<li>Ensured the package is installed in editable mode</li>
<li>Added the package to both <code>INSTALLED_APPS</code> and <code>THIRD_PARTY_AUTH_BACKENDS</code></li>
<li>Added the package to the LMS requirements</li>
<li>Verified the package installation in the container</li>
</ol>
<h3>Environment</h3>
<ul>
<li>Open edX version: Latest (Tutor 16.0.0+)</li>
<li>Python version: 3.11</li>
<li>Operating System: Linux</li>
</ul>
<p>The error occurs during Django's app initialization, specifically when trying to load the custom OAuth2 backend module. Despite the package being installed and configured, Django cannot find the module.</p>
<p>Any insights or suggestions on how to resolve this issue would be greatly appreciated.</p>
|
<python><wordpress><oauth><openedx>
|
2025-05-20 00:04:16
| 0
| 550
|
MMasmoudi
|
79,629,605
| 1,332,263
|
How to replace pack() with grid()
|
<p>The following code enables the status bar to remain at the bottom of the window and expand to fill the window when the window is resized. How can I replace pack() with grid() to do the same thing?</p>
<pre><code>import tkinter as tk
from tkinter import ttk
colour='blue'
class StatusBar(ttk.Frame):
def __init__(self, container):
super().__init__(container)
self.columnconfigure(0, weight=1)
self.rowconfigure(0,weight=1)
self.label=tk.Label(self, text="STATUS BAR")
self.label.grid()
self.pack(fill='x', expand=True, anchor='s')
class App(tk.Tk):
def __init__(self):
super().__init__()
self.title('Replace')
self.geometry('800x500')
self.resizable(1, 1)
self.configure(bg=colour)
if __name__ == "__main__":
app = App()
sb = StatusBar(app)
app.mainloop()
</code></pre>
|
<python><tkinter><tkinter-layout>
|
2025-05-19 23:27:07
| 1
| 417
|
bob_the_bob
|
79,629,601
| 1,090,576
|
Why in python would blocking code delay a non-blocking sleep timer?
|
<p>In python, if I run the following code:</p>
<pre><code>import asyncio
import time
from datetime import datetime
async def delay(n: int, id: str):
await asyncio.sleep(n)
print("I am:", id, datetime.now())
return id
async def main():
second = delay(1, "second")
first = delay(1, "First")
task = asyncio.create_task(second)
time.sleep(2) # Removable, this is here to show "second" is being delayed until after first resolves
await first
asyncio.run(main())
</code></pre>
<p>Consider T+0 the start time of the application. My expectation is that <code>create_task</code> would start first and a timer (at roughly T+0) would be set to resolve at T+1. The sleep timer would then start and block everything until T+2. At this point, this seems to diverge from my expectations. My expectation is that the <code>first</code> coroutine is then run, and I would expect the <code>await asyncio.sleep</code> from the <code>second</code> coroutine to be able to take control and complete at that point. This does not appear to be what happens though. It seems that <code>first</code> resolves at T+3, after which <code>second</code> then resolves as well at T+3. Why is "second" delayed until T+3 (instead of resolving at T+2), and why does it not resolve immediately when the <code>sleep</code> call is made in the "first" coroutine?</p>
<p><strong>Note:</strong> I am aware that <code>time.sleep</code> is preventing <code>second</code> task from continuing. The question is why is <code>first</code> completed prior to <code>second</code> even though <code>second</code> started running and called <code>asyncio.sleep</code> prior to <code>first</code> being started and calling <code>asyncio.sleep</code> later? Presumably, the <code>second</code> sleep call has already waited for 1 second prior to <code>first</code> even calling <code>await asyncio.sleep</code>.</p>
|
<python><python-asyncio><blocking>
|
2025-05-19 23:20:13
| 1
| 3,390
|
Goblinlord
|
79,629,270
| 3,125,823
|
Vehicle records for auto parts application
|
<p>I'm trying to figure out how to create a vehicle filter for an auto parts application for a client. They want their customers to be able to filter by Year, Make, Model, Trim and Engine.</p>
<p>We're using DRF so our thoughts were:</p>
<p>Create a Django vehicles app with Year, Make, Model, Trim and Engine model classes, that each contain an _id field and the respective fields.</p>
<p>Create records that consist of an _id per Year, Make, Model, Trim and Engine. Then use those records to create a filter that the customers can use to select their vehicle.</p>
<p>So for example:</p>
<p>in vehicles app:</p>
<pre><code># models.py
class Year(models.Model):
year_id = models.UUIDField(...)
    year = models.SmallIntegerField(...)
class Make(models.Model):
make_id = models.UUIDField(...)
make = models.CharField(max_length=75)
...
</code></pre>
<p>Use <code>django-filter</code> to create a filter that uses all the above model classes to narrow down the customer's vehicle.</p>
<p>Are we on the right track, is our mental model correct?</p>
|
<python><django-models><django-rest-framework><django-filter>
|
2025-05-19 18:14:27
| 0
| 1,958
|
user3125823
|
79,629,241
| 1,328,439
|
How to format a floating point number in engineering notation with zero before decimal dot
|
<p>I am trying to output the data from Python so that the output format matches the output from a FORTRAN code.</p>
<p>When using the <code>E</code> format specifier in Python, I get a non-zero leading digit before the decimal point, i.e.</p>
<pre><code>>>> print("%13.5E" % 3.1415926)
3.14159E+00
</code></pre>
<p>This corresponds to <code>ES</code> data edit descriptor in FORTRAN.</p>
<p>Is it possible to instruct the Python string formatter to instead output the number with a zero as the leading digit, i.e.</p>
<pre><code> 0.31415E+01
</code></pre>
<p>This behavior matches that of the <code>E</code> data edit descriptor in FORTRAN.</p>
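<p>For reference, a minimal sketch of a helper that rescales Python's <code>E</code> output to get the zero leading digit (only an approximation of Fortran's <code>Ew.d</code> edit descriptor; rounding corner cases and special values are not handled):</p>
<pre class="lang-py prettyprint-override"><code>def fortran_e(value, width=13, digits=5):
    """Format roughly like Fortran's Ew.d: 0.dddddE+ee with a zero before the decimal point."""
    if value == 0:
        return f"{0.0:{width}.{digits}E}"
    mantissa, exponent = f"{value:.{digits - 1}E}".split("E")
    m = float(mantissa) / 10.0      # shift one digit to the right of the decimal point
    e = int(exponent) + 1           # compensate in the exponent
    return f"{m:.{digits}f}E{e:+03d}".rjust(width)

print(fortran_e(3.1415926))   # '  0.31416E+01'
</code></pre>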
|
<python>
|
2025-05-19 17:44:49
| 0
| 17,323
|
Dima Chubarov
|
79,629,166
| 8,357,778
|
Polars read_ndjson DynamoDB file high memory usage issue
|
<p>I'm trying to flatten a DynamoDB JSON file with Polars.</p>
<p>My JSON file is 1 GB (100 MB after compressing to gz), but when I read it, it uses 16 GB of memory.</p>
<p>Because the data comes from DynamoDB, my columns can have multiple types. For example, 1 year ago it was a String, today it's a boolean.</p>
<p>So I ran 4 experiments using only a <code>pl.read_ndjson</code> and a <code>write_parquet</code>:</p>
<ul>
<li>Read a ~150MB json file with 1 nested level, all lines have same schema -> ~390MB of memory used</li>
<li>Read same file, compressed with gz -> ~750MB of memory used</li>
<li>Read a ~150MB json file with multiple nested level and different schemas -> 7.7GB of memory used</li>
<li>Read same file, compressed with gz -> ~7.7GB of memory used</li>
</ul>
<p><strong>Why does it take so much memory when the schema in the nested file isn't constant?</strong></p>
<p>Here is my script for the simple data:</p>
<pre class="lang-py prettyprint-override"><code>import json
import random
from uuid import uuid4
from datetime import datetime
DYNAMO_TYPES = ["S", "N", "BOOL"]
def random_dynamodb_value(i, depth=0):
if depth > 0:
return {"S": str(uuid4())}
value_type = DYNAMO_TYPES[i%3]
if value_type == "S":
return {"S": f"val-{uuid4().hex[:8]}"}
elif value_type == "N":
return {"N": str(random.randint(0, 1000))}
elif value_type == "BOOL":
return {"BOOL": random.choice([True, False])}
def generate_extra_easy_fields(n=50):
extra = {}
for i in range(n):
field_name = f"field_{i}"
extra[field_name] = random_dynamodb_value(i)
return extra
def generate_easy_entry():
base = {
"id": {"S": str(uuid4())},
"timestamp": {"S": datetime.now().isoformat()},
}
base.update(generate_extra_easy_fields(50))
return base
def generate_simple_jsonl_file(filename: str, n: int = 20):
with open(filename, "w") as f:
for i in range(n):
entry = json.dumps(generate_easy_entry())
f.write(entry + "\n")
</code></pre>
<p>Here is my script for the complex data:</p>
<pre class="lang-py prettyprint-override"><code>import json
import random
from uuid import uuid4
from datetime import datetime
DYNAMO_TYPES = ["S", "N", "BOOL", "M", "L"]
def random_dynamodb_value(i, depth=0):
if depth > 2:
return {"S": str(uuid4())}
value_type = DYNAMO_TYPES[i%5]
if value_type == "S":
return {"S": f"val-{uuid4().hex[:8]}"}
elif value_type == "N":
return {"N": str(random.randint(0, 1000))}
elif value_type == "BOOL":
return {"BOOL": random.choice([True, False])}
elif value_type == "M":
return {
"M": {
f"key_{j}": random_dynamodb_value(random.randint(0, 4), depth + 1)
for j in range(random.randint(1, 3))
}
}
elif value_type == "L":
field = random.randint(0, 4)
return {
"L": [random_dynamodb_value(field, depth + 1) for _ in range(random.randint(2, 4))]
}
def generate_multitype_field():
return random.choice([
{"S": "text"},
{"BOOL": random.choice([True, False])},
{"N": str(random.randint(0, 100))},
])
def generate_extra_fields(n=50):
extra = {}
for i in range(n):
field_name = f"field_{i}"
extra[field_name] = random_dynamodb_value(i)
return extra
def generate_entry():
base = {
"id": {"S": str(uuid4())},
"timestamp": {"S": datetime.now().isoformat()},
"meta": {"M": {"level1": {"M": {"level2": {"M": {"level3": {"L": [random_dynamodb_value(3) for _ in range(2)]}}}}}}},
"listOfStructs": {
"L": [
{
"M": {
"subId": {"N": str(i)},
"flag": {"BOOL": random.choice([True, False])}
}
} for i in range(random.randint(2, 4))
]
},
"structOfStructOfStruct": {"M": {"a": {"M": {"b": {"M": {"c": random_dynamodb_value(2)}}}}}},
"structOfStructOfList": {"M": {"outer": {"M": {"innerList": {"L": [random_dynamodb_value(2) for _ in range(3)]}}}}},
"multiTypeField": generate_multitype_field()
}
base.update(generate_extra_fields(50))
return base
def generate_complexe_jsonl_file(filename: str, n: int = 20):
with open(filename, "w") as f:
for i in range(n):
entry = json.dumps(generate_entry())
f.write(entry + "\n")
</code></pre>
<p>The code to run experiments (choose your scenario):</p>
<pre class="lang-py prettyprint-override"><code>import os
import polars as pl
import psutil
from generate_complexe_data import generate_complexe_jsonl_file
from generate_simple_data import generate_simple_jsonl_file
def print_memory():
process = psutil.Process(os.getpid())
rss = process.memory_info().rss
print(f"Mémoire utilisée : {rss / 1024 / 1024:.2f} Mo")
def main() -> None:
"""Process DynamoDB export from manifest to partitioned parquet."""
# generate_complexe_jsonl_file("dynamodb_complexe.json", n=25000)
# generate_simple_jsonl_file("dynamodb_simple.json", n=100000)
# print("read simple")
# print_memory()
# df_simple = pl.read_ndjson("dynamodb_simple.json")
# print_memory()
# print("write simple")
# df_simple.write_parquet("output_simple.parquet")
# print_memory()
#
# print("read simple gz")
# print_memory()
# df_simple_gz = pl.read_ndjson("dynamodb_simple.json.gz")
# print_memory()
# print("write simple gz")
# df_simple_gz.write_parquet("output_simple_gz.parquet")
# print_memory()
#
# print("read complexe")
# print_memory()
# df_complexe = pl.read_ndjson("dynamodb_complexe.json")
# print_memory()
# print("write complexe")
# df_complexe.write_parquet("output_complexe.parquet")
# print_memory()
print("read complexe gz")
print_memory()
df_complexe_gz = pl.read_ndjson("dynamodb_complexe.json.gz")
print_memory()
print("write complexe gz")
df_complexe_gz.write_parquet("output_complexe_gz.parquet")
print_memory()
if __name__== "__main__":
main()
</code></pre>
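<p>One hedged workaround I am aware of (an assumption about the cause, not a confirmed explanation): pre-flatten each DynamoDB record before handing it to Polars, keeping anything nested as a raw JSON string, so Polars never has to infer one giant struct that is the union of every shape seen in the file. A minimal sketch:</p>
<pre class="lang-py prettyprint-override"><code>import json
import polars as pl

def simplify(record: dict) -> dict:
    out = {}
    for key, typed_value in record.items():
        (dynamo_type, value), = typed_value.items()   # e.g. {"S": "abc"} -> ("S", "abc")
        if dynamo_type in ("S", "N", "BOOL"):
            out[key] = str(value)
        else:                                         # M / L: keep as raw JSON text
            out[key] = json.dumps(value)
    return out

with open("dynamodb_complexe.json") as f:
    rows = [simplify(json.loads(line)) for line in f]
df = pl.DataFrame(rows)
</code></pre>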
|
<python><json><dataframe><amazon-dynamodb><python-polars>
|
2025-05-19 16:46:16
| 0
| 319
|
Franck Cussac
|
79,629,132
| 14,542,688
|
ib_async TWS API only first order gets transmitted, remaining orders doesnt show on TWS
|
<p>Currently, I have a function that creates 3 orders:</p>
<ol>
<li>Market order to enter the stock</li>
<li>Limit order for taking profit</li>
<li>Stop order for stop loss</li>
</ol>
<p>However, only the first order shows up in TWS and is auto-transmitted through the API. Furthermore, for each instance, only the very first order gets transmitted.</p>
<pre><code>from ib_async import *

ib = IB()
ib.connect('127.0.0.1', 7497, clientId=0)

def place_order(tradeDirection: str, ticker: str, riskSize: float, entryPrice: float, exitPrice: float):
    logger.info(f"====== Placing order for {ticker} ======")
    positionSize = get_position_size(get_risk(riskSize), entryPrice, exitPrice, tradeDirection)
    contract = Stock(ticker, 'SMART', 'USD')

    # Parent Market Order
    parent_order = MarketOrder(tradeDirection, positionSize)

    # Calculate TP and SL
    if tradeDirection == "BUY":
        takeProfitPrice = entryPrice + 2 * (entryPrice - exitPrice)
        stopLossPrice = exitPrice
        tp_order = LimitOrder("SELL", positionSize, takeProfitPrice)
        sl_order = StopOrder("SELL", positionSize, stopLossPrice)
    elif tradeDirection == "SELL":
        takeProfitPrice = entryPrice - 2 * (exitPrice - entryPrice)
        stopLossPrice = exitPrice
        tp_order = LimitOrder("BUY", positionSize, takeProfitPrice)
        sl_order = StopOrder("BUY", positionSize, stopLossPrice)

    # Place all orders
    ib.placeOrder(contract, parent_order)
    ib.sleep(0)
    ib.placeOrder(contract, tp_order)
    ib.sleep(0)
    ib.placeOrder(contract, sl_order)

    logger.info(f"Main Order placed: {parent_order}")
    logger.info(f"TP order placed: {tp_order}")
    logger.info(f"SL order placed: {sl_order}")
</code></pre>
<p>From the logs, you can see there are 3 orders inside Open Orders, but on TWS only the 1st Market Order gets transmitted and filled.</p>
<pre><code>2025-05-20 00:20:27.525 | INFO | __main__:place_order:49 - ====== Placing order for TSLA ======
2025-05-20 00:20:27.526 | INFO | __main__:place_order:77 - Main Order placed: MarketOrder(orderId=63, action='BUY', totalQuantity=65)
2025-05-20 00:20:27.527 | INFO | __main__:place_order:78 - TP order placed: LimitOrder(orderId=64, action='SELL', totalQuantity=65, lmtPrice=359.5)
2025-05-20 00:20:27.527 | INFO | __main__:place_order:79 - SL order placed: StopOrder(orderId=65, action='SELL', totalQuantity=65, auxPrice=325.0)
2025-05-20 00:20:27.527 | INFO | __main__:webhook:158 - Open orders: [MarketOrder(orderId=63, action='BUY', totalQuantity=65), LimitOrder(orderId=64, action='SELL', totalQuantity=65, lmtPrice=359.5), StopOrder(orderId=65, action='SELL', totalQuantity=65, auxPrice=325.0)]
</code></pre>
<p>How can I make all 3 orders be transmitted to TWS?</p>
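<p>For reference, a minimal sketch of the usual TWS bracket pattern (field names follow the TWS API; whether ib_async also offers a <code>bracketOrder()</code> helper is an assumption to verify): the parent is placed with <code>transmit=False</code>, the children carry the parent's <code>orderId</code> as <code>parentId</code>, and only the last child has <code>transmit=True</code>, which releases the whole group at once.</p>
<pre class="lang-py prettyprint-override"><code># minimal sketch; variable names mirror the snippet above
parent_order = MarketOrder(tradeDirection, positionSize)
parent_order.transmit = False                 # hold until the last child is placed

tp_order = LimitOrder("SELL", positionSize, takeProfitPrice)
tp_order.transmit = False

sl_order = StopOrder("SELL", positionSize, stopLossPrice)
sl_order.transmit = True                      # transmitting this releases all three

parent_trade = ib.placeOrder(contract, parent_order)
tp_order.parentId = parent_trade.order.orderId
sl_order.parentId = parent_trade.order.orderId
ib.placeOrder(contract, tp_order)
ib.placeOrder(contract, sl_order)
</code></pre>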
|
<python><interactive-brokers><tws><ib-api><ib-insync>
|
2025-05-19 16:23:59
| 1
| 327
|
0xLasadie
|
79,629,009
| 3,213,204
|
Permutate two rows inside QTableWidget
|
<p>I have a simple <code>QTableWidget</code> with a single relevant column (the other one is still empty for the moment), and I am trying to swap two rows.</p>
<p>I've added this method to my table class:</p>
<pre class="lang-py prettyprint-override"><code> def permuteRows(self, row1=0, row2=3):
# saving the original contents
widget1 = self.table.cellWidget(row1, 1)
widget2 = self.table.cellWidget(row2, 1)
# Suppression of the widgets
self.table.removeCellWidget(row1, 1)
self.table.removeCellWidget(row2, 1)
# Re-injection of the widgets (in the oposite rows)
self.table.setCellWidget(row1, 1, widget2)
self.table.setCellWidget(row2, 1, widget1)
</code></pre>
<p>But, when I run this method, I just get this error:</p>
<pre><code>[1] 3189684 segmentation fault python mnemos.py
</code></pre>
<p>And nothing else.</p>
<p>The problem only occurs when I re-inject the second widget. It behaves as if there were a circular reference, even though I remove the widgets beforehand.</p>
<h3>The question</h3>
<p>How can I swap two rows of the table, either this way or another way?</p>
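<p>For reference, a minimal sketch of a swap that avoids re-inserting widgets at all (an assumption: the cells can hold plain <code>QTableWidgetItem</code> data instead of live widgets; <code>removeCellWidget()</code> appears to delete the widget it removes, since the table owns it, so the saved references become dangling):</p>
<pre class="lang-py prettyprint-override"><code>    def permuteRows(self, row1=0, row2=3):
        item1 = self.table.takeItem(row1, 1)   # takeItem() hands ownership back to us
        item2 = self.table.takeItem(row2, 1)
        self.table.setItem(row1, 1, item2)
        self.table.setItem(row2, 1, item1)
</code></pre>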
|
<python><qt><pyqt><qtablewidget><pyqt6>
|
2025-05-19 15:00:56
| 0
| 321
|
fauve
|
79,628,986
| 6,998,684
|
Cannot import: `from serpapi import GoogleSearch`
|
<p>I have this app in PyCharm. It won't run because of:</p>
<blockquote>
<p>ImportError: cannot import name 'GoogleSearch' from 'serpapi' (/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/serpapi/__init__.py)
Process finished with exit code 1</p>
</blockquote>
<p>Has anybody faced this issue?</p>
<p>I have already tried Invalidate Caches / Restart in PyCharm and checked my run configuration.</p>
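<p>A hedged note on what is usually behind this import: <code>GoogleSearch</code> is provided by the legacy <code>google-search-results</code> distribution, while the newer <code>serpapi</code> distribution on PyPI exposes a different client API, so the direction sketched below normally applies (which fix is right depends on what is installed in the interpreter PyCharm is using):</p>
<pre class="lang-py prettyprint-override"><code># install the distribution that actually provides GoogleSearch:
#   pip uninstall serpapi
#   pip install google-search-results
from serpapi import GoogleSearch   # resolves once google-search-results is installed

search = GoogleSearch({"q": "coffee", "api_key": "YOUR_KEY"})
results = search.get_dict()
</code></pre>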
|
<python><pycharm><streamlit><venv>
|
2025-05-19 14:47:06
| 2
| 26,929
|
IgorGanapolsky
|
79,628,910
| 9,318,323
|
Improve code that finds nan values with a condition and removes them
|
<p>I have a dataframe where each column starts and finishes with a certain number of NaN values. Somewhere in the middle of a column there is a continuous run of values. It can happen that a NaN value "interrupts" the data. I want to iterate over each column, find such values and then remove the whole row.</p>
<p>For example, I want to find the <code>np.nan</code> between <code>9</code> and <code>13</code> and remove it:</p>
<pre><code>[np.nan, np.nan, np.nan, 1, 4, 6, 6, 9, np.nan, 13, np.nan, np.nan]
</code></pre>
<p>Conditions for removal:</p>
<ol>
<li>if value has at least one data point before</li>
<li>if value has at least one data point after</li>
<li>if value is nan</li>
</ol>
<p>I wrote code that does this already, but it's slow and kind of wordy.</p>
<pre><code>import pandas as pd
import numpy as np

data = {'A': [np.nan, np.nan, np.nan, 1, 4, 6, 6, 9, np.nan, 13, np.nan, np.nan], 'B': [np.nan, np.nan, np.nan, 11, 3, 16, 13, np.nan, np.nan, 12, np.nan, np.nan]}
df = pd.DataFrame(data)

def get_nans(column):
    output = []
    for index_to_check, value in column.items():
        has_value_before = not column[:index_to_check].isnull().all()
        has_value_after = not column[index_to_check + 1:].isnull().all()
        is_nan = np.isnan(value)
        output.append(not (has_value_before and has_value_after and is_nan))
    return output

for column in df.columns:
    df = df[get_nans(df[column])]

print(df)
</code></pre>
<p>How can I improve my code, vectorize it etc?</p>
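<p>A minimal vectorized sketch of the same rule (it checks all columns against the original frame in one pass, which matches the intent, although the column-by-column loop above could in principle behave differently once earlier columns have already dropped rows): a cell is an "interrupting" NaN if it is NaN while <code>ffill</code> finds a value before it and <code>bfill</code> finds a value after it.</p>
<pre class="lang-py prettyprint-override"><code>interrupting = df.isna() & df.ffill().notna() & df.bfill().notna()
cleaned = df[~interrupting.any(axis=1)]
print(cleaned)
</code></pre>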
|
<python><pandas><dataframe><numpy>
|
2025-05-19 14:11:43
| 2
| 354
|
Vitamin C
|
79,628,606
| 3,387,223
|
pyparsing LineStart not matching on indented line
|
<p>I couldn't find this anywhere. I'm using <code>pyparsing</code> on a project right now and I need to match keywords at the start of the line. I looked at the <code>LineStart</code> for this which is close but not quite the thing I need.</p>
<p>E.g.</p>
<pre class="lang-py prettyprint-override"><code>test_string = """this is only
an example
    with
an indented keyword
that I want to find at the
start of the line, but not 'with in' a string
with"""
</code></pre>
<p>Using <code>pyparsing.Keyword("with").search_string(test_string)</code> I would find even the occurrences that are not at the start of a line.</p>
<p>Using <code>pyparsing.LineStart() + pyparsing.Keyword("with")</code> I would not find any except on the last line where it's at the start of the line.</p>
<p>No combination with optional whitespace after the LineStart() seemed to match this.</p>
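<p>A minimal sketch of one workaround (there may be a more idiomatic pyparsing construct): anchor the match with a <code>MULTILINE</code> regex and turn off whitespace skipping, so "with" only matches when it is the first token on a line, indented or not.</p>
<pre class="lang-py prettyprint-override"><code>import re
import pyparsing as pp

# matches "with" preceded only by indentation since the start of a line;
# leave_whitespace() stops pyparsing from skipping past the newline before matching
kw_at_line_start = pp.Regex(r"^[ \t]*with\b", flags=re.MULTILINE).leave_whitespace()
print(kw_at_line_start.search_string(test_string))   # matches include the leading indentation
</code></pre>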
|
<python><pyparsing>
|
2025-05-19 10:58:36
| 1
| 4,956
|
CodeMonkey
|
79,628,526
| 12,978,930
|
How to make ty ignore a single line in a source file?
|
<p>I have been experimenting with Astral's type checker <a href="https://github.com/astral-sh/ty" rel="nofollow noreferrer">ty</a> recently. As it's still pre-release, I run into false positives from time to time and would like to explicitly tell ty to ignore these.</p>
<p>How can this be done?</p>
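<p>To the best of my knowledge (worth double-checking against ty's documentation, since the tool is still pre-release): ty honors the standard <code># type: ignore</code> comment and also has its own rule-scoped suppression comment; the rule name below is only an illustrative assumption.</p>
<pre class="lang-py prettyprint-override"><code>x: int = "not an int"  # ty: ignore[invalid-assignment]   (rule name assumed)
y: int = "still fine"  # type: ignore
</code></pre>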
|
<python><python-typing><ty>
|
2025-05-19 10:13:50
| 3
| 12,603
|
Hericks
|
79,628,462
| 11,251,373
|
Mock return value of CURL called from within Postgres stored procedure
|
<p>I have a Postgres stored procedure that makes an API call via a curl command from within itself to obtain a CSV file from an external server:</p>
<pre class="lang-sql prettyprint-override"><code>create procedure import_national_standards()
language plpgsql
as
$$
BEGIN
SET LOCAL statement_timeout = '60s';
DROP TABLE IF EXISTS tmp;
CREATE TEMPORARY TABLE tmp (
code text,
description text,
is_active text,
oks_code text
);
COPY tmp (code, description, is_active, oks_code)
FROM PROGRAM 'curl -L -m 10 --compressed https:// ........ 20220330.csv'
WITH
delimiter ';'
csv
header
;
DO SOMETHING WITH DATA ....
</code></pre>
<p>Is it possible to somehow mock the response received from the aforementioned curl command with a simple CSV file made specially for testing, and launch this mock as a pytest fixture? The stored procedure is called from an Airflow SQL operator and must be tested from within the Airflow service as well.</p>
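<p>A minimal sketch of one approach (all paths and names here are hypothetical): since <code>COPY ... FROM PROGRAM</code> runs the command through the server's shell, a fake <code>curl</code> placed earlier on the Postgres server's <code>PATH</code> can emit a test CSV instead of calling the real API, and a pytest fixture can install and remove that stub around the test.</p>
<pre class="lang-py prettyprint-override"><code>import os
import stat
import pytest

FAKE_CURL = """#!/bin/sh
# ignore all arguments and emit the test fixture instead of calling the real server
cat /test_data/national_standards_fixture.csv
"""

@pytest.fixture
def mocked_curl():
    # Assumption: ./pg_override_bin is mounted into the Postgres container used by the
    # Airflow tests and prepended to the server's PATH (e.g. in the container entrypoint).
    os.makedirs("pg_override_bin", exist_ok=True)
    stub_path = "pg_override_bin/curl"
    with open(stub_path, "w") as fh:
        fh.write(FAKE_CURL)
    os.chmod(stub_path, os.stat(stub_path).st_mode | stat.S_IEXEC)
    yield stub_path
    os.remove(stub_path)
</code></pre>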
|
<python><airflow><pytest>
|
2025-05-19 09:29:47
| 0
| 2,235
|
Aleksei Khatkevich
|
79,628,442
| 13,392,257
|
Is using mutex in my class redundant because of GIL?
|
<p>I have a class with two threads</p>
<ol>
<li>MainThread - solving tasks one by one (getting <code>self.cur_task_id</code>, and changing <code>self.cur_task_status</code>)</li>
<li>Thread <code>self.report_status_thread</code> - read <code>self.cur_task_id, self.cur_task_status</code> and send values via http</li>
</ol>
<p>I am using a mutex in my class to synchronize these threads. Is that redundant because of the GIL (Global Interpreter Lock)?</p>
<pre><code>class DeepAllocationService():
    def __init__(self,):
        self.mutex = threading.Lock()  # ensure cur_task_id and cur_task_status changed in the same thread
        self.cur_task_id = None
        self.cur_task_status = None
        self.report_status_thread = threading.Thread(target=self.periodic_send_status)

    def periodic_solve_tasks(self,):
        self.report_status_thread.start()
        while True:
            try:
                new_task, task_file = self.get_new_task()
                if new_task:
                    self.update_task_info(new_task["id"], new_task["status"])
                    self.solve_task(new_task, task_file)
            except Exception as e:
                self.cur_task_status = TaskStatus.ERROR
                self.logger.error(f"Exception in periodic_solve_tasks: {e}")
            finally:
                time.sleep(1)

    def update_task_info(self, task_id=None, task_status=None):
        self.mutex.acquire()
        self.cur_task_id = task_id
        self.cur_task_status = task_status
        self.mutex.release()

    def periodic_send_status(self,):
        while True:
            # read-only function - read and send self.cur_task_id and self.cur_task_status
            requests.post()  # sends self.cur_task_id and self.cur_task_status
            time.sleep(2)
</code></pre>
|
<python>
|
2025-05-19 09:17:09
| 1
| 1,708
|
mascai
|
79,628,273
| 10,258,072
|
How to use uv with Artifactory Python Package index
|
<p>How can I run <code>uv run myscript.py</code> using Artifactory as an alternative package index that needs authentication?</p>
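<p>A minimal sketch of the environment-variable route (an assumption to verify against uv's documentation on alternative indexes; newer uv versions can also declare named indexes in <code>pyproject.toml</code>):</p>
<pre class="lang-py prettyprint-override"><code># sketch: point uv at Artifactory's PyPI endpoint with credentials in the URL,
# then run the script through uv as usual (URL, user and token are placeholders)
import os
import subprocess

os.environ["UV_INDEX_URL"] = (
    "https://USER:TOKEN@artifactory.example.com/artifactory/api/pypi/pypi-repo/simple"
)
subprocess.run(["uv", "run", "myscript.py"], check=True)
</code></pre>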
|
<python><artifactory><uv>
|
2025-05-19 07:24:48
| 1
| 1,135
|
Boketto
|
79,628,093
| 219,153
|
Can this similarity measure between different size NumPy arrays be expressed entirely with NumPy API?
|
<p>This script:</p>
<pre><code>import numpy as np
from numpy.linalg import norm
a = np.array([(1, 2, 3), (1, 4, 9), (2, 4, 4)])
b = np.array([(1, 3, 3), (1, 5, 9)])
r = sum([min(norm(a-e, ord=1, axis=1)) for e in b])
</code></pre>
<p>computes a similarity measure <code>r</code> between different size NumPy arrays <code>a</code> and <code>b</code>. Is there a way to express it entirely with NumPy API for greater efficiency?</p>
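<p>A minimal sketch of the same measure expressed with broadcasting only (it materializes a <code>(len(b), len(a), 3)</code> intermediate, so it trades memory for removing the Python-level loop):</p>
<pre class="lang-py prettyprint-override"><code>diff = np.abs(a[None, :, :] - b[:, None, :])   # shape (len(b), len(a), 3)
r = diff.sum(axis=2).min(axis=1).sum()         # L1 distances -> nearest row of a -> total
</code></pre>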
|
<python><arrays><numpy><euclidean-distance>
|
2025-05-19 04:41:07
| 1
| 8,585
|
Paul Jurczak
|
79,627,995
| 1,639,359
|
Pandas Timedelta UnitChoices typing
|
<p>I am writing some python code using typehints and using mypy to check them. I have a variable <code>period</code> that I stated was a string. I later use that variable to instantiate a <code>pandas.Timedelta</code> object, setting the units to <code>period</code>. Here is a minimally reproducible example:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
period: str = "minute"
my_timedelta = pd.Timedelta( 30, unit=period)
print(my_timedelta)
</code></pre>
<p>The above code runs fine and creates a Timedelta object of 30 minutes:</p>
<pre class="lang-bash prettyprint-override"><code>$ python example.py
0 days 00:30:00
$
</code></pre>
<p>When I run <strong>mypy</strong> on the above code, I get the following error:</p>
<pre class="lang-bash prettyprint-override"><code>(pdmkt) dino@DINO(24):~/code/pdmkt$ mypy example.py
example.py:5: error: Argument "unit" to "Timedelta" has incompatible type "str"; expected "Literal['W', 'w', 'D', 'd', 'days', 'day', 'hours', 'hour', 'hr', 'h', 'm', 'minute', 'min', 'minutes', 's', 'seconds', 'sec', 'second', 'ms', 'milliseconds', 'millisecond', 'milli', 'millis', 'us', 'microseconds', 'microsecond', 'µs', 'micro', 'micros', 'ns', 'nanoseconds', 'nano', 'nanos', 'nanosecond']" [arg-type]
Found 1 error in 1 file (checked 1 source file)
</code></pre>
<hr />
<p>Obviously I could declare a type using the Literal, and modify my code so that <code>period</code> is that type:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import typing
PERIOD_TYPE = typing.Literal['W', 'w', 'D', 'd', 'days', 'day', 'hours', 'hour', 'hr', 'h', 'm', 'minute', 'min', 'minutes', 's', 'seconds', 'sec', 'second', 'ms', 'milliseconds', 'millisecond', 'milli', 'millis', 'us', 'microseconds', 'microsecond', 'µs', 'micro', 'micros', 'ns', 'nanoseconds', 'nano', 'nanos', 'nanosecond']
period: PERIOD_TYPE = "minute"
my_timedelta = pd.Timedelta( 30, unit=period)
print(my_timedelta)
</code></pre>
<p>The above code works the same as the first, <em><strong>and mypy reports:</strong></em> "Success: no issues found in 1 source file"</p>
<p>However, rather than declare my own type, it makes sense to me to somehow import the type from Pandas. I looked through the pandas code and found this file:
<a href="https://github.com/pandas-dev/pandas/blob/main/pandas/core/tools/timedeltas.py" rel="nofollow noreferrer">https://github.com/pandas-dev/pandas/blob/main/pandas/core/tools/timedeltas.py</a>
which contains the line:</p>
<pre class="lang-py prettyprint-override"><code>from pandas._libs.tslibs.timedeltas import UnitChoices
</code></pre>
<p>In timedeltas.py the variable <code>unit</code> is of type <code>UnitChoices</code>, so obviously <code>UnitChoices</code> is the type that I want to use.</p>
<p><strong>HERE'S MY QUESTION:</strong></p>
<p><em>How can I import and use <code>UnitChoices</code> from Pandas?</em></p>
<hr />
<hr />
<p><strong>UPDATE:</strong></p>
<p>I tried the suggestion in the answer by @InSync and I am able to successfully import <code>UnitChoices</code> from Pandas, <em>however</em> it appears that the <code>UnitChoices</code> that I get is slightly different than what <code>mypy</code> sees in Pandas.</p>
<p>Here is my latest code example:</p>
<pre class="lang-py prettyprint-override"><code>from __future__ import annotations
import pandas as pd
import typing
if typing.TYPE_CHECKING:
from pandas._libs.tslibs.timedeltas import UnitChoices
period: UnitChoices = "minute"
my_timedelta = pd.Timedelta( 30, unit=period)
print(my_timedelta)
</code></pre>
<p>And here is the error that I now see (with some formatting added to the single-line error message below to make the error message easier to read):</p>
<pre class="lang-bash prettyprint-override"><code>mypy example6.py
example6.py:11: error: Argument "unit" to "Timedelta" has incompatible type
"Literal['Y', 'y', 'M'] | Literal['W', 'w', 'D', 'd', 'days', 'day', 'hours', 'hour', 'hr', 'h', 'm', 'minute', 'min', 'minutes', 's', 'seconds', 'sec', 'second', 'ms', 'milliseconds', 'millisecond', 'milli', 'millis', 'us', 'microseconds', 'microsecond', 'µs', 'micro', 'micros', 'ns', 'nanoseconds', 'nano', 'nanos', 'nanosecond']";
expected
"Literal['W', 'w', 'D', 'd', 'days', 'day', 'hours', 'hour', 'hr', 'h', 'm', 'minute', 'min', 'minutes', 's', 'seconds', 'sec', 'second', 'ms', 'milliseconds', 'millisecond', 'milli', 'millis', 'us', 'microseconds', 'microsecond', 'µs', 'micro', 'micros', 'ns', 'nanoseconds', 'nano', 'nanos', 'nanosecond']" [arg-type]
Found 1 error in 1 file (checked 1 source file)
</code></pre>
<p>The definitions of <code>UnitChoices</code> appear the same <em><strong>except for</strong></em> the one that I am getting in my example code has the following extra type at the beginning: <code>Literal['Y', 'y', 'M'] | ...</code></p>
<p><strong>Any ideas?</strong></p>
|
<python><pandas><python-typing>
|
2025-05-19 02:08:09
| 1
| 7,874
|
Daniel Goldfarb
|
79,627,856
| 4,704,065
|
Iterate over a Dataframe column and find value based on condition
|
<p>I have a DataFrame with 2 columns.
I want to compare the values of the first column with some threshold over 5 consecutive iterations and, if the condition holds for all of them, record the corresponding value of the other column.</p>
<p>DF:
In the example below I need to find the value of '<strong>Inst</strong>' at which '<strong>Error</strong>' drops below 2.5 for the next 5 consecutive iterations.</p>
<p>Expected result: Inst values 204273 and 204302</p>
<p>Below is what I tried, but it did not work. Any pointers or a better way of implementing it would be nice.</p>
<pre><code>count = 0
for i in range(len(df["Inst"])):
while count < 6:
if df["Error"][i] < 2.5:
count += 1
continue
result = df["Inst"][i-5]
</code></pre>
<p>Below is my DF:</p>
<pre><code> Error Inst
0 2.595795 204267
1 2.568556 204268
2 2.562618 204269
4 2.538956 204271
5 2.520247 204272
6 2.498345 204273 #
7 2.474890 204274
8 2.467736 204275
9 2.471115 204276
10 2.466424 204280
11 2.495388 204284
12 2.520301 204285
13 2.604358 204291
14 2.553243 204299
15 2.490774 204302 #
16 2.452384 204303
17 2.434171 204304
18 2.404764 204305
19 2.388775 204306
20 2.384337 204307
</code></pre>
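<p>A minimal sketch of a loop-free way to express this (the run bookkeeping with <code>cumsum</code> is the standard consecutive-groups idiom; the expected output is taken from the example above):</p>
<pre class="lang-py prettyprint-override"><code>below = df["Error"].lt(2.5)
run_id = (below != below.shift()).cumsum()      # new id every time the mask flips
runs = df[below].groupby(run_id[below])["Inst"].agg(first_inst="first", run_len="size")
print(runs.loc[runs["run_len"] >= 5, "first_inst"].tolist())   # [204273, 204302]
</code></pre>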
|
<python><pandas><dataframe>
|
2025-05-18 21:21:41
| 2
| 321
|
Kapil
|
79,627,795
| 1,332,263
|
Why does my approach to destroy window fail?
|
<p>I want the EXIT button to close the window but I get the following error:</p>
<pre><code>self.button=ttk.Button(self, text=' EXIT', command=app.exit).grid(column=0, row=0, sticky='ew')
NameError: name 'app' is not defined. Did you mean: 'App'?
</code></pre>
<p>Can someone tell me why and how to fix it please?</p>
<pre><code>#!/usr/bin/python3.9
import tkinter as tk
from tkinter import ttk

class ButtonFrame(ttk.Frame):
    def __init__(self, container):
        super().__init__(container)
        self.__create_widgets()

    def __create_widgets(self):
        self.button=ttk.Button(self, text=' EXIT', command=app.exit).grid(column=0, row=0, sticky='ew')
        self.grid(padx=0, pady=0)

class App(tk.Tk):
    def __init__(self):
        super().__init__()
        self.configure(bg='#ff3800')
        self.geometry("500x200")  # wxh
        self.resizable(False, False)
        # layout on the root window
        self.columnconfigure(0, weight=1)
        self.__create_widgets()

    def __create_widgets(self):
        # create the button frame
        button_frame = ButtonFrame(self)
        button_frame.grid(column=0, row=0)

    def exit(self):
        self.destroy()

if __name__ == "__main__":
    app = App()
    app.mainloop()
</code></pre>
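<p>For reference, a minimal sketch of a drop-in replacement for <code>ButtonFrame</code> that avoids referring to the module-level <code>app</code> name (which is not bound yet while <code>App()</code> is still being constructed): the frame can reach its own toplevel window directly.</p>
<pre class="lang-py prettyprint-override"><code>import tkinter as tk
from tkinter import ttk

class ButtonFrame(ttk.Frame):
    def __init__(self, container):
        super().__init__(container)
        self.__create_widgets()

    def __create_widgets(self):
        # winfo_toplevel() returns the window this frame lives in, so no global is needed
        self.button = ttk.Button(self, text=' EXIT',
                                 command=self.winfo_toplevel().destroy)
        self.button.grid(column=0, row=0, sticky='ew')
        self.grid(padx=0, pady=0)
</code></pre>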
|
<python><python-3.x><tkinter>
|
2025-05-18 20:17:01
| 1
| 417
|
bob_the_bob
|
79,627,633
| 1,134,991
|
Python loop over nested generator with two variables, in one for loop
|
<p>Hello all. This is when using Python 3.<br>
Say I have a tuple/list/generator whose elements are always two-element tuples.<br>
A simple sample would be this:
<code>((1, 2), (3, 4))</code><br>
I want to loop (no nested loops), with two loop variables, akin to what is done when using <code>enumerate</code>.<br>
I.e., the code:</p>
<pre><code>for fst, scnd in Some_operator(((1, 2), (3, 4))):
    print(fst * scnd)
</code></pre>
<p>Should output:</p>
<pre><code>2
12
</code></pre>
<p>The question is: does <code>Some_operator</code> exist, and if so, what is it?<br>Thanks.</p>
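<p>For comparison, a minimal sketch showing that plain tuple unpacking in the <code>for</code> statement already does this, with no extra operator:</p>
<pre class="lang-py prettyprint-override"><code>pairs = ((1, 2), (3, 4))
for fst, scnd in pairs:     # each element is unpacked into the two loop variables
    print(fst * scnd)
# 2
# 12
</code></pre>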
|
<python><loops>
|
2025-05-18 16:37:43
| 1
| 3,129
|
user1134991
|
79,627,579
| 6,014,277
|
Spire.PDF for Python: PdfDocument.Close() gives attribute error
|
<p>Code (adapted from recipe book here: <a href="https://www.e-iceblue.com/Tutorials/Python/Spire.PDF-for-Python/Program-Guide/Conversion/Python-Convert-PDF-to-SVG.html#1" rel="nofollow noreferrer">https://www.e-iceblue.com/Tutorials/Python/Spire.PDF-for-Python/Program-Guide/Conversion/Python-Convert-PDF-to-SVG.html#1</a> )</p>
<pre><code>from spire.pdf.common import *
from spire.pdf import *
myfile = 'c:/atelier/myimage.pdf'
doc = PdfDocument()
doc.LoadFromFile(myfile)
doc.SaveToFile('c:/atelier/ToSVG.svg', FileFormat.SVG)
doc.Close()
</code></pre>
<p>gives this error:</p>
<pre><code>Exception ignored in: <function PdfDocument.__del__ at 0x00000172D7233740>
Traceback (most recent call last):
  File "C:\Users\suzanne\AppData\Local\Programs\Python\Python312\Lib\site-packages\spire\pdf\PdfDocument.py", line 48, in __del__
AttributeError: 'NoneType' object has no attribute 'PdfDocument_Dispose'
</code></pre>
<p>It is the doc.Close() line that gives this error. The ToSVG.svg file is created just fine.</p>
<p>I have Python 3.12, spire.pdf==9.8.0, plum-dispatch==1.7.4, and I downloaded/installed the spire.pdf_11.3 binaries.</p>
|
<python><pdf><spire.pdf>
|
2025-05-18 15:29:25
| 0
| 754
|
Suzanne
|
79,627,492
| 13,727,105
|
Trainer is failing to load optimizer save state when resuming training
|
<h3>Intro to the problem</h3>
<p>I am trying to train Llama-3.1 8B on an H100 but I keep running into the following error when trying to resume training</p>
<pre><code>...
File "/home/jovyan/folder/training/.venv/lib/python3.10/site-packages/transformers/trainer.py", line 2405, in _inner_training_loop
self._load_optimizer_and_scheduler(resume_from_checkpoint)
File "/home/jovyan/folder/training/.venv/lib/python3.10/site-packages/transformers/trainer.py", line 3452, in _load_optimizer_and_scheduler
self.optimizer.load_state_dict(
File "/home/jovyan/folder/training/.venv/lib/python3.10/site-packages/accelerate/optimizer.py", line 107, in load_state_dict
self.optimizer.load_state_dict(state_dict)
File "/home/jovyan/folder/training/.venv/lib/python3.10/site-packages/torch/_compile.py", line 51, in inner
return disable_fn(*args, **kwargs)
File "/home/jovyan/folder/training/.venv/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 838, in _fn
return fn(*args, **kwargs)
File "/home/jovyan/folder/training/.venv/lib/python3.10/site-packages/torch/optim/optimizer.py", line 872, in load_state_dict
raise ValueError(
ValueError: loaded state dict contains a parameter group that doesn't match the size of optimizer's group
</code></pre>
<h3>What I am trying to do</h3>
<p>I have two scripts, <code>train.py</code> and <code>resume.py</code>. <code>train.py</code> keeps crashing every now and then, likely due to issues with the hardware so I am trying to resume training from a checkpoint every time it crashes. I am also quantizing the model and using Lora to add trainable parameters.</p>
<pre class="lang-py prettyprint-override"><code># train.py
from data import LanguageDataset
from torch.utils.data import DataLoader
from transformers import Trainer, TrainingArguments, DataCollatorForLanguageModeling, AutoTokenizer, BitsAndBytesConfig, AutoModelForCausalLM
from tokenization import LanguageTokenizer
from torch.optim import AdamW
import torch
from peft import LoraConfig, TaskType, PeftModel, PeftConfig, get_peft_model
from safetensors.torch import save_model
def train_loop():
dataset = LanguageDataset()
quantization_configs = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_compute_dtype=torch.bfloat16,
bnb_4bit_quant_type="nf4",
bnb_4bit_use_double_quant=True,
)
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B"
, device_map="auto", quantization_config=quantization_configs)
t = LanguageTokenizer().tokenizer
model.resize_token_embeddings(len(t), mean_resizing=True)
training_args = TrainingArguments(
output_dir="./training_output",
per_device_train_batch_size=8,
num_train_epochs=3,
save_steps=10,
save_strategy="steps",
save_safetensors=False,
logging_steps=10,
)
lora_config = LoraConfig(
task_type=TaskType.CAUSAL_LM,
inference_mode=False,
r=8,
lora_alpha=32,
lora_dropout=0.1
)
peft_model = get_peft_model(model=model, peft_config=lora_config)
data_collator = DataCollatorForLanguageModeling(tokenizer=t, mlm=False)
trainer = Trainer(
model=peft_model,
processing_class=t,
args=training_args,
train_dataset=dataset,
data_collator=data_collator
)
trainer.train()
if __name__ == "__main__":
train_loop()
</code></pre>
<pre class="lang-py prettyprint-override"><code># resume.py
from data import LanguageDataset
from torch.utils.data import DataLoader
from transformers import Trainer, TrainingArguments, DataCollatorForLanguageModeling, AutoTokenizer, BitsAndBytesConfig, AutoModelForCausalLM
from config import epochs, model_name, batch_size
from tokenization import LanguageTokenizer
from torch.optim import AdamW
import torch
from peft import LoraConfig, TaskType, PeftModel, PeftConfig
from safetensors.torch import save_model
def resume_loop():
dataset = LanguageDataset()
quantization_configs = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_compute_dtype=torch.bfloat16,
bnb_4bit_quant_type="nf4",
bnb_4bit_use_double_quant=True,
)
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B", device_map="auto", quantization_config=quantization_configs)
t = LanguageTokenizer().tokenizer
model.resize_token_embeddings(len(t), mean_resizing=True)
peft_model = PeftModel.from_pretrained(model, "./training_output/checkpoint-20")
training_args = TrainingArguments(
output_dir="./training_output",
per_device_train_batch_size=8,
num_train_epochs=3,
save_steps=10,
save_strategy="steps",
save_safetensors=False,
logging_steps=10,
)
data_collator = DataCollatorForLanguageModeling(tokenizer=t, mlm=False)
t = AutoTokenizer.from_pretrained("./training_output/checkpoint-20")
trainer = Trainer(
model=peft_model,
processing_class=t,
args=training_args,
train_dataset=dataset,
data_collator=data_collator,
)
trainer.train(resume_from_checkpoint="./training_output/checkpoint-20")
if __name__ == "__main__":
resume_loop()
</code></pre>
<p>The tokenizer I am defining is Llama's pretrained tokenizer in the AutoTokenizer class. It is tokenizing with a <code>target_text</code> so the outputs include <code>input_ids</code>, <code>attention_mask</code> and <code>labels</code>.</p>
<p>The issue seems to be with loading the optimizer save state. In the checkpoint folder, <code>optimizer.pt</code> and <code>scheduler.pt</code> files do exist. But I am quite a bit confused because I am letting <code>Trainer</code> pick the optimizer and scheduler. I am not quite sure what I am doing wrong here.</p>
<p>I've tried to make my code resemble other training code I found on the internet as closely as possible. I have also tried disabling <code>mean_resizing</code>, to no avail.</p>
|
<python><pytorch><huggingface-transformers><huggingface-trainer>
|
2025-05-18 13:35:13
| 0
| 369
|
Praanto
|
79,627,250
| 11,484,423
|
Python portable virtual environment
|
<p>I have a desktop and a laptop I use during commute. Both have the same OS.</p>
<p>I want to work on a Python project on the desktop, sync files, then work on the laptop, sync files, work on the desktop and so on.</p>
<p>What I tried:</p>
<ul>
<li>Bare system Python: it doesn't work, because something works with Python 3.10, something works with Python 3.13, so at least I need different interpreters for each project, different projects also use different libraries. Some use pygame, others matplotlib.</li>
<li>venv, virtualenv, uv: it doesn't work. I dug in, and it seems they use absolute paths, which break if I use them from another computer. I tried to change the scripts, but they break if I try relative paths.</li>
<li>I made two environments, one for the laptop, one for the desktop, for each and every project, but it feels silly to use <em>two</em> executables with hundreds of megabytes of dependencies, for <em>each</em> project, for a language that is supposed to use an OS-independent interpreter.</li>
</ul>
<p>It feels especially silly when I move a project with its interpreter to another folder on the same system and it stops working, so I need to delete the env and rebuild it. The executable and libraries are identical; it's just that the config paths are absolute and they break if used from another folder. Why?</p>
<p>I likely need a competently designed virtual environment of some kind, that packages the executable and libraries with relative paths inside the project, so that e.g. it doesn't get confused if I move from D drive to F drive.</p>
<p>How can I achieve that?</p>
<h3>Example using uv</h3>
<p>uv venv:</p>
<ul>
<li>created on a computer with system python in "D:\Programs\Python_3_13\python.exe" and projects in "D:"</li>
<li>running on a computer with system python on "C:\Program Files\Python311\python.exe" and projects in "D:"</li>
<li>I'm not using either of the system Pythons; different projects require different versions of Python and libs. It has to be a project-dependent venv.</li>
<li>The venv will use the venv python if it's placed in the exact spot it was created in, using the venv python "D:\Data\Project\Project Programming\Project Python\python_environment_uv\shaka\Scripts\python.exe"</li>
<li>The venv will brick if the folder is not in the exact spot it was created on, in the exact computer it was created on. It's supposed to use the venv python in: "D:\Data\Project\Project Programming\Project Python\python_environment_uv\shaka\Scripts\python.exe"</li>
</ul>
<p>pyvenv.cfg</p>
<pre class="lang-none prettyprint-override"><code>home = D:\Programs\Python_3_13
implementation = CPython
uv = 0.6.8
version_info = 3.13.1
include-system-site-packages = false
</code></pre>
<p>activate</p>
<pre class="lang-none prettyprint-override"><code>...
VIRTUAL_ENV='D:\Data\Project\Project Programming\Project Python\python_environment_uv\shaka'
...
</code></pre>
<p>error when running any script</p>
<pre class="lang-none prettyprint-override"><code>D:\Data\Project\Project Programming\Project Python\python_environment_uv>call shaka\Scripts\activate
(shaka) D:\Data\Project\Project Programming\Project
Python\python_environment_uv>python test.py
did not find executable at 'D:\Programs\Python_3_13\python.exe': Impossibile trovare il percorso specificato.
</code></pre>
<p>The venv will activate without any issue, but when trying to run a script it's broken, because the absolute paths are all wrong, even though the Python it's supposed to use is inside the venv, alongside all the libraries and dependencies.</p>
<p>None of this makes any sense to me. There is a Python executable in the virtual environment, but there are all sorts of absolute paths in the configuration and scripts of the virtual environment that break if you move it.</p>
<p>I would expect everything to be a relative path from the root of the project. Why would venv care where the system Python is? It's not supposed to use it at all.</p>
|
<python><portability><virtual-environment>
|
2025-05-18 08:09:09
| 3
| 670
|
05032 Mendicant Bias
|
79,627,095
| 9,779,999
|
UnslothGKDTrainer SyntaxError with Gemma 3: Missing unsloth_zoo Directory
|
<p>I'm trying to fine-tune Gemma 3 models (<code>egodfred/gemma-3-finetune-text_to_sql</code>, <code>nadmozg/gemma-3-12b-it-sql-finetuned</code>, <code>google/gemma-3-4b-it</code>) for SQL generation using Unsloth in a Conda environment, but I keep encountering a <code>SyntaxError</code> related to <code>UnslothGKDTrainer.py</code>. The file and its parent directory <code>unsloth_zoo</code> are missing, preventing me from importing <code>unsloth.FastLanguageModel</code>. I've tried multiple solutions, but none have resolved the issue. Any help or insights would be greatly appreciated!</p>
<h2>Environment</h2>
<ul>
<li><strong>OS</strong>: Ubuntu 22.04</li>
<li><strong>Python</strong>: 3.10</li>
<li><strong>PyTorch</strong>: 2.5.1+cu121</li>
<li><strong>Transformers</strong>: 4.51.3</li>
<li><strong>Unsloth</strong>: 2025.5.4 (commit <code>17c976c</code>)</li>
<li><strong>Other Dependencies</strong>: <code>bitsandbytes==0.45.5</code>, <code>peft==0.13.0</code>, <code>accelerate==1.0.1</code>, <code>trl==0.11.4</code>, <code>pandas</code>, <code>tqdm</code>, <code>python-dotenv</code>, <code>openai</code></li>
<li><strong>Hardware</strong>: NVIDIA RTX 4090 (24GB VRAM)</li>
<li><strong>NVIDIA Driver</strong>: 566.24, CUDA 12.7</li>
</ul>
<h2>Error Messages</h2>
<p>When running:</p>
<pre class="lang-py prettyprint-override"><code>from unsloth import FastLanguageModel
print('Unsloth imported successfully')
</code></pre>
<p>I get:</p>
<pre><code>🦥 Unsloth: Will patch your computer to enable 2x faster free finetuning.
🦥 Unsloth Zoo will now patch everything to make training faster!
Standard import failed for UnslothGKDTrainer: non-default argument follows default argument (UnslothGKDTrainer.py, line 621). Using tempfile instead!
Standard import failed for UnslothGKDTrainer: non-default argument follows default argument (UnslothGKDTrainer.py, line 621). Using spec.loader.exec_module instead!
Traceback (most recent call last):
File "/home/ubuntu2022/miniconda/envs/cu121torch251/lib/python3.10/site-packages/unsloth_zoo/compiler.py", line 391, in create_new_function
new_module, old_path = import_module(compile_folder, name)
File "/home/ubuntu2022/miniconda/envs/cu121torch251/lib/python3.10/site-packages/unsloth_zoo/compiler.py", line 386, in import_module
new_module = importlib.import_module(name)
...
File "/home/ubuntu2022/unsloth_compiled_cache/UnslothGKDTrainer.py", line 621
sft_args,
^^^^^^^^
SyntaxError: non-default argument follows default argument
During handling of the above exception, another exception occurred:
...
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/ubuntu2022/miniconda/envs/cu121torch251/lib/python3.10/site-packages/unsloth/__init__.py", line 247, in <module>
from .models import *
...
File "/home/ubuntu2022/miniconda/envs/cu121torch251/lib/python3.10/site-packages/unsloth_zoo/compiler.py", line 418, in create_new_function
raise RuntimeError(f"Direct module loading failed for {name}: {e}")
RuntimeError: Direct module loading failed for UnslothGKDTrainer: non-default argument follows default argument (UnslothGKDTrainer.py, line 621)
</code></pre>
<h2>Problem Details</h2>
<ul>
<li>The error suggests a syntax issue in <code>UnslothGKDTrainer.py</code> (line 621), likely a function (e.g., <code>__init__</code>) with a non-default argument (<code>sft_args</code>) following a default argument (<code>model=None</code>).</li>
<li>The file <code>UnslothGKDTrainer.py</code> is missing from <code>~/miniconda/envs/cu121torch251/lib/python3.10/site-packages/unsloth_zoo/</code>.</li>
<li>The entire <code>unsloth_zoo</code> directory is missing in both the installed package and the cloned repository (<code>~/unsloth_repo/unsloth</code>).</li>
<li>The error references cached files (<code>/home/ubuntu2022/unsloth_compiled_cache/UnslothGKDTrainer.py</code>), but these are inaccessible or dynamically generated.</li>
</ul>
<h2>Solutions Tried</h2>
<ol>
<li><p><strong>Reinstalled Unsloth Multiple Times</strong>:</p>
<ul>
<li>Command:
<pre class="lang-bash prettyprint-override"><code>pip install --no-cache-dir "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
</code></pre>
</li>
<li>Result: Always resolves to commit <code>17c976c</code> (~March 2025), installs <code>unsloth-2025.5.4</code> and <code>unsloth_zoo-2025.5.6</code>, but <code>unsloth_zoo</code> directory is missing.</li>
</ul>
</li>
<li><p><strong>Cleared Caches and Package Files</strong>:</p>
<ul>
<li>Commands:
<pre class="lang-bash prettyprint-override"><code>rm -rf ~/unsloth_compiled_cache
rm -rf /tmp/unsloth_compiled_cache
rm -rf ~/miniconda/envs/cu121torch251/lib/python3.10/site-packages/unsloth*
rm -rf ~/miniconda/envs/cu121torch251/lib/python3.10/site-packages/unsloth_zoo*
</code></pre>
</li>
<li>Result: Removed cached files, but reinstalling Unsloth still lacks <code>unsloth_zoo</code>.</li>
</ul>
</li>
<li><p><strong>Recreated Conda Environment</strong>:</p>
<ul>
<li>Removed <code>cu127torch251</code> and created <code>cu121torch251</code>:
<pre class="lang-bash prettyprint-override"><code>conda env remove -n cu127torch251
conda create -n cu121torch251 python=3.10 -y
pip install torch==2.5.1+cu121 torchvision==0.20.1+cu121 torchaudio==2.5.1+cu121 --index-url https://download.pytorch.org/whl/cu121
pip install --no-cache-dir "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git" transformers==4.51.3 bitsandbytes==0.45.5 peft==0.13.0 accelerate==1.0.1 trl==0.11.4 pandas tqdm python-dotenv openai
</code></pre>
</li>
<li>Result: Fresh environment, but <code>unsloth_zoo</code> still missing.</li>
</ul>
</li>
<li><p><strong>Cloned Unsloth Repository</strong>:</p>
<ul>
<li>Command:
<pre class="lang-bash prettyprint-override"><code>git clone https://github.com/unslothai/unsloth.git ~/unsloth_repo/unsloth
cd ~/unsloth_repo/unsloth
ls unsloth_zoo/
</code></pre>
</li>
<li>Result: <code>No such file or directory</code>. Top-level structure:
<pre><code>CODE_OF_CONDUCT.md CONTRIBUTING.md LICENSE README.md images/ pyproject.toml tests/ unsloth/ unsloth-cli.py
</code></pre>
</li>
<li>No <code>unsloth_zoo</code> or <code>UnslothGKDTrainer.py</code>.</li>
</ul>
</li>
<li><p><strong>Checked for Alternative Files</strong>:</p>
<ul>
<li>Command:
<pre class="lang-bash prettyprint-override"><code>find . -name "UnslothGKDTrainer.py" -o -name "*GKDTrainer.py" -o -name "*Trainer.py"
</code></pre>
</li>
<li>Result: No matching files in the repository.</li>
</ul>
</li>
<li><p><strong>Tried Patching (Not Possible)</strong>:</p>
<ul>
<li>Planned to patch <code>UnslothGKDTrainer.py</code> to reorder arguments, but the file is missing.</li>
</ul>
</li>
</ol>
<h2>Questions</h2>
<ul>
<li>Has <code>unsloth_zoo</code> been removed or relocated in Unsloth (v2025.5.4 or main branch)? If so, how can I resolve the <code>SyntaxError</code>?</li>
<li>Are there specific commits post-March 2025 (post-<code>17c976c</code>) that restore <code>unsloth_zoo</code> or fix the <code>SyntaxError</code>?</li>
<li>I’ve checked Unsloth issues on GitHub, but no recent reports match this exact problem.</li>
</ul>
<p>Any suggestions, fixes, or pointers to relevant commits would be greatly appreciated! I can share more details (e.g., script snippets) if needed.</p>
|
<python><artificial-intelligence><gemma>
|
2025-05-18 01:47:07
| 0
| 1,669
|
yts61
|
79,627,075
| 10,461,632
|
python-oracledb fails when initializing thick mode in Docker
|
<p>I am trying to initialize python-oracledb in Thick Mode. It works on my Mac M1 Pro, but when I try to containerize my application with Docker, I get <code>DPI-1047: Cannot locate a 64-bit Oracle Client library: "libclntsh.so: cannot open shared object file: No such file or directory"</code>.</p>
<p>I followed the python-oracledb instructions (<a href="https://python-oracledb.readthedocs.io/en/latest/user_guide/initialization.html" rel="nofollow noreferrer">3.1.3. Enabling python-oracledb Thick Mode on Linux and Related Platforms</a>) and tried to do what was in this <a href="https://stackoverflow.com/questions/77333320/python-oracledb-module-fails-to-use-instantclient-in-a-docker-container">post</a> containing the same error message.</p>
<p>I can see that it is searching in the <code>ORACLE_HOME</code> environment variable (even though I came across multiple posts that say not to use that when using instantclient). I don't see anywhere in the error message that it is searching in <code>LD_LIBRARY_PATH</code>, which is what the python-oracledb instructions say to set. I also printed the directories where python-oracledb should be looking, and the file does in fact exist (see the error message between the =====). What is the issue here?</p>
<p>Check out the end of the post for step-by-step instructions for what I did.</p>
<p><strong>Error message in container</strong></p>
<pre><code>2025-05-17 20:09:40 docker-test | ODPI [00001] 2025-05-18 00:09:40.495: ODPI-C 5.5.1
2025-05-17 20:09:40 docker-test | ODPI [00001] 2025-05-18 00:09:40.495: debugging messages initialized at level 64
2025-05-17 20:09:40 docker-test | ODPI [00001] 2025-05-18 00:09:40.495: Context Parameters:
2025-05-17 20:09:40 docker-test | ODPI [00001] 2025-05-18 00:09:40.495: Oracle Client Config Dir: /opt/oracle/network/admin
2025-05-17 20:09:40 docker-test | ODPI [00001] 2025-05-18 00:09:40.495: Environment Variables:
2025-05-17 20:09:40 docker-test | ODPI [00001] 2025-05-18 00:09:40.495: ORACLE_HOME => "/opt/oracle"
2025-05-17 20:09:40 docker-test | ODPI [00001] 2025-05-18 00:09:40.495: LD_LIBRARY_PATH => "/opt/oracle/instantclient"
2025-05-17 20:09:40 docker-test | ODPI [00001] 2025-05-18 00:09:40.495: LIBPATH => "/opt/oracle/instantclient"
2025-05-17 20:09:40 docker-test | ODPI [00001] 2025-05-18 00:09:40.495: SHLIB_PATH => "/opt/oracle/instantclient"
2025-05-17 20:09:40 docker-test | ODPI [00001] 2025-05-18 00:09:40.495: check ODPI-C module directory
2025-05-17 20:09:40 docker-test | ODPI [00001] 2025-05-18 00:09:40.495: ODPI-C module name is /usr/local/lib/python3.13/site-packages/oracledb/thick_impl.cpython-313-aarch64-linux-gnu.so
2025-05-17 20:09:40 docker-test | ODPI [00001] 2025-05-18 00:09:40.495: load in dir /usr/local/lib/python3.13/site-packages/oracledb
2025-05-17 20:09:40 docker-test | ODPI [00001] 2025-05-18 00:09:40.495: load with name /usr/local/lib/python3.13/site-packages/oracledb/libclntsh.so
2025-05-17 20:09:40 docker-test | ODPI [00001] 2025-05-18 00:09:40.495: load by OS failure: /usr/local/lib/python3.13/site-packages/oracledb/libclntsh.so: cannot open shared object file: No such file or directory
2025-05-17 20:09:40 docker-test | ODPI [00001] 2025-05-18 00:09:40.495: load with OS search heuristics
2025-05-17 20:09:40 docker-test | ODPI [00001] 2025-05-18 00:09:40.495: load with name libclntsh.so
2025-05-17 20:09:40 docker-test | ODPI [00001] 2025-05-18 00:09:40.495: load by OS failure: libclntsh.so: cannot open shared object file: No such file or directory
2025-05-17 20:09:40 docker-test | ODPI [00001] 2025-05-18 00:09:40.495: load with name libclntsh.so.19.1
2025-05-17 20:09:40 docker-test | ODPI [00001] 2025-05-18 00:09:40.495: load by OS failure: libclntsh.so.19.1: cannot open shared object file: No such file or directory
2025-05-17 20:09:40 docker-test | ODPI [00001] 2025-05-18 00:09:40.495: load with name libclntsh.so.18.1
2025-05-17 20:09:40 docker-test | ODPI [00001] 2025-05-18 00:09:40.495: load by OS failure: libclntsh.so.18.1: cannot open shared object file: No such file or directory
2025-05-17 20:09:40 docker-test | ODPI [00001] 2025-05-18 00:09:40.495: load with name libclntsh.so.12.1
2025-05-17 20:09:40 docker-test | ODPI [00001] 2025-05-18 00:09:40.495: load by OS failure: libclntsh.so.12.1: cannot open shared object file: No such file or directory
2025-05-17 20:09:40 docker-test | ODPI [00001] 2025-05-18 00:09:40.495: load with name libclntsh.so.11.1
2025-05-17 20:09:40 docker-test | ODPI [00001] 2025-05-18 00:09:40.495: load by OS failure: libclntsh.so.11.1: cannot open shared object file: No such file or directory
2025-05-17 20:09:40 docker-test | ODPI [00001] 2025-05-18 00:09:40.495: load with name libclntsh.so.20.1
2025-05-17 20:09:40 docker-test | ODPI [00001] 2025-05-18 00:09:40.495: load by OS failure: libclntsh.so.20.1: cannot open shared object file: No such file or directory
2025-05-17 20:09:40 docker-test | ODPI [00001] 2025-05-18 00:09:40.495: load with name libclntsh.so.21.1
2025-05-17 20:09:40 docker-test | ODPI [00001] 2025-05-18 00:09:40.495: load by OS failure: libclntsh.so.21.1: cannot open shared object file: No such file or directory
2025-05-17 20:09:40 docker-test | ODPI [00001] 2025-05-18 00:09:40.495: check ORACLE_HOME
2025-05-17 20:09:40 docker-test | ODPI [00001] 2025-05-18 00:09:40.495: load in dir /opt/oracle/lib
2025-05-17 20:09:40 docker-test | ODPI [00001] 2025-05-18 00:09:40.495: load with name /opt/oracle/lib/libclntsh.so
2025-05-17 20:09:40 docker-test | ODPI [00001] 2025-05-18 00:09:40.495: load by OS failure: /opt/oracle/lib/libclntsh.so: cannot open shared object file: No such file or directory
2025-05-17 20:09:40 docker-test | ==================================================
2025-05-17 20:09:40 docker-test | /opt/oracle/instantclient:
2025-05-17 20:09:40 docker-test | ['BASIC_LICENSE', 'BASIC_README', 'libclntsh.so', 'libclntsh.so.10.1', 'libclntsh.so.11.1', 'libclntsh.so.12.1', 'libclntsh.so.18.1', 'libclntsh.so.19.1', 'libclntshcore.so.19.1', 'libipc1.so', 'libmql1.so', 'libnnz19.so', 'libociei.so', 'network']
2025-05-17 20:09:40 docker-test |
2025-05-17 20:09:40 docker-test | /opt/oracle/lib:
2025-05-17 20:09:40 docker-test | ['BASIC_LICENSE', 'BASIC_README', 'libclntsh.so', 'libclntsh.so.10.1', 'libclntsh.so.11.1', 'libclntsh.so.12.1', 'libclntsh.so.18.1', 'libclntsh.so.19.1', 'libclntshcore.so.19.1', 'libipc1.so', 'libmql1.so', 'libnnz19.so', 'libociei.so', 'network']
2025-05-17 20:09:40 docker-test | ==================================================
2025-05-17 20:09:40 docker-test | Error initializing Oracle client: DPI-1047: Cannot locate a 64-bit Oracle Client library: "/opt/oracle/lib/libclntsh.so: cannot open shared object file: No such file or directory". See https://python-oracledb.readthedocs.io/en/latest/user_guide/initialization.html for help
2025-05-17 20:09:40 docker-test | Help: https://python-oracledb.readthedocs.io/en/latest/user_guide/troubleshooting.html#dpi-1047
2036-01-01 00:00:00
docker-test exited with code 0
</code></pre>
<p><strong>Dockerfile</strong></p>
<pre><code>FROM python:3.13-slim
RUN apt-get update && \
apt-get install -y wget unzip libaio1
ARG ORACLE_CLIENT_HOME=/opt/oracle/instantclient
WORKDIR /opt/oracle
# Download and install Oracle instantclient
RUN mkdir /tmp/oracle && \
wget https://download.oracle.com/otn_software/linux/instantclient/1921000/instantclient-basic-linux.x64-19.21.0.0.0dbru.zip -P /tmp/oracle && \
unzip /tmp/oracle/instantclient-basic-* -d /tmp/oracle && \
mv /tmp/oracle/instantclient_* ${ORACLE_CLIENT_HOME} && \
cd ${ORACLE_CLIENT_HOME} && rm -f *jdbc* *occi* *mysql* *jar uidrvci genezi adrci && \
echo ${ORACLE_CLIENT_HOME} > /etc/ld.so.conf.d/oracle-instantclient.conf && \
ldconfig
# Copy instantclient to lib directory since
# per the oracledb docs, ORACLE_HOME uses $ORACLE_HOME/lib
RUN mkdir lib && cp -r instantclient/* lib
ENV LIBPATH=${ORACLE_CLIENT_HOME}
ENV SHLIB_PATH=${ORACLE_CLIENT_HOME}
ENV LD_LIBRARY_PATH=${ORACLE_CLIENT_HOME}
ENV ORACLE_HOME=/opt/oracle
ENV DPI_DEBUG_LEVEL=64
RUN pip install --upgrade pip && pip install oracledb
WORKDIR /app
ADD main.py ./
CMD ["python", "main.py"]
</code></pre>
<p>Here's what I did step-by-step if you want to try and recreate the error:</p>
<ol>
<li><p>Create a new venv (python==3.13.1 and oracledb==3.1.1).</p>
</li>
<li><p>Pull the docker container from the oracle container registry:</p>
</li>
</ol>
<pre class="lang-bash prettyprint-override"><code>docker pull container-registry.oracle.com/database/express:21.3.0-xe
</code></pre>
<ol start="3">
<li>Create the container:</li>
</ol>
<pre class="lang-bash prettyprint-override"><code>docker container create -it --name oracle -p 1521:1521 -e ORACLE_PWD=welcome123 container-registry.oracle.com/database/express:21.3.0-xe
</code></pre>
<ol start="4">
<li>Start the container:</li>
</ol>
<pre class="lang-bash prettyprint-override"><code>docker start oracle
</code></pre>
<ol start="5">
<li><p>Downloaded the <a href="https://www.oracle.com/database/technologies/instant-client/downloads.html" rel="nofollow noreferrer">instantclient</a> for mac (instantclient-basic-macos.arm64-23.3.0.23.09-2.dmg) and followed the installation instructions on the download page. I moved the instantclient_23_3 folder that gets created in the Downloads folder into my project folder.</p>
</li>
<li><p>Test the oracle connection in python by running <code>python main.py</code>. When I run in my vscode terminal, I get the output I was expecting.</p>
</li>
</ol>
<p><strong>main.py</strong></p>
<pre class="lang-py prettyprint-override"><code>import os
from pathlib import Path
import oracledb
try:
if os.getenv("DOCKER_CONTAINER", "false") == "false":
oracledb.init_oracle_client(
lib_dir=str(Path(__file__).parent.joinpath("instantclient_23_3"))
)
else:
print("=" * 50)
print("/opt/oracle/instantclient:")
print(sorted(os.listdir("/opt/oracle/instantclient")))
print()
print("/opt/oracle/lib:")
print(sorted(os.listdir("/opt/oracle/lib")))
print("=" * 50)
oracledb.init_oracle_client()
pool = oracledb.create_pool(
user="system",
password="welcome123",
dsn="0.0.0.0:1521",
min=1,
max=5,
increment=1,
)
try:
with pool.acquire() as conn:
with conn.cursor() as cursor:
sql = """select sysdate from dual"""
for r in cursor.execute(sql):
print(r)
except Exception as err:
print("Error with pool:", err)
except oracledb.DatabaseError as err:
print("Error initializing Oracle client:", err)
</code></pre>
<ol start="7">
<li>Test the container by running <code>docker compose up -d --build</code>. The Dockerfile is at the top of the post and the docker-compose.yaml is below.</li>
</ol>
<p><strong>docker-compose.yaml</strong></p>
<pre><code>services:
test:
build:
context: .
dockerfile: Dockerfile
image: docker-test-image
container_name: docker-test
environment:
DOCKER_CONTAINER: "true"
</code></pre>
<p>The project folder structure looks like this:</p>
<pre><code>.
├── .venv
├── instantclient_23_3
├── docker-compose.yaml
├── Dockerfile
├── main.py
├── README.md
2 directories, 4 files
</code></pre>
|
<python><oracle-database><docker><instantclient><python-oracledb>
|
2025-05-18 00:27:09
| 1
| 788
|
Simon1
|
79,627,051
| 12,184,608
|
type checking: assignment of merged dicts not compatible, even though individual dicts are
|
<p>I have a situation where I want to assign the union of two dicts (using |) to a typed dict variable. Each of the dicts on its own is assignment compatible, but assigning the result of the merge where at least one of the dicts is a literal results in MyPy and Pyright both reporting an error:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Literal
type MyType = Literal['a', 'b']
def f() -> dict[MyType, str]:
d1: dict[MyType, str] = {'a': 'foo'} # <-- no error
d2: dict[MyType, str] = {'b': 'bar'} # <-- no error
d3: dict[MyType, str] = d1 | d2 # <-- no error
d4: dict[MyType, str] = {'a': 'foo'} | {'b': 'bar'} # <-- error "dict[str, str]" is not assignable to "dict[MyType, str]"
d5: dict[MyType, str] = {'a': 'foo'} | d2 # <-- error "dict[str, str]" is not assignable to "dict[MyType, str]"
return d4
</code></pre>
<p>If type checkers can recognize that each of the dicts on its own is assignment compatible, why can they not recognize that the merge/union of two compatible dicts is assignment compatible?</p>
<p><strong>Update</strong></p>
<p>If I cast the keys as MyType, MyPy is happy, but Pyright still complains: i.e., <code>{cast(MyType, 'a'): 'foo'} | {cast(MyType, 'b'): 'bar'}</code> works for MyPy. And if I cast the whole union, that works for both: i.e. <code>cast(dict[MyType, str], {'a': 'foo'} | {'b': 'bar'})</code>. Seems like they should be able to figure it out, even without the cast, though.</p>
|
<python><python-typing>
|
2025-05-17 23:25:03
| 1
| 364
|
couteau
|
79,626,877
| 15,587,184
|
Pandas ThreadPoolExecutor with 16 workers causes missing or None DataFrames, while using 1 worker works, for Excel I/O tasks
|
<p>I'm working on a real-world data processing pipeline where I process between 300–500 Excel files daily. Each file is read, wrangled into a pandas.DataFrame, and at the end all the resulting DataFrames are concatenated into one.</p>
<p>When I use 1 worker (max_workers=1), everything works 100% fine, every time.</p>
<p>But when I switch to multi-threading with 16 workers (ThreadPoolExecutor(max_workers=16)), I encounter major issues:</p>
<p>Only about 20–50 files (out of ~300) get successfully processed and concatenated.</p>
<p>The rest result in None, or sometimes an error like:</p>
<blockquote>
<p>TypeError: cannot concatenate object of type 'NoneType'</p>
</blockquote>
<p>Some threads produce empty DataFrames even though the source files are valid.</p>
<p>I am not modifying any shared DataFrame between threads. Each thread processes its own file and returns a new DataFrame.</p>
<p>To illustrate this issue, I've written a simplified version that mimics the structure of my real function (the real one is more complex, involving more wrangling and additional columns):</p>
<pre><code>import pandas as pd
from concurrent.futures import ThreadPoolExecutor, as_completed
def process_file(file_path):
    try:
        df = pd.read_excel(file_path)
        df['date'] = pd.to_datetime(df['date'], errors='coerce')
        df['foo'] = df['foo'].astype(str).str.upper()
        if df.empty:
            print(f"[WARN] Empty DataFrame from: {file_path}")
        return df
    except Exception as e:
        print(f"[ERROR] Failed to process {file_path}: {e}")
        return None
def process_files_in_parallel(file_list, max_workers=16):
    results = []
    failed_files = []
    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        futures = {executor.submit(process_file, file): file for file in file_list}
        for future in as_completed(futures):
            file = futures[future]
            try:
                df = future.result()
                if df is not None and not df.empty:
                    results.append(df)
                else:
                    failed_files.append(file)
            except Exception as e:
                print(f"[EXCEPTION] {file}: {e}")
                failed_files.append(file)
    print(f"[SUMMARY] Successfully processed: {len(results)} files")
    print(f"[SUMMARY] Failed or empty: {len(failed_files)} files")
    if not results:
        raise ValueError("No valid dataframes to concatenate.")
    final_df = pd.concat(results, ignore_index=True)
    return final_df
</code></pre>
<p>Example Error Output (Real Case):</p>
<pre><code>[WARN] Empty DataFrame from: file_24.xlsx
[ERROR] Failed to process file_35.xlsx: [Errno 2] No such file or directory
[EXCEPTION] file_58.xlsx: cannot concatenate object of type '<class 'NoneType'>'
...
[SUMMARY] Successfully processed: 23 files
[SUMMARY] Failed or empty: 277 files
</code></pre>
<p>What I've confirmed:</p>
<ul>
<li><p>Each thread works on completely independent files. No shared state.</p>
</li>
<li><p>All files exist and are valid Excel files.</p>
</li>
<li><p>When running the same logic with max_workers=1, it consistently succeeds.</p>
</li>
<li><p>The issue arises only when increasing the number of workers.</p>
</li>
</ul>
<p>My guess is this might be due to:</p>
<ul>
<li><p>Thread-safety issues with pandas.read_excel or the underlying engine?</p>
</li>
<li><p>OS-level I/O limits?</p>
</li>
<li><p>Too many concurrent reads on disk?</p>
</li>
</ul>
<hr />
<p>Questions:</p>
<ul>
<li>Why does ThreadPoolExecutor with multiple workers (e.g., 16) cause so many files to return None or empty DataFrames?</li>
<li>How can I safely parallelize the reading and wrangling of hundreds of Excel files?</li>
</ul>
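<p>To help narrow this down, one diagnostic I plan to run is a second, purely serial pass over the files that failed in the parallel pass. This is only a sketch: it reuses the <code>process_file</code> function above, and assumes <code>process_files_in_parallel</code> is changed to also return its <code>failed_files</code> list. If everything succeeds on the retry, the files themselves are fine and the problem really is concurrency-related.</p>
<pre><code>def retry_serially(failed_files):
    # Re-read the failed files one by one with exactly the same logic.
    recovered, still_failing = [], []
    for file in failed_files:
        df = process_file(file)
        if df is not None and not df.empty:
            recovered.append(df)
        else:
            still_failing.append(file)
    print(f"[RETRY] recovered: {len(recovered)}, still failing: {len(still_failing)}")
    return recovered, still_failing
</code></pre>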
|
<python><excel><multithreading>
|
2025-05-17 18:16:29
| 0
| 809
|
R_Student
|
79,626,781
| 9,549,068
|
pixi how to exit the venv shell in vscode?
|
<p>I'm using <strong>pixi</strong> for Python in vscode.</p>
<p>In a normal terminal I can run <code>pixi shell</code> and <code>exit</code>, just like <code>.venv\Scripts\activate</code> & <code>deactivate</code> to enter or leave the venv.</p>
<p>But in VS Code, the terminal automatically enters <code>pixi shell</code> mode and there is <strong>no way to <code>exit</code></strong> -- <code>exit</code> just closes the terminal entirely.</p>
<ul>
<li>(Even if I restart VS Code, disable all extensions, and then re-enable only the Python-related ones, it goes back to normal at first and does not auto-enter the venv; but after a few minutes it starts to warn me again, or the next time I open VS Code it auto-enters the venv again.)</li>
</ul>
<p><a href="https://i.sstatic.net/M6YO5DRp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/M6YO5DRp.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/oTZHNzxA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oTZHNzxA.png" alt="enter image description here" /></a></p>
<hr />
<p><em><strong>Update</strong></em></p>
<ul>
<li>
<pre><code> "python.terminal.activateEnvironment": false,
"python.terminal.activateEnvInCurrentTerminal": false,
</code></pre>
<p><a href="https://stackoverflow.com/questions/54802148/prevent-visual-studio-code-from-activating-the-python-virtual-environment">Prevent Visual Studio Code from activating the Python virtual environment</a></p>
</li>
</ul>
<p>These settings can <strong>disable the auto-activation of the venv</strong>.</p>
<p>But they don't explain why I cannot exit/deactivate and return to a normal terminal.</p>
|
<python><visual-studio-code><python-venv><pixi-package-manager>
|
2025-05-17 16:56:24
| 0
| 1,595
|
Nor.Z
|
79,626,632
| 459,745
|
Calling asynchronous function from synchronous function inside Jupyter
|
<p>In our project, a number of functions come in 2 flavors: asynchronous and synchronous. We write the asynchronous version and provide a <code>runa()</code> wrapper around the <code>asyncio.run()</code> function:</p>
<pre><code>async def read_sensor_async():
print("Reading sensor...")
# Actual code
return 9 # Simulate sensor reading
def read_sensor():
return runa(read_sensor_async())
def runa(func):
# Prepare steps...
return asyncio.run(func)
</code></pre>
<p>Things seem to be all right up to this point. However, when we run <code>read_sensor()</code> in a Jupyter notebook (which means inside a running event loop), we get a <code>RuntimeError</code> from calling <code>asyncio.run()</code>. Reading the documentation, this is the expected behavior. So, to deal with this situation, we need to prepare <code>runa</code> to work both inside and outside an event loop. Here is our version 2:</p>
<pre><code>def runa(func):
    # Prepare steps...
    with contextlib.suppress(RuntimeError):
        return asyncio.run(func)
    # Gets here means we are inside an event loop
    return await func
</code></pre>
<p>The problem with this version is that, since <code>runa()</code> is a synchronous function, we cannot use <code>await</code>. What can we do to call <code>func</code>?</p>
<h2>Update</h2>
<p>A better question would be: How can we rewrite the function <code>read_sensor()</code> to work synchronously and asynchronously?</p>
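<p>One pattern we are currently experimenting with (a sketch, not something we have settled on): detect whether a loop is already running, and if so run the coroutine on a helper thread with its own event loop, so the synchronous wrapper can still block on the result:</p>
<pre><code>import asyncio
from concurrent.futures import ThreadPoolExecutor
def runa(coro):
    # Prepare steps...
    try:
        asyncio.get_running_loop()
    except RuntimeError:
        # No loop running (plain script): asyncio.run is safe here
        return asyncio.run(coro)
    # A loop is already running (e.g. Jupyter): run the coroutine on a
    # separate thread with its own event loop and block for the result.
    with ThreadPoolExecutor(max_workers=1) as pool:
        return pool.submit(asyncio.run, coro).result()
</code></pre>
<p>This keeps <code>read_sensor()</code> synchronous in both environments, at the cost of blocking the notebook's own loop while the helper thread runs.</p>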
|
<python><jupyter-notebook><python-asyncio>
|
2025-05-17 14:17:32
| 1
| 41,381
|
Hai Vu
|
79,626,566
| 8,126,390
|
VS Code Python extension seeing some but not all Django classes
|
<p>Simple problem: the Python extension of VS Code is not seeing one particular Django class, while all other classes in the same file are detected. Restarting, disabling/enabling, and switching to the pre-release version of the extension don't change anything. I confirmed it is the extension, as disabling it removes all import suggestions.</p>
<p><em>edited to add code "text"</em></p>
<pre><code>from django.contrib.auth.forms import UserChangeForm, AdminUserCreationForm
from .models import CoreUser
</code></pre>
<p><a href="https://i.sstatic.net/8Xl2iaTK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8Xl2iaTK.png" alt="python extension not seeing class" /></a></p>
<p>AdminUserCreationForm is not being detected while UserCreationForm and other classes are visible. If I visit the source file by CTRL+click I can clearly see the class there:</p>
<pre><code>class AdminUserCreationForm(SetUnusablePasswordMixin, UserCreationForm):
    usable_password = SetUnusablePasswordMixin.create_usable_password_field()
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.fields["password1"].required = False
        self.fields["password2"].required = False
</code></pre>
<p><a href="https://i.sstatic.net/TMLbdUJj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TMLbdUJj.png" alt="Django code base clearly contains the class" /></a></p>
<p>I've tried clearing a few VS Code caches (under AppData\Roaming\Code) to no avail.</p>
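<p>For what it's worth, <code>AdminUserCreationForm</code> was only added in Django 5.1 as far as I can tell, so one thing I still want to rule out is Pylance resolving a different Django installation than the file I reach via Ctrl+click. A quick sanity-check sketch, run with the same interpreter VS Code has selected:</p>
<pre><code>import django
import django.contrib.auth.forms as auth_forms
print(django.__version__)                           # needs to be >= 5.1 for AdminUserCreationForm
print(auth_forms.__file__)                          # which installation is actually imported
print(hasattr(auth_forms, "AdminUserCreationForm"))
</code></pre>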
|
<python><django><visual-studio-code>
|
2025-05-17 13:15:41
| 0
| 740
|
Brian
|
79,626,509
| 13,392,257
|
How to improve selenium script so that the captcha does not appear
|
<p>My aim is to fetch search-engine results (URLs) from yandex.ru.</p>
<p>I am running the Selenium script below. The script works fine on my PC, but on the server I am shown a sophisticated captcha (see the image below).</p>
<p>How can I improve the script so that the captcha does not appear?</p>
<pre><code>import undetected_chromedriver as uc
import random
from selenium.webdriver.common.by import By
from datetime import datetime
import time
import logging
import traceback
import pathlib
logging.basicConfig(
level=logging.INFO,
format="%(asctime)s [%(levelname)s] %(message)s",
handlers=[
logging.FileHandler("yandex_parser.log"),
logging.StreamHandler()
]
)
logger = logging.getLogger('ParserLogger')
class YandexParser():
    def __init__(self, USE_GUI=True):
        self.chrome_options = uc.ChromeOptions()
        if not USE_GUI:
            self.chrome_options.add_argument('--headless')
        self.chrome_options.add_argument('--no-sandbox')
        self.chrome_options.add_argument('--disable-dev-shm-usage')
        self.chrome_options.add_argument('--start-maximized')
        self.chrome_options.add_argument('--disable-blink-features=AutomationControlled')
        self.chrome_options.add_argument("user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36")
        # self.chrome_options.add_argument('--proxy-server=http://your-proxy:port') # Optional
        self.driver = uc.Chrome(options=self.chrome_options)
        self.driver.execute_cdp_cmd('Page.addScriptToEvaluateOnNewDocument', {
            'source': '''
                Object.defineProperty(navigator, 'webdriver', {get: () => undefined});
            '''
        })
        self.driver.get("https://ya.ru/")
        time.sleep(random.uniform(1.5, 3.5))
    def close(self):
        self.driver.quit()
    def check_captcha(self):
        """ Try to overcome isBot button"""
        cur_time = str(datetime.now()).replace(' ', '_')
        if "showcaptcha" in self.driver.current_url:
            logger.info("Captcha found")
            self.driver.save_screenshot(f'screens/img_captcha_{cur_time}.png')
            raw_button = self.driver.find_elements(By.XPATH, "//input[@class='CheckboxCaptcha-Button']")
            if raw_button:
                raw_button[0].click()
                logger.info("Button clicked")
                time.sleep(random.uniform(1, 2))
                self.driver.save_screenshot(f'screens/img_captcha_afterclick_{cur_time}.png')
        else:
            self.driver.save_screenshot(f'screens/img_{cur_time}.png')
    def parse(self, film_name: str):
        logger.info(f"Start parse {film_name}")
        result_urls = []
        try:
            self.driver.get(f"https://ya.ru/search/?text={film_name}&lr=213&search_source=yaru_desktop_common&search_domain=yaru")
            self.check_captcha()
            for i in range(1, 5):
                result_urls.extend(self.parse_page(page_id=i))
                self.get_next_page()
                self.check_captcha()
                # Human-like random delay
                time.sleep(random.uniform(2, 5))
        except Exception:
            logger.error(f"Exception in {traceback.format_exc()}")
        finally:
            logger.info(f"Found {len(result_urls)} for film {film_name}: {result_urls}")
    def parse_page(self, page_id):
        res = []
        urls_raw = self.driver.find_elements(By.XPATH, value='//a[@class="Link Link_theme_normal OrganicTitle-Link organic__url link"]')
        for url_raw in urls_raw:
            href = url_raw.get_attribute("href")
            if href and "yabs.yandex.ru" not in href:
                res.append(href)
        logger.info(f"Found {len(res)} urls on page {page_id}")
        return res
    def get_next_page(self):
        next_link_raw = self.driver.find_elements(By.XPATH, '//div[@class="Pager-ListItem Pager-ListItem_type_next"]')
        if next_link_raw:
            next_link_raw[0].click()
            # Human-like random delay
            time.sleep(random.uniform(3, 6))
if __name__ == "__main__":
    pathlib.Path('screens/').mkdir(exist_ok=True)
    parser = YandexParser(USE_GUI=False) # Default to GUI mode for stealth
    films = ["Терминатор смотреть", "Саша Таня смотреть", "Джон Уик смотреть онлайн"]
    idx = 0
    while True:
        try:
            film = films[idx]
            idx = (idx + 1) % len(films)
            parser.parse(film)
            time.sleep(random.uniform(8, 15))
        except Exception as e:
            parser = YandexParser(USE_GUI=False)
</code></pre>
<p>Log output from a run on the server:</p>
<pre><code>2025-05-17 14:53:26,295 [INFO] patching driver executable /home/MY_USER/.local/share/undetected_chromedriver/undetected_chromedriver
2025-05-17 14:53:30,904 [INFO] Start parse Терминатор смотреть
2025-05-17 14:53:33,448 [INFO] Captcha found
2025-05-17 14:53:35,291 [INFO] Button clicked
2025-05-17 14:53:42,194 [INFO] Found 0 urls on page 1
2025-05-17 14:53:42,284 [INFO] Captcha found
2025-05-17 14:53:43,363 [INFO] Button clicked
2025-05-17 14:53:53,826 [INFO] Found 0 urls on page 2
2025-05-17 14:53:53,848 [INFO] Captcha found
2025-05-17 14:53:54,659 [INFO] Button clicked
2025-05-17 14:54:06,023 [INFO] Found 0 urls on page 3
</code></pre>
<p><a href="https://i.sstatic.net/pBGLuyEf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pBGLuyEf.png" alt="ц " /></a></p>
|
<python><selenium-webdriver><undetected-chromedriver>
|
2025-05-17 12:12:10
| 2
| 1,708
|
mascai
|
79,626,384
| 6,114,832
|
SCIPY BPoly.from_derivatives compared with numpy
|
<p>I implemented the comparison below between NumPy and SciPy doing the same function interpolation. The results show how NumPy crushes SciPy.</p>
<pre><code>Python version: 3.11.7
NumPy version: 2.1.3
SciPy version: 1.15.2
Custom NumPy interpolation matches SciPy BPoly for 10 000 points.
SciPy coeff time: 0.218046 s
SciPy eval time : 0.000725 s
Custom coeff time: 0.061066 s
Custom eval time : 0.000550 s
</code></pre>
<p><strong>edit:</strong> I was likely too pessimistic below with the 4x to 10x figure; after the latest streamlining of the code, the speedup seems even more significant on average.</p>
<p>Depending on the system, I get somewhere between a 4x and 10x speedup with NumPy. This is not the first time I have encountered this. So in general, I wonder:</p>
<p>Why this huge difference in performance?
Should we view SciPy as just a reference implementation, and go to other pathways for performance (NumPy, Numba)?</p>
<pre><code># BPoly.from_derivatives example with 10 000 sinusoidal points and timing comparison
"""
Creates a sinusoidal dataset of 10 000 points over [0, 2π].
Interpolates using SciPy's BPoly.from_derivatives and a custom pure NumPy quintic Hermite.
Verifies exact match, compares timing for coefficient computation and evaluation, and visualizes both.
"""
import sys
import numpy as np
import scipy
import time
from scipy.interpolate import BPoly
# Environment versions
print(f"Python version: {sys.version.split()[0]}")
print(f"NumPy version: {np.__version__}")
print(f"SciPy version: {scipy.__version__}")
# Generate 10 000 sample points over one period
n = 10_000
x = np.linspace(0.0, 2*np.pi, n)
# Analytical sinusoidal values and derivatives
y = np.sin(x) # y(x)
v = np.cos(x) # y'(x)
a = -np.sin(x) # y''(x)
# === SciPy implementation with timing ===
y_and_derivatives = np.column_stack((y, v, a))
t0 = time.perf_counter()
bp = BPoly.from_derivatives(x, y_and_derivatives)
t1 = time.perf_counter()
scipy_coeff_time = t1 - t0
# Evaluation timing
t0 = time.perf_counter()
y_scipy = bp(x)
t1 = time.perf_counter()
scipy_eval_time = t1 - t0
# === Pure NumPy implementation ===
def compute_quintic_coeffs(x, y, v, a):
    """
    Compute quintic Hermite coefficients on each interval [x[i], x[i+1]].
    Returns coeffs of shape (6, n-1).
    """
    m = len(x) - 1
    coeffs = np.zeros((6, m))
    for i in range(m):
        h = x[i+1] - x[i]
        A = np.array([
            [1, 0, 0, 0, 0, 0],
            [0, 1, 0, 0, 0, 0],
            [0, 0, 2, 0, 0, 0],
            [1, h, h**2, h**3, h**4, h**5],
            [0, 1, 2*h, 3*h**2, 4*h**3, 5*h**4],
            [0, 0, 2, 6*h, 12*h**2, 20*h**3],
        ])
        b = np.array([y[i], v[i], a[i], y[i+1], v[i+1], a[i+1]])
        coeffs[:, i] = np.linalg.solve(A, b)
    return coeffs
def interp_quintic(x, coeffs, xx):
    """
    Evaluate quintic Hermite using precomputed coeffs.
    x: breakpoints (n,), coeffs: (6, n-1), xx: query points.
    """
    idx = np.searchsorted(x, xx) - 1
    idx = np.clip(idx, 0, len(x) - 2)
    dx = xx - x[idx]
    yy = np.zeros_like(xx)
    for j in range(6):
        yy += coeffs[j, idx] * dx**j
    return yy
# Coefficient estimation timing for custom
t0 = time.perf_counter()
custom_coeffs = compute_quintic_coeffs(x, y, v, a)
t1 = time.perf_counter()
cust_coeff_time = t1 - t0
# Evaluation timing for custom
t0 = time.perf_counter()
y_custom = interp_quintic(x, custom_coeffs, x)
t1 = time.perf_counter()
cust_eval_time = t1 - t0
# Verify exact match
assert np.allclose(y_scipy, y_custom, atol=1e-12), "Custom interp deviates"
print("Custom NumPy interpolation matches SciPy BPoly for 10 000 points.")
# Print timing results
print(f"SciPy coeff time: {scipy_coeff_time:.6f} s")
print(f"SciPy eval time : {scipy_eval_time:.6f} s")
print(f"Custom coeff time: {cust_coeff_time:.6f} s")
print(f"Custom eval time : {cust_eval_time:.6f} s")
# Visualization
import matplotlib.pyplot as plt
plt.plot(x, y_scipy, label='SciPy BPoly')
plt.plot(x, y_custom, '--', label='Custom NumPy')
plt.legend()
plt.title('Sinusoidal interpolation: SciPy vs NumPy Quintic')
plt.xlabel('x')
plt.ylabel('y(x) = sin(x)')
plt.show()
</code></pre>
|
<python><numpy><performance><scipy>
|
2025-05-17 09:35:17
| 2
| 1,117
|
Stefan Karlsson
|
79,626,285
| 7,791,963
|
How to make robot framework SSHLibrary not call logger.log_background_messages() on close unless in main thread?
|
<p>In my keyword I am running some parallel tasks using concurrent.futures, and I use robotbackgroundlogger to log the messages. After all threads have finished, I run <code>logger.log_background_messages()</code> in the main thread, since it may only be called from the main thread.</p>
<p>The problem is that SSHLibrary automatically tries to call <code>logger.log_background_messages()</code> in its close function.</p>
<p>From the <a href="https://github.com/MarketSquare/SSHLibrary/blob/master/src/SSHLibrary/client.py" rel="nofollow noreferrer">source code</a> I can see the following snippet</p>
<blockquote>
<pre><code>def close(self):
"""Closes the connection."""
if self.tunnel:
self.tunnel.close()
self._sftp_client = None
self._scp_transfer_client = None
self._scp_all_client = None
self._shell = None
self.client.close()
try:
logger.log_background_messages()
except AttributeError:
pass
</code></pre>
</blockquote>
<p>This means that closing any SSH connection from a worker thread raises a <code>RuntimeError</code> if robotbackgroundlogger is installed, since <code>logger.log_background_messages()</code> may only be called from the main thread.</p>
<p>How can I make SSHLibrary not call <code>logger.log_background_messages</code> when I close my SSH connection? Or is there some other workaround to make this work?</p>
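<p>The only workaround I have sketched so far (untested, and it assumes SSHLibrary picks up robotbackgroundlogger's <code>BackgroundLogger</code> rather than the plain Robot Framework logger) is monkey-patching <code>log_background_messages</code> so that it becomes a no-op outside the main thread instead of raising:</p>
<pre><code>import threading
from robotbackgroundlogger import BackgroundLogger
_original_log = BackgroundLogger.log_background_messages
def _main_thread_only_log(self, name=None):
    # Only flush background messages when called from the main thread;
    # silently skip the call made by SSHLibrary's close() in worker threads.
    if threading.current_thread() is threading.main_thread():
        _original_log(self, name)
BackgroundLogger.log_background_messages = _main_thread_only_log
</code></pre>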
|
<python><robotframework><robotframework-sshlibrary>
|
2025-05-17 07:19:22
| 0
| 697
|
Kspr
|